--- abstract: 'Hot gas dominates the emission in X-ray luminous early-type galaxies, but in relatively X-ray faint systems, integrated X-ray emission from discrete stellar-like sources is thought to be considerable, although the amount of the contribution is controversial. To help resolve this issue, we examine the radial X-ray surface brightness distribution of 17 X-ray faint galaxies observed with the ROSAT HRI and PSPC. We assume that the stellar contribution follows a de Vaucouleurs law while the hot gas component follows a King $\beta$ model. For some galaxies, both models fit equally well, but for a number of systems, a dual component model yields the best fit, from which upper bounds are placed on the stellar contribution. Best-fit values for the stellar contribution are inconsistent with (lower than) that suggested by Fabbiano, Gioia, & Trinchieri (1989) and estimated from the bulge of M31, but are consistent with the Forman, Jones, & Tucker (1985) estimate of the stellar fraction in X-ray faint elliptical and S0 galaxies. Our results indicate an upper limit to discrete sources of $L_X/L_B = 1.6\times10^{29} \, {\rm ergs \, s}^{-1}/L_{\odot}$.' author: - 'Beth A. Brown' - 'Joel N. Bregman' title: 'Emission Mechanisms in X-Ray Faint Galaxies ' --- Introduction {#sec:intro} ============ One key concern in the study of early-type galaxies is the contribution of discrete X-ray sources relative to hot gas in X-ray faint ellipticals. It is generally agreed that integrated X-ray emission from stellar sources (such as accreting X-ray binaries) is likely to be present at some level in all elliptical galaxies. This contribution is low in X-ray–luminous galaxies (i.e. $L_X/L_B \gtrsim 3.2\times10^{30} \, {\rm ergs \, s}^{-1}/L_{\odot}$), where hot, diffuse gas dominates the total X-ray emission (e.g. Forman, Jones & Tucker 1985; Canizares, Fabbiano & Trinchieri 1987; Davis & White 1996; Brown & Bregman 1998; Buote & Fabian 1998). The fraction of X-ray emission from stellar sources is expected to increase with decreasing $L_X$, but the $L_X$ at which the stellar component ($L_{X,\star}$) dominates the observed emission is still debated. Using [*Einstein Observatory*]{}, [*ROSAT*]{}, and [*ASCA*]{} data, researchers have derived estimates of (or upper limits to) the stellar X-ray emission in E and S0 galaxies using various methods. Forman, Jones & Tucker (1985) determined an upper limit of $L_{X,\star}/L_B \simeq 4\times10^{28} \, {\rm ergs \, s}^{-1}/L_{\odot}$ ([*Einstein*]{} IPC band) by assuming discrete sources make a 50% contribution to the total diffuse X-ray emission of Cen A. In this limit, only the faintest galaxies ($M_B > -19$ in their sample) can have their X-ray emission dominated by discrete-source emission. Irwin & Sarazin (1998) concluded that the X-ray emission of faint galaxies ($L_X/L_B < 5\times10^{29} \, {\rm ergs \, s}^{-1}/L_{\odot}$) is a combination of a hard stellar component and a very soft component, which is most likely stellar in origin as well. This was inferred by comparing the X-ray “colors” of a sample of early-type galaxies to the bulge of M31 ($L_X/L_B = 3.2\times10^{29} \, {\rm ergs \, s}^{-1}/L_{\odot}$; Irwin 2000, private communication). Fabbiano, Gioia & Trinchieri (1989) used the average $L_X/L_B$ of early-type spirals ($L_X/L_B = 4\times10^{29} \, {\rm ergs \, s}^{-1}/L_{\odot}$, for $L_X = 10^{38} - 10^{42} \, {\rm ergs \, s}^{-1}$) as the benchmark for the stellar contribution. 
By comparing the $L_X$ and $L_B$ of elliptical and S0 galaxies to spirals (in the [*Einstein*]{} 0.5–4.5 keV), they concluded that X-ray emission from hot gas is not significant in systems for which $L_X < 10^{41} \, {\rm ergs \, s}^{-1}$. In the various estimates of the stellar emission fraction, there is an order of magnitude difference, making an accurate determination even more necessary. The primary mechanisms for the X-ray emission can be determined through the examination of X-ray spectra (e.g., Trinchieri, Kim, Fabbiano, & Canizares 1994; Loewenstein & Mushotzky 1997; Buote & Fabian 1998). A hard spectrum ($T_X \sim$ 10 keV) would indicate the presence of evolved stellar sources, while soft X-rays ($T_X \sim$ 1 keV) would be reflective of a hot gas component. Recent work incorporating the use of multi-temperature spectral models indicate the presence of a very soft component in addition to the hard and soft components described above, the nature of which is suspected by most to be stellar in origin but which may be due in part to a warm ISM (Pellegrini & Fabbiano 1994; Fabbiano, Kim, & Trinchieri 1994; Kim et al. 1996). [*ROSAT*]{}, although it has better angular resolution than the [*Einstein Observatory*]{}, does not provide high enough spectral resolution to show signatures unique to a stellar population, especially in fainter systems. Another problem with spectral modeling is that acceptable fits can be obtained with both single temperature models and multi-temperature models. Even with the better resolution of [*ASCA*]{}, calibration problems at low energies may affect spectral modeling. The use of X-ray radial surface brightness profiles offers another way to determine the relative fractions of gas and discrete sources to the observed X-ray emission. Previously, these profiles existed only for X-ray-bright galaxies for which adequate signal-to-noise data was achieved, and were used to obtain gas densities and masses, test predictions of the cooling flow model, or model the dynamical evolution of gas flows (Sarazin & White 1988; Pellegrini & Ciotti 1998; David, Forman & Jones 1991). The use of surface brightness profiles to infer the contributions to the X-ray emission has been limited, and few conclusions have been reached with regard to faint galaxies (Forman, Jones & Tucker 1985; Trinchieri, G., Fabbiano, G., & Canizares, C. R. 1986; Pellegrini & Fabbiano 1994). Data has become available through [*ROSAT*]{} that better allow for the examination of radial surface brightness profiles of fainter galaxies. Brown & Bregman (1998, 2000) provide X-ray information on a [*ROSAT*]{} survey of early-type galaxies extending to fainter X-ray luminosities than available from [*Einstein Observatory*]{} data. In this paper, radial surface brightness profiles are used to determine or set upper limits to the stellar contribution in the less luminous galaxies of this survey. The Sample {#sec:sample3} ========== The galaxies chosen for this study (see Table \[tab:sub\]) are the X-ray faintest galaxies of the Brown & Bregman (1998) survey of early-type galaxies. Galaxy distances are derived from Faber et al. (1989) using an $H_0$ = 50 km/s/Mpc (the McMillan et al. 1994 distance is used for NGC 5102). Values for the stellar velocity dispersion, $\sigma$, are also obtained from Faber et al. 
(1989), from which the dispersion temperature, $T_{\sigma}$, is calculated according to $kT = \mu m_p \sigma ^2$ (where $\mu$ is the mean molecular weight, and $\sigma$ is the one-dimensional stellar velocity dispersion). The galaxies have optical luminosities ranging from log$(L_B/ L_{\odot}) =$ 10.2–11.2 (derived from Faber et al. 1989 magnitudes), except NGC 5102 whose distance determination indicates log$(L_B/ L_{\odot}) =$ 9.0. X-ray-to-optical luminosity ratios fall between $5\times10^{28}$ and $4.5\times10^{29} \rm ergs \, s^{-1}/L_{\odot}$. [lcclcr]{} & [Dist]{} & [log $\sigma$]{} & [$T_{\sigma}$]{} & [log$L_B$]{} & [log$\frac{L_X}{L_B}$]{}\ & [Mpc]{} & [(km s$^{-1}$)]{} & [keV]{} & [($L_{\odot}$)]{} & [($\frac{erg s^{-1}}{L_{\odot}}$)]{}\ N 1344 & 28.44[$\scriptstyle \pm \phn 1.76$]{} & 2.204 & 0.163 & 10.66[$\scriptstyle \pm 0.06$]{} & $28.81^{+0.15}_{-0.21}$\ N 1549 & 24.26[$\scriptstyle \pm \phn 5.12$]{} & 2.312 & 0.267 & 10.73[$\scriptstyle \pm 0.06$]{} & 29.31[$\scriptstyle \pm 0.07$]{}\ N 2768 & 30.64[$\scriptstyle \pm \phn 6.50$]{} & 2.296 & 0.248 & 10.79[$\scriptstyle \pm 0.12$]{} & 29.62[$\scriptstyle \pm 0.13$]{}\ N 3115 & 20.42[$\scriptstyle \pm \phn 4.30$]{} & 2.425 & 0.450 & 10.83[$\scriptstyle \pm 0.06$]{} & $28.91^{+0.08}_{-0.09}$\ N 3377 & 17.14[$\scriptstyle \pm \phn 2.52$]{} & 2.116 & 0.108 & 10.21[$\scriptstyle \pm 0.12$]{} & $29.21^{+0.16}_{-0.19}$\ N 3379 & 17.14[$\scriptstyle \pm \phn 2.52$]{} & 2.303 & 0.257 & 10.49[$\scriptstyle \pm 0.06$]{} & $29.29^{+0.16}_{-0.23}$\ N 3557 & 47.98[$\scriptstyle \pm 10.18$]{} & 2.465 & 0.541 & 11.10[$\scriptstyle \pm 0.06$]{} & 29.51[$\scriptstyle \pm 0.08$]{}\ N 3585 & 23.54[$\scriptstyle \pm \phn 4.98$]{} & 2.343 & 0.308 & 10.72[$\scriptstyle \pm 0.06$]{} & $29.12^{+0.11}_{-0.13}$\ N 3607 & 39.82[$\scriptstyle \pm \phn 4.84$]{} & 2.394 & 0.390 & 11.18[$\scriptstyle \pm 0.12$]{} & 29.64[$\scriptstyle \pm 0.12$]{}\ N 4365 & 26.66[$\scriptstyle \pm \phn 1.42$]{} & 2.394 & 0.390 & 10.79[$\scriptstyle \pm 0.06$]{} & 29.69[$\scriptstyle \pm 0.07$]{}\ N 4494 & 13.90[$\scriptstyle \pm \phn 2.94$]{} & 2.095 & 0.300 & 10.20[$\scriptstyle \pm 0.06$]{} & $29.08^{+0.15}_{-0.23}$\ N 4621 & 26.66[$\scriptstyle \pm \phn 1.42$]{} & 2.381 & 0.367 & 10.78[$\scriptstyle \pm 0.06$]{} & $29.01^{+0.14}_{-0.16}$\ N 4697 & 15.88[$\scriptstyle \pm \phn 3.36$]{} & 2.218 & 0.173 & 10.58[$\scriptstyle \pm 0.06$]{} & 29.55[$\scriptstyle \pm 0.06$]{}\ N 5061 & 23.92[$\scriptstyle \pm \phn 5.06$]{} & 2.282 & 0.233 & 10.53[$\scriptstyle \pm 0.06$]{} & $29.01^{+0.13}_{-0.17}$\ N 5102 & 3.10[$\scriptstyle \pm \phn 0.30$]{} & 1.820 & 0.500 & 8.95[$\scriptstyle \pm 0.12$]{} & $28.75^{+0.17}_{-0.21}$\ N 5322 & 33.22[$\scriptstyle \pm \phn 7.04$]{} & 2.350 & 0.319 & 10.80[$\scriptstyle \pm 0.12$]{} & 29.31[$\scriptstyle \pm 0.13$]{}\ N 7507 & 35.00[$\scriptstyle \pm \phn 7.42$]{} & 2.377 & 0.361 & 10.82[$\scriptstyle \pm 0.06$]{} & $29.31^{+0.20}_{-0.34}$\ Columns 2, 4, & 5 derived from Faber et al. 1989 values (Distance for NGC 5102 from McMillan et al. 1994). Column 3 from Faber et al. 1989. Column 6 from Brown 1998.\ $^a$ Adopted value for $T_{\sigma}$. The targeted galaxies are bounded by the stellar X-ray emission estimates of Forman, Jones & Tucker (1985) and Fabbiano, Gioia & Trinchieri (1989). For comparison, we have converted these estimates into [*ROSAT*]{} band equivalents. 
The Forman, Jones & Tucker (1985) [*ROSAT*]{} equivalent is $L_{X,\star}/L_B = 3.6\times10^{28} \, {\rm ergs \, s}^{-1}/L_{\odot}$ obtained by assuming a Raymond-Smith thermal model with a Galactic $N_H$ column density of $10^{20} \, {\rm cm}^{-2}$ and $kT = 1.0$ keV. The Fabbiano, Gioia & Trinchieri (1989) estimate is the averaged best-fit linear regression for a sample of early-type spiral galaxies. Using an average distance of 18.12 Mpc (obtained from Fabbiano, Gioia & Trinchieri’s 1988 sample of early-type spiral galaxies), and the spectral model parameters above, we obtain a [*ROSAT*]{} band equivalent of $L_{X,\star}/L_B = 3.6\times10^{29} \, {\rm ergs \, s}^{-1}/L_{\odot}$. X-ray surface brightness profiles for the targeted sample were extracted out to 4–5 effective radii ($r_e$) from processed [*ROSAT*]{} PSPC or HRI data. Observations from both instruments exist only for NGC 1549 and NGC 5322. Details for the general data processing of these galaxies are given in Brown & Bregman (1998). A background, taken at large radius from the galaxy center (typically at $> 7 r_e$), was subtracted from the data. Normalized PSPC and HRI data were blocked into 4$\arcsec$ and 2$\arcsec$ pixels, respectively, and subsequently binned according to the resolution of the instrument assuming azimuthal symmetry. PSPC data were binned in 25$\arcsec$ wide annuli (data for NGC 3557 were binned in 35–40$\arcsec$ widths beyond $\sim 2 r_e$ to improve signal-to-noise), and HRI data were binned in annuli of widths increasing from 5$\arcsec$ to 40$\arcsec$ in the outer regions where the surface brightness was lower. Five galaxies had total surface brightness counts too low to be useful, and so were not included in further analysis. NGC 4494 and NGC 4621 were also excluded because their detections were off-axis, which affected the shape of the profiles. Of the ten remaining galaxies, eight are classified as elliptical and two as S0 galaxies (Tully 1988 classification). Obtaining the Stellar Contribution {#sec:profiles} ================================== Hot, interstellar gas and discrete stellar X-ray sources are the two primary mechanisms suggested for X-ray emission in early-type galaxies. Moderate to low signal-to-noise data prevent extensive and detailed modeling; however, by separately characterizing the radial surface brightness distribution due to each component, we seek an initial indication of the relative fraction of stellar X-ray sources present in X-ray faint galaxies. We use a modified King model ($\beta$ model) to parameterize the X-ray emission attributable to hot gas. The $\beta$ model was developed initially to describe hot gas behavior in clusters, but has often been extended to individual elliptical galaxies (Gorenstein et al. 1978; Fabricant & Gorenstein 1983; Forman, Jones & Tucker 1985; Trinchieri, Fabbiano, & Canizares 1986). The $\beta$ model takes the form $$S(r)_{\beta} = S_0[ 1 + (r / r_c)^2]^{-3\beta + \frac{1}{2}},$$ where $S_0$ is the central brightness and $r_c$ is the core radius of the X-ray emission. Values of $\beta \approx 0.5$ and $r_c \sim$ 2 kpc are typical in brighter ellipticals where $L_X \approx 10^{39}$–$10^{42} \, {\rm ergs \, s}^{-1}$ (e.g. Sarazin 1990; Goudfrooij & de Jong 1995). The X-ray emission of bright early-type galaxies, modeled with this King function, exhibits radial surface brightness profiles that decline more slowly than optical profiles at large radii (e.g., Forman, Jones & Tucker 1985). 
A de Vaucouleurs $r^{1/4}$ law is used to trace the integrated emission from discrete X-ray sources. The discrete source emission is most likely dominated by low-mass X-ray binaries (LMXBs) in the galaxy (Irwin & Sarazin 1998; David, Forman, & Jones 1991; Trinchieri & Fabbiano 1985). LMXBs come from an evolved stellar population, so their X-ray brightness distribution should follow that of stars (Pellegrini & Fabbiano 1994). A de Vaucouleurs $r^{1/4}$ law has been very successful in modeling the optical surface brightness of ellipticals (Fry et al. 1999, Burkert 1993, Hjorth & Madsen 1991), and so we adopt it here in the form $$S(r)_{\star} = S_e \cdot {\rm exp} \{-7.67[(r/r_e)^{0.25}-1]\},$$ where $r_e$ is the effective radius (the isophote radius containing half of the total luminosity) and $S_e$ is the surface brightness at $r_e$. Other models, including King models, have been used to parameterize the stellar profiles of early-type galaxies. This was primarily because there is no straightforward transformation from the de Vaucouleurs function to an analytic form of the density distribution. However, since our goal is not to derive a density distribution, this is of no concern. A fitting algorithm was developed to determine the four parameters - $S_0$, $r_c$, $\beta$, and $S_e$ - that best represent the data according to the models described above. The models are convolved with an instrumental point spread function (a Gaussian of $25''$ FWHM for the PSPC, and $5''$ FWHM for the HRI), which has the form $$p(x) = \frac{1}{2 \pi \sigma ^2} e^{-x^2/2\sigma ^2},$$ where $\sigma = {\rm FWHM}/2.35$. Spherical symmetry is assumed in the convolved profiles, which take the form $$G(r) = \frac{1}{\pi \sigma ^2} \int^{\infty}_{0} \int^{\pi}_{0} e^{-x^2/2\sigma ^2} g(s) \: d\phi \: x \: dx.$$ The function $g(s)$ is the true profile (gaseous or stellar), where $$s^2 = r^2 + x^2 - 2xr{\rm cos} \phi\:.$$ The convolved profiles are optimized within the program by the $\chi ^2$ test, which yields the best-fit values for the four parameters. The optimization utilizes the downhill simplex method in multidimensions of Nelder & Mead (1965). Surface brightness data for the targets were initially fitted to the $\beta$ model and then the de Vaucouleurs $r^{1/4}$ law, out to 4–5 $r_e$. Generally, $r_c$ was held as a fixed parameter since it often cannot be resolved by the PSPC. If adequate HRI data existed, then $r_c$ was determined by the $\beta$ model, the value of which was subsequently used in fits to the PSPC data. If useful HRI data was not available, a core radius of $r_c = r_e / 10$ was adopted based upon examination of HRI data for bright elliptical galaxies (see Table \[tab:rcore\]). [lccccc]{} & & &\ & [$r_e / 11$]{} & & [$r_c$]{} & [$\nu$]{} & [$\chi_{\nu}^2$]{}\ & [($\arcsec$)]{} & & [($\arcsec$)]{} & &\ N 1395 & 4.10 & &4.56 & 42 & 0.76\ N 1404 & 2.43 & &5.68 & 11 & 2.84\ N 1549 & 4.30 & &6.08 & 17 & 0.69\ N 4649 & 6.67 & &7.66 & 30 & 1.49\ The core radius, $r_c$, fitted within a $\beta$ model ($\beta \sim 0.5$) and compared to HRI data using a $\chi^2$ test. Best-fits ($\chi_{\nu}^2$) are for $n$ degrees of freedom ($\nu$). 
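For concreteness, the following is a minimal numerical sketch of this fitting scheme (in Python with NumPy/SciPy; it is not the code actually used in this work). It evaluates the $\beta$ model and de Vaucouleurs profiles, convolves them with a Gaussian point spread function via the azimuthal double integral given above, and minimizes $\chi^2$ with the Nelder-Mead downhill simplex. The quadrature grids, the synthetic data, and all function names are illustrative assumptions.

``` python
import numpy as np
from scipy.optimize import minimize

def beta_model(r, S0, rc, beta):
    # King/beta profile: S_0 [1 + (r/r_c)^2]^(-3 beta + 1/2)
    return S0 * (1.0 + (r / rc) ** 2) ** (-3.0 * beta + 0.5)

def de_vaucouleurs(r, Se, re):
    # r^{1/4} law: S_e exp{-7.67 [(r/r_e)^{1/4} - 1]}
    return Se * np.exp(-7.67 * ((r / re) ** 0.25 - 1.0))

def psf_convolve(profile, r, fwhm, x_max=150.0, nx=300, nphi=60):
    # Azimuthally symmetric convolution with a Gaussian PSF (sigma = FWHM/2.35),
    # evaluating the double integral G(r) on a simple quadrature grid.
    sigma = fwhm / 2.35
    x = np.linspace(1e-3, x_max, nx)
    phi = np.linspace(0.0, np.pi, nphi)
    dx, dphi = x[1] - x[0], phi[1] - phi[0]
    X, PHI = np.meshgrid(x, phi, indexing="ij")
    out = np.empty(len(r))
    for i, ri in enumerate(r):
        s = np.sqrt(ri**2 + X**2 - 2.0 * X * ri * np.cos(PHI))
        integrand = np.exp(-X**2 / (2.0 * sigma**2)) * profile(s) * X
        out[i] = integrand.sum() * dx * dphi / (np.pi * sigma**2)
    return out

def chi2(params, r, data, err, fwhm, rc, re):
    # Combined S_beta + S_star model with r_c and r_e held fixed
    S0, beta, Se = params
    model = psf_convolve(lambda s: beta_model(s, S0, rc, beta)
                         + de_vaucouleurs(s, Se, re), r, fwhm)
    return np.sum(((data - model) / err) ** 2)

# Illustrative run on synthetic 25'' PSPC-like annuli
r_bins = np.arange(12.5, 250.0, 25.0)
truth = lambda s: beta_model(s, 5.0, 5.0, 0.45) + de_vaucouleurs(s, 0.05, 50.0)
data = psf_convolve(truth, r_bins, fwhm=25.0)
err = 0.05 * data + 1e-3
best = minimize(chi2, x0=[3.0, 0.5, 0.02],
                args=(r_bins, data, err, 25.0, 5.0, 50.0), method="Nelder-Mead")
print(best.x)    # recovered S0, beta, Se
```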
Results of the $\beta$ model and de Vaucouleurs $r^{1/4}$ law fits determined whether a particular profile would then be fit with a two-component function where $$S(r)_{fit} = S(r)_{\beta} + S(r)_{\star}.$$ Theoretical calculations of $\beta$ models ($\beta =$ 0.4, 0.5, 0.6, and 0.7) show that where the diffuse X-ray emission can be described by $\beta =$ 0.5 (common in bright ellipticals), it is not possible to conclusively distinguish between a $\beta$ model and a de Vaucouleurs $r^{1/4}$ law in terms of goodness of fit (a degenerate fit, see Figure \[fig:models\]). A unique solution cannot be obtained by fitting a two-component function to such a profile; therefore, the $S(r)_{\beta} + S(r)_{\star}$ profile was applied only when $\beta$ was determined to be steeper or flatter than $\sim$0.5. For a dual-component fit, a best-fit parameterization to the data was first obtained. Next, upper limits to the stellar contribution (relative to the best-fit) were determined by incrementally increasing $S(r)_{\star}$ until the fit became unacceptable at the 90% and 98–99% confidence levels: $$\chi_{limit}^2 \approx \chi _{best}^2 + \Delta \chi^2$$ where $\Delta \chi^2$ = $\chi^2$ at 90% and 99% significance for 3 degrees of freedom. Once best-fit and upper limit values were found, the integrated stellar fraction was computed, from which log($L_{X,\star}/L_B$) can be derived. Results of the Modeling {#sec:results} ======================= [lrrrrrrrrr]{} & & &\ & [$r_e$]{} & [$S_0$]{} & [$r_c$ ]{} & [$\beta$]{} & [$\nu$]{} & [$\chi_{\nu}^2$]{} & [$S_e$]{} & [$\nu$]{} & [$\chi_{\nu}^2$]{}\ & [($\arcsec$)]{} & [($\frac{cnts}{pix}$)]{} & [($\arcsec$)]{} & & & & [($\frac{cnts}{pix}$)]{} & &\ N 1549(H)$\star$ & 47.44 & 2.31 & 5.58 & 0.50 & 9 & 1.73 & 0.039 & 11 & 1.44\ N 1549(P)$\star$ & & 6.38 & 5.58 & 0.46 & 7 & 3.36 & 0.150 & 8 & 3.12\ N 2768(P) & 49.44 & 1.44 & 4.94 & 0.41 & 7 & 0.20 & 0.039 & 8 & 1.75\ N 3115(P)$\star$ & 32.32 & 5.57 & 3.23 & 0.47 & 4 & 0.90 & 0.095 & 5 & 0.80\ N 3379(H) & 35.19 & 6.86 & 3.43 & 0.64 & 5 & 0.75 & 0.035 & 7 & 4.09\ N 3557(P) & 37.10 & 19.59 & 3.71 & 0.56 & 4 & 0.91 & 0.160 & 5 & 4.68\ N 3585(P)$\star$ & 38.04 & 1.70 & 3.81 & 0.48 & 5 & 0.79 & 0.027 & 6 & 0.55\ N 3607(P) & 65.49 & 4.80 & 6.55 & 0.39 & 11 & 3.03 & 0.150 & 12 & 17.10\ N 4365(P) & 56.57 & 4.17 & 5.65 & 0.41 & 8 & 1.84 & 0.110 & 9 & 6.90\ N 4697(P) & 73.51 & 9.20 & 7.35 & 0.40 & 12 & 3.53 & 0.260 & 13 & 22.81\ N 5322(H) & 34.76 & 2.82 & 3.92 & 0.61 & 6 & 0.77 & 0.023 & 8 & 1.46\ N 5322(P) & & 19.17 & 3.92 & 0.48 & 4 & 2.94 & 0.340 & 5 & 1.65\ Best-fits ($\chi_{\nu}^2 = \chi^2 / n$ degrees of freedom). In column 1, H=HRI, P=PSPC. A starred designation indicates both models fit equally well.\ $^a$ Core radius, $r_c$, fixed for PSPC data at either HRI fitted values or $r_e/10$. Single component $\beta$ model and de Vaucouleurs $r^{1/4}$ law model fits were performed separately to the surface brightness data for ten galaxies. A summary of the results is given in Table \[tab:sin\]. If a galaxy had PSPC and HRI data available, fits to both data sets are summarized. In addition to $r_e$, fitted parameter values corresponding to the minimum $\chi^2$ for both models are tabulated along with the number of degrees of freedom ($\nu$) and the reduced $\chi^2$ ($\chi _{\nu}^2$). Both $\beta$ and de Vaucouleurs models fit the profiles of NGC 1549, NGC 3115, and NGC 3585 equally well (see, for example, Figure \[fig:n1549\]), which is reflected in $\beta$ values around 0.5. 
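The upper-limit procedure described above can be sketched in the same spirit, reusing the model and convolution helpers of the previous listing. The stellar normalization $S_e$ is stepped upward from its best-fit value, the remaining parameters are re-optimized at each step, and the limit is reached once $\chi^2$ exceeds $\chi_{best}^2 + \Delta\chi^2$ ($\Delta\chi^2 \approx 6.25$ and $11.34$ at the 90% and 99% levels for 3 degrees of freedom); the step factor and the function names are again illustrative, not those of the original analysis.

``` python
import numpy as np
from scipy.optimize import minimize

# Delta chi^2 for 3 interesting parameters: ~6.25 (90%) and ~11.34 (99%)
DCHI2 = {"90%": 6.25, "99%": 11.34}

def chi2_fixed_Se(params, Se, r, data, err, fwhm, rc, re):
    # chi^2 with the stellar normalization frozen; S0 and beta re-optimized
    S0, beta = params
    model = psf_convolve(lambda s: beta_model(s, S0, rc, beta)
                         + de_vaucouleurs(s, Se, re), r, fwhm)
    return np.sum(((data - model) / err) ** 2)

def stellar_upper_limit(best_params, r, data, err, fwhm, rc, re,
                        level="90%", step=1.05):
    """Raise S_e until chi^2 exceeds chi^2_best + Delta chi^2; return the limiting S_e."""
    S0, beta, Se = best_params
    chi2_best = chi2_fixed_Se([S0, beta], Se, r, data, err, fwhm, rc, re)
    while True:
        Se *= step
        res = minimize(chi2_fixed_Se, x0=[S0, beta],
                       args=(Se, r, data, err, fwhm, rc, re), method="Nelder-Mead")
        S0, beta = res.x
        if res.fun > chi2_best + DCHI2[level]:
            return Se

# e.g. Se_90 = stellar_upper_limit(best.x, r_bins, data, err, 25.0, 5.0, 50.0)
```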
Where $\beta$ was not $\sim 0.5$, pure de Vaucouleurs model fits were consistently worse compared to pure $\beta$ model fits, which is evidence against complete gas-depletion in these systems. Poor fits to NGC 3607 is a result of strong curvature in the brightness profile, which causes the data to be underestimated in the central region and overestimated in the outer regions. Small-scale fluctuations in the outer brightness profile of NGC 4697 also resulted in poor fits. The results of gas-plus-stellar component models are summarized in Table \[tab:dual\] for the seven galaxies whose derived $\beta$ values were different from 0.5 in the single-component fitting. Best-fit results as well as those for upper limits to the discrete source contribution (at the 90% and 98–99% confidence levels) are tabulated. The core radius, $r_c$, is fixed in all but the best-fits to HRI data. Also given in Table \[tab:dual\] is the integrated count fraction of the stellar component ($S_{\star}/S_{tot}$), and the corresponding luminosity ratio log($L_{X,\star}/L_B$). In the best-fit modeling, the fraction of surface brightness counts that can be attributed to discrete sources is less than 25% for each of the seven galaxies modeled. The modeled profile of NGC 2768 (Figure \[fig:best\]) indicates that the stellar component is enhanced in the central regions of the galaxy versus the outer regions (relative to the gas component), the reverse of what is indicated for NGC 3557 and NGC 5322. NGC 4365 (8% stellar fraction) exhibits an approximately even ratio of stars to gas throughout the data set. The dual-component best-fits to NGC 3379, NGC 3607, and NGC 4697 are essentially the same as in single-component $\beta$ modeling. [lllrrlrrrr]{} & & [$S_0$]{} & [$r_c$]{} & [$\beta$]{} & [$S_e$]{} & [$\nu$]{} & [$\chi_{\nu}^2$ ]{} & [$\frac{S_{\star}}{S_{tot}}$]{} & [log$\frac{L_{X,\star}}{L_B}$]{}\ & & [($\frac{cnts}{pix}$)]{} & [($\arcsec$)]{} & & [($\frac{cnts}{pix}$)]{} & & & & [($\frac{erg s^{-1}}{L_{\odot}}$)]{}\ N 2768 & best & 0.99 & 4.94 & 0.40 & 0.810E-2 & 6 & 0.23 & 0.16 & 28.83\ & 90% & 0.24E-1 & 4.94 & 0.25 & 0.426E-1 & 6 & 1.28 & 0.79 & 29.52\ & 99% & 0.19E-1 & 4.94 & 0.25 & 0.475E-1 & 6 & 2.12 & 0.84 & 29.54\ N 3379 & best & 6.87 & 3.42 & 0.64 & 0.987E-6 & 4 & 0.94 & [$<$]{}0.01 & 24.79\ & 90% & 6.12 & 3.42 & 0.86 & 0.280E-1 & 5 & 1.99 & 0.68 & 29.12\ & 99% & 5.86 & 3.42 & 0.97 & 0.352E-1 & 5 & 3.01 & 0.78 & 29.18\ N 3557 & best & 18.87 & 3.71 & 0.57 & 0.170E-1 & 3 & 1.20 & 0.13 & 28.63\ & 90% & 24.39 & 3.71 & 0.99 & 0.135 & 3 & 3.25 & 0.97 & 29.50\ & 99% & 17.45 & 3.71 & 0.99 & 0.154 & 3 & 4.94 & 0.98 & 29.50\ N 3607 & best & 4.79 & 6.55 & 0.39 & 0.809E-5 & 10 & 3.33 & [$<$]{}0.01 & 25.16\ & 90% & 2.27 & 6.55 & 0.36 & 0.583E-1 & 10 & 3.95 & 0.24 & 29.02\ & 99% & 1.58 & 6.55 & 0.34 & 0.783E-1 & 10 & 4.46 & 0.33 & 29.15\ N 4365 & best & 3.48 & 5.65 & 0.40 & 0.130E-1 & 7 & 2.09 & 0.08 & 28.61\ & 90% & 0.43 & 5.65 & 0.31 & 0.890E-1 & 7 & 2.97 & 0.58 & 29.45\ & 99% & 0.19 & 5.65 & 0.28 & 0.103 & 7 & 3.69 & 0.66 & 29.51\ N 4697 & best & 9.22 & 7.35 & 0.40 & 0.123E-4 & 11 & 3.85 & [$<$]{}0.01 & 25.04\ & 90% & 4.19 & 7.35 & 0.37 & 0.103 & 11 & 4.41 & 0.27 & 28.97\ & 99% & 2.88 & 7.35 & 0.35 & 0.136 & 11 & 4.88 & 0.35 & 29.10\ N 5322 & best & 2.79 & 3.88 & 0.65 & 0.434E-2 & 5 & 0.91 & 0.22 & 28.66\ & 90% & 1.39 & 3.88 & 0.94 & 0.251E-1 & 6 & 1.65 & 0.89 & 29.26\ & 99% & 0.50E-5 & 3.88 & 0.54 & 0.307E-1 & 6 & 2.64 & 1.00 & 29.31\ First entry for each galaxy corresponds to “best-fit." 
Second and third entries for each galaxy corresponds to the 90% and 98–99% upper limits to the discrete source contribution. Core radius, $r_c$, is fixed for PSPC data, and fitted for best-fit to HRI data, and then fixed for subsequent upper limit fits. $\chi_{\nu}^2 = \chi^2 / n$ degrees of freedom.\ $^a$ $\chi_{limits}^2 \approx \chi _{best}^2 + \Delta \chi^2$, where $\Delta \chi^2$ = $\chi^2$ at 90% and 99% significance for 3 degrees of freedom. In the 90% upper limit, the stellar fraction to the combined fit ranges from around 25% to near 100%. The stellar component dominates the total emission of NGC 2768 except in the outermost regions (at $> 130 \arcsec$, see Figure \[fig:upper\]), and $\beta$ flattens considerably. In NGC 3557 and NGC 5322, a steep $\beta$ model curve rapidly falls at or before $r_e$. Here, the stellar component dominates throughout the radial range, however gas enhancement is indicated in the center. The greatest increase in stellar fraction occurs in NGC 3379 (from $<$1% in the best-fit to 68%). In this galaxy, the gas component is dominant only in the very center (radius $< 5 \arcsec$). NGC 3607 and NGC 4697 results indicate that the stellar component contributes equally, or possibly exceeds, in the very center, with gas dominating the total emission throughout the remainder of the data set. The results from NGC 4365 show stars dominating in the inner regions of the galaxy, with gas being more enhanced beyond radius $\sim 90 \arcsec$. Discussion {#sec:disc} ========== We compare the modeled estimates of the stellar fraction to the estimates of Forman, Jones & Tucker (1985, FJT) and Fabbiano, Gioia & Trinchieri (1989, FGT), and that estimated from M31 (Irwin 2000, private communication). The linear curves are superimposed upon a plot of the X-ray luminosities of the targeted galaxies against their optical blue luminosities (Figure \[fig:answer\]). For log($L_B/ L_{\odot}) = 10.4-11$, the total $L_X$ of our sample spans the region between these estimates. The seven galaxies, whose radial surface brightness profiles could be fitted with combined King and de Vaucouleurs models, all have total $L_X/L_B$ values in the brighter half of the region of interest. The best fit and 90% upper limit $L_{X,\star}$ values are plotted for those galaxies as well. Best-fit modeling indicates a hot gas-dominated emission in each of the seven galaxies fit with the combined profile. The median for the best fits is log($L_{X,\star}/L_B$) = 28.61 ($L_X$ in ergs s$^{-1}$ and $L_B$ in $L_{\odot}$), in reasonable agreement with the Forman, Jones & Tucker (1985) upper limit of log($L_{X,\star}/L_B$) = 28.6. In the 90-99% upper limit, however, all but two of the seven galaxies can be modeled with a combination profile indicative of a stellar-dominated emission. The median for the 99% confidence fits is log($L_{X,\star}/L_B$) = 29.31, which is a factor of $\sim$1.8 and 1.6 below the limits of Fabbiano, Gioia & Trinchieri (1989) and M31 respectively. We further find $3\sigma$ upper limits of log($L_{X,\star}/L_B$) = 29.15 - 29.26 for the X-ray faintest galaxies not modeled, assuming that all of the observed emission is stellar. We suggest, then, an upper limit to log($L_{X,\star}/L_B$) of 29.2 for the integrated X-ray emission from discrete sources in X-ray faint early-type galaxies. This supports the Irwin & Sarazin (1998) suggestion that the brightest of X-ray faint galaxies (log $L_{X,\star}/L_B < 29.7$) may retain a significant amount of gas with a temperature of 0.3–0.6 keV. 
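The conversion from fitted count fractions to the luminosity ratios used in this comparison can be illustrated with a short script. It assumes, consistently with the tabulated numbers, that $L_{X,\star} = (S_{\star}/S_{tot})\,L_X$, so that log($L_{X,\star}/L_B$) = log($L_X/L_B$) + log($S_{\star}/S_{tot}$); with the total $L_X/L_B$ of Table \[tab:sub\] and the count fractions of Table \[tab:dual\] this reproduces the tabulated log($L_{X,\star}/L_B$) to within rounding.

``` python
import math

# Total log(L_X/L_B) from Table [tab:sub] and integrated stellar count fractions
# S_star/S_tot from Table [tab:dual] (best fit, 90% upper limit).
galaxies = {            # name: (log L_X/L_B, f_best, f_90)
    "N 2768": (29.62, 0.16, 0.79),
    "N 3557": (29.51, 0.13, 0.97),
    "N 4365": (29.69, 0.08, 0.58),
    "N 5322": (29.31, 0.22, 0.89),
}

def log_lx_star_lb(log_lx_lb, frac):
    # Assumes L_X,star = (S_star/S_tot) * L_X, so the logarithms simply add.
    return log_lx_lb + math.log10(frac)

for name, (lxlb, f_best, f_90) in galaxies.items():
    print(name, round(log_lx_star_lb(lxlb, f_best), 2),
          round(log_lx_star_lb(lxlb, f_90), 2))
# e.g. N 2768 -> 28.82 / 29.52 (Table: 28.83 / 29.52)
```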
We explore how our results may relate to recent spectral studies of faint E and S0 galaxies. [*ASCA*]{} studies of early-type galaxies confirm a hard spectral component attributed to the integrated emission from LMXBs (e.g., Matsumoto et al. 1997), and a very soft component (VSC), initially detected in [*ROSAT*]{} data (Kim, Fabbiano, & Trinchieri 1992; Kim et al. 1996). For galaxies with log $L_X/L_B \geq 30.0$, spectral signatures harden as $L_X/L_B$ decreases, until the hard component dominates the emission. The Kim et al. (1996) study of an X-ray faint S0 galaxy, NGC 4382, reports that the hard spectral component contributes one-half to three-fourths of the total X-ray luminosity, with the remainder due to the VSC. This is consistent with the earlier [*ROSAT*]{} results, which indicate that the VSC and hard components contribute nearly equally to the total X-ray emission in galaxies with log $L_X/L_B \leq 30.0$ ($L_X$ in ergs s$^{-1}$ and $L_B$ in $L_{\odot}$). The VSC may be due to a warm ISM, arise from stellar sources, or be a combination of both (Irwin & Sarazin 1998; Fabbiano, Kim, & Trinchieri 1994). Our results may be indicative of the VSC having an ISM origin, considering that the best-fit modeling is consistent with a substantial ISM component. In this initial analysis of X-ray surface brightness profiles, we have assumed that low-mass X-ray binaries (LMXBs) are the primary constituents of a discrete X-ray source population. LMXBs contribute the bulk of the X-ray emission in large spiral bulges such as M31 (Trinchieri & Fabbiano 1985). Because spiral bulges are very similar to elliptical galaxies in their properties and stellar populations, it is reasonable to presuppose that LMXBs are also the main contributors to the discrete source population in early-type galaxies (David, Forman, & Jones 1991; Irwin & Sarazin 1998). However, it is possible that globular clusters are a significant source of X-ray emission in X-ray faint E and S0 galaxies (Trinchieri & Fabbiano 1985), in which case the X-ray radial brightness distribution due to non-gaseous sources may be more appropriately described by a model other than a pure de Vaucouleurs $r^{1/4}$ law. The goal of finding the stellar contribution in X-ray faint galaxies has been hampered by low photon counts and the lack of data from both [*ROSAT*]{} detectors. Long observations with Chandra will be especially helpful in constraining the core radii in faint galaxies, fixing the shape of the radial surface brightness profiles, and resolving bright point sources. It is also hoped that Chandra will provide the spectral resolution needed to determine whether the “very soft component” modeled by various authors (see §\[sec:intro\]) is due to a warm ISM or to discrete sources. Additionally, techniques that circumvent the problem of low counts, such as defining X-ray “colors” (Irwin & Sarazin 1998), will be helpful in obtaining a definite answer to the long-standing question of dominant emission mechanisms in X-ray faint early-type galaxies. We would like to thank J. Irwin and J. S. Arabadjis for valuable discussion, and the anonymous referee for helpful comments. Also, we wish to acknowledge the use of the NASA Extragalactic Database (NED), operated by IPAC under contract with NASA. B. Brown would like to acknowledge support through a NASA Graduate Student Researchers Program grant NGT-51408 and a National Academy of Science research associateship NRC-9822890. J. Bregman acknowledges NASA grant NAG5-3247. Brown, B. A. 1998, Ph.D. 
Thesis, University of Michigan. Brown, B. A., & Bregman, J. N. 1998, , [**495**]{}, L75, astro-ph/9712209. Brown, B. A., & Bregman, J. N. 2000, , [**539**]{}, 592. Buote, D. A., & Fabian, A. C. 1998, , [**296**]{}, 977, astro-ph/9707117. Burkert, A. 1993, , [**278**]{}, 23. Canizares, C. R., Fabbiano, G., & Trinchieri, G. 1987, , [**312**]{}, 503. David, L. P., Forman, W. & Jones, C. 1991, , 369, 121. Davis, D. S., & White, R. E. 1996, , 470, L35, astro-ph/9607052. Fabbiano, G., Gioia, I. M., & Trinchieri, G. 1988, , [**324**]{}, 749. Fabbiano, G., Gioia, I. M., & Trinchieri, G. 1989, , [**347**]{}, 127. Fabbiano, G., Kim, D.-W., & Trinchieri, G. 1994, , [**429**]{}, 94. Faber, S. M., Wegner, G., Burstein, D., Davies, R. L., Dressler, A., Lynden-Bell, D., & Terlevich, R. J. 1989, , 69, 763. Fabricant, D., & Gorenstein, P. 1983, , [**267**]{}, 535. Forman, W., Jones, C., & Tucker, W. 1985, , [**293**]{}, 102. Fry, A. M., Morrison, H. L., Harding, P., Boroson, T. 1999, in The Third Stromlo Symposium: The Galactic Halo, eds. Gibson, B.K., Axelrod, T.S. & Putman, M.E., ASP Conference Series, [**165**]{}, 197. Gorenstein, P., Fabricant, D., Topka, K., Harnden, F. R., Jr., & Tucker, W. H. 1978, , [**224**]{}, 718. Goudfrooij, P. & de Jong, T. 1995, , [**298**]{}, 784, astro-ph/9504011. Hjorth, J., Madsen, J. 1991, , [**253**]{}, 703. Irwin, J. A., & Sarazin, C. L. 1998, , [**499**]{}, 650, astro-ph/9804210. Kim, D.-W., Fabbiano, G., Matsumoto, H., Koyama, K., & Trinchieri, G. 1996, , [**468**]{}, 175. Kim, D.-W., Fabbiano, G., &Trinchieri, G. 1992, , [**393**]{}, 134. Loewenstein, M. & Mushotzky, R. F. 1997, [*Proceedings of IAU Symposium 187 on Cosmic Chemical Evolution*]{}, astro-ph/9710339. Matsumoto, H., Koyama, K., Awaki, H., Tsuru, T., Loewenstein, M., Matsushita, K. 1997, , [**482**]{}, 133, astro-ph/9701077. McMillan, R., Ciardullo, R., & Jacoby, G. H. 1994, , 108, 1610. Mihalas, D., & Binney, J. 1981, [*Galactic Astronomy*]{} (New York: W. H. Freeman and Company). Nelder, J. A. & Mead, R. 1965, [*Computer Journal*]{}, [**7**]{}, 308. Pellegrini, S. & Ciotti, L. 1998, , [**333**]{}, 433, astro-ph/9802035. Pellegrini, S. & Fabbiano, G. 1994, , [**429**]{}, 105, astro-ph/9312046. Sarazin, C. L. 1990, in The Interstellar Medium in Galaxies, eds. H. A. Thronson, Jr. & J. M. Shull (Dordrecht: Kluwer), 201. Sarazin, C. L & White, R. III 1988, , [**331**]{}, 102. Trinchieri, G., & Fabbiano, G. 1985, , [**296**]{}, 447. Trinchieri, G., Fabbiano, G., & Canizares, C. R. 1986, , [**310**]{}, 637. Trinchieri, G., Kim, D.-W., Fabbiano, G., & Canizares, C. R. C. 1994, , [**428**]{}, 555. Tully, R. B. 1988, Nearby Galaxies Catalog (Cambridge University: Cambridge)
--- abstract: 'We study phonon-mediated temporary trapping of an electron in polarization-induced external surface states (image states) of a dielectric surface. Our approach is based on a quantum-kinetic equation for the occupancy of the image states. It allows us to distinguish between prompt and kinetic sticking. Because the depth of the image potential is much larger than the Debye energy, multi-phonon processes are important. Taking two-phonon processes into account in cases where one-phonon processes yield a vanishing transition probability, as is applicable, for instance, to graphite, we analyze the adsorption scenario as a function of potential depth and surface temperature and calculate prompt and kinetic sticking coefficients. We find rather small sticking coefficients, at most of the order of $10^{-3}$, and a significant suppression of the kinetic sticking coefficient due to a relaxation bottleneck inhibiting thermalization of the electron with the surface at short timescales.' author: - 'R. L. Heinisch, F. X. Bronold, and H. Fehske' title: 'Phonon-mediated sticking of electrons at dielectric surfaces' --- Introduction ============ A complete kinetic modeling of atmospheric [@RL01], interstellar [@Whipple81; @DS87; @Mann08], or man-made bounded plasmas [@FIK05; @Ishihara07; @GMB02; @Kogelschatz03; @KCO04; @SAB06; @SLP07; @LLZ08] requires boundary conditions for the distribution functions of the relevant plasma species (electrons, ions, neutrals), that is, a quantitative microscopic understanding of the elementary processes at the plasma boundary. Of particular importance is the build-up of a quasi-stationary negative surface charge, which not only depletes the electron density in front of the boundary (sheath formation) but also acts as an electron reservoir for surface-supported electron-ion recombination and secondary electron emission, which in turn affect the charge balance in the bulk of the plasma. [@LL05] Despite its unquestioned importance, little is quantitatively known about the microphysics of electrons at plasma boundaries. Only recently have we proposed that the charging of plasma boundaries can perhaps be microscopically understood in terms of an electronic physisorption process. [@BFKD08; @BDF09] The physisorption scenario applies to a plasma electron approaching a metallic or a dielectric boundary provided its kinetic energy is large enough to overcome the Coulomb barrier due to the charges already residing on the boundary but small enough to make the surface appear as having a negative electron affinity. If the electron can convert its energy into internal energy of the boundary, via exciting elementary excitations of the solid, it may get stuck (adsorbed) at the boundary. Later it may desorb again if it gains enough energy from the boundary. Like physisorption of neutral particles, [@BY73; @IN76; @GKT80a; @GKT80b; @Leuthaeusser81; @Brenig82; @KG86; @CG88; @GS91; @AM91; @CK92; @BG93; @BGB93; @BGR94; @CC98] physisorption of electrons is the polarization-induced temporary binding to a surface. It can be characterized by a desorption time and a sticking coefficient. At first glance physisorption of electrons seems to be not much different from physisorption of neutral particles. There are, however, important qualitative differences which warrant a separate theoretical investigation. 
First, albeit not in the focus of our investigation, the long-range $1/z$-tail of the image potential leads to a finite electron sticking coefficient at vanishing electron energy and surface temperature. [@CK92; @BR92] This is in contrast to the quantum reflection, that is, the vanishing sticking coefficient, one finds in this limit for the short-ranged surface potentials typical for physisorption of neutral particles. [@CG88; @CC98] Second, the surface potential in which physisorption of electrons occurs, in particular at plasma boundaries, consists of a polarization-induced attractive part and a repulsive Coulomb part due to electrons already adsorbed on the surface. The limit of vanishing coverage, very often adopted in the theoretical description of physisorption of neutral particles, is thus only applicable to the very first (last) electron approaching (leaving) the boundary. Third, in contrast to physisorption of neutral particles, physisorption of electrons has to be always described quantum mechanically because the image potential varies on a scale comparable to the thermal de Broglie wave length of the electron. [@BFKD08] This is also the case for physisorption of positronium. [@NNS86; @MSL91; @WJS92] Finally, and this will be the theme of our investigation, the polarization-induced image potential supports deep states, in addition to shallow ones. Direct transitions from the continuum to deep bound states are very unlikely. Hence, a modeling in terms of a quantum-kinetic rate equation for the occupancy of the bound surface states, [@Brenig82; @KG86] and Brenig’s distinction between prompt and kinetic sticking, [@Brenig82] is vital for a correct description of electron physisorption. For phonon-controlled adsorption and desorption, as it occurs at dielectric surfaces, deep states also imply that multi-phonon processes have to be taken into account in the calculation of state-to-state transition probabilities. This can be done either via an expansion of the energy dependent $T$-matrix [@BY73; @GKT80a; @AM91], the method we are using [@HBF10], or via a Magnus-type resummation of the time-dependent scattering operator. [@Gumhalter96; @Gumhalter01; @SG03; @SG05; @SG08] In the following we investigate adsorption of an electron to a dielectric surface at finite temperature assuming an acoustic longitudinal bulk phonon controlling electron energy relaxation at the surface. To avoid complications due to finite coverage we focus on the first approaching electron. Using the quantum-kinetic rate equation for the occupancy of the image states of our previous work [@HBF10] (thereafter referred to as I), where we studied desorption of an image-bound electron from a dielectric surface, we calculate prompt and kinetic sticking coefficients. Compared to semiclassical estimates [@UN80] they turn out to be extremely small. Instead of the order of $10^{-1}$ we find them to be at most of the order of $10^{-3}$. We also analyze in detail the adsorption scenario as a function of surface temperature and potential depth. Most notable, our results reveal an energy relaxation bottleneck prohibiting, on a short timescale, thermalization of the electron with the surface, that is, the trickling through of the electron from upper to deep states. The reduced accessibility of deep states makes the kinetic sticking coefficient much smaller than the prompt sticking coefficient in contrast to what is usually found in physisorption of neutral particles. [@BGB93] The remaining paper is structured as follows. In Sec. 
II, we specify the quantum-kinetic approach of our previous work concerning desorption (paper I [@HBF10]) to the situation of adsorption and introduce prompt and kinetic sticking coefficients. We then briefly recall in Sec. III the calculation of the state-to-state transition probabilities based on a microscopic model for the electron-surface interaction and an expansion of the $T$-matrix for the dynamic part of that interaction. Mathematical details not given can be found in I. Finally, in Sec. IV, we present and discuss our results before we conclude in Sec. V. Electron kinetics ================= The probability for an approaching electron in the continuum state $k$ to make a transition to any of the bound states $n$ of the polarization-induced image potential is given by the prompt energy-resolved sticking coefficient, [@KG86] $$\begin{aligned} s_{e,k}^\text{prompt} = \tau_t \sum_n W_{nk} \text{ ,}\end{aligned}$$ where $\tau_t=2L/v_z$ is the travelling time through the surface potential of width $L$ which, in the limit $L\rightarrow \infty$, can be absorbed into the transition probability from the continuum state $k$ to the bound state $n$, $W_{nk}$. If the incident unit electron flux (we consider only a single electron impinging on the surface) is stationary and corresponds to an electron with Boltzmann distributed kinetic energies, the prompt energy-averaged sticking coefficient is given by [@KG86] $$\begin{aligned} s^{\rm prompt}_e=\frac{\sum_k s^{\rm prompt}_{e,k} k e^{-\beta_e E_k}}{\sum_k k e^{-\beta_e E_k}} \text{ ,} \label{promptenergyavergaed}\end{aligned}$$ where $\beta_e^{-1}=k_BT_e$ is the mean electron energy. Prompt sticking coefficients are properly weighted sums over state-to-state transition probabilities from continuum to bound surface states. They give the probability for initial trapping, which, according to Iche and Noziéres[@IN76] and Brenig [@Brenig82], is the first out of three stages of physisorption. The second stage encompasses relaxation of the bound state occupancy and the third stage is the desorption of the temporarily bound particle. The second stage, which also includes transitions back to the continuum, cannot be captured by simple state-to-state transition probabilities. Instead, a quantum-kinetic rate equation for the time-dependent occupancy of the bound surface states $n_n(t)$ has to be employed. [@GKT80a; @Brenig82] This equation describes processes on a timescale much longer than the lifetime of the individual surface states but shorter than the desorption time. [@GKT80a; @Brenig82; @KG86] Following Gortel and coworkers, [@GKT80a; @KG86] $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}n_n(t)=&\sum_{n^\prime} \left[W_{n n^\prime} n_{n^\prime}(t) - W_{n^\prime n} n_n(t) \right] \nonumber \\ & -\sum_k W_{k n} n_n(t) +\sum_k \tau_t W_{nk} j_k(t) \text{ ,} \label{fullrateeqn}\end{aligned}$$ where $W_{n^\prime n}$ is the probability for a transition from a bound state $n$ to another bound state $n^\prime$, $W_{kn}$ and $W_{nk}$ are the probabilities, respectively, for a transition from a bound state $n$ to a continuum state $k$ and vice versa, and $$\begin{aligned} j_k(t)=n_k(t)\tau^{-1}_t \end{aligned}$$ is the incident electron flux which in principle can be non-stationary. The solution to Eq. 
(\[fullrateeqn\]) can be obtained from the solution of the corresponding homogeneous equation, $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}n_n(t)&=\sum_{n^\prime} \left[W_{n n^\prime} n_{n^\prime}(t) - W_{n^\prime n} n_n(t) \right] -\sum_k W_{k n} n_n(t)\nonumber\\ &= T_{nn^\prime}n_{n^\prime}(t)~, \label{homorateeqn}\end{aligned}$$ and treating the electron flux $ j_k(t)$ as an externally specified quantity. [@Brenig82; @KG86] In the simplest case, which is also the basis of Eq. (\[promptenergyavergaed\]), $ j_k(t)$ is the stationary flux corresponding to a single electron whose energy is Boltzmann distributed over the continuum states $k$ with a mean electron energy $k_BT_e$, that is, $j_k(t)\equiv j_k\sim k e^{-\beta_eE_k}$. To solve Eq. (\[homorateeqn\]) amounts to solving the eigenvalue problem for the matrix ${\bf T}$. [@Brenig82; @KG86] For the particular case of an electron physisorbing at a dielectric surface this has been already done in I. If the transitions between bound states are much faster than the transitions to the continuum, so that the adsorbed electron escapes very slowly, one eigenvalue, $-\lambda_0$, turns out to be considerably smaller than all the other eigenvalues $-\lambda_\kappa$. The equilibrium occupation of the bound states, $n_n^\mathrm{eq}$, is then to a very good approximation the right eigenvector to $-\lambda_0$, which can be thus identified with the negative of the inverse of the desorption time, that is, $\lambda_0=\tau_e^{-1}$. The kinetic sticking coefficient, which takes into account not only the initial capture but also the subsequent relaxation of the occupancy of the bound surface states, can be obtained as follows. [@KG86] The solution of Eq. (\[fullrateeqn\]) is split according to $$\begin{aligned} n_n(t)=n_n^{\rm s}(t)+n_n^{\rm f}(t)~, \label{ntotal}\end{aligned}$$ where $$\begin{aligned} n_n^{\rm s}(t)=e^{-\lambda_0 t} \int_{-\infty}^t \mathrm{d}t^\prime e^{\lambda_0 t^\prime} e_n^{(0)} \sum_{k,l} \tilde{e}_l^{(0)} \tau_t W_{lk} j_k(t^\prime) \text{ } \label{nslow}\end{aligned}$$ is the slowly and $$\begin{aligned} n_n^{\rm f}(t)=\!\sum_{\kappa \neq 0} e^{-\lambda_\kappa t}\!\!\int_{-\infty}^t\!\!\!\!\!\mathrm{d}t^\prime e^{\lambda_\kappa t^\prime} e_n^{(\kappa)} \sum_{k,l} \tilde{e}_l^{(\kappa)} \tau_t W_{lk} j_k(t^\prime) \text{ } \label{nquick}\end{aligned}$$ the quickly varying part of $n_n(t)$. The quantities $e_n^{(\kappa)} $ and $\tilde{e}_n^{(\kappa)}$ are, respectively, the components of the right and left eigenvectors of the matrix ${\bf T}$ to the eigenvalue $-\lambda_\kappa$. The probability of the electron remaining in the surface states for times of the order of the desorption time is given by the slowly varying part only, that is, $n^{\rm s}(t)=\sum_n n_n^{\rm s}(t)$. Differentiating $n^{\rm s}(t)$ with respect to time, $$\begin{aligned} \frac{d}{dt}n^{\rm s}(t)=\sum_k s^{\rm kinetic}_{e,k} j_k(t) -\lambda_0 n^{\rm s}(t)~,\end{aligned}$$ enables us, following Brenig [@Brenig82], to identify the kinetic energy-resolved sticking coefficient, $$\begin{aligned} s_{e,k}^\text{kinetic}=\tau_t \sum_{n,l} e_{n}^{(0)} \tilde{e}_l^{(0)} W_{lk} \text{ ,}\end{aligned}$$ which gives the probability for the electron being trapped even after the energy relaxation of the second stage of physisorption. In analogy to Eq. 
(\[promptenergyavergaed\]) the energy-averaged kinetic sticking coefficient reads for a stationary Boltzmannian electron flux $$\begin{aligned} s^{\rm kinetic}_e=\frac{\sum_k s^{\rm kinetic}_{e,k} k e^{-\beta_e E_k}}{\sum_k k e^{-\beta_e E_k}} \text{ .} \label{kineticenergyavergaed}\end{aligned}$$ Transition probabilities {#Electron-surface interaction and transition probabilities} ======================== The transition probabilities $W_{qq^\prime}$, where $q$ and $q^\prime$ stand either for $k$ or $ n$, are the fundamental building blocks of the foregoing analysis. They have to be calculated from a microscopic model for the electron-surface interaction. The necessary steps have been described in I. In short, for a dielectric surface, the main source, leaving interband electronic excitations aside, which primarily affect the dielectric constant, of the attractive static electron-surface potential is the coupling of the electron to a dipole-active surface phonon. [@RM72; @EM73] Far from the surface the surface potential merges with the classical image potential and thus $\sim 1/z$. Close to the surface, however, it is strongly modified by the recoil energy resulting from the momentum transfer parallel to the surface when the electron absorbs or emits a surface phonon. Taking this effect into account leads to a recoil-corrected image potential $\sim 1/(z+z_c)$ with $z_c$ a cut-off parameter defined in I. Transitions between the surface states supported by the image potential are caused by a longitudinal acoustic bulk phonon perpendicular to the surface. The Hamiltonian from which we calculated the transition probabilities was introduced in I where all quantities entering the Hamiltonian are explicitly defined. It is given by [@HBF10] $$\begin{aligned} H=H_e^\text{static} +H_{ph}+H_{e-ph}^\text{dyn} \text{ ,} \label{Htotal}\end{aligned}$$ where $$\begin{aligned} H_e^\text{static}=\sum_q E_q c_q^\dagger c_q \end{aligned}$$ describes the electron in the recoil-corrected image potential, which thus accounts for the coupling of the electron to the dipole-active surface phonon, $$\begin{aligned} H_{ph}=\sum_Q \hbar \omega_Q b_Q^\dagger b_Q \text{ ,}\end{aligned}$$ describes the free dynamics of the acoustic bulk phonon responsible for transitions between surface states, and $$\begin{aligned} H_{e-ph}^\text{dyn}=\sum_{q,q^\prime} \langle q^\prime | V_p(u,z)|q\rangle c_{q^\prime}^\dagger c_q \text{ .}\end{aligned}$$ denotes the dynamic coupling of the electron to the bulk phonon. Expanding $V_p(u,z)$ with respect to the displacement field, $$\begin{aligned} u=\sum_Q \sqrt{\frac{\hbar}{2\mu \omega_Q N_s}} \left( b_Q+b_{-Q}^\dagger \right) \label{uomegarel} \text{ ,}\end{aligned}$$ allows us to classify the dynamic interaction according to the number of exchanged bulk phonons. As in I we use a Debye model for the bulk phonon. Sums over phonon momenta are thus replaced by $$\begin{aligned} \sum_Q \dots=\frac{3N_s}{\omega_D^3}\int \mathrm{d}\omega \omega^2 \dots \text{ .} \label{debyemodel}\end{aligned}$$ Measuring energies in units of the Debye energy $\hbar \omega_D=k_BT_D$, important dimensionless energy parameters characterizing (\[Htotal\]) are $$\begin{aligned} \epsilon_n=\frac{E_n}{\hbar \omega_D} \quad \text{and} \quad \Delta_{nn^\prime}=\frac{E_n-E_{n^\prime}}{\hbar \omega_D} \text{ ,}\end{aligned}$$ where $E_n < 0$ is the energy of the $n^\text{th}$ bound state. 
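Before turning to the classification of potential depths below, we note that, once the transition probabilities $W_{qq^\prime}$ are known, the kinetic scheme of the previous section reduces to dense linear algebra. The following minimal sketch (Python/NumPy, with random placeholder probabilities standing in for the microscopically calculated ones) constructs the matrix ${\bf T}$ of Eq. (\[homorateeqn\]), identifies the smallest-magnitude eigenvalue $-\lambda_0$ with the inverse desorption time, and evaluates the prompt and kinetic energy-resolved sticking coefficients; with placeholder numbers the separation of timescales assumed in the text is of course not guaranteed.

``` python
import numpy as np

# Placeholder transition probabilities (arbitrary units); in the actual calculation
# these come from the T-matrix expansion of this section.  Indexing convention:
# W_bb[n, m] = W_{nm} (bound m -> bound n), W_bk[n, k] = W_{nk} (continuum -> bound),
# W_kb[k, n] = W_{kn} (bound -> continuum).
Nb, Nk = 7, 200
rng = np.random.default_rng(1)
W_bb = rng.random((Nb, Nb)); np.fill_diagonal(W_bb, 0.0)
W_bk = 1e-3 * rng.random((Nb, Nk))
W_kb = 1e-3 * rng.random((Nk, Nb))

# Rate matrix of Eq. (homorateeqn): T_{nm} = W_{nm} - delta_{nm} (sum_l W_{lm} + sum_k W_{km})
T = W_bb - np.diag(W_bb.sum(axis=0) + W_kb.sum(axis=0))

evals, R = np.linalg.eig(T)            # right eigenvectors in the columns of R
Lv = np.linalg.inv(R)                  # rows of Lv are the left eigenvectors (Lv @ R = 1)
slow = np.argmax(evals.real)           # eigenvalues are negative; -lambda_0 is the largest
lam0 = -evals.real[slow]
tau_e = 1.0 / lam0                     # desorption time
e0 = R[:, slow].real                   # e_n^(0)
e0_tilde = Lv[slow, :].real            # e~_n^(0)

tau_t = 1.0                            # travelling time; in the text it is absorbed into W_{nk}
s_prompt_k = tau_t * W_bk.sum(axis=0)               # s^prompt_{e,k} = tau_t sum_n W_{nk}
s_kinetic_k = tau_t * e0.sum() * (e0_tilde @ W_bk)  # s^kinetic_{e,k} = tau_t sum_{n,l} e_n^(0) e~_l^(0) W_{lk}
```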
We call the surface potential shallow if the lowest bound state is at most one Debye energy beneath the continuum, that is, $\epsilon_1>-1$, one-phonon deep if the energy difference between the lowest two bound states does not exceed one Debye energy, that is, $\Delta_{12}>-1$, two-phonon deep if the energy difference between the lowest two bound states is between one and two Debye energies, that is, $-1>\Delta_{12}>-2$, and so forth. $\epsilon_s$ $\hbar\omega_D$ $\Delta E_{12} $ $\Delta_{12}$ --------------- -------------- ----------------- ------------------ --------------- -- graphite $13.5$ $0.215$eV $0.233$eV $1.06$ ${\rm SiO_2}$ $3.8$ $0.041$eV $0.105$eV $2.59$ GaAs $13$ $0.030$eV $0.152$eV $5.13$ : Dielectric constant $\epsilon_s$, Debye energy $\hbar\omega_D$, energy difference of the lowest two bound states of the recoil-corrected image potential $\Delta E_{12}$, and the corresponding potential depth parameter $\Delta_{12}$ for graphite, silicon dioxide, and gallium arsenide. \[materialcomp\] Because of the strong interaction between the electron and the dipole-active surface phonon, physisorption of an electron typically takes place in an at least two-phonon deep image potential (see Table \[materialcomp\]). Hence, physisorption of an electron controlled by a bulk acoustic phonon, as anticipated in (\[Htotal\]) and in fact applicable to dielectric surfaces, where large energy gaps block electronic relaxation channels due to internal electron-hole pairs and/or plasmons, has to involve the exchange of many bulk phonons. The transition probability from an electronic state $q$ to an electronic state $q^\prime$ is given by[@BY73] $$\begin{aligned} \mathcal{R}(q^\prime,q)=&\frac{2\pi}{\hbar} \sum_{s,s^\prime} \frac{e^{-\beta_s E_s}}{\sum_{s^{\prime\prime}}e^{-\beta E_{s^{\prime\prime}}}} |\langle s^\prime, q^\prime |T|s,q\rangle|^2 \nonumber \\ &\times \delta(E_s-E_{s^{\prime}}+E_q-E_{q^\prime}) \text{ ,}\end{aligned}$$ where $T$ is the on-shell $T$-matrix corresponding to $H_{e-ph}^\text{dyn}$ and $\beta_s=(k_BT_s)^{-1}$ with $T_s$ the surface temperature; $|s\rangle$ and $|s^\prime \rangle$ are initial and final phonon states, which are averaged over. Multi-phonon processes have two possible origins. [@BY73] They arise from the expansion of $H_{e-ph}^\text{dyn}$ with respect to $u$, $$\begin{aligned} H_{e-ph}^\text{dyn}=V_1+V_2+V_3+\mathcal{O}(u^4)\end{aligned}$$ and from the multiple action of this perturbation. Defining the free electron-phonon resolvent, $$\begin{aligned} G_0=(E-H_e^\text{static}-H_{ph}+i\epsilon)^{-1} \text{ ,}\end{aligned}$$ the latter is encoded in the $T$-matrix equation, $$\begin{aligned} T=H_{e-ph}^\text{dyn}+H_{e-ph}^\text{dyn}G_0T \text{ .}\end{aligned}$$ Using the short-hand notation introduced in I, the one-phonon process, proportional to $u^2$, is accounted for by $$\begin{aligned} \langle s^\prime,q^\prime | V_1 |s,q\rangle \langle s,q|V_1^\ast | s^\prime, q^\prime \rangle~.\end{aligned}$$ It leads to the standard golden rule approximation for the transition probability. Two-phonon processes are proportional to $u^4$ and thus less likely than one-phonon processes. Most of them renormalize only the one-phonon transition probability and can thus be neglected in a first approximation. There are however two-phonon processes which induce transitions absent in the one-phonon approximation and hence have to be included in the calculation of the transition probabilities. 
In our short-hand notation the processes in question are $$\begin{aligned} &\langle s^\prime,q^\prime | V_2 |s,q\rangle \langle s,q|V_2^\ast | s^\prime, q^\prime \rangle~, \label{v2square} \\ &\langle s^\prime,q^\prime | V_2 |s,q\rangle \langle s,q|V_1^\ast G_0^\ast V_1^\ast | s^\prime, q^\prime \rangle~, \label{v2v1square} \\ &\langle s^\prime,q^\prime | V_1 G_0 V_1 |s,q\rangle \langle s,q|V_2^\ast | s^\prime, q^\prime \rangle~, \label{v1squarev2} \\ &\langle s^\prime,q^\prime | V_1 G_0 V_1 |s,q\rangle \langle s,q| V_1^\ast G_0^\ast V_1^\ast | s^\prime, q^\prime \rangle~. \label{v1quartic} \end{aligned}$$ It is shown in I how these processes can be included in the calculation of the transition probabilities $W_{qq^\prime}$ entering the rate equation (\[fullrateeqn\]). Singularities appearing in some of the two-phonon transition rates have been regularized by taking a finite phonon lifetime into account (see I and Ref. [@Rafael09] for details). The electronic matrix elements entering the transition probabilities have been also calculated in I, using bound and unbound wavefunctions of the recoil-corrected image potential. Hence, not only bound states but also continuum states belong to the static surface potential. [@HBF10] Our approach is thus on par with the distorted-wave Born approximation employed, for instance, by Armand and Manson for the calculation the sticking coefficient for light neutral particles. [@AM91] Results ======= The material parameters chosen for the numerical calculations are, unless specified otherwise, given in Table \[materialtable\]. They correspond to graphite. For some calculations we use however the Debye temperature as a tunable parameter to realize different potential depths which is the main focus of this investigation. -------------------------- --------- ------------------ --------- ------------------ Debye temperature $\quad$ $T_D$ $\quad$ $2500$K Dielectric constant $\quad$ $\epsilon_s$ $\quad$ $13.5$ TO phonon mode frequency $\quad$ $\hbar \omega_T$ $\quad$ $170 \text{meV}$ Grüneisen parameter $\quad$ $\gamma_G$ $\quad$ $1.7$ Shear modulus $\quad$ $\mu $ $\quad$ $5$ GPa -------------------------- --------- ------------------ --------- ------------------ : Material parameters for the numerical results. \[materialtable\] One-phonon deep potentials -------------------------- First, we present results for shallow and one-phonon deep surface potentials. In leading order, only one-phonon processes are involved and the one-phonon approximation for the transition probabilities is sufficient. Because the electron thermalizes then very quickly with the surface the prompt and kinetic sticking coefficients are almost identical. In this subsection we show therefore only results for the prompt sticking coefficient. Figure \[figure1\] compares $s^{\rm prompt}_e$ for a shallow and a one-phonon deep potential. The sketches in the upper part of the figure illustrate the main difference between the two potentials. For a shallow potential the lowest bound state is less than one Debye energy below the continuum so that a low-lying electron from the continuum can be directly trapped in the lowest bound state, $n=1$, by a one-phonon transition. In the case of a one-phonon deep potential, one-phonon processes can only lead to trapping in one of the upper bound states $ n>1$. ![Upper panel: Sketch of a shallow (left) and one-phonon deep (right) potential. The grey shaded areas show the energy range of sticking by one-phonon processes. 
Middle panel: Energy-resolved prompt sticking coefficient as a function of the electron energy for a shallow potential ($T_D=4100{\rm K}$) at $T_s=410{\rm K}$ (left) and for a one-phonon deep potential ($T_D=3000{\rm K}$) at $T_s=300{\rm K}$ (right). Lower panel: Energy-averaged prompt sticking coefficient as a function of the mean electron energy for a shallow potential ($T_D=4100{\rm K}$) at $T_s=205{\rm K}$ (left) and a one-phonon deep potential ($T_D=3000{\rm K}$) at $T_s=150{\rm K}$ (right). []{data-label="figure1"}](figure1a.eps){width="0.8\linewidth"}
The middle panels of Fig. \[figure1\] show the prompt energy-resolved sticking coefficient as a function of the energy of the incident electron. Apart from discontinuities, the sticking coefficient depends linearly on the electron energy. As explained in Sec. \[Electron-surface interaction and transition probabilities\], the one-phonon transition probability is proportional to $u^2$. From Eq. (\[uomegarel\]) we have $u^2 \sim 1/\omega$, so that in conjunction with the Debye model (\[debyemodel\]) the transition probability is proportional to $\omega$, which, due to energy conservation, translates into a proportionality to the electron energy. The phonon spectrum is thus reflected in the (one-phonon) energy-resolved sticking coefficient, as it is, for instance, also in the cross section for (one-phonon) inelastic particle-surface scattering. [@Gumhalter01] Steep jumps in the energy-resolved sticking coefficient reflect the level accessibility. When the energy difference between the electron and a bound state exceeds the Debye energy, one-phonon transitions are no longer possible and the electron can no longer directly reach that level. For a shallow potential, the first drop therefore reflects the accessibility of the first bound state, whereas for a one-phonon deep potential, where this bound state cannot be directly reached, the first drop reflects the accessibility of the second bound state. As energy differences between successive bound states of the image potential decrease towards the ionization threshold, that is, with increasing $n$ (see upper panels of Fig. \[figure1\]), more such steps are found near the maximum electron energy allowing for trapping, which is the Debye energy. The contribution of the $n^\mathrm{th}$ bound state to the sticking coefficient, reflected in the height of the corresponding accessibility threshold, decreases for higher bound states. The reason for this lies in the electronic matrix element appearing in first-order perturbation theory, $\langle n |1/(z+z_c)^2 |k \rangle$. This matrix element is smaller for higher bound states because higher bound states have less weight near the surface where the perturbation is strongest. The lowest bound state is hence of considerable importance: if it is accessible, it yields a particularly large contribution. The decreasing electronic matrix element also implies that neglecting all but a few, say seven, of the infinitely many bound states suffices for the calculation of the sticking coefficient. The prompt energy-averaged sticking coefficient is shown in the lower panels of Fig. \[figure1\] as a function of the mean electron energy. Due to thermal averaging, the sticking coefficient no longer exhibits characteristics of the phonon spectrum and level accessibility, which makes it more robust against changes in the phonon model.
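The smoothing effect of this thermal average can be mimicked with a toy model: take an energy-resolved sticking coefficient that grows linearly with energy and exhibits a few accessibility steps, and average it over a Boltzmann-like flux weight. The following Python sketch is purely illustrative; the step positions, heights, and the precise flux weighting are made up and do not correspond to our calculated matrix elements (in particular, the toy ignores the finite zero-energy limit discussed next).

```python
import numpy as np

# Toy energy-resolved sticking coefficient (E in units of the Debye energy):
# linear in E, as expected from a Debye phonon spectrum, with two hypothetical
# accessibility steps; it vanishes above one Debye energy, where one-phonon
# trapping is no longer possible.  All numbers are made up for illustration.
def s_resolved(E):
    s = 4e-3 * E
    s = np.where(E > 0.55, 0.6 * s, s)     # hypothetical accessibility threshold
    s = np.where(E > 0.85, 0.7 * s, s)     # another hypothetical threshold
    return np.where(E < 1.0, s, 0.0)

# Energy average with a Boltzmann-like flux weight (illustrative choice).
E = np.linspace(1e-4, 3.0, 6000)
for kTe in (0.05, 0.1, 0.2, 0.4):          # electron temperature in Debye-energy units
    w = np.sqrt(E) * np.exp(-E / kTe)
    s_avg = np.trapz(w * s_resolved(E), E) / np.trapz(w, E)
    print(f"kT_e = {kTe:4.2f} hbar*omega_D  ->  <s> = {s_avg:.2e}")
```

Averages of this kind wash out the steps, in line with the smooth curves in the lower panels of Fig. \[figure1\].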
The energy-averaged sticking coefficient does, however, still reflect the importance of the lowest bound state for shallow potentials. Note also that, due to the long-range tail of the recoil-corrected image potential $\sim 1/(z+z_c)$, the energy-resolved and the energy-averaged electron sticking coefficients are finite for vanishing electron energy and electron temperature, irrespective of the surface temperature, as they should be. [@BR92; @CK92]

Two-phonon deep potentials
--------------------------

We now turn our attention to two-phonon deep potentials. Under the assumption that the true one-phonon process dominates the corrections coming from two-phonon processes, the latter are only taken into account for transitions where one-phonon processes alone would yield no transition probability. In a two-phonon deep potential, two-phonon processes affect sticking in two ways. They enable prompt trapping from higher-lying continuum states, outside the one-phonon trapping range, and they control the energy relaxation of the trapped electron and thus the formation of the quasi-stationary bound state occupancy from which desorption occurs. There are thus two questions to be answered: How significant are two-phonon processes for prompt sticking, and how does the relaxation thereafter depend on the type of phonon processes available? ![Energy-resolved prompt sticking coefficient for a two-phonon deep potential ($T_D=2500{\rm K}$ and $T_s=500{\rm K}$) calculated with different numbers of bound states $N$. Full lines denote the one-phonon contribution, dashed lines the two-phonon contribution. Inset: Contribution of the second bound state. One-phonon contribution (red), two-phonon contribution (blue) broken down into the processes $(V_2)^2$, $(V_1)^2V_2$ and $(V_1)^4$.[]{data-label="figure2"}](figure2.eps){width="\linewidth"} To address the first question we show in Fig. \[figure2\] the contributions to the prompt energy-resolved sticking coefficient arising from, respectively, one- and two-phonon processes. If available, one-phonon processes provide much larger sticking coefficients than two-phonon processes. Figure \[figure2\] also confirms that the sticking coefficient saturates quickly with the number of bound states included in the calculation. To investigate the relative importance of the various two-phonon processes arising, respectively, from the expansion of the dynamical perturbation and from the $T$-matrix, we plot in the inset of Fig. \[figure2\] the partial contributions to the prompt sticking coefficient arising from the various two-phonon processes which trigger transitions to the second bound state. A two-phonon process can be simultaneous, as encoded in $V_2$, or successive, as described by $V_1G_0V_1$. Hence, the total two-phonon transition probability contains a contribution without virtual intermediate states, symbolically denoted by $(V_2)^2$ \[see Eq. (\[v2square\])\], and two contributions with virtual intermediate states, symbolically denoted by $(V_1)^2V_2$ and $(V_1)^4$ \[see Eqs. (\[v2v1square\]), (\[v1squarev2\]), and (\[v1quartic\])\]. The prompt energy-resolved sticking coefficient calculated with either $(V_2)^2$, $(V_1)^2V_2$, or $(V_1)^4$ only is shown in the inset of Fig. \[figure2\]. In accordance with what we found in our calculation of the desorption time of an image-bound electron (paper I) and with what Gumhalter and Šiber found in their calculation of the cross section for inelastic particle-surface scattering [@SG03; @SG05; @SG08], the direct two-phonon process $(V_2)^2$ is dominated by the processes $(V_1)^2V_2$ and $(V_1)^4$.
Having clarified that two-phonon processes lead to a much smaller prompt sticking coefficient than one-phonon processes, we now move on to study the effect of two-phonon transitions on the relaxation of the bound state occupancy. For a two-phonon deep potential the energy difference between the lowest two bound states exceeds one Debye energy. Hence, the relaxation of an electron trapped in one of the upper bound states to the quasi-stationary occupancy can only occur via two-phonon processes. Since the kinetic sticking coefficient gives the probability for the incident electron to make not only a transition to a bound state but also a subsequent relaxation to the quasi-stationary occupancy of these states, the importance of two-phonon processes should be signalled by the amount by which the kinetic sticking coefficient deviates from the prompt sticking coefficient. Figure \[figure3\] shows that, for a two-phonon deep potential, the kinetic energy-resolved sticking coefficient is considerably smaller than the prompt energy-resolved sticking coefficient at intermediate electron energies. This is due to the fact that the two-phonon transition to the lowest bound state, where the major part of the quasi-stationary occupancy resides, is very unlikely and thus very slow. Only for small energies, in the first hump of the two-phonon contribution to the sticking coefficient, which is due to trapping in the lowest bound state, are the prompt and kinetic sticking coefficients identical, because no trickling through is needed. ![Prompt and kinetic energy-resolved sticking coefficient as a function of the electron energy for a two-phonon deep potential ($T_D=2500{\rm K}$ and $T_s=357.14{\rm K}$). Full lines denote the one-phonon contribution, dashed lines the two-phonon contribution. []{data-label="figure3"}](figure3.eps){width="\linewidth"} The weak coupling between the lowest two bound states in a two-phonon deep surface potential leads to a relaxation bottleneck for the electron if it is initially trapped in one of the upper states. In Figs. \[figure4\] and \[figure5\] we analyze the relaxation bottleneck in more detail as a function of the Debye temperature $T_D$ (to realize different potential depths) and the surface temperature $T_s$. The upper panel shows the desorption time from the lowest bound state, that is, the desorption time for an electron capable of falling to the lowest bound state, and the desorption time from the upper bound states, that is, the desorption time for an electron not capable of falling to the lowest bound state. The probability for the electron initially trapped in the upper bound states to fall down to the lowest bound state and the probability to desorb to the continuum without ever passing through the lowest bound state are shown in the middle panel, and the lower panel shows the prompt and the kinetic sticking coefficients. ![Upper panel: Inverse desorption time from the lowest level (dashed blue line) and the upper levels (full red line). Middle panel: Probability for an electron initially trapped in one of the upper levels of the surface potential ($n=2,3,4\dots $) either to fall to the lowest bound state (dashed blue line) or to desorb without ever reaching the lowest bound state (full red line). Lower panel: Prompt (full red line) and kinetic (dashed blue line) energy-resolved sticking coefficient. In all three panels, $E_e=0.1{\rm eV}$ and $T_s/T_D=0.2$ (to keep the level of phonon excitation constant we set $T_D/T_s$ constant [@HBF10]).
For $T_D<2707{\rm K}$ the surface potential is two-phonon deep, for $2707{\rm K} < T_D < 4029{\rm K}$ it is one-phonon deep, and for $T_D > 4029{\rm K}$ it is shallow.[]{data-label="figure4"}](figure4.eps){width="\linewidth"} Before we discuss Figs. \[figure4\] and \[figure5\], we say a few words about the way we calculated the quantities shown in the upper and middle panels. The desorption time from the lowest bound state is the desorption time for an electron equilibrated with the surface, the quantity we calculated in I, because the quasi-stationary occupancy and the equilibrium occupancy coincide and, moreover, both reside mainly on the lowest level for the surface temperatures considered. The desorption time from the upper bound states we simply calculated from the rate equation (\[homorateeqn\]) with the lowest bound state excluded. ![The three panels show, as a function of the surface temperature, the quantities of Fig. \[figure4\] for $T_D=2500{\rm K}$, that is, graphite, and $E_e=0.09{\rm eV}$. []{data-label="figure5"}](figure5.eps){width="\linewidth"} The probabilities shown in the middle panels we obtained from the following consideration. Whether an electron trapped in the upper bound states passes through to the lowest bound state or not depends on how large the transition probabilities from the upper bound states to the lowest bound state are in comparison to the transition probabilities to the continuum. Hence, the second stage of physisorption, that is, the time evolution of the occupancy after the initial trapping, can be captured by a rate equation for the occupancy of the upper bound states ($n=2,3,\dots$), similar to Eq. (\[homorateeqn\]), but with a loss term to both the continuum and the lowest bound state, $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}n_n=&\sum_{n^\prime} \left[W_{n n^\prime} n_{n^\prime}(t)- W_{n^\prime n} n_n(t) \right] \nonumber \\ & -\sum_k W_{k n} n_n(t) -W_{1 n}n_n(t) \nonumber \\ =&D_{n n^\prime} n_{n^\prime}(t)~, \label{upperrelax}\end{aligned}$$ where $n$ and $n^\prime$ run over the upper image states. Solving Eq. (\[upperrelax\]) with the initial condition $$\begin{aligned} n_l(0)=\frac{\sum_k W_{lk}j_k}{\sum_{l,k}W_{lk}j_k} \text{ ,}\end{aligned}$$ which is the (conditional) probability that the electron is trapped in the $l^\mathrm{th}$ image state under the condition that it is trapped in any of the bound states, we deduce for the probability for an electron trapped in one of the upper bound states to fall either to the lowest bound state ($f=1$) or to desorb without falling to the lowest bound state ($f=c$), $$\begin{aligned} p_f=n_f(t\rightarrow \infty)=\sum_{n,\kappa} W_{fn} \frac{1}{\lambda_\kappa} d_n^{(\kappa)} \sum_l \tilde{d}_l^{(\kappa)} n_l(0) \text{ ,}\end{aligned}$$ where $d^{(\kappa)}_n$ and $\tilde{d}^{(\kappa)}_n$ are the components of the right and left eigenvectors of the matrix ${\bf D}$. We now turn our attention to Fig. \[figure4\]. It shows the effect of different potential depths realized by tuning the Debye temperature. For a shallow potential ($T_D>4029 {\rm K}$) desorption from the lowest level is mainly due to direct one-phonon transitions to the continuum; the same type of transition also empties the upper bound states. Hence, for shallow potentials, the desorption times from the lowest bound state and from the upper bound states do not differ much.
For one-phonon deep potentials ($2707{\rm K} < T_D < 4029{\rm K}$), however, the cascade of two one-phonon processes via the second level yields much larger desorption times from the lowest level compared to the desorption time from the upper levels. For a two-phonon deep potential ($T_D<2707{\rm K}$), finally, the first leg of the cascade, the transition to the second bound state, is a two-phonon transition, which increases the desorption time compared to a one-phonon deep potential. The second level is the link between the upper bound states and the lowest bound state. The ratio of the transition probabilities from the second bound state to the lowest bound state, $W_{12}$, and from the second bound state to the continuum, $W_{c2}$, determines whether the electron trickles through after initial trapping or not, that is, whether it thermalizes with the surface or not. For a one-phonon deep potential, both $W_{12}$ and $W_{c2}$ are due to one-phonon processes; in this case $W_{12} > W_{c2}$. For a two-phonon deep potential, however, the transition from the second to the lowest bound state is enabled by a two-phonon process only. In this case, and for moderate surface temperatures, $W_{12} < W_{c2}$, so that the electron is more likely to desorb before relaxing to the lowest bound state. As the kinetic sticking coefficient gives the probability of the trapped electron to relax to the quasi-stationary occupancy, the drop in the probability for reaching the lowest level at $T_D=2707{\rm K}$, which is the one-phonon/two-phonon threshold, translates into the abrupt reduction of the kinetic sticking coefficient at $T_D=2707{\rm K}$ (see middle and lower panels of Fig. \[figure4\]). Figure \[figure5\] shows the quantities of Fig. \[figure4\] as a function of the surface temperature. The Debye temperature is fixed to the value for graphite. At low surface temperatures the kinetic sticking coefficient is only slightly smaller than the prompt sticking coefficient, yet for high surface temperatures their difference increases significantly as a consequence of the inhibited thermalization. This can be understood as follows: The transition from the second to the first bound state entails the emission of two phonons, while the transition from the second bound state to the continuum requires only the absorption of a single phonon. At low enough surface temperatures it is nevertheless possible that the electron drops to the lowest bound state, because the likelihood of phonon emission is proportional to $1+n_B$ whilst the likelihood of phonon absorption is proportional to $n_B$, with $n_B$ the Bose occupation function. Hence, for sufficiently low surface temperatures, $W_{12} > W_{c2}$, even when $W_{12}$ entails a two-phonon and $W_{c2}$ a one-phonon process, so that the electron has a good chance to trickle through. Increasing the surface temperature, however, benefits $W_{c2}$ more than $W_{12}$, so that eventually $W_{12}<W_{c2}$, prohibiting the trickling through and leading to a considerable reduction of the kinetic sticking coefficient at high surface temperatures. ![Bound state occupancy of the lowest bound state (dashed blue line) and the upper bound states (solid red line) as a function of time for a unit flux of a Boltzmannian electron with $k_B T_e=0.1{\rm eV}$. The left and right panels show the two occupancies on two timescales. The left panel on the scale of the desorption time $\lambda_0^{-1}=\tau_e$ (vertical dashed line in the left panels) and the right panel on the scale set by $\lambda_1^{-1}$ (vertical dashed line in the right panels).
The upper two panels are for $T_s=200{\rm K}$ and $T_D=2500{\rm K}$ whereas the lower two panels show results for $T_s=600{\rm K}$ and $T_D=2500{\rm K}$.[]{data-label="figure6"}](figure6.eps){width="\linewidth"} From the discussion of Figs. \[figure4\] and \[figure5\] we conclude that a pronounced relaxation bottleneck inhibiting thermalization can only occur for at least two-phonon deep potentials and sufficiently high surface temperatures. The question then arises on what timescale the relaxation bottleneck affects physisorption. To answer this question we analyze, respectively, the time evolution of the occupancy of the lowest level and the occupancy of the upper levels of the surface potential under the assumption that initially all bound states were empty and that for $t>0$ a stationary unit flux of a Boltzmannian electron fills the levels. Accordingly, the occupancy of the lowest state ($n=1$) and the upper states ($n\ge 2$) can be determined from Eq. (\[ntotal\]) setting $j_k(t)=0$ for $t<0$ and $j_k(t)=j_k\sim k e^{-\beta_e E_k}$ for $t \geq 0$ with $\sum_k j_k = 1$. The results of this calculation are shown in Fig. \[figure6\] for low (upper two panels) and high (lower two panels) surface temperature. Clearly, for times of the order of the desorption time, $\tau_e=\lambda_0^{-1}$, indicated by the vertical dashed line in the left panels, the upper levels are basically empty, indicating that a thermalized electron desorbs; for $T_D=2500{\rm K}$ and $T_s=500{\rm K}$ the quasi-stationary occupancy deviates from the equilibrium occupancy by less than 3%. The upper levels are more populated than the lower one only for very short timescales, set by $\lambda_1^{-1}$, indicated by the vertical dashed line in the right panels. Since $\lambda_1^{-1}\ll \lambda_0^{-1}$, the relaxation bottleneck does not affect desorption, which still occurs from the equilibrium occupancy. It thus only affects the kinetic sticking coefficient, which is significantly smaller than the prompt one and is actually the one to be used to characterize polarization-induced trapping of an electron at a dielectric surface. The relaxation bottleneck is absent in neutral physisorption systems because the level spacing is small compared to the Debye energy. Prompt and kinetic sticking coefficients are thus almost identical, as has indeed been found for neon atoms physisorbing on a copper substrate. [@BGB93] ![Prompt (full line) and kinetic (dashed line) energy-averaged sticking coefficient for graphite ($T_D = 2500{\rm K} $) as a function of the mean energy of the electron and the surface temperature.[]{data-label="figure7"}](figure7.eps){width="\linewidth"} Figure \[figure7\] finally shows for graphite the energy-averaged prompt and kinetic sticking coefficients as a function of the mean energy of the incident electron and the surface temperature. As two-phonon processes contribute little to the initial trapping of the electron, their most important role is to control relaxation to the lowest bound state. In agreement with the foregoing discussion, the kinetic sticking coefficient therefore diminishes for higher surface temperatures, whereas the prompt sticking coefficient is less sensitive to the surface temperature. From Fig. \[figure7\] it can also be seen that even the prompt sticking coefficient for graphite is at most of the order of $10^{-3}$, the order we also found in our investigation of electron sticking at metallic surfaces. [@BDF09]
It is two orders of magnitude smaller than the value obtained from a semiclassical estimate [@UN80], whose range of applicability is, however, hard to assess. We expect it, at best, to be applicable to very low mean electron energies, below $0.0026 {\rm eV}$, and rather high electron binding energies, larger than $1 {\rm eV}$. [@BDF09]

Conclusions
===========

As a preparatory step towards a microscopic understanding of the build-up of surface charges at dielectric plasma boundaries, we investigated phonon-mediated temporary trapping of an electron on a dielectric surface. In our simple model for the polarization-induced interaction of the electron and the dielectric surface, the adsorbed electron occupies the bound states of a recoil-corrected image potential. Electron energy relaxation, responsible for transitions between the image states that lead to adsorption and eventually to desorption, is due to the coupling to an acoustic bulk phonon. Dielectrics typically used as plasma boundaries are graphite, silicon dioxide, aluminium oxide, and bismuth silicon oxide. They all have large energy gaps blocking internal electronic degrees of freedom and small Debye energies compared to the energy difference of at least the lowest two bound surface states. Electron physisorption at these boundaries is thus driven by multi-phonon processes. As in I, we presented results for a two-phonon deep surface potential, as is applicable to graphite, where the energy difference between the lowest two bound states is between one and two Debye energies. Classifying two-phonon processes by the energy difference they can bridge, we included two-phonon transition probabilities only for transitions not already triggered by one-phonon processes. Besides the Debye temperature, which we varied to realize different potential depths, the material parameters used in the numerical calculations are the ones for graphite. Similar to physisorption of a neutral particle, sticking and desorption of an electron can be subdivided into three characteristic stages. At first, the electron is trapped in one of the upper bound states of the surface potential. Then the bound state occupancy relaxes to a quasi-stationary occupancy. Finally, over the timescale set by the desorption time, the electron desorbs. In order to account for both initial trapping and subsequent relaxation we employed a quantum-kinetic rate equation for the occupancy of the image states. Apart from calculating the energy-resolved and energy-averaged prompt and kinetic electron sticking coefficients, which typically turn out to be of the order of $10^{-3}$, we also investigated the relative importance of one- and two-phonon processes for the two stages of the sticking process. The initial trapping is almost entirely due to one-phonon transitions from the continuum; two-phonon processes from higher-lying continuum states contribute very little. The relaxation of the bound state occupancy after the initial trapping depends strongly on the ratio of the probabilities for downwards transitions to the lowest state and upwards transitions to the continuum. For graphite, with its two-phonon deep surface potential, the upper bound states are linked to the lowest bound state only by a two-phonon process. The trapped electron has thus only a slim chance to drop to the lowest bound state, particularly at high surface temperatures, which favor transitions back to the continuum.
The decreased accessibility of the lowest surface state leads to a significant reduction of the kinetic sticking coefficient compared to the prompt sticking coefficient. For the other dielectrics typically used as plasma boundaries, silicon dioxide, aluminium oxide, and bismuth silicon oxide, the surface potentials are much deeper because the Debye energy for these materials is very small. Hence, more than two phonons are required to link the upper image states to the lowest one, the accessibility of the lowest image state is thus even more suppressed, and the kinetic sticking coefficient should be accordingly small.

[*Acknowledgments.*]{} This work was supported by the Deutsche Forschungsgemeinschaft through the transregional collaborative research center TRR 24. F.X.B. and H.F. acknowledge discussions with H. Deutsch in the early stages of this investigation.
--- abstract: 'We study the Holstein model of spinless fermions, which at half-filling exhibits a quantum phase transition from a metallic Tomonaga-Luttinger liquid phase to an insulating charge-density-wave (CDW) phase at a critical electron-phonon coupling strength. In our work, we focus on the real-time evolution starting from two different types of initial states that are CDW ordered: (i) ideal CDW states with and without additional phonons in the system and (ii) correlated ground states in the CDW phase. We identify the mechanism for CDW melting in the ensuing real-time dynamics and show that it strongly depends on the type of initial state. We focus on the far-from-equilibrium regime and emphasize the role of electron-phonon coupling rather than dominant electronic correlations, thus complementing a previous study of photo-induced CDW melting \[H. Hashimoto and S. Ishihara, Phys. Rev. B [**96**]{}, 035154 (2017)\]. The numerical simulations are performed by means of matrix-product-state based methods with a local basis optimization (LBO). Within these techniques, one rotates the local (bosonic) Hilbert spaces adaptively into an optimized basis that can then be truncated while still maintaining a high precision. In this work, we extend the time-evolving block decimation (TEBD) algorithm with LBO, previously applied to single-polaron dynamics, to a half-filled system. We demonstrate that in some parameter regimes, a conventional TEBD method without LBO would fail. Furthermore, we introduce and use a ground-state density-matrix renormalization group method for electron-phonon systems using local basis optimization. In our examples, we account for up to $M_{\rm ph} = 40$ bare phonons per site by working with $\mathcal{O}(10)$ optimal phonon modes.' author: - Jan Stolpp - Jacek Herbrych - Florian Dorfner - Elbio Dagotto - 'Fabian Heidrich-Meisner' bibliography: - 'biblio-holstein-cdw.bib' title: 'Charge-density-wave melting in the one-dimensional Holstein model' ---

\[sec:intro\]Introduction
=========================

Pump-probe experiments have become a popular setup to study ultrafast dynamics in solids (in, e.g., [@Schmitt-Science-2008; @Tomeljak-PRL-2009; @Ehrke-PRL-2011; @Okamoto-PRB-2011; @Stojchevska-Science-2014; @Hu-NatMat-2014; @Dal-Conte-NatPhys-2015; @Giannetti-AiP-2016; @Vogelsang-NatPhys-2018; @Ligges-PRL-2018; @Storeck-arxiv-2019]). In these experiments, photoinduced phase transitions between metallic and insulating states [@Okamoto-PRB-2011], melting of CDW or antiferromagnetic order [@Schmitt-Science-2008; @Tomeljak-PRL-2009; @Ehrke-PRL-2011], or access to metastable states [@Stojchevska-Science-2014] were investigated. A prominent example is given by the observations of Ref.  that were interpreted as photo-induced enhanced superconductivity. In the interpretation of experiments on ultrafast dynamics, the whole system is often treated as a collection of coupled subsystems [@Giannetti-AiP-2016]. These include the electronic subsystem, lattice degrees of freedom (phonons) and possibly spin degrees of freedom. In the experiments, electrons are first optically excited into empty states and then their relaxation dynamics is monitored. Relaxation can occur via electronic interactions or via a coupling to bosons, i.e., either phonons or spin excitations. Theoretical support is needed to understand the time scales and the bottlenecks for relaxation, and to determine which bosonic excitations are relevant.
In general, it is unclear whether the subsystems first relax and thermalize separately before reaching global equilibrium or whether all degrees of freedom are out-of-equilibrium throughout the transient dynamics. Moreover, the strength of phonon mediated interactions could be affected in the transient dynamics [@Murakamu2017a; @Kennes-Nature-2017]. Thus, a major task for theory is to understand such questions in simplified yet paradigmatic models. Many studies focussed solely on electronic degrees of freedom (see, e.g., [@Moritz-CPC-2011; @Werner-PRB-2012; @Eckstein-PRL-2013; @Lu-PRB-2015; @Golez-PRB-2016; @Kohler2018; @Paeckel2019; @Golez2019]), yet from the above it is clear that phonons need to be modelled as well [@Werner-EPL-2015; @Kemper-AnPhys-2017; @Murakami2017b]. The Holstein model of spinless fermions in one dimension is a prototypical model to study electron-phonon coupled systems. It hosts a variety of different phenomena driven by the electron-phonon coupling, especially polaron formation and a phase transition between a metallic and a charge-density-wave phase [@Barisic-EPJB-2008; @Hirsch-PRB-1983]. The rich phenomena present in the Holstein model and, in particular, its nonequilibrium dynamics are still actively discussed. Studies of the latter in electron-phonon coupled systems are often restricted to single electrons (Holstein-polaron problem) [@Vidmar-PRB-2011; @Fehske-PRB-2011; @Golez-PRL-2012; @Sayyad-PRB-2015; @Dorfner-PRB-2015; @Kloss-PRL-2019]. However, having more than one electron in the system can lead to interesting collective phenomena already in equilibrium. One of the most prominent examples is the Peierls instability leading to an insulating charge-density-wave ordered state in a half-filled electron band coupled to phonons. Despite the challenges, efforts were made to study the real-time dynamics in the Holstein model at half filling [@Filipps-PRL-2012; @Matsueda-JPSJ-2012; @Hohenadler-PRB-2013; @Werner-EPL-2015; @Wall-PRA-2016; @Hashimoto-PRB-2017]. ![\[fig:holstein\_sketch\] (a) Sketch of the different terms in the Holstein model Eq. . The fermions can hop from site to site with an amplitude $t_0$. If a fermion is on a particular site it can create or destroy phonon excitations at that site with a coupling strength $\gamma$. Every phononic excitation costs an energy $\omega_0$. (b) Sketch of the phase diagram of the half-filled Holstein model [@Bursill-PRL-1998; @Creffield-EPJB-2005]. As the electron-phonon coupling $\gamma$ increases there is a phase transition from a metallic Tomonaga-Luttinger liquid phase (TLL) to an insulating charge-density-wave phase (CDW). The arrows represent the different quenches that we will investigate in Sec. \[sec:quench\], i.e., frequency and coupling quench (FQ and CQ, respectively).](drawing_v1){width="\columnwidth"} Perturbative approaches can give reliable results in the vicinity of the atomic limit, where the bandwidth of the electrons is much smaller than all other energy scales in the system [@Hirsch-PRB-1983] and also in the limit of small phonon energies [@Caron-PRB-1984]. For the intermediate regime, the so-called momentum average approach developed by Berciu and collaborators [@Berciu-PRL-2006; @Goodvin-PRB-2006; @Barisic-PRL-2007; @Berciu-PRL-2007; @Berciu-PRB-2007; @Goodvin-PRL-2011] is argued to provide reliable analytical results for the Holstein-polaron problem in equilibrium. 
A variety of different quantum Monte Carlo methods have been developed to investigate the Holstein model [@Hirsch-PRB-1983; @McKenzie-PRB-1996; @Kornilovitch-PRL-1998; @Hohenadler-PRB-2004; @Creffield-EPJB-2005; @Goodvin-PRL-2011; @Ohgoe-PRB-2014; @Weber-PRB-2016]. For wave-function based methods, such as exact diagonalization (ED) or the density-matrix renormalization group (DMRG), electron-phonon systems are computationally very demanding. These methods require that the local Hilbert space has a finite dimension which is not the case for electron-phonon coupled systems. The bosonic nature of the phonons and the fact that their number is not conserved makes the Hilbert space infinite-dimensional, irrespective of the system size. Therefore, one has to introduce an [*ad-hoc*]{} cutoff that limits the number of phonons per site. This cutoff has to be chosen in such way that it does not affect the physics of the system and the quantitative reliability of the results. Depending on the task at hand, this can render the problem unfeasible or at least very hard for wave-function based methods. Several strategies were suggested to overcome the problem of large local Hilbert spaces [@Jeckelmann-RNC-2007]. In the context of DMRG [@Schollwoeck-RMP-2005], one can map the Holstein model to a lattice including pseudo sites for the phononic degrees of freedom where every pseudo site can host one phonon excitation [@Jeckelmann-PRB-1999; @Jeckelmann-PRB-1998]. As a result, the local Hilbert-space dimension is reduced, however, one introduces long-range hopping into the system. Weiße and Fehske [@Weisse-PRB-1998] used an inhomogeneous modified variational Lang-Firsov transformation to obtain an effective Hamiltonian including variational parameters that can be solved in a self consistency loop including a Lanczos diagonalization [@Fehske-PRB-1995]. In other approaches, one chooses basis states in such a way that the Hilbert space is not too big but still the essential physics is captured. For instance, Bonča [*et al.*]{} [@Bonca-PRB-1999] introduced diagonalization in a limited functional space. In this approach, a set of dynamically important basis states is constructed by repeatedly applying parts of the Hamiltonian to an initial state. This method takes advantage of the spatial correlations of electrons and phonons and is therefore especially well suited for studying single electrons on a periodic or infinite lattice [@Bonca-PRL-2000; @Li-PRB-2010; @Vidmar-PRB-2010; @Vidmar-PRL-2011; @Vidmar-PRB-2011; @Golez-PRL-2012; @Golez-PRB-2012; @Dorfner-PRB-2015]. In this work, we will use an approach called local basis optimization (LBO) introduced by Zhang [*et al.*]{} [@Zhang-PRL-1998]. This approach is very flexible since it adaptively chooses the most important local basis states (called optimal modes) during the simulation by diagonalization of the single-site reduced density matrix. The ideas of Zhang [*et al.*]{} [@Zhang-PRL-1998] were first used in combination with exact-diagonalization techniques [@Zhang-PRL-1998; @Zhang-PRB-1999; @Nishiyama-EPJB-1999; @Weisse-PRB-2000; @Zhao-PRB-2005] and also with DMRG in its original formulation [@Bursill-PRB-1999; @Friedman-PRB-2000; @Friedman-JoPCM-2002; @Bursill-PRB-2002; @Barford-PRB-2002; @Barford-PRB-2006; @Wong-PRB-2008; @Tozer-PRB-2014]. Here, we will combine LBO with a time-dependent DMRG algorithm as well as with a ground-state DMRG algorithm in the matrix-product-state (MPS) formulation. 
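To make the LBO idea concrete, a minimal numpy sketch of the elementary optimization step is given below; it is illustrative only (a random site tensor, no symmetry blocks) and is not the production code used for the simulations in this paper. It takes a single MPS site tensor, builds the single-site reduced density matrix, and keeps the $d_o$ eigenvectors with the largest weight as the optimal modes.

```python
import numpy as np

def local_basis_optimization(M, d_o):
    """Elementary LBO step for one MPS site tensor M with index order
    (sigma, a_left, a_right): diagonalize the single-site reduced density
    matrix and rotate/truncate the physical index to the d_o most important
    optimal modes.  Assumes the rest of the chain is in canonical form, so
    that rho[s, s'] = sum_{a,b} M[s, a, b] * conj(M[s', a, b])."""
    rho = np.tensordot(M, M.conj(), axes=([1, 2], [1, 2]))   # (d, d) density matrix
    w, U = np.linalg.eigh(rho)                               # eigenvalues ascending
    keep = np.argsort(w)[::-1][:d_o]                         # d_o largest weights
    R = U[:, keep].conj().T                                  # (d_o, d) truncated rotation
    M_opt = np.tensordot(R.conj(), M, axes=(1, 0))           # site tensor in optimal basis
    truncated_weight = 1.0 - w[keep].sum() / w.sum()
    return M_opt, R, truncated_weight

# Example: random site tensor with bare local dimension d = 20, kept modes d_o = 4.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 8, 8))
M /= np.linalg.norm(M)
M_opt, R, eps = local_basis_optimization(M, d_o=4)
print(M_opt.shape, eps)
```

In the actual algorithms described below, this rotation and truncation is embedded into the DMRG sweeps and into the time-evolution steps.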
In these DMRG implementations, we choose the optimized basis in an unbiased way and fully adaptive to system size, system parameters and boundary conditions. The time-dependent version is based on the work by Brockt [*et al.*]{} [@Brockt-PRB-2015] to simulate the real-time evolution in the Holstein-polaron problem (see also Ref. [@Schroeder-PRB-2016; @Brockt-PRB-2017]). In this work, we extend this algorithm to the Holstein model at half filling. Our ground-state DMRG method combines the algorithm implemented by Guo [*et al.*]{} [@Guo-PRL-2012] for spin-boson models (see also [@Bruognolo-PRB-2014; @Blunden-Codd-PRB-2017; @Bruognolo-PRB-2017]) with the subspace expansion introduced by Hubig [*et al.*]{} [@Hubig-PRB-2015]. The algorithm can be applied to arbitrary one-dimensional electron- or spin-phonon problems with local electron- or spin-phonon coupling. Here, we use this algorithm to study the half-filled Holstein model. In the first setup, we prepare the system in a product state where every second site is occupied by an electron and no phonons are present in the initial state. We then perform a real-time evolution of this state for different parameter sets. As we increase the coupling to the phonons we observe a transition from dynamics that is dominated by the electron hopping to dynamics that is strongly influenced by the coupling to the phonons. This includes a temporal self trapping of the electrons for large electron-phonon coupling. In the second setup, we prepare the system in a product state of small onsite polarons that form the CDW. In this case, the real-time evolution can be understood by considering the renormalized hopping-matrix elements of the quasiparticles. As a consequence, the dynamics at strong coupling is so slow that the initial state hardly changes over our accessible simulation times. In the last setup, we prepare the system in the ground state of the CDW phase and then we perform quenches to the metallic phase. We observe that the short-time dynamics is dominated by the phonons when we decrease the coupling between electrons and phonons only. However, if we decrease the phonon frequency compared to the electron bandwidth the short-time dynamics is dominated by the electron hopping, while the phonons respond very slowly to the quench. The melting of charge-density-wave states in a one-dimensional electron-phonon coupled system was previously studied by Hashimoto and Ishihara [@Hashimoto-PRB-2017] using time-dependent DMRG simulations with a fixed cut-off in the local phonon number basis of $M_{\text{ph}} \leq 8$. They study a Holstein model with an electronic interaction of the form $H_{\text{int}} = V\sum_l n_l n_{l+1} $ ($n_l=c^\dagger_l c^{\vphantom{\dagger}}_l$, $c^{\vphantom{\dagger}}_l$: fermionic annihilation operator at site $l$) and drive the system out of equilibrium by applying a pulse. Starting from the uncoupled limit of a vanishing electron-phonon coupling \[$\gamma=0$, cf. Fig. \[fig:holstein\_sketch\](a)\], they demonstrate that the CDW order parameter decays exponentially for $V>t_0$ \[where $t_0$ is the electron hopping parameter, cf. Fig. \[fig:holstein\_sketch\](a)\]. Turning on electron-phonon interactions causes a slower decay due to the formation of polarons and thus a mass renormalization of the electrons. The excess energies pumped into the system that were considered in [@Hashimoto-PRB-2017] are of the order of $\Delta E \lesssim 0.1 t_0 N$ above the ground-state energy, where $N$ is the number of fermions in the system. 
In our work, we consider different initial states and we deliberately work in the regime of large quench energies $0.1 t_0 N \lesssim \Delta E\lesssim 8t_0 N$ to exemplify the capabilities of our local basis-approximation method. The paper is organized as follows. In Sec. \[sec:model\], we will revisit the Holstein model and its phase diagram at half filling. In Sec. \[sec:num\_methods\], we describe the different numerical methods used throughout this paper. In Sec. \[sec:nonequ\], we present the results of our numerical simulations and in Sec. \[sec:summary\], we give a summary. \[sec:model\]Holstein model of spinless fermions ================================================ The Holstein model [@Holstein-I-AoP-1959; @Holstein-II-AoP-1959] of spinless fermions describes a spin-polarized gas of electrons that locally couples to harmonic oscillators via the density of the electrons. The harmonic oscillators model dispersionless phonons. The Hamiltonian on a one-dimensional (1D) lattice can be written as: $$\begin{aligned} H_{\rm Hol} = H_{\rm kin} + H_{\rm ph} + H_{\rm el-ph}\,, \label{eq:hol-ham}\end{aligned}$$ where $H_{\rm kin}$ is the electron kinetic energy, i.e.: $$\begin{aligned} H_{\rm kin} = -t_0 \sum_{l=1}^{L-1} (c_l^\dagger c_{l+1}^{\vphantom{\dagger}} + h.c.)\,.\end{aligned}$$ Here the $c_l^{\vphantom{\dagger}}$ \[$c_l^\dagger$\] are annihilation \[creation\] operators for spinless fermions and $t_0$ is the hopping parameter. $H_{\rm ph}$ is the purely phononic part defined as: $$\begin{aligned} H_{\rm ph} = \omega_0 \sum_{l=1}^L b_l^\dagger b_l^{\vphantom{\dagger}}\,,\end{aligned}$$ where $b_l^{\vphantom{\dagger}}$ \[$b_l^\dagger$\] are bosonic annihilation \[creation\] operators for phonons and $\omega_0$ is the phonon frequency. $H_{\rm el-ph}$ is the electron-phonon coupling part: $$\begin{aligned} H_{\rm el-ph} = -\gamma \sum_{l=1}^L n_l^{\vphantom{\dagger}} (b_l^\dagger + b_l^{\vphantom{\dagger}})\,, \label{eq:ham-el-ph}\end{aligned}$$ where $n_l^{\vphantom{\dagger}} = c_l^\dagger c_{l}^{\vphantom{\dagger}}$ is the on-site density of the electrons and $\gamma$ is the electron-phonon coupling strength. The different parts of the Holstein Hamiltonian Eq.  are sketched in Fig. \[fig:holstein\_sketch\](a). The total number of fermions $N = \sum_{l=1}^L \langle n_l^{\vphantom{\dagger}}\rangle$ is conserved in the system while the number of phonons is not, as is evident from Eq. . Throughout this paper, we express energies and times in units of the hopping parameter $t_0$ and $1/t_0$, respectively. Open boundary conditions are used within our numerical simulations. In Fig. \[fig:holstein\_sketch\](b), we sketch the ground-state phase diagram of the half-filled Holstein model that was obtained by a combination of perturbative approaches, quantum Monte-Carlo and DMRG methods [@Creffield-EPJB-2005; @Bursill-PRL-1998; @McKenzie-PRB-1996; @Hirsch-PRB-1983]. For small values of the coupling parameter $\gamma/\omega_0 \ll 1$, the system is in a (metallic) Tomonaga-Luttinger liquid phase (TLL) while for increasing coupling strength $\gamma/\omega_0$, there is a phase transition to a charge-density-wave phase (CDW) for all values of the hopping parameter $t_0>0$. 
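For orientation, the Hamiltonian Eq. (\[eq:hol-ham\]) with a hard phonon cutoff $M_{\rm ph}$ can be diagonalized directly for very small chains; the following sparse-ED sketch in Python is only meant to illustrate the model and the role of the cutoff, and has nothing to do with the DMRG3S+LBO and TEBD+LBO machinery used in this paper. The parameters chosen here are arbitrary; the same construction works for small systems in the TLL and in the CDW regime.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative small-chain ED of the Holstein Hamiltonian with a hard phonon
# cutoff M_ph; energies in units of t_0.  Not the method used in this work.
L, t0, omega0, gamma, M_ph = 4, 1.0, 1.0, 1.0, 4

d_f, d_p = 2, M_ph + 1
sm = sp.csr_matrix(np.array([[0.0, 1.0], [0.0, 0.0]]))        # fermion annihilation
nf = sm.T @ sm                                                # fermion density
b = sp.diags(np.sqrt(np.arange(1, d_p)), 1, format="csr")     # phonon annihilation
nb = b.T @ b
id_f, id_p = sp.identity(d_f), sp.identity(d_p)

def site_op(op_f, op_p, site):
    """Embed a (fermion x phonon) operator acting on one site into the chain."""
    full = None
    for l in range(L):
        local = sp.kron(op_f if l == site else id_f,
                        op_p if l == site else id_p)
        full = local if full is None else sp.kron(full, local)
    return full.tocsr()

H = sp.csr_matrix(((d_f * d_p) ** L,) * 2)
for l in range(L):
    H += omega0 * site_op(id_f, nb, l)                        # phonon energy
    H -= gamma * site_op(nf, b + b.T, l)                      # electron-phonon coupling
    if l < L - 1:                                             # nearest-neighbor hopping; no
        hop = site_op(sm.T, id_p, l) @ site_op(sm, id_p, l + 1)   # Jordan-Wigner string is
        H -= t0 * (hop + hop.T)                                   # needed for a 1D open chain

# Project onto half filling, N = L // 2 (fermion number is conserved).
N_diag = sum(site_op(nf, id_p, l).diagonal() for l in range(L))
idx = np.where(np.isclose(N_diag, L // 2))[0]
H_half = H[idx][:, idx]

E0, psi0 = eigsh(H_half, k=1, which="SA")
nph_diag = sum(site_op(id_f, nb, l).diagonal() for l in range(L))
nph_avg = float(np.abs(psi0[:, 0]) ** 2 @ nph_diag[idx]) / L
print("ground-state energy per site:", E0[0] / L)
print("mean phonon number per site:", nph_avg, "(compare with the cutoff M_ph =", M_ph, ")")
```

The printed mean phonon number per site gives a rough check of whether the chosen cutoff $M_{\rm ph}$ is adequate, which becomes increasingly costly to satisfy at strong coupling and motivates the optimized local bases discussed in the next section.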
The order parameter in the latter can be defined as the staggered density of the fermions in the system: $$\begin{aligned} \mathcal{O}_{\rm CDW} = \frac{1}{N} \sum_{l=1}^L (-1)^l \langle n_l \rangle \,.\end{aligned}$$ In the metallic TLL phase, the density is homogenous $\langle n_l \rangle = 0.5 = \mathrm{const.}$ and therefore, the order parameter vanishes. On the other hand, $\mathcal{O}_{\rm CDW} \neq 0$ indicates the onset of the CDW phase, with a maximum value of $\mathcal{O}_{\rm CDW} = \pm 1$ in the limit $\gamma/t_0 \rightarrow \infty$. This is strictly true in the thermodynamic limit, yet we will break the symmetry here by the choice of initial conditions or system size and boundary conditions. A subtlety that emerges from using small odd system sizes is that the order parameter $\mathcal{O}_{\rm CDW}$ can be zero although the density is not completely uniform. This arises because there is one more odd site than there are even sites. However, this should not be concerning. Consider free spinless fermions on a lattice with odd system size $L$ and open boundary conditions. The number of fermions is $N = (L-1)/2$. Then, in the ground state, the $N$ lowest single-particle eigenstates are occupied which also leads to a density profile that is not flat but has exactly $\mathcal{O}_{\rm CDW} = 0$. This effect becomes less pronounced as the system size is increased. In the atomic limit $t_0 = 0$, the Holstein model can be diagonalized by performing a Lang-Firsov transformation [@Lang-SP-1963]. In the ground state, fermions are localized at single sites and are accompanied by coherent states of phonons. All other sites that do not contain a fermion are free of any phonon. For every fermion in the system, one gets a binding energy: $$\begin{aligned} \epsilon_b = \frac{\gamma^2}{\omega_0} \,,\end{aligned}$$ and the ground-state energy is therefore $E_0 = - N \epsilon_b$. The ground state in this limit is highly degenerate since one can distribute the fermions arbitrarily in the system. It takes the form of a product state: $$\begin{aligned} | \psi_0 \rangle \propto \left[ \prod_{l \in \{l_{\rm occ}\}} c_{l}^\dagger \ e^{\frac{\gamma}{\omega_0}b_{l}^\dagger} \right] | \emptyset \rangle _{\rm el} | \emptyset \rangle_{\rm ph}\,, \label{eq:gs-lang-firsov}\end{aligned}$$ where $| \emptyset \rangle _{\rm el[ph]}$ is the vacuum state of the electrons \[phonons\] and $\{ l_{\rm occ}\}$ is the set of sites that are occupied. Close to the atomic limit $t_0 \ll \gamma, \omega_0$ one can understand the phase transition from second-order perturbation theory [@Hirsch-PRB-1983]. One obtains an effective polaron hopping-matrix element: $$\begin{aligned} \tilde{t}_0 = t_0 e^{-\gamma^2/\omega_0^2} \label{eq:teff}\end{aligned}$$ and an effective nearest-neighbour repulsion: $$\begin{aligned} \tilde{V} = 2 \frac{\tilde{t}_0^2}{\omega_0} \int_0^{\frac{\gamma^2}{2\omega_0^2}} dg\, \frac{e^{4g}-1}{g}\,.\end{aligned}$$ The effective model can then be mapped to the spin-1/2 XXZ Hamiltonian and the phase transition at the isotropic Heisenberg point is reached at $\tilde{V}_c/2\tilde{t}_0 = 1$ [@Hirsch-PRB-1983]. Numerical methods {#sec:num_methods} ================= Ground-state DMRG with local basis optimization {#sec:dmrg+lbo} ----------------------------------------------- To calculate ground states of the half-filled Holstein model we use a single-site DMRG algorithm and combine this with a local basis optimization (LBO) [@Zhang-PRL-1998]. 
In the first efforts to combine LBO with DMRG, the optimal modes were computed from small systems using exact diagonalization and then fed into larger systems (see, e.g., [@Bursill-PRB-1999; @Friedman-PRB-2000; @Bursill-PRB-2002]) or the optimal modes were computed from units larger than a single site (see, e.g., [@Wong-PRB-2008]). The algorithm presented in [@Barford-PRB-2002] uses the original DMRG formulation [@White-PRL-1992] and is the closest to our implementation and the one of [@Guo-PRL-2012], yet uses different environment-block DMRG basis dimensions depending on whether optimal-phonon mode optimizations takes place or not. The algorithm used in this work is an adaptation of the method described in [@Guo-PRL-2012] to electron-phonon systems combined with the subspace-expansion method (DMRG3S) [@Hubig-PRB-2015]. Therefore, we use the abbreviation DMRG3S+LBO when referring to the method used in this work. Consider a pure quantum state of a lattice system $\left| \psi \right\rangle$ that can be expanded in a product-state basis of $d$-dimensional local Hilbert spaces. We start out by writing this state as a matrix-product state (MPS) in the standard fashion following Ref. : $$\begin{aligned} \left| \psi \right\rangle = \sum_{\{ \sigma_l \}} a_{ \sigma_1 ... \sigma_L} \left| \sigma_1 ... \sigma_L \right\rangle = \sum_{\{ \sigma_l \}} M^{\sigma_1} ... M^{\sigma_L} \left| \sigma_1 ... \sigma_L \right\rangle\,,\end{aligned}$$ where the $\sigma_l$ label the state in the local Hilbert space and $M^{\sigma_l}$ are matrices such that the matrix product yields $M^{\sigma_1} ... M^{\sigma_L} = a_{ \sigma_1 ... \sigma_L}$(actually, the first matrix $M^{\sigma_1}$ and the last matrix $M^{\sigma_L}$ have to be a row and column vector, respectively, for the matrix product to yield a scalar). The sum runs over all possible values of $\sigma_1, ..., \sigma_L$. The full many-body Hilbert space has dimension $\mbox{dim}(\mathcal{H} ) = d^L$. In principle, the dimension of the matrices $M^{\sigma_l}$ - the so-called bond dimension - also grows exponentially with the system size $L$ except at the edges of the system. The success of MPS-based methods relies on the fact that ground states of short-range Hamiltonians in one dimension that have a gap to the excitation spectrum can be efficiently represented with matrices of a limited dimension that does not depend on the system size $L$ [@Laflorencie-PR-2016; @Schollwoeck-AoP-2011; @Eisert-RMP-2010; @Hastings-JSM-2007]. This can be understood in the following way: divide the system into two parts and consider the reduced density matrix of one of these subsystems. If the spectrum of the reduced density matrix of the subsystems falls off fast enough, the state can be efficiently and accurately represented by considering just a limited part of the states in either one of the subsystems. The area law of entanglement for the ground state of gapped short range Hamiltonians in one dimension ensures a fast algebraic decay of the spectrum [@Hastings-JSM-2007]. Therefore, it is enough to consider a finite dimension of the matrices $M^{\sigma_l}$ [@Schollwoeck-AoP-2011]. Following Ref. , we now consider a special bipartition where we only look at one site. The local reduced density matrix at site $l$ is given by: $$\begin{aligned} \rho_l = \underset{\underset{m \neq l}{\sigma_m}}{\rm tr}(\left| \psi \right> \left< \psi \right|)\,,\end{aligned}$$ where the trace runs over all local degrees of freedom $\sigma_m$ that are not on site $l$. 
Diagonalizing this local density matrix we obtain: $$\begin{aligned} \rho_l = U_l \Lambda_l U^\dagger_l\,,\end{aligned}$$ where $\Lambda_l$ is a diagonal matrix with the eigenvalues of the local density matrix on the diagonal and $U_l$ is a local basis transformation from the original basis (in practice, this will most often be an occupation number basis) to the eigenbasis of the local reduced density matrix. If the spectrum of the local reduced density matrix falls off fast enough it is advisable to rotate the original $M^{\sigma_l}$ of our MPS into the new $\tilde\sigma_l$ eigenbasis of the local reduced density matrix. It is then sufficient to only keep that part of the eigenbasis with the largest eigenvalues of the local reduced density matrix without loosing much of the information of the state [@Zhang-PRL-1998]. Therefore, we introduce a truncated basis transformation $R^{ \tilde\sigma_l \sigma_l}$ that has dimension $d_o \times d$ where $d_o < d$. $R$ is identical to $U^\dagger$ just that we got rid of the $d-d_o$ rows of the matrix that correspond to the smallest eigenvalues of $\rho_l$. We then write the MPS as: $$\begin{aligned} | \tilde\psi \rangle = \sum_{\{ \sigma_l \}} (\tilde M^{\tilde\sigma_1}R^{\tilde\sigma_1 \sigma_1}) ... (\tilde M^{\tilde\sigma_L}R^{\tilde\sigma_L \sigma_L}) \left| \sigma_1 ... \sigma_L \right\rangle\,,\end{aligned}$$ where: $$\begin{aligned} \tilde M^{\tilde\sigma_l} = M^{\sigma_l} R^{\dagger \sigma_l \tilde\sigma_l}\,.\end{aligned}$$ The rotation into an optimized local basis is motivated by the observation that the Holstein model Eq.  can be diagonalized in the atomic limit $t_0 = 0$ via a Lang-Firsov transformation as discussed in Sec. \[sec:model\]. The ground state of the model will then take the form Eq.  where the sites that are occupied by an electron also contain a coherent state of phonons. In order to represent this state accurately, large phonon occupations need to be accounted for such that the Hilbert space in the phonon occupation basis has to have a large dimension. On the other hand, in the Lang-Firsov basis, a two-dimensional local Hilbert space is enough to represent the ground state: one state for a site occupied by an electron and one for an empty site. That is, in the atomic limit keeping only one state per fermion occupation sector is sufficient to represent the ground state exactly. Away from the atomic limit, keeping only $d_o \ll d$ states is still sufficient to accurately represent the ground state [@Zhang-PRL-1998; @Zhang-PRB-1999] as we will see in the following. In fact, Zhang [*et al.*]{} found numerical evidence that the spectrum of the local reduced density matrix falls off exponentially in ground-states [@Dorfner-PRA-2016; @Zhang-PRL-1998], which seems to hold also in time-evolved states of the Holstein polaron model [@Dorfner-PRB-2015]. Diagonalizing the local density matrix automatically finds the optimal basis to represent the state. Manipulations on the MPS matrices $M^{\tilde\sigma_l}$ that we have to do during the DMRG sweeps become cheaper because of the reduced dimensionality when using the optimized local basis. We stress that the outlined ansatz finds the optimized basis at every site adapted to the system parameters, boundary conditions and also time during a time evolution. ![\[fig:sketch\_dmrg\_lbo\] Different steps of the DMRG3S+LBO update. (a) Shift of the focus to the basis transformation tensor and optimization. (b) Shift of the focus back to the site tensor and truncation. 
(c) Transformation of the local part of the Hamiltonian MPO into the optimized basis (see also [@Guo-PRL-2012])](dmrg_lbo){width="\columnwidth"} We will now explain the basic steps of our algorithm. We consider an MPS in mixed canonical form where the MPS matrices are transformed into an optimal basis: $$\begin{aligned} \left| \psi \right\rangle = \sum_{\{ \sigma_l \}} \tilde A^{\tilde\sigma_1}_{a_0 a_1} R^{\tilde\sigma_1 \sigma_1} ... \tilde A^{\tilde\sigma_{i-1}}_{a_{i-2} a_{i-1}} R^{\tilde\sigma_{i-1} \sigma_{i-1}} \tilde M^{\tilde\sigma_{i}}_{a_{i-1} a_{i}} R^{\tilde\sigma_{i} \sigma_{i}} \nonumber \\ \times \tilde B^{\tilde\sigma_{i+1}}_{a_{i} a_{i+1}} R^{\tilde\sigma_{i+1} \sigma_{i+1}} ... \tilde B^{\tilde\sigma_{L}}_{a_{L-1} a_{L}} R^{\tilde\sigma_{L} \sigma_{L}} \left| \sigma_1 ... \sigma_L \right\rangle \,.\label{eq:mps-lbo}\end{aligned}$$ Here and for the rest of the section a summation over all indices that appear twice is implied. The indices $a_0$ and $a_{L}$ are fixed dummy indices to standardize notation. The $\tilde A^{\tilde \sigma_l}_{a_{l-1} a_l}$ and $\tilde B^{\tilde \sigma_l}_{a_{l-1} a_l}$ are left- and right-normalized MPS tensors, respectively: $$\begin{aligned} \tilde A^{\dagger \tilde\sigma_l}_{a_l a_{l-1}} \tilde A^{\tilde\sigma_l}_{a_{l-1} a^\prime_l} &= \delta_{a_l a^\prime_l} \\ \tilde B^{\tilde\sigma_l}_{a^\prime_{l-1} a_l} \tilde B^{\dagger \tilde\sigma_l}_{a_l a_{l-1}} &= \delta_{a^\prime_{l-1} a_{l-1}}\end{aligned}$$ such that the local reduced density matrix at site $i$ in the optimized basis can be written as: $$\begin{aligned} (\rho_i)^{\tilde\sigma^\prime_i \tilde\sigma_i} = \tilde M^{\tilde\sigma^\prime_i}_{a_{i-1} a_i} \tilde M^{\dagger \tilde\sigma_i}_{a_i a_{i-1}}.\end{aligned}$$ The first step is to shift the focus of the state which is currently on the $\tilde M^{\tilde\sigma_i}$ tensor to the basis transformation tensor $R^{\tilde\sigma_i \sigma_i}$ such that the local reduced density matrix can be written in terms of only the $R^{\tilde\sigma_i \sigma_i}$ tensor instead of the $\tilde M^{\tilde\sigma_i}$ tensor. The different tensor manipulations that are necessary are depicted in Fig. \[fig:sketch\_dmrg\_lbo\](a). We perform a singular value decomposition (SVD) of $\tilde M^{\tilde\sigma_i}$: $$\begin{aligned} \tilde M^{\tilde\sigma_i}_{a_{i-1} a_i} = X^\tau_{a_{i-1} a_i} \Lambda^{\tau \tau^\prime} Y^{\tau^\prime \tilde\sigma_i}.\end{aligned}$$ Now the local reduced density matrix can be written as: $$\begin{aligned} (\rho_i)^{\tilde\sigma^\prime_i \tilde\sigma_i} = \Lambda^{\tau \tau^\prime} Y^{\tau^\prime \tilde\sigma^\prime_i} Y^{\dagger \tilde\sigma_i \tau^{\prime\prime}} \Lambda^{\tau^{\prime\prime}\tau}.\end{aligned}$$ We then perform a DMRG optimization step on $\tilde R^{\tau \sigma_i} = \Lambda^{\tau \tau^\prime} Y^{\tau^\prime \tilde\sigma_i} R^{\tilde\sigma_{i} \sigma_{i}}$ using a Lanczos optimization scheme. This step optimizes the local basis for the current MPS. The next step is to shift the focus back to the local site tensor \[Fig. \[fig:sketch\_dmrg\_lbo\](b)\]. 
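The two manipulations introduced so far, rotating and truncating a site tensor into its optimal local basis and shifting the focus from the site tensor onto the basis-transformation tensor \[step (a)\], can be written compactly. The sketch below is a simplified illustration under our own conventions (dense tensors, no symmetry blocks, hypothetical function names) and is not the production code used in this work.

```python
import numpy as np

def lbo_truncate(M, d_o):
    """Rotate a site tensor M of shape (Dl, d, Dr) into the eigenbasis of its
    local reduced density matrix and keep only the d_o most important states.
    Returns the rotated tensor (Dl, d_o, Dr) and the truncated transformation
    R of shape (d_o, d) back to the original basis."""
    rho = np.einsum('asb,atb->st', M, M.conj())
    w, U = np.linalg.eigh(rho)                      # ascending eigenvalues
    keep = np.argsort(w)[::-1][:d_o]                # d_o largest weights
    R = U[:, keep].conj().T                         # truncated U^dagger
    M_opt = np.einsum('asb,os->aob', M, R.conj())   # site tensor in optimal basis
    return M_opt, R

def shift_focus_to_R(M_opt, R):
    """Step (a) of the update: SVD the site tensor, keep the left isometry X on
    the site and hand Lambda * Y * R (the new focus) to the local optimization."""
    Dl, d_o, Dr = M_opt.shape
    mat = M_opt.transpose(0, 2, 1).reshape(Dl * Dr, d_o)
    X, lam, Y = np.linalg.svd(mat, full_matrices=False)
    R_focus = (lam[:, None] * Y) @ R                # carries the norm of the state
    return X.reshape(Dl, Dr, -1), R_focus

# consistency check: the represented state is unchanged by the focus shift
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 12, 3))
M_opt, R = lbo_truncate(M, d_o=5)
X, R_focus = shift_focus_to_R(M_opt, R)
before = np.einsum('aob,os->asb', M_opt, R)
after = np.einsum('abk,ks->asb', X, R_focus)
print(np.allclose(before, after))                   # True
```

In the actual algorithm, the returned focus tensor is what enters the Lanczos optimization of the local basis before the focus is shifted back to the site tensor, cf. Fig. \[fig:sketch\_dmrg\_lbo\](a) and (b).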
We again perform an SVD, now on the optimized $\tilde R^{\tau \sigma_i}$ tensor and in the process truncate the new optimized basis to the desired size: $$\begin{aligned} \tilde R^{\tau \sigma_i} = \tilde X^{\tau \tilde\sigma^\prime_i} \tilde\Lambda^{\tilde\sigma^\prime_i \tilde\sigma_i} \tilde Y^{\tilde\sigma_i \sigma_i}.\end{aligned}$$ We set the $\tilde Y^{\tilde\sigma_i \sigma_i}$ as our new local basis transformation matrix and our new site tensor is: $$\begin{aligned} \tilde M^{\tilde\sigma_{i}}_{a_{i-1} a_{i}} = X^\tau_{a_{i-1} a_i} \tilde X^{\tau \tilde\sigma^\prime_i} \tilde\Lambda^{\tilde\sigma^\prime_i \tilde\sigma_i}.\end{aligned}$$ The third step is to perform a single-site DMRG optimization on the new $\tilde M^{\tilde\sigma_{i}}_{a_{i-1} a_{i}}$ tensor using the local Hamiltonian in matrix-product operator form. The local Hamiltonian can be transformed into the optimized basis using the updated tensor $R^{\tilde\sigma_i \sigma_i}$ \[Fig. \[fig:sketch\_dmrg\_lbo\](c)\]. In principle, these three steps can be repeated several times until no further improvements can be detected. However, in the implementation used for this work we fix the number of iterations to just one or two. When we shift the focus to the next site we perform a subspace expansion as explained in Ref.  to avoid getting stuck in local minima in the energy landscape. ![\[fig:gs\_convergence\_L4g2\]Relative error $\Delta \epsilon$ of the ground-state energy obtained with the DMRG3S+LBO algorithm calculated for $L = 4, \omega_0/t_0 = 2$ and $\gamma/t_0 = 4$ (crosses: data calculated with $d_o = 5$, diamonds: $d_o = 8$). Red symbols were calculated with $M_{\rm ph} = 20$ and blue symbols with $M_{\rm ph} = 30$. The discarded weight for the bond dimension is $10^{-10}$. The thin black dotted line marks $\Delta\epsilon = 10^{-10}$.](gs_en_sweeps_L4_t05_g2_alt){width="\columnwidth"} The first two steps of the local optimization described above can be combined with any DMRG algorithm. However, using single-site DMRG is especially beneficial here since such an algorithm scales better with the local (optimal) basis dimension. For example, for spinless fermions or the Fermi-Hubbard model, the local dimension is $d=2$ or $d=4$, respectively. Utilizing symmetry sectors (e.g., particle-number conservation), the effective local dimension of every symmetry block can be reduced down to $d_{\rm eff} = 1$. As a consequence, the local Hilbert-space dimension is more or less irrelevant for the performance of the algorithm (the runtime scales at most linearly with the number of symmetry blocks). Therefore, single-site DMRG algorithms have no major performance benefit over a two-site DMRG algorithm. However, for systems such as the Holstein model, where some degrees of freedom are not conserved (i.e., the number of phonons), the scaling of the algorithm with the local dimension becomes substantial. Away from the atomic limit, $t_0 \neq 0$, the local dimension is $d_o > 1$ in the different symmetry blocks and, as a consequence, an efficient single-site DMRG algorithm is desirable. In the implementation used for this work, we utilize the fermion number conservation of the Hamiltonian Eq. . This means that the local basis transformation tensors $R$ consist of two symmetry blocks. In our algorithm, we fix a maximal dimension $d_o$ of the blocks. In the truncation process \[Fig. 
\[fig:sketch\_dmrg\_lbo\](b)\] we take the singular values of both blocks of $\tilde\Lambda$, sort them by size and then start filling the blocks starting with the largest singular value. We stop as soon as one of the blocks has reached the maximal dimension $d_o$. In order to test the validity of our approach, we compare the DMRG3S+LBO results with Lanczos diagonalization that produces numerically exact results [@Prelovsek-2013]. As already mentioned in the introduction, the unbounded Hilbert space of the bosonic phonon degrees of freedom requires an ad-hoc cutoff in order to be feasible for exact wave-function based methods. In Fig. \[fig:gs\_convergence\_L4g2\], we show the relative error of the ground-state energy, i.e., $$\begin{aligned} \Delta\varepsilon = \frac{E_{\rm DMRG} - E_{\rm Lz}}{|E_{\rm Lz}|}\,,\end{aligned}$$ where $E_{\rm DMRG}$\[$E_{\rm Lz}$\] stands for the ground-state energy obtained with DMRG3S+LBO \[Lanczos\]. In order to compare with Lanczos diagonalization, we investigate a small system of $L = 4$ in the CDW phase ($\omega_0/t_0 = 2$ and $\gamma/t_0 = 4$). In the Lanczos approach, we use $M_{\rm Lz}=400$ Lanczos steps and $M_{\rm ph}=30$ phonons per site, which yields a Hilbert space of $\mbox{dim}(\mathcal{H})\simeq 5\cdot10^6$ at half-filling. In the DMRG3S+LBO ground-state search, we fix the discarded weight in the bond dimension to $10^{-10}$. We present $\Delta\varepsilon$ for different maximal phonon numbers per site $M_{\rm ph}$ as different colors and different maximal numbers of optimal modes per fermion sector $d_o$ as different symbols in Fig. \[fig:gs\_convergence\_L4g2\]. It is evident from the presented results that one needs to converge in both the number of optimal modes $d_o$ and the maximal local phonon number $M_{\rm ph}$ to get an accurate result. One can see that a maximum number of optimal modes of $d_o = 5$ or a maximum local phonon number of $M_{\rm ph} = 20$ is not enough to get an energy with an error of the same order as the discarded weight $10^{-10}$ (thin black dotted line in Fig. \[fig:gs\_convergence\_L4g2\]). To converge the energy difference $\Delta\varepsilon$ to the same order of magnitude as the discarded weight, a minimum number of phonons per site of $M_{\rm ph} = 30$ and a minimum number of optimal modes of $d_o = 8$ is required. Comparing the convergence behavior for different parameter sets (not shown here), we observe that our DMRG3S+LBO method is especially well suited for the region where $t_0 \sim \omega_0$. As mentioned above, we use a subspace expansion to avoid local minima in the energy landscape when converging to the ground state [@Hubig-PRB-2015]. Within this scheme, a mixing factor is introduced that controls the MPS-basis enrichment process. As pointed out in Ref. , it is a delicate task to choose this mixing factor in such a way that one avoids local minima while still converging in energy. This seems to be especially hard when working with a fixed discarded weight in the bond dimension. To check convergence of the algorithm, it is advisable to not only monitor the ground state energy during the runs but also the variance of the energy $\sigma_E^2 = \langle \psi | H^2 | \psi \rangle - \langle \psi | H | \psi \rangle^2$. The variance can be taken as a measure of how close a given state is to an eigenstate of the Hamiltonian. In the present work, the DMRG3S+LBO algorithm will be used for comparatively small system sizes since these are constrained by what can be handled with the following time-evolution method. 
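The truncation rule for the two fermion-number blocks described above can be summarized in a few lines. The following is a schematic illustration with a hypothetical function name and plain lists of singular values in place of the actual block tensors.

```python
def kept_states_per_block(sv_empty, sv_occupied, d_o):
    """Distribute the kept optimal-basis states over the two fermion-number
    blocks: merge the singular values of both blocks, sort them by size,
    accept states from the top of the list, and stop as soon as one block
    has collected d_o states.  Returns (kept_empty, kept_occupied)."""
    merged = [(s, 0) for s in sv_empty] + [(s, 1) for s in sv_occupied]
    merged.sort(key=lambda x: x[0], reverse=True)
    kept = [0, 0]
    for _, block in merged:
        kept[block] += 1
        if kept[block] == d_o:
            break
    return tuple(kept)

# example: the occupied block carries most of the weight
print(kept_states_per_block([0.6, 0.05, 0.01], [0.9, 0.4, 0.2, 0.02], d_o=3))  # (1, 3)
```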
A more extensive discussion of the DMRG3S+LBO method and a benchmark against other state-of-the-art DMRG methods for electron-phonon systems such as the pseudo-site method [@Jeckelmann-PRB-1998] and the method introduced in Ref. [@Kloss-PRL-2019] will be presented elsewhere. TEBD with local basis optimization {#sec:tebd+lbo} ---------------------------------- As discussed in the previous section, rotating the local basis into an optimized basis can be beneficial for MPS-based numerical methods. For the time evolution used in this work, we therefore employ the same strategy. We use the time-evolving block-decimation algorithm pioneered by Vidal [@Vidal-PRL-2004; @Vidal-PRL-2003] and combine it with local basis optimization [@Zhang-PRL-1998]. The algorithm used here is based on Ref. [@Brockt-PRB-2015], where single-electron problems are studied, and applies this method to finite electron densities. In the following, we will outline the different steps in this time-evolution approach. ![\[fig:sketch\_tebd\_lbo\] (a) General structure of a TEBD algorithm [@Schollwoeck-AoP-2011; @Vidal-PRL-2004]. (b) Different steps in the application of a single local time-evolution operator in the TEBD-LBO algorithm [@Brockt-PRB-2015].](tebd_lbo){width="\columnwidth"} The time-evolving block-decimation relies on the Trotter decomposition of the Hamiltonian. Consider the Hamiltonian $H_{\rm NN}$ of a one-dimensional lattice system with at most nearest-neighbor interaction. Then $H_{\rm NN}$ can be split into two sums: $$\begin{aligned} H_{\rm NN} = \sum_{l = 1}^L h_l = \sum_{l \ \rm odd} h_l + \sum_{l \ \rm even} h_l = H_{\rm odd} + H_{\rm even}\,,\end{aligned}$$ where all local summands $h_l$ in $H_{\rm even}$ and $H_{\rm odd}$ commute with each other. The corresponding time-evolution operator can be written in a second-order Trotter decomposition as: $$\begin{aligned} e^{-{\mathrm{i}}H_{\rm NN} \Delta t} = \ &e^{-{\mathrm{i}}H_{\rm odd} \Delta t/2} e^{-{\mathrm{i}}H_{\rm even} \Delta t} e^{-{\mathrm{i}}H_{\rm odd} \Delta t/2} \nonumber \\ &+ \mathcal{O}((\Delta t)^3) \nonumber \\ = \ &\prod_{l \ \rm odd} e^{-{\mathrm{i}}h_{l} \Delta t/2} \prod_{l \ \rm even} e^{-{\mathrm{i}}h_{l} \Delta t} \prod_{l \ \rm odd} e^{-{\mathrm{i}}h_{l} \Delta t/2} \nonumber \\ &+ \mathcal{O}((\Delta t)^3)\,.\end{aligned}$$ The individual local time-evolution operators $U_l = e^{-{\mathrm{i}}h_{l} \Delta t}$ only act on two adjacent sites. In the MPS algorithm, these $U_l$ operators take the form of gates that are applied to the MPS \[Fig. \[fig:sketch\_tebd\_lbo\](a)\]. Consider a generic MPS in Vidal’s notation [@Vidal-PRL-2003] where on every site, there is an additional basis transformation tensor $R$ as in Eq. : $$\begin{aligned} \left| \psi \right\rangle = \sum_{\{ \sigma_l \}} \tilde \Gamma^{\tilde\sigma_1}_{a_0 a_1} R^{\tilde\sigma_1 \sigma_1} \Lambda^{[1]}_{a_1 a^\prime_1} \tilde \Gamma^{\tilde\sigma_2}_{a^\prime_1 a_2} R^{\tilde\sigma_2 \sigma_2} \Lambda^{[2]}_{a_2 a^\prime_2} ... \nonumber \\ \Lambda^{[L-1]}_{a_{L-1} a^\prime_{L-1}} \Gamma^{\tilde\sigma_L}_{a^\prime_{L-1} a_L} R^{\tilde\sigma_L \sigma_L} \left| \sigma_1 ... \sigma_L \right\rangle \,.\end{aligned}$$ The first step in the time evolution is to contract the local basis transformation from one side to the local time evolution operators $U_l$ while the other side stays in the original basis \[Fig. 
\[fig:sketch\_tebd\_lbo\](b)\]: $$\begin{aligned} R^{\tilde\sigma_l \sigma_l} R^{\tilde\sigma_{l+1} \sigma_{l+1}} U_l^{\sigma_l \sigma_{l+1} \sigma^\prime_l \sigma^\prime_{l+1}} = \tilde U_l^{\tilde\sigma_l \tilde\sigma_{l+1} \sigma^\prime_l \sigma^\prime_{l+1}}\,.\end{aligned}$$ With this modified time-evolution operator $\tilde U_l$ we act on the bond tensor $\Phi$ \[Fig. \[fig:sketch\_tebd\_lbo\](b)\]: $$\begin{aligned} \Phi_{a_{l-1} a^\prime_{l+1}}^{\tilde\sigma_l \tilde\sigma_{l+1}} &= \Lambda^{[l-1]}_{a_{l-1} a^\prime_{l-1}} \Gamma^{\tilde\sigma_l}_{a^\prime_{l-1} a_l} \Lambda^{[l]}_{a_l a^\prime_l} \Gamma^{\tilde\sigma_{l+1}}_{a^\prime_{l} a_{l+1}} \Lambda^{[l+1]}_{a_{l+1} a^\prime_{l+1}} \\ \Psi_{a_{l-1} a^\prime_{l+1}}^{\sigma^\prime_l \sigma^\prime_{l+1}} &= \Phi_{a_{l-1} a^\prime_{l+1}}^{\tilde\sigma_l \tilde\sigma_{l+1}} \tilde U_l^{\tilde\sigma_l \tilde\sigma_{l+1} \sigma^\prime_l \sigma^\prime_{l+1}}\,.\end{aligned}$$ Note that the updated bond tensor $\Psi$ is now in the original basis. This is important to ensure that during the time evolution, the full local Hilbert space can be explored and also the optimal modes can change from before to after the application of the time-evolution operator. Next, we transform the time-evolved bond tensor $\Psi$ to the optimized basis. For that we calculate the local reduced density matrix on the sites $l$ and $l+1$: $$\begin{aligned} \rho^{\sigma^\prime_l \sigma^{\prime \prime}_l} &= \Psi_{a_{l-1} a^\prime_{l+1}}^{\sigma^\prime_l \sigma^\prime_{l+1}} \Psi_{a^\prime_{l+1} a_{l-1} }^{\dagger \sigma^\prime_{l+1} \sigma^{\prime\prime}_l} \\ \rho^{\sigma^\prime_{l+1} \sigma^{\prime \prime}_{l+1}} &= \Psi_{a_{l-1} a^\prime_{l+1}}^{\sigma^\prime_l \sigma^\prime_{l+1}} \Psi_{a^\prime_{l+1} a_{l-1} }^{\dagger \sigma^{\prime\prime}_{l+1} \sigma^\prime_l}\,.\end{aligned}$$ Next, we diagonalize the local reduced density matrices to obtain the local basis transformation matrices $U^\dagger$. Each of them can then be truncated to the desired optimal dimension $d_o$ to obtain the basis transformation matrices $R_l$ and $R_{l+1}$. For the time evolution, we actually define a local discarded weight $\Delta_{\rm loc}$ which is the maximum weight that is discarded from the spectrum of the local density matrix. We keep this local discarded weight fixed rather than the optimal dimension throughout one simulation. By applying the inverse of $R$ on the new bond tensor $\Psi$, we get the bond tensor in the optimal basis $\tilde\Psi$. We then go back to the original Vidal notation by performing an SVD of the $\tilde\Psi = U S V^\dagger$ and contracting the inverse of $\Lambda^{[l-1]}$ from the left to $U$ and the inverse of $\Lambda^{[l+1]}$ from the right to $V^\dagger$ to obtain $\Gamma^{\tilde\sigma_l}$, $\Lambda^{[l]}$ and $\Gamma^{\tilde\sigma_{l+1}}$. In a conventional time-dependent DMRG method, one has the discarded weight in the bond dimension $\Delta_{\rm tr}$ and the time-step size $\delta t$ as simulation parameters. In the TEBD-LBO algorithm, one additionally gets the maximal local phonon number $M_{\rm ph}$ and the local discarded weight $\Delta_{\rm loc}$ as simulation parameters. For all of the results presented in Sec. \[sec:nonequ\], we made sure that the total error originating from $\delta t$, $M_{\rm ph}$, $\Delta_{\rm tr}$, and $\Delta_{\rm loc}$ is smaller than the symbol size (as a consequence, the error-bars are omitted in all figures). 
This is achieved by setting $\Delta_{\rm tr}, \Delta_{\rm loc} \leq 10^{-7}$ throughout the paper and choosing $\delta t\,t_0 \leq 0.05$. As opposed to the method used by Brockt [*et al.*]{} [@Brockt-PRB-2015], where the maximum number of phonons per site can grow during the time evolution, we work with a fixed maximal phonon number $M_{\rm ph}$ per site. ![\[fig:bare\_Ocdw\_Lanczos\]Comparison between TEBD-LBO data (open black symbols) and Lanczos time-evolution data (small blue symbols) of the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . Calculations are done for system size $L=4$, phonon frequency $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$ (squares, diamonds and circles, respectively). In the TEBD-LBO time evolution, we use a local phonon cutoff $M_{\rm ph} = 10, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fourth data point that was computed in TEBD-LBO and every twentieth data point from the Lanczos time evolution.](bare_Ocdw_t05_Lanczos){width="\columnwidth"} Let us now test the accuracy of the TEBD-LBO algorithm. In Fig. \[fig:bare\_Ocdw\_Lanczos\], we present the decay of $\mathcal{O}_{\rm CDW}$ starting from a CDW state without phonons, i.e., $|\psi(\tau=0)\rangle=|0101\rangle_{\rm el}|\emptyset\rangle_{\rm ph}$ (with $|\emptyset\rangle_{\rm ph}$ as the vacuum state of the phonons, see also Sec. \[sec:bare\] for details) as calculated with TEBD-LBO and Lanczos time evolution for system size $L=4$. The time evolution within the latter is carried out with a time step of $\delta t\,t_0=10^{-2}$ and $M_{\rm Lz}=20$ Lanczos steps. It is evident from the presented data that, similarly to DMRG3S+LBO, the TEBD-LBO algorithm perfectly reproduces the Lanczos data for all considered values of the coupling strength $\gamma$. Furthermore, we have checked (not shown) that the time evolution from other initial states (discussed in Sec. \[sec:dressed\] and Sec. \[sec:quench\]) is in full agreement with the Lanczos results. Results for the real time evolution {#sec:nonequ} =================================== In this section, we present the main findings of our work: a study of the melting of CDW order during the time evolution from initial product states (see Sec. \[sec:bare\] and \[sec:dressed\]) and after quenches from correlated ground states (see Sec. \[sec:quench\]). In order to get a non-zero value of the CDW order parameter $\mathcal{O}_{\rm CDW}$ in the correlated ground state, we work with an odd number of sites $L$. As a consequence, we are not exactly at half filling but rather $N = (L-1)/2$. For consistency, we also use odd system sizes $L$ in Secs. \[sec:bare\] and \[sec:dressed\]. ![\[fig:init\_sketch\] Sketch of the initial states: (a) $|\mathrm{BCDW}\rangle$ Eq.  and (b) $|\mathrm{DCDW} \rangle$ Eq. .](drawing_v2){width="0.8\columnwidth"} \[sec:bare\]Bare CDW melting ---------------------------- As a first example of charge-density-wave melting in the Holstein model we prepare the system in a product state where every second site is occupied by a fermion and no phonons are present in the system: $$\begin{aligned} | \mathrm{BCDW} \rangle = \left[ \prod_{l=1}^{(L-1)/2} c_{2l}^\dagger \right] | \emptyset \rangle _{\rm el} | \emptyset \rangle _{\rm ph}\,. \label{eq:BCDW}\end{aligned}$$ $| \emptyset \rangle _{\rm el[ph]}$ is the vacuum state of the electrons \[phonons\]. 
We call this state a bare charge-density wave (BCDW). The structure of the state in real space is sketched in Fig. \[fig:init\_sketch\](a). Next, we time evolve this state $$\begin{aligned} | \mathrm{BCDW(t)} \rangle = e^{- {\mathrm{i}}H_{\rm Hol} t} | \mathrm{BCDW} \rangle\end{aligned}$$ with the Hamiltonian Eq.  of the Holstein model for different parameter sets. In Fig. \[fig:bare-Ocdw-t05\](a), we plot the time evolution of the CDW order parameter $\mathcal{O}_{\rm CDW}$ when starting from the bare charge-density-wave state for $L=13$, $\omega_0/t_0 = 2$ and coupling strengths $\gamma/t_0 = 1, 3, 4$. These parameter sets correspond to the TLL phase, the transition region and the CDW phase, respectively [@Creffield-EPJB-2005; @Bursill-PRL-1998]. As expected for small values of $\gamma/t_0 = 1$, the order parameter decays towards zero and oscillates around this value with an amplitude that slowly dies out. In the same figure, we compare the behavior at $\gamma/t_0 = 1$ to the behavior at $\gamma = 0$ for which the time evolution of $\mathcal{O}_{\rm CDW}$ can be calculated analytically in the thermodynamic limit, i.e., $\mathcal{O}_{\mathrm{CDW},\gamma = 0}(t) = J_0(4tt_0)$, where $J_0$ is the zeroth-order Bessel function of the first kind (see, e.g.[@Barmettler-PRL-2009]). From this comparison, it is evident that the frequency of the oscillations is controlled by the hopping parameter $t_0$. However, in contrast to the case of $\gamma = 0$ where the oscillations are very long lived and the amplitude decays algebraically, at $\gamma/t_0 = 1$ the amplitude of the oscillations is strongly damped. ![\[fig:bare-Ocdw-t05\] Time evolution of (a) the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ and (b) the phonon number per fermion $N_{\rm ph}/N$ when starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . The small black dots in panel (a) are exact analytical results for $\gamma = 0$ in the thermodynamic limit [@Barmettler-PRL-2009]. The dashed horizontal lines in panel (b) represent the phonon number in the ground state at the respective parameters. Simulations are performed for $L=13$, $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 10, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed.](bare_Ocdw_t05_L13){width="\columnwidth"} On the contrary, for the large coupling strength $\gamma/t_0 = 4$, the order parameter, after an initial fast drop, is temporarily stuck at $\mathcal{O}_{\rm CDW} \approx 0.6$ between $ t t_0 \approx 1$ and $t t_0 \approx 2.5$ before it eventually decays towards zero. Such a plateau is also clearly visible at coupling strength $\gamma/t_0 = 3$. This behavior of the order parameter at strong coupling can be understood as follows. When starting from the bare charge-density-wave state the fermions are free to move around. By tunneling into empty sites, the fermions reduce the order imprinted in the initial state. However, at large couplings the fermions have a strong tendency to form heavy polarons, i.e., many phonons are created as can be seen in Fig. \[fig:bare-Ocdw-t05\](b) where we plot the time evolution of the number of phonons per fermion in the system $N_{\rm ph}/N = (1/N) \sum_{l=1}^L \langle b_l^\dagger b_l^{\vphantom{\dagger}} \rangle$. 
These phonons surrounding the fermions drastically change their effective mass and they form heavy and therefore immobile polarons. As their movement is impeded, the order parameter does not change for a time span of $\approx 1.5/t_0$. This self-trapping effect is, however, only temporary. The system coherently oscillates between a state with a large and a small amount of phonons and the order parameter decays further as soon as the phonons are re-emitted, allowing the electron to move again into empty sites. The phonon oscillation period can clearly be seen in the time evolution of the phonon density in the system shown in Fig. \[fig:bare-Ocdw-t05\](b). The phonon number $N_{\rm ph}/N$ oscillates with a period of $2 \pi /\omega_0$ and the length of the plateaus in $\mathcal{O}_{\rm CDW}$ at $\gamma/t_0 = 3,4$ is controlled by this phonon oscillation period. ![\[fig:bare-Ocdw-t01\] Time evolution of (a) the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ and (b) the phonon number per fermion $N_{\rm ph}/N$ when starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . Simulations are performed for $L=13$, $\omega_0/t_0 = 10$ and different coupling strengths $\gamma/t_0 = 5, 15, 20$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 10, 30, 40$, respectively, and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every twentyfifth data point that was computed.](bare_Ocdw_t01_L13_alt){width="\columnwidth"} If one further increases the phonon frequency $\omega_0/t_0$, several plateaus can be observed before the order parameter $\mathcal{O}_{\rm CDW}$ relaxes towards zero. Such a behavior can be seen in Fig. \[fig:bare-Ocdw-t01\](a) where we plot the time evolution of $\mathcal{O}_{\rm CDW}$ for the same initial state $|\mathrm{BCDW}\rangle$ but for $\omega_0/t_0 = 10$. The step-like structure in the decay of the order parameter is evident in the data for $\gamma/t_0 = 15,20$ and the length of the plateaus coincides well with the phonon oscillation period $2\pi/\omega_0$ \[see Fig. \[fig:bare-Ocdw-t01\](b) for the time dependence of the phonon density $N_{\rm ph}/N$ in the system\]. Similar to the case at $\omega_0/t_0 = 2$, for the weaker coupling $\gamma/t_0 = 5$, we observe a decay of the order parameter towards zero with damped oscillations with a frequency controlled by the hopping parameter $t_0$. These oscillations are superimposed with oscillations that have a frequency controlled by the phonon frequency $\omega_0$. ![\[fig:bare-Ekin-t05\] Time evolution of the kinetic energy per fermion $E_{\rm kin}/N$ when starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . The dashed horizontal lines represent the kinetic energy in the ground state at the respective parameters. Simulations are performed for $L=13$, $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 10, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed.](bare_Ekin_t05_L13){width="\columnwidth"} In Fig. \[fig:bare-Ekin-t05\], we plot the kinetic energy $E_{\rm kin} = \langle H_{\rm kin} \rangle$ of the fermions as a function of time for $L = 13$, $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$ (i.e., the same parameters as in Fig. \[fig:bare-Ocdw-t05\]). 
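As an aside, the analytic $\gamma = 0$ reference curve shown in Fig. \[fig:bare-Ocdw-t05\](a) is straightforward to evaluate numerically; a minimal sketch (our own illustration, using scipy's Bessel function) reads:

```python
import numpy as np
from scipy.special import j0

def ocdw_free(t, t0=1.0):
    """CDW order parameter of noninteracting spinless fermions after a quench
    from the perfect CDW product state, in the thermodynamic limit:
    O_CDW(t) = J_0(4 t t0)."""
    return j0(4.0 * t * t0)

times = np.linspace(0.0, 8.0, 161)
reference = ocdw_free(times)   # damped oscillations with a period set by t0
print(reference[:3])
```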
It is not surprising that the initial drop in kinetic energy gets steeper as the coupling strength $\gamma/t_0$ is increased. However, for longer times, the energy loss from the electronic system decreases with increasing coupling strength. This trend follows the trend of the ground-state kinetic energy plotted as dashed horizontal lines in Fig. \[fig:bare-Ekin-t05\]. The electrons get more and more localized in the ground state as $\gamma/t_0$ increases and therefore, the kinetic energy grows. The time evolution of the kinetic energy at the different couplings follows this overall trend. Yet, we emphasize here that during the time evolution we do not drift towards the ground state since energy is conserved. Quite the contrary, we remain in a high-energy state. An open question left for future work is a comparison to finite-temperature equilibrium expectation values of the same observables. In order to illustrate the capabilities of the TEBD-LBO method, we compare such a simulation that is converged for a given local and global discarded weight for $L=13$ sites ($\omega_0/t_0 = 2$, $\gamma/t_0 = 4$) with a resulting $d_o=12$ (and an $M_{\text{ph}}=40$) to simulations with $M_{\text{ph}} =10$ and $M_{\text{ph}} =20$, which is shown in Fig. \[fig:compare\]. Clearly, the simulation with $M_{\text{ph}} = 10$ cannot correctly reproduce the dynamics even for $t>1/t_0$ and fails to capture the intermediate plateau formation for $1 \lesssim t t_0 \lesssim 2.5$. The simulation with $M_{\text{ph}} = 20$ is able to capture the plateau formation but evidently is not converged. This shows that the TEBD-LBO method is not only more accurate on a quantitative level but is also capable of accessing parameter regimes that are out of reach for conventional simulations with a small $M_{\text{ph}}$ using the phonon-number basis. ![\[fig:compare\] Time evolution of (a) the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ and (b) the phonon number per fermion $N_{\rm ph}/N$ when starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . Simulations are performed for $L=13$, $\omega_0/t_0 = 2$ and coupling strength $\gamma/t_0 = 4$. In the time evolution, we use different local phonon cutoffs $M_{\rm ph} = 10, 20, 40$ to illustrate convergence in this parameter. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-7}$. For clarity, we only show every fifth data point that was computed.](bare_Ocdw_t05_g2_L13_diffM.pdf){width="\columnwidth"} \[sec:dressed\]Dressed CDW melting ---------------------------------- In the second example, we start from the ground state in the atomic limit $t_0 = 0$. As discussed in Sec. \[sec:model\], the ground state takes the form Eq.  and we prepare it in such a way that $\mathcal{O}_{\rm CDW} = 1$. This is done by setting the hopping parameter $t_0 = 0$ and performing an imaginary time evolution of the bare charge-density-wave state $| \mathrm{BCDW} \rangle$ to reach the ground state. This results in the state: $$\begin{aligned} | \mathrm{DCDW} \rangle = e^{-\frac{(L-1) \gamma^2}{4\omega_0^2}} \left[ \prod_{l=1}^{(L-1)/2} c_{2l}^\dagger \ e^{\frac{\gamma}{\omega_0}b_{2l}^\dagger} \right] | \emptyset \rangle _{\rm el} | \emptyset \rangle_{\rm ph}\,, \label{eq:DCDW}\end{aligned}$$ up to machine precision. We will refer to this state as a dressed charge-density wave (DCDW) and its structure in real space is sketched in Fig. \[fig:init\_sketch\](b).
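The phonon cloud sitting on each occupied site of $|\mathrm{DCDW}\rangle$ is a coherent state with amplitude $\gamma/\omega_0$, so its occupation-basis representation (and hence the phonon cutoff needed to store it) is easy to inspect. The following is a small illustrative sketch, not the code used to prepare the state in the simulations.

```python
import numpy as np

def coherent_phonon_amplitudes(gamma, omega0, M_ph):
    """Amplitudes <n| e^{-(gamma/omega0)^2/2} e^{(gamma/omega0) b^dagger} |0>
    for n = 0 ... M_ph, i.e. the normalized phonon state on an occupied site
    of the dressed CDW state, truncated at M_ph phonons."""
    alpha = gamma / omega0
    amp = np.empty(M_ph + 1)
    amp[0] = np.exp(-alpha**2 / 2)
    for n in range(1, M_ph + 1):
        amp[n] = amp[n - 1] * alpha / np.sqrt(n)
    return amp

amp = coherent_phonon_amplitudes(gamma=4.0, omega0=2.0, M_ph=40)
n = np.arange(41)
print(np.sum(amp**2))      # ~1: the cutoff M_ph = 40 captures the state
print(np.sum(n * amp**2))  # ~(gamma/omega0)^2 = 4 phonons per occupied site
```

The roughly four phonons per occupied site found here are the atomic-limit value of $N_{\rm ph}/N$ for $\gamma/t_0 = 4$ and $\omega_0/t_0 = 2$, which sets the scale for the phonon numbers discussed below.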
![\[fig:dressed-Ocdw-t05\] Time evolution of (a) the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ and (b) the phonon number per fermion $N_{\rm ph}/N$ when starting from the dressed CDW state $|\mathrm{DCDW}\rangle$ Eq. . The small black dots in panel (a) are exact analytical results for $\gamma = 0$ when starting from the $|\mathrm{BCDW}\rangle$ state in the thermodynamic limit [@Barmettler-PRL-2009]. The dashed horizontal lines in panel (b) represent the number of phonons per fermion in the ground states at the respective parameters. Simulations are performed for $L=13$, $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 20, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed.](dressed_Ocdw_t05_L13){width="\columnwidth"} In Fig. \[fig:dressed-Ocdw-t05\](a), we plot the order parameter $\mathcal{O}_{\rm CDW}$ as a function of time when starting from the DCDW state. We set the phonon frequency to $\omega_0/t_0 = 2$ during the time evolution and use different coupling strengths $\gamma/t_0 = 1,3,4$ (the same as for the BCDW state in Fig. \[fig:bare-Ocdw-t05\] and Fig. \[fig:bare-Ekin-t05\]). For the strongest coupling $\gamma/t_0 = 4$, the initial state is close to the ground state and therefore, the order parameter decays very slowly. This resemblance is also reflected in the time dependence of the phonon number plotted in Fig. \[fig:dressed-Ocdw-t05\](b). For the strong coupling $\gamma/t_0 = 4$, the phonon number barely changes over time and stays close to the value in the ground state plotted as dashed horizontal line. On the other hand, for the small coupling $\gamma/t_0 = 1$, the initial state is far from the ground state and, as a consequence, the order decays fast towards zero and oscillates around this value. Again, the frequency of the oscillations is controlled by the hopping parameter $t_0$ as is evident from the comparison to the exact analytical curve at $\gamma = 0$ [@Barmettler-PRL-2009] \[small black dots in Fig. \[fig:dressed-Ocdw-t05\](a)\]. Furthermore, for $\gamma/t_0 = 1$, the phonon number increases by a factor of two within $tt_0 \approx 1.5$. For the intermediate coupling of $\gamma/t_0 = 3$, the order parameter slowly and steadily decays to zero and the phonon number in the system changes moderately compared to the other two cases. ![\[fig:dressed-Ocdw-t05-ttilde\] Time evolution of the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ when starting from the dressed CDW state $|\mathrm{DCDW}\rangle$ Eq. . Here, the time axis is in units of the effective hopping matrix element $\tilde t_0$ Eq.  (system size $L=13$, phonon frequency $\omega_0/t_0 = 2$ and data for different coupling strengths $\gamma/t_0 = 1, 3, 4$). In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 20, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth \[every twentieth\] data point that was computed for $\gamma/t_0 = 3$ \[$\gamma/t_0 = 4$\].](dressed_Ocdw_t05_L13_ttilde){width="\columnwidth"} The different time scales of the dynamics in Fig. \[fig:dressed-Ocdw-t05\](a) can also be understood in terms of decreasing effective hopping matrix elements for the polarons for increasing coupling strength $\gamma/t_0$. In Fig. 
\[fig:dressed-Ocdw-t05-ttilde\], we plot the order parameter $\mathcal{O}_{\rm CDW}$ as a function of time where time is expressed in units of the inverse effective hopping matrix element $\tilde t_0$, Eq. , from the small $t_0$ perturbation theory [@Hirsch-PRB-1983]. This does not produce a complete collapse of the data sets since we are already far away from the small $t_0$ limit. Nevertheless, the decay of the order parameter now happens on comparable time scales for the different coupling strengths. Another feature that is noticeable in Fig. \[fig:dressed-Ocdw-t05\](a) are peaks in $\mathcal{O}_{\rm CDW}$ around $t t_0 \approx 3.1$ and $t t_0 \approx 6.3$ for $\gamma/t_0 = 4$. The first peak is also visible for $\gamma/t_0 = 3$. The positions in time of these features coincide with multiples of the phonon period $2 \pi / \omega_0$. This becomes evident when comparing data for different phonon frequencies $\omega_0/t_0$ (not shown here). These features are also very prominent in Fig. \[fig:dressed-Ekin-t05\] where we plot the kinetic energy as a function of time when starting from the DCDW state. For the strong coupling $\gamma/t_0 = 4$, the kinetic energy relaxes to the ground-state value (dashed red line in Fig. \[fig:dressed-Ekin-t05\]) after $tt_0 \approx 0.5$ and fluctuates around it. Around $t t_0 \approx 3.1$, a peak appears that corresponds to the one seen in Fig. \[fig:dressed-Ocdw-t05\](a). After $tt_0 \approx 3.5$, the kinetic energy again fluctuates around the ground-state value before the second peak appears around $t t_0 \approx 6.3$. In contrast, the kinetic energy at $\gamma/t_0 = 1$ slowly decays to $E_{\rm kin}/(t_0 N) \approx -0.7$ and only shows very slow fluctuations around that value. It is worth noting that this value is still far above the ground-state kinetic energy (horizontal dashed green line in Fig. \[fig:dressed-Ekin-t05\]). The latter is not surprising since the initial DCDW state is far away from the ground state at these parameters. ![\[fig:dressed-Ekin-t05\] Time evolution of the kinetic energy per fermion $E_{\rm kin}/N$ when starting from the dressed CDW state $|\mathrm{DCDW}\rangle$ Eq. . The dashed horizontal lines represent the kinetic energy in the ground states at the respective parameters. Simulations are performed for $L=13$, $\omega_0/t_0 = 2$ and different coupling strengths $\gamma/t_0 = 1, 3, 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 20, 30, 40$, respectively. The local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed.](dressed_Ekin_t05_L13_alt){width="\columnwidth"} Comparing the time evolution of the BCDW state and the DCDW state, one notices that the behavior at the weak coupling $\gamma/t_0 = 1$ in the two cases is very similar. The order parameter decays towards zero very fast and oscillates with a frequency controlled by the hopping parameter $t_0$. In contrast, the behavior for the stronger couplings $\gamma/t_0 = 3,4$ is quite different for the two different initial states. When starting from the BCDW state the initial movement of the fermions is not affected much by the coupling to the phonons and only after a transient time, when phonons are emitted by the fermions and the polarons are formed, the fermions become very slow. However, this slowing down of the movement is only temporary and after the phonons are reabsorbed the dynamics of the fermions speeds up again. 
In contrast, the DCDW state at $\gamma/t_0 = 3,4$ already contains very heavy polarons and the movement of the fermions is slow right from the beginning. A closely related behavior has been seen in a recent work by Kloss [*et al.*]{} [@Kloss-PRL-2019] in the expansion of a single particle injected into an empty Holstein lattice. When the particle is initially dressed by phonons, the expansion is strongly suppressed as the coupling strength is increased. In the opposite case of a bare electron, a repeated temporal suppression of the dynamics over time intervals of one phonon period is observed. We find both these phenomena in the time evolution of the dressed and bare CDW state, respectively.

  -------------- ------------------------------ ------------------------------
   $\gamma/t_0$   $\Delta E_{\rm BCDW}/(t_0N)$   $\Delta E_{\rm DCDW}/(t_0N)$
   1              1.674                          1.174
   3              4.877                          0.377
   4              8.153                          0.153
  -------------- ------------------------------ ------------------------------

  : \[tab:energies\] Energy differences between the ground states and the initial states, $\Delta E_{\mathrm{BCDW} [ \mathrm{DCDW} ]} = E_{\mathrm{BCDW} [ \mathrm{DCDW} ]} - E^{\rm gs}$, for the BCDW \[DCDW\] state with $L = 13$ and $\omega_0/t_0 = 2$.

Another aspect is that, as an initial state, the BCDW state lies increasingly far from the ground state as $\gamma/\omega_0$ increases (cf. Table \[tab:energies\]). On the other hand, in the case of the DCDW state the opposite is true. The stronger the coupling $\gamma/\omega_0$, the closer the initial state is to the ground state in terms of energy. This explains the slower relaxation, as a smaller fraction of intermediate states in the many-body spectrum is available. \[sec:quench\]Quench from CDW to metallic phase ----------------------------------------------- In contrast to the initial CDW product states discussed in the previous sections, we now start from a fully correlated CDW state, i.e., the many-body ground state. The quench protocol is as follows. We prepare the system in the ground state for parameters in the CDW phase. Then, at time $t = 0$, we quench the phonon frequency $\omega_0/t_0$ and the electron-phonon coupling parameter $\gamma/t_0$ such that, for the resulting parameter set, the system is in the metallic TLL phase. The quenches considered here are illustrated in the sketch of the phase diagram in Fig. \[fig:holstein\_sketch\](b) as arrows. The horizontal arrow (FQ) illustrates the quench of both the phonon frequency $\omega_0/t_0$ and the coupling strength $\gamma/t_0$ in such a way that the ratio $\gamma/\omega_0$ stays constant, while the vertical arrow (CQ) illustrates the quench of only the coupling $\gamma/t_0$. As mentioned earlier, we use an odd system size $L$ to pin the charge-density wave and get a non-zero value for the order parameter $\mathcal{O}_{\rm CDW}$ in the initial ground state. The number of particles in the system is then $N = (L-1)/2$. In Tab. \[tab:quench\_energies\], we list the quench energies $\Delta E^{\rm qu} = E^{\rm init} - E^{\rm gs}$ in the two quenches, i.e., the difference between the energy of the state after the quench $E^{\rm init}$ and the ground-state energy $E^{\rm gs}$ for these parameters. Furthermore, we list the kinetic and phononic quench energies $\Delta E_{\rm kin}^{\rm qu} = E_{\rm kin}^{\rm init} - E_{\rm kin}^{\rm gs}$ and $\Delta E_{\rm ph}^{\rm qu} = E_{\rm ph}^{\rm init} - E_{\rm ph}^{\rm gs}$, respectively, where $E_{\alpha} = \langle H_{\rm \alpha} \rangle$.
The kinetic part of the quench energy is very similar in the two quenches while the phononic part is not. In the frequency quench, we reduce the energy of individual phonons with respect to the bandwidth and therefore, $\Delta E_{\rm ph}^{\rm qu}$ becomes quite small. In comparison, in the coupling quench the ratio of phonon energy and bandwidth stays fixed and $\Delta E_{\rm ph}^{\rm qu}$ dominates $\Delta E^{\rm qu}$. This explains why, in the frequency quench, $\Delta E^{\rm qu}$ is smaller than in the coupling quench.

  ---- ---------------------------- -------------------------------------- -------------------------------------
        $\Delta E^{\rm qu}/(t_0N)$   $\Delta E^{\rm qu}_{\rm kin}/(t_0N)$   $\Delta E^{\rm qu}_{\rm ph}/(t_0N)$
   FQ   0.987                        1.007                                  0.197
   CQ   5.190                        0.952                                  7.398
  ---- ---------------------------- -------------------------------------- -------------------------------------

  : \[tab:quench\_energies\] Total quench energies $\Delta E^{\rm qu}$ and the contributions from the kinetic part $\Delta E^{\rm qu}_{\rm kin}$ and the phononic part $\Delta E^{\rm qu}_{\rm ph}$.

### \[sec:FQ\] Frequency quench We first consider the frequency quench, starting from the ground state at $\omega_{0, \rm init}/t_0 = 2$ and $\gamma_{\rm init}/t_0 = 4$, which is in the CDW phase. At $t=0$, we quench the phonon frequency to $\omega_0/t_0 = 0.1$ and the coupling strength to $\gamma/t_0 = 0.2$ with $\gamma/\omega_0=2=$const. The time evolution of the order parameter is shown in Fig. \[fig:g\_quench\_Ocdw\](a) as circles. The order quickly decays towards zero and oscillates around a value slightly larger than zero. The comparison to the exact analytical results for relaxation from the BCDW state with $\gamma = 0$ [@Barmettler-PRL-2009] \[small black dots in Fig. \[fig:g\_quench\_Ocdw\](a)\] reveals that the frequency of the oscillations is controlled by the hopping parameter $t_0$. Moreover, the electrons clearly move into the previously empty sites. ![\[fig:g\_quench\_Ocdw\] Time evolution of (a) the decay of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ and (b) the staggered displacement $\mathcal{O}_{\rm disp}$ Eq.  in a quench from the CDW phase to the TLL phase. Circles: Quench from $\omega_{0, \rm init}/t_0 = 2$ and $\gamma/t_0 = 4$ to $\omega_0/t_0 = 0.1$ and $\gamma/t_0 = 0.2$ (frequency quench, FQ). Diamonds: Quench from $\gamma_{\rm init}/t_0 = 4$ to $\gamma/t_0 = 1$ while $\omega_0/t_0 = 2$ is kept fixed (coupling quench, CQ). The small black dots in panel (a) are exact analytical results in the thermodynamic limit for $| \mathrm{BCDW} \rangle$ as initial state and no coupling to phonons [@Barmettler-PRL-2009]. The dashed horizontal lines in panel (b) represent the value of $\mathcal{O}_{\rm disp}$ in the respective ground states. The system size is $L = 13$, local phonon cutoff $M_{\rm ph} = 40$ and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only plot every second \[fourth\] data point that was computed in the FQ \[CQ\].](quench_Ocdw_Ocdw_tq_gq_L13){width="\columnwidth"} Instead of the phonon number, we discuss the staggered displacement to characterize the dynamics in the phonon sector: $$\begin{aligned} \mathcal{O}_{\rm disp} = \frac{1}{N} \sum_{l = 1}^L (-1)^l \langle b_l^\dagger + b_l^{\vphantom{\dagger}} \rangle \,. \label{eq:Ocdwph}\end{aligned}$$ $\langle b_l^\dagger + b_l^{\vphantom{\dagger}} \rangle$ is the expectation value of the displacement of the harmonic oscillator on site $l$.
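As a small worked example of Eq. \[eq:Ocdwph\], the sketch below (illustrative only, with hypothetical names) evaluates $\mathcal{O}_{\rm disp}$ from a list of per-site displacement expectation values. It is checked on the ideal dressed CDW configuration, in which every occupied (even) site carries the coherent-state displacement $\langle b^\dagger + b \rangle = 2\gamma/\omega_0$.

```python
import numpy as np

def staggered_displacement(displacements, N):
    """O_disp = (1/N) sum_l (-1)^l <b_l^dagger + b_l>, with the site index l
    running from 1 to L as in the text."""
    x = np.asarray(displacements, dtype=float)
    l = np.arange(1, len(x) + 1)
    return np.sum((-1.0) ** l * x) / N

# ideal dressed CDW: the occupied (even) sites are displaced by 2*gamma/omega0
L, gamma, omega0 = 13, 4.0, 2.0
disp = [2 * gamma / omega0 if l % 2 == 0 else 0.0 for l in range(1, L + 1)]
print(staggered_displacement(disp, N=(L - 1) // 2))   # 4.0 = 2*gamma/omega0
```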
In equilibrium, a non-zero value of the fermion CDW order parameter $\mathcal{O}_{\rm CDW}$ is accompanied by a non-zero value of the staggered displacement $\mathcal{O}_{\rm disp}$. We plot the staggered displacement in Fig. \[fig:g\_quench\_Ocdw\](b). For the FQ, it remains positive during the simulation window and decreases only slightly. To qualitatively understand the nonequilibrium phenomena investigated here, it is helpful to adopt a mean-field-like picture. The displacements of the harmonic oscillators can be viewed as a potential landscape for the electrons when we replace the displacement operators in Eq.  by their expectation values. ![\[fig:g\_quench\_Ekin\] Time evolution of the kinetic energy per fermion $E_{\rm kin}/N$ in a quench from the CDW phase to the TLL phase. Circles: Quench from $\omega_{0, \rm init}/t_0 = 2$ and $\gamma/t_0 = 4$ to $\omega_0/t_0 = 0.1$ and $\gamma/t_0 = 0.2$ (frequency quench, FQ). Diamonds: Quench from $\gamma_{\rm init}/t_0 = 4$ to $\gamma/t_0 = 1$ while $\omega_0/t_0 = 2$ is kept fixed (coupling quench, CQ). The dashed horizontal lines represent the kinetic energy per fermion $E_{\rm kin}/N$ in the respective ground states. The system size is $L = 13$, local phonon cutoff $M_{\rm ph} = 40$ and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only plot every second \[fourth\] data point that was computed in the FQ \[CQ\].](quench_Ekin_tq_gq_L13){width="\columnwidth"} In the case of the frequency quench, the staggered displacement $\mathcal{O}_{\rm disp}$ changes very slowly as a function of time since the phonon period $2\pi/\omega_0$ is very large. As a consequence, at the end of our simulation time, there is still a background potential landscape. The electrons move in this background potential and therefore, their order remains larger than in the free case $\gamma=0$. This also means that although the electron CDW order parameter $\mathcal{O}_{\rm CDW}$ exhibits a fast dynamics and only shows small oscillations, the entire system is still very far from equilibration since the phonons remain in a spatially inhomogeneous state. In order to observe the relaxation of the whole system towards a stationary state, one would have to simulate much longer times than are currently feasible with our method. Finally, in Fig. \[fig:g\_quench\_Ekin\] we present the kinetic energy after the frequency quench. It relaxes towards an almost stationary value after $t t_0 \approx 1.5$ with only small oscillations at a frequency similar to that in the time evolution of $\mathcal{O}_{\rm CDW}$. ### \[sec:CQ\] Coupling quench In the second quench scenario, we fix the phonon frequency to $\omega_0/t_0 = 2$ and quench only the coupling strength from $\gamma_{\rm init}/t_0 = 4$ to $\gamma/t_0 = 1$. The time evolution of the order parameter $\mathcal{O}_{\rm CDW}$ is plotted as diamonds in Fig. \[fig:g\_quench\_Ocdw\](a). In contrast to the frequency quench, the order parameter in the coupling quench shows large slow oscillations with an amplitude that barely decreases on the time scales that are accessible here. In Fig. \[fig:g\_quench\_Ocdw\](b), we plot the staggered displacement $\mathcal{O}_{\rm disp}$ in this quench as diamonds. One can see that the staggered displacement oscillates with a period of $2\pi/\omega_0$ between positive and negative values, i.e., the phonons, once released from the polarons, start to undergo nonequilibrium dynamics with an oscillating displacement.
Note that the phonon density itself also remains largely concentrated on the even sites (data not shown here). For the effective potential landscape this means that the fermions are attracted to their initial places when $\mathcal{O}_{\rm disp}$ is positive and are pushed away from these sites when $\mathcal{O}_{\rm disp}$ is negative. Therefore, the oscillations in $\mathcal{O}_{\rm CDW}$ and $\mathcal{O}_{\rm disp}$ are locked to one another and the frequencies are comparable. Similar to the FQ, the spatially inhomogeneous nonequilibrium distribution of the phonons remains stable. This locking effect also explains the oscillations in the kinetic energy plotted as diamonds in Fig. \[fig:g\_quench\_Ekin\]. The kinetic energy has a maximum whenever $\mathcal{O}_{\rm disp}$ has a maximum or a minimum. This occurs when the fermions are localized on the even or odd sites, respectively. On the other hand, when the potential landscape is closer to being flat and $\mathcal{O}_{\rm disp}$ is close to zero, the fermions hop around and the kinetic energy has a minimum. In summary, the quenches again exhibit strong dependencies on the initial state and on the final-state parameters in the transient dynamics. As in the relaxation dynamics of the BCDW and DCDW states, the phonons primarily slow down the electronic dynamics. For the postquench parameters in the TLL phase considered here, the electrons can move but the phonon distribution relaxes much slower, resulting in a slowly decaying inhomogeneous nonequilibrium distribution. It would be very interesting to extend the analysis to the case of dispersive phonons to study whether this can speed up both the electronic relaxation and the dissolving of spatially inhomogeneous phonon distributions. From a broader perspective, this leads to the topic of energy transport, which in the Holstein model can only occur via electronic quasi-particle motion while dispersive phonons could carry an energy current themselves. These questions are left for future studies. Summary {#sec:summary} ======= To summarize, we studied the melting of CDW order by means of real-time simulations of the half-filled Holstein model of spinless fermions in one dimension. To this end, we investigated relaxation dynamics that is dominated by electron-phonon coupling in the far-from-equilibrium regime, complementary to the case studied in [@Hashimoto-PRB-2017] where strong electron interactions were present. We find a strong dependence of the transient dynamics on the precise initial state and on the model parameters. As discussed in previous work [@Golez-PRL-2012; @Matsueda-JPSJ-2012; @Hashimoto-PRB-2017; @Mendoza-Arenas-PRB-2019; @Kloss-PRL-2019], a main effect of an electron-phonon coupling is the slowing down of the dynamics of the electrons compared to a purely electronic system. This is attributed to the formation of polarons which renormalizes the mass of the charge carriers. For weak coupling the movement of the electrons is comparable to the dynamics of free particles with small corrections. In the case of strong coupling, the dynamics on transient time scales can be altered more drastically, which is exemplified by the temporal self trapping of the electrons observed here. Furthermore, we often find very different time scales for the relaxation in the electron and the phonon sector as is most clearly evident in the quenches from correlated ground states. 
In these situations, we observe that the initial spatially inhomogeneous phonon distribution persists and forms a potential background for the electron relaxation. As a result, inhomogeneities remain in the spatial electron distribution as well. A question for further studies is how this picture changes when introducing a dispersion of the phonons. It remains an open question whether regimes can be found where the presence of phonons actually accelerates the full relaxation of the electronic system. This connects our work to the question of how inhomogeneities in the phonon sector of an electron-phonon coupled system relax and, more generally, how different channels of energy and charge transport compete in such systems (in the context of the SSH model, such questions were discussed in, e.g., [@Mendoza-Arenas-PRB-2019]). Our work demonstrates the capabilities of combining LBO with MPS-based numerical methods when applied to electron-phonon coupled systems. The TEBD-LBO algorithm gives access to regimes far from equilibrium that are out of reach for conventional MPS-based techniques. We postpone the question of a benchmark of our DMRG3S+LBO algorithm against other state-of-the-art ground-state DMRG algorithms that were developed for electron-phonon coupled systems (such as the pseudo-site method [@Jeckelmann-PRB-1998]) to future studies. We acknowledge useful discussions with J. Bonča, C. Brockt, C. Hubig, E. Jeckelmann, and L. Vidmar. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Project No. 207383564 via Research Unit FOR 1807 and via SFB 1073 (project B09) under Project No. 217133147. J. H. acknowledges support from the Polish National Agency of Academic Exchange (NAWA) under contract PPN/PPO/2018/1/00035. E. D. was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division. Finite-size dependence ====================== ![\[fig:bare\_Ocdw\_sys\] Time evolution of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ when starting from the bare CDW state $|\mathrm{BCDW}\rangle$ Eq. . Comparison between different system sizes $L = 5$ (open black symbols), $L = 9$ (open colored symbols) and $L = 13$ (filled symbols). The phonon frequency is $\omega_0/t_0 = 2$ while in panel (a) $\gamma/t_0 = 1$, in panel (b) $\gamma/t_0 = 3$ and in panel (c) $\gamma/t_0 = 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 10, 30, 40$, respectively, and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed.](bare_Ocdw_t05_L5_9_13){width="\columnwidth"} In Figs. \[fig:bare\_Ocdw\_sys\], \[fig:dressed\_Ocdw\_sys\] and \[fig:quench\_Ocdw\_sys\], we compare time-evolution data produced with our TEBD-LBO method (cf. Sec. \[sec:tebd+lbo\]) for different system sizes $L = 5,9,13$. In Fig. \[fig:bare\_Ocdw\_sys\], the initial state is the bare CDW state and the phonon frequency is set to $\omega_0/t_0 = 2$ as in Sec. \[sec:bare\]. The largest finite-size effects are seen for $\gamma/t_0 = 1$ in Fig. \[fig:bare\_Ocdw\_sys\](a). This is expected since, due to the weak coupling, the dynamics is the fastest here. Nevertheless, there are no large qualitative differences between the different system sizes. Finite-size effects are even smaller for the larger couplings $\gamma/t_0 = 3$ \[Fig. \[fig:bare\_Ocdw\_sys\](b)\] and $\gamma/t_0 = 4$ \[Fig.
\[fig:bare\_Ocdw\_sys\](c)\] where until $tt_0 \approx 2.8$, the data of the different system sizes lie on top of each other and only small deviations are seen for larger times. The picture is similar for the dressed CDW state as the initial state as in Sec. \[sec:dressed\]. In Fig. \[fig:dressed\_Ocdw\_sys\], we compare the time evolution of $\mathcal{O}_{\rm CDW}$ at $\omega_0/t_0 = 2$ for the system sizes $L = 5,9,13$. Again, the largest finite-size effects are seen in panel (a) of Fig. \[fig:dressed\_Ocdw\_sys\] for $\gamma/t_0 = 1$. For $\gamma/t_0 = 3$ \[Fig. \[fig:dressed\_Ocdw\_sys\](b)\] small finite-size effects are observable for $tt_0 \gtrsim 4$, while for $\gamma/t_0 = 4$ \[Fig. \[fig:dressed\_Ocdw\_sys\](c)\] the data for the different system sizes lie on top of each other for the full simulation time. This is a manifestation of the very slow dynamics and the proximity to the ground state of the dressed CDW state at large $\gamma/t_0$. In Fig. \[fig:quench\_Ocdw\_sys\], we compare different system size data for the quenches discussed in Sec. \[sec:quench\]. In Fig. \[fig:quench\_Ocdw\_sys\](a), the phonon frequency and coupling strength is quenched from $\omega_{0,\rm init}/t_0 = 2$ and $\gamma_{\rm init}/t_0 = 4$ to $\omega_{0}/t_0 = 0.1$ and $\gamma/t_0 = 0.2$. Here, we can observe large boundary effects for $L = 5$ after $tt_0 \approx 3$ and for $L = 9$ after $tt_0 \approx 4.7$. This is not surprising since the dynamics is dominated by the hopping parameter $t_0$ in this case as discussed in Sec. \[sec:quench\]. The largest velocity in the system is therefore $v_{\rm max} \approx 2t_0$ and hence the fastest excitations had time to travel across the entire system and bounce back from the boundary. The situation is different in Fig. \[fig:quench\_Ocdw\_sys\](b) for the quench of the coupling strength $\gamma_{\rm init}/t_0 = 4$ to $\gamma/t_0 = 1$ while the phonon frequency is fixed to $\omega_0/t_0 = 2$. In this case, the finite-size effects seen are very small which is evidence for the fact that the dynamics in the system is not dominated by the free movement of the electrons. Instead, the presence of the phonons from the CDW initial state plays the key role in the dynamics. Overall, the Figs. \[fig:bare\_Ocdw\_sys\], \[fig:dressed\_Ocdw\_sys\] and \[fig:quench\_Ocdw\_sys\] show that the key features in the time evolution of $\mathcal{O}_{\rm CDW}$ that are described in Sec. \[sec:nonequ\] are robust against finite-size effects and are not just an effect of the small system sizes considered in this work. ![\[fig:dressed\_Ocdw\_sys\] Time evolution of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ when starting from the dressed CDW state $|\mathrm{DCDW}\rangle$ Eq. . Comparison between different system sizes $L = 5$ (open black symbols), $L = 9$ (open colored symbols) and $L = 13$ (filled symbols). The phonon frequency is $\omega_0/t_0 = 2$ while in panel (a) $\gamma/t_0 = 1$, in panel (b) $\gamma/t_0 = 3$ and in panel (c) $\gamma/t_0 = 4$. In the time evolution, we use a local phonon cutoff $M_{\rm ph} = 20, 30, 40$, respectively, and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only show every fifth data point that was computed](dressed_Ocdw_t05_L5_9_13){width="\columnwidth"} ![\[fig:quench\_Ocdw\_sys\] Time evolution of the charge-density-wave order parameter $\mathcal{O}_{\rm CDW}$ for different system sizes $L = 5$ (open black symbols), $L = 9$ (open colored symbols) and $L = 13$ (filled symbols). 
Panel (a): quench from $\omega_{0, \rm init}/t_0 = 2$ and $\gamma/t_0 = 4$ to $\omega_0/t_0 = 0.1$ and $\gamma/t_0 = 0.2$ (frequency quench). Panel (b): quench from $\gamma_{\rm init}/t_0 = 4$ to $\gamma/t_0 = 1$ while $\omega_0/t_0 = 2$ is kept fixed (coupling quench). The local phonon cutoff is $M_{\rm ph} = 40$ and the local discarded weight is set to $\Delta_{\rm loc} = 10^{-8}$. For clarity, we only plot every second data point that was computed in panel (a) and every fourth data point that was computed in panel (b).](quench_Ocdw_tq_gq_L5_9_13){width="\columnwidth"}
--- abstract: 'Combining electronic Raman scattering experiments with cellular dynamical mean field theory, we present evidence of the pseudogap in the superconducting state of various hole-doped cuprates. In Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8 + \delta}$ we track the superconducting pseudogap hallmark, a peak-dip feature, as a function of temperature $T$ and doping $p$, well beyond the optimal one. We show that, at all temperatures under the superconducting dome, the pseudogap disappears at the doping $p_c$, between $0.222$ and $0.226$, where also the normal-state pseudogap collapses at a Lifshitz transition. This demonstrates that the superconducting pseudogap boundary forms a vertical line in the $T-p$ phase diagram.' author: - 'B. Loret$^{1,2}$, S. Sakai$^3$, S. Benhabib$^1$, Y. Gallais$^1$, M. Cazayous$^1$, M. A. M[é]{}asson$^1$, R. D. Zhong$^4$, J. Schneeloch$^4$, G. D. Gu$^4$, A. Forget$^2$, D. Colson$^2$, I. Paul$^1$, M. Civelli$^5$ and A. Sacuto$^1$' bibliography: - 'referencesmain.bib' title: 'Vertical temperature-boundary of the pseudogap under the superconducting dome of the Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ phase-diagram' --- Discovered thirty years ago [@Bednorz1986], the copper oxide (cuprate) superconductors have not ceased to arise interest because their critical temperature [$T_{\rm c}\,$]{}is incredibly high at ambient pressure in comparison with conventional superconductors. Central to the high-[$T_{\rm c}\,$]{}cuprate problem is the challenge to understand the pseudogap (PG) state. In the normal phase, where the PG has been studied extensively, it manifests below a characteristic temperature [$T^{\ast}\,$]{}$>$ [$T_{\rm c}\,$]{}as a loss of low energy spectral weight in spectroscopic responses [@Alloul1989; @Warren1989; @Homes1995; @Fedorov1999; @Opel2000; @Fauque2006; @Bernhard2008; @Chatterjee2011; @Vishik2012; @Sacuto2013; @Sakai2013; @Benhabib15; @Hashimoto2015; @Mangin-Thro], and indirectly in thermodynamical and transport properties [@Loram2001; @Ando2004; @Daou2009; @Shekhter2013]. Its properties cannot be accounted for by the standard Fermi liquid theory of metals [@Abrikosov88; @Nozieres1997]. An even greater challenge is to establish whether the PG exists in the superconducting phase, and if yes, what its doping dependence is. This is crucial to understand the relation between superconductivity and the pseudogap [@Anderson1987; @Kotliar1988; @Varma1999; @Chubukov2008a; @Moon2009], which remains far from being well-understood [@Norman2005; @Keimer2015]. However, there are only very few probes that can disentangle a pseudogap from a superconducting gap. Note, even when the doping end-point of the normal state PG is known, it is unclear how that extrapolates in the superconducting phase, since it involves crossing a phase boundary. In the absence of an explicit method to identify the PG in the superconducting phase, this can be settled only through normal state extrapolations that require involved data analysis of heat capacity [@Loram2001] and angle-resolved photo-emission spectra (ARPES) [@Hashimoto2015], or of magneto-resistivity and nuclear magnetic resonance measurements [@Alloul3; @Daou2009; @Badoux2016; @Zheng2005; @Kawasaki2010] under application of very high magnetic fields. In this article, we present evidence that the PG develops in the SC state of different under-doped compounds, showing that it is a universal property of cuprates. 
In the case of Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8 +\delta}$ (Bi-2212), we are able to follow the PG evolution with doping under the superconducting dome. We show that the pseudogap end is a vertical line in the $T-p$ phase diagram within a narrow range of doping $0.222< p_c< 0.226$ [@Note2], the doping level where a Lifshitz transition from a hole-like to an electron-like Fermi surface takes place in the underlying electronic structure [@Kaminski2006; @Benhabib15]. Our experimental findings are analyzed within the cellular dynamical mean-field theory (CDMFT) applied to the two-dimensional Hubbard model. ![(Color online). Cartographies of the [$B_{1g}\,$]{}Raman response difference $\chi(\omega, T)-\chi(\omega, T=250\, K$) versus temperature of (a) an over-doped (OD58, $p=0.226$) and (b) an under-doped (UD80, $p=0.123$) Bi-2212 single crystals. Cartographies were built from the [$B_{1g}\,$]{}Raman responses plotted in panels (c) and (d) subtracted from the ones at 250 K. The energy is expressed in $2\Delta$ units. $2\Delta$ corresponds to $247\,{\ensuremath{{\rm cm}^{-1}}}$ and $558\,{\ensuremath{{\rm cm}^{-1}}}$ for OD58 and UD80, respectively. The blue (dark gray) and the red (bright gray) colors correspond respectively to a loss and gain of Raman spectral weight.[]{data-label="fig:1"}](fig1.pdf){width="9cm" height="7cm"} Before presenting our results, we shall explain how we identify the PG in the superconducting state. In this case, the PG manifests itself as a dip in the electronic background of the anti-nodal Raman response at frequencies higher than the pair breaking peak (PP). This PP-dip structure results from the interplay between the PG and the SC gap, and can be smoothly connected to the PG appearing in the electronic spectrum above [$T_{\rm c}\,$]{}[@Loret2016]. In order to illustrate this point, we discuss the cartographies of the difference between the anti-nodal ([$B_{1g}\,$]{}) Raman response $\chi(\omega, T)-\chi(\omega, T=250\, K$) of an over-doped and under-doped Bi-2212 single crystals over a wide range of temperature and energy (Fig. \[fig:1\]). Details of the Raman experiments, the crystal characterization and the doping setting are given in Supplementary Material (SM). We focus first on the overdoped compound (Fig.\[fig:1\] (a)) for which there is no PG. At low temperature below [$T_{\rm c}\,$]{}, two distinct zones can be clearly seen: a blue (dark gray) zone at low energy representative of a loss of spectral weight and a red (gray) one around $2\Delta$ associated with the spectral weight transfer into the PP as expected when a SC gap is opening [@Klein1984; @Devereaux2007]. The redistribution is partial because there is no sum rule in Raman scattering in contrast to that in optics [@Devereaux2007]. Above [$T_{\rm c}\,$]{}, this spectral weight redistribution disappears and there is no sign of the PG phase at this doping level. On the other hand, the cartography of the under-doped compound (Fig.\[fig:1\] (b)) is sharply different. Below [$T_{\rm c}\,$]{}the PP is surrounded by two regions of spectral weight depressions (blue/dark gray), and spectral weight is transferred to the PP also from energies higher than $2\Delta$ [@Loret2016] leaving a spectral depression (blue/dark gray): This is the dip associated to PG in SC phase. 
Above [$T_{\rm c}\,$]{}, the dip (centered around $\omega/2\Delta \simeq$ 1.7) disappears while the loss of spectral weight below $\omega/2\Delta \simeq$ 1.5 persists and it merges with the normal-state PG [@Opel2000; @Venturini2002; @Guyard2008; @Sacuto2011; @Sacuto2013]. The PP-dip feature is therefore the hallmark of the PG in the SC phase, which has also been confirmed theoretically by a CDMFT analysis on the Hubbard model [@Loret2016]. We note that in the normal phase the PG spectral depression completely vanishes near $210$ K (which corresponds to [$T^{\ast}\,$]{}). A thorough analysis [@Note], not shown here, reveals that this depletion is connected to a growing hump in the high-energy electronic background (green/yellow zone around 3 times $2\Delta$), suggesting a transfer of spectral weight over a large frequency range as expected from CDMFT studies [@Sakai2013; @Gull2013] and also detected by other studies [@Li2012; @Bernhard2008]. ![(Color online). Superconducting and normal (just above [$T_{\rm c}\,$]{}) anti-nodal ([$B_{1g}\,$]{}) Raman responses $\chi^{\prime \prime}_{{\ensuremath{B_{1g}\,}}} (\omega, T)$ of four distinct underdoped cuprates (a) Hg-1201 ([$T_{\rm c}\,$]{}= $88$ K) (b) Y-123 ([$T_{\rm c}\,$]{}=$90$ K), (c) Bi-2212 ([$T_{\rm c}\,$]{}=$80$ K), (d) Hg-1223 ([$T_{\rm c}\,$]{}=$133$ K). The Raman spectra in panels (b), (c) and (d) were obtained with the 532 nm laser line whereas the Hg-1201 one was obtained with the 647.1 nm line to reduce the Raman phonon activity. The PP-dip structure is also detected with the $532$ nm laser line in Hg-1201, showing that the PP-dip structure is not a Raman resonance effect. Sharp peaks correspond to phonon modes. In the insets, a closer view of the PP-dip structure is plotted by subtracting the normal Raman spectrum from the superconducting one.[]{data-label="fig:2"}](fig2.pdf){width="8cm"} Our first key result is that the PP-dip feature is present in several families of hole-doped cuprates, as shown in Fig. \[fig:2\]. Here the anti-nodal ([$B_{1g}\,$]{}) Raman responses $\chi^{\prime \prime}_{{\ensuremath{B_{1g}\,}}} (\omega, T)$ are displayed over a wide frequency range in the SC state and in the normal state (just above [$T_{\rm c}\,$]{}) for four distinct slightly under-doped cuprates: HgBa$_2$CuO$_{4+\delta}$ (Hg-1201), YBa$_{2}$Cu$_{3}$O$_{7-\delta}$ (Y-123), Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ and HgBa$_2$Ca$_2$Cu$_3$O$_{8+\delta}$ (Hg-1223). The electronic background, which is superposed on various phononic peak contributions, strongly depends on the material. In order to disentangle the material-dependent features and put into evidence the universal character of the PP-dip structure we have subtracted the normal-state Raman spectra (just above [$T_{\rm c}\,$]{}) from the superconducting ones as displayed in the insets of Fig.\[fig:2\](a)-(d). The PP-dip feature is clearly observable at and above $2\Delta$ in all the compounds considered. The PP-dip disappears just above [$T_{\rm c}\,$]{}(see black curves) as reported in our previous work [@Loret2016]. The key observation here is that the presence of the PP-dip is independent of the number of copper-oxide layers: one layer (Hg-1201), two layers (Y-123 and Bi-2212) and three layers (Hg-1223), hence it cannot be ascribed to a coupling between copper-oxygen planes [@Damascelli2003]. We can then use the PP-dip as a hallmark to track the PG inside the SC dome with doping and find the doping level for which the pseudogap disappears. In Fig.
\[fig:3\] we display the difference between the SC [$B_{1g}\,$]{}Raman responses of Bi-2212 at $12$ K and the one just above $T_c$ for several doping levels. The following are our main results concerning the $T-$doping evolution of the PP-dip feature. (i) The PP-dip survives in the over-doped region. (ii) The dip, which is the PG feature in the superconducting phase, reduces with increasing doping. The continuation of the PG in the superconducting overdoped regime is in agreement with earlier works[@Zheng2005; @Kawasaki2010; @Vishik2012; @Hashimoto2015]. (iii) Most interestingly, the PP-dip feature disappears in a narrow doping range between $p=0.222$ and $p=0.226$. ![(Color online).Difference of the antinodal ([$B_{1g}\,$]{}) Raman response in the SC ($12$K) and normal state (just above $T_c$) of Bi-2212 crystals for a set of doping levels from $p=0.096$ to $0.236$. The Raman intensity has been normalized to the maximum intensity of the PP. The dip disappears above $p=0.222$ while the PP is still observable. The arrows indicate the PP and the dip-bottom. Raw data are reported in the SM.[]{data-label="fig:3"}](fig3.pdf){width="8.5cm" height="6.5cm"} We now analyze how the PP-dip feature evolves as a function of doping comparing with the CDMFT results and examine if this gives a consistent picture. We calculate the Raman spectra of the two-dimensional Hubbard model with parameters appropriate for hole-doped cuprates: The (next-)nearest-neighbor transfer integral $t\sim0.3{\rm~eV}$ ($t'=-0.2t$) and the onsite Coulomb repulsion $U=8t$. The CDMFT is implemented on a 2$\times$2 cluster and is solved with a finite-temperature extension of the exact diagonalization method [@Caffarel1994; @Capone2007; @Liebsch2012; @Sakai2016a]. Within the bubble approximation, the [$B_{1g}\,$]{}Raman response is calculated through $$\begin{aligned} \chi_{{\ensuremath{B_{1g}\,}}}''(\w)=2&\int \frac{d\mathbf{k}}{(2\pi)^2} \g_{{\ensuremath{B_{1g}\,}}} ^2(\mathbf{k}) \int_{-\infty}^\infty d\w' [f(\w')-f(\w+\w')]\nonumber\\ &\times [{\rm Im}G(\mathbf{k},\w'){\rm Im}G(\mathbf{k},\w+\w')\nonumber\\ &-{\rm Im}F(\mathbf{k},\w'){\rm Im}F(\mathbf{k},\w+\w')] \label{eq:raman}\end{aligned}$$ with $\g_{{\ensuremath{B_{1g}\,}}}$$=$$\frac{1}{2}[\cos(k_x)-\cos(k_y)]$ and $f(\w)$ being the Fermi distribution function. Here, the normal ($G$) and anomalous ($F$) Green’s functions calculated with the CDMFT are interpolated in the momentum space [@Kyung2006]. This approximation is quite robust in the anti-nodal region, which includes the cluster momenta $ \mathbf{K}= (0, \pm \pi)$, $(\pm \pi, 0) $, and will not affect our conclusions on the [$B_{1g}\,$]{}Raman response. The 2$\times$2 CDMFT well portrays the richness of phases appearing in hole-doped cuprates, including the Mott insulator, the anti-ferromagnetism, the $d$-wave SC and the PG state[@Maier2005; @Kotliar2006; @Tremblay2006; @Kancharla2008; @Ferrero; @Sordi2010; @Gull2010; @Gull2013; @Gull2015]. In particular within CDMFT the PG originates from a singularity in the self-energy close to the Mott transition. This singularity evolves in the superconducting state, determining a prominent peak structure in the superconducting pairing function [@Sakai2016a; @Sakai2016b] and, as outcome, the PP-dip structure in the Raman [$B_{1g}\,$]{}spectra [@Loret2016]. However, the optimal doping for which [$T_{\rm c}\,$]{}is maximal is $p^{th}_{opt} \approx 0.08 - 0.10$, which is smaller than the one ($p_{opt}\approx 0.16$) in experiments. 
A quantitative comparison with experiments is therefore not possible and we restrict ourselves to a qualitative one. ![(Color online). Theoretical [$B_{1g}\,$]{}Raman response obtained from Eq.(\[eq:raman\]) combined with the CDMFT. The PP and the dip-bottom energies are denoted by arrows and go down with doping but their difference in energy remains almost constant with doping, as observed experimentally in Fig.3.[]{data-label="fig:4"}](fig4.pdf){width="8.5cm"} The CDMFT [$B_{1g}\,$]{}Raman responses in the SC state for increasing doping levels at a low temperature $T=0.005t$ are plotted on Fig.\[fig:4\]. Comparing with the experimental Raman spectra of Fig. \[fig:3\], we observe that the PP and dip trends are well reproduced by the CDMFT until $p^{th}=0.178$, which in the CDMFT phase diagram corresponds to a highly over-doped point. Both the theoretical and experimental dip-depth (pointed out by red arrows) reduce, while the distance in energy between the PP (black arrow) and dip-bottom (red arrows) stays roughly constant as a function of doping. At $p^{th}=0.178$, the theoretical dip, if still present, is very weak (see Fig. \[fig:4\]). A Lifshitz transition is presumably located just above $p^{th}=0.178$ because the spectral function at the anti-nodal point is almost symmetric (see SM). Above $p^{th}=0.178$ the convergence of the CDMFT noticeably slows down and we could not obtain a well-converged solution. The closeness to the van Hove singularity might be one of the reasons why it becomes technically harder and harder to obtain converged CDMFT solutions. A more detailed comparison between experiment and theory will be published elsewhere. ![(Color online). Temperature dependence of the difference between the SC [$B_{1g}\,$]{}Raman response and the one just above [$T_{\rm c}\,$]{}, denoted $T_0$ for (a) an overdoped OD62 Bi-2212 compound ($T_0= 70$ K), (b) an overdoped OD58 Bi-2212 compound ($T_0= 60$ K). (c) and (d) Closer views of the dip energy range above the pair breaking peak.[]{data-label="fig:5"}](fig5.pdf){width="8.5cm" height="7cm"} We finally show that at a doping $p_c$ which is between $p=$0.222 and 0.226, the PP-dip, and hence the PG, disappears from low temperatures ($T\approx10$ K) to [$T_{\rm c}\,$]{}. Fig. \[fig:5\](a) and (b) display respectively the $T-$dependent subtracted [$B_{1g}\,$]{}Raman response ($\chi"(T) - \chi"(T_0)$) of the OD62 ($p=0.222$) and OD58 ($p=0.226$) compounds, where $T_0$ is a temperature just above $T_c$, and is 70 K and 60 K, respectively. While the OD62 compound still displays a dip between 600 and $1200\, {\ensuremath{{\rm cm}^{-1}}}$ for $T<$ [$T_{\rm c}\,$]{}, shown as a negative contribution in the close-up of Fig. \[fig:5\](c), the OD58 compound displays no dip over an equivalent temperature-range, as shown by the positive contribution in the closeup of Fig. \[fig:5\](d). This proves that the PG in Bi-2212 ends on a vertical line inside the SC dome of the $T-p$ phase diagram, which can be drawn between $p=0.222$ and $0.226$ (see Fig. \[fig:6\]). Our result does not show any reentrant behaviour of the pseudogap inside the superconducting dome, in contrast to that conjectured in [@Vishik2012; @Hashimoto2015], but rather a straight line, at least down to 12 K (well below that of Ref.[@Hashimoto2015]). Note, our conclusions are based entirely on anti-nodal studies, unlike that of Ref. [@Vishik2012]. 
Concerning the PG end-point in the normal state, our earlier results [@Benhabib15] are in good agreement with antinodal ARPES analysis[@Vishik2012; @Hashimoto2015]. In this case, both ARPES and Raman probe antinodal quasiparticles. ![(Color online). Temperature-doping phase diagram of Bi-2212, showing the PG in the normal and SC phases. The normal state PG which develops between [$T^{\ast}\,$]{}and [$T_{\rm c}\,$]{}is obtained from the [$B_{1g}\,$]{}spectral loss observed in Ref.[@Benhabib15]. The [$T^{\ast}\,$]{}values are reported from Ref.[@Benhabib15]. The dip is the PG-related feature in the SC state. The PG collapses abruptly (vertical line) between $p=$0.222 and $p=$0.226 in the SC state.[]{data-label="fig:6"}](fig6.pdf){width="8.5cm" height="7cm"} Our results strongly suggest that the superconducting PG, as in the normal state, is sensitive to the topology of the underlying Fermi surface since close to $p_c$ a Lifshitz transition takes place from a hole-like to an electron-like Fermi surface [@Benhabib15]. Furthermore, if the PG disappearance were a phase transition, it would be a first order one. This is expected for a Lifshitz transition of electrons coupled to a lattice [@Hackl2008; @Kee2005]. On the theory side, the relation between the pseudogap and the Lifshitz transition is not a well settled issue. The slowing down of the CDMFT solution approaching the van Hove doping level, concomitant with a strong decrease of the dip depth, is compatible with the experimental scenario, though future CDMFT improvements are needed to settle this issue. In conclusion, we have shown that the peak-dip structure in the Raman [$B_{1g}\,$]{}spectra, which is the hallmark of the PG in the SC phase, is a universal feature of the hole-doped cuprates. Following the PP-dip evolution with doping and temperature in the case of Bi-2212, we show that the pseudogap persists on the over-doped side before disappearing abruptly and its end draws a vertical line in the $T-p$ phase diagram just in between $p=0.222$ and $p=0.226$. This corresponds to the same doping range where the normal-state pseudogap collapses, following a Lifshitz transition of the Bi-2212 anti-bonding band, where the Fermi surface changes from holelike to electronlike. We are grateful to A. Georges, A.J. Millis, S. Borisenko, and Louis Taillefer for fruitful discussions. Correspondence and requests for materials should be addressed to A.S. ([email protected]), M.C. ([email protected]) and S.S ([email protected]). S.S. is supported by JSPS KAKENHI Grant No. JP26800179 and JP16H06345; B.L. is supported by the DIM OxyMORE, Ile de France. SUPPLEMENTAL MATERIAL ===================== Details of the Bi-2212 single crystal characterization {#details-of-the-bi-2212-single-crystal-characterization .unnumbered} ------------------------------------------------------ The Bi-2212 single crystals were grown by using a floating zone method. [@Wen2008; @Mihaly1993] The critical temperature $T_{c}$ for each crystal has been determined from magnetic susceptibility measurements at a $10$ Gauss field parallel to the c-axis of the crystal. A complementary estimate of $T_{c}$ was achieved from electronic Raman scattering measurements by defining the temperature at which the [$B_{1g}\,$]{}superconducting pair breaking peak disappears. The level of doping $p$ was defined from $T_c$ using Presland and Tallon’s equation[@Presland1991]: $1-T_{c}/T_{c}^{max} = 82.6 (p-0.16)^{2}$.
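For convenience, the Presland-Tallon relation can be inverted to read off $p$ from a measured $T_c$. The following is a minimal sketch (our illustration only; the value $T_{c}^{max}=90$ K and the choice of the over- or under-doped branch are inputs that are not specified by the equation itself):

```python
import numpy as np

def doping_from_tc(tc, tc_max=90.0, overdoped=True):
    """Invert 1 - Tc/Tc_max = 82.6 (p - 0.16)^2 for the doping p.

    tc        : measured critical temperature (K)
    tc_max    : maximum Tc of the dome (assumed here to be 90 K for Bi-2212)
    overdoped : select the p > 0.16 branch if True, else the p < 0.16 branch
    """
    dp = np.sqrt((1.0 - tc / tc_max) / 82.6)
    return 0.16 + dp if overdoped else 0.16 - dp

# With these assumptions, Tc = 58 K and 62 K on the over-doped side give
# p ~ 0.226 and p ~ 0.221, close to the OD58 and OD62 labels used above.
print(doping_from_tc(58.0), doping_from_tc(62.0))
```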
In the over-doped regime, we have established a relationship between $T_c$ and the $2\Delta$ pair breaking peak. This gives us another reliable way to directly estimate [$T_{\rm c}\,$]{}from $2\Delta$, and then evaluate $p$ via the Presland and Tallon equation, see details of section C in the SM of Ref. [@Benhabib15]. Details of the Raman experiments {#details-of-the-raman-experiments .unnumbered} -------------------------------- Raman experiments have been carried out using a JY-T64000 spectrometer in a single-grating configuration using a 600 grooves/mm grating and a Thorlabs NF533-17 notch filter to block the stray light. The spectrometer is equipped with a nitrogen-cooled back-illuminated 2048x512 CCD detector. Two laser excitation lines were used: 532 nm and 647.1 nm from, respectively, a diode-pumped solid-state laser and an Ar+/Kr+ mixed-gas laser. Measurements between 10 and 290 K have been performed using an ARS closed-cycle He cryostat. This configuration allows us to cover a wide spectral range (90 cm$^{-1}$ to 2200 cm$^{-1}$) in one shot. The [$B_{1g}\,$]{}-symmetry Raman response is obtained from crossed light polarizations along the Cu-O bond directions, giving us access to the anti-nodal region of the momentum space where the $d$-wave superconducting gap is maximal and the pseudogap sets in. All the spectra have been corrected for the Bose factor and the instrumental spectral response. They are thus proportional to the imaginary part of the Raman response function $\chi^{\prime \prime}(\omega,T)$ [@Sacuto2011; @Sacuto2013]. In order to have reliable comparisons, all the Bi-2212 crystals have been measured in exactly the same experimental conditions. Special care has been devoted to covering a wide spectral range in one shot and maintaining the laser spot at the same location on the crystal surface during the run in temperature. This makes the intensities of the electronic Raman background reliable from low to high energy, without any adjustment of the spectra from one temperature to another. We have checked that each spectrum is reproducible. Raman responses of Bi-2212 in the superconducting and normal states {#raman-responses-of-bi-2212-in-the-superconducting-and-normal-states .unnumbered} ------------------------------------------------------------------- In Figure \[fig1S\] are reported the Raman responses of Bi-2212 single crystals for several doping levels in the superconducting ($T=12\,K$) and the normal state just above [$T_{\rm c}\,$]{}. The doping range extends from the under-doped to the over-doped regime. ![Raman responses of Bi-2212 crystals normalized to the maximum intensity of the pair-breaking peak. All the spectra have been corrected for the Bose factor and the optical constants as determined by ellipsometry measurements [@Reznik2006]. The $532$ nm laser line was used for all the spectra.[]{data-label="fig1S"}](fig1S.pdf){width="8cm" height="6cm"} CDMFT spectral function {#cdmft-spectral-function .unnumbered} ----------------------- In Figure \[fig2S\] is plotted the spectral function $A(\textbf{k},\omega)$ at the antinodal point $\textbf{k}=(\pi,0)$, calculated with the CDMFT for various doping levels $p^{th}$. The calculation was done at $T=0.005t$ in the superconducting state. At small $p^{th}$, the spectra show a strong electron-hole asymmetry: The Bogoliubov peaks are much stronger for $\omega<0$ than $\omega>0$. This asymmetry results from the underlying normal-state bare electronic structure, where the van Hove singularity is located below the Fermi energy.
As $p^{th}$ increases, the asymmetry decreases as expected. Because the spectrum at $p^{th}=0.18$ is close to a symmetric one, we estimate that the Lifshitz transition point, where the underlying van Hove singularity crosses the Fermi energy, is located just above $p^{th}=0.18$. ![Spectral function at $\textbf{k}=(\pi,0)$, calculated with the CDMFT for various doping levels.[]{data-label="fig2S"}](fig2S.pdf){width="8cm"}
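As a numerical aside, once the interpolated normal and anomalous Green’s functions are available on a grid, the bubble-approximation [$B_{1g}\,$]{}response of Eq. (\[eq:raman\]) can be evaluated along the following lines. This is only a schematic sketch under simple assumptions (uniform frequency grid, plain Riemann sums); the grid sizes, temperature and input arrays are placeholders, not the actual CDMFT data.

```python
import numpy as np

def b1g_response(w, im_g, im_f, kx, ky, temp):
    """Bubble-approximation B1g Raman response, cf. Eq. (eq:raman).

    w          : uniform frequency grid
    im_g, im_f : Im G and Im F, arrays of shape (len(kx), len(ky), len(w))
    kx, ky     : momentum grids covering the Brillouin zone
    temp       : temperature (same units as w)
    Returns chi''(Omega) on the positive frequencies Omega = n * dw.
    """
    dw = w[1] - w[0]
    # B1g vertex squared: [cos(kx) - cos(ky)]^2 / 4
    gamma2 = 0.25 * (np.cos(kx)[:, None] - np.cos(ky)[None, :]) ** 2
    fermi = 1.0 / (np.exp(w / temp) + 1.0)
    n_max = len(w) // 2
    chi = np.zeros(n_max)
    for n in range(1, n_max):
        occ = fermi[:-n] - fermi[n:]            # f(w') - f(w' + Omega)
        bubble = (im_g[:, :, :-n] * im_g[:, :, n:]
                  - im_f[:, :, :-n] * im_f[:, :, n:])
        integrand = gamma2[:, :, None] * occ[None, None, :] * bubble
        # BZ average for the momentum integral, Riemann sum for omega'
        chi[n] = 2.0 * integrand.sum() * dw / (len(kx) * len(ky))
    return chi
```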
--- abstract: 'The superconducting properties of a two-dimensional superconducting wire network with a new geometry have been measured as a function of the external magnetic field. The extreme localization effect recently predicted for this periodic lattice is revealed as a suppression of the critical current when the applied magnetic field corresponds to half a flux quantum per unit cell. For this particular magnetic field, the observed vortex state configuration is highly disordered.' address: - | CNRS-CRTBT, associé à l’Université Joseph Fourier\ 25 Av. des Martyrs, 38042 Grenoble Cedex 9, France - | Groupe de Physique des Solides-CNRS, UMR 7588, Universités Paris 7 et Paris 6,\ 2 place Jussieu, 75251 Paris Cedex 05, France author: - 'B. Pannetier, C.C. Abilio, E. Serret, Th. Fournier and P. Butaud' - 'J. Vidal' title: Localization Effect in a 2D Superconducting Network without Disorder --- PACS\[72.15, 73.23, 74.25\] Introduction ============ Superconducting wire networks are well-known model systems which provide unique experimental access [@exp] to the fascinating properties of the energy spectrum of tight-binding electrons. For example the critical temperature of a superconducting network exhibits an oscillatory dependence on the magnetic field which very accurately describes the features of the ground state of the spectrum. The mapping of the superconducting transition line to the Landau Levels is a direct consequence of the analogy between the linear Ginzburg-Landau equation for the superconducting order parameter and the Schroedinger equation for noninteracting charged particles. Other superconducting properties such as the equilibrium magnetization [@Gandit] or the critical current [@Buisson] have also been shown to result directly from the Landau level spectrum.\ Recently a novel case of extreme localization induced by a magnetic field was predicted [@Vidal] for noninteracting electrons in a regular two-dimensional lattice with the so-called T3 geometry (Fig.1, inset). This phenomenon is due to a subtle interplay between the lattice geometry and the magnetic field and occurs for a particular magnetic flux which corresponds to half a flux quantum per plaquette. For this special magnetic flux the hopping terms between neighbouring sites interfere destructively and lead to a confinement of the electron motion within the so-called Aharonov-Bohm cages. The properties of these electronic states are associated with an interesting behaviour of the Landau level spectrum for the $T3$ lattice at half frustration $f=1/2$. Here the frustration $f=\phi/\phi_{0}$ is defined as the magnetic flux per plaquette in units of the flux quantum $\phi_{0}=h/e$. Instead of forming energy bands, the highly degenerate eigenvalues merge into only 3 levels. The confinement effect is a consequence of the non-dispersive character of the eigenstates. This contrasts with the case of a square lattice [@Hofstadter] where the eigenstates are dispersive and form broad energy bands $\epsilon(k,f)$ at every rational frustration.\ The signature of the finite group velocity $v=\frac {1}{\hbar}\frac {\partial{\epsilon}}{\partial{k}}$ of such extended states for a superconducting system manifests itself in the ability of the superconducting wavefunction to carry a supercurrent. A simple model based on the depairing current of a superconducting wire [@Tinkham] was developed in [@Wang; @Buisson] for superconducting wire networks.
According to this model the critical current exhibits strong maxima at rational frustrations $f=p/q$ (p and q integer numbers) with strength proportional to the curvature of $\epsilon (k)$. The validity of this picture was confirmed by experiments in a square lattice [@Buisson]. An alternate formulation in terms of vortex pinning for a commensurate vortex lattice at rational $f$ is fully equivalent.\ The absence of dispersion $\epsilon (k)$ in the $T3$ lattice suggests that the corresponding superconducting network should be unable to carry a supercurrent at $f=1/2$. Experimental Results ==================== We have fabricated superconducting wire networks that reproduce the T3 geometry and we have studied the consequences of the above charge confinement phenomenon on the superconducting properties: critical temperature, critical current and vortex configuration. The observed $T_{c}(H)$ transition line (Fig. 1a) can be understood in great detail [@Abilio] from the ground state of the Landau level spectrum and will not be discussed here. The critical current curve measured at constant temperature, $T=1.185$ K is shown in Fig.1b. For each magnetic field the dynamic resistance characteristic was measured $vs$ increasing dc bias current. The critical current was defined as the threshold current for which the dynamic resistance exceeds $0.2\%$ of the normal state resistance ($R_{N}$). The width $\Delta T_{c}$ of the superconducting transition (Fig.1c) is defined as the difference between $T_{c}$ curves taken from criteria $0.6R_{N}$ and $0.1R_{N}$. The collection of data shown in Fig.1 reveals a quite unusual behaviour of the superconducting state of the $T3$ network. The usual picture that rational frustrations are characterized by (i) an upward cusp in the critical temperature curve, (ii) a sharp peak in the critical current and (iii) a dip in the transition width, is observed for values $0$, $1$, $1/6$, $1/3$, etc., but [*is not observed*]{} for the simplest rational number $f=1/2$. Instead one observes a downward cusp in the critical temperature as in the case of an isolated single loop [@LP] while the critical current shows an absolute minimum for this particular value. The suppression of the critical current at $f=1/2$ reflects the effect of the band structure on the superfluid velocity and provides strong evidence of the non-dispersive character of the eigenstates. The large broadening of the transition indicates that phase correlation between network sites cannot be established, leading to enhanced phase fluctuations.\ This anomalous behaviour was never pointed out before. We claim that it is related to the localization effect discussed in Ref. [@Vidal]. Further experiments have been carried out recently in samples with wire strand lengths as small as $0.5 \mu$m and will be published elsewhere [@Serret].\ \ The above discussed phenomena have their counterpart in the properties of the vortex lattice. The frustration $f$ is still the only control parameter which, here, can be viewed as the filling factor for vortices in the plaquettes of the lattice. We have used the Bitter decoration technique to visualize the vortex configuration in the $T3$ lattice for $f=1/3$ and $f=1/2$. As shown recently [@Bezryadin] the decoration contrast can be significantly improved, in the case of superconducting arrays, by using the so-called flux compression method in which a thin superconducting bottom layer converts the network coreless vortices into well-defined Abrikosov vortices.
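Returning for a moment to the critical-current criterion described earlier in this section, the threshold definition is straightforward to apply to raw $dV/dI$ data; a minimal sketch (our illustration only, with hypothetical array names) could read:

```python
import numpy as np

def critical_current(i_bias, dvdi, r_n, threshold=0.002):
    """Smallest bias current at which dV/dI exceeds threshold * R_N.

    i_bias    : dc bias currents, sorted in increasing order
    dvdi      : dynamic resistance measured at each bias current
    r_n       : normal-state resistance R_N
    threshold : 0.2% of R_N by default, as in the criterion above
    """
    above = np.nonzero(dvdi > threshold * r_n)[0]
    if above.size == 0:
        return None          # no transition within the measured bias range
    return i_bias[above[0]]
```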
Fig.2 compares the observed vortex patterns for $f=1/3$ and $f=1/2$. The $1/3$ case represents the reference situation where the natural triangular vortex lattice is commensurate with the underlying network. As expected we do observe an ordered state (Fig.2 left) which consists of a single-domain state with a few point defects but without domain walls. The critical current peak and the pinning of the vortex lattice are two related phenomena whose origin is the dispersive curve $\epsilon (k)$. The nature of the vortex state at $f=1/3$ in the $T3$ lattice is similar to that of the checkerboard commensurate state observed at half filling in a square lattice [@Runge; @Hess]. In contrast, the $f=1/2$ vortex structure does not exhibit any commensurate state (Fig. 2 right). A detailed analysis of the measured vortex correlation functions on an array containing several thousand plaquettes [@Serret] shows the absence of long-range vortex order under these conditions. Although there is no available theoretical model for the nature of the vortex ground state, the absence of a vortex commensurate state is likely consistent with the very high degeneracy of the electronic states in the tight-binding formulation. This result may have significant relevance to the more general field of frustrated systems. A detailed investigation of the ground states of vortices sitting on a Kagome lattice, which is the dual of the $T3$ lattice, is now in progress [@Butaud]. Conclusion ========== It is noteworthy that the rationality of the frustration parameter at $f=1/2$ does not lead to a commensurate state in the $T3$ lattice. This uncommon situation is demonstrated by the decoration experiment. Actually the properties of the superconducting state at $f=1/2$ are reminiscent of the case of irrational frustration [@Yu] in a square lattice. We believe that the anomalous superconducting properties presented in this paper are consistent with the localization effect predicted in Ref. [@Vidal]. The weakening of phase coherence by destructive interference is unambiguously observed. The mapping of the superconducting properties onto the tight-binding problem allows for a formal connection between the current-carrying superconducting states and the Landau level spectrum. The observed magnetic field dependence of the critical current can be accounted for qualitatively [@Abilio] by expressing the supercurrent in terms of the curvature of the band edge $\epsilon (k)$. However, this mapping is only valid in the vicinity of the critical temperature. The ordinary superconducting behaviour is recovered at lower temperature when the superconducting coherence length becomes smaller than the cell size. In this regime the large energy barriers prevent the vortex motion across the superconducting wires. A quite different situation may occur in Josephson junction arrays [@Martinoli] where the superconducting phase fluctuations play a more important role. The absence of commensurate states in the $T3$ lattice at $f=1/2$ may lead to interesting dynamics of the vortices at low temperatures. Apart from the superconducting systems, arrays of quantum wires [@Mailly] with the $T3$ geometry are also expected to exhibit interesting properties related to this unique interference effect. We acknowledge B. Douçot, R. Mosseri and O. Buisson for fruitful discussions. The *e-beam* lithography and the SEM observations were carried out with PLATO organization teams and tools. [99]{} B. Pannetier, J. Chaussy and R. Rammal, J. Phys. Lettres [**44**]{}, L853 (1983); B. Pannetier, J. Chaussy, R.
Rammal, Ph. Gandit and J.C. Villégier, Phys. Rev. Lett. [**53**]{}, 1845 (1984). P. Gandit, J. Chaussy, A. Vareille and A. Tissier, EuroPhys. Lett. [**3**]{}, 623 (1987). O. Buisson, M. Giroud and B. Pannetier, EuroPhys. Lett. [**12**]{}, 727 (1990). J. Vidal, R. Mosseri and B. Douçot, Phys. Rev. Lett. [**81**]{}, 5888 (1998). D.R. Hofstadter, Phys. Rev. B [**14**]{}, 2239 (1976). M. Tinkham, [*Introduction to Superconductivity*]{}, MacGraw Hill Inc (1996). Y.Y. Wang, R.Rammal and B. Pannetier, J. Phys. [**49**]{}, 2045 (1988). C.C. Abilio, P. Butaud, Th. Fournier, B. Pannetier, J. Vidal, S. Tedesco and B. Dalzotto, Phys. Rev. Lett. [**83**]{}, 5102 (1999). E. Serret et al. to be published. W.A. Little and R. Parks, Phys. Rev. A [**44**]{}, 97 (1964). A. Bezryadin, Y. Ovchinnikov and B. Pannetier, Phys. Rev. B [**53**]{}, 8553 (1996). K. Runge and B. Pannetier, EuroPhys. Lett. [**24**]{}, 737 (1993). H.D. Hallen, R. Seshadri, A.M. Chang, R.E. Miller, L.N. Pfeiffer, K.W. West, C.A. Murray and H.F. Hess, Phys. Rev. Lett. [ **71**]{}, 3007 (1993). P. Butaud et al. to be published. F. Yu, N. E. Israeloff, A.M. Goldman and R. Bojko, Phys. Rev. Lett. [**68**]{}, 2535 (1992). see for example the recent review by P. Martinoli and Ch. Leemann, J. Low Temp. Phys. [**118**]{}, 699 (2000). C. Naud and D. Mailly, Proceedings of the EPS conference, Montreux, march 2000.
--- abstract: 'Recently Kuzenko and McCarty observed the cancellation of 4-derivative terms in the $D=4 \ {\cal N}=1$ Volkov-Akulov supersymmetric action for the fermionic Nambu-Goldstone field. Here a simple algebraic proof of the cancellation is presented, based on the use of Majorana bispinors and Fierz identities. The cancellation shows a difference between the Volkov-Akulov action and the effective superfield action recently studied by Komargodski and Seiberg and containing one 4-derivative term. We find out that the cancellation effect takes place in the coupling of the Nambu-Goldstone fermion with the Dirac field. Equivalence between the KS and the VA Lagrangians is proved up to the first order in the interaction constant of the NG fermions.' author: - | A. A. Zheltukhin $^{a,b,c}$[^1]\ \ $^a$ Kharkov Institute of Physics and Technology,\ 1, Akademicheskaya St., Kharkov, 61108, Ukraine\ \ $^b$ Fysikum, AlbaNova, Stockholm University,\ 106 91, Stockholm, Sweden\ \ $^c$ NORDITA,\ Roslagstullsbacken 23, 106 91 Stockholm, Sweden title: | NORDITA-2010-14 On the cancellation of 4-derivative terms in the Volkov-Akulov action --- Introduction ============ A general approach to the construction of the phenomenological Lagrangians for the Nambu-Goldstone bosons associated with an arbitrary group $G$, spontaneously broken to its subgroup $H$, was studied in the well-known papers [@CCWZ],[@V1]. Volkov’s approach [@V1] uses the powerful Cartan formalism of the exterior differential $\omega$-forms resulting in the invariant phenomenological Lagrangians of the interacting NG bosons $$\label{L} \mathcal{L} = \frac{1}{2}Sp (G^{-1}dG)_{k}(G^{-1}dG)_{k}, \ \ \ G=KH,$$ where the differential 1-forms $ G^{-1}dG=H^{-1}(K^{-1}dK)H + H^{-1}dH$ represent the vielbein $ (G^{-1}dG)_{k}$, and the connection $(G^{-1}dG)_{h}$ associated with the vacuum symmetry subgroup $H$. The generalization of the NG boson concept to fermions with spin $1/2$ associated with the spontaneous breaking of supersymmetry was proposed by Volkov in [@V2] and their action was constructed in [@VA]. The idea of the fermionic Nambu-Goldstone particles has attracted much attention and has been discussed in many papers. As the NG fermion field gives a nonlinear realization of supersymmetry, its connection with the linear realization and superfields is an important issue within the spontaneous symmetry breaking theory. Light was shed on this question in papers [@Z], [@IK1], [@R], [@LR]. In [@IK1] Ivanov and Kapustnikov generalized the known theorems of the nonlinear realization theory of the internal symmetries [@CCWZ] to the case of supersymmetry. They proved that any superfield could be split into a set of independently transforming components with the supersymmetry parameters depending on the NG field. Also, they found that the Volkov-Akulov Lagrangian turned out to be contained in the invariant integration measure, associated with $x$ and $\theta$ variable changes in the superfield action. In [@IK1] these changes were expressed in the form of supersymmetry transformations, but with their parameters replaced by the NG fermionic field. On the other hand in [@R] Rocek derived the VA Lagrangian starting from the scalar superfield [@WZ] with the invariant constraints put on it. As a result, he revealed the VA Lagrangian to be the component auxiliary field of the scalar superfield expressed through the NG field. In [@LR] Lindstrom and Rocek generalized this approach to the case of the vector superfield [@WB].
The connection between the linear supersymmetry and constrained superfields was further developed in the recent paper by Komargodski and Seiberg [@Seib], where a new superfield formalism for finding the low-energy Lagrangian of the NG fermionic and other fields was proposed, and its connection with the VA Lagrangian was considered [^2]. The connection stimulates some questions and further studies in this direction. Our interest in particular is motivated by the Kuzenko and McCarty paper [@Kuz], where they observed the complete cancellation among 4-derivative terms in the $D=4 \ {\cal N}=1$ Volkov-Akulov supersymmetric action [^3]. This cancellation shows a difference between the VA [@VA] and KS [@Seib] actions and gives rise to the question about the constrained superfield action generating an effective NG Lagrangian without 4-derivative and higher derivative terms. The difference between KS and VA actions originates from the different realizations of the NG fermionic field in the VA and KS actions. In view of the invariance of the both actions under supersymmetry transformations the problem reduces to a proper redefinition of the NG field. As experience shows the finding of the explicit redefinition formula may turn out to be an intricate problem due to the presence of higher derivative terms of the NG field ( see e.g. [@HK]). Another question is whether such a cancellation takes place in the NG fermion couplings with other fields. Here we present an independent proof of the cancellation effect [@Kuz], based on using the Majorana bispinor representation of the $D=4 \ {\cal N}=1$ fermionic NG field and the corresponding Fierz rearrangements. We also find out that the cancellation effect occurs in interactions of the NG fermion with other fields. As a result, the 4-derivative and higher terms, associated with the fermionic NG field, are absent in the VA Volkov-Akulov Lagrangian with couplings [@VA]. We show that the maximal numbers of the NG fermions and their derivatives in the VA Lagrangian of interactions with the Dirac fields equal six and three respectively. An algorithmic procedure to verify the assumption about equivalency between the KS and VA Lagrangians, based on the redefinition of the KS fermionic field, is discussed, and their equivalence up to the first order in the constant $a$, describing the interaction between the NG fermions themselves, is proved. In sections 2, 3, 4 we draw attention to supersymmetry algebra in the Weyl and Majorana representations, the Volkov-Akulov action and its generalizations including the higher derivative terms. In section 5 we present a new proof of the cancellations of 4-derivative terms in the Volkov-Akulov action. In section 6 we find out that the cancellation effect takes place in the NG fermion couplings with the Dirac and other fields. The explicit formula, expressing the KS fermionic field through the VA fermion up to the first order in the interaction constant $a$, is derived in section 7. Supersymmetry and superalgebra ============================== The focus here is on the case of $D=4, \,\, {\cal N}=1$ supersymmetry which transformations are given by $$\label{susy} \theta'_{\alpha}= \theta_{\alpha} + \xi_{\alpha}, \ \ \bar\theta'_{\dot\alpha}= \bar\theta_{\dot\alpha} + \bar\xi_{\dot\alpha}, \ \ x^{'}_{\alpha\dot\alpha}=x_{\alpha\dot\alpha} + \frac{i}{2}( \theta_{\alpha}\bar\xi_{\dot\alpha}- \xi_{\alpha}\bar\theta_{\dot\alpha} )$$ in the Weyl spinor representation with $x_{\alpha\dot\alpha}=x_{m}\sigma^{m}_{\alpha\dot\alpha}$ [^4]. 
The Pauli matrices $\sigma_{i}$ and the identity matrix $\sigma_{0}$ form a basic set $\sigma_{m}=(\sigma_{0}, \sigma_{i})$ in the space of all $SL(2C)$ matrices. The Lorentz covariant description uses the second set of the Pauli matrices with the upper indices $\tilde\sigma_{m}:=( \tilde\sigma_{0}, \tilde\sigma_{i}):=(\sigma_{0}, -\sigma_{i})$ $$\label{antc} \{\sigma_{m}, \tilde\sigma_{n}\}= -2\eta _{mn}, \ \ \ Sp\sigma_{m}\tilde\sigma_{n}=-2\eta_{mn}, \ \ \ \sigma^{m}_{\alpha\dot\alpha}\tilde\sigma_{m}^{\dot\beta\beta} =-2\delta_{\alpha}^{\beta}\delta_{\dot\alpha}^{\dot\beta}\ ,$$ where $\eta_{mn}=diag(-1,1,1,1)$. The matrices $\sigma_{m}$ and $\tilde\sigma_{m}$ are Lorentz invariant similarly to the tensors $\eta_{mn}$ and the spinor antisymmetric metric $\varepsilon_{\alpha\beta}$ with the components $\varepsilon_{12}=\varepsilon^{21}=-1$. The supersymmetry generators $Q^{\alpha}$ and their complex conjugate $\bar Q^{\dot\alpha}:= -(Q^{\alpha})^{*}$ have the form $$\label{gener} Q^{\alpha}=\frac{\partial}{\partial\theta_{\alpha}} - \frac{i}{2} \bar\theta_{\dot\alpha} \frac{\partial}{\partial x_{\alpha\dot\alpha}}, \ \ \ \bar Q^{\dot\alpha}= \frac{\partial}{\partial\bar\theta_{\dot\alpha}} - \frac{i}{2} \theta_{\alpha} \frac{\partial}{\partial x_{\alpha\dot\alpha}}$$ and form the supersymmetry algebra $$\begin{aligned} \label{susyalg} \{ Q^{\alpha}, \bar Q^{\dot\alpha} \}=-i\frac{\partial}{\partial x_{\alpha\dot\alpha}} %=-\frac{i}{2}\tilde\sigma_{m}^{\dot\alpha\alpha}\frac{\partial}{\partial x_{m}} =\frac{1}{2}\tilde\sigma_{m}^{\dot\alpha\alpha}P^{m}, \\ \{ Q^{\alpha}, Q^{\beta} \}=\{ \bar Q^{\dot\alpha}, \bar Q^{\dot\beta} \} =[ Q^{\alpha},P^{m}]=[ \bar Q^{\dot\alpha},P^{m}]=0 \nonumber\end{aligned}$$ together with the translation generator $P^{m}=i\frac{\partial}{\partial x_{m}}$. The supersymmetry transformations (\[susy\]) and superalgebra (\[susyalg\]) are presented in equivalent bispinor form after transition to the Majorana spinors $$\begin{aligned} \label{gamsusy} \delta\theta=\xi, \ \ \ \ \delta\bar\theta=\bar\xi, \ \ \ \ \delta x_{m}=-\frac{i}{4}(\bar\xi\gamma_{m}\theta), \ \ \ \ \{ Q_{a}, Q_{b}\}= \frac{1}{2}(\gamma_{m}C^{-1})_{ab}P^{m},\end{aligned}$$ where $\bar\theta=\theta^{T}C$ with the antisymmetric matrix of the charge conjugation $C$ $$\begin{aligned} \label{major} C^{ab}= \left( \begin{array}{cc} \varepsilon^{\alpha\beta}&0\\ 0 & \varepsilon_{\dot\alpha\dot\beta} \end{array} \right), \ \ \ Q_{a}= \frac{\partial}{\partial\bar\theta^{a}} - \frac{i}{4} (\gamma_{m}\theta)_{a}\frac{\partial}{\partial x_{m}}.\end{aligned}$$ The Majorana spinors and the Dirac $\gamma$-matrices are defined as in [@WB] $$\begin{aligned} \label{gamma} \theta _{a}=\left(\begin{array}{c} \theta_{\alpha} \\ \bar\theta^{\dot\alpha} \end{array} \right), \ \ \xi_{a}=\left(\begin{array}{c} \xi_{\alpha} \\ \bar\xi^{\dot\alpha} \end{array} \right), \ \ \gamma_{m}= \left( \begin{array}{cc} 0 & \sigma_{m}\\ \tilde\sigma_{m} & 0 \end{array} \right), \ \ \{\gamma_{m}, \gamma_{n}\}= -2\eta _{mn}.\end{aligned}$$ The Volkov-Akulov action ======================== To construct the phenomenological Lagrangian of the Nambu-Goldstone fermions the elegant formalism of the invariant Cartan $\omega$-forms [@V1], unified with supersymmetry by Volkov, was used in [@VA]. 
The supersymmetry invariant differential $\omega$-forms in extended superspace with the Grassmannian coordinates $\theta_{\alpha}^{I}$ have the form $$\begin{aligned} \label{wforms} \omega_{\alpha}^{I}= d\theta_{\alpha}^{I} , \ \ \ \bar\omega_{\dot\alpha I}=d\bar\theta_{\dot\alpha I} , \ \ \ \omega_{\alpha\dot\alpha}=dx_{\alpha\dot\alpha}- \frac{i}{2}( d\theta_{\alpha}^{I} \bar\theta_{\dot\alpha I} - \theta_{\alpha}^{I} d\bar\theta_{\dot\alpha I}), \end{aligned}$$ where $I =1,2,...,N$ is the index of the internal $SU(N)$ symmetry. In the Majorana representation these fermionic and bosonic 1-forms are $$\begin{aligned} \label{biwforms} \omega= d\theta , \ \ \ \bar\omega=d\bar\theta , \ \ \ \omega_{m}=dx_{m}- \frac{i}{4}( d\bar\theta\gamma_{m}\theta ).\end{aligned}$$ The $\omega$-forms (\[wforms\]) were used in [@VA] as the building blocks for the construction of supersymmetric actions for the interacting NG fermions. Possible actions for the fermionic NG fields are constructed in the form of the wedge products of the $\omega$-forms (\[wforms\]), forming hyper-volumes embedded in the extended superspace. The action candidates have to be invariant under the Lorentz and internal (unitary) symmetries. In the case of the $4D$ Minkowski space the invariant action of the NG fermions must include the factorized volume element $d^{4}x$. This requirement restricts the structure of the admissible combinations of the $\omega$-forms. If such a combination is given by a wedge product of the $\omega$-forms (\[wforms\]) and their differentials, it should have a total number of differentials equal to four. The condition is satisfied by the well-known invariant [@VA] $$\begin{aligned} \label{volum} d^{4}V=\frac{1}{4!}\varepsilon_{mnpq}\omega^{m}\wedge\omega^{n}\wedge\omega^{p} \wedge\omega^{q}, \end{aligned}$$ where $\wedge$ is the wedge product symbol, that gives the supersymmetric extension of the volume element $d^{4}x$ of the Minkowski space. The supersymmetric volume (\[volum\]) is invariant under the Lorentz and unitary groups. It does not contain the spinorial one-forms $\omega_{\alpha}^{I}$ and $\bar\omega_{\dot\alpha I}$, but they appear, e.g. in the following invariant products [@VA] $$\begin{aligned} \label{volhor} \Omega^{(4)}=\omega_{\alpha}^{I}\wedge\bar\omega_{\dot\beta I}\wedge \tilde\sigma^{\dot\beta\alpha}_{m} d\wedge\omega^{m},\ \ \ {\tilde\Omega}^{(4)}=\varepsilon^{\alpha\beta}\omega_{\alpha}^{I}\wedge \omega_{\beta}^{J}\wedge\bar\omega_{\dot\alpha I}\wedge \bar\omega_{\dot\beta J}\varepsilon^{\dot\alpha\dot\beta},\end{aligned}$$ where $d\wedge\omega^{m}$ is the exterior differential of $\omega^{m}$. The additional important symmetry of the invariants (\[volum\]) and (\[volhor\]) is their independence of the choice of the superspace coordinate realization. It means that the four dimensional hypersurfaces, associated with (\[volum\]-\[volhor\]), may be parametrized in various ways. Because Volkov’s idea was to identify the Grassmannian $\theta$ coordinates with the fermionic NG fields, associated with the spontaneous breaking of supersymmetry, they must be considered as functions of $x$. This requirement means transition to the non-linear realization of supersymmetry. This explains why the pullbacks of the 4-form $d^{4}V$ (\[volum\]) or its generalizations (\[volhor\]) on the $4$-dimensional Minkowski subspace were proposed in [@VA] to generate supersymmetric actions for the fermionic NG fields.
As a result of the observations, the differential forms $\omega_{m}$ (\[biwforms\]) and $d^{4}V$ are presented as $$\begin{aligned} \label{pullb} \omega_{m}= (\delta_{m}^{n} -\frac{i}{4}\frac{\partial\bar\theta} {\partial x_{n}}\gamma_{m}\theta) dx_{n}=W_{m}^{n}dx_{n}, \ \ \ d^{4}V=\det W d^{4}x.\end{aligned}$$ The identification of $\theta$ with the fermionic NG field is achieved by the change: $\psi(x) =a^{-1/2}\theta(x)$, where $a$ has the meaning of the interaction constant $[a]=L^{4}$ that introduces a supersymmetry breaking scale. This constant restores the correct dimension $L^{-3/2}$ of the fermionic field $\psi(x)$ and the transition to $\psi$ in (\[pullb\]) and $d^{4}V$ yields the original Volkov-Akulov action [@VA] $$\begin{aligned} \label{action} S=\frac{1}{a}\int\det W d^{4}x \end{aligned}$$ with the $4\times4$ matrix $W_{m}^{n}(\psi,\partial_{m}\psi)$ defined by the following relations $$\begin{aligned} \label{acterm} W_{m}^{n}= \delta^{n}_{m} + aT_{m}^{n},\ \ \ \ T_{m}^{n}= -\frac{i}{4}\partial^{n}\bar\psi\gamma_{m}\psi.\end{aligned}$$ An explicit form of the action $S$ (\[action\]) follows from the definition of $\det W$ $$\begin{aligned} \label{determ} \det W = -\frac{1}{4!}\varepsilon_{n_{1}n_{2}n_{3}n_{4}}\varepsilon^{m_{1}m_{2}m_{3}m_{4}} W_{m_{1}}^{n_{1}}W_{m_{2}}^{n_{2}}W_{m_{3}}^{n_{3}}W_{m_{4}}^{n_{4}},\end{aligned}$$ where we chose $\varepsilon_{0123}=1$. Using (\[acterm\]) and (\[determ\]) presents $S$ (\[action\]) in the form $$\begin{aligned} \label{action'} S= \int d^{4}x [\, \frac{1}{a} + T_{m}^{m} + \frac{a}{2}(T_{m}^{m}T_{n}^{n}- T_{m}^{n}T_{n}^ {m}) + a^2 T^{(3)}+ a^3 T^{(4)} \,],\end{aligned}$$ where $T^{(3)}$ and $T^{(4)}$ code the interaction terms of the NG fermions that are cubic and quartic in the fermion derivative $\partial_{m}\psi$. The first term in (\[action'\]) provides a non-zero vacuum expectation value for the VA Lagrangian, confirming that it describes the spontaneously broken supersymmetry. In supergravity this term is associated with the cosmological term [@VS], [@Verice]. The second term reproduces the free action for the massless NG fermion $\psi(x)$ $$\begin{aligned} \label{dirac} S_{0}=\int d^{4}x T_{m}^{m} = -\frac{i}{4}\int d^{4}x\partial^{m}\bar\psi\gamma_{m}\psi.\end{aligned}$$ The terms $ T^{(3)}$ and $ T^{(4)}$, cubic and quartic in the NG fermion derivatives, respectively, were presented in [@VA] in the form $$\begin{aligned} \label{vertex} T^{(3)}= \frac{1}{3!}\sum_{p} (-)^{p} T_{m}^{m} T_{n}^{n} T_{l}^{l} \ ,\ \ \ \ \ T^{(4)}= \frac{1}{4!}\sum_{p} (-)^{p} T_{m}^{m} T_{n}^{n} T_{l}^{l}T_{k}^{k} \ ,\end{aligned}$$ where the sum $\sum_{p}$ corresponds to the sum over all permutations of the subindices in the products of the tensors $ T_{n}^{m}$. The terms (\[vertex\]) describe the vertices with six and eight NG fermions. Higher derivative generalizations\ of the Volkov-Akulov action ================================== The $\omega$-form formalism [@VA] yields a clear geometric way to extend the VA action by higher-degree terms in the NG fermion derivatives. In the general case the combinations of the $\omega$-forms (\[biwforms\]), admissible for the higher order invariant actions, have to be homogeneous functions of degree four in the differentials $dx$ and $d\psi$. The latter condition guarantees the factorization of the volume element $d^{4}x$ in the action integral.
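Returning for a moment to the expansion (\[action'\]), its combinatorial structure is just that of $\det(1+aT)=1+a\,e_{1}+a^{2}e_{2}+a^{3}e_{3}+a^{4}e_{4}$, with the invariants $e_{k}$ built from traces of powers of $T$. For an ordinary (commuting) $4\times4$ matrix this can be checked numerically as in the sketch below; this is only a consistency check of the combinatorics of (\[determ\]), not of the Grassmann-odd cancellations discussed in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
a = 0.3

# traces of powers of T
p1, p2, p3, p4 = (np.trace(np.linalg.matrix_power(T, k)) for k in range(1, 5))

# elementary symmetric invariants from Newton's identities
e1 = p1
e2 = (p1**2 - p2) / 2
e3 = (p1**3 - 3 * p1 * p2 + 2 * p3) / 6
e4 = (p1**4 - 6 * p1**2 * p2 + 3 * p2**2 + 8 * p1 * p3 - 6 * p4) / 24

expansion = 1 + a * e1 + a**2 * e2 + a**3 * e3 + a**4 * e4
exact = np.linalg.det(np.eye(4) + a * T)
assert np.isclose(expansion, exact)   # the expansion terminates at order a^4
```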
To restrict the number of these invariants, the minimality condition for the degree of derivatives $\partial_{m}\psi$ in the general action $$\label{genract} S=\int d^{4}x L(\psi, \partial_{m}\psi)$$ was proposed in [@VA]. The minimality condition takes into account only the lowest degrees of the derivatives $\partial_{m}\psi$ in the Lagrangian and corresponds to the low energy limit. To count the degree of $\partial_{m}\psi$ in different invariants, it was observed that these derivatives appear from the differentials $d\psi$ in the fundamental $\omega$-forms. From this point of view there is an important difference between the spinor and vector 1-forms (\[wforms\]). The spinor one-forms contain one derivative $\partial_{m}\psi$, but the vector form (\[pullb\]) either does not contain the $\psi$ fields at all or contains one derivative $\partial_{m}\psi$ accompanied by $\psi$. As a result, the total number of derivatives $\partial_{m}\psi$ relative to the total number of fermionic NG fields is lower in the vector differential one-forms than in the spinor ones. The invariants including the exterior differential of the $\omega$-forms, like $\Omega^{(4)}$ in (\[volhor\]), have a higher degree in $\partial_{m}\psi$ in comparison with the product of the $\omega$-forms themselves. The same conclusion concerns the invariant ${\tilde\Omega}^{(4)}$ including only the spinor forms. Thus, the demand of minimality of the number of derivatives $\partial_{m}\psi$ in $S$ (\[genract\]) is satisfied if the admissible invariants contain only the vector differential one-forms $\omega_{m}$. The exact realization of the minimality condition by the VA action fixes the latter, and solves the problem of the effective action construction in the low energy limit. Cancellation of 4-derivative terms in the Volkov-Akulov action ============================================================== For the case of ${\cal N}=1$ supersymmetry the algebraic structure of the terms $T^{(3)}$ and $T^{(4)}$ (\[vertex\]) was analysed in [@Kuz] using the Weyl spinor basis. It was observed that the terms having the fourth degree in $\partial_{m}\psi$ and collected in $T^{(4)}$ completely cancel out. Here we consider an alternative proof of the observation using the Majorana bispinor representation. In correspondence with representation (\[determ\]), the term $T^{(4)}$ (\[vertex\]) may be written as $$\begin{aligned} \label{8vertex} &&T^{(4)} = -\frac{1}{4!}\varepsilon_{n_{1}n_{2}n_{3}n_{4}}\varepsilon^{m_{1}m_{2}m_{3}m_{4}} T_{m_{1}}^{n_{1}}T_{m_{2}}^{n_{2}}T_{m_{3}}^{n_{3}}T_{m_{4}}^{n_{4}} = \\ && -\frac{1}{4!}(\varepsilon_{n_{1}n_{2}n_{3}n_{4}}\bar\psi_{a_{1}}^{,n_{1}} \bar\psi_{a_{2}}^{,n_{2}}\bar\psi_{a_{3}}^{,n_{3}}\bar\psi_{a_{4}}^{,n_{4}}) ( \varepsilon^{m_{1}m_{2}m_{3}m_{4}}\gamma^{a_{1}b_{1}}_{m_{1}}\gamma^{a_{2}b_{2}}_{m_{2}} \gamma^{a_{3}b_{3}}_{m_{3}}\gamma^{a_{4}b_{4}}_{m_{4}}) \nonumber \\ && (\psi_{b_{1}}\psi_{b_{2}}\psi_{b_{3}}\psi_{b_{4}}),\nonumber\end{aligned}$$ where $\gamma^{ab}_{m}=(C\gamma_{m})^{ab}$ is a symmetric matrix in the bispinor indices $(a,b=1,2,3,4)$ and the condensed notation $\bar\psi_{a}^{,n}:=\partial^{n}\bar\psi_{a}$ is introduced. The product $\psi_{b_{1}}\psi_{b_{2}}\psi_{b_{3}}\psi_{b_{4}}$ in (\[8vertex\]) is a completely antisymmetric spin-tensor of maximal rank four because of the Grassmannian nature of the spinor components $\psi_{b}$.
Then we find that the product may be presented in the equivalent form $$\begin{aligned} \label{antisimtr} \psi_{b_{1}}\psi_{b_{2}}\psi_{b_{3}}\psi_{b_{4}} = - (C^{-1}_{b_{1}b_{2}}C^{-1}_{b_{3}b_{4}} + C^{-1}_{b_{1}b_{3}}C^{-1}_{b_{4}b_{2}} + C^{-1}_{b_{1}b_{4}}C^{-1}_{b_{2}b_{3}}) \psi_{1}\psi_{2}\psi_{3}\psi_{4},\end{aligned}$$ where the antisymmetric matrix $C^{-1}$ is the inverse of the charge conjugation matrix $C$ (\[gamsusy\]). The representation (\[antisimtr\]) collects all spinors $\psi$ without derivatives in the form of a scalar multiplier. The substitution of (\[antisimtr\]) in (\[8vertex\]) transforms it into a sum of products of bilinear spinor covariants $$\begin{aligned} T^{(4)}= \frac{3}{4!} \,\Phi\,\Xi, \ \ \ \Xi:=\psi_{1}\psi_{2}\psi_{3}\psi_{4}, \label{bilcovar} \\ \Phi:=\varepsilon_{n_{1}n_{2}n_{3}n_{4}}\varepsilon^{m_{1}m_{2}m_{3}m_{4}} (\bar\psi^{,n_{1}}\Sigma_{m_{1}m_{2}}\psi^{,n_{2}}) (\bar\psi^{,n_{3}}\Sigma_{m_{3}m_{4}}\psi^{,n_{4}}),\nonumber\end{aligned}$$ where $\Sigma_{mn}:= \frac{1}{2}[\gamma_{m}, \gamma_{n}]$ are the Lorentz transformation generators. Taking into account the well-known property of $\Sigma_{mn}$ $$\begin{aligned} \label{rotat} \varepsilon^{m_{1}m_{2}m_{3}m_{4}}\Sigma_{m_{3}m_{4}}=-2\gamma^{5}\Sigma_{m_{1}m_{2}}, \ \ \ \gamma^{5}:=\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}= \left( \begin{array}{cc} -i& 0 \\ 0 & i \end{array} \right),\end{aligned}$$ one can present the Lorentz invariant $\Phi$ (\[bilcovar\]) in the compact form $$\begin{aligned} \label{bilcovar'} \Phi=-2\varepsilon_{n_{1}n_{2}n_{3}n_{4}} (\bar\psi^{,n_{1}}\Sigma_{m_{1}m_{2}}\psi^{,n_{2}}) (\bar\psi^{,n_{3}}\Sigma^{m_{1}m_{2}}\gamma^{5}\psi^{,n_{4}}).\end{aligned}$$ Using the representation (\[bilcovar'\]) we shall prove the vanishing of $\Phi$. To this end let us recall the well-known Fierz relation for Grassmannian spinors $\chi_{i}$ $$\begin{aligned} \label{fierz1} (\bar\chi_{1}\chi_{2})(\bar\chi_{3}\chi_{4})=- \frac{1}{4} \sum_{A=1}^{16} (\bar\chi_{1}\Gamma^{A}\chi_{4})(\bar\chi_{3}\Gamma_{A}\chi_{2}),\end{aligned}$$ where the 16 Dirac matrices $\Gamma^{A}$ and their inverses $\Gamma_{A}=(\Gamma^{A})^{-1}$, defined as $$\begin{aligned} \Gamma^{A}:=(1,\, \gamma^{m}, \, \Sigma^{mn}, \, \gamma^{5},\, \gamma^{5} \gamma^{m}), \label{basis} \\ \Gamma_{A}:=(\Gamma^{A})^{-1}=(1,\, -\gamma_{m}, \, -\Sigma_{mn}, \, -\gamma^{5},\, -\gamma^{5} \gamma_{m}), \nonumber\end{aligned}$$ form a complete basis in the space of $4\times4$ matrices. As a result, we obtain $$\begin{aligned} \label{firtz2} \Phi=\frac{1}{2}\varepsilon_{n_{1}n_{2}n_{3}n_{4}} \sum_{A=1}^{16} (\bar\psi^{,n_{1}}\Sigma_{m_{1}m_{2}}\Gamma^{A}\Sigma^{m_{1}m_{2}}\gamma^{5}\psi^{,n_{4}}) (\bar\psi^{,n_{3}}\Gamma_{A}\psi^{,n_{2}}).\end{aligned}$$ The r.h.s. of (\[firtz2\]) includes products of two bilinear covariants. The second (right) of them, $(\bar\psi^{,n_{3}}\Gamma_{A}\psi^{,n_{2}})$, is either symmetric or antisymmetric under the permutation $n_{3}\leftrightarrow n_{2}$. Only the antisymmetric covariants generated by $\Gamma_{A}= (- \gamma_{r}, -\Sigma_{rs})$ give a non-zero contribution to (\[firtz2\]). The first (left) covariant in (\[firtz2\]), corresponding to the above choice of $\Gamma_{A}$, includes either the matrix $L_{v}$ or $L_{t}$ given by the expressions $$\begin{aligned} \label{tvcovr} L_{v}=\Sigma_{m_{1}m_{2}}\gamma^{r}\Sigma^{m_{1}m_{2}}\gamma^{5}, \ \ \ L_{t}=\Sigma_{m_{1}m_{2}}\Sigma^{rs}\Sigma^{m_{1}m_{2}}\gamma^{5}.
\end{aligned}$$ Using the representation of $\Sigma_{m_{1}m_{2}}$ in the form $\Sigma_{m_{1}m_{2}}= (\eta_{m_{1}m_{2}}+ \gamma_{m_{1}}\gamma_{m_{2}})$ we obtain the following relations $$\begin{aligned} \Sigma_{m_{1}m_{2}}\Gamma^{A}\Sigma^{m_{1}m_{2}}= 4\Gamma^{A} - \gamma_{m_{1}}\gamma_{m_{2}}\Gamma^{A}\gamma^{m_{2}}\gamma^{m_{1}}, \label{idents} \\ \gamma_{m}\gamma^{r}\gamma^{m}= 2\gamma^{r}, \ \ \ \ \gamma_{m}\Sigma^{rs}\gamma^{m}=0 \nonumber\end{aligned}$$ which show that $$\begin{aligned} \label{lvzero} L_{v}=0, \ \ \ L_{t}= 4\Sigma^{rs}\gamma^{5}. \end{aligned}$$ Using the results (\[lvzero\]) permits us to present (\[firtz2\]) in the form $$\begin{aligned} \label{tvcovr'} \Phi=-2\varepsilon_{n_{1}n_{2}n_{3}n_{4}}(\bar\psi^{,n_{1}}\Sigma^{rs}\gamma^{5}\psi^{,n_{4}}) (\bar\psi^{,n_{3}}\Sigma_{rs}\psi^{,n_{2}}).\end{aligned}$$ Taking into account the symmetry property $(C\Sigma^{rs}\gamma^{5})^{ab}=(C\Sigma^{rs}\gamma^{5})^{ba}$ and changing the summation indices $n_{3}\leftrightarrow n_{1}$ one can present the expression (\[tvcovr'\]) in the form $$\begin{aligned} \label{tvcovrfin} \Phi=2\varepsilon_{n_{1}n_{2}n_{3}n_{4}}(\bar\psi^{,n_{1}}\Sigma_{rs}\psi^{,n_{2}}) (\bar\psi^{,n_{3}}\Sigma^{rs}\gamma^{5}\psi^{,n_{4}}).\end{aligned}$$ Matching (\[bilcovar'\]) and (\[tvcovrfin\]) yields the expected result $$\begin{aligned} \label{final} \Phi=-\Phi \ \ \ \Rightarrow \ \ \ \Phi=0, \ \ \ \ T^{(4)}=0\end{aligned}$$ which proves that the 4-derivative term $T^{(4)}$ (\[8vertex\]) actually vanishes, in agreement with the observation of [@Kuz]. Thus, the maximal number of derivatives present in the Volkov-Akulov action reduces to three and the action takes the following form $$\begin{aligned} \label{finaction'} S= \int d^{4}x [\, \frac{1}{a} + T_{m}^{m} + \frac{a}{2}(T_{m}^{m}T_{n}^{n}- T_{m}^{n}T_{n}^ {m}) + \frac{a^2}{3!}\sum_{p} (-)^{p} T_{m}^{m} T_{n}^{n} T_{l}^{l} \,]\end{aligned}$$ with the maximal number of NG fermions in the vertices equal to six. Matching the Lagrangian (\[finaction'\]) with the Komargodski and Seiberg Lagrangian [@Seib], which has the form $$\begin{aligned} \label{KS} {\cal L}_{KS}= -f^2 + i\partial_{\mu}\bar G\tilde\sigma^{\mu} G + \frac{1}{4f^2} {\bar G}^2\partial^{2} G^{2} - \frac{1}{16f^6} G^2{\bar G}^2\partial^{2} G^{2}\partial^{2} {\bar G}^{2},\end{aligned}$$ shows that they differ, because of the presence of a 4-derivative term including eight NG fermions in (\[KS\]). We shall explain that the difference originates from the different realizations of the NG field used in the VA and KS Lagrangians. The second question concerns the possibility of cancellations of this type in the couplings of the NG fermions with other fields.

Coupling of the fermionic Nambu-Goldstone fields with the Dirac field
=====================================================================

Here we show that the above-discussed cancellation of the 4-derivative terms also occurs in the couplings of the NG fermions with other fields. This is easy to see by applying Volkov's general method [@V1] for the construction of the phenomenological Lagrangian describing NG particles interacting with other fields. The extension of this method, aimed at including the supersymmetric couplings, is based on adjoining the differential $d\chi$ of a given field $\chi$, carrying arbitrary spinor and unitary indices, to the set of supersymmetric $\omega$-forms [@VA].
Then the above-described procedure for the construction of the minimal VA action, using only the $\omega$-forms (\[wforms\]), may be applied to the enlarged set of supersymmetric one-forms. The only restriction on the admissible $\chi$-terms is the demand of their invariance under the Lorentz and the internal symmetry groups. The effective actions must be homogeneous functions of degree four in the differentials $dx, \, d\psi$ and $d\chi$, and in general this restricts the number of derivatives $\partial_{m}\psi$ to be less than four. Then the considered cancellations are not relevant. However, if $d\chi$ is absent from the couplings, then the 4-derivative cancellation may take place and will reduce the number of derivatives $\partial_{m}\psi$ in the corresponding vertices. An instructive example of this possibility is given by the ${\cal N}=1$ minimal supersymmetric coupling of the fermionic NG particle with the massive Dirac field $\chi$ in the low energy limit [@VA] $$\begin{aligned} \label{fermi1} S= \int [ \, \frac{i}{2}\varepsilon_{mnpq} (\bar\chi{\gamma^m}d\chi-d\bar\chi{\gamma^m}\chi) \wedge\omega^{n}\wedge\omega^{p}\wedge\omega^{q} + \\ m{\bar\chi}\chi \varepsilon_{mnpq}\omega^{m}\wedge\omega^{n} \wedge\omega^{p}\wedge\omega^{q} \, ]. \nonumber \end{aligned}$$ The kinetic term of the Dirac field in (\[fermi1\]) includes the differential $d\chi$ and the cancellation is absent here. The maximal number of NG fermions in this term, $n_{NGf}$, equals six and the maximal number of their derivatives, $n_{NGd}$, equals three, just as in the case of the VA Lagrangian (\[finaction'\]) after the 4-derivative cancellation. The mass term in (\[fermi1\]) does not include $d\chi$; because of the minimality condition it therefore includes the supervolume form $d^{4}V$ (\[volum\]). Then the cancellation effect does work and results in the same maximal numbers $n_{NGf}=6$ and $n_{NGd}=3$ as in the kinetic term. To present (\[fermi1\]) in the standard notation of [@VA] we substitute the $\omega$-forms (\[pullb\]) into (\[fermi1\]) and obtain $$\begin{aligned} S= \int d^{4}x [\, R^{m}_{m} +a( R^{m}_{m} T^{n}_{n} - R^{m}_{n} T^{n}_{m}) + \frac{a^2}{2}\sum_{p} (-)^{p} R_{m}^{m} T_{n}^{n} T_{l}^{l} + \label{fermi2} \\ \frac{a^3}{3!}\sum_{p} (-)^{p} R_{m}^{m} T_{n}^{n} T_{l}^{l}T_{k}^{k} + m\bar\chi\chi\det W \,], \nonumber\end{aligned}$$ where $ R^{m}_{n}:= \frac{i}{2}( \bar\chi{\gamma^m}\partial_{n}\chi - \partial_{n}\bar\chi{\gamma^m}\chi)$ is the kinetic term for $\chi$. Using the expression for $\det W$ from (\[finaction'\]), the mass term in (\[fermi2\]) is presented as $$\begin{aligned} m\bar\chi\chi\det W = m\bar\chi\chi + a m\bar\chi\chi [\, T_{m}^{m} + \frac{a}{2}(T_{m}^{m}T_{n}^{n}- T_{m}^{n}T_{n}^ {m}) + \label{fmass} \\ \frac{a^2}{3!}\sum_{p} (-)^{p} T_{m}^{m} T_{n}^{n} T_{l}^{l}\, ], \nonumber\end{aligned}$$ where $T_{m}^{n}= -\frac{i}{4}\partial^{n}\bar\psi\gamma_{m}\psi$ in accordance with the definition (\[acterm\]). The mass term (\[fmass\]) contains at most $n_{NGf}=6$ NG fermions and, correspondingly, at most $n_{NGd}=3$ of their derivatives, as a consequence of the cancellation of the 4-derivative terms. These maximal numbers $n_{NGf}=6$ and $n_{NGd}=3$, characterizing the structure of the interaction action (\[fermi1\]), are the same as for the VA action (\[finaction'\]). The considered example shows that the cancellation effect takes place in the supersymmetric couplings containing the supervolume (\[volum\]).
So, we find that a sufficient condition for the 4-derivative cancellation in the couplings of the fermionic NG particles is the presence of $d^{4}V$ (\[volum\]) in them. This observation raises the question of reconstructing a constrained superfield action with couplings which coincides with the effective VA action.

Relation between the KS and\
the VA Lagrangians
============================

Despite the difference between the VA and the KS Lagrangians, it seems that they are equivalent up to a redefinition of the NG field. Here we outline a straightforward way to check this assumption and prove the equivalence of these Lagrangians up to first order in the constant $a$. The proof is analogous to the one considered in [@R], and further developed in [@HK], in the context of the nonlinear realization of the ${\cal N}=1$ Maxwell superfield and the component structure of supersymmetric nonlinear electrodynamics [@Kuz] (see additional references in these papers). To make the comparison between the VA Lagrangian (\[finaction'\]) $$\begin{aligned} \label{VAlag} {\cal L}_{VA}= \frac{1}{a} - \frac{i}{4}\bar\psi^{,m}\gamma_{m}\psi - \frac{a}{32}[(\bar\psi^{,m}\gamma_{m}\psi)^2{} - (\bar\psi^{,n}\gamma_{m}\psi)(\bar\psi^{,m}\gamma_{n}\psi) ] + a^2 T^{(3)} \end{aligned}$$ and the KS Lagrangian (\[KS\]) clearer, we present the latter in the bispinor Majorana representation, omitting the terms which have the form of a total derivative $$\begin{aligned} \label{KSlag} {\cal L}_{KS}= \frac{1}{a} - \frac{i}{4}{\bar g}^{,m}\gamma_{m} g - \frac{a}{16}[({\bar g}^{,m}g)^{2} + ({\bar g}^{,m}\gamma_{5}g)^{2}] \nonumber \\ - (\frac{a}{16})^3[({\bar g}g)^{2} + ({\bar g}\gamma_{5}g)^{2}] [(\partial^{2}({\bar g}g))^2 +(\partial^{2}({\bar g}\gamma_{5}g))^{2}],\end{aligned}$$ where $g:= \sqrt{2}G$ and $a:=-1/f^2$, and the relations [@Z_Gra] connecting bilinear covariants in the Weyl and the Majorana representations were used. To eliminate the 4-derivative term from ${\cal L}_{KS}$, the expression for the Majorana spinor field $g_a$ in terms of $\psi_a$ has to include its higher derivatives. So, we shall seek it in the form of a polynomial in the interaction constant $a$ $$\begin{aligned} \label{redef} g= \psi + a\chi + a^2\chi_{2} + a^3\chi_{3}, \end{aligned}$$ where the sought-for Grassmannian spinors $\chi, \, \chi_{2} , \, \chi_{3} $ depend only on $\psi, \, \bar\psi$ and their derivatives. Substituting the expansion (\[redef\]) into the KS Lagrangian (\[KSlag\]) and setting it equal to the VA Lagrangian (\[VAlag\]) produces the equations defining the spinors $\chi, \, \chi_{2}$ and $\chi_{3}$. Thus, the proof of the equivalence of the Lagrangians is reduced to solving these equations. Comparison of the terms of the same degree in the constant $a$ in the redefined KS and the original VA Lagrangians provides an algorithmic way to generate the equations in question. In this way we observe that the spinors $\chi_{2}$ and $\chi_{3}$ do not contribute to the terms linear in $a$ in the redefined ${\cal L}_{KS}$. Thus, it is easy to obtain the equation defining the spinor $\chi$. Indeed, substituting (\[redef\]) into (\[KSlag\]) and omitting a total derivative term brings the kinetic term to the form $$\begin{aligned} \label{kinet} - \frac{i}{4}{\bar g}^{,m}\gamma_{m} g = - \frac{i}{4}\bar\psi^{,m}\gamma_{m}\psi - \frac{i}{2}a(\bar\psi^{,m}\gamma_{m}\chi) + {\cal O}(a^2).\end{aligned}$$ The next relevant term from ${\cal L}_{KS}$ (\[KSlag\]) is the term linear in $a$ and quartic in the number of fields.
Summing up the mentioned terms results in the redefined KS Lagrangian to linear order in $a$ $$\begin{aligned} \label{KSlagred} {\cal L}_{KS}= \frac{1}{a} - \frac{i}{4}\bar\psi^{,m}\gamma_{m}\psi - \frac{i}{2}a(\bar\psi^{,m}\gamma_{m}\chi) \nonumber \\ - \frac{a}{16}[({\bar\psi}^{,m}\psi)^{2} + ({\bar\psi}^{,m}\gamma_{5}\psi)^{2}] + {\cal O}(a^2).\end{aligned}$$ Matching the Lagrangians (\[KSlagred\]) and (\[VAlag\]) yields the sought-for equation for $\chi$ $$\begin{aligned} \label{eqn1} i(\bar\psi^{,m}\gamma_{m}\chi) = - \frac{1}{8}[({\bar\psi}^{,m}\psi)^{2} + ({\bar\psi}^{,m}\gamma_{5}\psi)^{2}] + \nonumber \\ \frac{1}{16}[(\bar\psi^{,m}\gamma_{m}\psi)^2{} - (\bar\psi^{,n}\gamma_{m}\psi)(\bar\psi^{,m}\gamma_{n}\psi)].\end{aligned}$$ To solve Eq. (\[eqn1\]) we observe that its terms have the common multiplier $\bar\psi^{,m}$, which can be cancelled, resulting in $$\begin{aligned} \label{eqn2} \gamma_{m}\chi= \frac{i}{8}[\psi({\bar\psi}_{,m}\psi) + \gamma_{5}\psi({\bar\psi}_{,m}\gamma_{5}\psi)] - \nonumber \\ \frac{i}{16}[\gamma_{m}\psi (\bar\psi^{,n}\gamma_{n}\psi) -\gamma_{n}\psi (\bar\psi^{,n}\gamma_{m}\psi)].\end{aligned}$$ Multiplication of Eq. (\[eqn2\]) by $\gamma^{m}$ results in the general solution $$\begin{aligned} \label{solut} \chi= - \frac{i}{32}[(\gamma_{m}\psi)({\bar\psi}^{,m}\psi) + (\gamma_{m}\gamma_{5}\psi)({\bar\psi}^{,m}\gamma_{5}\psi)] - \nonumber \\ \frac{i}{64}[3\psi (\bar\psi^{,m}\gamma_{m}\psi) + (\Sigma_{mn}\psi) (\bar\psi^{,n}\gamma^{m}\psi)].\end{aligned}$$ Substitution of (\[solut\]) in (\[redef\]) yields the explicit expression connecting the KS and the VA realizations of the NG field up to terms linear in $a$ $$\begin{aligned} \label{connect} \sqrt{2}G= \psi[1 + \frac{3ia}{64}(\bar\psi^{,m}\gamma_{m}\psi)] - \frac{ia}{32}[(\gamma_{m}\psi)({\bar\psi}^{,m}\psi) + \nonumber \\ (\gamma_{m}\gamma_{5}\psi)({\bar\psi}^{,m}\gamma_{5}\psi) - \frac{1}{2} (\Sigma_{mn}\psi)(\bar\psi^{,n}\gamma^{m}\psi)] + {\cal O}(a^2).\end{aligned}$$ The quadratic terms in $a$ are restored by substituting $\chi$ (\[solut\]) into the expansion (\[redef\]) and subsequently repeating the above procedure for the terms quadratic in $a$. Having done this, one can find $\chi_2$ and then repeat a similar procedure for the terms cubic in the constant $a$. As a result, one can obtain the explicit expression for the KS field $G$ in terms of the VA field $\psi$ and conclude the expected equivalence between the KS and VA Lagrangians.

Discussions
===========

Here we presented an independent algebraic proof of the cancellation of 4-derivative terms in the $D=4 \ {\cal N}=1$ VA action using the Majorana bispinor representation and the Fierz rearrangements. The Majorana representation may simplify the investigation of such cancellations in the case of extended supersymmetries and/or higher-dimensional spaces. We observed that the cancellation results in a difference between the Komargodski-Seiberg superfield action [@Seib] and the Volkov-Akulov action [@VA]. The difference gives rise to the question of whether the KS Lagrangian is equivalent to the VA Lagrangian. The second question arising from the cancellation concerns its presence in the interactions of the NG fermions with other fields. We found that the cancellation occurs in the coupling of the fermionic NG field with massive Dirac fields.
It yields the maximal numbers of NG fermions, $n_{NGf}$, and of their derivatives, $n_{NGd}$, in the interaction Lagrangian, which equal six and three, respectively. The maximal numbers $n_{NGf}=6$ and $n_{NGd}=3$ are the same as in the VA action describing the self-interactions of the NG fermions. This observation poses the problem of reconstructing a superfield Lagrangian of interactions which uses a realization of the NG fermionic field coinciding with the one in the VA Lagrangian with couplings. A way to resolve these issues is to construct the explicit expression connecting the KS and the VA realizations of the NG field. The representation of the KS fermion field in terms of the VA field has to contain terms with its derivatives. We discussed this problem and found the explicit formula connecting the VA and the KS realizations of the NG field up to first order in the interaction constant $a$. The substitution of this expression into the KS action reduced it to the VA action. This points to the expected equivalence of these actions in all orders in $a$. The equivalence problem posed in [@Zhep] has recently been discussed in [@L3W], where some difficulties appearing on the way were pointed out. Taking into account the recent application of the formalism of $ {\cal N}=1$ constrained superfields in the minimal supersymmetric standard model (MSSM), as well as its generalizations to $ {\cal N}$-extended supersymmetric models (see e.g. [@ADGT], [@AB]), it is interesting to study the above-considered kind of cancellations in these models. The availability of an explicit formula connecting the VA and KS realizations of the NG field could simplify the phenomenological analysis of these and other new models.

[**Acknowledgments**]{} I would like to thank Paolo Di Vecchia and Fawad Hassan for the interesting discussions and critical remarks and Sergei Kuzenko for the letter concerning the paper [@Kuz]. Also, I am indebted to Eugenii Ivanov, Sergei Ketov, Zohar Komargodski and Sergei Kuzenko for their helpful comments concerning [@Zhep] and for the new references added. I am grateful to the Department of Physics of Stockholm University and the Nordic Institute for Theoretical Physics Nordita for kind hospitality. This research was supported in part by Nordita.

[99]{} S. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{}, 2239 (1969);\ C. Callan, S. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{}, 2247 (1969). D.V. Volkov, Phenomenological Lagrangian of the Goldstone particle interactions, Preprint ITF-69-75, Kiev, (1969) (in Russian); Sov. J. Part. Nucl. [**4**]{}(1), 3 (1973). D.V. Volkov, Phenomenological Lagrangians invariant under symmetry groups including the Poincare group as a subgroup, Lebedev Phys. Inst. Preprint N114, Moscow, (1971) (in Russian). D.V. Volkov and V.P. Akulov, JETP Letters [**16**]{}, 478 (1972); Phys. Lett. [**B 46**]{}, 109 (1973); Theor. Math. Phys. [**18**]{}, 28 (1974). B. Zumino, Nucl. Phys. [**B 127**]{}, 189 (1977). E.A. Ivanov and A.A. Kapustnikov, J. Phys. [**A 11**]{}, 2375 (1978);\ J. Phys. [**G 8**]{}, 167 (1982). M. Rocek, Phys. Rev. Lett. [**41**]{}, 451 (1978). J. Wess and B. Zumino, Phys. Lett. [**B 49**]{}, 52 (1974). U. Lindstrom and M. Rocek, Phys. Rev. [**D 19**]{}, 2300 (1979). J. Wess and J. Bagger, Supersymmetry and supergravity, Princeton University Press, Princeton, 1992. S.M. Kuzenko and S.A. McCarthy, JHEP [**05**]{}, 012 (2005). T. Hatanaka and S.V. Ketov, Phys. Lett. [**B 58**]{}, 265 (2004); arXiv: hep-th/0310152. Z. Komargodski and N.
Seiberg, JHEP [**0909**]{} 066, (2009); arXiv: 0907.2441v3 \[hep-th\]. A.A. Zheltukhin, Mod. Phys. Lett. [**21**]{}, No. 28, 2117 (2006); Dmitrij Volkov, super-Poincare group and Grassmann variables, Ann. Phys. (Berlin) [**19**]{}, No. 3-5, 177 (2010); arXiv:0911.0550 \[hep-th\]; D.V. Volkov and V.A. Soroka, JETP Letters [**18**]{}, 312 (1973); Theor. Math. Phys. 20, 829 (1974). D.V. Volkov, Supergravity before 1976, Proceedings of International Conference “History of Original Ideas and Basic Discoveries in Particle Physics”. Eds. H.B. Newman and T. Ypsilantis, Plenum Press (NY) 1996, 663-675. A.A. Zheltukhin, On the cancellation of 4-derivative terms in the Volkov-Akulov action; arXiv:1003.4143 \[hep-th\]. H. Liu, H. Luo, M. Luo and L. Wang, Leading order actions of Goldstino fields; arXiv:1005.0231 \[hep-th\]. I. Antoniadis, E. Dudas , D.M. Ghilencea, P. Tziveloglou, Non-linear MSSM, arXiv:1006.1662 \[hep-ph\]. I. Antoniadis, M. Buican, Goldstinos, Supercurrents and Metastable SUSY Breaking in N=2 Supersymmetric Gauge Theories, arXiv:1005.3012 \[hep-th\]. [^1]: e-mail: [email protected] [^2]: Paolo Di Vecchia attracted my attention to Ref. [@Seib] [^3]: Sergei Kuzenko kindly informed me about Ref. [@Kuz]. [^4]: We use algebraic agreements accepted in [@Z_Gra].
--- abstract: 'Maxwell equations (Faraday and Ampere-Maxwell laws) can be presented as a three component equation in a way similar to the two component neutrino equation. However, in this case, the electric and magnetic Gauss’s laws cannot be derived from first principles. We have shown how all Maxwell equations can be derived simultaneously from first principles, similar to those which have been used to derive the Dirac relativistic electron equation. We have also shown that equations for massless particles, derived by Dirac in 1936, lead to the same result.$\ $The complex wave function, being a linear combination of the electric and magnetic fields, is a locally measurable and well understood quantity. Therefore Maxwell equations should be used as a guideline for proper interpretations of quantum theories.' address: | Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva, Israel\ e-mail: [email protected] author: - 'A. Gersten' date: 'May 7, 1999' title: | [Published in Foundations of Physics Letters, vol. 12, pp. 291-8 (1998)]{}\ Maxwell equations as the one-photon quantum equation\ ---

The Maxwell equations (except for the electric and magnetic Gauss’s laws) can be presented by a three component equation in a way similar to the two component neutrino equation. This was already known to Oppenheimer [@oppenheimer] and to Majorana [@mignani], [@gersten]. Also this type of equation is a particular case of a more general equation for any spin derived by Weinberg [@weinberg]. There is a continuing interest in this equation even to this day [@good], [@tucker], [@ahluwalia], [@dvoe]. However, one of the drawbacks of the above derivations is that the electric and magnetic Gauss’s laws are not derived from first principles. The aim of the present letter is to complement the above mentioned works, and to derive all Maxwell equations directly from a decomposition similar to that which was used to derive the Dirac relativistic electron equation. The Dirac equation is derived from the relativistic condition on the energy $E,$ mass $m,$ and momentum ${\bf \vec{p}}$: $$\left( E^{2}-c^{2}{\bf \vec{p}}^{2}-m^{2}c^{4}\right) I^{(4)}\Psi =0, \label{n1}$$ where $I^{(4)}$ is the $4\times 4$ unit matrix and $\Psi $ is a four component column (bispinor) wave function. Eq.
(\[n1\]) is decomposed into $$\left[ EI^{(4)}+\left( \begin{array}{cc} mc^{2}I^{(2)} & c{\bf \vec{p}\cdot \vec{\sigma}} \\ c{\bf \vec{p}\cdot \vec{\sigma}} & -mc^{2}I^{(2)} \end{array} \right) \right] \left[ EI^{(4)}-\left( \begin{array}{cc} mc^{2}I^{\left( 2\right) } & c{\bf \vec{p}}\cdot {\bf \vec{\sigma}} \\ c{\bf \vec{p}\cdot \vec{\sigma}} & -mc^{2}I^{\left( 2\right) } \end{array} \right) \right] \Psi =0, \label{n2}$$ where $I^{\left( 2\right) }$ is the $2\times 2$ unit matrix and ${\bf \vec{\sigma}}$ is the Pauli spin one-half vector matrix with the components $$\sigma _{x}=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) ,\quad \sigma _{y}=\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) ,\quad \sigma _{z}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) ,\quad I^{\left( 2\right) }=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) . \label{n3}$$ The two component neutrino equation can be derived from the decomposition $$\left( E^{2}-c^{2}{\bf \vec{p}}^{2}\right) I^{\left( 2\right) }\psi =\left[ EI^{\left( 2\right) }-c{\bf \vec{p}\cdot \vec{\sigma}}\right] \left[ EI^{\left( 2\right) }+c{\bf \vec{p}\cdot \vec{\sigma}}\right] \psi =0, \label{n6}$$ where $\psi $ is a two component spinor wavefunction. We shall derive the photon equation from the following decomposition $$\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) I^{\left( 3\right) }{\bf =}\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) -\left( \begin{array}{ccc} p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\ p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\ p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2} \end{array} \right) =0, \label{n7}$$ where $I^{\left( 3\right) }$ is a $3\times 3$ unit matrix, and ${\bf \vec{S}} $ is a spin one vector matrix with components $$S_{x}=\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{array} \right) ,\quad S_{y}=\left( \begin{array}{ccc} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{array} \right) ,\quad S_{z}=\left( \begin{array}{ccc} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) ,\quad I^{\left( 3\right) }=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) , \label{n8}$$ and with the properties $$\left[ S_{x},S_{y}\right] =iS_{z},\quad \left[ S_{z},S_{x}\right] =iS_{y},\quad \left[ S_{y},S_{z}\right] =iS_{x},\quad {\bf \vec{S}}^{2}=2I^{\left( 3\right) }. \label{n9}$$ The decomposition (\[n7\]) can be verified directly by substitution. It will be crucial to note that the matrix on the right hand side of Eq. (\[n7\]) can be rewritten as: $$\left( \begin{array}{ccc} p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\ p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\ p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2} \end{array} \right) =\left( \begin{array}{c} p_{x} \\ p_{y} \\ p_{z} \end{array} \right) \left( \begin{array}{ccc} p_{x} & p_{y} & p_{z} \end{array} \right) . \label{n10}$$ From Eqs. $\left( \ref{n7}-\ref{n8}\right) $ and $\left( \ref{n10}\right) $, the photon equation can be obtained from $$\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}=}\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}-}\left( \begin{array}{c} p_{x} \\ p_{y} \\ p_{z} \end{array} \right) \left( {\bf \vec{p}\cdot \vec{\Psi}}\right) =0, \label{n11}$$ where ${\bf \vec{\Psi}}$ is a 3 component (column) wave function.
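The identity between the first two members of (\[n7\]), together with the properties (\[n9\]), is also easy to confirm symbolically. The following sketch is purely illustrative; it simply re-checks the matrix algebra with the explicit spin-one matrices of (\[n8\]) entered by hand.

```python
# Symbolic re-check of the decomposition (n7) and the spin-one relations (n9).
# Illustrative only: E, c, p_x, p_y, p_z are commuting symbols.
import sympy as sp

E, c, px, py, pz = sp.symbols('E c p_x p_y p_z')
I3 = sp.eye(3)
Sx = sp.Matrix([[0, 0, 0], [0, 0, -sp.I], [0, sp.I, 0]])
Sy = sp.Matrix([[0, 0, sp.I], [0, 0, 0], [-sp.I, 0, 0]])
Sz = sp.Matrix([[0, -sp.I, 0], [sp.I, 0, 0], [0, 0, 0]])

pS = px*Sx + py*Sy + pz*Sz
p = sp.Matrix([px, py, pz])

lhs = (E/c*I3 - pS) * (E/c*I3 + pS) - p*p.T          # right-hand side of (n7)
rhs = (E**2/c**2 - (px**2 + py**2 + pz**2)) * I3     # left-hand side of (n7)
assert (lhs - rhs).expand() == sp.zeros(3, 3)

assert Sx*Sy - Sy*Sx == sp.I*Sz                      # [S_x, S_y] = i S_z
assert Sx**2 + Sy**2 + Sz**2 == 2*I3                 # S^2 = 2 I
print("decomposition (n7) and properties (n9) hold")
```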
Eq. (\[n11\]) will be satisfied if the two equations $$\begin{aligned} \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}} &=&0, \label{n12} \\ {\bf \vec{p}\cdot \vec{\Psi}} &=&0, \label{n13}\end{aligned}$$ are simultaneously satisfied. For real energies and momenta, complex conjugation of Eqs. (\[n11\]) and (\[n8\]) leads to $$\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}^{\ast }{\bf =}\left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast }{\bf -}\left( \begin{array}{c} p_{x} \\ p_{y} \\ p_{z} \end{array} \right) \left( {\bf \vec{p}\cdot \vec{\Psi}}^{\ast }\right) =0, \label{n13a}$$ where ${\bf \vec{\Psi}}^{\ast }$ is the complex conjugate of ${\bf \vec{\Psi}}.$ Eq. (\[n13a\]) will be satisfied if the two equations $$\begin{aligned} \left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast } &=&0, \label{n13b} \\ {\bf \vec{p}\cdot \vec{\Psi}}^{\ast } &=&0, \label{n13c}\end{aligned}$$ are simultaneously satisfied. Eqs. (\[n11\]) and (\[n13a\]) are the two different possible decompositions of their left-hand side. Eqs. (\[n13a\]-\[n13c\]) do not contain new information as they are only the complex conjugates of Eqs. (\[n11\]-\[n13\]). On the other hand the physical interpretation is different, namely Eq. (\[n12\]) is the negative helicity equation, while Eq. (\[n13b\]) is the positive helicity equation. It is interesting to note that another set of equivalent equations is also possible. Eqs. (\[n11\]) and (\[n13\]) can be rewritten as $$\left( \begin{array}{c} p_{x} \\ p_{y} \\ p_{z} \end{array} \right) \left( {\bf \vec{p}\cdot \vec{\Psi}}\right) =\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}-}\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}=0,} \label{n13d}$$ which will be satisfied if the two equations $$\left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}=0,\quad \left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}=0,} \label{n13f}$$ or their equivalents $$\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast }=0,\quad \left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}^{\ast }{\bf =0,} \label{n13g}$$ are simultaneously satisfied. Maxwell equations will be derived from Eqs. (\[n12\]-\[n13\]). We will show below that if in Eqs. (\[n12\]) and (\[n13\]) the quantum operator substitutions $$E\Longrightarrow i\hbar \frac{\partial }{\partial t},\quad {\bf \vec{p}\Longrightarrow -}i\hbar \nabla ,\quad \label{n14}$$ and the wavefunction substitution $${\bf \vec{\Psi}=\vec{E}-}i{\bf \vec{B},} \label{n15}$$ are made, then the Maxwell equations are obtained. In Eq. (\[n15\]) ${\bf \vec{E}}$ and ${\bf \vec{B}}$ are the electric and magnetic fields respectively. Indeed, one can easily check from Eqs. (\[n8\]) and (\[n14\]) that the following identity is satisfied $$\left( {\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}=}\hbar \nabla \times {\bf \vec{\Psi}.} \label{n16}$$
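The identity (\[n16\]) can likewise be re-checked symbolically. In the sketch below (illustrative only) the operator ${\bf \vec{p}\cdot \vec{S}}$, with ${\bf \vec{p}}=-i\hbar\nabla$, is applied to a vector of arbitrary smooth functions and compared with $\hbar\nabla\times{\bf \vec{\Psi}}$.

```python
# Symbolic re-check of (n16): (p.S) Psi = hbar * curl(Psi), with p = -i*hbar*grad.
# Illustrative only; Psi1, Psi2, Psi3 are arbitrary functions of x, y, z.
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
Psi = sp.Matrix([sp.Function(f'Psi{i}')(x, y, z) for i in (1, 2, 3)])

Sx = sp.Matrix([[0, 0, 0], [0, 0, -sp.I], [0, sp.I, 0]])
Sy = sp.Matrix([[0, 0, sp.I], [0, 0, 0], [-sp.I, 0, 0]])
Sz = sp.Matrix([[0, -sp.I, 0], [sp.I, 0, 0], [0, 0, 0]])

dPsi_dx = Psi.applyfunc(lambda f: sp.diff(f, x))
dPsi_dy = Psi.applyfunc(lambda f: sp.diff(f, y))
dPsi_dz = Psi.applyfunc(lambda f: sp.diff(f, z))

pS_Psi = -sp.I*hbar*(Sx*dPsi_dx + Sy*dPsi_dy + Sz*dPsi_dz)   # (p_x S_x + p_y S_y + p_z S_z) Psi

curl = sp.Matrix([sp.diff(Psi[2], y) - sp.diff(Psi[1], z),
                  sp.diff(Psi[0], z) - sp.diff(Psi[2], x),
                  sp.diff(Psi[1], x) - sp.diff(Psi[0], y)])

assert sp.simplify(pS_Psi - hbar*curl) == sp.zeros(3, 1)
print("(p.S) Psi equals hbar * curl Psi")
```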
From Eqs. (\[n12\]-\[n13\]) and (\[n14\], \[n16\]) we obtain $$\frac{i\hbar }{c}\frac{\partial }{\partial t}{\bf \vec{\Psi}=}-\hbar {\bf \nabla }\times {\bf \vec{\Psi},} \label{n17}$$ $$-i\hslash \nabla \cdot {\bf \vec{\Psi}=}0. \label{n18}$$ The constant $\hslash $ can be cancelled out in Eqs. (\[n17\]-\[n18\]), and after replacing ${\bf \vec{\Psi}}$ by Eq. (\[n15\]), the following equations are obtained $${\bf \nabla }\times \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) {\bf =-}i\frac{1}{c}\frac{\partial \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) }{\partial t}, \label{n19}$$ $${\bf \nabla }\cdot \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) {\bf =}0. \label{n20}$$ If in Eqs. (\[n19\]-\[n20\]) the electric and magnetic fields are real, the separation into the real and imaginary parts will lead to the Maxwell equations $${\bf \nabla }\times {\bf \vec{E}=-}\frac{1}{c}\frac{\partial {\bf \vec{B}}}{\partial t} \label{1}$$ $${\bf \nabla }\times {\bf \vec{B}=}\frac{1}{c}\frac{\partial {\bf \vec{E}}}{\partial t} \label{2}$$ $${\bf \nabla }\cdot {\bf \vec{E}}=0 \label{3}$$ $${\bf \nabla }\cdot {\bf \vec{B}}=0. \label{4}$$ One should note that the Planck constant $\hslash $ was cancelled out earlier, in Eqs. (\[n17\]-\[n18\]), which explains its absence in the Maxwell equations. Another comment should be made here: starting from equations (\[n13f\]) and (\[n14\]-\[n15\]) one can obtain equations which are equivalent to the Maxwell equations (without sources), namely $${\bf \nabla }\times {\bf \vec{E}=-}\frac{1}{c}\frac{\partial {\bf \vec{B}}}{\partial t},\quad {\bf \nabla }\times {\bf \vec{B}=}\frac{1}{c}\frac{\partial {\bf \vec{E}}}{\partial t},\quad \left( \frac{\partial ^{2}}{c^{2}\partial t^{2}}-{\bf \nabla }^{2}\right) \overrightarrow{{\bf E}}=0,\quad \left( \frac{\partial ^{2}}{c^{2}\partial t^{2}}-{\bf \nabla }^{2}\right) \overrightarrow{{\bf B}}=0, \label{4a}$$ while the Gauss laws (\[3\]-\[4\]) are satisfied on the basis of Eq. (\[n13d\]). Dirac [@dirac] and Wigner [@wigner],[@bacry] have derived relativistic equations for massless particles of any spin from which the Gauss laws, for the spin one case, can be derived. Moreover, Wigner [@wigner] has shown that any finite-component massless field has only two possible helicity states. Dirac has derived equations for massless particles with spin $k$, which in the ordinary vector notation [@dirac] are $$\left\{ kp_{t}+S_{x}p_{x}+S_{y}p_{y}+S_{z}p_{z}\right\} \psi =0, \label{a1}$$ $$\left\{ kp_{x}+S_{x}p_{t}-iS_{y}p_{z}+iS_{z}p_{y}\right\} \psi =0, \label{a2}$$ $$\left\{ kp_{y}+S_{y}p_{t}-iS_{z}p_{x}+iS_{x}p_{z}\right\} \psi =0, \label{a3}$$ $$\left\{ kp_{z}+S_{z}p_{t}-iS_{x}p_{y}+iS_{y}p_{x}\right\} \psi =0, \label{a4}$$ where the $p_{n}$ are the momenta, $p_{t}=E/c$, $E$ the energy, $\psi $ a $\left( 2k+1\right) $ component wave function and $S_{n}$ are the spin $\left( 2k+1\right) \times \left( 2k+1\right) $ matrices which satisfy $$\left[ S_{x},S_{y}\right] =iS_{z},\quad \left[ S_{z},S_{x}\right] =iS_{y},\quad \left[ S_{y},S_{z}\right] =iS_{x},\quad S_{x}^{2}+S_{y}^{2}+S_{z}^{2}=k(k+1)I^{\left( k\right) }, \label{a5}$$ and $I^{\left( k\right) }$ is a $\left( 2k+1\right) \times \left( 2k+1\right) $ unit matrix. As we shall see below, for the case $k=1$, Eq. (\[a1\]) will lead to the Faraday and Ampere-Maxwell laws. The Gauss laws can be derived from Eqs. (\[a1\]-\[a4\]) in a way which will be described below. Eqs.
(\[a1\]-\[a4\]) were analyzed extensively by Bacry [@bacry], who derived them using Wigner’s condition [@wigner] on the Pauli-Lubanski vector $W^{\mu }$ for massless fields $$W^{\mu }=kp^{\mu },\quad \mu =x,y,z,t. \label{a6}$$ Let us now demonstrate how Eq. (\[n13\]) can be derived from Eqs. (\[a1\]-\[a4\]). Following Dirac [@dirac], one replaces Eqs. (\[a1\]-\[a4\]), which are linearly dependent, with the Eq. (\[a1\]) and 3 conditions on the wave function, which are obtained by substituting $p_{t}$ from Eq. (\[a1\]) into Eqs. (\[a2\]-\[a4\]) $$\left\{ kp_{t}+S_{x}p_{x}+S_{y}p_{y}+S_{z}p_{z}\right\} \psi =0, \label{b1}$$ $$\left\{ (k^{2}-S_{x}^{2})p_{x}+(ikS_{z}-S_{x}S_{y})p_{y}-\left( ikS_{y}+S_{x}S_{z}\right) p_{z}\right\} \psi =0, \label{b2}$$ $$\left\{ (k^{2}-S_{y}^{2})p_{y}+(ikS_{x}-S_{y}S_{z})p_{z}-\left( ikS_{z}+S_{y}S_{x}\right) p_{x}\right\} \psi =0, \label{b3}$$ $$\left\{ (k^{2}-S_{z}^{2})p_{z}+(ikS_{y}-S_{z}S_{x})p_{x}-\left( ikS_{x}+S_{z}S_{y}\right) p_{y}\right\} \psi =0. \label{b5}$$ For the case $k=1$, $\psi \equiv \vec{\Psi}$, and using the representation (\[n8\]) for the spin matrices, one obtains for Eq. (\[b2\]) $$\left( \begin{array}{ccc} p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \vec{\Psi}=0, \label{b6}$$ for Eq. (\[b3\]) $$\left( \begin{array}{ccc} 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \end{array} \right) \vec{\Psi}=0, \label{b7}$$ and for Eq. (\[b5\]) $$\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \end{array} \right) \vec{\Psi}=0, \label{b8}$$ which all are equivalent to Eq. (\[n13\]). It is interesting to note that $$\begin{aligned} \left( \begin{array}{ccc} p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\ p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\ p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2} \end{array} \right) &=&\left( \begin{array}{ccc} p_{x} & 0 & 0 \\ p_{y} & 0 & 0 \\ p_{z} & 0 & 0 \end{array} \right) \left( \begin{array}{ccc} p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \label{b9} \\ &=&\left( \begin{array}{ccc} 0 & p_{x} & 0 \\ 0 & p_{y} & 0 \\ 0 & p_{z} & 0 \end{array} \right) \left( \begin{array}{ccc} 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \end{array} \right) \label{b10} \\ &=&\left( \begin{array}{ccc} 0 & 0 & p_{x} \\ 0 & 0 & p_{y} \\ 0 & 0 & p_{z} \end{array} \right) \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \end{array} \right) , \label{b11}\end{aligned}$$ from which we deduce that the equations (\[n12\]) and (\[n13\]) and the decomposition (\[n7\]), can be realized on the basis of Eq. (\[b1\]) and one of the equations (\[b2\]),(\[b3\]) and (\[b5\]). Above, we have shown how all Maxwell equations can be derived simultaneously from first principles, similar to those which have been used to derive the Dirac relativistic electron equation. Moreover the wave function ${\bf \vec{% \Psi}}$ has a definite local classical interpretation in terms of the electric and magnetic fields, as given by Eq. (\[n15\]), which are locally measurable and well understood quantities. Therefore Maxwell equations should be used as a guideline for proper interpretations of quantum theories. J.R. Oppenheimer, Phys. Rev. [**38**]{}, 725 (1931). R. Mignani, E. Recami and M. Boldo, Lett. Nuovo Cimento [**11**]{}, 568 (1974). A. Gersten, Ann. Fond. L. de Broglie, [**21**]{}, 67 (1996). S. Weinberg, Phys. Rev. [**B133**]{}, 1318 (1964), [**B134**]{}, 882 (1964). R.H. Good and T.J. 
Nelson, [*Classical Theory of Electric and Magnetic Fields*]{}, Academic Press, New York (1971), Chapter 11. R.H. Tucker and C.L. Hammer, Phys. Rev. [**D3**]{}, 2448 (1971). D.V. Ahluwalia, M.B. Johnson and T. Goldman, Phys. Lett. [**B316**]{}, 102 (1993). V.V. Dvoeglazov, Helv. Phys. Acta, [**70**]{}, 697 (1997). P.A.M. Dirac, Proc. Roy. Soc. [**A155**]{}, 447-59 (1936). E.P. Wigner, Ann. Math., [**40,**]{} 149 (1939). H. Bacry, Nuovo Cimento, [**A32**]{}, 448-60 (1976).
--- abstract: 'Given a Stein manifold with the density property, we show that under a suitable topological condition it is possible to prescribe derivatives at a finite number of points to automorphisms depending holomorphically on a Stein parameter. This is an Oka property of the manifold and is related to its holomorphic flexibility.' address: - 'Matematisk Institutt, Universitetet i Oslo. Postboks 1053, Blindern. 0316 OSLO, Norway' - 'IMFM, University of Ljubljana, Jandranska Ulica 19, 1000 Ljubljana, Slovenia' author: - 'Alexandre Ramos-Peon' - Riccardo Ugolini bibliography: - 'bib2.bib' title: Parametric Jet Interpolation for Stein manifolds with the Density Property --- Introduction {#sec:intro} ============ Given a (connected) complex manifold $X$, it is often of interest to study its group of holomorphic automorphisms, denoted by ${\operatorname{Aut}}(X)$. In many cases this object can not be described in a simple way, but one may determine properties of this group and of its action on $X$. A basic property is transitivity, which can be seen as a special case of $N$-transitivity. Given $N \in {\mathbb{N}}$ and a group $G$ equipped with a left action on a set $X$, we say that the action is $N$-transitive if for any two subsets $\{a_1,\dots,a_N\}, \{b_1,\dots,b_N\}$ of $X$ consisting of $N$ distinct elements, there exists $g\in G$ such that $ga_j=b_j$ for $j=1,\dots,N$. The action is *infinitely transitive* if it is $N$-transitive for every $N \in {\mathbb{N}}$. When one deals with complex manifolds and their group of automorphisms, it is possible to consider not only pointwise interpolation, but also *jet interpolation*. This means that we wish to find holomorphic automorphisms of $X$ with prescribed values of all derivatives (up to a given order). The first result in this direction, due to Forstnerič [@F-Interpolation] for $X={\mathbb{C}}^n$, uses the dense subgroup of ${\operatorname{Aut}}({\mathbb{C}}^n)$ generated by shears, which are some well-known automorphisms that appeared in the seminal paper by Rosay and Rudin about self-maps of ${\mathbb{C}}^n$ [@RosayRudin]. An example of more general complex manifolds for which ${\operatorname{Aut}}(X)$ acts infinitely transitively is provided by Stein manifolds with the *density property*. A complex manifold $X$ has the *density property* if the Lie algebra generated by [complete]{} vector fields (those whose flows are defined for all times) is dense in the Lie algebra of all vector fields in the compact-open topology. This notion (see Definition \[def:DP\] below for a more detailed discussion), first introduced by Varolin [@Varolin1], has turned out to be fruitful, allowing for new insights into classical questions. Many constructions are possible in Stein manifolds with the density property and there has been an ongoing effort to determine which manifolds have the density property. For a complete account, we refer the interested reader to the monograph [@F Chapter 4] and the references therein. In another paper [@VarolinII], Varolin proved a jet interpolation theorem for holomorphic automorphisms of a Stein manifold with the density property. In this paper, we generalize his result to holomorphic families of jets. Given an $N$-tuple $\hat{x}=(\hat{x}_1,\dots,\hat{x}_N)$ of distinct points in $X$, we can consider the space $Y$ of all possible collections of nondegenerate $k$-jets at the points $\{ \hat{x}_i \}_{i=1}^N$ such that their respective images are distinct (see Section 2 for formal definitions). 
The following is the main theorem of this paper. \[main\] Let $W$ be a Stein manifold, $X$ a Stein manifold with the density property, $k,N\in {\mathbb{N}}_{\geq 0}$, and $(\hat{x}_1,\dots,\hat{x}_N)$ an $N$-tuple of distinct points in $X$. For each $i=1,\dots,N$ let $\gamma_i^w$ be a nondegenerate $k$-jet at $\hat{x}_i$ depending holomorphically on the parameter $w\in W$. Then there exists a null-homotopic parametric family of automorphisms $F^w\in{\operatorname{Aut}}(X)$ depending holomorphically on $w \in W$, such that the $k$-jet of $F^w$ at $\hat{x}_i$ equals $\gamma_i^w$ for all $i=1,\dots,N$ and $w\in W$ if and only if $\gamma=(\gamma_1,\dots,\gamma_N):W\to Y$ is null-homotopic. The special case $k=0$ corresponds to pointwise interpolation and was proved by Kutzschebauch and the first author [@K-R]. For arbitrary $k,N \in {\mathbb{N}}$ and $X={\mathbb{C}}^n, \ n>1$, this result was proved by the second author [@Ugolini]. The present paper is part of a common effort to describe so-called Oka properties of groups of automorphisms. The main results in this direction are due to Forstnerič and Lárusson in [@FrancFinnur], where the authors focus on ${\operatorname{Aut}}({\mathbb{C}}^n)$ and many of its subgroups. We refer to [@F Chapters 5 & 6] for a comprehensive survey on Oka theory. We now outline the proof strategy for Theorem \[main\]:

(i) Modify the homotopy of jets $\gamma^t:W\to Y$, assumed to exist by hypothesis, to an isotopy of holomorphic maps $\gamma^t:W\to Y$ (smooth on $t\in [0,1]$), connecting the given jet $\gamma^1:W\to Y$ to the constant map $\gamma^0:W\to Y$ consisting of the jet of the identity map on $X$;

(ii) Use the homotopy $\gamma^t$ from (i) to construct a homotopy of injective holomorphic maps $F_t$, defined on an open neighborhood of the fixed $N$-tuple $\hat{x}=(\hat{x}_1,\dots, \hat{x}_N)$, such that the jet of $F_t$ is equal to $\gamma^t$;

(iii) Approximate the homotopy from (ii) with a family of parametrized automorphisms on a large compact set $L_1 \times K_1 \subset W \times X$;

(iv) Repeat the above steps inductively on larger and larger compact sets $L_j \times K_j \subset W \times X$, $j \in {\mathbb{N}}$.

This results in countably many parametrized automorphisms and we must ensure that their composition converges on $W \times X$. A necessary condition for convergence is that the holomorphic maps from (ii) approach the identity as the compacts $L_j \times K_j \subset W \times X$ become larger (i.e. at each induction step). Correspondingly, we must ensure that at each induction step (with the exception of the initial one), for each $t$, the jets from the isotopy in (i) are close to the jet of the identity map on $L_j \times K_j $. We point out that we only need to assume that the parametrized family of jets is null-homotopic, rather than homotopic to the jet of the identity as in (i); the two conditions are equivalent since $Y$ is path-connected when $X$ is. This is the strategy followed in [@K-R] for the pointwise interpolation case (i.e. $k=0$). When $k\geq 1$, a new significant difficulty arises in this process, namely in step (ii). We explain this in detail in the remainder of this section. The rest of the paper is organized as follows: in Section 2, we lay out the notation and define the relevant space of jets and prove its ellipticity. In Section 3, we handle step (ii), and reduce the rest of the problem to a technical construction whose proof is the object of Section 4.
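Before going into the details of step (ii), here is a toy numerical illustration (one complex variable, not used anywhere in the proofs) of the convergence mechanism behind step (iv): if the map applied at stage $j$ is within roughly $2^{-j}$ of the identity on the region of interest, the partial compositions form a Cauchy sequence. The perturbation $2^{-j}\sin$ below is a hypothetical stand-in for the automorphisms produced at stage $j$.

```python
# Toy model of step (iv): compose maps alpha_j(z) = z + 2**-j * sin(z) on a bounded set
# and observe that successive partial compositions move points by a geometrically
# decreasing amount.  Hypothetical example only.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, 500) + 1j * rng.uniform(-1, 1, 500)   # sample points in a square

comp = pts.copy()
for j in range(1, 25):
    new = comp + 2.0**(-j) * np.sin(comp)      # apply alpha_j to the current composition
    step = np.max(np.abs(new - comp))          # sup-distance moved at stage j
    comp = new
    if j in (1, 5, 10, 20):
        print(f"stage {j:2d}: points moved by at most {step:.2e}")
```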
Suppose that the given homotopy $\gamma^t:W\to Y$ is such that for each fixed $t$, the jet $\gamma^t:W \to Y$ is close to the jet of the identity on a compact set $L\subset W$. Given $K \subset X$, we wish to find a homotopy of injective holomorphic maps which are close to the identity on $L\times K$. For simplicity assume that $N=1$ and that $\gamma^t(w)$ fixes the point $\hat{x}$ for all $(t,w) \in [0,1] \times W$. For fixed values of $t$ and $w$, the jet $\gamma^t(w)$ can be represented by a polynomial (in $z \in X$) in a local chart. If we restrict ourselves to a small enough compact $L \subset W$, this polynomial can be chosen to be holomorphic in $w \in L$ and smooth in $t \in [0,1]$. According to the proof of [@Ugolini Theorem 1], on this local chart $U\subset X$ there exist locally defined vector fields $\{V_1,\dots, V_M\}\subset {\operatorname{VF}}(U)$ and holomorphic functions $\{f_1,\dots, f_M\}\subset {\mathcal{O}}(L)$ such that the jet of $\phi_{V_1}^{tf_1(w)} \circ \dots \circ \phi_{V_M}^{tf_M(w)}$ at $\hat{x}$ is $\gamma^{t,w}$, where $\phi_V^s$ denotes the flow of the vector field $V$ at time $s\in {\mathbb{C}}$. Choosing $U$ to be Runge and using the density property one can replace the locally defined vector fields with globally defined complete ones $\{W_1,\dots, W_M\}\subset {\operatorname{CVF}}(X)$ in such a way that the jet of $\phi_{W_1}^{tf_1(w)} \circ \dots \circ \phi_{W_M}^{tf_M(w)}$ at $\hat{x}$ approximates $\gamma^{t,w}$ for $w \in L$. If we further require this composition to be close to the identity on a large compact $K \subset X$, it will be sufficient that the functions $f_i$, $i=1,\dots,M$ are sufficiently close to zero for each $w \in L$. Unfortunately, the above construction fails to produce functions with this property already at the level of $1$-jets for the following reason. We can identify a parametrized $1$-jet fixing a point with a map $G:W \to {\operatorname{GL}}_n({\mathbb{C}})$. It is easy to reduce to the case $G:W \to {\operatorname{SL}}_n({\mathbb{C}})$. In the proof of [@Ugolini Theorem 1] the author uses the following solution to the Vaserstein problem, proved by Ivarsson and Kutzschebauch in a spectacular application of Oka theory: \[vaser\] Let $W$ be a finite dimensional reduced Stein space and $G:W \rightarrow SL_n({\mathbb{C}})$ be a nullhomotopic holomorphic mapping. Then there exist an integer $M \in {\mathbb{N}}$ and holomorphic mappings $$G_1,\dots, G_M: W \rightarrow {\mathbb{C}}^{n(n-1)/2}$$ such that $G$ can be written as the finite product of upper and lower diagonal unipotent matrices with entries in ${\mathcal{O}}(W)$: $$G(w)= \left( \begin{array}{ccc} 1 & 0 \\ G_1(w) & 1 \end{array} \right) \left( \begin{array}{ccc} 1 & G_2(w) \\ 0 & 1 \end{array} \right) \dots \left( \begin{array}{ccc} 1 & G_M(w) \\ 0 & 1 \end{array} \right).$$ Consider the following statement: if $G(w_0)=Id$ holds for some $w_0 \in W$, then $G_1(w_0)=\dots=G_M(w_0)=0$. In the above setting, if the jet $\gamma^t$ happens to be equal to the identity for some $w_0$, this statement would imply that the corresponding functions $f_i$ from the described construction should evaluate to zero at $w_0$, and hence the approximation would be close to the identity on $K\subset X$ for $w \in W$ close to $w_0$, as desired. Unfortunately, the naive statement just considered is false and we now provide a counterexample. Suppose $W$ is an open disc in ${\mathbb{C}}$ of radius $1$ and center $1\in{\mathbb{C}}$. 
Since $W$ is contractible, any map $W\to SL_n({\mathbb{C}})$ is nullhomotopic. For $n=2$, let $$G(w)=\left( \begin{array}{ccc} \frac{1}{w} & 0 \\ 0 & w \end{array} \right)$$ and suppose it can be written as a product of upper and lower diagonal unipotent matrix functions which are the identity when $w=1$. By a simple induction, this implies that the term in position $(2,2)$ is always of the form $1+(w-1)^2 f(w)$ for an $f\in {\mathcal{O}}(W)$ whose Laurent polynomial around $1$ does not include summands with negative exponent. As this term also needs to be equal to $w$ for all $w \in W$, we reach a contradiction. This fact is deeper than it appears and a more detailed account of this phenomenon can be found in the proof of Theorem \[vaser\]. Here we illustrate it for the case $n=2$. For a fixed $M \in {\mathbb{N}}$, consider the map $$\begin{aligned} \psi_M:{\mathbb{C}}^M &\to SL_2({\mathbb{C}}) \\ \psi_M(z_1,\dots,z_M)&=\left( \begin{array}{ccc} 1 & 0 \\ z_1& 1 \end{array} \right) \left( \begin{array}{ccc} 1 & z_2 \\ 0 & 1 \end{array} \right) \dots \left( \begin{array}{ccc} 1 & z_M \\ 0 & 1 \end{array} \right). \end{aligned}$$ Theorem \[vaser\] implies the existence of a holomorphic lift in the following diagram: $$\begin{tikzcd}[row sep=1cm, column sep=1.5cm] & {\mathbb{C}}^M \arrow{d}{\psi_M} \\ W \arrow{r}{G} \arrow{ru} & SL_2({\mathbb{C}}) \end{tikzcd}$$ The existence of a continuous lift was proven by Vaserstein [@Vaserstein]. To find a holomorphic lift, the authors use the Oka-Grauert-Gromov principle for sections of holomorphic submersions coming from the diagram $$\begin{tikzcd}[row sep=1cm, column sep=1.5cm] & {\mathbb{C}}^M \arrow{d}{\pi \circ \psi_M} \\ W \arrow{r}{\pi \circ G} \arrow{ru} & {\mathbb{C}}^2 \setminus \{0\} \end{tikzcd}$$ where $\pi: SL_2({\mathbb{C}}) \to {\mathbb{C}}^2 \setminus \{0\}$ is given by projection to the last row. This choice was made to simplify the discussion of the fibers of the submersion, nonetheless the map $\pi \circ \psi_M$ is a submersion only outside the set $$S_M=\{(z_1,\dots,z_M) \in {\mathbb{C}}^M: z_1=\dots=z_{M-1}=0\}.$$ The main consequence of this fact is that Theorem \[vaser\] ensures the existence of functions $G_i:W \to {\mathbb{C}},\ i=1,\dots,M$ which are never all zero for the same value $w_0 \in W$. Therefore, the previously discussed approach for step (ii) is not viable, and a more elaborate procedure is required. Notation and set up =================== Let $X$ be a complex manifold of dimension $n$ and fix $k \in {\mathbb{N}}$. We now give precise definitions of the objects that were discussed in the previous section. Let $F,G:U \subset X \to X$ be representatives for holomorphic germs at $p \in U$. We say that $F$ and $G$ have the same $k$-jet at $p$ if their Taylor expansion in some local chart about $p$ agrees up to order $k$. This defines an equivalence class which we call a $k$-jet and denote by $[F]_p$. We say that the jet is nondegenerate if its linear part (the Jacobian of any representative) has nonzero determinant. Thus in a local chart, a $k$-jet can be uniquely represented by a polynomial mapping of total degree (i.e. the maximal degree of its $n$ components) at most $k$. The set of all nondegenerate $k$-jets at a point $p \in X$ will be denoted by $J_{p,\ast} (X)$. Observe that this is a complex manifold. Since a jet $\gamma \in J_{p,\ast} (X)$ is an equivalence class of germs, $\gamma(p) \in X$ is well-defined and we call this the *image* of $\gamma$ at $p$. 
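To make this concrete, in a chart a $k$-jet is just a truncated Taylor polynomial, and the jet of a composition is obtained by composing representatives and truncating again. The following sympy sketch (one variable, $k=2$, purely illustrative) checks this on a hypothetical pair of germs; neither the functions nor the check are part of the arguments of this paper.

```python
# Illustrative only: k-jets as truncated Taylor polynomials (n = 1, k = 2, base point 0).
# F and G below are hypothetical germs, chosen so that F(0) = G(0) = 0.
import sympy as sp

z = sp.symbols('z')

def jet(f, k=2, p=0):
    """Taylor polynomial of f at p, truncated at total degree k."""
    return sp.series(f, z, p, k + 1).removeO()

F = sp.sin(z) + z**2         # F(0) = 0, F'(0) = 1, so the jet is nondegenerate
G = sp.exp(z) - 1 + 3*z**3   # G(0) = 0, G'(0) = 1

jet_of_composition = jet(G.subs(z, F))
composition_of_jets = jet(jet(G).subs(z, jet(F)))
assert sp.expand(jet_of_composition - composition_of_jets) == 0
print("the 2-jet of G(F(z)) depends only on the 2-jets of F and G:", jet_of_composition)
```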
Furthermore, the map $$\begin{aligned} \pi:J_{p,\ast} (X) &\to X \\ \gamma &\mapsto \gamma(p)\end{aligned}$$ is surjective and holomorphic. For $q \in X$ we can think of $\pi^{-1}(q) =: J_{p,q} (X)$ as the set of non-degenerate $k$-jets at $p$ whose image is $q$. For convenience and unless noted otherwise we will use the word *jet* for *nondegenerate $k$-jet*. As we are interested in jet interpolation by automorphisms at more than one point, let us define the relevant spaces. Fix $N \in {\mathbb{N}}$ distinct points $\{ \hat{x}_i \}_{i=1}^N \subset X$. We interpret this $N$-tuple as a point $\hat{x} \in X^N \setminus \Delta$, where $$\Delta= \bigcup_{1\leq i<j \leq N} \left\{(z_1,\dots,z_N) \in X^N : z_i=z_j \right\}.$$ If we apply $\pi$ coordinate by coordinate we obtain a map (which we still denote by $\pi$) from $J_{\hat{x}_1,\ast} (X) \times \dots \times J_{\hat{x}_N,\ast} (X)$ to $X^N$. Now let $$Y:=J_{\hat{x}_1,\ast} (X) \times \dots \times J_{\hat{x}_N,\ast} (X) \setminus \pi^{-1} (\Delta).$$ be the complex manifold representing all possible collections of nondegenerate $k$-jets at the points $\{ \hat{x}_i \}_{i=1}^N$ such that their respective images are distinct. If $U$ is an open set containing all of the points $\{ \hat{x}_i \}_{i=1}^N$ and $F:U \to X$ is an injective holomorphic map, we denote by $[F]_{\hat{x}}$ the $N$-tuple of jets $([F]_{\hat{x}_1}, \dots, [F]_{\hat{x}_N}) \in Y$. Given a jet of the form $\gamma = [F]_{\hat{x}} \in Y$, we denote by $(\gamma)^{-1}$ the jet $[F^{-1}]_{F(\hat{x})}$ and observe that $(\gamma)^{-1} \in Y$ if $\gamma(\hat{x})=\hat{x}$. If some metric compatible with the manifold structure on $X$ is given, the space $J_{p,\ast}(X)$ inherits a metric, and it induces the natural distance on $X^N\setminus\Delta$ defined by taking the maximum distance of each coordinate projection. Therefore to $Y$ is associated a distance function $d$. It follows from the Cauchy estimates that uniform convergence on compacts of $X$ implies convergence in $Y$ with respect to this metric. Let $W$ be a complex manifold and $\gamma:W \to Y$ a holomorphic map. We are interested in finding a *holomorphic* map $F:W \to {\operatorname{Aut}}(X)$ such that $[F^w]_{\hat{x}} = \gamma^w$ for all $w \in W$. As ${\operatorname{Aut}}(X)$ is not a complex manifold, we define $F:W \to {\operatorname{Aut}}(X)$ to be holomorphic if $F^w(x)$ is holomorphic as a map from $W \times X$ to $X$. We denote the space of all such maps with ${\operatorname{Aut}}_{W}(X)$ and observe that it can be seen as a subgroup of ${\operatorname{Aut}}(W \times X)$. It is clear that a necessary condition for the existence of such a map is that ${\operatorname{Aut}}(X)$ is *large*: more precisely we will require $X$ to be a Stein manifold with the density property, which we now define. Let ${\operatorname{VF}}(X)$ be the Lie algebra of holomorphic vector fields on $X$ (holomorphic sections of $T^{1,0}X$). In this paper *vector field* means *holomorphic vector field* in this sense. Recall that a vector field on a manifold is called *complete* if at every point $x$ the solution $\phi_t(x)$ to the flow equation starting at $x$ is defined for all $t\in {\mathbb{C}}$. \[def:DP\] Let $X$ be a complex manifold and $\mathfrak{g} \subset {\operatorname{VF}}(X)$ be a Lie subalgebra of vector fields on $X$. We say that $\mathfrak{g}$ has the density property if the subalgebra of $\mathfrak{g}$ generated by complete vector fields is dense in $\mathfrak{g}$ with respect to the uniform topology on compact sets. 
We say that $X$ has the density property if $ {\operatorname{VF}}(X)$ does. The density property by itself does not imply that ${\operatorname{Aut}}(X)$ is large, as one can deduce by considering a compact manifold. However under the additional assumption that $X$ is Stein and of dimension at least $2$, the density property implies that ${\operatorname{Aut}}(X)$ is infinite dimensional (see e.g. [@Varolin1]). An important feature of Stein manifolds with the density property is that locally defined holomorphic maps $U\to X$ can be approximated uniformly on compacts by automorphisms: this is known as the Andersén-Lempert theorem. The following is the parametric version proven in [@K-R Theorem 2.2], which we will use in its full generality. \[AL\] Let $W$ be a Stein manifold and $X$ a Stein manifold with the density property. Let $U\subset W\times X$ be an open set and $F_t:U\to W\times X$ be a smooth homotopy of injective holomorphic maps acting as the identity on the $W$ coordinate and with $F_0$ being the inclusion map. Suppose $K\subset U$ is a compact set such that $F_t(K)$ is ${\mathcal{O}}(W\times X)$-convex for each $t\in {[0,1]}$. Then there exists a neighborhood $V$ of $K$ such that for all $t\in{[0,1]}$, $F_t$ can be approximated uniformly on compacts of $V$ (with respect to any distance function on $X$) by automorphisms $\alpha_t\in{\operatorname{Aut}}_{W}(X)$ which depend smoothly on $t$, and moreover we can choose $\alpha_0=id$. Let us return to our setting. Given $\gamma=(\gamma_1, \dots, \gamma_N) \in Y$, any $V \in {\operatorname{VF}}(X)$ defines a flow $\phi_V^t$ in a neighborhood of $\{ \pi(\gamma_i) \}_{i=1}^N$ for small enough values of $t$. Hence the jet $[\phi_V^t \circ \gamma]_{\hat{x}} \in Y$ is well defined (for small $t$). Differentiating with respect to $t$, we obtain $\tilde{V} \in {\operatorname{VF}}(Y)$ such that $\phi_{\tilde{V}}^t (\gamma)= [\phi_V^t \circ \gamma]_{\hat{x}}$ for all $\gamma\in Y$: we call $\tilde{V}$ *the lift* of $V$. We observe that if $V$ is complete, then so is $\tilde{V}$. We denote by ${\operatorname{CVF}}(X)$ the set of complete vector fields on $X$. The set $\widetilde{{\operatorname{VF}}(X)} = \{ \tilde{V} \in {\operatorname{VF}}(Y) : V \in {\operatorname{VF}}(X) \}$ is a Lie subalgebra of ${\operatorname{VF}}(Y)$ and we note that if $X$ has the density property, so does $\widetilde{{\operatorname{VF}}(X)}$. Similarly, an automorphism $\alpha$ of $X$ lifts to an automorphism $\tilde\alpha$ of $Y$. We now prove that complete vector fields on $X$ can be lifted to span the tangent space of $Y$. \[lem:elliptic\] Let $X$ be a Stein manifold with the density property and let $\gamma \in Y$. Then there exist $M= \dim Y \in {\mathbb{N}}$ complete vector fields $\{ \theta_i \}_{i=1}^M \subset {\operatorname{CVF}}(X)$ such that $\{\tilde{\theta}_i(\gamma)\}_{i=1}^M$ is a basis for the tangent space of $Y$ at the point $\gamma$. In particular the map $$\begin{aligned} {\mathbb{C}}^M &\to Y \\ (t_1,\dots,t_M) &\mapsto \phi_{\tilde{\theta}_1}^{t_1} \circ \dots \circ \phi_{\tilde{\theta}_M}^{t_M}(\gamma)\end{aligned}$$ is a biholomorphism from a neighborhood of $0$ to a neighborhood of $\gamma$. It is sufficient to prove the lemma for $\gamma = [Id]_{\hat{x}}$. Indeed, Varolin proved in [@VarolinII] that there exists $F \in {\operatorname{Aut}}(X)$ such that $[F\circ \gamma]_{\hat{x}}=[Id]_{\hat{x}}$ . 
If the vector fields $\{\tilde{\theta}_i\}_{i=1}^M$ span the tangent space to $Y$ at $[Id]_{\hat{x}}$, then $\{\widetilde{F^*\theta}_i\}_{i=1}^M$ span $T_\gamma Y$. We first claim that the conclusion is true for $N=1$ and $X={\mathbb{C}}^n, \ n>1$. To see this, suppose $\hat{x}=(0,\dots,0)$ and consider the vector fields $V_{I,j}=z^I \frac{\partial}{\partial z_j}$, where $I$ runs through the multi-indices of degree at most $k$ and $j=1,\dots, n$. Not all of them are complete, but they do span the tangent space of $Y$ at $[Id]_{\hat{x}}$. As $X$ has the density property, we can approximate each $V_{I,j}$ by a sum of complete vector fields. For a good enough approximation, this new collection still spans the tangent space of $Y$ at $[Id]_{\hat{x}}$, and the claim follows by extracting a basis from this generating set. A similar proof works for arbitrary $X$ (Stein with the density property): let $U$ be a Runge coordinate neighborhood of $\hat{x}$ such that $\hat{x}$ corresponds to $0 \in {\mathbb{C}}^n$ under the chart map. On $U$ we consider the pullback of the vector fields $V_{I,j}$ above. As $U$ is Runge and $X$ has the density property, we conclude as above by approximating these pulled-back fields by sums of complete vector fields. Let now $X$ be as above and $N>1$. We choose coordinate neighborhoods $U_i$ of $\hat{x}_i$, $i=1,\dots, N$, such that $U = U_1 \cup \dots \cup U_N$ is Runge in $X$ and each $\hat{x}_j$ is mapped to zero under the appropriate coordinate chart. For each $i=1, \dots, N$, we pull back $V_{I,j}$ on $U_i$ and extend it by the zero field on the other coordinate neighborhoods in order to obtain a generating set for $Y$ at $[Id]_{\hat{x}}$, and proceed as above to obtain complete vector fields. The collection $\{ \tilde{\theta}_i \}_{i=1}^M\subset {\operatorname{CVF}}(Y)$ spans $T_\gamma Y$ for all $\gamma$ outside of an analytic set $A\subset Y$. A procedure that lowers the (finite) dimension of $A$ step by step (see the second part of the proof of [@K-R Lemma 3.2], or [@KK-Zeit Thm. 4]) shows that the collection $\{ \theta_i \}_{i=1}^M\subset {\operatorname{CVF}}(X)$ can be enlarged to a finite collection $\{ \theta_i \}\subset {\operatorname{CVF}}(X)$ such that $\{\tilde{\theta}_i\}$ spans the tangent space of $Y$ at *every point* of $Y$. In particular, this proves that $Y$ is an *elliptic manifold* in the sense of Gromov (though not necessarily Stein) and hence an *Oka-Forstnerič manifold*. These notions are more general than we require here; for our purposes it suffices to record that an Oka-Forstnerič manifold $Y$ satisfies the following h-principle: given a homotopy $f_t:W\to Y\ (t\in{[0,1]})$, where $W$ is Stein and $f_0,f_1$ are holomorphic, there exists a new homotopy $g_t:W\to Y$ which is smooth in $t$, holomorphic on $W$ for every $t\in {[0,1]}$, and satisfies $g_0=f_0$, $g_1=f_1$. We call such a special homotopy a *smooth isotopy of holomorphic maps* (or jets, in the case that $Y$ is as defined previously in this section). To avoid confusion, we refrain from talking about smooth isotopies of maps $W\to {\operatorname{Aut}}(X)$ depending (smoothly) on a variable $t\in [0,1]$, which we prefer to call families of parametrized automorphisms.

First local approximation
=========================

We begin by showing (Lemma \[lem:stepzero\] below) that we can approximately achieve the conclusion of Theorem \[main\] when the parameter lies in a compact set.
We will make use of the following result from [@K-R]: \[thm:KR\] Let $W$ be a Stein manifold and $X$ a connected Stein manifold with the density property. Let $N$ be a natural number and $x:W\to X^N \setminus \Delta$ be a holomorphic map, and fix $\hat{x}=(\hat{x}_1,\dots,\hat{x}_N)$. Then there exists a holomorphic map $\alpha:W\to {\operatorname{Aut}}(X)$, homotopic to the identity, such that $\alpha^w(\hat{x}_i)=x_i^w$ for all $i=1,\dots, N$ and all $w\in W$, if and only if $x$ is nullhomotopic. In our notation, a holomorphic map $x:W\to X^N\setminus \Delta$ is nothing but a parametrized $0$-jet at the point $\hat{x}=(\hat{x}_1,\dots,\hat{x}_N)$. \[lem:stepzero\] Let $W$ be Stein and $X$ Stein with the density property. Fix $\hat{x}$ so that the space $Y$ of $k$-jets at $\hat{x}$ is defined as in Section 2. Let $\gamma^1: W\to Y$ be holomorphic and null-homotopic, with $\gamma^t$ denoting the homotopy from $\gamma^1$ to the constant jet $\gamma^0=[Id]_{\hat{x}}$. Let $\varepsilon>0$ and a holomorphically convex compact set $L=\hat{L}\subset W$ be given. Then there exists a family of parametrized automorphisms $F:[0,1] \times W \to {\operatorname{Aut}}(X)$ with $F^0=id$ such that $$d([F^{t,w} \circ \gamma^{t,w}]_{\hat{x}}, [Id]_{\hat{x}})< \varepsilon$$ for all $(t,w) \in [0,1] \times L$, where $d$ denotes the distance in $Y$ defined in Section 2. Since both $\gamma^0$ and $\gamma^1$ are holomorphic, $W$ is Stein and $Y$ is elliptic by Lemma \[lem:elliptic\], the h-principle applies: we can therefore assume that $\gamma^t$ is in fact a smooth isotopy of holomorphic maps. Define $x_t$ to be $\pi(\gamma^t)$, i.e. the image of the jets $\gamma^t$ at $\hat{x}$. As $x_t$ takes values in $X^N \setminus \Delta$, we may apply Theorem \[thm:KR\]. Therefore we may assume without any loss of generality that for all $(t,w) \in [0,1] \times W$ the image of $\gamma^{t,w}$ (hence the one of $(\gamma^{t,w})^{-1}$) is the fixed $N$-tuple $\hat{x}$. This allows us to uniquely represent the jets $(\gamma^{t,w})^{-1}$ by parametrized polynomial mappings of total degree at most $k$ fixing $0\in {\mathbb{C}}^n$. Indeed, let $U\subset X$ be a disjoint union of coordinate neighborhoods $U_j$ of the fixed points $\hat{x}_j$ and $\phi_j:U_j\to \phi_j(U_j)\subset {\mathbb{C}}^{n}$ be the charts sending $\hat{x}_j$ to $0$. For each $j=1,\dots,N$ and $(t,w) \in [0,1] \times W$ there exists a uniquely determined polynomial mapping $Q_j^{t,w}$ of total degree at most $k$ such that $$\left[ \phi_j\circ(\gamma^{t,w})^{-1}\circ\phi_j^{-1}\right]_0=\left[ Q_j^{t,w} \right]_0.$$ By uniqueness, these polynomials mappings which fix $0\in {\mathbb{C}}^n$ depend smoothly on $t \in [0,1]$ and holomorphically on $w \in W$. For fixed $w\in W$ and for each $j=1,\dots,N$, since nondegenerate mappings are locally invertible, there exists a neighborhood ${V}_j$ of $0$ in ${\mathbb{C}}^{n}$ such that for all $t\in [0,1]$, $ Q_j^{t,w}|_{{V}_j}$ is injective and $ Q_j^{t,w}({V}_j)\subset \phi_j(U_j)$. Given a compact set $L' \supset L$ such that $\mathring{L'} \supset L$, by compactness we can shrink $V_j$ such that the above holds for all $(t,w)\in{[0,1]}\times L'$. 
Let $V$ be the disjoint union of $\phi_j^{-1}({V_j})$, and define the *injective holomorphic map* $P^t$ on $\mathring{L'} \times V$ by setting $$P^t(w,z):=(w,\phi_j^{-1}\circ Q_j^{t,w} \circ \phi_j(z)):\mathring{L'} \times V \to W \times X.$$ Notice that the union of the “graph of the $N$ fixed points” $$K=\bigcup_{j=1}^N\{(w,\hat{x}_j): w \in L\}\subset W \times X$$ is a ${\mathcal{O}}(W \times X)$-convex set which is fixed by $P^t$. Since $P^0$ is the identity, we can apply Proposition \[AL\] and obtain a family of parametrized automorphisms $F^{t,w}$ which approximates $(\gamma^{t,w})^{-1}$ uniformly on compacts in a neighborhood of $K$. By the Cauchy estimates, this implies the approximation of the jet with respect to the distance function $d$ (see Section 2). Proof ===== The following technical proposition is similar to [@K-R Proposition 4.4] but with jet approximation instead of just pointwise approximation. \[prop:jetapprox\] Let $L_1, L_2 \subset W$ be ${\mathcal{O}}(W)$-convex compact sets such that $L_1 \subset \mathring{L}_2$ and let $K,C \subset X$ be ${\mathcal{O}}(X)$-convex compact sets with $K \subset \mathring{C}$. Let $\gamma:[0,1] \times W \to Y$ be a smooth isotopy of holomorphic jets such that $\gamma^{0,w} = [Id]_{\hat{x}}$ for all $w \in W$. Then for every $\varepsilon, \alpha>0$ there exists $\delta=\delta(K,L_1,\varepsilon)>0$ such that if $$\label{smallness} d(\gamma^{t,w},[Id]_{\hat{x}})<\delta \quad \forall (t,w) \in [0,1] \times L_1,$$ then 1. there exists $\psi:[0,1] \times L_1 \to {\operatorname{Aut}}(X)$ such that $$\begin{aligned} d(\psi^{t,w}(z),z) <\varepsilon \text{ for }& (t,w,z) \in [0,1]\times L_1 \times K \\ [\psi^{t,w}]_{\hat{x}} = \gamma^{t,w} \text{ for }& (t,w) \in [0,1]\times L_1\end{aligned}$$ 2. there exists $F:[0,1] \times W \to {\operatorname{Aut}}(X)$ with $F^{0,w}=Id$ for all $w \in W$ and such that $$\begin{aligned} d([F^{1-t,w}\circ \gamma^{1,w}]_{\hat{x}}, \gamma^{t,w}) <\alpha \text{ for }& (t,w) \in [0,1]\times L_2\\ d(F^{1-t,w}(z),\psi^{t,w}(z)) <\varepsilon \text{ for }& (t,w,z) \in [0,1]\times L_1 \times C\end{aligned}$$ such that $\psi_0=F_0=Id$. Let us examine the nature of the local biholomorphism near $[Id]_{\hat{x}} \in Y$ given by Lemma \[lem:elliptic\]. In particular observe that given a compact $K \subset X$, for $(t_1,\dots,t_M) \in U \subset {\mathbb{C}}^M$ small enough, the automorphism $\phi_{\theta_1}^{t_1} \circ \dots \circ \phi_{\theta_M}^{t_M} \in {\operatorname{Aut}}(X)$ is going to be arbitrarily close to the identity on $K$. We can then pick $\delta$ such that $$\{\gamma^{t,w} :(t,w) \in [0,1] \times L_1\} \subset \{\phi_{\tilde{\theta}_1}^{t_1} \circ \dots \circ \phi_{\tilde{\theta}_M}^{t_M}([Id]_{\hat{x}}):(t_1,\dots,t_M) \in U \} \subset Y.$$ Apply Lemma \[lem:elliptic\] to obtain a family of parametrized automorphisms $\psi:[0,1]\times L_1 \to {\operatorname{Aut}}(X)$ such that $\psi_0=Id$, $dist(\psi^{t,w}(z),z)<\varepsilon/2$ and $[\psi^{t,w}]_{\hat{x}}=\gamma^{t,w}$ for $(t,w,z) \in [0,1]\times L_1 \times K$. This proves (i). Consider the non-autonomous parametrized vector field on $X$ $$\Theta^{t,w}(z)=\frac{d}{ds}\bigg|_{s=t} \psi^{1-s,w}((\psi^{1-t,w})^{-1}(z))$$ defined for $(t,w,z)\in [0,1]\times L_1 \times X$ and observe that the lift of $\Theta$ to $Y$ satisfies $$\tilde{\Theta}^{t,w}(\gamma^{1-t,w})=\frac{d}{ds}\bigg|_{s=t} \gamma^{1-s,w}.$$ Therefore $\Theta$ is well defined on $[0,1]\times L_1 \times X \cup [0,1]\times W \times \{ \hat{x}_1,\dots, \hat{x}_N \}$. 
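Although not spelled out above, it may help to record what this flow is (a routine verification, stated here only for orientation): solving $\dot z=\Theta^{t,w}(z)$ with initial value $z$ at $t=0$ gives, wherever the flow is defined, $$f^{t}(w,z)=\big(w,\;\psi^{1-t,w}\circ(\psi^{1,w})^{-1}(z)\big),$$ so $f^{0}$ is the identity and, since $\psi^{0,w}=Id$, the time-one map is $(\psi^{1,w})^{-1}$ on the second factor.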
By [@FR Lemma 2.2], as in the proof of Lemma \[lem:stepzero\], there is a Runge neighborhood $\Omega \subset W \times X$ of $L_1 \times K \cup L_2 \times \{ \hat{x}_1,\dots, \hat{x}_N \}$ such that the flow $f^t:\Omega \to W \times X$ of $\Theta^t$ is injective and $f^t(\Omega)$ is Runge for every $t \in [0,1]$. Using Proposition \[AL\], we obtain the desired $F^t:W \to {\operatorname{Aut}}(X)$. Observe that condition (4.4) depends only on this last step, hence we can approximate arbitrarily well and choose $\alpha$ only when invoking (ii).

To prove the main theorem, we will construct families of automorphisms $F:[0,1] \times W \to {\operatorname{Aut}}(X)$ using the above result inductively on a growing sequence of compacts of $W \times X$. In order to apply Proposition \[prop:jetapprox\] again (on a larger compact, in view of exhausting the parameter space $W$), we need a smooth isotopy of parametrized jets that are close to the identity for all $t \in [0,1]$ over the compact $L_2$. This homotopy does not come for free during the induction step, for the following reason. Let $\gamma^t$ be as above, starting at the constant jet $\gamma^0=[Id]_{\hat{x}}$, and apply Proposition \[prop:jetapprox\] to obtain a family of parametrized automorphisms $F^t$. We now have a new homotopy of jets $$h:[0,1] \times W \to Y$$ given by $$h^{t,w}= \begin{cases*} \gamma^{2t,w} & for $0 \leq t \leq \frac{1}{2}$ \\ [F^{2t-1,w} \circ \gamma^{1,w}]_{\hat{x}} & for $\frac{1}{2} \leq t \leq 1$ \end{cases*}$$ connecting $h^{0,w}=[Id]_{\hat{x}}$ to $h^{1,w} \approx[Id]_{\hat{x}}$ for $w \in L_2$. We cannot immediately use Proposition \[prop:jetapprox\] for $h^{t,w}$ over the larger $L_2$, as the smallness condition (\[smallness\]), required for all $t$, is only satisfied at the end point $t=1$. However, this issue can be handled as follows:

\[Lemma 4.2, [@K-R]\] \[lem:homotopy\] Let $L \subset W$ be an ${\mathcal{O}}(W)$-convex compact set and $h^t:W \to Y$ be a smooth homotopy between the constant $h^0=[Id]_{\hat{x}}$ and a holomorphic map $h^1$. Then there exists $\varepsilon=\varepsilon(h,L)>0$ such that for every $0<\alpha < \varepsilon$, every smooth $F:[0,1] \times W \to Y$ with $F^t = h^{2t}$ for $t \leq \frac{1}{2}$ satisfying $$d(F^{t,w},F^{1-t,w}) < \alpha/2 \text{ for } (t,w) \in [0,1] \times L$$ and every ${\mathcal{O}}(W)$-convex compact set $L' \subset \mathring{L}$, there exists an analytic homotopy $H: [0,1] \times W \to Y$ between $[Id]_{\hat{x}}$ and $h^1$ such that $$d(H^{t,w}, [Id]_{\hat{x}}) < \alpha \text{ for } (t,w) \in [0,1] \times L'.$$

Note that [@K-R Lemma 4.2] is *stated* for the manifold $Y$ which stands for $X^N\setminus \Delta$. However, the proof uses only general topological constructions, as well as the Oka property, which holds for any elliptic manifold $Y$. We now proceed with the proof of Theorem \[main\]. Let $L_j \subset W$, $K_j \subset X$, $j \in {\mathbb{N}}$, be exhaustions by holomorphically convex compact sets such that $L_j \subset \mathring{L}_{j+1}$ and $K_j \subset \mathring{K}_{j+1}$. Assume that $L_0= \emptyset$ and $\{\hat{x}_1, \dots, \hat{x}_N\} \subset \mathring{K}_0$. Fix a sequence $\varepsilon_j>0$ such that $\varepsilon_j<d(K_{j-1},X \setminus K_j)$ and $\sum \varepsilon_j < + \infty$. Since $\gamma : W \to Y$ is null-homotopic, there exists an isotopy of holomorphic maps $\gamma^t:W \to Y$ such that $\gamma^0=[Id]_{\hat{x}}$ and $\gamma^1=\gamma$.
We can immediately apply Lemma \[lem:stepzero\] to obtain $\varphi_0:[0,1] \times W \to {\operatorname{Aut}}(X)$ such that $$d([\varphi_0^{t,w} \circ \gamma^{t,w}]_{\hat{x}}, [Id]_{\hat{x}})<\min(\varepsilon_0,\delta(K_0,L_1,\varepsilon_1/2)) \text{ for } (t,w) \in [0,1] \times L_1,$$ where $\delta$ is as in Proposition \[prop:jetapprox\]. Conclusion (i) in the latter gives $\psi:[0,1] \times L_1 \to {\operatorname{Aut}}(X)$ such that $$\begin{aligned} d(\psi^{t,w}(z),z) <\varepsilon_1/2 \text{ for }& (t,w,z) \in [0,1]\times L_1 \times K_0 \\ [\psi^{t,w}]_{\hat{x}} =[\varphi_0^{t,w} \circ \gamma^{t,w}]_{\hat{x}} \text{ for }& (t,w) \in [0,1]\times L_1\end{aligned}$$ Let $C_1 \subset X$ be an ${\mathcal{O}}(X)$-convex compact set containing the $\varepsilon_1/2$-envelope of $$K_1 \cup \psi^{[0,1], L_1}(K_1).$$ Let $\alpha_1:= \min(\varepsilon_1, \delta(C_1, L_2,\varepsilon_2/2),\varepsilon([\varphi_0^{t,w} \circ \gamma^{t,w}]_{\hat{x}},L_2))/2$, where $\varepsilon([\varphi_0^{t,w} \circ \gamma^{t,w}]_{\hat{x}},L_2)$ is the constant arising from Lemma \[lem:homotopy\]. By (ii) in Proposition \[prop:jetapprox\] there exists $\varphi_1:[0,1] \times W \to {\operatorname{Aut}}(X)$ such that $$d([\varphi_1^{1-t,w}\circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}},[\varphi_0^{t,w} \circ \gamma^{t,w}]_{\hat{x}} ) < \alpha_1$$ for $(t,w) \in [0,1]\times L_2$ and $$d(\varphi_1^{1-t,w}(z), \psi^{t,w}(z) ) <\varepsilon_1/2 \text{ for } (t,w,z) \in [0,1]\times L_1 \times C_1.$$ We have just proved the initial step of the following induction. Suppose that for $j=1, \dots, k$ we have holomorphically convex sets $C_j \subset X$ and $\varphi_j:[0,1] \times W \to {\operatorname{Aut}}(X)$ such that

(a) $\varphi_k^{0,w}=Id \in {\operatorname{Aut}}(X)$ for all $w \in W$;

(b) $d([\varphi_k^{1-t,w} \circ \varphi_{k-1}^{1,w} \circ \dots \circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}},h^{t,w})<\alpha_k$ for $(t,w) \in [0,1] \times L_{k+1}$;

(c) $\mathring{C}_k \supset K_{k} \cup F_k^{t,w}( (F_{j-1}^{t,w})^{-1} (K_k))$ for each $j=1, \dots, k$, $t \in [0,1]$ and $w \in L_j \setminus L_{j-1}$;

(d) for each $j=1, \dots, k$, if $w \in L_j \setminus L_{j-1}$ then $d(\varphi_k^{t,w}(z),z) < \varepsilon_k$ for each $t \in [0,1]$ and $z \in K_k \cup F_{k-1}^{t,w}( (F_{j-1}^{t,w})^{-1} (K_k))$;

where $h:[0,1] \times W \to Y$ is a homotopy between $[Id]_{\hat{x}}$ and $[\varphi_{k-1}^{1,w} \circ \dots \circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}}$, $F_k^{t,w}(z)=\varphi_k^{t,w} \circ \varphi_{k-1}^{t,w} \circ \dots \circ \varphi_0^{t,w}(z)$, and $$\alpha_k=\min(\varepsilon_k,\delta(C_k,L_{k+1}, \varepsilon_{k+1}/2), \varepsilon(h,L_{k+1}))/2.$$ We first explain how the induction provides a family of parametrized automorphisms $G:[0,1] \times W \to {\operatorname{Aut}}(X)$ such that $G^{0,w}= Id$ and $G^{1,w}$ satisfies the conclusion of the theorem. Thanks to (c) and (d), [@K-R Lemma 4.1] ensures that the sequence $F_k:[0,1] \times W \to {\operatorname{Aut}}(X)$ converges to some $F:[0,1] \times W \to {\operatorname{Aut}}(X)$ uniformly on compacts, while condition (b) evaluated at $t=0$ shows that the inverse of $F^{1,w}$ provides the required parametrized automorphism. The fact that this inverse is null-homotopic is guaranteed by (a).
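To see informally how (c) and (d) enter the convergence argument (the precise statement is [@K-R Lemma 4.1]): for $w \in L_j \setminus L_{j-1}$ and $z \in (F_{j-1}^{t,w})^{-1}(K_k)$, the point $F_{k-1}^{t,w}(z)$ lies in the set appearing in (d), so that $$d\big(F_k^{t,w}(z),F_{k-1}^{t,w}(z)\big)=d\big(\varphi_k^{t,w}(F_{k-1}^{t,w}(z)),\,F_{k-1}^{t,w}(z)\big)<\varepsilon_k,$$ and since $\sum_k \varepsilon_k<+\infty$ the sequence $(F_k)$ is uniformly Cauchy on such sets, which exhaust compacts of $X$ as $k$ grows; condition (c) records that the relevant images stay inside the compact sets $C_k$ over which the subsequent approximations are performed.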
Let us now assume that we have the required objects for $j=1, \dots, k$. We begin by considering the homotopy $\tilde{H}:[0,1] \times W \to Y$ given by $$\tilde{H}^{t,w}= \begin{cases*} h^{2t,w} & for $0 \leq t \leq \frac{1}{2}$ \\ [\varphi_k^{2t-1,w} \circ \varphi_{k-1}^{1,w} \circ \dots \circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}} & for $\frac{1}{2} \leq t \leq 1$ \end{cases*}$$ Observe that condition (b) and the definition of $\alpha_k$ ensure that we can apply Lemma \[lem:homotopy\]; hence there exists a homotopy $H:[0,1] \times W \to Y$ such that $H^{0,w}=[Id]_{\hat{x}}$, $H^{1,w}=[\varphi_k^{1,w} \circ \varphi_{k-1}^{1,w} \circ \dots \circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}}$ and $$d(H^{t,w}, [Id]_{\hat{x}}) < \delta(C_k,L_{k+1},\varepsilon_{k+1}/2) \text{ for } (t,w) \in [0,1] \times L_k.$$ Part (i) of Proposition \[prop:jetapprox\] provides the existence of $\psi:[0,1] \times L_k \to {\operatorname{Aut}}(X)$ such that $$\begin{aligned} d(\psi^{t,w}(z),z) <\varepsilon_{k+1}/2 \text{ for }& (t,w,z) \in [0,1]\times L_k \times C_k \\ [\psi^{t,w}]_{\hat{x}} = H^{t,w} \text{ for }& (t,w) \in [0,1]\times L_k\end{aligned}$$ Let $C_{k+1} \subset X$ be a holomorphically convex compact set containing the $\varepsilon_{k+1}/2$-envelope of $$\tag{*} C_k \cup K_{k+1} \cup \psi^{[0,1],L_k}(K_{k+1}) \cup \psi^{[0,1],L_{j-1}}(K_{k+1})$$ and of $$\tag{*} \psi^{t,w}(F_k^{t,w}((F_{j-1}^{t,w})^{-1}(K_{k+1})))$$ for each $j=1, \dots, k$ and $(t,w) \in [0,1] \times L_{j-1}$. Part (ii) of Proposition \[prop:jetapprox\] gives $\varphi_{k+1}:[0,1] \times W \to {\operatorname{Aut}}(X)$ with $\varphi_{k+1}^{0,w}=Id$ for all $w \in W$ and such that $$\begin{aligned} d([\varphi_{k+1}^{1-t,w}\circ H^{1,w}]_{\hat{x}}, H^{t,w}) <\alpha_{k+1} \text{ for }& (t,w) \in [0,1]\times L_{k+2}\\ d(\varphi_{k+1}^{1-t,w}(z),\psi^{t,w}(z)) <\varepsilon_{k+1}/2 \text{ for }& (t,w,z) \in [0,1]\times L_k \times C_{k+1}\end{aligned}$$ with $\alpha_{k+1}= \min(\varepsilon_{k+1},\delta(C_{k+1},L_{k+2}, \varepsilon_{k+2}/2), \varepsilon(H,L_{k+2}))/2$. We now explain why these choices provide the $(k+1)$-th step. Condition (a) is clearly satisfied. Equation (4.8) and the fact that $H^{1,w}=[\varphi_{k}^{1,w} \circ \dots \circ \varphi_0^{1,w} \circ \gamma^{1,w}]_{\hat{x}}$ provide condition (b). Equation (4.9) shows that the image of $\varphi_{k+1}$ is $\varepsilon_{k+1}$-close to that of $\psi$, hence $C_{k+1}$ also contains the sets in (\*) if we replace $\psi$ by $\varphi_{k+1}$, and thus condition (c) is fulfilled. Condition (d) is obtained as a combination of (4.6) and (4.9).

Discussion and Questions
========================

Theorem \[main\] covers the case of a *finite* number of points; in a previous paper by the second author [@Ugolini], as well as in the first paper on jet interpolation in ${\mathbb{C}}^n$ [@F-Interpolation], the (parametric) jet interpolation is provided for a special class of sequences.

\[def:tame\] A closed discrete sequence of points $(a_j)_{j\in{\mathbb{N}}} \subset {\mathbb{C}}^n$ without repetition is [*tame*]{} if there exists a holomorphic automorphism $F \in {\operatorname{Aut}}({\mathbb{C}}^n)$ such that $$F(a_j)=(j,0,\dots,0) \ \ \text{for all}\ j=1,2,\ldots.$$

The reason for restricting to such sequences is that not all sequences in ${\mathbb{C}}^n$ are equivalent under the action of ${\operatorname{Aut}}({\mathbb{C}}^n)$, and the sequence of natural numbers on the first axis has particularly good flexibility properties.
In particular, ${\operatorname{Aut}}({\mathbb{C}}^n)$ acts transitively on ${\mathbb{N}}\times \{ 0\}^{n-1}$, so it is common to speak of *tame sets*. In [@Ugolini] the author provides parametric jet interpolation at such sequences, yet he does not consider the possibility of having a countable number of parametrized points. In fact, it is not well understood whether tame sequences are *generic* in some suitable sense. To the best of our knowledge, the result shedding the most light on this issue is due to Winkelmann:

Let $\{a_k\}_{k \in {\mathbb{N}}} \subset {\mathbb{C}}^n, \ n>1$, be a discrete sequence without repetition such that $$\sum_k \frac{1}{\| a_k\| ^{2n-2}} < \infty.$$ Then $\{a_k\}_{k \in {\mathbb{N}}}$ is tame.

The reason this result provides insight into the topology of the class of tame sequences is that condition (5.1) is open in $\ell^2$.

\[question\] Let $W$ be a Stein manifold and $a_k: W \to {\mathbb{C}}^n$, $k \in {\mathbb{N}}$, a sequence of holomorphic maps such that $a_k(w) \neq a_j(w)$ for $j \neq k$ and all $w \in W$. Furthermore assume that $$\sum_k \frac{1}{\| a_k(w)\| ^{2n-2}} < \infty$$ for all $w \in W$. What topological conditions do we need on $\{a_k\}_{k \in {\mathbb{N}}}$ to ensure the existence of a holomorphic $F: W \to {\operatorname{Aut}}({\mathbb{C}}^n)$ such that $F^w (k,0,\dots,0)=a_k(w)$ for all $w \in W$ and $k \in {\mathbb{N}}$?

A similar question can be posed for the recently introduced notions of weakly and strongly tame sequences in a Stein manifold with the density property:

Let $X$ be a complex manifold. An infinite discrete subset $D$ is called weakly tame if for every exhaustion function $\rho : X \to {\mathbb{R}}$ and every function $\zeta : D \to {\mathbb{R}}$ there exists an automorphism $\Phi$ of $X$ such that $\rho(\Phi(x)) \geq \zeta(x)$ for all $x \in D$.

\[def-tame\] Let $X$ be a complex manifold. We call a closed discrete infinite set $D \subset X$ a strongly tame set if for every injective mapping $f \colon D \to D$ there exists a holomorphic automorphism $F \in {\operatorname{Aut}}(X)$ such that $F|_D = f$.

It is worth mentioning that both notions are equivalent to the standard definition of tameness for $X={\mathbb{C}}^n$. If $X$ is a Stein manifold with the density property, any two strongly tame sequences are ${\operatorname{Aut}}(X)$-equivalent [@STame Proposition 2.4]; answering Question \[question\] for weakly tame sequences should therefore be harder, as the corresponding result is not known for this class of manifolds.
--- author: - 'M. Sasaki[^1]' - 'W. Pietsch' - 'F. Haberl' date: 'Received December, 11, 2002; accepted March, 17, 2003' title: 'XMM-Newton observations of High Mass X-ray Binaries in the SMC [^2]' --- Introduction ============ After the discovery of X-ray emission from the Magellanic Clouds (MCs) in 1970 [@1971ApJ...168L...7P], surveying observations of each MC were performed by different X-ray observatories. As for the Small Magellanic Cloud (SMC), source catalogues were created from observations with Einstein [@1981ApJ...243..736S; @1987ApJ...317..152B; @1992ApJS...78..391W], ROSAT , and ASCA [@2000ApJS..128..491Y]. The analysis of these X-ray sources has shown, that a large number of X-ray bright objects belongs to the class of X-ray binaries (XRBs) in which a neutron star or a black hole forms a binary system with a companion star. In these systems, mass is accreted from the donor star onto the compact object. X-ray binaries can be divided into low mass X-ray binaries and high mass (or massive) X-ray binaries (HMXBs), depending on the mass of the companion star. Therefore, the identification of optical counterparts of the X-ray sources is crucial for the understanding of the nature of these sources. Furthermore, HMXBs form two subgroups with either an OB supergiant or a Be star as donor. A detailed catalogue of HMXBs was compiled by . performed high resolution spectroscopy of optical counterparts of HMXBs in the Large Magellanic Cloud (LMC) and studied the population of HMXBs. In the Milky Way or in the LMC, the fraction of Be/X-ray binary systems (Be/XRB) is 60 – 70% of all HMXBs, whereas more than 90% of the HMXBs in the SMC turned out to be Be systems . Since pulsed X-ray emission can be observed from neutron star HMXBs, these sources are also called X-ray binary pulsars. Based on ASCA, RXTE, ROSAT and Beppo SAX observations, more than 20 X-ray binary pulsars have been discovered in the SMC so far . Moreover, in one of the first observations of XMM-Newton , pulsed emission from another HMXB was found, which was identified with a Be star . In order to improve our understanding of the X-ray source population in the SMC, we proposed and analysed pointed observations of the SMC by XMM-Newton and performed spectral and temporal studies of detected sources. In this paper, we focus on the class of HMXBs and present the results on each HMXB and candidate in the observed fields. Data ==== [clcccccll]{} & & & & & & &\ & & & & & & &\ & & & & & & & &\ & & & & & & &\ 157 & 01100002 & 00 59 46.6 & –72 09 30 & PN & Full & Medium & 2000/10/17 16:16:36 & 2000/10/17 20:41:09\ & & & & M1 & Full & Medium & 2000/10/17 15:10:44 & 2000/10/17 20:39:43\ & & & & M2 & Full & Medium & 2000/10/17 15:10:35 & 2000/10/17 20:39:42\ 247 & 01357206 & 01 03 29.0 & –72 02 33 & PN & Full & Thin1 & 2001/04/15 01:20:28 & 2001/04/15 05:50:27\ & & & & M1 & PW3 & Thin1 & 2001/04/14 20:47:25 & 2001/04/15 05:55:45\ & & & & M2 & PW3 & Thin1 & 2001/04/14 20:47:25 & 2001/04/15 05:55:45\ 340 & 00842008 & 00 54 54.3 & –73 40 12 & PN & Full & Thin1 & 2001/10/17 10:46:50 & 2001/10/17 15:55:11\ & & & & M1 & Full & Medium & 2001/10/17 10:07:40 & 2001/10/17 15:59:35\ & & & & M2 & Full & Medium & 2001/10/17 10:07:40 & 2001/10/17 15:59:36\ 422 & 00842001 & 00 56 24.4 & –72 21 33 & PN & Full & Thin1 & 2002/03/30 14:21:45 & 2002/03/30 19:39:38\ & & & & M1 & Full & Medium & 2002/03/30 13:48:28 & 2002/03/30 19:44:36\ & & & & M2 & Full & Medium & 2002/03/30 13:48:29 & 2002/03/30 19:44:54\ Notes to column No 5: PW3: Partial Window 3. 
Notes to column No 6: M1: MOS1, M2: MOS2. ![\[pointings\] DSS image of the SMC and the position of the EPIC field of view of the XMM-Newton observations listed in Table \[obstab\]. The discontinuity seen in the DSS image is an artifact.](h4167f1.eps){width="8.5cm"} For AO-1 of XMM-Newton, we proposed observations of eight fields in the SMC in order to study the X-ray binary population (PI: W.P.). Two observations of this proposal were performed. During the first observation (ID 00842008), the telescope pointed towards the HMXB SMCX-2 in the south of the main galaxy. In order to perform source detection and analysis in the whole field of view, we used data from the European Photon Imaging Cameras EPIC PN , EPIC MOS1, and EPIC MOS2 . The observation was performed with all the EPIC cameras in full frame mode. For EPIC PN the thin filter was used, whereas for the EPIC MOS cameras medium filters were chosen. The next observation, ID 00842001, covered a region in the north of the SMC. The CCD read out modes and the filters of the EPIC cameras were the same as in the first observation. Moreover, we searched the XMM-Newton Science Data Archive for public data of the SMC, suitable for our purposes. We found two data sets (ID 01100002 and 01357206) of fields in the north of the SMC, slightly overlapping with each other as well as with the pointing ID 00842001. The details of the observations are summarised in Table \[obstab\]. Source detection {#soudet} ---------------- [lrllccrc]{} & & & & & & &\ & & & & & & &\ & & & & & &\ 01 & 00842008M1 & 00 51 56.05 & –73 41 51.4 & 2.1 & $6.33 \times 10^{-3} \pm 1.04 \times 10^{-3}$ & 84.2 &$\sim1 \times 10^{-13}$\ 02 & 00842008PN & 00 54 33.4 & –73 41 04$^{\diamondsuit}$& – & $<2.33 \times 10^{-03\,\dagger}$ & 3.4 & $<1.5 \times 10^{-14}$\ 03 & 00842001PN & 00 54 56.02 & –72 26 48.6 & 1.0 & $3.30 \times 10^{-2} \pm 3.44 \times 10^{-3}$ & 267.1 & $5.3 \times 10^{-14}$\ 04 & 00842001PN & 00 56 05.24 & –72 22 00.9 & 2.0 & $7.46 \times 10^{-3} \pm 1.32 \times 10^{-3}$ & 76.0 & $2.4 \times 10^{-14}$\ 05 & 00842001PN & 00 57 19.58 & –72 25 35.1 & 0.5 & $8.79 \times 10^{-2} \pm 4.16 \times 10^{-3}$ & 225.6 & $1.4 \times 10^{-13}$\ 06 & 00842001PN & 00 57 35.71 & –72 19 32.6 & 0.9 & $3.07 \times 10^{-2} \pm 2.46 \times 10^{-3}$ & 478.5 & $9.5 \times 10^{-14}$\ 06a& 01100002PN & 00 57 35.56 & –72 19 36.8 & 2.9 & $2.53 \times 10^{-2} \pm 6.94 \times 10^{-3}$ & 55.9 & $1.9 \times 10^{-14}$\ 07 & 01100002PN & 00 57 50.22 & –72 02 37.0 & 0.7 & $1.32 \times 10^{-1} \pm 8.75 \times 10^{-3}$ &1147.0 & $2.3 \times 10^{-13}$\ 08 & 01100002PN & 00 57 50.80 & –72 07 58.7 & 0.5 & $1.79 \times 10^{-1} \pm 8.11 \times 10^{-3}$ &2798.1 & $5.8 \times 10^{-13}$\ 09 & 00842001PN & 00 58 11.68 & –72 30 50.4 & 1.7 & $5.37 \times 10^{-2} \pm 8.49 \times 10^{-3}$ & 120.2 & $\sim3 \times 10^{-13}$\ 10 & 01100002M1 & 00 59 21.01 & –72 23 18.4 & 0.8 & $5.22 \times 10^{-2} \pm 3.22 \times 10^{-3}$ &1269.6 & $2.3 \times 10^{-13}$\ 10a& 00842001PN & 00 59 21.10 & –72 23 15.8 & 0.9 & $1.05 \times 10^{-1} \pm 7.23 \times 10^{-3}$ &828.9 & $1.2 \times 10^{-13}$\ 11 & 01100002PN & 01 00 16.18 & –72 04 45.8 & 1.2 & $2.42 \times 10^{-2} \pm 2.86 \times 10^{-3}$ & 237.5 & $4.2 \times 10^{-14}$\ 12 & 01100002PN & 01 00 30.23 & –72 20 35.1 & 3.2 & $9.81 \times 10^{-3} \pm 3.00 \times 10^{-3}$ & 25.8 &$\sim6 \times 10^{-14}$\ 13 & 01357206PN & 01 01 02.98 & –72 06 59.5 & 2.9 & $1.77 \times 10^{-2} \pm 4.16 \times 10^{-3}$ & 38.2 & $2.6 \times 10^{-14}$\ 13a& 01100002PN & 01 01 03.87 & –72 07 02.4 & 3.8 & 
$9.16 \times 10^{-3} \pm 2.68 \times 10^{-3}$ & 23.0 & $3.2 \times 10^{-14}$\ 14 & 01100002PN & 01 01 20.82 & –72 11 21.1 & 0.6 & $1.40 \times 10^{-1} \pm 7.47 \times 10^{-3}$ &1689.6 & $3.1 \times 10^{-13}$\ 14a& 01357206PN & 01 01 20.87 & –72 11 16.8 & 1.0 & $8.83 \times 10^{-2} \pm 8.74 \times 10^{-3}$ & 368.7 & $8.5 \times 10^{-14}$\ 15 & 01357206PN & 01 01 37.56 & –72 04 18.7 & 1.0 & $5.01 \times 10^{-2} \pm 4.26 \times 10^{-3}$ & 436.3 & $5.7 \times 10^{-14}$\ 15a& 01100002PN & 01 01 37.77 & –72 04 22.1 & 1.2 & $3.55 \times 10^{-2} \pm 4.36 \times 10^{-3}$ & 222.7 & $4.6 \times 10^{-14}$\ 16 & 01357206PN & 01 03 14.11 & –72 09 14.2 & 0.5 & $1.34 \times 10^{-1} \pm 6.38 \times 10^{-3}$ &2049.8 & $3.5 \times 10^{-13}$\ 17 & 01357206PN & 01 03 37.57 & –72 01 33.2 & 0.2 & $5.09 \times 10^{-1} \pm 9.62 \times 10^{-3}$ &17172.3& $2.6 \times 10^{-12}$\ 18 & 01357206PN & 01 05 55.38 & –72 03 47.9 & 1.6 & $3.05 \times 10^{-2} \pm 3.79 \times 10^{-3}$ & 119.5 & $3.9 \times 10^{-14}$\ Notes to column No 1: Observation ID and the instrument with which the detection with the highest likelihood was achieved. Notes to columns No 3 – 5: Position of the XMM-Newton detections with 1 $\sigma$ statistical positional error, except for No 02, for which the position of the optical counterpart$^{\diamondsuit}$ is given. Notes to column No 6: Count rate for EPIC PN detection (except for entries No 01 and 10 which are EPIC MOS1 detections, and $^{\dagger}$ 3 $\sigma$ upper limit for No 02). Notes to column No 7: Maximum likelihood (ML) of detection for the total band (0.3 – 10.0 keV). Notes to column No 8: Flux (0.3 – 10.0 keV) calculated from the best fit model spectrum, setting the Galactic foreground absorption to zero. See Sect.\[timspec\] for used models. In order to obtain the luminosity, multiply by $4.3 \times 10^{47}$ cm$^{2}$. 
[lccccccc]{} & & & & & & &\ & & & & & & &\ & & & & & & &\ 01 &+0.15$\pm$0.20 &+0.09$\pm$0.18 &–1.00$\pm$0.26 & – & – & – & –\ 02 & – & – & – & too weak & – & – & –\ 03 &+0.41$\pm$0.12 &–0.03$\pm$0.11 &–0.30$\pm$0.17& 59.00$\pm$0.02 & 1.2 & 00545617–7226476 & 810\ 04 &+1.00$\pm$0.23 &–0.15$\pm$0.17 &–0.13$\pm$0.25& 140.1$\pm$0.3\* & – & – & 904\ 05 &+0.49$\pm$0.05 &–0.29$\pm$0.05 &–0.36$\pm$0.08& – & 1.8 & 00571981–7225337 & –\ 06 &+0.45$\pm$0.11 & +0.04$\pm$0.09 &–0.41$\pm$0.11& & 1.9 & &\ 06a&+0.80$\pm$0.31 & +0.10$\pm$0.26 &–0.54$\pm$0.43& \[.5mm\]\[.5mm\][–]{} & 3.5 & \[.5mm\]\[.5mm\][00573601–7219339]{}& \[.5mm\]\[.5mm\][1020]{}\ 07 &+0.19$\pm$0.09 & +0.03$\pm$0.08 &–0.14$\pm$0.10& 281.1$\pm$0.2 & – & – & 1036\ 08 &+0.48$\pm$0.07 & +0.25$\pm$0.05 &–0.06$\pm$0.06& 152.34$\pm$0.05\* & – & – & 1038\ 09 &+1.00$\pm$0.33 & +0.44$\pm$0.21 & +0.51$\pm$0.13& – & 4.5 & 00581258–7230485 & –\ 10 &+0.49$\pm$0.07 &–0.15$\pm$0.07 &–0.21$\pm$0.10& & 1.2 & &\ 10a&+0.27$\pm$0.09 &–0.05$\pm$0.09 & +0.12$\pm$0.09& \[.5mm\]\[.5mm\][–]{}& 1.4 & \[.5mm\]\[.5mm\][00592103–7223171]{}& \[.5mm\]\[.5mm\][–]{}\ 11 &+0.18$\pm$0.14 &–0.19$\pm$0.14 &–0.46$\pm$0.21& – & – & – & –\ 12 &+1.00$\pm$0.13 &–0.37$\pm$0.29 & +0.01$\pm$0.52& – & – & – & 1208\ 13 &+0.37$\pm$0.29 &–0.30$\pm$0.25 & +0.06$\pm$0.41& & & &\ 13a&–0.25$\pm$0.36 &–0.21$\pm$0.48 &+0.40$\pm$0.41& \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][–]{}& \[.5mm\]\[.5mm\][1240]{}\ 14 &+0.14$\pm$0.08 & +0.21$\pm$0.07 & +0.06$\pm$0.07& 452.2$\pm$0.5 & 2.6 & &\ 14a&+0.26$\pm$0.12 & +0.06$\pm$0.12 &–0.10$\pm$0.15& too weak & 2.2 & \[.5mm\]\[.5mm\][01012064–7211187]{}& \[.5mm\]\[.5mm\][1257]{}\ 15 &+0.17$\pm$0.10 &–0.23$\pm$0.10 &–0.20$\pm$0.16& & & &\ 15a&+0.17$\pm$0.13 &–0.36$\pm$0.12 &–0.26$\pm$0.30& \[.5mm\]\[.5mm\][–]{} &\[.5mm\]\[.5mm\][–]{} &\[.5mm\]\[.5mm\][–]{} &\[.5mm\]\[.5mm\][1277]{}\ 16 &+0.16$\pm$0.07 & +0.11$\pm$0.06 & +0.00$\pm$0.06& 341.7$\pm$0.4 & – & – & 1367\ 17 &+0.28$\pm$0.03 & +0.18$\pm$0.02 & +0.07$\pm$0.02& – & – & – & 1393\ 18 &–0.19$\pm$0.18 & +0.12$\pm$0.18 &+0.22$\pm$0.16& – & – & – & 1557\ Notes to columns No 9 – 11: Hardness ratios as defined in Eq.(\[hr123\]). Notes to column No 12: Pulse periods from timing analysis. \* New X-ray binary pulsar! Notes to column No 13: Distance to OGLE object. Notes to column No 15: Entry numbers in . [lccccll]{} & & & & & &\ & & & & & &\ & & & & & &\ 01 & – & – & – & – & RXJ0051.7–7341 & XRB?\ 02 &22.8 & – & 547 & 28 & SMCX-2 & HMXB Be, P\ 03 & 4.5 & 058 & 241 & 31 & RXJ0054.9–7226 & HMXB Be, P\ 04 & – & – & – & 32 & XMMUJ005605.2–722200 = 2E0054.4–7237? & HMXB?, P\ 05 & 5.6 & – & 234 & – & 2E0055.6–7241, RXJ0057.3–7225 & AGN, $z$ = 0.15\ 06 & & & & & &\ 06a& \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][XMMUJ005735.7–721932 = \[YIT2000\]19?]{} & \[.5mm\]\[.5mm\][new HMXB?]{}\ 07 & 8.1 & 073 & 114 & 35 & AXJ0058–720, RXJ0057.8–7202 & HMXB?, P\ 08 & 7.3 & 074 & 136 & 36 & RXJ0057.8–7207 & HMXB?, P\ 09 & 4.3 & 076 & – & 38 & RXJ0058.2–7231 & HMXB Be\ 10 & 2.9 & & & & &\ 10a& 4.7 & \[.5mm\]\[.5mm\][081]{} & \[.5mm\]\[.5mm\][218]{} & \[.5mm\]\[.5mm\][–]{} & \[.5mm\]\[.5mm\][RXJ0059.3–7223]{} & \[.5mm\]\[.5mm\][XRB?]{}\ 11 & 6.6 & 088 & 123 &– & RXJ0100.2–7204 & XRB? 
AGN?\ 12 & – & – & – &– & XMMUJ010030.2–722035 & new HMXB?\ 13 & 4.8 & & & & &\ 13a& 9.8 & \[.5mm\]\[.5mm\][093]{} & \[.5mm\]\[.5mm\][132]{} & \[.5mm\]\[.5mm\][42]{} & \[.5mm\]\[.5mm\][RXJ0101.0–7206]{} & \[.5mm\]\[.5mm\][HMXB Be]{}\ 14 & 6.3 & – & & & &\ 14a& 6.3 & \[.5mm\]\[.5mm\][095]{} & \[.5mm\]\[.5mm\][159]{} & \[.5mm\]\[.5mm\][43]{} & \[.5mm\]\[.5mm\][RXJ0101.3–7211]{} & \[.5mm\]\[.5mm\][HMXB Be, P]{}\ 15 & 5.3 & & & & &\ 15a& 7.4 & \[.5mm\]\[.5mm\][096]{} & \[.5mm\]\[.5mm\][121]{} & \[.5mm\]\[.5mm\][44]{} & \[.5mm\]\[.5mm\][RXJ0101.6–7204]{} & \[.5mm\]\[.5mm\][HMXB?]{}\ 16 & 4.6 & 101 & 143 & 49 & AXJ0103–722, SAXJ0103.2–7209 & HMXB Be, P\ 17 & 2.6 & 105 & 106 & 50 & RXJ0103.6–7201 & HMXB?\ 18 & 6.9 & – & 120 & 55 & RXJ0105.9–7203 & HMXB?\ Notes to column No 16: Distance to ROSAT source. Notes to columns No 17 – 19: Entry numbers in ROSAT HRI catalogue , ROSAT PSPC catalogue , and . Notes to column No 20: YIT2000: @2000ApJS..128..491Y. Notes to column No 21: HMXB: High mass X-ray binary, HMXB?: HMXB candidate, XRB?: X-ray binary candidate, Be: Be system, P: pulsar. All the data were processed with the XMM-Newton Science Analysis System (SAS) version 5.3.3. For source detection, the events were separated into four energy bands: B$_{1}$ = 0.3 – 1.0 keV, B$_{2}$ = 1.0 – 2.0 keV, B$_{3}$ = 2.0 – 4.5 keV, and B$_{4}$ = 4.5 – 10.0 keV. In all these bands, images were created and source detection was performed using the sliding window and maximum likelihood methods of the SAS. Detections with likelihood of existence (ML) higher than 10.0 were accepted as real sources. This corresponds to the probability $P = 1 - {\rm exp}(-{\rm ML}) = 0.999955$ for the existence of the source. Hardness ratios were computed using the source counts in different bands: $$\label{hr123} {\rm HR}i = \frac{{\rm B}_{i+1} - {\rm B}_{i}}{{\rm B}_{i+1} + {\rm B}_{i}}$$ for $i = 1, 2, 3$. The values of HR1, HR2, and HR3 are shown in two diagrams in Fig.\[hr1hr2hr3\]. In most cases, X-ray binaries in the SMC or Active Galactic Nuclei (AGNs) behind the SMC have absorbed spectra and therefore show positive values for HR1. As can be seen in the second diagram, HR2 and HR3 which compare the events in the energy bands above 1.0 keV, cluster around zero. A large fraction of the sources (90%) is located in a region with $-0.4 <$ HR2 $< 0.3$ and $-0.6 <$ HR3 $< 0.5$. ![\[hr1hr2hr3\] HR1 plotted over HR2 and HR2 over HR3 with errors for all the sources in Table \[xbtab\] except for SMCX-2. Circle is used to mark the source No 5 which is an AGN, and triangle for source No 11 which is either an XRB or an AGN. ](h4167f2a.ps "fig:"){width="6.5cm"}\ ![\[hr1hr2hr3\] HR1 plotted over HR2 and HR2 over HR3 with errors for all the sources in Table \[xbtab\] except for SMCX-2. Circle is used to mark the source No 5 which is an AGN, and triangle for source No 11 which is either an XRB or an AGN. ](h4167f2b.ps "fig:"){width="6.5cm"} The detected sources were cross-correlated with catalogues of Einstein sources [@1992ApJS...78..391W] as well as ROSAT sources detected by PSPC and by HRI instruments. The positions of the X-ray sources were plotted on Digitized Sky Survey DSS2 (red) images of this field in order to find probable optical counterparts. The optical sources were also verified by cross-correlating the X-ray source list with the USNO-A2.0 catalogue produced by the United States Naval Observatory [@1996AAS...188.5404M; @1998AAS...19312003M]. 
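Positional matching of this kind is straightforward to reproduce with standard tools. The following sketch is purely illustrative: the actual identifications used the catalogues and procedures described above, and the catalogue coordinates and matching radius below are example values, the radius chosen to be a few times the systematic position uncertainty quoted below.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

# One XMM-Newton position from the source table (source No 3) ...
xmm = SkyCoord(ra=["00h54m56.02s"], dec=["-72d26m48.6s"], frame="icrs")

# ... and a toy "optical catalogue" (illustrative coordinates only)
catalogue = SkyCoord(ra=[13.7340, 14.0125] * u.deg,
                     dec=[-72.4466, -72.3500] * u.deg, frame="icrs")

# Nearest-neighbour match, accepted within a few arcsec
idx, sep2d, _ = xmm.match_to_catalog_sky(catalogue)
accepted = sep2d < 5.0 * u.arcsec

for i, (j, s, ok) in enumerate(zip(idx, sep2d.arcsec, accepted)):
    print(f"X-ray source {i}: catalogue entry {j}, "
          f"separation {s:.1f} arcsec, match={ok}")
```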
In addition, we compared the source positions to the entries in the Optical Gravitational Lensing Experiment OGLE-II project list of variable sources in the Magellanic Clouds [@2001AcA....51..317Z]. For five out of 15 sources, correlations with an OGLE object were found. Finally, the source list was cross-correlated with the list of emission line objects in the SMC (Meyssonnier & Azzopardi, , \[MA93\]). The existence of an emission line star at the position of a hard X-ray source indicates that the source might be an X-ray binary system with a Be-star companion. The complete set of source lists will be presented in another paper. Here, we shall concentrate on the HMXBs and candidates in the four fields. The results on the eighteen sources are summarised in Table \[xbtab\]. The table includes the X-ray source coordinates, 1$\sigma$ positional error, count rate, likelihood for detection in the total band, flux, hardness ratios (see Eq.(\[hr123\])), pulse period with 1$\sigma$ error (Sect.\[timspec\]), and identifications. Correlations with ROSAT sources, OGLE objects, and emission line objects can be found as well. The sources are sorted by RA and Dec (J2000.0), and the entry numbers are used in the following. The positional errors which are given in this table are statistical errors. The systematic error of the X-ray position is about 3 – 4 . In order to calculate the flux, model parameters resulting from the spectral analysis (see Sect.\[timspec\]) were used for all sources. In the following, the source number in the ROSAT HRI catalogue of the SMC is given as RHNNN, and the number in the ROSAT PSPC catalogue as RPNNN. The entry number in the list of Haberl & Sasaki (, \[HS2000\]) is also mentioned using the format \[HS2000\]NN (also see Table \[xbtab\]). Timing and spectral analysis {#timspec} ---------------------------- [clcccllccc]{} & & & & & & & & &\ No & & $\Gamma$ & $N_{\rm H}$ & $kT$ & & & $\chi^{2}$ & dof & No\ & & & \[cm$^{-2}$\] & \[keV\] & & & & &\ 05 & 2E0055.6–7241 & $2.59^{+0.16}_{-0.33}$ & $5.3^{+1.1}_{-0.8} \times 10^{21}$ & – & & – & 41.4 & 51 & 2\ 06 & XMMUJ005735.7–721932 & $1.42^{+0.25}_{-0.20}$ & $3.6^{+1.9}_{-1.5} \times 10^{21}$ & – & & – & 84.6 & 63 & 4\ 07 & AXJ0058–720 & $1.01^{+0.11}_{-0.11}$ & $3.4^{+5.9}_{-3.4} \times 10^{20}$ & – & & – & 91.6 & 65 & 3\ 08 & RXJ0057.8–7207 & $0.97^{+0.08}_{-0.07}$ & $3.0^{+0.8}_{-0.4} \times 10^{21}$ & – & & – & 161.9 & 133 & 3\ 10 & RXJ0059.3–7223 & $1.46^{+0.12}_{-0.13}$ & $1.8^{+0.7}_{-0.6} \times 10^{21}$ & – & & – & 117.8 & 99 & 5\ 11 & RXJ0100.2–7204 & $2.00^{+0.41}_{-0.26}$ & $1.7^{+0.9}_{-1.0} \times 10^{21}$ & – & & – & 28.9 & 26 & 3\ 14 & RXJ0101.3–7211 & $1.14^{+0.18}_{-0.13}$ & $3.3^{+2.4}_{-1.1} \times 10^{21}$ & $0.20^{+0.09}_{-0.06}$ & & MEKAL & 82.6 & 56 & 2\ 15 & RXJ0101.6–7204 & $1.73^{+0.15}_{-0.17}$ & $8.6^{+4.9}_{-5.6} \times 10^{20}$ & – & & – & 68.0 & 58 & 6\ 16 & AXJ0103–722 & $1.08_{-0.19}^{+0.12}$ & $1.9_{-1.7}^{+1.9} \times 10^{21}$ & $0.27_{-0.07}^{+0.08}$ & & MEKAL & 172.7 & 155 & 3\ 17 & RXJ0103.6–7201 (Model 1) & $0.72_{-0.07}^{+0.06}$ & $1.7_{-0.9}^{+1.0} \times 10^{21}$ & $0.27_{-0.04}^{+0.03}$ & & MEKAL & 295.1 & 240 & 3\ & & $0.71_{-0.06}^{+0.05}$ & $2.3_{-0.8}^{+0.8} \times 10^{21}$ & $0.32_{-0.05}^{+0.05}$ & $1.07_{-0.18}^{+0.57}$ (O) & VMEKAL & 272.9 & 238 & 3\ & & & & & $1.61_{-0.66}^{+1.40}$ (Ne) & & & &\ Notes to column No 4: Additional $N_{\rm H}$ to Galactic foreground $N_{\rm H, Gal}$ (see Eqs.(\[power\]) and (\[powme\])). Notes to column No 9: Degrees of freedom. 
Notes to column No 10: Number of spectra used.

As the very first step of the data analysis, we checked the EPIC PN data for incorrect time information. It has been reported that in some cases there are time jumps of 1 s in the EPIC PN data which were not corrected in the SAS processing. Since we found no events indicating such a time jump, we could proceed without any countermeasure. After selecting the events for each source, they were analysed using the XANADU software package distributed by the High Energy Astrophysics Science Archive Research Center (HEASARC). It contains the packages XRONOS for timing analysis and XSPEC for spectral fitting.

Based on EPIC PN data, a period search was carried out with XRONOS after correcting the photon arrival times to the solar system barycentre. If a peak was found in the power spectrum indicating pulsations, a more detailed epoch folding search was performed around the preliminary value. Once a rough value for the pulse period was obtained, the $\chi^{2}$ distribution around this value was fitted with a Lorentz profile, and the maximum of the Lorentz profile was determined together with the 1 $\sigma$ error. Finally, folded light curves were created in three energy bands: B$_{1}$ = 0.3 – 1.0 keV, B$_{2}$ = 1.0 – 2.0 keV, B$_{3+4}$ = 2.0 – 10.0 keV. In addition, the ratios of the count rates in a harder band to those in a softer band (B$_{2}$/B$_{1}$ and B$_{3+4}$/B$_{1+2}$, with B$_{1+2}$ = 0.3 – 2.0 keV) were computed to illustrate how the hardness changes with pulse phase. Note that these hardness ratios are different from those defined in Eq.(\[hr123\]).

Except for sources which were too faint, spectra were extracted for each source. These spectra were modelled with a power law component together with the fixed Galactic foreground absorbing column density of $N_{\rm H, Gal} = 5.74 \times 10^{20}$ cm$^{-2}$ and a free column density $N_{\rm H}$: $$\label{power} S_{1}(E) = e^{-\sigma(E)\,N_{\rm H\,Gal}} \times e^{-\sigma(E)\,N_{\rm H}} \times K \times E^{-\Gamma},$$ with $E$ the energy in \[keV\], $\Gamma$ the photon index, and $K$ the normalisation. In some cases there was a deviation of the observed spectrum from a power law spectrum, suggesting the existence of an additional soft component. Since the spectra show features indicating emission lines, the soft component was modelled as thermal plasma emission. This thermal emission presumably arises from circumstellar matter, and its absorption must be negligibly low in comparison to the absorption of the hard X-ray emission from the neutron star. Moreover, if the column density were high, the soft emission would be absorbed and thus not detectable. Using the MEKAL model in XSPEC for the thermal component without additional absorbing column density, the spectrum can be written as $$\begin{aligned} \label{powme} S_{2}(E) & = & e^{-\sigma(E)\,N_{\rm H\,Gal}} \times (e^{-\sigma(E)\,N_{\rm H}} \times K \times E^{-\Gamma} \nonumber \\ & & + S_{\rm MEKAL}(T,{\rm Abund.})).\end{aligned}$$ $S_{\rm MEKAL}(T)$ is the MEKAL model spectrum with a temperature corresponding to $kT$ in \[keV\] and elemental abundances given relative to solar. We also performed a fit with a blackbody component instead of the MEKAL model.
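Before comparing these two soft-component choices, the shape of the $S_{2}(E)$ model can be sketched numerically in a few lines. The snippet below is only an illustration: the photoelectric cross-section $\sigma(E)$ is replaced by a crude power-law approximation (the fits themselves used XSPEC's tabulated cross-sections), the MEKAL term is left as a generic additive component, and the normalisation is an arbitrary example value.

```python
import numpy as np

NH_GAL = 5.74e20                    # fixed Galactic foreground column (cm^-2)
E = np.linspace(0.3, 10.0, 500)     # energy grid in keV

def sigma(E_keV):
    """Rough stand-in for the photoelectric cross-section per H atom (cm^2);
    XSPEC uses tabulated cross-sections rather than this approximation."""
    return 2.3e-22 * E_keV ** (-8.0 / 3.0)

def s2(E_keV, nh, gamma, K, soft=0.0):
    """Absorbed power law plus an unabsorbed soft term, as in the S_2(E) model;
    `soft` stands in for S_MEKAL(T, Abund.)."""
    powerlaw = np.exp(-sigma(E_keV) * nh) * K * E_keV ** (-gamma)
    return np.exp(-sigma(E_keV) * NH_GAL) * (powerlaw + soft)

# Parameters of the order of those fitted for source No 14 (Gamma ~ 1.14,
# N_H ~ 3.3e21 cm^-2); K = 1e-4 is an arbitrary illustrative normalisation.
spectrum = s2(E, nh=3.3e21, gamma=1.14, K=1e-4)
```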
Although for fainter sources, no significant difference was found in the fits, for bright sources like No 16 (Fig.\[J0103-7209\]) and No 17 (Fig.\[J0103-7201\]), the blackbody fit results in higher $\chi^{2}$: 231.5 for 156 degrees of freedom for source No 16 and 322.5 for 241 degrees of freedom for source No 17 (compare to Table \[spectab\]). This is because the blackbody model fits the low energy tail of the spectrum, but does not account for the peaks around 0.6 keV and 0.9 keV, which might indicate emission lines from highly ionized oxygen and neon, as well as iron lines. For sources which were bright enough to obtain a significant spectral fit, the results are shown in figures and the model parameters yielding the best fit results are listed in the Table \[spectab\] together with 1$\sigma$ errors. Moreover, for source No 17 which is very bright and shows emission lines most prominently, we also used the VMEKAL instead of the MEKAL model. In this more elaborate model, the abundance for each of the element is a variable fit parameter. We obtained an improved fit with high oxygen and neon abundances, whereas the other elements have values below solar, not differing significantly from zero. From the spectral models for the emission we were able to estimate the flux of the sources in the ROSAT band (0.1 – 2.4 keV). The flux was calculated from the fitted models, except for four sources which were too faint: For No 02, a power law spectrum with $\Gamma = 0.7$ and $N_{\rm H} = 0.0 \times 10^{21}$ cm$^{-2}$ [@2001PASJ...53..227Y] was assumed to estimate the flux upper limit. For No 01, 09, and 12, $\Gamma = 1.0$ and $N_{\rm H} = 1.0 \times 10^{21}$ cm$^{-2}$ were adopted. The resulting luminosity was used to create a long term light curve of all the ROSAT and the new XMM-Newton data. In the light curves, crosses are used for ROSAT PSPC data, triangles for ROSAT HRI data, and dots for XMM-Newton EPIC data. Upper limits determined from ROSAT observations are plotted as arrows. For the distance to the SMC, a mean value of 60 kpc was assumed [see review by @1999IAUS..190..569V]. Comments on individual HMXBs and candidates =========================================== In this section, we present the results on individual sources. All sources, which were detected in the four data sets and were proven to be HMXBs or candidates, are listed in Table \[xbtab\]. Source No 1: RXJ0051.7–7341 --------------------------- which has been suggested as an XRB candidate by was only detected in MOS1/2 data. In the PN data the source was located on a bad column. It is faint, so neither spectral nor timing analysis was performed for this source. The PSPC count rate of $1.64 \times 10^{-3} \pm 0.68 \times 10^{-3}$ s$^{-1}$ during the ROSAT observation corresponds to XMM-Newton MOS (medium filter) count rate of about $8 \times 10^{-3}$ s$^{-1}$. This means that the luminosities of the source during the ROSAT and XMM-Newton observations were comparable (see Table \[xbtab\]). Source No 2: SMCX-2 ------------------- was one of the first three X-ray sources which were discovered in the SMC [@1978ApJ...221L..37C]. It was also detected in the HEAO1 A-2 experiment [@1979ApJS...40..657M], but not in the Einstein IPC survey [@1981ApJ...243..736S]. In ROSAT observations, this transient source was detected only once . It is thought to be a Be/XRB, since a Be-star was found as its optical counterpart [@1979MNRAS.186P..43M]. 
In early 2000, the RXTE All-Sky Monitor detected an outburst at the position of SMCX-2 [@2001ApJ...548L..41C] and a pulse period of 2.374$\pm$0.007 s was determined [@2000IAUC.7402....3C; @2000IAUC.7441....2T]. In the XMM-Newton data (Obs. ID 00842008), there was no detection with ML $>$ 10 (see Sect.\[soudet\]) at the position of SMCX-2 which was apparently in low luminosity state during the XMM-Newton observation. Therefore, we performed source detection using the maximum likelihood routine at the position of the optical counterpart (SIMBAD): RA = 00$^h$ 54$^m$ 33.4$^s$, Dec = –73 41 04 (J2000.0). Since we set the ML limit lower, the source was detected with a likelihood of ML = 3.4. The 3 $\sigma$ upper limit count rate obtained from the ML source detection routine is $2.33 \times 10^{-3}$ s$^{-1}$. The source counts were highest in the B$_{3}$ band (2.0 – 4.5 keV). In order to estimate the flux upper limit, spectral parameters derived by @2001PASJ...53..227Y from the ASCA spectrum during the outburst were used: Photon index $\Gamma = 0.7$ for a power law spectrum absorbed by a column density of $N_{\rm H} < 1.0 \times 10^{21}$ cm$^{-2}$. This results in an upper limit for the un-absorbed flux of $1.5 \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$, corresponding to $L_{\rm X} = 6.5 \times 10^{33}$ erg s$^{-1}$ (0.3 – 10.0 keV) during the XMM-Newton observation in Oct. 2001. Source No 3: RXJ0054.9–7226 --------------------------- ![\[J0054-7226\] Folded light curves and long term light curve of RXJ0054.9–7226 (source No 3). The hardness ratio is the ratio between the count rates in harder band and the count rates in softer band (Sect.\[timspec\]). See text for the symbols used for the long term light curve. ](h4167f3a.ps "fig:"){width="5.3cm"}\ ![\[J0054-7226\] Folded light curves and long term light curve of RXJ0054.9–7226 (source No 3). The hardness ratio is the ratio between the count rates in harder band and the count rates in softer band (Sect.\[timspec\]). See text for the symbols used for the long term light curve. ](h4167f3b.ps "fig:"){width="5.3cm"}\ ![\[J0054-7226\] Folded light curves and long term light curve of RXJ0054.9–7226 (source No 3). The hardness ratio is the ratio between the count rates in harder band and the count rates in softer band (Sect.\[timspec\]). See text for the symbols used for the long term light curve. ](h4167f3c.ps "fig:"){width="5.65cm"} is known to be an X-ray binary pulsar with a pulse period of 58.969$\pm$0.001 s and is the only source in our sample, for which the orbital period has been measured: 65 d [@1999AAS...194.5218L]. In the timing analysis of the new XMM-Newton data, the pulse period was verified to be 59.00$\pm$0.02 s. The folded light curves show variations especially above 1.0 keV, and there is no significant change in hardness ratios (Fig.\[J0054-7226\]). As can be seen in the long term light curve, compared to ROSAT data, the source was observed in low luminosity state. Due to the low flux, the statistics of the spectrum were not high enough and the spectrum is thus not discussed here. However, the results of the spectral analysis was used to estimate the flux of the source (see Table \[xbtab\]). The optical counterpart, a Be-star, is identified with the variable star OGLE00545617–7226476 [@2001AcA....51..317Z]. Source No 4: XMMUJ005605.2–722200 = 2E0054.4–7237? -------------------------------------------------- ![\[J0056-7222\] Folded light curves of XMMUJ005605.2–722200 (source No 4). Hardness ratio as in Fig\[J0054-7226\]. 
](h4167f4a.ps "fig:"){width="5.3cm"}\ ![\[J0056-7222\] Folded light curves of XMMUJ005605.2–722200 (source No 4). Hardness ratio as in Fig\[J0054-7226\]. ](h4167f4b.ps "fig:"){width="5.3cm"}\ The error circle of the Einstein source includes an emission line object. Therefore, it was suggested as a Be/XRB candidate (\[HS2000\]). In the XMM-Newton data, a source consistent with the position of the emission line object was detected () and pulsations from this source was discovered. XMMUJ005605.2–722200 is most likely consistent with . The period is 140.1$\pm$0.3 s. As can be seen in Fig.\[J0056-7222\], the pulses in the soft band are narrower than in the harder band. Source No 5: 2E0055.6–7241 {#souno5} -------------------------- ![\[J0057-7225\] Spectrum of 2E0055.6–7241 (source No 5). ](h4167f5.ps){width="5.2cm"} had been suggested as an XRB candidate by . Timing analysis revealed no pulsations of the X-ray source. Also on longer timescales no flux change was verified: The ROSAT PSPC count rate was $7.48 \times 10^{-3} \pm 0.62 \times 10^{-3}$ s$^{-1}$ , corresponding to a count rate of $8 \times 10^{-2}$ s$^{-1}$ for XMM-Newton EPIC PN (thin1 filter). This value is similar to the count rate of the XMM-Newton observation, which is $8.79 \times 10^{-2} \pm 0.42 \times 10^{-2}$ s$^{-1}$. The X-ray spectrum is shown in Fig.\[J0057-7225\]. It has a photon index of $\Gamma = 2.59^{+0.16}_{-0.33}$, which is higher than for other sources of our sample, and highest absorbing column density of $N_{\rm H} = 5.3^{+1.1}_{-0.8} \times 10^{21}$ cm$^{-2}$ (also see Table \[spectab\]). A difference to other sources is also seen in the hardness ratios, as the source has a relatively high HR1 and lower values of HR2 and HR3 (HR1 = +0.49$\pm$0.05, HR2 = –0.29$\pm$0.05, HR3 = –0.36$\pm$0.08, also see Fig.\[hr1hr2hr3\]). The high absorption makes HR1 positive, whereas HR2 and HR3 are negative due to steeper power law spectrum. On the DSS2 (red) image, there is a source at the X-ray position, which coincides with the variable object OGLE00571981–72253375 [@2001AcA....51..317Z] with $B = 19.7$ and $R = 17.8$ (USNO-A2.00150-00625436), i.e.$B - R = 1.9$. have shown, that all the HMXBs and candidates in the SMC HRI catalogue have $14 < R < 18$ and $-2 < B - R < 3$, whereas e.g. AGNs have $R > 16$ and $B - R > 0$. Both the optical magnitudes and the X-ray spectra indicate that this source might as well be an AGN. Spectroscopy of the optical counterpart by @2003aph0301617....D showed that this object is a $z$ = 0.15 quasar located behind the SMC. Source No 7: AXJ0058–720 ------------------------ ![\[J0058-7202\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0058–720 (source No 7). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f6a.ps "fig:"){width="5.3cm"}\ ![\[J0058-7202\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0058–720 (source No 7). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f6b.ps "fig:"){width="5.3cm"}\ ![\[J0058-7202\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0058–720 (source No 7). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f6c.ps "fig:"){width="5.1cm"}\ ![\[J0058-7202\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0058–720 (source No 7). Hardness ratio and symbols as in Fig\[J0054-7226\]. 
](h4167f6d.ps "fig:"){width="5.65cm"} The pulse period of was determined from the ASCA data as 280.4$\pm$0.3 s [@1998IAUC.6853....2Y], which we confirmed in the XMM-Newton data: 281.1$\pm$0.2 s. It shows strong pulses in the softer bands and its spectrum becomes harder during the ’off’ time (Fig.\[J0058-7202\]). The residuals of the power law fit (Table \[spectab\] and Fig.\[J0058-7202\]) indicate the existence of an additional soft component. The source has been suggested as a HMXB candidate due to the likely optical counterpart, which is an emission line object (\[HS2000\]). Source No 8: RXJ0057.8–7207 --------------------------- ![\[J0057-7207\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of RXJ0057.8–7207 (source No 8). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f7a.ps "fig:"){width="5.3cm"}\ ![\[J0057-7207\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of RXJ0057.8–7207 (source No 8). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f7b.ps "fig:"){width="5.3cm"}\ ![\[J0057-7207\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of RXJ0057.8–7207 (source No 8). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f7c.ps "fig:"){width="5.1cm"}\ ![\[J0057-7207\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of RXJ0057.8–7207 (source No 8). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f7d.ps "fig:"){width="5.65cm"} is a HMXB candidate with an emission line object suggested as a likely optical counterpart (\[HS2000\]). We discovered pulsations in the new XMM-Newton data and derived a pulse period of 152.34$\pm$0.05 s. For this source, a period of 152.098$\pm$0.016 s was independently found in Chandra data by @2003ApJ...584L..79M. The folded light curves in Fig.\[J0057-7207\] show, that there are correlated flux variations in all bands with a significant minimum at phase 0.4. Especially in the hard band, there is a slow increase and fast decay. Therefore, the hardness ratio falls off at phase 0.2 and increases slowly after phase 0.7. The source spectrum is well reproduced by a power law spectrum (see Table \[spectab\]) with a significant absorption within the SMC or the source itself. As can be seen in the long term light curve, there was a weak flare observed by ROSAT, whereas the XMM-Newton observation was performed in a low luminosity state, 5.3 times lower than the maximum observed by ROSAT. Source No 9: RXJ0058.2–7231 --------------------------- The source corresponding to the optically identified HMXB is very faint, so that no timing analysis could be performed. However, the hardness ratios HR1, HR2, and HR3 indicate, that this source has a hard spectrum. Its optical counterpart is a variable Be star in the SMC, OGLE00581258–7230485 [@2001AcA....51..317Z]. From the ROSAT HRI count rate of $4.28 \times 10^{-3} \pm 0.48 \times 10^{-3}$ s$^{-1}$ we estimated the corresponding XMM-Newton EPIC PN (thin1 filter) count rate: $\sim2 \times 10^{-1}$ s$^{-1}$. The source was about 3.6 times brighter when it was detected by ROSAT than when it was observed by XMM-Newton. Source No 10: RXJ0059.3–7223 ---------------------------- ![\[J0059-7223\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0059.3–7223 (source No 10). Symbols for the long term light curve as in Fig.\[J0054-7226\].](h4167f8a.ps "fig:"){width="5.1cm"}\ ![\[J0059-7223\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0059.3–7223 (source No 10). 
Symbols for the long term light curve as in Fig.\[J0054-7226\].](h4167f8b.ps "fig:"){width="5.65cm"} has been suggested as an XRB candidate by . It was observed by XMM-Newton in two pointings. Its spectrum mainly consists of a power law component typical for a HMXB with additional features (Fig.\[J0059-7223\]). For this source no pulsations were detected. At its position, there is the variable star OGLE00592103–7223171 [@2001AcA....51..317Z], which is suggested as the optical counterpart. Its magnitudes are $B = 17.4$ and $R = 14.6$ (USNO-A2.00150-00660299), which gives $B - R = 2.8$. The $R$ magnitude in particular is characteristic for a HMXB (see Sect.\[souno5\]). Source No 11: RXJ0100.2–7204 ---------------------------- ![\[J0100-7204\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0100.2–7204 (source No 11). Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f9a.ps "fig:"){width="5.1cm"}\ ![\[J0100-7204\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0100.2–7204 (source No 11). Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f9b.ps "fig:"){width="5.65cm"} At the position of the XMM-Newton detection corresponding to , a very faint object can be found on the DSS2 (red) image. However, there is no entry in the USNO-A2.0 catalogue for this source. We also looked for information in different catalogues using BROWSE of the HEASARC archive, but could not find the magnitudes of this optical source. The X-ray source was suggested as an XRB candidate by . The spectrum of the source is a power law with $\Gamma = 2.00^{+0.41}_{-0.26}$ and absorbing column density of $N_{\rm H} = 1.7^{+0.9}_{-1.0} \times 10^{21}$ cm$^{-2}$ (Fig.\[J0100-7204\]). Since the probable optical counterpart is very faint and the power law photon index is higher than for most of the other sources presented here, it can not be ruled out that this source is an AGN (also see Table \[spectab\]). Source No 13: RXJ0101.0–7206 ---------------------------- ![\[J0101-7206\] Long term light curve (0.1 – 2.4 keV) of RXJ0101.0–7206 (source No 13). Symbols as in Fig.\[J0054-7226\]. ](h4167f10.ps){width="5.65cm"} The Be/X-ray binary showed a luminosity of $\sim3 \times 10^{33}$ erg s$^{-1}$ in the ROSAT band (0.1 – 2.4keV) during two XMM-Newton observations. It was about 60 times fainter than at the maximum observed by ROSAT (Fig.\[J0101-7206\]). Pulsations with a period of 304.49$\pm$0.13 s were discovered in Chandra data [@2003ApJ...584L..79M]. This period could not be verified in the XMM-Newton observation, because the source was too faint. @2003MNRAS.338..428E presented results on the optical analysis of likely counterparts, discussing two objects (No 1 and 4) in the ROSAT PSPC error circle. They conclude that the optical counterpart is object No 1 which is confirmed to be a Be star. This object is also the only optical source, which can be found on the DSS image within the XMM-Newton 1 $\sigma$ error circle. Source No 14: RXJ0101.3–7211 ---------------------------- ![\[J0101-7211\] Folded light curves, spectra, and long term light curve (0.1 – 2.4 keV) of RXJ0101.3–7211 (source No 14). Hardness ratio and symbols as in Fig\[J0054-7226\]. For the spectra, solid lines are used for the data of the obs. ID 01100002, and dashed lines for obs. ID 01357206. ](h4167f11a.ps "fig:"){width="5.3cm"}\ ![\[J0101-7211\] Folded light curves, spectra, and long term light curve (0.1 – 2.4 keV) of RXJ0101.3–7211 (source No 14). Hardness ratio and symbols as in Fig\[J0054-7226\]. 
For the spectra, solid lines are used for the data of the obs. ID 01100002, and dashed lines for obs. ID 01357206. ](h4167f11b.ps "fig:"){width="5.3cm"}\ ![\[J0101-7211\] Folded light curves, spectra, and long term light curve (0.1 – 2.4 keV) of RXJ0101.3–7211 (source No 14). Hardness ratio and symbols as in Fig\[J0054-7226\]. For the spectra, solid lines are used for the data of the obs. ID 01100002, and dashed lines for obs. ID 01357206. ](h4167f11c.ps "fig:"){width="5.1cm"}\ ![\[J0101-7211\] Folded light curves, spectra, and long term light curve (0.1 – 2.4 keV) of RXJ0101.3–7211 (source No 14). Hardness ratio and symbols as in Fig\[J0054-7226\]. For the spectra, solid lines are used for the data of the obs. ID 01100002, and dashed lines for obs. ID 01357206. ](h4167f11d.ps "fig:"){width="5.65cm"} The ROSAT source is the first X-ray binary pulsar of which the discovery was based on XMM-Newton data. It was covered in two additional observations finding the source again in a low intensity state. The pulse period of 455$\pm$2 s was verified in the new data of the observation ID 01100002: 452.2$\pm$0.5 s. During the observation ID 01357206, the source was too faint for a timing analysis. The folded light curves show strong variation in all bands (Fig.\[J0101-7211\]). The spectrum of the source becomes harder during pulse minimum. The spectrum is well fitted with a soft thermal component described by a MEKAL model ($kT = 0.20^{+0.09}_{-0.06}$ keV) with a low metal abundance ($0.11^{+0.10}_{-0.11}$ times solar) and a power law component absorbed by a high column density (Table \[spectab\]). The optical counterpart (OGLE01012064–7211187) is a Be-star. Source No 15: RXJ0101.6–7204 ---------------------------- ![\[J0101-7204\] Spectra and long term light curve (0.1 – 2.4 keV) of RXJ0101.6–7204 (source No 15). For the spectra, solid lines are used for the data of the obs. ID 01357206, and dashed lines for obs. ID 01100002. Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f12a.ps "fig:"){width="5.1cm"}\ ![\[J0101-7204\] Spectra and long term light curve (0.1 – 2.4 keV) of RXJ0101.6–7204 (source No 15). For the spectra, solid lines are used for the data of the obs. ID 01357206, and dashed lines for obs. ID 01100002. Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f12b.ps "fig:"){width="5.65cm"} The Be/XRB candidate with an emission line star at the ROSAT PSPC and HRI positions (\[HS2000\]), was observed in two XMM-Newton pointings. Its spectrum and long term light curve are shown in Fig.\[J0101-7204\]. The spectrum can be modelled as a moderately absorbed power law. No pulsations were discovered. Source No 16: AXJ0103–722 ------------------------- ![\[J0103-7209\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0103–722 (source No 16). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f13a.ps "fig:"){width="5.3cm"}\ ![\[J0103-7209\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0103–722 (source No 16). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f13b.ps "fig:"){width="5.3cm"}\ ![\[J0103-7209\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0103–722 (source No 16). Hardness ratio and symbols as in Fig\[J0054-7226\]. ](h4167f13c.ps "fig:"){width="5.1cm"}\ ![\[J0103-7209\] Folded light curves, spectrum, and long term light curve (0.1 – 2.4 keV) of AXJ0103–722 (source No 16). Hardness ratio and symbols as in Fig\[J0054-7226\]. 
](h4167f13d.ps "fig:"){width="5.65cm"} For the Be/X-ray binary a pulse period of 345.2$\pm$0.1 s was determined by @1998IAUC.6999....1I. In the XMM-Newton data, pulsations were confirmed with a period of 341.7$\pm$0.4 s. The folded light curves show strong variation below 2.0 keV (Fig.\[J0103-7209\]), whereas in the hard band, the variations are strongly reduced. The spectrum is well reproduced with a power law and a thermal component (see Table \[spectab\]). The MEKAL model for the thermal component yields $kT = 0.27_{-0.07}^{+0.08}$ keV and metal abundances of $0.31_{-0.17}^{+0.43}$ with respect to solar. Source No 17: RXJ0103.6–7201 ---------------------------- ![\[J0103-7201\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0103.6–7201 (source No 17). Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f14a.ps "fig:"){width="5.1cm"}\ ![\[J0103-7201\] Spectrum and long term light curve (0.1 – 2.4 keV) of RXJ0103.6–7201 (source No 17). Symbols for the long term light curve as in Fig.\[J0054-7226\]. ](h4167f14b.ps "fig:"){width="5.65cm"} For the HMXB candidate (\[HS2000\]), an acceptable fit was obtained for the spectrum with a power law and a thermal component (Table \[spectab\]). Modelling the thermal component with MEKAL, we obtained $kT = 0.27_{-0.04}^{+0.03}$ and metal abundances of $0.77_{-0.28}^{+0.17}$ times solar. Since the source was bright, we also used the VMEKAL model instead of the MEKAL model, which allows to determine the abundance for each of the elements. This resulted in an improvement of the fit, the model reproducing the peaks around 0.6 and 0.9 keV. The photon indices $\Gamma$ and the absorbing column densities $N_{\rm H}$ for both fits are comparable, as can be seen in Table \[spectab\]. Also the temperature values $kT$ agree well for MEKAL and VMEKAL within the 1 $\sigma$ errors. The spectrum with the power law + VMEKAL fit is shown in Fig.\[J0103-7201\]. The comparison to ROSAT data shows that this source was in high luminosity state during the XMM-Newton observation with $L_{\rm X} = 1.1 \times 10^{36}$ erg s$^{-1}$ (0.3 – 10.0 keV). In spite of the high photon statistics with 3,300 counts, no pulsations were discovered. Also the analysis of the events separated into soft, medium, and hard band revealed no pulsations. Source No 18: RXJ0105.9–7203 ---------------------------- is a HMXB candidate, coinciding with an emission line object. Since the source was very faint during the XMM-Newton observation, the photon statistics are very low and no timing analysis was possible. The PSPC count rate derived from the ROSAT observation was $4.01 \times 10^{-3} \pm 0.56 \times 10^{-3}$ s$^{-1}$ , corresponding to $\sim5 \times 10^{-2}$ s$^{-1}$ for XMM-Newton EPIC PN (thin1 filter). With a count rate of $3.05 \times 10^{-2} \pm 0.28 \times 10^{-2}$ s$^{-1}$ (Table \[xbtab\]), the source was fainter during the XMM-Newton observation. Sources No 6 and 12: New HMXB candidates ======================================== ![\[J0057-7219\] Spectrum of XMMUJ005735.7–721932 (source No 6). ](h4167f15.ps){width="5.2cm"} To identify a HMXB, it is crucial to find an optical counterpart and confirm that it is an early-type star. If an emission line object is found at the position of a hard X-ray source and other objects are ruled out as counterpart, the source is presumably a Be/XRB. Cross-correlating the XMM-Newton source list with the emission line star catalogue of , we discovered two new sources which met these criteria. 
(source No 6) is found at the position of \[MA93\]1020 and likely coincides with the source No 19 in @2000ApJS..128..491Y. (source No 12) is associated with \[MA93\]1208. Both X-ray sources are very faint. Only for XMMUJ005735.7–721932, we had enough counts to extract a spectrum (Fig.\[J0057-7219\]). The best fit model is a moderately absorbed power law (Table \[spectab\]). Furthermore, Chandra data showed that this source has pulsed emission with a period of 564.81$\pm$0.41 s [@2003ApJ...584L..79M]. This period could not be confirmed in the XMM-Newton data. As indicated by the hardness ratio HR1, these sources have a hard spectrum. Therefore, these two sources are suggested as new Be/XRB candidates. For further investigation, we need to perform follow-up observation in the optical band in order to verify if the emission line objects are Be stars. Discussion ========== The comparison between the XMM-Newton sources detected with ML $>$ 10 and other X-ray catalogues demonstrates that we detected all known HMXBs and candidates which exist in the four observed fields, except for SMCX-2 which was very faint. SMCX-2 was marginally detected with a likelihood of ML = 3.4, and we derived an upper limit of $6.5 \times 10^{33}$ erg s$^{-1}$ (0.3 – 10.0 keV). The luminosities of all the other sources in the 0.3 – 10.0 keV band are higher than $8 \times 10^{33}$ erg s$^{-1}$ at an assumed distance of 60 kpc [@1999IAUS..190..569V], as is shown in Fig.\[lxhisto\]. As we have seen in the long term light curves, most of the sources were in quiescence during the XMM-Newton observations, whereas they were mostly detected during outburst by previous missions. This indicates that all known HMXBs in the SMC have luminosities higher than $\sim7 \times 10^{33}$ erg s$^{-1}$ in quiescence and can be detected by XMM-Newton in observations with an exposure of about 15 ks. Consequently, we have an extensive set of HMXBs for studying their properties. In order to visualise the spectral characteristics of the HMXBs, we plotted the hardness ratios HR1, HR2, and HR3 in Fig.\[hr1hr2hr3\]. The high absorption in XRBs causes positive values for HR1, while HR2 and HR3 have small absolute values around zero. AGNs typically show steeper X-ray spectra than HMXBs. Therefore, the two source classes can be disentangled using hardness ratios. This can also be applied to classification work on other nearby galaxies. ![\[lxhisto\] Histogram of luminosities \[erg s$^{-1}$\] of the HMXBs and candidates in the XMM-Newton band (0.3 – 10.0 keV). The upper limit for SMCX-2 is shown with dashed line. ](h4167f16.ps){width="6.1cm"} Pulsations and soft energy emission ----------------------------------- X-ray spectra of high mass X-ray binaries below 10 keV can be in general modelled as a power law with a photon index of $\Gamma = 0 - 2$. For HMXBs located far away from the Galactic plane or in the Magellanic Clouds, the interstellar absorption in the line of sight is low, and an additional soft spectral component was discovered in supergiant X-ray binary systems like SMCX-1 [@1983ApJ...266..814M; @1995ApJ...445..896W] or LMCX-4 [@1996ApJ...467..811W], as well as in Be/X-ray binary systems like RXJ0059.2–7138 [@2000PASJ...52..299K] or EXO053109–6609.2 . In our sample of HMXBs and candidates in the SMC, pulsations were confirmed for six sources. Studying the pulsations and hardness ratio changes in different bands, we found that there are different types of pulsations. 
Furthermore, four out of these six were bright enough to allow us to test the existence of a soft component in their spectra. The pulsating sources of our sample can be divided into four groups:

1. There are pulsations in all bands and the ratios between the harder and softer bands are almost constant (RXJ0054.9–7226, Fig.\[J0054-7226\] and XMMUJ005605.2–722200, Fig.\[J0056-7222\]). Because of low photon statistics, spectral analysis of these sources yielded no significant results.

2. Pulsations are discovered below 2 keV (AXJ0058–720, Fig.\[J0058-7202\] and AXJ0103–722, Fig.\[J0103-7209\]). There might also be pulsations in the hardest band, although they are not significant due to low statistics. The hardness ratios seem to become higher during pulse minimum. The spectra of these sources include a thermal component. If it is confirmed that pulsations are in fact present only in the soft band, this would suggest that the emitting region is small, probably a locally illuminated part of the surface or the immediate surroundings of the neutron star.

3. In the case of RXJ0057.8–7207 (Fig.\[J0057-7207\]), there are flux variations in all bands. The most pronounced pulsations are found above 2 keV, with a correlated increase of the hardness ratio, which follows the pulse shape of the hard band. Since the spectrum is well described by a pure power law, the coincidence of the maximum of the hardness ratio with that of the pulse might indicate a variation of the absorption ($N_{\rm H}$).

4. RXJ0101.3–7211 (Fig.\[J0101-7211\]) shows clear pulsations in all bands and becomes significantly harder at pulse minimum. In its spectrum, there is a low energy excess besides the power law component, which can be modelled as thermal emission. The pulsations in the soft band might be caused by $N_{\rm H}$ variations as well as by changes in the soft emission component.

The low energy component, which seems to be thermal, was also found in the spectrum of RXJ0103.6–7201. This source is variable on long timescales, as can be seen in Fig.\[J0103-7201\]. However, pulsations on short timescales were not discovered, although the source was brighter during the XMM-Newton observation than in former observations.

Origin of the soft emission
---------------------------

The low energy component in the spectrum of the supergiant systems SMCX-1 [@1983ApJ...266..814M] and LMCX-4 [@1996ApJ...467..811W] was modelled as blackbody emission or thermal Bremsstrahlung which arises from the stellar wind of the supergiant, the accretion disk, or the fan-beam of the accretion column close to the neutron star surface. However, @2002ApJ...579..411P pointed out that a power law nature is most probable for the soft emission. They derived that the pulse shape of the soft emission from SMCX-1 is sinusoidal, similar to the soft energy light curve of HerX-1 . In our Galaxy, the supergiant system VelaX-1 is thought to show emission from the atmosphere and stellar wind of the companion as well as from the gas stream towards the neutron star [@1990ApJ...361..225H and references therein]. High resolution spectroscopy of Galactic HMXBs like CenX-3 [with Chandra HETG, @2002APS..APRN17069W] or HerX-1 [with XMM-Newton RGS, @2002ApJ...578..391J] resolved fluorescent lines and hydrogen- and helium-like lines of elements from Ne to Fe. The line fluxes of CenX-3 are consistent with recombination radiation from photo-ionised and collisionally ionised plasma as well as resonant line scattering in photo-ionised plasma [@2002APS..APRN17069W].
As for the Be/X-ray binary systems, @2000PASJ...52..299K analysed both ASCA and ROSAT data of RXJ0059.2–7138 and found that there is a soft component in the spectrum, which can be modelled as thermal emission with $kT = 0.37$ keV. Below 2.0 keV, the source shows no pulsations. Therefore, they argue that the soft emission originates from a large region, comparable to the full binary system. Using an XMM-Newton observation of a northern field in the LMC, extracted emission from the Be/X-ray binary EXO053109–6609.2 and showed that there are strong pulsations above 0.4 keV. In the spectrum there is a low energy thermal component, which is believed to arise from the equatorial disk around the Be star, illuminated by the X-ray source. The origin of the soft emission from HMXBs is not clearly understood. One would expect that there are differences between a supergiant and a Be system. Most of the HMXBs which have been studied in detail (since they are located in the Milky Way and are therefore closer) are supergiant systems, whereas the sources we are confronted with in the SMC are Be systems. In Be/XRBs, the neutron star and the Be star are thought to form a binary system with an extended orbit. This makes the stellar material in the equatorial disk around the Be star a rather implausible origin for the soft pulsed emission. The HMXBs in the MCs are ideal objects to study the soft part of their spectrum, since the absorption by Galactic foreground matter is low in the direction of the MCs. The existence of a soft thermal component in the spectrum and pulsations below 1 – 2 keV in our data indicates that the region producing the soft emission is not as large as is assumed for, e.g., RXJ0059.2–7138. In addition to timescales and luminosities, a crucial parameter for the physical processes responsible for this emission is the magnetic field of the neutron star. In order to clarify the conditions in which the soft component is produced, we need, at the very least, information about the orbital motion and about a possible orbital phase dependence of the total source spectrum as well as of the pulsed emission. As for the SMC Be systems discussed here, the orbital period is known only for one source.

OB systems vs. Be systems
-------------------------

In the last few years, the number of known Be/XRBs in the SMC has increased drastically, based on temporal studies of hard X-ray sources and on optical observations. In order to identify an X-ray source as a HMXB and clarify the nature of the mass donor star, we need to perform spectroscopy of the optical counterpart. Since most of the HMXB candidates which are known now are correlated with emission line objects, we expect that additional Be/XRBs will be found in the near future. This will further increase the ratio between the Be systems and the OB systems among the HMXBs in the SMC. Be/XRBs are thought to evolve from binary systems in about $1.5 \times 10^{7}$ yrs, whereas supergiant systems evolve faster due to the high mass of the companion star. The large number of Be/XRBs sets constraints on the secondary star formation in the SMC, making a burst some $10^{7}$ yrs ago most likely.

Summary
=======

We analysed XMM-Newton EPIC PN and MOS 1/2 data of four pointings towards the SMC. One observation covered the field around the HMXB SMCX-2 in the south, whereas the fields of view of the other three are located in the northern part of the main body of the SMC. In total, there were 15 detections which were identified as known HMXBs or XRB candidates.
For SMCX-2 which was faint during the observation, a flux upper limit of $1.5 \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ (0.3 – 10.0 keV) was derived. We found two new sources (XMMUJ005735.7–721932 and XMMUJ010030.2–722035) which have a hard spectrum and positionally coincide with emission line objects . These sources are proposed as new HMXB candidates, probably Be systems. Four sources in our list were known to show pulsed emission and pulse periods had been determined in former observations. In this work, the pulse periods were confirmed for all four sources. Furthermore, we discovered that two other sources which had been proposed to be Be/XRB candidates, show pulsations: XMMUJ005605.2–722200 with a pulse period of 140.1$\pm$0.3 s and RXJ0057.8–7207 with 152.34$\pm$0.05 s. Spectral analysis of the sources was performed. For faint sources, a good fit was obtained with a single power law spectrum. However, for three brighter sources, we could show that there is a significant low energy excess in the spectrum, if we only assume a power law. The spectra indicate emission line features, suggesting that the emission is thermal. This soft component was modelled as thermal emission, yielding temperatures of 0.2 – 0.3 keV. The abundances in the emitting plasma are below solar values, but comparable to typical SMC values [@1992ApJ...384..508R]: for RXJ0101.3–7211 it is $0.11^{+0.10}_{-0.11}$ times solar, and for AXJ0103–722 best fit is obtained with $0.31_{-0.17}^{+0.43}$ times solar. The errors are 1 $\sigma$ values. Only for RXJ0103.6–7201 the abundance is higher with $0.77_{-0.28}^{+0.17}$ with respect to solar. The flux of the sources in the MCs is low compared to the bright ($L_{\rm X} = 10^{37-38}$ erg s$^{-1}$) HMXBs in our Galaxy, making it difficult to perform a detailed analysis of their soft emission. However, the sources in the MCs have the advantage of low Galactic absorption. This allows us to study the thermal emission from a large sample of HMXBs and to increase the understanding of the interaction between X-rays from the compact object and the ambient stellar matter. It is also important to verify if there is a change in temperature or emissivity, which is related to the orbital phase of the binary system. Due to the improved time resolution and sensitivity, there is a large detection potential for new pulsating XRBs in further XMM-Newton observations. We would like to thank the anonymous referee for useful comments. The XMM-Newton project is supported by the Bundesministerium für Bildung und Forschung / Deutsches Zentrum für Luft- und Raumfahrt (BMBF/DLR), the Max-Planck Society and the Heidenhain-Stiftung. This research has been carried out by making extensive use of the SIMBAD data base operated at CDS, Strasbourg, France. The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. [53]{} natexlab\#1[\#1]{} , X., [Carrera]{}, F. J., [Watson]{}, M. G., [et al.]{} 2002, , 382, 522 , F. C., [Klinglesmith]{}, D. A., [Gull]{}, T. R., & [Sofia]{}, S. 
1987, , 317, 152 , G., [Doxsey]{}, R., [Li]{}, F., [Jernigan]{}, J. G., & [van Paradijs]{}, J. 1978, , 221, L37 , R. & [Marshall]{}, F. E. 2000, , 7402, 3 , R. H. D., [Marshall]{}, F. E., [Coe]{}, M. J., [Laycock]{}, S., & [Handler]{}, G. 2001, , 548, L41 , J. M. & [Lockman]{}, F. J. 1990, , 28, 215 , A., [Stanek]{}, K., [Macri]{}, L., & [Groot]{}, P. 2003, , submitted, astro-ph/0301617 , W. R. T. & [Coe]{}, M. J. 2003, , 338, 428 , F., [Dennerl]{}, K., & [Pietsch]{}, W. 2003, , submitted, astro-ph/0212319 , F., [Filipovi[ć]{}]{}, M. D., [Pietsch]{}, W., & [Kahabka]{}, P. 2000, , 142, 41 , F. & [Sasaki]{}, M. 2000, , 359, 573 , F. & [White]{}, N. E. 1990, , 361, 225 , G. L., [Stella]{}, L., [Campana]{}, S., [et al.]{} 1998, , 6999, 1 , F., [Lumb]{}, D., [Altieri]{}, B., [et al.]{} 2001, , 365, L1 , M. A., [Hailey]{}, C. J., [Herder]{}, J. W. d., [Zane]{}, S., & [Ramsay]{}, G. 2002, , 578, 391 , J. S. 1992, An X-Ray Spectral Code for Optically Thin Plasmas (Internal SRON-Leiden Report, updated version 2.0) , P. & [Pietsch]{}, W. 1996, , 312, 919 , P., [Pietsch]{}, W., [Filipovi[ć]{} ]{}, M. D., & [Haberl]{}, F. 1999, , 136, 81 , M., [Yokogawa]{}, J., & [Koyama]{}, K. 2000, , 52, 299 , D. A., [Osterheld]{}, A. L., & [Goldstein]{}, W. H. 1995, , 438, L115 , Q. Z., [van Paradijs]{}, J., & [van den Heuvel]{}, E. P. J. 2000, , 147, 25 , J. C., [Whitlock]{}, L. A., [Corbet]{}, R. H. D., & [Marshall]{}, F. E. 1999, Bulletin of the American Astronomical Society, 31, 905 , D. J., [Fox]{}, D. W., [Lamb]{}, R. C., & [Prince]{}, T. A. 2003, , 584, L79 , F. E., [Becker]{}, R. H., & [White]{}, N. E. 1983, , 266, 814 , F. E., [Boldt]{}, E. A., [Holt]{}, S. S., [et al.]{} 1979, , 40, 657 , F. E., [Lochner]{}, J. C., [Santangelo]{}, A., [et al.]{} 1998, , 6818, 1 , R., [Gronenschild]{}, E. H. B. M., & [van den Oord]{}, G. H. J. 1985, , 62, 197 , R., [Lemen]{}, J. R., & [van den Oord]{}, G. H. J. 1986, , 65, 511 , N. & [Azzopardi]{}, M. 1993, , 102, 451 , D. 1996, in American Astronomical Society Meeting, Vol. 188, 5404 , D. G. 1998, in American Astronomical Society Meeting, Vol. 193, 12003 , P., [Morton]{}, D. C., & [Thomas]{}, R. M. 1979, , 186, 43P , I. & [Coe]{}, M. J. 2002, , 385, 517 , T., [Parmar]{}, A. N., [Martin]{}, D. D. E., & [Lammers]{}, U. 1997, , 327, 215 , B., [Nagase]{}, F., [Endo]{}, T., [et al.]{} 2002, , 579, 411 , R. E., [Groves]{}, D. J., [Rodrigues]{}, R. M., [et al.]{} 1971, , 168, L7 , S. C. & [Dopita]{}, M. A. 1992, , 384, 508 , A., [Cusumano]{}, G., [dal Fiume]{}, D., [et al.]{} 1998, , 338, L59 , M., [Haberl]{}, F., [Keller]{}, S., & [Pietsch]{}, W. 2001, , 369, L29 , M., [Haberl]{}, F., & [Pietsch]{}, W. 2000, , 147, 75 , F. D. & [Mitchell]{}, M. 1981, , 243, 736 , L., [Briel]{}, U., [Dennerl]{}, K., [et al.]{} 2001, , 365, L18 , K., [Kohmura]{}, T., [Yokogawa]{}, J., & [Koyama]{}, K. 2000, , 7441, 2 , M. J. L., [Abbey]{}, A., [Arnaud]{}, M., [et al.]{} 2001, , 365, L27 , S. 1999, in IAU Symp. 190: New Views of the Magellanic Clouds, Vol. 190, 569 , Q. & [Wu]{}, X. 1992, , 78, 391 , P., [Liedahl]{}, D., [Mauche]{}, C., [et al.]{} 2002, American Physical Society, April Meeting, Albuquerque, New Mexico, abstract \#N17.069, 17069 , J. W., [Clark]{}, G. W., [Blondin]{}, J. M., [Kallman]{}, T. R., & [Nagase]{}, F. 1995, , 445, 896 , J. W., [Clark]{}, G. W., [Levine]{}, A. M., [Corbet]{}, R. H. D., & [Nagase]{}, F. 1996, , 467, 811 , J., [Imanishi]{}, K., [Tsujimoto]{}, M., [et al.]{} 2000, , 128, 491 , J. & [Koyama]{}, K. 
1998, , 6853, 2 , J., [Torii]{}, K., [Kohmura]{}, T., & [Koyama]{}, K. 2001, , 53, 227 , K., [Soszynski]{}, I., [Wozniak]{}, P. R., [et al.]{} 2001, Acta Astronomica, 51, 317 [^1]: *Present address:* Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA [^2]: XMM-Newton is an ESA Science Mission with instruments and contributions directly funded by ESA Member states and the USA (NASA).
--- abstract: 'Dynamics among central sources (hubs) providing a resource and large number of components enjoying and contributing to this resource describes many real life situations. Modeling, controlling, and balancing this dynamics is a general problem that arises in many scientific disciplines. We analyze a stochastic dynamical system exhibiting this dynamics with a multiplicative noise. We show that this model can be solved exactly by passing to variables that describe the mass ratio between the components and the hub. We derive a deterministic equation for the average mass ratio. This equation describes logistic growth. We derive the full phase diagram of the model and identify three regimes by calculating the sample and moment Lyapunov exponent of the system. The first regime describes full balance between the non-hub components and the hub, in the second regime the entire resource is concentrated mainly in the hub, and in the third regime the resource is localized on a few non-hub components and the hub. Surprisingly, in the limit of large number of components the transition values do not depend on the amount of resource given by the hub. This model has interesting application in the context of analysis of porous media using Magnetic Resonance (MR) techniques.' author: - Inbar Seroussi - Nir Sochen bibliography: - 'bibGiantComponent.bib' title: From Logistic Growth to Exponential Growth in a Population Dynamical Model --- Introduction ============ Population dynamics on large scale networks has attracted a lot of attention due to its wide occurrence in many disciplines, such as social sciences [@Bouchaud2000536; @castellano2009statistical], physics [@kardar1986dynamic] and biology, communication and control theory [@allegra2016phase]. This dynamics is mainly affected by the topology of the network as well as some internal stochastic noise. In many applications there are only a few nodes playing a major role in the dynamical process, distributing and carrying most of the resources [@barabasi2000scale; @song2005self]. For example, this can be the case in models describing population dynamics, economic growth [@bouchaud2015growth], and distributed control systems [@allegra2016phase]. An additional application of this problem is in the context of diffusion measurements of porous systems, such as brain tissue, using Magnetic Resonance (MR) techniques. In this last case, the sensitivity of the MR signal to self-diffusion of water molecules can be utilized to extract information about the network of cells (neurons) in the brain. The concept of self-diffusion of molecules in a network of pores was already introduced in Refs. [@seroussi2018spectral; @callaghan1992diffusion; @callaghan2011translational]. The main challenge is how to determine the topology of the network based on the MR measurements [@seroussi2018spectral]. Our model consists of a system of interacting sites on a graph $\mathcal{G}$ with $N$ vertices and $E$ edges between them. We are interested in the stochastic dynamics of some characteristic property $\{m_{i}(t)\}_{i\in\mathcal{G},t\text{\ensuremath{\ge}}0}$. The property $m_{i}(t)$ is linked to a physical measurable quantity in the real world and the graph is the underlying geometry/topology in which the property lives. The topology is a complex network of sites. 
The model is described by the following family of stochastic differential equations in the Stratonovich form on the graph $\mathcal{G}$: $$\frac{dm_{i}(t)}{dt}=\underset{j}{\sum}J_{ij}m_{j}(t)-\underset{j}{\sum}J_{ji}m_{i}(t)+g_{i}(t)m_{i}(t),\label{eq:General model}$$ with the initial conditions $m_{i}(0)=m_{0}$. The term $g_{i}(t)$ is a multiplicative white noise, such that $\langle g_{i}(t)\rangle=f_{i}$, and $\langle g_{i}(t)g_{j}(t')\rangle=\sigma_{i}^{2}\delta_{ij}\delta(t-t')$. We choose the Stratonovich form, since its solution is a limiting case of a physical system involving white noise with short memory [@van1981ito]. The topology of the network is encoded in the adjacency matrix $J$ of the graph. The model consists of two parts: an interacting part, where the interaction strength depends on the location on the graph, and a non-interacting part, where each component follows a stochastic noise with a different variance $\sigma_{i}^{2}$. The first part causes spreading, while the second pushes towards concentration (a.k.a. localization or condensation). The model was already analyzed in the mean field topology, i.e., when all the nodes are connected and interact at the same rate. In this case, the equilibrium distribution is a Pareto power-law [@Bouchaud2000536]. It was also analyzed on trees [@derrida1988polymers; @gueudre2014explore], and on random graphs, assuming a separable probability distribution on the nodes [@ichinomiya2012bouchaud]. The model on the lattice is known in the mathematical literature as the time-dependent Parabolic Anderson Model (PAM) [@carmona1994parabolic; @molchanov1991ideas]. The phase diagram of the model in this case depends on the dimension of the lattice. On a general network, phase transitions depend on the spectral dimension of the network [@seroussi2018spectral]. Here, we present and analyze a specific topology in which the model is shown to be solvable. Namely, we consider a directed graph with $N+1$ nodes, one hub node interacting with $N$ independent nodes. In the context of MR measurements of diffusion in a porous structure, the MR signal measured is assumed to be composed of two contributions: one coming from hindered diffusion in the extracellular space and the other from restricted diffusion in the intracellular space [@karger1985nmr; @moutal2018karger; @niendorf1996biexponential; @mulkern1999multi]. The hub node represents the magnetization in the extracellular space (e.g., water), $h_{0}(t)$, and the non-hub nodes represent $N$ independent intracellular pores with magnetization $m_{i}(t)$. The motion of molecules between these regions changes the value of the magnetization as a function of time and is represented by the interaction term between nodes, $J_{ij}$. The effect of the magnetic field gradient can be incorporated in the stochastic noise, for example, in $f_{i}$ and/or its variance $\sigma_{i}^{2}$. In the economic context, the system describes the dynamics of the money held by the hub, which represents the state/bank, and the money of each agent $m_{i}(t)$. In this case, the agents deposit money in the bank and the bank pays interest on it. The stochastic noise represents the bank/state and the agents’ investments in the stock market and housing [@Bouchaud2000536; @bouchaud2015growth]. Analysis of the dynamics of the sums of the money held by the agents and the bank/state was carried out in Ref. [@bouchaud2015growth]. Here, we analyze the dynamics of the mass ratio between each agent and the bank/state (hub).
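Before specializing to the hub topology, a minimal numerical sketch may help fix ideas. The following Python snippet (an illustration only; the graph and every parameter value are assumptions made here, not values from the paper) integrates the Itô form of Eq. (\[eq:General model\]) with an Euler-Maruyama scheme; the $\frac{\sigma_{i}^{2}}{2}m_{i}$ drift is the standard Stratonovich-to-Itô correction, written out for the hub topology in the next section.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of Eq. (eq:General model) in its Ito form,
#   dm_i = [ sum_j J_ij m_j - sum_j J_ji m_i + (f_i + sigma_i^2/2) m_i ] dt + sigma_i m_i dW_i,
# on an arbitrary directed graph. The graph and all parameter values are
# illustrative only and are not taken from the paper.
rng = np.random.default_rng(0)

n = 50                                    # number of nodes
J = 0.1 * (rng.random((n, n)) < 0.1)      # sparse random rates, J[i, j]: flow j -> i
np.fill_diagonal(J, 0.0)
f = np.zeros(n)                           # mean drifts f_i
sigma = 0.5 * np.ones(n)                  # noise amplitudes sigma_i
dt, T = 1e-3, 10.0

m = np.ones(n)                            # initial condition m_i(0) = m_0 = 1
for _ in range(int(T / dt)):
    drift = J @ m - J.sum(axis=0) * m + (f + 0.5 * sigma**2) * m
    m = m + drift * dt + sigma * m * rng.normal(0.0, np.sqrt(dt), n)

print("total mass:", m.sum(), " largest single-node share:", m.max() / m.sum())
```

Running such a sketch with larger noise amplitudes $\sigma_{i}$ quickly shows the mass concentrating on a few nodes, which is the localization effect analyzed below.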
We derive the equilibrium distribution in this case, and show that when the number of nodes grows at least exponentially with time, there exists a localization phase. We also generalize our analysis to multiple hubs. Our main result is a full phase-diagram of the model. We show that this model can be described by a stochastic equation for the mass ratio between each of the non-hub nodes and the hub, and a deterministic non-linear equation for the average relative mass of all the non-hub nodes with respect to the hub. To identify the phases of the system, we calculate the sample and moment Lyapunov exponents and identify a gap between them. The phase transitions are characterized by one parameter. This parameter takes into account the exchange rate between the non-hub nodes and the hub and the variances of the multiplicative noises.

Hub Topology\[sec:Hub-Topology\]
================================

The basic hub topology is composed of an infinite number of nodes, $\{m_{i}\}$, interacting at a constant rate with a hub node, $h_{0}$, such that $J_{i0}=\frac{J_{\mathrm{out}}}{N}$ and $J_{0j}=\frac{J_{\mathrm{in}}}{N}$, respectively. Our normalization is such that the overall interaction between the nodes and the hub is finite in the limit of an infinite number of nodes. The interaction among the non-hub nodes is characterized by the parameter $\delta$; when $\delta=0$, any interaction (transfer of mass) between the non-hub nodes is done only through the hub. The topology of the interaction between the non-hub nodes is defined by a Laplacian matrix, $L$, satisfying $\sum_{i}L_{ij}=0$. Figure \[fig:Hub-Topology\] presents an illustration of such a system for $\delta=0$. We assume that the stochastic noise acting on the non-hub nodes has the same variance and average for all nodes, $\sigma_{i}=\sigma_{\mathrm{out}}$, $f_{i}=f$. Eq. (\[eq:General model\]) in the Itô form reduces to the following system of stochastic equations: $$\frac{dh_{0}}{dt}=\frac{J_{\mathrm{in}}}{N}\underset{j}{\sum}m_{j}-J_{\mathrm{out}}h_{0}+f_{0}h_{0}+\frac{\sigma_{0}^{2}}{2}h_{0}+\sigma_{0}g_{0}h_{0},\label{eq:m0 Ito}$$ $$\frac{dm_{i}}{dt}=\frac{J_{\mathrm{out}}}{N}h_{0}-\frac{J_{\mathrm{in}}}{N}m_{i}+fm_{i}+\frac{\sigma_{\mathrm{out}}^{2}}{2}m_{i}-\delta\underset{j}{\sum}L_{ij}m_{j}+\sigma_{\mathrm{out}}g_{i}m_{i}.\label{eq:mi Ito}$$ The Itô form will be useful later on when one takes the limit $N\rightarrow\infty$. It is instructive to pass to the following normalized variables by introducing the mass ratio between the non-hub nodes and the hub node, $M_{i}=\frac{m_{i}}{h_{0}}$. This leads to the following system of $N$ equations in the Itô form (see \[sec:Variable-transformations\] for more details): $$\frac{dM_{i}}{dt}=\frac{J_{\mathrm{out}}}{N}-\left(\frac{J_{\mathrm{in}}}{N}+\Delta f-J_{\mathrm{out}}-\frac{\sigma^{2}}{2}+J_{\mathrm{in}}\overline{M}(t)\right)M_{i}-\delta\underset{j}{\sum}L_{ij}M_{j}+\sigma\xi_{i}M_{i}.\label{eq:stochastic Mi}$$ We introduce the average mass ratio $\overline{M}=\frac{1}{N}\sum_{i}M_{i}$, the effective variance $\frac{\sigma^{2}}{2}=\frac{\sigma_{0}^{2}+\sigma_{\mathrm{out}}^{2}}{2}$, and the average difference $\Delta f=f_{0}-f$. Here, $\xi_{i}(t)$ is a Gaussian process with mean zero and variance one. The transformation of variables introduces a non-linear term, which accounts for the interaction between any two nodes through the hub. In the limit $N\rightarrow\infty$, averaging over all the nodes in Eq.
(\[eq:stochastic Mi\]) yields the following deterministic law for the average mass ratio: $$\frac{d\overline{M}}{dt}=\frac{\sigma^{2}\beta}{2}\overline{M}\left(\frac{\alpha+1}{\beta}-\overline{M}\right),\label{eq:Mbar deterministic}$$ where we use the dimensionless parameters $\alpha=2\frac{J_{\mathrm{out}}-\Delta f}{\sigma^{2}}$ and $\beta=\frac{2J_{\mathrm{in}}}{\sigma^{2}}$. Since Eq. (\[eq:Mbar deterministic\]) is a non-linear equation, there are two steady-state solutions: $\overline{M}_{1\mathrm{eq}}{\rightarrow}\frac{J_{\mathrm{out}}-\Delta f+\frac{\sigma^{2}}{2}}{J_{\mathrm{in}}}=\frac{\alpha+1}{\beta}$ as $N\rightarrow\infty$, and $\overline{M}_{2\mathrm{eq}}{\rightarrow}0$ as $N\rightarrow\infty$. Note that in the case of $\delta=0$, the system geometry can be viewed as a directed tree structure with one level and infinitely many leaves. Therefore, the presence of the non-linear term is caused by the indirect interaction between the non-hub nodes following the tree topology of the system [@derrida1988polymers]. Convergence to each one of these fixed points depends on the initial condition of the system and on the stability of these points. Stability analysis of these two points shows that the point $\overline{M}_{1\mathrm{eq}}$ is stable when $\alpha+1>0$, while the point $\overline{M}_{2\mathrm{eq}}=0$ is stable when $\alpha+1\leq0$. For example, in the context of MR measurements in porous media, this system can model a complex structure measured from a single voxel in the MR image. The value $\overline{M}_{\mathrm{eq}}$ describes the steady-state average magnetization ratio between the extracellular space and the intracellular space. The first fixed point $\overline{M}_{1\mathrm{eq}}$ is reached when the steady-state magnetization ratio is equal to the amount of molecules leaving the non-hub pores, reduced by the magnetic field effects, divided by the amount of molecules leaving the extracellular space. The second fixed point $\overline{M}_{2\mathrm{eq}}$ represents the case in which, on average, most of the contribution to the magnetization in a single voxel comes from the extracellular space. Eq. (\[eq:Mbar deterministic\]) shows logistic growth and is a version of the Lotka-Volterra equation, which describes many social phenomena in nature [@lotka1925elements; @volterra1926fluctuations]. It can be solved exactly: for an initial condition $\overline{M}(0)=\overline{M}_{0}$, the solution is $$\overline{M}(t)=\frac{\overline{M}_{0}(\alpha+1)}{\beta\left(\overline{M}_{0}+\left(\frac{\alpha+1}{\beta}-\overline{M}_{0}\right)e^{-\frac{\sigma^{2}\left(\alpha+1\right)t}{2}}\right)}.\label{eq:Mbarsol}$$

Equilibrium Distribution\[subsec:Equilibrium-Distribution\]
-----------------------------------------------------------

Given Eqs. (\[eq:stochastic Mi\]) and (\[eq:Mbar deterministic\]) for the relative mass between the non-hub nodes and the hub, one can derive an equivalent form describing the dynamics of the probability distributions of $M_{i}$. Since the dynamics of the average mass ratio between the non-hub nodes and the hub node is governed by a deterministic non-linear equation, in the limit $N\rightarrow\infty$ the system reduces to a set of independent stochastic equations for the relative mass in each node, $M_{i}$. Therefore, we can omit the index $i$ and look at the dynamics of the probability distribution of a typical node $P(M,t)$.
This dynamics of the probability distribution is described by the following Fokker-Planck equation: $$\frac{\partial P}{\partial t}=-\frac{\sigma^{2}}{2}\frac{\partial\left(\left(\left(\alpha+1-\beta\,\overline{M}-\tilde{\delta}\right)M+\tilde{\delta}\,\overline{M}(t)-\tilde{\delta}\right)P\right)}{\partial M}+\frac{\sigma^{2}}{2}\frac{\partial^{2}\left(M^{2}P\right)}{\partial M^{2}}.\label{eq:FP M0}$$ To study the effect of interaction between nodes, we introduce a small mean field interaction between the non-hub nodes, $\delta$, and define the dimensionless interaction rate, $\tilde{\delta}=\frac{2\delta}{\sigma^{2}}$. By equating the left hand side of Eq. (\[eq:FP M0\]) to zero, one can find the steady-state distribution. The solution shows a Pareto power-law behavior: $$P_{\mathrm{eq}}(M)=A\,\mathrm{exp}\left(-\frac{\tilde{\delta}\,\overline{M}_{\mathrm{eq}}}{M}\right)M^{-\mu(\overline{M}_{\mathrm{eq}})}.$$ The power is a function of the steady-state average relative mass $\overline{M}_{\mathrm{eq}}$, i.e., the steady-state solution of Eq. (\[eq:Mbar deterministic\]): $\mu(\overline{M}_{\mathrm{eq}})=\beta\overline{M}_{\mathrm{eq}}+1-\alpha+2\tilde{\delta}$. Substituting the value of the average mass ratio, we find that $\mu(\frac{\alpha+1}{\beta})|_{\alpha+1\geq0}=2+2\tilde{\delta}$, and $\mu(0)|_{\alpha+1<0}\rightarrow1-\alpha+2\tilde{\delta}$. This shows that the system has two steady states. The system collapses to one of them depending on the initial condition, i.e., the initial mass ratio between the hub and the non-hub nodes. Note that the value of $\mu$ is greater than $2$ when the system collapses to the state $\overline{M}_{\mathrm{eq}}=\frac{\alpha+1}{\beta}$, showing equality among the non-hub nodes and the hub. For $\overline{M}_{\mathrm{eq}}=0$, the mass is localized on a few nodes within the set of non-hub nodes. The power satisfies $\mu<2$ as long as $2\tilde{\delta}<1+\alpha$. Therefore, adding interaction between non-hub nodes reduces localization, as expected.

Balance and Localization\[sec:Balance-and-Localization\]
--------------------------------------------------------

The analysis above reveals the localization regime within the non-hub nodes when the influence of the hub is renormalized. In this section, we analyze the regime in which there is localization on the hub. For this purpose, we study the asymptotic properties of the total mass, $E(t)=h_{0}(t)+\sum_{i}m_{i}(t)$. We calculate the Lyapunov exponents of the solution [@carmona1994parabolic; @molchanov1991ideas]. Here, we perform the analysis for the case of $\delta=0$. The Lyapunov exponents describe the growth rates of the solution and its moments. They indicate the level of complexity in the solution’s landscape. The first moment Lyapunov exponent of the solution is as follows: $$\gamma_{1}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(\langle E(t)\rangle\right)=\begin{cases} \begin{array}{c} f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\\ f_{0}-J_{\mathrm{out}}+\frac{\sigma_{0}^{2}}{2} \end{array} & \begin{array}{c} \frac{\Delta\sigma^{2}}{\sigma^{2}}<\alpha\\ \frac{\Delta\sigma^{2}}{\sigma^{2}}\geq\alpha \end{array}\end{cases},\label{eq:gamma1}$$ where $\Delta\sigma^{2}=\sigma_{0}^{2}-\sigma_{\mathrm{out}}^{2}$ is the variance difference; see \[sec:Moments-equations\] for details of the proof, together with corrections for finite network size.
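As a quick numerical illustration of these finite-network-size corrections, the sketch below (illustrative parameter values only, not taken from the paper) builds the closed $2\times2$ linear system obeyed by the means $\langle h_{0}\rangle$ and $\langle\overline{m}\rangle=\langle\sum_{i}m_{i}\rangle$, obtained by averaging Eqs. (\[eq:m0 Ito\]) and (\[eq:mi Ito\]) with $\delta=0$, and shows how its largest eigenvalue approaches the $N\rightarrow\infty$ expression in Eq. (\[eq:gamma1\]).

```python
import numpy as np

# First-moment growth rate for finite N (delta = 0): the means <h_0> and
# <m_bar> = <sum_i m_i> satisfy d/dt (<h_0>, <m_bar>) = A (<h_0>, <m_bar>),
# with A obtained by averaging Eqs. (eq:m0 Ito) and (eq:mi Ito).
# All parameter values are illustrative only.
J_in, J_out = 1.0, 0.3
f0, f, sig0, sig_out = 0.2, 0.0, 0.8, 0.4

for N in (10, 100, 1000, 10_000):
    A = np.array([[f0 + 0.5 * sig0**2 - J_out, J_in / N],
                  [J_out,                      f + 0.5 * sig_out**2 - J_in / N]])
    print(N, np.linalg.eigvals(A).real.max())

# N -> infinity limits of the two branches, cf. Eq. (eq:gamma1):
print("hub branch    :", f0 + 0.5 * sig0**2 - J_out)
print("non-hub branch:", f + 0.5 * sig_out**2)
```

The larger of the two limiting branches is the first moment Lyapunov exponent quoted in Eq. (\[eq:gamma1\]).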
Interestingly, what determines the growth on average is the difference $\Delta\sigma^{2}$ between the variance of the stochastic noise of the hub and that of the non-hub nodes. To understand Eq. (\[eq:gamma1\]), let us look at the limit where the stochastic noise in the non-hub nodes has a significantly higher variance than that of the hub. This is equivalent to the presence of large fluctuations in the non-hub pores with respect to the extracellular space, i.e., $\Delta\sigma^{2}\approx-\sigma_{\mathrm{out}}^{2}$. In this case, comparing with the stability points of Eq. (\[eq:Mbar deterministic\]) in the regime $\alpha\leq-1$, there is a high concentration of magnetization on the hub and on a few non-hub nodes, which contribute most to the total growth rate. For $\alpha>-1$, in the detailed balance limit the non-hub nodes and the hub contribute equally to the total growth. On the other hand, in the limit of $\Delta\sigma^{2}\approx\sigma_{0}^{2}$, the system exhibits three regimes: for $\alpha>1$ the magnetization spreads equally among the nodes, for $-1<\alpha\leq1$ the magnetization is mainly concentrated on the hub, and for $\alpha<-1$, there is concentration of the magnetization on the hub and/or several non-hub nodes. This analysis is verified by calculating the value of the sample Lyapunov exponent, which provides the sample growth rate. We define the sample Lyapunov exponent, $\widetilde{\gamma}$, as the limit of the logarithm of the total mass of the solution divided by time as $N,t\rightarrow\infty$. Knowing the dynamics of the average mass ratio, Eq. (\[eq:Mbar deterministic\]), we can calculate the sample growth rate of the mass ratio exactly. The resulting sample/quenched Lyapunov exponent is $$\widetilde{\gamma}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(h_{0}+\overline{m}\right)=f+\frac{\sigma_{\mathrm{out}}^{2}}{2},\label{eq:sample Lyapunov}$$ where we denote $\overline{m}=\sum_{i}m_{i}$; see \[sec:Sample-Lyapunov-Exponent\] for details of the proof. Note that, in order to take the limit, one needs to specify at which rate the number of nodes grows with time. We show that when the number of nodes grows at least exponentially with time, the sample Lyapunov exponent is as in Eq. (\[eq:sample Lyapunov\]). This value is independent of the initial conditions and is bounded from above by the first moment Lyapunov exponent, $\gamma_{1}$. Localization of the solution is defined as the regime in which the strict inequality $\widetilde{\gamma}<\gamma_{1}$ holds [@carmona1994parabolic]. The gap $\Delta\gamma=\gamma_{1}-\widetilde{\gamma}$ between these two exponents, i.e., the difference between the expressions in Eq. (\[eq:gamma1\]) and Eq. (\[eq:sample Lyapunov\]), characterizes the localization regime in the system. Combining the transition in the values of the exponent $\widetilde{\gamma}$ with the stability analysis of the steady-state solutions of Eq. (\[eq:Mbar deterministic\]), we identify three regimes: a regime of full equality, for $\alpha>\frac{\Delta\sigma^{2}}{\sigma^{2}}$, in which the mass is spread equally between all the non-hub nodes and the hub, a second regime for $-1<\alpha\leq\frac{\Delta\sigma^{2}}{\sigma^{2}}$, in which the mass is localized mainly on the hub, and a third regime for $\alpha\leq-1$, in which the mass is localized on the hub and a few non-hub nodes. A small Monte-Carlo illustration of this gap is given below.
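The sketch that follows is an illustration only: the parameter values are not taken from the paper, and since $N$ and $t$ are finite the estimates are indicative rather than equal to the limiting values, which require $N$ to grow with $t$. It integrates Eqs. (\[eq:m0 Ito\]) and (\[eq:mi Ito\]) for many independent realizations and contrasts the quenched estimator $\langle\ln E(t)\rangle/t$ with the annealed estimator $\ln\langle E(t)\rangle/t$; by Jensen's inequality the latter is never smaller, and in the localized regime of Eq. (\[eq:gamma1\]) a clear gap appears.

```python
import numpy as np

# Monte-Carlo sketch of the gap between the quenched (sample) and annealed
# (first-moment) growth-rate estimators of the total mass E(t) = h_0 + sum_i m_i.
# Illustrative parameters only, chosen so that -1 < alpha <= Delta_sigma^2/sigma^2,
# i.e. in the regime where Eq. (gamma1) predicts localization on the hub.
rng = np.random.default_rng(1)

N, R = 100, 400                  # non-hub nodes, independent realizations
J_in, J_out = 1.0, 0.2
f0, f, sig0, sig_out = 0.5, 0.0, 1.0, 0.2
dt, T = 1e-3, 10.0

h = np.ones(R)                   # hub mass, one entry per realization
m = np.ones((R, N))              # non-hub masses
for _ in range(int(T / dt)):
    dWh = rng.normal(0.0, np.sqrt(dt), R)
    dWm = rng.normal(0.0, np.sqrt(dt), (R, N))
    dh = (J_in / N * m.sum(axis=1) - J_out * h + (f0 + 0.5 * sig0**2) * h) * dt + sig0 * h * dWh
    dm = (J_out / N * h[:, None] - J_in / N * m + (f + 0.5 * sig_out**2) * m) * dt + sig_out * m * dWm
    h, m = h + dh, m + dm

E = h + m.sum(axis=1)
print("quenched estimate :", np.mean(np.log(E)) / T)   # finite-size proxy for gamma_tilde
print("annealed estimate :", np.log(np.mean(E)) / T)   # finite-size proxy for gamma_1
```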
Multiple Hubs Topology \[Sec: multipileHaubs\]
==============================================

In this section, we consider the effect of $H$ hub nodes, $h_{i}$, connected to all the nodes in the system, and a set of $N$ independent non-hub nodes, $m_{i}$, connected only to the hub nodes. Figure \[fig:Multipile hubs\] illustrates this topology. This kind of topology appears in many applications, for instance, in the economic setting in the presence of more than one central bank/company. In a porous structure, it can describe different extra-cellular regions interacting with intracellular pores. It is also applied in analyzing the dynamics of control systems [@allegra2016phase], and in machine learning algorithms. The equations of the system now read as follows: $$\frac{dh_{i}}{dt}=\frac{J_{\mathrm{in}}}{HN}\sum_{j}m_{j}-\frac{J_{\mathrm{out}}}{H}h_{i}+f_{i}h_{i}+\frac{\delta}{H-1}\sum_{j\neq i}^{H}h_{j}-\delta h_{i}+\sigma_{i}g^h_{i}h_{i},\label{eq:hi-multi-hub}$$ $$\frac{dm_{i}}{dt}=\frac{J_{\mathrm{out}}}{NH}\sum_{j}h_{j}-\frac{J_{\mathrm{in}}}{N}m_{i}+q_{i}m_{i}+\nu_{i}g^m_{i}m_{i}.\label{eq:mi-multi-hub}$$ Note that, as before, the total interaction rates between the hubs and the non-hub nodes, $J_{\mathrm{in}}$ and $J_{\mathrm{out}}$, are defined to be finite in the limit of infinite $N$ and $H$. The processes $g^h_{i}(t)$ and $g^m_{i}(t)$ are Gaussian processes with mean zero and variance one. Similar to the one-hub topology, we can now pass to the relative mass parameters by dividing by the average hub mass; see \[sec:Multiple-Hubs-Derivation\] for more details. The results of the previous sections are recovered when $H=1$. Note that the equations for the relative mass are decoupled in the case $H=1$ and also in the limit of very large $H$. In the presence of a small finite number of hub nodes, one can show that the total variance depends on the hub values. Therefore, having a finite number of hubs decreases the value of the parameter $\alpha$, causing more equality in the system and reducing the localization. Moreover, in the simple case where all the hubs have the same statistics, such that the stochastic noise, $g^h_{i}(t)=g(t)$, does not depend on the hub location, $H$ hubs are equivalent to one hub with the total effective net flux $J_{\mathrm{out}}$. The limit of one non-hub node with $J_{\mathrm{out}}=J_{\mathrm{in}}=\delta$, $N=1$, and an infinite number of hubs, $H\rightarrow\infty$, is the mean field model with exponential growth of the total mass [@Bouchaud2000536]. Note that when both the number of non-hub nodes and the number of hubs are very large, i.e., $N\rightarrow\infty$ and $H\rightarrow\infty$, the average mass ratio grows exponentially. The exponent depends on the average and variance difference of the stochastic multiplicative noises. The phase transition in this case is equivalent to the results in Sec. \[sec:Hub-Topology\], i.e., there are three phases: localization on the non-hub nodes, localization on the hubs, and equal spreading over all the nodes. Note that in this limit, the non-hub nodes play the same role as the hubs, since they are connected to infinitely many hub nodes. The interaction among the hubs, $\delta$, reduces localization but does not affect the growth rate. The analysis above affects mainly the sample Lyapunov exponent. The first moment Lyapunov exponent remains the same for any $H$ and $N$, since the average equations do not change. This shows that the phase transitions predicted in Sec.
\[sec:Balance-and-Localization\] are general and carry over, with small modifications, to a system of multiple hubs. In addition, the transition from logistic growth of the relative mass to exponential growth is a function of the ratio between $N$ and $H$.

A Note on MRI
=============

In the context of MR measurements, the model in Eq. (\[eq:General model\]) is a generalization of the Kärger model [@karger1985nmr], which accounts for random changes in the diffusivity due to restricted diffusion or a non-homogeneous magnetic field. This model was already analyzed on a general network, where the importance of the spectral dimension as a measurable parameter is stated [@seroussi2018spectral]. The hub topology (see Figure \[fig:Hub-Topology\]) is a simplified version of this model. We show that in this topology, under the assumption that all the non-hub pores have similar properties, the average equations of the model are those of the Kärger model for two compartments [@karger1985nmr]; see \[sec:Moments-equations\]. The parameters are then as follows: $f_{0}=-q^{2}D_{\mathrm{ex}}$ and $f=-q^{2}D_{\mathrm{in}}$, $\sigma_{0}^{2}=-q^{2}\sigma_{\mathrm{ex}}^{2}$, $\sigma_{\mathrm{out}}^{2}=-q^{2}\sigma_{\mathrm{in}}^{2}$, such that the parameter $\alpha(q^{2})=2\frac{J_{\mathrm{out}}-\Delta f}{\sigma^{2}}=-2\frac{J_{\mathrm{out}}+q^{2}\left(D_{\mathrm{in}}-D_{\mathrm{ex}}\right)}{q^{2}\left(\sigma_{\mathrm{ex}}^{2}+\sigma_{\mathrm{in}}^{2}\right)}$ is controlled by the gradient of the applied magnetic field, incorporated in the value of $q$. For example, consider the basic Stejskal-Tanner sequence [@stejskal1965spin], which is composed of two gradient pulses of the magnetic field with magnitude $G$ in opposite directions and with duration $\delta$. The pulses are separated by a diffusion time $\Delta$. For $q$ one takes the wave vector $q=\frac{\delta\gamma G}{2\pi}$, where $\gamma$ is the gyro-magnetic ratio. The average equations for the magnetization in the $(x,y)-$plane, and with a magnetic field gradient in the $\hat{z}$ direction, read $$\frac{d\langle h_{0}\rangle}{dt}=\frac{J_{\mathrm{in}}}{N}\langle\overline{m}\rangle-J_{\mathrm{out}}\langle h_{0}\rangle-q^{2}D_{\mathrm{ex}}\langle h_{0}\rangle,\label{eq:m0 Ito-MRI params}$$ $$\frac{d\langle\overline{m}\rangle}{dt}=\frac{J_{\mathrm{out}}}{N}\langle h_{0}\rangle-\left(\frac{J_{\mathrm{in}}}{N}+q^{2}D_{\mathrm{in}}+\frac{q^{2}\sigma_{\mathrm{in}}^{2}}{2}\right)\langle\overline{m}\rangle.\label{eq:mi Ito-MRI params}$$ Note that the wave vector $q$ turns on the stochastic dynamics. We denote $\delta D=D_{\mathrm{ex}}-D_{\mathrm{in}}$, and $\delta\sigma^{2}=\sigma_{\mathrm{ex}}^{2}-\sigma_{\mathrm{in}}^{2}$. The multiplicative white noise accounts for the random diffusivity changes of the medium due to restricted diffusion. Based on the analysis in Sec. \[sec:Balance-and-Localization\], one can find the transition point in terms of the wave vector $q$. The transition point is at $q_{\mathrm{c}}=\sqrt{\frac{J_{\mathrm{out}}}{\delta D-\frac{\delta\sigma^{2}}{2}}}$. The average signal reveals only the transition at $q_{\mathrm{c}}$. Note that the signal decay is also affected by the noise variance of the extracellular space. Taking typical values such as $J_{\mathrm{out}}=\frac{1}{1800\,\mathrm{msec}}=0.5556\,[\frac{1}{\mathrm{sec}}]$, $D_{\mathrm{ex}}=2\times10^{-5}\,[\frac{\mathrm{cm}^{2}}{\mathrm{sec}}]$, and $D_{\mathrm{in}}=0.1\times10^{-5}\,[\frac{\mathrm{cm}^{2}}{\mathrm{sec}}]$.
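A short numerical check of $q_{\mathrm{c}}$ for these illustrative values (a sketch for orientation only, with the noise variances neglected, as in the estimate that follows) can be written as:

```python
import math

# Evaluate the critical wave vector q_c = sqrt(J_out / (delta_D - delta_sigma^2 / 2))
# for the illustrative values quoted above (noise variances neglected).
J_out = 1.0 / 1.8            # 1/(1800 msec) expressed in 1/sec
D_ex  = 2.0e-5               # cm^2/sec
D_in  = 0.1e-5               # cm^2/sec
delta_D, delta_sigma2 = D_ex - D_in, 0.0

q_c = math.sqrt(J_out / (delta_D - 0.5 * delta_sigma2))
print(f"q_c = {q_c:.0f} 1/cm  =  {q_c * 1e-4:.3f} 1/micrometer")
```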
Then the critical value is $q_{\mathrm{c}}\approx\sqrt{\frac{J_{\mathrm{out}}}{D_{\mathrm{ex}}-D_{\mathrm{in}}+\frac{\sigma_{\mathrm{in}}^{2}}{2}}}=\sqrt{\frac{0.5556}{1.9\times10^{-5}}}\approx171\,[\frac{1}{\mathrm{cm}}]\approx0.017\,[\frac{1}{\mathrm{\mu m}}]$. A larger variance in the non-hub pores will set the transition at a lower value of $q$, whereas a larger variance of the diffusivity in the extracellular space will set the transition at a higher value of $q$. The decay of the signal has a bi-exponential form, as predicted by the Kärger model: $$\gamma_{1}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(\langle E(t)\rangle\right)=\begin{cases} \begin{array}{c} -q^{2}\left(D_{\mathrm{in}}+\frac{\sigma_{\mathrm{in}}^{2}}{2}\right)\\ -q^{2}D_{\mathrm{ex}}-J_{\mathrm{out}} \end{array} & \begin{array}{c} q>q_{\mathrm{c}}\\ q\leq q_{\mathrm{c}} \end{array}\end{cases}.$$ The stochastic model is a natural generalization of the Kärger model. Using this generalization, we are able to explore and analyze the behavior of the model in the presence of complex topological structures, as well as the effect of changes in the apparent diffusivity as a result of stochastic noise. Note that this transition also appears in the presence of any interaction $\delta$ among the non-hub nodes. Figure \[fig:The-eigenvalue-vsN\] presents the behavior of the eigenvalues as a function of $N$.

Discussion and Conclusion
==========================

We have presented a stochastic model that describes diffusion on a graph with an additional multiplicative stochastic noise. We analyze this model on a directed graph with one hub node that is connected to a large number of non-hub nodes. We derive a non-linear equation for the average mass ratio between the non-hub nodes and the hub. This equation describes logistic growth. It has two phases: one in which the overall mass is mainly concentrated on the hub, and another in which there is a “detailed balance”, such that the steady state depends on the ratio of the exchange rates between the hub and the non-hub nodes. We show that this model is completely solvable in the large $N$ limit. In addition, we identify the phase transitions of the model in terms of the two parameters $\alpha$ and $\frac{\Delta\sigma^{2}}{\sigma^{2}}$. We show that in order for the localization phase to occur, the number of non-hub nodes needs to grow at least exponentially with time. Surprisingly, in the limit of a large number of independent nodes, the transition points do not depend on the amount of resource given by the hub, provided that it is finite and non-zero. We generalize this analysis to a system of multiple hubs. We show that in the limit of infinitely many hubs the growth of the system becomes exponential. The model has numerous applications. We introduce an application of this model in the context of MR measurements of complex structures. Our results in this context may provide a theoretical framework that helps interpret existing MR experiments and propose new ones aimed at identifying the concentration phases predicted here. This may have an impact on the prediction of the underlying measured geometry. Our results and analysis can also be of interest in other applications, for example, in predicting economic growth, and in analyzing the stability of control systems.

Acknowledgment {#acknowledgment .unnumbered}
==============

We would like to thank Prof. Ofer Pasternak for proposing the idea for the paper.
Transformations of Variable\[sec:Variable-transformations\] =========================================================== In this section, we derive the relative magnetization equations, Eq. (\[eq:stochastic Mi\]). We use Itô’s formula in order to perform the change of variables $$\begin{gathered} df(t,\boldsymbol{m})=\frac{\partial f}{\partial t}dt+\sum_{i}\frac{\partial f}{\partial m_{i}}dm_{i}+\frac{1}{2}\sum_{i,j}\frac{\partial f^{2}}{\partial m_{i}\partial m_{j}}\left[B^{2}\right]_{ij}dt\\ =\left(\frac{\partial f}{\partial t}+\sum_{i}\frac{\partial f}{\partial m_{i}}A_{i}+\frac{1}{2}\sum_{i,j}\frac{\partial f^{2}}{\partial m_{i}\partial m_{j}}\left[B^{2}\right]_{ij}\right)dt+\sum_{ij}\frac{\partial f}{\partial m_{i}}B_{ij}dW_{j},\end{gathered}$$ where $A$ and $B$ are the coefficients of the stochastic equations Eq. (\[eq:m0 Ito\]) and Eq. (\[eq:mi Ito\]), respectively, defined as follows: $A_{i}=J_{i0}h_{0}-J_{0i}m_{i}-\delta\,\underset{j}{\sum}L_{ij}m_{j}+\frac{\sigma_{i}^{2}}{2}m_{i}+f_{i}m_{i}$, $A_{0}=\underset{j}{\sum}\,J_{0j}m_{j}-\underset{j}{\sum}\,J_{j0}h_{0}+\frac{\sigma_{0}^{2}}{2}h_{0}+f_{0}h_{0}=\frac{J_{\mathrm{in}}}{N}\underset{j}{\sum}\,m_{j}-J_{\mathrm{out}}h_{0}+\frac{\sigma_{0}^{2}}{2}h_{0}+f_{0}h_{0}$, and $B_{ij}=\delta_{ij}\sigma_{\mathrm{out}}m_{i}$, and $B_{0i}=B_{i0}=\delta_{i0}\sigma_{0}m_{0}$. $$\begin{gathered} \frac{dM_{i}}{dt}=\frac{J_{\mathrm{out}}}{N}-M_{i}\left(\frac{J_{\mathrm{in}}}{N}+J_{\mathrm{in}}\overline{M}-J_{\mathrm{out}}-\frac{\sigma^{2}}{2}+\Delta f\right)-\delta\underset{j}{\sum}L_{ij}M_{j}+\sqrt{\sigma_{\mathrm{out}}^{2}+\sigma_{0}^{2}}\,\xi_{i}M_{i}\\ =\frac{J_{\mathrm{out}}}{N}-M_{i}\left(\frac{J_{\mathrm{in}}}{N}+J_{\mathrm{in}}\overline{M}-J_{\mathrm{out}}-\frac{\sigma^{2}}{2}+\Delta f\right)-\delta\underset{j}{\sum}L_{ij}M_{j}+\sigma\xi_{i}M_{i}\\ =-M_{i}\frac{\sigma^{2}}{2}\left(\beta\overline{M}-\alpha-1-\tilde{\delta}\right)-\delta\overline{M}(t)+\sigma\xi_{i}M_{i}.\end{gathered}$$ We introduce the dimensionless parameters $\alpha=2\frac{J_{\mathrm{out}}-\Delta f}{\sigma^{2}}$, and $\beta=\frac{2J_{\mathrm{in}}}{\sigma^{2}}$. The equations are written under the assumption that the interaction among the nodes and the hub is described by $J_{i0}=\frac{J_{\mathrm{out}}}{N}$, and $J_{0j}=\frac{J_{\mathrm{in}}}{N}$, respectively. We also assume, for simplicity, that all the non-hub compartments obey the statistics $\sigma_{i}=\sigma_{\mathrm{out}}$ and $f_{i}=f$. The last transition is under the assumption of mean-field interaction $\delta$ among the non-hub nodes. In the Itô form in the limit of $N\rightarrow\infty$, we have $$\underset{N\rightarrow\infty}{\mathrm{lim}}\frac{1}{N}\underset{i}{\sum}\sigma_{i}g_{i}(t)M_{i}=0.$$ Note that one cannot take the limit $N\rightarrow\infty$ in Eq. (\[eq:mi Ito\]), since the variables $m_{i}$ depend on the stochastic noise of the hub, $g_{0}$. 
Taking the sum and letting $N\rightarrow\infty$, we arrive to the following deterministic equation describing the growth of the average relative mass of the $N$ non-hub nodes as a function of time: $$\frac{d\overline{M}(t)}{dt}=\overline{M}(t)\left(J_{\mathrm{out}}+\frac{\sigma^{2}}{2}-\Delta f\right)-J_{\mathrm{in}}\overline{M}(t)^{2}.\label{eq:growth averge m-1}$$ The steady-state solutions of this non-linear equation are: $\overline{M}_{1\mathrm{eq}}{\rightarrow}-\frac{\Delta f-J_{\mathrm{out}}-\frac{\sigma^{2}}{2}}{J_{\mathrm{in}}}=\frac{1+\alpha}{\beta}$ as ${N\rightarrow\infty}$, and $\overline{M}_{2\mathrm{eq}}{\rightarrow}0$ as ${N\rightarrow\infty}$. The equation is also valid when $\delta\neq0$. Finite-$N$ corrections \[subsec: finite N mbar\] ------------------------------------------------ In this subsection, we consider finite $N$ corrections to the average equation. With this effect accumulated for the equation reads $$\frac{d\overline{M}(t)}{dt}=\varepsilon a +\varepsilon b\overline{M}(t) +c\overline{M}(t)+ b\overline{M}(t)^{2},\label{eq:growth averge m_finite_N}$$ where $\varepsilon=\frac{1}{N}$. We denote $a=\frac{\sigma^{2}}{2}J_{\mathrm{out}}$, $b=-\frac{\sigma^{2}}{2}\beta$, and $c=\frac{\sigma^{2}}{2}(\alpha+1)$. The equation has a Riccati form. Taking the first-order correction in $\varepsilon$, $\overline{M}(t)=\overline{M}_0(t)+\varepsilon \overline{M}_1(t)$, we get $$\frac{d\overline{M}_0(t)}{dt}= c\overline{M}_0(t)+ b\overline{M}_0(t)^{2}.\label{eq:growth averge m_0}$$ The solution for $\overline{M}_0(t)$ is the logistic function as before. Next, $$\frac{d\overline{M}_1(t)}{dt}= a + b\overline{M}_0(t)+ (c+2b \overline{M}_0(t)) \overline{M}_1(t). \label{eq:growth averge m_1}$$ The solution for $\overline{M}_1(t)$ is given by $$\begin{gathered} \overline{M}_1(t) = \overline{M}_1(0)\mathrm{exp}(\int_{0}^{t}(c+2b \overline{M}_0(s))ds)+ \int_{0}^{t}\mathrm{exp}(\int_{s}^{t}(c+2b \overline{M}_0(s))ds)(a + b\overline{M}_0(s))ds \\=\overline{M}_1(0)\mathrm{exp}(ct+2b\int_{0}^{t} \overline{M}_0(s)ds)+ \int_{0}^{t}\mathrm{exp}(c(t-s)+2b\int_{s}^{t} \overline{M}_0(\tau)d\tau)(a + b\overline{M}_0(s))ds.\end{gathered}$$ Substituting the expression of the solution to $\overline{M}_0(t)$, one can show that $\underset{t\rightarrow\infty}{\mathrm{lim}}\,\overline{M}_{1}(t)$ is finite, meaning that the correction of order $\frac{1}{N}$ to Eq. (\[eq:growth averge m-1\]) is negligible in the large $N$ limit. Moments Lyapunov Exponent\[sec:Moments-equations\] ================================================== In this section, we derive the moment Lyapunov exponent Eq. (\[eq:gamma1\]). For this purpose, we present the first moment equations of Eq. (\[eq:m0 Ito\]) and Eq. (\[eq:mi Ito\]) for a finite number of non-hub nodes $N$ and $\delta=0$. These equations can be derived using the Fokker-Plank equation or alternatively the Feynman-Kac formula [@seroussi2018spectral; @schuss2009theory; @molchanov1991ideas; @carmona1994parabolic]. 
They read $$\frac{d\langle h_{0}\rangle}{dt}=\frac{J_{\mathrm{in}}}{N}\langle\overline{m}\rangle-J_{\mathrm{out}}\langle h_{0}\rangle+f_{0}\langle h_{0}\rangle+\frac{\sigma_{0}^{2}}{2}\langle h_{0}\rangle,\label{eq:first moment m0}$$ $$\frac{d\langle m_{i}\rangle}{dt}=\frac{J_{\mathrm{out}}}{N}\langle h_{0}\rangle-\frac{J_{\mathrm{in}}}{N}\langle m_{i}\rangle+f\langle m_{i}\rangle+\frac{\sigma_{\mathrm{out}}^{2}}{2}\langle m_{i}\rangle.\label{eq:first moment mi}$$ Here it is assumed that all the non-hub nodes are independent and with the same dynamics and denoting the total mass, $\langle\overline{m}\rangle=N\langle m_{i}\rangle$. Summing over $N$ in Eq. (\[eq:first moment mi\]), we get $$\frac{d\langle\overline{m}\rangle}{dt}=J_{\mathrm{out}}\langle h_{0}\rangle-\frac{J_{\mathrm{in}}}{N}\langle\overline{m}\rangle+f\langle\overline{m}\rangle+\frac{\sigma_{\mathrm{out}}^{2}}{2}\langle\overline{m}\rangle.\label{eq:first moment mbar}$$ Eqs. (\[eq:first moment mbar\]) and (\[eq:first moment m0\]) can be written in vector form as $$\frac{d\boldsymbol{a}}{dt}=A\boldsymbol{a}(t)=\left(\begin{array}{cc} f_{0}-J_{\mathrm{out}}+\frac{\sigma_{0}^{2}}{2} & \frac{J_{\mathrm{in}}}{N}\\ J_{\mathrm{out}} & f-\frac{J_{\mathrm{in}}}{N}+\frac{\sigma_{\mathrm{out}}^{2}}{2} \end{array}\right)\boldsymbol{a}(t).$$ This is a simple system of linear equations, and the dynamics it describes is determined by the eigenvalues of the matrix $A$. The resulted eigenvalues are than, $$\lambda_{1}=f+\frac{\sigma_{\mathrm{out}}^{2}}{2}-\frac{2J_{\mathrm{in}}}{N}\left(1+\frac{J_{\mathrm{out}}}{(\Delta f+\frac{\Delta\sigma^{2}}{2}-\frac{J_{\mathrm{in}}}{N}-J_{\mathrm{out}})}\right),$$ and, $$\lambda_{2}=f_{0}-J_{\mathrm{out}}+\frac{\sigma_{0}^{2}}{2}+\frac{2J_{\mathrm{out}}\frac{J_{\mathrm{in}}}{N}}{(\Delta f+\frac{\Delta\sigma^{2}}{2}-\frac{J_{\mathrm{in}}}{N}-J_{\mathrm{out}})}.$$ The solution is then given by $\left(\begin{array}{c} \langle m_{0}\rangle(t)\\ \langle\overline{m}\rangle(t) \end{array}\right)=Av_{1}e^{\lambda_{1}t}+Bv_{2}e^{\lambda_{2}t}=A\left(\begin{array}{c} \lambda_{1}-d\\ c \end{array}\right)e^{\lambda_{1}t}+B\left(\begin{array}{c} \lambda_{2}-d\\ c \end{array}\right)e^{\lambda_{2}t}=A\left(\begin{array}{c} \Delta f-J_{\mathrm{out}}+\frac{\Delta\sigma^{2}}{2}\\ J_{\mathrm{out}} \end{array}\right)e^{\lambda_{1}t}+B\left(\begin{array}{c} 0\\ J_{\mathrm{out}} \end{array}\right)e^{\lambda_{2}t}$. 
Plugging this expression into the equation of the first moment Lyapunov exponent we get $$\begin{gathered} \gamma_{1}=\underset{t\rightarrow\infty}{\mathrm{lim}}\,\underset{N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(\langle h_{0}\rangle+\langle\overline{m}\rangle\right) \\=\mathrm{\underset{t\rightarrow\infty}{lim}}\,\frac{1}{t}\mathrm{ln}\,e^{\lambda_{2}t}\left(A\left(\Delta f-J_{0}+\frac{\Delta\sigma^{2}}{2}+J_{\mathrm{out}}\right)e^{\left(\lambda_{1}-\lambda_{2}\right)t}+BJ_{\mathrm{out}}\right)\\ =\begin{cases} \begin{array}{c} f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\\ f_{0}-J_{\mathrm{out}}+\frac{\sigma_{0}^{2}}{2} \end{array} & \begin{array}{c} \frac{\Delta\sigma^{2}}{2}+\Delta f-J_{\mathrm{out}}<0\\ \frac{\Delta\sigma^{2}}{2}+\Delta f-J_{\mathrm{out}}\geq0 \end{array}\end{cases}\\=\begin{cases} \begin{array}{c} f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\\ f_{0}-J_{\mathrm{out}}+\frac{\sigma_{0}^{2}}{2} \end{array} & \begin{array}{c} \alpha>\frac{\Delta\sigma^{2}}{\sigma^{2}}\\ \alpha\leq\frac{\Delta\sigma^{2}}{\sigma^{2}} \end{array}\end{cases}.\end{gathered}$$ Sample Lyapunov Exponent\[sec:Sample-Lyapunov-Exponent\] ======================================================== Here, we calculate the sample Lyapunov exponent of the mass of the system, defined as $$\widetilde{\gamma}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(h_{0}+\sum_{i}m_{i}\right).$$ To prove the formula (\[eq:sample Lyapunov\]) in the main text, we consider at the system of equations $$\frac{dm_{i}}{dt}=\frac{J_{\mathrm{out}}}{N}h_{0}-\frac{J_{\mathrm{in}}}{N}m_{i}+fm_{i}+\frac{\sigma_{\mathrm{out}}^{2}}{2}m_{i}+\sigma_{\mathrm{out}}g_{i}(t)m_{i},$$ $$\frac{dh_{0}}{dt}=\frac{J_{\mathrm{in}}}{N}\overline{m}+\left(f_{0}+\frac{\sigma_{0}^{2}}{2}-J_{\mathrm{out}}\right)h_{0}+\sigma_{0}g_{0}(t)h_{0}.$$ Using a transformation of variable given by the Itô formula $$\begin{gathered} \frac{d\mathrm{ln}\left(\overline{m}+h_{0}\right)}{dt}=\frac{\partial f}{\partial t}+\sum_{i}\frac{\partial f}{\partial m_{i}}A_{i}+\frac{\partial f}{\partial h_{0}}A_{0}+\frac{1}{2}\frac{\partial f^{2}}{\partial^{2}h_{0}}\left[B^{2}\right]_{00}\\ +\frac{1}{2}\sum_{i}\frac{\partial f^{2}}{\partial^{2}m_{i}}\left[B^{2}\right]_{ii}+\frac{\partial f}{\partial h_{0}}B_{00}\frac{dW_{0}}{dt}+\sum_{i}\frac{\partial f}{\partial m_{i}}B_{ii}\frac{dW_{i}}{dt}\\ =\frac{1}{N\overline{M}+1}\left(J_{\mathrm{out}}-J_{\mathrm{in}}\overline{M}+f\overline{M}+\frac{\sigma_{\mathrm{out}}^{2}}{2}\overline{M}+\sigma_{\mathrm{out}}\sum_{i}g_{i}(t)M_{i}\right)\\ +\frac{1}{N\overline{M}+1}\left(J_{\mathrm{in}}\overline{M}+f_{0}+\frac{\sigma_{0}^{2}}{2}-J_{\mathrm{out}}\right)-\frac{1}{2}\frac{\sigma_{0}^{2}+\sigma_{\mathrm{out}}^{2}\sum_{i}M_{i}^{2}}{\left(N\overline{M}+1\right)^{2}}\\ +\frac{1}{N\overline{M}+1}\sigma_{0}g_{0}(t)+\sigma_{\mathrm{out}}\sum_{i}\frac{g_{i}(t)M_{i}}{N\overline{M}+1}\\ =\frac{1}{N\overline{M}+1}\sigma_{0}g_{0}(t)+\sigma_{\mathrm{out}}\sum_{i}\frac{g_{i}(t)M_{i}}{N\overline{M}+1}\\ +\frac{1}{N\overline{M}+1}\left(\left(f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\right)N\overline{M}+\left(f_{0}+\frac{\sigma_{0}^{2}}{2}\right)\right)-\frac{1}{2}\frac{\sigma_{0}^{2}+\sigma_{\mathrm{out}}^{2}\sum_{i}M_{i}^{2}}{\left(N\overline{M}+1\right)^{2}}.\end{gathered}$$ Integrating with respect to time and dividing by $t$, we have $$\frac{1}{t}\mathrm{ln}\left(\overline{m}(t)+h_{0}(t)\right)=\frac{1}{t}\int_{0}^{t}\frac{d\mathrm{ln}\left(\overline{m}(\tau)+h_{0}(\tau)\right)}{d\tau}d\tau.$$ Letting the limit of 
$t,N\rightarrow\infty$, we get $$\begin{gathered} \widetilde{\gamma}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\mathrm{ln}\left(\overline{m}+h_{0}\right)=\underset{t, N\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{d\mathrm{ln}\left(\overline{m}(\tau)+h_{0}(\tau)\right)}{d\tau}d\tau\\ =\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\left[\frac{\sigma_{0}g_{0}(\tau)}{N\overline{M}+1}-\frac{1}{2}\frac{\sigma_{0}^{2}}{\left(N\overline{M}+1\right)^{2}}+\sigma_{\mathrm{out}}\sum_{i}\frac{g_{i}(\tau)M_{i}}{N\overline{M}+1}-\frac{\sigma_{\mathrm{out}}^{2}}{2}\frac{\sum_{i}M_{i}^{2}}{\left(N\overline{M}+1\right)^{2}}\right]d\tau\\ +\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{1}{N\overline{M}+1}\left(\left(f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\right)N\overline{M}+\left(f_{0}+\frac{\sigma_{0}^{2}}{2}\right)\right)d\tau.\label{eq:smaple detailed integrals}\end{gathered}$$ Here we used the deterministic law of $\overline{M}$ given that higher corrections in $N$ are negligible, see \[subsec: finite N mbar\] for more details. Using the stationarity property of the Gaussian processes $g_{0}(t)$ and $g_{i}(t)$, and the ergodic theorem: $$\begin{gathered} \widetilde{\gamma}=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\left[\frac{1}{\left(N\overline{M}+1\right)}\left(\left(f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\right)N\overline{M}+\left(f_{0}+\frac{\sigma_{0}^{2}}{2}\right)\right)\right]d\tau\\ =\underset{t,N\rightarrow\infty}{\mathrm{lim}}\left(f+\frac{\sigma_{\mathrm{out}}^{2}}{2}\right)\frac{1}{t}\int_{0}^{t}\frac{\overline{M}}{\overline{M}+\frac{1}{N}}d\tau+\left(f_{0}+\frac{\sigma_{0}^{2}}{2}\right)\frac{1}{t}\int_{0}^{t}\frac{d\tau}{N\overline{M}+1}.\label{Eq:gmSample2terms}\end{gathered}$$ This formula is obtained under the assumption that the fluctuations of the noise are finite (Novikov condition), i.e., $$\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{\sum_{i}M_{i}^{2}}{\left(N\overline{M}+1\right)^{2}}d\tau<\infty\label{eq:fluctuation 1}$$ and, $$\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{1}{\left(N\overline{M}+1\right)^{2}}d\tau<\infty.\label{eq:fluctuation 2}$$ In order to calculate the above integrals one needs to specify how the number of nodes in the graphs growth with time. We show that if the number of nodes grows exponentially in $t$, i.e., $N\sim e^{\varepsilon t}$, then there exists a localization phase. In addition, the fluctuation, conditions \[eq:fluctuation 1\] and \[eq:fluctuation 2\], are satisfied. These conditions are verified in \[subsec:Noise-fluctuations\] below. Since the fluctuations are finite, we are left with calculating integrals of the logistic function $\overline{M}(t)$. We use the following properties of the logistic function: $$\overline{M}(t)=\frac{1}{A+Be^{-\xi t}}=\frac{e^{\xi t}}{Ae^{\xi t}+B}\label{eq:logistic identety1}$$ and $$\int_{0}^{t}\overline{M}d\tau=\frac{1}{\xi A}\mathrm{log}\left(Ae^{\xi t}+B\right).\label{eq:logistic identety2}$$ In our case, $A=\frac{\beta}{\alpha+1}$, $B=\frac{1}{\overline{M}_{0}}-\frac{\beta}{\alpha+1}$, and $\xi=\frac{\sigma^{2}\left(\alpha+1\right)}{2}$. Using Eqs. 
(\[eq:logistic identety1\]), and (\[eq:logistic identety2\]) it is easy to show that $$\begin{gathered} \underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{\overline{M}}{\overline{M}+\frac{1}{N}}d\tau=\underset{t\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{T}^{t}\frac{\overline{M}}{\overline{M}+e^{-\varepsilon(\tau-T)}}d\tau \\=\underset{t\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{T}^{t}\frac{1}{1+Ae^{\varepsilon T}e^{-\varepsilon\tau}+Be^{\varepsilon T}e^{-\left(\xi+\varepsilon\right)\tau}}d\tau=1,\label{eq:term1}\end{gathered}$$ where $T$ is some finite time. In the same manner, one can calculate the second term in Eq. (\[Eq:gmSample2terms\]): $$\begin{gathered} \underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{d\tau}{N\overline{M}+1}=\underset{t\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{A+Be^{-\xi\tau}d\tau}{N+A+Be^{-\xi\tau}}\\=\underset{t\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{Ad\tau}{e^{\varepsilon(\tau-T)}+A+Be^{-\xi\tau}}+\underset{t\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{Be^{-\xi\tau}}{e^{\varepsilon(\tau-T)}+A+Be^{-\xi\tau}}d\tau\\=\underset{t\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{Be^{-\left(\xi+\varepsilon\right)\tau}}{e^{-\varepsilon T}+Be^{-\left(\xi+\varepsilon\right)\tau}}d\tau=\underset{t\rightarrow\infty}{\mathrm{lim}}\,-\frac{1}{t\left(\xi+\varepsilon\right)}\mathrm{log}\left(e^{-\varepsilon T}+Be^{-\left(\xi+\varepsilon\right)t}\right)=0,\label{eq:term2}\end{gathered}$$ provided that $\mathrm{max}\{-\xi,0\}\leq\varepsilon$. Note that, in order to have finite fluctuations, $\sigma^{2}\leq\epsilon$, see \[subsec:Noise-fluctuations\]. Substituting the results in Eq. (\[eq:term1\]) and Eq. (\[eq:term2\]) we obtain $$\widetilde{\gamma}=f+\frac{\sigma_{\mathrm{out}}^{2}}{2}.$$ Therefore, for any exponential growth rate $\sigma^{2}\mathrm{max}\{-\frac{\alpha+1}{2},1\}\leq\varepsilon$, the fluctuations and sample Lyapunov are finite. Noise fluctuations\[subsec:Noise-fluctuations\] ------------------------------------------------ In this subsection, we prove that the fluctuation are finite when $N\sim e^{\varepsilon t}$, i.e., the conditions in (\[eq:fluctuation 1\]) and (\[eq:fluctuation 2\]) are satisfied. That condition (\[eq:fluctuation 2\]) is satisfied, is readily seen since the function under the integral is bounded between zero and one, and so the integral itself is also finite: $$\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{d\tau}{\left(N\overline{M}+1\right)^{2}} <\infty\label{eq:Fluctuations hub}$$ In order to calculate the fluctuation of the non-hub nodes, i.e., establish (\[eq:fluctuation 1\]), we approximate the limit, $\underset{N\rightarrow\infty}{\mathrm{lim}}\frac{1}{N}\sum_{i}M_{i}^{2}\rightarrow\langle M_{i}^{2}\rangle$; since $M_{i}$ are i.i.d. random variables. The second moment can be calculated using the Fokker-Plank equation Eq. 
(\[eq:FP M0\]): $$\frac{\partial P(M,t)}{\partial t}=-\frac{\sigma^{2}}{2}\frac{\partial\left(\left(\left(\alpha+1-\beta\overline{M}\right)M\right)P(M,t)\right)}{\partial M}+\frac{\sigma^{2}}{2}\frac{\partial^{2}\left(M^{2}P(M,t)\right)}{\partial^{2}M}.$$ Averaging over the second moment yields: $$\frac{d\langle M^{2}\rangle}{dt}=\frac{\sigma^{2}}{2}\left(2\left(\alpha+2-\beta\overline{M}\right)\langle M^{2}\rangle\right).$$ The solution for $\tilde{\delta}=0$ is $$\begin{gathered} \langle M^{2}(t)\rangle=\langle M^{2}\rangle(0)\mathrm{exp}\left(\sigma^{2}t\left(\alpha+2\right)-\sigma^{2}\beta\int_{0}^{t}\overline{M}d\tau\right)\\=\frac{\langle M^{2}\rangle(0)e^{\sigma^{2}t}}{\left(\frac{\beta}{(\alpha+1)}+\frac{\beta\left(\frac{\alpha+1}{\beta}-\overline{M}_{0}\right)}{\overline{M}_{0}(\alpha+1)}e^{-\frac{\sigma^{2}\left(\alpha+1\right)t}{2}}\right)^{2}} =\langle M^{2}\rangle(0)e^{\sigma^{2}t}\overline{M}(t)^{2}.\end{gathered}$$ The fluctuations of the non-hub nodes, i.e., the last term in Eq. (\[eq:smaple detailed integrals\]) are then, given by $$\begin{gathered} \underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{\langle M_{i}^{2}\rangle}{N\left(\overline{M}+\frac{1}{N}\right)^{2}}d\tau=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{\langle M_{i}^{2}\rangle}{N\left(\overline{M}+\frac{1}{N}\right)^{2}}d\tau\\ =\langle M^{2}\rangle(0)\underset{t,N\rightarrow\infty}{\mathrm{lim}}\,\frac{1}{t}\int_{0}^{t}\frac{e^{\sigma^{2}t}\overline{M}^{2}}{N\left(\overline{M}+\frac{1}{N}\right)^{2}}d\tau.\end{gathered}$$ Substituting here the logistic function $\overline{M}(t)$ (Eq. (\[eq:logistic identety1\])), we get $$\begin{gathered} \underset{t,N\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{e^{\sigma^{2}\tau}\overline{M}^{2}}{N\left(\overline{M}+\frac{1}{N}\right)^{2}}d\tau=\underset{t,N\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{e^{\sigma^{2}\tau}}{N+2\left(A+Be^{-\xi\tau}\right)+\frac{1}{N}\left(A+Be^{-\xi\tau}\right)^{2}}d\tau\\=\underset{t\rightarrow\infty}{\mathrm{lim}}\frac{1}{t}\int_{0}^{t}\frac{e^{\sigma^{2}\tau}}{e^{\varepsilon(\tau-T)}+2\left(A+Be^{-\xi\tau}\right)+e^{-\varepsilon(\tau-T)}\left(A+Be^{-\xi\tau}\right)^{2}}d\tau\\=\underset{t\rightarrow\infty}{\mathrm{lim}}\frac{1}{t\left(\sigma^{2}-\varepsilon\right)}e^{\left(\sigma^{2}-\varepsilon\right)\left(t-T\right)+\varepsilon T}d\tau=0.\label{eq:fluctuations small nodes}\end{gathered}$$ Therefore, the fluctuation are finite for any $\varepsilon\geq\sigma^2$. Multiple Hubs Derivation\[sec:Multiple-Hubs-Derivation\] ======================================================== In this section, we derive the normalized equation in the multiple hubs models of Sec. \[Sec: multipileHaubs\]. Taking normalized variables $M_{i}=\frac{m_{i}}{\overline{h}}$, $x_{i}=\frac{h_{i}}{\overline{h}}$, so that, $\overline{M}=\frac{1}{N}\sum_{i}\frac{m_{i}}{\overline{h}}$, and $\frac{1}{H}\sum_{j}x_{j}=1$, Eq. (\[eq:hi-multi-hub\]), and Eq. 
(\[eq:mi-multi-hub\]) are transformed into the following set of equations: $$\frac{dM_{i}}{dt}=\frac{J_{\mathrm{out}}}{N}-\left(\frac{J_{\mathrm{in}}}{N}-\frac{J_{\mathrm{out}}}{H}-\Delta f+\frac{J_{\mathrm{in}}}{H}\overline{M}\right)M_{i}+\frac{\nu_{i}^{2}\left(x\right)}{2}M_{i}+\nu_{i}\left(x\right)M_{i}\xi_{i}\label{eq:Mi-multi-hub}$$ $$\frac{dx_{i}}{dt}=J_{\mathrm{in}}\overline{M}\left(1-x_{i}\right)+\frac{\sigma_{i}^{2}\left(x\right)}{2}x_{i}+\sigma_{i}\left(x\right)x_{i}\phi_{i}.\label{eq:xi-Multi-hub}$$ Set $$\nu_{i}f_{i}(t)-\frac{1}{H}\sum_{j}^{H}\sigma_{j}g_{j}(t)x_{j}=\sqrt{\nu_{i}^{2}+\frac{1}{H}\sum_{j}^{H}\sigma_{j}^{2}x_{j}^{2}}\xi_{i}=\nu_{i}\left(x\right)\xi_{i}$$ and $$\sigma_{i}g_{i}(t)-\frac{1}{H}\sum_{j}^{H}\sigma_{j}g_{j}(t)x_{j}=\sqrt{\sigma_{i}^{2}+\frac{1}{H}\sum_{j}^{H}\sigma_{j}^{2}x_{j}^{2}}\phi_{i}=\sigma_{i}\left(x\right)\phi_{i},$$ so that $\frac{1}{H}\sum_{i}^{H}\sigma_{i}\left(x\right)\phi_{i}x_{i}=0$, and $\nu_{i}^{2}\left(x\right)-\nu_{i}^{2}=\sigma_{i}^{2}\left(x\right)-\sigma_{i}^{2}$, and $\Delta f=f-q$. In the limit of $H,N\rightarrow\infty$, the variances are constants, $\nu^{2}\left(x\right)=\nu^{2}+\sigma^{2}$, and $\sigma^{2}\left(x\right)=2\sigma^{2}$. In this limit,one can average Eq. (\[eq:Mi-multi-hub\]), since the variables are decoupled.
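To close these appendices, the averaged logistic dynamics derived above can be probed with a short Euler–Maruyama simulation of the single-hub system. The sketch below is purely illustrative: the parameter values are invented, the drifts are set to $f=f_{0}=0$ so that the fixed point reduces to $\overline{M}_{\mathrm{eq}}=(J_{\mathrm{out}}+\sigma^{2}/2)/J_{\mathrm{in}}$, and no attempt is made to resolve the localization transition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper or from data)
N, T, dt = 2000, 20.0, 1e-3
J_in, J_out = 1.0, 1.0
f = f0 = 0.0                     # drifts set to zero for simplicity
sig, sig0 = 0.3, 0.05            # sigma_out, sigma_0

m = np.full(N, 1.0 / N)          # non-hub masses m_i
h0 = 1.0                         # hub mass
n_steps = int(T / dt)
M_bar = np.empty(n_steps)

for k in range(n_steps):
    M_bar[k] = m.mean() / h0     # average relative mass (1/N) sum_i m_i / h_0
    dW = rng.standard_normal(N) * np.sqrt(dt)
    dW0 = rng.standard_normal() * np.sqrt(dt)
    dm = ((J_out / N) * h0 + (-J_in / N + f + 0.5 * sig**2) * m) * dt + sig * m * dW
    dh0 = ((J_in / N) * m.sum() + (f0 + 0.5 * sig0**2 - J_out) * h0) * dt \
          + sig0 * h0 * dW0
    m, h0 = m + dm, h0 + dh0

M_eq = (J_out + 0.5 * (sig**2 + sig0**2)) / J_in   # logistic fixed point (1+alpha)/beta
print(f"time-averaged M_bar (last quarter): {M_bar[3 * n_steps // 4:].mean():.3f}")
print(f"predicted fixed point M_eq        : {M_eq:.3f}")
```

For these parameters the simulated average should settle within a few percent of the predicted fixed point, in line with the large-$N$ analysis.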
Introduction ============ In this note we are interested in the special trilogarithmic solutions of the generalized Witten–Dijkgraaf–Verlinde–Verlinde (WDVV) equations [@MMM]. Such solutions are determined by a finite collection $A$ of covectors ${\alpha}$ with multiplicities $c_{\alpha}$. More specifically, the prepotential satisfying the WDVV equations has the form $$\begin{gathered} \label{fintro} F= \sum_{{\alpha}\in A} c_{\alpha}Li_3\big(e^{2i{\alpha}(x)}\big) + \mbox{cubic terms}, \end{gathered}$$ where $Li_3$ is the trilogarithm function. Solution of this type for the $A_n$ root system appeared in [@MMM2] in relation with Seiberg–Witten theory. More systematically such solutions were studied by Hoevenaars and Martini in [@H1; @H2] who determined solutions for all irreducible reduced root systems [@H2]. More recently solutions of the form were derived from reductions of Egorov hydrodynamic chains in [@P]. The rational versions of solutions play an important role in the theory of Frobenius manifolds, a geometric framework for the WDVV equations [@D]. Thus solutions corresponding to the Coxeter root systems are almost dual to the Frobenius structures on the orbit spaces of finite Coxeter groups [@D2]. In the trigonometric case such a duality is verified for the affine $A_n$ case in [@R; @RS]. The study of general rational solutions of the form $$\begin{gathered} \label{fintro2} F=\sum_{{\alpha}\in A} {\alpha}(x)^2 \log {\alpha}(x) \end{gathered}$$ was initiated by Veselov in [@V1] where a geometric notion of the $\vee$-system equivalent to the WDVV equations for was introduced. It was shown in [@V3] that any generalized Calogero–Moser–Sutherland (CMS) operator admitting a factorized eigenfunction determines a $\vee$-system. In this note we are interested in the solutions where the cubic terms involve extra variable like in the works [@H1; @H2] on the solutions for the root systems. We derive geometric and algebraic conditions for a system of vectors with multiplicities so that the corresponding function satisfies the WDVV equations. These conditions should be thought of as trigonometric analogue of the notion of the $\vee$-system. The conditions carry rather strong geometrical restrictions on the collection of vectors formulated in terms of series of vectors parallel to a chosen one. We illustrate this by determining all trigonometric $\vee$-systems with up to five vectors in the plane. Trigonometric ansatz, in contrast to the rational one, allows to define the generalized CMS operator corresponding to the solution . We show that this operator has a factorized eigenfunction. This statement inverts the one for the rational $\vee$-systems obtained in [@V3]. In fact our arguments follow [@V3] very closely. We also discuss additional condition needed to have trigonometric solution to the WDVV equations starting from a CMS operator with factorized eigenfunction. Trigonometric $\boldsymbol{\vee}$-systems ========================================= Consider a function $F$ of the form $$\begin{gathered} \label{F} F=\frac13 y^3 + \sum_{{\alpha}\in A} c_{{\alpha}} {\alpha}(x)^2 y + \lambda \sum_{{\alpha}\in A} c_{\alpha}f({\alpha}(x)), \end{gathered}$$ where $A$ is a finite collection of covectors on $V\cong{\mathbb C}^n$, $x=(x_1,\ldots,x_n)$, $c_{\alpha}$, $\lambda$ are non-zero constants and function $f(x)$ satisfies $f'''(x)=\cot x$. The last equation fixes function $f(x)$ up to 2nd order terms which will not be important for the WDVV equations below. 
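As an aside (a check not in the original text): for the particular choice of $f$ fixed in the next display, repeated differentiation of the trilogarithm with $\frac{d}{dz}{\rm Li}_k(z)={\rm Li}_{k-1}(z)/z$ and ${\rm Li}_1(z)=-\log(1-z)$ gives $$f'''(x)=i+\frac14\,\frac{d^{3}}{dx^{3}}{\rm Li}_3\big(e^{-2ix}\big)=i+\frac{2i\,e^{-2ix}}{1-e^{-2ix}}=i\,\frac{e^{ix}+e^{-ix}}{e^{ix}-e^{-ix}}=\cot x,$$ the first $i$ coming from the cubic term, so that choice is indeed admissible.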
We may fix a choice of $f(x)$ by $$f(x)=\frac16 i x^3 + \frac14 {\rm Li}_3\big(e^{-2ix}\big).$$ The ansatz introducing extra variable $y$ was proposed in [@H2] in the case of root systems $A=\mathcal R$. The form guarantees that the matrix of third derivatives involving $y$ is constant, as we will explore below. [*We will assume throughout the paper that collection $A$ of covectors ${\alpha}$ belongs to an $n$-dimensional lattice, and that the bilinear form $$\begin{gathered} \label{G} (u,v):=\sum\limits_{{\alpha}\in A}c_{\alpha}{\alpha}(u) {\alpha}(v) \end{gathered}$$ is non-degenerate on $V$.*]{} The form $(\cdot,\cdot)$ identifies $V$ and $V^*$ and following [@V1] we will denote by $\gamma^\vee$ the vector dual to the covector $\gamma$. We will also denote through $(\cdot,\cdot)$ the corresponding inner product on $V^*$. We are interested in the conditions on $\{{\alpha}, c_{{\alpha}}, \lambda\}$ when function $F$ satisfies the WDVV equations $$\begin{gathered} \label{wdvv} F_i F_k^{-1} F_j = F_j F_k^{-1} F_i, \end{gathered}$$ $i,j,k=0, 1,\ldots,n$. Here $F_i$ are $(n+1)\times (n+1)$ matrices of third derivatives in the coordinates $(x_0=y,x_1,\ldots,x_n)$, $(F_i)_{ab}=\frac{{\partial}^3F}{{\partial}x_i {\partial}x_a {\partial}x_b}$. It is sufficient to fix $k=0$, then $F_0=F_y$ is the following non-degenerate matrix $$F_y=2 \left( \begin{array}{cc} 1 & 0\\ 0& \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}\otimes {\alpha}\end{array} \right).$$ Similarly $$F_i=\left( \begin{array}{cc} 0 & 2\sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i {\alpha}\vspace{2mm}\\ 2\sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i {\alpha}& \lambda\sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i \cot {\alpha}(x) {\alpha}\otimes {\alpha}\end{array} \right),$$ where we denoted by ${\alpha}$ both column and row vectors ${\alpha}=({\alpha}_1,\ldots,{\alpha}_n)$. The WDVV conditions for a function $F$ can be reformulated partly using geometry of the system $A$. For any ${\alpha}\in A$ let us collect all the covectors from $A$ non-parallel to ${\alpha}$ into the disjoint union of [***$\pmb {\alpha}$-series***]{} $\Gamma_{\alpha}^1, \ldots, \Gamma_{\alpha}^k$. These series are determined by the property that for any $s=1,\ldots, k$ and for any two covectors $\gamma_1, \gamma_2 \in \Gamma_{\alpha}^s$ one has either $\gamma_1-\gamma_2=n\alpha$ or $\gamma_1+\gamma_2=n\alpha$ for some [*integer*]{} $n$. We also assume that the series are maximal, that is if ${\beta}\in \Gamma_{\alpha}^i$ then $\Gamma_{\alpha}^i$ must contain all the vectors of the form $\pm {\beta}+n {\alpha}\in A$ with $n \in {\mathbb Z}$. We note that solution is not affected if some of the covectors ${\alpha}\in A$ are replaced with $-{\alpha}$. By appropriate choice of signs the vectors can be made to belong to a half-space, we will denote such systems as $A_+$. Moreover, for any ${\alpha}\in A$ one can choose a positive system $A_+ \ni {\alpha}$ in such a way that ${\alpha}$-series $\Gamma_{\alpha}^s$ will consist of vectors of the form ${\beta}_s + n_i {\alpha}\in A_+$ for appropriate integer parameters $n_i$ with ${\beta}_s \in A_+$. Let $A \subset V^*\cong{\mathbb C}^n$ be a finite collection of covectors ${\alpha}$ with multiplicities $c_{\alpha}$ such that the corresponding form is non-degenerate and the covectors ${\alpha}$ belong to an $n$-dimensional lattice. 
We say that $A$ is a trigonometric $\vee$-system if for any ${\alpha}\in A$ and for any ${\alpha}$-series $\Gamma_{\alpha}^s$ one has $$\begin{gathered} \label{V1} \sum_{{\beta}\in \Gamma_{\alpha}^s} c_{\beta}({\alpha},{\beta}) {\alpha}\wedge {\beta}=0. \end{gathered}$$ Notice that ${\alpha}\wedge \beta_1 = \pm {\alpha}\wedge {\beta}_2$ if ${\beta}_1$, ${\beta}_2$ belong to the same ${\alpha}$-series $\Gamma_{\alpha}^s$ so identities may be simplified accordingly. Also replacement of some of the covectors by their opposite preserves the class of trigonometric $\vee$-systems. Note also that the non-degenerate linear transformations act naturally on the trigonometric $\vee$-systems, and that the direct sum $A_1 \oplus A_2$ of the trigonometric $\vee$-systems $A_1 \subset V_1^*$, $A_2 \subset V_2^*$ considered as a set of covectors in $V_1\oplus V_2$ is again a trigonometric $\vee$-system. The systems obtained in this way will be called [*reducible*]{}. If such a decomposition is not possible then the (trigonometric $\vee$-)system is called irreducible. \[t1\] The WDVV equations $$\begin{gathered} F_i F_y^{-1} F_j = F_j F_y^{-1} F_i,\end{gathered}$$ $i,j=0, 1,\ldots,n$, for the function are equivalent to the following two conditions: 1. $A$ is a trigonometric $\vee$-system; 2. for a positive system $A_+$ and for any vectors $a,b,c,d \in V$ $$\begin{gathered} \label{V2} \sum_{{\alpha},{\beta}\in A_+} \left(\frac14 \lambda^2 ({\alpha}, {\beta}) - 1\right) c_{\alpha}c_{\beta}B_{{\alpha},{\beta}}(a,b) B_{{\alpha},{\beta}}(c,d) =0, \end{gathered}$$ where $B_{{\alpha},{\beta}}(a,b)={\alpha}\wedge {\beta}(a,b)= {\alpha}(a){\beta}(b)-{\alpha}(b){\beta}(a)$. For a vector $a \in V$ we define $F_a^\vee = F_y^{-1} F_a$ where $F_a= \sum\limits_{i=1}^n a_i F_i$. The WDVV equations are equivalent to the commutativity $[F_a^\vee, F_b^\vee]=0$ for any $a,b \in V$. We have $$F_i^\vee=\left( \begin{array}{cc} 0 & \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i {\alpha}\vspace{2mm}\\ \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i {\alpha}^\vee & \frac{\lambda}{2} \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}_i \cot {\alpha}(x) {\alpha}\otimes {\alpha}^\vee \end{array} \right),$$ where ${\alpha}^\vee$ is the (column) vector dual to the (row) covector ${\alpha}$ under the bilinear form $G=\sum\limits_{{\alpha}\in A} c_{{\alpha}} {\alpha}\otimes{\alpha}$. Therefore $$F_a^\vee=\left( \begin{array}{cc} 0 & \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}(a) {\alpha}\vspace{2mm}\\ \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}(a) {\alpha}^\vee & \frac{\lambda}{2} \sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}(a) \cot {\alpha}(x) {\alpha}\otimes {\alpha}^\vee \end{array} \right)$$ for any $a \in {\mathbb C}^n$. 
Now the product $F_a^\vee F_b^\vee$ equals $$\left( \begin{array}{cc} \sum\limits_{{\alpha},{\beta}\in A} c_{\alpha}c_{\beta}{\alpha}(a) {\beta}(b) {\alpha}({\beta}^\vee) & \frac{\lambda}{2} \sum\limits_{{\alpha},{\beta}\in A} c_{\alpha}c_{\beta}{\alpha}(a) {\beta}(b) {\alpha}({\beta}^\vee) \cot {\beta}(x) {\beta}\vspace{2mm}\\ \frac{\lambda}{2} \sum\limits_{{\alpha}, {\beta}\in A} c_{\beta}c_{\alpha}{\alpha}(a) {\beta}(b) {\alpha}({\beta}^\vee) \cot {\alpha}(x) {\alpha}^\vee & {\genfrac{}{}{0pt}{}{\sum\limits_{{\alpha},{\beta}\in A} c_{\alpha}c_{\beta}{\alpha}(a) {\beta}(b) {\alpha}^\vee \otimes {\beta}}{+\frac{\lambda^2}{4} \sum\limits_{{\alpha}, {\beta}\in A} c_{\alpha}c_{\beta}{\alpha}(a) {\beta}(b) {\alpha}({\beta}^\vee) \cot {\alpha}(x) \cot {\beta}(x) {\beta}\otimes{\alpha}^\vee}} \end{array} \right).$$ Therefore $[F_a^\vee, F_b^\vee]=0$ is equivalent to the identities $$\begin{gathered} \label{sing0} \sum_{{\alpha},{\beta}\in A} c_{\alpha}c_{\beta}B_{{\alpha},{\beta}}(a,b) ({\alpha}, {\beta}) \cot {\alpha}(x) {\alpha}^\vee = 0, \\ \label{sing} \sum_{{\alpha},{\beta}\in A} \left( \frac{\lambda^2}{4} c_{\alpha}c_{\beta}({\alpha}, {\beta}) \cot {\alpha}(x) \cot {\beta}(x) + c_{\alpha}c_{\beta}\right) B_{{\alpha},{\beta}}(a,b) {\alpha}\wedge {\beta}=0.\end{gathered}$$ To cancel singularities in one should have $$\sum_{{\genfrac{}{}{0pt}{}{{\beta}\in A}{{\beta}\nsim {\alpha}}}} c_{\beta}({\alpha}, {\beta}) \cot {\beta}(x) B_{{\alpha},{\beta}}(a,b) {\alpha}\wedge {\beta}=0$$ when $\cot {\alpha}(x)=0$. A linear combination of functions $\cot {\beta}(x)|_{\cot {\alpha}(x)=0}$ can vanish only if it vanishes for each ${\alpha}$-series: $$\sum_{{\beta}\in \Gamma_{\alpha}^s} c_{\beta}({\alpha}, {\beta}) \cot {\beta}(x) B_{{\alpha},{\beta}}(a,b) {\alpha}\wedge {\beta}=0$$ for all ${\alpha}$-series $\Gamma_{\alpha}^s$ (see e.g. [@F] for more detailed explanation). The last relation can be simplified as $$\begin{gathered} \label{V11} \sum_{{\beta}\in \Gamma_{\alpha}^s} c_{\beta}({\alpha},{\beta}) {\alpha}\wedge {\beta}=0, \end{gathered}$$ which means that $A$ is a trigonometric $\vee$-system. Identities guarantee that the left-hand side of is non-singular. Since all the vectors from $A$ belong to an $n$-dimensional lattice with basis $e^1, \ldots, e^n$, the left-hand side of is a rational function in the exponential coordinates $e^{e^i(x)}$. This rational function has degree zero and therefore it is a constant. We can assume that all covectors from $A$ belong to a half-space hence form a positive system $A_+$, so in appropriate limit $\cot({\alpha},x) \to i$ for all ${\alpha}\in A_+$. Thus property is equivalent to together with the condition $$\begin{gathered} \sum_{{\alpha},{\beta}\in A_+} \left(\frac{\lambda^2}{4} c_{\alpha}c_{\beta}({\alpha}, {\beta}) - c_{\alpha}c_{\beta}\right) B_{{\alpha},{\beta}}(a,b) {\alpha}\wedge {\beta}=0.\end{gathered}$$ The remaining condition is equivalent to the set of properties $$\begin{gathered} \label{V3} \sum_{{\beta}\in A} c_{\beta}({\alpha}, {\beta}) B_{{\alpha},{\beta}}(a,b) = 0, \end{gathered}$$ for any ${\alpha}\in A$. Identities follow from the $\vee$-conditions , this completes the proof of the theorem. \[remark1\] Let trigonometric $\vee$-systems $A_1 \subset V_1^*, A_2\subset V_2^*$ define the solutions of the WDVV equations for some $\lambda_1$, $\lambda_2$. Then the trigonometric $\vee$-system $A_1 \oplus A_2$ does not define a solution. Indeed, let us take vectors $a,c \in V_1$ and $b,d \in V_2$. 
Then property implies that $$\begin{gathered} \label{dirsn} (a,c)_1 (b,d)_2 =0, \end{gathered}$$ where $(\cdot,\cdot)_{1,2}$ are $\vee$-forms in the corresponding spaces $V_{1,2}$. Clearly, the relation does not hold for general vectors $a$, $b$, $c$, $d$. \[remark2\] Not all the trigonometric solutions of the WDVV equations have the form . It is shown in [@BMMM] that trilogarithmic functions have to arise when ansatz for $F$ is given by summation of $g(({\alpha},x))$ over the roots of a root system, $x\in V$. \[remark3\] A slightly more general ansatz for the solutions $F$ can be considered when cubic terms in $x$ are added to $F$. Similarly to the proof of Theorem \[t1\] it follows that $A$ still has to be a trigonometric $\vee$-system. The almost dual potentials corresponding to the $A_n$ affine Weyl group orbit spaces have such a form [@RS]. The corresponding trigonometric $\vee$-system $A$ is the $A_n$ root system in this case. \[proposition1\] Let $A=\{{\alpha}, c_{\alpha}\}$ be a trigonometric $\vee$-system. Then the set of vectors $\{\sqrt{c_{{\alpha}}}{\alpha}\}$ is a $($rational$)$ $\vee$-system, that is $F^{\rm rat}=\sum\limits_{{\alpha}\in A} c_{\alpha}{\alpha}(x)^2 \log {\alpha}(x)$ is a solution of the WDVV equations in the space $V$. By definition of the trigonometric $\vee$-system for any ${\alpha}\in A$ relations hold. Consider two-dimensional plane $\pi \subset V$ and sum up relations over $s$ so that the ${\alpha}$-series $\Gamma_{\alpha}^s$ belong to the plane $\pi \ni {\alpha}$. We arrive at the relations $$\sum_{{\beta}\in A\cap\pi} c_{\beta}({\alpha},{\beta}) {\alpha}\wedge {\beta}=0,$$ or, equivalently, $$\begin{gathered} \label{rV} \sum_{{\beta}\in A\cap\pi} c_{\beta}({\alpha},{\beta}){\beta}\quad \mbox{ is proportional to } {\alpha}. \end{gathered}$$ Relations is a definition of the (rational) $\vee$-system for the set of covectors $\{\sqrt{c_{{\alpha}}}{\alpha}\}$ (see [@V1] and [@FV2] for the complex space). It is equivalent to the property that $F^{\rm rat}$ satisfies WDVV equations in the space $V$ [@V1; @FV2]. Proposition is proven. Due to existence of extra variable $y$ in the ansatz the WDVV equations are nontrivial already when $n=2$. Thus it is natural to study at first two-dimensional configurations $A$ defining solutions of WDVV equations. When $A$ consists of one vector the corresponding form  is degenerate. If $A$ consists of two non-collinear vectors ${\alpha}$, ${\beta}$ then it follows that $({\alpha},{\beta})=0$ therefore relation holds and $A$ is a trigonometric $\vee$-system. However relation cannot hold then for any $\lambda$ and therefore a pair of vectors does not define a solution to WDVV equations (see also Remark \[remark1\] above). The following propositions deal with the next simplest cases when $A$ consists of 3, 4 and 5 vectors respectively. In fact all irreducible trigonometric $\vee$-systems with up to 5 covectors have to be two-dimensional. \[proposition2\] Let system $A$ consist of three vectors ${\alpha}$, ${\beta}$, ${\gamma}$ with nonzero multiplicities $c_{\alpha}$, $c_{\beta}$, $c_{\gamma}$. Then $A$ is an irreducible trigonometric $\vee$-system iff ${\alpha}\pm {\beta}\pm {\gamma}=0$ for some choice of signs. The non-degeneracy condition for the form is then given by $c_{\alpha}c_{\beta}+ c_{\alpha}c_{\gamma}+ c_{\beta}c_{\gamma}\ne 0$. Any such system $A$ defines the solution of the WDVV equations with $\lambda=2(c_{\alpha}c_{\beta}+ c_{\alpha}c_{\gamma}+ c_{\beta}c_{\gamma})(c_{\alpha}c_{\beta}c_{\gamma})^{-1/2}$. 
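Before turning to the proof, the simplest instance of this proposition is easy to check numerically. The sketch below (illustrative only, not part of the argument) takes ${\alpha}=e^1$, ${\beta}=e^2$, ${\gamma}=e^1+e^2$ with unit multiplicities, for which the stated formula gives $\lambda=6$, assembles the matrices of third derivatives of the prepotential $F$ at a generic point, and verifies the WDVV commutativity relation with $k=0$ to machine precision.

```python
import numpy as np

# Check of the three-vector case: alpha = e1, beta = e2, gamma = e1 + e2,
# c_alpha = c_beta = c_gamma = 1, hence lambda = 2*3/sqrt(1) = 6.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # covectors (rows)
c = np.array([1.0, 1.0, 1.0])
lam = 6.0

G = sum(ci * np.outer(a, a) for ci, a in zip(c, A))  # sum_a c_a (a x a)

rng = np.random.default_rng(1)
x = rng.uniform(0.3, 1.2, size=2)                    # generic point, sin(a.x) != 0

def F3(i):
    """Matrix (F_i)_{ab} of third derivatives in coordinates (y, x1, x2)."""
    M = np.zeros((3, 3))
    if i == 0:                                       # F_y
        M[0, 0] = 2.0
        M[1:, 1:] = 2.0 * G
    else:
        k = i - 1
        M[0, 1:] = M[1:, 0] = 2.0 * G[k]
        for ci, a in zip(c, A):
            M[1:, 1:] += lam * ci * a[k] * np.outer(a, a) / np.tan(a @ x)
    return M

F0inv = np.linalg.inv(F3(0))
comm = F3(1) @ F0inv @ F3(2) - F3(2) @ F0inv @ F3(1)
print("max |F_1 F_y^{-1} F_2 - F_2 F_y^{-1} F_1| =", np.abs(comm).max())
# expected to be at round-off level for lambda = 6
```

Replacing $\lambda=6$ by any other value makes the residual visibly nonzero, in agreement with the proposition.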
It follows from relations that ${\gamma}={\alpha}+{\beta}$ up to multiplication of some of the vectors by $-1$. We take a basis $e^1={\alpha}$, $e^2={\beta}$ in ${\mathbb C}^2$. The bilinear form takes the form $G=c_{\alpha}x_1^2+c_{\beta}x_2^2 + c_{\gamma}(x_1+x_2)^2$. This form is non-degenerate iff $c_{\alpha}c_{\beta}+ c_{\alpha}c_{\gamma}+ c_{\beta}c_{\gamma}\ne 0$. One can check that $${e^1}^\vee=\frac{(c_{\beta}+c_{\gamma})e_1-{c_{\gamma}}e_2}{c_{\alpha}c_{\beta}+ c_{\alpha}c_{\gamma}+ c_{\beta}c_{\gamma}}, \qquad {e^2}^\vee=\frac{-c_{\gamma}e_1+(c_{\alpha}+c_{\gamma})e_2}{c_{\alpha}c_{\beta}+ c_{\alpha}c_{\gamma}+ c_{\beta}c_{\gamma}},$$ where $e_1$, $e_2$ is dual basis to $e^1$, $e^2$, that is $e^i(e_j)=\delta^i_j$. Relations look as follows $$\begin{gathered} \big({e^1}^\vee,{e^2}^\vee\big)(c_{\beta}+c_{\gamma})+\big({e^1}^\vee,{e^1}^\vee\big)c_{\gamma}=0, \\ \big({e^1}^\vee,{e^2}^\vee\big)(c_{\alpha}+c_{\gamma})+\big({e^2}^\vee,{e^2}^\vee\big)c_{\gamma}=0, \\ \big({e^1}^\vee,{e^2}^\vee\big)(c_{\alpha}-c_{\beta})+\big({e^1}^\vee,{e^1}^\vee\big)c_{\alpha}-\big({e^2}^\vee,{e^2}^\vee\big)c_{\beta}=0,\end{gathered}$$ and they are automatically satisfied. Relation results to one scalar equation $$\frac{\lambda^2}{4}\left(c_{\alpha}c_{\beta}({\alpha}^\vee,{\beta}^\vee)+c_{\alpha}c_{\gamma}({\alpha}^\vee,{\gamma}^\vee)+c_{\beta}c_{\gamma}({\beta}^\vee,{\gamma}^\vee)\right)=c_{\alpha}c_{\beta}+c_{\alpha}c_{\gamma}+c_{\beta}c_{\gamma},$$ which has solution as stated in the formulation. In the following proposition we study configurations consisting of four covectors. \[proposition3\] Let system $A$ consist of four vectors ${\alpha}$, ${\beta}$, ${\gamma}$, $\delta$ with nonzero multiplicities $c_{\alpha}$, $c_{\beta}$, $c_{\gamma}$, $c_\delta$. Then $A$ is an irreducible trigonometric $\vee$-system iff the vectors in $A_+$ take the form $e^1$, $e^2$, $e^1 \pm e^2$ in a suitable basis, and the corresponding multiplicities $c_1$, $c_2$, $c_\pm$ satisfy $c_1=c_2$. This property is equivalent to the orthogonality $(e^1+e^2, e^1-e^2)=0$ under the corresponding $\vee$-product. The non-degeneracy condition for the form is then given by $\Delta=(c_1+2 c_+)(c_1+2c_-)\ne 0$. These systems $A$ define the solutions of the WDVV equations with $\lambda=2\Delta c_1^{-1/2}(4c_+c_-+c_1(c_++c_-))^{-1/2}$ once $\lambda$ is finite. It follows from the series relations that there is a a vector ${\alpha}\in A$ such that all the remaining vectors ${\beta},{\gamma}, \delta \in A$ belong to single ${\alpha}$-series $\Gamma^1_{\alpha}$. Indeed, otherwise, up to renaming the covectors and taking opposite, we have $\delta=\gamma+ n {\alpha}$, $n \in {\mathbb N}$, $({\alpha},{\beta})=({\gamma},\delta)=0$. Then consideration of ${\beta}$-series gives $2{\gamma}+n{\alpha}=m {\beta}$ for some $m \in {\mathbb Z}$. And consideration of $\gamma$-series gives ${\alpha}+ p {\gamma}= \pm {\beta}$ for some $p \in {\mathbb Z}$. Therefore $2 {\gamma}+ n {\alpha}= \pm m({\alpha}+ p {\gamma})$, hence $n=\pm m$ and $2 = \pm mp$, thus $m=\pm 1$ or $p=\pm 1$. In the case $m=\pm 1$ we have $n=1$ hence $\gamma$-series contains $\delta$ together with ${\alpha}$ and ${\beta}$. And in the case $p=\pm 1$ the ${\alpha}$-series contains ${\beta}$ together with ${\gamma}$ and $\delta$. So we can assume that there is only one ${\alpha}$-series so that the remaining vectors take the form $\gamma={\beta}+ n_1 {\alpha}$, $\delta= {\beta}+ n_2 {\alpha}$ with integer $n_2 > n_1 >0$. By considering ${\beta}$-series we conclude that $n_1=1$. 
Consider now the $\delta$-series. It is easy to see that covector ${\beta}$ has to form a single series, therefore $({\beta},\delta)=0$ and the covectors ${\beta}+{\alpha}$ and ${\alpha}$ belong to a $\delta$-series. This is possible only if $n_2=2$. Taking now the basis vectors as $e^1={\alpha}$, $e^2={\beta}+ {\alpha}$ we conclude that the system $A$ consists of covectors $e^1$, $e^2$, $e^1 \pm e^2$. The bilinear form takes now the form $$\begin{gathered} G=c_1 x_1^2+c_2 x_2^2 + c_+ (x_1+x_2)^2+ c_- (x_1-x_2)^2\\ \phantom{G}{}=(c_1+c_++c_-) x_1^2 + (c_2+c_+ + c_-) x_2^2 + 2 (c_+-c_-) x_1 x_2,\end{gathered}$$ which has determinant $\Delta= c_1 c_2 +(c_1+c_2)(c_++c_-)+4c_+c_-$. Therefore $$\begin{gathered} \big(e^1,e^1\big)=\Delta^{-1}(c_2+c_++c_-),\qquad \big(e^2,e^2\big)=\Delta^{-1}(c_1+c_++c_-),\nonumber\\ \big(e^1,e^2\big)=\Delta^{-1}(c_--c_+).\label{scpr}\end{gathered}$$ Now we analyze the series relations . The orthogonality $(e^1-e^2, e^1+e^2)=0$ is clearly equivalent to the condition $c_1=c_2$. Then the remaining conditions on $(e^1\pm e^2)$-series are automatically satisfied. The condition for the $e^1$-series has the form $$c_-\big(-e^1+e^2,e^1\big)+ c_2 \big(e^2, e^1\big)+ c_+ \big(e^1+e^2,e^1\big)=0,$$ and it follows from the scalar products . The condition on the $e^2$-series is also satisfied. It is easy to check that relation holds iff $\lambda$ is as stated, hence proposition is proven. \[proposition4\] Let irreducible trigonometric $\vee$-system $A$ consist of five vectors with non-zero multiplicities. Then in the appropriate basis $A_+$ takes the form $e^1$, $2e^1$, $e^2$, $e^1 \pm e^2$ and the corresponding multiplicities $c_1$, ${\widetilde}c_1$, $c_2$, $c_{\pm}$ satisfy $c_+=c_-$ (equivalently, $(e^1,e^2)=0$) and $2{\widetilde}c_1 c_2 = c_+(c_1-c_2)$. The form is then non-degenerate when $\Delta=(c_1+4{\widetilde}c_1 + 2 c_+)(c_2+2c_+)\ne 0$. The corresponding solution of the WDVV equations has the form with $\lambda=\sqrt{2}\Delta (c_2+2c_+)^{-1/2}(c_1+4{\widetilde}c_1)^{-1/2} c_+^{-1/2}$. Proof is obtained by simple analysis of the series conditions . One can firstly establish that $A$ is two-dimensional. Then it is easy to see that $A$ has to contain collinear vectors, and the required form follows. To conclude this section we present a few examples of trigonometric $\vee$-systems on the plane with higher number of vectors. Recall firstly that the positive roots of the root system ${\cal G}_2$ can be written as ${\alpha}$, ${\beta}$, ${\beta}+{\alpha}$, ${\beta}+n {\alpha}$, ${\beta}+(n+1){\alpha}$, $2{\beta}+(n+1){\alpha}$ where $n=2$. One can show that for integer $n>2$ the above vectors never form a trigonometric $\vee$-system, and that for $n=2$ the multiplicities have to satisfy $c_{\alpha}=c_{{\beta}+{\alpha}}=c_{{\beta}+n{\alpha}}$ and $c_{\beta}=c_{2{\beta}+(n+1){\alpha}}=c_{{\beta}+(n+1){\alpha}}$ which is the case of the ${\cal G}_2$ system. There are though some possibilities to extend the ${\cal G}_2$ system. Firstly, one can show that ${\cal G}_2 \cup {\cal A}_2$ where the system ${\cal A}_2$ consists of doubled short roots of ${\cal G}_2$, is a trigonometric $\vee$-system for appropriate multiplicities. Secondly the following proposition takes place. \[proposition5\] Let $A$ consist of the vectors $e_1$, $e_2$, $2e_2$, $\frac12 (e_1 \pm e_2)$, $\frac12 (e_1\pm 3 e_2)$ with the corresponding nonzero multiplicities $c_1$, $c_2$, ${\widetilde}c_2$, $a$, $b$. 
Then $A$ is a trigonometric $\vee$-system iff the multiplicities satisfy the relations $a=3b$, $c_2=a+ 2 {\widetilde}c_2$, $(2c_1+b)c_2=(c_1+2b)a$. Note that in the limiting case ${\widetilde}c_2=0$ we recover the system ${\cal G}_2$ with special multiplicities. An example of trigonometric $\vee$-system with yet higher number of vectors is given by vectors $e_1$, $2e_1$, $e_2$, $2e_2$, $e_1 \pm e_2$, $e_1 \pm 2e_2$, $2e_1 \pm e_2$ where the multiplicities can be chosen appropriately. Relations with generalized Calogero–Moser–Sutherland\ systems ===================================================== Relation between $\vee$-systems and the property of a Schrödinger operator of CMS type to have a factorized eigenfunction was observed by Veselov in [@V3]. Namely, it was shown in [@V3] that if an operator $$L=-\Delta+\sum_{{\alpha}\in A_+} \frac{m_{{\alpha}}(m_{{\alpha}}+1)({\alpha},{\alpha})}{\sin^2 ({\alpha},x)}$$ has a formal eigenfunction $$\psi=\prod_{{\alpha}\in A_+} \sin^{-m_{{\alpha}}}({\alpha},x), \qquad L \psi =\mu \psi,$$ then $F=\sum\limits_{{\alpha}\in A_+} m_{\alpha}({\alpha},x)^2 \log ({\alpha},x)$ satisfies the WDVV equations. The following theorem establishes the converse statement in the case of trigonometric $\vee$-systems. \[theorem2\] Let $A$ be a trigonometric $\vee$-system consisting of pairwise non-collinear covectors ${\alpha}$ with multiplicities $c_{\alpha}$. Then Schrödinger operator $$\begin{gathered} \label{sch}L=-\Delta+\sum_{{\alpha}\in A} \frac{c_{{\alpha}}(c_{{\alpha}}+1)({\alpha},{\alpha})}{\sin^2 {\alpha}(x)}\end{gathered}$$ constructed by the metric has the formal eigenfunction $$\begin{gathered} \label{eig} \psi=\prod_{{\alpha}\in A} \sin^{-c_{{\alpha}}}{\alpha}(x), \qquad L \psi =\mu \psi. \end{gathered}$$ The property $L\psi = \mu \psi$ is equivalent to the identity $$\begin{gathered} \label{iden} \sum_{{\alpha}\ne {\beta}} c_{\alpha}c_{\beta}({\alpha},{\beta}) \cot {\alpha}(x) \cot {\beta}(x)={\rm const}. \end{gathered}$$ To establish the last identity it is sufficient to show that the left-hand side of is non-singular. In other words, we need to show that $$\begin{gathered} \label{trt} \sum_{{\beta}, {\beta}\ne {\alpha}} c_{\beta}({\alpha},{\beta}) \cot {\beta}(x)=0 \end{gathered}$$ if $\cot {\alpha}(x)=0$. The last properties are sufficient to check when summation is taken along arbitrary ${\alpha}$-series, then it is guaranteed by relation . This proves the theorem. \[corollary1\] Assume that function constructed by a set of pairwise non-collinear covectors ${\alpha}$ with multiplicities $c_{\alpha}$ satisfies the WDVV equations . Then relation holds for the Schrödinger operator . Conversely, the property of a Schrödinger operator to have a factorized eigenfunction implies that the corresponding vectors $\sqrt{c_{\alpha}}{\alpha}$ form a rational $\vee$-system [@V3]. This property is also sufficient to obtain the trigonometric $\vee$-system, and the arguments are close to [@V3]. \[theorem3\] Assume that the Schrödinger operator has an eigenfunction . Then the set $A$ of vectors ${\alpha}$ with the multiplicities $c_{\alpha}$ forms the trigonometric $\vee$-system. From equation , it follows identity at $\cot {\alpha}(x)=0$. Therefore for each ${\alpha}$-series $\Gamma_{\alpha}^s$ we have $$\begin{gathered} \label{sersum} \sum_{{\beta}\in \Gamma_{\alpha}^s} c_{\beta}({\alpha},{\beta}) {\alpha}\wedge {\beta}=0. 
\end{gathered}$$ Let ${\beta}^\text{w}$ denote a vector dual to ${\beta}$ with respect to the inner product $(\cdot,\cdot)$ involved in the Schrödinger equation. By summing identities along all the ${\alpha}$-series we conclude that $$\begin{gathered} \label{rav} \sum_{{\beta}\in A} c_{\beta}{\beta}({\alpha}^\text{w}) {\beta}^\text{w} \quad \mbox{is proportional to } {\alpha}^\text{w}. \end{gathered}$$ Now we can decompose the space $V=V_1\oplus\cdots\oplus V_k$ so that the operator $\sum_{{\beta}\in A} c_{\beta}{\beta}\otimes {\beta}^\text{w}$ is equal to constant $\mu_i$ on $V_i$. We can also assume that $(V_i, V_j)=0$ if $i\ne j$. It follows from that $G(\cdot,\cdot)|_{V_i}=\mu_i (\cdot,\cdot)|_{V_i}$. Therefore identities imply $$\sum_{{\beta}\in \Gamma_{\alpha}^s} c_{\beta}{\alpha}({\beta}^\vee) {\alpha}\wedge {\beta}=0$$ which are identities from the definition of the trigonometric $\vee$-systems. \[corollary2\] Assume that the Schrödinger operator has an eigenfunction . Assume also that the system $A$ is irreducible and that for some $\Lambda$ and any $a,b,c,d \in V$ the property $$\begin{gathered} \label{fin} \sum_{{\alpha},{\beta}\in A_+} (\Lambda ({\alpha}, {\beta}) - 1)c_{\alpha}c_{\beta}B_{{\alpha},{\beta}}(a,b) B_{{\alpha},{\beta}}(c,d) =0 \end{gathered}$$ holds. Then the corresponding function with appropriate $\lambda$ satisfies the WDVV equations . \[remqrk4\] The previous corollary also holds for the reducible systems $A$ if we replace the Schrödinger equation metric $({\alpha},{\beta})$ in by the $\vee$-product ${\alpha}({\beta}^\vee)$. In this case $\lambda=2 \sqrt{\Lambda}$. Concluding remarks ================== Trigonometric $\vee$-systems require further investigations. It would be interesting to obtain almost dual prepotentials for the Frobenius manifolds of the affine Weyl groups as well as for their discriminants (cf. rational case [@D2; @FV1]). Comparison with a recent work on the elliptic solutions [@Str] might also be interesting. We also hope that the series conditions would allow understanding and eventually classification of the trigonometric $\vee$-systems. We hope to return to some of these questions soon. Acknowledgements {#acknowledgements .unnumbered} ---------------- I am very grateful to L. Hoevenaars, A. Kirpichnikova, M. Pavlov, I. Strachan and A.P. Veselov for useful and stimulating discussions. The work was partially supported by the EPSRC grant EP/F032889/1, by European research network ENIGMA (contract MRTN-CT-2004-5652), by PMI2 Project funded by the UK Department for Innovation, Universities and Skills for the benefit of the Japanese Higher Education Sector and the UK Higher Education Sector. [99]{} Marshakov A., Mironov A., Morozov A., WDVV-like equations in $N=2$ SUSY Yang–Mills theory, [*Phys. Lett. B*]{} [**389**]{} (1996), 43–52, [hep-th/9607109](http://arxiv.org/abs/hep-th/9607109). Marshakov A., Mironov A., Morozov A., More evidence for the WDVV equations in $N=2$ SUSY Yang–Mills theories, [*Internat. J. Modern Phys. A*]{} [**15**]{} (2000), 1157–1206, [hep-th/9701123](http://arxiv.org/abs/hep-th/9701123). Hoevenaars L.K., Martini R., On the WDVV equations in five-dimensional gauge theories, [*Phys. Lett. B*]{} [**557**]{} (2003), 94–104, [math-ph/0212016](http://arxiv.org/abs/math-ph/0212016). Martini R., Hoevenaars L.K., Trigonometric solutions of the WDVV equations from root systems, [*Lett. Math. Phys.*]{} [**65**]{} (2003), 15–18, [math-ph/0302059](http://arxiv.org/abs/math-ph/0302059). 
Pavlov M., Explicit solutions of the WDVV equation determined by the “flat” hydrodynamic reductions of the Egorov hydrodynamic chains, [nlin.SI/0606008](http://arxiv.org/abs/nlin.SI/0606008). Dubrovin B., Geometry of 2D topological field theories, in Integrable Systems and Quantum Groups (Montecatini Terme, 1993), [*Lecture Notes in Math.*]{}, Vol. 1620, Springer, Berlin, 1996, 120–348, [hep-th/9407018](http://arxiv.org/abs/hep-th/9407018). Dubrovin B., On almost duality for Frobenius manifolds, in Geometry, Topology, and Mathematical Physics, [*Amer. Math. Soc. Transl. Ser. 2*]{}, Vol. 212, Amer. Math. Soc., Providence, RI, 2004, 75–132, [math.DG/0307374](http://arxiv.org/abs/math.DG/0307374). Riley A., Frobenius manifolds: caustic submanifolds and discriminant almost duality, Ph.D. Thesis, Hull University, 2007. Riley A., Strachan I.A.B., A note on the relationship between rational and trigonometric solutions of the WDVV equations, [*J. Nonlinear Math. Phys.*]{} [**14**]{} (2007), 82–94, [nlin.SI/0605005](http://arxiv.org/abs/nlin.SI/0605005). Veselov A.P., Deformations of the root systems and new solutions to generalised WDVV equations, [*Phys. Lett. A*]{} [**261**]{} (1999), 297–302, [hep-th/9902142](http://arxiv.org/abs/hep-th/9902142). Veselov A.P., On generalizations of the Calogero–Moser–Sutherland quantum problem and WDVV equations, [*J. Math. Phys.*]{} [**43**]{} (2002), 5675–5682, [math-ph/0204050](http://arxiv.org/abs/math-ph/0204050). Feigin M.V., Veselov A.P., Logarithmic Frobenius structures and Coxeter discriminants, [*Adv. Math.*]{} [**212**]{} (2007), 143–162, [math-ph/0512095](http://arxiv.org/abs/math-ph/0512095). Feigin M.V., Veselov A.P., On the geometry of $\vee$-systems, in Geometry, Topology, and Mathematical Physics, [*Amer. Math. Soc. Transl. Ser. 2*]{}, Vol. 224, Amer. Math. Soc., Providence, RI, 2008, 111–123, [arXiv:0710.5729](http://arxiv.org/abs/0710.5729). Feigin M.V., Bispectrality for deformed Calogero–Moser–Sutherland systems, [*J. Nonlinear Math. Phys.*]{} [**12**]{} (2005), suppl. 2, 95–136, [math-ph/0503020](http://arxiv.org/abs/math-ph/0503020). Braden H., Marshakov A., Mironov A., Morozov A., Seiberg–Witten theory for a non-trivial compactification from five to four dimensions, [*Phys. Lett. B*]{} [**448**]{} (1999), 195–202, [hep-th/9812078](http://arxiv.org/abs/hep-th/9812078). Strachan I.A.B., Weyl groups and elliptic solutions of the WDVV equations, [arXiv:0802.0388](http://arxiv.org/abs/0802.0388).
---
abstract: 'We explicitly unveil several classes of inner functions $u$ in $\H$ with the property that there is $\eta\in ]0,1[$ such that the level set $\Omega_u(\eta):=\{z\in\D: |u(z)|<\eta\}$ is connected. These so-called one-component inner functions play an important role in operator theory.'
address:
- 'Department of Mathematics, UNC, Chapel Hill, North Carolina, USA'
- |
    Université de Lorraine\
    Département de Mathématiques et Institut Élie Cartan de Lorraine, UMR 7502\
    Ile du Saulcy\
    F-57045 Metz, France
author:
- Joseph Cima
- Raymond Mortini
title: 'One-component inner functions'
---

*Dedicated to the memory of Vadim Tolokonnikov*

Introduction {#introduction .unnumbered}
============

An inner function $u$ in $\H$ is said to be a [*one-component inner function*]{} if there is $\eta\in ]0,1[$ such that the level set (also called sublevel set or filled level set) $\Omega_u(\eta):=\{z\in\D: |u(z)|<\eta\}$ is connected. One-component inner functions, the collection of which we denote by $\mathfrak I_c$, were first studied by B. Cohn [@coh] in connection with embedding theorems and Carleson measures. It was shown there, for instance, that arclength on $\{z\in\D: |u(z)|=\e\}$ is such a measure whenever $$\Omega_u(\eta)=\{z\in\D: |u(z)|<\eta\}$$ is connected and $\eta<\e<1$. A thorough study of the class $\mathfrak I_c$ was given by A.B. Aleksandrov [@alex], who showed the interesting result that $u\in \mathfrak I_c$ if and only if there is a constant $C=C(u)$ such that for all $a\in \D$ $$\sup_{z\in \D}\left| \frac{1-\ov {u(a)} u(z)}{1-\ov a z} \right| \leq C \frac{1-|u(a)|^2}{1-|a|^2}.$$ Many operator-theoretic applications are given in [@alex; @almp; @bes; @bbk]. In our paper here we are interested in explicit examples, which are somewhat lacking in the literature. For example, if $S$ is the atomic inner function, which is given by $$S(z)=\exp\left(- \frac{1+z}{1-z}\right),$$ then all level sets $\Omega_S(\eta)$, $0<\eta<1$, are connected, because these sets coincide with the disks $$\label{horo} D_\eta:=\left\{z\in\D:\left |z- \frac{L}{L+1}\right|<\frac{1}{L+1}\right\},~ L:= \log \frac{1}{\eta},$$ which are tangential to the unit circle at $p=1$.

The scheme of our note here is as follows: in section \[levi\] we prove a general result on level sets which will be the key for our approach to the problem of unveiling classes of one-component inner functions. Then in section \[geo\] we first present several examples by elementary geometric/function-theoretic methods and then we use Aleksandrov’s criterion to achieve this goal. For instance, we prove that $BS, B\circ S$ and $S\circ B$ are in $\mathfrak I_c$ whenever $B$ is a finite Blaschke product. Considered are also interpolating Blaschke products. It will further be shown that, under the supremum norm, $\mathfrak I_c$ is an open subset of the set of all inner functions and multiplicatively closed. In the final section we give counterexamples.

Level sets {#levi}
==========

We begin with a topological property of the class of general level sets. Although statement (1) is “well-known” (the earliest appearance seems to be in [@tsu Theorem VIII, 31]), we could not locate a proof anywhere. The argument that the result is a simple and direct consequence of the maximum principle is, in our viewpoint, not tenable.

\[compos1\] Given a non-constant inner function $u$ in $\H$ and $\eta\in \;]0,1[$, let $\Omega:=\Omega_u(\eta)=\{z\in\D: |u(z)|<\eta\}$ be a level set. Suppose that $\Omega_0$ is a component (=maximal connected subset) of $\Omega$. Then 1.
$\Omega_0$ is a simply connected domain; that is, $\C\setminus \Omega_0$ has no bounded components [^1]. 2. $\inf_{\Omega_0} |u|=0$. We show that (1) holds for every holomorphic function $f$ in $\D$; that is if $\Omega_0$ is a component of the level set $\Omega_f(\eta)$, $\eta>0$, then it is a simply connected domain [^2]. Note that each component $\Omega_0$ of the open set $\Omega_f(\eta)$ is an open subset of $\D$. We may assume that $\eta$ is chosen so that $\{z\in\D: |f(z)|=\eta\}\not=\emp$. Suppose, to the contrary, that $D$ is a bounded component of $\C\setminus \Omega_0$. Note that $D$ is closed in $\C$. Then, necessarily, $D$ is contained in $\D$, because the unique unbounded complementary component of $\Omega_0$ contains $\{z\in \C: |z|\geq 1\}$. Hence $D$ is a compact subset of $\D$. Let $G:=\Omega_0^*$ be the simply-connected hull of $\Omega_0$; that is the union of $\Omega_0$ with all bounded complementary components of $\Omega_0$. Note that $G$ is open because it coincides with the complement of the unique unbounded complementary component of $\Omega_0$. Then, by definition of the simply connected hull, $D\ss G$. Now if $H$ is any bounded complementary component of $\Omega_0$ then (as it was the case for $D$) $H$ is a compact subset of $\D$ and so $\partial H\ss \D$. Moreover, $$\label{hcompo} \partial H\ss\partial\Omega_0.$$ In fact, given $z_0\in\partial H$, let $U$ be a disk centered at $z_0$. Then $U\inter \Omega_0\not=\emp$, since otherwise $U\union H$ would be a connected set strictly bigger than $H$ and contained in the complement of $\Omega_0$; a contradiction to the maximality of $H$. Since $z_0\in \partial H\ss H\ss \C\setminus \Omega_0$, we conclude that $z_0\in \partial \Omega_0$. Now $\partial H\ss\partial\Omega_0$ and $\Omega_0\ss \Omega_f(\eta)$ imply that $|f|\leq \eta$ on $\partial H$, and so, by the maximum principle, $|f|\leq \eta$ on $H$. Consequently, $|f|\leq \eta$ on $G$. By the local maximum principle, $|f|<\eta$ on $G$. Since $\partial D\ss D\ss G$, $$\label{klein-auf-rand} \mbox{$ |f|<\eta$ on $\partial D$}.$$ On the other hand, $$\label{rand-zu-rand} \partial D\buildrel\ss_{}^{{(\ref{hcompo})}} \partial \Omega_0\inter \D\ss \{z\in \D: |f(z)|=\eta\}.$$ Note that the second inclusion follows from the fact that if $|f(z_0)|<\eta$ for $z_0\in \partial \Omega_0\inter \D$, then $\Omega_0$ would no longer be a maximal connected subset of $\Omega_f(\eta)$. Hence $|f|=\eta$ on $\partial D$. This is a contradiction to [(\[klein-auf-rand\])]{}. Thus $\Omega_0$ is a simply connected domain. \(2) If $\ov\Omega_0\ss \D$, then, due to $\partial \Omega_0\ss \{z\in \D: |u(z)|=\eta\}$, we obtain from the minimum principle that $u$ must have a zero in $\Omega_0$. Now let $E:=\ov \Omega_0\inter \partial\D\not=\emp$. In view of achieving a contradiction, suppose that $u$ is bounded away from zero in $\Omega_0$. Then $1/|u|$ is subharmonic and bounded in $\Omega_0$ and $$\limsup_{\xi\to x\atop x\in \partial \Omega_0\setminus E}|u(\xi)|^{-1}=\eta^{-1}.$$ Since $u$ is an inner function, $E$ has linear measure zero (by [@ber Theorem 4.2]). The maximum principle for subharmonic functions with few exceptional points (here on the set $E$; see [@be-co] or [@gar]), now implies that $|u|^{-1}\leq \eta^{-1}$ on $\Omega_0$. But $|u|<\eta$ on $\Omega$ is a contradiction. We conclude that $\inf_{\Omega_0}|u|=0$. [@coh]\[com-bi\] Let $u$ be an inner function. Then the connectedness of $\Omega_u(\eta)$ implies the one of $\Omega_u(\eta')$ for every $\eta'>\eta$. 
Because $\Omega_u(\eta)$ is connected and $\Omega_u(\eta)\ss \Omega_u(\eta')$, $\Omega_u(\eta)$ is contained in a unique component $U_1(\eta')$ of $\Omega_u(\eta')$. Suppose that $U_0(\eta')$ is a second component of $\Omega_u(\eta')$. Then $|u|\geq \eta$ on $U_0(\eta')$, because $U_0(\eta')$ is disjoint with $U_1(\eta')$ and hence with $\Omega_u(\eta)$. By Lemma \[compos1\] though, $\inf_{U_0(\eta')} |u|=0$; a contradiction. Thus $\Omega_u(\eta')$ is connected. Explicit examples of one-component inner functions {#geo} ================================================== Let $$\rho(z,w)=\left|\frac{z-w}{1-\ov zw}\right|$$ be the pseudohyperbolic distance of $z$ to $w$ in $\D$ and $$D_\rho(z_0,r)=\{z\in\D: \rho(z,z_0)<r\}$$ the associated disks, $0<r<1$. Here is a first class of examples of functions in $\mathfrak I_c$. Although the next Proposition must be known (in view of A.B. Aleksandrov’s criterion [@alex]), see \[alexi\] below), we include a simple geometric proof for the reader’s convenience. \[fbp\] Let $B$ be a finite Blaschke product. Then $B\in \mathfrak I_c$. Denote by $z_1,\dots,z_N$ the zeros of $B$, multiplicities included. Let $\eta\in \;]0,1[$ be chosen so close to 1 that $G:=\Union_{n=1}^N D_\rho(z_n,\eta)$ is connected (for example by choosing $\eta$ so that $z_j\in D_\rho(z_1,\eta)$ for all $j$). Now $$G\ss \{z\in \D: |B(z)|<\eta\}=\Omega_B(\eta),$$ because $z\in G$ implies that for some $n$, $$|B(z)|=\rho(B(z), B(z_n))\leq \rho(z,z_n)<\sigma.$$ Since $G$ is connected, there is a unique component $\Omega_1$ of $\Omega$ containing $G$. In particular, $Z(B)\ss G\ss \Omega_1$. If, in view of achieving a contradiction, we suppose that $\Omega:=\Omega_B(\eta)$ is not connected, there is a component $\Omega_0$ of $\Omega$ which is disjoint with $\Omega_1$, and so with $G$. In particular, $$\label{consta} \rho(z, Z(B))\geq \sigma\;\; \text{for every}\;\; z\in \Omega_0.$$ Since $\ov \Omega_0 \ss \ov{\Omega_B(\eta)}\ss \D$, and $|B|=\eta$ on $\partial \Omega_0$, we deduce from the minimum principle that $\Omega_0$ contains a zero of $B$; a contradiction. We now generalize this result to a class of s. Recall that a  $b$ with zero set/sequence $\{z_n:n\in \N\}$ is said to be an  if $\delta(b):=\inf (1-|z_n|^2)|b'(z_n)|>0$. If $b$ is an   then, for small $\e$, the pseudohyperbolic disks $$D_\rho(z_n,r)=\{z\in\D: \rho(z,z_n)<\e\}$$ are pairwise disjoint. Moreover, by Hoffman’s Lemma (see below and also [@kl]), for any $\eta\in ]0,1[$, $b$ is bounded away from zero on $\{z\in\D: \rho(z,Z(b))\geq \eta\}$. \[lemm\] ([@ho] p. 86, 106 and [@ga] p. 404, 310). Let $\delta, \eta$ and $\epsilon$ be real numbers, called Hoffman constants, satisfying $0 <~\delta <~1$, $ 0 < \eta < (1-\sqrt {1-\delta ^2})/ \delta $, (that is, $0 < \eta < \rho(\delta,\eta)$) and $$0 < \e < \eta \frac{\delta - \eta}{1- \delta \eta}.$$ If $B$ is any  with zeros $\{z_n:n\in \N\}$ such that $$\delta(B) = \inf_{n \in \Bbb N} (1- | z_n |^2) | B'(z_n)| \geq \delta,$$ then 1. the pseudohyperbolic disks $D_\rho(a,\eta)$ for $a\in Z(B)$ are pairwise disjoint. 2. The following inclusions hold: $$\{z \in \D: |B(z)| < \e \} \ss \{z \in \D: \rho(z, Z(B)) < \eta\} \ss \{z \in \D: |B(z)| < \eta \}.$$ A large class of s which are one-component inner functions now is given in the following result. \[one-co\] Let $b$ be an  with zero set $\{z_n:n\in \N\}$. Suppose that for some $\sigma\in \;]0,1[$ the set $$G:=\Union_n D_\rho(z_n,\sigma)$$ is connected. Then $b$ is a one-component inner function. 
This holds in particular if $\rho(z_n,z_{n+1})<\sigma<1$ for all $n$; for example if $z_n=1-2^{-n}$. As in the proof of Proposition \[fbp\], $$G\ss \{z\in \D: |b(z)|<\sigma\}=:\Omega.$$ Since $G$ is assumed to be connected, there is a unique component $\Omega_1$ of $\Omega$ containing $G$. In particular, $Z(b)\ss G\ss \Omega_1$. Now, if we suppose that $\Omega$ is not connected, then there is a component $\Omega_0$ of $\Omega$ which is disjoint with $\Omega_1$, and so with $G$. In particular, $$\label{consti} \rho(z, Z(b))\geq \sigma\;\; \text{for every}\;\; z\in \Omega_0.$$ Let $\delta:=\delta(b)$, $$0 < \eta < \min\{(1-\sqrt {1-\delta ^2})/ \delta, \sigma\},$$ $$0 < \e < \eta \frac{\delta - \eta}{1- \delta \eta}.$$ By Lemma \[compos1\], $\inf_{\Omega_0}|b|=0$. Thus, there is $z_0\in \Omega_0$ such that $|b(z_0)|<\e$. We deduce from Hoffman’s Lemma \[lemm\] that $\rho(z_0,Z(b))<\eta<\sigma$. This is a contradiction to [(\[consti\])]{}. We conclude that $\Omega$ must be connected. It is clear that the condition $\rho(z_n,z_{n+1})<\sigma$ for every $n$ implies that $\Union_n D_\rho(z_n,\sigma)$ is connected. For the rest, just note that $$\rho(1-2^{-n}, 1-2^{-n-1})=\frac{2^{-n}-2^{-n-1}}{2^{-n}+2^{-n-1}-2^{-n}2^{-n-1}}=\frac{1}{3 -2^{-n}}.$$ \[radi\] Let $B$ be a Blaschke product with increasing real zeros $x_n$ satisfying $$0<\eta_1\leq\rho(x_n, x_{n+1})\leq \eta_2<1.$$ Then $B\in \mathfrak I_c$. Just use Theorem \[one-co\] and the fact that by the Vinogradov-Hayman-Newman theorem, $B$ is interpolating if and only if $$\sup_n\frac{1-x_{n+1}}{1-x_n}\leq s<1$$ or equivalently $$\inf_n\rho(x_n,x_{n+1})\geq r>0.$$ Using a result of Kam-Fook Tse [@tse], telling us that a sequence $(z_n)$ of points contained in a Stolz angle (or cone) $\{z\in \D: |1-z|< C (1-|z|)\}$ is interpolating if and only if it is separated (meaning that $\inf_{n\not =m}\rho(z_n,z_m)>0$), we obtain: \[kft\] Let $B$ be a Blaschke product whose zeros $(z_n)$ are contained in a Stolz angle and are separated. Suppose that $\rho(z_n,z_{n+1})\leq \eta<1$. Then $B\in \mathfrak I_c$. Similarly, using a result by M. Weiss [@we Theorem 3.6] and its refinement in [@bor Theorem B], we also obtain the following assertion for sequences that may be tangential at 1 (see also Wortman [@wo]). \[we-wo\] Let $B$ be a Blaschke product whose zeros $z_n=r_ne^{i\theta_n}$ satisfy: $$r_n<r_{n+1}, ~ \theta_{n+1}< \theta_n,$$ $$r_n\nearrow 1,~ \theta_n\searrow 0,$$ $$\label{wort} 0<\eta_1\leq \rho(z_n,z_{n+1})\leq \eta_2<1.$$ Then $B$ is an interpolating Blaschke product contained in $\mathfrak I_c$. This holds in particular if the zeros are located on a convex curve in $\D$ with endpoint $1$ and satisfying [(\[wort\])]{}. Other classes of this type can be deduced from [@gw]. Here are two explicit examples of interpolating Blaschke products in $\mathfrak I_c$ whose zeros are given by iteration of the zero of a hyperbolic, respectively parabolic automorphism of $\D$. These functions appear, for instance, in the context of isometries on the Hardy space $H^p$ (see [@cw]). $\bullet$   Let $\dis \varphi(z)=\frac{z-1/2}{1- (1/2) z}$. Then $\varphi$ is a hyperbolic automorphism with fixed points $\pm 1$.
If $\varphi_j:=\underbrace{\varphi\circ\cdots\circ \varphi}_{j-{\rm times}}$, then $\varphi_j\in {\rm Aut}(\D)$ and vanishes exactly at the point $$x_j:=\frac{3^j-1}{3^j+1}= 1-\frac{2}{3^j+1}.$$ This can readily be seen by using that $x_{j+1}=\varphi^{-1}(x_j)$ and $$\varphi_{j+1}(z) = (\varphi_ j\circ \varphi )(z)=\frac {z- \frac{\frac{1}{2}+x_j} {1+ \frac{1}{2}x_j}}{1-z \frac{\frac{1}{2}+x_j} {1+ \frac{1}{2}x_j}}.$$ Since $$\rho(x_j,x_{j+1})=\frac{3^{j+1}-3^j}{3^{j+1}+ 3^j}=\frac{1}{2},$$ we deduce from Corollary \[radi\] that the   $$B(z):=\prod_{j=1}^\infty \frac{x_j-z}{1-x_j z}$$ associated with these zeros is in $\mathfrak I_c$.\ $\bullet$ Let $\sigma\in {\rm Aut}(\D)$ and $\tau=\sigma\circ \varphi\circ \sigma^{-1}$. Then $\tau$ is also a hyperbolic automorphism fixing the points $\sigma(\pm 1)$, and where $\xi:=\sigma(1)$ is the Denjoy-Wolff point with $\tau'(\xi)<1$. The zeros of the $n$-th iterate $\tau_n$ of $\tau$ are given by $$z_n=\tau_n^{-1}(0)=(\sigma\circ \varphi^{-1}_n\circ \sigma^{-1})(0).$$ By the grand iteration theorem [@sh p.78], since $1$ is an attracting fixpoint with $(\varphi^{-1})'(1)= 1/3<1$, the sequence $(\varphi^{-1}_n(\sigma^{-1}(0)))$ converges nontangentially to $1$. Hence the points $z_n$ are located in a cone with cusp at $\xi$. Moreover, if $n>k$ and $a=\sigma^{-1}(0)$, $$\begin{aligned} \rho(z_n,z_k)&=& \rho\big( (\varphi^{-1}_n\circ \sigma^{-1})(0), (\varphi^{-1}_k\circ \sigma^{-1})(0)\big)\\ &=& \rho\big(\varphi^{-1}_{n-k}(a), a\big)$$ Thus, $\rho(z_n,z_{n+1})=\rho (\varphi(a), a)$ for all $n$ and $\inf_{n\not=k}\rho(z_n,z_k)>0$. Now $(z_n)$ is a Blaschke sequence [^3] ([@sh Ex. 6, p. 85]); in fact, use d’Alembert’s quotient criterion and observe that by the Denjoy-Wolff theorem, $$\frac{1-|z_{n+1}|}{1-|z_n|}= \frac{1-|\tau^{-1}(z_n)|}{1-|z_n|}\to (\tau^{-1})'(\xi)<1.$$ Hence, by Corollary \[kft\], $(z_n)$ is an interpolating sequence (see also [@cm p.80]) and the associated   $b=\prod_{n=1}^\infty e^{i\theta_n} \tau_n$ belongs to $\mathfrak I_c$ (here $\theta_n$ is chosen so that the $n$-th Blaschke factor is positive at the origin). $\bullet$   Let $\dis \psi(z)=i\; \frac{z- \frac{1+i}{2}}{1- \frac{1-i}{2}\; z}$. Then $\psi$ is a parabolic automorphism with attracting fixed point $1$. The automorphism $\psi$ is conjugated to the translation $w\mapsto w+2$ on the upper half-plane (see figure \[itera\]) via the map $M(z)= i (1+z)/(1-z)$ and $\psi_n=M^{-1}\circ T_n\circ M$. The zeros of the $n$-th iterate $\psi_n$ of $\psi$ are given by $$z_n=\frac {n}{n-i};$$ just use that $z_n=(M^{-1}\circ T_n^{-1}\circ M)(0)$. These zeros satisfy $\left|z_n-\frac{1}{2}\right|=\frac{1}{2}$ and of course also the Blaschke condition $\sum_{n=1}^\infty 1-|z_n|^2<\infty$. Moreover, $$\rho(z_n,z_{n+1})=\frac{1}{\sqrt{2}}.$$ Thus, by, Corollary \[we-wo\], the  associated with these zeros is interpolating and belongs to $\mathfrak I_c$.\ \[ibp-mul\] Let $B$ be a finite  or an  with real zeros clustering at $p=1$. Then $f:=BS\in \mathfrak I_c$. i\) Let $B$ be a finite . Chose $\eta\in \;]0,1[$ so close to 1 that the disk $D_\eta$ in [(\[horo\])]{}, which coincides with the level set $\Omega_S(\eta)$, contains all zeros of $B$. Now $D_\eta=\Omega_S(\eta)\ss \Omega_f(\eta)$. Now $\Omega_f(\eta)$ must be connected, since otherwise there would be a component $\Omega_0$ of $\Omega_f(\eta)$ disjoint from the component $\Omega_1$ containing $D_\eta$. But $f$ is bounded away from zero outside $D_\eta$; hence $f=BS$ is bounded away from zero on $\Omega_0$. 
This is a contradiction to Lemma \[compos1\] (2). ii\) If $B$ is an  with zeros $(z_n)$, then, by Hoffman’s Lemma \[lemm\], $B$ is bounded away from zero outside $R:=\Union D_\rho(z_n,\e)$ for every $\e\in \;]0,1[$. Now, if the zeros of $B$ are real, and bigger than $-\sigma$ for some $\sigma\in ]0,1[$, this set $R$ is contained in a cone with cusp at 1 and aperture-angle strictly less than $\pi$ (see for instance [@mru]). Hence $R$ is contained in $D_\eta$ for all $\eta$ close to $1$. Thus, as above, we can deduce that $\Omega_{BS}(\eta)$ is connected. The previous result shows, in particular, that certain non one-component inner functions (for example a thin  with positive zeros, see Corollary \[thin\]), can be multiplied by a one-component inner function into $\mathfrak I_c$. In particular, $uv\in \mathfrak I_c$ does not imply that $u$ and $v$ belong to $\mathfrak I_c$. The reciprocal, though, is true: that is $\mathfrak I_c$ itself is stable under multiplication, as we are going to show below. \[prodi\] Let $u,v$ be two inner functions in $\mathfrak I_c$. Then $uv\in\mathfrak I_c$. Let $\Omega_u(\eta)$ and $\Omega_v(\eta')$ be two connected level sets. Due to monotonicity (Lemma \[com-bi\]), and the fact that $\Union_{\lambda\in [\lambda_0,1[} \Omega_f (\lambda)=\D$, we may assume that $\sigma$ satisfies $$\max\{\eta,\eta'\}\leq \sigma<1$$ and is so close to 1 that $0\in \Omega_u(\sigma)\inter \Omega_v(\sigma)\not=\emp$. Hence $U:=\Omega_u(\sigma)\union \Omega_v(\sigma)$ is connected. Now $$\Omega_u(\sigma)\union \Omega_v(\sigma)\ss\Omega_{uv}(\sigma).$$ If we suppose that $\Omega_{uv}(\sigma)$ is not connected, then there is a component $\Omega_0$ of $\Omega_{uv}(\sigma)$ which is disjoint from $U$. In particular, $u$ and $v$ are bounded away from zero on $\Omega_0$. This contradicts Lemma \[compos1\] (2). Hence $\Omega_{uv}(\sigma)$ is connected and so $uv\in \mathfrak I_c$. The set of one-component inner functions is open inside the set of all inner functions (with respect to the uniform norm topoplogy). Let $u\in \mathfrak I_c$. Then, by Lemma \[com-bi\], $\Omega_u(\eta)$ is connected for all $\eta\in [\eta_0,1[$. Choose $0<\e<\min\{\eta, 1-\eta\}$ and let $\Theta$ be an inner function with $||u-\Theta||<\e$. We claim that $\Theta\in\mathfrak I_c$, too. To this end we note that $$\Omega_{\Theta}(\eta-\e)\ss \Omega_u(\eta)\ss \Omega_\Theta(\eta+\e),$$ where $0<\eta-\e <\eta+\e<1$. As usual, if we suppose that $\Omega_{\Theta}(\eta+\e)$ is not connected, then there is a component $\Omega_0$ of $\Omega_{\Theta}(\eta+\e)$ which is disjoint from the connected set $\Omega_u(\eta)$, hence disjoint with $\Omega_{\Theta}(\eta-\e)$. In other words, $|\Theta|\geq \eta-\e>0$ on $\Omega_0$. This contradicts Lemma \[compos1\] (2). Hence $\Omega_\Theta(\eta+\e)$ is connected and so $\Theta\in\mathfrak I_c$. Next we look at right-compositions of $S$ with finite s. We first deal with the case where $B(z)=z^2$. The function $S(z^2)$ is a one-component inner function. Let $\Omega_S(\eta)$ be the $\eta$-level set of $S$. Then, as we have already seen, this is a disk tangent to the unit circle at the point 1. We may choose $0<\eta<1$ so close to $1$ that $0$ belongs to $\Omega_S(\eta)$. Let $U=\Omega_S(\eta)\setminus ]-\infty, 0]$. Then $U$ is a simply connected slitted disk on which exists a holomorphic square root $q$ of $z$. The image of $U$ under $q$ is a simply connected domain $V$ in the semi-disk $\{z: |z|<1, {\rm Re}\; z>0\}$. Let $ V^*$ be its reflection along the imginary axis. 
Then $E:=\ov{V^*\union V }$ is mapped by $z^2$ onto the closed disk $\ov{\Omega_S(\eta)}$. Then $E\setminus \partial E$ coincides with $\Omega_{S(z^2)}(\eta)$. Using Aleksandrov’s criterion (see below), we can extend this by replacing $z^2$ with any finite . Recall that the spectrum $\rho(\Theta)$ of an inner function $\Theta$ is the set of all boundary points $\zeta$ for which $\Theta$ does not admit a holomorphic extension; or equivalently, for which $Cl(\Theta,\zeta)=\ov\D$, where $$\mbox{$Cl(\Theta,\zeta)=\{w\in\C: \exists (z_n)\in\D^\N,~ \lim z_n=\zeta$ and $ \lim \Theta(z_n)=w\}$}$$ is the cluster set of $\Theta$ at $\zeta$ (see [@ga p. 80]). \[alexi\][@alex Theorem 1.11 and Remark 2, p. 2915] Let $\Theta$ be an inner function. The following assertions are equivalent: 1. $\Theta\in \mathfrak I_c$. 2. There is a constant $C>0$ such that for every $\zeta\in \T\setminus \rho(\Theta)$ we have $$i)~~~ |\Theta '' (\zeta)|\leq C\; |\Theta' (\zeta)|^2,$$ and $$ii)~~~ \mbox{$\liminf_{r\to 1} |\Theta(r\zeta)|<1$ for all $\zeta\in \rho(\Theta)$}.$$ Note that, due to this theorem, $\Theta\in \mathfrak I_c$ necessarily implies that $\rho(\Theta)$ has measure zero. \[sing\] Let $B$ be a finite . Then $S\circ B\in \mathfrak I_c$. Let us note first that $\rho(S\circ B)=B^{-1}(\{1\})$. Since the derivative of $B$ on the boundary never vanishes (due to $$\label{deri} z\frac{B'(z)}{B(z)}=\sum_{n=1}^N \frac{1-|a_n|^2}{|a_n-z|^2}, ~~|z|=1, B(a_n)=0,)$$ $B$ is schlicht in a neighborhood of $1$. The angle conservation law now implies that for $\zeta\in B^{-1}(1)$ the curve $r\mapsto B(r\zeta)$ stays in a Stolz angle at 1 in the image space of $B$. Hence $\liminf_{r\to 1} S(B(r \zeta))=0$ for $\zeta\in \rho(S\circ B)$. Now let us calculate the derivatives: $$S'(z)=-S(z) \frac{2}{(1-z)^2},$$ $$S''(z)= S(z)\left[ \frac{4}{(1-z)^4}- \frac{4}{(1-z)^3}\right],$$ $$(S\circ B)'= (S'\circ B) B'$$ $$(S\circ B)'' =(S''\circ B) B'^2 + (S'\circ B) B''$$ $$\begin{aligned} \label{compos} A:=\frac{(S\circ B)''}{[(S\circ B)']^2}&=&\frac{S''\circ B}{(S'\circ B)^2}+ \frac{(S'\circ B)}{(S'\circ B)^2}\frac{B''}{B'^2}\\\nonumber &=&\frac{S''\circ B}{(S'\circ B)^2}+ \frac{1}{S'\circ B}\frac{B''}{B'^2}.\end{aligned}$$ Hence, for $\zeta\in \T\setminus \rho(S\circ B)$, $|B(\zeta)|=1$ , but $\xi:=B(\zeta)\not=1$, and so, by [(\[deri\])]{}, $$\begin{aligned} |A(\zeta)|&\leq& \sup_{\xi\not=1} \frac{|S''(\xi)|}{|S'(\xi)|^2}+ 2\sup_{\xi\not=1}\frac{|1-\xi|^2}{|S(\xi)|}\; C \\ &\leq& C' \sup_{\xi\not=1} \frac{|1-\xi|^4}{|1-\xi|^4} +8C<\infty.\end{aligned}$$ Let $S_\mu$ be a singular inner function with finite spectrum $\rho(S_\mu)$. Then $S_\mu\in \mathfrak I_c$. Since $S$ is the universal covering map of $\D\setminus \{0\}$, each singular inner function $S_\mu$ writes as $S_\mu=S\circ v$ for some inner function $v$. Since $\rho(S_\mu)$ is finite, $v$ necessarily is a finite . (This can also be seen from [@glmr Proof of Theorem 2.2]). The assertion now follows from Proposition \[sing\]. Note that this result also follows in an elementary way from Proposition \[prodi\] and the fact that every such $S_\mu$ is a finite product of powers of the atomic inner function $S$. We now consider left-compositions with finite Blaschke products. \[frost\] Let $\Theta$ be a one-component inner function. Then each Frostman shift $(a-\Theta)/(1-\ov a \Theta)\in \mathfrak I_c$, too. Here $a\in \D$. Let $\tau(z)=(a-z)/(1-\ov az)$. Then $\rho(\tau\circ \Theta)=\rho(\Theta)$. 
As above, $$\liminf_{r\to 1}|\tau\circ \Theta(r\zeta)|<1$$ for every $\zeta\in \rho(\tau\circ\Theta)$. Now $$\tau(z)= \frac{1}{\ov a}+ \frac{|a|^2-1}{\ov a} \frac{1}{1-\ov a z},$$ from which we easily deduce the first and second derivatives. By using the formulas \[compos\], we obtain $$\begin{aligned} A:=\left|\frac{(\tau\circ \Theta)''}{[(\tau\circ \Theta)']^2}\right|&\leq &C\frac{|1-\ov a \Theta|^4} {|1-\ov a\Theta|^3}+ C' |1-\ov a \Theta|^2 \; \frac{|\Theta''|}{|\Theta'|^2}.\end{aligned}$$ Hence, the assumption $\Theta\in \mathfrak I_c$ now yields (via Aleksandrov’s criterion \[alexi\]) that $\sup_{\zeta\in \rho(\tau\circ \Theta)} A(\zeta)<\infty$. Thus $\tau\circ\Theta\in \mathfrak I_c$. Given $a\in\D\setminus \{0\}$, the interpolating Blaschke products\ $(S-a)/(1-\ov a S)$ belong to $\mathfrak I_c$. This also follows from Corollary \[we-wo\] by noticing that the $a$-points of $S$ are located on a disk tangent at $1$ and that the pseudohyperbolic distance between two consecutive ones is constant (see [@mo]). There it is also shown that the Frostman shift $(S-a)/(1-\ov a S)$ is an . Let $B$ be a finite Blaschke product and $\Theta\in \mathfrak I_c$. Then $B\circ \Theta\in \mathfrak I_c$. This is a combination of Propositions \[frost\] and \[prodi\]. Inner functions not belonging to $\mathfrak I_c$ {#nonic} ================================================ Here we present a class of Blaschke products that are not one-component inner functions. Recall that a Blaschke product $b$ with zero-sequence $(z_n)$ is [*thin*]{} if $$\lim_n\prod_{k\not=n} \rho(z_k,z_n)=\lim_{n\to 1} (1-|z_n|^2)|b'(z_n)|=1.$$ It was shown by Tolokonnikov [@to Theorem 2.3] that $b$ is thin if and only if $$\lim_{|z|\to 1} (|b(z)|^2+(1-|z|^2)|b'(z)|)=1.$$ \[thin\] Thin Blaschke products are never one-component inner functions. Let $\e\in \;]0,1[$ be arbitrary close to $1$. Choose $\eta>0$ and $\delta>0$ so close to 1 so that $$\mbox{$\e<\eta^2$ and $\dis \eta < (1-\sqrt {1-\delta ^2})/ \delta$}.$$ By deleting finitely many zeros, say $z_1,\dots, z_N$ of $b$, we obtain a tail $b_N$ such that $(1-|z_n|^2)|b_N'(z_n)|\geq \delta$ for every $n>N$. Hence, by Theorem \[lemm\], $$\label{bn} \{z \in \D: |b_N(z)| < \e \} \ss \{z \in \D: \rho(z, Z(b_N)) < \eta\}$$ and the disks $D(z_n,\eta)$ are pairwise disjoint. This implies that the level set $\{z \in \D: |b_N(z)| < \e \}$ is not connected. Now choose $r$ so close to 1 that $$p(z):=\prod_{n=1}^N \rho(z,z_n)\geq \e$$ for every $z$ with $r\leq |z|<1$. We show that the level set $\{|b|<\e^2\}$ is not connected. In fact, for some $r\leq |z|<1$ we have $|b(z)|<\e^2$, then $$|b_N(z)|= \frac{|b(z)|}{|p(z)|}<\frac{\e^2}{\e}=\e.$$ Hence $$\{z: r<|z|<1, |b(z)|<\e^2\}\ss \{|b_N(z)|<\e\}\buildrel\ss_{}^{{(\ref{bn})}} \Union_{n>N} D(z_n,\eta).$$ Since the disks $D_\rho(z_n,\eta)$ are pairwise disjoint if $n>N$, we are done. \[2bps\] No finite product $B$ of thin s belongs to $\mathfrak I_c$. Let $\e\in \;]0,1[$ be arbitrary close to 1. By Corollary \[thin\], if $b_j$, $(j=1,2)$, are two thin s with zero-sequence $(z_n^{(j)})_n$, $$\Omega_{b_j}(\e)\ss \Union_{n=1}^\infty D_\rho(z^{(j)}_n,\eta)$$ for suitable $\eta$, the disks $D_\rho(z^{(j)}_n,\eta)$, being pairwise disjoint for $n$ large. Since $\lim_n\rho(z_n^{(j)}, z_{n+1}^{(j)})=1$, we see that a disk $D_\rho(z^{(1)}_n,\eta)$ can meet at most one disk $D_\rho(z^{(2)}_m,\eta)$ for $n$ large. 
Hence $$\Omega_{b_1b_2}(\e^2)\ss \Union_{j=1}^2\Union_{n=1}^\infty D_\rho(z^{(j)}_n,\eta),$$ where the set on the right hand side obviously is disonnected. The general case works via induction. The conditions $$\label{no-one-co} \eta^*:=\sup_{n\in \N}\rho(z_n, Z(b)\setminus \{z_n\})<1,$$ or equivalently $$\label{no-one-coco} \mbox{$D(z_n,\eta)\inter \Union_{m\not= n}D(z_m, \eta)\not=\emp$ for some $\eta\in ]0,1[$},$$ are not sufficient to guarantee that the  $b$ is a one-component inner function. Take $z_{2n}=1-n^{-n}$ and $z_{2n+1}=1-(n^{-n}+n^{-n})$. Then $(z_{2n})$ and $(z_{2n+1})$ are (thin) interpolating sequences by [@gm Corollary 2.4]. Using with $a=n^{-n}$ and $b=2a$ the identity $$\rho(1-a,1-b)=\frac{|a-b|}{a+b-ab},$$ we conclude that $$\rho(z_{2n}, z_{2n+1})= \frac{n^{-n}}{1-z_{2n} z_{2n+1}}\to 1/3,$$ and so the union $(z_n)$ is an interpolating sequence satisfying [(\[no-one-coco\])]{}. By Corollary \[2bps\], the  formed with the zero-sequence $(z_n)$ is not in $\mathfrak I_c$. Using the following theorem in [@ber], we can exclude a much larger class of Blaschke products from being one-component inner functions: Let $u$ be an inner function. Then, for every $\e\in \;]0,1[$, all the components of the level sets $\{z\in \C: |u(z)|<\e\}$ have compact closures in $\D$ if and only if $u$ is a Blaschke product and $$\mbox{$\limsup_{r\to 1} |u(r\xi)|=1$ for every $\xi\in \T$}.$$ In particular this condition is satisfied by finite products of thin Blaschke products (see [@gomo Proposition 2.2]) as well as by the class of uniform Frostman Blaschke products $$\sup_{\xi\in \T} \sum_{n=1}^\infty \frac{1-|z_n|^2}{|\xi-z_n|}<\infty.$$ Note that this Frostman condition implies that the associated  has radial limits of modulus one everywhere [@cl p. 33]. As a byproduct of Theorem \[one-co\] we therefore obtain If $b$ is a uniform Frostman Blaschke product with zeros $(z_n)$ clustering at a single point, then $\limsup_\rho(z_n,z_{n+1})=1$. [To conclude, we would like to ask two questions and present three problems]{}: 1. Can every inner function $u$ whose boundary spectrum $\rho(u)$ has measure zero, be multiplied by a one-component inner function into $\mathfrak I_c$? 2. Let $S_\mu$ be a singular inner function with countable spectrum. Give a characterization of those measures $\mu$ such that $S_\mu\in \mathfrak I_c$. Do the same for singular continuous measures. 3. In terms of the zeros, give a characterization of those s that belong to $\mathfrak I_c$. 4. Does the  $B$ with zeros $z_n=1-n^{-2}$ belong to $\mathfrak I_c$? Acknowledgement ================ We thank Rudolf Rupp and Robert Burckel for their valuable comments concerning Lemma \[compos1\] (1), the proof of which was orginally developed for the upcoming monograph [@moru]. [99]{} A.B. Aleksandrov, On embedding theorems for coinvariant subspaces of the shift operator. II, J. Math. Sciences 110 (2002), 2907–2929. A. Aleman, Y. Lyubarskii, E. Malinnikova, K.-M. Perfekt, Trace ideal criteria for embeddings and composition operators on model spaces, J. Funct. Anal. 270 (2016), 861–883. A. Baranov, R. Bessonov, V. Kapustin, Symbols of truncated Toeplitz operators, J. Funct. Anal. 261 (2011), 3437–3456. C.L. Belna, S.A. Obaid, D.C. Rung, Geometric conditions for interpolation, Proc. Amer. Math. Soc. 88 (1983), 469–475. R. Berman, The level sets of the moduli of functions of bounded characteristic, Trans. Amer. Math. Soc. 281 (1984), 725–744. R. Berman, W. S. Cohn, Phragmén-Lindelöf theorems for subharmonic functions on the unit disk, Math. 
Scand. 62 (1988), 269–293. R.V. Bessonov, Fredholmness and compactness of truncated Toeplitz and Hankel operators, Integral Equations Oper. Theory 82 (2015), 451–467. J. Cima, W. Wogen, Isometric equivalence of isometries on $H^p$, Proc. Amer. Math. Soc. 144 (2016), 4887–4898.s E. Collingwood, A. Lohwater, Theory of cluster sets, Cambridge Univ. Press, 1966. , Carleson measures for functions orthogonal to invariant subspaces, Pacific J. Math. 103 (1982), 347–364. C.Cowen, B.MacCluer, Composition operators on spaces of analytic functions, CRC-Press, New York, 1995. S. Gardiner, Asymptotic maximum principles for subharmonic functions, Comp. Meth. Funct. Theory 8 (2008), 167–172. , *Bounded Analytic Functions*, Academic Press, New York, 1981. E.A. Gerber, M.L. Weiss, Interpolation, trivial and non-trivial homomorphisms in $\H$, J. Math. Soc. Japan 34 (1982), 173–185. P. Gorkin, L. Laroco, R. Mortini, R. Rupp, Composition of inner functions, Results in Math. 25 (1994), 252–269. P. Gorkin, R. Mortini, Asymptotic interpolating sequences in uniform algebras, J. London Math. Soc. 67 (2003), 481–498. P. Gorkin, R. Mortini, Cluster sets of interpolating Blaschke products, J. d’Analyse Math. 96 (2005), 369–395. K. Hoffman: [*Bounded analytic functions and Gleason parts*]{}, Ann. Math. 86 [ (1967)]{}, 74–111. A.Kerr-Lawson, Some lemmas in interpolating Blaschke products and a correction, Can. J. Math. 21 (1969), 531–534. R. Mortini, Commuting inner functions. J. Math. Analysis and Appl. 209 (1997), 724–728. R. Mortini, R. Rupp, On a family of pseudohyperbolic disks, Elem. Math. 70 (2015), 153–160. R. Mortini, R. Rupp, An introduction to extension problems, Bézout equations and stable ranks in classical function algebras, Accompanied by introductory chapters on point-set topology and function theory. Book-manuscript; ca 1500 pages; in preparation. J. H. Shapiro, Composition operators and classical function theory, Springer Verlag, New York-Heidelberg, 1993 V. Tolokonnikov, Carleson-Blaschke products and Douglas algebras, Algebra i Analyz 3 (1991), 185-196 (Russian) and St. Petersburg Math. J. 3 (1992), 881–892. Kam-Fook Tse, Nontangential interpolating sequences and interpolation by normal functions, Proc. Amer. Math. Soc. 29 (1971), 351–354. M. Tsuji, Potential Theory in Modern Function Theory, Chelsea Pub. Co. 1975 M.L. Weiss, Some $\H$-interpolating sequences and the behavior of certain of their Blaschke products, Trans. Amer. Math. Soc. 209 (1975), 211–223. D. Wortman, Interpolating sequences on convex curves in the open unit disc, Proc. Amer. Math. Soc. 48 (1975), 157–164. [^1]: A shorter proof can be given by using the advanced definition that a domain $G$ in $\C$ is simply connected if every curve in $G$ is contractible in $G$, or equivalently, if for every Jordan curve $J$ in $G$ the interior of $J$ belongs to $G$. That depends though on the Jordan curve theorem. [^2]: This proof, as well as two different ones, including the one mentioned in footnote 1, stem from the forthcoming book manuscript [@moru] of the second author together with R. Rupp. [^3]: This also follows form the inequalities $1-|\sigma(\xi_n)|^2= \frac{(1-|a|^2)(1-|\xi_n|^2)}{|1-\ov a \xi_n|^2}\leq \frac{1+|a|}{1-|a|} (1-|\xi_n|^2)$ and $1-|\psi_n(a)|^2\leq \frac{1+|a|}{1-|a|} (1-|w_n|^2)$, whenever $(w_n)$ is a Blaschke sequence and $\psi_n(w_n)=\sigma(a)=0$.
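The elementary pseudohyperbolic computations used in section \[geo\] are easy to check numerically. The following short Python sketch (an illustration of ours, not needed for any of the proofs; the helper name `rho` is ad hoc) evaluates $\rho(1-2^{-n},1-2^{-n-1})$, the hyperbolic orbit $x_j=\frac{3^j-1}{3^j+1}$ and the parabolic orbit $z_n=\frac{n}{n-i}$, and reproduces the constants $\frac{1}{3-2^{-n}}$, $\frac{1}{2}$ and $\frac{1}{\sqrt 2}$ obtained above.

```python
# Numerical sanity check of the pseudohyperbolic distances computed in Section [geo].
# rho(z, w) = |z - w| / |1 - conj(z) w| is the standard pseudohyperbolic distance.

def rho(z, w):
    z, w = complex(z), complex(w)
    return abs(z - w) / abs(1 - z.conjugate() * w)

# Zeros z_n = 1 - 2^{-n}: consecutive distances equal 1/(3 - 2^{-n}), tending to 1/3.
for n in (1, 5, 20):
    print(n, rho(1 - 2.0**-n, 1 - 2.0**-(n + 1)), 1 / (3 - 2.0**-n))

# Hyperbolic iteration x_j = (3^j - 1)/(3^j + 1): constant step 1/2.
x = lambda j: (3.0**j - 1) / (3.0**j + 1)
print([rho(x(j), x(j + 1)) for j in range(1, 5)])

# Parabolic iteration z_n = n/(n - i): constant step 1/sqrt(2) ~ 0.7071.
z = lambda n: n / (n - 1j)
print([rho(z(n), z(n + 1)) for n in range(1, 5)])
```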
--- abstract: | Given a smooth curve of genus $2$ embedded in $\mathbf P^{d-2}$ with a complete linear system of degree $d\geq 6$, we list all types of rational normal scrolls arising from linear systems $g^1_2$ and $g^1_3$ on $C$. Furthermore, we give a description of the ideal of such a curve of genus $2$ embedded in $\mathbf P^{d-2}$ as a sum of the ideal of the two-dimensional scroll defined by the unique $g^1_2$ on $C$ and the ideal of a three-dimensional scroll arising from a $g^1_3$ on $C$ and not containing the scroll defined by the $g^1_2$ on $C$. address: | Matematisk Institutt\ Universitetet I Oslo\ PO Box 1053\ Blindern, NO-0316 Oslo, Norway author: - Andrea Hofmann title: The ideal of curves of genus $2$ on rational normal scrolls --- Introduction {#intro} ============ Given a projective variety $X$, in order to find a description of its ideal $I_X$ which is, in some sense, easy to handle, one useful approach is to look for a decomposition of $I_X$ as a sum of ideals of higher-dimensional varieties that contain $X$. This problem at hand extends naturally to the question whether the syzygies of a given variety are generated by the syzygies of higher-dimensional varieties containing this variety.\ In the past decades this question has been studied for curves $C\subseteq \mathbf P^n$. Since each $g^1_k$ on $C$ for $k\leq n-1$ gives rise to a rational normal scroll that contains the curve, natural candidates for these higher-dimensional varieties are exactly rational normal scrolls. The above presented problem has been studied to some extent for elliptic normal curves and canonical curves: In 1984 Green ([@Green1]) showed that the space of quadrics in the ideal of a non-hyperelliptic canonical curve of genus $g\geq 5$ is spanned by quadrics of rank less or equal to $4$. In [@vBoHu] v. Bothmer and Hulek prove that the linear syzygies of elliptic normal curves, of their secant varieties and of bielliptic canonical curves are generated by syzygies of rational normal scrolls that contain these varieties. In [@vBo] v. Bothmer proves that the $j$th syzygies of a general canonical curve of genus $g$ are generated by the $j$th syzygies of rational normal scrolls containing the curve for the cases $(g,j)\in \{(6,1),(7,1),(8,2)\}$, and in [@vBo1] the author shows that the first syzygies of general canonical curves of genus $g\geq 9$ are generated by scrollar syzygies. In this paper we will study curves of genus $2$, which cannot be canonically embedded and are all hyperelliptic, and we will focus on the $0$th syzygies, i.e. the ideal $I_C$. Let $C$ be a curve of genus $2$, embedded as a smooth and irreducible curve in $\mathbf P^{d-2}$ with a complete linear system of degree $d\geq 6$. Since the linear system is complete, the embedded curve $C\subseteq \mathbf P^{d-2}$ is linearly normal.\ Our main result is the following: Let $C$ be a smooth and irreducible curve of genus $2$, linearly normal embedded in $\mathbf P^{d-2}$ with a complete linear system $\vert H_C\vert$ of degree $d\geq 6$. Moreover, let $S$ be the $g^1_2(C)$-scroll, and let $V$ be a $g^1_3(C)$-scroll that does not contain $S$. Then we have $$I_S+I_V=I_C.$$ In Section \[idealC\] we will give a proof of this theorem which is based on an inductive argument. In Sections \[scrollS\] and \[scrollV\] we will list all possible scroll types of the unique $g^1_2(C)$-scroll $S$ and $g^1_3(C)$-scrolls $V_{\vert D\vert}$. 
We give a connection between the linear system $\vert H_C\vert$ that embeds the curve into projective space and the scroll types of $S$ and a $V_{\vert D\vert}$ and thus provide a proof of existence in all cases. Preliminaries {#prelim} ============= Let $C$ be a non-singular curve of genus $2$, and let $\vert H_C\vert$ be a complete linear system of degree $d\geq 6$ on $C$. By the Riemann-Roch theorem for curves (see e.g. [@Ha], Thm. 1.3 in Chapter IV.1) the system $\vert H_C\vert$ embeds $C$ into projective space $\mathbf P^{d-2}$. Since $\vert H_C\vert$ is complete, the embedded curve $C\subseteq \mathbf P^{d-2}$ is linearly normal. Throughout the whole article $C$ will always denote a smooth curve of degree $d\geq 6$ in $\mathbf P^{d-2}$. We use the notation $g^1_k(C)$ to denote a $g^1_k$ on $C$. We are interested in rational normal scrolls that arise from the unique $g^1_2(C)$ and from $g^1_3(C)$’s. The following proposition states that indeed there is only one $g^1_2$ on $C$, and furthermore it computes the dimension of the family of $g^1_3$’s on $C$, which we will denote by $G^1_3(C)$: \[linsystem\] There exists exactly one $g^1_2(C)$, and this is equal to the canonical system $\vert K_C\vert$. The family $G^1_3(C):=\{g^1_3(C)'s\}$ is two-dimensional. We use the Riemann-Roch theorem for curves (see e.g. [@Ha], Thm. 1.3 in Chapter IV.1): If $D$ is a divisor of degree $2$, then $$h^0(\mathcal O_C(D))=1+h^0(\mathcal O_C(K_C-D))= \left\{\begin{array}{ccc} 1&\textrm{if}&D\notin\vert K_C\vert,\\ 2&\textrm{if}&D\in \vert K_C\vert.\\ \end{array}\right.$$ Hence we can conclude that the linear system $\vert D\vert$ is a $g^1_2(C)$ if and only if $\vert D\vert=\vert K_C\vert$. If $D$ is a divisor of degree $3$ on $C$, then, again by the Riemann-Roch theorem for curves, $h^0(\mathcal O_C(D))=2$, i.e. each linear system $\vert D\vert$ of degree $3$ is a $g^1_3(C)$. The set of all effective divisors of degree $3$ on $C$ is given by $C_3:=(C\times C\times C)/S_3$, where $S_3$ denotes the symmetric group on $3$ letters. The dimension of this family is equal to $3$, and since each linear system $\vert D\vert$ of degree $3$ has dimension $1$, as shown above, the family of $g^1_3(C)$’s has to be two-dimensional. We are interested in the rational normal scroll arising from the unique $g^1_2(C)$, and for each $\vert D\vert$ in the two-dimensional family $G^1_3(C)$ we are interested in the scroll defined by $\vert D\vert$. There are several different presentations of a rational normal scroll. We will use two of these, which will be given in the following paragraphs. \[ratscroll\]\ Let $e_1,e_2,\ldots,e_k$ be integers with $e_1\geq e_2\geq \cdots \geq e_k\geq 0$ and $e_1+e_2+\cdots +e_k\geq 2$. Set $\mathcal E=\mathcal O_{\mathbf P^1}(e_1)\oplus \mathcal O_{\mathbf P^1}(e_2)\oplus \cdots \oplus \mathcal O_{\mathbf P^1}(e_k)$, a locally free sheaf of rank $k$ on $\mathbf P^1$, and let $\pi: \mathbf P(\mathcal E)\to \mathbf P^{1}$ be the corresponding $\mathbf P^{k-1}$-bundle. A rational normal scroll $X$ is the image of the map $\iota:\mathbf P(\mathcal E)\hookrightarrow \mathbf P^N:=\mathbf P H^0(\mathbf P(\mathcal E), \mathcal O_{\mathbf P(\mathcal E)}(1))$.\ The scroll type of $X$ is defined to be equal to $(e_1,e_2,\cdots ,e_k)$. \[degscroll\] The dimension of $X$ is equal to $k$, and the degree of $X$ is equal to the degree of $\mathcal E$ which is equal to $f:=\sum_{i=1}^k{e_i}$.
Moreover, by the Riemann-Roch theorem for vector bundles, $h^0(\mathbf P(\mathcal E),\mathcal O_{\mathbf P(\mathcal E)}(1))=h^0(\mathbf P^1,\mathcal E)=\operatorname{rk}(\mathcal E)+\deg(\mathcal E)=k+\sum_{i=1}^{k}e_i$, i.e. the dimension of the ambient projective space is equal to $N=k+\sum_{i=1}^{k}e_i-1$.\ Thus for a rational normal scroll $X$ we obtain $\dim(X)+\deg(X)=k+\sum_{i=1}^ke_i=N+1$, and consequently a rational normal scroll $X\subseteq \mathbf P^N$ is a non-degenerate irreducible variety of minimal degree $f=\operatorname{codim}(X)+1$. The scroll $X$ is smooth if and only if all $e_i$, $i=1,\ldots, k$, are positive. If this is the case, then $\iota:\mathbf P(\mathcal E)\to X\subseteq \mathbf P^N$ is an isomorphism. \[linscroll\] Each linearly normal scroll $X$ over $\mathbf P^1$ is a rational normal scroll. If $X$ is a linearly normal scroll over $\mathbf P^1$, then $X=\iota(\mathbf P(\mathcal E))$, where $\mathcal E=\pi_{\ast}\mathcal O_{\mathbf P(\mathcal E)}(1)$ is a vector bundle over $\mathbf P^1$ and $\iota:\mathbf P(\mathcal E)\hookrightarrow \mathbf P(H^0(\mathcal E))$. By Grothendieck’s splitting theorem (cf. [@Haze]) every vector bundle over $\mathbf P^1$ splits, i.e. $\mathcal E$ is of the form $\mathcal E=\oplus_{i}\mathcal O_{\mathbf P^1}(e_i)$. A more geometric description of a rational normal scroll is given by the following definition (cf. [@Stevens]): Let $e_1,e_2,\ldots, e_k$ be integers with $e_1\geq e_2\geq \ldots \geq e_k\geq 0$ and $\sum_{i=1}^k{e_i}\geq 2$. For $i=1,\ldots, k$, let $\phi_i: \mathbf P^1\to C_i\subseteq \mathbf P^{e_i}\subseteq \mathbf P^N$, where $N=\sum_{i=1}^k{e_i}+k-1$, parametrize a rational normal curve of degree $e_i$, such that $\mathbf P^{e_1},\ldots,\mathbf P^{e_k}$ span the whole $\mathbf P^N$. Then $$X=\overline{\bigcup_{P\in \mathbf P^1}{\langle\phi_1(P),\ldots, \phi_k(P)\rangle}}$$ is a rational normal scroll of dimension $k$, degree $e_1+\cdots +e_k$ and scroll type $(e_1,\ldots, e_k)$. In other words, each fiber of $X$ is spanned by $k$ points where each of these lies on a different rational normal curve. We call these $k$ rational normal curves the directrix curves $C_i$ of the scroll. The Picard group of rational normal scrolls {#Picardscroll} ------------------------------------------- Let $H=[\iota^{\ast}\mathcal O_{\mathbf P^N}(1)]$ denote the hyperplane class, and let $F=[\pi^{\ast}\mathcal O_{\mathbf P^1}(1)]$ be the class of a fiber of $\mathbf P(\mathcal E)$. In the following we will use $H$ and $F$ to denote both the classes and divisors in the respective classes. The *Picard group* of $\mathbf P(\mathcal E)$ is generated by $H$ and $F$: $$\operatorname{Pic}(\mathbf P(\mathcal E))=\mathbf Z[H]\oplus \mathbf Z[F].$$ We have the following intersection products: $$H^k=f=\sum_{i=1}^k{e_i}, \quad H^{k-1}.F=1, \quad F^2=0.$$ A minimal section of $\mathbf P(\mathcal E)$ is given by $B_0=H-rF$, where $r\in \mathbf N$ is maximal such that $H-rF$ is effective, in other words, $B_0=H-e_1F$. The $g^1_2(C)$-scroll and $g^1_3(C)$-scrolls -------------------------------------------- By Proposition \[linsystem\] there exists exactly one $g^1_2$ on $C$. We set $$S=\overline{\bigcup_{E\in g^1_2(C)}{\operatorname{span}(E)}}\subseteq \mathbf P^{d-2},$$ where $\operatorname{span}(E)$ denotes the line in $\mathbf P^{d-2}$ spanned by the two points in the divisor $E$.
For each $\vert D\vert$ which is a $g^1_3(C)$ we set $$V_{\vert D\vert}=\overline{\bigcup_{D^{\prime}\in \vert D\vert}{\operatorname{span}(D^{\prime})}}\subseteq \mathbf P^{d-2},$$ where $\operatorname{span}(D^{\prime})$ denotes the plane in $\mathbf P^{d-2}$ spanned by the three points in the divisor $D^{\prime}$. \[normal\] Let $C\subseteq \mathbf P^{d-2}$ be a smooth linearly normal curve of degree $d\geq 6$ and genus $2$, let $S$ be the $g^1_2(C)$-scroll, and for a linear system $\vert D\vert$ which is a $g^1_3(C)$ let $V_{\vert D\vert}$ be the $g^1_3(C)$-scroll associated to $\vert D\vert$. The scrolls $S$ and $V_{\vert D\vert}$ are rational normal scrolls. The rationality of $S$ and each $V_{\vert D\vert}$ is obvious. For the rest notice that if a scroll $X$ contains a linearly normal curve $C$, then also $X$ has to be linearly normal: If $X$ was the image of a non-degenerate variety in higher-dimensional projective space under some projection, then $C$ had to be as well. We conclude that since $C$ is linearly normal, $S$ and all $V_{\vert D\vert}$ are linearly normal. By Proposition \[linscroll\] we can conclude that $S$ and all $V_{\vert D\vert}$ are rational normal scrolls. Note that the dimension of $S$ is equal to $\dim(\vert K_C\vert)+\dim(\operatorname{span}(E))=2$ and that the dimension of $V_{\vert D\vert}$ is equal to $\dim(\vert D\vert)+\dim(\operatorname{span}(D^{\prime}))=3$. By Proposition \[normal\] the scrolls $S$ and $V_{\vert D\vert}$ are rational normal scrolls which implies by the observations in Remark \[degscroll\] that we obtain the following degrees: $$\label{degreescroll} \deg (S)=d-3,\qquad \deg (V_{\vert D\vert})=d-4.$$ \[classesonS\] Let $C\subseteq \mathbf P^{d-2}$ be a curve of genus $2$ and degree $d\geq 6$, and let $\mathcal E$ be a $\mathbf P^1$-bundle such that the image of the map $\iota:\mathbf P(\mathcal E)\to\mathbf P^{d-2}$ is the $g^1_2(C)$-scroll $S$. The class of $C$ on $\mathbf P(\mathcal E)$ is equal to $[C]=2H-(d-6)F$. Write $[C]=aH+bF$ with $a,b\in \mathbf Z$. Since $[C].F=2$, we obtain $a=2$, and $[C].H=d$ implies that $d=2(d-3)+b$, i.e. $b=6-d$. \[classonV\] Let $C\subseteq \mathbf P^{d-2}$ be a smooth curve of genus $2$ and degree $d\geq 6$, and let $\mathcal E$ be a $\mathbf P^1$-bundle such that the image of $\iota:\mathbf P(\mathcal E)\to \mathbf P^{d-2}$ is a $g^1_3(C)$-scroll $V$ such that $C$ does not pass through the (possibly empty) singular locus of $V$.\ The class of $C$ on $\mathbf P(\mathcal E)$ is equal to $[C]=3H^2-2(d-6)H.F$. Since $C$ is of codimension $2$ on $\mathbf P(\mathcal E)$, we can write the class of $C$ on $\mathbf P(\mathcal E)$ as $[C]=aH^2+bH.F$ with $a,b\in \mathbf Z$. Since $[C].F=3$, we obtain $a=3$, and $[C].H=d$ implies that $d=3(d-4)+b$, i.e. $b=2(6-d)$. Theorem \[idealSV\] states that $I_C$ is generated by the union of $I_S$ and the ideal of *one* $g^1_3(C)$-scroll $V_{\vert D\vert}$ that obviously does not contain $S$. Since the ideal of each rational normal scroll is generated by quadrics, in order to give this statement a sense we have to prove that the ideal $I_C$ is generated by quadrics as well: \[genbyquadrics\] Let $C$ be a smooth curve of genus $2$ embedded in $\mathbf P^{d-2}$ with a complete linear system of degree $d\geq 6$. The ideal $I_C$ is generated by quadrics. This is Theorem (4.a.1) in [@Green]. \[notrisecant\] Let $C\subseteq \mathbf P^{d-2}$ be a smooth curve of genus $2$ embedded with a complete linear system of degree $d\geq 6$. Then $C$ has no trisecant lines. 
Since the ideal of $C$ is generated by quadrics, we can write $C=Q_1\cap \ldots \cap Q_r$ where the $Q_i$ are quadrics and $r=h^0(\mathcal I_C(2))$. Any line that intersects $C$ in three points, intersects each quadric $Q_i$ in at least three points, consequently it is contained in each $Q_i$ and hence in the intersection of all $Q_i$’s which is equal to $C$. Scroll types of the $g^1_2(C)$-scroll $S$ {#scrollS} ========================================= In this section we will list all possible types of the $g^1_2(C)$-scroll $S$ and give a connection between the complete linear system $\vert H_C\vert$ that embeds the curve $C$ into projective space and the scroll type of $S$. In this way we also give a proof of existence in all cases. First we give a relation between the degrees of the directrix curves of the $g^1_2(C)$-scroll: \[scrolltypeS\] Let $C\subseteq \mathbf P^{d-2}$ be a smooth curve of degree $d\geq 6$ and genus $2$. For the scroll type $(e_1,e_2)$ of the $g^1_2(C)$-scroll $S$ we have $e_1-e_2\leq 3$. Let $C_0=H-e_1F$ be a minimal section as described as $B_0$ in general in Section \[Picardscroll\]. Since $C$ and $C_0$ are effective and $C$ is smooth, so $C_0\not\subseteq C$, we have $[C].C_0\geq 0$, which means by Proposition \[classesonS\] that $(2H-(d-6)F).(H-e_1F)\geq 0$, consequently $2e_1+2e_2-2e_1-(d-6)\geq 0$. Since $d=e_1+e_2+3$ the result follows. Now we will describe the relation between $\vert H_C\vert$ with respect to $\vert K_C\vert$ and the scroll type of the $g^1_2(C)$-scroll $S$. By Equation (\[degreescroll\]) the degree of $S$ is equal to $d-3$. The case when the degree $d$ of the curve $C$ is even ----------------------------------------------------- If the degree $d$ of the embedded curve $C\subseteq \mathbf P^{d-2}$ is even, then by Proposition \[scrolltypeS\] the $g^1_2(C)$-scroll $S$ has scroll type $\left(\frac{d-2}{2},\frac{d-4}{2}\right)$ or $\left(\frac{d}{2},\frac{d-6}{2}\right)$. The linear system $\vert H_C-\frac{d-2}{2}K_C\vert$ is of degree $2$ and thus non-empty by the Riemann-Roch theorem for curves and either equal to $\vert K_C\vert$ or equal to $\vert P+Q\vert$, where $P$ and $Q$ are points on $C$ such that $P+Q\notin \vert K_C\vert$. There is the following relation between $\vert H_C-\frac{d-2}{2}K_C\vert$ and the scroll type of $S$: \[scrolltypeS1\] The scroll type of the $g^1_2(C)$-scroll $S$ is equal to $\left(\frac{d-2}{2},\frac{d-4}{2}\right)$ if and only if $\vert H_C-\frac{d-2}{2}K_C\vert=\vert P+Q\vert$, where $P$ and $Q$ are points on $C$ such that $P+Q\notin \vert K_C\vert$. If the scroll type of $S$ is equal to $\left(\frac{d-2}{2},\frac{d-4}{2}\right)$, then a minimal section $C_0$ is of degree $\frac{d-4}{2}$, and a general hyperplane section of $S$ containing $C_0$ consists of $C_0$ and $\frac{d-2}{2}$ fibers of $S$. Consequently, $\vert H_C\vert=\vert \frac{d-2}{2}K_C+P+Q\vert$, where $P$ and $Q$ are points in $C_0\cap C$ and $P+Q\notin \vert K_C\vert$. Conversely, if $S$ is of scroll type $\left(\frac{d}{2},\frac{d-6}{2}\right)$, a general hyperplane section of $S$ that contains a minimal section $C_0$, which is of degree $\frac{d-6}{2}$, decomposes into $C_0$ and $\frac{d}{2}$ fibers of $S$. Hence $\vert H_C\vert=\frac{d}{2}\vert K_C\vert$. 
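As a quick numerical illustration of Proposition \[scrolltypeS\] (a sketch of ours, not needed for the argument), the constraints $e_1\geq e_2\geq 0$, $e_1+e_2=d-3$ and $e_1-e_2\leq 3$ leave exactly the two scroll types per degree that are described in this and the following subsection:

```python
# Enumerate the scroll types (e1, e2) allowed for the g^1_2(C)-scroll S by
# Proposition [scrolltypeS]: e1 >= e2 >= 0, e1 + e2 = deg(S) = d - 3, e1 - e2 <= 3.
def g12_scroll_types(d):
    deg = d - 3
    return [(e1, deg - e1) for e1 in range((deg + 1) // 2, deg + 1)
            if deg - e1 >= 0 and e1 - (deg - e1) <= 3]

for d in range(6, 11):
    print(d, g12_scroll_types(d))
# 6 -> [(2, 1), (3, 0)], 7 -> [(2, 2), (3, 1)], 8 -> [(3, 2), (4, 1)], ...
```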
The case when the degree $d$ of the curve $C$ is odd ---------------------------------------------------- If the degree $d$ of the embedded curve $C$ is odd, then by Proposition \[scrolltypeS\] the $g^1_2(C)$-scroll $S$ is of scroll type $\left(\frac{d-3}{2},\frac{d-3}{2}\right)$ or $\left(\frac{d-1}{2},\frac{d-5}{2}\right)$. The linear system $\vert H_C-\frac{d-3}{2}K_C\vert$ has degree $3$, and it is thus non-empty by the Riemann-Roch theorem for curves. It is either equal to $\vert P+Q+R\vert$, where $P$, $Q$ and $R$ are points on $C$ such that none of $P+Q$, $P+R$ and $Q+R$ is a divisor in $\vert K_C\vert$, or equal to $\vert K_C+P\vert$, where $P$ is a point on $C$. The following proposition states the relation between $\vert H_C-\frac{d-3}{2}K_C\vert$ and the scroll type of $S$: \[scrolltypeS2\] The $g^1_2(C)$-scroll $S$ has scroll type $\left(\frac{d-3}{2},\frac{d-3}{2}\right)$ if and only if $\vert H_C-\frac{d-3}{2}K_C\vert=\vert P+Q+R\vert$, where $P$, $Q$ and $R$ are points on $C$ such that none of $P+Q$, $P+R$ and $Q+R$ is a divisor in $\vert K_C\vert$. If $S$ has scroll type $\left(\frac{d-3}{2},\frac{d-3}{2}\right)$, then a general hyperplane section of $S$ containing a minimal section $C_0$ is equal to the union of $C_0$ and $\frac{d-3}{2}$ fibers of $S$. We obtain that $\vert H_C\vert=\vert \frac{d-3}{2}K_C+P+Q+R\vert$, where $P,Q,R$ are points lying on $C_0\cap C$ and none of $P+Q$, $P+R$ or $Q+R$ is a divisor in $\vert K_C\vert$. Conversely, if the scroll type of $S$ is equal to $\left(\frac{d-1}{2},\frac{d-5}{2}\right)$, then a general hyperplane section of $S$ that contains $C_0$ decomposes into a minimal section $C_0$ and $\frac{d-1}{2}$ fibers of $S$. Consequently, $\vert H_C\vert=\vert\frac{d-1}{2}K_C+P\vert$ where $P$ is a point in $C_0\cap C$. Scroll types of $g^1_3(C)$-scrolls $V_{\vert D\vert}$ {#scrollV} ===================================================== In this section we will first give a relation between the degrees of the three directrix curves of a scroll $V_{\vert D\vert}$, and then we will give a connection between $\vert H_C\vert$ and the scroll type of $V_{\vert D\vert}$ for a given $\vert D\vert \in G^1_3(C)$. \[scrolltypeV\] If $V$ is a $g^1_3(C)$-scroll such that the curve $C$ does not intersect the (possibly empty) singular locus of $V$, then for its scroll type $(e_1,e_2,e_3)$ we have $2e_1-e_2-e_3\leq 4$. Let $B_0=H-e_1F$ denote a minimal section of the bundle $\mathbf P(\mathcal E)$. Since we have $h^0(\mathcal O_V(H-B_0))=h^0(\mathcal O_V(e_1F))=e_1+1\geq 1$, $B_0$ is contained in at least one hyperplane, consequently $B_0$ does not span all of $\mathbf P^{d-2}$. Since $C$ spans all of $\mathbf P^{d-2}$, $B_0$ cannot contain $C$, thus we have that $[C].B_0\geq 0$, i.e. we know that $(3H^2-2(d-6)HF).(H-e_1F)\geq 0$ by Proposition \[classonV\], which means that $3e_1+3e_2+3e_3-3e_1-2(d-6)\geq 0$. The result follows from $d=e_1+e_2+e_3+4$. \[scrolltypesingV\] If $V=V_{\vert D\vert}$ is a singular scroll of scroll type $(e_1,e_2,0)$ such that the curve $C$ intersects the singular locus of $V$, then $e_1$ and $e_2$ satisfy the following: $e_1-e_2\leq 3$. If $V=V_{\vert D\vert}$ is a singular $g^1_3(C)$-scroll of type $(e_1,e_2,0)$ such that $C$ intersects its singular locus, then a point $P\in \operatorname{sing}(V)\cap C$ is a basepoint of $\vert D\vert$, i.e. $\vert D\vert=\vert K_C+P\vert$. 
The projection from $P$ maps $C$ to a curve $C^{\prime}$ of degree $d-1$ in $\mathbf P^{d-3}$, and it maps $V_{\vert D\vert}$ to the $g^1_2(C^{\prime})$-scroll of type $(e_1,e_2)$. By Proposition \[scrolltypeS\] we obtain $e_1-e_2\leq 3$. We will now come to the converse of Proposition \[scrolltypesingV\], i.e. the existence part: \[singVexistence\] If $e_1$ and $e_2$ are integers with $e_1\geq e_2\geq 0$, $e_1-e_2\leq 3$ and $e_1+e_2=d-4$ with $d\geq 6$, then there exists a curve $C$ of genus $2$ and a divisor class $\vert H_C\vert$ on $C$ of degree $d$ that embeds $C$ into $\mathbf P^{d-2}$ such that there exists a $g^1_3(C)$-scroll of type $(e_1,e_2,0)$ such that its singular locus intersects the curve $C$. Let $e_1\geq e_2\geq 0$ be integers with $e_1-e_2\leq 3$ and $e_1+e_2=d-4$. By the results in Section \[scrollS\] there exists a curve $C$ of genus $2$, embedded with a system $\vert H^{\prime}\vert$ of degree $d-1$ into $\mathbf P^{d-3}$ such that its $g^1_2(C)$-scroll is of type $(e_1,e_2)$. Take a point $P$ on $C$ and reembed the curve $C$ with the linear system $\vert H_C\vert:=\vert H^{\prime}+P\vert$ into $\mathbf P^{d-2}$. The cone over the $g^1_2(C)$-scroll in $\mathbf P^{d-3}$ with $P$ as vertex is a $g^1_3(C)$-scroll in $\mathbf P^{d-2}$ of type $(e_1,e_2,0)$, and the point $P$ lies in the intersection of its singular locus and the curve $C$. Now we will describe all scroll types a $g^1_3(C)$-scroll $V_{\vert D\vert}$ can have as $\vert D\vert$ varies. We will distinguish between $\vert D\vert$ with one basepoint and $\vert D\vert$ basepoint-free. The case when $V_{\vert D\vert}$ is smooth or $V_{\vert D\vert}$ is singular and $C$ does not pass through the singular locus of $V_{\vert D\vert}$ --------------------------------------------------------------------------------------------------------------------------------------------------- If $V_{\vert D\vert}$ is smooth or $V_{\vert D\vert}$ is singular, but the curve $C$ does not pass through the singular locus of $V_{\vert D\vert}$, then $\vert D\vert$ necessarily has to be basepoint-free, since a basepoint of $\vert D\vert$ is a point in the intersection of $C$ with the singular locus of $V_{\vert D\vert}$.\ In order to describe the possible scroll types with a relation to $\vert H_C\vert$ we use a formula given in [@Schreyer]. We give an alternative proof of the fact that the following numbers $d_i$ determine the scroll type of a $g^1_3(C)$-scroll $V_{\vert D\vert}$: ([@Schreyer], p. 
114)\[di\] Given a basepoint-free $\vert D\vert\in G^1_3(C)$, set $$\begin{aligned} d_0&=&h^0(\mathcal O_C(H_C))-h^0(\mathcal O_C(H_C-D)),\\ d_1&=&h^0(\mathcal O_C(H_C-D))-h^0(\mathcal O_C(H_C-2D)),\\ d_2&=&h^0(\mathcal O_C(H_C-2D))-h^0(\mathcal O_C(H_C-3D)),\\ &\vdots&\\ d_{\lfloor \frac{d}{3}\rfloor}&=&h^0(\mathcal O_C(H_C-\lfloor \frac{d}{3}\rfloor D))-\underbrace{h^0(\mathcal O_C(H_C-(\lfloor \frac{d}{3}\rfloor+1)D))}_{=0}.\end{aligned}$$ The scroll type $(e_1,e_2,e_3)$ of $V_{\vert D\vert}$ is then given by $$\begin{aligned} e_1&=&\#\{j\vert d_j\geq 1\}-1,\\ e_2&=&\#\{j\vert d_j\geq 2\}-1,\\ e_3&=&\#\{j\vert d_j\geq 3\}-1.\end{aligned}$$ Since $C$ and $V$ are both linearly normal and span all of $\mathbf P^{d-2}$, there is an isomorphism $H^0(C,\mathcal O_C(H_C))\cong H^0(\mathbf P(\mathcal E),\mathcal O_{\mathbf P(\mathcal E)}(H))\cong H^0(\mathbf P^1,\mathcal O_{\mathbf P^1}(e_1)\oplus \mathcal O_{\mathbf P^1}(e_2)\oplus \mathcal O_{\mathbf P^1}(e_3))$, and thus we obtain $$H^0(C,\mathcal O_C(H-iD))\cong H^0(\mathbf P(\mathcal E),\mathcal O_{\mathbf P(\mathcal E)}(H-iF))\cong H^0(\mathbf P^1,\mathcal O_{\mathbf P^1}(e_1-i)\oplus \mathcal O_{\mathbf P^1}(e_2-i)\oplus \mathcal O_{\mathbf P^1}(e_3-i))$$ for all $i \in \mathbf N_0$. Consequently we obtain $$d_i=\left\{\begin{array}{ccl} 3,&\textrm{if } &0\leq i\leq e_3,\\ 2,&\textrm{if } &e_3+1\leq i\leq e_2,\\ 1,&\textrm{if } &e_2+1\leq i \leq e_1.\\ \end{array}\right.$$ Using the formula in Proposition \[di\] we will now give all possible scroll types of $V_{\vert D\vert}$, in relation to $\vert H_C\vert$: $d$ Conditions on $\vert H_C\vert$ Scroll type of $V_{\vert D\vert}$ ----- ---------------------------------------------------- ----------------------------------------------------- $\vert H_C-\frac{d-3}{3} D\vert=\vert D\vert$ $(\frac{d}{3},\frac{d}{3}-2,\frac{d}{3}-2)$ $\vert H_C-\frac{d-3}{3} D\vert \neq \vert D\vert$ $(\frac{d}{3}-1,\frac{d}{3}-1,\frac{d}{3}-2)$ $\vert H_C-\frac{d-1}{3}D\vert \neq \emptyset$ $(\frac{d-1}{3},\frac{d-1}{3}-1,\frac{d-1}{3}-2)$ $\vert H_C-\frac{d-1}{3}D\vert=\emptyset$ $(\frac{d-1}{3}-1,\frac{d-1}{3}-1,\frac{d-1}{3}-1)$ $\vert H_C-\frac{d-2}{3} D\vert=\vert K_C\vert$ $(\frac{d-2}{3},\frac{d-2}{3},\frac{d-2}{3}-2)$ $\vert H_C-\frac{d-2}{3} D\vert=\vert P+Q\vert$, $(\frac{d-2}{3},\frac{d-2}{3}-1,\frac{d-2}{3}-1)$ $P+Q\notin \vert K_C\vert$ The case when $V_{\vert D\vert}$ is singular and $C$ passes through the singular locus of $V_{\vert D\vert}$ ------------------------------------------------------------------------------------------------------------ If $V_{\vert D\vert}$ is singular, then its scroll type is equal to $(e_1,e_2,0)$ where $e_1\geq e_2\geq 0$. By Proposition \[scrolltypesingV\], if $C$ passes through the singular locus of $V_{\vert D\vert}$, then $e_2=0$ is only possible for $d\leq 7$. For $d\geq 8$ the scroll $V_{\vert D\vert}$ has exactly one singular point which we will denote by $P$. For $d=6$ and $d=7$ let $P$ denote one point in the singular locus of $V_{\vert D\vert}$. If the system $\vert D\vert$ has a basepoint $P$, then the curve passes through the singular locus of the scroll $V_{\vert D\vert}$. As suggested in Proposition \[singVexistence\], given $\vert H_C\vert$ and $\vert D\vert=\vert K_C+P\vert$, we can find the scroll type of $V_{\vert D\vert}$ by projecting from the point $P$ and using Proposition \[scrolltypeS1\] and Proposition \[scrolltypeS2\] for the $g^1_2(C^{\prime})$-scroll, where $C^{\prime}$ is the image of $C$ under the projection. 
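The bookkeeping of Proposition \[di\] is easily automated. The following small sketch (ours; it only encodes the identity $h^0\big(\oplus_j\mathcal O_{\mathbf P^1}(e_j-i)\big)=\sum_j\max(e_j-i+1,0)$ used in the proof above) computes the numbers $d_i$ for a given scroll type and recovers the type from them, for example for the two types $(1,1,1)$ and $(2,1,0)$ occurring for $d=7$:

```python
# Illustration of Proposition [di]: the numbers d_i determine the scroll type (e1, e2, e3).
def h0(es, i):
    # h^0(P^1, O(e1 - i) + O(e2 - i) + O(e3 - i))
    return sum(max(e - i + 1, 0) for e in es)

def d_numbers_and_type(es, d):
    di = [h0(es, i) - h0(es, i + 1) for i in range(d // 3 + 1)]
    typ = tuple(sum(1 for x in di if x >= m) - 1 for m in (1, 2, 3))
    return di, typ

print(d_numbers_and_type((1, 1, 1), 7))   # ([3, 3, 0], (1, 1, 1))
print(d_numbers_and_type((2, 1, 0), 7))   # ([3, 2, 1], (2, 1, 0))
```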
Let in this situation $P^{\prime}$ always denote the point in $\vert K_C-P\vert$. ### The case when the degree $d$ of the curve $C$ is even If $d$ is even, then, since each linear system of degree $2$ is non-empty by the Riemann-Roch theorem for curves, we can write $\vert H_C\vert=\vert \frac{d}{2}K_C\vert=\vert \frac{d-2}{2}K_C+P+P^{\prime}\vert$ or $\vert H_C\vert=\vert \frac{d-2}{2}K_C+Q_1+Q_2\vert=\vert \frac{d-4}{2}K_C+Q_1+Q_2+P+P^{\prime}\vert$, where $Q_1$ and $Q_2$ are points on $C$ such that $Q_1+Q_2\notin \vert K_C\vert$. \[scrolltypeV1\] Given $\vert D\vert=\vert K_C+P\vert$, the scroll type of $V_{\vert D\vert}$ is equal to $\left(\frac{d-4}{2},\frac{d-4}{2},0\right)$ if and only if $\vert H_C-\frac{d-2}{2}K_C-P\vert=\emptyset$. Let $\vert H_C-\frac{d-2}{2}K_C-P\vert=\emptyset$. Projecting $C$ from $P$ yields a curve $C^{\prime}$ of genus $2$ and degree $d-1$ which is embedded in $\mathbf P^{d-3}$ with the linear system $\vert H^{\prime}\vert:=\vert H_C-P\vert$. Under this projection the scroll $V_{\vert D\vert}$ maps to the $g^1_2(C^{\prime})$-scroll $S^{\prime}$. The scroll $V_{\vert D\vert}$ is thus the cone over $S^{\prime}$ with $P$ as vertex, so if $S^{\prime}$ is of scroll type $(e_1,e_2)$, then $V_{\vert D\vert}$ is of scroll type $(e_1,e_2,0)$.\ Since $\vert H^{\prime}-\frac{d-2}{2}K_C\vert=\emptyset$, we obtain by Proposition \[scrolltypeS2\] that the scroll type of $V_{\vert D\vert}$ is equal to $\left(\frac{d-4}{2},\frac{d-4}{2},0\right)$. Conversely, if $\vert H_C-\frac{d-2}{2}K_C-P\vert\neq \emptyset$, then the same procedure as above, i.e. projecting from $P$, yields $\vert H^{\prime}-\frac{d-2}{2}K_C\vert\neq \emptyset$, i.e. by Proposition \[scrolltypeS2\] the scroll type of $V_{\vert D\vert}$ is equal to $\left(\frac{d-2}{2},\frac{d-6}{2},0\right)$. ### The case when the degree $d$ of the curve $C$ is odd If $d$ is odd, then we can write either $\vert H_C\vert=\vert \frac{d-1}{2}K_C+Q\vert=\vert \frac{d-3}{2}K_C+Q+P+P^{\prime}\vert$ or $\vert H_C\vert=\vert \frac{d-3}{2}K_C+\sum_{i=1}^3Q_i\vert=\vert \frac{d-5}{2}K_C+\sum_{i=1}^3Q_i+P+P^{\prime}\vert$, where $Q_1$, $Q_2$ and $Q_3$ are points on $C$ such that $Q_i+Q_j\notin \vert K_C\vert$ for all $i,j\in \{1,2,3\}$, $i\neq j$. \[scrolltypeV2\] The scroll type of the $g^1_3(C)$-scroll $V_{\vert D\vert}$ associated to $\vert D\vert=\vert K_C+P\vert$ is equal to $\left(\frac{d-3}{2},\frac{d-5}{2},0\right)$ if and only if $\vert H_C-\frac{d-1}{2}K_C-P\vert=\emptyset$. We use the same idea as in the proof of Proposition \[scrolltypeV1\]. Let $\vert H_C-\frac{d-1}{2}K_C-P\vert=\emptyset$. Projecting $C$ from $P$ yields a curve $C^{\prime}$ of genus $2$ in $\mathbf P^{d-3}$, embedded with the system $\vert H^{\prime}\vert:=\vert H_C-P\vert$. Since $\vert H^{\prime}-\frac{d-1}{2}K_C\vert=\emptyset$, by Proposition \[scrolltypeS1\] the scroll type of $V_{\vert D\vert}$ is equal to $\left(\frac{d-3}{2},\frac{d-5}{2},0\right)$. Conversely, if $\vert H_C-\frac{d-1}{2}K_C-P\vert\neq \emptyset$, then $\vert H^{\prime}-\frac{d-1}{2}K_C\vert\neq\emptyset$. By Proposition \[scrolltypeS2\] we obtain that the scroll type of $V_{\vert D\vert}$ is equal to $\left(\frac{d-1}{2},\frac{d-7}{2},0\right)$. In order to illustrate Propositions \[di\] and \[scrolltypeV2\] we study the case $d=7$, i.e. let now $C\subseteq \mathbf P^5$ be a smooth curve of genus $2$ and degree $7$. Below we list all scroll types of a $g^1_3(C)$-scroll $V_{\vert D\vert}$ in relation to $\vert H_C\vert$. 
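Since the two propositions above reduce to a parity case distinction, they can be summarized compactly; the following short Python sketch (ours, with a hypothetical helper name, not part of the paper) tabulates them and can be checked against the $d=7$ cases listed below.

```python
def singular_scroll_type(d, residual_empty):
    """Scroll type of V_{|D|} for |D| = |K_C + P|, where d = deg H_C >= 6.

    residual_empty: True if |H_C - (d-2)/2 K_C - P| (d even), respectively
    |H_C - (d-1)/2 K_C - P| (d odd), is empty.
    """
    if d % 2 == 0:
        if residual_empty:
            return ((d - 4) // 2, (d - 4) // 2, 0)
        return ((d - 2) // 2, (d - 6) // 2, 0)
    if residual_empty:
        return ((d - 3) // 2, (d - 5) // 2, 0)
    return ((d - 1) // 2, (d - 7) // 2, 0)

print(singular_scroll_type(7, True))   # (2, 1, 0), cf. the d = 7 table below
print(singular_scroll_type(7, False))  # (3, 0, 0), the case |H_C| = |3K_C + P|
print(singular_scroll_type(8, True))   # (2, 2, 0)
```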
In our description we will consider the systems $\vert H_C-3K_C\vert$ and $\vert H_C-2D\vert$. Both are of degree $1$, and thus each of these can either be empty or consist of one point. $$\begin{array}{|l|l|c|c|c|} \hline \begin{array}{l} \textrm{Conditions on}\\ \textrm{the basepoint}\\ \textrm{locus of }\vert D\vert\\ \end{array} & \begin{array}{c} \hspace{1.35cm} \vert H_C\vert \\ \end{array} & \begin{array}{c} \vert H_C-2D\vert\\ \end{array} & \begin{array}{c} \vert H_C-3K_C\vert \\ \end{array} & \begin{array}{c} \textrm{Scroll type} \\ \textrm{of }V_{\vert D\vert}\\ \end{array} \\ \hline\hline \begin{array}{l} \textrm{No basepoints}\\ \end{array} & \begin{array}{c} \vert D+2K_C\vert\\ \end{array} & \begin{array}{c} \emptyset \\ \end{array} & \begin{array}{c} \emptyset \\ \end{array} & \begin{array}{c} (1,1,1)\\ \end{array}\\ \hline \begin{array}{l} \textrm{No basepoints}\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ \vert D-Q_1-Q_2\vert =\emptyset,\\ \vert D-Q_1^{\prime}-Q_2^{\prime}\vert=\emptyset,\\ \textrm{where}\\ \vert Q_1^{\prime}\vert=\vert K_C-Q_1\vert,\\ \vert Q_2^{\prime}\vert=\vert K_C-Q_2\vert\\ \end{array} & \begin{array}{c} \emptyset \\ \end{array} & \begin{array}{c} \emptyset \\ \end{array} & \begin{array}{c} (1,1,1)\\ \end{array}\\ \hline \begin{array}{l} \textrm{No basepoints}\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ \vert D-Q_1-Q_2\vert =\emptyset,\\ \vert D-Q_1^{\prime}-Q_2^{\prime}\vert\neq\emptyset,\\ \textrm{where}\\ \vert Q_1^{\prime}\vert=\vert K_C-Q_1\vert,\\ \vert Q_2^{\prime}\vert=\vert K_C-Q_2\vert\\ \end{array} & \begin{array}{c} \emptyset \\ \end{array} & \begin{array}{c} \vert D-Q_1^{\prime}-Q_2^{\prime}\vert\\ (\textrm{non-empty}) \\ \end{array} & \begin{array}{c} (1,1,1)\\ \end{array}\\ \hline \end{array}$$ $$\begin{array}{|c|l|c|c|c|} \hline \begin{array}{l} \textrm{Conditions on}\\ \textrm{the basepoint}\\ \textrm{locus of }\vert D\vert\\ \end{array} & \begin{array}{c} \hspace{1.625cm} \vert H_C\vert \\ \end{array} & \begin{array}{c} \vert H_C-2D\vert\\ \end{array} & \begin{array}{c} \vert H_C-3K_C\vert \\ \end{array} & \begin{array}{c} \textrm{Scroll type} \\ \textrm{of }V_{\vert D\vert}\\ \end{array} \\ \hline\hline \begin{array}{l} \textrm{No basepoints}\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ \vert D-Q_1-Q_2\vert \neq\emptyset,\\ \vert D-Q_1^{\prime}-Q_2^{\prime}\vert=\emptyset,\\ \textrm{where}\\ \vert Q_1^{\prime}\vert=\vert K_C-Q_1\vert,\\ \vert Q_2^{\prime}\vert=\vert K_C-Q_2\vert\\ \end{array} & \begin{array}{c} \textrm{non-empty}\\ \end{array} & \begin{array}{c} \emptyset\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array}\\ \hline \begin{array}{l} \textrm{No basepoints}\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ \vert D-Q_1-Q_2\vert \neq\emptyset,\\ \vert D-Q_1^{\prime}-Q_2^{\prime}\vert\neq\emptyset,\\ \textrm{where}\\ \vert Q_1^{\prime}\vert=\vert K_C-Q_1\vert,\\ \vert Q_2^{\prime}\vert=\vert K_C-Q_2\vert\\ \end{array} & \begin{array}{c} \textrm{non-empty}\\ \end{array} & \begin{array}{c} \vert D-Q_1^{\prime}-Q_2^{\prime}\vert\\ (\textrm{non-empty})\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array}\\ \hline \begin{array}{l} \textrm{One basepoint }P\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert\\ =\vert 2K_C+P+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ P\neq Q_i\\ \textrm{for all }i\in\{1,2\},\\ P+Q_i \notin \vert 
K_C\vert\\ \textrm{for all } i\in\{1,2\}\\ \end{array} & \begin{array}{c} \emptyset\\ \end{array} & \begin{array}{c} \emptyset\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array} \\ \hline \begin{array}{l} \textrm{One basepoint }P\\ \end{array} & \begin{array}{l} \vert D+K_C+Q_1+Q_2\vert\\ =\vert 2K_C+P+Q_1+Q_2\vert,\\ Q_1+Q_2\notin \vert K_C\vert,\\ P\neq Q_i\textrm{ for all}\\ i\in\{1,2\},\\ P+Q_i \in \vert K_C\vert\\ \textrm{for some } i\in\{1,2\}\\ \end{array} & \begin{array}{c} \emptyset\\ \end{array} & \begin{array}{l} \textrm{non-empty},\\ \textrm{different}\\ \textrm{from }P\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array}\\ \hline \begin{array}{l} \textrm{One basepoint }P\\ \end{array} & \begin{array}{l} \vert 2D+Q\vert\\ =\vert 2K_C+2P+Q\vert,\\ P+Q\notin \vert K_C\vert,\\ 2P \notin \vert K_C\vert\\ \end{array} & \begin{array}{c} Q\\ (\textrm{non-empty})\\ \end{array} & \begin{array}{c} \emptyset\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array}\\ \hline \begin{array}{l} \textrm{One basepoint }P\\ \end{array} & \begin{array}{l} \vert 2D+Q\vert\\ =\vert 2K_C+2P+Q\vert,\\ P+Q\notin \vert K_C\vert,\\ P\neq Q,\\ 2P\in \vert K_C\vert\\ \end{array} & \begin{array}{c} Q\\ (\textrm{non-empty})\\ \end{array} & \begin{array}{l} \hspace{0.8cm} Q\\ (\textrm{non-empty},\\ \textrm{different}\\ \textrm{from } P)\\ \end{array} & \begin{array}{c} (2,1,0)\\ \end{array}\\ \hline \begin{array}{l} \textrm{One basepoint }P\\ \end{array} & \begin{array}{l} \vert 3K_C+P\vert\\ \end{array} & \begin{array}{c} \vert K_C-P\vert\\ (\textrm{non-empty})\\ \end{array} & \begin{array}{c} P\\ \end{array} & \begin{array}{c} (3,0,0)\\ \end{array}\\ \hline \end{array}$$ The ideal of $C$ as a sum of scrollar ideals {#idealC} ============================================ In this section we will show that the ideal $I_C$ of a linearly normal embedded curve $C\subseteq \mathbf P^{d-2}$ of genus $2$ and degree $d\geq 6$ is generated by the ideals $I_S$ and $I_V$, where $S$ is the $g^1_2(C)$-scroll and $V=V_{\vert D\vert}$ is a $g^1_3(C)$-scroll not containing $S$. In other words, we will prove the following main theorem in this section: \[idealSV\] Let $C$ be a smooth and irreducible curve of genus $2$, linearly normal embedded in $\mathbf P^{d-2}$ with a complete linear system $\vert H_C\vert$ of degree $d\geq 6$. Moreover, let $S$ be the $g^1_2(C)$-scroll, and let $V$ be a $g^1_3(C)$-scroll that does not contain $S$. Then we have $$I_S+I_V=I_C.$$ We see that in this section we are only interested in $g^1_3(C)$-scrolls $V_{\vert D\vert}$ that do not contain the $g^1_2(C)$-scroll $S$. For this purpose we will now give a criterion for when a given $g^1_3(C)$-scroll $V_{\vert D\vert}$ does not contain the $g^1_2(C)$-scroll $S$: \[SinV\] Let $C\subseteq \mathbf P^{d-2}$ be a curve of genus $2$ and degree $d\geq 6$, embedded with the system $\vert H_C\vert$, and let $S$ be the $g^1_2(C)$-scroll. A $g^1_3(C)$-scroll $V=V_{\vert D\vert}$ contains $S$ if and only if at least one of the following holds: - $\vert D\vert$ has a basepoint, - $d=6$ and $\vert H_C-D\vert$ has a basepoint or - $d=7$ and $\vert H_C\vert=\vert D +2K_C\vert$. If $\vert D\vert$ has a basepoint $P$, then $\vert D\vert=\vert K_C+P\vert$, hence each fiber of $S$ is contained in a fiber of $V_{\vert D\vert}$, and consequently $V_{\vert D\vert}$ contains $S$. 
Conversely, if $S\subseteq V_{\vert D\vert}$ and $\vert D\vert$ is basepoint-free, then each fiber of $V_{\vert D\vert}$ intersects each fiber of $S$ in one point, since if it did not, then each fiber of $S$ had to be contained in a fiber of $V$ which meant that $\vert D\vert$ had a basepoint. This implies that each fiber of $V_{\vert D\vert}$, which is a plane, intersects the scroll $S$ in a directrix curve of $S$. This curve is a smooth rational planar curve, consequently the degree of this curve is equal to $1$ or $2$. This means that, since the degree of $C$ is greater or equal to $6$, the scroll type of $S$ is equal to $(2,1)$ or $(2,2)$, i.e. $d=6$ or $d=7$. Since each fiber of $V_{\vert D\vert}$ intersects each fiber of $S$ in one point, every divisor in $\vert D+K_C\vert$ spans a $\mathbf P^3$ and thus $h^0(H-(D+K_C))=d-5$, which implies that in the case $d=6$ we obtain that $\vert H-D-K_C\vert$ contains one point, and that in the case $d=7$, $\vert H-D-K_C\vert=\vert K_C\vert$. Before we will prove Theorem \[idealSV\] we show that $S\cap V=C$ for any $g^1_3(C)$-scroll $V$ that does not contain $S$: \[interSV\] Let $C\subseteq \mathbf P^{d-2}$ be a smooth and irreducible linearly normal curve of genus $2$ and degree $d\geq 6$. For a $g^1_3(C)$-scroll $V=V_{\vert D\vert}$ that does not contain the $g^1_2(C)$-scroll $S$ the following holds: $$S\cap V=C.$$ Obviously, $C\subseteq S\cap V$. In the case $d=6$ the claim follows by Bézout’s Theorem, since then $S\cap V$ is of degree $6$ and dimension $1$. Let now $d\geq 7$. If $S\cap V$ is more than $C$, then it must at least contain one line: If $S\cap V\supseteq C\cup P$ for a point $P$ that does not lie on $C$, then $P$ lies on one fiber $F_0$ of the scroll $S$. But since $P$ does not lie on the curve, each quadric that contains $V$ intersects $F_0$ in at least three points, consequently the whole fiber $F_0$ must be contained in each quadric that contains $V$, and since the ideal $I_V$ is generated by quadrics, $F_0$ is contained in $S\cap V$. Now there are a priori two possibilities for a fiber $F$ of $S$ to be contained in $V$: - $F$ is contained in one of the fibers of $V=V_{\vert D\vert}$; this implies that the system $\vert D\vert$ has a basepoint, $\vert D\vert=\vert K_C+P\vert$, and consequently $S\subseteq V$. - $F$ is intersecting each fiber of $V$ in one point. Since $F$ is a fiber of $S$, the point of intersection lies on $C$ for exactly two fibers of $V$. Projecting $C$ from $F$ yields a curve $C^{\prime}$ of genus $2$ and degree $d-2$, linearly normal embedded in $\mathbf P^{d-4}$ with the linear system $\vert H_C-K_C\vert$. The curve $C^{\prime}$ lies on the scrollar surface $S^{\prime}$ which is the image of $V$ under the projection from $F$. A general fiber of $V$ is projected to a fiber in $S^{\prime}$, and the three points in the intersection of $C$ with a general fiber in $V$ are projected to three points on a fiber in $S^{\prime}$, which is impossible by Corollary \[notrisecant\], unless $C$ was a curve of degree $7$ in $\mathbf P^5$. If $C$ is a curve of degree $7$ that projects to a curve $C^{\prime}$ of degree $5$ on $\mathbf P^1\times \mathbf P^1$, then $\vert H_C\vert=\vert D+2K_C\vert$, and the $g^1_3(C)$-scroll $V=V_{\vert D\vert}$ contains the $g^1_2(C)$-scroll $S$ by Proposition \[SinV\]. This proves that the intersection $S\cap V$ cannot contain any line, i.e. in total we obtain $S\cap V=C$. 
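Before turning to the proof, it is convenient to record the dimension count that enters the induction start for $d=7$ below; this is a routine verification (ours), using that $C\subseteq\mathbf P^5$ has degree $7$ and genus $2$ and is projectively normal: $$h^0(\mathcal I_C(2))=h^0(\mathcal O_{\mathbf P^5}(2))-h^0(\mathcal O_C(2H_C))=\binom{7}{2}-(14-2+1)=21-13=8.$$ Moreover, $S$ and $V$ are varieties of minimal degree in $\mathbf P^5$ (of codimension $3$ and $2$, respectively), so the standard quadric counts give $h^0(\mathcal I_S(2))=6$ and $h^0(\mathcal I_V(2))=3$, consistent with the equality $h^0(\mathcal I_S(2))+h^0(\mathcal I_V(2))-1=8$ used in the induction start.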
***Proof of Theorem \[idealSV\]***:\ Let $S$ be the $g^1_2(C)$-scroll, and let $V=V_{\vert D\vert}$ be a $g^1_3(C)$-scroll that does not contain $S$. There is the following short exact sequence of ideal sheaves: $$0\to \mathcal I_{S\cup V}\to \mathcal I_V\to \mathcal I_{S\cap V}\vert_S\to 0.$$ By Propositon \[interSV\] we have $S\cap V=C$, and moreover we know that $\mathcal I_C\vert_S=\mathcal O_S(-C)$. We thus obtain the following short exact sequence: $$0\to \mathcal I_{S\cup V}\to \mathcal I_V\to \mathcal O_{S}(-C)\to 0.$$ Tensoring with $\mathcal O_{\mathbf P^{d-2}}(2H)$ and restricting yields the following exact sequence: $$0\to \mathcal I_{S\cup V}(2H)\to \mathcal I_{V}(2H)\to \mathcal O_{S}(2H-C)\to 0.$$ Taking the long exact sequence in cohomology yields $$0\to H^0(\mathcal I_{S\cup V}(2))\to H^0(\mathcal I_{V}(2))\to H^0(\mathcal O_{S}(2H-C))\to H^1(\mathcal I_{S\cup V}(2))\to 0.$$ Note that $h^1(\mathcal I_V(2))=0$ since $V$ is projectively normal. Since $[C]=2H-(d-6)F$ on $S$, we can write the above sequence in the following form: $$0\to H^0(\mathcal I_{S\cup V}(2))\to H^0(\mathcal I_{V}(2))\overset{\psi}{\to} H^0(\mathcal O_{S}((d-6)F))\to H^1(\mathcal I_{S\cup V}(2))\to 0.$$ \ Our aim is now to show the following claim: \[psisurjective\] For each $\vert D\vert\in G^1_3(C)=\{g^1_3(C)'s\}$ such that $V_{\vert D\vert}$ does not contain $S$, the map $\psi: H^0(\mathcal I_{V}(2))\to H^0(\mathcal O_{S}((d-6)F))$ defined via $$\psi(Q):=\left\{ \begin{array}{ccc} 0& \textrm{ if } & S\subseteq Q,\\ Q\cap S-C\in \vert (d-6)F\vert&\textrm{ if }& S\not\subseteq Q\\ \end{array} \right.$$ is surjective. If the claim is true, then we have $h^1(\mathcal I_{S\cup V}(2))=0$, and thus the short exact sequence $$0\to \mathcal I_{S\cup V}(2)\to \mathcal I_{S}(2)\oplus \mathcal I_{V}(2)\to \mathcal I_{\underbrace{S\cap V}_{=C}}(2)\to 0$$ gives the following long exact sequence in cohomology: $$0\to H^0(\mathcal I_{S\cup V}(2))\to H^0(\mathcal I_{S}(2))\oplus H^0(\mathcal I_{V}(2))\to H^0(\mathcal I_{C}(2))\to 0.$$ This implies that $$\begin{aligned} h^0(\mathcal I_C(2))&=&\dim(H^0(\mathcal I_{S}(2))\oplus H^0(\mathcal I_{V}(2)))-h^0(\mathcal I_{S\cup V}(2))\\ &=&\dim(H^0(\mathcal I_{S}(2))+H^0(\mathcal I_{V}(2))).\end{aligned}$$ This argument implies that, since $H^0(\mathcal I_{S}(2))+H^0(\mathcal I_{V}(2))\subseteq H^0(\mathcal I_{C}(2))$, $$H^0(\mathcal I_{S}(2))+H^0(\mathcal I_{V}(2))= H^0(\mathcal I_{C}(2)),$$ but since all $I_S$, $I_V$ and $I_C$ are generated by quadrics, we obtain $I_S+I_V=I_C$.\ *Proof of Claim \[psisurjective\]*: Now we will prove by induction that the map $$\psi:H^0(\mathcal I_V(2))\to H^0(\mathcal O_S((d-6)F))$$ as defined above is surjective:\ *The induction start: $d=6$ and $d=7$*:\ For $d=6$ the surjectivity of $\psi$ is obvious. More precisely, if $C\subseteq \mathbf P^4$ is a curve of degree $6$, and if $\vert D\vert$ is a basepoint-free $g^1_3(C)$ such that $\vert H_C-D\vert$ is basepoint-free as well, then by Proposition \[SinV\] the scroll $V_{\vert D\vert}=:Q_6$ is a quadric that does not contain the $g^1_2(C)$-scroll $S$.\ For a curve $C\subseteq \mathbf P^5$ of degree $7$ let $V_{\vert D\vert}$ be a $g^1_3(C)$-scroll that does not contain the $g^1_2(C)$-scroll $S$. For any two quadrics $Q_1\neq Q_2$ in $\mathbf P^5$ their intersection $Q_1\cap Q_2$ is a complete intersection of dimension $3$ and degree $4$, hence if $Q_1$ and $Q_2$ both contained $S$ and $V$, then we have $Q_1\cap Q_2=V\cup \mathbf P^3$. 
Since $S\subseteq Q_1\cap Q_2$ and $S$ is irreducible, we must have $S\subseteq V$ or $S\subseteq \mathbf P^3$, but since $S$ spans all of $\mathbf P^5$ and by hypothesis $S$ is not contained in $V$, both cases are impossible.\ This shows that $h^0(\mathcal I_{S\cup V}(2))\leq 1$, and consequently we obtain $$\dim(H^0(\mathcal I_S(2))+H^0(\mathcal I_V(2)))\geq h^0(\mathcal I_S(2))+h^0(\mathcal I_V(2))-1=8=h^0(\mathcal I_C(2)),$$ and thus $\psi$ is surjective.\ *The induction step: $d\geq 8$*:\ Pick $d-8$ fibers $F_1,\ldots, F_{d-8}$ on $S$. Let $R_1$ and $R_2$ be two points on $C$ such that $R_1+R_2$ is not a divisor in $\vert K_C\vert$ and such that $\vert R_i\vert\neq \vert H_C-D-2K_C\vert$, $i=1,2$, in the case if $d=8$ and the linear system $\vert H_C-D-2K_C\vert$ is non-empty. Moreover, let $R_1^{\prime}$ and $R_2^{\prime}$ be two points on $C$ such that $R_1+R_1^{\prime}$ and $R_2+R_2^{\prime}$ are divisors in $\vert K_C\vert$. Projecting $C$ from the line $L_R$ spanned by $R_1$ and $R_2$ yields a curve $C^{\prime}$ of degree $d-2$ in $\mathbf P^{d-4}$, embedded with the system $\vert H_C-R_1-R_2\vert$. Under this projection the $g^1_2(C)$-scroll $S$ maps to the $g^1_2(C^{\prime})$-scroll $S^{\prime}$, and the scroll $V_{\vert D\vert}$ maps to a $g^1_3(C^{\prime})$-scroll $V_{\vert D\vert}^{\prime}$ that does not contain $S^{\prime}$. Notice that the choice of $R_1$ and $R_2$ ensures that $V_{\vert D\vert}^{\prime}$ does not contain $S^{\prime}$ also in the cases $d=8$, $\vert H_C-D-2K\vert\neq \emptyset$ and $d=9$, $\vert H_C\vert=\vert D+3K_C\vert$. (In general it would not have been clear that the system $\vert H_C-R_1-R_2\vert$ does not belong to the cases given in Proposition \[SinV\].)\ By the induction hypothesis we find a quadric $Q_{d-2}\subseteq \mathbf P^{d-4}$ which contains $V_{\vert D\vert}^{\prime}$ but not $S^{\prime}$, and which contains the fibers $F_1^{\prime},\ldots, F_{d-8}^{\prime}$, where $F_i^{\prime}$, $i=1,\ldots, d-8$, denotes the image of $F_i$ under the projection. The cone over $Q_{d-2}$ with the line $L_R$ as vertex is then a quadric $Q_d$ in $\mathbf P^{d-2}$ which contains $V_{\vert D\vert}$ and not $S$. Moreover, $Q_d$ contains the fibers $F_1,\ldots, F_{d-8}$ and two more fibers of $S$: The fiber $F_{R_1}$ spanned by $R_1$ and $R_1^{\prime}$ intersects the quadric $Q_d$ in three points, counted with multiplicity: The quadric $Q_d$ intersects this line in at least the two points $R_1$ and $R_1^{\prime}$, and since the quadric is singular along the line $L_R$, the quadric $Q_d$ intersects $F_{R_1}$ in the point $R_1$ with at least multiplicity $2$. Consequently $Q_d$ must contain the fiber $F_{R_1}$. The same argument applies to the fiber $F_{R_2}$ which is spanned by the points $R_2$ and $R_2^{\prime}$.\ By degree reasons $Q_d\cap S$ cannot contain more than $C$, $F_{R_1}$, $F_{R_2}$ and $F_1,\ldots, F_{d-8}$. Consequently, $\psi(Q_d)=F_{R_1}\cup F_{R_2}\cup F_1 \cup \ldots \cup F_{d-8}$. Since the divisors $F_{R_1}+F_{R_2}+F_1+\cdots +F_{d-8}$, where $R_1$ and $R_2$ run through all points on $C$ and $F_1,\ldots, F_{d-8}$ run through all fibers of $S$, span the linear system $\vert (d-6)F\vert$, the linearity of $\psi$ together with varying the points $R_1$ and $R_2$ and the fibers $F_1,\ldots, F_{d-8}$ yields the surjectivity of $\psi$. Acknowledgements {#acknowledgements .unnumbered} ---------------- This paper arose from parts of my Ph.D. thesis. 
I wish to thank my advisor Kristian Ranestad for many interesting discussions and very helpful feedback. [99]{} M.L. Green: Koszul cohomology and the geometry of projective varieties, [*Journal of Differential Geometry*]{} [**19**]{} (1984) 125–171. M.L. Green: Quadrics of rank four in the ideal of a canonical curve, [*Inventiones mathematicae*]{} [**75(1)**]{} (1984) 85–104. R. Hartshorne: [*Algebraic Geometry*]{}, Springer Verlag, 1977 M. Hazewinkel and C.F. Martin: A short elementary proof of Grothendieck’s theorem on algebraic vector bundles over the projective line, [*Journal of Pure and Applied Algebra*]{} [**25(2)**]{} (1982) 207–211. F.-O. Schreyer: Syzygies of canonical curves and special linear series, [*Mathematische Annalen*]{} [**275(1)**]{} (1986) 105–137. J. Stevens: Rolling factors deformations and extensions of canonical curves, [*Documenta Mathematica*]{} [**7**]{} (2002) 185–226. H.-C. v. Bothmer: Scrollar Syzygies of general canonical curves with genus at most $8$, [*Trans. Amer. Math. Soc.*]{} [**359**]{} (2007) 465–488. H.-C. v. Bothmer: Geometrische Syzygien von kanonischen Kurven, [*Dissertation, Universität Bayreuth*]{} (2000) H.-C. v. Bothmer and K. Hulek: Geometric syzygies of elliptic normal curves and their secant varieties, [*Manuscripta Mathematica*]{} [**113(1)**]{} (2004) 35–68.
--- abstract: 'The rate for complete two-photon annihilation of molecular positronium Ps$_{2}$ is reported. This decay channel involves a four-body collision among the fermions forming Ps$_{2}$, and two photons of 1.022 MeV, each, as the final state. The quantum electrodynamics result for the rate of this process is found to be $\Gamma_{Ps_{2} \rightarrow \gamma\gamma}$ = 9.0 $\times 10^{-12}$ s$^{-1}$. This decay channel completes the most comprehensive decay chart for Ps$_{2}$ up to date.' author: - 'Jesús Pérez-Ríos' - 'Sherwin T. Love' - 'Chris H. Greene' bibliography: - 'Ps2.bib' title: 'Two-photon total annihilation of molecular positronium' --- Introduction ============ Positronium or Ps, is the bound state of an electron and its antiparticle, the positron, forming a metastable hydrogen-like atom [@Positron-Physics]. In the 1940’s, Wheeler speculated that two Ps atoms may form molecular positronium Ps$_{2}$, in analogy with two hydrogen atoms that can combine to form molecular hydrogen [@Wheeler-1946]. In the same decade, calculations of the binding energy of Ps$_{2}$ were carried out, and it turned out to be 0.4 eV [@Hylleraas-1947], supporting Wheeler’s prediction. More recently, in 2007, Cassidy and Mills reported the first observation of molecular positronium [@Cassidy-2007]. Molecular positronium can decay to different final states or channels. The characterization of the decay channels is essential in order to estimate its lifetime. Moreover, the complete characterization of the Ps$_{2}$ decay channels and their partial widths could lead to the design of efficient detection schemes for this molecule. For a bound state, the total annihilation rate $\Gamma$ is determined as the sum of partial annihilation rates associated with each allowed decay channel $\Gamma_{i}$, i.e., $\Gamma=\sum_{i=1,N}\Gamma_{i}$, where $N$ represents the total number of decay channels. Each of the $\Gamma_{i}$ has to be computed by including all the topologically distinct Feynman diagrams associated with such channels, and in some cases, it can be important to include radiative corrections. Frolov has reported the most complete chart of decay channels as well as partial annihilation rates up to date [@Frolov-2009], including all the main decay channels, going from zero photon decay up to the 5-photon decay channel. However, a higher order decay channel of Ps$_{2}$ involving two-photons as the final state has not been considered in any estimation of the Ps$_{2}$ lifetime, and apparently never previously contemplated as a possible decay channel. The present study reports the calculation of the two-photon complete annihilation rate of Ps$_{2}$, in which two electrons and two positrons annihilate simultaneously, producing two photons of 1.022 MeV energy each. The calculations have been carried out by using standard techniques of quantum field theory, such as the Feynman rules and trace technology [@Peskin]. This decay channel completes the decay chart of Ps$_{2}$, previously reported in part by Frolov [@Frolov-2009], besides the six-photon and seven-photon decay channels. While this decay is rare, it is worth mentioning that it provides a unique experimental signature of the presence of molecular positronium. Two-photon annihilation of Ps$_{2}$ =================================== The annihilation of Ps$_{2}$ into two photons (denoted here as Ps$_{2} \rightarrow \gamma\gamma$) is governed by eight topological distinct Feynman diagrams. Four of them are shown in Fig.1. 
The rest of the diagrams emerge as cross terms of the ones shown in Fig. 1, [*i.e.*]{}, with the momenta of the outgoing photons interchanged. Fig. 1 shows that the decay channel at hand is a four-body event, where the energy-momentum vectors of the incoming fermions are labelled $p_{1}$, $p_{2}$, $p_{3}$, and $p_{4}$, whereas the energy-momenta of the outgoing photons are labelled $k_{1}$ and $k_{2}$. Here, the energy-momentum vectors are represented as $(E,\vec{p})$, and natural units ($\hbar=c$ = 1, with $\alpha$ = 1/137 the fine-structure constant) are assumed. The momenta of the electrons and positrons in Ps$_{2}$ are very small in comparison with their rest mass energy. Hence the binding energy of the Ps$_{2}$ molecule is negligible in comparison with the rest mass energy of any of its constituents. Therefore, the first non-vanishing term in the amplitude expansion can be obtained by replacing each initial energy-momentum vector with $(m_{e},0,0,0)$, where $m_{e}$ is the electron mass. Thus, in this approximation, the transition probability does not depend on the initial momenta $\vec{p}_{i}$. Within this approximation it is possible to establish a relationship between the annihilation rate and the probability of finding the four fermions of the Ps$_{2}$ molecule at the same point in space. This information can be determined by generalizing the method employed for the calculation of the electron-positron annihilation rate of Ps [@Peskin], but going beyond the two-body perspective of that reference. This generalization leads to: $$\begin{aligned} \label{eq-1} \Gamma_{Ps_{2}\rightarrow \gamma \gamma}=\frac{|\Psi_{Ps_{2}}\left( 0,0,0,0 \right)|^{2}}{4} \int \frac{d^{3}\vec{k}_{1}}{\left( 2\pi \right)^{3}2|\vec{k}_{1}|} \frac{d^{3}\vec{k}_{2}}{\left( 2\pi \right)^{3}2|\vec{k}_{2}|}\nonumber \\ \left( 2\pi \right)^{4}\frac{\delta^{(4)}\left(p_{1}+p_{2}+p_{3}+p_{4}-k_{1}-k_{2}\right)}{\prod_{i=1,4}(2E_{i})} |\mathcal{M}|^{2}. \nonumber \\ \end{aligned}$$

[Figure 1: four of the eight topologically distinct Feynman diagrams contributing to Ps$_{2}\rightarrow\gamma\gamma$; the remaining four are obtained by interchanging the outgoing photon momenta $k_{1}$ and $k_{2}$.]

The quantity $|\Psi_{Ps_{2}}\left( 0,0,0,0 \right)|^{2}$ represents the probability of finding the four fermions at the same point in position space. Some details about its calculation are given below. Eq. (\[eq-1\]) can be viewed as an extension of the generalization of Kryuchkov [@Kryuchkov-1994], where a three-body initial state was taken into account for the single-photon decay of Ps$^{-}$. $\mathcal{M}$ represents the transition matrix associated with the decay channel, and therefore $|\mathcal{M}|^{2}$ represents the probability for such a transition.
It is obtained by averaging the squared modulus of the total amplitude $\mathcal{A}$ over the spin states of the incoming particles \[e$^{-}$($p_{1}$,$s_{1}$),e$^{+}$($p_{2}$,$s_{2}$) ,e$^{-}$($p_{3}$,$s_{3}$),e$^{+}$($p_{4}$,$s_{4}$), here $s_{i}$ represents the spin of each particle\] and by summing over the polarizations of the outgoing particles \[$\epsilon(k_{1})$, $\epsilon({k_{2}})$, here $\epsilon(k_{i})$ denotes the polarization of each photon\], [*i.e.*]{}, $$\label{eq-2} |\mathcal{M}|^{2}=\sum_{\epsilon(k_{1})}\sum_{\epsilon(k_{2})}\frac{1}{2^{4}}\sum_{s_{1}} \sum_{s_{2}}\sum_{s_{3}}\sum_{s_{4}}|\mathcal{A}|^{2}.$$ The amplitude $\mathcal{A}$ associated with the decay channel Ps$_{2}\rightarrow \gamma \gamma$ contains eight terms, each of them associated with every Feynman diagram that contributes to the process (see Fig.1). The amplitude is given by $$\begin{aligned} \label{eq-3} \mathcal{A}=e^{4}\bigg[ \bar{v}(p_{4},s_{4})\gamma^{\lambda} \epsilon_{\lambda}(k_{1}) \frac{\slashed{p}_{4}-\slashed{k}_{2}+m_{e}}{(p_{4}-k_{2})^{2}-m_{e}^{2}}\gamma^{\nu} \nonumber \\ \times \frac{\slashed{p}_{3}-\slashed{k}_{1}+m_{e}}{(p_{3}-k_{1})^{2}-m_{e}^{2}}\gamma^{\sigma} \epsilon_{\sigma}(k_{1})u(p_{3},s_{3}) \nonumber \\ \times \frac{g_{\mu \nu}}{(p_{1}+p_{2})^{2}}\bar{v}(p_{2},s_{2})\gamma^{\mu}u(p_{1},s_{1})\nonumber \\ +\bar{v}(p_{4},s_{4})\gamma^{\nu} \frac{\slashed{p}_{3}-\slashed{k}_{1}-\slashed{k}_{2}+m_{e}}{(p_{3}-k_{1}-k_{2})^{2}-m_{e}^{2}} \gamma^{\lambda} \nonumber \\ \times \epsilon_{\lambda}(k_{2}) \frac{\slashed{p}_{3}-\slashed{k}_{1}+m_{e}}{(p_{3}-k_{1})^{2}-m_{e}^{2}}\gamma^{\sigma} \epsilon_{\sigma}(k_{1})u(p_{3},s_{3}) \nonumber \\ \times \frac{g_{\mu\nu}}{(p_{1}+p_{2})^{2}}\bar{v}(p_{2},s_{2})\gamma^{\mu}u(p_{1},s_{1}) \nonumber \\ +\bar{v}(p_{4},s_{4})\gamma^{\sigma}\epsilon_{\sigma}(k_{1}) \frac{\slashed{p}_{4}-\slashed{k}_{1}+m_{e}}{(p_{4}-k_{1})^{2}-m_{e}^{2}} \gamma^{\lambda} \nonumber \\ \times \epsilon_{\lambda}(k_{2}) \frac{\slashed{p}_{4}-\slashed{k}_{1}-\slashed{k}_{2}+m_{e}}{(p_{4}-k_{1}-k_{2})^{2}-m_{e}^{2}}\gamma^{\nu} u(p_{3},s_{3}) \nonumber \\ \times \frac{g_{\mu\nu}}{(p_{1}+p_{2})^{2}}\bar{v}(p_{2},s_{2})\gamma^{\mu}u(p_{1},s_{1}) \nonumber \\ +\bar{v}(p_{4},s_{4})\gamma^{\lambda}\epsilon_{\lambda}(k_{2}) \frac{\slashed{p}_{4}-\slashed{k}_{2}+m_{e}}{(p_{4}-k_{2})^{2}-m_{e}^{2}} \gamma^{\nu} \nonumber \\ \times u(p_{3},s_{3}) \frac{g_{\mu\nu}}{(p_{2}+p_{1}-k_{1})^{2}} \bar{v}(p_{2},s_{2}) \gamma^{\mu} \nonumber \\ \frac{\slashed{p}_{1}-\slashed{k}_{1}+m_{e}}{(p_{1}-k_{1})^{2}-m_{e}^{2}}\gamma^{\sigma} \epsilon_{\sigma}(k_{1})u(p_{1},s_{1}) + (k_{1}\leftrightarrow k_{2}) \bigg], \nonumber \\\end{aligned}$$ Here the Feynman gauge has been employed as well as the slashed notation, [*i.e.*]{}, $\slashed{p}=\gamma^{\nu}p_{\nu}$. The $\gamma$ matrices are related with the Dirac matrices as defined in Ref. [@Peskin]. Once the amplitude of the process is known $\mathcal{A}$, the transition probability associated with the decay channel at hand can be found, by means of Eq. (\[eq-2\]). 
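Before quoting the result of that calculation, it may help to anticipate its size; the following short Python sketch (our own numerical cross-check, not part of the original Mathematica computation) converts the closed-form rate of Eq. (\[eq-4\]) below into s$^{-1}$, assuming the wave-function density $|\Psi_{Ps_{2}}(0,0,0,0)|^{2}\approx 4.5\times10^{-6}\,\textrm{a}_{0}^{-9}$ obtained later in this section.

```python
import math

alpha = 1 / 137.0                 # fine-structure constant, as used in the text
m_e = 0.511e6 / 6.582e-16         # electron mass in s^-1 (m_e c^2 / hbar, eV over eV s)
psi2 = 4.5e-6                     # |Psi_Ps2(0,0,0,0)|^2 in units of a_0^{-9}

# In natural units a_0 = 1/(alpha m_e), so a_0^{-9} = (alpha m_e)^9 and Eq. (4) becomes
# Gamma = psi2 * (521/512) * (pi^3 / 2) * alpha^13 * m_e.
gamma = psi2 * (521 / 512) * (math.pi ** 3 / 2) * alpha ** 13 * m_e
print(f"{gamma:.1e} s^-1")        # ~9e-12 s^-1, consistent with the quoted rate
```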
The calculations needed are rather involved, so they have been undertaken using the software program Mathematica [@Mathematica], yielding $$\label{eq-4} \Gamma_{Ps_{2}\rightarrow \gamma \gamma}=|\Psi_{Ps_{2}}\left( 0,0,0,0 \right)|^{2} \frac{521}{512}\frac{\pi^{3}}{2}\frac{\alpha^{4}}{m_{e}^{8}}.$$ ![Jacobi coordinates for the four-body problem.[]{data-label="default"}](Jacobi_coordinates_4b.eps){width="10.cm"} The Ps$_{2}$ ground state wave function has been obtained by using hyperspherical coordinates [@Hyperspherical] in conjunction with an explicitly correlated Gaussian basis, following the method of Daily and Greene [@Daily-2014]. In particular, the Ps$_{2}$ wave function is described in terms of the Jacobi coordinates depicted in Fig. 2, $\Psi_{Ps_{2}}(\rho_{1},\rho_{2},\rho_{1,2},\rho_{CM})$. After neglecting the center-of-mass (CM) motion (since the interaction potential does not depend on $\rho_{CM}$) and using the adiabatic hyperspherical approximation, the wave function may be expressed as $\Psi_{Ps_{2}}(R,\Omega)$, where $R$ denotes the hyperradius and $\Omega$ labels the solid angle element associated with the eight hyperangles needed for the characterization of a four-body collision (neglecting the CM motion). Here the normalization condition for the wave function is $$\label{eq-5} \frac{\int |\Psi_{Ps_{2}}(R,0)|^{2}R^{8}dR}{\int d\Omega}=1.$$ Finally, taking into account that $|\Psi_{Ps_{2}}(0,0,0,0)|^{2}=\frac{|\Psi_{Ps_{2}}(0,0)|^{2}}{\sqrt{\Omega}}$, one finds $|\Psi_{Ps_{2}}(0,0,0,0)|^{2}$=4.5$\times 10^{-6}$ a$_{0}^{-9}$, with a$_{0}$ the Bohr radius. This value is in good agreement with the value reported previously by Frolov, 4.56 $\times 10^{-6}$ a$_{0}^{-9}$ [@Frolov-2009]. After inserting into Eq. (\[eq-4\]) the probability of finding the four fermions at the same point, $|\Psi_{Ps_{2}}(0,0,0,0)|^{2}$, together with the conversion between atomic and natural units, we find $\Gamma_{Ps_{2}\rightarrow \gamma \gamma}$ = 9.0 $\times 10^{-12}$ s$^{-1}$. This decay rate is smaller than those of the alternative decay channels explored thus far, which were previously reported by Frolov [@Frolov-2009]. Table I shows a comparison between the rate for the two-photon decay and all the decay channels previously reported. Table I implies that the rate reported here, although smaller than the rest, is still comparable with that of the zero-photon decay channel. This is related to the number of vertices in each decay channel. The zero-photon decay involves three vertices, whereas the two-photon decay channel requires four vertices. This difference implies that $|\mathcal{M}|^{2}$ has an extra factor of $\alpha$ for the case of two-photon decay, in comparison with the zero-photon decay.

  Decay Channel                                Decay rate (s$^{-1}$)
  -------------------------------------------- -----------------------
  $\Gamma_{0\gamma}$                           2.32 $\times 10^{-9}$
  $\Gamma_{1\gamma}$                           1.94 $\times 10^{-1}$
  $\Gamma_{2\gamma}$                           4.44 $\times 10^{9}$
  $\Gamma_{Ps_{2} \rightarrow \gamma\gamma}$   9.0 $\times 10^{-12}$
  $\Gamma_{3\gamma}$                           1.20 $\times 10^{7}$
  $\Gamma_{4\gamma}$                           6.56 $\times 10^{3}$
  $\Gamma_{5\gamma}$                           0.11 $\times 10^{2}$

  : Decay rates for the Ps$_{2}$ molecule in (s$^{-1}$). The decay rates labelled $\Gamma_{n\gamma}$, previously calculated by Frolov [@Frolov-2009], refer to the annihilation of electron-positron pairs in the Ps$_{2}$ molecule, where $n$ is the number of photons emitted.
Whereas $\Gamma_{Ps_{2}\rightarrow \gamma \gamma}$ stands for the four-body collision among the four fermions leading to the formation of two photons, see text for details. \[default\] Conclusions =========== The two-photon annihilation rate of Ps$_{2}$ has been calculated using a non-relativistic reduction of quantum electrodynamic methods. This annihilation process refers to the simultaneous decay of two electrons and two positrons into two photons, providing a rare but unambiguously unique signature of the presence of the Ps$_{2}$ molecule. All the Feynman diagrams contributing to such process have been taken into account for the calculation of the transition probability. The wave function for ground state Ps$_{2}$ has been calculated by employing correlated Gaussian basis functions in combination with hyperspherical coordinates [@Daily-2014]. The annihilation rate for this process turns out to be $\Gamma_{Ps_{2}\rightarrow \gamma \gamma}$ = 9.0 $\times 10^{-12}$ s$^{-1}$. While this value is smaller than that of other decay channels of Ps$_{2}$, it is nevertheless in the same range as the rate associated with the zero-photon decay [@Frolov-2009]. The observation of the event studied here will be very challenging due to its very long lifetime. However, from a fundamental point of view, the two-photon annihilation of Ps$_{2}$ constitutes a way to sample the Ps$_{2}$ wave function, from a four-body perspective, yielding crucial information about the nature of the bound state. Finally, we point out that in some astrophysical regions such as near the galactic center where a high density of positrons and electrons are available, this event may be observed, due to its unique emission signature of two photons with energies equal to 1.022 MeV. This region of the gamma ray spectra remains largely unexplored to date, although the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) telescope has the capability for it. Indeed this telescope has found the signatures of two-photon annihilation in Ps [@INTEGRAL]. Acknowledgements ================ The authors would thank K. M. Daily for supplying the value of the Ps$_{2}$ wave function at the origin. S. T. L. thanks T. Clark for enjoyable discussions. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award number DE-SC0010545 (for J.P.-R. and C.H.G.).
--- address: - '$^1$Department of Mathematics, University of Notre Dame, 106 Hayes-Healy Hall, Notre Dame, IN 46556' - '$^1$Tel: +1(574) 631-7776' - '$^*$Corresponding author' - '$^{2}$Department of Mathematics, Pennsylvania State University, 235 McAllister Building, University Park, PA 16802' - '$^2$Tel: +1(814)865-1123' author: - 'Prasit Bhattacharya$^{1,*}$' - Philip Egger$^2$ title: 'A class of $2$-local finite spectra which admit a $v_2^1$-self-map' ---
--- abstract: 'Measuring redshifted CO line emission is an unambiguous method for obtaining an accurate redshift and total cold gas content of optically faint, dusty starburst systems. Here, we report the first successful spectroscopic redshift determination of AzTEC J095942.9+022938 (“COSMOS AzTEC-1"), the brightest 1.1mm continuum source found in the AzTEC/JCMT survey [@scott08], through a clear detection of the redshifted CO (4-3) and CO (5-4) lines using the Redshift Search Receiver on the Large Millimeter Telescope. The CO redshift of $z=4.3420\pm0.0004$ is confirmed by the detection of the redshifted 158   2 line using the Submillimeter Array. The new redshift and  photometry yield $L_{FIR}=(1.1\pm0.1)\times 10^{13} L_\odot$ and $SFR\approx 1300\, M_\odot$ yr$^{-1}$. Its molecular gas mass derived using the ULIRG conversion factor is $1.4\pm0.2 \times 10^{11} M_\odot$ while the total ISM mass derived from the 1.1mm dust continuum is $3.7\pm0.7 \times 10^{11} M_\odot$ assuming $T_d=35$ K. Our dynamical mass analysis suggests that the compact gas disk ($r\approx 1.1$ kpc, inferred from dust continuum and SED analysis) has to be nearly face-on, providing a natural explanation for the uncommonly bright, compact stellar light seen by the . The 2 line luminosity $L_{[C\, II]}= 7.8\pm1.1 \times 10^9 L_\odot$ is remarkably high, but it is only 0.04 per cent of the total IR luminosity. AzTEC COSMOS-1 and other high redshift sources with a spatially resolved size extend the tight trend seen between 2/FIR ratio and $\Sigma_{FIR}$ among IR-bright galaxies reported by @diaz13 by more than an order of magnitude, supporting the explanation that the higher intensity of the IR radiation field is responsible for the “2 deficiency" seen among luminous starburst galaxies.' author: - | Min S. Yun$^1$[^1], I. Aretxaga$^2$, M. A. Gurwell$^3$ , D. H. Hughes$^2$, A. Monta[ñ]{}a$^{4,2}$ , G. Narayanan$^1$, D. Rosa Gonz[á]{}lez$^2$, D. Sánchez-Argüelles$^2$, F. P. Schloerb$^1$, R. L. Snell$^1$, O. Vega$^2$, G. W. Wilson$^1$, M. Zeballos$^2$, M. Chavez$^2$, J. R. Cybulski$^1$, T. Díaz-Santos$^5$, V. De la Luz$^{2,6}$, N. Erickson$^1$, D. Ferrusca$^2$ , H. B. Gim$^1$, M. H. Heyer$^1$, D. Iono$^7$, A. Pope$^1$, S. M. Rogstad$^1$, K. S. Scott$^8$, K. Souccar$^1$, E. Terlevich$^2$, R. Terlevich$^2$, D. Wilner$^3$, J.  A. Zavala$^2$\ $^{1}$Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA\ $^2$Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Luis Enrique Erro 1, Sta. Ma. Tonantzintla, Puebla, México\ $^{3}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\ $^{4}$Consejo Nacional de Ciencia y Tecnología, Av. Insurgentes Sur 1582, Col. Crédito Constructor, Del. Benito Juárez, C.P.: 03940, México, D.F.\ $^{5}$Nucleo de Astronomia de la Facultad de Ingenieria, Universidad Diego Portales, Av. Ejercito Libertador 441, Santiago, Chile\ $^{6}$SCiESMEX, Instituto de Geofísica, Unidad Michoacan, Universidad Nacional Autónoma de Mexico, Morelia, Michoacan, Mexico. 
CP 58190.\ $^{7}$National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka,Tokyo 181-8588, Japan\ $^{8}$National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA, USA\ title: 'Early Science with the Large Millimeter Telescope: CO and \[C II\] Emission in the z=4.3 AzTEC J095942.9+022938 (COSMOS AzTEC-1)' --- \[firstpage\] galaxies: high-redshift – galaxies: starburst – galaxies: distances and redshifts – galaxies: individual: AzTEC J095942.9+022938 – submillimetre: galaxies – radio lines: ISM Introduction ============ Recent studies of cosmic star formation history and galaxy mass build-up have shown a remarkably tight correlation between star formation rate (SFR) and stellar mass ($M_*$), also known as star formation “main sequence", for galaxies with $M_*$ up to $10^{11} M_\odot$ extending out to $z\sim 6$ [see @steinhardt14; @salmon15 and references therein]. A substantial population of quiescent galaxies with $M_*\ge 10^{10-11} M_\odot$ are also found to $z\sim4$, suggesting rapid formation and quenching of massive galaxies at $z\sim 6$ or earlier [@whitaker13; @straatman14]. Given the constraints on rapid formation and cessation of stellar mass build-up and their compact morphology, intense starbursts and feedback driven by a rapid gas accretion are thought to be important in this process [see @williams14 and references therein]. The submillimetre galaxies (SMGs) are natural laboratories for testing this hypothesis and probing the details of the physical processes that govern this rapid build-up and quenching of massive galaxies. SMGs are identified by their large FIR luminosity, which is widely interpreted to be powered by intense star formation with $SFR\ge 10^{2-3} M_\odot$ yr$^{-2}$ [@blain02; @yun12; @kirkpatrick12]. Wide area surveys by the  and  Space Telescopes have shown that these luminous IR galaxies account for a significant fraction of the Cosmic IR background [@penner11; @bethermin12], suggesting that they are an important component of the cosmic mass build-up history at $z\ge1$ [@lefloch05; @caputi07; @magnelli11]. Because of their faintness in the optical bands, their precise redshift distribution is poorly determined (see below), but @toft14 have found that the well established population of massive ($M_*>10^{11}M_\odot$) compact quiescent galaxies at $z\sim2$ can be fully accounted by the known SMGs at $3<z<6$ in terms of their abundance and stellar population, and @simpson14 reproduce the local elliptical luminosity function by passively evolving the population of bright SMGs. The study of the brightest SMG found in the COSMOS [@scoville07] field, AzTEC J095942.9+022938 (“COSMOS AzTEC-1" hereafter), exemplifies the major challenge behind making this important connection between SMGs and the rapid build-up of stellar mass in galaxies. First discovered by the AzTEC COSMOS survey using the James Clerk Maxwell Telescope [@scott08], COSMOS AzTEC-1 is one of the brightest SMGs known and is particularly well studied because of the extensive deep multi-wavelength data readily available in the COSMOS field [see @smolcic11]. Unlike many other SMGs (including those discovered by ) that suffer from low angular resolution of single dish telescopes and source blending, the location of this SMG is known to better than $0.1\arcsec$ accuracy because of a dedicated interferometric imaging survey done using the Submillimeter Array (SMA) by @younger07. COSMOS AzTEC-1 is the only object among the 7 AzTEC sources imaged with SMA by Younger et al. 
that has an unambiguous optical counterpart, and this relatively bright ($m=25.3$ mag \[AB\] in the  F814W band) source is extremely compact, $\sim$0.2 ($\sim$1.5 kpc) in diameter, comparable to the sizes of the massive compact galaxies found at $z=2\sim 4$. A 4 hour long exposure with DEIMOS on Keck II telescope by @smolcic11 did not yield any emission lines, and the continuum break near 6700 Å was interpreted as the blue cutoff of Ly-$\alpha$ at $z=4.65$. Also, using 31 NUV-NIR photometric measurements, Smol[č]{}i[ć]{} et al. derived a photometric redshift of $z=4.64$ with a secondary peak at $z=4.44$. Their attempt to confirm this redshift by CO spectroscopy using CARMA and PdBI interferometers failed to detect any CO emission in the redshift range of $4.56 <z< 4.76$ and $4.94<z<5.02$. Later, @iono12 expanded the CO line search using the Nobeyama 45-m Telescope to $4.38<z<4.56$, but they also failed to detect a CO line. Therefore, the redshift of this arguably the best studied AzTEC SMG in the COSMOS field still lacks a spectroscopic confirmation despite nearly 10 years of efforts using some of the most powerful astronomical facilities available. With the exception of the SMG COSMOS AzTEC-3, which is recently shown to be part of a large scale structure at $z=5.3$ [@riechers10; @riechers14], the situation is essentially the same for the remaining AzTEC sources imaged by the SMA as well – all are expected to be at $z\gg3$ because of their faintness in the optical and the radio bands, with a much worse prospect of yielding a spectroscopic redshift. In this paper, we report the first successful spectroscopic redshift determination of COSMOS AzTEC-1 obtained using the Redshift Search Receiver (RSR) on the Large Millimeter Telescope Alfonso Serrano (LMT), which is a ultra-wide bandwidth spectrometer designed to conduct a blind search for redshifted CO lines from molecular gas rich galaxies. We also report the confirmation of the CO redshift through the detection of the redshifted 158  2 line using the Submillimeter Array. We interpret the observed CO and 2 luminosity in terms of a highly concentrated and intense starburst, fueled by the CO and 2 emitting gas and its properties, including the “2 deficiency" in COSMOS AzTEC-1. Throughout this paper, we assume flat $\Lambda$CDM cosmology with $\Omega_M=0.3$ and $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and Kroupa initial mass function [IMF; @kroupa01].[^2]. Observations ============ Redshift Search Receiver Observations on the LMT ------------------------------------------------ The Redshift Search Receiver observations of COSMOS AzTEC-1 were conducted in January and February 2014 as part of the Early Science program at the Large Millimeter Telescope [@hughes10]. The Redshift Search Receiver (RSR) consists of two dual polarization front end receivers that are chopped between the ON and OFF source positions separated by 76$\arcsec$ in Azimuth at 1 kHz rate using a ferrite switch, producing a flat baseline over the entire 38 GHz (73-111 GHz) bandwidth and always integrating on-source [see @erickson07; @chung09 for further descriptions of the instrument]. The ultra-wideband backend spectrometer covers the entire frequency range between 73 and 111 GHz simultaneously with 31 MHz ($R=3000$ or 100 km s$^{-1}$ at 93 GHz) spectral resolution. During this Early Science phase operation, only the inner 32 metre diameter section of the telescope surface is illuminated, leading to an effective beam size of $20\arcsec$ at 110 GHz and $28\arcsec$ at 75 GHz. 
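The frequency and velocity resolutions quoted here are related by $\Delta v = c\,\Delta\nu/\nu$; the following one-line helper (ours, not part of the RSR software) makes the conversion explicit.

```python
C_KMS = 2.998e5  # speed of light, km/s

def channel_velocity_width(delta_nu_mhz, nu_ghz):
    """Velocity width (km/s) of a channel of width delta_nu_mhz at frequency nu_ghz."""
    return C_KMS * (delta_nu_mhz * 1e-3) / nu_ghz

print(channel_velocity_width(31.0, 93.0))   # ~100 km/s, the RSR resolution quoted above
print(channel_velocity_width(31.0, 75.0))   # ~124 km/s at the low-frequency end of the band
```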
A total of 290 minutes of on-source integrations were obtained over 3 different nights, mostly in excellent weather with $T_{sys}\approx 90$ K ($\tau_{225GHz}=0.05-0.1$). Telescope pointing was checked every 60-90 minutes by observing the nearby QSO J0909+013. Data were reduced and calibrated using DREAMPY (Data REduction and Analysis Methods in PYthon), which is the RSR data reduction pipeline software written by G. Narayanan. After flagging any data adversely affected by a hardware or software problem, a linear baseline is removed from each spectrum. The final spectrum shown in Figure \[fig:CO\] was obtained by averaging all spectra using the $1/\sigma^2$ weight, and the resulting final rms noise is $\sigma=0.13$ mK. The measured gain as a function of elevation and frequency using Uranus and MWC349A is flat, $7$ Jy/K (in $T_A^*$ unit) between the elevation range of 30-75 degree, where all observations were made. Submillimeter Array ------------------- Spectroscopic imaging observations of COSMOS AzTEC-1 were obtained on 1 March 2014 using the using the Submillimeter Array [SMA; @ho04], the 8-element interferometer located near the summit of Mauna Kea, Hawaii. The SMA was in a close pack configuration with projected baselines ranging from 6 to 45 metre (mean $\sim$21 metre). The array was operated using two orthogonally polarized SIS receivers on each antenna, each tuned to 355.7 GHz within the 2 GHz wide upper sideband, the expected frequency for the redshifted 2 line based on the results of the LMT observations. The lower sideband was also captured allowing a sensitive measure of the thermal continuum near 345.4 GHz (1850 GHz = 162  in the source frame). The raw spectral resolution was 3.25 MHz uniform over both sidebands, or about 2.75 km s$^{-1}$ around the 2 line. The spectral response was calibrated using observations of the bright QSO 3C 84, and the flux density scale was calibrated from measurements of the Jovian moon Callisto, known to within  5 per cent in the submillimetre bands (based upon SMA observations, see ALMA Memo 594[^3]). Observations of the target were interleaved with measurements of QSOs J0909+013 (0.326 Jy) and J1058+015 (2.31 Jy) for use in calibrating the complex gains due to instrumental and atmospheric effects. The observations were obtained in very good weather, with $\tau_{225GHz} \sim 0.075$. The total on-source integration time was 5.2 hours. The complex visibility data were calibrated within the MIR reduction package[^4], and the calibrated visibilities were then exported to MIRIAD [@sault95] for resampling to a common spectral grid. The continuum and the 2 spectral line image cube were produced using the Astronomical Image Processing System (AIPS)[^5] task [*IMAGR*]{}. Since no spatial details are expected to be revealed at the angular resolutions achieved (see below), natural weighting was used in the mapping in order to maximize sensitivity. The 345 GHz continuum image with an effective bandwidth of 2 GHz has a synthesized beam of $5.8\arcsec \times 3.4\arcsec$ ($PA=60^\circ$) and $1\sigma$ noise of 1.4 mJy beam$^{-1}$. The spectral resolution of the line data is 11.865 MHz (10 km s$^{-1}$ at 355 GHz). The final 2 spectral line cube was produced by first subtracting the continuum from the [*uv-data*]{} and then averaging over 8 channels with an increment of 4 channels, covering the frequency range between 354.4347 GHz and 356.2381 GHz. 
The resulting cube has a spectral resolution of 94.92 MHz with a synthesized beam of $5.7\arcsec \times 3.3\arcsec$ ($PA=59^\circ$) and $1\sigma$ noise of 4.2 mJy beam$^{-1}$ in each channel.

Results
=======

CO Redshift and Line Luminosity \[sec:CO\]
------------------------------------------

| Line | $\nu_{CO/[C\, II]}$ (GHz) | $z_{CO/[C\, II]}$ | $\Delta V$ (km s$^{-1}$) | $S\Delta V$ (Jy km s$^{-1}$) | $L_{CO/[C\, II]}$ ($10^8 L_\odot$) | $L'_{CO/[C\, II]}$ ($10^{10}$ K km s$^{-1}$ pc$^2$) | $M_{H2}^b$ ($10^{10} M_\odot$) |
|------|---------------------------|-------------------|--------------------------|------------------------------|------------------------------------|------------------------------------------------------|--------------------------------|
| CO (4–3) | 86.3085 | $4.3418\pm0.0006$ | 380 | $1.75^a\pm0.24$ | $2.5\pm 0.3$ | $7.8\pm 1.1$ | $14\pm 2$ |
| CO (5–4) | 107.8739 | $4.3421\pm0.0006$ | 364 | $1.55^a\pm0.22$ | $2.8\pm 0.4$ | $4.4\pm 0.6$ | $8.8\pm 1.2$ |
| \[C II\] | 355.8038 | $4.3415\pm0.0003$ | 366 | $13.05\pm0.70$ | $78\pm 5$ | $3.6\pm 0.2$ | – |

(a) A conversion of 7 Jy/K is adopted to convert the measured antenna temperature in $T_A^*$ to flux density, using the calibration factor derived between December 2013 and January 2014.\
(b) $L'_{CO(1-0)}$ is estimated using the average line ratios for SMGs [@carilli13], and $M_{H2}$ is derived using the “ULIRG" conversion factor $\alpha_{CO}=0.8 M_\odot$ (K km s$^{-1}$ pc$^2$)$^{-1}$ – see § \[sec:mass\].

![RSR spectrum of COSMOS AzTEC-1. The two spectral features well above the noise level are interpreted as CO (4–3) and (5–4) lines at $z=4.342$. The redshifted 492 GHz \[C I\] line should appear at 92.13 GHz (marked with an arrow). []{data-label="fig:CO"}](fig1.png){width="0.99\columnwidth"}

![Zoom-in views of the two CO lines in the RSR spectrum of COSMOS AzTEC-1. The x-axis is the velocity offset with respect to the systemic redshift of $z=4.3420$ (vertical long-dashed lines). []{data-label="fig:COzoom"}](fig2.png){width="0.99\columnwidth"}

The final RSR spectrum of COSMOS AzTEC-1 shown in Figure \[fig:CO\] has two emission lines clearly above the noise level. A straightforward interpretation of the spectrum is that these are two redshifted, adjacent rotational transitions of CO, and the separation between the two lines, $\Delta\nu = 21.565$ GHz, corresponds to the expected frequency offset between the CO $J=4 \rightarrow 3$ and $J=5\rightarrow 4$ transitions at $z= 4.342$ (see Appendix for a detailed discussion of the redshift determination from an RSR spectrum). Both lines are fully resolved by the RSR (see Figure \[fig:COzoom\]), and the best fit Gaussian parameters are summarized in Table \[tab:CO\]. The CO (4–3) line is centered at $\nu=86.3085$ GHz, corresponding to a redshift of $4.3418\pm0.0006$, while the CO (5–4) line is centered at $\nu=107.8739$ GHz, at a redshift of $4.3421\pm0.0006$. The best-fit linewidths (FWHM) are 380 and 364 km s$^{-1}$, respectively, in good agreement, as expected if they are two CO transitions from the same galaxy. The redshifted 492 GHz \[C I\] line, which should appear at 92.130 GHz, is undetected with $S_{[C~I]}/S_{CO(4-3)} \le 0.45$, in line with the measured \[C I\] line strengths in other high redshift galaxies [@walter11 $S_{[C~I]}/S_{CO(3-2)} \sim 0.3$]. For an ultra-wide spectrum produced by the RSR, the redshift information is also present in weak lines such as \[C I\] that are not formally detected individually as well as in bright lines such as CO.
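As a simple sanity check (our own sketch, distinct from the cross-correlation analysis described next), the line identification can be verified directly from the fitted centroids in Table \[tab:CO\] and the standard CO rest frequencies.

```python
NU_REST = {"CO(4-3)": 461.0408, "CO(5-4)": 576.2679}   # GHz, standard rest frequencies
NU_OBS = {"CO(4-3)": 86.3085, "CO(5-4)": 107.8739}     # GHz, fitted line centres (Table [tab:CO])

for line, nu_rest in NU_REST.items():
    z = nu_rest / NU_OBS[line] - 1.0
    print(line, f"{z:.4f}")                # ~4.3418 and ~4.3421, as in Table [tab:CO]

# Only near z = 4.342 does the CO(5-4)/CO(4-3) rest-frame spacing redshift to the
# observed ~21.57 GHz separation between the two RSR features.
print((NU_REST["CO(5-4)"] - NU_REST["CO(4-3)"]) / (1 + 4.342))
```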
We have developed a method to exploit all spectral information present in the RSR data by cross-correlating the observed spectrum with a theoretical or an empirical spectral template [see @yun07]. A detailed analysis of the cross-correlation amplitude along with the expected CO line multiplicity and the redshift constraints from the radio-millimetric photometric redshift analysis uniquely identifies the $z=4.342$ solution with a total $S/N=9.0$ (see Appendix). As shown in the zoom-in inset of Figure \[fig:COLadder\], this redshift peak is well-resolved by the cross-correlation analysis with a spread in redshift between 4.338 and 4.347 (FWHM). It is well centered on the redshift of AzTEC-1 derived from fitting the individual CO lines, $z=4.3420\pm0.0004$ (see Table \[tab:CO\]), but the width of the distribution is nearly 10 times larger than the uncertainty from the individual line fitting, indicating that the width of the cross-correlation amplitude arises from the finite width of the CO lines ($\sim$375 km s$^{-1}$) rather than reflecting the uncertainty in the redshift determination. The CO line redshift of $z=4.3420\pm0.0004$ we derive from the RSR spectrum is significantly lower than the Lyman-$\alpha$ break based redshift of $4.650\pm0.005$ or the optical/IR photometric redshift of $z=4.64^{+0.06}_{-0.08}$ reported by @smolcic11, and naturally explains why their CARMA and PdBI CO line searches failed. Their photometric redshift analysis produced a secondary solution at $z=4.44$, which is much closer to our CO redshift. Although @iono12 had the right idea to search for a CO line near this secondary redshift peak, they missed detecting the CO (5–4) line just outside their search range. In either case, the redshift adopted by @smolcic11 is close enough to the actual CO redshift that their analysis of stellar mass and IR spectral energy distribution is still mostly valid, and their conclusion that AzTEC-1 is an extremely young ($\le 50$ Myr), massive ($M_* \sim 10^{11} M_\odot$), and compact ($\le2$ kpc) galaxy with a star formation rate of $SFR\sim10^3 M_\odot$ yr$^{-1}$ still holds. The CO line luminosity $L_{CO}$ in $L_\odot$ can be computed from the measured line integrals and the CO redshift using Eq. (1) by @solomon97 as, $$L_{CO}=1.04\times 10^{-3}S_{CO}\Delta V \nu_0 (1+z)^{-1}D_L^2 ~~[L_\odot]$$ where $S_{CO} \Delta V$ is the measured CO line integral in Jy km s$^{-1}$, $\nu_0$ is the rest frequency of the CO transition in GHz, and $D_L$ is the luminosity distance in Mpc. As summarized in Table \[tab:CO\], $L_{CO}$ is $(2.5\pm0.3)\times10^8 L_\odot$ and $(2.8\pm0.4)\times10^8 L_\odot$ for the CO (4–3) and CO (5–4) transitions, respectively. Total molecular gas mass is related to the quantity $L'_{CO}$ [see Eq. (3) by @solomon97], $$L'_{CO}=3.25\times 10^7 S_{CO}\Delta V \nu_{obs}^{-2} (1+z)^{-3}D_L^2 ~~[K\, km\, s^{-1} pc^2]$$ where $\nu_{obs} = \nu_0/(1+z)$ is the observed line frequency in GHz. The derived CO luminosities are $L'_{CO(4-3)}=(7.8\pm1.1)\times 10^{10}$ K km s$^{-1} $ pc$^2$ and $L'_{CO(5-4)}=(4.4\pm0.6)\times 10^{10}$ K km s$^{-1}$ pc$^2$. These quantities can be converted to molecular gas mass $M_{H2}$ with several assumptions. Given the large uncertainties involved in this conversion, the total gas mass estimation from $L'_{CO}$ is deferred to a discussion later (see § \[sec:mass\] below). 2 Line and 345 GHz Continuum \[sec:C2\] --------------------------------------- ![SMA 345 GHz continuum image of COSMOS AzTEC-1 at $5.8\arcsec \times 3.4\arcsec$ resolution. 
Contours correspond to 2$\sigma$, 4$\sigma$, 6$\sigma$, 8$\sigma$, & 10$\sigma$ ($\sigma=1.4$ mJy/beam). The HST $i$-band image is shown in greyscale. []{data-label="fig:345continuum"}](fig3.pdf){width="0.99\columnwidth"}

![A \[C II\] spectrum of COSMOS AzTEC-1 obtained using the SMA. The \[C II\] line redshift of $z_{[C\, II]}=4.3415 \pm 0.0003$ and $\Delta V = 366$ km s$^{-1}$ are in excellent agreement with the RSR CO line measurements (see § \[sec:CO\]). []{data-label="fig:C2"}](fig4.png){width="0.99\columnwidth"}

While the two CO lines detected with the RSR on the LMT are individually convincing, confirming the CO redshift of $z=4.342$ and ruling out the possibility of a chance superposition of two unrelated low redshift ($z<3$) CO sources along the same line of sight requires a confirmation with another emission line. The redshifted \[C II\] line ($\nu_0 = 1900.537$ GHz) falls in the middle of the SMA 350 GHz receiver band, which also overlaps with the frequency coverage of the SMA 400 GHz receiver. The SMA spectrum obtained (Figure \[fig:C2\]) shows a clearly detected and fully resolved \[C II\] line near 355.7 GHz with $S/N\sim 15$, confirming the CO redshift. Three other redshifted \[C II\] line sources (the $z=4.7$ QSO BR1202$-$0725 [@iono06], the $z=5.24$ lensed Herschel source HLSJ091828+514223 [@rawle14], and the $z=4.68$ lensed SMG HLS1-MACSJ2043 [@zavala15]) have been detected by the SMA before, and this new \[C II\] detection of COSMOS AzTEC-1 nicely demonstrates the intrinsic brightness of the \[C II\] line and the excellent sensitivity of the SMA for studying high redshift \[C II\] sources.

Because the \[C II\] line is detected with a significantly higher $S/N$ and a higher spectral resolution than the RSR CO observations, this SMA \[C II\] spectrum can reveal much more than simply confirming the redshift of COSMOS AzTEC-1. The \[C II\] line shown in Figure \[fig:C2\] is slightly asymmetric, spanning 520 MHz (438 km s$^{-1}$). The center of the line at full-width-zero-intensity (FWZI) corresponds to $\nu=355.74$ GHz or $z=4.3425$, which agrees very well with the CO redshift. The best fit Gaussian model for the line yields a mean redshift of $z_{[C\, II]}=4.3415 \pm 0.0003$ with $\Delta V = 366$ km s$^{-1}$, reflecting the asymmetry with brighter emission on the blue-shifted side of the line. The origin of this line asymmetry is unknown at the moment, and future high spectral resolution measurements with a better $S/N$ should reveal whether this asymmetry is also present in the CO lines and potentially offer a useful insight into the relative spatial distributions of the CO and \[C II\] emitting gas.

The measured \[C II\] line integral of $S_{[C\, II]}\Delta V = 13.05 \pm 0.70$ Jy km s$^{-1}$ translates to a \[C II\] line luminosity of $L_{[C\, II]}=1.04\times 10^{-3}\, S_{[C\, II]}\Delta V\, \nu_0 (1+z)^{-1} D_L^2 = (7.8\pm1.1) \times 10^9 L_\odot$. This is nearly 30 times larger than the line luminosity of either CO transition detected with the RSR (see Table \[tab:CO\]). To put this in perspective, the \[C II\] line luminosity of COSMOS AzTEC-1 is only a factor of 3 smaller than the [*total*]{} IR luminosity of an $L^*$ galaxy in the local universe [@soifer87; @saunders90; @yun01]. COSMOS AzTEC-1 is indeed extremely luminous in the \[C II\] line, which explains why the line is detected so easily by the SMA. Nevertheless, this \[C II\] line luminosity is only 0.04 per cent of the total IR and bolometric luminosity (see below), and this "\[C II\] deficiency" is discussed further in § \[sec:C2\].
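As a cross-check of the numbers in Table \[tab:CO\] and of the luminosity ratio quoted above, the @solomon97 relations can be evaluated directly; the following is a minimal sketch, and the flat $\Lambda$CDM cosmology assumed here is illustrative rather than necessarily the one adopted in the paper.

```python
# L  = 1.04e-3 * S dV * nu_rest * (1+z)^-1 * D_L^2      [L_sun]
# L' = 3.25e7  * S dV * nu_obs^-2 * (1+z)^-3 * D_L^2    [K km/s pc^2]
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)          # assumed cosmology
z = 4.3420
D_L = cosmo.luminosity_distance(z).to("Mpc").value

lines = {                                         # name: (S dV [Jy km/s], nu_rest [GHz])
    "CO(4-3)": (1.75, 461.0408),
    "CO(5-4)": (1.55, 576.2679),
    "[C II]":  (13.05, 1900.537),
}
for name, (S_dV, nu_rest) in lines.items():
    nu_obs = nu_rest / (1.0 + z)
    L = 1.04e-3 * S_dV * nu_rest * D_L**2 / (1.0 + z)
    Lp = 3.25e7 * S_dV * D_L**2 / (nu_obs**2 * (1.0 + z)**3)
    print(f"{name}: L ~ {L:.2e} L_sun, L' ~ {Lp:.2e} K km/s pc^2")
# The [C II]-to-CO(4-3) luminosity ratio of ~30 quoted above follows directly.
```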
Both the \[C II\] line and the 345 GHz continuum are centered precisely on the position of the HST ACS $i$-band source, as shown in Figure \[fig:345continuum\]. The total measured 345 GHz continuum flux of $17.8\pm1.4$ mJy is in good agreement with the 340 GHz continuum flux of $15.6\pm1.1$ mJy previously reported by @younger07. This continuum is only marginally resolved by the longest baselines of the SMA, with an inferred Gaussian source diameter of only 0.3$\arcsec$ [@younger08]. Both the line and continuum emission are unresolved at the $5.8\arcsec \times 3.4\arcsec$ resolution of the new SMA data, as expected.

IR Luminosity and SFR
---------------------

The extensive and deep multi-wavelength photometric data readily available in the COSMOS field allowed @smolcic11 to assemble an impressive array of spectral energy distribution (SED) data for their photometric redshift analysis as well as for stellar population modeling and bolometric luminosity estimation – see their Table 1 and Figure 4. By adding the Herschel SPIRE 250, 350, and 500 $\mu$m photometry [@smith12] as well as our new 345 GHz continuum measurement (see § \[sec:C2\]), we have fully mapped the infrared peak of the SED as shown in Figure \[fig:SED\], and a more reliable analysis of the dust heating and infrared luminosity is now possible. We model and interpret the observed SED using three commonly used tools: a modified black body model, a starburst SED model by @efstathiou00, and the GRASIL population synthesis and radiative transfer model [@silva98]. The modified black body model characterizes only the far-IR part of the SED, as dust processed radiation at a luminosity-weighted equilibrium temperature. The two latter models aim to gain further insight into the nature of the luminosity sources by adding assumptions on the source geometry and star formation history. All these models are highly idealized, however, and these interpretations should be taken in the context of the assumptions adopted.

### Modified Black Body Model

A common illustrative model for thermal dust emission from astronomical sources is modified black body or "grey body" radiation. Following the classical derivation by @hildebrand83 and adopting an emissivity function of the form $Q(\nu)=1 - \exp[-(\nu/\nu_c)^\beta]$ (so that the emerging spectrum is a pure black body spectrum at $\nu \gg \nu_c$ and $S_\nu \propto \nu^{2+\beta}$ at lower $\nu$) leads to a simple functional form, $S_d(\nu)=\Omega_d\, B(\nu,T_d)\, [1-\exp[-(\nu/\nu_c)^\beta]]$, where $\Omega_d$ is the solid angle of the source and $B(\nu,T_d)$ is the Planck function at frequency $\nu$ and dust temperature $T_d$ [@yun02]. The best fit model describing the observed SED between the 250 $\mu$m and 1000 $\mu$m photometry measurements is shown in Figure \[fig:SED\], characterized by a dust temperature $T_d=54\pm3$ K and emissivity index $\beta=1.6\pm0.2$. The derived IR luminosities are $L_{IR}=1.4\times 10^{13} L_\odot$ and $L_{FIR}=1.0 \times 10^{13} L_\odot$, where $L_{IR}$ and $L_{FIR}$ are the luminosities between $\lambda=8-1000$ $\mu$m and $\lambda=40-120$ $\mu$m, respectively. These luminosity estimates agree well with the other estimates discussed below, although they are slightly smaller because this single dust component characterization does not account for the warm dust contribution in the mid-IR region. The derived dust emissivity index $\beta=1.6$ is similar to the commonly adopted value of 1.5 and is known to be somewhat degenerate with $T_d$ in this formulation.
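For reference, the grey body spectral shape used in this fit can be written down in a few lines; this is a sketch of the functional form only, and the cutoff frequency $\nu_c$ and solid angle $\Omega_d$ below are illustrative placeholders rather than fitted values.

```python
# S_d(nu) = Omega_d * B_nu(T_d) * [1 - exp(-(nu/nu_c)^beta)]
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8        # SI units

def planck(nu, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def grey_body(nu, T_d, beta, nu_c, Omega_d):
    return Omega_d * planck(nu, T_d) * (1.0 - np.exp(-(nu / nu_c)**beta))

nu = np.logspace(11.3, 13.0, 200)                # rest-frame frequencies, ~30 um to ~1.5 mm
S_nu = grey_body(nu, T_d=54.0, beta=1.6, nu_c=3.0e12, Omega_d=1.0e-12)
```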
The dust temperature of AzTEC-1 follows the general trend of increasing $T_d$ with $L_{IR}$ reported by recent statistical studies such as by @symeonidis13 and @magnelli12 [@magnelli14]. The derived dust temperature of 54 K for AzTEC-1 is higher than the average $\left< T_d \right> \approx 40$ K for $L_{IR}=10^{13} L_\odot$ sources at $z=2$ analyzed in these studies, and their predictions on the redshift evolution of $T_d$ with $L_{IR}$ differ slightly – sample selection is likely important. Far-IR data for $z>4$ sources are rare because of the limitations of existing facilities, but our derived dust properties are similar to those of the seven $z>4$ SMGs with $L\sim10^{13}L_\odot$ analyzed by @huang14 [$T_d=$ 40–80 K]. ### Starburst SED Model by Efstathiou et al. (2000) ![A spectral energy distribution (SED) of COSMOS AzTEC-1 from UV to radio wavelengths. Most of the photometry points shown are already summarized in Table 1 by @smolcic11. The new 345 GHz SMA photometry and the  SPIRE 250, 350, and 500  points (shown in red) are from the published catalog by @smith12 clearly map out the dust peak, allowing us an accurate IR luminosity for the first time. The best fit modified black body model fitting the far-IR part of the SED is shown in magenta line (see the text for details). The “starburst" model SEDs by @efstathiou00 with ages of 26, 37, 45, & 64 Myr are shown for comparison.[]{data-label="fig:SED"}](fig5.png){width="0.99\columnwidth"} Model SEDs of young stellar clusters embedded in a giant molecular cloud by @efstathiou00 are shown in Figure \[fig:SED\], primarily for illustrative purposes and to compute the IR luminosity of COSMOS AzTEC-1. Although based on a relatively simple geometry and a highly idealized star formation history (a $\tau$-model), these “starburst" models as well as “cirrus" models with a lower opacity are shown to be remarkably effective in reproducing the observed SEDs of high redshift ultraluminous infrared galaxies (ULIRGs) and SMGs [see @efstathiou09]. The main impact of increasing starburst age is the build-up of the photospheric emission at wavelengths shorter than 3  (with a corresponding increase in luminosity in the rest frame optical bands) and a systematic shift of the dust peak to a longer wavelength as average opacity decreases and lower mass stars contribute more to the luminosity (i.e., cooler dust temperature). The success of these relatively simple SED models can be attributed at least in part to the basic fact that the youngest stars dominate the luminosity and the detailed star formation history is largely washed out. Thus, we expect the IR luminosity and the current star formation rate to be reliable but the mass of the stars produced by the ongoing starburst to be less certain. The SED with 45 Myr old starburst is in very good agreement with nearly every photometry data in Figure \[fig:SED\], but this time scaling may be meaningful only in the context of this specific model. The radio to millimetre wavelength part of the SED is constructed using the well established radio-IR correlation for star forming galaxies as described by @yun02, and the observed 1.4 GHz radio emission is entirely consistent with the radio and IR luminosity being powered by a pure starburst. 
Using the best fit SED (“45 Myr") model shown in Figure \[fig:SED\], we estimate the luminosities of $L_{IR}=1.5 \times 10^{13} L_\odot$ and $L_{FIR}=9.1 \times 10^{12} L_\odot$, where $L_{IR}$ and $L_{FIR}$ are luminosities between $\lambda=8-1000$  and $\lambda=40-120$ , respectively. The star formation rate derived from $L_{IR}$ using the empirical calibration by @kennicutt98 [i.e., $SFR=L_{IR}/(9.4\times 10^9 L_\odot)\, M_\odot /yr$ adjusted for Kroupa IMF] is $1596\, M_\odot$/yr. Assuming $z=4.64$, @smolcic11 estimated $L_{IR}=2.9\times10^{13}L_\odot$, nearly a factor of 2 larger than our estimate when corrected for the new redshift, because they did not have the  photometry to constrain the FIR peak. In comparison, the current $SFR$ computed by the best fit SED model shown in Figure \[fig:SED\] is $880\, M_\odot$/yr, about a factor of 2 smaller than estimated from the IR luminosity. The smaller current $SFR$ of this model stems from the exponentially decreasing star formation scenario adopted by the model and may not be accurate. ### GRASIL SED Models \[sec:GRASIL\] GRASIL is a population synthesis code which predicts the SED of galaxies from far-UV to radio wavelengths [@silva98]. By allowing a wide range of geometry for gas/dust and stars and a realistic treatment of dust processing as well as a variety of star formation histories and IMFs, GRASIL can model SEDs of a wide range of plausible astrophysical scenarios. To model the observed SED of COSMOS AzTEC-1, we adopt a Schmidt type law ($SFR(t)=\nu_{Sch}M_{gas}^k$) with an efficiency of $\nu_{Sch} = 0.5$ Gyr$^{-1}$ and an exponent $k = 1$ for the quiescent star formation history. The starburst component is modeled as an exponentially decreasing $SFR(t)$ with an e-folding time $t_b$, observed at different times (“$age_b$") after the outset of the burst. Both stellar and gas/dust sources are modeled with a King profile with core radii of $r_*$ and $r_{gas}$. Two different components of gas and dust are also considered: (a) molecular clouds (MCs) where young stars are forming; and (b) “diffuse" or “cirrus" component that surrounds the MCs, old free stars, and exposed new stars with a large filling factor. A self-consistent radiative transfer calculation is performed to compute the emerging SED. Uncertainties in the model parameters are estimated from the analyses of 250 Monte Carlo realisations of the input photometry data. ![Best fit GRASIL starburst SED model for COSMOS AzTEC-1 is shown with a solid line. In addition to the photometry points shown in Figure \[fig:SED\], additional photometry data in the UV and optical bands compiled by @smolcic11 are also shown \[in green\]. The observed dust reprocessed (FIR) light originates nearly equally from the diffuse medium (blue dashed line) and the molecular clouds (red dotted-dashed line).[]{data-label="fig:grasilsed"}](fig6.png){width="0.99\columnwidth"} The best fit GRASIL starburst SED model for COSMOS AzTEC-1 is shown in Figure \[fig:grasilsed\] along with the photometry data used to constrain the model. An acceptable fit could be obtained without any AGN contribution, and the observed SED is consistent with the bulk of the luminosity being powered by a strong starburst, similar to the majority of local ultra-luminous infrared galaxies [ULIRGs; @vega08]. The starburst component of the best fit model is characterized by an exponentially fading burst with an e-folding time of $t_b=35$ Myr, observed at $age_b=51$ Myr. 
At these time scales, the majority of young stars are still embedded within their parent molecular clouds, and most of their luminosity emerges in the IR. Regardless of any detailed assumptions in the model, the global quantities such as luminosity and $SFR$ should be quite robust. The total IR and FIR luminosities derived are $L_{IR}=1.6\pm0.3 \times 10^{13} L_\odot$ and $L_{FIR}=1.2\pm0.2 \times 10^{13} L_\odot$, respectively, with the current star formation rate of $SFR=1320\pm230\, M_\odot$ yr$^{-1}$. These IR luminosities are in good agreement with those derived in the previous section, and the model $SFR$ is close to $SFR$ derived from the IR luminosity, $1702\pm296\, M_\odot$ yr$^{-1}$. There is little evidence to support any AGN activity in AzTEC-1, despite its large luminosity ($L_{IR}>10^{13}L_\odot$). While the light distribution seen in the  $i$-band image (Fig. \[fig:345continuum\]) is compact, it is clearly resolved with a diameter of 0.3$\arcsec$, similar to other $z>3$ SMGs that are compact and clumpy with $r_e\le 2$ kpc [@toft14]. The photometry data is sparse in the mid-IR (5-50 ) range, and the presence of a heavily obscured AGN cannot be completely ruled out by this modeling. The red continuum between 0.8-8.0  can be interpreted as an indication of a buried AGN [@lacy04; @stern05], but it is also the characteristics of a heavily obscured young star clusters, commonly seen among most SMGs [@yun08]. The observed radio continuum flux is entirely consistent with the expected supernovae rate (see Fig. \[fig:SED\]), and there is no room for any significant AGN contribution in the radio either. A good spectral coverage from the UV to the radio imposes strong constraints, not only on the global properties of the galaxies but also on other important physical parameters. In the absence of an AGN, the fit to the near-IR and radio luminosities provides a strong constraint on the star formation rate, while the detailed shape of the SED is affected mainly by the value of the model parameters such as star formation history and extinction. For instance, the SED shape in the UV range is determined by the geometry between stars and dust while the SED in the optical range can help us to put constraints on the age of the old/intermediate age stellar populations. The optical depth mainly affects the mid-IR spectral range by varying the MC contribution. The best fit model core radii of the stellar and gas distributions are $r_*=0.10\pm0.02$ kpc and $r_{gas}=0.95\pm0.42$ kpc, respectively, in agreement with their sizes measured by the  and . The larger extent of gas and dust over the stars ensures an efficient obscuration of the stellar light, and the high mean opacity ($A_V>200$ for the MC component), shapes the observed very red continuum SED in the rest frame optical and near-IR bands, as commonly seen in other high redshift SMGs [e.g., @yun08; @yun12]. The total stellar mass derived by the GRASIL model $M_*=4.4\pm0.7 \times 10^{11} M_\odot$ is nearly twice as large as the estimate by @smolcic11, but this estimate depends strongly on the chosen star formation history and thus is not very secure. The total gas mass inferred from the GRASIL model is $3.6\pm0.6 \times 10^{11} M_\odot$, and the 39 per cent of this total ($1.4\times 10^{11} M_\odot$) is in the “dense" (or MC) phase directly fueling the star formation. 
As discussed below, deriving the total molecular gas mass from the new CO measurements requires several highly uncertain assumptions. The gas mass estimate from the GRASIL model is near the high end of the estimates derived using the different methods and is close to the gas mass estimated from the dust continuum using the relation derived by @scoville14. The gas mass fraction $f_{gas}\equiv \frac{M_{gas}}{M_{gas}+M_*}$ of 45 per cent is significantly higher than the 10-20 per cent derived for nearby galaxies and is similar to the mass fraction found for $z\sim2$ submillimetre galaxies [e.g., @tacconi10; @geach11]. About 50 per cent of the IR luminosity arises from the MC component while the other 50 per cent comes from the reprocessed light from free stars in the cirrus component. The average density of the dense MC component, $n_{MC} = 7\times 10^5$ cm$^{-3}$, is higher than the critical densities of the CO (4–3) and CO (5–4) transitions, and this gas is capable of producing fully thermalized emission in these transitions. The GRASIL model does not constrain the nature of the cirrus component well, but its average density is expected to be 2-3 orders of magnitude lower when scaled by mass and size, and the two high $J$ CO transitions are expected to be sub-thermally excited in this diffuse component. The best fit GRASIL model for COSMOS AzTEC-1 is remarkably similar to the best fit starburst SED models by @efstathiou00, as the gross shape of the SED is driven largely by a young starburst embedded in dense gas clouds in both models. The best fit GRASIL model also requires a starburst history remarkably similar to that of the Efstathiou models discussed above, which offers a plausible explanation for the apparent success of the Efstathiou models despite their simplicity. With a more sophisticated parameterization, the GRASIL model can offer a more nuanced physical insight into the gas, dust, and stellar properties.

Discussion
==========

Gas Mass and Nature of the Molecular Gas Fueling the Luminosity \[sec:mass\]
-----------------------------------------------------------------------------

A standard practice for deriving the total molecular gas mass from a redshifted CO line measurement is to adopt a standard line ratio between different rotational transitions of CO to estimate the luminosity of the CO (1–0) transition, $L'_{CO(1-0)}$, and then to translate this line luminosity to a molecular gas mass assuming an "$\alpha_{CO}$" conversion factor [see a review by @carilli13]. Table 2 of @carilli13 gives the average CO line ratios for SMGs as $L'_{CO(4-3)}/L'_{CO(1-0)}=0.46$ and $L'_{CO(5-4)}/L'_{CO(1-0)}=0.39$. Using these ratios, the measured CO line luminosities $L'_{CO(4-3)}$ and $L'_{CO(5-4)}$ in Table \[tab:CO\] can be translated to $L'_{CO(1-0)}$ of $1.7\pm0.2 \times10^{11}$ and $1.1\pm0.2 \times10^{11}$ K km s$^{-1}$ pc$^{2}$, respectively. Taking the CO (4–3) line luminosity (which requires the smaller correction to the CO (1–0) line luminosity) and a "ULIRG" conversion factor of $0.8\, M_\odot\, ({\rm K\, km\, s^{-1}\, pc^{2}})^{-1}$, we derive a total molecular gas mass of $1.4\pm0.2 \times 10^{11} M_\odot$. Using a Galactic conversion factor of $\alpha_{CO} \equiv M_{H2}/L'_{CO(1-0)} \sim 4\, M_\odot\, ({\rm K\, km\, s^{-1}\, pc^{2}})^{-1}$ yields a 5 times larger total molecular gas mass, $6.8\pm0.8 \times 10^{11} M_\odot$.
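The arithmetic of this empirical route is compact enough to write out explicitly; the sketch below simply restates the conversion chain with the numbers quoted above.

```python
# L'_CO(4-3) -> L'_CO(1-0) with the average SMG ratio, then M_H2 = alpha_CO * L'_CO(1-0).
Lp_CO43 = 7.8e10                  # K km/s pc^2, from Table 1
r41 = 0.46                        # L'_CO(4-3)/L'_CO(1-0), Carilli & Walter (2013)
Lp_CO10 = Lp_CO43 / r41           # ~1.7e11 K km/s pc^2

for label, alpha_CO in [("ULIRG", 0.8), ("Galactic", 4.0)]:
    M_H2 = alpha_CO * Lp_CO10     # Msun
    print(f"{label} alpha_CO = {alpha_CO}: M_H2 ~ {M_H2:.1e} Msun")
# ULIRG: ~1.4e11 Msun; Galactic: ~6.8e11 Msun, as quoted above.
```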
Since we have spectral measurements of multiple CO lines and the \[C II\] line, a nearly fully mapped SED, and a spatially resolved dust continuum distribution, we should be able to probe the gas properties of AzTEC-1 beyond simply adopting a highly uncertain and somewhat arbitrary "average" calibration. In this section, we explore several different methods for estimating the gas mass, including a dynamical mass analysis, radiative transfer analysis, and dust continuum measurements, in order to obtain a better handle on the gas mass and excitation conditions.

### Gas Mass from Dynamical Mass \[sec:dyn\_mass\]

There is a growing awareness that the CO-to-H$_2$ conversion factor is not a single value but a quantity dependent on several different factors, such as metallicity, density, temperature, and non-gravitational pressure [see a review by @carilli13]. Since CO is a highly optically thick transition, metallicity is important mainly for low metallicity systems and should not be an important factor for SMGs – they are selected by their bright dust continuum and are often bright in CO, \[C II\], and other transitions that require a near solar abundance of metals. The main reason why the "ULIRG" conversion factor is often favored for SMGs is that the observed line width may include significant contributions from other sources of pressure, such as the stellar potential and stronger turbulence, as well as a higher radiation field resulting from a high density of young stars, similar to the central kpc of nearby ULIRGs. @downes98 derived the ULIRG conversion factor, $\sim$5 times smaller than the MW value [*on average*]{}, by constructing dynamical models of the spatially resolved CO emission in 10 nearby ULIRGs and by running a full radiative transfer calculation, also taking into account the mass contributions by the new and old stars. Because of the poor spatial resolution of nearly all existing molecular line imaging data, few examples of such a dynamical mass analysis exist for SMGs. In one of the best studies of high redshift SMGs using this approach, @hodge12 derived $\alpha_{CO} = 1.1\pm0.6\, M_\odot\, ({\rm K\, km\, s^{-1}\, pc^{2}})^{-1}$ for the $z=4.05$ SMG GN20, which has a rather large ($14\pm4$ kpc diameter) CO disk with a dynamical mass of $M_{dyn}= 5.4\pm2.4 \times 10^{11}M_\odot$, a stellar mass of $M_*= 2.3\pm2.4 \times 10^{11}M_\odot$, and a total gas mass of $M_{H2}= 1.8\pm0.7 \times 10^{11}M_\odot$.

We do not have a spatially resolved CO map of COSMOS AzTEC-1, but adopting its 890 $\mu$m source size (also the optical source size[^6]) of 0.3$\arcsec$ (2.1 kpc) as the diameter of the CO emitting molecular gas disk and half of the observed FWZI \[C II\] line width (219 km s$^{-1}$, see § \[sec:C2\]) as the rotation velocity $v_c \sin i$, where $i$ is the disk inclination, a dynamical mass can be computed as $M_{dyn}=1.1\,(\sin i)^{-2}\times10^{10}M_\odot$. To reconcile this value with the stellar mass [$1.5\pm0.2 \times 10^{11}M_\odot$ according to @smolcic11, or $4.4 \times 10^{11}M_\odot$ according to our GRASIL model] as well as the molecular gas mass derived using the ULIRG conversion factor ($M_{H2} \sim 10^{11} M_\odot$), this disk has to be nearly face-on, $i \lesssim 12^\circ$. The gas disk traced in the \[C II\] line in the $z=4.76$ SMG ALESS 73.1 is 2.2 times larger than its continuum [@debreuck14], and the dynamical mass estimate would double if the gas disk emitting \[C II\] in AzTEC-1 is also twice as large. However, the required inclination changes only slightly, to $i \lesssim 17^\circ$.
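A minimal sketch of this dynamical argument is given below, using the numbers quoted above; the baryonic mass adopted for the inclination limit is the @smolcic11 stellar mass plus the ULIRG-conversion gas mass.

```python
# M_dyn * sin^2(i) = v^2 * R / G for a rotating disk, with v*sin(i) ~ half the [C II] FWZI.
import numpy as np

G = 4.301e-3                   # gravitational constant in pc (km/s)^2 / Msun
R_pc = 0.5 * 2.1e3             # disk radius: half of the 2.1 kpc (0.3") diameter
v_sini = 219.0                 # km/s, half of the [C II] FWZI line width

M_dyn_sini2 = v_sini**2 * R_pc / G                    # ~1.1e10 Msun
M_baryon = 1.5e11 + 1.0e11                            # stellar + gas mass, Msun
i_max = np.degrees(np.arcsin(np.sqrt(M_dyn_sini2 / M_baryon)))
print(f"M_dyn sin^2(i) ~ {M_dyn_sini2:.1e} Msun, i <~ {i_max:.0f} deg")   # ~12 deg
```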
This situation is similar to that of the nearby ULIRG Mrk 231, which has a 450 pc radius molecular gas disk with an inclination of $i\sim10^\circ$ [@bryant96; @downes98]. This nearly face-on geometry also offers a natural explanation as to why COSMOS AzTEC-1 is exceptionally bright in the optical bands. However, its dynamical mass is highly uncertain because of the unknown and small inclination angle, and unfortunately this approach does not offer much useful insight into the total gas mass and the CO-to-H$_2$ conversion factor for COSMOS AzTEC-1.

### Optically Thin CO: a Low Mass Limit

Approaching the conversion factor problem from an entirely theoretical point of view, a lower limit to the total gas mass can be derived by considering the optically thin CO case. @bryant96 offered a rather detailed derivation of the gas mass determination based on spatially resolved CO observations and dynamical modeling in their effort to determine the gas mass of Mrk 231. Their derivation shows that the optically thin limit of the conversion factor should be $\alpha_{CO,min}=0.20\, (X_{CO}/10^{-4})^{-1}$, where $X_{CO}$ is the CO abundance. The minimum gas mass for COSMOS AzTEC-1 derived from this relation is $\sim 3\times 10^{10} M_\odot$ if the gas excitation is typical of SMGs. The minimum gas mass required increases to $M_{H_2}\sim 10^{11} M_\odot$ if the excitation temperature is lower, as indicated by the measured CO (4–3)/CO (5–4) line ratio (see below). These values can be understood better in the context of the gas excitation provided by the radiative transfer models discussed below.

### Gas Mass from Radiative Transfer Modeling

The measured $L'_{CO(4-3)}/L'_{CO(5-4)}$ line ratio of $1.78\pm0.35$ for COSMOS AzTEC-1 is larger than the average SMG ratio of $0.46/0.39=1.18$ reported in the recent review by @carilli13 and is closer to the average Milky Way GMC value of 2.1 – see their Table 2. Although omitted by the Carilli & Walter review, the scatter in the SMG line ratios found in the literature is rather large because many published CO measurements have low $S/N$ ratios.[^7] In contrast, the line ratio for AzTEC-1 is quite secure, not only because each line is detected with $S/N>7$ individually, but also because the two CO lines are measured simultaneously using the RSR, removing any potentially large systematic uncertainties arising from utilizing measurements taken at different times using different instruments, often on different telescopes.

We explore the mass and physical conditions of the gas producing the observed CO emission in COSMOS AzTEC-1 by examining a grid of models covering a range of density $n$, kinetic temperature $T_{kin}$, and CO column density $N_{CO}$ that can reproduce the measured CO line intensities and line ratios. If the gas were optically thin, then only the density, temperature, and total number of CO molecules would be needed to model the observed emission. However, it is more likely that the CO emission is optically thick, and this affects not only the escape of CO photons from the molecular gas but also the excitation of CO through photon trapping. The optical depth of the CO emission is determined by the ratio of the CO column density to the line width $\Delta V$, and we have run models for three cases in which $N_{CO}/\Delta V = 2\times 10^{14}$ cm$^{-2}$ per km s$^{-1}$, $6\times 10^{16}$ cm$^{-2}$ per km s$^{-1}$, and $2\times 10^{17}$ cm$^{-2}$ per km s$^{-1}$. In the first case the CO emission in the $J=4-3$ line is optically thin ($\tau_{CO} \ll 1$).
In the second case, the optical depth in this line is modest ($\tau_{CO} \approx 5$), and in the final case the optical depth is large ($\tau_{CO} \approx 20$). The excitation by the Cosmic Microwave Background (CMB), with $T_{CMB}=14.6$ K at $z=4.342$, is explicitly included in these calculations.

Table \[tab:RADEX\]: Radiative transfer model solutions reproducing the observed CO line ratio.

$N_{CO}/\Delta V = 2\times10^{14}$ cm$^{-2}$ per km s$^{-1}$ (optically thin):

| $n$ (cm$^{-3}$) | $T_{kin}$ (K) | $I_{CO1-0}/I_{CO4-3}$ | $I_{CO1-0}/I_{CO5-4}$ | $I_{CO4-3}/N_{CO}$ (K km s$^{-1}$ cm$^2$) | $M_{H2}$ ($M_\odot$) |
|---|---|---|---|---|---|
| $1\times10^3$ | 500 | 1.28 | 2.30 | $6.9\times10^{-16}$ | $1.8\times10^{10}$ |
| $3\times10^3$ | 200 | 0.79 | 1.50 | $9.6\times10^{-16}$ | $1.3\times10^{10}$ |
| $1\times10^4$ | 100 | 0.36 | 0.63 | $1.4\times10^{-15}$ | $8.9\times10^9$ |
| $3\times10^4$ | 50 | 0.30 | 0.54 | $1.4\times10^{-15}$ | $8.9\times10^9$ |
| $1\times10^5$ | 33 | 0.31 | 0.55 | $1.2\times10^{-15}$ | $1.0\times10^{10}$ |
| $3\times10^5$ | 28 | 0.33 | 0.58 | $1.0\times10^{-15}$ | $1.2\times10^{10}$ |
| $1\times10^6$ | 25 | 0.37 | 0.68 | $8.5\times10^{-16}$ | $1.5\times10^{10}$ |

$N_{CO}/\Delta V = 6\times10^{16}$ cm$^{-2}$ per km s$^{-1}$ ($\tau_{CO}\approx5$):

| $n$ (cm$^{-3}$) | $T_{kin}$ (K) | $I_{CO1-0}/I_{CO4-3}$ | $I_{CO1-0}/I_{CO5-4}$ | $I_{CO4-3}/N_{CO}$ (K km s$^{-1}$ cm$^2$) | $M_{H2}$ ($M_\odot$) |
|---|---|---|---|---|---|
| $1\times10^3$ | 120 | 2.6 | 4.8 | $1.5\times10^{-16}$ | $8\times10^{10}$ |
| $3\times10^3$ | 80 | 1.7 | 2.9 | $2.4\times10^{-16}$ | $5\times10^{10}$ |
| $1\times10^4$ | 35 | 1.4 | 2.4 | $1.8\times10^{-16}$ | $7\times10^{10}$ |
| $2\times10^4$ | 25 | 1.3 | 2.2 | $1.1\times10^{-16}$ | $1\times10^{11}$ |
| $3\times10^4$ | 20 | 1.3 | 2.3 | $6.3\times10^{-17}$ | $2\times10^{11}$ |
| $1\times10^5$ | 15 | 1.3 | 2.3 | $5.0\times10^{-18}$ | $2.5\times10^{12}$ |
| $3\times10^5$ | – | – | – | – | – |

$N_{CO}/\Delta V = 2\times10^{17}$ cm$^{-2}$ per km s$^{-1}$ ($\tau_{CO}\approx20$):

| $n$ (cm$^{-3}$) | $T_{kin}$ (K) | $I_{CO1-0}/I_{CO4-3}$ | $I_{CO1-0}/I_{CO5-4}$ | $I_{CO4-3}/N_{CO}$ (K km s$^{-1}$ cm$^2$) | $M_{H2}$ ($M_\odot$) |
|---|---|---|---|---|---|
| $1\times10^3$ | 85 | 2.1 | 3.6 | $7.9\times10^{-17}$ | $1.6\times10^{11}$ |
| $3\times10^3$ | 35 | 1.7 | 3.1 | $5.2\times10^{-17}$ | $2.4\times10^{11}$ |
| $6\times10^3$ | 25 | 1.5 | 2.6 | $3.4\times10^{-17}$ | $3.7\times10^{11}$ |
| $1\times10^4$ | 20 | 1.4 | 2.5 | $1.9\times10^{-17}$ | $6.6\times10^{11}$ |
| $3\times10^4$ | – | – | – | – | – |

Models producing acceptable solutions are summarized in Table \[tab:RADEX\]. The gas density and temperature that satisfy the observed line ratio are given along with the ratios of the CO (1–0) line intensity relative to the CO (4–3) and CO (5–4) lines and the integrated intensity in the CO (4–3) line divided by the CO column density (in units of K km s$^{-1}$ cm$^{2}$). If we assume that the abundance of CO relative to molecular hydrogen is $X_{CO}=10^{-4}$, then we can determine the amount of gas mass required to produce the observed $L'_{CO}$ in the CO (4–3) line; the last column gives the required gas mass for each model. At a CO column density per unit line width of $N_{CO}/\Delta V = 2\times10^{14}$ cm$^{-2}$ per km s$^{-1}$, where all CO transitions are optically thin, an acceptable solution is found over a broad range of temperature. These optically thin cases require the least gas mass to reproduce the observations among the models examined, but one still needs at least of order $10^{10}M_\odot$ of gas to produce the observed CO line emission, unless CO is more abundant than it is in the Milky Way. A comparison with the optically thin, limiting low mass calculation based on the derivation by @bryant96 above suggests that the $L'_{CO(1-0)}$ estimates derived using the average SMG CO line ratios taken from @carilli13 are 2-3 times too large, or that the CO emitting gas has to be cold ($\lesssim25$ K). For the modest to high optical depth cases with $N_{CO}/\Delta V = 6\times10^{16}$ cm$^{-2}$ per km s$^{-1}$ and $2\times10^{17}$ cm$^{-2}$ per km s$^{-1}$, acceptable solutions exist only for a narrower range of density and temperature: $10^3 \lesssim n \lesssim 10^5$ and $15 \lesssim T_{kin} \lesssim 120$ K.
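The gas masses in the last column of Table \[tab:RADEX\] follow from the model output in the fifth column together with the observed $L'_{CO(4-3)}$ and the assumed $X_{CO}=10^{-4}$; a minimal sketch of that conversion is shown below.

```python
# N_CO(total) = L'_CO(4-3) / (I_CO(4-3)/N_CO);  M_H2 = N_CO(total) / X_CO * m_H2.
PC_IN_CM = 3.086e18
M_H2_MOLECULE_G = 2.0 * 1.66e-24
M_SUN_G = 1.989e33
X_CO = 1.0e-4                                   # assumed CO abundance relative to H2

Lp_CO43 = 7.8e10 * PC_IN_CM**2                  # observed L'_CO(4-3) in K km/s cm^2

def model_gas_mass(I_over_N):
    """I_over_N: model I_CO(4-3)/N_CO in K km/s cm^2 (fifth column of the table)."""
    N_CO_total = Lp_CO43 / I_over_N             # total number of CO molecules
    return N_CO_total / X_CO * M_H2_MOLECULE_G / M_SUN_G

print(f"{model_gas_mass(1.4e-15):.1e} Msun")    # ~9e9, an optically thin solution
print(f"{model_gas_mass(6.3e-17):.1e} Msun")    # ~2e11, a tau ~ 5 solution
```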
The required gas mass is at least $5\times10^{10} M_\odot$ in these cases, and more likely the range of mass is $(1-7) \times 10^{11}M_\odot$ (a higher mass for lower $T_{kin}$ and higher $N_{CO}/\Delta V$), comparable to or exceeding the estimated total stellar mass. If the mean gas density for the star forming gas (the "MC" component) is $\gtrsim 10^5$ cm$^{-3}$, as suggested by the GRASIL model (see § \[sec:GRASIL\]), then there is only a very small range of parameter space in density and temperature where an acceptable solution exists, and these solutions favor cold gas temperatures ($\lesssim25$ K) with a large total mass ($M_{H2}>(2-7)\times10^{11} M_\odot$). An important caveat for this radiative transfer calculation is its assumption of a single gas component. If more than one gas phase with vastly different excitation conditions (e.g., "cirrus" and "MC") contributes [*significantly*]{} to the observed CO line intensities and line ratios, then a single component analysis such as the one presented here may be misleading. While a solution satisfying the observed $L'_{CO(4-3)}/L'_{CO(5-4)}$ line ratio can be found over a fairly broad range of excitation conditions and CO column density, an important merit of the models summarized in Table \[tab:RADEX\] is the clear prediction they make for the CO (1–0) line intensity: the optically thick cases should produce a CO (1–0) line roughly five times stronger than the optically thin cases (for $T_{kin}\le200$ K). Therefore, a future CO (1–0) line measurement, combined with the existing data, should discriminate effectively between these limiting cases and constrain the gas properties.

We briefly explored using the \[C II\] line intensity to gain further constraints on the gas excitation and mass, but we found this even more problematic. The critical density for excitation of the \[C II\] line is much lower ($n_{H2}\ge10^3$ cm$^{-3}$) than those of the CO transitions we measured, making this analysis more susceptible to the likely presence of multiple gas phases. Furthermore, the \[C II\] line can originate from both neutral and ionized gas [with a critical electron density of $n_e=10-100$ cm$^{-3}$, see Table 2 by @goldsmith12 and discussions below], and we do not have much confidence in the assumption that the \[C II\] and CO lines arise from the same gas.

### Gas Mass from Dust Continuum

The molecular gas masses derived from these high $J$ rotational transitions are likely lower limits, since these transitions may be sub-thermally excited compared with the CO (1–0) transition. One way to check this is to compare with the total gas mass derived from the Rayleigh-Jeans (RJ) part of the dust spectrum, as proposed by @scoville14. Given the uncertainties in the excitation and conversion factor for CO, Scoville et al. have argued that a gas mass based on the dust mass derived from the RJ part of the dust spectrum and an adopted gas-to-dust ratio is more robust than an estimate based on high $J$ CO line luminosity. Eq. 12 of Scoville et al. can be rewritten as $$M_{ISM} =\frac{1.2\times10^{10}}{(1+z)^{4.8}}\left[\frac{\Gamma_{RJ}}{\Gamma_0}\right]^{-1}\left[\frac{S_\nu}{\rm mJy}\right]\left[\frac{\nu}{353\,{\rm GHz}}\right]^{-3.8}\left[\frac{D_L}{\rm Gpc}\right]^2 M_\odot$$ where $S_\nu$ is the observed dust continuum flux density in mJy at the observed frequency $\nu$, $D_L$ is the luminosity distance in Gpc, and $\frac{\Gamma_{RJ}}{\Gamma_0}$ is the RJ correction factor (see their Fig. 2).
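A minimal numerical sketch of this relation is given below; the cosmology is an assumption for illustration, and $\Gamma_0$ is taken to be the RJ correction evaluated at 353 GHz and $T_d=25$ K in the local universe, following the Scoville et al. convention.

```python
# M_ISM = 1.2e10 (1+z)^-4.8 (Gamma_RJ/Gamma_0)^-1 (S_nu/mJy) (nu/353 GHz)^-3.8 (D_L/Gpc)^2
import numpy as np
from astropy.cosmology import FlatLambdaCDM

H_OVER_K = 4.799e-2                              # h/k_B in K per GHz
z, S_mJy, nu_obs = 4.342, 17.8, 345.0
D_L_Gpc = FlatLambdaCDM(H0=70.0, Om0=0.3).luminosity_distance(z).to("Gpc").value

def gamma_RJ(T_d, nu_rest_GHz):
    """Departure of the Planck function from the Rayleigh-Jeans limit."""
    x = H_OVER_K * nu_rest_GHz / T_d
    return x / np.expm1(x)

Gamma_0 = gamma_RJ(25.0, 353.0)                  # ~0.7
for T_d in (25.0, 35.0):
    corr = gamma_RJ(T_d, nu_obs * (1.0 + z)) / Gamma_0
    M_ISM = 1.2e10 / (1.0 + z)**4.8 / corr * S_mJy * (nu_obs / 353.0)**-3.8 * D_L_Gpc**2
    print(f"T_d = {T_d:.0f} K: M_ISM ~ {M_ISM:.1e} Msun")
# ~(7-8)e11 Msun for 25 K and ~4e11 Msun for 35 K, close to the values quoted below;
# the exact numbers depend on the adopted cosmology and dust temperature.
```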
Using this relation, the 345 GHz continuum measurement from the SMA (see § \[sec:C2\]) can then be translated to a total "ISM mass" of $M_{ISM}=(7.4\pm1.3)\times 10^{11} M_\odot$ for $T_d=25$ K and $(3.6\pm0.7)\times 10^{11} M_\odot$ for $T_d=35$ K. As noted by Scoville et al., estimating the dust temperature from the measured dust peak may be misleading, since the observed SED provides a luminosity-weighted (rather than mass-weighted) measure of the dust temperature, and the smaller gas mass derived for the higher dust temperature is therefore only a lower limit. The error in the gas mass scales only linearly with any error in the dust temperature, and the resulting $\sim$50 per cent uncertainty due to the poorly constrained dust temperature still yields a better gas mass estimate than those from the CO luminosity, which are fraught with a wide range of substantial systematic uncertainties. It is notable that the total gas mass derived from the measured dust continuum is among the largest estimates obtained by the different methods, but it agrees well with the estimate from the GRASIL model (see § \[sec:GRASIL\]) and with the gas masses required to produce the observed CO line ratio in the optically thick cases (see Table \[tab:RADEX\]).

### Summary of Gas Mass Estimation and Broader Implications

A detailed review of the process of converting the measured CO (4–3) and (5–4) line luminosities to a total gas mass highlights the several assumptions one has to make, particularly when using empirical calibrations. The measured line ratio between these two transitions for COSMOS AzTEC-1 is larger than the average ratio reported for a sample of SMGs by @carilli13, and the resulting uncertainty in estimating the CO (1–0) line luminosity is large (a factor of 2 to 3). The CO-to-H$_2$ conversion factor is also uncertain by at least a factor of 2 or more, but the nominal total molecular gas mass based on the CO (4–3) line luminosity is $1.4\times 10^{11} M_\odot$. The non-LTE radiative transfer calculations have yielded a range of acceptable solutions, summarized in Table \[tab:RADEX\], with a likely molecular gas mass in the range of $(1-7)\times 10^{11}M_\odot$ for the modest to high optical depth cases and with a minimum (optically thin) limit of $\sim 10^{10}M_\odot$. The total ISM mass estimate based on the dust continuum [@scoville14] favors the upper end of these estimates, $(4-7)\times 10^{11}M_\odot$, while the empirical calibration that yields a total gas mass of $\sim 2\times 10^{11}M_\odot$ from the measured CO (4–3) line luminosity is at the low end of these different estimates.

Two interesting outcomes from these analyses deserve additional comment. Firstly, the gas mass analysis using the dynamical mass and stellar mass estimates has revealed that the geometry of the gas disk has to be nearly face-on, and consequently the poorly constrained dynamical mass prevents us from deriving a meaningful gas mass estimate. This calculation also offers an interesting insight: such a nearly face-on geometry with minimal dust obscuration would naturally explain why the stellar component of the host galaxy appears only modestly obscured in the rest frame UV light ($A_V\lesssim 3.5$), unlike most other SMGs with similarly high IR luminosities ($\gtrsim10^{13}L_\odot$). Secondly, the observed CO excitation, as traced by the CO (4–3) to (5–4) line ratio, is lower than is typical for SMGs and is closer to the Milky Way value.
The radiative transfer models for gas density and temperature characteristic of the MW star forming dense cores ($n\sim 10^{4}$ cm$^{-3}$ and $T\sim 25$ K) require a total gas mass of $(2-4)\times 10^{11}M_\odot$, which is more in line with the gas mass estimate from the dust continuum and the gas mass estimate based on the MW value for $\alpha_{CO}$ ($\sim 5\times 10^{11}M_\odot$). The range of excitation conditions that can reproduce the observed line ratios and intensities are uncomfortably narrow, however, and this may indicate that the observed CO lines include significant contributions from more than one component of molecular ISM present in this galaxy [e.g., @harris10], as also suggested by the GRASIL analysis. Given the range of gas mass estimates and current star formation rate, the gas depletion time for COSMOS AzTEC-1 is about 200 Myr, with about a factor of 2 overall uncertainty. This means COSMOS AzTEC-1 will exhaust its gas reserve and will shutoff its star formation activity by $z\approx4$ even without any negative feedback, unless gas continues to flow in at a rate matching the star formation rate, $\dot{M} \approx 10^3 M_\odot$ yr$^{-1}$. Some gas recycling can extend this time by about 50 per cent, but the stellar feedback process is expected to do more than compensating for this effect. The stellar mass doubling time $\tau_* \equiv M_*/SFR$ for COSMOS AzTEC-1 is also about 200 Myr, and its substantial stellar mass could have been plausibly built up [*entirely*]{} during the current episode of starburst [see @yun12]. In such a scenario, the starburst activity would have started around $z\approx5$, ending with a $M_* \gtrsim 6\times 10^{11} M_\odot$ stellar galaxy with $\sim2$ kpc diameter by $z\approx4$, similar to the massive quiescent galaxies reported by @whitaker13, @straatman14, and others. 2/FIR Ratio and High Radiation Field \[sec:C2\] ----------------------------------------------- ![$L_{[C\, II]}$ as a function of far-IR luminosity $L_{FIR}$. Empty circles are the GOALS sample of local LIRGs and ULIRGs [@diaz13]. Empty squares and crosses are \[C II\] measurements for $z>1$ sources from the literature [@walter09; @hailey10; @stacey10; @wagg10; @cox11; @valtchanov11; @gallerani12; @swinbank12; @venemans12; @walter12; @carniani13; @george13; @wang13; @magdis14; @rawle14; @riechers13; @willott13; @riechers14; @brisbin15; @gullberg15; @schaerer15] – crosses are strongly lensed sources while squares may also be lensed sources. COSMOS AzTEC-1 is shown as a large filled circle and extends the trend of “2 deficiency" to $L_{FIR}\ge 10^{13}L_\odot$. Filled squares are high redshift sources with spatially resolved 2 and FIR distribution: COSMOS AzTEC-3 [@riechers14], HDF 850.1 [@neri14], BR1202$-$0725 A&B [@carniani13], ALESS 73.1[@debreuck14], and four $z=6$ QSOs imaged using ALMA [@wang13]. A typical error bar is shown on the bottom right corner.[]{data-label="fig:LC2vsLFIR"}](fig7.png){width="0.95\columnwidth"} The 158  2 line is an important coolant of the neutral ISM and thus is a bright tracer of star formation in galaxies, typically accounting for 0.1-1 per cent of IR luminosity [@madden93; @malhotra01; @stacey10]. Because 2 emission can be produced by different gas phases with a wide range of physical conditions, interpreting 2  emission is difficult [see a recent review by @goldsmith12]. A broad correlation is seen between observed 2 emission and other tracers of star formation [e.g., @boselli02; @delooze11 also see Fig. \[fig:LC2vsLFIR\]]. 
However, interpreting observed 2 line luminosity in terms of a particular physical process, such as a tracer of SFR, is problematic because 2 emission arises from a variety of different excitation mechanisms, in both ionized and neutral phase. Also, star forming galaxies observed in 2 show a factor of 100 or more spread in the $L_{[CII]}/L_{FIR}$ ratio, which is correlated with IR luminosity and dust temperature . The measured 2 line luminosity of COSMOS AzTEC-1 is significantly higher than those of the Great Observatories All-sky LIRG Survey (GOALS) sample of 241 luminous infrared galaxies studied by @diaz13 using the [*Herschel Space Observatory*]{} (see Figure \[fig:LC2vsLFIR\]). Also shown are a collection of 2 line sources at $z>1$ from the literature, and they extends the observed broad correlation to $L_{FIR}>10^{13}L_\odot$. The \[C II\]/FIR ratio, shown in Figure \[fig:C2vsFIR\], reveals that the measured $L_{[C\, II]}/L_{FIR}$ ratio of $6.5\times 10^{-4}$ for AzTEC-1 is among the lowest measured and extends the 2 deficiency to $L_{FIR}\ge 10^{13} L_\odot$. While most of the GOALS sample LIRGs and ULIRGs form a broad trend with a decreasing $L_{[C\, II]}/L_{FIR}$ ratio with increasing $L_{FIR}$, the $z>1$ 2 sources show a much larger scatter. These high redshift systems simply being a scaled up versions of the local star forming galaxies is a commonly offered explanation [e.g., @stacey10; @brisbin15]. At least 1/2 of the sources detected in 2 thus far are strongly lensed systems (shown as crosses in Fig. \[fig:C2vsFIR\]) found by the South Pole Telescope [SPT; @gullberg15] and  [@cox11; @valtchanov11; @riechers13; @magdis14; @rawle14], and many should fall along the local LIRG/ULIRG relation when corrected for magnification, as shown by a detailed study of a $z=2.013$ lensed 2 source by @schaerer15. Determining whether the larger scatter associated with the remaining $z>1$ sources can be accounted by lensing will require future detailed follow-up studies of the individual sources. Along with the $z=5.3$ SMG COSMOS AzTEC-3 [@riechers14] and the $z=5.2$ SMG HDF 850.1 [@neri14], AzTEC-1 has the one of the smallest $L_{[C\, II]}/L_{FIR}$ ratio in Figure \[fig:C2vsFIR\], clustered together with 7 IR luminous galaxies hosting an optical QSO at $z>4$ imaged in 2 and continuum by ALMA (shown as filled squares). One possible explanation for their extremely low $L_{[C\, II]}/L_{FIR}$ ratio is the elevated dust-obscured AGN contribution to the FIR luminosity, but this explanation is not supported by systematic studies of large samples of IR luminous galaxies conducted using . By examining the GOALS LIRGs with and without AGN activity (identified through  mid-IR spectroscopy), @diaz13 have shown that the correlation between $L_{[C\, II]}$ and $L_{FIR}$ and the 2 deficiency is an intrinsic property of the star formation, and there is no need to invoke AGN activity to explain it (at least at levels of $L_{[C\, II]}/L_{FIR}>10^{-3}$). A study of 154 intermediate redshift ($\left< z \right>\sim 0.15$) 24 $\mu$-m-selected galaxies by @magdis13 and a study of 130 mid- and far-IR selected galaxies by @sargsyan14 also drew a similar conclusion. ![2/FIR ratio as a function of the FIR luminosity. Many of the $z>1$ sources are lensed, and their luminosities are not corrected for lensing because the magnification factor is not always known. All symbols are identical to those in Figure \[fig:LC2vsLFIR\]. 
A typical error bar is shown on the bottom left corner.[]{data-label="fig:C2vsFIR"}](fig8.png){width="0.95\columnwidth"} The compact source sizes of AzTEC-1 and other sources revealed by high resolution continuum imaging using the SMA and ALMA suggests the high intensity of the infrared radiation field may offer an important clue to the 2 deficiency. Possible explanations for the 2 deficiency include: (1) self-absorption; (2) saturation of the 2 line due to high gas column density; (3) decreased photoelectric heating in high UV radiation field; and (4) high dust-to-gas opacity caused by an increase of the average ionization parameter [see reviews by @malhotra01; @diaz13]. Citing a clear trend for LIRGs with deeper 9.7  silicate strengths, higher mid-IR luminosity surface densities ($\Sigma_{MIR}$), smaller fractions of extended emission, and higher specific star formation rates (SSFRs) to display a greater 2 deficiency, @diaz13 have concluded that the dust responsible for these correlations must be directly linked to the process driving the observed 2 deficiency. They have also found the correlation becoming much tighter when the FIR luminosity is normalized by mid-IR source size (i.e., surface density $\Sigma_{FIR}$), independent of the nature of the powering source. As shown in Figure \[fig:C2vsSIR\], COSMOS AzTEC-1 and other high redshift 2 sources with spatially resolved continuum sizes follow the same tight correlation defined by the local LIRGs and extend this correlation by one order of magnitude larger in $\Sigma_{FIR}$. In addition to extending this correlation to a higher luminosity density, this comparison also further supports the proposed scenario that the compactness of the active region and the resulting higher intensity of the infrared radiation field dictates the 2 deficiency. In an earlier modeling study using the spectral synthesis code CLOUDY, @gracia11 have shown that [*all*]{} far-IR fine structure lines, regardless of their origin in the ionized or neutral phase of the ISM, show a deficit with increasing $L_{FIR}/M_{H2}$ ratio, and they further conclude that this deficiency is driven by the increased ionization parameter. This is an extremely interesting prediction that should be tested further using future observations of other far-IR fine structure lines. Determining through a high resolution imaging study whether the unlensed $L_{FIR}>10^{13}L_\odot$ 2 sources with $L_{[C\, II]}/L_{FIR} \gg 10^{-3}$ follow the narrow trend seen in Figure \[fig:C2vsSIR\] is another important test for this 2 deficiency scenario. ![2/FIR ratio as a function of FIR surface density. Only the sources with a resolved mid-IR or far-IR sizes are included. All symbols are identical to those in Figure \[fig:LC2vsLFIR\]. A typical error bar is shown on the bottom left corner.[]{data-label="fig:C2vsSIR"}](fig9.png){width="0.95\columnwidth"} Conclusions =========== We report the first successful spectroscopic redshift determination of COSMOS AzTEC-1 obtained with a clear detection of the redshift CO (4-3) and CO (5-4) lines using the Redshift Search Receiver on the Large Millimeter Telescope and the confirmation of the CO redshift through the detection of the redshifted 158   2 line using the Submillimeter Array. Utilizing the newly measured redshift and CO and 2 line intensities, we have explored the gas mass and physical conditions of the gas fueling the enormous luminosity associated with this $z=4.342$ SMG. 
The RSR spectrum of COSMOS AzTEC-1 (Figure \[fig:CO\]) has two emission lines clearly above the noise level ($\ge7\sigma$) at 86.31 GHz and 107.87 GHz, which are identified as the redshifted CO (4-3) and (5-4) lines at $z=4.3420 \pm 0.0004$. This conclusion is supported by its overall SED as well as by the radio-millimetric spectral index analysis by @carilli99. A detailed discussion of the unique redshift determination is presented in the Appendix. The derived CO redshift of $z=4.3420$ is lower than the photometric redshift derived by @smolcic11 using the rest frame UV and optical photometry data and lies outside the frequency ranges covered by the previous, unsuccessful blind CO searches by @smolcic11 and @iono12. This successful redshift determination after nearly 10 years of effort demonstrates the power of the ultra-wideband spectroscopic capability of the RSR on the Large Millimeter Telescope. The redshifted 492 GHz \[C I\] line is not detected ($S_{[C~I]}/S_{CO(4-3)} \le 0.45$), but this upper limit is still in line with the measured \[C I\] line strengths in other high redshift galaxies [@walter11 $S_{[C~I]}/S_{CO(3-2)} \sim 0.3$].

The new CO redshift for COSMOS AzTEC-1 is verified by the detection of the redshifted \[C II\] line at 355.8 GHz using the SMA. The bright \[C II\] line is detected with $S/N\sim15$, and the higher spectral resolution of the SMA data clearly shows that the line is asymmetric. The cause of this asymmetry is not known yet, but it explains the slightly lower redshift determined for the \[C II\] line. Although the derived \[C II\] line luminosity of $L_{[C\, II]}= 7.8 \times 10^9 L_\odot$ is remarkably high, it is only 0.04 per cent of the total IR luminosity, making COSMOS AzTEC-1 one of the most \[C II\] deficient objects known. We show that COSMOS AzTEC-1 and other high redshift \[C II\] sources with spatially resolved source sizes extend the tight trend between the \[C II\]/FIR ratio and the FIR surface density found among IR-bright galaxies by @diaz13 by more than an order of magnitude. This result lends further support to the explanation that the higher intensity of the IR radiation field and the resulting increased ionization parameter are likely responsible for the "\[C II\] deficiency" seen among luminous infrared starburst galaxies.

Our modeling of the observed spectral energy distribution using a modified black body model, the starburst SED models by @efstathiou00, and the GRASIL SED code [@silva98] produces consistent estimates of the IR luminosity ($L_{IR}=(1.4-1.6) \times 10^{13} L_\odot$ and $L_{FIR}=(0.9-1.2) \times 10^{13} L_\odot$). The star formation rate estimated from the IR luminosity ($1600-1700\, M_\odot$ yr$^{-1}$) is larger than the model-based $SFR$ (880 and 1320 $M_\odot$ yr$^{-1}$ for the Efstathiou and GRASIL models, respectively). The model $SFR$ and total stellar mass estimates depend on the adopted star formation history, which is intrinsically more uncertain. The best fit GRASIL model constrained by the observed luminosity and the shape of the UV-to-radio SED further suggests an intense, compact starburst ($r_* \approx 0.1$ kpc) heavily obscured ($A_V>200$ for the molecular clouds and $A_V=3.5$ for the "cirrus" component) by a massive, compact gas cloud ($M_{gas}=3.6\pm0.6 \times 10^{11} M_\odot$, $r_{gas}\approx 1$ kpc).

The total molecular gas mass was derived from the measured CO (4-3) and CO (5-4) lines and the 345 GHz continuum using several different methods, specifically addressing the uncertainties associated with each method.
A grid search for non-LTE radiative transfer models that match the observed CO line intensity and line ratio yields acceptable solutions over a wide range of gas temperature and density, with a minimum (optically thin) limit of $\sim 10^{10} M_\odot$. However, plausible models (modest to high optical depth) require a narrow range of gas temperature ($T\approx$ 20-35 K) for densities $n\gtrsim10^{4-5}$ cm$^{-3}$ and imply gas masses of $M_{H2}=(1-7)\times 10^{11} M_\odot$. Conventional methods of computing the molecular gas mass from the observed CO line intensities are subject to very large uncertainties in translating these high $J$ transitions to the intensity of the CO (1-0) line, as well as to the similarly uncertain "$\alpha_{CO}$" conversion factor. The empirical calibration that yields a total gas mass of $\sim 2\times 10^{11}M_\odot$ from the measured CO (4–3) line luminosity is on the low end of these different estimates. The total ISM mass derived from the 345 GHz continuum [@scoville14] is near the top of the mass range derived by the other methods: $M_{ISM}=(7.4\pm1.3)\times 10^{11} M_\odot$ for $T_d=25$ K and $(3.6\pm0.7)\times 10^{11} M_\odot$ for $T_d=35$ K. Future measurements of the CO (1-0) transition should remove the uncertainty associated with the translation of the higher $J$ lines and offer a useful constraint on the CO optical depth.

Our dynamical mass analysis shows that the gas disk in COSMOS AzTEC-1 has to be nearly face-on in order for the derived dynamical mass to be consistent with the minimum possible combined gas and stellar masses. This offers a natural explanation for the bright, compact stellar light distribution visible in the rest frame UV band HST images, similar to the situation in the local ULIRG Mrk 231. The same analysis also suggests extremely high opacity ($A_V>200$) for most other viewing angles, as seen in many other high redshift SMGs.

Among the 15 brightest AzTEC sources identified by the AzTEC/JCMT survey of the COSMOS field and located with better than 1$\arcsec$ positional accuracy using the SMA observations, COSMOS AzTEC-1 is only the second object with a secure spectroscopic redshift, after the $z=5.3$ COSMOS AzTEC-3 [@riechers10]. The advent of the RSR on the LMT and of similar broadband spectrometer systems on modern telescopes with large collecting areas (e.g., ALMA) is finally making accurate redshift determinations for these distant, optically faint galaxies possible. In addition to yielding redshifts, these CO spectroscopic surveys can also yield information on total gas masses and dynamical masses, along with the excitation conditions of the gas fueling the rapid growth of these young, massive galaxies. A complete RSR survey of these COSMOS AzTEC sources has started at the LMT, and we should soon be able to gain an unbiased view of the redshift distribution and total molecular gas mass contents of these and other SMGs.
This work would not have been possible without the long-term financial support from the Mexican Science and Technology Funding Agency, CONACYT (Consejo Nacional de Ciencia y Tecnología) during the construction and early operational phase of the Large Millimeter Telescope Alfonso Serrano, as well as support from the the US National Science Foundation via the University Radio Observatory program, the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts, Amherst (UMass). The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. The UMass LMT group acknowledges support from NSF URO and ATI grants (AST-0096854, AST-0215916, AST-0540852, and AST-0704966) for the LMT project and the construction of the RSR and AzTEC. IA, DHH, DSA and MZ«s work is partly supported by CONACyT research grants CB-2009-13326 and CB-2011-167291. DRG is partly supported by CONACyT research grant CB-2011-01-167281. TDS was supported by ALMA-CONICYT grant number 31130005. RC and HG would like to acknowledge support from a William Bannick Student Travel Grant. We are grateful to all of the LMT observers from Mexico and UMass who took data for this project. This work is based in part on observations made with the Herschel Space Observatory, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA, and Planck, which is European Space Agency mission with significant NASA involvement. This research has made use of the NASA/ IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. [99]{} B[é]{}thermin M. et al., 2012, , 542, 58 Blain A.W., Smail I., Ivison R.J., Kneib J.-P., Frayer D.T., 2002, Physics Report, 369, 111 Boselli A., Gavazzi G., Lequeux J., Pierini D., 2012, , 385, 454 Bothwell, M.S. et al., 2013, , 429, 3047 Brisbin, D., et al., 2015, , 799, 13 Bryant P. M., Scoville N. Z., 1996, , 457, 678 Capak P. et al., 2011, , 470, 233 Caputi K. I. et al., 2007, , 660, 97 Carilli C., Yun M.S., 1999, , 513, L13 Carilli C., Walter F., 2013, , 51, 105 Carniani S. et al., 2013, , 559, 29 Chapman S.C., Blain A.W., Smail I., Ivison R.J., 2005, , 622, 772 Chung A., Narayanan G., Yun M.S., Heyer M., Erickson N.R., 2009, , 138, 858 Cox, P. et al., 2011, , 740, 63 Daddi E. et al., 2009, , 694, 1517 De Breuck, C., Maiolino, R., Caselli, P., Coppin, K., Hailey-Dunsheath, S., Nagao, T., 2011, , 530, L8 De Breuck C. et al., 2014, , 565, 59 de Looze I., Baes M., Bendo G.J., Cortese L., Fritz J., 2011, , 416, 2712 Díaz-Santos T. et al., 2013, , 774, 68 Downes D., Solomon P.M., 1998, , 507, 615 Efstathiou A., Siebenmorgan R., 2009, , 502, 541 Efstathiou A., Rowan-Robinson M., Siebenmorgan R., 2000, , 313, 734 Erickson N., Narayanan G., Goeller R., Grosslein R., 2007, in [*From Z-Machines to ALMA: (Sub)Millimeter Spectroscopy of Galaxies*]{}, ASP Conference Series, Vol. 375, p.71 Gallerani. et al., 2012, , 543,114 Geach J.E., Smail I., Moran S.M., MacArthur L.A., del P. Lagos C., Edge A.C., 2011, , 730, L19 George, R.D., et al., 2013, , 436, L99 Goldsmith P.F., Langer W.D., Pneda, J.L., Velusamy T., 2012, , 203, 12 Graci[á]{}-Carpio J. 
et al., 2011, , 728, L7 Gullberg, B., et al., 2015, , 449, 2883 Hailey-Dunsheath, S., Nikola, T., Stacey, G.J., Oberst, T.E., Parshley, S.C., Benford, D.J., Staguhn, J.G., Tucker, C.E., 2010, , 714, L162 Harris, A.I. et al., 2010, , 723, 1139 Hayward C.C., Narayanan D., [Kere[š]{}]{} D., Jonsson P., Hopkins P.F., Cox T.J., Hernquist L., 2013, , 428, 2529 Hildebrand, R.H., 1983, QJRAS, 24, 267 Ho P.T.P., Moran J.M., Lo K.Y., 2004, , 616, L1 Hodge J.A. et al., 2012, , 760, 11 Hodge J.A. et al., 2013, , 768, 91 Huang, J. et al., 2014, , 784, 52 Hughes D.H. et al., 2010, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7733, p.12 Iono D. et al., 2006, , 645, L97 Iono D. et al., 2012, , 64, L2 Kennicutt R.C., Jr., 1998, , 36, 189 Kirkpatrick A. et al., 2012, , 759, 139 Kroupa P., 2001, , 322, 231 Lacy, M. et al., 2004, , 154, 166 Le Floc’h E. et al., 2005, , 632, 169 Lilly S.J., Le Fevre O., Hammer F., Crampton D., 1996, , 460, L1 Luhman M.L. et al., 1998, , 504, L11 Madau P. et al., 1996, , 283, 1388 Madden S.C., Geis N., Genzel R., Herrmann F., Jackson J., Poglitsch A., Stacey G.J., Townes C.H., 1993, , 407, 579 Magdis, G.E. et al., 2013, , 558, A136 Magdis, G.E. et al., 2014, , 796, 63 Magnelli B., Elbaz D., Chary R.R., Dickinson M., Le Borgne D., Frayer D.T., Willmer C.N.A., 2011, , 528, A35 Magnelli B. et al., 2012, , 539, A155 Magnelli B. et al., 2014, , 561, A86 Malhotra S. et al., 1997, , 491, L27 Malhotra S. et al., 2001, , 561, 766 Narayanan D., Krumholz M.R., Ostriker E.C., Hernquist L., 2012, , 421, 3127 Neri, R., Downes, D., Cox, P., Walter, F., 2014, , 562, A35 Penner K. et al., 2011, , 410, 2749 Rawle T.D. et al., 2014, , 783, 59 Riechers D.A. et al., 2010, , 720, L131 Riechers D.A. et al., 2013, , 496, 329 Riechers D.A. et al., 2014, , 796, 84 Salmon, B. et al., 2015, , 799, 183 Sargsyan, L. et al., 2014, , 790, 15 Sault R.J., Teuben P.J., Wright M.C.H., 1995, in [*Astronomical Data Analysis Software and Systems IV*]{}, ed. R. Shaw, H.E. Payne, J.J.E. Hayes, ASP Conference Series, Vol. 77, p.433 Saunders W., Rowan-Robinson M., Lawrence A., Efstathiou G., Kaiser N., Ellis R.S., Frenk C.S., 1990, , 242, 318 Schaerer, D. et al., 2015, , 576, L2 Scott K. et al., 2008, , 385, 2225 Scott K. et al., 2012, , 423, 575 Scoville N. Z. et al., 2007, , 172, 1 Scoville N. Z. et al., 2014, , 783, 84 Silva L., Granato G.L., Bressan A., Danese L., 1998, , 509, 103 Simpson J.M. et al., 2014, , 788, 125 Smith A.J. et al., 2012, , 419, 377 Smol[č]{}i[ć]{} V. et al., 2011, , 731, L27 Soifer B.T., Sanders D.B., Madore B.F., Neugebauer G., Danielson G.E., Elias J.H., Lonsdale C.J., Rice W.L., 1987, , 320, 238 Solomon P. M., Downes D., Radford S.J.E., Barrett J.W., 1997, , 478, 144 Spilker J.S. et al., 2014, , 785, id.149 Stacey G.J., Hailey-Dunsheath S., Ferkinhoff C., Nikola T., Parshley S.C., Benford D.J., Staguhn J.G., Fiolet N., 2010, , 724, 957 Steinhardt C.L. et al., 2014, , 791, L25 Stern, D. et al., 2005, , 631, 163 Straatman C. M. S. et al., 2014, , 783, L14 Swinbank, A.M. et al., 2012, , 427, 1066 Symeonidis, M. et al., 2013, , 431, 2317 Tacconi L.J. et al., 2010, , 463, 781 Tamura Y. et al., 2009, , 459, 61 Toft S. et al., 2014, , 782, 68 Valtchanov, I. et al., 2011, , 415, 3473 Van der Tak F.F.S., Black J.H., Sch[ö]{}ier F.L., Jansen D.J., van Dishoeck E.F., 2007, , 468, 627 Vega O., Clemens M.S., Bressan A., Granato G.L., Silva L., Panuzzo P., 2008, , 484, 631 Venemans, B.P., et al., 2012, , 751, L25 Vieira J.D. 
et al., 2013, , 495, 344 Walter F., et al., 2009, , 457, 699 Walter F., [Wei[ß]{}]{} A., Downes D., Decarli R., Henkel C., 2011, , 730, 18 Walter F., et al., 2012, , 486, 233 Wagg, J., Carilli, C.L., Wilner, D.J., Cox, P., De Breuck, C., Menten, K., Riechers, D.A., Walter, F., 2010, , 519, L1 Wang R. et al., 2013, , 773, 44 [Wei[ß]{}]{} A. et al., 2007, , 467, 955 [Wei[ß]{}]{} A. et al., 2013, , 767, 88 Whitaker K.E. et al., 2013, , 770, L39 Williams C.C. et al., 2014, , 780, 1 Willott, C.J., Omont, A., Bergeron, J., 2013, , 770, 13 Young J.S., Scoville N.Z., 1991, , 29, 581 Younger J.D. et al., 2007, , 671, 1531 Younger J.D. et al., 2008, , 688, 59 Yun M.S., Carilli C.L., 2002, , 568, 88 Yun M.S., Heyer M., Aretxaga, I., 2007, in [*From Z-Machines to ALMA: (Sub)Millimeter Spectroscopy of Galaxies*]{}, ASP Conference Series, Vol. 375, p.174 Yun M.S., Reddy N.A., Condon J.J., 2001, , 554, 803 Yun M.S. et al., 2008, , 389, 333 Yun M.S. et al., 2012, , 420, 957 Zavala J. et al., 2015, , 452, 1140 Redshift Determination from an RSR Spectrum \[sec:appendix\] ============================================================ Template Cross-correlation Analysis ----------------------------------- ![Observed frequencies of redshifted CO and \[C I\] line transitions falling within the RSR frequency coverage range (73 to 111 GHz). At least one CO line should appear in the RSR spectrum at all redshifts except for a narrow redshift range of $0.58<z<1.08$. Two or more CO & \[C I\] lines should appear simultaneously within the RSR spectrum at $z>3.15$.[]{data-label="fig:COLadder"}](figA1.png){width="0.99\columnwidth"} The simultaneous frequency coverage of the RSR between 73 and 111 GHz means at least one CO transition falls within the spectral coverage at all redshifts except for a narrow redshift range between $0.58 <z< 1.08$, and two or more CO or \[C I\] transitions fall within the RSR spectral range at $z\ge3.15$ (see Fig. \[fig:COLadder\]). A variety of fainter molecular transitions from less abundant species such as HCN, HCO$^+$, HNC, CS, CN, HC$_3$N, and H$_2$O have also been detected in nearby and distant galaxies [see a review by @carilli13 and references therein]. As first introduced by @yun07, a cross-correlation analysis is a powerful method to derive the redshift information from such a broadband spectrum, even when many of the lines are not individually detected with a good $S/N$ ratio. A cross-correlation product $\zeta(z)$ can be derived as a function of redshift $z$ from the observed spectrum $S(\nu)$ and the model spectral template $M(\nu,z)$ as $$\zeta(z)\equiv \int S(\nu) M(\nu,z) W(\nu) d\nu.$$ The Doppler-shifted model spectral template $M(\nu,z)$ is derived as $$M(\nu,z) = \int_{(\nu-\Delta\nu/2)(1+z)}^{(\nu+\Delta\nu/2)(1+z)} T(\nu')~d\nu'$$ where $T(\nu')$ is the rest-frame template spectrum and $\Delta\nu$ is the RSR channel width. The weight function $W(\nu)$ represents the relative strength of different molecular transitions, and an empirical composite spectrum based on observed relative line strengths for high redshift sources [e.g., @spilker14] is adopted for the analysis presented here. ![Template cross-correlation amplitude of the COSMOS AzTEC-1 RSR spectrum in S/N units. A zoomed-in view of the most significant peak ($S/N=9.0$) at $z=4.342$ is shown in the inset, which shows that the CO lines are spectrally resolved. 
[]{data-label="fig:XCOR"}](figA2.png){width="0.99\columnwidth"} The number of spectral lines contributing to the model spectral template $M(\nu,z)$ increases with redshift as the total frequency coverage of the RSR in the rest frame grows as 38$(1+z)$ GHz. As a result, the noise in the cross-correlation amplitude $\zeta(z)$ increases accordingly with redshift, and interpreting the raw cross-correlation amplitude is not straightforward. Also, since many of the molecular transitions occurring in the millimetre and sub-millimetre bands are rotational transitions with only slightly different rotation constants, the distribution of line transitions is highly clumped in the spectral domain, further complicating the situation. Therefore, rather than interpreting the raw cross-correlation amplitude $\zeta(z)$ for the redshift analysis, we compute an “$S/N$ ratio" of $\zeta(z)$ for a quantitative analysis of acceptable redshift solutions. The “noise" in each redshift bin is estimated by randomly shuffling the input RSR spectrum 10,000 times, and the derived cross-correlation amplitude $\zeta(z)$ is converted to a histogram of $S/N$ ratio as shown in Figure \[fig:XCOR\]. Determination of A Unique Redshift Solution ------------------------------------------- The histogram of the template cross-correlation amplitude for the COSMOS AzTEC-1 RSR spectrum in Figure \[fig:XCOR\] has the highest peak with $S/N=9$ at $z=4.342$, but other peaks with an apparent $S/N>5$ are also seen. Since the cross-correlation analysis is sensitive to [*all*]{} real signal, the two spectral line features detected near 86 GHz and 108 GHz in Figure \[fig:CO\] [*each*]{} produce a series of $S/N$=5-7 peaks that correspond to different rotational transitions of CO at $z \sim 0$, 1, 2, & 3, in addition to the strongest peak resulting from the [*two*]{} CO lines at $z=4.342$. The presence of two distinct lines for AzTEC-1 in the RSR spectrum (Fig. \[fig:CO\]) rules out all single line identifications at $z<3.15$ (see Fig. \[fig:COLadder\]), and the $z=4.342$ solution remains as the only plausible interpretation with $S/N>5$. A small but non-zero possibility of two unrelated CO sources at $z<3.15$ along the same line of sight still remains [e.g., @zavala15], but we can rule out this scenario by using other tests. ![A plot of the template cross-correlation amplitude of the COSMOS AzTEC-1 RSR spectrum in S/N units with the redshift constraint from the radio-millimetric spectral index technique [@carilli99] utilizing just the AzTEC 1.1mm and the VLA 1.4 GHz photometry.[]{data-label="fig:XCOR_photz"}](figA3.png){width="0.99\columnwidth"} Photometric redshift constraints can be extremely helpful for determining the likely redshift identification [@yun07]. Exploiting the well known radio-IR correlation among star forming galaxies and the strong positive and negative $k$-corrections at the radio and millimetre wavelengths, the radio-millimetric spectral index technique [@carilli99] in particular is a simple but remarkably powerful method that requires just two broadband photometry measurements. As shown in Figure \[fig:XCOR\_photz\], the product of the probability distribution for radio-millimetric photometric redshift $p(z)$ and $SNR[\zeta(z)]$ (shown in Fig. \[fig:XCOR\]) effectively removes all $z<3$ scenarios and nicely isolates the $z=4.342$ solution. 
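For readers who wish to experiment with the redshift-search procedure summarized in this appendix, the following is a minimal, hedged sketch (it is not the actual RSR pipeline): the channel grid, Gaussian line profiles, equal weights $W(\nu)$ for all CO lines, the number of shuffles, and the injected toy signal are all simplifying assumptions introduced here for illustration.

```python
import numpy as np

# Simplifying assumptions: a uniform grid roughly spanning the 73-111 GHz RSR band,
# Gaussian line profiles, equal weights W(nu), and channel shuffling as the null distribution.
nu = np.arange(73.0, 111.0, 0.031)                    # observed frequency grid (GHz)
co_rest = 115.2712 * np.arange(1, 9)                  # approximate CO J -> J-1 rest frequencies (GHz)

def template(z, width=0.3):
    """Doppler-shifted model template M(nu, z): unit Gaussians at each redshifted CO line."""
    m = np.zeros_like(nu)
    for f_rest in co_rest:
        f_obs = f_rest / (1.0 + z)
        if nu[0] < f_obs < nu[-1]:
            m += np.exp(-0.5 * ((nu - f_obs) / width) ** 2)
    return m

def xcorr_snr(spectrum, zgrid, n_shuffle=500, seed=0):
    """zeta(z) = sum_nu S(nu) M(nu, z) W(nu) with W = 1, converted to S/N via random shuffles."""
    rng = np.random.default_rng(seed)
    M = np.array([template(z) for z in zgrid])        # template matrix, one row per trial redshift
    zeta = M @ spectrum
    null = np.array([M @ rng.permutation(spectrum) for _ in range(n_shuffle)])
    return (zeta - null.mean(axis=0)) / (null.std(axis=0) + 1e-12)

# Toy usage: a noisy spectrum with two fake CO lines injected at z = 4.34
rng = np.random.default_rng(1)
spec = rng.normal(0.0, 1.0, nu.size) + 5.0 * template(4.34)
zgrid = np.arange(0.0, 6.0, 0.002)
snr = xcorr_snr(spec, zgrid)
print("best-fit redshift:", zgrid[np.argmax(snr)])
```

In this toy setup the highest $S/N$ peak identifies the injected redshift; in the real analysis the measured RSR channel response and an empirical composite line template [e.g., @spilker14] take the place of the idealized ingredients above.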
The effectiveness of the radio-millimetric photometric redshift method is further demonstrated by the fact that a powerful constraint against low redshift solutions can be derived even when only a good radio upper limit is used – all low redshift solutions can be rejected equally well by treating the $\sim 4 \sigma$ radio photometry point of AzTEC-1 only as an upper limit. \[lastpage\] [^1]: E-mail: [email protected] [^2]: Stellar mass in Kroupa IMF is 38 per cent smaller than that of the Salpeter IMF – i.e., $M_*(Kroupa) = 0.62 M_*(Salpeter)$. [^3]: <http://library.nrao.edu/public/memos/alma/memo594.pdf> [^4]: <https://www.cfa.harvard.edu/~cqi/mircook.html> [^5]: <http://www.aips.nrao.edu/index.shtml> [^6]: This stellar source size is estimated from the spatially resolved  $i$-band image. @toft14 report an upper limit of $r_{e,NIR}<2.6$ kpc based on their analysis of the UltraVista near-IR images. [^7]: For example, in one of the largest recent studies of multiple CO transitions by @bothwell13, 18 out of 32 (56%) CO line measurements used for analyzing the SMG line ratios have $S/N<5$ – see their Table 5.
--- abstract: | In this Letter we point out that redshift surveys can break the degeneracy between the galaxy bias, the power spectrum normalization $\sigma_{8,0}$, and the growth factor, without the need for external information, by using a simple and rather general parametrization for the growth rate, the well known $\gamma$ parametrization, and by measuring the power spectrum at least at two different redshifts. We find that in next-generation surveys like Euclid, $\sigma_{8,0}$ and $\gamma$ can be measured to within $1\%$ and $5\%$, respectively, while the bias $b(z)$ can be measured to within $1-2\%$ in each of 14 equal-width redshift bins spanning $0.7\leq z \leq 2$. author: | Cinzia Di Porto$^{1,2}$, Luca Amendola$^{2}$, Enzo Branchini$^{3}$\ $^{1}$INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, 40127, Bologna, Italy and INFN Sezione di Bologna\ $^{2}$Institut f$\ddot{\textrm{u}}$r Theoretische Physik, Universit$\ddot{\textrm{a}}$t Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany\ $^{3}$Dipartimento di Fisica “E. Amaldi”, Università degli Studi “Roma Tre”, via della Vasca Navale 84, 00146, Roma, Italy,\ INFN Sezione di Roma Tre and INAF, Osservatorio Astronomico di Brera, Milano, Italy bibliography: - 's8paper.bib' date: 'Submitted:' title: 'Simultaneous constraints on bias, normalization and growth index through power spectrum measurements' --- \[firstpage\] Introduction ============ The issue of constraining the cosmological parameters by employing the 3D galaxy power spectrum as a summary statistic of the matter density perturbations has been widely explored in the last few years. Indeed, the power spectrum estimated from a redshift survey can give a wealth of information through its shape, amplitude and radial anisotropy induced by peculiar velocity-driven redshift space distortions (RSD). In principle the shape can be used to constrain background quantities, such as the dark energy equation of state and the geometry of the system, that are related to the expansion history of the universe. The amplitude and RSD of galaxy clustering can constrain the growth of cosmological perturbations and the biasing relation between galaxies and mass, i.e. the relation between the spatial distribution of galaxies and the underlying mass density field. Constraining the growth is essential in order to discriminate dark energy models with the same background properties but different physical origins. This makes the power spectrum a powerful tool in discriminating standard dark energy models from modified gravity theories. However, sharp discrimination is possible only if galaxy bias can be estimated precisely or, at the very least, marginalized over efficiently. Even though the work of [@kaiser87] made it clear that one of the most promising ways to determine the fluctuation growth is to exploit the redshift distortion effect (see [@hamilton98] for a review), in the first pioneering works [@seo03; @seo07] dealing with the Fisher matrix formalism as a tool to constrain cosmological parameters with redshift surveys, the redshift distortions were in fact considered a sort of “noise” disturbing the measurement of the baryon acoustic oscillation peaks. The attitude was to marginalize over the RSD and the growth. 
Later on, it was realized that information contained in the redshift distortions could tighten constraints on cosmological parameters [@amendola05; @wang08; @linder07] and RSD began to be considered as a standard, additional probe [@guzzo08], able to constrain both the growth rate and the dark energy responsible for the accelerated expansion of the universe. The growth factor itself began to be regarded as useful information to be exploited rather than to be marginalized over [@amendola05; @wang10]. However, as noticed in several works [@wang08; @percival09; @song09], the degeneracies between the growth factor $G(z)$, the (linear) bias $b(z)$ and the power spectrum normalization $\sigma_{8,0}\equiv\sigma_{8}(z=0)$ do not allow us to constrain all three quantities simultaneously. One can only measure combinations such as $f(z)\sigma_{8}(z)=f(z)G(z)\sigma_{8,0}$, $b(z)\sigma_{8}(z)=b(z)G(z)\sigma_{8,0}$ or $\beta(z)\equiv f(z)/b(z)$ (e.g. [@ross06; @guzzo08; @blake11]), where $f(z)$ is the growth rate, related to the growth factor via $f(z)=d\ln G(z)/d\ln a$ and $a=(1+z)^{-1}$ is the scale factor. The growth rate was originally parametrized by [@peebles80] as $f(z)= \Omega_m(z)^{0.6}$ in a matter-dominated cosmology. This expression was later generalized as $f(z)=\Omega_m(z)^{\gamma}$ to accurately describe the growth rate in a variety of cosmological models ranging from Dark Energy models to non-standard gravity theories [@lahav91; @wang98; @amendola04; @linder05; @polarski07]. This parametrization obviously implies $G(z)=\exp\left[-\int_{0}^{z}\Omega_{m}(z')^{\gamma}/(1+z')\, dz'\right]$. Several authors have adopted different strategies to break, bypass or ignore the degeneracy between $G(z)$, $b(z)$ and $\sigma_{8,0}$ depending on the parameters they were interested in constraining.

- In general one fixes (or assumes external priors for) one of the parameters and then estimates the others. Different authors have fixed the clustering amplitude $\sigma_{8,0}$ to the value estimated from the cosmic microwave background (CMB) data to constrain $f(z)$ and $b(z)$ (e.g. @diporto11) or the growth parameter $\gamma$ (e.g. @diporto11 [@belloso11]) or both the bias and the growth (e.g. @diporto11). Instead of fixing $\sigma_{8,0}$ to a single value, some authors have assumed CMB priors for this parameter to constrain $f(z)$ (e.g. @guzzo08) or $\gamma$ and $b(z)$ [@gaztanaga11].

- One can assume that General Relativity holds and fix $\gamma$ to the $\Lambda$ cold dark matter ($\Lambda$CDM) value (e.g. @seo03 [@amendola05; @ross06; @wang10]). For example [@hawkins03] and [@cole05] fix $\gamma$ to estimate the bias while [@percival04] assume a further prior on $\Omega_m$ to determine $\sigma_{8,0}$.

- Alternatively, one can consider some measurable combination of the above parameters that can also discriminate models efficiently. A popular choice is the “mass weighted” growth rate $f(z)\sigma_{8}(z)$, which provides a good test for dark energy models (e.g. @song09 [@white09; @percival09; @blake11; @carbone10; @wang10]).

It is a fact that galaxy bias, growth and clustering amplitude cannot be independently estimated from the observed $P(k)$ alone [@percival09], and yet effective constraints can be placed provided that these quantities can be accurately parametrized under fairly general hypotheses. Although the idea is somewhat implicit in some of the works quoted above, it seems to us it has not been clearly pointed out or discussed thoroughly anywhere. 
The scope of this work is to make this point explicit with a simple proof, which we present in Sec. \[sec:method\]. In Sec. \[sec:fm\] we present the Fisher matrix method that we employ in order to obtain the aforementioned constraints on parameters. Results are presented in Sec. \[sec:results\] and finally in Sec. \[sec:conclusions\] we draw our conclusions. Lifting Parameter Degeneracy {#sec:method} ============================ Let us consider a measurement of the galaxy power spectrum in redshift space at different epochs, i.e. in different redshift bins $z$. In the linear regime, the shape of the power spectrum and its redshift distortions can be modeled as follows [@seo03; @seo07] $$\begin{aligned} P_{\textrm{obs}}(z;k,\mu) &\!\!\!\!\! =&\!\!\!\!\! \frac{D_{F}^{2}(z)H(z)}{D^{2}(z)H_{F}(z)}G^{2}(z)b^{2}(z)\sigma_{8,0}^{2}\left[1+\frac{f(z)}{b(z)}\mu^{2}\right]^{2}P_0\nonumber\\ &&\!\! +P_{\textrm{s}}(z)\nonumber\\ &\!\! \!\!\!\equiv &\!\! \!\!\!C(z)G^{2}(z)B^{2}(z)\left[1+R(z)\mu^{2}\right]^{2}P_{0}+P_{\textrm{s}}(z)\label{eq:spectrum}\end{aligned}$$ where $B(z)\equiv b(z)\sigma_{8,0}$, $R(z)=\frac{f(z)\sigma_{8,0}}{B(z)}$, $\mu$ is the direction cosine and $P_{0}\equiv P(k,z=0)$ is the linear power spectrum at the present epoch. The factor $C(z)\equiv D_{F}^{2}(z)H(z)/\left[D^{2}(z)H_{F}(z)\right]$, where $D(z)$ is the angular diameter distance and $H(z)$ is the Hubble parameter, takes into account the difference in comoving volume between the fiducial cosmology (subscript $F$), the one we use to convert observed redshifts into distances, and the true cosmology. Finally, we model the shot noise contribution as an additional factor $P_{s}(z)$. However, let us notice that, since we are not interested in constraining $D(z)$ and $H(z)$ in every redshift bin, but only the parameters they depend on, these functions and their combination $C(z)$ are not considered further in this Letter. Let us assume that the growth rate can be modelled as $\Omega_{m}(z)^{\gamma}$. The growth index $\gamma$ need not be constant. The simple parametrization $\gamma(z)=\gamma_{0}+\gamma_{1}z/(1+z)$ has been shown to reproduce the time dependence of $\gamma$ in a variety of non-standard gravity models [@fu09]. For the sake of simplicity we consider in this Section the case of constant $\gamma$, but our conclusion remains valid also for a time-dependent growth index, as we argue below. Cosmological parameters are determined by fitting Eq. (\[eq:spectrum\]) to the observed galaxy power spectrum. From the spectral shape in the linear regime, $P_{0}$, one can determine those parameters that describe the expansion history $H(z)$ (and then $D(z)$) ($h$, $\Omega_{m,0}$, $\Omega_{DE}$, $w_{0}$, $w_{1}$, etc.). From the amplitude of the power spectrum one determines the combination $A(z)\equiv G(z)B(z)$. If this analysis can be performed at two (or more) redshifts and the growth rate is modelled as $f=\Omega_{m}(z)^{\gamma}$, then the degeneracy can be broken. Let us prove it for the simple case in which the power spectrum has been measured at two redshifts $z_{1}$ and $z_{2}$. 
In this case, one determines $A$ and $R$ at two different epochs, $A_{1},\, R_{1},\, A_{2},\, R_{2}$ and can solve the system $$\begin{aligned} A_{1} & = & G_{1}(\gamma)B_{1}\\ A_{2} & = & G_{2}(\gamma)B_{2}\\ R_{1} & = & \frac{\Omega_{m,1}^{\gamma}\sigma_{8,0}}{B_{1}}=\frac{\Omega_{m,1}^{\gamma}\sigma_{8,0}G_{1}(\gamma)}{A_{1}}\\ R_{2} & = & \frac{\Omega_{m,2}^{\gamma}\sigma_{8,0}}{B_{2}}=\frac{\Omega_{m,2}^{\gamma}\sigma_{8,0}G_{2}(\gamma)}{A_{2}}\;.\end{aligned}$$ By assumption $\Omega_{m,z_i}$ has already been determined because it depends only on the background parameters. Then the ratio of the last two equations yields $$\frac{R_{1}A_{1}}{R_{2}A_{2}}=\left(\frac{\Omega_{m,1}}{\Omega_{m,2}}\right)^{\gamma}\frac{G_{1}(\gamma)}{G_{2}(\gamma)}$$ where the only unknown is $\gamma$ and therefore we can solve for it. Then, from the above equations, one estimates $\sigma_{8,0}$ and $b(z)$ as $$\begin{aligned} \sigma_{8,0} & = & \frac{A_{i}R_{i}}{G_{i}(\gamma)\Omega_{m,i}^{\gamma}}\ \ \ \ {\rm and}\ \ \ \ \ b(z_{i})=\frac{\Omega_{m,i}^{\gamma}}{R_{i}}\end{aligned}$$ We notice that if the power spectrum is measured in three redshift bins, then one can also constrain the time dependence of the growth index $\gamma(z)=\gamma_{0}+\gamma_{1}z/(1+z)$. This procedure assumes that the size of the bins and the number densities of galaxies within them are large enough to provide effective constraints; otherwise the parameter degeneracy, although broken in principle, can reappear due to poor statistics. As we show in the next section, this is, however, not of concern for large surveys such as Euclid. The proof that the degeneracy can be broken can also be obtained using the Fisher matrix formalism [@fisher35; @tegmark97r]. If one assumes $ f=\Omega_{m}(z)^{\gamma}$, a constant growth index and the model power spectrum of Eq. (\[eq:spectrum\]), then the derivatives of the spectrum with respect to $b(z),\,\gamma$ and $\sigma_{8,0}$ $$\begin{aligned} \frac{\partial\ln P_{\textrm{obs}}}{\partial\ln b_{z_{i}}} & = & 2-2\frac{\Omega_{m}(z_{i})^{\gamma}\mu^{2}}{b_{z_{i}}+\Omega_{m}(z_{i})^{\gamma}\mu^{2}}\\ \frac{\partial\ln P_{\textrm{obs}}}{\partial\gamma} & = & 2\frac{\partial\ln G}{\partial\gamma}+2\frac{\Omega_{m}(z_{i})^{\gamma}\ln\Omega_{m}(z_{i})\mu^{2}}{b_{z_{i}}+\Omega_{m}(z_{i})^{\gamma}\mu^{2}}\\ \frac{\partial\ln P_{\textrm{obs}}}{\partial\sigma_{8,0}} & = & \frac{2}{\sigma_{8,0}}\,\end{aligned}$$ are not degenerate because of their different dependence on $z$ and $\mu$. Dropping the $\gamma$ parametrization, i.e. treating $G(z)$ as an extra free parameter for every bin, would not allow us to lift the parameter degeneracy and a singular Fisher matrix would result. We note that here we assumed the bias to be scale-independent, which should be true on the large, linear scales we are focusing on here. A possible scale dependence would modify the shape of the spectrum and should be accounted for to avoid systematic errors in the estimate of the background cosmology parameters. Furthermore, a scale-dependent bias could make the Fisher matrix degenerate again and cause the parameter degeneracies to return. Clearly, in order to break the degeneracy in the presence of a bias scale dependence, one should parametrize it as, e.g., a simple power law of $k$, employing a number of parameters smaller than the number of distinct redshift bins. This issue will be studied in a future work. 
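To make the two-bin argument above concrete, the following is a minimal numerical sketch of the inversion. The "measured" amplitudes $A_i$ and ratios $R_i$ are invented illustrative numbers (chosen to be roughly consistent with the fiducial model of the next section), and a flat $\Lambda$CDM background with $w=-1$ is assumed purely for simplicity; it is not the full parametrization used in our Fisher analysis.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Om0 = 0.271  # background matter density, assumed already fixed by the spectral shape

def Om(z):
    """Omega_m(z) for a flat LCDM background with w = -1 (simplifying assumption)."""
    return Om0 * (1 + z) ** 3 / (Om0 * (1 + z) ** 3 + 1 - Om0)

def G(z, gamma):
    """Growth factor from f = Omega_m(z)^gamma, normalized so that G(0) = 1."""
    return np.exp(-quad(lambda zp: Om(zp) ** gamma / (1 + zp), 0.0, z)[0])

# Invented "measured" quantities at two redshifts: A_i = G_i B_i and R_i = f_i sigma8,0 / B_i
z1, z2 = 0.8, 1.4
A1, R1 = 0.713, 0.625
A2, R2 = 0.642, 0.605

def gamma_equation(gamma):
    # ratio equation: R1 A1 / (R2 A2) = (Om1 / Om2)^gamma * G1(gamma) / G2(gamma)
    return (R1 * A1) / (R2 * A2) - (Om(z1) / Om(z2)) ** gamma * G(z1, gamma) / G(z2, gamma)

gamma = brentq(gamma_equation, 0.1, 1.5)                       # solve for the growth index
sigma8_0 = A1 * R1 / (G(z1, gamma) * Om(z1) ** gamma)          # then sigma_{8,0} ...
b1, b2 = Om(z1) ** gamma / R1, Om(z2) ** gamma / R2            # ... and the bias in each bin
print(f"gamma = {gamma:.3f}, sigma8_0 = {sigma8_0:.3f}, b(z1) = {b1:.2f}, b(z2) = {b2:.2f}")
```

The same root-finding step generalizes straightforwardly to three or more bins, in which case $\gamma_0$ and $\gamma_1$ can both be constrained, as noted above.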
Fisher matrix forecasts {#sec:fm} ======================= We now apply the strategy outlined above to a specific measurement and adopt the Fisher matrix approach to evaluate the expected errors on the measured parameters. For this purpose we choose a reference cosmological model and run CAMB [@lewis00] to obtain the linear power spectrum in real space $P_{0}$. Then, the model power spectrum in Eq. \[eq:spectrum\] depends on a number of cosmological parameters, listed in Tab. \[tab:parameters\]. In the upper part we list parameters that do not depend on redshift. Redshift-dependent parameters are listed in the bottom part of the table. In our application we consider the more general case of a time-dependent growth index $\gamma(z)=\gamma_{0}+\gamma_{1}z/(1+z)$. We adopt a slightly unconventional fiducial $\Lambda$CDM model with $\gamma_{0}=0.545$ and $\gamma_{1}=0$ (to be compared with the concordance $\Lambda$CDM values $\gamma_{0}=0.556$, $\gamma_{1}=-0.018$ [@fu09]) and allow for a time-dependent equation of state for the Dark Energy with $w_{0}=-0.95$ and $w_{1}=0$. In this work we do not consider a possible smearing of the wiggles associated with baryonic acoustic oscillations (BAO) [@seo07]. Further, we ignore non-linear effects that modify the spectral shape and the RSD pattern on small scales (e.g. [@blake11]) by limiting our analysis to small wavenumbers $k\leq k_{\textrm{max}}\equiv \textrm{min}[k_{\textrm{cut}}, k_{\textrm{lin}}(z)]$, where $k_{\textrm{cut}}$ is set to $0.2\,h\,\textrm{Mpc}^{-1}$. The value for $k_{\textrm{lin}}(z)$ is set by requiring that the variance in cells $\sigma^{2}(k_{\textrm{lin}},z)=0.25$ (for example, at $z=0.7$ we have $k_{\textrm{lin}}\sim0.16\, h\,\textrm{Mpc}^{-1}$). Having assumed our model power spectrum and set the parameter values of the fiducial model, we compute the elements of the Fisher matrix as in [@tegmark97] by integrating over all modes below $k_{\textrm{max}}$. The result depends on the characteristics of the galaxy redshift catalogue used to compute the power spectrum. More precisely, it depends on the volume of the survey and on the galaxy redshift distribution $dN/dz$. In this work we take, as a reference case, the Euclid redshift survey as specified in the [*Red Book*]{} [@euclid_rb] and at the website http://www.euclid-ec.org/. This survey will span a broad redshift range $0.7\leq z \leq 2$ that we split into 14 bins of $\Delta z=0.1$. The expected sky coverage is $15000$ deg$^{2}$. The expected $dN/dz$ is given in the aforementioned [*Red Book*]{}, while the fiducial values for the bias in each redshift bin, for Euclid galaxies, are derived from [@orsi10].

| Redshift-independent parameters | | fiducial values |
|---|---|---|
| Reduced total matter density | $\Omega_{m,0}h^{2}$ | $0.271\cdot(0.703)^{2}$ |
| Reduced baryon density | $\Omega_{b,0}h^{2}$ | $0.045\cdot(0.703)^{2}$ |
| Curvature density | $\Omega_{k}$ | 0 |
| Hubble constant at present | $h$ | 0.703 |
| Primordial fluctuation slope | $n_{s}$ | 0.966 |
| Dark energy eq. of state | $w_{0},w_{1}$ | $-0.95$, 0 |
| Power spectrum normalization | $\sigma_{8,0}$ | 0.809 |
| *$\gamma$-parameterization* parameters | $\gamma_{0}$, $\gamma_{1}$ | 0.545, 0 |
| Redshift-dependent parameters (in 14 $z$-bins) | | |
| Shot noise | $P_{s}$ | 0 |
| Bias | $\log b$ | Derived from [@orsi10] |

Results {#sec:results} ======= Our analysis allows us to set simultaneous constraints on $b(z),\sigma_{8}$ and $\gamma$, when the power spectrum is measured at two or more redshifts. Tabs. 
\[tab:Errors-on-cosmological\] and \[bias\_errors\] display the expected $1\sigma$ uncertainties on these (and other) parameters, listed in the first row. Errors on each parameter are obtained after marginalizing over all other parameters in the table. Therefore they do not coincide with the lengths of the ellipses' axes shown in Fig. \[fig:gamma\_s8\_cplot\], which represent joint, rather than marginalized, probabilities. Table \[tab:Errors-on-cosmological\] shows the uncertainties on the redshift-independent parameters listed in the top row. In the second row the values refer to 1$\sigma$ uncertainties computed marginalizing over all other parameters while in the bottom row they are obtained through marginalization after fixing $\gamma_{1}$, in order to compare our results to other, similar analyses. In Table \[bias\_errors\] we list the expected uncertainties on the bias parameter estimated at the different redshifts specified in the top row. The meaning of the second and third rows is the same as in Table \[tab:Errors-on-cosmological\]. In the case of a time-dependent growth index, uncertainties on the parameters are remarkably small: $\Delta \sigma_{8}=0.03$, $\Delta \gamma_0 =0.19$ and $\Delta b(z)=2-4\%$, depending on the redshift. They decrease even further if $\gamma$ is assumed to be constant ($\Delta\sigma_{8}=0.007$, $\Delta\gamma=0.03$ and $\Delta b(z)=1-1.7\%$). In Fig. \[fig:gamma\_s8\_cplot\] we plot the confidence regions of $\gamma_{0}$ and $\sigma_{8}$. Contours refer to 68% and 95% joint probability levels. The blue, dotted ellipses refer to the case in which we marginalize over all parameters, including $b(z)$. The contour levels shrink considerably when one fixes the value of $\gamma_{1}$ (red, dashed) and become tiny in the case in which all cosmological parameters and the bias are fixed to their fiducial values (we leave only the shot noise free to vary). This last case represents the ideal situation in which the available complementary probes (e.g. SN Ia, CMB fluctuations, lensing, etc.) allow us to estimate all other parameters with very high precision. In Fig. \[fig:gamma\_s8\_vs\_b\_cplot\] we show the analogous confidence contour levels in the $\sigma_{8}-b(z)$ (left-hand panel) and $\gamma-b(z)$ (right-hand panel) planes. As in the left-hand panel of Fig. \[fig:gamma\_s8\_cplot\], ellipses of different sizes refer to marginalization over different sets of parameters. The four sets of ellipses refer to different redshift slices, as specified by the labels. We do not show all 14 redshifts to avoid overcrowding. Keeping $\gamma$ constant and assuming a flat universe allows one to reduce errors on $\sigma_{8}$ significantly but has little effect on galaxy bias uncertainties. Finally, we have tested the sensitivity of our result to $k_{\textrm{cut}}$, i.e. to the cut we have imposed to exclude nonlinear effects. Increasing $k_{\textrm{cut}}$ increases the number of $k$-modes used in the analysis and reduces statistical errors. However, if $k_{\textrm{cut}}$ is too large, non-linear effects kick in and induce systematic errors. Decreasing $k_{\textrm{cut}}$ allows one to apply linear theory but increases random errors. To find the best tradeoff one needs to estimate the power spectrum in simulated galaxy catalogs obtained from fully nonlinear N-body simulations. However, we can obtain an order-of-magnitude estimate for the relative importance of non-linear effects by re-computing errors at different values of $k_{\textrm{cut}}$ and checking how much they change. 
The right-hand panel of Fig. \[fig:gamma\_s8\_cplot\] shows the result of such an exercise in the $\gamma_{0}-\sigma_{8}$ plane. Ellipses filled with different shades of blue represent the reference case of $k_{\textrm{cut}}=0.2\, h\,\textrm{Mpc}^{-1}$ (and correspond to the blue, short-dashed ellipses in the left panel). Pushing $k_{\textrm{cut}}$ up to $0.5\, h\,\textrm{Mpc}^{-1}$ (red, long-dashed) has little effect, meaning that the decrease in statistical errors does not justify the risk of introducing systematic errors driven by nonlinear effects. Reducing $k_{\textrm{cut}}$ to $0.1\, h\,\textrm{Mpc}^{-1}$ has quite a dramatic effect (purple, dotted ellipses). The large increase in the errors reflects the fact that with this $k_{\textrm{cut}}$ one cuts off all but one of the BAO wiggles. Finally, a value of $k_{\textrm{cut}}=0.15\, h\,\textrm{Mpc}^{-1}$ (black, dot-dashed ellipses) seems to represent a safer and yet acceptable option. However, the optimal choice of $k_{\textrm{cut}}$ depends on the underlying cosmological model and on the accurate modeling of nonlinear effects.

| $Par.$ | $h$ | $\Omega_{m,0}$ | $\Omega_{b,0}$ | $n$ | $\Omega_{DE}$ | $w_{0}$ | $w_{1}$ | $\gamma_{0}$ | $\gamma_{1}$ | $\sigma_{8,0}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $1\sigma$ | 0.008 | 0.004 | 0.0008 | 0.03 | 0.009 | 0.07 | 0.31 | 0.19 | 0.49 | 0.03 |
| $1\sigma$ | 0.008 | 0.004 | 0.0008 | 0.02 | 0.009 | 0.07 | 0.28 | 0.03 | - | 0.007 |

| $z$ | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 1.7 | 1.8 | 1.9 | 2.0 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $1\sigma$ | 0.023 | 0.022 | 0.022 | 0.023 | 0.025 | 0.027 | 0.029 | 0.031 | 0.034 | 0.038 | 0.039 | 0.041 | 0.044 | 0.046 |
| $1\sigma$ | 0.010 | 0.011 | 0.011 | 0.012 | 0.013 | 0.014 | 0.016 | 0.016 | 0.018 | 0.02 | 0.021 | 0.022 | 0.024 | 0.026 |

![image](gam_s8_marg_fix_best){height="7.5cm"}$\quad$ ![image](gam_s8_kcut){height="7.5cm"} ![image](s8_b_3cplot){height="7.5cm"}$\quad$ ![image](gam_b_3cplot){height="7.5cm"} Conclusions {#sec:conclusions} =========== The goal of this Letter is to point out explicitly that a two-point statistic like the power spectrum can provide independent constraints on the cosmological parameters $\sigma_{8,0}$, $\gamma$ and the galaxy bias, under the quite general assumption that the growth rate of density fluctuations can be parametrized as $ f=\Omega_{m}(z)^{\gamma}$ and the measurement is performed at two (or more) different redshifts. This result is not surprising. In fact it may appear even obvious, but to the best of our knowledge it has never been explicitly pointed out in the literature. To assess how precisely these parameters can be determined after breaking the degeneracy we have performed a Fisher-matrix analysis and explored the case of the next-generation Euclid redshift survey [@euclid_rb]. We found that these quantities could be measured to the percent level. Constraints tighten considerably if a flat universe is assumed or if $\gamma$ is taken to be time independent. We find that $\sigma_{8}$ and $\gamma$ can be measured to within $1\%$ and $5\%$, respectively, while the bias $b(z)$ can be measured to within $1-2\%$ (assuming flatness) in every redshift bin. 
This procedure can be applied to redshift surveys like WiggleZ [@blake11] that are deep enough to provide independent redshift shells in which to measure the power spectrum. Analyses of this dataset have provided estimates of the combinations $f\sigma_{8,0}G(z)$ and $\beta(z)$ [@blake11]. We will investigate the possibility of using the WiggleZ data to estimate $\sigma_{8}$, $\gamma$ and $b(z)$ in a future paper. Acknowledgments =============== We thank Will Percival for his comments on a preliminary version of this work and B. Garilli and the NISP simulations group for providing us with the predicted galaxy redshift distribution for Euclid, as presented in the Euclid Red Book, and the Euclid Consortium for allowing access to this information. CDP acknowledges the support provided by INAF through a PRIN/2008/1.06.11.10 grant. CDP also thanks the Institut f$\ddot{\textrm{u}}$r Theoretische Physik and the University of Heidelberg for the kind hospitality and Carmelita Carbone for useful discussions on this topic. EB acknowledges the support provided by MIUR PRIN 2008 “Dark energy and cosmology with large galaxy surveys” and by Agenzia Spaziale Italiana (ASI-Uni Bologna-Astronomy Dept. ‘Euclid-NIS’ I/039/10/0). L.A. acknowledges funding from DFG through the project TRR33 “The Dark Universe”.
--- abstract: 'We present the numerical solution of the full gap equation at weak coupling constant $g$. It is found that the standard approximations used to derive the gap equation to leading order in the coupling constant are essential for a reliable numerical evaluation of the logarithmic singularity at small coupling constant. The approximate integral gap equation at very small $g$ should be inverted to a soft integral equation to smooth the logarithmic singularity near the Fermi surface. The full gap equation is solved for a rather large coupling constant $g\ge 2.0$. The approximate and soft integral gap equations are solved for small $g$ values. When their solutions are extrapolated to larger $g$ values, they coincide with the full gap equation solution near the Fermi surface. Furthermore, the analytical solution matches the numerical one up to a prefactor of order one, $O(1)$. Our results confirm the previous estimates that the gap energy is of the order of tens to 100 MeV for chemical potentials $\mu\le 1000$ MeV. They also support the validity of the leading-order approximations applied to the full gap equation to derive the soft integral gap equation and its analytical solution near the Fermi surface.' address: - 'Department of Physics, Stanford University, Stanford, CA 94305-4060' - 'Department of Physics, Bethlehem University, P.O. Box 9, Bethlehem, PA' - 'Institut für Theoretische Physik, Robert Mayer Str. 8-10, D-60054, Frankfurt am Main, Germany' author: - 'I. Zakout' - 'H.R. Jaqaman' - 'W. Greiner' date: today title: Numerical solution of the color superconductivity gap in a weak coupling constant --- Introduction ============ Deconfined quark matter is expected at baryon densities much larger than the normal nuclear matter density. Any attractive quark-quark interaction causes an instability of the Fermi surface through the formation of Cooper pairs and leads to a color superconductivity phase[@Barrois1; @Frautschi1; @Bailin1] in very dense quark matter at low temperatures. However, quark matter at relatively low densities may have properties qualitatively different from those of a BCS-type superconductor. It has been argued that nuclear matter at low density might be continuously connected to quark matter at high density without any phase transition. The calculation of the superconducting gap is important for studying many properties of hadronic matter at high densities[@Alford01; @Rajagopal00]. In QCD, the single-gluon exchange between quarks of different color generates an attractive interaction in the color antitriplet channel. The scattering through a single-gluon exchange strongly correlates the directions of the in- and outgoing quarks. However, there is a logarithmic divergence for forward-angle scattering. This implies that the gap is not exponentially small in $1/g^2$[@Bailin1], as in BCS-like theories, but only in $1/g$[@Son1], where $g$ is the coupling constant. It is found that, for nonzero quark density at low temperature, the free energy is an expansion in $g^2\ln(1/g)$ and appears well behaved even for rather large coupling constants $g\le 4$[@Schafer1]. Earlier estimates of the color superconductivity gap were rather small, $\Delta\approx \mbox{(few) MeV}$, for quark chemical potentials of physical interest, $\mu\le 1000$ MeV[@Barrois1; @Frautschi1; @Bailin1]. Recently, it has been shown that the gap is large, of order 100 MeV, for $\mu < 1000$ MeV [@Schafer1; @Alford97a; @Rapp97a]. 
These larger estimates may lead to observable effects in the cores of neutron stars and in the physics of heavy-ion collisions[@Alford01; @Rajagopal00; @AlfordCFL]. A reliable estimation of the gap plays an essential role in phenomenological calculations involving color superconductivity physics. The gap equation has been studied numerically, for a quantitative estimation over a wide range of the chemical potential, in the time-like energy space[@Schafer1] as well as in momentum space[@Abuki; @Fugleberg1; @Fugleberg2]. In these studies, the gluon propagator has been specified by Thomas-Fermi screening and Landau damping as defined by the hard dense loop (HDL) approximation. Furthermore, the resultant equation has been simplified by approximating the angular integration near the Fermi surface[@Abuki]. Nonetheless, the gap equation is highly singular near the Fermi surface due to the correlation between the directions of the in- and outgoing quarks. Therefore, any approximation may significantly affect the gap value, in particular for a moderate quark chemical potential. In the present work, we adopt the spectral representation of the gluon propagator[@Pisarski1; @Pisarski2] and solve the gap equation numerically without any further approximation. The spectral representation is more rigorous than previous approximations based on a screened gluon propagator. Recently, Pisarski and Rischke[@Pisarski1; @Pisarski2; @Rischke1; @Rischke2] have derived the full gap equation in momentum space for two massless quark flavors with total spin $J=0$ at an arbitrary temperature $T$. They adopted the gluon propagator in the spectral representation derived from the HDL approximation. They derived the approximate gap equation to leading order in the coupling constant $g$; it is given by Eq.(72) in Ref.[@Pisarski1]. This equation has a logarithmic dependence on the quark energy under the integral. Furthermore, they approximated the logarithmic term under the integral to derive a soft integral equation, given by Eq.(80) in Ref.[@Pisarski1]. Hereafter, the approximate integral gap equation is called Eq. (I) while the soft integral gap equation is called Eq. (II). Finally, they inverted the soft integral equation to an ordinary differential equation to derive the analytical solution. In the present work, the full gap equation is solved numerically for rather large coupling constants $g=2-4$, which correspond to chemical potentials $\mu < 1000$ MeV. We confirm the previous estimates that for a chemical potential of order 1 GeV, the gap amplitude can be of the order of tens to 100 MeV. We also study the numerical solution for small coupling constants $g<2.5$ by solving Eq. (I) as well as Eq. (II). We compare the numerical solutions with the analytical one to leading-order logarithmic accuracy. This paper is organized as follows. In Sec. II, we present the full gap equation and its approximate versions. The results and conclusions are presented in Sec. III. 
The gap equation ================ The full gap equation for color superconductivity at weak coupling, with total spin zero and two massless flavors in dense QCD, reads [@Pisarski1] $$\begin{aligned} \phi(k)= \frac{2}{3} g^{2} \int \frac{d^{3}q}{(2\pi)^{3}} \left[ T\sum_{q_0}\Delta_l(K-Q)\Xi(Q){\cal P}_l(\hat{\bf k}\cdot\hat{\bf q}) +T\sum_{q_0}\Delta_t(K-Q)\Xi(Q){\cal P}_t(\hat{\bf k}\cdot\hat{\bf q}) \right]\end{aligned}$$ where $$\begin{aligned} {\cal P}_l(\hat{\bf k}\cdot\hat{\bf q})= \left(\frac{1+\hat{\bf k}\cdot\hat{\bf q}}{2}\right)\end{aligned}$$ and $$\begin{aligned} {\cal P}_t(\hat{\bf k}\cdot\hat{\bf q})= \left( -\frac{3-\hat{\bf k}\cdot\hat{\bf q}}{2} +\frac{1+\hat{\bf k}\cdot\hat{\bf q}}{2} \frac{(k-q)^2}{({\bf k}-{\bf q})^2} \right).\end{aligned}$$ The Matsubara sums over $q_0$ in the spectral representation for the quantity $\Xi(Q)$ are computed as $$\begin{aligned} &T&\sum_{q_0}\Delta_l(K-Q)\Xi(Q)= -\frac{\phi(\epsilon_q,{\bf q})}{2\epsilon_q} \left\{-\frac{1}{2}\tanh\left(\frac{\epsilon_q}{2T}\right)\frac{2}{p^2} \right. \nonumber\\ &+& \int_{0}^{\infty}d\omega \rho_l(\omega,{\bf p}) \frac{1}{2}\tanh(\frac{\epsilon_q}{2T}) \left[ \frac{1}{\epsilon_k+\epsilon_q+\omega} -\frac{1}{\epsilon_k-\epsilon_q-\omega} -\frac{1}{\epsilon_k+\epsilon_q-\omega} +\frac{1}{\epsilon_k-\epsilon_q+\omega} \right] \nonumber\\ &+& \left. \int_{0}^{\infty}d\omega \rho_l(\omega,{\bf p}) \frac{1}{2}\coth(\frac{\omega}{2T}) \left[ \frac{1}{\epsilon_k+\epsilon_q+\omega} -\frac{1}{\epsilon_k-\epsilon_q-\omega} +\frac{1}{\epsilon_k+\epsilon_q-\omega} -\frac{1}{\epsilon_k-\epsilon_q+\omega} \right]\right\},\end{aligned}$$ and $$\begin{aligned} &T&\sum_{q_0}\Delta_t(K-Q)\Xi(Q)= \nonumber\\ &-&\frac{\phi(\epsilon_q,{\bf q})}{2\epsilon_q} \left\{ \int_{0}^{\infty}d\omega \rho_t(\omega,{\bf p}) \frac{1}{2}\tanh(\frac{\epsilon_q}{2T}) \left[ \frac{1}{\epsilon_k+\epsilon_q+\omega} -\frac{1}{\epsilon_k-\epsilon_q-\omega} -\frac{1}{\epsilon_k+\epsilon_q-\omega} +\frac{1}{\epsilon_k-\epsilon_q+\omega} \right] \right. \nonumber\\ &+& \left. \int_{0}^{\infty}d\omega \rho_t(\omega,{\bf p}) \frac{1}{2}\coth(\frac{\omega}{2T}) \left[ \frac{1}{\epsilon_k+\epsilon_q+\omega} -\frac{1}{\epsilon_k-\epsilon_q-\omega} +\frac{1}{\epsilon_k+\epsilon_q-\omega} -\frac{1}{\epsilon_k-\epsilon_q+\omega} \right]\right\},\end{aligned}$$ where $\epsilon_q=\sqrt{ (q-\mu)^2+|\phi(q)|^2 }$. The spectral densities, $\rho_{l}$ and $\rho_{t}$, are given by $$\begin{aligned} \rho_{l,t}(\omega,{\bf p})= \rho^{\mbox{pole}}_{l,t}(\omega,{\bf p}) \delta[\omega-\omega_{l,t}({\bf p})] +\rho^{\mbox{cut}}_{l,t}(\omega,{\bf p})\theta(p-\omega).\end{aligned}$$ The explicit expressions for the spectral densities $\rho^{\mbox{pole,cut}}_{l,t}$ are given by Eqns. (34b-e) in Ref. [@Pisarski1]. The $\omega_{l,t}({\bf p})$ are the solutions of the dispersion relations for longitudinal and transverse gluons, which are given by Eqns. (37a-b) in Ref. [@Pisarski1]. In the HDL approximation there are three scales in cold dense QCD: the chemical potential $\mu$, the gluon mass $m_g\sim g\mu$, and the color superconductivity condensate $\phi_0\sim \mu\exp(-1/g)$; they are naturally ordered as $\mu\gg m_g\gg\phi_0$. Pisarski and Rischke [@Pisarski1] derived the approximate integral gap equation from the full gap equation by using this hierarchy of scales. This approximate integral gap equation is derived in momentum space, unlike the Eliashberg-type equation of Schäfer and Wilczek [@Schafer1], which is derived in the time-like energy space. 
The approximate integral equation, Eq.(I), reads $$\begin{aligned} \phi(k)\approx\frac{g^{2}}{18\pi^2}\frac{1}{2}\int^{\mu+\delta}_{\mu-\delta} \frac{dq}{\epsilon_q}\tanh\left(\frac{\epsilon_q}{2T}\right) \frac{1}{2}\ln\left(\frac{b^2\mu^2}{|\epsilon^2_q-\epsilon^2_k|}\right) \phi_q,\end{aligned}$$ where $b=256\pi^{4}\left(\frac{2}{N_fg^2}\right)^{5/2}$. This equation can be simplified to a soft integral equation by replacing the logarithm under the integral by [@Son1] $$\begin{aligned} \frac{1}{2}\ln\left(\frac{b^2\mu^2}{|\epsilon^2_q-\epsilon^2_k|}\right) \approx \ln\left(\frac{b\mu}{\epsilon_q}\right)\theta(q-k)+ \ln\left(\frac{b\mu}{\epsilon_k}\right)\theta(k-q).\end{aligned}$$ Hence, the soft integral gap equation, Eq.(II), becomes $$\begin{aligned} \phi(k)\approx\overline{g}^2 \left[ \ln\left(\frac{b\mu}{\epsilon_k}\right) \int^{k}_{\mu}\frac{dq}{\epsilon_q} \tanh\left(\frac{\epsilon_q}{2T}\right)\phi_q +\int^{\mu+\delta}_{k} \frac{dq}{\epsilon_q} \tanh\left(\frac{\epsilon_q}{2T}\right) \ln\left(\frac{b\mu}{\epsilon_q}\right) \phi_q \right]\end{aligned}$$ where $\overline{g}=\frac{g}{3\sqrt{2}\pi}$. The soft integral gap equation, Eq.(II), can be inverted to an ordinary differential equation which has a simple solution to leading order in $\overline{g}$, with the Fermi surface located at $x^{*}=\pi/(2\overline{g})$. The solution for the zero-temperature case reads $$\begin{aligned} \phi(x)=2b\mu\exp\left(-\pi/(2\overline{g}) \right) \sin(\overline{g}x) \times {\cal O}(1) \end{aligned}$$ where $x=\ln\left(\frac{2b\mu}{k-\mu+\epsilon_k}\right)$. Furthermore, the analytical solution of the gap equation in the time-like energy space (the Eliashberg-type equation) is given by [@Schafer1] $$\begin{aligned} \phi_0\approx b\mu\exp(-\frac{\pi}{2\overline{g}}) \times {\cal O}(1)\end{aligned}$$ where the overall coefficient is correct up to a prefactor of order one “${\cal O}(1)$”. Since the analytical solution of the gap equation to leading order in the coupling constant $g$ is undetermined up to a prefactor of order one, we assume that the analytical solution at the Fermi surface reads $$\begin{aligned} \phi_0=\zeta b\mu\exp(-\frac{\pi}{2\overline{g}})\end{aligned}$$ where $\zeta$ is a constant of order one. Results and conclusions ======================= We have studied the numerical solution of the full gap equation and the validity of the approximations in Eqns. (I) and (II) near the Fermi surface. The full gap equation is solved in momentum space for rather large coupling constants $g>2.0$. The approximate versions of the full gap equation (i.e. Eqns. (I) and (II)) are solved for small coupling constant values $g<1$. Furthermore, we have extrapolated Eqns. (I) and (II) to large coupling constant values and present their numerical solutions for $1.0<g<3.0$. Fig. 1 displays the relative gap amplitude $\phi(|{\bf k}|)/\phi_0$ versus the scaled Fermi momentum $k/\mu$. It is shown that the solution of Eq. (II) has a sharp peak near the Fermi surface for small coupling constant values $g\le 1$. The peak around the Fermi surface for Eq. (II) is much sharper than that for Eq. (I) for a small coupling constant $g\le 1$. Hence, the replacement of the logarithm under the integral in Eq. (I) by step functions in Eq. (II) handles the singularity more thoroughly than the Cauchy numerical integration of Eq. (I) for a small coupling constant $g\le 1.0$. For large coupling constants, $g>1$, both equations (I) and (II) give similar results. As we shall show below, Eq. 
(I) gives a much smaller amplitude than Eq. (II) for a small coupling constant $g\le 1$. The reason for that is simple. The gap equation appears very singular near the Fermi surface for a small coupling constant. Hence the approximation in Eq. (II) evaluates the gap amplitude near the Fermi surface more thoroughly than the numerical Cauchy integration. Furthermore, the gap amplitude in Eq. (II) approaches the same behavior as that in Eq. (I) for large coupling constants $g>1$. Therefore, the numerical Cauchy integration in Eq. (I) becomes unreliable for a very small coupling constant $g<1$. Fig. 2 displays the gap amplitude $\phi/\mu$ for the full gap equation versus the relative momentum $k/\mu$ for rather large coupling constant values $g>2.0$ that correspond to chemical potentials of the order of $1$ GeV. It is shown that the gap amplitude has a peak near the Fermi surface and the function becomes smoother as the coupling constant $g$ increases. Fig. 3 displays the gap amplitude versus the logarithmic parameter $x$ for the gap Eqns. (I) and (II) as well as the analytical solution normalized to the amplitude of Eq. (II). The maximum value represents the gap amplitude near the Fermi surface. Hence the values of $\phi(x_0)/\mu$ and $x_0$ are important near the Fermi surface $k=\mu$. It is shown that for small coupling constant values $g<1$, the analytical solution and the numerical solution of Eq. (II) have almost the same behavior while the numerical solution of Eq. (I) deviates significantly. As the coupling constant $g$ increases, the numerical solutions of (I) and (II) approach each other and almost coincide for $g\ge 1.5$. However, the analytical solution deviates significantly from the numerical solution of (II) as the coupling constant exceeds $g>1.5$. This means that Eq. (I) and Eq. (II) become almost the same for large $g$, and the standard approximations which are applied to Eq. (II) to derive the analytical solution fail at large coupling constant. In Fig. 4, the gap amplitude $\phi(x)/\mu$ versus the logarithmic parameter $x$ with coupling constant $g=2.5$ is displayed for the full gap equation as well as for Eqns. (I) and (II). It is shown that the amplitude of the full gap equation approaches the values of Eqns. (I) and (II) closely near the Fermi surface (i.e. at the hill of the curves) while it deviates significantly for momenta far away from the Fermi surface. The solutions of Eqns. (I) and (II) have similar behavior for most values of $x$. It is interesting to note here that the validity of Eqns. (I) and (II) is restricted to the vicinity of the Fermi surface, and any solution far away from the Fermi surface is not guaranteed. Hence, Eqns. (I) and (II) give asymptotically accurate solutions near the Fermi surface for large coupling constant values $g\ge 2$. However, the solution of Eq. (I) seems accurate for a moderate coupling constant $g$ since the singularity near the Fermi surface becomes less severe. On the other hand, the solution of Eq. (II) seems more accurate than that of Eq. (I) for a small coupling constant since it accounts for the singularity near the Fermi surface more precisely. Fig. 5 displays the gap amplitude $\phi_0/\mu$ at the Fermi surface versus the coupling constant $g$ for the full gap equation and its approximations, Eqns. (I) and (II), as well as the analytical solution with prefactors $\zeta=0.5$ and $1.0$. We have shown the solutions for Eqns. (I) and (II) for $g\ge 0.7$ and the solution of the full gap equation for $g>2.0$. 
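As a rough numerical cross-check of the Fermi-surface amplitudes compared in Fig. 5 (this is only a sketch, not the solver used to produce the figures), the zero-temperature soft gap equation, Eq. (II), can be iterated on a momentum grid and compared with the leading-order analytic amplitude. The chemical potential, cutoff $\delta$, grid size and damping factor below are illustrative choices.

```python
import numpy as np

Nf, mu = 2.0, 500.0                          # number of flavors; chemical potential in MeV (illustrative)
g = 3.0                                      # coupling constant
gbar = g / (3.0 * np.sqrt(2.0) * np.pi)      # as defined in the text
b = 256.0 * np.pi**4 * (2.0 / (Nf * g**2))**2.5
delta = mu                                   # illustrative cutoff for the momentum integration

q = mu + np.linspace(1e-3, delta, 4000)      # momentum grid above the Fermi surface
phi = np.full_like(q, 1.0)                   # initial guess for the gap (MeV)

def cumtrapz(y):
    """Cumulative trapezoidal integral of y over the grid q, starting from zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * np.diff(q) * (y[1:] + y[:-1]))))

for _ in range(300):                         # damped fixed-point iteration of Eq. (II) at T = 0
    eps = np.sqrt((q - mu) ** 2 + phi ** 2)
    log_k = np.log(b * mu / eps)             # ln(b mu / eps_k) evaluated on the grid
    inner = cumtrapz(phi / eps)              # int_mu^k dq' phi / eps
    outer = cumtrapz(log_k * phi / eps)      # int_mu^k dq' ln(b mu / eps) phi / eps ...
    outer = outer[-1] - outer                # ... turned into int_k^{mu+delta}
    phi = 0.5 * phi + 0.5 * gbar**2 * (log_k * inner + outer)

phi0_numeric = phi[0]                                         # gap just above the Fermi surface
phi0_analytic = 2.0 * b * mu * np.exp(-np.pi / (2.0 * gbar))  # leading-order analytic amplitude
print(phi0_numeric, phi0_analytic)
```

The two printed numbers are expected to agree only up to a prefactor of order one, which is precisely the ambiguity parametrized by $\zeta$ above.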
The sharp peak near the Fermi surface for Eq. (II) is larger than that for Eq. (I) for a coupling constant $g<1$, since Eq. (II) is expected to handle the singularity near the Fermi surface better than Eq. (I), as mentioned above. However, the solutions of (I) and (II) become close to each other for a wide range of coupling constant values $1<g<3$. The solutions of Eqns. (I) and (II) match the solution of the full gap equation for $2<g<3.0$. They deviate from the full gap equation for $g>3.0$. The full gap equation solution, for $g>3.0$, starts to increase log-linearly with respect to $g$. We also display the analytical solution with prefactor coefficients $\zeta=0.5$ and $2.0$. The analytical solution with $\zeta=2.0$ seems to fit the solution of Eq. (I) for small $g<1$. When the coupling constant $g$ increases, the analytical solution with a prefactor coefficient $\zeta=1/2$ seems to fit the solutions of Eqns. (I) and (II) as well as the full gap equation in the range $1<g<3.0$. However, the analytical solution fails to fit the full gap equation with any single prefactor coefficient in the large coupling constant range $g>3.0$. Therefore, the analytical solution approximation breaks down for a rather large coupling constant $g>3.0$. Fig. 6 displays the amplitude of the full gap equation $\phi_0$ at the Fermi surface versus the chemical potential $\mu$ in the range 550 MeV $<\mu<$ 800 MeV. We have used the running coupling constant defined by $g^2=(8\pi^2/\beta_0)\times 1/\ln(\mu^{2}/\Lambda^{2}_{QCD})$ where $\beta_0=(11 N_c - 2 N_f)/3$. The QCD scale is taken as $\Lambda_{QCD}=400$ MeV. The value of the gap amplitude is found to be of the order of tens to 100 MeV for quark chemical potentials that may exist in neutron stars. This large value of the gap amplitude near the Fermi surface confirms the earlier estimates[@Schafer1]. In summary, we have studied the numerical solution of the full gap equation and its approximations. The numerical results are compared to the analytical one obtained with the leading-order $g$ approximation. It is found that the gap equation has a sharp peak near the Fermi surface for a small coupling constant $g<1$. The singularity at the Fermi surface is hard to handle numerically in the full gap equation at small $g$. The approximation to leading order in the coupling constant $g$ near the Fermi surface is essential to smear and evaluate this singularity in Eqns. (I) and (II). When Eqns. (I) and (II) are extrapolated to large coupling constant values $g>2$, they match the results of the full gap equation near the Fermi surface. It is shown that the analytical solution coincides with the solution of Eq. (II) for small $g$. However, this analytical solution approaches those of Eq. (I) and of the full gap equation for coupling constants up to $g\le 1.5$ only near the Fermi surface. This justifies the validity of the approximations made in Eqns. (I) and (II) near the Fermi surface. We would like to thank D. H. Rischke and M. G. Alford for the discussions. Financial support by the Deutsche Forschungsgemeinschaft through the grant GR 243/51-2 is gratefully acknowledged. One of us (I.Z.) thanks the Fulbright Foundation for the financial support. B. Barrois, Nucl. Phys.[**B129**]{} (1977) 390; “Nonperturbative effects in dense quark matter”, Cal Tech PhD thesis, UMI 79-04847-mc (1979). S. Frautschi, Proceedings of workshop on hadronic matter at extreme density, Erice 1978. D. Bailin and A. Love, Phys. Rept.[**107**]{}, 325 (1984). M. G. Alford, Ann. Rev. Nucl. Part. Sci. [**51**]{}, 131 (2001). K. Rajagopal and F. 
Wilczek, hep-ph/0011333. D. T. Son, Phys. Rev. D [**59**]{}, 094019 (1999). T. Schäfer and F. Wilczek, Phys. Rev. D [**60**]{}, 114033 (1999). M. G. Alford, K. Rajagopal and F. Wilczek, Phys. Lett. [**B422**]{}, 247 (1998). R. Rapp, T. Schäfer, E. V. Shuryak and M. Velkovsky, Phys. Rev. Lett. [**81**]{}, 53 (1998). M. G. Alford, K. Rajagopal, S. Reddy and F. Wilczek, Phys. Rev. D [**64**]{}, 074017 (2001). H. Abuki, T. Hatsuda and K. Itakura, Phys. Rev. D [**65**]{}, 074014 (2002). T. D. Fugleberg, hep-ph/0112162. T. D. Fugleberg, hep-ph/0206033. R. D. Pisarski and D. H. Rischke, Phys. Rev. D [**61**]{}, 074017 (2000). R. D. Pisarski and D. H. Rischke, Phys. Rev. D [**60**]{}, 094013 (1999). Q. Wang and D. H. Rischke, Phys. Rev. D [**65**]{}, 054005 (2002). A. Schmitt, Q. Wang and D. H. Rischke, nucl-th/0209050.
--- abstract: 'Modeling the impact of the order flow on asset prices is of primary importance to understand the behavior of financial markets. Part I of this paper reported the remarkable improvements in the description of the price dynamics which can be obtained when one incorporates the impact of past returns on the future order flow. However, impact models presented in Part I consider the order flow as an exogenous process, only characterized by its two-point correlations. This assumption seriously limits the forecasting ability of the model. Here we attempt to model directly the stream of discrete events with a so-called Mixture Transition Distribution (MTD) framework, introduced originally by Raftery (1985). We distinguish between price-changing and non price-changing events and combine them with the order sign in order to reduce the order flow dynamics to the dynamics of a four-state discrete random variable. The MTD represents a parsimonious approximation of a full high-order Markov chain. The new approach captures with adequate realism the conditional correlation functions between signed events for both small and large tick stocks and signature plots. From a methodological viewpoint, we discuss a novel and flexible way to calibrate a large class of MTD models with a very large number of parameters. In spite of this large number of parameters, an out-of-sample analysis confirms that the model does not overfit the data.' author: - Damian Eduardo Taranto - Giacomo Bormetti - 'Jean-Philippe Bouchaud' - | \ Fabrizio Lillo - Bence Tóth title: | Linear models for the impact of order flow on prices\ II. The Mixture Transition Distribution model --- Introduction ============ This paper is the companion of [@taranto16]. In the previous, part I paper we discussed the differences and similarities between two linear models describing the impact of order flow on prices, namely the Transient Impact Model (TIM) and the History Dependent Impact Model (HDIM). In these models, the sign of the order flow is considered to be an exogenous, time correlated process that affects price dynamics either through a “propagator”, i.e. a linear combination of past values (TIM) or via a “surprise” mechanism, i.e. the deviation between the realised order flow and its expected level (HDIM). In reality, however, order flow is not exogenous and is itself affected by the past history of price. In [@taranto16] we partly overcame this issue by enhancing the description of the order flow to account for price changing events and non price changing events, in the spirit of [@eisler2012a; @eisler2012b]. This allows one to encode the propensity of the order flow to invert its sign after a price change, an effect that is particularly important for large tick stocks. This extended model improves significantly the description of the price process, both in terms of the lag-dependent volatility (i.e. the signature plot) and in terms of the response function computed for negative lags. However this approach is incomplete as it does not specify the data generating process for the order flow itself, which is only described through two-point correlation functions. This does not allow one to [*forecast*]{} the future order flow itself, for example whether a trade is likely to change the price or not. In this paper we attempt to model the joint dynamics of order flow and prices. 
This family of models has a long tradition in market microstructure, starting from the seminal work of Hasbrouck [@hasbrouck88; @hasbrouck91], who proposed a Vector Autoregressive (VAR) model for the joint dynamics of order flow and prices[^1]. There are two main related limitations of this approach. The first is that VAR models are adequate for variables with continuous support (e.g. Gaussian), while the order flow (signs and events) and tick by tick price changes are more naturally described by discrete variables. Second, the standard VAR approach prescribes a linear relation between the variables, while a broader definition includes the possibility of a linear relation between past variables and the [*probability*]{} of observing in the future the value of a given variable. A paradigmatic example is a finite state Markov chain $X_t$. Let $m$ be the number of states, $\boldsymbol{Q}$ the $m\times m$ time-invariant transition matrix, and let $\chi_t=(x_t(1),...,x_t(m))$ be a row vector such that $x_t(i)=1$ if $X_t=i$ and zero otherwise. The probability vector $\hat\chi_t=(\mathbb{P}(X_t=1),..., \mathbb{P}(X_t=m))$ is determined by the [*linear*]{} system of equations $$\hat\chi_t= \chi_{t-1} \boldsymbol{Q}\,.$$ Therefore a natural way to describe the joint dynamics of discrete valued variables (such as the order flow sign and price changes) in a linear setting is with a Markov process of large order. In fact, we have shown in Ref. [@taranto16] that for large tick stocks[^2] the model with two propagators (TIM2) corresponding to price changing and not-changing trades gives constant (in time) propagators when calibrated on real data (see top left panel of Fig. 7 of  [@taranto16]). This means that the knowledge of the order flow and the information on whether a trade changes the price completely characterises the price dynamics. Thus, in the framework of linear models, it is natural to describe the system with a Markov process with $m=4$ states, $(\epsilon_t,\pi_t)\in\lbrace(-1,\mathrm{C}),(-1,\mathrm{NC}),(+1,\mathrm{NC}),(+1,\mathrm{C})\rbrace$, corresponding to buys ($\epsilon_t=+1$) and sells ($\epsilon_t=-1$) and price changing ($\pi_t=\mathrm{C}$) and not changing ($\pi_t= \mathrm{NC}$) trades. However, the main limitation of Markov models comes from the long memory of the order flow [@bouchaud2004; @lillo2004]. Since the order flow sign is very persistent, a low order Markov process cannot be suitable to describe real markets. On the other hand, Markov processes of high order $p$ depend in general on a very large number of parameters ($O(m^p)$) and might result in inefficient estimation when a limited amount of data is available. For this reason in this paper we propose to use a parsimonious, yet versatile class of high order Markov processes termed the Mixed Transition Distribution (MTD) model [@raftery1985] and its generalization (MTDg) [@berchtold1995]. Thanks to a simple structure, where each lag contributes to the prediction of the current state in a separate and additive way, the dimension of model parameter space grows only linearly with the order of the MTDg model, i.e. as $O(m^2p)$. The model can be calibrated via Maximum Likelihood or via the Generalized Method of Moments. Moreover in the case of $m=2$ states (such as the signs of the order flow), the version of the MTDg model proposed in this paper reduces to the Discrete Autoregressive (DAR) model [@jacobs1978], which has been used to model the order flow in [@taranto2014] and in the companion of this paper [@taranto16]. 
Hence MTD and MTDg aim at providing a natural generalisation of the DAR(p) model to account for an arbitrary number of $m \geq 2$ states, while avoiding the exponential increase ($m^p$) of the number of parameters of the full Markov model. Perhaps surprisingly, this class of models has not been applied to financial data and the present paper attempts to fill this gap. In fact, the main methodological innovation of our work is a parametrization of the MTDg model which can be estimated even when the number of parameters is very large, as required to account for the correlation structure of financial data. Specifically, we consider in this paper MTD and MTDg models as promising models for the joint dynamics of order flow and price changes for large tick stocks. Compared to the models investigated in Ref. [@taranto16], we provide here an explicit model for the order flow, and in particular its response to past price dynamics. Thus we aim at reproducing with the MTD model the complex conditional correlation functions of signed events for large tick stocks (see left panel of Fig. 4 of [@taranto16] whose curves are reproduced also in Fig. \[fig:sim\_mtd\_exp\_eps\_msft\]). Moreover our modeling approach allows to perform out of sample analyses of the MTD’s forecasting ability of the order flow and future price changes. Still, this framework has limitations when calibrated on anonymized order flow because one cannot easily disentangle order flow correlations coming from “herding” and coming from “order splitting”. In other words, although MTDs give explicit predictions for the response of the order flow to a single event (impulse response), one has to be careful in interpreting the result, as it might not describe the true reaction of the market to an isolated, exogeneous order (see [@toth2015; @toth2012; @toth2016]). The paper is organized as follows. In Section \[sec:mtd\] we review the definition, main properties, and estimation methods of MTD and MTDg. In Section \[sec:mtdmicro\] we present our parametrization of the model and some proposed improvements for the estimation. This Section also contains our empirical results on real financial data. Section \[sec:out\] describes the results of some out of sample analyses for predicting price changes and order flow and in Section \[sec:conclusions\] we draw some remarks and conclusions, and discuss the limitations of our approach. The Mixture Transition Distribution model {#sec:mtd} ========================================= Definition ---------- We start from a simple, but restrictive, definition of MTD models. Let $\left\lbrace X_t\right\rbrace_{t\in \mathbb{N}}$ be a sequence of random variables taking values in the finite set $\mathcal{X}=\lbrace 1,\ldots,m\rbrace$. This random sequence is said to be a $p$-th order MTDg sequence if for all $t > p$ and for all $(i,i_1,\ldots,i_p) \in \mathcal{X}^{p+1}$, $$\begin{aligned} \mathbb{P}(X_t=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p) = \sum_{g=1}^p \lambda_g q^g_{i_g,i}\,, \label{eqn:mtdg}\end{aligned}$$ where the vector $\lambda=(\lambda_1,\ldots,\lambda_p)$ is subject to the constraints: $$\begin{aligned} \lambda_g &\geq 0, \qquad \forall g \in \lbrace 1,\ldots,p\rbrace\,, \label{eqn:mtd_cond1} \\ \sum_{g=1}^p \lambda_g&=1\,. \label{eqn:mtd_cond2}\end{aligned}$$ The matrices $\left\lbrace \boldsymbol{Q}^g=\left[q_{i,j}^g\right]; \, {i,j\in \mathcal{X}}; 1 \leq g \leq p \right\rbrace$ are positive $m \times m$ stochastic matrices, i.e. 
they satisfy $$\begin{aligned} q_{i,j}^g \ge 0 \quad \mbox{and} \quad \sum_{j=1}^m q_{i,j}^g=1 \qquad \forall g \in \lbrace 1,\ldots,p\rbrace, \forall i,j\in \mathcal{X}\,. \label{eqn:mtd_cond3}\end{aligned}$$ Raftery [@raftery1985] has originally defined the model with the same transition matrix $\boldsymbol{Q}^g\equiv\boldsymbol{Q}$ for each lag $g=1,\ldots,p$ and this model is called the MTD. Later, Berchtold [@berchtold1995] has introduced the more general definition of MTD models as a mixture of transitions from subsets of lagged variables $\lbrace X_{t-1},\ldots,X_{t-p} \rbrace$ to the present one $X_t$. In other words, the order of the transition matrices $\boldsymbol{Q}^g$ can be larger than one. Berchtold and Raftery [@berchtold2002] have published a complete review of the MTD model. They recall theoretical results on the limiting behavior of the model and on its auto-correlation structure. In particular, they proved that if conditions of Eqs. \[eqn:mtd\_cond1\], \[eqn:mtd\_cond2\], and \[eqn:mtd\_cond3\] are satisfied, then the model of Eq. \[eqn:mtdg\] is a well defined high-order Markov chain and its stationary distribution $\hat{\eta}=(\hat{\eta}_1,\ldots,\hat{\eta}_m)$ exists and it is unique. The above Mixture Transition Distribution models are Markov models where each lag $X_{t-1}, X_{t-2}, \ldots$ contributes additively to the distribution of the random variable $X_t$. Hence the model is linear in the sense described in the introduction. In words, this class of models means the following: in order to determine the type of event $X_t$, occurring at time $t$, start choosing a reference time $t-g$ in the past, where $g$ is drawn at random with probability $\lambda_g$. If the event $X_{t-g}$ that occurred at time $t-g$ is of type $j$, then choose the event at time $t$ to be of type $i$ with probability $q_{i,j}$. This model can thus be interpreted as a probabilistic mixture of Markov processes. For this interpretation the fact that $(\lambda_g)_{g=1,...,p}$ is a probability vector and $\boldsymbol{Q}^g$ are stochastic matrices is critical. However, as already noted in the original papers [@raftery1985; @berchtold1995], the MTDg model can be also defined when these parameters are negative or larger than one, provided that the conditions $$\begin{aligned} 0 \le \sum_{g=1}^p \lambda_g q_{i_g,i}^g \le 1, \qquad \forall (i,i_1,\ldots i_p)\in \mathcal{X}^{p+1}\,,\end{aligned}$$ are satisfied, in such a way that all transition probabilities are well defined. As we shall see below, calibrated parameters do not necessarily abide to the probabilistic interpretation. In this paper we will consider a specific class of MTDg models where the matrices $\boldsymbol{Q}^g$ share the same stationary state, i.e. the same left eigenvector $\hat{\eta}$ corresponding to the eigenvalue $1$. Under this assumption, generalizing a result of [@berchtold1995], we can prove the following theorem of the existence and uniqueness of the stationary distribution. \[teo1\] Suppose that a sequence of random variables $\left\lbrace X_t\right\rbrace_{t\in \mathbb{N}}$ taking values in the finite set $\mathcal{X}=\lbrace 1,\ldots,m\rbrace$ is defined by $$\begin{aligned} \mathbb{P}(X_t=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p) = \sum_{g=1}^p \lambda_g q^g_{i_g,i}\,,\end{aligned}$$ where $\boldsymbol{Q}^g=\left[q_{i,j}^g\right]_{i,j\in \mathcal{X}}$ are matrices with normalized rows, $\sum_j q^g_{i,j}=1$, $\sum_{g=1}^p \lambda_g=1$, and assume that $\hat{\eta} \boldsymbol{Q}^g=\hat{\eta}, \forall g$. 
If the vector $\hat{\eta}$ is such that $\hat{\eta}_i>0, i \in \mathcal{X}$ and $\sum_i \hat{\eta}_i=1$, and $$\begin{aligned} \label{condition} 0 < \sum_{g=1}^p \lambda_g q_{i_g,i}^g < 1, \qquad \forall (i,i_1,\ldots,i_p)\in \mathcal{X}^{p+1}\,,\end{aligned}$$ then $$\begin{aligned} \lim_{\ell \to \infty} \mathbb{P}(X_{t+\ell}=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p)=\hat{\eta}_{i}\,.\end{aligned}$$ The proof of the theorem is in Appendix \[app:B\]. Notice that in this theorem we do not need to assume that the parameters $(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$ are between zero and one, but the probabilistic interpretation is guaranteed by the condition (\[condition\]). Finally notice that the condition on $\hat{\eta}$ implies that $\forall g$ we can write $\boldsymbol{Q}^g=\boldsymbol{Q}+\tilde{\boldsymbol{Q}}^g$, where $\hat{\eta} \boldsymbol{Q} =\hat{\eta}$ and $\hat{\eta} \tilde{\boldsymbol{Q}}^g =0$. Estimation ---------- Despite being parsimonious with respect to full Markov models, the MTDg parameters $\boldsymbol\theta=(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$ are difficult to estimate because they have to comply with the normalization constraints of transition matrices. In the literature many different estimation methods have been proposed [@berchtold2002], but in our paper we will focus on two specific methodologies: the maximum likelihood estimation (MLE) and the generalized method of moments (GMM). Let us introduce the details of these methods. ### Maximum likelihood estimation For a given data sequence with length $n$, $\lbrace X_t=x_t \rbrace_{t=1,\ldots,n}$, we define $(X_{t_1}^{t_2}=x_{t_1}^{t_2})$ as the sequence of events $(X_{t_1}=x_{t_1},X_{t_1+1}=x_{t_1+1},.....,X_{t_2}=x_{t_2})$ and $\mathbb{P}(X_1^p=x_1^p)$ is the joint distribution of $\lbrace X_t=x_t \rbrace_{t=1,\ldots,p}$. From the definition of MTDg models of order $p$, the likelihood function is $$\begin{aligned} L(\boldsymbol\theta)&=\mathbb{P}_{\boldsymbol\theta}(X_1^n=x_1^n)=\mathbb{P}(X_1^p=x_1^p)\mathbb{P}_{\boldsymbol\theta}(X_{p+1}^n=x_{p+1}^n|X_1^p=x_1^p) \nonumber \\ &=\mathbb{P}(X_1^p=x_1^p)\prod_{t=p+1}^n\left\lbrace\sum_{g=1}^p\lambda_g q^g_{x_{t-g},x_t}\right\rbrace\,, \label{eqn:l_function}\end{aligned}$$ To estimate the parameters of MTDg model, we have excluded $\mathbb{P}(X_1^p=x_1^p)$ from the likelihood function. Therefore, the log-likelihood function that we consider is $$\ell(\boldsymbol\theta)=\log{\mathbb{P}_{\boldsymbol\theta}(X_{p+1}^n=x_{p+1}^n|X_1^p=x_1^p)}=\sum_{t=p+1}^n\log{\left\lbrace\sum_{g=1}^p\lambda_g q^g_{x_{t-g},x_t}\right\rbrace}\,, \label{eqn:ll_function}$$ where $\boldsymbol\theta=(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$ satisfies all the constraints of Eqs. \[eqn:mtd\_cond1\], \[eqn:mtd\_cond2\], \[eqn:mtd\_cond3\] or \[condition\]. 
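Before discussing its maximization, note that the log-likelihood of Eq. \[eqn:ll\_function\] is simple to evaluate numerically once a parameter set is given. The following is a minimal sketch in Python/NumPy (the function name and the zero-based state encoding are illustrative choices, not the code used to produce the results below):

```python
import numpy as np

def mtdg_log_likelihood(x, lam, Q):
    """Log-likelihood of an MTDg(p) model, Eq. (ll_function), up to the
    initial-state term P(X_1^p).

    x   : integer array of length n, states coded as 0, ..., m-1
    lam : array of length p with the lag weights lambda_g
    Q   : array of shape (p, m, m); Q[g-1] is the transition matrix of lag g
    """
    p = len(lam)
    ll = 0.0
    for t in range(p, len(x)):
        # P(X_t = x_t | past) = sum_g lambda_g * Q^g[x_{t-g}, x_t]
        prob = sum(lam[g] * Q[g][x[t - g - 1], x[t]] for g in range(p))
        ll += np.log(prob)
    return ll
```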
Hence, the maximum likelihood estimation of the parameters $\hat{\boldsymbol\theta}=\left(\hat{\lambda}_g,\hat{\boldsymbol{Q}}^g\right)_{1\leq g \leq p}$ is the solution of the following constrained non-linear optimization problem: $$\begin{aligned} \left(\hat{\lambda}_g,\hat{\boldsymbol{Q}}^g\right)_{1\leq g \leq p}&=\underset{(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}}{\operatorname{argmax}} \sum_{t=p+1}^n\log{\left\lbrace\sum_{g=1}^p\lambda_g q^g_{x_{t-g},x_t}\right\rbrace}\,, \nonumber \\ \mbox{s.t} \quad \sum_{g=1}^p \lambda_g&=1\,, \nonumber \\ \lambda_g &\geq 0, \qquad \forall g \in \lbrace 1,\ldots,p\rbrace \nonumber \\ q_{i,j}^g &\ge 0 \quad \mbox{and} \quad \sum_{j=1}^m q_{i,j}^g=1 \qquad \forall g \in \lbrace 1,\ldots,p\rbrace, \forall i,j\in \mathcal{X}\,. \label{eqn:mtd_mle}\end{aligned}$$ Clearly the solution of the previous optimization problem is very hard due to the high number of constraints. Berchtold [@berchtold2001] proposes an efficient iterative process with the boundary adjustment in the MLE process which leads to a modification of the Newton’s method. Under the constraints of Eq. \[eqn:mtd\_cond1\] and \[eqn:mtd\_cond2\], Lèbre and Bourguignon in [@lebre2008] introduce a hidden process for the coefficients of the MTDg and propose an Expectation-Maximization approach for the parameters estimation. Chen et al. in [@chen2009] note that all the previous constraints can be rewritten in a box-constrained form, which is easier to handle. ### Generalized Method of Moments Raftery in [@raftery1985] shows that the bivariate distributions of the MTD model satisfy a linear system of equations similar to the Yule-Walker equations. Here we extend this result to the MTDg case, i.e. when transition matrices $\boldsymbol{Q}^g$ differ at each lag $g$. In Appendix \[app:A\] we prove the following proposition: Suppose that a sequence of random variables $\left\lbrace X_t\right\rbrace_{t\in \mathbb{N}}$ taking values in the finite set $\mathcal{X}=\lbrace 1,\ldots,m\rbrace$ is defined by Eq. \[eqn:mtdg\] and is stationary. Let $\boldsymbol{B}(k)$ be a $m \times m$ matrix with elements $$\begin{aligned} b_{i,j}^k=\mathbb{P}(X_t=i, X_{t+k}=j), \qquad i,j \in \mathcal{X}; k\in \mathbb{Z} \end{aligned}$$ and $\boldsymbol{B}(0)=\mbox{diag}(\hat{\eta}_1,\ldots,\hat{\eta}_m)$. Then $$\begin{aligned} \label{eqn:biv_system} \boldsymbol{B}(k)=\sum_{g=1}^p \lambda_g \boldsymbol{B}(k-g) \boldsymbol{Q}^g\,. \end{aligned}$$ The system (\[eqn:biv\_system\]) consists in $m^2p$ different equations which can be employed as orthogonality conditions of the GMM applied to the MTDg model. These equations are not all independent, because the matrices of the bivariate distributions $\boldsymbol{B}(k)$ satisfy the usual normalization conditions. In fact, the rows and the columns of each matrix sum up to the corresponding unconditional probability, $\sum_j{b_{i,j}^k}=\hat{\eta}_i$ and $\sum_i{b_{i,j}^k}=\hat{\eta}_j$. By using these relations, the number of independent equations is reduced to $p(m^2-2m+1)$. The uniquenes of the solution of the system of Eq. \[eqn:biv\_system\] requires that the number of independent parameters of the model has to be equal to the number of independent equations. MTD for order flow and price impact {#sec:mtdmicro} =================================== We consider the joint dynamics of order flow and price changes in transaction time $t\in \mathbb{N}$. 
Each event is a transaction which has a positive sign ($\epsilon_t=+1$) if it is buyer initiated or a negative sign ($\epsilon_t=-1$) if it is seller initiated. For the price we distinguish two possibilities, namely that the trade changes the price ($\pi_t=C$) or not ($\pi_t=NC$). Notice that we are not considering the amplitude of the immediate price change. For large tick stocks this is a minor problem, since price changes are almost always of $\pm1$ tick, while for small tick stocks this is not true and we lose the information on the size of the price change. In this paper we use the MTDg model to describe the sequence of signed events $$\left\lbrace (\epsilon_t,\pi_t) \right\rbrace_{t\in \mathbb{N}} \rightarrow \left\lbrace X_t \right\rbrace_{t\in \mathbb{N}},$$ hence the number of states of the model is $m=4$. The relation between the states of the model and the signed events is obtained with the arbitrary mapping $$\begin{aligned} \epsilon_t=-1,\pi_t=\mathrm{C} \quad &\rightarrow \quad X_t=1\,, \\ \epsilon_t=-1,\pi_t=\mathrm{NC} \quad &\rightarrow \quad X_t=2\,, \\ \epsilon_t=+1,\pi_t=\mathrm{NC} \quad &\rightarrow \quad X_t=3\,, \\ \epsilon_t=+1,\pi_t=\mathrm{C} \quad &\rightarrow \quad X_t=4\,.\end{aligned}$$ The main quantities of interest are the cross- and autocorrelation functions $C_{\pi_1,\pi_2}(\ell)$, already introduced in [@eisler2012a; @eisler2012b; @taranto16]. Since $$\hat{\eta}=\mathbb{P}(X_t)\equiv\mathbb{P}(\epsilon_t,\pi_t) \qquad \boldsymbol{B}(\ell)=\mathbb{P}(X_t;X_{t+\ell})\equiv\mathbb{P}(\epsilon_t,\pi_t;\epsilon_{t+\ell},\pi_{t+\ell})$$ these correlations $$\begin{aligned} C_{\pi_1,\pi_2}(\ell)&=\frac{\mathbb{E}[\epsilon_t I(\pi_t=\pi_1) \cdot \epsilon_{t+\ell} I(\pi_{t+\ell}=\pi_2)]}{\mathbb{P}(\pi_1)\mathbb{P}(\pi_2)} \\ &=\sum_{\epsilon_{t}\epsilon_{t+\ell}}\sum_{\pi_t \pi_{t+\ell}}\frac{\epsilon_t I(\pi_t=\pi_1) \epsilon_{t+\ell} I(\pi_{t+\ell}=\pi_2) \mathbb{P}(\epsilon_t,\pi_t;\epsilon_{t+\ell},\pi_{t+\ell})}{\mathbb{P}(\pi_1)\mathbb{P}(\pi_2)} \,,\end{aligned}$$ where $I(\pi_t=\pi)$ is the indicator function, can be expressed in terms of $\hat{\eta}$ and $\boldsymbol{B}(\ell)$. For instance, for $\pi_t=\mathrm{NC}$ and $\pi_{t+\ell}=\mathrm{NC}$ the following relations hold $$\begin{aligned} \mathbb{P}(\mathrm{NC})&=\hat{\eta}_2+\hat{\eta}_3\,,\\ C_{\mathrm{NC},\mathrm{NC}}(\ell)&=\frac{b_{2,2}(\ell)-b_{2,3}(\ell)-b_{3,2}(\ell)+b_{3,3}(\ell)}{(\hat{\eta}_2+\hat{\eta}_3)^2}\,.\end{aligned}$$ In the next two subsections we will estimate MTDg models on real financial data from the US markets. We will consider two different parametrizations and estimation methods. The first one, used as a benchmark case, is based on MLE and uses a parametrization which preserves the probabilistic interpretation of the mixture, i.e. it assumes that the parameters $(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$ are between zero and one. Moreover, in order to be able to apply MLE, we will impose a very strong structure on $(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$, reducing the number of parameters from $p(m^2-m)+p-1\sim 1,300$ for $p=100$ to $11$. In the second case we relax the constraint that $(\lambda_g,\boldsymbol{Q}^g)_{1\leq g \leq p}$ are between zero and one and we use GMM. We show that a suitable parametrization allows us to reduce the estimation to the solution of a constrained linear system, which we prove to be a convex problem.
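Before turning to the two parametrizations, we note for concreteness that the empirical correlations $C_{\pi_1,\pi_2}(\ell)$ defined above can be estimated from a sequence of states with a naive sample-average estimator. The sketch below is one possible illustration (Python/NumPy assumed; the dictionaries encoding the mapping between states and signed events are our own illustrative choice):

```python
import numpy as np

# Order sign and price-changing flag associated with states 1..4 (see the mapping above)
EPS = {1: -1, 2: -1, 3: +1, 4: +1}
CHG = {1: "C", 2: "NC", 3: "NC", 4: "C"}

def signed_event_correlation(states, pi1, pi2, max_lag):
    """Naive sample estimate of C_{pi1,pi2}(ell) for ell = 1, ..., max_lag."""
    states = np.asarray(states)
    eps = np.array([EPS[s] for s in states], dtype=float)
    ind1 = np.array([CHG[s] == pi1 for s in states], dtype=float)
    ind2 = np.array([CHG[s] == pi2 for s in states], dtype=float)
    p1, p2 = ind1.mean(), ind2.mean()          # estimates of P(pi1) and P(pi2)
    corr = []
    for ell in range(1, max_lag + 1):
        num = np.mean(eps[:-ell] * ind1[:-ell] * eps[ell:] * ind2[ell:])
        corr.append(num / (p1 * p2))
    return np.array(corr)
```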
This model is weakly constrained and we are able to estimate reliably $500$ parameters, improving significantly the performance of the model with respect to the benchmark case. Strongly constrained MTDg model ------------------------------- Estimation methods for the MTDg model proposed so far in literature have dealt with low order models. Unfortunately, our case requires the estimation of an high-order version of the model to capture the long-ranged dependence measured for the flow of trade signs. The log-likelihood function of Eq. \[eqn:ll\_function\] is highly non-linear and the solution of the optimization problem could be very hard to find for large values of $p$. [**Parametrization.**]{} In order to reduce the number of parameters and to avoid non-linear constraints, we impose a functional form for the parameters which automatically satisfies all the constraints. For the $\lambda_g$ it is very natural to assume a power law scaling, $\lambda_g=N_\beta g^{-\beta}$, where $N_\beta^{-1}=\sum_{i=1}^p g^{-\beta}$. The reason behind this choice is that the values of $\lambda_g$ influence the correlations for large lags $\ell$, which empirically decay slowly with the lag. Another significant simplification of the problem can be achieved by assuming a buy/sell symmetry, which leads to the definition of centro-symmetric matrices $\boldsymbol{Q}^g$. This assumption leads to $$q_{ij}^g=q_{m-i+1,m-j+1}^g, \quad \mbox{for } i,j=1,\ldots,m\,,$$ and for the first-order stationary distribution of the process $$\hat{\eta}_i=\hat{\eta}_{m-i+1}, \quad \mbox{for } i=1,\ldots,m\,.$$ For instance, $q_{12}^g=q_{43}^g$ since the influence of a sell order price changing event at time $t-g$ on the probability of a sell order not price changing event at time $t$ is equal to the influence of a buy order price changing event at time $t-g$ on the probability of a buy order not price changing event at time $t$. As mentioned above (see theorem \[teo1\]), we consider matrices $\boldsymbol{Q}^g$ sharing the same left eigenvector with eigenvalue one. Writing $\boldsymbol{Q}^g=\boldsymbol{Q}+\tilde{\boldsymbol{Q}}^g$, we make the following strongly parametrized ansatz: $$\begin{aligned} \boldsymbol{Q}=\begin{pmatrix} B_1 & A_1 & A_1 & B_1 \\ B_2 & A_2 & A_2 & B_2 \\ B_2 & A_2 & A_2 & B_2 \\ B_1 & A_1 & A_1 & B_1 \end{pmatrix}, \qquad \boldsymbol{\tilde{Q}}^g=\begin{pmatrix} -\mu_1 e^{-\alpha_{11}g} & -\nu_1 e^{-\alpha_{12} g} & \nu_1 e^{-\alpha_{12} g} & \mu_1 e^{-\alpha_{11}g} \\ \mu_2 e^{-\alpha_{21}g} & \nu_2 e^{-\alpha_{22} g} & -\nu_2 e^{-\alpha_{22} g} & -\mu_2 e^{-\alpha_{21}g} \\ -\mu_2 e^{-\alpha_{21}g} & -\nu_2 e^{-\alpha_{22} g} & \nu_2 e^{-\alpha_{22} g} & \mu_2 e^{-\alpha_{21}g} \\ \mu_1 e^{-\alpha_{11}g} & \nu_1 e^{-\alpha_{12} g} & -\nu_1 e^{-\alpha_{12} g} & -\mu_1 e^{-\alpha_{11}g} \end{pmatrix}\,, \label{eqn:mtd_exp}\end{aligned}$$ where $\alpha_{ij}\geq 0$. Imposing $$\begin{aligned} A_1&=1/2-B_1, & A_2&=1/2-B_2\,, \nonumber \\ 0 \leq &B_1 \leq 1/2, & 0 \leq &B_2 \leq 1/2\,, \nonumber \\ -B_1 \leq &\mu_1 \leq B_1, & -B_2 \leq &\mu_2 \leq B_2\,, \nonumber \\ B_1-1/2 \leq &\nu_1 \leq 1/2-B_1, & B_2-1/2 \leq &\nu_2 \leq 1/2-B_2\,, \label{eqn:mtd_exp_cond}\end{aligned}$$ we automatically satisfy all the constraints of the model. 
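The parametrization above is easy to assemble numerically. The following sketch (an illustrative helper, Python/NumPy assumed, not the code used for the fits reported below) builds the lag weights $\lambda_g$ and the matrices $\boldsymbol{Q}^g=\boldsymbol{Q}+\tilde{\boldsymbol{Q}}^g$ from the eleven parameters; the inequality constraints of Eq. \[eqn:mtd\_exp\_cond\] are assumed to hold for the supplied values:

```python
import numpy as np

def build_strong_mtdg(p, beta, B1, B2, mu1, nu1, mu2, nu2, a11, a12, a21, a22):
    """Lag weights lambda_g and matrices Q^g = Q + Qtilde^g of Eq. (mtd_exp)."""
    g = np.arange(1, p + 1)
    lam = g ** (-beta)
    lam /= lam.sum()                       # lambda_g = N_beta * g^{-beta}

    A1, A2 = 0.5 - B1, 0.5 - B2            # A_i = 1/2 - B_i
    Q0 = np.array([[B1, A1, A1, B1],
                   [B2, A2, A2, B2],
                   [B2, A2, A2, B2],
                   [B1, A1, A1, B1]])

    Q = np.empty((p, 4, 4))
    for k, gk in enumerate(g):
        m1, n1 = mu1 * np.exp(-a11 * gk), nu1 * np.exp(-a12 * gk)
        m2, n2 = mu2 * np.exp(-a21 * gk), nu2 * np.exp(-a22 * gk)
        # Centro-symmetric Qtilde^g with rows summing to zero
        Qt = np.array([[-m1, -n1,  n1,  m1],
                       [ m2,  n2, -n2, -m2],
                       [-m2, -n2,  n2,  m2],
                       [ m1,  n1, -n1, -m1]])
        Q[k] = Q0 + Qt
    return lam, Q
```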
Moreover it is immediate to see that $\hat{\eta}\tilde{\boldsymbol{Q}}^g=0$, as required, where $$\hat{\eta}=\begin{pmatrix} \frac{B_2}{1-2B_1+2B_2}, \frac{1-2B_1}{2-4B_1+4B_2}, \frac{1-2B_1}{2-4B_1+4B_2}, \frac{B_2}{1-2B_1+2B_2}\end{pmatrix}\,,$$ Finally the parametrization in Eq. \[eqn:mtd\_exp\] with the linear constraints of Eq. \[eqn:mtd\_exp\_cond\] guarantees that the matrices have the right normalization on the rows, $\sum_{j}q_{ij}^g=1, \forall g,i$ and $0<q_{ij}^g<1, \forall g,i,j$. The intuition behind our choice is that the parameters $q_{ij}^g$ determine the correlations between the event $i$ at time $t$ and the event $j$ at time $t-g$. From the left panel in Figure 4 in [@taranto16], reporting the empirical correlations measured for the large tick stock Microsoft, we see a quite different behavior depending on the conditioning event. For example, the order flow correlations among non price-changing events is extremely persistent. This has motivated the choice of a power law decaying pre-factor $\lambda_g$. However, since $\lambda_g$ multiplies all entries of the matrices $\boldsymbol{Q}^g$ we need to include different decays in the $\tilde{\boldsymbol{Q}}^g$ matrices in order to reproduce the faster decay of the empirical correlations which involve price-changing events, and for this reason we have introduced the four exponential decay rates $\alpha_{ij}$ ($i,j=1,2$). The parameters of this model can be obtained via MLE. The optimization problem is non-trivial since the likelihood function is highly non-linear. However the dimensionality is low and, thanks to the parametrization, the constraints of the problem are linear inequalities. The total number of parameters is 11, $\boldsymbol{\theta}=\{\beta,B_i,\mu_i,\nu_i,\alpha_{ij}\}$ with $i,j=1,2$. [**Results.**]{} In Fig. \[fig:sim\_mtd\_exp\_eps\_msft\] and \[fig:sim\_mtd\_exp\_eps\_aapl\] we plot the correlation functions computed from a Monte Carlo simulation of the MTDg(100) model with parameter values obtained from MLE on Microsoft (MSFT) and Apple (AAPL) data (details about the data set are given in Section 4.1 of the companion paper [@taranto16]). More precisely, we compare the auto and cross-correlations $C_{\pi_1,\pi_2}(\ell)$ for price-changing and non price-changing events with the empirical ones. As can be noted, for small tick stocks the model can reproduce the structure of the correlations for short time scales, but not their persistence. For large tick stocks the persistence of the empirical correlations is not well reproduced either, and the structure of the correlations predicted by MTD for small lags is not rich enough to fit the empirical data. The lack in the persistence of the simulated correlations can be explained by the fact that the optimized exponent $\beta$ is too high. Also for the case of large tick stocks we conclude that the functional forms assumed for the matrices $\boldsymbol{Q}^g$ is not flexible enough to reproduce the different speed of decays of the empirical correlations. Nonetheless, the advantage of this modeling assumption is that the estimation process is extremely fast even for higher order models. In the next subsection we will explore a better alternative. ![*MLE calibration of the strongly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on AAPL data (solid lines) and the empirical curves (triangles). The error bars correspond to one standard deviation. 
Estimated parameter values are $\beta=2.21$, $B_1=0.38$, $B_2=0.01$, $\mu_1=-0.22$, $\alpha_{11}=0.0$, $\nu_1=-0.07$, $\alpha_{12}=0.0$, $\mu_2=0.27$, $\alpha_{21}=0.043$, $\nu_2=0.21$, and $\alpha_{22}=0.0$. The scale for values close to zero and bounded by horizontal solid lines is linear, whereas outside this region the scale is logarithmic.[]{data-label="fig:sim_mtd_exp_eps_aapl"}](mtd_exp_eps_sym_powerlaw_MSFT_UQ_acf.pdf){width="0.8\columnwidth"} ![*MLE calibration of the strongly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on AAPL data (solid lines) and the empirical curves (triangles). The error bars correspond to one standard deviation. Estimated parameter values are $\beta=2.21$, $B_1=0.38$, $B_2=0.01$, $\mu_1=-0.22$, $\alpha_{11}=0.0$, $\nu_1=-0.07$, $\alpha_{12}=0.0$, $\mu_2=0.27$, $\alpha_{21}=0.043$, $\nu_2=0.21$, and $\alpha_{22}=0.0$. The scale for values close to zero and bounded by horizontal solid lines is linear, whereas outside this region the scale is logarithmic.[]{data-label="fig:sim_mtd_exp_eps_aapl"}](mtd_exp_eps_sym_powerlaw_AAPL_UQ_acf.pdf){width="0.8\columnwidth"} Weakly constrained MTDg model {#sec:weakly} ----------------------------- [**Model definition.**]{} Here we introduce the main methodological innovation of this paper, namely a parametrization of the MTDg model which can be estimated with GMM even when the number of parameters is very large. To motivate it, let us consider the DAR(p) process with $m$ states (employed for example in [@taranto2014] as a model for the order flow) [^3]. The model can be seen as a particular case of the MTD(p) model, where the transition matrices are the same for all $g$, $\boldsymbol{Q}^g \equiv \boldsymbol{Q}$ and such that $$\boldsymbol{Q}=1^T\hat{\eta}+\varphi(\mathbb{I}-1^T\hat{\eta})\,,$$ where $1$ is a row of $m$ ones and the parameter $\varphi$ ranges between zero and one. The left eigenvector of $\boldsymbol{Q}$ corresponding to the eigenvalue $1$ is $\hat{\eta}$, since it belongs to the kernel of $\mathbb{I}-1^T\hat{\eta}$. Following the same idea, we introduce MTD(p) models where $$\boldsymbol{Q}^g=1^T\hat{\eta}+\boldsymbol{\tilde{Q}}^g$$ and $\hat{\eta} \boldsymbol{\tilde{Q}}^g=0$. Moreover normalization of $\boldsymbol{Q}^g$ imposes that each row of $\boldsymbol{\tilde{Q}}^g$ sums to zero, hence these matrices will have negative elements. As in Theorem \[teo1\], all the $\boldsymbol{Q}^g$ share the same left eigenvector $\hat{\eta}$ with eigenvalue $1$. It is easy to show that the conditional probabilities of this model can be written as $$\begin{aligned} \mathbb{P}(X_t=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p) = \hat{\eta}_{i}+\sum_{g=1}^p a_{i_g,i}^g\,, \label{eqn:smart_mtdg}\end{aligned}$$ where $a_{i_g, i}^g \equiv \lambda_g (\boldsymbol{\tilde{Q}}^g)_{i_g,i}$. Thus the matrices $\boldsymbol{A}^g \equiv \lambda_g\boldsymbol{\tilde{Q}}^g$ describe the deviations of the $p-$order transition probability from the stationary value given by $\hat{\eta}$. Finally, as shown in Appendix \[app:C\], the system of equations of Theorem \[teo1\] for this model is $$\begin{aligned} \boldsymbol{B}(k)-\hat{\eta}^T\hat{\eta}=\sum_{g=1}^p \boldsymbol{B}(k-g) \boldsymbol{A}^g. \label{eqn:smart_biv_system}\end{aligned}$$ This linear system can be used to estimate the model, i.e.
the matrices $\boldsymbol{A}^g$, from the knowledge of the stationary probabilities $\hat{\eta}$ and the bivariate distributions $\boldsymbol{B}(k)$. There are however two technical problems: - The estimated model might not have a probabilistic interpretation, i.e. the estimated model might generate transition probabilities larger than one or smaller than zero; - The solution of Eq. \[eqn:smart\_biv\_system\] gives the matrix $\boldsymbol{A}^g$, while one might need $\lambda_g$ and $(\boldsymbol{\tilde{Q}}^g)$ separately, thus the identifiability problem must be solved by fixing arbitrarily one parameter. Note however that the dynamics of the model is independent from this choice. In the following we will tackle these points. In order to have a well defined probabilistic model, and to be able to use Theorem \[teo1\] which guarantees the existence and uniqueness of the solution, it must also hold that $$\begin{aligned} 0 < \hat{\eta}_{i}+\sum_{g=1}^p a_{i_g,i}^g < 1, \qquad \forall (i,i_1,\ldots,i_p)\in \mathcal{X}^{p+1}\,,\end{aligned}$$ which corresponds to $2m^{p+1}$ constraints. Clearly, in practical applications it is impossible to handle the previous number of conditions, but we can satisfy all of them imposing the necessary and sufficient conditions $$\begin{aligned} \hat{\eta}_{i}+\sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right) &< 1, \qquad \forall i \in \mathcal{X} \label{eqn:smart_mtdg_comb1} \\ \hat{\eta}_{i}+\sum_{g=1}^p \min_{i_g} \left(a_{i_g,i}^g \right) &> 0, \qquad \forall i \in \mathcal{X} \label{eqn:smart_mtdg_comb2}\end{aligned}$$ which are only $2m$ inequality constraints. Under these conditions the process is well defined and possesses a unique stationary solution (Theorem \[teo1\]) and the estimation of the model can be performed solving the optimization program $$\begin{aligned} \hat{q}&=\underset{{\bf q} \in \mathbb{R}^{p(m^2-2m+1)}}{\operatorname{argmin}} \left\Vert {\bf d}-\boldsymbol{K}\cdot {\bf q} \right\Vert^2 \nonumber \\ \mbox{s.t.} \qquad &\hat{\eta}_{i}+\sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right) < 1, \qquad \forall i \in \mathcal{X} \nonumber \\ &\hat{\eta}_{i}+\sum_{g=1}^p \min_{i_g} \left(a_{i_g,i}^g \right) > 0, \qquad \forall i \in \mathcal{X} \label{eqn:mtd_yw_min_contrained}\end{aligned}$$ where the elements of the $p(m^2-2m+1)$-dimensional vector $\bf d$ correspond to left hand side of Eq. \[eqn:smart\_biv\_system\], namely $${\bf d}=(\overline b_{1,1}^1,\ldots,\overline b_{1,m-1}^1,\ldots,\overline b_{m-1,1}^1,\ldots,\overline b_{m-1,m-1}^1,\ldots,\overline b_{1,1}^p,\ldots,\overline b_{1,m-1}^p,\ldots,\overline b_{m-1,1}^p,\ldots,\overline b_{m-1,m-1}^p)$$ with $$\overline b_{i,j}^k=b_{i,j}^k-\hat{\eta}_i\hat{\eta}_j ,$$ the vector $\bf q$ corresponds to the parameters of the model $\lambda_g \tilde{q}_{i,j}^g$ $${\bf q}=(a_{1,1}^1,\ldots,a_{1,m-1}^1,\ldots,a_{m-1,1}^1,\ldots,a_{m-1,m-1}^1,\ldots,a_{1,1}^p,\ldots,a_{1,m-1}^p,\ldots,a_{m-1,1}^p,\ldots,a_{m-1,m-1}^p)$$ and the elements of the matrix $\boldsymbol{K}$ are linear combinations of $b_{i,j}^k$, according to Eq. \[eqn:smart\_biv\_system\] (we do not report the matrix since its form is not transparent). The reason for the choice of the constraints in Eq. \[eqn:mtd\_yw\_min\_contrained\] is that we prove in Appendix \[app:D\] the following proposition: If $\boldsymbol{K}$ is not singular, the optimization program of Eq. (\[eqn:mtd\_yw\_min\_contrained\]) is strictly convex in $\mathbb{R}^{p(m^2-2m+1)}$. Therefore if a local minimum exists, then it is a global minimum. 
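To make the structure of the estimator explicit, the moment equations of Eq. \[eqn:smart\_biv\_system\] can be stacked into a single linear system and solved in a least-squares sense. The sketch below (Python/NumPy assumed, an illustration rather than the estimation code used here) implements only this unconstrained step and uses stationarity to set $\boldsymbol{B}(-k)=\boldsymbol{B}(k)^T$; the inequality constraints of Eqs. \[eqn:smart\_mtdg\_comb1\] and \[eqn:smart\_mtdg\_comb2\], which in our estimation are handled with the SQP-GS algorithm, are not imposed:

```python
import numpy as np

def estimate_A_matrices(B, eta, p):
    """Unconstrained least-squares solution of Eq. (smart_biv_system).

    B   : dict with B[k] = m x m bivariate matrix for k = 0, ..., p (B[0] = diag(eta))
    eta : stationary distribution, length m
    Returns A of shape (p, m, m) with A[g-1] = lambda_g * Qtilde^g.
    """
    m = len(eta)
    Bof = lambda k: B[k] if k >= 0 else B[-k].T   # stationarity: B(-k) = B(k)^T

    # Stack the p matrix equations: [B(k-1) ... B(k-p)] [A^1; ...; A^p] = B(k) - eta^T eta
    X = np.vstack([np.hstack([Bof(k - g) for g in range(1, p + 1)])
                   for k in range(1, p + 1)])
    D = np.vstack([B[k] - np.outer(eta, eta) for k in range(1, p + 1)])
    A_stacked, *_ = np.linalg.lstsq(X, D, rcond=None)
    return A_stacked.reshape(p, m, m)
```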
The convexity property solves the issue of the high dimensionality of the problem and the model can be estimated also for large order $p$. [**Application to order flow and impact.**]{} We now consider the application of the above described MTDg model to the $m=4$ process describing jointly the order flow and the price changes. As done in the previous section, we reduce the dimensionality of the system by exploiting the buy/sell symmetry, which leads to centrosymmetric $\hat{\eta}$ and $\boldsymbol{B}(k)$. In fact, for $m=4$ we have that $$b_{i,j}^k=b_{m-i+1,m-j+1}^k, %\quad q_{ij}^g=q_{m-i+1,m-j+1}^g, \quad \mbox{for } i,j=1,\ldots,m$$ and for the stationary distribution $$\hat{\eta}_i=\hat{\eta}_{m-i+1}, \quad \mbox{for } i=1,\ldots,m\,.$$ The buy/sell symmetry and the normalization of matrices $\boldsymbol{B}(k)$ reduces the number of independent variables in $\boldsymbol{B}(k)$ to $5p$, $5$ for each lag $k$. Thus, we have that $$\begin{aligned} \boldsymbol{B}(k)=\begin{pmatrix} b_{1,1}^k & b_{1,2}^k & \hat{\eta}_2-b_{1,2}^k-b_{2,2}^k-b_{3,2}^k & \hat{\eta}_1-\hat{\eta}_2+b_{2,2}^k+b_{3,2}^k-b_{1,1}^k \\ b_{2,1}^k & b_{2,2}^k & b_{3,2}^k & \hat{\eta}_2-b_{2,1}^k-b_{2,2}^k-b_{3,2}^k \\ \hat{\eta}_2-b_{2,1}^k-b_{2,2}^k-b_{3,2}^k & b_{3,2}^k & b_{2,2}^k & b_{2,1}^k \\ \hat{\eta}_1-\hat{\eta}_2+b_{2,2}^k+b_{3,2}^k-b_{1,1}^k & \hat{\eta}_2-b_{1,2}^k-b_{2,2}^k-b_{3,2}^k & b_{1,2}^k & b_{1,1}^k \end{pmatrix}\,. \end{aligned}$$ In order to find a solution of the problem of Eq. \[eqn:mtd\_yw\_min\_contrained\], we assume that the imposed centrosymmetry of $\boldsymbol{B}(k)$ and $\hat{\eta}$ does not change the rank of the matrix $\boldsymbol{K}$. In this case the solution is unique and it is easy to show that also $\boldsymbol{\tilde Q}^g$ must be centrosymmetric, as $$\begin{aligned} % \boldsymbol{Q}^g&=1^T\eta+\boldsymbol{\tilde{Q}}_g\,, \nonumber \\ \boldsymbol{\tilde{Q}}^g&=\begin{pmatrix} \tilde{q}_{1,1}^g & \tilde{q}_{1,2}^g & -\tilde{q}_{1,2}^g-c_2(\tilde{q}_{2,2}^g+\tilde{q}_{2,3}^g) & -\tilde{q}_{1,1}^g+c_2(\tilde{q}_{2,2}^g+\tilde{q}_{2,3}^g) \\ \tilde{q}_{2,1}^g & \tilde{q}_{2,2}^g & \tilde{q}_{2,3}^g & -\tilde{q}_{2,1}^g-\tilde{q}_{2,2}^g-\tilde{q}_{2,3}^g \\ -\tilde{q}_{2,1}^g-\tilde{q}_{2,2}^g-\tilde{q}_{2,3}^g & \tilde{q}_{2,3}^g & \tilde{q}_{2,2}^g & \tilde{q}_{2,1}^g \\ -\tilde{q}_{1,1}^g+c_2(\tilde{q}_{2,2}^g+\tilde{q}_{2,3}^g) & -\tilde{q}_{1,2}^g-c_2(\tilde{q}_{2,2}^g+\tilde{q}_{2,3}^g) & \tilde{q}_{1,2}^g & \tilde{q}_{1,1}^g \end{pmatrix}\,, \label{eqn:smart_mtdg_model}\end{aligned}$$ where $c_2=\hat{\eta}_2/\hat{\eta}_1$. With this definition the number of independent parameters in $\boldsymbol{Q}^g$ is also equal to 5 for each $g$. We can now solve the system of Eq. \[eqn:mtd\_yw\_min\_contrained\] whose unknowns are the components of the matrix $\boldsymbol{A}^g$. This way we obtain the value of the products $\lambda_g \tilde{q}_{i,j}^g$, but not the value of the components $\lambda_g$ and $\tilde{q}_{i,j}^g$ separately. For this reason we impose that one of the five components among $\tilde{q}_{1,1}^g$, $\tilde{q}_{1,2}^g$, $\tilde{q}_{2,1}^g$, $\tilde{q}_{2,2}^g$, and $\tilde{q}_{2,3}^g$ is independent of the lag $g$. We arbitrarily fix $\tilde{q}_{2,1}^g\equiv \tilde{q}_{2,1}$. We are left with 4$p$ free parameters from $\boldsymbol{\tilde Q}^g$ (4 for each $g$), $p-1$ parameters from $\lambda_g$ and $\tilde{q}_{21}$. In total we have $5p$ free parameters, which is exactly the same number of independent components $b_{i,j}^k$. 
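Given a parameter set satisfying the constraints, the process of Eq. \[eqn:smart\_mtdg\] can be simulated directly, which is how the Monte Carlo curves discussed below can be generated. A minimal sketch (Python/NumPy assumed; the i.i.d. initialization of the first $p$ states is an illustrative simplification, and a burn-in would normally be discarded):

```python
import numpy as np

def simulate_weak_mtdg(eta, A, n, seed=None):
    """Simulate n states of the weakly constrained MTDg model, Eq. (smart_mtdg).

    eta : stationary distribution, length m
    A   : array (p, m, m) with A[g-1] = lambda_g * Qtilde^g (rows summing to zero)
    """
    rng = np.random.default_rng(seed)
    m, p = len(eta), len(A)
    x = list(rng.choice(m, size=p, p=eta))      # i.i.d. start from the stationary marginal
    for t in range(p, n):
        # P(X_t = i | past) = eta_i + sum_g A^g[x_{t-g}, i]; valid if the constraints hold
        probs = eta + sum(A[g][x[t - g - 1]] for g in range(p))
        x.append(int(rng.choice(m, p=probs)))
    return np.array(x)
```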
The values of the products $\lambda_g \tilde{q}_{i,j}^g$ define the MTDg model. Different choices of $\tilde{q}_{i,j}^g=\tilde{q}_{i,j}$ give different factorizations, but lead to the same high-order Markov chain. The arbitrariness of this choice is evidence of the well-known identifiability problem of all mixture models. In the literature there exist many algorithms which iteratively solve the constrained optimization problem of Eq. \[eqn:mtd\_yw\_min\_contrained\]. A widely used class belongs to the Sequential Quadratic Programming (SQP) family [@boggs1995]. However, an issue with our optimization is that the constraints are non-smooth functions, whereas smoothness is a necessary condition for the usual SQP algorithms. In a recent paper, Curtis and co-authors [@curtis2012] have proposed the Sequential Quadratic Programming Gradient Sampling algorithm (SQP-GS), which can be applied to non-smooth, non-linear objective and constraint functions. We have implemented this algorithm in order to solve our optimization problem. [**Results.**]{} We estimated the above MTDg(100) model on MSFT and AAPL. Before showing the results, we mention that we have also estimated the model by using Eq. (\[eqn:smart\_biv\_system\]) [*without*]{} the additional constraints of Eqs. \[eqn:smart\_mtdg\_comb1\] and \[eqn:smart\_mtdg\_comb2\]. We found negative transition probabilities, indicating the importance of imposing the constraints in order to obtain a meaningful model estimation (and to guarantee existence and uniqueness of the solution). We now turn to the correctly constrained model. Fig. \[fig:mtd\_yw\_min\_abscon\_lambq\_msft\] and \[fig:mtd\_yw\_min\_abscon\_lambq\_aapl\] show the estimation of $\lambda_g\tilde q_{i,j}^g$ for MSFT and AAPL. Despite the large number of estimated parameters, they turn out to be only moderately noisy. Moreover it is interesting to note that negative values of $\lambda_g\tilde q_{i,j}^g$ are present, even if, by construction, the transition probabilities of the model are well defined in $[0,1]$. Clearly the estimation shows that the probabilistic mixture discussed at the beginning is, perhaps meaningfully, not suitable for the present data. Fig. \[fig:sim\_mtd\_yw\_min\_abscon\_msft\] and \[fig:sim\_mtd\_yw\_min\_abscon\_aapl\] show the correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events computed from a Monte Carlo simulation of the calibrated model and compared with real data. As can be noted, for small tick stocks we have significantly improved the results of Fig. \[fig:sim\_mtd\_exp\_eps\_aapl\]. Compared with the benchmark, the new estimation method reproduces the high persistence of the correlations of order signs independently of the conditioning events. In the case of the large tick stocks, whose correlations present a highly non-trivial structure, the GMM methodology greatly improves the results with respect to Fig. \[fig:sim\_mtd\_exp\_eps\_msft\]. In particular, the high persistence of non price-changing events is very well reproduced. Moreover, the $C_{\mathrm{NC},\mathrm{C}}(\ell)$ curve decays faster as compared to the previous estimation method, and is thus closer to the data. ![*GMM calibration of the weakly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on MSFT data (triangles) and the empirical curves (solid lines). The error bars correspond to one standard deviation.
The scale for values close to zero and bounded by horizontal solid lines is linear, whereas outside this region the scale is logarithmic. []{data-label="fig:sim_mtd_yw_min_abscon_msft"}](mtd_yw_min_abscon_MSFT_UQ_lambq.pdf){width="0.8\columnwidth"} ![*GMM calibration of the weakly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on MSFT data (triangles) and the empirical curves (solid lines). The error bars correspond to one standard deviation. The scale for values close to zero and bounded by horizontal solid lines is linear, whereas outside this region the scale is logarithmic. []{data-label="fig:sim_mtd_yw_min_abscon_msft"}](mtd_yw_min_abscon_MSFT_UQ_acf.pdf){width="0.8\columnwidth"} ![*GMM calibration of the weakly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on AAPL data (triangles) and the empirical curves (solid lines). The error bars correspond to one standard deviation.[]{data-label="fig:sim_mtd_yw_min_abscon_aapl"}](mtd_yw_min_abscon_AAPL_UQ_lambq.pdf){width="0.8\columnwidth"} ![*GMM calibration of the weakly constrained MTDg*. Comparison between the auto and cross-correlation functions $C_{\pi_1,\pi_2}(\ell)$ of signed events from a simulation of the MTDg(100) model estimated on AAPL data (triangles) and the empirical curves (solid lines). The error bars correspond to one standard deviation.[]{data-label="fig:sim_mtd_yw_min_abscon_aapl"}](mtd_yw_min_abscon_AAPL_UQ_acf.pdf){width="0.8\columnwidth"} Large tick stock signature plot ------------------------------- Another way to assess the quality of the MTDg model is to analyse how well it describes the volatility of prices. As noted above and in [@taranto16], the impact of a price changing event is nearly price independent for large tick stocks (within the TIM2 model). This means that the signature plot is simply given by: $$D^{\text{TIM2}}(\ell) \approx D_\mathrm{LF} + G_\mathrm{C}(1)^2 \mathbb{P}(\mathrm{C})+2 \frac{G_\mathrm{C}(1)^2}{\ell} \sum_{0 \leq n < m <\ell}\mathbb{P}(\mathrm{C})^2 C_{\mathrm{C}, \mathrm{C}}(m-n)\,, \label{eqn:sign_plot_mtd}$$ which is completely determined by the correlation function $C_{\mathrm{C}, \mathrm{C}}(\ell)$ (once the value of $G_\mathrm{C}(1)$ has been estimated). This correlation function is, as presented above, only approximately reproduced by the MTDg model, although it is calibrated to minimize the distance to all $C_{\pi, \pi'}(\ell)$. In the context of financial applications, it is therefore interesting to replot the difference between the MTDg $C_{\mathrm{C}, \mathrm{C}}(\ell)$ and empirical data in terms of the signature plot $D^{\text{TIM2}}(\ell)$, which involves the integral of the correlation function. In Fig. \[fig:signature\_plot\_msft\] we show the curves corresponding to Eq. \[eqn:sign\_plot\_mtd\] for the strongly and weakly constrained versions of the MTDg model proposed above, where the extra fitting parameter $D_\mathrm{LF}$ is optimized with OLS in order to minimize the distance between the empirical and the theoretical curves of the model. We see that in terms of the signature plot of the model, the weakly constrained and strongly constrained MTDg fare nearly equally well. 
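Eq. \[eqn:sign\_plot\_mtd\] is straightforward to evaluate once $C_{\mathrm{C},\mathrm{C}}(\ell)$ is available, either from the data or from an MTDg simulation. A short sketch (Python/NumPy assumed; $D_\mathrm{LF}$ and $G_\mathrm{C}(1)$ are treated as given inputs, and the function name is illustrative):

```python
import numpy as np

def signature_plot_tim2(D_LF, G_C1, p_C, C_CC, ell_max):
    """Evaluate D^TIM2(ell) of Eq. (sign_plot_mtd) for ell = 1, ..., ell_max.

    C_CC : array with C_CC[k] = C_{C,C}(k) for k >= 1 (index 0 unused)
    p_C  : probability of a price-changing event, P(C)
    """
    D = []
    for ell in range(1, ell_max + 1):
        # double sum over 0 <= n < m < ell of C_{C,C}(m - n)
        s = sum(C_CC[m - n] for m in range(ell) for n in range(m))
        D.append(D_LF + G_C1**2 * p_C + 2.0 * G_C1**2 * p_C**2 * s / ell)
    return np.array(D)
```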
We also show the predictions of the TIM2 model that uses the empirical $C_{\mathrm{C}, \mathrm{C}}(\ell)$; the nearly perfect fit in this case is a consequence of the fact that $G_\mathrm{C}(\ell) \approx G_\mathrm{C}(1)$ for large tick stocks. Note that the TIM2 price process is strictly diffusive only if the quantity $D^{\text{TIM2}}(\ell+1)(\ell+1)-D^{\text{TIM2}}(\ell)\ell$ is a constant independent from $\ell$. In fact, we have that $$\begin{aligned} D^{\text{TIM2}}(\ell+1)(\ell+1)-D^{\text{TIM2}}(\ell)\ell=D_\mathrm{LF} + G_\mathrm{C}(1)^2 \mathbb{P}(\mathrm{C})+2 G_\mathrm{C}(1)^2 \mathbb{P}(\mathrm{C})^2 \sum_{0 < n \leq \ell} C_{\mathrm{C}, \mathrm{C}}(n)\,,\end{aligned}$$ which means that the price process becomes diffusive for $\ell> \ell^*$ only if $C_{\mathrm{C}, \mathrm{C}}(\ell> \ell^*)=0$. Figs \[fig:sim\_mtd\_exp\_eps\_msft\] and \[fig:signature\_plot\_msft\] suggest that this is indeed the case for $\ell^* \approx 10$. ![Signature plot for MSFT data: Empirical data (crosses), weakly constrained (GMM) MTDg(100) model with $D_\mathrm{LF}=0.41$ (dashed line), strongly constrained (MLE) MTDg(100) model with $D_\mathrm{LF}=0.43$ (dashed-dotted line), and the theoretical prediction of the calibrated TIM2 model [@taranto16].[]{data-label="fig:signature_plot_msft"}](diffusion_rate_mtdg_MSFT_UQ_100.pdf){width="0.8\columnwidth"} Out-of-sample analysis {#sec:out} ====================== In the previous sections we have presented two MTDg models – strongly and weakly constrained – and discussed two alternative estimation methodologies based on MLE and GMM. Since they differ both in the number of parameters and in estimation efficiency, it is important to compare their performances testing the predictive power of the models in an out-of-sample analysis. We consider as a measure of the performance the expected prediction error (EPE) defined as $$\mathrm{EPE}(\boldsymbol\theta)=\mathbb{E}[L(X_t,\hat{X}_t^{\boldsymbol\theta})]\,,$$ where $X_t$ is the observed process, $\hat{X}_t^{\boldsymbol\theta}$ is the predictor of $X_t$ based on the model with parameter set $\boldsymbol\theta$, and the $p$ past observations of the process $X_t$. As common in the literature for categorical data, we use as loss function the log-likelihood $L(X_t,\hat{X}_t^{\boldsymbol\theta})=-2\sum_{i=1}^m I(X_t=i)\log (\hat{\chi}_t)_i=-2\log(\hat{\chi}_t)_{X_t}$, also called cross-entropy. We remind that $\hat\chi_t$ is the $m$-probability vector describing the prediction of the model and in the previous formula we take the $X_t$-th component. For the MTDg(p) model this probability vector is $$\hat{\chi}_t=\sum_{g=1}^p\chi_{t-g} \lambda_g\boldsymbol{Q}^g$$ where, as before, $\chi_{t-g}$ is a $m$-vector of zeros with the exception of the realized component $X_{t-g}$. This quantity can be easily computed once the model is calibrated, since it depends on the transition probabilities. EPE values are in the range $[0,+\infty)$, and it is zero if all probabilities $(\hat{\chi}_t)_{X_t}$ of the sample are equal to one (perfect prediction), and it is infinity if all probabilities $(\hat{\chi}_t)_{X_t}$ of the sample are zero (prediction of impossible events). We evaluate the best performing model as the model with the lowest EPE and benchmark the MTDg with a model with the unconditional probabilities as predictors of future signed events. 
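The EPE defined above is simple to compute once a model has been calibrated on the training window. A minimal sketch (Python/NumPy assumed; function names are illustrative) for the MTDg predictor and for the unconditional benchmark:

```python
import numpy as np

def epe_mtdg(x_test, lam, Q):
    """Out-of-sample EPE of an MTDg(p) predictor (cross-entropy loss with factor -2)."""
    p = len(lam)
    losses = []
    for t in range(p, len(x_test)):
        # predicted probability of the realized state x_t, Eq. (hat chi_t)
        prob = sum(lam[g] * Q[g][x_test[t - g - 1], x_test[t]] for g in range(p))
        losses.append(-2.0 * np.log(prob))
    return np.mean(losses)

def epe_unconditional(x_test, eta):
    """EPE of the benchmark predictor given by the unconditional probabilities."""
    return np.mean([-2.0 * np.log(eta[s]) for s in x_test])
```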
Table \[tab:epe\] reports the EPE values for the different predictor models estimated on MSFT, Bank of America-CitiGroup (BAC), General Electric (GE), Cisco (CSCO), AAPL, and Amazon (AMZN) data. The scheme of the out-of-sample analysis is the following: the model is trained on a time period of 10 days, then we compute the loss functions in the following trading day by using the parameter set provided by MLE (strongly constrained) or by GMM (weakly constrained). We repeat the procedure by shifting the estimation window one trading day ahead. Finally, we compute the global loss by averaging all measured loss functions. The financial interpretation of the EPE values is clear in the case of the large tick stocks, because a price-changing event moves the price by one tick with probability almost one and thus there exists a direct relation between the states of the MTDg model and the price return. Hence, for large tick stocks the EPE value can be employed as a proxy of the predictability of returns at the high-frequency time scale.

                Model A   Model B   Model C
  ------ ----- --------- --------- ---------
  MSFT   EPE    1.928     1.199     1.181
         SE     0.003     0.004     0.004
  BAC    EPE    1.744     0.799     0.785
         SE     0.003     0.004     0.004
  GE     EPE    1.922     1.169     1.153
         SE     0.004     0.005     0.005
  CSCO   EPE    1.919     1.112     1.098
         SE     0.004     0.005     0.005
  AAPL   EPE    2.643     2.211     2.192
         SE     0.001     0.002     0.002
  AMZN   EPE    2.579     2.196     2.183
         SE     0.002     0.004     0.004

  : EPE values and standard errors (SE) for MSFT, BAC, GE, CSCO, AAPL and AMZN data. *Model A*: Unconditional probabilities as predictor. *Model B*: Strongly constrained MTDg(100) estimated via MLE according to Eq. \[eqn:mtd\_exp\]. Total number of parameters: 11. *Model C*: Weakly constrained MTDg(100) model estimated via GMM with matrices as in Eq. \[eqn:smart\_mtdg\_model\]. Total number of parameters: $500$.[]{data-label="tab:epe"}

From Table \[tab:epe\] we see that both MTDg models outperform the benchmark. More importantly, there is clear evidence that the weakly constrained model with the highest number of parameters (Model C) outperforms the strongly constrained MTDg, for all considered stocks. These results exclude the over-fitting hypothesis, and support the claim that weakly constrained MTDg models are good candidates to capture the high-frequency dynamics of signed events. Discussion and conclusion {#sec:conclusions} ========================= The companion paper [@taranto16] has established that treating all market orders on the same basis produces erroneous predictions both for the “response functions” (average lagged impact) at negative lags and for the signature plot. Single-propagator models and history dependent impact models are not designed to capture the feedback effects between past price returns and future order flow. These serious discrepancies have been significantly reduced by introducing the extended versions of the linear impact models (TIM and HDIM) which consider a richer set of signed events (see [@eisler2012a; @eisler2012b]). The argument which has motivated our generalization of the impact models is the observation that price-changing and non price-changing events have to be treated differently. This is particularly evident for large tick stocks, where price moving events are extremely rare but very informative. This apparently minor modification has led to an extended class of propagator models which describe with remarkable realism the intertwined high-frequency dynamics of prices and order flow.
Nonetheless, the linear description of the market dynamics achieved in Part I [@taranto16] is still too rigid: these models are designed to describe the evolution of the market with an exogenously specified order flow. This fact seriously limits the forecasting capabilities of linear impact models. The Mixture Transition Distribution model partly solves the above issue by introducing an explicit stochastic model for the order flow, treated as an endogenous component of the dynamics. It is specially designed for variables which are inherently discrete – a feature of great relevance for price returns of large tick stocks. In this paper we have presented the class of so-called MTDg models as a natural extension of the Discrete Autoregressive DAR(p) model to a multi-event context. Our aim was to test how well a calibrated MTDg model can account for the statistics of the order flow, i.e. the string of 4 events: buy/sell – price-changing/non price-changing events. One of the most interesting aspects of our work is of a methodological nature, and concerns the practical calibration of “large” models. The class of weakly constrained MTDg models introduced in section \[sec:weakly\] and Appendix \[app:C\] represents a rich family of discrete models, where the number of free parameters equals the number of independent observable correlation functions. This fact allows us to introduce a numerical procedure which solves the estimation of the model parameters in a remarkably robust way. This result is rooted in the proof that the optimization problem is convex in the parameter space. From the financial viewpoint we have shown that – perhaps surprisingly – a weakly constrained version of the MTDg models captures the dynamics of signed events with greater realism than alternative, more parsimonious versions. Despite the large number of parameters, the out-of-sample analysis confirms that such good performance is achieved without over-fitting the data. The improvement brought by the MTDg models and the new estimation methodology is remarkable, but some discrepancies still persist when comparing the model predictions with the empirical correlation functions. Several reasons may be responsible for these deviations. The first one was already pointed out by Raftery in [@raftery1985], where he showed that there exist regions of correlations which simply cannot be reproduced by MTDg models. A second reason is that, even if the MTDg model were the correct data generating process, the estimation methodology, which only uses information coming from second-order conditions, may lack efficiency with respect to the MLE approach. Finally, the MTDg model represents a parsimonious approximation of a full Markov chain of order $p$. This parsimony may come at the expense of the realism of the model. From a microstructural point of view, we can hypothesize that the string of past signed events $X_{t-1},\ldots,X_{t-p}$ is not informative enough to predict the value $X_t$. In particular, for large tick stocks price-changing events $\pi=\mathrm{C}$ are much rarer than non price-changing events $\pi=\mathrm{NC}$. Therefore, a $\pi=\mathrm{C}$ event is by construction difficult to predict with past information based only on realised signs and trades. Hence, the behavior that we observe may be ascribed to a problem of missing explanatory variables. A natural candidate in this respect could be the volume of orders outstanding at the opposite side of the limit order book before the execution of a trade order, i.e.
the local order book imbalance. From a more fundamental point of view, we should also point out that the MTDg calibrated kernel, which gives the probability that an event at time $t=0$ will trigger similar or opposite events at a time $t=g$ later, must be interpreted with care. Indeed, this kernel receives contributions both from order splitting, which increases the probability that an agent places an order of the same sign in the future, and from genuine reactions of the rest of the market to this event [@toth2015; @toth2012]. These reactions can be herding (copy-cat trades) or, on the contrary, trades in the other direction (coming e.g. from liquidity providers). The response of the order flow to a single, isolated trade is thus expected to be rather different from the impulse function obtained by calibrating an MTDg model to the full order flow, since order splitting contributions will be absent in the former but contribute to the latter. Resolving the distinction between the two effects requires trade identification data. We hope to come back to this issue in a forthcoming work [@toth2016].

Acknowledgement {#acknowledgement .unnumbered}
===============

We want to thank Z. Eisler, J. Donier, and I. Mastromatteo for very useful discussions. D. E. Taranto acknowledges CFM for supporting his extended visit at CFM where part of this research was done.

Existence and uniqueness of the stationary distribution of the MTDg model {#app:B}
=========================================================================

Suppose that a sequence of random variables $\left\lbrace X_t\right\rbrace_{t\in \mathbb{N}}$ taking values in the finite set $\mathcal{X}=\lbrace 1,\ldots,m\rbrace$ is defined by $$\begin{aligned}
\mathbb{P}(X_t=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p) = \sum_{g=1}^p \lambda_g q^g_{i_g,i}\,,\end{aligned}$$ where $\boldsymbol{Q}^g=\left[q_{i,j}^g\right]_{i,j\in \mathcal{X}}$ are matrices with normalized rows, $\sum_j q^g_{i,j}=1$, $\sum_{g=1}^p \lambda_g=1$, and assume that $\hat{\eta} \boldsymbol{Q}^g=\hat{\eta}, \forall g$. If the vector $\hat{\eta}$ is such that $\hat{\eta}_i>0, i \in \mathcal{X}$ and $\sum_i \hat{\eta}_i=1$, and $$\begin{aligned}
0 < \sum_{g=1}^p \lambda_g q_{i_g,i}^g < 1, \qquad \forall (i,i_1,\ldots,i_p)\in \mathcal{X}^{p+1}\,,
\label{eqn:appB_positivity}\end{aligned}$$ then $$\begin{aligned}
\lim_{\ell \to \infty} \mathbb{P}(X_{t+\ell}=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p)=\hat{\eta}_{i}\,.\end{aligned}$$

**Proof.** Let $\boldsymbol{T}$ be the $m^p \times m^p$ transition matrix for the Markov chain with the $m^p$ possible values of $(X_{t-1},\ldots,X_{t-p})$ as states. The elements of $\boldsymbol{T}$ are $$\begin{aligned}
\mathbb{P}(X_t=i,X_{t-1}=i_1,\ldots,X_{t-p+1}=i_{p-1}|X_{t-1}=j_1,\ldots,X_{t-p}=j_p) \\
=\begin{cases} \sum_{g=1}^p \lambda_g q_{j_g,i}^g & \mbox{if } i_g=j_g, \mbox{ for } g=1,2,\ldots,p-1\,, \\ 0 & \mbox{otherwise}\,. \end{cases}\end{aligned}$$ Each column of $\boldsymbol{T}$ corresponds to the $p$-vector $(i,\ldots,i_{p-1})$ of arrival states, ordered in such a way that $i$ varies most slowly, $i_1$ second most slowly, and so on. Similarly, the rows of $\boldsymbol{T}$ represent the values of $(j_1,\ldots,j_p)$, with $j_1$ varying most slowly, and so on. The assumption of Eq. \[eqn:appB\_positivity\] guarantees that all states of $\boldsymbol{T}$ intercommunicate, so $\boldsymbol{T}$ is irreducible. Moreover, $m$ of the diagonal elements of $\boldsymbol{T}$ (those corresponding to the constant strings $(i,\ldots,i)$) are strictly positive, so the corresponding states are aperiodic; since $\boldsymbol{T}$ is irreducible, all states are then aperiodic.
Hence, $\boldsymbol{T}$, being finite, specifies an ergodic Markov chain and has a unique equilibrium distribution $\xi$ satisfying $\xi \boldsymbol{T}=\xi$, with elements $$\begin{aligned}
\xi_{j_1,\ldots,j_p}=\lim_{t \to \infty} \mathbb{P}(X_{t-1}=j_1,\ldots,X_{t-p}=j_p)\end{aligned}$$ where the $p$-vector $(j_1,\ldots,j_p)$ is ordered in the same way as for the matrix $\boldsymbol{T}$. We call $\omega=(\omega_1,\ldots,\omega_m)$ the corresponding one-dimensional marginal equilibrium distribution. Also let $\boldsymbol{R}$ be the “collapsed form” of $\boldsymbol{T}$ as defined in [@pegram1980], which is the $m^p \times m$ matrix of the non-zero elements of $\boldsymbol{T}$. Clearly, in general $$\begin{aligned}
\xi \boldsymbol{R} = \omega\,.
\label{eqn:appB_existence}\end{aligned}$$ We now write this matrix for the model (\[eqn:smart\_mtdg\]), $$\begin{aligned}
\boldsymbol{R} = \sum_{g=1}^p \lambda_g \boldsymbol{U}_g\,,\end{aligned}$$ where $\boldsymbol{U}_g=\boldsymbol{A}_{g,1} \otimes \cdots \otimes \boldsymbol{A}_{g,p}$ with $$\begin{aligned}
\boldsymbol{A}_{g,k}=\begin{cases} \boldsymbol{Q}^g & \mbox{if } g=k\\ 1^T & \mbox{if } g \neq k \end{cases}\end{aligned}$$ where $\otimes$ is the Kronecker product and $1^T$ is an $m \times 1$ vector of ones. We now calculate $\xi \boldsymbol{R}$ in another way. The $k$-th component of $\xi \boldsymbol{U}_g$ is $$\begin{aligned}
\sum_{i_1,\ldots,i_p}^m q_{i_g,k}^g \xi_{i_1,\ldots,i_p}&=\sum_{i_g}^m q_{i_g,k}^g \sum_{i_h, h \neq g}^m \xi_{i_1,\ldots,i_p} \\
&=\sum_{i_g}^m q_{i_g,k}^g \omega_{i_g}\end{aligned}$$ which is also the $k$-th component of $\omega \boldsymbol{Q}^g$. Thus $$\begin{aligned}
\xi \boldsymbol{R}=\sum_{g=1}^p \lambda_g \omega \boldsymbol{Q}^g\,.
\label{eqn:appB_uniqueness}\end{aligned}$$ Equating Eqs. \[eqn:appB\_existence\] and \[eqn:appB\_uniqueness\], we obtain $\omega=\hat{\eta}$, by uniqueness of $\omega$ and the assumption $\hat{\eta} \boldsymbol{Q}^g=\hat{\eta}, \forall g$.

System of matrix equations of the MTDg model {#app:A}
============================================

Suppose that a sequence of random variables $\left\lbrace X_t\right\rbrace_{t\in \mathbb{N}}$ taking values in the finite set $\mathcal{X}=\lbrace 1,\ldots,m\rbrace$ is defined by Eq. \[eqn:mtdg\] and is stationary. Let $\boldsymbol{B}(k)$ be an $m \times m$ matrix with elements $$\begin{aligned}
b_{i,j}^k=\mathbb{P}(X_t=i, X_{t+k}=j), \qquad i,j \in \mathcal{X}; k\in \mathbb{Z} \end{aligned}$$ and $\boldsymbol{B}(0)=\mbox{diag}(\hat{\eta}_1,\ldots,\hat{\eta}_m)$. Then $$\begin{aligned}
\boldsymbol{B}(k)=\sum_{g=1}^p \lambda_g \boldsymbol{B}(k-g) \boldsymbol{Q}^g\,. \end{aligned}$$

**Proof.** First consider the case where $k=1,\ldots,p$.
Let $$\begin{aligned}
Y_t^k&=\lbrace X_{t+k-g}: g=1,\ldots,p; g \neq k\rbrace\,,\end{aligned}$$ then $$\begin{aligned}
b_{i,j}^k&=\mathbb{P}(X_t=i, X_{t+k}=j) \\
&=\sum_{Y_t^k}\mathbb{P}(X_t=i, X_{t+k}=j|Y_t^k)\mathbb{P}(Y_t^k) \\
&=\sum_{Y_t^k}\mathbb{P}(X_{t+k}=j|X_t=i,Y_t^k)\mathbb{P}(X_t=i|Y_t^k)\mathbb{P}(Y_t^k) \\
&=\sum_{Y_t^k} \sum_{g=1,g \neq k}^p \lambda_g q_{X_{t+k-g},j}^g \mathbb{P}(X_t=i|Y_t^k)\mathbb{P}(Y_t^k)+\sum_{Y_t^k}\lambda_k q_{i,j}^k \mathbb{P}(X_t=i|Y_t^k)\mathbb{P}(Y_t^k) \\
&=\sum_{g=1,g \neq k}^p \lambda_g \sum_{X_{t+k-g}} q_{X_{t+k-g},j}^g \mathbb{P}(X_t=i|X_{t+k-g})\mathbb{P}(X_{t+k-g}) + \lambda_k \hat{\eta}_i q_{i,j}^k \\
&=\sum_{g=1,g \neq k}^p \lambda_g \sum_{h=1}^m b_{i,h}^{k-g} q_{h,j}^g + \lambda_k \hat{\eta}_i q_{i,j}^k\end{aligned}$$ which is the $(i,j)$-th element of $$\begin{aligned}
\sum_{g=1}^p \lambda_g \boldsymbol{B}(k-g) \boldsymbol{Q}^g\end{aligned}$$ as required.

A general class of MTDg models {#app:C}
==============================

Let $\boldsymbol{B}(k)$ be an $m \times m$ matrix whose elements are $$b_{i,j}^k=\mathbb{P}(X_t=i, X_{t+k}=j), \qquad i,j=1,\ldots,m, k \in \mathbb{Z},$$ where $\boldsymbol{B}(0)=\mbox{diag}(\hat{\eta}_1,\ldots,\hat{\eta}_m)$. The matrices $\boldsymbol{B}(k)$ represent the bivariate distributions of the random variable $X_t$. Then, we have that $$\begin{aligned}
\boldsymbol{B}(k)=\begin{pmatrix} b_{1,1}^k & \cdots & b_{1,m-1}^k & \hat{\eta}_1-\sum_{i=1}^{m-1}b_{1,i}^k \\ \vdots & \ddots & \vdots & \vdots \\ b_{m-1,1}^k & \cdots & b_{m-1,m-1}^k & \hat{\eta}_{m-1}-\sum_{i=1}^{m-1}b_{m-1,i}^k \\ \hat{\eta}_1-\sum_{i=1}^{m-1}b_{i,1}^k & \cdots & \hat{\eta}_{m-1}-\sum_{i=1}^{m-1}b_{i,m-1}^k & 2\hat{\eta}_m-1+\sum_{i,j=1}^{m-1} b_{i,j}^k \\ \end{pmatrix}\,, \end{aligned}$$ where the total number of independent elements is $m^2-2m+1$ for each $k$. The parameters of the MTD model of order $p$ consist of the vector $\lambda=(\lambda_1,\ldots,\lambda_p)$ and the matrices $\boldsymbol{Q}^g$, such that $$\begin{aligned}
\boldsymbol{Q}^g&=1^T\hat{\eta}+\boldsymbol{\tilde{Q}}^g\,, \nonumber \\
\boldsymbol{\tilde{Q}}^g&=\begin{pmatrix} \tilde{q}_{1,1}^g & \cdots & \tilde{q}_{1,m-1}^g & -\sum_{i=1}^{m-1}\tilde{q}_{1,i}^g \\ \vdots & \ddots & \vdots & \vdots \\ \tilde{q}_{m-1,1}^g & \cdots & \tilde{q}_{m-1,m-1}^g & -\sum_{i=1}^{m-1}\tilde{q}_{m-1,i}^g \\ -\sum_{i=1}^{m-1}c_i \tilde{q}_{i,1}^g & \cdots & -\sum_{i=1}^{m-1}c_i \tilde{q}_{i,m-1}^g & \sum_{i,j=1}^{m-1} c_i \tilde{q}_{i,j}^g\\ \end{pmatrix}\,, \end{aligned}$$ where $\hat{\eta}\boldsymbol{\tilde{Q}}^g=0, \forall g$ and $c_i=\hat{\eta}_i/\hat{\eta}_m$. Consistently with these definitions, the conditional probabilities of the $p$-order Markov chain read $$\begin{aligned}
\mathbb{P}(X_t=i|X_{t-1}=i_1,\ldots,X_{t-p}=i_p) = \hat{\eta}_{i}+\sum_{g=1}^p a_{i_g,i}^g\,,\end{aligned}$$ where $a_{i_g,i}^g \equiv \lambda_g \tilde{q}_{i_g,i}^g$. Within this framework, the bivariate distributions and the matrices $\boldsymbol{\tilde{Q}}^g$ satisfy the following system of matrix equations $$\begin{aligned}
\boldsymbol{B}(k)-\hat{\eta}^T\hat{\eta}=\sum_{g=1}^p \boldsymbol{B}(k-g) \boldsymbol{A}_g\,,\end{aligned}$$ where $\boldsymbol{A}_g \equiv \lambda_g\boldsymbol{\tilde{Q}}^g$. Employing the empirical bivariate distributions, the above linear system can be inverted in order to find the parameters of the model; a schematic numerical illustration of this step is given below.
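The following Python sketch illustrates, under simplifying assumptions, how this inversion could be carried out in practice: the stationary law $\hat{\eta}$ and the bivariate matrices $\boldsymbol{B}(k)$ are estimated empirically from a sequence encoded as integers $0,\ldots,m-1$, the system is written for $k=1,\ldots,p$ (assumed here to give a determined system) using $\boldsymbol{B}(-k)=\boldsymbol{B}(k)^T$, and the stacked matrices $\boldsymbol{A}_g$ are obtained by unconstrained least squares. The function names are illustrative, and the admissibility constraints stated next (as well as the convex program of Appendix \[app:D\]) are deliberately omitted from this sketch.

```python
import numpy as np

def empirical_bivariate(x, k, m):
    """Empirical B(k)[i, j] = P(X_t = i, X_{t+k} = j) from an integer sequence x, for k >= 1."""
    B = np.zeros((m, m))
    for a, b in zip(x[:-k], x[k:]):
        B[a, b] += 1.0
    return B / (len(x) - k)

def estimate_A_unconstrained(x, p, m):
    """Solve B(k) - eta^T eta = sum_g B(k-g) A_g for k = 1..p by least squares."""
    x = np.asarray(x)
    eta = np.bincount(x, minlength=m) / len(x)
    B = {0: np.diag(eta)}
    for k in range(1, p + 1):
        B[k] = empirical_bivariate(x, k, m)
        B[-k] = B[k].T                              # stationarity: B(-k) = B(k)^T
    # Block system: row-block k is [B(k-1) ... B(k-p)] acting on the stacked [A_1; ...; A_p].
    K = np.block([[B[k - g] for g in range(1, p + 1)] for k in range(1, p + 1)])
    d = np.vstack([B[k] - np.outer(eta, eta) for k in range(1, p + 1)])
    A_stacked, *_ = np.linalg.lstsq(K, d, rcond=None)
    return eta, [A_stacked[g * m:(g + 1) * m] for g in range(p)]
```

In this sketch each returned matrix plays the role of $\boldsymbol{A}_g=\lambda_g\boldsymbol{\tilde{Q}}^g$, so the lag weights and the matrices $\boldsymbol{\tilde{Q}}^g$ are only identified through their product.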
The resulting parameters have to satisfy the following conditions in order to characterise a well-defined $p$-order Markov model $$\begin{aligned}
\hat{\eta}_{i}+\sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right) < 1, \qquad \forall i \in \mathcal{X}\,; \nonumber \\
\hat{\eta}_{i}+\sum_{g=1}^p \min_{i_g} \left(a_{i_g,i}^g \right) > 0, \qquad \forall i \in \mathcal{X}\,.\end{aligned}$$

Convexity of the optimization problem {#app:D}
=====================================

If $\boldsymbol{K}$ is not singular, the following constrained optimization problem $$\begin{aligned}
\hat{q}&=\underset{{\bf q} \in \mathbb{R}^{p(m^2-2m+1)}}{\operatorname{argmin}} \left\Vert {\bf d}-\boldsymbol{K}\cdot {\bf q} \right\Vert^2 \nonumber \\
\mbox{s.t.} \qquad &\hat{\eta}_{i}+\sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right) < 1, \qquad \forall i \in \mathcal{X} \nonumber \\
&\hat{\eta}_{i}+\sum_{g=1}^p \min_{i_g} \left(a_{i_g,i}^g \right) > 0, \qquad \forall i \in \mathcal{X} \end{aligned}$$ is convex in $\mathbb{R}^{p(m^2-2m+1)}$.

**Proof.** This is true if the objective function and all the constraints are convex functions. First of all, it is straightforward to show that the Hessian of the objective function, $2\boldsymbol{K}^T\boldsymbol{K}$, is a positive semi-definite matrix. The constraints are convex in $q$ if they are convex in the parameters $a_{i,j}^g$, because the latter are affine functions of the components of $q$. Let $a$ be the vector of parameters $\left(a_{i,j}^g \right)_{i,j \in \mathcal{X}; 1 \leq g \leq p}$. We need to prove that the function $$\begin{aligned}
f(a)=\sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right), \qquad \forall i \in \mathcal{X}\end{aligned}$$ is convex in $\mathbb{R}^{p(m^2-2m+1)}$. If we prove it for a fixed $i$, then it is true for all $i \in \mathcal{X}$, and an analogous argument applies to the constraints involving the minimum function. The function $f(a)$ satisfies, for $0 \leq \theta \leq 1 $, different vectors of parameters $a,b\in \mathbb{R}^{m^2p}$, and fixed $i$, $$\begin{aligned}
f(\theta a + (1-\theta)b)&=\sum_{g=1}^p \max_{i_g} \left(\theta a_{i_g,i}^g+(1-\theta) b_{i_g,i}^g \right) \nonumber \\
&\leq\theta \sum_{g=1}^p \max_{i_g} \left(a_{i_g,i}^g \right) + (1-\theta) \sum_{g=1}^p \max_{i_g} \left(b_{i_g,i}^g \right) \nonumber \\
&=\theta f(a)+(1-\theta)f(b)\,.\end{aligned}$$ Therefore, we conclude that the function $f(a)$ is convex in $\mathbb{R}^{p(m^2-2m+1)}$.

[99]{}

Taranto, D.E., Bormetti, G., Bouchaud, J.-P., Lillo, F., and Tóth, B. (2016). Linear models for the impact of order flow on prices I. Propagators: Transient vs. History Dependent Impact. Preprint available at http://arxiv.org/abs/1602.02735.

Hasbrouck, J. (1988). Trades, quotes, inventory and information. Journal of Financial Economics, 22, 229-252.

Hasbrouck, J. (1991). Measuring the information content of stock trades. Journal of Finance, 46, 179-207.

Bacry, E., and Muzy, J. F. (2014). Hawkes model for price and trades high-frequency dynamics. Quantitative Finance, 14(7), 1147-1166.

Bouchaud, J. P., Gefen, Y., Potters, M., and Wyart, M. (2004). Fluctuations and response in financial markets: The subtle nature of “random” price changes. Quantitative Finance, 4(2), 176-190.

Bouchaud, J. P., Kockelkoren, J., and Potters, M. (2006). Random walks, liquidity molasses and critical response in financial markets. Quantitative Finance, 6(02), 115-123.

Lillo, F., and Farmer, J. D. (2004). The long memory of the efficient market. Studies in Nonlinear Dynamics & Econometrics, 8(3).
Tóth, B., Lemperiere, Y., Deremble, C., De Lataillade, J., Kockelkoren, J., and Bouchaud, J. P. (2011). Anomalous price impact and the critical nature of liquidity in financial markets. Physical Review X, 1(2), 021006. Tóth, B., Eisler, Z., Lillo, F., Kockelkoren, J., Bouchaud, J.-P., and Farmer, J. D. (2012). How does the market react to your order flow? Quantitative Finance, 12(7), 1015-1024 Tóth, B., Palit, I., Lillo, F., and Farmer, J. D. (2015). Why is equity order flow so persistent? Journal of Economic Dynamics and Control, 51, 218-239. Tóth, B., Eisler, Z., and Bouchaud, J.-P, Propagator models calibrated on proprietary data, *in preparation* Mastromatteo, I., Tóth, B., and Bouchaud, J. P. (2014). Agent-based models for latent liquidity and concave price impact. Physical Review E, 89(4), 042805. Donier, J., Bonart J., Mastromatteo I., and Bouchaud J.-P. (2015). A fully consistent, minimal model for non-linear market impact. Quantitative Finance, 15(7), 1109-1121. Taranto, D. E., Bormetti, G., and Lillo, F. (2014). The adaptive nature of liquidity taking in limit order books. Journal of Statistical Mechanics: Theory and Experiment, 2014(6), P06002. Eisler, Z., Bouchaud, J. P., and Kockelkoren, J. (2012). The price impact of order book events: Market orders, limit orders and cancellations. Quantitative Finance, 12(9), 1395-1419. Eisler, Z., Bouchaud, J.-P. and Kockelkoren, J. (2012) Models for the impact of all order book events, in Market Microstructure: Confronting Many Viewpoints (eds F. Abergel, J.-P. Bouchaud, T. Foucault, C.-A. Lehalle, and M. Rosenbaum), John Wiley & Sons Ltd, Oxford, UK. Curato, G., and Lillo, F. (2015). Modeling the coupled return-spread high frequency dynamics of large tick assets. Journal of Statistical Mechanics: Theory and Experiment, 2015(1), P01028. Lillo, F., Mike, S., and Farmer, J. D. (2005). Theory for long memory in supply and demand. Physical Review E, 71(6), 066122. Jacobs, P. A., and Lewis, P. A. (1978). Discrete time series generated by mixtures. I: Correlational and runs properties. Journal of the Royal Statistical Society. Series B (Methodological), 94-105. Raftery, A. E. (1985). A model for high-order Markov chains. Journal of the Royal Statistical Society. Series B (Methodological), 528-539. Berchtold, A., and Raftery, A. E. (2002). The mixture transition distribution model for high-order Markov chains and non-Gaussian time series. Statistical Science, 328-356. Berchtold, A. (1995). Autoregressive modeling of Markov chains. Statistical Modelling: Proceedings of the 10 th International Workshop on Statistical Modelling, 19-26. Springer-Verlag. Raftery, A., and Tavaré, S. (1994). Estimation and modelling repeated patterns in high order Markov chains with the mixture transition distribution model. Applied Statistics, 179-199. Berchtold, A. (2001). Estimation in the mixture transition distribution model. Journal of Time Series Analysis, 22(4), 379-397. Lèbre, S., and Bourguignon, P. Y. (2008). An EM algorithm for estimation in the mixture transition distribution model. Journal of Statistical Computation and Simulation, 78(8), 713-729. Chen, D. G., and Lio, Y. L. (2009). A novel estimation approach for mixture transition distribution model in high-order Markov chains. Communications in Statistics-Simulation and Computation, 38(5), 990-1003. Boggs, P. T., and Tolle, J. W. (1995). Sequential quadratic programming. Acta numerica, 4, 1-51. Curtis, F. E., and Overton, M. L. (2012). 
A sequential quadratic programming algorithm for nonconvex, nonsmooth constrained optimization. SIAM Journal on Optimization, 22(2), 474-500.

Pegram, G. G. S. (1980). An autoregressive model for multilag Markov chains. Journal of Applied Probability, 17, 350-362.

[^1]: More recent modeling in continuous time makes use of Hawkes processes [@bacry14], which bear some degree of similarity with the models considered in the present paper.

[^2]: We recall that large tick stocks have the property that the ratio between tick size and price is relatively high and, as a consequence, the spread is almost always equal to one tick.

[^3]: The case $m=2$ considered in Ref. [@taranto16] corresponds to an MTD(p) model with transition matrices that are the same for all $g$, $\boldsymbol{Q}^g \equiv \boldsymbol{Q}$ and $$\boldsymbol{Q}=\begin{pmatrix} \rho & 1-\rho \\ 1-\rho & \rho \end{pmatrix}\,.$$ In the stationary condition the two states are equiprobable, as can be verified by solving the left eigenvalue problem $\hat{\eta} \boldsymbol{Q}=\hat{\eta}$.
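As a small numerical illustration of the two-state example in the footnote above and of the convergence statement of Appendix \[app:B\], the sketch below simulates an MTD($p$) chain with a single transition matrix shared by all lags and checks that the empirical marginal approaches the equiprobable stationary law (for this symmetric $\boldsymbol{Q}$, $\hat{\eta}=(1/2,1/2)$ satisfies $\hat{\eta}\boldsymbol{Q}=\hat{\eta}$). The geometric choice of the lag weights is arbitrary and purely illustrative, as is the function name.

```python
import numpy as np

def simulate_mtd(lam, Q, T, seed=0):
    """Simulate an MTD(p) chain: P(X_t = i | past) = sum_g lam[g] * Q[X_{t-(g+1)}, i]."""
    rng = np.random.default_rng(seed)
    p, m = len(lam), Q.shape[0]
    x = list(rng.integers(0, m, size=p))               # arbitrary initial window
    for _ in range(T):
        probs = sum(lam[g] * Q[x[-(g + 1)]] for g in range(p))
        x.append(rng.choice(m, p=probs))
    return np.array(x[p:])

rho, p = 0.7, 5
Q = np.array([[rho, 1 - rho], [1 - rho, rho]])         # two-state example of footnote 3
lam = 0.5 ** np.arange(1, p + 1); lam /= lam.sum()     # illustrative geometric lag weights
x = simulate_mtd(lam, Q, T=200_000)
print(np.bincount(x, minlength=2) / len(x))            # should approach (1/2, 1/2)
```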
---
abstract: |
    In this contribution we present a survey of concepts in localized model order reduction methods for parameterized partial differential equations. The key concept of localized model order reduction is to construct local reduced spaces that have only support on part of the domain and compute a global approximation by a suitable coupling of the local spaces. In detail, we show how optimal local approximation spaces can be constructed and approximated by random sampling. An overview of possible conforming and non-conforming couplings of the local spaces is provided and corresponding localized a posteriori error estimates are derived. We introduce concepts of local basis enrichment, which includes a discussion of adaptivity. Implementational aspects of localized model reduction methods are addressed. Finally, we illustrate the presented concepts for multiscale, linear elasticity and fluid-flow problems, providing several numerical experiments.\
    This work has been accepted as a chapter in P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A. Schilders, L.M. Silveira. Handbook on Model Order Reduction. Walter De Gruyter GmbH, Berlin, 2019+.
address:
- 'Mathematics Münster, Einsteinstr. 62, D-48149 Münster, Germany'
- 'Department of Mathematics and Computer Science, TU Eindhoven, Eindhoven, The Netherlands'
- 'Mathematics Münster, Einsteinstr. 62, D-48149 Münster, Germany'
- 'Mathematics Münster, Einsteinstr. 62, D-48149 Münster, Germany'
- 'Mathematics Münster, Einsteinstr. 62, D-48149 Münster, Germany'
- 'University of Twente, Faculty of Electrical Engineering, Mathematics & Computer Science, Zilverling, P.O. Box 217, 7500 AE Enschede, The Netherlands'
author:
- Andreas Buhr
- Laura Iapichino
- Mario Ohlberger
- Stephan Rave
- Felix Schindler
- Kathrin Smetana
title: Localized model reduction for parameterized problems
---

Introduction {#sec:intro}
============

Parameterized partial differential equations and localization {#sec:setting}
==============================================================

Coupling local approximation spaces {#sec:coupling}
====================================

Conforming approach {#subsec:conforming}
-------------------

Non-conforming approach {#subsec:non-conforming}
-----------------------

Preparation of local approximation spaces {#sec:prep}
=========================================

Both the coupling that yields a conforming approximation and the one that yields a non-conforming approximation require either reduced spaces $\Lambda_{N^{\gamma}}^{\gamma} \subset V_{h}|_{\gamma}$ for interfaces and/or edges $\Lambda_{N^{e}}^{e} \subset V_{h}|_{e}$ (see subsection \[subsec:conforming\]), or reduced spaces $V_{N}^{m}$ (see subsection \[subsec:non-conforming\]), or both. As the generation of edge basis functions can be done analogously to the construction of interface basis functions, we restrict ourselves to the latter in order to simplify notation. To fix the setting we thus consider the task of finding a suitable reduced space either on a subdomain $\Omega_{m} \subsetneq \Omega_{out} \subset \Omega$ with ${\operatorname{dist}}(\Gamma_{out},\partial \Omega_{m}) \geq \rho > 0$, $\Gamma_{out}:=\partial\Omega_{out}\setminus\partial \Omega$, or on an interface $\Gamma_{m,m'} \subset \partial \Omega_{m}$, where ${\operatorname{dist}}(\Gamma_{out},\Gamma_{m,m'})\geq \rho > 0$. Possible geometric configurations of the oversampling domain $\Omega_{out}$ are illustrated in Fig. \[fig:illustration geometry\].
![Illustration of possible decompositions of $\Omega_{out}$ with respect to $\Gamma_{m,m'}$ or $\Omega_{m}$.[]{data-label="fig:illustration geometry"}](two_subdomains_neumann.png "fig:"){height="15.00000%"} ![Illustration of possible decompositions of $\Omega_{out}$ with respect to $\Gamma_{m,m'}$ or $\Omega_{m}$.[]{data-label="fig:illustration geometry"}](inner_domain.png "fig:"){height="15.00000%"}

We will first briefly discuss in subsection \[subsec:bas\_gen\_polynomial\] reduced spaces that are spanned by *polynomials or solutions of “standard” eigenvalue problems* and are thus related to the spectral element method or hp-FEM. Subsequently, in subsection \[subsec:bas\_gen\_empirical\] we will present reduced spaces that are generated from local solutions of the PDE, are thus of *empirical* nature, and are optimal in the sense of Kolmogorov. We will also show how those optimal basis functions can be efficiently and accurately approximated by means of random sampling.

Polynomial-based local approximation spaces {#subsec:bas_gen_polynomial}
-------------------------------------------

Local approximation spaces based on empirical training {#subsec:bas_gen_empirical}
-------------------------------------------------------

A posteriori error estimation {#sec:apost}
=============================

\[sec:a\_posteriori\]

Basis enrichment and online adaptivity {#sec:adaptivity}
======================================

Computational aspects {#sec:computation}
=====================

Applications and numerical experiments {#sec:examples}
======================================

Multiscale problems {#sec:multiscale_experiments}
-------------------

Fluid dynamics
--------------

Further perspectives {#sec:perspectives}
====================

Parabolic problems
------------------

Non-affine parameter dependence and non-linear problems
-------------------------------------------------------
---
abstract: 'We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the *information form*. The key insight of our work is that the information matrix, i.e. the inverse of the covariance matrix, tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low rank approximations (LRA) can be effectively exploited. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach over the current methods.'
bibliography:
- 'reference.bib'
---

Acknowledgements {#acknowledgements .unnumbered}
================

We thank the anonymous reviewers and the area chairs for their time and thoughtful comments. Special thanks to Konstantin Kondak for many inspiring discussions, and to Klaus Strobl for many pointers to related works. The authors acknowledge the support of the Helmholtz Association, the project ARCHES (contract number ZT-0033) and the EU project AUTOPILOT (contract number 731993). Jianxiang Feng is supported by the Munich School for Data Science (MUDS) and Rudolph Triebel is a member of MUDS.
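The abstract above does not spell out the sampler itself. As a purely illustrative sketch of how sampling in the information form can exploit a low-rank-plus-diagonal structure of the information matrix, one generic construction is the following; it is not claimed to be the algorithm of the paper, and all names, shapes and the assumed factorization $\Lambda = \mathrm{diag}(d) + U U^T$ are assumptions made here only for the example.

```python
import numpy as np

def sample_information_form(mu, d, U, n_samples, seed=0):
    """Draw samples from N(mu, Lambda^{-1}) with Lambda = diag(d) + U U^T (d > 0, U of shape n x k).

    Generic construction: sample y ~ N(0, Lambda) as sqrt(d)*z1 + U z2, then map
    x = mu + Lambda^{-1} y, applying Lambda^{-1} via the Woodbury identity.
    Cost per batch is O(n k^2) instead of O(n^3).
    """
    rng = np.random.default_rng(seed)
    n, k = U.shape
    z1 = rng.standard_normal((n, n_samples))
    z2 = rng.standard_normal((k, n_samples))
    y = np.sqrt(d)[:, None] * z1 + U @ z2                    # Cov(y) = diag(d) + U U^T = Lambda
    Dinv_y = y / d[:, None]
    Dinv_U = U / d[:, None]
    M = np.eye(k) + U.T @ Dinv_U                             # k x k capacitance matrix
    x = Dinv_y - Dinv_U @ np.linalg.solve(M, U.T @ Dinv_y)   # Lambda^{-1} y via Woodbury
    return mu[:, None] + x
```

Since $x=\Lambda^{-1}y$ with $y\sim\mathcal N(0,\Lambda)$ has covariance $\Lambda^{-1}$, the returned samples follow the intended posterior approximation; the per-sample cost is dominated by products with $U$ and a $k\times k$ solve, so it scales linearly in the number of parameters when $k$ is small.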
---
abstract: 'We consider two cases of kinetically constrained models, namely East and FA-1f models. The object of interest of our work is the activity ${\ensuremath{\mathcal A}}(t)$ defined as the total number of configuration changes in the interval $[0,t]$ for the dynamics on a finite domain. It has been shown in [@GJLPDW1; @GJLPDW2] that the large deviations of the activity exhibit a non-equilibrium phase transition in the thermodynamic limit and that reducing the activity is more likely than increasing it due to a blocking mechanism induced by the constraints. In this paper, we study the finite size effects around this first order phase transition and analyze the phase coexistence between the active and inactive dynamical phases in dimension 1. In higher dimensions, we show that the finite size effects are also determined by the dimension and the choice of boundary conditions.'
author:
- 'T. Bodineau, C. Toninelli'
title: |
    Activity phase transition\
    for constrained dynamics
---

*Mathematics Subject Classification: 60K35, 82C22, 60F10*

*Keywords: kinetically constrained models, non-equilibrium dynamics, large deviations, glassy dynamics, interacting particle systems.*

Introduction {#intro}
============

Kinetically constrained spin models (KCSM) are interacting particle systems which have been introduced and very much studied in the physics literature to model the liquid/glass transition and, more generally, glassy dynamics (see [@RS; @GST] and references therein). A configuration is given by assigning to each vertex $x$ of a (finite or infinite) connected graph ${\ensuremath{\mathcal G}}$ its occupation variable $\eta_x \in\{0,1\}$, which corresponds to an empty or filled site, respectively. The evolution is given by Markovian stochastic dynamics of Glauber type. Each site, with rate one, refreshes its occupation variable to a filled or to an empty state with probability $\rho$ or $1-\rho$ respectively, provided that the current configuration satisfies an a priori specified local constraint. Here we focus on two of the most studied KCSM, the East [@JE] and FA-1f models [@FA1; @FA2] on hypercubic lattices (${\ensuremath{\mathcal G}}\subset \mathbb Z^d$): the constraint at $x$ requires, for the East model, its right nearest neighbour to be empty and, for the FA-1f model, at least one of its nearest neighbours to be empty. Note that in both cases (and this is a general feature of KCSM) the constraint which should be satisfied to allow creation/annihilation of a particle at $x$ does not involve $\eta_x$; thus the dynamics satisfies detailed balance w.r.t. the Bernoulli product measure at density $\rho$, which is therefore a reversible invariant measure for the process. Both models are ergodic on ${\ensuremath{\mathcal G}}=\mathbb Z^d$ for any $\rho\in(0,1)$ with a positive spectral gap which shrinks to zero as $\rho\to 1$, corresponding to the occurrence of diverging mixing times [@CMRT]. Several numerical works and approximate analytical treatments have shown that relaxation for both models occurs in a more and more spatially heterogeneous way as the density is increased (see Section 1.5 of [@GST] and references therein). For example, when measuring the persistence field $p_x(t)$, which equals one if site $x$ has never changed its state up to time $t$ and zero otherwise, a clear spatial segregation is observed among sites with 0/1 values of $p$ at time scales corresponding to the typical relaxation time of the persistence function, i.e. the spatial average of the persistence field.
More quantitatively, if one measures the spatial correlation function of this persistence field, a dynamical correlation length corresponding to the extent of these heterogeneities can be extracted. This length increases as the density is increased. The occurrence of these dynamical heterogeneities has led to the idea that the dynamics of KCM takes place on a first-order coexistence line between active and inactive dynamical phases [@MGC; @JGC]. In order to exploit this idea, in [@GJLPDW1; @GJLPDW2] the fluctuations of the dynamical activity ${\ensuremath{\mathcal A}}(t)$, defined as the number of microscopic configuration changes in a volume of linear size $N$ in the time interval $[0,t]$, have been investigated. The mean activity scales as $$\lim_{N\to\infty}\lim_{t\to\infty} \frac{{\langle}{\ensuremath{\mathcal A}}(t) {\rangle}}{N t} = \mathbb A \, ,$$ where $\mathbb A$ depends on the density and on the choice of the constraints, as we will detail in Section \[models\]. Thus one could expect that the probability of observing a deviation from the mean value scales as $$\label{eq: LD intro}
\lim_{N\to\infty}\lim_{t\to\infty} \; \frac{1}{Nt} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{N t} \simeq a \right {\rangle}=-f(a) \, ,$$ with $0<f (a) <\infty$ for $a\neq \mathbb A$, as is the case for models without constraints. However, as has been observed in [@GJLPDW1; @GJLPDW2], due to the presence of the constraint it is possible to realize at a low cost a trajectory with zero activity by starting from a completely filled configuration and imposing that a single site does not change its state (see Section \[heuristics\] for a detailed explanation of the mechanism behind this phenomenon). Analogously, one can obtain a smaller activity than the mean one by blocking a single site for a fraction of the time. As a consequence of this sub-extensive cost for lowering the activity, $f(a)=0$ for $a<\mathbb A$. For the same reason, the moment generating function $$\begin{aligned}
\label{eq: 1st order intro}
\psi({\lambda}) = \lim_{N \to \infty} \lim_{t \to \infty} \; \frac{1}{N t} \log \left {\langle}\exp \big( {\lambda}{\ensuremath{\mathcal A}}(t) \big) \right {\rangle}\end{aligned}$$ is non-analytic at $\lambda=0$ with a discontinuous first order derivative [@GJLPDW1; @GJLPDW2]. In this paper, we investigate the finite size scaling of the first order transition \[eq: 1st order intro\]. Our main results are estimates of the cost of phase coexistence between the active and inactive dynamical phases. From these estimates, the relevant scaling asymptotics in \[eq: 1st order intro\] can be determined. For East and FA-1f in one dimension, we prove (Theorem \[teo:phasetrans\]) that $${\varphi}(\alpha) :=\lim_{N\to\infty}\lim_{t\to\infty} \frac{1}{t} \log \left {\langle}\exp \left( \frac{\alpha {\ensuremath{\mathcal A}}(t)}{N} \right ) \right {\rangle}$$ satisfies ${\varphi}(\alpha)=\alpha \mathbb A$ if $\alpha>\alpha_0$ and ${\varphi}(\alpha)=-\Sigma$ if $\alpha<\alpha_1$. This shows that a transition in \[eq: 1st order intro\] occurs at a value ${\lambda}= \frac{{\alpha}_c}{N}$ with ${\alpha}_1 < {\alpha}_c < {\alpha}_0$. As a consequence, the scaling of the large deviations differs for increasing or decreasing the activity (see Theorem \[fluctu1\]).
We also analyze the measure on the space-time configurations $$\mu_{\alpha,T}^{N}: = \frac{{\left{\langle}\cdot \; \exp( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) ) \right{\rangle}}}{{\left{\langle}\exp( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) ) \right{\rangle}}}$$ which corresponds to the conditional measure with a fixed activity $\frac{{\ensuremath{\mathcal A}}(t)}{t} = \bar A$, where ${\alpha}$ is the parameter conjugate to $\bar A$ under Legendre transform, and we prove (Theorem \[teo:condmes\]) that, depending on the value of $\alpha$, this measure has very different typical configurations, which can be interpreted as active and inactive dynamical phases: for $\alpha>\alpha_0$, $\mu_{\alpha,T}^{N}$ concentrates on trajectories with the mean activity and for $\alpha<\alpha_1$, it concentrates on trajectories with zero activity. Finally, we investigate the higher dimensional cases and show that the finite size scaling depends not only on the dimension but also on the boundary conditions (Theorem \[linearityd>1\]). This leads to a variety of scalings for the large deviations when the activity is reduced (Theorem \[fluctu2\]).

Models and results {#models}
==================

East and FA-1f in $d=1$: the phase transition
---------------------------------------------

The East and FA-1f models in one dimension are Glauber type Markov processes on the configuration space ${\Omega}= \{0,1\}^{\Lambda}$ where $\Lambda\subset{{\ensuremath{\mathbb Z}} }$. Both models depend on a parameter $\rho$, with $\rho\in(0,1)$, which we will call the [*density*]{}. Here we will consider the models in finite volumes $\Lambda=\Lambda_N:=[1,N]$ and we will be interested in the thermodynamic limit $N\to\infty$. We call ${\Omega}_N$ the configuration space corresponding to $\Lambda_N$ and denote by Greek letters $\eta,\omega$ the elements of $\Omega_N$. Then for any site $i\in\Lambda_N$ we let $\eta_i\in\{0,1\}$ be the value of the configuration $\eta$ at site $i$, and we say that $i$ is [*empty*]{} ([*filled*]{}) if $\eta_i=0$ ($\eta_i=1$, respectively). The Markov process corresponding to both models can be informally described as follows. Each site $i\in\Lambda_N$ waits an independent mean one exponential time and then, provided the current configuration satisfies a proper local constraint, we refresh the value of the configuration at $i$ by setting it to $1$ with probability $\rho$ and to zero with probability $1-\rho$. If instead the constraint is not satisfied, nothing occurs. Then the procedure starts again. The specific choice of the constraint identifies the model: for East one requires that the right nearest neighbour of $i$ is empty; for the FA-1f model one requires that at least one among the right and left nearest neighbours of $i$ is empty. In formulas, the constraint at $i$ is satisfied for East and FA-1f in the configuration $\eta$ iff $\eta_{i+1}=0$ and $\eta_{i+1} \, \eta_{i-1}=0$, respectively. Note that in both models the constraint is local (it depends on the configuration on a finite neighborhood of the to-be-updated site) and does not depend on the value of the configuration on the to-be-updated site. Both models belong to the larger class of Kinetically Constrained Models (KCM in short), which have been introduced and widely studied in the physics literature (see [@RS; @GST] for reviews). In order for the above description to be complete, we need to specify what happens at sites near the boundary.
A standard choice in statistical mechanics is to define the dynamics in finite volume by imposing a fixed boundary condition. Here the choice of this boundary condition is very delicate: indeed, due to the presence of the constraints, both models are very sensitive to the specific choice of these conditions even on large volumes. For example, for the East model it is easy to verify that if we fix a boundary condition equal to one at $N+1$, we start the evolution from $\eta \in \Omega_N$ and we let $x$ be the position of the rightmost zero of $\eta$, then at any subsequent time site $x$ stays empty and sites $[x+1,N]$ stay filled. In this case we say that the configuration is [*frozen*]{} on $[x,N]$, meaning that under the evolution the configuration on these sites remains unchanged. On the other hand, for a boundary condition equal to zero at $N+1$, it can be easily verified that there is no site on which the configuration is frozen, no matter what the choice of the initial configuration is. From the above observation it follows that in the case of a filled boundary condition the configuration space (even on finite volume) is not irreducible. Indeed there exist configurations $\sigma,\eta\in \Omega_N$ such that it is not possible to devise a path of elementary moves with strictly positive rates which connects $\sigma$ to $\eta$. If instead we take an empty boundary condition, the configuration space is irreducible. This can be easily verified by constructing a path which completely empties any configuration starting from the right boundary. Analogously, for the FA-1f model, if one imposes filled boundary conditions both at $0$ and $N+1$ the configuration space is reducible. On the other hand, any choice with at least one empty site in the pair $(0,N+1)$ is sufficient to guarantee irreducibility. Here, for both models, we will only be interested in choices of the boundary conditions which guarantee irreducibility (and therefore ergodicity, as we consider finite systems). Note also that for both models, no matter which choice we perform for the boundary condition, the dynamics satisfy detailed balance with respect to the Bernoulli product measure $\nu$ at density $\rho$, namely $\nu(\eta):=\prod_{i\in\Lambda_N}\nu_i(\eta_i)$ with $\nu_i(1)=\rho$ (this is a direct consequence of the fact, observed above, that the constraint at $i$ does not depend on the value of $\eta_i$). Therefore $\nu$ is an invariant measure for the process and, in the irreducible case, this is the unique invariant measure. Let us now give a formal definition of these processes via the action of their generator ${\ensuremath{\mathcal L}}_N$ on local functions $f:\Omega_N\to{{\ensuremath{\mathbb R}} }$. We introduce $$\begin{aligned}
\label{gene}
{\ensuremath{\mathcal L}}_N f (\eta) = \sum_{i\in {\Lambda}_N} c_i(\eta) \big( f(\eta^i) - f (\eta) \big)\end{aligned}$$ where $\eta^i$ stands for the configuration $\eta$ changed at $i$, namely $$\eta^i_j= \begin{cases} \eta_j & \text{ if }\, j\neq i\\ 1-\eta_j & \text{ if }\, j=i \end{cases} \label{flip}$$ and we let $$\label{ci} c_i(\eta):=r_i (\eta)[\eta_i(1-\rho)+(1-\eta_i)\rho]$$ with $r_i$ the function that encodes the constraint at site $i$, namely $r_i(\eta)=1$ ($r_i(\eta)=0$) iff the constraint at $i$ is (is not) satisfied.
Thus $r_i$ is model dependent and for the East model with frozen empty boundary condition at the right boundary we set $$\label{ceast} r_i(\eta):=(1-\eta_{i+1})\,\,\,\,\,\,{\mbox{ if }} i\in[1,N-1];\,\,\,\,\,\,\,\,\,r_N=1$$ for FA-1f with empty boundary condition at the right and left boundary we set $$\label{cFA-1ftwo} r_i(\eta):=(1-\eta_{i+1}\eta_{i-1})\,\,\,\,\,\,{\mbox{ if }} i\in[2,N-1];\,\,\,\,\,\,\,\,\,r_1=1,\,\,\,r_N=1$$ for FA-1f with empty boundary condition at the right boundary and occupied boundary condition at the left boundary we set $$\label{cFA-1fone} r_i(\eta):=(1-\eta_{i+1}\eta_{i-1})\,\,\,\,\,\,{\mbox{ if }} i\in[2,N-1];\,\,\,\,\,\,\,\,\,r_1=(1-\eta_2),\,\,\,r_N=1$$ and finally for FA-1f with empty boundary condition at the left boundary and occupied boundary condition at the right boundary we set $$r_i(\eta):=(1-\eta_{i+1}\eta_{i-1})\,\,\,\,\,\,{\mbox{ if }} i\in[2,N-1];\,\,\,\,\,\,\,\,\,r_1=1,\,\,\,r_N=(1-\eta_{N-1}). \label{cfas}$$ As already mentioned, our analysis will focus on the above choices, which are the only ones which guarantee ergodicity. Therefore in the following, when we refer to the East model, to FA-1f with two empty boundaries and to FA-1f with one empty boundary, we mean respectively the choices \[ceast\], \[cFA-1ftwo\] and \[cFA-1fone\] (by symmetry the choices \[cFA-1fone\] and \[cfas\] for FA-1f are equivalent, therefore we never consider the case \[cfas\]). Also, when we state results referring to the FA-1f model without further specifying the boundary conditions, it means that these results hold for both the choices \[cFA-1ftwo\] and \[cFA-1fone\]. Note that the generator can be equivalently rewritten as $$\begin{aligned} {\ensuremath{\mathcal L}}_N f (\eta) = \sum_{i\in {\Lambda}_N} r_i(\eta) \big( \nu_{i}(f) - f (\eta) \big)\end{aligned}$$ with $\nu_i(f)=\int d\nu_i(\eta_i)f(\eta)$ the local mean at site $i$. The object of interest of our work is the total activity $$\label{defactivity}{\ensuremath{\mathcal A}}(t): = \sum_{i\in {\Lambda}_N} {\ensuremath{\mathcal A}}_i (t)$$ where ${\ensuremath{\mathcal A}}_i(t)$ is the random variable which corresponds to the number of configuration changes on site $i$ during the time interval $[0,t]$. It is easy to verify that ${\ensuremath{\mathcal A}}_i(t)-\int_0^t c_i(\eta(s))ds$ is a martingale and therefore ${\ensuremath{\mathcal A}}(t)$ satisfies a law of large numbers with $$\lim_{N\to\infty}\lim_{t\to\infty}\frac{{\ensuremath{\mathcal A}}(t)}{Nt} =\mathbb A$$ where $\mathbb A$, which will be referred to in the following as the mean instantaneous activity, is defined as $$\label{bbA} {{\ensuremath{\mathbb A}} }:=\nu(c_j(\eta)), \qquad j\in \Lambda_{N}\setminus \{1,N\}.$$ Note that the definition is well posed since by translation invariance $\nu(c_j(\eta))=\nu(c_{j'}(\eta))$ for $j,j'\in[2,N-1]$. For East, we get $\mathbb A=2\rho(1-\rho)^2$, for FA-1f instead $\mathbb A=2\rho(1-\rho)(1-\rho^2)$. Here we will be interested in the study of the generating function which controls the fluctuations of ${\ensuremath{\mathcal A}}(t)$ for a given $N$ $$\label{eq:free} {\varphi}^{(N)}({\lambda}) = \lim_{t\to\infty} \frac{1}{t} \log {\langle}\exp( {\lambda}{\ensuremath{\mathcal A}}(t) ) {\rangle}$$ where here and in the following ${\langle}\cdot{\rangle}$ denotes the mean over the evolution of the process and over the initial configuration, which is distributed with the equilibrium Bernoulli measure $\nu$ at density $\rho$ (where the density $\rho$ is fixed by the rates \[ci\]).
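To make the dynamics and the observable ${\ensuremath{\mathcal A}}(t)$ concrete, here is a minimal simulation sketch (illustrative only, not part of the original argument): it implements the East rates \[ceast\] with an empty right boundary and counts only the clock rings that actually change a spin, so that for large $N$ and $t$ the empirical ratio ${\ensuremath{\mathcal A}}(t)/(Nt)$ should approach $\mathbb A=2\rho(1-\rho)^2$ up to boundary corrections of order $1/N$. All function and variable names are ours and purely illustrative.

```python
import numpy as np

def simulate_east_activity(N=200, rho=0.5, t_max=500.0, seed=0):
    """Sketch of the 1d East model with an empty right boundary.

    Each site carries a rate-one exponential clock; when it rings and the
    constraint is satisfied (right neighbour empty, or the site is the
    rightmost one), the spin is refreshed to 1 with probability rho and to
    0 otherwise.  Only rings that change the spin are counted, so the
    counter is the total activity A(t) of the trajectory.
    """
    rng = np.random.default_rng(seed)
    eta = (rng.random(N) < rho).astype(int)   # initial configuration ~ Bernoulli(rho)
    t, activity = 0.0, 0
    while t < t_max:
        t += rng.exponential(1.0 / N)          # next ring among the N clocks
        i = rng.integers(N)                    # the clock that rings, chosen uniformly
        if i == N - 1 or eta[i + 1] == 0:      # East constraint with empty right boundary
            new_value = int(rng.random() < rho)
            if new_value != eta[i]:
                eta[i] = new_value
                activity += 1
    return activity / (N * t)

if __name__ == "__main__":
    rho = 0.5
    print("empirical A(t)/(Nt):", simulate_east_activity(rho=rho))
    print("mean activity 2*rho*(1-rho)^2:", 2 * rho * (1 - rho) ** 2)
```

The same skeleton applies to FA-1f by replacing the constraint test with the requirement that at least one of the two neighbours (or the appropriate empty boundary site) is empty.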
With a slight abuse of notation for any event ${\ensuremath{\mathcal E}}$ we will also denote by ${\langle}{\ensuremath{\mathcal E}}{\rangle}$ the probability of ${\ensuremath{\mathcal E}}$, namely we set ${\langle}{\ensuremath{\mathcal E}}{\rangle}:={\langle}\mathds{1}_{{\ensuremath{\mathcal E}}}{\rangle}$. The main result of this paper is that in the scaling ${\lambda}= \alpha/N$ a phase transition occurs for this generating function. More precisely if we define $$\label{rescaled} {\varphi}({\alpha}): = \limsup_{N\to\infty} {\varphi}^{(N)}\left(\frac{{\alpha}}{N}\right)$$ then the following holds: \[teo:phasetrans\] Consider East or FA-1f model in $d=1$ at any $\rho\in(0,1)$. There exists $\alpha_1<\alpha_0<0$ and a constant ${\Sigma}> 0$ such that - for $\alpha>\alpha_0$ it holds ${\varphi}\left({\alpha}\right) = {{\ensuremath{\mathbb A}} }{\alpha}$; - for $\alpha<\alpha_1$ it holds ${\varphi}({\alpha})= - {\Sigma}$.\ As a consequence of this theorem, estimates on the large deviations for a reduced activity can be obtained. \[fluctu1\] Consider East or FA-1f model in $d=1$ at any $\rho\in(0,1)$. For any $u \in[0,1)$ it holds $$\begin{aligned} \label{eqfluc} -{\Sigma}(1-u) &{\;\leqslant\;}& \liminf_{N\to\infty} \lim_{t\to\infty}\frac{1}{t} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt} \simeq u {{\ensuremath{\mathbb A}} }\right {\rangle}\\ &{\;\leqslant\;}& \limsup_{N\to\infty} \lim_{t\to\infty}\frac{1}{t} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt} \simeq u {{\ensuremath{\mathbb A}} }\right {\rangle}{\;\leqslant\;}\alpha_0 {{\ensuremath{\mathbb A}} }(1-u) \, . \nonumber\end{aligned}$$ We conjecture that in the $\limsup_N$ can be replaced by $\lim_N$. In the regime $\alpha>\alpha_0$ and $\alpha<\alpha_1$, this follows from the proof of Theorem \[teo:phasetrans\]. Theorem \[teo:phasetrans\] (i) and (ii) will be proved in Section \[linearity\] and \[largealpha\] respectively. Theorem \[fluctu1\] will be proven in Section \[new\]. We also analyze the measure on the space-time configurations defined as $$\begin{aligned} \label{eq: measure} \mu_{\alpha,T}^{N} = \frac{{\left{\langle}\cdot \; \exp( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) ) \right{\rangle}}}{{\left{\langle}\exp( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) ) \right{\rangle}}} \end{aligned}$$ and Theorem \[teo:condmes\] states that depending on the value of $\alpha$ this measure has very different typical configurations. For any configuration $\eta\in\Omega_{N}$, we call $\{\eta(s)\}_{s{\;\geqslant\;}0}$ the trajectory of the Markov process generated by ${\ensuremath{\mathcal L}}_N$ starting at time zero from $\eta$. Then the following holds \[teo:condmes\] Consider East model and FA-1f model in $d=1$ with $\rho\in(0,1)$. 
Then there exists $\alpha_1<\alpha_0<0$ and a sequence $\gamma_N$ with $\lim_{N\to\infty} \gamma_N = 0$ such that - for $\alpha>\alpha_0$ $$\begin{aligned} \label{eq: densite haute tris} \lim_{N\to\infty}\lim_{T \to \infty} \mu_{\alpha,T}^{N}\left({\int_0^T dt \, |\sum_{i \in {\Lambda}_N } \,\eta_i(t)-N\rho| {\;\leqslant\;}\gamma_N N T}\right) = 1 \, ;\end{aligned}$$ $$\begin{aligned} \label{eq: densite haute tqua} \lim_{N\to\infty}\lim_{T \to \infty} \mu_{\alpha,T}^{N}\left({\int_0^T dt \, |\sum_{i \in {\Lambda}_N } \,c_i(\eta(t))-N{{\ensuremath{\mathbb A}} }| {\;\leqslant\;}\gamma_N N T}\right) = 1 \, .\end{aligned}$$ - if $\alpha<\alpha_1$ $$\begin{aligned} \label{eq: densite haute tris} \lim_{N\to\infty}\lim_{T \to \infty} \mu_{\alpha,T}^{N}\left({\int_0^T dt \, \sum_{i \in {\Lambda}_N } \,\eta_i(t) {\;\geqslant\;}(1-\gamma_N) N T} \right)= 1 \, ;\end{aligned}$$ $$\begin{aligned} \label{eq: densite haute tris} \lim_{N\to\infty}\lim_{T \to \infty} \mu_{\alpha,T}^{N}\left({\int_0^T dt \, \sum_{i \in {\Lambda}_N } \,c_i(\eta(t)) {\;\leqslant\;}\gamma_N N T}\right) = 1 \, .\end{aligned}$$ Theorem \[teo:condmes\](i) and (ii) will be proven in Section \[condmes1\] and \[condmes2\] respectively where stronger results (Lemma \[usolemma\] and \[usolemma2\]) concerning the concentration of the number of particles and the activity on mesoscopic boxes (and not on the whole volume) will also be established. Heuristics of the phase transition and open problems {#heuristics} ---------------------------------------------------- As we already mentioned in the introduction, the occurrence of a phase transition for the activity large deviations was first discovered in [@GJLPDW2], where it was shown that $$\begin{aligned} \label{eq: psi} {\lambda}\in {{\ensuremath{\mathbb R}} }, \qquad \psi({\lambda}) = \lim_{N \to \infty} \lim_{t \to \infty} \; \frac{1}{N t} \log \left {\langle}\exp \big( {\lambda}{\ensuremath{\mathcal A}}(t) \big) \right {\rangle}\end{aligned}$$ has a critical value at ${\lambda}=0 $ (see the left part of figure \[fig: courbes\]). We recall below the mechanism of this phase transition in the case of the one-dimensional East model (the case of FA-1f model is analogous). When ${\lambda}>0$, the activity is increased and the large deviation functional is expected to be smooth. Negative values of ${\lambda}$ lead to a decay of the activity and the constraint will play a crucial role. To have no activity, a possible strategy is to start at time $t = 0$ from a configuration totally filled ($\eta_i = 1$ for all $i$) and to remain in this configuration at any time. This can be achieved by preventing $\eta_N$ from flipping to 0, indeed the site $N$ is the only site allowed to flip due to the constraints and if it is maintained equal to $1$ then the rest of the configuration is blocked. This leads to the lower bound $$\begin{aligned} \label{eq: crude LB} \left {\langle}{\ensuremath{\mathcal A}}(t)= 0 \right {\rangle}{\;\geqslant\;}{\rho}^N \exp( -(1-{\rho}) t) \, ,\end{aligned}$$ where ${\rho}^N$ stands for the cost of the initial configuration and the last term is the probability that a Poisson process of intensity $1-{\rho}$ has no jump up to time $t$. After rescaling , this shows that $\psi({\lambda}) {\;\geqslant\;}0$ for ${\lambda}<0$ and since $\psi(0)=0$ and $\psi$ is increasing in $\lambda$ it follows that $\psi(\lambda)=0$ for $\lambda{\;\leqslant\;}0$. 
On the other hand by convexity $\psi(\lambda){\;\geqslant\;}\lambda\mathbb A$ and therefore at $\lambda=0$ the first order derivative of $\psi$ has a jump. As noted in [@GJLPDW1; @GJLPDW2], it is remarkable that the phase transition occurs at ${\lambda}= 0$ which corresponds to the unperturbed dynamics. Thus one may wonder if the singularity of the large deviation functional would lead to specific properties of the constrained systems. In this paper, we investigate the finite size scaling of this first order phase transition through the function ${\varphi}({\alpha})$ introduced above. This refined thermodynamic scaling corresponds to a blow up of the region ${\lambda}= 0$. The results of Theorem \[teo:phasetrans\] are depicted in the right part of figure \[fig: courbes\]. In a range of ${\lambda}$ of order $1/N$, we have shown that the transition is shifted from 0. [Figure \[fig: courbes\]: left, the generating function $\psi({\lambda})$ as a function of ${\lambda}$; right, the rescaled function ${\varphi}({\alpha})$ as a function of ${\alpha}$, with the critical values ${\alpha}_1$, ${\alpha}_0$ and the plateau at $-{\Sigma}$.] To understand this, we first suppose that no transition takes place at 0 and that $\psi$ could be expanded (analytically). If this were the case then we would expect that for large $N$ $$\begin{aligned} \label{eq: expansion} {\varphi}({\alpha}) \simeq N \psi ( \frac{{\alpha}}{N}) = N \left[ \psi (0) + \psi' (0) \frac{{\alpha}}{N} + {\small O \left( \frac{1}{N^2} \right)} \right] \, = {{\ensuremath{\mathbb A}} }{\alpha}+ {\small O \left( \frac{1}{N}\right)} \, ,\end{aligned}$$ where we used that $\psi (0) = 0$ and $\psi' (0)$ is equal to the mean activity ${{\ensuremath{\mathbb A}} }$ (in fact $\psi$ is not differentiable at 0, but its right-derivative is equal to ${{\ensuremath{\mathbb A}} }$). Part (i) of Theorem \[teo:phasetrans\] shows that this behavior persists for negative values of ${\alpha}$ provided ${\alpha}> {\alpha}_0$. In this regime, we expect that the total activity is shifted from its mean value ${{\ensuremath{\mathbb A}} }N t$ by an order ${\alpha}\, t \, \psi''(0)$ which does not scale with $N$. For such small shifts of the activity, the system remains very close to its typical state (when $N$ is large). In particular, Theorem \[teo:condmes\] asserts that the mean density is very concentrated close to its equilibrium value ${\rho}$. A phase transition occurs for smaller values of ${\alpha}$ and ${\varphi}$ becomes equal to the constant $- {\Sigma}> - (1-{\rho})$. The estimate above leads only to the lower bound $- (1-{\rho})$ and thus it is too crude to justify the claimed behavior. Indeed for ${\lambda}= \frac{{\alpha}}{N}$, an activity of order $o(N)$ will not contribute to the scaling limit, thus it is more favorable to leave a small portion of the system active as depicted in figure \[fig: interface\] instead of forcing the whole configuration to remain totally filled. [Figure \[fig: interface\]: a configuration which is inactive (completely filled) except on an active region of size $o(N)$.] By analogy with equilibrium, one can interpret ${\Sigma}$ as a surface tension between the inactive and the active region (per unit of time) $$\begin{aligned} \label{eq: refine LB} \left {\langle}\frac{1}{N t} {\ensuremath{\mathcal A}}(t) \approx 0 \right {\rangle}\simeq \exp( - {\Sigma}t) \, ,\end{aligned}$$ where $\frac{1}{N t} {\ensuremath{\mathcal A}}(t) \approx 0$ means that the rescaled activity is close to 0 in the thermodynamic limit. Contrary to the previous strategy, the interface between the inactive and the active region is now allowed to fluctuate and the probabilistic cost is lowered.
The surface tension ${\Sigma}$ can be obtained from a variational problem which is specified for FA-1f with two empty boundaries in Section \[largealpha\] and for East and FA-1f with one empty boundary in Section \[largealpha2\]. However, our results do not provide a complete description of the system and it remains to prove that the typical configurations look like figure \[fig: interface\]. Nevertheless, Theorem \[teo:condmes\] ensures that in the inactive regime almost all the sites are equal to 1. This confirms the conjectured picture. Our results (Theorems \[teo:phasetrans\] and \[teo:condmes\]) do not provide the entire phase diagram for the generating function ${\varphi}$ (only for ${\alpha}\not \in [{\alpha}_1,{\alpha}_0]$). However, this is enough to deduce (see Theorem \[fluctu1\]) the correct order of the scaling for the large deviations of the activity below the mean value $$\forall u \in [0,1], \qquad -\Sigma(1-u) {\;\leqslant\;}\lim_{N\to\infty} \lim_{t\to\infty}\frac{1}{t}\log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt}\simeq u {{\ensuremath{\mathbb A}} }\right {\rangle}< {\alpha}_0 {{\ensuremath{\mathbb A}} }(1-u) \, .$$ This scaling is anomalous compared to the extensive scaling in $N$ of the unconstrained models. We conjecture that there is a unique critical value ${\alpha}_c$ and that the two regimes remain valid up to ${\alpha}_c$ as depicted in figure \[fig: courbes\], namely ${\varphi}=-\Sigma$ for $\alpha{\;\leqslant\;}\alpha_c$ and ${\varphi}=\alpha{{\ensuremath{\mathbb A}} }$ for $\alpha{\;\geqslant\;}\alpha_c$. This would imply $${\alpha}_c = - \frac{{\Sigma}}{{{\ensuremath{\mathbb A}} }} \, .$$ This conjecture is supported by numerical simulations and we refer to [@BLT] for an account of these numerical results. If this conjecture is verified, then Theorem \[fluctu1\] can be improved and the large deviations for reducing the activity would be given by $$\forall u \in [0,1], \qquad \lim_{N\to\infty} \lim_{t\to\infty}\frac{1}{t}\log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt}\simeq u {{\ensuremath{\mathbb A}} }\right {\rangle}=-\Sigma(1-u) \, .$$ [Figure \[fig: courbes ising\]: left, the Ising magnetization $m(h)$ as a function of $h$; right, the finite volume magnetization $m_N({\alpha}/N)$ as a function of ${\alpha}$, with a jump at the critical value ${\alpha}_c$.] In [@GJLPDW1; @GJLPDW2], it was suggested that the large deviation approach could provide a natural way to define a dynamical free energy characterizing glassiness. It is currently an open question to understand if (and how) the dynamical phase transition in $\psi({\lambda})$ can lead to quantitative predictions on the model at ${\lambda}= 0$. Equilibrium statistical mechanics could serve as a guide to clarify this issue. Indeed a phenomenon similar to the one depicted above for East and FA-1f occurs in the finite size scaling of the ferromagnetic Ising model. For an Ising model in the phase transition regime ($T< T_c$), a first order phase transition occurs in the magnetic field $h$ and the magnetization $m(h)$ is discontinuous at $h=0$ (see figure \[fig: courbes ising\]). On a finite domain, say a square of size $N$, with external boundary conditions $+$, the magnetization $m_N(h)$ is continuous and approaches the graph of $m(h)$. A finite size scaling [@SS] shows that up to rescaling $h = {\alpha}/N$, the magnetization $m_N(\frac{{\alpha}}{N})$ converges to a step function with a jump at a critical value ${\alpha}_c \not = 0$ (see figure \[fig: courbes ising\]).
The shift of the transition is reminiscent of the shift for the constrained models and it can be understood as follows. For ${\alpha}\in [{\alpha}_c, 0]$, the magnetization is slightly lowered but remains close to the magnetization $m^* = \lim_{h \to 0^+} m(h)$ imposed by the $+$ boundary conditions, then for ${\alpha}< {\alpha}_c$ the negative magnetic field forces a droplet of the $-$ phase which fills the system. The creation of this droplet has a cost of surface order $\tau N$ (where $\tau$ is the surface tension), but leads to an energy gain $- 2 m^* h N^2$. Thus, the critical value is obtained for $$- 2 m^* h N^2 = \tau N \quad \Rightarrow \quad {\alpha}_c = - \frac{\tau}{2 m^*} \, .$$ In this analogy, the magnetization plays a role similar to the activity and $h, {\lambda}$ are the conjugate parameters. Even though a first order phase transition occurs for the Ising model at $h=0$, it is known that the $+$ pure phase, i.e. the Gibbs measure obtained from the $+$ boundary conditions in the thermodynamic limit, is well behaved and that the cumulants of the magnetization can be obtained by taking the successive derivatives of the pressure for $h \to 0^+$. As ${\lambda}$ has no physical meaning (contrary to $h$), it is not clear from the mere knowledge of the first order phase transition how to deduce precise information on the constrained dynamics at ${\lambda}=0$. East and FA-1f in $d{\;\geqslant\;}2$ ------------------------------------- In the previous sections we have considered the one dimensional East and FA-1f models. Both models can be extended to higher dimensions in a very natural way. Let us set some notation. Let $d>1$, then $\Lambda_{N}^d:=[1,N]^d$ and $\Omega_N^d:=\{0,1\}^{\Lambda_{N}^d}$ and let $\vec e_j$ with $j\in[1,d]$ be the Euclidean basis vectors. The East and FA-1f models in dimension $d$ at density $\rho\in(0,1)$ are Glauber type Markov processes with generator ${\ensuremath{\mathcal L}}_N^d$ which acts on $f:\Omega_N^d\to{{\ensuremath{\mathbb R}} }$ exactly as in , with the sum over $i$ now running over $i\in \Lambda_N^d$, with $c_i$ defined again as in and $r_i$ defined for the East model as $$\label{ceastd} r_i(\eta)=1-\eta_{i+\vec e_1} \qquad {\mbox{~if~}} i\cdot\vec e_1\in [1,N-1];\qquad r_i(\eta)=1 {\mbox{~otherwise~}}$$ and for the FA-1f model with completely empty boundary conditions as $$\label{cFA-1fd} r_i(\eta)=1-\prod_{j=1}^d\eta_{i+\vec e_j} \eta_{i-\vec e_j} \qquad {\mbox{~if~}} i\cdot\vec e_j\in [2,N-1] \,\,\,\,\,\forall j\in [1,d];\qquad r_i(\eta)=1 {\mbox{~otherwise~}}.$$ As in the one dimensional case, the above choice of the boundary condition is the only ergodic choice for the East constraints, while in the FA-1f case any choice with at least one empty boundary site is ergodic. We will be interested here in all the choices of the boundary conditions which correspond to requiring a completely empty hyperplane of linear size $N$ and dimension $c$ with $c\in[0,d-1]$ ($c=0$ corresponds to the choice of a single empty boundary site). In short we will say that we consider a boundary condition of dimension $c$ in this case (note that the only ergodic choice for East corresponds to a particular boundary condition of dimension $d-1$). Note that as in the one dimensional case both dynamics satisfy detailed balance with respect to the Bernoulli product measure $\nu$ at density $\rho$, namely $\nu(\eta):=\prod_{i\in\Lambda_N^d}\nu_i(\eta_i)$ with $\nu_i(1)=\rho$.
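For concreteness, the constraints \[ceastd\] and \[cFA-1fd\] can be evaluated as in the following sketch (illustrative only; $0$-indexed arrays on a cubic box, with the FA-1f version assuming the completely empty boundary condition). The functions return the indicator $r_i$ which multiplies the refresh rates in $c_i$; all names are ours.

```python
import numpy as np

def east_constraint(eta, i):
    """r_i for the d-dimensional East model: the constraint only involves the
    neighbour in direction e_1, and sites on the right face are unconstrained."""
    i = tuple(i)
    N = eta.shape[0]
    if i[0] == N - 1:                  # right face in direction e_1
        return 1
    return 1 - int(eta[(i[0] + 1,) + i[1:]])

def fa1f_constraint(eta, i):
    """r_i for the d-dimensional FA-1f model with completely empty boundary:
    at least one nearest neighbour (boundary sites count as empty) must be empty."""
    i = np.asarray(i)
    N = eta.shape[0]
    product = 1
    for j in range(eta.ndim):
        for s in (+1, -1):
            n = i.copy()
            n[j] += s
            if n[j] < 0 or n[j] >= N:  # neighbour lies on the (empty) boundary
                return 1
            product *= int(eta[tuple(n)])
    return 1 - product
```

A boundary condition of dimension $c$ would only change which of the out-of-box neighbours are treated as empty.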
Let ${\ensuremath{\mathcal A}}(t)$ be defined as in where now the sum runs over all sites inside $\Lambda_N^d$. As for the one dimensional case it is immediate to verify that ${\ensuremath{\mathcal A}}(t)$ satisfies the law of large numbers $$\lim_{t\to\infty}\lim_{N\to\infty}\frac{{\ensuremath{\mathcal A}}(t)}{N^d t}={{\ensuremath{\mathbb A}} }$$ with ${{\ensuremath{\mathbb A}} }$ defined as in for $j\in\Lambda_N^d$ with $j\cdot\vec e_k \in[2,N-1]$ for all $k\in [1,d]$. Note that ${{\ensuremath{\mathbb A}} }$ coincides with the one for the corresponding one dimensional model at the same density. Let ${\varphi}^{(N)}$ be defined as in and let the rescaled generating function ${\varphi}_d(\alpha)$ and the measure $\mu_{\alpha,T}^{N,d}$ be defined as $$\label{rescaledd} {\varphi}_d(\alpha):=\limsup_{N\to\infty}\frac{1}{N^{c}}{\varphi}^{(N)}\left(\frac{\alpha}{N^{d-c}}\right)$$ $$\label{mesureh} \mu^{N,d}_{\alpha,T}:=\frac{{\left{\langle}\cdot\exp\left(\frac{\alpha}{N^{d-c}}{\ensuremath{\mathcal A}}(T)\right) \right{\rangle}}}{{\left{\langle}\exp\left(\frac{\alpha}{N^{d-c}}{\ensuremath{\mathcal A}}(T)\right) \right{\rangle}}}.$$ Note that if we set $d=1$ and $c=0$ the above definitions are compatible with the definitions used in Section \[models\] for the one dimensional case, namely ${\varphi}_1(\alpha)={\varphi}(\alpha)$ and $\mu^{N,1}_{\alpha,T}=\mu^{N}_{\alpha,T}$ with ${\varphi}(\alpha)$ and $\mu_{\alpha,T}^N$ defined by equations and respectively. We stress that the finite size scaling depends on the choice of the boundary conditions. We expect this to be the scaling which leads to a phase transition for the generating function, as in the one dimensional case. This conjecture, as will be further clarified by the proof of Theorem \[poor\], is related to the fact that in order to have no activity for a $d$-dimensional model a possible strategy is to start at time zero from a completely filled configuration and prevent all the $O(N^c)$ sites which are in contact with the empty boundary set from flipping. The $d$-dimensional East model corresponds to $N^{d-1}$ independent one dimensional East models so that ${\varphi}_d(\alpha)={\varphi}_1(\alpha)$ for any $d$. Thus the phase transition results of Theorem \[teo:phasetrans\] hold for ${\varphi}_d$ and Theorem \[teo:condmes\] applies as well in this case. For FA-1f it is not immediate to generalize the one dimensional results since the model cannot be decoupled into independent one dimensional FA-1f models. In this case we prove \[linearityd>1\] Consider FA-1f model in dimension $d{\;\geqslant\;}2$ with boundary condition of dimension $c$ with $c\in[0,d-1]$. There exist $\alpha_0<0$ and a sequence $\gamma_N$ with $\lim_{N\to\infty}\gamma_N=0$ such that for $\alpha>\alpha_0$ $${\varphi}_d(\alpha)={{\ensuremath{\mathbb A}} }\alpha \, ,$$ $$\lim_{N\to\infty}\lim_{T\to\infty}\mu_{\alpha,T}^{N,d}\left(\int_{0}^T dt |\sum_{i\in\Lambda_N^d}\eta_i(t)-N^d\rho|{\;\leqslant\;}\gamma_N N^d T \right)=1\, ,$$ $$\lim_{N\to\infty}\lim_{T\to\infty}\mu_{\alpha,T}^{N,d}\left(\int_{0}^T dt |\sum_{i\in\Lambda_N^d}c_i(\eta(t))-N^d{{\ensuremath{\mathbb A}} }|{\;\leqslant\;}\gamma_N N^d T \right)=1\, .$$ \[fluctu2\] Consider East or FA-1f model in $d{\;\geqslant\;}2$ at any $\rho\in(0,1)$.
For any $u \in[0,1]$ it holds $$\begin{aligned} \label{eqfluc2} -(1-\rho)(1-u) &{\;\leqslant\;}& \liminf_{N\to\infty} \lim_{t\to\infty} \; \frac{1}{t N^c} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{N^dt}\simeq u {{\ensuremath{\mathbb A}} }\right {\rangle}\\ &{\;\leqslant\;}& \limsup_{N\to\infty} \lim_{t\to\infty} \; \frac{1}{t N^c} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{N^dt}\simeq u{{\ensuremath{\mathbb A}} }\right {\rangle}{\;\leqslant\;}\alpha_0 {{\ensuremath{\mathbb A}} }(1-u) \, , \nonumber\end{aligned}$$ where ${\alpha}_0<0$ was introduced in Theorem \[linearityd>1\]. These results correspond to those of Theorems \[teo:phasetrans\](i), \[teo:condmes\](i) and \[fluctu1\] in the one dimensional case. For the large negative $\alpha$ regime, the result for dimension larger than one is much less precise. \[poor\] Consider FA-1f model in dimension $d{\;\geqslant\;}2$ with boundary condition of dimension $c\in[0,d-1]$. For any $\delta>0$ there exists $\alpha_1(\delta)<0$ such that $$\lim_{N\to\infty}\lim_{T\to\infty}\mu_{\alpha,T}^{N,d}\left(\int_{0}^T dt \sum_{i\in\Lambda_N^d}\eta_i(t){\;\geqslant\;}(1-\delta) N^d T \right)=1 \, .$$ Theorems \[linearityd>1\] and \[poor\] will be proven in Sections \[genesmall\] and \[secpoor\] respectively. Preliminary results =================== In Section \[var\] we recall some basic tools from the Donsker-Varadhan large deviation theory and we establish a variational formula for ${\varphi}^{(N)}(\lambda)$. We underline that this formula is valid in any dimension. Finally, in Section \[subsec: sg\] we recall a result on the spectral gap of the generator for KCM which will be used in some of our proofs. Donsker-Varadhan theory {#var} ----------------------- The Donsker-Varadhan theory for large deviations [@DZ] will be a basic tool to derive our results. Fix the dimension $d$ and let $\mathcal{D}_N$ be the Dirichlet form corresponding to the generator ${\ensuremath{\mathcal L}}^d_N$, defined on any function $g$ as $$\label{dirichlet} {\ensuremath{\mathcal D}}_N({g}) := - \nu ( g \, {\ensuremath{\mathcal L}}_N^d g).$$ For future use we note that the Dirichlet form can be rewritten by using the definition as $$\begin{aligned} \label{eq: dirichlet} {\ensuremath{\mathcal D}}_N({g}) = \sum_{i\in {\Lambda}_N^d} \nu \left( c_i (\eta) \big( {g(\eta^i)} - {g(\eta)} \big)^2 \right)=\sum_{i\in {\Lambda}_N^d} \nu \left( r_i(\eta){\mbox{Var}}_i(g)\right)\end{aligned}$$ with $\mbox{Var}_i(g)=\nu_i\big((g-\nu_i(g))^2\big)$. For any function $V:\Omega_N^d\to \mathbb{R}$, we define the time average of $V$ over the process as $$\pi_t(V):=\frac{1}{t}\int_0^t V(\eta(s))ds$$ where $\eta(s)$ is a trajectory of the Markov process starting at time zero from $\eta$. The dynamics is reversible with respect to the measure $\nu$, thus the Donsker-Varadhan theory asserts that for any $\gamma\in\mathbb{R}$ $$\label{dv} \lim_{t\to\infty} \; \frac{1}{t} \log \left {\langle}\exp \big(\gamma \, t\pi_t(V) \big) \right {\rangle}= \sup_f \Big \{ \gamma\nu(fV) - {\ensuremath{\mathcal D}}_N (\sqrt f) \Big\} \, ,$$ where we recall that ${\langle}\cdot {\rangle}$ is the mean over the evolution of the process and over the initial configuration, which is distributed with the equilibrium Bernoulli measure $\nu$, and the supremum is over the positive functions $f$ which satisfy $\nu(f)=1$. Note that the r.h.s. of corresponds to the largest eigenvalue of the modified operator $\mathcal{L} + \gamma V$.
Furthermore, if one defines the empirical measure in $\Omega_N^d$ by setting for any $A\subset\Omega_N^d$ and any $t {\;\geqslant\;}0$ $$\label{eq: emprirical measure} \pi_t(A) = \frac{1}{t} \int_0^t ds \; \mathds{1}_{A} (\eta(s)),$$ Donsker-Varadhan theory establishes that the large deviation functional of the empirical measure is the Dirichlet form ${\ensuremath{\mathcal D}}_N$. Thus if we let $\psi$ be any function from $\Omega_N^d\to \mathbb R$, for any $[a,b]\subset \mathbb R$ it holds $$\lim_{t\to\infty} \frac{1}{t}\log \left {\langle}\frac{1}{t} \int_0^t ds \psi(\eta(s))\in[a,b] \right {\rangle}= \lim_{t\to\infty} \frac{1}{t}\log {\langle}\pi_t(\psi)\in[a,b] {\rangle}= -\inf_{g: \nu(g)=1,~~ g{\;\geqslant\;}0\atop \nu(g \psi ) \in [a,b] } {\ensuremath{\mathcal D}}_N ({\sqrt g}) \, . \label{DV2}$$ For any $\lambda\in{{\ensuremath{\mathbb R}} }$, we consider the modified dynamics obtained by rescaling the time by $\exp(\lambda)$. The generator reads $$\begin{aligned} {\ensuremath{\mathcal L}}_{N,{\lambda}} f (\eta) = \sum_{i\in {\Lambda}_N^d} \exp( {\lambda}) c_i (\eta) \big( f(\eta^i) - f (\eta) \big) = \exp( {\lambda}) {\ensuremath{\mathcal L}}_N f (\eta),\end{aligned}$$ which is again reversible with respect to $\nu$. Then by evaluating the Radon-Nikodym derivative $d {{\ensuremath{\mathbb P}} }_t/d {{\ensuremath{\mathbb P}} }_t^{\lambda}$, where ${{\ensuremath{\mathbb P}} }_t$ and ${{\ensuremath{\mathbb P}} }_t^{\lambda}$ denote respectively the probability of the trajectory up to time $t$ for the process evolving under ${\ensuremath{\mathcal L}}_N$ and ${\ensuremath{\mathcal L}}_{N,{\lambda}}$, we obtain $$\begin{aligned} \label{eq: Radon Nykodim} \frac{d\mathbb P_t}{d\mathbb P_t^{\lambda}}= \exp\left(- {\lambda}{\ensuremath{\mathcal A}}(t) + (\exp( {\lambda}) - 1) \int_0^t \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right),\end{aligned}$$ with $$\begin{aligned} \label{eq: H} {\ensuremath{\mathcal H}}(\eta) = \sum_{i\in {\Lambda}_N^d} c_i (\eta) \, ,\end{aligned}$$ where we recall that ${\ensuremath{\mathcal A}}(t)$ is the total activity up to time $t$ and $c_i$ has been defined in . This implies in particular for any function $\psi(t):(\eta_{s})_{s{\;\leqslant\;}t}\to \mathbb R$ $$\label{cm} \left {\langle}\psi(t)\exp( {\lambda}{\ensuremath{\mathcal A}}(t)) \right {\rangle}= \left {\langle}\psi(t)\exp \left( (\exp( {\lambda}) - 1) \int_0^t \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right {\rangle}_{\lambda}$$ where here and in the following $< \cdot >_{\lambda}$ is the expectation over the modified dynamics, i.e. the mean over the initial configuration distributed with $\nu$ and over the evolution of the process with generator ${\ensuremath{\mathcal L}}_{N, {\lambda}}$. Note that this identity, together with the definition of ${\varphi}^{(N)}$ and the Donsker-Varadhan formula (${\ensuremath{\mathcal L}}_{N, {\lambda}}$ is reversible with respect to $\nu$, thus the theory applies), leads to the following variational formula $$\begin{aligned} {\varphi}^{(N)} ({\lambda}) &=& \sup_f \left\{ (\exp( {\lambda}) - 1) \nu (f \, {\ensuremath{\mathcal H}}) - \exp({\lambda}) {\ensuremath{\mathcal D}}_N(\sqrt{f}) \right\} \label{eq: exact}\end{aligned}$$ where the supremum is taken over the positive functions $f:\Omega_N^d \to{{\ensuremath{\mathbb R}} }$ such that $\nu(f) = 1$.
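For very small systems the formula above can be checked directly: it is the Rayleigh quotient characterisation of the largest eigenvalue of the tilted generator whose off-diagonal entries are the jump rates multiplied by $e^{\lambda}$ and whose diagonal entries are $-{\ensuremath{\mathcal H}}(\eta)$, which is the standard operator representation of the generating function of a jump-counting observable. The sketch below (illustrative only, not part of the original argument, and with cost exponential in $N$) diagonalises this operator for the one-dimensional East model after symmetrising with the reversible measure $\nu$.

```python
import itertools
import numpy as np

def east_rates(eta, rho):
    """c_i(eta) for the 1d East model with empty right boundary."""
    N = len(eta)
    rates = np.zeros(N)
    for i in range(N):
        constraint = 1 if i == N - 1 else 1 - eta[i + 1]
        rates[i] = constraint * ((1 - rho) if eta[i] == 1 else rho)
    return rates

def scgf_exact(N, rho, lam):
    """Largest eigenvalue of the tilted generator: entry exp(lam)*c_i(eta) from
    eta to the flipped configuration eta^i, and -H(eta) on the diagonal."""
    configs = list(itertools.product([0, 1], repeat=N))
    index = {c: k for k, c in enumerate(configs)}
    W = np.zeros((2 ** N, 2 ** N))
    for eta in configs:
        k = index[eta]
        rates = east_rates(eta, rho)
        W[k, k] = -rates.sum()
        for i in range(N):
            if rates[i] > 0:
                flipped = list(eta)
                flipped[i] = 1 - flipped[i]
                W[k, index[tuple(flipped)]] = np.exp(lam) * rates[i]
    # symmetrise with the reversible Bernoulli measure to obtain a real spectrum
    nu = np.array([rho ** sum(c) * (1 - rho) ** (N - sum(c)) for c in configs])
    S = np.diag(np.sqrt(nu)) @ W @ np.diag(1.0 / np.sqrt(nu))
    return np.linalg.eigvalsh((S + S.T) / 2).max()

if __name__ == "__main__":
    # phi^(N)(0) = 0 exactly; for alpha > alpha_0 one expects
    # N * phi^(N)(alpha/N) to be close to A * alpha for large N.
    print(scgf_exact(N=8, rho=0.5, lam=0.0))
    print(8 * scgf_exact(N=8, rho=0.5, lam=-0.5 / 8))
```

Such exact diagonalisations are of course limited to very small $N$ and are only meant as a numerical sanity check of the variational formula.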
Positivity of the spectral gap {#subsec: sg} ------------------------------ Finally, another tool which we will use is the knowledge of a positive lower bound, uniform in $N$, for the spectral gap of ${\ensuremath{\mathcal L}}_{N}$, which is defined as $$\label{defgap} {\mbox{gap}}({\ensuremath{\mathcal L}}_N):=\inf_{f, f\neq const} \frac{{\ensuremath{\mathcal D}}_N(f)}{\mbox{Var}(f)},$$ where $\mbox{Var}(f)$ is the variance w.r.t. the invariant Bernoulli measure $\nu$ on $\Lambda_{N}$ and the minimization is over the functions $f:\Omega_N\to{{\ensuremath{\mathbb R}} }$ which are not constant. The following holds \[teogap\] Consider East or FA-1f model in any dimension and with any choice of the boundary condition which guarantees ergodicity. For any $\rho\in(0,1)$ there exists $S_{\rho}>0$ such that $$\inf_{N}{\mbox{gap}}({{\ensuremath{\mathcal L}}}_N)>S_{\rho} \, .$$ From this result and the definition it follows immediately that $$\label{spgapineq} {\ensuremath{\mathcal D}}_N(f){\;\geqslant\;}S_{\rho}{\mbox{Var}}(f).$$ Let ${\ensuremath{\mathcal G}}$ be a generic connected subset of ${{\ensuremath{\mathbb Z}} }^d$ and consider FA-1f with a single empty site at the boundary on ${\ensuremath{\mathcal G}}$. We call ${\ensuremath{\mathcal L}}_{{\ensuremath{\mathcal G}}}$ the generator and $\mbox{gap}({\ensuremath{\mathcal L}}_{{\ensuremath{\mathcal G}}})$ the corresponding spectral gap defined as in with ${\ensuremath{\mathcal D}}_{N}$ substituted by ${\ensuremath{\mathcal D}}_{{\ensuremath{\mathcal G}}}(f):=-\nu(f\,{\ensuremath{\mathcal L}}_{{\ensuremath{\mathcal G}}}f).$ Then $${\mbox{gap}}({{\ensuremath{\mathcal L}}}_{{\ensuremath{\mathcal G}}})>S_{\rho} \, .$$ The result for the East model has been derived in [@AD] and subsequently proven in [@CMRT1] for a larger class of constraints including FA-1f in any dimension with completely empty boundary conditions. Actually, the latter result could be easily derived from the result in [@AD] without using the technique of [@CMRT1]. Instead, the result for FA-1f with generic boundary condition and on a generic graph has been derived in [@CMRT2]. Local equilibrium {#zeroblocks} ================= East and FA-1f in $d=1$ {#zeroblocks_eastFA-1f} ----------------------- Throughout this section, we consider the East and FA-1f models in one dimension with mean density $\rho\in(0,1)$. Let us start by defining the coarse grained activity. Let $K$ be such that $N/K$ is an integer and partition $\Lambda_N$ into boxes $B_{i}$ with $i\in[1,N/K]$ of size $K$, namely $B_i=[(i-1)K+1,iK]$. We define the activity in the interior of $B_i$ as $$\label{defact} \mathcal{H}_i(\eta)=\sum_{j\in \widetilde B_i}c_j(\eta),$$ where $\widetilde B_i\subset B_i$ are the sites such that the corresponding constraints depend only on the configuration inside $B_i$, namely $\widetilde B_i:=B_i\setminus \{iK\}$ for East and $\widetilde B_i:=B_i\setminus \{(i-1)K+1,iK\}$ for FA-1f.
We also define the coarse grained density as $${\mathcal{R}}_i(\eta)=\sum_{j\in B_i}\frac{\eta_j}{K}.$$ Fix $\epsilon>0$; we define the [*activity-density labels*]{} associated to a configuration $\eta$ as $$\begin{aligned} \label{activity densitylabels} u_{K,{\varepsilon}}^i (\eta) = \begin{cases} 1\qquad & {\rm if} \quad \eta_j= 1\qquad\forall j\in B_i \, ,\\ -1 \qquad & {\rm if} \quad |\mathcal{H}_{i} (\eta) - {{\ensuremath{\mathbb A}} }{|\widetilde B_i|} \; | {\;\leqslant\;}{\varepsilon}K {\mbox{ and }} |\mathcal{R}_{i} (\eta) - \rho | {\;\leqslant\;}{\varepsilon}\, ,\\ 0 \qquad & {\rm otherwise} \, \end{cases}\end{aligned}$$ where the mean instantaneous activity ${{\ensuremath{\mathbb A}} }$ has been defined in formula , and in order for the above definition to be well posed we restrict $\epsilon$ to values such that $\rho+\epsilon<1$ and ${{\ensuremath{\mathbb A}} }|\widetilde B_i|-\epsilon K >0$. In the rest of the paper, in any result that uses these labels it is understood that $N$ and $K$ are integers chosen so that $N/K$ is also an integer and that the above restriction on $\epsilon$ is satisfied. The main result of this section is Lemma \[prop: bad blocksnew\], which states that the probability of finding, under the empirical measure $\pi_T$, a density of boxes with activity-density label equal to zero, namely with activity different from the mean activity and from zero and/or particle density different from $\rho$ and from one, is suppressed exponentially in $T$ and in $N/K$. In other words, locally the system is equilibrated either in the completely filled state or in the equilibrium state identified by $\nu$. Instead, Lemma \[lem: variance 0\] guarantees that the probability of finding a density of boxes with activity-density label equal to one, namely completely filled boxes, is suppressed exponentially in $T$ (but not w.r.t. $N/K$). Before stating and proving these results we give some inequalities which immediately follow from the above definition of the labels and which will be used in the subsequent sections. Recall the definition of ${\ensuremath{\mathcal H}}$; then $$\begin{aligned} {\ensuremath{\mathcal H}}(\eta){\;\geqslant\;}\sum_{i=1}^{N/K}{\ensuremath{\mathcal H}}_i(\eta)& {\;\geqslant\;}& \left(K{{\ensuremath{\mathbb A}} }-2{{\ensuremath{\mathbb A}} }-\epsilon K \right)\sum_{i=1}^{N/K} \mathds{1}_{u_{K,\epsilon} ^i = -1}\label{usoH}\end{aligned}$$ $${\ensuremath{\mathcal H}}(\eta){\;\leqslant\;}\sum_{i=1}^{N/K}{\ensuremath{\mathcal H}}_i(\eta)+2\frac{N}{K}{\;\leqslant\;}\sum_{i=1}^{N/K} \mathds{1}_{u_{K,\epsilon} ^i = -1}\left(K{{\ensuremath{\mathbb A}} }-2{{\ensuremath{\mathbb A}} }+\epsilon K \right)+K\sum_{i=1}^{N/K} \mathds{1}_{u_{K,\epsilon} ^i = 0}+2\frac{N}{K}. \label{usoH2}$$ Recall the definition of the empirical measure $\pi_T$ (see equation ). We define the following events. Fix $T>0$, integers $N,K$, $\delta\in[0,1]$ and $\epsilon>0$. Then for $j\in\{-1,0,1\}$ we define the event ${\ensuremath{\mathcal W}}_{j,\delta}$ by requiring a density of at least $\delta$ of activity-density labels equal to $j$.
In formulas ${\ensuremath{\mathcal W}}_{j,\delta}$ is verified iff $$\pi_T\left( \sum_{i =1}^{N/K} \, \mathds{1}_{ \big\{ u_{K,\epsilon} ^i = j \big\} } \right) {\;\geqslant\;}\delta\frac{N}{K} \, .$$ For any integer $\ell$ we also define $V_{j,\ell}$ which is verified iff $$\pi_T\left( \sum_{i =1}^{N/K} \, \mathds{1}_{ \big\{ u_{K,\epsilon} ^i = j \big\}}\right)\in[\ell,\ell+1).$$ \[defW\] We stress that, even if it is not explicited for simplicity of notation, the event ${\ensuremath{\mathcal W}}_{j,\delta}$ and ${\ensuremath{\mathcal V}}_{j,\ell}$ depend on the choice of $T,N,K,\epsilon$. \[prop: bad blocksnew\] There exists $C(\rho),C'(\rho)>0$ such that for any $\delta, \epsilon>0$, any $\lambda\in{{\ensuremath{\mathbb R}} }$ and any integer $N,K$ provided $K>\bar K(\delta,\epsilon)=C'\frac{|\log(\delta)|}{\epsilon ^2}$ it holds $$\begin{aligned} \label{comparinew3} \lim_{T \to \infty} \; \frac{1}{T} \log {\langle}{\ensuremath{\mathcal W}}_{0,\delta}{\rangle}_{\lambda} {\;\leqslant\;}- C\delta^2 \frac{N}{K} \exp(\lambda)\, .\end{aligned}$$ \[lem: variance 0\] For any $\delta>0$, there is $K({\delta})$ such that for any $K {\;\geqslant\;}K({\delta})$, for any $\lambda\in{{\ensuremath{\mathbb R}} }$, any $N{\;\geqslant\;}K$ and any $\epsilon>0$ it holds $$\begin{aligned} \label{eq: scaling: 4.7} \lim_{T\to \infty} \frac{1}{T} \log {\left{\langle}{\ensuremath{\mathcal W}}_{1,\delta} \right{\rangle}}_{\lambda} {\;\leqslant\;}- \frac{S_{\rho}\delta }{4} \exp(\lambda)\, ,\end{aligned}$$ where $S_{\rho}>0$ has been defined in Proposition \[teogap\]. The blocking mechanism described in Section \[heuristics\] implies that $$\begin{aligned} \lim_{T\to \infty} \frac{1}{T} \log {\left{\langle}{\ensuremath{\mathcal W}}_{1,\delta} \right{\rangle}}_{\lambda} {\;\geqslant\;}- (1-\rho) \exp(\lambda)\, ,\end{aligned}$$ thus the scaling in cannot be improved. As a consequence of Lemma \[prop: bad blocksnew\], we will see that \[bad\] There exists $C(\rho)>0$ s.t. for any $\epsilon,\delta>0$, any $\alpha\in{{\ensuremath{\mathbb R}} }$ and any $K{\;\geqslant\;}\bar K$ with $\bar K$ specified in Lemma \[prop: bad blocksnew\] and any $N{\;\geqslant\;}K$ it holds $$\begin{aligned} \label{eq: bad blocks bound} \lim_{T \to \infty} \; \frac{1}{T} \log \mu_{\alpha,T}^{N} \left({\ensuremath{\mathcal W}}_{0,\delta} \right) {\;\leqslant\;}-C \delta^2\frac{N}{K}\exp(\frac{\alpha}{N})+|\alpha|(1+\frac{C}{N})\, .\end{aligned}$$ We start by proving a preliminary result. For $i\in [1,N/K]$, let $\mathcal{Z}_i$ be the event that there exists at least one empty site inside the box $B_i$ and ${\ensuremath{\mathcal E}}_i$ be the event that is verified iff $ u^i_{K,\epsilon}\in\{1,0\}$ and define the function $V:\Omega_N\to{{\ensuremath{\mathbb R}} }$ as $$\label{defVeta} V(\eta):=\sum_{i =1}^{N/K-1} \, \mathds{1}_{\mathcal{E}_i } (\eta)\mathds{1}_{\mathcal{Z}_{i+1}}(\eta)+\mathds{1}_{\mathcal{E}_{N/K} } (\eta) \, .$$ \[prop: bad blocks\] Set $m:=\nu({\ensuremath{\mathcal E}}_i)$. 
There exists a constant $C=C(\rho)>0$ such that uniformly in $N$ for any $\lambda\in{{\ensuremath{\mathbb R}} }$ it holds $$\begin{aligned} \label{compari} \lim_{T \to \infty} \; \frac{1}{T} \log \left {\langle}\pi_{T}(V) {\;\geqslant\;}x + {m}\frac{N}{K} \right{\rangle}_{\lambda} {\;\leqslant\;}- C \frac{K}{N} x^2 \exp(\lambda) \, .\end{aligned}$$ For any ${\gamma}>0$ $$\begin{aligned} \label{eq: chebyshev} \left{\langle}\pi_T(V) {\;\geqslant\;}x + {m}\frac{N}{K}\right{\rangle}_{\lambda} {\;\leqslant\;}\exp \left( - T {\gamma}\left( x + {m}\frac{N}{K} \right) \right) {\left{\langle}\exp \left( {\gamma}\int_0^T dt \, V(\eta(t)) \right) \right{\rangle}}_{\lambda} \, .\end{aligned}$$ Then using we get $$\begin{aligned} \label{fey} \lim_{T \to \infty} \frac{1}{T}\log \; \left{\langle}\pi_T(V) {\;\geqslant\;}x +\frac{N}{K}\right{\rangle}_{\lambda} {\;\leqslant\;}-{\gamma}\left( x + {m}\frac{N}{K} \right)+ \sup_{f} \left\{ {\gamma}\nu \big( f V \big) - \exp(\lambda) {\ensuremath{\mathcal D}}_N (\sqrt{f}) \right \},\nonumber\\ \end{aligned}$$ where the supremum is over the $f:\Omega_N\to{{\ensuremath{\mathbb R}} }$ such that $\nu(f) =1$ and $f {\;\geqslant\;}0$. Notice that $$\label{divido} \nu\big (f \mathds{1}_{\mathcal{E}_i }\mathds{1}_{\mathcal{Z}_{i+1}}\big )=\nu\big (\mathds{1}_{\mathcal{E}_i } \mathds{1}_{\mathcal{Z}_{i+1}}\nu_{B_i}(\sqrt f)^2\big )+ \nu\left (\mathds{1}_{\mathcal{E}_i } \mathds{1}_{\mathcal{Z}_{i+1}}\left[f-\nu_{B_i}(\sqrt f)^2\right]\right ).$$ The first term in can be bounded from above by $$\label{impero1} \nu\big (\mathds{1}_{\mathcal{E}_i }\mathds{1}_{\mathcal{Z}_{i+1}}\nu_{B_i}(\sqrt f)^2\big ){\;\leqslant\;}\nu\big (\mathds{1}_{\mathcal{E}_i } \nu_{B_i}(\sqrt f)^2\big ){\;\leqslant\;}\nu(f)m=m \, ,$$ where we use the fact that $\nu_{B_i}(\sqrt f)^2$ does not depend on the variables inside $B_i$, $\mathcal{E}_i$ does not depend on the variables outside $B_i$ and the fact that $\nu(\nu_{B_i}(\sqrt f)^2){\;\leqslant\;}\nu( \nu_{B_i}(f))=\nu(f)=1$. On the other hand for the second term in we have $$\begin{aligned} \label{impero2} \nu\left (\mathds{1}_{\mathcal{E}_i } \mathds{1}_{\mathcal{Z}_{i+1}}\left[f-\nu_{B_i}(\sqrt f)^2\right]\right ) & {\;\leqslant\;}\nu\left (\mathds{1}_{\mathcal{Z}_{i+1}}(\eta)\left[\sqrt f-\nu_{B_i}(\sqrt f)\right]^2\right )^{1/2}\nu\left (\left[\sqrt f+\nu_{B_i}(\sqrt f)\right]^2\right )^{1/2}\nonumber\\ {\;\leqslant\;}2~\nu\left (\mathds{1}_{\mathcal{Z}_{i+1}}\mbox{Var}_{B_i}(\sqrt{f})\right )^{1/2} \, ,\end{aligned}$$ where to obtain the first inequality we upper bound $\mathds{1}_{\mathcal{E}_i }$ by one and we use Cauchy-Schwartz while for the second inequality we use the fact that the event $\mathcal {Z}_{i+1}$ depends only on the variables inside $B_{i+1}$, thus it is independent on the variables in the block $B_{i}$. Then we notice that $\mathds{1}_{\mathcal{Z}_{i+1}}=1$ guarantees the existence of (at least) one zero inside $B_{i+1}$ and we let $\xi$ be the position of the first zero starting from the right border of this box. 
Thus $$\label{us1} \nu\left (\mathds{1}_{\mathcal{Z}_{i+1}}\mbox{Var}_{B_i}(\sqrt{f})\right )=\sum_{j=1}^{K}\nu\left ( \mathds{1}_{\xi=j+iK}\mbox{Var}_{B_i}(\sqrt{f})\right ) \, ,$$ and by letting $L_{i,j}$ and $R^{i,j}$ be the subset of $B_i\cup B_{i+1}$ to the left (respectively right) of $j+iK$, namely $L_{i,j}:=B_i\cup [iK,\dots, j+iK-1]$ and $R_{i,j}=B_{i+1}\setminus L_{i,j}$ we have $$\begin{aligned} \nu\left ( \mathds{1}_{\xi=j+iK}\mbox{Var}_{B_i}(\sqrt{f})\right ) {\;\leqslant\;}\sum_{\eta^r} \nu(\eta^r)\mathds{1}_{\xi=j+iK}\sum_{\eta^l}\nu(\eta^l)\mbox{Var}_{B_i}(\sqrt{f})\nonumber\\ {\;\leqslant\;}\sum_{\eta^r} \nu(\eta^r)\mathds{1}_{\xi=j+iK}\mbox{Var}_{L_{i,j}}(\sqrt{f}) \, ,\end{aligned}$$ where $\eta^r$ ($\eta^l$) is the configuration restricted to $B^{jr}$ ($B^{jl}$) and we use the product form of $\nu$ and, in the last passage, the convexity of the variance and the fact that $B_i \subset B^{jl}$. Then by using the spectral gap inequality for the model on $L_{i,j}$ with a frozen zero at the right boundary together with the expression of the Dirichlet form we get (recall that $S_{\rho}>0$ is the lower bound on the infimum over $N$ of the spectral gap of $\Lambda_N$ at density $\rho$) $$\label{us} \mbox{Var}_{L_{i,j}}(\sqrt{f}){\;\leqslant\;}S_{\rho}^{-1} \sum_{\eta^l}\nu(\eta^l)(\sum_{x\in L_{i,j}} r_x^l(\eta^l) \mbox{Var}_x(\sqrt{f})) \, ,$$ where we denote by $r_x^l$ the constraints for the model on $L_{i,j}$ with empty boundary condition on the right boundary, namely $r_x^l(\eta)=1$ if $x=j+iK-1$ and otherwise $r_x^l(\eta^l)=1-\eta^l_{x+1}$ if we are considering East or $r_x^l(\eta^l)=1-\eta^l_{x+1}\eta^l_{x-1}$ if we are considering FA-1f. Then we note that for any $\eta$ such that $\xi(\eta)=j+iK$ and which equals $\eta^l$ on $B^{jl}$, it holds $c_x^l(\eta^l)=c_x(\eta)$ for any $x\in L_{i,j}$. Thus we can insert into and use this observation to get $$\begin{aligned} \label{impero3} \nu\left ( \mathds{1}_{\mathcal{Z}_{i+1}} \mbox{Var}_{B_i}(\sqrt{f})\right ) {\;\leqslant\;}\frac{1}{S_{\rho}}\nu \left( \sum_{x\in B_i\cup B_{i+1}} r_x \mbox{Var}_x(\sqrt{f}) \right) \, .\end{aligned}$$ Then , , and yield $$\nu\big (f \mathds{1}_{\mathcal{E}_i }\mathds{1}_{\mathcal{Z}_{i+1}}\big ) {\;\leqslant\;}m+2\sqrt{\frac{\nu(\sum_{x\in B_i\cup B_{i+1}} r_x \mbox{Var}_x(\sqrt{f}))}{S_{\rho}}}$$ and for $\nu\big (f \mathds{1}_{\mathcal{E}_{N/K}} (\eta)\big )$ the same upper bound can be obtained along the same lines (actually easily because the boundary condition guarantees a zero at the right border of $B_{N/K}$). Thus for any function $f$ s.t. 
$ \nu(f)=1$ and $f>0$ it holds $$\begin{aligned} & \gamma\nu(fV)-\exp(\lambda){\ensuremath{\mathcal D}}(\sqrt f){\;\leqslant\;}\gamma m \frac{N}{K}+ \sum_{i=1}^{N/K}\left[\frac{2\gamma \sqrt 2 }{\sqrt S_{\rho}} \sqrt {{\ensuremath{\mathcal D}}_{K,i}(\sqrt f)}-\exp(\lambda){\ensuremath{\mathcal D}}_{K,i}(\sqrt f)\right] \\ & = \frac{N}{K}\left[\gamma m+\frac{ 2 \gamma^2}{ S_{\rho}\exp(\lambda)} -\frac{K}{N}\sum_{i=1}^{N/K}\left(\exp(\lambda/2)\sqrt {{\ensuremath{\mathcal D}}_{K,i}(\sqrt f)}-\frac{\sqrt 2\gamma }{\sqrt S_{\rho}\exp(\lambda/2)}\right)^2\right]{\;\leqslant\;}\frac{N}{K}\left[\gamma m+\frac{ 2\gamma^2 }{ S_{\rho}\exp(\lambda)}\right]\nonumber\end{aligned}$$ with ${\ensuremath{\mathcal D}}_{K,i}(\sqrt f)$ the contribution to the Dirichlet form coming from the sites in the box $B_i$, namely $$\begin{aligned} {\ensuremath{\mathcal D}}_{K,i} \left( \sqrt{f } \right) = \nu \left( \sum_{j \in B_i} c_j (\eta) \big( \sqrt{f(\eta^j)} - \sqrt{f (\eta)} \big)^2 \right) \, .\end{aligned}$$ Then by using and optimizing over $\gamma$ we get $$\lim_{T\to\infty}\frac{1}{T}\log\left{\langle}\frac{1}{T} \int_0^T dt \, V(\eta(t)) {\;\geqslant\;}x + {m}\frac{N}{K}\right{\rangle}_{\lambda} {\;\leqslant\;}- \frac{K}{N}\frac{S_{\rho}\exp(\lambda)}{8}x^2 \, .$$ This completes the proof. We are now ready to prove the main results of this section. \[Proof of Lemma \[prop: bad blocksnew\] \] We recall the definition for the function $V$, where $\mathcal{E}_i$ is the event which is verified iff $u^i_{K,{\varepsilon}} \in\{0,1\}$. Thus from the definition of the activity-density labels it follows immediately that the probability of $\mathcal{E}_i$ goes to zero as the size of the box, $K$, goes to infinity and it is bounded from above by $\exp(-K\epsilon^2 C)$. Thus provided $K{\;\geqslant\;}\bar K$ it holds $\nu(\mathds{1}_{\mathcal{E}_i})<\delta/6$. Thanks to these facts we can apply Lemma \[prop: bad blocks\] with the choice $x=\delta N/(6K)$ to obtain that $$\begin{aligned} \label{compari6} \lim_{T \to \infty} \; \frac{1}{T} \log \left {\langle}\pi_{T}\left(V\right) {\;\geqslant\;}\frac{\delta N}{3K} \right{\rangle}_{\lambda} {\;\leqslant\;}- C \frac{N}{K} \delta^2 \exp(\lambda) \, ,\end{aligned}$$ where $V$ is defined in . We will now prove that the following inequality holds for any $\eta\in\Omega_N$ $$\label{soloq} \sum_{i=1}^{N/K} \mathds{1}_{u_{K,\epsilon}^i=0}(\eta){\;\leqslant\;}V(\eta) \, .$$ Then collecting and the proof of is completed. We are therefore left with the proof of which immediately follows from the following observation. Let $i<j$ be such that $u^i_{K,\epsilon}=u^j_{K,\epsilon}=0$ and $u^k_{K,\epsilon}\neq 0$ for all $k\in [i+1,j-1]$. Then there exists $k\in [i,j-1]$ such that $\mathds{1}_{\mathcal{E}_{k}}\mathds{1}_{Z_{k+1}}=1$. In order to prove this statement we consider separately the case (a) $j=i+1$ and (b) $j>i+1$. In case (a) the result holds since $u^{i}_{K,\epsilon}=0$ implies $\mathds{1}_{\mathcal{E}_{i}}=1$ and $u^{i+1}_{K,\epsilon}=0$ implies $\mathds{1}_{\mathcal{Z}_{i+1}}=1$, thus $ \mathds{1}_{\mathcal{E}_{i}}\mathds{1}_{Z_{i+1}}=1$. In case (b) we distinguish subcases (b1) $u^{k}_{K,\epsilon}=1$ for all $k\in [i+1,j-1]$ and (b2) there exists at least one site $k\in [i+1,j-1]$ such that $u^{k}_{K,\epsilon}=-1$. If (b1) holds then $ \mathds{1}_{\mathcal{E}_{j-1}}\mathds{1}_{Z_{j}}=1$ (since $u^{j-1}_{K,\epsilon}=1$ implies $\mathds{1}_{\mathcal{E}_{j-1}}=1$ and $u^{j}_{K,\epsilon}=0$ implies $\mathds{1}_{Z_{j}}=1$) and the desired result is proven. 
In case (b2) if we let $\ell$ be the smallest index in $ [i+1,j-1]$ such that $u^{\ell}_{K,\epsilon}=-1$ then $ \mathds{1}_{\mathcal{E}_{\ell-1}}\mathds{1}_{Z_{\ell}}=1$ (indeed $u^{\ell-1}_{K,\epsilon}\in\{0,1\}$ and therefore $\mathds{1}_{\mathcal{E}_{\ell-1}}=1$ and $u^{\ell}_{K,\epsilon}=-1$ implies $\mathds{1}_{Z_{\ell}}=1$) and again the desired result is proven. \[Proof of Lemma \[lem: variance 0\]\] By Donsker-Varadhan large deviation principle and the spectral gap inequality we get $$\begin{aligned} \label{princ} \lim_{T\to \infty} \frac{1}{T} \log {\left{\langle}{\ensuremath{\mathcal W}}_{1,\delta} \right{\rangle}}_{\lambda} {\;\leqslant\;}-\exp(\lambda)\inf_{f}\left\{{\ensuremath{\mathcal D}}_{N}(\sqrt f)\right\}{\;\leqslant\;}- \exp(\lambda)S_{\rho}\inf_f\left\{\mbox{Var}(\sqrt f)\right\} \, ,\end{aligned}$$ where the infimum is over the positive functions $f$ such that $$\label{conditionf} \nu(f)=1, \qquad \quad \nu \left( f\sum_{i=1}^{N/K}\mathds{1}_{\{u^i_{K,\epsilon}=1\}} \right) {\;\geqslant\;}\delta\frac{N}{K} \, .$$ We will now show that under the latter constraint $$\label{added} \mbox{Var}(\sqrt f) {\;\geqslant\;}\delta -\rho^K-\rho^{K/2}.$$ Then by choosing $K$ sufficiently large so that $\rho^{K}{\;\leqslant\;}\delta^2/4$ by collecting and the desired result follows (note that $\rho^K{\;\leqslant\;}\delta^2/4$ implies $\rho^K{\;\leqslant\;}\delta/4$ since we only have to deal with the case $\delta {\;\leqslant\;}1$). We are therefore left with proving that under conditions the inequality holds. For each box $B_i$ with $i\in[1,N/K]$ we define the coarse grained variable $\omega_i$ by $$\omega_i=\mathds{1}_{\{u_{K,\epsilon}^i=1\}},$$ and let $p:=\nu(u_{K,\epsilon}^i=1)=\rho^{K}$ and $m_p$ be the Bernoulli product measure with density $p$ on $\Omega_{N/K}$. The marginal of $\nu(\eta) f(\eta)$ on the coarse grained variables is given by $m_p ({\omega}) g({\omega})$ with $$\begin{aligned} \label{defg} g({\omega}): = \frac{1}{m_p({\omega})} \; \sum_{\eta \sim {\omega}} f(\eta) \nu(\eta)\, ,\end{aligned}$$ where the sum is over the $\eta$’s compatible with ${\omega}$. Then $$\begin{aligned} \label{eq: average variance} \sqrt{g({\omega})} =\sqrt{\sum_{\eta \sim {\omega}} \frac{\nu(\eta)}{m_p({\omega})} f(\eta)} {\;\geqslant\;}\sum_{\eta \sim {\omega}} \frac{\nu(\eta)}{m_p({\omega})} \sqrt{f(\eta)} \, ,\end{aligned}$$ where we used the fact that for each fixed $\omega$ it holds $ \sum_{\eta \sim {\omega}} \frac{\nu(\eta)}{m_p({\omega})}=1$ and the concavity of the square root. Then from we get $$\label{eccola} \mbox{Var}(\sqrt f)=1 - \nu ( \sqrt{f})^2 {\;\geqslant\;}1 - m_p ( \sqrt{g})^2.$$ Thus in order to prove it is sufficient to show that $$\label{tobe} m_p(\sqrt g)^2 {\;\leqslant\;}1-\delta+p+\sqrt p \, ,$$ when $g$ is defined as in and $f$ satisfies conditions which imply $$m_p(g\sum_{i=1}^{N/K}\omega_i){\;\geqslant\;}\delta N/K \, .$$ The latter inequality implies that there exists (at least) a box $B_j$ with $j\in[1,N/K]$ such that ${\vartheta}_j: = m_p \left(g ({\omega}) {\omega}_j \right) {\;\geqslant\;}\delta$. Let $j$ be the rightmost box which verifies this constraint and rewrite each configuration $\omega$ via the couple $(\omega_j,\sigma)$ with $\sigma= \{ {\omega}_i \}_{i \not = j}$. 
Thus $$\label{mpg} m_p (\sqrt{g}) = p Z_1 + (1-p) Z_2$$ where $$\begin{aligned} Z_1: = \sum_{\sigma}m_p^j({\sigma}) \sqrt{g (1,{\sigma})} \quad {\rm and} \quad Z_2 := \sum_{\sigma}m_p^j(\sigma) \sqrt{ g (0,{\sigma})} \, ,\end{aligned}$$ where now $m_p^j$ denotes the product measure with density $p$ on $\{1,\dots, N/K \}\setminus \{j\}$. By Jensen's inequality $$\begin{aligned} \label{z12}Z_1^2 {\;\leqslant\;}\sum_{\sigma}m_p^j({\sigma}) g(1,{\sigma}) = \frac{{\vartheta}_j}{p}, \qquad Z_2^2 {\;\leqslant\;}\sum_{\sigma}m_p^j({\sigma}) g (0,{\sigma}) = \frac{1-{\vartheta}_j}{1-p} \, .\end{aligned}$$ Thus we get $$\begin{aligned} m_p (\sqrt{g})^2 &=& p^2 Z_1^2 + (1-p)^2 Z_2^2 + 2 p(1-p) Z_1 Z_2\\ &{\;\leqslant\;}& p {\vartheta}_j + (1-p) (1-{\vartheta}_j) + 2 \sqrt{p (1-p)} \; \sqrt{{\vartheta}_j (1-{\vartheta}_j)} \\ &{\;\leqslant\;}& 1- {\vartheta}_j + p + \sqrt{p}{\;\leqslant\;}1- \delta+ p + \sqrt{p}\, ,\end{aligned}$$ where we used the bounds above for the first inequality, the fact that ${\vartheta}_j {\;\leqslant\;}1$ (thus ${\vartheta}_j (1-{\vartheta}_j) {\;\leqslant\;}1/4$) and that $p<1$ for the second inequality, and finally the bound ${\vartheta}_j{\;\geqslant\;}\delta$ for the last inequality. Thus the claimed bound is proven and the proof of the Lemma is concluded. Before proving the last result of this section, Lemma \[bad\], we state separately a result which will also be useful in other proofs. \[easy\] Consider East and FA-1f model in $d=1$. There exists $C>0$ s.t. for any $N$ if $\alpha<0$ $$\begin{aligned} \label{b1} {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\geqslant\;}\exp\left( {\alpha}{{\ensuremath{\mathbb A}} }T \left(1+\frac{C}{N}\right)\right) \, \end{aligned}$$ if $\alpha>0$ $$\begin{aligned} \label{b2} {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\geqslant\;}\exp\left( {\alpha}{{\ensuremath{\mathbb A}} }T \left(1-\frac{C}{N}\right)\right) \, \end{aligned}$$ By Jensen's inequality it holds $$\label{jensen} {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\geqslant\;}\exp \left( \frac{{\alpha}}{N} {\left{\langle}{\ensuremath{\mathcal A}}(T) \right{\rangle}} \right)$$ Then from the definition of the process and recalling it follows immediately that $$<{\ensuremath{\mathcal A}}(t)>=t\sum_{i=1}^N\nu(c_i(\eta))=t (N-2){{\ensuremath{\mathbb A}} }+t\nu(c_1(\eta))+t\nu(c_N(\eta))$$ where ${{\ensuremath{\mathbb A}} }=\nu(c_1(\eta))=2\rho(1-\rho)^2$ and $\nu(c_N(\eta))=2\rho(1-\rho)$ for East; ${{\ensuremath{\mathbb A}} }=2\rho(1-\rho)(1-\rho^2)$, $\nu(c_1(\eta))=\nu(c_N(\eta))=2\rho(1-\rho)$ for FA-1f with two zeros at the boundary; ${{\ensuremath{\mathbb A}} }=2\rho(1-\rho)(1-\rho^2)$, $\nu(c_1(\eta))=2\rho(1-\rho)^2$ and $\nu(c_N(\eta))=2\rho(1-\rho)$ for FA-1f with one zero at the (right) boundary. Thus $$\label{boundA} {{\ensuremath{\mathbb A}} }t N \left(1-\frac{C}{N}\right){\;\leqslant\;}<{\ensuremath{\mathcal A}}(t)>{\;\leqslant\;}{{\ensuremath{\mathbb A}} }t N \left(1+\frac{C}{N}\right)$$ and the claimed bounds immediately follow from these estimates together with inequality .
From the definition of $\mu_{\alpha,T}^{N}$ and Lemma \[easy\], for any ${\alpha}{\;\leqslant\;}0$ $$\begin{aligned} \mu_{\alpha,T}^{N}({\ensuremath{\mathcal W}}_{0,\delta}) {\;\leqslant\;}\exp( -\alpha{{\ensuremath{\mathbb A}} }T(1+\frac{C}{N})) {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta}} \right{\rangle}}_{\frac{{\alpha}}{N}} \,\end{aligned}$$ if instead $\alpha>0$, by using the fact that ${\ensuremath{\mathcal H}}(\eta(s)){\;\leqslant\;}N$ we get $$\begin{aligned} \mu_{\alpha,T}^{N}({\ensuremath{\mathcal W}}_{0,\delta}) {\;\leqslant\;}\exp( -\alpha {{\ensuremath{\mathbb A}} }T+\alpha T) {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta}} \right{\rangle}}_{\frac{{\alpha}}{N}} \, .\end{aligned}$$ The result immediately follows from the above inequalities by applying Lemma \[prop: bad blocksnew\]. FA-1f in $d {\;\geqslant\;}2$ {#secbad} ----------------------------- We start by extending to the higher dimensional case the notion of coarse grained activity. We let again $N/K$ be an integer and partition $\Lambda_N^d$ into $(N/K)^d$ boxes of linear size $K$. We let $B_i$ be the boxes (with $i\in [1,(N/K)^d]$) numbered in such a way that for any $i\in [1,(N/K)^d-1]$ there is $j\in[1,\dots,d]$ such that $B_{i+1}$ is obtained by shifting $B_i$ by $K\vec e_j$. Then we define the activity-density labels as in the one dimensional case and the event ${\ensuremath{\mathcal W}}_{j,\delta}$ as in Definition \[defW\] with $N/K$ substituted by $(N/K)^d$. The following holds \[ahh\] Consider FA-1f model in dimension $d{\;\geqslant\;}2$ with any boundary condition which guarantees ergodicity and $\rho\in(0,1)$. The results in Lemma \[prop: bad blocksnew\] hold also for FA-1f in dimension $d$ if we substitute $N/K$ with $(N/K)^d$. The proof follows the same lines as for Lemma \[prop: bad blocksnew\] with some new ingredients that we detail here. Since the boundary condition contains at least one empty site, there exists at least one box which has an empty site on its boundary. We let $j$ be the smallest index of such a box. Then we let $V(\eta):=\sum_{i=1}^{j-1}\mathds{1}_{\mathcal{E}_i } (\eta)\mathds{1}_{\mathcal{Z}_{i+1}}(\eta)+\sum_{i=j+1}^{(N/K)^d}\mathds{1}_{\mathcal{E}_{i} } (\eta)\mathds{1}_{\mathcal{Z}_{i-1}}(\eta)+\mathds{1}_{\mathcal{E}_j} $ and notice (along the same lines as for the one dimensional case) that $\sum_{i=1}^{(N/K)^d}\mathds{1}_{u^i_{K,\epsilon}=0}(\eta){\;\leqslant\;}V(\eta)$ for any $\eta$. Thus provided we can establish for $V$ the validity of Lemma \[prop: bad blocks\] with $N/K$ substituted by $(N/K)^d$ we can conclude along the same lines as for the one dimensional case. The validity of this modified Lemma \[prop: bad blocks\] also follows along the same lines as the one dimensional case with a different point that we detail here. Recall that in one dimension, under the event $\mathcal{Z}_{i+1}$ which guarantees that there exists at least one empty site in $B_{i+1}$, we identified the rightmost such site, which we denoted by $\xi$. Then under the event that $\xi=j+iK$ we extended the variance up to $L_{i,j}:=B_i\cup[iK,\dots,j+iK-1]$ and used the positivity of the spectral gap on $L_{i,j}$ (which is a volume of the type $\Lambda_{K+j}$) with fixed empty boundary condition at $j+iK$. Now instead, under the event $\mathcal{Z}_{i+1}$ we number the sites of $B_{i+1}$ as $x_1,\dots,x_{K^d}$ in such a way that $x_{k+1}$ is a nearest neighbour of $x_k$ for every $k$, and then let $\xi$ be the empty site with the largest label.
Then under the event $\xi=x_j$ we let $L_{i,j}:=B_i\cup B_{i+1}\setminus [x_{j},\dots,x_{K^d}]$ and use the positivity of the spectral gap on $L_{i,j}$ with fixed empty boundary condition at $x_j$. Note that now $L_{i,j}$ is not a hypercube and the boundary condition is a single empty boundary condition even if we are considering the dynamics on $\Lambda_N$ with, for example, completely empty boundary conditions. Nevertheless we can use the positivity of the spectral gap on a generic connected graph with empty boundary condition (see Proposition \[teogap\]) to bound again the variance on $L_{i,j}$ with the Dirichlet form and then proceed as in the one dimensional case. \[nonhopiu\] Consider the FA-1f model in dimension $d{\;\geqslant\;}2$ with any boundary condition which guarantees ergodicity and $\rho\in(0,1)$. There exists $C(\rho)$ such that $$\lim_{T\to\infty}\frac{1}{T}\log \mu_{\alpha,T}^{N,d}( {\ensuremath{\mathcal W}}_{0,\delta}){\;\leqslant\;}-C\delta^2\left(\frac{N}{K}\right)^d\exp(\alpha/N^{d-c})+|\alpha|N^c(1+CN^{c-d})$$ The proof uses Lemma \[ahh\] and follows exactly the same lines as the proof of Lemma \[bad\]. Consider the FA-1f model in dimension $d{\;\geqslant\;}2$ with $\rho\in(0,1)$ and boundary condition of dimension $c\in[0,d-1]$. Then $$\begin{aligned} \lim_{T\to \infty} \frac{1}{T} \log {\left{\langle}{\ensuremath{\mathcal W}}_{1,\delta} \right{\rangle}}_{\lambda} {\;\leqslant\;}- \frac{N^c S_{\rho}\delta }{4} \, ,\end{aligned}$$ with $S_{\rho}$ defined in Proposition \[teogap\]. We detail the proof in the case $d=2$, $c=1$ with the specific choice that all the sites in the boundary set which have first coordinate equal to $N+1$ are empty. The other cases can be proven analogously. As in the proof of Lemma \[lem: variance 0\] we start by applying the Donsker-Varadhan large deviation principle which leads to $$\begin{aligned} \label{princ2} \lim_{T\to \infty} \frac{1}{T} \log {\left{\langle}{\ensuremath{\mathcal W}}_{1,\delta} \right{\rangle}}_{\lambda} {\;\leqslant\;}-\exp(\lambda)\inf_{f\in{\ensuremath{\mathcal C}}} \left\{ {\ensuremath{\mathcal D}}_{N}^{(2)}(\sqrt f) \right \} \, ,\end{aligned}$$ where ${\ensuremath{\mathcal C}}$ is the set of positive functions $f$ s.t. $$\label{conditionff} \nu(f)=1, \quad\quad \nu \left ( f\sum_{i=1}^{(N/K)^2}1_{u^i_{K,\epsilon}=1} \right) {\;\geqslant\;}\delta\left(\frac{N}{K}\right)^2 \, ,$$ and we added the index $(2)$ to make explicit that we refer here to the Dirichlet form of the two dimensional model. By the monotonicity of the rates, for any function $f$ one has $${\ensuremath{\mathcal D}}_{N}^{(2)}(\sqrt f)=\sum_{i=1}^N{\ensuremath{\mathcal D}}_N^{(2,i)}(\sqrt f){\;\geqslant\;}\sum_{i=1}^N\mu\left({\ensuremath{\mathcal D}}_N^{(1,i)}(\sqrt f) \right) {\;\geqslant\;}S_{\rho}\sum_{i=1}^N\mu\left(\mbox{Var}_i(\sqrt f)\right)\, ,$$ where ${\ensuremath{\mathcal D}}_N^{(2,i)}$ is the contribution of the $i$-th line to the Dirichlet form and ${\ensuremath{\mathcal D}}_N^{(1,i)}$ is instead the Dirichlet form of the one dimensional FA-1f model on the $i$-th line with empty boundary condition on the right border (note that ${\ensuremath{\mathcal D}}_N^{(1,i)}(\sqrt f)$ is a function of the configuration on all the sites that do not belong to the $i$-th line) and $\mbox{Var}_i$ denotes the variance w.r.t. the Bernoulli measure restricted to the $i$-th line with the other variables held fixed.
The first inequality follows from the fact that, for any site belonging to the $i$-th line, if the constraint of the one dimensional model is satisfied then so is the constraint of the two dimensional model (but the converse is not true); indeed for any $x$ it holds $1-\eta_{x+\vec e_1}\eta_{x-\vec e_1}{\;\leqslant\;}1-\eta_{x+\vec e_1}\eta_{x-\vec e_1}\eta_{x+\vec e_2}\eta_{x-\vec e_2}$. The second inequality follows by using the spectral gap inequality for the one dimensional model. Then we notice that the condition implies that $$\label{conditionff2} \nu \left ( f\sum_{i=1}^{N}\sum_{j=1}^{(N/K)}1_{\tilde u^{i,j}_{K,\epsilon}=1} \right) {\;\geqslant\;}\delta N \frac{N}{K} \, ,$$ where $\tilde u^{i,j}_{K,\epsilon}$ stands for the activity label on the $j$-th one-dimensional box (of size $K$) of the line $i$. Thus we get $$\label{ssss} \inf_{f\in{\ensuremath{\mathcal C}}}{\ensuremath{\mathcal D}}_N^{(2)}(\sqrt f){\;\geqslant\;}S_{\rho} \inf_{\delta_1,\dots,\delta_N\atop \sum_{i=1}^N\delta_i{\;\geqslant\;}\delta N}\sum_{i=1}^N \mu(\inf_{f\in {\ensuremath{\mathcal C}}_i(\delta_i)}\mbox{Var}_i(\sqrt{f})) \, ,$$ where we denote by ${\ensuremath{\mathcal C}}_i(\delta)$ the set of positive functions which satisfy the conditions $\nu(f)=1$ and $$\nu\left(f\sum_{j=1}^{N/K}1_{\tilde u^{i,j}_{K,\epsilon}=1}\right){\;\geqslant\;}\delta\frac{N}{K}$$ Following the lines of Lemma \[lem: variance 0\] we get for any $i\in[1,N]$ $$\inf_{f\in{\ensuremath{\mathcal C}}_i(\delta_i)} \mbox{Var}_i(\sqrt{f}) {\;\geqslant\;}\delta_i-\rho^K-\rho^{K/2},$$ and inserting this result in and yields the desired result provided $K$ is chosen sufficiently large so that $\rho^K+\rho^{K/2}{\;\leqslant\;}3\delta/4$. Active regime ============= In this section we prove Theorem \[teo:phasetrans\] (i) which establishes the linearity of the moment generating function ${\varphi}$ for the FA-1f and East models in the small $\alpha$ regime. Then, we prove Theorem \[teo:condmes\] (i) and the stronger result Lemma \[usolemma\] on the conditional measure. In Section \[genesmall\] we generalize these results to higher dimensions. The key ingredients which will be used in these (quite technical) proofs are the main results obtained in the previous section, namely Lemma \[prop: bad blocksnew\] and Lemma \[lem: variance 0\]. Linearity of the generating function: proof of Theorem \[teo:phasetrans\] (i) {#linearity} ----------------------------------------------------------------------------- In this section we will prove the following proposition \[altra\] There exists $C<\infty$ s.t. for any ${\delta}>0$ and $\alpha>\alpha_0=-\frac{S_{\rho}}{4{{\ensuremath{\mathbb A}} }}$ there is $\bar N({\delta},\alpha)<\infty$ such that for all $N{\;\geqslant\;}\bar N$ it holds $$\begin{aligned} \label{eq: borne inf sup} \qquad \lim_{T \to \infty} \; \frac{1}{T} \log {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\leqslant\;}{{\ensuremath{\mathbb A}} }{\alpha}+ C {\delta}+ C\frac{\alpha}{\sqrt N}+C\frac{\alpha^2}{\sqrt N}\, .\end{aligned}$$ From this proposition we deduce Theorem \[teo:phasetrans\] (i). The result follows from the definition using the lower bound of Lemma \[easy\] and the upper bound of Proposition \[altra\]. We are therefore left with proving Proposition \[altra\]. We distinguish two cases.
[**Case $\alpha{\;\leqslant\;}0$.**]{} Choose $\epsilon= {{\ensuremath{\mathbb A}} }\frac{\delta}{2}$ and $K$ such that $K{\;\geqslant\;}\bar K( \frac{\delta}{2}, {{\ensuremath{\mathbb A}} }\frac{\delta}{2})$ with $\bar K$ defined in Lemma \[prop: bad blocksnew\], then $$\label{ineevents} \mathds{1}_{ {\ensuremath{\mathcal W}}_{0,\delta/2}}+\mathds{1}_{ {\ensuremath{\mathcal W}}_{-1,1-\delta}}+\sum_{\ell=\lceil \delta N/(2K)\rceil }^{N/K}\mathds{1}_{ {\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta/2}^c}{\;\geqslant\;}1.$$ where the sets ${\ensuremath{\mathcal V}}$ and ${\ensuremath{\mathcal W}}$ were introduced in Definition \[defW\]. From it follows that $$\label{brutta} {\left{\langle}\exp\left(\frac{\alpha}{N}\mathcal{A}(T)\right) \right{\rangle}}{\;\leqslant\;}F_1+F_{2}+\sum_{\ell=\lceil \delta N/(2K)\rceil }^{N/K} F_{3,\ell}$$ where $$\label{I} F_1:={\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}} \exp\left(\left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s))\right) \right{\rangle}}_{{\alpha}/N}$$ $$\label{II} F_{2}:= {\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal W}}_{-1,1-\delta}}\, \exp \left( \left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N}$$ and $$\label{III} F_{3,\ell}:= {\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta/2}^c} \, \exp \left( \left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N}.$$ By using the fact that $\alpha{\;\leqslant\;}0$, we get from , and $$\label{Ibis} F_1{\;\leqslant\;}{\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}} \right{\rangle}}_{{\alpha}/N}$$ $$\label{IIbis} F_{2}{\;\leqslant\;}{\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal W}}_{-1,1-\delta}} \, \exp \left( \frac{\alpha}{N} \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N} \exp\left(\frac{T\alpha^2}{N}\right)$$ and $$\label{IIIbis} F_{3,\ell}{\;\leqslant\;}{\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta/2}^c} \, \exp \left( \frac{{\alpha}}{N} \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N}\exp\left(\frac{T\alpha^2}{N}\right).$$ Recall Lemma \[prop: bad blocksnew\] then since we have chosen $K{\;\geqslant\;}\bar K(\frac{\delta}{2},{{\ensuremath{\mathbb A}} }\frac{\delta}{2})$ there is $C>0$ such that $$\begin{aligned} \label{eq: gaussian bound neglect 2} \lim_{T \to \infty} \; \frac{1}{T} \log(F_1){\;\leqslant\;}\lim_{T \to \infty} \; \frac{1}{T} \log {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}} \right{\rangle}}_{{\alpha}/N} {\;\leqslant\;}- \delta^2C\frac{N}{K} \exp(\frac{\alpha}{N})\, .\end{aligned}$$ Then recalling inequality , Definition \[defW\] for the event ${\ensuremath{\mathcal W}}_{-1,1-\delta}$ and our choice $\epsilon={{\ensuremath{\mathbb A}} }\frac{\delta}{2}$, we get from $$\label{colc0} F_{2}{\;\leqslant\;}\exp\left(T\alpha\left({{\ensuremath{\mathbb A}} }( 1 - \frac{{\delta}}{2}) ( 1- {\delta}) -\frac{2 {{\ensuremath{\mathbb A}} }}{K}\right) + \frac{T\alpha^2}{N} \right)$$ Since the event ${\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta/2}^c$ is a subset of the event ${\ensuremath{\mathcal W}}_{-1,1-\delta '}$ with $\delta '= \frac{\delta}{2}+ (\ell+1) \frac{K}{N}$, by using again we get 
from $$\label{IIItris} F_{3,\ell}{\;\leqslant\;}\exp\left(T\alpha \left( {{\ensuremath{\mathbb A}} }( 1 - \frac{{\delta}}{2}) -\frac{2{{\ensuremath{\mathbb A}} }}{K} \right) \left(1- \frac{\delta}{2} - (\ell+1) \frac{K}{N} \right) + \frac{T \alpha^2}{N}\right){\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal W}}_{1,\ell \frac{K}{N} }} \right{\rangle}}_{{\alpha}/N}$$ where we used the fact that the event ${\ensuremath{\mathcal V}}_{1,\ell}$ is a subset of the event $ {\ensuremath{\mathcal W}}_{1,\ell \frac{K}{N} }$. Then by using Lemma \[lem: variance 0\], it holds for any $\ell{\;\geqslant\;}\lceil \delta N/(2K)\rceil $ that $$\label{colc31} \lim_{T\to\infty}\frac{1}{T}\log {\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal W}}_{1,\ell \frac{K}{N} }} \right{\rangle}}_{{\alpha}/N} {\;\leqslant\;}-\frac{S_{\rho}\ell K}{4N}\exp(\frac{\alpha}{N}) \, .$$ Thus collecting and we get $$\lim_{T\to\infty}\frac{1}{T}\log(F_{3,\ell}) {\;\leqslant\;}{\alpha}{{\ensuremath{\mathbb A}} }( 1 - \frac{{\delta}}{2})^2 - \left( {\alpha}{{\ensuremath{\mathbb A}} }( 1 - \frac{{\delta}}{2}) + \frac{S_{\rho}}{4} \exp \big(\frac{\alpha}{N} \big) \right) \frac{\ell K}{N} + \frac{C}{K} + \frac{C K }{N} \, .$$ Since $\alpha>\alpha_0=-\frac{S_{\rho}}{{{\ensuremath{\mathbb A}} }4}$ and $ \frac{\ell K}{N} {\;\geqslant\;}\frac{{\delta}}{2}$, one gets (for ${\delta}$ small enough) $$\begin{aligned} \label{colc3} \lim_{T\to\infty}\frac{1}{T}\log(F_{3,\ell}) & {\;\leqslant\;}& {\alpha}{{\ensuremath{\mathbb A}} }- {\alpha}{{\ensuremath{\mathbb A}} }{\delta}- \left( {\alpha}{{\ensuremath{\mathbb A}} }+ \frac{S_{\rho}}{4} \exp \big(\frac{\alpha}{N} \big) \right) \frac{{\delta}}{2}+ \frac{C}{K} + \frac{C K }{N}\nonumber \\ & {\;\leqslant\;}& {\alpha}{{\ensuremath{\mathbb A}} }- {\alpha}{{\ensuremath{\mathbb A}} }{\delta}\frac{C}{K} + \frac{C K }{N}\, .\end{aligned}$$ Thus under this hypothesis by using and collecting , and if we now set $N{\;\geqslant\;}\bar K( \frac{\delta}{2}, {{\ensuremath{\mathbb A}} }\frac{\delta}{2})^2$ and $K=\sqrt N$ we get the desired result. [**Case $\alpha>0$.**]{} Recall Definition \[defW\] and the definition for $F_1$. Then it holds $$\label{brutta2} {\left{\langle}\exp\left(\frac{\alpha}{N}\mathcal{A}(T)\right) \right{\rangle}}{\;\leqslant\;}F_1+F_{4} \, ,$$ where we let $$\label{IV} F_4:={\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}^c} \exp\left(\left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s))\right) \right{\rangle}}_{{\alpha}/N} \, .$$ For $N>\alpha/\log 2$, then $\exp(\alpha/N)-1<\alpha/N+(\alpha/N)^2$. Thus via definition since ${\ensuremath{\mathcal H}}{\;\leqslant\;}N$, using Lemma \[prop: bad blocksnew\] we get $$\label{GI} \lim_{T\to\infty}\frac{1}{T}\log (F_1){\;\leqslant\;}-\delta^2 C\frac{N}{K}+\alpha+\frac{\alpha^2}{N} \, .$$ Then using definition and recalling inequality we get $$\label{GII} \lim_{T\to\infty}\frac{1}{T}\log (F_{4}){\;\leqslant\;}\alpha{{\ensuremath{\mathbb A}} }+\alpha\delta+\frac{3 \alpha}{K}+\frac{\alpha^2}{N} \, .$$ Thus by using and collecting and if we now set $N{\;\geqslant\;}\max( \bar K(\frac{\delta}{2}, {{\ensuremath{\mathbb A}} }\frac{\delta}{2})^2, \frac{\alpha}{\log 2})$ and $K=\sqrt N$ we get the desired result. 
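For small system sizes the quantity bounded in Proposition \[altra\] can also be evaluated exactly, which gives a concrete check of the linear behaviour ${\alpha}{{\ensuremath{\mathbb A}} }$. The sketch below (an illustration, not part of the argument) computes the limit $\lim_{T\to\infty}\frac{1}{T}\log\langle\exp(\frac{\alpha}{N}{\ensuremath{\mathcal A}}(T))\rangle$ as the largest eigenvalue of the generator in which every off-diagonal jump rate is multiplied by $e^{\alpha/N}$, the standard tilted-generator representation of such limits. The FA-1f constraint with two empty boundaries, the jump rates $\rho$ and $1-\rho$, and the convention that ${\ensuremath{\mathcal A}}(T)$ counts every configuration change are assumptions to be matched to the definitions of this paper.

```python
import numpy as np
from itertools import product

def scgf(N=8, rho=0.7, alpha=-1.0):
    """Largest eigenvalue of the tilted generator of FA-1f with two empty
    boundaries: a sketch of lim_T (1/T) log < exp((alpha/N) A(T)) >.

    Assumptions (ours): A(T) counts every spin flip, jump rates are
    rho (0 -> 1) and 1 - rho (1 -> 0), and the constraint at site i is
    1 - eta_{i-1} * eta_{i+1} with empty (0) boundary sites.
    """
    s = alpha / N
    configs = list(product((0, 1), repeat=N))
    index = {cfg: k for k, cfg in enumerate(configs)}
    L = np.zeros((2 ** N, 2 ** N))
    for cfg in configs:
        k = index[cfg]
        for i in range(N):
            left = cfg[i - 1] if i > 0 else 0
            right = cfg[i + 1] if i < N - 1 else 0
            if left * right == 1:            # constraint violated: no flip
                continue
            rate = (1.0 - rho) if cfg[i] == 1 else rho
            flipped = list(cfg)
            flipped[i] = 1 - cfg[i]
            L[index[tuple(flipped)], k] += rate * np.exp(s)  # tilted jump term
            L[k, k] -= rate                                  # escape rate
    return np.max(np.linalg.eigvals(L).real)

# For moderate alpha the result should be close to alpha times the mean
# activity density (up to normalization and 1/N corrections).
print(scgf(alpha=-1.0), scgf(alpha=1.0))
```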
Conditional measure: proof of Theorem \[teo:condmes\] (i) and a stronger result {#condmes1} ------------------------------------------------------------------------------- The main result of this section is the following Lemma \[usolemma\] which, as we will explain, is a stronger version of Theorem \[teo:condmes\](i). \[usolemma\] Let $K_N$ and $\delta_N$ be two sequences such that $$\lim_{N\to\infty}\delta_N=\lim_{N\to\infty}1/K_N=\lim_{N\to\infty}K_N/(N\delta_N^2)=\lim_{N\to\infty}\frac{\log(\delta_N)}{K_N\delta_N^2}=0.$$ Note that there are sequences which verify the above conditions, e.g. the choice $K_N=\sqrt N$ and $\delta_N=N^{-1/8}$. For each $N$ let the activity-density labels be defined with $K=K_N$ and $\epsilon_N={{\ensuremath{\mathbb A}} }\frac{\delta_N}{2}$. Then there exists $\alpha_0<0$ such that if $\alpha>\alpha_0$ it holds $$\lim_{N\to\infty}\lim_{T\to\infty}\mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{-1,1-\delta_N})=1\, .$$ Fix $N$ sufficiently large so that $K_N{\;\geqslant\;}\bar K(\frac{\delta_N}{2}, {{\ensuremath{\mathbb A}} }\frac{\delta_N}{2})$ with $\bar K$ defined in Lemma \[prop: bad blocksnew\] (this is possible thanks to the hypothesis $\lim_{N\to\infty}\frac{K_N\delta_N^2}{|\log(\delta_N)|}=\infty$). We distinguish two cases. [**Case ${\alpha}<0$.**]{} Since $\mu_{\alpha,T}^N$ is a measure, from inequality it holds $$\label{impo} 1-\mu_{\alpha,T}^N({ {\ensuremath{\mathcal W}}_{0,\delta_N/2}})-\sum_{\ell=\lceil \frac{\delta_N}{2} \frac{N}{K} \rceil }^{N/K}\mu_{\alpha,T}^N({\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta_N/2}^c)\,{\;\leqslant\;}\,\mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{-1,1-\delta_N})\, {\;\leqslant\;}\,1$$ Recall equation which defines the conditional measure $\mu_{\alpha,T}^N$. If we apply Lemma \[easy\] to bound the denominator and rewrite the numerator, we get $$\label{eq1} \mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,\delta_N/2}){\;\leqslant\;}\exp\left(-\alpha{{\ensuremath{\mathbb A}} }T (1+\frac{C}{N})\right)F_1 \, ,$$ $$\label{eq2} \mu_{\alpha,T}^N( {\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta_N/2}^c){\;\leqslant\;}\exp\left(-\alpha{{\ensuremath{\mathbb A}} }T (1+\frac{C}{N})\right) F_{3,\ell} \, ,$$ where $F_1$ and $F_{3,\ell}$ are the functions that have been defined respectively in and and here we set $\delta=\delta_N$. Then we use and the assumption $\lim_{N\to\infty} \frac{\delta_N^2 N}{K_N} =\infty$ to conclude that $$\label{ea} \lim_{N\to\infty}\lim_{T\to\infty}\mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,\delta_N/2})=0 \, .$$ Then by using , we get for $\frac{\ell K}{N} {\;\geqslant\;}\frac{{\delta}}{2}$ $$\lim_{T\to\infty} \; \frac{1}{T} \log \; \mu_{\alpha,T}^N({\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta_N/2}^c) {\;\leqslant\;}- 3 {\alpha}{{\ensuremath{\mathbb A}} }\frac{{\delta}_N}{2} - \frac{S_{\rho}}{4} \frac{{\delta}_N}{2} + \frac{C'}{N} - 2 {\alpha}\frac{{{\ensuremath{\mathbb A}} }}{K_N} \, .$$ By construction ${\delta}_N \gg \frac{1}{K_N} \gg \frac{1}{N}$. Thus for $\alpha> {\alpha}_0 = - \frac{S_{\rho}}{12 {{\ensuremath{\mathbb A}} }}$ and $N$ large enough $$\label{eq4} \lim_{T\to\infty}\mu_{\alpha,T}^N({\ensuremath{\mathcal V}}_{1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta_N/2}^c)=0 \, .$$ Note that the threshold ${\alpha}_0$ obtained here is not as sharp as in Proposition \[altra\]. The proof is then completed via , and .
[**Case ${\alpha}{\;\geqslant\;}0$.**]{} Recall Definition \[defW\], then it holds $$\label{newineevents} \mathds{1}_{{\ensuremath{\mathcal W}}_{-1,1-\delta_N}}+\mathds{1}_{{\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}}+\mathds{1}_{{\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}^c \cap {\ensuremath{\mathcal W}}_{-1,1-\delta_N}^c}{\;\geqslant\;}1\, .$$ Since $\mu_{\alpha,T}^N$ is a measure, from inequality it holds $$\label{impo2} 1-\mu_{\alpha,T}^N({ {\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}})-\mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}^c\cap {\ensuremath{\mathcal W}}_{-1,1-\delta_N}^c) \, {\;\leqslant\;}\,\mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{-1,1-\delta_N})\, {\;\leqslant\;}\,1 \, .$$ Recall equation which defines the measure $\mu_{\alpha,T}^N$. Applying Lemma \[easy\] to bound the denominator and to rewrite the numerator, there is $C>0$ such that $$\label{neweq1} \mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}){\;\leqslant\;}\exp\left(-\alpha{{\ensuremath{\mathbb A}} }T(1-\frac{C}{N})\right) F_1 \, ,$$ where $F_1$ was defined in and here we set $\delta= \frac{\delta_N {{\ensuremath{\mathbb A}} }}{4}$. This, together with the hypothesis on $\delta_N$ and $K_N$, implies that $$\label{impo3} \lim_{N\to\infty}\lim_{T\to\infty} \mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4})=0 \, .$$ On the other hand, by using inequality and again Lemma \[easy\] to bound the denominator and to rewrite the numerator of the conditional measure, we get with ${\varepsilon}= {{\ensuremath{\mathbb A}} }\frac{{\delta}_N}{2}$ $$\begin{aligned} && \lim_{T\to\infty} \frac{1}{T} \log \left {\langle}1_{\left\{ {\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\frac{\delta_N}{4}}^c \cap {\ensuremath{\mathcal W}}_{-1,1-\delta_N}^c \right\} } \exp \big( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(t) \big) \right {\rangle}\\ && \qquad {\;\leqslant\;}{\alpha}{\delta}_N \frac{{{\ensuremath{\mathbb A}} }}{4} + {\alpha}(1- {\delta}_N) {{\ensuremath{\mathbb A}} }(1 + \frac{{\delta}_N}{2}) + \frac{2{\alpha}}{K_N} {\;\leqslant\;}{\alpha}{{\ensuremath{\mathbb A}} }\left( 1- \frac{{\delta}_N}{4} - \frac{{\delta}_N^2}{2} \right) +\frac{2 {\alpha}}{K_N} \, .\end{aligned}$$ Thus $$\label{impo4} \lim_{N\to\infty}\lim_{T\to\infty} \mu_{\alpha,T}^N({\ensuremath{\mathcal W}}_{0,{{\ensuremath{\mathbb A}} }\delta_N/4}^c \cap {\ensuremath{\mathcal W}}_{-1,1-\delta_N}^c)=0.$$ The result is then proved thanks to , and . It is immediate to verify that the event $ {\ensuremath{\mathcal W}}_{-1,1-\delta}$ implies that $\pi_T(|\sum_{i=1}^N\eta_i-N\rho|){\;\leqslant\;}(\epsilon +\delta)N$. Therefore is proven by using Lemma \[usolemma\] and taking $\gamma_N=\frac{3}{4} {{\ensuremath{\mathbb A}} }\delta_N$. A similar argument leads to result . FA-1f in dimension $d>1$: proof of Theorem \[linearityd>1\] {#genesmall} -------------------------------------------------------------- The proof follows by using the results of Section \[secbad\] along exactly the same lines as the proofs of Theorems \[teo:phasetrans\](i) and \[teo:condmes\](i). Inactive regime =============== In this section we prove Theorem \[teo:phasetrans\] (ii). We analyze in detail the case of FA-1f with two empty boundaries in Section \[genesmall1\] and then we sketch how the proof is extended to FA-1f with one empty boundary and to the East model in Section \[largealpha2\].
Our proof will provide a variational characterization of the constant $\Sigma$ which appears in the theorem and which, as explained in section \[heuristics\], plays the role of an interface energy. We underline that this variational problem, and therefore the value of $\Sigma$, depends not only on the choice of the constraints (the interface energy for East and FA-1f at the same density are different) but also on the choice of the boundary conditions (the interface energy for FA-1f with one and two empty boundaries are also different, see Remark \[remdiff\]). Finally, we prove Theorem \[poor\] for FA-1f in higher dimensions. FA-1f with two empty boundaries: proof of Theorem \[teo:phasetrans\] (ii) {#genesmall1} ------------------------------------------------------------------------- \[largealpha\] Fix integer $L,L'>0$ and let $\mathcal{C}_{L,L'}$ be the set of probability densities on $\Omega_{L+L'+2}$ such that $\eta_{L+1}=\eta_{ L+2}=1$ with probability one, namely $$\begin{aligned} \label{eq: CK} {\ensuremath{\mathcal C}}_{L,L'} = \Big \{ f : \qquad \nu \big(f \big) = 1, \quad \nu \big(f\eta_{L+1} \eta_{L+2}\big) = 1 \Big\} \, .\end{aligned}$$ Let ${\Sigma}_{L,L'} = \inf_{f \in \mathcal{C}_{L,L'}} {\ensuremath{\mathcal D}}_{L+L'+2} \big( \sqrt{f} \big)$ with ${\ensuremath{\mathcal D}}$ the Dirichlet form of FA-1f model with two empty boundaries and define the interface energy ${\Sigma}$ as $$\label{defS1} {\Sigma}:=\lim_{L\to\infty}\lim_{L'\to\infty}{\Sigma}_{L,L'}$$ The definition is well posed thanks to the following Lemma \[monotonicity\]. \[monotonicity\] ${\Sigma}_{L,L'}$ is non-increasing in $L$ and in $L'$. For any $L,L'$ it holds ${\Sigma}_{L,L'}{\;\geqslant\;}0$. Therefore the limit $\lim_{L\to\infty}\lim_{L'\to\infty}{\Sigma}_{L,L'}$ exists. The positivity of ${\Sigma}_{L,L'}$ immediately follows from its definition. Let $f(\eta_1,\dots ,\eta_{L+L'+2})$ be the function s.t. $\Sigma_{L,L'}={\ensuremath{\mathcal D}}_{L+L'+2}(\sqrt f)$. Set $g(\eta_1,\dots,\eta_{L+L'+3}):=f(\eta_2,\dots,\eta_{L+L'+3}).$ Then $g\in{\ensuremath{\mathcal C}}_{L+1,L'}$ and $$\begin{aligned} \label{asfor} \Sigma_{L+1,L'}{\;\leqslant\;}{\ensuremath{\mathcal D}}_{L+L'+3}(\sqrt g)=\nonumber\\ \sum_{i=3}^{L+L'+3}\sum_{\omega\in\Omega_{L+L'+3}} \nu(\omega) c_i(\omega) \left(\sqrt {g(\omega^i)}-\sqrt {g(\omega)}\right)^2+\nonumber\\ \sum_{\omega\in\Omega_{L+L'+3}} \nu(\omega)(1-\omega_{1}\omega_{3})(\rho(1-\omega_2)+(1-\rho)\omega_2)\left(\sqrt {g(\omega^2)}-\sqrt {g(\omega)}\right)^2\nonumber\\ {\;\leqslant\;}\sum_{i=2}^{L+L'+2}\sum_{\omega\in\Omega_{L+L'+2}} \nu(\omega) c_i(\omega) \left(\sqrt {f(\omega^i)}-\sqrt {f(\omega)}\right)^2+\nonumber\\ \sum_{\omega\in\Omega_{L+L'+2}}\nu(\omega)(\rho(1-\omega_1)+(1-\rho)\omega_1)\left(\sqrt {f(\omega^1)}-\sqrt {f(\omega)}\right)^2=\nonumber\\ {\ensuremath{\mathcal D}}_{L+L'+2}(\sqrt f)=\Sigma_{L,L'} \, .\end{aligned}$$ Note that we have used the fact that the occupation variable at position $1$ is unconstrained. Analogously if we set $h(\eta_1,\dots,\eta_{L+L'+3}):=f(\eta_1,\dots,\eta_{L+L'+2}).$ Then $h\in{\ensuremath{\mathcal C}}_{L,L'+1}$ and $ {\ensuremath{\mathcal D}}_{L+L'+3}(\sqrt h){\;\leqslant\;}{\ensuremath{\mathcal D}}_{L+L'+2}(\sqrt f)$. Thus $\Sigma_{L,L'+1}{\;\leqslant\;}\Sigma_{L,L'}$ follows. We split the proof of Theorem \[teo:phasetrans\] (ii) into upper and lower bounds which are stated in the two following lemmas Let $\Sigma$ be defined as in . 
Then for FA-1f model with two empty boundaries and any $\alpha<0$ it holds \[lowereasy\]$$\begin{aligned} \label{eq: free energy lower} \lim_{N \to \infty} \lim_{T \to \infty} \frac{1}{T} \log {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\geqslant\;}- {\Sigma}\, .\end{aligned}$$ \[upperdifficult\] Let $\Sigma$ be defined as in . Then for FA-1f model with two empty boundaries and any $\alpha<-\frac{\Sigma+8\sqrt {\rho(1-\rho)}}{{{\ensuremath{\mathbb A}} }}$ it holds $$\begin{aligned} \label{eq: free energy upper} \lim_{N \to \infty} \lim_{T \to \infty} \frac{1}{T} \log {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\leqslant\;}- {\Sigma}.\end{aligned}$$ Then \[Theorem \[teo:phasetrans\] (ii) for FA-1f model with two empty boundaries\] The result follows immediately from Lemma \[lowereasy\] and Lemma \[upperdifficult\]. We are now left with the proof of the two above Lemmas. We fix $K$ and take $N {\;\geqslant\;}K$. Let ${\ensuremath{\mathcal O}}$ be the event which is verified iff $\pi_T(\eta_{K+1},\dots\eta_{N-K})=1$. Then from we get $$\begin{aligned} \label{num} {\left{\langle}\exp\left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T)\right) \right{\rangle}} {\;\geqslant\;}{\left{\langle}\exp\left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \mathds{1}_{{\ensuremath{\mathcal O}}} \right{\rangle}}{\;\geqslant\;}{\left{\langle}\exp \left( \frac{{\alpha}}{N} \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \, \mathds{1}_{{\ensuremath{\mathcal O}}} \right{\rangle}}_{{\alpha}/N} \exp\left(\frac{T\alpha^2}{N}\right)\, .\end{aligned}$$ By using the fact that on the event ${\ensuremath{\mathcal O}}$, it holds $\int_0^T \eta_{i-1}(s)\eta_{i+1}(s) \, ds =T$ for any $i\in [K+2,N-K-1]$ we get $$\begin{aligned} \label{whynot1} & \int_0^T ds {\ensuremath{\mathcal H}}(\eta(s)) {\;\leqslant\;}2(K+1)T+\sum_{i=K+2}^{N-K-1}\int_0^T ds~~ c_i(\eta(s))=\nonumber \\ & 2(K+1)T+\sum_{i=K+2}^{N-K-1}\int_0^T ds (1-\eta_{i-1}(s)\eta_{i+1}(s))=2(K+1)T\end{aligned}$$ which together with yields $$\begin{aligned} \label{whynot0} {\left{\langle}\exp( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) ) \right{\rangle}} {\;\geqslant\;}\exp\left( \frac{{\alpha}}{N} 2(K+1) T +\frac{{\alpha}^2}{N} T\right) {\left{\langle}\mathds{1}_{\ensuremath{\mathcal O}}\right{\rangle}}_{{\alpha}/N} \, .\end{aligned}$$ From the Donsker-Varadhan large deviation principle it holds $$\begin{aligned} \label{whynot} \lim_{T \to \infty} \frac{1}{T} \log {\left{\langle}\mathds{1}_{\ensuremath{\mathcal O}}\right{\rangle}}_{{\alpha}/N} =-\exp( \frac{{\alpha}}{N})\inf_{f:\nu(f)=1,\nu(f\eta_{K+1}\dots\eta_{N-K})=1}{\ensuremath{\mathcal D}}_N(\sqrt f){\;\geqslant\;}-\Sigma_{K,K} \, ,\end{aligned}$$ where in order to obtain the last inequality we proceed as follows. Denote by $g$ the function that belongs to ${\ensuremath{\mathcal C}}_{K,K}$ and s.t. $\Sigma_{K,K}={\ensuremath{\mathcal D}}_{K+K+2}(\sqrt g)$. Set $$h(\eta_1,\dots,\eta_N):= \frac{1}{\rho^{N-2K-2}} g(\eta_1,\dots\eta_{K},\eta_{K+1},\eta_{N-K},\eta_{N-K+1},\dots\eta_{N})\prod_{j=K+2}^{N-K-1}\eta_{j} \, .$$ Then it can be verified that $\nu(h)=1$, $\nu(h \eta_{K+1}\dots\eta_{N-K})=1$ and ${\ensuremath{\mathcal D}}_N(\sqrt h) = {\ensuremath{\mathcal D}}_{K+K+2}(\sqrt g)$. Thus follows. 
From and we therefore obtain $$\begin{aligned} \label{eq: free energy lower int} \lim_{T \to \infty} \frac{1}{T} \log {\left{\langle}\exp \left( \frac{{\alpha}}{N} {\ensuremath{\mathcal A}}(T) \right) \right{\rangle}} {\;\geqslant\;}- {\Sigma}_{K,K}+ \frac{{\alpha}}{N} 2(K+1)+\frac{{\alpha}^2}{N} \, .\end{aligned}$$ The result follows by taking $N$ to infinity and then $K$ to infinity. Before proving Lemma \[upperdifficult\], we state and prove an auxiliary result. Fix integers $L,L'>0$ and $u\in (0,1)$ and consider $\mathcal{C}_{L,L'}^u$ the set of probability densities on $\{0,1\}^{L+L'+4}$ such that $\eta_{L+1}=\eta_{ L+2}=\eta_{L+3}=\eta_{L+4}=1$ with probability at least $1-u$, namely $$\begin{aligned} \label{eq: CK} {\ensuremath{\mathcal C}}_{L,L'}^u = \Big \{ f : \qquad \nu \big(f(\eta) \big) = 1, \quad \nu \big(f(\eta) \eta_{L+1} \eta_{L+2}\eta_{L+3}\eta_{L+4}\big) {\;\geqslant\;}1-u \Big\} \, .\end{aligned}$$ We will now prove that provided $u$ is sufficiently small the interface energy is well approximated by the infimum of ${\ensuremath{\mathcal D}}_{L+L'+2}(\sqrt f)$ restricted to ${\ensuremath{\mathcal C}}_{L,L'}^u$. More precisely \[prop: interface energy approx\] For any $u>0$ $$\begin{aligned} \label{eq: SKu} \inf_{f \in {\ensuremath{\mathcal C}}^u_{L,L'}} {\ensuremath{\mathcal D}}_{L+L'+2} \big( \sqrt{f} \big) {\;\geqslant\;}{\Sigma}_{L,L'} - \left( 8\sqrt{ {\rho}(1-{\rho})} + {\Sigma}_{L,L'} \right) u \, .\end{aligned}$$ Fix $f \in {\ensuremath{\mathcal C}}^u_{L+1,L'+1}$. The Dirichlet form can be bounded from below by $$\begin{aligned} \label{eq: lower Dirichlet} {\ensuremath{\mathcal D}}_{L+L'+4} (\sqrt{f}) &{\;\geqslant\;}& \sum_{i=1}^{L+1} \sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) c_i (\eta) \big( \sqrt{f(\eta^i)} - \sqrt{f (\eta)} \big)^2 +\nonumber \\ & & \sum_{i=L+4}^{L+L'+4} \sum_{\eta:\eta_{L+2}=\eta_{L+3}=1} \nu(\eta)c_i (\eta) \big( \sqrt{f(\eta^i)} - \sqrt{f (\eta)} \big)^2 +\nonumber \\ & & \sum_{\eta:\eta_{L+3}=1} \nu(\eta)c_{L+2} (\eta) \big( \sqrt{f(\eta^{L+2})} - \sqrt{f (\eta)} \big)^2+\nonumber\\ & & \sum_{\eta:\eta_{L+2}=1} \nu(\eta)c_{L+3} (\eta) \big( \sqrt{f(\eta^{L+3})} - \sqrt{f (\eta)} \big)^2 \, . $$ We define a new probability density $g$ on $\{0,1\}^{L+L'+4}$ $$\begin{aligned} g(\eta) = \frac{f(\eta)\eta_{L+2}\eta_{L+3}}{c(u)} \, , $$ with $c(u): = \nu \big( f(\eta) \eta_{L+2}\eta_{L+3} \big)$. Note that $\nu(g\eta_{L+2}\eta_{L+3})=\nu(g)=1$, thus $g$ belongs to ${\ensuremath{\mathcal C}}_{L+1,L'+1}.$ Furthermore, since $f$ belongs to ${\ensuremath{\mathcal C}}^u_{L+1,L'+1}$ one has $c(u) {\;\geqslant\;}1- u$. Note that the Dirichlet form of $g$ satisfies $$\begin{aligned} c(u){\ensuremath{\mathcal D}}_{L+L'+4}(\sqrt g) &=& \sum_{i=1}^{L+1} \sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) c_i (\eta) \big( \sqrt{f(\eta^i)} - \sqrt{f (\eta)} \big)^2 +\nonumber \\ & & \sum_{i=L+4}^{L+L'+4} \sum_{\eta:\eta_{L+2}=\eta_{L+3}=1} \nu(\eta)c_i (\eta) \big( \sqrt{f(\eta^i)} - \sqrt{f (\eta)} \big)^2 +\nonumber \\ & & + 2(1-\rho)\sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) (1-\eta_{L+1})f(\eta)\nonumber \\ & & + 2(1-\rho)\sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) (1-\eta_{L+4})f(\eta) \, . 
\label{eq: lower Dirichletg}\end{aligned}$$ Decompose $\eta$ as $\eta=(\omega_l,\eta_{L+2},\eta_{L+3},\omega_r)$ where $\omega_l=\eta_1,\dots\eta_{L+1}$ and $\omega_r=\omega_{L+4},\dots\omega_{L+L'+4}$, then $$\begin{aligned} \label{chec} & &\sum_{\eta:\eta_{L+3}=1} \nu(\eta)c_{L+2} (\eta) \big( \sqrt{f(\eta^{L+2})} - \sqrt{f (\eta)} \big)^2 {\;\geqslant\;}2(1-\rho)\sum_{\eta:\eta_{L+2}=\eta_{L+3}=1} \nu(\eta) (1-\eta_{L+1})f(\eta) \nonumber\\ & & \qquad - 4\rho(1-\rho)\sum_{\omega_l,\omega_r}\nu(\omega_l)\nu(\omega_r)\rho(1-\eta_{L+1})\sqrt{f(\omega_l,1,1,\omega_r)}\sqrt{f(\omega_l,0,1,\omega_r)} \, .\end{aligned}$$ Then by using Cauchy-Schwartz $$\begin{aligned} \label{chc2} \sum_{\omega_l,\omega_r}\nu(\omega_l)\nu(\omega_r)\rho(1-\eta_{L+1})\sqrt{f(\omega_l,1,1,\omega_r)}\sqrt{f(\omega_l,0,1,\omega_r)} {\;\leqslant\;}\nonumber\\ \frac{1}{\sqrt{\rho(1-\rho)}} \sqrt {\nu\left(f(1-\eta_{L+1})\right)}\sqrt{\nu(f(1-\eta_{L+1}))}{\;\leqslant\;}\frac{u}{\sqrt{\rho(1-\rho)}}\nonumber\\\end{aligned}$$ where to obtain the last inequality we used the fact that $f$ belongs to ${\ensuremath{\mathcal C}}^u_{L,L'}$. Thus $$\begin{aligned} \label{chec2} & &\sum_{\eta:\eta_{L+3}=1} \nu(\eta)c_{L+2} (\eta) \big( \sqrt{f(\eta^{L+2})} - \sqrt{f (\eta)} \big)^2 \\ && \qquad \qquad {\;\geqslant\;}2(1-\rho)\sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) (1-\eta_{L+1})f(\eta)-4\sqrt{\rho(1-\rho)}u \, . \nonumber\end{aligned}$$ Similarly it can be verified that $$\begin{aligned} \label{chech3} & &\sum_{\eta:\eta_{L+2}=1} \nu(\eta)c_{L+3} (\eta) \big( \sqrt{f(\eta^{L+3})} - \sqrt{f (\eta)} \big)^2\\ && \qquad \qquad {\;\geqslant\;}2(1-\rho)\sum_{\eta:\eta_{L+2}=\eta_{L+3}=1}\nu(\eta) (1-\eta_{L+4})f(\eta)-4\sqrt{\rho(1-\rho)}u \, . \nonumber\end{aligned}$$ Thus by using , , and we get $$\begin{aligned} {\ensuremath{\mathcal D}}_{L+L'+4} (\sqrt{f}) {\;\geqslant\;}c(u) {\ensuremath{\mathcal D}}_{L+L'+4} (\sqrt{g}) - 8\sqrt{\rho(1-\rho)}u{\;\geqslant\;}\Sigma_{L+1,L'+1}- (8\sqrt{\rho(1-\rho)}+\Sigma_{L+1,L'+1})u \, ,\end{aligned}$$ where for the last inequality we used that, as noted above, $g$ belongs to ${\ensuremath{\mathcal C}}_{L+1,L'+1}$ and $c(u){\;\geqslant\;}(1-u)$. Recall Definition \[defW\] and set $\epsilon={{\ensuremath{\mathbb A}} }\delta$ and choose $K{\;\geqslant\;}\bar K(\delta, {{\ensuremath{\mathbb A}} }\delta)$ with $\bar K$ defined in Lemma \[prop: bad blocksnew\] and let $a_N=N^{-1/16}$. 
Then the following inequality holds $$\label{ineevents3} \mathds{1}_{ {\ensuremath{\mathcal W}}_{0,{\delta}a_N}}+\mathds{1}_{ {\ensuremath{\mathcal W}}_{1,1-\delta}}+\sum_{\ell=\lceil \delta(1-a_N)N/K\rceil }^{N/K}\mathds{1}_{ {\ensuremath{\mathcal V}}_{-1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta a_N}^c}{\;\geqslant\;}1$$ which implies $$\label{brutta3} {\left{\langle}\exp\left(\frac{\alpha}{N}\mathcal{A}(T)\right) \right{\rangle}}{\;\leqslant\;}G_1+G_{2}+\sum_{\ell=\lceil \delta (1-a_N) N/K\rceil }^{N/K} G_{3,\ell}$$ where $$\label{GIbis} G_1:={\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta a_N }} \exp\left(\left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s))\right) \right{\rangle}}_{{\alpha}/N}$$ $$\label{GIIbis} G_{2}:= {\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal W}}_{1,1-\delta}}\, \exp \left( \left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N}$$ $$\label{GIII} G_{3,\ell}:= {\left{\langle}\mathds{1}_{ {\ensuremath{\mathcal V}}_{-1,\ell}\cap{\ensuremath{\mathcal W}}_{0,\delta a_N }^c} \, \exp \left( \left(\exp(\frac{\alpha}{N})-1\right) \int_0^T \, ds \; {\ensuremath{\mathcal H}}(\eta(s)) \right) \right{\rangle}}_{{\alpha}/N} \, .$$ As in , $G_1$ is bounded by $$\begin{aligned} \label{miserve} \lim_{T \to \infty} \; \frac{1}{T} \log(G_1){\;\leqslant\;}- a_N^2\delta^2C\frac{N}{K} \, . \label{Itris}\end{aligned}$$ On the other hand since $\alpha<0$, we have $$\begin{aligned} \label{IIquo} G_{2}{\;\leqslant\;}{\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{1,1-\delta}} \right{\rangle}}_{\alpha/N} .\end{aligned}$$ We notice that the event ${\ensuremath{\mathcal W}}_{1,1-\delta}$ implies that there exists at least one box $i\in[1,N/K]$ such that $\pi_T (\mathds{1}_{\{u^i_{K,\epsilon}=1\}}){\;\geqslant\;}1-\delta$. Thus in this box, there are at least 4 consecutive sites occupied with high probability $$\label{eq: occupied sites} \pi_T(\eta_{(i-1)K+\lceil K/2\rceil +1} \; \eta_{(i-1)K+\lceil K/2\rceil+2} \; \eta_{(i-1)K+\lceil K/2\rceil+3} \; \eta_{(i-1)K+\lceil K/2\rceil+4}) {\;\geqslant\;}1-\delta.$$ Let ${\ensuremath{\mathcal R}}_i$ be the event that is verified if holds. We get $$\begin{aligned} \label{IIquo2} {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{1,1-\delta}} \right{\rangle}}_{\alpha/N}{\;\leqslant\;}\sum_{i=1}^{N/K} {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal R}}_{i}} \right{\rangle}}_{\alpha/N} \, .\end{aligned}$$ Donsker-Varadhan large deviation principle implies $$\begin{aligned} \lim_{T\to\infty}\frac{1}{T}\log {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal R}}_{i}} \right{\rangle}}_{\alpha/N} = -\exp(\alpha/N)\inf_{f\in{\ensuremath{\mathcal C}}_{L,L'}^{\delta}} \left\{ {\ensuremath{\mathcal D}}_{L+L'+2}(\sqrt f) \right\} \, ,\end{aligned}$$ where we have set $L=(i-1)K+\lceil K/2\rceil $ and $L'=N-2-(i-1)K-\lceil K/2\rceil $. 
By using Lemma \[prop: interface energy approx\], noticing that $L,L'{\;\geqslant\;}\lceil K/2\rceil-2$ and recalling the monotonicity property stated by Lemma \[monotonicity\], we obtain $$\begin{aligned} \label{IIquo3} \lim_{T\to\infty}\frac{1}{T}\log {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal R}}_{i}} \right{\rangle}}_{\alpha/N} &{\;\leqslant\;}& -\Sigma_{L,L'}+(8\sqrt{\rho(1-\rho)}+\Sigma_{L,L'})\delta-\frac{\alpha}{N}{\Sigma}_{L,L'} \nonumber \\ &{\;\leqslant\;}& -\Sigma+(8\sqrt{\rho(1-\rho)}+\Sigma_{\frac{K}{2},\frac{K}{2}})\delta+\frac{|\alpha|}{N}{\Sigma}_{1,1} \, .\end{aligned}$$ Thus $$\begin{aligned} \label{IIquo5} \lim_{T\to\infty}\frac{1}{T}\log G_{2}{\;\leqslant\;}-\Sigma+C\delta+\frac{|\alpha|\Sigma_{1,1}}{N} \, ,\end{aligned}$$ where $C=8\sqrt{\rho(1-\rho)}+\Sigma_{\frac{K}{2},\frac{K}{2}}$. By using definition and inequality , we get $$\begin{aligned} \label{IIquo3} G_{3,\ell}{\;\leqslant\;}{\left{\langle}\mathds{1}_{{\ensuremath{\mathcal V}}_{-1,\ell}\cap{\ensuremath{\mathcal W}}_{0,\delta a_N}^c} \right{\rangle}}_{\alpha/N} \exp \left(\frac{T\alpha^2}{N}\right)\exp\left(\frac{\alpha}{N}T\ell (K{{\ensuremath{\mathbb A}} }-2{{\ensuremath{\mathbb A}} }- K\epsilon)\right) \, .\end{aligned}$$ Note that ${\ensuremath{\mathcal V}}_{-1,\ell}\cap{\ensuremath{\mathcal W}}_{0,\delta a_N}^c$ is a subset of the event ${\ensuremath{\mathcal W}}_{1,1-(\delta a_N+(\ell+1)\frac{K}{N})}$. Thus, we can use an estimate similar to in order to bound from above ${\left{\langle}\mathds{1}_{{\ensuremath{\mathcal W}}_{1,1-(\delta a_N+(\ell+1)\frac{K}{N})}} \right{\rangle}}_{\alpha/N}$ $$\lim_{T\to\infty}\frac{1}{T}\log {\left{\langle}\mathds{1}_{{\ensuremath{\mathcal V}}_{-1,\ell}\cap{\ensuremath{\mathcal W}}_{0,\delta a_N}^c} \right{\rangle}}_{\alpha/N} {\;\leqslant\;}-\Sigma+C \left( \delta a_N+(\ell+1)\frac{K}{N} \right) +\frac{C''}{N} \, , \label{tis}$$ where $C'' >0$ is a constant. Combining and , we obtain $$\lim_{T\to\infty}\frac{1}{T}\log G_{3,\ell} {\;\leqslant\;}-\Sigma+C \left( \delta a_N+(\ell+1)\frac{K}{N} \right) + \alpha \frac{K}{N} \ell \left( {{\ensuremath{\mathbb A}} }(1 - {\delta}) -2\frac{{{\ensuremath{\mathbb A}} }}{K} \right) +\frac{C''}{N} \, , \label{tis++}$$ where we used that $\epsilon = {{\ensuremath{\mathbb A}} }{\delta}$. Recall $\ell{\;\geqslant\;}\lceil \delta (1-a_N) \frac{N}{K} \rceil $ and $C=8\sqrt{\rho(1-\rho)}+\Sigma_{\frac{K}{2},\frac{K}{2}}$. Thus for ${\alpha}< -\frac{8\sqrt{\rho(1-\rho)}+\Sigma}{{{\ensuremath{\mathbb A}} }}$, we can choose ${\delta}$ small enough and $K$ large enough such that $$\lim_{T\to\infty}\frac{1}{T}\log G_{3,\ell}{\;\leqslant\;}-\Sigma+C' \frac{K}{N} \, . \label{tistis}$$ Sending $N\to \infty$ and then $K\to\infty$, we get the desired result by collecting , , and . East and FA-1f with one empty boundaries: proof of Theorem \[teo:phasetrans\] (ii) {#largealpha2} ---------------------------------------------------------------------------------- Here we explain how to extend the results of the previous section to the case of East and FA-1f model with one empty boundary, thus completing the proof of Theorem \[teo:phasetrans\] (ii). We start by giving the definition of the interface energy $\Sigma$ for these models. 
Fix an integer $L>0$ and consider $\mathcal{C}_{L}$, the set of probability densities on $\Omega_L$ such that $\eta_{1}=1$ with probability one, namely $$\begin{aligned} \label{eq: CK2} {\ensuremath{\mathcal C}}_{L} = \Big \{ f : \qquad \nu \big(f \big) = 1, \quad \nu \big(f\eta_{1}\big) = 1 \Big\} \, .\end{aligned}$$ Let ${\Sigma}_{L} = \inf_{f \in \mathcal{C}_{L}} {\ensuremath{\mathcal D}}_{L} \big( \sqrt{f} \big)$, then we define the interface energy ${\Sigma}$ as $$\label{defS2} {\Sigma}:=\lim_{L\to\infty}{\Sigma}_{L}$$ The definition is well posed thanks to the following Lemma \[monotonicity2\]. \[monotonicity2\] Let ${\ensuremath{\mathcal D}}_L$ be either the Dirichlet form of FA-1f with one empty boundary or the Dirichlet form of East. Then ${\Sigma}_{L}$ is non-increasing in $L$ and it holds ${\Sigma}_{L}{\;\geqslant\;}0$. Therefore the limit $\lim_{L\to\infty}{\Sigma}_{L}$ exists. Let $f(\eta_1,\dots,\eta_L)$ be the function in ${\ensuremath{\mathcal C}}_L$ s.t. ${\ensuremath{\mathcal D}}_L(\sqrt f)=\Sigma_L$ and set $g(\eta_1,\dots,\eta_{L+1}):=f(\eta_1,\dots,\eta_L)$. Then $g\in{\ensuremath{\mathcal C}}_{L+1}$ and, as for inequality , one can verify that ${\ensuremath{\mathcal D}}_{L+1}(\sqrt g) {\;\leqslant\;}{\ensuremath{\mathcal D}}_L(\sqrt f)$ which implies $\Sigma_{L+1}{\;\leqslant\;}\Sigma_L$ (since $\Sigma_{L+1}{\;\leqslant\;}{\ensuremath{\mathcal D}}_{L+1}(\sqrt g)$ and $\Sigma_L={\ensuremath{\mathcal D}}_L(\sqrt f)$). We will now state a result which allows one to approximate the interface energy in the spirit of Lemma \[prop: interface energy approx\]. For any integer $L>0$ and $u\in(0,1)$ let $${\ensuremath{\mathcal C}}_L^{u}:=\Big \{ f : \qquad \nu \big(f(\eta) \big) = 1, \quad \nu \big(f(\eta) \eta_{1} \eta_{2}\big) {\;\geqslant\;}1-u \Big\}$$ then \[energy approxbis\] For any $u>0$ $$\begin{aligned} \label{eq: SKubis} \inf_{f \in {\ensuremath{\mathcal C}}^u_{L}} {\ensuremath{\mathcal D}}_{L} \big( \sqrt{f} \big) {\;\geqslant\;}{\Sigma}_{L} - \left( 4\sqrt{ {\rho}(1-{\rho})} + {\Sigma}_{L} \right) u \, .\end{aligned}$$ The proof is analogous to the proof of Lemma \[prop: interface energy approx\], therefore we sketch only the main points. Let $f\in{\ensuremath{\mathcal C}}_L^u$ and set $g(\eta)=\eta_1 f(\eta)/c(u)$ with $c(u)=\nu(f\eta_1){\;\geqslant\;}1-u$. Then $g$ belongs to ${\ensuremath{\mathcal C}}_L$ and it holds $${\ensuremath{\mathcal D}}_L(\sqrt f){\;\geqslant\;}c(u){\ensuremath{\mathcal D}}_L(\sqrt g)-4\sqrt {\rho(1-\rho)}u{\;\geqslant\;}(1-u)\Sigma_L-4\sqrt {\rho(1-\rho)}u$$ where the first inequality is obtained following the same lines as in Lemma \[prop: interface energy approx\]. By using the above definitions and results we are now ready to prove Theorem \[teo:phasetrans\] (ii). \[Theorem \[teo:phasetrans\] (ii) for East and for FA-1f model with one empty boundary\] It is enough to prove that if $\Sigma$ is defined as in then for FA-1f with one empty boundary and for East the same inequalities as in Lemma \[lowereasy\] and Lemma \[upperdifficult\] hold, with $\alpha_0=-\frac{4\sqrt{\rho(1-\rho)}+\Sigma}{{{\ensuremath{\mathbb A}} }}$. In order to prove the lower bound one introduces the event ${\ensuremath{\mathcal O}}$ which is verified iff $\pi_T(\eta_1\dots\eta_{N-K})=1$.
Along the same lines used to obtain , one can verify that $$\lim_{N\to\infty}\lim_{T\to\infty}\frac{1}{T}\log{\left{\langle}\exp\left(\frac{\alpha}{N}{\ensuremath{\mathcal A}}(T)\right) \right{\rangle}}{\;\geqslant\;}-\Sigma_{K}+\frac{\alpha}{N}(K+1) \, .$$ The lower bound follows again by taking $N$ to infinity and then $K$ to infinity. In order to prove the upper bound we follow exactly the same lines as in Lemma \[upperdifficult\], the only difference being that in inequality now the event ${\ensuremath{\mathcal R}}_i$ is substituted by an event $\widetilde{\ensuremath{\mathcal R}}_i$ which is verified iff $$\pi_T(\eta_{(i-1)K+1}\eta_{(i-1)K+2}){\;\geqslant\;}1-\delta.$$ Then the Donsker-Varadhan principle yields $$\label{IIquo3new} \lim_{T\to\infty} \frac{1}{T}\log {\left{\langle}\mathds{1}_{\widetilde{\ensuremath{\mathcal R}}_{i}} \right{\rangle}}_{\alpha/N} =-\exp(\alpha/N)\inf_{f:\nu(f)=1, \nu(f(\eta_{(i-1)K+1}\eta_{(i-1)K+2}){\;\geqslant\;}1-\delta}{\ensuremath{\mathcal D}}_{N}(\sqrt f)$$ Given $f$ on $\Omega_N$ we define a new function $g$ on $\Omega_{N-(i-1)K}$ as follows $$g(\eta_1,\dots,\eta_{N-(i-1)K}):=\sum_{\eta_1'\dots\eta'_{(i-1)K}}\prod_{j=1}^{(i-1)K}\rho^{\eta_j'}(1-\rho)^{1-\eta'_j}f(\eta_1'\dots\eta'_{(i-1)K},\eta_1,\dots,\eta_{N-(i-1)K}).$$ Then one can verify that it holds ${\ensuremath{\mathcal D}}_N(\sqrt f){\;\geqslant\;}{\ensuremath{\mathcal D}}_{N-(i-1)K}(\sqrt g)$ and if $f$ satisfies $\nu(f)=1$ and $\nu(f\eta_{(i-1)K+1}\eta_{(i-1)K+2}){\;\geqslant\;}1-\delta$ then $g$ satisfies $\nu(g)=1$ and $\nu (g \eta_1\eta_2){\;\geqslant\;}1-\delta$, therefore $g$ belongs to ${\ensuremath{\mathcal C}}^{\delta}_{N-(i-1)K}$. Therefore from we get $$\begin{aligned} && \lim_{T\to\infty} \frac{1}{T}\log {\left{\langle}\mathds{1}_{\widetilde{\ensuremath{\mathcal R}}_{i}} \right{\rangle}}_{\alpha/N}{\;\leqslant\;}-\Sigma_{N-(i-1)K}+(4\sqrt{\rho(1-\rho)}+\Sigma_{N-(i-1)K})\delta+\frac{|\alpha|}{N}{\Sigma}_{1}\nonumber\\ && {\;\leqslant\;}-\Sigma+(4\sqrt{\rho(1-\rho)}+\Sigma_{K})\delta+\frac{|\alpha|}{N}{\Sigma}_{1}\end{aligned}$$ \[remdiff\] Fix $\rho\in(0,1)$ and let $\Sigma_1$ and $\Sigma_2$ be the interface energies defined in formulas and by using the Dirichlet form of FA-1f with two empty boundaries and one empty boundary respectively. Then it can be easily verified that $\Sigma_1=2\Sigma_2$. Proof of Theorem \[teo:condmes\] (ii) and a stronger result {#condmes2} ----------------------------------------------------------- We start by establishing a result which is stronger than Theorem \[teo:condmes\] (ii). \[usolemma2\] Consider the East or FA-1f model in one dimension. Let $K_N=\sqrt N$ and $\delta_N=N^{-1/8}$ and for each $N$ let the activity-density labels be defined with $K=K_N$ and $\epsilon_N={{\ensuremath{\mathbb A}} }\delta_N/2$. Then if $\alpha<\alpha_0$ with $\alpha_0$ defined in Lemma \[upperdifficult\] it holds $$\begin{aligned} \label{eq: densite haute bis} \lim_{N\to\infty}\lim_{T \to \infty} \mu_{\alpha,T}^{N} ( {\ensuremath{\mathcal W}}_{1,1-\delta_N})= 1 \, .\end{aligned}$$ Fix $N$ sufficiently large such that $K_N{\;\geqslant\;}\bar K(\frac{\delta_N}{2}, {{\ensuremath{\mathbb A}} }\frac{\delta_N}{2})$ with $\bar K$ defined in Lemma \[prop: bad blocksnew\].
Since $\mu^N_{\alpha,T}$ is a measure from inequality it holds $$\label{mone} 1-\mu_{\alpha,T}^N( {\ensuremath{\mathcal W}}_{0,\delta_N a_N})- \sum_{\ell=\lceil \delta_N (1-a_N) \frac{N}{K} \rceil }^{N/K} \mu_{\alpha,T}^N( {\ensuremath{\mathcal V}}_{-1,\ell}\cap {\ensuremath{\mathcal W}}_{0,\delta_N a_N}^c) {\;\leqslant\;}\mu_{\alpha,T}^N( {\ensuremath{\mathcal W}}_{1,1-\delta_N }){\;\leqslant\;}1 \, ,$$ where we set $a_N=N^{-1/16}$. We get from Lemma \[lowereasy\] $$\lim_{N\to\infty}\lim_{T\to\infty}\frac{1}{T}\log\mu_{\alpha,T}^N ( {\ensuremath{\mathcal W}}_{0,a_N \delta_N}){\;\leqslant\;}\Sigma+\lim_{N\to\infty}\lim_{T\to\infty}\frac{1}{T}\log(G_1){\;\leqslant\;}\Sigma -C\lim_{N\to\infty}a_N^2\delta_N^2\frac{N}{K_N} \, , \label{mone1}$$ where $G_1$ has been defined in and we used inequality . Choose ${\alpha}$ such that $\alpha< - 2 \frac{8\sqrt{\rho(1-\rho)}+\Sigma}{{{\ensuremath{\mathbb A}} }} -\delta_N$. For $G_{3,\ell}$ defined in with $\ell {\;\geqslant\;}\lceil \delta_N (1-a_N) \frac{N}{K} \rceil$, one has by using for any $N$ large enough $$\begin{aligned} \lim_{T\to\infty}\frac{1}{T}\log\mu_{\alpha,T}^N( {\ensuremath{\mathcal V}}_{-1,\ell}\cap {\ensuremath{\mathcal W}}_{0,a_N \delta_N}^c) {\;\leqslant\;}\Sigma + \lim_{T\to\infty}\frac{1}{T}\log (G_{3,\ell}) {\;\leqslant\;}- \frac{{{\ensuremath{\mathbb A}} }}{2} \delta_N + C'' \frac{K_N}{N} \, .\nonumber\\ \label{mone2}\end{aligned}$$ Thanks to the hypothesis on $K_N$, $a_N$, $\delta_N$, it holds $\lim_{N\to\infty}\delta_N\frac{N}{K_N}=\infty$ and $\lim_{N\to\infty}a_N^2\delta_N^2\frac{N}{K_N}=\infty$ thus by using , and the proof is concluded. The event $ {\ensuremath{\mathcal W}}_{1,1-\delta}$ implies $$\pi_T( \sum_i \eta_i){\;\geqslant\;}N(1-\delta), \quad \text{and} \quad \pi_T( \sum_i c_i(\eta)){\;\leqslant\;}\delta N .$$ Thus the result follows by taking $\gamma_N=\delta_N$ with $\delta_N$ defined in Lemma \[usolemma2\]. FA-1f in $d>1$: proof of Theorem \[poor\] {#secpoor} ----------------------------------------- Recall that we have extended definition \[defW\] to the higher dimensional case by substituting $N/K$ with $(N/K)^d$. We start from inequality $$\label{last} \mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}}+\mathds{1}_{{\ensuremath{\mathcal W}}_{1,1-\delta}}+\mathds{1}_{{\ensuremath{\mathcal W}}_{-1,\delta/2}}{\;\geqslant\;}1$$ which leads to $$\label{uffff} 1-\mu_{\alpha,T}^{N,d}(\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}})-\mu_{\alpha,T}^{N,d}(\mathds{1}_{{\ensuremath{\mathcal W}}_{-1,\delta/2}}){\;\leqslant\;}\mu_{\alpha,T}^{N,d} (\mathds{1}_{{\ensuremath{\mathcal W}}_{1,1-\delta}}){\;\leqslant\;}1.$$ Lemma \[nonhopiu\] guarantees that $$\lim_{T\to\infty}\lim_{N\to\infty}\mu_{\alpha,T}^{N,d}(\mathds{1}_{{\ensuremath{\mathcal W}}_{0,\delta/2}})=0\label{ufffff}$$ Then we notice that if at time zero the configuration is completely filled and the clocks on all the $N^c$ sites which are unconstrained do not ring up to time $T$ then ${\ensuremath{\mathcal A}}(T)=0$. Therefore $$\left {\langle}\exp\left(\frac{\alpha}{N^{d-c}} {\ensuremath{\mathcal A}}(T)\right) \right{\rangle}{\;\geqslant\;}\exp(-N^c T)\rho^{N^d} \label{uffffff}$$ and by inserting this bound in the denominator of the definition of the measure $\mu_{\alpha,T}^{N,d}$ and using inequality (extended to dimension $d$), we get $$\lim_{T\to\infty}\lim_{N\to\infty}\mu_{\alpha,T}^{N,d}(\mathds{1}_{{\ensuremath{\mathcal W}}_{-1,\delta/2}})=0$$ provided $\alpha<-2/(\delta{{\ensuremath{\mathbb A}} })$ and the result is proven by inserting and into . 
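Before turning to the large deviations of a reduced activity, we remark that the variational characterization of the interface energy $\Sigma_{L,L'}$ given above can be explored numerically for small sizes. The sketch below (an illustration only, not used in the proofs) minimizes a Dirichlet form over densities supported on configurations whose two marked sites are occupied, by solving a constrained generalized eigenvalue problem. The specific quadratic form implemented (FA-1f constraint $1-\eta_{i-1}\eta_{i+1}$ with two empty boundaries and flip rates $\rho$ and $1-\rho$ written out explicitly) is an assumption of the sketch and has to be matched to the convention adopted in the definition of ${\ensuremath{\mathcal D}}$; only then do the returned numbers approximate $\Sigma_{L,L'}$.

```python
import numpy as np
from itertools import product
from scipy.linalg import eigh

def interface_energy(L=3, Lp=3, rho=0.7):
    """Numerical sketch of Sigma_{L,L'} for FA-1f with two empty boundaries.

    We minimize D(g) = sum_eta sum_i nu(eta) c_i(eta) r_i(eta) (g(eta^i)-g(eta))^2
    with r_i = rho (0->1) or 1-rho (1->0) and c_i = 1 - eta_{i-1}*eta_{i+1}
    (boundary sites empty), over g supported on configurations with
    eta_{L+1} = eta_{L+2} = 1 (1-based) and nu(g^2) = 1.  This convention is
    an assumption; match it to the paper's definition of the Dirichlet form.
    """
    n = L + Lp + 2
    configs = list(product((0, 1), repeat=n))
    nu = np.array([rho ** sum(c) * (1 - rho) ** (n - sum(c)) for c in configs])
    idx = {c: k for k, c in enumerate(configs)}
    A = np.zeros((2 ** n, 2 ** n))
    for c in configs:
        k = idx[c]
        for i in range(n):
            left = c[i - 1] if i > 0 else 0
            right = c[i + 1] if i < n - 1 else 0
            if left * right == 1:            # constraint violated: no flip
                continue
            cp = list(c); cp[i] = 1 - c[i]
            kp = idx[tuple(cp)]
            w = nu[k] * ((1.0 - rho) if c[i] == 1 else rho)
            A[k, k] += w; A[kp, kp] += w
            A[k, kp] -= w; A[kp, k] -= w
    # Restrict to densities supported on {eta_{L+1} = eta_{L+2} = 1}
    # (0-based indices L and L+1); the minimizer can be taken nonnegative,
    # so the constrained minimum is the smallest generalized eigenvalue.
    S = [idx[c] for c in configs if c[L] == 1 and c[L + 1] == 1]
    vals = eigh(A[np.ix_(S, S)], np.diag(nu[S]), eigvals_only=True)
    return vals[0]

print(interface_energy())
```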
Large deviations for a reduced activity {#new} ======================================= As we already recalled in the introduction, in the absence of constraints (i.e. for the model defined by and with $r_i(\eta)\equiv 1$), the probability of observing a large deviation from the mean value scales as $$\lim_{N\to\infty}\lim_{t\to\infty} \; \frac{1}{Nt} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt} \simeq a \right {\rangle}=-f(a) \, ,$$ with $0< f(a) <\infty$ for $a\neq 2\rho(1-\rho)$. In this section we will prove Theorems \[fluctu1\] and \[fluctu2\] which establish that a different scaling occurs for the large deviations of the activity below the mean value in the presence of constraints. Let us start with the upper bound. Let $\alpha_0$ be defined as in Theorem \[teo:phasetrans\], then $$\left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt}\sim u {{\ensuremath{\mathbb A}} }\right {\rangle}{\;\leqslant\;}\exp(-\alpha_0 u {{\ensuremath{\mathbb A}} }t) \; \left {\langle}\exp\left(\frac{\alpha_0}{N}{\ensuremath{\mathcal A}}(t)\right) \right {\rangle}\, .$$ Therefore, by taking the $\limsup_{N\to\infty}$ on the left and right hand sides and using Theorem \[teo:phasetrans\] (i), we obtain the desired upper bound. For the lower bound, we consider FA-1f with two empty boundaries (the proof in the other cases is analogous). The contribution to ${\ensuremath{\mathcal A}}(t)/(Nt)$ can be decomposed into the contributions coming respectively from the configuration changes during the time intervals $[0,u t]$ and $[u t,t]$. With probability (w.r.t. the initial distribution $\nu$ and the evolution of the process) going to one as $t$ goes to infinity, the first contribution goes to $u{{\ensuremath{\mathbb A}} }$ and, thanks to the reversibility of $\nu$, the distribution at time $u t$ is still $\nu$. Then we can impose that during the second time interval $[u t,t]$ the contribution to ${\ensuremath{\mathcal A}}(t)/(Nt)$ is at most $2/\sqrt N$ by requiring that $\eta_{\sqrt N+1 }(s)\dots\eta_{N-\sqrt N}(s)=1$ for any time $s$ in $[u t, t]$. Thus $$\liminf_{N\to\infty}\lim_{t\to\infty} \; \frac{1}{t} \log \left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{Nt}\sim u{{\ensuremath{\mathbb A}} }\right {\rangle}{\;\geqslant\;}\liminf_{N\to\infty}\lim_{t\to\infty} \; \frac{1}{t} \log \left {\langle}\mathds{1}_{{\ensuremath{\mathcal B}}} \right {\rangle}\, ,$$ where ${\ensuremath{\mathcal B}}$ is the event which is verified iff starting from $\nu$ it holds $\pi_{(1-u)t}(\eta_{\sqrt N+1} \dots\eta_{N-\sqrt N})=1$. As we did for the event ${\ensuremath{\mathcal O}}$ in we get here $$\liminf_{N\to\infty} \lim_{t\to\infty}\frac{1}{(1-u)t}\log {\langle}\mathds{1}_{{\ensuremath{\mathcal B}}} {\rangle}{\;\geqslant\;}-\Sigma \, ,$$ and the proof is completed. The proof of the upper bound follows along the same lines as for Theorem \[fluctu1\], starting now from the inequality $$\left {\langle}\frac{{\ensuremath{\mathcal A}}(t)}{N^d t}\sim u {{\ensuremath{\mathbb A}} }\right {\rangle}{\;\leqslant\;}\exp(-\alpha_0 u \, {{\ensuremath{\mathbb A}} }tN^c) \left {\langle}\exp\left(\frac{\alpha_0}{N^{d-c}}{\ensuremath{\mathcal A}}(t)\right) \right {\rangle}\, .$$ The lower bound is derived in the same way by freezing the configuration during the time interval $[ut,t]$. The probability of realizing this event can be bounded from below by the probability that the $N^c$ unconstrained sites (those which are in contact with the empty boundary) remain frozen equal to 1.
Acknowledgements {#acknowledgements .unnumbered} ---------------- We wish to thank L.Bertini, B.Derrida, V.Lecomte, F. van Wijland and L.Zambotti for very useful discussions. We acknowledge the support of the French Ministry of Education through the ANR BLAN07-2184264. C.T acknowledges the support of ANR DynHet and of the ERC Advanced Grant PTRELSS 228032. [10]{} D.Aldous, P.Diaconis, [*The asymmetric one-dimensional constrained Ising model: rigorous results*]{} J.Stat.Phys [**107**]{}, 945 (2002) T. Bodineau, V. Lecomte, C. Toninelli, work in progress. N. Cancrini, F. Martinelli, C. Roberto, C. Toninelli, [*Kinetically constrained spin models*]{}, Probability Theory and Related Fields **140**, 459–504, (2008) N. Cancrini, F. Martinelli, R. Schonmann, C. Toninelli, [*Facilitated Oriented Spin Models: Some Non Equilibrium Results*]{}, J.Stat.Phys [**138**]{}, 1109-1123 (2010) N.Cancrini, F.Martinelli, C.Roberto, C.Toninelli, [*Facilitated spin models: recent and new results*]{} in Methods of contemporary mathematical statistical physics, Lecture Notes in Mathematics, p.307-339 R.Kotecky Ed., Springer (2009); A.Dembo, O.Zeitouni, [*Large deviations techniques and applications*]{}, series Stochastic modelling and applied probability, vol.38, Springer (1998) G.H. Fredrickson, H.C. Andersen, *[Kinetic Ising model of the glass transition]{}, Phys. Rev. Lett. [**53**]{}, 1244–1247, (1984)* G.H. Fredrickson, H.C. Andersen, *[Facilitated kinetic Ising models and the glass transition]{} J. Chem. Phys. [**83**]{}, 5822–5831 (1985)* J.P. Garrahan, R.L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk, F. van Wijland, [*First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories* ]{}, J. Phys. A [**42**]{} 075007 (2009) J.P. Garrahan, R.L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk, F. van Wijland, [*Dynamic first-order transition in kinetically constrained models of glasses*]{} Phys.Rev.Lett. [**98**]{}, 195702 (2007) J.P. Garrahan, P. Sollich, C.Toninelli, [*Kinetically Constrained Models*]{}, preprint arXiv:1009.6113 R. Jack, J.P. Garrahan, D. Chandler, [*Space-time thermodynamics and subsystem observables in kinetically constrained models of glassy materials*]{} J.Chem.Phys. [**125**]{}, 184509, (2006) J. Jäckle, S. Eisinger, *A hierarchically constrained kinetic Ising model*, Z. Phys. B: Condens. Matter, **84**, 115–124, (1991) C. Kipnis, C. Landim, [*Scaling limits of interacting particle systems*]{}, Grundlehren der Mathematischen Wissenschaften [**320**]{} Springer (1999) M.Merolle, J.P.Garrahan, D.Chandler [*Space-time thermodynamics of the glass transition*]{}, Proc.Matl.Acad.Sci. USA, [**102**]{}, 10837–10840 (2005) F. Ritort, P. Sollich, [*Glassy dynamics of kinetically constraint models*]{}, Advances in Physics, [**52**]{}, 219-342, (2003) R. Schonmann, S. Shlosman, [*Complete analyticity for $2$D Ising completed*]{}, Comm. Math. Phys. [**170**]{}, no. 2, 453–482 (1995)
--- abstract: 'Under the assumption that the photospheric quiet-Sun magnetic field is turbulent, the cancellation function has previously been used to estimate the true, resolution-independent mean, unsigned vertical flux $\langle|B_z|\rangle_{\mathrm{true}}$. We show that the presence of network elements, noise, and seeing complicates the measurement of accurate cancellation functions and their power-law exponents $\kappa$. Failure to exclude network elements previously led to too low estimates of both the cancellation exponent $\kappa$ and of $\langle|B_z|\rangle_{\mathrm{true}}$. However, both $\kappa$ and $\langle|B_z|\rangle_{\mathrm{true}}$ are over-estimated due to noise in magnetograms. While no conclusive value can be derived with data from current instruments, our [*Hinode*]{}/SP results of $\kappa\lessapprox0.38$ and $\langle|B_z|\rangle_{\mathrm{true}}\lessapprox 270\,$gauss can be taken as upper bounds.' author: - 'A. $^{1}$ ,J. $^{2}$' title: Instrumental and Observational Artifacts in Quiet Sun Magnetic Flux Cancellation Functions --- Introduction ============ Turbulent fields possess self-similar power-laws, [*i.e.*]{}, they are fractals [@BrPrSe+1992; @F95]. Such power-laws ([*i.e.*]{}, self-similarity) are found in systems where the underlying physical processes are the same at all scales ([*e.g.*]{}, vortex or flux tube stretching suggested by ). In these systems, the fields have the same degree of complexity (look the same) regardless of the observational resolution. For example, incompressible magnetohydrodynamics (MHD) is expected [@I64; @K65; @GoSr1995] and found ([*e.g.*]{}, ; ) to display self-similar magnetic energy spectra. Power-law magnetic energy spectra are also seen in compressible, stratified MHD photospheric simulations [@Br1995] including those with realistic radiation and partial ionization [@PiGrCaSc2009; @MoPiGrPr+2011]. Power-laws are found in solar observations of the line-of-sight magnetic energy, kinetic energy, and even the granulation intensity [@AbYu2010; @AbYu2010b; @GoYuCa+2010]. Power-laws are, of course, fundamental in measurements of the fractal dimension of magnetic structures in both observations and simulations ([*e.g.*]{}, ). Both the observations and simulations above indicate that the quiet Sun magnetic field is likely turbulent. The application of turbulent self-similarity to estimate the true mean unsigned vertical flux from magnetograms was introduced by @PiGrDaSc2009 ([-@PiGrDaSc2009], hereafter PGDS2009). Define the “net” flux ($\mathrm{Flux}_l$) at scale $l$ for any vertical field $f_z$ to be the flux remaining after averaging over boxes $\mathcal A_i(l)$ of edge length $l$: $$\mathrm{Flux}_{l} = \sum_i\bigg{|}\int_{\mathcal A_i(l)\in\mathcal A}f_z\mathrm{d}a\bigg{|}\,, \label{eq:netflux}$$ where the boxes partition the total area: $\bigcup_i\mathcal A_i(l)=\mathcal A$. The unsigned flux of the field is given by $$\mathrm{Flux}_{0} = \int_{\mathcal A}|f_z|\mathrm{d}a\,. \label{eq:netflux}$$ The ratio of the net flux at length scale $l$ to the true mean unsigned flux ([*i.e.*]{}, flux [*cancellation*]{}) is called the partition function $$\chi(l) = \frac{\mathrm{Flux}_{l}}{\mathrm{Flux}_{0}}\,, \label{eq:partition}$$ which follows a power-law for self-similar fields, $$\chi(l)\propto l^{-\kappa}\,, \label{eq:kappa}$$ where $\kappa$ is the cancellation exponent [@ODS+92].
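To make these definitions concrete, the following short Python sketch (ours, for illustration; the array, box sizes, and function name are placeholders and not part of any instrument pipeline) evaluates $\mathrm{Flux}_l$ by partitioning a magnetogram into boxes of $l\times l$ pixels and forms the cancellation function of Equation (\[eq:partition\]).

```python
import numpy as np

def cancellation_function(bz, box_sizes):
    """chi(l) = Flux_l / Flux_0 for a 2-D vertical-field map `bz` (a sketch).

    Boxes of l x l pixels partition the map; pixels left over at the map
    edges are discarded, which is one of several reasonable conventions.
    """
    chi = []
    for l in box_sizes:
        ny, nx = (bz.shape[0] // l) * l, (bz.shape[1] // l) * l
        crop = bz[:ny, :nx]
        blocks = crop.reshape(ny // l, l, nx // l, l)
        net = np.abs(blocks.sum(axis=(1, 3))).sum()   # Flux_l: |net flux| summed over boxes
        chi.append(net / np.abs(crop).sum())          # divide by the unsigned flux Flux_0
    return np.array(chi)

# Toy check: a spatially uncorrelated, mixed-polarity field gives chi(l) ~ 1/l.
rng = np.random.default_rng(1)
print(cancellation_function(rng.normal(size=(512, 512)), [1, 2, 4, 8, 16, 32]))
```

For a spatially uncorrelated, mixed-polarity field this construction returns $\chi(l)\propto l^{-1}$, the random-field limit discussed below.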
For a completely self-similar field, knowledge of net flux at any scale and of the power-law scaling exponent implies knowledge of net flux at all scales and, hence, the unsigned flux. The two extreme examples are a unipolar field, for which the net flux always equals the unsigned flux ($\kappa=0$), and a random field. For a random field, the observed net flux equals the unsigned flux times the ratio of the noise correlation length to the diameter of the resolution element ($\kappa=1$). See the Appendix for a mathematical derivation of the random noise case and of the general formula, Equation (\[eq:extrapolate\]), to determine the unsigned flux for a completely self-similar field given $\kappa$ and the correlation length below which the field becomes smooth. Under the assumption that the quiet-Sun magnetic field is turbulent, the theoretical framework in the Appendix applies. This might suggest that the unsigned quiet-Sun flux could be deduced from Zeeman-based instruments. Such studies have previously been made (PGDS2009; ). PGDS2009 observed a power-law in $\chi(l)$ (which they dubbed the “cancellation function” when applied to a magnetogram) down to $\approx200\,$km with $\kappa=0.26$ in [*Hinode*]{} Spectro-Polarimeter (SP; ) data. Note that power-laws for $\chi(l)$ have previously been seen in simulations of the electric currents [@SVCN+02; @PGMP05] and in solar observations of current-helicity [@SoVaAbCa+2003b; @Ab2003; @SVCV+04; @AbYu2010]. PGDS2009 used Equation (\[eq:extrapolate\]) to estimate the net flux at a scale of $800\,$m to be $\approx 50\,$G. Turbulent power-laws ([*e.g.*]{}, energy spectra and $\chi(l)$) extend only down to the dissipative range, but the magnetic diffusion scale is expected to be significantly smaller than $800\,$m. Thus, the extrapolation was taken as a lower bound, $\mathrm{Flux}_0>50\,$G (gauss). This result was found to be in agreement with a separate extrapolation from radiative MHD simulations. Other estimates of $\kappa$ can be made. found a 30% increase in flux changing from a resolution of 1[$^{\prime\prime}$]{} to 0.33[$^{\prime\prime}$]{} which indicates $\kappa=0.24$. found a factor of 3.7 more flux on doubling of resolution, indicating $\kappa=1.9$. Using many instruments with different spatial resolutions, fit $\mathrm{Flux}_l \propto l^{-1}$, [*i.e.*]{}, $\kappa$=1. The data set used by PGDS2009 was recalibrated by leading to more pronounced high field strength tails in the flux probability distribution. The power law from the recalibrated magnetogram yielded a value of $\kappa$ that is half of the value derived by PGDS2009. In addition to the SP magnetogram computed cancellation functions from a set of [*Hinode*]{} Narrow-band Filter Imager (NFI) Na [i]{} $D_{1}$ magnetograms. The resulting $\kappa$ was found to be 0.127, [*i.e.*]{}, very similar to the value for the recalibrated SP magnetogram. In order to agree with Hanle-based magnetic field measurements, this cancellation function needs to be extrapolated down to a spatial scale of 10–100 m. Another application for using the fractal nature of magnetic fields is in flare predictions. Previous works (, [-@SVCV+04]; ) have shown it to be promising. However, as in the case of cancellation functions, the data-sample sizes used to study the suitability of the method have been limited. 
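Returning to the resolution-comparison estimates of $\kappa$ quoted above, Equation (\[eq:kappa\]) can be inverted directly from the net flux measured at two resolutions. The snippet below is only an illustration that reproduces the two literature numbers mentioned in the text; the flux ratios are the quoted values and nothing else is implied about those analyses.

```python
import numpy as np

def kappa_from_two_resolutions(flux_coarse, flux_fine, l_coarse, l_fine):
    """Invert Flux_l ~ l^-kappa using net-flux measurements at two scales."""
    return np.log(flux_fine / flux_coarse) / np.log(l_coarse / l_fine)

# 30% more flux when going from 1" to 0.33" resolution:
print(kappa_from_two_resolutions(1.0, 1.3, 1.0, 0.33))   # ~0.24
# a factor of 3.7 more flux on doubling the resolution:
print(kappa_from_two_resolutions(1.0, 3.7, 2.0, 1.0))    # ~1.9
```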
Recently has shown how previous analyses may have been misleading: A statistical comparison of flaring and non-flaring active regions found that the fractal properties of the line-of-sight magnetic field cannot distinguish flaring and non-flaring active regions. Many of the previous works, however, used current helicity, not magnetic flux, for the analysis and therefore the conclusions of may not apply to them. Prompted by the results of PGDS2009, , and , we suggest that measurements of the cancellation function do not confront the reality of the observational data. Such confrontation is the aim of this paper. We analyze the statistics of quiet Sun cancellation functions using magnetograms from three different instruments. We test if the cancellation exponent $\kappa$ is robust within and among the instruments to point out observational artifacts in the measurement of $\kappa$ and to determine if any conclusive values or bounds can be made with existing data. Data and Methods ================ We use altogether $\approx$800 magnetograms from the Helioseismic and Magnetic Imager (HMI; ) on the [*Solar Dynamics Observatory*]{} (SDO) satellite, Michelson Doppler Imager (MDI; ) on the [*Solar and Heliospheric Observatory*]{} (SOHO) sarellite, and [*Hinode*]{} SP. The largest data set, 700 HMI magnetograms, allows us to characterize the variation within a single instrument. Data from MDI and SP allow for inter-instrument comparisons and [for]{} identification of possible instrumental artifacts as well as [for determination of the effects]{} of spatial resolution and magnetogram noise. The HMI data consist of daily (from end of March 2010 to mid-June 2011) 4 s integration magnetograms taken around 12:00 UT. We analyze a $501\times 501$ pixel ($\approx 253$ [$^{\prime\prime}$]{}$\times 253$ [$^{\prime\prime}$]{}) area around the disk center. For MDI, we use $301 \times 301$ pixel ($\approx 182$ [$^{\prime\prime}$]{}$\times 182$ [$^{\prime\prime}$]{}) sub-regions, excluding active regions, in high resolution magnetograms (level 1.8) near the solar disk center. Note that a calibration coefficient ([BSCALE]{}=2.81) is applied to the MDI magnetograms to convert from counts to gauss leading to the magnetograms being quantized in units of [BSCALE]{}. The SP magnetograms are the longitudinal magnetic field measurements in Level 1D data with exposure times varying between 1.6 and 12 s. Both $\approx$0.16[$^{\prime\prime}$]{} and $\approx$0.32[$^{\prime\prime}$]{} pixel size magnetograms are included in the data set. We choose observations of quiet Sun taken near the disk center. The size of SP magnetograms ranges from $\approx$ $400 \times 400$ to $1000 \times 1000$ pixels ($\approx 64$ [$^{\prime\prime}$]{}$\times 64$ [$^{\prime\prime}$]{}–160 [$^{\prime\prime}$]{}$\times 160$ [$^{\prime\prime}$]{}). Cancellation functions \[[$\chi(l)$]{}\] are computed for each magnetogram using the Monte Carlo technique of . Linear fits [to $\chi(l)$ in log-log space are made]{} for three different spatial scales: small scale $<$2[$^{\prime\prime}$]{}  [(exponent]{} [$\kappa_{\mathrm{S}}$]{}), intermediate scale 2–5[$^{\prime\prime}$]{} ([$\kappa_{\mathrm{I}}$]{}) and large scale 2–9 [$^{\prime\prime}$]{}([$\kappa_{\mathrm{L}}$]{}). 
While the physical scales are the same for all instruments, the number of pixels for fitting the exponent varies due to the instruments’ different pixel sizes (HMI $\approx$0.5[$^{\prime\prime}$]{}, MDI $\approx$0.6[$^{\prime\prime}$]{}, SP $\approx$0.16[$^{\prime\prime}$]{}or $\approx$0.32[$^{\prime\prime}$]{}). [The spatial scales were chosen to be sensitive to the scales most affected by noise ([$\kappa_{\mathrm{S}}$]{}) and scales not dominated by noise, but still below the spatial scale of the network ([$\kappa_{\mathrm{I}}$]{} and [$\kappa_{\mathrm{L}}$]{}). The overlap of [$\kappa_{\mathrm{I}}$]{} and [$\kappa_{\mathrm{L}}$]{} ensures that enough data points (14 for HMI, 11 for MDI and 24 or 47, depending on pixel size, for SP) are included in fitting [$\kappa_{\mathrm{L}}$]{} and that the results are not strongly sensitive to the upper cut-off (5[$^{\prime\prime}$]{} or 9[$^{\prime\prime}$]{}) of the fitted spatial scales.]{} Additionally, the mean unsigned flux [[$\langle|B_{z}|\rangle$]{}]{} and flux imbalance [$\langle B_{z}\rangle$/[$\langle|B_{z}|\rangle$]{}]{} are computed for each magnetogram. Results ======= Exponents and Effects of Network -------------------------------- The results of the analysis on the HMI cancellation function are summarized in Figure \[fig:hmi-all\] and Table \[table:hmi-mdi\]. Compared to large spatial scales ($>$10[$^{\prime\prime}$]{}, for which we do not [fit a power-law), the exponents]{} at the smaller scales are fairly similar: [$\kappa_{\mathrm{I}}$]{} and [$\kappa_{\mathrm{L}}$]{}are nearly identical, which indicates that we are fitting a genuine power-law at these scales ( 2[$^{\prime\prime}$]{}to 9[$^{\prime\prime}$]{}). [$\chi(l)$]{} turns up at the small scales as is demonstrated by [$\kappa_{\mathrm{S}}$]{} having $\approx$30% higher values than [$\kappa_{\mathrm{I}}$]{} and [$\kappa_{\mathrm{L}}$]{}. The increase is partially due to contributions from noise becoming more dominant (see Section \[sec:noise\] and note that $\kappa_{\mathrm{noise}}=1$; [All exponents decrease]{} as a function of [$\langle|B_{z}|\rangle$]{}, [[$\langle|B_{z}|\rangle$]{}$\equiv \mathrm{Flux}_l$ for $l=1\,$pixel.]{} The more flux there is in the magnetogram, the smaller [each]{} $\kappa$ is. A linear fit of $\kappa$ as a function of $\log$[$\langle|B_{z}|\rangle$]{} shows that the decrease is strongest for [$\kappa_{\mathrm{L}}$]{} and weakest for [$\kappa_{\mathrm{S}}$]{}. No correlation is found for $\kappa$ and the global (full-disk) unsigned flux, which reflects the overall activity present on the solar disk at a given time: The cancellation functions reflect the local magnetic environment, whose fractal properties do not appear to be influenced by the global field. $\kappa$ does not depend on the flux imbalance at small imbalances (below $\approx$10%). For imbalances greater than $\approx$10% $\kappa$ decreases with increasing imbalance on all spatial scales. The exponents show no strong temporal [evolution]{} or indications of the exponent changing as solar activity increases (Figure \[fig:hmi-ts\]). Since the activity in the rising phase of the solar cycle emerges at high latitudes, it is not surprising that no significant change in the exponent is seen near disk center. The results for MDI are shown in Figure \[fig:mdi-all\] and Table \[table:hmi-mdi\]. They are qualitatively similar to HMI, confirming that the trends ([$\langle|B_{z}|\rangle$]{}, flux imbalance) are of solar, not instrumental, origin. 
The exponents are larger and have more scatter [(standard deviation is 9% of the mean).]{} Also the dependence of $\kappa$ on [$\langle|B_{z}|\rangle$]{} is stronger. ![image](cf_HMI){width="12.cm"} ![image](hmi_cf_ts){width="10cm"} ![image](cf_MDI){width="12.cm"} \[table:sp\] Mean Standard deviation c f$^{\mathrm{1}}$ --------------------------- ------ -------------------- ------------- ------------------ HMI [$\kappa_{\mathrm{S}}$]{} 0.53 0.017 0.74 (0.01) -0.22 (0.01) [$\kappa_{\mathrm{I}}$]{} 0.36 0.020 0.61 (0.01) -0.26 (0.01) [$\kappa_{\mathrm{L}}$]{} 0.35 0.022 0.66 (0.01) -0.34 (0.01) MDI$^\mathrm{2}$ [$\kappa_{\mathrm{S}}$]{} 0.61 0.053 1.62 (0.06) -0.64 (0.03) [$\kappa_{\mathrm{I}}$]{} 0.45 0.033 1.47 (0.07) -0.65 (0.04) [$\kappa_{\mathrm{L}}$]{} 0.42 0.036 1.51 (0.08) -0.70 (0.05) SP $t_{\mathrm{exp}} <$2 s [$\kappa_{\mathrm{S}}$]{} 0.30 0.020 – – [$\kappa_{\mathrm{I}}$]{} 0.28 0.020 – – [$\kappa_{\mathrm{L}}$]{} 0.29 0.014 – – $t_{\mathrm{exp}} >$2 s [$\kappa_{\mathrm{S}}$]{} 0.24 0.0063 – – [$\kappa_{\mathrm{I}}$]{} 0.26 0.014 – – [$\kappa_{\mathrm{L}}$]{} 0.28 0.032 – – : [Power-law fits to $\chi(l)$ for HMI, MDI, and SP magnetograms.]{}[]{data-label="table:hmi-mdi"} The SP magnetograms (Figure \[fig:sp-all\] and Table \[table:sp\]) are a mixture of data with different pixel sizes and exposure times. A comparison of [$\kappa_{\mathrm{S}}$]{} for magnetograms with exposure times less than two seconds and ones with above, shows that [$\kappa_{\mathrm{S}}$]{} is larger for shorter exposure (and thus noisier) magnetograms. The difference is smaller for [$\kappa_{\mathrm{I}}$]{} and [$\kappa_{\mathrm{L}}$]{}. This is to be expected, if the difference is due to noise, which is more dominant at small scales (see Section \[sec:noise\]). The measured exponents are similar to $\kappa$ found by PGDS2009 who measured $\kappa$=0.26 [from a single magnetogram]{}. Regardless of exposure time, all the SP exponents are smaller than MDI and HMI exponents. No dependence of $\kappa$ on [$\langle|B_{z}|\rangle$]{} or flux imbalance is seen. The sample, however, is too small to establish this. Note that for the magnetograms with the longest exposure times a mixture of spatial and temporal cancellation may occur. The time to raster 2[$^{\prime\prime}$]{} varies strongly depending on the exposure time and step size: For exposure times of 1.6 and 4.8 s the rastering times are well below the granulation turnover time scale. For the exposure time of 8 s and above the rastering time is over 100 s. ![image](cf_SP){width="12.cm"} The dependence of MDI and HMI cancellation exponents on [$\langle|B_{z}|\rangle$]{} and flux imbalance suggest that the presence of magnetic network elements in the magnetograms influences the cancellation exponents. (Magnetic network flux is stronger than the intra-network, the network is mostly unipolar, and occurs at scales of a few tens of arcseconds.) To [model]{} the effect on $\chi(l)$ of a large (active-region-remnant or network-like) unipolar flux region, our observation region $\mathcal{A}$ is decomposed into two sub-regions: $\mathcal{B}$ (containing intra-network quiet-Sun in which a scaling $\chi(l)$ would be found) and the unipolar flux concentration $\mathcal{C}$ (over which $\int_{\mathcal C}|B_z|\mathrm{d}a = \big{|}\int_{\mathcal C}B_z\mathrm{d}a\big{|}$, [*i.e.*]{}, there is no cancellation). Our goal is to find the cancellation function \[$\chi'(l)$\] over the entire region $\mathcal{A} = \mathcal{B}\cup\mathcal{C}$. 
We consider only scales $l\ll$ the size of the unipolar flux concentration. So, a given square $\mathcal A_i(l)$ is either in $\mathcal{B}$ or $\mathcal{C}$, $$\begin{aligned} \chi'(l) = \frac{\sum_{\mathcal A_i(l)\in\mathcal B}\bigg{|}\int_{\mathcal A_i(l)}B_z\mathrm{d}a\bigg{|}+\sum_{\mathcal A_i(l)\in\mathcal C}\bigg{|}\int_{\mathcal A_i(l)}B_z\mathrm{d}a\bigg{|}}{\int_{\mathcal B}|B_z|\mathrm{d}a + \int_{\mathcal C}|B_z|\mathrm{d}a}\,.\end{aligned}$$ Using $\sum_{\mathcal A_i(l)\in\mathcal C}\bigg{|}\int_{\mathcal A_i(l)}B_z\mathrm{d}a\bigg{|}=\int_{\mathcal C}|B_z|\mathrm{d}a$ and defining $P=\int_{\mathcal C}|B_z|\mathrm{d}a/\int_{\mathcal B}|B_z|\mathrm{d}a$, the ratio of mean unsigned flux in the unipolar flux concentration compared to the rest of the region, we find $$\begin{aligned} \chi'(l) = \frac{\chi(l)+P}{1+P}\,. \label{eq:model}\end{aligned}$$ Assuming that the intra-network quiet-Sun flux is completely balanced, flux imbalance (FI) is given by $$\mathrm{FI} = \frac{\bigg{|}\int_{\mathcal A}B_z\mathrm{d}a\bigg{|}}{\int_{\mathcal A}\big{|}B_z\big{|}\mathrm{d}a} = \frac{P}{1+P}\,. \label{eq:fip}$$ [We take $\chi(l)$ from]{} the HMI magnetogram with the smallest flux imbalance [and employ Equations (\[eq:model\]) and (\[eq:fip\]) to plot synthetic $\chi'(l)$ versus flux imbalance in Figure \[fig:unipolar\].]{} The synthetic exponents as a function of flux imbalance mimic the MDI and HMI observations: Small increases in the imbalance do not alter the exponent, while imbalances above $\approx$10% lead to a decreasing $\kappa$. The model explains the observed dependence of $\kappa$ on flux imbalance as a consequence of unipolar network magnetic fields altering the cancellation function. Note that the dependence of the exponents on [$\langle|B_{z}|\rangle$]{} applies also to magnetograms with very small imbalances. Small imbalances imply that the positive and negative network concentrations balance out each other, not that there is only a small amount of network flux present. ![image](unipolar){width="10cm"} Intra-Network Exponents ----------------------- Since the cancellation exponents are affected by the presence of network elements, a combination of smoothing and thresholding [can be]{} applied to mask out all the network pixels in the magnetograms prior to computing the cancellation functions. An example of a mask applied to an HMI magnetogram is shown in Figure \[fig:mask\]. To test how the masking affects the measured exponents, we apply a network mask to a synthetic magnetogram consisting of pure noise ($\kappa_{\mathrm{noise}}=1$) and compare the exponents from the masked and unmasked magnetograms: The [effect on]{} the exponent is smaller than the error in the fit. ![image](HMI_qs_mask){width="12.cm"} ![image](cf_HMI_qs){width="12.cm"} Masking out the network pixels in the HMI magnetograms to measure the “true” intra-network cancellation exponents (Figure \[fig:hmi-qs\] and Table \[table:inw\]) leads to significantly higher values for [$\kappa_{\mathrm{L}}$]{} and [$\kappa_{\mathrm{I}}$]{}. $\kappa$ still depends on [$\langle|B_{z}|\rangle$]{}, but in the opposite sense [than]{} in the unmasked magnetograms: In the intra-network magnetograms $\kappa$ increases with increasing [$\langle|B_{z}|\rangle$]{}. This demonstrates that the decrease of $\kappa$ with increasing [$\langle|B_{z}|\rangle$]{} in the unmasked magnetograms was due to the presence of the network elements. A linear fit of intra-network magnetogram $\kappa$ vs. 
$\log$([$\langle|B_{z}|\rangle$]{}) gives a slope of $\approx$0.3. This increase of $\kappa$ with [$\langle|B_{z}|\rangle$]{} can be understood from Equation (\[eq:model\]). An increase in flux imbalance means more pixels that are never canceled out at any scale, a flattening of the cancellation function, and a decrease in $\kappa$. A magnetogram with very weak signal can be out of balance from a small number of moderately strong pixels of the same sign. In fact, we see in Figure \[fig:for-j\] that nearly all magnetograms with over 1% flux imbalance, have [$\langle|B_{z}|\rangle$]{}$<\,5.5\,$G. The weakest magnetograms have greater flux imbalances and, consequently, smaller $\kappa$. ![image](for_j){width="12.cm"} The network masking of MDI magnetograms was not entirely successful and effects of network are still visible in the [$\langle|B_{z}|\rangle$]{} and flux imbalance plots. Compared to HMI, the effect of masking the magnetograms is small in MDI (Figure \[fig:mdi-inw\] and Table \[table:inw\]). The modest change may also be partially due to the network vs. intra-network contrast being smaller in MDI than HMI. The intra-network SP exponents (Figure \[fig:sp-qs\] and Table \[table:inw2\]) are increased on all scales. The change is largest for longer exposures and large spatial scales. Since the longer exposure times have less noise and their $\kappa$ values increase the most after removal of the network, the increase in $\kappa$ cannot be due to noise alone. ![image](cf_MDI_qs){width="12.cm"} ![image](cf_SP_qs){width="12.cm"} \[table:inw2\] Mean Standard deviation --------------------------- ------ -------------------- HMI [$\kappa_{\mathrm{S}}$]{} 0.63 0.024 [$\kappa_{\mathrm{I}}$]{} 0.58 0.024 [$\kappa_{\mathrm{L}}$]{} 0.64 0.084 MDI [$\kappa_{\mathrm{S}}$]{} 0.59 0.055 [$\kappa_{\mathrm{I}}$]{} 0.46 0.077 [$\kappa_{\mathrm{L}}$]{} 0.46 0.089 SP $t_{\mathrm{exp}} <$2s [$\kappa_{\mathrm{S}}$]{} 0.32 0.020 [$\kappa_{\mathrm{I}}$]{} 0.32 0.017 [$\kappa_{\mathrm{L}}$]{} 0.38 0.017 $t_{\mathrm{exp}} >$2s [$\kappa_{\mathrm{S}}$]{} 0.28 0.0063 [$\kappa_{\mathrm{I}}$]{} 0.38 0.014 [$\kappa_{\mathrm{L}}$]{} 0.48 0.036 : [Power-law fits]{} for intra-network [(network masked out) cancellation functions for HMI, MDI, and SP magnetograms.]{}[]{data-label="table:inw"} Effect of Noise and Seeing {#sec:noise} -------------------------- An examination of Tables \[table:sp\] and \[table:inw\] suggests that the cancellation exponent is larger for noisier instruments and shorter exposures, [*i.e.*]{}, for noisier magnetograms. We now demonstrate that this is the case by artificially adding noise to our magnetograms. [The]{} first panel in Figure \[fig:noise\] shows how an HMI cancellation function changes as random-distributed noise is incrementally added to the magnetogram prior to computing [$\chi(l)$]{}. Adding noise changes first the smallest scales. As the amount of noise increases the scales affected by noise [also]{} increase. [For]{} HMI adding 1 G (random numbers with a mean of zero and a standard deviation of one) of noise does not change the cancellation function noticeably and only a small difference is seen for 3 G. In contrast, [adding noise at the 6 and 15 G levels, respectively, significantly alters $\chi(l)$ on increasing spatial scales.]{} That no change is visible for $3\,$G, gives us a measure of noise in the HMI magnetogram: between 3 and $6\,$G. 
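The noise-injection test just described can be sketched as follows. This is an illustrative reconstruction rather than the authors' pipeline; the noise amplitudes and the small-scale fitting range are the assumed values quoted in the text, and the helper functions `cancellation_function` and `fit_kappa` are those from the earlier sketch.

```python
import numpy as np

def kappa_small_scale(bz, pixel_arcsec, max_arcsec=2.0):
    """Fit kappa_S over box sizes below ~2 arcsec."""
    max_pix = max(2, int(max_arcsec / pixel_arcsec))
    scales = np.arange(1, max_pix + 1)
    chi = cancellation_function(bz, scales)
    return fit_kappa(scales * pixel_arcsec, chi)

def kappa_vs_added_noise(bz, pixel_arcsec, noise_levels_gauss, seed=0):
    """Recompute kappa_S after adding zero-mean Gaussian noise of increasing width."""
    rng = np.random.default_rng(seed)
    results = []
    for sigma in noise_levels_gauss:
        noisy = bz + rng.normal(0.0, sigma, size=bz.shape)
        results.append((sigma, kappa_small_scale(noisy, pixel_arcsec)))
    return results

# e.g. for an HMI-like magnetogram (0.5''/pixel), probing the 0, 1, 3, 6 and 15 G levels:
# kappa_vs_added_noise(bz_hmi, 0.5, [0, 1, 3, 6, 15])
```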
![image](noise2){width="12.cm"} The remaining panels in Figure \[fig:noise\] show how the small scale cancellation exponents for the different instruments react to increased noise. (Noise was added in units of counts (1 count=2.81 G) to the MDI magnetograms.) All of the [$\kappa_{\mathrm{S}}$]{}, as a function of noise curves, have the same general shape (rightmost panel in Figure \[fig:noise\]): They start with a plateau and after an instrument-specific threshold (magnetogram noise level) is reached, [$\kappa_{\mathrm{S}}$]{} begins to increase linearly as a function of added noise. The nominal noise values (upper limits) of the different instruments can be defined as the full width at half maximum of Gaussian fits to the [magnetic flux]{} histograms. The widths are 31.6 G for MDI, 21.1 G for HMI, and 13.5 G for SP. In the linear increase regime the increase in $\kappa$ per added noise is 0.01 per G for MDI and 0.02 per G for SP and HMI. The length of the plateau and steepness of the increase are related to the inherent noise in the data: The less instrumental noise there is, the more sensitive the cancellation function is to added noise. SP, which has the lowest nominal noise level, has the shortest plateau and steepest increase, while MDI, the noisiest instrument, is less sensitive to noise. Consistent with the noise experiment, SP has the smallest measured exponents and MDI the largest, suggesting that MDI magnetograms are more strongly affected by noise. ![image](smooth){width="10cm"} Figure \[fig:hmi-smooth\] shows how spatial smoothing changes the cancellation function. The effect of smoothing is opposite to that of increasing the noise (recall that $\kappa=0$ for a smooth field and $k=1$ for noise). The exponents decrease with increased spatial smoothing. [This demonstrates]{} how seeing conditions and spatial over-sampling can [reduce]{} the measured exponents. Conclusions =========== [Significant artifacts in measuring the cancellation function from current observations have been identified. The cancellation exponent is sensitive to noise level differences between instruments: Noise increases both $\kappa$ and the scatter between individual measurements of $\kappa$. Not surprisingly, $\kappa$ decreases by seeing and spatial over-sampling. We also find that $\kappa$ is sensitive within a single instrument’s observations to magnetic flux imbalances and mean unsigned flux. A simple model suggests that this dependence is the effect of network elements: They reduce $\kappa$.]{} This simple model also explains why the value of $\kappa$ calculated in PGDS2009 and differ: The network is enhanced in the latter study, decreasing $\kappa$. [Of all the observational artifacts, the effect of the network can be removed by masking network elements]{} out of magnetograms. We then find values of $\kappa_I\approx0.58$ for HMI and $\kappa_I\approx0.38$ for SP. (For MDI the removal of network was not fully successful and the dependence of [$\chi(l)$]{} on the presence of network elements was not entirely removed.) The difference in $\kappa$ between HMI and SP is due to HMI magnetograms being demonstrably more noisy: HMI’s noise level is $\approx 4\,$G and SP’s is $\approx 2\,$G, at least for determinations of the cancellation exponent (see Figure \[fig:noise\]). 
Since noise first affects the smallest scales of the cancellation function, high resolution (diffraction limit, pixel size, and seeing) observations with minimal noise are needed to measure the exponent on scales below a couple of arcseconds. [Therefore, no conclusive value of the unsigned flux can be given by cancellation function analysis with current instruments.]{} [An upper bound of the unsigned flux is the best that can be done with the available data. Since SP has the lowest noise level, we can safely conclude $\kappa\lessapprox0.38$. The quiet-Sun magnetic field is not a completely self-similar field: It will become smooth over an order-of-magnitude in scales near the magnetic diffusion scale]{} (below which there is no further cancellation). [However, using $\kappa\lessapprox0.38$ and that the power-law ends at $l_0 \ge 80\,$m]{} (the magnetic diffusion scale estimate from PGDS2009) [in Equation (\[eq:extrapolate\]), we have an upper bound on the true mean unsigned flux. We find that]{} $\langle|B_z|\rangle_{\mathrm{true}}\equiv\mathrm{Flux}_0\lessapprox268\,$G (standard deviation 44 G) from SP data with exposure times longer than 5 s. [(Note that if the power-law for some reason ends at even larger scales than the $\approx800\,$m beginning of the diffusive range, this bound still holds.)]{} The noisier instruments, MDI and HMI, give significantly higher values. Both the noise level and sensitivity of the instruments affect the estimates. SP being the least noisy of the instruments can be considered as an upper bound for the flux. [ found a similar upper bound, $\langle|B_z|\rangle_{\mathrm{true}}\lessapprox200-300\,$G from the scattering line polarization of Ce [ii]{}.]{} We found no correlation between $\kappa$ and global mean unsigned flux: The small scale ($<$10[$^{\prime\prime}$]{}) field distribution/fractal geometry is not affected by the global magnetic configuration, at least not during a rising phase of the cycle sampled by HMI. [A study of HMI cancellation functions from the end of the previous solar minimum past the next solar maximum may show how flux from decaying active regions affects the flux distribution in the quiet Sun, and possibly give indications of the relative importance of possible quiet-Sun small-scale dynamo action in different phases of the solar activity cycle.]{} [It should be emphasized that the present analysis and findings are for quiet Sun magnetic fields which are known to be turbulent. Issues identified in the analysis most likely do not affect as strongly measurements of active region fractal dimensions.]{} The significance of noise in the cancellation statistics may extend to the studies of fractal dimensions and flaring probability ([*e.g.*]{}, ). Based on the current analysis, using less-noisy magnetograms, such as HMI, may show the fractal dimension [of the magnetic flux]{} to still have some predictive capability for flaring active regions. To establish the usability of fractal analysis for flare forecasting, however, a large sample of flaring and non-flaring active region magnetograms with little noise is needed. [A statistical study of flaring and non-flaring HMI vector magnetograms could also be used to address the differences of using line of sight flux and current helicity for predictions.]{} JPG gratefully acknowledges the support of the U.S. Department of Energy through the LANL/LDRD Program for this work. 
[*Hinode*]{} is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway). Data provided by the SOHO/MDI consortium. SOHO is a mission of international cooperation between ESA and NASA. SDO is a mission for NASA’s Living With a Star program. Appendix {#appendix .unnumbered} ======== [For a completely self-similar field, knowledge of net flux at any scale and of the power-law scaling exponent implies knowledge of net flux at all scales and, hence, the unsigned flux. This is most easily seen in the example of a vertical random field \[$f_z$\]. For a random field, the integral over an area is proportional to the square root of the area (random walk), $$\bigg{|}\int_{\mathcal A_i(l)\in\mathcal A}f_z\mathrm{d}a\bigg{|} \propto l\,,$$ while the integral over a strictly positive field is proportional to its area, $$\int_{\mathcal A_i(l)\in\mathcal A}|f_z|\mathrm{d}a \propto l^2\,.$$ Therefore, $$\chi(l)\propto \frac{1}{l}\,,$$ and we identify $\kappa=1$ for pure noise. For scales smaller than the correlation length of the noise, the net flux is equal to the unsigned flux, $$\mathrm{Flux}_{l} = \mathrm{Flux}_{0} \qquad \forall l \le l_0\,, \label{eq:kappais1a}$$ and for larger scales, $$\mathrm{Flux}_{l} = \frac{C}{l} \qquad \forall l \ge l_0\,. \label{eq:kappais1b}$$ The constant $C$ is determined by equating Equations (\[eq:kappais1a\]) and (\[eq:kappais1b\]) at the correlation length, $l=l_0$: $$\frac{\mathrm{Flux}_{l}}{\mathrm{Flux}_{0}} = \frac{l_0}{l} \qquad \forall l \ge l_0\,. \label{eq:kappais1c}$$ This power-law dependence, Equation (\[eq:kappais1c\]), is shown in Figure \[fig:kappais1\]. This means that the unsigned flux can be exactly calculated given the net flux at any scale and the scale at the end of the power-law, $l_0$, $$\mathrm{Flux}_{0} = \mathrm{Flux}_{l}\frac{l}{l_0}\,,$$ for a random field. For any other purely self-similar field with cancellation exponent $\kappa$, $$\mathrm{Flux}_{l} = \mathrm{Flux}_{L}\big{(}\frac{L}{l}\big{)}^\kappa \qquad \forall l,L \ge l_0\,. \label{eq:extrapolate}$$ For the extreme case of a unipolar field, $\kappa=0$ and $\mathrm{Flux}_{l} = \mathrm{Flux}_{0}$ for all scales $l$. In general, $0\le\kappa\le1$. ]{} ![image](fig_noise){width="12.cm"}
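Equation (\[eq:extrapolate\]) makes the extrapolation used in the Conclusions explicit. The snippet below is a hedged numerical illustration only: the net flux at the resolution scale is a made-up placeholder of the right order, while $\kappa$ and $l_0$ take the values discussed in the Conclusions.

```python
def extrapolate_unsigned_flux(flux_L, L_m, l0_m, kappa):
    """Flux_0 ~ Flux_L * (L / l0)**kappa for a self-similar field that is smooth below l0."""
    return flux_L * (L_m / l0_m) ** kappa

# Illustrative numbers only: ~16 G of net flux at a ~120 km SP pixel scale,
# kappa = 0.38 and l0 = 80 m (values quoted in the Conclusions) give a bound
# of a few hundred gauss, comparable to the SP upper limit quoted in the text.
print(extrapolate_unsigned_flux(16.0, 120e3, 80.0, 0.38))
```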
--- author: - | \ Institute of Nuclear Physics PAN\ E-mail: - | Krzysztof GOLEC-BIERNAT\ Institute of Nuclear Physics PAN, University of Rzeszow\ E-mail: bibliography: - 'mybib.bib' title: Initial conditions for QCD evolution of double parton distributions --- Introduction ============ Double parton distribution functions are used in the description of double hard scattering [@Blok:2010ge]. Their QCD evolution equations are known in the leading logarithmic approximation (LLA) [@Snigirev:2003cq; @Korotkikh:2004bz; @Gaunt:2009re; @Ceccopieri:2010kg; @Gaunt:2011xd; @Ryskin:2011kk; @Diehl:2011tt; @Manohar:2012jr]. The DPDFs obey nontrivial sum rules which are conserved by the evolution equations. In this presentation we address the problem how to specify initial conditions for the evolution equations which exactly obey these sum rules. Parton distribution functions ============================= In the single parton scattering (SPS), the final state of the hadron-hadron collision has been produced from only one hard interaction while in the double parton scattering, two hard subprocesses occur. For the description of the SPS we use the single parton distribution functions, ${D_f(x,Q)}$, while for the double parton scattering - the double parton distribution functions denoted by ${D_{f_1f_2}(x_1,x_2,Q_1,Q_2)}$. The DPDF depend on parton flavours ${f_1,f_2}$ (including gluon), longitudinal momentum fractions ${x_1,x_2}$ and two hard scales ${Q_1,Q_2}$. The parton momentum fractions obey the condition, $$\begin{aligned} \label{eq:limit} x_1+x_2 \le 1\,,\end{aligned}$$ which says that the sum of partons’ momenta cannot exceed the total nucleon momentum Evolution equations =================== The general form of QCD evolution equations for single PDFs is given by $$\begin{aligned} \label{eq:onepdfeq} \partial_{t}D_{f}(x,t)=\sum_{f^\prime}\int^{1}_{0}du\, {\cal{K}}_{ff^\prime}(x,u,t)\,D_{f^\prime}(u,t),\end{aligned}$$ with the evolution parameter ${t=\ln(Q^2/Q_0^2})$. The integral kernels ${{\cal{K}}_{ff^\prime}}$ presented in above equation describe real and virtual parton emission. The real emission kernels take the following form $$\begin{aligned} {\cal{K}}^R_{ff'}(x,u,t)=\frac{1}{u} P_{ff'}(\frac{x}{u},t)\,\theta(u-x)\end{aligned}$$ in which ${P_{ff'}}$ are splitting functions computed perturbatively in QCD in powers of the strong coulpling constant ${\alpha_s}$: $$\begin{aligned} P_{ff'}(z,t)=\frac{\alpha_s(t)}{2\pi}P^{(0)}_{ff'}(z)+\frac{\alpha^2_s(t)}{(2\pi)^2}P^{(1)}_{ff'}(z)+...\end{aligned}$$ After including the splitting functions, we find the well known DGLAP evolution equations for the single PDFs: $$\begin{aligned} \label{eq:dglap} \partial_{t}D_{f}(x,t)=\sum_{f^\prime}\int^{1}_{x}\frac{dz}{z}P_{ff'}(z,t)\,D_{f'}(\frac{x}{z},t)-D_f(x,t)\sum_{f'}\int^1_0 dz\,z\, P_{f'f}(z,t).\end{aligned}$$ In the leading logarithmic approximation, the evolution equations of the DPDFs (for equal two hard scales, $Q_1=Q_2,\equiv Q$) have the following form, see [@Gaunt:2009re] for more details, $$\begin{aligned} \label{eq:ddglap} \nonumber \label{eq:twopdfeq} \partial_t\, D_{f_1f_2}(x_1,x_2,t) &=& \sum_{f'}\int^{1-x_2}_{0} du \,{\cal{K}}_{f_1f'}(x_1,u,t) \, D_{f' f_2}(u,x_2,t) \\\nonumber &+& \sum_{f'}\int_{0}^{1-x_1}du\,{\cal{K}}_{f_2f'}(x_2,u,t) \,D_{f_1f'}(x_1,u,t) \\ &+& \sum_{f'}\,{\cal{K}}_{f'\to f_1f_2}^R (x_1,x_1+x_2,t)\, D_{f'}(x_1+x_2,t).\end{aligned}$$ The upper integration limits reflect condition (\[eq:limit\]). The third term constains the single PDFs and because of that eqs. 
(\[eq:twopdfeq\]) and (\[eq:onepdfeq\]) have to be solved together. That is why the initial conditions for both the single and double PDFs have to be specified at some initial scale $Q_0$. Sum rules ========= The DGLAP evolution equations (\[eq:dglap\]) preserve the momentum sum rule for the single PDFs: $$\begin{aligned} \sum_f \int^1_0 dx\,x\,D_f(x,Q)=1\,\end{aligned}$$ while the evolution equations (\[eq:ddglap\]) preserve the momentum sum rule for the DPDFs $$\begin{aligned} \sum_{f_1}\int_{0}^{1-x_2}dx_1\,x_1\, \frac{D_{f_1f_2}(x_1,x_2,Q)}{D_{f_2}(x_2,Q)}=(1-x_2).\end{aligned}$$ The ratio of the double and single PDFs in the above relation looks like a conditional probability to find a parton with the momentum fraction ${x_1}$, while the second parton is fixed. Thus, it is clearly seen that the new momentum sum rule relates the double and single PDFs $$\begin{aligned} \label{eq:momrule} \sum_{f_1}\int_{0}^{1-x_2}dx_1\,x_1\,D_{f_1f_2}(x_1,x_2,Q)=(1-x_2)\,D_{f_2}(x_2,Q).\end{aligned}$$ The valence quark number sum rule for the single PDFs has the well-known form $$\begin{aligned} \int_0^{1}dx\left\{D_{q_i}(x,Q)-D_{\bar{q_i}}(x,Q)\right\}=N_{i}\end{aligned}$$ where $N_i$ is the number of valence quarks. For the DPDFs, the following relation holds, depending on the second parton flavour [@Gaunt:2009re]: $$\begin{aligned} \int_0^{1-x_2}dx_1\left\{D_{q_if_2}(x_1,x_2,Q)-D_{\bar{q_i}f_2}(x_1,x_2,Q)\right\} =\left\{ \begin{array}{ll} N_{i}\,D_{f_2}(x_2,Q) & \mbox{\rm ~~~~for $f_2\ne q_i,\bar{q_i}$} \\ \\ (N_{i}-1)\,D_{f_2}(x_2,Q) & \mbox{\rm ~~~~for $f_2=q_i$} \\ \\ (N_{i}+1)\,D_{f_2}(x_2,Q) & \mbox{\rm ~~~~for $f_2=\bar{q_i}$}\, \end{array} \right. \label{eq:valrule}\end{aligned}$$ It is important to emphasize again that the momentum and valence quark number sum rules are conserved by the evolution equations (\[eq:dglap\]) and (\[eq:ddglap\]) once they are imposed at an initial scale $Q_0$. Initial conditions ================== In order to solve eqs. (\[eq:dglap\]) and (\[eq:ddglap\]) we need to specify initial conditions for the DPDFs. For practical reasons, their form is built using the existing single PDFs. For example, in [@Korotkikh:2004bz; @Gaunt:2009re] a [*symmetric*]{} input with respect to the parton interchange was proposed, $$\begin{aligned} \label{eq:gsinput} D_{f_1f_2}(x_1,x_2,Q_0)=D_{f_1}(x_1,Q_0)\,D_{f_2}(x_2,Q_0)\,\frac{(1-x_1-x_2)^2}{(1-x_1)^{2+n_1}(1-x_2)^{2+n_2}}\,,\end{aligned}$$ which is also positive definite (provided the single PDFs are positive). Below we show how the momentum and valence quark number sum rules are fulfilled by this input, by plotting the ratios of the r.h.s. to the l.h.s. of eqs. (\[eq:momrule\]) and (\[eq:valrule\]) for $q_i=u$ (and $n_1=n_2=0$ in (\[eq:gsinput\]), for simplicity). We see that the valence quark number sum rule is significantly violated. Is it possible to construct an input which [*exactly*]{} fulfills the sum rules (\[eq:momrule\]) and (\[eq:valrule\])? In order to obey the momentum sum rule we could use an [*asymmetric*]{} ansatz (we skip $Q_0$ in the notation): $$\begin{aligned} D_{f_1f_2}(x_1,x_2)\,=\, \frac{1}{1-x_2} D_{f_1}\!\left(\frac{x_1}{1-x_2}\right)\cdot D_{f_2}(x_2).\end{aligned}$$ To fulfill the valence number sum rule we need to introduce corrections for identical quark flavours and antiflavours, $$\begin{aligned} \nonumber D_{f_if_i}(x_1,x_2)\!\!\! &=& \!\!\!
\frac{1}{1-x_2} \left\{D_{f_i}(\frac{x_1}{1-x_2}) - \frac{1}{2} \right\} D_{f_i}(x_2) \\\nonumber \\ D_{f_i\bar{f_i}}(x_1,x_2)\!\!\! &=& \!\!\!\frac{1}{1-x_2} \left\{D_{f_i}(\frac{x_1}{1-x_2}) + \frac{1}{2} \right\} D_{\bar{f_i}}(x_2)\,,\end{aligned}$$ which do not spoil the already fulfilled momentum sum rule. However, we pay the price that the DPDFs for identical flavours are not positive definite because of the factor $-1/2$. We cannot avoid such a situation in the construction which uses single PDFs. The graphical comparison of the symmetric and asymmetric inputs is shown below. For the distributions ${D_{uu}}$ both inputs give similar results in the small ${x_1}$ region, while for large ${x_1}$ there are differences between them because of the lack of positive definiteness of the asymmetric input. The same results are found for the evolved distribution. For ${D_{u\bar{u}}}$, both distributions are positive but there are differences between them at large ${x_1}$. And finally, for the ${D_{gu}}$ distributions both inputs give the same results in the whole ${x_1}$ domain. Summary ======= The specification of the initial conditions for evolution of the DPDFs is not a simple task. The symmetric input obeys parton symmetry and positivity but does not exactly fulfill the sum rules. On the other hand, the asymmetric input obeys the sum rules exactly but it is not positive definite. There also exists an alternative solution: one could specify positive initial double distributions and then generate the single PDFs using the sum rules. However, little is known about the DPDFs from experiments. For small values of parton momentum fractions, the factorized form, ${D_{f_1f_2}(x_1,x_2,Q)\approx D_{f_1}(x_1,Q)\,D_{f_2}(x_2,Q)}$, is a good approximation. However, the problem occurs for large values of $x$ ($>10^{-2}$). **Acknowledgement** This work was supported by the Polish NCN grants DEC-2011/01/B/ST2/03915 and DEC-2012/05/N/ST2/02678.
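As a numerical cross-check of the statements above, the momentum sum rule of Eq. (\[eq:momrule\]) can be verified directly for the asymmetric ansatz. The sketch below uses a toy, valence-like single distribution carrying all of the momentum rather than a realistic PDF set; it is meant only to illustrate the construction, not to reproduce the figures.

```python
import numpy as np
from scipy.integrate import quad

def D_single(x):
    """Toy single parton distribution, normalised so that int x D(x) dx = 1."""
    return 315.0 / 32.0 * (1.0 - x) ** 3 / np.sqrt(x)

def D_double_asym(x1, x2):
    """Asymmetric ansatz: D(x1, x2) = D(x1/(1-x2)) * D(x2) / (1-x2)."""
    if x1 + x2 >= 1.0:
        return 0.0
    return D_single(x1 / (1.0 - x2)) * D_single(x2) / (1.0 - x2)

def momentum_sum_rule_ratio(x2):
    """Should equal 1 if the ansatz satisfies Eq. (momrule) for this toy single-flavour case."""
    lhs, _ = quad(lambda x1: x1 * D_double_asym(x1, x2), 0.0, 1.0 - x2)
    rhs = (1.0 - x2) * D_single(x2)
    return lhs / rhs

for x2 in (0.01, 0.1, 0.5):
    print(x2, momentum_sum_rule_ratio(x2))   # ratios equal to 1 up to integration accuracy
```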
--- author: - 'G. Wilkinson' title: 'CP-tagged charm decays: relevance, status and prospects' --- Overview ======== In the last couple of years results have begun to emerge from the $\psi(3770)$ dataset of the CLEO-c experiment which exploit the quantum-correlated nature of the $D-\bar{D}$ production at this resonance. These results provide insight into the strong-phase differences existing between $D^0$ and $\bar{D^0}$ decays which is of great interest for various applications, in particular the measurement of the CP-violating unitarity triangle angle $\gamma$ ($\phi_3$) in $B$-decays. This review is organised as follows. First the relevance of $CP$-tagging is explained, and the basic analysis principle is presented. Results, both preliminary and final, are then shown in two categories of $D$-decay: $D \to K^0 h^+h^-$ ($h = \pi$ or $K$) [^1] and $D \to K n \pi$ ($= K^\pm \pi^\mp, \, K^\pm \pi^\mp \pi^+\pi^-$ and $ K^\pm\pi^\mp \pi^0 $). Particular attention is paid to the impact these measurements will have on the determination of $\gamma$. Finally, a summary is given, along with future prospects. Preliminaries ============= Importance of CP-tagged $D$-decays ---------------------------------- CP-tagged $D$ decays access information which is not available from decays of flavour-tagged mesons, namely the strong-phase difference between $D^0$ and $\bar{D^0}$ decay to the final state of interest. As a $D$-meson in a CP-eigenstate is a superposition of $D^0$ and $\bar{D^0}$, the decay probability has a dependence on the cosine of this strong-phase difference. In a two-body $D$-decay this strong-phase difference is a single quantity, whereas for three or more particles it will vary over Dalitz space, depending on the intermediate resonances contributing at each position. Knowledge of strong-phase differences (hereafter ‘strong phases’) is important for three reasons. - [It is interesting in itself for understanding $D$-decay dynamics and the resulting light-quark mesons produced.]{} - [It is necessary to relate various measurements of the $D$-mixing parameters $x$ and $y$ (where $x$ characterises the mass splitting between the mass eigenstates, and $y$ the width splitting). For example in the ‘wrong sign’ $D \to K\pi$ mixing analysis the measured parameters differ from $x$ and $y$ through the rotation by the strong phase $\delta_D^{K\pi}$.]{} - [It is invaluable for measurements of the CP-violating angle $\gamma$ ($\phi_3$). ]{} It is the last point which provides the main context for the discussion in the present review. Measuring $\gamma$ in $B \to D K$ decays ---------------------------------------- The angle $\gamma$ is the least well-known parameter of the CKM unitarity triangle with an uncertainty presently estimated to be around $30^\circ$ [@CKMFITTER]. All measurements contributing to our existing knowledge come from the $B$-factory experiments, and have been performed using the so-called ‘$B \to DK$’ family of methods. This approach will continue to dominate the $\gamma$ determination at the LHCb experiment, where the increased $B$-decay statistics will allow an order of magnitude improvement in precision [@AKIBA]. A $B^-$ meson may decay to $D^0 K^-$ through a $b \to c$ transition, or $\bar{D^0} K^-$ through a $b \to u$ transition. If the charm meson is reconstructed in any mode common to both $D^0$ and $\bar{D^0}$ (examples include $K^0_S \pi^+\pi^-$ and $K^\pm\pi^\mp$) then interference occurs which involves $\gamma$, the CP-violating phase between the two $b$-decay paths. 
Therefore measuring the difference in rates (or kinematical distributions in the case of three-or-more-body channels) between $B^-$ and $B^+$ decays gives sensitivity to this angle. In order to extract $\gamma$, however, other parameters must be accounted for which also affect the interference. These include $r_B$, the ratio of the magnitude of the $B$-decay amplitudes, $\delta_B$, the strong-phase difference between the $B$-decay amplitudes, and $\delta_D$ the strong phase (or phases) associated with the $D$-decay. Although in some cases it is in principle possible to use the $B$-data themselves to extract the $B$- and $D$-decay parameters along with $\gamma$, it is generally advantageous, and often essential, to have an external constraint (or constraints) on $\delta_D$. The best source of this information is CP-tagged $D$-decays. Quantum-correlated $\psi(3770)$ decays -------------------------------------- The most practical source of CP-tagged $D$ decays are neutral $D - \bar{D}$ events produced at the $\psi(3770)$ resonance in which one meson (the ‘signal $D$’) decays to the final state of interest, and the other is reconstructed in a CP-eigenstate (the ‘tagging $D$’). The $D - \bar{D}$ system is produced in a coherent state which, due to the quantum numbers of the resonance, is known to be C-odd. Therefore if one $D$ is reconstructed as CP-even, for example through $D \to K^+K^-$, then the other meson is ‘tagged’ as being in a CP-odd state. Reconstructing both $D$-mesons (‘double-tagging’) in $\psi(3770)$ decays allows for studies to be generalised beyond pure CP-tagging. As all hadronic final states can be accessed from both $D^0$ and $\bar{D^0}$ decays this means that even if the tagging $D$ is reconstructed in a non-CP eigenstate hadronic mode then the signal $D$ will also be in some superposition of $D^0$ and $\bar{D^0}$, albeit not in the equal proportions that are present in the CP-tagged case. Such events turn out to be a powerful addition to the pure CP-tagged sample and are extensively used in the analyses described in this review. For this reason the subsequent discussion will often refer to ‘quantum-correlated decays’. The only existing $\psi(3770)$ dataset of significant size was collected by the CLEO-c experiment in $e^+e^-$ collisions at the Cornell Electron Storage Ring (CESR). CLEO-c finished operation in Spring 2008, by which time it had accumulated 818 $\rm pb^{-1}$ of data at the $\psi(3770)$ resonance, together with additional integrated luminosity at other centre-of-mass energies. All analyses presented in this review use this dataset. In the near future, higher statistics samples are expected from BES-III [@BES]. There are other important side-benefits to running at this threshold energy of 3770 MeV. Events are very clean, without fragmentation debris. If all $D$-decay charged particles and photons are identified in the event, the kinematical constraints allow the presence of neutral particles such as $K^0_L$ or indeed neutrinos to be inferred. This feature enables the range of CP-tags to be essentially doubled with respect to those available with normal reconstruction techniques. For example $K^0_L \pi^0$ decays may be used as well as $K^0_S \pi^0$. Furthermore, signal decays involving $K^0$ mesons may be studied in both the $K^0_S$ and $K^0_L$ categories. 
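The statement that a CP tag projects the partner meson onto the opposite CP eigenstate follows directly from the C-odd nature of the $\psi(3770)$ state. Written out for convenience (a standard decomposition, with $|D_\pm\rangle = (|D^0\rangle \pm |\bar{D^0}\rangle)/\sqrt{2}$ and neglecting CP violation in the charm decay): $$|\psi(3770)\rangle = \frac{1}{\sqrt{2}}\left(|D^0\rangle|\bar{D^0}\rangle - |\bar{D^0}\rangle|D^0\rangle\right) = \frac{1}{\sqrt{2}}\left(|D_-\rangle|D_+\rangle - |D_+\rangle|D_-\rangle\right)\,,$$ so reconstructing one meson in a CP-even (CP-odd) eigenstate leaves the other in a pure CP-odd (CP-even) state. For a two-body mode $f$ with amplitudes $A_f = \langle f|D^0\rangle$ and $\bar{A}_f = \langle f|\bar{D^0}\rangle$, the CP-tagged rate is then $$\Gamma(D_\mp \to f) \propto \frac{1}{2}\left|A_f \mp \bar{A}_f\right|^2 = \frac{1}{2}\left(|A_f|^2 + |\bar{A}_f|^2 \mp 2|A_f||\bar{A}_f|\cos\delta_D^f\right)\,,$$ which (up to phase convention) exhibits the dependence on the cosine of the strong-phase difference $\delta_D^f$ discussed above.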
Quantum correlated studies of $D \to K^0_S h^+ h^-$ decays ========================================================== Model dependent measurement of $\gamma$ in\ $B \to D (K^0_S \pi^+\pi^-)K$ decays ------------------------------------------- With the statistics presently available at the $B$-factory experiments the highest sensitivity to the angle $\gamma$ is found in the channel $B \to D (K^0_S \pi^+\pi^-)K$. Comparison of the $K^0_S \pi^+\pi^-$ Dalitz space in decays originating from a $B^-$ with those from a $B^+$ meson reveals CP-violating differences which, when analysed in an unbinned likelihood fit, can be used to extract a value for $\gamma$. Using this approach BABAR obtain a result of $\gamma = (76 \pm 22 \pm 5 \pm 5 )^\circ$ [@BABARDALITZ] and Belle $\gamma = (76^{+12}_{-13} \pm 4 \pm 9)^\circ$ [@BELLEDALITZ]. Here the uncertainties are statistical, systematic and model errors, respectively [^2]. In fact the BABAR result also receives a contribution from the analysis of $B \to D (K^0_S K^+K^-)K$ events; the model error for $B \to D (K^0_S \pi^+\pi^-)K$ alone is estimated to be $\sim 7^\circ$. The model uncertainty in these analyses represents the limit in the understanding of the $D$-meson decay. The likelihood function used in the fit includes a description of the $D^0 \to K^0_S \pi^+\pi^-$ decay which is modelled from a sample of flavour-tagged $D^{\ast +} \to D^0 (K^0_S \pi^+\pi^-) \pi^+$ events. Both BABAR and Belle have developed their own models of this decay. The BABAR model, for example, derives from 487,000 events and is based on the isobar formalism [@KOPP], with the S-wave $\pi\pi$ and $K\pi$ contributions treated with the K-matrix [@KMATRIX] and LASS [@LASS] approaches respectively. The agreement of data with this model is very impressive, but inevitably is not perfect. (The $\chi^2/n.d.f.$ is found to be 1.11 for $\sim 19k$ degrees of freedom.) Although a $7-9^\circ$ model uncertainty is at present adequate in the measurement of $\gamma$ given the $B$-meson decay statistics available at the $B$-factories, it will rapidly become a limiting factor to the precision of the same analysis performed at LHCb, where a few degree statistical uncertainty is foreseen with $10\,{\rm fb^{-1}}$ [@LAZZERONI]. A model independent approach is therefore highly desirable. Model independent measurement of $\gamma$ in\ $B \to D (K^0_S \pi^+\pi^-)K$ decays --------------------------------------------- A binned Dalitz analysis approach to the $\gamma$ determination in $B\to D (K^0_S \pi^+\pi^-)K$ [@GIRI; @BONDAR] removes any model dependence by relating the number of events observed in a given bin of Dalitz space to [*experimental observables*]{}. If the Dalitz plot is partitioned into a set of bins symmetric through the line $m^2(K^0_S \pi^+) = m^2(K^0_S \pi^-)$, then the number of $B^\pm$ events giving rise to decays in bin $i$ is given by: $$N_i^\pm = h \left( K_{i} + r_B^2 K_{-i} + 2 \sqrt{K_i K_{-i}} (x_\pm c_i \pm y_\pm s_i) \right) \label{eq:dalitzbin}$$ where $h$ is a normalisation factor, $x_\pm =r_B \cos (\delta_B \pm \gamma )$, $y_\pm =r_B \sin (\delta_B \pm \gamma )$, $K_i$ are the number of flavour-tagged $D^0 \to K^0_S \pi^+\pi^-$ decays in bin $i$, and $c_i$ and $s_i$ are the amplitude averaged cosine and sine of the strong phase of the $D$-decay in the bin in question. In this expression the subscript $-i$ indicates a bin which is defined symmetrically in the lower region of the Dalitz plot with respect to bin $i$ in the upper region.
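A minimal numerical sketch of Equation (\[eq:dalitzbin\]) is given below; the bin inputs $K_i$, $c_i$, $s_i$ and the $B$-decay parameters are invented toy values, used only to show how the expected per-bin yields depend on $\gamma$.

```python
import numpy as np

def expected_yields(K, Kbar, c, s, r_B, delta_B, gamma, h=1.0):
    """Expected per-bin yields N_i^+ and N_i^- following Eq. (dalitzbin).

    K, Kbar : flavour-tagged yields in bin i and in the mirror bin -i
    c, s    : amplitude-averaged cos and sin of the D strong-phase difference per bin
    """
    x = {+1: r_B * np.cos(delta_B + gamma), -1: r_B * np.cos(delta_B - gamma)}
    y = {+1: r_B * np.sin(delta_B + gamma), -1: r_B * np.sin(delta_B - gamma)}
    N = {}
    for sign in (+1, -1):
        N[sign] = h * (K + r_B**2 * Kbar
                       + 2 * np.sqrt(K * Kbar) * (x[sign] * c + sign * y[sign] * s))
    return N[+1], N[-1]   # (N_i^+, N_i^-) as labelled in the equation

# Toy illustration with invented inputs for four bins:
K    = np.array([120., 80., 60., 40.])
Kbar = np.array([ 30., 50., 20., 70.])
c    = np.array([ 0.7, -0.3, 0.9, 0.1])
s    = np.array([ 0.5,  0.8, -0.2, 0.9])
n_plus, n_minus = expected_yields(K, Kbar, c, s, r_B=0.1,
                                  delta_B=np.radians(130), gamma=np.radians(70))
print(n_plus - n_minus)   # the CP-violating difference per bin
```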
The parameters $\gamma$, $r_B$ and $\delta_B$ are to be extracted from the measurement, while $c_i$ and $s_i$ are inputs which are determined from quantum-correlated $D$-decays. The values of $K_i$ can be determined from any sample of flavour-tagged $D$-decays. In making such a measurement the statistical sensitivity is affected by the choice of binning. It is advantageous to define bins in which the variation of strong phase is small (so that $c_i^2 + s_i^2$ is as close as possible to $1$). This choice can be informed by the models developed on the flavour-tagged data. It must be emphasised, however, that any difference between the model and reality will [*not*]{} lead to any bias in the measurement, but will merely result in the statistical sensitivity being lower than expected. In the analysis reported below 8 bins are chosen, each covering the same span in strong phase. The model used to make this choice is that constructed by BABAR [@BABARDALITZ]. This binning is shown in Fig. \[fig:kpipi\_binning\]. ![Equal phase binning of the $D^0 \to K^0_S \pi^+\pi^-$ Dalitz plot.[]{data-label="fig:kpipi_binning"}](binning){width="40.00000%"} Measurement of $c_i$ and $s_i$ in $D \to K^0_S \pi^+\pi^-$ ---------------------------------------------------------- The parameters $c_i$ and $s_i$ have been determined using $818~\rm{pb^{-1}}$ of $\psi(3770)$ data collected by CLEO-c [@CLEOCCISI]. Central to the analysis is a sample of double-tagged events in which $D \to K^0_S \pi^+\pi^-$ decays are reconstructed together with a CP-eigenstate, or against other $D \to K^0_S \pi^+\pi^-$ decays. $D \to K^0_L \pi^+\pi^-$ decays are also used. The presence of a $K^0_L$ meson is inferred by selecting events in which the missing-mass squared is consistent with $m_{K^0_L}^2$, as illustrated in Fig. \[fig:klrec\]. ![CLEO-c missing mass squared distribution in $K^0_L\pi^+\pi^-$ events reconstructed together with CP-even tags. The shaded histogram represents the background expectation from the simulation.[]{data-label="fig:klrec"}](klpp_cpeven){width="40.00000%"} A summary of the double-tag sample is given in Tab. \[tab:kpipiyield\]. Four CP-even tag and three CP-odd tag modes are employed alongside $D \to K^0_S \pi^+\pi^-$ decays. Background considerations mean that certain CP-tags are not used with the $D \to K^0_L \pi^+\pi^-$ decays. Approximately 1600 CP-tagged events are selected in total, and around 1300 $K^0_S \pi^+\pi^-$ vs $K^0 \pi^+\pi^-$ events. The signal to background level is between 10 and 100, depending on the tag mode. Also selected (but not tabulated here) are events in which the signal is reconstructed alongside decays such as $D \to K^\pm \pi^\mp$, which to a very good approximation serve as flavour-tags.

[lcc]{}\
Mode & $K^0_S \pi^+\pi^-$ yield & $K^0_L \pi^+\pi^-$ yield\
\
$K^+K^-$ & 124 & 345\
$\pi^+\pi^-$ & 62 & 172\
$K^0_S \pi^0\pi^0$ & 56 & -\
$K^0_L\pi^0$ & 229 & -\
\
$K^0_S \pi^0$ & 189 & 281\
$K^0_S \eta$ & 39 & 41\
$K^0_S \omega$ & 83 & -\
\
$K^0_S \pi^+\pi^-$ & 475 & 867\

An inspection of the Dalitz plots and projections for the CP-tagged $K^0_S \pi^+\pi^-$ samples, as presented in Fig. \[fig:kspp\_cp\], reveals clear differences. For example, the sample with the CP-even tags has in the $m^2(\pi^+\pi^-)$ projection a clear $\rho^0$ peak associated with the CP-odd $K^0_S \rho$ decays. This feature is absent from the sample with the CP-odd tags. The corresponding $K^0_L \pi^+\pi^-$ distributions are shown in Fig. \[fig:klpp\_cp\].
Observe that, as expected, the $K^0_L \pi^+\pi^-$ with CP-even(-odd) tag distributions resemble those of the $K^0_S \pi^-\pi^+$ with CP-odd(-even) tags. ![CLEO-c Dalitz plots and projections for CP-tagged $K^0_S \pi^+\pi^-$ events.[]{data-label="fig:kspp_cp"}](3620109-003a "fig:"){width="48.00000%"} ![CLEO-c Dalitz plots and projections for CP-tagged $K^0_S \pi^+\pi^-$ events.[]{data-label="fig:kspp_cp"}](3620109-005a "fig:"){width="48.00000%"} ![CLEO-c Dalitz plots and projections for CP-tagged $K^0_L \pi^+\pi^-$ events.[]{data-label="fig:klpp_cp"}](3620109-002a "fig:"){width="48.00000%"} ![CLEO-c Dalitz plots and projections for CP-tagged $K^0_L \pi^+\pi^-$ events.[]{data-label="fig:klpp_cp"}](3620109-004a "fig:"){width="48.00000%"} In the analysis the parameters $c_i$ and $s_i$ are determined from measuring the event yield, after background subtraction and efficiency correction, in each bin of the Dalitz plot for the CP-tagged and the $K^0_S \pi^+\pi^-$ vs $K^0 \pi^+\pi^-$ samples. The number of events in bin $i$ of a CP-tagged $K^0_S \pi^+\pi^-$ Dalitz plot is $$M_i^\pm = h_{CP\pm} \left( K_i \pm 2c_i \sqrt{K_i K_{-i}} + K_{-i} \right) \label{eq:ksppvscp}$$ where $h_{CP\pm}$ is a normalisation factor which can be determined from the number of single flavour-tagged signal decays and single CP-decays. For a $K^0_S \pi^+\pi^-$ vs $K^0_S \pi^+\pi^-$ sample, the number of events with entries in bin $i$ of the first plot and bin $j$ of the second plot is given by $$\begin{aligned} M_{ij} & = & h_{corr} ( K_i K_{-j} \, + \, K_{-i}K_{j} \, - \nonumber \\ & & 2 \sqrt {K_i K_{-j} K_{-i}K_j} (c_i c_j \, + \,s_i s_j) ) \label{eq:ksppvskspp}\end{aligned}$$ where $h_{corr}$ is another normalisation factor. Events including $D^0 \to K^0_L \pi^+\pi^-$ decays make a significant contribution to the overall sensitivity of the analysis. Superficially, CP-even (-odd) $K^0_L \pi^+\pi^-$ events can be treated as CP-odd (-even) $K^0_S \pi^+\pi^-$ events. The expression for the number of CP-tagged $K^0_L \pi^+\pi^-$ events in bin $i$ is given by $$M_i^\pm = h_{CP\pm} \left( K_i \mp 2{c_i}' \sqrt{K_i K_{-i}} + K_{-i} \right) \label{eq:klppvscp}$$ and the corresponding expression to Eqn. \[eq:ksppvskspp\] for $K^0_S \pi^+\pi^-$ vs $K^0_L \pi^+\pi^-$ decays is $$\begin{aligned} M_{ij} & = & h_{corr} ( K_i K_{-j} \, + \, K_{-i}K_{j} \, + \nonumber \\ & & 2 \sqrt {K_i K_{-j} K_{-i}K_j} (c_i {c_j}' \, + \,s_i {s_j}') ). \label{eq:ksppvsklpp}\end{aligned}$$ As well as the sign-flips with respect to the earlier expressions, note that the cosine and sine of the binned strong phases for the $D \to K^0_L \pi^+\pi^-$ decays are denoted with the primed quantities ${c_i}'$ and ${s_i}'$. These are not expected to be quite identical to $c_i$ and $s_i$, as can be seen by writing $$\begin{aligned} A(D^0 \to K^0_S \pi^+\pi^-) &=& \frac{1}{\sqrt{2}}[A(D^0 \to \bar{K^0} \pi^+\pi^-) \nonumber \\ & + & A(D^0 \to K^0 \pi^+\pi^-)] \nonumber\end{aligned}$$ and $$\begin{aligned} A(D^0 \to K^0_L \pi^+\pi^-) &=& \frac{1}{\sqrt{2}}[A(D^0 \to \bar{K^0} \pi^+\pi^-) \nonumber \\ & - & A(D^0 \to K^0 \pi^+\pi^-)], \nonumber\end{aligned}$$ from which it follows $$\begin{aligned} A(D^0 \to K^0_L \pi^+\pi^-) &=& A(D^0 \to K^0_S \pi^+\pi^-) \nonumber \\ & - & \sqrt{2} A(D^0 \to K^0 \pi^+\pi^-). \nonumber\end{aligned}$$ Thus in relating $D^0 \to K^0_L \pi^+\pi^-$ to $D^0 \to K^0_S \pi^+\pi^-$ the set of doubly Cabibbo suppressed (DCS) amplitudes $A(D^0 \to K^0 \pi^+\pi^-)$ appear as a correction term, with a minus sign. 
If, for the purposes of illustration, we consider only intermediate resonances of the sort $K^{*\pm}$ and $\rho^0$, then it is easy to show $$A(D^0 \to K^0_S \pi^+\pi^-) = \alpha K^{*-} \pi^+ \, + \, \beta K^{*+}\pi^- \, + \, \chi \rho^0 K^0_S$$ and $$A(D^0 \to K^0_L \pi^+\pi^-) = \alpha K^{*-} \pi^+ \, - \, \beta K^{*+}\pi^- \, + \, \chi' \rho^0 K^0_L,$$ where $\alpha$, $\beta$, $\chi$ and $\chi'$ are coefficients. Thus in going from $K^0_S \pi^+\pi^-$ to $K^0_L \pi^+\pi^-$ the $K^{*+}\pi^-$ term changes sign, and the $\rho^0$ term enters with a different factor, which is caused by the sign-flip in the DCS contribution to this amplitude. It is therefore expected that ${c_i}' \ne c_i$ and ${s_i}' \ne s_i$, although the difference between the two sets of parameters is predicted to be small. In the analysis ${c_i},{s_i},{c_i}'$ and ${s_i}'$ are extracted simultaneously, but the allowed differences between the unprimed and primed quantities are constrained in the fit. The expected differences on which this constraint is based are calculated from the flavour-tagged models of the $K^0_S \pi^+\pi^-$ decay, which give the coefficients $\alpha$, $\beta$ and $\chi$. The coefficient $\chi'$ is related to $\chi$ by a DCS correction, the phase and magnitude of which are allowed to vary in a conservative range as a systematic in the study. Additional contributions to the systematic uncertainty come from using different models to give the values of $\alpha$, $\beta$ and $\chi$. The results of these studies indicate that the residual model dependence is small compared with the statistical uncertainty of the analysis.

Results and impact on the $\gamma$ measurement
----------------------------------------------

The results of the CLEO-c analysis for $c_i$ and $s_i$ are shown in Fig. \[fig:kpp\_results\], together with the expectations from the BABAR model [@BABARDALITZ]. The measurement errors are the sum in quadrature of statistical uncertainties, uncertainties arising from the reconstruction (such as that arising from the momentum resolution), and the uncertainties arising from the residual model dependence in the $K^0_S\pi^+\pi^-$–$K^0_L\pi^+\pi^-$ constraint. The statistical uncertainties are dominant. The $c_i$ measurements are more precise than those for $s_i$, as they benefit both from the CP-tags and the $K^0 \pi^+\pi^-$ vs $K^0 \pi^+\pi^-$ events, whereas sensitivity to $s_i$ comes solely from the latter category. The measurements are compatible with the model predictions.

![CLEO-c results [@CLEOCCISI] and model predictions [@BABARDALITZ] for $c_i$ and $s_i$.[]{data-label="fig:kpp_results"}](kspipi_results){width="38.00000%"}

When these results for $c_i$ and $s_i$ are used as input to the $\gamma$ determination using $B \to D(K_S^0 \pi^+\pi^-)K$ decays, the uncertainties on the parameter values will induce a corresponding error on $\gamma$. This uncertainty has been estimated to be around $2^\circ$, which is much smaller than both the present BABAR assigned model uncertainty of $7^\circ$ and the expected statistical uncertainty of $5.5^\circ$ at LHCb with $10~{\rm fb^{-1}}$. The loss in statistical precision from the binned method, compared with an unbinned model-dependent approach, is a relative 20%. It is being investigated whether an alternative choice of binning could reduce this loss. With the present binning the sensitivity to $\gamma$ will surpass that of the unbinned approach with less than $2~{\rm fb^{-1}}$ of LHCb data.
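To make the role of the binned inputs concrete, the short sketch below evaluates the expected bin populations of Eqns. \[eq:ksppvscp\] and \[eq:ksppvskspp\] for a toy set of $K_i$, $c_i$ and $s_i$; all numbers in it (the uniform $K_i$, the strong-phase values, the normalisations) are illustrative placeholders and are not CLEO-c inputs.

```python
import numpy as np

# Toy evaluation of the double-tag yield relations (Eqns. eq:ksppvscp and eq:ksppvskspp).
# Every numerical input below is a placeholder chosen for illustration only.
bins = range(1, 9)                                         # 8 equal-phase bins, labelled 1..8
K    = {i: 0.125 for i in bins}                            # flavour-tagged fractions K_i (toy: uniform)
Kbar = {i: 0.125 for i in bins}                            # fractions K_{-i} of the mirrored bins
c    = {i: np.cos(np.pi * (2 * i - 9) / 8) for i in bins}  # toy c_i, s_i with c_i^2 + s_i^2 = 1
s    = {i: np.sin(np.pi * (2 * i - 9) / 8) for i in bins}

def cp_tagged_yield(i, cp_sign, h_cp=1.0):
    """Expected entries in bin i of a CP-tagged K0S pi+pi- Dalitz plot (Eqn. eq:ksppvscp)."""
    return h_cp * (K[i] + cp_sign * 2.0 * c[i] * np.sqrt(K[i] * Kbar[i]) + Kbar[i])

def double_dalitz_yield(i, j, h_corr=1.0):
    """Expected entries in bin (i, j) for K0S pi+pi- vs K0S pi+pi- (Eqn. eq:ksppvskspp)."""
    interference = 2.0 * np.sqrt(K[i] * Kbar[j] * Kbar[i] * K[j]) * (c[i] * c[j] + s[i] * s[j])
    return h_corr * (K[i] * Kbar[j] + Kbar[i] * K[j] - interference)

print(cp_tagged_yield(1, +1), cp_tagged_yield(1, -1), double_dalitz_yield(1, 5))
```

In the CLEO-c fit the logic runs in the opposite direction: the background-subtracted, efficiency-corrected bin populations on the left-hand sides are the measurements, and $c_i$ and $s_i$ are solved for.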
Extending to $D \to K^0_S K^+K^-$
---------------------------------

As has been demonstrated by BABAR [@BABARDALITZ], $B \to D(K^0_S K^+K^-)K$ decays can be used to measure $\gamma$ using a model-dependent unbinned fit with a method entirely analogous to that used for $B \to D(K^0_S \pi^+\pi^-)K$. In order to allow a model-independent exploitation of this mode, CLEO-c has embarked upon a measurement of the corresponding $c_i$ and $s_i$ parameters in $D \to K^0_S K^+K^-$. This determination will exploit around 550 quantum-correlated double tags, including $K^0 K^+ K^-$ vs $K^0 \pi^+ \pi^-$ events that can contribute to the analysis thanks to the knowledge of the $c_i$ and $s_i$ values for $D \to K^0_S \pi^+\pi^-$. Figure \[fig:kskk\] shows the Dalitz plots and $m^2(K^+K^-)$ projections for CP-tagged $K^0_S K^+K^-$ and $K^0_L K^+K^-$ events. Observe that the $\phi$ peak associated with $K^0_{S (L)} \phi $ decays is only prominent for the CP-even(odd) tags.

![CLEO-c $D \to K^0 K^+K^-$ Dalitz plots and projections. Left: $K^0_{S(L)} K^+ K^-$ with CP-even(odd) tags. Right: $K^0_{S(L)} K^+ K^-$ with CP-odd(even) tags.[]{data-label="fig:kskk"}](Data_DP_KsKKvsEKlKKvsO_50 "fig:"){width="24.00000%"} ![CLEO-c $D \to K^0 K^+K^-$ Dalitz plots and projections. Left: $K^0_{S(L)} K^+ K^-$ with CP-even(odd) tags. Right: $K^0_{S(L)} K^+ K^-$ with CP-odd(even) tags.[]{data-label="fig:kskk"}](Data_DP_KsKKvsOKlKKvsE_50 "fig:"){width="24.00000%"} ![CLEO-c $D \to K^0 K^+K^-$ Dalitz plots and projections. Left: $K^0_{S(L)} K^+ K^-$ with CP-even(odd) tags. Right: $K^0_{S(L)} K^+ K^-$ with CP-odd(even) tags.[]{data-label="fig:kskk"}](Data_DPproj_KsKKvsEKlKKvsO_50 "fig:"){width="24.00000%"} ![CLEO-c $D \to K^0 K^+K^-$ Dalitz plots and projections. Left: $K^0_{S(L)} K^+ K^-$ with CP-even(odd) tags. Right: $K^0_{S(L)} K^+ K^-$ with CP-odd(even) tags.[]{data-label="fig:kskk"}](Data_DPproj_KsKKvsOKlKKvsE_50 "fig:"){width="24.00000%"}

Preliminary results on the measurement of the strong phase difference in $D \to K^0_S K^+K^-$ events can be found in [@STEFANIA].

Quantum correlated studies of\
$D \to K (n) \pi$ decays
==============================

The ADS strategy for measuring $\gamma$
---------------------------------------

A powerful subset of the $B\to DK$ family of methods to measure $\gamma$ is the so-called ‘ADS’ approach [@ADS]. Here the $D$-decay mode that is reconstructed is one which involves a charged kaon and one or more pions. The simplest case is $D \to K^\pm \pi^\mp$, which is taken here as an example. Depending on the charge of the $B$-meson and the kaon from the $D$-decay, there are four possible final states. The partial widths into each state depend on the physics parameters of interest. Two of these final states have particular sensitivity: $$\Gamma (B^\mp \to (K^\pm\pi^\mp)_D K^\mp) \propto r_B^2 + (r_D^{K\pi})^2 + 2 r_B r_D^{K\pi} \cos(\delta_B + \delta^{K\pi}_D \mp \gamma). \label{eq:adskpi}$$ Since $r_B$ and the magnitude of the ratio between the doubly Cabibbo suppressed and the favoured $D$ decay amplitudes, $r_D^{K\pi}$, are of similar size, the interference term that involves $\gamma$ appears at first order. The consequence is that a large asymmetry may exist between the number of events found in each final state. Measuring this asymmetry, and combining with observables in other $B \to DK$ modes, allows $\gamma$ to be determined.
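As a rough numerical illustration of why the first-order interference term in expression \[eq:adskpi\] is so powerful, the sketch below evaluates the two suppressed ADS rates and their asymmetry; the values chosen for $r_B$, $\delta_B$, $r_D^{K\pi}$, $\delta_D^{K\pi}$ and $\gamma$ are illustrative placeholders, not measured inputs.

```python
import numpy as np

# Toy evaluation of the ADS rates of expression (eq:adskpi); all parameter values
# below are illustrative placeholders, not measurements.
r_B     = 0.10                      # B -> DK amplitude ratio (toy)
delta_B = np.radians(130.0)         # B-decay strong phase (toy)
r_D     = 0.06                      # DCS / favoured D -> K pi amplitude ratio (toy)
delta_D = np.radians(-157.0)        # D-decay strong-phase difference (toy)
gamma   = np.radians(70.0)          # CKM angle gamma (toy)

def suppressed_rate(b_sign):
    """Gamma(B -> (K pi)_D K) up to a common normalisation; b_sign = -1 for B-, +1 for B+."""
    return r_B**2 + r_D**2 + 2.0 * r_B * r_D * np.cos(delta_B + delta_D + b_sign * gamma)

rate_bminus, rate_bplus = suppressed_rate(-1), suppressed_rate(+1)
asym = (rate_bminus - rate_bplus) / (rate_bminus + rate_bplus)
print(f"toy ADS asymmetry = {asym:+.2f}")   # sizeable, because r_B and r_D are comparable
```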
This extraction, however, benefits from external constraints on the $D$-decay parameters $r_D^{K\pi}$ and $\delta_D^{K\pi}$. The former of these is well known, essentially from the ratio of the suppressed and favoured branching ratios. Knowledge of $\delta_D^{K\pi}$ comes both from $\psi(3770)$ decays and from the ensemble of $D$-meson mixing measurements, as is discussed below.

The ADS decay rate, given in expression \[eq:adskpi\], takes a different form for multibody decays such as $D \to K^\pm \pi^\mp \pi^+\pi^-$. In this case there are many intermediate resonances (e.g. $K^{*0}\rho^0$, $K^\pm a_1^\mp$), which in general will contribute with different strong phases. The two rates of interest are then as follows: $$\Gamma (B^\mp \to (K^\pm\pi^\mp\pi^-\pi^+)_D K^\mp) \propto r_B^2 + (r_D^{K3\pi})^2 + 2 r_B r_D^{K3\pi} R_{K3\pi} \cos(\delta_B + \delta^{K3\pi}_D \mp \gamma). \label{eq:adscoherence}$$ The parameter $R_{K3\pi}$ is termed the coherence factor, and can take any value between $0$ and $1$, where the latter limit corresponds to the case when all resonances contribute in phase and the channel behaves as a two-body decay. The parameter $\delta^{K3\pi}_D$ is now the strong-phase difference averaged over all Dalitz space. Precise definitions of the quantities can be found in [@COHERENCE]. Analogous parameters exist for other decays, for example $R_{K\pi\pi^0}$ and $\delta^{K\pi\pi^0}_D$ in the case of $D \to K^\pm \pi^\mp \pi^0$.

Analysis of the mode $D \to K^\pm \pi^\mp$
------------------------------------------

The strong-phase difference $\delta_D^{K\pi}$ between $D^0$ and $\bar{D^0}$ decays to $K^+\pi^-$ has been determined by CLEO-c using 281 $\rm {pb^{-1}}$ of $\psi(3770)$ data [@TQCA]. This result is the least recent of those reported in this review, and so only a very brief summary of the analysis is given here. If one neglects mixing then the rate, $F^{K\pi}_{CP \pm}$, of CP-tagged $D \to K^\pm \pi^\mp$ events is as follows: $$F^{K\pi}_{CP \pm} \approx {\mathcal{B}}_{CP \pm} {\mathcal{B}_{K\pi}}(1 + (r_D^{K\pi})^2 \pm 2 r_D^{K\pi}\cos\delta_D^{K\pi}), \label{eq:tqca}$$ where ${\mathcal{B}}_{CP \pm}$ and ${\mathcal{B}_{K\pi}}$ are the branching ratios of the CP-tag and the Cabibbo favoured signal decay, respectively. The analysis reported in [@TQCA] uses a range of CP-tags, other double-tagged events, and single tags both to extract $\delta_D^{K\pi}$ and to gain sensitivity to the mixing parameters $x$ and $y$, which modify the result for $F^{K\pi}_{CP \pm}$ and the rates for other event types. When a fit is performed which imposes no external constraints on the mixing parameters, a result for the strong phase difference of $\cos \delta_D^{K\pi} = 1.03^{+0.31}_{-0.17} \pm 0.06$ is obtained [^3]. In fact this measurement is less precise than the indirect determination which can be obtained from a global fit to all the $D$-mixing results [@HFAG]. As Fig. \[fig:hfag\_tqca\] makes clear, however, the inclusion of the CLEO-c result brings important extra information, as it resolves a two-fold ambiguity which would otherwise exist in our knowledge of $\delta^{K\pi}_D$.

![CLEO-c impact on the knowledge of $\delta_D^{K\pi}$ [@HFAG].
The dotted line is obtained from a global fit to $D$-mixing measurements which do not use $\psi(3770)$ data, and the solid line from a fit which in addition includes the result of [@TQCA].[]{data-label="fig:hfag_tqca"}](tqca_hfag_cleoc_impact){width="38.00000%"}

This analysis is now being updated with the full CLEO-c dataset and improved analysis methods [@NAIK].

Analysis of the modes $D \to K^\pm \pi^\mp \pi^+\pi^-$ and $D \to K^\pm \pi^\mp \pi^0$
--------------------------------------------------------------------------------------

The coherence factors and the average strong phase differences have been determined by CLEO-c for the modes $D \to K^\pm \pi^\mp \pi^+\pi^-$ and $D \to K^\pm \pi^\mp \pi^0$ using 818 ${\rm pb^{-1}}$ of $\psi(3770)$ data [@CLEOCCOHERENCE]. The analysis is based on the method proposed in [@COHERENCE]. Each signal mode is reconstructed alongside various categories of tags. These include: CP-eigenstates; the signal mode itself in the case where the two kaons in the event are of identical sign (giving so-called ‘like-sign’ events); the other signal mode under consideration, again in the like-sign configuration; and like-sign $K^\pm \pi^\mp$ decays. The sensitivity of each event class to the coherence factors and strong phase differences is indicated in Tab. \[tab:coherence\_sens\], although Ref. [@COHERENCE] should be consulted to obtain the complete expressions.

  --------------------------------------------------------------------------------------------------------------------------
  Double-tag                                                   Sensitive to
  ------------------------------------------------------------ -------------------------------------------------------------
  $K^\pm \pi^\mp \pi^+ \pi^-$ vs $K^\pm \pi^\mp \pi^+\pi^-$    $(R_{K3\pi})^2$
  $K^\pm \pi^\mp \pi^0 $ vs $K^\pm \pi^\mp \pi^0$              $(R_{K\pi\pi^0})^2$
  $K^\pm \pi^\mp \pi^+ \pi^-$ vs CP                            $R_{K3\pi} \cos(\delta_D^{K3\pi})$
  $K^\pm \pi^\mp \pi^0$ vs CP                                  $R_{K\pi\pi^0} \cos(\delta_D^{K\pi\pi^0})$
  $K^\pm \pi^\mp \pi^+\pi^-$ vs $K^\pm \pi^\mp$                $R_{K3\pi} \cos(\delta_D^{K3\pi} - \delta_D^{K\pi})$
  $K^\pm \pi^\mp \pi^0$ vs $K^\pm \pi^\mp$                     $R_{K\pi\pi^0} \cos(\delta_D^{K\pi\pi^0} - \delta_D^{K\pi})$
  $K^\pm \pi^\mp \pi^+\pi^-$ vs $K^\pm \pi^\mp \pi^0$          $R_{K3\pi} R_{K\pi\pi^0} \cos(\delta_D^{K3\pi} - \delta_D^{K\pi\pi^0})$
  --------------------------------------------------------------------------------------------------------------------------

  : Dependence of double-tag rates in the coherence factor analysis.[]{data-label="tab:coherence_sens"}

The analysis uses 10 types of CP-tags. The event yield for each category of event, after background subtraction, is listed in Tab. \[tab:coherenceyield\]. Additional double-tags, not detailed here, include ‘unlike-sign’ events, and $K^\pm \pi^\mp$ vs CP-eigenstate events, both needed for normalisation purposes.
  --------------------------------------------------------------------------------------------
  Tag Mode                       $K^\pm \pi^\mp \pi^+\pi^-$ yield   $K^\pm \pi^\mp \pi^0$ yield
  ------------------------------ ---------------------------------- ---------------------------
  *CP-even tags*                                                    
  $K^+K^-$                       536                                764
  $\pi^+\pi^-$                   246                                336
  $K^0_S \pi^0\pi^0$             283                                406
  $K^0_L\pi^0$                   695                                1234
  $K^0_L\omega$                  296                                449
  *CP-odd tags*                                                     
  $K^0_S \pi^0$                  705                                891
  $K^0_S \omega$                 319                                389
  $K^0_S \phi$                   53                                 91
  $K^0_S \eta$                   164                                152
  $K^0_S \eta'$                  36                                 61
  *Like-sign tags*                                                  
  $K^\pm \pi^\mp \pi^+\pi^-$     29                                 64
  $K^\pm \pi^\mp \pi^0$          see row above                      13
  $K^\pm \pi^\mp$                36                                 7
  --------------------------------------------------------------------------------------------

  : Double-tag yields, after background subtraction, in the coherence factor analysis.[]{data-label="tab:coherenceyield"}

The analysis chooses as observables so-called ‘$\rho$-parameters’, which give the ratio of the number of observed events in each category to the number expected were the $D-\bar{D}$ pair to decay in an uncorrelated manner, and/or the coherence parameter to be zero. The value of $\rho_{\rm CP}$, this ratio for the CP-tagged events, is shown for each CP-tag in Fig. \[fig:coherence\_cp\]. For a given signal mode the same behaviour is expected for every tag of a given CP-eigenvalue, and behaviour of the opposite sign for CP-odd compared with CP-even tags. The values of the observables are consistent with these expectations.

![CLEO-c $\rho_{\rm CP}$ parameters for the $D \to K^\pm \pi^\mp \pi^+\pi^-$ (top) and $D \to K^\pm \pi^\mp \pi^0$ (bottom) analyses, representing the ratio of the number of observed events to the incoherent expectation. The cyan shaded bands show the mean result for each CP-eigenvalue.[]{data-label="fig:coherence_cp"}](cptags_k3pi_res "fig:"){width="40.00000%"} ![CLEO-c $\rho_{\rm CP}$ parameters for the $D \to K^\pm \pi^\mp \pi^+\pi^-$ (top) and $D \to K^\pm \pi^\mp \pi^0$ (bottom) analyses, representing the ratio of the number of observed events to the incoherent expectation. The cyan shaded bands show the mean result for each CP-eigenvalue.[]{data-label="fig:coherence_cp"}](cptags_kpp0_res "fig:"){width="40.00000%"}

In Fig. \[fig:coherence\_obs\] are shown the results for the nine observables. These comprise: the mean results per CP-tag per mode ($\rho_{\rm CP+}$ and $\rho_{\rm CP-}$); the results for the like-sign events for both signal modes ($\rho_{\rm LS}$); the results for the like-sign $K\pi$ tags ($\rho_{\rm K\pi, LS}$); and that coming from the like-sign $K^\pm \pi^\mp \pi^+\pi^-$ vs $K^\pm \pi^\mp \pi^0$ events ($\rho^{K\pi\pi^0}_{K3\pi,LS}$). The error bars include the statistical and systematic uncertainties. In the case of $\rho_{\rm CP \pm}$ the largest systematic uncertainty is associated with the normalisation procedure, which is significant alongside the statistical uncertainty, but is itself statistical in origin. For the other observables the systematic uncertainties are small.

![CLEO-c results for coherence observables. Filled red circles indicate $D \to K^\pm \pi^\mp \pi^+\pi^-$, open blue squares indicate $D \to K^\pm \pi^\mp \pi^0$ and the filled magenta triangle indicates like-sign $K^\pm \pi^\mp \pi^+\pi^-$ vs $K^\pm \pi^\mp \pi^0$ events.[]{data-label="fig:coherence_obs"}](observables_all_res_new){width="48.00000%"}

The expected size and sign of any deviation of the $\rho$ parameters from a value of one depends on the tag-category. The general behaviour in Fig. \[fig:coherence\_obs\], however, makes clear that there is evidence of high coherence in $D \to K^\pm \pi^\mp \pi^0$ decays, but much less so for $D \to K^\pm \pi^\mp \pi^+\pi^-$. Values for the coherence factors, $R_{K3\pi}$ and $R_{K\pi\pi^0}$, and mean strong-phase differences, $\delta_D^{K3\pi}$ and $\delta_D^{K\pi\pi^0}$, have been obtained by making a $\chi^2$ fit to the above observables.
Other free parameters in the fit include $\delta_D^{K\pi}$, the mixing parameters $x$ and $y$, and the Cabibbo favoured and doubly suppressed branching ratios, all of which are given a Gaussian constraint to lie close to their world-best measured values. The best fit values for the coherence factors and strong phases are as follows: $R_{K3\pi}=0.33^{+0.20}_{-0.23}$, $R_{K\pi\pi^0}=0.84\pm 0.07$, $\delta_D^{K3\pi}=(114^{+26}_{-23})^\circ$ and $\delta_D^{K\pi\pi^0} = (227^{+14}_{-17})^\circ$. The one, two and three sigma contours are shown in Fig. \[fig:coherence\_res\]. Thus it is seen that $D \to K^\pm\pi^\mp\pi^0$ is highly coherent, whereas the indications are that this is not so for $D \to K^\pm\pi^\mp\pi^+ \pi^-$. Interesting results are also obtained for the auxiliary parameters in the fit, where in some cases small but significant improvements are found with respect to the external constraints. For example the fitted value of $\delta_D^{K\pi}$ is $(-151.5^{+9.6}_{-9.5})^\circ$ to be compared with the applied constraint of $(-157.5^{+10.4}_{-11.0})^\circ$. This sensitivity arises from the importance of the like-sign $K\pi$ tags in the analysis. A relative 10% improvement is also found in the knowledge of $y$, with the fit returning a value of $0.81 \pm 0.16 \%$, to be compared with the applied constraint of $0.76 \pm 0.18 \%$.

![CLEO-c results for the coherence factor and mean strong phase difference for $D \to K^\pm \pi^\mp \pi^0$ (a) and $D \to K^\pm \pi^\mp \pi^+\pi^-$ (b).[]{data-label="fig:coherence_res"}](kpipi0_coherence "fig:"){width="40.00000%"} ![CLEO-c results for the coherence factor and mean strong phase difference for $D \to K^\pm \pi^\mp \pi^0$ (a) and $D \to K^\pm \pi^\mp \pi^+\pi^-$ (b).[]{data-label="fig:coherence_res"}](k3pi_coherence "fig:"){width="40.00000%"}

Impact on $\gamma$ determination
--------------------------------

A study has been made within LHCb to assess the impact of the CLEO-c $D \to K (n) \pi$ results on the $\gamma$ measurement [@AKIBA]. A standalone simulation study has been performed in which the precision on $\gamma$ is determined from a simultaneous analysis of $B \to DK$ events using both $K^\pm \pi^\mp$ and $K^\pm \pi^\mp \pi^+\pi^-$ as $D$-decay modes. The simulated events are generated with the $D$-decay parameters set to the central values of the CLEO-c analysis. In addition to $\gamma$, the fit also returns $r_B$, the $B$- and $D$-meson strong phases, and the coherence factors. The assumed sample size corresponds to one nominal year (2 ${\rm fb^{-1}}$) of data. The results are compared between the case where no external knowledge is assumed, and the case where the CLEO-c results for $\delta_D^{K\pi}$, $R_{K3\pi}$ and $\delta_D^{K3\pi}$ are applied as external constraints in the fit. The exact results of the study vary with the assumed values of the parameters, but in general a significant improvement in the precision on $\gamma$ is observed when the CLEO-c constraints are used, similar to that which would come about from a doubling of the LHCb dataset. The impact of the $D \to K^\pm \pi^\mp$ and the $D \to K^\pm \pi^\mp \pi^+\pi^-$ constraints is found to be similar. At first sight the importance of the $D \to K^\pm \pi^\mp \pi^+\pi^-$ events in the analysis is unexpected, given the low value of the coherence. This effect can be understood by inspecting Eqn. \[eq:adscoherence\] and considering the limit when $R_{K3\pi} \to 0$.
In this case the observed decay rate in the suppressed $D \to K^\pm \pi^\mp \pi^+\pi^-$ mode allows $r_B$ to be determined, which then benefits the $\gamma$ extraction from the simultaneous $D \to K^\pm \pi^\mp$ analysis. The mode $D \to K^\pm \pi^\mp \pi^0$ has not yet been included in the LHCb ADS analysis, but it is anticipated that here also the CLEO-c constraints will be helpful in improving the overall $\gamma$ sensitivity.

Summary and prospects
=====================

CLEO-c analyses have been published which determine $D$-decay parameters in the modes $D \to K^0_S \pi^+\pi^-$, $D \to K^\pm \pi^\mp$, $D \to K^\pm \pi^\mp \pi^+\pi^-$ and $D \to K^\pm \pi^\mp \pi^0$. All these studies rely on the quantum-correlated nature of the $D-\bar{D}$ pair in $\psi(3770)$ decays. The results are found to have significant consequences for the measurement of $\gamma$ in $B \to DK$ decays, allowing for both an improvement in overall precision and the removal of model dependence in the analyses. Results are anticipated soon in the mode $D \to K^0_S K^+K^-$. Work is also underway to extend the $D \to K^\pm \pi^\mp$ analysis to the full 818 ${\rm pb^{-1}}$ dataset, and to provide further results in $D \to K^0_S \pi^+\pi^-$ for alternative choices of binning. There exist other channels which are potentially useful in the $B \to DK$ analysis and so could benefit from measurements of their decay properties in quantum correlated events. These include $D \to K^+K^-\pi^+\pi^-$, $D \to K^0_S K^\pm \pi^\mp$ and $D \to K^0_S \pi^+\pi^-\pi^0$. The studies reported here would benefit greatly from the increase in $\psi(3770)$ statistics which could be collected by the BES-III experiment. For this reason a significant open charm programme at BES-III is to be encouraged.

Acknowledgements {#acknowledgements .unnumbered}
================

I am grateful to Jim Libby and other CLEO-c colleagues for useful discussions in preparing this review.

[9]{}

J. Charles [*et al.*]{} (CKMfitter group), Eur. Phys. J. [**C 41**]{} (2005) 1, updated results and plots available at: http://ckmfitter.in2p3.fr/ .

K. Akiba [*et al.*]{}, [*Determination of the CKM-angle $\gamma$ with tree-level processes at LHCb*]{}, LHCb-2008-031.

Y. Wang, [*Status of BES-III*]{}, these proceedings.

B. Aubert [*et al.*]{} (BABAR Collaboration), Phys. Rev. [**D 78**]{} (2008) 034023.

K. Abe [*et al.*]{} (Belle Collaboration), arXiv:0803.3375 \[hep-ex\].

Review on Dalitz plot formalism in [@PDG08].

E.P. Wigner, Phys. Rev. [**70**]{} (1946) 15; S.U. Chung [*et al.*]{}, Ann. Phys. (Paris) [**507**]{} (1995) 404; I.J.R. Aitchison, Nucl. Phys. [**A 189**]{} (1972) 417.

D. Aston [*et al.*]{} (LASS Collaboration), Nucl. Phys. [**B 296**]{} (1988) 493.

V. Gibson, C. Lazzeroni and J. Libby, [*Measuring $\gamma$ at LHCb with $B^\pm \to D^0/\bar{D^0}(K^0_S \pi^+\pi^-)K$ decays*]{}, LHCb-2007-048.

A. Giri [*et al.*]{}, Phys. Rev. [**D 68**]{} (2003) 054018.

A. Bondar and A. Poluektov, Eur. Phys. J. [**C 55**]{} (2008) 51; A. Bondar and A. Poluektov, Eur. Phys. J. [**C 47**]{} (2006) 347.

R.A. Briere [*et al.*]{} (CLEO Collaboration), Phys. Rev. [**D 80**]{} (2009) 032002.

S. Ricciardi, [*Quantum Correlated $D$ Decays at CLEO-c*]{}, Europhysics Conference on High Energy Physics, 16-22 July 2009, Krakow, Poland.

D. Atwood, I. Dunietz and A. Soni, Phys. Rev. Lett. [**78**]{} (1997) 3257; D. Atwood, I. Dunietz and A. Soni, Phys. Rev. [**D 63**]{} (2001) 036005.

D. Atwood and A. Soni, Phys. Rev. [**D 68**]{} (2003) 033003.

D.M. Asner [*et al.*]{} (CLEO Collaboration), Phys.
Rev. [**D 78**]{} (2008) 012001; J.L. Rosner [*et al.*]{} (CLEO Collaboration), Phys. Rev. Lett. [**100**]{} (2008) 221801. The Heavy Flavor Averaging Group (HFAG), http://www.slac.stanford.edu/xorg/hfag . P. Naik, [*D mixing at CLEO-c*]{}, these proceedings. N. Lowrey [*et al.*]{} (CLEO Collaboration), Phys. Rev. [**D 80**]{} (2009) 031105 (R). C. Amsler [*et al.*]{} (Particle Data Group), Phys. Lett. [**B 667**]{} (2008) 1. [^1]: In this article $D$ will be used to indicate any neutral $D$-meson. [^2]: It may be noted that there is a significant difference in the reported statistical precision in the two results which cannot be explained by the variation in sample sizes between the experiments. This difference is largely driven by the very different values of the interference parameter $r_B$ that is found in the two analyses. [^3]: It is worth remarking that the convention used to describe the effect of a CP operation on the $D^0$ meson has non-trivial consequences in the definition of phase differences. In particular, the convention assumed in most charm mixing analyses, and implicit in the results presented here, is offset by $\pi$ from that assumed in most $B \to DK$ studies. Thus the central value that should be used in both expressions \[eq:adskpi\] and \[eq:adscoherence\] is [*not*]{} $\delta_D^{K\pi} \approx 0.4$, as suggested by Fig. \[fig:hfag\_tqca\], but $\delta_D^{K\pi} \approx 0.4 - \pi$.
--- abstract: 'The hyperelliptic Torelli group is the subgroup of the mapping class group consisting of elements that act trivially on the homology of the surface and that also commute with some fixed hyperelliptic involution. We prove a Birman exact sequence for hyperelliptic Torelli groups, and we show that this sequence splits. As a consequence, we show that the hyperelliptic Torelli group is generated by Dehn twists if and only if it is generated by reducible elements. We also give an application to the kernel of the Burau representation.' address: - | Tara E. Brendle\ School of Mathematics & Statistics\ University Gardens\ University of Glasgow\ G12 8QW\ [email protected] - | Dan Margalit\ School of Mathematics\ Georgia Institute of Technology\ 686 Cherry St.\ Atlanta, GA 30332\ [email protected] author: - 'Tara E. Brendle' - Dan Margalit bibliography: - 'sibk.bib' title: 'Point pushing, homology, and the hyperelliptic involution' --- [^1] Introduction ============ Let $S_g$ denote a closed, connected, orientable surface of genus $g$. The hyperelliptic Torelli group $\SI(S_g)$ is the subgroup of the mapping class group $\Mod(S_g)$ consisting of all elements that act trivially on $H_1(S_g;\Z)$ and that commute with the isotopy class of some fixed hyperelliptic involution $s : S_g \to S_g$ (see Figure \[figure:hi\] below). The group $\SI(S_g)$ arises in algebraic geometry in the following context. Let $\T(S_g)$ denote the cover of the moduli space of Riemann surfaces corresponding to the Torelli subgroup $\I(S_g)$ of $\Mod(S_g)$. This is the subgroup of $\Mod(S_g)$ consisting of all elements acting trivially on $H_1(S_g;\Z)$. The period mapping is a function on $\T(S_g)$ whose image lies in the Siegel upper half-space of rank $g$. This map is branched over the subset $\Br(S_g)$ of $\T(S_g)$ that is fixed by the action of $s$ and $$\pi_1(\Br(S_g)) \cong \SI(S_g).$$ Because of this, $\SI(S_g)$ is related, for example, to the topological Schottky problem; see [@hain Problem 1]. A basic tool in the theory of mapping class groups is the Birman exact sequence. This sequence relates the mapping class group of a surface with marked points to the mapping class group of the surface obtained by forgetting the marked points; see Section \[section:bes\]. This is a key ingredient for performing inductive arguments on the mapping class group. For instance, the standard proof that $\Mod(S_g)$ is generated by Dehn twists uses the Birman exact sequence in the inductive step on genus. The main goal of this paper is to provide a Birman exact sequence for $\SI(S_g)$. As in the case of $\Mod(S_g)$, the Birman exact sequence is crucial for inductive arguments. For example, the authors and Childers have recently used our Birman exact sequences in order to show that the top-dimensional homology of $\SI(S_g)$ is infinitely generated [@cdsi]. In order to state our main theorems, we need to define the hyperelliptic Torelli group for a marked surface. The hyperelliptic Torelli group $\SI(S_g,P)$ of the surface $S_g$ with marked points $P$ is the group of isotopy classes of homeomorphisms of $S_g$ that commute with $s$, that preserve the set $P$, and that act trivially on the homology of $S_g$ relative to $P$. There is a forgetful homomorphism $\SI(S_g,P) \to \SI(S_g)$, and our Birman exact sequences give a precise description of the kernel in two cases. In the first case, we show that the kernel is trivial, and so the Birman exact sequence degenerates to an isomorphism. 
\[thm:sibes1\] Let $g \geq 0$ and let $P$ be a single point in $S_g$ fixed by $s$. The forgetful map $\SI(S_g,P) \to \SI(S_g)$ is an isomorphism. Let $S_g^1$ denote a surface of genus $g$ with one boundary component. We prove in Theorem \[thm:uncapping\] that $$\SI(S_g^1) \cong \SI(S_g) \times \Z.$$ It is surprising to realize $\SI(S_g)$ as a subgroup of $\SI(S_g^1)$ since there is no inclusion $S_g \to S_g^1$. Next we consider the case where $P$ is a pair of distinct points interchanged by $s$. The Birman–Hilden theorem (Theorem \[thm:sbes2\]) identifies the kernel of $\SI(S_g,P) \to \SI(S_g)$ as a subgroup of $F_{2g+1} \cong \pi_1(S_{0,2g+2})$, where $S_{0,2g+2}$ is a sphere with $2g+2$ punctures. Denote the generators of $F_{2g+1}$ by $\zeta_1,\dots,\zeta_{2g+1}$ and the generators for the group $\Z^{2g+1}$ by $e_1, \dots, e_{2g+1}$. Denote by $F_{2g+1}^{even}$ the subgroup of $F_{2g+1}$ consisting of all even length words in the $\zeta_i$. There is a homomorphism $\epsilon : F_{2g+1}^{even} \to \Z^{2g+1}$ defined by $$\zeta_{i_1}^{\alpha_1} \zeta_{i_2}^{\alpha_2} \mapsto e_{i_1} - e_{i_2}$$ where $\alpha_j = \pm 1$ for each $j$. \[thm:algchar\] Let $g \geq 1$, and let $P$ be a pair of distinct points in $S_g$ interchanged by $s$. If we identify the kernel of the map $\SI(S_g,P) \to \SI(S_g)$ with a subgroup of $F_{2g+1}$ as above, then the following sequence is split exact: $$1 \to \ker \epsilon \to \SI(S_g,P) \to \SI(S_g) \to 1.$$ Again, the fact that the short exact sequence in Theorem \[thm:algchar\] is split is unexpected because there is no splitting induced by a map $S_g \to S_g-P$. Since $\epsilon$ is a map from a nonabelian free group onto an infinite abelian group, its kernel is an infinitely generated free group. We thus obtain the following. \[cor:surprise\] Let $g \geq 1$, and let $P$ be a pair of distinct points in $S_g$ interchanged by $s$. We have $$\SI(S_g, P) \cong \SI(S_g) \ltimes F_\infty.$$ A simple closed curve in $S_g$ is *symmetric* if it is fixed by the hyperelliptic involution $s$. We prove in Theorem \[thm:topchar\] that the image of each element of $\ker \epsilon$ in $\SI(S_g,P)$ is a product of Dehn twists about symmetric separating curves. Hain has conjectured that the entire group $\SI(S_g)$ is in fact generated by Dehn twists about symmetric separating curves [@hain Conjecture 1]; see also Morifuji [@morifuji Section 4]. Hain’s conjecture is well-known to be true for $g=2$; see Theorem \[thm:g21\] below. Using Theorem \[thm:topchar\], we prove the following theorem in Section \[sec:app\]. In the statement, an element of $\Mod(S_g)$ is *reducible* if it fixes a collection of isotopy classes of pairwise disjoint simple closed curves in $S_g$. \[thm:reducibles\] Let $g \geq 0$. Suppose that $\SI(S_k)$ is generated by reducible elements for all $k$ between 0 and $g$, inclusive. Then the group $\SI(S_g)$ is generated by Dehn twists about symmetric separating curves. In other words, by the results of this paper, Hain’s conjecture is reduced to showing that $\SI(S_g)$ is generated by reducible elements. Also, we prove in Theorem \[thm:g21\] that the hyperelliptic Torelli group for a surface of genus two with one marked point or one boundary component is generated by Dehn twists about symmetric separating curves. In the proof of Theorem \[thm:uncapping\], we will explain how to identify $\SI(S_g^1)$ with a subgroup of the pure braid group $PB_{2g+1}$. 
It is then easy to check that $\SI(S_g^1)$ is isomorphic to the kernel $K_{2g+1}$ of the reduced Burau representation of $PB_{2g+1}$ evaluated at $t=-1$; see [@perron Remark 4.3], [@mp], and [@companion]. Using the fact that $\SI(S_g^1)$ splits over its center (Theorem \[thm:uncapping\]), it follows that $K_{2g+1}$ also splits as a direct product over its center $Z(K_{2g+1})$ and $K_{2g+1}/Z(K_{2g+1}) \cong \SI(S_g)$.

We can also use the Birman exact sequence in Theorem \[thm:algchar\] to relate $K_{2g+2}$ to $K_{2g+1}$. Analogously to the case of odd index braid groups, we have $\SI(S_g,P) \cong K_{2g+2}/Z(K_{2g+2})$. In the even degree case, we also have $Z(K_{2g+2}) = Z(B_{2g+2})$, where $B_{2g+2}$ is the braid group on $2g+2$ strands. Thus, by Corollary \[cor:surprise\], we have $$K_{2g+2}/Z(B_{2g+2}) \cong (K_{2g+1}/Z(K_{2g+1})) \ltimes F_{\infty}.$$

We would also like to thank Joan Birman, Kai-Uwe Bux, Tom Church, Benson Farb, Dick Hain, Chris Leininger, and Andy Putman for helpful discussions.

Hyperelliptic mapping class groups, hyperelliptic Torelli groups, and the Birman–Hilden theorem {#sec:bg}
===============================================================================================

We recall some basic information about hyperelliptic mapping class groups, including the fundamental theorem of Birman–Hilden. Theorems \[thm:bh\], \[thm:bh marked\], and \[thm:bh low genus\] below are all special cases of their general theorem [@birmanhilden Theorem 1].

Let $S$ be a compact, connected, orientable surface with finitely many marked points in its interior. The *mapping class group* $\Mod(S)$ is the group of homotopy classes of homeomorphisms of $S$, where all homeomorphisms and homotopies are required to fix the marked points as a set and to fix $\partial S$ pointwise. A hyperelliptic involution is an order two homeomorphism of $S_g$ that acts by $-I$ on $H_1(S_g;\Z)$. We fix some hyperelliptic involution $s$ of $S_g$, once and for all. Its mapping class, which we also call a hyperelliptic involution, will be denoted $\sigma$. The mapping class $\sigma$ is unique up to conjugacy.

![Rotation by $\pi$ about the indicated axis is a hyperelliptic involution.[]{data-label="figure:hi"}](hi)

The *hyperelliptic mapping class group*, or *symmetric mapping class group*, is the group $\SMod(S_g)$ of isotopy classes of orientation-preserving homeomorphisms of $S_g$ that commute with $s$ (isotopies are not required to be $s$-equivariant). The *hyperelliptic Torelli group* of $S_g$ is the group $$\SI(S_g) = \I(S_g) \cap \SMod(S_g).$$ Suppose that $P$ is a set of marked points in $S_g$, and denote the marked surface by $(S_g,P)$. The hyperelliptic mapping class group $\SMod(S_g,P)$ is the subgroup of $\Mod(S_g,P)$ consisting of elements represented by homeomorphisms that commute with $s$. The hyperelliptic Torelli group $\SI(S_g,P)$ is the subgroup of $\SMod(S_g,P)$ consisting of elements that act trivially on the relative homology $H_1(S_g,P;\Z)$.

Let $g \in \{0,1,2\}$. The group $\Mod(S_g)$ has a generating set consisting of Dehn twists about symmetric simple closed curves. Each such Dehn twist has a representative that commutes with $s$, and so we obtain the following.

\[fact:low genus\] For $g \in \{0,1,2\}$, we have $$\SMod(S_g) = \Mod(S_g).$$

For $g \geq 3$, the group $\SMod(S_g)$ has infinite index in $\Mod(S_g)$.
Indeed, if $a$ is any isotopy class of simple closed curves in $S_g$ that is not fixed by $\sigma$, then no nontrivial power of the Dehn twist $T_a$ is an element of $\SMod(S_g)$ (note that, by Fact \[fact:low genus\], no such curves exist in a closed genus two surface!). However, there is a very useful description of $\SMod(S_g)$ given by Birman–Hilden, which we now explain.

The quotient of $S_g$ by $s$ is a sphere with $2g+2$ marked points, namely the images of the fixed points of $s$. We denote this surface by $S_{0,2g+2}$. Any homeomorphism of $S_g$ that commutes with $s$ necessarily fixes the fixed points of $s$, and hence descends to a homeomorphism of $S_{0,2g+2}$ preserving the marked points. By definition, any element $f$ of $\SMod(S_g)$ has a representative $\phi$ that commutes with $s$. Thus, there is a map $$\theta : \SMod(S_g) \to \Mod(S_{0,2g+2}).$$ The following is a special case of a theorem of Birman–Hilden.

\[thm:bh\] For $g \geq 2$, the map $\theta : \SMod(S_g) \to \Mod(S_{0,2g+2})$ is a well-defined, surjective homomorphism with kernel $\langle \sigma \rangle$. In particular, $\SMod(S_g)/\langle \sigma \rangle \cong \Mod(S_{0,2g+2})$.

Let $p_1,p_2 \in S_g$ be distinct points that are interchanged by $s$. The quotient $(S_g,\{p_1,p_2\})/\langle s \rangle$ is the pair $(S_{0,2g+2},p)$, where $p \in S_{0,2g+2}$ is the image of $p_1 \cup p_2$. Elements of $\Mod(S_{0,2g+2},p)$ can permute the $2g+2$ marked points coming from $S_{0,2g+2}$, but must preserve the marked point $p$. As before there is a map $\theta : \SMod(S_g,\{p_1,p_2\}) \to \Mod(S_{0,2g+2},p)$. We have the following analogue of Theorem \[thm:bh\].

\[thm:bh marked\] For $g \geq 1$, the homomorphism $\theta : \SMod(S_g,\{p_1,p_2\}) \to \Mod(S_{0,2g+2},p)$ is a well-defined, surjective homomorphism with kernel $\langle \sigma \rangle$. In particular, $\SMod(S_g,\{p_1,p_2\})/\langle \sigma \rangle \cong \Mod(S_{0,2g+2},p)$.

Theorem \[thm:bh\] does not hold as stated for $g \in \{0,1\}$. In fact, in these cases, the map $\theta$ is not even well defined. This is because there are nontrivial finite order homeomorphisms of $S_{0,2g+2}$ that lift to homeomorphisms of $S_g$ that are homotopic to the identity. Therefore, we are forced to redefine $\theta$ in these cases. Let $p$ be some particular fixed point of ${s}$. We have $$\SMod(S^2,p) = \Mod(S^2,p) = \Mod(S^2) = \SMod(S^2) = 1$$ and $$\SMod(T^2,p) = \Mod(T^2,p) \cong \Mod(T^2) = \SMod(T^2) \cong \textrm{SL}(2,\Z).$$ Therefore, for $g \in \{0,1\}$, we can define $\theta$ via the composition $$\SMod(S_g) \stackrel{\cong}{\to} \SMod(S_g,p) \to \Mod(S_{0,2g+2}).$$ For $g=0$, this map $\theta$ is the trivial map $\Mod(S^2) \to \Mod(S_{0,2}) \cong \Z/2\Z$.

\[thm:bh low genus\] The map $\theta : \SMod(T^2) \to \Mod(S_{0,4})$ is a well-defined homomorphism with kernel $\langle \sigma \rangle$. Its image is the subgroup of $\Mod(S_{0,4})$ consisting of elements that fix the marked point corresponding to $p \in T^2$.

Birman exact sequences for the hyperelliptic mapping class group {#section:bes}
================================================================

In this section we give Birman exact sequences for hyperelliptic mapping class groups in the two cases which will be of interest for us: first, forgetting one marked point, and then forgetting two. We begin by recalling the classical Birman exact sequence. Let $S$ denote a connected, orientable, compact surface with finitely many marked points in its interior.
Assume that the surface obtained by removing the marked points from $S$ has negative Euler characteristic. Let $p \in S$ be a marked point (distinct from any others in $S$). There is a forgetful map $\Mod(S,p) \to \Mod(S)$, and the Birman exact sequence identifies the kernel of this map with $\pi_1(S,p)$: $$1 \to \pi_1(S,p) \stackrel{\Push}{\to} \Mod(S,p) \to \Mod(S) \to 1.$$ Given an element $\alpha$ of $\pi_1(S,p)$, we can describe $\Push(\alpha)$ as the map obtained by pushing $p$ along $\alpha$; see [@primer Section 5.2] or [@birmanes Section 1]. Forgetting one point -------------------- Let $p \in S_g$ be a fixed point of ${s}$. As in the classical Birman exact sequence, there is a forgetful map $\SMod(S_g, p) \to \SMod(S_g)$. \[thm:sbes1\] Let $g \geq 0$. The forgetful map $\SMod(S_g, p) \to \SMod(S_g)$ is injective. Note that this map is not surjective for $g \geq 2$. For example, its image does not contain a Dehn twist about a symmetric curve through $p$. The group $\SMod(S^2,p)$ is trivial, so there is nothing to show in this case. For $g=1$, we can use Fact \[fact:low genus\] plus the fact that the forgetful map $\Mod(T^2,p) \to \Mod(T^2)$ is an isomorphism [@primer Section 5.2]. So assume $g \geq 2$. The classical Birman exact sequence gives: $$1 \to \pi_1(S_g) \stackrel{\Push}{\to} \Mod(S_g,p) \to \Mod(S_g) \to 1 .$$ Therefore, to prove the theorem, we need to show that the image of $\pi_1(S_g)$ in $\Mod(S_g,p)$ intersects $\SMod(S_g,p)$ trivially. In other words, we need to show that $\sigma$ does not commute with any nontrivial element of the image of $\pi_1(S_g)$. For $f \in \Mod(S_g,p)$ and $\alpha \in \pi_1(S_g)$, we have that $f \Push(\alpha) f^{-1} = \Push(f_\star(\alpha))$. Therefore, we need to show that $\sigma_\star(\alpha) \neq \alpha$ for all nontrivial $\alpha \in \pi_1(S_g)$. Choose a hyperbolic metric on $S_g$ so that $s$ is an isometry. A concrete way to do this is to identify $S_g$ with a hyperbolic $(4g+2)$-gon with opposite sides glued, and take $s$ to be rotation by $\pi$ through the center. Next, choose a universal metric covering $\mathbb{H}^2 \to S_{g}$. The preimage of $p$ in $\mathbb{H}^2$ is the set $\{ \gamma \cdot \widetilde p : \gamma \in \pi_1(S_g) \}$, where $\widetilde p$ is some fixed lift of $p$. The map ${s}$ has a unique lift $\widetilde {s}$ to $\Isom^+(\mathbb{H}^2)$ that fixes $\widetilde p$. This lift has order two. By the classification of elements of $\Isom^+(\mathbb{H}^2)$, it is a rotation by $\pi$. Thus, $\widetilde {s}$ has exactly one fixed point. The action of $\widetilde {s}$ on the set $\{ \gamma \cdot \widetilde p\}$ is given by $$\gamma \cdot \widetilde p \mapsto {s}_\star(\gamma) \cdot \widetilde p.$$ If $\sigma_\star(\alpha)=\alpha$, then it follows that $\widetilde {s}$ fixes $\alpha \cdot \widetilde p$. But we already said that $\widetilde {s}$ has a unique fixed point, namely $\widetilde p$. So $\alpha=1$, as desired. Forgetting two points {#subsection:forget2} --------------------- Let $p_1,p_2 \in S_g$, with ${s}(p_1)=p_2$, and let $p$ denote the image of $p_1 \cup p_2$ in the quotient sphere $S_{0,2g+2}$. Let $\SBK(S_g,\{p_1,p_2\})$ denote the kernel of the forgetful homomorphism $\SMod(S_g, \{ p_1 , p_2 \}) \to \SMod(S_g)$ (the notation is for “symmetric Birman kernel”). We have a short exact sequence: $$1 \to \SBK(S_g,\{p_1,p_2\}) \to \SMod(S_g, \{ p_1 , p_2 \}) \to \SMod(S_g) \to 1.$$ By $\pi_1(S_{0,2g+2},p)$ we mean the fundamental group of the complement in $S_{0,2g+2}$ of the $2g+2$ marked points. 
\[thm:sbes2\] Let $g \geq 1$. We have that $\SBK(S_g,\{p_1,p_2\}) \cong F_{2g+1}$, where $F_{2g+1}$ is identified with $\pi_1(S_{0,2g+2},p)$.

We have the following commutative diagram. $$\xymatrix{ & & 1\ar[d] & 1 \ar[d]&\\ & & \langle \sigma \rangle \ar[d] \ar[r]^\cong & \langle \sigma \rangle \ar[d] & \\ 1 \ar[r]& \SBK(S_g,\{p_1,p_2\}) \ar[r]& \SMod(S_g, \{p_1, p_2 \}) \ar[d] \ar[r] & \SMod(S_g) \ar[r] \ar[d] & 1 \\ 1 \ar[r]& \pi_1(S_{0,2g+2},p) \ar@{}[d]^{\rotatebox[origin=c]{270}{$\cong$}} \ar[r] & \Mod(S_{0,2g+2},p) \ar[r] \ar[d] & \Mod(S_{0,2g+2}) \ar[r] & 1\\ &\ \ \ F_{2g+1} & 1 & & }$$ The second horizontal short exact sequence is an instance of the Birman exact sequence, and the two vertical sequences are given by Theorems \[thm:bh\], \[thm:bh marked\], and \[thm:bh low genus\]. From the diagram it is straightforward to see that $\SBK(S_g,\{p_1,p_2\}) \cong \pi_1(S_{0,2g+2})$.

Birman exact sequences for hyperelliptic Torelli groups: main results {#sec:main}
=====================================================================

The main results of the paper are Birman exact sequences for hyperelliptic Torelli groups. As in the previous section, there are two versions, corresponding to forgetting one point (Theorem \[thm:sibes1\]) and forgetting two points (Theorem \[thm:algchar\]).

Forgetting one point
--------------------

Let $\PMod(S_{0,2g+2})$ denote the subgroup of $\Mod(S_{0,2g+2})$ consisting of elements that induce the trivial permutation of the marked points. The next fact is an easy exercise (it also follows immediately from the main result of [@arnold]).

\[lem:torelli pure\] Let $g \geq 0$. Under the map $\theta:\SMod(S_g) \to \Mod(S_{0,2g+2})$, the image of $\SI(S_g)$ lies in $\PMod(S_{0,2g+2})$.

We are now ready for the proof of our first Birman exact sequence for hyperelliptic Torelli groups.

It follows immediately from Theorem \[thm:sbes1\] that the homomorphism $\SI(S_g,p) \to \SI(S_g)$ is injective. We will show that it is also surjective. Let $f \in \SI(S_g)$ and let $\phi$ be a representative homeomorphism that commutes with $s$. By Lemma \[lem:torelli pure\], the induced homeomorphism of $S_{0,2g+2}$ fixes each of the $2g+2$ marked points. It follows that $\phi$ fixes $p$ and that $\phi$ represents an element $\widetilde f$ of $\SI(S_g,p)$ that maps to $f$.

Before moving on to the second Birman exact sequence for the hyperelliptic Torelli group, we give a variation of Theorem \[thm:sibes1\], where we forget a boundary component (really, cap a boundary component) instead of a marked point. Let $p \in S_g$ be a fixed point of ${s}$, and let $\Delta \subset S_g$ be an embedded disk that contains $p$ and is fixed by ${s}$. The surface $S_g-\Delta$ is a compact surface of genus $g$ with one boundary component, which we denote by $S_g^1$. The hyperelliptic mapping class group $\SMod(S_g^1)$ is defined as the group of isotopy classes of orientation-preserving homeomorphisms that commute with $s$ and restrict to the identity on $\partial S_g^1$. Also, $\SI(S_g^1)$ is the subgroup of $\SMod(S_g^1)$ consisting of all elements that act trivially on $H_1(S_g^1;\Z)$. The Dehn twist $T_{\partial S_g^1}$ is an infinite order element of $\SI(S_g^1)$. The inclusion $S_g^1 \to S_g$ induces a homomorphism $\SI(S_g^1) \to \SI(S_g)$, and so we can again ask about the kernel. In this case we have the following.

\[thm:uncapping\] Let $g \geq 1$.
We have $$\SI(S_g^1) \cong \SI(S_g) \times \Z,$$ where the map $\SI(S_g^1) \to \SI(S_g)$ is the one induced by the inclusion $S_g^1 \to S_g$, and where the $\Z$ factor is $\langle T_{\partial S_g^1} \rangle$.

One version of the Birman–Hilden theorem [@primer Theorem 9.2] identifies $\SMod(S_g^1)$ isomorphically with the mapping class group of a disk $D_{2g+1}$ with $2g+1$ marked points. The latter is isomorphic to the braid group $B_{2g+1}$ on $2g+1$ strands. By Lemma \[lem:torelli pure\], the group $\SI(S_g^1)$ is identified isomorphically with a subgroup of $PB_{2g+1}$, the subgroup of $B_{2g+1}$ consisting of elements that fix each marked point/strand. For any $n$, the group $PB_n$ splits as a direct product over its center, which is generated by the Dehn twist $T_{\partial D_n}$ [@primer Section 9.3]. Under the restriction $\bar \theta : \SI(S_g^1) \hookrightarrow PB_{2g+1}$, we have $\bar \theta^{-1}(Z(PB_{2g+1})) = \langle T_{\partial S_g^1} \rangle$. Thus, $\SI(S_g^1)$ splits as a direct product over $\langle T_{\partial S_g^1} \rangle$.

It remains to show that $\SI(S_g^1)/ \langle T_{\partial S_g^1} \rangle \cong \SI(S_g)$. There is a short exact sequence $$1 \to \langle T_{\partial S_g^1} \rangle \to \Mod(S_g^1) \to \Mod(S_g,p) \to 1,$$ where the map $\Mod(S_g^1) \to \Mod(S_g,p)$ is the one induced by the inclusion $S_g^1 \to (S_g,p)$; see [@primer Proposition 3.19]. On the level of hyperelliptic Torelli groups, this gives $$1 \to \langle T_{\partial S_g^1} \rangle \to \SI(S_g^1) \to \SI(S_g,p) \to 1.$$ We have already shown that $\SI(S_g,p) \cong \SI(S_g)$ (Theorem \[thm:sibes1\]). Thus, $\SI(S_g^1)/ \langle T_{\partial S_g^1} \rangle \cong \SI(S_g)$, and we are done.

In the proof of Proposition \[prop:g12\] we will need a version of Theorem \[thm:uncapping\] for a surface with two marked points. Let $g \geq 1$, and let $p_1$ and $p_2$ be distinct points in $S_g^1$ interchanged by $s$. We define $\SMod(S_g^1,\{p_1,p_2\})$ in the usual way, and then we define $\SI(S_g^1,\{p_1,p_2\})$ as the kernel of the action of $\SMod(S_g^1,\{p_1,p_2\})$ on $H_1(S_g^1,\{p_1,p_2\};\Z)$. By essentially the same argument as the one used for Theorem \[thm:uncapping\], we have, for $g \geq 0$, $$\SI(S_g^1,\{p_1,p_2\}) \cong \SI(S_g,\{p_1,p_2\}) \times \Z,$$ where the $\Z$ factor is the Dehn twist about $\partial S_g^1$.

Forgetting two points {#siforget2}
---------------------

Recall from Theorem \[thm:sbes2\] that the kernel of $\SMod(S_g, \{ p_1 , p_2 \}) \to \SMod(S_g)$ is $\SBK(S_g,\{p_1,p_2\}) \cong F_{2g+1}$, which is identified with $\pi_1(S_{0,2g+2})$. Let $p$ denote the image of $p_1 \cup p_2$ in $S_{0,2g+2}$. Let $\zeta_1,\dots,\zeta_{2g+1}$ be the generators for $\pi_1(S_{0,2g+2},p) \cong F_{2g+1}$ shown in Figure \[figure:zetas\]. In what follows, we identify $F_{2g+1}$ with $\pi_1(S_{0,2g+2},p)=\langle \zeta_i\rangle$.

![The elements $\zeta_i$ of $\pi_1(S_{0,2g+2},p)$.[]{data-label="figure:zetas"}](zetas)

Let $\SIBK(S_g,\{p_1,p_2\})$ denote the kernel of the forgetful homomorphism $\SI(S_g, \{ p_1,p_2 \}) \to \SI(S_g)$, that is: $$1 \to \SIBK(S_g,\{p_1,p_2\}) \to \SI(S_g, \{ p_1,p_2 \}) \to \SI(S_g) \to 1$$ (the notation stands for “symmetric Torelli Birman kernel”). Consider the homomorphism $F_{2g+1} \to \Z$ obtained by sending each $\zeta_i$ to 1. As in the introduction, we define the *even subgroup* of $F_{2g+1}$ to be the preimage of $2\Z$, and we denote it by $F_{2g+1}^{even}$.
The elements of $F_{2g+1}^{even}$ are products $\zeta_{i_1}^{\alpha_1}\zeta_{i_2}^{\alpha_2}\cdots \zeta_{i_k}^{\alpha_k}$ where $k$ is even and $\alpha_i \in \{-1,1\}$ for all ${i}$. As in the introduction, let $$\epsilon : F_{2g+1}^{even} \to \Z^{2g+1}$$ be the homomorphism given by $$\zeta_{i_1}^{\alpha_1}\zeta_{i_2}^{\alpha_2}\cdots \zeta_{i_k}^{\alpha_k} \mapsto \sum_{j=1}^k (-1)^{j+1} e_{i_j}$$ where $\{e_1, \ldots, e_{2g+1}\}$ are the standard generators for $\Z^{2g+1}$. Theorem \[thm:algchar\] states that, as a subgroup of $\SBK(S_g,\{p_1,p_2\})$, the group $\SIBK(S_g,\{p_1,p_2\})$ is equal to the image of $\ker \epsilon$ under the isomorphism $$\langle \zeta_1,\dots,\zeta_{2g+1} \rangle = F_{2g+1} \cong \pi_1(S_{0,2g+2},p) \stackrel{\cong}{\to} \SBK(S_g,\{p_1,p_2\}).$$

In order to prove Theorem \[thm:algchar\], we will need two lemmas describing the action of elements of $\SBK(S_g,\{p_1,p_2\})$ on the relative homology $H_1(S_g,\{p_1,p_2\};\Z)$. Our argument has its origins in the work of Johnson [@dj2 Section 2], van den Berg [@vdb Section 2.4], and Putman [@cutpaste Section 4]. A *proper arc* $\alpha$ in a surface $S$ with marked points $\{p_i\}$ is a map $\alpha : I \to (S,\{p_i\})$ where $\alpha^{-1}(\{p_i\}) = \{0,1\}$.

\[lem:fix2\] Let $g \geq 1$. If $f$ is an element of $\SBK(S_g,\{p_1,p_2\})$, and if $\beta$ is any oriented proper arc in $(S_g,\{p_1,p_2\})$ connecting the two marked points, then $f$ is an element of $\SIBK(S_g,\{p_1,p_2\})$ if and only if in $H_1(S_g,\{p_1,p_2\};\Z)$ we have $f_\star([\beta]) = [\beta]$.

One direction is trivial: if $f \in \SIBK(S_g,\{p_1,p_2\})$, then by definition, $f$ acts trivially on $H_1(S_g,\{p_1,p_2\};\Z)$. We now prove the other direction. There is a basis for $H_1(S_g,\{p_1,p_2\};\Z)$ given by (the classes of) finitely many oriented closed curves plus the oriented arc $\beta$. Thus, to prove the lemma, we only need to show that any $f \in \SBK(S_g,\{p_1,p_2\})$ preserves the class in $H_1(S_g,\{p_1,p_2\};\Z)$ of each oriented closed curve in $S_g$.

Let $\phi$ be a representative of $f$. We can regard $\phi$ either as a homeomorphism of $(S_g,\{p_1,p_2\})$ or as a homeomorphism of $S_g$. Also, let $\gamma$ be an oriented closed curve in $S_g$. We can similarly regard $\gamma$ as a representative of an element of either $H_1(S_g;\Z)$ or of $H_1(S_g,\{p_1,p_2\};\Z)$. Since $f \in \SBK(S_g,\{p_1,p_2\})$, it follows that $\phi$ is isotopic to the identity as a homeomorphism of $S_g$. In particular, we have $$[\gamma] = [\phi(\gamma)] \in H_1(S_g;\Z).$$ There is a natural map $H_1(S_g;\Z) \to H_1(S_g,\{p_1,p_2\};\Z)$ where $[\gamma]$ maps to $[\gamma]$ and $[\phi(\gamma)]$ maps to $[\phi(\gamma)]$. Since this map is well defined, it follows that $$[\gamma] = [\phi(\gamma)] \in H_1(S_g,\{p_1,p_2\};\Z),$$ which is what we wanted to show.

Lemma \[lem:fix2\] tells us that in order to show that an element of the group $\SBK(S_g,\{p_1,p_2\})$ lies in the Torelli group, we only need to keep track of its action on the homology class of a single arc. The only other ingredient we need in order to prove Theorem \[thm:algchar\] is a formula for how elements of $\SBK(S_g,\{p_1,p_2\})$ act on these classes. Via the isomorphism $\pi_1(S_{0,2g+2},p) \to \SBK(S_g,\{p_1,p_2\})$, there is an action of $\pi_1(S_{0,2g+2},p)$ on $H_1(S_g,\{p_1,p_2\};\Z)$. We denote the action of $\zeta \in \pi_1(S_{0,2g+2},p)$ by $\zeta_\star$.
Each generator $\zeta_i$ of $\pi_1(S_{0,2g+2},p)$ is represented by a simple loop in $S_{0,2g+2}$ based at $p$ (see Figure \[figure:zetas\]). Each loop lies in the regular neighborhood of an arc in $S_{0,2g+2}$ that connects $p$ to the $i$th marked point of $S_{0,2g+2}$. We denote the preimage in $(S_g,\{p_1,p_2\})$ of the $i$th such arc in $(S_{0,2g+2},p)$ by $\beta_i$. We orient the $\beta_i$ so that they all emanate from the same marked point; see Figure \[figure:zetadots\].

![The arcs $\beta_i$ in $(S_g,\{p_1,p_2\})$.[]{data-label="figure:zetadots"}](zetadots)

For each $i$, we choose a neighborhood $N_i$ of $\beta_i$ that is fixed by $s$. A *half-twist* about $\beta_i$ is a homeomorphism of $(S_g,\{p_1,p_2\})$ that is the identity on the complement of $N_i$ and is described on $N_i$ by Figure \[figure:zetaaction\]. This half-twist is well defined as a mapping class.

\[lem:action\] Let $g \geq 1$, let $\zeta \in \pi_1(S_{0,2g+2},p)$, and say $$\zeta = \zeta_{i_1}^{\alpha_1} \cdots \zeta_{i_{m}}^{\alpha_m}$$ where $\zeta_{i_j} \in \{\zeta_i\}$ and $\alpha_j \in \{-1,1\}$. We have the following formula for the action on $H_1(S_g,\{p_1,p_2\};\Z)$: $$\zeta_\star([\beta_k]) = [\beta_k] + 2\sum_{j=1}^{m} (-1)^j [\beta_{i_j}].$$

First of all, we claim that the image of $\zeta_i$ under the isomorphism $\pi_1(S_{0,2g+2},p) \to \SBK(S_g,\{p_1,p_2\})$ is the half-twist about $\beta_i$. Indeed, the image of $\zeta_i$ under the map $\pi_1(S_{0,2g+2}) \to \Mod(S_{0,2g+2})$ is a Dehn twist about the boundary of a regular neighborhood of $\zeta_i$, and the unique lift of this Dehn twist to $\SBK(S_g,\{p_1,p_2\})$ is a half-twist about $\beta_i$.

We can now compute the action of $\pi_1(S_{0,2g+2},p)$ on $H_1(S_g,\{p_1,p_2\};\Z)$. We first deal with the case where $\zeta = \zeta_i^{\pm 1}$. If $i=k$, then we immediately see that the half-twist about $\beta_i$ (or its inverse) simply reverses the orientation of $\beta_k$, and so we have $$\zeta_\star([\beta_k]) = - [\beta_k] = [\beta_k] - 2[\beta_k] = [\beta_k]-2[\beta_i],$$ and the lemma is verified in this case. If $\zeta=\zeta_i$ where $\zeta_i \neq \zeta_k$, then a neighborhood of $\beta_i \cup \beta_k$ in $(S_g,\{p_1,p_2\})$ is an annulus with two marked points. As above, $\zeta=\zeta_i$ maps to the half-twist about $\beta_i$. Simply by drawing the picture of the action (see Figure \[figure:zetaaction\]), we check the formula: $$\zeta_\star([\beta_k]) = [\beta_k] - 2[\beta_i].$$ The case $\zeta=\zeta_i^{-1}$ is similar.

![The action of the half-twist about $\beta_i$ on $\beta_k$.[]{data-label="figure:zetaaction"}](zetaaction)

Since the action of $\SBK(S_g,\{p_1,p_2\})$ on $H_1(S_g,\{p_1,p_2\};\Z)$ is linear, we can now complete the proof of the lemma by induction. Suppose the lemma holds for $m-1$, that is, the induced action of $\zeta_{i_1}^{\alpha_1} \cdots \zeta_{i_{m-1}}^{\alpha_{m-1}}$ on $[\beta_k]$ is $$[\beta_k] \mapsto [\beta_k] + 2\sum_{j=1}^{m-1} (-1)^j [\beta_{i_j}].$$ By linearity, and applying the case where $\zeta=\zeta_i^{\pm 1}$, the image of the latter homology class under $\zeta_{i_m}^{\alpha_m}$ is $$\left([\beta_k]-2[\beta_{i_m}]\right) + 2\sum_{j=1}^{m-1} (-1)^j \left([\beta_{i_j}]-2[\beta_{i_m}]\right),$$ which we rewrite as $$[\beta_k] + \left(2\sum_{j=1}^{m-1} (-1)^j [\beta_{i_j}]\right) + \left(4\sum_{j=1}^{m-1} (-1)^{j+1} [\beta_{i_m}]\right) -2[\beta_{i_m}].$$ The sum of the third and fourth terms is $2(-1)^m[\beta_{i_m}]$, and so the lemma is proven.
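Before turning to the proof of Theorem \[thm:algchar\], it may help to record a small computational sanity check of Lemma \[lem:action\] and of the role of $\ker \epsilon$; the sketch below, written for $g=1$ (so $2g+1 = 3$ arcs), is purely illustrative and plays no logical role in the argument.

```python
import numpy as np

# Toy check, for g = 1 (so 2g+1 = 3), that a word lies in ker(epsilon) exactly when it fixes
# the classes [beta_k]. The single-letter action is that of Lemma (action):
# zeta_i^{+-1} : [beta_k] -> [beta_k] - 2[beta_i], extended linearly (the exponent is irrelevant).
n = 3

def act_letter(i, v):
    w = v.copy()
    w[i - 1] -= 2 * v.sum()       # subtract 2*(coefficient sum) from the beta_i coordinate
    return w

def act_word(word, v):
    for i in word:                # apply the letters of the word in order
        v = act_letter(i, v)
    return v

def epsilon(word):
    out = np.zeros(n, dtype=int)  # epsilon(zeta_{i_1}...zeta_{i_m}) = sum_j (-1)^{j+1} e_{i_j}
    for j, i in enumerate(word):
        out[i - 1] += 1 if j % 2 == 0 else -1
    return out

beta1 = np.array([1, 0, 0])
commutator = [3, 1, 2, 1, 1, 3, 1, 2]   # [zeta_3 zeta_1, zeta_2 zeta_1] spelled out letter by letter
print(epsilon(commutator), act_word(commutator, beta1))   # zero vector; [beta_1] is fixed
print(epsilon([1, 2]), act_word([1, 2], beta1))           # nonzero; [beta_1] is moved
```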
We are now poised to prove Theorem \[thm:algchar\], which states that the map $\SI(S_g,\{p_1,p_2\}) \to \SI(S_g)$ is surjective and identifies its kernel with $\ker \epsilon$.

By Lemma \[lem:fix2\], an element of $\SBK(S_g,\{p_1,p_2\})$ lies in $\SIBK(S_g,\{p_1,p_2\})$ if and only if it fixes the relative class $[\beta_1]$ in $H_1(S_g,\{p_1,p_2\};\Z)$. It then follows from Lemma \[lem:action\] that an element of $\SBK(S_g,\{p_1,p_2\})$ fixes $[\beta_1]$ if and only if it lies in the image of $\ker \epsilon$.

It remains to show that there is a splitting $\SI(S_g) \to \SI(S_g,P)$. By Theorem \[thm:uncapping\], there is an injective homomorphism $\SI(S_g) \to \SI(S_g^1)$ with a left inverse induced by the inclusion $S_g^1 \to S_g$. Via the (symmetric) inclusion $S_g^1 \to (S_g, \{ p_1,p_2 \})$, we obtain an injective homomorphism $\SI(S_g^1) \to \SI(S_g, \{ p_1,p_2 \})$. The composition is the desired map $\SI(S_g) \to \SI(S_g, \{ p_1,p_2 \})$.

Generating $\SIBK(S_g,\{p_1,p_2\})$ by products of twists
---------------------------------------------------------

We now give another description of the group $\SIBK(S_g,\{p_1,p_2\})$ (Theorem \[thm:topchar\]).

\[thm:topchar\] For $g \geq 0$, each element of $\SIBK(S_g,\{p_1,p_2\})$ is a product of Dehn twists about symmetric separating simple closed curves.

Theorem \[thm:algchar\] identifies $\SIBK(S_g,\{p_1,p_2\})$ with $\ker \epsilon$. It is a general fact from combinatorial group theory that the kernel of a homomorphism is normally generated by elements that map to the defining relators for the image of the homomorphism. We aim to exploit this fact, and so we start by determining the image of $\epsilon$. Let $\Z^{2g+1} \to \Z$ be the map that records the sum of the coordinates, and let $\Z^{2g+1}_{bal}$ be the kernel.

\[lem:im ep\] Let $g \geq 0$. The image of $F_{2g+1}^{even}$ under $\epsilon$ is $\Z^{2g+1}_{bal}$.

It follows immediately from the definition of the map $\epsilon$ that $\epsilon(F_{2g+1}^{even})$ lies in $\Z^{2g+1}_{bal}$. To show that $\epsilon(F_{2g+1}^{even})$ is all of $\Z^{2g+1}_{bal}$, it suffices to show that $\Z^{2g+1}_{bal}$ is generated by the elements $\epsilon(\zeta_i\zeta_j)=e_i-e_j$, where $e_i$ is a generator of the $i$th factor of $\Z^{2g+1}$. We denote the element $e_i-e_j$ by $e_{i,j}$.

Let $\Z^{2g+1} \to \Z$ be the function that records the sum of the absolute values of the coordinates. We think of this function as a height function. The only element of $\Z^{2g+1}$ at height zero is the identity, which is the image of the identity element of $F_{2g+1}^{even}$. Let $z$ be an arbitrary nontrivial element of $\Z^{2g+1}_{bal}$. Since $z$ is nontrivial, it has at least one nonzero component, say the ${m}$th. By the definition of $\Z^{2g+1}_{bal}$, there must be one component, say the $j$th, with opposite sign. Say the ${m}$th component is negative and the $j$th component is positive. The sum $\epsilon(\zeta_m\zeta_j)+z$ has height strictly smaller than that of $z$, so by induction the lemma is proven.

\[lem:zbalpres\] Let $g \geq 0$. The group $\Z^{2g+1}_{bal}$ has a presentation: $$\langle e_{1,1}, \dots, e_{2g+1,2g+1} , e_{2,1}, \dots, e_{2g+1,1} \ |\ e_{i,i}=1, [e_{i,1},e_{j,1}]=1 \rangle.$$

Since $\Z^{2g+1}_{bal}$ is the subgroup of $\Z^{2g+1}$ described by one linear equation (the sum of the coordinates is 0), we see that $\Z^{2g+1}_{bal} \cong \Z^{2g}$. Denote by $\eta$ the isomorphism $\Z^{2g+1}_{bal} \to \Z^{2g}$ given by forgetting the first coordinate.
The group $\Z^{2g}$ is the free abelian group on $\eta(e_{2,1}), \dots, \eta(e_{2g+1,1})$, and so it has a presentation whose generators are $\eta(e_{2,1}), \dots, \eta(e_{2g+1,1})$ and whose relations are $[\eta(e_{i,1}),\eta(e_{j,1})]=1$. We thus obtain a presentation for $\Z^{2g+1}_{bal}$ with generators $e_{2,1}, \dots, e_{2g+1,1}$ and relations $[e_{i,1},e_{j,1}]=1$. If we add (formal) generators $e_{i,i}$ to this presentation, as well as relations $e_{i,i}=1$, we obtain a new presentation for the same group; this is an elementary Tietze transformation [@mks Section 1.5]. We are now ready to prove Theorem \[thm:topchar\]. Since $\SI(S^2,\{p_1,p_2\})=1$, we may assume $g \geq 1$. By Lemma \[lem:im ep\], we have a short exact sequence: $$1 \to \ker \epsilon \to F_{2g+1}^{even} \stackrel{\epsilon}{\to} \Z^{2g+1}_{bal} \to 1$$ where $\epsilon(\zeta_i \zeta_j) = e_{i,1}$ and $\epsilon(\zeta_i^2) = e_{i,i}=0$. Consider the presentation for $\Z^{2g+1}_{bal}$ given in Lemma \[lem:zbalpres\]. If we lift each relator in this presentation to an element of $F_{2g+1}^{even}$, we obtain a normal generating set for $\ker \epsilon$, that is, these elements and their conjugates in $F_{2g+1}^{even}$ generate $\ker \epsilon$. The relators $e_{i,i}$ and $[e_{i,1},e_{j,1}]$ lift to elements $$\zeta_i^2 \quad \mbox{and} \quad [\zeta_i\zeta_1,\zeta_j\zeta_1],$$ respectively. Passing through the isomorphism $F_{2g+1} \to \SBK(S_g,\{p_1,p_2\})$ from Theorem \[thm:sbes2\], and applying Theorem \[thm:algchar\] we obtain a normal generating set for $\SIBK(S_g,\{p_1,p_2\})$. Since the group generated by Dehn twists about symmetric separating curves is normal in $\SMod(S_g,\{p_1,p_2\})$ (hence in $\SBK(S_g,\{p_1,p_2\})$), it remains to show that the image of each $\zeta_i^2$ and $[\zeta_i\zeta_1,\zeta_j\zeta_1]$ in the group $\SBK(S_g,\{p_1,p_2\})$ can be written as a product of Dehn twists about symmetric separating curves. To further simplify matters, the image of each $\zeta_i^2$ in $\SBK(S_g,\{p_1,p_2\})$ is conjugate to $\zeta_1^2$ in $\SMod(S_g,\{p_1,p_2\})$, and (up to taking inverses) the image of each $[\zeta_i\zeta_1,\zeta_j\zeta_1]$ is conjugate to $[\zeta_3\zeta_1,\zeta_2\zeta_1]$ in $\SMod(S_g,\{p_1,p_2\})$ (the point is that there are elements of $\Mod(S_{0,2g+2}, p)$ taking the elements $\zeta_1^2$ and $[\zeta_3\zeta_1,\zeta_2\zeta_1]$ of $\pi_1(S_{0,2g+2},p)$ to the other given elements). Thus, we are reduced to checking that the images in $\SBK(S_g,\{p_1,p_2\})$ of $\zeta_1^2$ and $[\zeta_3\zeta_1,\zeta_2\zeta_1]$ are both products of Dehn twists about symmetric separating curves. In the proof of Lemma \[lem:action\], we showed that the image of $\zeta_1$ in the group $\SBK(S_g,\{p_1,p_2\})$ is a half-twist about the arc $\beta_1$. It follows that the image of $\zeta_1^2$ is the Dehn twist about the boundary of a regular neighborhood of $\beta_1$. This boundary is (isotopic to) a symmetric separating curve in $(S_g,\{p_1,p_2\})$. It remains to analyze the element $[\zeta_3\zeta_1,\zeta_2\zeta_1]$. There is a closed disk in $S_{0,2g+2}$ that contains the distinguished marked point $p$, the 1st, 2nd, and 3rd marked points of $S_{0,2g+2}$, and a representative of $[\zeta_3\zeta_1,\zeta_2\zeta_1] \in \pi_1(S_{0,2g+2},p)$. Under the isomorphism $F_{2g+1} \to \SBK(S_g,\{p_1,p_2\})$ from Theorem \[thm:sbes2\], we see that the commutator $[\zeta_3\zeta_1,\zeta_2\zeta_1]$ maps to an element of $\SI(S_g,\{p_1,p_2\})$ supported on a copy of $(S_1^1,\{p_1,p_2\})$ fixed by $s$. 
Proposition \[prop:g12\] below states that $\SI(S_1^1,\{p_1,p_2\})$ is generated by Dehn twists about symmetric separating curves. Combining this with the fact that the inclusion $(S_1^1,\{p_1,p_2\}) \to (S_g,\{p_1,p_2\})$ takes symmetric separating curves to symmetric separating curves, we conclude that $[\zeta_3\zeta_1,\zeta_2\zeta_1]$ is a product of Dehn twists about separating curves, and we are done. In the proof of Theorem \[thm:topchar\], we used the following fact. \[prop:g12\] The group $\SI(S_1^1,\{p_1,p_2\})$ is generated by Dehn twists about symmetric separating curves. By the discussion after Theorem \[thm:uncapping\], we have $$\SI(S_1^1,\{p_1,p_2\}) \cong \SI(S_1,\{p_1,p_2\}) \times \Z,$$ where the $\Z$ factor is the Dehn twist about $\partial S_1^1$. Therefore, it suffices to show that $\SI(S_1,\{p_1,p_2\})$ is generated by Dehn twists about symmetric separating curves in $(S_1,\{p_1,p_2\})$. This follows immediately from the fact that $\SMod(S_1,\{p_1,p_2\})=\Mod(S_1,\{p_1,p_2\})$ [@primer Section 3.4] and the fact that $\I(S_1,\{p_1,p_2\})$ is generated by Dehn twists about separating curves (this can be proven directly via the argument of [@bbm Lemma 7.2], or it can be obtained immediately by combining [@bbm Lemma 7.2] with Lemma 5.8 below). Application to generating sets {#sec:app} ============================== Recall that Hain has conjectured that $\SI(S_g)$ is generated by Dehn twists about symmetric separating curves. Since $\SI(S^2)$ and $\SI(T^2)$ are trivial, there is nothing to do for those cases. Hain’s conjecture is also known to be true in genus two. Indeed, it follows from Fact \[fact:low genus\] that $$\SI(S_2) = \I(S_2).$$ A theorem of Birman and Powell gives that $\I(S_2)$ is generated by Dehn twists about separating curves [@birmansp; @powell]. It follows that Hain’s conjecture is true for $\SI(S_2)$. Applying Theorems \[thm:sibes1\] and \[thm:uncapping\], we obtain the following extension. \[thm:g21\] The groups $\SI(S_2)$, $\SI(S_2,p)$, and $\SI(S_2^1)$ are each generated by Dehn twists about symmetric separating curves. We now aim to apply Theorem \[thm:topchar\] in order to prove Theorem \[thm:reducibles\], which states that, in order to prove Hain’s conjecture, it is enough to show that $\SI(S_g)$ is generated by reducible elements. To prove this theorem, we assume that $\SI(S_k)$ is generated by reducible elements for $k \leq g$, and we show that each reducible element of $\SI(S_g)$ is generated by Dehn twists about symmetric separating curves. We say that an element of $\Mod(S_g)$ is *strongly reducible* if it fixes the isotopy class of a simple closed curve in $S_g$. We have the following theorem of Ivanov [@ivanov Theorem 3]. \[thm:pure\] Let $g \geq 0$. If $f \in \I(S_g)$ is reducible, then $f$ is strongly reducible. We say that an isotopy class $a$ of simple closed curves is *pre-symmetric* if it is not symmetric and $\sigma(a)$ and $a$ have disjoint representatives. Reduction to the symmetrically reducible case {#section:reduction} --------------------------------------------- We say that an element $f$ of $\SMod(S_g)$ is *symmetrically strongly reducible* if there is an isotopy class of simple closed curves in $S_g$ that is either symmetric or pre-symmetric and is fixed by $f$. We have the following standard fact (see, e.g., [@primer Lemma 2.9]). In the statement, we say that two simple closed curves $\alpha$ and $\beta$ are in minimal position if $|\alpha \cap \beta|$ is minimal with respect to the homotopy classes of $\alpha$ and $\beta$. 
\[lemma:alexander\] Let $S$ be any compact surface. Let $\alpha$ and $\beta$ be two simple closed curves in $S$ that are in minimal position and that are not isotopic. If $\phi : S \to S$ is a homeomorphism that preserves the set of isotopy classes $\{[\alpha],[\beta]\}$, then $\phi$ is isotopic to a homeomorphism that preserves the set $\alpha \cup \beta$. \[prop:reduction\] Let $g \geq 0$. If $f \in \SI(S_g)$ is strongly reducible, then $f$ is symmetrically strongly reducible. Let $a$ be an isotopy class of simple closed curves in $S_g$ that is fixed by $f$. We may assume that $\sigma(a) \neq a$, for in that case there is nothing to do. Since $f$ lies in $\SMod(S_g)$, we have: $$f(\sigma(a)) = \sigma(f(a)) = \sigma(a).$$ In other words, $f$ fixes the isotopy class $\sigma(a)$. Since $\sigma$ has order 2, it preserves the set of the isotopy classes $\{a,\sigma(a)\}$. Let $\alpha$ and $\alpha'$ be representatives for $a$ and $\sigma(a)$ that are in minimal position. Let $\mu$ denote the boundary of a closed regular neighborhood of $\alpha \cup \alpha'$, and let $\mu'$ denote the multicurve obtained from $\mu$ by deleting the inessential components of $\mu$ and replacing any set of parallel curves with a single curve. Lemma \[lemma:alexander\] implies that both $f$ and $\sigma$ fix the isotopy class of $\mu'$. By Theorem \[thm:pure\], $f$ fixes the isotopy class of each component of $\mu'$. Let $\mu_1, \dots, \mu_k$ denote the isotopy classes of the connected components of $\mu'$. If $k=0$, that is to say $a$ and $\sigma(a)$ fill $S_g$, then it follows that $f$ has finite order (see [@primer Proposition 2.8]); hence $f$ is the identity. Now suppose $k > 0$. By construction, $i(\mu_i,\mu_j)=0$ for all $i$ and $j$, and $\sigma$ acts as an involution on the set of isotopy classes $\{[\mu_i]\}$. Thus, there is either a singleton $\{[\mu_i]\}$ or a pair $\{[\mu_i],[\mu_j]\}$ fixed by $\sigma$, and the proposition is proven. Analyzing individual stabilizers {#subsection:cutting} -------------------------------- Let $a$ be the isotopy class of an essential simple closed curve in $S_g$. Assume that $a$ is symmetric or pre-symmetric. If $a$ is symmetric and separating, we choose a representative simple closed curve $\alpha$ so that ${s}(\alpha) = \alpha$, and if $a$ is nonseparating, we choose a representative simple closed curve $\alpha$ so that ${s}(\alpha) \cap \alpha = \emptyset$. Let $A$ denote either $\alpha$ or $\alpha \cup {s}(\alpha)$, depending on whether $a$ is separating or nonseparating, respectively. Let $R_1$ and $R_2$ denote the closures in $S_g$ of the two connected components of $S_g-A$. Let $R_1'$ and $R_2'$ denote the surfaces obtained from $R_1$ and $R_2$ obtained by collapsing each boundary component to a marked point. Let $A'$ denote the set of marked points in either $R_1'$ or $R_2'$. Each pair $(R_k',A')$ is homeomorphic to either $(S_g,p)$ where $p$ is a fixed point of ${s}$ or $(S_g,\{p_1,p_2\})$ where $p_1$ and $p_2$ are interchanged by ${s}$. Since the hyperelliptic involution of $S_g$ induces a hyperelliptic involution of each $(R_k',A')$, we can define $\SMod(R_k',A')$ and $\SI(R_k',A')$ as in Section \[sec:bg\]. We remark that if $a$ is symmetric and nonseparating, then one of the surfaces $R_k'$ is a sphere with two marked points. For this surface, $\SMod(R_k',A') \cong \Z/2\Z$ and $\SI(R_k',A') = 1$. For such $a$, it would have been more natural to take a representative $\alpha$ of $a$ that is symmetric. 
However, the choice we made will allow us to make most of our arguments uniform for the various cases of $a$. Let $\SMod(S_g,a)$ denote the stabilizer of the isotopy class $a$ in $\SMod(S_g)$, and let $\SMod(S_g,\vec a)$ denote the index 2 subgroup of $\SMod(S_g,a)$ consisting of elements that fix the orientation of $a$. We now define maps $$\Psi_k : \SMod(S_g,\vec a) \to \SMod(R_k',A')$$ for $k=1,2$. Let $f \in \SMod(S_g,\vec a)$, and let $\phi$ be a representative that commutes with $s$. We may assume that $\phi$ fixes $\alpha$. Since $\phi$ commutes with $s$, it must also fix $s(\alpha)$. Since $f \in \SMod(S_g,\vec a)$, it follows that $\phi$ does not permute the components of $S_g-A$, and hence induces a homeomorphism $\phi_k'$ of $R_k'$. By construction, $\phi_k'$ commutes with the hyperelliptic involution of $R_k'$. Finally, we define $$\Psi_k(f) = [\phi_k'].$$ We have the following standard fact; see [@primer Proposition 3.20]. \[lem:scut\] Let $g \geq 2$, and let $a$ be either a symmetric or pre-symmetric isotopy class of simple closed curves in $S_g$. Define $R_i'$ and $A'$ as above. The homomorphism $$\Psi_1 \times \Psi_2 : \SMod(S_g, \vec a) \to \SMod(R_1',A') \times \SMod(R_2',A')$$ is well defined and has kernel $$\ker(\Psi_1 \times \Psi_2) = \begin{cases} \langle T_a \rangle & a \ \ \text{symmetric} \\ \langle T_aT_{\sigma(a)} \rangle & a \ \ \text{pre-symmetric} \end{cases}$$ Let $\SI(S_g,a)$ denote $\SI(S_g) \cap \SMod(S_g,a)$. Since $\SI(S_g,a)$ is a subgroup of $\SMod(S_g,\vec a)$, we can restrict each $\Psi_k$ to $\SI(S_g,a)$. \[lemma:cutting\] Let $g \geq 2$. For $k \in \{1,2\}$, the image of $\SI(S_g,a)$ under $\Psi_k$ lies in $\SI(R_k',A')$. By the relative version of the Mayer–Vietoris sequence, we have an exact sequence: $$H_1(A,A) \to H_1(R_1,A) \times H_1(R_2,A) \to H_1(S_g,A) \to H_0(A,A).$$ (In this sequence, and in the rest of the proof, we take the coefficients for all homology groups to be $\Z$.) The first and last groups are trivial, and so we have $$H_1(R_1,A) \times H_1(R_2,A) \cong H_1(S_g,A).$$ For each $k$, the map $R_k \to R_k'$ that collapses the boundary components to marked points induces an isomorphism $$H_1(R_k,A) \cong H_1(R_k',A').$$ The natural map $H_1(S_g) \to H_1(S_g,A)$ is not surjective in general (it fails to be surjective in the case that $a$ is nonseparating). However, the composition $$\pi : H_1(S_g) \to H_1(S_g,A) \stackrel{\cong}{\to} H_1(R_1',A') \times H_1(R_2',A') \to H_1(R_k',A')$$ is surjective for $k \in \{1,2\}$. Indeed, any element $x$ of $H_1(R_1',A') \cong H_1(R_1,A)$ is represented by a collection of closed oriented curves in $R_1$ and oriented arcs in $R_1$ connecting $A$ to itself. If we connect the endpoints of each oriented arc in $R_1$ by a similarly oriented arc in $R_2$, we obtain an element of $H_1(S_g)$ that maps to $x$. By construction the following diagram is commutative: $$\xymatrix{ H_1(S_g) \ar[r]^{f_\star} \ar[d]_\pi & H_1(S_g) \ar[d]^\pi \\ H_1(R_k',A')\ar[r]^{\Psi_k(f)_\star} & H_1(R_k',A') }$$ and the lemma follows immediately. Let $\hat i(\cdot,\cdot)$ denote the algebraic intersection form on $H_1(S_g;\Z)$. We will need the following lemma from [@companion]. \[lem:other paper\] Let $g \geq 2$, and let $a$ and $b$ be isotopy classes of oriented simple closed curves in $S_g$. Suppose that $a$ is pre-symmetric, $b$ is symmetric, and $\hat i([a],[b])$ is odd. Let $k \in \Z$. If $[b] + k[a]$ is represented by a symmetric simple closed curve, then $k$ is even. 
\[lem:image of pk\] Let $g \geq 2$, and let $a$ be the isotopy class of a simple closed curve in $S_g$ that is either symmetric or pre-symmetric. Define $A'$ and $R_k'$ as above. The homomorphism $$(\Psi_1 \times \Psi_2)|_{\SI(S_g,a)} : \SI(S_g,a) \to \SI(R_1',A') \times \SI(R_2',A')$$ is surjective with kernel $$\ker (\Psi_1 \times \Psi_2)|_{\SI(S_g,a)} = \begin{cases} \langle T_a \rangle & a \text{ is separating} \\ 1 & a \text{ is nonseparating}. \end{cases}$$ The kernel of $(\Psi_1 \times \Psi_2)|_{\SI(S_g,a)}$ is $\ker(\Psi_1 \times \Psi_2) \cap \SI(S_g,a)$. The description of $\ker (\Psi_1 \times \Psi_2)|_{\SI(S_g,a)}$ in the statement of the lemma then follows from Lemma \[lem:scut\]. By Lemma \[lemma:cutting\], we have $$(\Psi_1 \times \Psi_2)(\SI(S_g,a)) \subseteq \SI(R_1',A') \times \SI(R_2',A').$$ It remains to show that $(\Psi_1 \times \Psi_2)|_{\SI(S_g,a)}$ is surjective. Let $f' \in \SI(R_1',A') \times \SI(R_2',A')$. Choose some $f \in \SMod(S_g,\vec a)$ that maps to $f'$. Fix some orientation of $a$. Consider the natural map $$\eta : H_1(S_g;\Z)/\langle [a] \rangle \to H_1(R_1',A';\Z) \times H_1(R_2',A';\Z).$$ The mapping classes $f$ and $f'$ induce automorphisms $f_\star$ and $f_\star'$ of $H_1(S_g;\Z)$ and $\textrm{Im}(\eta)$, respectively. Since $f_\star([a])=[a]$, we further have that $f_\star$ induces an automorphism $\overline{f_\star}$ of $H_1(S_g;\Z)/\langle [a] \rangle$. If we give $a$ an orientation, then it represents an element of $H_1(S_g;\Z)$. There is a commutative diagram $$\xymatrix{ H_1(S_g;\Z)/\langle [a] \rangle \ar[r]^{\overline{f_\star}} \ar[d]_\eta & H_1(S_g;\Z)/\langle [a] \rangle \ar[d]^\eta &\\ \textrm{Im}(\eta) \ar[r]^{f_\star'} & \textrm{Im}(\eta) & \hspace{-.5in} \subset \ \ H_1(R_1',A';\Z) \times H_1(R_2',A';\Z) }$$ Since $f_\star'$ is the identity and $\eta$ is injective, it follows that $\overline{f_\star}$ is the identity. If $a$ is separating, then $[a]=0$ and so $f_\star = \overline{f_\star}$ is the identity. Thus, $f$ is an element of $\SI(S_g,a)$, and since $\Psi_1 \times \Psi_2$ maps $f$ to $f'$, we are done in this case. If $a$ is nonseparating, we can find an isotopy class $b$ of oriented symmetric simple closed curves in $S_g$ with $\hat i([a],[b])=1$. Since $\overline{f_\star}$ is the identity, we have $$f_\star([b]) = [b] + k[a]$$ for some $k \in \Z$. Our next goal is to find some $h \in \ker(\Psi_1 \times \Psi_2)$ so that $(hf)_\star$ fixes $[b]$. If $a$ is symmetric, then we can simply take $h$ to be either $T_a^k$ (cf. [@primer Proposition 8.3]). If $a$ is pre-symmetric, then this does not work, since $T_a^k \notin \SMod(S_g)$. However, if $a$ is pre-symmetric, then Lemma \[lem:other paper\] implies that $k$ is even. Thus we can take $h$ to be $\left(T_a T_{\sigma(a)}\right)^{k/2}$. Now, let $x$ be any element of $H_1(S_g;\Z)$. Since $\overline{f_\star}$ is the identity and since $h$ induces the identity map on $H_1(S_g;\Z)/\langle [a] \rangle$ we have $(hf)_\star(x) = x + j[a]$ for some $j \in \Z$. Since $(hf)_\star$ is an automorphism of $H_1(S_g;\Z)$, we have: $$\begin{aligned} \hat i(x,[b]) &=& \hat i ((hf)_\star(x),(hf)_\star([b])) \\ &=& \hat i (x+j[a],[b]) \\ &=& \hat i (x,[b]) + j\, \hat i([a],[b]) \\ &=& \hat i (x,[b]) + j\end{aligned}$$ and so $j=0$. Thus $(hf)_\star(x)=x$ and so $hf \in \SI(S_g,a)$. Since $h \in \ker(\Psi_1 \times \Psi_2)$, we have that $(\Psi_1 \times \Psi_2)(hf) = f'$, and we are done. 
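As a quick illustration of Lemma \[lem:image of pk\] (not needed for what follows), take $g=2$ and let $a$ be the isotopy class of a symmetric separating curve cutting $S_2$ into two genus 1 pieces. Each $(R_k',A')$ is then a torus with one marked point fixed by the hyperelliptic involution, and since $\Mod(S_1,p)$ acts faithfully on $H_1(S_1;\Z)$, we have $\SI(S_1,p) \subseteq \I(S_1,p) = 1$. The lemma therefore gives $$1 \to \langle T_a \rangle \to \SI(S_2,a) \to \SI(S_1,p) \times \SI(S_1,p) = 1,$$ so every element of $\SI(S_2)$ fixing $a$ is a power of $T_a$, a Dehn twist about a symmetric separating curve; this is consistent with Hain’s conjecture in genus two (Theorem \[thm:g21\]).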
The inductive step ------------------ In Section \[section:reduction\] we showed that reducible elements of $\SI(S_g)$ are strongly symmetrically reducible, and in Section \[subsection:cutting\] we studied strongly symmetrically reducible elements of $\SI(S_g)$. We now combine the results from these sections with our Birman exact sequences for $\SI(S_g)$ in Section \[section:bes\] in order to prove Theorem \[thm:reducibles\]. Let $g \geq 2$. As in the statement of the theorem, we assume that $\SI(S_k)$ is generated by reducible elements for all $k \in \{0,\dots,g\}$. Fix some $k \in \{0,\dots,g\}$. We make the inductive hypothesis that $\SI(S_j)$ is generated by Dehn twists about symmetric separating curves for $j \in \{0,\dots,k-1\}$, and we will show that $\SI(S_k)$ is generated by such Dehn twists. The base cases $k=0,1$ are trivial since $\SI(S_0)=\SI(S_1)=1$. Let $f \in \SI(S_k)$ be a reducible element. By Theorem \[thm:pure\], we have that $f$ is strongly reducible. By Proposition \[prop:reduction\], $f$ is symmetrically strongly reducible. In other words, there is an isotopy class $a$ of essential simple closed curves in $S_k$, where $a$ is either symmetric or pre-symmetric, and where $f \in \SI(S_k,a)$. Define the subsets $A,R_1,R_2 \subset S_k$ as in Section \[subsection:cutting\]. As per Lemma \[lem:image of pk\], there is a (surjective) homomorphism $$(\Psi_1 \times \Psi_2)|_{\SI(S_k)} : \SI(S_k,a) \to \SI(R_1',A') \times \SI(R_2',A'),$$ and each element of the kernel is a power of a Dehn twist about a symmetric separating curve (when $a$ is nonseparating, the kernel is trivial). For $i=1,2$, each Dehn twist about a symmetric separating simple closed curve in $\SI(R_i',A')$ has a preimage in $\SI(S_k,a)$ that is also a Dehn twist about a symmetric separating curve. Thus, to prove the theorem, it suffices to show that each element of $\SI(R_i',A')$ is a product of Dehn twists about symmetric separating curves. Fix $i \in \{1,2\}$. Say that $R_i'$ has genus $g_i$. Note that $0 \leq g_i < k$. Combining Theorem \[thm:sibes1\] with Theorem \[thm:topchar\], we have a short exact sequence $$1 \to \SIBK(R_i',A') \to \SI(R_i',A') \to \SI(S_{g_i}) \to 1,$$ where each element of $\SIBK(R_i',A')$ is a product of Dehn twists about symmetric separating simple closed curves in $(R_i',A')$ (in the case that $a$ is separating, we have $\SIBK(R_i',A')=1$). Our inductive hypothesis on $k$ gives that $\SI(S_{g_i})$ is generated by Dehn twists about symmetric separating curves. Since each such Dehn twist has a preimage in $\SI(R_i',A')$ that is also a Dehn twist about a symmetric separating curve, it follows that each element of $\SI(R_i',A')$ is a product of Dehn twists about symmetric separating curves, and we are done. [^1]: The second author gratefully acknowledges support from the Sloan Foundation and the National Science Foundation.
--- abstract: 'We investigate the possibility of correcting for the magnification due to gravitational lensing of standard candle sources, such as Type Ia supernovae. Our method uses the observed properties of the foreground galaxies along the lines-of-sight to each source and the accuracy of the lensing correction depends on the quality and depth of these observations as well as the uncertainties in translating the observed luminosities to the matter distribution in the lensing galaxies. The current work is limited to cases where the matter density is dominated by the individual galaxy halos. However, it is straightforward to generalize the method to include also gravitational lensing from cluster scale halos. We show that the dispersion due to lensing for a standard candle source at $z=1.5$ can be reduced from about $7\,\%$ to $\lesssim 3\,\%$, i.e. the magnification correction is useful in reducing the scatter in the Type Ia Hubble diagram, especially at high redshifts where the required long exposure times makes it hard to reach large statistics and the dispersion due to lensing becomes comparable to the intrinsic Type Ia scatter.' author: - 'C. Gunnarsson, T. Dahlén, A. Goobar, and J. Jönsson' - 'E. Mörtsell' title: 'CORRECTIONS FOR GRAVITATIONAL LENSING OF SUPERNOVAE: BETTER THAN AVERAGE?' --- INTRODUCTION {#sec:intro} ============ Observations of high-redshift Type Ia supernovae (SNIa) have in the last decade or so lead to a dramatic paradigm shift in cosmology [@perl98; @riess98; @schmidt; @perl99; @knop03; @tonry; @riess04]. Measurements of the luminosity distance to supernovae over a wide range of redshifts were used to break the degeneracy between cosmic fluids, as suggested by @goo95. The data clearly favors a universe dominated by repulsive dark energy, and which is presently undergoing accelerated expansion. The next step in observational cosmology is to test the nature of this dark energy, whether constant, i.e. compatible with Einstein’s cosmological constant, or due to completely new physics. Observations of SNIa are among the leading astrophysical tools to explore this question further, as they probe the expansion history of the Universe directly. Large dedicated surveys are in progress (e.g. CFHTLS, ESSENCE, SDSSII) and even more ambitious space based projects are being planned for the future, e.g. the JDEM proposals, DESTINY, JEDI and SNAP. One thing in common for all these projects is the very large projected number of SNIa that eventually will populate the Hubble diagram used to derive cosmological parameters. Clearly, systematic uncertainties will (soon) become the limiting factor. While some of these uncertainties are due to our lack of knowledge of the SNIa physics and intrinsic properties, others stem from possible interactions of the supernova light (rest-frame UV and optical) near the source or along the line-of-sight (l-o-s), e.g. extinction by dust in the host galaxy or intergalactic medium. In this work, we focus on the gravitational interaction of photons along the l-o-s, i.e. gravitational lensing. As supernova surveys become deeper, the measured source fluxes become increasingly more sensitive to the inhomogeneities in the matter distribution of the Universe. 
In @ramme, the JDEM/SNAP mission was simulated using the SNOC Monte-Carlo package [@snoc] and it was found that a careful [*statistical*]{} treatment could be used to optimize the fitting of cosmological parameters from the Hubble diagram of SNIa taking into account the (asymmetric) redshift dependent lensing magnification distribution. Lensing of [*individual*]{} SNe has also been studied, e.g., in @lewibata [@97ff; @benitez; @qlet], by modeling the effect from the galaxies close to the l-o-s to the SN. In this work, we investigate the accuracy to which lensing (de)magnification can be estimated on individual supernovae. For that purpose, we create mock galaxy catalogs with properties (e.g., galaxy magnitudes, redshifts and spectral types) based on luminosity functions derived from observations by @dah05. Using the brightness of galaxies as a tracer of the gravitational fields along the l-o-s, we use the multiple lens-plane package Q-LET [@qlet] to investigate the accuracy to which the magnification can be estimated as a function of the survey parameters, assumptions on M/L-ratios and halo shapes. In an accompanying paper [@jacke], we apply the technique described here to investigate the lensing magnification probability distribution for 33 supernovae in the GOODS survey [@riess04; @str04]. In this paper, we will assume that the dark matter halos of individual galaxies and small groups are most important for the lensing magnification of supernovae. For cluster size lenses, additional information is needed to model the gravitational potential, e.g., lensing of background galaxies. Though such a generalization of the method is straightforward, in the following we have assumed the uncertainty in the lensing magnification factor for the small fraction of SNe with foreground clusters to be of the same order as for SNe with massive foreground galaxies. Also, we assume that the large scale dark matter structures in the Universe are traced by the luminous matter, i.e. that filaments and walls are populated by galaxies. This approach is reasonable as long as the luminous and dark matter are not [*anti-*]{}correlated, see §\[sec:summary\]. In §\[sec:correct\], we discuss whether one needs to correct for lensing at all. §\[sec:modeling\] describes the underlying theory and §\[sec:simsurv\] treats our method for estimating the accuracy of the lensing corrections. We summarize and discuss our results in §\[sec:summary\]. Throughout the paper we use natural units, where $c=G=1$. We assume that the underlying cosmological parameters are $H_0=70\ {\rm km}\ \!{\rm s}^{-1} {\rm Mpc}^{-1}$, the matter density ${\Omega_{\rm M}}=0.3$ and the dark energy density ${\Omega_\Lambda}=0.7$. When no explicit redshift dependence is shown, quantities refer to present values ($z=0$). Quoted magnitudes are in the Vega system. TO CORRECT OR NOT TO CORRECT {#sec:correct} ============================ Since the mean magnification due to gravitational lensing of a large number of sources is expected to be unity relative to a homogeneous universe, the question arises whether one should correct for the effect of gravitational lensing at all. Because flux, $f$, is conserved, it is the mean of the magnification [*factor*]{}, $\mu$, that is equal to one, i.e. $\bar\mu =1$, or defining $\mu =1+\delta$ where $\delta$ is the fractional difference in luminosity from the unlensed (homogeneous universe) case, $\bar \delta=0$.
The magnitude is given by $m=-2.5\log f + {\rm const}$, and we can write $$m = m_0 -\frac{2.5}{\ln 10}\ln\mu ,$$ where $m_0$ is the unlensed magnitude. Taylor expanding $\ln\mu =\ln (1+\delta)$, we get $$m = m_0 -\frac{2.5}{\ln 10}\left[\delta-\frac{\delta^2}{2}+{\mathcal O}(\delta^3)\right],$$ with mean value $\bar m =m_0+0.54\bar{\delta^2} + {\mathcal O}(\bar{\delta^3})$. From this it is clear that the average lensed magnitude need not be equal to the unlensed magnitude. Note that in current surveys, this effect is very small compared to, e.g., the intrinsic scatter of SN luminosities, which is why the distinction is still unimportant. Note also that the mean magnification factor is unity only for random source positions. For an actual sample of observed SNIa, magnification bias can push the mean magnification to higher values. However, given that we have a sample of random source position SNIa and neglect the small corrections to $\bar m$ (or perform our cosmology fit using flux units), then $\bar m$ is an unbiased estimator for the population mean of the observed magnitudes[^1]. Under these circumstances, neglecting the scatter due to lensing does not cause any bias in the fitted cosmological parameters and good statistics will help in beating down the error [e.g., @holz04]. There could still be good reasons to consider correcting for lensing effects. If we are able to reduce the scatter in the observed magnitudes and keep $\bar m$ as an unbiased estimator, then we are able to make more accurate cosmology fits. There are also cases where it is non-trivial to quantify the importance of the magnification bias, e.g., the case of SN 1997ff. In a similar context, the ability to correct individual lines-of-sight for gravitational lensing magnification would have a profound impact on our ability to use gravitational wave “sirens” for measuring cosmological parameters, as their use as standard candles is ultimately limited by the lensing uncertainty. MODELING AN INHOMOGENEOUS UNIVERSE {#sec:modeling} ================================== In this section we present a method to investigate the effects of gravitational lensing in an inhomogeneous universe. Halo Profiles {#subs:halo} ------------- Neglecting gravitational lensing is equivalent to assuming that matter is homogeneously distributed in the Universe. However, on small scales, the Universe is certainly inhomogeneous. To investigate the effects of gravitational lensing of distant sources, a realistic model of the matter distribution in the Universe is needed. In the following, we describe how we (re)distribute the matter in our model universe using observations of the luminous matter. We assume that each galaxy is surrounded by a dark matter halo and that the mass of this halo can be estimated from the galaxy luminosity. However, inferring masses of dark matter halos from luminosities of galaxies is non-trivial. The effects of lensing by a halo depend not only on its mass, but also on its density profile. Both the density profile and mass of dark matter halos are issues under debate. We have chosen to work mainly with two different halo models, Singular Isothermal Spheres (SIS) and the model of Navarro, Frenk and White [NFW; @nfwref]. The density profile of a SIS, $\rho_{\rm SIS}(r)=\sigma^2/(2 \pi r^2)$, is characterized by its l-o-s velocity dispersion $\sigma$, which can be estimated from the galaxy luminosity via the Faber–Jackson (F–J) or Tully–Fisher (T–F) relations, approximately valid for elliptical and spiral galaxies respectively.
Since the mass of a SIS halo diverges, $m_{\rm SIS}(r)=2 \sigma^2 r$, we use a truncation radius $r_{\rm t}$. A commonly used scale for halo profiles in general is $r_{\rm 200}$, defined as the radius inside which the mean mass density is 200 times the present critical density. For a SIS halo, $r_{\rm 200}$ and the corresponding mass within this radius, $m_{\rm 200}$, are given by $$r_{\rm 200}^{\rm SIS}=\frac{\sqrt{2} \sigma}{10H_0},\;m_{\rm 200}^{\rm SIS}=\frac{\sqrt{2}\sigma^3}{5H_0}.$$ The density profile of a NFW halo is $$\rho_{\rm NFW}(r)=\frac{\rho_{\rm s}} {(r/r_{\rm s})(1+r/r_{\rm s})^2},$$ where $r_{\rm s}$ is the scale radius for which approximately $\rho_{\rm NFW}\propto r^{-2}$ and $\rho_{\rm s}$ is the density at $r\sim 0.5 r_{\rm s}$. The NFW halo is fully determined by $m_{\rm 200}$ since $r_{\rm 200}=\left[ m_{\rm 200}/(100 H_0^2)\right]^{1/3}$ and the scale radius $r_{\rm s}$ and $\rho_{\rm s}$ can be found numerically from $m_{\rm 200}$ [@nfwref]. In the following, we assume that the mass within $r_{\rm 200}$ is roughly the same for the SIS and NFW halo profiles, i.e. $m_{\rm 200}^{\rm SIS}=m_{\rm 2 00}^{\rm NFW}$. We also set the truncation radius $r_{\rm t}=r_{\rm 200}$ for both SIS and NFW halos. Varying $r_{\rm t}$ does not alter gravitational lensing effects significantly, see §\[subs:halochoice\]. The Smoothness Parameter {#subs:smooth} ------------------------ Very faint and/or small scale structures cannot all be seen in a magnitude limited survey. Also, for the method used in this paper, any matter not directly associated with individual galaxies such as completely dark halos and cluster halos need to be accounted for. In order to assure that the mean mass density in our model universe is kept constant, we keep the “remaining” mass, not accounted for when relating the dark matter to the luminous matter, as a homogeneous component. The homogeneous part can be characterized by the *smoothness parameter* $\eta(z)$, quantifying the fraction of smoothly distributed matter in our model universe (or our lack of knowledge on the dark matter distribution in the real Universe). Since the fraction of galaxies observed at a given magnitude limit is a function of redshift, and also since the Universe evolves, the smoothness parameter is expected to vary with redshift. The smoothness parameter in a given survey can be computed from the [*observed*]{} density of matter in clumps, i.e. in our case galaxies surrounded by dark matter halos, $\rho_{\rm g}(z)$. If the redshift dependence of $\rho_{\rm g}(z)$ can be factorized into a term $(1+z)^3$, scaling like the matter density, and an unknown factor $f(z)$ originating from the magnitude limit of the survey and evolution, we can write $$\rho_{\rm g}(z)=\rho_{\rm g}(0)(1+z)^3 f(z).$$ Then the smoothness parameter is simply given by $$\eta(z)=1-\frac{{\Omega_{\rm G}}}{{\Omega_{\rm M}}}f(z),$$ where the density in galaxies at $z=0$ has been scaled with the present critical density to ${\Omega_{\rm G}}$. Once the galaxies have been associated with halos of definite masses, the comoving density of clumps as a function of redshift ${\Omega_{\rm G}}f(z)$ can be estimated. We divide the distribution of galaxies into redshift bins and estimate ${\Omega_{\rm G}}f(z)$ in each bin. The density of clumps in the $i$:th bin, centered on redshift $z_i$, is obtained through $${\Omega_{\rm G}}f(z_i)=\frac{1}{\rho_{\rm c}}\frac{\sum_j m_j}{V_i},$$ where $m_j$ is the mass of a clump and $\rho_{\rm c}$ is the critical density. 
The comoving volume of the $i$:th bin is given by $$V_i=\int_{z_i-\Delta z/2}^{z_i+\Delta z/2} \frac{D_{\rm A}^2 (1+z)^2}{\left[ {\Omega_{\rm M}}(1+z)^3 +{\Omega_\Lambda}\right]^{1/2}} {\Delta\Omega} dz, \label{eq:vol}$$ where $\Delta z$ is the width of the bin, $\Delta \Omega$ is the solid angle under study and $D_{\rm A}$ is the angular diameter distance. Distances have been calculated using the `angsiz` routines described in @helbig, in which a smoothness parameter varying with redshift can be included. Note that the angular diameter distance $D_{\rm A}$ used to determine the volume element in Eq. (\[eq:vol\]) above is calculated using the filled-beam approximation ($\eta=1$), since the volume is governed by the global expansion rate, which in turn is governed by the properties on very large scales where the Universe is homogeneous. Deriving Velocity Dispersions from Observed Luminosities {#subs:vdfromlum} -------------------------------------------------------- Galaxy halo masses can be estimated from the velocity dispersion of galaxies. We calculate the velocity dispersion of each galaxy using absolute magnitudes ($M_B$) combined with empirical F–J and T–F relations. For ellipticals, we use the F–J relation $$\log_{10}\sigma =\log_{10}\sigma_*-\frac{0.4}{\gamma}(M_B-M_B^*) \label{eq:FJ}$$ where $M_B^*$ is the characteristic magnitude and $\sigma_*$ is the normalization in velocity dispersion. We use $\gamma=4.4$, as derived by @mit05 using Sloan Digital Sky Survey (SDSS) data. To derive $\sigma_*$, we use equation (33) in @mit05, where we use the relation $M_r=M_B-1.32$ to convert SDSS $r$–band magnitudes in AB system to standard $B$–band Vega normalized magnitudes. We have here assumed a typical color $M_B-M_r=1.20$ for ellipticals in the AB-system, and an AB to Vega relation B$_{\rm AB}$=B$_{\rm Vega}-0.12$. The normalization in velocity dispersion is given by $$\log_{10}\sigma_*=2.2-0.091(M_B^*+19.47+0.85z), \label{eq:star}$$ where we use $M_B^*=-21.04$ derived for the early–type population by @dah05. Equation (\[eq:star\]) yields $\sigma_*=220$ ${\rm km \, s^{-1}}$ at $z=0$. Combining equation (\[eq:FJ\]) and (\[eq:star\]) gives an expression for the velocity dispersion $$\log_{10}\sigma=-0.091(M_B-4.74+0.85z'), \label{eq:sigma}$$ where we use $z'=z$ for redshifts $z<1$ and $z'=1$ for $z> 1$. The redshift dependence of the relation accounts for the brightening of the stellar population with redshift. Since this evolution is poorly known at $z>1$, we assume a flat evolution at these redshifts. As a measurement of the error in the derived relation, we use the observed scatter in the SDSS measurements by @she03 $${\rm rms}(\log_{10}\sigma)=0.079[1+0.17(M_B+19.705+0.85z')], \label{eq:rms}$$ where we again have transformed SDSS $r$–band to standard $B$–band magnitudes. For the spiral and later type population, we use the T–F relation derived by @pie92, with correction for redshift calculated by @boh04 $$\log_{10}V_{\rm max}=-0.134(M_B-\Delta M_B+3.52), \label{eq:TF}$$ where $V_{\rm max}$ is the maximum rotation velocity for the galaxy. The correction due to redshift dependence is $$\Delta M_B=-1.22z'-0.09.$$ The observed scatter in the relation derived by @pie92 is ${\rm rms}(M_B)=0.41$, corresponding to $${\rm rms}(\log_{10}V_{\rm max})=0.06. \label{eq:scatter}$$ At $M_B^*$, this is similar to the errors in the F–J relation above. Finally, the velocity dispersion in spiral galaxies is related to the circular velocity via $\sigma=V_{\rm max}/\sqrt{2}$. 
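The translation from absolute magnitude to velocity dispersion and halo mass described above is simple to script. The following is a minimal sketch of Eq. (\[eq:sigma\]), Eq. (\[eq:TF\]) and the SIS relations for $r_{200}$ and $m_{200}$ from §\[subs:halo\]; it is not the implementation used in Q-LET, and the value of $G$ in physical units is introduced here only so that $m_{200}$ comes out in solar masses.

```python
import numpy as np

H0 = 70.0        # km/s/Mpc, as adopted in the paper
G  = 4.301e-9    # Mpc (km/s)^2 / M_sun, used only to express masses in M_sun

def sigma_from_MB(M_B, z, elliptical=True):
    """Velocity dispersion (km/s) from M_B (Vega), Eqs. [eq:sigma] and [eq:TF]."""
    zp = min(z, 1.0)                      # flat evolution assumed beyond z = 1
    if elliptical:                        # Faber-Jackson
        return 10.0 ** (-0.091 * (M_B - 4.74 + 0.85 * zp))
    dMB = -1.22 * zp - 0.09               # Tully-Fisher redshift correction
    v_max = 10.0 ** (-0.134 * (M_B - dMB + 3.52))
    return v_max / np.sqrt(2.0)           # sigma = V_max / sqrt(2)

def sis_r200_m200(sigma):
    """r_200 (Mpc) and m_200 (M_sun) of a singular isothermal sphere."""
    r200 = np.sqrt(2.0) * sigma / (10.0 * H0)
    m200 = np.sqrt(2.0) * sigma**3 / (5.0 * G * H0)
    return r200, m200

# an M_B* elliptical at z = 0
sigma = sigma_from_MB(-21.04, 0.0)        # about 220 km/s
r200, m200 = sis_r200_m200(sigma)         # about 0.44 Mpc and 1e13 M_sun
```

For an $M_B^*$ elliptical at $z=0$ the sketch returns $\sigma \approx 220$ ${\rm km \, s^{-1}}$ and $m_{200} \sim 10^{13}\,M_{\odot}$, matching the velocity dispersion normalization quoted for the reference model in §\[subs:understand\].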
Gravitational Lensing with Multiple Lenses {#subs:lensing} ------------------------------------------ A typical source l-o-s within some angular radius $\theta_{\rm s}$ will contain more than one lens. This requires the multiple lens-plane algorithm [see @schneider; @qlet for further details], which takes into account each lens along the l-o-s by projecting the lens’ mass distribution onto a plane and then traces the light-ray from the image plane (first lens-plane) back through all lens-planes up to the source plane where the magnification and intrinsic position can be found. In the following we denote angular diameter distances between redshifts $z_i$ and $z_j$ by $D_{ij}$. We use $o$ for observer, $s$ for source and $d$ for lens (deflector). When $z_i=0$, that index is omitted. Each halo is truncated in 3D at $r=r_{\rm t}$, then, upon projection onto a plane, the corresponding surface mass density will smoothly go to zero at the projected truncation radius. The projection can be done analytically for our lens models. For simplicity, we start by considering a single lens-plane. The equations can be simplified if we let $\xi$ be the impact parameter on a halo and define $x=\xi/\xi_0$ and $x_{\rm t}=r_{\rm t}/\xi_0$, where $$\xi_0=\frac{4\pi \sigma^2 D_{\rm d}D_{\rm ds}}{D_{\rm s}} \label{eq:xsizerosis}$$ for the SIS and $$\xi_0=r_{\rm s} \label{eq:xsizeronfw}$$ for the NFW halo. Then, the projected density $\kappa (x)$ can be written as $$\kappa_{\rm SIS}(x)=\frac{1}{\pi x} \arctan \left( \frac{\sqrt{x_{\rm t}^2-x^2}}{x} \right) \label{eq:sisproj}$$ and $$\kappa_{\rm NFW}(x)=\frac{2\kappa_{\rm s}}{x^2-1}f(x), \label{eq:nfwproj1}$$ where $$f(x)=\left\{ \begin{array}{ccc} \frac{\sqrt{x_{\rm t}^2-x^2}}{1+x_{\rm t}}+\frac{1}{\sqrt{1-x^2}}\left( {\rm arctanh} \left( \sqrt{\frac{x_{\rm t}^2-x^2}{1-x^2}}\right)-{\rm arctanh}\left(\frac{ \sqrt{\frac{x_{\rm t}^2-x^2}{1-x^2}}}{x_{\rm t}}\right)\right) & \ & x<1<x_{\rm t} \\ \frac{\sqrt{x_{\rm t}^2-x^2}}{1+x_{\rm t}}+\frac{1}{\sqrt{x^2-1}}\left( \arctan\left(\frac{ \sqrt{\frac{x_{\rm t}^2-x^2}{x^2-1}}}{x_{\rm t}}\right)-\arctan \left( \sqrt{\frac{x_{\rm t}^2-x^2}{x^2-1}}\right)\right) & \ & 1<x\leq x_{\rm t} \\ \end{array} \right. \label{eq:fofx}$$ and $$\kappa_{\rm NFW}(1)=\frac{2}{3}\kappa_{\rm s}\left( \frac{x_{\rm t}^3-3x_{\rm t}+2}{\left( x_{\rm t}^2-1\right)^{\frac{3}{2}}} \right). \label{eq:nfwproj2}$$ Here, $\kappa_{\rm s}=\rho_{\rm s}r_{\rm s}/\Sigma_{\rm cr}$, where $\Sigma_{\rm cr}=D_{\rm s}/4\pi D_{\rm d}D_{\rm ds}$ is a critical density related to strong lensing. Note that $x_{\rm t}>1$ must be assumed for the NFW and that $\kappa=0$ for $x>x_{\rm t}$ for both halo types. The general expression for the deflection angle for circularly symmetric lenses is $$\hat{\alpha}(x)=s\alpha(x)=s\frac{2}{x}\int^x_0 x'\kappa(x')dx', \label{eq:deflgen}$$ where $s=\xi_0 D_{\rm s}/D_{\rm d}D_{\rm ds}$, resulting in $$\alpha(x)=\frac{2}{\pi}\left( \arctan \left( \frac{\sqrt{x_{\rm t}^2-x^2}}{x} \right)+\frac{x_{\rm t}-\sqrt{x_{\rm t}^2-x^2}}{x} \right) \label{eq:sisdefl}$$ for the SIS model. For the NFW halo, numerical evaluation is needed. As the magnification factor, $\mu'$, for all halo models is obtained with distances calculated with the $z$-dependent $\eta$ function, this is the universe relative to which $\mu'$ is found (implying $\mu'\geq 1$ for primary images). In the following, we will quote magnifications, $\mu$, relative to a universe with homogeneously distributed matter, the filled-beam value (fb) where $\bar\mu =1$. 
The magnifications are related by $$\mu=\mu' \left(\frac{D_{\rm s}^{\rm fb}}{D_{\rm s}^{\eta({\rm z})}}\right)^2. \label{eq:mu}$$ SIMULATED SURVEYS {#sec:simsurv} ================= In order to study gravitational lensing corrections, we perform Monte Carlo simulations where we calculate the magnification factor for random source positions in mock galaxy catalogs. By varying the assumptions of the galaxy mass distributions as well as the magnitude limit of the observations, we can estimate the accuracy to which it is possible to correct for the lensing magnification. For all lensing calculations we have used the publicly available `fortran 77` code Q-LET[^2] [@qlet], although substantially modified. The code fully utilizes the multiple lens-plane algorithm and has been used previously by @qlet and @riess04 to study lensing effects on supernovae. As our simulation base, we create for each Monte Carlo realization a mock galaxy catalog designed to reflect the distribution of galaxies expected in a circular cone around a random l-o-s. To characterize the galaxy population we use the $B$-band rest-frame Schechter luminosity function (LF) derived by @dah05 using GOODS CDF-S observations. The LF is used to generate the number of expected galaxies within the cone where we take into account Poissonian fluctuations but do not include effects of galaxy correlations or cosmic variance. The same LF is used to assign absolute magnitudes to each object within the range $-23<M_B<-16$. To account for evolutionary effects, we include a brightening of the $B$-band characteristic magnitude by $\sim$1 mag to redshift $z=1$ as discussed in §\[subs:vdfromlum\]. A random spectral type is assigned according to the type-specific LF of early-types, late-types and starburst galaxies at $z\sim$0.4 in @dah05. We thereafter assign early-type galaxies an elliptical morphology and late-type galaxies a spiral morphology. We assume that the fraction of galaxies with elliptical morphology is constant over the redshift range investigated. The redshift of each object is assigned with a probability proportional to the volume element, $dV(z)/dz$, which is equivalent to assuming a constant comoving number density of galaxies with redshift, i.e. we do not include any evolution of the number densities due to e.g., mergers or large scale structures. The galaxy is finally given a random position within the l-o-s cone. Besides redshift, absolute magnitude, spectral type and position, we also calculate the apparent magnitude in the observed $I$-band for each object. This allows us to draw subsamples from the catalog with imposed magnitude cutoffs as is the case for real observations. Furthermore, to resemble an observational situation where redshifts are determined photometrically, we also have the option to add a random error to the redshifts. These errors are calculated using simulations and depend on redshift, spectral type and detection S/N. For bright objects with high S/N, errors are typically $\Delta_z\equiv\langle|z_{\rm phot}-z_{\rm true}|/(1+z_{\rm spec})\rangle\sim 0.05$, while errors for faint objects (mostly at high $z$) can be as large as $\Delta z\sim 0.3$. The errors for early-type galaxies are typically a factor two smaller compared to late-type galaxies when comparing at the same apparent magnitudes. Besides this “Gaussian” contribution to the error distribution, a fraction of the galaxies may also be assigned “catastrophic redshifts” with large errors. We discuss this further in §\[subs:reduncert\]. 
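The catalog construction just described can be summarized schematically in code. The sketch below is illustrative only: the Schechter parameters are placeholders rather than the type-specific values of @dah05, and the luminosity evolution, spectral types, apparent $I$ magnitudes and photometric-redshift errors discussed above are omitted. Redshifts are drawn with probability proportional to the volume element and positions uniformly over the cone, as in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
Om, OL, H0 = 0.3, 0.7, 70.0      # cosmology adopted in the paper
c_kms = 2.998e5                  # km/s

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def comoving_distance(z, nstep=512):
    """Line-of-sight comoving distance in Mpc (flat universe)."""
    zz = np.linspace(0.0, z, nstep)
    return (c_kms / H0) * np.trapz(1.0 / E(zz), zz)

def dV_dz(z):
    """Comoving volume element per unit redshift and steradian (Mpc^3/sr)."""
    return comoving_distance(z)**2 * (c_kms / H0) / E(z)

def schechter(M, Mstar=-21.0, alpha=-1.3):
    """Schechter function in magnitudes; Mstar, alpha are placeholders only."""
    x = 10.0 ** (-0.4 * (M - Mstar))
    return x ** (alpha + 1.0) * np.exp(-x)

def sample(pdf, lo, hi, n, pmax):
    """Rejection-sample n values from a 1-d pdf on [lo, hi]."""
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pmax) < pdf(x):
            out.append(x)
    return np.array(out)

def mock_catalog(n_gal, z_source=1.5, theta_s_arcsec=60.0):
    """Toy foreground catalog: redshift, M_B and angular offset per galaxy."""
    zgrid = np.linspace(0.05, z_source, 50)
    z = sample(dV_dz, 0.02, z_source, n_gal, max(dV_dz(zz) for zz in zgrid))
    M_B = sample(schechter, -23.0, -16.0, n_gal, schechter(-16.0))
    r = theta_s_arcsec * np.sqrt(rng.uniform(0.0, 1.0, n_gal))  # uniform in area
    return z, M_B, r
```

A call such as `mock_catalog(200)` then returns the redshifts, absolute magnitudes and angular offsets of the foreground galaxies for one toy realization; in the actual simulations the number of galaxies is itself drawn from the luminosity function with Poissonian fluctuations.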
Figure \[fig:photz\] shows the simulated accuracy of the photometric redshifts for a survey with limiting magnitude $I<27$ (S/N=10). The bottom panel shows the generated (input) redshifts vs. the photometric redshifts, while the top panel shows the difference between generated and photometric redshifts as a function of galaxy magnitude. Understanding the Magnification Uncertainties {#subs:understand} --------------------------------------------- We have identified and addressed the following uncertainties when estimating the lensing magnification of a specific source given a galaxy catalog: - Finite field size - The intrinsic scatter in, and accuracy of, the F–J and T–F relations - Redshift and position uncertainties - Choice of halo profile - The magnitude limit These sources of error are addressed individually in the following sections. Our reference model consists of NFW halos truncated at $r_{\rm t}=r_{\rm 200}$, velocity dispersion/circular velocity normalizations of 220 km$\ \!$s$^{-1}$ for ellipticals, 203 km$\ \!$s$^{-1}$ for spirals and source redshift $z=1.5$. In Figure \[fig:default\], the Probability Distribution Function (PDF) for lensing magnifications for the reference model is shown. The lensing dispersion at $z=1.5$ is $\sim 7\,\%$. We analyze the results by comparing the distribution of magnifications in the reference model with the distribution obtained after performing a correction with the above-mentioned uncertainties. We denote the uncorrected value $\mu_{\rm ref}$ and the corrected one $q_{\mu}$, where $$\label{eq:q_c} q_{\mu}=\frac{\mu_{\rm ref}}{\mu_{\rm est}},$$ where $\mu_{\rm est}$ is the estimated magnification factor including one or more uncertainties. Corrections will reduce the uncertainties from lensing whenever the width of the distribution of $q_{\mu}$ is smaller than the corresponding width in the $\mu_{\rm ref}$ distribution. If no uncertainties were present $\mu_{\rm est}=\mu_{\rm ref}$ and $q_{\mu }=1$ implying a perfect correction. As a measure of the width we give the standard deviation of the distribution. Since many of the distributions are non-Gaussian, we also report the 68% and 95% confidence levels. Finite Field Size {#subs:field} ----------------- The mean magnification relative to an homogeneous universe of a large number of sources lensed by randomly distributed matter is expected to be unity due to photon number conservation. When we model a lensing system, only galaxies within angular radius $\theta_{\rm s}$ of the position of the source on the sky are taken into account and thus $\bar\mu <1$. If $\theta_{\rm s}$ is increased, more lenses are added and the mean magnification increases. This dependence is illustrated in Figure \[fig:rhost\], showing the mean magnification of 5000 point sources, as a function of $\theta_{\rm s}$. The mean magnification increases rapidly for small $\theta_{\rm s}$, but only slowly for $\theta_{\rm s}$ larger than an arc-minute, where the error is $\sim 1\,\%$. In our simulations, we use $60''$ as a cutoff to save computing time. In a real survey, a cutoff will have to be introduced for practical purposes since only a limited portion of the sky will be observed. In order to avoid a systematic bias due to the finite field size for a given survey, the computed magnifications should be corrected with a factor corresponding to the inverse of the mean magnification for the cutoff radius used. In the following, we have neglected this small ($\sim 1\,\,\%$) correction. 
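Both the $q_{\mu}$ bookkeeping defined above and the finite-field renormalization just described amount to a few lines of code. A minimal sketch is given below; central percentiles are used as a stand-in for the 68% and 95% confidence levels quoted in the text, and `mu` is assumed to be an array of magnification factors computed for random lines-of-sight with a common cutoff $\theta_{\rm s}$.

```python
import numpy as np

def correction_summary(mu_ref, mu_est):
    """Dispersion and central intervals of q_mu = mu_ref / mu_est."""
    q = np.asarray(mu_ref, dtype=float) / np.asarray(mu_est, dtype=float)
    lo68, hi68 = np.percentile(q, [16.0, 84.0])
    lo95, hi95 = np.percentile(q, [2.5, 97.5])
    return {"std": q.std(), "68%": (lo68, hi68), "95%": (lo95, hi95)}

def remove_finite_field_bias(mu):
    """Rescale magnifications so that <mu> = 1 for the cutoff radius used."""
    mu = np.asarray(mu, dtype=float)
    return mu / mu.mean()
```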
Furthermore, going to larger $\theta_{\rm s}$ would not render $\bar \mu$ being exactly unity since some flux is lost whenever multiple imaging occurs. Q-LET gives the magnification and intrinsic source position of a given *image*, not observed position and magnification of a given source. Therefore, in the rare cases of multiple imaging, only one of the images will be taken into account resulting in some flux loss. Note also that since random l-o-s and random source positions are different [e.g., @schneider], we have to use a magnification dependent weighting procedure to see whether each simulated event should be kept or discarded in order to get a sample of random source positions [see e.g., @snoc]. The F–J and T–F Relations {#subs:fjtf} ------------------------- The F–J and T–F relations give the velocity dispersions of the *luminous* matter and we make the assumption that the dark matter that constitutes the halo follows scales in the same way. Both the Faber-Jackson and the Tully-Fisher relations have an intrinsic scatter with an rms estimated in §\[subs:vdfromlum\]. To study the effect of this scatter, we add random offsets to the F–J and T–F relations when calculating the halo mass. Since we do not want to bias the total mass in our simulations, we distribute the offsets using a Gaussian distribution in $\sigma^3$ (since mass $\propto \sigma^3$). We derive the width of the Gaussian (one sigma value in $\sigma^3$) from the rms in log$_{10}(\sigma)$ and log$_{10}(V_{\rm max})$ using Eqs. (11)-(12) (F–J) and Eqs. (13)-(15) (T–F). In panel a) in Figure \[fig:pdffigs\], we compare the distribution of the corrected value $q_\mu$ due to the scatter in the F–J and T–F relations with the distribution of magnifications in the reference model (dashed line). Note that if we knew all velocity dispersions and circular velocities exactly, $q_\mu$ would be represented by a $\delta$-function at $q_{\mu}=1$. We see that the intrinsic scatter in the velocity dispersion and circular velocity causes a dispersion of $q_\mu$ of approximately $3\,\%$, a factor of $\sim 2.6$ less than the original dispersion. Panel a) in Figure \[fig:spreadfigs\] shows $\mu_{\rm ref}-1$ vs $q_{\mu}-1$ for each individual source. For $82\,\%$ of the sources, the corrected luminosity will be better than the uncorrected one. Besides the intrinsic scatter in the F–J and T–F relations, there is also a possible *systematic* uncertainty in $\sigma_*$. To estimate this, we use SDSS data in @bernardi03 where photometric and spectroscopic parameters for a sample of $\sim 9000$ early-type galaxies in the redshift range $0.01<z<0.3$ are given, including K-corrections and accurately measured velocity dispersions. We fit a straight line $M_B=b\log\sigma +a$ and estimate the error in $\sigma_*$ for a given $M_B^*$ by propagating the errors in the parameters $a$ and $b$. For $M_B^*=-21.04$, we obtain $\sigma_*=218\pm 7$ km$\ \!$s$^{-1}$. In panel b) and c) in Figure \[fig:pdffigs\], we investigate the effect of a systematic shift of $\pm 10$ km$\ \!$s$^{-1}$ in $\sigma_*$. The dispersion in $q_\mu$ due to such a shift is quite small or at the order of $1\,\%$. Panels b) and c) in Figure \[fig:spreadfigs\] shows $\mu_{\rm ref}-1$ vs $q_{\mu}-1$ for each individual source. The corrected value will be better than the uncorrected for $>95\,\%$ of the sources. A further source that may increase the scatter in the magnification is the possible misclassification of galaxy morphology. 
An elliptical galaxy wrongly classified as a spiral, leads to an underestimation of the underlying mass, and vice versa if a spiral is misclassified as an elliptical. To investigate the possible effect of this, we first use a set of simulated galaxies with known morphological types (i.e. an exponential radial profile for spirals and a de Vaucouleurs profile for ellipticals) and measure how many are correctly recovered in an observational setup resembling the GOODS. To classify galaxies, we use the GALFIT software [@peng02], which measures the slope of the radial profile and therefore allows a discrimination between ellipticals and spirals. At a S/N=10 detection limit (m$\sim$25), we find that $\sim$25% of the ellipticals and $\sim$8% of the spirals are misclassified. The fraction of misclassified galaxies quickly drops to $\sim$1% at a magnitude $\sim$2 mag brighter than the detection limit. We then use these results to estimate the effect of misclassification on the derived magnifications. We find that the increase in the scatter in the magnification is $\sim$0.5% due to this effect. Therefore, misidentification of galaxy morphology should only affect the results marginally. However, for ground-based surveys with low resolution, the effect may be larger. Redshift and Position Uncertainties {#subs:reduncert} ----------------------------------- Uncertainties in the redshifts of the lenses will alter the results both through the uncertain distances between different lens-planes and by introducing an uncertainty in their absolute magnitudes used in the F–J and T–F relations. In an ideal observational situation, all redshifts are determined spectroscopically. In many real situations, however, only photometric redshift are available due to, e.g., the faintness of the galaxies and the large number of sources. To investigate the effects of photometric redshift uncertainties, we add a random offset to the redshift of each object. The size of the offset depends on redshift, apparent magnitude and spectral type of the object and is drawn from the simulated error distribution discussed above. In panel d) in Figure \[fig:pdffigs\], we show the distribution of the corrected value $q_\mu$ due to a random offset to the redshift of each lensing object. The induced error in the estimated magnification is less than $1\,\%$. The corresponding panel in Figure \[fig:spreadfigs\] shows $\mu_{\rm ref}-1$ vs $q_{\mu}-1$ for each individual source. In this case, $96\,\%$ of the sources have corrected values which are better than when uncorrected. Besides the Gaussian-like distribution of the photometric redshift errors investigated above, there is also a possibility that a fraction of the objects get “catastrophic redshifts” with large errors, so called outliers. E.g., by comparing with $\sim$1400 spectroscopic redshifts in the CDF-S and HDF-N, we find that the GOODS photometric redshifts have about 3% outliers with $\Delta_z>0.3$. The redshift probability distribution, for a majority of these objects are characterized by a primary ($\sim$Gaussian) peak combined with a less pronounced secondary peak. Outliers are foremost objects assigned the redshift of the primary peak, but where the true redshift is that of the secondary peak. To estimate the effect of outliers, we use the galaxies in the GOODS and simulate the case where we distribute the photometric redshifts over the full probability distribution, including the secondary peak. This will allow $\sim$3% outliers. 
We compare this with the case where we only include redshifts in the primary peak (i.e. objects with $\Delta_z<0.3$). We find that the increase in lensing dispersion due to the population of outliers is less than 1%. One reason for the small effect is that outliers are mainly faint and very blue objects, which therefore should have relatively small masses. Any error in the exact positions of the lensing galaxies can also affect the resulting magnification. Apart from the observational error, such an effect can be due to a misalignment between the luminous and the dark matter in a given galaxy. We have investigated the effect of a Gaussian random shift with $\sigma_{\rm pos}=0.5$ arcseconds of all lensing galaxies along the l-o-s. Even such a large shift of [*all*]{} galaxy positions results in a distribution of corrected values $q_\mu$ with a dispersion of less than $0.5\,\%$. Choice of Halo Profile {#subs:halochoice} ---------------------- The choice of halo model is only important in those lens-planes where the light-ray passes through a halo. If passing outside, the halo will act as a point mass and when $m_{\rm tot}^{\rm halo}=m_{200}^{\rm SIS}=m_{200}^{\rm NFW}$ the two different halo models give exactly the same results. We have performed simulations where all halos were of SIS instead of NFW type. The effect of different halo profiles are also present in the realistic and pessimistic case simulations below. Panel e) in Figure \[fig:pdffigs\] shows the PDF of $q_\mu$ when assuming SIS halos instead of NFW as in the reference model. The dispersion is less than $1.5\,\%$. In $>90\,\%$ of the cases, the corrected value is better than the uncorrected, see panel e) in Figure \[fig:spreadfigs\]. We have also investigated how important the assumption on the truncation radius is for the resulting magnification distribution by running tests with $0.75\times r_{200}\leq r_{\rm t} \leq 1.25\times r_{200}$ for the SIS model. The uncertainty in the resulting magnification gives a distribution of corrected values $q_\mu$ with a dispersion of $\sim 0.5-1\,\%$. Since the NFW profile falls of as $\rho_{\rm NFW}\propto r^{-3}$ at large radii as compared to $\rho_{\rm SIS}\propto r^{-2}$ for SIS halos, this should be considered a very conservative limit on the effect of changing the truncation radius. Magnitude Limits {#subs:maglimsims} ---------------- Our reference model uses a constant comoving mass density of galaxies, implying a constant smoothness parameter, $\eta$. In a real scenario with an observational magnitude limit, an increasing fraction of galaxies drop out at higher redshift. The faint high redshift galaxies will not be seen and hence not included as lenses in the magnification calculation but instead added as homogeneously distributed matter. Therefore, when deriving the smoothness parameter from observations, $\eta(z)$ increases with redshift even if the ’underlying’ smoothness parameter is constant. For each simulation with a finite magnitude limit, a new $\eta$-function is computed using the method described in §\[subs:smooth\]. For $I=27$ mag the distribution of $q_{\mu}$ is very narrow and for $I=29$ mag, it is in principle a $\delta$-function. Panel f) in Figure \[fig:pdffigs\] shows the PDF of $q_\mu$ for $I=23$. The dispersion is $\sim 2\,\%$. In $86\,\%$ of the cases, the corrected value is better than the uncorrected, see panel f) in Figure \[fig:spreadfigs\]. 
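The recomputation of $\eta(z)$ for a magnitude-limited catalog follows §\[subs:smooth\] directly: in each redshift bin the halo masses of the galaxies that survive the cut are summed and divided by the comoving bin volume, and the remainder is treated as smoothly distributed. A schematic version is sketched below; the bin edges and solid angle are inputs, halo masses are assumed to be in solar masses, and the small volume helper simply evaluates Eq. (\[eq:vol\]) numerically.

```python
import numpy as np

Om, OL, H0 = 0.3, 0.7, 70.0
c_kms = 2.998e5
G = 4.301e-9                                   # Mpc (km/s)^2 / M_sun
rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)     # present critical density, M_sun/Mpc^3

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def dV_dz(z, nstep=512):
    """Comoving volume element per unit redshift and steradian (Mpc^3/sr)."""
    zz = np.linspace(0.0, z, nstep)
    D_C = (c_kms / H0) * np.trapz(1.0 / E(zz), zz)
    return D_C**2 * (c_kms / H0) / E(z)

def smoothness(z_gal, m200_gal, I_gal, I_lim, z_edges, solid_angle_sr):
    """eta(z_i) = 1 - Omega_G f(z_i) / Omega_M from a magnitude-limited catalog."""
    keep = np.asarray(I_gal) <= I_lim
    z_gal, m200_gal = np.asarray(z_gal), np.asarray(m200_gal)
    eta = np.empty(len(z_edges) - 1)
    for i, (zlo, zhi) in enumerate(zip(z_edges[:-1], z_edges[1:])):
        in_bin = keep & (z_gal >= zlo) & (z_gal < zhi)
        zz = np.linspace(zlo, zhi, 64)
        V_i = solid_angle_sr * np.trapz([dV_dz(z) for z in zz], zz)  # Eq. [eq:vol]
        omega_clumps = m200_gal[in_bin].sum() / (rho_crit * V_i)
        eta[i] = 1.0 - omega_clumps / Om
    return eta
```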
In Figure \[fig:incompl2325\], we compare $q_{\mu}-1$ as a function of source redshift for models with magnitude limits $I=23$ and $I=25$ with $\mu_{\rm ref}-1$. Even for source redshifts as high as $z\sim 2$, a magnitude limit of $I=23$ does not significantly impair our results. Photometric errors in the apparent magnitudes translate into an increased scatter in the absolute magnitudes and therefore also in the derived velocity dispersions and masses. At the faintest magnitude limits considered here, S/N=10, typical errors are $\sim 0.1$ mag. For ellipticals, this corresponds to an increased dispersion of $\Delta{\rm log_{10}}\sigma \sim 0.01$ (using Eq. 10). This is significantly less than the intrinsic scatter in the F–J relation, which is rms(log$_{10}\sigma) \sim 0.08$ (at $\sim M_B^*$, Eq. 12). For the T–F relation the increased scatter due to photometric errors is $\Delta{\rm log_{10}}V_{\rm max}\sim 0.013$ (using Eq. 13), again significantly less than the intrinsic scatter rms(log$_{10}V_{\rm max}) \sim 0.06$. Thus, even for the faintest galaxies considered, the errors in apparent magnitude should not affect results more than marginally. Realistic and Pessimistic Scenarios {#subs:wcrc} ----------------------------------- We have studied the uncertainty in the lensing correction in a realistic scenario where a reasonable error budget is assumed. In this case we have assumed 50% NFW and 50% SIS halos, no shift of the central value of the velocity dispersion normalization but a dispersion around it, and a magnitude limit of $I=25$. Lens redshifts were assumed to be distributed around their reference values. As the correct model we have used the reference NFW model as above. A pessimistic scenario for space-based surveys has also been studied where we have maximized the uncertainties in relating the luminous and the dark matter. However, one could easily imagine worse cases in a ground-based experiment. For our scenario, the erroneous assumptions were: SIS halos, the central value of the velocity dispersion normalization shifted by +10 km$\ \!$s$^{-1}$ and distributed around this value, a magnitude limit of $I=25$, lens redshifts distributed around their reference model values, an offset in lens positions as described in §\[subs:reduncert\], and finally a truncation radius of $1.25\times r_{200}$. We consider this to be a pessimistic but not completely unrealistic scenario. The left panels in Figure \[fig:wcrc\] show results for the realistic scenario, the right panels for the pessimistic scenario. For the realistic case, $q_{\mu}$ has a dispersion of $\sim 3\,\%$ and $(q_{\mu}-1)<(\mu_{\rm ref}-1)$ for $\sim 80\,\%$ of the sources. In the pessimistic case, the corresponding numbers are $\sim 3\,\%$ and $\sim 77\,\%$, i.e. our ability to correct for lensing is more or less unimpaired when going from a realistic to a pessimistic scenario. The bottom row shows $q_{\mu}-1$ as a function of source redshift for the two scenarios. The confidence levels of the realistic case vs $z$ can be well fitted with straight lines, and these are given in Table \[tab:lines\] expressed in magnitudes. SUMMARY AND DISCUSSION {#sec:summary} ====================== We have investigated the accuracy to which lensing magnification can be estimated on individual lines of sight using the observed properties of the foreground galaxies of each source. The result depends on the uncertainties in translating observed galaxy luminosities to the (invisible) matter distribution in the lensing galaxies.
We have shown that none of the studied uncertainties, whether taken individually or combined, will render the corrected distribution of magnifications wider than the dispersion from lensing. Even for a pessimistic scenario, the dispersion due to lensing for a standard candle source at $z=1.5$ can be reduced by a factor $\gtrsim 2$, comparable to the result for a realistic scenario. The reason our pessimistic case result is not significantly worse than for the realistic case is that the uncertainties are dominated by the scatter in the F–J and T–F relations for both scenarios[^3]. At lower redshifts ($z\lesssim 0.5$), the effects from lensing are small and correcting for lensing is not likely to improve the results. Even though the fraction of SNe lines-of-sight passing through galaxy cluster lenses is expected to be relatively low, these are potentially important due to magnification bias and in surveys specifically aimed at using cluster potentials as gravitational telescopes [@gunngoo]. In those cases, individual modeling of the cluster potentials using, e.g., weak and strong lensing of background galaxies is needed to correct the observed magnitudes for the lensing magnification. Alternatively, one can choose to discard SNe located behind galaxy clusters with very uncertain matter distributions when estimating cosmological parameters. Our method also takes into account gravitational lensing from large scale dark matter structures such as filaments and walls as long as the matter density is dominated by the individual galaxy-size halos. If we very conservatively assume that large scale structures are completely uncorrelated with the luminous matter, we expect the lensing magnification contribution to be less than 2% on scales larger than 5 arcminutes (for a source redshift of unity) [@cooray2005]. For a given galaxy catalog, the computed magnifications do not depend strongly on the cosmological parameters used. However, since the formation of matter structure is a function of cosmology, it should in principle be possible to determine the cosmology from the observed distribution of standard candle luminosities. In this case, the use of magnifications of, e.g., SNIa would be similar to using the shear of background galaxies as in weak lensing studies. Such a study would require large statistics of very well observed SNIa and will probably have to await future dedicated missions such as the proposed SNAP satellite [^4]. For current high-$z$ SNIa observations the concern is to correct for the magnification and to investigate the possibility of magnification bias. Such a study for SNe in the GOODS fields is described in an accompanying paper [@jacke], where magnification bias is shown to be negligible but the lensing of individual SNe can be estimated quite robustly from foreground galaxy observations and thus be corrected for. In fact, as long as the luminous and dark matter are not anti-correlated, we would expect to be able to reduce the scatter in the Hubble diagram by assuming that dark matter follows light. Thus, we find that even though the exact relation between luminous and dark matter is uncertain, correcting for gravitational lensing using observed galaxy properties should be harmless at the worst and very useful at the best. The authors would like to thank Mariangela Bernardi for help with the velocity dispersion data from the Sloan survey, and Joakim Edsjö, Daniel Holz and Saul Perlmutter for helpful discussions during the course of the work.
We are grateful to Swara Ravindranath for providing simulated galaxy catalogs used to derive the recovery fraction of galaxy morphologies. CG would like to thank the Swedish Research Council for financial support. AG is a Royal Swedish Academy Research Fellow supported by grants from the Swedish Research Council, the Knut and Alice Wallenberg Foundation and the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine. Amanullah, R., Mörtsell, E., Goobar, A., 2003, , 397, 819 Benítez, N., Riess, A. G., Nugent, P. E., Dickinson, M., Chornock, R., Filippenko, A. V., , 577, L1 Bernardi, M., et al.,  2003, , 125, 1817 Böhm, A., et al., 2004, , 420, 97 Cooray, A., Huterer, D., & Holz, D., 2005, astro-ph/0509581 Dahlén T., Mobasher, B., Somerville, R., Moustakas, L., Dickinson, M., Ferguson, H., Giavalisco, M., 2005, , 631, 126 Goobar, A., Perlmutter. S, 1995, , 450, 14 Goobar, A., Mörtsell, E., Amanullah, R., Goliath, M., Bergström, L., Dahlén, T., 2002, , 392, 757 Gunnarsson, C., 2004, JCAP, 0403, 002 Gunnarsson, C., & Goobar, A. 2003, , 405, 859 Holz, D., Linder, E., 2005, , 631, 678 Holz, D., Hughes, S. A., 2005, , 629, 15 Jönsson, J., Dahlén, T., Goobar, A, Gunnarsson, C., Mörtsell, E., 2005, submitted to Kayser, R., Helbig, P., Schramm, T., 1997, , 318, 680 Knop, R., et al., 2003, , 598, 102 Lewis, G. F., Ibata, R. A., 2001, , 324, L25 Mitchell, J., Keeton, C.,Frieman, J., Sheth, R., 2005, , 622, 81 Mörtsell, E, Gunnarsson, C., Goobar, A., 2001, , 561, 106; erratum [*ibid*]{} 2003, 589, 1089 Navarro, J. F., Frenk, C. S., White, S. D. M., 1997, , 490, 93 Peng, C.Y., Ho, L.C., Impey, C.D., & Rix, H.-W. 2002, , 124, 266 Perlmutter, S., et al., 1998, Nature, 391, 51 Perlmutter, S., et al., 1999, , 517, 565 Pierce, M. J., Tully, R. B., , 387, 47 Riess, A. G., et al., 1998, , 116, 1009 Riess, A. G., et al., 2004, , 607, 665 Schmidt, B. P., et al., 1998, , 507, 46 Schneider,  P., Ehlers, J., Falco, E. E., 1992, [*Gravitational Lenses*]{}, Springer-Verlag, Berlin Sheth, R. K., et al., 2003, , 594, 225 Strolger, L.-G., et al., 2004, , 613, 200 Tonry, J. L., et al., 2003, , 594, 1 [lrrrr]{}\ & &\ & & & &\ 68% magn. & -0.038 & 0.005 & -0.020 & 0.008\ 68% demagn. & 0.060 & -0.009 & 0.016 & 0.006\ 95% magn. & -0.140 & -0.003 & -0.055 & 0.008\ 95% demagn. & 0.086 & -0.017 & 0.035 & 0.012\ [^1]: As long as the variance of $m$ is finite. [^2]: Available at `http://www.physto.se/~cg/qlet/qlet.htm` [^3]: We were able to increase the width in the $q_{\mu}$ distribution for the pessimistic case scenario with $\sim 50\,\%$ by modelling 20% of the galaxies as point masses. However, we consider such a compact mass distribution at galaxy scales too contrived to be included in the simulations. [^4]: `http://snap.lbl.gov`
--- abstract: 'We review recent experimental progress towards quantum information processing and quantum simulation using neutral atoms in two-dimensional (2D) arrays of optical microtraps as 2D registers of qubits. We describe a scalable quantum information architecture based on micro-fabricated optical elements, simultaneously targeting the important issues of single-site addressability and scalability. This approach provides flexible and integrable configurations for quantum state storage, manipulation, and retrieval. We present recent experimental results on the initialization and coherent one-qubit rotation of up to 100 individually addressable qubits, the coherent transport of atomic quantum states in a scalable quantum shift register, and discuss the feasibility of two-qubit gates in 2D microtrap arrays.' author: - Malte Schlosser - Sascha Tichelmann - Jens Kruse - Gerhard Birkl bibliography: - 'QUIPS.bib' title: 'Scalable Architecture for Quantum Information Processing with Atoms in Optical Micro-Structures' --- =1 Introduction {#sec:intro} ============ ![image](fig1.jpg){width="75.00000%"} The ability to synchronously investigate multi-component quantum systems decoupled from the environment in multi-site architectures is fostering some of the most active research in quantum physics and quantum information processing [@2000:QCQ:544199]. Among the many currently pursued approaches which range from solid state physics to quantum optics [@Bouwmeester:2001; @Beth:2005:QIP:1076259; @Everitt:2005:EAQ:1205130; @Schleich:2007:EQI:1526289], the ones in atomic physics seem to be particularly suited for advancing the field at this stage. This is due to the remarkable experimentally achieved control of single and multiple qubit systems, of qubit interactions, and the detailed understanding and control of the relevant coherent and incoherent processes, including excellent decoupling from the environment. Concentrating on work with neutral atoms, recently there has been a series of important advances, such as the near-deterministic preparation of single atomic qubits [@2001Natur.411.1024S; @2011NatPh...6..951G], the coherent transport of atomic quantum states [@2007NatPh...3..696B; @PhysRevLett.91.213002; @PhysRevLett.105.170502], the manipulation of selected individual spins [@PhysRevLett.93.150501; @PhysRevA.81.060308; @2011Natur.471.319W], and the implementation of two-qubit gates [@2003NatMandel; @2007Natur.448..452A; @PhysRevLett.104.010502; @PhysRevLett.104.010503].\ Each of these experimental achievements represents an important step towards a successful physical implementation of quantum information processing [@2000ForPh..48..771D] by means of atom optics. Of great importance for future progress is consequentially the implementation of an architecture which incorporates all of the above achievements while at the same time providing scalability, reconfigurability, stability, and a modern technological basis as met for example by the newly emerging field of miniaturized and integrated atom optics [@2007BirklLasphotrev].\ This can be obtained by using different types of micro-fabricated configurations: the trapping and guiding of neutral atoms in micro-fabricated charged and current carrying structures have been pursued by a number of groups in recent years [@0022-3727-32-18-201; @2002AdAMP..48..263F; @RevModPhys.79.235; @Reichel:2011:AtomChips]. 
An alternative approach to generate miniaturized and integrated atom optical systems has been introduced by our group: we proposed [@2001OptComm; @Buchkremer:2001] and demonstrated [@PhysRevLett.105.170502; @PhysRevA.81.060308; @PhysRevLett.89.097903; @2007ApPhB..86..377L] the application of micro-fabricated optical elements (Fig. \[fig:microoptics\]) for the manipulation of atomic qubits with laser light. Using these elements for quantum information processing takes advantage of the vast industrial and research interest in the field of applied optics directed towards the development of micro-optical systems [@Jahns:2004:MTA:994002; @2011:FMO:Zappe; @herzig1997micro] and establishes a novel technological basis for research in quantum physics.\ A special attraction of this approach lies in the fact that many of the currently used techniques in atom manipulation are based on the interaction of atoms with light. Thus, the use of micro-fabricated optical elements is in many ways the canonical extension of the conventional optical methods into the micro-regime, so that much of the knowledge and experience that has been acquired in macroscopic atom optics can be applied to this new regime in a very straightforward way. Moreover, the flexibility of the manufacturing process allows one to realize complex optical systems like computer-generated diffractive optical elements which can create light fields not achievable with standard components, and almost arbitrary spatial intensity distributions become possible. In addition, miniaturization enables one to scale from a single conventional element to multiple realizations, simply by utilizing parallelized lithographic fabrication techniques adapted from semiconductor processing. The use of these manufacturing techniques allows the optical engineer to fabricate structures with dimensions in the micrometer range and submicrometer features, such as the diffractive lenses of Fig. \[fig:microoptics\] (c). Up to $10^4$ microoptical elements can be produced on an area of $\unit{1}{\milli\meter\squared}$ while maintaining diffraction limited performance with numerical apertures (NA) large enough to define light patterns with structure sizes in the single micrometer regime.\ Fig. \[fig:microoptics\] shows typical examples of the micro-fabricated optical elements we use, the generated light fields, and fluorescence images of atoms trapped in the resulting potential geometries. One-dimensional arrays of cylindrical microlenses (Fig. \[fig:microoptics\] (a)) allow us to realize atomic waveguides and arrays of interferometer-type guiding structures [@PhysRevLett.89.220402; @PhysRevLett.92.163201]. Two-dimensional arrays of up to $300\times 300$ refractive (Fig. \[fig:microoptics\] (b)) and diffractive (Fig. \[fig:microoptics\] (c)) spherical microlenses are used to create 2D arrays of laser foci (Fig. \[fig:microoptics\] (d)) which serve as 2D dipole trap arrays for neutral atoms with well over $100$ occupied sites (Fig. \[fig:microoptics\] (e)) and typical site-to-site separations ranging from a few to about $100$ micron. 
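As a rough guide to the attainable spot sizes, the diffraction-limited waist of a Gaussian focus scales as $w_{0}\approx\lambda/(\pi\,\mathrm{NA})$. The short sketch below evaluates this for a few numerical apertures; the wavelength and lens pitch are values quoted later in the text and are used here only for illustration:

```python
import numpy as np

def gaussian_focus_waist(wavelength_um, NA):
    """Approximate 1/e^2 waist of a diffraction-limited Gaussian focus."""
    return wavelength_um / (np.pi * NA)

wavelength = 0.815   # um, trapping light (value from the text)
pitch = 55.0         # um, lens / trap separation (value from the text)
for NA in (0.1, 0.2, 0.29):
    w0 = gaussian_focus_waist(wavelength, NA)
    print(f"NA={NA}: w0 ~ {w0:.2f} um, pitch/w0 ~ {pitch / w0:.0f}")
```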
Scalable architecture for a neutral atom quantum processor {#sec:architecture} ========================================================== For a functional quantum processor, a sequential but partially also parallelized algorithm has to be implemented in a suitable geometry for performing the designated computational task: (a) qubits have to be prepared and initialized, (b) one- and two-qubit quantum operations have to be applied according to the quantum algorithm to be processed, and (c) high-fidelity readout of the final quantum state has to be achieved. An essential ingredient is a suitable architecture for the reliable storage and manipulation of qubits, thus constituting the hardware of the quantum processor.\ Significant progress towards the implementation of this hardware has been achieved in systems relying on the optical storage of neutral-atom qubits, such as optical lattices [@PhysRevLett.93.150501; @2011Natur.471.319W; @2003NatMandel; @2007Natur.448..452A; @2009NatPh...5..575L; @2007NatPh...3..556N; @2009:Greiner:Microscope] or small configurations of individually focused laser beams [@2001Natur.411.1024S; @PhysRevLett.104.010502; @PhysRevLett.104.010503; @PhysRevLett.96.063001]. In our work, we have developed quantum processor hardware based on the combination of optical methods for storage and control of neutral-atom qubits and the above-introduced micro- and nano-fabricated optical systems, simultaneously targeting the important issues of single-site addressability and scalability. As a guideline for our work, we followed the generally acknowledged requirements for the physical implementation of quantum computing, as for example listed in reference [@2000ForPh..48..771D]. Specifically, we have developed a scalable architecture for quantum information processing based on 2D quantum state registers built from 2D arrays of optical micro-potentials as shown in Figs. \[fig:microoptics\] (d) and (e) and Fig. \[fig:architecture\]: ![image](fig2.png){width="50.00000%"} - 2D configurations of laser beams focused by 2D arrays of microlenses serve as registers of optical potentials for the storage of small samples or single neutral atoms ($^{85}$Rb in our case), thus laying the foundation of a quantum-register-based processor architecture, where quantum information can be inscribed in the internal or external atomic states. Possible implementations range from small-size registers, where the sequential algorithm is applied in a temporal sequence to a localized set of qubits, to large-scale registers with spatially separated functional subsections (see Fig. \[fig:architecture\]), where atomic qubits or even atomic quantum bytes are transported during the algorithm, resembling a standard shift register operation (see Section \[sec:storage\]). - The reliable operation of the quantum processor requires the precise initialization and readout of each qubit together with targeted single-qubit and two-qubit gate operation. This requires the ability to perform incoherent and coherent operations in a global but also in a site-specific fashion, which is one of the inherent advantages of our architecture (see Section \[sec:prepmanread\]). - A high degree of flexibility in the architecture is necessary to implement different algorithms and to perform the sequential operations within an algorithm efficiently.
The combination of microlens arrays with reconfigurable spatial light modulators allows us to implement adaptable trap configurations and reconfigurable schemes for qubit manipulation (see Section \[sec:slm\]). - The implementation of a quantum shift register operation realizes the data bus in our architecture. It connects adjacent operational units, e.g. loading, preparation, processing, and readout sections (Fig. \[fig:architecture\]). During operation, qubits are shifted in parallel through the quantum processor, allowing for massively parallelized quantum information processing. The preservation of coherence during the shift operation becomes an essential factor in evaluating this approach (see Section \[sec:transport\]). - In addition to discussing the state-of-the-art of our architecture, we show that there is a well-defined and straightforward path for implementing the last remaining - but nevertheless crucial - element still missing for a functional quantum processor: two-qubit-gate operations. Several potential schemes have been proposed which can be implemented in our architecture (see Section \[sec:gates\]). ![image](fig3.jpg){width="75.00000%"} In the following sections, we discuss in detail how our architecture can meet the above listed requirements. The experimental setup of Fig. \[fig:setup\] shows the central elements for the quantum processor hardware. The key element is a 2D microlens array which, globally illuminated with appropriate laser light, produces a 2D array of laser spots in the focal plane. The focal plane is re-imaged into a vacuum chamber by a demagnifying imaging system, thus producing a 2D register of optical traps for neutral atoms with typical trap separations ranging from single to about $100$ microns. Fully exploiting our maximum available NA of 0.29, a waist below could be reached. Each optical trap can hold an ensemble of up to 100 atoms or in the limiting case an individual atom, thus a 2D register of atomic qubits with excellent scaling properties is created. Already in the present realization, lens arrays with ten thousands of individual lenses are available, a number which is by far not at the limit of the available technology of micro-optics fabrication. Quantum state storage in optical micro-potentials {#sec:storage} ================================================= Atomic quantum systems offer the important advantage that they can be localized and cooled at predefined sites as well as decoupled from their environment to a high degree. For neutral atoms, as an alternative to using the energy shift in magnetic fields [@0022-3727-32-18-201; @2002AdAMP..48..263F; @RevModPhys.79.235; @Reichel:2011:AtomChips], this can be achieved by using the energy shift in inhomogeneous optical fields [@Dalibard:85; @Metcalf1999; @grimm-2000-42]. Here, the short range character of the trapping force additionally facilitates the decoupling from the environment.\ Optical trapping in dipole traps relies on the modification of the atomic energies by far-detuned laser light which is commonly described through the interaction part of the Hamiltonian $$H=H_{atom}+H_{light}+H_{int}.$$ In most of the relevant cases, the atomic Hamiltonian $H_{atom}$ can be restricted to atomic two-level systems and the interaction Hamiltonian $H_{int}$ to atom-light coupling in dipole approximation. 
The resulting energy shift $\Delta E$ leads to the position dependent dipole potential $U\left(\mathbf{r}\right)$ for atoms in the ground state and a corresponding photon scattering rate $\Gamma_{SC}\left(\mathbf{r}\right)$ of $$U\left(\mathbf{r}\right)=\frac{3\pi c^2}{2\omega_0^3}\frac{\Gamma}{\Delta}I\left(\mathbf{r}\right)\quad;\quad\Gamma_{SC}\left(\mathbf{r}\right)=\frac{3\pi c^2}{2\hbar\omega_0^3}\left(\frac{\Gamma}{\Delta}\right)^2 I\left(\mathbf{r}\right)$$ with the rotating wave approximation applied. Here $I\left(\mathbf{r}\right)$ is the position-dependent laser intensity of the focused laser beam with waist $w_0$ ($1\slash e^2$-radius) and $\Delta=\omega_L-\omega_{eg}$ is the detuning of the laser field with respect to the resonance frequency of the two-level system spanned by ${\ensuremath{| g \rangle}}$ and ${\ensuremath{| e \rangle}}$ (Fig. \[fig:dipole\] (b)) having a natural linewidth $\Gamma$. ![image](fig4.png){width="75.00000%"} The above equations exhibit the essential features in optical dipole trapping: The magnitude of the energy shift depends linearly on the trapping laser intensity $I\left(\mathbf{r}\right)$ at the position of the atom and its sign is given by sign of the laser detuning $\Delta$. Therefore, the inhomogeneous intensity profile of a focused Gaussian laser beam (Fig. \[fig:dipole\] (a)) creates a reduction of the atom’s groundstate energy (Fig. \[fig:dipole\] (b)) and thus an attractive trapping potential with depth $U_0$ in three dimensions for red detuning $\Delta<0$ (Fig. \[fig:dipole\] (c)). Furthermore, unwanted exitation to the excited state and resulting spontaneous scattering can be kept low for large detuning, since the dipole potential scales with $1\slash\Delta$, whereas the scattering rate scales with $1\slash\Delta^2$.\ Important characteristics of these traps are potential depths of up to several $\unit{\milli\kelvin}{}\times\unit{k_B}{}$, which are about two orders of magnitude larger than the thermal energies achievable with standard laser cooling techniques [@Metcalf1999] and vibrational frequencies in the range of $\unit{10}{\kilo\hertz}$ to $\unit{100}{\kilo\hertz}$ or even beyond for tight focusing. By advanced laser cooling, e.g. Raman sideband cooling [@PhysRevLett.69.1741; @PhysRevLett.80.4149], or by making use of the phase transition to a Bose-Einstein condensate (BEC) [@RevModPhys.74.875; @RevModPhys.74.1131], the vibrational ground state of the trapping potentials can be populated with high probability. Corresponding spreads of the ground state wave functions are on the order of $\unit{10}{\nano\meter}$ for single atoms.\ A typical set of parameters for the experiments with $^{85}$Rb atoms presented in the following sections of this work are a trapping laser wavelength of $\unit{815}{\nano\meter}$ (Ti:Sapphire laser) which corresponds to an effective detuning of about $\Delta=\unit{2\times 10^6}{\Gamma}$ with respect to the rubidium D-Lines at $\unit{780}{\nano\meter}$ and $\unit{795}{\nano\meter}$ and an optical power of $\unit{2}{\milli\watt}$ in the central focal spot of the microlens register. The implemented trap array has a pitch of $a=\unit{55}{\micro\meter}$, a waist of $w_0=\unit{3.7}{\micro\meter}$ and corresponding Rayleigh range of $z_R=\unit{52.8}{\micro\meter}$ which yields a trap depth of $U_0=\unit{k_B\times 0.1}{\milli\kelvin}$. The scattering rate evaluates to $\Gamma_{SC}=\unit{6}{s^{-1}}$. 
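These numbers follow directly from the two equations above; a minimal sketch reproducing them is given below. It is a two-level, rotating-wave estimate using the effective detuning quoted in the text, with fine-structure and hyperfine corrections neglected:

```python
import numpy as np

hbar = 1.0545718e-34   # J s
c = 2.99792458e8       # m/s
kB = 1.380649e-23      # J/K

# Two-level parameters for 85Rb (rounded): natural linewidth and an
# effective resonance frequency near the D lines.
Gamma = 2 * np.pi * 6.07e6            # rad/s
omega0 = 2 * np.pi * c / 780e-9       # rad/s
Delta = -2e6 * Gamma                  # effective red detuning quoted in the text

# Trap parameters quoted above
P = 2e-3       # W in the central focal spot
w0 = 3.7e-6    # m, beam waist

I0 = 2 * P / (np.pi * w0**2)          # peak intensity of a Gaussian beam

U0 = 3 * np.pi * c**2 / (2 * omega0**3) * (Gamma / Delta) * I0
Gamma_sc = 3 * np.pi * c**2 / (2 * hbar * omega0**3) * (Gamma / Delta)**2 * I0

print(f"trap depth |U0|/kB ~ {abs(U0) / kB * 1e3:.2f} mK")   # ~0.1 mK
print(f"scattering rate    ~ {Gamma_sc:.1f} s^-1")           # ~6 s^-1
```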
In addition, the coherence limiting state-changing part of photon scattering is suppressed by quantum interference effects [@1994OptL...19..207C] to a value of $\unit{0.5}{s^{-1}}$ which already in this configuration gives a limit to the coherence time of $2$ seconds. With the available laser power of $\unit{1}{\watt}$, several 100 sites of the processor architecture are accessible, but we limit the number of investigated qubits to about 100 in our current work (Fig. \[fig:microoptics\] (e)). Changing to a trapping laser with even larger detuning, e.g. using light at $\unit{1064}{\nano\meter}$ (Nd:YAG-laser), a power of $\unit{14}{\milli\watt}$ in the central trap leads to the same trap depth and an absolute scattering rate of about $\unit{0.3}{s^{-1}}$. Here, unwanted state-changing scattering is suppressed to $\unit{3\times 10^{-4}}{s^{-1}}$. Again, typically available laser powers of $\unit{10}{W}$ lead to architectures with several 100 register sites, now having coherence times in the range of minutes.\ Neutral atoms stored in this register represent intrinsically identical quantum systems which are decoupled from their environment to a high degree [@PhysRevLett.89.097903; @2007ApPhB..86..377L]. In addition, there is a wide range of options for encoding quantum information in neutral atoms in this architecture: quasi spin-$1\slash 2$ systems can be generated in the external degrees of freedom [@PhysRevLett.90.147901; @PhysRevA.66.042317; @PhysRevA.70.023606; @Eckert2006264], e.g. in the vibrational modes of the trapping potential, as well in the internal degrees of freedom represented by two states of the hyperfine manifold of the electronic ground state of the trapped atoms, as shown in [@2007ApPhB..86..377L]. Initialization, readout, and 1-qubit-rotation {#sec:prepmanread} ============================================= Alkali atoms - especially rubidium and caesium - have become the preferred atomic species for research in quantum information processing with neutral atoms: alkali atoms can be efficiently controlled by laser light in the external degrees of freedom as described in the previous section, but also in their internal states which is essential for quantum state preparation, manipulation and readout.\ Optical pumping [@RevModPhys.44.169], which is based on applying resonant laser light of adequate polarization, allows one to prepare atoms in desired internal states, e.g. the “clock states” (${\ensuremath{| F=2,m_F=0 \rangle}}$ and ${\ensuremath{| F=3,m_F=0 \rangle}}$ of $^{85}$Rb) of the ground state hyperfine manifold of rubidium. The resulting true two-level quasi spin-$1\slash 2$ system is an excellent qubit basis. The pumping light can be applied globally or site-selectively in our architecture for efficient initialization of variable qubit configurations. Selective readout of the qubit state can be achieved by utilizing fluorescence imaging, which can be done spatially resolved with a CCD-camera, and therefore simultaneously for all sites of the qubit register (Fig. \[fig:microoptics\] (e)) [@PhysRevLett.89.097903].\ ![image](fig5.png){width="75.00000%"} In fluorescence imaging, the detected signal level corresponds to the atom number at each site since every atom contributes a comparable amount of photons to the signal. 
With low-noise detection schemes another important requirement for the successful implementation of quantum information processing can be fulfilled: the number of atoms at each register site can be determined precisely, especially at the single atom level. The characteristics of number resolved atom detection in our architecture are illustrated in Fig. \[fig:SA\]. The left side displays the fluorescence signal obtained for one selected trap out of a 2D trap register for about $500$ consecutive experimental runs. The signal clearly exhibits reoccuring levels in signal amplitude, such that the levels for background light scattering (i.e. no atom), single-atom events, and two-atom events can be clearly discriminated. This becomes even more obvious in a histogram analysis of the experimental data (Fig. \[fig:SA\] (right)) which exhibits distinct peaks for 0, 1, and 2 atoms. For a statistical loading process, as implemented in the experimental situation presented in Fig. \[fig:SA\], a Poissonian probability distribution for the atom number distribution is observed and the maximum probability for single-atom events is limited to $\unit{37}{\%}$, while two-atom events are present as well. More advanced loading schemes have been implemented in single dipole trap experiments [@2001Natur.411.1024S; @2011NatPh...6..951G], including the possibility of eliminating two-atom events and increasing the single-atom loading efficiency by utilizing light assisted collisions [@RevModPhys.71.1]. Single-atom probabilities of $\unit{50}{\%}$ in the regime of collisional blockade [@2001Natur.411.1024S] and up to $\unit{83}{\%}$ for an optimized process starting from an ensemble of atoms [@2011NatPh...6..951G] with no or almost no two-atom events have been achieved. Implementing these techniques also in our architecture should lead to a collective near deterministic preparation of single atoms at all sites of our qubit register.\ ![image](fig6.png){width="75.00000%"} For the sake of improved signal-to-noise ratio, all experiments presented in the following sections have been performed with small atom ensembles with atom numbers per site ranging from 10 to 100. The quantum state of the investigated atomic qubits, given by the superposition of the basis states ${\ensuremath{| 0 \rangle}}={\ensuremath{| F=2,m_F=0 \rangle}}$ and ${\ensuremath{| 1 \rangle}}={\ensuremath{| F=3,m_F=0 \rangle}}$ of $^{85}$Rb is accessible to full coherent control by microwave radiation or optically with single-site addressability through two-photon coupling via a virtually excited intermediate state (Fig. \[fig:spectroscopy\] (left)). Achievable coupling strengths correspond to Rabi frequencies of several $\unit{10^6}{s^{-1}}$, thus allowing for spin rotations of $\pi$ on a microsecond timescale.\ Fig. \[fig:spectroscopy\] (right) displays a typical example for the coherent control of the quantum state dynamics in a Ramsey and spin-echo configuration [@2007ApPhB..86..377L]. In this experiment, the above described methods for quantum state preparation and control are utilized to analyze the time evolution of qubits initially prepared in a superposition of states ${\ensuremath{| 0 \rangle}}$ and ${\ensuremath{| 1 \rangle}}$. The signal amplitude decay in the Ramsey signal is due to reversible inhomogeneous dephasing which is eliminated in the spin-echo configuration. We label the time constant for homogeneous dephasing extracted from the latter case as decoherence time. 
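The distinction between reversible (inhomogeneous) and irreversible (homogeneous) dephasing can be made explicit with a toy ensemble average; the detuning spread used below is an illustrative number and is not fitted to the data of Fig. \[fig:spectroscopy\]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: each atom sees a slightly different static detuning,
# e.g. from its position in the trap (spread chosen for illustration only).
delta = rng.normal(0.0, 2 * np.pi * 50.0, 5000)   # rad/s, sigma = 50 Hz

def ramsey_contrast(T):
    """Ensemble-averaged Ramsey fringe contrast after free evolution T."""
    return np.abs(np.mean(np.exp(1j * delta * T)))

def echo_contrast(T):
    """Spin echo: the pi pulse at T/2 inverts the accumulated phase, so a
    purely static detuning cancels exactly and the contrast stays at 1."""
    return np.abs(np.mean(np.exp(1j * delta * (T / 2) - 1j * delta * (T / 2))))

for T_ms in (1, 5, 10, 20):
    T = T_ms * 1e-3
    print(f"T = {T_ms:2d} ms: Ramsey {ramsey_contrast(T):.2f}, echo {echo_contrast(T):.2f}")
```

In this static model the echo contrast stays at unity; in the experiment, slow fluctuations between the two halves of the echo sequence remain and set the decoherence time quoted below.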
In our architecture, decoherence times on the order of $\unit{100}{\milli\second}$ have been observed, already allowing for hundreds or thousands of coherent control pulses to be applied during qubit coherence [@PhysRevLett.105.170502]. Reconfigurable single-site addressable qubit register {#sec:slm} ===================================================== Architectures based on 2D arrays of tightly focused laser beams with typical separations in the micrometer regime for qubit storage inherently provide the ability to address the individual qubit sites since one can use the optics generating the trap array at the same time for addressing purposes. Based on the scalable architecture presented above, we have introduced and experimentally implemented a novel approach for complementing the ability to perform quantum operations in parallel with an additional versatility by achieving reconfigurable, site-selective initialization and operation in freely selectable subsets of sites. We combine 2D arrays of microlenses with per-pixel addressable spatial light modulators (SLM). This results in reconfigurable, per-site addressable 2D configurations of diffraction-limited laser foci in the focal plane of the microlens array which - as before - are re-imaged into the vacuum system [@PhysRevA.81.060308]. Central to our approach is the fact that we use the SLM only for the addressing of individual microlenses, but not as a holographic phase element for creating complex focal spot structures [@Bergamini:04; @PhysRevA.73.031402]. This ensures high stability and a diffraction-limited performance, both given by the advantageous characteristics of the microlenses.\ ![image](fig7.jpg){width="75.00000%"} A schematic view of the extended experimental setup is presented in Fig. \[fig:addressing\]. Laser light for atom trapping or manipulation globally illuminates an SLM which is placed in front of the microlens array. The SLM is a 2D array of pixels, each acting as an individually tunable optical attenuator, which we use as a per-pixel intensity modulator. This allows one to separately control the light power impinging on each microlens by inscribing a reconfigurable pattern of transmitting or non-transmitting sections into the SLM. In the configuration used for the experiments presented below, an area of 80 pixels corresponds to one single microlens and the transmission of each pixel is subject to computer-control. This results in a range of the relative transmitted intensity between and corresponding to a contrast of 270:1. For the experiments presented here only static configurations of the SLM are used, but employing state-of-the-art fast updating devices, switching frequencies in the several kilohertz regime are achievable for liquid crystal or micro-mirror (DLP) based devices.\ ![image](fig8.png){width="75.00000%"} We used this setup to produce versatile 2D configurations of atom traps [@PhysRevA.81.060308] as shown in Fig. \[fig:addressing\]. Starting from the fundamental structure of the 2D trap register, created by globally illuminating the full microlens array (all pixels of the SLM turned to full transmission) we have demonstrated the ability to change the pitch and orientation of the qubit register by illuminating only every other microlens creating a ’superlattice’ with definable structure (Fig. \[fig:addressing\] (b)), to generate subsets of separated dipole trap arrays (Fig. \[fig:addressing\] (c)), e.g. 
for the implementation of quantum error correction schemes or plaquette states in 2D lattice spin models [@lattice_spin_2006], and to realize the structure of a ring lattice with periodic boundary conditions (Fig. \[fig:addressing\] (d)) [@PhysRevLett.95.063201; @PhysRevA.79.043419].\ In addition to creating flexible trap geometries, we also perform coherent manipulation of 2D sets of atomic quantum systems in parallel as well as site-selectively in a reconfigurable fashion. We use the combined system of SLM and microlens array in a very similar fashion as before but now for the control of the light inducing the two-photon coherent coupling. This provides fully flexible quantum state control of the qubits stored in the register by inscribing freely configurable phase shifts at definable sites [@PhysRevA.81.060308].\ This site-selective addressability allows one to prepare complex 2D spin configurations. As an example, we use the SLM to prepare a 2D configuration of alternatingly anti-parallel spins in neighboring trapping sites (Fig. \[fig:spinflip\] (right, top)) by applying a $\pi$ phase shift in the pattern of Fig. \[fig:addressing\] (b) to atoms initially in state [$| 0 \rangle$]{} at all sites. To demonstrate the coherence of this site-selective reversal of spins, a Ramsey experiment is performed in all traps simultaneously after the spin-flip operation. In Fig. \[fig:spinflip\] (right, bottom) two fluorescence images showing atoms in state [$| 0 \rangle$]{} after different free evolution times are presented for nine traps and Ramsey oscillations in two neighboring traps are given in detail (Fig. \[fig:spinflip\] (left)). All traps show Ramsey oscillations, but due to their different initial spin states, we observe the expected phase difference of $\pi$ in the Ramsey oscillations between qubits initially prepared in ${\ensuremath{| 0 \rangle}}$ and ${\ensuremath{| 1 \rangle}}$, respectively. Coherent transport of atomic quantum states {#sec:transport} =========================================== Central to the functionality of our complex processor architecture (Fig. \[fig:architecture\]) is the implementation of a scalable quantum shift register which serves as data bus and connects spatially separated loading and processing units. In the following, we present an all optical device which offers precise control of the position and transport of trapped neutral-atom qubits in registers of dipole potentials. Moreover, this quantum shift register can serve as a 2D quantum memory to archive and retrieve quantum information, and sequentially shuffle quantum information through complex architectures [@PhysRevLett.105.170502].\ The shift operation is based on consecutive loading, moving, and reloading of qubits stored in two independently controllable quantum registers. This configuration is obtained from two superimposed dipole trap arrays created either from two separated microlens arrays or by irradiating a single microlens array with two trapping laser beams under different incident angles. To move the traps, we vary the incident angle of one of the trapping laser beams by a scanning mirror, which causes the foci of the respective array to shift laterally within the focal plane. Atoms stored in the trap register are transported along with the laser foci and it is straightforward to shift the array of trapped atoms by a distance of the full trap separation of $a=\unit{55}{\micro\meter}$ as shown in Fig. 
\[fig:transport\].\ ![image](fig9.png){width="75.00000%"} For a scalable shift register [@PhysRevLett.105.170502], we combine the moveable quantum register with a static one of identical parameters (Fig. \[fig:ribbon\] (left)). Consecutive moving and reloading of atoms between the two registers allows for atom transport over macroscopic distances, where the number of achievable shift sequences is only limited by the size of the illuminated trap array (Fig. \[fig:transport\]). The fluorescence images in Fig. \[fig:ribbon\] (right) show two shift sequences in detail. Pictured is the central column of a 2D quantum register (as indicated by the differently colored central colunm in Fig. \[fig:ribbon\] (left)) as a function of time together with the corresponding timing sequence for the depths and positions of the two dipole trap arrays. The shift operation is performed as follows: atoms are loaded into the moveable trap register (Array 1), shifted for a full trap separation, and transferred to the static register (Array 2) where they are stored while the moveable register is returned to its initial position. To complete a shift cycle, the atoms are loaded from the static register back to the moveable one, ready for a repetition of the shift sequence.\ We do not observe any atom loss or heating when reloading between identical potential wells or during transport with durations in the single millisecond regime, which ensures the ability of high-fidelity transport of atoms over macroscopic distances for sufficiently large trapping arrays. Technical optimization is capable of pushing time constants in this process below the millisecond regime to the limit given by vibrational frequencies of the trapping potential.\ An essential requirement for quantum information processing in this architecture is the preservation of coherence during transport, reloading, and the full shift sequence. We have performed a detailed investigation on the influence of the shift register operation on coherence in order to address that issue [@PhysRevLett.105.170502]. We embedded the shift register cycle in a spin-echo experiment, thereby analyzing its influence on the decoherence time. The corresponding experimental sequence is shown in Fig. \[fig:coherenceSHIFT\] (left). ![image](fig10.png){width="75.00000%"} For a quantitative investigation, Fig. \[fig:coherenceSHIFT\] (right) presents the signal contrast, i.e. the maximum amplitude $A_{\textrm{E}}(2t_{\pi})$ of the echo signal at time $2t_{\pi}$ normalized to the amplitude of the Ramsey signal $A_{\textrm{R}}(t=0)$ as a function of the free evolution time $2t_{\pi}$ for atoms at rest (red triangles) and atoms participating in the full shift register sequence (blue circles). The loss in signal contrast is almost identical in both cases. From a detailed analysis of external influences, we determine homogeneous dephasing due to irreversible variations of the atomic resonance frequency to be the dominant cause for loss of contrast. We identify heating caused by photon scattering from the trapping laser to be the most likely cause for this. Following the calculations given in [@PhysRevA.72.023406], the signal contrast should be described by the Gaussian function with time constant $T_{2}'$ for reduction of the initial contrast to its $e^{-1}$-value. The measurements in Fig. \[fig:coherenceSHIFT\] can be well fitted to $C(2t_{\pi})$ which gives time constants for both cases (atoms at rest and atoms in the shift register) of about $\unit{40}{\milli\second}$. 
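For reference, a Gaussian envelope consistent with the $e^{-1}$ definition of $T_{2}'$ used above is $$C(2t_{\pi})=\frac{A_{\textrm{E}}(2t_{\pi})}{A_{\textrm{R}}(0)}\simeq\exp\left[-\left(\frac{2t_{\pi}}{T_{2}'}\right)^{2}\right],$$ where this explicit form is our reading of the description above and of [@PhysRevA.72.023406]; possible prefactors of order unity are omitted.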
On average, the ratio of experimentally determined coherence times evaluates to $T_{2,shift}'\slash T_{2,rest}'=0.98 (4)$. Thus, no additional dephasing or decoherence of internal-state superposition states occurs for the full shift register cycle within the measurement uncertainty. This proves that the qubit transport (as necessary for most of the two-qubit gate operations proposed in Sec. \[sec:gates\]) preserves coherence and that the fundamental shift sequence can be cascaded and thus scaled to complex and versatile 2D architectures allowing coherent quantum state storage and transport along complex and reconfigurable paths. ![image](fig11.png){width="75.00000%"} Prospects for the implementation of two-qubit gates {#sec:gates} =================================================== One of the essential requirements for the realization of a quantum processor is the capability of performing arbitrary one-qubit gates and at least one suitable two-qubit gate [@2000ForPh..48..771D]. Together they represent a universal set of quantum gates [@PhysRevA.52.3457]. Our single-site addressable 2D quantum register inherently provides the framework for single-qubit operations as presented in Secs. \[sec:prepmanread\] and \[sec:slm\].\ The realization of two-qubit gates, in general, is still subject to active research in a variety of approaches towards the physical implementation of quantum computation and is inevitably linked to great demands on experimental control, since controlled qubit-qubit interaction becomes indispensable. There are several promising proposals for architectures based on optically trapped neutral atoms in multi-site potential wells.\ Originating from the rapid developments in the field of ultracold quantum gases, the idea of entangling atoms via controlled s-wave collisions was formulated [@PhysRevLett.82.1975]. The deterministic implementation of coherent collisional events relies on the precondition of preparing single atoms in well-defined vibrational states (e.g. the ground state) of the trapping potential, which is characteristic of the transition to a Mott-insulator state in a BEC. Hence, remarkable experimental results have been achieved in optical lattices, namely the production of entangled states [@2003NatMandel] and the realization of a two-qubit $\sqrt{SWAP}$ phase gate [@2007Natur.448..452A]. Both realizations employ time- and state-dependent potentials for the controlled overlap and separation of atomic wave functions. In this regard, the coherent transport of atomic quantum states (Sec. \[sec:transport\]) with full control over site separation represents a fundamental step which has to be complemented with state-selectivity and cooling to the ground vibrational state in order to utilize cold collisions according to the above method in the current setup.\ In addition, the shift register is the foundation for two-qubit-gate proposals based on the external degrees of freedom. One is based on the use of the two lowest vibrational levels as the qubit basis, where gate operations involve tunnelling controlled by adiabatic spatial approach and separation of traps together with cold collisional interaction [@PhysRevA.66.042317; @PhysRevA.70.023606; @Eckert2006264], and a second one is based on quantum computing with spatially delocalized qubits [@PhysRevLett.90.147901]. In the latter case, the computational basis states are defined by the presence of a single atom in the ground state of one out of two trapping sites.
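To illustrate this encoding, a deliberately simplified sketch treats the single atom in two tunnel-coupled wells as an effective two-level system; the tunnel coupling $J$ below is purely illustrative and would in practice be set by the adiabatic approach of the traps:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0545718e-34  # J s

def two_site_dynamics(J_over_h, t):
    """Single atom in a double well, basis {|L>, |R>}, tunnel coupling J.
    Returns the probability of finding the atom in |R> after time t,
    starting from |L>."""
    J = J_over_h * 2 * np.pi * hbar            # convert J/h [Hz] to energy [J]
    H = np.array([[0.0, -J], [-J, 0.0]])
    psi0 = np.array([1.0, 0.0], dtype=complex)
    psi_t = expm(-1j * H * t / hbar) @ psi0
    return np.abs(psi_t[1])**2

# Illustrative tunnel coupling of J/h = 1 kHz: full L -> R transfer after 0.25 ms
for t_ms in (0.0, 0.125, 0.25, 0.5):
    print(t_ms, "ms ->", round(two_site_dynamics(1e3, t_ms * 1e-3), 3))
```

In this picture, bringing the traps together for a fixed time implements a rotation in the {left, right} basis, i.e. the elementary single-qubit operation of the delocalized-qubit scheme.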
There is a fascinating extension of this approach, also making use of the underlying concept of adiabatically connecting adjacent trapping sites: “Atomtronics” with holes [@PhysRevA.82.013604] allows for the construction of a coherent single hole diode and transistor in an array of dipole traps filled with two atoms and one hole. These elements could serve as the fundamental building blocks for atom-based implementations of processor schemes analogous to currently used electronic devices, but working on coherent principles and single-atom control.\ Through recent experimental success [@PhysRevLett.104.010502; @PhysRevLett.104.010503; @PhysRevA.82.030306], two-qubit gates based on the optical excitation of neutral atoms to Rydberg states with principal quantum number $n\gg 1$ currently appear as the most promising scheme for the implementation of two-qubit gates in our 2D quantum processing architecture. For Rydberg atoms, the evoked dipole moment mediates strong long-range interaction exceeding the accessible coupling strength of groundstate atoms by orders of magnitude. Central to quantum information processing with Rydberg atoms is the shift of the resonance frequency of one atom (target atom) induced by the excitation of a nearby control atom to a Rydberg state [@PhysRevLett.85.2208; @PhysRevA.65.052301; @PhysRevA.72.022347; @RevModPhys.82.2313]. The consequence is a blockade sphere of inhibited excitation with a radius in the micrometer regime which enables quantum information architectures with qubit separations up to $10 \micro\meter$. This has been experimentally demonstrated in systems of two dipole traps [@PhysRevLett.104.010502; @PhysRevLett.104.010503; @PhysRevA.82.030306] with typical trap parameters of $\unit{3.2}{\micro\meter}$ waist and $\unit{8.7}{\micro\meter}$ separation as given in [@PhysRevA.82.030306]. As presented in Fig. \[fig:microoptics\](d,e), we currently operate 2D arrays of well resolved traps with a fundamental pitch of and a beam waist of , which shows that by minor modifications in the optical setup we can reach and even exceed the required trap parameters for a successful Rydberg-gate operation as demonstrated in [@PhysRevA.82.030306]. For this reason, we can apply to our system the detailed analysis of Rydberg state mediated quantum computing with focus on the relevant physical mechanisms contributing to gate errors given in [@PhysRevA.82.030306; @PhysRevA.72.022347; @1742-6596-264-1-012023] which predicts a fidelity well above 0.99. The expected intrinsic error evaluates to $6.5\times 10^{-3}$ and the experimentally determined fidelity is 0.92, where technical errors contribute most of the difference between the prediction and the experimental results.\ This discussion shows that Rydberg-mediated two-qubit gate operations have become a very promising candidate for the currently still open issue of implementing a suitable two-qubit quantum gate in our 2D quantum processing architecture. Conclusion {#sec:conclusion} ========== We have presented a scalable architecture for quantum information processing and quantum simulation with neutral atoms and have discussed its characteristics with regard to the requirements imposed for the successful implementation of quantum computing schemes. The design is based on a 2D quantum register created by 2D sets of optical micro-potentials, which can be conceptionally split into spatially separated functional units, e.g. 
for preparation, processing, and readout.\ We obtain suitable hardware, with typical dimensions of the individual register cell in the range of a few microns, from arrays of focused laser beams employing micro-fabricated lens arrays. This implementation ensures single-site addressability as demonstrated by producing reconfigurable trap patterns and quantum state control of selected qubits. In a combined system used for single-site addressing, the stability and the diffraction-limited performance of the microlens array are complemented by the flexibility of a per-pixel addressable spatial light modulator. The introduced system is capable of performing single-qubit operations as well as qubit-specific initialization and readout.\ A scalable quantum shift register connecting adjacent trapping sites realizes the data bus of the prospective quantum processor. As demonstrated experimentally, the shift operation can be performed with negligible atom loss, heating, or additional dephasing or decoherence, thus allowing for qubit transport over macroscopic distances. Intrinsically, the quantum shift register provides full control over trap separations, which also becomes an essential ingredient for the implementation of two-qubit gates. In this respect, among other approaches, the optical and therefore site-selective control of Rydberg interaction turns out to be a very promising candidate for implementing two-qubit gates for quantum computing in 2D quantum registers.\ In summary, we have given a detailed analysis of our architecture for scalable quantum information processing with neutral atoms in 2D quantum state registers. Although the quantum processor has yet to be implemented in full operation by combining all of the building blocks analyzed in the previous sections with two-qubit gate operations as achieved in refs. [@PhysRevLett.104.010502; @PhysRevLett.104.010503; @PhysRevA.82.030306], no obstacles in principle can be identified that would prevent a successful realization. In addition, next-generation configurations will strongly benefit from the technological basis available in micro-fabrication, which will enable optical, semiconductor and micro-mechanical structures to be combined on a single chip, thereby opening an excellent path towards parallelized, large-scale quantum computing. We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG), by the European Commission (Integrated Projects ACQUIRE, ACQP, and SCALA), by NIST (Grant No. 60NANB5D120), and by the DAAD (Contract No. 0804149).
--- abstract: 'We present a spatially resolved study of the relation between dust and metallicity in the nearby spiral galaxies M101 (NGC5457) and NGC628 (M74). We explore the relation between the chemical abundances of their gas and stars with their dust content and their chemical evolution. The empirical spatially resolved oxygen effective yield and the gas to dust mass ratio (GDR) across both disc galaxies are derived, sampling one dex in oxygen abundance. We find that the metal budget of the NGC628 disc and most of the M101 disc appears consistent with the predictions of the simple model of chemical evolution for an oxygen yield between half and one solar, whereas the outermost region (R$\geq$0.8$\rm R_{25}$) of M101 presents deviations suggesting the presence of gas flows. The GDR-metallicity relation shows a two slopes behaviour, with a break at 12+log(O/H)$\approx$8.4, a critical metallicity predicted by theoretical dust models when stardust production equals grain growth. A relation between GDR and the fraction of molecular to total gas, $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ is also found. We suggest an empirical relationship between GDR and the combination of 12+log(O/H), for metallicity, and $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$, a proxy for the molecular clouds fraction. The GDR is closely related with metallicity at low abundance and with $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ for higher metallicities suggesting ISM dust growth. The ratio $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ correlates well with 12 + log(O/H) and strongly with log(N/O) in both galaxies. For abundances below the critical one, the ’stardust’ production gives us a constant value suggesting a stellar dust yield similar to the oxygen yield.' author: - | J.M. Vílchez$^{1}$[^1], M. Relaño$^{2,3,4}$, R. Kennicutt$^{2,5}$, I. De Looze$^{6,7}$, M. Moll[á]{}$^{8}$, M. Galametz$^{9}$\ $^{1}$Instituto de Astrofísica de Andalucía - CSIC, Glorieta de la Astronomía s.n., 18008 Granada, Spain\ $^{2}$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^{3}$Dept. Física Teórica y del Cosmos, Universidad de Granada, Spain\ $^{4}$Instituto Universitario Carlos I de Física Teórica y Computacional, Universidad de Granada, 18071, Granada, Spain\ $^{5}$Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721\ $^{6}$Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK\ $^{7}$Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent, Belgium\ $^{8}$Departamento de Investigación Básica, CIEMAT, E-28040 Madrid, Spain\ $^{9}$Astrophysics department, CEA/DRF/IRFU/DAp, Université Paris Saclay, UMR AIM, F-91191 Gif-sur-Yvette, France bibliography: - 'jvm.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: Metals and dust content across the galaxies M101 and NGC628 --- \[firstpage\] galaxies: general – galaxies: metallicity – galaxies: ISM Introduction ============ The study of the chemical evolution of galaxies is a powerful way to understand of their history of star formation and evolution. Nucleosynthesis of the different chemical elements occurs in stars of different masses and lifetimes that inject their metal production into the interstellar medium (ISM) across the galaxies. 
Chemical abundances of stars and ISM in star-forming disc galaxies are observed to differ depending on the precise location in the galaxy, most frequently defining spatial gradients that reflect its star formation and chemical history. Massive gas flows have proven to be an essential ingredient in order to explain the observed shapes of the abundance gradients in disc galaxies. In particular outflows, and especially selectively enriched outflows, have been postulated for virtually all galaxies, mainly because they seem to be an efficient mechanism to decrease the yield of the chemical system [e.g. @2007ApJ...658..941D], though some discussion is still open [e.g. @2001ApJ...552...91S; @2013MNRAS.434.2491G]. On the other hand, gas inflows are a key ingredient in most chemical evolution codes as well as in accretion models of galaxy formation. A substantial amount of work in the literature is devoted to studying the (inter)relation between the properties of disc galaxies, notably spirals, and their chemical evolution parameters. A large part of this work is based on global galaxy properties (e.g. from SDSS), i.e. under the [*one point-one galaxy*]{} scheme. Spatially resolved chemical evolution studies can provide new useful information. Studying the chemical evolution of discs at spatially resolved scales imposes strong constraints (e.g. on selective metal outflows) linked to galaxy structure and dynamics. For example, supernova-driven galactic winds departing from closed-box evolution would only produce a substantial decrease of the retained fraction of processed gas for galaxies with rotation velocities lower than some 160 km/s. A useful empirical indicator of the efficiency of the enrichment of the interstellar gas in heavy elements is the “effective yield” [@1990MNRAS.246..678E; @2007ApJ...658..941D], a quantity that measures how much the metallicity of a galaxy deviates from what would be expected for a system with the same gas mass fraction but that has evolved in a closed-box framework, i.e. where flows of gas (inflow or outflow) are not allowed (see Sect. \[sec:chemevol\]). As shown by @2007ApJ...658..941D, in fact only metal-enriched outflows can reduce the effective yield of gas-rich systems by a substantial amount. For gas-poor systems, though, the chemical yields are difficult to change irrespective of the amount of gas lost or accreted. The study of outflows and inflows of gas is important to understand the chemical evolution of galaxies, since both processes are directly related to the large reservoirs of neutral and molecular gas and to the circumgalactic material surrounding star-forming galaxies. These external reservoirs can be metal enriched by supernova winds carrying away chemically processed gas. Models can also include external gas infall, either pristine or enriched. Though frequently overlooked, incorporating spatially resolved chemical abundances, e.g. O/H radial gradients, can add new constraints to chemical evolution modelling. Also, spatially resolved information on the ratio of secondary to primary elements (e.g. the nitrogen to oxygen ratio, N/O, versus O/H) can tell us about the relative roles played by flows, the star formation efficiency, or possible delays in the delivery of the nucleosynthetic products of low and intermediate mass stars, among other clues [e.g. @2006ApJ...647..984H; @2006MNRAS.372.1069M; @2017MNRAS.471.1743D; @2018MNRAS.473..241H]. Much of the work done for samples of galaxies has used single-aperture abundances [e.g.
@1994ApJ...420...87Z; @2002ApJ...581.1019G; @2004ApJ...613..898T], thereby losing the wealth of information across e.g. spiral discs, whereas recent integral field spectroscopy surveys (e.g. CALIFA, MANGA, SAMI, VENGA) have provided new spatially resolved data. Measuring chemical abundances at many positions across galaxies -from centre to outskirts- allows a better characterisation of the metallicity spatial profile along the discs which, together with the gas mass fraction profile, gives strong constraints on the effective yield and on chemical modelling. The abundance profiles of O/H and N/O, taken together with their corresponding gas and stellar surface density, can also inform us about how and when gas inflows and/or outflows are present, modulating the chemical, ionising and mechanical feedback of each star forming complex on the molecular clouds and dust reservoirs. These feedback mechanisms could affect the places where dust growth is believed to occur, eventually leading to e.g. grain destruction by supernova blast waves and (possibly also) wind shocks or intense radiation fields. Together with metallicity and gas, dust is also directly related, via its formation and destruction channels, to the chemical evolution of galaxies. Dust constitutes a fundamental ingredient for star formation and chemical evolution, often overlooked in the metal balance across galaxies. Dust is an essential component of the ISM; it absorbs and scatters the light from stars and re-emits it in the infrared, dominating the spectral energy distribution in this range. Dust grains are intimately related to the formation of molecular hydrogen, both providing substantial cooling, of special relevance at very low metallicity [e.g. @2005ApJ...626..627O; @2006MNRAS.369.1437S]. A detailed description of the physics and the chemical properties of dust, grain distribution and composition in the ISM is out of the scope of this work and can be found elsewhere. Dust production and destruction and dust content evolution involve very complex physics and different chemical processes competing, in practice, through diverse channels which, in turn, modulate the final gas to dust mass ratio (GDR). Production by evolved stars and supernovae adds to the dust budget, and the overall dust yield from stellar sources (basically AGB stars and some SNe II) can be evaluated for a given IMF and star formation rate history. Nonetheless, stellar dust production alone cannot account for current observations, and growth of grains in the ISM is needed, especially in high metallicity environments. Models of the dust content in galaxies show how an increasing rate of dust mass growth in the ISM can dominate dust production by stars assuming constant star formation. The effects of episodic star formation histories on ISM dust growth have also been included in such models. On the other hand, destruction of dust can easily be produced by supernova shocks [@2007MNRAS.378..973B], and intense UV radiation fields have also been proposed to affect the carbonaceous dust population [@2017MNRAS.471.1743D]. A critical metallicity has been defined which discriminates between these two main ISM dust regimes, and which should be seen when analysing the GDR as a function of the gas oxygen abundance, assuming different star formation time scales or a bursty star formation scheme. The GDR versus oxygen abundance relation has also been studied for an extended sample of star forming galaxies, ranging from dwarfs to large spirals.
Using chemical evolution models, the authors showed that the scatter obtained in this relation could be explained by a range of chemical evolution models and star formation timescales, and suggested two main trends with metallicity. It is important to bear in mind that the derivation of the GDR is complex and can suffer from several effects; among them, the conversion of CO observations into molecular gas content is a large source of uncertainty, together with the impact of the CO-dark gas fraction, the adopted dust model, and the chemical abundance gradients; i.e. a lower oxygen abundance favours the prominence of more atomic gas and more CO-dark gas, probably due to lower dust shielding and self-shielding, as suggested for the lower metallicity outer Milky Way. Recent studies of spatially resolved galaxies [e.g. @2013ApJ...777....5S] have tried to minimise the uncertainty in the CO-H$_{\rm 2}$ conversion factor on kpc scales. A new database of gas and dust profiles for a large sample of galaxies has also been presented, proposing another recipe to tackle this problem. It is clear that the star formation history, metal content and chemical evolution across a star forming galaxy should appear tightly related to the observed dust content. A detailed study relating the dust content and metallicity in galaxy discs using high quality data is therefore timely. We need to combine chemical abundances and physical conditions of the ISM with a precise determination of the dust content and evolution. We have selected the nearly face-on nearby galaxies NGC5457 (M101) and NGC628 (M74) as prototype objects to carry out this study. They have a similar mass (logM$_\star$/M$_{\odot}$$\approx$10.2) though very different chemical and structural properties and environment. The global budgets of metals and dust across M101 and NGC628 are derived from various datasets covering a large range of diagnostics (optical spectroscopy of the ionised gas, multi-band spectrophotometry of the stars, dust and gas, radio and mm observations of their neutral and molecular gas) analysed consistently. The GDR and the empirical spatially resolved (effective) metal yield across the discs of both galaxies are derived, sampling one dex range in O/H. Here we empirically explore the presence of a relationship between GDR and chemical abundance across both galaxies, discussing the effects of their different chemical evolution, from their effective yield spatial profiles to the dust yield and ISM dust growth. This paper is organised as follows: in Sect. \[sec:data\] we describe the data sets used and provide a description of the two selected galaxies. The main results obtained in this work are detailed in Sect. \[sec:results\]. The final discussion and the main conclusions are presented in Sect. \[sec:discussion\] and Sect. \[sec:conclusions\], respectively.

Sample and data {#sec:data}
===============

The galaxies M101 (NGC5457) and NGC628 (M74) are the two selected nearby, well resolved spirals taken as representative examples of their class. M101 is a large, nearly face-on (inclination 18[$^\circ$]{}) spiral located at a distance of 7.4Mpc. The galaxy shows a well-known asymmetry of the disc and an asymmetric plume towards larger radius seen in deep exposures, consistent with a very extended HI disc to the northeast [@2013ApJ...762...82M], a footprint of possible past interactions with the nearby galaxies NGC5477 and NGC5474. The general properties of M101 are presented in Table\[tab:1\].
NGC628 (M74) is a giant late type spiral galaxy located at a distance of 7.2Mpc and seen practically face-on (inclination 5[$^\circ$]{}). The adopted properties for NGC628 are shown in Table\[tab:1\]. NGC628 presents a large, extended spiral structure and an undisturbed optical profile, making it ideally suited for this study. M101 and NGC628 have been observed at all wavelength ranges relevant for this study, belong to the KINGFISH (Key Insights on Nearby Galaxies: a Far-Infrared Survey with Herschel) sample [@2011PASP..123.1347K], and high quality photometric and spectroscopic data are available for them. Finally, both galaxies have been observed as part of the CHAOS (CHemical Abundances of Spirals) project [@2015ApJ...806...16B; @2016ApJ...830....4C] that derives estimates of the electron temperatures and direct chemical abundances for a large number of [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions. The structure of both galaxies has been studied within the Spitzer Survey of Stellar Structure in Galaxies (S4G) [@2010PASP..122.1397S] and they have been classified (via surface brightness profile decomposition) as hosting a pseudobulge, which is especially notable in M101. In Fig. \[fig:RGB\] we show RGB images of both galaxies.

![image](fig1a.pdf){width="45.00000%"} ![image](fig1b.pdf){width="45.00000%"}

  ---------------- ------------------ ----------- ------------------ -----------
                   NGC5457                         NGC628
  Property         Value               Reference   Value              Reference
  R.A.             14h03m12.5s         \[1\]       01h36m41.7s        \[1\]
  Dec              54$^\circ$20m56s    \[1\]       15$^\circ$47m01s   \[1\]
  Inclination      18$^\circ$          \[2\]       5$^\circ$          \[3\]
  Position Angle   39$^\circ$          \[2\]       12$^\circ$         \[4\]
  Distance         7.4Mpc              \[5\]       7.2Mpc             \[6\]
  $\rm R_{25}$     864                 \[7\]       315                \[8\]
  ---------------- ------------------ ----------- ------------------ -----------

  \[tab:1\]

Chemical abundances
-------------------

### Ionized gas

Chemical abundances of a large number of [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions in M101 have been obtained in several works, notably @1998AJ....116.2805V, @2003ApJ...591..801K, @2007ApJ...656..186B and @2013ApJ...766...17L. In this work we use only abundances derived directly using measurements of the electron temperature sensitive lines. We avoid chemical abundances derived from either empirical or photoionisation model calibrations; though a plethora of different abundance calibrations is available, they can suffer from substantial uncertainties and systematics [see e.g. @2013ApJ...766...17L]. Therefore we have adopted here the abundance gradients of oxygen and nitrogen from @2016ApJ...830....4C, who derived direct abundances (i.e. using the measurements of electron temperature) for a total of 109 [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions, including updated abundances from the aforementioned references. The radial oxygen abundance gradient shows a well defined negative slope with a relatively low scatter. We adopt here the following abundance gradient expressions as a function of radius normalised to $\rm R_{25}$: 12+log(O/H) = (8.716$\pm$0.023) - (0.832$\pm$0.044) $\rm R/R_{25}$ for oxygen (see Fig. \[fig:SD\_profiles\]); and for N/O, log(N/O) = (-0.505$\pm$0.029) - (1.415$\pm$0.075) $\rm R/R_{25}$; the N/O gradient flattens beyond $\rm R/R_{25}$=0.64, where N/O is better characterised by a constant value of log(N/O)=-1.434$\pm$0.107, reflecting the classical primary to secondary behaviour of nitrogen.
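As a simple illustration, the adopted M101 gas-phase gradients can be evaluated directly as functions of the fractional radius. The short Python sketch below encodes the two expressions just quoted, including the N/O flattening beyond $\rm R/R_{25}$=0.64; the function names are ours and purely illustrative, and only the central values of the fits are used.

```python
# Minimal sketch: evaluate the adopted M101 radial abundance gradients
# (the fits quoted above). Function names are illustrative, not from CHAOS.
import numpy as np

def m101_oxygen_abundance(r_over_r25):
    """12+log(O/H) for M101 as a function of R/R25 (central values only)."""
    return 8.716 - 0.832 * np.asarray(r_over_r25, dtype=float)

def m101_log_no(r_over_r25):
    """log(N/O) for M101: linear gradient, flat at -1.434 beyond R/R25 = 0.64."""
    r = np.asarray(r_over_r25, dtype=float)
    return np.where(r < 0.64, -0.505 - 1.415 * r, -1.434)

if __name__ == "__main__":
    for r in (0.0, 0.4, 0.8):
        oh = float(m101_oxygen_abundance(r))
        no = float(m101_log_no(r))
        print(f"R/R25={r:.1f}: 12+log(O/H)={oh:.2f}, log(N/O)={no:.2f}")
```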
Chemical abundances for NGC628 [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions have been derived in a number of papers. We note the integral field spectroscopy studies performed by @2011MNRAS.415.2439R, which produced an extended mosaic of the galaxy, and, more recently, by @2018MNRAS.477.4152R using the SITELLE[^2] large-field Fourier transform spectrograph. However, in these two works a direct derivation of abundances is lacking, as is also the case for @1998AJ....116..673F, who observed several [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions beyond the NGC628 optical radius. The oxygen abundance gradient has recently been derived for the inner $\approx$7 kpc disc of NGC628 by @2016MNRAS.455.1218B, though using bright-line abundance calibrations. Direct chemical abundances have been obtained by @2013ApJ...775..128B and @2015ApJ...806...16B for the largest number of NGC628 [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions with electron temperature measurements to date. The radial abundance gradients of O/H and N/O obtained in the latter reference have been adopted in this work. The large variance seen in the O/H abundance, which shows scatter along the galactic radius, is surprising. In contrast, the N/O radial gradient appears well defined, with a negative slope and low scatter. @2015ApJ...806...16B suggest various explanations for the scatter, such as the selection of the corresponding electron temperature across the ionisation structure of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions, an effect which should be minimised in N/O due to the lower dependence of this abundance ratio on electron temperature. The abundance gradients adopted here are: 12+log(O/H) = (8.835$\pm$0.069) - (0.485$\pm$0.122) $\rm R/R_{25}$ and log(N/O) = (-0.521$\pm$0.035) - (0.849$\pm$0.064) $\rm R/R_{25}$; beyond the isophotal radius the N/O gradient appears to flatten out, as derived in @2015ApJ...806...16B. The zero points of the M101 and NGC628 O/H and N/O radial gradients are very similar, to within the errors, though the O/H slope is flatter for NGC628, which presents a higher O/H abundance than M101 along the disc within the optical radius. In the case of N/O the difference in the gradient slopes seems less pronounced (though still apparent in the same sense). The radial profiles of the gaseous abundances, and corresponding uncertainties, adopted for M101 and NGC628 are shown in Fig. \[fig:SD\_profiles\].

### Stars

The M101 stellar metallicity has been derived by @2013ApJ...769..127L fitting evolutionary population synthesis [@2003MNRAS.344.1000B] models to a set of multiband photometric images (ultraviolet, optical and infrared) together with the fifteen narrowband images observed in the BATC filter system. Their metallicity map shows an average radial gradient, giving \[Fe/H\]$\sim$-0.2dex in the centre and a radial gradient slope of -0.011dex/kpc. Thus the stellar \[Fe/H\] varies between -0.2 dex at the centre and -0.365 dex at R=15kpc, with typical uncertainties of 0.2 to 0.3 dex [see Fig. 9 in @2013ApJ...769..127L]. To translate this into oxygen abundance we use the \[O/Fe\]–\[Fe/H\] relation, as shown by @2016AN....337..944M.
Taking into account the large uncertainties quoted before for the stellar \[Fe/H\], and being conservative, we have adopted an average correction of \[O/Fe\]=0.15 (for an average \[Fe/H\]$\approx$-0.3) and have applied this correction when deriving \[O/H\] values from the \[Fe/H\] measurements, leading to 12+log(O/H) $\approx$ 8.64 at R=0 kpc and 12+log(O/H) $\approx$ 8.47 at R=15kpc. The corresponding gaseous abundances (following the adopted radial gradient, @2016ApJ...830....4C) are 12+log(O/H) = 8.716 at R=0, and 12+log(O/H) = 8.32 at R=15 kpc. The derived stellar metallicity values are smaller than the corresponding gas metallicity in the inner region and slightly higher at R=15 kpc, but still within the uncertainties of 0.2 to 0.3 dex reported [@2013ApJ...769..127L]. The stellar metallicity radial profile, and corresponding uncertainty band, finally adopted for M101 is shown in the bottom-left panel of Fig. \[fig:SD\_profiles\]. For NGC628 we have adopted the stellar metallicity results of @2014MNRAS.437.1534S. These authors performed a detailed spectroscopic population synthesis fitting to the PINGS integral field spectroscopy data of this galaxy from @2011MNRAS.415.2439R. Spatially integrated spectra of different regions of this galaxy, defined via a Voronoi-tessellation binning scheme, were fitted. The radial profile of mass-weighted metallicity for R < 0.5${\rm R_{\rm 25}}$ of NGC628, as derived by @2014MNRAS.437.1534S, has been adopted in this work, and is presented in the bottom-right panel of Fig. \[fig:SD\_profiles\].

Gas masses and radial profiles {#sec:gasmass}
------------------------------

We use the data from three different surveys: [H[<span style="font-variant:small-caps;">i</span>]{}]{} observations from ’The [H[<span style="font-variant:small-caps;">i</span>]{}]{} Nearby Galaxies Survey’ [@2008AJ....136.2563W] and $^{12}$CO(2-1) from HERACLES [@2009ApJ...702..352L; @2013AJ....146...19L] to derive [H[<span style="font-variant:small-caps;">i</span>]{}]{} and $\rm H_{2}$ gas masses, and far-IR observations from KINGFISH to obtain dust masses over the disc of the two galaxies. The [H[<span style="font-variant:small-caps;">i</span>]{}]{} and CO images were convolved to the spatial resolution of the 500[$\mu$m]{} image from Herschel using the dedicated kernels from @2012ApJ...756..138A, and regridded to the same pixel scale of 14. The description of the process is presented in @2013ApJ...777....5S. For the [H[<span style="font-variant:small-caps;">i</span>]{}]{}, we assume the uncertainties to be the larger of either 0.5 M$_{\odot}$/pc$^{2}$ or 10% of the measured column density, as in @2013ApJ...777....5S. The CO uncertainties are estimated from the propagation of uncertainties through the different steps involved in creating the integrated CO line map [see @2013ApJ...777....5S]. In order to derive the molecular gas mass we need to make use of the X$_{\rm CO}$ factor and the line ratio R$_{21}$ =(2–1)/(1–0). For R$_{21}$ we use a value of 0.7, following @2013ApJ...777....5S. We assume X$_{\rm CO}$ to vary over the disc following the metallicity gradient as X$_{\rm CO}\propto(Z)^{-1}$. The values of X$_{\rm CO}$ derived here agree in general with those obtained by @2013ApJ...777....5S for both galaxies. A factor of 1.37 is applied to the molecular gas masses to take into account the He contribution. Radial profiles of [H[<span style="font-variant:small-caps;">i</span>]{}]{}, $\rm H_{2}$, and total gas mass for M101 and NGC628 are presented in the left and right panels of Fig.
\[fig:SD\_profiles\], respectively. In the top panels we see that while NGC628 presents CO emission up to a radius of $\rm\sim R_{25}$, the M101 CO emission is restricted to the central $\rm R\leq0.7\,R_{25}$. In order to study how the adopted X$_{\rm CO}$ factor can affect the derivation of the molecular gas mass we have also applied a constant X$_{\rm CO}$ factor for the entire discs, scaled to the central metallicity of each galaxy with X$_{\rm CO}\propto(Z)^{-1}$. The radial profiles of the molecular gas mass obtained when using a constant X$_{\rm CO}$ across the disc do not differ significantly from those presented in the top panel of Fig. \[fig:SD\_profiles\].

![image](fig2a.pdf){width="30.00000%"} ![image](fig2b.pdf){width="30.00000%"}

Stellar mass maps
-----------------

We obtain the stellar mass map from the S4G survey, as derived in @2015ApJS..219....5Q. These authors apply a methodology that allows the separation of the stellar emission from the dust emission using the 3.6[$\mu$m]{} and 4.5[$\mu$m]{} bands from [[*Spitzer*]{}]{}. The detailed method followed by these authors to decontaminate the 3.6[$\mu$m]{} band from hot dust emission is based on the expected colours of the different sources emitting in these bands. The final surface density at 3.6[$\mu$m]{}, in units of MJy/sr and decontaminated from dust emission, is transformed into stellar mass surface density following Eq.6 in @2015ApJS..219....5Q. In this equation a single mass-to-light ratio of M/L=0.6, using a Chabrier IMF, is assumed. Radial stellar profiles for M101 and NGC628 are presented in the second panel of Fig. \[fig:SD\_profiles\]. No decontamination of the bulge has been done. Both galaxies belong to the S4G survey and were classified as objects hosting a pseudobulge: some contamination from the bulge can be expected, though limited to the inner R$\leq$0.1$\rm R_{25}$ [@2015ApJS..219....4S].

Results {#sec:results}
=======

Chemical evolution scheme and effective yield profiles {#sec:chemevol}
------------------------------------------------------

For the purpose of this work, we have used the scheme of the [*simple model*]{} (SM) [e.g. @1975MNRAS.172...13P; @1990MNRAS.246..678E] as a guide to characterize the chemical evolution of our galaxies (although it is well known that it cannot provide a complete description of a chemical system). In the SM the system starts with an initial pristine mass of gas and no stars, and metallicity is driven by star formation. The SM assumptions include instantaneous recycling and complete mixing of the ISM, as well as a constant stellar yield and IMF. The well-known solution for the SM $$\rm Z_{gas} = y \, ln (\mu^{-1})$$ relates $\rm Z_{gas}$, the metallicity of the gas expressed in mass fraction of metals, and $\mu$ = $\rm M_{gas}/(M_{gas} + M_{star})$, the gas mass fraction, for the corresponding gas mass, $\rm M_{gas}$, and stellar mass, $\rm M_{star}$, with y being the true yield of the stellar metal production (see below). A convenient measure of the efficiency of gas enrichment in heavy elements is the “effective yield”, $\rm y_{eff}$ [@1990MNRAS.246..678E; @2007ApJ...658..941D], defined as $$\rm y_{eff} = Z_{gas}/ln (\mu^{-1})$$ This is a useful indicator that measures how much the metallicity of a galaxy deviates from what would be expected for a system with the same gas mass fraction, but that has evolved in a closed-box SM framework, i.e. with no flows of gas (inflow or outflow) allowed.
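To make the bookkeeping explicit, the following minimal Python sketch (our own illustrative code, not part of any released pipeline) evaluates Eqs. (1)-(2) for ring-averaged quantities, using the oxygen mass fraction conversion $\rm Z_{gas}$=11.81(O/H) adopted in this work; the numbers in the usage example are hypothetical.

```python
# Minimal sketch of the effective-yield calculation (Eqs. 1-2).
# Surface densities are assumed to be ring averages in Msun/pc^2.
import numpy as np

def oxygen_mass_fraction(logOH_12):
    """Z_gas = 11.81 (O/H), with 12+log(O/H) as input."""
    return 11.81 * 10.0 ** (np.asarray(logOH_12, dtype=float) - 12.0)

def effective_yield(logOH_12, sigma_gas, sigma_star):
    """y_eff = Z_gas / ln(1/mu), with mu = Sigma_gas / (Sigma_gas + Sigma_star)."""
    mu = sigma_gas / (sigma_gas + sigma_star)
    return oxygen_mass_fraction(logOH_12) / np.log(1.0 / mu)

if __name__ == "__main__":
    # Hypothetical ring values, not measurements from this work.
    y_eff = effective_yield(logOH_12=8.5, sigma_gas=10.0, sigma_star=60.0)
    print(f"log y_eff = {np.log10(y_eff):.2f}")
```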
Quoting @1990MNRAS.246..678E, the effective yield $\rm y_{eff}$ of a chemical system is the metal yield that would be derived if that system were assumed to be a closed-box SM. We have derived the gas mass fraction and stellar mass radial profiles for M101 and NGC628 and have applied the SM framework to obtain their spatially resolved effective yield, $\rm y_{eff}$. All the relevant quantities were derived in a spatially resolved way for both galaxies. Here $\rm Z_{gas}$ has been obtained directly from the oxygen abundance, 12+log(O/H), and throughout this work we refer to this quantity as the mass fraction of oxygen, calculated as $\rm Z_{gas}$=11.81(O/H). The profiles of the gas mass fraction, $\mu$, were obtained using the corresponding radial profiles of both galaxies derived in Section\[sec:gasmass\]. From the theoretical side, the metal yield integrated for a single generation of stars, $\rm p$, can be obtained as the IMF mass weighted integral of the stellar yield predicted by theoretical nucleosynthesis models for each star of given mass; in this way the integrated yield of a given element for a single stellar population represents the total amount of the newly produced element (e.g. oxygen) by this population. Since very low mass stars (below 0.8M$_{\odot}$) and stellar remnants, in fact, would not contribute matter to the ISM, we must define the “return fraction” $\rm R$, the fraction of the total stellar mass of a stellar generation which is restored to the ISM (modulo the assumed IMF). With these definitions the “true” yield can be derived as $$\rm y = p/(1-R)$$ where the quantity $\rm 1-R$ represents the fraction of mass that is locked up in stars. Different values of the yield can be obtained depending on the IMF and stellar nucleosynthesis prescriptions, for the same metallicity range; in the case of oxygen the yield appears strongly dependent on the adopted set of nucleosynthesis prescriptions and on the IMF. In the work of @2016MNRAS.455.4183V the oxygen yield was computed for a set of three IMFs [@1955ApJ...121..161S; @2002Sci...295...82K; @2003PASP..115..763C] and a set of stellar yields; they found the important result that the oxygen stellar yield is approximately independent of metallicity and that it has a strong dependence on the assumed IMF. Taking $p$=0.007 from @2016MNRAS.455.4183V as the lower limit for the integrated yield of oxygen (assuming no rotation and one third solar metallicity), with $\rm R$=0.3, leads to a true yield $\rm y$= 0.01 for the metallicity range 0.005 $\le$ Z $\le$ 0.02. Recent work by @2018MNRAS.tmp..302P included stellar rotation velocity (0, 150 and 300 km/s), finding that the average true yield could be increased by a factor $\sim$2 (for a Kroupa IMF, 13 to 120M$_{\odot}$). The comprehensive works by @2015MNRAS.451.3693M and, more recently, Moll[á]{} et al (2018, in preparation) carried out extensive computations of true yields (including oxygen) for a large set comprising 6 IMFs plus 6 sets of nucleosynthesis prescriptions for massive stars and 4 for low-intermediate mass stars; for each of these possible combinations the yields were computed for 7 metallicities ($\rm Z$ =0.000, 0.0001, 0.0004, 0.004, 0.008, 0.02, 0.05). Examination of their results gives reasonable $\rm y$ values for oxygen typically between $\sim$ 0.002 and $\sim$ 0.01 (i.e. from log$\rm y$$\sim$ -2.7 to log$\rm y$$\sim$ -2), with no dependence on metallicity.
It is clear that assuming different combinations of stellar nucleosynthesis prescriptions, IMF and metallicity leads to a range for the theoretically predicted values of the oxygen true yield. Besides, we should bear in mind that models must assume an effective mass limit for a massive star (e.g. $\ge$ 40 M$_\odot$) to leave a black hole remnant, potentially locking up a significant part of the nucleosynthesis products. From the empirical side, derivations of the oxygen yield can be obtained by measuring the maximum observed (plateau) oxygen abundance in spiral discs, which is expected to occur towards the central parts of the most luminous galaxies [e.g. @2007MNRAS.376..353P]; these authors find an empirical oxygen yield of $\rm y$ = 0.0035[^3]; they corrected for a fraction (0.08 dex) of the total oxygen assumed to be incorporated into dust grains. Similar empirical values for the oxygen yield have been obtained elsewhere, and the value $\rm y$ = 0.004 (i.e. log$\rm y$= -2.4) was adopted in the well known work of @2007ApJ...658..941D. An interesting complementary empirical derivation was performed by @2015MNRAS.450..342K based on the observed local abundance gradient of OB stars in the Galaxy, which they calibrated with the help of a chemical evolution model. This approach led the authors to derive an oxygen yield $\rm y$ = 0.00313, assuming $\rm R$ = 0.4, a value in good agreement with the determinations mentioned before, considering the differences in metallicity data source and methods used. In summary, typical empirical estimates of the oxygen yield $\rm y$ cluster between 0.003 and 0.004 (i.e. between log$\rm y$$\sim$ -2.5 and -2.4). Taking into account both the empirical derivations and the theoretical predictions of the true yield, and their uncertainties as illustrated before, we relax the expected value of the true yield of oxygen to within the reference range -2.6 $\le$ log $\rm y$ $\le$ -2.2. The radial profiles of the effective yield have been derived for M101 and NGC628 and are shown in Fig. \[fig:yeff\]. In order to correct the yields for the amount of oxygen depleted onto dust grains, a depletion factor of 0.12dex [@2010ApJ...724..791P] has been adopted; all the effective yield values derived here have been corrected by adding this factor. We can see that all the effective yield values derived here for both galaxies show log $\rm y_{eff}$ $\le$ -2.2, below the upper face value of the range adopted for the true yield. For all the radii of NGC 628 the effective yield derived is within this reference band. This is also the case for the major part of M101 ($\rm R/R_{25} \leq$ 0.7), which shows values consistent within the errors with the reference band, whereas the outer points ($\rm R/R_{25} \geq$ 0.8) of M101 present smaller effective yield values, clearly departing from the SM picture. This fact illustrates how the ’closed box model’ cannot provide a good description of the chemical evolution of the entire M101 disc for a reasonable expected value of the yield, since the outermost points of its profile strongly suggest the presence of gas flows. Following @2007ApJ...658..941D the effect of gas inflow is expected to be small, whereas for even moderate gas outflows a substantial reduction of the effective yield could be observed. Besides that, the N/O radial gradient measured for M101 looks very well delineated and a small scatter is shown in the N/O versus O/H plot [@2016ApJ...830....4C], a feature that argues against the presence of massive (external) gas inflow.
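For concreteness, the comparison of the derived effective yields with the reference band can be written as a small helper (an illustrative sketch only; the 0.12 dex depletion correction and the band limits are the values adopted above).

```python
# Illustrative check of an effective-yield value against the adopted
# true-yield reference band, after adding the 0.12 dex correction for
# oxygen depleted onto dust grains.
import numpy as np

O_DEPLETION_DEX = 0.12        # depletion correction adopted in the text
LOG_Y_BAND = (-2.6, -2.2)     # reference band for the true oxygen yield

def consistent_with_simple_model(y_eff):
    """True if the depletion-corrected log y_eff lies within the band."""
    log_yeff_corrected = np.log10(y_eff) + O_DEPLETION_DEX
    return LOG_Y_BAND[0] <= log_yeff_corrected <= LOG_Y_BAND[1]
```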
In the case of NGC628, assuming plausible values of the expected true yield we can accommodate all the observations, thus rendering unnecessary any substantial contribution from gas flows, according to the derived effective yield profile. Nonetheless, we should bear in mind that a moderate unenriched gas infall cannot be discarded, given its minimal (hardly detectable) effect on the effective yield [e.g. @2007ApJ...658..941D], as mentioned before. As we can see in Fig. \[fig:logOH\_loglnmu\], the oxygen abundance appears well correlated with ln$(\mu^{-1})$, the natural log of the inverse gas fraction, for the two galaxies, illustrating the locus of the SM of chemical evolution. However for M101, for radii R$\geq$0.7$\rm R_{25}$ the observed correlation between oxygen abundance and inverse gas fraction is lost, showing that the ’closed box model’ does not seem suitable for the outer parts of this galaxy. A similar figure for a large sample of low metallicity galaxies has been presented recently [@2017MNRAS.471.1743D]. We believe the results obtained are relevant for understanding the behaviour of the metal and dust radial profiles derived for both galaxies, as discussed later in this work.

![Logarithmic $\rm y_{eff}$ versus galactocentric radius for M101 (circles) and NGC628 (squares). The grey area corresponds to -2.6 $\le$ log y $\le$ -2.2, the reference band adopted in this work enclosing the expected uncertainties in the derivation of the theoretical true yield. The amount of oxygen depleted onto dust grains has been taken into account assuming a depletion factor of 0.12dex [@2010ApJ...724..791P]. See the text for details.[]{data-label="fig:yeff"}](fig3.pdf){width="47.00000%"}

![Log-log plot of the oxygen abundance versus $ln(\mu^{-1})$ across M101 (circles) and NGC628 (squares) for each $0.1\rm R_{25}$ fractional radius. Points are colour-coded according to the galactocentric radius. A constant effective yield should translate into a constant slope in the plot.[]{data-label="fig:logOH_loglnmu"}](fig4.pdf){width="47.00000%"}

Spatially resolved chemical budget
----------------------------------

The spatially resolved oxygen budget has been computed across the radial profiles of M101 and NGC628. To do so we first calculate the total amount of oxygen measured along the disc, adding the oxygen content in the stars and in the gas, as well as the correction due to the expected amount of oxygen in dust grains. For each galaxy, the oxygen content in the stars is derived radially as the product of the stellar mass profile of each galaxy times its corresponding radial profile of stellar oxygen abundance, in mass fraction, as presented in Sect. \[sec:data\]. As for the gas, the total oxygen content has been computed for each galaxy as the product -at each radius- of its radial gradient of oxygen abundance of the ionised gas, in mass fraction, times the radial profile of the total gas mass (neutral and molecular; i.e. ionised gas has been neglected). The final oxygen budget has been refined by adding a correction accounting for the fraction of oxygen expected to be in dust grains, computed from the expected depletion factor of oxygen in dust grains in [H[<span style="font-variant:small-caps;">ii</span>]{}]{} regions of $\sim$ 0.12 dex [e.g. @2010ApJ...724..791P]. The budget of the “measured” oxygen for each galaxy -’total oxygen’ in the plots- can be compared, at each radius, with the expected production of oxygen that has been delivered from stellar nucleosynthesis.
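The 'measured' side of this budget can be sketched compactly as follows (illustrative Python with our own variable names; the stellar and gas-phase oxygen mass fractions and surface densities are assumed to be the ring-averaged profiles described above, and the 0.12 dex term is the dust-depletion correction).

```python
# Illustrative sketch of the spatially resolved "measured" oxygen budget:
# oxygen locked in stars plus oxygen in the gas, with the gas-phase term
# scaled up by 0.12 dex to account for oxygen depleted onto dust grains.
import numpy as np

def measured_oxygen_budget(sigma_star, z_star_oxygen, sigma_gas, logOH_12,
                           depletion_dex=0.12):
    """Total measured oxygen surface density (same units as the inputs)."""
    z_gas = 11.81 * 10.0 ** (np.asarray(logOH_12, dtype=float) - 12.0)
    oxygen_in_stars = np.asarray(sigma_star) * np.asarray(z_star_oxygen)
    oxygen_in_gas_and_dust = np.asarray(sigma_gas) * z_gas * 10.0 ** depletion_dex
    return oxygen_in_stars + oxygen_in_gas_and_dust
```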
This expected stellar production has been calculated at each radius as the product of the (IMF averaged) true yield of oxygen adopted -see the previous section- times the stellar mass radial profile. We show the results in Fig. \[fig:Obudget\] for M101 (left panel) and for NGC628 (right panel). The production of oxygen expected from the stars is presented for two values of the true oxygen yield, log $\rm y$ = -2.2 ($\sim$ solar oxygen) and log $\rm y$ = -2.5 ($\sim$ half solar), for illustrative purposes, encompassing the values of a useful comparison band according to theoretical and empirical predictions.

![image](fig5a.pdf){width="47.00000%"} ![image](fig5b.pdf){width="47.00000%"}

The first result from the figures is that, overall, the radial oxygen budget appears consistent with the observations of both galaxies for the range of values of the true yield adopted. Nonetheless, there are some differences between the two galaxies in their oxygen budget comparison. In the case of M101, the total oxygen measured is always in between the theoretical estimates of the expected production for the two yields adopted, with the largest contribution to the total measured oxygen coming from the stars, whereas the ISM contribution is nearly an order of magnitude smaller in the inner parts but increases towards the outermost regions of the galaxy. We should bear in mind that we have shown how these outermost regions of M101 diverge from the closed box model scenario, and we can see in the plot how the total oxygen for these regions (R$\geq$0.8$\rm R_{25}$) appears closer to the expected production corresponding to the lower true yield adopted (blue points). In the case of NGC628, the situation appears qualitatively similar, though this galaxy shows a bulk metallicity always higher than that of M101 at each fractional radius. However, we can see how the expected oxygen production calculated for the higher true yield adopted (log $\rm y$ = -2.2) is above the measured total oxygen. When a true oxygen yield of log $\rm y$ = -2.5 is assumed, the total oxygen measured agrees well with the expected stellar production within the errors for all except one point[^4]. Of course, if a higher value of the oxygen true yield were assumed then a correspondingly larger oxygen production would be predicted and, consequently, a deficit of oxygen would be deduced for the galaxy. The radial chemical budget of the inner disc (up to $\sim$ 0.6$\rm R_{25}$) of NGC628 has been studied in a recent paper by @2016MNRAS.455.1218B, who find that around 50 $\%$ of the metals of the disc were lost and that an episode of late-time metal-enriched gas accretion was present. These results could be reconciled with our findings considering that the oxygen abundances they used are substantially higher and were derived from bright line abundance calibrations (we have used direct oxygen abundances). Also, their assumed true yield of oxygen was higher -by 0.2 dex- than the upper boundary (log $\rm y$ = -2.2) of the band for the true yield of oxygen adopted here.

Derivation of dust mass {#sec:dustmass}
-----------------------

We derive the dust mass in each pixel of M101 and NGC628 using the same methodology as described in previous work. We explain here the main steps of the methodology and refer the reader to that work for further technical details. We apply the dust model adopted there and assume that the spectral shape of the ISRF[^5] is that of @Mathis:1983p593 to fit the observed SED in each pixel of the galaxy disc.
The dust model consists of three dust grain types: polycyclic aromatic hydrocarbons (PAHs), very small grains (VSGs) of carbonaceous material, and big grains (BGs) of astronomical silicates. In order to perform the fit of each individual SED, we first create a library of models with different values of $\rm Y_{i}$ ($\rm Y_{i}=M_{i}/M_{H}$, for i = PAH, VSG, and BG), $\rm G_{0}$ (the scaling factor relative to the solar neighbourhood ISRF given in @Mathis:1983p593), and $G_{\rm NIR}$ (the scale factor of the NIR 1000K black body continuum). Then, we convolve the SEDs of each dust model in the library with the corresponding filter bandpass of our observations to obtain the fluxes in each band for each library model. From the probability density function (PDF) generated in the fit we retrieve the best fit parameter values and the corresponding uncertainties. We take the mean of the PDF as the best parameter value and the 16th-84th percentile range as an estimate of its uncertainty. With the best fit parameters we can derive the dust mass in each pixel of the galaxy as: $\rm M_{dust} = M_{PAH} + M_{VSG} + M_{BG}$. In the fit procedure we have taken only the SEDs with reliable flux measurements in all bands: only pixels with fluxes above 3$\sigma$ in all the bands were taken into account. Finally, we eliminate all the fits with [$\chi^{2}_{\rm red}$]{}$\geq$4.0 and [$\chi^{2}_{\rm red}$]{}$\geq$2.0 for M101 and NGC628, respectively, in order to obtain reliable dust masses. The relative uncertainties for the dust masses in the final set of pixels are within 10-30% for both galaxies. The dust masses for NGC628 agree with those derived by @2012ApJ...756..138A when the difference between the extinction coefficients adopted here and those of @2007ApJ...657..810D is taken into account. In Fig. \[fig:Mdustmap\] we show the dust surface density map for M101 (left) and NGC628 (right). We see that the maps show internal structure related to the main star-forming complexes within the galaxies, and in both objects we are able to derive dust masses out to radii close to $\rm R_{25}$. In any case, care should be exercised when studying the outer regions of the spiral discs since very low surface brightness, diffuse components of gas and dust could still be present; they could either simply remain undetected due to their low signal to noise (below the detection threshold), or may represent only a tiny contribution in the derivation of radial profiles of the integrated properties. In Fig. \[fig:SDdust\_profile\] we show the radial profiles of the dust mass surface density maps for M101 and NGC628. The dust mass radial profile for NGC628 has previously been derived by @2009ApJ...701.1965M using only [[*Spitzer*]{}]{} bands (i.e. up to 160[$\mu$m]{}). They find higher dust masses in general and a steeper radial profile than found here. Our dust mass estimates are more reliable as they are obtained with data that cover the whole peak of the IR SED.

![image](fig6a.pdf){width="47.00000%"} ![image](fig6b.pdf){width="47.00000%"}

![Dust surface density, $\rm\Sigma_{Dust}$, versus radius for M101 (circles) and NGC628 (squares). As in Fig. \[fig:SD\_profiles\], for each data point the mean and the uncertainty of the mean are presented for each elliptical ring. The uncertainties in the mean values, when not shown, are generally smaller than the points. The shaded area corresponds to the standard deviation of the values within each ring.
[]{data-label="fig:SDdust_profile"}](fig7.pdf){width="47.00000%"} Derivation of the gas to dust ratio ----------------------------------- The gas-to-dust mass ratio (GDR) map for each galaxy is obtained from the total gas mass and the dust mass maps derived in Sections\[sec:gasmass\] and \[sec:dustmass\]. In Fig. \[fig:GDRmaps\] we show the GDR maps for M101 (left) and NGC628 (right). While the GDR of NGC628 seems to be relatively constant across the disc of the galaxy, the GDR of M101 shows a significant increase towards the outer parts of the galaxy. A radial profile of the GDR map for each galaxy presented in Fig. \[fig:GDR\_profile\] shows the different behaviour with galactocentric radii for both galaxies. The GDR of NGC628 increases very smoothly with galactocentric radius, while the GDR of M101 shows a significant increase towards the outer parts of the galaxy. The values of the GDR for both galaxies agree with the values reported by @2013ApJ...777....5S after taking into account the different extinction coefficients to derive the dust masses. These authors also found significant high values of GDR in the outer parts of M101, while for NGC628 the GDR map they present is more smooth. The GDR radial profile of M101 presented here also agrees with the one presented in @2018ApJ...865..117C. ![image](fig8a.pdf){width="47.00000%"} ![image](fig8b.pdf){width="47.00000%"} ![GDR radial profiles for M101 and NGC628 obtained with the dust masses derived in this paper. Each data point represents the mean and the uncertainty of the mean for each elliptical ring. The uncertainties in the mean values are generally smaller, when not shown, than the points. The shaded area corresponds to the standard deviation of the values within each ring.[]{data-label="fig:GDR_profile"}](fig9.pdf){width="47.00000%"} Discussion {#sec:discussion} ========== GDR relation with metallicity and molecular content --------------------------------------------------- In order to study the chemical evolution and the dust content of M101 and NGC628 we have derived the radial profiles of the total gas mass and stellar mass of the galaxies extracted from the 2D data, as well as the maps of the dust mass and of the GDR (Figs. \[fig:Mdustmap\] and \[fig:GDRmaps\], respectively). These maps show a clear radial trend and also considerable spatial structure going from the inner parts to the outer disc. In both cases the outermost ring beyond 0.8$\rm R_{25}$ appears less populated with emission regions, though some big star-forming complexes (like NGC5471 in M101) are still present as well as more sparse emission. The results obtained for both galaxies have been derived in a consistent way so as to be compared independent of the precise assumptions adopted (e.g. derivation of the molecular gas and of dust components). In Fig. \[fig:GDR\_profile\] we can see how the two galaxies show quite different radial profiles of the GDR. Whereas for NGC628 the profile appears nearly flat moving between GDR$\sim$200 to 400 for the whole range of galactic radius, the case of M101 presents an apparently broken radial profile shape starting from GDR values similar to the ones derived for the inner disc of NGC628, however beyond 0.5$\rm R_{25}$ the GDR profile becomes very steep reaching well above GDR $\approx$1000 in the outermost disc region. 
The chemical evolution of both galaxies would be consistent with the SM, as judged from their radial profiles of the effective yield of oxygen, if we assume a band for the true yield between half solar and solar (-2.5$\leq$logy$\leq$-2.2) and allow for a 0.12dex correction for oxygen depletion onto dust grains. However, while this situation appears to hold for the complete disc up to the optical radius for NGC628, for M101 it applies only up to $\rm R/R_{25}$$\approx$0.7. Beyond this radius the derived profile of the oxygen effective yield of M101 suggests that a moderate gas outflow could be invoked there. A small inflow component cannot be discarded, but its effects should be hardly noticeable, especially given the higher gas fraction already present in the outer disc [e.g. @2007ApJ...658..941D]. Massive inflow does seem less likely judging from the well defined N/O versus O/H relation observed in this galaxy[^6]. These results qualitatively agree with previous findings in the literature [@2015MNRAS.450..342K; @2015AJ....150..192Z], though in the case of NGC628 an important loss of metals has been proposed, especially for the inner regions [@2016MNRAS.455.1218B]. Nonetheless, we should bear in mind that these previous works have used oxygen abundances derived with bright line abundance calibrations that can suffer from well-known important uncertainties (see Sect. \[sec:data\]). The radial GDR profile and the spatially resolved effective yield are compared across both galaxies, sampling one dex range in O/H. Here we empirically present the relationship between the gas to dust mass ratio and the chemical abundance obtained across both galaxies. In the left panel of Fig. \[fig:GDR\_metal\] we show the behaviour of the GDR versus the oxygen abundance, 12+log(O/H), for M101 (circles) and NGC628 (squares). A clear two-slope behaviour is seen, with a break at 12+log(O/H)$\approx$8.4; below this oxygen abundance a steep negative slope ($\Delta$ logGDR/$\Delta$ log(O/H)$\approx$ -1.3) is present, mostly defined by the external (metal poor) points of the disc of M101. For higher metallicities the slope is still negative but much shallower, and it seems similar for both galaxies to within the errors. The observational trend shown here appears consistent with that found previously for a sample of galaxies, for which a broken power law was presented to describe the observational behaviour of the GDR versus a single value of 12+log(O/H). The break at 8.10$\pm$0.43 found there appears consistent, within the errors, with the break at 12+log(O/H)$\approx$8.4 found in this work. A break was found both when a constant Galactic X$_{\rm CO}$ factor was used and when an X$_{\rm CO}$ factor scaled with the metallicity of the galaxy was applied to each object. Here we have used a variable X$_{\rm CO}$ factor that scales with the metallicity (measured) at each galactocentric radius across the galaxy. We have also derived the GDR using a constant X$_{\rm CO}$ factor for the whole disc of each galaxy, scaled with its central metallicity (see Sect. \[sec:gasmass\]). In this case, the radial variations of the GDR obtained for both galaxies show the same trend as in Fig. \[fig:GDR\_profile\]. Our results are in good agreement with the findings of recent detailed work across the M33 disc. In the left panel of Fig. \[fig:GDR\_metal\] we also show theoretical predictions which define a critical metallicity highlighting this change in slope.
We can see that our observations agree with these chemical evolution models for star formation time scales of 0.5-5Gyr. In the right panel of Fig. \[fig:GDR\_metal\] the derived GDRs for M101 (circles) and NGC628 (squares) are shown against the corresponding log(N/O) ratios across their discs. A strong correlation is present for both galaxies within the metallicity range typically associated with secondary nitrogen production, i.e. for log(N/O) $\geq$ -1.4 (above this N/O value, the break in GDR versus oxygen abundance -as seen in the left panel- corresponds to log(N/O) $\sim$ -1.1), whereas no correlation is seen below this N/O value, where all the points share the same average N/O, characteristic of primary nitrogen production at low metallicity.

![image](fig10a.pdf){width="47.00000%"} ![image](fig10b.pdf){width="47.00000%"}

The effects of the different chemical evolution and the spatial profiles of the oxygen effective yield derived for both galaxies could be translated into an [*effective dust yield*]{} for stellar dust production and into the dust growth component of the metal-rich ISM, especially in the inner galaxy. In this scenario, the largest GDR values obtained in this work have been derived for the outer disc of M101, where the largest deviations from the SM yield are measured. In Fig. \[fig:GDR\_yeff\] we show the GDR derived across both galaxies versus the corresponding values of the effective yield of oxygen. We can see how the lower values of the GDR derived for M101 and the GDR for the entire disc of NGC628, all between 200 < GDR < 500, correspond to those regions of the galaxies presenting an effective yield broadly consistent with the SM of chemical evolution. Conversely, it is for the lowest values of the effective yield that we see the highest GDR values, derived for the outermost disc of M101.

![GDR versus the logarithm of the effective yield for M101 (circles) and NGC628 (squares).[]{data-label="fig:GDR_yeff"}](fig11.pdf){width="47.00000%"}

In the framework of these models, a two-slope behaviour is predicted for the GDR against 12+log(O/H); the models suggest the existence of a critical metallicity, at which stardust production equals grain growth, for which a change in the slope of GDR against metallicity should be observed. Asano’s models predict the slope to be steeper in the critical metallicity regime (0.05$\leq$$\rm Z/Z_{\odot}$$\leq$0.3) defined in @2017arXiv171107434G, as observed here. Hints of an overall behaviour of GDR versus metallicity have been shown in previous studies using diverse samples of galaxies, with a somewhat large scatter probably mostly associated with the different sample definitions and the diverse origin of the individual objects included [see e.g. @2017arXiv171107434G]; these two branches have also been observed for M33. More recent models have reproduced this relation of GDR versus metallicity, adding the effects of episodic star formation histories on the ISM dust growth. The growth of dust grains in the ISM is needed to balance the overall dust budget, adding to the stardust production and compensating for the dust sink of grain destruction, which is believed to be associated with SN blast waves (and likely also with hard radiation fields), while also allowing for the possible net dust loss linked to gas outflows.
Within the above-mentioned theoretical framework for the evolution of the dust content, the accretion time for grain growth is expected to be proportional to the inverse of the metallicity and to the inverse of the fraction of the cold (molecular) component of the ISM. We have defined a proxy to trace this fraction of the cold ISM component using the molecular to total gas mass ratio, $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$, across the galaxies. In Fig. \[fig:GDR\_fracmolgas\] we present the behaviour of the derived GDR against the cold ISM fraction mapped by $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ across both galaxies. The colour-coded points help us follow their radial positions across the discs. The GDR shows a close relation with $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ which, in this case, seems complementary to what is shown in Fig. \[fig:GDR\_metal\]; i.e. it is now the lower GDR values (GDR$\lesssim$400) that appear very well sampled and extended along the $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ axis. Both galaxies are seen to share a strong correlation with a shallow negative slope above $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$$\approx$0.1. For the regions with a very low molecular fraction, mainly populating the outer disc of M101, the GDR versus $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ slope is nearly vertical in the plot, indicating that the high GDR values of these regions do not depend on ISM grain growth. The observed trend of GDR versus $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ has been linked to grain growth, which should take place in regions of the ISM of intermediate to high metallicity [@2017arXiv171107434G]. This can be interpreted as a direct (causal) effect, but a complementary scenario could also be invoked in which a significant amount of molecular hydrogen forms on the surfaces of the dust grains; thus, an enhanced relative $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ content should be found associated with those regions with more dust grains and high dust surface density.

![The behaviour of the gas to dust ratio GDR against the cold ISM fraction as mapped by $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$. Points are colour coded for M101 (circles), and NGC628 (squares).[]{data-label="fig:GDR_fracmolgas"}](fig12.pdf){width="50.00000%"}

We have empirically unveiled here a twofold behaviour for the GDR: the gas-to-dust ratio is seen to correlate with metallicity (12+log(O/H)), and also with the molecular fraction of the total gas content. A clear change in the observed slopes is seen for: a) 12+log(O/H)$\sim$8.3 to 8.4, the critical metallicity predicted in Asano’s models; this value appears to be similar for the two discs studied here, though for NGC628 the points do not enter the steep slope regime delineated clearly by the M101 outer points; and b) $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$$\sim$0.2, above which the points from both M101 and NGC628 sample the observed correlation, though NGC628 clearly dominates in this regime. For higher oxygen abundances, up to 12+log(O/H) $\approx$ 8.8 ($\sim$1.4 solar), the negative slope of the GDR versus O/H relation is much flatter than that of the lower abundance branch, which reaches down to 12+log(O/H) = 7.9.
In contrast, the GDR is nearly insensitive to the molecular gas fraction below $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$$\sim$0.2, whereas a clear and definite correlation of the GDR with $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ can be appreciated for the regions located above this value (see Figs. \[fig:GDR\_metal\] and \[fig:GDR\_fracmolgas\]). Therefore we propose an empirical relationship between the GDR and the combination of these two parameters, 12+log(O/H) for metallicity and $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ as a proxy for the relative content of cold molecular clouds; we believe these relations can better describe the complex behaviour of the GDR over the entire range of variations observed here. Empirical fits for these relations have been derived as follows:

- for 12+log(O/H) $\geq$ 8.3 and log(M$\rm_{H_{2}}$/M$_{\rm gas}$) $\geq$ -0.05: $$\begin{aligned} \rm log(GDR) = (-0.101\pm0.183)\times (12+log(O/H)) + \nonumber \\ \rm (-0.199\pm0.094)\times log(\Sigma_{H_{2}}/\Sigma_{gas}) + (3.299\pm1.600) \nonumber\end{aligned}$$

- when 12+log(O/H) < 8.3 (i.e. from the M101 points): $$\begin{aligned} \rm log(GDR) = (-1.332\pm0.048)\times (12+log(O/H)) + \nonumber \\ (13.718\pm0.396)\end{aligned}$$

The relation between GDR and 12+log(O/H) at low metallicity depends on the star formation time scale, showing substantial scatter for a given O/H, as seen in the observations, and, according to theoretical models, it also depends on the critical metallicity. As indicated before, for oxygen abundances 12+log(O/H) < 8.3, Eq.4 has been obtained from a fit to the M101 points only. For M101 the critical metallicity inferred here is consistent with the value found for NGC628, to within the errors, and similar to the one derived for M33 in previous work, which presented a spatially resolved GDR versus 12+log(O/H) relation in line with our findings. Bearing in mind the small number of low metallicity discs studied so far with spatially resolved data, we would expect comparable values of the critical metallicity to be derived for those galaxies for which M101, NGC628 and M33 can be seen as representative objects (i.e. sharing similar star formation activity, star formation time scales -few Gyr-, chemical abundances and related properties). For these galaxies, the relation between GDR and 12+log(O/H) in the low metallicity range, Eq.4, can serve as a guide, though with large uncertainty until more spatially resolved studies are available. Fig. \[fig:GDR\_fracmolgas\] illustrates very well the shallow slope found here for the GDR versus molecular fraction relation, and also the small role ($\sim$30$\%$ change) played by the oxygen abundance in the fit for the inner galactic regions presenting higher molecular fraction.
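A direct implementation of these empirical fits is sketched below (illustrative Python; the simple split at the metallicity break is ours, and the formal validity ranges quoted above should be kept in mind). Only the central values of the fitted coefficients are used.

```python
# Illustrative implementation of the empirical GDR fits (Eqs. 3 and 4);
# only the central values of the fitted coefficients are used here.
import numpy as np

def log_gdr(logOH_12, mol_fraction):
    """Predict log(GDR) from 12+log(O/H) and Sigma_H2/Sigma_gas."""
    logOH_12 = np.asarray(logOH_12, dtype=float)
    mol_fraction = np.asarray(mol_fraction, dtype=float)
    high_z = -0.101 * logOH_12 - 0.199 * np.log10(mol_fraction) + 3.299  # Eq. 3
    low_z = -1.332 * logOH_12 + 13.718                                   # Eq. 4
    return np.where(logOH_12 >= 8.3, high_z, low_z)

if __name__ == "__main__":
    # Hypothetical inner (molecule-rich) and outer (metal-poor) regions.
    print(10 ** log_gdr(8.6, 0.5))
    print(10 ** log_gdr(8.0, 0.01))
```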
Stellar dust production
-----------------------

At this stage we can use the predictions from chemical evolution models, including metals and dust production/destruction in galaxies [e.g. @2012MNRAS.423...38M; @2012MNRAS.423...26M], in order to try to understand the results obtained. As a matter of fact, we can proceed by exploring the component of stellar production of dust, which according to models could be associated with the stellar [*’dust yield’*]{}. In this respect, we have searched for a correlation between the spatially resolved dust surface density profile and the oxygen abundance spatial profile for both galaxies. In the left panel of Fig. \[fig:SDDust\_others\] we present the relation between the dust mass surface density (in M$_{\odot}$/pc$^{2}$) and the metallicity as derived across M101 and NGC628. An overall correlation can be seen, with the lowest metallicity points (i.e. the outermost regions of M101) associated with the lowest dust mass surface density, growing slowly up to 12+log(O/H)$\sim$8.3 to 8.4, where a discontinuity is apparent and above which the slope clearly increases again, though with the NGC628 points showing a systematically higher oxygen abundance by $\sim$0.15 dex; this is possibly a non negligible abundance difference, but here we should bear in mind that for NGC628, although the average fit to the oxygen abundance gradient with respect to $\rm R/R_{25}$ showed systematically higher abundances than in the case of M101, its scatter was also larger ($\sim$$\pm$0.15 dex) than for M101. In the right panel of Fig. \[fig:SDDust\_others\] the dust mass surface density is plotted against the stellar mass surface density across both galaxies. A strong and clear correlation is observed, and now all the points for both galaxies follow the same relation well, without an obvious discrepancy between the loci of the two galaxies. Again two main regions with (somewhat) different slopes are present, below and above a (small) discontinuity apparent at a dust surface density of log($\Sigma_{\rm dust}$/(M$_{\odot}$pc$^{-2}$))$\approx$-1.5 (this value corresponding to the region around the critical metallicity, as can be read from the left panel), with the steeper slope derived for log$\Sigma_{\rm dust}$ < -1.5 (in the same units) corresponding to the regions of lower metallicity and lower stellar density. This result can be expected according to chemical evolution dust models [e.g. @2012MNRAS.423...26M top panels of their Fig. 4], from which we can anticipate that, for metallicities below Z$\approx$0.01, all models covering a large range in dust growth parameter predict that the dust mass produced should always be proportional to the metallicity, for a large range (within a factor of 10$^{3}$) in the dust yield to metal yield ratio, with the absolute level of the dust mass produced being proportional to the dust yield adopted. Above Z$\approx$0.01, the role played by the dust growth parameter appears very important, and the relation between the logarithm of the dust mass and the logarithm of the metallicity becomes non-linear, with (at least) two other zones of different shape in the relation when going to higher (super-solar) metallicities, as observed in this work. In Fig. \[fig:SDDust\_others\], near the discontinuity seen in the right panel mentioned above, a (small) local maximum in the log$\rm\Sigma_{dust}$ versus log$\rm\Sigma_{star}$ relation of M101 appears, traced by the points for a value of $\rm\Sigma_{star}$$\sim$10 M$_{\odot}$/pc$^{2}$. For this local maximum the corresponding normalized radius is $\rm R/R_{25}\sim$0.55, equivalent to $\sim$7.9[$^{\prime}$]{} in the disc of M101. It is worth noting here that M101 presents a well behaved rotation curve in the inner part of the galaxy for radial distances lower than 7[$^{\prime}$]{}, whereas a distorted outer part has been revealed for radii >7[$^{\prime}$]{}. The radial distance of the observed local maximum, $\sim$7.9[$^{\prime}$]{}, lies just beyond the limit defining the outer distorted part of M101.
Interestingly enough, over the radial range 7 - 9[$^{\prime}$]{}, @2013ApJ...762...82M have found changes in the photometric profile slope of M101 for the various octants of the disc analysed. ![image](fig13a.pdf){width="47.00000%"} ![image](fig13b.pdf){width="47.00000%"} We have explored further the role of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ as a proxy for an empirical estimation of the stellar dust yield ([*stardust*]{}). $\rm\Sigma_{dust}$ is a balance between the amount of dust produced by the stars, that one destroyed by SN shocks and the amount formed by accretion in molecular clouds. In the top panel of Fig. \[fig:MdustMstar\] we show $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ versus oxygen abundance 12+log(O/H). The relation observed presents a constant -within the errors- value of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ $\approx$0.003 (close to the value of the oxygen yield derived here, see Sec.\[sec:chemevol\]) corresponding to low metallicity in the outer galaxy points. We expect that, as there is no molecular gas at these outer radii, the contribution of accretion to the dust budget is minimal, and therefore the amount of dust in this region is related to stellar dust production and destruction. This situation is holding until the critical abundance for M101 is reached. Above 12+log(O/H) $\geq$ 8.3 to 8.4 a strong correlation is seen for both galaxies with a negative slope decreasing the empirical stardust production by an order of magnitude to $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ $\approx$ 0.0003 for supersolar abundance, 12+log(O/H) $\approx$ 8.8. Again, above this value the oxygen abundances appear to be slightly larger (by $\sim$ 0.15 dex) for NGC628 than for M101. It is clear that the ratio $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ should depend on the star formation history of the system, spatially resolved across the discs of the galaxies. Since the absolute value of the total stellar mass across these galaxies can certainly be weighting the derivation of this ratio, we also have presented it against the N/O abundance ratio (see below), an intensive parameter which can be seen as a ’chemical clock’ tracing the main nitrogen star producers, of typical age of order several Gyr, compared to $\sim$1 Gyr for the assumed dust life time [@2017arXiv171107434G]. We note here that the data available do not allow to perform a detailed study in order to differentiate the two main contributors to the stellar dust: AGB stars, that inject dust in a timescale of 100-500Myr depending on the star formation history, IMF and metallicity [@2009MNRAS.397.1661V]; and SNe producing dust a few hundred years after the explosion [@2001MNRAS.325..726T; @2007MNRAS.378..973B]. The stellar population studies available in the literature show radial age profiles that vary between 1-10Gyr for NGC628 [@2011AJ....142...16Z; @2014MNRAS.437.1534S] and 2-7Gyr for M101 [@2013ApJ...769..127L]. In the middle panel of Fig. \[fig:MdustMstar\], $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ is now shown against the nitrogen to oxygen ratio, log(N/O), a proxy of the secondary nucleosynthesis production and thus an empirical ’chemical clock’. The correlation is strong now with no apparent difference in the relation between the points from both galaxies: all points sharing now the same locus on the plot. Therefore, we can say that N/O behaves as a good tracer of dust in this range -possibly better than the oxygen abundance-. We can see how for the abundance range of primary nitrogen production, i.e. 
represented here by the assumed log(N/O) $\approx$ -1.4, all (M101) points show a value of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ constant and equal to the value quoted from the previous plot. Then, for larger values of log(N/O) across both galaxies -i.e. going towards the inner parts of the disc where gas is expected to be more chemically evolved with a secondary nitrogen production- points show a very low $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ ratio. We speculate if this may suggest possible dust destruction mechanisms operating in addition to the ISM dust growth process mentioned before. However, it seems plausible that for the inner regions of these galaxies the $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ ratio may be lower as a consequence of their higher past star formation, leading there to their higher stellar mass and faster chemical enrichment as inferred from the available data. According to chemical evolution models with dust production [e.g. @2012MNRAS.423...26M] the chemical evolution of the ’stardust’ production could be described as the one for an element with primary and secondary nucleosynthesis, as it is the case for nitrogen. In this case and following the theoretical prescriptions, our simple description would indicate that the linear part at low N/O should be tracing directly primary ’stardust’ production, whereas the secondary part would be associated to a metallicity dependent dust yield, that assuming no growth nor destruction of dust in the ISM. In this scheme, for the low (primary) metallicity regime, the quotient between the dust to gas ratio and the metallicity would be equal to the ratio of the primary dust yield and the metal yield. In that case, our results above favour a value of the dust yield similar to the oxygen yield derived in this work, i.e. some 40$\%$ of the total metals yield [e.g. @2012MNRAS.423...38M]. Of course, since we here directly compared dust mass to stellar mass, we are dealing with the ’stardust’ yield part of the dust production. Clearly, the corresponding dust production associated to ISM dust growth need to be added in further work. In the bottom panel of Fig. \[fig:MdustMstar\] we have explored the behaviour of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ versus the surface density of the molecular cold gas fraction, $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$, as our empirical indicator of the galactic regions of ISM dust growth production predominance. We can see that the slope is again negative but now the range with a much shallower slope seems large, presenting a nearly similar value of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ for a substantial fraction of $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ from $\approx$0.2 up to 0.7, for both galaxies. The highest changes in $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ in this plot are seen at the two extremes of very low (high ’stardust’ production) and very high (low ’stardust’ production) $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$. Finally, it is worth mentioning that our interpretation of the empirical trends presented in this paper proposes a change of the main mechanisms contributing to the dust mass budget, from accretion at high oxygen abundances, to stellar dust production for oxygen abundance values lower than 12+log(O/H)$\sim$8.4. However, we should keep in mind that this value of a ’critical metallicity’ would agree with the metallicity at which the relative fraction to PAHs is expected to decrease from $\sim$3% to values less than 1% at lower metallicities [@2007ApJ...663..866D]. 
Further future work to study possible physical connections between the observed trends of GDR and the relative abundance of PAHs would be interesting. ![$\rm\Sigma_{dust}$ over $\rm\Sigma_{star}$ versus 12 + log(O/H) (top), log(N/O) (middle) and molecular gas mass fraction (bottom) for M101 (circles) and NGC628 (squares).[]{data-label="fig:MdustMstar"}](fig14a.pdf "fig:"){width="47.00000%"} ![$\rm\Sigma_{dust}$ over $\rm\Sigma_{star}$ versus 12 + log(O/H) (top), log(N/O) (middle) and molecular gas mass fraction (bottom) for M101 (circles) and NGC628 (squares).[]{data-label="fig:MdustMstar"}](fig14b.pdf "fig:"){width="47.00000%"} ![$\rm\Sigma_{dust}$ over $\rm\Sigma_{star}$ versus 12 + log(O/H) (top), log(N/O) (middle) and molecular gas mass fraction (bottom) for M101 (circles) and NGC628 (squares).[]{data-label="fig:MdustMstar"}](fig14c.pdf "fig:"){width="47.00000%"} Conclusions {#sec:conclusions} =========== In this work we have studied in detail the relation between dust and metallicity in the nearly face on nearby galaxies NGC5457 (M101) and NGC628 (M74) using high quality data available (including optical spectroscopy of the ionised gas, multi-band spectrophotometry of the stars, dust and gas, radio and millimetre observations of their neutral and molecular gas). Both galaxies can be considered prototype objects of its class and have a similar mass (logM$_{\star}$/M$_{\odot}\approx$10.2), though they present apparently different environments and chemical and structural properties. We have combined the chemical abundances of the ISM and the stars of both galaxies with a detailed derivation of their dust content and of their chemical evolution. The main results obtained in this work are the following: - The global budgets of metals and dust, and the chemical evolution, across M101 and NGC628 have been derived and analysed consistently for both galaxies, under the same physical assumptions. We have found that for both galaxies the overall metal budget could be consistent with the predictions of the simple model of chemical evolution if we assume an oxygen yield around half solar to one solar, close to the observational derivations (y$\approx$0.003) and to some recent theoretical prescriptions, though other nucleosynthesis schemes predict higher values. Whereas the whole disc of NGC628 appears to be consistent with this result, M101 presents a deviation in the outermost region (R$\geq$0.8$\rm R_{25}$), suggesting non closed box gas flows there. - The gas to dust mass ratio map for each galaxy is obtained from the maps of total gas mass and the dust mass that have been computed. Despite the presence of sufficient small scale structure across the maps, NGC628 GDR seems to be relatively constant across the disc of the galaxy, while the GDR of M101 shows a significant increase towards the outer parts of the galaxy. The GDR radial profiles of the galaxies show different behaviour with galactocentric radii, with NGC628 increasing very smoothly with galactocentric radius, while for M101 the GDR shows a significant increase towards the outer parts of the galaxy. - The gas to dust mass ratio varies against the chemical abundance obtained across the galaxies showing a two slopes behaviour, with a break at 12+log(O/H)$\approx$8.4; showing a steep negative slope ($\Delta$ logGDR/$\Delta$ log(O/H)$\approx$ -1.3) below this oxygen abundance. 
This result is found to be consistent with theoretical predictions which define a critical metallicity for this slope change , in line with what is seen in recent observational studies for M33 and from previous samples of star-forming objects .

- We have found an empirical relation between the gas to dust ratio, metallicity and the fraction of molecular to total gas mass, $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$. The GDR shows a very close relationship with metallicity for the lowest abundances, and with $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$ for the higher metallicities, leading to the lower GDR values (GDR$\lesssim$400) measured, suggestive of ISM dust growth.

- It has been found that log$\rm\Sigma_{dust}$, the dust surface density, is well correlated with 12+log(O/H) and also with log$\rm\Sigma_{star}$, the stellar surface density, in agreement with theoretical chemical evolution models including dust [e.g. @2012MNRAS.423...38M; @2012MNRAS.423...26M]. In the same vein, we have found that across both galaxies the ratio $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$, an empirical proxy for the stellar dust production, is well correlated with the oxygen abundance, 12 + log(O/H), and appears strongly correlated with log(N/O), i.e. the nitrogen to oxygen abundance ratio, which is a direct indicator of primary to secondary nucleosynthesis, and which we propose as a tracer of the GDR and of the ’stardust’ production behaviour. For abundances below the critical abundance, the ’stardust’ production as measured by $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ gives a constant value (mainly in the M101 outer regions), indicating that the (stellar) dust yield is similar to the value of the oxygen yield derived for the region (equivalent to some 40$\%$ of the total metal yield).

- We have found a relation of $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ with the molecular gas mass fraction, $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$, which could illustrate the balance between ’stardust’ production and ISM dust growth for M101 and NGC628. This relation shows a complex behaviour, with a nearly constant ’stardust’ component for a large range in molecular gas mass fraction values (0.2$< \rm\Sigma_{H_{2}}/\rm\Sigma_{gas} \leq$0.8), and with the largest changes in $\rm\Sigma_{dust}$/$\rm\Sigma_{star}$ seen at the extremes of low and high $\rm\Sigma_{H_{2}}$/$\rm\Sigma_{gas}$.

Deeper observations and detailed analyses are needed before going further in this direction. A more quantitative analysis of the empirical scenarios presented here is underway, including data for three nearby spirals and the Milky Way. We are presently developing the application of the ’stardust’ low-metallicity scenario with no dust growth, illustrated in this work, to a sample of low-metallicity dwarfs, to be presented in a forthcoming work (Relaño et al. in preparation).

Acknowledgements {#acknowledgements .unnumbered}
================

The authors would like to thank the anonymous referee for very constructive comments that have helped to improve the paper. This work was partially supported by the Spanish Ministerio de Economía y Competitividad under grants AYA2016-79724-C4-4-P and AYA2016-79724-C4-3-P, and excellence project PEX2011-FQM-7058 of Junta de Andalucía (Spain). MR acknowledges support from the research projects AYA2014-53506-P and AYA2017-84897-P of the Spanish Ministerio de Economía y Competitividad and from Junta de Andalucía grant FQM108; support from the European Regional Development Funds (FEDER) is acknowledged.
JMV thanks the Director of the IoA for hospitality, and the Spanish MECD for a “Salvador de Madariaga” grant PRX17/00485, and the MIAPP 2016 program “The Chemical Evolution of Galaxies” of the Excellence Cluster “Universe” for partial support. We thank Miguel Querejeta for providing the stellar mass maps of M101 and NGC628, Karin Sandstrom for the [H[<span style="font-variant:small-caps;">i</span>]{}]{} and CO data, and the KINGFISH collaboration for providing us with [[*Spitzer*]{}]{} and [[*Herschel*]{}]{} data of the galaxy sample. Thanks are given also to R. Asano for providing the evolutionary tracks of his models. MR thanks the Computational service PROTEUS at the Instituto Carlos I (Universidad de Granada). This research made use of APLpy and Matplotlib an open-source plotting package for Python. \[lastpage\] [^1]: E-mail: [email protected], Visiting fellow, Institute of Astronomy, University of Cambridge [^2]: Spectromètre Imageur a Transformée de Fourier pour l’Etude en Long et en Large de raies d’Emission [^3]: Strictly speaking this value would provide a lower limit to the true yield [^4]: Note that the stellar metallicity contribution is unknown beyond $\rm R/R_{25}$$\approx$0.5 and could not be added to the total budget [^5]: InterStellar Radiation Field [^6]: N/O is not expected to be much affected by massive unenriched inflow, but O/H would be.
--- abstract: 'Giant low surface brightness galaxies (gLSBs) with disc radii as large as 130 kpc challenge galaxy formation scenarios, and it is still not well understood how they form and evolve through cosmic time. Here we present an analysis of deep long-slit spectroscopic observations of six gLSBs that we obtained with the Russian 6-m telescope: UGC 1922, Malin 2, UGC 6614, UGC1382, NGC 7589 and UGC 1378. We derived spatially resolved properties of the stellar and ionized gas kinematics and characteristics of the stellar populations and interstellar medium. The stars in the central regions are old and metal-rich for most of the galaxies. We revealed the presence of a kinematically decoupled central component in the inner regions of UGC1922, UGC1382 and UGC6614, where we detected counter-rotating kinematical components. We combine the results of our observations with the results available in the literature. There seems to be a need for a diversity of gLSB formation scenarios: (i) some of them could have formed by in-plane mergers of massive galaxies; (ii) for some others the major merger scenario is excluded by our data. We revealed that most gLSBs are situated in low-density environments, which possibly helped to preserve the giant discs. At the same time, at some point in the formation history of these systems there should have existed a reservoir of gas from which the massive discs were formed. Future observations and detailed comparison with numerical simulations of galaxy formation in the cosmological context will help to clarify which gLSB formation channel is more important.' ---

Introduction
============

Giant low surface brightness galaxies (gLSBs) represent a challenge for the current galaxy formation scenarios despite being known for more than 30 years, since the prototype of this class of objects, Malin 1, was discovered by @Bothun1987. It is difficult to form such discs in the hierarchical clustering paradigm, which requires the dark haloes of disc galaxies not to undergo major mergers: major merger events would likely destroy large discs, but merger trees of such massive galaxies with no major mergers are exceptionally rare. The sample of confirmed gLSBs remains limited, despite being extended as new objects are discovered using improved observational facilities [see, e.g. @Hagenetal2016]. This may indicate that gLSBs are rare, and therefore detailed insight into every object of this type helps to understand their nature. That was the aim of our continuing series of works [@Kasparova2014; @Saburova2018; @Saburova2019]. We present the main properties of the sample of known gLSBs in Table \[properties\], with references to the origin of the data. The LSB disc radius corresponds to 4 radial scale lengths of the LSB disc for all galaxies except Malin 1, for which we used the distance to the last measured point above the noise level, together with the approximate exponential radial distribution of surface brightness given in @Boissier2016. All gLSBs demonstrate prominent spiral structure which continues to low surface densities [see, e.g. @Kasparova2014; @Galaz2015; @Hagenetal2016]. Many of them have a complex structure in which a normal-size early-type galaxy is embedded in an LSB disc [see, e.g. @Lelli2010; @Saburova2019]. We performed long-slit spectral observations of six gLSBs: UGC 1922, Malin 2, UGC 6614, UGC1382, NGC 7589 and UGC 1378 with the Russian 6-m telescope, and deep photometry of UGC 1922, UGC 1378 and Malin 2. For the details on the observations and data analysis see and Saburova et al. (in prep.).
We combined the results of our observations with the data available in the literature, including those for Malin 1, which was not part of our observational sample, to get clues on the formation and evolution of these unusual systems.

On the results of the long-slit spectral analysis
=================================================

From our spectral observations we obtained spatially resolved properties of the stellar and ionized gas kinematics and characteristics of the stellar populations. We found that 3 of the 6 considered gLSBs show counter-rotation of the gaseous disc with respect to the stellar disc or to the outer gaseous disc. A similarly high occurrence of decoupled gas kinematics was found in isolated S0 galaxies [@Katkov2014]. The stellar velocity profile obtained by @Reshetnikov (2010) for the seventh gLSB, Malin 1, does not show counter-rotation with respect to the outer disc. The decoupled gaseous kinematics is demonstrated by the line-of-sight velocity profiles in Fig. \[profiles\] (upper row). The ionized gas kinematics is shown by colored symbols and the stellar kinematics by black points. We note, however, that we cannot exclude that the peculiar gaseous kinematics of UGC 6614 is related to a gas outflow connected with the AGN. We need more spectral observations to reach a firm conclusion. The counter-rotation may indicate a merger or accretion of gas in the history of these systems. We also found that the stellar population of the central parts of the gLSBs is very old ($T>10$ Gyr) and metal-rich ($[Z/H]=-0.2..0.2$ dex) in most cases, except for the galaxies with prominent bars, UGC 1378 and NGC 7589.

![The radial profiles of velocity (top row) and velocity dispersion (bottom row) of ionized gas (colored symbols) and stars (black circles) for UGC 1382 ($PA=238^o$), UGC 1922 ($PA=85^o$) and UGC 6614 ($PA=45^o$), from left to right respectively.[]{data-label="profiles"}](u1382_kin_prof.pdf "fig:"){height="37.00000%"} ![The radial profiles of velocity (top row) and velocity dispersion (bottom row) of ionized gas (colored symbols) and stars (black circles) for UGC 1382 ($PA=238^o$), UGC 1922 ($PA=85^o$) and UGC 6614 ($PA=45^o$), from left to right respectively.[]{data-label="profiles"}](u1922_kin_prof_pa85.pdf "fig:"){height="37.50000%"} ![The radial profiles of velocity (top row) and velocity dispersion (bottom row) of ionized gas (colored symbols) and stars (black circles) for UGC 1382 ($PA=238^o$), UGC 1922 ($PA=85^o$) and UGC 6614 ($PA=45^o$), from left to right respectively.[]{data-label="profiles"}](u6614_kin_prof_pa45.pdf "fig:"){height="37.00000%"}

On the mass-modelling of the rotation curves
============================================

We combined the optical rotation curves obtained in this project with the data from [@Lelli2010; @Pickering1997; @Mishra2017; @Hagenetal2016] and performed mass-modelling of the combined rotation curves. We used the components of the stellar and gaseous discs and bulges and a [@Burkert] dark halo. We fixed the densities of the stellar components using the multi-band photometric data. Examples of the rotation curve decomposition can be found in [@Saburova2018; @Saburova2019]; the details of the modelling are given in @Saburova2016. The mass-modelling allowed us to obtain the parameters of the dark haloes of the gLSBs - the radial scale and the central density. We compared them to the radii of the discs in Fig. \[fig\_par\] (triangles), where we also show the parameters of HSB galaxies (squares and diamonds) [@saburova].
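For readers who wish to reproduce the dark-halo part of this decomposition, the sketch below evaluates the rotation-curve contribution of a @Burkert halo from its two parameters (central density and radial scale). This is only an illustrative implementation written for this text; the parameter values shown are placeholders, not fitted values from the paper.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def burkert_vcirc(r_kpc, rho0_msun_pc3, r0_kpc):
    """Circular velocity of a Burkert (1995) halo.

    rho(r) = rho0 * r0^3 / ((r + r0) * (r^2 + r0^2));
    the enclosed mass follows from integrating 4*pi*r^2*rho(r).
    """
    rho0 = rho0_msun_pc3 * 1e9                      # Msun/pc^3 -> Msun/kpc^3
    r = np.asarray(r_kpc, dtype=float)
    x = r / r0_kpc
    mass = np.pi * rho0 * r0_kpc**3 * (
        np.log(1.0 + x**2) + 2.0 * np.log(1.0 + x) - 2.0 * np.arctan(x)
    )
    return np.sqrt(G * mass / r)                    # km/s

# Placeholder parameters (not the values derived for any galaxy in the paper):
radii = np.linspace(1.0, 80.0, 10)                  # kpc
print(burkert_vcirc(radii, rho0_msun_pc3=0.01, r0_kpc=15.0))
```

In a full decomposition this halo term would be added in quadrature to the disc and bulge contributions fixed from the photometry, and the two halo parameters varied to match the combined rotation curve.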
The line corresponds to the least-squares fit for HSB galaxies from @saburova. From Fig. \[fig\_par\] it follows that gLSBs behave differently on these plots. Some of them lie on the continuation of the line plotted for HSB galaxies, while others tend to deviate from it, like Malin 1, whose dark halo radial scale is lower than expected for its large disc radius. This may indicate a different nature for different gLSBs.

![The disc radius compared to the parameters of the dark matter halo – central density (left-hand panel) and radial scale (right-hand panel). The gLSBs are shown by triangles. Diamonds correspond to the giant HSBs, squares give the plot for moderate-size galaxies, and the line is the fit for HSB galaxies [@saburova]. []{data-label="fig_par"}](rho_rd.png "fig:"){height="27.00000%"} ![The disc radius compared to the parameters of the dark matter halo – central density (left-hand panel) and radial scale (right-hand panel). The gLSBs are shown by triangles. Diamonds correspond to the giant HSBs, squares give the plot for moderate-size galaxies, and the line is the fit for HSB galaxies [@saburova]. []{data-label="fig_par"}](rc_rd.png "fig:"){height="27.00000%"}

On the formation scenario for gLSBs
===================================

The most important goal of our project was to understand how systems such as gLSBs could be formed. Here we summarize most of the discussed formation scenarios. They can be divided into catastrophic and non-catastrophic. Among the catastrophic scenarios are the following. (i) A bygone head-on collision with a massive intruder, which forms a ring-like structure that evolves into the giant disc, proposed by @Mapelli2008. This scenario seems unlikely because the deep images and color maps of the galaxies do not show the traces indicative of this model [see e.g. @Boissier2016]. (ii) The formation of a gLSB from tidally disrupted dwarf galaxies [@2006ApJ650L33P]. The major disadvantage of this scenario is that it predicts a decrease of the rotation velocity at the disc periphery, which is not observed in the data [see, e.g. @Mishra2017]; also, the satellites should have almost the same angular momentum in order to form a disc rather than a spheroidal system, which makes this scenario less likely. (iii) The group of scenarios in which an extended disc is formed from an ample supply of gas cooled down at the late stage of a merger [@Saburova2018; @Zhu2018MNRAS]. These models show good agreement with the observed properties of some gLSBs, and thus they seem more promising. The non-catastrophic scenarios are the following. (i) The formation of a giant disc due to a peculiarly high radial scale of the dark halo. We cannot fully exclude this scenario, since most of the gLSBs indeed have a high radial scale of the halo [see, e.g. @Kasparova2014; @Saburova2018 and Fig. \[fig\_par\]]. (ii) A build-up of a gLSB disc by accretion from cosmic filaments [@Saburova2018; @Saburova2019]. This scenario has two stages. The first stage, in common with MW-type galaxies, may have included several episodes of merging; the second stage, following the accretion of most satellites, quiescently formed the gLSB stellar and gaseous discs by accretion of metal-poor gas from a gas-rich filament. According to our data, there seems to be a need for a diversity of gLSB formation scenarios. The reasoning for this is visually demonstrated in Fig. \[fig\_images\], where we show the deep images of two galaxies which can both be classified as gLSBs but which are completely different.
On the left-hand side is UGC 1378, which has a bar and a complex structure consisting of a normal HSB galaxy with a bulge and a disc. According to our estimates of the stellar velocity dispersion, its disc is not dynamically overheated, as would be expected in a major merger scenario [@Saburova2019]. We do not see any peculiarities in its morphology and kinematics. It is located in a low-density environment [@Saburova2019]. On the right-hand side is UGC 1922, which seems to be a bulge embedded in an LSB disc, and has a peculiar morphology. Its stellar velocity dispersion is high and corresponds to a strongly overheated disc. We also found counter-rotation of the outer gaseous disc with respect to the inner one (see Fig. \[profiles\]). The radius of the disc of UGC 1922 is also larger than that of UGC 1378 (see Table \[properties\]). UGC 1922 belongs to a group that includes 7 spectroscopically confirmed members, one of which is a giant elliptical galaxy that dominates the group [@Saburova2018]. Most likely these two objects were formed differently: the first by gas accretion onto the HSB galaxy, while the second is better described by a major merger scenario.

![Composite [*r, g, z*]{}-band image of UGC 1378 (left-hand panel) [@Saburova2019], [*g*]{}-band image of UGC 1922 [@Saburova2018].[]{data-label="fig_images"}](u1378_rgb.png "fig:"){height="40.00000%"} ![Composite [*r, g, z*]{}-band image of UGC 1378 (left-hand panel) [@Saburova2019], [*g*]{}-band image of UGC 1922 [@Saburova2018].[]{data-label="fig_images"}](ugc1922_g.png "fig:"){height="40.00000%"}

  [**Galaxy**]{}   RA, Dec                            [**LSB disc radius**]{} (kpc)   [**Type**]{}   [**mass**]{} ($10^{10}$ M$_{\odot}$)   [**Distance**]{} (Mpc)   [**Inclination**]{} ($^o$)   [**Rotation velocity**]{} (km s$^{-1}$)
  ---------------- ---------------------------------- ------------------------------- -------------- --------------------------------------- ------------------------ ---------------------------- ----------------------------------------
  Malin 1          12h36m59.350s, +14d19m49.32s$^1$   130$^2$                         SBab$^3$       6.7$^4$                                 377$^4$                  38$^{4}$                     236$^5$
  Malin 2          10h39m52.483s, +20d50m49.36s$^1$   82$^6$                          Scd$^3$        3.6$^7$                                 201$^8$                  38$^7$                       320$^7$
  UGC 6614         11h39m14.872s, +17d08m37.21s$^1$   54$^7$                          Sa$^3$         2.5$^7$                                 85$^7$                   35$^7$                       228$^7$
  NGC 7589         23h18m15.668s, +00d15m40.19s$^1$   56$^4$                          SABa$^3$       1.5$^4$                                 130$^4$                  58$^7$                       205$^4$
  UGC 1378         01h56m19.24s, +73d16m58.0s$^1$     50$^9$                          SBa$^3$        1.2$^{10}$                              38.8$^{10}$              59$^{10}$                    280$^{10}$
  UGC 1382         01h54m41.042s, -00d08m36.03s$^1$   80$^{11}$                       S0$^{11}$      1.7$^{11}$                              80$^{11}$                46$^{11}$                    280$^{11}$
  UGC 1922         02h27m45.930s, +28d12m31.83s$^1$   84$^{12}$                       S?$^1$         3.2$^{10}$                              150$^{10}$               51$^{10}$                    432$^{10}$

  : The main properties of gLSBs.[]{data-label="properties"}

We analyzed each galaxy of the sample in a similar way and decided which of the considered scenarios is more likely for it. We discuss this below.

[**Malin 1**]{}. We can rule out the scenario in which the giant size of the disc is due to a peculiarly high radial scale of the dark matter halo, since it contradicts our estimates; @Lelli2010 also give a moderate value for the radial scale of the dark halo of Malin 1. The presence of a complex structure with two discs makes the scenario of accretion of gas from a filament onto the pre-formed early-type galaxy more plausible for this system. The LSB disc could be the result of this accretion.
@Zhu2018MNRAS proposed a major merging scenario for this galaxy, which also does not contradict its observed properties, including the absence of a gradient of the stellar age in its disc. However, to reach a firm conclusion one needs to estimate the stellar velocity dispersion in the region of the LSB disc of the galaxy.

[**Malin 2**]{}. We cannot exclude the possibility that the extended disc of Malin 2 is due to the high radial scale of the dark halo [in agreement with our finding in @Kasparova2014]. The major merger scenario is less likely, since the stellar velocity dispersion is not very high at one disc scale length and we observe a high gradient of gas metallicity in the disc [@Kasparova2014]. The formation of the giant disc from gas accreted from a filament cannot be fully excluded.

[**UGC 6614**]{}. For this galaxy we need more information to choose the scenario (properties of the stellar population and gas metallicity in the region of the LSB disc). We cannot exclude either the catastrophic or the non-catastrophic scenarios.

[**NGC 7589**]{} possesses a massive companion, NGC 7603, which has disturbed spiral arms. Both the extended disc of NGC 7589 and the disturbed appearance of NGC 7603 could be traces of an interaction between these two galaxies. The gas accretion scenario, however, cannot be fully excluded, since NGC 7589 has both HSB and LSB discs [@Lelli2010].

[**UGC 1382**]{}. The counter-rotation of the gaseous disc could indicate accretion of gas onto the pre-formed early-type galaxy. Its velocity field is not disturbed [@Hagenetal2016], which gives evidence against a recent major merger. We can also rule out the scenario in which the extended disc is formed due to an unusually high radial scale of the dark halo, since the parameters of the halo deviate from those expected for the large radius of the LSB disc.

Acknowledgements
================

AS is grateful for an IAU Grant providing financial support to attend IAU Symposium 355. AS acknowledges the Russian Science Foundation (RSCF) grant 19-72-20089, which supported the research on dynamical modelling of gLSBs. IC and AS acknowledge the Russian Science Foundation (RSCF) grant 19-12-00281 for the reduction and analysis of spectral and photometric data.

[99]{} 1987, [*AJ*]{}, 94, 23 2016, [*A&A*]{}, 593, A126 2014, [*A&A*]{}, 570, A13 2016, [*The Astrophysical Journal*]{}, 826, 210. 2006, [*PASA*]{}, 23, 165 2010, [*A&A*]{}, 516, A11 2014, [*MNRAS*]{}, 437, 3072 1997, [*AJ*]{}, 114, 1858 2010, [*A&A*]{}, 523, A63 2017, [*MNRAS*]{}, 464, 2741 2018, [*MNRAS*]{}, 481, 3534. 2019, arXiv:1908.11383 2014, [*MNRAS*]{}, 438, 2798 2010, [*MNRAS*]{}, 406, L90 2015, [*ApJ*]{}, 815, L29 2016, [*MNRAS*]{}, 463, 2523 1995, [*ApJ*]{}, 447, L25 2018, [*MNRAS*]{}, 473, 3796. 2008, [*MNRAS*]{}, 383, 1223 2006, [*ApJ*]{}, 650, L33 2018, [*MNRAS*]{}, 480, L18
--- abstract: 'We have extended the high-temperature susceptibility series of the three-dimensional spin-$\frac{1}{2}$ Ising model to $O(v^{26})$. Analysis of the new series gives $\alpha = 0.101 \pm 0.004$.' author: - | A.J. Guttmann\ Department of Mathematics, The University of Melbourne\ Parkville, Vic. 3052, Australia\ and I.G. Enting\ CSIRO Division of Atmospheric Research,\ Private Bag No.1\ Mordialloc, Vic. 3195, Australia title: 'The high-temperature specific heat exponent of the 3-d Ising model' --- In an earlier paper [@ge1] we gave series to order $v^{22}$ for the high-temperature expansion of the zero-field partition function of the 3-dimensional Ising model. More precisely, we gave the coefficients $a_n, \mbox{ } n=0,22$, defined by $$Z =2[cosh(J/kT)]^3\Phi(v), \mbox{ with } \Phi(v)=\sum_{n}a_n v^n.$$ The series were obtained by the finite-lattice method. One difficulty with the finite-lattice method for this problem is its voracious appetite for computer memory. Our earlier computation in fact calculated the series to two further terms - to order $v^{26}$ - but due to addressing limitations, we were unable to retain the intermediate information. This particular calculation requires $2.08 GB$ of memory, and we were unable to address more than $2 GB$, due to operating system limitations. We have now been able to re-run our program under a different operating system that permits us to address this large address space. The program was run on an IBM 3090/400J with $500 MB$ of memory and $2 GB$ of extended storage - a slower type of memory. The use of the MVS operating system allowed the large address space to be used. Even so, 2-byte integers were used, and the program run twice [*modulo*]{} two different primes. The results were combined using the Chinese Remainder Theorem, and provided the least significant digits of the new coefficients, the most significant digits were obtained by differential approximants. The final results were then compared by running with a third prime. Each run took 150 hours. As a result, we have obtained two further non-zero terms (the partition function being an even function has vanishing odd-order coefficients). We have also obtained the 6 most significant digits of the $O(v^{28})$ coefficient, by the method of differential approximants. In our earlier paper we obtained the coefficient of the $O(v^{24})$ coefficient by this method, and claimed the coefficient to be $a_{24} = 27337 * 10^7$. The present calculation gives the coefficient as $a_{24}=273374177222$, verifying our prediction. The subsequent coefficients are found to be $a_{26}= 4539862959852$ and $a_{28} = 7474452 * 10^7 \pm 5 * 10^7$, where the last coefficient is obtained by differential approximants. (The approximate coefficient was not used in the subsequent analysis, as differential approximants require more accurate coefficients. It is nevertheless useful for ratio type methods of analysis). As we were completing this work we received a preprint [@bc] in which a variant of the finite-lattice method, using helical boundary conditions was used to obtain one further coefficient than we had previously obtained. This work also confirmed our predicted coefficient, and agrees with our exact coefficient. (Note that they give the free-energy series and we give the partition function series). They also predicted $a_{26}$, and our exact coefficient confirms their predicted value. We have analysed the new series by several methods. 
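Before turning to the analysis, a brief aside on the coefficient-reconstruction step mentioned above: combining the two runs modulo different primes is a direct application of the Chinese Remainder Theorem. The sketch below is only illustrative — the primes are placeholders, not the ones used in the actual runs (only the coefficient value $a_{24}$ is taken from the text).

```python
def crt_pair(r1: int, p1: int, r2: int, p2: int) -> int:
    """Return the unique x in [0, p1*p2) with x = r1 (mod p1) and x = r2 (mod p2).

    Requires gcd(p1, p2) = 1; pow(p1, -1, p2) is the modular inverse of p1 mod p2
    (available in Python 3.8+).
    """
    t = ((r2 - r1) * pow(p1, -1, p2)) % p2
    return r1 + p1 * t

# Placeholder example with two hypothetical 16-bit primes.
p1, p2 = 32749, 32719
coeff = 273374177222 % (p1 * p2)     # the part of a_24 recoverable from two such runs
r1, r2 = coeff % p1, coeff % p2      # residues the two modular runs would report
assert crt_pair(r1, p1, r2, p2) == coeff
```

Since two primes of this size only determine a coefficient modulo their product, the most significant digits must come from elsewhere — here, as described above, from differential-approximant predictions — which is why the two sources of information are complementary.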
The series is now, for the first time, sufficiently long that the method of differential approximants can be used with some confidence. For our initial analysis, we used unbiased approximants, but for maximum precision we used biased approximants. This requires knowledge of the critical temperature, which has been accurately estimated from the more readily analysed high-temperature susceptibility series, as well as from a variety of Monte-Carlo estimates. The series estimates are reviewed in [@g1], and we use the best estimate given there, $v_c = 0.218093$, which is in good agreement with the most recent high-precision Monte-Carlo estimate [@l] of $v_c = 0.2180992 \pm 0.0000026$. Our method of analysis is fully described in [@g2], and provides a weighted mean of critical exponent estimates from inhomogeneous first- and second-order differential approximants, with one estimate obtained for each order of the series. Our analysis was carried out on the coefficients of the partition function itself. Our unbiased estimates are $$v_c^2 = 0.04756 \pm 0.00003 \mbox{ and } 2-\alpha = 1.905 \pm 0.016 \mbox{ with } K=1$$ $$v_c^2 = 0.04756 \pm 0.00002 \mbox{ and } 2-\alpha = 1.897 \pm 0.012 \mbox{ with } K=2.$$ In the above, $K=1,2$ refers to first- and second-order differential approximants respectively. The unbiased estimates are seen to be in excellent agreement with the susceptibility series estimate $v_c^2 = 0.0475646$, while an estimate of $\alpha = 0.10 \pm 0.01$ can be made. A biased analysis yields the following: $$2-\alpha = 1.899 \pm 0.004,\mbox{ } K=1 \mbox{ and } 2-\alpha = 1.900 \pm 0.006, \mbox{ } K=2$$ Thus we find, from this analysis, $\alpha = 0.101 \pm 0.004$. This is substantially more precise than our earlier analysis, using two fewer series coefficients, of $\alpha = 0.104 \pm 0.018$. It is consistent with the analysis of [@bc], who find $\alpha = 0.104 \pm 0.004$, though as can be seen we favour a rather lower value. Note that second-order differential approximants implicitly take correction-to-scaling terms into account. The agreement between first- and second-order differential approximants suggests that correction-to-scaling effects are weak. A subsequent analysis provides numerical confirmation of this. Ratio techniques can also be used with this series. We have analysed the free-energy series by a variety of extrapolation methods, based on the observation that if the free energy behaves as $\Psi/kT \sim A(1 - v^2/v_c^2)^{2-\alpha}$, then the ratio of successive coefficients in the series expansion of $\Psi/kT$ behaves like $\frac{1}{v_c^2}(1 + \frac{\alpha-3}{n})$, with higher-order corrections from correction-to-scaling exponents, as well as corrections due to analytic terms. In any event, the sequence of ratios can obviously be re-arranged to give a sequence that will converge to $\alpha$. Neville extrapolation (which takes into account only analytic correction terms) gives $\alpha = 0.103 \pm 0.006$. Other extrapolation methods, such as Levin’s u-transform and Brezinski’s $\theta$-algorithm, are less accurate, allowing only the estimate $\alpha = 0.10 \pm 0.03$. In our previous analysis, we also studied the amplitude of the “correction-to-scaling” term, $a_\theta$, where the specific heat is defined to have the scaling form $C \sim A|t|^{-\alpha}[1 + a_\theta|t|^{\theta} + a_1|t| + \cdots]$, where $t=(T-T_c)/T_c$ and $\theta \approx 0.52$ [@ni].
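A minimal sketch of the ratio analysis just described is given below. It assumes the series coefficients are available as a Python list (the short list shown is a placeholder, not the actual series), forms exponent estimators from successive ratios, and applies one step of Neville extrapolation to remove the leading $1/n$ analytic correction; this is an illustration of the method, not the authors' code.

```python
def alpha_estimators(coeffs, vc2, power_shift=3.0):
    """Ratio estimators of the exponent alpha.

    For a series sum_n f_n x^n with f_n the coefficients of A*(1 - x/xc)^(2-alpha),
    the ratios f_n/f_{n-1} -> (1/xc)*(1 + (alpha-3)/n), so alpha ~ (r_n*xc - 1)*n + 3.
    For a (-alpha) singularity (the specific heat itself) use power_shift=1.0.
    """
    est = []
    for n in range(1, len(coeffs)):
        r = coeffs[n] / coeffs[n - 1]
        est.append((r * vc2 - 1.0) * n + power_shift)
    return est

def neville_step(est):
    """One Neville iteration: removes the leading 1/n analytic correction."""
    return [n * est[n - 1] - (n - 1) * est[n - 2] for n in range(2, len(est) + 1)]

# Placeholder usage (illustrative coefficients only, not the real expansion):
coeffs = [1.0, 2.1, 3.9, 6.6, 10.4, 15.5]
vc2 = 0.0475646
print(neville_step(alpha_estimators(coeffs, vc2)))
```

The same construction with `power_shift=1.0` applied to the specific-heat coefficients $c_n$ gives the estimator sequence used in the correction-to-scaling discussion that follows.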
In [@lf] it was argued that $a_\theta$ should be negative, and our earlier analysis [@ge1] seemed to confirm this, in that we found $a_\theta \approx -0.04$. This can be seen from the behaviour of the ratios of successive coefficients, as follows: We first write $C(v) = \sum c_n v^{2n}$, as the expansion we obtain is in terms of the usual high-temperature variable $v=\tanh(J/kT)$. Note that, to leading order, $t=(T-T_c)/T_c=B(v-v_c)/v_c$, where $B$ is a positive constant. It therefore follows that the correction-to-scaling amplitude of the specific heat series expanded in the variable $v^2$ should also be of negative sign. Writing $$C(v)= \sum c_n v^{2n} = A(1 - v^2/v_c^2)^{-\alpha}(1 + b(1-v^2/v_c^2)^\theta +\cdots),$$ it follows that $$c_n = \frac{A\Gamma(\alpha+n)}{\Gamma(\alpha)\Gamma(n+1)v_c^{2n}}\left[1 + \frac{b\Gamma(\alpha)\Gamma(\alpha+n-\theta)}{\Gamma(\alpha-\theta)\Gamma(\alpha+n)} + \cdots\right],$$ hence $$\frac{c_n}{c_{n-1}} = \frac{1}{v_c^2}[1 + \frac{\alpha-1}{n} - \frac{b\Gamma(\alpha)\theta}{\Gamma(\alpha-\theta)n^{\theta+1}} + O(\frac{1}{n^2})].$$ Taking $\alpha \approx 0.1$ and $\theta \approx 0.5$, it follows that the above equation can be rewritten as $$\frac{c_n}{c_{n-1}} = \frac{1}{v_c^2}[1 + \frac{\alpha-1}{n} + \frac{1.28..b}{n^{\theta+1}} + O(\frac{1}{n^2})].$$ Hence we find that $$(\frac{c_n}{c_{n-1}}v_c^2 - 1)n + 1 \sim \alpha + \frac{1.28..b}{n^\theta} + O(\frac{1}{n}).$$ This means that if $b < 0$, estimators of $\alpha$, given by the l.h.s. of the above equation, should approach $\alpha$ [*from below*]{}. In fact we find the approach to be from above, but a simple $n$-shift of 1 makes the approach change to an approach from below! Even an analysis taking into account the analytic correction term does not alter this behaviour. To be more precise, we have repeated the above analysis with an additional analytic correction-to-scaling term present, and found that the numerical value of $b$ changes sign with an $n$-shift of just $1$. In all cases, the estimate of $b$ is numerically rather small, and we conclude that this analysis is not sensitive enough to distinguish $b$ from zero. A similar conclusion, based on a somewhat different analysis, was obtained in [@bc]. Our estimate of $\alpha$ is rather lower than the field-theory estimate [@lz] of $\alpha = 0.110 \pm 0.0045$, but the field-theory and series estimates are both (separately) consistent with the hyperscaling relation $d\nu = 2-\alpha$. Our best series estimate of $\nu = 0.632^{+0.002}_{-0.003}$ implies $\alpha = 0.104^{+0.006}_{-0.009}$, while the best field-theory estimate [@ni] is $\nu = 0.630$, which implies $\alpha = 0.110$, a value at the centre of the field-theory estimates. We summarise the various estimates of $\alpha$ in table \[t1\].

  $\alpha$ estimate           Method                        Reference
  --------------------------- ----------------------------- -----------
  $0.101(4)$                  Series                        This work
  $0.104(4)$                  Series                        [@bc]
  $0.1100(45)$                Field theory                  [@lz]
  $0.104_{-0.009}^{+0.006}$   Series + hyperscaling         [@g2]
  $0.110$                     Field theory + hyperscaling   [@ni]

  : Summary of $\alpha$ estimates[]{data-label="t1"}

Acknowledgements {#acknowledgements .unnumbered}
================

We wish to thank Mr. Bob Hill of the Australian Communications and Computing Institute for the provision of the computing facilities, and for assistance in running this large job. Financial support from the Australian Research Council is also gratefully acknowledged.

[99]{} G Bhanot, M Creutz, U Glässner and K Schilling, Phys. Rev.
B [**49**]{} (1994) 12909. J C Le Guillou and J Zinn-Justin, Phys. Rev. B [**21**]{} (1980) 3976. A J Liu and M E Fisher, J. Stat. Phys. [**58**]{} (1990) 431. A J Guttmann and I G Enting, J. Phys. A: Math. Gen. [**26**]{} (1993) 807-21. A J Guttmann, J. Phys. A: Math. Gen. [**21**]{} (1987) 1855-63. A J Guttmann, J. Phys. A: Math. Gen. [**21**]{} (1987) 1839-54. A M Ferrenberg and D P Landau, Phys. Rev. B [**44**]{} (1991) 5081-91. V Privman and M Barma, Z. Phys. B [**57**]{} (1984) 59-63. B G Nickel, Physica A [**177**]{} (1991) 189-96.
--- author: - | $^{(a)}$ and Tetsuo Hatsuda$^{(a,b)}$\ $ ^{(a)}$ Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan\ $ ^{(b)}$ Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Chiba 277-8568, Japan\ E-mail: , title: Chiral symmetry of graphene and strong coupling lattice gauge theory --- Introduction ============ After its first experimental discovery in 2004 [@Novoselov_2004], graphene (monoatomic layer material of carbon atoms) has widely attracted theoretical and experimental attention [@castro_neto_2009]. Due to its hexagonal lattice structure, charge carriers on a graphene reveal a linear dispersion relation around two “Dirac points” in momentum space [@wallace_1947], so that quasiparticles at low energies can be described as two species of massless Dirac fermions in (2+1)-dimensions [@Semenoff:1984dq]. The symmetry between two triangular sublattices of graphene is referred to as the “chiral symmetry”. In the low-energy effective theory of graphene, the effective Coulomb interaction between charge carriers is enhanced by 300 times due to the small Fermi velocity $v_{_F}$. Such a strong Coulomb interaction may turn a suspended graphene in the vacuum from semimetal to insulator by the formation of a finite spectral gap of quasiparticles [@CN09]. This mechanism is similar to the spontaneous chiral symmetry breaking and associated fermion mass generation in quantum chromodynamics (QCD). Various attempts have been made so far to study the gap formation in monolayer graphene by using the Schwinger–Dyson equation [@khveshchenko_2001; @gorbar_2002; @gorbar_2009], the $1/N$ expansion [@Herbut_2006; @son_2007; @son_2008], the exact renormalization group [@Giuliani_2010], and the lattice Monte Carlo simulations [@hands_2008; @drut_2009; @giedt_2009; @drut_2010]. These works are mainly focused on the critical region of semimetal-insulator transition or on the behavior for large number of flavors. In this work, we rather focus on the strong coupling region of the system and study the low-energy effective theory discretized on a square lattice [@Araki_Hatsuda_2010; @Araki_2010]. By using the strong coupling expansion of the compact and non-compact formulations of the gauge field, we study the gap formation due to the spontaneous “chiral symmetry” breaking as well as the dispersion relations for collective excitations. Typical energy scale of the emergent excitations are also estimated with the use of the intrinsic length scale of the original honeycomb lattice. Low-energy effective theory of graphene ======================================= Effective action in the continuum limit --------------------------------------- With the annihilation operators of the electrons on the two triangular sublattices of graphene ($a_\sigma$ and $b_\sigma$) near the two Dirac points $\mathbf{K}_\pm$, we can construct a 4-component spinor in the momentum space, $\psi_\sigma(\mathbf{p}) \equiv \left(a_\sigma(\mathbf{K}_+ + \mathbf{p}),b_\sigma(\mathbf{K}_+ + \mathbf{p}),b_\sigma(\mathbf{K}_- + \mathbf{p}),a_\sigma(\mathbf{K}_- + \mathbf{p})\right)^T$. Here $\sigma=\uparrow,\downarrow$ denotes the original spin of the electrons. 
The Euclidean effective action for graphene is then written as [@gorbar_2002; @son_2007] $$S_E = \sum_{\sigma} \int dx^{(3)} \ \bar{\psi}_\sigma \left( D[A] +m \right) \psi_\sigma + \frac{1}{2g^2} \sum_{j=1,2,3} \int dx^{(4)} (\partial_j A_4)^2 , \label{eq:effaction}$$ where the Dirac operator reads $D[A]= \gamma_4(\partial_4+iA_4) + v_{_F} \sum_{i=1,2} \gamma_i \partial_i$. This is analogous to the action in “reduced QED” [@gorbar_2001], in which the fermion $\psi$ in (2+1)-dimensions is interacting with the U(1) gauge field $A$ in (3+1)-dimensions. The Hermitian $\gamma$ matrices obey the anticommutation relation $\{\gamma_\mu,\gamma_\nu\}=2\delta_{\mu\nu}$. The gauge coupling constant is defined by $g^2=2e^2/\epsilon_0(1+\varepsilon)$, with the electric charge $e$, the dielectric constant of vacuum $\epsilon_0$, and the dielectric constant of substrate $\varepsilon$. Due to the small Fermi velocity of quasiparticles, $v_{_F}=3.02\times 10^{-3}$, one may adopt the “instantaneous approximation” in which the spatial component $\mathbf{A}$ is neglected. From the absence of the $z$-component in the Dirac operator, this model possesses a continuous global U(4) symmetry generated by 16 generators $\{1,\gamma_3,\gamma_5,\gamma_3 \gamma_5\} \otimes \{1,\vec{\sigma}\}$, which is the extension of the continuous U(1) charge symmetry and the discrete $Z_2$ sublattice exchange symmetry of the original honeycomb lattice. The explicit chiral symmetry breaking term is represented by the mass $m$ in Eq.(\[eq:effaction\]). After performing the scale transformation in the temporal direction, $x_4 \rightarrow x_4 /v_{_F}, \; A_4 \rightarrow v_{_F} A_4$, the Dirac operator becomes independent of $v_{_F}$. This scale transformation changes the mass $m$ into the effective mass $m_*=m/v_{_F}$, while the Coulomb coupling strength is enhanced as $g_*^2 = g^2/v_{_F}$ which is about 300 times larger than that of QED. Since the inverse effective coupling strength $\beta=1/g_*^2$ is $0.0369$ in the vacuum-suspended graphene, the expansion by $\beta$ (strong coupling expansion) would work well. Regularization on a square lattice ---------------------------------- We discretize the low energy action Eq.(\[eq:effaction\]) on a square lattice with a lattice spacing $a$. Since the original honeycomb lattice has a lattice spacing $a_{_\mathrm{Hc}} \sim 1.4$Å, we make an approximate identification, $a \sim a_{_\mathrm{Hc}}$, so that we can carry out the strong coupling expansion. The quasiparticles in monolayer graphene are described with a single staggered fermion $\chi$, because its eight doublers can be identified as 4(spinor components) $\times$ 2(spin) degrees of freedom. As a consequence, the lattice action for fermions on graphene is written as [@drut_2009] $$S_F= \sum_{x^{(3)}} \left[ \frac{1}{2} \sum_{\mu=1,2,4} \left( V_{\mu}^+(x)-V_{\mu}^-(x) \right) + m_{*} M(x) \right] \label{eq:latticeaction-F}$$ with fermionic bilinears $M(x)= \bar{\chi}(x)\chi(x), V_{\mu}^+(x)= \eta_{\mu}(x)\bar{\chi}(x)U_{\mu}(x) \chi(x+\hat{\mu}), V_{\mu}^-(x)=\eta_{\mu}(x)\bar{\chi}(x+\hat{\mu}) U_{\mu}^{\dagger}(x) \chi(x)$, where $\mu=1,2,4$. The staggered phase factors $\eta_{\mu}$ corresponding to the Dirac $\gamma$-matrices are $ \eta_4(x)=1, \eta_1(x)=(-1)^{x_4}, \eta_2(x)=(-1)^{x_4+x_1}$. $U_\mu$ is the U(1) link variable, where the temporal link is defined as $U_4(x)=\exp\left[i\theta(x)\right] \, (-\pi \leq \theta < \pi)$, while the spatial links $U_{1,2,3}$ are set to unity as a result of the instantaneous approximation. 
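To make the lattice notation concrete, the sketch below builds the free ($U_\mu=1$) staggered operator of Eq. (\[eq:latticeaction-F\]) on a small periodic lattice, using the phase factors $\eta_\mu(x)$ defined above. It is purely illustrative — the lattice size, boundary conditions and mass value are arbitrary choices, not taken from the paper.

```python
import numpy as np

L = 4            # illustrative lattice extent in each of x1, x2, x4
m_star = 0.1     # placeholder effective mass in lattice units

def eta(mu, x1, x2, x4):
    """Staggered phases: eta_4 = 1, eta_1 = (-1)^{x4}, eta_2 = (-1)^{x4+x1}."""
    if mu == 4:
        return 1
    if mu == 1:
        return (-1) ** x4
    return (-1) ** (x4 + x1)

def idx(x1, x2, x4):
    return (x1 % L) * L * L + (x2 % L) * L + (x4 % L)

D = np.zeros((L**3, L**3))
for x1 in range(L):
    for x2 in range(L):
        for x4 in range(L):
            i = idx(x1, x2, x4)
            D[i, i] += m_star
            for mu, step in ((1, (1, 0, 0)), (2, (0, 1, 0)), (4, (0, 0, 1))):
                j = idx(x1 + step[0], x2 + step[1], x4 + step[2])
                e = 0.5 * eta(mu, x1, x2, x4)
                D[i, j] += e      # forward hop, from V_mu^+
                D[j, i] -= e      # backward hop, from V_mu^-

# The hopping part is antisymmetric, as it must be for a staggered operator:
assert np.allclose((D - m_star * np.eye(L**3)).T, -(D - m_star * np.eye(L**3)))
```

In the interacting theory the temporal hops would in addition carry the U(1) phases $\exp[i\theta(x)]$, which is precisely where the strong-coupling link integration of the next section acts.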
In the staggered fermion formulation, the global chiral symmetry U(4) shrinks to $\mathrm{U(1)_V \times U(1)_A}$, with the ordinary $\mathrm{U(1)_V}$ charge symmetry, and the axial $\mathrm{U(1)_A}$ symmetry generated by $\epsilon(x) \equiv (-1)^{x_1+x_2+x_4}$. As for the pure gauge action, we consider two types of formulation. One is the compact formulation which consists of plaquettes made of U(1) compact link variables: $$S_G^{\rm (C)} = \beta \sum_{x^{(4)}} \sum_{j=1,2,3} \left[1-{\rm Re} \left( U_4(x) U_4^{\dagger}(x+\hat{j}) \right)\right]. \label{eq:latticeaction-G}$$ The other is the non-compact formulation with the gauge angle $\theta$: $$S_G^{\rm (NC)} = \frac{\beta}{2} \sum_{x^{(4)}} \sum_{j=1,2,3} \left[\theta(x)-\theta(x+\hat{j})\right]^2. \label{eq:latticeaction-G-NC}$$ The compact formulation has photon self-interactions which are absent in the non-compact formulation and in the continuum theory. Strong coupling expansion ========================= Expanding the partition function $Z$ by the inverse coupling strength $\beta$ and integrating out the link variables order by order, we obtain the effective action $S_\chi$ in terms of fermions [@Drouffe:1983fv]: $$Z= \int [d\chi d\bar{\chi}][d\theta] \left[ \sum_{n=0}^{\infty} \frac{(-S_G)^n }{n!} e^{-S_F} \right] = \int [d\chi d\bar{\chi}] e^{-S_{\chi}}. \label{eq:part-Z}$$ Since the link integration selects the terms in which the link variable cancel with its complex conjugate, various 4-fermi interaction terms are induced as shown in Fig.\[fig:links\]. With the compact formulation, we obtain the leading order (LO) and the next-to-LO (NLO) effective action, $S_\chi^{(0)}$ and $S_{\chi}^{(1) {\rm C}}$ respectively, as $$\begin{aligned} S_{\chi}^{(0)} &=& \sum_{x^{(3)}} \left[ \frac{1}{2} \sum_{j=1,2} \left( V_j^+(x)-V_j^-(x) \right) + m_{*} M(x) \right] -\frac{1}{4} \sum_{x^{(3)}} M(x) M(x+\hat{4}), \label{eq:4-fermi-0} \\ S_{\chi}^{(1) {\rm C}} &=& \frac{\beta}{8} \sum_{x^{(3)}} \sum_{j=1,2} \left[ V_j^+(x) V_j^-(x+\hat{4}) + V_j^-(x) V_j^+(x+\hat{4}) \right]. \label{eq:4-fermi-1}\end{aligned}$$ Since the pure gauge term $S_G$ vanishes in the strong coupling limit ($\beta=0$), the compact formulation and the non-compact one give the same result in the LO. In the NLO, the effective action from the non-compact formulation, $S_{\chi}^{(1) {\rm NC}}$, is twice that from the compact one, $S_{\chi}^{(1) {\rm C}}$, so that observables of the both formulations are related as ${\cal O}^{{\rm NC}}(\beta)={\cal O}^{{\rm C}}(2\beta)$. ![Induced four-fermion interaction in the strong coupling expansion. The open and filled circles represent fermion fields $\chi$ and $\bar{\chi}$, respectively. (a) In the LO, the time-like links (red arrows) in the fermion action $S_F$ cancel with each other to leave a spatially local interaction. (b) In the NLO, the time-link links in $S_F$ are canceled by the time-like links in a plaquette $S_G$ (blue arrows) to leave a spatially non-local interaction. []{data-label="fig:links"}](links_full.eps){width="7cm"} In order to linearize the induced 4-fermi terms and to integrate out the fermions, we introduce complex bosonic auxiliary fields by the Stratonovich–Hubbard transformation. As for the LO 4-fermi terms in Eq.(\[eq:4-fermi-0\]), we introduce an auxiliary field $\phi(x) = \phi_\sigma(x)+i\epsilon(x)\phi_\pi(x)$. 
Another auxiliary field $\lambda=\lambda_1+i\lambda_2$ is introduced to linearize the NLO terms in Eq.(\[eq:4-fermi-1\]); the mean field value of $\lambda$ is determined by requiring the stationary condition of the effective action. Then we arrive at the LO and the NLO effective potential (free energy) written in terms of $\phi$ as $$\begin{aligned} F_{\rm eff}^{(0)}(\phi) &=& \frac{1}{4}|\phi|^2 -\frac{1}{2}\int_\mathbf{k} \ln\left[G^{-1}(\mathbf{k};\phi)\right], \label{eq:effpot-LO} \\ F_{\rm eff}^{(1){\rm C}}(\phi) &=& -\frac{\beta}{4} \sum_{j=1,2} \left[\int_\mathbf{k} G(\mathbf{k};\phi) \sin^2 k_j\right]^2. \label{eq:effpot-NLO}\end{aligned}$$ with the bosonic effective propagator defined as $G^{-1}(\mathbf{k};\phi) \equiv \left|\phi/2-m_*\right|^2 +\sum_{j=1,2}\sin^2 k_j$ and the momentum integration as $\int_\mathbf{k} \equiv (2\pi)^{-2} \int_{-\pi}^{\pi} d k_1 \int_{-\pi}^{\pi} d k_2$. Fig.\[fig:NLO\] shows $F_{\rm eff}^{(0)}$ in the chiral limit ($m=0$): Its minimum corresponds to the chiral condensate (the order parameter of chiral symmetry breaking) $\sigma \equiv |\langle \bar{\chi}\chi \rangle| = |\langle\phi\rangle|$, so that the spontaneous “chiral symmetry” breaking takes place in the strong coupling limit. The effective mass of fermions reads $M_F = m+(v_{_F}/a)(\sigma a^2/2)$, from the mass term of the effective action. ![The free energy in the strong coupling limit, $F_{\rm eff}^{(0)}(\phi)$, in the lattice unit as a function of $|\phi|$, in the chiral limit ($m=0$).[]{data-label="fig:NLO"}](NLO_new.eps){width="8cm"} Since $F_{\rm eff}^{(1)}$ is an increasing function of $|\phi|$, the chiral condensate $\sigma$ drops as $\beta$ grows. In other words, the chiral symmetry tends to be restored as the coupling strength becomes weaker. Up to the linear terms in $\beta$ and $m$, we can calculate $\sigma$ with the compact formulation as $$\sigma^{{\rm C}}(\beta,m) \simeq (0.240 - 0.297 \beta + 0.0239\ ma) a^{-2}, \label{eq:sigma-exp}$$ As we mentioned, the condensate in the non-compact formulation is simply obtained as $\sigma^{{\rm NC}}(\beta) = \sigma^{{\rm C}}(2\beta)$ up to NLO. The behavior that $\sigma^{{\rm NC}}(\beta)$ drops faster than $\sigma^{{\rm C}}(\beta)$ is consistent with the results of the Monte Carlo simulation for the same lattice model [@drut_2010]. Taking $a^{-1}\simeq a^{-1}_{_{\rm Hc}} = 1.39 \ {\rm keV}$ as a typical cutoff scale of our system, we obtain $\sigma^{{\rm C}}(\beta,m) \simeq \left[\left(0.680 - 0.421 \beta + \frac{1.39\ m}{\rm eV} \right) {\rm keV} \right]^2$. The dynamical fermion mass is estimated from the value of the chiral condensate as $M_F \simeq (0.523 - 0.623 \beta ) \ {\rm eV} \ + 3.05 m$, which is much smaller than the momentum cutoff scale $E_\Lambda \sim \pi v_{_F}/a = 13\mathrm{eV}$ of the original honeycomb lattice as long as the bare mass $m$ is small enough. Collective Excitations ====================== Here we study the fluctuations of the order parameter $\phi(x)$ around the symmetry broken state $\langle \phi \rangle = -\sigma$: the phase fluctuation (“$\pi$-exciton”) corresponds to $\phi_\pi(x)$ and the amplitude fluctuation (“$\sigma$-exciton”) corresponds to $\phi_\sigma(x)$. Propagators of those modes, $D_{\phi_{\sigma,\pi}}$ are derived from the second derivative of the effective action $S_{\rm eff}[\phi]$ with respect to the corresponding fields $\phi_{\sigma,\pi}$. 
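As a numerical cross-check of the numbers quoted above, the LO effective potential of Eq. (\[eq:effpot-LO\]) can be minimised directly; in the chiral and strong-coupling limits ($m=0$, $\beta=0$) the minimisation should return a condensate close to the $\sigma a^2 \simeq 0.24$ given in Eq. (\[eq:sigma-exp\]). The sketch below does this by brute-force scanning (the grid size and scan range are arbitrary choices, not from the paper).

```python
import numpy as np

def f_eff_lo(phi, n_k=400):
    """LO free energy, chiral limit: |phi|^2/4 - (1/2) * int_k ln G^{-1}(k; phi)."""
    k = (np.arange(n_k) + 0.5) * 2.0 * np.pi / n_k - np.pi
    k1, k2 = np.meshgrid(k, k)
    g_inv = 0.25 * phi**2 + np.sin(k1) ** 2 + np.sin(k2) ** 2
    # np.mean over the Brillouin zone implements (2*pi)^{-2} * integral d^2k
    return 0.25 * phi**2 - 0.5 * np.mean(np.log(g_inv))

phis = np.linspace(0.01, 1.0, 200)
sigma = phis[np.argmin([f_eff_lo(p) for p in phis])]
print(sigma)   # expected to come out near 0.24 (in lattice units)
```

The NLO contribution of Eq. (\[eq:effpot-NLO\]) could be added to the scanned function in the same way, shifting the minimum downwards with increasing $\beta$ as in Eq. (\[eq:sigma-exp\]).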
Their excitation energies are obtained from the imaginary pole of the propagator, ${D}_{\phi_{\sigma,\pi}}^{-1}(\mathbf{p}=0,\omega=i M_{\sigma,\pi}/v_{_F})=0$. As for the $\pi$-exciton, we obtain a mass formula in the leading order of $m$ as $$\begin{aligned} M_\pi \simeq \frac{2 v_{_F}}{a} \sqrt{\frac{m}{M_F(m=0)}}. \label{eq:mpi}\end{aligned}$$ Since $M_\pi$ vanishes in the chiral limit ($m=0$), this mode serves as a pseudo-Nambu–Goldstone (NG) boson emerging from the spontaneous breaking of chiral symmetry. From the axial Ward–Takahashi identity corresponding to the infinitesimal local chiral transformation, a simple relation can be derived, $(F^{\tau}_{\pi}{M_\pi})^2 = m \sigma$, which is analogous to the Gell-Mann–Oakes–Renner relation for the pion in QCD [@gmor]. Here the temporal “pion decay constant” $F_{\pi}^{\tau}$ is defined by the matrix element, $\langle 0 | J^{\rm axial}_4(0)| \pi(p) \rangle = 2 F_{\pi}^{\tau} p_{\pi}^\tau$, with the axial current $J^{\rm axial}_{\mu}(x) \equiv \frac{i}{2} \epsilon(x) \left[V_{\mu}^-(x) - V_{\mu}^+(x)\right]$. By solving the pole equation numerically, the $\sigma$-exciton is found to be a massive mode with $M_{\sigma} \simeq (1.30 - 0.47 \beta)({v_{_F}}/{a})+22.6m$. Since $M_{\sigma}$ acquires a large value comparable to the cutoff scale $E_\Lambda$, application of the low-energy effective theory in this channel is not quite justified. Conclusion ========== We have investigated the behavior of monolayer graphene analytically in/around the limit of strong Coulomb coupling, by means of the strong coupling expansion of U(1) lattice gauge theory. As for the pure gauge action, we have compared the results from the compact and non-compact formulations. In either case, we find that “chiral symmetry” (the sublattice exchange symmetry in the original honeycomb lattice) is spontaneously broken in the strong coupling limit with equal magnitude of the chiral condensate. As the coupling strength becomes weaker, chiral condensate from the non-compact formulation drops faster than that from the compact one. These results up to NLO in the strong coupling expansion agree qualitatively with the extrapolation of the numerical results of the lattice Monte Carlo simulations. We have also examined the collective excitations associated with the chiral symmetry breaking in our approach and have derived their dispersion relations. The phase fluctuation of the chiral condensate, the “$\pi$-exciton”, behaves as a pseudo-NG boson, like the pion in QCD. Experimental observation of such mode in vacuum-suspended graphene would be a good evidence for the spontaneous chiral symmetry breaking in the strong coupling regime. The amplitude fluctuation of the chiral condensate, the “$\sigma$-exciton,” acquires a large mass comparable to the intrinsic cutoff scale $E_\Lambda$, so that it needs further investigation without the low-energy approximation. There are several directions to be investigated in future: Behavior of the present model on a square lattice at finite temperature and finite chemical potential still remains an open problem both analytically and numerically. Formulating the strong coupling expansion on a honeycomb lattice would be of great importance. Extension of our strong coupling approach to the analysis of bilayer graphene, which has been attracting attentions recently [@Guinea_2010], would be also of interest. Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank H. Aoki, T.Z. Nakano, Y. Nishida, A. Ohnishi, T. Oka, S. 
Sasaki, E. Shintani and N. Yamamoto for discussions. Y. A.  is supported by Grant-in-Aid for Japan Society for the Promotion of Science (DC1, No.22.8037). T. H. is supported in part by the Grant-in-Aid for Scientific Research on Innovative Areas (No. 2004: 20105003) and by Japanese MEXT grant (No. 22340052). [99]{} K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, A. A. Firsov, Science **306**, 666 (2004). See, e.g. A. H. Castro Neto et al., Rev. Mod. Phys. **81**, 109 (2009). P. E. Wallace, Phys. Rev. **71**, 622 (1947). G. W. Semenoff, Phys. Rev. Lett.  [**53**]{}, 2449 (1984). See, e.g., A. H. Castro Neto, Physics **2**, 30 (2009). D. V. Khveshchenko, Phys. Rev. Lett. **87**, 246802 (2001); D. V. Khveshchenko and H. Leal, Nucl. Phys. B **687**, 323 (2004); D. V. Khveshchenko, J. Phys.: Condens. Matter **21**, 075303 (2009). E. V. Gorbar, V. P. Gusynin, V. A. Miransky and I. A. Shovkovy, Phys. Rev. B [**66**]{}, 045108 (2002). O. V. Gamayun, E. V. Gorbar and V. P. Gusynin, Phys. Rev. B [**81**]{}, 075429 (2010). I. F. Herbut, Phys. Rev. Lett. **97**, 146401, (2006). D. T. Son, Phys. Rev.  B **75**, 235423 (2007). J. E. Drut and D. T. Son, Phys. Rev. B **77**, 075115 (2008). A. Giuliani, V. Mastropietro and M. Porta, arXiv:1001.5347 \[cond-mat.str-el\]; arXiv:1005.2528 \[cond-mat.str-el\]. S. Hands and C. Strouthos, Phys. Rev. B **78**, 165423 (2008); W. Armour, S. Hands and C. Strouthos, arXiv:0910.5646 \[cond-mat.str-el\]. J. E. Drut and T. A. Lähde, Phys. Rev. Lett. **102**, 026802 (2009); Phys. Rev. B **79**, 165425 (2009). J. Giedt, A. Skinner and S. Nayak, arXiv:0911.4316 \[cond-mat.str-el\]. J. E. Drut, T. A. Lähde and L. Suoranta, arXiv:1002.1273 \[cond-mat.str-el\]. Reviewed in J. M. Drouffe and J. B. Zuber, Phys. Rept.  [**102**]{}, 1 (1983). Y. Araki and T. Hatsuda, Phys. Rev. B **82**, 121403(R) (2010). Y. Araki, arXiv:1010.0847 \[cond-mat.str-el\]. E. V. Gorbar, V. P. Gusynin, V. A. Miransky, Phys. Rev. D **64**, 105028 (2001). M. Gell-Mann, R. J. Oakes and B. Renner, Phys. Rev. **175**, 2195 (1968). See e.g., F. Guinea, Physics **3**, 1 (2010).
--- abstract: | We present the first reported case of the simultaneous metallicity determination of a gamma-ray burst (GRB) host galaxy, from both afterglow absorption lines as well as strong emission-line diagnostics. Using spectroscopic and imaging observations of the afterglow and host of the long-duration *Swift* GRB121024A at $z\,=\,2.30$, we give one of the most complete views of a GRB host/environment to date. We observe a strong damped Ly$\alpha$ absorber (DLA) with a hydrogen column density of log$N(\text{H\,{\sc i}})\,=\,21.88\pm0.10$, H$_2$ absorption in the Lyman-Werner bands (molecular fraction of log($f$)$\approx-1.4$; fourth solid detection of molecular hydrogen in a GRB-DLA), the nebular emission lines H$\alpha$, H$\beta$, \[\], \[\] and \[\], as well as metal absorption lines. We find a GRB host galaxy that is highly star-forming (SFR$\sim$40M$_\odot$yr$^{-1}$), with a dust-corrected metallicity along the line of sight of \[Zn/H\]$_{\rm corr} =-0.6\pm0.2$ ($\text{[O/H]}\sim-0.3$ from emission lines), and a depletion factor Zn/Fe=$0.85\pm0.04$. The molecular gas is separated by 400kms$^{-1}$ (and 1–3kpc) from the gas that is photo-excited by the GRB. This implies a fairly massive host, in agreement with the derived stellar mass of log(M$_*$/M$_\odot$) = $9.9^{+0.2}_{-0.3}$. We dissect the host galaxy by characterising its molecular component, the excited gas, and the line-emitting star-forming regions. The extinction curve for the line of sight is found to be unusually flat ($R_V\sim15$). We discuss the possibility of an anomalous grain size distributions. We furthermore discuss the different metallicity determinations from both absorption and emission lines, which gives consistent results for the line of sight to GRB121024A. title: | The warm, the excited, and the molecular gas:\ GRB121024A shining through its star-forming galaxy. [^1] --- \[firstpage\] Galaxies: abundances – gamma-ray burst: individual: GRB121024A Introduction ============ The study of gamma-ray burst (GRB) afterglows has proven to be a powerful tool for detailed studies of the interstellar medium (ISM) of star-forming galaxies, out to high redshifts [e.g. @vreeswijk04; @prochaska07; @ledoux09; @sparre13]. With quickly fading emission spanning the entire electromagnetic spectrum, GRB afterglows offer a unique opportunity to probe the surrounding environment. The intrinsic spectrum of the afterglow is well fitted with simple power-law segments, so the imprints of the intergalactic medium (IGM) as well as the ISM surrounding the burst are relatively easy to distinguish from the afterglow in the observed spectrum. Moreover, with absorption and emission-line analysis it is possible to determineparameters such as H[i]{} column density, metallicity, dust depletion, star-formation rate (SFR) and kinematics of the GRB host galaxy. Metallicity is a fundamental parameter for characterising a galaxy and it holds important information about its history. Metallicity might also play a crucial role in the GRB production mechanism. For GRB hosts, the metallicity is measured either from hydrogen and metal absorption lines, or by using diagnostics based on the fluxes of strong nebular emission lines, calibrated in the local Universe. Different calibrations are in use leading to some discrepancy [e.g. @Kudritzki], and the different diagnostics have their strengths and weaknesses (e.g. less sensitive to reddening, multiple solutions, or more sensitive at high metallicities). 
The absorption lines probe the ISM along the line of sight, while the nebular line diagnostics determine the integrated metallicity of the H[ii]{} regions of the host. For GRB damped Ly$\alpha$ absorbers [GRB-DLAs, $N$(H[i]{})$>2\times10^{20}$cm$^{-2}$ @wolfe05], a direct comparison of metallicity from the two methods is interesting because it can either provide a test of the strong-line methods or alternatively allow a measurement of a possible offset in abundances in H[ii]{} regions and in the ISM. So far, this comparison has only been carried out for a few galaxy counterparts of DLAs found in the line of sight of background QSOs [QSO-DLAs, e.g. @bowen05; @peroux; @noterdaeme12; @fynbo13; @JW14]. To our knowledge, a comparison for GRB-DLAs has not been reported before. For both emission and absorption measurements to be feasible with current instrumentation, the observed host needs to be highly star-forming, to have strong nebular lines, and at the same time be at a redshift high enough for the Ly$\alpha$ transition to be observed (at redshifts higher than $z\approx1.5$ the Ly$\alpha$ absorption line is redshifted into the atmospheric transmission window). GRB121024A is a $z=2.30$ burst hosted by a highly star-forming galaxy. We measure abundances of the GRB host galaxy in absorption and compare them with the metallicity determined by strong-line diagnostics using observed nebular lines from \[\], \[\], \[\] and the Balmer emission lines. Apart from the absorption features from metal lines, we also detect the Lyman-Werner bands of molecular hydrogen. Molecular hydrogen is hard to detect in absorption, because it requires high S/N and mid-high resolution. As long duration GRBs (t$_{\text{obs}}>\,$2s) are thought to be associated with the death of massive stars [e.g. @hjorth03; @stanek03; @sparre11; @cano13; @schulze14], they are expected to be found near regions of active star formation, and hence molecular clouds. In spite of this, there are very few detections of molecular absorption towards GRBs [see e.g @tumlinson07]. [@ledoux09] found that this is likely due to the low metallicities found in the systems observed with high resolution spectrographs ($R=\lambda/\Delta\lambda\gtrsim40000$). Typically, mid/high-resolution spectroscopy at a sufficient S/N is only possible for the brighter sources. As is the case for QSO-DLAs, lines of sight with H$_2$ detections will preferentially be metal-rich and dusty. The observed spectra are therefore UV-faint and difficult to observe [GRB080607 is a striking exception, where observations were possible thanks to its extraordinarily intrinsic luminosity and rapid spectroscopy, see @prochaska09]. Now with X-shooter [@xshooter] on the Very Large Telescope (VLT) we are starting to secure spectra with sufficient resolution to detect H$_2$ for fainter systems resulting in additional detections [@thomas1; @delia14]. Throughout this paper we adopt a flat $\Lambda$CDM cosmology with $H_0\,=\,71$kms$^{-1}$ and $\Omega_\text{M}\,=\,0.27$, and report $1\,\sigma$ errors ($3\,\sigma$ limits), unless otherwise indicated. Reference solar abundances are taken from [@asplund09], where either photospheric or meteoritic values (or their average) are chosen according to the recommendations of [@lodders09]. Column densities are in cm$^{-2}$. In Sect. \[obs\] we describe the data and data reduction used in this paper, in Sect. \[results\] we present the data analysis and results, which are then discussed in Sect. \[discussion\]. 
Observations and Data Reduction {#obs}
===============================

On 2012 October 24 at 02:56:12 UT the Burst Alert Telescope [BAT, @bat] onboard the *Swift* satellite [@swift] triggered on GRB121024A. The X-Ray Telescope (XRT) started observing the field at 02:57:45 UT, 93 seconds after the BAT trigger. About one minute after the trigger, Skynet observed the field with the PROMPT telescopes located at CTIO in Chile and the $16"$ Dolomites Astronomical Observatory telescope (DAO) in Italy [@prompt] in filters $g'$,$r'$,$i'$,$z'$ and $BRi$. Approximately 1.8 hours later, spectroscopic afterglow measurements in the wavelength range of 3000Å to 25000Å were acquired (at 04:45 UT), using the cross-dispersed echelle spectrograph X-shooter [@xshooter] mounted at ESO’s VLT. Then at 05:53 UT, 3 hours after the burst, the Gamma-Ray burst Optical/NIR Detector [GROND, @grond1; @grond2] mounted on the 2.2 m MPG/ESO telescope at La Silla Observatory (Chile) performed follow-up optical/NIR photometry simultaneously in $g', r', i', z'$ and $JHK$. About one year later (2013 November 07), VLT/HAWK-I imaging of the host was acquired in the $J$ (07:02:13 UT) and $K$ (06:06:47 UT) band. To supplement these, $B$, $R$ and $i$ band imaging was obtained at the Nordic Optical Telescope (NOT) on 2014 January 06 ($i$) and February 10 ($R$) and 19 ($B$). Gran Telescopio Canarias (GTC) observations in the $g$ and $z$ band were obtained on 2014 February 28. For an overview see Tables \[tab:xshooter\], \[tab:res\] and \[tab:phot\]. Linear and circular polarisation measurements for the optical afterglow of GRB121024A have been reported in [@wiersema].

  $t_{\rm obs}$ (UT)$^a$   $t_{\rm GRB}$ (min)$^b$   $t_{\rm exp.}$ (s)   Mean Airmass   Seeing (arcsec)
  ------------------------ ------------------------- -------------------- -------------- -----------------
  04:47:01                 116                       600                  1.23           0.6–0.7
  04:58:35                 127                       600                  1.19           0.6–0.7
  05:10:12                 139                       600                  1.16           0.6–0.7
  05:21:46                 151                       600                  1.13           0.6–0.7

  : X-shooter observations

\[tab:xshooter\]

X-shooter NIR/Optical/UV Spectroscopy
-------------------------------------

The X-shooter observation consists of four nodded exposures with exposure times of 600s each, taken simultaneously by the ultraviolet/blue (UVB), visible (VIS) and near-infrared (NIR) arms. The average airmass was 1.18 with a median seeing of $\sim$0.7 arcsec. The spectroscopy was performed with slit widths of 1.0, 0.9 and 0.9 arcsec in the UVB, VIS and NIR arms, respectively. The resolving power $R=\lambda$/$\Delta$$\lambda$ is determined from telluric lines to be $R\,=\,13000$ for the VIS arm. This is better than the nominal value due to the very good seeing. Following [@fynbo11] we then infer $R\,=\,7100$ and $R\,=\,6800$ for the UVB and NIR arms, see Table \[tab:res\] for an overview. X-shooter data were reduced with the ESO/X-shooter pipeline version 2.2.0 [@pipeline], rectifying the data on an output grid with a dispersion of 0.15Å/pixel in the UVB, 0.13Å/pixel in the VIS and 0.5Å/pixel in the NIR arm. The wavelength solution was obtained against arc-lamp frames in each arm. Flux-calibration was performed against the spectrophotometric standard GD71 observed during the same night. We further correct the flux-calibrated spectra for slit-losses by integrating over filter curves from GROND photometry shifted to X-shooter observation times (assuming a slope of $\alpha=0.8$). For the UVB arm, only the $g'$ band photometry is available, which covers the DLA (see Sect. \[abs\]), making this calibration less secure.
Wavelengths are plotted in vacuum and corrected for heliocentric motion.

  Arm   Slit width (arcsec)   $R=\lambda/\Delta\lambda$
  ----- --------------------- ---------------------------
  NIR   0.9                   6800
  VIS   0.9                   13000
  UVB   1.0                   7100

  : X-shooter resolution

\[tab:res\]

  Instrument   Time$^{a}$   Filter   Exp. time (s)   Seeing (arcsec)   Mag. (Vega)
  ------------ ------------ -------- --------------- ----------------- ----------------
  MPG/GROND    3.0 h        $g'$     284             1.55              $20.79\pm0.07$
  MPG/GROND    3.0 h        $r'$     284             1.40              $19.53\pm0.05$
  MPG/GROND    3.0 h        $i'$     284             1.26              $19.05\pm0.07$
  MPG/GROND    3.0 h        $z'$     284             1.39              $18.66\pm0.08$
  MPG/GROND    3.0 h        $J$      480             1.36              $17.84\pm0.09$
  MPG/GROND    3.0 h        $H$      480             1.29              $16.98\pm0.10$
  MPG/GROND    3.0 h        $K_s$    480             1.21              $16.07\pm0.11$
  VLT/HAWK-I   355.2 d      $J$      $240\times10$   0.6               $22.4\pm0.1$
  VLT/HAWK-I   355.1 d      $K$      $240\times10$   0.5               $20.8\pm0.2$
  NOT/ALFOSC   483.9 d      $B$      $5\times480$    1.3               $24.2\pm0.2$
  NOT/ALFOSC   475.0 d      $R$      $9\times265$    1.1               $23.8\pm0.3$
  NOT/ALFOSC   440.0 d      $i$      $9\times330$    0.9               $23.8\pm0.3$
  GTC/OSIRIS   491.3 d      $g'$     $3\times250$    1.6               $24.9\pm0.1$
  GTC/OSIRIS   491.3 d      $z'$     $10\times75$    1.4               $23.2\pm0.3$

\[tab:phot\]

NOT, GTC and VLT/HAWK-I imaging
-------------------------------

To derive physical parameters of the host of GRB121024A via stellar population synthesis modelling, we obtained late-time photometry from VLT/HAWK-I, NOT and GTC. Exposure times and seeing can be found in Table \[tab:phot\]. $J$ and $K$ band images were observed with HAWK-I on the Yepun (VLT-UT4) telescope at the ESO Paranal Observatory in Chile. HAWK-I is a near-infrared imager with a pixel scale of 0.106 arcsec/pix and a total field of view of $7.5\times7.5$ arcmin. $B$, $R$ and $i$ images were obtained with the ALFOSC optical camera on the NOT. The photometric calibration was carried out by observing the standard star GD71 at a similar airmass to the GRB field. $g^\prime$ and $z^\prime$-band host galaxy images were taken with the 10.4m GTC. The images were acquired with the OSIRIS instrument which provides an unvignetted field of view of $7.8\times7.8$ arcmin and a pixel scale of 0.25 arcsec/pix [@cepa00]. Images were taken following a dithering pattern. The $z^\prime$-band images were defringed by subtracting an interference pattern which was constructed based on the dithered individual frames. The photometric calibration was carried out by observing the standard star SA95-193 [@smith02]. NOT and GTC are located at the observatory of Roque de los Muchachos, La Palma, Spain. All images were dark-subtracted and flat-fielded using IRAF standard routines.
GROND and Skynet Photometry --------------------------- GROND data was reduced using standard IRAF tasks [@tody97; @thomas08]. The afterglow image was fitted using a general point spread function (PSF) model obtained from bright stars in the field. The optical images in $g', r', i', z'$ were calibrated against standard stars in the SDSS catalogue, with an accuracy of $\pm0.03$mag. The NIR magnitudes were calibrated using stars of the 2MASS catalogue, with an accuracy of $\pm0.05$mag. Skynet obtained images of the field of GRB121024A on 2012 October 24-25 with four $16''$ telescopes of the PROMPT array at CTIO, Chile, and the $16''$ DAO in Italy. Exposures ranging from 5 to 160s were obtained in the $BVRI$ (PROMPT) and $g', r', i'$ (DAO) bands, starting at 02:57:07UT ($t = 55$s since the GRB trigger) and continuing until $t = 7.3$h on the first night, and continuing from $t = 20.7 - 25.5$h on the second night. Bias subtraction and flat-fielding were performed via Skynet’s automated pipeline. Post-processing occurred in Skynet’s guided analysis pipeline, using both custom and IRAF-derived algorithms. Differential aperture photometry was performed on single and stacked images, with effective exposure times of 5s to 20min on the first night, and up to $\sim$4h on the second night. Photometry was calibrated to the catalogued $B, V, g', r', i'$ magnitudes of five APASS DR7 stars in the field, with $g', r', i'$ magnitudes transformed to RI using transformations obtained from prior observations of Landoldt stars (Henden, A. et al., in preparation). The Skynet magnitudes can be seen in Appendix \[appendix\]. Analysis and Results {#results} ==================== Absorption Lines {#abs} ---------------- Component Transition a b c d e -------------------- ------------------------------------------------------------------------------------------------ ---------------------- ---------------------- ------------------ ------------------ ------------------ $z$ — 2.2981 2.2989 2.3017 2.3023 2.3026 $b$ (kms$^{-1}$) — 26 21 20 22 35 $v$ (kms$^{-1}$) — $-264$ $-191$ $64$ $118$ $145$ log($N$) Mg[i]{} $\lambda$1827, $\lambda$2026(b) $13.97\pm0.05$ $13.57\pm0.05$ $<13.4$ $13.57\pm0.07$ $<13.4$ Al[iii]{} $\lambda$1854(s), $\lambda$1862(s) — — — — — Si[ii]{} $\lambda$1808(s) — — — — — S[ii]{} $\lambda$1253(s) — — — — — Ca[ii]{} $\lambda$3934, $\lambda$3969 $13.25\pm{0.16}^{a}$ $12.50\pm{0.16}^{b}$ $12.20\pm{0.16}$ $11.90\pm{0.16}$ $11.20\pm{0.16}$ Cr[ii]{} **$\boldsymbol{\lambda}$2056**, $\lambda$2062(b), **$\boldsymbol{\lambda}$2066** $13.47\pm0.05$ $13.48\pm0.05$ $13.39\pm0.05$ $13.67\pm0.05$ $13.34\pm0.09$ Mn[ii]{} **$\boldsymbol{\lambda}$2576**, **$\boldsymbol{\lambda}$2594**, **$\boldsymbol{\lambda}$2606** $13.15\pm0.05$ $13.07\pm0.05$ $12.71\pm0.05$ $13.21\pm0.05$ $12.93\pm0.05$ Fe[ii]{} $\lambda$1611, **$\boldsymbol{\lambda}$2260**, **$\boldsymbol{\lambda}$2249** $15.15\pm0.05$ $15.09\pm0.05$ $14.81\pm0.05$ $15.27\pm0.05$ $15.12\pm0.05$ Ni[ii]{} $\lambda$1345, $\lambda$1454, $\lambda$1467.3, $\lambda$1467.8, $\lambda$1709 $13.91\pm0.10$ $13.88\pm0.10$ $13.95\pm0.09$ $14.17\pm0.10$ $13.73\pm0.29$ Zn[ii]{} $\lambda$2026(b), $\lambda$2062(b) $13.14\pm0.05$ $13.05\pm0.05$ $12.50\pm0.08$ $13.40\pm0.05$ $12.19\pm0.40$ Component $\alpha$ $\beta$ $z$ — 2.2981 2.2989 — — — $b$ (kms$^{-1}$) — 28 30 — — — $v$ (kms$^{-1}$) — $-264$ $-191$ — — — log($N$) Fe[ii]{}\* $\lambda$2389, $\lambda$2396(b) $13.25\pm0.05$ $13.16\pm0.05$ — — — Fe[ii]{}\*\* $\lambda$2396(b), $\lambda$2405(b), $\lambda$2607 $12.92\pm0.05$ 
$12.80\pm0.05$ — — — Fe[ii]{}\*\*\* $\lambda$2405(b), $\lambda$2407, $\lambda$2411(b) $12.63\pm0.05$ $12.58\pm0.07$ — — — Fe[ii]{}\*\*\*\* $\lambda$2411(b), $\lambda$2414, $\lambda$2622 $12.53\pm0.06$ $12.61\pm0.05$ — — — Fe[ii]{}\*\*\*\*\* $\lambda$1559, $\lambda$2360 $13.95\pm0.08$ $13.68\pm0.13$ — — — Ni[ii]{}\*\* $\lambda$2166, $\lambda$2217, $\lambda$2223 $13.43\pm0.05$ $13.47\pm0.05$ — — — Si[ii]{}\* $\lambda$1309, $\lambda$1533, $\lambda$1816$^{e}$ $14.98\pm0.11$$^{c}$ $14.39\pm0.05^{d}$ — — — \ \[tab:components\] Ion log($N$/cm$^{-2}$)$_\text{{tot}}$ log($N$/cm$^{-2}$)$_{\text{a+b}}$ log($N$/cm$^{-2}$)$_{\text{c+d+e}}$ X/H$_{\text{tot}}$ X/Fe X/Fe$_{\text{a+b}}$ X/Fe$_{\text{c+d+e}}$ ----------- ----------------------------------- ----------------------------------- ------------------------------------- -------------------- ---------------- ---------------------- ----------------------- -- H[i]{} $21.88\pm0.10$ — — — — — — Mg[i]{} $<14.31$ $14.11\pm0.03$ $<13.86$ — — — — Al[iii]{} $>14.11$ — — — — — — Si[ii]{} $>16.35$ — — $>-1.0$ $>0.53$ — — S[ii]{} $>15.90$ — — $>-1.1$ $>0.46$ — — Ca[ii]{} $13.37\pm0.12$ $13.32\pm0.13$$^{a}$ $12.40\pm0.12$ $-2.9\pm0.2$ $-1.29\pm0.13$ $-0.97\pm0.14$$^{a}$ $-2.02\pm0.11$ Cr[ii]{} $14.18\pm0.03$ $13.78\pm0.04$ $13.97\pm0.03$ $-1.3\pm0.1$ $0.22\pm0.05$ $0.18\pm0.05$ $0.24\pm0.04$ Mn[ii]{} $13.74\pm0.03$ $13.41\pm0.04$ $13.47\pm0.03$ $-1.6\pm0.1$ $-0.01\pm0.05$ $0.03\pm0.05$ $-0.04\pm0.04$ Fe[ii]{} $15.82\pm0.05$ $15.45\pm0.05$ $15.58\pm0.03$ $-1.6\pm0.1$ — — — Ni[ii]{} $14.70\pm0.06$ $14.33\pm0.05$ $14.47\pm0.06$ $-1.4\pm0.1$ $0.17\pm0.08$ $0.02\pm0.08$ $0.16\pm0.06$ Zn[ii]{} $13.74\pm0.03$ $13.40\pm0.03$ $13.47\pm0.04$ $-0.7\pm0.1$ $0.85\pm0.06$ $0.88\pm0.05$ $0.83\pm0.05$ \ \[tab:metal\] The most prominent absorption feature is the Ly$\alpha$ line. We plot the spectral region in Fig. \[fig:dla\]. Over-plotted is a Voigt-profile fit to the strong Ly$\alpha$ absorption line yielding log$N(\text{H\,{\sc i}})=21.88\pm0.10$. The error takes into account the noise in the spectrum, the error on the continuum placement and background subtraction at the core of the saturated lines. Table \[tab:components\] shows the metal absorption lines identified in the spectrum. To determine the ionic column densities of the metals, we model the identified absorption lines with a number of Voigt-profile components, as follows. We use the Voigt-profile fitting software VPFIT[^2] version 9.5 to model the absorption lines. We first normalise the spectrum around each line, fitting featureless regions with zero- or first-order polynomials. To remove the contribution of atmospheric absorption lines from our Voigt-profile fit, we compare the observed spectra to a synthetic telluric spectrum. This telluric spectrum was created following [@smette] as described by [@annalisa12] and assuming a precipitable water-vapour column of $2.5$mm. We systematically reject from the fit the spectral regions affected by telluric features at a level of $>1$ per cent[^3]. None of the absorption lines that we include are severely affected by telluric lines. The resulting column densities are listed in Tables \[tab:components\] and \[tab:metal\] for lines arising from ground-state and excited levels, respectively. We report formal 1-$\sigma$ errors from the Voigt-profile fitting. We note that these do not include the uncertainty on the continuum normalisation, which can be dominant for weak lines [see e.g. @annalisa12]. We hence adopt a minimum error of 0.05dex to account for this uncertainty. 
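To illustrate how the quoted H[i]{} column density translates into the damped profile shown in Fig. \[fig:dla\], the short sketch below (our own illustration, not the actual fit) evaluates a single-component Voigt profile using standard Ly$\alpha$ atomic data; for a damped line the choice of Doppler parameter and the exact absorber redshift are largely irrelevant, so round values are assumed.

```python
import numpy as np
from scipy.special import wofz

# Standard atomic data for H I Lyman-alpha
LAM0 = 1215.6701e-8   # rest wavelength [cm]
F_OSC = 0.4164        # oscillator strength
GAMMA = 6.265e8       # damping constant [s^-1]
C = 2.998e10          # speed of light [cm/s]
M_E, E_ESU = 9.109e-28, 4.803e-10   # electron mass [g] and charge [esu]

def tau_lya(wave_aa, logN, b_kms=20.0, z=2.30):
    """Ly-alpha optical depth of an absorber with log N(HI) = logN at redshift z."""
    N = 10.0**logN                       # column density [cm^-2]
    b = b_kms * 1e5                      # Doppler parameter [cm/s]
    lam_rest = wave_aa * 1e-8 / (1.0 + z)
    nu, nu0 = C / lam_rest, C / LAM0
    dnu_d = nu0 * b / C                  # Doppler width [Hz]
    a = GAMMA / (4.0 * np.pi * dnu_d)
    x = (nu - nu0) / dnu_d
    voigt_h = wofz(x + 1j * a).real      # Voigt function H(a, x)
    sigma0 = np.sqrt(np.pi) * E_ESU**2 * F_OSC / (M_E * C * dnu_d)
    return N * sigma0 * voigt_h

wave = np.linspace(3850.0, 4200.0, 2000)          # observed wavelength [AA]
profile = np.exp(-tau_lya(wave, logN=21.88))      # normalised DLA profile
```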
The error on the redshifts of each component is $0.0001$. The Voigt-profile fits to the metal lines are shown in Figs. \[fig:absorption\] and \[fig:fine\]. The fit to the absorption lines from ground-state levels is composed of five components (a-e). We consider the redshift of the \[\] $\lambda$5007 emission-line centroid $z=2.3015$, as the reference zero-velocity. Components ’a’ through ’e’ are shifted $-264$, $-191$, $64$, $118$ and $145$kms$^{-1}$, respectively. Given the resolution of the instrument of 23kms$^{-1}$ (VIS arm), the individual components are blended, and therefore the profile decomposition is not unequivocal. However, regardless of the properties (and numbers) of the individual components, they are clearly divided into two well separated groups: a+b and c+d+e. When forcing more components to the fit of each group, the resultant total column density are consistent with the previous estimate for each of the two groups. We stress that the resultant $b$-values are not physical, but likely a combination of smaller unresolved components. First we determine redshift $z$ and broadening parameter $b$ (purely turbulent broadening) of the individual components of the line profile, by considering only a master-sample of unblended and unsaturated lines (shown in bold in Table \[tab:components\]), with $b$ and $z$ tied among transitions of different ions. Values for $z$ and $b$ were then frozen for the rest of the absorption lines, and the column densities were fitted. We report 3-$\sigma$ lower and upper limits for the saturated and undetected components, respectively. For the saturated lines Al[iii]{}, Si[ii]{} and S[ii]{} we do not report column densities from the Voigt-profile fit, but instead from the measured equivalent widths (EWs), converted to column densities assuming a linear regime. For these, we only report the total column density for all the components together. At the H[i]{} column density that we observe, we expect most elements to be predominantly in their singly ionised state [@wolfe05]. We hence expect much of the Mg to be in Mg[ii]{} (for this reason we do not report the abundance of Mg[i]{} in Table \[tab:metal\]). Ca[ii]{} seems to have a different velocity composition than the rest of the lines. One possibility is that Ca[ii]{} may extend to a slightly different gas phase, as its ionisation potential is the lowest among the observed lines (less than 1Ryd = 13.6eV). Alternatively, since the Ca[ii]{} lines are located in the NIR arm, a small shift in the wavelength solution with respect to the VIS arm could cause the observed difference. However, a positive comparison between the observed and synthetic telluric lines rules out any shift in the wavelength calibration. We have allowed $z$ and $b$ to have different values for the two Ca[ii]{} lines. This resulted in a slightly different a+b component, but the same c+d+e component as for the rest of the sample. ![image](vpfit_plot_resonance_lines.ps){width="1.8\columnwidth"} ![image](vpfit_plot_finestructure_lines.ps){width="1.45\columnwidth"} The fine-structure lines show a different velocity profile composed only of two components, $\alpha$ and $\beta$, see Table \[tab:components\]. The redshift of $\alpha$ and $\beta$ are the same as for component a and b found for the resonance lines (but different broadening parameters). Remarkably, no fine-structure lines are detected at the position of components c+d+e. 
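Before turning to the excited levels in more detail, we note that the equivalent-width based column densities used above for the saturated lines follow the standard optically thin (linear curve-of-growth) relation, $N\,[{\rm cm^{-2}}] = 1.13\times10^{20}\,W_r[\text{Å}]/(f\,\lambda^2[\text{Å}])$, which yields a lower limit when a line is saturated. A minimal sketch (our own illustration; the equivalent width below is purely hypothetical and the Si[ii]{} $\lambda$1808 oscillator strength is an assumed standard value):

```python
import numpy as np

def logN_thin(ew_rest_aa, wavelength_aa, f_osc):
    """log10 column density [cm^-2] in the optically thin limit
    (a lower limit when the line is saturated)."""
    return np.log10(1.13e20 * ew_rest_aa / (f_osc * wavelength_aa**2))

# Hypothetical numbers, for illustration only: a rest-frame EW of 0.4 AA
# for Si II 1808 with an assumed oscillator strength f ~ 2.1e-3.
print(round(logN_thin(0.4, 1808.01, 2.1e-3), 2))   # ~15.8
```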
The Si[ii]{}\* lines are poorly fitted when tied together with the rest of the fine-structure lines, so we allow their $z$ and $b$ values to vary freely. These components are then referred to as $\gamma$ and $\delta$, which are quite similar to components $\alpha$ and $\beta$, respectively, see Fig. \[fig:absorption\]. The column density for component $\gamma$ of the stronger Si[ii]{}\* line appears strongly saturated, so only the $\lambda$1816 line has been used to determine the column density in this component. The total ionic column densities (summed over individual components and including excited levels when necessary) are given in Table \[tab:metal\]. We also report the column densities of the groups of component a+b and c+d+e, which are well resolved from each other, unlike the individual components. Our first metallicity estimate is from Zn, as this element is usually not heavily depleted into dust [see e.g. @pettini94]. We derive Zn/H=$-0.7\pm0.1$ (the other non-refractory elements Si and S are saturated, but the limits we find are consistent). This is in agreement with the value reported in [@cucchiara14]. We note that high ionisation lines from Si[iv]{} as well as C[iv]{} are detected, but are highly saturated, see Fig. \[fig:high\]. ![High ionisation lines. These lines are highly saturated. See Fig. \[fig:absorption\] for details.[]{data-label="fig:high"}](highions.ps){width="0.7\columnwidth"} Dust Depletion {#depl} -------------- \[extinction\] Refractory elements, such as Fe, Ni, and Cr, can be heavily depleted into dust grains [e.g. @SS96; @ledoux02 De Cia et al. in prep.], and thus can be missing from the gas-phase abundances. A first indicator of the level of depletion in the ISM is the relative abundance \[Zn/Fe\] (referred to as the depletion factor), because Zn is marginally if not at all depleted into dust grains, and its nucleosynthesis traces Fe. We measure \[Zn/Fe\] $= 0.85 \pm 0.06$. This value is among the highest for QSO-DLAs, but typical at the observed metallicity of \[Zn/H\] $=-0.7\pm0.1$ [e.g. @noterdaeme08 De Cia et al. in prep.]. Following [@annalisa13] we calculate a column density of Fe in dust-phase of $\log N(\mbox{Fe})_{\rm dust} = 16.74\pm0.17$ and a dust-corrected metallicity of \[Zn/H\]$_{\rm corr} = -0.6\pm0.2$, indicating that even Zn is mildly depleted in this absorber, by $\sim0.1$dex. This is not surprising given the level of depletion, as also discussed by [@jenkins09]. We also compare the observed abundances of a variety of metals (namely Zn, S, Si, Mn, Cr, Fe, and Ni) to the depletion patterns of a warm halo (H), warm disk+halo (DH), warm disk (WD) and cool disk (CD) types of environments, as defined in [@SS96]. These are fixed depletion patterns observed in the Galaxy and calculated assuming that Zn is not depleted into dust grains. We fit the observed abundances to the depletion patterns using the method described in [@savaglio01]. We find that none of the environments are completely suitable to describe the observed abundances. The fits to cool- and warm-disk patterns are displayed in Fig. \[fig:depletion\] ($\chi^{2}_\nu$=1.18 and 1.58, respectively, with 4 degrees of freedom). For the cool disk the lower limit on the Si column density is not very well reproduced, while the fit for the warm disk overestimates the Mn abundance. The real scenario could be somewhere in between these two environments. 
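The depletion factor and metallicity quoted above follow directly from the total column densities in Table \[tab:metal\] and a set of solar reference abundances. A short numerical check (our own illustration; the solar values below are the Asplund et al. 2009 photospheric numbers, whereas the paper averages photospheric and meteoritic values following Lodders 2009, so small differences in the last digit are expected):

```python
# Observed total column densities (log10, cm^-2) from Table [tab:metal]
logN_HI, logN_Zn, logN_Fe = 21.88, 13.74, 15.82

# Solar abundances 12 + log(X/H), photospheric values (Asplund et al. 2009)
SOL_ZN, SOL_FE = 4.56, 7.50

ZnH = (logN_Zn - logN_HI) - (SOL_ZN - 12.0)      # [Zn/H]  ~ -0.7
ZnFe = (logN_Zn - logN_Fe) - (SOL_ZN - SOL_FE)   # [Zn/Fe] ~ +0.85
print(round(ZnH, 2), round(ZnFe, 2))
```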
Alternatively, the actual depletion pattern is different from what has been observed by [@SS96], or there are some nucleosynthesis effects which we cannot constrain for our case. Another quantity that is very useful to derive from the observed dust depletion is the dust-to-metals ratio ([$\mathcal{{DT\!\!M}}$]{}, normalised by the Galactic value). Constraining the [$\mathcal{{DT\!\!M}}$]{} distribution in a variety of environments can indeed shed light on the origin of dust [e.g. @mattsson]. Based on the observed \[Zn/Fe\] and following [@annalisa13], we calculate [$\mathcal{{DT\!\!M}}$]{} $=1.01\pm0.03$, i.e. consistent with the Galaxy. From the depletion-pattern fit described above we derive similar, although somewhat smaller, values of [$\mathcal{{DT\!\!M}}$]{} $=0.84\pm0.02$ (CD) and [$\mathcal{{DT\!\!M}}$]{} $=0.89\pm0.02$ (WD). These values are in line with the distribution of the [$\mathcal{{DT\!\!M}}$]{} with metallicity and metal column densities reported by [@annalisa13], and are also consistent with those of [@zafar13]. Following [@zafar13], we also calculate [$\mathcal{{DT\!\!M}}$]{} $=0.1$ based on the dust extinction $A_V$ that we model from the SED fit (Sect. \[sed\]). Due to the small amount of reddening in the SED, this [$\mathcal{{DT\!\!M}}$]{}(A$_V$) value is a factor of 10 lower than expected at the metal column densities observed. This will be discussed further in Sect. \[ext\]. At the metallicity of GRB121024A ($\sim1/3$ solar), it is not possible to draw further conclusions on the dust origin based on the [$\mathcal{{DT\!\!M}}$]{}. Both models of pure stellar dust production and those including dust destruction and grain growth in the ISM converge to high (Galactic-like) [$\mathcal{{DT\!\!M}}$]{} values at metallicities approaching solar [@mattsson].

![The dust-depletion pattern fit for a cold disk (red solid curve) and a warm disk (red dashed curve) to the observed abundances measured from absorption-line spectroscopy (diamonds and arrows, for the constrained and $3\,\sigma$ limits, respectively).[]{data-label="fig:depletion"}](host_depletionpattern.eps){width="0.95\columnwidth"}

Distance between GRB and Absorbing Gas
--------------------------------------

The most likely origin of the fine-structure transitions observed in the a+b ($\alpha$+$\beta$) component is photo-excitation by UV photons from the GRB afterglow itself [see e.g. @prochaska06; @vreeswijk07]. Assuming the afterglow to be the only source of excitation, we model the population of the different levels of Fe and Ni, closely following [@vreeswijk13]. Using an optical light curve to estimate the luminosity of the afterglow, we can then determine how far the excited gas must be located from the GRB site for the afterglow to be able to excite these levels. We model the total column density from component a+b ($\alpha$+$\beta$) of all observed levels (ground state and excited states) of Ni[ii]{} and Fe[ii]{}. We input the optical light curve from Skynet, see Fig. \[fig:skynet\] and Tables in Appendix \[appendix\], which is extrapolated to earlier times using the power-law decay observed. We use the broadening parameters $b$ derived from the Voigt-profile fits, and the same atomic parameters [see @vreeswijk13]. The best fit (see Fig. \[fig:distance\]) is obtained with a distance of $590\pm100$pc between the cloud and the burst, and a cloud size of $<333$pc (1$\sigma$). The resultant fit is rather poor ($\chi^2$/d.o.f$=40.6/4$). As can be seen in Fig.
\[fig:absorption\] and \[fig:fine\], the column densities of the ground level of Ni[ii]{} (as probed by Ni[ii]{} $\lambda\lambda\lambda$ 1709, 1454, 1467) and 5th excited level of Fe[ii]{} (as probed by Fe[ii]{} 5s $\lambda\lambda$ 1559, 2360) are not very well constrained due to the observed spectrum having a low S/N ratio near those features. The formal errors from the Voigt profile fit are likely an underestimate of the true error for these column densities. This, in turn, results in the $\chi^2$ of the excitation model fit being overestimated. Furthermore the lack of spectral time series means the resultant parameters are not well constrained. For the c+d+e component we are able to set a lower limit of 1.9kpc on the distance to the burst using Fe[ii]{}, and 3.5kpc using Si[ii]{} (3$\sigma$). Since Si[ii]{} is saturated, we use the EW to determine the column density, but that only gives the total value of all components together. Hence, for the c+d+e component we fitted using VPFIT and compared the total column density with what we get from the EWs. After establishing that both methods yield the same result, we feel confident in using the column density of log$N$(Si[ii]{})$_{\text{c+d+e}} > 15.99$ together with a detection limit log$N$(Si[ii]{}\*)$_{\text{c+d+e}} < 12.80$ on the 1265Å line, as this is the strongest of the Si[ii]{}\* lines. The lack of vibrationally-excited H$_2$ in the spectra, see below, is in agreement with a distance $\gg100$pc, see [@draine00]. ![GRB-afterglow light curve from the Skynet instruments, used as input for the population modelling. The legend gives the instrument and observational band. The black arrow indicates the starting point of the X-shooter observations. Observations started 55s after the GRB trigger. See online version for colours.[]{data-label="fig:skynet"}](lc_skynet_log.ps){width="1.10\columnwidth"} ![Best-fit model for the excited-level populations of the a+b ($\alpha$+$\beta$) column densities of Fe[ii]{} (top panel) and Ni[ii]{} (bottom). Black lines show the fit to the resonance level. For Fe[ii]{}, from the lower levels and up, excited-level population are shown with red, green, purple, orange and blue. For Ni[ii]{} the red line shows the first excited level, while the blue line shows the second. Open circles show the actual values from Voigt-profile fits. See online version for colours.[]{data-label="fig:distance"}](grb1024feni2_conterr_nofit.ab.ps){width="1.20\columnwidth"} Molecular Hydrogen {#molecules} ------------------ We detect Lyman- and Werner-band absorption lines of molecular hydrogen at redshift $z\,=\,2.3021$ (corresponding to metal-line component “c+d+e”) in rotational levels J=0, 1, 2 and 3, see Fig. \[fig:H2\]. The fitting and analysis of the molecular hydrogen transition lines follow [@ledoux02; @ledoux03] and [@thomas1]. We performed a Voigt-profile fit of lines mainly from the Lyman bands L0-0 up to L3-0, as these are found in the less noisy part of the spectrum (a few J=2 and 3 lines from the Lyman bands L4-0 and L5-0 were also fitted). $J=0$ and 1 lines are strong and fairly well constrained by the presence of residual flux around them, hinting at damping wings in $L\ge 1$. Given the low spectral resolution of the data and the possibility of hidden saturation, we tested a range of Doppler parameters. The estimated H$_2$ column densities, log $N$(H$_2$) are given in Table \[tab:molecules\] for Doppler parameter values of $b=1$ and $10$kms$^{-1}$ resulting in log$N(\text{H}_2)$=19.8–19.9. 
Using the column density of neutral hydrogen for component ’c+d+e’ of log$N(\text{H\,I})=21.6$, calculated assuming the same Zn metallicity for the two main velocity components (’a+b’ and ’c+d+e’), this results in a molecular fraction of the order of log $f\sim-1.4$, where $f$$\equiv$2$N$(H$_2$)/($N$(HI)+2$N$(H$_2$)). For the component ’a+b’ at redshift $z\sim2.2987$, we report log$N(\text{H}_2)<18.9$ as a conservative upper limit. A more detailed analysis is not possible because of the high noise level. The implications of this detection are discussed in Sect. \[GRBmol\]. We searched for vibrationally-excited H$_2$ by cross-correlating the observed spectrum with a theoretical model from [@draine00] and [@DH], similar to the procedure outlined in [@thomas1]. There is no evidence for H$_{2}^{*}$ in our data, neither through the cross-correlation nor for individual strong transitions, and we set an upper limit of 0.07 times the optical depth of the input model. This approximately corresponds to $\log N$(H$_{2}^{*})<15.7$. A column density of H$_2^*$ as high as seen in e.g. GRBs 120815A or 080607 [@sheffer09] would have been clearly detected in our data. We furthermore note that CO is not detected. We set a conservative limit of log$N(\text{CO})<14.4$, derived by using four out of the six strongest CO AX bandheads with the lowest 6 rotational levels of CO. The wavelength ranges of the other two bandheads are strongly affected by metal lines, and thus do not provide constraining information.

![image](h2fig_5panel.eps){width="\textwidth"}

  Rotational level   log $N$(H$_2$), $b=10$kms$^{-1}$   log $N$(H$_2$), $b=1$kms$^{-1}$
  ------------------ ---------------------------------- ---------------------------------
  $J=0$              19.7                               19.7
  $J=1$              19.2                               19.3
  $J=2$              16.1                               18.3
  $J=3$              16.0                               18.2
  Total              19.8                               19.9

\[tab:molecules\]

Emission Lines
--------------

In the NIR spectrum, we detect H$\alpha$, H$\beta$, the \[O[ii]{}\] $\lambda$$\lambda$3727, 3729 doublet, \[N[ii]{}\] $\lambda$6583 (the highest-redshift \[N[ii]{}\] detection published for a GRB host) and the two \[O[iii]{}\] $\lambda$$\lambda$4959, 5007 lines. Table \[tab:flux\] shows the fluxes (extinction-corrected, see Sect. \[bd\]). The reported fluxes are derived from Gaussian fits, with the background tied between the \[O[ii]{}\] doublet and H$\beta$, and between H$\alpha$ and \[N[ii]{}\], assuming a slope of the afterglow spectrum of 0.8. \[O[ii]{}\] is intrinsically a doublet, so we fit a double Gaussian with a fixed wavelength spacing based on the wavelength of the rest-frame lines. Using the GROND photometry, we estimate a slit-loss correction factor of $1.25\pm0.10$. Fig. \[fig:emission\] shows the emission-line profiles, the 2D as well as the extracted 1D spectrum. The figure shows a Gaussian fit to the lines, after subtracting the PSF for the continuum [done by fitting the spectral trace and PSF as a function of wavelength locally around each line, see @moller for details]. For the weaker \[N[ii]{}\], a formal $\chi^2$ minimisation is done by varying the scale of a Gaussian with fixed position and width. The noise is estimated above and below the position of the trace (marked by a horizontal dotted line in Fig. \[fig:emission\]). We assign the zero-velocity reference at the redshift of the \[O[iii]{}\] $\lambda$5007 line. For the weaker \[N[ii]{}\] line, we fix the Gaussian-profile fit to be centred at this zero-velocity.
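For the Gaussian fits described above, the integrated line flux and the intrinsic width follow from the fitted parameters in the usual way. The sketch below (our own illustration, with purely illustrative numbers; the only quantity taken from this work is the NIR-arm resolving power $R\approx6800$ of Table \[tab:res\]) shows the two conversions:

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]

def gaussian_flux(amplitude, sigma_aa):
    """Integrated flux of a Gaussian line, F = A * sigma * sqrt(2*pi);
    the result carries the amplitude units times Angstrom."""
    return amplitude * sigma_aa * np.sqrt(2.0 * np.pi)

def intrinsic_fwhm(fwhm_obs_kms, resolving_power=6800.0):
    """Remove instrumental broadening in quadrature from an observed FWHM."""
    fwhm_inst = C_KMS / resolving_power
    return np.sqrt(fwhm_obs_kms**2 - fwhm_inst**2)

# Illustrative values only (not the measured ones):
# amplitude 2.0 in units of 1e-17 erg/s/cm^2/AA and sigma = 5 AA
print(round(gaussian_flux(2.0, 5.0), 1))   # ~25.1, i.e. 25.1e-17 erg/s/cm^2
print(round(intrinsic_fwhm(220.0), 1))     # ~215 km/s
```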
  Transition                Wavelength$^{a}$     Flux$^{b}$      Width$^{c}$    Redshift
  ------------------------- -------------------- --------------- -------------- ---------------
  $[\mbox{O\,{\sc ii}}]$    3726.03, 3728.82     14.5$\pm$1.2    —$^{d}$        2.3015$^{e}$
  H$\beta$                  4861.33              7.4$\pm$0.4     218$\pm$12     2.3012
  $[\mbox{O\,{\sc iii}}]$   4958.92              9.0$\pm$0.4     194$\pm$28     2.3017
  $[\mbox{O\,{\sc iii}}]$   5006.84              27.2$\pm$0.7    192$\pm$7      2.3010
  H$\alpha$                 6562.80              21.0$\pm$1.5    279$\pm$17     2.3010
  $[\mbox{N\,{\sc ii}}]$    6583.41              1.9$\pm$0.7     $\sim140$      2.3015$^{f}$

\
$^{a}$ Wavelengths in air in units of Å.\
$^{b}$ Extinction-corrected flux in units of 10$^{-17}$erg s$^{-1}$cm$^{-2}$.\
$^{c}$ FWHM of line (after removing instrumental broadening) in units of kms$^{-1}$. Errors do not include uncertainty in continuum.\
$^{d}$ $[\mbox{O\,{\sc ii}}]$ is intrinsically a doublet, which is not fully resolved here, so we do not give the width.\
$^{e}$ Calculated using a weighted wavelength average of 3727.7Å.\
$^{f}$ The Gaussian fit of $[\mbox{N\,{\sc ii}}]$ has a redshift frozen to that of the $[\mbox{O\,{\sc iii}}]$ $\lambda$5007 line.

\[tab:flux\]

![Emission lines detected from the GRB121024A host: \[O[ii]{}\], H$\beta$, \[O[iii]{}\] $\lambda$4959, \[O[iii]{}\] $\lambda$5007 and H$\alpha$ (one panel per line). Each panel shows the 2D spectrum after continuum PSF subtraction on top. The bottom part shows the extracted 1D spectrum. The blue line shows the Gaussian fit to the line profile. The abscissa shows the velocity with respect to the \[O[iii]{}\] $\lambda$5007 reference frame. The \[N[ii]{}\] spectrum has been smoothed and binned differently than the other lines, and its fit has been performed with the Gaussian profile centre frozen at 0kms$^{-1}$ with respect to the reference frame, as indicated with the dashed line in the figure. \[O[ii]{}\] has been fit as a doublet for the flux estimate.[]{data-label="fig:emission"}](OII.eps "fig:"){width="0.45\columnwidth"} ![](Hbeta.eps "fig:"){width="0.45\columnwidth"} ![](OIIIb.eps "fig:"){width="0.45\columnwidth"} ![](OIIIa.eps "fig:"){width="0.45\columnwidth"} ![](Halpha.eps "fig:"){width="0.45\columnwidth"}
Star-Formation Rate {#sfr}
-------------------

The SFR can be derived from the emission-line fluxes of H$\alpha$ and \[O[ii]{}\]. Using conversion factors from [@kennicutt], but converted from a Salpeter initial mass function (IMF) to Chabrier [@treyer07], we report extinction-corrected (see Sect. \[bd\]) values of $\text{SFR}_{H\alpha}=42\pm11$M$_\odot$yr$^{-1}$ from the H$\alpha$ flux and a $\text{SFR}_{[\mbox{O\,{\sc ii}}]}=53\pm15$M$_\odot$yr$^{-1}$ derived from \[O[ii]{}\] (a short numerical cross-check of the H$\alpha$ value is sketched further below). For a comparison with results from the stellar population synthesis modelling see Sect. \[pop\].

Metallicity from Emission Lines
-------------------------------

We determine the gas-phase metallicity of the GRB host galaxy using the strong-line diagnostics R$_{23}$ (using the ratio (\[O[ii]{}\] $\lambda$$\lambda$3727 + \[O[iii]{}\] $\lambda$$\lambda$4959, 5007)/H$\beta$), O3N2 (using (\[O[iii]{}\]/H$\beta$)/(\[N[ii]{}\]/H$\alpha$)) and N2 [using \[N[ii]{}\]/H$\alpha$; for a discussion of the different diagnostics see e.g. @KE]. Note that different metallicity calibrators give different values of metallicity; R$_{23}$ appears to be consistently higher than O3N2 and N2. The R$_{23}$ diagnostic has two branches of solutions, but the degeneracy can be broken using the ratios \[N[ii]{}\]/H$\alpha$ or \[N[ii]{}\]/\[O[ii]{}\]. In our case \[N[ii]{}\]/H$\alpha$=0.09$\pm0.02$ and \[N[ii]{}\]/\[O[ii]{}\]=$0.13\pm0.03$, which places the R$_{23}$ solution on the upper branch (though not far from the separation). Because of the large difference in wavelength of the emission lines used for R$_{23}$, this method is sensitive to the uncertainty on the reddening. Both O3N2 and N2 use lines that are close in wavelength, so for these we expect the reddening to have a negligible effect. Instead, they both depend on the weaker \[N[ii]{}\] line, which is not as securely detected. We derive 12+log(O/H)=$8.6\pm0.2$ for R$_{23}$ [@mcgaugh91], 12+log(O/H)=$8.2\pm0.2$ for O3N2 and 12+log(O/H)=$8.3\pm0.2$ for N2 [both from @PP]. The errors include the scatter in the relations [these values are from @KE and references therein], though the scatter in N2 is likely underestimated. See Sect. \[abund\] for a comparison with absorption-line metallicity.

Balmer Decrement {#bd}
----------------

The ratio of the Balmer lines H$\alpha$ and H$\beta$ can be used to estimate the dust extinction.
We use the intrinsic ratio I(H$\alpha$)/I(H$\beta)=2.86$ [@balmer] for star-forming regions (and case B recombination, i.e. the nebula is optically thick to ionising photons), where we expect GRBs to occur. The ratio we measure is 2.98 which, assuming the extinction law of [@calzetti][^4], results in $E(B-V)=0.04\pm0.09$mag. We note that adopting a different extinction law [from e.g. @pei] results in the same reddening correction within errors, because there is little difference within the wavelength range of the Balmer lines.

Broad-Band Spectral Energy Distribution {#sed}
---------------------------------------

We fitted the broad-band afterglow data from XRT and GROND (without the $g'$-band, due to possible DLA contamination), where simultaneous data exist (11ks after the trigger). The fit was performed within the ISIS software [@isis] following the method of [@starling07]. The XRT data were extracted using *Swift* tools. We use single and broken power-law models. For the broken power-law, we tie the two spectral slopes to a fixed difference of 0.5. Such a spectral feature is known as the “cooling break” of GRB afterglows [e.g. @sari98], and is observed to be the best-fit model for most bursts [@zafar11], with the exception of GRB080210 [@zafar11; @annalisa11]. We fit with two absorbers, one Galactic fixed at $N(\text{H})_{\text{X}}^{\text{Gal}}=7.77\times10^{20}$cm$^{-2}$ [@willingale13], and one intrinsic to the host galaxy[^5]. An SMC dust-extinction model (the average extinction curve observed in the Small Magellanic Cloud) was used for the host, while the reddening from the Milky Way was fixed to $E(B-V)=0.123$ [@schlegel]. A single power-law is preferred statistically ($\chi^2$/d.o.f = 1.07), see Fig. \[fig:SED\], but the two models give similar results. The best-fit parameters for the single power-law SMC absorption model are $N(\text{H})_\text{X}=(1.2^{+0.8}_{-0.6})\times10^{22}$cm$^{-2}$ and $E(B-V)=0.03\pm0.02$mag at a redshift of $z=2.298$, and a power-law index of $\beta=0.90\pm0.02$ (90 per cent confidence limits), see Table \[tab:sed\]. LMC and MW (the average extinction curves observed in the Large Magellanic Cloud and the Milky Way) model fits result in the same values within errors. For a discussion on the extinction see Sect. \[ext\].

  Parameter                 Value
  ------------------------- ----------------------------------------------
  $N(\text{H})_\text{X}$    $(1.2^{+0.8}_{-0.6})\times10^{22}$ cm$^{-2}$
  $E(B-V)$                  $0.03\pm0.02$ mag
  Power-law index $\beta$   $0.90\pm0.02$

\[tab:sed\]

Stellar Population Synthesis Modelling {#pop}
--------------------------------------

Using our photometry of the host, see Table \[tab:phot\], we perform stellar population synthesis modelling of the host galaxy. We use a grid of stellar evolution models with different star-formation timescales, stellar population ages and extinctions, to compute theoretical magnitudes and compare them to the observed photometry. For the model input, we assume stellar models from [@BC03], based on an IMF from [@chabrier03] and a Calzetti dust attenuation law [@calzetti]. Table \[tab:pop\] lists the galaxy parameters resulting from the best fit to the HAWK-I, NOT and GTC data. The best fit is obtained with a $\chi^2 = 8$ for the 7 data points used in the modelling. Most of the contribution to the $\chi^2$ comes from the $B$-band observations. This data point lies $\approx$ 3$\sigma$ above the best-fit and the $g$-band measurement, which probes a very similar wavelength range.
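As a rough cross-check of the H$\alpha$-based SFR quoted in Sect. \[sfr\], the sketch below (our own illustration) converts the extinction-corrected H$\alpha$ flux of Table \[tab:flux\] into a luminosity with the cosmology adopted in this paper and applies the Kennicutt (1998) calibration; the Salpeter-to-Chabrier correction factor of $\sim$1.7 is an assumed round number, so the result only approximately reproduces the quoted value.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)        # cosmology adopted in this paper

z = 2.30
flux_ha = 21.0e-17 * u.erg / u.s / u.cm**2    # extinction-corrected Halpha flux

d_l = cosmo.luminosity_distance(z).to(u.cm)
lum_ha = (4.0 * np.pi * d_l**2 * flux_ha).to(u.erg / u.s)

sfr_salpeter = 7.9e-42 * lum_ha.value         # Kennicutt (1998), Salpeter IMF
sfr_chabrier = sfr_salpeter / 1.7             # assumed Salpeter -> Chabrier factor

print(round(sfr_chabrier, 1))                 # ~40 Msun/yr, close to the quoted SFR
```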
The reported value of the SFR takes into account the uncertainty in the dust attenuation, and thus has large error bars. We observe a significant Balmer break, which is well fit with star-burst ages between 50 and 500 Myr. The SFR of $\sim40$M$_\odot$yr$^{-1}$ is consistent with the results from Sect. \[sfr\]. [@ p[4.2cm]{} &gt;p[3.6cm]{} @]{} Starburst age (Myr) & $\sim250$\ Extinction (mag) & $0.15\pm0.15$\ M$_\text{B}$ & $-22.1\pm0.2$\ $\rm log(M_*/M_{\odot})$ & $9.9^{+0.2}_{-0.3}$\ SFR (M$_\odot$yr$^{-1}$) & $40^{+80}_{-25}$\ \[tab:pop\] Kinematics {#kinematic} ---------- The X-shooter spectrum contains information both on the kinematics of the absorbing gas along the line-of-sight to the location of the burst inside the host galaxy, as well as kinematics of the emitting gas in H[ii]{} regions probed by the emission lines. The emission lines have a full-width-at-half-maximum (FWHM) of around 210kms$^{-1}$ from a Gaussian fit, see Table \[tab:flux\]. We do not observe signs of rotation in the 2 dimensional spectrum. One possibility is that the galaxy could be dominated by velocity dispersion, as observed for galaxies of similar mass and properties, [@sins]. The velocity width that encloses 90% of the optical depth [as defined by @ledoux06] is 460 km s$^{-1}$ based on the Si[ii]{} $\lambda1808$ line. This is consistent with the correlation between absorption line width and metallicity for GRB host galaxies of [@arabsalmani]. The velocity for each absorption component, with respect to the emission lines, is given in Table \[tab:components\]. The characteristics of different gas components are discussed in Sect. \[gas\]. Intervening Systems {#intervening} ------------------- We identify three intervening systems along the line of sight, at redshifts $z\,=\,2.0798$, $z\,=\,1.959$, and $z\,=\,1.664$. Table \[tab:ew\] lists the observed lines along with the measured EWs. Furthermore only at $z\,=\,1.959$ do we observe the Ly$\alpha$ line, but blended with a Si[iv]{} line. The intervening systems will not be discussed further in this work. [@ l &gt;p[1.8cm]{} &gt;p[1.8cm]{} &gt;p[1.8cm]{} @]{} &\ Transition & $z=2.0798$ & $z=1.959$ & $z=1.664$\ C[iv]{}, $\lambda$1548 & $0.43\pm0.10$ & $3.64\pm0.15$ & —\ C[iv]{}, $\lambda$1550 & $0.38\pm0.10$ & $3.11\pm0.14$ & —\ Fe[ii]{}, $\lambda$2382 & — & $0.72\pm0.10$ & $0.36\pm0.05$\ Mg[ii]{}, $\lambda$2796 & — & $3.73\pm0.08$ & $1.29\pm0.05$\ Mg[ii]{}, $\lambda$2803 & — & $2.53\pm0.11$ & $1.02\pm0.05$\ \[tab:ew\] Discussion and Implications {#discussion} =========================== Abundance Measurements from Absorption and Emission Lines {#abund} --------------------------------------------------------- The metallicity of GRB hosts is usually determined either directly through absorption line measurements, or via the strong-line diagnostics using nebular-line fluxes. The two methods probe different physical regions; the ISM of the host galaxy along the line of sight as opposed to the ionised star-forming H[ii]{} regions emission - weighted over the whole galaxy. Hence, the two methods are not necessarily expected to yield the same metallicity, see for instance [@JW14]. The line of sight towards the GRB is expected to cross star-forming regions in the GRB host. Thus, the absorption and emission lines may probe similar regions. Local measurements from the solar neighbourhood show a concurrence of the two metallicities in the same region, see e.g. [@esteban04]. 
Only a few cases where measurements were possible using both methods have been reported for QSO-DLAs [see e.g. @bowen05; @JW14] but never for GRB-DLAs. The challenge is that the redshift has to be high enough ($z \gtrsim 1.5$) to make Ly$\alpha$ observable from the ground, while at the same time the host has to be highly star-forming to produce sufficiently bright emission lines. Furthermore, the strong-line diagnostics are calibrated at low redshifts, with only a few high-redshift cases available [see for instance @christensen12]. The spectrum of GRB121024A has an observable Ly$\alpha$ line as well as bright emission lines. We find that the three nebular line diagnostics R$_{23}$, O3N2 and N2 all give a similar oxygen abundance of $12+\text{log(O/H)}=8.4\pm0.4$. Expressing this in solar units we get a metallicity of \[O/H\]$\,\sim-0.3$ (or slightly lower if we disregard the value found from the R$_{23}$ diagnostic, given that we cannot convincingly distinguish between the upper and lower branch). This is indeed consistent with the absorption-line measurement from the low-depletion elements (dust-corrected value) \[Zn/H\]$_{\rm corr}=-0.6\pm0.2$, though the large uncertainty in the strong-line diagnostics hinders a more conclusive comparison. [@krogager13] find a slightly lower metallicity from absorption lines in the spectrum of quasar Q2222-0946, compared to the emission-line metallicity. However, this is easily explained by the very different regions probed by the nebular lines (6kpc above the galactic plane for this quasar) and the line of sight, see also [@peroux]. QSO lines of sight intersect foreground galaxies at high impact parameters, while the metallicity probed with GRB-DLAs is associated with the GRB host galaxy. Interestingly, [@noterdaeme12] find different values for the metallicities, even with a small impact parameter between QSO and absorber, for a QSO-DLA. A comparison of the two metallicities is also possible for Lyman break galaxies (LBGs), see for instance [@pettini02] for a discussion on the metallicity of the galaxy MS 1512–cB58. They find that the two methods agree for a galaxy with an even larger velocity dispersion in the absorbing gas than observed here ($\sim1000$kms$^{-1}$). The line of sight toward GRB121024A crosses different clouds of gas in the host galaxy, as shown by the multiple and diverse components of the absorption-line profiles. The gas associated with component a+b is photo-excited, indicating that it is the closest to the GRB. Given the proximity, the metallicity of this gas could be representative of the GRB birth site. Assuming the GRB exploded in an H[ii]{} region, the emission- and absorption-line metallicities are expected to be similar, though if other H[ii]{} regions dominate the brightness, the GRB birth site might contribute only weakly to the emission-line flux, see Sect. \[gas\]. Building a sample of dual metallicity measurements will increase our understanding of the metallicity distribution and evolution in galaxies.

The Mass-Metallicity Relation at $z\sim2$
-----------------------------------------

Having determined stellar mass, metallicity and SFR of the GRB host, we can investigate whether the galaxy properties are consistent with the mass-metallicity relation at the observed redshift.
Appropriate for a redshift of $z\approx2$, we use equation 5 from [@mannucci10]: $$12 + \text{log(O/H)} = 8.90 + 0.47 \times (\mu_{0.32} - 10),$$ where $\mu_{0.32} = \text{log(M}_*[\text{M}_\odot]) - 0.32\times\text{log(SFR}_{\text{H}\alpha}[\text{M}_\odot\,\text{yr}^{-1}])$. Using the stellar mass from Sect. \[pop\] and the SFR from H$\alpha$ we find an equivalent metallicity of 12+log(O/H)=$8.6\pm0.2$ (\[O/H\]=$-0.1\pm0.2$). The error does not include a contribution from the scatter in the relation, and is hence likely underestimated. This value is consistent with the metallicity derived from the emission lines, but given the large uncertainty this is perhaps not very informative. Instead, we use the mass-metallicity relation determined in [@christensen14] for QSO-DLAs from absorption-line metallicities, adapted to GRBs by [@arabsalmani]. This results in a metallicity \[$M$/H\]=$-0.3\pm0.2$, not including scatter from the relation, and using the mean impact parameter of 2.3 kpc calculated in [@arabsalmani]. This is consistent with the dust-corrected metallicity of \[Zn/H\]$_{\rm corr}=-0.6\pm0.2$.

Grey Dust Extinction? {#ext}
---------------------

We determine the dust extinction/attenuation of the host galaxy of GRB121024A both from the Balmer decrement (Sect. \[bd\]) and from a fit to the X-ray and optical spectral energy distribution (SED, see Sect. \[sed\]), as well as from the stellar population synthesis modelling (Sect. \[pop\]). The first method determines the attenuation of the host H[ii]{} regions (from the X-shooter spectrum alone), while the SED fitting probes the extinction along the line of sight (using XRT+GROND data). The stellar population synthesis modelling describes the attenuation of the host as a whole (using host photometry). All methods determine the amount of extinction/attenuation by comparing different parts of the spectrum with known/inferred intrinsic ratios, and attribute the observed change in spectral form to dust absorption and scattering. We find values that agree on a colour excess $E(B-V)\sim0.04$ mag. This value is small, but falls within the range observed for GRB-DLA systems. However, low $A_V$ values are typically observed at the lowest metallicities. In our case we would expect a much higher amount of reddening given the determined H[i]{} column density and metallicity. Using the metallicity of \[Zn/H\]$_{\rm corr}=-0.6\pm0.2$, column density log$N(\text{H\,{\sc i}})=21.88\pm0.10$, dust-to-metal ratio [$\mathcal{{DT\!\!M}}$]{}$=1.01\pm0.03$ (see Sect. 3.2), and a Galactic reference value $A_{V, \rm{Gal}}/N_{(H, \rm{Gal})}=0.45\times 10^{-21}$ mag cm$^2$ [@watson11], we expect an extinction of $A_V=0.9\pm0.3$ mag [De Cia et al. in prep. and @savaglio03]. This is incompatible with the determined reddening, as it would require $R_V>15$ ($R_V$ for the Galaxy is broadly in the range 2–5). For the Balmer decrement and SED fitting we have examined different extinction curves (MW and LMC besides the SMC) and we have tried fitting the SED with a cooling break, with neither option changing the extinction significantly. In an attempt to test how high a fitted reddening we can achieve, we tried fitting the SED with a lower Galactic $N(\text{H\,{\sc i}})$ and reddening. While keeping reasonable values (it is unphysical to expect no Galactic extinction at all), and fitting with the break, the resulting highest colour excess is $E(B-V)\sim0.06$ mag.
This is still not compatible with the value derived from the metallicity, so this difference needs to be explained physically. One possibility to consider is that the host could have a lower dust-to-metal ratio, in which case we would overestimate the extinction expected from the metallicity. However, we see no sign of this from the relative abundances, see Sect. \[depl\]. The metallicity is robustly determined from Voigt-profile fits and EW measurements of several lines from different elements (including a lower limit from the non-Fe-peak element Si). The lines are clearly observed in the spectra, see Fig. \[fig:absorption\], and the metallicity that we find is consistent with the mass-metallicity relation. To examine the extinction curve, we perform a fit to the XRT (energy range: 0.3–10 keV) X-ray data alone and extrapolate the resultant best-fit power-law to optical wavelengths. We try both a single power law, as well as a broken power law with a cooling break in the extrapolation. The latter is generally found to be the best model for GRB extinction in fits of the optical data [e.g. @zafar11; @greiner11; @schady12]. We calculate the range of allowed $A_\lambda$ by comparing the X-shooter spectrum to the extrapolation, within the 90% confidence limit of the best-fit photon index, and allowing for a break in between the two data sets (X-ray and optical). The resulting extinction does not redden the afterglow strongly, so we cannot constrain the total extinction very well directly from the SED. However, the optical spectroscopy indicates a high metal column density. The strong depletion of metals from the gas phase supports the presence of dust at $A_V=0.9\pm0.3$ mag (see Sect. \[depl\]). Fixing $A_V$ at this level allows us to produce a normalised extinction curve (Fig. \[fig:curves\]). This extinction curve is very flat, much flatter than any in the Local Group [@FM], with an $R_V>9$.

![Extinction curves for the line of sight to GRB121024A. We plot the extinction law assuming $A_V=0.9\pm0.3$ mag as expected from the measured metallicity, H[i]{} column density and dust-to-metal ratio. The solid black curve shows the extinction curve for a broken power law with $A_V=0.9$, while the grey-shaded area corresponds to the $A_V$ error-space. Likewise, the extinction curve for a single power law is plotted with the dashed black curve, and the hatched area displays the error-space. Over-plotted, in colours, are extinction curves from Pei (1992).[]{data-label="fig:curves"}](Av.ps){width="1.05\columnwidth"}

The most likely physical scenario that can explain this shape of the curve is grey dust. If the dust extinction is ’grey’, i.e., has a much weaker dependence on wavelength than in the local extinction laws, then a given visual extinction will be much less apparent in the SED (’flat’ extinction curve) and thus underestimated in our analysis. Such grey extinction corresponds to a larger $R_V$ and is physically interpreted in terms of large grain sizes. A weak wavelength dependence of the extinction for GRBs has been suggested before. [@savaglio04] reported a MW-like depletion pattern, but a very low reddening in the SED. [@li08] likewise claim a grey extinction law for GRB lines of sight, determined by comparing observed spectra to intrinsic ones (extrapolated from the X-rays), arguing for grain growth through coagulation in the dense molecular clouds surrounding GRBs. The larger grains have an extinction that is less dependent on wavelength, because of the contribution of their physical cross-section to the opacity.
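As a sanity check of the numbers quoted above, the expected extinction can be recovered by scaling the Galactic $A_V/N_{\rm H}$ value by the metal column (i.e. by $N$(H[i]{}) times $10^{[\rm Zn/H]}$) and by the relative dust-to-metal ratio. This is only a back-of-the-envelope version of the prescription of De Cia et al. (in prep.) and [@savaglio03], shown here as a minimal Python sketch:

```python
# Quantities quoted above for the GRB 121024A sight line
log_NHI   = 21.88       # log10 N(HI)  [cm^-2]
ZnH_corr  = -0.6        # dust-corrected [Zn/H]
DTM       = 1.01        # dust-to-metal ratio (in Galactic units)
AV_NH_gal = 0.45e-21    # Galactic A_V / N_H  [mag cm^2]

# Expected extinction: Galactic A_V/N_H scaled by the metal column and by DTM
AV_expected = DTM * AV_NH_gal * 10**log_NHI * 10**ZnH_corr
print(f"expected A_V ~ {AV_expected:.2f} mag")           # ~0.9 mag

# Total-to-selective extinction implied by the SED-derived reddening
EBV_sed = 0.06                                           # mag
print(f"implied  R_V ~ {AV_expected / EBV_sed:.1f}")     # ~14-15, far above the Galactic range 2-5
```

The mismatch between this expected $A_V$ and the much lower SED-derived reddening is what drives the large $R_V$ values discussed above.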
Preferential destruction of the smaller grains by the GRB would be another possibility, but it is unlikely in our case, because the absorbing gas is far from the GRB. We note that the other GRB-DLAs with molecular-hydrogen detections show the expected amount of reddening (using a standard extinction curve), though anomalies do exist in GRB observations. The most notable example to date is reported by [@perley08] for GRB061126. As for the GRB121024A afterglow, they observe a very flat optical–to–X-ray spectral index, arguing for large quantities of grey dust, or a separate origin of the optical and X-ray afterglow. To fit the extinction curve for GRB061126, an $R_V\sim10$ is needed. We find an $R_V$ even higher than this, making this case even more extreme than previously observed. We refer to future work on this problem (Friis et al., in prep), as a deeper analysis is beyond the scope of this paper.

Molecular Hydrogen in GRB-DLAs {#GRBmol}
------------------------------

The lack of detection of molecular hydrogen towards GRBs has puzzled astronomers [see e.g. @tumlinson07], given that long GRBs are associated with active star formation, and hence are expected to show signatures of molecular clouds. Compared to QSO-DLA lines of sight, then, we would expect the presence of H$_2$ to be more common in GRB-DLAs, because QSO-DLA lines of sight have a higher probability of intersecting the outskirts/halo of the intervening galaxy, where we would anticipate a low molecular content. Recently, a number of H$_2$ detections in GRB afterglows have been reported [@prochaska09; @thomas1; @delia14], making GRB121024A the fourth definite case. This detection supports the emerging picture that dust has played a major role in biasing past observations against molecular detection [e.g. @ledoux09]. Molecules are thought to form on the surface of dust grains and, once formed, to be shielded from Lyman-Werner photons by the grains. [@thomas1] suggest that it is likely this connection that is responsible for the low number of H$_2$ detections towards GRB-DLAs. The high dust column density makes the GRB afterglow UV-faint, preventing the high-resolution and high S/N spectroscopy that is needed to identify the presence of molecular gas. Thus, the lack of H$_2$ detections in most GRB-DLAs can be explained by an observational bias. They illustrate this argument by investigating the metallicity, N(H[i]{}) and dust-depletion parameter space, showing that the GRB-DLAs with unsuccessful molecular searches fall outside the region where we would expect detections (with the only exception being GRB050829A). This argument is supported by the observed log$N(\text{H\,{\sc i}})$, metallicity and depletion factor of GRB121024A, which lie inside the parameter space where molecular detections are expected. The high level of dust depletion observed in this GRB-DLA (see Sect. \[depl\]) is consistent with molecular detections in QSO-DLAs [@noterdaeme08; @thomas1], where there is a strong preference for H$_2$-bearing DLAs to have significant depletion factors. The dependence on the total neutral hydrogen column density is weak (although intrinsically weak molecular lines are better constrained in strong DLAs), whereas the parameter that seems to determine whether H$_2$ is detected is the column density of iron locked into dust. The $\log N({\rm Fe})_{\rm dust}$ that we measure is 2 dex higher than the column density above which a significant presence of molecules has been observed in QSO-DLAs [@noterdaeme08].
Indeed, [@annalisa13] studied $\log N({\rm Fe})_{\rm dust}$ and concluded that GRB hosts are promising sites for molecular detections. [@delia14] find a molecular fraction for GRB120327A of log($f$) between $-7$ and $-4$ with a depletion factor of Zn/Fe=$0.56\,\pm\,0.14$, while for the dustier line of sight towards GRB120815A [@thomas1] report a value of log($f$)=$-1.14\pm0.15$ (Zn/Fe=$1.01\,\pm\,0.10$). For GRB121024A we find intermediate values, although, given the high noise level, the numbers are consistent with those reported for GRB120815A. For GRB080607 [@prochaska09] only report limits on both the molecular fraction and the Zn+Fe column densities. Although the sample is too small to infer anything statistically, it appears that the H$_2$ detection criteria in GRB afterglows follow the trend observed for QSO-DLAs. For a fair estimate of the molecular fraction, the column densities of both H$_2$ and H[i]{} should be constrained for individual velocity components, whereas this is hardly ever the case for H[i]{}. Recent work [e.g. @balashev15] indicates that the molecular fraction in QSO-DLAs can possibly be much higher than the line-of-sight average values usually measured.

Gas kinematics: dissecting the host components {#gas}
----------------------------------------------

One of the striking features of the metal absorption-line profiles observed towards GRB121024A is that they consist of two widely separated groups of velocity components (a+b and c+d+e, see Sect. \[abs\]). The separation is about 340 km s$^{-1}$, which lies at the high end of the velocity distribution of [@moller13] and [@ledoux06]. The latter is for QSO-DLAs; however, [@arabsalmani] showed that GRB-DLAs follow the velocity-metallicity distribution of QSO-DLAs. This suggests that either the two components belong to separate galaxies [see for instance @savaglio12 on the GRB090323 systems], or that this galaxy is fairly massive compared to the average GRB host of $\sim$$10^{9}\text{M}_\odot$ (@savaglio09 [@ceron10], but see also @perley13 and @hunt14). The scenario with separate galaxies is disfavoured, because the two absorption components show very similar relative abundances (see Table \[tab:components\]) and also because the emission lines are centred in between the two absorption components [unlike for GRBs050820A and 060418; see @hw]. Thus, a likely possibility is that the two absorption components are probing different regions within the host. This is in agreement with the mass found in Sect. \[pop\], of almost $10^{10}\text{\,M}_\odot$. Furthermore, the blue (a+b) and the red (c+d+e) absorption components are associated with gas under different physical conditions. On one hand, Fe[ii]{}, Ni[ii]{} and Si[ii]{} fine-structure lines are detected only in the blue component. These lines are photo-excited by the GRB radiation at a distance of $\sim$$600$ pc. On the other hand, H$_2$ molecules are detected in the red component only, indicating gas that is not disturbed by the GRB (at a distance of at least 3.5 kpc). Through absorption-line spectroscopy at the X-shooter resolution, these two different gas components could thus be located inside the host (with respect to the GRB) and characterised. The observed emission component (arbitrarily set at $v = 0$) traces the brightest star-forming regions. Since GRBs tend to reside around the brightest star-forming regions in their host [@fruchter], one might expect to observe absorption components at velocities close to that of the emission as the line of sight passes through this gas.
However, the gas around the GRB can be photo-ionised out to hundreds of parsecs [e.g. @ledoux09; @vreeswijk13]; it is thus highly unlikely that the optical/UV absorption lines are probing the actual GRB environment. For GRB121024A, this is further supported by the fact that the a,b component is located $\sim$$600$pc away from the GRB. Given that giant molecular clouds have a maximum radius of $\sim$$200$pc [@murray11], the a,b component is undoubtedly unrelated to the GRB surroundings. Although GRBs are most often associated with the brightest star-forming regions, this is not always the case. GRB980425 [e.g. @michal] is an example where the star-forming region in which the GRB occurred is quite faint compared to the larger and brighter star-forming regions in the host. A potentially similar scenario could hold for GRB121024A as well, in which case the burst should not be identified with $v = 0$. In this situation, the possible interpretations of the kinematics would be different and lead to other geometric setups compared to those conceivable were the GRB localised close to $v = 0$. It should also be noted that we have not discussed transverse motion which could complicate the interpretation even further. Finally, since the host is most likely an irregular galaxy, indicating a 3D perturbed environment without a rotating disk, we find it appropriate not to draw further conclusions. While the available data do not allow us to discriminate between possible scenarios, this work demonstrates how powerful GRB afterglow observations can be to start dissecting individual building-block components of star-forming galaxies at $z\sim2$ and above. This is especially true once we have gathered enough data to compile a statistical sample; see [e.g. @fox08] for previous work on VLT/UVES data and Fynbo et al. (in prep) for upcoming VLT/X-shooter results on a large afterglow sample. Acknowledgments {#acknowledgments .unnumbered} =============== MF acknowledges support from the University of Iceland Research fund. ADC acknowledges support by the Weizmann Institute of Science Dean of Physics Fellowship and the Koshland Center for Basic Research. The Dark Cosmology Centre is funded by the DNRF. RLCS is supported by a Royal Society Dorothy Hodgkin Fellowship. JPUF acknowledge support from the ERC-StG grant EGGS-278202. CCT is supported by a Ramón y Cajál fellowship. The research activity of J. Gorosabel, CCT, and AdUP is supported by Spanish research project AYA2012-39362-C02-02. Ad.U.P. acknowledges support by the European Commission under the Marie Curie Career Integration Grant programme (FP7-PEOPLE-2012-CIG 322307). SS acknowledges support from CONICYT-Chile FONDECYT 3140534, Basal-CATA PFB-06/2007, and Project IC120009 “Millennium Institute of Astrophysics (MAS)” of Iniciativa Científica Milenio del Ministerio de Economía, Fomento y Turismo. Part of the funding for GROND (both hardware as well as personnel) was generously granted from the Leibniz-Prize to Prof. G. Hasinger (DFG grant HA 1850/28-1). We thank Alain Smette for providing the telluric spectrum, and the referee for very constructive feedback. \[lastpage\] Arabsalmani M., M[ø]{}ller P., Fynbo J. P. U., Christensen L., Freudling W., Savaglio S., Zafar T., 2015, MNRAS, 446, 990 Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481 Barthelmy S.D. et al., 2005, SSRv, 120, 143 Balashev S. A., Noterdaeme P., Klimenko V. V., Petitjean P., Srianand R., Ledoux C., Ivanchik A. V., Varshalovich D. A., 2015, A&A, 575, 8 Bowen D. 
V., Jenkins E. B., Pettini M., Tripp T. M., 2005, ApJ, 635, 880 Bruzual G., Charlot S., 2003, MNRAS, 344, 1000 Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 Cano Z., 2013, MNRAS, 434, 1098 Castro Cerón J. M. et al., 2010, ApJ, 721, 1919 Cepa J. et al., 2000, SPIE, 4008, 623 Chabrier G., 2003, PASP, 115, 763 Chen H.-W., 2012, MNRAS, 419, 3039 Christensen L. et al., 2012, MNRAS, 427, 1973 Christensen L., Møller P., Fynbo J. P. U., Zafar T., 2014, MNRAS, 445, 225 Cucchiara A., Fumagalli M., Rafelski M., Kocevski D., Prochaska J. X., Cooke R. J., Becker G. D., 2014, arXiv:1408.3578 De Cia A. et al., 2011, MNRAS, 412, 2229 De Cia A. et al., 2012, A&A, 545, 64 De Cia A., Ledoux C., Savaglio S., Schady P., Vreeswijk P. M., 2013, A&A, 560, 88 D’Elia V. et al., 2014, A&A, 564, 38 El[í]{}asd[ó]{}ttir [Á]{}. et al., 2009, ApJ, 697, 1725 Draine B. T., 2000, ApJ, 532, 273 Draine B. T., Hao L., 2002, ApJ, 569, 780 Esteban C., Peimbert M., Garc[í]{}a-Rojas J., Ruiz M. T., Peimbert A., Rodr[í]{}guez M., 2004, MNRAS, 355, 229 Fitzpatrick E. L., Massa D. 2005, AJ, 130, 1127 Fox A. J., Ledoux C., Vreeswijk P. M., Smette A., Jaunsen A. O., 2008, A&A, 491, 189 Förster S. et al., 2009, ApJ, 706, 1364 Fruchter A. S. et al., 2006, Nature, 441, 463 Fynbo J. P. U. et al., 2009, ApJS, 185, 526 Fynbo J. P. U. et al., 2011, MNRAS, 413, 2481 Fynbo J. P. U. et al., 2013, MNRAS, 436, 361 Gao J., Jiang B. W., Li A., 2008, ApJ, 707, 89 Gehrels N. et al., 2004, ApJ, 611, 1005 Goldoni P., Royer F., François P., Horrobin M., Blanc G., Vernet J., Modigliani A., Larsen J., 2006, in SPIE conf. ser., 6269 Greiner J. et al., 2007, The Messenger, 130, 12 Greiner J. et al., 2008, PASP, 120, 405 Greiner J. et al., 2011, A&A, 526, 30 Haehnelt M. G., Steinmetz M., Rauch M., 1998, ApJ, 495, 647 Hjorth J. et al., 2003, Nature, 423, 847 Houck J. C., Denicola L.A., 2000, ASPC, 216, 519 Hunt L. K. et al., 2014, A&A, 565, 112 Jakobsson P. et al., 2006, A&A, 460, L13 Jenkins E. B., 2009, ApJ, 700, 1299 Jorgenson R., Wolfe A. M., 2014, ApJ, 785, 16 Kennicutt Jr. R. C., 1998, ARA&A, 36, 189 Kewley L. J., Ellison S. L., 2008, ApJ, 681, 1183 Krogager. J.-K. et al., 2013, MNRAS, 433, 3091 Krühler T. et al., 2008, ApJ, 685, 376 Krühler T. et al., 2012, A&A, 546, 8 Krühler T. et al., 2013, A&A, 557, 18 Kudritzki et al., 2012, ApJ, 747, 15 Levesque E. M., Berger E., Kewley L. J., Bagley M. M., 2010, AJ, 139, 694 Ledoux C., Petitjean P., Bergeron J., Wampler E. J., Srianand R., 1998, A&A, 337, 51 Ledoux C., Bergeron J., Petitjean P., 2002a, A&A, 385, 802 Ledoux C., Srianand R., Petitjean P., 2002b, A&A, 392, 781 Ledoux C., Petitjean P., Srianand R., 2003, MNRAS, 346, 209 Ledoux C., Petitjean P., Fynbo J. P. U., Møller P., Srianand R., 2006, A&A, 457, 71 Ledoux C., Vreeswijk P. M., Smette A., Fox A. J., Petitjean P., Ellison S. L., Fynbo J. P. U., Savaglio S., 2009, A&A, 506, 661 Lodders K., Palme H., Gail H. P., 2009, Landolt Börnstein, 44 Mannuci F., Cresci G., Maiolino R., Marconi A., Gnerucci A., 2010, MNRAS, 408, 2115 Mattsson L., De Cia A., Andersen A.C., Zafar T., 2014, MNRAS, 440, 1562 McGaugh S. S., 1991, ApJ, 380, 140 Melioli C., Brighenti F., D’Ercole A., De Gouveia Dal Pino E. M., 2008, MNRAS, 388, 573 Micha[ł]{}owski M. J. et al., 2009, ApJ, 693, 347 Morton D. C., 2003, ApJs, 149, 205 Møller P., 2000, The Messenger, 99, 31 Møller P., Fynbo J. P. U., Ledoux C., Nilsson K. 
K., 2013, MNRAS, 430, 2680 Murray N., 2011, ApJ, 729, 133 Noterdaeme P., Ledoux C., Petitjean P., Srianand R., 2008, A&A, 481, 327 Noterdaeme P. et al., 2012, A&A, 540, 63 Osterbrock D. E., 1989, Astrophysics of Planetary Nebulae and Active Galactic Nuclei (University Science Books) Pei Y. C., 1992, ApJ, 395, 130 Perley D. A. et al., 2008, ApJ, 672, 449 Perley D. A. et al., 2013, ApJ, 778, 128 Péroux C., Bouché N., Kulkarni V. P., York D. G., Vladilo G., 2012, MNRAS, 419, 3060 Pettini M., Pagel B. E. J., 2004, MNRAS, 348, L59 Pettini M., Smith L. J., Hunstead, R. W., King D. L., 1994, ApJ, 426, 79 Pettini M., Rix S. A., Steidel C. C., Adelberger K. L., Hunt M. P., Shapley A. E., 2002, ApJ, 569, 742 Prochaska J. X., Wolfe A. M., 1997, ApJ, 487, 73 Prochaska J. X., Chen H. W., Bloom J. S., 2006, ApJ, 648, 95 Prochaska J. X., Chen H. W., Dessauges-Zavadsky M., Bloom J. S., 2007, ApJ, 666, 267 Prochaska J. X. et al., 2009, ApJ, 691, L27 Reichart D., 2005, Nuovo Cimento C Geophysics Space Physics C, 28, 767 Sari R., 1998, ApJ, 494, 49 Savage B. D., Sembach K., R., 1996, ARA&A, 34, 279 Savaglio S., 2001, IAUS, 204, 307 Savaglio S., Fall S. M., Fiore F., 2003, ApJ, 585, 638 Savaglio S., Fall S. M., 2004, ApJ, 614, 293 Savaglio S., Glazebrook K., Le Borgne D., 2009, ApJ, 691, 182 Savaglio S. et al., 2012, MNRAS, 449, 627 Schady et al., 2012, ApJ, 537, 15 Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525 Schulze S. et al., 2014, A&A, 566, 102 Sheffer Y., Prochaska J. X., Draine B. T., Perley D. A., Bloom J. S., 2009, ApJ, 701, 63 Smette A., Sana H., Horst H., 2010, Highlights of Astronomy, 15, 533 Smith et al., 2002, AJ, 123, 2121 Sparre M. et al., 2011, ApJ, 735, L24 Sparre M. et al., 2014, ApJ, 785, 105 Stanek K. Z. et al., 2003, ApJ, 591, L17 Starling R. L. C., Wijers R. A. M. J., Wiersema K., Rol E., Curran P. A., Kouveliotou C., van der Horst A. J., Heemskerk M. H. M., 2007, ApJ, 661, 787 Steeghs D., McClintock J. E., Parsons S. G., Reid M. J., Littlefair S., Dhillon V. S., 2013, ApJ, 768, 185 Tody D., 1997, Astronomical Society of the Pacific Conference Series, 125, 451 Treyer M. et al., 2007, ApJs, 173, 256 Tumlinson J., Prochaska J. X., Chen H. W., Dessauges-Zavadsky M., Bloom J. S., 2007, ApJ, 668, 667 Vernet J. et al., 2011, A&A, 536, A105 Vladilo G., Centurión M., Levshakov S. A., Péroux C., Khare P., Kulkarni V. P., York D. G., 2006, A&A, 454, 151 Vreeswijk P. M. et al., 2004, A&A, 419, 927 Vreeswijk P. M. et al., 2007, A&A, 468, 83 Vreeswijk P. M. et al., 2013, A&A, 549, 22 Wakker B. P., York D. G., Wilhelm R., Barentine J. C., Richter P., Beers T. C., Ivezić Ž, Howk J. C., 2008, ApJ, 672, 298 Watson D., 2011, A&A, 533, 16 Wiersema K. et al., 2007, A&A, 464, 529 Wiersema K. et al., 2014, Nature, 509, 201 Willingale R., Starling R. L. C., Beardmore A. P., Tanvir N. R., O’Brien P. T., 2013, MNRAS, 431, 394 Wilms J., Allen A., McCray R., 2000, ApJ, 542, 914 Wolfe A. M., Gawiser E., Prochaska J. X., 2005, ARAA, 43, 861 Zafar T., Watson D., 2013, A&A, 560, 26 Zafar T., Watson D., Fynbo J. P. U., Malesani D., Jakobsson P., de Ugarte Postigo A., 2011, A&A, 532, 143 Skynet magnitude tables {#appendix} ======================= Tabels \[tab:skynetR\], \[tab:skynetB\], \[tab:skynetI\], \[tab:skynetgp\], \[tab:skynetrp\] and \[tab:skynetip\] give the magnitudes (not corrected for reddening) used for the optical light-curve input to model the distance between the excited gas (component a+b) and the burst itself. Filter Time (h) Exposure time S/N Mag. 
(Vega) -------- ---------- ---------------- ------ ------------------------- $R$ 0.01656 $1\times10$s 27.7 $15.01\pm0.04$ $R$ 0.02232 $1\times10$s 18.9 $15.51\pm0.06$ $R$ 0.02760 $1\times10$s 12.1 $15.88\pm0.09$ $R$ 0.03288 $1\times10$s 7.58 $16.0^{+0.2}_{-0.1}$ $R$ 0.04032 $1\times20$s 10.7 $16.3\pm0.1$ $R$ 0.04824 $1\times20$s 9.35 $16.4\pm0.1$ $R$ 0.06720 $1\times40$s 7.45 $16.5^{+0.2}_{-0.1}$ $R$ 0.08088 $1\times40$s 13.1 $16.68^{+0.09}_{-0.08}$ $R$ 0.10824 $1\times40$s 5.47 $16.9\pm0.2$ $R$ 0.18216 $1\times80$s 5.82 $17.5\pm0.2$ $R$ 0.20880 $1\times80$s 3.58 $17.0\pm0.3$ $R$ 0.24744 $1\times160$s 10.9 $17.4\pm0.1$ $R$ 0.35520 $1\times160$s 5.37 $17.8\pm0.2$ $R$ 0.40536 $1\times160$s 6.08 $17.6\pm0.2$ $R$ 0.50928 $1\times160$s 2.86 $17.9^{+0.4}_{-0.3}$ $R$ 0.55584 $1\times160$s 6.08 $17.5\pm0.2$ $R$ 0.66000 $1\times160$s 5.53 $17.6\pm0.2$ $R$ 0.91992 $3\times160$s 5.79 $18.4\pm0.2$ $R$ 1.28712 $4\times160$s 6.64 $18.79\pm0.2$ $R$ 1.84008 $9\times160$s 3.68 $19.3\pm0.3$ $R$ 2.79768 $7\times160$s 6.72 $19.5^{+0.2}_{-0.1}$ $R$ 3.21240 $7\times160$s 8.31 $19.8\pm0.1$ $R$ 3.59184 $7\times160$s 10.1 $19.8\pm0.1$ $R$ 4.12824 $12\times160$s 10.7 $19.8\pm0.1$ $R$ 4.9908 $18\times160$s 7.18 $20.0\pm0.1$ $R$ 24.5124 $40\times160$s 1.94 $21.9^{+0.7}_{-0.4}$ : Skynet - Filter $R$ \[tab:skynetR\] Filter Time (h) Exposure time S/N Mag. (Vega) -------- ---------- ---------------- ------ ---------------------- $B$ 0.02520 $2\times10$s 4.98 $17.2\pm0.2$ $B$ 0.05592 $2\times20$s 4.22 $17.9^{+0.3}_{-0.2}$ $B$ 0.12144 $3\times40$s 5.10 $18.3\pm0.2$ $B$ 0.27456 $2\times160$s 5.61 $18.8\pm0.2$ $B$ 3.87000 $37\times160$s 5.96 $21.2\pm0.2$ : Skynet - Filter $B$ \[tab:skynetB\] Filter Time (h) Exposure time S/N Mag. (Vega) -------- ---------- ---------------- ------ ------------------------- $I$ 0.02328 $1\times5$s 13.4 $14.88\pm0.08$ $I$ 0.02784 $1\times5$s 9.57 $15.1\pm0.1$ $I$ 0.03312 $1\times10$s 10.2 $15.4\pm0.1$ $I$ 0.04032 $1\times20$s 14.6 $15.49^{+0.08}_{-0.07}$ $I$ 0.04896 $1\times20$s 11.8 $15.70^{+0.1}_{-0.09}$ $I$ 0.05616 $1\times10$s 2.34 $16.2^{+0.5}_{-0.4}$ $I$ 0.06672 $1\times40$s 5.75 $16.2\pm0.2$ $I$ 0.08088 $1\times40$s 15.4 $16.03\pm0.07$ $I$ 0.09504 $1\times40$s 7.74 $15.9\pm0.1$ $I$ 0.10896 $1\times40$s 7.16 $16.2^{+0.2}_{-0.1}$ $I$ 0.12864 $1\times80$s 15.1 $16.21\pm0.07$ $I$ 0.15504 $1\times80$s 3.84 $16.3^{+0.3}_{-0.2}$ $I$ 0.18216 $1\times80$s 9.82 $16.5\pm0.1$ $I$ 0.20880 $1\times80$s 5.14 $16.7\pm0.2$ $I$ 0.24744 $1\times160$s 16.3 $16.72\pm0.07$ $I$ 0.30144 $1\times160$s 13.3 $16.74^{+0.09}_{-0.08}$ $I$ 0.35520 $1\times160$s 11.3 $16.73^{+0.1}_{-0.09}$ $I$ 0.40536 $1\times160$s 6.89 $17.0^{+0.2}_{-0.1}$ $I$ 0.50928 $1\times160$s 8.51 $16.9\pm0.1$ $I$ 0.55632 $1\times160$s 7.10 $17.1^{+0.2}_{-0.1}$ $I$ 0.61248 $1\times160$s 6.87 $16.8^{+0.2}_{-0.1}$ $I$ 0.66000 $1\times160$s 5.78 $17.2\pm0.2$ $I$ 0.71592 $1\times160$s 3.53 $17.3\pm0.3$ $I$ 0.81528 $2\times160$s 5.50 $17.4\pm0.2$ $I$ 0.94704 $2\times160$s 11.4 $17.45^{+0.1}_{-0.09}$ $I$ 1.21368 $1\times160$s 5.87 $17.7\pm0.2$ $I$ 1.33872 $2\times160$s 6.90 $18.1^{+0.2}_{-0.1}$ $I$ 1.53672 $2\times160$s 3.92 $18.3^{+0.3}_{-0.2}$ $I$ 1.75560 $1\times160$s 2.80 $18.0^{+0.4}_{-0.3}$ $I$ 2.07528 $5\times160$s 4.51 $18.4^{+0.3}_{-0.2}$ $I$ 2.46480 $5\times160$s 3.76 $18.6\pm0.3$ $I$ 2.83080 $6\times160$s 8.36 $18.64\pm0.1$ $I$ 3.21504 $7\times160$s 12.1 $18.72\pm0.09$ $I$ 3.59496 $7\times160$s 13.7 $18.71\pm0.08$ $I$ 4.10328 $11\times160$s 12.8 $18.99^{+0.09}_{-0.08}$ $I$ 4.81200 $12\times160$s 9.54 $193\pm0.1$ $I$ 5.63112 $16\times160$s 
6.02 $19.2\pm0.2$ $I$ 24.3192 $47\times160$s 1.89 $21^{+0.7}_{-0.4}$ : Skynet - Filter $I$ \[tab:skynetI\] Filter Time (h) Exposure time S/N Mag. (Vega) ------------ ---------- --------------- ------ ---------------------- $g^\prime$ 0.03312 $1\times20$s 3.76 $17.4\pm0.3$ $g^\prime$ 0.25032 $1\times80$s 4.45 $18.3^{+0.3}_{-0.2}$ $g^\prime$ 0.69912 $1\times80$s 2.16 $19.5^{0.6}_{-0.4}$ : Skynet - Filter $g^\prime$ \[tab:skynetgp\] Filter Time (h) Exposure time S/N Mag. (Vega) ------------ ---------- --------------- ------ ------------------------ $r^\prime$ 0.05688 $1\times20$s 6.15 $16.9\pm0.2$ $r^\prime$ 0.15576 $1\times80$s 10.6 $17.4\pm0.1$ $r^\prime$ 0.30168 $1\times160$s 11.8 $17.66^{+0.1}_{-0.09}$ $r^\prime$ 0.50952 $1\times160$s 8.21 $18.0\pm0.1$ $r^\prime$ 0.56088 $1\times80$s 5.14 $18.0\pm0.2$ : Skynet - Filter $r^\prime$ \[tab:skynetrp\] [^1]: Based on observations carried out under prog. ID 090.A-0088(B) with the X-shooter spectrograph installed at the Cassegrain focus of the Very Large Telescope (VLT), Unit 2 – Kueyen, operated by the European Southern Observatory (ESO) on Cerro Paranal, Chile. Also used are observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association, and the Gran Telescopio Canarias (program GTC67-13B), both at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. HAWK-I imaging used is part of the program 092.A-0076(B). This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. [^2]: <http://www.ast.cam.ac.uk/~rfc/vpfit.html> [^3]: This procedure does not aim at reproducing the observed telluric spectrum, but simply reject suspect telluric lines from the Voigt-profile fit. [^4]: We use the [@calzetti] law, which is an attenuation law for star burst galaxies, where the [@pei] laws are relevant for lines of sight towards point-sources inside galaxies where light is lost due to both absorption and scattering out of the line of sight. [^5]: We assume solar metallicity, not to provide a physical description of the absorbers, but purely to let $N$(H)$_\text{X}$ conform to the standard solar reference. The reference solar abundances used are from [@wilms].
--- abstract: 'We say that a lattice point $X=(x_1,\ldots,x_m)$ is visible from the origin if $\gcd(x_1,...,x_m)=1$. In other words, there is no other lattice point on the line segment from the origin $O$ to $X$. From J.E. Nymann’s result [@Ny72], we know that the number of lattice points visible from the origin in $[-r,r]^m$ is $(2r)^m/\zeta(m)+$(error term). We show that the exact order of the error term is $r^{m-1}$ for $m\ge3$.' author: - Wataru Takeda date: | Department of Mathematics,\ Kyoto University,\ Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan title: The exact order of the number of lattice points visible from the origin ---

Introduction
============

The problem of counting the lattice points visible from the origin, and the associated probability problem, is well known. Let $\mathbb{V}^m=\{x=(x_1,\ldots,x_m)\in\mathbb{Z}^m~|~\text{$x$ is visible from the origin}\}$. It is well known that the cardinality of the set $\mathbb{V}^m\cap\{(x_1,...,x_m)\in\mathbb{Z}^m~|~|x_i|\le r\ (1\le i\le m)\}$ is $$\frac{2^m}{\zeta(m)}r^m+\left\{ \begin{array}{ll} O(r\log r)&(m=2)\\ O(r^{m-1})& (m\ge3), \end{array} \right.$$ where $\zeta$ is the Riemann zeta function. F. Mertens proved the case $m=2$ in 1874 [@Me74] and J. E. Nymann proved the case $m\ge3$ in 1972 [@Ny72]. In this article, as a generalization of the result of Nymann, we study the number of elements of $$\mathbb{V}^m\cap\{(x_1,...,x_m)\in\mathbb{Z}^m~|~|x_i|\le r\ (1\le i\le m)\}.$$ Let $V_m(r)$ denote $|\mathbb{V}^m\cap\{(x_1,...,x_m)\in\mathbb{Z}^m~|~|x_i|\le r\ (1\le i\le m)\}|$ and let $E_m(r)$ denote the error term, i.e. $$E_m(r)=V_m(r)-\frac{2^m}{\zeta(m)}r^m.$$ We obtain a generating function of $V_m(r)$ and the exact order of $E_m(r)$ for $m\ge3$. More precisely, we prove the following theorem. If $m\ge3$, $$E_m(r)=\Omega(r^{m-1}).$$ Combining Nymann’s result [@Ny72] with this theorem, the exact order of magnitude of $E_m(r)$ is $r^{m-1}$ for all $m\ge3$.

The Jordan totient function $J_m(n)$
====================================

In [@Ap76], it is proved that $$V_2(r)=8\sum_{n\le r}\varphi(n).$$ We follow this approach and use the Jordan totient function $J_m(n)$ to obtain the value of $V_m(r)$. For $m\ge 1$ we define $$\begin{aligned} J_m(n):=&|\{(x_1,...,x_m)\in\mathbb{Z}^m~|~\gcd(x_1,...,x_m,n)=1,1\le x_i\le n\ (1\le i\le m)\}|.\\ \intertext{For $m=0$ we define} J_0(n):=&\left\{ \begin{array}{rl} 1&(n=1)\\ 0&(n\neq1) \end{array} \right.\end{aligned}$$ Since $J_1(n)=|\{x_1\in\mathbb{Z}~|~\gcd(x_1,n)=1,1\le x_1\le n\}|=\varphi (n)$, we regard $J_m(n)$ as a generalization of $\varphi(n)$. The Euler totient function $\varphi(n)$ satisfies $$\varphi(n)=\sum_{d|n} \mu(d)\frac nd=n\prod_{\substack{p|n\\p:\text{prime}}}\left(1-\frac1p\right),$$ where $\mu(d)$ is the Möbius function. Similarly, $J_m(n)$ satisfies the following lemma. \[thm:2.2\] $\displaystyle{J_m(n)=\sum_{d|n} \mu(d)\left(\frac nd\right)^m=n^m\prod_{\substack{p|n\\p:\text{prime}}}\left(1-\frac1{p^m}\right).}$ It suffices to show that $\displaystyle{n^m=\sum_{d|n}J_m(d)}$, because the Möbius inversion formula then gives the assertion. Let $S$ denote $\{(x_1,...,x_m)\in \mathbb{Z}^m~|~1\le x_i\le n\ (1\le i\le m)\}$. Let $S(d)$ be $\{(x_1,...,x_m)\in \mathbb{Z}^m~|~\gcd(x_1,...,x_m,n)=d,1\le x_i\le n\ (1\le i\le m)\}$, where $d$ divides $n$. Then $S$ is the disjoint union $\displaystyle{S=\bigcup_{d|n}S(d)}$. Moreover, $\gcd(x_1,...,x_m,n)=d$ if and only if $\displaystyle{\gcd\left(\frac {x_1}d,...,\frac {x_m}d,\frac nd\right)=1}$, so $\displaystyle{|S(d)|=J_m\left(\frac nd\right)}$.
Hence we can write $$n^m=|S|=\sum_{d|n}|S(d)|=\sum_{d|n}J_m\left(\frac nd\right)=\sum_{d|n}J_m(d).$$ The first equality in the Lemma follows. Let us prove the other equality. If $n=1$ the product is empty and is assigned the value $1$. If $n\ge 2$, we observe that $$\sum_{d|n} \frac {\mu(d)}{d^m}=\prod_{\substack{p|n\\p:\text{prime}}}\left(1-\frac1{p^m}\right).$$ Thus the other equality in the Lemma also follows.

Generating function of $V_m(r)$
===============================

\[thm:main\] The generating functions of $V_m(r)$ are the following: $$\begin{aligned} \sum_{m=0}^{\infty}\frac{u^m}{m!}V_m(r)&=\frac1{2u}(e^{(2X+1)u}-e^{(2X-1)u})\\ \intertext{and} \sum_{m=0}^{\infty}u^{m+1}V_m(r)&=\frac12\log \frac{1-(2X-1)u}{1-(2X+1)u}, \\ \intertext{where $\displaystyle{i\sum_{n \le r}J_{i-1}(n)}$ is replaced by $X^i$ when $i\ge1$, and $X^0$ is assigned the value $0$.}\end{aligned}$$ It suffices to show that $$V_m(r)=\frac1{2(m+1)}\{(2X+1)^{m+1}-(2X-1)^{m+1}\}.$$ Let $V_m^+(r)$ denote $|\mathbb{V}^m\cap\{(x_1,...,x_m)~|~0< x_i\le r\ (1\le i\le m)\}|$ for $m\ge1$. Considering the signs of the components, we have $$V_m(r)=\sum_{i=0}^{m-1}\binom mi 2^{m-i}V_{m-i}^+(r).$$ Let $A_m^+(n)=V_m^+(n)-V_m^+(n-1)$; then $$V_m^+(r)=\sum_{2 \le n \le r}A_m^+(n)+1.$$ We compute $A_m^+(n)$ in a combinatorial way as follows. Fix $i$ with $0\le i<m$. Fix $I\subset\{1,\ldots,m\}$ such that $|I|=m-i$, and let $$V=\{(x_1,\ldots,x_m)\in\mathbb{V}^m~|~x_j=n\text{ for all }j\in I, 0<x_j\le n\text{ for all }j\not\in I\}.$$ Then $|V|=J_{i}(n)$. There are $\displaystyle{\binom m{m-i}}$ ways to choose $I\subset\{1,\ldots,m\}$ with $|I|=m-i$. So, summing over all such $I$, the points $(x_1,\ldots,x_m)\in \mathbb{V}^m$ with $|\{j~|~x_j=n \}|\ge m-i$ are counted $$\binom m{m-i} J_{i}(n)$$ times in total. However, a point $(x_1,\ldots,x_m)\in \mathbb{V}^m$ with $|\{j~|~x_j=n \}|=k$ is counted $\displaystyle{\binom k{m-i}}$ times for each $i\ (m-k\le i\le m-1)$. We can show that $\displaystyle{(-1)^{m-1}\sum_{i=m-k}^{m-1}(-1)^{i}\binom k{m-i}=1}$, so we can count all points in $\mathbb{V}^m$ without repetition and obtain $\displaystyle{A_m^+(n)=\sum_{j=0}^{m-1}(-1)^{m-1-j}\binom {m}jJ_j(n)}.$ $$\begin{aligned} \intertext{Thus we get} V_m^+(r)&=\sum_{2\le n \le r}\sum_{j=0}^{m-1}(-1)^{m-1-j}\binom {m}jJ_j(n)+1,\\ \intertext{and} V_m(r)&=\sum_{i=0}^{m-1}\binom mi 2^{m-i}\left(\sum_{2\le n \le r}\sum_{j=0}^{m-i-1}(-1)^{m-i-1-j}\binom {m-i}jJ_j(n)+1\right). \\ \intertext{For $j\ge 1$, $X^j$ is defined by $\displaystyle{j\sum_{n \le r}J_{j-1}(n)}$. To use this notation, we add the term $n=1$ to the above sum. By the definition of $J_j(n)$, we have $J_j(1)=1$ for all $j$, and $\displaystyle{\sum_{j=0}^{m-i-1}(-1)^{m-i-1-j}\binom {m-i}j=1}$ by the binomial theorem.
Using this we have} V_m(r)&=\sum_{i=0}^{m-1}\binom mi 2^{m-i}\left(\sum_{n \le r}\sum_{j=0}^{m-i-1}(-1)^{m-i-1-j}\binom {m-i}jJ_j(n)\right)\\ &=\sum_{i=0}^{m-1}\binom mi 2^{m-i}\left(\sum_{j=0}^{m-i-1}(-1)^{m-i-1-j}\binom {m-i}j\sum_{n \le r}J_j(n)\right).\\ \intertext{By the definition of $X^j$, we find $\displaystyle{\binom {m-i}j \sum_{n \le r}J_j(n)=\binom {m-i}j\frac{X^{j+1}}{j+1}=\frac1{m-i+1}\binom {m-i+1}{j+1}X^{j+1}}$; this gives} &=\sum_{i=0}^{m-1}\binom mi 2^{m-i}\frac 1{m-i+1}\sum_{j=0}^{m-i-1}(-1)^{m-i-1-j}\binom {m-i+1}{j+1}X^{j+1}.\\ \intertext{Replacing the index $j+1$ by $j$, we obtain} &=\sum_{i=0}^{m-1}\binom mi 2^{m-i}\frac 1{m-i+1}\sum_{j=1}^{m-i}(-1)^{m-i-j}\binom {m-i+1}j X^j.\end{aligned}$$ Because the terms with $i=m$, $i=m+1$ and $j=0$ only contribute constant multiples of $X^0$ (which is assigned the value $0$), we may extend the ranges of summation and write $$\begin{aligned} V_m(r)&=\sum_{i=0}^{m+1}\binom mi 2^{m-i}\frac 1{m-i+1}\left(X^{m-i+1}-\sum_{j=0}^{m-i+1}(-1)^{m-i+1-j}\binom {m-i+1}j X^j\right).\\ \intertext{Applying the binomial theorem repeatedly, we get} &=\sum_{i=0}^{m+1}\binom mi 2^{m-i}\frac 1{m-i+1}\left\{X^{m-i+1}-(X-1)^{m-i+1}\right\}\\ &=\frac 1{2(m+1)}\sum_{i=0}^{m+1}\binom {m+1}i\left\{(2X)^{m-i+1}-(2X-2)^{m-i+1}\right\}\\ &=\frac 1{2(m+1)}\left\{(2X+1)^{m+1}-(2X-1)^{m+1}\right\}.\end{aligned}$$ This proves the theorem.

The value of $\displaystyle{\sum_{n \le r}J_{i-1}(n)}$
======================================================

In this paper, we use the $\Omega$ symbol introduced by G.H. Hardy and J.E. Littlewood. This symbol is defined as follows: $$f(x)=\Omega(g(x))\Longleftrightarrow\limsup_{x\rightarrow\infty}\left|\frac{f(x)}{g(x)}\right|>0.$$ If there exists a function $g(x)$ such that $f(x)=O(g(x))$ and $f(x)=\Omega(g(x))$, then the exact order of $f(x)$ is $g(x)$. \[thm:4.1\] Let $\{x\}$ be the fractional part of $x$. If $m \ge 2$, $$\sum_{d\le r}\mu(d)\left(\frac rd\right)^m\left\{\frac rd\right\}=\Omega(r^m).$$ It suffices to show that $\displaystyle{\sum_{d\le r}\frac {\mu(d)}{d^m}\left\{\frac rd\right\} \le M<0}$ for infinitely many values of $r$ and some negative constant $M$.
$$\begin{aligned} \intertext{If $m \ge 4$ and $r$ is an odd integer greater than or equal to $3$,} \sum_{d\le r}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}&=-\frac1{2^{m+1}}+\sum_{3\le d\le r}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}.\\ \intertext{Since $\mu(d)\in\{1,0,-1\}$ and $\displaystyle{\left\{\frac rd\right\}\le 1}$,} &\le -\frac1{2^{m+1}}+\sum_{3\le d\le r}\frac 1{d^m}\\ &\le-\frac1{2^{m+1}}+\zeta(m)-1-\frac 1{2^m}<0,\\ \intertext{since when $m \ge 4$, $\displaystyle{\zeta(m)-1-\frac1{2^m}<\frac1{2^{m+1}}}$.} \intertext{So for $m \ge 4$ the lemma follows.} \intertext{Suppose now that $m=2$ or $m=3$ and $\displaystyle{r=k\prod_{p\le 100}p}$, where the product is taken over all odd primes $p$ less than $100$ and $k$ is not divisible by $2$ nor by any of these primes.} \intertext{Then,} \sum_{d\le r}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}&\le \sum_{d=1}^{100}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}+\sum_{d=101}^{\infty}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}.\\ \intertext{Since $\mu(d)\in\{1,0,-1\}$ and $\displaystyle{\left\{\frac rd\right\}\le 1}$,} &\le -\frac1{2^{m+1}}+\sum_{d=3}^{100}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}+\sum_{d=101}^{\infty}\frac 1{d^m}.\end{aligned}$$ Because we defined $\displaystyle{r=k\prod_{p\le 100}p}$, we obtain $$\begin{aligned} \sum_{d=3}^{100}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}&=\sum_{\substack{3\le p\le 47\\ p:\text{prime}}}\frac 1{(2p)^m}\cdot\frac 12-\frac12\left(\frac1{30^m}+\frac1{42^m}+\frac1{66^m}+\frac1{78^m}+\frac1{70^m}\right)\\ &<\frac25\times\frac1{10^{m-1}}.\end{aligned}$$ From this result and $\displaystyle{\sum_{d=101}^{\infty}\frac 1{d^m}\le \frac1{100^{m-1}}}$, we find $$\begin{aligned} \sum_{d\le r}\mu(d)\frac 1{d^m}\left\{\frac rd\right\}&\le -\frac1{2^{m+1}}+\frac25\times\frac1{10^{m-1}}+\frac1{100^{m-1}}\\ &<-\frac1{20},\end{aligned}$$ so for $m=2$ or $m=3$ the lemma follows. This completes the proof of the lemma. \[thm:4.2\] If $i \ge 3$, $\displaystyle{X^i=\frac1{\zeta(i)}r^i+\Omega(r^{i-1})}$. For $i\ge 3$, $X^i$ is defined by $\displaystyle{i\sum_{n \le r}J_{i-1}(n)}$. $$\begin{aligned} \intertext{From Lemma \ref{thm:2.2} we know that $\displaystyle{J_{i-1}(n)=\sum_{d|n} \mu(d)\left(\frac nd\right)^{i-1}}$, so} X^i&=i\sum_{n \le r}\sum_{d|n} \mu(d)\left(\frac nd\right)^{i-1}.\\ \intertext{We write $n=dq$ and sum over all pairs of positive integers $d,q$ with $dq\le r$, thus} X^i&=i\sum_{dq \le r} \mu(d)q^{i-1}.\\ \intertext{Changing the order of summation,} X^i&=i\sum_{d \le r}\mu(d)\sum_{q\le \frac rd}q^{i-1}.\\ \intertext{Applying the relationship between the Bernoulli numbers $B_0,B_1(=\displaystyle{\frac12}),B_2,\ldots$ and the power sum $1^k+2^k+\cdots+n^k$, that is, $\displaystyle{\sum_{q=1}^n q^{i-1} =\frac1i\sum_{j=0}^{i-1}\binom ij B_j n^{i-j}}$,} X^i&=i\sum_{d \le r}\mu(d)\frac 1i \sum_{j=0}^{i-1} \binom ij B_j\left[\frac rd\right]^{i-j},\\ \intertext{where $[x]$ is the greatest integer less than or equal to $x$.
Now we use the relation $[x]=x-\{x\}$,} X^i&=\sum_{d \le r}\mu(d)\sum_{j=0}^{i-1} \binom ij B_j\left(\frac rd-\left\{\frac rd\right\}\right)^{i-j}.\end{aligned}$$ We note that $$\begin{aligned} \left|\sum_{d\le r}\frac {\mu(d)}{d^m}\left\{\frac rd\right\}^k\right|&\le \sum_{d\le r}\frac 1{d^m}=\left\{ \begin{array}{cl} \zeta(m)+O(r^{1-m})& (m\ge2)\\ \log(r)+\gamma+o(1) & (m=1), \end{array} \right.\end{aligned}$$ where $\gamma$ is Euler’s constant, defined by the equation $$\gamma=\lim_{n\rightarrow\infty}\left(\sum_{k=1}^n\frac1k-\log n\right).$$ Combining this result with Lemma \[thm:4.1\] and $\displaystyle{\sum_{d\le r}\frac{\mu(d)}{d^i}=\frac1{\zeta(i)}+O\left(\frac1{r^{i-1}}\right)}$, where $i>1$, we get $$\begin{aligned} X^i&=r^i\sum_{d \le r}\frac {\mu(d)}{d^i}+\Omega(r^{i-1})\\ &=\frac1{\zeta(i)}r^i+\Omega(r^{i-1}). \intertext{This proves the lemma.}\end{aligned}$$

The exact order of magnitude of $E_m(r)$
========================================

Using the lemmas of the last section together with Theorem \[thm:main\], we prove the following theorem about the exact order of magnitude of $E_m(r)$. If $m\ge3$, $$E_m(r)=\Omega(r^{m-1}).$$ $$\begin{aligned} \intertext{From Theorem \ref{thm:main}, } V_m(r)&=\frac1{2(m+1)}\{(2X+1)^{m+1}-(2X-1)^{m+1}\}\\ &=(2X)^{m}+O(X^{m-2}).\\ \intertext{Applying Lemmas \ref{thm:4.1} and \ref{thm:4.2}, we find} V_m(r)&=\frac{2^m}{\zeta(m)}r^m+\Omega(r^{m-1}).\end{aligned}$$ Combining Nymann’s result [@Ny72] with this theorem, the exact order of magnitude of $E_m(r)$ is $r^{m-1}$ for all $m\ge3$.

[99]{} F. Mertens, Ueber einige asymptotische Gesetze der Zahlentheorie, J. Reine Angew. Math. 77 (1874), 289–338. J. E. Nymann, On the probability that $k$ positive integers are relatively prime, J. Number Theory 4 (1972), 469–473. T. M. Apostol, Introduction to Analytic Number Theory, Springer-Verlag, New York, 1976.
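As a purely illustrative numerical check of the counting function $V_m(r)$ and of the size of the error term $E_m(r)$, the following brute-force sketch (in Python; the choice $m=3$ and the small values of $r$ are arbitrary) compares $V_3(r)$ with the main term $(2r)^3/\zeta(3)$:

```python
from math import gcd
from functools import reduce
from itertools import product

def V(m, r):
    """Brute-force count of the lattice points visible from the origin in [-r, r]^m."""
    return sum(1 for x in product(range(-r, r + 1), repeat=m)
               if reduce(gcd, x) == 1)

zeta3 = 1.2020569031595943   # zeta(3)
for r in (5, 10, 20):
    v = V(3, r)
    main = (2 * r)**3 / zeta3
    print(f"r={r:2d}  V_3(r)={v:6d}  main term={main:9.1f}  E_3(r)={v - main:8.1f}")
```

The difference $E_3(r)$ should be observed to grow roughly like $r^{2}$, in line with the exact order $r^{m-1}$ established above.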
--- abstract: 'The large values of the singlet and triplet two-nucleon scattering lengths locate the nuclear system close to the unitary limit. This particular position strongly constrains the low-energy observables of the three-nucleon system, which then depend on a single parameter, the triton binding energy, and it introduces correlations in the low-energy sector of light nuclei. Here we analyze the propagation of these correlations to infinite nuclear matter, showing that its saturation properties, the equation of state of $\beta$-stable nuclear matter and several properties of neutron stars, such as their maximum mass, are well determined solely by a small number of low-energy quantities of the two- and three-nucleon systems. In this way we make a direct link between the universal behavior observed in the low-energy region of few-nucleon systems and fundamental properties of nuclear matter and neutron stars.' author: - 'A. Kievsky$^1$, M. Viviani$^1$, D. Logoteta$^1$, I. Bombaci$^{1,2}$, and L. Girlanda$^{3,4}$' title: 'Correlations imposed by the unitary limit between few-nucleon systems, nuclear matter and neutron stars' ---

[*Introduction.*]{} The unitary limit, characterized by the divergence of the $s$-wave two-body scattering length $a$, is a critical point at which the two-body system has no scale. As the system approaches this limit, it presents a continuous scale invariance. The two-body scattering length appears as a control parameter: it determines the low-energy observables with a functional dependence dictated by dimensional analysis. For identical particles the three-boson system presents a discrete scale symmetry, governed by the size of a particular three-body state, and shows the Efimov effect at the unitary point. The three-boson spectrum is then determined by the control parameter $a$ and the three-body parameter $\kappa_*$, the binding momentum of the selected state. All these features, collected in what is now called Efimov physics, are intensively studied from an experimental [@ferlaino2011; @machtey2012; @roy2013; @dyke2013] as well as a theoretical point of view (for recent reviews see Refs. [@report; @frederico2011]). The systems inside this window (the Efimov window) have many striking properties characterized by their insensitivity to the particular form of the interaction: they show universal behavior. In atomic physics the study of Efimov physics is based on the experimental ability to tune the scattering length using Feshbach resonances. It is interesting to notice that nuclear physics is naturally close to the unitary limit [@koenig2017]. In fact, the deuteron as well as the virtual $^1S_0$ state are very shallow two-nucleon states. The energy scale from which their energy can be estimated is not directly related to $r_N$, the range of the nuclear force, but to the two-body scattering length in the corresponding spin channel: $|E_S|\approx \hbar^2/(m a_S^2)$, with $S=0,1$ and $m$ the nucleon mass, where the singlet and triplet scattering lengths, $a_0$ and $a_1$, are both large with respect to the typical length of the interaction, $r_N\approx1.4\;$fm. Recent studies of nuclear systems as Efimov systems can be found in Refs. [@koenig2017; @kievsky2016] up to $A=4$. It emerges that the triton, $^3$He and $^4$He are the lowest Efimov states at the particular values of the ratio $r_S/a_S$, with $r_S$ the effective range.
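To make this statement quantitative, the scales involved can be estimated directly from $|E_S|\approx \hbar^2/(m a_S^2)$. A minimal numerical sketch follows; the scattering lengths and $\hbar^2/m$ are standard values assumed here, since they are not quoted explicitly in the text.

```python
# Order-of-magnitude illustration of the proximity of the two-nucleon system
# to the unitary limit, |E_S| ~ hbar^2/(m a_S^2).
hbar2_over_m = 41.47        # hbar^2/m for the nucleon, in MeV fm^2 (assumed value)
lengths = {
    "singlet (virtual 1S0 state), a_0": -23.7,   # fm (assumed standard np value)
    "triplet (deuteron), a_1":            5.42,  # fm (assumed standard np value)
    "interaction range, r_N":             1.4,   # fm, as quoted in the text
}
for label, L in lengths.items():
    print(f"{label:36s}  hbar^2/(m L^2) ~ {hbar2_over_m / L**2:6.2f} MeV")
# The two shallow states come out at ~0.07 MeV and ~1.4 MeV, far below the
# natural scale hbar^2/(m r_N^2) ~ 21 MeV set by the range of the force.
```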
The quantities $a_S$ and $r_S$ appear as control parameter and finite-size parameter, respectively, whereas the binding momentum of $^3$H, $\kappa_T$, is the three-body parameter. The binding energy of $^4$He, $B(^4{\rm He})=\hbar^2\kappa_\alpha^2/m$, can be deduced from the universal ratio $\kappa_\alpha/\kappa_T\approx 1.9$, considering finite size and Coulomb corrections [@koenig2017]. The question that we want to discuss here is how the constraints imposed by the location of the nuclear system close to the unitary limit propagate with the number of particles. In particular we will analyze the saturation properties of nuclear matter (NM) determined solely by $a_S$, $r_S$ and $\kappa_T$ and, more importantly, the equation of state (EoS) of $\beta$-stable nuclear matter and the corresponding properties of neutron stars (NSs), in particular those of their maximum mass configuration. In this way we introduce a strict correlation between a small number of low-energy observables in the $A=2,3,4$ systems and fundamental properties of NM and NSs. To carry out this study we make use of the effective field theory (EFT) framework with and without pions. In the latter case (pionless EFT) the leading order (LO) has been studied in a series of articles [@bedaque1999; @bedaque02] showing that it consists of two contact terms plus a contact three-body interaction needed to stabilize the three-nucleon system against the Thomas collapse. The pionless EFT is closely connected to the pioneering work by V. Efimov more than 40 years ago [@efimov1; @efimov2]. The LO parameters of the theory can be used to fix the low-energy parameters $a_S$ and $\kappa_T$, with the consequence that the $^4$He binding energy is also well reproduced, resulting in an energy per particle of about $7$ MeV. This quantity compares well with the average binding energy per particle along the nuclear chart of around $8$ MeV, which has a peak of approximately $8.8$ MeV at the $^{56}{\rm Fe}$ nucleus. Accordingly the pionless LO maintains its character along the nuclear chart, describing correctly the threshold at which nuclei bind beyond clusterization into $\alpha$’s. The binding beyond these thresholds may well be considered a higher-order effect that a LO description need not address in detail. As an example we can mention the $\alpha+d$ threshold of $^6{\rm Li}$, the three- and four-$\alpha$ thresholds of $^{12}{\rm C}$ and $^{16}{\rm O}$, and so on. Calculations using pionless two- and three-body potentials beyond LO will further clarify this point [@kirscher2010; @lensky2016; @girlanda2011]. Correlations imposed by the unitary limit between few- and many-body systems have been discussed in the context of EFT in Ref. [@vankolck]. Recent studies at LO have been done in light and medium-mass nuclei [@stetcu; @bansal; @contesi] and boson systems [@bira]. Correlations between the triton and nuclear matter can be found in Ref. [@delfino]. Clusters of bosons close to the unitary limit have been studied using a LO EFT-inspired potential in Refs. [@kievsky2011; @gatto2011; @gatto2012; @kievsky2017b].

[*The LO EFT-inspired potential.*]{} In the following we define our LO EFT-inspired potential and fix the associated low-energy constants from the low-energy data in the two- and three-body systems. The two-body potential we use, which includes all LO interactions from EFT and some of their important finite-range corrections, is $$V^{2N}_{LO}= V_{sr} +V_\pi$$ where $V_{sr}$ is the short-range interaction and $V_\pi$ is the one-pion-exchange potential (OPEP).
The short-range interaction is a regularized contact interaction and has a spin dependence. It can be written as $$V_{sr}=C_0 V_0{\cal P}_{01} + C_1 V_1 {\cal P}_{10} \;\; ,$$ where ${\cal P}_{ST}$ is a projector onto the total spin-isospin state $S,T$ of two nucleons. Using a local gaussian regulator, the two potentials $V_0$ and $V_1$ have the following form $$V_S= V_S(r)= \frac{1}{\pi^{3/2}d_S^3} e^{-r^2/d_S^2}$$ and $V_\pi$ is the regularized OPEP potential $$V_\pi(r)= {\bm \tau}_1\cdot{\bm \tau}_2 \left[ {\bm \sigma}_1\cdot{\bm \sigma}_2 Y_\beta(r)+ S_{12} T_\beta(r)\right]$$ with the central and tensor factors ($x=m_\pi r$) $$\begin{gathered} Y_\beta(x)= \frac{g_A^2 m_\pi^3}{12\pi F_\pi^2}\frac{e^{-x}}{x}(1-e^{-r^2/\beta^2}) \\ T_\beta(x)= \frac{g_A^2 m_\pi^3}{12\pi F_\pi^2}\frac{e^{-x}}{x}(1+\frac{3}{x}+\frac{3}{x^2})(1-e^{-r^2/\beta^2})^2\;\; . \end{gathered}$$ With this choice [@kievsky2017], the two-body potential $V^{2N}_{LO}$ has the form of the LO pionless EFT potential ($\beta\rightarrow\infty$) and tends continuously to the LO potential in chiral perturbation theory. The strengths $C_S$ and ranges $d_S$ are designed to reproduce the $np$ scattering length and effective range, $a_S$ and $r_S$, in the channels $S,T = 0,1$ and $1,0$ for different values of the regulator $\beta$. Due to the shallow character of the deuteron and of the virtual ${}^1S_0$ state, this procedure automatically fixes the correct binding of these two states. We now extend our analysis to the three- and four-nucleon systems. The effective potential is $$V_{LO}^{2N+3N}=\sum_{i<j}\left[V_{sr}(i,j)+V_\pi(i,j)\right] +\sum_{cyclic} W(i,j,k)$$ where we have considered the possibility of a (regularized) contact three-body term of the form $$W(i,j,k)=W_0 e^{-r_{ij}^2/r_3^2} e^{-r_{ik}^2/r_3^2} \;\; . \label{eq:w30}$$ We calculate the $^3$H and $^4$He energies, $B(^3{\rm H})$ and $B(^4{\rm He})$, for different values of $r_3$. In each case the strength $W_0$ is fixed to reproduce $B(^3{\rm H})$. The results are shown in Fig. 1 for different values of the OPEP regulator $\beta$. As $\beta \rightarrow \infty$ the LO pionless theory is recovered, whilst the lowest value, $\beta=1.0\;$fm, is an extreme case, well inside the region $\beta<1/m_\pi\approx 1.5\;$fm corresponding to the formation of the OPEP tail. From the figure we observe that at low values of the range $r_3$ the curves are close to and slightly below the experimental binding of $28.3\;$MeV, whereas for $r_3 > 3\;$fm the curves tend to be above $30$ MeV. The $\beta=1\;$fm curve remains very stable and limits the two regions from above (for the lowest $r_3$ values) and from below (for the highest $r_3$ values). The analysis with equal values of the two-body and three-body ranges, $d_0=d_1=r_3$, has been done in Ref. [@kievsky2017]. Here we vary the three-body range $r_3$ independently of the two-body ranges, which are fixed in the two-body sector by the effective range values. The interest of the present study is to analyze the impact of the low-energy properties $a_S$, $r_S$, $B(^3{\rm H})$ and $B(^4{\rm He})$ on the determination of NM properties. Keeping the two-body parameters and $B(^3{\rm H})$ fixed, the three-body range $r_3$ allows for a simultaneous description of $B(^3{\rm H})$ and $B(^4{\rm He})$. It helps to construct a curve of the energy per particle of symmetric nuclear matter with a saturation point as close as possible to the empirical one.
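For concreteness, the radial functions entering $V^{2N}_{LO}$ and the three-body term $W(i,j,k)$ defined above can be coded in a few lines. The sketch below is purely illustrative: the strengths $C_S$ and $W_0$ are placeholders (their fitted values are not quoted here), the unit conventions adopted for these strengths are an assumption, and standard values of $g_A$, $F_\pi$ and $m_\pi$ are used.

```python
import numpy as np

# Standard constants assumed here (not quoted explicitly in the text):
hbarc = 197.327                     # MeV fm
gA, Fpi, mpi = 1.27, 92.4, 138.0    # axial coupling; F_pi and m_pi in MeV

def v_short_range(r, C, d):
    """Regularized contact term C exp(-r^2/d^2)/(pi^(3/2) d^3); the strength C
    (taken in MeV fm^3) is a placeholder, to be fitted to a_S and r_S."""
    return C * np.exp(-(r / d)**2) / (np.pi**1.5 * d**3)

def opep_Y(r, beta):
    """Central radial function Y_beta of the regularized OPEP (in MeV)."""
    x = mpi * r / hbarc
    pref = gA**2 * mpi**3 / (12.0 * np.pi * Fpi**2)
    return pref * np.exp(-x) / x * (1.0 - np.exp(-(r / beta)**2))

def opep_T(r, beta):
    """Tensor radial function T_beta of the regularized OPEP (in MeV)."""
    x = mpi * r / hbarc
    pref = gA**2 * mpi**3 / (12.0 * np.pi * Fpi**2)
    return (pref * np.exp(-x) / x * (1.0 + 3.0 / x + 3.0 / x**2)
            * (1.0 - np.exp(-(r / beta)**2))**2)

def w_three_body(r_ij, r_ik, W0, r3):
    """Regularized three-body contact term W0 exp(-r_ij^2/r3^2) exp(-r_ik^2/r3^2);
    W0 is a placeholder strength, fitted to B(3H) in the text."""
    return W0 * np.exp(-(r_ij / r3)**2) * np.exp(-(r_ik / r3)**2)

r = np.linspace(0.2, 4.0, 5)        # fm
print(opep_Y(r, beta=2.0))          # the OPEP tail is switched on for r >~ beta
print(opep_T(r, beta=2.0))
```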
Specifically at $\beta \rightarrow \infty$ the two-body ranges are $d_0=1.83\;$fm, $d_1=1.56\;$fm and $B(^4{\rm He})$ is well reproduced with $r_3=1.5\;$fm. Instead at $\beta =2\;$fm, $d_0=1.54\;$fm, $d_1=1.39\;$fm and $r_3=1.7\;$fm.

[*Nuclear matter*]{}. We next discuss the application of the two- and three-nucleon forces, derived in the previous section, to the case of NM. To calculate the energy per nucleon $E/A$ of NM we make use of the Brueckner–Bethe–Goldstone (BBG) quantum many-body theory (see e.g. [@bbg1; @bbg2] and references therein), considering contributions up to the two-hole-line level, the so-called Brueckner–Hartree–Fock (BHF) approximation. The BHF approximation incorporates in an exact way the two-particle correlations via a self-consistent determination of the $G$ matrix and the single-particle auxiliary potential $U(k)$, for which we use the continuous choice [@jeuk+67; @baldo+90]. As shown in [@song98; @baldo00], the contribution of the three-hole-line diagrams is minimized in this prescription, indicating a fast convergence of the hole-line expansion for $E/A$. In our calculations the three-nucleon force has been reduced to an effective density-dependent two-body force by averaging over the coordinates (momentum, spin and isospin) of one of the nucleons, as described in Ref. [@LBK2]. The energy per particle $E/A$ of symmetric nuclear matter (SNM) is shown in Fig. \[fig2\] for various parametrizations of the two- and three-body forces. In each panel, for a fixed value of the OPEP regulator $\beta$ of the two-body force, we show the saturation curve (i.e. $E/A$ as a function of the nucleonic density $\rho$) of SNM obtained using four different values of the three-nucleon force range $r_3$. The empirical saturation point of SNM ($\rho_{0} = 0.16 \pm 0.01~{\rm fm}^{-3}$, $E/A|_{\rho_0} = -16.0 \pm 1.0~{\rm MeV}$) is denoted by a gray box in each panel of Fig. \[fig2\].

  ---------- ------- ------------- ----------------- ------------- ------- ------------
  $\beta$    $r_3$   $\rho_0$      $E/A|_{\rho_0}$   $E_{sym}^0$   $L$     $K_\infty$
  (fm)       (fm)    (fm$^{-3}$)   (MeV)             (MeV)         (MeV)   (MeV)
  $\infty$   1.4     0.151         -16.11            35.20         70.2    251
  10         1.35    0.150         -15.65            34.92         69.8    251
  5          1.25    0.160         -15.80            36.16         71.0    247
  2          1.15    0.173         -14.83            36.37         67.9    209
  1.8        1.15    0.176         -14.74            36.33         67.0    205
  1          1.5     0.179         -14.20            35.02         58.5    203
  ---------- ------- ------------- ----------------- ------------- ------- ------------

  : Nuclear matter properties at the calculated saturation density $\rho_0$ (3$^{rd}$ column) for different combinations of the interaction model parameters $\beta$ and $r_3$. $E/A|_{\rho_0}$ is the SNM saturation energy, $E_{sym}^0$ the symmetry energy, $L$ the symmetry energy slope parameter and $K_\infty$ the SNM incompressibility. \[tab1\]

The calculated saturation points for the “best” saturation curve in each panel of Fig. \[fig2\] are reported in Tab. \[tab1\]. We note that the empirical saturation point of SNM is adequately reproduced by the first three entries, ($\beta = \infty$, $r_3 = 1.4\;$fm), ($\beta = 10\;$fm, $r_3 = 1.35\;$fm) and ($\beta = 5\;$fm, $r_3=1.25\;$fm), in Tab. \[tab1\]. For smaller values of $\beta$, the empirical saturation point of SNM cannot be reproduced. However, even in this case, optimizing the value of the parameter $r_3$, a reasonable saturation point is obtained (see the last three entries in Tab. \[tab1\]). The calculated saturation points of SNM for various interaction models are shown in the left panel of Fig. \[fig3\].
Here the empirical saturation point is denoted by a yellow box. Notice that for fixed $\beta$, the saturation points of the various interaction models show an almost linear dependence on the three-nucleon force range $r_3$ (the value of $r_3$ increases from the bottom to the top of each line). All together the calculated saturation points locate a narrow band, the so called Coester band [@coester70; @day81], which for the interaction models used goes through the empirical saturation point. The hatched zone in this panel collects calculations in which the two-body and three-body ranges have been taken equal $d_0=d_1=r_3$. In this case the two-body scattering lengths, $a_S$, and $B(^3{\rm H})$ have been kept fixed by proper values of the strengths $C_S$ and $W_0$. These results give rise to a much broader Coester band, with saturation points that vary in a much larger range and do not overlap with the empirical saturation point of SNM contrary to the calculations in which only the three-body range $r_3$ is varied. These results show the importance of tuning the $r_3$ parameter. The nuclear symmetry energy $E_{sym}$, and particularly its density dependence, is an important physical quantity which regulates the properties of asymmetric NM (i.e. matter with $\rho_n \neq \rho_p$, with $\rho_n$ and $\rho_p$ being the neutron and proton densities respectively). The symmetry energy can be obtained [@bl91] taking the difference between the energy per nucleon $E/A$ of pure neutron matter and the one of SNM at a given total nucleon number density $\rho = \rho_n + \rho_p$. The symmetry properties of NM around the saturation density $\rho_0$ are summarized by the value of $E_{sym}^0 \equiv E_{sym}(\rho_0)$ and by the value of the so called symmetry energy slope parameter $$L = 3 \rho_{0} \frac{\partial E_{sym}(\rho)}{\partial \rho}\Big|_{\rho_{0}} \label{slope}$$ It has been shown [@latt2013] that a strong correlation between the values of $E_{sym}^0$ and $L$ can be deduced in a nearly model-independent way from nuclear binding energies. In addition, it has been recently demonstrated [@tews2017] that the unitary gas limit [@zwier2015], which can be used to describe low density neutron matter, puts stringent constraints on the possible values of the symmetry energy parameters, excluding an ample region in the $E_{sym}^0$–$L$ plane (the white region on the left of the line in the right panel of Fig. \[fig3\]). As pointed out by the authors of Ref. [@tews2017] several EOS models currently used in astrophysical simulations of supernova explosions and binary neutron star mergers violate the unitary gas bounds. Thus the unitary gas model can be used as a novel way to constrain dense matter EOS in astrophysical applications [@endrizzi]. The values of $E_{sym}^0$ and $L$ calculated for our “best” interaction models are reported in Tab. \[tab1\] and are plotted in the right panel of Fig. \[fig3\]. As one can see our calculated $E_{sym}^0$ and $L$ are totally compatible with the unitary gas bound (the gray zone in the right panel of Fig. \[fig3\]) proposed in Ref. [@tews2017]. Next, in the last column of Tab. \[tab1\], we report the incompressibility of SNM $$K_\infty = 9 \rho_{0}^2 \, \frac{\partial^2 E/A}{\partial \rho^2}\Big|_{\rho_{0}} \label{incom}$$ at the calculated saturation point for each of the interaction models listed in Tab. \[tab1\]. 
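As an illustration of how the quantities collected in Tab. \[tab1\] follow from the saturation curves, the sketch below extracts $\rho_0$, $E/A|_{\rho_0}$, $E_{sym}^0$, $L$ and $K_\infty$ from tabulated $E/A$ values for SNM and pure neutron matter, using spline interpolation and the two definitions given above. The tabulated numbers are placeholders chosen only to make the script run; in practice they would be the BHF results for a given $(\beta, r_3)$ pair.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Placeholder tables: E/A (MeV) versus density rho (fm^-3) for symmetric
# nuclear matter (SNM) and pure neutron matter (PNM).  Illustrative only.
rho    = np.array([0.08, 0.10, 0.12, 0.14, 0.16, 0.18, 0.20, 0.24, 0.28])
ea_snm = np.array([-10.5, -12.4, -13.9, -15.1, -15.8, -16.0, -15.7, -13.9, -10.8])
ea_pnm = np.array([  9.5,  11.0,  12.6,  14.4,  16.4,  18.6,  21.0,  26.5,  33.0])

snm = CubicSpline(rho, ea_snm)
pnm = CubicSpline(rho, ea_pnm)
esym = lambda d: pnm(d) - snm(d)          # E_sym(rho) = E/A_PNM - E/A_SNM

# Saturation point: minimum of the SNM curve.
res  = minimize_scalar(snm, bounds=(rho[0], rho[-1]), method="bounded")
rho0 = float(res.x)
e0   = float(snm(rho0))

# Symmetry energy, slope parameter L = 3 rho0 dE_sym/drho, and
# incompressibility K_inf = 9 rho0^2 d^2(E/A)/drho^2, evaluated at rho0.
esym0 = float(esym(rho0))
L     = 3.0 * rho0 * float(pnm(rho0, 1) - snm(rho0, 1))
Kinf  = 9.0 * rho0 ** 2 * float(snm(rho0, 2))

print(f"rho0 = {rho0:.3f} fm^-3, E/A = {e0:.2f} MeV, "
      f"E_sym0 = {esym0:.2f} MeV, L = {L:.1f} MeV, K_inf = {Kinf:.0f} MeV")
```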
Our calculated values for $K_\infty$ are in very good agreement with the empirical value $K_\infty = 210 \pm 30$ MeV [@blaizot76] or, more recently, $K_\infty = 240 \pm 20$ MeV [@shlo06], extracted from experimental data on giant monopole resonance energies in medium-mass and heavy nuclei. Finally, for the three interaction models which reproduce the empirical saturation point of SNM (first three entries in Tab. \[tab1\]) we have calculated the EoS of $\beta$-stable NM (see e.g. [@prak97; @BL2018]) and then integrated the stellar structure equations in general relativity for non-rotating stars. The results of our calculations for the stellar maximum mass configuration are reported in Tab. \[tab\_NS\]. It should be noticed that the neutron star matter EOSs for the interaction models listed in Tab. \[tab\_NS\] are all compatible with presently measured NS masses, and particularly with the mass $M = 2.01 \pm 0.04 \, M_{\odot}$ [@anto13] of the NS in PSR J0348+0432 and $M = 2.27^{+0.17}_{-0.15} \, M_{\odot}$ [@linares] of the NS in PSR J2215+5135.

  $\beta$ (fm)   $r_3$ (fm)   $M_{max}$ ($M_\odot$)   $R$ (km)   $\rho_c$ (fm$^{-3}$)
  -------------- ------------ ----------------------- ---------- ----------------------
  $\infty$       1.40         2.52                    11.64      0.84
  10             1.35         2.52                    11.68      0.82
  5              1.25         2.46                    11.29      0.89

  : Neutron star properties for the maximum mass configuration for the interaction parameters reported in the first two columns. $M_{max}$ is the stellar gravitational maximum mass (in units of the solar mass, $M_\odot = 1.989\times 10^{33}$ g), $R$ is the corresponding radius and $\rho_c$ the central nucleonic density.[]{data-label="tab_NS"}

[*Conclusions*]{}. We have analyzed correlations between observables in the few-nucleon sector, NM and NSs caused by the location of the nuclear system close to the unitary limit. The LO EFT-inspired potential, based on pionless EFT plus a regularized OPEP term, has been constructed to describe two-body low-energy observables and the triton binding energy. The three-body range, $r_3$, was allowed to vary and, in terms of this quantity and for different regularizations of the OPEP, we have calculated $B(^4{\rm He})$, the energy per particle of SNM (and the corresponding saturation point), the nuclear symmetry energy, its slope parameter, the SNM incompressibility and the EoS of $\beta$-stable NM. Unexpectedly, this very simple potential, in particular for regulator values that make the OPEP contribution small, reproduces many of the mentioned observables. Moreover, the EoS of $\beta$-stable matter produces neutron star configurations with a maximum mass compatible with presently measured neutron star masses [@anto13; @linares]. This analysis indicates that the unitary limit, which controls the universal aspects of the two-body physics, introduces severe constraints on the nuclear system, taking priority over the precise description of the two-nucleon data up to high energies. Due to the simplicity of the model, the values of $r_3$ at which these results are obtained are slightly smaller than the best values needed for describing $B(^4{\rm He})$.
The feature that the requested three-body range be shorter than the two-body ranges can be understood from the way the contact interactions are regularized: if the momentum transfers ${\bf k_1}$ and ${\bf k_2}$ of two particles are limited to a range $\Lambda$, the third one, ${\bf k_3}$, constrained by momentum conservation to be the sum of the two, may take larger values; with gaussian cutoffs its width would be a factor $\sqrt{2}$ larger, implying the necessity of shorter-range ranges for the regularized three-body contact interaction. The main result of this study is to put in evidence the direct connection between many-body observables and the deuteron and the $S=0$ virtual state scales given by $a_S$ and the triton binding energy whose value fixes the strength of the three-body potential $W_0$. This result extends the analysis of Refs. [@koenig2017; @kievsky2017] to infinite nuclear systems showing that fundamental many-body properties are controlled by the position of the nuclear system close to the unitary limit. [10]{} F. Ferlaino, A. Zenesini, M. Berninger, B. Huang, H.C. N" agerl, and R. Grimm, Few-Body Syst. **51**, 113 (2011) O. Machtey, Z. Shotan, N. Gross, and L. Khaykovich, Phys. Rev. Lett. [**108**]{}, 210406 (2012) S. Roy, M. Landini, A. Trenkwalder, G. Semeghini, G. Spagnolli, A. Simoni, M. Fattori, M. Inguscio, and G. Modugno, Phys. Rev. Lett. [**111**]{}, 053202 (2013) P. Dyke, S.E. Pollack, and R.G. Hulet, Phys. Rev. A [**88**]{}, 023625 (2013) E. Braaten and H.-W. Hammer, Phys. Rep. [**428**]{}, 259 (2006) T. Frederico, L. Tomio, A. Delfino, M. R. Hadizadeh, and M.T. Yamashita, Few-Body Syst. **51**, 87 (2011) S. König, H. W. Grießhammer, H.-W. Hammer, and U. van Kolck, Phys. Rev. Lett. [**118**]{}, 202501 (2017) A. Kievsky and M. Gattobigio, Few-Body Syst. [**57**]{}, 217 (2016) P.F. Bedaque, H.W. Hammer and U. van Kolck, Phys. Rev. Lett. [**82**]{}, 463 (1999) P.F. Bedaque and U. van Kolck, Ann. Rev. Nuc. Part. Sci. [**52**]{}, 339 (2002) V. Efimov, Phys. Lett. B [**33**]{}, 563 (1970) V. Efimov, Sov.J. Nucl. Phys. [**12**]{}, 589 (1971), J. Kirscher, H.W. Grießhammer, D. Shukla and H.M. Hofmann, Eur. Phys. J. [**A 44**]{}, 239 (2010) V. Lensky, M.C. Birse, N.R. Walet, Phys. Rev. C [**94**]{}, 034003 (2016) L. Girlanda, A. Kievsky and M. Viviani, Phys. Rev. C [**84**]{}, 014001 (2011). U. van Kolck, Few-Body Syst. [**58**]{}, 112 (2017) I. Stetcu, B.R. Barrett and U. van Kolck, Phys. Lett. [**B653**]{}, 358 (2007) A. Bansal, S. Binder, A. Ekström, G. Hagen, G. R. Jansen, and T. Papenbrock, arXiv:1712.10246 \[nucl-th\] L. Contessi, A. Lovato, F. Pederiva, A. Roggero, J. Kirscher, U. van Kolck, Phys. Lett. [**B772**]{}, 839 (2017) J. Carlson, S. Gandolfi, U. van Kolck, and S. A. Vitiello, Phys. Rev. Lett. [**119**]{}, 223002 (2017) A. Delfino, T. Frederico, V. S. Timóteo, L. Tomio, Phys. Lett. [**B634**]{}, 185 (2006) A. Kievsky, E. Garrido, C. Romero-Redondo and P. Barletta, Few-Body Syst. [**51**]{}, 259 (2011). M. Gattobigio, A. Kievsky and M. Viviani, Phys. Rev. A [**84**]{}, 052503 (2011). M. Gattobigio, A. Kievsky and M. Viviani, Phys. Rev. A [**86**]{}, 042513 (2012). A. Kievsky, A. Polls, B. Juliá-Díaz, N.K. Timofeyuk, Phys. Rev. [**A96**]{}, 040501(R) (2017) A. Kievsky, M. Viviani, M. Gattobigio and L. Girlanda, Phys. Rev. C [**95**]{}, 024001 (2017). B.D. Day, Rev. Mod. Phys., 39, 719 (1967). M. Baldo and G.F. Burgio, Progr. Phys., 75, 026301 (2012) J. P. Jeukenne, A. Lejeunne and C. Mahaux, Phys. Rep., 25, 83 (1976) M. Baldo, I. Bombaci, G. 
Giansiracusa, U. Lombardo, C. Mahaux and R. Sartor, Phys. Rev. C, 41, 1748 (1990) H.Q. Song, M. Baldo, G. Giansiracusa and U. Lombardo, Phys. Rev. Lett., 81, 1584 (1998) M. Baldo, G. Giansiracusa, U. Lombardo and H.Q. Song, Phys. Lett. B, 473, 1 (2000) D. Logoteta, I. Bombaci, and A. Kievsky, Phys Rev. C [**94**]{}, 064001 (2016). F. Coester, S. Cohen, B. Day, and C.M. Vincent, Phys. Rev. C [**1**]{}, 769 (1970). B. Day, Phys. Rev. Lett. [**47**]{}, 226 (1981). I. Bombaci, U. Lombardo, Phys. Rev. C [**44**]{}, (1991) 1892. J.M. Lattimer, and Y. Lim, Astrophys. J. [**771**]{}, 51 (2013) I. Tews, J. M. Lattimer, A. Ohnishi, and E. E. Kolomeitsev, Astrophys. J. [**848**]{}, 105 (2017). M.W. Zwierlein, in Novel Superfluids, Vol. 2, Oxford Uni. Press. (2015). A. Endrizzi et al., arXiv:1806.09832 \[astro-ph\] J.P. Blaizot, D. Gogny and B. Grammaticos, Nucl. Phys. A [265]{}, 315 (1976). S. Shlomo, V.K. Kolomietz, G. Colò, Eur. Phys. J. A [**30**]{}, 23 (2006). M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer and R. Knorren, Phys. Rep. [**280**]{}, 1 (1997). I. Bombaci, and D. Logoteta, Astron. and Astrophys. [**609**]{}, A128 (2018). J. Antoniadis et al., Science [**340**]{}, 1233232 (2013). M. Linares, T. Shehbaz, and J. Casares, Astrophys. J. [**859**]{}, 54 (2018)
--- abstract: 'The Gemini Deep Deep Survey (GDDS) is an ultra-deep ($K<20.6$ mag, $I<24.5$ mag) redshift survey targeting galaxies in the “redshift desert” between $1<z<2$. The primary goal of the survey is to constrain the space density at high redshift of evolved high-mass galaxies. We obtained 309 spectra in four widely-separated 30 arcmin$^2$ fields using the Gemini North telescope and the Gemini Multi-Object Spectrograph (GMOS). The spectra define a one-in-two sparse sample of the reddest and most luminous galaxies near the $I-K$ vs. $I$ color-magnitude track mapped out by passively evolving galaxies in the redshift interval $0.8<z<1.8$. This sample is augmented by a one-in-seven sparse sample of the remaining high-redshift galaxy population. The GMOS spectrograph was operating in a Nod & Shuffle mode which enabled us to remove sky contamination with high precision, even for typical exposures times of 20–30 hours per field. The resulting spectra are the deepest ever obtained. In this paper we present our sample of 309 spectra, along with redshifts, identifications of spectral features, and photometry. This makes the GDDS the largest and most complete infrared-selected survey probing the redshift desert. The 7-band ($VRIzJHK_s$) photometry is taken from the Las Campanas Infrared Survey. The infrared selection means that the GDDS is observing not only star-forming galaxies, as in most high-redshift galaxy surveys, but also quiescent evolved galaxies. In our sample, we have obtained 225 secure redshifts, 167 of which are in the redshift interval $0.8 < z < 2$. About 25% of these show clear spectral signatures of evolved (pure old, or old + intermediate-age) stellar populations, while 35% of show features consistent with either a pure intermediate-age or a young + intermediate-age stellar population. About 29% of the galaxies in the GDDS at $0.8 < z < 2$ are young starbursts with strong interstellar lines. A few galaxies show very strong post-starburst signatures. Another 55 objects have less secure redshifts, 31 of which lie in the redshift interval $0.8 < z < 2$. The median redshift of the whole GDDS sample is $z=1.1$. Spectroscopic completeness varies from a low of $\sim70\%$ for red galaxies to $>90\%$ for blue galaxies. In this paper we also present, together with the data and catalogs, a summary of the criteria for selecting the GDDS fields, the rationale behind our mask designs, an analysis of the completeness of the survey, and a description of the data reduction procedures used. All data from the GDDS are publicly available.' author: - 'Roberto G. Abraham' - Karl Glazebrook - 'Patrick J. McCarthy' - 'David Crampton, Richard Murowinski' - 'Inger J[ø]{}rgensen, Kathy Roth' - 'Isobel M. Hook' - Sandra Savaglio - 'Hsiao-Wen Chen' - 'Ronald O. Marzke' - 'R. G. Carlberg' title: 'The Gemini Deep Deep Survey: I. Introduction to the Survey, Catalogs and Composite Spectra' --- INTRODUCTION {#sec:introduction} ============ The Gemini Deep Deep Survey (GDDS) is an infrared-selected ultra-deep spectroscopic survey probing the redshift range $0.8<z<1.8$. It is designed to target galaxies of all colors at high redshift with an emphasis on the reddest population. The survey is designed with the following scientific goals in mind: (1) Measurement of the space density and luminosity function of massive early-type galaxies at high redshift. (2) Construction of the volume-averaged stellar mass function in at least three mass bins and two redshift bins over the target redshift range. 
(3) Measurement of the luminosity-weighted ages and recent star-formation histories of $\sim 50$ evolved galaxies at $z>1$ [cf. @dun96]. The over-arching goal of the survey is to use these sets of observations to test hierarchical models for the formation of early-type galaxies. Many studies (see @ell01 for a recent review) have probed the evolving space density of early-type systems and it is now clear that the number density of early-types does not evolve rapidly out to $z=1$, as once predicted by matter-dominated models (see @ell01 for a review). However, $\Lambda$-dominated cosmologies push back the formation epoch of most early-type systems out to at least $z=1$ even in a hierarchical picture. Alternative theories for the origin of early-type galaxies (e.g. high-z monolithic collapse [*vs.*]{} hierarchical formation from mergers) now start to become readily distinguishable at exactly the redshift ($z=1$) where spectroscopy from the ground becomes problematic [@kau98]. Our focus on the redshift range $0.8<z<1.8$ is motivated by two additional considerations. Firstly, the star-formation histories of individual galaxies in this redshift range have been very poorly explored. We do not even know whether most red objects in this range are old and quiescent, or very young and active and heavily reddened by dust. Distinguishing between these two possibilities requires high signal-to-noise in the continuum so that the characteristic photospheric features of evolved stars become evident, but most work in this redshift range has focused on emission lines. Secondly, and irrespective of model predictions, this redshift range appears to correspond to the peak epoch of galaxy assembly inferred by integrating under the ‘Madau/Lilly plot’, an observationally-defined diagram quantifying the volume-averaged star-formation history of the Universe as a function of redshift [@mad96; @lil96; @ste99]. The high-redshift tail of this plot is subject to large and uncertain dust and surface-brightness corrections, and remains rather poorly determined, and recent observations have pushed back the peak of star-formation, showing a broad maximum in redshift (for a summary of the observational situation, see Figure 2 in Nagamine et al. 2003). However, the integral under the Madau/Lilly plot is simply the total mass assembled in stars per unit volume, so by measuring this quantity directly in the GDDS we can undertake a basic consistency check of the overall picture inferred from global star-formation history and luminosity density diagrams. Spectroscopy of galaxies in the redshift range we seek to probe suffers from technical challenges brought on by the lack of strong spectral features at visible wavelengths. The redshift range $1<z<2$ has come to be known as the “redshift desert”, in reference to the paucity of optical redshifts known over this interval. Fortunately, it is now becoming clear that redshifts and diagnostic spectra [*can*]{} be obtained using optical spectrographs in this redshift range, by focusing on rest-frame UV metallic absorption features. Good progress is now being made in obtaining redshifts for UV-selected samples in the redshift desert using the blue-sensitive LRIS-B spectrograph on the Keck telescope [@ste03; @erb03]. 
However, UV-selected surveys are biased in favor of high star-formation rate galaxies, and the passive red galaxies with high mass-to-light ratios that are missed by UV-selection could well dominate the high-$z$ galaxy mass budget, motivating deep $K$-selected surveys such as the VLT K20 survey [@cim03], and ultra-deep small area surveys such as FIRES [@fra03]. We refer the reader to @cim04 for an excellent summary of recent results obtained from infrared-selected surveys probing high-redshift galaxy evolution. The GDDS is designed to build upon these results. Because their rest-UV continuum is so weak, determining the redshifts of passive red galaxies at $z\sim 1.5$ with 8m-class telescopes presently requires extreme measures. Ultra-deep ($>10$ hour) integration times and Poisson-limited spectroscopy are required in order to probe samples of red galaxies with zero residual star-formation and no emission lines. This poses a severe problem, because MOS spectroscopy with 8m-class telescopes is generally not Poisson-limited unless exposure times are short (less than a few hours). The main contributors to the noise budget are imperfect sky subtraction and fringe removal. At optical wavelengths, both of these problems are most severe redward of 7000Å, where most of the light from evolved high redshift stellar populations is expected to peak. To mitigate against these effects, the Gemini Deep Deep Survey team has implemented a Nod & Shuffle sky-subtraction mode [@gla01; @cui94; @bla95] on the Gemini Multi-Object Spectrograph [@mur03; @hoo03]. This technique is somewhat similar to beam-switching in the infrared, and allows sky subtraction and fringe removal to be undertaken with an order of magnitude greater precision than is possible with conventional spectroscopy. In order to undertake an unbiased inventory of the high-redshift galaxy population, the GDDS is $K$-band selected to a sufficient depth ($K = 20.6$ mag) to reach $L^\star$ throughout the $1 < z < 2$ regime. IR-selected samples that do not reach to this $K$-band limit are limited primarily to the $z < 1$ epoch, while samples that go substantially deeper outrun the capability of ground-based spectroscopic follow-up for the reddest objects, and once again become biased. The standard definition for an ‘Extremely Red Galaxy’, or ERG, is $I-K\ga 4$, a threshold which roughly corresponds to the expected color of an evolved dust-free early-type galaxy seen at $z\sim 1$. As will be shown below, the effective limit for obtaining absorption-line redshifts for red galaxies with weak UV continua is about $I=25$ mag with an 8m telescope. (Our GDDS spectroscopy — the deepest ever undertaken — has a magnitude limit of $I=24.5$ mag). Therefore at present it is only just possible to obtain a nearly complete census of redshifts for all evolved red objects in a $K\sim21$ imaging survey. Our strategy with the GDDS is to go deep enough to allow redshifts to be obtained for $L_\star$ galaxies irrespective of star-formation history at $z\sim 1.5$, while simultaneously covering enough area to minimize the effects of cosmic variance. Our sampling strategy (based on photometric redshift pre-selection to eliminate low-$z$ contamination) is different from that adopted by most other redshift surveys. In terms of existing surveys, the K20 survey [@cim03] is probably the closest benchmark comparison to the GDDS, although the experimental designs are very different, making the K20 and GDDS surveys quite complementary. 
The K20 survey has about twice as many redshifts as the GDDS, but because K20 survey does not preferentially select against low-redshift objects, most of these are at $z<1$. The GDDS has between two and three times as many redshifts as K20 in the interval $1.2<z<2$ (the precise number depending on the minimum acceptable redshift confidence class), and has a higher median redshift ($z\sim 1.1$ vs. $z\sim0.7$). The GDDS also goes about 0.6 mag deeper in $K$ and has over twice the area (121 square arcmin in four widely-separated site-lines in the GDDS vs. 52 square arcmin in two widely-separated site-lines in K20). A plan for this paper follows. In §2 we describe our experimental design, with a particular focus on how our targets were selected from the [*Las Campanas Infrared Survey*]{}. In §3 we outline our observing procedure, but defer the details of the Nod & Shuffle mode that are not specific to the GDDS to an Appendix. (Nod & Shuffle on the Gemini Multi-Object Spectrograph was implemented for use on the GDDS but is now a common-user mode. Since many observers may wish to use the mode themselves in contexts unrelated to faint galaxy observations, the Appendices to this paper will act as a stand-alone reference to using the mode on Gemini). In Section 4 we summarize the data obtained from the GDDS, both graphically and as a series of tables. Composite spectra obtained by co-adding similar spectra are presented in Section 5. We used these composite as templates to obtain redshifts in the GDDS, but others may wish to apply them to their own work for other purposes[^1]. Some implications from the data obtained are discussed and our conclusions given in Section 6. The major results from the GDDS will be presented in three companion papers[^2]. Appendix A describes the operation of the Nod & Shuffle mode on the Gemini Multi-Object Spectrograph [@hoo03] (GMOS) in the context of the GDDS. A more general description of the implementation of the mode will be given in Murowinski et al. (in preparation). Appendix B describes how the two-dimensional data from the GDDS were reduced. Appendix C describes how the final one-dimensional spectra were extracted from the two-dimensional data. The catalogs presented in this paper, as well as reduced spectra for all galaxies in the GDDS, are available in electronic form as a digital supplement to this article. All software described in this paper, as well as auxiliary data, are publicly available (in both raw and fully reduced form) from the central GDDS web site located at http://www.ociw.edu/lcirs/gdds.html. Throughout this paper we adopt a cosmology with $H_0$=70 km/s/Mpc, $\Omega_M=0.3$, and $\Omega_\Lambda=0.7$. EXPERIMENTAL DESIGN {#sec:design} =================== All galaxies observed in the GDDS were taken from seven-filter ($BVRIz^\prime JK$) photometric catalogs constructed as part of the one square-degree Las Campanas Infrared survey (LCIR survey; [@mcc01; @che02; @fir01]). The GDDS can be thought of as a sparse-sampled spectroscopically defined subset of the LCIR survey. The GDDS is comprised of four Gemini Multi-Object Spectrograph (GMOS) integrations, each with exposure times between 21 and 38 hours. Each GDDS field lies within a separate LCIR survey equatorial field. (The LCIR survey fields chosen were SSA22, NOAODW-Cetus, NTT-Deep, and LCIR 1511; the reader is referred to @che02 for information on the LCIR survey data available in these fields). Since the publication of @che02 additional imaging data has been acquired for these fields. 
Deep $K_s$ and, in some cases, $J$ imaging supplements the VVRIz’H data discussed in Chen et al. The 5$\sigma$ completeness limit of the LCIR survey fields is K$_s = 20.6$ mag (on the Vega scale). The 5.5 x 5.5 arcmin GMOS field of view is small relative to even a single 13 x 13 arcmin ‘tile’ of the LCIR survey (four such tiles constitute a single LCIR survey field). We therefore had considerable freedom to position the GMOS pointing within each LCIR survey field in areas which avoided very bright foreground objects and which had suitable guide stars proximate to the fields. We were also careful to pick regions in each field where the number of red galaxies was near the global average ([*i.e.*]{} we tried to avoid obvious over-densities and obvious voids; our success in achieving this will be quantified below). The four GMOS fields in our survey will be referred to as GDDS-SA22, GDDS-SA02, GDDS-SA12 and GDDS-SA15 for the remainder of this paper and in subsequent papers in this series. Taken together, these survey fields cover a total area of 121 square arcmin. The locations of each field and the total exposure time per spectroscopic mask are given in Table \[tab:overview\]. Finding charts for individual galaxies within the GDDS fields are shown in Figures \[fig:chart1\]–\[fig:chart4\].

  Field       RA (J2000)    Dec (J2000)    Slits   Mask                     Exposure (s)
  ----------- ------------- -------------- ------- ------------------------ -------------
  GDDS-SA02   02:09:41.30   -04:37:54.0    59      GN2002B-Q-1-1.fits       75600
  GDDS-SA12   12:05:22.17   -07:22:27.9    61      GN2003A-Q-1-1.fits       18000
  "           12:05:22.17   -07:22:27.9    74      GN2003A-Q-1-3.fits       57600
  GDDS-SA15   15:23:47.83   -00:05:21.1    59      GN2003A-Q-1-5.fits       70200
  GDDS-SA22   22:17:41.0    +00:15:20.0    83      GN2002BSV-78-14.fits     48600
  "           22:17:41.0    +00:15:20.0    62      GN2002BSV-78-15.fits     90000

![image](RobertoAbrahamFig1.eps){width="6in"}

![image](RobertoAbrahamFig2.eps){width="6in"}

![image](RobertoAbrahamFig3.eps){width="6in"}

![image](RobertoAbrahamFig4.eps){width="6in"}

At $z=1.2$ (the median redshift of our survey), the 5.5 arcmin angle subtended by each GMOS field of view corresponds to a physical size of 2.74 Mpc. The total comoving volume in the four GDDS ‘pencil beams’ over the redshift interval $0.8<z<1.8$ (the range over which $L^\star$ galaxies would be detected in the GDDS) is 320,000 ${\rm Mpc}^3$. Over this volume the effects of cosmic variance on random pointings are quite significant, especially for the highly clustered red objects in our survey whose correlation length is large ($\sim 10h^{-1}$ Mpc; @mcc01 [@dad00]). Fortunately, completely random pointings are unnecessary, because the global statistical properties of the LCIR survey (the GDDS’ parent population) are well defined. As mentioned earlier, we took advantage of this extra information when selecting our GDDS fields by ensuring that the areal density of red ($I-K>4$) galaxies was close to the ensemble average of such galaxies in the LCIR survey. Figure \[fig:selection\] compares the density contrast of red galaxies in each of our fields to the histogram obtained by measuring this same quantity in a series of random 5.5 x 5.5 arcmin boxes overlaid on the LCIRS field from which the corresponding GDDS field was selected. In GDDS-SA15 and GDDS-SA02 the number of red galaxies is essentially identical to the median number expected from the parent population. The number of red galaxies in GDDS-SA22 is somewhat lower than the global median, while the number in GDDS-SA12 is somewhat higher than the global median.
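As an aside, the survey geometry quoted above (a 2.74 Mpc transverse scale at $z=1.2$ and a total comoving volume of roughly 320,000 Mpc$^3$ over $0.8<z<1.8$) can be verified with a few lines of code. The sketch below is only a cross-check of those numbers; it assumes the astropy package and uses the $H_0=70$ km/s/Mpc, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$ cosmology adopted in this paper.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

# Proper transverse size subtended by one 5.5 arcmin GMOS field at z = 1.2.
size = (cosmo.kpc_proper_per_arcmin(1.2) * 5.5 * u.arcmin).to(u.Mpc)
print(f"Transverse size at z=1.2: {size:.2f}")   # ~2.74 Mpc

# Comoving volume of the four pencil beams (121 arcmin^2) over 0.8 < z < 1.8.
area_fraction = (121.0 * u.arcmin**2 / (4.0 * np.pi * u.sr)).decompose()
shell = cosmo.comoving_volume(1.8) - cosmo.comoving_volume(0.8)
print(f"Survey comoving volume: {(shell * area_fraction).to(u.Mpc**3):.3g}")  # ~3.2e5 Mpc^3
```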
Averaged over the four fields, the areal density of red systems in the GDDS is close to the typical value expected. ![image](RobertoAbrahamFig5.eps){width="7in"} The areal density of galaxies in the LCIR Survey with photometric redshifts in the range $1<z<2$ and $I<24.5$ mag is about 8 arcmin$^{-2}$, corresponding to about 250 galaxies in a typical GMOS field of view. Since this is about a factor of four larger than can be accommodated in a single mask with GMOS, even in in Nod & Shuffle microslit mode, it is impossible to target every candidate $1<z<2$ galaxy with a single mask. On the other hand, to reach our required depth on Gemini requires around 100ks of integration time, which means it is not practical to obtain many masks per field in a single semester. Fortunately, the areal density of red galaxies with $I-Ks>4$ and $I<24.5$ is only $\sim 1$ arcmin$^{-2}$, so it is at least possible (in principle) to target all [*red*]{} galaxies in the appropriate redshift range. We therefore adopted a sparse-sampling strategy based on color, apparent magnitude, and photometric redshift in order to maximize the number of targeted galaxies occurring in our desired redshift range[^3], with a particular emphasis on red galaxies. Targets were selected from the photometric catalogs on the basis of $K_s$ and $I$ magnitudes measured in $3''$ diameter apertures. Complete photometric catalogs for our sample will be presented below. Our sparse-sampling strategy is summarized graphically in Figure \[fig:lcirs\]. Each panel in this figure shows two-dimensional histograms of different quantities within the parameter space defined by our $(I-K_s)$ vs. $I$ photometry. The dashed line at the bottom-right corner of each panel denotes the region in this parameter space below which the $K_s$ band magnitudes of galaxies become fainter than the formal 5$\sigma$ $K_s=20.6$ magnitude limit of the survey. Non-detections have been placed at the detection limit in our master data tables (presented below), so the bunching up of galaxies in the boxes intersected by this line is mostly artificial (the counts are inflated by blue galaxies undetected in $K_s$). The top-left panel of Figure \[fig:lcirs\] shows the distribution in color-magnitude space of all galaxies in a 554.7 square arcmin subset of the LCIRS, corresponding to the parent ‘tiles’ of the LCIRS from within which our GDDS fields were chosen. The labeled track on this panel shows the expected position as a function of redshift of an M$^\star$ (assumed to be $M_K=-23.6$) galaxy formed in a 1 Gyr burst at redshift $z=10$. The position of this model old galaxy at several observed redshifts between $z=1.7$ and $z=0.7$ are marked with red dots and are labeled. Note the good agreement between the locus defined by the reddest galaxies in the LCIRS and the model track. Most galaxies should be bluer than our extreme old galaxy model, suggesting that the optimal area for a mass-limited survey targeting $0.8<z<1.8$ should be the region defined by $\{22<I<24.5,\ 3<(I-K_s)<5\}$. This basic conclusion is borne out by a comparison with the full photometric redshift distribution computed from our seven-filter photometry, shown in the top-right panel of Figure \[fig:lcirs\]. ![image](RobertoAbrahamFig6.eps){width="6.5in"} Our strategy for assigning slits to objects was therefore based on the following algorithm. 
Firstly, we assigned as many slits as possible to objects with firm $K_s$ detections ($K_s < 20.6$ mag), red $I-K_s$ colors ($(I - K_s) > 3.5$ mag) and photometric redshifts greater than 0.8. As seen in Figure \[fig:lcirs\], the red $(I-K_s)$ color criterion alone gives a strong, but not perfect, selection against redshifts below 0.8. Once the number of allocatable slits assigned to such objects was exhausted, we then assigned slits to objects with bluer $(I-K_s)$ colors but with firm $K_s$ detections and photometric redshifts beyond $z=0.7$. After doing this we started filling empty space on the masks with objects whose $K_s$ photometry fell below our $K_s$ detection limit, but whose photometric redshifts were greater than 0.7. Such objects are, as a rule, the easiest to get redshifts for but are our lowest priority, simply because our primary focus is to learn more about the high-mass systems likely to be missed by other surveys. Our masks were designed to maximize the number of spectra per field. With this goal in mind we used two tiers of low-dispersion spectra in our mask designs, and laid out our slits so as to use the ‘microslit’ Nod & Shuffle configuration described in Abraham et al. (2003) and in Appendix \[sec:nodandshuffle\]. Each slit was 2.2 arcsec long (corresponding to two 1.1 arcsec long pseudo-slits at the two Nod & Shuffle positions) and 0.75 arcsec wide (giving a spectral FWHM of $\simeq 17$ Å)[^4]. Additional room for charge storage on the detector must be allowed for in the mask design, so each slit has an effective footprint of 4.4 arcsec on the CCDs. Our two-tier mask design strategy allowed spectral orders to overlap in some cases. A classification system for describing these overlaps is presented in Table \[tab:collision\]. Each two-dimensional spectrum has been classified on this system and the results of this inspection are included in the master data table presented in this paper. Our strategy for dealing with order overlaps is described in detail in Appendix C, but it is worth noting here that only modest overlaps were allowed, and effort was made to ensure that top-priority objects suffered little or no overlap.

0 & Both A & B channels uncontaminated (at most very minor masking needed).\
1 & Single channel overlap. Offending channel not used (at most very minor masking needed).\
2 & A contaminating 0th-order line has been masked. Remaining continuum is trustworthy.\
3 & Two channel collision. Major masking used in extraction. Continuum in blue should not be trusted.\
4 & Two channel collision. Major masking used in extraction. Continuum in red should not be trusted.\
5 & Extreme measures needed to try to recover a spectrum. Continuum should not be trusted.\
\[tab:collision\]

The number of slits per mask (not counting alignment holes) ranged from 59 to 83 (see Table 1). The highest-density masks had a high proportion of low-priority (non-$K_s$-detected) objects and a somewhat greater degree of overlap. To achieve this high slit density, objects were distributed preferentially in two vertical bands to the left and right of the field center. Few objects were selected near the center of the field, as placing slits there reduced the multiplexing options for that portion of the field. Six masks were used over the four fields (GDDS-SA12 and GDDS-SA22 had two masks each). In total 398 target slits were cut into six masks, 323 of which were unique (since in the cases where two masks were used in the same field, most slits on the second mask were duplicates targeting the same galaxy as on the first mask.
The second mask was used simply because we had time between lunations to quickly determine preliminary redshifts and drop obvious low-redshift contaminants and replace these with alternate targets). Our spectroscopic completeness ([*i.e.*]{} our success rate in turning slits into measured redshifts) was around 80%, with considerable variation with both color and apparent magnitude. A detailed investigation of spectroscopic completeness will be given in §\[sec:completeness\]. The practical upshot of our general mask design strategy is graphically summarized in the bottom left panel of Figure \[fig:lcirs\]. This panel is a two-dimensional histogram showing the number of independent slits assigned each cell of color-magnitude space. For the reasons just described heavy emphasis is given to the $\{22<I<24.5,\ 3<(I-K_s)<5\}$ region of color-magnitude space. The relative number of slits as a function of the average population in each cell expected in a wide-area survey can be computed by dividing the bottom-left panel of the figure by the top-left panel. The values computed using this procedure are shown in the bottom-right panel, and correspond to [*sampling weights*]{}. These weights will prove important in the computation of the luminosity and mass functions in future papers in this series. The sampling weight for each galaxy in the GDDS is given in Table 4 which will be presented later in this paper. SPECTROSCOPY ============ Observations ------------ The spectroscopic data described in this paper were obtained using the Frederick C. Gillett Telescope (Gemini North) between the months of August 2002 and August 2003. Most of data were taken by Gemini Observatory staff using the observatory’s queue observing mode. Data were obtained only under conditions when the seeing was $<0.85$ arcsec, the cloud cover was such that any loss of signal was $<30\%$ at all times while on target, and the moon was below the horizon. Typical seeing was in the range 0.45–0.65 arcsec as measured in the $R$-band. A detailed description of the procedure used to obtain and reduce the data in the GDDS is given in Appendix A, and will only be outlined here. Our Nod & Shuffle observations switched between two sky positions with a cycle time of 120s, i.e. we spent 60s on the first position (with the object at the top of the slit) and 60s on second position (with the object at the bottom of the slit and charge shuffled down) in each cycle. Fifteen such cycles gave us an 1800s on-source integration which was read out and stored in an individual data frame. Sky subtraction is undertaken by simply shifting each 2D image by a known number of pixels and subtracting the shifted image from the original image. Multiple 1800s integrations were combined to build up the total exposure times given Table 1. Between exposures we dithered spatially by moving the detector and spectrally by moving the grating. As described in the appendices, these multiple dither positions allow the effect of charge traps and inter-CCD gaps to be removed from the final stack. By stacking spectra in the observed frame and analyzing strong sky lines (OI, OH) we measured typical sky residuals of 0.05% – 0.1%, even with $>30$ hour integrations. The GMOS instruments are imaging multi-slit spectrographs capable of spectroscopic resolutions ($\lambda/\Delta\lambda$) between 600 and 4000 over a wavelength range of 0.36–1.0. Different detectors are used on the Northern and Southern versions of the instrument. 
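The shift-and-subtract sky removal described above amounts to very little arithmetic once the shuffled frame is in hand. The following sketch, which assumes numpy and the 30-pixel shuffle distance used for most GDDS masks, is a schematic illustration of the idea rather than the actual GDDS pipeline.

```python
import numpy as np

def nod_shuffle_sky_subtract(frame, shuffle_pixels=30):
    """Subtract a Nod & Shuffle frame from a copy of itself shifted by the
    shuffle distance.  Object and sky are recorded through the same slit at
    the same detector position in the A and B charge blocks, so the sky
    cancels while the object survives as a positive and a negative spectrum.
    """
    shifted = np.roll(frame, -shuffle_pixels, axis=0)
    return frame - shifted

# Toy example: constant sky plus a faint object recorded at the A position
# (row 40) and, shuffled 30 rows below, at the B position (row 71).
rng = np.random.default_rng(1)
frame = 100.0 + rng.normal(0.0, 1.0, size=(128, 64))
frame[40, :] += 5.0   # object, A nod position (upper half of its slit)
frame[71, :] += 5.0   # same object, B nod position, shuffled charge block
diff = nod_shuffle_sky_subtract(frame, shuffle_pixels=30)
# Sky cancels: the object appears at ~+5 counts in row 40 and ~-5 counts in row 41.
print(diff[40].mean(), diff[41].mean(), diff[100].mean())
```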
On Gemini North the GMOS focal plane is covered by three 2048$\times$4608 EEV detectors which give a plate scale of 0.0727 arcsec per unbinned pixel. The GDDS observations were all taken with the R150\_G5306 grating in first order and the OG515\_G0306 blocking filter, giving a typical wavelength range of 5500–9200 Å. (In any multi-slit configuration the exact range depends on the geometric constraints of each individual slit.) This configuration gives a dispersion of 1.7 Å per unbinned pixel. Due to the high sampling of the CCD, all data were taken binned by a factor of two in the dispersion direction in order to reduce data volume and speed up readout, giving a final dispersion of 3.4 Å per binned pixel. Representative spectra from the survey are shown in Figures \[fig:spectra1\]–\[fig:spectra5\].

![image](RobertoAbrahamFig7.eps){height="6.5in"}

![image](RobertoAbrahamFig8.eps){height="6.5in"}

![image](RobertoAbrahamFig9.eps){height="6.5in"}

![image](RobertoAbrahamFig10.eps){height="6.5in"}

![image](RobertoAbrahamFig11.eps){height="6.5in"}

All of the multiple 1800s exposures (typically 30–60 spread across multiple nights) making up a mask observation were sky-subtracted (using the shuffled image) and combined into a single 2D frame called the ‘supercombine’. Full details are given in Appendix B. These were then extracted to 1D spectra using standard procedures, but using special software to allow efficient interactive assessment and adjustment (see Appendix C for more details).

Determination of Redshifts
--------------------------

The absence of artifacts from poor sky subtraction made redshift determination straightforward in the majority of cases. Assessment of the reality of weak features was also aided by the fact that the [iGDDS]{} software used for our analysis (described in Appendix \[appendix:oned\]) provides both one-dimensional and two-dimensional displays of the spectra. An advantage of the Nod and Shuffle technique is that both negative and positive versions of the two-dimensional spectra are recorded and consequently real features display a distinctive pattern that is easily recognized on the two-dimensional spectrum. The presence of strong emission and absorption features (e.g., \[O II\] $\lambda 3727$, \[O III\] $\lambda 5007$, CaII, MgII, D4000, hydrogen Balmer lines) immediately indicated the approximate redshift in the majority of cases for galaxies at $z<1.2$. At higher redshifts, \[OII\] can be seen in star-forming objects to $z=1.7$, along with the blue UV continuum and absorption lines (primarily MgII and FeII). For redder objects, once H&K become undetectable we relied on template matching, which proved to be an excellent aid to redshift estimation. Many such redshifts were obtained using the interactive template manipulation tools built into [iGDDS]{}. Good templates covering the 2000–3500 Å wavelength region for a variety of spectral types were not initially available and we eventually constructed some of our own (from galaxies with redshifts that were obvious from other features in their spectra). These templates are shown in Section \[sec:templates\], and proved to be invaluable, particularly for spectra that just exhibit broad absorption features and continuum shape variations in the observed spectral range. On spectra for which the redshifts were uncertain or indeterminate, cross-correlation against a variety of templates was used to suggest possible redshift/spectral feature matches.
These templates included Lyman break galaxies [@sha01], a composite starburst galaxy template (kindly supplied by C. Tremonti), the red galaxy composite from the Sloan Digital Sky Survey published in @eis03, and a selection of nearby galaxy spectra obtained from various sources (see Appendix C). Ultimately the most useful templates proved to be the ones constructed iteratively from the GDDS data itself and presented in §\[sec:templates\]. Since the continuum shape is an important indicator of galaxy redshifts but is not utilized in the cross-correlation technique, all our final redshift determinations were based on a combination of features, not just a cross-correlation peak.

0 & None & No redshift determined. If a redshift is given in Table 4 it should be taken as an educated guess.\
1 & $<50$% & Very insecure.\
2 & $>75$% & Reasonably secure. Two or more matching lines/features.\
3 & 95% & Secure. Two or more matching lines/features + supporting continuum.\
4 & Certain & Unquestionably correct.\
8 & & Single emission line. Continuum suggests line is \[O II\] $\lambda 3727$.\
9 & & Single emission line.\
1$n$ & & Class $n$ as above, but with AGN characteristics.\

Each object’s redshift was assigned a “confidence class”, based on the system adopted by Lilly et al. (1995) for the [*Canada-France Redshift Survey*]{}. This system is summarized in Table \[tab:confidence\]. The confidence class reflects the consensus probability (based on a quorum of at least five team members) that the assigned redshift is correct, and takes into consideration each spectrum’s signal-to-noise ratio, number of emission/absorption features, local continuum shape near prominent lines (e.g. \[O II\] $\lambda 3727$), and global continuum shape. We did not factor galaxy color into our redshift confidence classifications, which are independent of photometric redshift, although a post-facto inspection shows that the colors of essentially all single emission-line objects (classes 8 and 9) are consistent with the line being \[O II\] $\lambda 3727$. Redshift measurements for the GDDS sample are presented in Table 4, along with corresponding photometry for each galaxy. Note that in this table non-detections have been placed at the formal 2$\sigma$ detection limits and flagged with a magnitude error of $-9.99$. These detection limits are $B = 27.5$ mag, $V = 27.5$ mag, $R = 27.0$ mag, $I = 25.5$ mag, $z = 24.5$ mag, $H = 21.0$ mag and $K_s = 21.0$ mag.

Statistical Completeness {#sec:completeness}
------------------------

Our analysis of the statistical completeness of the GDDS is broken down into two components. Firstly, there is the component of completeness that quantifies the number of spectra obtained relative to the number of galaxies that could possibly have been targeted. We refer to this component of the completeness as the [*sampling efficiency*]{} of the survey. Secondly, there is the fraction of redshifts actually obtained relative to the number of redshifts attempted. We will refer to this as the [*spectroscopic completeness*]{} of the survey.

![image](RobertoAbrahamFig12.eps){width="5.5in"}

The sampling efficiency of the GDDS is investigated in Figure \[fig:samplingefficiency\]. This figure is essentially a visual summary illustrating the success of the mask design algorithm described in §\[sec:design\]. The figure shows a two-dimensional histogram quantifying the number of slits assigned to targets as a function of color and magnitude.
The sampling efficiency of each cell is keyed to the color bar also shown in the figure. As described earlier, our mask design strategy places heavy emphasis on targeting objects with colors and apparent magnitudes consistent with those of a passively evolving luminous galaxy at $0.8<z<1.8$. Slits were placed on essentially all red galaxies with apparent magnitudes around $I=23.5$ mag (corresponding to the expected brightness of an $M^\star$ galaxy at $z\sim1.3$). The number-weighted average of the sampling efficiency over the cells in the optimal region of this diagram (described in §\[sec:design\], and corresponding to $\{22<I<24.5,\ 3<(I-K_s)<5\}$) is 50%. Thus the GDDS can be thought of as a one-in-two sparse sample of the reddest and most luminous galaxies near the track mapped out by passively evolving high-redshift galaxies in $I$ vs. $I-K$. This sample is augmented by a one-in-seven sparse sample of the remaining galaxy population.

![image](RobertoAbrahamFig13.eps){width="5.5in"}

Our success in translating slits into redshifts is shown in Figure \[fig:completeness1\]. This diagram is a close analog of the previous figure, with the difference being that cell colors are keyed to spectroscopic completeness instead of sampling efficiency. Spectroscopic completeness is calculated simply by dividing the number of high-confidence redshifts (confidence class greater than or equal to 3) in each cell by the number of slits assigned in the same cell. As expected, spectroscopic completeness is a strong function of apparent magnitude. Spectroscopic completeness is 100% for objects brighter than $I=22$ mag, dropping to around 50% at $24.0<I<24.5$ mag (11 redshifts out of 20 attempted). The overall spectroscopic completeness of the GDDS depends on the minimum value of the redshift confidence that is considered acceptable. Slits were placed on 317 objects. Three of these slits contained two objects and thus redshift determinations were attempted for 320 objects. Of these, approximately 3% (11 objects) were so badly compromised by overlap or contamination that they were judged to be invalid. Of the 309 valid spectra, approximately 79% of the attempts resulted in moderately high confidence (classes 2 and 9) or very high confidence (classes 3, 4, and 8) redshifts. As will be shown in §\[sec:ultimate\], the great majority of these were in our target redshift range, with only a very modest contamination by very low redshift objects. (Twelve objects were found to be late-type Galactic stars, and 5 objects were found to be $z \sim 0.1$ extragalactic HII regions). An additional 10% of GDDS targets had very low confidence redshifts assigned (classes 1 and 0). The remaining 11% of targets had no redshifts assigned.

![image](RobertoAbrahamFig14.eps){width="6in"}

![image](RobertoAbrahamFig15.eps){width="6.5in"}

It is interesting to consider the main sources of spectroscopic incompleteness in the GDDS. The obvious gradient in incompleteness as a function of apparent magnitude seen in Figure \[fig:completeness1\] indicates that the main cause of incompleteness is photon starvation, particularly for red systems with little rest-UV flux. However, in some cases we were unable to obtain redshifts for relatively bright systems on account of overlapping spectral orders. Our mask design strategy was a trade-off between maximizing the number of objects we attempted to get redshifts for, and minimizing spectrum order overlaps. It is worthwhile to try to quantify the relative importance of these competing effects.
An attempt at doing this is illustrated in Figure \[fig:pristine\], which shows a comparison between the cell-to-cell completeness in our full sample and the corresponding cell-to-cell completeness in a sub-sample of “pristine” objects unaffected by spectrum overlaps. Note that the spectroscopic completeness of our red galaxy sample is nearly unchanged in both panels of this figure, while greater variation is seen in the blue population. As described in §\[sec:design\], our mask-design strategy was optimized for red galaxies, and a particular effort was made to avoid spectrum overlaps in this population in order to preserve continuum shape. On the other hand, our emphasis in laying down slits on blue galaxies was to use the detector area efficiently even at the expense of sometimes allowing spectrum overlaps to occur (since emission line redshifts can often be determined from these by studying the two-dimensional spectra). We therefore expected to see a somewhat greater variation in the cell-to-cell completeness of blue galaxies as a function of slit overlap class, and Figure \[fig:pristine\] is consistent with this. We also note that the extra freedom used when assigning slits to blue galaxies results in substantially greater field-to-field variation in the spectroscopic completeness of blue galaxies in the GDDS. The field-to-field completeness of the GDDS is shown in Figure \[fig:completenessFieldToField\]. Significant variations in the total completeness in each field are seen, ranging from a high of 87% in GDDS-SA02, to a low of 60% in GDDS-SA12. It is important to note that most of this field-to-field variation is in the blue population, and the cell-to-cell completeness near the red galaxy locus of the color-magnitude diagram in each panel of this figure remains quite stable. Spectral Type Classification ---------------------------- In addition to the redshift confidence class, each object was assigned a series of spectral classifications that record both the features that were used to determine the redshift and a subjective classification of the galaxy’s spectral type. Our spectral classifications are presented in Table 5. The first column notes whether any features indicative of AGN activity are seen in the spectrum (0 = no, 1 = yes). The next 11 entries specify the presence (1) or lack (0) of the most common spectral features used in the redshift determinations. A (2) in any of these columns indicates that the particular feature did not fall within the wavelength range of our spectra. In some cases the spectral range covered by a particulary object was reduced by overlap or contamination from other objects. The “template" column in Table 5 identifies those objects whose redshifts were based largely on a match to a template spectrum, either a composite from our own spectra (see below) or an external spectrum. It is emphasized that spectra on different masks had different exposure times, and order overlaps impacted some spectra more than others, so these feature visibility classifications should be used as a general guideline only and not over-interpreted. The last column in Table 5 lists the spectral class assigned to each object. This classification is based on three digits that flag young, intermediate age, and old stellar populations. Objects showing pure, or nearly pure, signatures of an evolved stellar population (e.g. D4000, H&K, or template matches) are assigned a class of “001". 
Objects that are dominated by the flat-UV continuum and strong emission-lines characteristic of star forming systems are assigned a “100" classification, those showing signatures of intermediate ages (e.g. strong Balmer absorption) are assigned a class of “010". Many objects show characteristics of more than one type and so are assigned classes that are the composite of old (001), intermediates (010) and young (100) populations. Objects listed as “101" may show strong H&K absorption and 4000Å  breaks and yet have a flat-UV continuum tail indicative of a low-level of ongoing star formation. SUMMARY OF REDSHIFTS AND SPECTRAL CLASSIFICATIONS {#sec:ultimate} ================================================= A graphical synopsis of the spectroscopic information in the GDDS is presented in Figure \[fig:ultimate\], which shows the number vs. redshift histogram for our sample, color-coded both by confidence class and by spectral class. The shading of the box shows the confidence class, the color of the label reflects the spectral class. A number of interesting aspects of our experiment are evident in Figure \[fig:ultimate\]. Firstly, despite our use of four widely separated fields, we remain significantly impacted by large-scale structure and sample variance. The factor of two deficit of objects at $z = 1.2$ cannot be the sign of failure to recognize objects at this redshift as we do considerably better at $z = 1.3$, a more difficult redshift. Secondly, it is clear that our success rate drops steeply at $z > 1.5$, where our fraction of high confidence redshifts drops from $> 90$% to $\sim 30$%. As described in §\[sec:completeness\], this is mostly because these objects, and the red ones in particular, are fainter than the average galaxy in the sample, particularly at wavelengths shortward of the $I$-band used to set the magnitude limit of the sample. The following summarizes the fractions of different stellar populations amongst objects with high-confidence redshifts in the GDDS. Approximately 15% of the objects observed showed spectra with pure old stellar populations (class 001). The fraction with strong signatures of evolved stars (i.e. objects with classes 001 or 011) is 22%. Galaxies showing some evidence for intermediate age stellar population features (classes 110, 010, 001) accounted for 46% of the sample. About 25% of galaxies had pure intermediate age populations (class 010), and 24% of the sample appear to be dominated by young populations (class 100). We were unable to assign any spectral classifications to 10% of objects with high confidence redshifts. The fraction of galaxies with evidence of old populations peaks at $z \sim 1$ and falls off steeply at higher redshifts. Some of this reflects the increasing impact of the non-$K_s$-detected objects that were added to the sample on the basis of their photometric redshifts. A number of the $z > 1.5$ objects, however, do have $K < 20.6$ and some of these have red $I - K_s$ colors and yet still show essentially flat UV spectra dominated by massive stars. This is not surprising given the shape of the $V-I$ vs. $I-H$ (or $I-K_s$) two-color diagram (McCarthy et al. 2001). At $z > 1.5$ the bulk of the red $I-K_s$ population has blue $V-I$ colors. The spectroscopic properties of these galaxies, along with inferred ages from stellar population synthesis models, will be presented in a companion paper (McCarthy et al. 2004, in preparation). 
![image](RobertoAbrahamFig16.eps){height="5.6in"} COMPOSITE TEMPLATE SPECTRA {#sec:templates} ========================== As described earlier, redshifts were determined by visual examination of the spectra, comparing with spectral templates and looking for expected redshifted emission and absorption features. The most uncertain aspect at the start of our survey was the appearance of normal galaxies in the 2000–3000Å region. Early-type quiescent galaxy spectra are expected to be dominated by multiple broad absorption features in this region (for example see the mid-UV spectra of elliptical galaxies in Lotz et al. 2000) which come primarily from F & G main sequence stars (Nolan, Dunlop & Jimenez 2001). In contrast late-type actively star-forming galaxies should have a featureless blue continuum with narrow ISM absorption lines superimposed (for example see the HST starburst spectra in Tremonti et al. 2003). For the first two GDDS fields we primarily used the Luminous Red Galaxy template of @eis03 and spectra of the $z\sim 1.4$ radio galaxies 53W091 (Spinrad et al., 1997) and 53W069 (Dunlop 1999) for our early-type reference templates. For late types we used a composite spectrum made from an average of the local starburst galaxy starforming regions of Tremonti et al. After we had reduced our first two fields and obtained preliminary redshifts we then constructed our own templates by combining GDDS spectra. We had three principal motivations for doing this. The first was to obtain a better match in spectral resolution; the second was to improve the UV coverage especially in the early-type template and the third was to make templates corresponding to galaxies in an earlier evolutionary epoch in the history of the Universe. In order to make the templates we visually identified similar looking spectra with confidence level 3 or 4 redshifts. The full redshift range of GDDS $0.6<z<2$ was used in order to produce templates with a wide range of spectral coverage. Of course this large redshift range is less than ideal because on average the resulting UV portion of the templates will come from higher redshift objects than the optical portion. However we decided that since the primary use of these templates were for redshift matching, the wider wavelength coverage would be of greater importance. It remains true that the UV portion of the templates ($\lambda<3000$Å) comes mostly from $z>1$ objects and this was where our previous set of templates were most inadequate. We emphasize that these redshift templates are [*different*]{} from the composite spectrum analyzed by Savaglio et al. (2004), where it is more important to have a more restricted redshift range. The template construction process fully allowed for masking and disparate wavelength coverage in the different spectra. Given a set of input spectra the construction process was as follows. Firstly one spectrum (typically at the median redshift) was chosen as a master for the purposes of normalization. Next all the other templates were scaled to the same normalization as the master by computing the average flux in the non-masked overlapping regions. Finally a masked average was performed of all the normalized templates (the mask being either 1 or 0 depending on whether a given spectrum included that part of the wavelength region with good data). For the early type template we combined 8 convincing early type spectra with redshifts $0.6<z<1.5$ and for the late type template we combined 23 late type spectra with $0.8<z<2.0$. 
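The template construction just described (choose a master spectrum, scale each of the others to it using the mean flux in the unmasked overlap, then form a masked average) can be summarized in a few lines. The sketch below assumes each de-redshifted spectrum has already been resampled onto a common rest-frame wavelength grid, with a boolean mask flagging good pixels; it illustrates the scheme rather than reproducing the actual GDDS code.

```python
import numpy as np

def build_composite(fluxes, masks, master_index=0):
    """Masked, normalization-matched average of de-redshifted spectra.

    fluxes : (N_spec, N_pix) array of rest-frame spectra on a common grid
    masks  : (N_spec, N_pix) boolean array, True where a pixel has good data
    """
    fluxes = np.asarray(fluxes, dtype=float)
    masks = np.asarray(masks, dtype=bool)
    master_f, master_m = fluxes[master_index], masks[master_index]

    scaled = np.array(fluxes, copy=True)
    for i in range(len(fluxes)):
        overlap = masks[i] & master_m          # pixels good in both spectra
        if i != master_index and overlap.any():
            # Match the mean flux in the unmasked overlapping region.
            scaled[i] *= master_f[overlap].mean() / fluxes[i][overlap].mean()

    # Masked average: sum of good pixels divided by the number of contributors.
    n_good = masks.sum(axis=0)
    composite = np.where(n_good > 0,
                         (scaled * masks).sum(axis=0) / np.maximum(n_good, 1),
                         np.nan)
    return composite

# Toy usage: three noisy copies of one spectrum with different coverage and scaling.
wave = np.linspace(2000.0, 4500.0, 500)        # rest-frame Angstroms
truth = 1.0 + 0.3 * np.sin(wave / 200.0)
rng = np.random.default_rng(0)
fluxes = [a * truth + rng.normal(0, 0.05, wave.size) for a in (1.0, 2.5, 0.4)]
masks = [wave < 4200, wave > 2300, (wave > 2100) & (wave < 4400)]
composite = build_composite(fluxes, masks)
```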
We also found it desirable to make an intermediate type template (i.e., somewhat red but with signs of star-formation such as \[OII\] emission) as these were especially poorly represented in our first template set. For this we used 8 galaxies with $0.7<z<1.3$. The resulting three templates are plotted in Figure \[fig:templates\]. CONCLUSIONS {#sec:summary} =========== The Gemini Deep Deep Survey was undertaken in order to explore galaxy evolution near the peak epoch of galaxy building. The survey probes a color-selected sample in a manner that minimizes the strong star-formation rate selection biases inherent in most high-redshift galaxy surveys. It is designed to bridge the gap between landmark surveys of highly complete samples at $z<1$ [@cow94; @lil95; @ell96] and UV-selected surveys at higher redshift [@ste96; @sha01; @ste03]. The signal-to-noise ratios of the spectra in the GDDS are sufficient to distinguish old stellar populations (dominated by F-type stars) from post-starburst systems (dominated by A-type stars) and reddened starbursts with their flat spectra and strong interstellar lines. In this paper we have described the motivation for the survey, the choice of fields, the experimental design underlying our choice of targets, and our data reduction process. We have presented final catalogs of redshifts and photometry and an analysis of their statistical completeness. Spectra for individual objects are available as an electronic supplement to this paper. Further information on the GDDS and the data reduction software used in the project are available on the World Wide Web at <http://www.ociw.edu/lcirs/gdds.html>. *Acknowledgments* 0.5 cm This paper is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Particle Physics and Astronomy Research Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil) and CONICET (Argentina) The Gemini Deep Deep survey is the product of a university/institutional partnership and would not have been possible without the work of many people. We thank Matt Mountain, Jean-René Roy and Doug Simons at the Gemini Observatory for their vision in supporting this project in the midst of many other observatory priorities, and for the time, energy and manpower they have invested in the making Nod & Shuffle a reality on Gemini. It is a pleasure to thank Matthieu Bec and Tatiana Paz at the Gemini Observatory for their work in implementing the modifications to the GMOS telescope control system and sequence executor needed in order to support the Nod & Shuffle mode. We are grateful to the entire staff of the Gemini Observatory for undertaking the queue observing for this project in such an efficient manner, and for the kind hospitality shown to us during our visits. We also thank the Instrumentation Group at the Herzberg Institute of Astrophysics for working with us and with Gemini in order to help make GMOS Nod & Shuffle a reality. 
Many members of the HIA Instrumentation Group went beyond the call of duty in support of this project, but we would like to particularly thank Bob Wooff and Brian Leckie at HIA for their quick response and willingness to work late into the night during a weekend to fix some early problems we encountered, which saved us from losing a night of observing during the science verification phase of the GDDS. We also thank Richard Wolff at NOAO for doing much of the microcode programming needed for this project. RGA and RGC acknowledge generous support from the Natural Sciences and Engineering Research Council of Canada, and RGA thanks the Government of Ontario for funding provided from a Premier’s Research Excellence Award. KG & SS acknowledge generous funding from the David and Lucille Packard Foundation. H.-W.C. acknowledges support by NASA through a Hubble Fellowship grant HF-01147.01A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. **Appendices** NOD & SHUFFLE OBSERVATIONS WITH GMOS {#sec:nodandshuffle} ==================================== The principles of the Nod & Shuffle technique are described in Glazebrook & Bland-Hawthorn (2001). The basic ideas behind the mode are very similar to those of the [*Va et Vient*]{} strategy for sky subtraction [@cui94; @bla95][^5]. Our specific Nod & Shuffle configuration is shown schematically in Figure \[fig:nod-shuffle\], and corresponds to the ‘Case 1’ strategy shown in Figure 2(a) of @abr03. Slits were 0.75 arcsec wide (giving a spectral FWHM of $\simeq 17$Å) and were designed to take advantage of the queue observing mode of the telescope by being optimized for seeing $<0.85$ arcsec. In this seeing the target galaxies were mostly unresolved. In Nod & Shuffle mode the telescope is nodded between two positions along the slit denoted ‘A’ and ‘B’, as shown in Figure \[fig:nod-shuffle\]. For our first mask observations on the 22$^{\rm h}$ field we used a 2.0 arcsec long slit, since the nod distance was 1.0 arcsec the targets appear on the slit in both A and B positions ($\pm 0.5$ arcsec from the slit center). The shuffle distance was 28 pixels which produces a shuffled B image immediately below the A image with a small one pixel gap. For subsequent fields we increased the slit length to 2.2 arcsec and the shuffle to 30 pixels as analysis of the first field convinced us that a slightly longer slit would be beneficial to reduce the impact of the ‘red end correction’ issue described below. ![image](RobertoAbrahamFig18.eps){width="6.5in"} ![image](RobertoAbrahamFig19.eps){height="7.9in"} At each A and B position we observed for a 60 sec exposure before closing the shutter and nodding the telescope and shuffling the CCD. Our standard GDDS exposure consisted of 15 cycles with A$=$60s and B$=$60s (i.e. a total of 1800s open shutter time on target before reading out). There is of course extra overhead associated primarily with moving the telescope and guide probes, for the GDDS observing setup this typically added 25% to the total observing time. We found that this Nod & Shuffle setup gave a sky-residual of only 0.05–0.1%, which is well below the Poisson limit for our stacked exposures, except for the brightest few night sky lines. The reader is referred to Abraham et al. 
(2003) and Murowinski et al. 2004 (in preparation) for a more detailed description of the implementation, observing sequence and sky-subtraction performance of Nod & Shuffle mode on GMOS. A typical mask observation consisted of approximately 50 half-hour exposures observed across many nights. To fill in the gaps between CCD chips, the grating angle was changed between different groups of observations so as to dither the MOS image along the dispersion ‘X’ axis relative to the CCD. Approximately one third of the data was taken with a central wavelength of 7380Å, one third with 7500Å and the final third with 7620Å. The CCD gaps were filled in when the different positions were combined to make a master frame. Similarly, it is also desirable to dither in the orthogonal ‘Y’ direction, i.e. parallel to the spatial axis, in order to minimize the effects of shallow charge traps aligned with the silicon lattice of the CCD. These charge traps manifest themselves in the shuffled images as short pairs of streaks in the horizontal, or dispersion, direction. Each pair is always separated by the shuffle distance. We speculate that these charge traps originate from subtle detector defects that are repeatedly pumped by the shuffle-and-pause action. Since the traps always appear at the same place on the CCD, their undesired effect can be greatly reduced by dithering the image along the Y axis and rejecting outliers during stacking. To accomplish this dithering, the CCD was physically moved using the Detector Translation Assembly (DTA). During normal GMOS operation, this stage is used to actively compensate for flexure in the GMOS optical chain during exposures. Additional offsets can be applied between exposures in order to position the image on different pixels on the array. Our standard observing block thus consisted of the following six-step sequence:

1. The grating was set to one of the 3 positions used (e.g. 7500Å). The DTA was homed.

2. An 1800s exposure was recorded using Nod & Shuffle (A$=$60s, B$=$60s, $\times$ 15 cycles).

3. An 1800s exposure was recorded with the DTA offset by +41 µm (+3 pixels) along the spatial axis.

4. An 1800s exposure was recorded with the DTA offset by +81 µm (+6 pixels) along the spatial axis.

5. An 1800s exposure was recorded with the DTA offset by $-$41 µm ($-$3 pixels) along the spatial axis.

6. An 1800s exposure was recorded with the DTA offset by $-$81 µm ($-$6 pixels) along the spatial axis.

This same sequence would then be repeated for the next grating position, and this pattern of changing the central wavelength and taking a sequence of 5 exposures was repeated until the required number of exposures was completed. In practice, not every sequence had exactly this number of steps due to observing constraints, but an approximate balance was maintained among the different DTA offset positions, which was all that was required for the stacking of the data. REDUCTION OF TWO-DIMENSIONAL DATA {#sec:datared} ================================= The goal of the 2D reduction was to combine all the individual 2D dispersed 1800s exposures for each mask with outlier rejection to make a master ‘supercombine’ 2D sky-subtracted dispersed image. This is then used for the next stage — extraction to 1D spectra. The GDDS 2D data was reduced using IRAF and the Gemini IRAF package, in particular the v1.4 GMOS sub-package. To handle the peculiarities of Nod & Shuffle data, two new software tasks ([gnsskysub]{} and [gnscombine]{}) were written by us.
These have now been incorporated into the standard IRAF GMOS package distributed by the Gemini Observatory. The first step in the 2D data reduction was to bias subtract the individual runs using a master bias frame (the average of typically 20 bias frames taken during the observing period). GMOS exhibits 2D bias structure so a 2D bias subtraction is done with the [gireduce]{} task. The next step was to sky-subtract all the runs using the [gnsskysub]{} task. This simply takes the frame, shifts it in Y by the shuffle step, as recorded in the image header, and subtracts it from itself[^6]. This results in clean sky-subtracted spectra sandwiched between regions of artifacts, as shown in Figure \[fig:nod-shuffle\]. These subtracted frames are then visually examined to make a list of relative dither offsets. In most cases these are as given by the nominal DTA offsets, with occasional 1–2 additional pixel shifts between different GMOS nights (these shifts are then recovered by sky-line fitting and cross-correlation). In some cases the objects moved slightly in the slits relative to their nominal position due to an error in setting the tracking wavelength in the telescope control system. In order to handle these extra offsets actual emission lines in bright galaxies were centroided in X and Y and used to define the offsets. Calculating the offsets directly relative to the object positions in this way results in some fuzziness of slit edges in the 2D combined frames, but since the inter-slit offsets were only a few pixels and only a few frames were affected, this did not turn out to be a serious problem in practice. The final result of the inspection is a list of X,Y offsets between the objects in different dispersed images. Once the offsets are known image combination proceeds with the [ gnscombine]{} task which generates sky-subtracted frames using [ gnsskysub]{} and combines them using a variance map calculated from the median count level in the non-subtracted frames and the known readout noise and Poisson statistics. Outliers (cosmic rays and charge traps) are rejected using a 7$\sigma$ cut and retained data is averaged. The outlier rejection was checked visually by comparing frames combined with and without rejection and it was verified that only genuine outliers were rejected. A median 2D sky frame is also produced which is used for later wavelength calibration and further noise estimates. We first combined the frames in 3 groups according to the central wavelength using [gnscombine]{}, i.e. a combined frame was produced for each of the 7380Å, 7500Å  and 7620Å positions. At this point the combined frames are a single Multi-Extension FITS (MEF) file where each extension represents a separate GMOS CCD as a 2D image. The next step is to use the [gmosaic]{} task to assemble the three images for each group into a single contiguous image using the known geometric relationship between the three CCDs. [gmosaic]{} was used in the mode where the assembly was done to the nearest pixel; no re-sampling or interpolation scheme was used in order to preserve pixel independence in the noise map. Since the data is 4$\times$ over-sampled this does not result in any significant degradation. Inspection of the [gmosaic]{}’d images showed the spectral continuity across the CCD gaps was good to $\pm 1$ pixel and subsequent wavelength calibration showed the positioning in the dispersion direction was of a similar accuracy. 
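The two Nod & Shuffle tasks just described are conceptually simple, and the following is a minimal sketch of their logic, assuming the frames are plain 2D arrays already registered to a common origin. The array names, noise handling, and the use of `numpy` are our own illustration, not the actual [gnsskysub]{}/[gnscombine]{} implementation.

```python
# Minimal sketch of the Nod & Shuffle reduction steps described above:
# (i) sky subtraction by shifting a frame along the spatial (Y) axis by the
# shuffle distance and subtracting it from itself, and (ii) a stack of the
# registered, sky-subtracted frames with sigma clipping of outliers.
import numpy as np

def ns_sky_subtract(frame, shuffle_pixels):
    shifted = np.roll(frame, shuffle_pixels, axis=0)   # wrap-around at the edges ignored here
    return frame - shifted                             # object exposure minus stored sky exposure

def ns_combine(frames, variances, clip=7.0):
    frames = np.asarray(frames, dtype=float)
    variances = np.asarray(variances, dtype=float)
    first_pass = frames.mean(axis=0)
    keep = np.abs(frames - first_pass) < clip * np.sqrt(variances)   # reject cosmic rays, charge traps
    w = keep.astype(float)
    return (frames * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)
```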
Finally, the three [gmosaic]{}’d combined frames for the 7380Å, 7500Å and 7620Å positions were mosaiced again into the final ‘supercombine’ frame. The task [gemcombine]{} was used to accomplish this with offsets calculated from fitting to sky lines. A binary mask denoting the position of CCD gaps and bad columns was also used to remove these features from the final supercombine by replacing them with real data from the other frames. The final products of this process are: (a) a supercombined (i.e. sky subtracted) frame corresponding to the stack of all the 2D data — an example is shown in Figure \[fig:mask-overview\]; (b) a corresponding supercombined sky frame showing the emission from the night sky. It should be noted that no attempt was made to flat-field the data. This is not required to get accurate sky-subtraction with Nod & Shuffle. The effect of pixel-to-pixel variations on the final object spectra is greatly reduced by the extensive dithering in any case. Some residual flat-field features, primarily fringes, are visible in the brightest spectra at the few percent level, but these do not seriously impact our faint spectra. EXTRACTION OF ONE-DIMENSIONAL SPECTRA {#appendix:oned} ===================================== One-dimensional spectra were extracted from the two-dimensional stacked image frames using [iGDDS]{}, a publicly available spectral extraction and analysis program for Mac OS X that we have written for use with Nod & Shuffled GMOS data. [iGDDS]{} operates in a manner that is rather different from the familiar command-line driven tools used by astronomers (e.g. [IRAF, FIGARO, MIDAS]{}, etc.), and it is intended to be highly interactive and take full advantage of the graphical capabilities of modern computers. The program functions as an electronic catalog with interactive tools for spectral aperture tracing, one-dimensional spectrum extraction, wavelength calibration, spectral template fitting, and redshift estimation. All these tools are linked. For example, selecting an object in a catalog displays its two-dimensional image and corresponding aperture trace. This aperture trace can be reshaped by dragging with a mouse, resulting in a newly extracted one-dimensional spectrum. Clicking the mouse on a feature on this spectrum immediately displays this feature on a corresponding two-dimensional image. This feature can then be selected and a trial redshift assigned, which results in the superposition of a comparison template spectrum on the object spectrum. The template can then be dragged with the mouse to refine the redshift or try other possibilities. All analysis steps for all spectra on a GMOS mask are stored in a single document file which can be shared with colleagues and interactively modified. The saved [iGDDS]{} document files used by our team are publicly available, and the interested reader may find these to be a useful starting point for further exploration of the GDDS data set. Aperture Tracing ---------------- Spectra were extracted from the supercombined stack using aperture traces defined by a fourth-order Bézier curve. Positive and negative apertures (corresponding to the A and B nod-and-shuffle positions) were defined relative to this curve. The sizes of these apertures and the distance between them were allowed to vary independently. (As described in §\[sec:collisions\], in some cases it is useful to discard a single A or B position channel in order to avoid contamination from overlapping spectra).
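For readers unfamiliar with this parameterization, the sketch below shows how a fourth-order Bézier trace and its two offset apertures can be evaluated. The control ordinates, aperture offsets, and image width are invented values for illustration only and do not correspond to any GDDS mask.

```python
# Sketch: evaluate a fourth-order (degree-4) Bezier aperture trace with
# de Casteljau's algorithm and place A/B aperture bands at fixed offsets.
import numpy as np

def bezier(control_y, t):
    pts = np.asarray(control_y, dtype=float)
    while pts.size > 1:                       # repeated linear interpolation
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ncols = 4608                                  # dispersion (X) length of the slit cutout
control_y = [16.0, 16.4, 17.1, 16.9, 16.6]    # 5 control ordinates -> fourth-order curve
trace = np.array([bezier(control_y, t) for t in np.linspace(0.0, 1.0, ncols)])

a_aperture = (trace + 1.0, trace + 5.0)       # positive (A) band: lower/upper edges in pixels
b_aperture = (trace - 5.0, trace - 1.0)       # negative (B) band
```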
In most cases the spectra were sufficiently bright to allow a well-defined trace to be determined visually (after lightly smoothing the two-dimensional image and displaying the galaxy spectrum with a large contrast stretch). However, the very faintest galaxies in our sample proved too dim to allow a reliable trace to be estimated, and for these objects simple horizontal trace was used. ![image](RobertoAbrahamFig20.eps){height="4.3in"} A screenshot from [iGDDS]{} illustrating the aperture tracing process is shown in Figure \[fig:iGDDSExtraction\]. In this figure the top sub-image shows a compressed view of a horizontal slice across the supercombined 2D image at the $y$-coordinate of the slit (34 pixels high and 4608 pixels wide). The fourth-order bezier curve defining the trace for an individual spectrum is shown as the solid red line in the top window. The transparent yellow and purple regions on this image are linked to the trace and correspond to the negative and positive apertures used to extract the one-dimensional spectrum. Linear and Optimal Extraction ----------------------------- One-dimensional spectra were extracted from the supercombined images using both linear and optimal extraction procedures. Both sets of extractions are available in the public data release of the GDDS observations described in §\[sec:format\] below. Our optimal extractions were constructed using profile weights defined by projecting the spectra in the spatial direction following the bezier curves which define their traces. This procedure resulted in smooth extraction profiles that resembled gaussians for most galaxies, although in cases where spectrum overlaps occurred (described in greater detail §\[sec:collisions\]) the resulting weight profiles were spuriously asymmetric. The optimal extractions should not be trusted for these objects. [*We therefore recommend that those readers interested in making uniform comparisons between all spectra in the GDDS use only the linearly extracted spectra*]{}. Optimally extracted spectra can be trusted for those objects with recorded spectrum overlap classifications of zero in Table 3, and these do have slightly improved signal-to-noise relative to their linearly extracted counterparts. However, the signal-to-noise improvement is modest (of order 5%) on account of the narrow slit lengths in the GDDS masks. For the sake of consistency, only linearly extracted spectra have been shown in the figures throughout the present paper. Small artifacts on the two-dimensional supercombined images were masked out using [iGDDS]{} prior to extraction. Our masking procedure works by excluding aberrant pixels from the resulting average over the spatial direction in a given column. The procedure is clearly of rather limited usefulness, and was only adopted in those cases where at most a few pixels in a given column were contaminated by artifacts. In cases of more severe contamination, we chose to eliminate the column completely (leaving gaps in the spectrum) rather than to patch over the bad data. As will be described in §\[sec:format\], the output data format for the GDDS spectra retains a record of which wavelength points on a spectrum have been patched. Including the effects of bad pixel masking, extraction proceeded as follows. Consider a single column in a two-dimensional spectrum containing $n$ rows. 
Denote the flux in the $i$th pixel by $F_i$, its variance by $\sigma_i^2$, and let the discrete variable $\mathcal{M}_i \in \{-1,0,1\}$ take on the value $1$ in the case that the pixel is in aperture A, $-1$ in the case that the pixel is in aperture B, and $0$ in the case that the pixel is masked. Assuming $n_A$ pixels are contained within apertures A and B, a simple estimator of the total flux in the case of linear extraction with masked regions is: $$F = n_A \cdot {\sum_{i=1}^n (\mathcal{M}_i/\sigma_i^2) F_i \over \sum_{i=1}^n | (\mathcal{M}_i/\sigma_i^2) |}$$ In other words, the flux is now the average over the non-rejected pixels multiplied by the total number of masked and un-masked pixels in both apertures. The corresponding case for optimal extraction is only slightly more complicated. Denoting the optimal extraction profile by a continuous variable $P_i \in \{0..1\}$ (with $P_i \equiv 0$ for pixels in the column outside the aperture), it is straightforward to show that the maximum likelihood estimator for the true total flux is given by: $$F = \sum_{i=1}^n P_i \cdot { \sum_{i=1}^n P_i (\mathcal{M}_i/\sigma_i^2) F_i \over \sum_{i=1}^n P_i^2 | (\mathcal{M}_i/\sigma_i^2) | }$$ Note that if we set $P_i = 1$ then we recover the linear extraction case. Flux Calibration, Atmospheric Absorption Correction, and Red Fix Correction {#sec:calib} --------------------------------------------------------------------------- Since precise flux calibration is impossible for observations accumulated over many nights (spread over months in some cases) under varying conditions, the flux calibration was carried out using observations of standard stars obtained as part of the GMOS queue baseline calibration. These data were reduced using the standard routines in IRAF and calibrations deduced for each field. A mean aperture correction of a factor of 3.5 was applied to each spectrum. In the end, the calibrations were so similar from field to field that the same one was used for all. For most objects the relative flux calibration appears to be quite good, as evidenced by the fact that composites made from these spectra agree extremely well with composites from other surveys, e.g., the SDSS luminous red composite of @eis03. However, the absolute flux for an individual object may be in considerable error (we think they should only be trusted to within about a factor of two), since no attempt was made to allow for overlapping spectra, masked regions, flat-fielding of 2D spectra, etc. We therefore caution that the fluxes are relative only. As the 1D spectra were initially being extracted from the 2D spectra, it almost immediately became apparent that there was a problem in that the continua became too low or even negative at the extreme red end of the wavelength region. Since one expects that the nod and shuffle technique would result in perfect sky subtraction, this was initially very puzzling. On further examination it became evident that the strong sky lines displayed “tails" so that there was spatial extension of the lines that increased in strength with wavelength. Unfortunately, in the nod and shuffle technique, the protrusion of these on either side of the spectra means that they are superimposed and subtracted from the object spectrum during the shift-and-combine operation. 
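The two extraction estimators written out above are easy to state in code. The following sketch applies them to a single column; the function and array names and the `numpy` implementation are ours, and `n_ap` must be supplied as the total number of pixels (masked or unmasked) inside the two apertures, as in the text.

```python
# Sketch of the masked linear and optimal extraction estimators given above,
# applied to one column of the 2D spectrum. F and var hold the per-pixel flux
# and variance, M holds +1 (aperture A), -1 (aperture B) or 0 (masked), and P,
# if given, is the optimal-extraction profile (zero outside the apertures).
import numpy as np

def extract_column(F, var, M, n_ap, P=None):
    F, var, M = (np.asarray(a, dtype=float) for a in (F, var, M))
    w = M / var                                    # (M_i / sigma_i^2); masked pixels drop out
    if P is None:                                  # linear extraction
        return n_ap * np.sum(w * F) / np.sum(np.abs(w))
    P = np.asarray(P, dtype=float)                 # optimal extraction; P = 1 recovers the linear case
    return np.sum(P) * np.sum(P * w * F) / np.sum(P * P * np.abs(w))
```

We now return to the sky-line ‘tails’ noted above.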
The origin of this effect is charge diffusion in the silicon which is a strong function of wavelength and it has only become apparent since we are attempting to extract extremely faint target spectra that are stored immediately adjacent to extremely strong sky spectra on the CCD. The effect could be reduced or avoided by increasing the distance between the object and sky or the two object positions in our case, at the cost of less efficient use of the detector area. For any given mask design, the magnitude of the effect depends on the precise relative position of the object in the slitlet but, in our case, the only practical way to correct for it was to establish an average correction for all objects in a mask and then apply that to all spectra. A correction curve to account for this effect was derived empirically for each mask. First, the variation of the strength of the effect as a function of wavelength was derived by measuring the percentage of light that leaked from strong sky lines into an extraction window equivalent to that used for the objects but on the opposite, unexposed, side of spectrum. It was immediately obvious that the effect varies exponentially with wavelength, ranging (for our initial mask) from 0.2% at 883nm to 5% at 1030nm. Having established the form of the variation, it was multiplied by a high signal-to-noise sky spectrum that was broadened slightly in wavelength to account for the fact that the charge diffusion occurs in all directions (technically, the broadening should also be a function of wavelength but this was ignored). Finally, this modified sky spectrum was scaled so that it minimized the negative sky features that were apparent in a co-added (in observed wavelength) spectrum of the 25 strongest spectra in the mask. This ‘redfix’ correction curve can then be applied as an option in [iGDDS]{} during the data reduction. As mentioned above, it is at best only a statistical correction for objects in a mask, and this effect introduces additional uncertainty into the continuum level and flux calibration that decreases exponentially shortward of 1 micron. The magnitude of the effect also increases with the faintness of the target since it is a fixed fraction of the night sky. Our final spectra have also been corrected for the major features caused by atmospheric absorption. The atmospheric features were identified by isolating them in a normalized, high signal-to-noise spectrum of a standard star and then adjusting their amplitude to match those in the combined spectrum of suitable objects in a given mask. Spectrum Order Overlaps {#sec:collisions} ----------------------- ![image](RobertoAbrahamFig21.eps){height="5in"} As described in §\[sec:design\], our two-tier mask design strategy allowed some overlap to occur among spectra originating in adjacent slits. The extent of these overlaps can be gauged by an inspection of Figure \[fig:mask-overview\]. The wavelengths 5500–9200Å from the ‘blue tier’ spectra in second order can overlap with the first order ‘red tier’ spectra. Since 5500–9200Å is the main observational window, this second order light cannot be filtered out, nor is the second order sky cancelled in the Nod & Shuffle process, because the slits are in general not aligned between the two tiers. 
However, the intensity of second-order spectra in this wavelength range is only 5–10% of the intensity of the first-order spectra, so in practice only the very strongest sky lines (such as \[OI\]5577Å, \[OI\]6300Å and \[OI\]6363Å) were significant contaminants, and in many cases these individual lines can simply be masked out, as described above. Another source of contamination is zero-order light from red-tier spectra overlapping with the first-order spectra in the blue tier. Several examples of this contamination are clearly seen in Figure \[fig:mask-overview\]. Such cases are easy to identify and, as only a small portion of the spectrum is affected, this portion of the affected spectra has simply been eliminated in the final spectra presented in this paper. As noted earlier, we have attempted to classify the importance of spectrum overlaps using the system defined in Table \[tab:collision\]. To help the reader visualize the meaning of this system, it is illustrated using example spectra in Figure \[fig:overlap\]. Final ASCII-format Spectra {#sec:format} -------------------------- In addition to the main data tables presented in Tables 4 and 5 above, the GDDS Public Data Release contains the individual spectra for all galaxies in the survey. These spectra are stored as ASCII text files, each of which contains the eleven columns of information specified in Table \[tab:dataformat\]. Linearly and optimally extracted calibrated spectra, their corresponding variance spectra, and an uncalibrated raw spectrum are stored in the same file. A linearly extracted night sky spectrum sampled through the same slit as part of the Nod & Shuffle operation is also included. All calibrated spectra have been fully processed through our pipeline; they are flux-calibrated, as well as corrected for atmospheric absorption and charge bleeding (via the ‘redfix’ correction described in §\[sec:calib\]). Separate columns in the output data file record the values of the atmospheric calibration, ‘redfix’, and flux calibration curves applied at each wavelength point in the spectrum (so the calibrations can be undone and new ones experimented with). A column in each file also records the fraction of masked pixels in the spatial direction at each wavelength. The final column in each file records raw counts in electrons normalized to the 1800s exposure time of a single sub-frame, as described above.

[lll]{}
& Wavelength & Å\
[Flux]{} & Linearly extracted object flux & erg ${\rm cm}^{-2} {\rm s}^{-1} {\rm \AA}^{-1}$\
[Sigma]{} & Standard deviation of linearly extracted object flux & erg ${\rm cm}^{-2} {\rm s}^{-1} {\rm \AA}^{-1}$\
[SkyFlux]{} & Linearly extracted sky flux & erg ${\rm cm}^{-2} {\rm s}^{-1} {\rm \AA}^{-1}$\
[OptFlux]{} & Optimally extracted object flux & erg ${\rm cm}^{-2} {\rm s}^{-1} {\rm \AA}^{-1}$\
[OptSigma]{} & Standard deviation of optimally extracted object flux & erg ${\rm cm}^{-2} {\rm s}^{-1} {\rm \AA}^{-1}$\
[RedFix]{} & Additive correction for charge bleeding & counts\
[FluxCal]{} & Flux calibration & mag\
[Atmos]{} & Additive correction for atmospheric absorption & counts\
[Frac]{} & Fraction of pixels masked in spectral dimension &\
[Electrons]{} & Uncorrected counts & electrons\

[1]{} Abraham, R. G., et al. 2003. Gemini Newsletter, June 2003 Bland-Hawthorn, J. 1995. In “Tridimensional Spectroscopic Methods in Astrophysics", ed. Comté & Marcelin, M., ASP Conf., 71, 369. Chen, H–W., et al. 2002, 570, 54 Cimatti, A. 2004.
To appear in the Proceedings of the ESO/USM/MPE Workshop on “Multiwavelength Mapping of Galaxy Formation and Evolution”, eds. R. Bender and A. Renzini (astro-ph/0401101) Cimatti, A. et al. 2003, A&A, in press, (astro-ph/0310742) Cowie, L. L., Gardner, J. P., Hu, E. M., Songaila, A., Hodapp, K.-W., & Wainscoat, R. J. 1994, , 434, 114 Cuillandre, J. C. et al. 1994. , 281, 603 Daddi, E. et al. 2000. , 361, 535 Dunlop, J. et al. 1996. Nature, 381, 58 Dunlop, J. S. 1999, in Rottgering H. J. A., Best P., Lehnert M. D., eds, The Most Distant Radio Galaxies. KNAW Colloq. Amsterdam. Kluwer, Dordrecht, p. 71 Eisenstein, D. J. et al. 2003, , 585, 694 Ellis, R. S., Colless, M., Broadhurst, T., Heyl, J., & Glazebrook, K. 1996, , 280, 235 Ellis, R. S. 2001. , 113, 515 Erb, D. K., Shapley, A. E., Steidel, C. C., Pettini, M., Adelberger, K. L., Hunt, M. P., Moorwood, A. F. M., & Cuby, J. 2003, , 591, 101 Firth, A., et al. 2002, MNRAS, 332, 617 Franx, M. et al. 2003, , 587, L79 Glazebrook, K. et al. (2004). Submitted to Nature. (astro-ph/0401037). \[GDDS Paper III\] Glazebrook, K. & Bland-Hawthorn, J. 2001. , 113, 197 Hook, I. et al. 2003, , 4841, 1645 Kauffmann, G. & Charlot, S. 1998, 297, 23 Lilly, S. et al. 1995. , 455, 50 Lilly, S. et al. 1996. , 460, L1 Lotz, J. M., Ferguson, H. C., & Bohlin, R. C. 2000, , 532, 830 Madau, P., Ferguson, H. C., Dickinson, M. E., Giavalisco, M., Steidel, C. C. & Fruchter, A. 1996, MNRAS, 283, 1388 McCarthy, P. et al. 2004. In preparation. \[GDDS Paper IV\] McCarthy, P. et al. 2001, , 560, L131 Murowinski, R. G. et al. 2003, , 4841, 1440 Nagamine, K., Cen, R., Hernquist, L., Ostriker, J. P. & Springel, V. 2003. (astro-ph/0311294) Nolan, L. A., Dunlop, J. S., & Jimenez, R. 2001, , 323, 385 Savaglio et al. 2004, ApJ, in press, (astro-ph/031043), \[GDDS Paper II\] Shapley, A. E., Steidel, C. C., Adelberger, K. L., Dickinson, M., Giavalisco, M., & Pettini, M. 2001, , 562, 95 Spinrad, H., Dey, A., Stern, D., Dunlop, J., Peacock, J., Jimenez, R., & Windhorst, R. 1997, , 484, 581 Steidel, C. C., Giavalisco, M., Pettini, M., Dickinson, M., & Adelberger, K. L. 1996, , 462, L17 Steidel et al. 1999. , 519, 1 Steidel, C. C., Adelberger, K. L., Shapley, A. E., Pettini, M., Dickinson, M., & Giavalisco, M. 2003, , 592, 728 Tremonti, C. A., Leitherer, C., Heckamn, T. M., & Calzetti, D. 2003, ApJ, submitted. [^1]: It should be born in mind that these composites are constructed from galaxies covering a wide range of redshift and time. Most analyses on composites would require a more restricted range, e.g. paper II (Savaglio et al. 2004). [^2]: Savaglio et al. 2004 \[paper II\] presents measurements of column densities and metallicities of star-forming galaxies in our Sample. Glazebrook et al. 2004 \[paper III\] presents the mass function from the GDDS. McCarthy et al. 2004 \[paper IV\] will present an analysis of the stellar populations in the reddest galaxies in our sample. [^3]: In GDDS-SA02 and GDDS-SA22 the full color set was not available at the time of the mask design and so only $VRIz'K_s$ and $VIz'HK_s$, respectively, were used in these two fields. The impact of the smaller filter set for these two fields on the final sample selection was minor, as determined from tests with the GDDS-SA12 and GDDS-SA15 catalogs. [^4]: The first mask (file GN2002BSV-78-14.fits in the Gemini archive) of our first field (GDDS-SA22) has 2.0 long slits. 
The slit length was increased by 10% after an initial inspection of the incoming data suggested that 1.0 arcsec pseudo-slits were slightly too short for the largest galaxies being targeted.

[^5]: The main difference between [*Va et Vient*]{} and Nod & Shuffle is the latter’s emphasis on using tiny slits for extreme multiplexing.

[^6]: In some rare cases the detector controller would drop a sub-frame, resulting in an A$\ne$B mismatch. [gnsskysub]{} includes an option to fix this case via re-scaling of the B frame.
--- abstract: 'The theoretical understanding of emergent phenomena in quantum materials is one of the greatest challenges in condensed matter physics. In contrast to simple materials such as noble metals and semiconductors, macroscopic properties of quantum materials cannot be predicted by the properties of individual electrons. One example of scientific importance is the strongly correlated electron system. Electrons in partially-filled $3d$, $4f$, and $5f$ orbitals are neither fully localized nor fully itinerant; this gives rise to rich physics such as Mott insulators, high-temperature superconductors, and superior thermoelectricity, but it also hinders a quantitative understanding of the low-lying excitation spectrum. Here, we present a new *first-principles* approach to strongly correlated solids. It is based on a combination of the quasiparticle self-consistent GW approximation and the Dynamical Mean Field Theory (DMFT). The sole input in this method is the projector to the set of correlated orbitals for which all local Feynman graphs are being evaluated. For that purpose, we choose very localized quasiatomic orbitals spanning a large energy window, which contains most strongly-hybridized bands as well as upper and lower Hubbard bands. The self-consistency is carried out on the Matsubara axis. This method enables the *first-principles* study of Mott insulators in both their paramagnetic (PM) and antiferromagnetic (AFM) phases. We illustrate the method on the archetypical charge transfer correlated insulators La$_2$CuO$_4$ and NiO, and obtain spectral properties and magnetic moments in good agreement with experiments.' author: - Sangkook Choi - Andrey Kutepov - Kristjan Haule - Mark van Schilfgaarde - Gabriel Kotliar title: '*First-principles* treatment of Mott insulators: linearized QSGW+DMFT approach' --- *Introduction*. The *first-principles* description of strongly-correlated materials is currently regarded as one of the greatest challenges in condensed matter physics. The interplay between correlated electrons in open $d$- or $f$-shells and itinerant band states gives rise to rich physics that makes these materials attractive for a wide range of applications such as oxide electronics, high-temperature superconductors and spintronic devices. Various theoretical approaches are currently being pursued [@anisimov_strong_2000]. One of the most successful approaches is the dynamical mean field theory (DMFT) [@georges_dynamical_1996]. In combination with density functional theory [@anisimov_first-principles_1997; @lichtenstein_textbf_1998], it has described many features of strongly-correlated materials successfully and highlighted the surprising accuracy of treating correlations local to a small subset of orbitals exactly, while treating the remainder of the problem in a static mean-field manner [@kotliar_electronic_2006]. The numerous successes of DMFT in different classes of correlated materials revived the interest in the long-sought goal of achieving a diagrammatically controlled approach to the quantum many-body problem of solids, starting from the Green’s function $G$ and the screened Coulomb interactions $W$ [@almbladh_variational_1999; @chitra_effective-action_2001]. The lowest order diagram in perturbation theory in this functional gives rise to the GW approximation [@hedin_new_1965], while the local approximation applied to the most correlated orbitals gives rise to an extended DMFT approach to the electronic structure problem [@chitra_effective-action_2001].
The addition of the GW and DMFT graphs was proposed and implemented in model Hamiltonian studies [@sun_extended_2002] and in realistic electronic structure [@biermann_first-principles_2003; @kotliar_model_2001]. There is now intense activity in this area with many recent publications [@tomczak_combined_2012; @sakuma_electronic_2013; @taranto_comparing_2013; @tomczak_asymmetry_2014] triggered by advances in the quality of the impurity solvers [@werner_continuous-time_2006; @haule_quantum_2007], insights into the analytic form of the high-frequency behavior of the self-energy [@casula_dynamical_2012] and improved electronic structure codes. Several conceptual issues remain to be clarified before the long sought goal of a robust electronic structure method for solids is attained. The first issue is the choice of local orbitals on which to perform the DMFT method (summation of all local Feynman graphs). The second issue is the level of self-consistency that should be used in the calculation of various parts of the diagrams included in the treatment (free or bare Green’s function $G_0$ vs self-consistent interacting Green’s functions $G$). These central issues are addressed in this letter. The self-consistency issue appears already at the lowest order, namely, the GW level, and it has been debated over time. The corresponding issue in GW+DMFT is expected to be at least as important, but has not been explored, except for model Hamiltonians [@sun_many-body_2004; @hansmann_long-range_2013]. At the GW level, it is now well established that Hedin’s fully self-consistent formulation [@hedin_new_1965], while producing good total energies in solids [@kutepov_ground-state_2009], atoms and molecules [@stan_fully_2006; @stan_levels_2009], does not produce a good approximation to the spectra of even 3D electron gas and aluminum in comparison to non-self-consistent GW results [@holm_fully_1998; @kutepov_ground-state_2009]. Instead, using a free (quasiparticle) Green’s function in the evaluation of the polarization graph of the GW method gives much better results for spectral functions. This is the basis of the one-shot quasiparticle (QP) GW, starting from LDA [@hybertsen_electron_1986] or from others [@rinke_combining_2005]. Unfortunately, the answer depends on the starting point. A solution for this problem is to impose a self-consistency equation to determine $G_0$. This method, called the quasiparticle self-consistent GW (QSGW) [@kotani_quasiparticle_2007], is very successful reproducing the spectra of many systems [@kotani_quasiparticle_2007]. How to combine it with DMFT is an important open challenge [@tomczak_many-body_2012; @tomczak_qsgw+dmft:_2014]. ![Atomic structure and first Brillouin Zone of La$_2$CuO$_4$. (a) Atomic structure of La$_2$CuO$_4$ in the single face-centered orthorhombic phase. Lanthanum atoms are represented by green spheres, copper atoms by blue spheres in the blue octahedrons, and oxygen atoms by red spheres. The structure is characterized by an alternating rotation of successive Cu$O_6$ octahedra along the $x$ direction. (b) First Brillouin zone of single face-centered orthorhombic phase. Red lines show the path along which electronic bandstructures are plotted in Fig. 2(c) and Fig. 3.](fig1_new.eps){width="0.70\columnwidth"} Previous GW+DMFT studies typically used a $G_0$ which depends on the LDA starting point, and projectors spanning a relatively small energy window [@tomczak_combined_2012; @sakuma_electronic_2013; @taranto_comparing_2013; @tomczak_asymmetry_2014]. 
In this work, we propose a different approach to the level of self-consistency and the choice of the DMFT orbital. We do a self-consistent QSGW calculation and then calculate local self-energy using DMFT with static $U_d$ and $J_H$ without feedback to non-local self-energy within GW. For the DMFT step, we choose a very localized orbital spanning large energy window which contains most strongly-hybridized bands as well as upper and lower Hubbard bands. ![Hubbard U associated with Cu-3$d$ orbitals in La$_2$CuO$_4$. (a) Frequency dependence of $W_d$ (dashed lines) and $U_d$ (full lines) of La$_2$CuO$_4$ with a $\chi_{QP}^{low}$ defined in the energy window $E_F\pm 10eV$. Real and imaginary parts of the parameter are marked by red and blue colors, respectively. (b) Bandgap dependence on $U_d$, in La$_2$CuO$_4$, evaluated with impurity self-energy within spin-polarized GW approximation with $J_H$=1.4eV. The Black dashed line represents bandgap within spin-polarized MQSGW. (c) Spectral function of La$_2$CuO$_4$ with $U_d$=12eV and $J_H$=1.4eV. The black dashed-lines show bandstructures within spin-polarized MQSGW](fig2_new.eps){width="0.70\columnwidth"} In the LDA+DMFT context, the choice of very localized orbitals has provided a great deal of universality since the interactions do not vary much among compounds of the same family. This has been demonstrated in the studies of iron pnictides [@yin_kinetic_2011] and transition metal oxides [@haule_covalency_2014]. This choice results in a second advantage as we will show below, namely the frequency dependence of the interaction matrix can be safely ignored. Having chosen the correlated orbitals, all the other parameters are self-consistently determined. This is the first *ab initio* quasiparticle self-consistent GW+DMFT implementation and the first study on a paramagnetic Mott insulator within the GW+DMFT method. ![image](fig3_new.eps){width="0.70\columnwidth"} *Results*. Fig. 2(a) shows the frequency dependence of real and imaginary parts of $U_d$ of La$_2$CuO$_4$ shown in Fig. 1. It is calculated on an imaginary frequency axis and analytically continued by a maximum entropy method [@jarrell_bayesian_1996]. We also plot the fully screened Coulomb interaction $W_d$ for comparison. Static $U_d$ is 12.0 eV and $U_d$ remains almost constant up to $10\,$eV. In contrast, in $W_d$, there are several peaks due to low-energy collective excitations below $10\,$eV. At very high energy, $U_d$ approaches the bare coulomb interaction of $28\,$eV. Static value of $U_{pd}$ is 2.0 eV, much smaller than $U_d$, hence we don’t discuss its treatment further[@optimal_u]. Calculated $J_H$ is $1.4\,$eV and has negligible frequency dependence. By contrast, conventional constrained-RPA, in which 10 bands of mostly Cu-$3d$ character are excluded from screening, results in static $U_d=7.6\,$eV, which is too small to open the Mott gap, and which is also inconsistent with photoemission experiments on CuO charge transfer insulators [@ghijsen_resonant_1990]. We introduced a complementary method to compute the static $U_d$. The key idea is to first calculate the excitation spectra of La$_2$CuO$_4$ within MQSGW+DMFT using local GW (with a static $U_d$) as the impurity solver and then determine $U_d$, by finding the value that best matches the full spin-polarized MQSGW spectra. The procedure starts from the non-spin-polarized MQSGW band structure without magnetic long-range order. 
We then allow spontaneous magnetic long-range order by embedding a polarized impurity self-energy for the Cu-3$d$ electrons computed in a local GW approximation. We find that magnetic ordering associated with Cu-3$d$ is indeed captured by spin-polarized local MQSGW using static values of $U_d$ and $J_H$, and that spectral properties such as the energy gap are very similar in value to the full spin-polarized MQSGW spectra. In Fig. 2(b), we allowed $U_d$ to vary between 8 and 13$\,$eV (at fixed $J_H=1.4\,$eV) and we plot the size of the indirect gap. The gap size of this method matches the gap of spin-polarized MQSGW when $U_d\approx 12\,$eV. If this choice of $U_d$ and $J_H$ is correct, the resulting spectra must be similar to the prediction of the spin-polarized MQSGW method. We show this comparison in Fig. 2(c) to confirm a good match. In addition, the relative position of the Cu-$d$ band (the lowest energy conduction band at S) to the La-$d$ band (the lowest energy conduction band at Y) is also well matched, justifying the approximation $\hat{\Sigma}^{DC}(i\omega_n)\simeq\hat{\Sigma}^{DC}(i\omega_n=0)$. $\Sigma^{DC}(i\omega_n=0)$ for the Cu-$d_{x^2-y^2}$ orbital differs from the nominal double-counting energy [@haule_dynamical_2010] by only $1\%$, highlighting again the advantages of using a broad window and narrow orbitals. ![The density of states of La$_2$CuO$_4$. (a) Total density of states of La$_2$CuO$_4$ from LDA (magenta), LDA+DMFT(green), MQSGW (red), and MQSGW+DMFT (blue). Full lines and dashed-lines represent quantities within non-spin-polarized and spin-polarized versions of each calculation, respectively. The cyan dotted line shows photoemission/inverse-photoemission data [@nucker_experimental_1987]. The positions of the La-$f$ peaks are marked by arrows. (b) A zoom-in view in the low-energy region. (c) The overlap of total density of states of La$_2$CuO$_4$ within LDA+DMFT as well as MQSGW+DMFT and photoemission/inverse-photoemission data [@nucker_experimental_1987]](fig4_new.eps){width="0.70\columnwidth"} We now discuss the magnetic moment associated with Cu and the electronic excitation spectra of La$_2$CuO$_4$ by using MQSGW+DMFT (with $U_d=12.0eV$, $J_H=1.4eV$), in which the impurity is solved by the numerically exact CTQMC [@haule_quantum_2007; @werner_continuous-time_2006], and compare them with other methods. LSDA does not have a magnetic solution. In contrast, spin-polarized MQSGW, QSGW [@kotani_quasiparticle_2007], and MQSGW+DMFT predict 0.7 $\mu_B$, 0.7 $\mu_B$, and 0.8 $\mu_B$, respectively. This is consistent with experimental measurements, although the latter span a rather large range, $0.4\mu_B \sim 0.8\mu_B$ [@borsa_staggered_1995; @reehuis_crystal_2006; @vaknin_antiferromagnetism_1987]. In the low-energy spectrum of La$_2$CuO$_4$, LSDA does not have an insulating solution; there is a single non-magnetic solution with zero energy gap as shown in the bandstructure (Fig. 3(a)) and total density of states (Fig. 4(a)). The non-spin-polarized MQSGW also predicts a metal as shown in Fig. 4(a), but the two bands of primarily $Cu\mbox{-}d_{x^2\mbox{-}y^2}$ character near the Fermi level are well-separated from the rest of the bands (dashed lines in Fig. 3(b)). The spin-polarized MQSGW calculation (dashed lines in Fig. 3(c)) yields qualitatively different results from the LSDA and non-spin-polarized MQSGW calculations. The two $Cu\mbox{-}d_{x^2\mbox{-}y^2}$ bands are now well separated from each other with a bandgap of 3.4 eV.
Spin-polarized QSGW [@kotani_quasiparticle_2007] also yields insulating phase with a gap of 4.0 eV. In the experiment, the larger direct gap, as measured by optics, is $\sim2eV$  [@ginder_photoexcitations_1988; @cooper_optical_1990]. We show that these deficiencies of LDA, QSGW and MQSGW in the low-energy spectra can be remedied by adding all local Feynman diagrams for the Cu-$d$ orbitals using the DMFT. The LDA+DMFT calculation in Fig. 4(a), carried out by the all-electron LDA+DMFT method [@haule_dynamical_2010; @haule_covalency_2014], predicts reasonable gap of 1.5 eV and 1.8 eV in PM and AFM phases, in good agreement with experiment and previous LDA+DMFT studies  [@weber_optical_2008; @weber_strength_2010; @wang_covalency_2012; @haule_covalency_2014; @werner_dynamical_2014]. Within MQSGW+DMFT, we find gaps of 1.5 eV and 1.6 eV in PM and AFM phases, respectively, as shown in Fig. 4(b). The excitation spectra of MQSGW+DMFT in PM and AFM phase as shown in Fig. 3(b) and 3(c) are very similar as both are insulating with well separated $Cu\mbox{-}d_{x^2\mbox{-}y^2}$ bands, which is now also substantially broadened due to large scattering rate in Hubbard-like bands. In addition, MQSGW+DMFT improves the line-shape of LDA+DMFT. Near the top of the valence bands with oxygen $p$ character, the lineshape within LDA+DMFT is too sharp in comparison to the experiments as shown in Fig. 4(c). By treating oxygen $p$ levels within GW, the lineshape becomes smoother and in a better agreement with experiments. In the high energy region of La$_2$CuO$_4$, the most distinctive difference is the position of La-$f$ peak. It appears at $\sim3eV$ within LDA and LDA+DMFT, but at around $\sim 9eV$, in the inverse-photoemission spectra (cyan dotted line in Fig. 4(a)) [@nucker_experimental_1987]. By treating La-$f$ within GW approximation, it appears at $\sim10eV$ within MQSGW and MQSGW+DMFT. The underestimation of unoccupied La-$f$ excitation energy is attributed to the local approximation to the electron self-energy within LDA. Within LDA, Hartree and exchange-correlation potential applied to La-$f$ orbitals are orbital-independent since charge density is averaged over 14 different m channels [@anisimov_band_1991]. In contrast, these potentials within MQSGW are orbital-dependent and non-local. The effect of orbital-dependent potential can be tested within LDA+U approaches, since LDA+U adds orbital-dependent potential and subtracts orbital-independent potential explicitly [@anisimov_first-principles_1997]. From LDA+U approaches, we can also understand MQSGW better since LDA+U can be regarded as a local and static approximation to GW approximation [@anisimov_first-principles_1997]. According to M.T.Czyzyk and G.A.Sawatzky [@czyzyk_local-density_1994], La-$f$ peaks shift from E$_F$+3eV to E$_F$+3eV+U/2 with U=11eV for La-$f$. ![Hubbard U associated with Ni-3$d$ orbitals in NiO (a) Frequency dependence of $W_d$ (dashed lines) and $U_d$ (full lines) of NiO, with a $\chi_{QP}^{low}$ defined in the energy window in $E_F-11eV$ to $E_F+10eV$. Real and imaginary parts of the parameter are marked by red and blue colors, respectively. (b) Total density of states of NiO within LDA+DMFT(green) and MQSGW+DMFT(blue). The cyan dotted line shows photoemission/inverse-photoemission data [@zimmermann_electronic_1999]](fig5_new.eps){width="0.70\columnwidth"} We also tested our proposed scheme with one more charge transfer insulator, NiO. Fig. 
5(a) shows the frequency dependence of $U_d$ and $W_d$ for the Ni-$3d$ orbitals in the low-energy region. In contrast to $W_d$, $U_d$ is almost constant up to $5\,$eV. The static $U_d$ is 9.6 eV. In the high energy limit, $U_d$ and $W_d$ approach the bare value of $26.0\,$eV. The calculated $J_H$ for the Ni-$3d$ orbitals has negligible frequency dependence, and its static value is $1.4\,$eV. Fig. 5(b) shows the total density of states of NiO within LDA+DMFT and MQSGW+DMFT in its paramagnetic phase. Photoemission/inverse photoemission data are also plotted for comparison [@zimmermann_electronic_1999]. The LDA+DMFT calculation was carried out with the all-electron LDA+DMFT method [@haule_dynamical_2010] with $U_d=10eV, J_H=0.9eV$ and the nominal double-counting energy. In the paramagnetic phase, LDA+DMFT and MQSGW+DMFT predict an insulator, in agreement with previous LDA+DMFT studies [@ren_lda_2006; @yin_calculated_2008], but MQSGW+DMFT improves on the lineshape of LDA+DMFT. Near the top of the valence bands, the lineshape within LDA+DMFT is too sharp in comparison to the experiments. By treating the oxygen $p$ levels within GW, the lineshape becomes smoother and in better agreement with experiments. In the antiferromagnetic phase, the magnetic moment associated with the Ni-$d$ orbitals is 1.6 $\mu_B$ within MQSGW+DMFT, in agreement with the experimental value of 1.6–1.9 $\mu_B$ [@cheetham_magnetic_1983; @fender_covalency_1968; @ren_lda_2006]. In summary, we introduced a new methodology within MQSGW+DMFT and tested it on the classic charge transfer insulators La$_2$CuO$_4$ and NiO. Our methodology predicts a Mott-insulating gap in the PM phase, thus overcoming the limitation of LDA and QSGW. It yields more precise peak positions for the La-$f$ states in La$_2$CuO$_4$ and a better valence-band lineshape, thus improving the results of LDA+DMFT. The method should be useful for understanding the electronic excitation spectra of other strongly-correlated materials, in particular those where the precise positions of both the itinerant and correlated states are important. **Methods** Our approach is carried out entirely on the Matsubara axis, which requires a different approach to the quasiparticle self-consistency in GW [@kutepov_electronic_2012], called Matsubara Quasiparticle Self-consistent GW (MQSGW), where the quasiparticle Hamiltonian is constructed by linearizing the self-energy and renormalization factor [@qsgw_vs_qpgw]. Working on the Matsubara axis is numerically very stable, provides a natural interface with advanced DMFT solvers such as continuous-time quantum Monte-Carlo (CTQMC) [@werner_continuous-time_2006; @haule_quantum_2007], and has very good scaling in system size, as in the space-time method (see the supplementary note on Matsubara QSGW calculations). For DMFT, it is essential to obtain bandstructures on a fine enough crystal momentum ($\mathbf{k}$) mesh to attain the desired frequency resolution of physical quantities. To achieve such momentum resolution, we use a Wannier-interpolated MQSGW bandstructure in a large energy window, obtained using maximally localized Wannier functions (MLWFs) [@marzari_maximally_2012], and then construct the local projector on a fine momentum mesh. In contrast to SrVO$_3$ [@tomczak_combined_2012; @sakuma_electronic_2013; @taranto_comparing_2013; @tomczak_asymmetry_2014], where a set of $t_{2g}$ states is reasonably well separated from the other bands, the correlated $3d$ orbitals in La$_2$CuO$_4$ and NiO are strongly hybridized with other itinerant bands.
In this case, it is necessary to construct local projectors from states in a wide enough energy window to make the projectors localized near the correlated atoms. We constructed local projectors in the energy window $E_F\pm 10eV$, in which there are ${\sim}82$ bands at each $\mathbf{k}$ point, where $E_F$ is the Fermi level for La$_2$CuO$_4$. For NiO, we constructed local projectors in the energy window of $E_F-11eV$ to $E_F+10eV$. We then confirmed that the absolute value of its overlap with the muffin-tin orbital (whose radial function is determined to maximize the electron occupation within it) is more than 95%. Our choice of energy window is justified by the Cu-$3d$ spectra being entirely contained in this window. Using the constructed MLWFs, we defined our local projector $P_{i,n}(\mathbf{k})=\sum_{R}\left<W_{\mathbf{R}i}|\psi_{n\mathbf{k}}\right>e^{-i\mathbf{k}\cdot\mathbf{R}}/\sqrt{N_k}$, where $W_{\mathbf{R}i}(\mathbf{r})$ is the MLWF with index $i$, $\psi_{n\mathbf{k}}(\mathbf{r})$ is the quasiparticle wavefunction with index $n$, and $N_k$ is the number of $\mathbf{k}$ points in the first Brillouin zone. Static $U_d$ and $J_H$ are evaluated by a modification of the constrained RPA method [@aryasetiawan_frequency-dependent_2004], which avoids screening by the strongly hybridized bands. This screening by hybridization is included in our large energy window DMFT. For details, see the supplementary note on $U_d$ and $J_H$. We divide the dynamic polarizability within the MQSGW approximation, $\chi_{QP}$, into two parts, $\chi_{QP}=\chi_{QP}^{low}+\chi_{QP}^{high}$. Here, $\chi_{QP}^{low}$ is defined by all transitions between the states in the energy window accounted for by the DMFT method ($E_F\pm 10eV$ for La$_2$CuO$_4$ and $E_F-11eV$ to $E_F+10eV$ for NiO). Using $\chi_{QP}^{high}$, we evaluate the partially-screened Coulomb interaction $U^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n)=V^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k})-\chi_{QP}^{high}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n)$ and parametrize static $U_d$ and $J_H$ by Slater’s integrals [@van_der_marel_electron-electron_1988; @kutepov_self-consistent_2010], where $V$ is the bare Coulomb interaction. The Feynman graphs included in both MQSGW and DMFT (double-counting) are the local Hartree and the local GW diagram. They are computed using the local projection of the MQSGW Green’s function ($\hat{G}_{QP}$), $\hat{G}_{QP}^{loc}(i\omega_n)=\frac{1}{N_k}\sum_{\mathbf{k}}\hat{P}(\mathbf{k})\hat{G}_{QP}(\mathbf{k},i\omega_n)\hat{P}^\dagger(\mathbf{k})$, and the local Coulomb matrix constructed from Slater’s integrals. For details, see the supplementary note on the double-counting energy. **Acknowledgments** This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences as a part of the Computational Materials Science Program and by the Simons Foundation under the Many Electron Problem collaboration. An award of computer time was provided by the INCITE program. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. **Author contributions** G.K. designed the framework of the code. S.C. developed the code, building on earlier developments of A.K. and K.H., and performed the calculations. G.K., K.H. and S.C. analyzed the data with the help of A.K. and M.v.S. All authors provided comments on the paper.
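As a concrete illustration of the projection defined in the Methods above, the following sketch evaluates $\hat{G}_{QP}^{loc}(i\omega_n)$ from a quasiparticle Hamiltonian and a set of projectors. The array shapes, function name, and placeholder inputs are our own and do not come from an actual MQSGW calculation.

```python
# Schematic sketch of the local projection used above:
#   G_loc(iw_n) = (1/N_k) * sum_k P(k) G_QP(k, iw_n) P(k)^dagger,
# with G_QP(k, iw_n) = [(iw_n + mu) I - H_qp(k)]^(-1) for a quasiparticle
# Hamiltonian. H_qp, P, mu and the frequency grid are placeholder inputs.
import numpy as np

def local_green_function(H_qp, P, mu, omega_n):
    """H_qp: (N_k, N_b, N_b) Hermitian blocks; P: (N_k, N_orb, N_b) projectors;
    mu: chemical potential; omega_n: 1D array of Matsubara frequencies."""
    N_k, N_b, _ = H_qp.shape
    N_orb = P.shape[1]
    G_loc = np.zeros((len(omega_n), N_orb, N_orb), dtype=complex)
    identity = np.eye(N_b)
    for i, w in enumerate(omega_n):
        for k in range(N_k):
            G_k = np.linalg.inv((1j * w + mu) * identity - H_qp[k])
            G_loc[i] += P[k] @ G_k @ P[k].conj().T
        G_loc[i] /= N_k
    return G_loc
```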
--- abstract: 'Regularization by Denoising (RED), as recently proposed by Romano, Elad, and Milanfar, is a powerful image-recovery framework that aims to minimize an explicit regularization objective constructed from a plug-in image-denoising function. Experimental evidence suggests that the RED algorithms are state-of-the-art. We claim, however, that explicit regularization does not explain the RED algorithms. In particular, we show that many of the expressions in the paper by Romano et al. hold only when the denoiser has a symmetric Jacobian, and we demonstrate that such symmetry does not occur with practical denoisers such as non-local means, BM3D, TNRD, and DnCNN. To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a “score” (i.e., the gradient of a log-prior). We then show tight connections between SMD, kernel density estimation, and constrained minimum mean-squared error denoising. Furthermore, we interpret the RED algorithms from Romano et al. and propose new algorithms with acceleration and convergence guarantees. Finally, we show that the RED algorithms seek a consensus equilibrium solution, which facilitates a comparison to plug-and-play ADMM.' author: - 'Edward T. Reehorst and Philip Schniter, [^1]' bibliography: - 'macros\_abbrev.bib' - 'books.bib' - 'misc.bib' - 'machine.bib' - 'sparse.bib' - 'phase.bib' title: 'Regularization by Denoising: Clarifications and New Interpretations' --- Introduction {#sec:intro} ============ Consider the problem of recovering a (vectorized) image ${\ensuremath{\boldsymbol{x}}}{^0}\in{{\mathbb{R}}}^N$ from noisy linear measurements ${\ensuremath{\boldsymbol{y}}}\in{{\mathbb{R}}}^M$ of the form $$\begin{aligned} {\ensuremath{\boldsymbol{y}}}={\ensuremath{\boldsymbol{A}}}{\ensuremath{\boldsymbol{x}}}{^0}+{\ensuremath{\boldsymbol{e}}} \label{eq:yAx},\end{aligned}$$ where ${\ensuremath{\boldsymbol{A}}}\in{{\mathbb{R}}}^{M\times N}$ is a known linear transformation and ${\ensuremath{\boldsymbol{e}}}$ is noise. This problem is of great importance in many applications and has been studied for several decades. One of the most popular approaches to image recovery is the “variational” approach, where one poses and solves an optimization problem of the form $$\begin{aligned} {\ensuremath{\boldsymbol{{\widehat}{x}}}}=\arg\min_{{\ensuremath{\boldsymbol{x}}}} \big\{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \lambda\rho({\ensuremath{\boldsymbol{x}}}) \big\} \label{eq:variational}.\end{aligned}$$ In [(\[eq:variational\])]{}, $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})$ is a loss function that penalizes mismatch to the measurements, $\rho({\ensuremath{\boldsymbol{x}}})$ is a regularization term that penalizes mismatch to the image class of interest, and $\lambda>0$ is a design parameter that trades between loss and regularization. A prime advantage of the variational approach is that, in many cases, efficient optimization methods can be readily applied to [(\[eq:variational\])]{}. A key question is: How should one choose the loss $\ell(\cdot;{\ensuremath{\boldsymbol{y}}})$ and regularization $\rho(\cdot)$ in [(\[eq:variational\])]{}? As discussed in the sequel, the MAP-Bayesian interpretation suggests that they should be chosen in proportion to the negative log-likelihood and negative log-prior, respectively. The trouble is that accurate prior models for images are lacking.
Recently, a breakthrough was made by Romano, Elad, and Milanfar in [@Romano:JIS:17]. Leveraging the long history (e.g., [@Buades:MMS:05; @Milanfar:SPM:13]) and recent advances (e.g., [@Chen:TPAMI:17; @Zhang:TIP:17]) in image denoising algorithms, they proposed the *regularization by denoising* (RED) framework, where an explicit regularizer $\rho({\ensuremath{\boldsymbol{x}}})$ is constructed from an image denoiser ${\ensuremath{\boldsymbol{f}}}:{{\mathbb{R}}}^N\rightarrow{{\mathbb{R}}}^N$ using the simple and elegant rule $$\begin{aligned} {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) = \frac{1}{2}{\ensuremath{\boldsymbol{x}}}{^{\top}}\big({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\big) \label{eq:rhoRED0} .\end{aligned}$$ Based on this framework, they proposed several recovery algorithms (based on steepest descent, ADMM, and fixed-point methods, respectively) that yield state-of-the-art performance in deblurring and super-resolution tasks. In this paper, we provide some clarifications and new interpretations of the excellent RED algorithms from [@Romano:JIS:17]. Our work was motivated by an interesting empirical observation: With many practical denoisers ${\ensuremath{\boldsymbol{f}}}(\cdot)$, the RED algorithms do not minimize the RED variational objective “$\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$.” As we establish in the sequel, the RED regularization [(\[eq:rhoRED0\])]{} is justified only for denoisers with symmetric Jacobians, which unfortunately does not cover many state-of-the-art methods such as non-local means (NLM) [@Buades:CVPR:05], BM3D [@Dabov:TIP:07], TNRD [@Chen:TPAMI:17], and DnCNN [@Zhang:TIP:17]. In fact, we are able to establish a stronger result: For non-symmetric denoisers, there exists no regularization $\rho(\cdot)$ that explains the RED algorithms from [@Romano:JIS:17]. In light of these (negative) results, there remains the question of how to explain/understand the RED algorithms from [@Romano:JIS:17] when used with non-symmetric denoisers. In response, we propose a framework called *score-matching by denoising* (SMD), which aims to match the “score” (i.e., the gradient of the log-prior) rather than to design any explicit regularizer. We then show tight connections between SMD, kernel density estimation [@Parzen:AMS:62], and constrained minimum mean-squared error (MMSE) denoising. In addition, we provide new interpretations of the RED-ADMM and RED-FP algorithms proposed in [@Romano:JIS:17], and we propose novel RED algorithms with faster convergence. Inspired by [@Buzzard:JIS:18], we show that the RED algorithms seek to satisfy a consensus equilibrium condition that allows a direct comparison to the plug-and-play ADMM algorithms from [@Venkatakrishnan:GSIP:13]. The remainder of the paper is organized as follows. In [Section \[sec:back\]]{}, we provide more background on RED and related algorithms such as plug-and-play ADMM [@Venkatakrishnan:GSIP:13]. In [Section \[sec:clarifications\]]{}, we discuss the impact of Jacobian symmetry on RED and test whether this property holds in practice. In [Section \[sec:new\]]{}, we propose the SMD framework. In [Section \[sec:algorithms\]]{}, we present new interpretations of the RED algorithms from [@Romano:JIS:17] and new algorithms based on accelerated proximal gradient methods.
In [Section \[sec:CE\]]{}, we perform an equilibrium analysis of the RED algorithms, and, in [Section \[sec:conclusion\]]{}, we conclude. Background {#sec:back} ========== The MAP-Bayesian Interpretation {#sec:Bayesian} ------------------------------- For use in the sequel, we briefly discuss the Bayesian maximum a posteriori (MAP) estimation framework [@Bishop:Book:07]. The MAP estimate of ${\ensuremath{\boldsymbol{x}}}$ from ${\ensuremath{\boldsymbol{y}}}$ is defined as $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{map}}}= \arg\max_{{\ensuremath{\boldsymbol{x}}}} p({\ensuremath{\boldsymbol{x}}}|{\ensuremath{\boldsymbol{y}}}) ,\end{aligned}$$ where $p({\ensuremath{\boldsymbol{x}}}|{\ensuremath{\boldsymbol{y}}})$ denotes the probability density of ${\ensuremath{\boldsymbol{x}}}$ given ${\ensuremath{\boldsymbol{y}}}$. Notice that, from Bayes rule $p({\ensuremath{\boldsymbol{x}}}|{\ensuremath{\boldsymbol{y}}}) = p({\ensuremath{\boldsymbol{y}}}|{\ensuremath{\boldsymbol{x}}}) p({\ensuremath{\boldsymbol{x}}}) / p({\ensuremath{\boldsymbol{y}}})$ and the monotonically increasing nature of $\ln(\cdot)$, we can write $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{map}}}&= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \big\{ -\ln p({\ensuremath{\boldsymbol{y}}}|{\ensuremath{\boldsymbol{x}}}) - \ln p({\ensuremath{\boldsymbol{x}}}) \big\} \label{eq:MAP}.\end{aligned}$$ MAP estimation [(\[eq:MAP\])]{} has a direct connection to variational optimization [(\[eq:variational\])]{}: the log-likelihood term $-\ln p({\ensuremath{\boldsymbol{y}}}|{\ensuremath{\boldsymbol{x}}})$ corresponds to the loss $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})$ and the log-prior term $-\ln p({\ensuremath{\boldsymbol{x}}})$ corresponds to the regularization $\lambda\rho({\ensuremath{\boldsymbol{x}}})$. For example, with additive white Gaussian noise (AWGN) ${\ensuremath{\boldsymbol{e}}}\sim {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\sigma_e^2{\ensuremath{\boldsymbol{I}}})$, the log-likelihood implies a quadratic loss: $$\begin{aligned} \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})=\frac{1}{2\sigma_e^2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2 \label{eq:quadloss}.\end{aligned}$$ Equivalently, the normalized loss $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})=\frac{1}{2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2$ could be used if $\sigma_e^2$ was absorbed into $\lambda$. ADMM {#sec:ADMM} ---- A popular approach to solving [(\[eq:variational\])]{} is through ADMM [@Boyd:FTML:11], which we now review. 
Using variable splitting, [(\[eq:variational\])]{} becomes $$\begin{aligned} {\ensuremath{\boldsymbol{{\widehat}{x}}}} = \arg\min_{{\ensuremath{\boldsymbol{x}}}} \big\{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda\rho({\ensuremath{\boldsymbol{v}}}) \big\} ~~ \text{s.t.}\ {\ensuremath{\boldsymbol{x}}}={\ensuremath{\boldsymbol{v}}} \label{eq:variable_splitting}.\end{aligned}$$ Using the augmented Lagrangian, problem [(\[eq:variable\_splitting\])]{} can be reformulated as $$\begin{aligned} \min_{{\ensuremath{\boldsymbol{x}}},{\ensuremath{\boldsymbol{v}}}}\max_{{\ensuremath{\boldsymbol{p}}}} \Big\{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda\rho({\ensuremath{\boldsymbol{v}}})+{\ensuremath{\boldsymbol{p}}}{^{\top}}({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}})+\frac{\beta}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}} \|}}^2 \Big\} \label{eq:augmented_lagrangian_nonscaled}\end{aligned}$$ using Lagrange multipliers (or “dual” variables) ${\ensuremath{\boldsymbol{p}}}$ and a design parameter $\beta>0$. Using ${\ensuremath{\boldsymbol{u}}}\triangleq {\ensuremath{\boldsymbol{p}}}/\beta$, [(\[eq:augmented\_lagrangian\_nonscaled\])]{} can be simplified to $$\begin{aligned} \min_{{\ensuremath{\boldsymbol{x}}},{\ensuremath{\boldsymbol{v}}}}\max_{{\ensuremath{\boldsymbol{u}}}} \Big\{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda\rho({\ensuremath{\boldsymbol{v}}})+\frac{\beta}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}+{\ensuremath{\boldsymbol{u}}} \|}}^2 - \frac{\beta}{2}\|{\ensuremath{\boldsymbol{u}}}\|^2 \Big\} \label{eq:augmented_lagrangian} .\end{aligned}$$ The ADMM algorithm solves [(\[eq:augmented\_lagrangian\])]{} by alternating the minimization of ${\ensuremath{\boldsymbol{x}}}$ and ${\ensuremath{\boldsymbol{v}}}$ with gradient ascent of ${\ensuremath{\boldsymbol{u}}}$, as specified in [Algorithm \[alg:ADMM\]]{}. ADMM is known to converge under convex $\ell(\cdot;{\ensuremath{\boldsymbol{y}}})$ and $\rho(\cdot)$, and other mild conditions (see [@Boyd:FTML:11]). \[alg:ADMM\] ADMM: initialize ${\ensuremath{\boldsymbol{v}}}_0$ and ${\ensuremath{\boldsymbol{u}}}_0$ \[line:ADMM\_init\]; then, for $k=1,2,\dots$, compute ${\ensuremath{\boldsymbol{x}}}_k=\arg\min_{{\ensuremath{\boldsymbol{x}}}}\big\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\frac{\beta}{2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}_{k-1}+{\ensuremath{\boldsymbol{u}}}_{k-1}\|^2\big\}$ \[line:ADMM\_x\_update\], ${\ensuremath{\boldsymbol{v}}}_k=\arg\min_{{\ensuremath{\boldsymbol{v}}}}\big\{\lambda\rho({\ensuremath{\boldsymbol{v}}})+\frac{\beta}{2}\|{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{v}}}+{\ensuremath{\boldsymbol{u}}}_{k-1}\|^2\big\}$ \[line:ADMM\_v\_update\], and ${\ensuremath{\boldsymbol{u}}}_k={\ensuremath{\boldsymbol{u}}}_{k-1}+{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{v}}}_k$ \[line:ADMM\_u\_update\]. Plug-and-Play ADMM ------------------ Importantly, [line \[line:ADMM\_v\_update\]]{} of [Algorithm \[alg:ADMM\]]{} can be recognized as variational *denoising* of ${\ensuremath{\boldsymbol{x}}}_k + {\ensuremath{\boldsymbol{u}}}_{k-1}$ using regularization $\lambda\rho({\ensuremath{\boldsymbol{x}}})$ and quadratic loss $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{r}}})=\frac{1}{2\nu}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{r}}}\|^2$, where ${\ensuremath{\boldsymbol{r}}}={\ensuremath{\boldsymbol{x}}}_k+{\ensuremath{\boldsymbol{u}}}_{k-1}$ at iteration $k$. By “denoising,” we mean recovering ${\ensuremath{\boldsymbol{x}}}{^0}$ from noisy measurements ${\ensuremath{\boldsymbol{r}}}$ of the form $$\begin{aligned} {\ensuremath{\boldsymbol{r}}}={\ensuremath{\boldsymbol{x}}}{^0}+{\ensuremath{\boldsymbol{e}}}, \quad {\ensuremath{\boldsymbol{e}}}\sim {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\nu{\ensuremath{\boldsymbol{I}}}) \label{eq:rxe} ,\end{aligned}$$ for some variance $\nu>0$. Image denoising has been studied for decades (see, e.g., the overviews [@Buades:MMS:05; @Milanfar:SPM:13]), with the result that high performance methods are now readily available.
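To make the structure of [Algorithm \[alg:ADMM\]]{} and its denoising interpretation concrete, here is a minimal Python sketch of ADMM for the quadratic loss [(\[eq:quadloss\])]{}, with the ${\ensuremath{\boldsymbol{v}}}$-update written as a call to a generic denoiser. The `denoise` callable, the small dense ${\ensuremath{\boldsymbol{A}}}$, and the choice of denoiser noise variance are assumptions of this sketch, not code from [@Boyd:FTML:11] or [@Venkatakrishnan:GSIP:13].

```python
import numpy as np

def admm_quadratic_loss(y, A, denoise, lam, beta, sigma2=1.0, n_iters=50):
    """Minimal ADMM sketch for min_x ||Ax-y||^2/(2*sigma2) + lam*rho(x),
    with the v-update performed by a plug-in denoiser `denoise(r, nu)`."""
    N = A.shape[1]
    v = np.zeros(N)                       # initialization (v_0, u_0)
    u = np.zeros(N)
    # Quadratic-loss x-update has a closed form via the normal equations:
    #   (A^T A / sigma2 + beta*I) x = A^T y / sigma2 + beta*(v - u)
    lhs = A.T @ A / sigma2 + beta * np.eye(N)
    rhs0 = A.T @ y / sigma2
    for _ in range(n_iters):
        x = np.linalg.solve(lhs, rhs0 + beta * (v - u))   # x-update
        v = denoise(x + u, lam / beta)   # v-update: denoise x+u (noise var lam/beta is a common choice)
        u = u + x - v                    # dual update (gradient ascent on u)
    return x
```

Swapping any off-the-shelf image denoiser into `denoise` turns this sketch into the Plug-and-Play variant discussed next.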
Today’s state-of-the-art denoisers include those based on image-dependent filtering algorithms (e.g., BM3D [@Dabov:TIP:07]) or deep neural networks (e.g., TNRD [@Chen:TPAMI:17], DnCNN [@Zhang:TIP:17]). Most of these denoisers are not variational in nature, i.e., they are not based on any explicit regularizer $\lambda\rho({\ensuremath{\boldsymbol{x}}})$. Leveraging the denoising interpretation of ADMM, Venkatakrishnan, Bouman, and Wohlberg [@Venkatakrishnan:GSIP:13] proposed to replace [line \[line:ADMM\_v\_update\]]{} of [Algorithm \[alg:ADMM\]]{} with a call to a sophisticated image denoiser, such as BM3D, and dubbed their approach *Plug-and-Play* (PnP) ADMM. Numerical experiments show that PnP-ADMM works very well in most cases. However, when the denoiser used in PnP-ADMM comes with no explicit regularization $\rho({\ensuremath{\boldsymbol{x}}})$, it is not clear what objective PnP-ADMM is minimizing, making PnP-ADMM convergence more difficult to characterize. Similar PnP algorithms have been proposed using primal-dual methods [@Ono:SPL:17] and FISTA [@Kamilov:SPL:17] in place of ADMM. Approximate message passing (AMP) algorithms [@Donoho:PNAS:09] also perform denoising at each iteration. In fact, when ${\ensuremath{\boldsymbol{A}}}$ is large and i.i.d. Gaussian, AMP constructs an internal variable statistically equivalent to ${\ensuremath{\boldsymbol{r}}}$ in [(\[eq:rxe\])]{} [@Bayati:TIT:11]. While the earliest instances of AMP assumed separable denoising (i.e., $[{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})]_n=f(x_n)~\forall n$ for some $f$), later instances, like [@Som:TSP:12; @Donoho:TIT:13b], considered non-separable denoising. The paper [@Metzler:TIT:16] by Metzler, Maleki, and Baraniuk proposed to plug an image-specific denoising algorithm, like BM3D, into AMP. Vector AMP, which extends AMP to the broader class of “right rotationally invariant” random matrices, was proposed in [@Rangan:ISIT:17], and VAMP with image-specific denoising was proposed in [@Schniter:BASP:17]. Rigorous analyses of AMP and VAMP under non-separable denoisers were performed in [@Berthier:17] and [@Fletcher:NIPS:18], respectively. Regularization by Denoising (RED) {#sec:RED} --------------------------------- As discussed in the Introduction, Romano, Elad, and Milanfar [@Romano:JIS:17] proposed a radically new way to exploit an image denoiser, which they call *regularization by denoising* (RED). Given an arbitrary image denoiser ${\ensuremath{\boldsymbol{f}}}:{{\mathbb{R}}}^N\rightarrow{{\mathbb{R}}}^N$, they proposed to construct an explicit regularizer of the form $$\begin{aligned} {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) {\triangleq}\frac{1}{2}{\ensuremath{\boldsymbol{x}}}{^{\top}}({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})) \label{eq:rhoRED}\end{aligned}$$ to use within the variational framework [(\[eq:variational\])]{}. The advantage of using an explicit regularizer is that a wide variety of optimization algorithms can be used to solve [(\[eq:variational\])]{} and their convergence can be tractably analyzed.
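As a concrete illustration of [(\[eq:rhoRED\])]{}, the following sketch evaluates ${\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ and the corresponding quadratic-loss objective for an arbitrary denoiser callable. The simple moving-average `denoiser` is only a stand-in so that the snippet is self-contained; it is not one of the denoisers studied in this paper.

```python
import numpy as np

def moving_average_denoiser(x, width=3):
    """Stand-in 1-D denoiser (moving average); any image denoiser could be plugged in."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode='same')

def rho_red(x, denoiser):
    """RED regularizer rho_red(x) = 0.5 * x^T (x - f(x))."""
    return 0.5 * x @ (x - denoiser(x))

def red_objective(x, y, A, denoiser, lam, sigma2):
    """Quadratic-loss RED objective ||Ax - y||^2/(2*sigma2) + lam * rho_red(x)."""
    resid = A @ x - y
    return resid @ resid / (2.0 * sigma2) + lam * rho_red(x, denoiser)
```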
In [@Romano:JIS:17], numerical evidence is presented to show that image denoisers ${\ensuremath{\boldsymbol{f}}}(\cdot)$ are *locally homogeneous* (LH), i.e., $$\begin{aligned} (1+\epsilon){\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{f}}}\big((1+\epsilon){\ensuremath{\boldsymbol{x}}}\big)~\forall {\ensuremath{\boldsymbol{x}}} \label{eq:LH}\end{aligned}$$ for sufficiently small $\epsilon\in{{\mathbb{R}}}\setminus 0$. For such denoisers, Romano et al. claim [@Romano:JIS:17 Eq.(28)] that ${\rho{_{\textsf{red}}}}(\cdot)$ obeys the gradient rule $$\begin{aligned} \nabla {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) &= {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) \label{eq:gradREDromano} .\end{aligned}$$ If $\nabla {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$, then any minimizer ${\ensuremath{\Hat{\boldsymbol{x}}}}$ of the variational objective under quadratic loss, $$\begin{aligned} \frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2 + \lambda {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) &{\triangleq}{C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) \label{eq:CRED},\end{aligned}$$ must yield $\nabla{C{_{\textsf{red}}}}({\ensuremath{\Hat{\boldsymbol{x}}}})={\ensuremath{\boldsymbol{0}}}$, i.e., must obey $$\begin{aligned} {\ensuremath{\boldsymbol{0}}} &= \frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}({\ensuremath{\boldsymbol{A}}}{\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{y}}}) + \lambda({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})) \label{eq:fpRED} .\end{aligned}$$ Based on this line of reasoning, Romano et al. proposed several iterative algorithms that find an ${\ensuremath{\Hat{\boldsymbol{x}}}}$ satisfying the fixed-point condition [(\[eq:fpRED\])]{}, which we will refer to henceforth as “RED algorithms.” Clarifications on RED {#sec:clarifications} ===================== In this section, we first show that the gradient expression [(\[eq:gradREDromano\])]{} holds if and only if the denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is LH and has Jacobian symmetry (JS). We then establish that many popular denoisers lack JS, such as the median filter (MF) [@Huang:TASSP:79], non-local means (NLM) [@Buades:CVPR:05], BM3D [@Dabov:TIP:07], TNRD [@Chen:TPAMI:17], and DnCNN [@Zhang:TIP:17]. For such denoisers, the RED algorithms cannot be explained by ${\rho{_{\textsf{red}}}}(\cdot)$ in [(\[eq:rhoRED\])]{}. We also show a more general result: When a denoiser lacks JS, there exists no regularizer $\rho(\cdot)$ whose gradient expression matches [(\[eq:gradREDromano\])]{}. Thus, the problem is not the specific form of ${\rho{_{\textsf{red}}}}(\cdot)$ in [(\[eq:rhoRED\])]{} but rather the broader pursuit of explicit regularization. Preliminaries {#sec:prelim} ------------- We first state some definitions and assumptions. 
In the sequel, we denote the $i$th component of ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$ by $f_i({\ensuremath{\boldsymbol{x}}})$, the gradient of $f_i(\cdot)$ at ${\ensuremath{\boldsymbol{x}}}$ by $$\begin{aligned} \nabla f_i({\ensuremath{\boldsymbol{x}}}) &{\triangleq}\begin{bmatrix} \tfrac{\partial f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_1} & \cdots & \tfrac{\partial f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_N} \end{bmatrix}{^{\top}}\label{eq:gradient} ,\end{aligned}$$ and the Jacobian of ${\ensuremath{\boldsymbol{f}}}(\cdot)$ at ${\ensuremath{\boldsymbol{x}}}$ by $$\begin{aligned} {J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}}) &{\triangleq}{\ensuremath{\begin{bmatrix} \tfrac{\partial f_1({\ensuremath{\boldsymbol{x}}})}{\partial x_1} & \tfrac{\partial f_1({\ensuremath{\boldsymbol{x}}})}{\partial x_2} & \dots & \tfrac{\partial f_1({\ensuremath{\boldsymbol{x}}})}{\partial x_N} \\ \tfrac{\partial f_2({\ensuremath{\boldsymbol{x}}})}{\partial x_1} & \tfrac{\partial f_2({\ensuremath{\boldsymbol{x}}})}{\partial x_2} & \dots & \tfrac{\partial f_2({\ensuremath{\boldsymbol{x}}})}{\partial x_N} \\ \vdots & \vdots & \ddots & \vdots\\ \tfrac{\partial f_N({\ensuremath{\boldsymbol{x}}})}{\partial x_1} & \tfrac{\partial f_N({\ensuremath{\boldsymbol{x}}})}{\partial x_2} & \dots & \tfrac{\partial f_N({\ensuremath{\boldsymbol{x}}})}{\partial x_N} \end{bmatrix}}} \label{eq:Jacobian} .\end{aligned}$$ Without loss of generality, we take $[0,255]^N\subset {{\mathbb{R}}}^N$ to be the set of possible images. A given denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ may involve decision boundaries ${\ensuremath{\mathcal{D}}}\subset [0,255]^N$ at which its behavior changes suddenly. We assume that these boundaries are a closed set of measure zero and work instead with the open set ${\ensuremath{\mathcal{X}}}{\triangleq}(0,255)^N\setminus {\ensuremath{\mathcal{D}}}$, which contains almost all images. We furthermore assume that ${\ensuremath{\boldsymbol{f}}}:{{\mathbb{R}}}^N\rightarrow{{\mathbb{R}}}^N$ is *differentiable* on ${\ensuremath{\mathcal{X}}}$, which means [@Rudin:Book:76 p.212] that, for any ${\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}$, there exists a matrix ${\ensuremath{\boldsymbol{J}}}\in{{\mathbb{R}}}^{N\times N}$ for which $$\begin{aligned} \lim_{{\ensuremath{\boldsymbol{w}}}\rightarrow {\ensuremath{\boldsymbol{0}}}} \frac{\|{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}+{\ensuremath{\boldsymbol{w}}})-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) - {\ensuremath{\boldsymbol{J}}}{\ensuremath{\boldsymbol{w}}}\|}{\|{\ensuremath{\boldsymbol{w}}}\|} = 0 \label{eq:differentiable} .\end{aligned}$$ When ${\ensuremath{\boldsymbol{J}}}$ exists, it can be shown [@Rudin:Book:76 p.216] that ${\ensuremath{\boldsymbol{J}}}={J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$. The RED Gradient {#sec:gradRED} ---------------- We first recall a result that was established in [@Romano:JIS:17]. \[lem:LH\] Suppose that denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is locally homogeneous. Then $[{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}} = {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$. Our proof is based on differentiability and avoids the need to define a directional derivative. 
From [(\[eq:differentiable\])]{}, we have $$\begin{aligned} 0 &= \lim_{\epsilon\rightarrow 0} \frac{\|{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}+\epsilon{\ensuremath{\boldsymbol{x}}})-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) - [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}\epsilon\|}{\|\epsilon{\ensuremath{\boldsymbol{x}}}\|} ~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}\\ &= \lim_{\epsilon\rightarrow 0} \frac{\|(1+\epsilon){\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) - [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}\epsilon\|}{\|\epsilon{\ensuremath{\boldsymbol{x}}}\|} ~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}} \label{eq:LH2}\\ &= \lim_{\epsilon\rightarrow 0} \frac{\|{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) - [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}\|}{\|{\ensuremath{\boldsymbol{x}}}\|} ~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}} \label{eq:LH3} ,\end{aligned}$$ where [(\[eq:LH2\])]{} follows from local homogeneity [(\[eq:LH\])]{}. [Equation (\[eq:LH3\])]{} implies that $[{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}} = {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}$. We now state one of the main results of this section. \[lem:gradRED\] For ${\rho{_{\textsf{red}}}}(\cdot)$ defined in [(\[eq:rhoRED\])]{}, $$\begin{aligned} \nabla {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) &= {\ensuremath{\boldsymbol{x}}} - \frac{1}{2} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) - \frac{1}{2} [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}{\ensuremath{\boldsymbol{x}}} \label{eq:gradRED} .\end{aligned}$$ For any ${\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}$ and $n=1,\dots,N$, $$\begin{aligned} \lefteqn{ \frac{\partial {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})}{\partial x_n} = \frac{\partial}{\partial x_n} \frac{1}{2} \sum_{i=1}^N \big( x_i^2 - x_i f_i({\ensuremath{\boldsymbol{x}}}) \big) }\\ &= \frac{1}{2} \frac{\partial}{\partial x_n} \left( x_n^2 - x_n f_n({\ensuremath{\boldsymbol{x}}}) + \sum_{i\neq n} x_i^2 - \sum_{i\neq n} x_i f_i({\ensuremath{\boldsymbol{x}}}) \right)\\ &= \frac{1}{2} \left( 2x_n - f_n({\ensuremath{\boldsymbol{x}}}) - x_n \frac{\partial f_n({\ensuremath{\boldsymbol{x}}})}{\partial x_n} - \sum_{i\neq n} x_i \frac{\partial f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_n} \right) \\ &= x_n - \frac{1}{2} f_n({\ensuremath{\boldsymbol{x}}}) - \frac{1}{2} \sum_{i=1}^N x_i \frac{\partial f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_n} \label{eq:partialRED} \\ &= x_n - \frac{1}{2} f_n({\ensuremath{\boldsymbol{x}}}) - \frac{1}{2} \left[ [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}{\ensuremath{\boldsymbol{x}}} \right]_n ,\end{aligned}$$ using the definition of ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$ from [(\[eq:Jacobian\])]{}. Collecting $\{\frac{\partial {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})}{\partial x_n}\}_{n=1}^N$ into the gradient vector [(\[eq:gradREDromano\])]{} yields [(\[eq:gradRED\])]{}. 
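[Lemma \[lem:gradRED\]]{} is easy to check numerically. The sketch below uses a toy linear “denoiser” ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{Bx}}}$ with a deliberately non-symmetric ${\ensuremath{\boldsymbol{B}}}$; this choice is purely illustrative and is not a practical denoiser. Because a linear map is exactly (hence locally) homogeneous, any discrepancy is attributable to Jacobian asymmetry alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
B = 0.5 * np.eye(N) + 0.05 * rng.standard_normal((N, N))   # non-symmetric Jacobian
f = lambda x: B @ x                                          # toy "denoiser": Jf(x) = B

rho_red = lambda x: 0.5 * x @ (x - f(x))

x = rng.standard_normal(N)
eps = 1e-5
fd_grad = np.array([(rho_red(x + eps * e) - rho_red(x - eps * e)) / (2 * eps)
                    for e in np.eye(N)])                     # finite-difference gradient
grad_romano  = x - f(x)                        # expression (gradREDromano): needs LH and symmetric Jf
grad_general = x - 0.5 * f(x) - 0.5 * B.T @ x  # expression (gradRED): holds in general
print(np.linalg.norm(fd_grad - grad_romano))   # clearly nonzero
print(np.linalg.norm(fd_grad - grad_general))  # essentially zero (roundoff)
```

Replacing ${\ensuremath{\boldsymbol{B}}}$ by its symmetric part makes the two analytical expressions coincide, previewing the role of Jacobian symmetry below.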
Note that the gradient expression [(\[eq:gradRED\])]{} differs from [(\[eq:gradREDromano\])]{}. \[lem:gradREDromano\] Suppose that the denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is locally homogeneous. Then the RED gradient expression [(\[eq:gradREDromano\])]{} holds if and only if ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})=[{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}$. If ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})=[{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}$, then the last term in [(\[eq:gradRED\])]{} becomes $-\frac{1}{2} [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}$, which equals $-\frac{1}{2}{\ensuremath{\boldsymbol{f}}}(x)$ by [Lemma \[lem:LH\]]{}, in which case [(\[eq:gradRED\])]{} agrees with [(\[eq:gradREDromano\])]{}. But if ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})\neq[{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}$, then [(\[eq:gradRED\])]{} differs from [(\[eq:gradREDromano\])]{}. Impossibility of Explicit Regularization {#sec:impossibility} ---------------------------------------- For denoisers ${\ensuremath{\boldsymbol{f}}}(\cdot)$ that lack Jacobian symmetry (JS), [Lemma \[lem:gradREDromano\]]{} establishes that the gradient expression [(\[eq:gradREDromano\])]{} does not hold. Yet [(\[eq:gradREDromano\])]{} leads to the fixed-point condition [(\[eq:fpRED\])]{} on which all RED algorithms in [@Romano:JIS:17] are based. The fact that these algorithms work well in practice suggests that “$\nabla \rho({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$” is a desirable property for a regularizer $\rho({\ensuremath{\boldsymbol{x}}})$ to have. But the regularization ${\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ in [(\[eq:rhoRED\])]{} does not lead to this property when ${\ensuremath{\boldsymbol{f}}}(\cdot)$ lacks JS. Thus an important question is: > *Does there exist some other regularization $\rho(\cdot)$ for which $\nabla\rho({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$ when ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is non-JS?* The following theorem provides the answer. \[thm:impossible\] Suppose that denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ has a non-symmetric Jacobian. Then there exists no regularization $\rho(\cdot)$ for which $\nabla\rho({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$. To prove the theorem, we view ${\ensuremath{\boldsymbol{f}}}:{\ensuremath{\mathcal{X}}}\rightarrow{{\mathbb{R}}}^N$ as a vector field. Theorem 4.3.8 in [@Kantorovitz:Book:16] says that a vector field ${\ensuremath{\boldsymbol{f}}}$ is *conservative* if and only if there exists a continuously differentiable potential ${\overline}{\rho}:{\ensuremath{\mathcal{X}}}\rightarrow{{\mathbb{R}}}$ for which $\nabla{\overline}{\rho}={\ensuremath{\boldsymbol{f}}}$. Furthermore, Theorem 4.3.10 in [@Kantorovitz:Book:16] says that if ${\ensuremath{\boldsymbol{f}}}$ is conservative, then the Jacobian $J{\ensuremath{\boldsymbol{f}}}$ is symmetric. Thus, by the contrapositive, if the Jacobian $J{\ensuremath{\boldsymbol{f}}}$ is *not* symmetric, then no such potential ${\overline}{\rho}$ exists. 
To apply this result to our problem, we define $$\begin{aligned} \rho({\ensuremath{\boldsymbol{x}}}) {\triangleq}\frac{1}{2} \|{\ensuremath{\boldsymbol{x}}}\|^2 - {\overline}{\rho}({\ensuremath{\boldsymbol{x}}}) \end{aligned}$$ and notice that $$\begin{aligned} \nabla\rho({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}} - \nabla{\overline}{\rho}({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}} - {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) \label{eq:rhobar}.\end{aligned}$$ Thus, if $J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$ is non-symmetric, then $J[{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})]={\ensuremath{\boldsymbol{I}}}-J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$ is non-symmetric, which means that there exists no $\rho$ for which [(\[eq:rhobar\])]{} holds. Thus, the problem is not the specific form of ${\rho{_{\textsf{red}}}}(\cdot)$ in [(\[eq:rhoRED\])]{} but rather the broader pursuit of explicit regularization. We note that the notion of conservative vector fields was discussed in [@Sreehari:TCI:16 App. A] in the context of PnP algorithms, whereas here we discuss it in the context of RED. Analysis of Jacobian Symmetry {#sec:JS} ----------------------------- The previous sections motivate an important question: Do commonly-used image denoisers have sufficient JS? For some denoisers, JS can be studied analytically. For example, consider the “transform domain thresholding” (TDT) denoisers of the form $$\begin{aligned} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) {\triangleq}{\ensuremath{\boldsymbol{W}}}{^{\top}}{\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{Wx}}}) \label{eq:fTD} ,\end{aligned}$$ where ${\ensuremath{\boldsymbol{g}}}(\cdot)$ performs componentwise (e.g., soft or hard) thresholding and ${\ensuremath{\boldsymbol{W}}}$ is some transform, as occurs in the context of wavelet shrinkage [@Donoho:Bio:94], with or without cycle-spinning [@Coifman:Chap:95]. Using $g_n'(\cdot)$ to denote the derivative of $g_n(\cdot)$, we have $$\begin{aligned} \frac{\partial f_n({\ensuremath{\boldsymbol{x}}})}{\partial x_q} &= \sum_{i=1}^N w_{in} g_i'\Bigg(\sum_{j=1}^N w_{ij} x_j\Bigg) w_{iq} = \frac{\partial f_q({\ensuremath{\boldsymbol{x}}})}{\partial x_n} ,\end{aligned}$$ and so the Jacobian of ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is perfectly symmetric. Another class of denoisers with perfectly symmetric Jacobians are those that produce MAP or MMSE optimal ${\ensuremath{\Hat{\boldsymbol{x}}}}$ under some assumed prior ${{\widehat}{{p_{\text{\sf x}}}}}$. In the MAP case, ${\ensuremath{\Hat{\boldsymbol{x}}}}$ minimizes (over ${\ensuremath{\boldsymbol{x}}}$) the cost $c({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{r}}})=\frac{1}{2\nu}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{r}}}\|^2-\ln{{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}})$ for noisy input ${\ensuremath{\boldsymbol{r}}}$. 
If we define $\phi({\ensuremath{\boldsymbol{r}}}){\triangleq}\min_{{\ensuremath{\boldsymbol{x}}}} c({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{r}}})$, known as the Moreau-Yosida envelope of $-\ln{{\widehat}{{p_{\text{\sf x}}}}}$, then ${\ensuremath{\Hat{\boldsymbol{x}}}}={\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{r}}})={\ensuremath{\boldsymbol{r}}}-\nu\nabla\phi({\ensuremath{\boldsymbol{r}}})$, as discussed in [@Parikh:FTO:13] (See also [@Ong:18] for insightful discussions in the context of image denoising.) The elements in the Jacobian are therefore $[J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{r}}})]_{n,q} = \frac{\partial f_n({\ensuremath{\boldsymbol{r}}})}{\partial r_q} = \delta_{n-q} - \nu\frac{\partial^2 \phi({\ensuremath{\boldsymbol{r}}})}{\partial r_q\partial r_n}$, and so the Jacobian matrix is symmetric. In the MMSE case, we have that ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{r}}})={\ensuremath{\boldsymbol{r}}}-\nabla{\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}})$ for ${\rho_{\text{\sf TR}}}(\cdot)$ defined in [(\[eq:rhoTR\])]{} (see [Lemma \[lem:gradTR\]]{}), and so $[J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{r}}})]_{n,q} = \delta_{n-q} - \frac{\partial^2 {\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}})}{\partial r_q\partial r_n}$, again implying that the Jacobian is symmetric. But it is difficult to say anything about the Jacobian symmetry of *approximate* MAP or MMSE denoisers. Now let us consider the more general class of denoisers $$\begin{aligned} {\ensuremath{\boldsymbol{f}}}(x) &= {\ensuremath{\boldsymbol{W}}}({\ensuremath{\boldsymbol{x}}}){\ensuremath{\boldsymbol{x}}} \label{eq:fPL} ,\end{aligned}$$ sometimes called “pseudo-linear” [@Milanfar:SPM:13]. For simplicity, we assume that ${\ensuremath{\boldsymbol{W}}}(\cdot)$ is differentiable on ${\ensuremath{\mathcal{X}}}$. In this case, using the chain rule, we have $$\begin{aligned} \frac{\partial f_n({\ensuremath{\boldsymbol{x}}})}{\partial x_q} &= w_{nq}({\ensuremath{\boldsymbol{x}}}) + \sum_{i=1}^N \frac{\partial w_{ni}({\ensuremath{\boldsymbol{x}}})}{\partial x_q} x_i ,\end{aligned}$$ and so the following are sufficient conditions for Jacobian symmetry. 1. ${\ensuremath{\boldsymbol{W}}}({\ensuremath{\boldsymbol{x}}})$ is symmetric $\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}$, 2. $\sum_{i=1}^N \frac{\partial w_{ni}({\ensuremath{\boldsymbol{x}}})}{\partial x_q} x_i =\sum_{i=1}^N \frac{\partial w_{qi}({\ensuremath{\boldsymbol{x}}})}{\partial x_n} x_i ~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}}$. When ${\ensuremath{\boldsymbol{W}}}$ is ${\ensuremath{\boldsymbol{x}}}$-invariant (i.e., ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is linear) and symmetric, both of these conditions are satisfied. This latter case was exploited for RED in [@Teodoro:18]. The case of non-linear ${\ensuremath{\boldsymbol{W}}}(\cdot)$ is more complicated. Although ${\ensuremath{\boldsymbol{W}}}(\cdot)$ can be symmetrized (see [@Milanfar:JIS:13; @Milanfar:ICIP:16]), it is not clear whether the second condition above will be satisfied. 
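Before turning to numerical experiments, the TDT case [(\[eq:fTD\])]{} can also be checked directly: its Jacobian ${\ensuremath{\boldsymbol{W}}}{^{\top}}\mathrm{diag}\big(g'({\ensuremath{\boldsymbol{Wx}}})\big){\ensuremath{\boldsymbol{W}}}$ is symmetric by construction. The sketch below builds a small TDT denoiser from a random orthogonal transform and soft-thresholding; the random ${\ensuremath{\boldsymbol{W}}}$ is only a stand-in for the Haar transform used in the experiments that follow.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # stand-in orthogonal transform

def soft(t, tau=0.5):
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

def f_tdt(x, tau=0.5):
    """Transform-domain thresholding denoiser f(x) = W^T g(Wx)."""
    return W.T @ soft(W @ x, tau)

def jac_tdt(x, tau=0.5):
    """Analytic Jacobian W^T diag(g'(Wx)) W, symmetric by construction."""
    gprime = (np.abs(W @ x) > tau).astype(float)    # derivative of soft-thresholding
    return W.T @ (gprime[:, None] * W)

J = jac_tdt(rng.standard_normal(N))
print(np.allclose(J, J.T))   # True
```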
Jacobian Symmetry Experiments {#sec:gradREDnum} ----------------------------- For denoisers that do not admit a tractable analysis, we can still evaluate the Jacobian of ${\ensuremath{\boldsymbol{f}}}(\cdot)$ at ${\ensuremath{\boldsymbol{x}}}$ numerically via $$\begin{aligned} \frac{f_i({\ensuremath{\boldsymbol{x}}}+\epsilon{\ensuremath{\boldsymbol{e}}}_n)-f_i({\ensuremath{\boldsymbol{x}}}-\epsilon{\ensuremath{\boldsymbol{e}}}_n)}{2\epsilon} {\triangleq}\big[ \widehat{{J{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}}) \big]_{i,n} ,\end{aligned}$$ where ${\ensuremath{\boldsymbol{e}}}_n$ denotes the $n$th column of ${\ensuremath{\boldsymbol{I}}}_N$ and $\epsilon>0$ is small ($\epsilon=1\times10^{-3}$ in our experiments). For the purpose of quantifying JS, we define the normalized error metric $$\begin{aligned} e^J_{{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}}) {\triangleq}\frac{\big\|\widehat{{J{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}}) - [\widehat{{J{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}\big\|_F^2} {\|\widehat{{J{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}})\|_F^2} ,\end{aligned}$$ which should be nearly zero for a symmetric Jacobian. [Table \[tab:Jacobian\]]{} shows[^2] the average value of $e^J_{{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$ for $17$ different image patches[^3] of size $16\times 16$, using denoisers that assumed a noise variance of $25^2$. The denoisers tested were the TDT from [(\[eq:fTD\])]{} with the 2D Haar wavelet transform and soft-thresholding, the median filter (MF) [@Huang:TASSP:79] with a $3\times 3$ window, non-local means (NLM) [@Buades:CVPR:05], BM3D [@Dabov:TIP:07], TNRD [@Chen:TPAMI:17], and DnCNN [@Zhang:TIP:17]. [Table \[tab:Jacobian\]]{} shows that the Jacobians of all but the TDT denoiser are far from symmetric.

                                                                           TDT        MF     NLM     BM3D   TNRD     DnCNN
  ----------------------------------------------------------------------- ---------- ------ ------- ------ -------- --------
  $e^J_{{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$     5.36e-21   1.50   0.250   1.22   0.0378   0.0172

  : Average Jacobian-symmetry error on 16$\times$16 images \[tab:Jacobian\]

Jacobian symmetry is of secondary interest; what we really care about is the accuracy of the RED gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradRED\])]{}. To assess gradient accuracy, we numerically evaluated the gradient of ${\rho{_{\textsf{red}}}}(\cdot)$ at ${\ensuremath{\boldsymbol{x}}}$ using $$\begin{aligned} \frac{{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}+\epsilon{\ensuremath{\boldsymbol{e}}}_n)-{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}-\epsilon{\ensuremath{\boldsymbol{e}}}_n)}{2\epsilon} {\triangleq}\big[ \widehat{ \nabla {\rho{_{\textsf{red}}}}}({\ensuremath{\boldsymbol{x}}}) \big]_{n} \end{aligned}$$ and compared the result to the analytical expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradRED\])]{}. [Table \[tab:gradRED\]]{} reports the normalized gradient error $$\begin{aligned} e^\nabla_{{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}}) {\triangleq}\frac{\|\nabla{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})-\widehat{\nabla{\rho{_{\textsf{red}}}}}({\ensuremath{\boldsymbol{x}}})\|^2}{\|\widehat{\nabla{\rho{_{\textsf{red}}}}}({\ensuremath{\boldsymbol{x}}})\|^2}\end{aligned}$$ for the same $\epsilon$, images, and denoisers used in [Table \[tab:Jacobian\]]{}.
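The numerical Jacobian estimate and the symmetry metric $e^J_{{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$ above can be sketched in a few lines. The tiny 1-D median filter used here is only a stand-in so that the snippet runs on its own; its value will not match the 2-D results in [Table \[tab:Jacobian\]]{}.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-3):
    """Central-difference estimate of the Jacobian [Jf(x)]_{i,n}."""
    N = x.size
    J = np.empty((N, N))
    for n in range(N):
        e = np.zeros(N); e[n] = eps
        J[:, n] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def jacobian_symmetry_error(f, x, eps=1e-3):
    """Normalized error e^J_f(x) = ||Jhat - Jhat^T||_F^2 / ||Jhat||_F^2."""
    J = jacobian_fd(f, x, eps)
    return np.linalg.norm(J - J.T, 'fro')**2 / np.linalg.norm(J, 'fro')**2

def median3(x):
    """Tiny 1-D median filter (3-tap), a stand-in denoiser for illustration."""
    xp = np.pad(x, 1, mode='edge')
    return np.array([np.median(xp[i:i + 3]) for i in range(x.size)])

rng = np.random.default_rng(0)
print(jacobian_symmetry_error(median3, 255 * rng.random(16)))  # typically well above zero
```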
The results in [Table \[tab:gradRED\]]{} show that, for all tested denoisers, the numerical gradient $\widehat{\nabla{\rho{_{\textsf{red}}}}}(\cdot)$ closely matches the analytical expression for $\nabla{\rho{_{\textsf{red}}}}(\cdot)$ from [(\[eq:gradRED\])]{}, but not that from [(\[eq:gradREDromano\])]{}. The mismatch between $\widehat{\nabla{\rho{_{\textsf{red}}}}}(\cdot)$ and $\nabla{\rho{_{\textsf{red}}}}(\cdot)$ from [(\[eq:gradREDromano\])]{} is partly due to insufficient JS and partly due to insufficient LH, as we establish below. \[tab:gradRED\] Local Homogeneity Experiments {#sec:LH} ----------------------------- Recall that the TDT denoiser has a symmetric Jacobian, both theoretically and empirically. Yet [Table \[tab:gradRED\]]{} reports a disagreement between the $\nabla{\rho{_{\textsf{red}}}}(\cdot)$ expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradRED\])]{} for TDT. We now show that this disagreement is due to insufficient local homogeneity (LH). To do this, we introduce yet another RED gradient expression, $$\begin{aligned} \nabla{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) &{\stackrel{\text{\tiny\sf LH}}{=}}{\ensuremath{\boldsymbol{x}}} - \frac{1}{2}[J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}} - \frac{1}{2}[J{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}{\ensuremath{\boldsymbol{x}}} \label{eq:gradREDint} ,\end{aligned}$$ which results from combining [(\[eq:gradRED\])]{} with [Lemma \[lem:LH\]]{}. Here, ${\stackrel{\text{\tiny\sf LH}}{=}}$ indicates that [(\[eq:gradREDint\])]{} holds under LH. In contrast, the gradient expression [(\[eq:gradREDromano\])]{} holds under *both* LH and Jacobian symmetry, while the gradient expression [(\[eq:gradRED\])]{} holds in general (i.e., even in the absence of LH and/or Jacobian symmetry). We also introduce two normalized error metrics for LH, $$\begin{aligned} {e^{\text{\sf LH,1}}_{{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}}) &{\triangleq}\frac{\big\|{\ensuremath{\boldsymbol{f}}}((1+\epsilon){\ensuremath{\boldsymbol{x}}}) - (1+\epsilon){\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\big\|^2}{\|(1+\epsilon){\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\|^2} \label{eq:eLHb}\\ {e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}}) &{\triangleq}\frac{\big\|[\widehat{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\big\|^2} {\|{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\|^2} \label{eq:eLHa}.\end{aligned}$$ which should both be nearly zero for LH ${\ensuremath{\boldsymbol{f}}}(\cdot)$. Note that ${e^{\text{\sf LH,1}}_{{\ensuremath{\boldsymbol{f}}}}}$ quantifies LH according to definition [(\[eq:LH\])]{} and closely matches the numerical analysis of LH in [@Romano:JIS:17]. Meanwhile, ${e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}$ quantifies LH according to [Lemma \[lem:LH\]]{} and to how LH is actually used in the gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradREDint\])]{}. \[tab:LH\] The middle row of [Table \[tab:gradRED\]]{} reports the average gradient error of the gradient expression [(\[eq:gradREDint\])]{}, and [Table \[tab:LH\]]{} reports average LH error for the metrics ${e^{\text{\sf LH,1}}_{{\ensuremath{\boldsymbol{f}}}}}$ and ${e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}$. 
There we see that the average ${e^{\text{\sf LH,1}}_{{\ensuremath{\boldsymbol{f}}}}}$ error is small for all denoisers, consistent with the experiments in [@Romano:JIS:17]. But the average ${e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}$ error is several orders of magnitude larger (for all but the MF denoiser). We also note that the value of ${e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}$ for BM3D is several orders of magnitude higher than for the other denoisers. This result is consistent with [Fig. \[fig:cost\_figs\]]{}, which shows that the cost function associated with BM3D is much less smooth than that of the other denoisers. As discussed below, these seemingly small imperfections in LH have a significant effect on the RED gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradREDint\])]{}. Starting with the TDT denoiser, [Table \[tab:gradRED\]]{} shows that the gradient error for [(\[eq:gradREDint\])]{} is large, which can only be caused by insufficient LH. The insufficient LH is confirmed in [Table \[tab:LH\]]{}, which shows that the value of ${e^{\text{\sf LH,2}}_{{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}})$ for TDT is non-negligible, especially in comparison to the value for MF. Continuing with the MF denoiser, [Table \[tab:Jacobian\]]{} indicates that its Jacobian is far from symmetric, while [Table \[tab:LH\]]{} indicates that it is LH. The gradient results in [Table \[tab:gradRED\]]{} are consistent with these behaviors: the $\nabla{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ expression [(\[eq:gradREDint\])]{} is accurate on account of LH being satisfied, but the $\nabla{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ expression [(\[eq:gradREDromano\])]{} is inaccurate on account of a lack of JS. The results for the remaining denoisers NLM, BM3D, TNRD, and DnCNN show a common trend: they have non-trivial levels of *both* JS error (see [Table \[tab:Jacobian\]]{}) and LH error (see [Table \[tab:LH\]]{}). As a result, the gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradREDint\])]{} are *both* inaccurate (see [Table \[tab:gradRED\]]{}). In conclusion, the experiments in this section show that the RED gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradREDint\])]{} are very sensitive to small imperfections in LH. Although the experiments in [@Romano:JIS:17] suggested that many popular image denoisers are approximately LH, our experiments suggest that their levels of LH are insufficient to maintain the accuracy of the RED gradient expressions [(\[eq:gradREDromano\])]{} and [(\[eq:gradREDint\])]{}.
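For reference, the two LH metrics [(\[eq:eLHb\])]{} and [(\[eq:eLHa\])]{} can be computed for any denoiser callable as sketched below; the product $[\widehat{{J{\ensuremath{\boldsymbol{f}}}}}({\ensuremath{\boldsymbol{x}}})]{\ensuremath{\boldsymbol{x}}}$ is formed by the same central differences used earlier, and $\epsilon=10^{-3}$ matches the experiments above.

```python
import numpy as np

def lh_error_1(f, x, eps=1e-3):
    """e^{LH,1}_f(x): tests f((1+eps)x) = (1+eps) f(x) directly."""
    fx = f(x)
    num = np.linalg.norm(f((1 + eps) * x) - (1 + eps) * fx) ** 2
    return num / np.linalg.norm((1 + eps) * fx) ** 2

def lh_error_2(f, x, eps=1e-3):
    """e^{LH,2}_f(x): tests [Jf(x)] x = f(x) using a finite-difference Jacobian."""
    N = x.size
    Jx = np.zeros(N)
    for n in range(N):
        e = np.zeros(N); e[n] = eps
        Jx += (f(x + e) - f(x - e)) / (2 * eps) * x[n]   # accumulate [Jf(x)] x column by column
    fx = f(x)
    return np.linalg.norm(Jx - fx) ** 2 / np.linalg.norm(fx) ** 2
```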
Hessian and Convexity --------------------- From [(\[eq:partialRED\])]{}, the $(n,j)$th element of the Hessian of ${\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ equals $$\begin{aligned} \lefteqn{ \frac{\partial^2 {\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})}{\partial x_n \partial x_j} = \frac{\partial}{\partial x_j}\left( x_n - \frac{1}{2} f_n({\ensuremath{\boldsymbol{x}}}) - \frac{1}{2} \sum_{i=1}^N x_i \frac{\partial f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_n} \right) }\\ &= \delta_{n-j} - \frac{1}{2} \frac{\partial f_n({\ensuremath{\boldsymbol{x}}})}{\partial x_j} - \frac{1}{2} \frac{\partial f_j({\ensuremath{\boldsymbol{x}}})}{\partial x_n} - \frac{1}{2} x_j \frac{\partial^2 f_j({\ensuremath{\boldsymbol{x}}})}{\partial x_n\partial x_j} \qquad\quad \nonumber\\&\quad - \frac{1}{2} \sum_{i\neq j} x_i \frac{\partial^2 f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_n \partial x_j} \\ &= \delta_{n-j} - \frac{1}{2} \frac{\partial f_n({\ensuremath{\boldsymbol{x}}})}{\partial x_j} - \frac{1}{2} \frac{\partial f_j({\ensuremath{\boldsymbol{x}}})}{\partial x_n} - \frac{1}{2} \sum_{i=1}^N x_i \frac{\partial^2 f_i({\ensuremath{\boldsymbol{x}}})}{\partial x_n \partial x_j} .\end{aligned}$$ where $\delta_{k}=1$ if $k=0$ and otherwise $\delta_{k}=0$. Thus, the Hessian of ${\rho{_{\textsf{red}}}}(\cdot)$ at ${\ensuremath{\boldsymbol{x}}}$ equals $$\begin{aligned} H{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}) &= {\ensuremath{\boldsymbol{I}}} - \frac{1}{2} {J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}}) - \frac{1}{2} [{J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})]{^{\top}}- \frac{1}{2} \sum_{i=1}^N x_i Hf_i({\ensuremath{\boldsymbol{x}}}) \label{eq:hessRED} .\end{aligned}$$ This expression can be contrasted with the Hessian expression from [@Romano:JIS:17 (60)], which reads $$\begin{aligned} {\ensuremath{\boldsymbol{I}}} - {J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}}) \label{eq:hessREDromano} .\end{aligned}$$ Interestingly, [(\[eq:hessRED\])]{} differs from [(\[eq:hessREDromano\])]{} even when the denoiser has a symmetric Jacobian ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$. One implication is that, even if eigenvalues of ${J{\ensuremath{\boldsymbol{f}}}}({\ensuremath{\boldsymbol{x}}})$ are limited to the interval $[0,1]$, the Hessian $H{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ may not be positive semi-definite due to the last term in [(\[eq:hessRED\])]{}, with possibly negative implications on the convexity of ${\rho{_{\textsf{red}}}}(\cdot)$. That said, the RED algorithms do not actually minimize the variational objective $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ for common denoisers ${\ensuremath{\boldsymbol{f}}}(\cdot)$ (as established in [Section \[sec:trajectory\]]{}), and so the convexity of ${\rho{_{\textsf{red}}}}(\cdot)$ may not be important in practice. We investigate the convexity of ${\rho{_{\textsf{red}}}}(\cdot)$ numerically in [Section \[sec:cost\]]{}. Example RED-SD Trajectory {#sec:trajectory} ------------------------- We now provide an example of how the RED algorithms from [@Romano:JIS:17] do not necessarily minimize the variational objective $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda{\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$. 
For a trajectory $\{{\ensuremath{\boldsymbol{x}}}_k\}_{k=1}^K$ produced by the steepest-descent (SD) RED algorithm from [@Romano:JIS:17], [Fig. \[fig:poor\_behaved\]]{} plots, versus iteration $k$, the RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_k)$ from [(\[eq:CRED\])]{} and the error on the fixed-point condition [(\[eq:fpRED\])]{}, i.e., $\|{\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}_k)\|^2$ with $$\begin{aligned} {\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}) &{\triangleq}\frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}({\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}) +\lambda\big({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\big) \label{eq:gradCREDalg} .\end{aligned}$$ For this experiment, we used the $3\times 3$ median filter for ${\ensuremath{\boldsymbol{f}}}(\cdot)$, the Starfish image, and noisy measurements ${\ensuremath{\boldsymbol{y}}}={\ensuremath{\boldsymbol{x}}}+{\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\sigma^2{\ensuremath{\boldsymbol{I}}})$ with $\sigma^2=20$ (i.e., ${\ensuremath{\boldsymbol{A}}}={\ensuremath{\boldsymbol{I}}}$ in [(\[eq:CRED\])]{}).

![RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_k)$ and fixed-point error $\|{\ensuremath{\boldsymbol{A}}}{^{\top}}({\ensuremath{\boldsymbol{Ax}}}_k-{\ensuremath{\boldsymbol{y}}})/\sigma^2 +\lambda({\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k))\|^2$ versus iteration $k$ for $\{{\ensuremath{\boldsymbol{x}}}_k\}_{k=1}^K$ produced by the RED-SD algorithm from [@Romano:JIS:17]. Although the fixed-point condition is asymptotically satisfied, the RED cost does not decrease with $k$.[]{data-label="fig:poor_behaved"}](figures/poor_behaved.eps "fig:"){width="\figsizein"}

[Figure \[fig:poor\_behaved\]]{} shows that, although the RED-SD algorithm asymptotically satisfies the fixed-point condition [(\[eq:fpRED\])]{}, the RED cost function ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_k)$ does not decrease with $k$, as would be expected if the RED algorithms truly minimized the RED cost ${C{_{\textsf{red}}}}(\cdot)$. This behavior implies that optimization methods that monitor the objective value ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_k)$, e.g., for backtracking line-search (as in the FASTA algorithm [@Goldstein:14]), are difficult to apply in the context of RED. Visualization of RED Cost and RED-Algorithm Gradient {#sec:cost} ---------------------------------------------------- We now show visualizations of the RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ from [(\[eq:CRED\])]{} and the RED algorithm’s gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}})$ from [(\[eq:gradCREDalg\])]{}, for various image denoisers.
For this experiment, we used the Starfish image, noisy measurements ${\ensuremath{\boldsymbol{y}}}={\ensuremath{\boldsymbol{x}}}+{\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\sigma^2{\ensuremath{\boldsymbol{I}}})$ with $\sigma^2=100$ (i.e., ${\ensuremath{\boldsymbol{A}}}={\ensuremath{\boldsymbol{I}}}$ in [(\[eq:CRED\])]{} and [(\[eq:gradCREDalg\])]{}), and $\lambda$ optimized over a grid (of $20$ values logarithmically spaced between $0.0001$ and $1$) for each denoiser, so that the PSNR of the RED fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$ is maximized. [Figure \[fig:cost\_figs\]]{} plots the RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ and the RED algorithm’s gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}})$ for the TDT, MF, NLM, BM3D, TNRD, and DnCNN denoisers. To visualize these quantities in two dimensions, we plotted values of ${\ensuremath{\boldsymbol{x}}}$ centered at the RED fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$ and varying along two randomly chosen directions. The figure shows that the minimizer of ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}})$ does not coincide with the fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$, and that the RED cost ${C{_{\textsf{red}}}}(\cdot)$ is not always smooth or convex. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \[b\]\[b\]\[[0.75]{}\][TDT]{} \[Bl\]\[Bl\]\[[0.75]{}\][$\alpha$]{}\[t\]\[t\]\[[0.75]{}\][$\beta$]{} ![Contours show RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:CRED\])]{} and arrows show RED-algorithm 
gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:gradCREDalg\])]{} versus $(\alpha,\beta)$, where ${\ensuremath{\boldsymbol{x}}}_{\alpha,\beta}={\ensuremath{\Hat{\boldsymbol{x}}}}+\alpha{\ensuremath{\boldsymbol{e}}}_1+\beta{\ensuremath{\boldsymbol{e}}}_2$ with randomly chosen ${\ensuremath{\boldsymbol{e}}}_1$ and ${\ensuremath{\boldsymbol{e}}}_2$. The subplots show that the minimizer of ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ is not the fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$, and that ${C{_{\textsf{red}}}}(\cdot)$ may be non-smooth and/or non-convex.[]{data-label="fig:cost_figs"}](figures/cost_figs/TDT_cost.eps "fig:"){height="\hi"} \[b\]\[b\]\[[0.75]{}\][MF]{} \[Bl\]\[Bl\]\[[0.75]{}\][$\alpha$]{}\[t\]\[t\]\[[0.75]{}\][$\beta$]{} ![Contours show RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:CRED\])]{} and arrows show RED-algorithm gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:gradCREDalg\])]{} versus $(\alpha,\beta)$, where ${\ensuremath{\boldsymbol{x}}}_{\alpha,\beta}={\ensuremath{\Hat{\boldsymbol{x}}}}+\alpha{\ensuremath{\boldsymbol{e}}}_1+\beta{\ensuremath{\boldsymbol{e}}}_2$ with randomly chosen ${\ensuremath{\boldsymbol{e}}}_1$ and ${\ensuremath{\boldsymbol{e}}}_2$. The subplots show that the minimizer of ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ is not the fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$, and that ${C{_{\textsf{red}}}}(\cdot)$ may be non-smooth and/or non-convex.[]{data-label="fig:cost_figs"}](figures/cost_figs/MF_cost.eps "fig:"){height="\hi"} \[2mm\] \[b\]\[b\]\[[0.75]{}\][NLM]{} \[Bl\]\[Bl\]\[[0.75]{}\][$\alpha$]{}\[t\]\[t\]\[[0.75]{}\][$\beta$]{} ![Contours show RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:CRED\])]{} and arrows show RED-algorithm gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:gradCREDalg\])]{} versus $(\alpha,\beta)$, where ${\ensuremath{\boldsymbol{x}}}_{\alpha,\beta}={\ensuremath{\Hat{\boldsymbol{x}}}}+\alpha{\ensuremath{\boldsymbol{e}}}_1+\beta{\ensuremath{\boldsymbol{e}}}_2$ with randomly chosen ${\ensuremath{\boldsymbol{e}}}_1$ and ${\ensuremath{\boldsymbol{e}}}_2$. The subplots show that the minimizer of ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ is not the fixed-point ${\ensuremath{\Hat{\boldsymbol{x}}}}$, and that ${C{_{\textsf{red}}}}(\cdot)$ may be non-smooth and/or non-convex.[]{data-label="fig:cost_figs"}](figures/cost_figs/NLM_cost.eps "fig:"){height="\hi"} \[b\]\[b\]\[[0.75]{}\][BM3D]{} \[Bl\]\[Bl\]\[[0.75]{}\][$\alpha$]{}\[t\]\[t\]\[[0.75]{}\][$\beta$]{} \[2mm\] \[b\]\[b\]\[[0.75]{}\][TNRD]{} \[Bl\]\[Bl\]\[[0.75]{}\][$\alpha$]{}\[t\]\[t\]\[[0.75]{}\][$\beta$]{} ![Contours show RED cost ${C{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:CRED\])]{} and arrows show RED-algorithm gradient field ${\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{x}}}_{\alpha,\beta})$ from [(\[eq:gradCREDalg\])]{} versus $(\alpha,\beta)$, where ${\ensuremath{\boldsymbol{x}}}_{\alpha,\beta}={\ensuremath{\Hat{\boldsymbol{x}}}}+\alpha{\ensuremath{\boldsymbol{e}}}_1+\beta{\ensuremath{\boldsymbol{e}}}_2$ with randomly chosen ${\ensuremath{\boldsymbol{e}}}_1$ and ${\ensuremath{\boldsymbol{e}}}_2$. 
Score-Matching by Denoising {#sec:new} =========================== As discussed in [Section \[sec:RED\]]{}, the RED algorithms proposed in [@Romano:JIS:17] are explicitly based on gradient rule $$\begin{aligned}
\nabla\rho({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}) \label{eq:desired} .\end{aligned}$$ This rule appears to be useful, since these algorithms work very well in practice. But [Section \[sec:clarifications\]]{} established that ${\rho{_{\textsf{red}}}}(\cdot)$ from [(\[eq:rhoRED\])]{} does not usually satisfy [(\[eq:desired\])]{}. We are thus motivated to seek an alternative explanation for the RED algorithms. In this section, we explain them through a framework that we call *score-matching by denoising* (SMD). Tweedie Regularization {#sec:rhoTR} ---------------------- As a precursor to the SMD framework, we first propose a technique based on what we will call *Tweedie regularization*. Recall the measurement model [(\[eq:rxe\])]{} used to define the “denoising” problem, repeated in [(\[eq:rxe1\])]{} for convenience: $$\begin{aligned} {\ensuremath{\boldsymbol{r}}} = {\ensuremath{\boldsymbol{x}}}{^0}+{\ensuremath{\boldsymbol{e}}}, \quad {\ensuremath{\boldsymbol{e}}}\sim {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\nu{\ensuremath{\boldsymbol{I}}}) \label{eq:rxe1} .\end{aligned}$$ To avoid confusion, we will refer to ${\ensuremath{\boldsymbol{r}}}$ as “pseudo-measurements” and ${\ensuremath{\boldsymbol{y}}}$ as “measurements.” From [(\[eq:rxe1\])]{}, the likelihood of ${\ensuremath{\boldsymbol{x}}}{^0}$ is $p({\ensuremath{\boldsymbol{r}}}|{\ensuremath{\boldsymbol{x}}}{^0};\nu) = {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}{^0},\nu{\ensuremath{\boldsymbol{I}}})$. Now, suppose that we model the true image ${\ensuremath{\boldsymbol{x}}}{^0}$ as a realization of a random vector ${\ensuremath{\boldsymbol{x}}}$ with prior pdf ${{\widehat}{{p_{\text{\sf x}}}}}$. We write “${{\widehat}{{p_{\text{\sf x}}}}}$” to emphasize that the model distribution may differ from the true distribution ${p_{\text{\sf x}}}$ (i.e., the distribution from which the image ${\ensuremath{\boldsymbol{x}}}$ is actually drawn). Under this prior model, the MMSE denoiser of ${\ensuremath{\boldsymbol{x}}}$ from ${\ensuremath{\boldsymbol{r}}}$ is $$\begin{aligned} \operatorname{\mathbb{E}}_{{{\widehat}{{p_{\text{\sf x}}}}}}\{{\ensuremath{\boldsymbol{x}}}|{\ensuremath{\boldsymbol{r}}}\} &{\triangleq}{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}}) \label{eq:fhatmmse} ,\end{aligned}$$ and the likelihood of observing ${\ensuremath{\boldsymbol{r}}}$ is $$\begin{aligned} {{\widehat}{{p_{\text{\sf r}}}}}({\ensuremath{\boldsymbol{r}}};\nu) &{\triangleq}\int_{{{\mathbb{R}}}^N} p({\ensuremath{\boldsymbol{r}}}|{\ensuremath{\boldsymbol{x}}};\nu){{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}} \label{eq:prhat_def}\\ &= \int_{{{\mathbb{R}}}^N} {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}} \label{eq:pr} .\end{aligned}$$ We will now define the *Tweedie regularizer* (TR) as $$\begin{aligned} {\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu) &{\triangleq}- \nu \ln {{\widehat}{{p_{\text{\sf r}}}}}({\ensuremath{\boldsymbol{r}}};\nu) \label{eq:rhoTR} .\end{aligned}$$ As we now show, ${\rho_{\text{\sf TR}}}(\cdot)$ has the desired property [(\[eq:desired\])]{}.
\[lem:gradTR\] For ${\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu)$ defined in [(\[eq:rhoTR\])]{}, $$\begin{aligned} \nabla {\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu) = {\ensuremath{\boldsymbol{r}}}-{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}}) \label{eq:gradTR} ,\end{aligned}$$ where ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}(\cdot)$ is the MMSE denoiser from [(\[eq:fhatmmse\])]{}. Equation [(\[eq:gradTR\])]{} is a direct consequence of a classical result known as Tweedie’s formula [@Robbins:BSMSP:56; @Efron:JASA:11]. A short proof, from first principles, is now given for completeness. $$\begin{aligned} \lefteqn{ \frac{\partial}{\partial r_n} {\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu) = - \nu \frac{\partial}{\partial r_n} \ln \int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}} }\\ &= - \frac{\nu \int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) \frac{\partial}{\partial r_n} {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}}{\int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}} \\ &= \frac{\int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) (r_n-x_n) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}}{\int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}} \label{eq:deriv_gauss}\\ &= r_n - \int_{{{\mathbb{R}}}^N} x_n \frac{{{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) }{\int_{{{\mathbb{R}}}^N} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}') {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}',\nu{\ensuremath{\boldsymbol{I}}}) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}'} {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}} \qquad\\ &= r_n - \int_{{{\mathbb{R}}}^N} x_n \,{{\widehat}{{p_{\text{\sf x$|$r}}}}}({\ensuremath{\boldsymbol{x}}}|{\ensuremath{\boldsymbol{r}}};\nu) {\mathop{}\!\mathrm{d}}{\ensuremath{\boldsymbol{x}}}\\ &= r_n - [{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}})]_n \label{eq:deriv_rhoTR},\end{aligned}$$ where [(\[eq:deriv\_gauss\])]{} used $\frac{\partial}{\partial r_n} {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) = {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}},\nu{\ensuremath{\boldsymbol{I}}}) (x_n-r_n)/\nu$. 
Stacking [(\[eq:deriv\_rhoTR\])]{} for $n=1,\dots,N$ in a vector yields [(\[eq:gradTR\])]{}. Thus, if the TR regularizer ${\rho_{\text{\sf TR}}}(\cdot;\nu)$ is used in the optimization problem [(\[eq:CRED\])]{}, then the solution ${\ensuremath{\Hat{\boldsymbol{x}}}}$ must satisfy the fixed-point condition [(\[eq:fpRED\])]{} associated with the RED algorithms from [@Romano:JIS:17], albeit with an MMSE-type denoiser. This restriction will be removed using the SMD framework in [Section \[sec:SMD\]]{}. It is interesting to note that the gradient property [(\[eq:gradTR\])]{} holds even for non-homogeneous ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}(\cdot)$. This generality is important in applications under which ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}(\cdot)$ is known to lack LH. For example, with a binary image ${\ensuremath{\boldsymbol{x}}}\in\{0,1\}^N$ modeled by ${{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}})=\prod_{n=1}^N 0.5 (\delta(x_n)+\delta(x_n-1))$, the MMSE denoiser takes the form $[{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}})]_n = 0.5 + 0.5\tanh\big(\frac{2r_n-1}{4\nu}\big)$, which is not LH. Tweedie Regularization as Kernel Density Estimation {#sec:kde} --------------------------------------------------- We now show that TR arises naturally in the data-driven, non-parametric context through kernel-density estimation (KDE) [@Parzen:AMS:62]. Recall that, in most imaging applications, the true prior ${p_{\text{\sf x}}}$ is unknown, as is the true MMSE denoiser ${{\ensuremath{\boldsymbol{f}}}_{\text{\sf mmse},\nu}}(\cdot)$. There are several ways to proceed. One way is to design “by hand” an approximate prior ${{\widehat}{{p_{\text{\sf x}}}}}$ that leads to a computationally efficient denoiser ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}(\cdot)$. But, because this denoiser is not MMSE for ${\ensuremath{\boldsymbol{x}}}\sim{p_{\text{\sf x}}}$, the performance of the resulting estimates ${\ensuremath{\Hat{\boldsymbol{x}}}}$ will suffer relative to ${{\ensuremath{\boldsymbol{f}}}_{\text{\sf mmse},\nu}}$. Another way to proceed is to approximate the prior using a large corpus of training data $\{{\ensuremath{\boldsymbol{x}}}_t\}_{t=1}^T$. To this end, an approximate prior could be formed using the empirical estimate $$\begin{aligned} {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) &= \frac{1}{T}\sum_{t=1}^T \delta({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{x}}}_t) \label{eq:px_emp} ,\end{aligned}$$ but a more accurate match to the true prior ${p_{\text{\sf x}}}$ can be obtained using $$\begin{aligned} {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}};\nu) &= \frac{1}{T}\sum_{t=1}^T {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_t,\nu{\ensuremath{\boldsymbol{I}}}) \label{eq:px_parzen} \end{aligned}$$ with appropriately chosen $\nu>0$, a technique known as kernel density estimation (KDE) or Parzen windowing [@Parzen:AMS:62].
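As a small numerical check of [Lemma \[lem:gradTR\]]{} in this KDE setting, the sketch below forms the Parzen density from a handful of synthetic training points, builds the corresponding MMSE denoiser (a kernel-weighted average of the ${\ensuremath{\boldsymbol{x}}}_t$), and verifies by finite differences that $\nabla{\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu)={\ensuremath{\boldsymbol{r}}}-{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}})$. The dimension, training points, and $\nu$ are arbitrary illustrative choices, not quantities from the paper.

```python
# Sketch: numerically verify Tweedie's formula (Lemma gradTR) when the prior is the
# empirical distribution of a few training points, so that p_r is the KDE (Parzen) density.
import numpy as np

rng = np.random.default_rng(1)
N, T, nu = 5, 50, 0.3
X = rng.standard_normal((T, N))                 # synthetic "training images" x_t (rows)

def rho_TR(r):                                  # -nu * ln p_r(r), dropping an r-independent constant
    d2 = np.sum((X - r) ** 2, axis=1)           # ||r - x_t||^2 for every t
    return -nu * np.log(np.mean(np.exp(-d2 / (2 * nu))))

def f_mmse(r):                                  # MMSE denoiser under the empirical prior:
    d2 = np.sum((X - r) ** 2, axis=1)           # a Gaussian-kernel-weighted average of the x_t
    w = np.exp(-(d2 - d2.min()) / (2 * nu))     # shift exponent for numerical stability
    return (w / w.sum()) @ X

r = rng.standard_normal(N)
eps = 1e-5
grad_fd = np.array([(rho_TR(r + eps * np.eye(N)[n]) - rho_TR(r - eps * np.eye(N)[n])) / (2 * eps)
                    for n in range(N)])
print(np.allclose(grad_fd, r - f_mmse(r), atol=1e-6))   # Tweedie: grad rho_TR(r) = r - f_mmse(r)
```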
Note that if ${{\widetilde}{{p_{\text{\sf x}}}}}$ is used as a surrogate for ${p_{\text{\sf x}}}$, then the MAP optimization problem becomes $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} &= \arg\min_{{\ensuremath{\boldsymbol{r}}}} \frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ar}}}-{\ensuremath{\boldsymbol{y}}}\|^2 - \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{r}}};\nu) \label{eq:map_parzen} \\ &= \arg\min_{{\ensuremath{\boldsymbol{r}}}} \frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ar}}}-{\ensuremath{\boldsymbol{y}}}\|^2 + \lambda {\rho_{\text{\sf TR}}}({\ensuremath{\boldsymbol{r}}};\nu) \text{~for~}\lambda=\frac{1}{\nu} \label{eq:map_TR} ,\end{aligned}$$ with ${\rho_{\text{\sf TR}}}(\cdot;\nu)$ from [(\[eq:prhat\_def\])]{}-[(\[eq:rhoTR\])]{} constructed using ${{\widehat}{{p_{\text{\sf x}}}}}$ from [(\[eq:px\_emp\])]{}. In summary, TR arises naturally in the data-driven approach to image recovery when KDE is used to smooth the empirical prior. Score-Matching by Denoising {#sec:SMD} --------------------------- A limitation of the above TR framework is that it results in denoisers ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}$ with symmetric Jacobians. (Recall the discussion of MMSE denoisers in [Section \[sec:JS\]]{}.) To justify the use of RED algorithms with non-symmetric Jacobians, we introduce the *score-matching by denoising* (SMD) framework in this section. Let us continue with the KDE-based MAP estimation problem [(\[eq:map\_parzen\])]{}. Note that ${\ensuremath{\Hat{\boldsymbol{x}}}}$ from [(\[eq:map\_parzen\])]{} zeros the gradient of the MAP optimization objective and thus obeys the fixed-point equation $$\begin{aligned} \frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}({\ensuremath{\boldsymbol{A}}}{\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{y}}}) - \nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\Hat{\boldsymbol{x}}}};\nu) &= {\ensuremath{\boldsymbol{0}}} \label{eq:fp_parzen} .\end{aligned}$$ In principle, ${\ensuremath{\Hat{\boldsymbol{x}}}}$ in [(\[eq:fp\_parzen\])]{} could be found using gradient descent or similar techniques. However, computation of the gradient $$\begin{aligned} \nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{r}}};\nu) &= \frac{\nabla {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{r}}};\nu)}{{{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{r}}};\nu)} = \frac{\sum_{t=1}^T ({\ensuremath{\boldsymbol{x}}}_t-{\ensuremath{\boldsymbol{r}}}) {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}_t,\nu{\ensuremath{\boldsymbol{I}}})} {\nu\sum_{t=1}^T {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}_t,\nu{\ensuremath{\boldsymbol{I}}})} \label{eq:score_parzen} \end{aligned}$$ is too expensive for the values of $T$ typically needed to generate a good image prior ${{\widetilde}{{p_{\text{\sf x}}}}}$. 
A tractable alternative is suggested by the fact that $$\begin{aligned} \nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{r}}};\nu) &= \frac{{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}}) -{\ensuremath{\boldsymbol{r}}}}{\nu} \label{eq:score_parzen2} \\ \text{for~} {{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}}) &= \frac{\sum_{t=1}^T {\ensuremath{\boldsymbol{x}}}_t {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}_t,\nu{\ensuremath{\boldsymbol{I}}})} {\sum_{t=1}^T {\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{r}}};{\ensuremath{\boldsymbol{x}}}_t,\nu{\ensuremath{\boldsymbol{I}}})} ,\end{aligned}$$ where ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{r}}})$ is the MMSE estimator of ${\ensuremath{\boldsymbol{x}}}\sim{{\widehat}{{p_{\text{\sf x}}}}}$ from ${\ensuremath{\boldsymbol{r}}}={\ensuremath{\boldsymbol{x}}}+{\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\nu{\ensuremath{\boldsymbol{I}}})$. In particular, if we can construct a good approximation to ${{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}(\cdot)$ using a denoiser ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}(\cdot)$ in a computationally efficient function class ${\ensuremath{\mathcal{F}}}{\triangleq}\{{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}: {\ensuremath{\boldsymbol{\theta}}}\in{\ensuremath{\boldsymbol{\Theta}}}\}$, then we can efficiently approximate the MAP problem [(\[eq:map\_parzen\])]{}. This approach can be formalized using the framework of *score matching* [@Hyvarinen:JMLR:05], which aims to approximate the “score” (i.e., the gradient of the log-prior) rather than the prior itself. For example, suppose that we want to approximate the score $\nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}(\cdot;\nu)$. For this, Hyv[ä]{}rinen [@Hyvarinen:JMLR:05] suggested to first find the best mean-square fit among a set of computationally efficient functions ${\ensuremath{\boldsymbol{\psi}}}(\cdot;{\ensuremath{\boldsymbol{\theta}}})$, i.e., find $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\theta}}}} &= \arg\min_{{\ensuremath{\boldsymbol{\theta}}}} \operatorname{\mathbb{E}}_{{{\widetilde}{{p_{\text{\sf x}}}}}} \left\{\left\| {\ensuremath{\boldsymbol{\psi}}}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{\theta}}}) - \nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}};\nu) \right\|^2 \right\} \label{eq:score_matching} ,\end{aligned}$$ and then to approximate the score $\nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}(\cdot;\nu)$ by ${\ensuremath{\boldsymbol{\psi}}}(\cdot;{\ensuremath{\Hat{\boldsymbol{\theta}}}})$.
Later, in the context of denoising autoencoders, Vincent [@Vincent:NC:11] showed that if one chooses $$\begin{aligned} {\ensuremath{\boldsymbol{\psi}}}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{\theta}}}) &= \frac{{{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}({\ensuremath{\boldsymbol{x}}})-{\ensuremath{\boldsymbol{x}}}}{\nu} \label{eq:psi}\end{aligned}$$ for some function ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}(\cdot)\in{\ensuremath{\mathcal{F}}}$, then ${\ensuremath{\Hat{\boldsymbol{\theta}}}}$ from [(\[eq:score\_matching\])]{} can be equivalently written as $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\theta}}}} &= \arg\min_{{\ensuremath{\boldsymbol{\theta}}}} \operatorname{\mathbb{E}}_{{{\widehat}{{p_{\text{\sf x}}}}}} \left\{ \left\| {{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}\big({\ensuremath{\boldsymbol{x}}}+{\ensuremath{\mathcal{N}}}(0,\nu{\ensuremath{\boldsymbol{I}}})\big) - {\ensuremath{\boldsymbol{x}}} \right\|^2\right\} .\end{aligned}$$ In this case, ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\Hat{\boldsymbol{\theta}}}}}}(\cdot)$ is the MSE-optimal denoiser, averaged over ${{\widehat}{{p_{\text{\sf x}}}}}$ and constrained to the function class ${\ensuremath{\mathcal{F}}}$. Note that the denoiser approximation error can be directly connected to the score-matching error as follows. For any denoiser ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}(\cdot)$ and any input ${\ensuremath{\boldsymbol{x}}}$, $$\begin{aligned} \lefteqn{ \|{{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}({\ensuremath{\boldsymbol{x}}})-{{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{x}}})\|^2 }\nonumber\\ &=\nu^2\left\|\frac{{{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}({\ensuremath{\boldsymbol{x}}})-{\ensuremath{\boldsymbol{x}}}}{\nu} - \nabla\ln{{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}};\nu)\right\|^2 \label{eq:ferr1}\\ &=\nu^2\left\|{\ensuremath{\boldsymbol{\psi}}}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{\theta}}}) - \nabla\ln{{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}};\nu)\right\|^2 \label{eq:ferr2}\end{aligned}$$ where [(\[eq:ferr1\])]{} follows from [(\[eq:score\_parzen2\])]{} and [(\[eq:ferr2\])]{} follows from [(\[eq:psi\])]{}. Thus, matching the score is directly related to matching the MMSE denoiser. Plugging the score approximation [(\[eq:psi\])]{} into the fixed-point condition [(\[eq:fp\_parzen\])]{}, we get $$\begin{aligned} \frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}({\ensuremath{\boldsymbol{A}}}{\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{y}}}) + \lambda\big( {\ensuremath{\Hat{\boldsymbol{x}}}} - {{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}({\ensuremath{\Hat{\boldsymbol{x}}}}) \big) &= {\ensuremath{\boldsymbol{0}}} \text{~for~}\lambda=\frac{1}{\nu} \label{eq:fpRSS},\end{aligned}$$ which matches the fixed-point condition [(\[eq:fpRED\])]{} of the RED algorithms from [@Romano:JIS:17]. Here we emphasize that ${\ensuremath{\mathcal{F}}}$ may be constructed in such a way that ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}(\cdot)$ has a non-symmetric Jacobian, which is the case for many state-of-the-art denoisers. Also, ${\ensuremath{\boldsymbol{\theta}}}$ does not need to be optimized for [(\[eq:fpRSS\])]{} to hold. 
Finally, ${{\widehat}{{p_{\text{\sf x}}}}}$ need not be the empirical prior [(\[eq:px\_emp\])]{}; it can be any chosen prior [@Vincent:NC:11]. Thus, the score-matching-by-denoising (SMD) framework offers an explanation of the RED algorithms from [@Romano:JIS:17] that holds for generic denoisers ${{\ensuremath{\boldsymbol{f}}}_{{\ensuremath{\boldsymbol{\theta}}}}}(\cdot)$, whether or not they have symmetric Jacobians, are locally homogeneous, or are MMSE. Furthermore, it suggests a rationale for choosing the regularization weight $\lambda$ and, in the context of KDE, the denoiser variance $\nu$. Relation to Existing Work ------------------------- Tweedie’s formula [(\[eq:gradTR\])]{} has connections to Stein’s Unbiased Risk Estimation (SURE) [@Stein:AS:81], as discussed in, e.g., [@Luisier:Diss:10 Thm. 2] and [@Raphan:NC:11 Eq. (2.4)]. SURE has been used for image denoising in, e.g., [@Blu:TIP:07]. Tweedie’s formula was also used in [@Bigdeli:17] to interpret autoencoding-based image priors. In our work, Tweedie’s formula is used to provide an interpretation for the RED algorithms through the construction of the explicit regularizer [(\[eq:rhoTR\])]{} and the approximation of the resulting fixed-point equation [(\[eq:fp\_parzen\])]{} via score matching. Recently, Alain and Bengio [@Alain:JMLR:14] studied contractive auto-encoders, a type of autoencoder that minimizes squared reconstruction error plus a penalty that tries to make the autoencoder as simple as possible. While previous works such as [@Ranzato:NIPS:08] conjectured that such auto-encoders minimize an energy function, Alain and Bengio showed that they actually minimize the norm of a score (i.e., match a score to zero). Furthermore, they showed that, when the coder and decoder do not share the same weights, it is not possible to define a valid energy function because the Jacobian of the reconstruction function is not symmetric. The results in [@Alain:JMLR:14] parallel those in this paper, except that they focus on auto-encoders while we focus on variational image recovery. Another small difference is that [@Alain:JMLR:14] uses the small-$\nu$ approximation $$\begin{aligned} {{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}} + \nu \nabla \ln {{\widehat}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) + o(\nu),\end{aligned}$$ whereas we use the exact (Tweedie’s) relationship [(\[eq:gradTR\])]{}, i.e., $$\begin{aligned} {{\ensuremath{\Hat{\boldsymbol{f}}}}_{\text{\sf mmse},\nu}}({\ensuremath{\boldsymbol{x}}}) = {\ensuremath{\boldsymbol{x}}} + \nu \nabla \ln {{\widetilde}{{p_{\text{\sf x}}}}}({\ensuremath{\boldsymbol{x}}}) ,\end{aligned}$$ where ${{\widetilde}{{p_{\text{\sf x}}}}}$ is the “Gaussian blurred” version of ${{\widehat}{{p_{\text{\sf x}}}}}$ from [(\[eq:pr\])]{}. Fast RED Algorithms {#sec:algorithms} =================== In [@Romano:JIS:17], Romano et al. proposed several ways to solve the fixed-point equation [(\[eq:fpRED\])]{}. Throughout our paper, we have been referring to these methods as “RED algorithms.” In this section, we provide new interpretations of the RED-ADMM and RED-FP algorithms from [@Romano:JIS:17] and we propose new RED algorithms based on accelerated proximal gradient methods. RED-ADMM -------- The ADMM approach was summarized in [Algorithm \[alg:ADMM\]]{} for an arbitrary regularizer $\rho(\cdot)$.
To apply ADMM to RED, [line \[line:ADMM\_v\_update\]]{} of [Algorithm \[alg:ADMM\]]{}, known as the “proximal update,” must be specialized to the case where $\rho(\cdot)$ obeys [(\[eq:gradREDromano\])]{} for some denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$. To do this, Romano et al. [@Romano:JIS:17] proposed the following. Because $\rho(\cdot)$ is differentiable, the proximal solution ${\ensuremath{\boldsymbol{v}}}_k$ must obey the fixed-point relationship $$\begin{aligned} {\ensuremath{\boldsymbol{0}}} &= \lambda \nabla\rho({\ensuremath{\boldsymbol{v}}}_k) + \beta ({\ensuremath{\boldsymbol{v}}}_k - {\ensuremath{\boldsymbol{x}}}_k - {\ensuremath{\boldsymbol{u}}}_{k-1}) \\ &= \lambda \big({\ensuremath{\boldsymbol{v}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{v}}}_k)\big) + \beta ({\ensuremath{\boldsymbol{v}}}_k - {\ensuremath{\boldsymbol{x}}}_k - {\ensuremath{\boldsymbol{u}}}_{k-1}) \\ \Leftrightarrow~ {\ensuremath{\boldsymbol{v}}}_k &= \frac{\lambda}{\lambda+\beta} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{v}}}_k) + \frac{\beta}{\lambda+\beta} ({\ensuremath{\boldsymbol{x}}}_k + {\ensuremath{\boldsymbol{u}}}_{k-1}) \label{eq:proxRED} .\end{aligned}$$ An approximation to ${\ensuremath{\boldsymbol{v}}}_k$ can thus be obtained by iterating $$\begin{aligned} {\ensuremath{\boldsymbol{z}}}_{i} &= \frac{\lambda}{\lambda+\beta} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{z}}}_{i-1}) + \frac{\beta}{\lambda+\beta} ({\ensuremath{\boldsymbol{x}}}_k + {\ensuremath{\boldsymbol{u}}}_{k-1}) \label{eq:proxREDinexact} \end{aligned}$$ over $i=1,\dots,I$ with sufficiently large $I$, initialized at ${\ensuremath{\boldsymbol{z}}}_0={\ensuremath{\boldsymbol{v}}}_{k-1}$. This procedure is detailed in lines \[line:RED\_ADMM\_z\_init\]-\[line:RED\_ADMM\_z\_end\] of [Algorithm \[alg:RED\_ADMM\]]{}. The overall algorithm is known as RED-ADMM.

[Algorithm \[alg:RED\_ADMM\]: RED-ADMM.]

Inexact RED-ADMM ---------------- [Algorithm \[alg:RED\_ADMM\]]{} gives a faithful implementation of ADMM when the number of inner iterations, $I$, is large. But using many inner iterations may be impractical when the denoiser is computationally expensive, as in the case of BM3D or TNRD. Furthermore, the use of many inner iterations may not be necessary. For example, [Fig. \[fig:ADMM\_time\_log\]]{} plots PSNR trajectories versus runtime for TNRD-based RED-ADMM with $I=1,2,3,4$ inner iterations. For this experiment, we used the deblurring task described in [Section \[sec:compare\]]{}, but similar behaviors can be observed in other applications of RED. [Figure \[fig:ADMM\_time\_log\]]{} suggests that $I=1$ inner iterations gives the fastest convergence. Note that [@Romano:JIS:17] also used $I=1$ when implementing RED-ADMM.

[Figure \[fig:ADMM\_time\_log\]: PSNR versus runtime for RED-ADMM with TNRD denoising and $I$ inner iterations.]

[Algorithm \[alg:RED\_ADMM\_inexact\]: inexact RED-ADMM ($I=1$).]

With $I=1$ inner iterations, RED-ADMM simplifies down to the 3-step iteration summarized in [Algorithm \[alg:RED\_ADMM\_inexact\]]{}.
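A minimal numpy sketch of this three-step iteration is given below. It assumes the quadratic loss $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})=\frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2$ with ${\ensuremath{\boldsymbol{A}}}={\ensuremath{\boldsymbol{I}}}$, so that the $x$-update has a closed form, and substitutes a median filter for ${\ensuremath{\boldsymbol{f}}}(\cdot)$; the image and parameter values are illustrative, not the settings used in [Section \[sec:compare\]]{}.

```python
# Sketch: inexact RED-ADMM with a single inner iteration (I = 1).
# Assumptions: quadratic loss with A = I (closed-form x-update) and a median-filter
# stand-in denoiser; the v-update is one pass of (eq:proxREDinexact).
import numpy as np
from scipy.ndimage import median_filter

def f(x):                                          # stand-in denoiser
    return median_filter(x, size=3)

def red_admm_i1(y, sig2, lam, beta, iters=100):
    x, v, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        # x-update: argmin_x ||x - y||^2 / (2 sig2) + (beta/2) ||x - v + u||^2   (A = I)
        x = (y / sig2 + beta * (v - u)) / (1.0 / sig2 + beta)
        # v-update: one fixed-point pass of (eq:proxREDinexact), initialized at the previous v
        v = lam / (lam + beta) * f(v) + beta / (lam + beta) * (x + u)
        # u-update: scaled dual ascent
        u = u + x - v
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(0, 255, (64, 64))                 # placeholder image
y = x0 + rng.normal(0.0, 10.0, x0.shape)           # noisy measurements (A = I)
xhat = red_admm_i1(y, sig2=100.0, lam=0.02, beta=0.001, iters=50)
```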
Since [Algorithm \[alg:RED\_ADMM\_inexact\]]{} looks quite different than standard ADMM (recall [Algorithm \[alg:ADMM\]]{}), one might wonder whether there exists another interpretation of [Algorithm \[alg:RED\_ADMM\_inexact\]]{}. Noting that [line \[line:ADMM2\_v\_update\]]{} can be rewritten as $$\begin{aligned} {\ensuremath{\boldsymbol{v}}}_k &={\ensuremath{\boldsymbol{v}}}_{k-1}-\frac{1}{\lambda+\beta}\big[\lambda \nabla\rho({\ensuremath{\boldsymbol{v}}}_{k-1}) +\beta({\ensuremath{\boldsymbol{v}}}_{k-1}-{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{u}}}_{k-1})\big] \\ &={\ensuremath{\boldsymbol{v}}}_{k-1}-\frac{1}{\lambda+\beta} \nabla \left[ \lambda \rho({\ensuremath{\boldsymbol{v}}}) + \frac{\beta}{2}\|{\ensuremath{\boldsymbol{v}}} - {\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{u}}}_{k-1}\|^2 \right] _{{\ensuremath{\boldsymbol{v}}}={\ensuremath{\boldsymbol{v}}}_{k-1}}\end{aligned}$$ we see that the $I=1$ version of inexact RED-ADMM replaces the proximal step with a gradient-descent step under stepsize $1/(\lambda+\beta)$. Thus the algorithm is reminiscent of the proximal gradient (PG) algorithm [@Beck:Chap:09; @Combettes:Chap:11]. We will discuss PG further in the sequel. Majorization-Minimization and Proximal-Gradient RED {#sec:MM} --------------------------------------------------- We now propose a proximal-gradient approach inspired by *majorization minimization* (MM) [@Sun:TSP:17]. As proposed in [@Figueiredo:TIP:07], we use a quadratic upper-bound, $$\begin{aligned} {\overline}{\rho}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_k) &{\triangleq}\rho({\ensuremath{\boldsymbol{x}}}_{k}) + [\nabla\rho({\ensuremath{\boldsymbol{x}}}_{k})]{^{\top}}\big({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{x}}}_{k}\big) + \frac{L}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{x}}}_{k} \|}}_2^2 \label{eq:rhobound} ,\end{aligned}$$ on the regularizer $\rho({\ensuremath{\boldsymbol{x}}})$, in place of $\rho({\ensuremath{\boldsymbol{x}}})$ itself, at the $k$th algorithm iteration. Note that if $\rho(\cdot)$ is convex and $\nabla\rho(\cdot)$ is $L_\rho$-Lipschitz, then ${\overline}{\rho}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_k)$ “majorizes” $\rho({\ensuremath{\boldsymbol{x}}})$ at ${\ensuremath{\boldsymbol{x}}}_k$ when $L\geq L_\rho$, i.e., $$\begin{aligned} {\overline}{\rho}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_k) &\geq \rho({\ensuremath{\boldsymbol{x}}})~\forall {\ensuremath{\boldsymbol{x}}}\in{\ensuremath{\mathcal{X}}} \\ {\overline}{\rho}({\ensuremath{\boldsymbol{x}}}_k;{\ensuremath{\boldsymbol{x}}}_k) &= \rho({\ensuremath{\boldsymbol{x}}}_k) .\end{aligned}$$ The majorized objective can then be minimized using the *proximal gradient* (PG) algorithm [@Beck:Chap:09; @Combettes:Chap:11] (also known as forward-backward splitting) as follows. 
From [(\[eq:rhobound\])]{}, note that the majorized objective can be written as $$\begin{aligned} \lefteqn{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\lambda{\overline}{\rho}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_k) }\nonumber\\ &=& \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+ \frac{\lambda L}{2}\left\| {\ensuremath{\boldsymbol{x}}} - \left({\ensuremath{\boldsymbol{x}}}_k- \frac{1}{L}\nabla\rho({\ensuremath{\boldsymbol{x}}}_k)\right) \right\|^2 + \text{const} \\ &=& \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+ \frac{\lambda L}{2}\bigg\| {\ensuremath{\boldsymbol{x}}} - \underbrace{ \bigg({\ensuremath{\boldsymbol{x}}}_k- \frac{1}{L}\big({\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k)\big) \bigg) }_{\displaystyle {\triangleq}{\ensuremath{\boldsymbol{v}}}_k} \bigg\|^2 + \text{const} , \nonumber\\[-4.5mm] \label{eq:objPG}\end{aligned}$$ where [(\[eq:objPG\])]{} follows from assuming [(\[eq:desired\])]{}, which is the basis for all RED algorithms. The RED-PG algorithm then alternately updates ${\ensuremath{\boldsymbol{v}}}_k$ as per the gradient step in [(\[eq:objPG\])]{} and updates ${\ensuremath{\boldsymbol{x}}}_{k+1}$ according to the proximal step $$\begin{aligned} {\ensuremath{\boldsymbol{x}}}_{k+1} &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\frac{\lambda L}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}_k \|}}^2 \right\} ,\end{aligned}$$ as summarized in [Algorithm \[alg:RED\_PG\]]{}. Convergence is guaranteed if $L\geq L_\rho$; see [@Beck:Chap:09; @Combettes:Chap:11] for details.

[Algorithm \[alg:RED\_PG\]: RED-PG.]

We now show that RED-PG with $L=1$ is identical to the “fixed point” (FP) RED algorithm proposed in [@Romano:JIS:17]. First, notice from [Algorithm \[alg:RED\_PG\]]{} that ${\ensuremath{\boldsymbol{v}}}_k={\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k)$ when $L=1$, in which case $$\begin{aligned} {\ensuremath{\boldsymbol{x}}}_k &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{ \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})+\frac{\lambda}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_{k-1}) \|}}^2 \right\} \label{eq:RED_FP}.\end{aligned}$$ For the quadratic loss $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})=\frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2$, [(\[eq:RED\_FP\])]{} becomes $$\begin{aligned} {\ensuremath{\boldsymbol{x}}}_k &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{ \frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{Ax}}}-{\ensuremath{\boldsymbol{y}}}\|^2 +\frac{\lambda}{2}{\ensuremath{\| {\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_{k-1}) \|}}^2 \right\} \\ &= \Big(\frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}{\ensuremath{\boldsymbol{A}}}+\lambda{\ensuremath{\boldsymbol{I}}}\Big)^{-1} \Big(\frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}{^{\top}}{\ensuremath{\boldsymbol{y}}} + \lambda{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_{k-1})\Big) \label{eq:RED_FPQ},\end{aligned}$$ which is exactly the RED-FP update [@Romano:JIS:17 (37)].
Thus, [(\[eq:RED\_FP\])]{} generalizes [@Romano:JIS:17 (37)] to possibly non-quadratic[^4] loss $\ell(\cdot;{\ensuremath{\boldsymbol{y}}})$, and RED-PG generalizes RED-FP to arbitrary $L>0$. More importantly, the PG framework facilitates algorithmic acceleration, as we describe below. The RED-PG and inexact RED-ADMM-$I\!=\!1$ algorithms show interesting similarities: both alternate a proximal update on the loss with a gradient update on the regularization, where the latter term manifests as a convex combination between the denoiser output and another term. The difference is that RED-ADMM-$I\!=\!1$ includes an extra state variable, ${\ensuremath{\boldsymbol{u}}}_k$. The experiments in [Section \[sec:compare\]]{} suggest that this extra state variable is not necessarily advantageous. Dynamic RED-PG {#sec:DPG} -------------- Recalling from [(\[eq:objPG\])]{} that $1/L$ acts as a stepsize in the PG gradient step, it may be possible to speed up PG by decreasing $L$, although making $L$ too small can prevent convergence. If $\rho(\cdot)$ were known, then a line search could be used, at each iteration $k$, to find the smallest value of $L$ that guarantees the majorization of $\rho({\ensuremath{\boldsymbol{x}}})$ by ${\overline}{\rho}({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{x}}}_k)$ [@Beck:Chap:09]. However, with a non-LH or non-JS denoiser, it is not possible to evaluate $\rho(\cdot)$, preventing such a line search. We thus propose to vary $L_k$ (i.e., the value of $L$ at iteration $k$) according to a fixed schedule. In particular, we propose to select $L_0$ and $L_\infty$, and smoothly interpolate between them at intermediate iterations $k$. One interpolation scheme that works well in practice is summarized in [line \[line:DPG\_Lk\_update\]]{} of [Algorithm \[alg:RED\_DPG\]]{}. We refer to this approach as “dynamic PG” (DPG). The numerical experiments in [Section \[sec:compare\]]{} suggest that, with appropriate selection of $L_0$ and $L_\infty$, RED-DPG can be significantly faster than RED-FP.

[Algorithm \[alg:RED\_DPG\]: RED-DPG (dynamic PG).]

Accelerated RED-PG {#sec:APG} ------------------ Another well-known approach to speeding up PG is to apply momentum to the ${\ensuremath{\boldsymbol{v}}}_k$ term in [Algorithm \[alg:RED\_PG\]]{} [@Beck:Chap:09], often known as “acceleration.” An accelerated PG (APG) approach to RED is detailed in [Algorithm \[alg:RED\_APG\]]{}. There, the momentum in [line \[line:APG\_z\_update\]]{} takes the same form as in FISTA [@Beck:JIS:09]. The numerical experiments in [Section \[sec:compare\]]{} suggest that RED-APG is the fastest among the RED algorithms discussed above.

[Algorithm \[alg:RED\_APG\]: RED-APG (accelerated PG).]

By leveraging the principle of vector extrapolation (VE) [@Sidi:Book:17], a different approach to accelerating RED algorithms was recently proposed in [@Hong:18]. Algorithmically, the approach in [@Hong:18] is much more complicated than the PG-DPG and PG-APG methods proposed above. In fact, we have been unable to arrive at an implementation of [@Hong:18] that reproduces the results in that paper, and the authors have not been willing to share their implementation with us. Thus, we cannot comment further on the difference in performance between our PG-DPG and PG-APG schemes and the one in [@Hong:18].
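As a concrete reference for the accelerated variant, the following is a rough numpy sketch of a FISTA-style RED-APG iteration under the same simplifying assumptions used earlier (quadratic loss with ${\ensuremath{\boldsymbol{A}}}={\ensuremath{\boldsymbol{I}}}$ and a median-filter stand-in denoiser). The update ordering shown (proximal step on the loss, momentum weight, extrapolation, gradient step on the regularizer) is our reading of the description above; [Algorithm \[alg:RED\_APG\]]{} may differ in detail, and a DPG-style variant would simply replace the fixed $L$ with an iteration-dependent $L_k$.

```python
# Sketch: an accelerated (FISTA-style) RED-PG iteration, assuming A = I so the proximal
# step on the loss is in closed form, and a median-filter stand-in denoiser. This is an
# illustration of the idea, not the exact Algorithm [alg:RED_APG].
import numpy as np
from scipy.ndimage import median_filter

def f(x):                                              # stand-in denoiser
    return median_filter(x, size=3)

def prox_loss(v, y, sig2, lam, L):
    # argmin_x ||x - y||^2 / (2 sig2) + (lam L / 2) ||x - v||^2   (A = I)
    return (y / sig2 + lam * L * v) / (1.0 / sig2 + lam * L)

def red_apg(y, sig2, lam, L=1.0, iters=100):
    x_prev, v, t_prev = y.copy(), y.copy(), 1.0
    for _ in range(iters):
        x = prox_loss(v, y, sig2, lam, L)                   # proximal step on the loss
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))  # FISTA momentum weight
        z = x + (t_prev - 1.0) / t * (x - x_prev)           # extrapolated point
        v = z - (z - f(z)) / L                              # gradient step on the regularizer
        x_prev, t_prev = x, t
    return x_prev

rng = np.random.default_rng(0)
x0 = rng.uniform(0, 255, (64, 64))                     # placeholder image
y = x0 + rng.normal(0.0, 10.0, x0.shape)
xhat = red_apg(y, sig2=100.0, lam=0.02, L=1.0, iters=50)
```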
Convergence of RED-PG --------------------- Recalling [Theorem \[thm:impossible\]]{}, the RED algorithms do not minimize an explicit cost function but rather seek fixed points of [(\[eq:fpRED\])]{}. Therefore, it is important to know whether they actually converge to any one fixed point. Below, we use the theory of non-expansive and $\alpha$-averaged operators to establish the convergence of RED-PG to a fixed point under certain conditions. First, an operator ${\ensuremath{\boldsymbol{B}}}(\cdot)$ is said to be *non-expansive* if its Lipschitz constant is at most $1$ [@Bauschke:Book:11]. Next, for $\alpha\in (0,1)$, an operator ${\ensuremath{\boldsymbol{P}}}(\cdot)$ is said to be *$\alpha$-averaged* if $$\begin{aligned} {\ensuremath{\boldsymbol{P}}}({\ensuremath{\boldsymbol{x}}}) = \alpha {\ensuremath{\boldsymbol{B}}} ({\ensuremath{\boldsymbol{x}}}) + (1 - \alpha) {\ensuremath{\boldsymbol{x}}} \label{eq:alpha_average_def} \end{aligned}$$ for some non-expansive ${\ensuremath{\boldsymbol{B}}}(\cdot)$. Furthermore, if ${\ensuremath{\boldsymbol{P}}}_1$ and ${\ensuremath{\boldsymbol{P}}}_2$ are $\alpha_1$ and $\alpha_2$-averaged, respectively, then [@Bauschke:Book:11 Prop. 4.32] establishes that the composition ${\ensuremath{\boldsymbol{P}}}_2 \circ {\ensuremath{\boldsymbol{P}}}_1$ is $\alpha$-averaged with $$\begin{aligned} \alpha = \frac{2}{1+\frac{1}{\max\left\{\alpha_1,\alpha_2\right\}}} \label{eq:alpha_composition} .\end{aligned}$$ Recalling RED-PG from [Algorithm \[alg:RED\_PG\]]{}, let us define an operator called ${\ensuremath{\boldsymbol{T}}}(\cdot)$ that summarizes one algorithm iteration: $$\begin{aligned} \lefteqn{ {\ensuremath{\boldsymbol{T}}}({\ensuremath{\boldsymbol{x}}}) }\nonumber\\ &{\triangleq}\arg\min_{{\ensuremath{\boldsymbol{z}}}} \Big\{ \ell({\ensuremath{\boldsymbol{z}}};{\ensuremath{\boldsymbol{y}}}) + \tfrac{\lambda L}{2}\big\|{\ensuremath{\boldsymbol{z}}} - \big(\tfrac{1}{L} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})- \tfrac{1-L}{L}{\ensuremath{\boldsymbol{x}}}\big)\big\|^2 \Big\} \label{eq:T_minimization} \\ &= \operatorname{prox}_{\ell/(\lambda L)}\big(\tfrac{1}{L}({\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})-(1-L){\ensuremath{\boldsymbol{x}}})\big) \label{eq:T_prox}\end{aligned}$$ \[lem:T\_alpha\] If $\ell(\cdot)$ is proper, convex, and continuous; ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is non-expansive; and $L>1$, then ${\ensuremath{\boldsymbol{T}}}(\cdot)$ from [(\[eq:T\_prox\])]{} is $\alpha$-averaged with $\alpha = \max\{\tfrac{2}{1+L},\tfrac{2}{3}\}$. First, because $\ell(\cdot)$ is proper, convex, and continuous, we know that the proximal operator $\operatorname{prox}_{\ell/(\lambda L)}(\cdot)$ is $\alpha$-averaged with $\alpha=1/2$ [@Bauschke:Book:11]. Then, by definition, $\frac{1}{L}{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{z}}})-\frac{1-L}{L}{\ensuremath{\boldsymbol{z}}}$ is $\alpha$-averaged with $\alpha=1/L$. From [(\[eq:T\_prox\])]{}, ${\ensuremath{\boldsymbol{T}}}(\cdot)$ is the composition of these two $\alpha$-averaged operators, and so from [(\[eq:alpha\_composition\])]{} we have that ${\ensuremath{\boldsymbol{T}}}(\cdot)$ is $\alpha$-averaged with $\alpha = \max\{\frac{2}{1+L},\frac{2}{3}\}$. With [Lemma \[lem:T\_alpha\]]{}, we can prove the convergence of RED-PG. 
\[thm:mann\] If $\ell(\cdot)$ is proper, convex, and continuous; ${\ensuremath{\boldsymbol{f}}}(\cdot)$ is non-expansive; $L>1$; and ${\ensuremath{\boldsymbol{T}}}(\cdot)$ from [(\[eq:T\_prox\])]{} has at least one fixed point, then RED-PG converges. From [(\[eq:T\_prox\])]{}, we have that [Algorithm \[alg:RED\_PG\]]{} is equivalent to $$\begin{aligned} {\ensuremath{\boldsymbol{x}}}_{k+1} &= {\ensuremath{\boldsymbol{T}}}({\ensuremath{\boldsymbol{x}}}_k) \label{eq:mann_1}\\ &= \alpha{\ensuremath{\boldsymbol{B}}}({\ensuremath{\boldsymbol{x}}}_k) + (1-\alpha){\ensuremath{\boldsymbol{x}}}_k \label{eq:mann_2}\end{aligned}$$ where ${\ensuremath{\boldsymbol{B}}}(\cdot)$ is an implicit non-expansive operator that must exist under the definition of $\alpha$-averaged operators from [(\[eq:alpha\_average\_def\])]{}. The iteration [(\[eq:mann\_2\])]{} can be recognized as a Mann iteration [@Parikh:FTO:13], since $\alpha\in(0,1)$. Thus, from [@Bauschke:Book:11 Thm. 5.14], $\{{\ensuremath{\boldsymbol{x}}}_k\}$ is a convergent sequence, in that there exists a fixed point ${\ensuremath{\boldsymbol{x}}}_\star\in{{\mathbb{R}}}^N$ such that $\lim_{k\to\infty} \|{\ensuremath{\boldsymbol{x}}}_k - {\ensuremath{\boldsymbol{x}}}_\star\| = 0$. We note that similar Mann-based techniques were used in [@Buzzard:JIS:18; @Sun:18] to prove the convergence of PnP-based algorithms. Also, we conjecture that similar techniques may be used to prove the convergence of other RED algorithms, but we leave the details to future work. Experiments in [Section \[sec:compare\]]{} numerically study the convergence behavior of several RED algorithms with different image denoisers ${\ensuremath{\boldsymbol{f}}}(\cdot)$. Algorithm Comparison: Image Deblurring {#sec:compare} --------------------------------------

[Figure \[fig:psnr\_tnrd\]: PSNR versus iteration for RED algorithms with TNRD denoising when deblurring the starfish.]

[Figure \[fig:fp\_tnrd\]: Fixed-point error $\frac{1}{N}\big\|\frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}^H({\ensuremath{\boldsymbol{Ax}}}_k-{\ensuremath{\boldsymbol{y}}}) + \lambda({\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k))\big\|^2$ versus iteration for RED algorithms with TNRD denoising when deblurring the starfish.]

[Figure \[fig:dist\_tnrd\]: Update distance $\frac{1}{N}\|{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{x}}}_{k-1}\|^2$ versus iteration for RED algorithms with TNRD denoising when deblurring the starfish.]
We now compare the performance of the RED algorithms discussed above (i.e., inexact ADMM, FP, DPG, APG, and PG) on the image deblurring problem considered in [@Romano:JIS:17 Sec. 6.1]. For these experiments, the measurements ${\ensuremath{\boldsymbol{y}}}$ were constructed using a $9\times 9$ uniform blur kernel for ${\ensuremath{\boldsymbol{A}}}$ and using AWGN with variance $\sigma^2=2$. As stated earlier, the image ${\ensuremath{\boldsymbol{x}}}$ is normalized to have pixel intensities in the range $[0,255]$. For the first experiment, we used the TNRD denoiser. The various algorithmic parameters were chosen based on the recommendations in [@Romano:JIS:17]: the regularization weight was $\lambda=0.02$, the ADMM penalty parameter was $\beta=0.001$, and the noise variance assumed by the denoiser was $\nu=3.25^2$. The proximal step on $\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}})$, given in [(\[eq:RED\_FPQ\])]{}, was implemented with an FFT. For RED-DPG we used[^5] $L_0=0.2$ and $L_\infty=2$, for RED-APG we used $L=1$, and for RED-PG we used $L=1.01$ since [Theorem \[thm:mann\]]{} motivates $L>1$. [Figure \[fig:psnr\_tnrd\]]{} shows $$\text{PSNR}_k {\triangleq}-10\log_{10}\left(\frac{1}{N 256^2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\Hat{\boldsymbol{x}}}}_k\|^2\right)$$ versus iteration $k$ for the starfish test image. In the figure, the proposed RED-DPG and RED-APG algorithms appear significantly faster than the RED-FP and RED-ADMM-$I\!=\!1$ algorithms proposed in [@Romano:JIS:17]. For example, RED-APG reaches PSNR $=30$ in $15$ iterations whereas RED-FP and inexact RED-ADMM-$I=1$ take about $50$ iterations. [Figure \[fig:fp\_tnrd\]]{} shows the fixed-point error $$\begin{aligned} \frac{1}{N}\bigg\|\frac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}^H({\ensuremath{\boldsymbol{Ax}}}_k-{\ensuremath{\boldsymbol{y}}}) + \lambda({\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k))\bigg\|^2\end{aligned}$$ versus iteration $k$. All but the RED-APG and RED-ADMM algorithms appear to converge to the solution set of the fixed-point equation [(\[eq:fpRED\])]{}. The RED-APG and RED-ADMM algorithms appear to satisfy the fixed-point equation [(\[eq:fpRED\])]{} only approximately, since their fixed-point error does not decay to zero. [Figure \[fig:dist\_tnrd\]]{} shows the update distance $\frac{1}{N}\|{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{x}}}_{k-1}\|^2$ vs. iteration $k$ for the algorithms under test. For most algorithms, the update distance appears to be converging to zero, but for RED-APG and RED-ADMM it does not. This suggests that the RED-APG and RED-ADMM algorithms are converging to a limit cycle rather than a unique limit point.
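For reference, the three per-iteration diagnostics plotted in these figures can be computed from a sequence of iterates as in the sketch below. The operators `A_op` and `AT_op` stand for ${\ensuremath{\boldsymbol{A}}}$ and ${\ensuremath{\boldsymbol{A}}}^H$ and default to the identity here purely for illustration; the $256^2$ peak value follows the PSNR definition above.

```python
# Sketch: PSNR_k, fixed-point error, and update distance for a list of iterates x_k.
import numpy as np

def diagnostics(x_true, iterates, y, f, lam, sig2,
                A_op=lambda x: x, AT_op=lambda x: x):
    N = x_true.size
    psnr, fp_err, upd = [], [], []
    for k, xk in enumerate(iterates):
        psnr.append(-10.0 * np.log10(np.sum((x_true - xk) ** 2) / (N * 256 ** 2)))
        resid = AT_op(A_op(xk) - y) / sig2 + lam * (xk - f(xk))   # RED fixed-point residual
        fp_err.append(np.sum(resid ** 2) / N)
        if k > 0:
            upd.append(np.sum((xk - iterates[k - 1]) ** 2) / N)   # update distance
    return psnr, fp_err, upd
```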
[Figure \[fig:psnr\_dwt\]: PSNR versus iteration for RED algorithms with TDT denoising when deblurring the starfish.]

[Figure \[fig:fp\_dwt\]: Fixed-point error $\tfrac{1}{N}\big\|\tfrac{1}{\sigma^2}{\ensuremath{\boldsymbol{A}}}^H({\ensuremath{\boldsymbol{Ax}}}_k-{\ensuremath{\boldsymbol{y}}}) + \lambda({\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}_k))\big\|^2$ versus iteration for RED algorithms with TDT denoising when deblurring the starfish.]

[Figure \[fig:dist\_dwt\]: Update distance $\tfrac{1}{N}\|{\ensuremath{\boldsymbol{x}}}_k-{\ensuremath{\boldsymbol{x}}}_{k-1}\|^2$ versus iteration for RED algorithms with TDT denoising when deblurring the starfish.]

Next, we replace the TNRD denoiser with the TDT denoiser from [(\[eq:fTD\])]{} and repeat the previous experiments. For the TDT denoiser, we used a Haar-wavelet based orthogonal discrete wavelet transform (DWT) ${\ensuremath{\boldsymbol{W}}}$, with the maximum number of decomposition levels, and a soft-thresholding function ${\ensuremath{\boldsymbol{g}}}(\cdot)$ with threshold value $0.001$. Unlike the TNRD denoiser, this TDT denoiser is the proximal operator associated with a convex cost function, and so we know that it is $\frac{1}{2}$-averaged and non-expansive. [Figure \[fig:psnr\_dwt\]]{} shows PSNR versus iteration with TDT denoising. Interestingly, the final PSNR values appear to be nearly identical among all algorithms under test, but more than $1$ dB worse than the values around iteration $20$. [Figure \[fig:fp\_dwt\]]{} shows the fixed-point error vs. iteration for this experiment. There, the errors of most algorithms converge to a value near $10^{-7}$, but then remain at that value. Noting that RED-PG satisfies the conditions of [Theorem \[thm:mann\]]{} (i.e., convex loss, non-expansive denoiser, $L>1$), it should converge to a fixed-point of [(\[eq:fpRED\])]{}. Therefore, we attribute the fixed-point error saturation in [Fig. \[fig:fp\_dwt\]]{} to issues with numerical precision. [Figure \[fig:dist\_dwt\]]{} shows the normalized distance versus iteration with TDT denoising. There, the distance decreases to zero for all algorithms under test. We emphasize that the proposed RED-DPG, RED-APG, and RED-PG algorithms seek to solve exactly the same fixed-point equation [(\[eq:fpRED\])]{} sought by the RED-SD, RED-ADMM, and RED-FP algorithms proposed in [@Romano:JIS:17]. The excellent quality of the RED fixed-points was firmly established in [@Romano:JIS:17], both qualitatively and quantitatively, in comparison to existing state-of-the-art methods like PnP-ADMM [@Venkatakrishnan:GSIP:13]. For further details on these comparisons, including examples of images recovered by the RED algorithms, we refer the interested reader to [@Romano:JIS:17].
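For concreteness, the sketch below illustrates a TDT-style denoiser ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{W}}}{^{\top}}{\ensuremath{\boldsymbol{g}}}({\ensuremath{\boldsymbol{W}}}{\ensuremath{\boldsymbol{x}}})$ of the kind used in this experiment, assuming a single-level orthonormal 2-D Haar transform and soft thresholding with threshold $0.001$; the experiments above used the maximum number of decomposition levels, so this is an illustration rather than the exact denoiser employed here.

```python
# Sketch: a transform-domain-thresholding (TDT) denoiser f(x) = W^T g(W x), using one
# level of an orthonormal 2-D Haar transform (assumption; the paper uses all levels).
import numpy as np

def haar2(x):                        # one level of an orthonormal 2-D Haar transform (even-sized x)
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2.0)   # row lowpass
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2.0)   # row highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):          # inverse of haar2 (W orthonormal, so W^{-1} = W^T)
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2.0), (ll - lh) / np.sqrt(2.0)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2.0), (hl - hh) / np.sqrt(2.0)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
    return x

def soft(c, tau):                    # soft-thresholding g(.)
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

def f_tdt(x, tau=0.001):             # TDT denoiser: threshold every subband, f(x) = W^T g(W x)
    ll, lh, hl, hh = haar2(x)
    return ihaar2(soft(ll, tau), soft(lh, tau), soft(hl, tau), soft(hh, tau))
```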
Equilibrium View of RED Algorithms {#sec:CE} ================================== Like the RED algorithms, PnP-ADMM [@Venkatakrishnan:GSIP:13] repeatedly calls a denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ in order to solve an inverse problem. In [@Buzzard:JIS:18], Buzzard, Sreehari, and Bouman show that PnP-ADMM finds a “consensus equilibrium” solution rather than a minimum of any explicit cost function. By consensus equilibrium, we mean a solution $({\ensuremath{\Hat{\boldsymbol{x}}}},{\ensuremath{\Hat{\boldsymbol{u}}}})$ to \[eq:consensus\] $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} = F({\ensuremath{\Hat{\boldsymbol{x}}}}+{\ensuremath{\Hat{\boldsymbol{u}}}}) \label{eq:F} \\ {\ensuremath{\Hat{\boldsymbol{x}}}} = G({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\Hat{\boldsymbol{u}}}}) \label{eq:G} \end{aligned}$$ for some functions $F,G:{{\mathbb{R}}}^N\rightarrow{{\mathbb{R}}}^N$. For PnP-ADMM, these functions are [@Buzzard:JIS:18] $$\begin{aligned} F{_{\textsf{pnp}}}({\ensuremath{\boldsymbol{v}}}) &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\beta}{2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}\|^2\right\} \label{eq:Fpnp} \\ G{_{\textsf{pnp}}}({\ensuremath{\boldsymbol{v}}}) &= {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{v}}}) \label{eq:Gpnp} .\end{aligned}$$ RED Equilibrium Conditions -------------------------- We now show that the RED algorithms also find consensus equilibrium solutions, but with $G\neq G{_{\textsf{pnp}}}$. First, recall ADMM [Algorithm \[alg:ADMM\]]{} with explicit regularization $\rho(\cdot)$. By taking iteration $k\rightarrow\infty$, it becomes clear that the ADMM solutions must satisfy the equilibrium condition [(\[eq:consensus\])]{} with $$\begin{aligned} F{_{\textsf{admm}}}({\ensuremath{\boldsymbol{v}}}) &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\beta}{2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}\|^2\right\} \label{eq:Fadmm} \\ G{_{\textsf{admm}}}({\ensuremath{\boldsymbol{v}}}) &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\lambda\rho({\ensuremath{\boldsymbol{x}}}) + \frac{\beta}{2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}\|^2\right\} \label{eq:Gadmm} ,\end{aligned}$$ where we note that $F{_{\textsf{admm}}}=F{_{\textsf{pnp}}}$. The RED-ADMM algorithm can be considered as a special case of ADMM [Algorithm \[alg:ADMM\]]{} under which $\rho(\cdot)$ is differentiable with $\nabla\rho({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$, for a given denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$. We can thus find $G{_{\textsf{red-admm}}}(\cdot)$, i.e., the RED-ADMM version of $G(\cdot)$ satisfying the equilibrium condition [(\[eq:G\])]{}, by solving the right side of [(\[eq:Gadmm\])]{} under $\nabla\rho({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$. Similarly, we see that the RED-ADMM version of $F(\cdot)$ is identical to the ADMM version of $F(\cdot)$ from [(\[eq:Fadmm\])]{}. 
Now, the ${\ensuremath{\Hat{\boldsymbol{x}}}}=G{_{\textsf{red-admm}}}({\ensuremath{\boldsymbol{v}}})$ that solves the right side of [(\[eq:Gadmm\])]{} under differentiable $\rho(\cdot)$ with $\nabla\rho({\ensuremath{\boldsymbol{x}}})={\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})$ must obey $$\begin{aligned} {\ensuremath{\boldsymbol{0}}} &= \lambda \nabla \rho({\ensuremath{\Hat{\boldsymbol{x}}}}) + \beta({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{v}}}) \\ &= \lambda \big({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})\big) + \beta({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{v}}}) \label{eq:denoiser_fp} ,\end{aligned}$$ which we note is a special case of [(\[eq:fpRED\])]{}. Continuing, we find that $$\begin{aligned} {\ensuremath{\boldsymbol{0}}} &= \lambda \big({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})\big) + \beta({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\boldsymbol{v}}}) \\ \Leftrightarrow {\ensuremath{\boldsymbol{0}}} &= \frac{\lambda+\beta}{\beta}{\ensuremath{\Hat{\boldsymbol{x}}}} - \frac{\lambda}{\beta} {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}) -{\ensuremath{\boldsymbol{v}}} \\ \Leftrightarrow {\ensuremath{\boldsymbol{v}}} &= \left(\frac{\lambda+\beta}{\beta}{\ensuremath{\boldsymbol{I}}} - \frac{\lambda}{\beta} {\ensuremath{\boldsymbol{f}}}\right)({\ensuremath{\Hat{\boldsymbol{x}}}}) \\ \Leftrightarrow {\ensuremath{\Hat{\boldsymbol{x}}}} &= \left( \frac{\lambda+\beta}{\beta}{\ensuremath{\boldsymbol{I}}}-\frac{\lambda}{\beta}{\ensuremath{\boldsymbol{f}}} \right)^{-1}({\ensuremath{\boldsymbol{v}}}) = G{_{\textsf{red-admm}}}({\ensuremath{\boldsymbol{v}}}) \label{eq:G_RED_ADMM},\end{aligned}$$ where ${\ensuremath{\boldsymbol{I}}}$ represents the identity operator and $(\cdot)^{-1}$ represents the functional inverse. In summary, RED-ADMM with denoiser ${\ensuremath{\boldsymbol{f}}}(\cdot)$ solves the consensus equilibrium problem [(\[eq:consensus\])]{} with $F=F{_{\textsf{admm}}}$ from [(\[eq:Fadmm\])]{} and $G=G{_{\textsf{red-admm}}}$ from [(\[eq:G\_RED\_ADMM\])]{}. Next we establish an equilibrium result for RED-PG. 
Defining ${\ensuremath{\boldsymbol{u}}}_k={\ensuremath{\boldsymbol{v}}}_k-{\ensuremath{\boldsymbol{x}}}_k$ and taking $k\rightarrow\infty$ in [Algorithm \[alg:RED\_PG\]]{}, it can be seen that the fixed points of RED-PG obey [(\[eq:F\])]{} for $$\begin{aligned} F{_{\textsf{red-pg}}}({\ensuremath{\boldsymbol{v}}}) &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\lambda L}{2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{v}}}\|^2\right\} \label{eq:Fredpg} .\end{aligned}$$ Furthermore, from [line \[line:PG\_v\_update\]]{} of [Algorithm \[alg:RED\_PG\]]{}, it can be seen that the RED-PG fixed points also obey $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{u}}}} &= \frac{1}{L}\left( {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}) - {\ensuremath{\Hat{\boldsymbol{x}}}} \right) \label{eq:ured}\\ \Leftrightarrow {\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\Hat{\boldsymbol{u}}}} &= {\ensuremath{\Hat{\boldsymbol{x}}}} - \frac{1}{L}\left( {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})-{\ensuremath{\Hat{\boldsymbol{x}}}} \right) \\ &= \left(\frac{L+1}{L}{\ensuremath{\boldsymbol{I}}} - \frac{1}{L} {\ensuremath{\boldsymbol{f}}}\right)({\ensuremath{\Hat{\boldsymbol{x}}}}) \\ \Leftrightarrow {\ensuremath{\Hat{\boldsymbol{x}}}} &= \left(\frac{L+1}{L}{\ensuremath{\boldsymbol{I}}} - \frac{1}{L} {\ensuremath{\boldsymbol{f}}}\right)^{-1}({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\Hat{\boldsymbol{u}}}}) ,\end{aligned}$$ which matches [(\[eq:G\])]{} when $G=G{_{\textsf{red-pg}}}$ for $$\begin{aligned} G{_{\textsf{red-pg}}}({\ensuremath{\boldsymbol{v}}}) &= \left(\frac{L+1}{L}{\ensuremath{\boldsymbol{I}}} - \frac{1}{L} {\ensuremath{\boldsymbol{f}}}\right)^{-1}({\ensuremath{\boldsymbol{v}}}) \label{eq:Gredpg} .\end{aligned}$$ Note that $G{_{\textsf{red-pg}}}=G{_{\textsf{red-admm}}}$ when $L=\beta/\lambda$. Interpreting the RED Equilibria ------------------------------- The equilibrium conditions provide additional interpretations of the RED algorithms. To see how, first recall that the RED equilibrium $({\ensuremath{\Hat{\boldsymbol{x}}}},{\ensuremath{\Hat{\boldsymbol{u}}}})$ satisfies $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} &= F{_{\textsf{red-pg}}}({\ensuremath{\Hat{\boldsymbol{x}}}}+{\ensuremath{\Hat{\boldsymbol{u}}}}) \label{eq:xequilibRED}\\ {\ensuremath{\Hat{\boldsymbol{x}}}} &= G{_{\textsf{red-pg}}}({\ensuremath{\Hat{\boldsymbol{x}}}}-{\ensuremath{\Hat{\boldsymbol{u}}}}) ,\end{aligned}$$ or an analogous pair of equations involving $F{_{\textsf{red-admm}}}$ and $G{_{\textsf{red-admm}}}$. 
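As a brief computational aside, $G_{\textsf{red-pg}}$ in [(\[eq:Gredpg\])]{} can be evaluated exactly like $G_{\textsf{red-admm}}$, and the remark that the two coincide when $L=\beta/\lambda$ is easy to verify numerically; the sketch below is illustrative only and reuses the hypothetical denoiser `f` and `G_red_admm` from the earlier sketches.

```python
import numpy as np

def G_red_pg(v, f, L=1.0, iters=200):
    # Evaluate ((L+1)/L * I - (1/L) * f)^{-1}(v) via the fixed point
    # x = (L*v + f(x)) / (L + 1), a contraction for nonexpansive f.
    x = np.array(v, dtype=float)
    for _ in range(iters):
        x = (L * v + f(x)) / (L + 1.0)
    return x

# Consistency check of the remark above, e.g. with lam, beta = 1.0, 2.0:
# v = np.random.default_rng(1).standard_normal(64)
# np.allclose(G_red_pg(v, f, L=2.0), G_red_admm(v, f, lam=1.0, beta=2.0), atol=1e-8)
```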
Thus, from [(\[eq:Fredpg\])]{}, [(\[eq:ured\])]{}, and [(\[eq:xequilibRED\])]{}, we have that $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} &= F{_{\textsf{red-pg}}}\left({\ensuremath{\Hat{\boldsymbol{x}}}}+\frac{1}{L}({\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})-{\ensuremath{\Hat{\boldsymbol{x}}}})\right) \\ &= F{_{\textsf{red-pg}}}\left(\frac{L-1}{L}{\ensuremath{\Hat{\boldsymbol{x}}}}+\frac{1}{L}{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})\right) \\ &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\lambda L}{2}\left\|{\ensuremath{\boldsymbol{x}}}-\frac{L-1}{L}{\ensuremath{\Hat{\boldsymbol{x}}}}-\frac{1}{L}{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})\right\|^2\right\} .\end{aligned}$$ When $L=1$, this simplifies down to $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\lambda}{2}\left\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}})\right\|^2\right\} \label{eq:xequilibRED2}.\end{aligned}$$ Note that [(\[eq:xequilibRED2\])]{} is reminiscent of, although in general not equivalent to, $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} &= \arg\min_{{\ensuremath{\boldsymbol{x}}}} \left\{\ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) + \frac{\lambda}{2}\left\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}})\right\|^2\right\} ,\end{aligned}$$ which was discussed as an “alternative” formulation of RED in [@Romano:JIS:17 Sec. 5.2]. Insights into the relationship between RED and PnP-ADMM can be obtained by focusing on the simple case of $$\begin{aligned} \ell({\ensuremath{\boldsymbol{x}}};{\ensuremath{\boldsymbol{y}}}) &= \frac{1}{2\sigma^2}\|{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{y}}}\|^2 \label{eq:simple} ,\end{aligned}$$ where the overall goal of variational image recovery would be the denoising of ${\ensuremath{\boldsymbol{y}}}$. 
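For the quadratic loss [(\[eq:simple\])]{}, the $L=1$ equilibrium condition [(\[eq:xequilibRED2\])]{} has a closed-form inner minimization, so the equilibrium can be located by a simple fixed-point iteration. The sketch below is illustrative only; the denoiser $\boldsymbol{f}$, iteration count, and initialization are assumptions rather than part of the original analysis.

```python
import numpy as np

def red_equilibrium_quadratic(y, f, lam, sigma2, iters=200):
    # Fixed-point iteration for the L = 1 equilibrium condition:
    #   x = argmin_x { ||x - y||^2 / (2*sigma2) + (lam/2) * ||x - f(x_hat)||^2 }
    # evaluated at x = x_hat, i.e.  x <- (y/sigma2 + lam*f(x)) / (1/sigma2 + lam).
    x = np.array(y, dtype=float)
    for _ in range(iters):
        x = (y / sigma2 + lam * f(x)) / (1.0 / sigma2 + lam)
    return x
```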
For PnP-ADMM, [(\[eq:RED\_FPQ\])]{} and [(\[eq:Fpnp\])]{} imply $$\begin{aligned} F{_{\textsf{pnp}}}({\ensuremath{\boldsymbol{v}}}) &= \frac{1}{1+\lambda\sigma^2}{\ensuremath{\boldsymbol{y}}} + \frac{\lambda\sigma^2}{1+\lambda\sigma^2}{\ensuremath{\boldsymbol{v}}} ,\end{aligned}$$ and so the equilibrium condition [(\[eq:F\])]{} implies $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}&= \frac{1}{1+\lambda\sigma^2}{\ensuremath{\boldsymbol{y}}} + \frac{\lambda\sigma^2}{1+\lambda\sigma^2}({\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}+{\ensuremath{\Hat{\boldsymbol{u}}}}{_{\textsf{pnp}}}) \\ \Leftrightarrow {\ensuremath{\Hat{\boldsymbol{u}}}}{_{\textsf{pnp}}}&= \frac{{\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}-{\ensuremath{\boldsymbol{y}}}}{\lambda\sigma^2} .\end{aligned}$$ Meanwhile, [(\[eq:Gpnp\])]{} and the equilibrium condition [(\[eq:G\])]{} imply $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}&= {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}-{\ensuremath{\Hat{\boldsymbol{u}}}}{_{\textsf{pnp}}}) \\ &= {\ensuremath{\boldsymbol{f}}}\left(\frac{\lambda\sigma^2-1}{\lambda\sigma^2}{\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}+\frac{1}{\lambda\sigma^2}{\ensuremath{\boldsymbol{y}}}\right) .\end{aligned}$$ In the case that $\lambda=1/\sigma^2$, we have the intuitive result that $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{pnp}}}&= {\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{y}}}) \label{eq:equilib_pnp},\end{aligned}$$ which corresponds to direct denoising of ${\ensuremath{\boldsymbol{y}}}$. For RED, ${\ensuremath{\Hat{\boldsymbol{u}}}}{_{\textsf{red}}}$ is algorithm dependent, but ${\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}$ is always the solution to [(\[eq:fpRED\])]{}, where now ${\ensuremath{\boldsymbol{A}}}={\ensuremath{\boldsymbol{I}}}$ due to [(\[eq:simple\])]{}. That is, $$\begin{aligned} {\ensuremath{\boldsymbol{y}}} -{\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}&= \lambda\sigma^2\big({\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}- {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}})\big) .\end{aligned}$$ Taking $\lambda=1/\sigma^2$ for direct comparison to [(\[eq:equilib\_pnp\])]{}, we find $$\begin{aligned} {\ensuremath{\boldsymbol{y}}} -{\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}&= {\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}- {\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}{_{\textsf{red}}}) \label{eq:equilib_red} .\end{aligned}$$ Thus, whereas PnP-ADMM reports the denoiser output ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{y}}})$, *RED reports the ${\ensuremath{\Hat{\boldsymbol{x}}}}$ for which the denoiser residual ${\ensuremath{\boldsymbol{f}}}({\ensuremath{\Hat{\boldsymbol{x}}}}) - {\ensuremath{\Hat{\boldsymbol{x}}}}$ negates the measurement residual ${\ensuremath{\boldsymbol{y}}} - {\ensuremath{\Hat{\boldsymbol{x}}}}$.* This ${\ensuremath{\Hat{\boldsymbol{x}}}}$ can be expressed concisely as $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{x}}}} = (2{\ensuremath{\boldsymbol{I}}}-{\ensuremath{\boldsymbol{f}}})^{-1}({\ensuremath{\boldsymbol{y}}}) = G{_{\textsf{red-pg}}}({\ensuremath{\boldsymbol{y}}})\big|_{L=1} .\end{aligned}$$ Conclusion {#sec:conclusion} ========== The RED paper [@Romano:JIS:17] proposed a powerful new way to exploit plug-in denoisers when solving imaging inverse-problems. 
In fact, experiments in [@Romano:JIS:17] suggest that the RED algorithms are state-of-the-art. Although [@Romano:JIS:17] claimed that the RED algorithms minimize an optimization objective containing an explicit regularizer of the form ${\rho{_{\textsf{red}}}}({\ensuremath{\boldsymbol{x}}}){\triangleq}\frac{1}{2}{\ensuremath{\boldsymbol{x}}}{^{\top}}({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{f}}}({\ensuremath{\boldsymbol{x}}}))$ when the denoiser is LH, we showed that the denoiser must also be Jacobian symmetric for this explanation to hold. We then provided extensive numerical evidence that practical denoisers like the median filter, non-local means, BM3D, TNRD, or DnCNN lack sufficient Jacobian symmetry. Furthermore, we established that, with non-JS denoisers, the RED algorithms cannot be explained by explicit regularization of any form. None of our negative results dispute the fact that the RED algorithms work very well in practice. But they do motivate the need for a better understanding of RED. In response, we showed that the RED algorithms can be explained by a novel framework called *score-matching by denoising* (SMD), which aims to match the “score” (i.e., the gradient of the log-prior) rather than design any explicit regularizer. We then established tight connections between SMD, kernel density estimation, and constrained MMSE denoising. On the algorithmic front, we provided new interpretations of the RED-ADMM and RED-FP algorithms proposed in [@Romano:JIS:17], and we proposed novel RED algorithms with much faster convergence. Finally, we performed a consensus-equilibrium analysis of the RED algorithms that leads to additional interpretations of RED and its relation to PnP-ADMM.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors thank Peyman Milanfar, Miki Elad, Greg Buzzard, and Charlie Bouman for insightful discussions.

[^1]: E. T. Reehorst (email: [email protected]) and P. Schniter (email: [email protected]) are with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH, 43210. Their work is supported in part by the National Science Foundation under grants CCF-1527162 and CCF-1716388 and the National Institutes of Health under grant R01HL135489.

[^2]: Matlab code for the experiments is available at <http://www2.ece.ohio-state.edu/~schniter/RED/index.html>.

[^3]: \[foot:images\]We used the center $16\times 16$ patches of the standard Barbara, Bike, Boats, Butterfly, Cameraman, Flower, Girl, Hat, House, Leaves, Lena, Parrots, Parthenon, Peppers, Plants, Raccoon, and Starfish test images.

[^4]: The extension to non-quadratic loss is important for applications like phase-retrieval, where RED has been successfully applied [@Metzler:ICML:18].

[^5]: Matlab code for these experiments is available at <http://www2.ece.ohio-state.edu/~schniter/RED/index.html>.
--- abstract: 'Using toric geometry we prove a Bézout type theorem for weighted projective spaces.' author: - 'Bernt Ivar Utst$\o$l N$\o$dland' bibliography: - 'reference.bib' title: | A toric proof of Bézout’s theorem\ for weighted projective spaces --- Introduction ============ The classical Bézout’s Theorem for projective space states that for divisors $D_1,...,D_n$ on $\p^n$ we have $D_1 \cdots D_n = \Pi_{i=1}^n \deg D_i$ [@Fulton2 Prop. 8.4]. Using toric geometry we will generalize this formula to weighted projective spaces $\p(q_0,...,q_n)$: \[Bézout’s Theorem\] \[Bezout\] Given n torus-invariant divisors $E_1,...,E_n$ on $\p(q_0,...,q_n)$, we have $$E_1 \cdots E_n = \frac {\Pi_{i=1}^n \deg E_i}{q_0 \cdots q_n} .$$ We recover Bézout’s theorem for weighted homogeneous polynomials [@Ertl Thm. 3.6] by choosing effective divisors intersecting in finitely many points with an additional hypothesis on the degrees of the polynomials defining the divisors. Our theorem also generalizes Bézout’s theorem for the weighted projective plane given in [@bezplane Prop. 5]. This paper is based on results from my master thesis [@biun] and is written as part of my PhD thesis at the University of Oslo, supervised by Ragni Piene and John Christian Ottem. Preliminaries on intersection theory and WPS ============================================ For any $n$-dimensional variety $X$ let $Z_k(X)$ be the free abelian group generated by the set of irreducible closed subvarieties of dimension $k$ on $X$. We define rational equivalence: Let $\alpha \in Z_k(X)$ be equivalent to zero if there exist finitely many $(k+1)$-dimensional subvarieties $V_i \subseteq X$ such that $\alpha$ is the divisor of a rational function on $V_i$ for all $i$. Then the $k$-th Chow group $A_k(X)$ is $Z_k(X)$ modulo rational equivalence. We use notation and definitions from [@Cox] for toric varieties. For a complex toric variety $X_\Sigma$ defined by a fan $\Sigma$, $A_k(X_\Sigma)$ is generated by the classes of the orbit closures $\overbar{O(\sigma)}$ of the cones $\sigma \in \Sigma(n-k)$ ([@Fulton Ch.5.1]). If $\Sigma$ is complete and simplicial, setting $A^k(X_\Sigma)=A_{n-k}(X_\Sigma)$, one can define a product $$A^k(X) \otimes \mathbb{Q} \times A^l(X) \otimes \mathbb{Q} \to A^{k+l}(X) \otimes \mathbb{Q}$$ which agrees with geometric intersection in nice cases (see [@Danilov Remark 10.9]). This makes the groups of cycles into a graded ring $A^\bullet(X_\Sigma)_{\mathbb{Q}}$. To compute intersections we will also consider the Chow ring of a toric variety, as defined in [@Cox Ch. 12.5]. Given a fan $\Sigma$, let $\Sigma(1)=\{\rho_1,...,\rho_r \}$. Denote by $u_i$ the minimal generator of $\rho_i$. We will consider two ideals $\mathscr{I}$, $\mathscr{J}$ in the polynomial ring $\mathbb{Q}[x_1,...,x_r]$. Let $$\mathscr{I} = \langle x_{i_1} \cdots x_{i_s} | \text{ all } i_j \text{ distinct and } \rho_{i_1}+ \cdots + \rho_{i_s} \text{ is not a cone in } \Sigma \rangle ,$$ $$\mathscr{J} = \langle \sum_{i=1}^r \langle m,u_i \rangle x_i | \text{ where } m \text{ ranges over a basis of } M \rangle .$$ The ideal $\mathscr{I}$ is called the Stanley–Reisner ideal. 
The Chow ring $R_{\mathbb{Q}}(\Sigma)$ is defined as $$R_{\mathbb{Q}}(\Sigma)= \mathbb{Q} [x_1,...,x_r]/(\mathscr{I}+\mathscr{J}) .$$ We have that if $X_\Sigma$ is complete and simplicial, then by [@Cox Thm 12.5.3] $$R_{\mathbb{Q}}(\Sigma) \cong A^\bullet(X_\Sigma)_{\mathbb{Q}} .$$ We have from [@Cox Example 3.1.17] the fan for a weighted projective space: Given natural numbers $q_0,...,q_n$ with $\gcd(q_0,...,q_n)=1$, consider the quotient lattice of $\Z^{n+1}$ by the subgroup generated by $(q_0,...,q_n)$. We write $N=\mathbb{Z}^{n+1} / \mathbb{Z} (q_0,...,q_n)$. Let $u_i$ for $i=0,...,n$ be the images in $N$ of the standard basis vectors of $\Z^{n+1}$. This means that in $N$ we have the relation $$q_0u_0 + ... + q_nu_n = 0 .$$ Let $\Sigma$ be the fan consisting of all cones generated by proper subsets of $\{ u_0,...,u_n \}$. Then $X_\Sigma = \mathbb{P}(q_0,...,q_n)$, which is complete and simplicial. Moreover, we have $$\mathscr{I} = \langle x_0 \cdots x_n \rangle .$$ Since we are over $\mathbb{Q}$, a basis for $M \otimes \Q = \{ m \in \Z^{n+1} | \sum q_im_i = 0 \}$ will be $(q_i,0,...,0,-q_0,0...,0)$, for $i=1,...,n$. This gives the ideal $$\mathscr{J} = \langle q_ix_0 -q_0x_i | i=1,...,n \rangle .$$ Doing the computations, we can eliminate $x_1,...,x_n$ since $x_i = \frac{q_i}{q_0}x_0$, so the Chow ring will be $$R_{\mathbb{Q}}(\Sigma) \cong \mathbb{Q}[x_0]/x_0^{n+1} .$$ The group of torus-invariant divisors is the free abelian group generated by one prime divisor $D_i$ for each $1$-dimensional ray $\rho_i$ in $\Sigma$. The $1$-graded part of $R_{\mathbb{Q}}(\Sigma)$ corresponds to $\Cl(\p(q_0,...,q_n)) \otimes \Q$. It is well known that $\Cl(\p(q_0,...,q_n))$ is isomorphic to $\Z$ via the degree map, where $\deg D_i = q_i$ (see for instance [@RossiTerra Thm. 1.19]). Then we have relations $q_iD_0 = q_0D_i$, thus choosing the image of $D_0$ (by abuse of notation we denote this by $D_0$ as well) as a generator for $\Cl(\p(q_0,...,q_n)) \otimes \Q \simeq \Q$, we get that $D_i$ is mapped to $\frac{q_i}{q_0} D_0$. Taking any torus-invariant divisor $D = \sum_{i=0}^n a_iD_i$, we have $\deg D= \sum_{i=0}^n a_iq_i$. Then in the Chow ring, $D$ gets mapped to $\sum_{i=0}^n a_iD_i = \sum_{i=0}^n a_i \frac {q_i}{q_0}D_0 = \frac{D_0}{q_0} \sum_{i=0}^n a_iq_i = \frac{\deg D}{q_0} D_0$. Taking $n$ torus-invariant divisors $E_1,...,E_n$ it then follows that $$E_1 \cdots E_n = \frac {\Pi_{j=1}^n \deg E_j}{q_0^n} D_0^n ,$$ thus we have determined intersections of divisors modulo the generator $D_0^n$. We wish to push this forward to $\Spec \C$ to obtain an actual number, i.e., to calculate $\int_{X_\Sigma} D_0^n$. If a complete variety $X_\Sigma$ of dimension $n$ is embedded in $\p^s$ via a very ample divisor $D$, define $D^n \defeq \deg (X_{\Sigma_P} \subset \p^s) = \int_{X_\Sigma} D^n$. Then by [@Cox Thm. 13.4.3], $D^n$ equals $\Vol(P_D)$, i.e., the volume of the polytope associated to the divisor, normalized with respect to the lattice $M$. To apply this we need to describe a polytope giving $\p(q_0,...,q_n)$. From [@RossiTerra Remark 1.24 and Cor 1.25] we have the following polytope: Given $(q_0,...,q_n)$ and $M \cong \Z^{n+1}$, let $\delta = \lcm(q_0,...,q_n)$. Consider the $n+1$ points of $M_\R \cong \R^{n+1}$: $$v_i = (0,...,\frac{\delta}{q_i},...0)$$ Let $\Delta$ be the convex hull of $0$ and all $v_i$. Intersecting $\Delta$ with the hyperplane $H= \{ (x_0,...,x_n) | \sum_{i=0}^n x_iq_i = \delta \}$, we get an $n$-dimensional polytope $P$.
Then $X_P \cong \p(q_0,...,q_n)$ and the divisor $D$ associated to the polytope will be $\frac {\delta}{q_0}D_0$ which is very ample. If we can determine the volume of $P$, we can determine $D_0^n$, since $$\Vol(P) = D^n=\frac {\delta^n}{q_0^n}D_0^n ,$$ implying that $D_0^n=\Vol(P) \frac{q_0^n}{\delta^n}$. Proof of Theorem \[Bezout\] =========================== To determine the volume of $P$, we will use the generalized cross product (see [@Cross]). For $n$ vectors $v_1,...,v_n \in \R^{n+1}$, let $A$ be the matrix with $i$-th row $v_i$. The cross product $v_1 \times \cdots \times v_n \in \R^{n+1}$ has $k$-th coordinate equal to $(-1)^k$ times the $n \times n$ minor of $A$ obtained by removing the $k$-th column. This cross product is orthogonal to all $v_i$ and satisfies $$|v_1 \times \cdots \times v_n| = \Vol(v_1,...,v_n) ,$$ where $\Vol(v_1,...,v_n)$ is the $n$-dimensional volume of the parallelotope spanned by $v_1,...,v_n$ (this product can be expressed by exterior algebra operations as the Hodge dual \*$(v_1 \wedge \cdots \wedge v_n)$). To determine the volume, we first need to normalize with respect to the lattice, i.e., we need to determine the volume spanned by a basis. We will need the following to choose a basis for the lattice spanned by $P$: Given set of linearly independent vectors $b_1,...,b_n \in M$ let $$T(b_1,...,b_n) = \{ \sum_{i=1}^n c_ib_i | 0 \leq c_i < 1 \} \subseteq M_{\mathbb{R}} = M \otimes \mathbb{R} .$$ The following is well-known, but we include a proof for lack of a proper reference: \[basis\] The vectors $b_1,...,b_n$ form a basis for the lattice $M$ if and only if $T(b_1,...,b_n) \cap M = \{ 0 \}$ . Assume $(b_1,...,b_n)$ is a basis. Let $x \in T(b_1,...,b_n) \cap M$. Then $x= \sum_{i=1}^n c_ib_i = \sum_{i=1}^n n_ib_i$ for $0 \leq c_i < 1$, $n_i \in \mathbb{Z}$. Thus $0 = \sum_{i=1}^n (c_i-n_i)b_i$. Since the $b_i$’s are linearly independent, this implies that $c_i = n_i$, hence $c_i=0$. Assume $T(b_1,...,b_n) \cap M = \{ 0 \}$. Pick a lattice point $x \in M$. Since $b_1,...,b_n$ is a basis for the vector space $M_{\mathbb{R}}$ we can find $d_i \in \mathbb{R}$ such that $x= \sum_{i=1}^n d_ib_i$. Let $d_i = n_i +c_i$ where $n_i \in \mathbb{Z}$ and $0 \leq c_i < 1$. Then $x - \sum_{i=1}^n n_ib_i \in T(b_1,...,b_n) \cap M = \{ 0 \}$, hence $c_i = 0$ for all $i$. Thus $b_1,...,b_n$ is a basis for $M$. First we choose an edge of the polytope $P$, say the edge $v_0v_1$, which is generated by $(-\frac{\delta}{q_0},\frac{\delta}{q_1},0,...,0)$. For simpler notation set $q_{i_1,...,i_s}=\gcd(q_{i_1},...,q_{i_s})$. The primitive generator of the edge $v_0v_1$ will be $e_1=(-\frac{q_1}{q_{01}},\frac{q_0}{q_{01}},0,...,0)$. Now, choose any lattice point of $H = \{ (x_0,...,x_n) | \sum_{i=0}^n x_iq_i = \delta \}$ of the form $$(x_{20},x_{21},\frac{q_{01}}{q_{012}},0,...,0),$$ this exists since the numbers obtained as integral linear combination of $q_0,q_1$ are exactly all multiples of $q_{01}$, and $\delta-q_2 \frac{q_{01}}{q_{012}}$ is such a multiple (the subscripts are chosen for notational purposes which will become clear) . 
Set $e_2$ as the difference between this point and $v_0$, in other words $$e_2=(x_{20}-\frac{\delta}{q_0},x_{21},\frac{q_{01}}{q_{012}},0,...,0 ) .$$ In general, for all $2 \leq s \leq n$ find a lattice point of the form $$(x_{i0},x_{i1},...,x_{i(s-1)},\frac{q_{0...s-1}}{q_{0...s}},0,...,0).$$ This is equivalent to saying $$x_{i0}q_0 + x_{i1}q_1 + \cdots + x_{i(s-1)}q_{s-1} + \frac{q_{0...s-1}}{q_{0...s}} q_s = \delta .$$ Set $$e_s = ( x_{i0}-\frac{\delta}{q_0},x_{i1},...,x_{i(s-1)},\frac{q_{0...s-1}}{q_{0...s}},0,...,0).$$ Then we have \[nbasis\] The $n$ vectors $\{ e_1,...,e_n \}$ constructed above, are a basis for the lattice spanned by $H$. We will use Lemma \[basis\] to show this. Assume we have a lattice point $l=\sum_{i=1}^n c_ie_i$, where $0 \leq c_i < 1$ for all $i$. Then it suffices to show that all $c_i=0$. We will show this by descending induction on $c_n$. Let $l=(y_0,...,y_n)$. Then we have, by definition of $H$, $$\label{jau} \sum_{i=0}^n q_iy_i=\delta .$$ Consider the $(n+1)$-th coordinate. Since the basis is constructed in such a way that the only vector having nonzero $(n+1)$-th coordinate is $e_n$, we must have $y_n=c_n \frac{q_{0,...,n-1}}{q_{0,...,n}}$. When we defined weighted projective space, we assumed $q_{0,...,n}=1$. Thus we must have $y_n=c_n q_{0,...,n-1}$. Now consider modulo $(q_{0,...,n-1})$: The righthand side is $0$ and the first terms $q_0y_0+...+q_{n-1}y_{n-1}$ will be zero, since in general integral linear combinations of a set of integers are exactly the multiples of their greatest common divisor. Thus we must have $$q_ny_n \equiv q_nc_nq_{0,...,n-1} \equiv 0 \pmod{q_{0,...,n-1}}.$$ Now since $c_n < 1$, we have $c_nq_{0,...,n-1} < q_{0,...,n-1}$, and if $0<c_n$ there must be some prime power $p^r$ dividing $q_{0,...,n-1}$ which does not appear in $c_nq_{0,...,n-1}$. But then we must have that $p$ divides $q_n$, which implies $q_{0,...,n}>1$ which is a contradiction. Thus $c_n = 0$. Assume in general we have proved that $c_n=c_{n-1}=...=c_{s+1}=0$. We will show that $c_s=0$. We will use the same method as above: Since $c_{s+1}=...=c_n=0$, we have a linear combination $l=\sum_{i=0}^s c_ie_i$. In the set $\{e_1,...,e_s \}$, the only vector with $(s+1)$-th coordinate nonzero will be $e_s$. Thus we must have $y_s=c_s \frac {q_{0,...,s-1}}{q_{0,...,s}}$. Considering modulo $q_{0,...,s-1}$ we get $$q_sy_s \equiv q_sc_s \frac{q_{0,...,s-1}}{q_{0,...,s}} \equiv 0 \pmod{q_{0,...,s-1}}.$$ Now, since $l$ is a lattice point, $c_s \frac{q_{0,...,s-1}}{q_{0,...,s}}$ is an integer $k < \frac{q_{0,...,s-1}}{q_{0,...,s}} $. Rewriting the above we get $$\label{jups} \frac {q_s}{q_{0,...,s}} k q_{0,...,s} \equiv 0 \pmod{q_{0,...,s-1}} ,$$ since $k q_{0,...,s}=c_s q_{0,...,s-1} < q_{0,...,s-1}$, we must have, if $0 < c_s$, that there is a prime power $p^r$ in the prime factorization of $q_{0,...,s-1}$, which appears to a smaller degree in the prime factorization of $c_s q_{0,...,s-1}$. By the previous equality, the highest power of $p$ which can appear in $q_{0,...,s}$ will also be smaller than $r$, say it is $(r-t)$. But to satisfy we must also have that $p$ divides $\frac{q_s}{q_{0,...,s}}$, which implies that $p^{r-t+1}$ divides $q_s$, but then $p^{r-t+1}$ will divide $q_{0,...,s}$ which is a contradiction. Thus we must have $c_s = 0$. The last case is an exception. If $s=0$ we have $l = c_0e_0$, but by construction of $e_0$ as a primitive vector we must have $c_0=0$. Hence we are done. Now we can use this to calculate the normalized volume. 
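Since the generalized cross product carries the computation from here on, a small numerical helper may be useful for checking the steps that follow. The sketch below is a hypothetical numpy implementation (not part of the proof): it builds the product from $n \times n$ minors and can be used to confirm the orthogonality and volume properties quoted earlier.

```python
import numpy as np

def cross_general(V):
    # Generalized cross product of the n rows of an n x (n+1) matrix V:
    # coordinate k carries an alternating sign (-1)^k (0-based here; the
    # overall sign convention does not affect orthogonality or the norm)
    # times the n x n minor obtained by deleting column k.
    n, m = V.shape
    assert m == n + 1
    return np.array([(-1) ** k * np.linalg.det(np.delete(V, k, axis=1))
                     for k in range(m)])

# Sanity checks on random vectors (illustrative):
# V = np.random.default_rng(1).standard_normal((3, 4))
# z = cross_general(V)
# np.allclose(V @ z, 0)                                            # orthogonal to every v_i
# np.isclose(np.linalg.norm(z), np.sqrt(np.linalg.det(V @ V.T)))   # equals Vol(v_1,...,v_n)
```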
\[volume\] The volume of the parallelotope spanned by $e_1,...,e_n$ is $\sqrt{q_0^2+...+q_n^2}$. The coordinates of $z=e_1 \times \cdots \times e_n$ will be (modulo a sign) the $n \times n$ minors of the matrix $A$ with row $i$ equal to $e_i$. $$\begin{aligned} A= \begin{bmatrix} -\frac{q_1}{q_{01}} & \frac{q_0}{q_{01}} & 0 & 0 & \cdots & 0\\ x_{20}-\frac{\delta}{q_0} & x_{21} & \frac{q_{01}}{q_{012}} & 0 & \cdots &0 \\ x_{30}-\frac{\delta}{q_0} & x_{31} & x_{32} & \frac{q_{012}}{q_{0123}} & \ddots &0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & 0 \\ x_{n0}-\frac{\delta}{q_0} & x_{n1} & x_{n2} & x_{n3} & \cdots & \frac{q_{0,...,n-1}}{q_{0,...,n}} \end{bmatrix}\end{aligned}$$ Set $z=(z_0,...,z_n)$. We see immediately that $z_0=q_0$ and $z_1=q_1$, since the corresponding minors are lower triangular and $q_{0,...,n}=1$. To calculate $z_s$ we get, by expanding along the columns from the right, that $z_s=(-1)^sq_{0,...,s}\det(B_s)$ where $B_s$ is the $s \times s$ submatrix from the upper left corner of $A$. Consider such a $B_s$: $$\begin{aligned} B_s= \begin{bmatrix} -\frac{q_1}{q_{01}} & \frac{q_0}{q_{01}} & 0 & 0 & \cdots & 0\\ x_{20}-\frac{\delta}{q_0} & x_{21} & \frac{q_{01}}{q_{012}} & 0 & \cdots &0 \\ x_{30}-\frac{\delta}{q_0} & x_{31} & x_{32} & \frac{q_{012}}{q_{0123}} & \ddots &0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \frac{q_{0,...,s-2}}{q_{0,...,s-1}} \\ x_{s0}-\frac{\delta}{q_0} & x_{s1} & x_{s2} & x_{s3} & \cdots & x_{s(s-1)} \end{bmatrix} \end{aligned}$$ Enumerating the columns $0,...,s-1$, after multiplying column $i$ by $q_i$ (thus changing the determinant by a factor of $q_0 \cdots q_{s-1}$) for all $i$, observe that, by the construction of $e_i$, the sum of all rows except the last one is $0$. For $i=0,...,s-2$ do successively the column operation: add column $i$ to column $i+1$. This will not change the determinant, and observe that by the remark about the row sums, the new matrix will be lower triangular. Thus the determinant will be the product of the diagonal elements. Diagonal entry number $r$ will be equal to $x_{r0}q_0 -\delta +x_{r1}q_1+...+x_{r(r-1)}q_{r-1}$, which by construction equals $-\frac{q_{0,...,r-1}}{q_{0,...,r}}$. So we get $$\frac{1}{q_0 \cdots q_{s-1}}\det(B_s)=(-1)^s\frac{q_0 \cdots q_s}{q_{0,...,s}}$$ implying that $z_s=q_s$. The result now follows from the fact that $|z|^2=q_0^2+ \cdots q_n^2$. By this result, we have that a polytope with a Euclidean volume of $\frac{\sqrt{q_0^2+\cdots q_n^2}}{n!}$ will have normalized lattice volume equal to $1$, in the lattice spanned by $H$. Using this we have: The normalized volume of $P$ is $\frac{\delta^n}{q_0 \cdots q_n}$. The edges emanating from $v_0$ are spanned by the vectors $$w_i = (-\frac{\delta}{q_0},0,...,\frac{\delta}{q_i},0,...,0),$$ for $i=1,...,n$. The Euclidean volume of $P$ will be $\frac{|w_1 \times \cdots \times w_n|}{n!}$. The corresponding matrix is $$\begin{aligned} \begin{bmatrix} -\frac{\delta}{q_0} & \frac{\delta}{q_1} & 0 & 0 & \cdots & 0\\ -\frac{\delta}{q_0} & 0 & \frac{\delta}{q_2} & 0 & \cdots & 0\\ -\frac{\delta}{q_0} & 0 & 0 & \frac{\delta}{q_3} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & 0 \\ -\frac{\delta}{q_0} & 0 & 0 & 0 & \cdots & \frac{\delta}{q_n} \end{bmatrix} .\end{aligned}$$ We see that $w_1 \times \cdots \times w_n = (\frac{\delta^n}{q_1 \cdots q_n}, \frac{\delta^n}{q_0q_2 \cdots q_n}, ...,\frac{\delta^n}{q_0 \cdots \hat{q_i} \cdots q_n}, \cdots, \frac{\delta^n}{q_0 \cdots q_{n-1}})$. 
This implies $$|w_1 \times \cdots \times w_n|^2 = \frac{\delta^{2n}q_0^2 + \delta^{2n}q_1^2 + ... + \delta^{2n}q_n^2}{q_0^2 \cdots q_n^2} ,$$ giving $$|w_1 \times \cdots \times w_n| = \frac{\delta^n}{q_0 \cdots q_n} \sqrt{q_0^2+...+q_n^2} .$$ Combining this with the normalization yields the result. Finally we can return to intersection theory on $\p(q_0,...,q_n)$. Recall that we have $D_0^n = \Vol(P) \frac{q_0^n}{\delta^n}$. Inserting the above gives $D_0^n = \frac {q_0^n}{q_0 \cdots q_n}$. Combining this with the previous calculations we get $$E_1 \cdots E_n = \frac {\Pi_{j=1}^n \deg E_j}{q_0^n}D_0^n =\frac {\Pi_{j=1}^n \deg E_j}{q_0^n} \frac {q_0^n}{q_0 \cdots q_n} = \frac {\Pi_{j=1}^n \deg E_j} {q_0 \cdots q_n}$$ thus we are done.
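As a quick sanity check of both the volume computation and the final formula, the following illustrative sketch (assuming Python 3.9+ for `math.lcm`; not part of the proof) recomputes $\Vol(P)$ and $D_0^n$ from the edge vectors $w_i$ and evaluates the intersection number of Theorem \[Bezout\] for a given weight vector.

```python
import numpy as np
from math import lcm, prod
from fractions import Fraction

def check_volumes(q):
    # Recompute the normalized volume of P and D_0^n for weights q = (q_0,...,q_n).
    q = [int(v) for v in q]
    n = len(q) - 1
    delta = lcm(*q)
    W = np.zeros((n, n + 1))                 # edge vectors of P emanating from v_0
    W[:, 0] = -delta / q[0]
    for i in range(1, n + 1):
        W[i - 1, i] = delta / q[i]
    euclid = np.sqrt(np.linalg.det(W @ W.T))         # |w_1 x ... x w_n|
    vol_P = euclid / np.sqrt(sum(v * v for v in q))  # one lattice unit of H is sqrt(sum q_i^2)/n!
    D0_n = vol_P * q[0] ** n / delta ** n
    return vol_P, D0_n    # expected: delta^n/(q_0...q_n) and q_0^n/(q_0...q_n)

def weighted_bezout(q, degrees):
    # Intersection number E_1 ... E_n on P(q_0,...,q_n) from the theorem.
    return Fraction(prod(degrees), prod(q))

# Examples (illustrative):
# check_volumes([1, 2, 3])            -> (6.0, 0.1666...), since delta = 6
# weighted_bezout([1, 1, 1], [2, 3])  -> 6   (classical Bezout in P^2)
# weighted_bezout([1, 1, 2], [2, 4])  -> 4
```

For instance, on $\p(1,1,2)$ two curves of degrees $2$ and $4$ meet with total intersection number $4$, while fractional values such as $E_1\cdot E_2=\frac{1}{6}$ for two degree-one divisors on $\p(1,2,3)$ simply reflect that the product is computed in $A^\bullet(X_\Sigma)_{\mathbb{Q}}$.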
--- address: | $^{1}$ Department of Astronomy, University of Maryland, College Park, MD 20742, USA\ $^{2}$ CRESST II, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA[; [email protected]]{} --- Introduction ============ The morphology of the large-scale Galactic magnetic field (GMF) is surprisingly poorly understood for such an important component of the Milky Way’s interstellar medium (ISM). There is a long list of topics in Galactic astrophysics that currently depend on an incomplete understanding of the GMF, such as disk dynamics, cosmic-ray propagation, the turbulent ISM, molecular cloud collapse, star formation, supernova remnant evolution, etc. There is also a list of studies in the literature modeling the GMF using a variety of observational tracers and parametrized morphological forms. For earlier reviews, see, e.g., [@haverkorn:2014; @Beck:2003ke] and references therein. In this review, I will discuss modeling work that is either relatively recent or still being used In addition to its importance in its own right, the GMF has a significant impact on several extragalactic observations because of effects it can have on the observables or because of the foreground confusion it adds. The cosmic microwave background (CMB) is the most obvious example, since we observe the CMB in the foreground minimum between the synchrotron emission that dominates in the radio (Section \[sec:obs\_sync\]) and the dust emission that dominates the higher frequencies (Section \[sec:obs\_dust\]). Another cosmological problem impacted by the Galaxy foregrounds is the study of the recombination epoch using redshifted 21cm emission, which may be contaminated by Galactic synchrotron emission that is orders of magnitude brighter ([@Zaroubi:2012kt] Section 5.5). The search for the sources of the highest energy particles in the Universe, ultra-high-energy cosmic rays (UHECRs) is also complicated by the fact that these particles are deflected by magnetic fields as they propagate to the Earth, so back-tracing them requires an accurate GMF model (see Section \[sec:crs\]). These needs have driven some of the modeling work in the field and will continue to do so. The variety of observables and the variety of contexts where the GMF is important explain the variety of modeling efforts in the literature. Some of these analyses focus on only one part of the problem and, as in the case of the proverbial elephant, find an answer that is incomplete. All of the analyses require certain assumptions about the distributions of particles associated with the observables, whether thermal electrons, relativistic cosmic rays, or dust grains. These assumptions mean that there remain hidden degeneracies in the parameter space and that different analyses come up with different measures even for fundamental quantities such as the average strength or degree of ordering of the field. However, the information we can gain from these disparate efforts can also help us to tackle the problem from different angles. I will summarize the observables in Section \[sec:observables\] and the components of a physical model of the Galaxy needed to simulate them in Section \[sec:phys\_components\]. I will review some of the different models, their origins and contexts, their advantages and disadvantages, what common features they have, and what we have learned in Section \[sec:models\]. There are a variety of challenges faced by all such modeling efforts, which I will summarize in Section \[sec:challenges\]. 
However, there is also a prospect of making significant progress in the next few years, which I will speculate on in Section \[sec:prospects\].

Observables {#sec:observables}
===========

There are many physical processes affected by the GMF that allow us to observe it indirectly. None of these phenomena give complete and unambiguous information about the large-scale GMF by themselves. This section will briefly summarize the principal phenomena that have been used so far to model the Galactic-scale GMF, with the analyses and results summarized in the next section. A summary of this section is also presented in Table \[tab:observables\]. There are other tracers not discussed here, from Zeeman splitting of masers ([@Fish:2003gp; @Han:2006hs; @Green:2012fm]) to HI velocity gradients ([@GonzalezCasanova:2017fh]). For a thorough review of observations and analyses from small-scale turbulence to the intergalactic medium (IGM), see @han:2017. Here we will focus on tracers that probe the large-scale magnetic field over large portions of the Galaxy.

| Observable | GMF property probed | Dependencies | Pros | Cons |
|---|---|---|---|---|
| Starlight polarization | $B_\perp$ orientation | dust grain properties and distribution | 3D information | sampling limited to a few kpc |
| Faraday rotation (extragalactic) | $B_\parallel$ direction and strength | thermal electron density | good full-sky sampling (42k sources); full LOS through Galaxy | no 3D info along LOS |
| Faraday rotation (Galactic) | $B_\parallel$ direction and strength | thermal electron density | 3D sampling along the LOS through the Galaxy | mostly in Galactic plane; currently insufficient sampling (1k) |
| Faraday tomography (extragalactic) | $B_\parallel$ direction and strength | thermal electron density | probes variations along the LOS through the Galaxy | low physical resolution; not a probe of the Milky Way |
| Faraday tomography (Galactic) | $B_\parallel$ direction and strength | thermal electron density | probes variations in 3D along the LOS | no physical distances associated with Faraday depth variations |
| Diffuse synchrotron emission (radio) | $B_\perp$ orientation and strength (squared) | cosmic-ray density; thermal electron density | goes as $|\mathbf{B}|^2$; full-sky coverage; probes turbulent Faraday effects | no 3D info along LOS; polarization horizon of a few kpc due to Faraday depolarization effects |
| Diffuse synchrotron emission (microwave) | $B_\perp$ orientation and strength (squared) | cosmic-ray density | goes as $|\mathbf{B}|^2$; full-sky coverage; full LOS through the Galaxy; no Faraday rotation | no 3D info along LOS; total intensity contaminated by Bremsstrahlung and AME |
| Diffuse dust emission | $B_\perp$ orientation | dust grain density, properties, environment, and alignment | full-sky coverage; full LOS through the Galaxy; 3D information with extinction surveys (e.g., Gaia); no Faraday rotation | probes only close to Galactic plane $|z|\lesssim 100$ pc |

: Summary of the observational tracers of the large-scale GMF. \[tab:observables\]

Polarized Starlight
-------------------

One of the longest-known observational signatures of the GMF is the polarization of starlight. Amorphous dust grains tend to align their long axes perpendicular to the local magnetic field, and therefore absorption leaves the starlight partly linearly polarized parallel to that local field as projected onto the sky from the point of view of the observer (perpendicular to the line of sight, i.e., $B_\perp$).
The catalog of @heiles:2000, for example, provides measurements to over 9k individual stars over a significant fraction of the sky. The advantage of starlight polarization is that one can in principle use the distance to the star as well as multiple stars in a given direction to extract 3D information about the magnetic field. This analysis depends on knowledge of the distribution of the relevant dust grains, of course, which can be measured via the reddening. Its main disadvantage for modeling the large-scale GMF is sampling, since we cannot make such measurements, much less get accurate distances, for stars in the more distant regions of the disk. We also run out of stars away from the Galactic plane. However, @santos:2011 demonstrate how useful they can be for studying local features, particularly the North Polar Spur (NPS, aka Loop I), and @Pavel:2011fn explore how near-infrared observations can be used to constrain large-scale field properties. See Section \[sec:local\_features\]. @Panopoulou:2018uh demonstrate how combining polarization measurements with Gaia[^1] distances allows a detailed tomography of the ISM.

Faraday Rotation Measures (RMs) of Point Sources
------------------------------------------------

The Faraday rotation of polarized emission can probe the line-of-sight (LOS) component of the magnetic field ($B_\parallel$) through the wavelength-dependent rotation of the orientation of the linear polarization vector of emission originating from a source behind a Faraday rotating medium. For each polarized point source observed at multiple frequency bands, a single RM value can be fit to the polarization orientation as a function of frequency, and this RM represents the integrated product of the thermal electron density and the LOS component of the magnetic field between the observer and the source. This observable is unique in that it probes not only the orientation of the magnetic field but also its direction: a positive (negative) RM implies a field pointed toward (away from) the observer. RMs of Galactic pulsars can give us 3D information if we have a measured distance to each pulsar, but our sampling is currently very limited. RMs of extragalactic point sources are far more plentiful, but each measurement represents the full LOS through the Galaxy (as well as the intergalactic medium) in that direction as well as the Faraday rotation intrinsic to the source itself. We currently have roughly 1k RMs for Galactic pulsars [@han:2018] and 42k for extragalactic polarized sources [@taylor:2009; @Xu:2014ic], and more all the time [@Schnitzeler:2019ge]. The sampling of the Galactic pulsars allows some analysis of the field morphology (see, e.g., work by @han:2018) but is not currently sufficient to use for robust model fitting and to take full advantage of the 3D information this tracer has the potential to offer. (That situation will change, however, with the Square Kilometer Array (SKA)[^2] and its associated pathfinder surveys; see Section \[sec:prospects\].) Extragalactic sources, however, are plentiful over the full sky and particularly well sampled over the Southern and Canadian Galactic Plane Surveys (SGPS and CGPS; [@brown:2003; @brown:2007]). Many catalogs of extragalactic RMs have been collected by @oppermann:2012b and synthesized into a reconstructed sky map of observed total RM through the Galaxy as well as an uncertainty based on the sampling and the variations among the data used in any given region of the sky.
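As a concrete illustration of how an RM accumulates along the LOS, the following sketch evaluates the standard integral $\mathrm{RM}=0.812\int n_e B_\parallel \,\mathrm{d}l$ (in $\mathrm{rad\,m^{-2}}$ for $n_e$ in $\mathrm{cm^{-3}}$, $B_\parallel$ in $\mu$G, and path length in pc) as a simple Riemann sum; the uniform-slab numbers are purely illustrative, not a model of the Galaxy.

```python
import numpy as np

def rotation_measure(n_e, B_par, dl_pc):
    # RM [rad m^-2] = 0.812 * sum( n_e [cm^-3] * B_parallel [muG] * dl [pc] )
    return 0.812 * np.sum(n_e * B_par * dl_pc)

# Illustrative uniform slab: 1 kpc path, n_e = 0.03 cm^-3, B_parallel = +2 muG
# rotation_measure(0.03, 2.0, np.full(1000, 1.0))   # ~ +48.7 rad m^-2 (field toward us)
```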
Diffuse Polarized Synchrotron Emission {#sec:obs_sync} -------------------------------------- Diffuse synchrotron emission dominates the sky in maps from the radio frequencies to the microwave bands. It depends both on the strength and orientation of the magnetic field projected onto the plane of the sky ($B_\perp$). One of the first full-sky maps and one that remains useful today is that of Haslam et al. at 408MHz [@haslam:1982; @remazeilles:2015] giving total synchrotron intensity, $I$, from a combination of ground-based single-dish data. Polarization (in the form of Stokes parameters $Q$ and $U$, or polarized intensity as $PI\equiv\sqrt{Q^2+U^2}$) has been measured over the full sky in the radio at, for example, 1.4GHz by @reich:2001b and @testori:2008, showing large-scale coherent polarization signals in the NPS and Fan regions as well as significant depolarization in the Galactic plane compared to the high latitude sky. This is the main disadvantage of probing the polarization at radio frequencies, where it is easier to do from the ground: there is a so-called polarization horizon [@uyaniker:2003; @Hill:2018gc], which is a function of telescope beam size and observation frequency, beyond which little polarization signal can be observed due to Faraday effects in the turbulent ISM. Space-based microwave observations began with [*WMAP*]{} that showed us the 23GHz sky in polarized emission free of Faraday effects [@wmap_9yr], and similarly at 30GHz by [*Planck*]{}  [@planck:2018]. These data allow us to study the apparent morphology of the magnetic fields projected onto the sky. One of the most important observables, however, that of the polarization fraction, $p\equiv PI/I$, remains unavailable to us. This is because the total intensity sky at microwave bands is dominated by other emission processes, principally thermal Bremsstrahlung emission and the anomalous microwave emission (AME) believed to arise from spinning dust grains. Both processes are thought to produce only unpolarized emission, but their presence makes it difficult to map the synchrotron total intensity in the microwave bands and therefore to estimate the degree of polarization and the field ordering. An additional complexity is the calibration of the zero-level of these maps; see @wehus:2014 for a recent analysis. Most of these radio and microwave observations are not absolutely calibrated, which means that there is an unknown net offset in the datasets, and though this offset is not important for the fitting of morphological features, it is again vital for the polarization fraction and inferred field ordering. If we understood the energy spectrum of the cosmic-ray lepton population that produces the synchrotron emission from the radio to the microwave bands, we could combine the low-frequency total intensity maps (where contamination is minimal) with the high-frequency polarization maps (where Faraday effects are minimal) to measure the polarization fraction. Unfortunately, the shape of the spectrum and its likely turnover around a few GeV are not well understood. Since direct measures of the cosmic-ray electron (CRE) spectrum near the Earth are additionally complicated by solar modulation, and the local spectrum may not be typical of the Galaxy, the synchrotron emission itself may be the best indirect probe of that region of the CRE spectrum [@jaffe:2011] if we can combine information from enough different frequencies around a few GHz. 
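As a concrete aside, the geometric and spectral dependences described above take only a few lines to write down. In the illustrative sketch below, the conventions (angle definition, arbitrary emissivity normalization) are assumptions; for a CRE spectrum $N(E)\propto E^{-s}$ with $s=3$, the emissivity scales as $B_\perp^2$, i.e., the $|\mathbf{B}|^2$ dependence noted in Table \[tab:observables\].

```python
import numpy as np

def polarization_quantities(I, Q, U):
    # Polarized intensity, polarization fraction, and angle psi = 0.5*arctan2(U, Q).
    # For synchrotron (and dust) emission, the plane-of-sky field orientation is
    # perpendicular to psi, in the absence of Faraday rotation.
    PI = np.hypot(Q, U)
    return PI, PI / I, 0.5 * np.arctan2(U, Q)

def sync_emissivity(n_cre, B_perp_uG, nu_GHz, s=3.0):
    # Relative synchrotron emissivity for N(E) ~ E^-s:
    #   j_nu ~ n_cre * B_perp^((s+1)/2) * nu^(-(s-1)/2)   (arbitrary normalization).
    return n_cre * B_perp_uG ** ((s + 1) / 2.0) * nu_GHz ** (-(s - 1) / 2.0)
```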
This situation is improving because of the C-Band All Sky Survey (C-BASS) at 5GHz [@king:2014; @irfan:2015bm], precisely the region most interesting for probing not only the synchrotron spectral turnover but also the regime where Faraday effects go from dominant to negligible in different regions of the plane. Diffuse Polarized Thermal Dust Emission {#sec:obs_dust} --------------------------------------- The same dust grains that polarize background starlight through absorption also produce thermal emission that is polarized perpendicular to the local magnetic field. The observed dust polarization is then another tracer of the orientation of the magnetic field projected onto the sky. It is not thought to be a strong function of the magnetic field strength, but the degree to which the grains tend to align depends on the grain properties and environment in ways not well understood. This was first measured by Archeops [@Benoit:2004] and more recently with the full-sky high-resolution and multi-frequency data of the [*Planck*]{} mission [@pipXIX; @planck:2018]. Though the geometric dependence is similar to that of synchrotron emission (i.e., polarization perpendicular to the $B_\perp$ orientation), this observable is complementary to the synchrotron emission, because it arises from a different region of the ISM, the cold dusty ISM close to the Galactic plane. The [*Planck*]{} data showed [@pipXIX] that though the two tracers correlated in some regions such as the Fan and NPS, they were uncorrelated over much of the sky. This means that we can use the different particle distributions to probe the field along the LOS. Though detailed models of the distribution of CREs are lacking, the Gaia mission is providing new 3D dust models from the extinction measurements of millions of stars [@Lallement:2018kl]. The combination of Gaia and [*Planck*]{} data will allow us to probe the local magnetic field in 3D using the dust emission. This in turn will then help us to determine which aspects of the large angular-scale structure in the synchrotron sky are local (see Section \[sec:local\_features\]) and let us account for them. This in turn would then let us focus on the vertical structure of both dust and synchrotron emission to probe from the disk into the Galactic halo out to a few kpc. Diffuse $\gamma$-ray Emission ----------------------------- Diffuse $\gamma$-ray emission has long been used to study the cosmic-ray population in the Galaxy and is therefore crucial to interpreting the synchrotron emission and studying the GMF. The Fermi[^3] data in particular provide both direct measurements of the local population of CRs as well as the diffuse inverse Compton $\gamma$-ray emission mapped over the full sky. Both have been used to constrain the CR population [@strong:2010], to probe the cosmic-ray spectrum by combining $\gamma$-ray and microwave band observables [@strong:2011], and to fit models of the magnetic field and measure the scale height of the CRE population [@orlando:2013]. It is now being recognized that the question of CR propagation cannot be solved without the combination of $\gamma$-ray and synchrotron emission, as demonstrated in recent work by @Orlando:2019tb. Faraday Tomography/RM Synthesis ------------------------------- With enough spectral resolution, one can do better than fit a single RM value to the variation of the polarization angle with frequency in a given direction. 
A full Fourier analysis converts the emission as a function of frequency into a measure of the polarized intensity along the LOS as a function of Faraday depth. This allows us to study diffuse emission where the synchrotron-emitting regions and the Faraday rotating regions are mixed. The Faraday depth, though it is not a physical distance scale, provides 3D information about the distribution of emitting regions along the LOS. It therefore probes the magnetic fields not only through their Faraday effects but also from the synchrotron emission itself. This sort of analysis avoids the sampling restrictions of pulsar RMs, though in return, it links the Faraday information to the cosmic-ray density distribution. See @Ferriere:2016kt for a brief review of Faraday tomography and its prospects. Supernova Remnants ------------------ The morphology of supernova remnants (SNRs) can be a complementary probe of the GMF as shown by @west:2016. When the SNR expands into the ambient ISM, it will compress the local magnetic field component that is tangent to the shell (perpendicular to the expansion direction). This will in turn produce synchrotron emission that is strongest where the field is most compressed, implying that the morphology of the remnant in the radio is in part determined by the orientation of the ambient field relative to the line of sight. Though there are not many SNRs with such regular morphology, @west:2016 showed that one large-scale GMF model predicted a significantly better agreement with the available observations than another model. This analysis is therefore a useful addition to the toolbox of informative probes of the GMF. Modeling Components {#sec:phys_components} =================== There are no direct measurements of the GMF that do not depend on other components of the ISM, namely on the spatial and spectral distributions of the particles that are also involved. With the observables outlined above, we need to model not only the GMF itself but also: the dust grains that polarize starlight and emit in the submm bands; the warm and hot ionized gas that Faraday rotates polarized emission that propagates through it; the relativistic cosmic-ray leptons that emit the synchrotron emission when interacting with the GMF. We summarize all these components of any GMF modeling effort in this section. Magnetic Field {#sec:B_components} -------------- Any model for the GMF has several components that are useful to distinguish by their effects on observables and that can be generated separately for computational feasibility. This means using ad hoc models rather than full magneto-hydrodynamic (MHD) solutions to the dynamo equations. This section summarizes the main characteristics of the GMF that can be probed and by what observables. A cartoon illustrating these components is shown in Figure\[fig:cartoon\] ![Cartoon illustrating [several geometrical properties of magnetic fields. Panel (a) shows]{} the effective magnetic field components defined by their effects on the indicated observables (from @jaffe:2010 ). [Panel (b) shows the helicity.]{} See Section \[sec:B\_components\]. \[fig:cartoon\] ](cartoon.png){width=".9\textwidth"} ### Definitions: “Random”, “Regular”, “Ordered”, “Striated”...? There is a bit of confusion of terminology in the literature, some of which dates from a time when generally only one observable was studied at a time, and “regular” and “random” were the only two components of the magnetic field that were discussed. 
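To make the division of labor among these ingredients explicit, the toy sketch below strings them together for a single line of sight: given sampled profiles of the thermal electrons, cosmic-ray leptons, and field components, it accumulates an RM and the synchrotron Stokes parameters (dust emission would be accumulated analogously from the grain density). All numbers, conventions, and simplifications here are illustrative; this is not any of the published codes discussed later in this review.

```python
import numpy as np

def integrate_los(n_e, n_cre, B_par, B_perp, psi_B, dl_pc, s=3.0):
    # Toy LOS integration of the main observables, given equal-length arrays:
    #   n_e    thermal electron density [cm^-3]
    #   n_cre  relativistic lepton density (arbitrary units)
    #   B_par  LOS field component [muG];  B_perp  plane-of-sky strength [muG]
    #   psi_B  plane-of-sky field orientation angle [rad]
    #   dl_pc  path-length elements [pc]
    # Neglects Faraday rotation of the emission within the volume
    # (appropriate only at high frequencies).
    rm = 0.812 * np.sum(n_e * B_par * dl_pc)                 # rad m^-2
    emiss = n_cre * B_perp ** ((s + 1) / 2.0) * dl_pc        # relative emissivity
    I = np.sum(emiss)
    p0 = (s + 1.0) / (s + 7.0 / 3.0)                         # intrinsic polarization fraction
    pol_angle = psi_B + np.pi / 2.0                          # synchrotron pol. perp. to B_perp
    Q = p0 * np.sum(emiss * np.cos(2.0 * pol_angle))
    U = p0 * np.sum(emiss * np.sin(2.0 * pol_angle))
    return rm, I, Q, U
```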
However, for different observables, these two terms can be ambiguous. A useful cartoon is shown in Figure\[fig:cartoon\] (originally published by @jaffe:2010) that demonstrates how the polarized emission (synchrotron or dust) cannot distinguish the field direction but only its orientation, while the RMs can, and how this combination of observables then divides the field into three effective components. In brief, the three observables of total synchrotron intensity, polarized synchrotron intensity, and Faraday RMs divide the GMF into three effective field components: - The coherent field is the component whose [*direction*]{} remains coherent over large regions. This is also referred to as the mean field. When observed from a perpendicular direction, the synchrotron polarization adds coherently, and when observed in parallel, the RM adds coherently. See left-most box in Figure\[fig:cartoon\]a. - The isotropic random component represents a zeroth-order simplification for the ISM turbulence, where local field fluctuations are equally likely to be in any direction. Such a component does not have to be single-scale (e.g., as in @sun:2008); one can define an isotropic Gaussian random field (GRF) that encodes correlations as a function of distance but has equal power in all directions (e.g., as in @jaffe:2010). This component contributes only to the total synchrotron intensity but does not add coherently to its polarization or to the RM; see the lower right box of Figure\[fig:cartoon\]a. - The third component can be thought of representing a first-order approximation of the ISM, where local field fluctuations are not isotropic but rather prefer certain [*orientations*]{}. Please note that it does not prefer certain [*directions*]{}; that would simply be part of the coherent component. The definition of this third component is that the RMs must still average to zero but that the polarized emission adds up. See the middle box of Figure\[fig:cartoon\]a. This component has variously been called the “ordered random” (@jaffe:2010), or the “striated” component (@jansson:2012b). The term “anisotropic random” used by @Beck:2003ke can be ambiguous in that it has sometimes been used to refer to the Figure\[fig:cartoon\]a middle component only and sometimes to the total random component, the sum of the middle and right boxes, i.e., the ordered random plus isotropic random. It is best to separate these three effective components clearly in the text, and it is imperative to define explicitly what one means if using the term “regular” or “random”. The importance of separating these components is in measuring their relative amplitudes and understanding the relationships among them, e.g., how much of the regular field may arise from large-scale differential rotation and shear, or shocks that compress the turbulent component along one direction, etc. The next step is then to associate them with the distinct regions or phases of the ISM (see, e.g., @2016arXiv160802398E) to understand their relationship with the other components of the Galaxy. ### Coherent Field The coherent field has been studied with RMs for some time, and the clearest large-scale morphological features are the apparent reversal of the sign of the RMs both across the plane (an anti-symmetry north to south) and reversals along the Galactic plane over relatively short angular distances ($\sim$10$^\circ$). 
The problem is that the sampling of RMs is not sufficient to resolve unambiguously at what distance these reversals happen and therefore what they imply about the large-scale field morphology. Regarding the coherent field strength, this in principle can also be estimated from the RMs, but in practice this is limited by our knowledge of the distribution of the thermal electron population and by its correlations with the field . See discussions by @Beck:2003ke and @sun:2008. ### Isotropic Random Field The isotropic random field manifests in several ways. Firstly, as illustrated in Figure\[fig:cartoon\], this component contributes to the total synchrotron intensity but not to its polarization (or that of dust emission), since the addition of the polarization vectors will average to zero. This means that the polarization fraction of synchrotron emission could be a good probe of the relative strength of this component. In practice, this is complicated, as discussed below. This component can also be probed by measuring the variance of the polarization and RM (e.g., @haverkorn:2004), since though they average to zero, a stronger isotropic random component will increase the variance both across the sky and along the LOS. Again, this is degenerate with variations in the relevant particle distributions and in particular depends on the correlations between the fluctuations in the fields and particles, which are not independent. ### Ordered Random Field Without these three complementary observables, this component cannot be disentangled from the others, but it has now been estimated by several teams (@jaffe:2010, @jansson:2012b, and @orlando:2013). This component is of interest in studying the origin of the turbulence, which is not expected to be isotropic, and the interaction of the small scales with the large-scale dynamics such as shear from differential rotation or compression from spiral arm shocks. These two mechanisms can generate an ordered but random component from a purely isotropic random component. ### Helicity Helicity is a quantity that is recently becoming interesting in studies of magnetic fields, both Galactic and extragalactic, because of its importance to dynamo theory (see, e.g., [@Brandenburg:2004kl]). The three effective field components discussed above are of course an incomplete description of the GMF; they are simply defined by the three observables of total synchrotron intensity, polarized synchrotron intensity, and RM. Helicity is another thing entirely and one that we may be able to probe with either different observables or by looking at these observables differently. Helicity in a field is not defined at a point but within a volume as $$H=\int_V {\mathbf A}\cdot{\mathbf B}dV \label{eq:helicity}$$ where $\mathbf{A}$ is the vector potential and ${\mathbf B}$ is the magnetic field such that ${\mathbf B}=\nabla\times{\mathbf A}$. Observable signatures of this quality of the field are few, but @Volegova:2010go have demonstrated with simulations that helical turbulence would leave a statistical signature in the joint probability distribution of the polarization fraction, $p$, and the Faraday RM. This effect is further discussed theoretically in @Brandenburg:2014gp, which illustrates how this asymmetry arises in a synchrotron-emitting region from the addition of the emission and the Faraday rotation when the field has non-zero helicity. 
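Since [(\[eq:helicity\])]{} involves the vector potential rather than the field itself, it may help to note how $H$ can be evaluated for a field given on a grid. The sketch below is purely illustrative: it assumes a periodic, divergence-free field on a cubic box and adopts the Coulomb gauge, in which $\mathbf{A}_{\mathbf k}=i\,{\mathbf k}\times{\mathbf B}_{\mathbf k}/|{\mathbf k}|^2$; for non-periodic or observationally constrained fields, a gauge-invariant relative helicity would be needed instead.

```python
import numpy as np

def helicity(Bx, By, Bz, L=1.0):
    # H = integral A . B dV on a periodic cube of side L, for a divergence-free
    # field sampled on an n^3 grid.  A is computed in the Coulomb gauge via FFTs:
    #   A_k = i k x B_k / |k|^2   (the k = 0 mode is dropped).
    n = Bx.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                   # avoid 0/0; mode zeroed below
    Bkx, Bky, Bkz = (np.fft.fftn(B) for B in (Bx, By, Bz))
    Akx = 1j * (ky * Bkz - kz * Bky) / k2
    Aky = 1j * (kz * Bkx - kx * Bkz) / k2
    Akz = 1j * (kx * Bky - ky * Bkx) / k2
    for Ak in (Akx, Aky, Akz):
        Ak[0, 0, 0] = 0.0
    Ax, Ay, Az = (np.fft.ifftn(Ak).real for Ak in (Akx, Aky, Akz))
    dV = (L / n) ** 3
    return np.sum(Ax * Bx + Ay * By + Az * Bz) * dV
```

With that definition in hand, the observable signature explored by @Volegova:2010go and @Brandenburg:2014gp can be summarized as follows.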
In brief, the Faraday rotation can either cause the emitted polarization vectors to wind up faster (and therefore cancel more quickly) or more slowly (and therefore add more coherently) depending on the sign of the helicity. This leads respectively to a negative (positive) shift in the PDF for a positive (negative) helicity. (See Figure3 of @Volegova:2010go.) The resulting observable signature is summarized in Figure\[fig:cartoon\]b, where because this observable depends on Faraday rotation, it depends on observing angle as illustrated. Efforts are ongoing by West [et al., in prep]{} to use this method to look for helicity in the large-scale GMF. An intriguing possibility for measuring the helicity of extragalactic magnetic fields through its signature on diffuse $\gamma$-ray emission is discussed in @tashiro:2014 and references therein. However, there is not yet a corresponding effect within the Galaxy. It may become important to find such probes as our large-scale GMF models become more realistic (see Section \[sec:beyond\_adhoc\]) and likewise our treatment of the turbulence at small scales (see Section \[sec:turbulence\]), since helicity is important to both contexts. Thermal Electrons—WHIM ---------------------- The warm/hot ionized medium (WHIM) plays several roles in GMF studies. Firstly, the Faraday RM discussed above depends on the free electrons in this phase of the ISM. Secondly, the WHIM emits thermal Bremsstrahlung emission, a.k.a. free-free emission, from the radio to microwave bands. Its spectrum ($\beta\simeq -2.1$) is not different enough from that of synchrotron emission ($\beta\simeq -2.5$ to 3) for the components to be reliably separated in the Galactic plane. This has implications discussed below in Section \[sec:challenges\]. The most widely used model so far is the “NE2001” of @cordes:2002, where the dispersion measures of Galactic pulsars were used to fit a four-arm spiral model for the distribution of thermal electrons. The model also includes a molecular ring around the inner Galaxy and local features in several directions around the Sun. The vertical scale height was corrected in @gaensler:2008, but the model has otherwise remained the main large-scale model of this component of the ISM. Recently, a new model has appeared from @Yao:2017kb (YMW16). One of the main issues with these models is the uncertainty over how to handle the clumping of the WHIM and the fact that it may be correlated (compression) or anti-correlated (pressure equilibrium) with the small-scale fluctuations in the GMF [@Beck:2003ke; @sun:2008]. Such correlations will introduce biases in the inferred magnetic field strength, and there is no consensus on how to handle this. Dust Grains ----------- The distribution of dust grains in the ISM can be probed by several observables, particularly its thermal emission in the sub-millimeter bands and its absorption of starlight. The difficulty here is that both the environment and the detailed properties of the dust grain populations vary throughout the Galaxy. The [*Planck*]{} observations of polarized dust emission have given us a new tool to study both the properties of the cold dust and its relationship with the magnetic field in the ISM. See the review by Boulanger in this issue. Recent results from the Gaia mission have given us much more detailed 3D information about the dust distribution in the local quadrant of the Galaxy by combining measurements of the dust extinction toward billions of stars with precise distance measures. 
This can be used to map the local ISM in detail as done by @Lallement:2018kl. Galactic Cosmic Rays {#sec:crs} -------------------- The relativistic particles in the ISM that produce the synchrotron emission are thought to be generated by supernovae, but neither the origin nor the propagation of these cosmic rays is well understood. The higher energy particles (greater than a few GeV) produce inverse Compton emission that dominates the diffuse $\gamma$-ray sky, while the lower energy particles dominate the synchrotron emission spectrum in the radio to microwave bands. @Bernardo:2015ik and @orlando:2018 show recent and complementary multi-wavelength analyses of these diffuse CRs with different propagation models. These analyses can inform the modeling of the synchrotron observables, and vice versa. One of the more complicated unknowns is the question of anisotropic cosmic-ray diffusion. Not only is the diffusion physics not well understood, but it determines the dependence of the CR propagation on the magnetic field. Inferring the GMF from synchrotron emission therefore requires understanding the diffusion. It is worth mentioning that there are several CR propagation codes with different input physics being used to constrain the CR distribution with both the diffuse multi-wavelength emission data and the directly detected particle spectrum. These include: the GALPROP[^4] code of @Strong:2009tq that has been used for many years and most recently by @orlando:2018; the DRAGON code of @Evoli:2008hz used by other groups such as @Bernardo:2015ik; and the PICARD code of @Kissmann:2014hy. Many GMF modeling analyses separate the CR modeling from the field modeling (i.e., assume a fixed model for the former), but [@jaffe:2011; @orlando:2013] show that they should be modeled together because of their interdependence. Models and Analyses {#sec:models} =================== There have been several studies of the GMF over the past few decades, and though there are some common features (e.g., spirals in disks), their morphologies have a surprising variety in the details. Some analyses include an exploration of the parameter space and a quantification of statistical uncertainties, but few have accounted for systematic uncertainties or have quantitatively compared the different parametrized models that have been explored independently. These models can be roughly grouped into those that are largely ad hoc constructions built to be compared with specific observations, and those that arise from theoretical work. Clearly, physical constraints such as the divergence-free condition can be enforced on the ad hoc models, and equally clearly, the observations inform the theoretical work. However, the two approaches can be considered complementary, and until they meet in the middle, each will remain a useful point of comparison for the other. The first half of this section reviews the analyses that have been done recently, with emphasis on the pros and cons of each compared to the others. This collection of models shares many common morphological features, and since it is those physical features, and how well they are constrained, that are of interest, we summarize them in the second half of this section. Current Magnetic Field Model Fits --------------------------------- This section presents an incomplete but representative sample of some of the current models in the literature, describing what datasets were used to constrain the models and what particular advantages or disadvantages each analysis had.
Some of these are shown in Figure\[fig:views\] to illustrate the varieties of morphologies. (Please note that this is a biased subset of only those that could be visualized in a consistent way using the modeling code [hammurabi]{}[^5] [@waelkens:2009].) $\begin{array}{ccc} \textrm{(\textbf{a})}& \textrm{(\textbf{b})} & \textrm{(\textbf{c})}\\ \includegraphics[width=45mm]{view_sun_bcoh.png} & \includegraphics[width=45mm]{view_jf12_bcoh.png} & \includegraphics[width=45mm]{view_jaffe_bcoh.png} \\ \textrm{(\textbf{d})}& \textrm{(\textbf{e})} & \textrm{(\textbf{f})}\\ \includegraphics[width=45mm]{view_sun_biso.png} & \includegraphics[width=45mm]{view_jf12_biso.png} & \includegraphics[width=45mm]{view_jaffe_biso.png} \\ \textrm{(\textbf{g})}& \textrm{(\textbf{h})} \\ \includegraphics[width=45mm]{view_hmr_bcoh.png} & \includegraphics[width=45mm]{view_fauvet_bcoh.png} \\ \end{array}$ - @sun:2008 (refined in [@sun:2010], “Sun10”) first used the combination of synchrotron total and polarized intensity (at 408MHz and 23GHz respectively) along with RMs to compare several 3D models of the GMF. They used the NE2001 model for thermal electrons and a simple exponential disk with a power law spectrum of index $p=-3$ for the cosmic rays. This work included an analysis of the impact of the filling factor of the ionized gas in the ISM and examined several models from the literature, both axi-symmetric and bisymmetric spirals. The model they concluded was favored by the data was the “ASS+RING”, based on an axisymmetric spiral disk with field reversals defined in several regions to match the data. The turbulence was modeled with a single-scale random field. This was the first such analysis, though it was not a quantitative model fit, and it assumed a very high local cosmic-ray density to fit the data without an ordered random field component. - @jaffe:2010 (refined in [@jaffe:2011; @jaffe:2013], “Jaffe13”) used these same synchrotron observables and the SGPS and CGPS extragalactic RMs to perform a systematic likelihood exploration in the plane of a 2D model based loosely on previous work by [@broadbent:1990]. It used the NE2001 model for thermal electrons and a Galprop cosmic-ray model from [@strong:2010]. It included an exponential disk to which is added four Gaussian-profiled spiral arms as well as a ring around the Galactic center. This analysis was the first to include realizations of the random components, both isotropic and ordered, based on a Kolmogorov-like GRF, in an MCMC likelihood-space optimization, but only in 2D. The update in @jaffe:2011 additionally constrained the CR lepton break at low energies in one of the first attempts to model the CRs and GMF simultaneously. Then [@jaffe:2013] added the polarized dust information and saw how the different distributions of particles mean that the two observables can perhaps constrain the GMF in different regions of the ISM. These analyses, though, remain limited by the systematic uncertainties of the particle distributions. - @jansson:2012b (refined in [@jansson:2012c], “JF12”) used the synchrotron total and polarized intensity from [*WMAP*]{} 23GHz as well as the 40k extragalactic RMs to perform a systematic likelihood exploration in 3D of a model with both thin and thick disk components, eight spiral arm or inter-arm segments, and an x-shaped halo field. It was based on the NE2001 thermal electron density model (with the scale height correction from [@gaensler:2008]) and a CR model based on the “71Xvarh7S” from Galprop.
It used an analytical method to treat the random field components, and the measured pixel variance was used in the likelihood. (See Section \[sec:turbulence\]) This was the first 3D model optimization with an MCMC likelihood analysis, but the use of the [*WMAP*]{} synchrotron map at 23GHz meant that the extra total intensity foregrounds biased their estimate of the random field component. See also Unger & Farrar below for updates. - @han:2018 used both Galactic and extragalactic radio sources to model the RM reversals in the Galactic plane with a set of spiral arm and inter-arm segments. The analysis used the YMW16 model for thermal electrons. This is not a global GMF model for the Galaxy, but an analysis specifically focused on where the field reversals lie using the distance information from pulsars. - @Terral:2017bx (“TF17”) used the spiraling x-shaped field models derived in [@ferriere:2014] to fit the RM data. They explore both axisymmetric and bisymmetric possibilities. This work represents the first quantitative fitting to models of spiraling x-shaped fields in theoretically derived forms (rather than ad hoc). - [Unger & Farrar]{} [@Unger:2017ty] built on the JF12 work by replacing the ad hoc x-shaped halo field by the models of @ferriere:2014. They also compared the results of fits based on different datasets ([*WMAP*]{} synchrotron total intensity versus 408MHz), different thermal electron models (NE2001 vs. YMW16), and CR distributions from the original work compared to those of [@strong:2010; @orlando:2013]. - @Shukurov:2018vb derived eigenfunctions of the mean-field dynamo equation that can be used to construct any model consistent with those assumptions. Though this analysis does not present one model fit to the data, it provides a framework for fitting more physically realistic models in future with a publicly available software package. Magnetic Field Morphological Features {#sec:model_features} ------------------------------------- The rest of this section discusses the morphological features that are astrophysically interesting and are common among many if not all the different models, such as magnetic arms, reversals, and x-shaped vertical fields. ### Axisymmetric Spirals Simple models began with axisymmetric spirals (e.g., [@page:2007; @sun:2008]) with, e.g., exponential disks, and though one of these alone cannot reproduce all the observables, the morphology remains a component of many different models. Sun10, for example, uses such a model as the basis on which reversals are added in an annular region and/or a spiral arm segment. Jaffe13 adds spiral arms to an axisymmetric spiral base model. JF12 uses multiple disk components to model the GMF in the thin and thick disks, where the spiral arm segments are imposed on the former. Until the GMF can be modeled without parametrized ad hoc models (e.g., using the eigenfunctions of @Shukurov:2018vb), the basic axisymmetric spiral will remain useful. The parameters of the axisymmetric spiral, however, are not yet well constrained because of the uncertainties in the equivalent parameters of the particle distributions. In the case of CRs, for example, the thick disk scale height is not known even to within a factor of two [@orlando:2013], and this translates into a corresponding uncertainty in the magnetic field strength as a function of height above the disk. 
The thermal electron density is better known; the Sun10 model, for example, was corrected following a factor-of-two change in the scale height of the electron density model, but as discussed by those authors, some uncertainty remains. Recent analysis by @Sobey:2019bz of pulsar data from LOFAR[^6] estimates the GMF scale height assuming the @Yao:2017kb model for thermal electrons, but the paper also discusses how these systematic uncertainties may affect the estimates of both the scale height of the coherent magnetic field and its overall strength. The pitch angle of the spiral is often assumed to be [$-11.5^\circ$]{} in the disk (Sun10, JF12, Jaffe13) following the NE2001 electron density model, and the RMs are consistent with this. @steininger:2018 showed that when allowed to vary, this pitch angle is not constrained by full-sky maps of RM and synchrotron emission, though this may be because the pitch angle has one value in the Galactic plane on average and another value in the local neighborhood, which dominates the measurements at higher latitudes. Though the model fits discussed above provide values for these parameters, some of them with small statistical error bars, these systematic uncertainties make it difficult to conclude that the parameters are really constrained. ### Spiral Arms Though we have mapped out the spiral structure of our Galaxy’s stellar component, it is harder to determine whether the GMF has a similar structure because of the difficulty of determining distances for the corresponding measurements. In several external galaxies that we can see face-on in synchrotron emission, the fields appear to be strongly ordered in spiral arm structures that may or may not be coincident with the material spiral arms. Observations of these so-called magnetic arms are reviewed by @beck:2013. The question is of particular interest for what the magnetic arms may say about the mean-field dynamo, as discussed by @Moss:2013tb, for example. For this reason, most models of the GMF include magnetic arms of some sort, whether as explicit bisymmetric spirals, ad hoc but continuously defined spiral arms [@jaffe:2013], or as discontinuous segments [@jansson:2012b; @han:2018]. If the extragalactic RM features along the plane are indeed tracing these large-scale arms, then the models largely agree on where the arms with the strongest coherent fields are. However, since the models so far generally constrain the magnetic arms to follow the disk field pitch angle, the same systematic uncertainties as above apply. Comparison of the polarized emission of synchrotron and dust has the potential to probe whether the two components come from different spiral arm regions, as discussed by @jaffe:2013, but again, this depends on a better understanding of the field ordering. ### Reversals As reviewed in @haverkorn:2014, one of the most obvious features of the sky traced by RMs toward Galactic pulsars as well as extragalactic radio sources is the clear evidence of reversals in the large-scale GMF along the Galactic plane. Though it remains difficult to determine the distance to these reversals, @han:2018 use Galactic pulsars with distance estimates to model the reversals with alternating magnetic arms and inter-arm regions. @Ordog:2017wc recently added the analysis of the RM gradient in the diffuse polarized radio emission. Their findings highlight the importance of modeling these reversals in 3D and connecting them to dynamo theory.
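To fix ideas, the sketch below shows the basic ingredients shared by these disk-field parametrizations: an axisymmetric spiral with a constant pitch angle, an exponential radial profile, and a single reversed annulus in the spirit of the ASS+RING morphology. It is a toy illustration, not any of the fitted models above; every numerical value (amplitude, radii, scale lengths) is a placeholder.

```python
import numpy as np

def ass_ring_field(x, y, B0=2.0, pitch_deg=-11.5, r_sun=8.5, r_scale=8.5,
                   ring_in=5.0, ring_out=7.0):
    """Toy axisymmetric spiral (ASS) disk field with one reversed annulus.
    Positions in kpc, field in microgauss; all parameter values are
    placeholders, not fitted values from any published model."""
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    p = np.radians(pitch_deg)
    amp = B0 * np.exp(-(r - r_sun) / r_scale)                   # exponential radial profile
    amp = np.where((r > ring_in) & (r < ring_out), -amp, amp)   # reversed ring
    # Decompose onto cylindrical unit vectors, then rotate to Cartesian components.
    b_r, b_phi = amp * np.sin(p), amp * np.cos(p)
    bx = b_r * np.cos(phi) - b_phi * np.sin(phi)
    by = b_r * np.sin(phi) + b_phi * np.cos(phi)
    return bx, by

# Field at a few galactocentric radii along the x axis (inside and outside the ring):
x = np.array([3.0, 6.0, 8.5])
print(ass_ring_field(x, np.zeros_like(x)))
```

In fitted models the reversal radii, arm geometry, and amplitudes are of course free parameters constrained by the data discussed above.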
If indeed the GMF reversals are a feature of the large-scale Galactic structure and can be defined as reversals between magnetic arms, then we can perhaps learn about such structures from external galaxies. See @beck:2013 for a review of how observations of the RM of the diffuse emission can be combined with the rotational velocity information, although these observations are not yet sensitive enough to confirm the sorts of reversals we see in our own Milky Way. The modeling projects discussed above include large-scale GMF reversals as either magnetic spiral arms or as annuli or both. However, only @han:2018 use the distance information from Galactic pulsars that can constrain where they are. Models that use only extragalactic RMs and assume that the field along an entire magnetic arm is oriented the same way (Sun10, JF12, Jaffe13) do agree on which segments are reversed. However, the Han et al. analysis considers the direction in different sections of each arm, and these directions are neither required, nor found empirically, to be uniform along each arm. Increased pulsar sampling will help, as will adding information from other observables such as Zeeman splitting of masers [@Fish:2003gp; @Han:2006hs; @Green:2012fm]. ### Vertical (Poloidal) Field Early models fit to the GMF included no vertical component to the field, because starlight polarization observations clearly showed a field that remains parallel to the Galactic plane in the disk, and the strongest synchrotron emission in the plane implied the same. Observations of external galaxies seen edge-on in radio polarization initially showed a field largely parallel to the disk. However, more sensitive radio observations of external galaxies where fainter emission from the “halo” or “thick disk” component could be observed showed an x-shaped magnetic field structure. (Again, see @beck:2013 for a review.) Such a vertical component is again connected with the Galactic dynamo and winds. The @sun:2010 model included such a vertical component, as did JF12, in both cases as ad hoc x-shaped components. @ferriere:2014 more generally classified the different field morphologies that could have such a vertical component. Both JF12 and the fitting of @Terral:2017bx conclude that such a component is likely favored by observations of our Milky Way. However, again, the systematic uncertainties in the particle distributions keep this question open for the time being. ### Beyond the Ad Hoc {#sec:beyond\_adhoc} @ferriere:2014 began the work of looking beyond ad hoc parametric models by using Euler potentials to define convenient field configurations that could reproduce a spiral and an x-shaped vertical field. The next step was taken by @Shukurov:2018vb: determining the eigenfunctions of the mean-field dynamo equation. These functions can then reproduce any GMF that is physically possible within those assumptions. Though parametrized models will always be useful for studying specific identified features of the large-scale GMF, we should increasingly move beyond them and exploit these more physical representations of the possible morphologies. @Terral:2017bx fit the @ferriere:2014 models to the RM data, and this work can now be combined with other tracers, as partly begun by @Unger:2017ty, who combined the @ferriere:2014 models with the ad hoc JF12 model. ### Turbulent Field {#sec:turbulence} The treatment of the small-scale fluctuations in the GMF is one of the thornier questions that must be addressed in any modeling, even when the large-scale GMF is the only goal.
One reason is that small but local structures project to large angular-scale structures on the sky and can have a large effect on the model fitting if not properly taken into account. Furthermore, there may be systematic correlations between fields and particles on small scales that must be accounted for in modeling the large scales, whether explicitly or statistically. Lastly, when comparing models to data, it must be quantified how far the latter are expected to deviate from the former. The Milky Way should be considered one realization of a galaxy model we are looking for, and that model includes some “galactic variance” due to the expected small-scale fluctuations that are model dependent. The ISM is known to be turbulent at a range of scales (again, see [@haverkorn:2014] and references therein), and this turbulence is neither expected nor observed to be Gaussian. Both properties present a challenge for generating simulated galaxy models and for comparing the data to the simulated observables. Some modeling efforts simply ignore the stochasticity by fitting mean-field models to observables such as averaged RMs that depend only on that coherent field component. This is effectively what Han et al. [@han:2018] do fitting RM vs distance plots for different regions of the Galaxy, and the error bars include the scatter that is partially due to the ISM turbulence. The JF12 model includes an analytic expression for the average contribution to each observable from the turbulence under a few simplifying assumptions. This allows the average contribution to, e.g., synchrotron polarization to be correctly reproduced. To compare that average to our Galaxy that itself is a single realization of a field with a random component, JF12 used the data essentially to bootstrap this statistically, so that the optimized model did take this measured variance into account in the likelihood. However, as they point out and as further discussed in [@pipxlii], the variance itself is an observable that should be used to improve the constraints on the degree of ordering in the magnetic field. The next lowest order approximations are to create realizations of the random component during the simulation process by simply adding a randomly drawn number (usually from a Gaussian distribution), or a set of three for a random vector. This can be done either to every pixel of a simulated sky map, to every point along a simulated LOS, or to every 3D voxel over which the simulated observables are integrated. The first approach is in a sense effectively similar to the JF12 method of simulating the ensemble average observed sky and comparing with a bootstrapped estimate of the variance. The second approach was used by O’Dea et al. @ODea:2011bm (with a further refinement discussed in the next paragraph), which then takes into account LOS averaging effects (i.e., depth depolarization) for each observed pixel. The third approach was used in the modeling of Sun10 and (effectively) in TF17, which then includes some effects of averaging within the observing angle (beam depolarization). Adding information about the two-point correlation function of the turbulence is the next step, i.e., simulating a GRF with a given, e.g., Kolmogorov, power spectrum. O’Dea et al. used such a prescription for generating the 1D turbulence for each LOS. Jaffe13 used this in a 2D analysis restricted to the Galactic plane. 
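To make this concrete, the sketch below shows one way such a realization can be generated in 3D: white Gaussian noise is shaped in Fourier space with a Kolmogorov-like power spectrum and projected to be divergence-free. The grid size, box size, and normalization are illustrative only; none of the analyses cited here use this exact code.

```python
import numpy as np

def kolmogorov_random_field(n=64, box=1.0, b_rms=1.0, seed=0):
    """One realization of an isotropic, divergence-free Gaussian random vector
    field on a periodic n^3 grid with a Kolmogorov-like power spectrum
    P(k) ~ k^(-11/3), i.e. shell-integrated energy spectrum E(k) ~ k^(-5/3)."""
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # placeholder to avoid division by zero

    amp = k2 ** (-11.0 / 12.0)             # sqrt of P(k) ~ k^(-11/3)
    amp[0, 0, 0] = 0.0                     # no power in the k=0 (mean-field) mode
    bk = [np.fft.fftn(rng.standard_normal((n, n, n))) * amp for _ in range(3)]

    # Project out the component along k to enforce div B = 0.
    kdotb = kx * bk[0] + ky * bk[1] + kz * bk[2]
    bk = [bk[i] - kvec * kdotb / k2 for i, kvec in enumerate((kx, ky, kz))]

    b = np.array([np.fft.ifftn(c).real for c in bk])
    b *= b_rms / np.sqrt(np.mean(np.sum(b**2, axis=0)))   # rescale to target rms
    return b  # shape (3, n, n, n)

bx, by, bz = kolmogorov_random_field()
```

In a modeling pipeline, many such realizations (or an analytic average over them) must be compared with the data, which is what the analyses described next do in full 3D.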
The full 3D approach was used in the [*Planck*]{} analysis [@pipxlii] (without any quantitative parameter optimization) and in @steininger:2018 (with a full MCMC likelihood exploration). This last result shows that it is now computationally feasible to do this. The next step will be to include more information than simply the two-point statistics of the magnetic field. Studies of the ISM turbulence have begun to characterize a variety of its statistical properties based on the diffuse synchrotron emission in total [@2017MNRAS.466.2272H] or polarized intensity [@Gaensler:2011ix; @burkhart:2012; @2018ApJ...855...29H]. Likewise, for dust, the [*Planck*]{} collaboration has opened a new window into the turbulence in the colder phase of the ISM with high-resolution maps of the polarized dust emission ([@p18xii] and references therein). These studies make use of MHD simulations with known physical parameters to study how to infer the physics from the observables. Those simulations in turn can encode different assumptions about the turbulence and about the correlations among the relevant physical quantities such as the field strength and direction and the particle distributions (whether thermal electrons, CRs, or dust). See, e.g., @Stepanov:2013ce for comparisons of data and MHD simulations focusing on the cosmic rays, or @pipXX or @Kandel:2017uu for discussions of the dust. With these studies, we can then use the information learned from MHD simulations to define physical parameters of the turbulence that the data may constrain. Challenges {#sec:challenges} ========== The observables available to us for studying the GMF are summarized in Section \[sec:observables\] and in Table\[tab:observables\], including some of their dependencies and drawbacks. These issues are discussed in detail in the [*Planck*]{} paper on GMF modeling @pipxlii and summarized here. Synchrotron and CR spectra -------------------------- The degree to which synchrotron emission is polarized could be a direct tracer of the ordering of the GMF, but only if we can compare the total and polarized emission components at the same frequency. As discussed in Section \[sec:obs\_sync\], however, this is complicated by the presence of Faraday effects at the low frequencies and other emission components at the higher frequencies. To then estimate the field ordering, we need to compare the data at different frequencies and therefore to understand the synchrotron emission spectrum, which in turn depends on the cosmic-ray energy spectrum. Multi-wavelength observations of the synchrotron emission can help us to understand the variations in its spectrum both across the sky and at different energies. Studies such as @kogut:2012 and @fuskeland:2014 have quantified these variations based on available data, and @pipxlii shows how the variations impact the modeling that has been done so far. In particular, the synchrotron spectrum is $\beta\sim-3$ or steeper at high latitudes and high frequencies, but is harder at both low latitudes and low frequencies by of order $\Delta\beta\sim0.1$ or more. This uncertainty in $\beta$ translates into a change in the synchrotron intensity of $\sim$50% when comparing the total synchrotron intensity at 408MHz and the polarized intensity at 30GHz (extrapolating a power law $T\propto\nu^{\beta}$ over that lever arm, a shift of $\Delta\beta=0.1$ changes the predicted intensity by a factor $(30000/408)^{0.1}\approx1.5$), with a corresponding impact on the estimate of the field ordering. An independent and yet related way to make progress on this issue is through modeling of the diffuse $\gamma$-ray emission.
The CRs at the relevant energies can be directly measured near the Earth, but their distribution is strongly affected by solar modulation and therefore is not representative of the ISM in general. With additional observations from [*Voyager 1*]{} [@Cummings:2016ks], the interstellar spectrum in the solar neighborhood but unaffected by that modulation can now be used. The diffuse $\gamma$-ray emission in combination with the directly measured CR lepton spectrum can then be used to study the CR distribution. See, e.g., @orlando:2018 and references therein for an extensive analysis, including a warning that even with data that is unaffected by solar modulation, the spectrum may not be representative of the ISM on average. Please note that the $\gamma$-ray emission in the relatively high-energy Fermi LAT bands is dominated by a different population of CRs than those that produce the synchrotron emission, so further progress may require a medium-energy project that measures the $\gamma$s in the few-MeV range. @orlando:2018 include predictions for next-generation medium-energy $\gamma$-ray observatories that will probe the energy range dominated by the inverse Compton emission of the same CR population. All that can be confidently concluded at this point is that the anisotropic random field component (i.e., that which contributes to synchrotron polarization but not to Faraday RM) is of the same order of magnitude as the isotropic random component. However, refining that estimate will require a re-analysis of the $\gamma$-ray, CRE, and synchrotron data including more intermediate frequencies. With that additional data, we may be able to then trace the variation of the degree of field ordering, e.g., between and among the spiral arms (e.g., as @haverkorn:2004 did with RMs on smaller scales), which will tell us about the relationship between the large-scale Galaxy dynamics and the interstellar turbulence. Local Features {#sec:local_features} -------------- Another challenge in studying the large-scale structure of the GMF is distinguishing morphological features that are large-scale on the sky because they are close by from those that are physically large-scale components of the Galaxy. The NPS is the most obvious example of how such a large angular-scale feature may impact GMF or CMB studies [@Liu:2014td], but it is not the only one. ### Loops and Spurs The ISM turbulence discussed in Section \[sec:turbulence\] is thought to be injected by supernova remnants that expand into the ambient ISM, compress gas and magnetic fields, and start a cascade of turbulence from the scale of the SNR ($\sim$100pc) down to smaller scales following a Kolmogorov-like power law. That discussion of the turbulence refers to the smaller scales where the morphology of the SNR is no longer relevant. However, the SNRs themselves are a significant part of the ISM that cannot be treated either as the large-scale GMF or as the small-scale turbulence. One approach is to mask out regions of the sky that are thought to be dominated by a local object such as the NPS (as done in most of the modeling discussed above). The NPS looked at by @Liu:2014td is only the largest of the loops and spurs identified in radio data for some decades. Mertsch & Sarkar @Mertsch:2013gg studied the impact on the radio sky of a collection of synchrotron-emitting shells distributed roughly as we expect supernovae to be distributed in the Galactic disk. 
The observed power spectrum of the synchrotron emission at 408MHz cannot be explained by a combination of a large-scale disk field and small-scale Gaussian turbulence, which is how it is often modeled. Mertsch & Sarkar showed that it can be better reproduced by the addition of such correlated but turbulent structures as loops and spurs at a variety of scales, most of which cannot be seen by eye in the maps. More recently, @vidal:2015 have studied in detail the numerous loops and spurs visible in the [*Planck*]{} maps in polarization and conclude that even when not bright enough to be obvious by eye, these features will perturb both studies of the GMF as well as the separation of the synchrotron foregrounds from the CMB. It remains to be quantified how such features can affect GMF model fits. For example, the NPS visibly extends nearly from the Galactic plane to the north pole and has an emission ridge that lies near the zero-longitude meridian. This means that the observed anti-symmetry of RMs across the plane toward the inner Galaxy (the butterfly pattern discussed in ) may be affected by this structure [@Wolleben:2010bl]. Understanding the impact of these loops and spurs on the observables is therefore necessary to fitting global models to the GMF and interpreting them physically. ### Fan Region In the outer Galaxy, the most obvious feature of the sky in the radio is the region of polarized emission in the second quadrant extending around the Galactic plane and up to middle latitudes ($b\lesssim30^\circ$). This region is known as the Fan region due to the appearance of radio polarization maps, where the orientations of the polarization vectors “fan” outward from the plane due to Faraday rotation. At higher frequencies where Faraday effects are negligible, this region is very highly polarized by an inferred GMF almost entirely parallel to the Galactic plane, even well off the plane. The distance to this emission region is highly uncertain, and it remains unknown whether it is a local feature or a result of our view of the large-scale GMF in the outer Galaxy. It is highly polarized in dust emission at 353GHz as well, and difficult to reproduce with large-scale GMF models [@pipxlii]. @hill:2017 have compared more recent Global Magneto-Ionic Medium Survey (GMIMS) data with other tracers such as H$\alpha$ and examined depolarization features that can be associated with regions of known distances. They argue that a large fraction of this emission must be coming from a significant distance of $d\gtrsim2$kpc and that its asymmetric distribution about the plane (with significantly more emission above) can be explained by large-scale Galactic warp. Again, it is a better sampling of GMF tracers for which we have distance information (e.g., pulsars and starlight polarization) that is necessary to understand this region and its impact on large-scale GMF fits. ### Local Bubble Another local feature that affects the observations on the largest scales is the impact of what immediately surrounds our own solar system. A variety of studies have probed the local ISM with starlight polarization, dust reddening, Faraday tomography, etc. See @frisch:2011 for a review. Recent data from the Gaia mission to map the distances to billions of stars constitute a significant improvement. @Lallement:2018kl use the dust extinction combined with the Gaia parallax distances to map the nearby dust in 3D. @Alves:2018kw have demonstrated how the polarization of dust emission can be used to study the local magnetic field. 
When combined, these analyses have the potential to greatly improve our local field modeling, which will in turn allow us to better constrain the large-scale GMF. Galactic Center, Outflow, and Fermi Bubbles ------------------------------------------- Though not local, there are other features of the multi-wavelength sky that can impact the fitting of large-scale GMF models. In 2010, the Fermi mission discovered lobes of $\gamma$-ray emission extending to $\sim50^\circ$ above and below the Galactic plane toward the Galactic center, since referred to as the Fermi bubbles [@Su:2010vk]. These have been interpreted as evidence of giant Galactic-scale outflows, and they have been connected with spurs and loops of polarized radio emission seen in the S-Band All Sky Survey (S-PASS) data. As with the NPS, it is difficult to establish distances to these features of the radio or $\gamma$-ray sky. But an outflow of some sort from the Galactic disk is the likely explanation for the vertical field component, i.e., the x-shaped morphology seen in other galaxies and perhaps our own [@jansson:2012b; @Terral:2017bx]. It is, therefore, an outstanding question whether modeling the large-scale GMF must then also include a model of the Fermi bubbles and the possible associated synchrotron emission in the radio and microwave bands. Even in the plane, the central few kpc of the Galaxy constitute a key region we know little about. Most of the above models assume a simple azimuthal field in the central region, or continue the ever-tightening spiral. Approaching the center itself, some models simply set the field to zero, and none of those discussed above include a physical connection to any outflow or x-shaped halo field. More physically motivated formalisms (e.g., @ferriere:2014 or @Shukurov:2018vb) are needed to do this better. Sub-Grid Modeling ----------------- Section \[sec:turbulence\] discussed the complicated question of the turbulence in the magnetized ISM and what we can learn from high-resolution MHD simulations. However, any finite simulation of the large-scale GMF will have a finite resolution below which the GMF cannot be modeled in detail, and this is well above the resolution at which interesting things happen in the MHD simulations. Below this scale, the modeling must encode our knowledge of how the smallest-scale fields interact with the matter on average and use this "sub-grid modeling" to represent the scales we cannot probe. Until we can run a full galaxy MHD simulation from kpc to sub-parsec scales, we can only use these simulations indirectly. With the notable exception of @sun:2008, most modeling assumes uniform magnetic field and particle properties over the smallest resolved grid cell, which for large-scale field modeling can be tens of parsecs. For Faraday rotation, for example, this means assuming a smooth average thermal electron density and computing the RM with no account of the known clumpy nature of the ISM. Any small-scale (anti)correlation of the magnetic field with the particle distributions can result in an over- or under-estimation of the large-scale magnetic field strength.
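To illustrate the sense of this bias, the single-line-of-sight sketch below uses the standard integrals ${\rm RM} = 0.812\int n_e B_\parallel\, dl$ (in rad m$^{-2}$, with $n_e$ in cm$^{-3}$, $B$ in $\mu$G, and $dl$ in pc) and ${\rm DM} = \int n_e\, dl$. The usual estimator $\langle B_\parallel\rangle = {\rm RM}/(0.812\,{\rm DM})$ is an $n_e$-weighted mean of $B_\parallel$, so positively correlated fluctuations bias it high and anti-correlated fluctuations bias it low. The path length, densities, field strength, and fluctuation level are illustrative placeholders, not values from any model above.

```python
import numpy as np

# One illustrative line of sight: correlated small-scale fluctuations in n_e and
# B_parallel bias the mean field inferred from RM/DM.
rng = np.random.default_rng(1)
n_cells, dl = 2000, 1.0             # 2 kpc path sampled in 1 pc steps
ne0, B0, sigma = 0.03, 2.0, 0.3     # mean n_e [cm^-3], mean B_par [uG], fluctuation level

for corr in (+1.0, 0.0, -1.0):      # fully correlated, uncorrelated, anti-correlated
    common = rng.standard_normal(n_cells)
    indep = rng.standard_normal(n_cells)
    ne = ne0 * (1 + sigma * common)
    bpar = B0 * (1 + sigma * (corr * common + np.sqrt(1 - corr**2) * indep))
    rm = 0.812 * np.sum(ne * bpar) * dl          # rad m^-2
    dm = np.sum(ne) * dl                         # pc cm^-3
    print(f"corr = {corr:+.0f}: inferred <B_par> = {rm / (0.812 * dm):.2f} uG "
          f"(true mean = {B0:.1f} uG)")
```

With these numbers the inferred mean field is shifted by roughly $\pm\sigma^2 \simeq \pm10\%$ relative to the true value, which is exactly the kind of systematic effect the correlations discussed here can introduce.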
Specific correlations with the magnetic field that are not taken into account in most of the above modeling studies include: the ionized gas as discussed in [@sun:2008], where a filling factor for the ionized gas was considered and its implications for the estimate of the magnetic field strength; the relativistic cosmic-ray leptons discussed in [@Seta:2017id]; the dust grain density [@Kandel:2017uu] or alignment efficiency [@Fanciullo:2017cx]. A better understanding of these correlations can be encoded in the modeling, or at very least the effects ought to be quantified so that we can estimate their impact on the large-scale GMF fits. Prospects {#sec:prospects} ========= As always, the way forward certainly includes continuing to gather more of the traditional observational tracers, e.g., more pulsar RMs and distances, more starlight polarization measurements and distances, more synchrotron frequencies between the radio and microwave bands, etc. Gaia has already demonstrated how orders of magnitude more stellar distance and extinction measurements can impact our understanding of the local ISM [@Lallement:2018kl]. Combining this with dust polarization measurements from [*Planck*]{} and additional starlight polarization surveys will likewise help us to map the magnetic fields in the solar neighborhood. Next-generation radio surveys are poised to do the same for Galactic pulsars and extragalactic radio sources. The GALFACTS[^7] team have completed their survey to improve our sampling of radio sources by an order of magnitude over the full sky visible from Arecibo. The LOFAR project is now underway to demonstrate the power of not only the idea of linking large numbers of radio antennae in a software telescope but also of opening the low-frequency window onto the Universe. The latter will allow us to probe the more tenuous regions of the ISM. This is one of the first SKA-pathfinder projects testing technology and algorithms for the planned SKA that is expected to detect every pulsar in our Galaxy (that sweeps in our direction [@Keane:2014wk]) as well as orders of magnitude more extragalactic sources. This improved sampling and distance information will help us to isolate the field reversals that we see in the RMs as well as the local features such as bubbles and loops. A new survey of OH Masers [@Green:2012fm] will increase our sampling of Zeeman splitting measurements so that it can also tighten constraints on the large-scale field. Constraints on Galactic cosmic rays will also be improved by additional direct measurements of the local interstellar spectrum by Voyager [@Cummings:2016ks] and future constraints in the medium-energy $\gamma$-rays by, e.g., the proposed AMEGO[^8] mission. When combined with additional synchrotron maps at intermediate (a few GHz) frequencies such as from S-PASS, C-BASS, and QUIJOTE [^9], we should be able to constrain the CR and synchrotron spectra at large scales and therefore the magnetic field ordering, at least on average in the disk. If we then combine this information with the variations in the RMs from the vastly improved sampling from the SKA pathfinders, we may be able to start studying its variations in different regions of the Galaxy in 3D. In addition to more data from the traditional tracers, we also have the prospect of new tracers and methods. 
UHECRs are also deflected in the magnetic field, and though we cannot back-trace the particles to their sources, the anisotropy in the distribution of their arrival directions is a statistical probe of the local magnetic field (where the meaning of “local” depends on the particle rigidity). @hawc:2018 have recently measured the direction of the local interstellar magnetic field from the anisotropy in the HAWC and IceCube data at 10TeV. Multi-messenger astronomy has the potential to identify the sources of these particles by associating them with photons, neutrinos, and gravitational wave events unaffected by the GMF, so the individual UHECR deflections could be used as an additional tracer of the GMF. New techniques include analyses of synchrotron data in new ways, such as the family of tools based on the polarization gradient of @Lazarian:2018bi. See [@Ho:2019vh] for how these methods compare to Faraday tomography, for example. This review has not discussed Zeeman splitting, because generally the number of measurements is not sufficient for probing the large-scale GMF. However, these data can provide crucial independent probes in the regions where we do have them, and the sampling is soon to be improved by new projects such as MAGMO by @Green:2012br. Though this is not a new magnetic field tracer, using a large sampling of measurements to trace the GMF may indeed soon have a new discriminatory power. However, in making progress on understanding the GMF, more data may not be more important than putting together information from different fields and from asking different questions. The IMAGINE project [@imagine] aims not merely to add observables such as UHECRs to the mix but more importantly to provide a common framework for analysis. Just as a decade ago, significant progress was made by several teams combining three complementary observables, so we can expect the next advances to come from a yet more holistic approach. Initial attempts have already been made to fit the CR spectrum simultaneously with the large-scale GMF [@jaffe:2011] and to add the dust polarization [@jaffe:2013], but it was not computationally feasible with the tools available at the time to go beyond a very simple analysis with too many assumptions. @steininger:2018 have published a more efficient platform into which we can plug in the information from the many disparate tracers and explore the likelihood space in a rigorous Bayesian analysis. It is also important to move beyond the ad hoc models and simple field components described in Section \[sec:B\_components\] to include higher-order moments of the random components as well as properties such as helicity. It is also worth mentioning that simulations of Milky Way-like galaxies are becoming more informative about the amplification of magnetic fields during galaxy formation. @Pakmor:2017ia for example show how the Auriga simulations reproduce spiral galaxies with magnetic fields with similar exponential disks to what we observe in our Galaxy and with similar strengths. Whether these simulations will inform GMF modeling work or vice versa is an interesting question. The bottom line for the time being is that modeling the GMF has now reached an interesting and challenging point, where there are both degeneracies and contradictions in the parameter space, but still too many systematic uncertainties to know what to make of them yet. 
However, the uncertainties are being attacked in several ways, with new techniques, new observables, and new combinations of old observables, and the prospects are correspondingly bright for improving our models. [999]{} \[1\][\#1]{} Haverkorn, M. . In [*Magnetic Fields in Diffuse Media*]{}; de Gouveia Dal Pino, E.M.; Lazarian, A., Eds.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2014; pp. 483–506. Beck, R.; Shukurov, A.; Sokoloff, D.; Wielebinski, R. . , [*411*]{}, 99–107. Zaroubi, S. , In [*The First Galaxies. Astrophysics and Space Science Library*]{}, Wiklind T., Mobasher B., Bromm V. Eds., Springer, Berlin, Heidelberg, [**2013**]{}, [*396*]{}, 45 Fish, V.L.; Reid, M.J.; Argon, A.L.; Menten, K.M. . , [*596*]{}, 328–343. Han, J.L.; Zhang, J.S. . , [*464*]{}, 609–614. Green, J.A.; McClure-Griffiths, N.M.; Caswell, J.L.; Robishaw, T.; Harvey-Smith, L. . , pp. 2530–2547, [](http://arxiv.org/abs/1207.3550) ([accessed on Jan.30, 2019]{}) Gonz[á]{}lez-Casanova, D.F.; Lazarian, A. . , [*835*]{}, 41. Han, J.L. . , [*55*]{}, 111–157. Heiles, C. . , [*119*]{}, 923–927. Santos, F.P.; Corradi, W.; Reis, W. . , [*728*]{}, 104. Pavel, M.D. . , p. 21, [](http://arxiv.org/abs/1107.2415) ([accessed on Jan.30, 2019]{}) Panopoulou, G.V.; Tassis, K.; Skalidis, R.; Blinov, D.; Liodakis, I.; Pavlidou, V.; Potter, S.B.; Ramaprakash, A.N.; Readhead, A.C.S.; Wehus, I.K. , [*ApJ*]{} [ **2018**]{}, [*872*]{}, p. 56 [[\[1809.09804\]]{}](http://arxiv.org/abs/1809.09804) ([accessed on Jan.30, 2019]{}) Han, J.L.; Manchester, R.N.; van Straten, W.; Demorest, P. . , [*234*]{}, 11. Taylor, A.R.; Stil, J.M.; Sunstrum, C. . , [*702*]{}, 1230. Xu, J.; Han, J.L. . , [*14*]{}, 942–958. Schnitzeler, D.H.F.M.; Carretti, E.; Wieringa, M.H.; Gaensler, B.M.; Haverkorn, M.; Poppi, S. . , pp. 1293–1309, [](http://arxiv.org/abs/1902.09556) ([accessed on Jan.30, 2019]{}) Brown, J.C.; Taylor, A.R.; Jackel, B.J. . , [*145*]{}, 213–223. Brown, J.C.; Haverkorn, M.; Gaensler, B.M.; Taylor, A.R.; Bizunok, N.S.; McClure-Griffiths, N.M.; Dickey, J.M.; Green, A.J. . , [*663*]{}, 258–266. Oppermann, N.; Junklewitz, H.; Robbers, G.; Bell, M.R.; En[ß]{}lin, T.A.; Bonafede, A.; Braun, R.; Brown, J.C.; Clarke, T.E.; Feain, I.J.; Gaensler, B.M.; Hammond, A.; Harvey-Smith, L.; Heald, G.; Johnston-Hollitt, M.; Klein, U.; Kronberg, P.P.; Mao, S.A.; McClure-Griffiths, N.M.; O’Sullivan, S.P.; Pratley, L.; Robishaw, T.; Roy, S.; Schnitzeler, D.H.F.M.; Sotomayor-Beltran, C.; Stevens, J.; Stil, J.M.; Sunstrum, C.; Tanna, A.; Taylor, A.R.; van Eck, C.L. . , [*542*]{}, 93. Haslam, C.G.T.; Salter, C.J.; Stoffel, H.; Wilson, W.E. . , [*47*]{}, 1–+. Remazeilles, M.; Dickinson, C.; Banday, A.J.; Bigot-Sazy, M.A.; Ghosh, T. . , [*451*]{}, pp. 4311-4327 [](http://arxiv.org/abs/1411.3628v1) ([accessed on Jan.30, 2019]{}) Reich, P.; Testori, J.C.; Reich, W. . , [*376*]{}, 861–877. Testori, J.C.; Reich, P.; Reich, W. . , [*484*]{}, 733–742. Uyaniker, B.; Landecker, T.L.; Gray, A.D.; Kothes, R. . , [*585*]{}, 785–800. Hill, A. , [*6*]{}, 129. Bennett, C.L.; Larson, D.; Weiland, J.L.; Jarosik, N.; Hinshaw, G.; Odegard, N.; Smith, K.M.; Hill, R.S.; Gold, B.; Halpern, M.; Komatsu, E.; Nolta, M.R.; Page, L.; Spergel, D.N.; Wollack, E.; Dunkley, J.; Kogut, A.; Limon, M.; Meyer, S.S.; Tucker, G.S.; Wright, E.L. , [*ApJS*]{} [**2012**]{}, [*208*]{}, 20 . . 
, [](http://arxiv.org/abs/1807.06205) ([accessed on Jan.30, 2019]{}) Wehus, I.K.; Fuskeland, U.; Eriksen, H.K.; Banday, A.J.; Dickinson, C.; Ghosh, T.; G[ó]{}rski, K.M.; Lawrence, C.R.; Leahy, J.P.; Maino, D.; Reich, P.; Reich, W. . , [*597*]{}, A131 [](http://arxiv.org/abs/1411.7616) ([accessed on Jan.30, 2019]{}) Jaffe, T.R.; Banday, A.J.; Leahy, J.P.; Leach, S.; Strong, A.W. . , [*416*]{}, 1152–1162. King, O.G.; Jones, M.E.; Blackhurst, E.J.; Copley, C.; Davis, R.J.; Dickinson, C.; Holler, C.M.; Irfan, M.O.; John, J.J.; Leahy, J.P.; Leech, J.; Muchovej, S.J.C.; Pearson, T.J.; Stevenson, M.A.; Taylor, A.C. . , [*438*]{}, 2426–2439. Irfan, M.O.; Dickinson, C.; Davies, R.D.; Copley, C.; Davis, R.J.; Ferreira, P.G.; Holler, C.M.; Jonas, J.L.; Jones, M.E.; King, O.G.; Leahy, J.P.; Leech, J.; Leitch, E.M.; Muchovej, S.J.C.; Pearson, T.J.; Peel, M.W.; Readhead, A.C.S.; Stevenson, M.A.; Sutton, D.; Taylor, A.C.; Zuntz, J. . , [](http://arxiv.org/abs/1501.06069v1) ([accessed on Jan.30, 2019]{}) Beno[î]{}t, A.e.a. . , [*424*]{}, 571. . . , [*576*]{}, A104. Lallement, R.; Capitanio, L.; Ruiz-Dern, L.; Danielski, C.; Babusiaux, C.; Vergely, L.; Elyajouri, M.; Arenou, F.; Leclerc, N. . , [*616*]{}, A132. Strong, A.; Porter, T.; Digel, S.; J[ó]{}hannesson, G.; Martin, P.; Moskalenko, I.; Murphy, E.; Orlando, E. . , [*722*]{}, L58. Strong, A.W.; Orlando, E.; Jaffe, T.R. . , [*534*]{}, 54. Orlando, E.; Strong, A. . , [*436*]{}, 2127–2142. Orlando, E. , [*Phys. Rev. D*]{}, [**2019**]{} [*99*]{}, 043007 [[\[1901.08604\]]{}](http://arxiv.org/abs/1901.08604) [accessed on Jan.30, 2019]{} Ferriere, K. . , [*767*]{}, 012006. West, J.L.; Safi-Harb, S.; Jaffe, T.; Kothes, R.; Landecker, T.L.; Foster, T. . , [*587*]{}, A148. Jaffe, T.R.; Leahy, J.P.; Banday, A.J.; Leach, S.M.; Lowe, S.R.; Wilkinson, A. . , [*401*]{}, 1013–1028. Sun, X.H.; Reich, W.; Waelkens, A.; En[ß]{}lin, T.A. . , [*477*]{}, 573–592. Jansson, R.; Farrar, G.R. . , [*757*]{}, 14. Evirgen, C.C.; Armstrong Gent, F.; Shukurov, A.; Fletcher, A.; Bushby, P. . , p. arXiv:1608.02398, [](http://arxiv.org/abs/1608.02398) ([accessed on Jan.30, 2019]{}) Haverkorn, M.; Gaensler, B.M.; McClure-Griffiths, N.M.; Dickey, J.M.; Green, A.J. . , [*609*]{}, 776–784. Brandenburg, A.; Subramanian, K. . , [*417*]{}, 1–209. Volegova, A.A.; Stepanov, R.A. . , [*90*]{}, 637–641. Brandenburg, A.; Stepanov, R. . , [*786*]{}, 91. Tashiro, H.; Chen, W.; Ferrer, F.; Vachaspati, T. . , pp. L41–L45, [](http://arxiv.org/abs/1310.4826v4) ([accessed on Jan.30, 2019]{}) Cordes, J.M.; Lazio, T.J.W. . [](http://arxiv.org/abs/astro-ph/0207156) [**2002**]{}. Gaensler, B.M.; Madsen, G.J.; Chatterjee, S.; Mao, S.A. . , [*25*]{}, 184–200. Yao, J.M.; Manchester, R.N.; Wang, N. . , [*835*]{}, 29. Bernardo, G.D.; Grasso, D.; Evoli, C.; Gaggero, D. . , [*2*]{}, 21–26. Orlando, E. . , [*475*]{}, 2724–2742. Strong, A.W.; Moskalenko, I.V.; Porter, T.A.; Johannesson, G.; Orlando, E.; Digel, S.W. , [*preprint*]{}, [**2009**]{} [](http://arxiv.org/abs/0907.0559) ([accessed on Jan.30, 2019]{}) Evoli, C.; Gaggero, D.; Grasso, D.; maccione, l. . , p. 018, [](http://arxiv.org/abs/0807.4730) ([accessed on Jan.30, 2019]{}) Kissmann, R. . , pp. 37–50, [](http://arxiv.org/abs/1401.4035) ([accessed on Jan.30, 2019]{}) Waelkens, A.; Jaffe, T.; Reinecke, M.; Kitaura, F.S.; En[ß]{}lin, T.A. . , [*495*]{}, 697–706. Jansson, R.; Farrar, G.R. . , [*761*]{}, L11. Jaffe, T.R.; Ferri[è]{}re, K.M.; Banday, A.J.; Strong, A.W.; Orlando, E.; Macias-Perez, J.F.; Fauvet, L.; Combet, C.; Falgarone, E. . 
, [*431*]{}, 683–694. Kachelrie[ß]{}, M.; Serpico, P.; Teshima, M. . , [*26*]{}, 378–386, [[\[0510444\]]{}](http://arxiv.org/abs/astro-ph/0510444) [accessed on Jan.30, 2019]{} Fauvet, L.; Mac[í]{}as-P[é]{}rez, J.F.; Aumont, J.; D[é]{}sert, F.X.; Jaffe, T.R.; Banday, A.J.; Tristram, M.; Waelkens, A.H.; Santos, D. . , [*526*]{}, 145. . . , [*596*]{}, A103, [](http://arxiv.org/abs/1601.00546) ([accessed on Jan.30, 2019]{}) Sun, X.H.; Reich, W. . , [*10*]{}, 1287–1297. Broadbent, A.; Haslam, C.G.T.; Osborne, J.L. . , [*3*]{}, 229. Terral, P.; Ferriere, K. . , [*600*]{}, A29. Ferriere, K.; Terral, P. . , [*561*]{}, A100. Unger, M.; Farrar, G.R. . , [](http://arxiv.org/abs/1707.02339v1) ([accessed on Jan.30, 2019]{}) Shukurov, A.; Rodrigues, L.F.S.; Bushby, P.J.; Hollins, J.; Rachen, J.P. , [*submitted to A&A*]{} [ **2019**]{}. [[ \[1809.03595\]]{}](http://arxiv.org/abs/1809.03595). [accessed on Jan.30, 2019]{} Page, L.; Hinshaw, G.; Komatsu, E.; Nolta, M.R.; Spergel, D.N.; Bennett, C.L.; Barnes, C.; Bean, R.; Dor[é]{}, O.; Dunkley, J.; Halpern, M. . , [*170*]{}, 335–376. Sobey, C.; Bilous, A.V.; Grie[ß]{}meier, J.M.; Hessels, J.W.T.; Karastergiou, A.; Keane, E.F.; Kondratiev, V.I.; Kramer, M.; Michilli, D.; Noutsos, A.; Pilia, M.; Polzin, E.J.; Stappers, B.W.; Tan, C.M.; van Leeuwen, J.; Verbiest, J.P.W.; Weltevrede, P.; Heald, G.; Alves, M.I.R.; Carretti, E.; En[ß]{}lin, T.; Haverkorn, M.; Iacobelli, M.; Reich, W.; Van Eck, C. . , [](http://arxiv.org/abs/1901.07738) ([accessed on Jan.30, 2019]{}) Steininger, T.; Ensslin, T.A.; Greiner, M.; Jaffe, T.; van der Velden, E.; Wang, J.; Haverkorn, M.; H[ö]{}randel, J.R.; Jasche, J.; Rachen, J.P. , [**2018**]{}. [[ \[1801.04341\]]{}](http://arxiv.org/abs/1801.04341) ([accessed on Jan.30, 2019]{}) Beck, R.; Wielebinski, R. . In [*Planets, Stars and Stellar Systems*]{}; Oswalt T.D., Gilmore G., Eds.; Springer, Dordrecht [**2013**]{}; pp. 641, [](http://arxiv.org/abs/1302.5663v2) ([accessed on Jan.30, 2019]{}) Moss, D.; Beck, R.; Sokoloff, D.; Stepanov, R.; Krause, M.; Arshakian, T.G. , p. A147 Ordog, A.; Brown, J.C.; Kothes, R.; Landecker, T.L. , [*A&A*]{}, [**2017**]{}, [*603*]{}, A15 [[ \[1704.08663\]]{}](http://arxiv.org/abs/1704.08663)([accessed on Jan.30, 2019]{}) ODea, D.T.; Clark, C.N.; Contaldi, C.R.; MacTavish, C.J. . , [*419*]{}, 1795–1803. Herron, C.A.; Federrath, C.; Gaensler, B.M.; Lewis, G.F.; McClure-Griffiths, N.M.; Burkhart, B. . , [*466*]{}, 2272–2283. Gaensler, B.M.; Haverkorn, M.; Burkhart, B.; Newton-McGee, K.J.; Ekers, R.D.; Lazarian, A.; McClure-Griffiths, N.M.; Robishaw, T.; Dickey, J.M.; Green, A.J. . , [*478*]{}, 214–217. Burkhart, B.; Lazarian, A.; Gaensler, B.M. . , [*749*]{}, 145. Herron, C.A.; Burkhart, B.; Gaensler, B.M.; Lewis, G.F.; McClure-Griffiths, N.M.; Bernardi, G.; Carretti, E.; Haverkorn, M.; Kesteven, M.; Poppi, S.; Staveley-Smith, L. . , [*855*]{}, 29. . . [[ \[1807.06212\]]{}](http://arxiv.org/abs/1807.06212) ([accessed on Jan.30, 2019]{}) Stepanov, R.; Shukurov, A.; Fletcher, A.; Beck, R.; La Porta, L.; Tabatabaei, F. . , [*-1*]{}, 2700. . . , [*576*]{}, A105, [](http://arxiv.org/abs/1405.0872) ([accessed on Jan.30, 2019]{}) Kandel, D.; Lazarian, A.; Pogosyan, D. , [ *MNRAS*]{} [**2017**]{}, [*472*]{}, 1, L10 [[ \[1707.06276\]]{}](http://arxiv.org/abs/1707.06276) ([accessed on Jan.30, 2019]{}). Kogut, A. . , p. 110, [](http://arxiv.org/abs/1205.4041v1) ([accessed on Jan.30, 2019]{}). Fuskeland, U.; Wehus, I.K.; Eriksen, H.K.; N[æ]{}ss, S.K. . , [*790*]{}, 104. 
Cummings, A.C.; Stone, E.C.; Heikkila, B.C.; Lal, N.; Webber, W.R.; Johannesson, G.; Moskalenko, I.V.; Orlando, E.; Porter, T.A. . , [*831*]{}, 18. Liu, H.; Mertsch, P.; Sarkar, S. , [**2014**]{}, [*789*]{}, L29 Mertsch, P.; Sarkar, S. . , [**2013**]{}, [*06*]{}, p. 041 Vidal, M.; Dickinson, C.; Davies, R.D.; Leahy, J.P. . , [*452*]{}, 656–675. Han, J.L.; Manchester, R.N.; Berkhuijsen, E.M.; Beck, R. , [*322*]{}, 98–102. , M.; [Fletcher]{}, A.; [Landecker]{}, T.L.; [Carretti]{}, E.; [Dickey]{}, J.M.; [Gaensler]{}, B.M.; [Haverkorn]{}, M.; [McClure-Griffiths]{}, N.; [Reich]{}, W.; [Taylor]{}, A.R. . , [*724*]{}, L48–L52, [](http://arxiv.org/abs/1011.0341) ([accessed on Jan.30, 2019]{}) Hill, A.S.; Landecker, T.L.; Carretti, E.; Douglas, K.; Schnitzeler, D.H.F.M. . , [*467*]{}, 1–17. Frisch, P.C.; Redfield, S.; Slavin, J.D. . , [*49*]{}, 237–279. Alves, M.I.R.; Boulanger, F.; Ferriere, K.; Montier, L. . . Su, M.; Slatyer, T.R.; Finkbeiner, D.P. , [*1005*]{}, 5480. Carretti, E.; Crocker, R.M.; Staveley-Smith, L.; Haverkorn, M.; Purcell, C.; Gaensler, B.M.; Bernardi, G.; Kesteven, M.J.; Poppi, S. . , [*493*]{}, 66–69. Seta, A.; Shukurov, A.; Wood, T.S.; Bushby, P.J.; Snodin, A.P. . , pp. 4544–4557, [](http://arxiv.org/abs/1708.07499) ([accessed on Jan.30, 2019]{}) Fanciullo, L.; Guillet, V.; Boulanger, F.; Jones, A. . , p. A7, [](http://arxiv.org/abs/1702.08356) ([accessed on Jan.30, 2019]{}) Keane, E.F.; Bhattacharyya, B.; Kramer, M.; Stappers, B.W.; Bates, S.D.; Burgay, M.; Chatterjee, S.; Champion, D.J.; Eatough, R.P.; Hessels, J.W.T.; Janssen, G.; Lee, K.J.; van Leeuwen, J.; Margueron, J.; Oertel, M.; Possenti, A.; Ransom, S.; Theureau, G.; Torne, P. , [**2015**]{}, p. 40 [[ \[1501.00056\]]{}](http://arxiv.org/abs/1501.00056) ([accessed on Jan.30, 2019]{}) . . , [*871*]{}, 96, [](http://arxiv.org/abs/1812.05682) ([accessed on Jan.30, 2019]{}) Lazarian, A.; Yuen, K.H. . , p. 59, [](http://arxiv.org/abs/1802.00028) ([accessed on Jan.30, 2019]{}) Ho, K.W.; Yuen, K.H.; Leung, P.K.; Lazarian, A. , [*submitted to ApJ*]{} [**2019**]{}, [[([accessed on Jan.30, 2019]{}) \[1901.07731\]]{}](http://arxiv.org/abs/1901.07731) Green, J.A.; McClure-Griffiths, N.M.; Caswell, J.L.; Robishaw, T.; Harvey-Smith, L.; Mao, S.A. . , [*10*]{}, 402–402. Boulanger, F.; Ensslin, T.; Fletcher, A.; Girichides, P.; Hackstein, S.; Haverkorn, M.; H[ö]{}randel, J.R.; Jaffe, T.; Jasche, J.; Kachelriess, M.; Kotera, K.; Pfrommer, C.; Rachen, J.P.; Rodrigues, L.F.S.; Ruiz-Granados, B.; Seta, A.; Shukurov, A.; Sigl, G.; Steininger, T.; Vacca, V.; van der Velden, E.; van Vliet, A.; Wang, J. . , [*08*]{}, 049–049. Pakmor, R.; Gomez, F.A.; Grand, R.J.J.; Marinacci, F.; Simpson, C.M.; Springel, V.; Campbell, D.J.R.; Frenk, C.S.; Guillet, T.; Pfrommer, C.; White, S.D.M. . , pp. 3185–3199, [](http://arxiv.org/abs/1701.07028) ([accessed on Jan.30, 2019]{}) [^1]: <https://www.cosmos.esa.int/web/gaia>. [^2]: <https://www.skatelescope.org/>. [^3]: <https://fermi.gsfc.nasa.gov/>. [^4]: <https://galprop.stanford.edu/>. [^5]: <https://sourceforge.net/p/hammurabicode/wiki/Home/>. [^6]: <http://www.lofar.org/>. [^7]: <https://www.ucalgary.ca/ras/GALFACTS>. [^8]: <https://asd.gsfc.nasa.gov/amego/index.html>. [^9]: <http://www.iac.es/proyecto/cmb/pages/en/quijote-cmb-experiment.php>.
--- abstract: 'We consider the differential conductance of a periodically driven system connected to infinite electrodes. We focus on the situation where the dissipation occurs predominantly in these electrodes. Using analytical arguments and a detailed numerical study we relate the differential conductances of such a system in two and three terminal geometries to the spectrum of quasi-energies of the Floquet operator. Moreover these differential conductances are found to provide an accurate probe of the existence of gaps in this quasi-energy spectrum, being quantized when topological edge states occur within these gaps. Our analysis opens the perspective of describing the intermediate-time dynamics of driven mesoscopic conductors as topological Floquet filters.' address: - 'Laboratoire de Physique de l’École Normale Supérieure de Lyon, UMR CNRS 5672, 46 Allée d’Italie, 69007 Lyon, France' - 'CEA/Université Grenoble Alpes, INAC-SPSMS, F-38000 Grenoble, France' author: - 'M. Fruchart' - 'P. Delplace' - 'J. Weston' - 'X. Waintal' - 'D. Carpentier' bibliography: - 'biblioFloquet.bib' title: 'Probing (topological) Floquet states through DC transport' --- Introduction ============ Recently, the possibility of inducing an out-of-equilibrium topological state of matter through irradiation or a periodic driving has stimulated numerous works. While initially the external driving perturbation was used to trigger a phase transition between states of conventional topological order [@Inoue:2010; @Lindner:2011; @Kitagawa:2011], fascinating topological properties specific to driven out-of-equilibrium states were soon identified [@Kitagawa:2010; @Rudner:2013; @Carpentier:2015]. While several proposals to realize and probe these topological states in various artificial systems have turned out to be successful [@Fang:2012; @Kitagawa:2012; @Hauke:2012; @Rechtsman:2013; @Hu:2015; @Karzig:2014; @Reichl:2014; @Jotzu:2014], their realization in condensed matter has proved to be challenging [@Wang:2013; @Onishi:2014]. There is a strong analogy between equilibrium topological insulators and topological driven states. Both require the existence of a gap in the spectrum characterizing their single particle states: topological insulators are band insulators with a gap in the energy spectrum of the single particle Hamiltonian while topological driven states have a gap in the spectrum of the Floquet operator. In both cases, a nontrivial topology manifests itself through the appearance within this gap of robust states located at the edge of the system. However, while in an insulator the gap separates empty states from occupied states, the thermodynamics of gapped periodically driven states is much less understood. Recent studies have stressed the differences between the nature of the states reached at long times in such periodically driven systems and the equilibrium ground states of insulators [@Lazarides:2014; @Dehghani:2014; @Lacedola:2015; @Seetharam:2015]. Here we follow a different route: we focus on the relation between the DC transport of a periodically driven system and its quasi-energy Floquet spectrum in a regime where the times of flight of electrons through the system are shorter than the characteristic inelastic scattering times, which can be the case in mesoscopic systems. This provides a way to avoid the issue of long time dynamics of driven systems, which was raised in recent studies [@Lacedola:2015; @Seetharam:2015; @Dehghani:2015].
Technically, this requires that the dominant perturbation of the unitary evolution of the driven system is the presence of the electrodes: *dissipation should occur in the leads*. From this point of view, the driven system behaves as a topological Floquet filter instead of an out-of-equilibrium steady-state analogue of an equilibrium insulating state. DC transport is known to be an ideal probe of the existence of edge states in topological equilibrium phases realized in condensed matter, particularly in two dimensions. In a seminal paper [@Buttiker:1988], Markus Büttiker demonstrated how the non-local conductances in a Hall bar fully characterize the nature of the quantum Hall effect and the associated chiral edge states. This approach was recently extended to study the quantum spin Hall effect occurring in HgTe/CdTe quantum wells [@Roth:2009; @buttiker2009]. In this time-reversal invariant topological phase, the existence of a Kramers pair of counter-propagating edge states leads to a series of non-local conductances whose experimental observation clearly identified this new phase. For topological driven systems, the situation is more confusing: building on earlier works on the transport through a topological periodically driven state [@Gu:2011; @Kitagawa:2011], recent studies have focused on the transport through a one-dimensional topological superconducting state [@Kundu:2013], the effect on transport of the competition between heating by the drive and the coupling to the leads [@Dehghani:2015] or the quantization of conductances of a topological phase in multi-terminal geometry [@FoaTorres:2014]. It was also proposed to probe quasienergy spectra (and topological edge states) through magnetization measurements [@dahlhaus2015] and tunneling spectroscopy [@fregoso2014]. However, the relation between transport and the existence of topological edge states in periodically driven states remains unclear, and a summation procedure over different energies in the lead was proposed to recover a quantized conductance [@Kundu:2013; @Farrell:2015]. The purpose of our paper is to reconsider the relation between the (non-local) differential conductances of periodically driven systems and their Floquet quasi-energy spectrum, allowing for a direct relation between these differential conductances and the topological indices associated with the spectral gaps. In particular we will establish a protocol in a multi-terminal geometry allowing for this identification. From this point of view, a topological periodically driven system is viewed as a *topological Floquet filter* with selective edge transport occurring for specific voltage biases between a lead and the system. These voltage biases lead to a stationary DC current by counterbalancing the time dependence of Floquet states. From Floquet theory to scattering theory ======================================== Floquet theory for open systems ------------------------------- We consider a periodically driven quantum system connected to $N_{\text{leads}}$ equilibrium electrodes through good contacts with large transmissions. The system is described by a Hamiltonian $\hat{H}^{\text{sys}}(t) - \hat{\Sigma} $ where $\hat{H}^{\text{sys}}(t+T)=\hat{H}^{\text{sys}}(t)$ with $T$ the period of the drive, and $ \hat{\Sigma}$ is a self-energy accounting for the coupling between the system and its environment (e.g. the leads). We assume in the following that this self-energy is dominated by the exchange with the electrons in the leads.
When all characteristic times of the leads are small with respect to the characteristic times of the system, we can use the so-called wide-band approximation [@Kohler:2005] where the self-energy is assumed to be constant in energy: $\hat{\Sigma}(E) \simeq \hat{\Sigma}$. The dynamics of the system is described by the evolution operator $\hat{U}(t,t')$ which obeys the equation $$\ii \hbar \frac{\dd}{\dd t}\hat{U}(t,t') = \left( \hat{H}^{\text{sys}}(t) - \hat{\Sigma} \right) \hat{U}(t,t') . \label{eq:schrodinger_U}$$ Of great importance is the *Floquet operator* which is the evolution operator after one period $\hat{U}(T,0)$. When diagonalizable, it can be decomposed on the left eigenstates $\bra{\tilde{\phi}_\alpha}$ and the right eigenstates $\ket{\phi_\alpha}$ of $\hat{U}(T,0)$ $$\begin{aligned} \begin{split} \hat{U}(T,0) \ket{\phi_\alpha} &= \lambda_\alpha \ket{\phi_\alpha} , \\ \bra{\tilde{\phi}_\alpha}\hat{U}(T,0) &= \lambda_\alpha \bra{\tilde{\phi}_\alpha} , \end{split} \label{eq:lambda}\end{aligned}$$ that constitute a bi-orthonormal basis of the Hilbert space $$\braket{\tilde{\phi}_\alpha \mid \phi_\beta} = \delta_{\alpha \beta} \ ; \ \sum_{\alpha} \ket{\phi_\alpha}\bra{\tilde{\phi}_\alpha} = \Id . \label{eq:basis}$$ The eigenvalues $\lambda_\alpha$ in Eq. (\[eq:lambda\]) are called the Floquet multipliers and read $$\lambda_\alpha = \exp\left[ - \ii \left( \frac{\varepsilon_\alpha}{\hbar} - \ii \gamma_\alpha \right) T \right].$$ The coefficient $\varepsilon_{\alpha}$ is called the quasienergy and $\gamma_{\alpha}$ is its damping rate, whose inverse gives the lifetime of the eigenstate. Note that, the quasienergy being defined through a phase, it is only defined modulo $\hbar\omega$, with $\omega=2\pi/T$ the driving frequency. Any state at arbitrary time $t$ can then be constructed from the eigenstates of the Floquet operator. It is particularly useful to define the left and right Floquet states $$\begin{aligned} \begin{split} \ket{u_\alpha(t)} &= \ee^{ \ii \left( \varepsilon_\alpha/\hbar - \ii \gamma_\alpha \right) t} \ \hat{U}(t,0)\ket{\phi_\alpha} , \\ \bra{\tilde{u}_\alpha(t)} &= \ee^{- \ii \left( \varepsilon_\alpha/\hbar - \ii \gamma_\alpha \right) t}\ \bra{\tilde{\phi}_\alpha} \hat{U}(0,t) , \end{split} \label{eq:u_def}\end{aligned}$$ which are periodic in time, $\ket{u_\alpha(t)}=\ket{u_\alpha(t+T)}$ (same for $\bra{\tilde{u}_\alpha(t)}$), so that they can be expanded in Fourier series $$\begin{aligned} \begin{split} \ket{u_{\alpha}(t)} &= \sum_{p \in \mathbb{Z}} \ee^{-\ii p \omega t} \ket{u_{\alpha}^{(p)}} , \\ \bra{\tilde{u}_{\alpha}(t)} &= \sum_{p \in \mathbb{Z}} \ee^{\ii p \omega t} \bra{\tilde{u}_{\alpha}^{(p)}} , \label{eq:u_fourier} \end{split}\end{aligned}$$ where the harmonics read $$\begin{aligned} \begin{split} \ket{u_{\alpha}^{(p)}} = \frac{1}{T} \int_{0}^{T} \dd t \; \ee^{\ii p \omega t} \ket{u_{\alpha}(t)} \\ \bra{\tilde{u}_{\alpha}^{(p)}} = \frac{1}{T} \int_{0}^{T} \dd t \; \ee^{-\ii p \omega t} \bra{\tilde{u}_{\alpha}(t)} . \end{split} \label{eq:harmonics}\end{aligned}$$ From Eqs. (\[eq:basis\]) and (\[eq:u\_def\]) the evolution operator can be expanded on the Floquet states as $$\hat{U}(t, t') = \sum_{\alpha} \ee^{-\ii \left( \varepsilon_\alpha/\hbar - \ii \gamma_\alpha \right)\left(t-t' \right)} \ket{u_{\alpha}(t)}\bra{\tilde{u}_{\alpha}(t')} .$$ This expression can finally be decomposed on the harmonics of the Floquet states by using Eq. (\[eq:u\_fourier\]) $$\begin{gathered} \label{eq:U_decomposition} \!\!\!\!\hat{U}(t, t') = \!\!\! \sum_{\substack{\alpha \\ p,p' \in \mathbb{Z}}} \!\!\!
\ee^{-\ii \left[ \left( \frac{\varepsilon_\alpha}{\hbar} - \ii \gamma_\alpha \right)(t-t') + \omega (p t - p' t')\right]} \ket{u_{\alpha}^{(p)}}\!\bra{\tilde{u}_{\alpha}^{(p')}}.\end{gathered}$$ In practice, the spectrum of the Floquet operator of the semi-infinite system can be obtained numerically either by a direct computation of $U(T,0)$ (e.g., as a time-discretized version of the infinite product) or through its representation in Sambe space [@sambe73]. Differential conductance ------------------------ Based on a standard formalism, we can calculate analytically the differential conductance of the periodically driven system in a multi-terminal geometry and relate it to the quasienergy spectrum of the system. We follow the standard Floquet scattering formalism [@Moskalets:2002; @Kohler:2005; @Stefanucci:2008; @Gaury:2013; @Kundu:2013; @FoaTorres:2014] to describe the transport properties of this multiterminal setup in a phase coherent regime. We consider the rolling average over a period $T$ (all time-averages in the following are also rolling averages over one driving period) of the current entering each lead labelled by the index $\ell$: $$\mathcal{I}_{\ell}(t) = \frac{1}{T} \int_{t}^{t+T} \dd t' \; \braket{\hat{J}_{\ell}(t')}. \label{eq:averagedCurrent}$$ where $\braket{\hat{J}_{\ell}(t')}$ is the expectation value of the current entering lead $\ell$ at time $t'$. This average current satisfies the relation [@Moskalets:2002; @Kohler:2005; @Stefanucci:2008]: $$\mathcal{I}_{\ell}(t) = \frac{e}{h} \int \dd\energy\; \sum_{\ell' \neq \ell} \left[ \Tpasbar_{\ell \ell'}(t, \energy) f_{\ell'}(\energy) - \Tpasbar_{\ell' \ell}(t, \energy) f_{\ell}(\energy) \right], \label{eq:Il_stat}$$ where $f_{\ell}(\energy)$ is the Fermi-Dirac distribution of the lead $\ell$ assumed to be at equilibrium at the chemical potential $\mu_{\ell}$. The $\Tpasbar_{\ell \ell'}(\energy)$ are the time-averaged transmission coefficients from lead $\ell'$ to lead $\ell$, which will be discussed below. We define the differential conductance $G_{\ell \ell'}(t, E)$ as the sensitivity of the current entering the lead $\ell$ to variations of the chemical potential $\mu_{\ell'}$ of the lead[^1] $\ell'$: $$G_{\ell \ell'} (t, E)\equiv - e \left. \frac{\dd \mathcal{I}_{\ell}}{\dd \mu_{\ell'}} \right|_{\mu_{\ell'}=\mu_{\text{sys}}+E}. \label{eq:diff_cond}$$ Note that this definition is not symmetric in the various chemical potentials $\mu_{\ell}$, as opposed to other definitions used in the literature. In the long time stationary regime on which we focus, $t \to \infty$, the average conductances $G_{\ell \ell'}(t,E)$ are expected to reach a value independent of the choice of origin of time $t$ and associated initial conditions [@Kohler:2005], which we denote by $G_{\ell \ell'}^{\infty}(E)$. We obtain from Eq. (\[eq:Il\_stat\]) the zero temperature time-averaged differential conductances $$\begin{aligned} G_{\ell \ell'}^{\infty}(E) &= \frac{e^2}{h} \Tpasbar_{\ell \ell' }(E) \qquad \text{for $\ell \neq \ell'$} \label{eq:diff_condllp} \\ G_{\ell \ell}^{\infty}(E) &= - \frac{e^2}{h} \sum_{\ell' \neq \ell} \Tpasbar_{\ell' \ell }(E), \label{eq:diff_condll}\end{aligned}$$ which satisfy the rule $\sum_{\ell} G_{\ell \ell'}(E) = 0$ for any $\ell'$ (the average current leaving the system is independent of the chemical potentials in the leads). This formula is analogous to the Landauer-Büttiker formula for the differential conductance of multiterminal equilibrium systems.
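The direct computation of $\hat{U}(T,0)$ mentioned above is straightforward to implement. The following is a minimal NumPy/SciPy sketch, not the implementation used in this work: the Hamiltonian callable `h_of_t`, the constant self-energy matrix `sigma` and the time discretization are illustrative assumptions. It builds the one-period propagator by time slicing and extracts the quasienergies $\varepsilon_\alpha$ and damping rates $\gamma_\alpha$ defined above from its eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
omega = 20.0                      # driving frequency used in this paper (hbar = 1)
T = 2 * np.pi / omega
n_slices = 400                    # number of time slices per period (convergence parameter)

def floquet_operator(h_of_t, sigma):
    """U(T,0) as an ordered product of short-time propagators of H_sys(t) - Sigma."""
    dim = sigma.shape[0]
    U = np.eye(dim, dtype=complex)
    dt = T / n_slices
    for k in range(n_slices):
        t_mid = (k + 0.5) * dt    # midpoint rule on each slice
        U = expm(-1j * dt * (h_of_t(t_mid) - sigma) / hbar) @ U
    return U

def quasienergies(U):
    """Quasienergies (mod hbar*omega) and damping rates from lambda = exp[-i(eps/hbar - i*gamma)T]."""
    lam = np.linalg.eigvals(U)
    eps = -hbar * np.angle(lam) / T          # defined modulo hbar*omega
    gamma = -np.log(np.abs(lam)) / T         # vanishes for a closed (Sigma = 0) system
    return eps, gamma
```

The bi-orthonormal left and right eigenvectors entering Eq. (\[eq:lambda\]) can be obtained in the same spirit from `scipy.linalg.eig(U, left=True, right=True)`.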
Generalized Fisher-Lee relations -------------------------------- The average transmission coefficients $\Tpasbar_{\ell \ell'}(\energy)$ can be related to the Floquet-Green functions of the system, in a way analogous to the case of undriven conductors [@Fisher:1981]. The retarded Green functions in the mixed energy-time representation are defined by $${\hat{\gf}}(t,E) = \int \dd\tau \ \ee^{\ii E\tau/\hbar} \, \gfo(t,t-\tau)$$ where $\gfo(t,t')$ satisfies $$\begin{aligned} \begin{split} \label{eq:def_gf} \left( \ii \hbar \frac{\dd}{\dd t} - \hat{H}^{\text{sys}}(t) + \hat{\Sigma} \right) \gfo(t,t') = \delta(t-t'). \end{split}\end{aligned}$$ It is related to the evolution operator defined in Eq.(\[eq:schrodinger\_U\]) by $${\gfo}(t,t')=\frac{1}{\ii \hbar}\Theta(t-t')\hat{U}(t,t') .$$ From the decomposition Eq.(\[eq:U\_decomposition\]) of $\hat{U}(t,t')$ over the harmonics of the Floquet states, we express the *Floquet-Green functions* $$\label{eq_green_floquet} \hat{\gf}^{(p)}(\energy) = \frac{1}{T} \int_{0}^{T} \dd t\; \ee^{\ii p \omega t} \, {\gf}(t, \energy)$$ as $$\label{eq_green_floquet_evolution} \hat{\gf}^{(p)}(\energy) = \sum_{r, \alpha} \frac{\ket{u_{\alpha}^{(p+r)}}\bra{\tilde{u}_{\alpha}^{(r)}} }{\energy - \left[ \varepsilon_\alpha + r \hbar \omega - \ii \hbar \gamma_\alpha \right]}.$$ The transmission coefficients are expressed as $$\Tpasbar_{\ell' \ell}(\energy) = \sum_{p \in \mathbb{Z}} T_{\ell' \ell}^{(p)}(\energy) , \label{eq:Tbar}$$ where each $T^{(p)}_{\ell' \ell }(E)$ is the transmission coefficient for an electron injected in lead $\ell$ at the energy $E=\mu_{\ell}$ and leaving the system in lead $\ell'$ at the energy $E+p \hbar \omega$, [*i.e.*]{} after having exchanged $p$ quanta with the driving perturbation. They read (see \[sec:appendix\] and [@Kohler:2005; @Stefanucci:2008; @Gaury:2013; @arrachea2006]): $$\label{eq:coefficient_transmission_general} T_{\ell' \ell}^{(p)}(\energy) = \text{Tr} \left[ \hat{\gf}^{{\dagger}(p)}(\energy) \hat{\Gamma}_{\ell'}(\energy + p \hbar \omega) \hat{\gf}^{(p)}(\energy) \hat{\Gamma}_{\ell}(\energy) \right]$$ where $\hat{\Gamma}_{\ell}(\energy)$ is the coupling operator at energy $E$ between the system and the electrode $\ell$. Note that when the rates $\gamma_\alpha$ are sufficiently small for the quasi-energies $\epsilon_\alpha$ occurring in Eq. (\[eq\_green\_floquet\_evolution\]) to be “well-defined”, Eqs. (\[eq\_green\_floquet\_evolution\]), (\[eq:Tbar\]) and (\[eq:coefficient\_transmission\_general\]) imply that the transmission coefficient $\Tpasbar_{\ell' \ell}(\energy)$, and thus the differential conductance $G_{\ell' \ell}(E)$, vanishes whenever the energy $\mu_{\ell}=E$ does not correspond to a quasi-energy $\epsilon_\alpha$ up to a multiple of $\hbar \omega$. This is nothing but the conservation of energy of the incident state, which holds modulo $\hbar \omega$ in a periodically driven system. This property allows one to probe the existence of gaps in the quasi-energy spectrum of the Floquet operator. Note however that to access the whole quasi-energy spectrum, the lead $\ell$ has to be strongly biased (with a bias of order $\hbar \omega$) with respect to the scattering region at $\mu_{\text{sys}}$, a situation far from the standard procedure used to probe equilibrium phases, but required to probe an inherently out-of-equilibrium Floquet state.
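Once the harmonics $\ket{u_{\alpha}^{(p)}}$, $\bra{\tilde{u}_{\alpha}^{(p)}}$ and the coupling matrices $\hat{\Gamma}_{\ell}$ are known, Eqs. (\[eq\_green\_floquet\_evolution\]), (\[eq:Tbar\]) and (\[eq:coefficient\_transmission\_general\]) reduce to a few lines of linear algebra. A minimal NumPy sketch follows; the array layout and the callables `Gamma_in`, `Gamma_out` are our own illustrative assumptions, not part of the formalism above.

```python
import numpy as np

def green_p(E, p, u, ut, eps, gamma, omega, hbar=1.0):
    """G^(p)(E) = sum_{r,a} |u_a^(p+r)><u~_a^(r)| / (E - eps_a - r*hbar*omega + i*hbar*gamma_a).

    u, ut: arrays of shape (n_states, n_harmonics, dim) holding the right/left
    harmonics; the harmonic index runs from -q_max to +q_max.
    """
    n_states, n_harm, dim = u.shape
    q_max = (n_harm - 1) // 2
    G = np.zeros((dim, dim), dtype=complex)
    for a in range(n_states):
        for r in range(-q_max, q_max + 1):
            if not -q_max <= p + r <= q_max:
                continue                      # harmonic p + r lies outside the truncation
            G += np.outer(u[a, p + r + q_max], ut[a, r + q_max]) / (
                E - eps[a] - r * hbar * omega + 1j * hbar * gamma[a])
    return G

def transmission_bar(E, u, ut, eps, gamma, omega, Gamma_in, Gamma_out, p_max=5, hbar=1.0):
    """Time-averaged transmission: sum over sidebands p of
    Tr[G^(p)(E)^dag Gamma_out(E + p*hbar*omega) G^(p)(E) Gamma_in(E)]."""
    total = 0.0
    for p in range(-p_max, p_max + 1):
        G = green_p(E, p, u, ut, eps, gamma, omega, hbar)
        total += np.trace(G.conj().T @ Gamma_out(E + p * hbar * omega)
                          @ G @ Gamma_in(E)).real
    return total
```

Here `Gamma_in(E)` plays the role of $\hat{\Gamma}_{\ell}(E)$ for the injection lead and `Gamma_out` that of $\hat{\Gamma}_{\ell'}$; the cutoff `p_max` truncates the sideband sum of Eq. (\[eq:Tbar\]).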
Moreover, in the presence of a ballistic mode connecting two leads $\ell$ and $\ell'$ such as the edge mode of a topological Floquet state, we expect the corresponding $\Tpasbar_{\ell' \ell}(\energy)$ to be quantized *provided both interfaces are sufficiently transparent*. Note however that, as the states leaving the system have components at energies $E+p\hbar \omega$, this requires a good transparency over a broad range of energies. These two properties allow for a potential probe of topological Floquet states through non-local transport. In the next section, we study this application by a numerical implementation of transport through an AC-driven system. Numerical study =============== In the following, we provide numerical evidence of the correspondence between the differential conductance and the quasi-energy spectrum in a periodically driven system. In particular, we show that a single out-of-equilibrium topological edge state corresponds to a quantized differential conductance. The chirality of these topological edge states is probed in a multi-terminal setup. Model and method ---------------- Following [@Rudner:2013], we use the spin-up restriction of the Bernevig-Hughes-Zhang model of the quantum spin Hall effect in HgTe/CdTe quantum wells [@Bernevig:2006] (referred to as the half-BHZ model). Like the Haldane model [@Haldane:1988], it realizes an anomalous quantum Hall equilibrium phase, but on a square lattice with two orbitals per site, denoted $s$ and $p$. The tight-binding Hamiltonian with nearest- and next-to-nearest-neighbor hoppings (see Fig. \[fig:reseau\]) can therefore be written as a two-by-two matrix in the $(s,p)$ basis as $$\begin{split} &H_{\text{BHZ}} = \sum_{x,y} \biggl[ \left[\left(M - J - 4 B \right) \sigma_z - \mu \sigma_0 \right] \ket{x,y} \bra{x,y} \\ &+T^{(1)}_x \ket{x,y} \bra{x+1,y} +T^{(1)}_y \ket{x,y} \bra{x,y+1} \\ &+T^{(2)} \left( \ket{x,y} \bra{x+1,y+1} + \ket{x,y} \bra{x+1,y-1} \right) \biggr] \\ &+ \text{h.c.} \end{split}$$ with hopping matrices $$\begin{split} T^{(1)}_{x/y} = \ \frac{A}{2 \ii} \sigma_{x/y} + B \sigma_z \text{ and } T^{(2)} = \frac{J}{4} \sigma_z \end{split}$$ where $\sigma_{x,y,z}$ are the Pauli matrices and $\sigma_0$ the identity matrix. The parameters of this Hamiltonian are chosen as $M=-1$, $J = 1.5$, $A = 4$, $B = 1.5$: they correspond to an equilibrium phase which is a trivial insulator. A topological insulating phase can be reached by varying [*e.g.*]{} $M$ such that $0<M/B<4$ or $4<M/B<8$. This trivial equilibrium phase is subjected to a periodic on-site perturbation [@Rudner:2013] $$\Delta H(t) = \sum_{x,y} F \left[ \sin(\omega t) \sigma_x + \cos(\omega t) \sigma_y \right] \ket{x,y} \bra{x,y}.$$ Note that this perturbation does not correspond to the variation of a parameter of the initial Hamiltonian. Throughout our study, we have used a driving frequency $\omega = 20$. Depending on the strength $F$, this perturbation can drive the system either towards a topologically nontrivial out-of-equilibrium state or towards a topologically trivial one. We have chosen two values $F=2$ and $F=8$ of the driving amplitude so that gaps open at $\epsilon= 0$ and $\pm \hbar \omega/2$ in the quasi-energy spectrum.
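For concreteness, the bulk Bloch matrix of this driven model can be assembled directly from the hopping matrices above. The sketch below is our own transcription (plain NumPy, lattice constant set to one, Fourier-sign convention ours); it is independent of the Kwant-based transport calculations described next.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M, J, A, B, mu = -1.0, 1.5, 4.0, 1.5, 0.0     # parameters quoted in the text
F, omega = 8.0, 20.0                          # drive amplitude (topological case) and frequency

onsite = (M - J - 4 * B) * sz - mu * s0
Tx = A / 2j * sx + B * sz                     # hopping along +x
Ty = A / 2j * sy + B * sz                     # hopping along +y
T2 = (J / 4) * sz                             # hoppings along (+1, +1) and (+1, -1)

def h_bloch(kx, ky, t):
    """Driven half-BHZ Bloch matrix H(k) + Delta H(t)."""
    h = onsite.copy()
    for hop, phase in [(Tx, kx), (Ty, ky), (T2, kx + ky), (T2, kx - ky)]:
        h += hop * np.exp(1j * phase) + hop.conj().T * np.exp(-1j * phase)
    h += F * (np.sin(omega * t) * sx + np.cos(omega * t) * sy)
    return h
```

Feeding `lambda t: h_bloch(kx, ky, t)` with $\hat{\Sigma}=0$ into a time-sliced computation of $\hat{U}(T,0)$, such as the sketch given earlier, yields the bulk quasienergy bands; the ribbon spectra of Fig. \[fig\_qe\_spectrum\] require the analogous transverse Hamiltonian of the $W$-site-wide strip instead.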
For each value of $F$ and each gap, we have computed numerically the bulk topological invariant $W_{\varepsilon}[U]$ associated to the quasi-energy gap around $\epsilon$ [@Rudner:2013] to ensure that $F=2$ corresponds to a trivial gapped Floquet state, while $F=8$ corresponds to a topological gapped Floquet state with a non-trivial gap at $\epsilon= 0$. In addition, in the two cases we have computed the quasi-energy spectrum for the driven model in an infinitely long ribbon of width $W=60$ sites (see Fig. \[fig:figure\_geometry\]) by diagonalization in Sambe space with a truncated set of sidebands. The resulting spectra are shown in Fig. \[fig\_qe\_spectrum\]: they show as expected that states located at each edge of the ribbon appear inside the $\epsilon=0$ gap for the topological case ($F=8$), as opposed to the trivial case ($F=2$). Finally, to study transport through the system, leads are attached to a finite-size system. These leads are modeled by a simple tight-binding Hamiltonian on a square lattice with nearest-neighbor hoppings of amplitude $J_0=8$. An onsite potential is added to the lead Hamiltonian to reduce mismatches between the incoming and outgoing states of the leads and the scattering states of the central region. The numerical calculations are performed using the method described in [@Gaury:2013]. Although the technique is based on wavefunctions, it is mathematically equivalent to the Green function approach used in this article (see Eq. (49) of [@Gaury:2013] for the connection with the transmission coefficient as well as section 5.4 for the link with Floquet theory). Our implementation is based on the Kwant package [@Groth:2014]. Probing the quasi-energy bands and gaps through two-terminal differential conductance ------------------------------------------------------------------------------------- The differential conductance $G_{\text{RL}}(t)$ is computed in a two-terminal setup through a sample of width $W=60$ and length $L=30$ sites. Throughout this study, the chemical potential of the undriven system was chosen as $\mu_{\text{sys}}=0$. To get rid of oscillations faster than the driving frequency, which are irrelevant for our purpose, we perform a sliding average over one driving period $T$. After a transient regime, the (averaged) differential conductance $\bar{G}(t)$ converges to a finite value (see Fig. \[fig\_conductance\_time\_short\]). This transient regime can be understood as the time of flight of the state injected at the left lead towards the right lead after the driving perturbation has been turned on and the Floquet states have developed inside the system. When the chemical potential of the incoming lead lies in a topological quasi-energy gap of the scattering region, transport occurs through a chiral state localized near the edge of the sample. This time of flight can easily be estimated. The expected group velocity is extracted from the slope of the quasi-energy dispersion relation from Fig. \[fig\_qe\_spectrum\] through $$v_{\text{g}} = \frac{1}{\hbar} \frac{\dd \varepsilon}{\dd k},$$ and we obtain $v_{\text{g}} = 0.15(1) \, a \omega$ where $a$ is the lattice spacing and $\omega$ the driving frequency. This correlates perfectly with the time $\Delta t$ of the first increase of the conductance from zero after the switching on of the driving field. From the curve in Fig. \[fig\_conductance\_time\_short\] we find $L/\Delta t= 0.14(1) \, a \omega$.
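The quoted group velocity is just the slope of the in-gap edge branch; numerically it amounts to a finite-difference derivative of the quasienergy dispersion. A small sketch follows; the toy branch below merely stands in for the branch actually extracted from the ribbon diagonalization.

```python
import numpy as np

hbar, a, omega = 1.0, 1.0, 20.0
# Placeholder for the chiral in-gap branch epsilon(k) obtained from the ribbon spectrum.
k = np.linspace(-0.6, 0.6, 121)
eps = 0.15 * hbar * a * omega * np.sin(k * a)    # illustrative dispersion only

v_g = np.gradient(eps, k) / hbar                 # v_g = (1/hbar) d(eps)/dk
print("v_g at the gap center:", v_g[np.argmin(np.abs(k))] / (a * omega), "a*omega")
```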
After this transient regime, the differential conductance $G_{\text{RL}}(t)$ reaches a long time stationary limit $G_{\text{RL}}^{\infty}$, as shown in the inset of Fig. \[fig\_conductance\_time\_short\]. As expected, this asymptotic differential conductance is sensitive to the quasi-energy spectrum of the driven system: when in a spectral band, it reaches high values, but it vanishes when the bias $\mu_{\text{L}}- \mu_{\text{sys}}$ lies in a trivial spectral gap of the Floquet operator, as shown in the left plot of Fig. \[fig\_conductance\_time\_short\]. Moreover, when this spectral gap is topological, the associated presence of chiral states at the edge of the system shown in Fig. \[fig\_qe\_spectrum\] (Right) leads to a quantized two-terminal conductance, as shown in the right plot of Fig. \[fig\_conductance\_time\_short\]. To further study the correlation between the differential conductance and the spectral gaps of the system, we have examined the behavior of the long-time limit of this conductance as a function of $E = \mu_{\text{L}}- \mu_{\text{sys}}$. The corresponding results as well as the spectra of the isolated driven models are plotted in Fig. \[fig\_qe\_spectrum\] for both the trivial and the topological gapped Floquet states. We find a strong correlation between the vanishing of both quantities for the topologically trivial case: the differential conductance vanishes only inside a trivial gap, except at the edge of the gap where finite-size effects occur due to imperfect transparencies of the contacts with the leads, as shown in a magnified view around the gap $\epsilon=0$ in Fig. \[fig\_conductance\_bias\_zoom\]. This demonstrates both that the quasi-energy spectrum of the finite system connected to infinite leads is sufficiently close to the spectrum of the isolated infinite strip, and that the differential conductance is an accurate probe of this spectrum for the open system. Moreover, in the topological case the asymptotic differential conductance remains constant, equal to the number $n=1$ of edge states (in units of $e^2/h$), inside the topological gap, as shown in Fig. \[fig\_conductance\_bias\_zoom\]. The small deviations from $G_{\text{RL}}^{\infty} = 1 \, e^2/h$ visible in figure \[fig\_conductance\_bias\_zoom\] are attributed to the finite bandwidth of the leads, which does not completely satisfy the wide-band approximation and leads to imperfect transparencies of the contacts. Multiterminal geometry ---------------------- A crucial characteristic of the edge states occurring in the topological gap of the spectrum shown in Fig. \[fig\_qe\_spectrum\] (Right) is their chiral nature. In the equilibrium case, this property, together with their ballistic propagation, leads to quantized conductances in a Hall bar geometry [@Haldane:1988]. To test the chirality of the topological edge states, we have computed differential conductances in the three-terminal geometry shown in Fig.  \[fig\_conductance\_time\_three\_terminal\_bands\], where the width of the contacts with the electrodes is $W=30$ sites and the total length (between the $L$ and $R$ contacts) is $50$ sites, which corresponds to all three arms having a length of $10$ sites. In this geometry, we monitor the two differential conductances $$\begin{split} G_{R,T} (E) = - e \left. \frac{\dd \mathcal{I}_{R}}{\dd \mu_{T}} \right|_{\mu_{T}=\mu_{\text{sys}}+E} , \\ G_{L,T} (E) = - e \left. \frac{\dd \mathcal{I}_{L}}{\dd \mu_{T}} \right|_{\mu_{T}=\mu_{\text{sys}}+E} .
\end{split}$$ where $\mathcal{I}_{R,L}$ are the currents in the $R,L$ contacts averaged over one period $T$ of the drive (see Eq. (\[eq:averagedCurrent\])). The chemical potential of the system is still set to $\mu_{\text{sys}}=0$ (as if imposed e.g. by a backgate). We consider the case of a topological gap around $\epsilon=0$ (see Fig. \[fig\_qe\_spectrum\] (Right)). First we set the chemical potential of the top lead inside this gap ($\mu_{T}=0$). The time evolution of the differential conductances is shown in Fig. \[fig\_conductance\_time\_three\_terminal\_bands\]. After a transient regime, the differential conductances converge to asymptotic values $G_{\text{LT}}^{\infty} = 0.0002(1) \, e^2/h$ and $G_{\text{RT}}^{\infty} = 0.94(1) \, e^2/h$. The value of $G_{\text{RT}}^{\infty}$ is in agreement with the two-terminal results, while the vanishing of $G_{\text{LT}}^{\infty}$ is in perfect agreement with the chiral nature of the edge state, which moves clockwise for the chosen parameters. This can be contrasted with the case of a chemical potential $\mu_{T}$ set inside a bulk Floquet band, displayed in Fig. \[fig\_conductance\_time\_three\_terminal\_bands\_bulk\]. In this case, after a longer transient regime due to slower group velocities, both conductances converge towards large finite values, confirming the absence of chirality for these bulk states. Finally, to illustrate the spatial structure of these Floquet states we show in the insets of Figs. \[fig\_conductance\_time\_three\_terminal\_bands\] and \[fig\_conductance\_time\_three\_terminal\_bands\_bulk\] a color map of the probability density of the states reached at large times, both inside the topological gap and inside the bulk bands. The chiral nature of the states injected from the top lead is clearly apparent and confirms the transport results. Discussion ========== We have studied the differential conductances $G_{\ell' \ell} (E)$ defined in Eq. (\[eq:diff\_cond\]) for both a two-terminal and a three-terminal geometry. As for equilibrium phases, we have found that these conductances probe the topological nature of a gapped Floquet state: they vanish whenever the chemical potential $\mu_{\ell}$ lies in a topologically trivial gap, while they are quantized for a chemical potential inside a topological gap of quasi-energies. Moreover, the chiral nature of the edge states is reflected in the values of the multi-terminal differential conductances within this gap. These results validate the relation between the DC transport through a periodically driven system and the Floquet description of the dynamics of the isolated system. This requires that the dissipation occurs mostly inside the leads, [*i.e.*]{} that the system is small enough for the travel times through it to be small compared with the time scales of other sources of dissipation (phonons, photons, etc.). In that situation, the differential conductance of the system depends on the intermediate-time evolution of the driven system and can be accurately described starting from the unitary dynamics of the driven system and its associated topological properties [@Rudner:2013; @Carpentier:2015]. This should be the case as long as phase coherence is preserved on the length scale of the sample, even in the presence of dissipation or weak interactions. This result paves the way towards a direct engineering of a topological filter through the implementation of Floquet states.
Our results on the quantization of differential conductances inside a topological gap contrast with some of the recent results on similar systems [@Kundu:2013; @FoaTorres:2014; @Farrell:2015], where a summation procedure over multiple chemical potentials was required to recover a quantized conductance. Note that these previous results used a different measuring scheme for the differential conductance, symmetric in the chemical potentials of the electrodes, which may filter out some of the energies of the outgoing electronic states and thus requires a summation protocol. In our case, a strong and asymmetric voltage bias is used to compensate for the time dependence (through quasi-energies) of the Floquet states. Moreover, we have used in our numerical study a model with a single edge mode located around $k\simeq 0$ as opposed to the double-mode topological gaps used previously. More complex edge structures may well lead to lower transparency of the interfaces for energies in the gap, and thus to non-quantized conductances. The study of the transparency of such an interface between a strongly driven conductor and a DC electrode is a deep and topical subject whose discussion goes beyond the scope of the present study. [**Acknowledgments:**]{} This paper is dedicated to the memory of Markus Büttiker whose pioneering works on mesoscopic systems and in particular on scattering theory of chiral edge channels shaped our understanding of electronic transport. P.D. will always be grateful to him for the trust, enthusiasm and advice he received during his stay as a post-doc assistant in Büttiker’s group. This work was supported by the French Agence Nationale de la Recherche (ANR) under grants SemiTopo (ANR-12-BS04-0007), IsoTop (ANR-10-BLAN-0419) and TopoDyn (ANR-14-ACHN-0031) and by the European Research Council grant MesoQMC (ERC-2010-StG\_20091028 257241). Transmission coefficients {#sec:appendix} ========================= In the following, we extend the approach of [@Kohler:2005] to a quasi-one-dimensional system in the geometry of Fig. \[fig:figure\_geometry\]. The purpose is to describe the scattering through a two-dimensional driven system, viewed as a Chern Floquet filter. We follow closely the derivation of [@Kohler:2005] and only highlight the differences due to the geometry considered. We consider a system composed of a central region $\mathcal{R}$ (the scattering region), which is subjected to a periodic excitation. This region is described by a time-periodic (with period $T=2\pi/\omega$) tight-binding Hamiltonian $$H_{\text{sys}} = \sum_{n n' \in \mathcal{R}} H_{n n'}(t) c_{n}^{\dagger} c_{n'}^{}$$ with $H_{n n'}(t+T) = H_{n n'}(t)$, where $c_{n}$ is the annihilation operator of an electron in a localized state $n$ of the tight-binding model, the index $n$ representing the position on the Bravais lattice as well as internal degrees of freedom (sublattice, orbital, spin, etc.). This central region is connected to $N_{\text{leads}}$ leads described by the Hamiltonian $$H_{\text{leads}} = \sum_{\ell = 1}^{N_{\text{leads}}} \sum_{q \alpha} \energy_{\ell q \alpha} c_{\ell q \alpha}^{\dagger} c_{\ell q \alpha},$$ where $\alpha$ labels the transverse modes of the semi-infinite lead and $q$ is the longitudinal momentum in this lead. We assume that the wavefunctions in the leads read $$\psi_{\alpha q}(x,y) = \frac{1}{\sqrt{L}} \ee^{- \ii q x} \chi_{\alpha}(y),$$ and in particular that the transverse modes $\ket{\chi_{\alpha}}$ do not depend on $q$ and constitute an orthonormal basis of the transverse Hilbert space.
(Notice that this is not always the case, especially when a magnetic field is present [@Groth:2014]). The annihilation operator of a state localized at transverse position $y=y_{\ell}(n)$ at the interface with lead $\ell$ is written as $$c_{\ell, y=y_{\ell}(n)} = \sum_{q \alpha} c_{\ell q \alpha} \chi_{\ell \alpha}^*(n).$$ The internal degrees of freedom in the leads can be taken into account, if needed, by considering more leads. The contacts between the central region and the leads are described by the Hamiltonian $$H_{\text{contacts}} = \sum_{\ell = 1}^{N_{\text{leads}}} \sum_{n \in \mathcal{M}_{\ell}} \sum_{q} V_{\ell} c_{\ell, y=y_{\ell}(n)}^{\dagger} c_{n} + \text{h.c.}$$ where $\mathcal{M}_{\ell}$ describes the set of sites of the central region at the interface with lead $\ell$. In terms of the transverse modes creation/annihilation operators, this Hamiltonian reads $$H_{\text{contacts}} = \sum_{\ell = 1}^{N_{\text{leads}}} \sum_{n \in \mathcal{M}_{\ell}} \sum_{q \alpha} V_{\ell} \chi_{\ell \alpha}^*(n) c_{\ell q \alpha}^{\dagger} c_{n} + \text{h.c.}.$$ Using the approach of [@Kohler:2005], which amounts to describing the correlations of the driven system in terms of the equilibrium noise in the leads, we obtain Eq. (\[eq:Il\_stat\]) of the main text with the transmission coefficients $$T_{\ell \ell'}^{(p)}(\energy) = \!\!\!\! \sum_{\substack{ m, n \in \mathcal{I}_{\ell} \\ m',n' \in \mathcal{I}_{\ell'} }} (\gf_{m m'}^{(p)}(\energy))^* \Gamma^{\ell}_{m n}(\energy + p \hbar \omega) \gf_{n n'}^{(p)}(\energy) \Gamma^{\ell'}_{n' m'}(\energy),$$ that can be written as a trace on the interfaces (Eq. (\[eq:coefficient\_transmission\_general\]) of the main text), $$T_{\ell \ell'}^{(p)}(\energy) = \text{Tr} \left[ [\gf^{(p)}(\energy)]^\dagger \Gamma^{\ell}(\energy + p \hbar \omega) \gf^{(p)}(\energy) \Gamma^{\ell'}(\energy) \right]$$ where $$\Gamma_{\ell}(\energy) = 2 \pi \sum_{q \alpha} \left|V_{\ell}\right|^2 \ket{\chi_{\alpha}}\!\bra{\chi_{\alpha}} \delta(\energy - \energy_{\ell q \alpha}).$$ This expression is in agreement with previous approaches based on different formalisms [@Stefanucci:2008; @Gaury:2013; @arrachea2006]. [^1]: See [@Gaury:2013] for a recent discussion of chemical versus electrical potential drops at the interface between the system and an electrode. Such subtleties are quietly neglected in the following.
[**Deformation quantization and invariant differential operators.**]{} **Panagiotis Batakidis**[^1] **Abstract.** In this note we explain how the techniques of deformation quantization in the sense of Kontsevich can be used to describe the algebra of invariant differential operators on Lie groups. **Version française abrégée.** Soit ${\mathfrak{g}}$ une algèbre de Lie, ${\mathfrak{h}}\subset{\mathfrak{g}}$ une sous-algèbre et $\lambda$ un caractère de ${\mathfrak{h}}$. En prenant comme motivation la quantification par déformation de Kontsevich et la généralisation de Cattaneo-Felder, on regarde ${\mathfrak{h}}_{\lambda}^{\bot}$ comme sous-variété coisotrope de la variété de Poisson ${\mathfrak{g}}^{\ast}$. C’est l’approche de Cattaneo-Torossian pour les algèbres de Lie, qui permet de calculer dans l’algèbre déformée $\left(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda}\right)^{{\mathfrak{h}}}$. Nous sommes également motivés par les conjectures de Duflo et de Corwin-Greenleaf. Le point de départ est l’isomorphisme $(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}\simeq \mathbb{D}(\mathfrak{g},\mathfrak{h},\lambda)$ de Koornwinder, qui interprète l’algèbre $(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}$ comme les opérateurs linéaires qui laissent invariant l’espace $C^{\infty}(G,H,\lambda)$ des fonctions complexes $\theta : G\longrightarrow \mathbb{C}$ qui satisfont $\theta (g\cdot \exp X)=e^{-i\lambda(X)}\theta(g), \;\;\forall X\in \mathfrak{h}, \forall g\in G$. Nous annonçons qu’il y a un isomorphisme non canonique $\overline{\beta}_{\mathfrak{q},(\epsilon)}\circ\partial_{q_{(\epsilon)}^{\frac{1}{2}}}\circ \overline{T}_1^{-1}T_2:\; H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})\stackrel{\simeq}{\longrightarrow} \left(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda}\right)^{{\mathfrak{h}}}$ entre l’algèbre de réduction sur l’espace affine ${\mathfrak{h}}_{\lambda}^{\bot} :=\{f\in\mathfrak{g}^{\ast}/f|_{\mathfrak{h}}=-\lambda\}$ et l’algèbre déformée $\left(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda}\right)^{{\mathfrak{h}}}$. Ensuite, on étudie la spécialisation $H^0_{(\epsilon=1)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon=1)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$, l’algèbre de réduction $H^0({\mathfrak{h}}_{\lambda}^{\bot},d_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$ définie sans le paramètre de déformation $\epsilon$, et $H^0({\mathfrak{h}}_{t\lambda}^{\bot},d_{{\mathfrak{h}}_{t\lambda}^{\bot},\mathfrak{q}}), t\in\mathbb{R}$, l’algèbre de réduction sans $\epsilon$ où l’on déforme le caractère $\lambda$. Nous comparons les objets correspondants du côté de $H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$ avec les objets associés du côté de $\left(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda}\right)^{{\mathfrak{h}}}$, c’est-à-dire $\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)$ et $\left((U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}\right)$. On arrive finalement à annoncer que $$\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)\stackrel{alg}{\simeq}H^0_{(\epsilon=1)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon=1)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}}).$$ Introduction.
============= Let $G$ be a nilpotent, connected and simply connected Lie group, $H\subset G$ a subgroup and $\lambda\in{\mathfrak{g}}^{\ast}$ a character of ${\mathfrak{h}}$. Let $C^{\infty}(G,H,\lambda)$ be the vector space of $C^{\infty}$ functions $\theta : G\longrightarrow \mathbb{C}$ that satisfy the condition $$\theta (g\cdot \exp X)=e^{-i\lambda(X)}\theta(g), \;\;\forall X\in \mathfrak{h}, \forall g\in G$$ and $\mathbb{D}(\mathfrak{g},\mathfrak{h},\lambda)$ the algebra of linear differential operators that leave the space $C^{\infty}(G,H,\lambda)$ invariant and commute with the left translation on $G$, $$D(C^{\infty}(G,H,\lambda))\subset C^{\infty}(G,H,\lambda),\;\;\text{and}\;\;D(L(g)\theta)=L(g)(D(\theta)).$$ Let also $m(\tau)$ be the multiplicities in the spectral decomposition of the representation $\tau_{\lambda}$ $$\tau_{\lambda} \cong \int_{\widehat{G}} m(\tau)\tau d\mu(\tau)\cong\int_{(f+\mathfrak{h}^{\bot})/H}\tau_l d\nu(l)$$ where $\tau_{\lambda}$ is the representation $Ind(G,H,\lambda)=L^2(G,H,\lambda)$ and $\widehat{G}$ is the space of irreducible representations, that is, the unitary dual of $G$. Furthermore, $\nu$ is a finite and positive measure equivalent to the Lebesgue measure on the affine space $\lambda+{\mathfrak{h}}^{\bot}$. Finally, $\mu=K_{\ast}(\nu)$ (where $K:{\mathfrak{g}}^{\ast}\longrightarrow \widehat{G}$ is the Kirillov map) is our measure on $\widehat{G}$. The **Corwin-Greenleaf** conjecture says that $$\mathbb{D}(\mathfrak{g},\mathfrak{h},\lambda)\; \text{is commutative} \;\; \mathbf{ \Leftrightarrow}\;\; m(\tau)<+\infty\;\; \mu- \text{a.e}.$$ To state a second conjecture of interest to us, let $\mathfrak{g}$ be a nilpotent Lie algebra, $\mathfrak{h}\subset\mathfrak{g}$ a subalgebra and $\lambda$ a character of $\mathfrak{h}$. Let $U(\mathfrak{g})\mathfrak{h}_{\lambda}$ be the left ideal of $U(\mathfrak{g})$ generated by the family $<X+\lambda(X),\;X\in\mathfrak{h}>$. Then the **Duflo** conjecture says that $$\text{The algebras}\; C_{poiss}((S(\mathfrak{g})/S(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}})\; \text{and}\; C_{ass}[(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}]\;\;\text{are isomorphic.}$$ The first center refers to the Poisson algebra structure of $(S(\mathfrak{g})/S(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}$ while the second to the associative structure of $(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}$. The analytic interest of the second conjecture stems from a theorem of Koornwinder stating that $(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}\simeq \mathbb{D}(\mathfrak{g},\mathfrak{h},\lambda)$. Taking into consideration the interpretation of $U({\mathfrak{g}})$ as the algebra of left-invariant differential operators on $G$, the Duflo conjecture is actually a question about the structure of the invariant differential operators on the homogeneous space $G/H$. In this note we will prove that $(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{h}_{\lambda})^{\mathfrak{h}}\simeq \mathbb{D}(\mathfrak{g},\mathfrak{h},\lambda)$ is isomorphic to the reduction algebra related to these data, an algebra playing a central role in deformation quantization theory. We will also provide results of the same nature for related reduction algebras. The results of this note were presented in [@BAT]. Deformation quantization and generalizations.
============================================= We start by recalling the standard result \[Kontsevich\] Let $\pi$ be a Poisson bivector of $\mathbb{R}^k$ and $F,G\in C^{\infty}(\mathbb{R}^k)$. The product $$F \ast_K G:=F\cdot G+\sum_{n=1}^{\infty}\epsilon^n\left(\frac{1}{n!}\sum_{\Gamma\in\mathbf{Q_{n,\bar{2}}}}\omega_{\Gamma}B_{\Gamma,\pi}(F,G)\right)$$ is associative. The set $\mathbf{Q_{n,\bar{2}}}$ is a special family of graphs $\Gamma$. To every admissible graph $\Gamma$ corresponds a bidifferential operator $B_{\Gamma}(F,G):=\sum_{R,S}b_{\Gamma}^{R,S}\partial_R(F)\partial_S(G)$ on $C^{\infty}(\mathbb{R}^k)\times C^{\infty}(\mathbb{R}^k)$. Note that the functions $b_\Gamma^{R,S}$ depend on $\Gamma$ and are $n$-linear in the bivector $\pi$. The coefficient $\omega_{\Gamma}\in\mathbb{R}$ is computed by integrating a differential form $\Omega_{\Gamma}$ (which is also encoded in $\Gamma$) over a concentration manifold $$\hat{C}^+_{n,\bar{m}}=\{(z_1,\dots,z_n,z_{\bar{1}},\dots,z_{\bar{m}}) / z_i\in\mathbb C, \mathfrak{Im}(z_i)>0, z_{\bar{i}}\in\mathbb R, z_{\bar{i}}<z_{\bar{j}}\; \text{for}\; i<j\}/G_2,$$ where $G_2$ is the 2-dimensional Lie group of dilations $\langle z_k\mapsto az_k+b, a>0, b\in\mathbb R\rangle$. So $\hat{C}^+_{n,\bar{m}}\subset (H^+)^n\times \mathbb{R}^m$. In the general setting that we briefly recalled, the central result of [@K] is the following \[Kontsevich 2\] Let $\mathcal{U}:\;\mathcal{T}_{poly}(\mathbb{R}^k)\longrightarrow \mathcal{D}_{poly}(\mathbb{R}^k)$ be the map defined by the Taylor coefficients: $$\mathcal{U}_n:=\sum_{\overline{m}\geq 0}\left(\sum_{\Gamma\in\mathbf{Q}_{n,\overline{m}}}\omega_{\Gamma}B_{\Gamma}\right).$$ Then $\mathcal{U}$ is an $L_{\infty}$-morphism and a quasi-isomorphism between the differential graded Lie algebras (DGLA) $\mathcal{T}_{poly}(\mathbb{R}^k)$ of polyvector fields on $\mathbb{R}^k$ and $\mathcal{D}_{poly}(\mathbb{R}^k)$ of polydifferential operators on $\mathbb{R}^k$. The key point in the proof is an application of the Stokes formula, integrating the form $\Omega_{\Gamma}$ over $\hat{C}_{n,\overline{m}}^+$. This way Kontsevich also reached the Duflo isomorphism (applying these results to the case of the Poisson manifold ${\mathfrak{g}}^{\ast}$). Our approach uses the following generalization of the above. Let $X$ be a Poisson manifold, and $C\subset X$ a coisotropic submanifold. Let $NC$ be the normal bundle, $\mathcal{A}=\bigoplus_{i=0}^{rank(\pi)}\mathcal{A}^i$ with $\mathcal{A}^i=\Gamma(C,\wedge^iNC)$ be the graded commutative algebra of sections of the exterior algebra of the normal bundle $NC$ and $\tilde{\mathcal{D}}(\mathcal{A})=\oplus_n\tilde{\mathcal{D}}^n(\mathcal{A})$ with $\tilde{\mathcal{D}}^n(\mathcal{A})=\prod_{p+q-1=n}Hom^p(\otimes^q\mathcal{A},\mathcal{A})$. Finally we set $$\mathcal{B}=\bigoplus_{j=0}^{\infty}\mathcal{B}^j,\;\; \mathcal{B}^0=\Gamma(C,S((NC)^{\ast})), \;\mathcal{B}^j=0,\;\text{if}\;j\neq 0.$$ Consider the DGLAs $\mathcal{T}(X,C)$ and $\tilde{\mathcal{D}}(\mathcal{A})$ of polyvector fields and polydifferential operators on $X$.
There is an $L_{\infty}$-quasi-isomorphism $$\mathcal{F}_R:\;\mathcal{T}(X,C)\longrightarrow \tilde{\mathcal{D}}(\mathcal{A}),$$ whose first Taylor coefficient $\mathcal{F}_R^1$ is the composition $$F_{HKR}\circ \hat{F}:\;\mathcal{T}(X,C)\simeq \tilde{\mathcal{T}}(\mathcal{B})\;\stackrel{\hat{F}}{\rightarrow}\; \tilde{\mathcal{T}}(\mathcal{A})\;\stackrel{F_{HKR}}{\rightarrow}\;\tilde{\mathcal{D}}(\mathcal{A}).$$ where $F_{HKR}$ is the Hochschild-Kostant-Rosenberg map $$F_{HKR}:= S_{\mathcal{A}}(Der(\mathcal{A})[-1])\longrightarrow \oplus_{j=0}^{\infty}Hom_{\mathbf{K}}(\otimes^j\mathcal{A},\mathcal{A})$$ and $\hat{F}$ is a kind of Fourier transform (for more details we refer to [@CF3]). We will now apply the previous results to ${\mathfrak{g}}^{\ast}$, with the natural Poisson structure coming from the Lie structure, and to the orthogonal space ${\mathfrak{h}}^{\bot}$ of a subalgebra ${\mathfrak{h}}\subset{\mathfrak{g}}$ as a coisotropic submanifold. Some modifications in the theory are needed since the conjectures of the introduction refer to the affine space $-\lambda+{\mathfrak{h}}^{\bot}\subset {\mathfrak{g}}^{\ast}$. Applications to non-commutative harmonic analysis. =================================================== **Reduction equations.** Let $\mathfrak{h}^{\bot}:=\{l\in\mathfrak{g}^{\ast}/ l(\mathfrak{h})=0\}$, $\mathfrak{h}_{\lambda}^{\bot}:=\{f\in\mathfrak{g}^{\ast}/f|_{\mathfrak{h}}=-\lambda\}$ and $\mathfrak{q}$ a supplementary space of ${\mathfrak{h}}$ in ${\mathfrak{g}}$. Consider the differential $d^{(\epsilon)}_{\mathfrak{h}^{\bot},\mathfrak{q}}:\;S(\mathfrak{q})[\epsilon]\longrightarrow S(\mathfrak{q})[\epsilon]\otimes\mathfrak{h}^{\ast}$ where $d^{(\epsilon)}_{\mathfrak{h}^{\bot},\mathfrak{q}}:=\sum_{i=1}^{\infty}\epsilon^i d^{(i)}_{\mathfrak{h}^{\bot},\mathfrak{q}}$ and $d^{(i)}_{\mathfrak{h}^{\bot},\mathfrak{q}}=\sum_{\Gamma\in \mathcal{B}_i\cup \mathcal{BW}_i}\overline{\omega}_{\Gamma}B_{\Gamma}$. The second sum is over two families of graphs in $Q_{1,\bar{i}}$: the set ${\mathcal B}_i$ of Bernoulli graphs and the set ${\mathcal {BW}}_i$ of Bernoulli-attached-to-a-wheel graphs (see Figure 1). The first equations of the system $d_{\mathfrak{h}^{\bot},\mathfrak{q}}(F)=0$ are $$d^1_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_n)=0,\;\;\;\;\; d^3_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_n)+d^1_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_{n-2})=0,$$ $$d^5_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_n)+d^3_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_{n-2})+d^1_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_{n-4})=0\;\ldots$$ For example, $d^1_{\mathfrak{h}^{\bot},\mathfrak{q}}(F_n)=0\Rightarrow F_n\in S(\mathfrak{q})^{\mathfrak{h}}$. ![](typesfordifferential){width="9cm"} (Reduction algebra) We define the reduction space $H^0_{(\epsilon)}({\mathfrak{h}}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot},\mathfrak{q}})$ of polynomials in the formal variable $\epsilon$ as the vector space of polynomial solutions of the system of linear differential equations $$d^{(\epsilon)}_{{\mathfrak{h}}^{\bot},\mathfrak{q}}(F_{(\epsilon)})=0,\;F_{(\epsilon)}\in S(\mathfrak{q})[\epsilon].$$ The reduction algebra is $H^0_{(\epsilon)}({\mathfrak{h}}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot},\mathfrak{q}})$ equipped with the product $\ast_{CF,\epsilon}$. We now explain our main results. Let $G$ be a real Lie group, ${\mathfrak{h}}\subset{\mathfrak{g}}$ a subalgebra, $\lambda\in \mathfrak{h}^{\ast}$ a character of ${\mathfrak{h}}$ and $f\in\mathfrak{g}^{\ast}$ such that $f|_{\mathfrak{h}}=\lambda$.
Let also $\mathfrak{q}$ be a supplementary space of ${\mathfrak{h}}$ in ${\mathfrak{g}}$. We set the deformed tensor algebra to be $T_{(\epsilon)}({\mathfrak{g}}):=\mathbf{R}[\epsilon]\otimes T({\mathfrak{g}})$. Let $\mathcal{I}_{\epsilon}$ be the two-sided ideal $<X\otimes Y-Y\otimes X-\epsilon[X,Y]>$ of $T_{(\epsilon)}({\mathfrak{g}})$. Since our basic model of $U({\mathfrak{g}})$ is $T({\mathfrak{g}})$ factored by the non-homogeneous ideal $<X\otimes Y-Y\otimes X-[X,Y]>$, we define the deformed universal enveloping algebra of ${\mathfrak{g}}$ as $U_{(\epsilon)}({\mathfrak{g}}):=T_{(\epsilon)}({\mathfrak{g}})/\mathcal{I}_{\epsilon}$. Let also ${\mathfrak{h}}_{\lambda}:=\{H+\lambda(H),\;H\in{\mathfrak{h}}\}$. The map $$\overline{\beta}_{\mathfrak{q},(\epsilon)}\circ\partial_{q_{(\epsilon)}^{\frac{1}{2}}}\circ \overline{T}_1^{-1}T_2:\; H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})\stackrel{\simeq}{\longrightarrow} \left(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda}\right)^{{\mathfrak{h}}}$$ is an algebra isomorphism. Here $\overline{\beta}_{\mathfrak{q},(\epsilon)}:\;S(\mathfrak{q})[\epsilon]\longrightarrow U_{(\epsilon)}(\mathfrak{g})/U_{(\epsilon)}(\mathfrak{g})\mathfrak{h}_{\lambda}$ is the symmetrization map, $q(Y) = \det_{\mathfrak{g}} \left(\frac{\sinh\frac{{\rm ad}\,Y}{2}}{\frac{{\rm ad}\,Y}{2}}\right)$ and $T_1,T_2$ are differential operators that can be described in terms of Kontsevich graphs ([@BAT], § 2.5.2). The theorem is powerful since no condition is needed on the original Lie group $G$. **Proof.** We use the $H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{g}}^{\ast},{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})-$ bimodule structure $$T_1:\;H^0_{(\epsilon)}({\mathfrak{g}}^{\ast},d^{(\epsilon)}_{{\mathfrak{g}}^{\ast}})\longrightarrow H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{g}}^{\ast},{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}}),\;\; G\mapsto G\ast_1 1$$ and $$T_2:\;H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}}) \longrightarrow H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{g}}^{\ast},{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}}),\;\;F\mapsto 1\ast_2 F$$ in the biquantization diagram (see [@CKTB] and [@CT]) of $\mathfrak{g}^{\ast}$ and $\mathfrak{h}_{\lambda}^{\bot}$. We first show that $H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})\subset (U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda})^{{\mathfrak{h}}}$, exploiting results from [@CF2; @CF3; @CT] and the bimodule structure mentioned above to pass from $H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$ to $(U_{(\epsilon)}({\mathfrak{g}})/U_{(\epsilon)}({\mathfrak{g}}){\mathfrak{h}}_{\lambda})^{{\mathfrak{h}}}.$ For the inverse, let $H\in {\mathfrak{h}}$, viewed as a function of first degree, and let $F\in H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$. We check the graphs in the expression $$(H+\lambda(H))\ast_1 1 \ast_2 F.$$ We show that $(H+\lambda(H))\ast_1 (1 \ast_2 F)=[(H+\lambda(H))\ast_1 1] \ast_2 F=0$.
![Behaviour of the expression $(H+\lambda(H))\ast_1 1 \ast_2 F$ when $s\rightarrow 0$ and $s\rightarrow \infty$.](F_to_0_and_F_to_00){width="9cm"} Then, examining the possible and admissible graphs in the concentration manifolds for this expression, we end up with the equation $$\sum_{\Gamma^{\alpha}_{\mathrm{int}},\Gamma^{\alpha}_{\mathrm{ext}}}\int_0^{\infty}\hat{\omega}_{\Gamma^{\alpha}_{\mathrm{ext}}}(s)\mathcal{B}_{\Gamma^{\alpha}_{\mathrm{ext}}}\left(\omega_{\Gamma^{\alpha}_{\mathrm{int}}}\mathcal{B}_{\Gamma^{\alpha}_{\mathrm{int}}}\right)\mathrm{d}s=0$$ which is equivalent to $$\sum_{\alpha}\left(\sum_{\Gamma^{\alpha}_{\mathrm{int}},\Gamma^{\alpha}_{\mathrm{ext}}}\left(\sum_{l,k,m}\left(B^{m}_{\Gamma^{\alpha}_{\mathrm{ext}}}(B^{k}_{\Gamma^{\alpha}_{\mathrm{int}}}(F_{l}))\right)\epsilon^{m+k+l}\right)\right)=0.$$ This last equation gives (after determining the admissible graphs above) the reduction equations defining $\mathbf{H^0_{(\epsilon)}({\mathfrak{h}}^{\bot}_{\lambda},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})}$. **The specialization algebra** $\mathbf{H^0_{(\epsilon=1)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon=1)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})}$. The specialization algebra for the affine space $-\lambda+{\mathfrak{h}}^{\bot}={\mathfrak{h}}_{\lambda}^{\bot}$ is defined as $$H^0_{(\epsilon=1)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon=1)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}}):=\left(H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})/<\epsilon-1>\right).$$ The Cattaneo-Felder product on $H^0_{(\epsilon=1)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon=1)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$ will also be denoted by $\ast_{CF,(\epsilon=1)}$. Note that one may also consider the reduction algebras $H^0({\mathfrak{h}}_{\lambda}^{\bot},d_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$ (defined without the formal variable $\epsilon$) and $H^0({\mathfrak{h}}_{t\lambda}^{\bot},d_{{\mathfrak{h}}_{t\lambda}^{\bot},\mathfrak{q}}), t\in\mathbb{R}$ (deforming the character $\lambda$). Let now $F'\in H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$. Let $J:\;H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})\longrightarrow H^0({\mathfrak{h}}_{\lambda}^{\bot},d_{{\mathfrak{h}}^{\bot}_{\lambda},\mathfrak{q}})$, $J(F'):=\sum_k F'_k$. To describe the image of the map $J$ we have the following: $\bullet$ Let $F\in H^0({\mathfrak{h}}_{\lambda}^{\bot},d_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$. Suppose there is an element $F_{(t)}=\sum_pt^pF_p$ with $F_{(t)}\in H^0({\mathfrak{h}}_{t\lambda}^{\bot},d_{{\mathfrak{h}}_{t\lambda}^{\bot},\mathfrak{q}})$, $\forall t\in\mathbb{R}^{\ast}$ and $F_{(t=1)}=F$. Let $F_p=\sum_iF_p^{(i)}$ be a decomposition into homogeneous components and $F_{(\epsilon)}:=\epsilon^N\sum F_p^{(i)}\frac{1}{\epsilon^{i+p}}$ ($N\gg \max(i+p)$). Then $F_{(\epsilon)}\in H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$ and $J(F_{(\epsilon)})=F$. $\bullet$ Let $F\in H^0({\mathfrak{h}}_{\lambda}^{\bot},d_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$.
Suppose there is an element $F_{(\epsilon)}=\sum_{0 \leq k\leq n}\epsilon^kF_k$ with $F_{(\epsilon)}\in H^0_{(\epsilon)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$ and $J(F_{(\epsilon)})=F$. Let $F_k=\sum_iF_k^{(i)}$ be a decomposition into homogeneous components and $F_{(t)}:=t^N\sum_{i,k}\frac{1}{t^{i+k}}F_k^{(i)}$ ($N\gg \max(i+k)$). Then $\forall t\in\mathbb{R}^{\ast},\;\;F_{(t)}\in H^0({\mathfrak{h}}_{t\lambda}^{\bot},d_{{\mathfrak{h}}_{t\lambda}^{\bot},\mathfrak{q}})$. **Deformations.** In this section we will clarify the relation between the various reduction algebras presented in the previous part. Using Theorem 5 we will associate them to the appropriate algebras of operators. Let ${\mathfrak{g}}_T:={\mathfrak{g}}\oplus\mathbb{R}T,\;\;[{\mathfrak{g}},T]=0$. We denote by $\mathcal{P}_{(t)}\left((U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}\right)$ the algebra of polynomial families in $t$, $t\longrightarrow u_t\in\left(U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda}\right)^{{\mathfrak{h}}}$. Let $e_t:\;U({\mathfrak{g}}_T)\longrightarrow U({\mathfrak{g}})$ be the surjective map defined by $T\mapsto t$ and $X\mapsto X$ for all $X\in{\mathfrak{g}}$. Then $\left(U({\mathfrak{g}}_T)/<T-t>\right)\simeq U({\mathfrak{g}})$ and, evaluating at $T=t$, we obtain the surjective map $$e_{(T=t)}:\;\left( (U({\mathfrak{g}}_T)/U({\mathfrak{g}}_T){\mathfrak{h}}^T_{\lambda})^{{\mathfrak{h}}_T}/<T-t>\right)\hookrightarrow (U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}.$$ It turns out that if $t\longrightarrow u_t\in (U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}$ is a polynomial family in $t$, then there is a $u_T\in (U({\mathfrak{g}}_T)/U({\mathfrak{g}}_T){\mathfrak{h}}^T_{\lambda})^{{\mathfrak{h}}_T}$ such that $e_t(u_T)=u_t$. Let us now examine the notion of specialization from the differential operator point of view. Let $\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda):= (U({\mathfrak{g}}_T)/U({\mathfrak{g}}_T){\mathfrak{h}}^T_{\lambda})^{{\mathfrak{h}}_T}/<T-1>$. In the case $t=1$ of the previous example we get $$\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)\hookrightarrow (U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{\lambda})^{{\mathfrak{h}}}.$$ More specifically, if $u\in \mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)$ then there is a $u_T\in U({\mathfrak{g}}_T)$ such that $u=\pi_{(T=1)}(u_T)$. The element $u_t:=e_{(T=t)}(u_T)\in (U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}$ defines a polynomial family in $t$, so $u_t\in \mathcal{P}_{(t)}\left((U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{t\lambda})^{{\mathfrak{h}}}\right)$. Then $$\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)\simeq \mathcal{P}_{(t=1)}\left((U({\mathfrak{g}})/U({\mathfrak{g}}){\mathfrak{h}}_{\lambda})^{{\mathfrak{h}}}\right).$$ We get similar results for the specialization from the reduction algebra point of view. $\bullet$ Let $t\mapsto F_t \in H^0({\mathfrak{h}}_{t\lambda}^{\bot},d_{{\mathfrak{h}}_{t\lambda}^{\bot},\mathfrak{q}})$ be a polynomial family in $t$. Then there is a $F_T\in H^0(({\mathfrak{h}}_{\lambda}^T)^{\bot},d_{({\mathfrak{h}}_{\lambda}^T)^{\bot},\mathfrak{q}})$ such that $e_t(F_T)=F_t$. $\bullet$ $\mathbb{D}_{(T=1)}({\mathfrak{g}},{\mathfrak{h}},\lambda)\stackrel{alg}{\simeq}H^0_{(\epsilon=1)}({\mathfrak{h}}_{\lambda}^{\bot},d^{(\epsilon=1)}_{{\mathfrak{h}}_{\lambda}^{\bot},\mathfrak{q}})$.
**Acknowledgements.** The author would like to sincerely thank Charles Torossian for his inspiring ideas and guidance during the preparation of his thesis. He would also like to thank the EU RTN Liegrits and its coordinator Fred Van Oystaeyen for financial support. [^1]: Department of Mathematics, Aristotle University of Thessaloniki.
--- abstract: | **Abstract:** We study the problem of finding an Euler tour in an undirected graph $G$ in the W-Streaming model with $\mathcal O(n\text{ polylog}(n))$ RAM, where $n$ is the number of nodes and $m$ the number of edges of $G$. Our main result is the first one-pass W-Streaming algorithm computing an Euler tour of $G$ in the form of an edge successor function with only $\mathcal O(n \log(n))$ RAM, which is optimal for this setting (cf. Sun and Woodruff (2015)). The previously best-known result in this model is implicitly given by Demetrescu et al. (2010) with the parallel algorithm of Atallah and Vishkin (1984), using $\mathcal O(m/n)$ passes under the same RAM limitation. For graphs with $\omega(n)$ edges this is non-constant. Our overall approach is to partition the edges into edge-disjoint cycles and to merge the cycles until a single Euler tour is achieved. Note that in the W-Streaming model such a merging is far from being obvious, as the limited RAM allows the processing of only a constant number of cycles at once. This forces us to merge cycles that are partially no longer present in RAM. Furthermore, the successor of an edge cannot be changed after the edge has left RAM. So, we steadily have to output edges and their designated successors without knowing the edges and cycles yet to come. We solve this problem with a special edge-swapping technique, for which storing two specific edges per node is sufficient to merge tours without having all of their edges in RAM. Mathematically, this is controlled by structural results on the space of certain equivalence classes corresponding to cycles and the characterization of associated successor functions. For example, we give conditions under which the swapping of edge successors leads to a merging of equivalence classes. The mathematical methods of our analysis might be of independent interest for other routing problems in streaming models. author: - Christian Glazik - Jan Schiemann - Anand Srivastav bibliography: - 'literature-ets.bib' date: | Department of Computer Science\ Kiel University\ Christian-Albrechts-Platz 4\ 24118 Kiel, Germany\ `{cgl,jasc,asr}@informatik.uni-kiel.de` title: 'Finding Euler Tours in One Pass in the W-Streaming Model with O(n log(n)) RAM' --- Introduction ============ For the processing of large graphs, the *graph streaming* or *semi streaming* model introduced by Feigenbaum et al. [@Feigenbaum:2005:GPS:1132633.1132638] has been studied extensively over the last decade. In this model, a graph with $n$ nodes and $m$ edges is given as a stream of its edges. Random-access memory (RAM, also called internal memory) is restricted to $\mathcal O(n \text{ polylog}(n))$ edges at a time; see, e.g., the survey [@McGregor] for a detailed introduction. As a consequence, the model cannot be applied to problems where the size of the solution exceeds this amount of memory. In the Euler tour problem, we are looking for a closed trail in an undirected graph such that each edge is visited exactly once. Since the size of an Euler tour is $m$, which might even be $\Theta(n^2)$, we need a relaxation of the model that allows us to store the output separately from the RAM. Previous Work on W-Streaming ---------------------------- The *W-Streaming model* introduced by Demetrescu et al. [@Dem09] is a relaxation of the classical streaming model. At each pass, an output stream is written, which then becomes the input stream of the next pass.
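To fix ideas, the sketch below shows the generic shape of a one-pass W-Streaming computation (illustrative Python only; the function names and callbacks are ours and not taken from any of the cited works): the edge stream is read exactly once, only a RAM-bounded state is kept, and output records are written to the stream as soon as they are final.

```python
def one_pass_w_streaming(edge_stream, emit, process_edge, finalize):
    """Generic one-pass W-Streaming skeleton (illustrative only).

    edge_stream  -- iterable of edges {u, v}, read exactly once
    emit         -- callback that writes one record to the output stream
    process_edge -- updates the RAM-bounded state and may emit records
    finalize     -- flushes whatever is still held in RAM at the end
    """
    state = {}                    # must stay within O(n polylog n) RAM
    for edge in edge_stream:      # single pass over the input
        process_edge(state, edge, emit)
    finalize(state, emit)         # e.g., write records retained until the end
```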
In [@Dem09], a trade-off between internal memory and streaming passes is shown for undirected connectivity and the single-source shortest path problem in directed graphs. The one-pass model plays a special role: since the stream is processed only once, writing to it serves output purposes only. This is particularly interesting for problems whose solutions do not fit in RAM. The W-Streaming model originated as a more restrictive alternative to the *StrSort model* introduced by Aggarwal et al. [@Ruhl; @Aggarwal]. Finding an Euler tour in trees has been studied in multiple papers (e.g., [@Dem10]), but to the best of our knowledge the general Euler tour problem has hardly been considered in a streaming model so far. However, there are some general results for transferring PRAM algorithms to the W-Streaming model. Atallah and Vishkin [@Atallah] presented a PRAM algorithm for finding Euler tours, using $\mathcal O(\log(n))$ time and $n+m$ processors. Transferred to the W-Streaming model with the methods from [@Dem10], this algorithm computes an Euler tour in the form of a bijective successor function within $p=\mathcal O(m\text{ polylog}(n)/s)$ passes, where $s$ is the RAM capacity. For a RAM size of $\mathcal O(n\text{ polylog}(n))$, this translates to $\Omega(m/n)$ passes, which for any $m=\omega(n)$ is non-constant. Furthermore, Sun and Woodruff [@Sun] showed that a one-pass streaming algorithm for verifying whether a graph is Eulerian needs $\Omega(n \log(n))$ RAM. This implies that a one-pass W-Streaming algorithm for finding an Euler tour with less RAM does not exist, and it therefore justifies our choice of the RAM size. Our Contribution ---------------- We present the W-Stream algorithm <span style="font-variant:small-caps;">Euler-Tour</span> for finding an Euler tour in a graph in the form of a bijective successor function, or for stating that the graph is not Eulerian, using only one pass and $\mathcal O(n \log(n))$ bits of RAM. This is not only a significant improvement over previous results, but, in view of the lower bound of Sun and Woodruff [@Sun], also the first optimal algorithm in this setting. Usually, the W-Streaming model is restricted to sub-linear internal memory, but in our case the output stream is used solely for storing the solution, which needs $\Omega(m)$ memory. As in [@Atallah], our algorithm outputs the Euler tour as a successor function that for every edge gives the following edge of the tour. Atallah and Vishkin [@Atallah] find edge-disjoint tours (in our case cycles) and connect them by pairwise swapping the successor edges of suitable edges. This idea is easy to implement without memory restrictions, but the implementation gets distinctly more complicated with limited memory space: We cannot store all cycles in RAM. Therefore, we have to output edges and their successors before finding resp. processing all cycles. Our idea is to keep specific edges of some cycles in RAM along with additional information so that we are able to merge subsequent cycles, regardless of their appearance, with already processed tours that are likely no longer present in RAM. We develop a mathematical foundation by partitioning the edges into equivalence classes induced by a given bijective successor function and prove structural properties that allow us to iteratively change this function on a designated set of edges so that the modified function is still bijective. Translated to graphs, this is a tour-merging process.
This mathematical approach is quite general and might be useful in other routing scenarios in streaming models. Organization of the Article --------------------------- In Section \[sec:preliminaries\] we give some basic definitions. The main techniques of our algorithm are described in Section \[sec:idea\] in an intuitive manner. Section \[sec:algorithm\] contains the pseudo code of the algorithm. In the analysis in Section \[sec:analysis\], we show the connection of the concepts of Euler tours and successor functions and then show that the required RAM of the algorithm does not exceed $\mathcal O(n \log(n))$ and that the output actually depicts an Euler tour (Theorem \[finalthm\]). Preliminaries {#sec:preliminaries} ============= Let $\N:=\{1,2,\ldots\}$ denote the set of natural numbers. For $n\in\N$ let $[n]:=\{1,\ldots,n\}$. In the following, we consider a graph $G=(V,E)$ where $V$ denotes the set of nodes and $E$ the set of (undirected) edges. A *trail* in $G$ is a finite sequence $T=(v_1,\ldots,v_\ell)$ of nodes of $G$ with $\{v_i,v_{i+1}\}\in E$ and $v_i=v_j$ implies $v_{i+1}\notin\{v_{j-1},v_{j+1}\}$ for all $i\in\{1,\ldots,\ell-1\}$ and $j\in\{2,\ldots,\ell-1\}$ with $i\neq j$. The *length* of $T$ is $\ell-1$. The (directed) edge-set of $T$ is $E(T):=\{(v_i,v_{i+1})|i\in[\ell-1]\}$. We also write $e\in T$ instead of $e\in E(T)$. For a directed edge $e$ we denote by $e_{(1)}$ its first and by $e_{(2)}$ its second component. A trail $T=(v_1,\ldots,v_\ell)$ with $v_1=v_\ell$ is called a *tour*. In tours, we usually do not care about starting point and end point, so we slightly abuse the notation and write $v_{i+k}$ resp. $v_{i-k}$ for any $k\in\N$, identifying $v_0:=v_\ell$ and $v_{\ell+1}:=v_2$ and so on. If additionally $v_i\neq v_j$ holds for all $i,j\in [\ell-1],i\neq j$ (and $\ell\geq 3$), we call $T$ a *cycle*. An *Euler tour* of $G$ is a tour $T$ with $E(T)=E$. Since in the streaming model the graph is represented as a set of edges, we often use the edges for the depiction of tours. With $e_i:=\{v_i,v_{i+1}\}$ for all $i \in [l-1]$, $T$ can be written as $T=(e_1,\ldots,e_l)$. Here, we also use the slightly abusive index notation. Note that for the tour $T$ the edges are distinct. For $i \in [l]$, we call $e_{i+1}$ the *successor edge of $e_i$* in tour $T$. Our algorithm outputs an Euler tour $T=(v_1,\ldots,v_{|E|},v_1)$ in form of a *successor function*, i.e., for every $i \in [|E|]$, we output the triple $(v_i,v_{i+1},v_{i+2})$, where $\{v_{i+1},v_{i+2}\}$ is the successor edge of $\{v_i,v_{i+1}\}$ in $T$. Idea of the Algorithm {#sec:idea} ===================== As the analysis of our algorithm is quite involved, in this section we try to explain the new algorithmic idea and where the mathematical analysis is required. First we explain how merging of subtours can be accomplished without RAM limitation clarifying why this does not work in W-streaming. Thereafter we explain our merging technique and its locality and RAM efficiency. Subtour merging in unrestricted RAM ----------------------------------- Recall that an Euler tour will be presented by giving for every edge the corresponding successor edge in the tour. Let $G=(V,E)$ be an Eulerian graph and $T, T'$ be edge-disjoint tours in $G$. The tour induces an orientation of the edges in a canonical way. 
If $T$ and $T'$ have a common node $v$, it is easy to merge them to a single tour: $T$ has at least one in-going edge $(u,v)$ with a successor edge $(v,w)$, and $T'$ has at least one in-going edge $(u',v)$ with a successor edge $(v,w')$. By changing the successor edge of $(u,v)$ from $(v,w)$ to $(v,w')$ and the successor edge of $(u',v)$ to $(v,w)$, we get a tour containing all edges of $T \cup T'$ (see Figure \[fig:TwoTours\]). The same principle can be applied when merging more than two tours at once. When we have a tour $T$ and tours $T_1,\ldots,T_k$, $k \in \mathbb N$, such that $T, T_1,\ldots,T_k$ are pairwise edge-disjoint and for every $j \in [k]$ there is a common node $v_j$ of $T$ and $T_j$, switching the successor edges of two in-going edges per node $v_j$ as described above creates a tour containing the edges of $T \cup T_1 \cup \cdots \cup T_k$. We can use this method as a simple algorithm for finding an Euler tour: a\) Find a partition of $E$ into edge disjoint cycles. b\) Iteratively pick a cycle $C$ and merge it with all tours encountered so far which have at least one common node with $C$. Such a merging process certainly converges to a tour covering all nodes, if a subtour obtained by merging some subtours does not decompose later into some subtours again. If we use a local swapping technique to merge tours, this can very well happen, if swapping is again applied to some other node of the merged tour (see Figure \[fig:WrongSwap\]). In the RAM model we can keep all tours in RAM and avoid such fatal nodes. In the W-stream model with $O(n\log n)$ RAM it is far from being obvious how to implement an efficient tour merging for the following reasons. 1. We cannot keep every intermediate tour in RAM, so we have to regularly remove some edges together with their successors from RAM, even if we do not know the edges yet to come. But on the other hand, we have to keep edges in RAM which are essential in later merging steps. 2. Sometimes we have to merge cycles with tours that had already left RAM. Therefore, we must keep track of common nodes and the related edges. Subtour merging in limited RAM ------------------------------ Let us assume that we have found say four cycles $C_1,\ldots,C_4$ in that order, all sharing a common node $v$. (see Figure \[fig:OneNode\]). Let $(u_1,v),\ldots,(u_4,v)$ be the respective in-going edges and $(v,w_1),\ldots,(v,w_4)$ be the respective out-going edges. By swapping the successor edges of $(u_1,v)$ and $(u_2,v)$ as explained before, we get a tour containing all edges from $C_1$ and $C_2$. We then merge this tour with $C_3$ swapping the successor edges of $(u_1,v)$ and $(u_3,v)$, and then with $C_4$ by swapping the successors of $(u_1,v)$ and $(u_4,v)$. The successor edges are now as follows: $$\begin{aligned} &(u_1,v) \longrightarrow (v,w_4) & (u_2,v) \longrightarrow (v,w_1)\\ &(u_3,v) \longrightarrow (v,w_2) & (u_4,v) \longrightarrow (v,w_3)\end{aligned}$$ For $i > 1$ and cycle $C_i$, the successor of the edge $(u_i,v)$ is edge $(v,w_{i-1})$, the out-going edge of $C_{i-1}$. The edge $(u_1,v)$ of the cycle $C_1$ has the out-going edge of the last cycle as its successor edge. The edge $(u_1,v)$ is the first in-going edge of $v$ called the *first-in edge of $v$*. Let us briefly show how this merging can be implemented in W-streaming. When $C_1$ is kept in RAM, we store the edge $(u_1,v)$, since we don’t know its final successor edge yet. We also keep the edge $(v,w_1)$ in RAM, because it will be the successor edge of $C_2$. 
We call such an edge the *potential successor edge of $v$*. We can remove every edge of $C_1$ except $(u_1,v)$, together with their respective successor edges, since only the successor edge of $(u_1,v)$ will change over the course of the algorithm. Then iteratively, if we have a cycle $C_i$ for $i > 1$ in RAM, we assign the edge $(v,w_{i-1})$ as successor edge of $(u_i,v)$, replace $(v,w_{i-1})$ by $(v,w_i)$ in RAM as potential successor edge of $v$ for the next cycle, and then remove $C_i$ with the respective successor edges from RAM. Finally, once no more cycles with node $v$ occur, we can remove $(u_1,v)$ together with the last successor edge left from RAM (in our case this is $(v,w_4)$). Now, let us consider the more complicated case, where we wish to merge a cycle $C$ with multiple tours at several nodes. Consider a cycle $C$ and tours $T_1,\ldots,T_j$. Let $v_1,\ldots,v_j$ be nodes such that $v_i$ belongs to both $T_i$ and $C$ for all $i$. We distinguish between merging at three types of nodes: 1. For the nodes $v_1,\ldots,v_j$ we use the successor edge swapping. 2. Nodes in $C$ and in $(T_1 \cup \cdots \cup T_j)\backslash \{v_1,\ldots,v_j\}$: as only one successor edge swapping per tour is needed, these additional common nodes are not used, so for every $v \in (T_1 \cup \cdots \cup T_j)\backslash \{v_1,\ldots,v_j\}$ the in-going edge $(u,v)$ of $C$ keeps its successor edge; nothing happens here. 3. Nodes in $C\backslash (T_1 \cup \cdots \cup T_j)$. These nodes are visited by the algorithm for the first time. Since we might want to merge $C$ with future cycles at these nodes, we store for every $v \in C\backslash (T_1 \cup \cdots \cup T_j)$ the in-going edge $(u,v)$ of $C$ as first-in edge and the out-going edge $(v,w)$ of $C$ as potential successor edge. Note that the very first cycle found by the algorithm consists only of type 3 nodes, so every edge of this cycle will become a first-in edge. The challenge in the analysis is on the one hand to choose sufficiently many nodes where merging is done in a [*simultaneous*]{} way in order to stay within the one-pass complexity, and on the other hand to ensure that simultaneous merging enlarges and never decomposes subtours. Here we need two lemmas. Lemma \[technical2\] is used to show that merging of equivalence classes of Euler subtours leads to equivalence classes of a new subtour, thus subtour merging is invariant w.r.t. the equivalence class relation. This lemma is needed to prove Lemma \[mainLemma\], which shows that the sequence of successor functions iteratively built by refining the equivalence relation indeed defines Eulerian subtours. It also gives a criterion for whether an edge belongs to such a subtour. This criterion is finally used to show that the successor function returned by our algorithm is equal to the successor function associated to the last and most refined equivalence relation, and hence determines an Euler tour of the graph. For the reader's convenience we give a high-level description of our algorithm. A detailed description in pseudo code together with an outline of the analysis and the proof of the main theorem will follow in the next sections. We denote the set of first-in edges by $F$. 1. Iteratively: 1. \[step1\] Read edges from the input stream until the edges in RAM contain a cycle $C$. 2. \[step2\] If a node $v$ of $C$ is visited for the first time, - store the in-going edge $(u,v)$ of $C$ in $F$ (we will process these $\leq n$ edges in step \[step7\]), - remember the out-going edge $(v,w)$ as potential successor edge of $v$. 3.
\[step3\] Every node $v$ that has already been visited, has thereby been assigned to a unique tour $T$ with $v\in C\cap T$. For each tour that shares a node with $C$, choose exactly one common node. 4. \[step4\] For each node $v$ chosen in step \[step3\] “swap the successors”. That means, we write the in-going edge $e$ to the stream and take the recent potential successor edge of $v$ as successor edge for $e$. Then, save the out-going edge as new potential successor edge of $v$. 5. \[step5\] For each edge that has not been stored in $F$ (step \[step2\]) or written to the stream (step \[step4\]) so far, write this edge to the stream and take as successor the following edge in $C$. 6. \[step6\] All tours with common nodes together with all newly visited nodes are now assigned to a single tour. 2. \[step7\] After the end of the input stream is reached, all edges have either been written to the stream or stored in $F$. For every edge $(u,v) \in F$, write it to the stream and take as its successor the potential successor edge of $v$. An example of how the algorithm works can be found in the appendix. The Algorithm {#sec:algorithm} ============= To enable a clear and structured analysis, in this section we present the pseudo-code for our algorithm. For a better understanding it is split up into several procedures that correspond to the steps from our high level description in Section \[sec:idea\]. Note that these procedures are not independent algorithms, since they access variables from the main algorithm. The output is an Euler tour on $G$, given in the form of a successor function $\delta^*$. To be more precise, the output is a stream of triples $(v_1,v_2,s)$ with $v_1,v_2,s\in V$ and $\{v_1,v_2\}\in E$. Each of these triples represents the information $\delta^*((v_1,v_2))=(v_2,s)$. If a triple $(v_1,v_2,s)$ is written to the stream, we say that the edge $(v_2,s)$ is *marked as successor* of the edge $(v_1,v_2)$. For every node we store two important values during the algorithm: The value $t(v)$ that gives the tour $v$ is assigned to at the moment and the value $j(v)$ that indicates that $(v,j(v))$ is the potential successor edge of $v$.\ $c:=0$;  $F:=\emptyset$;  $E_{\mathrm{int}}:=\emptyset$;  for every $v\in V$: $j(v):=0,t(v):=0$ <span style="font-variant:small-caps;">Write-F</span> The algorithm searches the stream for cycles (Step \[step1\] in our high level description ) and whenever a cycle is found, we will run the procedure <span style="font-variant:small-caps;">Merge-Cycle</span> on this cycle. The procedure <span style="font-variant:small-caps;">Merge-Cycle</span> contains the steps \[step2\] to \[step6\] and <span style="font-variant:small-caps;">Write-F</span> corresponds to step \[step7\]\ <span style="font-variant:small-caps;">New-Nodes</span> <span style="font-variant:small-caps;">Construct-J-M</span> <span style="font-variant:small-caps;">Merge</span> <span style="font-variant:small-caps;">Write</span> <span style="font-variant:small-caps;">Update</span> The procedure <span style="font-variant:small-caps;">New-Nodes</span> implements step \[step2\]  If a node $v$ is processed the very first time by the algorithm, this is indicated by $t(v)=0$. 
If this is the case, we store the corresponding in-going edge in the set $F$ and store the next node on the cycle in $j(v)$ (this is, the edge $(v,v_{i+1})$ becomes the potential successor of $v$).\ The procedure <span style="font-variant:small-caps;">Construct-J-M</span> is a realization of step \[step3\]  For every value $j\neq 0$, we pick exactly one node $v$ with $t(v)=j$ if there is one. These nodes are stored in $J$, their values are stored in $M$. The nodes in $J$ are the “chosen” nodes we want to use for merging tours. If two nodes already have the same value in $t$, this means they are already part of the same tour (see Lemma \[mainLemma\]) and we want to avoid using both of them for merging.\ $M=\emptyset$;  $J=\emptyset$ In the following procedure <span style="font-variant:small-caps;">Merge</span>, we use the nodes from $J$ to merge all tours that share a node in the cycle $C$ by edge-swapping (step \[step4\]).\ In the procedure <span style="font-variant:small-caps;">Write</span>, we take care of all the edges that have not been stored in $F$ and have not been written to the stream in the procedure <span style="font-variant:small-caps;">Merge</span> (Step \[step5\]).\ In the procedure <span style="font-variant:small-caps;">Update</span> we update the $t$-values to implement step \[step6\]  After this step we can be sure that any two nodes $v,v'\in V$ with $t(v)=t(v')\neq 0$ belong to the same tour, whereas $t(v)=0$ means that $v$ has not been processed so far.\ $a:=0$ Finally, in the procedure <span style="font-variant:small-caps;">Write-F</span> (step \[step7\]), the first-in edges that have been stored in $F$ during the algorithm are written to the stream with proper successors.\ Analysis {#sec:analysis} ======== Subtour representation by equivalence classes --------------------------------------------- In this subsection we present some basic definitions and results that allow us to transfer the problem of tour merging in a graph to the language of equivalence relations. This will allow an elegant and clear analysis of our algorithm in Section \[sec:analysis\]. - Let $G=(V,E)$ be an undirected graph. An *orientation* of the edges of $G$ is a function $R:E\rightarrow V^2$ such that for every edge $\{u,v\}\in E$ either $R(\{u,v\})=(u,v)$ or $R(\{u,v\})=(v,u)$. So $R(G):=(V,R(E))$ is a directed graph. - Let $\vec{G}=(V,\vE)$ be a directed graph. A *successor function* on $\vG$ is a function $\delta:\vE\rightarrow \vE$ with ${\delta(e)}_{(1)}=e_{(2)}$ for all $e\in\vE$. - Let $\vG=(V,\vE)$ be a directed graph with successor function $\delta$. We define the relation $\equiv_\delta$ on $\vE$ by $e\eqd e':\Leftrightarrow \exists k\in\mathbb{N}:\delta^k(e)=e'$, where $\delta^k$ denotes the $k$-wise composition of $\delta$. So $e\eqd e'$ means that $e'$ can be reached from $e$ by iteratively applying $\delta$. \[lem:eqrel\] Let $\delta$ be a bijective successor function on a directed graph $\vG=(V,\vE)$. Then $\eqd$ is an equivalence relation on $\vE$. Reflexivity: Let $e\in \vE$. Since $\vE$ is finite, there exists $k\in\N$ with the following property: There exists $k'\in\N$ with $k'<k$ and $\delta^k(e)=\delta^{k'}(e)$. Let $k$ be minimal with this property. Since $\delta$ is injective, it follows that $\delta^{k-1}(e)=\delta^{k'-1}(e)$ and the minimality of $k$ enforces that $k'-1\notin\N$. So $k'=1$, therefore $\delta^k(e)=\delta(e)$ and by injectivity of $\delta$ we have $\delta^{k-1}(e)=e$. Symmetry: Let $e,e'\in\vE$ with $e\eqd e'$. 
Then there exists a minimal $k\in\N$ with $\delta^k(e)=e'$. As shown above, there also exists a $k'\in\N$ with $\delta^{k'}(e)=e$. Because $k$ is minimal, we have $k<k'$. It follows that $\delta^{k'-k}(e')=\delta^{k'}(e)=e$. Transitivity: Let $e,e',e''\in\vE$ with $e\eqd e'$ and $e'\eqd e''$. Then there exist $k_1,k_2\in\N$ with $\delta^{k_1}(e)=e'$ and $\delta^{k_2}(e')=e''$. So we have $\delta^{k_1+k_2}(e)=e''$. We denote the equivalence class of an edge $e\in\vE$ w.r.t. $\eqd$ by ${[e]}_\delta$. The following lemma is necessary to show that the equivalence classes of $\delta$ always form tours on $\vG$. \[lem:technical1\] Let $\vG=(V,\vE)$ be a directed graph with bijective successor function $\delta$ and the related equivalence relation $\eqd$. Then we have: - Let $e\in \vE$ and $k_1,k_2\in\N_0$ with $k_1\neq k_2$ and $\delta^{k_1}(e)=\delta^{k_2}(e)$. Then $|k_1-k_2|\geq|{[e]}_\delta|$. - For any $e\in\vE$ we have $\delta^{|{[e]}_\delta|}(e)=e$. (i): Assume for a moment that there exist $e\in\vE$ and $k_1,k_2\in\N$ with $\delta^{k_1}(e)=\delta^{k_2}(e)$ and $0<|k_1-k_2|<|{[e]}_\delta|$. Without loss of generality let $k_1>k_2$. We have $\delta^{k_1-k_2}(\delta^{k_2}(e))=\delta^{k_1}(e)=\delta^{k_2}(e)$ and via induction for every $s\in\N$, we get $\delta^{s(k_1-k_2)}(\delta^{k_2}(e))$ $=\delta^{k_2}(e)$. For the set $M:=\{\delta^k(e)|k_2\leq k<k_1\}$, we have $|M|\leq k_1-k_2<|{[e]}_\delta|$. But on the other hand, we also have ${[e]}_\delta\subseteq M$: Let $e'\in{[e]}_\delta={[\delta^{k_2}(e)]}_\delta$. Let $n\in\N$ with $e'=\delta^{n}(\delta^{k_2}(e))$. Then there exist unique $s,r\in\N_0$ with $0\leq r< k_1-k_2$ and $n=s(k_1-k_2)+r$. So $$e'=\delta^n(\delta^{k_2}(e))=\delta^{r}(\delta^{s(k_1-k_2)}(\delta^{k_2} (e)))=\delta^r(\delta^{k_2}(e))=\delta^{k_2+r}(e)\in M.$$ Now we have $|M| \leq k_1 - k_2 < |{[e]}_{\delta}| \leq |M|$, a contradiction. (ii): Assume that there exists $e\in\vE$ with $\delta^{|{[e]}_\delta|}(e)=e'\neq e$. Define $M:=\{\delta^k(e)|1\leq k\leq|{[e]}_\delta|\}$. Clearly, $M\subset {[e]}_{\delta}$. Case 1: $e\in M$. Then $\delta^0(e)=e=\delta^k(e)$ for some $k$ with $1\leq k<|{[e]}_\delta|$, in contradiction to (i). Case 2: $e\notin M$. Then $|M|<|{[e]}_\delta|$, By pigeon hole principle, there exist $1\leq k_1,k_2\leq|{[e]}_\delta|$ with $\delta^{k_1}(e)=\delta^{k_2}(e)$ in contradiction to (i). \[thm:classifyEulerTour\] Let $\vG=(V,\vE)$ be a directed graph with bijective successor function $\delta$ such that $e\eqd e'$ for all $e,e'\in\vE$. Then $\delta$ determines an Euler tour on $\vG$ in the following sense: For every $e\in \vE$ the sequence $(e_{(1)},{\delta(e)}_{(1)},\ldots,\delta^{|\vE|}{(e)}_{(1)})$ is an Euler tour on $\vG$. Let $e\in\vE$. Note that ${[e]}_\delta=\vE$. The sequence $(e_{(1)},{\delta(e)}_{(1)},\ldots,\delta^{|\vE|}{(e)}_{(1)})$ consists of $|\vE|$ edges, namely $e,\delta(e),\ldots,\delta^{|\vE|-1}(e)$. These edges are pairwise distinct: Otherwise, we would have $\delta^{k_1}(e)=\delta^{k_2}(e)$ for some $k_1,k_2\in\{0,\ldots,|\vE|-1\}$. Hence, $|k_1-k_2|<|\vE|$ in contradiction to Lemma \[lem:technical1\] (i). So the sequence is a trail. By applying Lemma \[lem:technical1\] (ii), we get $e=\delta^{|{[e]}_\delta|}(e)=\delta^{|\vE|}(e)$, thus the trail is a tour on $\vG$ and since it has length $|\vE|$, it is an Euler tour on $\vG$. 
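Theorem \[thm:classifyEulerTour\] reduces checking that a successor function determines an Euler tour to two conditions: bijectivity and a single equivalence class. The following small sketch (Python, written for illustration and not part of the paper) walks a successor function stored as a dictionary from an arbitrary edge and verifies both conditions.

```python
def is_euler_tour(successor):
    """Check whether a successor function delta on directed edges
    determines an Euler tour: delta must be bijective and all edges
    must lie in one equivalence class (cf. Theorem classifyEulerTour).

    successor -- dict mapping each directed edge (u, v) to its
                 successor edge (v, w).
    """
    edges = set(successor)
    # bijectivity on a finite set: the image must equal the edge set
    if set(successor.values()) != edges:
        return False
    # a successor edge must start where the previous edge ends
    if any(e[1] != successor[e][0] for e in edges):
        return False
    # walk delta from an arbitrary edge; one class <=> every edge reached
    start = next(iter(edges))
    seen, e = set(), start
    while e not in seen:
        seen.add(e)
        e = successor[e]
    return e == start and seen == edges
```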
Before we start with a detailed memory- and correctness analysis, we show that at the end of the algorithm, every edge $\{u,v\}\in E$ has been written to the output stream exactly once, either in the form $(u,v)$ or in the form $(v,u)$. We also show that $|E_{\mathrm{int}}|\leq n$ all the time. \[lem:everyEdgeWritten\] - After each processing of an edge (lines $2$ to $5$ in <span style="font-variant:small-caps;">Euler-Tour</span>) in the algorithm, the graph $G_{\mathrm{int}}=(V,E_{\mathrm{int}})$ is cycle-free so $|E_{\mathrm{int}}|\leq n$. If all nodes from $V$ have even degree in $G$, after completion of <span style="font-variant:small-caps;">Euler-Tour</span>, $E_{\mathrm{int}}=\emptyset$. - If all nodes from $V$ have even degree in $G$, after completion of <span style="font-variant:small-caps;">Euler-Tour</span> every edge $\{u,v\}\in E$ has been written to the stream either in the form $(u,v,s)$ or in the form $(v,u,s)$ for some $s\in V$. We start by proving the first part of (i) via induction over the number of already processed edges. If there are no edges processed so far, then $E_{\mathrm{int}}=\emptyset$, so $G_{\mathrm{int}}$ is cycle-free. Now let $k\in[|E|]\cup\{0\}$, let $G_k,G_{k+1}$ denote $G_{\mathrm{int}}$ after $k$ resp. $k+1$ edges have been processed and let $G_k$ be cycle-free. Let $e$ denote the $(k+1)$-th processed edge. When $e$ is added to $G_{\mathrm{int}}$, it may produce a cycle $C$. If $e$ does not produce a cycle, then $G_{k+1}=G_k\cup\{e\}$ is cycle-free and we are done. If $e$ produces a cycle $C$, then (at lines $6,7$ in <span style="font-variant:small-caps;">Merge-Cycle</span>) $C$ is deleted from $E_{\mathrm{int}}$ and because $e\in C$, we get $G_{k+1}=(G_k\cup\{e\})\setminus C\subseteq G_k$ and we are done by the induction hypothesis. Now assume for a moment that $E_{\mathrm{int}}\neq \emptyset$ at the end of <span style="font-variant:small-caps;">Euler-Tour</span>. We know that $G_{\mathrm{int}}$ is cycle-free at this time, so $G_{\mathrm{int}}$ contains a node with odd degree in $G_{\mathrm{int}}$. Because we always delete whole cycles, the degree of this node in $G$ has to be odd as well, but then $G$ is not an Eulerian graph. In this case we might output a message that $G$ does not contain an Euler tour. About (ii). During <span style="font-variant:small-caps;">Euler-Tour</span>, every edge from $E$ is added to $E_{\mathrm{int}}$ at some point of time and there is only one way for an edge to be deleted from $E_{\mathrm{int}}$ again, namely in line $7$ of <span style="font-variant:small-caps;">Merge-Cycle</span>. At that point of time, the edge has either been written to the stream in <span style="font-variant:small-caps;">Merge</span> or <span style="font-variant:small-caps;">Write</span> (in which case we are done) or it has been added to $F$ in <span style="font-variant:small-caps;">New-Nodes</span>. In that case it is written to the stream in <span style="font-variant:small-caps;">Write-F</span>. Because, according to (i), $E_{\mathrm{int}}=\emptyset$ at the end of <span style="font-variant:small-caps;">Euler-Tour</span>, at this point of time, every edge must have been written to the stream in exactly one of the two ways. The idea of (i) is that every time a cycle occurs in $E_{\mathrm{int}}$, we delete this cycle so we assure that $E_{\mathrm{int}}$ becomes cycle-free again (since we only add one edge at a time). 
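Lemma \[lem:everyEdgeWritten\] (i) rests on the invariant that $E_{\mathrm{int}}$ is kept cycle-free, so it never holds more than $n$ edges. A minimal sketch of this part of step \[step1\] is given below (Python, illustrative only; the paper does not prescribe how the cycle is detected, and the simple DFS on the stored forest used here is our own assumption).

```python
def add_edge_and_extract_cycle(e_int, edge):
    """Insert an undirected edge into the cycle-free edge set e_int.
    If the new edge closes a cycle, remove that cycle from e_int and
    return its node sequence; otherwise store the edge and return None.

    e_int -- set of frozenset({u, v}) edges, kept cycle-free (<= n edges)
    """
    u, v = tuple(edge)

    # DFS from u to v in the current forest; since e_int is cycle-free,
    # a u-v path is unique and, closed by the new edge, forms the cycle.
    def dfs(node, target, parent, path):
        if node == target:
            return path
        for f in e_int:
            if node in f:
                nxt = next(x for x in f if x != node)
                if nxt != parent:
                    found = dfs(nxt, target, node, path + [nxt])
                    if found:
                        return found
        return None

    path = dfs(u, v, None, [u])
    if path is None:                 # no cycle closed: just store the edge
        e_int.add(frozenset({u, v}))
        return None
    cycle = path + [u]               # u ... v plus the closing edge (v, u)
    for a, b in zip(cycle, cycle[1:]):
        e_int.discard(frozenset({a, b}))   # delete the cycle from RAM
    return cycle
```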
Memory Requirement ------------------ For the memory estimation, we have to consider the variables $j(v),t(v)$ for all $v\in V$, the sets $F,E_{\mathrm{int}},J$, and $M$, and the counter $c$. By Lemma \[lem:everyEdgeWritten\] (i), $|E_{\mathrm{int}}|\leq n$ and, with some straightforward considerations, we can estimate the memory requirement for the other parameters leading to the following lemma. \[lem:memory\] Algorithm <span style="font-variant:small-caps;">Euler-Tour</span> needs at most $\mathcal{O}(n\log n)$ bits of RAM. We consider the different parameters. About $c$: We show that $c\leq n/3$ at every time, so $\log n$ bits suffice to store $c$. $c$ is initiated with $0$ and changed in the procedure <span style="font-variant:small-caps;">Update</span> if and only if $M=\emptyset$ at that point of time. This only happens if for every node $v$ of the considered cycle, we have $t(v)=0$, which means that none of the cycle nodes was considered before. This case can occur at most $n/3$ times during the algorithm, because there can be no more than $n/3$ node disjoint cycles in $G$, so $c\leq n/3$. About $j(v)$: In this variable we store the label of a node, so for fixed $v$, $\log n$ bits suffice and altogether $n\log n$ bits suffice. About $t(v)$: We prove that for any $v\in V$ $t(v)\leq n$ at any time: Assume for a moment that this is not the case. Consider the first point of time $T$ in which $t(v)$ is set to a value $>n$ for some $v\in V$. $t(v)$ is only changed in the procedure <span style="font-variant:small-caps;">Update</span>, line $9$ or $11$. In both cases the value is set to $r$ which is either $c$ (line $4$) or $\min(M)$ (line $6$). We already showed $c<n$. Hence, by our assumption, $\min(M)>n$ at that point of time. But this implies that at the time of the construction of $M$, there already existed a node $u\in V$ with $t(u)>n$ in contradiction to the choice of $T$. About $E_{\mathrm{int}},F,J,M$: Because a single element of each of these sets can be stored in $\log n$ bits, it suffices to show that the cardinalities of these sets do not exceed $n$. For $E_{\mathrm{int}}$, this is shown in Lemma \[lem:everyEdgeWritten\]. For $J$ and $M$, it follows directly from the construction (see Procedure <span style="font-variant:small-caps;">Construct-J-M</span>). In the set $F$, for every node we collect the first edge that leads into this node (see Procedure <span style="font-variant:small-caps;">New-Nodes</span>, lines $2$ and $4$), so clearly $|F|\leq n$. Correctness ----------- In this subsection, we prove that $\delta^*$ determines an Euler tour on $G$, provided that $G$ is Eulerian (Theorem \[finalthm\]). This is done with the help of Theorem \[thm:classifyEulerTour\], where bijectivity of $\delta^*$ and the condition that $\delta^*$ induces only one equivalence class is required. In the following, we show that these assumptions are true for $\delta^*$ by generating a sequence of bijective successor functions $\delta^*_0,\ldots,\delta^*_N$ such that $\delta^*_0$ is bijective, $\delta^*_N=\delta^*$ and $\delta_{i+1}^*$ emerges from $\delta_{i}^*$ by swapping of edge successors. Lemma \[lem:everyEdgeWritten\] (ii) induces an orientation on $E$ which we call $R^*$: For all $\{u,v\}\in E$, we define $$R^*(\{u,v\}):=(u,v) \text{ if }(u,v)\text{ has been written to the output stream}.$$ From now on, let $C_1,\ldots, C_N$ denote the cycles found in $E_{\mathrm{int}}$ by the algorithm in chronological order. 
For $k \in \{0,\ldots,N\}$ and a variable $x$, we denote by $x_k$ the value of $x$ after the $k$-th call of <span style="font-variant:small-caps;">Merge-Cycle</span>. For $k = 0$, this means the initial value of $x$. For each $i\in[N]$, let $C_i=(v_1^{(i)},\ldots,v_{\ell_i}^{(i)})$ be the form of the cycle given to <span style="font-variant:small-caps;">Merge-Cycle</span>. Define $\delta^c_i:E(C_i)\rightarrow E(C_i)$ by $$\delta^c_i\left((v_j^{(i)},v_{j+1}^{(i)})\right):=(v_{j+1}^{(i)},v_{j+2}^{(i)})$$ for every $j\in[\ell_i]$ and let $\delta^c:R^*(E)\rightarrow R^*(E)$ denote the unique successor function with $\delta^c|_{E(C_i)}=\delta^c_i$ for all $i\in[N]$. So $\delta^c$ is the natural successor function induced by the cycles $C_1,\ldots,C_N$. \[lem:delc\] The successor function $\delta^c$ is bijective and for any two edges $e,e'$ we have $e\equiv_{\delta^c}e'\Leftrightarrow \exists i\in[N]: e,e'\in C_i$. We first show that $\delta^c$ is surjective: Let $e\in R^*(E)$. Then there exist $k\in[N]$ and $i\in\N$ such that $C_k=(v_1,\ldots, v_{\ell_k})$ and $e=(v_i,v_{i+1})$. Then $\delta^c(v_{i-1},v_i)=(v_i,v_{i+1})=e$. Because $R^*(E)$ is finite, $\delta^c$ is bijective. Now let $e,e'\in R^*(E)$ with $e\eqc e'$. Let $i\in[N]$ such that $e\in C_i$. Since $\delta^c(C_i)=C_i$, it follows that $e'\in C_i$. Now let $e,e'\in C_i$ for some $i\in[N]$, say $C_i=(v_1,\ldots,v_{\ell_i})$, and let $j,k\in[\ell_i]$ with $e=(v_j,v_{j+1})$ and $e'=(v_k,v_{k+1})$. W.l.o.g. let $j<k$ and set $r:=k-j$. Then ${(\delta^c)}^r(e)=e'$, so $e\equiv_{\delta^c}e'$. Now let $k\in\{0,\ldots,N\}$. We consider the point of time right after the $k$-th iteration of <span style="font-variant:small-caps;">Merge-Cycle</span> (for $k=0$ this means the very beginning of the algorithm). We call edges from $\bigcup\limits_{i=1}^k E(C_i)$ *processed edges*, since those edges have already been loaded into and then deleted from $E_{\mathrm{int}}$. All processed edges can be divided into two types: - Type A: The edge has been written to the stream with a dedicated successor. - Type B: The edge has been added to $F$. These are the only possible cases for processed edges because an edge that is deleted from $E_{\mathrm{int}}$ is either written to the stream or added to $F$. This leads to the following definition. For every $k\in\{0,\ldots,N\}$ define the function $\delta_k:\bigcup\limits_{i=1}^k E(C_i)\rightarrow \bigcup\limits_{i=1}^k E(C_i)$ by $$(u,v)\mapsto \begin{cases} e' &\text{if } (u,v) \text{ is of type A with successor }e'\\ (v,j_k(v))&\text{if } (u,v) \text{ is of type B} \end{cases}$$ $$\text{ and define }\delta^*_k:=\begin{cases} \delta_k \text{ on }\bigcup\limits_{i=1}^{k}E(C_i)\\ \delta^c \text{ on }\bigcup\limits_{i=k+1}^{N}E(C_i). \end{cases}$$ Note that $\delta^*_0=\delta^c$ and $\delta^*_N=\delta^*$. \[obs:obs\] Let $k,\ell\in\{0,\ldots,N\}$ with $k<\ell$. Then for any $v,v'\in V$, $e\in R^*(E)$, we have - If $t_k(v)=t_k(v')\neq 0$, then $t_{\ell}(v)=t_{\ell}(v')$. - If $e\in C_{\ell}$, then ${[e]}_{\delk}={[e]}_{\delta^c}$. About (i). Let $v,v'\in V$ with $t_k(v)=t_k(v')\neq 0$. Assume for a moment that $t_{\ell}(v)\neq t_{\ell}(v')$. Then there exists $k\leq k'< \ell$ such that $t_{k'}(v)=t_{k'}(v')$ and $t_{k'+1}(v)\neq t_{k'+1}(v')$. Furthermore, $t_{k'}(v)\neq 0$, because $t_k(v)\neq 0$ and the value $t(v)$ is never set to $0$ after its initiation. We take a closer look at the $(k'+1)$-th call of <span style="font-variant:small-caps;">Merge-Cycle</span>.
If for a node its $t$-value is changed in this call, it is set to $a_{k'+1}$ (line $9$ or $11$ in <span style="font-variant:small-caps;">Update</span>), so we may assume that $t_{k'+1}(v)=a_{k'+1}\neq t_{k'+1}(v')$. But this implies that $t_{k'}(v)\in M$ or $v\in C_{k'+1}$, in which case also $t_{k'}(v)\in M$ (since $t_{k'}(v)\neq 0$). But then, $t_{k'}(v')=t_{k'}(v)\in M$ and therefore $t_{k'+1}(v')=a_{k'+1}=t_{k'+1}(v)$, in contradiction to our assumption. About (ii). Let $e\in C_\ell$. With Lemma \[lem:delc\], we get ${[e]}_{\delk}=E(C_\ell)$. Since $\ell>k$, $\delk (e')=\delta^c (e')$ for any $e'\in C_\ell$. Hence, $\delk (e)=\delta^c (e)$ and using $\delta^c(e)\in C_\ell$, by induction ${(\delk)}^j(e)={(\delta^c)}^j(e)$ for any $j>1$, which proves the claim. The following two lemmata form the technical foundation of our analysis. In Lemma \[technical2\] we repeat in a formal way the basic idea of tour-merging given in Section \[sec:idea\]. It is needed for the proof of Lemma \[mainLemma\]. \[technical2\] Let $\vG=(V,\vE)$ be a directed graph with bijective successor function $\delta$ and the related equivalence relation $\eqd$. Let $r\in\N$ and $e_1,\ldots,e_r\in\vE$ with $e_i\eqd e_j$ for every $i,j\in[r]$. Let $e_1',\ldots,e_r'\in\vE$ with $e_i'\neqd e_j'$ and $e_i\neqd e_i'$ for every $i,j\in[r]$. Let $\delta'$ be a successor function on $\vG$ with $\delta'(e)=\delta(e)$ for every $e\in \vE\setminus\{e_1,\ldots,e_r,e_1',\ldots,e_r'\}$ and $\delta'(e_i)=\delta(e_i')$ and $\delta'(e_i')=\delta(e_i)$ for any $i\in[r]$. Then, $\delta'$ is bijective and $$\begin{aligned} &{[e_1]}_{\delta'}=\bigcup\limits_{i=1}^{r}{[e_i']}_\delta\cup{[e_1]}_{\delta} \tag{P1}\label{prop1}\\ % \end{equation} % \begin{equation} &{[e]}_{\delta'}={[e]}_\delta \text{ for any } e\in\vE\setminus{[e_1]}_{\delta'}.\tag{P2}\label{prop2} \end{aligned}$$ Via induction over $r$. First of all notice that because of the definition of $\delta'$ and because $\delta$ is bijective, $\delta'$ is bijective as well. For $r=1$ to shorten notation, we write $e$ and $e'$ instead of $e_1$ and $e_1'$. We first show $$\label{techeq1} {[e]}_{\delta'}\subseteq {[e]}_\delta\cup{[e']}_\delta:$$ First we show that for any $e''\in {[e]}_\delta\cup{[e']}_\delta$, we have $\delta'(e'')\in {[e]}_\delta\cup{[e']}_\delta$: Let $e''\in{[e]}_\delta\cup{[e']}_\delta$. Then there exists $k\in\N$ such that $e''=\delta^k(e)$ or $e''=\delta^k(e')$. If $e''\in\{e,e'\}$, then $\delta'(e'')\in\{\delta(e),\delta(e')\}$ and otherwise $\delta'(e'')=\delta(e'')=\delta^{k+1}(e)$ or $\delta'(e'')=\delta^{k+1}(e')$, respectively. So in each case we have $\delta'(e'')\in {[e]}_\delta\cup{[e']}_{\delta}$. Since $e\in{[e]}_\delta\cup{[e']}_{\delta}$, it follows by induction on $n$ that ${(\delta')}^n(e)\in{[e]}_\delta\cup{[e']}_{\delta}$ for any $n\in\N$, so ${[e]}_{\delta'} \subseteq {[e]}_{\delta} \cup {[e']}_{\delta}$. Next, we show $$\label{techeq2} {[e]}_\delta\cup{[e']}_\delta\subseteq{[e']}_{\delta'}.$$ Let $e''\in{[e']}_\delta$. Then there exists $k\in\{1,\ldots,|{[e']}_\delta|\}$ with $e''=\delta^k(e')$. Since $e\notin{[e']}_\delta$ and $\delta^\ell(e')\neq e'$ for all $\ell\in\{1,\ldots,k-1\}$ (follows from Lemma \[lem:technical1\] (i)), we have $$\delta^{k}(e')=\delta(\delta^{k-1}(e'))=\delta'(\delta^{k-1}(e')) =\delta'(\delta'(\delta^{k-2}(e')))=\cdots ={(\delta')}^{k-1}(\delta(e')).$$ Hence $e''=\delta^k(e')={(\delta')}^{k-1}(\delta(e'))={(\delta')}^{k-1}(\delta'(e))={ (\delta')}^k(e)\in{[e]}_{\delta'}$. 
So we have $$\label{techeq3} {[e']}_\delta\subseteq{[e]}_{\delta'}$$ and analogously we get $$\label{techeq4} {[e]}_\delta\subseteq{[e']}_{\delta'},$$ Because $\delta(e')\in{[e']}_{\delta}\subseteq{[e]}_{\delta'}$, we have ${[\delta(e')]}_{\delta'}={[e]}_{\delta'}$ and it follows that $$\label{techeq5} {[e]}_{\delta'}={[\delta(e')]}_{\delta'}={[\delta'(e)]}_{\delta'}={[e']}_{ \delta'}.$$ Combining , , and , we proved . With , , and , we have $${[e]}_{\delta'}\subseteq{[e]}_\delta\cup{[e']}_\delta\subseteq{[e']}_{\delta'} ={[e]}_{\delta'},$$ so property  is proven. For , note that $\delta^k(e'')={(\delta')}^k(e'')$ for any $e''$ with $e''\neqd e, e''\neqd e'$ and any $k\in\N$. *Induction step:* Now let $r\in \N$ and let the claim be true for all $k\leq r\in\N$. Let $e_1,\ldots,e_{r+1}\in \vE$ with $e_i\eqd e_j$ for every $i,j\in[r+1]$. Let $e_1',\ldots, e_{r+1}'\in\vE$ with $e_i'\neqd e_j'$ and $e_i'\neqd e_i$ for every $i\neq j\in[r+1]$. Let $\delta'$ be a successor function on $\vG$ with $\delta'(e)=\delta(e)$ for every $e\in\vE\setminus\{e_1,\ldots,e_{r+1},e_1',\ldots e_{r+1}'\}$ and $\delta'(e_i)=\delta(e_i')$ and $\delta'(e_i')=\delta(e_i)$ for every $i\in[r+1]$. We define a successor function $\delta_r$ for $\vG$ by $$\delta_r:=\begin{cases} \delta' \text{ on } \vE\setminus\{e_{r+1},e'_{r+1}\}\\ \delta \text{ on } \{e_{r+1},e'_{r+1}\}. \end{cases}$$ With the induction hypothesis applied to $\delta$ and $\delta_r$, we get by  $$\label{deldelm1} {[e_1]}_{\delta_r}=\bigcup\limits_{i=1}^{r}{[e_i']}_{\delta}\cup{[e_1]}_\delta$$ and by  $$\label{deldelmp2} {[e_{r+1}']}_{\delta_r}={[e_{r+1}']}_{\delta}.$$ Now we apply the induction hypothesis to $\delta_r$ and $\delta'$ as follows: We take $\delta_r$ instead of $\delta$, $\delta'$ remains, $r=1$, $e_1$ resp. $e_1'$ are replaced by $e_{r+1}$ resp. $e_{r+1}'$. This gives $$\label{deldelm2} {[e_{r+1}]}_{\delta'}={[e_{r+1}']}_{\delta_r}\cup{[e_{r+1}]}_{\delta_r}.$$ Since $e_1\eqd e_{r+1}$, we get with  $$e_{r+1}\in{[e_{r+1}]}_{\delta}={[e_1]}_\delta\subseteq{[e_1]}_{\delta_r}$$ which implies $$\label{deldelm2a} {[e_{r+1}]}_{\delta_r}={[e_1]}_{\delta_r}.$$ Summarizing, we have $$\begin{aligned} {[e_{r+1}]}_{\delta'} &\stackrel{\eqref{deldelm2}}{=}{[e_{r+1}']}_{\delta_r}\cup{[e_{r+1}]}_{\delta_r} \notag \\ &\stackrel{\eqref{deldelm2a}}{=}{[e_{r+1}']}_{\delta_r}\cup{[e_1]}_{\delta_m} \label{eq1}\\ &\stackrel{\eqref{deldelm1}}{=}{[e_{r+1}']}_{\delta_r}\cup\Big( \bigcup\limits_{i=1}^{r}{[e_i']}_{\delta}\cup{[e_1]}_\delta\Big) \notag\\ &\stackrel{\eqref{deldelmp2}}{=}{[e_{r+1}']}_{\delta}\cup\Big( \bigcup\limits_{i=1}^{r}{[e_i']}_{\delta}\cup{[e_1]}_\delta\Big) \notag\\ &=\bigcup\limits_{i=1}^{r+1}{[e_i']}_{\delta}\cup{[e_1]}_\delta.\label{eq2} \end{aligned}$$ So  is proved, if ${[e_{r+1}]}_{\delta'}={[e_1]}_{\delta'}$. By  ${[e_1]}_\delta\subseteq{[e_{r+1}]}_{\delta'}$, so $e_1\in{[e_{r+1}]}_{\delta'}$ and hence $$\label{deldelm2b} {[e_{r+1}]}_{\delta'}={[e_1]}_{\delta'}.$$ For the proof of , let $e\in\vE\setminus{[e_1]}_{\delta'}$. Since $e\notin{[e_1]}_{\delta'}$, by  and  $e\notin{[e_1]}_{\delta_r}$. Applying  of the induction hypothesis to $\delta$ and $\delta_r$, gives us ${[e]}_{\delta_r}={[e]}_{\delta}$. We know ${[e_{r+1}]}_{\delta'}={[e_1]}_{\delta'}$, so $e\notin{[e_{r+1}]}_{\delta'}$. As above, we apply the induction hypothesis to $\delta_r$ and $\delta'$ and get ${[e]}_{\delta'}={[e]}_{\delta_r}$. Altogether ${[e]}_{\delta'}={[e]}_{\delta_r}=[e_\delta]$. 
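The effect described by Lemma \[technical2\] can be illustrated directly on a successor function stored as a dictionary: swapping the successors of $e_i$ and $e_i'$ merges the corresponding classes and leaves all other classes untouched. The sketch below (Python, purely illustrative and not part of the paper's pseudo code) performs such a simultaneous swap.

```python
def swap_successors(successor, pairs):
    """Merge tours by successor swapping (cf. Lemma technical2).

    successor -- dict: directed edge -> successor edge (bijective)
    pairs     -- list of (e_i, e_i_prime); all e_i lie in one class,
                 the e_i_prime lie in pairwise distinct other classes
    Returns a new successor function in which the classes of the
    e_i_prime are merged into the class of e_1 (property P1), while
    every other class stays unchanged (property P2).
    """
    delta = dict(successor)
    for e, e_prime in pairs:
        # both new values are taken from the *original* function,
        # matching the simultaneous definition of delta' in the lemma
        delta[e], delta[e_prime] = successor[e_prime], successor[e]
    return delta
```

For instance, for two edge-disjoint triangles through a common node $v$, one swap of the two in-going edges at $v$ turns the two 3-edge classes into a single 6-edge class, as in Figure \[fig:TwoTours\].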
Note that $\delta'$ emerges from $\delta$ by swapping of successors as explained in the beginning of Section \[sec:idea\]. The restriction $e_i'\neqd e_j'$ reflects the fact that we have to choose exactly one common node per tour for merging, as already explained in Section \[sec:idea\], see Figure \[fig:WrongSwap\]. \[mainLemma\] Let $k\in\{0,\ldots,N\}$. Then, $\delk$ is bijective and for any $(u,v),(u',v') \linebreak \in R^*(E)$, we have - If $(u,v),(u',v')$ are processed edges, then $(u,v)\eqk(u',v')\Leftrightarrow t_k(u)=t_k(u')$. - If $(u,v)$ is a processed edge, then $t_k(u)=t_k(v)$. - If $t_k(u)=0$, then $(u,v)\eqk(u',v')\Leftrightarrow (u,v)\equiv_{\delta^c}(u',v')$. Claim (i) says that the procedure <span style="font-variant:small-caps;">Update</span> works correctly, i.e., that the $t$-value of a node (if it isn’t $0$) always represents the tour it currently is associated to. Claim (ii) says that after an edge has been processed, both of their nodes are associated to the same tour. So after the algorithm has finished, every node of $G$ is in the same tour as its neighbor. We prove all claims via one induction over $k$. For $k=0$ we have $\delta^*_0=\delta^c$ which is bijective (Lemma \[lem:delc\]). Moreover, no edge has been processed so far, so (i) and (ii) are trivially fulfilled and (iii) follows directly from $\delta^*_0=\delta^c$. Now let all of the claims be true for $k\in \{0,\ldots,N-1\}$. We start with proving the bijectivity and (i) for $k+1$. For this we take a closer look at the $(k+1)$-th call of <span style="font-variant:small-caps;">Merge-Cycle</span>. If $\delk\neq \delkp$, this change has to be happening in one of the procedures <span style="font-variant:small-caps;">New-Nodes</span>, <span style="font-variant:small-caps;">Merge</span> or <span style="font-variant:small-caps;">Write</span>, since these are the only procedures in which edges are written to the stream or added to $F$. First, note that for every edge $e$ written to the stream during <span style="font-variant:small-caps;">Write</span> or added to $F$ in <span style="font-variant:small-caps;">New-Nodes</span> it holds $\delk(e)=\delkp(e)$: If $e=(v_i,v_{i+1})$ is written to the stream during <span style="font-variant:small-caps;">Write</span>, it is written in the form $(v_i,v_{i+1},v_{i+2})$, so we have $\delkp(e)=(v_{i+1},v_{i+2})=\delta^c(e)=\delk(e)$. If $e=(v_{i-1},v_i)$ is added to $F$ during <span style="font-variant:small-caps;">New-Nodes</span>, it becomes a type-B-edge at this point, so $\delkp(e)=(v_i,j(v_i))$. Moreover, $j(v_i)$ is set to $v_{i+1}$ in line $3$, so $\delkp(e)=(v_i,v_{i+1})=\delta^c(e)=\delk(e)$. So we may concentrate on the procedure <span style="font-variant:small-caps;">Merge</span>: Here we process every node from the set $J_{k+1}$. Let $r:=|J_{k+1}|$, for instance $J=\{w_1,\ldots,w_r\}$. Each of these nodes $w_i$ has been processed before, hence, there is a unique edge in $F_k$ that ends in $w_i$ and which we denote by $e_i$. Moreover, there is a unique edge in $C_{k+1}$ that ends in $w_i$ and which we denote by $e_i'$. Now let $i\in[r]$. We process $w_i$ in two steps: Step 1: $(w_i,j(w_i))$ is marked as successor of $e_i'$. So directly after this step, $e_i'$ and $e_i$ share the same successor, while the out-going edge of $w_i$ in $C_{k+1}$ has lost its predecessor. Step 2: $j(w_i)$ is set to the next node in the cycle, so that the out-going edge of $w_i$ becomes the successor of $e_i$. 
In these two steps we swapped the successors of $e_i$ and $e_i'$ and did not change anything else, so what we get is $$\delkp(e)=\delk(e)\text{ for any } e\in \vE\setminus\{e_1,\ldots, e_r,e_1',\ldots,e_r'\}$$ and for any $i\in[r]$ $$\delkp(e_i)=\delk(e_i')\text{ and }\delkp(e_i')=\delk(e_i).$$ Let $i,j\in[r]$ with $i\neq j$. We have $e_i'\eqk e_j'$, because $e_j'\in C_{k+1}={[e_i']}_{\delta^c}={[e_i']}_{\delk}$. We also have $e_i\neqk e_j$, which follows from $t_k(w_i)\neq t_k(w_j)$ (<span style="font-variant:small-caps;">Construct-J-M</span>, line $4$) together with the induction hypothesis. Finally we have $e_i\neqk e_i'$, because $e_i'\notin E(C_{k+1})={[e_i]}_{\delta^c}={[e_i]}_{\delk}$. So we can apply Lemma \[technical2\] with $\delta=\delk$ and $\delta'=\delkp$ and get the bijectivity of $\delkp$ and for every processed edge $e$ $$\begin{aligned} e\in{[e_1]}_{\delkp} \Leftrightarrow e\in\bigcup\limits_{i=1}^{r}{[e_i']}_{\delk}\cup{[e_1]}_{\delk} \Leftrightarrow t_k(e_{(1)})\in M_k \lor e\in C_{k+1} \Leftrightarrow t_{k+1}(e_{(1)})= a_{k+1} \end{aligned}$$ and $$\begin{aligned} e\notin{[e_1]}_{\delkp} &\Leftrightarrow e\notin\bigcup\limits_{i=1}^{r}{[e_i']}_{\delk}\cup{[e_1]}_{\delk} \Leftrightarrow t_k(e_{(1)})\notin M_k \land e\notin C_{k+1}\\ &\Leftrightarrow t_{k+1}(e_{(1)})=t_k(e_{(1)})\neq a_{k+1}. \end{aligned}$$ Now we are able to complete the proof of (i): Let $(u,v),(u',v')$ be processed edges. Case 1: $(u,v),(u',v')\in {[e_1]}_{\delkp}$. Then $(u,v)\eqkp (u',v')$ and $t(u)=a_{k+1}=t(u')$. Case 2: $(u,v) \in {[e_1]}_{\delkp},(u',v')\notin {[e_1]}_{\delkp}$. Then $(u,v)\neqkp (u',v')$ and $t(u)=a_{k+1}\neq t(u')$. Case 3: $(u,v) \notin {[e_1]}_{\delkp},(u',v')\in {[e_1]}_{\delkp}$. Analog to case 2. Case 4: $(u,v),(u',v')\notin {[e_1]}_{\delkp}$. Then $t_{k+1}(u)=t_k(u),t_{k+1}(u')=t_k(u')$ and (\[prop2\] of Lemma \[technical2\]) ${[(u,v)]}_{\delkp}={[(u,v)]}_{\delk}$ and ${[(u',v')]}_{\delkp}={[(u',v')]}_{\delk}$. So we have $$(u,v)\eqkp(u',v')\Leftrightarrow (u,v)\eqk(u',v')\Leftrightarrow t_k(u)=t_k(u')\Leftrightarrow t_{k+1}(u)=t_{k+1}(u').$$ About (ii). Let $(u,v)$ be a processed edge. If $(u,v)\in C_{k+1}$, then at the end of <span style="font-variant:small-caps;">Merge-Cycle</span> both $t(u)$ and $t(v)$ are set to the same value $a$. If $(u,v)\notin C_{k+1}$, then $(u,v)$ already was a processed edge before so by induction hypothesis and Lemma \[obs:obs\] we are finished. About (iii). Let $u\in V$ with $t_{k+1}(u)=0$. That means that $u$ is not processed in the first $k+1$ calls of <span style="font-variant:small-caps;">Merge-Cycle</span>. Especially we have $(u,v)\equiv_{\delkp}(u',v')\Leftrightarrow (u,v)\eqk(u',v')\Leftrightarrow (u,v)\equiv_{\delta^c}(u',v')$ by induction hypothesis. These results suffice to proof our main result, given in the following. \[finalthm\] If $G$ is Eulerian, $\delta^*$ determines an Euler tour on $G$. According to Theorem \[thm:classifyEulerTour\], it suffices to show that $\delta^*$ is bijective and that $e\equiv_{\delta^*}e'$ for any $e,e'\in R^*(E)$. Remember that $\delta^*=\delta^*_N$, so by Lemma \[mainLemma\] $\delta^*$ is bijective. For the second property, let $e,e'\in R^*(E)$ with $e=(u,v)$ and $e'=(u',v')$. If $G$ is Eulerian, it is connected, so there exists a $u$-$u'$-path $P$ in $G$. For every edge on $P$, either the edge itself or the corresponding reversed edge has been processed during the algorithm <span style="font-variant:small-caps;">Euler-Tour</span>. 
By Lemma \[mainLemma\] (ii), $t_N(x)=t_N(y)$ for all nodes $x,y$ of $P$, hence, $t_N(u)=t_N(u')$ and by Lemma \[mainLemma\] (i), we get $e\equiv_{\delta^*_N}e'$. Since $\delta^*_N=\delta^*$, we are done. Appendix {#appendix .unnumbered} ======== On the following two pages we present a working example for the method <span style="font-variant:small-caps;">Merge-Cycle</span> that corresponds to the steps \[step2\] to \[step6\] in our high level description. Note that every node has at most one in-going first-in edge and one out-going potential successor edge at a time.
--- abstract: 'In this paper, we seek to model the deformation of nucleated cells by single diode-laser bar optical stretchers. We employ a recently developed computational model, the Dynamic Ray-Tracing method, to determine the stress distribution induced by the applied optical forces on a capsule encapsulating a nucleus of different optical properties. These forces are shape-dependent and can deform real non-rigid objects, thus resulting in a dynamically changing optical stress distribution with cell and nucleus deformation. The Chinese hamster ovary cell is a common biological cell that is of interest to the biomedical community because of its use in recombinant protein therapeutics, and it is an example of a nucleated cell. To this end, we model Chinese hamster ovary cells as two three-dimensional elastic capsules of variable inner capsule size immersed in a fluid, where the hydrodynamic forces are calculated using the Immersed Boundary Method. Our results show that the presence of a nucleus has a major effect on the force distribution on the cell surface and the net deformation. Scattering and gradient forces are reported for different nucleus sizes and the effect of nucleus size on the cell deformation is discussed.' author: - | Ihab Sraj\ Division of Physical Sciences and Engineering\ King Abdullah University of Science and Technology\ Thuwal, Saudi Arabia\ *[email protected]* - | Joshua Francois\ Department of Mechanical Engineering\ University of Maryland Baltimore County\ Baltimore, Maryland 21250, USA - | David W.M. Marr\ Department of Chemical and Biological Engineering\ Colorado School of Mines,\ Golden, Colorado 80401, USA - | Charles D. Eggleton\ Department of Mechanical Engineering\ University of Maryland Baltimore County\ Baltimore, Maryland 21250, USA\ *[email protected]* title: Numerical Model for the Deformation of Nucleated Cells by Optical Stretchers --- Introduction ============ The ability to trap particles using laser light was discovered by Arthur Ashkin in 1970 [@Ashkin1970]. In this, gradient forces are created at the surface of transparent particles suspended in a medium of different refractive index and situated within a light gradient. In the ray-optics regime, where the particle size is much larger than the light wavelength [@Hulst1957], refraction of light rays of different intensities at the surface and within the particles results in a change in the total momentum between the entering and exiting light beam. These gradient forces, on the order of piconewtons, are capable of drawing microscopic particles into a region of highest light intensity [@Ashkin1986]. Scattering forces are also created that accelerate the particle in the direction of beam propagation towards its focus [@Ashkin1971]. With these opposing mechanisms, an equilibrium position is reached and the particle is held fixed (trapped) in the center of the beam focus as the light rays passing through and exiting the particle exert forces that are balanced with no net change in momentum. Optical traps or tweezers used to manipulate microscopic objects without any mechanical contact have become a major tool in biological research over the last thirty years. Cells have been stretched [@Guck2001], folded [@Gu2007] and even rotated [@Gu2007; @Mohanty2004] using single and multiple optical tweezers. Recently, the technique has been extended to study the properties of cells by observing their deformation [@Bronkhorst1995; @Guck2005; @Sraj2010a].
Guck *et al.* developed an optical stretcher that uses two counterpropagating diverging laser beams to trap cells individually along the aligned laser beams' axis [@Guck2001]. This optical stretcher has also been used to deform cells and to measure their membrane properties using a simple numerical model [@Guck2005]. Extending this, Sraj *et al.* developed a high-throughput optical stretcher using a single linear diode bar to trap and stretch bovine red blood cells (RBCs) [@Sraj2010a]. Theoretical and numerical studies have also been conducted to determine the induced optical force distribution. Using analytical solutions of the governing optical equations, Ashkin was the first to calculate the total forces of a single-beam gradient laser on solid spheres in the ray-optics (RO) regime [@Ashkin1992]. Guck *et al.* used the RO technique to determine the local optical force distribution exerted by the dual optical stretcher on spherical cells before they begin to deform [@Guck2001], and from it the stiffness of RBCs [@Guck2005]. In their method, the effect of subsequent deformation on the calculation of the force distribution was neglected and a constant rigid spherical cell morphology was assumed due to limitations of the method. Deformability of biological cells can result in a shape change under the influence of external flows or applied forces, and thus the local force distribution and total trapping forces can change significantly with cell deformation [@Sraj2010b]. To take this into account, the RO method has been improved to include different cell shapes such as oblate spheroids [@Sosa-Martinez2009] and even cylinders [@Gauthier1997a]. As an analytical method, RO remains a difficult approach for calculating the forces on more complex cell shapes like the RBC bi-concave discoid shape and deformable cells. To overcome these issues, we recently developed and implemented a dynamic ray-tracing (DRT) approach [@Sraj2010b] that, in addition to finding transient optical forces on deformable cells, solves for fluid-cell interactions [@Peskin1989; @Eggleton1998]. DRT offers the possibility of simulating different phenomena occurring in optical systems such as erythrocyte deformation in high-throughput optical stretchers [@Sraj2012] and optical levitation [@Chang2012]. The approach allows one to assess cell deformability and to investigate the optical parameters to better design traps and manipulate cells prior to performing experiments. In addition, due to its vector-based nature, DRT allows calculations for both anisotropic and inhomogeneous structures, cases that exist in real systems. CHO cells are the most commonly used mammalian host for industrial production of recombinant protein therapeutics [@Karthik2007] and have been used in related genetics studies [@Tjio1958]. Because of their importance, a number of previous studies have investigated the optical forces on CHO cells. For example, Wei *et al.* used a fiber-optic dual-beam trap to capture Chinese hamster ovary (CHO) cells and determine the associated three-dimensional optical force field [@Wei2006]. Chang *et al.* developed a model based on RO to calculate the optical force upon a solid spherically-symmetric multilayer sphere [@Chang2006]. This study showed that the magnitude of the optical forces is three times smaller than that upon a polystyrene bead of the same size and that the distribution of optical forces is much different from that upon a uniform particle.
Recently, Kim *et al.* computed the optical force on a pair of concentric spheres in a focused beam and determined the influence of refractive index differences and relative size between the inner and outer spheres on the optical force [@Kim2012]. None of these studies, however, took into account the deformability of both the cell and its encapsulated nucleus. We show here how the presence of a nucleus inside deformable cells alters the propagation of light rays due to the additional internal surface and the additional medium of different refractive index [@Meyer1975]. The variation in nucleus size may significantly influence the optical forces and ultimately the net deformation of both the cell and the nucleus itself. Numerical method ================ Simulating deformation of a cell via optical stretchers requires a two-step method to first determine optical stresses induced by the interaction of light with the cell surface and then model the cell-fluid interactions. Because the distribution of optical stresses is dependent on the shape of the cell, which in turn changes during deformation, these two steps are performed alternately until a steady-state shape is reached. In the case of nucleated cells, the optical stress calculation is additionally challenging due to the different external and internal morphologies in addition to membrane deformability. For this purpose, we resort to DRT to determine the optical stresses induced on the surfaces of deformable nucleated cells by optical stretchers [@Sraj2010a; @Sraj2012]. DRT, unlike the traditional RO method, is vector-based and is capable of determining the optical forces on cells of arbitrary shape and morphology. Cell-fluid interaction and the hydrodynamics, on the other hand, are solved using the Immersed Boundary Method (IBM). In this section we briefly describe the two methods. Dynamic Ray Tracing {#sec:drt} ------------------- DRT is a vector-based method developed by Sraj *et al.* [@Sraj2010b; @Sraj2012] to determine optical forces on the surface of any arbitrarily shaped cell, including deformable cells with asymmetrical geometries [@Chang2012]. Briefly, DRT considers a finite number of rays issued from a light source with given intensities and known direction. These rays are treated as vectors and traced as they intersect a surface. A ray-triangle intersection algorithm is then employed to determine the location of intersection; for this purpose the surface is divided into triangular elements. The laws of geometrical optics are then applied to find the refraction and reflection angles. The ray vectors are then updated and the procedure is repeated until each ray exits the cell. From the direction of the light rays within the cell one can calculate the trapping efficiency $Q$, a dimensionless factor representing the amount of momentum transferred [@Guck2001; @Sraj2010b]. The trapping efficiency $Q$ is independent of the laser power used and depends only on the object geometry and the reflectance of the medium. Elemental optical forces at any location on the cell are therefore found regardless of the initial cell shape. As rays are traced through successive surfaces, $Q$ is multiplied by a factor to account for energy loss from previous refractions. Internal and external reflections within the cell are neglected as their effects rapidly diminish.
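To make the per-ray bookkeeping concrete, the following minimal sketch (written in Python/NumPy only for brevity) refracts a single ray at a surface element using the vector form of Snell's law and accumulates the momentum it transfers, taking the momentum flux of a ray of power $P$ in a medium of index $n$ to be $nP/c$ and neglecting reflection losses; the function names and these simplifications are illustrative assumptions rather than the actual DRT implementation.

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum [m/s]

def refract(d, n_hat, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface with unit
    normal n_hat (oriented toward the incident side), going from index n1 to n2.
    Returns None for total internal reflection (ignored in this sketch)."""
    eta = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n_hat

def ray_force(d_in, n_hat, n1, n2, ray_power):
    """Momentum transferred per unit time by one refracted ray, neglecting reflection,
    assuming a momentum flux of n * P / c for a ray of power P in a medium of index n."""
    d_out = refract(d_in, n_hat, n1, n2)
    if d_out is None:
        return np.zeros(3)
    return (n1 * d_in - n2 * d_out) * ray_power / C

# example: a ray entering the cell front surface (medium n_m = 1.335, cytoplasm n_cyt = 1.37)
d_in = np.array([0.0, 0.0, 1.0])                    # propagation along +z
n_hat = np.array([0.0, 0.3, -1.0])
n_hat /= np.linalg.norm(n_hat)                      # outward normal facing the incoming ray
print(ray_force(d_in, n_hat, 1.335, 1.37, 1.0e-3))  # force contribution in newtons
```

Summing such elemental contributions over all rays and all surfaces, and normalizing by $n_{m}P/c$, gives the dimensionless trapping efficiency $Q$ that appears in the expression for the optical force below.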
Optical forces can be expressed as $$F _{optical} = \frac{n_{m}QP}{c},$$ where $n_m$ is the index of refraction of the buffer medium, $c$ the speed of light in vacuum, and $P$ the laser power. It is important to note that the resulting optical forces are added to the Navier-Stokes equations as body forces as described below. This method has been validated and applied to cells of different initial shapes [@Sraj2010b] and different optical applications [@Sraj2012; @Chang2012]. Here, DRT is used to model nucleated cells by considering two concentric spheres of different sizes, one representing the cell surface and another representing the nucleus. DRT is then employed to determine optical stresses induced on both surfaces. Immersed Boundary Method {#subsection:IBM} ------------------------ The IBM is a cell-fluid interaction solver that has been used extensively to simulate biological systems such as cell adhesion [@Gupta2010], cell adhesion in atomic force microscopy measurements [@Sraj2011] and red blood cell motion through microvascular bifurcation [@Xiong2012]. IBM splits the numerical solution onto two grids: a stationary grid that has a fixed position with time representing the three-dimensional fluid domain and a moving grid representing the two-dimensional immersed boundary. To this end, a cell is modeled as an elastic membrane that is deformable by any applied stress. The membrane is discretized into a finite number of flat triangular elements that remain flat after deformation. This approximation is valid given that the local radius of curvature during deformation is much larger than the membrane thickness and that bending stresses are negligible. Elastic forces at the discrete membrane nodes are found from their displacement (deformation) using a finite element model. We adopt an approach developed by Charrier *et al.* [@Charrier1989] and Shrivastava and Tang [@Shrivastava1993] that uses the principle of virtual work to find those forces from an appropriate strain energy density function. These forces and any external applied forces such as the optical forces are then distributed onto the fluid grid using an appropriate discrete delta function and added to the Navier-Stokes equations as body forces. The discrete delta function ensures that only membrane nodes in the sphere of influence of the fluid grid make a contribution to the local body forces. The Navier-Stokes equations are then solved for the fluid velocity. The no-slip boundary condition at the membrane surface is satisfied by allowing the membrane nodes to move with the local fluid velocity. The velocity of the membrane is found by summing of the velocities at the fluid grid nodes weighted by the same discrete delta function used for the distribution of body forces. This again ensures that only fluid grid nodes in the sphere of influence of the membrane node make a contribution to its velocity. Membrane nodes are then moved with the calculated velocity for one time step to a new position giving a new membrane shape. The procedure is repeated and elastic forces and optical forces are then calculated as described above to advance the flow for another time step. Model parameters ================ The CHO cell is a typical example of a nucleated cell. The size of such cells vary with radius $r_{cell}$ ranging from $5-7.5~\mu m$ [@Han2006] and nucleus radius $r_{nuc}$ varying following the relationship [@Brunsting1974]: $$r_{cell} = (1.38\pm 0.02)r_{nuc} + (0.03\pm 0.05). 
\label{eq:ratio}$$ The refractive index of the cytoplasm has been measured and reported as $n_{cyt}=1.37$ while that of the nucleus has been found to be slightly greater $n_{nuc} = 1.392$ [@Brunsting1974]. As the size of the cell has an effect on the forces induced by optical stretchers (more surface area intuitively results in large forces) and hence on the cell deformation, we seek to investigate the impact of the presence of the nucleus and its size on the optical stress distribution and resulting cellular deformation. For this purpose, we model CHO cells as two three-dimensional (3D) concentric elastic spherical capsules. The radius of the outer capsule is fixed and taken as the CHO cell average radius $r_{cell} = 5.6~\mu m$; however, the radius of the inner capsule representing the nucleus is varied from $r_{nuc} = 1.1-5~\mu m$. The ratio of the radius of the cell to the radius of the nucleus is denoted by $r = \frac{r_{nuc}}{r_{cell}}$. We note that a typical CHO cell has radii ratio of $r = 0.72$ from Equation \[eq:ratio\]. ![CHO cell model: $n_m<n_{cell}<n_{cyt}$ in a linear diode stretcher.[]{data-label="fig:model"}](figure1.pdf){width="100mm"} In our calculations, cells are assumed initially trapped and situated at the center of a laser beam created with a single linear diode bar of wavelength $\lambda = 808~nm$ and power $P = 12.5~ mW/\mu m$. The length of the diode lies in the $y$-axis and the laser beam direction is along the $z$-axis as shown in Fig. \[fig:model\]. The cell is assumed to be immersed in an aqueous medium of refractive index $n_m=1.335$ that is lower than the refractive index of both the cell and its nucleus ($n_m<n_{cyt}<n_{nuc}$) (Fig.  \[fig:model\]). The hydrodynamics and cell mechanics are calculated using the IBM. Both fluids inside and outside the cell are assumed incompressible and Newtonian with identical density $\rho = 1~g/cm^3$ and viscosity $\mu = 0.8~cP$. The cell membrane is assumed of Neo-Hookean material, as appropriate for most biological cells, and can be characterized using solely its stiffness $Eh$. Unless otherwise noted, membrane stiffness is taken as $Eh = 0.1~dyn/cm$. For the purpose of quantifying cell deformation from optical or hydrodynamic forces, we use the Taylor deformation parameter defined as: $DF = (L-B)/(L+B)$, where $L$ and $B$ are the major and minor semi-axis of a capsule in the $x-z$ plane. When viscous stresses, elastic forces and optical forces are balanced, cells adopt a steady state shape denoted by $DF_{\infty}$. The uniform grid used for the fluid solver has $64^3$ nodes with a grid spacing of $r_{cell}/8$ while the finite element cell grid has $20482$ triangular elements. A time step of $10^{-5} s$ was used in all computations to ensure numerical stability. Results and discussions ======================= Impact of nucleus size on optical forces ---------------------------------------- As a first step, we investigate the effect of nucleus size on the optical forces initially induced at the cell surface. For reference, we employ DRT to determine the forces induced on a cell with no nucleus. In this case, light rays emerging from the diode laser bar hit the front surface of the cell to create optical scattering forces along the laser beam axis whose direction is opposite to their propagation direction i.e. the negative $z-$axis direction as shown in Fig. \[fig:model\]. 
This is due to the momentum gained by the cell when the rays transit from a medium of lower refractive index to a medium of higher refractive index (the cell cytoplasm $n_{cyt} = 1.37$)  [@Guck2001]. Gradient optical forces are also created with a sum equal to zero as the center of the cell is aligned with the center of the laser beam. After refraction, the rays continue to hit the back surface of the cell where they refract again and transfer momentum inducing scattering forces in the positive direction. The magnitude of the net scattering force at the back surface of the cell is, however, greater than the net scattering force at the front surface resulting in a net total scattering force in the positive $z-axis$ direction. With our chosen laser, cell, and fluid properties, the magnitude of scattering force applied on the front surface is $23.1~pN$ while on the back surface is $26.7~pN$ with a net scattering force of $3.6~pN$. The scattering forces would both stretch and translate the cell away from the light source. The net gradient forces are again equal to zero on both the front and back surface of the cell; however, if we consider the net gradient force in the perpendicular direction to the laser beam on one half of the cell we find it equal to $20.7~pN$. The gradient forces contribute to the stretching of the cell as well but have no translation effect. In cells with a nucleus, light rays can hit up to four surfaces as shown in Fig. \[fig:model\] before exiting the cell from the back surface. This leads to scattering and gradient optical forces at both the outer cell and the nucleus surface. At the front cell surface, scattering forces remain unchanged and independent of the nucleus size as the rays first enter the cell as already described. As the refractive index of the nucleus is higher than that of the cytoplasm, scattering forces at the front surface are also in the negative direction while the same forces at the back surface are in the positive direction. However, the magnitude of the net scattering force on the nucleus is less than the net scattering force on the cell due to the smaller refractive index contrast between the nucleus and the cell compared with the cell and suspending medium ($\frac{n_{nuc}}{n_{cyt}}=\frac{1.392}{1.37} = 1.016$ versus $\frac{n_{cyt}}{n_m} = \frac{1.37}{1.335} = 1.030$ ). The magnitude of these forces depends on the nucleus size. For instance, a nucleus of radius $r_{nuc} = 4~\mu m$ experiences a scattering force of $4.54~pN$ on the front surface of the nucleus and a force of $5.24~pN$ on the back surface with a net scattering force on the nucleus of $0.7~pN$. When the nucleus size is increased to $r_{nuc} = 5~\mu m$, the net force on the nucleus increases to $8.75~pN$ and $10.4~pN$ on the front and back surface respectively with a total net force of $1.6~pN$. This increase in force has a significant effect on the nucleus deformation and the forces induced on the cell itself as discussed in the following section. As the rays reach the cell back surface, the forces induced are influenced by the previous two refractions at the nucleus. As the nucleus size increases, optical forces on the nucleus increase, rays gain more energy, and we see subsequent increase in the magnitude of optical forces at the back cell surface. For a nucleus of radius $r_{nuc} = 4~ \mu m$ the net force at the back of the cell is $27.1~pN$ and $27.4~pN$ for $r_{nuc} = 5~ \mu m$. 
The maximum increase in the net optical forces on the cell is thus $20.4\%$ compared with the nucleus-free case. Finally, for $r_{nuc} = 5~ \mu m$ the gradient forces calculated on the cell slightly decreased to $20.2~pN$ due to the presence of the nucleus while the gradient forces calculated on the nucleus increased to $8.6~pN$. The variation of the net scattering and gradient forces on the cell and its nucleus is shown in Fig. \[fig:fnet\] where we clearly see the effect of the size of the nucleus on the total net force induced on the cell surface. As the nucleus size increases, the cell will be exposed to larger forces for the same laser power. The nucleus acts as a lens that focuses the light rays and leads to higher optical forces at the nucleus and cell back surfaces. We finally note that we can determine the net scattering and gradient forces on a CHO cell of nominal size from the curves in Fig. \[fig:fnet\] using a vertical line (shown in magenta) that corresponds to the nominal CHO cell radius ratio of $r = 0.72$. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Comparison of the magnitude of optical forces on a CHO cell for different nucleus radii: (left) net scattering forces (right) gradient forces. The vertical line in magenta corresponds to the nominal CHO cell radius ratio of $r = 0.72$.[]{data-label="fig:fnet"}](figure2a.pdf "fig:"){width="45.00000%"} ![Comparison of the magnitude of optical forces on a CHO cell for different nucleus radii: (left) net scattering forces (right) gradient forces. The vertical line in magenta corresponds to the nominal CHO cell radius ratio of $r = 0.72$.[]{data-label="fig:fnet"}](figure2b.pdf "fig:"){width="45.00000%"} ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Influence of nucleus size on net cell deformation ------------------------------------------------- From our calculations, it is clear that the presence of a nucleus has a significant impact on the initial optical force distribution as these forces deform and stretch CHO cells. Changes in cell shape lead to a new force distribution. To calculate these, optical forces are added as body forces to the surrounding fluid. DRT is then employed to update the optical force distribution as the cell shape is changing until steady state when the elastic and applied optical forces are equal. 
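Steady state is detected in practice from the evolving cell shape; as a simple illustration, the Taylor deformation parameter used for this purpose (defined in the previous section) might be evaluated from the discretized membrane nodes as in the sketch below, which assumes the deformed capsule stays roughly ellipsoidal and axis-aligned and is not taken from our solver.

```python
import numpy as np

def taylor_df(nodes):
    """Taylor deformation parameter DF = (L - B) / (L + B) in the x-z plane, taking the
    semi-axes L and B as half the node extents along z (the beam axis) and x; this
    assumes the deformed capsule remains approximately ellipsoidal and axis-aligned."""
    L = 0.5 * (nodes[:, 2].max() - nodes[:, 2].min())
    B = 0.5 * (nodes[:, 0].max() - nodes[:, 0].min())
    return (L - B) / (L + B)

# an undeformed spherical capsule gives DF ~ 0; stretching it along z gives DF > 0
rng = np.random.default_rng(0)
theta, phi = np.arccos(rng.uniform(-1, 1, 4000)), rng.uniform(0, 2 * np.pi, 4000)
r = 5.6e-6  # CHO cell radius [m]
sphere = r * np.column_stack((np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)))
print(taylor_df(sphere))                                  # ~ 0 for a sphere
print(taylor_df(sphere * np.array([0.95, 0.95, 1.10])))   # > 0 after stretching along z
```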
The net deformation of both the cell and the nucleus is quantified using the Taylor parameter deformation $DF$ shown in Fig. \[fig:dft\]. The figure indicates that in the case of small nuclei, $DF$ of the cell increases to a steady value that is higher than the reference case of no-nucleus (shown on all panels for comparison). This is due to the slight decrease in gradient forces that lead to more deformation in the z-direction and thus higher net deformation. $DF$ of the nucleus, however, is negligible due to the small magnitudes of the optical forces created. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3a.pdf "fig:"){width="40.00000%"} ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3b.pdf "fig:"){width="40.00000%"} ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3c.pdf "fig:"){width="40.00000%"} ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3d.pdf "fig:"){width="40.00000%"} ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3e.pdf "fig:"){width="40.00000%"} ![Evolution of net deformation $DF$ (of both a CHO cell and its nucleus) for different nucleus size as indicated. Evolution of net deformation of cell with no-nucleus is also shown for reference.[]{data-label="fig:dft"}](figure3f.pdf "fig:"){width="40.00000%"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- For larger nuclei, $DF$ shows similar trends where the cell deforms until a steady state shape is reached but with net deformation lower than the nucleus-free case. In this, we see that the net deformation of the cell decreases and the net deformation of the nucleus increases with $r$. To summarize the results discussed above, we show in Fig. 
\[fig:dfnet\] the steady state net deformation $DF_{\infty}$ of both the cell and the nucleus. Here, we clearly see that $DF_{\infty}$ of the cell is initially larger than the $DF_{\infty}$ of the nucleus-free case but then decreases as the radius ratio increases. At the same time, $DF_{\infty}$ of the nucleus increases with the radius ratio. The two curves eventually intersect when the radius of the nucleus becomes comparable to the radius of the cell. We also note here that we can determine the steady state net deformation $DF_{\infty}$ of a CHO cell of nominal size from the curves in Fig. \[fig:dfnet\] using a vertical line (shown in magenta) that corresponds to the nominal CHO cell radius ratio of $r = 0.72$. ![Comparison of net deformation of a CHO cell and its nucleus for different nucleus radii. The vertical line in magenta corresponds to the nominal CHO cell radius ratio of $r = 0.72$. []{data-label="fig:dfnet"}](figure4.pdf){width="55.00000%"} $DF$ calculations for CHO cells show a clear relationship between the size of the nucleus and the steady state net cell and nucleus deformation. As the size of the nucleus increases, the scattering forces applied at the cell surface increase, and one might therefore expect the corresponding cell deformation to increase as well. Fig. \[fig:dfnet\] shows, however, that as the size of the nucleus increases, the steady state deformation of the cell in fact decreases. Moreover, as the size of the nucleus approaches the size of the cell, the relative deformation of the nucleus, as characterized by the Taylor deformation parameter, surpasses that of the cell. The relationship between the size of the nucleus and the deformation of the cell means that cells with larger nuclei show less deformation when optically stretched than cells with smaller nuclei. Another observation is that the cell with no nucleus deformed to a steady state $DF$ value that is smaller than the value for the smallest radius ratio but larger than the value for the largest radius ratio. Conclusions =========== Chinese hamster ovary cells are of interest to the biomedical community because of their use in recombinant protein therapeutics. Unfortunately, no experiments have been performed to stretch this cell line in optical traps. A few experiments, however, have described the deformation of the cell nucleus. In these experiments researchers studied the interaction of cells with topographically patterned material surfaces to show the changes in shape, function, and viability of the cells. Only a few of these studies, however, indicated a possible impact on the behavior of organelles. For instance, Dalby *et al.* [@Dalby2003] quantified cell and nuclear morphology with light and fluorescence microscopy and showed a slight elongation of the nuclei in grooves. Yamauchi *et al.* also observed the deformation of cancerous cells and their nuclei during migration through the capillaries of mice [@Yamauchi2005]. This study aimed to find the minimum diameter of capillaries through which cancer cells are able to migrate, and the diameters of both the cell and its nucleus were measured for this purpose. In our work, we modeled the deformation of Chinese hamster ovary cells by single diode-laser bar optical stretchers. For this purpose, we extended the recently developed Dynamic Ray-Tracing method to determine the stress distribution induced by the applied optical forces on cells that have a nucleus.
Our results showed that the presence of a nucleus has a major effect on the force distribution on the cell surface and the net deformation. We also showed and quantified the effect of nucleus size on the net applied force as well as on cell deformation. We are working effectively on setting up experiments to stretch CHO cells and compare our numerical data with experimental results. Acknowledgments =============== The authors would like to acknowledge financial support provided by the National Institute of Health grant R01 AI079347-04. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. [10]{} A. Ashkin, “Acceleration and trapping of particles by radiation pressure,” Phys. Rev. Lett. **24**, 156–159 (1970). H. C. Van De Hulst, *[Light Scattering by Small Particles (Structure of Matter Series.)]{}* (Dover Pubn Inc). A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. **11**, 288–290 (1986). A. Ashkin and J. M. Dziedzic, “Optical levitation by radiation pressure,” Applied Physics Letters **19**, 283–285 (1971). J. Guck, R. Ananthakrishnan, H. Mahmood, T. J. Moon, C. C. Cunningham, and J. K[ä]{}s, “The optical stretcher: A novel laser tool to micromanipulate cells,” Biophysical Journal **81**, 767 – 784 (2001). M. Gu, S. Kuriakose, and X. Gan, “A single beam near-field laser trap for optical stretching, folding and rotation of erythrocytes,” Opt. Express **15**, 1369–1375 (2007). S. K. Mohanty, A. Uppal, and P. K. Gupta, “Self-rotation of red blood cells in optical tweezers: Prospects for high throughput malaria diagnosis,” Biotechnology Letters **26**, 971–974 (2004). 10.1023/B:BILE.0000030041.94322.71. P. Bronkhorst, G. Streekstra, J. Grimbergen, E. Nijhof, J. Sixma, and G. Brakenhoff, “A new method to study shape recovery of red blood cells using multiple optical trapping,” Biophysical Journal **69**, 1666 – 1673 (1995). J. Guck, S. Schinkinger, B. Lincoln, F. Wottawah, S. Ebert, M. Romeyke, D. Lenz, H. M. Erickson, R. Ananthakrishnan, D. Mitchell, and J. K “Optical deformability as an inherent cell marker for testing malignant transformation and metastatic competence,” Biophysical Journal **88**, 3689–3698 (2005). I. Sraj, C. D. Eggleton, R. Jimenez, E. Hoover, J. Squier, J. Chichester, and D. W. M. Marr, “Cell deformation cytometry using diode-bar optical stretchers,” Journal of Biomedical Optics **15**, 047010 (2010). A. Ashkin, “Forces of a single-beam gradient laser trap on a dielectric sphere in the ray optics regime,” Biophysical Journal **61**, 569–582 (1992). I. Sraj, A. C. Szatmary, D. W. M. Marr, and C. D. Eggleton, “Dynamic ray tracing for modeling optical cell manipulation,” Opt. Express **18**, 16702–16714 (2010). H. Sosa-Martínez and J. C. Gutiérrez-Vega, “Optical forces on a mie spheroidal particle arbitrarily oriented in a counterpropagating trap,” J. Opt. Soc. Am. B **26**, 2109–2116 (2009). R. C. Gauthier, “Theoretical investigation of the optical trapping force and torque on cylindrical micro-objects,” J. Opt. Soc. Am. B **14**, 3323–3333 (1997). C. S. Peskin and D. M. McQueen, “A three-dimensional computational method for blood flow in the heart i. immersed elastic fibers in a viscous incompressible fluid,” Journal of Computational Physics **81**, 372 – 405 (1989). C. D. Eggleton and A. S. 
Popel, “Large deformation of red blood cell ghosts in a simple shear flow,” Physics of Fluids **10**, 1834–1845 (1998). I. Sraj, A. C. Szatmary, S. A. Desai, D. W. M. Marr, and C. D. Eggleton, “Erythrocyte deformation in high-throughput optical stretchers,” Phys. Rev. E **85**, 041923 (2012). C. B. Chang, W.-X. Huang, K. H. Lee, and H. J. Sung, “Optical levitation of a non-spherical particle in a loosely focused gaussian beam,” Opt. Express **20**, 24068–24084 (2012). K. P. Jayapal, K. F. Wlaschin, W.-S. Hu, and M. G. S. Yap, “[Recombinant Protein Therapeutics from Cho Cells - 20 Years and Counting]{},” CHO Consortium: SBE Special Edition pp. 40–47 (2007). J. H. Tjio and T. T. Puck, “Genetics of somatic mammalian cells: Ii. chromosomal constitution of cells in tissue culture,” The Journal of Experimental Medicine **108**, 259–268 (1958). M.-T. Wei, K.-T. Yang, A. Karmenyan, and A. Chiou, “Three-dimensional optical force field on a chinese hamster ovary cell in a fiber-optical dual-beam trap,” Opt. Express **14**, 3056–3064 (2006). Y.-R. Chang, L. Hsu, and S. Chi, “Optical trapping of a spherically symmetric sphere in the ray-optics regime: a model for optical tweezers upon cells,” Appl. Opt. **45**, 3885–3892 (2006). S. B. Kim, K. H. Lee, S. S. Kim, and H. J. Sung, “Optical force on a pair of concentric spheres in a focused laser beam: ray-optics regime,” J. Opt. Soc. Am. B **29**, 2531–2541 (2012). R. Meyer and A. Brunsting, “Light scattering from nucleated biological cells,” Biophysical Journal **15**, 191 – 203 (1975). V. Gupta, I. Sraj, K. Konstantopoulos, and C. Eggleton, “Multi-scale simulation of l-selectin-psgl-1-dependent homotypic leukocyte binding and rupture,” Biomechanics and Modeling in Mechanobiology **9**, 613–627 (2010). 10.1007/s10237-010-0201-2. I. Sraj, K. Y. Chan, K. Konstantopoulos, and C. D. Eggleton, “A numerical study of the influence of cellular adhesion on prestress in atomic force microscopy measurements,” Journal of Advanced Microscopy Research **6**, 89–96 (May). W. Xiong and J. Zhang, “Two-dimensional lattice [B]{}oltzmann study of red blood cell motion through microvascular bifurcation: cell deformability and suspending viscosity effects,” Biomechanics and Modeling in Mechanobiology **11**, 575–583 (2012). 10.1007/s10237-011-0334-y. J. M. Charrier, S. Shrivastava, and R. Wu, “Free and constrained inflation of elastic membranes in relation to thermoforming — non-axisymmetric problems,” The Journal of Strain Analysis for Engineering Design **24**, 55–74 (1989). S. Shrivastava and J. Tang, “Large deformation finite element analysis of non-linear viscoelastic membranes with reference to thermoforming,” The Journal of Strain Analysis for Engineering Design **28**, 31–51 (1993). Y. Han, X.-M. Liu, H. Liu, S.-C. Li, B.-C. Wu, L.-L. Ye, Q.-W. Wang, and Z.-L. Chen, “Cultivation of recombinant chinese hamster ovary cells grown as suspended aggregates in stirred vessels,” Journal of Bioscience and Bioengineering **102**, 430 – 435 (2006). A. Brunsting and P. F. Mullaney, “Differential light scattering from spherical mammalian cells,” Biophysical Journal **14**, 439 – 453 (1974). M. J. Dalby, M. O. Riehle, S. J. Yarwood, C. D. Wilkinson, and A. S. Curtis, “Nucleus alignment and cell signaling in fibroblasts: response to a micro-grooved topography,” Experimental Cell Research **284**, 272 – 280 (2003). K. Yamauchi, M. Yang, P. Jiang, N. Yamamoto, M. Xu, Y. Amoh, K. Tsuji, M. Bouvet, H. Tsuchiya, K. Tomita, A. Moossa, and R. M. 
Hoffman, “Real-time in vivo dual-color imaging of intracapillary cancer cell and nucleus deformation and migration,” Cancer Research **65**, 4246–4252 (2005).
--- abstract: 'Markov Chain Monte Carlo (MCMC) is an invaluable means of inference with complicated models, and Hamiltonian Monte Carlo, in particular Riemannian Manifold Hamiltonian Monte Carlo (RMHMC), has demonstrated impressive success in many challenging problems. Current RMHMC implementations, however, rely on a Riemannian metric that limits their application to analytically-convenient models. In this paper I propose a new metric for RMHMC without these limitations and verify its success on a distribution that emulates many hierarchical and latent models.' author: - Michael Betancourt bibliography: - 'softAbsMetric.bib' title: A General Metric for Riemannian Manifold Hamiltonian Monte Carlo --- Riemannian Manifold Hamiltonian Monte Carlo provides a powerful tool for the efficient sampling from complex distributions, but the applicability of existing approaches has been limited by the dependency on the Fisher-Rao metric. In this paper I introduce a new metric that admits a general implementation of Riemannian Manifold Hamiltonian Monte Carlo and demonstrate its efficacy on a distribution that mirrors the pathological behavior of common models. Hamiltonian Monte Carlo ======================= Hamiltonian Monte Carlo (HMC) takes advantage of symplectic geometry to yield efficient Markov transitions [@Betan2011]. Augmenting the parameters of an $N$-dimensional target density, $\pi \! \left( \mathbf{q} \right)$, with corresponding momenta, $\mathbf{p}$, defines a joint density, $$\begin{aligned} \pi \! \left( \mathbf{p}, \mathbf{q} \right) &= \pi \! \left( \mathbf{p} | \mathbf{q} \right) \pi \! \left( \mathbf{q} \right) \\ &= \exp \left[ \log \pi \! \left( \mathbf{p} | \mathbf{q} \right) \right] \exp \left[ \log \pi \! \left( \mathbf{q} \right) \right] \\ & \propto \exp \left[ - T \! \left( \mathbf{p}, \mathbf{q} \right) \right] \exp \left[ - V \! \left( \mathbf{q} \right) \right] \\ &\equiv \exp \left[ - H \! \left( \mathbf{p}, \mathbf{q} \right) \right].\end{aligned}$$ The Hamiltonian, $H \! \left( \mathbf{p}, \mathbf{q} \right) = T \! \left( \mathbf{p}, \mathbf{q} \right) + V \! \left( \mathbf{q} \right)$, defines trajectories between points $\mathbf{z} = \left\{ \mathbf{p}, \mathbf{q} \right\}$ via the differential equations $$\begin{aligned} \frac{d \mathbf{q} }{dt} &= + \frac{ \partial H }{ \partial \mathbf{p} } \\ \frac{d \mathbf{p} }{dt} &= - \frac{ \partial H }{ \partial \mathbf{q} }.\end{aligned}$$ Because these trajectories preserve the value of the Hamiltonian and the differential volume $\mathrm{d}^{2N} \mathbf{z}$, they also define Markovian transitions with the stationary density $\pi \! \left( \mathbf{p}, \mathbf{q} \right)$. Alternating this Hamiltonian evolution with conditional samples of the momenta, $$\mathbf{p} \sim \pi \! \left( \mathbf{p} | \mathbf{q} \right) \propto \exp \left[ - T \! \left( \mathbf{p}, \mathbf{q} \right) \right],$$ yields an ergodic Markov chain sampling from $\mathbf{z}$ and, because the marginal of $\pi \! \left( \mathbf{p}, \mathbf{q} \right)$ is constructed to be the target distribution, the desired samples from $\pi \! \left( \mathbf{q} \right)$ follow by simply disregarding the momenta. No matter the choice of the kinetic energy, $T \! \left( \mathbf{p}, \mathbf{q} \right)$, the evolution equations incorporate the gradient of the potential, $V \! \left( \mathbf{q} \right)$, and hence higher order information about the target distribution. 
This gradient guides the Markov chain along regions of high probability mass and reduces random walk behavior. Note that, in practice, the Hamiltonian evolution cannot be performed analytically and we must resort to numerical integration. Error in the integration scheme introduces bias into the transitions, but this is readily avoided by considering the evolution not as a transition but rather as the proposal for a Metropolis transition [@Duane1987; @Neal2011]. The first [@Duane1987] and still most common choice of the conditional density, $\pi \! \left( \mathbf{p} | \mathbf{q} \right)$, is a standard gaussian, $$\pi \! \left( \mathbf{p} | \mathbf{q} \right) = \mathcal{N} \! \left( \mathbf{p} | \mathbf{0}, \mathbf{M} \right),$$ or $$\label{EMHMC} T \! \left( \mathbf{p}, \mathbf{q} \right) = \frac{1}{2} \mathbf{p}^{T} \cdot \mathbf{M}^{-1} \cdot \mathbf{p},$$ where the mass matrix $\mathbf{M}$ allows for a global decorrelation and rescaling of the parameters with respect to each other. This choice, however, ultimately limits the effectiveness of HMC when applied to intricate target distributions. Because $ \mathbf{p}^{T} \cdot \mathbf{M}^{-1} \cdot \mathbf{p}$ is a $\chi^{2}$ variate, in equilibrium $\Delta T \approx N / 2$ and, with the Hamiltonian conserved along each trajectory, this implies that the variation in the potential is also limited to $\Delta V \approx N / 2$. When the target distribution is highly correlated, the typical set spans a potential gap much larger than this: the resulting samples become highly correlated no matter how long the trajectories are evolved [@Neal2011] and the Markov chain devolves towards a random walk. Another issue with the simple choice is that the inevitable numerical integration introduces a spatial scale into the system via a finite step-size. Complicated target distributions will typically exhibit multiple spatial scales depending on the particular value of the parameters, and any single choice of a step-size will generate at least some inefficiency. If the step-size is chosen to maximize efficiency, as common in adaptive schemes, regions of the target distribution with large curvature, and hence small spatial scales, can be missed entirely by the numerical trajectories. These weaknesses can be overcome by appealing to a more sophisticated choice of the conditional density: a gaussian conditionally dependent on the $\mathbf{q}$ through a covariance matrix, $$\pi \! \left( \mathbf{p} | \mathbf{q} \right) = \mathcal{N} \! \left( \mathbf{p} | \mathbf{0}, \mathbf{\Sigma} \! \left( \mathbf{q} \right) \right),$$ or $$T \left( \mathbf{p}, \mathbf{q} \right) = \frac{1}{2} \mathbf{p}^{T} \cdot \mathbf{\Sigma}^{-1} \! \left( \mathbf{q} \right) \cdot \mathbf{p} + \frac{1}{2} \log | \mathbf{\Sigma} \! \left( \mathbf{q} \right) |.$$ Because the resulting Hamiltonian trajectories are related to geodesics on a Riemannian manifold with metric $\mathbf{\Sigma} \! \left( \mathbf{q} \right)$, this choice is known as *Riemannian Manifold Hamiltonian Monte Carlo* (RMHMC) [@Girolami2011]. Similarly, the constant metric of can be thought of as emulating dynamics on a Euclidean manifold, and to be consistent I will refer to use of the simpler Hamiltonian as *Euclidean Manifold Hamiltonian Monte Carlo* (EMHMC). The freedom in specifying a metric admits two significant improvements: a proper choice of $\mathbf{\Sigma} \! 
\left( \mathbf{q} \right)$ can dynamically decorrelate and rescale the target distribution to avoid inefficiencies in the numerical integration, while also yielding a dynamic determinant whose variations can compensate for much larger variations in the potential. What, however, exactly defines a proper choice for the metric? When the target distribution is a multivariate gaussian, $$V \! \left( \mathbf{q} \right) = \frac{1}{2} \mathbf{q}^{T} \cdot \mathbf{S}^{-1} \cdot \mathbf{q},$$ the target distribution is standarized by taking $\mathbf{\Sigma} \! \left( \mathbf{q} \right) = \mathbf{S}^{-1}$ [@Neal2011]. In a convex neighborhood any target distribution can be approximated by a multivariate gaussian, $$\pi \! \left( \mathbf{q} \right) \approx \mathcal{N} \! \left( \mathbf{q} | \mathbf{0}, \mathbf{H}^{-1} \right)$$ or, equivalently, $$V \! \left( \mathbf{q} \right) \approx \frac{1}{2} \mathbf{q}^{T} \cdot \mathbf{H} \cdot \mathbf{q},$$ with the Hessian matrix $$H_{ij} = \partial^{2} V / \partial q^{i} \partial q^{j},$$ which immediately motivates the candidate metric $\mathbf{\Sigma} \! \left( \mathbf{q} \right) = \mathbf{H}$. This metric quickly runs into problems, however, when the target distribution is not globally convex. In neighborhoods where the Hessian is not positive-definite, for example, the conditional density $\pi \! \left( \mathbf{p} | \mathbf{q} \right)$ becomes improper. Moreover, in the neighborhoods where the signature of the Hessian changes, the log determinant diverges and the Hamiltonian evolution becomes singular. These neighborhoods effectively partition the support of the target distribution into a disjoint union of compact neighborhoods between which the Markov chain cannot transition. One way to avoid indefinite metrics is to take advantage of any conditioning variables, $\mathbf{y}$, in the target distribution. Marginalizing the Hessian over these conditioning variables yields the Fisher-Rao metric [@Amari2007], $$\Sigma_{ij} = \mathbb{E}_{\mathbf{y}} \left[ \frac{ \partial^{2} V \! \left( \mathbf{q} | \mathbf{y} \right) }{ \partial q^{i} \partial q^{j} } \right],$$ which is guaranteed to be positive-semidefinite. For all but the simplest conditional distributions, however, the marginalization is unfeasible and, even when it can be performed analytically, the resulting metric can still be singular. Moreover, the marginalization removes the correlation between variables in many hierarchical and latent models, almost eliminating the effectiveness of the metric. Of course, all of this is immaterial if the target distribution lacks natural conditioning variables. We need a means of constructing a metric from the Hessian that is not only everywhere well-behaved but also practical to compute for any given target distribution. The SoftAbs Metric ================== With a careful application of matrix functions, it is possible to maintain the desirable behavior of the Hessian in convex neighborhoods while avoiding its singular behavior elsewhere. Moreover, because the functions are local the resulting metric is readily implemented for general distributions. Definition ---------- The exponential map [@Spivak2005a], $\exp$, is a matrix function from the space of all matrices to the component of the general linear group, $\mathrm{GL} \! \left(n\right)$, connected to the identity matrix: an isomorphism of the space of positive-definite matrices. 
Because this mapping preserves the symmetric part of the domain, any symmetric matrix, such as the Hessian, is guaranteed to be mapped to a symmetric, positive-definite matrix admissible as a Riemannian metric. One benefit of the exponential map is that it preserves the eigenbasis of the input matrix, $\mathbf{X}$. If $$\mathbf{X} = \mathbf{Q} \cdot \mbox{\boldmath{$\lambda$}} \cdot \mathbf{Q}^{T}$$ is the eigendecomposition of $\mathbf{X}$ with $\mbox{\boldmath{$\lambda$}} = \mathrm{Diag} \left( \lambda_{i} \right)$ the diagonal matrix of eigenvalues and $\mathbf{Q}$ the corresponding matrix of eigenvectors, then the exponential map yields $$\exp \! \mathbf{X} = \mathbf{Q} \cdot \exp \! \mbox{\boldmath{$\lambda$}} \cdot \mathbf{Q}^{T}.$$ The metric $\exp \! \mathbf{H}$ provides the same decorrelation as the Hessian but, unfortunately, also severely warps the eigenvalues and the corresponding rescaling of the local parameters. By combining multiple exponential mappings, however, we can largely preserve the spectral decomposition of the Hessian. In particular, the *SoftAbs* map $$\begin{aligned} {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{X} \raisebox{1pt}{$\wr$} } } & \equiv \\ &\left[ \exp \! \left( \alpha \mathbf{X} \right) + \exp \! \left( -\alpha \mathbf{X} \right) \right] \cdot \mathbf{X} \cdot \left[ \exp \! \left( \alpha \mathbf{X} \right) - \exp \! \left( - \alpha \mathbf{X} \right) \right]^{-1}\end{aligned}$$ approximates the absolute value of the eigenspectrum with a smooth function: $${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{X} \raisebox{1pt}{$\wr$} } } = \mathbf{Q} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } } \cdot \mathbf{Q}^{T},$$ where $$\begin{aligned} {\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } } &= \mathrm{Diag} \left( \lambda_{i} \frac{ e^{ \alpha \lambda_{i} } + e^{ -\alpha \lambda_{i} } }{ e^{ \alpha \lambda_{i} } - e^{ - \alpha \lambda_{i} } } \right) \\ &= \mathrm{Diag} \left( \lambda_{i} \coth \alpha \lambda_{i} \right).\end{aligned}$$ This map not only ensures that the transformed eigenvalues are positive but also regularizes any small eigenvalues that might introduce numerical instabilities (Figure \[fig:softAbsEigen\]). ![The SoftAbs map preserves the eigenbasis of the Hessian but transforms the eigenvalues, $\lambda$, with a smooth approximation to the absolute value. The inverse of the regularization parameter, $\alpha$, controls the “hardness” of the approximation; as $\alpha \rightarrow \infty$ the SoftAbs map reduces to the exact absolute value. \[fig:softAbsEigen\]](softAbsEigen.eps){width="3in"} Applying the SoftAbs map to the Hessian guarantees a well-behaved metric for RMHMC, ${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }$, that preserves the desired properties of the Hessian while regularizing its numerical singularities. In a practical implementation, $\alpha$ limits the scaling of the integration step-size and restrains the integrator from unwise extrapolations, emulating a trust region common in nonlinear optimization [@Celis1985]. This construction also motivates a family of approximate metrics with possible utility in circumstances of limited computational resources (Appendix ). 
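A minimal numerical sketch of the map, written here in Python/NumPy for brevity (the eigendecomposition route anticipates the implementation discussed below; the function name and the small-eigenvalue cutoff are illustrative choices, not the reference implementation):

```python
import numpy as np

def softabs(hessian, alpha=1.0e6):
    """SoftAbs map of a symmetric matrix: each eigenvalue lambda is replaced by
    lambda * coth(alpha * lambda), which approaches |lambda| for |lambda| >> 1/alpha
    and is regularized to the floor 1/alpha as lambda -> 0."""
    lam, Q = np.linalg.eigh(hessian)
    x = alpha * lam
    small = np.abs(x) < 1.0e-8
    soft_lam = np.where(small, 1.0 / alpha, lam / np.tanh(np.where(small, 1.0, x)))
    metric = (Q * soft_lam) @ Q.T   # Q . Diag(soft_lam) . Q^T
    return metric, soft_lam, Q

# an indefinite Hessian is mapped to a positive-definite metric
H = np.array([[1.0, 2.0],
              [2.0, -3.0]])
metric, soft_lam, Q = softabs(H)
print(np.linalg.eigvalsh(H))        # one negative eigenvalue
print(np.linalg.eigvalsh(metric))   # both positive after the SoftAbs map
```

Note how an indefinite Hessian is mapped to a positive-definite metric while eigenvalues far from zero are essentially unchanged in magnitude.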
Implementation -------------- In practice, exponential maps can be difficult to implement [@Moler2003]; the eigendecomposition used above, for example, can suffer from numerical instabilities when applied to general matrices because of ambiguities among the eigenvectors. The Hessian, however, is symmetric and the eigenvectors are guaranteed to be orthogonal. Consequently, the eigendecomposition is well-behaved and provides a practical means of computing the SoftAbs map. To implement the SoftAbs metric we first perform the eigendecomposition of the Hessian $$\mathbf{H} = \mathbf{Q} \cdot \mbox{\boldmath{$\lambda$}} \cdot \mathbf{Q}^{T},$$ and then reconstruct the metric as $${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } = \mathbf{Q} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } } \cdot \mathbf{Q}^{T},$$ with ${\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } } = \mathrm{Diag} \left( \lambda_{i} \coth \alpha \lambda_{i} \right)$. Hamiltonian evolution also requires two derivatives: the gradient of the quadratic form, $\mathbf{p}^{T} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \mathbf{p}$, and the log determinant, $\log \left| {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } \right|$. The latter can be computed as [@Aizu1963; @Wilcox1967] $$\begin{aligned} \partial \left( \mathbf{p}^{T} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \mathbf{p} \right) &= - \mathbf{p}^{T} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \partial {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \mathbf{p} \\ &= - \mathbf{M}^{T} \left[ \mathbf{J} \circ \mathbf{Q}^{T} \cdot \partial \mathbf{H} \cdot \mathbf{Q} \right] \mathbf{M},\end{aligned}$$ where $\circ$ denotes the Hadamard product, $$\mathbf{M} = {\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \mathbf{Q}^{T} \cdot \mathbf{p},$$ and $$J_{ij} \equiv \frac{ \lambda_{i} \coth \alpha \lambda_{i} - \lambda_{j} \coth \alpha \lambda_{j} }{ \lambda_{i} - \lambda_{j} }.$$ Note that when $\lambda_{i} = \lambda_{j}$, such as for the diagonal elements or degenerate eigenvalues, this becomes the derivative, $$J_{ij} \rightarrow \frac{ \partial }{ \partial \lambda_{i} } \lambda_{i} \coth \alpha \lambda_{i}.$$ Unfortunately, this form of the gradient is computationally inefficient, requiring $O \! \left( N^{3} \right)$ for each component of the gradient, and hence $O \! \left( N^{4} \right)$ overall. Taking advantage of the properties of the Hadamard product [@Magnus2007], however, the gradient can be manipulated to give $$\begin{aligned} \partial \left( \mathbf{p}^{T} {\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } }^{-1} \mathbf{p} \right) &= - \mathrm{Tr} \left[ \mathbf{Q} \cdot \mathbf{D} \cdot \mathbf{J} \cdot \mathbf{D} \cdot \mathbf{Q}^{T} \cdot \partial \mathbf{H} \right],\end{aligned}$$ where $\mathbf{D} = \mathrm{Diag} \left( \left( \mathbf{Q}^{T} \cdot \mathbf{p} \right)_{i} / \lambda_{i} \coth \alpha \lambda_{i}\right)$. If the matrix $\mathbf{Q} \cdot \mathbf{D} \cdot \mathbf{J} \cdot \mathbf{D} \cdot \mathbf{Q}^{T}$ is first cached, then each component of the gradient can be computed in only $O \! 
\left( N^{2} \right)$ so that the complete gradient does not exceed the $O \! \left( N^{3} \right)$ complexity of the decomposition itself. Similar Hadamard identities reduce the gradient of the log determinant to $$\begin{aligned} \partial \log \left | \wr \mathbf{H} \wr \right| &= \mathrm{Tr} \left[ \mathbf{Q} \left( \mathbf{R} \circ \mathbf{J} \right) \mathbf{Q}^{T} \cdot \partial \mathbf{H} \right],\end{aligned}$$ where $$\mathbf{R} = \mathrm{Diag} \left( \frac{1}{ \lambda_{i} \coth \alpha \lambda_{i} } \right).$$ Once again, caching the intermediate matrix, $\mathbf{Q} \left( \mathbf{R} \circ \mathbf{J} \right) \mathbf{Q}^{T}$, enables the full gradient to be computed in $O \! \left( N^{3} \right)$. These results admit an efficient symplectic integrator (Appendix B) for RMHMC; a C++ implementation is available online at <http://betanalpha.github.com/jamon/>. [c@c@c@c@c@c@c@c]{} **Algorithm** & **Warm-Up Iterations** & **Samples** & **$\mathbf{\epsilon}$** & **Accept Rate** & **CPU Time (s)** & **ESS** & **ESS / Time ($\mathbf{s^{-1}}$)** \ EMHMC & $10^{3}$ & $10^{5}$ & 0.001 & 0.999 & 1627 & 70.3 & 0.0432\ RMHMC & $10^{3}$ & $10^{3}$ & 0.21 & 0.946 & 6282 & 856 & 0.136\ Experiments =========== The utility of the SoftAbs metric is best demonstrated on complex distributions. Neal’s funnel distribution [@Neal2003] $$\pi \! \left( \mathbf{x}, v \right) = \prod_{i = 1}^{n} \mathcal{N} \! \left( x_{i} | 0, e^{-v} \right) \cdot \mathcal{N} \! \left( v | 0, 9 \right),$$ emulates many pathological features of popular distributions, such as those arising in hierarchical [@Gelman2004] and latent [@Murray2010] models. Note that, by construction, the marginal distribution of $v$ is simply $v \sim \mathcal{N} \! \left(0, 9 \right)$[^1], independent of $n$, admitting $v$ and its marginal distribution as a simple diagnostic of bias in any sampling procedure. In each experiment a Markov chain is randomly initialized, $q_{i} \sim U \! \left(-1, 1\right)$, and then taken through a series of warm-up iterations before sampling begins. Where noted, the integrator step-size, $\epsilon$, is adapted with dual-averaging to yield a target Metropolis acceptance rate [@Hoffman2011]. The number of integration steps is set by hand to approximate the half-period of the oscillating trajectories (Figures \[fig:noadaptFlatFunnelDiptych\], \[fig:softAbsFunnelDiptych\]). Autocorrelations, $\rho_{i}$, of $v$ are computed with an initial monotone sequence estimator [@Geyer2011] and the effective sample size (ESS) is defined as $\mathrm{ESS} = I \left( 1 + 2 \sum_{i = 1}^{I} \rho_{i} \right)^{-1}$, where $I$ is the total number of generated samples. The above procedure is applied to EMHMC with step-size adaptation, EMHMC without step-size adaptation, and RMHMC with the SoftAbs metric. EMHMC with Adaptation --------------------- Despite its simplicity, the funnel demonstrates many of the limitations of EMHMC. When adaptively tuned to the nominal acceptance rate $r = 0.65$ [@Neal2011], the integrator step-size exceeds the spatial scale of the narrow neck; even though the probability mass of the mouth and neck of the funnel is comparable, the resulting trajectories overlook the neck entirely and bias resulting expectations without any obvious indication (Figure \[fig:adaptFlatFunnelTrace\]).
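For reference in the comparisons that follow, the funnel's potential, gradient, and Hessian (the inputs a SoftAbs implementation needs) can be coded directly from the density above; the sketch below is written in Python/NumPy for brevity, reads the second argument of each normal as a variance, consistent with the $\mathcal{N}(0, 9)$ marginal quoted above, and drops additive constants.

```python
import numpy as np

def funnel_potential(q, n):
    """V(q) = -log pi(x, v) up to additive constants for the funnel above,
    with q = (x_1, ..., x_n, v), x_i | v ~ N(0, e^{-v}) and v ~ N(0, 9)."""
    x, v = q[:n], q[n]
    return 0.5 * np.exp(v) * np.sum(x**2) - 0.5 * n * v + v**2 / 18.0

def funnel_gradient(q, n):
    x, v = q[:n], q[n]
    g = np.empty(n + 1)
    g[:n] = x * np.exp(v)
    g[n] = 0.5 * np.exp(v) * np.sum(x**2) - 0.5 * n + v / 9.0
    return g

def funnel_hessian(q, n):
    x, v = q[:n], q[n]
    H = np.zeros((n + 1, n + 1))
    H[:n, :n] = np.exp(v) * np.eye(n)
    H[:n, n] = H[n, :n] = x * np.exp(v)
    H[n, n] = 0.5 * np.exp(v) * np.sum(x**2) + 1.0 / 9.0
    return H

# finite-difference check of the gradient at a random point
n, eps = 5, 1.0e-6
q = np.random.default_rng(1).normal(size=n + 1)
fd = np.array([(funnel_potential(q + eps * e, n) - funnel_potential(q - eps * e, n)) / (2 * eps)
               for e in np.eye(n + 1)])
print(np.max(np.abs(fd - funnel_gradient(q, n))))  # small: the analytic gradient matches
```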
EMHMC without Adaptation ------------------------ Because we know the truth in this case, we can abandon adaptive tuning and instead tune the step-size by hand; a smaller step-size ensures that the trajectories explore most of the funnel’s probability mass and that the marginal distribution $p \! \left( v \right)$ is correct within Monte Carlo error. Unfortunately, the funnel also exhibits the limitations of a position-independent kinetic energy. The variation of the potential within the typical set is huge, and the meager variation of the kinetic energy dramatically restricts the distance of each transition (Figure \[fig:noadaptFlatFunnelDiptych\]). The EMHMC transitions struggle to cross between the mouth and neck of the funnel, and the Markov chain becomes little more than a random walk across the distribution (Figure \[fig:noadaptFlatFunnelTrace\]). RMHMC with the SoftAbs Metric ----------------------------- On the other hand, the SoftAbs metric, here with $\alpha = 10^{6}$, allows RMHMC to explore the entire distribution within a single trajectory (Figure \[fig:softAbsFunnelDiptych\]). Because the metric accounts for local curvature, the step-size can be adaptively tuned[^2] without introducing any bias. The huge autocorrelations of EMHMC vanish (Figure \[fig:softAbsFunnelTrace\]) and, despite the increased computation required for each transition, RMHMC yields a more efficient generation of effective samples (Table \[tab:benchmark\]). Nominally, the $O \! \left( N^{3} \right)$ computational burden of RMHMC is significantly worse than the $O \! \left( N \right)$ burden of EMHMC. The pathological behavior of distributions like the funnel, however, scales much faster, often exponentially, and the benefit of RMHMC with the SoftAbs metric only increases with dimension. Moreover, this concern ignores the burden of computing the potential itself which, as in the case of Bayesian posteriors with many data, can overwhelm the $O \! \left( N^{3} \right)$ burden entirely. ![Adaptive tuning of EMHMC results in an excessively large step-size, preventing trajectories from exploring the neck of the funnel and biasing the stationary distribution. Consequently, the samples of $v$ are inconsistent with the marginal distribution $\mathcal{N} \! \left( 0, 9 \right)$. \[fig:adaptFlatFunnelTrace\]](adaptFlatFunnelTrace.eps){width="7in"} ![EMHMC trajectories are limited to $\Delta V \sim (n + 1) / 2$ and consequently explore only a small neighborhood of the funnel. Note that the trajectories are oscillatory; half of the largest period of oscillation, here $T / 2 \approx 8$, defines the optimal integration time for maximizing distance, and minimizing autocorrelation, between the samples. \[fig:noadaptFlatFunnelDiptych\]](noadaptFlatFunnelDiptych.eps){width="7in"} ![Although not optimal, a smaller step-size tuned by hand allows the trajectories to penetrate the neck of the funnel, and the resulting samples of $v$ are consistent with the true marginal, $\mathcal{N} \! \left(0, 9 \right)$. In real applications where the truth is not known a priori, this sort of hand-tuning is not viable. \[fig:noadaptFlatFunnelTrace\]](noadaptFlatFunnelTrace.eps){width="7in"} ![The log determinant in the Hamiltonian allows RMHMC trajectories to explore without the restriction to ${\Delta V \sim (n +1) / 2}$, and the dynamic decorrelation/scaling ensures that a single integrator step-size is efficient across the entire distribution. 
As in EMHMC, the trajectories oscillate and the half-period of the longest oscillation, $T / 2 \approx 25$, defines the optimal integration time. \[fig:softAbsFunnelDiptych\]](softAbsFunnelDiptych.eps){width="7in"} ![The far-reaching trajectories of RMHMC dramatically reduce the autocorrelation of the samples which, despite the step-size adaptation, are consistent with the desired marginal $\mathcal{N} \!\left(0, 9\right)$. Note that significantly fewer lags are shown here than above. \[fig:softAbsFunnelTrace\]](softAbsFunnelTrace.eps){width="7in"} Conclusions and Future Work =========================== By smoothly regularizing the eigendecomposition of the Hessian, the SoftAbs metric admits a general implementation of RMHMC robust against the many pathologies to which EMHMC can be vulnerable. Despite its apparently steep computational burden, the SoftAbs metric allows for practical inference on complex models never before deemed feasible. The potential to reduce that burden might be found by looking deeper into Riemannian geometry. Use of frame bundles [@Spivak2005b] to transport the Hessian eigenbasis along a trajectory, for example, could be more efficient than recomputing the eigendecomposition at each point. Moreover, further insight into the metric itself may be found in understanding the geometric consequences of the SoftAbs mapping. What effect, for example, does the regularization of the SoftAbs mapping have on the geometric curvature of the manifold? Perhaps this parameterized regularization can be related to Ricci flow [@Chow2004], or other smoothing diffeomorphisms. Acknowledgements ================ I warmly thank Bob Carpenter, Joe Formaggio, Mark Girolami, and Chris Jones for helpful comments and suggestions, as well as Andrew Gelman and the Stan team for their generous hospitality. Appendix A: Approximations to the SoftAbs Metric {#sec:approximations .unnumbered} ================================================ Approximations -------------- The SoftAbs metric requires an eigendecomposition and all third-order derivatives of the potential, both of which can be sufficiently burdensome as to render the metric unfeasible for very large problems. Transforming various approximations to the Hessian with the SoftAbs map, however, yields a series of approximations that offer less computationally intensive alternatives that may be useful for certain distributions. ### Diagonal Ignoring the local decorrelation of the potential and instead focusing on the rescaling of the parameters, we can simply take $$\mathbf{H} \approx \mathrm{Diag} \left( H_{ii} \right),$$ with $${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } \approx \mathrm{Diag} \left( H_{ii} \coth \alpha H_{ii} \right).$$ The computational burden of the resulting evolution scales as $O \! \left(N^{2} \right)$, much better than the $O \! \left(N^{3} \right)$ of the full SoftAbs metric, and, because the Hessian of the funnel is almost diagonal, the approximation loses little information. Consequently, the diagonal approximation performs exceptionally well in this case (Table \[tab:approxBenchmark\], Figures \[fig:diagSoftAbsFunnelDiptych\], \[fig:diagSoftAbsFunnelTrace\]). Note that, on more correlated distributions, the approximation will be more limiting and the performance will suffer.
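A short sketch of this diagonal variant, in the same Python/NumPy style as the earlier example (the small-argument handling is again an illustrative choice):

```python
import numpy as np

def diag_softabs(hess_diag, alpha=1.0e6):
    """Diagonal SoftAbs: Diag(H_ii * coth(alpha * H_ii)), with the limit 1/alpha as H_ii -> 0.
    Only the diagonal of the Hessian is required, so no eigendecomposition is needed."""
    x = alpha * hess_diag
    small = np.abs(x) < 1.0e-8
    return np.where(small, 1.0 / alpha, hess_diag / np.tanh(np.where(small, 1.0, x)))

h_diag = np.array([2.0, -0.5, 1.0e-12])
lam = diag_softabs(h_diag)
print(lam)                  # positive, regularized diagonal metric
print(np.sum(np.log(lam)))  # log-determinant in O(N)
```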
| **Algorithm** | **Warm-Up Iterations** | **Samples** | **$\mathbf{\epsilon}$** | **Accept Rate** | **CPU Time (s)** | **ESS** | **ESS / Time ($\mathbf{s^{-1}}$)** |
|---|---|---|---|---|---|---|---|
| EMHMC | $10^{3}$ | $10^{5}$ | 0.001 | 0.999 | 1627 | 70.3 | 0.0432 |
| RMHMC (SoftAbs) | $10^{3}$ | $10^{3}$ | 0.21 | 0.946 | 6282 | 856 | 0.136 |
| RMHMC (Diag SoftAbs) | $10^{3}$ | $10^{3}$ | 0.49 | 0.805 | 7.694 | 633 | 82.3 |

![Although clearly more ragged than the trajectories with the full SoftAbs metric, the trajectories with the diagonal SoftAbs metric explore almost the same expanse as the exact metric. \[fig:diagSoftAbsFunnelDiptych\]](diagSoftAbsFunnelDiptych.eps){width="7in"} ![The samples drawn with the diagonal SoftAbs metric are consistent with the true marginal for $v$, albeit at the expense of slightly higher autocorrelations. \[fig:diagSoftAbsFunnelTrace\]](diagSoftAbsFunnelTrace.eps){width="7in"} ![When $\alpha$ has to be kept small to ensure the stability of the numerical evolution, the benefit of the dynamic metric is substantially reduced. The oscillating trajectories become much more complicated and, as in the EMHMC case, the variation of the potential over each trajectory is limited. \[fig:diagOuterSoftAbsFunnelDiptych\]](diagOuterSoftAbsFunnelDiptych.eps){width="7in"} ![The limited variation in potential induces random walk behavior in the diagonal outer-product approximation, while the small step-sizes required for stable evolution dramatically reduce the efficiency of each transition. Moreover, dual-averaging adaptation of the step-size introduces the same bias seen in adaptive tuning of EMHMC. \[fig:diagOuterSoftAbsFunnelTrace\]](diagOuterSoftAbsFunnelTrace.eps){width="7in"} ### Outer-Product A common approximation to the Hessian is the outer-product of the gradient [@Bishop2007], $$\mathbf{H} \approx \mathbf{g} \mathbf{g}^{T},$$ where $\mathbf{g} = \partial V$. Propagating this approximation through the SoftAbs map gives $${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } \approx \frac{ \left( \mathbf{g} \cdot \mathbf{g} \right) }{ \sinh \left( \alpha \, \mathbf{g} \cdot \mathbf{g} \right) } \left( \mathbb{I} + \frac{ \cosh \left( \alpha \, \mathbf{g} \cdot \mathbf{g} \right) - 1 }{ \left( \mathbf{g} \cdot \mathbf{g} \right) } \mathbf{g} \mathbf{g}^{T} \right).$$ Note that this is equivalent to the “background-score” metric proposed in [@Betan2011] with functions scaling the rank-one update as well as the entire metric. Evolution with this metric requires no derivatives beyond the Hessian, and with a careful implementation the computation scales as $O \! \left( N^{2} \right)$[^3]. The global coefficient, however, renders the metric almost singular in all but the simplest problems. Along typical trajectories the norm of the gradient is far from zero, and the $\sinh$ becomes highly nonlinear. Consequently, the numerical evolution becomes unstable unless $\alpha \ll 1$ or $\epsilon \ll 1$, at which point the evolution becomes impractical. For example, with $\alpha = 1$ the step-size must be smaller than floating-point precision before the Metropolis accept rate rises above 0.5.
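A direct transcription of this map makes the near-singularity easy to see numerically: in the sketch below (Python/NumPy; the example gradient and all names are illustrative), the eigenvalue along $\mathbf{g}$ stays of order $\mathbf{g} \cdot \mathbf{g}$ while the orthogonal eigenvalues are suppressed by $1/\sinh \left( \alpha \, \mathbf{g} \cdot \mathbf{g} \right)$.

```python
import numpy as np

def outer_product_softabs(grad, alpha):
    """SoftAbs map applied to the rank-one approximation H ~ g g^T,
    using the closed form quoted in the text."""
    g = np.asarray(grad, dtype=float)
    g2 = g @ g
    prefactor = g2 / np.sinh(alpha * g2)
    correction = (np.cosh(alpha * g2) - 1.0) / g2
    return prefactor * (np.eye(g.size) + correction * np.outer(g, g))

g = np.array([2.0, -1.0, 0.5])
metric = outer_product_softabs(g, alpha=1.0)
# One eigenvalue ~ g.g; the remaining ones ~ g.g / sinh(alpha g.g),
# which collapse toward zero as the gradient norm grows.
print(np.linalg.eigvalsh(metric))
```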
### Diagonal Outer-Product The severe non-linearities of the outer-product approximation can be avoided by considering only the diagonal elements of the outer-product, $$\mathbf{H} \approx \mathrm{Diag} \left( g_{i}^{2} \right),$$ which gives the metric $${\ensuremath { \raisebox{1pt}{$\wr$} \mathbf{H} \raisebox{1pt}{$\wr$} } } \approx \mathrm{Diag} \left( g_{i}^{2} \coth \alpha g_{i}^{2} \right).$$ As above, $g_{i} = \partial_{i} V$. Although this approximation avoids the highly nonlinear coefficient above, the typically large values of the $g_{i}^{2}$ do induce large gradients in the Hamiltonian evolution. Taking $\alpha = 1$ yields stable trajectories, but the strong regularization reduces the log determinant’s ability to increase the variation in the potential (Figure \[fig:diagOuterSoftAbsFunnelDiptych\]) and limits the adaptability of the metric. The limited adaptability makes the Markov chain vulnerable to bias when adapting the integrator step-size and, consequently, the samples exhibit the same random walk behavior and bias towards large $v$ seen in the EMHMC case (Figure \[fig:diagOuterSoftAbsFunnelTrace\]). Appendix B: A Symplectic Integrator For RMHMC {#sec:integrator .unnumbered} ============================================= One of the challenges of RMHMC is that the non-separable Hamiltonian requires a more sophisticated, and costly, symplectic integrator than EMHMC [@Leimkuhler2004; @Hairer2006]. In order to construct such an integrator, first rewrite the Hamiltonian as $$H = \tau + \phi,$$ where $$\tau = \frac{1}{2} \mathbf{p}^{T} \cdot \mathbf{\Sigma}^{-1} \! \left( \mathbf{q} \right) \cdot \mathbf{p}$$ and $$\phi = \frac{1}{2} \log | \mathbf{\Sigma} \! \left( \mathbf{q} \right) | + V.$$ Both being proper scalar functions, $\tau$ and $\phi$ motivate a natural splitting of the Hamiltonian which yields the integrator $$\label{splitting} \Phi_{t} = \hat{\phi}_{\frac{t}{2}} \circ \hat{\tau}_{\frac{t}{2}} \circ \hat{T}_{t} \circ \hat{\tau}_{\frac{t}{2}} \circ \hat{\phi}_{\frac{t}{2}},$$ with $$\begin{aligned} \hat{\phi} &= \frac{ \partial \phi}{ \partial q^{i} } \frac{ \partial }{ \partial p_{i}} \\ \hat{\tau} &= \frac{ \partial \tau}{ \partial q^{i} } \frac{ \partial }{ \partial p_{i} } \\ \hat{T} &= \frac{ \partial \tau}{ \partial p^{i} } \frac{ \partial }{ \partial q_{i} }.\end{aligned}$$ Of the individual operators, only $\hat{\phi}$ is separable and analytically calculable. The non-separable operators, $\hat{\tau}_{\frac{t}{2}} \circ \hat{T}_{t} \circ \hat{\tau}_{\frac{t}{2}}$, require the implicit generalized leapfrog algorithm (Algorithm \[algo:evolution\]). Efficient gradient computations complete the implementation (Algorithm \[algo:gradients\]).
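To show how these pieces are composed in practice, here is a compact sketch of a single generalized-leapfrog step (Python/NumPy). The explicit fixed-point loops, the convergence tolerance, and the function names are my additions; the individual updates mirror the listing that follows.

```python
import numpy as np

def generalized_leapfrog_step(p, q, eps, dtau_dp, dtau_dq, dphi_dq,
                              tol=1e-10, max_iter=100):
    """One step of the implicit generalized leapfrog for H = tau + phi.

    dtau_dp(p, q), dtau_dq(p, q), and dphi_dq(p, q) must return arrays of the
    same shape as p and q.  The two fixed-point loops solve the implicit
    half-step in the momenta and the implicit full step in the positions."""
    p = p - 0.5 * eps * dphi_dq(p, q)

    # Implicit half-step in p:  p = rho - (eps/2) dtau/dq(p, q).
    rho = p.copy()
    for _ in range(max_iter):
        p_new = rho - 0.5 * eps * dtau_dq(p, q)
        delta = np.max(np.abs(p - p_new))
        p = p_new
        if delta < tol:
            break

    # Implicit step in q:  q = sigma + (eps/2)[dtau/dp(p, sigma) + dtau/dp(p, q)].
    sigma = q.copy()
    for _ in range(max_iter):
        q_new = sigma + 0.5 * eps * (dtau_dp(p, sigma) + dtau_dp(p, q))
        delta = np.max(np.abs(q - q_new))
        q = q_new
        if delta < tol:
            break

    p = p - 0.5 * eps * dtau_dq(p, q)
    p = p - 0.5 * eps * dphi_dq(p, q)
    return p, q
```

When $\mathbf{\Sigma}$ is constant the metric terms drop out, each fixed-point loop converges immediately, and the step reduces to the ordinary leapfrog of EMHMC.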
**Algorithm \[algo:evolution\]** (one implicit generalized leapfrog step):

$\mathbf{p} \gets \mathbf{p} - \frac{\epsilon}{2} \partial \phi / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$
$\mbox{\boldmath{$\rho$}} \gets \mathbf{p}$
$\mathbf{p}' \gets \mbox{\boldmath{$\rho$}} - \frac{\epsilon}{2} \cdot \partial \tau / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$
$\Delta p = \max_{i} \left\{ \left| p_{i} - p_{i}' \right| \right\}$
$\mathbf{p} \gets \mathbf{p}'$
$\mbox{\boldmath{$\sigma$}} \gets \mathbf{q}$
$\mathbf{q}' \gets \mbox{\boldmath{$\sigma$}} + \frac{\epsilon}{2} \cdot \partial \tau / \partial \mathbf{p} \left( \mathbf{p}, \mbox{\boldmath{$\sigma$}} \right) + \frac{\epsilon}{2} \cdot \partial \tau / \partial \mathbf{p} \left( \mathbf{p}, \mathbf{q} \right)$
$\Delta q = \max_{i} \left\{ \left| q_{i} - q_{i}' \right| \right\}$
$\mathbf{q} \gets \mathbf{q}'$
$\mathbf{p} \gets \mathbf{p} - \frac{\epsilon}{2} \cdot \partial \tau / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$
$\mathbf{p} \gets \mathbf{p} - \frac{\epsilon}{2} \partial \phi / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$

**Algorithm \[algo:gradients\]** (gradient computations; inputs: $\alpha$, $\mathbf{H}$, $\partial \mathbf{H}$, $\mbox{\boldmath{$\lambda$}}, \mathbf{Q}$):

**function** $\partial \tau / \partial \mathbf{p} \left( \mathbf{p}, \mathbf{q} \right)$
$\quad \mathbf{return} \;\; \mathbf{Q} \cdot {\ensuremath { \raisebox{1pt}{$\wr$} \mbox{\boldmath{$\lambda$}} \raisebox{1pt}{$\wr$} } }^{-1} \cdot \mathbf{Q}^{T} \cdot \mathbf{p}$

**function** $\partial \tau / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$
$\quad J_{ij} \gets \dfrac{ \lambda_{i} \coth \alpha \lambda_{i} - \lambda_{j} \coth \alpha \lambda_{j} }{ \lambda_{i} - \lambda_{j} } \left( 1 - \delta_{ij} \right) + \dfrac{ \partial }{ \partial \lambda_{i} } \lambda_{i} \coth \alpha \lambda_{i} \, \delta_{ij}$
$\quad \mathbf{D} \gets \mathrm{Diag} \left( \left( \mathbf{Q}^{T} \cdot \mathbf{p} \right)_{i} \right)$
$\quad \mathbf{M} \gets \mathbf{Q} \cdot \mathbf{D} \cdot \mathbf{J} \cdot \mathbf{D} \cdot \mathbf{Q}^{T}$
$\quad \mathbf{for} \; n = 1 \; \mathrm{to} \; N \; \mathbf{do}$
$\quad\quad \delta_{n} \gets \frac{1}{2} \mathrm{Tr} \left[ - \mathbf{M} \cdot \partial_{n} \mathbf{H} \right]$
$\quad \mathbf{end \; for}$
$\quad \mathbf{return} \;\; \mbox{\boldmath{$\delta$}}$

**function** $\partial \phi / \partial \mathbf{q} \left( \mathbf{p}, \mathbf{q} \right)$
$\quad J_{ij} \gets \dfrac{ \lambda_{i} \coth \alpha \lambda_{i} - \lambda_{j} \coth \alpha \lambda_{j} }{ \lambda_{i} - \lambda_{j} } \left( 1 - \delta_{ij} \right) + \dfrac{ \partial }{ \partial \lambda_{i} } \lambda_{i} \coth \alpha \lambda_{i} \, \delta_{ij}$
$\quad \mathbf{R} \gets \mathrm{Diag} \left( \dfrac{1}{\lambda_{i} \coth \alpha \lambda_{i}} \right)$
$\quad \mathbf{M} \gets \mathbf{Q} \left( \mathbf{R} \circ \mathbf{J} \right) \mathbf{Q}^{T}$
$\quad \mathbf{for} \; n = 1 \; \mathrm{to} \; N \; \mathbf{do}$
$\quad\quad \delta_{n} \gets \frac{1}{2} \mathrm{Tr} \left[ \mathbf{M} \cdot \partial_{n} \mathbf{H} \right] + \partial_{n} V$
$\quad \mathbf{end \; for}$
$\quad \mathbf{return} \;\; \mbox{\boldmath{$\delta$}}$

[^1]: Note the use of the convention $\mathcal{N} \! \left( \mu, \sigma^{2} \right)$. [^2]: The increased information encoded in the metric should admit a larger acceptance rate, $r$, for RMHMC than the EMHMC case of $r = 0.65$. Motivated by some simple experiments, here the target rate for RMHMC is set to $r = 0.95$.
[^3]: In fact, with automatic differentiation techniques that can compute Hessian-vector products in linear time [@Bishop2007], the computation can be reduced further to $O \! \left( N \right)$.
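As a rough illustration of this point, a Hessian-vector product can be formed from two gradient evaluations in $O \! \left( N \right)$ time; the finite-difference version below (Python/NumPy) is only a stand-in for the exact automatic-differentiation product, and the step size, test potential, and names are illustrative choices.

```python
import numpy as np

def hessian_vector_product(grad_V, q, v, eps=1e-5):
    """Approximate H(q) @ v from two gradient calls, without forming H.

    grad_V(q) returns the gradient of the potential at q.  Automatic
    differentiation would give the same product exactly, still in O(N) time."""
    return (grad_V(q + eps * v) - grad_V(q - eps * v)) / (2.0 * eps)

# Example with a quadratic potential V(q) = 0.5 q^T A q, whose Hessian is A.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
grad_V = lambda q: A @ q
q = np.array([0.5, -1.0])
v = np.array([1.0, 2.0])
print(hessian_vector_product(grad_V, q, v))   # ~ A @ v = [2.6, 2.3]
```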
--- abstract: 'We report a novel singularity in the hysteresis of spin glasses, the reversal-field memory effect, which creates a non-analyticity in the magnetization curves at a particular point related to the history of the sample. The origin of the effect is due to the existence of a macroscopic number of “symmetric clusters” of spins associated with a local spin-reversal symmetry of the Hamiltonian. We use First Order Reversal Curve (FORC) diagrams to characterize the effect and compare to experimental results on thin magnetic films. We contrast our results on spin glasses to random magnets and show that the FORC technique is an effective “magnetic fingerprinting” tool.' author: - 'H. G. Katzgraber$^1$, F. Pázmándi$^1$, C. R. Pike$^2$, Kai Liu$^1$, R. T. Scalettar$^1$, K. L. Verosub$^2$, G. T. Zimányi$^1$' bibliography: - 'refs.bib' title: 'Reversal-Field Memory in the Hysteresis of Spin Glasses' --- The non-equilibrium behavior of random magnets and spin glasses is an intensely studied field, posing formidable theoretical and experimental challenges directly for magnetic systems, and also serving as paradigms for other fields. Concepts developed for random magnets such as glassy phases, droplet and replica theories, as well as aging have subsequently been applied to fields as diverse as structural biology, geology, and even financial analysis. The slow and complex time dependence of various correlators is a hallmark of such systems. Several aspects of this non-equilibrium dynamics have already been described in great detail for spin glasses [@young:98]. Hysteresis is one of the most central of these phenomena [@bertotti:98], yet while many basic features are qualitatively understood [@sethna:93; @lyuksyutov:99; @zhu:90], theoretical descriptions of hysteresis even in the simplest spin-glass models are in their early stages [@pazmandi:99]. Hysteresis in magnetic systems has a host of practical applications including magnetic recording and sensors, but a less than complete understanding at a fundamental level [@bertotti:98]. In this paper, we present a detailed study of several new aspects of hysteresis in two of the most commonly studied models of disordered magnets. One is the Random Field Ising Model (RFIM), which has been shown to describe successfully many of the relevant aspects of hysteresis [@sethna:93]. The second is the Edwards-Anderson Ising Spin Glass (EASG), which, unlike the RFIM, contains frustration, a phenomenon known to introduce a whole new level of complexity in disordered systems. Accordingly, we show that the hysteretic properties of the EASG can be significantly different from those of the RFIM. Our first important observation is of a novel memory effect in the hysteresis of the EASG that emerges when the magnetic field is first decreased from its saturation value and then increased again from some reversal field $H_R$. We find that the EASG exhibits a singularity at the negative of the reversal field, $-H_R$, in the form of a kink in the magnetization of the reversal curve. By calculating a suitable overlap function, we demonstrate that the microscopic origin of the effect is due to a macroscopic number of “symmetric clusters”. In these clusters the central spins flip [*after*]{} all spins on the cluster surface have flipped. Therefore, the central spins experience an effective local field which is symmetric with respect to the change of direction of the external field. 
This reversal-field memory effect can be even more precisely characterized with the recently introduced First Order Reversal Curve (FORC) method [@pike:99]. As we shall demonstrate, the FORC technique provides a uniquely sensitive characterization of hysteretic systems and specifically of the difference between the hysteretic behavior of the RFIM and the EASG. The sharp kink of the minor loops of the EASG is captured as a profound horizontal ridge in FORC diagrams, indicative of a broad range of effective coercivities in the system, but a rather narrow range of biases. In contrast, despite exhibiting a major hysteresis loop rather similar to that of the EASG, the RFIM shows a strikingly different FORC diagram, characterized by a well developed vertical feature reflecting a rather narrow range of effective coercivities and a broad range of biases. Finally, we determine experimentally the reversal curves and FORC diagram of a magnetic thin film. Experimentally, the reversal curves only show smoothed kinks around $-H_R$. However, the FORC diagram of the data reveals a profound horizontal ridge, signaling the presence of a reversal field memory in these films. This experimental result further highlights the usefulness of the FORC technique as a powerful method which captures the detailed behavior of hysteretic systems. The Hamiltonian of the EASG is given by [@binder:86] $${\cal H}= \sum_{\langle i,j \rangle} J_{ij}S_iS_j - H \sum_i S_i \,. \label{eq:hamilton}$$ Here $S_i = \pm 1$ are Ising spins on a square lattice of size $N = L \times L$ in two dimensions with periodic boundary conditions. The exchange couplings $J_{ij}$ are random nearest-neighbor interactions chosen according to a Gaussian distribution with zero mean and standard deviation unity, and $H$ is the external magnetic field. We simulate the zero temperature dynamics of the EASG by changing the external field $H$ in small steps, first downward from positive saturation and then upward from a reversal field $H_R$. After each field step, the effective local field $h_i$ of each spin $S_i$ is calculated: $$h_i=\sum_{j} J_{ij}S_j - H \; . \label{eq:local_field}$$ A spin is unstable if $h_i S_i < 0$. We then flip a randomly chosen unstable spin and update the local fields at neighboring sites and repeat this procedure until all spins are stable. Figure \[fig:kink\] (solid line) shows the average of $10^3$ reversal curves, all with the same $H_R$, but different disorder realizations. The area around $-H_R$ is enlarged in the inset and shows a “kink.” The presence of any such sharp feature in a disordered system, especially of finite size and after disorder averaging, is quite remarkable. The change of slope at the kink can be characterized by measuring the slope of the magnetization curves to the left and right of $-H_R$, and comparing the difference $\Delta (dM/dH)$ with the average $(dM/dH)_{ave}$ (see Fig. \[fig:delta\]). The slope changes abruptly by as much as $30 \%$ as the field $H=-H_R$ is passed, creating the kink. With our parameters the kink is present in the range of reversal-field values $-4.0 < H_{R} < -1.5$. In an effort to understand the microscopic origin of reversal-field memory, we first describe this effect within a phenomenological approach to hysteretic systems, the Preisach model [@preisach:35]. In the Preisach model a magnetic system is described as a collection of independent two-state ($\pm 1$) switching units, or “hysterons”.
Unlike Ising spins, which always align with their local field, the hysteron’s state changes from $-1$ to $+1$ at a field $H_b+H_c$, different from the field $H_b-H_c$, required to switch the hysteron from $+1$ to $-1$. Different systems are distinguished by their different distributions $\rho(H_b, H_c)$ of hysterons of a given bias $H_b$ and coercivity $H_c$. Here $\rho(H_b, H_c)$ is the so-called “Preisach function”. An intuitive picture can be obtained by first considering symmetric hysterons, having no bias, i.e. $H_b = 0$. Starting from a fully “up” polarized state and decreasing the field to a negative $H_R$ switches down all symmetric hysterons with $H_c<|H_R|$. Reversing the direction of the sweep and increasing the field from $H_R$ to $-H_R$ along a reversal curve switches back every switched hysteron. Thus at $H=-H_R$ saturation is reached, creating a kink in the magnetization. Symmetric hysterons therefore give rise to reversal-field memory. However, this memory effect will be detectable only if the number of symmetric hysterons is macroscopic. This happens if $\rho(H_b, H_c)$ has a Dirac delta singularity at $H_b=0$ and $H_c=|H_R|$. As the kink is observed in a range of $H_R$ values, the singularities of the Preisach function form a horizontal ridge along the $H_b=0$ axis for the corresponding range of $H_c=|H_R|$ values. Next we move beyond phenomenological approaches, but keep the insight gained from the Preisach model. We carry over the concept of symmetric hysterons as “symmetric clusters” of the strongly interacting spins of the EASG. A spin $S_i$ belongs to a symmetric cluster if $S_i$ flips down only after all its neighbors have flipped down, and during the reverse sweep $S_i$ flips up again only after all its neighbors have. Therefore, this central spin $S_i$ experiences an effective local field which is symmetric with respect to the change of direction of the external field, in analogy to a symmetric hysteron. Spins possessing [*local spin-reversal symmetry*]{} are candidates for symmetric hysterons. By local spin-reversal symmetry we mean that the local field $h_i$, felt by $S_i$ \[Eq. (\[eq:local\_field\])\], is perfectly reversed if the external field $H$ is reversed and all spins coupled to $S_i$ are reversed as well. Every spin of the EASG has local spin-reversal symmetry. However, in a glassy system the spin configurations depend on the history of the sample. Therefore, at $-H_R$ the neighbors of most spins [*do not necessarily point in a direction opposite of their direction at $H_R$*]{}, and thus most EASG spins do not belong to symmetric clusters. Hence the model Hamiltonian possessing a local spin-reversal symmetry is a necessary but not sufficient condition for having symmetric clusters. To see a macroscopic kink it has to be shown that the density of symmetric clusters is finite. A lower bound on their density is obtained by considering the simplest symmetric cluster: two strongly coupled spins, weakly coupled to their six neighbors in a 2D lattice. The switching field of each spin is determined by these couplings. The outer spins will switch before the inner spins if their couplings are restricted by appropriate inequalities, confining the couplings to finite intervals. The density of symmetric clusters is obtained by integrating the product of the distributions of the couplings over these finite intervals. With unbounded coupling distributions, e.g. 
Gaussian, the product of the distributions is finite over the finite integration intervals, thus the resulting density is finite as well. As further evidence for the macroscopic number of symmetric clusters, we define an overlap function $q$ between the spins which flip at $H_R$ and the spins which flip at $H > H_R$: $$\begin{aligned} q(H) &=& \frac{1}{4}\sum_i[S_i(H_R + \delta) - S_i(H_R)] \times \nonumber \\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;[S_i(H + \delta) - S_i(H)] \; . \label{overlap}\end{aligned}$$ Here $\delta$ is the field step. In Fig. \[fig:overlap\] we show the overlap $q(H)$ for $H_R = -2.28$. The large peak at $H = +2.28$ indicates that a macroscopic number of spins which have flipped at $H_R$ also flip at $-H_R$. This in turn means that there are a macroscopic number of symmetric clusters. The insets show a much smaller number of symmetric clusters at $H_R = -0.40$ and $H_R = -5.60$, values outside the peak of Fig. \[fig:delta\]. To characterize reversal-field memory further, we adapt a new tool developed for analyzing experimental data of hysteretic systems [@pike:99]. A family of First Order Reversal Curves (FORCs) with different $H_R$ is generated, with $M(H, H_R)$ denoting the resulting magnetization as a function of the applied and reversal fields. Computing the mixed second order derivative $\rho(H, H_R)= -(1/2) [{\partial}^2 M/{\partial} H {\partial} H_R]$ and changing variables to $H_c=(H-H_R)/2$ and $H_b=(H+H_R)/2$, the local coercivity and bias, respectively, yields the “FORC distribution” $\rho(H_b, H_c)$. For phenomenological Preisach models, the FORC distribution is equal to the Preisach function. However, FORC distributions are more general, because they are extracted from numerical or experimental data, and thus are model independent. Figure \[fig:forc-ea\] shows the FORC diagram of the EASG. The ridge along the $H_c$ axis in the range $1.5 < H_{c} < 4.0$ corresponds to the peak of Fig. \[fig:delta\], representing the kinks of Fig. \[fig:kink\]. Thus FORC diagrams capture the reversal-field memory effect in the form of a ridge along the $H_c$ axis. To demonstrate that local spin reversal symmetry of the Hamiltonian is necessary for reversal field memory to be present, we study the Random Field Ising Model (RFIM) [@binder:86; @sethna:93]. In this model $J_{ij} = 1$ and the disorder is introduced through random local fields chosen according to a Gaussian distribution with zero mean and standard deviation $\Delta$. Direct inspection reveals that the RFIM [*does not possess*]{} a local spin-reversal symmetry. Therefore, the RFIM cannot have symmetric clusters and should not exhibit a reversal-field memory. This is confirmed by our simulations: a typical RFIM reversal curve shown in the right panel of Fig. \[fig:forc-rfim\] has no kink at $-H_R$. Not only do the two models differ in the local spin symmetry, but the EASG also possesses frustration, which might give rise to hysteretic phenomena that are qualitatively rather different than in the RFIM. To explore this possibility, we show in the left panel of Fig. \[fig:forc-rfim\] the FORC diagram for the RFIM for a disorder $\Delta = 4.0$. While the major hysteresis loop of the RFIM is very similar to that of the EASG, the FORC distribution is qualitatively different: it exhibits a predominantly vertical feature. The distribution of random fields of the RFIM introduces a large range of biases for the spins, with little variation in the local coercivity.
This expectation is confirmed by simulations of the RFIM with various disorder distributions, which show that the vertical cross section of the vertical feature mirrors the shape of the random field distribution. Finally, we demonstrate the existence of reversal-field memory in experimental systems. We study thin films of well-dispersed single-domain magnetic Co-$\gamma$-Fe$_2$O$_3$ particles provided by Kodak Inc. We determine both the individual reversal curves and the FORC diagram of the system. While the reversal-field memory kinks at $-H_R$ are somewhat smoothed, the FORC diagram clearly exhibits the horizontal ridge associated with the reversal-field memory effect (Fig. \[fig:kodak\]). This striking similarity between the experimentally determined FORC diagram of the Co-$\gamma$-Fe$_2$O$_3$ films and the numerically determined FORC diagram of the EASG indicates not only that Co-$\gamma$-Fe$_2$O$_3$ films exhibit reversal-field memory but also that frustration may be a component of the physics of the Co-$\gamma$-Fe$_2$O$_3$ films. In conclusion, we have reported a novel reversal-field memory effect in the EASG that manifests itself as a sharp kink in first order reversal curves and also as a sharp ridge on the zero bias axis of FORC diagrams. We suggest the microscopic origin of the effect is the presence of a macroscopic number of “symmetric clusters,” and prove this by computing a suitable overlap function. We further show that reversal field memory is absent from the RFIM, which does not exhibit symmetric clusters. While the hysteresis loops of the EASG and the RFIM are remarkably similar for corresponding parameters, their FORC diagrams are profoundly different, establishing that the FORC method is a powerful diagnostic tool for capturing the sensitive details of hysteretic systems such as spin glasses and random magnets. The FORC diagrams of several magnetic thin films exhibit a profound ridge indicative of the reversal-field memory effect in experimental systems. Simulations on more realistic magnetic models which include dipolar interactions have also shown the reversal-field memory effect. This suggests that the reversal-field memory is not specific to the EASG, but is a robust result for a large class of theoretical models and experimental systems. This work was supported by NSF Grant No. DMR-9985978, No. 99-09468, No. EAR-99-09468, and No. INT-9720440. We would like to thank T. Jagielinski for characterization of the Kodak sample and D. P. Belanger and B. A. Allgood for discussions.
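For readers who want to reproduce the qualitative numerics, the following sketch (Python/NumPy) implements the single-spin-flip zero-temperature sweep and the mixed-derivative FORC distribution described above. The lattice size, field grid, and all names are illustrative, and the sketch adopts the convention that a spin is stable when aligned with its local field $h_i = \sum_j J_{ij} S_j + H$, so that positive saturation is stable at large positive $H$; this sign convention is an assumption on my part.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bonds(L):
    """Gaussian nearest-neighbour couplings (zero mean, unit variance) on an L x L
    periodic lattice: Jx[i, j] couples (i, j)-(i, j+1), Jy[i, j] couples (i, j)-(i+1, j)."""
    return rng.normal(size=(L, L)), rng.normal(size=(L, L))

def local_fields(S, Jx, Jy, H):
    """Assumed local field h_i = sum_j J_ij S_j + H (see note above)."""
    h = (Jx * np.roll(S, -1, axis=1) + np.roll(Jx, 1, axis=1) * np.roll(S, 1, axis=1)
         + Jy * np.roll(S, -1, axis=0) + np.roll(Jy, 1, axis=0) * np.roll(S, 1, axis=0))
    return h + H

def relax(S, Jx, Jy, H):
    """Flip randomly chosen unstable spins (h_i S_i < 0) until all spins are stable."""
    while True:
        h = local_fields(S, Jx, Jy, H)
        unstable = np.argwhere(h * S < 0)
        if len(unstable) == 0:
            return S
        i, j = unstable[rng.integers(len(unstable))]
        S[i, j] *= -1

def reversal_curve(Jx, Jy, H_grid, H_R):
    """Sweep H down from positive saturation to H_R, then back up, recording M(H, H_R)."""
    S = np.ones_like(Jx)
    for H in H_grid[::-1]:                 # decreasing branch, stop at the reversal field
        if H < H_R:
            break
        relax(S, Jx, Jy, H)
    M = np.full(len(H_grid), np.nan)       # NaN marks H < H_R, where the FORC is undefined
    for k, H in enumerate(H_grid):         # increasing branch: the reversal curve itself
        if H >= H_R:
            relax(S, Jx, Jy, H)
            M[k] = S.mean()
    return M

def forc_distribution(M, dH, dH_R):
    """rho = -(1/2) d^2 M / dH dH_R; the change to (H_c, H_b) is a relabelling of axes."""
    dM_dH = np.gradient(M, dH, axis=1)
    return -0.5 * np.gradient(dM_dH, dH_R, axis=0)

if __name__ == "__main__":
    L, dH = 8, 0.1                          # toy sizes, for illustration only
    Jx, Jy = make_bonds(L)
    H_grid = np.arange(-5.0, 5.0 + dH, dH)
    reversal_fields = H_grid[::5]
    M = np.array([reversal_curve(Jx, Jy, H_grid, H_R) for H_R in reversal_fields])
    rho = forc_distribution(M, dH, 5 * dH)
    print(rho.shape)
```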
--- abstract: 'We study the effect of supersymmetric contributions to the effective quark transition $b\to s\gamma\gamma$, including leading order QCD effects. We apply the discussion to the decay $B_s\to\gamma\gamma$. Even though one-particle irreducible contributions could play a role, numerical cancellations make the amplitude for the two-photon emission strongly correlated to the $b\to s\gamma$ amplitude which is sharply constrained by experiment. A quite general statement follows: as long as non-standard physics effects appear only in the matching of the Wilson coefficients of the standard effective operator basis, the deviations from the standard model expectations of the decay rates induced by $b\to s\gamma\gamma$ are bound to follow closely the corresponding deviations on $b\to s\gamma$. Effects of new physics are therefore bound to be small.' author: - | S. Bertolini\ INFN, Sezione di Trieste\ Scuola Internazionale Superiore di Studi Avanzati\ via Beirut 4, I-34013 Trieste, Italy. - | J. Matias[^1]\ Dipartimento di Fisica, Università di Padova\ via F. Marzolo 8, I-35131 Padova, Italy. title: 'The $b\to s\gamma\gamma$ transition in softly broken supersymmetry' --- ---------------------------------- [ SISSA 107/97/EP    ]{} \[-1ex\] [ September 1997    ]{} ---------------------------------- **Introduction** ================ The rare flavour changing transition $b\to s\gamma\gamma$ has recently attracted new interest in view of the planned experiments at the upcoming KEK and SLAC B-factories and existing hadronic accelerators, which may test branching fractions as low as $10^{-8}$ times the B meson decay width. Rare B-physics potentially provides us with valuable redundancy tests of the flavour structure of the standard model (SM) and complementary information on the related CP violation. It is also most sensitive to the “heavy” sector of the SM particle spectrum and it is the preferred low-energy laboratory for virtual signals of exotic physics. In recent years the $b\to s\gamma$ decay has provided us with the first experimental evidence of “penguin” physics, and the experimental bounds on the $B\to X_s \gamma$ decays have been shown to provide sharp constraints on the physics beyond the SM. The fact that the $b\to s\gamma$ transition has already been experimentally observed is related to the peculiar enhancement of the electroweak amplitude that arises at the two-loop level due to a large logarithmic QCD mixing with the effective $b\to s \bar c c$ operator[@first; @lo]. The study of the QCD leading logarithmic (LO) resummation for $b\to s\gamma$ (and $b\to s\ gluon$) has been very recently extended to the next-to-leading order [@prenlo; @nlo; @postnlo], thus reducing the estimated theoretical error for the inclusive rate below the 10% threshold. In order to produce similar studies for the $b\to s\gamma\gamma$ transition it is extremely helpful to observe that, by the use of an extension of Low’s low energy theorem [@low; @yao1] or, alternatively, by applying the equations of motion, the $b\to s\gamma\gamma$ quark operator can be expanded at $O(G_F)$ on the standard operator basis needed for $b\to s \gamma$. Three groups have recently presented a QCD LO calculation of the two photon transition [@hill2; @yao2; @reina], thus improving on the previous electroweak calculations [@yao1; @simma1; @herrlich]. More recently a study of the $b\to s\gamma\gamma$ decay in two-Higgs doublet extensions of the SM has appeared [@aliev].
The purpose of the present paper is to study the influence of softly broken supersymmetry on the $b\to s\gamma\gamma$ transition. In fact, we find that the results of our analysis have a more general bearing on the possible effects of new physics on the radiative two-photon decay. At the LO the short-distance part of the $b\to s\gamma\gamma$ amplitude turns out to be controlled by the one-photon radiative component. The higher dimension one-particle irreducible contributions present in the $b\to s\gamma\gamma$ amplitude, potentially large because of the $1/m_c^2$ dependence, turn out to remain subleading because of accidental cancellations. This result remains true when studying two-Higgs doublet extensions of the SM where the additional charged Higgs contribution to the $b\to s\gamma$ component of the two-photon amplitude adds coherently to the SM one. On the other hand, in supersymmetric theories there are potentially large destructive effects related to the exchange of superpartners, which may reduce the size of the one-particle reducible part of the $b\to s\gamma\gamma$ amplitude thus affecting the relative weight between the latter and the one-particle irreducible components. Nevertheless, the present experimental constraints on $b\to s\gamma$ induced decays are tight, and by studying the effects of low energy supersymmetry as a paradigmatic case, the present analysis shows generally that, as long as extensions of the SM affect only the short distance Wilson coefficients of the standard effective Hamiltonian, the $b\to s\gamma\gamma$ induced decays are severely constrained by the present bounds on the inclusive $B\to X_s \gamma$ decay [@CLEO] $$BR(B\to X_s \gamma)=(2.32\pm 0.51\pm 0.29\pm 0.32)\times 10^{-4}.$$ Since this result is consistent within 30% with the next-to-leading order (NLO) SM calculation [@nlo] $$BR(B\to X_s \gamma)=(3.28\pm 0.33)\times 10^{-4},$$ one may not expect much larger deviations of the two-photon decay rates from the corresponding SM estimates. Were one to invoke supersymmetry to reduce by 30% the SM prediction of $\Gamma(B\to X_s \gamma)$, then the SUSY rates related to $b\to s\gamma\gamma$ would be unlikely to exceed the corresponding SM expectations. Potential implications of an (albeit challenging) NLO analysis of the two-photon amplitude in extensions of the SM are commented upon in the conclusions. Although we shall apply our results to the $B_s\to\gamma\gamma$ decay we are not interested in predicting absolute decay rates and thus we will not discuss the uncertainties related to hadronization, which are not affected by short-distance new physics. Our main purpose is to study the deviations of the quark amplitudes from the SM predictions. On the basis of these considerations and at the present time, it is not worth the effort to perform a detailed analysis of a specific supersymmetric model. We will investigate the main features of the SUSY amplitudes by means of relatively simple limiting cases in the supersymmetric and soft breaking parameter space.
**Effective Quark Lagrangian: Operator Basis and Coefficients** =============================================================== At the LO in QCD the effective Hamiltonian for $b\to s \gamma$ closes on a basis of eight effective operators $$H_{eff} =-{G_F\over \sqrt{2}}V^*_{ts}V_{tb}\sum_{i=1}^{8}C_i(\mu)O_i(\mu), \label{heff}$$ where $$\begin{aligned} O_1 &=& (\bar{s}_ic_j)_{V-A}(\bar{c}_jb_i)_{V-A} \nonumber \\ O_2 &=& (\bar{s}_ic_i)_{V-A}(\bar{c}_jb_j)_{V-A} \nonumber \\ O_3 &=& (\bar{s}_ib_i)_{V-A}\sum_{q}(\bar{q}_jq_j)_{V-A} \nonumber \\ O_4 &=& (\bar{s}_ib_j)_{V-A}\sum_{q}(\bar{q}_jq_i)_{V-A} \nonumber \\ O_5 &=& (\bar{s}_ib_i)_{V-A}\sum_{q}(\bar{q}_jq_j)_{V+A} \nonumber \\ O_6 &=& (\bar{s}_ib_j)_{V-A}\sum_{q}(\bar{q}_jq_i)_{V+A} \nonumber \\ O_7 &=& {e\over 8\pi^2}\bar{s}_i\sigma^{\mu\nu}(m_s(1-\gamma_5) +m_b(1+\gamma_5))b_iF_{\mu\nu} \nonumber \\ O_8 &=& {g\over 8\pi^2}\bar{s}_i\sigma^{\mu\nu}(m_s(1-\gamma_5) +m_b(1+\gamma_5))T^a_{ij}b_jG^a_{\mu\nu}\ . \label{operators}\end{aligned}$$ In eq. (\[operators\]) $i,j=1,2,3$ are color indices, $a=1,...,8$ labels the $SU(3)$ generators, and $V\pm A\equiv 1\pm\gamma_5$. The sum runs over the active quark flavours $u,d,s,c,b$. Having factored out the relevant Kobayashi-Maskawa (KM) mixings, the LO matching of the Wilson coefficients $C_i(\mu)$ at the scale $m_W$ is given in the SM by $$C^{SM}_i(m_W) = 0,\ \ \ \ \ i=1,3,4,5,6 \label{c1-6}$$ $$C^{SM}_2(m_W) = 1 \label{c2}$$ $$C^{SM}_7(m_W) = \frac{3 x^3-2 x^2}{4(x-1)^4}\log x - \frac{8 x^3 + 5 x^2 - 7 x}{24(x-1)^3} \label{c7}$$ $$C^{SM}_8(m_W) = \frac{-3 x^2}{4(x-1)^4}\log x - \frac{x^3 - 5 x^2 - 2 x}{8(x-1)^3} \label{c8}$$ where $x=m_t^2/m_W^2$. Large $(\alpha_s\log\mu)^n$ corrections to the weak amplitudes are resummed via the renormalization group (RG) equations by evolving the Wilson coefficients at the typical scale of the process ($\mu\approx m_b$). While the anomalous dimension matrix of the first six operators is regularization scheme independent, the operator mixings between $O_1,\cdots, O_6$, and the dipole penguins $O_{7,8}$ are generally regularization-scheme dependent since they arise in LO at the two-loop level. However, the finite one-loop matrix elements of $b\to s\gamma$ (and $b\to s g$) generated by the insertion of the $O_{5,6}$ gluonic penguins carry a compensating scheme-dependence such that the total physical amplitudes are independent on the regularization scheme. In view of this, one defines the so called “effective” Wilson coefficients [@ciuc; @buras], $C^{eff}_7(\mu)$ and $C^{eff}_8(\mu)$, for which the LO RG running is scheme independent. As an example, for $C_7(\mu)$ one finds the LO expression $$C^{eff}_7(\mu) = \eta^{\frac{16}{23}} C_7(m_W) + \frac{8}{3} \left( \eta^{\frac{14}{23}} - \eta^{\frac{16}{23}} \right) C_8(m_W) + C_2(m_W) \sum_{i=1}^8 h_i \eta^{a_i} \label{c7eff}$$ where $\eta=\alpha_s(m_W)/\alpha_s(\mu)$ and the numbers $h_i$ and $a_i$ are given in the appendix of ref. [@buras]. The sum of all $h_i$ is zero and $C_i^{eff}(m_W)=C_i(m_W)$. For notational convenience, the superscript “eff” on $C_{7,8}(\mu)$ will be henceforth omitted. **The $B_s\to\gamma\gamma$ Decay** ================================== The quark $b\to s\gamma\gamma$ transition induces at the hadronic level two interesting rare decay modes of the $B$ mesons: $B_{u,d}\to X_s\ \gamma\gamma$ and $B_s\to\gamma\gamma$, where $X_s$ represents strange mesonic states. Both these decay modes and their features at the hadronic level within the SM have been widely studied in the literature [@bxsgg]. 
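Before turning to the decay amplitudes, a short numerical sketch of eqs. (\[c7\])–(\[c7eff\]) may be useful (Python/NumPy). The one-loop running of $\alpha_s$ and the input values are simplified choices of mine, and the LO “magic numbers” $h_i$ and $a_i$ are the standard values of the LO analysis, reproduced here only to make the sketch self-contained.

```python
import numpy as np

def c7_sm(x):
    """SM matching C7(mW), eq. (c7), with x = m_t^2 / m_W^2."""
    return (3*x**3 - 2*x**2) / (4*(x - 1)**4) * np.log(x) \
           - (8*x**3 + 5*x**2 - 7*x) / (24*(x - 1)**3)

def c8_sm(x):
    """SM matching C8(mW), eq. (c8)."""
    return -3*x**2 / (4*(x - 1)**4) * np.log(x) \
           - (x**3 - 5*x**2 - 2*x) / (8*(x - 1)**3)

def alpha_s(mu, alpha_s_mz=0.118, mz=91.19, nf=5):
    """One-loop running of alpha_s, a simplified stand-in for the LO evolution."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz / (2.0 * np.pi) * np.log(mu / mz))

# LO 'magic numbers' of the effective-coefficient evolution; they satisfy sum(h) = 0.
A = np.array([14/23, 16/23, 6/23, -12/23, 0.4086, -0.4230, -0.8994, 0.1456])
H = np.array([2.2996, -1.0880, -3/7, -1/14, -0.6494, -0.0380, -0.0185, -0.0057])

def c7_eff(mu, mt=175.0, mw=80.33, c2_mw=1.0):
    """Leading-order C7^eff(mu) from eq. (c7eff)."""
    x = (mt / mw) ** 2
    eta = alpha_s(mw) / alpha_s(mu)
    return (eta**(16/23) * c7_sm(x)
            + (8/3) * (eta**(14/23) - eta**(16/23)) * c8_sm(x)
            + c2_mw * np.sum(H * eta**A))

if __name__ == "__main__":
    for mu in (2.5, 4.8, 10.0):
        print(f"mu = {mu:5.1f} GeV   C7_eff = {c7_eff(mu):+.3f}")
```

With these inputs the sketch returns $C_7(m_W) \approx -0.20$ and $C_7^{eff}(m_b) \approx -0.3$, illustrating the sizeable LO enhancement of the coefficient driven by the mixing with $O_2$.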
We apply the analysis of the QCD corrected supersymmetric $b\to s\gamma\gamma$ amplitude to the discussion of the two body $B_s\to\gamma\gamma$ decay, whose total rate can be cast in a simple and compact form; this allows us to carry out the present analysis without unnecessary complications. Our conclusions are based on short-distance properties of the quark transition that hold as well for the $B_{u,d}\to X_s\ \gamma\gamma$ decay, even though the detailed form of the amplitude differs from that of the $B_s\to\gamma\gamma$ decay. The total $B_s\to\gamma\gamma$ amplitude can be separated into a CP-even and a CP-odd part $${\cal A}(B_s\to \gamma\gamma)=M^+F_{\mu\nu}F^{\mu\nu} +iM^-F_{\mu\nu}\tilde{F}^{\mu\nu}.$$ According to the notation of ref. [@yao2] one finds $$M^+=-{4{\sqrt 2}\alpha_{em} G_F\over 9\pi}f_{B_s}m_{B_s}V_{ts}^*V_{tb}\left( B(\mu)\ m_b\ K(m_b^2) +{3C_7(\mu)\over 8\bar{\Lambda} }\right),$$ and $$M^-={4{\sqrt 2}\alpha_{em} G_F\over 9\pi}f_{B_s}m_{B_s}V_{ts}^*V_{tb}\left(\sum_q m_{B_s}\ A_q(\mu)\ J(m_q^2)+m_b\ B(\mu)\ L(m_b^2)+ {3C_7(\mu)\over 8\bar{\Lambda} }\right),$$ where $\bar{\Lambda}=m_{B_s}-m_b$ and $$\begin{aligned} A_u &=&(C_3-C_5)N_c+(C_4-C_6)\nonumber \\ A_d &=&{1\over 4}\left((C_3-C_5)N_c+(C_4-C_6)\right)\nonumber \\ A_c &=&(C_1+C_3-C_5)N_c+(C_2+C_4-C_6) \nonumber \\ A_s &=&A_b={1\over 4}\left((C_3+C_4-C_5)N_c+(C_3+C_4-C_6)\right) \nonumber \\ B &=&-{1\over 4}(C_6N_c+C_5)\ , \label{OPI}\end{aligned}$$ are combinations of Wilson coefficients evaluated at the scale $\mu$ ($\approx m_b$). The functions $J(m^2)$, $K(m^2)$ and $L(m^2)$ are defined by $$\begin{aligned} J(m^2)&=&I_{11}(m^2),\nonumber \\ K(m^2)&=&4\ I_{11}(m^2)-I_{00}(m^2)\ ,\nonumber \\ L(m^2)&=&I_{00}(m^2),\end{aligned}$$ with $$I_{pq}(m^2)=\int_{0}^{1}{dx}\int_{0}^{1-x}{dy}{x^py^q\over m^2-2xyk_1\cdot k_2-i\varepsilon}\ , \label{Ipq}$$ and $2\ k_1\cdot k_2=m_{B_s}^2$. The decay width for $B_s\to \gamma\gamma$ is then given by $$\Gamma(B_s\to \gamma\gamma)={m_{B_s}^3\over 16\pi}({\vert M^+\vert }^2+{\vert M^-\vert }^2)\ ,$$ and, from the measured $B_s$ lifetime, the corresponding branching ratio is finally obtained $$BR(B_s\to \gamma\gamma)=\Gamma(B_s\to \gamma\gamma)/\Gamma(B_s)\ . \label{brff}$$ Coming to the inclusive $B_s\to X_s \gamma$ decay it is convenient to use the approximate equality $$BR(B\to X_s \gamma) \simeq \frac{\Gamma(b\to s \gamma)}{\Gamma(b\to c e\bar\nu_e)} BR(B\to X_c e\bar\nu_e)\ , \label{brsf}$$ where $$\label{main} \frac{\Gamma(b \to s \gamma)}{\Gamma(b \to c e \bar{\nu}_e)} = \frac{|V_{ts}^* V_{tb}|^2}{|V_{cb}|^2} \frac{6 \alpha_{em}}{\pi g(z)} |C_7(\mu)|^2\ , \label{rquark}$$ which minimizes the uncertainties related to the bottom quark mass and KM mixings. In eq. (\[rquark\]) the function $g(z) = 1 - 8z^2 + 8z^6 - z^8 - 24z^4 \log z$ is the phase space factor in the semileptonic decay and $z = m_c/m_b$. In Figs. 1 and 2 we show the LO results for the SM $B_s\to \gamma\gamma$ and $B\to X_s \gamma$ branching ratios, as a function of the renormalization scale $\mu$ and of $m_t$. Our numerical results are obtained using the values given in Table \[inputs\] for the other input parameters. \[mfig1\] [Fig. 1. $BR(B_s\to\gamma\gamma)_{SM} \times 10^{7}$ as a function of $\mu$ and $m_t$ (GeV) for central values of the other input parameters (Table I). ]{} \[mfig2\] [Fig. 2. $BR(B\to X_s \gamma)_{SM} \times 10^{4}$ as a function of $\mu$ and $m_t$ (GeV) for central values of the other input parameters (Table I). ]{} From a direct comparison of Figs. 
1 and 2 it appears clear that the $B_s\to\gamma\gamma$ decay rate is dominated by the $C_7$ component. This is related to the fact that the one-particle irreducible contributions arising from the operators $Q_{1,2}$, which could be potentially large due to the $1/m_c^2$ dependence of eq. (\[Ipq\]), appear in eq. (\[OPI\]) ($A_c$) via the combination $N_c\ C_1 + C_2$ which is numerically suppressed [@yao2]. Notice that the scale dependence represents the largest source of uncertainty of the LO calculation [@buras; @ali]. We have here shown the range 2.5 GeV $< \mu <$ 10 GeV. As a reference for the following analysis, the SM central values for the LO QCD corrected decay rates at the scale $\mu=m_b$ are given by $$BR(B\to X_s \gamma)_{SM} = 2.5\times 10^{-4} \label{brsfsm}$$ and $$BR(B_s\to \gamma\gamma)_{SM} = 4.4\times 10^{-7}\ . \label{brffsm}$$ The prediction in eq. (\[brffsm\]) compares to the present experimental bound [@L3] $$BR(B_s\to\gamma\gamma)<1.48\times 10^{-4}\ \label{ffexp}$$ which is about three orders of magnitude away from the needed sensitivity. **The $B_s\to\gamma\gamma$ Decay in Softly Broken Supersymmetry** ================================================================= In a wide class of realistic SUSY models the global supersymmetry breaking is a consequence of the spontaneous breaking of an underlying N=1 supergravity theory (for reviews see ref. [@SUGRA]). The locally supersymmetric lagrangian is supposed to undergo a spontaneous breaking in the so called hidden sector, and the effects of this breaking are communicated to the observable sector through gravitational effects. A renormalizable theory is obtained in the limit in which the Planck mass goes to infinity. By doing so one is left with an effective globally supersymmetric lagrangian and explicit soft breaking terms. In our present study we shall consider the following gauge invariant soft breaking Lagrangian: $${\cal L}_{soft}= -{\cal M}^2-(\hat {M}+S\ +\ h.c.) \label{hslagr}$$ where ${\cal M}^2$ is a common mass term for all the scalar components $z_i$ in the theory $${\cal M}^2 \equiv \Sigma_i \tilde m^2 z_i^* z_i\ , \label{soft}$$ $\hat {M}$ is a mass term for the gauginos $\lambda_\alpha,$ $\alpha=1,2,3$ considered as Weyl fields $$\hat{M} \equiv -\frac{M_\alpha}{2} \lambda_\alpha \lambda_\alpha\ , \label{gm}$$ and $S$ is the scalar analogue of the superpotential $$S = \tilde m \left[ -A_U h_U H_2 \widetilde{Q} \widetilde{U}^c + A_D h_D H_1 \widetilde{Q} \widetilde{D}^c + A_E h_E H_1 \tilde{L} \widetilde{E}^c + B \mu H_1 H_2 \right]\ , \label{trilbi}$$ where with standard notation $h_{U,D,E}$ are the $3\times 3$ Yukawa matrices for the quarks and charged leptons. The soft breaking parameters $A_i$ and $B$ are dimensionless numbers of order unity. The $b\rightarrow s\gamma$ and $b\rightarrow s\gamma\gamma$ transitions can proceed in the SUSY model via five different intermediate particles exchanges: 1. Charged gauge bosons $(W^-)\ \ \ +\ \ \ $ up-quarks 2. Charged Higgs bosons $(H^-)\ \ \ +\ \ \ $ up-quarks 3. Charginos $(\chi^-)\ \ \ +\ \ \ $ up-squarks 4. Gluinos $({\widetilde g})\ \ \ +\ \ \ $ down-squarks 5. Neutralinos ($\chi^0)\ \ \ +\ \ \ $ down-squarks The total amplitude is the sum of all these contributions. The complete analytic expressions for the various components are found in ref. [@BBMR]. An effective $b-s$ flavour changing transition induced by $W^-$ exchange is the only way through which the decays proceed in the SM. 
A two-Higgs doublet extension of the SM would include the first two contributions, while the last three are genuinely supersymmetric in nature. Gluinos and neutralinos can mediate flavour changing interactions only via renormalization effects which are crucially dependent on the detailed structure of the model. Their consideration is beyond the scope of the present work and our results do not presently justify a more detailed analysis. We shall discuss the features of the inclusion of the first three contributions in the matching of the Wilson coefficients. The supersymmetric Wilson coefficients are then given by $$C_{7,8}^{SUSY} (m_W) = C_{7,8}^{SM}(m_W) + C_{7,8}^H (m_W) + C_{7,8}^\chi (m_W)\ ,$$ while at the LO the matching conditions in eqs. (\[c1-6\])–(\[c2\]) remain unaffected. From the results of ref. [@BBMR] and comparing with eq. (\[heff\]) we obtain the following contributions to the $C_{7,8}(m_W)$ coefficients: $$C_{7,8}^H (m_W) =\frac{1}{2}\frac{m_t^2}{m_H^2}\left[ \frac{1}{\tan^2 \beta} f^{(1)}_{7,8} \left( \frac{m_t^2}{m_H^2}\right) + f^{(2)}_{7,8} \left( \frac{m_t^2}{m_H^2}\right) \right]\ , \label{c78h}$$ induced by charged Higgs exchange and $$\begin{aligned} C_{7,8}^\chi (m_W) &=& - {1 \over V_{ts}^* V_{tb}} \ \sum_ {j=1} ^2 \sum_ {k=1} ^6 \ {m_W^2 \over \tilde m_{\chi_j}^2} \left[ (G_{UL}^{jkb} - H_{UR}^{jkb}) (G_{UL}^{*jks} - H_{UR}^{*jks}) f_{7,8}^{(1)}\left(\frac{\tilde m_{u_k}^2}{\tilde m_{\chi_j}^2}\right) \right. \nonumber \\ & & - \left. H_{UL}^{jkb} (G_{UL}^{*jks} - H_{UR}^{*jks}) \ {\tilde m_{\chi_j} \over m_b} \ f_{7,8}^{(3)}\left(\frac{\tilde m_{u_k}^2}{\tilde m_{\chi_j}^2}\right) \right] \label{c78ch}\end{aligned}$$ induced by chargino exchange. We have found convenient for the present discussion to introduce the functions $f_{7,8}^{(n)}$ according to the notation of ref. [@barbieri] $$\begin{aligned} f^{(1)}_7 (x) &=& \frac{(7-5x-8x^2)}{36(x-1)^3}+ \frac{x(3x-2)}{6(x-1)^4}\log x \label{f1}\\ f^{(2)}_7 (x) &=& \frac{(3-5x)}{6(x-1)^2}+ \frac{(3x-2)}{3(x-1)^3}\log x \\ f^{(3)}_7 (x) &=& (1-x) f^{(1)}_7 (x) - \frac{x}{2}f^{(2)}_7 (x) -\frac{23}{36} \\ f^{(1)}_8 (x) &=& \frac{(2+5x-x^2)}{12(x-1)^3}- \frac{x}{2(x-1)^4}\log x \\ f^{(2)}_8 (x) &=& \frac{(3-x)}{2(x-1)^2}- \frac{1}{(x-1)^3}\log x \\ f^{(3)}_8 (x) &=& (1-x) f^{(1)}_8 (x) - \frac{x}{2}f^{(2)}_8 (x) -\frac{1}{3}\ . \label{f3}\end{aligned}$$ These functions have simple and obvious relations with the functions $F_n (x)$ defined originally in ref. [@BBMR], to which we refer the reader for all details. In eq. (\[c78ch\]) $j=1,2$ is the label of the chargino mass eigenstates and $k=1,...,6$ is the analogous label for the up-squarks; the matricial couplings $G_{UL}$ arise from charged gaugino-squark-quark vertices, whereas $H_{UL}$ and $H_{UR}$ are related to the charged higgsino-squark-quark vertices. These couplings contain among else the unitary rotations $U$ and $V$ which diagonalize the chargino mass matrix $$U^\ast \pmatrix{M_2 & m_W\sqrt{2} \sin \beta \cr m_W\sqrt{2} \cos \beta & -\mu \cr} V^{-1} =\pmatrix{\tilde{m}_{\chi_1} & 0 \cr 0 & \tilde{m}_{\chi_2} \cr}\ , \label{chmatrix}$$ where $M_2$ is the weak gaugino mass and $\mu$ the Higgs mixing parameter. The sign of the $\mu$ entry is defined accordingly to the Feynman rules used in obtaining the above results (see the comment following eq. (15) in ref. [@bevi]). Due to the relevance of the chargino amplitude for the present discussion, it is worth trying to have a better understanding of the nature of the features exhibited by this amplitude. 
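Before doing so, a short sketch (Python/NumPy; all numerical inputs are illustrative) transcribes the loop functions of eqs. (\[f1\])–(\[f3\]) and evaluates the charged-Higgs matching of eq. (\[c78h\]); the chargino matching of eq. (\[c78ch\]) can be assembled from the same building blocks once the mixing matrices and squark spectrum are specified.

```python
import numpy as np

# Loop functions of eqs. (f1)-(f3), transcribed directly from the text.
def f7_1(x): return (7 - 5*x - 8*x**2)/(36*(x - 1)**3) + x*(3*x - 2)/(6*(x - 1)**4)*np.log(x)
def f7_2(x): return (3 - 5*x)/(6*(x - 1)**2) + (3*x - 2)/(3*(x - 1)**3)*np.log(x)
def f7_3(x): return (1 - x)*f7_1(x) - 0.5*x*f7_2(x) - 23.0/36.0
def f8_1(x): return (2 + 5*x - x**2)/(12*(x - 1)**3) - x/(2*(x - 1)**4)*np.log(x)
def f8_2(x): return (3 - x)/(2*(x - 1)**2) - 1.0/(x - 1)**3*np.log(x)
def f8_3(x): return (1 - x)*f8_1(x) - 0.5*x*f8_2(x) - 1.0/3.0

def c7_higgs(mt, mH, tan_beta):
    """Charged-Higgs matching C7^H(mW), eq. (c78h)."""
    y = (mt / mH) ** 2
    return 0.5 * y * (f7_1(y) / tan_beta**2 + f7_2(y))

def c8_higgs(mt, mH, tan_beta):
    """Charged-Higgs matching C8^H(mW), eq. (c78h)."""
    y = (mt / mH) ** 2
    return 0.5 * y * (f8_1(y) / tan_beta**2 + f8_2(y))

if __name__ == "__main__":
    mt = 175.0
    for mH in (100.0, 150.0, 300.0, 500.0):
        print(f"m_H = {mH:5.0f} GeV   C7^H = {c7_higgs(mt, mH, tan_beta=2.0):+.3f}")
```

The sketch makes visible the feature quoted above: the charged-Higgs term has the same sign as the SM matching and therefore adds coherently to it, while decoupling as $m_H$ grows.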
An explicit $\tan\beta$ dependence is found in $H_{UL}$ and $H_{UR}$ where quark Yukawa couplings are present; more precisely, $H_{UL}$ is proportional to the down-quark Yukawa coupling, which grows with $\tan\beta$ as $1/\cos\beta$, whereas $H_{UR}$ contains the up-quark Yukawa coupling, that approaches in the large $\tan\beta$ limit a constant value ($\propto 1/\sin\beta$). It is in fact the contribution in the second line of eq. (\[c78ch\]) that determines the behaviour of the amplitude in the large $\tan\beta$ regime. Detailed studies of this feature of the chargino amplitude are available in the literature [@largebeta]. In order to allow for an analytic and more transparent discussion of the chargino component we resort to simplified assumptions on the squark mass spectrum which reproduce with good approximation the global features of the model. In this we follow closely the analysis of ref. [@barbieri]. We assume that all squarks, other than the two scalar partners of the top quark, are degenerate with the soft breaking mass $\tilde{m}$. The remnant $2\times 2$ top squark mass matrix is diagonalized by an orthogonal matrix $T$ such that: $$T\pmatrix{\tilde{m}^2+m_t^2 & (A_t\tilde{m} + \mu/\tan\beta)m_t \cr (A_t\tilde{m} + \mu/\tan\beta) m_t & \tilde{m}^2+m_t^2 \cr}T^{-1}= \pmatrix{\tilde{m}^2_{t_1}&0\cr 0& \tilde{m}^2_{t_2}\cr},$$ where $A_t$ is the supersymmetry-breaking trilinear coupling. The sign of the $\mu$ term is consistent with eq. (\[chmatrix\]). With these assumptions eq. (\[c78ch\]) can be written as $$\begin{aligned} C_{7,8}^\chi (m_W) &=& \sum_{j=1}^{2} \left\{ \frac{m_W^2}{\tilde{m}_{\chi_j}^2}\left[ |V_{j1}|^2 f^{(1)}_{7,8} \left( \frac{\tilde{m}^2}{\tilde{m}_{\chi_j}^2}\right) \right. \right. \nonumber \\ &&-\left. \sum_{k=1}^2 \left| V_{j1}T_{k1}-V_{j2}T_{k2}\frac{m_t}{\sqrt{2} m_W \sin \beta} \right|^2 f^{(1)}_{7,8} \left( \frac{\tilde{m}_{t_k}^2} {\tilde{m}_{\chi_j}^2}\right) \right] \nonumber \\ &&-\frac{U_{j2}}{\sqrt{2} \cos \beta} \frac{m_W}{\tilde{m}_{\chi_j}}\left[ V_{j1} f^{(3)}_{7,8} \left( \frac{\tilde{m}^2}{\tilde{m}_{\chi_j}^2}\right) \right. \nonumber \\ &&-\left. \left. \sum_{k=1}^2 \left( V_{j1}T_{k1}-V_{j2}T_{k2}\frac{m_t}{\sqrt{2} m_W \sin \beta} \right) T_{k1} f^{(3)}_{7,8} \left( \frac{\tilde{m}_{t_k}^2} {\tilde{m}_{\chi_j}^2}\right) \right] \right\}\ , \label{c78chb}\end{aligned}$$ **Four exemplifying cases** ---------------------------- We are now ready to investigate the effects of the SUSY matchings on the $b\to s\gamma\gamma$ transition. We will show our results by plotting the ratios of SUSY versus SM decay rates for central values of the SM input parameters while varying the unknown SUSY parameters. We investigate four limiting cases which span the global features of the new amplitudes. First, we take $M_2=\mu=A_t=0$ and fix $\tan\beta=1$ (case 1). In this approximation we have $$U=\frac{1}{\sqrt{2}}\pmatrix{1&1\cr -1&1},~~~ V=\frac{1}{\sqrt{2}}\pmatrix{1&1\cr 1&-1},~~~ \tilde{m}_{\chi_{1,2}}=m_W,$$ $$T=\pmatrix{1&0\cr 0&1},~~~ \tilde{m}^2_{t_{1,2}}=\tilde{m}^2+m_t^2\ .$$ The Wilson coefficients $C_{7,8}^\chi$ can be simply written as: $$C_{7,8}^\chi (m_W) = z\left[ f^{(1)}_{7,8}(z) +\frac{1}{2} f^{(2)}_{7,8}(z)\right] -(2x+z)f^{(1)}_{7,8}(x+z) -\frac{x+z}{2} f^{(2)}_{7,8}(x+z),$$ where $$x= \frac{m_t^2}{m_W^2},~~~ z= \frac{\tilde{m}^2}{m_W^2}.$$ As remarked in ref. [@barbieri] $C_{7,8}^{SUSY}(m_W)$ shows an exact cancellation in the supersymmetric limit, $z\to 0$ and $m_H\to m_W$. 
This is a consequence of the fact that any magnetic moment transition vanishes in exact supersymmetry [@ferrara]. Therefore non-vanishing contributions to the $C_{7,8}^{SUSY}$ coefficients arise due to the presence of the soft breaking terms. We define $$R_{\gamma\gamma} = \frac{BR(B_s\to\gamma\gamma)_{SUSY}}{ BR(B_s\to\gamma\gamma)_{SM}}\ , \label{rff}$$ and $$R_{\gamma} = \frac{BR(B\to X_s\gamma)_{SUSY}}{ BR(B\to X_s\gamma)_{SM}}\ , \label{rf}$$ where the SM decay rates are those given in eqs. (\[brsfsm\])–(\[brffsm\]), obtained using the central values of the SM input parameters. \[mfig3\] [Fig. 3. Case 1. $R_{\gamma\gamma}$ as a function of $\tilde{m}$ and $m_H$ (GeV), for degenerate chargino masses $\tilde m_{\chi_{1,2}}=m_W$. ]{} \[mfig4\] [Fig. 4. Case 1. The allowed range for $R_{\gamma\gamma}$ is shown as a function of $\tilde{m}$ and $m_H$ (GeV), by constraining the $BR(B\to X_s \gamma)_{SUSY}$ to vary within a $\pm 30 \%$ from its SM value. ]{} In Fig. 3 we show $R_{\gamma\gamma}$ as a function of the charged Higgs mass and the scalar soft breaking mass in a few hundred GeV range. In Fig. 4 the same range is spanned assuming the constraint $$0.7 < R_\gamma < 1.3\ . \label{constraint}$$ We see that $R_{\gamma\gamma}$ as well is bound to vary in approximately the same range. The study of the ratio $R_{\gamma\gamma}/R_\gamma$ in the same region shows deviations of at most $\pm 4\%$ from unity, which shows the strong correlation between the two decays. By releasing the constraint $A_t = 0$, while holding $M=\mu=0$ and $\tan\beta=1$, we allow for a mass splitting of the stop eigenstates (case 2). This corresponds to having $$T=\frac{1}{\sqrt{2}}\pmatrix{1&1\cr -1&1},~~~ \tilde{m}^2_{t_{1,2}}=\tilde{m}^2+m_t^2\pm A_t\tilde{m} m_t\ . \label{stopmass}$$ The chargino contribution to $C_{7,8}$ becomes: $$C_{7,8}^\chi (m_W) =zf^{(1)}_{7,8} (z)+ \frac{z}{2}f^{(2)}_{7,8} (z)- \sum_{k=1}^2 \left[ \frac{x+w_k}{2}f^{(1)}_{7,8} (w_k) + \frac{w_k}{4}f^{(2)}_{7,8} (w_k)\right] .$$ where $w_{1,2}=x+z \pm A_t\tilde{m} m_t/m_W^2$. \[mfig5\] [Fig. 5. Case 2. $R_{\gamma\gamma}$ is shown as a function of $\tilde{m}$ (GeV) and $A_{t}$ for $m_H=150$ GeV. ]{} In Fig. 5 we plot $R_{\gamma\gamma}$ as a function of $A_t$ and $\tilde m$ for fixed $m_H=150$ GeV, under the requirement that the lightest stop mass is always above 45 GeV. As we see, releasing the stop squark degeneracy while keeping charginos degenerate does not sizeably modify the features shown in case 1, and the same conclusions apply. Next we consider $M=\mu =A_t=0$, and arbitrary $\tan \beta$ (case 3). The chargino component reduces to: $$\begin{aligned} C_{7,8}^\chi(m_W) &=& -\frac{x+z}{4\cos^4 \beta}\left[ f^{(1)}_{7,8} \left( \frac{x+z}{2\cos^2\beta}\right) + \frac{1}{2} f^{(2)}_{7,8} \left( \frac{x+z}{2\cos^2\beta}\right) \right] -\frac{x}{4\sin^4\beta} f^{(1)}_{7,8} \left( \frac{x+z}{2\sin^2\beta}\right) \nonumber \\ && + \frac{z}{4\cos^4 \beta}\left[ f^{(1)}_{7,8} \left( \frac{z}{2\cos^2\beta}\right) + \frac{1}{2} f^{(2)}_{7,8} \left( \frac{z}{2\cos^2\beta}\right) \right]\ . \label{case3}\end{aligned}$$ In this case, the chargino degeneracy is lifted, while keeping the degeneracy in the squark sector. The chargino contribution becomes dependent on $\tan\beta$. On the other hand, as can be verified by means of eqs. (\[f1\])–(\[f3\]), the $\tan\beta$ dependence of eq. (\[case3\]) in the large $\tan\beta$ limit is only logarithmic. 
As we will see later, for the SUSY amplitude to exhibit a stronger $\tan\beta$ dependence, $A_t\ne 0$ is required as well. \[mfig6\] [Fig. 6. Case 3. The allowed range for $R_{\gamma\gamma}$ is shown as a function of $\tilde{m}$ and $m_H$ (GeV) for $\tan \beta=10$, imposing the constraint of eq. (\[constraint\]) on $BR(B\to X_s\gamma)_{SUSY}$. ]{} \[mfig7\] [Fig. 7. Case 3. The allowed range for $R_{\gamma\gamma}$ is shown as a function of $\tilde{m}$ (GeV) and $\tan\beta$ for $m_H=150$ GeV, imposing the constraint of eq. (\[constraint\]) on $BR(B\to X_s\gamma)_{SUSY}$. ]{} In Figs. 6 and 7 we show, as a function of different SUSY parameters, the allowed range for $R_{\gamma\gamma}$ once the constraint on $BR(B\to X_s\gamma)_{SUSY}$ in eq. (\[constraint\]) is imposed. $R_{\gamma\gamma}$ is always bound to vary within $\pm 30$% from its SM expectation with high correlation to $R_{\gamma}$. Finally, we consider the case for which $M_2$, $\mu$ and $A_t$ are different from zero and $\tan\beta \gg 1$ (case 4). In fact, the part of the chargino contribution which drives the large $\tan\beta$ behaviour vanishes when either squarks or charginos are degenerate, as one can verify from the simplified form of eq. (\[c78chb\]). \[mfig8\] [Fig. 8. Case 4. The potential enhancement of $R_{\gamma\gamma}$ is shown as a function of $\tilde{m}$ and $m_H$ (GeV) for $\tan \beta=15$ (upper surface) and 10 (lower surface). ]{} An analytic approximation of the $H_{UL}H^*_{UR}$ component of the chargino amplitude can be derived which shows explicitly its interesting features [@bevi]. We assume the chargino mass matrix in eq. (\[chmatrix\]) to be approximately diagonal: $$M_{\chi}\approx {\rm diag}(M_2,-\mu) \label{diagonal m-chi}$$ This approximation holds effectively when [@bevi] $|M_2^2 - \mu^2| = $ $O[{\rm max}(M_2^2,\ \mu^2)] \gg m_W^2$ and $(M_2^2,\ \mu^2) \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}\ m_W^2$. It is important to notice that these requirements, and therefore the approximation of eq. (\[diagonal m-chi\]), are consistent with one of the eigenvalues, say $|\mu|$, being of the order of $m_W$, while the other $|M_2|$ is much heavier. The chargino mass matrix already being diagonal, the approximate mass eigenvalues are simply given by the absolute values of the parameters $M_2$ and $\mu$, and the two unitary rotations which “diagonalize” the chargino mass matrix can be written as: $$\begin{array}{ccl} U &\approx & {\rm diag (sign}[M_2],-{\rm sign}[\mu])\ , \\ V &\approx & {\bf 1} \end{array} \label{approx decomposition}$$ In this approximation, the matrix $T$ and the stop mass eigenstates are given by eq. (\[stopmass\]). Using eq. (\[stopmass\]) and eqs. (\[diagonal m-chi\])–(\[approx decomposition\]) we obtain a simple expression for the part of the chargino component relevant for large $\tan\beta$: $$C_{7,8}^\chi (m_W) \approx \frac{1}{2 \sin 2\beta} \frac{m_t}{\mu} \left[f_{7,8}^{(3)}\left(\frac{\tilde m^2_{t_1}}{\mu^2} \right) - f_{7,8}^{(3)}\left(\frac{\tilde m^2_{t_2}}{\mu^2} \right) \right] \label{domin ampl}$$ where $\tilde m_{\chi_2}=|\mu|$ is the lightest chargino eigenvalue. Notice that the amplitude in eq. (\[domin ampl\]) depends on the signs of both $\mu$ and the trilinear soft breaking parameter $A_t$ (changing the sign of the latter amounts to interchanging the two stop mass eigenvalues). One also verifies that the amplitude vanishes for either $\mu= 0$ or $A_t=0$ as it should.
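The $\tan\beta$ behaviour of eq. (\[domin ampl\]) is easy to exhibit numerically; in the sketch below (Python/NumPy) the loop functions repeat the transcription given earlier, the stop masses follow eq. (\[stopmass\]), and all numerical inputs are illustrative choices.

```python
import numpy as np

def f7_1(x): return (7 - 5*x - 8*x**2)/(36*(x - 1)**3) + x*(3*x - 2)/(6*(x - 1)**4)*np.log(x)
def f7_2(x): return (3 - 5*x)/(6*(x - 1)**2) + (3*x - 2)/(3*(x - 1)**3)*np.log(x)
def f7_3(x): return (1 - x)*f7_1(x) - 0.5*x*f7_2(x) - 23.0/36.0

def c7_chargino_large_tanbeta(mt, mu, m_squark, A_t, tan_beta):
    """Approximate chargino matching of eq. (domin ampl), with the stop masses of
    eq. (stopmass).  For tan(beta) >> 1 the prefactor 1/(2 sin 2 beta) ~ tan(beta)/4,
    so the contribution grows linearly with tan(beta)."""
    beta = np.arctan(tan_beta)
    mst1_sq = m_squark**2 + mt**2 + A_t * m_squark * mt
    mst2_sq = m_squark**2 + mt**2 - A_t * m_squark * mt
    return (mt / mu) / (2.0 * np.sin(2.0 * beta)) * (f7_3(mst1_sq / mu**2) - f7_3(mst2_sq / mu**2))

for tb in (2.0, 10.0, 30.0):
    print(tb, c7_chargino_large_tanbeta(mt=175.0, mu=80.33, m_squark=300.0,
                                        A_t=1.5, tan_beta=tb))
```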
As already mentioned, at variance with the case 3, the leading behaviour of the chargino amplitude is linear with $\tan\beta$. This may in general be the source of large deviations of the SUSY rates from the corresponding SM expectations. In Fig. 8 we show the ratio of the SUSY to SM branching ratio for $B_s\to\gamma\gamma$ as a function of $m_H$ and $\tilde m$ for $\tan\beta=10$ and 15. In the example shown we have chosen $\mu=m_W$ and $A_t=1.5$. We see the potential large enhancements which arise for large $\tan\beta$ from this component of the chargino amplitude. On the other hand, imposing the constraint in eq. (\[constraint\]) allows only those regions of the lower surface for which $R_{\gamma\gamma}$ varies approximately in the range $0.7-1.3$. Globally, in the tested region of parameters the deviations of the $B_s\to\gamma\gamma$ decay rate from the SM expectations are confined to be within 10% from the corresponding deviations for the $B\to X_s\gamma$ decay. At the next-to-leading order one may try to devise models that enhance the matchings of the $O_{3-6}$ penguin operators (which are vanishing at the LO) keeping the $O_{7,8}$ Wilson coefficients “under control”. As unlikely as this may be, numerically it is anyhow difficult to expect drastic deviations from the SM predictions, due to the subleading role of the $O_{3-6}$ operators (analogous considerations apply to the electroweak penguin operators, which we have neglected in the LO analysis). We conclude that in order to disentangle new physics effects from a comparison of the two $b\to s$ radiative decays a precision below 10% is required both on the theoretical and experimental sides. Due to the smallness of the two-photon rates and to the theoretical uncertainties related to long-distance physics it shows as a truly challenging task. [99]{} S. Bertolini, F. Borzumati and A. Masiero, [*Phys. Rev. Lett.* ]{} [**59**]{} [(1987)]{} [180]{};\ N.G. Deshpande, P. Lo, J. Trampetic, G. Eilam and P. Singer, [*Phys. Rev. Lett.*]{} [**59**]{} [(1987)]{} [183]{}. B. Grinstein, R. Springer and M.B. Wise, [*Phys. Lett. [**B**]{}*]{} [**202**]{} [(1988)]{} [138]{}, [*Nucl. Phys. [**B**]{}*]{} [**339**]{} [(1990)]{} [269]{};\ R. Grinjanis, P.J. O’Donnell, M. Sutherland and H. Navelet, [*Phys. Lett. [**B**]{}*]{} [**213**]{} [(1988)]{} [355]{}, [*Phys. Lett. [**B**]{}*]{} [**286**]{} [(1992)]{} [413]{} (E);\ G. Cella, G. Curci, G. Ricciardi and A. Viceré, [*Phys. Lett. [**B**]{}*]{} [**248**]{} [(1990)]{} [181]{};\ P. Cho and B. Grinstein, [*Nucl. Phys. [**B**]{}*]{} [**365**]{} [(1991)]{} [279]{};\ M. Misiak, [*Phys. Lett. [**B**]{}*]{} [**269**]{} [(1991)]{} [161]{}, [*Nucl. Phys. [**B**]{}*]{} [**393**]{} [(1993)]{} [23]{}, [*Nucl. Phys. [**B**]{}*]{} [**439**]{} [(1995)]{} [461]{} (E);\ K. Adel and Y.P. Yao, [*Mod. Phys. Lett. [**A**]{}*]{} [**8**]{} [(1993)]{} [1679]{};\ M. Ciuchini, E. Franco, G. Martinelli, L. Reina and L. Silvestrini, [*Phys. Lett. [**B**]{}*]{} [**316**]{} [(1993)]{} [127]{}, [*Nucl. Phys. [**B**]{}*]{} [**421**]{} [(1994)]{} [41]{};\ G. Cella, G. Curci, G. Ricciardi and A. Vicerè, [*Phys. Lett. [**B**]{}*]{} [**325**]{} [(1994)]{} [227]{}, [*Nucl. Phys. [**B**]{}*]{} [**431**]{} [(1994)]{} [417]{};\ A.J. Buras, M. Misiak, M. Münz and S. Pokorski, [*Nucl. Phys. [**B**]{}*]{} [**424**]{} [(1994)]{} [374]{}. R.K. Adel and Y.P. Yao, [*Phys. Rev. **D***]{} [**49**]{} [(1994)]{} [4945]{};\ M. Misiak and M. Münz, [*Phys. Lett. [**B**]{}*]{} [**344**]{} [(1995)]{} [308]{};\ C. Greub, T. Hurt and D. Wyler, [*Phys. Lett. 
[**B**]{}*]{} [**380**]{} [(1996)]{} [385]{},[*Phys. Rev. **D***]{} [**54**]{} [(1996)]{} [3350]{}; C. Greub and T. Hurt, [hep-ph]{} [9703349]{}. K.G. Chetyrkin, M. Misiak and M. Münz, [*Phys. Lett. [**B**]{}*]{} [**400**]{} [(1997)]{} [206]{}. A.J. Buras, A. Kwiatkowski and N. Pott, [hep-ph]{} [9707482]{}. F.E. Low, [*Phys. Rev.*]{} [**110**]{} [(1958)]{} [974]{}. G.-L. Lin, J. Liu and Y.-P. Yao, [*Phys. Rev. Lett.*]{} [**64**]{} [(1990)]{} [1498]{}, [*Phys. Rev. **D***]{} [**42**]{} [(1990)]{} [2314]{}. G. Hiller and E. O. Iltan, [*Phys. Lett. [**B**]{}*]{} [**409**]{} [(1997)]{} [344]{}. C. H. V. Chang, G. L. Lin and Y. P. Yao, [hep-ph]{} [9705345]{}. L. Reina, G. Ricciardi and A. Soni, [hep-ph]{} [9706253]{} H. Simma and D. Wyler, [*Nucl. Phys. [**B**]{}*]{} [**344**]{} [(1990)]{} [283]{}. S. Herrlich and J. Kalinowski, [*Nucl. Phys. [**B**]{}*]{} [**381**]{} [(1992)]{} [501]{}. T. M. Aliev, G. Hiller and E. O. Iltan, [hep-ph]{} [9708382]{}. CLEO Collaboration, M. S. Alam et al. , [*Phys. Rev. Lett.* ]{} [**74**]{} [(1995)]{} [2885]{}. M. Ciuchini, E. Franco, G. Martinelli, L. Reina and L. Silvestrini, in ref. [@lo]. A.J. Buras, M. Misiak, M. Münz and S. Pokorski, in ref. [@lo]. L. Reina, G. Ricciardi and A. Soni, [*Phys. Lett. [**B**]{}*]{} [396]{} [(1997)]{} [231]{} and references therein. A. Ali and C. Greub, [*Z. Physik [**C**]{}*]{} [**60**]{} [(1993)]{} [433]{}. L3 Collaboration, M. Acciarri et al. , [*Phys. Lett. [**B**]{}*]{} [**363**]{} [(1995)]{} [127]{}. H.P. Nilles, [*Phys. Rep.*]{} [**110**]{} [(1984)]{} [1]{};\ H.E. Haber and G.L. Kane, [*Phys. Rep.*]{} [**117**]{} [(1985)]{} [75]{};\ A.B. Lahanas, D.V. Nanopoulos, [*Phys. Rep.*]{} [**145**]{} [(1987)]{} [1]{}. S. Bertolini, F. Borzumati, A. Masiero and G. Ridolfi, [*Nucl. Phys. [**B**]{}*]{} [**353**]{} [(1991)]{} [591]{}. R. Barbieri and G. F. Giudice, [*Phys. Lett. [**B**]{}*]{} [**309**]{} [(1993)]{} [86]{}. S. Bertolini and F. Vissani, [*Z. Phys. [**C**]{}*]{} [67]{} [(1995)]{} [513]{}. N. Oshimo, [*Nucl. Phys. [**B**]{}*]{} [404]{} [(1993)]{} [20]{};\ Y. Okada, [*Phys. Lett. [**B**]{}*]{} [315]{} [(1993)]{} [119]{};\ R. Garisto and J.N. Ng, [*Phys. Lett. [**B**]{}*]{} [315]{} [(1993)]{} [372]{};\ F. Borzumati, [*Z. Phys. [**C**]{}*]{} [63]{} [(1994)]{} [291]{};\ S. Bertolini and F. Vissani, in ref. [@bevi];\ V. Barger, M.S. Berger, P. Ohmann and R.J.N. Phillips, [*Phys. Rev. [**D**]{}*]{} [51]{} [(1995)]{} [2438]{};\ B. de Carlos and J.A. Casas, [*Phys. Lett. [**B**]{}*]{} [349]{} [(1995)]{}[300]{};\ H. Baer and M. Brhlik, [*Phys. Rev. [**D**]{}*]{} [55]{} [(1997)]{} [3201]{}. S. Ferrara and E. Remiddi, [*Phys. Lett. [**B**]{}*]{} [**53**]{} [(1974)]{} [347]{}. ----------------------------- ---------------------------- $\alpha_{s}(m_Z)$ $0.118$ $\alpha_{em}$ 1/129 $m_{Z}$ $91.19$ GeV $m_{W}$ $80.33$ GeV $m_{t}$ $175$ GeV $m_b$ $4.8$ GeV $m_c$ $1.4$ GeV $m_s$ $0.150$ GeV $|V_{ts}^*V_{tb}|/|V_{cb}|$ $0.976$ $|V_{ts}^*V_{tb}|$ $4\times 10^{-2}$ $m_{B_s}$ $5.37$ GeV $f_{B_s}$ $0.2$ GeV $\Gamma(B_s)$ $4.09 \times 10^{-13}$ GeV $BR(B\to X_c e\bar\nu_e)$ $10.4 \times 10^{-2}$ ----------------------------- ---------------------------- : Values of the input parameters used in the numerical calculations.[]{data-label="inputs"} [^1]: Address from December 1997: CERN, Th-Division, CH-1211 Genève 23, Switzerland
--- abstract: 'An error analysis for some Newton-Cotes quadrature formulae is presented. Peano-like error bounds are obtained. They are generally, but not always, better than the usual Peano bounds.' author: - | Nenad Ujević\ Department of Mathematics\ University of Split\ Teslina 12/III, 21000 Split\ CROATIA title: 'Peano-like bounds for some Newton-Cotes Formulae' --- **Keywords:** Simpson’s rule, 3/8 Simpson rule, Boole’s rule, Peano-like bounds. **MSC:** 26D10, 41A55. Introduction ============ In this paper we present an error analysis for some Newton-Cotes quadrature formulae. We consider Simpson’s rule, the 3/8 Simpson rule and Boole’s rule. A similar error analysis for Simpson’s rule has been investigated more recently ([@C1], [@DAC1], [@DCR1], [@DPW1], [@PPUV1]) with the view of obtaining bounds on the quadrature rule in terms of a variety of norms involving, at most, the first derivative. It is well known that if the mapping $f$ is not four times differentiable, or if the fourth derivative $f^{(4)}$ is unbounded, then we cannot apply the classical Simpson quadrature formula, which is actually one of the most frequently used quadrature formulas in practical applications. Thus, the above-mentioned analysis, as well as the analysis presented here, is important. The current work gives explicit error bounds for the above-mentioned Newton-Cotes quadrature rules, using results from the modern theory of inequalities. The inequalities used are known in the literature as inequalities of Ostrowski-Grüss type. The error bounds are expressed in terms of second derivatives. As we have already mentioned for Simpson’s rule, the general approach used in the past involves the assumption of bounded derivatives of order higher than two. We also mention that the obtained results can be derived using the Peano kernel theorem. In any case, these bounds are generally, but not always, better than the usual Peano error bounds (see Remarks \[R1\], \[R2\] and \[R3\]). Here we do not consider composite quadrature rules, since they can be formed in the usual way. However, the analysis presented here allows one to determine the partition required to ensure that the result lies within a prescribed error tolerance. In Section 2 we establish some auxiliary results. We use these results in the subsequent sections. In Section 3 we consider Simpson’s rule. In Section 4 we consider the 3/8 Simpson rule and in Section 5 we consider Boole’s rule. Preliminary results =================== \[L1\]Let $I\subset R$ be an open interval and $a,b\in I,$ $a<b.$ Let $f:I\rightarrow R$ be a twice differentiable function and let $x\in \left[ a,b\right] $ be a fixed element. Then we have$$\begin{aligned} &&f(x)(b-a)-(x-\frac{a+b}{2})\left[ f(b)-f(a)\right] -\int\limits_{a}^{b}f(t)dt \label{j1} \\ &=&\frac{1}{b-a}\int\limits_{a}^{b}\int\limits_{a}^{b}p(x,t)p(t,s)f^{\prime \prime }(s)dsdt, \notag\end{aligned}$$where$$p(x,t)=\left\{ \begin{array}{c} t-a,\quad t\in \left[ a,x\right] \\ t-b,\quad t\in \left( x,b\right]\end{array}\right. . \label{j2}$$ Integrating by parts, we have$$\int\limits_{a}^{b}p(x,t)f^{\prime }(t)dt=f(x)(b-a)-\int\limits_{a}^{b}f(t)dt,$$i.
e.$$f(x)=\frac{1}{b-a}\int\limits_{a}^{b}f(t)dt+\frac{1}{b-a}\int\limits_{a}^{b}p(x,t)f^{\prime }(t)dt.$$If we substitute $f\rightarrow f^{\prime }$ in the above relation, then we get$$f^{\prime }(t)=\frac{1}{b-a}\left[ f(b)-f(a)\right] +\frac{1}{b-a}\int\limits_{a}^{b}p(t,s)f^{\prime \prime }(s)ds.$$Thus,$$\begin{aligned} &&\int\limits_{a}^{b}p(x,t)f^{\prime }(t)dt \\ &=&\int\limits_{a}^{b}p(x,t)\left[ \frac{1}{b-a}\left[ f(b)-f(a)\right] +\frac{1}{b-a}\int\limits_{a}^{b}p(t,s)f^{\prime \prime }(s)ds\right] dt.\end{aligned}$$We also have$$\int\limits_{a}^{b}p(x,t)dt=(b-a)(x-\frac{a+b}{2}).$$From the above relations it follows$$\begin{aligned} &&\frac{1}{b-a}\int\limits_{a}^{b}\int\limits_{a}^{b}p(x,t)p(t,s)f^{\prime \prime }(s)dsdt \\ &=&f(x)(b-a)-(x-\frac{a+b}{2})\left[ f(b)-f(a)\right] -\int\limits_{a}^{b}f(t)dt.\end{aligned}$$ \[L2\]Let $p(x,t)$ be defined by (\[j2\]). Then we have$$\begin{aligned} q(x,s) &=&\int\limits_{a}^{b}p(x,t)p(t,s)dt \label{j3} \\ &=&\left\{ \begin{array}{c} (b-a)(x-\frac{a+b}{2})(s-a)-\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,x\right] \\ (b-a)(x-\frac{a+b}{2})(s-b)-\frac{b-a}{2}(s-b)^{2},\quad s\in \left( x,b\right]\end{array}\right. , \notag\end{aligned}$$where $x\in \left[ a,b\right] .$ For $s\in \left[ a,x\right] $ we have$$\begin{aligned} q(x,s) &=&\int\limits_{a}^{s}(t-a)(s-b)dt+\int\limits_{s}^{x}(t-a)(s-a)dt+\int\limits_{x}^{b}(t-b)(s-a)dt \\ &=&(s-b)\frac{(s-a)^{2}}{2}+(s-a)\frac{(x-a)^{2}-(s-a)^{2}}{2}-(s-a)\frac{(x-b)^{2}}{2} \\ &=&(b-a)(x-\frac{a+b}{2})(s-a)-\frac{b-a}{2}(s-a)^{2}.\end{aligned}$$For $s\in \left( x,b\right] $ we have$$\begin{aligned} q(x,s) &=&\int\limits_{a}^{x}(t-a)(s-b)dt+\int\limits_{x}^{s}(t-b)(s-b)dt+\int\limits_{s}^{b}(t-b)(s-a)dt \\ &=&(s-a)\frac{(x-a)^{2}}{2}+(s-b)\frac{(x-b)^{2}+(s-b)^{2}}{2}-(s-a)\frac{(s-b)^{2}}{2} \\ &=&(b-a)(x-\frac{a+b}{2})(s-b)-\frac{b-a}{2}(s-b)^{2}.\end{aligned}$$From the above relations we see that (\[j3\]) holds. \[C1\]Let the assumptions of Lemma \[L2\] be satisfied. Then we have$$\begin{aligned} q(a,s) &=&-\frac{1}{2}(b-a)^{2}(s-b)-\frac{b-a}{2}(s-b)^{2}, \\ q(b,s) &=&\frac{1}{2}(b-a)^{2}(s-a)-\frac{b-a}{2}(s-a)^{2}, \\ q(\frac{a+b}{2},s) &=&\left\{ \begin{array}{c} -\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,\frac{a+b}{2}\right] \\ -\frac{b-a}{2}(s-b)^{2},\quad s\in \left( \frac{a+b}{2},b\right]\end{array}\right. , \\ q(\frac{3a+b}{4},s) &=&\left\{ \begin{array}{c} -\frac{1}{4}(b-a)^{2}(s-a)-\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,\frac{3a+b}{4}\right] \\ -\frac{1}{4}(b-a)^{2}(s-b)-\frac{b-a}{2}(s-b)^{2},\quad s\in \left( \frac{3a+b}{4},b\right]\end{array}\right. , \\ q(\frac{a+3b}{4},s) &=&\left\{ \begin{array}{c} \frac{1}{4}(b-a)^{2}(s-a)-\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,\frac{a+3b}{4}\right] \\ \frac{1}{4}(b-a)^{2}(s-b)-\frac{b-a}{2}(s-b)^{2},\quad s\in \left( \frac{a+3b}{4},b\right]\end{array}\right. , \\ q(\frac{2a+b}{3},s) &=&\left\{ \begin{array}{c} -\frac{1}{6}(b-a)^{2}(s-a)-\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,\frac{2a+b}{3}\right] \\ -\frac{1}{6}(b-a)^{2}(s-b)-\frac{b-a}{2}(s-b)^{2},\quad s\in \left( \frac{2a+b}{3},b\right]\end{array}\right. , \\ q(\frac{a+2b}{3},s) &=&\left\{ \begin{array}{c} \frac{1}{6}(b-a)^{2}(s-a)-\frac{b-a}{2}(s-a)^{2},\quad s\in \left[ a,\frac{a+2b}{3}\right] \\ \frac{1}{6}(b-a)^{2}(s-b)-\frac{b-a}{2}(s-b)^{2},\quad s\in \left( \frac{a+2b}{3},b\right]\end{array}\right. 
.\end{aligned}$$ Simpson’s rule ============== \[T1\]Let the assumptions of Lemma \[L1\] hold and let $\gamma ,\Gamma $ be real numbers such that $\gamma \leq f^{\prime \prime }(t)\leq \Gamma ,$ $\forall t\in \left[ a,b\right] $. Then we have$$\left| \frac{b-a}{6}\left[ f(a)+4f(\frac{a+b}{2})+f(b)\right] -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{\Gamma -\gamma }{162}(b-a)^{3}. \label{j7}$$ For $x=a$ the left-hand side in (\[j1\]) is equal to$$(b-a)\frac{f(b)+f(a)}{2}-\int\limits_{a}^{b}f(t)dt. \label{j8}$$For $x=b$ the left-hand side in (\[j1\]) is equal to$$(b-a)\frac{f(b)+f(a)}{2}-\int\limits_{a}^{b}f(t)dt. \label{j9}$$For $x=\frac{a+b}{2}$ the left-hand side in (\[j1\]) is equal to$$(b-a)f(\frac{a+b}{2})-\int\limits_{a}^{b}f(t)dt. \label{j10}$$If we now multiply (\[j10\]) by 2 and add (\[j8\]) or (\[j9\]) then we get$$\frac{b-a}{2}\left[ f(a)+4f(\frac{a+b}{2})+f(b)\right] -3\int\limits_{a}^{b}f(t)dt. \label{j11a}$$The corresponding right-hand side is$$\begin{aligned} R(a,b) &=&\frac{1}{b-a}\int\limits_{a}^{b}\int\limits_{a}^{b}\left[ 2p(\frac{a+b}{2},t)+p(a,t)\right] p(t,s)f^{\prime \prime }(s)dtds \label{j13} \\ &=&\frac{1}{b-a}\int\limits_{a}^{b}\left[ 2q(\frac{a+b}{2},s)+q(a,s)\right] f^{\prime \prime }(s)ds. \notag\end{aligned}$$Let $K_{1}(s)=2q(\frac{a+b}{2},s)+q(a,s)$. Then we have$$\int\limits_{a}^{b}K_{1}(s)ds=0$$such that$$\frac{1}{b-a}\int\limits_{a}^{b}K_{1}(s)f^{\prime \prime }(s)ds=\frac{1}{b-a}\int\limits_{a}^{b}K_{1}(s)\left[ f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right] ds$$and$$\begin{aligned} \left| R(a,b)\right| &\leq &\frac{1}{b-a}\underset{s\in \left[ a,b\right] }{\max }\left| f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right| \int\limits_{a}^{b}\left| K_{1}(s)\right| ds \\ &\leq &\frac{\Gamma -\gamma }{2(b-a)}\int\limits_{a}^{b}\left| K_{1}(s)\right| ds,\end{aligned}$$since$$\underset{s\in \left[ a,b\right] }{\max }\left| f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right| \leq \frac{\Gamma -\gamma }{2}. \label{zvj}$$ Hence,$$\left| R(a,b)\right| \leq \frac{\Gamma -\gamma }{2(b-a)}\int\limits_{a}^{b}\left| 2q(\frac{a+b}{2},s)+q(a,s)\right| ds. \label{j14}$$We now calculate$$\begin{aligned} &&\int\limits_{a}^{b}\left| 2q(\frac{a+b}{2},s)+q(a,s)\right| ds \label{j15} \\ &=&\int\limits_{a}^{\frac{a+b}{2}}\left| -2\frac{b-a}{2}(s-a)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}\right| ds \notag \\ &&+\int\limits_{\frac{a+b}{2}}^{b}\left| -2\frac{b-a}{2}(s-b)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}\right| ds. \notag\end{aligned}$$From the equation$$-2\frac{b-a}{2}(s-a)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}=0$$we find the solutions$$s_{1}=a,\quad s_{2}=\frac{2a+b}{3}. \label{j16}$$From the equation$$-2\frac{b-a}{2}(s-b)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}=0$$we find the solutions$$s_{3}=b,\quad s_{4}=\frac{a+2b}{3}. 
\label{j17}$$From (\[j15\])-(\[j17\]) we have$$\begin{aligned} &&\int\limits_{a}^{b}\left| 2q(\frac{a+b}{2},s)+q(a,s)\right| ds \\ &=&\int\limits_{a}^{\frac{2a+b}{3}}\left[ -2\frac{b-a}{2}(s-a)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}\right] ds \\ &&+\int\limits_{\frac{2a+b}{3}}^{\frac{a+b}{2}}\left[ 2\frac{b-a}{2}(s-a)^{2}+\frac{(b-a)^{2}}{2}(s-b)+\frac{b-a}{2}(s-b)^{2}\right] ds \\ &&+\int\limits_{\frac{a+b}{2}}^{\frac{a+2b}{3}}\left[ 2\frac{b-a}{2}(s-b)^{2}+\frac{(b-a)^{2}}{2}(s-b)+\frac{b-a}{2}(s-b)^{2}\right] ds \\ &&+\int\limits_{\frac{a+2b}{3}}^{b}\left[ -2\frac{b-a}{2}(s-b)^{2}-\frac{(b-a)^{2}}{2}(s-b)-\frac{b-a}{2}(s-b)^{2}\right] ds \\ &=&4\frac{1}{108}(b-a)^{4}=\frac{1}{27}(b-a)^{4}.\end{aligned}$$From the above relation and (\[j11a\])–(\[j14\]) we get$$\left| \frac{b-a}{2}\left[ f(a)+4f(\frac{a+b}{2})+f(b)\right] -3\int\limits_{a}^{b}f(t)dt\right| \leq \frac{\Gamma -\gamma }{54}(b-a)^{3}. \label{j19}$$From (\[j19\]) we easily get (\[j7\]). \[R1\]The usual Peano error bound is$$\left| \frac{b-a}{6}\left[ f(a)+4f(\frac{a+b}{2})+f(b)\right] -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{(b-a)^{3}}{81}\left\| f^{\prime \prime }\right\| _{\infty }. \label{p1}$$If we choose$$\gamma =\underset{s\in \left[ a,b\right] }{\inf }f^{\prime \prime }(s)\text{, \ }\Gamma =\underset{s\in \left[ a,b\right] }{\sup }f^{\prime \prime }(s)$$then $\frac{\Gamma -\gamma }{2}\leq \left\| f^{\prime \prime }\right\| _{\infty }$ and it is obvious that (\[j7\]) is better than (\[p1\]). In fact, these two bounds are equal if and only if $\Gamma =-\gamma .$ This case ($\Gamma =-\gamma $) is very rare in practice. Specially, if $\Gamma $ is large and $\Gamma \approx \gamma $ then (\[j7\]) is much better than (\[p1\]). 3/8 Simpson rule ================ \[T2\]Under the assumptions of Theorem \[T1\] we have$$\left| \frac{b-a}{8}\left[ f(a)+3f(\frac{2a+b}{3})+3f(\frac{a+2b}{3})+f(b)\right] -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{\Gamma -\gamma }{384}(b-a)^{3}. \label{j21}$$ If we substitute $x=\frac{2a+b}{3}$ in (\[j1\]) then we get the corresponding left-hand side$$\frac{b-a}{6}\left[ f(b)-f(a)\right] +f(\frac{2a+b}{3})(b-a)-\int\limits_{a}^{b}f(t)dt. \label{j22}$$For $x=\frac{a+2b}{3}$ we have the corresponding left-hand side$$-\frac{b-a}{6}\left[ f(b)-f(a)\right] +f(\frac{a+2b}{3})(b-a)-\int\limits_{a}^{b}f(t)dt. \label{j23}$$If we now multiply (\[j22\]) and (\[j23\]) by 3 and add (\[j8\]) and (\[j9\]) then we get the left-hand side of the form$$(b-a)\left[ f(a)+3f(\frac{2a+b}{3})+3f(\frac{a+2b}{3})+f(b)\right] -8\int\limits_{a}^{b}f(t)dt. \label{j24}$$The corresponding right-hand side is$$R(a,b)=\frac{1}{b-a}\int\limits_{a}^{b}\left[ q(a,s)+3q(\frac{2a+b}{3},s)+3q(\frac{a+2b}{3},s)+q(b,s)\right] f^{\prime \prime }(s)ds. \label{j25}$$Let $K_{2}(s)=q(a,s)+3q(\frac{2a+b}{3},s)+3q(\frac{a+2b}{3},s)+q(b,s)$. Then we have$$\int\limits_{a}^{b}K_{2}(s)ds=0$$such that$$\frac{1}{b-a}\int\limits_{a}^{b}K_{2}(s)f^{\prime \prime }(s)ds=\frac{1}{b-a}\int\limits_{a}^{b}K_{2}(s)\left[ f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right] ds$$and$$\begin{aligned} \left| R(a,b)\right| &\leq &\frac{1}{b-a}\underset{s\in \left[ a,b\right] }{\max }\left| f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right| \int\limits_{a}^{b}\left| K_{2}(s)\right| ds \label{ub2} \\ &\leq &\frac{\Gamma -\gamma }{2(b-a)}\int\limits_{a}^{b}\left| K_{2}(s)\right| ds, \notag\end{aligned}$$since (\[zvj\]) holds. 
We now calculate$$\begin{aligned} &&\int\limits_{a}^{b}\left| q(a,s)+3q(\frac{2a+b}{3},s)+3q(\frac{a+2b}{3},s)+q(b,s)\right| ds \\ &=&\int\limits_{a}^{\frac{2a+b}{3}}\left| \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-b)^{2}-\frac{7}{2}(b-a)(s-a)^{2}\right| ds \\ &&+\int\limits_{\frac{2a+b}{3}}^{\frac{a+2b}{3}}\left| 2(b-a)(\left[ (s-a)^{2}+(s-b)^{2}\right] -(b-a)^{3}\right| ds \\ &&+\int\limits_{\frac{a+2b}{3}}^{b}\left| \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-a)^{2}-\frac{7}{2}(b-a)(s-b)^{2}\right| ds.\end{aligned}$$From the equation$$\frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-b)^{2}-\frac{7}{2}(b-a)(s-a)^{2}=0.$$we find the solutions$$s_{1}=a,\quad s_{2}=\frac{3a+b}{4}. \label{j210}$$From the equation$$2(b-a)(\left[ (s-a)^{2}+(s-b)^{2}\right] -(b-a)^{3}=0$$we find the solution$$s_{3}=\frac{a+b}{2}. \label{j211}$$From the equation$$\frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-a)^{2}-\frac{7}{2}(b-a)(s-b)^{2}=0$$we find the solutions$$s_{4}=b,\quad s_{5}=\frac{a+3b}{4}. \label{j212}$$From the above relations we get$$\begin{aligned} &&\int\limits_{a}^{b}\left| q(a,s)+3q(\frac{2a+b}{3},s)+3q(\frac{a+2b}{3},s)+q(b,s)\right| ds \\ &=&\int\limits_{a}^{\frac{3a+b}{4}}\left[ \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-b)^{2}-\frac{7}{2}(b-a)(s-a)^{2}\right] ds \\ &&-\int\limits_{\frac{3a+b}{4}}^{\frac{2a+b}{3}}\left[ \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-b)^{2}-\frac{7}{2}(b-a)(s-a)^{2}\right] ds \\ &&+\int\limits_{\frac{2a+b}{3}}^{\frac{a+2b}{3}}\left[ 2(b-a)((s-b)^{2}+(s-a)^{2})-(b-a)^{3}\right] ds \\ &&-\int\limits_{\frac{a+2b}{3}}^{\frac{a+3b}{4}}\left[ \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-a)^{2}-\frac{7}{2}(b-a)(s-b)^{2}\right] ds \\ &&+\int\limits_{\frac{a+3b}{4}}^{b}\left[ \frac{1}{2}(b-a)^{3}-\frac{1}{2}(b-a)(s-a)^{2}-\frac{7}{2}(b-a)(s-b)^{2}\right] ds \\ &=&\frac{1}{24}(b-a)^{4}.\end{aligned}$$From the above relation and (\[j24\]) and (\[ub2\]) we get$$\left| (b-a)\left[ f(a)+3f(\frac{2a+b}{3})+3f(\frac{a+2b}{3})+f(b)\right] -8\int\limits_{a}^{b}f(t)dt\right| \leq \frac{\Gamma -\gamma }{48}(b-a)^{3}.$$This completes the proof. \[R2\] The usual Peano error bound is$$\left| \frac{b-a}{8}\left[ f(a)+3f(\frac{2a+b}{3})+3f(\frac{a+2b}{3})+f(b)\right] -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{\left\| f^{\prime \prime }\right\| _{\infty }}{192}(b-a)^{3}.$$For the reasons given in Remark \[R1\] the estimation obtained in Theorem \[T2\], which is a Peano-like bound, is better than the above Peano bound. Boole’s rule ============ \[T3\]Under the assumptions of Theorem \[T2\] we have$$\begin{array}{c} \left| \frac{b-a}{90}\left[ 7f(a)+32f(\frac{3a+b}{4})+12f(\frac{a+b}{2})+32f(\frac{a+3b}{4})+7f(b)\right] \right. \\ \left. -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{509}{273\,375}(\Gamma -\gamma )(b-a)^{3}.\end{array} \label{j320}$$ We first write left-hand sides of (\[j1\]) for $x=\frac{3a+b}{4}$ and $x=\frac{a+3b}{4}.$ We have$$f(\frac{3a+b}{4})+\frac{b-a}{4}\left[ f(b)-f(a)\right] -\int\limits_{a}^{b}f(t)dt \label{j321}$$and$$f(\frac{a+3b}{4})-\frac{b-a}{4}\left[ f(b)-f(a)\right] -\int\limits_{a}^{b}f(t)dt. \label{j322}$$If we now multiply (\[j8\]) and (\[j9\]) by 7, (\[j10\]) by 12, ([j321]{}) and (\[j322\]) by 32 and sum the obtained results then we get the left-hand side of the form$$(b-a)\left[ 7f(a)+32f(\frac{3a+b}{4})+12f(\frac{a+b}{2})+32f(\frac{a+3b}{4})+7f(b)\right] -90\int\limits_{a}^{b}f(t)dt. 
\label{j323}$$For the corresponding right-hand side we get$$\begin{aligned} &&R(a,b) \label{j327} \\ &=&\frac{1}{b-a}\int\limits_{a}^{b}\left[ 7q(a,s)+32q(\frac{3a+b}{4},s)+12q(\frac{a+b}{2},s)+32q(\frac{a+3b}{4},s)\right. \notag \\ &&\left. +7q(b,s)\right] f^{\prime \prime }(s)ds. \notag\end{aligned}$$ Let $K_{3}(s)=7q(a,s)+32q(\frac{3a+b}{4},s)+12q(\frac{a+b}{2},s)+32q(\frac{a+3b}{4},s)+7q(b,s)$. Then we have$$\int\limits_{a}^{b}K_{3}(s)ds=0$$such that$$\frac{1}{b-a}\int\limits_{a}^{b}K_{3}(s)f^{\prime \prime }(s)ds=\frac{1}{b-a}\int\limits_{a}^{b}K_{3}(s)\left[ f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right] ds$$and$$\begin{aligned} \left| R(a,b)\right| &\leq &\frac{1}{b-a}\underset{s\in \left[ a,b\right] }{\max }\left| f^{\prime \prime }(s)-\frac{\Gamma +\gamma }{2}\right| \int\limits_{a}^{b}\left| K_{3}(s)\right| ds \\ &\leq &\frac{\Gamma -\gamma }{2(b-a)}\int\limits_{a}^{b}\left| K_{3}(s)\right| ds,\end{aligned}$$since (\[zvj\]) holds. Hence,$$\begin{aligned} \left| R(a,b)\right| &\leq &\frac{\Gamma -\gamma }{2(b-a)}\int\limits_{a}^{b}\left| 7q(a,s)+32q(\frac{3a+b}{4},s)+12q(\frac{a+b}{2},s)+32q(\frac{a+3b}{4},s)\right. \\ &&\left. +7q(b,s)\right| ds.\end{aligned}$$We now calculate$$\begin{aligned} &&\int\limits_{a}^{b}\left| 7q(a,s)+32q(\frac{3a+b}{4},s)+12q(\frac{a+b}{2},s)+32q(\frac{a+3b}{4},s)+7q(b,s)\right| ds \\ &=&\int\limits_{a}^{\frac{3a+b}{4}}\left| -\frac{7(b-a)}{2}\left[ (s-b)^{2}+(s-a)^{2}\right] -38(b-a)(s-a)^{2}+\frac{7}{2}(b-a)^{3}\right| ds \\ &&+\int\limits_{\frac{3a+b}{4}}^{\frac{a+b}{2}}\left| 39(b-a)\left[ \frac{(b-a)^{2}}{4}+(s-\frac{a+b}{2})^{2}\right] -\frac{23}{2}(b-a)^{3}+6(b-a)(s-a)^{2}\right| ds \\ &&+\int\limits_{\frac{a+b}{2}}^{\frac{a+3b}{4}}\left| 39(b-a)\left[ \frac{(b-a)^{2}}{4}+(s-\frac{a+b}{2})^{2}\right] -\frac{23}{2}(b-a)^{3}+6(b-a)(s-b)^{2}\right| ds \\ &&+\int\limits_{\frac{a+3b}{4}}^{b}\left| -\frac{7(b-a)}{2}\left[ (s-b)^{2}+(s-a)^{2}\right] -38(b-a)(s-b)^{2}+\frac{7}{2}(b-a)^{3}\right| ds.\end{aligned}$$Let us denote the integrands of the right-hand side of the above relation by $Q_{1}(s)$, $Q_{2}(s)$, $Q_{3}(s)$ and $Q_{4}(s)$, respectively. These integrands have the next zero points$$s_{0}=a,\text{ }s_{1}=\frac{38a+7b}{45},\text{ }s_{2}=\frac{2a+b}{3},\text{ }s_{3}=\frac{a+2b}{3},\text{ }s_{4}=\frac{7a+38b}{45},\text{ }s_{5}=b.$$From the above two relations we get$$\begin{aligned} &&\int\limits_{a}^{b}\left| 7q(a,s)+32q(\frac{3a+b}{4},s)+12q(\frac{a+b}{2},s)+32q(\frac{a+3b}{4},s)+7q(b,s)\right| ds \\ &=&\int\limits_{a}^{s_{1}}Q_{1}(s)ds-\int\limits_{s_{1}}^{\frac{3a+b}{4}}Q_{1}(s)ds-\int\limits_{\frac{3a+b}{4}}^{s_{2}}Q_{2}(s)ds+\int\limits_{s_{2}}^{\frac{a+b}{2}}Q_{2}(s)ds \\ &&+\int\limits_{\frac{a+b}{2}}^{s_{3}}Q_{3}(s)ds-\int\limits_{s_{3}}^{\frac{a+3b}{4}}Q_{3}(s)ds-\int\limits_{\frac{a+3b}{4}}^{s_{4}}Q_{4}(s)ds+\int\limits_{s_{4}}^{b}Q_{4}(s)ds \\ &=&\frac{2036}{6075}(b-a)^{4}.\end{aligned}$$Thus, we have$$\left| R(a,b)\right| \leq \frac{1018}{6075}(\Gamma -\gamma )(b-a)^{3}. \label{j331}$$From (\[j323\]), (\[j327\]) and (\[j331\]) we easily get (\[j320\]). \[R3\] The usual Peano error bound is$$\begin{array}{c} \left| \frac{b-a}{90}\left[ 7f(a)+32f(\frac{3a+b}{4})+12f(\frac{a+b}{2})+32f(\frac{a+3b}{4})+7f(b)\right] \right. \\ \left. -\int\limits_{a}^{b}f(t)dt\right| \leq \frac{1018}{273\,375}\left\| f^{\prime \prime }\right\| _{\infty }(b-a)^{3}.\end{array}$$For the reasons given in Remark \[R1\] the estimation obtained in Theorem \[T3\], which is a Peano-like bound, is better than the above Peano bound. [9]{} P. 
Cerone, Three points rules in numerical integration, Nonlinear Anal.-Theory Methods Appl. 47 (4), (2001), 2341–2352. S. S. Dragomir, R. P. Agarwal and P. Cerone, On Simpson’s inequality and applications, J. Inequal. Appl., 5 (2000), 533–579. S. S. Dragomir, P. Cerone and J. Roumeliotis, A new generalization of Ostrowski’s integral inequality for mappings whose derivatives are bounded and applications in numerical integration and for special means, Appl. Math. Lett., 13 (2000), 19–25. S. S. Dragomir, J. Pečarić and S. Wang, The unified treatment of trapezoid, Simpson and Ostrowski type inequalities for monotonic mappings and applications, Math. Comput. Modelling, 31 (2000), 61–70. A. Ghizzetti and A. Ossicini, Quadrature formulae, Birkhäuser Verlag, Basel/Stuttgart, 1970. D. S. Mitrinović, J. Pečarić and A. M. Fink, Inequalities involving functions and their integrals and derivatives, Kluwer Acad. Publ., Dordrecht, 1991. C. E. M. Pearce, J. Pečarić, N. Ujević and S. Varošanec, Generalizations of some inequalities of Ostrowski-Grüss type, Math. Inequal. Appl., 3(1), (2000), 25–34.
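A quick numerical sanity check of Theorems \[T1\], \[T2\] and \[T3\] is easy to carry out. The Python sketch below (illustrative only, not part of the error analysis) compares the actual error of each rule on a single interval with the corresponding Peano-like bound, estimating $\gamma=\inf f^{\prime\prime}$ and $\Gamma=\sup f^{\prime\prime}$ on a fine grid for the test function $f(t)=e^t$ on $[0,1]$.

```python
import numpy as np

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def simpson38(f, a, b):
    return (b - a) / 8 * (f(a) + 3 * f((2 * a + b) / 3) + 3 * f((a + 2 * b) / 3) + f(b))

def boole(f, a, b):
    return (b - a) / 90 * (7 * f(a) + 32 * f((3 * a + b) / 4) + 12 * f((a + b) / 2)
                           + 32 * f((a + 3 * b) / 4) + 7 * f(b))

f, f2 = np.exp, np.exp            # test integrand and its second derivative
a, b = 0.0, 1.0
exact = np.e - 1.0                # exact value of the integral of e^t over [0, 1]

s = np.linspace(a, b, 10001)      # grid estimates of gamma = inf f'' and Gamma = sup f''
gamma, Gamma = f2(s).min(), f2(s).max()
h3 = (b - a) ** 3

rules = [("Simpson    ", simpson,   (Gamma - gamma) / 162 * h3),           # Theorem T1
         ("3/8 Simpson", simpson38, (Gamma - gamma) / 384 * h3),           # Theorem T2
         ("Boole      ", boole,     509 * (Gamma - gamma) / 273375 * h3)]  # Theorem T3

for name, rule, bound in rules:
    err = abs(rule(f, a, b) - exact)
    print(f"{name}  error = {err:.3e}   Peano-like bound = {bound:.3e}")
    assert err <= bound
```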
--- abstract: 'Given a fixed graph $H$, a real number $p\in(0,1)$, and an infinite Erdős-Rényi graph $G\sim G(\infty,p)$, how many adjacency queries do we have to make to find a copy of $H$ inside $G$ with probability $1/2$? Determining this number $f(H,p)$ is a variant of the [*subgraph query problem*]{} introduced by Ferber, Krivelevich, Sudakov, and Vieira. For every graph $H$, we improve the trivial upper bound of $f(H,p) = O(p^{-d})$, where $d$ is the degeneracy of $H$, by exhibiting an algorithm that finds a copy of $H$ in time $o(p^{-d})$ as $p$ goes to $0$. Furthermore, we prove that there are $2$-degenerate graphs which require $p^{-2+o(1)}$ queries, showing for the first time that there exist graphs $H$ for which $f(H,p)$ does not grow like a constant power of $p^{-1}$ as $p$ goes to $0$. Finally, we answer a question of Feige, Gamarnik, Neeman, Rácz, and Tetali by showing that for any $\delta < 2$, there exists $\alpha < 2$ such that one cannot find a clique of order $\alpha \log_2 n$ in $G(n,1/2)$ in $n^\delta$ queries.' author: - 'Ryan Alweiss[^1] Chady Ben Hamida Xiaoyu He[^2] Alexander Moreira' title: On the subgraph query problem --- Introduction ============ The [*subgraph query problem*]{}, introduced by Ferber, Krivelevich, Sudakov, and Vieira [@fksv], has been the subject of recent attention in extremal combinatorics and theoretical computer science. The problem is to determine the smallest number of adaptive queries of the form “is $(u,v) \in E(G)$?" that need to be made to an Erdős-Rényi random graph $G\sim G(n,p)$ to find a copy of a given subgraph $H$ with probability at least $\frac{1}{2}$. Several variants of the problem appear in the literature. Ferber, Krivelevich, Sudakov, and Vieira [@fksv; @fksv2] first studied the subgraph query problem when $H$ is a long path or cycle of order comparable to $n$, exhibiting asymptotically optimal algorithms for finding long paths and cycles. For example, as long as $p\ge \frac{\log n + \log \log n +\omega(1)}{n}$ is above the threshold for Hamiltonicity in $G(n,p)$, they showed that a Hamiltonian cycle can be found by the time one reveals $(1+o(1))n$ edges. Here and henceforth we write $\log$ for the natural logarithm and $\lg$ for the base-$2$ logarithm. In connection with the online Ramsey number, Conlon, Fox, Grinshpun, and He [@cfgh] studied the case where $H = K_m$ is a fixed complete graph, $p\rightarrow 0$, and the number of vertices $n$ is allowed to be arbitrarily large. They defined the function $f(H,p)$ to be the number of queries needed to find a copy of $H$ in the countably infinite random graph $G(\infty, p)$ with probability $\frac{1}{2}$, and proved that $$\label{eq:cfgh} p^{-(2-\sqrt{2})m + O(1)} \le f(K_m, p) \le p^{-\frac{2}{3} m - O(1)}.$$ In this paper, we study the behavior of $f(H,p)$ as $p\rightarrow 0$ for an arbitrary fixed graph $H$. We will use the phrases “build $H$ in $T$ time" and “find $H$ in $T$ queries" interchangeably for the statement $f(H,p) \le T$. Recall that a graph $H$ is [*$d$-degenerate*]{} if every subgraph of $H$ contains a vertex of degree at most $d$, and the [*degeneracy*]{} of $H$ is the least $d$ for which $H$ is $d$-degenerate. Equivalently, $H$ is $d$-degenerate if and only if there is an acyclic orientation of $H$ with maximum outdegree at most $d$. Degeneracy is the natural notion of sparsity in graph Ramsey theory (see e.g. the recent proof of the Burr-Erdős conjecture by Lee [@cl]). 
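Since the degeneracy order is used repeatedly in what follows, here is a short self-contained Python sketch (not part of the paper) computing the degeneracy of a graph together with an ordering in which every vertex has at most $d$ neighbors preceding it, by repeatedly deleting a vertex of minimum degree.

```python
def degeneracy_ordering(adj):
    """adj: dict {vertex: set of neighbors} of an undirected graph.
    Returns (d, order): d is the degeneracy, and `order` lists the vertices
    so that each vertex has at most d neighbors appearing earlier in `order`."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    removal = []
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex of minimum degree
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        removal.append(v)
    return d, removal[::-1]   # reversed removal order: <= d earlier neighbors each

# Example: the complete bipartite graph K_{2,3} is 2-degenerate.
k23 = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1}, 3: {0, 1}, 4: {0, 1}}
print(degeneracy_ordering(k23))   # d = 2, together with a valid ordering
```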
In the subgraph query problem, a $d$-degenerate graph can be built by adding one vertex at a time so that each new vertex has degree at most $d$ at the time it is built. Since a common neighbor of $d$ given vertices can be found in $O(p^{-d})$ queries, this shows that $f(H,p) = O_H(p^{-d})$ whenever $H$ is $d$-degenerate. Our first main result is that this trivial bound is never tight when $d\ge 2$. Define the [*depth*]{} $\Delta$ of a graph $H$ with degeneracy $d$ to be the smallest $\Delta$ for which there exists an acyclic orientation of $H$ with maximum outdegree at most $d$ and longest directed path of length at most $\Delta$ (we use the convention that the length of the path with $n+1$ vertices is $n$). Let $\log_t(x)$ denote the $t$-fold iterated logarithm of $x$. \[thm:upper-general\] If $H$ is a graph with degeneracy $d\ge 2$ and depth $\Delta\ge 1$, then $$f(H, p) = O_H\Big(\frac{p^{-d} \log_{\Delta + 1}(p^{-1})}{\log_{\Delta} (p^{-1})}\Big).$$ Roughly speaking, one of the main innovations is to exploit the observation that in a random graph $G(n,1/n)$, the degrees of vertices are distributed like Poisson distributions with mean $1$. Thus, the maximum degree is $\Omega(\log n/\log \log n)$ despite the fact that the average degree is constant. Repeatedly finding these vertices of exceptionally large degree allows us to find $H$ slightly faster. We will also show that the behavior in Theorem \[thm:upper-general\] can be correct up to the polylogarithmic factor. Let the triforce graph be the graph obtained from the triangle $K_3$ by adding a common neighbor to each pair of vertices (see Figure \[fig:triforce\] in Section \[sec:upper\]).[^3] \[thm:lower\] If $H$ is the triforce graph and $\ell = \frac{\log(1/p)}{2\log \log(1/p)}$, then $$f(H,p) = \Omega(p^{-2}/\ell^3).$$ Note that the triforce is $2$-degenerate, so Theorems \[thm:upper-general\] and \[thm:lower\] together prove that $f(H,p) = o(p^{-2})$ and $f(H,p) = \Omega(p^{-2+\varepsilon})$ for every $\varepsilon > 0$. This is the first example of a graph for which it is known that $f(H,p)$ does not grow like a power of $p^{-1}$. The question of querying for subgraphs in random graphs was also studied by Feige, Gamarnik, Neeman, Rácz, and Tetali [@fgnrt], and by Rácz and Schiffer in the related planted clique model [@rs]. Feige, Gamarnik, Neeman, Rácz, and Tetali restricted their attention to the balanced random graph $G(n,\frac{1}{2})$ and asked for the minimum number of queries needed to find a clique of order $(2-o(1))\lg n$ (which approaches the clique number). For $\delta < 2$, define $\alpha_\star(\delta)$ to be the supremum over $\alpha \le 2$ for which a clique of order $\alpha \lg n$ can be found in at most $n^\delta$ queries for all $n$ sufficiently large. They showed under the additional assumption that only a bounded number of rounds of adaptiveness are used that $\alpha_\star(\delta) < 2$ for all $\delta < 2$, and asked if this could be proved unconditionally. Our last theorem answers this question affirmatively. We are grateful to Huy Pham [@p] for communicating to us the main idea of the proof. \[thm:cliques\] For all $2 - \sqrt{2} \le \delta<2$, $$\alpha_\star(\delta)\le1+\sqrt{1-\frac{(2-\delta)^{2}}{2}}<2.$$ The proof is an adaptation of the lower bound proof for (\[eq:cfgh\]) by [@cfgh] to take the size of the vertex set into account. 
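Theorem \[thm:cliques\] gives an explicit curve bounding $\alpha_\star(\delta)$ away from $2$. The short Python sketch below (illustrative only) tabulates this upper bound for a few values of $\delta$ and checks that it stays strictly below $2$; the sample values of $\delta$ are arbitrary choices in the allowed range.

```python
import numpy as np

def alpha_upper(delta):
    """Upper bound on alpha_star(delta) from Theorem [thm:cliques],
    valid for 2 - sqrt(2) <= delta < 2."""
    return 1.0 + np.sqrt(1.0 - (2.0 - delta) ** 2 / 2.0)

for delta in (2 - np.sqrt(2), 0.75, 1.0, 1.25, 1.5, 1.75, 1.99):
    bound = alpha_upper(delta)
    assert bound < 2.0            # strictly below 2 for every delta < 2
    print(f"delta = {delta:.3f}   alpha_star(delta) <= {bound:.4f}")
```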
The exact value of $\alpha_\star(\delta)$ remains open for all $\delta$, and the best known lower bound is $\alpha_\star(\delta) \ge 1 + \frac{\delta}{2}$ when $1\le \delta < 2$ (see [@fgnrt Lemma 6]). [**Organization.**]{} In Section \[sec:upper\], we describe a new algorithm for finding any $d$-degenerate graph and prove that it achieves the runtime described in Theorem \[thm:upper-general\]. In Section \[sec:lower\], we give a new argument for proving lower bounds on $f(H,p)$, proving Theorem \[thm:lower\]. In Section \[sec:clique\] we give a short proof of Theorem \[thm:cliques\], using a variation of the methods in [@cfgh]. Finally, Section \[sec:closing\] highlights a few of the many open questions that remain about $f(H,p)$. We will write $b=p^{-1}$ for the expected number of queries needed to find a single edge in $G(\infty, p)$. No attempt will be made to optimise the implicit constants in any of our proofs. We use $A \lesssim B$ to mean $A=O(B)$. For the sake of clarity of presentation, we systematically omit floor and ceiling signs whenever they are not crucial. Upper bounds {#sec:upper} ============ An Illustrative Example ----------------------- As mentioned in the introduction, there is a straightforward algorithm for finding any $d$-degenerate graph $H$ in $O_H(b^d)$ time. In this section we prove Theorem \[thm:upper-general\] by providing a new algorithm that beats the trivial algorithm by an iterated logarithmic factor. We begin by illustrating how the algorithm works with a specific $2$-degenerate graph. The *triforce* is the graph on $6$ vertices and $9$ edges obtained from the triangle $K_3$ by adding a common neighbor to each pair of vertices. *(Figure \[fig:triforce\]: the triforce.)* The key step in building the triforce quickly is to build a large book. The [*book*]{} $B_{d,t}$ is the graph on $d+t$ vertices obtained by removing a clique $K_t$ from a complete graph $K_{d+t}$. The $t$ vertices of the removed clique are called the [*pages*]{} of the book and the remaining $d$ vertices are called its [*spine*]{}. Note that when $d$ and $t$ are fixed positive integers, $B_{d,t}$ is $d$-degenerate and thus $f(B_{d,t}, p) = O_t(b^d)$. The key observation is that this can be improved substantially even if we allow $t$ to grow slowly as $p$ tends to $0$. \[lem:book\] If $d\ge 2$ and $\ell=\frac{\log b}{2\log \log b}$, then $f(B_{d,\ell},p)=O(b^d \ell^{-1/2})$. We will exhibit an algorithm which finds $B_{d,\ell}$ in $G(\infty, p)$ with constant probability (w.c.p.) in $O(b^d\ell^{-1/2})$ time. The algorithm has three steps. First, we find w.c.p. $d-1$ vertices $v_1,\ldots, v_{d-1}$ of the spine forming a clique in $O(b^{d-2})$ time, which is possible because $K_{d-1}$ is $(d-2)$-degenerate. Assume this step succeeds. Next, we build a large pool $S$ of common neighbors of $v_1,\ldots, v_{d-1}$, which will serve as candidates for the remaining vertex $v_d$ of the spine and for the pages of the book. In $d-1=O(1)$ queries we can check a single new vertex $u$ to see if it is a common neighbor of $v_1,\ldots, v_{d-1}$, and $u$ has a probability $p^{d-1}$ of being such a common neighbor. We check a total of $4b^d \ell^{-1/2}$ possible $u$, and each common neighbor successfully found is added to $S$. Since the outcomes of all queries are independent, $|S|$ is distributed like the binomial random variable $\text{Bin}(4b^d\ell^{-1/2}, p^{d-1})$ with mean $4b\ell^{-1/2}$, so w.c.p. $|S| \ge 2b\ell^{-1/2}$.
For the last step, assuming the previous two steps succeed, we will find a star $K_{1, \ell}$ contained in $S$. Along with vertices $v_1,\ldots ,v_{d-1}$ already chosen, this forms the desired book. To find this star, remove vertices from $S$ until it has size exactly $2b\ell^{-1/2}$, and then query all pairs of vertices in $S$ in $O(b^2 \ell^{-1})$ time. The induced subgraph on $S$ is just an Erdős-Rényi random graph $G(2b\ell^{-1/2}, p)$. It suffices to show that w.c.p. there exists a vertex of degree at least $\ell$ therein. This fact is a consequence of the observation that the degrees are distributed like independent Poisson distributions. To give a quick proof of this fact, divide $S$ into two sets $S_1, S_2$ of size $r = b\ell^{-1/2}$, let $u_1$,…, $u_{r}$ be the vertices of $S_1$, and let $X_i$ be the number of neighbors of $u_i$ in $S_2$. Then $\{X_i\}_{i=1}^{r}$ are $r$ i.i.d. random variables distributed like $\text{Bin}(r, p)$, so $$\Pr[X_i \ge \ell] \ge \binom{r}{\ell} p^{\ell}(1-p)^{r-\ell} \ge \frac{(r - \ell)^\ell p^\ell (1-p)^{r}}{\ell!}.$$ As $p\rightarrow 0$, we can bound $r - \ell > b\ell^{-1/2}/2$, $(1-p)^{r} \rightarrow 1$, and $\ell! < \ell^\ell$. Thus, $$\Pr[X_i \ge \ell] \ge \Omega\Big(\frac{1}{2^\ell \ell^{3\ell/2}}\Big).$$ When $\ell = \frac{\log b}{2\log \log b}$, this fraction is $\Omega(b^{-\frac{3}{4} - \varepsilon})$ for any $\varepsilon > 0$. In particular, since there are $r = b^{1-o(1)}$ independent random variables $X_i$, w.c.p. some $X_i$ is at least $\ell$, as desired. Letting the vertex of degree $\ell$ be the last vertex $v_d$ of the book’s spine and its $\ell$ neighbors in $S$ be the pages of the book, we have found a copy of $B_{d,\ell}$ w.c.p. in $O(b^d)$ total queries, as desired. We are now ready to prove a stronger version of Theorem \[thm:upper-general\] when $H$ is the triforce. \[thm:upper-triforce\] If $H$ is the triforce graph and $\ell=\frac{\log b}{2\log \log b}$, then $f(H,p) = O(b^2\ell^{-\frac{1}{2}})$. We exhibit an algorithm for finding $H$ w.c.p. in $O(b^2\ell^{-\frac{1}{2}})$ time. Using Lemma \[lem:book\] with $d=2$, build a copy of $B_{2,\ell }$. Let $x$ and $y$ be the two vertices of its spine and let $Z$ be its pages. In $O(b^2\ell^{-\frac{1}{2}})$ time we can w.c.p. find two sets of vertices $S_x, S_y$, each of size $b\ell^{-\frac{1}{2}}$, so that everything in $S_x$ is adjacent to $x$ and everything in $S_y$ is adjacent to $y$. Now, we will query all pairs between $S_x$ and $Z$ as well as all pairs between $S_y$ and $Z$. This takes only $O(b\ell^{1/2})$ time, which is negligible. We claim that w.c.p. there exist $x' \in S_x$, $z \in Z$, and $y' \in S_y$ so that $x'\sim z$ and $z\sim y'$. This follows because w.c.p., $\Theta(\ell^{1/2})$ vertices of $Z$ have a neighbor in $S_x$, and among these vertices w.c.p. at least one has a neighbor in $S_y$. Now, let $z'$ be any common neighbor of $x$ and $y$ in $Z$ other than $z$. It follows that the six vertices $x,y,z,x',y',z'$ form a triforce, and we have found it in $O(b^2\ell^{-\frac{1}{2}})$ queries w.c.p., as desired. The general upper bound ----------------------- Roughly speaking, the main trick in the proofs of Lemma \[lem:book\] and Theorem \[thm:upper-triforce\] is that we can find vertices of much larger than average degree in a random graph with constant average degree. We will iterate this trick many times to prove the general statement in Theorem \[thm:upper-general\]. We will construct an arbitrary $d$-degenerate graph $H$ recursively. 
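Before describing the recursion in detail, here is a small Monte Carlo sketch (not from the paper; the parameter choices follow the proof of Lemma \[lem:book\] with $d=2$) of the book-finding step: it builds the pool $S$ of neighbours of a spine vertex and then looks for a vertex of degree at least $\ell$ inside $S$, counting adjacency queries along the way. For moderate $p$ the constants dominate and the savings over $b^2$ are not yet visible; the point is only to make the steps concrete.

```python
import math
import random
from collections import defaultdict

def find_book_d2(p, seed=0):
    """Monte Carlo sketch of the algorithm in Lemma [lem:book] for d = 2.
    Edges of G(infinity, p) are generated lazily: each pair queried for the
    first time is an independent Bernoulli(p) coin flip.  Returns whether a
    copy of B_{2, ell} was found, the number of queries used, and ell."""
    rng = random.Random(seed)
    b = 1.0 / p
    ell = max(2, int(math.log(b) / (2 * math.log(math.log(b)))))
    answers, queries = {}, 0

    def query(u, v):
        nonlocal queries
        key = (min(u, v), max(u, v))
        if key not in answers:
            queries += 1
            answers[key] = rng.random() < p
        return answers[key]

    v1 = 0                                     # the d - 1 = 1 spine vertex
    fresh = iter(range(1, 10 ** 9))            # supply of unused vertices
    S = []
    for _ in range(int(4 * b ** 2 / math.sqrt(ell))):
        u = next(fresh)
        if query(v1, u):                       # candidate neighbour of the spine
            S.append(u)
    S = S[: int(2 * b / math.sqrt(ell))]       # trim the pool as in the proof
    degree = defaultdict(int)
    for i in range(len(S)):                    # reveal the random graph inside S
        for j in range(i + 1, len(S)):
            if query(S[i], S[j]):
                degree[S[i]] += 1
                degree[S[j]] += 1
    found = max(degree.values(), default=0) >= ell
    return found, queries, ell

print(find_book_d2(p=0.01))   # for p = 0.01 one gets ell = 2 and success is near-certain
```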
If the vertices of $H$ are ordered $v_1,\ldots, v_n$ in the degeneracy order, the algorithm will maintain a “cloud" of candidates $C_i$ for the image of vertex $v_i$ which shrinks as the algorithm progresses. On step $i$, the algorithm chooses $v_i$ from $C_i$ and then shrinks the clouds corresponding to neighbors of $v_i$ to stay consistent with this choice. Let $H$ be a graph on $n$ vertices with degeneracy $d\ge 2$ and depth $\Delta$. Order its vertices $v_1,\ldots, v_n$ so that each $v_i$ has at most $d$ neighbors $v_j$ with $j<i$, and the longest left-to-right path $v_{i_0},\ldots, v_{i_r}$ with $i_0<\cdots < i_r$ has length $r=\Delta$. Let $\Delta_i$ be the length (in edges) of the longest left-to-right-path ending at $v_i$, so that $\Delta_i \le \Delta$ for all $i$. Finally, define $$L(x) = \frac{\log x}{3n \log \log x}$$ and $\ell_i = L^{(\Delta_i)}(b)$ is obtained from $b$ by iterating $L$ $\Delta_i$ times. We describe an algorithm for finding an injection $\phi$ from $H$ to $G(\infty, p)$ in a series of rounds, assuming $p$ is sufficiently small. There are many points at which the algorithm may fail. However, each round succeeds with probability $\Omega_H(1)$ conditional on the success of all previous rounds, and there are $n = O_H(1)$ rounds, so the entire algorithm succeeds with $\Omega_H(1)$ probability. The algorithm can then be repeated a number of times depending only on $H$ until its success probability reaches $\frac{1}{2}$; this only changes the implicit constant in $f(H,p)$. We begin by setting aside $n$ disjoint sets (“clouds") $C_1, \ldots, C_n$ which will change throughout the algorithm. We initialize these to $C_1^{(0)},\ldots, C_n^{(0)}$ of order $|C_i^{(0)}|=b^d/\ell_i$, so that $C_i^{(0)}$ is the set of candidates for $\phi(v_i)$. We proceed in $n$ rounds, so that $C_j^{(k)}$ will refer to the state of cloud $C_j$ after round $k$. After the $k$th round we will have nonempty disjoint sets $C_j^{(k)}$, and we always have $C_j^{(k-1)} \supset C_j^{(k)}$. In the round $k$ a number of queries are made to decide the value of $\phi(v_k)\in C_k^{(k-1)}$. Thus $C_k^{(k)}$ is the singleton $\{\phi(v_k)\}$ and $C_k$ remains a singleton until the end. For each $j$ with $j>k$ and $v_j\sim v_k$, the set $C_j^{(k-1)}$ is updated to a subset $C_j^{(k)}$ consisting of all elements of $C_j^{(k-1)}$ adjacent to $\phi(v_k)$. We say that a vertex $v_j$ is [*living*]{} after round $k$ if $j > k$, and [*dead*]{} otherwise. Two properties are maintained. The first is that after round $k$, for any $i\le k$ and $j\le n$ and any $u_i \in C_i^{(k)}$ and $u_j \in C_j^{(k)}$, $u_i\sim u_j$ if $v_i \sim v_j$. In other words, the adjacency relations are correct within the dead vertices and between the dead vertices and the clouds $C_i^{(k)}$ for the living ones. The second property is that the size of the set $C_j^{(k)}$ must be $$c_j^{(k)} \coloneqq \begin{cases} \frac{b^{d-m}}{\ell_j} & \text{if $v_j$ is living and has $m < d$ dead left-neighbors,} \\ \ell_j & \text{if $v_j$ is living and has exactly $d$ dead left-neighbors,} \\ 1 & \text{if $v_j$ is dead.} \end{cases}$$ The queries on round $k$ are made to guarantee two properties. First, on round $k$, vertices are thrown out of $C_k^{(k-1)}$ until it has exactly $\ell_k$ vertices (this is possible because $c_k^{(k-1)} \ge \ell_k$ when $b$ is sufficiently large). Then, consider the $j$ so that $j>k$ and $v_j \sim v_k$. 
If such a $v_j$ has exactly $d-1$ dead left-neighbors, then $j$ is called [*active*]{} on round $k$, and otherwise $j$ is called [*inactive*]{}. For each active $j$, all pairs in $C_k^{(k-1)} \times C_j^{(k-1)}$ are queried. Each round is divided into an [*active portion*]{}, which happens first, and then an [*inactive portion*]{}. The active portion of round $k$ succeeds if a candidate $u_k \in C_k^{(k-1)}$ is found to have at least $c_j^{(k)}$ neighbors in $C_j^{(k-1)}$ for all the active $j$. One such candidate $u_k$ is picked for $\phi(v_k)$ and $C_j^{(k)}$ is chosen to be exactly $c_j^{(k)}$ neighbors of $u_k$ in $C_j^{(k-1)}$. Then, for all inactive $j$, all pairs $\{u_k\} \times C_j^{(k-1)}$ are queried. The inactive portion of round $k$ succeeds if after these queries a total of $c_j^{(k)}$ neighbors are found for $u_k$ in $C_j^{(k-1)}$ for each of the inactive $j$’s as well. We say that the round succeeds if both the active and inactive portions succeed. The algorithm only continues past round $k$ if round $k$ succeeds. By induction on $k$, the algorithm maintains all the required properties and produces a valid injection $\phi:H\rightarrow G(\infty, p)$ if it succeeds on every round. It remains to show that the probability of success on each round is $\Omega_H (1)$. For each $u \in C_k^{(k-1)}$ and $j>k$ for which $v_j \sim v_k$, let $d_j (u)$ be number of neighbors $u$ has in $C_j^{(k-1)}$. Note that $d_j(u)$ is distributed like $\text{Bin}(c_j^{(k-1)}, p)$, since each vertex of $C_j^{(k-1)}$ is adjacent to $u$ independently with probability $p$. Suppose $j$ is active in round $k$, so that $c_j^{(k-1)} = b/\ell_j$. This time, we get $$\Pr[d_j(u) \ge c_j^{(k)}] = \Pr[{\text{Bin}}(b/\ell_j, p) \ge \ell_j] = \binom{b/\ell_j}{\ell_j}p^{\ell_j}(1-p)^{b/\ell_j - \ell_j}.$$ Using the facts that $1-x \ge e^{-2x}$ for all $x \in [0, \frac{1}{2}]$ and that $\ell_j\rightarrow \infty$ as $p\rightarrow 0^+$, we see that $(1-p)^{b/\ell_j - \ell_j} \ge e^{-2/\ell_j} \rightarrow 1$. Also, $\binom{a}{b} \ge (a/b)^b$ for all $a\ge b\ge 1$, and so $$\Pr[d_j(u) \ge c_j^{(k)}] = \binom{b/\ell_j}{\ell_j}p^{\ell_j}(1-p)^{b/\ell_j - \ell_j} = \Omega(\ell_j^{-2\ell_j}).$$ There are at most $n$ total $j$, so taking a product over all active $j$, we arrive at a lower bound $$\Pr[d_j(u) \ge c_j^{(k)}\text{for all active $j$}] \ge \Omega_H (\ell_j^{-2n\ell_j}).$$ Since each $u\in C_k^{(k-1)}$ is individually a successful candidate for $\phi(v_k)$ with this probability, and these are $\ell_k$ independent events, it follows that $$\Pr[\text{Active portion round $k$ succeeds}] \ge \Omega_H (\min(1, \ell_k \ell_j^{-2n \ell_j})).$$ Finally, we observe that for every $j$ active in round $k$, $\Delta_j \ge \Delta_k + 1$ since every left-to-right path ending at $k$ extends to a longer one ending at $j$. Thus, $\ell_j \le L(\ell_k)$, and the function $L$ was chosen so that $L(x)^{2n L(x)} \le x$ for $x$ sufficiently large. It follows that the active portion of round $k$ succeeds with probability $\Omega_H(1)$, as desired. Now we look at the inactive $j$ in round $k$. Then $c_j^{(k)} = p c_j^{(k-1)}$, so $$\Pr[d_j(u_k) \ge c_j^{(k)}] = \Pr[{\text{Bin}}(c_j^{(k-1)}, p) \ge p c_j^{(k-1)}] = \Omega(1).$$ Thus, conditional on the success of the active portion of round $k$, the inactive portion succeeds with probability $\Omega_H(1)$ as well. We have now shown that the algorithm, iterated $O_H(1)$ times, succeeds with probability $\frac{1}{2}$. It remains to bound the total number of queries made. 
In the active portion of each round, queries are only made between sets $C_k^{(k-1)}$ and $C_j^{(k-1)}$ if $j$ is relevant, which implies that $c_j^{(k)} = b/\ell_j$. Also, elements of $C_k^{(k-1)}$ were thrown out until it had size exactly $\ell_k = O(b)$, so the number of queries made in the active portion of any round is at most $O(b^2/L^{(\Delta)}(b)) = O(b^d/L^{(\Delta)}(b))$. In the inactive portion of each round, queries are made between a single vertex $u_k$ and sets $C_j^{(k-1)}$ of size at most $b^d/\ell_j$ each. Thus, the number of queries made in the inactive portion of any round is also $O(b^d/L^{(\Delta)}(b))$. Since there are at most $n = O_H(1)$ rounds and at most $n$ choices of $j$ involved in each round, we find that $$f(H, p) = O_H\Big(\frac{b^d}{L^{(\Delta)}(b)}\Big) = O_H\Big(\frac{p^{-d} \log_{\Delta + 1}(p^{-1})}{\log_{\Delta} (p^{-1})}\Big).$$ as desired. Lower bounds {#sec:lower} ============ Preliminaries ------------- In this section, we will prove lower bounds for $f(H,p)$. Because $N$ queries necessarily involve at most $2N$ vertices, it suffices to prove lower bounds for finding a copy of $H$ in $G(2N,p)$ rather than in $G(\infty,p).$ Following [@cfgh], we will lower bound the number of queries it takes to build a copy of $H$ by showing that the expected number of copies of $H$ we can build in some amount of time is not too large. If $H$ is a graph without isolated vertices, define $t(H,p,N)$ to be the maximum (over all querying strategies) expected number of copies of $H$ that can be found in $G(\infty,p)$ in $N$ queries. Since we are working on $G(2N,p)$, if $H = H'\cup\{v_1,\ldots, v_t\}$ has $t$ isolated vertices, define $t(H,p,N)\coloneqq (2N)^t \cdot t(H',p,N)$. If we show that we cannot build so many copies of $H$ (in expectation) in some time, this gives us a lower bound on how long it takes to build a single copy of $H$. \[lemma:LB1\] If $N\ge f(H,p)$, then $$f(H,p) \cdot t(H,p,N) \ge N/4.$$ Thus, upper bounds on $t(H,p,N)$ will yield lower bounds on $f(H,p)$. The proof of Lemma \[lemma:LB1\] is straightforward, but we include it for completeness. By definition, there exists a strategy which finds $H$ with probability $\frac{1}{2}$ in $f(H,p)$ queries. Given $N$ queries, we can iterate this strategy $\lfloor N/f(H,p)\rfloor$ independent times on disjoint vertex sets. By linearity of expectation, this implies $$t(H,p,N) \ge \Big\lfloor\frac{N}{f(H,p)}\Big\rfloor \cdot \frac{1}{2} \ge \frac{N}{4f(H,p)}.\qedhere$$ Thus, it suffices to produce upper bounds on $t(H,p,N)$. Fortunately, we can recursively bound $t(H,p,N)$ in terms of $t(H',p,N)$ for some subgraphs $H'$. The following bounds are proved in [@cfgh]. \[lemma:LB2\] If $H$ is any graph, $p\in (0,1)$, and $N \ge p^{-1 -\varepsilon}$ for some $\varepsilon > 0$, then the following inequalities hold: $$\begin{aligned} t(H,p,N) &\le \min\limits_{e \in E(H)} t(H\backslash e,p,N) \label{eq:t-rec-1} \\ t(H,p,N) &\lesssim p \max\limits_{e\in E(H)} t(H\backslash e, p, N) \label{eq:t-rec-2} \\ t(H,p,N) &\lesssim pN \min\limits_{e\in E(H)} t(H\backslash\{u,v\}, p, N), \label{eq:t-rec-3}\end{aligned}$$ where $u,v$ are the vertices of $e$ in (\[eq:t-rec-3\]). In the latter two inequalities, the implicit constants are allowed to depend only on $H$. In general, the bounds of Lemma \[lemma:LB2\] are not tight. In certain cases, we will improve these bounds using the crucial observation that large enough sets of vertices in a random graph have few common neighbors. 
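A quick simulation (a Python sketch, not part of any proof) illustrates this phenomenon: in $G(2N,p)$ with $p^2 N$ of order $1$, the largest number of common neighbours over pairs of vertices stays small and essentially flat as $b$ grows, rather than growing like a power of $b$. The values of $b$ below are kept small only so that the dense adjacency matrix fits comfortably in memory.

```python
import math
import numpy as np

def max_pair_codegree(N, p, rng):
    """Sample G(2N, p) and return the largest number of common neighbours
    over all pairs of distinct vertices."""
    n = 2 * N
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    A = (A | A.T).astype(float)           # symmetric 0/1 adjacency matrix
    C = A @ A                             # C[u, v] = number of common neighbours
    np.fill_diagonal(C, 0)
    return int(C.max())

rng = np.random.default_rng(0)
for b in (16, 24, 32):
    p = 1.0 / b
    N = b ** 2                            # regime where p^2 * N is of order 1
    ell = math.log(b) / (2 * math.log(math.log(b)))
    print(f"b={b:3d}  N={N:5d}  max codegree={max_pair_codegree(N, p, rng):3d}  ell={ell:.2f}")
```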
For any vertex subset $U\subseteq V(G)$ of a graph $G$, write $d(U)$ for the number of common neighbors of every vertex in $U$. \[lem:poisson\] Let $\ell=\frac{\log b}{2\log \log b}$, let $k, n\ge 2$ be absolute constants, and let $G=G(2N,p)$. 1. If $p^kN \lesssim 1$, then there exists $C>0$ so that $$\Pr[\max_{|U|=k} d(U)>C\ell]<p^n,$$ where the maximum is taken over all $k$-subsets $U$ of $V(G)$. 2. If $p^kN=(1/N)^{\Omega(1)}$, then there exists $C>0$ so that $$\Pr[\max_{|U|=k} d(U)>C]<p^n.$$ For an arbitrary set of $k$ vertices $U$, note that $d(U)\sim \text{Bin}(2N-k,p^{k})$ as there are $2N-k$ other vertices of $G(2N,p)$ and each vertex has a $p^{k}$ chance of being adjacent to all members of $U$. Hence, we find that $$\label{eq:codegree} \Pr[d(U) \ge t] = \Pr[\text{Bin}(2N-k,p^{k}) \ge t] \leq \binom{2N-k}{t} p^{tk} \leq \Big(\frac{2Ne}{t}p^{k}\Big)^t.$$ If $p^{k}N \lesssim 1$, then (\[eq:codegree\]) implies $\Pr[d(U) \ge t] \le (O(1/t))^t$. Next we take a union bound over at most $(2N)^k$ choices of $U$, which shows that, if we take $t=C\ell$ for a sufficiently large $C$ depending on $n$, $$\Pr[\max_{|U|=k}d(U) \ge t] \le (O(1/t))^t \cdot N^k<p^n.$$ This proves the first part of the lemma. If $p^kN=(1/N)^{\Omega(1)}$, then (\[eq:codegree\]) implies the stronger bound $\Pr[d(U) \ge t]=N^{-\Omega(t)}$. For any fixed $n$, if $C$ is a large enough constant, the probability that $\max_{|U|=k}d(U)>C$ will be below $N^{-\Omega(C)}N^k \le p^n$ by the union bound. The power of Lemma \[lem:poisson\] is that the final graph that we find after $N$ queries is a subgraph of $G(2N,p)$, so we can bound the number of common numbers of any vertex set $U$ of constant size without even seeing the graph. It allows us to prove new upper bounds on $t(H,p,b^d)$. \[lem:reduction\] Let $\ell=\frac{\log b}{2\log \log b}$, let $H$ be a graph, let $v\in V(H)$, and let $H' = H \setminus \{v\}$. 1. If $d(v) = d$, then $$t(H,p,b^d) \lesssim \ell t(H',p,b^d).$$ 2. If $d(v) > d$, then $$t(H,p,b^d) \lesssim t(H',p,b^d).$$ Let $k= d(v)$, and let the neighbors of $v$ in $H$ be $v_1, \ldots, v_k$. Fix a query strategy that maximizes $t(H,p,b^d)$. For any subset $U=\{u_1, \ldots, u_k\}$ of $k$ vertices of the final graph $G \subset G(2b^d,p)$ that is found, let $H'(U)$ be the number of maps $H'\rightarrow G$ so that for all $1 \le i \le k$, $v_i$ maps to $u_i$. Then we have that $$\label{eq:t-H-expectation} t(H,p,b^d)=\mathbb{E}\left[\sum_{|U|=k}d(U)H'(U)\right]\le \mathbb{E}\left[\left(\max_{|U|=k}d(U)\right) \left( \sum_{|U|=k} H'(U) \right)\right],$$ where the sum is taken over all $k$-sets of vertices $U$. Assume $d(v)=d$. By the first part of Lemma \[lem:poisson\], there is a large constant $C=C(H)$ so that $$\Pr\left[\max_{|U|=k}d(U)>C\ell\right]<p^{d|V(H')|+1}.$$ We will break up the expectation in (\[eq:t-H-expectation\]) depending on the size of $\max_{|U|=k} d(U)$. If $\max_{|U|=k} d(U) \le C\ell$, the contribution to right side of (\[eq:t-H-expectation\]) is $O(\ell t(H',p,b^d))$. Now, $\max_{|U|=k} d(U)>C\ell$ with probability at most $p^{d|V(H)|+1}$, so the contribution from these terms is bounded by $p^{d|V(H)|+1}(2b^d)^{|V(H')|}=o(1)$. Likewise, when $d(v)>d$, by the second part of Lemma \[lem:poisson\] there is a $C$ so that the case of $\max_{U} d(U) \leq C$ contributes $O(t(H',p,b^d))$ to the right side of (\[eq:t-H-expectation\]), and the case of $\max_{|U|=k} d(U)>C$ contributes $o(1).$ Proof of Theorem \[thm:lower\] ------------------------------ We now begin the proof of Theorem \[thm:lower\]. 
The main idea is to apply Lemma \[lem:reduction\] to obtain new upper bounds on $t(H,p,N)$ for various subgraphs $H$ of the triforce, and then combine these with Lemma \[lemma:LB2\] to bound $t(H,p,N)$ for the triforce itself. We describe the subgraphs of the triforce to which we will apply Lemma \[lem:reduction\]. Any copy of the triforce must arise from a copy of one of the graphs formed by deleting two edges from the triforce. There are $8$ such graphs up to isomorphism, which we denote by $H_i$ for $1 \le i \le 8$ (see Figure \[fig:H\]). *(Figure \[fig:H\]: the eight graphs $H_1,\ldots,H_8$ obtained from the triforce by deleting two edges.)* We will prove that the first $6$ of these graphs are hard to construct quickly. The last two subgraphs $H_7$ and $H_8$ are more difficult to handle, and we will bound copies of them using a different analysis. \[prop:1-6\] For all $H_i$ such that $1 \le i \le 6$, $t(H_i,p,b^2)\lesssim b^2\ell^2$. For each graph $H_i$ with $1 \le i \le 6$, it is possible to remove two vertices of degree at least two to arrive at the path $P_3$ on four vertices. Thus, we may apply the first part of Lemma \[lem:reduction\] twice to show that $$t(H_i, p, b^2) \lesssim \ell^2 t(P_3, p, b^2),$$ for all $1\le i \le 6$. Lastly, note that $t(P_3,p,b^2) \lesssim bt(K_2,p,b^2) \lesssim b^2$ by applying (\[eq:t-rec-3\]) from Lemma \[lemma:LB2\]. Hence, for $H_i$ with $1 \le i \le 6$, $t(H_i,p,b^2) \lesssim \ell^2 t(P_3,p,b^2) \lesssim \ell^2b^2$ as desired. We must now deal with $H_7$ and $H_8$, the two graphs on which the reductions of Lemma \[lem:reduction\] and Lemma \[lemma:LB2\] are not sufficient to provide the bounds that we want. We need one last definition. Given a graph $H$ with a distinguished vertex $u$, let $t^u(H,p,N)$ be the maximum expected number of copies of $H$ we can build in time $N$ so that $u$ maps to the same vertex in each copy. It is important to emphasize that the image of $u$ is not determined ahead of time, and we may pick it adaptively based on the queries made so far. \[diamond\] Let $D$ be the *diamond* graph depicted below: *(the diamond $D$: four vertices and five edges, i.e. $K_4$ with one edge removed)* Then $t^u(D,p,b^2)\lesssim \ell^3$. As usual, we fix a query strategy maximizing $t^u(D,p,b^2)$ and let $G\subset G(2b^2,p)$ be the final graph built. Consider the following $3$ subgraphs of $D$ which we call $D_1$, $D_2$, and $D_3$ respectively. *(the graphs $D_1$, $D_2$ and $D_3$, each obtained from $D$ by deleting one edge)* Every copy of $D$ must arise from adding an edge to a graph isomorphic to one of the $D_i$. For each $u'\in V(G)$, let $X_i(u')$ be the random variable counting the number of copies of $D$ so that $u$ maps to $u'$, and the last edge built in $D$ is the edge missing from $D_i$.
In particular, $$\label{eq:t-u} t^u(D,p,b^2) \le \sum_{i=1}^3\mathbb{E}[\max_{u'} X_i(u')].$$ Also, define the random variable $X_i(u', j)$ to be the number of copies of $D_i$ with $u$ mapping to $u'$ that turn into a copy of $D$ after query $j$. In particular this number is $0$ if the query $j$ finds a non-edge. We have that $X_i(u') = \sum_j X_i (u', j)$.

By the first part of Lemma \[lem:poisson\], in the random graph $G(2b^2,p)$ any two vertices have $O(\ell)$ common neighbors with overwhelmingly high probability. We can assume this is the case here as the contribution to the expectation $t^u(D,p,b^2)$ from other cases is $o(1)$. In particular, this means that each new edge built can turn at most $O(\ell^2)$ copies of $D_1$, $D_2$ or $D_3$ into $D$. For example, if an edge $(u', v')$ is built in $G$, then the number of copies of $D_2$ that can be completed into $D$ is exactly the number of ways to choose a common neighbor $w'$ of $u'$ and $v'$, and then a common neighbor of $v'$ and $w'$. As we assumed that codegrees are all $O(\ell)$, there are only $O(\ell^2)$ total choices for this copy of $D_2$. This means we may assume that $X_i(u', j)$ is stochastically dominated (up to a constant) by $\ell^2 {\text{Bin}}(1,p)$. As the results of all queries are independent, it follows that $X_i(u')$ is stochastically dominated by a constant times $\ell^2 {\text{Bin}}(b,p)$. Now it is a short computation that $$\Pr[{\text{Bin}}(b,p) > 100\ell] < p^5.$$ In particular, there exists a $C>0$ such that $\Pr[X_i(u') > C\ell^3] < p^5$ for all $1\le i \le 3$ and all $u'\in V(G)$. Also, the maximum possible number of diamonds with a given vertex $u'$ is $(2b)^3$, so $$t^u(D,p,b^2) \le 3C\ell^3 + \Pr[X_i(u') > C\ell^3\text{ for some }i, u']\cdot (2b)^3 = O(\ell^3),$$ by the union bound over all $O(b^2)$ choices of $1\le i\le 3$ and $u'\in V(G)$, as desired.

We can now deal with $H_7$. We have

\[prop:h7\] $t(H_7,p,b^2)= O(b^2\ell^3)$.

Given a vertex $v$ of the graph $G\subset G(2b^2,p)$ built, let $f(v)$ be the number of copies of the diamond graph $D$ so that $u$ maps to $v$. Every copy of $H_7$ can be found by picking a vertex $v\in V(G)$, a copy of $D$ from among $f(v)$ total, and two neighbors of $v$ in at most $d(v)^2$ ways. Thus, the expected number of copies of $H_7$ in $G$ is at most $$\mathbb{E}\left[\sum_{v\in V(G)}d(v)^2f(v)\right] \le \mathbb{E}\left[\left( \max_{v\in V(G)}f(v) \right) \left( \sum_{v\in V(G)} d(v) \right)^2\right].$$ Also, there exists some constant $C$ so that the graph will have more than $Cb$ edges with negligible probability, so we may assume $\sum_v d(v) = O(b)$. Under this assumption, we get $$t(H_7, p, b^2) \lesssim b^2\mathbb{E}\left[\max_{v\in V(G)}f(v) \right] \lesssim b^2 t^u(D,p,b^2) \lesssim b^2\ell^3,$$ whence we are done by Lemma \[diamond\].

Finally, we deal with the graph $H_8.$ Unlike $H_1,\ldots, H_7$, it is not the case that $t(H_8, p, b^2) = O(b^{2+o(1)})$ (in fact one can build $\Omega(b^3)$ copies of $H_8$ due to the isolated vertex). We will need to add one edge to $H_8$ and analyze the resulting graph instead.

\[prop:hstar\] Let $H^*$ be the following graph:

*(figure: the graph $H^*$, obtained by adding one edge to $H_8$ so that its isolated vertex becomes a leaf.)*

Then $t(H^*,p,b^2) = O(b\ell ^3).$

Let $u$ be the vertex adjacent to the leaf of $H^{*}.$ For any vertex $v$ of our final graph $G\subset G(2b^2,p)$, let $f(v)$ be the number of copies of the diamond $D$ so that $u$ maps to $v$.
As before, we may assume that all degrees are $O(b)$ as the contribution to $t(H^{*},p,b^2)$ is trivial otherwise. Furthermore, we may assume that any two vertices have $O(\ell)$ common neighbors, as again the contribution is trivial from other cases by the first part of Lemma \[lem:poisson\]. Given any $v$, there are $f(v)$ choices for a copy of $D$ including it, $d(v)$ choices for the leaf off of $v$, and $O(\ell)$ choices for the remaining vertex of $H^*$, as it is the common neighbor of $v$ and of its degree $4$ neighbor in $H^{*}.$ Thus, we obtain: $$\begin{aligned} t(H^{*},p,b^2) & \lesssim \ell \mathbb{E}\left[\sum_{v}f(v)d(v)\right] \lesssim \ell \mathbb{E}\left[\left( \max_{v}f(v) \right) \left(\sum_{v}d(v)\right)\right] \\ & \lesssim b\ell \mathbb{E}\left[ \max_{v}f(v) \right] \lesssim b\ell t^u(D,p,b^2) \lesssim b\ell^3,\end{aligned}$$ where the last inequality follows from Lemma \[diamond\].

Putting all of the bounds together completes the proof of Theorem \[thm:lower\]. We will apply (\[eq:t-rec-2\]) twice to the triforce $H$. Applying it once, we find $$\label{eq:h-triforce} t(H,p,b^2) \lesssim p \max\limits_{e\in E(H)} t(H\backslash e, p, b^2),$$ and there are only two nonisomorphic subgraphs of $H$ of the form $H\setminus e$. One of them is $H^*$, for which we have $t(H^*,p,b^2) = O(b\ell^3)$ by Proposition \[prop:hstar\]. If $H'$ is the other subgraph of the form $H\setminus e$, where an inner edge is deleted from the triforce, then we apply (\[eq:t-rec-2\]) again to find $$\label{eq:h'} t(H', p, b^2) \lesssim p\max\limits_{1\le i \le 7} t(H_i, p, b^2),$$ since all the graphs $H'\setminus e$ are isomorphic to one of $H_1,\ldots, H_7$. We have by Propositions \[prop:1-6\] and \[prop:h7\] that $t(H_i, p, b^2) = O(b^2\ell^3)$. It follows from (\[eq:h'\]) that $t(H', p, b^2) = O(b\ell^3)$. Together with the fact that $t(H^*,p,b^2) = O(b\ell^3)$ and (\[eq:h-triforce\]), this proves that $t(H,p,b^2) = O(\ell^3)$. The theorem follows by one application of Lemma \[lemma:LB1\] with $N=b^2$.

We remark that the subgraphs $H'\setminus e$ can only be one of $H_3,H_4,H_5, H_6$, so our analysis of $H_1, H_2, H_7$ was unnecessary. We include all the subgraphs on $7$ edges for clarity of exposition.

Cliques in $G(n,1/2)$ {#sec:clique}
=====================

In this section, we prove Theorem \[thm:cliques\] using an idea of Huy Pham [@p]. The argument is a modification of the proof of Theorem 1 of [@cfgh] when the number of vertices in $G(n,p)$ is bounded beforehand. Let $G=G(n,\frac{1}{2})$. For each vertex subset $U\subseteq V(G)$, let $e_{t}(U)$ be the number of queries made between pairs of vertices in $U$ after query $t$. We will study the weight function $$w(U,t)=\begin{cases} 2^{-\binom{|U|}{2}+e_{t}(U)} & \text{if all queries so far in \ensuremath{U} succeeded}\\ 0 & \text{otherwise.} \end{cases}$$ In other words, $w(U,t)$ is exactly the probability that $G[U]$ is a clique conditional on the information available after query $t$. The standard method of conditional expectation proceeds by studying the evolution of the function $$w_{k}(t)\coloneqq\sum_{|U|=k}w(U,t),$$ which is a martingale, and has the property that a $k$-clique is found after query $t$ only if $w_{k}(t)\ge1$. Our modification studies instead a restricted version of this sum. Namely, define $m(U,t)$ to be the size of the maximum matching in the known edges of $U$ after query $t$.
Then, $$w_{k,m}(t)\coloneqq\sum_{|U|=k,m(U,t)\ge m}w(U,t).$$ Restricting to only sets with large maximum matchings has the function of radically reducing the number of terms in the sum $w_{k,m}(t)$. We pay for it in that $w_{k,m}(t)$ is no longer a martingale and its expectation is harder to study. Nevertheless, it remains true that if a $k$-clique is found after query $t$ then $w_{k,m}(t)\ge1$ for every $m\le k/2$. \[lem:recursive\]For any $0\le m\le k/2$ and any fixed querying strategy that uses $t\le\binom{n}{2}$ queries, $${\mathbb{E}}[w_{k,m}(t)]\le t2^{-(2k-2m-1)}\cdot{\mathbb{E}}[w_{k-2,m-1}(t)].$$ For every set $U\in\binom{[n]}{k}$, we say that $U$ is $m$-critical at query $s$ if $s$ is the smallest number for which $m(U,s)\ge m$. In particular, $U$ doesn’t contribute to $w_{k,m}(t)$ until $t=s$, after which it contributes $w(U,t)$, which is a martingale. This means that if $$w_{k,m}^{*}(s)\coloneqq\sideset{}{'}\sum_{|U|=k}w(U,s),$$ where the sum is restricted to only sets $U$ which are $m$-critical at query $s$, then $${\mathbb{E}}[w_{k,m}(t)-w_{k,m}(t-1)]={\mathbb{E}}[w_{k,m}^{*}(t)],$$ and so $${\mathbb{E}}[w_{k,m}(t)]=\sum_{s\le t}{\mathbb{E}}[w_{k,m}^{*}(s)].\label{eq:sum}$$ Next, we will show $$w_{k,m}^{*}(s)\le2^{-(2k-2m-2)}w_{k-2,m-1}(s).\label{eq:recursive}$$ To see this, note that every $U$ that appears on the left side must contain the edge $(u,v)$ built after query $s$, since $m(U,s)>m(U,s-1)$. Furthermore, $U'=U\backslash\{u,v\}$ is a set with $k-2$ vertices and an $m-1$ matching. Finally, every edge in $U$ but not $U'$ is incident to $(u,v)$. It is easy to check that if $(u,v)$ is an edge that lies in every $m$-matching of $U$, then at most $2m-2$ other edges are incident to $(u,v)$. Thus, there are at least $2(k-2)-(2m-2)=2k-2m-2$ unqueried pairs in $U$ but not in $U'$, and $$w(U,t)\le2^{-(2k-2m-2)}w(U',t).$$ Summing over all $m$-critical sets $U$, we get the desired inequality (\[eq:recursive\]). Taking expectations of both sides, $${\mathbb{E}}[w_{k,m}^{*}(s)]\le2^{-(2k-2m-1)}{\mathbb{E}}[w_{k-2,m-1}(s)].$$ Note that we gained another factor of $\frac{1}{2}$ here because there is a $\frac{1}{2}$ chance that the query $(u,v)$ fails and $w_{k,m}^{*}(s)=0$. Plugging this into (\[eq:sum\]), we get $${\mathbb{E}}[w_{k,m}(t)]\le2^{-(2k-2m-1)}\sum_{s\le t}{\mathbb{E}}[w_{k-2,m-1}(s)].$$ The expectations on the right side are nondecreasing as a function of $s$, so we can bound this by $${\mathbb{E}}[w_{k,m}(t)]\le2^{-(2k-2m-1)}\sum_{s\le t}{\mathbb{E}}[w_{k-2,m-1}(s)]\le t2^{-(2k-2m-1)}\cdot{\mathbb{E}}[w_{k-2,m-1}(t)]$$ as desired. Now we may iterate Lemma \[lem:recursive\] until $m=0$ to prove the following general bound. \[lem:cliques\]For any $0\le m\le k/2$ and any fixed querying strategy that uses $t\le\binom{n}{2}$ queries, $${\mathbb{E}}[w_{k,m}(t)]\le t^{m}n^{k-2m}2^{-\binom{k}{2}+m(m-1)}.$$ We induct on $m$. The base case $m=0$ is just the unrestricted weight function $${\mathbb{E}}[w_{k,0}(t)]={\mathbb{E}}[w_{k}(t)]=w_{k}(0)=\binom{n}{k}2^{-\binom{k}{2}}\le n^{k}2^{-\binom{k}{2}},$$ for all $k$, as desired. Assuming the statement is true for some $m\ge0$ and all $k\ge2m$, Lemma \[lem:recursive\] provides the inductive step for $m+1$ and all $k\ge2m+2$. It remains to prove Theorem \[thm:cliques\] using Lemma \[lem:cliques\]. Recall that ${\mathbb{E}}[w_{k,m}(t)]$ is an upper bound on the probability one can find a $k$-clique in $t$ queries. 
By Lemma \[lem:cliques\], we see that whenever $n,k,t$ are such that there exists $m\le\frac{k}{2}$ for which $${\mathbb{E}}[w_{k,m}(t)]\le t^{m}n^{k-2m}2^{-\binom{k}{2}+m(m-1)}<\frac{1}{2},$$ then it is impossible to find a $k$-clique in $t$ queries in $G(n,\frac{1}{2})$ with probability at least $\frac{1}{2}$. It is cleaner to compute the base-$2$ logarithm of this quantity. Taking $t=n^{\delta}$ and $k=\alpha\lg n$ and writing $\ell=\lg n$ as a shorthand, we get $$\begin{aligned} \lg \Big(t^{m}n^{k-2m}2^{-\binom{k}{2}+m(m-1)}\Big) & = & (\alpha\ell-m(2-\delta))\ell-\binom{\alpha\ell}{2}+m(m-1)\\ & \leq & \Big(\alpha-\frac{\alpha^{2}}{2}\Big)\ell^{2}-(2-\delta)m\ell+m^{2}+O(\ell).\end{aligned}$$ If $m=c\ell$ where $c\le\alpha/2$, then $$\Big(\alpha-\frac{\alpha^{2}}{2}\Big)\ell^{2}-(2-\delta)m\ell+m^{2}=\Big(\alpha-\frac{\alpha^{2}}{2}-(2-\delta)c+c^{2}\Big)\ell^{2}$$ is minimized at $c=\frac{2-\delta}{2}$. Assuming that $\alpha\ge2-\delta$, we find that for this choice of $c$, $$\lg ({\mathbb{E}}[w_{k,m}(t)])\le\Big(\alpha-\frac{\alpha^{2}}{2}-\frac{(2-\delta)^{2}}{4}\Big)\ell^{2}+O(\ell).$$ In particular, this shows that whenever $\alpha\ge2-\delta$ satisfies $$\alpha-\frac{\alpha^{2}}{2}-\frac{(2-\delta)^{2}}{4}<0,$$ then for sufficiently large $n$, it is impossible to find an $\alpha\lg n$ clique in $n^{\delta}$ queries. Thus, $\alpha_\star(\delta)$ is bounded above by the (larger) solution to the above quadratic, which is $$\alpha_{+}=1+\sqrt{1-\frac{(2-\delta)^{2}}{2}},$$ as desired. Concluding Remarks {#sec:closing} ================== The immediate question that arises from our work is to classify the graphs $H$ for which $f(H,p)=b^{d-o(1)}$. The natural first step is the case $d=2$. To this end, we first establish a large category of $2$-degenerate graphs $H$ for which $f(H,p) = O(b^{2-\varepsilon})$ for some $\varepsilon>0$. We call a graph $H$ [*$(1,1)$-degenerate*]{} if $H$ can be vertex-partitioned into induced subgraphs $T_1, ..., T_n$ which are trees, such that for all $k\in \{1,...,n\}$ and all $v\in T_k$, $$|N(v)\cap \bigcup_{i=1}^{k-1} T_i|\leq 1.$$ It is easy to see that if $H$ is $(1,1)$-degenerate, then $H$ is $2$-degenerate. One can show by induction on the number of trees $k$ that if $H$ is $(1,1)$-degenerate, then $f(H,p)=O(b^{2-\varepsilon})$ for some $\varepsilon>0$. We conjecture that the converse is true. If $H$ is a $2$-degenerate graph that is not $(1,1)$-degenerate, then $f(H,p)=b^{2-o(1)}$. In the case $d=2$, we were able to construct a particular $2$-degenerate graph $H$ for which $f(H,p) \ge b^{d-o(1)}$. The existence of such graphs when $d\ge 3$ remains open. For all natural numbers $d\geq 2$, there exists a $d$-degenerate graph $H$ for which $f(H,p) = b^{d-o(1)}$. There is a natural random process for constructing $d$-degenerate graphs on $n$ vertices. Namely, starting with a $K_d$, $n-d$ vertices are added one at a time, and each new vertex is given $d$ neighbors uniformly at random among the previous ones. If $n$ is sufficiently large, it is plausible that the random $d$-degenerate graph constructed in this manner should satisfy $f(H,p)=b^{d-o(1)}$ almost surely. [**Acknowledgments.**]{} We would like to thank Jacob Fox, Huy Pham, and Yuval Wigderson for helpful discussions. M. Ajtai, J. Komlós and E. Szemerédi, The longest path in a random graph, *Combinatorica* **1** (1981), 1–-12. N. Alon and J. H. Spencer, **The Probabilistic Method**, 3rd edit., Wiley, (2008). B. Bollobás and P. Erdős, Cliques in random graphs, *Math. Proc. Cambridge Philos. 
Soc.* **80** (1976), 419–427. S. A. Burr and P. Erdős, On the magnitude of generalized Ramsey numbers for graphs, in: [*Infinite and Finite Sets I*]{}, Colloq. Math. Soc. János Bolyai 10, North-Holland, Amsterdam (1975), 214–240. D. Conlon, J. Fox, A. Grinshpun, and X. He, Online Ramsey numbers and the subgraph query problem, [**Building Bridges II**]{}, Bolyai Soc. Math. Stud., 28, [*Springer, Berlin,*]{} 2019. U. Feige, D. Gamarnik, J. Neeman, M.Z. Rácz and P. Tetali, Finding cliques using few probes, preprint (2018), `https://arxiv.org/pdf/1809.06950`. A. Ferber, M. Krivelevich, B. Sudakov and P. Vieira, Finding Hamilton cycles in random graphs with few queries, [*Random Structures Algorithms*]{} [**49**]{} (2016), 635–668. A. Ferber, M. Krivelevich, B. Sudakov and P. Vieira, Finding paths in sparse random graphs requires many queries, [*Random Structures Algorithms*]{} [**50**]{} (2017), 71–85. J. Fox, A. Sah, M. Sawhney, D. Stoner and Y. Zhao, Triforce and corners, *Math. Proc. Cambridge Philos. Soc.* (2019), 1–15. A. Frieze and R. Kannan, A new approach to the planted clique problem, *IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science* **2** (2008), 187–198. D. Hefetz, M. Krivelevich, M. Stojakovic and T. Szabó, **Positional Games**, Birkhäuser, 2014. M. Krivelevich, Positional games, *Proceedings of the International Congress of Mathematicians* **3** (2014), 355–379. C. Lee, Ramsey numbers of degenerate graphs, *Ann. of Math.* **185** (2017), 791–829. H. Pham, personal communication. M. Z. Rácz and B. Schiffer, Finding a planted clique by adaptive probing, preprint (2019), `https://arxiv.org/abs/1903.12050`.

[^1]: Department of Mathematics, Princeton University, Princeton, NJ 08541, USA. Email: [[email protected]]{}. Research supported by an NSF Graduate Research Fellowship.

[^2]: Department of Mathematics, Stanford University, Stanford, CA 94305, USA. Email: [[email protected]]{}. Research supported by an NSF Graduate Research Fellowship.

[^3]: The triforce was recently defined by [@fsssz] to be the natural $3$-uniform hypergraph given by the same picture. We use the word to refer to the graph instead.
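As an illustrative aside (not part of the original argument), the closing bound of Section \[sec:clique\] is easy to evaluate numerically. A minimal Python sketch, assuming nothing beyond the formula $\alpha_{+}=1+\sqrt{1-(2-\delta)^{2}/2}$ derived above:

```python
import math

def alpha_plus(delta):
    """Upper bound on alpha such that an (alpha lg n)-clique can be found in
    n^delta adaptive queries of G(n, 1/2): alpha_+ = 1 + sqrt(1 - (2-delta)^2/2)."""
    disc = 1.0 - (2.0 - delta) ** 2 / 2.0
    return 1.0 + math.sqrt(disc) if disc >= 0 else None

for delta in (0.6, 1.0, 1.5, 2.0):
    print(f"delta = {delta:.1f}  ->  alpha_+ = {alpha_plus(delta):.4f}")
```

At $\delta=2$, i.e. with essentially all pairs available for querying, the sketch returns $\alpha_{+}=2$, consistent with the classical clique number $\sim 2\lg n$ of $G(n,\tfrac12)$.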
--- abstract: 'We use Beltrami’s theorem as an excuse to present some arguments from parabolic differential geometry without any of the parabolic machinery.' address: | -School of Mathematical Sciences\ University of Adelaide\ SA 5005\ Australia author: - Michael Eastwood title: 'Beltrami’s theorem via parabolic geometry' --- Introduction ============ One version of Beltrami’s theorem [@B] is as follows. Suppose $M$ is a smooth two-dimensional manifold and $g_{ab}$ is a Riemannian metric on $M$. Then $g_{ab}$ is constant curvature if and only if there are local coördinates near any point in which the geodesics of $g_{ab}$ become straight lines. Antonio Di Scala [@dS] has a very nice proof of this theorem, which he kindly explained to me (in English). He asked me about a proof by the methods of parabolic geometry but insisted that I did not wave my hands at all. In particular, I was not allowed to say ‘Cartan connection’ nor ‘development map.’ So what follows is an application of some key ideas from parabolic geometry without actually explaining any of the underlying theory. The discerning reader will correctly suspect that this is just the tip of an iceberg. For a comprehensive description of the iceberg itself, the reader is directed to [@parabook]. I would like to thank Antonio Di Scala for many interesting discussions during the preparation of this article. Geodesics {#geo} ========= We need some notation and basic results on geodesics. Let $M$ be a smooth two-dimensional manifold. We shall denote by $TM$ the tangent bundle of $M$ and by $\Wedge^1$ the bundle of $1$-forms on $M$. Suppose $$\nabla_a:TM\to\Wedge^1\otimes TM$$ is a torsion-free connection and $t\mapsto\gamma(t)\in M$ is a smooth curve. Let us write $U^a$ for the velocity field along $\gamma$ and, having in mind a torsion-free connection $\nabla_a$, write $\partial\equiv U^a\nabla_a$ for the directional derivative along $\gamma$. Then $\gamma$ is an affinely parameterised geodesic for $\nabla_a$ if and only if the acceleration field $\partial U^a$ vanishes. The geodesics in Beltrami’s theorem, however, are [*unparameterised*]{} curves. It means that we should instead allow $\partial U^a= fU^a$ for some smooth function $f$. In order that two torsion-free connections $\nabla_a$ and $\widehat\nabla_a$ have the same unparameterised geodesics, it is necessary and sufficient that locally $$\label{proj} \widehat\nabla_aX^c=\nabla_aX^c+\Upsilon_aX^c+\Upsilon_bX^b\delta_a{}^c,$$ where $\delta_a{}^c$ denotes the identity tensor and $\Upsilon_a$ is an arbitrary $1$-form. The general formula relating two torsion-free connections is $$\widehat\nabla_aX^c=\nabla_aX^c+\Gamma_{ab}{}^cX^b,\quad\mbox{where}\enskip \Gamma_{ab}{}^c=\Gamma_{ba}{}^c.$$ Therefore $\widehat\partial U^c=\partial U^c+U^a\Gamma_{ab}{}^cU^b$ and so we require that $$U^aU^b\Gamma_{ab}{}^c\propto U^c\quad\mbox{for all}\enskip U^a.$$ It is a matter of linear algebra to check that this forces $$\Gamma_{ab}{}^c=\delta_a{}^c\Upsilon_b+\delta_b{}^c\Upsilon_a$$ for some $\Upsilon_a$. The Ricci tensor $R_{ab}$ of $\nabla_a$, characterised by $$(\nabla_a\nabla_b-\nabla_b\nabla_a)X^b=-R_{ab}X^b$$ for all vector fields $X^b$, is not necessarily symmetric. 
However, it is easy to check that if $\widehat\nabla_a$ and $\nabla_a$ are related by (\[proj\]), then $$\label{Ricci_change}\widehat R_{ab}=R_{ab}-2\nabla_a\Upsilon_b+\nabla_b\Upsilon_a+\Upsilon_a\Upsilon_b.$$ Locally, therefore, we may always use (\[proj\]) to arrange that the Ricci tensor be symmetric without disturbing its unparameterised geodesics. Instead of proving Beltrami’s theorem itself, we shall establish a more general result concerning the geodesics of an arbitrary torsion-free affine connection. Bearing in mind that we can arrange the Ricci tensor to be symmetric, the more general result can be stated as follows. Suppose $M$ is a smooth two-dimensional manifold. Denote by $TM$ the tangent bundle of $M$ and by $\Wedge^1$ the bundle of $1$-forms on $M$. Suppose $\nabla_a:TM\to\Wedge^1\otimes TM$ is a torsion-free connection. Suppose that its Ricci tensor $R_{ab}$ is symmetric. Let $Y_{abc}\equiv\nabla_aR_{bc}-\nabla_bR_{ac}$. Then $Y_{abc}=0$ if and only if there are local coördinates near any point in which the geodesics of $\nabla_a$ become straight lines. The reason that this theorem is more general than Beltrami’s is that, in case of a metric connection in two dimensions, the Gau[ß]{}ian curvature $K$ is characterised by $R_{ab}=Kg_{ab}$ whence $Y_{abc}=(\nabla_aK)g_{bc}-(\nabla_bK)g_{ac}$. [**Remark**]{}Another convenience of a having a symmetric Ricci tensor in two dimensions is that, in this case, $$\label{curv}(\nabla_a\nabla_b-\nabla_b\nabla_a)X^c =\delta_a{}^cR_{bd}X^d-\delta_b{}^cR_{ad}X^d,$$ as one may readily verify. A surprising connection ======================= Fixing a torsion-free connection $\nabla_a$ on $TM$ with symmetric Ricci tensor $R_{ab}$, we define a connection, also denoted by $\nabla_a$, on the bundle ${\mathbb{T}}\equiv TM\oplus\Wedge^0$ by $$\label{tractors}{\mathbb{T}}= \begin{array}{c}TM\\[-4pt] \oplus\\[-2pt]\Wedge^0\end{array}\ni \left[\!\begin{array}{c}X^b\\ \rho\end{array}\!\right] \stackrel{\nabla_a}{\longmapsto} \left[\!\begin{array}{c}\nabla_aX^b-\delta_a{}^b\rho\\ \nabla_a\rho+R_{ab}X^b\end{array}\!\right]\in\Wedge^1\otimes{\mathbb{T}}.$$ We may compute its curvature: $$\nabla_a\nabla_b\left[\!\begin{array}{c}X^c\\ \rho\end{array}\!\right] =\left[\!\begin{array}{c}\nabla_a(\nabla_bX^c-\delta_b{}^c\rho) -\delta_a{}^c(\nabla_b\rho+R_{bd}X^d)\\ \nabla_a(\nabla_b\rho+R_{bc}X^c)+R_{ac}(\nabla_bX^c-\delta_b{}^c\rho) \end{array}\!\right]$$ so $$(\nabla_a\nabla_b-\nabla_b\nabla_a) \left[\!\begin{array}{c}X^c\\ \rho\end{array}\!\right] =\left[\!\begin{array}{c}(\nabla_a\nabla_b-\nabla_b\nabla_a)X^c- (\delta_a{}^cR_{bd}-\delta_b{}^cR_{ad})X^d\\ (\nabla_aR_{bc}-\nabla_bR_{ac})X^c \end{array}\!\right]$$ but, according to (\[curv\]), the first row vanishes and so we are left with $$(\nabla_a\nabla_b-\nabla_b\nabla_a) \left[\!\begin{array}{c}X^c\\ \rho\end{array}\!\right] =\left[\!\begin{array}{c}0\\ Y_{abc}X^c \end{array}\!\right].$$ In other words, this connection is flat if and only if $Y_{abc}=0$, which is somewhat surprising. Proof of main theorem ===================== We are now in a position to prove the generalised Beltrami theorem from Section \[geo\]. One direction is mindless computation. Specifically, if the geodesics of $\nabla_a$ are straight lines in local coördinates, then (\[proj\]) holds with $\widehat\nabla_a$ being flat. 
According to (\[Ricci\_change\]) with $R_{ab}$ symmetric, we conclude that $$R_{ab}=\nabla_a\Upsilon_b-\Upsilon_a\Upsilon_b\quad\mbox{and}\quad \nabla_{[a}\Upsilon_{b]}=0.$$ We now compute $$Y_{abd}=\nabla_aR_{bd}-\nabla_bR_{ad} =(\nabla_a\nabla_b-\nabla_b\nabla_a)\Upsilon_d -\Upsilon_b\nabla_a\Upsilon_d+\Upsilon_a\nabla_b\Upsilon_d$$ and, from (\[curv\]), conclude that $$Y_{abd} =-\Upsilon_a(R_{bd}-\nabla_b\Upsilon_d)+\Upsilon_b(R_{ad}-\nabla_a\Upsilon_d) =-\Upsilon_a\Upsilon_b\Upsilon_d+\Upsilon_b\Upsilon_a\Upsilon_d,$$ which vanishes, as advertised. In other other direction we use the surprising flat connection (\[tractors\]). As the bundle ${\mathbb{T}}$ has rank $3$, locally we may find a three-dimensional space of covariant constant sections, which we shall identify as ${\mathbb{R}}^3$ (and replace $M$ by a suitable open subset on which this is valid). Each point $x\in M$ now gives rise to a $1$-dimensional linear subspace in ${\mathbb{R}}^3$, namely $$L_x\equiv\left\{\left[\!\begin{array}{c}X^b\\ \rho\end{array}\!\right]\in \Gamma({\mathbb{T}}) \mbox{ s.t.\ } \nabla_a\left[\!\begin{array}{c}X^b\\ \rho\end{array}\!\right]=0 \enskip\mbox{and}\enskip X^b|_x=0\right\}.$$ In this way, we obtain $\phi:M\looparrowright{\mathbb{RP}}_2$ (and replace $M$ by a suitable open subset on which $\phi:M\hookrightarrow{\mathbb{RP}}_2$). Now suppose $\gamma\hookrightarrow M$ is a geodesic with velocity vector $U^a$, as before. Restricting the connection (\[tractors\]) to $\gamma$ gives $\partial:{\mathbb{T}}\to{\mathbb{T}}$. Specifically, $$\label{tractors_along_gamma} \partial\left[\!\begin{array}{c}X^b\\ \rho\end{array}\!\right]= \left[\!\begin{array}{c}\partial X^b-\rho U^b\\ \partial\rho+U^aR_{ab}X^b\end{array}\!\right].$$ In particular, if $X^b$ is somewhere is somewhere tangent to $\gamma$, then this is always the case along $\gamma$ and (\[tractors\_along\_gamma\]) becomes $$\partial\left[\!\begin{array}{c}fU^b\\ \rho\end{array}\!\right]= \left[\!\begin{array}{c}(\partial f-\rho)U^b\\ \partial\rho+fU^aR_{ab}U^b\end{array}\!\right],$$ which is simply the second order linear ordinary differential equation $$\partial\partial f+R_{ab}U^aU^b f=0$$ along $\gamma$. Its two-dimensional space of solutions is a linear subspace of the space of covariant constant sections of ${\mathbb{T}}$. So the geodesic $\gamma$ gives a straight line in ${\mathbb{RP}}_2$. Otherwise said, the diffeomorphism $\phi:M\hookrightarrow{\mathbb{RP}}_2$ maps geodesics in $M$ to straight lines in ${\mathbb{RP}}_2$. These lines may be viewed in a standard affine chart ${\mathbb{R}}^2\subset{\mathbb{RP}}_2$ and the proof is complete. Beltrami’s task =============== In fact, Antonio Di Scala pointed out to me that Beltrami finds in [@B] the general form of a metric on ${\mathbb{R}}^2$ with the property that its geodesics, as unparameterised curves, are straight lines and, only having done this, does he note that these metrics have constant Gau[ß]{}ian curvature. This is a much more challenging task but one that is also familiar from parabolic differential geometry (as a particular instance of finding solutions to a so-called ‘first BGG operator’). Without going into details, observations of R. Liouville [@L] combine with the connection (\[tractors\]) on ${\mathbb{R}}^2$ in allowing one easily to write down the general metric defined on $U^{\mathrm{open}}\subseteq{\mathbb{R}}^2$ and having the property that its geodesics are straight lines. 
There is a six parameter family thereof: $$\frac{(ry^2+2qy+u)dx^2-2(rxy+qx+py+t)dx\,dy+(rx^2+2px+s)dy^2} {\big((rx^2+2px+s)(ry^2+2qy+u)-(rxy+qx+py+t)^2\big)^2}$$ for $p,q,r,s,t,u\in{\mathbb{R}}$, defined wherever this expression is positive definite. The case $(p,q,r,s,t,u)=(0,0,1,1,0,1)$ gives the Thales metric $$\frac{(1+y^2)\,dx^2-2xy\,dx\,dy+(1+x^2)\,dy^2}{(1+x^2+y^2)^2}$$ defined everywhere, whilst the case $(p,q,r,s,t,u)=(0,0,-1,1,0,1)$ is the Beltrami metric $$\frac{(1-y^2)\,dx^2+2xy\,dx\,dy+(1-x^2)\,dy^2}{(1-x^2-y^2)^2}$$ defined on the unit disc. In any case, a rather involved computation confirms that the Gau[ß]{}ian curvature is constant in the general case, specifically $$K=r(su-t^2)-p^2u+2pqt-q^2s.$$ [11]{} E. Beltrami, [*Risoluzione del problema: riportare i punti di una superficie sopra un piano in modo che le linee geodetiche vengano rappresentate da linee rette*]{}, Ann. Mat. Pura Appl. [**7**]{} (1865) 185–204. A. Čap and J. Slovák, [*Parabolic Geometries I: Background and General Theory*]{}, American Mathematical Society 2009. A.J. Di Scala, [*Corchete y curvatura*]{}, Rev. Colombiana Mat. [**39**]{} (2005) 113–131. R. Liouville, [*Sur les invariantes de certaines équations différentielle et sur leurs applications*]{}, Jour. l’École Politechnique [**59**]{} (1889) 7–76.
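The constancy of the curvature is also easy to check by machine. The following minimal sketch (sympy together with the Brioschi formula; an independent verification rather than part of the original computation) confirms the two special cases named above, for which the formula for $K$ predicts $+1$ and $-1$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def gauss_curvature(E, F, G):
    # Brioschi formula for K of the metric E dx^2 + 2F dx dy + G dy^2.
    Ex, Ey = sp.diff(E, x), sp.diff(E, y)
    Gx, Gy = sp.diff(G, x), sp.diff(G, y)
    Fx, Fy = sp.diff(F, x), sp.diff(F, y)
    M1 = sp.Matrix([
        [-sp.diff(E, y, 2)/2 + sp.diff(F, x, y) - sp.diff(G, x, 2)/2, Ex/2, Fx - Ey/2],
        [Fy - Gx/2, E, F],
        [Gy/2, F, G]])
    M2 = sp.Matrix([[0, Ey/2, Gx/2], [Ey/2, E, F], [Gx/2, F, G]])
    return sp.simplify((M1.det() - M2.det()) / (E*G - F**2)**2)

# Thales metric: cross term -2xy dx dy, denominator (1 + x^2 + y^2)^2
D1 = (1 + x**2 + y**2)**2
K_thales = gauss_curvature((1 + y**2)/D1, -x*y/D1, (1 + x**2)/D1)

# Beltrami metric: cross term +2xy dx dy, denominator (1 - x^2 - y^2)^2
D2 = (1 - x**2 - y**2)**2
K_beltrami = gauss_curvature((1 - y**2)/D2, x*y/D2, (1 - x**2)/D2)

print(K_thales, K_beltrami)   # expected: 1 and -1
```

In principle the same routine, run with $p,q,r,s,t,u$ kept symbolic, should reproduce the general expression $K=r(su-t^2)-p^2u+2pqt-q^2s$, at the cost of a heavier simplification.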
--- abstract: 'Entanglement in high-dimensional quantum systems, where one or more degrees of freedom of light are involved, offers increased information capacities and enables new quantum protocols. Here, we demonstrate a functional source of high-dimensional, noise-resilient hyperentangled states encoded in time-frequency and vector-vortex structured modes, which in turn carry single-particle entanglement between polarisation and orbital angular momentum. Pairing nonlinearity-engineered parametric downconversion in an interferometric scheme with spin-to-orbital-angular-momentum conversion, we generate highly entangled photon pairs at telecom wavelength that we characterise via two-photon interference and quantum state tomography, achieving near-unity visibilities and fidelities. While hyperentanglement has been demonstrated before in photonic qubits, this is the first instance of such a rich entanglement structure involving spectrally and spatially structured light, where three different forms of entanglement coexist in the same biphoton state.' author: - Francesco Graffitti - 'Vincenzo D’Ambrosio' - Massimiliano Proietti - Joseph Ho - Bruno Piccirillo - Corrado de Lisio - Lorenzo Marrucci - Alessandro Fedrizzi title: Hyperentanglement in structured quantum light --- Photonic platforms are a natural choice for many quantum applications owing to their advantages as low-noise quantum systems with high-fidelity control and suitability for long-distance transmission. Binary encoding has been the most common choice from the early, proof-of-principle experiments to the most recent photonic quantum protocols. However, there are scenarios that benefit from expanding the system dimensionality, e.g. for enhancing information capacity, noise resilience and robustness against external attacks in quantum cryptography [@Erhard2018; @Cozzolino19]. Intrinsically high-dimensional degrees of freedom (DOF) of light—such as orbital angular momentum (OAM), time and frequency—enable a larger quantum alphabet in a single photon state. The combination of two or more DOFs of light—including entanglement across them, namely *hyperentanglement* [@PhysRevLett.95.260501]—allows further expansion of the Hilbert space while providing easy access to the individual subsystems for selective control and measurements, improving existing protocols or enabling new ones [@D'Ambrosio2012; @PhysRevX.5.041017]. Hyperentanglement in particular enables protocols like complete Bell-state analysis [@PhysRevA.68.042313; @PhysRevLett.96.190501; @PhysRevA.75.042317] and logic gates simplification [@Lanyon2009], and has been used in cluster state generation [@PhysRevA.81.052301; @Ciampini2016] as well as in testing quantum foundations [@PhysRevLett.97.140407]. Moreover, hyperentangled systems have been successfully used for demonstrations of quantum dense coding [@Barreiro2008] and teleportation of multiple DOFs of a single photon [@Wang2015]. Photonic hyperentangled states have been demonstrated in different encoding regimes, as polarisation, time and frequency bins, path, and OAM [@PhysRevLett.95.260501; @PhysRevA.75.042317; @Xie2015]. While entanglement between three DOFs have been achieved in the past, e.g. 
for significantly expanding the accessible Hilbert-space dimension in two- or multi-photon experiments [@PhysRevLett.95.260501; @PhysRevLett.120.260502], the generation of hyperentanglement of spatially or spectrally structured light has, to our knowledge, so far remained elusive, probably due to the complexity in accessing such encoding regimes. However, structured light modes—where one or more degrees of freedom are modulated into custom light fields—are a useful resource in quantum photonics applications, spanning communication, metrology and imaging among others [@Rubinsztein_Dunlop_2016]. In this work, we fill this gap combining a nonlinearity engineering technique [@Graffitti_2017; @PhysRevLett.124.053603] with a spin-to-orbital-angular-momentum conversion scheme [@PhysRevLett.96.163905] to generate and characterise a biphoton state that exhibits complex entanglement between spectrally and spatially structured light. We produce hyperentanglement between time-frequency modes (TFM)—temporal/spectral envelopes of the electric field of the photons [@PhysRevX.5.041017; @PhysRevLett.124.053603], and vector vortex beams (VVB)—spatially structured beams characterised by a non-uniform polarisation pattern on their transverse profile [@DENNIS2009293; @Cardano:12; @PhysRevA.100.063842]. Due to their resilience to different noise sources, both TFMs [@Eckstein:11; @ding2019highdimensional] and VVBs [@D'Ambrosio2012; @Farias2015] provide ideal encodings in free-space communication schemes, while sources that generate polarisation-TFM hyperentanglement could be immediately deployed in current telecom networks, being both degrees of freedom already compatible with fibre propagation. It has recently been shown that VVB can be supported in particular fibres [@10.1117/1.AP.1.4.046005], opening the prospect of fibre-based communication schemes exploiting the full capabilities of our scheme. ![**Sketch of the biphoton hyperentangled state.** The overall quantum state $\ket{\Psi}$ (first row), exhibits hyperentanglement between spectrally and spatially structured light. Signal and idler are encoded in the $\ket{\psi^-}_\omega$ state of the TFM basis, represented with the biphoton joint spectrum and the corresponding expansion in TFMs (first box), and in any Bell states of the VVB basis (we only display $\ket{\psi}$-type states for compactness). Each photon is also in a single-particle entangled state between polarisation and OAM, giving rise to a VVB (second box). The plus (minus) sign corresponds to the radially (azimuthally) polarised beams, respectively.[]{data-label="fig:state"}](state3box.jpeg){width="0.95\columnwidth"} The generation of hyperentanglement between time-frequency modes and vector vortex beams implies the ability to independently create spectrally and spatially structured light. TFM encoding can be achieved via nonlinearity engineering, a technique that tailors the phasematching function in parametric downconversion (PDC) processes by modifying the ferroelectric structure of periodically-poled nonlinear crystals. Originally used for generating spectrally-pure heralded single photons [@Branczyk:11; @Graffitti_2017; @Graffitti:18], non-linearity engineering has since been applied to generate TFM entanglement with high fidelity [@PhysRevLett.124.053603]. Vector vortex beams on the other hand can be efficiently created by converting polarisation encoding into polarisation-OAM entanglement by means of birefringent liquid crystal devices known as q-plates [@PhysRevLett.96.163905]. 
Our scheme combines these two techniques with an interferometric Sagnac scheme [@Fedrizzi:07] for generating the highly-entangled state with the nontrivial structure sketched in Fig. \[fig:state\]. We describe the experimental implementation in Fig. \[fig:setup\]. A Ti-Sapphire laser produces a train of near transform-limited, $1.3$ ps pulses centred at $775$ nm with $80$ MHz repetition rate. The laser is focused into a Sagnac interferometer for generating polarisation entanglement [@Fedrizzi:07], where a nonlinearity engineered crystal (details in Ref. [@PhysRevLett.124.053603]) simultaneously enables the generation of TFM entanglement in a tailored PDC process (see Sec. 1 of Supplemental Material for mathematical details on the state generation). This first section of the setup produces biphoton states that carry hyperentanglement between the maximally antisymmetric TFM Bell-state and a polarisation Bell-state. The PDC photons (signal “s” and idler “i”) have a bandwidth of $\sim2.4$ nm (defined as the full-width at half maximum of the marginal photon’s spectral intensity) and, after being separated at a polarising beamsplitter (PBS), are spectrally filtered with a long-pass filter (cut-off wavelength at $1400$ nm) and a “loose” bandpass filter ($10$ nm nominal bandwidth) before being coupled into single-mode fibres for spatial mode filtering. A set of quarter-wave plate (QWP), half-wave plate (HWP) and QWP is used to prepare any maximally-entangled polarisation state via local operations. ![**Experimental setup.** Interferometric scheme to produce hyperentangled biphoton states in TFM and polarisation encoding, $\ket{ \psi^- }_\omega \otimes \ket{ \psi^-}_{\textrm{pol}}$ (top); Setup to produce $\ket{\psi^-}_\omega \otimes \ket{\psi^\varphi}_\text{VVB}$ converting polarisation encoding into VVB encoding, and to measure the overall state via two-photon interference and tomographic reconstruction (bottom). A set of QWP, HWP, QWP is used to prepare any maximally entangled state in the radial/azimuthal VVB basis. When fast axis of the QWPs is aligned, a rotation of the HWP corresponds to changing the phase factor of the Bell-like state. The polarisation projection stages in the dashed boxes, each consisting of QWP, HWP, and polariser, are used to perform projective measurements on the polarisation of the photons for the sixteen-dimensional polarisation-OAM biphoton state reconstruction. []{data-label="fig:setup"}](setup.pdf){width="0.90\columnwidth"} Each photon is sent through a q-plate for converting polarisation encoding into VVB encoding, producing the target TFM-VVB hyperentangled state. A q-plate with topological charge $q$ (with $q=0.5$ in our setup) implements the following transformation: $\alpha \ket{R ,0} + \beta \ket{L ,0} \to \alpha \ket{L ,-2q} + \beta \ket{R ,2q}$ where the first and second label correspond to polarisation and OAM value, respectively. If the input polarisation is linear, a VVB in a linear superposition of the basis states $\ket{\hat{r}}$ (radially polarised) and $\ket{\hat{\theta}}$ (azimuthally polarised) is produced in the process. When each photon of polarisation-entangled biphoton state is sent through a q-plate, the overall system consists of two entangled vector vortex beams [@PhysRevA.94.030304]. 
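To make the bookkeeping of the two q-plates explicit, their action on a polarisation Bell state can be written out in a few lines. The following is a minimal numerical sketch (our own illustration, with an arbitrary phase convention; in the experiment the input Bell state is selected by the wave-plate angles described above):

```python
import numpy as np

# Single-photon bases: polarisation {R, L}, OAM {-1, +1} after the q-plate.
R, L = np.array([1, 0]), np.array([0, 1])
m_minus, m_plus = np.array([1, 0]), np.array([0, 1])

# q-plate with q = 0.5 acting on a photon that enters with OAM 0:
#   |R,0> -> |L,-1>,   |L,0> -> |R,+1>
# written as an isometry from the 2-dim polarisation space into the
# 4-dim polarisation (x) OAM space of the outgoing photon.
V = np.zeros((4, 2), dtype=complex)
V[:, 0] = np.kron(L, m_minus)   # image of |R,0>
V[:, 1] = np.kron(R, m_plus)    # image of |L,0>

def apply_qplates(two_photon_pol_state):
    """One q-plate per photon: maps a 4-dim two-photon polarisation state
    (basis {RR, RL, LR, LL}) into the 16-dim polarisation (x) OAM space."""
    return np.kron(V, V) @ two_photon_pol_state

# psi^- polarisation input -> antisymmetric Bell state of the structured photons
psi_minus = (np.kron(R, L) - np.kron(L, R)) / np.sqrt(2)
expected = (np.kron(np.kron(L, m_minus), np.kron(R, m_plus))
            - np.kron(np.kron(R, m_plus), np.kron(L, m_minus))) / np.sqrt(2)
print(np.allclose(apply_qplates(psi_minus), expected))   # True

# A phi^+-type input in the circular basis instead gives the GHZ-type structure
phi_plus = (np.kron(R, R) + np.kron(L, L)) / np.sqrt(2)
ghz = (np.kron(np.kron(R, m_plus), np.kron(R, m_plus))
       + np.kron(np.kron(L, m_minus), np.kron(L, m_minus))) / np.sqrt(2)
print(np.allclose(apply_qplates(phi_plus), ghz))          # True
```

A $\ket{\psi^-}$ polarisation input is thus mapped to the corresponding antisymmetric Bell state of the two structured photons, while a $\ket{\phi^+}$-type input in the circular basis yields, up to local phases, the GHZ-type polarisation-OAM structure discussed below.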
At the output of the q-plates, the biphoton state shows a nontrivial entanglement structure, where three different forms of entanglement coexist in the same quantum state: hyperentanglement between TFMs and VVBs, which is in turn composed of single-particle (intrasystem) entanglement [@Aiello_2015; @PhysRevA.92.023833; @PhysRevA.94.030304; @PhysRevA.100.063842]—polarisation and OAM of each photon—and two distinct sets of intersystem entanglement—between the two VVBs and between the two TFMs, as sketched in Fig. \[fig:state\]. We note that this scheme allows one to generate states within a two-dimensional VVB subspace; additional HWPs after the q-plates enable the generation of any VVB state in the four-dimensional space [@PhysRevA.94.030304].

The analysis stage consists of two main steps. First, we send the two photons onto a BS to check for quantum interference depending on the symmetry of the full state [@PhysRevLett.124.053603]. After the BS, a set of two additional q-plates (with $q=0.5$) and polarisation optics (QWP, HWP and PBS for each photon) are used to perform tomographic projections in the VVB space. The photons are finally detected with superconducting nanowire single-photon detectors (SNSPDs) with $80\%$ nominal quantum efficiency. An additional tomographic projection set can be added before the measurement q-plates to perform a four-qubit tomography in the polarisation and OAM subspaces of the VVBs, simultaneously certifying the intrasystem entangled structure of each VVB and the intersystem entanglement between the two photons [@PhysRevA.94.030304], and verifying the GHZ-type structure of the state [@Carvacho2017].

We produce states of the form $\ket{\psi^-}_\omega \otimes \ket{\psi^\varphi}_\text{VVB}$, where $\ket{\psi^-}_\omega = 1/\sqrt{2}\left( \ket{{\includegraphics[height=7pt]{hg0.pdf}}, {\includegraphics[height=7pt]{hg1.pdf}}}_i - \ket{{\includegraphics[height=7pt]{hg1.pdf}}, {\includegraphics[height=7pt]{hg0.pdf}}}_i \right)$ is the antisymmetric Bell-state in the TFM Bell-basis, and $\ket{\psi^\varphi}_\text{VVB} = 1/\sqrt{2} \left( \ket{\hat{r},\hat{\theta}} + e^{i \varphi} \ket{\hat{\theta},\hat{r}} \right)$ is a $\psi$-type maximally entangled state in the VVB basis. By changing the HWP angle in the state preparation we can change the phase of the VVB part of the state and hence the symmetry of the overall wavefunction. The phase factors $e^{i 0}$ and $e^{i \pi}$ correspond to the $\ket{\psi^-}_\omega \otimes \ket{\psi^+}_\text{VVB}$ and $\ket{\psi^-}_\omega \otimes \ket{\psi^-}_\text{VVB}$ states, i.e. to a maximally antisymmetric and symmetric state, respectively. This translates into different interference behaviour at the BS, where moving from an antisymmetric to a symmetric state corresponds to moving from photon antibunching to photon bunching, as we show in Fig. \[fig:results\].

![**Two-photon interference results.** **(a)** The interference fringes depend on the phase in the VVB-encoded part of the hyperentangled state: a maximum in the coincident counts at the two outputs of the BS (labeled as $\sum_{ij}\!A_iB_j$, where $A_i$, $B_j$ are the detectors in Fig. \[fig:setup\]) corresponds to a minimum in the coincident counts at each BS output (labeled as $A_1A_2+B_1B_2$), and vice-versa. **(b)** Interference patterns obtained by collecting coincident counts at the two outputs of the BS and **(c)** the corresponding visibilities, when changing the phase and the relative arrival time of the photons at the BS.
By controlling the phase of the $\ket{\psi}$-type state we can move from almost perfect antibunching to bunching, i.e. from an overall antisymmetric state to a symmetric one. []{data-label="fig:results"}](figResults.pdf){width="0.90\columnwidth"}

We monitor coincident counts between detectors $\left\{\text{A}_1,\text{B}_1\right\}$, $\left\{\text{A}_1,\text{B}_2\right\}$, $\left\{\text{A}_2,\text{B}_1\right\}$, $\left\{\text{A}_2,\text{B}_2\right\}$, corresponding to the photons exiting from both outputs of the BS, and between the detectors $\left\{\text{A}_1,\text{A}_2\right\}$, $\left\{\text{B}_1,\text{B}_2\right\}$, corresponding to the photons exiting the same outputs of the BS, to reconstruct the interference fringes as a function of the state’s phase $\varphi$. We show the results in Fig. \[fig:results\] (a): the fringes corresponding to antibunching (blue dots) and to bunching (red triangles) are in antiphase, and have high visibilities ($96.7\pm0.2\%$ and $99.3\pm0.1\%$, respectively), certifying the high quality of the generated state. By varying both the state’s phase and the relative arrival time of signal and idler at the BS, we can reconstruct the full biphoton interference pattern for states with different amounts of antisymmetry. The 3D plot in Fig. \[fig:results\](b) shows how the interference pattern changes from perfect antibunching (corresponding to $\varphi = 0$) to perfect bunching ($\varphi = \pi$), in excellent agreement with the theoretical model we discuss in Sec. 2 of the Supplemental Material. Finally, Fig. \[fig:results\](c) shows the interference visibilities of each scan, where plus and minus $100\%$ correspond to perfect antibunching and bunching, respectively.

The two-photon interference allows us to measure the overall antisymmetry of the biphoton state, but it does not provide any information on its spatial structure. The VVB state can instead be measured via quantum state tomography after the interference at the BS. We prepare the overall antisymmetric state $\ket{\psi^-}_\omega \otimes \ket{\psi^+}_\text{VVB}$, which antibunches at the BS. We then convert the VVB information into polarisation information, and we perform an overcomplete quantum state tomography of the state. We measure a purity and fidelity of $(99.26\pm0.07)\%$ and $(99.57\pm 0.03)\%$, respectively, in the VVB subspace $\left\{ \ket{\hat{r}}, \ket{\hat{\theta}} \right\}$ (see Fig. \[fig:tomo\](a)). Introducing an additional tomographic projection before each measurement q-plate allows us to investigate the polarisation-OAM intrasystem entanglement and, at the same time, the two-photon intersystem entanglement [@PhysRevA.94.030304]. With this scheme, we measure a two-photon, four-qubit purity of $(92.4\pm0.1)\%$ and a fidelity of $(95.0\pm0.1)\%$ with the GHZ state $1/\sqrt{2}(\ket{R,+1,R,+1} + \ket{L,-1,L,-1})$: we show the corresponding density matrix in Fig. \[fig:tomo\](b). The high interference visibility measured in both the bunching and antibunching configurations, combined with the high state quality obtained via tomographic reconstruction of the VVB-encoded state, testifies to an unprecedented capability of generating and manipulating structured light encoded in three different degrees of freedom with very high efficiency and precision.

![**Tomography results.** **(a)** Tomographic reconstruction of the biphoton $\ket{\psi^+}$ state in the VVB subspace $\left\{ \ket{\hat{r}}, \ket{\hat{\theta}} \right\}$.
**(b)** Tomographic reconstruction of the biphoton GHZ state encoded in polarisation and OAM.[]{data-label="fig:tomo"}](tomo.pdf){width="0.90\columnwidth"}

Many photonic quantum protocols rely on entanglement to carry out their tasks efficiently; therefore, the capability of generating and manipulating complex entangled states of light with high precision is a fundamental requirement and a key challenge of quantum technologies. Here, we tackled this problem by introducing and experimentally demonstrating a scheme for efficient generation of a complex entanglement structure between three DOFs of light: polarisation, orbital angular momentum and time-frequency modes. To our knowledge, neither TFM encoding nor VVB encoding has been combined with other degrees of freedom before, while our work introduces a simple, yet high-quality source of TFM-VVB hyperentanglement. We expect our scheme will find applications not only in quantum communication (where increased information capacity and noise resilience are obvious advantages) but also in other areas of quantum technologies, such as metrology or imaging, where both TFM and VVB encoding have already been independently used as a resource [@D'Ambrosio2013; @PhysRevLett.121.090501]. There are two main routes to go beyond the results of this work in the future. On the one hand, it would be ideal to explore the intrinsic high-dimensionality of these DOFs, generating higher order OAM and TFM states to increase the information capacity of the biphoton state and investigate even more complex entanglement structures. On the other hand, implementing quantum pulse gates or other schemes [@Huang:13; @PhysRevLett.120.213601; @PhysRevX.5.041017; @Reddy:18] for performing TFM manipulation and measurements would allow one to fully exploit the potential of our technique.

**Acknowledgements** This work was supported by the UK Engineering and Physical Sciences Research Council (Grant Nos. EP/N002962/1 and EP/T001011/1). FG acknowledges studentship funding from EPSRC under Grant No. EP/L015110/1. Italian Ministry of Education, University and Research (MIUR) through the PRIN Project ‘INPhoPOL’. European Union Horizon 2020 program, within the European Research Council (ERC) Grant No. 694683, PHOSPhOR.

**Additional information** See Supplemental Material for supporting content.
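For completeness, the figures of merit quoted above are the standard ones: for a reconstructed density matrix $\rho$ and a pure target $\ket{\psi}$, the purity is $\mathrm{Tr}(\rho^2)$ and the fidelity is $\bra{\psi}\rho\ket{\psi}$. A minimal sketch with a toy noisy state (illustrative numbers only, not the experimental data):

```python
import numpy as np

def purity(rho):
    return np.real(np.trace(rho @ rho))

def fidelity_with_pure(rho, psi):
    # For a pure target |psi>, F = <psi| rho |psi>.
    return np.real(psi.conj() @ rho @ psi)

# Toy example: a GHZ-type target in a 16-dim two-photon polarisation/OAM space
# (schematic basis ordering), mixed with a little white noise as a stand-in
# for experimental imperfections.
dim = 16
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
p_noise = 0.05
rho = (1 - p_noise) * np.outer(ghz, ghz) + p_noise * np.eye(dim) / dim

print(f"purity   = {purity(rho):.4f}")
print(f"fidelity = {fidelity_with_pure(rho, ghz):.4f}")
```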
<https://doi.org/10.1038/lsa.2017.146>
<https://doi.org/10.1002/qute.201900038>
<https://doi.org/10.1103/PhysRevLett.95.260501>
<https://doi.org/10.1038/ncomms1951>
<https://doi.org/10.1103/PhysRevX.5.041017>
<https://doi.org/10.1103/PhysRevA.68.042313>
<https://doi.org/10.1103/PhysRevLett.96.190501>
<https://doi.org/10.1103/PhysRevA.75.042317>
<https://doi.org/10.1038/nphys1150>
<https://doi.org/10.1103/PhysRevA.81.052301>
<https://doi.org/10.1038/lsa.2016.64>
<https://doi.org/10.1103/PhysRevLett.97.140407>
<https://doi.org/10.1038/nphys919>
<https://doi.org/10.1038/nature14246>
<https://doi.org/10.1038/nphoton.2015.110>
<https://doi.org/10.1103/PhysRevLett.120.260502>
<https://doi.org/10.1088/2040-8978/19/1/013001>
<https://doi.org/10.1088/2058-9565/aa78d4>
<https://doi.org/10.1103/PhysRevLett.124.053603>
<https://doi.org/10.1103/PhysRevLett.96.163905>
<https://doi.org/10.1364/AO.51.0000C1>
<https://doi.org/10.1103/PhysRevA.100.063842>
<https://doi.org/10.1364/OE.19.013770>
<https://doi.org/10.1038/srep08424>
<https://doi.org/10.1117/1.AP.1.4.046005>
<https://doi.org/10.1364/OE.19.000055>
<https://doi.org/10.1364/OPTICA.5.000514>
<https://doi.org/10.1364/OE.15.015377>
<https://doi.org/10.1103/PhysRevA.94.030304>
<https://doi.org/10.1088/1367-2630/17/4/043024>
<https://doi.org/10.1103/PhysRevA.92.023833>
<https://doi.org/10.1038/s41598-017-13124-6>
<https://doi.org/10.1038/ncomms3432>
<https://doi.org/10.1103/PhysRevLett.121.090501>
<https://doi.org/10.1364/OL.38.000468>
<https://doi.org/10.1103/PhysRevLett.120.213601>
<https://doi.org/10.1364/OPTICA.5.000423>
[**[Linear Kondo conductance in a quantum dot]{}**]{} $^{\ddagger }$, [*Adele Naddeo$^{\dagger \ast }$ and Arturo Tagliacozzo$^{\dagger \ast }$*]{} [$~^{\dagger }$ [*Coherentia - INFM*]{} (Istituto Nazionale di Fisica della Materia ), Unità di Napoli, Napoli, Italy\ ]{} [$^{\ast }$ Dipartimento di Scienze Fisiche Università di Napoli ”Federico II ”,]{} [Monte S.Angelo - via Cintia, I-80126 Napoli, Italy]{} > In a tunneling experiment across a quantum dot it is possible to change the coupling between the dot and the contacts at will, by properly tuning the transparency of the barriers and the temperature. Gate voltages allow for changes of the relative position of the dot addition energies and the Fermi level of the leads. Here we discuss the two limiting cases: weak and strong coupling in the tunneling Hamiltonian. In the latter case Kondo resonant conductance can emerge at low temperature in a Coulomb blockade valley. We give a pedagogical approach to the single channel Kondo physics at equilibrium and review the Nozières scattering picture of the correlated fixed point. We emphasize the effect of an applied magnetic field and show how an orbital Kondo effect can take place in vertical quantum dots tuned both to an even and an odd number of electrons at a level crossing. We extend the approach to the two channel overscreened Kondo case and discuss recent proposals for detecting the non Fermi Liquid fixed point which could be reached at strong coupling. Introduction ============ Recently systems have been fabricated which can sustain quantum coherence because of the smallness of their size, provided that the temperature is low enough. These mesoscopic systems are nanostructured devices in which quantum coherence sets in at very low temperatures and modifies the properties of the device as a whole. This happens notwithstanding the fact that the system is connected to an external biasing circuit. They have global, geometry dependent properties and striking quantization phenomena arise which are largely independent of the specific sample involved: charge quantization (in unit of the electron charge $e$), periodicity in the magnetic flux quantum $\phi _{o}=hc/e$, conductance quantization (in units of $G_{K}=2e^{2}/\pi \hbar =(6.5K\Omega )^{-1}$). Among these macroscopic quantum phenomena there is the unitary limit of the Kondo conductance in tunneling across a quantum dot (QD) at Coulomb Blockade (CB) [@raikh][@aleiner] that was first measured in 1998 [@goldhaber]. The Kondo phenomenon is well known since the sixties and explains an anomaly in the temperature dependence of the resistivity of diluted magnetic alloys[@kondo][@hewson]. This review is devoted to some features of equilibrium Kondo conductance across a quantum dot in the CB regime interacting with two contacts. It is remarkable here that the dot acts as a single impurity probed by the metal contacts, so that the properties of the Kondo state are not mediated over many impurities per cubic centimeter as it happens in diluted magnetic alloys. A strongly coupled state sets in between the dot and the contacts and phase coherence is established among the conduction electrons and the whole structure. The temperature scale for the interactions between a magnetic impurity and the delocalized conduction electrons of the host metal is the so called Kondo temperature $T_{K}$. It is defined as the temperature at which the perturbative analysis breaks down. 
Therefore, different approaches are required to investigate the thermodynamics and the transport properties of a quantum dot in the Kondo regime in the whole range of temperatures from the perturbative region down to the unitarity limit. Recently, the Kondo model and the Anderson impurity model in its Kondo limit have been deeply investigated by numerical renormalization group (NRG) calculations [@nrg], the Bethe ansatz method [@tsvi] and conformal field theory (CFT) techniques [@ludaff]. Further developments in the NRG methods have been applied successfully in the crossover region $T\approx T_{K}$ [@bickers]. The zero field spectral and transport properties of the Anderson model in the Kondo regime [@costi] as well as the field and temperature dependence of the Kondo resonance and the equilibrium magnetoconductance of the dot [@costi1] have been investigated. The tunneling conductance as a function of the gate voltage has also been calculated with NRG, in wide temperature range for a single quantum dot with Coulomb interactions, assuming that two orbitals were active for the tunneling process [@sakai]. We leave out the case when the electron distribution is not in local equilibrium about the Kondo impurity and the linear response theory is no longer sufficient. A number of techniques have been applied to describe nonequilibrium properties such as the nonlinear conductance, the nonequilibrium stationary state or the full time development of an initially out-of-equilibrium system: variational calculations [@ng], perturbation theory [@davies], equation of motion [@meir], perturbative functional integral methods [@koenig], non-crossing approximation (NCA) [@meir1][@kroha], perturbative renormalization group (RG) method in real time [@schoeller]. In particular, the last technique is well suited to describe quantum fluctuations which are induced by strong coupling between a small quantum system and an environment. It succeeds in reproducing the anomalous line shapes of the conductance observed in several recent experiments [@goldhaber], due to the renormalization of the resistance and the local energy excitations [@schoeller1]. We also leave out situations in which the levels localized at the dot are close in energy to the Fermi energy of the contacts (mixed valence models). The plan of the paper is the following. We start from the tunneling hamiltonian formalism when the coupling between the dot and the contacts is weak (Section $II$). A scattering approach in one-dimension is particularly suitable when studying the linear conductance in the device. The prototype model to account for strong electron-electron repulsion on the dot is the Anderson Hamiltonian with onsite repulsion (Section $III$). In the limit of strong correlation between the dot and the leads the latter model maps onto the so called “Kondo Hamiltonian model“ (Section $IV$). A [*poor man’s scaling* ]{}approach up to second order leads us to the definition of the Kondo temperature $T_{K}$. Next, we introduce the physics of the single channel (Section $VI$) and the two channel Kondo problem (Section $VII$), both in the perturbative region and in the unitarity limit. In such a context we briefly sketch the Anderson, Yuval and Hamann Coulomb Gas approach [@yuval] to the isotropic and anisotropic one channel Kondo system which gives a straightforward, though qualitative, insight of the strongly correlated state. 
In the conventional setting the dynamical variable which is coupled to the electrons propagating from the leads is the total dot spin. There are cases in which the spin is locked to orbital degrees of freedom or even absent (“orbital Kondo”) [@glazman2]; such cases are better found in a vertical geometry in presence of a magnetic field orthogonal to the dot [@noi]. A rather strong vertical magnetic field can induce level crossings in the dot due to orbital effects and produce accidental degeneracies which can give rise to exotic Kondo couplings (see Section $VI.B-C$). Some attention is drawn to the cylindrical geometry and to symmetry selection rules in the cotunneling process, also in connection with proposals for achieving the two channel Kondo non Fermi liquid fixed point in a vertical dot device [@arturo3][@oreg][@glazman3] (Section $VII.C$). Tunneling hamiltonian at weak coupling ====================================== Quantum dots (QD) are fabricated in semiconductor heterostructures, by applying metallic gates to confine few electrons [@kouwenhoven]. Because the confining area is quite small (the radius is $\sim 100\div 1000{{\mathring{A}}}$), the charging energy is much larger than the energy associated with thermal fluctuations, provided temperature is below $1^{\circ }K$. Therefore the dot is only weakly linked to metal contacts and one can bias the system in such a way that the electron number $N$ in the dot can be changed at will. Because of the confining potential, dots display a level structure organized in shells, exactly the same as atoms do. Analyzing this level structure is of primary interest because these few interacting electrons ($N\leq 20$) confined in a disk-like box (see Fig. 1), at special values of the parameters, are ruled by strong many-body Coulomb correlations. Hund’s rules can be seen at work: the total spin $S$ of the electrons confined in the dot is maximized as long as no magnetic field is applied. On the other hand, a magnetic field in the direction orthogonal to the disk produces strong orbital effects which favour larger values of the total angular momentum $M$ as well as total spin $S$. Quantum dots are remarkable because of Coulomb oscillations. At very low temperature linear conductance (at vanishing bias $V_{sd}$) is zero, except for peaks at discrete values of the gate voltage $V_{g}$, when it is energetically favourable to add one extra electron to the dot. Therefore $V_{g}$ controls the number of particles on the dot in the CB regime. We tune the chemical potential $\mu $ of the left (L) and right (R) bulk contacts within the CB valley of the conductance at $N$ particles, that is $\mu _{N}<\mu <\mu _{N+1,\alpha }\equiv ^{N+1}\!E_{\alpha }-^{N}\!E_{0}$; here the energies $^{N\pm 1}\!E_{\alpha }$ are the total energies for the dot with $N\pm 1$ particles characterized by the quantum numbers $\alpha $ and $^{N}\!E_{0}$ is the ground state energy (GS) at $N$ particles. If $N$ is odd, then the GS is at least doubly degenerate because the odd particle state can have spin $\sigma =\uparrow ,\downarrow $. In this case the dot can be treated as a single Anderson impurity. The peculiarity of these artificial atoms is in that the level structure can be investigated by measuring the tunneling current. Peaks in the linear conductance versus gate voltage $V_{g}$ separate regions at Coulomb blockade (CB) in which $N$ is fixed and differs by just one electron. 
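How the addition energies translate into the positions of the conductance peaks can be illustrated with the simplest constant-interaction caricature (a sketch only, with invented parameters; the discussion in the text keeps the full many-body energies $^{N}\!E_{\alpha }$):

```python
import numpy as np

# Constant-interaction sketch: E_N = sum of the lowest N single-particle
# levels + E_C*N^2/2, so the addition chemical potential is mu_N = E_N - E_{N-1}.
E_C = 2.0                                                  # charging energy (meV, illustrative)
eps = np.sort(np.array([0.0, 0.0, 0.5, 0.5, 1.2, 1.2]))   # spin-degenerate levels (meV)

def total_energy(N):
    return np.sum(eps[:N]) + 0.5 * E_C * N**2

mu_add = [total_energy(N) - total_energy(N - 1) for N in range(1, len(eps) + 1)]
print("addition chemical potentials mu_N (meV):", np.round(mu_add, 2))
# A conductance peak appears whenever the gate voltage aligns one of these mu_N
# with the lead chemical potential; between two successive mu_N the dot sits in
# a Coulomb-blockade valley with fixed N.
```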
The peaks at zero source-drain voltage occur whenever $V_{g}$ matches the chemical potential required for electron addition or subtraction. Hence, measuring the tunnel current across the device versus $V_{g}$ provides the spectroscopy of the dot levels. By adding a magnetic field the spin state of the dot can be changed, which in turn changes the selection rules for electron tunneling [@arturo1]. The height and the width of the conductance peaks at resonance depend on the temperature $T$. Let the dimensionless conductance be $g$ (in units of $2e^{2}/h$; the factor of $2$ accounts for spin). In the simplest case the maximum of the conductance at the resonant peak is $g_{max}\propto \Gamma /(k_{B}T)$, with $\Gamma =\pi \nu \left( 0\right) \left| V\right| ^{2}$, while the halfwidth of the peak is $\propto k_{B}T$; here $V$ is the tunneling strength (see below) and $\nu (0)$ is the density of states at the Fermi energy. In the CB region tunneling via virtual states of the dot with $N\pm 1$ is only a fourth-order process in $V$ (so-called cotunneling), $g\propto e^{2}\pi \left( \frac{2\nu \left( 0\right) |V|^{2}}{\epsilon _{d}}\right) ^{2}/\hbar $, and vanishes exponentially with decreasing temperature. Nevertheless, and unexpectedly, transport measurements in a dot reveal new physics which can be related to the Kondo effect of spin impurities in nonmagnetic metal alloys (see Fig. 2) [@goldhaber]. In the following we introduce the model for a quantum dot interacting with the contacts and define an equivalent one-dimensional Hamiltonian.

Current within the tunneling Hamiltonian
----------------------------------------

In this Subsection we discuss the mutual influence between the contacts and the dot, starting from the weak link limit. Conduction electrons in the leads constitute a Fermi sea (FS) of noninteracting fermions with plane-wave single-particle wavefunctions on side $L$ and on side $R$. The dot is described by a Hamiltonian $H_{D}$ and the coupling between the dot and the leads is accounted for through a tunneling term, so that the Hamiltonian describing the whole system is: $$H=H_{D}+\sum_{k\sigma }\epsilon _{k\sigma }^{R}a_{k\sigma }^{\dagger }a_{k\sigma }+\sum_{k\sigma }\epsilon _{k\sigma }^{L}b_{k\sigma }^{\dagger }b_{k\sigma }+\sum_{k\alpha \sigma }[V_{k\alpha }c_{k\alpha \sigma }^{\dagger }d_{\sigma }+V_{k\alpha }^{\ast }d_{\sigma }^{\dagger }c_{k\alpha \sigma }]. \label{ap1}$$ The right and left Fermi seas of the two contacts $R,L$ are acted on by the operators $c_{k R \sigma }\equiv a_{k\sigma }$ and $c_{k L \sigma } \equiv b_{k\sigma }$ (here the index $\alpha $ stands for $R,L$). The canonical transformation of the dot problem made by Glazman and Raikh [@raikh] is just the construction of two species of fermions whose wavefunction is even or odd with respect to the dot center, even when the two barriers $V_{\alpha }=V_{R},V_{L}$ are unequal. It changes the picture from operators $a_{k\sigma }$ and $b_{k\sigma }$ to operators $\alpha _{k\sigma }$ and $\beta _{k\sigma }$ given by $$\begin{aligned} \alpha _{k\sigma } &=&ua_{k\sigma }+vb_{k\sigma },\text{ \ \ }\beta _{k\sigma }=ub_{k\sigma }-va_{k\sigma } \nonumber \\ u &=&\frac{V_{R}}{V},\text{ \ }v=\frac{V_{L}}{V},\hspace*{0.4cm}V=\sqrt{|V_{R}|^{2}+|V_{L}|^{2}},\hspace*{0.4cm}\Gamma _{R,L}=\pi \left| V_{R,L}\right| ^{2}\nu (0) \label{tu1}\end{aligned}$$ where $\nu (0)$ is the density of states at the Fermi energy.
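A minimal numerical check of the rotation in eq. (\[tu1\]) may help: for a single $k$ level per lead coupled to one dot level (a toy model with illustrative numbers), the odd combination $\beta _{k\sigma }$ decouples from the dot, while the even one $\alpha _{k\sigma }$ couples with strength $V=\sqrt{|V_{R}|^{2}+|V_{L}|^{2}}$.

```python
import numpy as np

# Toy single-particle Hamiltonian in the basis (a_k, b_k, d): one level per lead plus the dot.
eps_k, eps_d = 0.3, -0.1          # illustrative energies (arbitrary units)
V_R, V_L = 0.8, 0.4               # unequal tunneling amplitudes
H = np.array([[eps_k, 0.0,   V_R],
              [0.0,   eps_k, V_L],
              [V_R,   V_L,   eps_d]])

# Glazman-Raikh rotation of eq. (tu1): alpha = u*a + v*b, beta = u*b - v*a
V = np.hypot(V_R, V_L)
u, v = V_R / V, V_L / V
Rot = np.array([[ u,   v,  0.0],    # alpha row
                [-v,   u,  0.0],    # beta row
                [0.0, 0.0, 1.0]])   # dot unchanged

print(np.round(Rot @ H @ Rot.T, 12))
# The beta row/column has no matrix element to the dot: only the even combination
# couples, with strength V = sqrt(|V_R|^2 + |V_L|^2), as stated in the text.

nu0 = 1.0                                   # density of states (arbitrary units)
Gamma_R = np.pi * nu0 * V_R**2
Gamma_L = np.pi * nu0 * V_L**2
print("Gamma_R, Gamma_L, Gamma_R + Gamma_L =", Gamma_R, Gamma_L, Gamma_R + Gamma_L)
```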
A general formula for the conductance of interacting systems [@ng1][@meir][@jauho] has been derived, resorting to the nonequilibrium Keldysh formalism [@noneq]; in such a framework the current through the interacting region is written in terms of the distribution functions in the leads and local properties of the intermediate region, such as the occupation and the density of states at the dot site: $$J=\frac{ie}{2h}\int d\omega \left( Tr\left\{ 2\left[ f_{L}(\omega )\Gamma _{L}-f_{R}(\omega )\Gamma _{R}\right] \left( {\bf G}^{r}-{\bf G}^{a}\right) \right\} +Tr\left\{ 2\left( \Gamma _{L}-\Gamma _{R}\right) {\bf G}^{<}\right\} \right) , \label{currmeir}$$ where ${\bf G}^{r}$, ${\bf G}^{a}$, ${\bf G}^{<}$ are the usual retarded, advanced and Keldysh Green functions for the dot in interaction with the leads and the $f$ are the Fermi functions. The Green’s function ${\bf G}^{r}$ will be denoted as $G_{dd}$ in the following. Both ${\bf G}^{r}$ and ${\Gamma }_{R,L}$ are matrices if many channels are present. Formula (\[currmeir\]) can be cast in a simpler form when the couplings to the leads differ only by a constant factor [@meir]: $$J=-2\frac{e}{h}\int d\omega \left[ f_{L}(\omega )-f_{R}(\omega )\right] \Im mTr\left\{ \tilde{\Gamma}{\bf G}^{r}(\omega )\right\} , \label{condumeir}$$ where $\tilde{\Gamma}=\Gamma _{R}\Gamma _{L}/\Gamma $. Here $\Gamma = \Gamma _{R}+ \Gamma _{L}$. Now we derive explicitly $G_{dd}$ to lowest order in perturbation theory in the tunneling. We start from the Hamiltonian in eq. (\[ap1\]), where $H_{D}$ is given by a single impurity energy level $\epsilon _{d}$. All quantities will be scalar quantities for simplicity. We define the electron retarded Green’s function for the unperturbed leads: $G_{0}(\omega )=\sum_{k}(\omega -\epsilon _{k}+i0^{+})^{-1}$. Projecting the equations for the total Green’s function: $$\begin{aligned} (i\omega _{n}-H)G(i\omega _{n}) &=&{\bf 1} \nonumber \\ G(i\omega _{n})(i\omega _{n}-H) &=&{\bf 1}\end{aligned}$$ onto states in which one single extra particle is occupying the delocalized state $|k>$ or the impurity state $|d>$ we have: $$\begin{aligned} (i\omega _{n}-\epsilon _{k})G_{k,k^{\prime }\sigma }(i\omega _{n}) &=&\delta _{kk^{\prime }}+V_{k}G_{d,k^{\prime }\sigma }(i\omega _{n}) \nonumber \\ G_{d,k^{\prime }\sigma }(i\omega _{n})(i\omega _{n}-\epsilon _{k^{\prime}}) &=&G_{d,d\sigma }(i\omega _{n})V_{k^{\prime }}^{\ast }\end{aligned}$$ where $G_{d,k\sigma }(i\omega _{n})=<d|G|k>$, $G_{k,k\sigma }(i\omega _{n})=<k|G|k>$, $G_{d,d\sigma }(i\omega _{n})=<d|G|d>$ and $V_k = \langle k |V| d \rangle $. This gives for each scattering channel (here in order to simplify the notation we suppress the channel label $\alpha $): $$\begin{aligned} G_{k,k^{\prime }\sigma }(i\omega _{n}) &=&\frac{\delta _{kk^{\prime }}}{i\omega _{n}-\epsilon _{k}}+\frac{V_{k}}{i\omega _{n}-\epsilon _{k}}G_{d,d\sigma }(i\omega _{n})\frac{V_{k^{\prime }}^{\ast }}{i\omega _{n}-\epsilon _{k^{\prime }}} \nonumber \\ &\equiv &G_{k\sigma }^{0}\delta _{kk^{\prime }}+G_{k\sigma }^{0}V_{k}G_{d,d\sigma }V_{k^{\prime }}^{\ast }G_{k^{\prime }\sigma }^{0}. \label{greenk}\end{aligned}$$ The density of states of the scattering electrons is defined as: $$\nu (\omega )=-\frac{1}{\pi }\Im m\sum_{k}{\bf G}_{k,k\sigma }^{r}(\omega ).$$ Using similar equations for the dot, we write: $$G_{d,d\sigma }(i\omega _{n})=\frac{1}{i\omega _{n}-\epsilon _{d}-\sum_{k}|V_{k}|^{2}/(i\omega _{n}-\epsilon _{k})}.
\label{greend}$$ Now, let us consider the $L$ and $R$ contacts as two equal Fermi gases of noninteracting particles with the same chemical potential $\mu =0$ (in the linear-conductance regime no source-drain voltage $V_{sd}$ is applied). In such a case it is enough to discuss a single effective contact, and the corresponding wavefunctions are plane waves of wavevector $k$ in the $x$-direction with a label $\alpha $ which includes all other quantum numbers. Their energy dispersion $\epsilon _{k\alpha }$ can be linearized about $\mu $ with $\epsilon _{q}\approx \hbar v_{F}q$, where $q$ is the momentum measured with respect to the Fermi momentum and $v_{F}$ is the Fermi velocity. Using a constant density of states $\nu (0)$ (at the Fermi energy per spin, $L_{o}/2\pi \hbar v_{F}$, where $L_{o}$ is the size of the system) and a bandwidth of size $2D$, and neglecting the $k$ dependence of $V_{k}$, the sum in eq. (\[greend\]) is readily done: $$\sum_{k}\frac{|V_{k}|^{2}}{i\omega _{n}-\epsilon _{k}}=|V|^{2}\nu (0)\int_{-D}^{D}d\epsilon \frac{1}{i\omega _{n}-\epsilon }=\frac{\Gamma }{\pi }\ln \frac{i\omega _{n}+D}{i\omega _{n}-D}\equiv \Sigma _{d}(i\omega _{n}). \label{selfdot}$$ The retarded Green’s function for the dot is obtained from $G_{d,d\sigma }(i\omega _{n})$ in the limit to real frequencies, $i\omega _{n}\rightarrow \omega +i0^{+}$, according to: $${\bf G}_{d,d\sigma }^{-1}(i\omega _{n})=\left\{ i\omega _{n}-\epsilon _{d}-\Sigma _{d}(i\omega _{n})\right\} \rightarrow \omega -\epsilon _{d}-\frac{\Gamma }{\pi }\ln \left| \frac{D+\omega }{D-\omega }\right| +i\Gamma =\omega -\widetilde{\epsilon }_{d}+i\Gamma , \label{greendot}$$ where $\widetilde{\epsilon }_{d}$ is the renormalized dot energy. The coupling of the dot to the contacts shifts the location of the pole corresponding to the energy of the localized level and gives a finite width $\Gamma $ to the resonance.

1-d Scattering formalism
------------------------

At zero temperature scattering is only elastic. In the following we develop a $1-d$ scattering approach which is mostly useful in vertical structures [@erio]; next we show that the tunneling conductance can also be cast into the scattering formalism. If the evolution operator $U(t,t^{\prime })$ is known, the transmission amplitude can be written in a scattering approach as $\theta ^{R\rightarrow L}=<b^{\dagger }U(+\infty ,-\infty )a>=v^{\ast }u<\alpha ^{\dagger }(\infty )\;\alpha (-\infty )>-u^{\ast }v<\beta ^{\dagger }(\infty )\;\beta (-\infty )>\equiv v^{\ast }uS^{0}+u^{\ast }vS^{1}$, where we have defined the two expectation values as the scattering matrix elements for the two uncoupled even and odd channels. Hence, the conductance takes the form of the Landauer formula: $$g=T^{R\rightarrow L}\equiv \left| \theta ^{R\rightarrow L}\right| ^{2}=4\left| v^{\ast }u\right| ^{2}\left| \met\sum_{l}S^{l}\right| ^{2}=\frac{4\Gamma _{R}\Gamma _{L}}{(\Gamma _{R}+\Gamma _{L})^{2}}\left| \met\sum_{l}S^{l}\right| ^{2}. \label{trans}$$ The potential after the transformation of eq. (\[tu1\]) has become even, hence unitarity is satisfied by each channel individually: $|S^{l}|^{2}=1$, $l=0,1$. The unitary limit of the conductance is $g_{u}=\frac{4\Gamma _{R}\Gamma _{L}}{(\Gamma _{R}+\Gamma _{L})^{2}}$. In particular, if the potential is $\delta (x)$, the odd-parity channel is totally transmitted, so that $S^{1}=-1$. Eq. (\[trans\]) is valid for a general system with an interacting intermediate region.
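As a quick numerical illustration of eqs. (\[selfdot\])–(\[greendot\]) (with illustrative parameters, not tied to any device), the sketch below evaluates the renormalized level and the Lorentzian resonance of width $\Gamma $; the scattering description is resumed right after.

```python
import numpy as np

# Illustrative parameters (arbitrary units)
D, Gamma, eps_d = 10.0, 0.2, -1.0     # half-bandwidth, level width, bare dot level

omega = np.linspace(-3.0, 3.0, 6001)

# Real part of the self-energy of eq. (selfdot) continued to real frequencies,
# and the renormalized level entering eq. (greendot)
re_sigma = (Gamma / np.pi) * np.log(np.abs((D + omega) / (D - omega)))
eps_tilde = eps_d + (Gamma / np.pi) * np.log(abs((D + eps_d) / (D - eps_d)))

# Retarded Green's function and spectral function A = -(1/pi) Im G
G_ret = 1.0 / (omega - eps_d - re_sigma + 1j * Gamma)
A = -G_ret.imag / np.pi

print("renormalized level eps_tilde  :", round(eps_tilde, 4))
print("peak position on the grid     :", round(omega[np.argmax(A)], 4))
print("peak height vs 1/(pi*Gamma)   :", round(A.max(), 3), round(1 / (np.pi * Gamma), 3))
# For Gamma << D the level shift is small and A(omega) is a Lorentzian of halfwidth
# Gamma centered at eps_tilde: the broadened resonance described after eq. (greendot).
```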
One can match this approach to a 1-d scattering approach for non interacting electrons as it is done here below. Let us place the impurity (QD) at $x=0$ and consider the scattering amplitude $f_{L,R}$ of a one electron wavefunction $\psi (x)$ $$\begin{aligned} \psi _{>}(x) &\propto &e^{ikx}+f_{R}e^{ikx}\hspace*{0.5cm}x>>0 \nonumber \\ \psi _{<}(x) &\propto &e^{ikx}+f_{L}e^{-ikx}\hspace*{0.5cm}x<<0.\end{aligned}$$ Here the transmission coefficient is $T=|1+f_{R}|^{2}$ while the reflection coefficient is $R=|f_{L}|^{2}$ and they satisfy the conservation of flux: $T+R=1$. If the dot structure is even with respect to the origin along the vertical axis, the even parity $l=0$ and the odd parity $l=1$ channels are independent. It is then useful to define even and odd parities $f^{l}$: $f^{0}=\met(f_{L}+f_{R})$, $f^{1}=\met(f_{R}-f_{L})$ and the elastic $t-$ matrix $t^{l}=ik_{o}f^{l}/\pi $. Here the energy of the incoming particle is $\hbar v_{F}k_{o}$ (in units $\hbar v_{F}=1$, it follows that $[t]=energy$) and $k_{o}=2\pi /L_{o}$ (where $L_{o}$ is the linear size of the system). The $t-$ matrix is related to the $S-$ matrix according to: $$\begin{aligned} S^{l} &=&e^{2i\delta ^{l}};\hspace*{0.5cm}S^{l}-1=-\frac{2\pi i}{k_{o}}t^{l} \nonumber \\ t^{l} &=&-\frac{k_{o}}{\pi }\sin \delta ^{l}e^{i\delta ^{l}}\end{aligned}$$ where $\delta ^{l}$ are the phase shifts for the two parities. In this context unitarity of the $S-$matrix, $\met\sum_{l}|S^{l}|^{2}=1$, is the same as flux conservation $R+T=1$. Conductance is given by the Landauer formula: $$conductance=g=T=\left| 1+f_{R}\right| ^{2}=\left| 1+i\sum_{l}\sin \delta ^{l}e^{i\delta ^{l}}\right| ^{2}. \label{condu}$$ In the case of resonant tunneling we have $T\sim 1$ and $R=|f_{L}|^{2}\sim 0$, so that eq. (\[condu\]) becomes $g\rightarrow 1$, that is the unitary limit. The condition $R=|\pi (t^{0}-t^{1})/(ik_{o})|^{2}=\sin ^{2}(\delta ^{0}-\delta ^{1})\sim 0$ yields $\delta ^{0}\sim \delta ^{1}\equiv \delta mod\;\pi $, while in eq. (\[condu\]) we have $T\sim 1$ for $\delta ^{0}=\delta ^{1}\equiv \delta \rightarrow \pi /2$. If the potential is even ($\Gamma _{R}=\Gamma _{L}$), unitarity is satisfied by each channel individually: $|S^{l}|^{2}=1$, $l=0,1$; in particular, if the potential is $\delta (x)$, odd parity is totally transmitted and such that $S^{1}=-1$, and the conductance becomes: $$g=\left| \met(S^{0}-1)\right| ^{2}=\left| \frac{\pi }{k_{o}}t^{0}\right| ^{2}=sin^{2}\delta ^{0}. \label{gdot}$$ Again, if $\delta ^{0}=\pi /2$ conductance reaches the unitarity limit. In the following we describe the basic approximations and the weak coupling limit of very opaque barriers between dot and contacts. In this case tunneling of lead electrons across the dot is a perturbative process as derived in Section IIA. Within such a perturbative regime neither the dot nor the contacts are much affected by the interaction with the other. The relevant effect on the QD is the shifting of its levels and a level broadening, which is a second-order effect in the tunneling strength $V $. This shows itself as a small imaginary part added to the energy levels, i.e. a finite lifetime. It is easy to show that the linear conductance given above can be rephrased in terms of the imaginary part of the Green’s function as in eq. (\[condumeir\]) at the perturbative level. 
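Before carrying out that rephrasing, here is a minimal numerical check of the phase-shift formulas above (eqs. (\[condu\])–(\[gdot\])), with arbitrarily chosen phase shifts: flux conservation $R+T=1$ holds, and the conductance reaches the unitary limit when $\delta ^{0}\rightarrow \pi /2$ with the odd channel fully transmitted ($S^{1}=-1$).

```python
import numpy as np

def g_landauer(delta0, delta1):
    """Conductance of eq. (condu): g = |1 + i*sum_l sin(delta_l) exp(i delta_l)|^2."""
    f_R = 1j * (np.sin(delta0) * np.exp(1j * delta0) +
                np.sin(delta1) * np.exp(1j * delta1))
    return abs(1.0 + f_R) ** 2

def reflection(delta0, delta1):
    """R = sin^2(delta0 - delta1), as quoted in the text."""
    return np.sin(delta0 - delta1) ** 2

for d0, d1 in [(0.2, 0.1), (np.pi / 2, np.pi / 2), (1.2, np.pi / 2)]:
    T, R = g_landauer(d0, d1), reflection(d0, d1)
    print(f"delta0={d0:.2f}  delta1={d1:.2f}  T={T:.3f}  R={R:.3f}  T+R={T + R:.3f}")

# With the odd channel fully transmitted (S^1 = -1, i.e. delta1 = pi/2), the
# even-channel form of eq. (gdot) is recovered: g = |(S^0 - 1)/2|^2 = sin^2(delta0).
d0 = 0.7
g_even = abs(0.5 * (np.exp(2j * d0) - 1.0)) ** 2
print(np.isclose(g_even, np.sin(d0) ** 2), round(g_even, 4))
```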
To this end, let us first state some formalities regarding the selfenergy ${\bf \Sigma }$ and the ${\bf t}$ matrix: $(E-H_{0}-\Sigma ){\bf G}={\bf 1}$; ${\bf G}_{0}(E-H_{0})={\bf 1}\rightarrow {\bf G}_{0}{\bf G}^{-1}+{\bf G}_{0}{\bf \Sigma }={\bf 1}\rightarrow {\bf G}={\bf G}_{0}+{\bf G}_{0}{\bf \Sigma G}$. In the Born approximation $\Sigma $ and ${\bf t}$ coincide because by definition is ${\bf G}={\bf G}_{0}+{\bf G}_{0}{\bf t}{\bf G}_{0}$. We assume that in tunneling the odd parity channel is fully transmitted ($S^{1}=-1$) and the even parity one gives (see eq. (\[greenk\])): $$\begin{aligned} t^{0} &\sim &VG_{d,d}V,\hspace*{0.5cm}G_{d,d}=\frac{1}{\epsilon -\widetilde{\epsilon }_{0}+i\Gamma } \nonumber \\ &\rightarrow &t^{0}\sim \frac{|V|^{2}}{\epsilon -\widetilde{\epsilon }_{0}+i\Gamma },\hspace*{0.5cm}\frac{\Im mt^{0}}{\Re et^{0}}=\tan \delta ^{0}=\frac{-\Gamma }{\epsilon -\widetilde{\epsilon }_{0}}, \nonumber \\ \sin ^{2}\delta ^{0} &=&\frac{\tan ^{2}\delta ^{0}}{1+\tan ^{2}\delta ^{0}}=\frac{\Gamma ^{2}}{(\epsilon -\widetilde{\epsilon }_{0})^{2}+\Gamma ^{2}}. \label{born}\end{aligned}$$ Here $V$ is the tunneling matrix element defined in eq. (\[tu1\]), $\widetilde{\epsilon }_{0}$ is the renormalized quantity defined in eq. (\[greendot\]) and $\Gamma =\Gamma _{R}+\Gamma _{L}$ is the inverse lifetime of the resonance. This implies that eqs. (\[trans\],\[gdot\]) become: $$g={g}_{u}\sin ^{2}\delta ^{0}=\frac{4\Gamma _{R}\Gamma _{L}}{(\epsilon -\widetilde{\epsilon }_{0})^{2}+\Gamma ^{2}}, \label{condu1}$$ where $g_{u}=\frac{4\Gamma _{R}\Gamma _{L}}{(\Gamma _{R}+\Gamma _{L})^{2}}$. On the other hand, because $\Im mG=-\Gamma /[(\epsilon -\widetilde{\epsilon }_{0})^{2}+\Gamma ^{2}]$, eq. (\[condumeir\]) becomes: $$g=\int d\omega \left( -\frac{\partial f(\omega )}{\partial \omega }\right) \tilde{\Gamma}\frac{\Gamma }{(\epsilon -\widetilde{\epsilon }_{0})^{2}+\Gamma ^{2}}, \label{acondu1}$$ where $\tilde{\Gamma}=\Gamma _{R}\Gamma _{L}/\Gamma $. At zero temperature both results of eqs. (\[condu1\]) and (\[acondu1\]) coincide. At finite temperature, if the odd channel is fully transmitted eq.(\[trans\]) can be generalized as $$g={g}_{u}\int d\omega \left( -\frac{\partial f(\omega )}{\partial \omega }\right) \left[ -\pi \nu (0)\Im m\left( t^{0}\right) \right] \label{correc1}$$ where $t^{0}$ is the ${\bf t}$ matrix above defined and related to the exact retarded Green function through the relation ${\bf G}={\bf G}_{0}+{\bf G}_{0}{\bf t}{\bf G}_{0}$. We have used the optical theorem: $$\frac{\pi ^{2}}{k_{o}^{2}}Tr\left\{ t^{0}t^{0\dagger }\right\} =\Re e\left\{ \frac{i\pi }{k_{o}}t^{0}\right\} \label{optical}$$ which follows from the unitarity condition $|S^{0}|^{2}=1$. Role of the onsite repulsion $U$ in tunneling ============================================= Up to now $H_{D}$ referred just to a single impurity level $\epsilon _{d}$. Indeed, charging energy $U$ is the main feature of a $QD$ and we have to deal with it. Something which is closer to a $QD$ is an impurity level with onsite repulsion $U$. Inclusion of $U$ in the Hamiltonian (\[ap1\]) leads to the single level Anderson model: $$H_{{\rm AND}}=H_{{\rm lead}}+\epsilon _{d}\sum_{\sigma }n_{\sigma }+U\sum_{\sigma \sigma ^{^{\prime }}}n_{\sigma }n_{\sigma ^{^{\prime }}}+\sum_{k\alpha \sigma }V_{\alpha }[c_{k\alpha \sigma }^{\dagger }d_{\sigma }+d_{\sigma }^{\dagger }c_{k\alpha \sigma }]. 
\label{app1}$$ If $\epsilon _{d}=-U/2$ with respect to the chemical potential of the conduction electrons $\mu $ (taken as the zero of the single particle energies), the Anderson model which arises is symmetric. In fact, the energies of the empty impurity state,$^{0}E$, and of the doubly occupied impurity state, $^{2}E=2\epsilon _{d}+U$, are both zero, while the singly occupied impurity level has energy $^{1}E=-U/2$. In order to understand how the Coulomb repulsion on the dot site affects the Green’s function of the dot we use a path integral formalism. We show that, in the limit in which the charge degree of freedom on the dot is frozen ($U\rightarrow \infty $), the dot Green’s function describes the dynamics of the only degree of freedom left, that is the dot magnetization $<n_{\uparrow }-1/2>\equiv <1/2-n_{\downarrow }>$ which is forced by a stochastic field $X(\tau )$ in imaginary time, produced by the scattering of the lead electrons, with a gaussian probability distribution. The excitation energy associated to it is shifted from $\epsilon _{d}$ to the Fermi level: this is the origin of the resonance at the Fermi level in the Kondo problem. As a first step, we will rephrase the previous result of an impurity with $U=0$ in this new approach. After linearizing the bands, the leads action can be written in terms of the field operators $\psi _{u\alpha }$ (where $\alpha =L,R$ and $u$ includes all other quantum numbers) as: $$S_{{\rm lead}}=-iv_{F}\int_{0}^{\beta }d\tau \int dx\sum_{u}\sum_{\alpha =L,R}\;\psi _{u\alpha }^{\dagger }(x,\tau )\frac{d}{dx}\psi _{u\alpha }(x,\tau ). \label{reasl}$$ With respect to the tunneling action, if the $L$ and $R$ barriers are equal and the dot is zero dimensional the interaction term only includes the symmetric combinations $\Phi _{u}(\tau )=\frac{1}{\sqrt{2}}(\psi _{uL}(x=0,\tau )+\psi _{uR}(x=0,\tau ))$ at the origin: $$S_{{\rm T}}=\int_{0}^{\beta }d\tau \sqrt{2}\sum_{u}[V\Phi _{u}^{\dagger }(\tau )d_{u}(\tau )+V^{\ast }d_{u}^{\dagger }(\tau )\Phi _{u}(\tau )]. \label{newint}$$ The total action is: $${\cal {A}}=\int_{0}^{\beta }d\tau \left\{ \sum_{u}d_{u}^{\dagger }(\tau )\left[ \frac{\partial }{\partial \tau }+(\epsilon _{d}-\mu )\right] d_{u}(\tau )\right\} +S_{{\rm lead}}+S_{{\rm T}}.$$ One can first integrate out the degrees of freedom of the $\psi _{u\alpha }(\tau ,x)$ fields for $x\neq 0$ which are free-like, and next the field $\Phi _{u}$ at $x=0$ which is the only one interacting with the dot variable $d_{u}(\tau )$. Because the fields in the leads are non interacting, the result of the gaussian integration yields an effective action for the dot: $$-S_{{\rm D}}^{{\rm Eff}}\propto \ln \left\{ \int \prod_{\alpha =L,R}\prod_{u}{\cal D}\psi _{u\alpha }\;{\cal D}\psi _{u\alpha }^{\dagger }\;e^{-{\cal {A}}}\right\} =-\beta \sum_{i\omega _{n}}d_{u}^{\dagger }(i\omega _{n})\left( i\omega _{n}-\epsilon _{d}-\Sigma (i\omega _{n})\right) d_{u}(i\omega _{n}) \label{sef}$$ where the self-energy correction to the Green’s function of the dot $\Sigma (i\omega _{n})$ = $\frac{\Gamma }{\pi }\ln (\frac{i\omega _{n}+D}{i\omega _{n}-D})$ was obtained in eq. (\[selfdot\]). Coulomb repulsion on the dot: freezing of the charge degree of freedom ---------------------------------------------------------------------- We now discuss the role of the onsite Coulomb interaction. 
In the large-$U$ limit we have: $$\begin{aligned} \exp \int_{0}^{\beta }d\tau \left\{ -\epsilon _{d}(n_{\uparrow }+n_{\downarrow })-Un_{\uparrow }n_{\downarrow }\right\} &=&e^{-\frac{U}{4}\int_{0}^{\beta }d\tau \left\{ (n_{\uparrow }+n_{\downarrow })^{2}+\frac{4}{U}\epsilon _{d}(n_{\uparrow }+n_{\downarrow })\right\} }{{\cdot} }e^{\frac{U}{4}\int d\tau (n_{\uparrow }-n_{\downarrow })^{2}} \nonumber \\ &=&e^{\frac{\beta }{U}\epsilon _{d}^{2}}\;\delta (n_{\uparrow }+n_{\downarrow }+2\epsilon _{d}/U){{\cdot} }e^{\frac{U}{4}\int_{0}^{\beta }\;d\tau (n_{\uparrow }-n_{\downarrow })^{2}}, \label{hub}\end{aligned}$$ where the delta function, which is defined by the last equality in the limit $U\rightarrow \infty $, implements the constraint of single site occupancy in the symmetric case, $\epsilon _{d}=-U/2$. The quartic interaction is decoupled by means of a Hubbard-Stratonovitch boson field $X ( \tau )$, according to the identity: $$e^{\frac{U}{4}\int_{0}^{\beta }d\tau (n_{\uparrow }-n_{\downarrow })^{2}}=\int DXe^{-\frac{1}{4U}\int_{0}^{\beta }d\tau (X^{2}(\tau )+2U(n_{\uparrow }-n_{\downarrow })X(\tau ))}.$$ Having introduced the field $X(\tau )$, the partition function ${\cal Z}(\mu )$ takes the form: $$\begin{aligned} {\cal {Z}}(\mu ) &\propto &\int DXe^{-\frac{1}{4U}\int_{0}^{\beta }d\tau X^{2}(\tau )} \nonumber \\ &&{{\times} }\int \Pi _{i}\left( Dd_{i}Dd_{i}^{\dagger }e^{-\int_{0}^{\beta }d\tau d\tau ^{\prime }d_{i}^{\dagger }(\tau )G_{(0)}^{-1}(\tau -\tau ^{\prime })d_{i}(\tau ^{\prime })}\right) \nonumber \\ &&{{\times} }\delta (n_{\uparrow }+n_{\downarrow }-1){{\cdot} }\met\sum_{j=\uparrow ,\downarrow }e^{(-1)^{j}\int_{0}^{\beta }d\tau \lbrack n_{j}-\met]X(\tau )}. \label{part2}\end{aligned}$$ Note that now the term $\epsilon _{d}\sum_{i}n_{i}$ was included in eq. (\[hub\]), so in this case the dot Green’s function is $G_{(0)}^{-1}(i\omega _{n})=i\omega _{n}-\Sigma _{(0)}(i\omega _{n})$. At odds with the case $U=0$ here the resonance is at the Fermi level, in spite of the fact that the original localized level is at $\epsilon _{d}$. The partition function in eq. (\[part2\]) describes an effective spin-1/2 coupled to the fluctuating magnetic field $X(\tau )$; its dynamics is constrained by the requirement that the impurity is singly occupied. It could be shown that the single site occupancy constraint is fulfilled in the average when the partition function of eq. (\[part2\]) is used. Quenching of the magnetic moment: singlet ground state ------------------------------------------------------ The representation of the partition function given in eq. (\[part2\]) allows us to recognize the doubly degenerate ground state (GS) of the impurity spin $S_{d}=1/2$ with $S_{d}^{z}=\left( -1\right) ^{\sigma }(n_{\sigma }-1/2)$, $\sigma =\uparrow ,\downarrow $, driven by the field $X(\tau )$ and produced by virtual tunneling of electrons on and off the dot at energy $\mu $. Anderson, Yuval and Hamann [@yuval] calculated eq. (\[part2\]) by integrating out the impurity ($d,d^{\dagger }$ fields) and showing that the interaction with the delocalized electrons gives rise to a dynamics of the field $X(\tau )$ which also interacts with itself at different times according to a logarithmic law. The problem was solved by mapping eq. (\[part2\]) onto the partition function of a Coulomb gas (CG) [@ni] of flips in $1-d$, as we show in some detail in this subsection. Let us define the new field $\xi (\tau )=X(\tau )/\Gamma $, where $\Gamma =\pi \nu (0)\left| V\right| ^{2}$ is related to the tunneling strength. 
At low temperatures, saddle point solutions of the resulting single particle effective action are sequences of instantons $\xi _{\pm }(\tau ,l)=\pm \xi _{M}\tanh ((\tau -\tau _{l})/\tau _{M})$ (where $\tau _{l},l=1,2,\ldots $ are the centers and $\tau _{M}\sim \frac{1}{\epsilon _{F}}$ is the width, that is some cut-off which regularizes the theory at short times) corresponding to jumps between the two minima of the effective potential $V_{eff}[\xi ]=\left( \Gamma ^{2}/U\right) \xi ^{2}-2\Gamma /\pi \left[ \xi {\rm \tan }^{-1}(\xi )-\frac{1}{2}\ln (1+\xi ^{2})\right] $, located at $\xi _{M}=\pm U/2\Gamma $, and interacting via a logarithmic potential $\alpha ^{2}\ln |(\tau _{i}-\tau _{j})/\tau _{M}|$. For such a CG we can define the bare strength $\alpha _{b}^{2}$ of the logarithmic interaction (the ”charge”) as $\alpha _{b}^{2}=2(1-8\Gamma /U\pi )$ and the “fugacity” $Y$ as $Y\equiv \tau _{M}e^{-S_{{\rm tot}}}$. Thus, the full partition function may be approximated with the sum over the trajectories given by hopping paths and will be written as: $$Z=\sum_{N=0}^{\infty }\frac{1}{\left( 2N\right) !}\int_{0}^{\beta }\frac{d\tau _{2N}}{\tau _{M}}\ldots \int_{0}^{\tau _{2}-\tau _{M}}\frac{d\tau _{1}}{\tau _{M}}[e^{\frac{1}{2}\sum_{i\neq j=1}^{2N}(-1)^{i+j}\alpha ^{2}\ln \left| \frac{\tau _{i}-\tau _{j}}{\tau _{M}}\right| }Y^{2N}], \label{part1}$$ that is the partition function of an effective one-dimensional gas of spin flips. The integral over the “centers of the instantons” has to be understood such that $\tau _{i}$ and $\tau _{j}$ never become closer than $\tau _{M}$. Now, we are ready to perform the RG analysis to get the behavior of the model at large time scales (low temperatures) [@yuval]. The bare fugacity of the CG is $Y_{b}=\tau _{M}\exp (-\bar{{\cal {A}}})$ where $\bar{{\cal {A}}}\sim \tau _{M}U$ is the action of one single flip. The scaling of the fugacity and the renormalization of the coupling constant induced by processes of fusion of charges lead to the following Renormalization Group (RG) equations [@ni]: $$\frac{dY}{d\ln \tau _{M}}=(1-\frac{\alpha ^{2}}{2})Y\text{ };\frac{d\alpha ^{2}}{d\ln \tau _{M}}=-2Y^{2}\alpha ^{2} \label{rg1}$$ (see Appendix A for a sketch on the derivation). We see clearly that the flow is towards $Y\rightarrow \infty $ and $\alpha ^{2}\rightarrow 0$ and the scaling invariant energy (that is the Kondo temperature which we will introduce in the next section) is $T_{K}=\tau _{M}^{-1}e^{-1/(1-\frac{\alpha ^{2}}{2})}\sim (U\Gamma )^{\met}e^{-\pi U/(8\Gamma )}$. Condensation of instantons in the doubly degenerate GS leads to the Kondo singlet $<S^{z}>=0$ on the dot. Now we present an heuristic argument for such quenching of the total spin $S$ on the dot; it runs as follows. The dynamics of the field $\xi (\tau )$ with action $\bar{{\cal {A}}}$ can be mimicked by a two-level system hamiltonian $H_{2l}$ with hopping energy $\lambda \sim {\frac{1}{2}}m_{eff}\left( \frac{d\xi }{d\tau }\right) ^{2}\sim \met \frac{\Gamma ^{2}}{U}\tau _{M}^{2}(\xi _{M}/\tau _{M})^{2}\sim U/2$. The role of the interaction is to project out higher energy components from the dot states. Let us denote by $|\pm >$ the two eigenstates of $H_{2l}$, i.e. the singlet and the triplet (with respect to the total spin of the dot plus conduction electrons, $S=0$ and $S=1$) on the dot. Instantons, by flipping the impurity spin, produce a dynamics of the system “dot + conduction electrons “ between these two states. Now, let us make an interesting analogy with thermodynamics. 
At temperature $\beta ^{-1}$ the probabilities of having the system in one of the two states will be given by [@arturo2]: $$P_{+}=\frac{1}{1+e^{-2\beta \lambda }}\;\;;\;P_{-}=\frac{e^{-2\beta \lambda }}{1+e^{-2\beta \lambda }}$$ partly eliminating the component on the high energy eigenstate $|->$. Thus we are ready to give the connection with the CG picture described by the partition function of eq. (\[part1\]). In our case the dynamics is given by the coherent zero point fluctuations of the system as a whole. From the statistical weight associated with a configuration with $N$ instantons we can easily write down the formula: $$\langle N\rangle \equiv \langle N_{{\rm inst}}\rangle =\frac{\sum_{N=0}^{\infty }2NY^{2N}/(2N)!}{\sum {}_{N=0}^{\infty }Y^{2N}/(2N)!}=Y\frac{d}{dY}\ln ({\cosh }Y)=Y{\tanh }Y. \label{ave2}$$ The frequency $\lambda /2\pi $ is related to the average number of flips, $\beta \lambda /2\pi =<N>=Y\tanh Y$, as follows directly from eq. (\[ave2\]). Hence, the probabilities of having the system in the states $|\pm >$ within the ground state ($T=0$) are: $$P_{+}=\frac{1}{1+e^{-4\pi Y\tanh Y}}\:\:;\:\:P_{-}=\frac{e^{-4\pi Y\tanh Y}}{1+e^{-4\pi Y\tanh Y}}{{\cdot} } \nonumber$$ Because $Y$ scales to infinity, the higher energy state completely decouples and the total spin on the dot is quenched: $<S^{z}>\rightarrow 0$.

Perturbative analysis at $T\gg T_{K}$
=====================================

The Kondo effect in metals containing magnetic impurities is responsible for the “anomalous” minimum in the resistivity $\rho (T)$ at temperatures $T\sim T_{K}$: such a minimum in $\rho (T)$ is due to scattering of conduction electrons off the localized magnetic impurities. Those contributions were first worked out by Kondo [@kondo2], who derived a correction $\Delta \rho (T)\propto \ln \left( \frac{T_{K}}{T}\right) $. As we pointed out in the introduction, a different realization of the Kondo effect may be achieved, in a controlled way, in a quantum dot in the Coulomb blockade (CB) regime [@goldhaber]. The Kondo effect in a dot is usually detected by connecting two electrodes to the dot and measuring the linear conductance as a function of the gate voltage. A dot at CB exhibits discrete energy levels. Changing the number of electrons is strongly suppressed by the electrostatic charging energy. Correspondingly, the total charge at the dot is quantized and the linear conductance is zero within wide windows of gate voltage (“CB-valleys”). By analyzing the level structure of the dot it has been shown that, in some special cases, the dot at CB behaves as a magnetic moment antiferromagnetically interacting with the magnetic moments of the lead electrons. CB-valleys at different occupation number of the dot are bounded by resonant conduction peaks, which occur when the chemical potential of the leads matches that of the dot. Those peaks usually get sharper and better defined as $T$ is lowered. However, as $T\sim T_{K}$, the Kondo effect arises at the dot. Consequently, the CB-valley between two resonant conduction peaks “fills in” in a way that does not depend on the position of the Fermi level of the leads [@goldhaber]. As $T\sim T_{K}$, the conductance $g(T)$ exhibits the typical logarithmic rise [@silvano0]. The logarithmic rise, however, cannot hold all the way down to $T=0$, as it must be limited by the unitarity bound. Therefore, a different approach is required in order to study the Kondo effect in the $T=0$ unitarity limit, as we will discuss in the next section.
In this Section we derive the Kondo Hamiltonian from the single impurity Anderson model (eq. \[app1\]), focusing for simplicity on the isotropic case $J_{\perp }=J_{z}=J$; we then sketch the perturbative Renormalization Group (RG) flow for the coupling strength of such a model. In general, the lower $T$ is, the more likely spin-flip processes become, which makes the running coupling constant $J$ grow. At the Kondo temperature $T_{K}$ the perturbative analysis breaks down, that is $\nu \left( 0\right) J(T_{K})\sim 1$, so that $T_{K}$ is the characteristic scale that divides the regions of weak and strong coupling.

Derivation of the spin dynamics hamiltonian
-------------------------------------------

In the following we will restrict our analysis to a two-fold degenerate dot level: in such a case the QD can be modeled as a spin-1/2 magnetic impurity. Indeed, because the charge degree of freedom is frozen at Coulomb blockade, we mimic the dot with its total spin $\vec{S}_{d}$ and describe its interaction with the delocalized electrons by means of the Kondo Hamiltonian: $$H_{{\rm eff}}=H_{{\rm lead}}+H_{K}\equiv H_{{\rm lead}}-J\vec{S}_{d}{{\cdot} }\vec{\sigma}(0), \label{kh}$$ where $\vec{\sigma}(0)=\sum_{kk^{^{\prime }}}\sum_{\sigma \sigma ^{^{\prime }}}(c_{k\sigma }^{\dagger }\overrightarrow{\tau }_{\sigma \sigma ^{^{\prime }}}c_{k^{^{\prime }}\sigma ^{^{\prime }}})$ plays the role of a spin density of the itinerant electrons at the impurity site ($\overrightarrow{\tau } _{\sigma \sigma ^{^{\prime }}}$ are the Pauli spin$-\met$ matrices). Its components behave as a spin$-\met$, provided single occupancy of the site $x=0$ is guaranteed. In the anisotropic Kondo model the couplings of the $x,y$ components, $J_{\perp }$, differ from that of the $z$ component, $J_{z}$. This effective Hamiltonian is more suitable to describe the low-$T$ physics of the system because its dynamics shows how the system flows towards the nonperturbative regime. In the following sections by “low temperature” we’ll mean $T\sim T_{K}$. As a first step we show that it is possible to derive the effective Hamiltonian (\[kh\]) from the one for the single impurity Anderson model (eq. \[app1\]), where $H_{{\rm D}}=$ $\epsilon _{d}\sum_{\sigma }n_{\sigma }+U\sum_{\sigma \sigma ^{^{\prime }}}n_{\sigma }n_{\sigma ^{^{\prime }}}$, by means of the Schrieffer-Wolff (SW) transformation [@schrieffer]. More details can be found in the book by Hewson [@hewson]. Let $\Xi $ be the space spanned by the (almost) degenerate dot states. As the number of electrons at the QD is fixed, the relevant degrees of freedom can be described by an effective Hamiltonian $H_{{\rm eff}}$ acting on $\Xi $ only. In order to construct $H_{{\rm eff}}$, we apply the SW transformation to the Hamiltonian in eq. (\[app1\]). We denote by $P$ the projector onto $\Xi $ and by $\epsilon _{d}$ the energy of the states within $\Xi $, so that the effective Kondo Hamiltonian is given by: $$\delta H_{{\rm eff}}\approx P\;V(1-P)\frac{1}{\epsilon _{d}-H_{{\rm D}}-H_{{\rm lead}}}(1-P)\;V\;P. \label{ef1}$$ Indeed, by inserting eq. (\[app1\]) into eq.
(\[ef1\]), we get the result: $$\begin{aligned} H_{{\rm eff}} &=&H_{{\rm lead}}-\nu \left( 0\right) \sum_{\alpha \sigma }\frac{V_{\alpha }^{2}}{\epsilon _{d}}d_{\sigma }^{\dagger }d_{\sigma }+\sum_{\alpha \sigma ;k,k^{^{\prime }}}V_{\alpha }^{2}\left[ \frac{1}{\epsilon _{d}+U}+\frac{1}{\epsilon _{d}}\right] c_{k\alpha \sigma }^{\dagger }c_{k^{^{\prime }}\alpha \sigma }+ \nonumber \\ &&-\sum_{\alpha ,\alpha ^{^{\prime }};\sigma ,\sigma ^{^{\prime }};k,k^{^{\prime }}}V_{\alpha }V_{\alpha ^{^{\prime }}}\left[ \frac{1}{\epsilon _{d}+U}-\frac{1}{\epsilon _{d}}\right] c_{k\alpha \sigma }^{\dagger }\vec{S}_{d}{{\cdot} }\vec{\tau}_{\sigma \sigma ^{^{\prime }}}c_{k^{^{\prime }}\alpha ^{^{\prime }}\sigma ^{^{\prime }}} \label{ef2}\end{aligned}$$ where $S_{d}^{z}=\sum_{\sigma }\sigma d_{\sigma }^{\dagger }d_{\sigma }$, $S_{d}^{+}=d_{\uparrow }^{\dagger }d_{\downarrow }$ and $S_{d}^{-}=d_{\downarrow }^{\dagger }d_{\uparrow }$ are the impurity spin components. Spin commutation relations are obtained if no double occupancy is admitted. Besides $H_{{\rm lead}}$, the first and the second term at the r.h.s. of eq. (\[ef2\]) represent, respectively, a renormalization of the dot’s energies and a potential scattering term; the third term is the one which induces spin flips. The two contributions appearing in the potential scattering term and in the spin-flip term refer to two virtual particle and hole processes, respectively. In the first case a particle is added to the dot so that the energy $\epsilon _{d}+U$ is involved, while in the second case a particle is subtracted from the dot level thus paying the energy $|\epsilon _{d}|$. The potential scattering term vanishes if the Anderson model is symmetric, as we have assumed up to now ($\epsilon _{d}=-U/2$). Perturbative Renormalization Group Approach ------------------------------------------- In this Subsection the starting point of our reasoning is, for simplicity, the isotropic limit of Kondo hamiltonian in eq. (\[kh\]). Scattering of conduction electrons by the impurity produces a self energy correction, as well as a correction to the interaction vertex. The transitions between the states close to the Fermi level $\epsilon _{F}$ and the states within a narrow strip of energies of width $\delta D$ near the edges of the band of width $2D$ are associated with an high energy deficit. Such transitions are virtual and their influence on the states near $\epsilon _{F}$ can be taken into account perturbatively in the second order. In the [*poor man’s scaling* ]{} approach one includes second order corrections arising from processes in which the electrons $k^{\prime }$ are scattered to an intermediate state at energy $D>\epsilon _{k^{\prime \prime }}>D-\delta D$ or $-D<\epsilon _{k^{\prime \prime }}<-D+\delta D$ where $D$ is some ultraviolet cut-off. Because they involve high intermediate energies, one can think of including them perturbatively, so for an electron $k^{\prime }$ scattered into the final state $k$ we have: $$\sum_{k^{\prime \prime }\in \gamma }<k|H_{K}|k^{\prime \prime }><k^{\prime \prime }|\frac{1}{E-H_{{\rm lead}}}|k^{\prime \prime }><k^{\prime \prime }|H_{K}|k^{\prime }>\sim -J^{2}\nu (0)\delta D\frac{1}{D}\vec{S}_{d}{{\cdot} }\sum_{kk^{^{\prime }}}\sum_{\sigma \sigma ^{^{\prime }}}c_{k\sigma }^{\dagger }\overrightarrow{\tau }_{\sigma \sigma ^{\prime }}c_{k^{\prime }\sigma ^{\prime }} \nonumber$$ where $\gamma $ is the $k^{\prime \prime }$ domain mentioned above. 
To justify the second step one observes that, in the shell $\gamma $, the operator $H_{{\rm lead}}$ can be replaced by an energy $D$ and, in comparison to it, the eigenvalue $E$ can be put at the Fermi level $E=0$. The result is an effective Hamiltonian acting within the band of a reduced width $D-\delta D$, of the same form as $H_{K}$ in eq. (\[kh\]) but with a modified value of $J$. This gives the following correction to the previous coupling constant in $H_{K}$: $\delta J\sim -J^{2}\nu (0)\delta D/D$ [@anderson]. Successive reductions of the bandwidth by small steps $\delta D$ can be viewed as a continuous process during which the initial Hamiltonian is transformed into an effective low-energy Hamiltonian acting within the band of reduced width $D-\delta D$. The evolution of the exchange amplitude $J$ during such a [*poor man’s scaling*]{} can be cast into the form of a flow differential equation [@anderson]: $$\frac{dJ}{d\ln D}=-\nu (0)J^{2}. \label{scal0}$$ Integration of eq. (\[scal0\]) gives rise to the renormalization of $J$: $$\frac{1}{j(D)}-\frac{1}{j(D_{0})}=\ln \frac{D}{D_{0}}, \label{scal1}$$ where $j=J\nu (0)$, that is $$j(D)=\frac{j_{0}}{1-j_{0}\ln \frac{D_{0}}{D}}. \label{scal2}$$ Scaling can be performed down to $D\sim k_{B}T$; we also see that, if $j_{0}\equiv j(D_{0})>0$ (antiferromagnetic coupling), the running coupling constant $j$ increases. Eq. (\[scal1\]) shows that $De^{-\frac{1}{j}}$ is a constant of this flow, which defines an energy scale, $T_{K}$: $$k_{B}T_{K}=D_{0}e^{-\frac{1}{j_{0}}}. \label{scal3}$$ In general, at the Kondo temperature $T_{K}$ the system has approached the scale at which the perturbative analysis breaks down, so that for $T<T_{K}$ $j$ starts to diverge in the flow. Now we discuss the conductance in such a perturbative limit. Let us take a $\delta (x)$-like dot with an even barrier potential ($\Gamma _{R}=\Gamma _{L}$). To lowest perturbative order the leading self-energy correction to $t^{0}(\omega +i0^{+})$ (see eqs. (\[gdot\]), (\[born\])) ($\omega =0$) yields: $$g=\left| \frac{\pi }{k_{o}}t^{0}\right| ^{2}=\Re e\left\{ \frac{i\pi }{k_{o}}t^{0}\right\} \approx \pi ^{2}(\nu (0)J)^{2}, \label{sun}$$ where $\frac{1}{\nu (0)J}=\ln \frac{T}{T_{K}}$ is the invariant charge defined through eq. (\[scal3\]), so that the conductance can be written as: $$g\approx \pi ^{2}\ln ^{-2}\frac{T}{T_{K}},\text{ \ \ \ \ }T_{K}\ll T\ll D. \label{cweak}$$ The tail of the conductance in the perturbative limit $T\gg T_{K}$ is logarithmic.

One channel anisotropic Kondo model and the Toulouse limit
----------------------------------------------------------

In the following we rewrite the Coulomb gas approach introduced in subsection $V.B$, focusing on the single channel anisotropic Kondo system. Let us start from the general effective Hamiltonian: $$H_{{\rm eff}}^{{\rm A}}=H_{{\rm lead}}+H_{K}^{{\rm A}}\equiv H_{{\rm lead}}+J_{z}S_{d}^{z}\sigma _{z}(0)+\frac{J_{\perp }}{2}\left( S_{d}^{+}\sigma _{-}(0)+S_{d}^{-}\sigma _{+}(0)\right) .$$ As stated in ref. [@yuval], the perturbation term (the one containing $J_{\perp }$) has the effect of flipping the local spin at each application; hence the problem of calculating the partition function of such a system reduces to the evaluation of the amplitude for a succession of spin flips at times $\tau _{1},\tau _{2},...$ and the Feynman sum over histories is the sum over all possible numbers and positions of flips.
So, we get the expression (\[part1\]) which can be rewritten as: $$Z_{CG}=\sum_{N=0}^{\infty }\left( J_{\perp }\tau _{M}\right) ^{2N}\int_{0}^{\beta }\frac{d\tau _{2N}}{\tau _{M}}\ldots \int_{0}^{\tau _{2}-\tau _{M}}\frac{d\tau _{1}}{\tau _{M}}[e^{+\sum_{i\neq j=1}^{2N}(-1)^{i+j}\left( 2-\varepsilon \right) \ln \left\vert \frac{\tau _{i}-\tau _{j}}{\tau _{M}}\right\vert }]. \label{part3}$$ Here $\varepsilon $ is the quantity: $$\varepsilon =\frac{8\delta _{AF}}{\pi }-\frac{8\delta _{AF}^{2}}{\pi ^{2}}\simeq 2J_{z}\tau _{M}$$ where $\delta _{AF}$ is the scattering phase shift of antiferromagnetic sign introduced by the $J_{z}S_{d}^{z}\sigma _{z}(0)$ term. It can be seen clearly that the sign of $J_{z}$, or $\varepsilon $, determines whether the coupling is ferromagnetic or antiferromagnetic, so the condition $\varepsilon >0$ gives the antiferromagnetic coupling. Let us now notice that eq. (\[part3\]) is a function of three parameters only: $\frac{\tau }{\tau _{M}},J_{\perp }\tau _{M},\varepsilon $; we are interested to the case $\tau \rightarrow \infty $ (low temperature). In such a case it is well known that the ferromagnetic Kondo system has a mean spin moment which corresponds to a long range order of the classical system: the positive and negative charges are all bound in pairs forming dipoles all pointing in the same direction. The phase transition occurs, at least for small $J_{\perp }$, at the ferromagnetic-antiferromagnetic boundary point $\varepsilon =0$. For such a system it is possible to derive a set of scaling equations by renormalization of the cut-off $\tau _{M}$; these laws are exact for small $J_{\perp }\tau _{M}$ and $\varepsilon $. The physical picture is that of many close pairs of flips which change slightly the mean magnetization, located between pairs of isolated flips which are reversals of the magnetization over a larger timescale; so, the isolated flips can be considered as acting in a medium where the close pairs modify the mean magnetization. This line of reasoning leads to the following scaling laws for the fugacity $J_{\perp }\tau _{M}$ and the charge $\varepsilon $ respectively: $$\frac{d\left( J_{\perp }\tau _{M}\right) }{d\ln \tau _{M}}=\frac{\varepsilon }{2}\left( J_{\perp }\tau _{M}\right) \text{ };\text{ \ }\frac{d\varepsilon }{d\ln \tau _{M}}=\left( 2-\varepsilon \right) \left( J_{\perp }\tau _{M}\right) ^{2}. \label{scaling1}$$ Let us make a few comments on such equations. First, these are also exact for finite $\varepsilon $, because only $J_{\perp }\tau _{M}$ need to be small. Second, they are compatible in the isotropic case $J_{z}=J_{\perp }$, $\varepsilon \simeq 2J_{z}\tau _{M}\simeq 2J_{\perp }\tau _{M}$ where $J_{\perp }\tau _{M}$ and $\varepsilon $ are small; because $J_{\perp }\tau _{M}$ and $\varepsilon $ scale together, the isotropic Kondo system remains isotropic at every time scale. In the anisotropic model, eqs. (\[scaling1\]) become: $$\frac{d\varepsilon }{d\left( J_{\perp }\tau _{M}\right) }\simeq 4\frac{J_{\perp }\tau _{M}}{\varepsilon }\text{ };\text{ \ }\varepsilon ^{2}-4J_{\perp }^{2}\tau _{M}^{2}=const,$$ thus the scaling lines are a set of hyperbolas with asymptotes corresponding to the isotropic case. We see clearly from Fig. 3 that all ferromagnetic cases below the isotropic one scale onto the case $J_{\perp }\tau _{M}=0$ which is an ordered one, so the transition line coincide with the ferromagnetic case. 
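A minimal numerical sketch of the flow equations (\[scaling1\]), with illustrative (made-up) initial conditions: on the antiferromagnetic side both couplings grow until $\varepsilon \sim 1$ at a finite value of $\ln (\tau _{M}/\tau _{0})$, which sets the crossover scale discussed below, while on the ferromagnetic side the fugacity $J_{\perp }\tau _{M}$ scales to zero.

```python
import numpy as np

def flow(y0, eps0, dl=1e-3, lmax=40.0):
    """Integrate eq. (scaling1): dy/dln(tau_M) = (eps/2)*y,
    d(eps)/dln(tau_M) = (2 - eps)*y**2, with y = J_perp*tau_M.
    Stops once eps ~ 1, where the weak-coupling expansion breaks down."""
    y, eps, l = y0, eps0, 0.0
    while l < lmax and eps < 1.0 and abs(y) < 1.0:
        y, eps, l = y + dl * 0.5 * eps * y, eps + dl * (2.0 - eps) * y**2, l + dl
    return y, eps, l

# Antiferromagnetic side (eps > 0): runaway flow. eps ~ 1 is reached at a finite
# l* = ln(tau_M/tau_0), so the crossover scale is of order tau_0^{-1} exp(-l*),
# i.e. the Kondo temperature introduced in the text.
y, eps, lstar = flow(y0=0.05, eps0=0.12)
print(f"AF flow : eps={eps:.2f}, J_perp*tau_M={y:.2f} at ln(tau_M/tau_0)={lstar:.1f}")

# Ferromagnetic side (eps < 0, |eps| > 2*J_perp*tau_M): the fugacity flows to zero,
# i.e. the trajectory ends on the ordered line J_perp*tau_M = 0.
y, eps, l = flow(y0=0.05, eps0=-0.12, lmax=80.0)
print(f"FM flow : eps={eps:.3f}, J_perp*tau_M={y:.4f} at ln(tau_M/tau_0)={l:.1f}")

# At small couplings the trajectories follow the hyperbolas
# eps^2 - 4*(J_perp*tau_M)^2 ~ const, whose asymptotes are the isotropic lines
# eps = +/- 2*J_perp*tau_M (cf. Fig. 3).
```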
Conversely, in the antiferromagnetic (AF) case $J_{\perp }\tau _{M}$ and $\varepsilon $ increase starting from arbitrarily small values, and we can always find a timescale for which $\varepsilon \sim 1$. Such a timescale is a crucial one for the Kondo phenomenon: it is the Kondo temperature already introduced in the previous sections, and it sets the scale below which the system behaves as if it were strongly coupled. Thus, the renormalization procedure is valid up to $\varepsilon \sim 1$. Now, let us briefly focus on the so-called Toulouse limit $J_{z}=J_{\perp }$, $\varepsilon \sim 1$; in such a case the system is equivalent to a simple quadratic model with Hamiltonian: $$H_{{\rm T}}=\sum_{k}\epsilon _{k}c_{k}^{\dagger }c_{k}+V\sum_{k}[d^{\dagger }c_{k}+c_{k}^{\dagger }d]. \label{Toul1}$$ Here $c_{k},c_{k}^{\dagger }$ are Fermi operators for free spinless electrons and $d,d^{\dagger }$ are Fermi operators for a local resonant state. The partition function corresponding to the Hamiltonian (\[Toul1\]) is exactly equal to the one in eq. (\[part3\]) for the case of classical charges $\pm 1$. Such a theory corresponds to free particles and there is no renormalization.

Breakdown of perturbative approach at $T\sim T_{K}$: strong-coupling fixed point
================================================================================

In the previous Section we derived the perturbative flow equation for the Kondo coupling constant $J$. Integration of the renormalization group (RG) equation shows that $J$ grows as $T$ is lowered, until the perturbative analysis no longer makes sense. The question whether the RG flow stops at some fixed coupling at a finite-$T$ scale or continues all the way down to $T=0$ has been widely debated in the literature (see [@hewson] for a review on the subject). However, from numerical RG analysis and from the exact Bethe-ansatz solution of the model [@tsvi] it is now clear that the system reaches no fixed point at finite $T$, but the RG flow goes all the way down to $T=0$. So, the aim of this Section is to discuss the physics of the Kondo system in the $T=0$ unitarity limit. In the case of a localized spin-1/2 impurity antiferromagnetically interacting with the spin of one type of itinerant electrons only (spin-1/2, one channel Kondo effect), the theory of the unitary limit was first developed by Nozières [@nozieres]. As $T$ is lowered all the way down to 0, the flow of the coupling strength runs all the way toward an $\infty $-coupling fixed point. At the fixed point, the impurity spin is fully screened and the localized magnetic moment is effectively substituted by a spinless potential scattering center with infinite strength (which forbids double occupancy at the impurity site). The $T\rightarrow 0$ unitarity limit is well described by a Fermi liquid theory, which also allows for the calculation of finite-$T$ corrections to fixed-point values of the physical quantities. Both elastic and inelastic scattering processes provide finite-$T$ contributions to the conductance. As we will show in the next section, the two kinds of processes contribute at the same order, thus giving rise to corrections to the unitary limit proportional to $\left( \frac{T}{T_{K}}\right) ^{2}$ [@nozieres]. A generalization of Kondo’s original idea was put forward by Nozières and Blandin [@blandin].
They showed that interaction of itinerant electrons with magnetic impurities in metals may involve electrons with different quantum numbers besides the spin: for instance, electrons with different orbital angular momentum. Different orbital quantum numbers define different channels of interaction. Therefore, many channel Kondo effect may arise. Although in the perturbative region there is no qualitative difference between one-channel and many-channel effect, deep differences may arise in the unitarity limit, depending on the number of channels $K$ and on the total spin of the impurity, $S$. In the case $2S=K$, at $T=0$ the magnetic moment at the impurity is fully screened by electrons from the leads. The impurity forms a singlet state with $2S$ conduction electrons and no other electrons can access the impurity site. The system behaves exactly as if there were no impurity, besides a boundary condition on the wavefunction of conduction electrons which takes into account that the impurity site is “forbidden” to them [@nozieres]. Such a Fermi liquid state is stable against leading finite-$T$ corrections, as we shall see below, and corresponds to an infinite effective coupling $\nu \left( 0\right) J$. A special case of this is the one channel Kondo just discussed, with $S=\frac{1}{2}$ and $K=1$. In the case $2S>K$, in the strong coupling regime a residual magnetic moment is still present at the impurity, since there are no more conduction electrons able to screen the localized spin. The corresponding fixed point is again a Fermi liquid, but with a localized partially screened magnetic moment at the impurity that is [**ferromagnetically**]{} coupled to the itinerant electrons: it again provides the stability of the local Fermi liquid. A very special case is the $2S<K$ one. In such a case conduction electrons attempt to “overscreen” the impurity in the strong coupling limit, that is the resulting magnetic moment is opposite to the original one. The coupling among the localized residual magnetic moment and the itinerant electrons is now [**antiferromagnetic**]{}. It drives the system out of the strongly coupled regime towards a finite-coupling fixed point $J_{\ast }$ [@blandin], as we clearly see in Fig. 4. At $J=J_{\ast }$ the leading finite-$T$ interaction is given by a marginal three-particle operator which breaks down the Fermi liquid state and generates a non-Fermi liquid behavior in the physical quantities. We will discuss such an issue in the Section $VII $ of this review. The fully screened single channel Kondo case ============================================ Finite temperature corrections to the one-channel Kondo conductance ------------------------------------------------------------------- At $T=0 $ the linear response of the Kondo system to an applied voltage bias reaches the so called unitarity limit. The response function is the resistivity in a bulk system (magnetic impurities in diluted alloys), while it is the conductance in a quasi one-dimensional system as the system of our interest: a dot with applied contacts. The striking feature of the Kondo effect in diluted alloys is the minimum in the resistivity at low temperature which violates the expected property: $d\rho /dT>0$. Indeed, well below $T_{K}$ the resistivity increases again up to a maximum value proportional to the number of impurities $N_{i}$ per unit volume (”unitarity limit”). 
On the contrary, a maximum of the conductance is expected at $T=0$ for Kondo conductance across a quantum dot, where the unitarity limit reached is $\frac{2e^{2}}{h}g_{u}$. Since the $s$-wave scattering is effectively one-dimensional, such a ‘reversed behavior’ looks paradoxical. However, it is just a consequence of the difference between the 3-d and 1-d impurity scattering, as we explain here. In order to illustrate the difference, it is enough to note that both facts stem from the main feature of Kondo impurity scattering: the formation of a resonance at the Fermi level due to many-body effects, which implies that the phase shift reaches the value $\delta =\pi /2$ and that scattering is resonant at the impurity. This produces different results in the two cases:

-   a spherical $s$-wave is diffracted from the impurity with maximum amplitude at the resonance, which increases the flux propagating backward and enhances the resistivity;

-   resonant scattering coincides with resonant transmission in this case, which implies the vanishing of backward reflection and enhances the conductance.

Temperature corrections are twofold. One is due to the energy dependence of the phase shift close to the resonance, $\delta ^{0}(\epsilon )=\pi /2-a\epsilon ^{2}$, and to the fact that energies close to the Fermi surface are sampled, because the Fermi functions are not step-like at finite $T$. The second one is due to inelastic processes which produce transitions from the singlet ground state to the excited states. The latter can be accounted for with an expansion in inverse powers of the singlet binding energy [@nozieres]. We include a simplified approach to the problem, which rests on the Fermi liquid nature of the excitation spectrum, in Appendix B [@nozieres][@ludaff][@costi]. It is found that corrections to the conductance are quadratic in temperature, as is usual in Fermi liquid theory: $$g=g_{u}\left[ 1-\left( \frac{\pi T}{T_{K}}\right) ^{2}\right] ,\text{ \ \ \ \ }T\ll T_{K}. \label{cstrong}$$ The weak-coupling ($T\gg T_{K}$) and the strong-coupling ($T\ll T_{K}$) asymptotes of the conductance, eqs. (\[cweak\]) and (\[cstrong\]), have a very different structure but, since the Kondo effect is a crossover phenomenon rather than a phase transition [@nrg][@tsvi], the dependence $g\left( T\right) $ is a smooth function [@costi] throughout the crossover region $T\sim T_{K}$: $$g=g_{u}f\left( \frac{T}{T_{K}}\right) .$$ The universal function $f\left( x\right) $, as found by resorting to the numerical renormalization group (NRG) in refs. [@costi][@costi1], is plotted in Fig. 5. It interpolates between $f\left( x\gg 1\right) =\frac{3\pi ^{2}}{16}\left( \ln x\right) ^{-2}$ and $f\left( 0\right) =1$.

The Kondo resonance in a magnetic field
----------------------------------------

A small magnetic field $B$ lifts the degeneracy of the spin states at the impurity. This produces a splitting of the resonance and a change of its shape which has been numerically studied mostly via NRG [@costi1]. The splitting of the peak increases linearly with $B$ and is $\Delta =2\mu _{B}B$, [*i.e.*]{} twice the Zeeman spin splitting. This can be understood easily if one considers the particle and hole virtual occupations which mediate the Kondo interaction. Let us consider the symmetric case $\epsilon _{d}=-U/2$. The dot states with $N$ electrons and one unpaired spin are $N\uparrow $ and $N\downarrow $, corresponding to $\epsilon _{d}+B/2$ ($\mu _{B}=1$) and $\epsilon _{d}-B/2$, respectively.
In presence of spin splitting two particle processes are allowed: $p_{\sigma }$ ($p_{-\sigma }$) in which a spin $-\sigma $ is added to the state $N\sigma $ and subsequently a spin $-\sigma $ is removed with the energy balance: $p_\downarrow \to \delta E (N\uparrow, +\downarrow ) + \delta E (N+1 ,- \uparrow ) = U + (-U-B ) = -B $ $p_\uparrow \to \delta E (N\downarrow, +\uparrow ) + \delta E (N+1 ,- \downarrow ) = (U + B) + (-U ) = B $. Similarly the hole processes have an energy balance: $h_\uparrow \to \delta E (N\uparrow, -\uparrow ) + \delta E (N-1, + \downarrow ) = (-\epsilon _d - B/2) + (\epsilon _d -B/2 ) =- B $ $h_{\downarrow }\rightarrow \delta E(N\downarrow ,-\downarrow )+\delta E(N-1,+\uparrow )=(-\epsilon _{d}+B/2)+(\epsilon _{d}+B/2)=B$. This implies that there are two peaks in the spectral density at $\omega = \pm B $ corresponding to the $p_\downarrow ,h_\uparrow $ and $p_\uparrow ,h_\downarrow $ spin flip processes, respectively. The value at the Fermi energy of the spectral function is related to the $B$-dependent phase shift in the Fermi liquid picture by the Friedel sum rule: $$-\pi \nu \left( 0\right) \Im m\left( t_{\sigma }\left( \omega =0,T=0,B\right) \right) =sin^{2}\delta _{\sigma }(B).$$ The Bethe Ansatz solution of the problem relates the phase shift at the Fermi level to the magnetization of the impurity: $\delta _{\sigma }(B)=\frac{\pi }{2}[1-2M_{d}(B)]$ [@andrei]. Hence a reduction of the peak height with increasing of the magnetic field follows. It has also been argued that the renormalizability of the Kondo problem could break down when $B$ exceeds some critical value related to the Kondo temperature, because of the back action of the induced conduction electron polarization cloud on the impurity leading to a broken symmetry state with $<S_{z}>\neq 0$ [@ovchinnikov]. Crossing of the dot levels in a magnetic field and enhancement of the Kondo conductance --------------------------------------------------------------------------------------- Conventional Kondo resonant transmission requires a magnetic moment to be present on the dot. Usually dots with an even number of electrons are in a singlet state, while dots with an odd number of electrons have an unpaired spin and have a doublet GS. Hence, there is a parity effect: CB conduction valleys with $N=even$ do not display the Kondo conductance while those with $N=odd$ do. An exception to this rule occurs at zero magnetic field when Hund’s rule applies. This was found experimentally in vertical QD e.g. at $N=6$. The GS of the isolated dot has $S=1$ (triplet) [@tarucha]. An underscreened Kondo effect is expected at this point and the GS of the interacting system becomes a doublet. By applying a weak magnetic field orthogonal to the dot ($B=0.22Tesla$), a transition of the GS from triplet to singlet (T- S) has been found. The single particle energy levels become angular momentum dependent and Hund’s rule no longer applies. Close to the crossing a remarkable enhancement of Kondo coupling with increase of the Kondo temperature was experimentally found. Indeed, scaling shows a non universal critical temperature when the interplay of the fourfold degenerate states ($|SS_{z}>$ with $S=0,1$) is included [@eto]. An extended and unified approach to the problem can be found in [@pustil1]. A minor difference is the fact that they consider an in-plane magnetic field as the source of the crossing which could produce Zeeman spin splitting (ZSS) of the single particle states. 
The four dot states are mapped onto a two-impurity Kondo model (2IKM) [@twoimp] with spins $\overrightarrow{S}_{1},\overrightarrow{S}_{2}$. However, they are coupled by a potential term $Vn\rho _{nn}(0)\overrightarrow{S}_{1}{\cdot} \overrightarrow{S}_{2}$ and an exchange term $iIns_{\overline{n}n}(0){\cdot} \overrightarrow{S}_{1}{\times} \overrightarrow{S}_{2}$. Here $\rho ,s $ are the charge and spin density of the conduction electrons, with $n$ and $\overline{n}\neq n$ labeling two different orbital states. Because these terms violate the invariance under particle-hole transformation, the 2IKM cannot flow by scaling to the non Fermi-liquid fixed point, which is known to be a remarkable feature of the model. In the case of large ZSS $\Delta $ the RG flow terminates at $D\approx \Delta $. Two of the four states are ruled out in the flow and the conduction electrons couple to one single effective spin $1/2 $, with one extra unusual term in the effective Hamiltonian which is a Zeeman term for the conduction electrons. Kondo conductance can also take place in a dot with an even number of electrons, in a strong vertical magnetic field [@arturo2]. Orbital effects induced by $B$ can produce the reversed transition from the singlet to the triplet state (S-T). Indeed, a vertical magnetic field on an isolated dot favors transitions to higher spin states [@wagner]. In this case the ZSS is in any case sizeable and the crossing involves the singlet state and the component of the triplet state lowest lying in energy. In a vertical geometry with cylindrical symmetry, orbital angular momentum $m$ and $z-$ spin component $\sigma $ are good quantum numbers. In particular, because the singlet state has total angular momentum $M=0$ and the triplet state involved in the crossing has $M=1$, only a $(m=0\downarrow )$ electron can enter the dot when it is in the triplet state. On the contrary, only an $(m=1\uparrow )$ electron can enter the dot, if it is in the singlet state. This implies that there is one single channel of conduction electrons involved in cotunneling processes, with orbital and spin degrees of freedom locked together. Again, a residual effective spin $1/2$ survives at the dot, even though $N$ is even [@noi]. This is another striking manifestation of the spin-charge separation that occurs at the Kondo fixed point. The usual Kondo coupling leads to $N=odd$ together with $S=0$. In the complementary situation here described, it is $N=even$ and $S=1/2$. The overscreened two channel Kondo case ======================================= A two channel Kondo behavior has been invoked in an experiment by Ralph and Buhrman [@ralph] on clean Cu point contacts, where defects in the metal can be described by two level systems (TLS). The TLS could tunnel between the even and odd state with the assistance of the conduction electrons. Their physical spin is not involved in the scattering, so that two channels are available [@zawa]. It is still unclear whether the Kondo temperature can be large enough so that any effect of the Kondo correlation can be measured [@boris]. However, these experiments have triggered renewed interest in two channel Kondo conductance. The temperature and voltage dependence of the conductivity have been numerically calculated within the “Non Crossing Approximation” (NCA) [@hettler] and found to be consistent with a scaling Ansatz motivated by the Conformal Field Theory (CFT) solution of the problem [@ludaff]. Since then, no other experimental proof of the two channel Kondo effect in impurities has been produced.
The result for the imaginary part of the transmission $t$ is: $${\Im }mt(\omega )\sim \sqrt{\frac{\omega }{T_{K}}}. \label{nflres}$$ From this result, we get a $\sqrt{T/T_{K}}$ temperature dependence of the conductance as $T\rightarrow 0$, which is a clear signature of the breakdown of the Fermi liquid [@ludaff] (see Fig. 6). In order to emphasize the deep difference between the single-channel and many-channel Kondo effect in the $T=0$ limit, we just mention here that the imaginary part of the proper self-energy, close to the Fermi surface, behaves as $\Im m\Sigma (k,\omega )\propto C_{k}\omega ^{2}$ (where the chemical potential $\mu =0$ is taken as reference energy) in the Fermi liquid case and in the single channel Kondo problem. On the contrary, it behaves as $\Im m\Sigma (k,\omega )\propto C_{k}^{\prime }\omega ^{1/2}$ in the two channel “overscreened” Kondo problem. In the following we refer, for simplicity, to the two channel “overscreened” case. Through the Kramers-Kronig relation $$\frac{\partial }{\partial \omega }\Re e\Sigma (k,\omega )=-\frac{1}{\pi }P\int_{-\infty }^{\infty }d\omega ^{\prime }\frac{\partial _{\omega }\Im m\Sigma (k,\omega ^{\prime })}{\omega ^{\prime }-\omega }$$ we see that $\frac{\partial }{\partial \omega }\Re e\Sigma (k,\omega )|_{\omega \rightarrow 0}$ is finite in the first case at the Fermi surface, while it has a power law divergence in the second case. It follows that the quasiparticle pole residue $z_{k}=[1-\partial _{\omega }\Re e\Sigma (k,\omega )]^{-1}|_{\omega \rightarrow 0}$ vanishes as a power law in the second case at the Fermi surface. Luttinger [@luttinger] showed that, to all orders of perturbation theory in the interaction, the imaginary part of the proper self-energy behaves as $\Im m\Sigma (k,\omega )\propto C_{k}\omega ^{2}$ ($C_{k}>0$) close to the Fermi surface, which implies an infinite lifetime for the quasiparticles at the Fermi surface. The Fermi surface is sharp and well defined. These are the foundation stones of the normal Fermi liquid theory and they are invalidated in the spin-$1/2$ two channel Kondo case. In this Section we focus, in particular, on the two channel spin-1/2 Kondo effect in both the perturbative region and the unitarity limit. We describe the scaling perturbative approach and the bosonization ($T\sim 0$) technique, respectively. An attempt to extend the Anderson-Yuval approach of Section $IV.C$ to the two channel Kondo case can be found in [@fabrizio]. Perturbative analysis at $T\gg T_{K}$ ------------------------------------ The starting point of our reasoning is the effective Hamiltonian given in eq. (\[ef2\]), which we will take in the isotropic limit. In the following we will restrict our analysis to a two-fold degenerate dot level. In this case, as we pointed out, the QD can be modeled as a spin-1/2 magnetic impurity, whose spin is given by: $$S_{d}^{a}=\frac{1}{2}d_{\gamma }^{\dagger }\tau _{\gamma \gamma ^{^{\prime }}}^{a}d_{\gamma ^{^{\prime }}}$$ (that $\vec{S}_{d}$ is a spin-1/2 follows from the identity $\vec{S}_{d}^{2}=3/4$, valid in the case of single occupancy of the dot’s level). In this case and for a generic number of channels for the itinerant electrons, eq.
(\[ef2\]) takes the form: $$H_{K}=J\sum_{a}\left( \sum_{\gamma \gamma ^{^{\prime }}}(d_{\gamma }^{\dagger }\frac{\tau _{\gamma \gamma ^{^{\prime }}}^{a}}{2}d_{\gamma ^{^{\prime }}})\sum_{kk^{^{\prime }}}\sum_{\sigma \sigma ^{^{\prime }};\alpha }(c_{k\sigma \alpha }^{\dagger }\frac{\tau _{\sigma \sigma ^{^{\prime }}}^{a}}{2}c_{k^{^{\prime }}\sigma ^{^{\prime }}\alpha })\right) \label{heftru}$$ where $\alpha \in (1,..,K)$ is the channel index and the constant $J$ is taken as a perturbative parameter ($>0$). Infrared divergent diagrams provide a flow of $J$ as a function of $T$. We are now going to derive the perturbative $\beta $-function at third order in $J$. At finite $T$ the Green function in Fourier space will depend on the momentum of the particles and on the Matsubara frequencies $\omega _{m}=\frac{2\pi }{\beta }(m+\frac{1}{2})$ (for fermions). In our case, the Green function for the lead electrons is given by: $$G_{\sigma \sigma ^{^{\prime }};\alpha \alpha ^{^{\prime }}}(i\omega _{m},k)={\bf FT}\left\{ \langle \hat{T}[c_{\sigma \alpha }(\tau ,k)c_{\sigma ^{^{\prime }}\alpha ^{^{\prime }}}^{\dagger }(0,k)]\rangle \right\} =\frac{\delta _{\sigma \sigma ^{^{\prime }}}\delta _{\alpha \alpha ^{^{\prime }}}}{i\omega _{m}-v_{F}k}$$ where ${\bf FT}$ stands for ’Fourier transform’ and $\hat{T}$ is the time ordering operator, while the Green function for the $d$-fermion is: $${\cal G}_{\gamma \gamma ^{^{\prime }}}(i\omega _{m})={\bf FT}\left\{ \langle \hat{T}[d_{\gamma }(\tau )d_{\gamma ^{^{\prime }}}^{\dagger }(0)]\rangle \right\} =\frac{\delta _{\gamma \gamma ^{^{\prime }}}}{i\omega _{m}}.$$ The interaction vertex determined by $H_{{\rm eff}}$ is: $$V_{\sigma \sigma ^{^{\prime }};\gamma \gamma ^{^{\prime }}}^{\alpha \alpha ^{^{\prime }}}(\{i\omega _{m}^{(j)}\})=\delta _{\alpha \alpha ^{^{\prime }}}\frac{J}{4}\tau _{\gamma \gamma ^{^{\prime }}}^{a}\tau _{\sigma \sigma ^{^{\prime }}}^{a}\delta (\omega _{m}^{(1)}+\omega _{m}^{(2)}-\omega _{m}^{(3)}-\omega _{m}^{(4)}).$$ The one-loop structure of the theory provides a renormalization to the interaction vertex, that is, to the coupling constant, by means of the two diagrams drawn in Fig. 7. The sums over Matsubara frequencies can be calculated by using the standard techniques described, for example, in [@fetter]. In the low-energy limit for the electrons from the leads (that is, if only excitations about the Fermi level are taken into account), the sum of the two diagrams is given by: $${\cal D}_{1}+{\cal D}_{2}\approx -i\frac{J^{2}}{8}\delta _{\gamma \gamma ^{^{\prime }}}\{[\delta _{\sigma \sigma ^{^{\prime }}}\delta _{\alpha \alpha ^{^{\prime }}}+\tau _{\sigma \sigma ^{^{\prime }}}^{a}\tau _{\alpha \alpha ^{^{\prime }}}^{a}]+[-\delta _{\sigma \sigma ^{^{\prime }}}\delta _{\alpha \alpha ^{^{\prime }}}+\tau _{\sigma \sigma ^{^{\prime }}}^{a}\tau _{\alpha \alpha ^{^{\prime }}}^{a}]\}2\nu \left( 0\right) \ln (\frac{D}{k_{B}T})$$ ($D$ is an ultraviolet cutoff, identified with the width of the conduction band). The corresponding renormalization to the coupling constant $J$ is easily worked out and is given by: $$\Delta ^{(2)}J(T,D)=2\nu \left( 0\right) J^{2}\ln (\frac{D}{k_{B}T}). \label{ren1}$$ A careful analysis of the vertex renormalization at third order in $J$ reveals that several third-order diagrams are already taken into account by the scaling equation generated by the second-order vertex correction, as discussed in [@zawa2]. The only “new” contribution comes from the “non-parquet” diagram shown in Fig. 8. 
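As a parenthetical check of where the $\ln (\frac{D}{k_{B}T})$ factor in eq. (\[ren1\]) comes from — the Fermi sea is probed down to energies of order $k_{B}T$ — one can evaluate the corresponding particle-hole energy integral numerically. The sketch below is our own illustration (a flat band of half-width $D$, with $k_{B}=1$), not part of the original derivation.

```python
import numpy as np
from scipy.integrate import quad

D = 1.0   # half bandwidth (ultraviolet cutoff), with k_B = 1
for T in [1e-1, 1e-2, 1e-3, 1e-4]:
    # particle-hole logarithm: int_0^D d(eps) tanh(eps/2T)/eps  ~  ln(D/T) + const
    val, _ = quad(lambda e, T=T: np.tanh(e / (2.0 * T)) / e, 0.0, D,
                  points=[2.0 * T], limit=400)
    print(f"T = {T:7.0e}:  integral = {val:6.3f}   ln(D/T) = {np.log(D / T):6.3f}")
```

The integral tracks $\ln (D/k_{B}T)$ up to a temperature-independent constant; this is the logarithm resummed by the scaling procedure.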
Because of the loop over the fermions from the contacts, the “non-parquet” contribution of Fig. 8 carries an overall factor of $K$, that is, the diagram is proportional to the number of channels, and the correction to the coupling constant up to third order in $J$ is: $$\Delta ^{(3)}J=(2\nu \left( 0\right) J^{2}-2K\left( \nu \left( 0\right) \right) ^{2}J^{3})\ln (\frac{D}{k_{B}T}). \label{ren2}$$ Integration of the renormalization group equation for $J$ provides: $$-\frac{1}{2j}+\frac{1}{2j_{0}}+\frac{K}{2}\ln \left[ \frac{j}{j_{0}}\left( \frac{1-Kj_{0}}{1-Kj}\right) \right] =x-x_{0} \label{flow1}$$ where $j=\nu \left( 0\right) J$ and $x=\ln (\frac{D}{k_{B}T})$. It is seen from eq. (\[flow1\]) that the fixed point is reached at an intermediate coupling $j^*$. However, the perturbative RG analysis is not conclusive here, because even in the ordinary, single channel, Kondo model an artificial intermediate coupling fixed point is produced when the perturbative RG equations are expanded to third order [@zawa2]. It is straightforward to infer the Kondo temperature $T_{K}$ from eq. (\[flow1\]). Usually $T_{K}$ is defined as the temperature scale at which $j$ becomes ${\cal O}(1)$. Following such a criterion, at $T=T_{K}$ we can neglect $1/j$ compared to $1/j_{0}$ and obtain the following approximate formula for $T_{K} $: $$k_{B}T_{K}\approx D(j_{0})^{\frac{K}{2}}e^{-\frac{1}{2j_{0}}}. \label{tkondo}$$ Eq. (\[tkondo\]) is quite general, in that it provides the value of the crossover temperature for any number of channels $K$. This proves that the way the system approaches the scale at which the perturbative analysis breaks down does not depend on the number of channels, except for a redefinition of the Kondo temperature. Hence, logarithmic divergences are expected when approaching $T_{K}$, no matter what the number of channels is. In the next subsection we will analyze the $T=0$ behavior of such a system, and will see that it is dramatically different from the one channel case as far as the fixed-point properties are concerned. Analysis at $T\sim 0$ --------------------- Several techniques have been applied in order to get information about physical quantities around the fixed point in the Kondo regime in such a limit. Finite-$T$ corrections have been derived by means of bosonization techniques [@guinea][@emery], of Bethe-ansatz like exact solutions [@tsvi] and of Conformal Field Theory  (CFT) techniques [@ludaff]. Now, the CFT approach is extremely effective in calculating finite-$T$ corrections, Wilson ratios and other exact results concerning Green functions [@ludaff], but its starting point, namely that charge, spin and flavor quantum numbers are enough to identify a primary fermionic field in the theory, leads to an inconsistency: the corresponding on-shell $S$ matrix turns out to be $0$. The solution to such a unitarity paradox has been suggested by Ludwig and Maldacena [@maldacena], who introduced a fourth spin-flavor quantum number. So, while at the unitarity limit charge, spin and flavor do not change upon scattering off the impurity, the spin-flavor changes, giving rise to one more scattering channel. Such an extra quantum number allows for an off-diagonal on-shell $S$ matrix; therefore the unitarity paradox simply means that the diagonal elements of the $S$ matrix with respect to the spin-flavor number are $0$. However, a theory of the unitarity limit, needed in order to compute, for instance, transport properties with finite-$T$ corrections within a unified framework, is not yet fully developed.
In this subsection we briefly sketch the first steps needed in order to derive an appropriate scattering potential à la Nozières, in the case of the two channel spin-1/2 overscreened Kondo effect. We follow an approach equivalent to the one by Tsvelick [@tsvelick], that is, the introduction of a regularization procedure able to move the fixed point toward infinite coupling. Then, we go through a bosonization-refermionization procedure in order to account for the scattering processes in which the spin-flavor changes. The bosonization procedure allows us to split the degrees of freedom involved in our problem: in this way we will show that the Kondo interaction involves only the spin and spin-flavor degrees of freedom, while the charge and flavor ones are fully decoupled. This remark is a crucial one because it makes it possible to derive an $S$ matrix in the unitarity limit which turns out to be diagonal in all quantum numbers but the spin-flavor; such an $S$ matrix describes the whole system of dot and contacts. In the following we discuss the case $K=2,S=1/2$, so we have two flavors of conduction electrons from an effectively one-dimensional conductor which interact with a localized spin$-1/2$ magnetic impurity. Let $c_{\alpha \sigma }(x)$ be the lead electron operators ($\sigma =\uparrow ,\downarrow $ is the spin index, $\alpha =1,2$ is the flavor index). The complete lattice-model Hamiltonian is written as: $$\begin{aligned} H^{2Ch} &=&-t\sum_{x}[c_{\alpha \sigma }^{\dagger }(x)c_{\alpha \sigma }(x+a)+c_{\alpha \sigma }^{\dagger }(x+a)c_{\alpha \sigma }(x)] \nonumber \\ &&-\mu \sum_{x}c_{\alpha \sigma }^{\dagger }(x)c_{\alpha \sigma }(x)+J\overrightarrow{S}_{d}{\cdot} \lbrack \vec{\sigma}_{1}(0)+\vec{\sigma}_{2}(0)]\end{aligned}$$ where $\vec{\sigma}_{\alpha }(x)=\frac{1}{2}c_{\alpha \sigma }^{\dagger }(x)\vec{\tau}_{\sigma \sigma ^{^{\prime }}}c_{\alpha \sigma ^{^{\prime }}}(x)$ and $a$ is the lattice spacing. Now, we can linearize the dispersion relation around the Fermi surface and introduce two chiral fields $c_{\pm ,\alpha \sigma }(x)$. Even and odd parities can be introduced to obtain fields with the same chirality, $\phi _{\alpha \sigma }^{e}$ and $\phi _{\alpha \sigma }^{o}$, so that the odd parity field fully decouples from the interaction Hamiltonian. In order to properly deal with the interacting fields, we will bosonize $\phi _{\alpha \sigma }^{e}$; in particular, we define four bosonic fields $\Psi _{\alpha \sigma }$ in terms of which it is possible to express the densities of the relevant physical quantities. To this end we are led to construct the four bosonic fields $\Psi _{X}$ ($X=$ch,sp,fl,sf) for the charge, spin, flavor and spin-flavor degrees of freedom [@emery][@maldacena][@mfab] as linear combinations of the previous ones. In this way it is possible to realize two “inequivalent” representations of the fields $\phi _{\alpha \sigma }^{e}$ in terms of the fields $\Psi _{X}$ ($X=$ch,sp,fl,sf), $\phi _{\alpha \sigma ;I}^{e}$ and $\phi _{\alpha \sigma ;II}^{e}$, given by: $$\begin{aligned} \phi _{\alpha \sigma ;I}^{e}(x) &=&\eta _{\alpha \sigma }:e^{-\frac{i}{2}[\Psi _{{\rm ch}}+\sigma \Psi _{{\rm sp}}+\alpha \Psi _{{\rm fl}}+\alpha \sigma \Psi _{{\rm sf}}](x)}: \nonumber \\ \phi _{\alpha \sigma ;II}^{e}(x) &=&\xi _{\alpha \sigma }:e^{-\frac{i}{2}[\Psi _{{\rm ch}}+\sigma \Psi _{{\rm sp}}+\alpha \Psi _{{\rm fl}}-\alpha \sigma \Psi _{{\rm sf}}](x)}: \label{39}\end{aligned}$$ where $\eta $ and $\xi $ are suitable Klein factors.
Notice that the two fields differ only in the spin-flavor quantum number, but such a difference is a crucial one. Now we are ready to rewrite the two channel Kondo interaction Hamiltonian in bosonic coordinates $\Psi _{X}$ as: $$\begin{aligned} H_{K}^{2Ch} &=&J\left\{ S_{d}^{+}:e^{-i\Psi _{{\rm sp}}(0)}::\cos (\Psi _{{\rm sf}}(0)):+S_{d}^{-}:e^{i\Psi _{{\rm sp}}(0)}::\cos (\Psi _{{\rm sf}}(0)):+S_{d}^{z}\frac{L}{2\pi }\frac{d\Psi _{{\rm sp}}(0)}{dx}\right\} \nonumber \\ &=&J\overrightarrow{S}_{d}{{\cdot} }[\vec{\Sigma}_{g}(0)+\vec{\Sigma}_{u}(0)] \label{bosi}\end{aligned}$$ where the spin densities $\vec{\Sigma}_{A,B}(x)$ are given by: $$\begin{aligned} \Sigma _{g}^{\pm }(x) &=&:e^{\pm i[\Psi _{{\rm sp}}+\Psi _{{\rm sf}}](x)}:\hspace*{1cm}\Sigma _{g}^{z}(x)=\frac{L}{4\pi }\frac{d}{dx}[\Psi _{{\rm sp}}+\Psi _{{\rm sf}}](x), \nonumber \\ \Sigma _{u}^{\pm }(x) &=&:e^{\pm i[\Psi _{{\rm sp}}-\Psi _{{\rm sf}}](x)}:\hspace*{1cm}\Sigma _{u}^{z}(x)=\frac{L}{4\pi }\frac{d}{dx}[\Psi _{{\rm sp}}-\Psi _{{\rm sf}}](x). \label{eq43}\end{aligned}$$ Both $\vec{\Sigma}_{g}(x)$ and $\vec{\Sigma}_{u}(x)$ are $SU(2)$ spin current operators, so the vector space is made of the bosonic vacuum $|bvac\rangle $ and the bosonic spin$-\frac{1}{2}$ spinors at a point $x$: $$\begin{aligned} |\sigma _{g}\rangle &\equiv &:e^{\sigma \frac{i}{2}[\Psi _{{\rm sp}}+\Psi _{{\rm sf}}](x)}:|bvac\rangle \nonumber \\ |\sigma _{u}\rangle &\equiv &:e^{\sigma \frac{i}{2}[\Psi _{{\rm sp}}-\Psi _{{\rm sf}}](x)}:|bvac\rangle . \label{eq44}\end{aligned}$$ No triplet combining the $g$ and $u$ spin species can occur because, given $\vec{\Sigma} _{g,u}=\int_{-L/2}^{L/2}dx:\vec{\Sigma}_{g,u}(x) $, we have: $$\vec{\Sigma}_{g}|\sigma _{u}\rangle =0\hspace*{1cm}\vec{\Sigma}_{u}|\sigma _{g}\rangle =0, \label{eq49}$$ that is, if at a point $x$ the spin density associated with $\vec{\Sigma}_{g}$ is $\neq 0$, then, at the same point, the spin density associated with $\vec{\Sigma}_{u}$ is $=0$, and vice versa; this does not allow for overscreening. Such a statement is a crucial one: indeed, it corresponds to a particular regularization scheme [@tsvelick] able to move the finite-coupling fixed point corresponding to the unitarity limit to $J =+\infty $. At this infinitely strongly coupled fixed point the impurity spin will be fully screened in a localized spin singlet. In principle, the system might lie in any linear combination mixing the two representations $g$, $u$, of the form: $$|GS\rangle _{\mu } \left | _{x=0} \right . =|\Uparrow \rangle \otimes \frac{1}{2}\biggl (|\downarrow _{g}\rangle +\mu |\downarrow _{u}\rangle \biggr )\biggr |_{x=0}-|\Downarrow \rangle \otimes \frac{1}{2}\biggl (|\uparrow _{g}\rangle +\mu |\uparrow _{u}\rangle \biggr ) \biggr |_{x=0} \label{symm1}$$ where $|\Uparrow \rangle $, $|\Downarrow \rangle $ are the two impurity states with opposite spin polarizations. Then we search for the two independent linear combinations that do not change upon scattering of lead electrons; it can be shown that such combinations correspond to the values $\mu =\pm i$. Now, in the fixed point limit, we “refermionize”. Physical states mix both representations, $I$ and $II$. Scattering by the impurity states should conserve all physical quantum numbers. It can be shown that the elastic scattering in the unitarity limit swaps the two inequivalent representations $I-II$ of the lead electrons. Correspondingly, the impurity absorbs/emits one spin-flavor quantum.
The even parity $S$ matrix has the following representation in the $I-II$ space: $${\bf S}^{0}{\bf (}\omega =0)=\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) \equiv \left( \begin{array}{cc} 0 & e^{-i\frac{\pi }{2}} \\ e^{i\frac{\pi }{2}} & 0 \end{array} \right) . \label{matS}$$ According to eq. (\[matS\]) the phase shifts induced by the scattering are $\delta ^{I\text{ }II}{\bf (}\omega =0)=-\frac{\pi }{4}$, $\delta ^{II\text{ }I}{\bf (}\omega =0)=\frac{\pi }{4}$, while in the case of the one channel spin$-1/2$ Kondo effect the phase shift was $\delta ^{0}=\frac{\pi }{2}$. Finally, the conductance can be easily obtained by the Landauer formula using eq. (\[matS\]) and ${\bf S}^{1}=-{\bf 1}$: $$g=Tr\left\{ {\bf T}\right\} =Tr\left\{ \left| \frac{1}{2}\sum_{l}S^{l}\right| ^{2}\right\} =\frac{1}{2}Tr\left( \begin{array}{cc} 1 & i \\ -i & 1 \end{array} \right) =1.$$ Incidentally, we point out that the ground-state degeneracy always decreases under renormalization, which is the content of the “${\rm g}$-theorem” [@ludaff]. This leads to a zero temperature entropy for the impurity given by ${\rm S}_{imp}\left( 0\right) =\frac{1}{2}\ln 2$. Temperature corrections to this result have also been calculated [@tsvelick] by using real fermion coordinates (Majorana fermions) $\psi ^{a}(x)(a=1,2,3)$, which obey the anticommutation relations $\{\psi ^{a}(x),\psi ^{b}(y)\}=\delta ^{ab}\delta (x-y)$, describing the relevant coordinates only. This corresponds as well to a regularization scheme where the fixed point has been moved to $J=\infty $. The $\sqrt{T}$ behavior is recovered, as mentioned in the introduction to this Section (see eq. (\[nflres\])). Can we reach the two channel Kondo fixed point in a quantum dot? ----------------------------------------------------------------- A possible experimental realization of the two-channel Kondo fixed point in a quantum dot has been recently proposed [@arturo3]: exact diagonalization results for a vertical quantum dot with five electrons show that it can be tuned, by means of a strong external magnetic field, to the degeneracy point between energy levels with $S=1/2$ but with different orbital angular momentum. Vertical cylindrical contacts provide single particle energy subbands labeled by the cross-sectional angular momentum and the $k$ vector of the incoming/outgoing electron. Appropriate tuning of the electron density in the contacts offers the chance of including two electron channels only. The advantage of this setting is that no exchange coupling can take place between the channels due to symmetry, so that they act as totally independent channels. Selection rules due to the cylindrical symmetry enforce angular momentum $m$ conservation and spin component $\sigma $ conservation in the cotunneling processes. For the special setting of the proposed device only a hole $(h)$ process for $\downarrow -$spin and a particle process for $\uparrow -$spin are allowed. They differ in the sign of the potential scattering, but this difference is inessential. In fact, a particle-hole transformation on the fermion fields of the $\uparrow $ channel only reverses the unwanted sign without affecting the exchange coupling. This puts both channels on an equal footing and points to an “orbital” Kondo coupling where the spin acts as the label for the channel. At this stage the dot plays the role of an effective spin $3/2$ impurity, interacting with two channels. So, as it stands, the system would flow to an underscreened situation as the temperature is lowered.
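Coming back for a moment to the unitarity-limit result above, the conductance trace following eq. (\[matS\]) can be checked with a few lines of linear algebra; the sketch below is our own illustration and simply encodes ${\bf S}^{0}$ and ${\bf S}^{1}=-{\bf 1}$.

```python
import numpy as np

# Even-parity S matrix in the I-II (spin-flavor) space, eq. (matS), and odd-parity S^1 = -1
S0 = np.array([[0, -1j],
               [1j,  0]])
S1 = -np.eye(2)

# Conductance from the Landauer-like trace: g = Tr{T}, with T = |(S0 + S1)/2|^2
M = (S0 + S1) / 2
T = M @ M.conj().T
print(np.round(T, 3))               # -> 0.5 * [[1, 1j], [-1j, 1]]
print("g =", np.trace(T).real)      # -> 1.0, as quoted in the text
```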
However, provided that the Zeeman spin splitting $\Delta $ is larger than the Kondo temperature for the underscreened fixed point, $T_{K}^{u}$, there is a crossover temperature $T^{c}$ at which two of the four $S=1/2$ states are ruled out of the cotunneling processes, thus allowing for an effective spin $1/2$ two channel Kondo flow. Detecting the effect requires an appropriate control of the hybridization of the dot with the contacts (so that $\Delta \gg T_{K}^{u}$) and a proper tuning of the gate voltage $V_{g}$. This allows for fine tuning of the exchange couplings between the dot and the two channels. It is well known that the hardest condition to be realized is total equivalence of the channels in the coupling. Were this not the case, the system would prefer one of the two channels as the dominant one and flow to a more conventional one-channel Kondo state. By tuning $V_{g}$ one can cross the point where the two channels are equally coupled, which allows for scaling towards the two-channel fixed point. The measured conductance $g\left( T\right) $ [*vs*]{} $T$ will exhibit quadratic behavior at low temperature, except for a crossover to a square root behavior at the appropriate $V_{g}$ value which makes the two channels perfectly equivalent (see Fig. 6). The delicate point in this proposal is the requirement of full cylindrical symmetry of the device. Another setup for measuring the two channel Kondo conductance has been proposed recently [@oreg]. This proposal relies on an extra contact to provide the second channel. It is recognized, however, that this would introduce cross exchange terms between the three leads which do not allow for equivalent coupling of the two channels. In fact, any diagonalization of the scattering problem of the kind outlined in the previous Subsection to isolate the independent channels can never produce equal couplings as long as the off-diagonal terms are non zero. To overcome this difficulty the authors propose that the extra contact is a larger dot itself, having a charging energy $E_{c}$ which hinders exchange coupling with the other two leads. Of course its size should be appropriately tuned. A smaller size implies a larger level spacing $\delta $ inside it, and $\delta $ should be low enough because it acts as the low energy cutoff in the scaling toward the fixed point. Therefore, there is a delicate trade-off between increasing $E_{c}$ to prevent cross exchange terms and decreasing $\delta $ so as not to stop the flow when reducing the temperature. We have stressed that the NFL fixed point in the two channel Kondo coupling at $T=0$ can only be reached by tuning the exchange couplings of the two channels, $J_{1},J_{2}$, to be exactly equal. Hopefully this can be done by changing an appropriate gate voltage across a critical point $V_{g}^{\ast }$. According to Fig. 4 the system flows to $J_{1}\rightarrow \infty ,J_{2}\rightarrow 0$ for $V_{g}<V_{g}^{\ast }$ and to $J_{2}\rightarrow \infty ,J_{1}\rightarrow 0$ for $V_{g}>V_{g}^{\ast }$, both of which are Fermi liquid fixed points. It has been argued that the quantum transition across $V_{g}^{\ast }$ should display a quantum critical region in the $T-V_{g}$ plane whose critical properties can be determined [@glazman3]. This offers better chances to spot whether we are in the vicinity of the NFL fixed point or not, even at finite temperature. Summary ======= In this review we focused on equilibrium transport properties across a quantum dot (QD). A QD is a tunneling center for electrons coming from the leads.
Depending on the transparency of the barriers and on the temperature, the coupling $V$ to the leads is weak or strong. In the weak coupling regime, tunneling can be dealt with perturbatively. In the case of the QD, a Coulomb Blockade zone is delimited by two sharp conduction peaks which grow as $|V|^{2}/T$ as the temperature is lowered. Conduction in between is due to cotunneling processes and is exponentially damped in temperature. The differential conductance is quite small, being $\propto |V|^{4}/U$, so that the charge degree of freedom is frozen on the dot. On lowering the temperature, a non perturbative coupling of the dot to the delocalized electrons of the contacts can occur and the conductance can reach the unitarity limit in a CB valley. For pedagogical reasons we have reviewed the old-fashioned Anderson-Yuval model for the correlated state. Otherwise, we have used the [*poor man’s scaling*]{} approach in the regime where perturbative scaling applies and the Nozières scattering approach, which captures the fixed point physics at zero temperature. These approaches do not allow for quantitative results, which are better obtained with the numerical renormalization group (NRG), real time RG and the Non Crossing Approximation (NCA) (the last one preferably in the overscreened case, when a non Fermi liquid (NFL) fixed point is reached), but they offer a more transparent view of what is going on. We have briefly reviewed the various types of Kondo couplings. We have considered the case in which the GS of the dot is degenerate because of spin: if the temperature is low enough ($T<T_{K} $, with $T_{K}$ depending on the transparency of the barriers), spin flip processes proliferate and the magnetic moment of the dot is partially or fully screened. In this case an applied magnetic field has a disruptive effect on the Kondo peak of the conductance. The interesting case of a crossing between different dot spin states induced by the magnetic field has also been discussed. We have also reported on other possible realizations of Kondo physics involving orbital degrees of freedom (orbital Kondo). The most favourable setup for this case is a vertical geometry of the dot and the contacts with cylindrical symmetry. In this geometry a magnetic field orthogonal to the dot (which may be strong) can induce level crossings and produce the degeneracy of the dot GS which is required for Kondo conductance to take place. Hence one can have the Kondo effect even with an even number of electrons on the dot and zero total spin. Some attention has been devoted to the multichannel Kondo effect. The overscreened case can lead to a NFL fixed point at zero temperature. In particular we have discussed the two channel spin $1/2$ Kondo state and reported on possible experimental realizations that have been proposed. It emerges that the conditions to be met are very demanding. Nonetheless, its achievement would probe a beautiful piece of the physics of strongly correlated systems. Appendix A: RG equations for the Coulomb gas model of eq. (29) ============================================================== We summarize here the scaling of the action in eq. (\[part1\]) in order to find out the behaviour of the system at large time scales (low temperature) [@yuval][@ni]. The cutoff is rescaled according to $\tau _{M}\rightarrow (1+\lambda )\tau _{M}$, with $\lambda =\Delta \tau _{M}/\tau _{M}\rightarrow d\ln \tau _{M}$. This adds a factor $e^{-2N\lambda (1-\alpha ^{2}/2)}$.
The first term arises from $\tau _{M}^{2N}$ in the denominator, while the one $\propto \alpha ^{2}$ arises from the $\ln$ term in the interaction. Indeed, charge neutrality implies that $0=(\sum_{i}q_{i})^{2}=\sum_{i\neq j}q_{i}q_{j}+\sum_{i}q_{i}^{2}$ and $q_{i,j}=\pm 1$. This factor renormalizes the fugacity $Y\rightarrow Y+dY$, with: $$Y+dY\approx Ye^{\lambda (1-\alpha ^{2}/2)}, \label{yren}$$ which gives the first of eqs. (\[rg1\]). The interaction strength $\alpha $ is renormalized by flip-antiflip (particle-antiparticle) fusion. Let us consider now all the configurations in which pairs of neighboring charges $q_{i}$ at $\tau _{i}$ and $q_{j}=-q_{i}$ at $\tau _{j}$ (where $j=i\pm 1$) are at a distance between $\tau _{M}$ and $\tau _{M}(1+\lambda )$. In increasing the scale, these pairs are seen as a neutral compound which screens the interaction between other charges (“fusion” of pairs). Let us consider one single fusion process and develop that part of the action that contains their coordinates: $$\begin{aligned} -S^{(2N+2)} &=&-S^{(2N)}+q_{i}q_{j}\ln \left| \frac{\epsilon }{\tau _{M}}\right| \nonumber \\ &&+\frac{q_{i}}{2}\sum_{k\neq i,j}q_{k}\left[ \ln \left| \frac{\tau _{k}-\tau -\epsilon /2}{\tau _{M}}\right| -\ln \left| \frac{\tau _{k}-\tau +\epsilon /2}{\tau _{M}}\right| \right] .\end{aligned}$$ Here we have defined $\epsilon /2=\frac{\tau _{i}-\tau _{j}}{2}$ and $\tau =\frac{\tau _{i}+\tau _{j}}{2}$. The integral over $\tau $ and $\epsilon $ for $\epsilon /\tau _{M}\ll 1$ which appears in the partition function of eq. (\[part1\]) is: $$\begin{aligned} &&e^{-S^{(2N)}}Y^{2}\int_{\tau _{i-1}+\tau _{M}}^{\tau _{j+1}-\tau _{M}}\frac{d\tau }{\tau _{M}}\int_{\tau _{M}}^{\tau _{M}(1+\lambda )}\frac{d\epsilon }{\tau _{M}}\left( \frac{\epsilon }{\tau _{M}}\right) ^{-\alpha ^{2}}{\cdot} e^{-\alpha ^{2}\epsilon \frac{q_{i}}{2}\sum_{k\neq i,j}q_{k}\frac{\partial }{\partial \tau }\ln \left| \frac{\tau _{k}-\tau }{\tau _{M}}\right| } \nonumber \\ &\sim &\lambda Y^{2}\left\{ 1-\alpha ^{2}\left( \frac{q_{i-1}}{2}\sum_{k\neq i,j}^{2N}q_{k}\ln \left| \frac{\tau _{i-1}+\tau _{M}-\tau _{k}}{\tau _{M}}\right| +\frac{q_{j+1}}{2}\sum_{k\neq i,j}^{2N}q_{k}\ln \left| \frac{\tau _{j+1}-\tau _{M}-\tau _{k}}{\tau _{M}}\right| \right) \right\} ,\end{aligned}$$ where the term with $k=i-1$ $(k=j+1)$ in the first (second) sum vanishes. Doing the same for each interval $i-j$ and summing, each term is repeated twice. This generates an extra contribution to the action that can be interpreted as the renormalization of the coupling constant: $$\alpha ^{2}\rightarrow \alpha ^{2}+d\alpha ^{2}\;\;;\;d\alpha ^{2}=-2\lambda Y^{2}\alpha ^{2}. \label{aren}$$ Because $\lambda \approx d\ln \tau _{M}$, the second one of eqs. (\[rg1\]) is obtained. Appendix B: Temperature dependence of the conductance ====================================================== Ohm’s law can be derived semiclassically from a Boltzmann equation which accounts for weak scattering of the charge carriers off impurities and defects, when driven by an electric field. Scattering provides a mechanism for relaxation from a non equilibrium to a steady state flow [@ashcroft]. We follow here a simplified approach starting from the velocity of band electrons $\vec{v}_{k}=\vec{\nabla}\epsilon _{k}/\hbar $. The current density is: $$\vec{j}=-\frac{2e}{3{\cal {V}}}\sum_{k}\vec{v}_{k}(f_{k}-f_{k}^{o})$$ where $f_{k}(f_{k}^{o})$ is the non-equilibrium (equilibrium) Fermi distribution and ${\cal {V}}$ is the volume.
Now we take: $$f_{k}-f_{k}^{o}=-\frac{\partial f^{o}}{\partial \epsilon _{k}}\delta \epsilon _{k}\hspace*{0.5cm}\delta \epsilon _{k}=-e\vec{E}{{\cdot} }\vec{v}_{k}\tau (k),$$ which defines the relaxation time $\tau (k)$, so that, according to Ohm’s law, we get: $$\sigma =-\frac{2e^{2}}{3{\cal {V}}}\sum_{k}\vec{v}_{k}{{\cdot} }\vec{v} _{k}\tau (k)\frac{\partial f^{o}}{\partial \epsilon _{k}}. \label{singult}$$ In the case of $s-$wave scattering a spherical wave is diffracted from the impurity with maximum amplitude at the resonance; this increases the flux propagating backward. Assuming that quantities depend on $\epsilon $ only and are mostly evaluated at the Fermi level, we have ($T\sim 0$): $$\sigma =-\frac{2e^{2}}{3{\cal {V}}}\nu (0){v}_{F}^{2}\int d\epsilon \: \tau (\epsilon ,T)\frac{\partial f^{o}}{\partial \epsilon }. \label{rising}$$ In this way we recover the simple microscopic formula for Ohm’s conductivity, first derived by Drude in 1900, $\sigma =ne^{2}\tau /m$, where $\tau $ is an average relaxation time. The latter can be defined in terms of the mean free path between two scattering events, $l=v_{F}\tau $, or the distance between impurities. This formula is valid in the limit $k_{F}l\gg 1$, which we assume to be the case. In such a limit no localization effects occur and one can show that $d\rho /dT>0$ always; this is violated in the case of Kondo conductivity in dilute alloys. In fact, for the large $U$ Anderson model, the explicit dependence of the transport time on temperature at the Fermi energy has to be taken into account separately. In addition, the Sommerfeld expansion can be used at finite temperatures [@costi]: $$\int d\epsilon \left( -\frac{\partial f(\epsilon )}{\partial \epsilon }\right) \tau (\epsilon ,T)=\tau (\mu ,T)+\left. \frac{\pi ^{2}}{6}(k_{B}T)^{2}\frac{d^{2}\tau }{d\epsilon ^{2}}\right| _{\epsilon =\mu }=\tau (\mu ,0)\left[ 1+\frac{\pi ^{2}}{16T_{K}^{2}}\left( \frac{1}{2}\pi ^{2}T^{2}+\frac{\pi ^{2}}{6}3T^{2}\right) \right] . \label{sommerfeld}$$ The main ingredients to derive this formula are the Fermi liquid nature of the zero temperature GS and the Taylor expansion of the temperature dependence of the relaxation time. The prefactors have been specialized to the case of the symmetric Anderson model (so that the lifetime of the Kondo resonance is $\Gamma =4k_{B}T_{K}/\pi $). It follows that, in dilute alloys, the resistivity has a maximum at $T=0$: $$\frac{\rho (T)}{\rho (0)}=1-\frac{\pi ^{4}T^{2}}{16T_{K}^{2}}.$$ The extra correction on the r.h.s. of eq. (\[sommerfeld\]) arises from inelastic scattering at finite temperature, as shown by Nozières [@nozieres], according to the following argument. In an elastic resonant scattering process at $T=0$ one has $-\Im m\{t^{0}\}=\frac{1}{\nu (0)\pi }\sin ^{2}\delta ^{0}$. In diffusive $s-$wave scattering at finite $T$, we have to consider the total relaxation rate, defined as: $$\frac{1}{\tau (\epsilon )}=\sum_{k^{\prime }}w_{kk^{\prime }}(1-k{{\cdot} }k^{\prime })=\frac{2\pi }{\hbar }N_{i}\nu (0)\int d\epsilon _{k^{\prime }}\delta (\epsilon _{k}-\epsilon _{k^{\prime }})|<k|t|k^{\prime }>|^{2}$$ ($s-$wave scattering implies that the correction $k{{\cdot} }k^{\prime }$ averages to zero). The inelasticity can be accounted for by defining an effective elastic $S-$matrix, which is now no longer unitary, and an effective phase shift which is different from the one at $T=0$.
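As a quick numerical illustration of the two quadratic laws just quoted (our own sketch, with $k_{B}=1$ and temperatures well below $T_{K}$; it does nothing beyond evaluating the formulas):

```python
import numpy as np

T_K = 1.0   # Kondo temperature in units where k_B = 1

def rho_ratio(T):
    """Dilute-alloy resistivity, rho(T)/rho(0) = 1 - pi^4 T^2 / (16 T_K^2), for T << T_K."""
    return 1.0 - np.pi**4 * T**2 / (16.0 * T_K**2)

def g_ratio(T):
    """Dot conductance, g/g_u = 1 - (pi T / T_K)^2, eq. (cstrong), for T << T_K."""
    return 1.0 - (np.pi * T / T_K)**2

for T in [0.0, 0.02, 0.05, 0.10]:
    print(f"T/T_K = {T:4.2f}:  rho(T)/rho(0) = {rho_ratio(T):.4f}   g(T)/g_u = {g_ratio(T):.4f}")
```

Both quantities are maximal at $T=0$ and decrease quadratically — the resistivity for the dilute alloy and the conductance for the dot — which is the ‘reversed behavior’ discussed earlier.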
Unlike the effective elastic $S-$matrix just introduced, the full $S-$matrix is of course unitary: $1=\sum_{\beta }|S_{\alpha \beta }|^{2}$ $\equiv \sum_{\beta }[\delta _{\alpha \beta }-2\pi i\nu (0)t_{\alpha \beta }\delta (\epsilon _{\alpha }-\epsilon _{\beta })][\delta _{\alpha \beta }+2\pi i\nu (0)t_{\alpha \beta }^{\ast }\delta (\epsilon _{\alpha }-\epsilon _{\beta })]$ $=\sum_{\beta }\delta _{\alpha \beta }+4\pi \nu (0)\Im m\{t_{\alpha \alpha }\}+\sum_{\beta }4\pi ^{2}\left( \nu (0)\right) ^{2}|t_{\alpha \beta }|^{2}\delta (\epsilon _{\alpha }-\epsilon _{\beta }).$ Now, we separate the elastic channels $\beta =\alpha $ from the inelastic ones on the r.h.s. and we introduce the inelastic cross-section $W_{\alpha }^{in}=2\pi \sum_{\beta \neq \alpha }\nu (0)|t_{\alpha \beta }|^{2}\delta (\epsilon _{\alpha }-\epsilon _{\beta })$; hence, from unitarity it follows that $4\pi \nu (0)\Im m\{t_{\alpha \alpha }\}+4\pi ^{2}\left( \nu (0)\right) ^{2}|t_{\alpha \alpha }|^{2}+2\pi \nu (0)W_{\alpha }^{in}=0$. Thus, we can define an effective phase shift (which we call $\delta $ again) in the presence of inelastic scattering by introducing an effective elastic $S$ matrix: $$1-2\pi i\nu (0)t_{\alpha \alpha }=\left( 1-2\pi \nu (0)W_{\alpha }^{in}\right) ^{1/2}e^{2i\delta },$$ where the square root can be expanded. According to the optical theorem, we can write the total transmission probability as: $$W_{\alpha }=-2\Im m\{t_{\alpha \alpha }\}=W_{\alpha }^{in}\cos 2\delta +\frac{(1-\cos 2\delta )}{\nu (0)\pi }. \label{pt1}$$ Now, let us notice that, because $\delta \sim \pi /2$, we have $\cos 2\delta <0$. Therefore, for each channel $\alpha $ the quantity $W_{\alpha }$ is always smaller than the elastic case value: $2\sin ^{2}\delta /(\nu (0)\pi )$. But one can thermally average over the $\alpha $ channels; according to the usual phase space argument which is invoked when scattering occurs close to the Fermi surface at very low $T$, we obtain an average $\overline{W^{in}} $ proportional to $T^{2}$: $\overline{W^{in}}\sim AT^{2}$. This gives rise to the first term in the expansion of eq. (\[sommerfeld\]). The second term can be viewed as contributing to the Sommerfeld expansion with $\delta (\epsilon )=\frac{\pi }{2}-a\epsilon $, from which it follows that $\sin ^{2}\delta \sim 1-a^{2}\epsilon ^{2}$. In the case of resonant tunneling across the dot, the conductance is given by eq. (\[correc1\]). Using the result of eq. (\[pt1\]) directly, one obtains that the conductance has a maximum at zero temperature and the first corrections in temperature are again of ${\cal {O}}(T^{2})$ (see eq. (\[cstrong\])). [**Acknowledgements**]{} This manuscript is the revised and updated version of the lectures delivered by one of us (A. T.) at the INFN Laboratories in Frascati (Italy) during the school “Nanotubes & Nanostructures” (October 18-27, 2001). A. T. wishes to thank S. Bellucci and M. De Crescenzi for the invitation in such a warm and stimulating atmosphere. A. Naddeo was supported by a CNR fellowship while this work was done. L. I. Glazman and M. E. Raikh, [*Pis’ma Zh. Eksp. Teor. Fiz.*]{} [**47**]{}, 378 (1988) \[[*JETP Lett.*]{} [**47**]{}, 452 (1988)\]. I. L. Aleiner, P. W. Brouwer and L. I. Glazman, [*Phys. Rep.*]{} [**358**]{}, 309 (2002). D. Goldhaber-Gordon, H. Shtrikman, D. Mahalu, D. Abusch-Magder, U. Meirav and M. A. Kastner, [*Nature*]{} [**391**]{}, 156 (1998); S. M. Cronenwett, T. H. Oosterkamp and L. P. Kouwenhoven, [*Science*]{} [**281**]{}, 540 (1998); J. Schmid, J. Weis, K. Eberl and K. v. Klitzing, [*Physica*]{} [**B256-258**]{}, 182 (1998); S. Sasaki, S. De Franceschi, J. M. Elzerman, W. G. van der Wiel, M. Eto, S. Tarucha and L. P.
Kouwenhoven, [*Nature*]{} [**405**]{}, 764 (2000). J. Kondo, [*Prog. Theor. Phys.*]{} [**32**]{}, 37 (1964). A. C. Hewson, “The Kondo Effect to Heavy Fermions”, Cambridge University Press, Cambridge (1993). K. G. Wilson, [*Rev. Mod. Phys.*]{} [**47**]{}, 773 (1975); H. R. Krishnamurthy, J. W. Wilkins and K. G. Wilson, [*Phys.*]{} [*Rev.* ]{}[**B21**]{}, 1003 (1980). A. M. Tsvelick and P. B. Wiegmann, [*Advances in Physics*]{} [**32**]{}, 453 (1983). I. Affleck and A. W. W. Ludwig, [*Phys. Rev.*]{} [**B48**]{}, 7297 (1993); I. Affleck and A. W. W. Ludwig, [*Nucl. Phys.*]{} [**B352**]{}, 849 (1991); I. Affleck and A. W. W. Ludwig, [*Nucl. Phys.*]{} [**B360**]{}, 641 (1991); I. Affleck and A. W. W. Ludwig, [*Phys. Rev. Lett.*]{} [**67**]{}, 161 (1991); I. Affleck, [*Acta Phys. Polon. B*]{} [**26**]{}, 1869 (1995). N. E. Bickers, [*Rev. Mod. Phys.*]{} [**59**]{}, 845 (1987). T. A. Costi, A. C. Hewson and V. Zlatic, [*J. Phys. Cond. Matt.*]{} [**6**]{}, 2519 (1994); T. A. Costi and A. C. Hewson, [*Phil. Mag. B*]{} [**65**]{}, 1165 (1992). T. A. Costi, [*Phys. Rev.*]{} [**B64**]{}, 24130(R) (2001); T. A. Costi, [*Phys. Rev. Lett.*]{} [**85**]{}, 1504 (2000). O. Sakai and Y. Shimizu, [*J. Phys. Soc. Jpn.*]{} [**61**]{}, 2333 (1992); O. Sakai, S. Suzuki and Y. Shimizu, [*J. Physica*]{} [**B206-207**]{}, 141 (1995); W. Izumida, O. Sakai and Y. Shimizu, [*J. Phys. Soc. Jpn.*]{} [**66**]{}, 717 (1997); W. Izumida, O. Sakai and Y. Shimizu, [*J. Phys. Soc. Jpn.*]{} [**67**]{}, 2444 (1998). T. K. Ng, [*Phys. Rev. Lett.*]{} [**61**]{}, 1768 (1988). S. Hershfield, J. H. Davies and J. W. Wilkins, [*Phys. Rev. Lett.*]{} [**67**]{}, 3270 (1991). Y. Meir, N. S. Wingreen and P. A. Lee, [*Phys. Rev.*]{} [*Lett.*]{} [**70**]{}, 2601 (1993); Y. Meir and N. S. Wingreen, [*Phys. Rev. Lett.*]{} [**68**]{}, 2512 (1992). J. Koenig, J. Schmidt, H. Schoeller and G. Schon, [*Phys. Rev.*]{} [**B54**]{}, 16820 (1996); [*Czech. J. Phys.* ]{}[**46** ]{}S 4, 2399 (1996). Y. Meir and N. S. Wingreen, [*Phys. Rev.*]{} [**B49**]{}, 11040 (1994). M. H. Hettler, J. Kroha and S. Hershfield, [*Phys. Rev.*]{} [**B58**]{}, 5649 (1998). H. Schoeller and G. Schon, [*Phys. Rev.*]{} [**B50**]{}, 18436 (1994); H. Schoeller, in [*“Mesoscopic electron transport”*]{}, L. Sohn, L. P. Kouwenhoven and G. Schön eds., NATO ASI Series [**E 345**]{},105, Kluwer, Dordrecht, Netherlands (1997), pg. 291-330; H. Schoeller, [*Lect. Notes Phys.* ]{}[**544**]{}, 137 (2000). H. Schoeller and J. Koenig, [*Phys. Rev. Lett.*]{} [**84**]{}, 3686 (2000); J. Koenig and H. Schoeller, [*Phys. Rev. Lett.*]{} [**81**]{}, 3511 (1998). P. W. Anderson, G. Yuval and D. R. Hamann, [*Phys. Rev.*]{} [**B1**]{}, 4464 (1970); D. R. Hamann, [*Phys. Rev.*]{} [**B2**]{}, 1373 (1970). L. I. Glazman and M. Pustilnik, cond-mat/0302159, to appear in Proceedings of the NATO ASI [*“New Directions in Mesoscopic Physics”*]{}, Erice (2002). D. Giuliano, B. Jouault and A.Tagliacozzo, [*Phys. Rev.*]{} [**B63**]{},125318 (2001); D. Giuliano, B. Jouault and A. Tagliacozzo, in [*“Macroscopic Quantum Coherence and Quantum Computing”*]{}, D. V. Averin et al. eds., Kluwer Academic/Plenum Publishers, New York (2001), pg. 325. D. Giuliano, B. Jouault and A. Tagliacozzo, [*Europhys. Lett.*]{} [**58(3)**]{}, 401 (2002). Y. Oreg and D. Goldhaber-Gordon, [*Phys. Rev. Lett.*]{} [**90**]{}, 136602 (2003). L. I. Glazman and M. Pustilnik, [*Phys. Rev. Lett.*]{} [**91**]{}, 066405 (2003). L. P. Kouwenhoven [*et al.*]{}, in [*“Mesoscopic electron transport”*]{}, L. Sohn, L. P. Kouwenhoven and G. 
Schön eds., NATO ASI Series [**E 345**]{}, 105, Kluwer, Dordrecht, Netherlands (1997). B. Jouault, G. Santoro and A. Tagliacozzo, [*Phys. Rev.*]{} [**B61**]{}, 10242 (2000). M. A. Kastner and D. Goldhaber-Gordon, [*Solid State Comm.*]{} [**119**]{}, 245 (2001). T. K. Ng and P. A. Lee, [*Phys. Rev. Lett.*]{} [**61**]{}, 1768 (1988). A. P. Jauho, N. S. Wingreen and Y. Meir, [*Phys. Rev.*]{} [**B50**]{}, 5528 (1994). L. V. Keldysh, [*Zh. Eksp. Teor. Fiz.*]{} [**47**]{}, 1515 (1965) \[[*Sov. Phys. JETP*]{} [**20**]{}, 1018 (1965)\]; G. D. Mahan, “Many-Particle Physics”, Plenum, New York (1990), 2nd ed. A. Tagliacozzo and E. Tosatti, [*Physica Scripta*]{} [**38**]{}, 301 (1988). B. Nienhuis, “Coulomb Gas Formulation of Two-dimensional Phase Transitions”, C. Domb and J. Lebowitz eds., Academic Press (1987). D. Giuliano and A. Tagliacozzo, [*Phys. Rev. Lett.*]{} [**84**]{}, 4677 (2000). J. Kondo, [*J. Appl. Phys.*]{} [**37**]{}, 1177 (1966). W. G. van der Wiel, S. De Franceschi, T. Fujisawa, J. M. Elzerman, S. Tarucha and L. P. Kouwenhoven, [*Science*]{} [**289**]{}, 2105 (2000). J. R. Schrieffer and P. A. Wolff, [*Phys. Rev.*]{} [**149**]{}, 491 (1966). P. W. Anderson, [*J. Phys.*]{} [**C3**]{}, 2436 (1970). P. Nozières, [*J. Low Temp. Phys.*]{} [**17**]{}, 31 (1974). P. Nozières and A. Blandin, [*J. Phys. Paris*]{} [**41**]{}, 193 (1980). M. Fabrizio, A. O. Gogolin and P. Nozières, [*Phys. Rev.*]{} [**B51**]{}, 16088 (1995). N. Andrei, [*Phys. Lett.*]{} [**A87**]{}, 299 (1982); N. Andrei, K. Furuya and J. H. Lowenstein, [*Rev. Mod. Phys.*]{} [**55**]{}, 331 (1983). Yu. N. Ovchinnikov and A. M. Dyugaev, [*JETP Lett.*]{} [**70**]{}, 111 (1999); [*JETP Lett.*]{} [**88**]{}, 696 (1999). S. Tarucha, D. G. Austing, T. Honda, R. J. van der Hage and L. P. Kouwenhoven, [*Phys. Rev. Lett.*]{} [**77**]{}, 3613 (1996). M. Eto and Y. Nazarov, [*Phys. Rev. Lett.*]{} [**85**]{}, 1306 (2000). M. Pustilnik, L. I. Glazman, D. H. Cobden and L. P. Kouwenhoven, [*Lect. Notes Phys.*]{} [**579**]{}, 3 (2001); M. Pustilnik and L. I. Glazman, [*Phys. Rev. Lett.*]{} [**85**]{}, 2993 (2001); [*Phys. Rev.*]{} [**B64**]{}, 045328 (2001). I. Affleck, A. W. W. Ludwig and B. A. Jones, [*Phys. Rev.*]{} [**B52**]{}, 9528 (1995). M. Wagner, U. Merkt and A. V. Chaplik, [*Phys. Rev.*]{} [**B45**]{}, 1951 (1992); P. Lucignano, B. Jouault, and A. Tagliacozzo, [*Phys. Rev.*]{} [**B69**]{}, 045314 (2004). D. C. Ralph, A. W. W. Ludwig, J. von Delft and R. A. Buhrman, [*Phys. Rev. Lett.*]{} [**72**]{}, 1064 (1994); S. K. Upadhyay, R. N. Louie and R. A. Buhrman, [*Phys. Rev.*]{} [**B56**]{}, 12033 (1997). G. Zaránd and A. Zawadowsky, [*Phys. Rev. Lett.*]{} [**72**]{}, 542 (1994); [*Phys. Rev.*]{} [**B50**]{}, 932 (1994); D. L. Cox and A. Zawadowsky, [*Adv. Phys.*]{} [**47**]{}, 599 (1998). I. L. Aleiner, B. L. Altshuler and Y. M. Galperin, cond-mat/0102513. M. H. Hettler, J. Kroha and S. Hershfield, [*Phys. Rev. Lett.*]{} [**73**]{}, 1967 (1994). J. M. Luttinger, [*Phys. Rev.*]{} [**121**]{}, 942 (1960). A. L. Fetter and J. D. Walecka, “Quantum Theory of Many-Particle Systems”, McGraw-Hill Editions (1981). K. Vladar and A. Zawadowsky, [*Phys. Rev.*]{} [**B28**]{}, 1564, 1582, 1596 (1983). A. Muramatsu and F. Guinea, [*Phys. Rev. Lett.*]{} [**57**]{}, 2337 (1986). V. J. Emery and S. Kivelson, [*Phys. Rev.*]{} [**B46**]{}, 10812 (1992); [*Phys. Rev. Lett.*]{} [**71**]{}, 3701 (1993). J. M. Maldacena and A. W. W. Ludwig, [*Nucl. Phys.*]{} [**B506**]{}, 565 (1997). P. Coleman, L. Ioffe and A. M.
Tsvelick, [*Phys. Rev.*]{} [**B52**]{}, 6611 (1995). J. von Delft, G. Zaránd and M. Fabrizio, [*Phys. Rev. Lett.*]{} [**81**]{}, 196 (1998). N. W. Ashcroft and N. D. Mermin, ”Solid State Physics”, Holt-Saunders International Editions, Tokyo (1976). [**Figure Captions**]{} - Figure 1: Schematic drawing for a dot and the contacts in a vertical setup; possibly a magnetic field is applied along the axis. - Figure 2: Differential conductance on a gray scale as a function of both $V_{g}$ and $V_{ds}$; the Kondo effect shows up near $V_{ds}=0$ [@kastner]. - Figure 3: Renormalization-group flow diagram for small $J$ [@yuval]. - Figure 4: Qualitative renormalization-group flow diagram for the anisotropic two channel Kondo problem; the non trivial fixed point corresponds to $J_{1}=J_{2}=J_{\ast }$ (channel symmetry) [@fabrizio]. - Figure 5: Plot of the universal function $f\left( x\right) $ vs $x=T/T_{K}$ [@costi1]. - Figure 6: Sketch of the conductance across the dot as a function of the temperature $T$ and the gate voltage $V_{g}$ [@arturo3]. - Figure 7: Second-order diagrams ${\cal D}_{1}$ and ${\cal D}_{2}$. Conduction fermions are represented as full lines, while dashed lines represent propagation of $d$-fermions (dot’s states). - Figure 8: Third-order vertex renormalization: the first one ($P$) is a ”parquet-type” diagram. Its contribution is accounted for in the integration of the second-order RG equation. The second one ($NP$) is a ”non parquet” diagram. It provides an additional third-order contribution to the $\beta $-function. ![image](fig1.eps){width="0.5\linewidth"} \[figura1\] ![image](fig2.eps){width="0.5\linewidth"} \[figura2\] ![image](fig3.eps){width="0.5\linewidth"} \[figura3\] ![image](fig4.eps){width="0.5\linewidth"} \[figura4\] ![image](fig5.eps){width="0.5\linewidth"} \[figura5\] ![image](fig7.eps){width="0.5\linewidth"} \[figura7\] ![image](fig8.eps){width="0.5\linewidth"} \[figura8\]
--- abstract: 'We train three convolutional neural networks (CNNs) to classify galaxies with the Galaxy Zoo 2 dataset and extract the activations from the last fully connected layer or the last average pooling layer of the CNNs to study the high-dimensional abstract feature representations of galaxy images. We apply t-Distributed Stochastic Neighbour Embedding (t-SNE), a popular dimensionality reduction technique, to visualize the high-dimensional galaxy feature representations in two-dimensional scatter plots. From the visualization, we try to understand the galaxy image data itself and obtain some highly valuable insights. For instance, the galaxy feature representations learned by the networks indicate that galaxies belonging to the same class tend to group together, i.e. galaxies of the same morphology are clustered; the cluster of completely round smooth galaxies and the cluster of in-between smooth galaxies (between completely round and cigar-shaped) lie closer to each other than to the other clusters; the cluster of cigar-shaped smooth galaxies and the cluster of edge-on galaxies are surprisingly intertwined; a galaxy mislabelled as a spiral galaxy in the original dataset falls in the cluster of completely round smooth galaxies, and manual inspection also identifies the outlier as a completely round smooth galaxy. These findings will facilitate the study of galaxy morphology.' author: - | Jia-Ming Dai,$^{1,2}$ [^1] Jizhou Tong$^{1}$\ $^{1}$ National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China\ $^{2}$ University of Chinese Academy of Sciences, Beijing 100049, China\ bibliography: - 'cit.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: Visualizing the Hidden Features of Galaxy Morphology with Machine Learning --- \[firstpage\] methods: data analysis-techniques: image processing-galaxies: general. Introduction {#sec:intro} ============ Deep learning has been increasingly applied to galaxy morphology classification and has achieved a series of successes. For example, @dieleman2015rotation first applied CNNs to galaxy morphology classification, exploiting the translation and rotation invariance of galaxy images on the Galaxy Zoo 2 dataset. @gravet2015catalog used the Dieleman model to classify high redshift galaxies in the five fields of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). @kim2016star used CNNs to perform star-galaxy classification. @aniyan2017classifying used CNNs to classify radio galaxies into FRI, FRII and Bent-tailed radio galaxies. @sanchez2017improving used CNNs to improve galaxy morphology classifications for the Sloan Digital Sky Survey (SDSS). These works focus on classification tasks; however, we argue that they can go further by digging into the networks to analyse what features are learned and how the learned features can help in a better understanding of the data itself. It is well known that the reason CNNs have been used so widely and successfully in galaxy morphology classification is that they can be fed with raw data directly and can automatically learn the feature representations needed for the classification task [@bengio2013representation; @lecun2015deep]. These learned feature representations are nonlinear mappings of the original image pixel values; although they are still high-dimensional, they can be used not only for classification, but also for other purposes, such as dimensionality reduction and visualization.
Visualization of high-dimensional data is an important method of data analysis and can reveal highly valuable insights. In the last few decades a series of dimensionality reduction techniques have been proposed to visualize high-dimensional data, owing to the abundance of high-dimensional data in the real world. Dimensionality reduction techniques include linear techniques, like Principal Components Analysis (PCA) [@hotelling1933analysis] and classical scaling [@torgerson1952multidimensional], and nonlinear techniques [@van2007introduction; @van2009dimensionality], like Isomap [@tenenbaum2000global], Locally Linear Embedding (LLE) [@roweis2000nonlinear], Laplacian Eigenmaps [@belkin2002laplacian], Auto-encoders [@hinton2006reducing] and t-Distributed Stochastic Neighbor Embedding (t-SNE) [@maaten2008visualizing; @van2014accelerating]. t-SNE is widely applied to visualize high-dimensional data in machine learning due to its capability of preserving the local structure of the data while also revealing global structure. @maaten2008visualizing proposed the t-SNE method and demonstrated its good performance compared with other dimensionality reduction techniques on a wide variety of datasets. @van2014accelerating trained CNNs to extract image feature representations, used t-SNE to visualize the high-dimensional data in scatter plots on the MNIST [@lecun1998mnist] and SVHN [@netzer2011reading] datasets, and obtained some valuable insights and qualitative analyses. Recently, @rauber2017visualizing systematically trained multilayer perceptrons (MLPs) and CNNs to extract activations from the last fully connected layer on three traditional image classification benchmark datasets (MNIST, SVHN and CIFAR-10), and then used t-SNE to visualize the learned representations in two-dimensional scatter plots. Some highly valuable insights were obtained. These works inspire the study of the feature representations of galaxy morphology automatically learned by networks. We look deeply into the networks, searching for the physical meanings of the high-dimensional abstract feature representations, and show how these feature representations help in understanding the galaxy image data itself. In this paper we train three CNNs to classify galaxies into five classes. We use the activations from the last fully connected layer or the last average pooling layer of the CNNs as high-dimensional galaxy morphological feature representations and use t-SNE to visualize these high-dimensional feature representations in two-dimensional scatter plots. To our knowledge, this is the first time such an approach has been applied to galaxy morphology. The outline of the paper is as follows. In Section \[sec:classification\], we introduce our three CNN architectures, dataset selection, image preprocessing, training protocol, classification results and activation extraction. In Section \[sec:visual\], we outline the t-SNE technique and present the visualization of the learned feature representations in detail. Our conclusions and suggestions for future work are presented in Section \[sec:conclusion\]. Classification Framework {#sec:classification} ======================== Deep learning models are composed of multiple nonlinear layers that learn data representations automatically for classification, detection and segmentation tasks. Among them, deep convolutional neural networks (CNNs) have become the dominant approach in image classification [@lecun2015deep].
Since 2012, when @krizhevsky2012imagenet used a CNN to win first place in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), CNNs have achieved a series of breakthroughs in image classification. CNNs have since evolved into many variants, such as AlexNet [@krizhevsky2012imagenet], VGG [@simonyan2014very], Inception [@szegedy2015going; @ioffe2015batch; @szegedy2016rethinking; @szegedy2017inception], ResNets [@he2016deep; @he2016identity], DenseNet [@huang2016densely] and so on. Full details are available in the literature cited above and in @goodfellow2016deep. In this section, we first describe dataset selection, data processing, the three CNN architectures and training details. Next, the classification results are summarized. Finally, we extract the representations learned by the networks as galaxy morphological features.

Dataset
-------

The galaxy images in this study are drawn from Galaxy Zoo-the Galaxy Challenge[^2], based on Galaxy Zoo 2 (GZ2). In order to select clean samples, we use the clean-sample selection rules of the GZ2 data release [@willett2013galaxy]. For example, to select spirals, the cuts are the combination of $f_{features/disk} \geq 0.430$, $f_{edge-on,no} \geq 0.715$ and $f_{spiral,yes} \geq 0.619$. In this way, we classify galaxies into 5 classes, i.e. completely round smooth, in-between smooth (between completely round and cigar-shaped), cigar-shaped smooth, edge-on and spiral, which are referred to as 0, 1, 2, 3 and 4, respectively. We loosened the smooth-galaxy threshold from 0.8 to 0.5; all other cuts are taken from @willett2013galaxy, where full details are available. Table \[tab:dataselection\] shows the clean-sample selection criteria for every class. The dataset is reduced to 28790 images after filtering. Each image is $424\times424\times3$ pixels in size. We split the images into a training set and a testing set at a ratio of 9:1; 25911 images are used to train the models and the remaining 2879 images to evaluate them.

  Class   Clean sample       Tasks   Selection                            $N_{sample}$
  ------- ------------------ ------- ------------------------------------ --------------
  0       Completely round   T01     $f_{smooth} \geq 0.469$              8434
          smooth             T07     $f_{completely ~round} \geq 0.50$
  1       In between         T01     $f_{smooth} \geq 0.469$              8069
          smooth             T07     $f_{in~ between} \geq 0.50$
  2       Cigar-shaped       T01     $f_{smooth} \geq 0.469$              578
          smooth             T07     $f_{cigar-shaped} \geq 0.50$
  3       Edge-on            T01     $f_{features/disk} \geq 0.430$       3903
                             T02     $f_{edge-on,yes} \geq 0.602$
  4       Spiral             T01     $f_{features/disk} \geq 0.430$       7806
                             T02     $f_{edge-on,no} \geq 0.715$
                             T04     $f_{spiral,yes} \geq 0.619$
  ------- ------------------ ------- ------------------------------------ --------------

Preprocessing
-------------

The images consist of large fields of view with the galaxy of interest in the center. As a first step, galaxy images in the training set are therefore center-cropped to a scale drawn from the range $ S=[170,240]$. Most of the relevant information is contained in the central region, and much of the noise, such as secondary objects, is eliminated; this reduces the image dimension and also acts as a form of data augmentation for the training set. The images are then resized to $ 80 \times 80 \times 3 $ pixels and a random crop to $ 64 \times 64 \times 3 $ pixels is performed. Next, exploiting the rotation invariance of galaxy images, the images are randomly rotated by $ 0^{\circ}, 90^{\circ}, 180^{\circ}$ or $270^{\circ} $ and randomly flipped horizontally. Brightness, contrast, saturation and hue adjustments are also applied. These steps serve as data augmentation to avoid overfitting. The last step is image whitening.
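A rough NumPy/scikit-image sketch of these augmentation steps is given below; the helper function, the colour-jitter parameters and the `images` variable are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from skimage.transform import resize

def augment(img, rng):
    """Toy version of the training-time preprocessing described above."""
    # Centre crop to a scale drawn from S = [170, 240]
    s = rng.integers(170, 241)
    h, w = img.shape[:2]
    top, left = (h - s) // 2, (w - s) // 2
    img = img[top:top + s, left:left + s]
    # Resize to 80x80, then take a random 64x64 crop
    img = resize(img, (80, 80), anti_aliasing=True)
    y, x = rng.integers(0, 17, size=2)          # 80 - 64 = 16
    img = img[y:y + 64, x:x + 64]
    # Random rotation by a multiple of 90 degrees and random horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Simple brightness/contrast jitter (a stand-in for the colour adjustments)
    img = np.clip(img * rng.uniform(0.8, 1.2) + rng.uniform(-0.05, 0.05), 0.0, 1.0)
    # Per-image whitening: zero mean, unit variance
    return (img - img.mean()) / (img.std() + 1e-8)

rng = np.random.default_rng(0)
# `images` is assumed to be an iterable of 424x424x3 float arrays scaled to [0, 1]
batch = np.stack([augment(img, rng) for img in images])
```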
This are the whole preprocessing pipeline in training. After those steps, images ($ 64 \times 64\times 3 $ pixels) will be used as input of networks in training. At testing time, preprocessing procedure only includes center cropping (test scale $ Q=\{ 180,200,220,240\} $), resizing to $ 80 \times 80 \times 3 $ pixels, center cropping again, and image whitening. Images ($ 64 \times 64 \times 3 $ pixels) will be used as input of networks in testing. Type CNN 1 CNN 2 CNN 3 Layer name ----------------- ------------------------ ------------------------ ------------------------ ------------- convolutional $6 \times 6,32$ $3 \times 3,64$ $6 \times 6,64$ conv 1 convolutional - $3 \times 3,64$ - - pooling $2 \times 2$, stride 2 $2 \times 2$, stride 2 $2 \times 2$, stride 2 max-pooling convolutional $5 \times 5,64$ $3 \times 3,128$ convolutional - $3 \times 3,128$ pooling $2 \times 2$, stride 2 $2 \times 2$, stride 2 convolutional $3 \times 3,128$ $3 \times 3,256$ convolutional $3 \times 3,128$ $3 \times 3,256$ convolutional - $3 \times 3,256$ pooling $2 \times 2$, stride 2 $2 \times 2$, stride 2 convolutional - $3 \times 3,512$ convolutional - $3 \times 3,512$ convolutional - $3 \times 3,512$ pooling - $2 \times 2$, stride 2 convolutional - $3 \times 3,512$ convolutional - $3 \times 3,512$ convolutional - $3 \times 3,512$ pooling - $2 \times 2$, stride 2 $4 \times 4$ avg-pooling fully-connected 2048 4096 fully-connected 2048 4096 fully-connected 5 5 5 softmax CNNs architectures ------------------ To extract feature representations from the galaxy images, we train three CNNs, as Table \[tab:networks architectures\] shows: 1. CNN 1: CNN 1 is a 7-layers CNN, including 4 convolutional layers and 3 fully connected layers. It is a slightly modified Dieleman model [@dieleman2015rotation]. 2. CNN 2: CNN 2 has 16 layers totally, 13 convolutional layers and 3 fully connected layers. It is a slightly modified VGG-16 [@simonyan2014very]. 3. CNN 3: CNN 3 is a modified ResNets [@he2016deep; @he2016identity], 26 layers totally, where we decrease the depth and widen the channel. It has 4 convolutional groups: conv2, conv3, conv4 and conv5, respectively. We add dropout after $ 3\times 3 ~convolution $ of every residual unit, to prevent coadaptation and overfitting. Downsampling is performed by the last layers in groups conv2, conv3 and conv4 with a stride of 2. Training -------- The activation function of all convolutional layers and fully connected layers (except output layer) is Rectified Linear Units (ReLUs) [@nair2010rectified]. The networks are trained to minimize cross entropy loss. We use a batch size of 128. The initial learning rate is set to 0.1, then decreased by a factor of 10 at 30k and 60k iterations. For CNN 1, we use GradientDescentOptimizer. We stop training after 72k iterations. Weights are initialized by sampling from zero-mean normal distributions (standard deviation 0.01 for convolutional layers and softmax layer, standard deviation 0.001 for fully connected layers). Biases are initialized to small positive values (0.1 for convolutional layers and softmax layer, 0.01 for fully connected layers). Dropout probability value is 0.5 [@srivastava2014dropout]. For CNN 2, we use GradientDescentOptimizer. We stop training after 42k iterations. All weights are initialized by Xavier initializer [@glorot2010understanding]. Biases are initialized to 0 for convolutional layers (0.1 for fully connected layers and softmax layer). The weight loss values of the two fully connected layers are 0.0005. 
The dropout probability is 0.5. For CNN 3, we use MomentumOptimizer with a Nesterov momentum of 0.9, a weight decay of 0.0001 and a dropout probability of 0.8. We adopt batch normalization (BN) [@ioffe2015batch] before activation and convolution, following @he2016identity. The weights are initialized as in @he2015delving. We stop training after 72k iterations. Our implementation is based on Python, Pandas, scikit-learn [@Pedregosa2012Scikit], scikit-image [@Van2014scikit] and TensorFlow [@abadi2016tensorflow].

Classification results
----------------------

Table \[tab:acc\] summarizes the test accuracy of the different models. Our results are based on the maximum values over 10 runs at each testing scale. All three of our models achieve excellent performance.

  Model   Overall Accuracy(%)
  ------- ---------------------
  CNN 1   94.6528
  CNN 2   93.6458
  CNN 3   95.2083

  : Test accuracy of the different models. Our results are based on the maximum values over 10 runs at each test scale.[]{data-label="tab:acc"}

Activations
-----------

A subset of 1000 images, with 200 images per class, is extracted from the training set; we call it training-1000. A random subset of 1000 images, again with 200 images per class, is also extracted from the testing set (the cigar-shaped class has only 58 test images, so an additional 142 images are taken from the training set); we call it testing-1000. For CNN 1, we use the activations of the last fully connected layer as 2,048-dimensional feature representations. For CNN 2, we use the activations of the last fully connected layer as 4,096-dimensional feature representations. For CNN 3, we use the activations of the last average pooling layer as 4,096-dimensional feature representations.

Visualizing the learned feature representations {#sec:visual}
===============================================

In this section, we briefly introduce the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique and then use it to visualize the high-dimensional galaxy morphological feature representations learned by the CNNs.

t-SNE
-----

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a popular dimensionality reduction algorithm presented in @maaten2008visualizing [@van2014accelerating]. It is a nonlinear dimensionality reduction method especially suitable for visualizing high-dimensional data in a space of two or three dimensions, in a so-called scatter plot. It is a broadly applicable technique in machine learning, owing to its ability to preserve the local structure of the data while also revealing global structure. A dataset $X$ consists of $N$ observations $X=\{ x_1,x_2,\cdots,x_N \}$, where each $x_i$ is a $D$-dimensional real vector. The goal of t-SNE is to compute a projection $Y=\{ y_1,y_2,\cdots,y_N \}$ in which the neighborhoods of the $D$-dimensional space are preserved; each $ y_i \in \mathbb{R}^d $ corresponds to an $ x_i \in \mathbb{R}^D$, and typically $ d=2$ and $D \gg d $. First, conditional probabilities $ p_{j|i} $ are computed that are proportional to the similarity of datapoints $ x_i $ and $ x_j $; $ p_{j|i} $ is high when $x_i$ is close to $x_j$ and is defined as $$\label{eq:1}
p_{j|i}=\frac{\exp(-||x_i-x_j||^2/2\sigma_i^2)}{\sum_{k\neq i}\exp(-||x_i-x_k||^2/2\sigma_i^2)},$$ where $ \sigma_i$ is the variance of the Gaussian that is centered on datapoint $ x_i $.
Then, the joint probabilities $p_{ij}$ in the high-dimensional space are defined as the symmetrized conditional probabilities $$\label{eq:2}
p_{ij}=\frac{p_{j|i}+p_{i|j}}{2N}.$$ Next, in the low-dimensional space, the joint probabilities $ q_{ij} $ are defined using a Student t-distribution with one degree of freedom (which is the same as a Cauchy distribution) as the heavy-tailed distribution $$\label{eq:3}
q_{ij}=\frac{(1+||y_i-y_j||^2)^{-1}}{\sum_{k\neq l}(1+||y_k-y_l||^2)^{-1}}.$$ t-SNE aims at minimizing the following cost function $ C $: the locations of the points $ y_i $ in the map are determined by minimizing the Kullback-Leibler divergence of the distribution $ Q$ from the distribution $ P $, that is $$\label{eq:4}
C=KL(P||Q)=\sum_{i}\sum_{j}p_{ij}\log\frac{p_{ij}}{q_{ij}}.$$ The gradient of the Kullback-Leibler divergence between $ P$ and the Student-t based joint probability distribution $ Q $ is given by $$\label{eq:5}
\frac{\delta C}{\delta y_i}=4\sum_{j}(p_{ij}-q_{ij})(y_i-y_j)(1+||y_i-y_j||^2)^{-1}.$$ For further details, we refer to @maaten2008visualizing [@van2014accelerating].

Visualizing the representations
-------------------------------

Figure \[fig:raw\] presents visualizations of the raw samples. We color all points by their classes, and each point is a galaxy image. Figure \[fig:train1000\] shows that the galaxy classes are poorly separated and quite tangled on the training-1000 subset. A similar result on the testing-1000 subset is shown in Figure \[fig:test1000\]. This happens because no supervised (label) information is used before training, and both figures indicate that our dataset is complex and challenging.

Figure \[fig:cnn1\] shows visualizations of the activations of the last fully connected layer of CNN 1. We color all points by their classes and mark misclassifications with triangle glyphs. The visual separation among galaxy classes, on both the training-1000 and the testing-1000 subsets, is improved significantly after training. As @lecun2015deep note, higher layers of representation amplify aspects of the input that are important for classification and discrimination. Figure \[fig:cnn1\_train\] and Figure \[fig:/cnn1\_test\] show that the last fully connected layer of CNN 1 has learned to transform the raw data into a good representation in which the classes are much better separated. In the visualizations, the images of each morphological class form clusters, i.e. each class of galaxy images is grouped, and images of the same morphological class are moved closer together. This can be explained by the fact that images of the same morphological class share a similar underlying structure, so they tend to be grouped in the scatter plots. From Figure \[fig:/cnn1\_test\], we can also see that the completely round and the in-between clusters are closer to each other than to other galaxy clusters. This is because the completely round and the in-between classes belong to one broad category, namely smooth galaxies.

Inspecting the outliers is also interesting. Consider the lime point placed at the bottom of the blue cluster in Figure \[fig:cnn1\_train\], which belongs to the spiral class and is predicted as a spiral as well, but has a structure similar to the edge-on class. When inspected, the outlier sample (the spiral GalaxyID 119573) indeed looks very similar to an edge-on galaxy. Several other outliers (the cigar-shaped GalaxyID 306515 and the in-between GalaxyID 116964) show the same phenomenon in Figure \[fig:cnn1\_train\].
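The kind of plot examined here can be produced with a few lines of Python. The following sketch extracts activations from a trained Keras/TensorFlow model, embeds them with scikit-learn's t-SNE and draws a scatter plot coloured by class; the layer name `fc2`, the `model`, `x_subset` and `y_subset` objects, and all parameter values are illustrative assumptions rather than the authors' actual code.

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.manifold import TSNE

CLASS_NAMES = ["completely round", "in-between", "cigar-shaped", "edge-on", "spiral"]

# `model` is assumed to be one of the trained CNNs, `x_subset` a preprocessed batch
# of shape (1000, 64, 64, 3) such as training-1000, and `y_subset` labels 0-4.
feature_extractor = tf.keras.Model(
    inputs=model.input,
    outputs=model.get_layer("fc2").output)      # "fc2": assumed name of the last hidden layer
features = feature_extractor.predict(x_subset, batch_size=128)   # e.g. (1000, 2048)

# Project the high-dimensional activations to 2-D with t-SNE.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

# Scatter plot coloured by class; misclassified points could be marked with triangles.
fig, ax = plt.subplots(figsize=(6, 6))
for c, name in enumerate(CLASS_NAMES):
    sel = y_subset == c
    ax.scatter(embedding[sel, 0], embedding[sel, 1], s=8, label=name)
ax.legend(loc="best", fontsize=8)
ax.set_xticks([])
ax.set_yticks([])
fig.savefig("tsne_training1000.png", dpi=200)
```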
Another lime outlier, near the red cluster, is labelled as a spiral (GalaxyID: 110109) but is predicted here to be completely round. When inspected, it really is a completely round galaxy, i.e. the image numbered 110109 is incorrectly labelled. Figure \[fig:/cnn1\_test\] also shows some outliers, such as cigar-shaped galaxies (GalaxyID: 981072, 997901) predicted as in-between and an edge-on galaxy (GalaxyID: 924830) predicted as in-between. After examination, we find these errors are so obvious that a human could recognize them as incorrect with the naked eye, without the aid of any machine.

Figure \[fig:cnn2\] shows visualizations of the activations of the last fully connected layer of CNN 2. Examining Figure \[fig:cnn2\_train\] and Figure \[fig:cnn2\_test\], we see that the separation between the galaxy classes is almost perfect and each class has a strip-like distribution. The two dotted ellipses in Figure \[fig:cnn2\_train\] show that 9 in-between galaxies are misclassified as completely round and 16 cigar-shaped galaxies are misclassified as in-between. This is understandable, since the completely round, the in-between and the cigar-shaped classes are all smooth galaxies and there is no sharp boundary separating them; one could say they are misclassified, or equally that they are wrongly labelled and classified correctly. The inset in Figure \[fig:cnn2\_test\] shows that many cigar-shaped galaxies are misclassified as edge-on.

Figure \[fig:cnn3\] shows visualizations of the activations of the last average pooling layer of CNN 3. Consider the red points (GalaxyID: 102977, 107539, 122674 and 106544) outlined in Figure \[fig:cnn3\_train\], which correspond to the completely round class and are predicted as completely round as well, though they are placed near the lime cluster corresponding to the spiral class. After inspection, three of them (GalaxyID: 102977, 107539 and 106544) are true completely round galaxies that nevertheless share a similar structure with spirals, and one (GalaxyID: 122674) is a spiral galaxy that is labelled incorrectly. Another four spiral galaxies (GalaxyID: 114957, 119150, 119573 and 109366) lie near the edge-on and cigar-shaped clusters. When inspected, they are all thin spiral galaxies and look very similar to edge-on and cigar-shaped galaxies. The outliers in Figure \[fig:cnn3\_test\] are similar to those in Figure \[fig:/cnn1\_test\].

Two galaxy images, with GalaxyID 109266 and 914775, are particularly interesting. 109266 is labelled as a spiral in the training-1000 subset but is mistaken for an in-between galaxy in both Figure \[fig:cnn1\_train\] and Figure \[fig:cnn3\_train\]. On examination, we find it is a very faint spiral galaxy, and it is very hard to tell whether it is a spiral or something else. 914775 is labelled as cigar-shaped in the testing-1000 subset but is misclassified as in-between by CNN 1, as shown in Figure \[fig:/cnn1\_test\], and as completely round by CNN 3, as shown in Figure \[fig:cnn3\_test\]. On inspection, the galaxy is too small at the center of the image, and it is really hard to decide whether it is cigar-shaped, completely round or in-between, all of which belong to the smooth class.

The separation among the galaxy classes on the training subset is better than on the testing subset. This is reasonable, because the network reaches a higher accuracy on the training subset and learns a better representation there. Each small cluster, such as the completely round and the spiral ones, is generally well grouped and almost perfect.
Broad clusters, such as the smooth galaxies comprising the completely round, the in-between and the cigar-shaped classes, also tend to be grouped closer together, as shown in Figure \[fig:cnn2\_test\], Figure \[fig:cnn3\_train\] and Figure \[fig:cnn3\_test\]. The visualizations also reveal a fairly surprising phenomenon: the cigar-shaped and the edge-on clusters are intertwined and overlapping, as shown in Figure \[fig:cnn1\_train\], Figure \[fig:/cnn1\_test\], Figure \[fig:cnn2\_train\], Figure \[fig:cnn2\_test\], Figure \[fig:cnn3\_train\] and Figure \[fig:cnn3\_test\]. This implies that the geometry of cigar-shaped and edge-on galaxies is similar. Common sense suggests that the completely round, the in-between and the cigar-shaped galaxies, all being smooth, should be the most similar; the visualizations, however, indicate that the cigar-shaped and the edge-on galaxies lie much closer together. This finding may help to further optimize the design of the GZ2 decision tree.

Conclusions {#sec:conclusion}
===========

In this paper we train three CNNs to classify galaxies and to automatically learn high-level abstract feature representations, use t-SNE to visualize the high-dimensional feature representations in scatter plots, and explore the underlying structure, and in some cases the meaning, of the data. Our experiments show that such visualizations of the learned feature representations extracted from CNNs help in understanding the global and local structure of the galaxy data, its outliers and its clusters. For example, galaxy images of the same morphological class are clustered. A broad class, such as the smooth galaxies comprising the completely round, in-between and cigar-shaped classes, tends to group together and move closer. It is an interesting phenomenon that the cigar-shaped and edge-on classes are intertwined. We also find a completely round smooth galaxy that is incorrectly labelled as a spiral, which demonstrates that the visualizations can help to find outliers in the dataset. It is hoped that this study will contribute to exploring and understanding the galaxy image data itself through visualization, provide valuable feedback to galaxy classification systems, and facilitate the study of galaxy morphology. Our code is available for download at <https://github.com/Adaydl/GalaxyVisualization>.

In the future we plan to visualize galaxy datasets with more fine-grained classes, e.g. 10 classes, which would be more challenging, and to focus on stronger deep learning algorithms to improve galaxy morphology classification performance and obtain higher-quality galaxy feature representations.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank the Galaxy Challenge, Galaxy Zoo, SDSS and the Kaggle platform for kindly sharing data. We acknowledge the financial support from the National Earth System Science Data Sharing Infrastructure (<http://spacescience.geodata.cn>). We are also supported by CAS e-Science Funds (Grant XXH13503-04).

\[lastpage\]

[^1]: E-mail: [email protected]

[^2]: https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge
--- abstract: 'Understanding shadows from a single image has, in previous studies, naturally split into two types of task: shadow detection and shadow removal. In this paper, we present a multi-task perspective, not embraced by any existing work, to jointly learn both detection and removal in an end-to-end fashion, aiming to enjoy the mutually improved benefits of each task. Our framework is based on a novel STacked Conditional Generative Adversarial Network (ST-CGAN), which is composed of two stacked CGANs, each with a generator and a discriminator. Specifically, a shadow image is fed into the first generator, which produces a shadow detection mask. That shadow image, concatenated with its predicted mask, goes through the second generator in order to recover its shadow-free image. In addition, the two corresponding discriminators are likely to model higher-level relationships and global scene characteristics for the detected shadow region and for the reconstruction obtained by removing shadows, respectively. More importantly, for multi-task learning, our stacked design provides a novel view that is notably different from the commonly used multi-branch version. To fully evaluate the performance of our proposed framework, we construct the first large-scale benchmark with 1870 image triplets (shadow image, shadow mask image, and shadow-free image) under 135 scenes. Extensive experimental results consistently show the advantages of ST-CGAN over several representative state-of-the-art methods on two large-scale publicly available datasets and on our newly released one.'
author:
- |
    Jifeng Wang [^1] , Xiang Li$^*$, Le Hui, Jian Yang\
    Nanjing University of Science and Technology\
    [[email protected], {xiang.li.implus, le.hui, csjyang}@njust.edu.cn]{}
bibliography:
- 'egbib.bib'
title: Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal
---

Introduction
============

Both shadow detection and shadow removal reveal their respective advantages for scene understanding. The accurate recognition of shadow areas (i.e., shadow detection) provides adequate clues about the light sources [@lalonde2009estimating], illumination conditions [@panagopoulos2009robust; @panagopoulos2011illumination; @panagopoulos2013simultaneous], object shapes [@okabe2009attached] and geometry information [@junejo2008estimating; @karsch2011rendering]. Meanwhile, removing the presence of shadows (i.e., shadow removal) in images is of great interest for downstream computer vision tasks, such as efficient object detection and tracking [@cucchiara2001improving; @mikic2000moving]. To this end, existing research basically follows one of the following pipelines for understanding shadows:

**Detection only.** In the history of shadow detection, a series of data-driven statistical learning approaches [@huang2011characterizes; @lalonde2010detecting; @vicente2016noisy; @zhu2010learning; @khan2014automatic; @vicente2015leave] have been proposed. Their main objective is to find the shadow regions, in the form of an image mask that separates shadow from non-shadow areas.
**Removal only.** A list of approaches [@finlayson2006removal; @finlayson2009entropy; @zhang2015shadow; @gryka2015learning; @tappen2003recovering; @arbel2011shadow; @wu2007natural; @liu2008texture; @qudeshadownet] simply skips the potential information gained from the discovery of shadow regions and directly produces the illumination attenuation effects on the whole image, which is also denoted as a shadow matte [@qudeshadownet], to recover the image with shadows removed naturally. **Two stages for removal.** Many of the shadow removal methods [@guo2011single; @guo2013paired; @khan2016automatic; @gong2014interactive; @vicente2017leave] generally include two *seperated* steps: shadow localization and shadow-free reconstruction by exploiting the intermediate results in the awareness of shadow regions. It is worth noting that the two targets: shadow mask in detection and shadow-free image in shadow removal, share a fundamental characteristic essentially. As shown in Figure \[fig\_tasks\_cropped\], the shadow mask is posed as a two-binary map that segments the original image into two types of region whereas the shadow removal mainly focuses on one type of that and needs to discover the semantic relationship between the two areas, which indicates the strong correlations and possible mutual benefits between these two tasks. Besides, most of the previous methods, including shadow detection [@huang2011characterizes; @lalonde2010detecting; @vicente2016noisy; @zhu2010learning; @khan2014automatic; @vicente2015leave] and removal [@gong2014interactive; @wu2007natural; @arbel2011shadow] are heavily based on local region classifications or low-level feature representations, failing to reason about the global scene semantic structure and illumination conditions. Consequently, a most recent study [@nguyen2017shadow] in shadow detection introduced a Conditional Generative Adversarial Network (CGAN) [@mirza2014conditional] which is proved to be effective for the global consistency. For shadow removal, Qu et al. [@qudeshadownet] also proposed a multi-context architecture with an end-to-end manner, which maintained a global view of feature extraction. Since no existing approaches have explored the joint learning aspect of these two tasks, in this work, we propose a STacked Conditional Generative Adversarial Network (ST-CGAN) framework and aim to tackle shadow detection and shadow removal problems simultaneously in an end-to-end fashion. Besides making full use of the potential mutual promotions between the two tasks, the global perceptions are well preserved through the stacked adversarial components. Further, our design of stacked modules is not only to achieve a multi-task purpose, but also inspired from the connectivity pattern of DenseNet [@huang2016densely], where outputs of all preceding tasks are used as inputs for all subsequent tasks. Specifically, we construct ST-CGAN by stacking two generators along with two discriminators. In Figure \[fig\_pipeline\_cropped\], each generator takes every prior target of tasks (including the input) and stacks them as its input. Similarly, the discriminator attempts to distinguish the concatenation of all the previous tasks’ targets from the real corresponding ground-truth pairs or triplets. Importantly, the design of the proposed stacked components offers a novel perspective for multi-task learning in the literature. 
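As a rough illustration of this stacked input scheme, the following PyTorch sketch shows the data flow: each generator consumes the input together with all preceding targets, and each discriminator sees the concatenation of all targets produced so far, conditioned on the input. The one-layer modules below are placeholders chosen only to make the channel bookkeeping concrete; the actual generators and discriminators are described later in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators and two discriminators.
G1 = nn.Conv2d(3, 1, 3, padding=1)   # shadow image -> shadow mask
G2 = nn.Conv2d(4, 3, 3, padding=1)   # image + mask -> shadow-free image
D1 = nn.Conv2d(4, 1, 3, padding=1)   # (image, mask) -> real/fake score map
D2 = nn.Conv2d(7, 1, 3, padding=1)   # (image, mask, shadow-free) -> real/fake score map

x = torch.randn(8, 3, 256, 256)                            # batch of shadow images
mask_pred = torch.sigmoid(G1(x))                            # task 1: detection mask
free_pred = G2(torch.cat([x, mask_pred], dim=1))            # task 2: shadow-free image

d1_fake = D1(torch.cat([x, mask_pred], dim=1))              # 4-channel discriminator input
d2_fake = D2(torch.cat([x, mask_pred, free_pred], dim=1))   # 7-channel discriminator input
```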
Different from the commonly used multi-branch paradigm (e.g., Mask R-CNN [@he2017mask], in which each individual task is assigned with a branch), we stack all the tasks that can not only focus on one task once a time in different stages, but also share mutual improvements through forward/backward information flows. Instead, the multi-branch version aims to learn a shared embedding across tasks by simply aggregating the supervisions from each individual task. To validate the effectiveness of the proposed framework, we further construct a new large-scale Dataset with Image Shadow Triplets (ISTD) consisting of shadow, shadow mask and shadow-free image to match the demand of multi-task learning. It contains 1870 image triplets under 135 distinct scenarios, in which 1330 is assigned for training whilst 540 is for testing. Extensive experiments on two large-scale publicly available benchmarks and our newly released dataset show that ST-CGAN performs favorably on both detection and removal aspects, comparing to several state-of-the-art methods. Further, we empirically demonstrate the advantages of our stacked joint formula over the widely used multi-branch version for shadow understanding. To conclude, the main contributions of this work are listed as follows: - It is the first end-to-end framework which jointly learns shadow detection and shadow removal with superior performances on various datasets and on both the two tasks. - A novel STacked Conditional Generative Adversarial Network (ST-CGAN) with a unique stacked joint learning paradigm is proposed to exploit the advantages of multi-task training for shadow understanding. - The first large-scale shadow dataset which contains *image triplets* of shadow, shadow mask and shadow-free image is publicly released. Related Work ============ **Shadow Detection.** To improve the robustness of shadow detection on consumer photographs and web quality images, a series of data-driven approaches [@huang2011characterizes; @lalonde2010detecting; @zhu2010learning] have been taken and been proved to be effective. Recently, Khan et al. [@khan2014automatic] first introduced deep Convolutional Neural Networks (CNNs) [@simonyan2014very] to automatically learn features for shadow regions/boundaries that significantly outperforms the previous state-of-the-art. A multikernel model for shadow region classification was proposed by Vicente et al. [@vicente2015leave] and it is efficiently optimized based on least-squares SVM leave-one-out estimates. More recent work of Vicente et al. [@vicente2016noisy] used a stacked CNN with separated steps, including first generating the image level shadow-prior and training a patch-based CNN which produces shadow masks for local patches. Nguyen et al. [@nguyen2017shadow] presented the first application of adversarial training for shadow detection and developed a novel conditional GAN architecture with a tunable sensitivity parameter. **Shadow Removal.** Early works are motivated by physical models of illumination and color. For instance, Finlayson et al. [@finlayson2009entropy; @finlayson2006removal] provide the illumination invariant solutions that work well only on high quality images. Many existing approaches for shadow removal include two steps in general. 
For the removal part of these two-stage solutions, the shadow is erased either in the gradient domain [@finlayson2002removing; @mohan2007editing; @barron2015shape] or the image intensity domain [@arbel2011shadow; @guo2011single; @guo2013paired; @gong2014interactive; @khan2016automatic]. On the contrary, a few works [@tappen2003recovering; @yang2012shadow; @qu2015pixel] recover the shadow-free image by intrinsic image decomposition and preclude the need of shadow prediction in an end-to-end manner. However, these methods suffer from altering the colors of the non-shadow regions. Qu et al. [@qudeshadownet] further propose a multi-context architecture which consists of three levels (global localization, appearance modeling and semantic modeling) of embedding networks, to explore shadow removal in an end-to-end and fully automatic framework. **CGAN and Stacked GAN.** CGANs have achieved impressive results in various image-to-image translation problems, such as image superresolution [@ledig2016photo], image inpainting [@pathak2016context], style transfer [@li2016precomputed] and domain adaptation/transfer [@isola2016image; @zhu2017unpaired; @liu2017unsupervised]. The key of CGANs is the introduction of the *adversarial loss* with an informative conditioning variable, that forces the generated images to be with high quality and indistinguishable from real images. Besides, recent researches have proposed some variants of GAN, which mainly explores the stacked scheme of its usage. Zhang et al. [@zhang2016stackgan] first put forward the StackGAN to progressively produce photo-realistic image synthesis with considerably high resolution. Huang et al. [@huang2016stacked] design a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations for the purpose of generating more qualified images. Therefore, our proposed stacked form is distinct from all the above relevant versions in essence. **Multi-task Learning.** The learning hypothesis is biased to prefer a shared embedding learnt across multiple tasks. The widely adopted architecture of multi-task formulation is a shared component with multi-branch outputs, each for an individual task. For example, in Mask R-CNN [@he2017mask] and MultiNet [@teichmann2016multinet], 3 parallel branches for object classification, bounding-box regression and semantic segmentation respectively are utilized. Misra et al. [@Misra_2016_CVPR] propose “cross-stitch” unit to learn shared representations from multiple supervisory tasks. In Multi-task Network Cascades[@Dai_2016_CVPR], all tasks share convolutional features, whereas later task also depends the output of a preceding one. A new Dataset with Image Shadow Triplets – ISTD =============================================== Existing publicly available datasets are all limited in the view of multi-task settings. Among them, SBU [@vicente2016large] and UCF [@zhu2010learning] are prepared for shadow detection only, whilst SRD [@qudeshadownet], UIUC [@guo2013paired] and LRSS [@gryka2015learning] are constructed for the purpose of shadow removal accordingly. To facilitate the evaluation of shadow understanding methods, we have constructed a large-scale Dataset with Image Shadow Triplets called *ISTD*[^2]. It contains 1870 triplets of shadow, shadow mask and shadow-free image under 135 different scenarios. To the best of our knowledge, ISTD is the first large-scale benchmark for simultaneous evaluations of shadow detection and shadow removal. 
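For training, such triplets can be consumed with a standard data loader. Below is a minimal PyTorch sketch; the folder names and file layout are our own assumptions for illustration, not the released dataset's actual organization.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class ShadowTripletDataset(Dataset):
    """Loads (shadow image, shadow mask, shadow-free image) triplets.

    Assumes three parallel folders with identically named files, e.g.
    root/shadow/0001.png, root/mask/0001.png, root/free/0001.png.
    """
    def __init__(self, root, size=256):
        self.root, self.size = root, size
        self.names = sorted(os.listdir(os.path.join(root, "shadow")))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        shadow = Image.open(os.path.join(self.root, "shadow", name)).convert("RGB")
        mask = Image.open(os.path.join(self.root, "mask", name)).convert("L")
        free = Image.open(os.path.join(self.root, "free", name)).convert("RGB")
        return tuple(TF.to_tensor(TF.resize(img, [self.size, self.size]))
                     for img in (shadow, mask, free))
```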
Detailed comparisons with previous popular datasets are listed in Table \[tab\_datasets\]. In addition, our proposed dataset also contains a variety of properties in the following aspects: - **Illumination:** [Minimized illumination difference]{} between a shadow image and the shadow-free one is obtained. When constructing the dataset, we pose a camera with a fixed exposure parameter to capture the shadow image, where the shadow is cast by an object. Then the occluder is removed in order to get the corresponding shadow-free image. More evidences are given in the 1st and 3rd row of Figure \[fig\_dataset\_cropped\]. - **Shapes:** [Various shapes]{} of shadows are built by different objects, such as umbrellas, boards, persons, twigs and so on. See the 2nd row of Figure \[fig\_dataset\_cropped\]. - **Scenes:** 135 different types of ground materials, e.g., 6th-8th column in Figure \[fig\_dataset\_cropped\], are utilized to cover as many complex backgrounds and different reflectances as possible. Proposed Method =============== We propose STacked Conditional Generative Adversarial Networks (ST-CGANs), a novel stacked architecture that enables the joint learning for shadow detection and shadow removal, as shown in Figure \[fig\_pipeline\_cropped\]. In this section, we first describe the formulations with loss functions, training procedure, and then present the network details of ST-CGAN, followed by a subsequent discussion. STacked Conditional Generative Adversarial Networks --------------------------------------------------- Generative Adversarial Networks (GANs) [@goodfellow2014generative] consists of two players: a generator $G$ and a discriminator $D$. These two players are competing in a zero-sum game, in which the generator G aims to produce a realistic image given an input $\mathbf{z}$, that is sampled from a certain noise distribution. The discriminator D is forced to classify if a given image is generated by $G$ or it is indeed a real one from the dataset. Hence, the adversarial competition progressively facilitates each other, whilst making $G$’s generation hard for $D$ to differentiate from the real data. Conditional Generative Adversarial Networks (CGANs) [@mirza2014conditional] extends GANs by introducing an additional observed information, named conditioning variable, to both the generator $G$ and discriminator $D$. Our ST-CGAN consists of two Conditional GANs in which the second one is stacked upon the first. For the first CGAN of ST-CGAN in Figure \[fig\_pipeline\_cropped\], both the generator $G_1$ and discriminator $D_1$ are conditioned on the input RGB shadow image $\mathbf{x}$. $G_1$ is trained to output the corresponding shadow mask $G_1(\mathbf{z}, \mathbf{x})$, where $\mathbf{z}$ is the random sampled noise vector. We denote the ground truth of shadow mask for $\mathbf{x}$ as $\mathbf{y}$, to which $G_1(\mathbf{z}, \mathbf{x})$ is supposed to be close. As a result, $G_1$ needs to model the distribution $p_{data}(\mathbf{x}, \mathbf{y})$ of the dataset. 
The objective function for the first CGAN is: $$\begin{aligned}
\mathcal{L}_{CGAN_1}(G_1, D_1)= \mathbf{E}_{\mathbf{x}, \mathbf{y} \sim p_{data}(\mathbf{x}, \mathbf{y})}[\log {D_1(\mathbf{x}, \mathbf{y})}] + \nonumber\\
\mathbf{E}_{\mathbf{x} \sim p_{data}(\mathbf{x}), \mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})}[\log ({1 - D_1(\mathbf{x}, G_1(\mathbf{z}, \mathbf{x}))})].
\label{eqa_CGAN1}\end{aligned}$$ We further eliminate the random variable $\mathbf{z}$ to obtain a deterministic generator $G_1$, so that Equation (\[eqa\_CGAN1\]) simplifies to: $$\begin{aligned}
\mathcal{L}_{CGAN_1}(G_1, D_1)= \mathbf{E}_{\mathbf{x}, \mathbf{y} \sim p_{data}(\mathbf{x}, \mathbf{y})}[\log {D_1(\mathbf{x}, \mathbf{y})}] + \nonumber\\
\mathbf{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[\log ({1 - D_1(\mathbf{x}, G_1(\mathbf{x}))})].
\label{eqa_CGAN1_simple}\end{aligned}$$ Besides the adversarial loss, the classical data loss is adopted, which encourages a direct and accurate regression of the target: $$\mathcal{L}_{data_1}(G_1)= \mathbf{E}_{\mathbf{x}, \mathbf{y} \sim p_{data}(\mathbf{x}, \mathbf{y})} ||\mathbf{y} - G_1(\mathbf{x})||.
\label{eqa_data_1}$$ Further, for the second CGAN of Figure \[fig\_pipeline\_cropped\], applying the same formulations as above gives: $$\mathcal{L}_{data_2}(G_2 | G_1)= \mathbf{E}_{\mathbf{x}, \mathbf{r} \sim p_{data}(\mathbf{x}, \mathbf{r})} ||\mathbf{r} - G_2(\mathbf{x}, G_1(\mathbf{x}))||,$$ $$\begin{aligned}
\mathcal{L}_{CGAN_2}(G_2, D_2 | G_1)= \mathbf{E}_{\mathbf{x}, \mathbf{y}, \mathbf{r} \sim p_{data}(\mathbf{x}, \mathbf{y}, \mathbf{r})}[\log {D_2(\mathbf{x}, \mathbf{y}, \mathbf{r})}] + \nonumber\\
\mathbf{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[\log ({1 - D_2(\mathbf{x}, G_1(\mathbf{x}), G_2(\mathbf{x}, G_1(\mathbf{x})))})],
\label{eqa_CGAN2_simple}\end{aligned}$$ where $\mathbf{r}$ denotes the shadow-free image corresponding to $\mathbf{x}$, $G_2$ takes a combination of $\mathbf{x}$ and $G_1(\mathbf{x})$ as input, and $D_2$ differentiates the concatenation of the outputs from $G_1$ and $G_2$, conditioned on $\mathbf{x}$, from the real pairs. We can now assemble the entire objective for the joint learning task, which results in a mini-max problem whose optimization aims to find a saddle point: $$\begin{aligned}
\min\limits_{G_1, G_2} \max\limits_{D_1, D_2} \mathcal{L}_{data_1}(G_1) + \lambda_1 \mathcal{L}_{data_2}(G_2 | G_1) + \nonumber\\
\lambda_2 \mathcal{L}_{CGAN_1}(G_1, D_1) + \lambda_3 \mathcal{L}_{CGAN_2}(G_2, D_2 | G_1).\end{aligned}$$ It can be regarded as a two-player zero-sum game. The first player is a team consisting of the two generators ($G_1,G_2$); the second player is a team containing the two discriminators ($D_1,D_2$). In order to defeat the second player, the members of the first team are encouraged to produce outputs that are close to their corresponding ground truths.

Network Architecture and Training Details
-----------------------------------------

**Generator.** The generator is inspired by the U-Net architecture [@ronneberger2015u], originally designed for biomedical image segmentation. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The detailed structure of $G_1/G_2$, similar to [@isola2016image], is listed in Table \[tab\_network\_G\].
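For concreteness, here is a heavily down-scaled PyTorch sketch of such an encoder-decoder with skip connections. It is a schematic stand-in, not the structure actually listed in Table \[tab\_network\_G\]; layer counts, channel widths and the output activation are arbitrary choices made for illustration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Schematic U-Net-style generator: contracting path, expanding path, skips."""
    def __init__(self, in_ch=3, out_ch=1, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1),
                                  nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))   # skip connection
        out = self.dec1(torch.cat([d2, e1], dim=1))  # skip connection
        return out   # final activation (e.g. sigmoid for masks, tanh for images) omitted

G1 = TinyUNet(in_ch=3, out_ch=1)     # shadow image -> mask
G2 = TinyUNet(in_ch=4, out_ch=3)     # image + mask -> shadow-free image
print(G1(torch.randn(2, 3, 256, 256)).shape)   # torch.Size([2, 1, 256, 256])
```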
**Discriminator.** For $D_1$, it receives a pair of images as inputs, composed of an original RGB scene image and a shadow mask image that generates 4-channel feature-maps as inputs. The dimensionality of channels increases to 7 for $D_2$ as it accepts an additional shadow-free image. Table \[tab\_network\_D\] gives more details of these two discriminators. **Training/Implementation settings.** Our code is based on pytorch [@ketkar2017introduction]. We train ST-CGAN with the Adam solver [@kingma2014adam] and an alternating gradient update scheme is applied. Specifically, we first adopt a gradient ascent step to update $D_1, D_2$ with $G_1,G_2$ fixed. We then apply a gradient descent step to update $G_1,G_2$ with $D_1, D_2$ fixed. We initialize all the weights of ST-CGAN by sampling from a zero-mean normal distribution with standard deviation 0.2. During training, augmentations are adopted by cropping (image size 286 $\to$ 256) and flipping (horizontally) operations. A practical setting for $\lambda$, where $\lambda_1 = 5, \lambda_2 = 0.1, \lambda_3 = 0.1$, is used. The Binary Cross Entropy (BCE) loss is assigned for the objective of image mask regression and L1 loss is utilized for the shadow-free image reconstruction respectively. Discussion ---------- **The stacked term.** The commonly used form of multi-task learning is the multi-branch version. It aims to learn a shared representation, which is further utilized for each task in parallel. Figure \[fig\_flow\_cropped\] implies that our stacked design differs quite a lot from it. We conduct the multi-task learning in such a way that each task can focus on its individual feature embeddings, instead of a shared embedding across tasks, whilst they still enhance each other through the stacked connections, in a form of a forward/backward information flow. The following experiments also confirm the effectiveness of our architecture on the two tasks, compared with the multi-branch one, which can be found in Table \[tab\_stack\_vs\_parallel\]. **The [adversarial]{} term.** Moreover, Conditional GANs (CGANs) are able to effectively enforce higher order consistencies, to learn a joint distribution of image pairs or triplets. This confers an additional advantage to our method, as we implement our basic component to be CGAN and perform a stacked input into the adversarial networks, when compared with nearly most of previous approaches. Experiments =========== To comprehensively evaluate the performance of our proposed method, we perform extensive experiments on a variety of datasets and evaluate ST-CGAN in both detection and removal measures, respectively. Datasets -------- We mainly utilize two large-scale publicly available datasets[^3] including SBU [@vicente2016large] and UCF [@zhu2010learning], along with our newly collected dataset ISTD. **SBU [@vicente2016large]** has 4727 pairs of shadow and shadow mask image. Among them, 4089 pairs are for training and the rest is for testing. **UCF [@zhu2010learning]** has 245 shadow and shadow mask pairs in total, which are all used for testing in the following experiments. **ISTD** is our new released dataset consisting of 1870 triplets, which is suitable for multi-task training. It is randomly divided into 1330 for training and 540 for testing. Compared Methods and Metrics ---------------------------- **For detection part,** we compare ST-CGAN with the state-of-the-art StackedCNN [@vicente2016large], cGAN [@nguyen2017shadow] and scGAN [@nguyen2017shadow]. 
To evaluate the shadow detection performance quantitatively, we follow the commonly used terms [@nguyen2017shadow] to compare the provided ground-truth masks and the predicted ones with the main evaluation metric, which is called Balance Error Rate (BER): $$\mathrm{BER} = 1 - \frac{1}{2}(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}),$$ along with separated per pixel error rates per class (shadow and non-shadow). **For removal part,** we use the publicly available source codes [@guo2013paired; @yang2012shadow; @gong2014interactive] as our baselines. In order to perform a quantitative comparison, we follow [@guo2013paired; @qudeshadownet] and use the root mean square error (RMSE) in LAB color space between the ground truth shadow-free image and the recovered image as measurement, and then evaluate the results on the whole image as well as shadow and non-shadow regions separately. Detection Evaluation -------------------- For detection, we utilize the cross-dataset shadow detection schedule, similar in [@nguyen2017shadow], to evaluate our method. We first train our proposed ST-CGAN on the ISTD training set. The evaluations are thus conducted on three datasets with three state-of-the-art approaches in Table \[tab\_detection\_1\]. As can be seen, ST-CGAN outperforms StackedCNN and cGAN by a large margin. In terms of BER, we obtain a significant 14.4% error reduction on SBU and 18.1% on ISTD respectively, compared to scGAN. Next, we switch the training set to SBU’s training data. Considering our framework requires image triplets that SBU cannot offer, we make an additional pre-processing step. In order to get the corresponding shadow-free image, we use the shadow removal code [@guo2013paired] to generate them as coarse labels. We also test these trained models on the three datasets. Despite the inaccurate shadow-free ground-truths, our proposed framework still significantly improves the overall performances. Specifically, on the SBU test set, ST-CGAN achieves an obvious improvement with 10.5% error reduction of BER over the previous best record from scGAN. In Figure \[fig\_select\_cropped\], we demonstrate the comparisons of the detection results qualitatively. As shown in Figure \[fig\_select\_cropped\] (a) and \[fig\_select\_cropped\] (b), ST-CGAN is not easily fooled by the lower brightness area of the scene, comparing to cGAN and scGAN. Our method is also precise in detecting shadows cast on bright areas such as the line mark in Figure \[fig\_select\_cropped\] (c) and \[fig\_select\_cropped\] (d). The proposed ST-CGAN is able to detect more fine-grained shadow details (e.g., shadow of leaves) than other methods, as shown in Figure \[fig\_select\_cropped\] (e) and \[fig\_select\_cropped\] (f). Removal Evaluation ------------------ For removal, we compare our proposed ST-CGAN with the three state-of-the-art methods on ISTD dataset, as shown in Table \[tab\_removal\]. The RMSE values are reported. We evaluate the performance of different methods on the shadow regions, non-shadow regions, and the whole image. The proposed ST-CGAN achieves the best performance among all the compared methods by a large margin. Notably, the error of non-shadow region is very close to the original one, which indicates its strong ability to distinguish the non-shadow part of an image. The advantage of removal also partially comes from the joint learning scheme, where the well-trained detection block provides more clear clues of shadow and shadow-free areas. We also demonstrate the comparisons of the removal results. 
As shown in Figure \[fig\_select\_cropped\], although Yang [@yang2012shadow] can recover shadow-free image, it alters the colors of both shadow and nonshadow regions. Guo [@guo2011single] and Gong [@gong2014interactive] fail to detect shadow accurately, thus both of their predictions are incomplete especially in shadow regions. Moreover, due to the difficulty of determining the environmental illuminations and global consistency, all the compared baseline models produce unsatisfactory results on the semantic regions. Component Analysis of ST-CGAN ----------------------------- To illustrate the effects of different components of ST-CGAN, we make a series of ablation experiments by progressively removing different parts of it. According to both the removal and the detection performances in Table \[tab\_component\], we find that each individual component is necessary and indispensable for the final excellent predictions. Moreover, the last two columns of Table \[tab\_component\] also demonstrate that without the stacked joint learning, a single module consisting of one generator and one discriminator performs worse consistently. It further implies the effectiveness of our multi-task architecture on both shadow detection and shadow removal. Stacked Joint *vs.* Multi-branch Learning ----------------------------------------- We further modify our body architecture into a multi-branch version, where each branch is designed for one task respectively. Therefore, the framework aims to learn a shared embedding which is supervised by two tasks, as shown in the bottom of Figure \[fig\_parallel\_cropped\]. For a clear explanation, the illustration of comparisons between ours and the multi-branch one is also given. With all other training settings fixed, we fairly compare our proposed ST-CGAN with the multi-branch version quantitatively on the measurements of both detection and removal on ISTD dataset. Table \[tab\_stack\_vs\_parallel\] reports that our stacked joint learning paradigm consistently outperforms the multi-branch version in every single aspect of the metrics. Conclusion ========== In this paper, we have proposed STacked Conditional Generative Adversarial Network (ST-CGAN) to jointly learn shadow detection and shadow removal. Our framework has at least four unique advantages as follows: 1) it is the first end-to-end approach that tackles shadow detection and shadow removal simultaneously; 2) we design a novel stacked mode, which densely connects all the tasks in the purpose of multi-task learning, that proves its effectiveness and suggests the future extension on other types of multiple tasks; 3) the stacked adversarial components are able to preserve the global scene characteristics hierarchically, thus it leads to a fine-grained and natural recovery of shadow-free images; 4) ST-CGAN consistently improves the overall performances on both the detection and removal of shadows. Moreover, as an additional contribution, we publicly release the first large-scale dataset which contains shadow, shadow mask and shadow-free image triplets. [^1]: [co-first author]{} [^2]: ISTD dataset is available in https://drive.google.com/file/d/1I0qw-65KBA6np8vIZzO6oeiOvcDBttAY/view?usp=sharing [^3]: Note that we do not include the large-scale **SRD** dataset in this work because it is currently unavailable for the authors’ [@qudeshadownet] personal reasons.
--- title: 'Remarks on the chemical composition of highest-energy cosmic rays' ---

Introduction
============

The identity of the highest-energy cosmic rays remains an open question. Concluding that either protons or iron nuclei dominate the cosmic-ray flux leads to problems [@BS_rev]. Seeking to determine the nuclear identities of ultrahigh-energy cosmic-ray (UHECR) particles, the development of extensive air showers (EAS) of secondary particles in the atmosphere has been examined extensively. The Auger collaboration [@Auger] has determined both the shower maximum $\langle X_{max}\rangle $ (the penetration depth in the atmosphere at which the shower reaches its maximum number of secondary particles) and the complementary observable $ \sigma(X_{max}) $ (the root mean square fluctuation of $ X_{max}$ from event to event). Their results seem to indicate a transition, at primary energies of a few times $10^{18} $ eV, from a flux dominated by protons to one increasingly dominated at higher energies by iron nuclei. The HiRes collaboration [@HiRes] has analyzed event-by-event fluctuations of their data in terms of truncated fluctuation widths $ \sigma_{T} $ (the $ X_{max} $ distribution was truncated at $ 2\sigma (X_{max})$) and reaches a different conclusion. We present here arguments that the energy dependence of $ \left\langle X_{max}\right\rangle $ and $ \sigma(X_{max}) $ observed by the Auger experiment does not originate from changes in the nuclear composition of primary cosmic rays (cf., also, [@WW]) and that the highest-energy cosmic rays seem to be dominated by protons.

Inconsistency in the iron abundance
===================================

![ The energy dependence of the relative abundance of iron in CR as extracted from $ \langle X_{max}\rangle $ and $ \sigma(X_{max}) $ given by the Auger experiment [@Auger] (within the QGSJETII [@QGSJETII] and EPOSv1.99 [@EPOS] models). []{data-label="fig1"}](icrc0161_fig01.eps){width="2.5in"}

With increasing energy, the spectacular Auger data [@Auger] show almost monotonic changes from a proton composition towards an iron one for both the $ \langle X_{max}\rangle $ and $ \sigma(X_{max}) $ observables. For $ \langle X_{max}\rangle $ such a dependence can easily be interpreted by a two-component cosmic-ray composition (with relative abundance of iron nuclei $ \alpha $ and contribution of protons $ 1-\alpha $), for which we expect $$\left\langle X_{max}\right\rangle =\left( 1-\alpha\right) \left\langle X_{max}\right\rangle _{p} +\alpha \left\langle X_{max}\right\rangle _{Fe}
\label{eq:Xmax}$$ where $ \langle X_{max}\rangle _{p} $ and $ \langle X_{max}\rangle _{Fe}$ are the shower maxima for pure proton and iron primaries, respectively. However, for $ \sigma(X_{max}) $ the dependence on $ \alpha $ is nonmonotonic, $$\begin{aligned}
\sigma^{2}=\left( 1-\alpha\right) \sigma_{p}^{2}+\alpha\sigma_{Fe}^{2} \nonumber \\
+\alpha\left( 1-\alpha\right) \left( \left\langle X_{max}\right\rangle _{p}-\left\langle X_{max}\right\rangle _{Fe}\right) ^{2}.
\label{eq:RMS}\end{aligned}$$ For this reason the experimental data (with similar energy behavior for both observables) lead to quite different chemical compositions, ranging from proton dominated for $ \langle X_{max}\rangle $ to iron dominated for $ \sigma(X_{max}) $ (cf. Fig.1).

Importance of the first interaction point
=========================================

Some remarks are in order at this point (cf. [@Alvarez; @Risse; @Urlich; @Alvarez2]).
Most of the charged particles in the shower are electrons and positrons coming from the electromagnetic subshowers initiated by photons from $ \pi^{0} $-decay, with energies near the critical energy ($ \varepsilon = 81$ MeV in air). The mean depth of maximum for an electromagnetic shower initiated by a photon with energy $ E_{\gamma} $ is $$\left\langle X_{max}^{em}\left( E_{\gamma}\right) \right\rangle =X_{0}\ln \left( E_{\gamma}/\varepsilon\right), \label{eq:X_em}$$ where $ X_{0}\approx $37 $g/cm^{2}$ is the radiation length in air. A nuclear-initiated shower consists of a hadronic core feeding the electromagnetic component primarily through $ \pi^{0} $ production. In general, for an incident nucleus of mass $ A $ and total energy $ E $ (including protons with $ A $=1) the depth of maximum is expressed by $$\left\langle X_{max}\left( E\right) \right\rangle = \left\langle X_{max}^{em}\left( \left( E/A\right) \left( K/\left\langle n\right\rangle \right) \right) \right\rangle +\left\langle X_{1}\right\rangle, \label{eq:X_max}$$ where $ \left\langle X_{1}\right\rangle $ is the mean depth of the interaction with maximal energy deposition into shower (usually called the depth of the first interaction), $ K $ denote inelasticity and $ \left\langle n\right\rangle $ is related to the multiplicity of secondaries in the high-energy hadronic interactions in the cascade. If the composition changes with energy, then $ \left\langle A\right\rangle $ depends on energy and $ \left\langle X_{max}\right\rangle $ changes accordingly. The situation is, however, essentially more complicated. Whereas for a primary nucleus in which the energy is to a good approximation simply divided into $ A $ equal parts, in a hadronic cascade there is instead a hierarchy of energies of secondary particles in each interaction, and a similar (approximately geometric) hierarchy of interaction energies in the cascade. In this case $ \left\langle n\right\rangle $ has to be understood as some kind of “effective” multiplicity, which does not have a straightforward definition in general. For this reason the change of primary composition or the violation of Feynman scaling are widely discussed since many years. In addition to this, the inelasticity $ \left\langle K\right\rangle $ can itself be function of energy [@K]. The probability of having the first interaction point of a shower, $ X_{1} $ , at a depth greater than $ X $ is $$P(X_{1}>X)\sim\exp \left( -X/\lambda\right), \label{eq:p1}$$ where $ \lambda $ is the interaction length. In the case of perfect correlation between $ X_{max} $ and $ X_{1} $, i.e., when fluctuations in the shower development were nonexistent, one could use directly the exponential distribution of showers with large $ X_{max} $ to calculate $ X_{1}$ and hence the proton-air cross section. However, intrinsic shower fluctuations modify relation between the depth of maximum distribution and the interaction length. This modification is typically expressed by a factor $ k=\Lambda/\lambda $ and leads to $ P(X_{max}>X)\sim\exp (-X/\Lambda) \label{p2}$. The factor $ k $ depends mainly on how fast is the energy dissipation in the early stages of shower evolution. In particular it is sensitive to the mean inelasticity and to its fluctuations. In general, a model with small fluctuations in secondary particle multiplicity and inelasticity is characterized by a smaller $ k $ factor than a model with large fluctuations. 
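Before continuing with the role of fluctuations, Eqs. (\[eq:X\_em\])-(\[eq:X\_max\]) can be made concrete with a small numerical sketch. All parameter values below (inelasticity, effective multiplicity, interaction lengths, primary energy) are illustrative placeholders rather than fitted or measured values.

```python
import numpy as np

X0, EPS = 37.0, 81e6            # radiation length [g/cm^2], critical energy [eV]

def mean_xmax(E, A, lam, K=0.5, n_eff=50.0):
    """Toy estimate of <X_max> from Eqs. (3)-(4); all parameters illustrative."""
    x_em = X0 * np.log((E / A) * (K / n_eff) / EPS)   # electromagnetic part
    return x_em + lam                                  # <X_1> taken as the interaction length

E = 1e19                                               # primary energy [eV]
print("proton:", round(mean_xmax(E, A=1, lam=45.0)), "g/cm^2")
print("iron  :", round(mean_xmax(E, A=56, lam=15.0)), "g/cm^2")
print("superposition shift X0*ln(56) =", round(X0 * np.log(56)), "g/cm^2")
```

The point of the sketch is simply that, within this superposition picture, proton and iron showers of the same energy differ in the electromagnetic part of $\left\langle X_{max}\right\rangle$ by roughly $X_{0}\ln A$, with an additional difference coming from the shallower first interaction point of nuclei.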
Under the assumption of similar fluctuations in multiplicity and inelasticity, a model predicting a large average number of secondary particles leads to smaller overall fluctuations of the cumulative shower profile of the secondary particles and hence to a smaller $ k $ factor. In the absence of internal fluctuations, all showers would develop through the same amount of matter, $ \Delta X=X_{max}-X_{1}$, between the first interaction point and the maximum. As a consequence, a perfect correlation between $ X_{max} $ and $ X_{1} $ would exist, and their distributions would have exactly the same shape, shifted by a constant $ \Delta X $. In that case the slope of the $ X_{max} $ distribution, $ \Lambda $, would be equal to the mean interaction length, $ \lambda $. Intrinsic fluctuations in shower development (after the first interaction) affect the relation between the interaction length $ \lambda $ and the slope $ \Lambda $ that describes the exponential tail of the $ X_{max} $ distribution. The relation is often expressed with a $ k $ factor $ k=\Lambda/\lambda$. For more properties of EAS and the influence of shower fluctuations on studies of the shower longitudinal development see Refs. [@Alvarez; @Risse; @Urlich; @Urlich2; @Alvarez2]. The effect of fluctuations in $ \Delta X $ is to broaden the correlation of $ X_{max} $ with $ X_{1}$. However, we can roughly write that $$\sigma(X_{max})\cong \sigma(X_{1})+\xi \left( \sigma(\Delta X)\right), \label{eq:sig}$$ where $ \sigma (X_{1}) \propto \left\langle X_{1}\right\rangle $ and the function $ \xi $ describes the influence of shower fluctuations after the first (main) interaction point (notice that for the probability distribution given by Eq. (\[eq:p1\]) the fluctuation in $ X_{1} $ is $\sigma(X_{1})=\sqrt{Var(X_{1})}=\left\langle X_{1}\right\rangle$, whereas for $ X_{1} $ interpreted as the main interaction point we have $\sigma(X_{1})=\left\langle X_{1}\right\rangle/\sqrt{\kappa} $, where $ \kappa$ determines in which of the successive interactions of the projectile particle the energy deposition to the shower is maximal). Because of Eq. (\[eq:X\_max\]), where $ \left\langle X_{max}\right\rangle =\left\langle X_{max}^{em}\right\rangle + \left\langle X_{1}\right\rangle$, we can construct an observable in which the influence of fluctuations of the first interaction point is strongly suppressed, namely $$\begin{aligned} &&\left\langle X_{max}\right\rangle -\sigma (X_{max}) \cong \nonumber \\ &&\cong \left\langle X_{max}^{em}\left( \left( E/A\right) \left( K/\left\langle n\right\rangle \right) \right) \right\rangle + \xi\left( \sigma(\Delta X)\right). \label{eq:x_sig}\end{aligned}$$ Results ======= In Fig. 2 this observable is plotted for the Auger [@Auger] and HiRes [@HiRes] data in comparison with different models [@QGSJETII; @EPOS; @QGSJET01; @Sibil]. To make the results from both experiments coincide, the HiRes data are shifted by $10$ $ g/cm^{2} $ (in this case the predictions of the QGSJETII model are roughly the same for both experiments) [^1]. ![image](icrc0161_fig02.eps){width="5in" height="3.2in"} Notice that $ \left\langle X_{max}\right\rangle -\sigma(X_{max}) $ still depends on the model and, in particular, it is sensitive to the chemical composition. Showers initiated by protons are visibly different from those initiated by iron nuclei. From Fig. 2 we can learn that the chemical composition is not the origin of the effect observed by the Auger experiment. Moreover, the experimental data coincide fairly well with a proton-dominated primary composition. 
Within the toy model of the primary composition (only two components: iron nuclei with relative abundance $ \alpha $ and protons with abundance $ 1-\alpha $ ) we can evaluate $ \alpha $ from $ \left\langle X_{max}\right\rangle -\sigma(X_{max}) $ as given by the Auger experiment. The result is shown in Fig. 3. For the reference model QGSJETII the abundance of iron is roughly independent of energy ($ \alpha\simeq 0.05\div 0.1 $) and even for the model EPOS v1.99 [@EPOS], which leads to the maximal abundance of iron, it increases only slowly with energy (varying in the interval $ \alpha\simeq 0.15\div 0.3 $). The iron abundance shown in Fig. 3 coincides with the one which can be estimated from the HiRes data. The comparison of $\alpha$ from the Auger and HiRes data is shown in Fig. 4. In the energy region $2\cdot 10^{18}\div 5\cdot 10^{19}$ eV the mean values of $\alpha$, evaluated from $ \left\langle X_{max}\right\rangle -\sigma(X_{max})$, are $\alpha = 0.08 \pm 0.01 $ from the Auger data and $\alpha = 0.06 \pm 0.05 $ from the HiRes data (notice that the HiRes data on $ \left\langle X_{max}\right\rangle$ result in a comparable value, $\alpha=0.03 \pm 0.02$). ![The energy dependence of the relative abundance of iron in CR as extracted from $ \left\langle X_{max}\right\rangle -\sigma(X_{max}) $ as given by the Auger experiment and shown in Fig. 2. []{data-label="fig3"}](icrc0161_fig03.eps){width="2.5in"} Possible interpretation ======================= ![image](icrc0161_fig04.eps){width="5in" height="3.2in"} From Fig. 2 we can learn that $ \left\langle X_{1}(E)\right\rangle $ gives the main contribution to the energy dependence of $ \left\langle X_{max}\right\rangle $ and $ \sigma (X_{max})$ observed experimentally. Two factors can affect the energy dependence of $ \left\langle X_{1}(E)\right\rangle$: the cross section (the interaction mean free path $ \lambda $) and the inelasticity $ K $. Roughly, $ \left\langle X_{1}\right\rangle = \lambda\cdot\kappa $, where $ \kappa $ determines in which of the successive interactions of the projectile the energy deposition to the shower is maximal. For a uniform inelasticity distribution in the maximal possible interval for a given $\left\langle K\right\rangle$ one has $ \kappa\simeq 1+1.85(0.75-\left\langle K\right\rangle)$. A rapid increase of the inelastic cross section at energies $ E > 10^{18}$ eV cannot be excluded. In particular, if gluon saturation occurs in the nuclear surface region, the total cross section of proton$ - $nucleus collisions increases more rapidly as a function of the incident energy compared to that of a Glauber-type estimate [@Portugal]. Although in [@K] a decrease of the inelasticity $ \left\langle K\right\rangle $ with energy was discussed in the lower-energy region, its increase at energies $ E\sim 10^{18} $ eV is by no means excluded (cf. the percolation effects which at high energies lead to an increase of the inelasticity [@Dias]). Both possibilities are questionable and require an abrupt onset of “new physics” beyond the standard model (notice, however, that here the center-of-mass collision energy is a few hundred TeV, far beyond what can be studied at the LHC). Taking into account the HiRes data (where the $ X_{max} $ distribution was truncated at $ 2\sigma $) we can learn that the tails of the $ X_{max} $ distribution are crucial. For this reason, the role of biases due to the small statistics in analyzing CR data at the highest energies remains an open question (cf. ref. [@WW]). 
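A compact way to see the numbers quoted above is to invert the two-component relations (\[eq:Xmax\]) and (\[eq:RMS\]) directly. The Python sketch below does this on a grid of $\alpha$; the pure-proton and pure-iron predictions and the "measured" values are placeholders chosen only to illustrate the procedure (and the nonmonotonicity of Eq. (\[eq:RMS\]) in $\alpha$), not QGSJETII or EPOS numbers.

```python
import numpy as np

# Pure-composition predictions at a fixed energy (placeholders, in g/cm^2).
Xp, XFe = 800.0, 700.0          # <Xmax> for protons / iron
sp, sFe = 60.0, 22.0            # sigma(Xmax) for protons / iron

def mean_Xmax(a):               # Eq. (eq:Xmax)
    return (1.0 - a) * Xp + a * XFe

def sigma_Xmax(a):              # Eq. (eq:RMS)
    var = (1.0 - a) * sp**2 + a * sFe**2 + a * (1.0 - a) * (Xp - XFe)**2
    return np.sqrt(var)

def alpha_roots(f, target, n=200_001):
    """All alpha in [0, 1] where f(alpha) crosses the target value."""
    a = np.linspace(0.0, 1.0, n)
    d = f(a) - target
    return a[np.where(np.diff(np.sign(d)) != 0)[0]]

# Hypothetical "measured" values (placeholders).
obs_mean, obs_sigma = 780.0, 50.0

print("alpha from <Xmax>       :", np.round(alpha_roots(mean_Xmax, obs_mean), 2))
print("alpha from sigma(Xmax)  :", np.round(alpha_roots(sigma_Xmax, obs_sigma), 2))
print("alpha from <Xmax>-sigma :",
      np.round(alpha_roots(lambda a: mean_Xmax(a) - sigma_Xmax(a),
                           obs_mean - obs_sigma), 2))
```

With these placeholder inputs the first line gives $\alpha\approx 0.2$, the second $\alpha\approx 0.8$, and the combined observable a small value ($\alpha\lesssim 0.1$), reproducing qualitatively the pattern of Figs. 1 and 3.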
It is interesting to note that the observable $ \left\langle X_{max}\right\rangle -\sigma(X_{max}) $ is rather insensitive to possible biases of the tail of the $X_{max}$ distribution [@WW]. Concluding remarks ================== To summarize, we conclude that the spectacular energy dependence of the shower maxima distribution reported by the Auger collaboration [@Auger] is not necessarily (or not only) due to changes of the chemical composition of primary cosmic rays. The observed effect seems rather to be caused by unexpected changes of the depth of the first interaction at energies above $ 2\times 10^{18} $ eV. Such changes would require, however, an abrupt onset of some “new physics” in this energy region and are therefore questionable. We argue that it would be highly desirable to analyze the observable $ \left\langle X_{max}\right\rangle -\sigma(X_{max}) $, in which fluctuations of the depth of the first interaction, as well as possible biases of the tail of the $X_{max}$ distribution, are strongly suppressed. This observable still depends on the model of multiparticle production and is sensitive to the chemical composition of the primary cosmic rays. B.Schwarzschild, Physics Today, 2010, **63**(5): 15-18 J.Abraham et al. (Auger Coll.), Phys.Rev.Lett., 2010, **104**: 091101 R.U.Abbasi et al. (HiRes Coll.), Phys.Rev.Lett., 2010, **104**: 161101 G.Wilk, Z.Wlodarczyk, J.Phys. G, 2011, **38**: 085201 J.Alvarez-Muniz et al., Phys.Rev. D, 2002, **66**: 033011 M.Risse, Acta Phys.Pol. B, 2004, **35**: 1787-1797 R.Ulrich et al., New J.Phys., 2009, **11**: 065018 R.Ulrich, R.Engel, M.Unger, Phys.Rev. D, 2011, **83**: 054026 J.Alvarez-Muniz et al., Phys.Rev. D, 2004, **69**: 103003 S.Ostapchenko, Nucl.Phys. B (Proc.Suppl.), 2006, **151**: 143-146 T.Pierog, K.Werner, Phys.Rev.Lett., 2008, **101**: 171101 N. N. Kalmykov, S. Ostapchenko, Phys. At. Nucl., 1993, **56**: 346-353 E.J.Ahn et al., Phys.Rev. D, 2009, **80**: 094003 L.Portugal, T.Kodama, Nucl.Phys. A, 2010, **837**: 1-14 Z.Wlodarczyk, J.Phys. G, 1993, **19**: L133-L138 J.Dias de Deus et al., Phys. Rev. Lett., 2006, **96**: 162001 [^1]: Notice that Auger compares the data with pure simulations. HiRes quotes data including all detector effects and compares them to the models ’after’ the detector simulation. Unfortunately, this means that the two approaches cannot be compared directly.
--- author: - The ATLAS Collaboration title: | Measurement of the top quark pair cross section with ATLAS in $pp$ collisions\ at $\sqrt{s}=7$ TeV using final states with an electron or a muon\ and a hadronically decaying $\tau$ lepton --- top-quark physics, cross section, lepton+$\tau$
--- author: - 'Masanori [Yumoto]{}[^1], Hidetoshi [Fukuyama]{} and Hiroshi [Matsukawa]{}$^{1}$' title: | Effect of Local Inhomogeneity on Nucleation;\ Case of Charge Density Wave Depinning --- Introduction ============ Nucleation is one of the most drastic phenomena in various fields of physics, chemistry, and biology, and also in engineering. [@rf:Hanggi] In particular, nucleation in condensed matter physics is most interesting in the sense that it can be controlled by such parameters as pressure, temperature, and electric and magnetic fields. In general, nucleation is defined as a phenomenon where a new phase appears locally in space. The theoretical analysis of nucleation was given by Langer, [@rf:Langer] who investigated the problem of reversal of the direction of magnetization in a ferromagnetic system. In the reversal process, the change of magnetization is found not to be uniform in space but to be triggered by the appearance of magnetic bubbles called droplets. While this theory offers a fundamental understanding of nucleation in a homogeneous medium, we know that local inhomogeneities in actual systems play important roles in the nucleation process, which will be studied in this paper. For an explicit study we choose the case of charge density wave (CDW) [@rf:1] depinning as a typical example, because the field theory described by the phase Hamiltonian has been established, [@rf:HFHT] and the concept of nucleation can be defined clearly in this case as the appearance of a spatially local non-uniform structure of the phase variable which triggers the depinning. The CDW is one of the characteristic states of quasi-one-dimensional conductors, in which translational symmetry is spontaneously broken. The gapless sliding mode, [@rf:LRA] which is the Goldstone mode, can be pinned by external objects which break the translational symmetry. If the external object is the underlying lattice whose spatial periodicity is commensurate with the CDW's, the CDW is called commensurate. In this case the CDW behaves as an insulator. However, if an electric field is applied, the CDW starts to move above some critical field called the threshold field. To estimate the threshold field theoretically, the phase Hamiltonian approach [@rf:2] is useful. With the phase Hamiltonian approach, the classical threshold field can be derived as the field at which the potential barrier for sliding disappears. In this paper we investigate classically the depinning processes of a one-dimensional commensurate CDW with an inhomogeneity at absolute zero. In §2 we introduce our model based on the phase Hamiltonian. In §3 we briefly review the homogeneous case. Below the uniform depinning field, nucleation requires a finite excitation energy. In §4 we examine the ground state in the presence of an impurity, and in §5 we investigate the threshold field in this case. The threshold field can be smaller than the uniform depinning field, and depinning sets in at the impurity site. In §6 the potential curve of our model is shown. Our conclusion and discussion are given in §7. The effects of three-dimensionality and of fluctuations, quantum and thermal, will be studied separately. The Model ========= We investigate a one-dimensional commensurate CDW with one impurity located at $X_{i}$. 
The Lagrangian is then given by $$\cal L \mit = - \int {\rm d}X \left[ A\left(\frac{\partial\phi}{\partial X} \right)^2 - F\phi + g \left(1 - \cos(M \phi) \right) - V_{i}\cos(2 k_{{\rm F}} X + \phi)\delta(X-X_{i}) \right] .$$ Here, the first term in the integration is the elastic energy, $A=(\hbar v_{{\rm F}})/(4 \pi)$, with $v_{{\rm F}}$ being the Fermi velocity. The second term is the energy associated with the electric field, $F = ({\rm e}E)/(2 \pi)$, with $E$ and $- {\rm e}$ (${\rm e} > 0$) being electric field and the electron charge, respectively. The third term is the commensurability energy, $g=(| \Delta |^2 / \varepsilon_{{\rm F}}) n_{{\rm e}} (| \Delta |/W )^{M-2}$, $|\Delta|$ being the Peierls gap, and $\varepsilon_{{\rm F}}$, $n_{{\rm e}}$, $W$ are the Fermi energy, the density of electron, and the width of the original band, respectively. Here $M$ is the degree of commensurability, $M =\pi/(k_{{\rm F}} a)$, $k_{{\rm F}}$ is the Fermi wave number, $a$ is the lattice spacing. The last term is the coupling energy of CDW to one impurity. In this Lagrangian, there exists a characteristic length, $\xi = \sqrt{(2 A)/(M^2 g) }$, which is the phase coherence length due to the commensurability. We make the Lagrangian dimensionless through scaling by this $\xi$; i.e. $x =X/\xi$, $\cal L \mit = M^2 g \xi L$, where $L$ is given by $$L = - \int {\rm d}x \left[ \frac{1}{2}\left(\frac{\partial\phi}{\partial x} \right)^2 - \varepsilon\phi + \frac{1}{M^2}\left(1 - \cos(M \phi) \right) - v\cos(\chi + \phi)\delta(x-x_{i}) \right]. \label{eq:2.4}$$ Here $\varepsilon = F/(g M^2)$, $v = V_{i}/(g \xi M^2)$. In this paper, we assume $\varepsilon \geq 0 $. Further, we define $x_{i}=X_{i}/\xi$ and $\chi= (2 \pi z)/M $, where $z=X_{i}/a $ characterizes the location of an impurity relative to the site where the energy gain by commensurability is maximum. The range of $\chi$ is $-\pi/M \leq \chi \leq \pi/M$, because the Lagrangian has the periodicity of $2\pi/M$ with respect to the phase. The Homogeneous Case ==================== First of all, we examine the case of $v=0$, namely the homogeneous case. [@rf:3] The stable configuration of the phase is determined by varying Lagrangian; $$- \phi'' - \varepsilon + \frac{1}{M}\sin(M \phi)=0. \label{eq:3.1}$$ When $\varepsilon = 0$, eq. (\[eq:3.1\]) is the sine-Gordon equation, and has two kinds of solutions, which are a trivial one, $\phi=0$, and the kink solution, which is given by $$\phi(x)= \frac{4}{M}\arctan\left[\exp(\pm(x-x_{0})\right] \equiv \phi_{\pm}(x-x_{0}), \label{eq:3.3}$$ and shown in Fig. \[fig:kink\]. Here $x_{0}$ is the center of the kink. Among these solutions, the lowest energy solution, namely, the ground state solution is $\phi=0$, and the kink and the anti-kink are excitations with finite excitation energy. In the case of $\varepsilon \neq 0$ the equation of the classical configuration has also an uniform solution, $$\phi =\frac{1}{M}\arcsin(\varepsilon M),$$ and the non-uniform solution with local spatial variation, $\phi_{l}(x)$, which is given by $$\phi_{l}' = -\left[ 2 \left[ \varepsilon \left( \frac{1}{M} \arcsin(\varepsilon M) - \phi_{l} \right) + \frac{1}{M^2}\left(\sqrt{1-(\varepsilon M)^2} - \cos(M \phi_{l})\right)\right]\right]^{\frac{1}{2}}. \label{eq:3.5}$$ This equation is derived by the integration of eq. (\[eq:3.1\]) with the boundary condition, $\phi(\infty)=(1/M)\arcsin(\varepsilon M)$ and $\phi'(\infty)=0$. The solution, $\phi_{l}(x)$, is shown in Fig. \[fig:soliton\]. 
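The non-uniform solution can also be obtained by direct numerical quadrature of eq. (\[eq:3.5\]). The short Python sketch below does this for the illustrative values $M=4$ and $\varepsilon=0.15$ (any $\varepsilon M<1$ works), using a simple Euler step; it is only meant to reproduce the qualitative shape of Fig. \[fig:soliton\], the left half of the kink pair being given by mirror symmetry.

```python
import numpy as np

M, eps = 4, 0.15                          # illustrative values, eps * M < 1
phi_inf = np.arcsin(eps * M) / M          # uniform value phi(+infinity)

def g(phi):
    """Bracketed expression in eq. (3.5); (phi')^2 = 2 g(phi)."""
    return (eps * (phi_inf - phi)
            + (np.sqrt(1.0 - (eps * M) ** 2) - np.cos(M * phi)) / M ** 2)

# Turning point phi_top (top of the kink pair): first zero of g above phi_inf.
grid = np.linspace(phi_inf + 0.01, 2.0 * np.pi / M, 20_001)
phi_top = grid[np.argmax(g(grid) < 0.0)]

# Integrate the right half, phi' = -sqrt(2 g(phi)), starting just below phi_top.
dx = 1e-3
xs, phis = [0.0], [phi_top - 1e-4]
while phis[-1] > phi_inf + 1e-4:
    slope = -np.sqrt(max(2.0 * g(phis[-1]), 0.0))
    phis.append(phis[-1] + dx * slope)
    xs.append(xs[-1] + dx)

print(f"phi(inf) = {phi_inf:.4f}, phi_top = {phi_top:.4f}, "
      f"half width ~ {xs[-1]:.1f} (units of xi)")
```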
We notice that the non-uniform solution, $\phi_{l}(x)$, is kink pair solution. In the presence of the electric field, the cosine-type potential for the uniform phase is tilted as is shown in Fig. \[fig:1\]. The potential barrier disappears at $\varepsilon=1/M \equiv \varepsilon_{T}$, which is the classical depinning field of the uniform CDW. On the other hand, if the kink pair is excited, CDW will also depin. This is another type of depinning process which sets locally in space and considered to be the nucleation process. However, below the threshold field, $\varepsilon_{T}$, of the uniform depinning, the kink pair is always the excitation, which requires a finite excitation energy. Therefore the nucleation does not take place in a homogeneous case without quantum nor thermal fluctuations. Ground State in the Presence of an Impurity =========================================== Next, we consider a case with an impurity, $v \neq 0$. The stable configuration of the phase is now given by $$- \phi'' - \varepsilon + \frac{1}{M}\sin(M\phi) + v\sin(\chi + \phi)\delta(x-x_{i}) =0, \label{class}$$ with the boundary conditions $$\phi(\pm \infty)=\frac{1}{M}\arcsin(\varepsilon M) \; , \;\; \phi'(\pm \infty)=0. \label{bouncla}$$ In the case of $\varepsilon = 0$, the solution is given analytically. Depending on the range of $\chi$, there are two kinds of configurations. If $\chi$ is in the range of $-\pi/M \leq \chi \leq 0$, the solution is located in the region between $0$ and $\pi/M$ (Config.1) as shown in Fig. \[fig:conf\]. If $\chi$ is in the range of $0 \leq \chi \leq \pi/M$, the solution is located in the region between $-\pi/M$ and $0$ (Config.2). We take electric field $\varepsilon \geq 0$. Under this condition, Config.1 has more tendency to depin than Config.2. Namely, in the case of $-\pi/M \leq \chi \leq 0$, the threshold field can be smaller than that of uniform depinning as is disclosed in §5; in the case of $0 \leq \chi \leq \pi/M$, however, the threshold field is same as that of uniform depinning. Therefore, we consider only the case of $-\pi/M \leq \chi \leq 0$. This choice is justified by the discussion in §7. The solution of Config.1 is obtained in terms of $\phi_{-}(x)$ given in eq. (\[eq:3.3\]), $$\begin{aligned} \phi_{c}(x) &=& \frac{4}{M}\arctan\left[\exp\left(-|x-x_{i}|-c_{0}\right)\right] \label{clasol} \\ &=& \phi_{-}(|x-x_{i}|+ c_{0}), \end{aligned}$$ where $c_{0}$ is the parameter which is determined by the equation, $$\frac{4}{M}\frac{1}{\cosh(c_{0})} + v \sin \left(\chi + \frac{4}{M}\arctan[\exp(- c_{0})] \right) = 0. \label{connection}$$ This equation is due to the requirement that the jump of the first derivative of $\phi_{c}(x)$ at $x_{i}$ should match the third term in the left hand side of eq. (\[class\]). The solution $\phi_{c}(x)$ is considered to be a kink and an anti-kink connected at the impurity site as shown in Fig. \[fig:e0\]. Further the parameter $c_{0}$ is equivalent to the distance from the impurity site to the center of the kink as indicated in Fig. \[fig:e0\]. In the case of $\varepsilon \neq 0$, the stable solution, $\phi_{s}(x)$, in the presence of an impurity given by eq. (\[class\]) is obtained by the connection at the impurity site of two non-uniform solutions, one of which, $\phi_{l}(x)$, has been shown in Fig. \[fig:soliton\]. An example of the stable solution, $\phi_{s}(x)$, is shown in Fig. \[fig:2\]. Though the stable solution, $\phi_{s}(x)$, is obtained only numerically, the information of the phase value at the impurity site is obtained. 
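Returning briefly to the $\varepsilon=0$ case, the connection condition, eq. (\[connection\]), is easy to solve numerically for $c_{0}$. The sketch below uses plain bisection for the illustrative values $M=4$, $\chi=-\pi/4$, $v=1$; for $-\pi/M \leq \chi < 0$ the left-hand side is positive at $c_{0}=0$ and tends to $v\sin\chi<0$ as $c_{0}\to\infty$, so a root is guaranteed.

```python
import numpy as np

M, chi, v = 4, -np.pi / 4, 1.0            # illustrative values, -pi/M <= chi < 0

def connection(c0):
    """Left-hand side of eq. (connection); its zero fixes the offset c0."""
    return ((4.0 / M) / np.cosh(c0)
            + v * np.sin(chi + (4.0 / M) * np.arctan(np.exp(-c0))))

lo, hi = 0.0, 30.0                        # connection(lo) > 0, connection(hi) < 0
for _ in range(80):                       # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if connection(mid) > 0.0 else (lo, mid)

c0 = 0.5 * (lo + hi)
phi_i = (4.0 / M) * np.arctan(np.exp(-c0))    # value of phi_c at the impurity site
print(f"c0 = {c0:.4f}, phi(x_i) = {phi_i:.4f}")
```

For these parameters the root is at $c_{0}\approx 1.32$, i.e. the two half-kinks of Fig. \[fig:e0\] sit about $1.3\,\xi$ away from the impurity on either side.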
The stable solution, $\phi_{s}(x)$, is given as follows: $$\phi_{s}(x) = \phi_{l}(|x-x_{i}| + c) \label{classsol}$$ where $\phi_{l}(x)$ is the non-uniform solution of eq. (\[eq:3.1\]) and $c$ is a parameter to be determined to satisfy the boundary condition given by eq. (\[bouncla\]). When we consider $\phi_{s}(x)$, it is necessary to consider only the range of the solid line in Fig. \[fig:2\]. Substituting eq. (\[classsol\]) into eq. (\[class\]), we obtain $$- 2 \phi_{l}'(c) + v \sin(\chi + \phi_{l}(c)) = 0. \label{cla1}$$ By use of eq. (\[eq:3.5\]), eq. (\[cla1\]) is rewritten as $$\left[ 2 \left[ \varepsilon \left( \frac{1}{M} \arcsin(\varepsilon M) - \phi_{l}(c) \right) + \frac{1}{M^2}\left(\sqrt{1-(\varepsilon M)^2} - \cos(M \phi_{l}(c))\right)\right]\right]^{\frac{1}{2}} + \frac{1}{2}v\sin(\chi + \phi_{l}(c)) = 0. \label{result}$$ This equation determines the phase value, $\phi_{l}(c)$, (instead of $c$) at the impurity site of the stable solution when the electric field, $\varepsilon$, is given. It means that instead of the boundary condition at infinity, eq. (\[bouncla\]), we obtain a boundary condition at the impurity site. Threshold Field in the Presence of an Impurity ============================================== The depinning threshold field, $\varepsilon_{c}$, is determined as the field at which the stable solution becomes unstable. [@rf:4] The instability of the solution is triggered by the onset of a zero eigenvalue of the fluctuation mode around the stable solution. [@rf:5] The eigenvalue equation of the fluctuations, $\delta \phi(x)$, around the stable solution, $\phi_{s}(x)$, is the second variational equation of the Lagrangian; i.e. $$\left[ - \frac{\partial^2}{\partial x^2} + \cos(M \phi_{s}) + v\cos(\chi + \phi_{s})\delta(x-x_{i}) \right]\delta \phi = \lambda \delta \phi, \label{fluc}$$ where $\lambda$ is the eigenvalue. Hence we consider the case of $\lambda=0$ to determine the threshold field, $\varepsilon_{c}$. First, we consider the fluctuation around the solution, $\phi_{l}(x)$, in the case of $v=0$. The equation of the zero-mode fluctuation, $\delta\phi_{l}(x)$, around $\phi_{l}(x)$ is $$\left[ - \frac{\partial^2}{\partial x^2} + \cos(M \phi_{l}) \right] \delta\phi_{l} = 0$$ and the boundary condition is $$\delta\phi_{l}(\infty) = 0 \; , \;\; \delta\phi_{l}'(\infty) = 0.$$ Its solution is $$\delta\phi_{l}(x) \propto \phi_{l}'(x).$$ Next, we consider the case of $v \neq 0$. The threshold field, $\varepsilon_{c}$, is determined as a particular value of the electric field at which the zero-mode fluctuations on both sides of the impurity are connected. The zero-mode fluctuation around the stable solution in each region must be expressed as $$\begin{aligned} \delta\phi(x) &\propto& - ( 2 \theta(x-x_{i}) - 1) \phi_{s}'(x) \\ &=& - \phi_{l}'(|x-x_{i}|+c), \label{flu1}\end{aligned}$$ by noting $$\phi_{s}'(x) = (2\theta(x-x_{i})-1)\phi_{l}'(x),$$ where $\theta(x)$ is the step function. Substituting eq. (\[flu1\]) into eq. (\[fluc\]), we obtain $$2\phi_{l}''(c) - v\cos(\chi + \phi_{l}(c))\phi_{l}'(c) = 0 . \label{conflu}$$ At the threshold field, $\varepsilon_{c}$, eq. (\[conflu\]) should be satisfied. By use of eqs. (\[eq:3.1\]) and (\[cla1\]), eq. (\[conflu\]) is expressed as follows: $$2 \left( -\varepsilon + \frac{1}{M}\sin(M \phi_{l}(c)) \right) - \frac{1}{4} v^2 \sin \left[ 2(\chi + \phi_{l}(c)) \right] =0. \label{5.9}$$ Hence the threshold field, $\varepsilon_{c}$, is determined as a value of $\varepsilon$ at which both eqs. 
(\[result\]) and (\[5.9\]) have a common solution, $\phi_{l}(c)$, for each fixed values of $\chi$ and $v$. The solutions of eqs. (\[result\]) and (\[5.9\]) for $M=4$ are shown in Fig. \[fig:3\]. The threshold field, $\varepsilon_{c}$, is normalized by the threshold field, $\varepsilon_{T}$, of the uniform case. When $v$ is fixed, the normalized threshold field goes to 1 as $\chi$ tends to $-\pi/(2M)$. It is noted that the threshold field goes to a finite value, $\varepsilon_{\infty}$, even if $v \rightarrow \infty$. This finite value, $\varepsilon_{\infty}$, is given by the equation, $$\varepsilon_{\infty} (\frac{1}{M}\arcsin(\varepsilon_{\infty} M) + \chi ) + \frac{1}{M^2} (\sqrt{1-(\varepsilon_{\infty} M)^2}-\cos(M \chi)) = 0 .$$ Through a numerical calculation, we find that for a choice of $\chi = -\pi/M$ and $M=4$, the normalized threshold field tends to $\varepsilon_{\infty}/\varepsilon_{T} \simeq 0.725$ as $v$ tends to infinity. The reason why $\chi =- \pi/(2M) $ is critical is that $\chi = - \pi/(2M) $ is the value which determines whether the commensurability potential and the impurity potential compete or not at the impurity site, namely, the system contains frustration or not. To investigate the frustration, we consider the local potential energy $U_{i}$ at the impurity site, $$U_{i}(\phi(x_{i})) = - \frac{1}{M^2}\cos(M\phi(x_{i})) - v\cos(\chi + \phi(x_{i})).$$ The first term of the right hand is the commensurability potential and the second term is the impurity potential. If the potential energy by the impurity is minimized, that is $\chi + \phi(x_{i}) = 0$, $U_{i}(\phi(x_{i}))$ is $$U_{i}(-\chi) = - \frac{1}{M^2}\cos(M\chi) - v.$$ In the case of $- \pi/M \leq \chi < - \pi/(2M) $, the commensurability potential and the impurity potential compete and the system contains frustration. In the case of $ - \pi/(2M) < \chi \leq 0$, however, they do not compete and the system contains no frustration. When the system is frustrated the effect of the impurity potential to the threshold field is important. However, if the system is not frustrated, the impurity potential is irrelevant. The reason why the threshold field is finite even if $v$ tends to infinity is that the depinning object is not a particle, but a string. In the limit of $v \rightarrow \infty$ with $\chi = -\pi/M $ which is the optimal case, the cusp of the ground state configuration of the phase is on the top of the barrier of the uniform potential (Fig. \[fig:5\]). The cusp, however, is just a part of the string, and a finite field is necessary for a string to go over the uniform potential barrier. The Potential Curve in the Presence of an Impurity ================================================== To clarify the meaning of $\varepsilon_{c}$, we consider a potential curve in the presence of an impurity. In a homogeneous case the phase is uniform in the ground state, and hence the potential energy density (Fig. \[fig:1\]) can be expressed with respect to the uniform phase variable. With an impurity, however, the phase is not uniform, and therefore we should consider the potential curve in a functional space. In this case the most suitable variable is the phase value, $\phi_{i}$, at the impurity site with $\chi$, $v$ and $\varepsilon$ fixed. For each value of $\phi_{i}$, we can determine the lowest energy configuration; we solve eq. 
(\[eq:3.1\]) under the boundary condition of $\phi(x_{i})=\phi_{i}$ and $$\phi'(x_{i}) = -\left[ 2 \left[ \varepsilon \left( \frac{1}{M} \arcsin(\varepsilon M) - \phi_{i} \right) + \frac{1}{M^2}\left(\sqrt{1-(\varepsilon M)^2} - \cos(M \phi_{i})\right)\right]\right]^{\frac{1}{2}},$$ which is similar to eq. (\[eq:3.5\]). Substituting this optimal configuration into eq. (\[eq:2.4\]), we determine its energy and obtain the potential curve as the function of $\phi_{i}$ which is shown in Fig. \[fig:curve\]. While there exists an energy barrier below the threshold field, $\varepsilon_{c}$, (Fig. \[fig:curve\] (a) and (b)), the barrier disappears at $\varepsilon_{c}$ (Fig. \[fig:curve\] (c)) and CDW depins. Conclusion and Discussion ========================= We conclude that the existence of an impurity in the model one-dimensional commensurate CDW causes the lowering of threshold fields by a finite amount due to the appearance of the local change of the phase variable, which is considered as nucleation, near an impurity site. Below the threshold field, $\varepsilon_{T}$, of the uniform depinning, such local change in the homogeneous case always requires a finite excitation energy. However, the nucleation in the presence of an impurity can take place without any excitation energy at field, $\varepsilon_{c}$, which is smaller than $\varepsilon_{T}$. The main reason for the lowering of the threshold field is due to the frustration which is caused by the competition between the commensurability potential and the impurity potential. Our result will be applied to the commensurate CDW with dilute but with macroscopic number of impurities where the inverse of the impurity density is smaller than the phase coherence length, $\xi$. In such a case these will be a distribution of parameter $\chi$, and then that of the depinning field. However the depinning will be triggered by the nucleation at the optimal impurity site, $\chi = -\pi/M$. In actual experiments three dimensionality and quantum fluctuations [@rf:matsu] will play important roles at low temperatures, which will be studied elsewhere. To the best of our knowledge, our investigation is the first microscopic theoretical studies on nucleation triggered by the local inhomogeneity in the bulk region. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Hiroshi Kohno for helpful discussions. One of us (M.Y.) is grateful for the kind hospitality received at Condensed Matter Physics Group of Osaka University. [99]{} For a review, see P.Hänggi, P.Talker and M.Borkovec, Rev. Mod. Phys. [**62**]{} (1990) 251. J.S.Langer, Ann. Phys. (N.Y.) [**41**]{} (1967) 108. For a review, see G.Grüner, [*Density Waves in Solids*]{} (Addison-Wesley, 1995). For a review, see H.Fukuyama and H.Takayama, in [*Electric Properties of Inorganic Quasi-One-Dimensional Compounds*]{} Part 1, edited by P.Monceau (D.Reidel, 1985). P.A.Lee, T.M.Rice and P.W.Anderson, Solid State Commun. [**14**]{} 703 H.Fukuyama, J. Phys. Soc. Jpn. [**41**]{} (1976) 513; H.Fukuyama and P.A.Lee, Phys. Rev. B [**17**]{} (1978) 535; P.A.Lee and H.Fukuyama, Phys. Rev. B [**17**]{} (1978) 542; P.A.Lee and T.M.Rice, Phys. Rev. B [**19**]{} (1979) 3970. M.J Rice, A.R.Bishop, J.A.Krumhansl and S.E.Trullinger, Phys. Rev. Lett. [**36**]{} (1976) 432. Instability of a state induced by an external field which is described by the phase Hamiltonian has also been discussed in the spin-Peierls system by M.Saito. \[Submitted to J. Phys. Soc. 
Jpn.\] In our model the coupling between the external force and the phase is $-\int {\rm d}x \varepsilon \phi$, while in the spin-Peierls system is $\int {\rm d}x \varepsilon \nabla \phi$. Identification of the stability of a metastable state by the onset of such zero eigenvalue has been employed in another context of the $\pi$ junction by T.Kato and M.Imada, J. Phys. Soc. Jpn. [**66**]{} (1997) 1445. H.Matsukawa, in [*Quantum Coherence and Decoherence*]{}, edited by K.Fujikawa and Y.A.Ono (North-Holland, 1996); H.Miyake and H.Matsukawa, in preparation. [^1]: E-mail: [email protected]
--- abstract: 'For every infinite graph $\Gamma$ we construct a non-Desarguesian projective plane $P^*_{\Gamma}$ of the same size as $\Gamma$ such that $Aut(\Gamma) \cong Aut(P^*_{\Gamma})$ and $\Gamma_1 \cong \Gamma_2$ iff $P^*_{\Gamma_1} \cong P^*_{\Gamma_2}$. Furthermore, restricted to structures with domain $\omega$, the map $\Gamma \mapsto P^*_{\Gamma}$ is Borel. On one side, this shows that the class of countable non-Desarguesian projective planes is Borel complete, and thus does not admit a Ulm type system of invariants. On the other side, we rediscover the main result of [@projective] on the realizability of every group as the group of collineations of some projective plane. Finally, we use classical results of projective geometry to prove that the class of countable Pappian projective planes is also Borel complete.' address: 'Einstein Institute of Mathematics, The Hebrew University of Jerusalem, Israel' author: - Gianluca Paolini title: 'The Class of Non-Desarguesian Projective Planes is Borel Complete' --- [^1] Introduction ============ \[def\_plane\] A [*plane*]{} is a system of points and lines satisfying: (A) every pair of distinct points determines a unique line; (B) every pair of distinct lines intersects in at most one point; (C) every line contains at least two points; (D) there exist at least three non-collinear points. A plane is [*projective*]{} if in addition: 1. every pair of lines intersects in exactly one point. As is well known (see e.g. [@rota] and [@kung pg. 148]), the class of planes (resp. projective planes) corresponds canonically to the class of simple rank $3$ matroids (resp. simple modular rank $3$ matroids), or, equivalently, to the class of geometric lattices of rank $3$ (resp. modular geometric lattices of rank $3$). We prove: \[main\_theorem\] For every graph $\Gamma = (V, E)$ there exists a plane $P_{\Gamma}$ such that: (1) if $\Gamma$ is finite, then $P_{\Gamma}$ has size $3|V| + |E| + 17$; (2) if $\Gamma$ is infinite, then $P_{\Gamma}$ has the same size as $\Gamma$; (3) except for $17$ points, every point of $P_{\Gamma}$ is incident with at most two non-trivial lines; (4) \[auto\] $Aut(\Gamma) \cong Aut(P_{\Gamma})$; (5) \[iso\_invariance\] $\Gamma_1 \cong \Gamma_2$ if and only if $P_{\Gamma_1} \cong P_{\Gamma_2}$; (6) restricted to structures with domain $\omega$, the map $\Gamma \mapsto P_{\Gamma}$ is Borel (with respect to the naturally associated Polish topologies). We then combine (a modification of) the construction $\Gamma \mapsto P_{\Gamma}$ of Theorem \[main\_theorem\] with the map $P \mapsto F(P)$ associating to each plane its free projective extension (in the sense of [@hall_proj], cf. also Definition \[def\_free\_ext\]), and prove: \[main\_theorem\_proj\] For every infinite graph $\Gamma$ there exists a projective plane $P^*_{\Gamma}$ such that: (1) $P^*_{\Gamma}$ has the same size as $\Gamma$; (2) $P^*_{\Gamma}$ is non-Desarguesian; (3) $Aut(\Gamma) \cong Aut(P^*_{\Gamma})$; (4) $\Gamma_1 \cong \Gamma_2$ if and only if $P^*_{\Gamma_1} \cong P^*_{\Gamma_2}$; (5) restricted to structures with domain $\omega$, the map $\Gamma \mapsto P^*_{\Gamma}$ is Borel (with respect to the naturally associated Polish topologies). As a first consequence we get: \[def\_classes\] (1) We say that a plane is simple (or $17$-simple) if, except for $17$ points, every point is incident with at most two non-trivial lines. (2) We denote by $\mathbf{K}_1$ the class of countable simple planes. (3) We denote by $\mathbf{K}_2$ the class of countable non-Desarguesian projective planes. 
\[main\_cor2\] Let $\mathbf{K}$ be either $\mathbf{K}_1$ or $\mathbf{K}_2$ (cf. Definition \[def\_classes\]). Then: (1) $\mathbf{K}$ is Borel complete (i.e. the isomorphism relation on $\mathbf{K}$ is $Sym(\omega)$-complete); (2) $\mathbf{K}$ does not admit a Ulm type classification (cf. [@ulm_inv_paper] for this notion). In [@frucht] and [@frucht2] Frucht showed that every finite group is the group of automorphisms of a finite graph. Later, Sabidussi [@sabi] and, independently, de Groot [@groot] proved that every group is the group of automorphisms of a graph. Using this, Harary, Piff, and Welsh [@piff] proved that every group is the group of automorphisms of a graphic matroid, possibly of infinite rank. In [@bonin], Bonin and Kung showed that every infinite group is the group of automorphisms of a Dowling plane of the same cardinality. In [@projective], Mendelsohn proved that every group is the group of collineations of some projective plane. Using Theorems \[main\_theorem\] and \[main\_theorem\_proj\] we rediscover and improve these results: \[main\_cor\] (1) For every finite structure $M$ (in the sense of model theory) there exists a simple plane $P_M$ such that $P_M$ is finite and $Aut(P_M) \cong Aut(M)$. (2) For every infinite structure $M$ (in the sense of model theory) there exists a simple plane $P_M$ such that $|M| = |P_M|$ and $Aut(P_M) \cong Aut(M)$. (3) For every infinite structure $M$ there exists a non-Desarguesian projective plane $P_M$ such that $|M| = |P_M|$ and $Aut(P_M) \cong Aut(M)$. Finally, we use classical results of projective geometry to prove: \[des\_theorem\] Let $\mathbf{K}_3$ be the class of countable Pappian[^2] projective planes. Then: (1) $\mathbf{K}_3$ is Borel complete; (2) $\mathbf{K}_3$ does not admit a Ulm type classification. We leave the following open problem: Characterize the Lenz-Barlotti classes of countable projective planes which are Borel complete. Preliminaries ============= Given a plane $P$ we will freely refer to the canonically associated geometric lattice $G(P)$. On this see e.g. [@rota], or , for an introduction directed to logicians. For our purposes the following definitions suffice. \[basic\_def\] Let $P$ be a plane. (1) \[sup\] Given two distinct points $a_1$ and $a_2$ of $P$ we let $a_1 \vee a_2$ be the unique line that they determine. (2) \[inf\] Given two distinct lines $\ell_1$ and $\ell_2$ of $P$ we let $\ell_1 \wedge \ell_2$ be the unique point in their intersection, if such a point exists, and $0$ otherwise. (3) The [*size*]{} $|P|$ of a plane $P$ is the size of its set of points. (4) We say that the point $a$ (resp. the line $\ell$) is [*incident*]{} with the line $\ell$ (resp. the point $a$) if the point $a$ is contained in the line $\ell$ (resp. the line $\ell$ contains the point $a$). (5) \[trivial\] We say that the line $\ell$ from $P$ is [*trivial*]{} if $\ell$ is incident with exactly two points from $P$. (6) We say that two lines $\ell_1$ and $\ell_2$ from $P$ are [*parallel*]{} in $P$ if $\ell_1 \wedge \ell_2 = 0$, i.e. there is no point $p \in P$ incident with both $\ell_1$ and $\ell_2$. (7) We say that three distinct points $a_1, a_2, a_3$ of $P$ are [*collinear*]{} if there is a line $\ell$ in $P$ such that $a_i$ is incident with $\ell$ for every $i = 1, 2, 3$ (in this case we also say that the set $\{a_1, a_2, a_3\}$ is dependent). We will make crucial use of the following fact from the theory of one-point extensions of matroids from [@crapo] (see also [@rota Chapter 10] and ). 
\[fact\] Let $P$ be a plane, $L$ a set of parallel lines of $P$ (in particular $L$ can be empty or a singleton) and $p \not\in P$. Then there exists a plane $P(L)$ (unique modulo isomorphism) such that its set of points is the set of points of $P$ plus the point $p$, and $p, q, r$ are collinear in $P(L)$ if and only if $q \vee r \in L$. We now introduce Hall’s notion of free projective extension from [@hall_proj]. In exposition and results we follow [@piper Chapter XI]. \[def\_free\_ext\] Given a plane $P$ we define by induction on $n < \omega$ a chain of planes $(P_n : n < \omega)$ as follows: $n = 0$. Let $P_n = P$. $n = m+1$. For every pair of parallel lines $\ell \neq \ell'$ in $P_m$ add a new point $\ell \wedge \ell'$ to $P_m$ incident with $\ell$ and $\ell'$ and with no other line of $P_m$. We define the [*free projective extension*]{} of $P$ to be $F(P) : = \bigcup_{n < \omega} P_n$. Given two planes $P_1$ and $P_2$, we say that $P_1$ is a [*subplane*]{} of $P_2$ if $P_1 \subseteq P_2$, points of $P_1$ are points of $P_2$, lines of $P_1$ are lines of $P_2$, and the point $p$ is on the line $\ell$ in $P_1$ if and only if the point $p$ is on the line $\ell$ in $P_2$. \[def\_conf\] Let $P$ be a plane. (1) If $P$ is [*finite*]{}, then we say that $P$ is [*confined*]{} if every point of $P$ is incident with at least three lines of $P$, and every line of $P$ is non-trivial (cf. Definition \[basic\_def\](\[trivial\])). (2) We say that $P$ is confined if every point and every line of $P$ is contained in a finite confined subplane of $P$. We will make crucial use of the following facts: \[desarg\_def\] Let $P$ be a projective plane. We say that $P$ is [*Desarguesian*]{} if given two triples of distinct points $p, q, r$ and $p', q', r'$, if the lines $p\vee p'$, $q \vee q'$ and $r \vee r'$ are incident with a common point, then the points $(p \vee q) \wedge (p' \vee q')$, $(p \vee r) \wedge (p' \vee r')$ and $(q \vee r) \wedge (q' \vee r')$ are collinear. \[desargue\_fact\] Let $P$ be a plane which is not a projective plane. Then $F(P)$ is non-Desarguesian. \[piper\_fact1\] Let $P_1$ and $P_2$ be confined planes. Then the following are equivalent: (1) $F(P_1) \cong F(P_2)$; (2) $P_1 \cong P_2$. \[piper\_fact2\] Let $P$ be a confined plane. Then: $$Aut(P) \cong Aut(F(P)).$$ The following facts are classical results of projective geometry. \[pappian\_def\] Let $P$ be a projective plane. We say that $P$ is [*Pappian*]{} if given two triples of distinct collinear points $p, q, r$ and $p', q', r'$ on distinct lines $\ell$ and $\ell'$, respectively, if $\ell \wedge \ell'$ is different from all six points, then the points $(p \vee q') \wedge (p' \vee q)$, $(p \vee r') \wedge (p' \vee r)$ and $(q \vee r') \wedge (q' \vee r)$ are collinear. Given a field $K$ we denote by $\mathfrak{P}(K)$ the corresponding projective plane (cf. e.g. [@piper Section 2]). \[pappian\_field\_fact\] Let $K$ be a field. Then $\mathfrak{P}(K)$ is Pappian. \[pappian\_fact\] Let $K$ and $K'$ be fields. Then $\mathfrak{P}(K) \cong \mathfrak{P}(K')$ if and only if $K \cong K'$. Concerning the topological notions occurring in Theorem \[main\_theorem\], they are in the sense of invariant descriptive set theory of $\mathfrak{L}_{\omega_1, \omega}$-classes; see e.g. [@gao_invariant Chapter 11] for a thorough introduction. Notice that the classes of planes, simple planes, projective planes, (non-)Desarguesian projective planes (cf. Definition \[desarg\_def\]), and Pappian planes (cf. Definition \[pappian\_def\]) are first-order classes, considered e.g. 
in a language specifying points, lines and the point-line incidence relation. Proof of Theorem \[main\_theorem\] ================================== In this section we prove Theorem \[main\_theorem\]. \[notation\_Bonin\_paper\] We denote by $P_*$ the plane represented in Figure \[figure1\]. The plane $P_*$ is taken from [@bonin], where it is denoted as $T_S$ for $S = \{ 0, 1, 2, 3 \}$. Let $\Gamma = (V, E)$ be given and let $\{ v_{\alpha} : \alpha < \lambda \}$ list $V$ without repetitions. For $\gamma {\leqslant}\lambda$, let $\Gamma_{\gamma} = (V_{\gamma}, E_{\gamma})$ be such that $V_{\gamma} = \{ v_{\beta} : \beta < \gamma \}$ and for ${\alpha} < \beta < \gamma$ we have $v_{\alpha} E_{\gamma} v_{\beta}$ if and only if $v_{\alpha} E v_{\beta}$. Let $P_*$ be the plane from Notation \[notation\_Bonin\_paper\]. Notice that $|P_*| = 17$ and, as proved in [@bonin Lemma 2], $P_*$ is rigid, i.e. $Aut(P_*) = \{ e \}$. By induction on $\beta {\leqslant}\lambda$, we construct a plane $P_{\Gamma}(\beta)$ such that its set of points is: $$\label{equation_points} \tag{$*$} P_* \cup \{ p_{(\alpha, 0)} : \alpha < \beta \} \cup \{ p_{(\alpha, 1)} : \alpha < \beta \} \cup \{ p_{(\alpha, 2)} : \alpha < \beta \} \cup \{ p_e : e \in E_{\beta} \}.$$ For $\beta = 0$, let $P_{\Gamma}(\beta) = P_*$. For $\beta$ limit ordinal, let $P_{\Gamma}(\beta) = \bigcup_{\alpha < \beta} P_{\Gamma}(\alpha)$. For $\beta = \alpha + 1$, we construct $P_{\Gamma}(\beta)$ from $P_{\Gamma}(\alpha)$ via a sequence of one-point extensions as follows. Firstly, add a new point $p_{(\alpha, 0)}$ under the line $p_2 \vee 1'$ (using Fact \[fact\] with $L = \{p_2 \vee 1'\}$). Secondly, add a new point $p_{(\alpha, 1)}$ under the line $0 \vee 1'$ (using Fact \[fact\] with $L = \{0 \vee 1'\}$). Thirdly, add a new point $p_{(\alpha, 2)}$ under the line $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$ (using Fact \[fact\] with $L = \{ p_{(\alpha, 0)} \vee p_{(\alpha, 1)}\}$). Fourthly, for every $e = \{ v_{\delta}, v_{\alpha} \} \in E_{\beta}$ add a point $p_e$ under the parallel lines $p_{(\delta, 0)} \vee p_{(\delta, 1)}$ and $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$ (using Fact \[fact\] with $L = \{ p_{(\delta, 0)} \vee p_{(\delta, 1)}, p_{(\alpha, 0)} \vee p_{(\alpha, 1)} \}$). Let $P_{\Gamma}(\lambda) = P_{\Gamma}$. First of all, by (\[equation\_points\]), the size of $P_{\Gamma}$ is clearly as wanted. Also, if $p \notin P_*$, then, by construction, $p$ is incident with at most two non-trivial lines. Furthermore, the construction of $P_{\Gamma}$ from $\Gamma$ is explicit, and so, restricted to structures with domain $\omega$, the map $\Gamma \mapsto P_{\Gamma}$ is easily seen to be Borel, since to know a finite substructure of $P_\Gamma$ it is enough to know a finite part of $\Gamma$. Thus, we are only left to show items (\[auto\]) and (\[iso\_invariance\]) of the statement of the theorem. To this extent, first of all notice that, letting $p_{(\alpha, 0)} \vee p_{(\alpha, 1)} = \ell_{\alpha}$ (for $\alpha < \lambda$), we have: 1. the set of lines $\{ \ell_{\alpha} : \alpha < \lambda \}$ of $P_{\Gamma}$ with edge relation $\ell_\alpha E \ell_\beta$ if and only if $\ell_\alpha \wedge \ell_\beta \neq 0$ (i.e. the two lines intersect) is isomorphic to $\Gamma$. Now, for a point $p$ let $\varphi(p)$ be the following statement: 1. $p$ is incident with exactly four distinct non-trivial lines, or $p$ is incident with a non-trivial line $\ell$ which contains a point $p'$ which is incident with four distinct non-trivial lines. 
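The key combinatorial point $(\star)_1$ can be checked mechanically on small examples. The following Python sketch encodes only the graph-dependent part of $P_{\Gamma}$ (the $17$ points of $P_*$ and all trivial lines are omitted), with ad hoc labels for the points $p_{(\alpha, j)}$ and $p_e$; it verifies that two lines $\ell_{\alpha}, \ell_{\beta}$ meet exactly when $\{\alpha, \beta\}$ is an edge.

```python
import itertools

def gamma_lines(vertices, edges):
    """The non-trivial lines ell_alpha of the graph-dependent part of P_Gamma,
    each encoded as the set of points incident with it."""
    lines = {}
    for a in vertices:
        pts = {(a, 0), (a, 1), (a, 2)}               # p_(a,0), p_(a,1), p_(a,2)
        pts |= {("e", e) for e in edges if a in e}    # p_e for the edges through a
        lines[a] = frozenset(pts)
    return lines

# A small example graph: the path 0 - 1 - 2 - 3.
V = [0, 1, 2, 3]
E = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]

L = gamma_lines(V, E)
ok = all(bool(L[a] & L[b]) == (frozenset({a, b}) in E)
         for a, b in itertools.combinations(V, 2))
print("ell_a and ell_b intersect exactly when {a, b} is an edge:", ok)   # True
```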
Notice that for a point $p \in P_{\Gamma}$ we have: 1. $P_{\Gamma} \models \varphi(p)$ if and only if $p \in P_*$. In fact, if the point $p \in P_*$, then either it is the point $q$, in which case there are four distinct non-trivial lines which are incident with it, or we can find a non-trivial line $\ell$ which is incident with the point $p$ and contains the point $p_3$ (this is clear by inspection of Figure \[figure1\]). On the other hand, if the point $p \not\in P_*$, then it is either $p_{(\alpha, 0)}$, $p_{(\alpha, 1)}$, $p_{(\alpha, 2)}$, or $p_e$, for some $\alpha < \lambda$ and $e \in E_{\Gamma}$. Notice now that: 1. if $p = p_{(\alpha, 0)}$, then $p$ is incident with exactly two non-trivial lines, namely the lines $p_2 \vee 1'$ and $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$; <!-- --> 1. if $p = p_{(\alpha, 1)}$, then $p$ is incident with exactly two non-trivial lines, namely the lines $0 \vee 1'$ and $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$; <!-- --> 1. the point $p_2$ is incident with exactly two non-trivial lines, namely the line $p_2 \vee 0$ and the line $p_2 \vee 1'$; the point $0$ is incident with exactly three non-trivial lines, namely the lines $p_2 \vee 0$, $0 \vee 0'$ and $0 \vee 1'$; the point $1'$ is incident with exactly three non trivial lines, namely the lines $1' \vee 0'$, $1' \vee 1_0$ and $1' \vee 2_1$; <!-- --> 1. if $p = p_{e}$ and $e = \{ v_{\delta}, v_{\alpha} \}$, then $p_{e}$ is incident with exactly two non-trivial lines, namely the lines $p_{(\delta, 0)} \vee p_{(\delta, 1)}$ and $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$; <!-- --> 1. for $\alpha < \lambda$, the set of points incident with the line $p_{(\alpha, 0)} \vee p_{(\alpha, 1)}$ is: $$\{ p_{(\alpha, 0)}, p_{(\alpha, 1)}, p_{(\alpha, 2)} \} \cup \{ p_{e} : p_{\alpha} \in e \in E_{\Gamma} \};$$ <!-- --> 1. if $\alpha \neq \beta < \lambda$, then $p_{(\alpha, 0)} \vee p_{(\beta, 1)}$ is a trivial line. Thus, by $(\star_3)$-$(\star_8)$, it is clear that for $p \notin P_*$ we have that $P_{\Gamma} \not\models \varphi(p)$. We now prove (\[iso\_invariance\]). Let $f: P_{\Gamma_1} \cong P_{\Gamma_2}$, $|\Gamma_1| = \lambda$ and, for $i = 1, 2$, let the set of points of $P_{\Gamma_i}$ be: $$\{ (p, i) : p \in P_* \} \cup \{ p^i_{(\alpha, j)} : j < 3, \alpha < \lambda \} \cup \{ p^i_e : e \in E_{\Gamma_i} \},$$ (cf. $(*)$ above). By $(\star)_2$, we have that $f$ restricted to $\{ (p, 1) : p \in P_* \}$ is an isomorphism from $\{ (p, 1) : p \in P_* \}$ onto $\{ (p, 2) : p \in P_* \}$, and so, as $P_*$ is rigid, for every $p \in P_*$ we have that $f((p, 1)) = (p, 2)$. In particular, the line $(p_2, 1) \vee (1', 1)$ is mapped to the line $(p_2, 2) \vee (1', 2)$, and the line $(0, 1) \vee (1', 1)$ is mapped to the line $(0, 2) \vee (1', 2)$. Thus, $f$ maps $\{ p^1_{(\alpha, 0)} : \alpha < \lambda \}$ onto $\{ p^2_{(\alpha, 0)} : \alpha < \lambda \}$ and $\{ p^1_{(\alpha, 1)} : \alpha < \lambda \}$ onto $\{ p^2_{(\alpha, 1)} : \alpha < \lambda \}$. Also, by $(\star_6)$, $f$ maps $\{ p^1_{(\alpha, 2)} : \alpha < \lambda \}$ onto $\{ p^2_{(\alpha, 2)} : \alpha < \lambda \}$. Finally, if $\alpha \neq \beta < \lambda$ and $f(p^1_{(\alpha, 0)}) = p^1_{(\beta, 0)}$, then $f(p^1_{(\alpha, 1)}) = p^1_{(\beta, 1)}$, since otherwise $f$ would send the non-trivial line $p^1_{(\alpha, 0)} \vee p^1_{(\alpha, 1)}$ to a trivial line (cf. $(\star_8)$). 
Thus, $f$ induces a bijection: $$f_* : \{ p^1_{(\alpha, 0)} \vee p^1_{(\alpha, 1)} : \alpha < \lambda \} \rightarrow \{ p^2_{(\alpha, 0)} \vee p^2_{(\alpha, 1)} : \alpha < \lambda \}.$$ Hence, by $(\star)_1$, the map $f_*$ induces an isomorphism from $\Gamma_1$ onto $\Gamma_2$, since clearly the isomorphism $f$ sends pairs of intersecting lines to pairs of intersecting lines. Finally, item (\[auto\]) is clear from the proof of item (\[iso\_invariance\]). Proof of Theorem \[main\_theorem\_proj\] ======================================== In this section we prove Theorem \[main\_theorem\_proj\]. We denote by $Q$ the plane represented in the matrix in Figure \[myfigure\], where the letters occurring in the matrix represent the points of $Q$, and the columns of the matrix represent the lines of $Q$. The plane $Q$ is taken from [@projective] (cf. [@projective Diagram 1]), where it is attributed to S. Ditor. $$\begin{bmatrix} a & c & e & a & b & d & d & c & e & a & b \\ b & n & o & f & k & n & o & k & m & k & n \\ c & l & l & g & l & k & m & g & g & o & o \\ d & f & f & h & m & f & h & & & & \\ e & & & & & & & & & & \end{bmatrix}$$ \[strategy\] In proving Theorem \[main\_theorem\_proj\] we will follow the following strategy: (1) for $\Gamma$ an infinite graph, consider the $P_{\Gamma}$ of Theorem \[main\_theorem\] and extend it to a $P^+_{\Gamma}$ adding independent copies of the plane $Q$ (cf. Figure \[myfigure\]) at each point not in a finite confined subplane (cf. Definition \[def\_conf\](2)), and then adding independent copies of $Q$ at each line not in a finite confined subplane, repeating this process for lines $\omega$-many times (for points one application of the process suffices); (2) observe that, restricted to structures with domain $\omega$, the set of $P_{\Gamma}$’s is Borel and that the map $\Gamma \mapsto P_{\Gamma} \mapsto P^+_{\Gamma}$ is Borel; (3) prove that $\Gamma \mapsto P^+_{\Gamma}$ is isomorphism invariant and that $Aut(\Gamma) \cong Aut(P^+_{\Gamma})$; (4) observe that, restricted to structures with domain $\omega$, the map $P \mapsto F(P)$ (cf. Definition \[def\_free\_ext\]) is Borel; (5) consider the free projective extension $F(P^+_{\Gamma})$ of $P^+_{\Gamma}$, and use Fact \[desargue\_fact\] for non-Desarguesianess, Fact \[piper\_fact1\] for isomorphism invariance, and Fact \[piper\_fact2\] for: $$Aut(\Gamma) \cong Aut(F(P^+_{\Gamma})).$$ First of all we deal with Strategy \[strategy\](4): \[Borel\_th\] Restricted to structures with domain $\omega$, the map $P \mapsto F(P)$ associating to each plane its free projective extension is a Borel map. Essentially as in the proof of Theorem \[main\_theorem\]. Before proving Theorem \[main\_theorem\_proj\] we isolate two constructions which will be crucially used in implementing Strategy \[strategy\](1). \[construction1\] Let $P$ be a plane and $p$ a point of $P$. We define $P(p, Q, a)$ as the extension of $P$ obtained by adding an independent copy of $Q$ to $P$ identifying the point $p$ of $P$ and the point $a$ of $Q$, in such a way that if $p'$ is a point of $P$ different than $p$, and $q$ is a point of $Q$ different than $a$, then $p' \vee q$ is a trivial line. \[construction2\] Let $P$ be a plane and $\ell$ a line of $P$. 
We define $P(\ell, Q, a \vee b)$ as the extension of $P$ obtained by adding an independent copy of $Q$ to $P$ identifying the line $\ell$ of $P$ and the line $a \vee b$ of $Q$, in such a way that if $p'$ is a point of $P$ not on $\ell$, and $q$ is a point of $Q$ not on $a \vee b$, then $p' \vee q$ is a trivial line. The construction of $P(p, Q, a)$ and $P(\ell, Q, a \vee b)$ from $P$ can be formally justified using Fact \[fact\]. We elaborate on this: (i) Concerning the case $P(p, Q, a)$. Add two generic points[^3] $b$ and $f$ to $P$, corresponding to the points $b$ and $f$ of $Q$. Then $\langle p, b, f \rangle_P \cong \langle a, b, f \rangle_Q$ is a copy of the simple matroid of rank $3$ and size $3$. Now construct a copy of $Q$ in $P$ from $\{ p, b, f \}$ point by point, following how $Q$ is constructed from $\{ a, b, f \}$ point by point. Notice that the order in which we do this does not matter. (ii) Concerning the case $P(\ell, Q, a \vee b)$. First of all, let $p$ and $q$ be points of $P$ such that $p \vee q = \ell$. Now, add one generic point[^4] $f$ to $P$, corresponding to the point $f$ of $Q$. Then $\langle p, q, f \rangle_P \cong \langle a, b, f \rangle_Q$ is a copy of the simple matroid of rank $3$ and size $3$. Now construct a copy of $Q$ in $P$ from $\{ p, q, f \}$ point by point, following how $Q$ is constructed from $\{ a, b, f \}$ point by point. Notice that neither the choice of $p$ and $q$ nor the order in which we construct the copy of $Q$ in $P$ from $\{ p, q, f \}$ matters, as observed also in (i). We follow the strategy delineated in Strategy \[strategy\]. Let $\Gamma$ be an infinite graph and $P_{\Gamma}$ be the respective plane from Theorem \[main\_theorem\]. We define $P^+_{\Gamma}$ as the union of a chain of planes $(P^n_{\Gamma}: n < \omega)$, defined by induction on $n < \omega$. $n = 0$. Let $\{ p_{\alpha} : 0 < \alpha < \kappa \}$ be an injective enumeration of the points of $P_{\Gamma}$ not in a finite confined configuration (notice that there are infinitely many such points in $P_{\Gamma}$). Let then: (i) $P^{(0, 0)}_{\Gamma} = P_{\Gamma}$; (ii) $P^{(0, \alpha)}_{\Gamma} = P^{(0, \alpha-1)}_{\Gamma}(p_{\alpha}, Q, a)$, for $0 <\alpha < \kappa$ successor (cf. Construction \[construction1\]); (iii) $P^{(0, \alpha)}_{\Gamma} = \bigcup_{\beta < \alpha} P^{(0, \beta)}_{\Gamma}$, for $\alpha$ limit; (iv) $P^0_{\Gamma} = \bigcup_{\alpha < \kappa} P^{(0, \alpha)}_{\Gamma}$. (Notice that the choice of the enumeration $\{ p_{\alpha} : 0 < \alpha < \kappa \}$ does not matter, since the copies of $Q$ that we add at every point are independent. In particular, in the countable case we can take the enumeration to be Borel. Furthermore, we now have that every point of $P^0_{\Gamma}$ is contained in a finite confined subplane of $P^0_{\Gamma}$.) $n > 0$. Let $\{ \ell_{\alpha} : 0 < \alpha < \mu \}$ be an injective enumeration of the lines of $P^{n-1}_{\Gamma}$ not in a finite confined configuration (notice that there are infinitely many such lines in $P^{n-1}_{\Gamma}$; this is true for $n - 1 = 0$, and it is preserved by the induction). Let then: (i) $P^{(n, 0)}_{\Gamma} = P^{n-1}_{\Gamma}$; (ii) $P^{(n, \alpha)}_{\Gamma} = P^{(n, \alpha-1)}_{\Gamma}(\ell_{\alpha}, Q, a \vee b)$, for $0 <\alpha < \mu$ successor (cf. Construction \[construction2\]); (iii) $P^{(n, \alpha)}_{\Gamma} = \bigcup_{\beta < \alpha} P^{(n, \beta)}_{\Gamma}$, for $\alpha$ limit; (iv) $P^n_{\Gamma} = \bigcup_{\alpha < \mu} P^{(n, \alpha)}_{\Gamma}$. 
(Notice that also in this case the choice of the enumeration $\{ \ell_{\alpha} : 0 < \alpha < \mu \}$ does not matter, since the copies of $Q$ that we add at every line are independent. In particular, in the countable case we can take the enumeration to be Borel. Furthermore, inductively, we maintain the condition that every point of $P^n_{\Gamma}$ is contained in a finite confined subplane of $P^n_{\Gamma}$ (although this is not true for lines).) Let then $P^+_{\Gamma} = \bigcup_{n < \omega} P^n_{\Gamma}$. First of all, observe that the class of $P_{\Gamma}$’s ($P_\Gamma$ and $\Gamma$ with domain $\omega$) is Borel, since the appropriate restriction of the map $\Gamma \mapsto P_{\Gamma}$ is injective, in fact if $\Gamma \neq \Gamma'$, then there are $n \neq k \in \omega$ such that $n E_{\Gamma} k$ and $n \!\!\not\!\!E_{\Gamma'} k$ (by symmetry) and so in $P_{\Gamma}$ the (codes of the) lines $p_{(n, 0)} \vee p_{(n, 1)}$ and $p_{(k, 0)} \vee p_{(k, 1)}$ are incident while in $\Gamma'$ they are parallel. Furthermore, by the uniformity of the construction, the map $P^+_{\Gamma}$ from $P_{\Gamma}$ is Borel, when restricted to structures with domain $\omega$. Also, notice that the plane $P^+_{\Gamma}$ is confined and not projective, and so if we manage to complete Strategy \[strategy\](3), then by Lemma \[Borel\_th\] and Facts \[desargue\_fact\], \[piper\_fact1\] and \[piper\_fact2\] we are done (as delineated in Strategy \[strategy\](4-5)). We are then only left with Strategy \[strategy\](3). To this extent notice that: 1. the points from $P^+_{\Gamma}$ which are incident with at least four non-trivial lines are exactly the points of $P_{\Gamma}$. Thus, from $(\star_1)$ it is clear that if $P^+_{\Gamma_1} \cong P^+_{\Gamma_2}$, then $P_{\Gamma_1} \cong P_{\Gamma_2}$, which in turn implies that $\Gamma_1 \cong \Gamma_2$ (cf. Theorem \[main\_theorem\](5)). Furthermore, using again $(\star_1)$, and the fact that by [@projective Lemma 1] the plane $Q$ has trivial automorphism group, it is easy to see that: 1. every $f \in Aut(P^+_{\Gamma})$ is induced by a $f^- \in Aut(P_{\Gamma})$; <!-- --> 1. every $f \in Aut(P_{\Gamma})$ extends uniquely to a $f^+ \in Aut(P^+_{\Gamma})$. Thus, we have that $Aut(P^+_{\Gamma}) \cong Aut(P_{\Gamma}) \cong Aut(\Gamma)$, by Theorem \[main\_theorem\](5). Other proofs ============ Corollary \[main\_cor2\] is a standard consequence of Theorems \[main\_theorem\] and \[main\_theorem\_proj\] (see e.g. [@friedman] and [@gao] for an overview on Borel completeness, and [@ulm_inv_paper] for Ulm invariants). Also, Corollary \[main\_cor\] follows from Theorems \[main\_theorem\] and \[main\_theorem\_proj\] and the following fact: (1) For every finite structure $M$ (in the sense of model theory) there exists a finite graph $\Gamma_M$ such that $Aut(\Gamma_M) \cong Aut(M)$. (2) For every infinite structure $M$ (in the sense of model theory) there exists a graph $\Gamma_M$ of the same cardinality of $M$ such that $Aut(\Gamma_M) \cong Aut(M)$. Finally, we prove Theorem \[des\_theorem\]. To this extent we need the following fact. \[field\_fact\] The class of countable fields is Borel complete. Immediate from Facts \[pappian\_field\_fact\], \[pappian\_fact\] and \[field\_fact\]. [10]{} Joseph E. Bonin and Joseph P. S. Kung. . Geom. Dedicata [**50**]{} (1994), no. 3, 243-246. Riccardo Camerlo and Su Gao. . Trans. Amer. Math. Soc. [**353**]{} (2001), no. 2, 491-518. Henry H. Crapo and Gian-Carlo Rota. . M.I.T. Press, Cambridge, Mass, 1970. Henry H. Crapo. . J. Res. Nat. Bur. 
Standards Sect. B [**69B**]{} (1965), 55-65. Johannes H. de Groot. . Math. Ann. [**138**]{} (1959), 80-102. Harvey Friedman and Lee Stanley. . J. Symbolic Logic [**54**]{} (1989), no. 3, 894-914. Robert Frucht. . Compositio Math. [**6**]{} (1939), 239-250. Robert Frucht. . Canadian J. Math. [**1**]{} (1949), 365-378. Su Gao. . Pure and Applied Mathematics (Boca Raton), 293. CRC Press, Boca Raton, FL, 2009. Marshall Hall. . Trans. Amer. Math. Soc. [**54**]{} (1943), 229-277. Frank Harary, Mike J. Piff, and Dominic J. A. Welsh. . Discrete Math. [**2**]{} (1972), 163-171. Greg Hjorth and Alexander S. Kechris. . J. Symbolic Logic [**60**]{} (1995), no. 4, 1273-1300. Daniel R. Hughes and Fred C. Piper. . Graduate Texts in Mathematics, Vol. 6. Springer-Verlag, New York-Berlin, 1973. Tapani Hyttinen and Gianluca Paolini. . Ann. Pure Appl. Logic [**169**]{} (2018), no. 2, 117-145. Joseph P. S. Kung. . Birkhäuser Boston, Inc., Boston, MA, 1986. Eric Mendelsohn. . J. Geometry [**2**]{} (1972), 97-106. Gert Sabidussi. . Monatsh. Math. [**64**]{} (1960), 64-67. Fredrick W. Stevenson. . W. H. Freeman and Co., San Francisco, Calif., 1972. [^1]: Partially supported by European Research Council grant 338821. [^2]: Notice that Pappian planes are Desarguesian. [^3]: I.e. $b$ and $f$ are not incident with any line of $P$. [^4]: I.e. $f$ is not incident with any line of $P$.
--- abstract: 'In order to capture as much information as possible large galaxy surveys have been increasing their volume and redshift depth. To face this challenge theory has responded by making cosmological simulations of huge computational volumes with equally increasing the number of dark matter particles and supercomputing resources. Thus, it is taken for granted that the ideal situation is when a single computational box encompasses the whole volume of the observational survey, e.g., $\sim 50\, h^{-3}{\rm Gpc}^3$ for the DESI and Euclid surveys. Here we study the effects of missing long-waves in a finite volume using several relevant statistics: the abundance of dark matter halos, the PDF, the correlation function and power spectrum, and covariance matrices. Finite volume effects can substantially modify the results if the computational volumes are less than $\sim (500\Mpch)^3$. However, the effects become extremely small and practically can be ignored when the box-size exceeds $\sim 1$Gpc$^3$. We find that the average power spectra of dark matter fluctuations show remarkable lack of dependence on the computational box-size with less than 0.1% differences between $1\Gpch$ and $4\Gpch$ boxes. No measurable differences are expected for the halo mass functions for these volumes. The covariance matrices are scaled trivially with volume, and small corrections due to super-sample modes can be added. We conclude that there is no need to make those extremely large simulations when a box-size of $1-1.5\Gpch$ is sufficient to fulfil most of the survey science requirements.' author: - | Anatoly Klypin$^{1,2}$[^1] and Francisco Prada$^{3}$\ \ $^1$ Astronomy Department, New Mexico State University, Las Cruces, NM, USA\ $^2$ Department of Astronomy, University of Virginia, Charlottesville, VA, USA\ $^3$ Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía, E-18080 Granada, Spain\ bibliography: - 'Box.bib' title: 'Effects of long-wavelength fluctuations in large galaxy surveys' --- \[firstpage\] cosmology: Large scale structure - dark matter - galaxies: halos - methods: numerical Introduction {#sec:intro} ============ Large-scale galaxy surveys such as the existing 2dFGRS [@Hawkins2003], the SDSS ([e.g., @Anderson2012; @eBOSS], and the upcoming DESI [@DESI2016], Euclid [@Euclid], LSST [@LSST], and WFIRST [@WFIRST] are important for measuring cosmological parameters of our Universe, for studying the evolution of galaxies, and for unveiling the nature of dark matter and dark energy. In order to capture as much information as possible those survey observations have been increasing their volume and redshift depth. For example, the detection of the Baryonic Acoustic Oscillations (BAO) in the distribution of Luminous Red Galaxies (LRG) in the SDSS survey [@SDSSBAO] was based on 46,768 galaxies in a volume $0.72\,h^{-3}{\rm Gpc}^3$. The BOSS measurements of cosmological parameters are based on 1.2 million LRGs in a volume of $5.8\,h^{-3}{\rm Gpc}^3$ [@Alam2017]. The volume of the DESI/Euclid and LSST surveys will be $\sim 50\,h^{-3}{\rm Gpc}^3$ and $\sim 100\,h^{-3}{\rm Gpc}^3$ respectively. Theory has responded to this enormous survey volumes by making cosmological simulations of huge computational volumes with equally increasing the number of dark matter particles and the supercomputing resources. 
The Euclid Flagship Simulation, DarkSky and Outer Rim, with more than one trillion particles in a volume of 5-8 $h^{-3}{\rm Gpc}^3$ on a side, are good examples of the state-of-the-art achievements made recently in this field [@PKDGRAV3; @DarkSky; @Habib2016]. [ l | r | c | l | c | c | c | r |r |r ]{} Simulation & Box & particles & $m_p$ & ${\mbox{$N_{\rm g}$}}^3$ & $\epsilon$ & $N_{\rm s}$ & $\sigma_8$ & $N_r$ & Refs. A0.5 & 500$^3$ & 1200$^3$ & $6.16\times 10^9$ & 2400$^3$ & 0.208 & 181 & 0.822 & 680 & 1 A1 & 960$^3$ & 1200$^3$ & $4.46\times 10^{10}$ & 2400$^3$ & 0.400 & 136 & 0.822 & 2532 & 1 A1.5 & 1500$^3$ & 1200$^3$ & $1.66\times 10^{11}$ & 2400$^3$ & 0.625 & 136 & 0.822 & 4513 & 1 A2.5 & 2500$^3$ & 1000$^3$ & $1.33\times 10^{12}$ & 2000$^3$ & 1.250 & 136 & 0.822 & 1960 & 1 A2.5c & 2500$^3$ & 1000$^3$ & $1.33\times 10^{12}$ & 2000$^3$ & 1.250 & 285 & 0.822 & 1600 & 1 C1.2 & 1200$^3$ & 1000$^3$ & $1.47\times 10^{11}$ & 3000$^3$ & 0.400 & 136 & 0.822 & 100 & 3 D0.25 & 250$^3$ & 1000$^3$ & $1.33\times 10^9$ & 2000$^3$ & 0.125 & 181 & 0.822 & 120 & 3 D2.75 & 2750$^3$ & 1100$^3$ & $1.33\times 10^{12}$ & 4400$^3$ & 0.6250 & 136 & 0.822 & 22 & 3 D4 & 4000$^3$ & 2000$^3$ & $6.82\times 10^{11}$ & 4000$^3$ & 1.000 & 136 & 0.822 & 100 & 3 MDPL & 1000$^3$ & 3840$^3$ & $1.5 \times 10^{9}$ & – & 0.010 & – & 0.828 & 1 & 2 HMDPL & 4000$^3$ & 3840$^3$ & $7.9 \times 10^{10}$ & – & 0.025 & – & 0.828 & 1 & 2 \[table:simtable\] Effects of computational box size were the topic of extensive discussions for the last few decades with introduction of different ideas and presentation of numerical results [e.g., @Tormen1996; @Cole1997; @Klypin1996; @Jenkins1998; @Tinker2008; @Angulo2010; @Klypin2016]. In the modern field of large cosmological simulations it is taken for granted that the ideal situation is when the volume of a single computational box covers the whole effective volume of the observational survey [e.g., @DarkSky; @Comparat; @PKDGRAV3; @Habib2016]. But why is this true? It is clear why galaxy surveys must be large: we need to have as much information as possible, and the only way to do it is to increase the volume of the galaxy sample. However, what is the reason to have a single simulation box with a computational volume as large as possible? In the sense of statistics of matter density fluctuations (and related abundance of halos, voids, filaments and so on), one can produce as many realizations of the “universe” as needed in order to match the statistics seen in the observations. In terms of computational complexity (computational cost, access and dissemination of the results) we are in a more comfortable situation with many smaller simulation boxes. In any case we need to make many realizations to estimate noises and covariances – all needed for the data analysis of the surveys. One can list a number of effects related with the finite volume of a simulation box. Those include the impact of periodically replicated images when a small computational box is replicated many times to mimic a large observational survey, and the effect of missing long-waves on the halo mass function, the clustering signal, and the covariance matrixes. Some of these effects have been already discussed in the literature [e.g., @Hu2003; @Warren2006; @DarkSky; @SuperScale; @GLAM]. Here we review the situation, and provide estimates and arguments, regarding the effects of long-waves in cosmological large-scale structure simulations. The starting issue here is what observable one wants to study. 
If waves longer than $\sim 1$ Gpc are probed then there is no other option but to mimic those waves in theoretical estimates by using extreme computational volumes comparable to the size of the observable universe. Examples of these type of observables are the measurements of the power spectrum of fluctuations for wave-numbers $k{\mbox{${\,\hbox{\hbox{$ < $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.001\kMpch$ or the two-point correlation function at $\sim 1\Gpch$ scale. In this case the computational volume must be extremely large. However, in most of the cases the observables may not [*explicitly*]{} involve extremely long-waves. Consider as an example the abundance of very massive (${\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}10^{15}M_\odot$) clusters of galaxies. Clusters themselves have radii $\sim 2$Mpc and gather mass from $\sim 10$Mpc regions around them. So, the clusters are relatively small objects. However, their abundance depends [*implicitly*]{} on longer waves because those waves non-linearly couple with $\sim 10$ Mpc waves, which are responsible for the formation of the clusters. Another relevant example is the study of the Baryonic Acoustic Oscillations (BAO). The BAOs manifest themselves as a peak in the correlation function at pair separation of $\sim 100\Mpch$. Again, the signal of the peak is relatively small, but may [*implicitly*]{} depend on very long-waves through non-linear interactions. The goal of this paper is to estimate the impact of missing long-waves in finite volume simulations on some important statistics that depend implicitly on those long waves. This paper is structured as follows. We give a short introduction in Section 1. In Section 2 we present the suite of simulations used in this work. Methods and definitions are discussed in Section 3, and the impact of box replication is described in Section 4. The missing power estimates due to the lack of long-waves in the computational simulation box are given in Section 5, and the results of the impact on other statistics such as the correlation function, PDF, halo abundances, power spectrum and covariance matrix are presented on Sections 6, 7, 8 and 9. We study Super Scale Covariances (SSC) in Section 10. Finally we conclude and summarise our results in Section 11. Simulations {#sec:sim} =========== Most of the results presented in this paper are based on cosmological $N$-body simulations. In Table \[table:simtable\] we present the numerical parameters of our simulation suite: box-size, number of particles, mass of a particle $m_p$, number of mesh points $N_g^3 $ (if relevant), cell-size of the density/force mesh $\epsilon$, the number of time-steps $N_s$, cosmological parameters $\sigma_8$ and $\Omega_m$, and number of realizations $N_r$. Different codes were used to make those simulations. The MultiDark Planck $1\,\Gpch$  MDPL2 and $4\,\Gpch$ HMDPL simulations [@Klypin2016] were done with the <span style="font-variant:small-caps;">gadget-2</span> code [@Gadget2]. The other simulations were carried out with the parallel Particle-Mesh code [@GLAM]. Because the code is much faster than <span style="font-variant:small-caps;">gadget-2</span>, we have done many realisations of the simulations with the same cosmological and numerical parameters that only differ by initial random seed. All the  simulations were started at initial redshift $z_{\rm init}=100$ using the Zeldovich approximation. 
These simulations span three orders of magnitude in mass resolution, a factor of one hundred in force resolution, and differ by a factor of $10^5$ in effective volume. The differences in box-size are large, which is important for the analysis done in this paper, i.e., from $L=250\Mpch$ to $L=4\Gpch$. We did not study smaller boxes because simulations with $L \lesssim 250\Mpch$ become impractical for large-scale structure studies even if finite box-size effects were corrected. They would also require too much replication to fill the observational volume. As we show below, the box-size effects become too severe in those small boxes for relevant statistics such as the correlation function at the BAO peak and the abundance of clusters of galaxies. All simulations and analytical results presented in this work use the same cosmological parameters: a flat LCDM Planck cosmology with $\Omega_m=0.307$, $h=0.67$. Methods and definitions {#sec:methods} ======================= A finite box-size $L$ – either in simulations or in analytical estimates – yields an important parameter: the fundamental wavenumber, i.e., $$\kbox = \frac{2\pi}{L}.$$ In order to estimate the matter power spectrum $P(k)$ from the simulations we generate the dark matter density field on a 3D-mesh of size $N_g^3$ (see Table \[table:simtable\]). The Cloud-In-Cell (CIC) density assignment is used to estimate the density field. We then apply an FFT to generate the amplitudes of $N_g^3$ Fourier harmonics. The minimum spacing of the harmonics in phase-space is $\Delta k = \kbox$. The power spectrum is obtained on a 1D-mesh with constant binning equal to $\kbox$. Each harmonic contributes to two mesh elements with the weights obtained using the CIC interpolation scheme, in the same fashion as that used for the density assignment [@GLAM]. This binning procedure reduces the noise in the power spectrum by $\sim 30\%$. The power spectrum is corrected for the aliasing due to the CIC density assignment. The covariance matrix $C(k,k^\prime)$ of the power spectrum is defined as a reduced cross product of the power spectra at different wave-numbers $k$ and $k^\prime$ for the same realisation, averaged over different realisations: $$C(k,k^\prime) = \langle P(k)P(k^\prime)\rangle - \langle P(k)\rangle \langle P(k^\prime)\rangle . \label{eq:cov}$$ The covariance matrix is typically normalized by the average amplitude of the diagonal components and plotted as $[C(k,k^\prime)/P(k)P(k^\prime)]^{1/2}$. When estimating the density distribution function (PDF) for a given simulation we use a different 3D-mesh size $N$, not necessarily equal to the mesh size of the simulation itself. The CIC density scheme is applied for every mesh size used. Once the overdensity field is created, the values of the overdensity $\rho = \rho_{\rm DM}/\langle \rho_{\rm DM} \rangle$ are binned using logarithmically spaced bins with width $\Delta\log_{10}(\rho) =0.025-0.050$. The PDF is then defined as a normalized number $\Delta N$ of cells with overdensity in the range $[\rho,\rho+\Delta\rho]$, i.e., $$P(\rho) = \frac{\Delta N}{N^3\Delta\rho}. \label{eq:PDF}$$ By construction, the PDF is normalized so that the total volume and the total mass density are equal to unity.
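For concreteness, the PDF estimate of eq. (\[eq:PDF\]) amounts to a log-binned histogram of the overdensity grid. Below is a minimal numpy sketch; the overdensity array `rho` and the bin limits are placeholders of our own choosing, not taken from the actual analysis pipeline.

```python
import numpy as np

def density_pdf(rho, dlog10=0.05, rho_min=1e-3, rho_max=1e5):
    """P(rho) from a CIC overdensity grid rho = rho_DM / <rho_DM>.

    Uses logarithmically spaced bins of width dlog10 and normalises by the
    total number of cells and the linear bin width: Delta N / (N^3 Delta rho).
    """
    edges = 10.0 ** np.arange(np.log10(rho_min), np.log10(rho_max) + dlog10, dlog10)
    counts, _ = np.histogram(rho.ravel(), bins=edges)
    drho = np.diff(edges)                       # linear bin widths Delta rho
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centres, counts / (rho.size * drho)
```

With this normalisation the integral of $P(\rho)\,d\rho$ over the sampled range is close to unity, as stated above.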
The second moment of $P(\rho)$ is the [*rms*]{} fluctuation of the overdensity field and is related to the power spectrum of fluctuations in simulations by $$\sigma^2 = \int^\infty_0(\rho-1)^2P(\rho)d\rho = \frac{1}{2\pi^2}\int_{\kbox}^{k_{\rm Ny}}P(k)W^2(k\Delta x)k^2dk,$$ where $\Delta x = L/N$ is the cell-size of the density field and $k{\rm Ny} = \pi/\Delta x$ is the Nyquist frequency of the mesh. Here $W^2(k\Delta x)$ is the power spectrum of the CIC filter with cell size $\Delta x$. Depending on the cell size the PDF can have a very wide range of values. For the relatively small cell-sizes $\Delta x = (1-5)\Mpch$ used in this paper the leading term in the PDF is $P(\rho)\propto \rho^{-2}$ [@Bouchet1991; @PDF]. In order to reduce the dynamical range of the PDF we typically plot $\rho^2P(\rho)$. Effects of box replications {#sec:replicate} =========================== If the computational box of the simulation is smaller than the volume of a given galaxy survey, [**the same**]{} simulation box must be replicated enough times to cover the entire observed region. Note that in order to avoid defects at the boundaries of the box, the same realization is replicated. Box replications increase the apparent volume of the sample as compared to the volume of a single simulation. However, they do not add new information: it is still the same as in the original simulation. For example, if long-waves were absent in the simulation box, they will be absent in the replications. Nothing wrong with this: it is understood that something will be missing if the replication is applied to a finite volume simulation. The main question is: will the replication procedure produce any defects? One can imagine some possible issues. We start with the obvious one: the same structure will be observed again and again due to the periodical replication. Figure \[fig:4gpc\] illustrates the situation. Here we use halos drawn from the MDPL ($1\Gpch$) and HMDPL ($4\Gpch$) simulations with virial masses larger than $M>10^{14}h^{-1}M_\odot$. We assume that in this case the observational “sample” has a depth of $2000\Mpch$ and we also show halos in a somewhat arbitrary chosen (but large) $400\Mpch$ slice. The bottom panel shows halos selected from the much larger HMDPL simulation. No replication is needed in this case because the HMDPL simulation covers the whole “observed” volume. The situation is different in the case of the $1\Gpch$ MDPL simulation that requires 8 replications: 4 times along the x-axis and two along the y-axis. Indeed one clearly sees the effects of the replications (see top panel in Figure \[fig:4gpc\]) . This is obviously not a pleasant feature: the real universe should not look like that. However, is it really a problem? Once we agreed (or found) that waves longer than the computational box are not important, then there is nothing wrong with the top panel in Figure \[fig:4gpc\]. What we perceive as a defect in the plot is just a way for our brain to tell us that there are no waves longer than $1\Gpch$. Indeed, if we had analysed the new (replicated) sample and ignored the effects of sample boundaries, we would have found the same properties as in the original small volume simulation – the same halo abundances, peculiar velocities, correlation function, and the same power spectrum truncated at the fundamental mode of the simulation box. There is a simple way to remedy the visual problems with the replications. 
One needs to rotate the stacked simulations before making mock observational samples: the same realization is stacked and the resulting distribution is rotated. We illustrate this by rotating twice the stacked distribution of halos in the $1\Gpch$ MDPL simulation. We first rotate by some angle ($\sim 30^o-60^o$) the distribution along the y-axis (the vertical axis in Figure \[fig:4gpc\]), and then by another angle along the x-axis (horizontal axis in the same plot). After the rotations are done, we make the same slice as described before. Figure \[fig:4gpcrotate\] shows two examples of mock samples produced in this way. The plots do not show any visual defects of the replications. Just as in the case of a simple replication, the rotated stacked distribution does not bring new information. For example, if we estimate the power spectrum of fluctuations of the rotated and stacked distribution, we will find the same power spectrum as that found in the original $1\Gpch$ box with shifted angles of the harmonics. The other potential issue with the replication process is repeating structures (halos, voids, filaments) along the line-of-sight. An example is the study of the weak-lensing signal produced by clusters of galaxies or individual galaxies. In order to mimic observations, the same simulation can be repeated many times (stacked) along the line-of-sight with an “observer” placed on the line going through the centres of the aligned boxes. If the box is small and the observer is at a large distance from the lens, then every object will be found replicated many times along the line-of-sight, which constitutes a serious defect for the weak-lensing estimates. The key issue here is the size of the simulation. If it is too small, say 100–200Mpc, then indeed the replication is problematic. With the typical distance to lenses of $\sim 1\Gpch$ a small $\sim 100\Mpch$ box will result in almost plane-parallel projection on the sky and, thus, with multiple halos almost exactly along the line-of-sight. The situation is different for large simulations with size $\sim 1\Gpch$. In this case multiple replications are still required, but they do not produce problems for weak-lensing estimates. Figure \[fig:replicate\] schematically illustrates this situation with the replications of a large computational box. To make the problem more transparent we place four objects in a 2D-box of unit size and replicate it 3 times in each direction. The “observer” is placed in the corner of the box and the lines connecting the observer going through each point are shown. Most of the lines do not have periodical images. The only one that does is the object that is exactly along the diagonal. We know which points will have periodical images and which, thus, will have problems with lensing analysis. If $(x, y)$ are the coordinates of the objects, then periodical images will appear if the ratio of the coordinates is a rational number. In other words, if $x/y = i/j$; where $i, j$ are integer numbers. A periodical image appears after $i$ replications along the x-axis and $j$ replications along the y-axis. Because we replicate the simulation box only few times (three for Figure \[fig:replicate\]), we are potentially interested in the cases with small values of $i$ and $j$. In a mathematical sense the probability of an arbitrary $x$ and $y$ to be a rational number is zero. In 3D the situation is even more strict because two ratios $x/y$ and $y/z$ must be rational with small integers. 
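This small-integer criterion is easy to check directly. The sketch below is purely illustrative (the 2D geometry, observer position and tolerance are our own choices, not part of the published analysis): it flags an object whose sight line from the corner observer passes through one of its own periodic images within a given number of replications.

```python
def aligned_image(x, y, n_rep=3, tol=1e-9):
    """Observer at the origin of a unit box replicated n_rep times per axis.

    The sight line through the object at (x, y) passes through a periodic
    image of the same object iff x / y is (numerically) a ratio i / j of
    integers with 1 <= i, j <= n_rep.  Returns (i, j) or None.
    """
    for i in range(1, n_rep + 1):
        for j in range(1, n_rep + 1):
            if abs(x / y - i / j) < tol:
                return i, j
    return None
```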
In practice the chance to have close images along the line-of-sight of the same object are very small and can be found in every case. Missing power {#sec:missing} ============= The size of the computational volume defines another important ingredient: the amplitude of the power missed in the simulation box. The larger is the box the smaller is the missing power and, thus, the simulation closely matches the density fluctuations in the Universe. We can estimate the missing power $\sigma_{\rm miss}$ by integrating the linear power spectrum $P(k)$ from $k=0$ up to the wavenumber given by the fundamental mode of the box $\kbox$, i.e., $$\sigma^2_{\rm miss}(L) =\frac{1}{2\pi^2}\int_0^{\kbox}P(k)k^2dk, \quad \kbox =2\pi/L. \label{eq:miss}$$ The missing power can be computed for any redshift, but here we will do the estimates only for $z=0$. The bottom panel in Figure \[fig:sigma\] shows $\sigma_{\rm miss}(L)$ for different box-sizes $L$. The plot shows that the missing power declines dramatically with increasing box-size. This is expected because at small $k$ the power spectrum $P(k)$ is nearly primordial with slope $\sim 1$. Thus, $\sigma^2 \propto k^4\propto L^{-4}$. While it is easy to estimate $\sigma_{\rm miss}(L)$ numerically, it is convenient to have a simple approximation for large simulation boxes and Planck cosmology: $$\sigma_{\rm miss}(L) \approx \frac{7.5\times 10^{-3}}{L^2_{\rm Gpc}}, \quad L_{\rm Gpc}\equiv\frac{L}{1\Gpch}. \label{eq:missapprox}$$ The other side of this steep decline is that the missing power [ *increases*]{} dramatically for small boxes. For example, for $L=200\Mpch$ the missing power is $\sigma_{\rm miss}\approx 0.1$, which is substantial considering that one expects that non-linear effects (e.g., turn-around for halo formation) become important when the overdensity becomes unity. However, the missing power becomes very small, and falls below $\sigma_{\rm miss}<10^{-2}$, for a $L=1\Gpch$ simulation box. There are different ways of assessing how large is the power missed in a finite box size. The other lines in the bottom panel of Figure \[fig:sigma\] correspond to the power in eq.(\[eq:miss\]) integrated up to a giving wavenumber $k_{\rm cut}$ instead of $\kbox$. We use two values of $k_{\rm cut}$: $0.1\kMpch$ and $0.3\kMpch$ which are characteristic for the domain of the BAO peaks. The full curves are for the total power (infinite box) and the dashed curves are for the power inside the box (with the integrals starting at $\kbox$). Clearly there is not much missing power except for those boxes with $L{\mbox{${\,\hbox{\hbox{$ < $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}200\Mpch$. The top panel in Figure \[fig:sigma\] shows the ratio of missing power in waves with $k<k_{\rm box}$ to the power inside the specified wavenumber indicated in the plot. The missing $rms$ power can be substantial for simulations with boxes smaller than $\sim 200\Mpch$, but it becomes tiny for simulations with boxes larger than $\sim 1\Gpch$. It is also interesting to note that most of the missing power $\sigma_{\rm miss}(L)$ is found in waves that are just a bit longer than the computational box. For example, for a $1\Gpch$ box 95% of the missing power is in waves with wavelengths between $(1-2)\Gpch$ and 88% is in $(1-1.5)\Gpch$ waves. 
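The integral in eq. (\[eq:miss\]) is straightforward to evaluate numerically. A short sketch follows, assuming `pk_lin(k)` returns the $z=0$ linear power spectrum in $({\rm Mpc}/h)^3$ for $k$ in $h\,{\rm Mpc}^{-1}$ (e.g. tabulated with CAMB or CLASS and interpolated); the function itself is generic.

```python
import numpy as np
from scipy.integrate import quad

def sigma_miss(L, pk_lin):
    """rms of the power missed in a periodic box of side L [Mpc/h], eq. (miss)."""
    k_box = 2.0 * np.pi / L
    var, _ = quad(lambda k: pk_lin(k) * k**2 / (2.0 * np.pi**2), 0.0, k_box)
    return np.sqrt(var)

# For a Planck-like linear spectrum this reproduces the approximation
# sigma_miss ~ 7.5e-3 / L_Gpc^2 quoted above for L >~ 1 Gpc/h.
```

Repeating the integral with the lower limit moved from $0$ to a fraction of $\kbox$ shows directly that most of $\sigma_{\rm miss}$ comes from wavelengths only slightly longer than the box.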
These waves cannot be considered constant inside the computational box: a striking contrast with the main presumption of the separate universe simulations [e.g., @SuperScale; @Wagner2015] which assumes that the only long-waves that matter are those that are much longer than the length of the computational box, and, thus, can be treated as a constant background. The $rms$ density fluctuation $\sigma_L$ of the average density inside a box $L$ embedded in an infinite density field is about 5 times smaller than $\sigma_{\rm miss}(L)$. See Section \[sec:SSC\] for more details. Impact on the correlation function {#sec:corr} ================================== Because of the truncation of the power spectrum at the fundamental mode, the finite-size box correlation function of the dark matter is different at large scales from that expected when one assumes an infinite volume [@Sirko2005; @Klypin2013]. The correlation function of the dark matter or that of halos are affected by non-linear processes. Still, their main features (e. g., position of the BAO peak and zero-crossing; see Figure 5 in @Klypin2013) are reproduced by the linear theory with some modifications though. In any case, it is important to estimate how accurately we can even reproduce the linear correlation function. The finite box-size correlation function $\xi(R)$ can be estimated using the power spectrum $P(k)$, i. e., $$\xi(R) = \frac{1}{2\pi^2}\int_{\kbox}^\infty dk k^2 P(k)\frac{\sin(kR)}{kR}.$$ Figure \[fig:correlation\] presents the estimates of the correlation function of the linear dark matter power spectrum for different box-sizes. Just as expected, the differences become small at smaller scales. Indeed, the comparison of the correlation functions of halos in the Bolshoi ($L=250\Mpch$) and MultiDark ($L=1\Gpch$) simulations are also within few percent for $R<10\Mpch$ [@Klypin2013]. At larger scales the box-size effects become more apparent. For example, for a $L=300\Mpch$ box the correlation function is qualitatively incorrect: the whole BAO domain is negative and the zero-crossing scale is twice smaller than it should be (see Figure \[fig:correlation\]). The situation improves when the box-size increases. However, the box-size should be substantially larger than $500\Mpch$ in order to closely match the correlation function of the infinite box. Just as with the estimates of the missing power, the effects due to the missing long-waves dramatically decline with increasing of the box size. Indeed, we can hardly see any impact for $L=1\Gpch$. We can quantify the effect using two statistics: the position of the BAO peak $R_{\rm BAO}$ and the scale of zero-crossing $R_0$. These two parameters are plotted in Figure \[fig:corrMax\]. As we can see, the position of the BAO is remarkably stable. For the $L=500\Mpch$ box the BAO peak is within 0.1% from its pristine location, and the deviations become unmeasurable for larger boxes. This is good news because the BAO position is an important parameter for estimates of the cosmological parameters. It will be modified by non-linear effects, but at least we start with an accurate linear theory position. The zero-crossing is much more sensitive to the box-size with large uncertainties for boxes with $L<500\Mpch$. Still, the error decreases quickly with increasing the box-size, and becomes less than 1% for $L>1\Gpch$. We study also the effects of nonlinear evolution using the C1.2 and D2.75 GLAM simulations at $z=0$. 
Figure \[fig:nonlincor\] presents the average correlation function of dark matter in these simulations for a wide range of radii $R=(1-150)\Mpch$. If the long waves missed in the C1.2 simulation boxes (as compared with the much larger boxes of the D2.75 simulations) were important, we would see stronger clustering in the D2.75 simulations at all scales. However, this does not happen: there are no measurable differences between the C1.2 and the much larger D2.75 simulations for scales $R>5\Mpch$. In order to quantify the differences in the BAO domain, we fit the average correlation functions of each set of simulations with an analytical function – a third-order polynomial of the form: $$\xi_{\rm fit}(R) = \xi_0+a_1x+a_2x^2+a_3x^3, \quad x\equiv R-R_0. \label{eq:xifit}$$ The function has 5 free parameters, with $R_0$ and $\xi_0$ defining the position and amplitude of the peak of the correlation function. After fitting the data in the range of radii $R=(91-113)\Mpch$ we find for the $L=2.75\Gpch$ simulations $\xi_0=1.500$, $R_0=100.30\Mpch$, which is nearly identical (within 0.06% for $R_0$) to the values for the C1.2 simulations: $\xi_0=1.494\pm 0.005$, $R_0=(100.24\pm 0.1)\Mpch$. The only statistically significant differences between the D2.75 and C1.2 correlation functions are observed at small radii $R<5\Mpch$, and are due to the differences in the force resolution. This indicates that the $1.2\Gpch$ box of the C1.2 simulations is large enough to produce accurate results for the scales presented in Figure \[fig:nonlincor\]. Density distribution function {#sec:PDF} ============================= The density distribution function of the dark matter $P(\rho)$ provides an additional test for the effects of the finite box-size $L$. One may expect some impact due to the missing waves. Indeed, a very long wave with a wavelength longer than $L$ increases the $rms$ fluctuations inside the computational box. As a result, some fluctuations collapse earlier, when the density of the universe is larger. Thus, the collapsed density will be somewhat larger as compared with the situation when the long wave is missed in simulations with box-size $L$. Using the same argument, one expects that some regions will have lower density if the long wave is present. In other words, the density distribution function should be wider in simulations with larger boxes. This is the same type of argument that was mentioned in the estimates of the halo abundances: the effect must be present, but how large is it? Here we will be interested in the high-density tail of $P(\rho)$ for two reasons: (1) the power spectra and correlation functions – being averages over the whole computational volume – have already given us results on the properties of the density field; however, they may not be very sensitive to the small fraction of the volume with the largest density; (2) because of the particle noise in low-density regions, it is more difficult to reliably estimate the PDF at low $\rho$. We select three GLAM simulation sets to study the PDF. The main comparison is between D2.75 and A1.5 (with $L=2.75\Gpch$ and $L=1.5\Gpch$, respectively), whose box sizes differ by almost a factor of two while the force resolution is the same. So, the difference between those simulations at large densities should be due only to the box-sizes. However, the number densities of particles in these simulations differ by almost a factor of ten, which affects the low-density part of $P(\rho)$.
In addition, we also consider the A2.5 simulations, which have the same number density of particles as D2.75 and nearly the same volume, but half the force resolution. The density distribution function $P(\rho)$ is estimated for three filtering scales – sizes of cubic cells: $\Delta x = 1.25, 2.5, 5.0 \Mpch$. The dark matter density distribution functions $P(\rho)$ are shown in Figure \[fig:pdf\]. The density is given in units of the average density of the Universe. The PDF is scaled with the square of the density to reduce the dynamical range. The $rms$ density fluctuation $\sigma$ measured for the different cell-sizes is indicated in the plot. For the large cell-size $\Delta x=5h^{-1}$Mpc the PDFs of the different box sizes are practically indistinguishable. As the cell-size decreases, the lack of force resolution in the A2.5 simulations results in a decline of the PDF at large densities $\rho> 100$, while the particle noise becomes important for low densities $\rho < 10$. In the regime where both the force-resolution and mass-resolution effects are small, the PDF does not show any sign of dependence on the size of the simulation box. Halo abundances {#sec:abundance} =============== Missing large-scale power in finite box simulations must affect the estimates of the abundance of halos with different masses. All current analytical models – built and tested using $N$-body results – tell us that the halo abundance is a function of the $rms$ density fluctuations $\sigma(M,z)$ as estimated from the linear power spectrum smoothed with a filter of effective mass $M$ at redshift $z$. Because the finite box simulations miss some fraction of $\sigma$ for a given mass $M$, these simulations must predict fewer halos. However, so far the results provided by $N$-body cosmological simulations have failed to show that this is the case [@Warren2006; @Tinker2008; @DarkSky; @Ishiyama2015; @DeRose]. The halo mass functions estimated using simulations of different box sizes have been extensively studied in the field. For example, @Tinker2008 used simulations with sizes from $L=80\Mpch$ up to $L=1.3\Gpch$. @DarkSky analysed simulations with different box sizes between $100\Mpch$ and $8\Gpch$. None of those works indicated any dependence of the halo mass function on the simulation box-size. In the overlapping halo mass interval $M=10^{13}-2\times 10^{14}\Msunh$ the DarkSky simulations with boxes $L=0.8, 1.6, 8\Gpch$ have mass functions that deviate by less than 1%. @DeRose do not find any differences in the halo mass function of halos more massive than $\sim 10^{13}\Msunh$ when comparing simulations with $1\Gpch$ and $5\Gpch$ boxes. In order to interpret and understand this result we use the analytical approximation of the halo mass function $n(M) = f(\sigma(M,z))$, where $\sigma(M,z)$ is the $rms$ of density fluctuations, as presented in @Comparat. This approximation is itself based on the MultiDark [@Klypin2016] suite of simulations with box sizes $L=0.4-4\Gpch$. We use this approximation to find the halo mass function in two ways. First, we estimate the $rms$ of fluctuations $\sigma(M,z)$ using the full (untruncated) linear power spectrum of fluctuations. Second, we mimic the finite box-size effects by truncating the power spectrum at the fundamental mode $\kbox$. Figure \[fig:abundance\] presents our estimates of the halo mass function for three hypothetical simulations with box sizes $L=300, 500\Mpch$ and $1\Gpch$. Clearly one should expect some deficit of halos in the simulation with $L=300\Mpch$.
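The truncated and untruncated estimates of $\sigma(M)$ differ only in the lower limit of the variance integral. A schematic of the calculation is given below; the Fourier-space top-hat window and the constants are standard, while `pk_lin` (the linear power spectrum) and the mass-function multiplicity `f_sigma` are assumed to be supplied externally (e.g. the @Comparat fit), so this is a sketch rather than the exact pipeline used for the figure.

```python
import numpy as np
from scipy.integrate import quad

RHO_M = 2.775e11 * 0.307   # mean matter density, (Msun/h) / (Mpc/h)^3, for Omega_m = 0.307

def sigma_of_M(M, pk_lin, k_min=0.0, k_max=50.0):
    """rms linear fluctuation for a top-hat enclosing mass M [Msun/h];
    set k_min = 2*pi/L to mimic a finite box of side L."""
    R = (3.0 * M / (4.0 * np.pi * RHO_M)) ** (1.0 / 3.0)

    def window(x):  # Fourier-space top-hat, with a small-x expansion for stability
        return 1.0 - x * x / 10.0 if x < 1e-3 else 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

    var, _ = quad(lambda k: pk_lin(k) * (window(k * R) * k) ** 2 / (2.0 * np.pi**2),
                  k_min, k_max, limit=200)
    return np.sqrt(var)

def halo_deficit(M, pk_lin, L, f_sigma):
    """Fractional change of n(M) = f(sigma(M)) when the box misses waves longer than L."""
    s_full = sigma_of_M(M, pk_lin)
    s_box = sigma_of_M(M, pk_lin, k_min=2.0 * np.pi / L)
    return f_sigma(s_box) / f_sigma(s_full) - 1.0
```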
For example, for mass $M= 2\times 10^{15}\Msunh$ the model predicts that about 10% of the halos will be missed. It is also clear why this effect has not been measured in the $N-$body simulations, and why it could be ignored: the model predicts that one should find only about one cluster for this halo mass in such small box. Note that when analyzing the simulations, one routinely ignores the first $\sim 100$ most massive halos because these clusters are too sensitive to cosmic variance and also because of the large statistical errors. If we limit ourselves to a mass scale with more than 100 halos, then the $L=300\Mpch$ box yields less than 1% uncertainty in the halo mass function at the most massive tail. Figure \[fig:abundance\] also shows that the finite box-size uncertainties decline dramatically with $L$. A simulation with a box-size of $1\Gpch$ will end up with no missing clusters: 1% error is reached for a halo mass of $\sim 8\times 10^{15}\Msunh$. The predicted number of clusters with this mass is so low that no single cluster of this mass is expected in the Universe. So, when it comes to making a choice for the box-size, our selection depends on the observational sample, i. e, how massive are the clusters in the sample that will be analyzed. For example, if the observed volume is relatively small and we are focusing only on clusters with mass less than $10^{14}\Msunh$ then even a $300\Mpch$ box-size would be sufficient: on average it will produce the correct amount of clusters under consideration. If instead we deal with a very large survey and study all possible clusters, then the box-size must be not less than $1\Gpch$. The errorbars in the observed number of objects, which we estimate using simulations, are the sum of two factors: (1) the statistical fluctuations due to the cosmic variance (random noise due to all harmonics with wavelength less than $L$) and (2) the effects of waves longer than the computational box. The first term will be found by measuring the statistics of objects in many realizations of simulated boxes. The second term can be estimated by assuming that the number of objects $n(M)$ depends on the $rms$ of density fluctuations at a given scale $\sigma(M,z)$. We use the halo mass function as an example. The number of missed halos $\Delta n(M)$ due to change in $\sigma$ can be written as $$\frac{\Delta n}{n} = \frac{\partial \ln n(\sigma)}{\partial \sigma} \Delta\sigma,$$ where $\Delta\sigma= \sigma_{\rm miss}(L)$ is the $rms$ fluctuations due to waves longer than $L$. Note that this is exactly the quantity that is plotted in the top panel of Figure \[fig:abundance\]. The errors can be substantial for $\sim 10^{15}\Msunh$ clusters in $300\Mpch$ simulations, but they are negligible for any clusters in $1\Gpch$ runs. Finite-box effects on the Power spectrum ======================================== The accuracy of the non-linear dark matter power spectrum from $N$-body simulations has been addressed extensively in many publications [e.g., @Heitmann2008; @Heitmann2010; @Schneider2016; @Lawrence2017; @Smith2018]. However, typically the main focus of these works is devoted to the convergence of the results on the short-scales (see also @Smith2018). 
Unlike the short scales, where the comparison of just one or few realisations with different box-sizes is sufficient, the analysis of the power spectrum for long-waves ($k{\mbox{${\,\hbox{\hbox{$ < $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.3\kMpch$) is complicated due to the large cosmic variance, which would require many realisations in order to complete a detailed study. @Heitmann2010 compared the power spectrum results obtained with $234\Mpch,\, 960\Mpch$ and $2\Gpch$ simulation boxes. They find that the power spectrum of 137 realisations of the $234\Mpch$ boxes is below the larger box simulations by about $\sim 1\%$ for wavenumbers $k=0.03-0.15\kMpch$. There were no detectable differences (less than $\sim 1\%$ between their $960\Mpch$ and $2\Gpch$ boxes). @GLAM used thousands of realisations to study the effects of the simulation box-size on the average of the power spectrum. Here, we extend that analysis to study in detail the effects of longer waves with additional simulations. Similar to the situation with the halo abundance, we know how qualitatively the missed long waves affect the power spectrum: power spectrum must increase with increasing of the box size. The magnitude of the effect is difficult to estimate. However, we know that the missing power is small for any realistic box-size (see Figure \[fig:sigma\]). Thus, most of the effect is expected to be found on long-waves in a given computational box. However, these waves are still in nearly linear regime and their non-linear coupling with the small amplitude waves outside the box can be expected to be small. The average power spectra obtained from the different sets of simulations are shown in Figure \[fig:power\]. The only differences one can see in this plot are those due to the force resolution: increasing resolution in small-box simulations results in the increase of the amplitude of the power spectrum. This happens at large wavenumbers $k{\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.5\kMpch$, which is a clear signature of the resolution effects. The finite-box effects should act in the opposite direction by decreasing the power in small-boxes relative to the bigger ones. There are no obvious signs of the box-size effects on long-waves where one expects them to be present. In order to see the effects on small wavenumbers more clearly, we plot the ratio of the nonlinear power spectrum to the linear spectrum $P(k)/P_{\rm linear}(k)$ – the square of the bias parameter. The results presented in Figure \[fig:bias\] do not show any signatures of depression in the power spectrum due to the missing long-waves. The outliers in this panel are coming from the small $L=250\Mpch$ and $L=500\Mpch$ simulations. There may exist a small effect of the box-size at $k=(0.01-0.02)\kMpch$ where the bias systematically increases by $\sim 0.5\%$ with increasing box-size, but the deviations are within the statistical uncertainties due to the small number of realisations of the $4.0\Gpch$ box. One effect is nevertheless noticeable: the large spacing between points for small-box simulations. This is related with the fundamental harmonic $\kbox$ that defines discreteness effects in the Fourier space (minimum separation of harmonics): the larger is the box, the smaller is the binning. This can be a serious problem for small boxes. For example, for $L=250\Mpch$ the minimum width of a bin $\Delta k$ is $\Delta k=0.025\kMpch$, which should be compared with the wavenumber of the first BAO peak $\sim 0.07\kMpch$. 
So, the binning is smaller than the BAO wavenumber, but only by a factor of $\sim 3$. Indeed, the points in Figure \[fig:bias\] that deviate from the other estimates of the $P(k)/P_{\rm linear}(k)$ ratio are those that correspond to the small $L=250\Mpch$ simulations. Figure \[fig:powerBAO\] presents a zoom-in view on the BAO domain of the power spectrum. We multiply the power spectrum $P(k)$ by a factor $k^{1.3}$ in order to flatten the curves in the range $k=(0.1-0.3)\kMpch$. The D0.25 simulations with small box sizes clearly suffer from the lack of long waves: their power spectrum falls systematically below the rest by $\sim 1-1.5$%. This is consistent with the estimates of @Heitmann2010. There are no measurable deviations between the $1\Gpch$ and $4\Gpch$ simulation boxes, with differences of less than $\sim 0.1\%$. The results presented so far were for quantities defined in real space and did not include peculiar velocities. The latter produce distortions in redshift space that are an important component of interpreting and understanding the observed clustering of objects [e.g., @Kaiser1987; @Hamilton1998; @Reid2011; @Sanchez2017]. Because our results are based on $N$-body simulations, where density and velocity perturbations play equally important roles, accurate estimates of the growth and evolution of density fluctuations imply accurate estimates of peculiar velocities. In other words, convergence of the different statistics of the density distribution guarantees convergence of quantities in redshift space. To make this argument clearer, we compare redshift-space power spectra with those measured in real space. When estimating the redshift-space power spectra, we perturb the positions of particles along one of the coordinate axes according to their peculiar velocities and periodically wrap them around, if necessary. Once the density in redshift space is constructed, we find the spectrum and estimate either the monopole or the quadrupole power spectrum. Results are averaged over the three directions of velocity distortions. We use 100 realizations of the D4 simulations to make estimates of power spectra for large-box simulations. To find the effects of the box size we additionally made 100 realizations with a $1.2\Gpch$ box-size and 800 realizations with twice-smaller $600\Mpch$ boxes. For these simulations we use exactly the same mass and force resolution as for the D4 simulations: $600^3$ particles moving in a $1200^3$ mesh for the $1.2\Gpch$ simulations and $300^3$ particles moving in a $600^3$ mesh for the $600\Mpch$ simulations. Because we are interested only in the effects of peculiar velocities, we analyze the ratios of the redshift- to real-space power spectra. This greatly reduces the cosmic variance and, thus, the statistical noise. Figure \[fig:powerRSD\] presents results for the monopole component (the quadrupole component shows similar results). Full curves show results for the $4\Gpch$ box. Different symbols are for the $600\Mpch$ and $1.2\Gpch$ boxes. The bottom panel shows the ratio of the redshift-space monopole power spectrum $P_0$ to the real-space $P_{\rm real}$. The horizontal dashed line indicates the theoretical prediction for very long waves [@Kaiser1987]: $P_0=(1+2f/3+f^2/5)P_{\rm real}$, where $f=d\ln\Delta/d\ln a$ is the growth rate of linear waves [e.g., @Reid2011]. Differences between simulations with different box sizes are so small that it is difficult to see them.
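As a quick consistency check (ours, not part of the original analysis), the large-scale limit of this ratio can be compared with the Kaiser factor quoted above, using the common approximation $f \simeq \Omega_m^{0.55}$ for the growth rate at $z=0$:

```python
omega_m = 0.307
f = omega_m ** 0.55                           # linear growth rate at z = 0
kaiser = 1.0 + 2.0 * f / 3.0 + f**2 / 5.0     # Kaiser monopole boost
print(f"f = {f:.3f}, P0/P_real -> {kaiser:.3f}")   # ~0.52 and ~1.40
```

The resulting value $\approx 1.40$ agrees with the asymptotic amplitude of the fit discussed below.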
To find the differences, we fit the ratio $P_0/P_{\rm real}$ with an analytical smooth function and display the deviations from the same fit in the top panel. The function itself is motivated by approximations used in the field. Specifically we use analytical approximation: $$\left(\frac{P_0}{P_{\rm real}}\right)_{\rm fit}=A\exp\left[-\left(\frac{k}{k_0}\right)-\left(\frac{k}{k_1}\right)^2\right], \label{eq:fit}$$ where $A=1.40$, $k_0=2.4h{\rm Mpc}^{-1}$, and $k_1=0.66h{\rm Mpc}^{-1}$. The differences between $4\Gpch$ and $1.2\Gpch$ simulations are small: less than $\sim 0.1\%$ on all scales. At even smaller $L=600\Mpch$ simulations show some differences at $k{\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.1\kMpch$. However, they are relatively small (e.g., $\sim 1\%$ at $k{\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.2\kMpch$). There are no measurable difference at very long waves with $k{\mbox{${\,\hbox{\hbox{$ < $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.05\kMpch$. [^2] Covariance matrix of the power spectrum {#sec:Cov} ======================================= The covariance matrix $C(k,k^\prime)$ of the power spectrum given in eq.(\[eq:cov\]) is one of the main statistics required for detailed analysis of observational survey data and estimates of cosmological parameters [see e.g., @Anderson2012; @Sanchez2012; @Dodelson2013; @Percival2014]. It is very difficult to estimate the covariance matrix using simulations because thousands of realisations are required in order to produce accurate measurements [e.g., @Taylor2013; @Percival2014; @GLAM]. This is also a quantity that strongly depends on the computational box-size. So, it is important to understand how to handle $C(k,k^\prime)$ obtained from finite-volume simulations [e.g., @Gnedin2011; @Li2014; @Mohammed2014; @Bertolini2016]. The diagonal and off-diagonal components of the covariance matrix have different nature and different magnitudes. The diagonal components $C(k,k)$ are defined mostly by the Gaussian noise associated with the finite number of Fourier harmonics found in each bin used to estimate the power spectrum. As such, we can write: $$C^G(k,k) = \alpha\frac{2}{N_h}P^2(k),\quad N_h = \frac{4\pi k^2\Delta k}{(2\pi/L)^3}, \label{eq:gauss}$$ where $N_h$ is the number of harmonics in a $[k,k+\Delta k]$ bin and the coefficient $\alpha$ takes into account the filtering due to the binning process. For the Near Grid Point (NGP) binning $\alpha =1$, and $2/3$ for the CIC binning. Note that the magnitude of the diagonal components is proportional to the volume of the simulations, i. e., $$Cov(k,k) \propto L^{-3}. \label{eq:covscale}$$ Nonlinear clustering affects the diagonal components at large wavenumbers $(k{\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.2\kMpch)$ making them larger than the simple shot-noise estimates. However, the nonlinear terms also scale with volume [@GLAM]. The non-diagonal components $C(k,k^\prime)$ have much smaller amplitudes but there are many more of them as compared with the diagonal ones. So, the off-diagonal componentes are still important. Detailed analysis of these components was presented in @GLAM who found that they also scale with the computational volume. Here we present a couple of examples of the behaviour of the covariance matrix in the domain of the BAO peaks. Figure \[fig:Covar\] presents two slices of the dark matter covariance matrices $Cov(k,k^{\prime})$ in simulations with different box sizes. 
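In practice both the measured covariance of eq. (\[eq:cov\]) and its Gaussian expectation of eq. (\[eq:gauss\]) reduce to a few lines of numpy. In this sketch `pk_real` is a hypothetical array of shape (number of realisations, number of $k$-bins); the helper names are ours.

```python
import numpy as np

def covariance_of_pk(pk_real):
    """C(k, k') = <P P'> - <P><P'>, estimated over realisations (eq. cov)."""
    return np.cov(pk_real, rowvar=False, ddof=1)

def gaussian_diagonal(pk_mean, k, dk, boxsize, alpha=2.0 / 3.0):
    """Gaussian expectation C(k,k) = alpha * 2 / N_h * P(k)^2 (eq. gauss),
    with N_h = 4 pi k^2 dk / (2 pi / L)^3 modes per bin and alpha = 2/3 for CIC binning."""
    n_h = 4.0 * np.pi * k**2 * dk / (2.0 * np.pi / boxsize) ** 3
    return alpha * 2.0 / n_h * pk_mean**2
```

The $1/N_h \propto L^{-3}$ factor in the second helper is what makes the simple volume rescaling of eq. (\[eq:covscale\]) work.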
All covariance matrices were rescaled to the $1.5h^{-1}Gpc$ box-size by multiplying $Cov(k,k^{\prime})$ by the ratio of volumes. The covariance matrix of the $2.5\Gpch$ simulations (A2.5c) was additionally scaled up by 10%. Without this re-scalings the difference between the covariance matrices is very large: factor $(2.5/0.96)^3\approx 18$ between simulations with $2.5\Gpch$ and $1\Gpch$. The large level of noise of the covariance matrix for the $2.5\Gpch$ simulations is due to the fact that the level of the signal is very low due to the large box-size. These results make the rescaling of the covariance matrices easy: one can rescale the covariance matrix proportionally to the ratio of the volumes. In the next section we will discuss in detail the covariance corrections due to super sample modes. Super Sample Covariance {#sec:SSC} ======================= In this paper we have addressed so far the impact of missing long-waves by studying the scaling of different quantities such as correlation function, PDF, halo abundance, power spectrum, and covariance matrix with the box-size. As the box-size increases, more and more long-waves are incorporated into the simulations. The extrapolation of these quantities to the limit of infinitely large boxes gives us the estimates of those true quantities. These convergence studies are the traditional way of treating situations like that. This would work if the observational sample is very large and we measure a fair volume fraction of the Universe. Indeed, the current and future galaxy surveys have very large volumes. For example, the effective volume of DESI or Euclid will be $\sim 50\Gpch^3$,which roughly corresponds to the volume of a simulation box with $L\sim 3.7\Gpch$. There is another approach to the problem of waves longer than the observed sample or computational volume that aims to estimate their effects using a simplified model. The key assumption of this approach is that waves that are longer that $L$ can be considered to have a constant (background) density $\delta_b$ inside the simulation box [see e.g. @Baldauf2011; @TakadaHu; @SuperScale; @Baldauf2016]. These very long-waves affect the growth of fluctuations inside a given box $L$ that is extracted from a density field that does not have any missing long-waves. There is an obvious question regarding the accuracy of the approximation: as we saw in Section \[sec:missing\] most of the missing power is in waves that are only twice longer than the box-size $L$, and, thus cannot be considered to be constant inside the computational box. Let’s ignore for now this question and see what the [*Super Scale Covariance*]{} (SSC) approach predicts for the covariance matrix. The covariance matrix can be written as a sum of three terms: the Gaussian contribution given by eq.(\[eq:gauss\]), the nonlinear term related with the tri-spectrum of perturbations, and the contribution of waves longer than $L$: $$Cov_{ij} = Cov^G(k_i,k_j)\delta_{ij} + Cov^T(k_i,k_j) + Cov^{SSC}(k_i,k_j).$$ The first two terms scale with the volume of the simulation $Cov^{G,T}\propto L^{-3}$ as discussed in Section \[sec:Cov\]. These two terms are estimated from finite-box simulations. For this reason we combine these two terms together and refer to the sum as $Cov^{\rm Box}_{ij}$. The SSC term is due to the response of the power spectrum $P(k)$ to the background density change $\delta_b$ in the box $L$, i. e., $\delta P(k) = (dP(k)/d\delta_b)\delta_b$. 
Averaging over the distribution of $\delta_b$ gives an estimate of the SSC covariance term [@TakadaHu; @Li2014; @Wagner2015]: $$Cov^{SSC}_{ij} \approx \sigma^2_L\frac{\partial\ln P_i}{\partial\delta_b}\frac{\partial\ln P_j}{\partial\delta_b}P_iP_j,$$ where $\sigma_L$ is the $rms$ of $\delta_b$ as measured in boxes of size $L$. On large scales (small $k$) the response function $d\ln P(k)/d\delta_b$ changes relatively slowly [e.g., @TakadaHu; @SuperScale; @Mohammed2017], and its magnitude depends on how the power spectrum is measured. If $P(k)$ is measured with respect to the local density of the simulation box $1+\delta_b$ then the effect is substantially smaller as compared with that when the clustering is relative to the true background density. For large galaxy surveys the clustering is relative to the average density of the galaxy sample in the observed volume. To mimic the observations we can always mimic the same in our simulations. In the limit of small $k$ the response function can be written as [e.g., @Mohammed2017]: $$\frac{\partial\ln P}{\partial\delta_b} = \frac{5}{21}-\frac{1}{3}\frac{d\ln P}{d\ln k}\approx 0.57; \label{eq:respf}$$ where this estimate is given for the power spectrum with slope -1, which is the typical value for the long-waves $k=(0.1-0.3)\kMpch$. Note that if the overall density is used for the background, then the first factor $5/21$ in eq. \[eq:respf\] should be replaced with $41/21$ and the response function will value $\approx 2.3$. We estimate the $rms$ of $\delta_b$ fluctuations using a series of $4\Gpch$ GLAM simulations with $1500^3$ particles and $3000^3$ mesh. Each simulation was split in either $500\Mpch$ or $1\Gpch$ sub-boxes, and the total density of each sub-box was used to find $\sigma_L$. As expected, the results are accurately fitted by a power-law with the slope -2: $$\sigma_L = \frac{1.43\times 10^{-3}}{L_{\rm Gpc}^2}, \quad L_{\rm Gpc}\equiv\frac{L}{1\Gpch}$$ Now we estimate the impact of SSC covariance term on the normalized covariance matrix, i. e., $$\begin{aligned} \frac{C_{ij}}{P_iP_j} &=&\frac{C^{\rm Box}_{ij}}{P_iP_j} +\sigma^2_L\frac{\partial\ln P_i}{\partial\delta_b}\frac{\partial\ln P_j}{\partial \delta_b}\\ &\approx& \frac{C^{\rm Gpc}_{ij}}{P_iP_j} \frac{1}{L_{\rm Gpc}^{3}}+\left[\frac{0.0285}{L_{\rm Gpc}}\right]^4, \label{eq:SSCcorr}\end{aligned}$$ where $C^{\rm Gpc}_{ij}$ is the box covariance matrix measured for $L=1\Gpch$ and $L_{\rm Gpc}$ is the box size in units $\Gpch$. This relation can be used to estimate the correction to the covariance matrix in simulations with a given box-size due to the super-sample modes. For example, the covariance matrix estimated for $L=1.5\Gpch$ in Figure \[fig:Covar\] is $[Cov_{ij}/P_iP_j]^{1/2}\approx (3-4)\times 10^{-3}$ for non-diagonal components in a wide range of wavenumbers $0.05\kMpch < k < 0.5\kMpch$. Thus, for these simulations the covariance matrix corrected by the SSC terms is given by $$\left[\frac{Cov_{ij}^{\rm correct}}{P_iP_j}\right]^{1/2} = \left[\frac{Cov_{ij}^{\rm Box}}{P_iP_j}\right]^{1/2}\left[1+(4-7)\times 10^{-3}\right].$$ The correction is about 0.5%, which is small, but can be relevant for some very sensitive applications. The estimate for $4\Gpch$ simulations shows a 0.2% correction. The situation becomes totally different if the true background density is used for the power spectrum estimates of a small computation box. For example, a box with $L=500\Mpch$ is often used in the literature for SSC estimates [e.g., @Li2014; @Wagner2015; @Mohammed2017]. 
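Eq. (\[eq:SSCcorr\]) gives the size of the correction in closed form; a short helper (illustrative only) makes the numbers above easy to reproduce:

```python
def ssc_term(L_gpc, response=0.57):
    """SSC contribution to the normalised covariance, eq. (SSCcorr):
    sigma_L = 1.43e-3 / L^2 (L in Gpc/h), term = (sigma_L * response)^2 = (0.0285 / L)^4."""
    return (1.43e-3 / L_gpc**2 * response) ** 2

# ssc_term(1.5) ~ 1.3e-7, i.e. a ~0.5% increase of [Cov_ij/(P_i P_j)]^(1/2) ~ (3-4)e-3;
# ssc_term(4.0) is ~50 times smaller.
```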
In this case the covariance matrix is dominated by the super-sample modes. Indeed, for this case our estimates show that $[Cov_{ij}/P_iP_j]^{1/2}$ almost doubles due to the SSC corrections. So, SSC corrections can be quite important for surveys with small volumes or for simulations with small box size. However, they are small for studies of galaxy clustering statistics with effective volumes of tens of Gpc$^3$. Conclusions and Summary {#sec:concl} ======================= Modern galaxy surveys encompass larger volume of space prompting the theory to make a careful analysis of the effects of very long waves on the observable statistics such as the power spectrum, the correlation functions, PDF, covariances, and the abundances of rich clusters of galaxies. Cosmological simulations play a key role in this analysis. It is routinely assumed that a single cosmological simulation box must cover the volume of the whole observable catalog. This significantly complicates the theoretical analysis and makes nearly impossible to perform thousands of realizations of mock galaxy samples required for estimates of systematics and errors. We challenge this trend and make extensive analysis of the effects due to the finite box-size of the cosmological simulations. We argue that for most of the types of analysis of large-scale surveys a computational volume of $L\sim (1-1.5)\Gpch$ is sufficient. In order to produce mock observed catalogs, these finite-volume simulations should be periodically replicated to fill the required observational volume. We also show that no corrections are required to the average power spectrum and correlation function, PDF and halo abundances, due to the effect of missing long-waves in a simulation box. On the other hand, the covariance matrices should be scaled down proportionally to the volume of the observations and, if necessary, corrected for the super-sample modes as given in eq.(\[eq:covscale\]) and eq.(\[eq:SSCcorr\]). Here is the summary of our main results: – Defects of box replications can be readily remedied by a combination of sufficiently large $\sim 1\Gpch$ simulations and rotation of the boxes before building mock galaxy catalogs, – The missing power of finite-box simulations (due to waves longer than the computational box) dramatically declines with increasing box-size $L$, and becomes extremely small $\sigma{\mbox{${\,\hbox{\hbox{$ < $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}0.01$ for $L{\mbox{${\,\hbox{\hbox{$ > $}\kern -0.8em \lower 1.0ex\hbox{$\sim$}}\,}$}}1\Gpch$. Most of the missing power is in waves that are slightly longer than the box-size: about 90% of the missing power is in waves with wavelengths $(1-1.5)L$, – Corrections to the abundance of halos and galaxies are extremely small and can be neglected for computational volumes larger than $1\Gpch$, – The average power spectra of dark matter fluctuations show remarkable lack of dependance on the size of the computational box. We clearly detect some decline of the amplitude of fluctuations for small $(250-500)\Mpch$ boxes, but it is small: $(1-1.5)$% for the smallest $250\Mpch$ simulation that we studied. There are no visible effects for simulations with $L>1\Gpch$ with upper limits of $\sim 0.5$% for extremely long-waves with $k=(0.008-0.05)\kMpch$ and less than $\sim 0.1$% for waves in the BAO domain with $k=(0.07-0.3)\kMpch$. – The covariance matrix of the dark matter power spectra scales proportionally to the computational volume. 
This well-known result [e.g., @TakadaHu; @Wagner2015; @GLAM] is important for using mock galaxy catalogs: the covariance matrix must be scaled down to match the observational sample. The SSC correction to the covariance matrix is expected to be $\sim 0.5$% for observational samples with an effective volume of $3(\Gpch)^3$ (box size $L=1.5\Gpch$), and becomes negligible when the observational sample increases to the $\sim 50(\Gpch)^3$ expected for the DESI/Euclid and LSST surveys. – The most stringent constraints on the simulation volume come from the requirement that mock catalogs should reproduce not only the correct power spectra, but also the correlation functions [@Sirko2005; @Klypin2013]. We find that the correlation functions for $L \lesssim 500\Mpch$ are qualitatively incorrect. For example, for $L= 500\Mpch$ the dark matter correlation function is zero at $R\approx 85\Mpch$ where it must be positive. For $L= 300\Mpch$ the correlation function is negative over the whole domain of the BAO peak ($R\approx 100\Mpch$). However, the effect quickly becomes very small with increasing volume and is negligible for $L \gtrsim 1\Gpch$. Based on the work presented in this paper we conclude that a simulation box of $L\sim (1-1.5)\Gpch$ is large enough to fulfil most of the science requirements of the upcoming generation of large redshift surveys in the fields of large-scale structure, weak lensing, and cosmological parameters. Acknowledgements {#acknowledgements .unnumbered} ================ We thank J. Peacock and M. Schmittfull for discussions and comments. A.K. acknowledges support from the Fulbright Foundation, the Instituto de Astrofisica de Canarias, La Laguna, and the Severo Ochoa scholarship. A.K. and F.P. acknowledge support from the Spanish MINECO grant AYA2014-60641-C2-1-P. F.P. thanks the ICC at Durham University for its support and hospitality while part of this work was completed. The new GLAM simulations presented in this paper were done at the Barcelona Supercomputer Center (Spain) and the DiRAC Data Centric system at Durham University, operated by the ICC on behalf of the STFC DiRAC HPC Facility. We thank New Mexico State University (USA) and Instituto de Astrofisica de Andalucia CSIC (Spain) for hosting the skiesanduniverse.org site for cosmological simulation products. [^1]: E-mail: [email protected] [^2]: The limited force resolution $\epsilon =1\Mpch$ for the $4\Gpch$ box and for the smaller boxes used for Figure \[fig:powerRSD\] affects (underestimates) the redshift distortions at large wavenumbers $k \gtrsim 0.2\kMpch$. For a much better resolution of $\epsilon =0.25\Mpch$ and a box size $L=1\Gpch$ we find that the parameters are slightly different: $A=1.405$, $k_0=1.84h{\rm Mpc}^{-1}$, and $k_1=0.55h{\rm Mpc}^{-1}$. This approximation gives errors of less than 0.5% for $k<0.35\kMpch$.
--- abstract: 'This paper concerns state-based systems that interact with their environment at physically distributed interfaces, called ports. When such a system is used a projection of the global trace, called a local trace, is observed at each port. This leads to the environment having reduced observational power: the set of local traces observed need not uniquely define the global trace that occurred. We consider the previously defined implementation relation ${\sqsubseteq_s}$ and start by investigating the problem of defining a language ${{\mathcal {\tilde L}}}(M)$ for a multi-port finite state machine (FSM) $M$ such that $N {\sqsubseteq_s}M$ if and only if every global trace of $N$ is in ${{\mathcal {\tilde L}}}(M)$. The motivation is that if we can produce such a language ${{\mathcal {\tilde L}}}(M)$ then this can potentially be used to inform development and testing. We show that ${{\mathcal {\tilde L}}}(M)$ can be uniquely defined but need not be regular. We then prove that it is generally undecidable whether $N {\sqsubseteq_s}M$, a consequence of this result being that it is undecidable whether there is a test case that is capable of distinguishing two states or two multi-port FSM in distributed testing. This result complements a previous result that it is undecidable whether there is a test case that is guaranteed to distinguish two states or multi-port FSMs. We also give some conditions under which $N {\sqsubseteq_s}M$ is decidable. We then consider the implementation relation ${\sqsubseteq_s}^k$ that only concerns input sequences of length $k$ or less. Naturally, given FSMs $N$ and $M$ it is decidable whether $N {\sqsubseteq_s}^k M$ since only a finite set of traces is relevant. We prove that if we place bounds on $k$ and the number of ports then we can decide $N {\sqsubseteq_s}^k M$ in polynomial time but otherwise this problem is NP-hard.' author: - 'Robert M. Hierons' bibliography: - 'trace.bib' - 'test.bib' - 'papers\_hierons.bib' title: Checking Finite State Machine Conformance when there are Distributed Observations --- Introduction ============ Many systems interact with their environment at multiple physically distributed interfaces, called ports, with web-services, cloud systems and wireless sensor networks being important classes of such systems. When we test a system that has multiple ports we place a local tester at each port and the local tester at port $p$ only observes the events at $p$. This has led to the (ISO standardised) definition of the distributed test architecture in which we have a set of distributed testers, the testers do not communicate with one another during testing, and there is no global clock [@iso9646_1]. While it is sometimes possible to make testing more effective by allowing the testers to exchange coordination messages during testing [@cacciari99; @RafiqC03], this is not always feasible and the distributed test architecture is typically simpler and cheaper to implement. Importantly, the situation in which separate agents (users or testers) interact with the system at its ports can correspond to the expected use of the system. Distributed systems often have a persistent internal state and such systems are thus modeled or specified using state-based languages. In the context of testing the focus has largely been on finite state machines (FSMs) and input output transition systems (IOTSs). 
This is both because such approaches are suitable and because most tools and techniques for model-based testing[^1] transform the models, written in a high-level notation, to an FSM or IOTS [@CartaxoMN11; @Grieskamp06; @Grieskamp11; @FHP02; @Tretmans08]. Model-based testing has received much recent attention since it facilitates test automation, with the results of a recent major industrial project showing the potential for significant reductions in the cost of testing [@Grieskamp11]. This paper concerns problems related to developing a multi-port system based on a (multi-port) FSM model/specification. Much of the work in the area of distributed testing has focussed on FSM models [@AlmeidaMSTV10; @dssouli85; @dssouli86; @Khoumsi02; @sarikaya84; @UralW06], although there has also been work that considers more general models such as IOTSs and variants of IOTSs [@HaarJJ07; @HJJ08; @HieTestCom08; @ATVA08; @HieronsN10]. While IOTSs are more expressive, this paper explores decidability and complexity issues in distributed testing and so we restrict attention to finite state models and, in particular, to multi-port FSMs. Naturally, the negative decidability and complexity results proved in this paper extend immediately to IOTSs. When a state-based system interacts with its environment there is a resultant sequence of inputs and outputs called a *global trace*. When there are physically distributed ports the user or tester at a port $p$ only observes the sequence of events that occur at $p$, the projection at $p$ of the global trace, and this is called a *local trace*. It is known that the fact that the local testers observe only local traces introduces additional issues into testing [@AlmeidaMSTV10; @dssouli85; @dssouli86; @HaarJJ07; @HJJ08; @Khoumsi02; @sarikaya84; @UralW06]. Previous work has shown that distributed testing introduces additional controllability and observability problems. A controllability problem occurs when a tester does not know when to supply an input because it does not observe the events at the other ports [@sarikaya84; @dssouli85]. Consider, for example, the global trace shown in Figure \[fig:cont\]. We use diagrams (message sequence charts) such as this to represent scenarios. In such diagrams vertical lines represent processes and time progresses as we go down a line. In this case the system under test (SUT) has two ports, $1$ and $2$; we have one vertical line representing the SUT, one representing the local tester at port $1$, and one representing the local tester at port $2$. There is a controllability problem because the tester at port $2$ should send input $x'$ after $y$ has been sent by the SUT but cannot know when this has happened since it does not observe the events at port $1$. Observability problems refer to the fact that the observational ability of a set of distributed testers is less than that of a global tester since the set of local traces need not uniquely define the global trace that occurred [@dssouli86]. Consider, for example, the global traces $\sigma$ and $\sigma'$ shown in Figures \[fig:obs1\] and \[fig:obs2\] respectively. These global traces are different but the local testers observe the same local traces: in each case the tester at port $1$ observes $xyxy$ and the tester at port $2$ observes $y'$. Recent work has defined new notions of conformance (implementation relations) that recognise this reduced observational power of the environment [@Hierons_distrib_Oracle; @HieTestCom08].
These implementation relations essentially say that the SUT conforms to the specification if the environment cannot observe a failure. When using such implementation relations, we do not have to consider observability problems: if a global trace $\sigma$ of the SUT is observationally equivalent to one in the specification then $\sigma$ is considered to be an allowed behaviour since a set of distributed testers or users would not observe a failure. Given multi-port FSMs $N$ and $M$, there are two notions of conformance for situations in which distributed observations are made: weak conformance (${\sqsubseteq_w}$) and strong conformance (${\sqsubseteq_s}$). Under ${\sqsubseteq_w}$, it is sufficient that for every global trace $\sigma$ of $N$ and port $p$ there is some global trace $\sigma_p$ of $M$ such that $\sigma$ and $\sigma_p$ are indistinguishable at port $p$; that is, they have the same local traces at $p$. In contrast, under ${\sqsubseteq_s}$ we require that for every global trace $\sigma$ of $N$ there is some global trace $\sigma'$ of $M$ such that $\sigma$ and $\sigma'$ are indistinguishable at all of the ports. To see the difference, let us suppose that there are two allowed responses to input $x_1$ at port $1$: either $y_1$ at port $1$ and $y_2$ at port $2$ (forming global trace $\sigma$) or $y'_1$ at port $1$ and $y'_2$ at port $2$ (forming global trace $\sigma'$). Under ${\sqsubseteq_w}$ it is acceptable for the SUT to respond to $x_1$ with $y_1$ at port $1$ and $y'_2$ at port $2$ since the local trace at port $1$ is $x_1y_1$, which is a projection of $\sigma$, and the local trace at port $2$ is $y'_2$, which is a projection of $\sigma'$. However, this is not acceptable under ${\sqsubseteq_s}$ since there is no global trace of the specification that has projection $x_1y_1$ at port $1$ and projection $y'_2$ at port $2$. One of the benefits of using an FSM to model the required behaviour of a system that interacts with its environment at only one port is that there are standard algorithms for many problems that are relevant to test generation. For example, we can decide whether there are strategies (test cases) that reach or distinguish states [@alur95] and such strategies are used by many test generation algorithms [@aho91; @chow78; @hennie64; @luo94a; @PetrenkoY05; @ural97]. In addition, if we have an FSM specification $M$ and an FSM design $N$ then we can decide whether $N$ conforms to $M$. Thus, if we wish to adapt standard FSM test techniques to the situation where we have distributed testers then we need to investigate the corresponding problems for multi-port FSMs. Recent work has shown that it is undecidable whether there is a strategy that is guaranteed to reach a state or to distinguish two states of an FSM in distributed testing [@Hierons10_SICOMP]. However, this left open the question of whether one can decide whether one FSM conforms to another. It also left open the related question of whether it is decidable whether there is a strategy that is capable of distinguishing two FSMs[^2]. This paper concerns the problem of deciding, for multi-port FSMs $M$ and $N$, whether $N$ conforms to $M$. Clearly, this can be decided in low-order polynomial time for ${\sqsubseteq_w}$: for each port $p$ we simply compare the projections of $N$ and $M$ at $p$.
However, ${\sqsubseteq_w}$ will often be too weak since it assumes that we can never have the situation in which an agent is aware of observations made at two or more ports. We therefore focus on the implementation relation ${\sqsubseteq_s}$. We start by investigating the question of whether, given a multi-port FSM $M$, we can define a language ${{\mathcal {\tilde L}}}(M)$ such that for every multi-port FSM $N$ we have that $N {\sqsubseteq_s}M$ if and only if all global traces of $N$ are in ${{\mathcal {\tilde L}}}(M)$. If we can define such an ${{\mathcal {\tilde L}}}(M)$ then there is the potential to explore properties of this in order to find algorithms for deciding $N {\sqsubseteq_s}M$ for classes of $N$ and $M$. There is also the potential to base testing and development on ${{\mathcal {\tilde L}}}(M)$. It has already been shown that we can produce such a language ${{\mathcal {\tilde L}}}(M)$ for the special case where we restrict testing to controllable input sequences and are testing from deterministic FSMs [@Hierons10]. We prove that ${{\mathcal {\tilde L}}}(M)$ is uniquely defined but need not be regular. We then consider the problem of determining whether $N {\sqsubseteq_s}M$ for multi-port FSMs $N$ an $M$ and prove that this is generally undecidable. We also give some conditions under which $N {\sqsubseteq_s}M$ is decidable. Clearly, this problem is important when we are checking an FSM design against an FSM specification. In addition, $N {\sqsubseteq_s}M$ if no *possible* behaviour of $N$ can be distinguished from the behaviours of $M$. Thus, since it is undecidable whether $N {\sqsubseteq_s}M$ it is also undecidable whether there is a test case that is *capable* of distinguishing two states or FSMs. This complements the result that it is undecidable whether there is a test case that is guaranteed to distinguish two states or FSMs [@Hierons10_SICOMP]. However, the proofs use very different approaches: the proof of the previous result [@Hierons10_SICOMP] used results from multi-player games while in this paper we develop and then use results regarding multi-tape automata. Note that many traditional methods for testing from an FSM use sequences that distinguish between states, in order to check that a (prefix of a) test case takes the SUT to a correct state [@aho91; @chow78; @hennie64; @luo94a; @PetrenkoY05; @ural97]. The results in this paper and in [@Hierons10_SICOMP] suggest that it will be difficult to adapt such techniques for distributed testing. In addition, we can represent a possible fault in the SUT by an FSM $N$ formed by introducing the fault into the specification FSM $M$: the results in this paper show that it is undecidable whether there is a test case that can detect such a ‘fault’, and thus also whether it represents an incorrect implementation. Since it is undecidable whether $N {\sqsubseteq_s}M$, we define a weaker implementation relation ${\sqsubseteq_s}^k$ that only considers sequences of length $k$ or less. This is relevant when we know a bound on the length of sequences in use or we know that the system will be reset after at most $k$ inputs have been received. For example, a protocol might have a bound on the number of steps that can occur before a ‘disconnect’ happens. Naturally, it is decidable whether $N {\sqsubseteq_s}^k M$ since we only have to reason about finite sets of global traces. 
We prove that if we place a bound on $k$ and the number of ports then we can decide whether $N {\sqsubseteq_s}^k M$ in polynomial time but the problem is NP-hard if we do not have such bounds. This paper is structured as follows. Section \[section:preliminaries\] provides preliminary definitions. Section \[section:models\] then investigates the problem of defining a language ${{\mathcal {\tilde L}}}(M)$ such that for every multi-port FSM $N$ we have that $N {\sqsubseteq_s}M$ if and only if all global traces of $N$ are in ${{\mathcal {\tilde L}}}(M)$. In Section \[section:multitape\] we prove results regarding multi-tape automata that we use in Section \[section:decide\_strong\] to prove that it is generally undecidable whether $N {\sqsubseteq_s}M$. Section \[section:decide\_strong\] also gives conditions under which $N {\sqsubseteq_s}M$ is decidable. In Section \[section:bounded\] we then explore $ {\sqsubseteq_s}^k$. Finally, in Section \[section:conclusions\] we draw conclusions and discuss possible lines of future work. Preliminaries {#section:preliminaries} ============= This paper concerns the testing of state-based systems whose behaviour is characterised by the input/output sequences (global traces) that they can produce. Given a set $A$ we let $A^*$ denote the set of sequences formed from elements of $A$ and we let $\epsilon$ denote the empty sequence. In addition, $A^+$ denotes the set of non-empty sequences in $A^*$. Given sequence $\sigma \in A^*$ we let ${pref}(\sigma)$ denote the set of prefixes of $\sigma$. We are interested in finite state machines, which define global traces (input/output sequences). Given a global trace $\sigma = x_1/y_1 \ldots x_k/y_k$, in which $x_1, \ldots, x_k$ are inputs and $y_1, \ldots, y_k$ are outputs, the prefixes of $\sigma$ are the global traces of the form $x_1/y_1 \ldots x_j/y_j$ with $j \leq k$. In this paper we investigate the situation in which a system interacts with its environment at $n$ physically distributed interfaces, called ports. We let ${{\mathcal P}}= \{1, \ldots, n\}$ denote the names of these ports. Then a multi-port FSM $M$ is defined by a tuple $(S,s_0,I,O,h)$ in which $S$ is the finite set of states, $s_0 \in S$ is the initial state, $I$ is the finite input alphabet, $O$ is the finite output alphabet, and $h$ is the transition relation. The set of inputs is partitioned into subsets $I_1, \ldots, I_n$ such that for $p \in {{\mathcal P}}$ we have that $I_p$ is the set of inputs that can be received at port $p$. Similarly, for port $p$ we let $O_p$ denote the set of output that can be observed at $p$. As is usual [@dssouli85; @dssouli86; @sarikaya84; @UralW06] we allow an input to lead to outputs at several ports and so we let $O = ((O_1 \cup \{-\}) \times \ldots \times (O_n \cup \{-\}))$ in which $-$ denotes null output. We can ensure that the $I_p$ and also the $O_p$ are pairwise disjoint by labelling input and output with the port name, where necessary. We let ${{\mathcal Act}}= I \cup O$ denote the set of possible observations and for $p \in {{\mathcal P}}$ we let ${{\mathcal Act}}_p = I_p \cup O_p$ denote the set of possible observations at port $p$. The transition relation $h$ is of type $S \times I \leftrightarrow S \times O$ and should be interpreted in the following way: if $(s',y) \in h(s,x)$, $y = (z_1, \ldots, z_n)$, and $M$ receives input $x$ when in state $s$ then it can move to state $s'$ and send output $z_p$ to port $p$ (all $p \in {{\mathcal P}}$). 
This defines the *transition* $(s,s',x/y)$, which is a *self-loop transition* if $s=s'$. Since we only consider multi-port FSMs in this paper, we simply call them FSMs. The FSM $M$ is said to be a *deterministic FSM (DFSM)* if $|h(s,x)| \leq 1$ for all $s \in S$ and $x \in I$. An FSM $M$ is completely-specified if for every state $s$ and input $x$, we have that $h(s,x) \neq \emptyset$. A sequence $(s_1,s_2,x_1/y_1)(s_2,s_3,x_2/y_2) \ldots (s_{k},s_{k+1},x_{k}/y_{k})$ of consecutive transitions is said to be a *path*, which has *starting state* $s_1$ and *ending state* $s_{k+1}$. This path has *label* $x_1/y_1 \ldots x_k/y_k$, which is called a (global) *trace*. Further, $x_1 \ldots x_k$ and $y_1 \ldots y_k$ are said to be the *input portion* and the *output portion* respectively of $x_1/y_1 \ldots x_k/y_k$. A path is a *cycle* if its starting and ending states are the same. The FSM $M$ defines the regular language $L(M)$ of the labels of paths of $M$ that have starting state $s_0$. Given state $s \in S$ of $M$ we let $L_M(s)$ denote the set of global traces that are labels of paths of $M$ with starting state $s$, and so $L(M) = L_M(s_0)$. We say that $M$ is *initially connected* if for every state $s$ of $M$ there is a path that has starting state $s_0$ and ending state $s$. Throughout this paper we assume that any FSM considered is completely-specified and initially connected. Where this condition does not hold we can remove the states that cannot be reached and we can complete the FSM by, for example, either adding self-loop transitions with null output or adding transitions to an error state. At times we will use results regarding finite automata (FA) and so we briefly define FA here. An FA $M$ is defined by a tuple $(S,s_0,X,h,F)$ in which $S$ is the finite set of states, $s_0 \in S$ is the initial state, $X$ is the finite alphabet, $h$ is the transition relation, and $F \subseteq S$ is the set of final states. The transition relation has type $S \times (X \cup \{\tau\}) \times S$ where $\tau$ represents a silent transition that is not observed. The notions of a path and its label, which does not include instances of $\tau$, correspond to those defined for FSMs and so are not defined here. The FA $M$ defines the language $L(M)$ of labels of paths that have starting state $s_0$ and an ending state in $F$. For a global trace $\sigma$ and port $p \in {{\mathcal P}}$ let $\pi_p(\sigma)$ denote the *local trace* formed by removing from $\sigma$ all elements that do not occur at $p$. This is defined by the following rules in which $\sigma$ is a global trace (see, for example, [@HieronsU08]). $$\begin{aligned} \pi_p(\epsilon) & = & \epsilon \\ \pi_p((x/(z_1, \ldots, z_n))\sigma) & = & \pi_p(\sigma) \mbox{ if } x \not \in I_p \wedge z_p = - \\ \pi_p((x/(z_1, \ldots, z_n))\sigma) & = & x\pi_p(\sigma) \mbox{ if } x \in I_p \wedge z_p = - \\ \pi_p((x/(z_1, \ldots, z_n))\sigma) & = & z_p\pi_p(\sigma) \mbox{ if } x \not \in I_p \wedge z_p \neq - \\ \pi_p((x/(z_1, \ldots, z_n))\sigma) & = & xz_p\pi_p(\sigma) \mbox{ if } x \in I_p \wedge z_p \neq -\end{aligned}$$ Given a set $A$ of global traces and port $p$ we let $\pi_p(A)$ denote the set of projections of sequences in $A$. Thus, $\pi_p(A) = \{ \pi_p(\sigma) | \sigma \in A\}$. In the distributed test architecture, a local tester at port $p \in {{\mathcal P}}$ only observes events from ${{\mathcal Act}}_p$.
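To make the projection concrete, the following is a minimal Python sketch (an illustration, not part of the paper's formal development). A global trace is represented as a list of pairs $(x, y)$ in which $y$ is a tuple of per-port outputs with `None` standing for the null output $-$; the `port_of` map and the event names are assumptions made for the example.

```python
def proj(trace, p, port_of):
    """Local trace at port p: keep inputs applied at p and non-null outputs z_p."""
    local = []
    for x, y in trace:
        if port_of[x] == p:           # input observed at port p
            local.append(x)
        if y[p - 1] is not None:      # output z_p observed at port p
            local.append(y[p - 1])
    return tuple(local)

def same_projections(t1, t2, ports, port_of):
    """True iff t1 and t2 have the same local trace at every port."""
    return all(proj(t1, p, port_of) == proj(t2, p, port_of) for p in ports)

# Two-port example: swapping the step at which the port-2 output is produced
# changes the global trace but not the local traces.
port_of = {"x1": 1}
t1 = [("x1", ("y1", None)), ("x1", ("y1", "y2"))]
t2 = [("x1", ("y1", "y2")), ("x1", ("y1", None))]
print(same_projections(t1, t2, ports=(1, 2), port_of=port_of))   # True
```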
Thus, two global traces $\sigma$ and $\sigma'$ are indistinguishable if they have the same projections at every port and we denote this $\sigma \sim \sigma'$. More formally, we say that $\sigma \sim \sigma'$ if for all $p \in {{\mathcal P}}$ we have that $\pi_p(\sigma) = \pi_p(\sigma')$. Given an FSM $M$, we let ${{\mathcal L}}(M)$ denote the set of global sequences that are equivalent to elements of $L(M)$ under $\sim$. These are the sequences that are indistinguishable from sequences in $L(M)$ when distributed observations are made. Previous work has defined two conformance relations for testing from an FSM that reflect the observational power of distributed testing [@Hierons_distrib_Oracle]. In some situations the agents at the separate ports of the SUT will never interact with one another or share information with other agents that can interact. In such cases it is sufficient for the local trace observed at a port $p$ to be a local trace of $M$. This situation is captured by the following conformance relation. Given FSMs $N$ and $M$ with the same input and output alphabets and the same set of ports, $N {\sqsubseteq_w}M$ if for every global trace $\sigma \in L(N)$ and port $p$ there exists some $\sigma_p \in L(M)$ such that $\pi_p(\sigma_p) = \pi_p(\sigma)$. $N$ is then said to *weakly conform* to $M$. However, sometimes there is the potential for information from separate testers to be received by an external agent. For example, there may be a central controller that receives the observations made by each tester and thus knows the projection of the global trace at each port. This leads to the following stronger conformance relation. Given FSMs $N$ and $M$ with the same input and output alphabets and the same set of ports, $N {\sqsubseteq_s}M$ if for every global trace $\sigma \in L(N)$ there exists some $\sigma' \in L(M)$ such that $\sigma' \sim \sigma$. $N$ is then said to *strongly conform* to $M$. It is straightforward to see that given FSMs $N$ and $M$ we have that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}}(M)$. It is also clear that $N {\sqsubseteq_s}M$ implies that $N {\sqsubseteq_w}M$. In order to see that ${\sqsubseteq_s}$ is strictly stronger than ${\sqsubseteq_w}$ it is sufficient to consider the FSMs $M_1$ and $N_1$ shown in Figure \[fig:m1n1\]. Clearly we do not have that $N_1 {\sqsubseteq_s}M_1$ since $M_1$ has no global trace equivalent to $x_1/(y_1,y'_2)$ under $\sim$. However, for every global trace $\sigma$ of $N_1$ and port $p$ there is a global trace $\sigma'$ of $M_1$ such that $\pi_p(\sigma) = \pi_p(\sigma')$. Thus, we have that $N_1 {\sqsubseteq_w}M_1$. $$\UseTips \entrymodifiers = {} \xymatrix @+1cm { *++[o][F]{s_0} \ar@(ul,ur)[]^{x_1/(y_1,y_2)} \ar@(dl,dr)[]_{x_1/(y'_1,y'_2)} & & & *++[o][F]{s_0} \ar@(ul,ur)[]^{x_1/(y_1,y'_2)} \\ }$$ Test models for conformance {#section:models} =========================== In this section we investigate the problem of defining a language ${{\mathcal {\tilde L}}}(M)$ for an FSM $M$ such that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal {\tilde L}}}(M)$. The motivation is that if we are developing the SUT from $M$ and we do have some such ${{\mathcal {\tilde L}}}(M)$ then we can use standard approaches to refine ${{\mathcal {\tilde L}}}(M)$. 
In addition, if in testing we wish to test for ${\sqsubseteq_s}$ but we can connect the local testers to form a global tester then we should compare the global traces of the SUT with ${{\mathcal {\tilde L}}}(M)$: if we compare the global traces from the SUT with $L(M)$ then this could lead to the SUT $N$ being declared faulty even if $N {\sqsubseteq_s}M$. It would be particularly useful if we could find an FSM or IOTS $M'$ such that $L(M') = {{\mathcal {\tilde L}}}(M)$; we could then test $N$ against $M'$ using normal test methods and the usual conformance relation (trace inclusion). We start by considering the language ${{\mathcal L}}(M)$. Clearly, we have that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}}(M)$. However, if we can represent ${{\mathcal L}}(M)$ using an FSM or IOTS then for every $\sigma \in {{\mathcal L}}(M)$ and $\sigma' \in {pref}(\sigma)$ we must have that $\sigma' \in {{\mathcal L}}(M)$: ${{\mathcal L}}(M)$ must be prefix closed. \[prop:not\_prefix\_closed\] The language ${{\mathcal L}}(M)$ need not be prefix closed. Consider a DFSM $M$ such that $x_1/(y_1,y_2) x_1/(y_1,-) \in L(M)$. Then ${{\mathcal L}}(M)$ contains $x_1/(y_1,-) x_1/(y_1,y_2)$ but $x_1/(y_1,-) \not \in {{\mathcal L}}(M)$ and so ${{\mathcal L}}(M)$ is not prefix closed. The languages defined by FSMs and IOTSs are prefix closed and so we know that ${{\mathcal L}}(M)$ cannot always be represented by such a model. However, the languages defined by finite automata need not be prefix closed. Thus, Proposition \[prop:not\_prefix\_closed\] does not preclude the possibility of representing ${{\mathcal L}}(M)$ using finite automata; however, the following proposition does. It is already known from Mazurkiewicz trace theory that, where some elements of an alphabet commute (i.e. $ab = ba$), the set of sequences equivalent to those defined by an FA need not be regular (see, for example, [@diekert97]). It is straightforward to show that this also holds for FSMs. \[prop:lang\_M\_not\_regular\] Given FSM $M$, the language ${{\mathcal L}}(M)$ need not be regular. Consider the FSM $M_4$ shown in Figure \[fig:not\_regular\] and let $L = {{\mathcal L}}(M_4)$. Proof by contradiction: assume that $L$ is a regular language. Let $L'$ be the set of global traces in which all outputs are $(-,-)$. Clearly $L'$ is a regular language. Thus, since $L$ is regular we must have that $L'' = L \cap L'$ is regular. The language $L''$ is the set of global traces from ${{\mathcal L}}(M_4)$ that have null output. Thus, $L''$ is the set of all global traces with inputs drawn from $\{x_1,x_2\}$ and that have null output and in which the number of instances of $x_2$ is either equal to the number of instances of $x_1$ or is one less than this. However, this language is not regular, providing a contradiction as required. $$\UseTips \entrymodifiers = {} \xymatrix @+1cm { *++[o][F]{s_0} \ar@/^1.5pc/[rrr]^{x_1/(-,-)} \ar[d]^{x_2/(y'_1,y'_2)} & & & *++[o][F]{s_1} \ar@/^1.5pc/[lll]^{x_2/(-,-)} \ar[d]^{x_1/(y_1,y_2)} \\ *++[o][F]{s_3} \ar@(dl,ul)[]^{x_1/(y'_1,y'_2)} \ar@(dr,ur)[]_{x_2/(y'_1,y'_2)} & & & *++[o][F]{s_4} \ar@(dl,ul)[]^{x_1/(y_1,y_2)} \ar@(dr,ur)[]_{x_2/(y_1,y_2)} \\ }$$ We have that ${{\mathcal L}}(M)$ has the property we want ($N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}}(M)$) but that ${{\mathcal L}}(M)$ need not be regular. The observation that ${{\mathcal L}}(M)$ is not prefix closed tells us that it can contain strings that are in no $L(N)$ for an FSM or IOTS $N$ such that $N {\sqsubseteq_s}M$.
It seems natural to remove these global traces. Given FSM $M$ let ${{\mathcal L}_{pc}}(M)$ denote the set of global traces from ${{\mathcal L}}(M)$ whose prefixes are also in ${{\mathcal L}}(M)$. More formally, ${{\mathcal L}_{pc}}(M) = \{\sigma \in {{\mathcal L}}(M) | {pref}(\sigma) \subseteq {{\mathcal L}}(M)\}$. Given FSMs $M$ and $N$ we have that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}_{pc}}(M)$. In forming ${{\mathcal L}_{pc}}(M)$ we only remove from ${{\mathcal L}}(M)$ a global trace $\sigma$ if some of its prefixes are not in ${{\mathcal L}}(M)$, and so $\sigma$ cannot be a global trace of an FSM $N$ such that $N {\sqsubseteq_s}M$. The result therefore follows from the fact that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}}(M)$. Clearly, ${{\mathcal L}_{pc}}(M) \subseteq {{\mathcal L}}(M)$ and so it is natural to ask whether there remain any global traces in ${{\mathcal L}_{pc}}(M)$ that cannot appear in FSMs that conform to $M$ under ${\sqsubseteq_s}$. \[prop:DFSM\_with\_sigma\] Given an FSM $M$ and a global trace $\sigma \in {{\mathcal L}_{pc}}(M)$ there exists an FSM $N$ such that $N {\sqsubseteq_s}M$ and $\sigma \in L(N)$. Further, this is true even if we restrict $N$ to being deterministic. Let $\sigma$ be a global trace of length $k$ and so $\sigma = x_1/y_1 \ldots x_k/y_k$ for some $x_1, \ldots, x_k \in I$ and $y_1, \ldots, y_k \in O$. For all $1 \leq i \leq k$ let $\sigma_i$ denote the prefix of $\sigma$ with length $i$. We will construct an FSM $N'$ in the following way. First, define an initial state $s'_0$ and for every $1 \leq i < k$ we define a state $s'_i$ and add the transition $(s'_{i-1},s'_i,x_i/y_i)$. Since $\sigma \in {{\mathcal L}_{pc}}(M)$, for each $1 \leq i \leq k$ we choose a (not necessarily unique) state of $M$, which we call $s_i$, that is reached from the initial state of $M$ using a global trace $\sigma'_i \sim \sigma_i$. We add a copy of $M$ to the structure already defined and will use transitions to the states of this copy of $M$ in order to complete $N'$. First, we add the transition $(s'_{k-1},s_k,x_k/y_k)$ so that if we follow $\sigma$ by further input in $N'$ then we will obtain $\sigma$ followed by a global trace $\sigma'$ from $s_k$ in $M$. For all $1 \leq i < k$, $x \in I \setminus \{ x_{i} \}$, and $(s',y) \in h(s_{i-1},x)$, we add the transition $(s'_{i-1},s',x/y)$. Every global trace in $N'$ is either a global trace of $M$ or is $\sigma_i \sigma'$ for a $\sigma_i$ (which is in ${{\mathcal L}}(M)$) and a global trace $\sigma'$ such that $\sigma' \in L_{M}(s_i)$, and so $\sigma_i \sigma' \in {{\mathcal L}}(M)$. Further, it is clear that there is some DFSM $N$ such that $L(N) \subseteq L(N')$ and $\sigma \in L(N)$. Thus, $N$ is a DFSM with $L(N) \subseteq L(N') \subseteq {{\mathcal L}}(M)$ and so we have that $N {\sqsubseteq_s}M$ as required. Thus, ${{\mathcal L}_{pc}}(M)$ is the smallest language such that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq {{\mathcal L}_{pc}}(M)$ and so it appears to be the language we want. Figure \[fig:lang\_sufficient\] shows two FSMs $M$ and $M'$ such that $L(M') = {{\mathcal L}_{pc}}(M)$. Since $L(M') = {{\mathcal L}_{pc}}(M)$, for an FSM $N$ we have that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq L(M')$. This works because the only global traces that are in ${{\mathcal L}}(M)$ but not $L(M)$ are those of the form $(x_1/(y_1,-))^*x_2/(-,y'_2)((x_1/(y_1,-)) + x_2/(-,y'_2))^*$ and these are included in $L(M')$.
$$\UseTips \entrymodifiers = {} \xymatrix @+0cm { & *++[o][F]{s_1} \ar@(dl,dr)[]_{x_2/(-,y_2)} \ar@(ul,ur)[]^{x_1/(y_1,-)} &&& & *++[o][F]{s_1} \ar@(dl,dr)[]_{x_2/(-,y_2)} \ar@(ul,ur)[]^{x_1/(y_1,-)} \\ *++[o][F]{s_0} \ar[ru]^{x_1/(y_1,-)} \ar[rd]_{x_2/(-,y'_2)} &&&&*++[o][F]{s_0} \ar@(ul,dl)[]_{x_1/(y_1,-)} \ar[ru]^{x_1/(y_1,-)} \ar[rd]_{x_2/(-,y'_2)} \\ & *++[o][F]{s_2} \ar@(dl,dr)[]_{x_2/(-,y'_2)} \ar@(ul,ur)[]^{x_1/(y_1,-)} &&& & *++[o][F]{s_2} \ar@(dl,dr)[]_{x_2/(-,y'_2)} \ar@(ul,ur)[]^{x_1/(y_1,-)} \\ }$$ We have shown that we are required to keep all of the sequences in ${{\mathcal L}_{pc}}(M)$. Now we show that if $L = L(M')$ for some FSM $M'$ and we have that ${{\mathcal L}_{pc}}(M) \subset L(M')$ then the language $L(M')$ is too large. We do this by proving that there can be a reduction $N$ of $M'$ that does not conform to $M$ under ${\sqsubseteq_s}$ and this is the case even if we restrict $N$ to be deterministic. \[prop:langpre\_M\_cannot\_add\] Given an FSM $M$, if $L' = L(M')$ for some FSM $M'$ and ${{\mathcal L}_{pc}}(M) \subset L'$ then there is an FSM $N$ such that $L(N) \subseteq L'$ but we do not have that $N {\sqsubseteq_s}M$. Further, this result holds even if we restrict $N$ to being deterministic. Since ${{\mathcal L}_{pc}}(M) \subset L'$ there is some global trace $\sigma \in L(M') \setminus {{\mathcal L}_{pc}}(M)$. Since $L' = L(M')$ for an FSM $M'$, it is clear that there exists a DFSM $N$ such that $L(N) \subseteq L'$ and $\sigma \in L(N)$. It is now sufficient to observe that $\sigma$ and all of its prefixes are in $L(N)$ and since $\sigma \not \in {{\mathcal L}_{pc}}(M)$ we must have that at least one of these sequences is not in ${{\mathcal L}}(M)$. Thus, the language we are looking for must contain ${{\mathcal L}_{pc}}(M)$ and if we restrict attention to languages defined by FSMs, then the language cannot contain any additional global traces[^3]. Unfortunately, however, ${{\mathcal L}_{pc}}(M)$ need not be regular. \[prop:langpre\_M\_not\_regular\] The language ${{\mathcal L}_{pc}}(M)$ need not be regular. We will use the FSM $M_5$ shown in Figure \[fig:pre\_not\_regular\]. Proof by contradiction: assume that ${{\mathcal L}_{pc}}(M_5)$ is regular. Let $L'$ denote the regular language $(x_2/(-,-))^*(x_1/(-,-))^*$. Consider the language ${{\mathcal L}}(M_5) \cap L'$, which is the set of sequences of the form $(x_2/(-,-))^*(x_1/(-,-))^*$ where the number of instances of $x_2$ can be at worst one less than the number of instances of $x_1$. This is prefix closed since each element of ${{\mathcal L}}(M_5) \cap L'$ starts with all of its instances of $x_2/(-,-)$. Clearly ${{\mathcal L}}(M_5) \cap L'$ is not regular. Since ${{\mathcal L}}(M_5) \cap L'$ is prefix closed, ${{\mathcal L}_{pc}}(M_5) \cap L' = {{\mathcal L}}(M_5) \cap L'$. As a result, we know that ${{\mathcal L}_{pc}}(M_5) \cap L' $ is not regular. But $L'$ is regular and so if ${{\mathcal L}_{pc}}(M_5)$ was regular then ${{\mathcal L}_{pc}}(M_5) \cap L'$ would also be regular. This provides a contradiction as required. $$\UseTips \entrymodifiers = {} \xymatrix @+1cm { *++[o][F]{s_0} \ar@(ul,dl)[]_{x_2/(-,-)} \ar@/^1.5pc/[r]^{x_1/(-,-)} & *++[o][F]{s_1} \ar@/^1.5pc/[l]^{x_2/(-,-)} \ar[d]^{x_1/(y_1,y_2)} \\ & *++[o][F]{s_2} \ar@(ul,dl)[]_{x_1/(-,-)} \ar@(ur,dr)[]^{x_2/(-,-)} \\ }$$ We therefore obtain the following result. Given an FSM $M$ there need not exist an FSM $M'$ with the property that for all FSMs $N$ we have that $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq L(M')$. 
In addition, this result holds even if we restrict attention to deterministic $N$. By Propositions \[prop:DFSM\_with\_sigma\] and \[prop:langpre\_M\_cannot\_add\] we know that we must have that $L(M') = {{\mathcal L}_{pc}}(M)$. The result thus follows from Proposition \[prop:langpre\_M\_not\_regular\]. If ${{\mathcal L}_{pc}}(M)$ is regular then there is a corresponding FSM. It would thus be interesting to explore conditions under which ${{\mathcal L}_{pc}}(M)$ is guaranteed to be regular and also properties of ${{\mathcal L}_{pc}}(M)$ when this is not regular. There may also be scope to represent ${{\mathcal L}_{pc}}(M)$ using an IOTS. Conformance and multi-tape automata {#section:multitape} =================================== While we can decide (in polynomial time) whether $N {\sqsubseteq_w}M$, this is quite a weak conformance relation since it does not allow us to bring together local traces observed at the separate ports. It seems likely that normally the implementation relation ${\sqsubseteq_s}$ will be more suitable and so we consider the problem of deciding whether $N {\sqsubseteq_s}M$. In this section we study the problem of deciding language inclusion for multi-tape automata; in Section \[section:decide\_strong\] we use the results proved here regarding multi-tape automata to show that it is generally undecidable whether $N {\sqsubseteq_s}M$ for FSMs $M$ and $N$. We first define multi-tape FA [@rabin59]. An $r$-tape FA with disjoint alphabets $\Sigma_i$, $1 \leq i \leq r$ is a tuple $(S,s_0,\Sigma,h,F)$ in which $S$ is a finite set of states, $s_0 \in S$ is the initial state $F \subseteq S$ is the set of final states and $h : S \times \bigcup_{i=1}^r \Sigma_i \times S$ is the transition relation. Then $A$ accepts a tuple $(w_1, \ldots, w_r) \in \Sigma_1^* \times \ldots \times \Sigma_r^*$ if and only if there is some sequence $\sigma \in (\bigcup_{i=1}^r \Sigma_i)^*$ that takes $A$ to a final state such that $\pi_i(\sigma) = w_i$ for all $1 \leq i \leq r$. We let ${{\mathcal L}}(N)$ denote the set of tuples accepted by $N$. Given a multi-tape FA $N$ with $r$ tapes, we will let $L(N)$ denote the language in $(\Sigma_1 \cup \ldots \cup \Sigma_r)^*$ of the corresponding FA. We obtain $L(N)$ by treating $N$ as a FA with alphabet $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_r$. It might seem that deciding whether $N {\sqsubseteq_s}M$ is similar to deciding whether, for two multi-tape FA $N'$ and $M'$, the language defined by $N'$ is a subset of that defined by $M'$. This problem, regarding multi-tape FA, is known to be undecidable [@rabin59]. However, the proof of the result regarding multi-tape FA uses FA in which not all states are final and in FSMs there is no concept of a state not being a final state. Thus, the results of [@rabin59] are not directly applicable to the problem of deciding whether $N {\sqsubseteq_s}M$ for FSMs $N$ and $M$ and it appears that the corresponding problem, for multi-tape FA in which all states are final, has not previously been solved. In this section we prove that language inclusion is generally undecidable for multi-tape FA in which all states are final states. Before we consider decidability issues, we investigate the corresponding languages and closure properties. \[prop:closure\] Let us suppose that $N_1=(S,s_0,h_1,S)$ and $N_2 = (Q,q_0,h_2,Q)$ are multi-tape FA with the same number of tapes and the same alphabets and also that all of the states of $N_1$ and $N_2$ are final states. Then we have the following. 1. 
There exists a multi-tape FA $M$ such that ${{\mathcal L}}(M) = {{\mathcal L}}(N_1) \cup {{\mathcal L}}(N_2)$ and all of the states of $M$ are final states. 2. There exists a multi-tape FA $M$ such that ${{\mathcal L}}(M) = {{\mathcal L}}(N_1){{\mathcal L}}(N_2)$ and all of the states of $M$ are final states. 3. There need not exist a multi-tape FA $M$ such that ${{\mathcal L}}(M) = {{\mathcal L}}(N_1) \setminus {{\mathcal L}}(N_2)$ and all of the states of $M$ are final states. 4. There need not exist a multi-tape FA $M$ such that ${{\mathcal L}}(M) = {{\mathcal L}}(N_1) \cap {{\mathcal L}}(N_2)$ and all of the states of $M$ are final states. We will use $A \oplus B$, for sets $A$ and $B$, to denote the disjoint union of $A$ and $B$. For the first result it is sufficient to define the FA $(S \oplus Q \oplus \{r_0\},r_0,h',S \oplus Q \oplus \{r_0\})$ for $r_0 \not \in S \oplus Q$, in which $h'$ is the union of $h_1$ and $h_2$ plus the following transitions: for every $(s_0,a,s) \in h_1$ we include in $h'$ the tuple $(r_0,a,s)$; and for every $(q_0,a,q) \in h_2$ we include in $h'$ the tuple $(r_0,a,q)$. For the second part, it is sufficient to take the disjoint union of $N_1$ and $N_2$ and for every transition $(s,s',a)$ of $N_1$ add a transition $(s,q_0,a)$. For the third part, it is sufficient to observe that for any choice of $N_1$ and $N_2$ we have that the empty sequence is in ${{\mathcal L}}(N_1)$ and ${{\mathcal L}}(N_2)$ and so is not in ${{\mathcal L}}(N_1) \setminus {{\mathcal L}}(N_2)$. Thus, it is sufficient to choose any $N_1$ and $N_2$ such that ${{\mathcal L}}(N_1) \setminus {{\mathcal L}}(N_2)$ is non-empty. For the last part, let us suppose that we have two tapes with alphabets $\{a_1\}$ and $\{a_2\}$, and let $L(N_1) = \{\epsilon, a_1, a_1a_2\}$ and $L(N_2) = \{\epsilon,a_2,a_2a_1\}$, so that ${{\mathcal L}}(N_1) \cap {{\mathcal L}}(N_2) = \{\epsilon,a_1a_2,a_2a_1\}$. We will use Post’s Correspondence Problem to prove that language inclusion is undecidable. Given sequences $\alpha_1, \ldots, \alpha_m$ and $\beta_1, \ldots, \beta_m$ over an alphabet $\Sigma$, Post’s Correspondence Problem (PCP) is to decide whether there is a non-empty sequence $i_1, \ldots, i_k$ of indices from $[1,m]$ such that $\alpha_{i_1} \ldots \alpha_{i_k} = \beta_{i_1} \ldots \beta_{i_k}$. It is known that Post’s Correspondence Problem is undecidable [@post46]. \[theorem:PCP\] Post’s Correspondence Problem is undecidable. We now prove the main result of this section. \[theorem1\] Let us suppose that $N$ and $M$ are multi-tape FA in which all states are final states. The following problem is undecidable, even when there are only two tapes: do we have that ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$? We will show that if we can solve this problem then we can also solve Post’s Correspondence Problem. We therefore assume that we have been given an instance of the PCP with sequences $\alpha_1, \ldots, \alpha_m$ and $\beta_1, \ldots, \beta_m$ with alphabet $\Sigma$. To allow elements of $\Sigma$ to be on both tapes we use two disjoint copies, $\Sigma_1 = \{f_1(a) | a \in \Sigma\}$ and $\Sigma_2 = \{f_2(a) | a \in \Sigma\}$, of $\Sigma$. Given a sequence $x_1 \ldots x_i$ and $j \in \{1,2\}$ we let $f_j(x_1 \ldots x_i)$ denote $f_j(x_1) \ldots f_j(x_i)$. We consider multi-tape automata with two tapes with alphabets $\Sigma_1 \cup \{x, x'\}$ and $\Sigma_2$ respectively, where $x, x'$ are chosen so that they are not in $\Sigma_1 \cup \Sigma_2$.
We let $N$ denote an FA such that $L(N)$ is the language that contains all sequences in the regular language $x((f_1(\alpha_{i_1})f_2(\beta_{i_1}) + \allowbreak \ldots + \allowbreak (f_1(\alpha_{i_k})f_2(\beta_{i_k}))^*x'$ and all prefixes of such sequences. Clearly, such an FA $N$ exists. Consider all sequences in ${{\mathcal L}}(N)$ that contain an $x$ and an $x'$ and let $\sigma_a$ and $\sigma_b$ be the sequences in the two tapes with $x$ and $x'$ removed. Then we must have that $\sigma_a$ is of the form $f_1(\alpha_{i_1}) \ldots f_1(\alpha_{i_k})$ and $\sigma_b$ is of the form $f_2(\beta_{i_1}) \ldots f_2(\beta_{i_k})$ with $i_1, \ldots, i_k \in [1,m]$. In addition, all such combinations correspond to sequences in $L(N)$. Thus, there is a solution to this instance of the PCP if and only if ${{\mathcal L}}(N)$ contains a tuple $(x\sigma_ax',\sigma_b)$ in which $\sigma_a = \sigma_b$. The FA $M$ that defines language $L(M)$ is shown in Figure \[fig:m\] in which $(a,-)$ denotes elements all of the form $f_1(a)$ and $(-,a)$ denotes all elements of the form $f_2(a)$, $a \in \Sigma$. Further, $(a,b)$ denotes sequences of the form $f_1(a)f_2(b)$ with $a,b \in \Sigma$ and $a \neq b$. We use $(a,a)$ to denote sequences of the form $f_1(a)f_2(a)$ with $a \in \Sigma$. In all cases where we have sequences with length $2$, this represents a cycle of length $2$. $$\UseTips \entrymodifiers = {} \xymatrix @+1cm { *+++[o][F]{s_0} \ar[rd]^{x} & *+++[o][F]{s_1} \ar@(ul,ur)[]^{(-,a)} \ar[r]^{x'} & *+++[o][F]{s_4} \\ & *+++[o][F]{s'_0} \ar@(ul,dl)[]_{(a,a)} \ar[u]^{(-,a)} \ar[d]_{(a,-)} \ar[r]^{(a,b)} & *+++[o][F]{s_2} \ar@(ul,ur)^{(a,-),(-,a)} \ar[r]^{x'} & *+++[o][F]{s_5} \\ & *+++[o][F]{s_3} \ar@(dl,dr)[]_{(a,-)} \ar[r]^{x'} & *+++[o][F]{s_5} }$$ Now consider the problem of deciding whether ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$. Specifically, we will focus on the problem of deciding whether ${{\mathcal L}}(N)$ is not a subset of ${{\mathcal L}}(M)$ and so ${{\mathcal L}}(N) \setminus {{\mathcal L}}(M)$ is non-empty. First, note that in ${{\mathcal L}}(M)$ we have: 1. The language defined by paths that pass through $s_1$ contains all tuples of the form $(xf_1(w_1),f_2(w_2))$ or $(xf_1(w_1)x',f_2(w_2))$ such that $w_1, w_2 \in \Sigma^*$ and $w_1$ is a strict prefix of $w_2$. 2. The language defined by paths that pass through $s_3$ contains all tuples of the form $(xf_1(w_1),f_2(w_2))$ or $(xf_1(w_1)x',f_2(w_2))$ such that $w_1, w_2 \in \Sigma^*$ and $w_2$ is a strict prefix of $w_1$. 3. The language defined by paths that pass through $s_2$ contains all tuples of the form $(xf_1(w_1w_3),f_2(w_2w_4))$ or $(xf_1(w_1w_3)x',f_2(w_2w_4))$ in which $w_1 \in \Sigma^*$ and $w_2 \in \Sigma^*$ have the same length, $w_1 \neq w_2$, and $w_3, w_4 \in \Sigma^*$. 4. The language defined by paths that do not leave $s_0$ contains all tuples of the form $(xf_1(w),f_2(w))$. Consider tuples in ${{\mathcal L}}(M)$ that do not contain $x'$ and thus tuples of the form $(xf_1(w_1),f_2(w_2))$ in which $w_1, w_2 \in \Sigma^*$. Then paths that pass through $s_1$ define all such tuples in which $w_1$ is a strict prefix of $w_2$ and paths that pass through $s_3$ define all such tuples in which $w_2$ is a strict prefix of $w_1$. In addition, paths that do not leave $s_0$ define all such tuples in which $w_1 = w_2$ and paths that pass through $s_2$ define all such tuples in which $w_1$ and $w_2$ differ after a (possibly empty) common prefix. 
Thus, ${{\mathcal L}}(M)$ defines all tuples of the form $(xf_1(w_1),f_2(w_2))$ in which $w_1, w_2 \in \Sigma^*$ and so all tuples in ${{\mathcal L}}(N)$ that do not contain $x'$ are also in ${{\mathcal L}}(M)$. Now, consider the tuples in ${{\mathcal L}}(M)$ that contain $x'$. These are of the form $(xf_1(w_1)x',f_2(w_2))$ and are defined by paths that pass through $s_1$, $s_2$ and $s_3$. By examining the languages defined by paths that pass through these states we find that ${{\mathcal L}}(M)$ contains the set of tuples of this form in which $w_1 \neq w_2$. Thus, ${{\mathcal L}}(N) \setminus {{\mathcal L}}(M)$ is non-empty if and only if ${{\mathcal L}}(N)$ contains a tuple of the form $(xf_1(w)x',f_2(w))$. But we know that this is the case if and only if there is a solution to this instance of the PCP and so the result follows from Theorem \[theorem:PCP\]. For the sake of completeness, we now prove some additional decidability results regarding multi-tape automata in which all states are final. \[theorem2\] Let us suppose that $N$ and $M$ are multi-tape FA in which all states are final states. The following problem is undecidable, even when there are only three tapes: do we have that ${{\mathcal L}}(N) \cap {{\mathcal L}}(M)$ contains a non-empty sequence. We assume that we have been given an instance of the PCP with sequences $\alpha_1, \ldots, \alpha_m$ and $\beta_1, \ldots, \beta_m$ with alphabet $\Sigma$ and follow an approach similar to that used in the proof of Theorem \[theorem1\]. To allow elements of $\Sigma$ on two tapes we use two copies of elements of $\Sigma$ and let $\Sigma_1 = \{f_1(a) | a \in \Sigma\} \cup \{x\}$, $\Sigma_2 = \{f_2(a) | a \in \Sigma\}$, and $\Sigma_3 = \{x'\}$. Given a sequence $a_1, \ldots, a_i \in \Sigma^*$ and $j \in \{1,2\}$ we let $f_j(a_1 \ldots a_i)$ denote $f_j(a_1) \ldots f_j(a_i)$. We consider multi-tape automata with three tapes with alphabets $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$. We let $N$ denote such an FA such that $L(N)$ contains all tuples formed by sequences in the regular language $x((f_1(\alpha_{i_1})f_2(\beta_{i_1}) + \allowbreak \ldots + \allowbreak (f_1(\alpha_{i_k})f_2(\beta_{i_k}))^+x'$ and all prefixes of such sequences. Note that $xx'$ is not contained in $L(N)$. In addition, we let $M$ denote the multi-tape automaton defined by the following: 1. There is a transition from $s_0$ to state $s_1$ and this has label $x'$; 2. For all $a\in\Sigma$ there is a cycle starting and ending at $s_1$ with label $f_1(a)f_2(a)$; 3. There is a transition from $s_1$ to $s_2$, not involved in the cycles, with label $x$. 4. There are no transition from $s_2$. Thus, all elements of ${{\mathcal L}}(M)$ are either $(\epsilon,\epsilon,\epsilon)$ or contain $x'$. As a result, if there is a non-empty element of ${{\mathcal L}}(N) \cap {{\mathcal L}}(M)$ then this must contain both $x$ and $x'$. It is now sufficient to observe that this is the case if and only if there is a solution to this instance of the PCP. Finally, we prove that equivalence is undecidable for multi-tape FA in which all states are final states. \[theorem3\] Let us suppose that $N$ and $M$ are multi-tape FA in which all states are final states. The following problem is undecidable, even when there are only two tapes: do we have that ${{\mathcal L}}(N) = {{\mathcal L}}(M)$? First observe that given sets $A$ and $B$ we have that $A \subseteq B$ if and only if $A \cup B = B$. 
Let us suppose that we have two multi-tape automata $N_1$ and $N_2$ with the same numbers of tapes and the same alphabets and assume that all states of $N_1$ and $N_2$ are final states. By Proposition \[prop:closure\] we know that we can construct a multi-tape automaton $N_3$ such that ${{\mathcal L}}(N_3) = {{\mathcal L}}(N_1) \cup {{\mathcal L}}(N_2)$ and all states of $N_3$ are final states. Thus, if we can decide whether ${{\mathcal L}}(N) = {{\mathcal L}}(M)$ for two multi-tape FA that have only final states then we can also decide whether ${{\mathcal L}}(N_1) \cup {{\mathcal L}}(N_2) = {{\mathcal L}}(N_2)$ for two multi-tape FA that only have final states. However, this holds if and only if ${{\mathcal L}}(N_1) \subseteq {{\mathcal L}}(N_2)$. The result thus follows from Theorem \[theorem1\]. We now show how we can represent the problem of deciding language inclusion for multi-tape FA in terms of deciding whether $N {\sqsubseteq_s}M$ for suitable FSMs $N$ and $M$. Deciding Strong Conformance {#section:decide_strong} =========================== We have proved some decidability results for multi-tape FA in which all states are final states. However, we are interested in FSMs and here a transition has an input/output pair as a label. We now show how a multi-tape FA can be represented using an FSM (with multiple ports) before using this result to prove that $N {\sqsubseteq_s}M$ is generally undecidable for FSMs $N$ and $M$. In order to extend Theorem \[theorem1\] to FSMs we define a function that takes a multi-port finite automaton and returns an FSM. Let us suppose that $N = (S, s_0,\Sigma, h, S)$ is a FA with $r$ tapes with alphabets $\Sigma_1, \ldots, \Sigma_r$. We define the FSM ${{\mathcal F}}(N)$ with $r+1$ ports as defined below in which for all $1 \leq p \leq r$ we have that the input alphabet of $N$ at $p$ is $\Sigma_p$ and the output alphabet is empty and further we have that the input alphabet at port $r+1$ is empty and the output alphabet at $r+1$ is $\{0,1\}$. In the following for $a \in \{0,1\}$ we use $a_{k}$ to denote the $k$-tuple whose first $k-1$ elements are empty and whose $k$th element is $a$. ${{\mathcal F}}(N) = (S\cup\{{s_{e}}\},s_0,\Sigma,\{0_{n+1},1_{n+1}\},h')$ in which ${s_{e}}\not \in S$, for all $z \in \Sigma$ we have that $h'({s_{e}},z) = \{({s_{e}},0_{r+1})\}$ and for all $s \in S$ and $z \in \Sigma$ we have that $h'(s,z)$ is defined by the following: 1. If $h(s,z) = S' \neq \emptyset$ then $h'(s,z) = \{(s',1_{r+1}),(s',0_{r+1}) | s' \in S'\}$; 2. If $h(s,z) = \emptyset$ then $h'(s,z) = \{({s_{e}},0_{r+1})\}$. The idea is that while following a path of $N$ the FSM ${{\mathcal F}}(N)$ can produce either $0$ or $1$ at port $r+1$ in response to each input but once we diverge from such a path the FSM can then only produce $0$ (at $r+1$) in response to an input. \[lemma:nfsm\] Let us suppose that $N$ and $M$ are $r$-tape FA with alphabets $\Sigma_1, \ldots, \Sigma_r$. Then ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$ if and only if ${{\mathcal F}}(N) {\sqsubseteq_s}{{\mathcal F}}(M)$. First assume that ${{\mathcal F}}(N) {\sqsubseteq_s}{{\mathcal F}}(M)$ and we are required to prove that ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$. Assume that $\sigma \in {{\mathcal L}}(N)$ and so there exists some $\sigma' \sim \sigma$ such that $\sigma' \in L(N)$. Since $\sigma' \in L(N)$ we have that $L({{\mathcal F}}(N))$ contains the global trace $\rho'$ in which the input portion is $\sigma'$ and each output is $1_{r+1}$. 
Since ${{\mathcal F}}(N) {\sqsubseteq_s}{{\mathcal F}}(M)$ we must have that there is some $\rho'' \in L({{\mathcal F}}(M))$ such that $\rho'' \sim \rho'$. However, since the outputs are all at port $r+1$ and the inputs are at ports $1, \ldots, r$ we must have that $\rho''$ has output portion that contains only $1_{r+1}$ and input portion $\sigma''$ for some $\sigma'' \sim \sigma'$. Thus, we must have that $\sigma'' \in L(M)$. Since $\sigma \sim \sigma'$ and $\sigma' \sim \sigma''$ we must have that $\sigma \in {{\mathcal L}}(M)$ as required. Now assume that ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$ and we are required to prove that ${{\mathcal F}}(N) {\sqsubseteq_s}{{\mathcal F}}(M)$. Let $\rho$ be some element of $L({{\mathcal F}}(N))$ and it is sufficient to prove that $\rho \in {{\mathcal L}}({{\mathcal F}}(M))$. Then $\rho = \rho_1\rho_2$ for some maximal $\rho_2$ such that all outputs in $\rho_2$ are $0_{r+1}$. Let the input portions of $\rho_1$ and $\rho_2$ be $\sigma_1$ and $\sigma_2$ respectively. By the maximality of $\rho_2$ we must have that $\rho_1$ is either empty or ends in output $1_{r+1}$. Thus, $\sigma_1 \in L(N)$ and so, since ${{\mathcal L}}(N) \subseteq {{\mathcal L}}(M)$, there exists $\sigma'_1 \sim \sigma_1$ with $\sigma'_1 \in L(M)$. But this means that $M$ can produce the output portion of $\rho_1$ in response to $\sigma'_1$ and so there exists $\rho'_1 \in L({{\mathcal F}}(M))$ with $\rho' _1\sim \rho_1$. By the definition of ${{\mathcal F}}(M)$, since all outputs in $\rho_2$ are $0_{r+1}$ we have that $\rho'=\rho'_1\rho_2 \in L({{\mathcal F}}(M))$. The result therefore follows from observing that $\rho' = \rho'_1\rho_2 \sim \rho_1\rho_2 = \rho$. \[theorem:FSM\_undecidable\] The following problem is undecidable: given two multi-port FSMs $N$ and $M$ with the same alphabets, do we have that $N {\sqsubseteq_s}M$? This follows from Lemma \[lemma:nfsm\] and Theorem \[theorem1\]. When considering FSM $M$ with only one port, we represent the problem of deciding whether two states $s$ and $s'$ of $M$ are equivalent by comparing the FSMs $M_{s}$ and $M_{s'}$, formed by starting $M$ in $s$ and $s'$ respectively. However, we also have that the general problem is undecidable. \[theorem:equiv\_states\] The following problem is undecidable: given a multi-port FSM $M$ and two states $s$ and $s'$ of $M$, are $s$ and $s'$ equivalent. We will prove that we can express the problem of deciding whether multi-port FSMs are equivalent in terms of state equivalence. We therefore assume that we have multi-port FSMs $M_1$ and $M_2$ with the same input and output alphabets and we wish to decide whether $M_1$ an $M_2$ are equivalent. Let $s_{01}$ and $s_{02}$ denote the initial states of $M_1$ and $M_2$ respectively. We will construct an FSM $M$ in the following way. We add a new port $p$ and input $x_p$ at $p$. The input of $x_p$ in the initial state $s_0$ of $M$ moves $M$ non-deterministically to either $s_{01}$ or $s_{02}$ and produces no output. All other input in state $s_0$ moves $M$ to a state $s'_0 \neq s_0$, that is not a state of $M_1$ or $M_2$, from which all transitions are self-loops. The input of $x_p$ in a state of $M_1$ or $M_2$ leads to no output and no change of state. Now we can observe that a sequence in the language defined by starting $M$ in state $s_{0i}$, $i \in \{0,1\}$, is equivalent under $\sim$ to a sequence from ${{\mathcal L}}(M_i)$ followed by a sequence of zero or more instances of $x_p$. 
Thus, $s_{01}$ and $s_{02}$ are equivalent if and only if $M_1$ and $M_2$ are equivalent. The result thus follows from Theorem \[theorem:FSM\_undecidable\] and the fact that if we can decide equivalence then we can also decide inclusion. We now consider problems relating to distinguishing FSMs and states in testing. We can only distinguish between FSMs and states on the basis of observations and each observation, in distributed testing, defines an equivalence class of $\sim$. It is possible to distinguish FSM $N$ from FSM $M$ in distributed testing if and only if ${{\mathcal L}}(N) \not \subseteq {{\mathcal L}}(M)$. Further, it is possible to distinguish between FSMs $N$ and $M$ in distributed testing if and only if ${{\mathcal L}}(N) \not \subseteq {{\mathcal L}}(M)$ and ${{\mathcal L}}(M) \not \subseteq {{\mathcal L}}(N)$. The first part of the definition says that we can only distinguish $N$ from $M$ in distributed testing if there is some global trace of $N$ that is not observationally equivalent to a global trace of $M$. The second part strengthens this by requiring that we can distinguish $N$ from $M$ and also $M$ from $N$. The following is an immediate consequence of Theorem \[theorem:FSM\_undecidable\]. The following problems are generally undecidable in distributed testing. - Is it possible to distinguish FSM $N$ from FSM $M$? - Is it possible to distinguish between FSMs $N$ and $M$? Similar to the proof of Theorem \[theorem:equiv\_states\], we can express the problem of distinguishing between two FSMs as that of distinguishing two states $s$ and $s'$ of an FSM $M$. Thus, the above shows that it is undecidable whether there is some test case that is capable of distinguishing two states of an FSM or two FSMs. This complements a previous result [@Hierons10_SICOMP], that it is undecidable whether there is some test case that is *guaranteed* to distinguish two states or FSMs. Finally, we give conditions under which equivalence and inclusion are decidable. The first uses the notion of a Parikh Image of a sequence [@Parikh66]. Given a sequence $\sigma \in \Sigma^*$, where we have ordered the elements of $\Sigma$ as $a_1, \ldots, a_m$, the Parikh Image of $\sigma$ is the tuple $(x_1, \ldots, x_m)$ in which for all $1 \leq i \leq m$ we have that $\sigma$ contains $x_i$ instances of $a_i$. Given a set $A$ of sequences, the Parikh Image of $A$ is the set of tuples formed by taking the Parikh Image of each sequence in $A$. There are classes of languages where the Parikh Image of the language is guaranteed to be a semi-linear set. A linear set is defined by a set of vectors $v_0, \ldots, v_k$ that have the same dimension. Specifically, the linear set defined by $v_0, \ldots, v_k$ is the set of vectors $v_0 + n_1v_1 + \ldots + n_kv_k$ where $n_1, \ldots, n_k$ are all non-negative integers. A semi-linear set is a finite union of linear sets. Let us suppose that multi-port FSMs $N$ and $M$ have the same input and output alphabets and that for each port $p \in {{\mathcal P}}$ we have that $|{{\mathcal Act}}_p| \leq 1$. Then it is decidable whether $N {\sqsubseteq_s}M$. Since for all $p \in {{\mathcal P}}$ we have that $|{{\mathcal Act}}_p| \leq 1$, for each $\sigma \in {{\mathcal Act}}^*$ we have that $\sigma$ is equivalent under $\sim$ to all its permutations. Thus, the Parikh Image of a sequence in $L(N)$ or $L(M)$ uniquely defines the corresponding equivalence class. Thus, $N {\sqsubseteq_s}M$ if and only if the Parikh Image of $L(N)$ is a subset of the Parikh Image of $L(M)$. 
However, these Parikh Images are semi-linear sets and it is decidable whether one semi-linear set is a subset of another (see, for example, [@KopczynskiT10]). The result thus follows. We now consider the case where each transition produces output at all ports. Let us suppose that $M$ is an FSM in which all transitions produce output at all ports. Then $N {\sqsubseteq_s}M$ if and only if $L(N) \subseteq L(M)$. First observe that if $N {\sqsubseteq_s}M$ then each transition of $N$ must also produce output at every port. Consider a sequence $\sigma \in L(M) \cup L(N)$ that contains $k$ inputs. Since every transition produces output at all ports, for a port $p$ we have that $\pi_p(\sigma)$ contains $k$ outputs with each input $x_i$ at $p$ being between the output produced at $p$ by the previous input and the output produced at $p$ in response to $x_i$. Thus, given sequences $\sigma, \sigma' \in L(N) \cup L(M)$ we must have that $\sigma' \sim \sigma$ if and only if $\sigma' = \sigma$. The result therefore holds. Bounded conformance {#section:bounded} =================== We have seen that it is undecidable whether two FSMs are related under ${\sqsubseteq_s}$. However, we might use a weaker notion of conformance where we only consider sequences of length at most $k$ for some $k$. This would be relevant when the expected usage does not involve sequences of length greater than $k$ since, for example, the system will be reset after at most $k$ inputs. In this section we define such a weaker implementation relation and explore the problem of deciding whether two FSMs are related under this relation. First, we introduce some notation. We let $IO_k$ denote the set of global traces that have at most $k$ inputs. In addition, for an FSM $N$ we let $L_k(N) = L(N) \cap IO_k$ denote the set of global traces of $N$ that have at most $k$ inputs. We can now define our implementation relation. Given FSMs $N$ and $M$ with the same input and output alphabets, we say that $N$ strongly $k$-conforms to $M$ if for all $\sigma \in L_k(N)$ there exists some $\sigma' \in L(M)$ such that $\sigma' \sim \sigma$. If this is the case then we write $N {\sqsubseteq_s}^k M$. Clearly, given $N$ and $M$ it is decidable whether $N {\sqsubseteq_s}^k M$: we can simply generate every element of $L_k(N)$ and for each $\sigma \in L_k(N)$ we determine whether $\sigma \in {{\mathcal L}}(M)$. The following shows that this can be achieved in polynomial time if we have a bound on the number of ports. We use a result from Mazurkiewicz trace theory. In Mazurkiewicz trace theory an independence graph is an undirected graph where each vertex of the graph represents an element of ${{\mathcal Act}}$ and there is an edge between the vertex representing $a \in {{\mathcal Act}}$ and the vertex representing $b \in {{\mathcal Act}}$ if and only if $ab$ and $ba$ are equivalent; $a$ and $b$ are said to be independent [@diekert97]. Thus, for FSMs we have that $a$ and $b$ are independent if and only if they are at different ports. \[lemma:oracle\] Given a sequence $\sigma \in IO_k$ and an FSM $M$ with $n$ ports, we can decide whether $\sigma \in {{\mathcal L}}(M)$ in time $O(|\sigma|^n)$. The membership problem for a sequence $\sigma$ and a rational trace language with alphabet $\Sigma$ and independence relation $\mathcal{I}$ can be solved in time $O(|\sigma|^{\alpha})$ where $\alpha$ is the size of the largest clique in the independence graph [@BertoniMS82]. 
Since each observation is made at exactly one port and two observations are independent if and only if they are at different ports, we have that the maximal cliques of the independence graph all have size $n$ and so $\alpha = n$. The result therefore follows. \[theorem:bounded\_poly\] If there are bounds on the value of $k$ and the number $n$ of ports then the following problem can be solved in polynomial time: given FSMs $N$ and $M$ with at most $n$ ports, do we have that $N {\sqsubseteq_s}^k M$? First observe that the number of elements in $L_k(N)$ is $O(q^k)$, where $q$ denotes the maximum number of transitions leaving a state of $N$. Thus, since $k$ is bounded, the elements in $L_k(N)$ can be produced in polynomial time. It only remains to consider each $\sigma$ in $L_k(N)$ and decide whether it is in ${{\mathcal L}}(M)$. However, by Lemma \[lemma:oracle\], this can be decided in polynomial time. The result therefore follows. Thus, if we have bounds on the number of ports of the system and the length of sequences we are considering then we can decide whether $N {\sqsubseteq_s}^k M$ in polynomial time. However, the proof of Theorem \[theorem:bounded\_poly\] introduced terms that are exponential in $n$ and $k$ and so it is natural to ask what happens if we do not place bounds on these values. It transpires that the problem is then NP-hard even for DFSMs, the proof using the following. Given boolean variables $z_1, \ldots, z_r$ let $C_1, \ldots, C_q$ denote sets of three literals, where each literal is either a variable $z_i$ or its negation. The three-in-one SAT problem is: does there exist an assignment to the boolean variables such that each $C_j$ contains exactly one true literal? The three-in-one SAT problem is known to be NP-complete [@Schaefer78]. The following problem is NP-hard: given $k$ and completely specified DFSMs $N$ and $M$, do we have that $N {\sqsubseteq_s}^k M$? We will show that we can reduce the three-in-one SAT problem to this problem. Suppose that we have variables $z_1, \ldots, z_r$ and clauses $C_1, \ldots, C_q$. We will define a DFSM $M_1$ with $r+q+2$ ports, inputs $z_{-1},z_0,z_1, \ldots, z_r$ at ports $-1,0,1, \ldots, r$ and outputs $y_1, \ldots, y_{r+q}$ at ports $1, \ldots, r+q$. We count ports from $-1$ rather than $1$ since the roles of inputs at $-1$ and $0$ will be rather different from the roles of the other inputs. DFSM $M_1$ has four states $s_{0},s_1,s_2,s_3$ in which $s_{0}$ is the initial state. The states effectively represent different ‘modes’ and we now describe the roles of $s_1$ and $s_2$. In state $s_1$ an input at port $p$, $1 \leq p \leq r$, will lead to output at all of the ports corresponding to clauses with literal $z_p$. In state $s_2$ an input at port $p$, $1 \leq p \leq r$, will lead to output at all of the ports corresponding to clauses with literal $\neg z_p$. The input $z_0$ moves $M_1$ from $s_1$ to $s_2$. The special input $z_{-1}$ takes $M_1$ from state $s_{0}$ to state $s_1$. Overall, input $z_0$ does not produce output and only changes the state of $M_1$ if it is in state $s_1$, in which case it takes $M_1$ to state $s_2$. Input $z_{-1}$ does not produce output and only changes the state of $M_1$ if it is in state $s_{0}$, in which case it takes $M_1$ to state $s_1$. For an input $z_p$ with $1 \leq p \leq r$ there are four transitions: 1. From state $s_1$ there is a transition that, for all $1 \leq j \leq q$, sends output $y_{r+j}$ to port $r+j$ if $C_j$ contains literal $z_p$ and otherwise sends no output to port $r+j$. 
The transition sends no output to ports $-1, \ldots, r$ and does not change state. 2. From state $s_2$ there is a transition that, for all $1 \leq j \leq q$, sends output $y_{r+j}$ to port $r+j$ if $C_j$ contains literal $\neg z_p$ and otherwise sends no output to port $r+j$. The transition sends no output to ports $-1, \ldots, r$ and does not change state. 3. From state $s_{0}$ there is a transition to state $s_3$ that produces no output. 4. From state $s_3$ there is a transition to state $s_3$ that produces no output. Now consider the global trace $\sigma$ that starts with input sequence $z_{-1} z_0 z_1 \ldots z_{r-1}$ and then has input $z_r$ producing the outputs $y_{r+1} \ldots y_{r+q}$; all outputs are produced in response to the last input. Clearly we do not have $\sigma \in L(M_1)$. We now prove that $\sigma \in {{\mathcal L}}(M_1)$ if and only if there is a solution to the instance of the three-in-one SAT problem. Consider the problem of deciding whether there exists $\sigma' \in L(M_1)$ such that $\sigma' \sim \sigma$. Clearly the first input in $\sigma'$ must be $z_{-1}$. Each input $z_p$ is received once by the DFSM and these can be received in any order after $z_{-1}$. Thus, for all $1 \leq p \leq r$ we do not know whether $z_p$ will be received before or after $z_0$ in $\sigma'$. If $z_p$ is received before $z_0$ then an output is sent to all ports that correspond to clauses that contain literal $z_p$. If $z_p$ is received after $z_0$ then an output is sent to all ports that correspond to clauses that contain literal $\neg z_p$. Thus there exists $\sigma' \in L(M_1)$ such that $\sigma' \sim \sigma$ if and only if there exists an assignment to the boolean variables $z_1, \ldots, z_r$ such that each $C_j$ contains exactly one true literal. We now construct DFSMs $N$ and $M$ such that $N {\sqsubseteq_s}^k M$ if and only if $\sigma \in {{\mathcal L}}(M_1)$. In the following we assume that $r>1$ and let $\sigma_1$ be the global trace formed from $\sigma$ by replacing the prefix $z_{-1} z_0 z_1$ by $z_1 z_0 z_{-1}$. Thus, $\sigma_1 \sim \sigma$. We form $N$ from $M_1$ by adding a new path that has label $\sigma_1$. We add state $s'_3$ such that the input of $z_1$ in state $s_{0}$ leads to state $s'_3$ (instead of $s_3$) and no output. From $s'_3$ we add a transition with label $z_0$ to another new state $s'_4$. We repeat this process, adding new states, until we have a path from $s_0$ with label $z_1 z_0 z_{-1} z_2 z_3 \ldots z_{r-1}$ ending in state $s'_{r+3}$. We then add a transition from $s'_{r+3}$ to $s'_{r+4}$ with input $z_r$ and the outputs $y_{r+1}, \ldots, y_{r+q}$. Finally, we complete $N$ by adding a transition to $s_3$ with input $z_p$ and null output from a state $s'_j$ if there is no transition from $s'_j$ with input $z_p$. Clearly, $L(N) = L(M_1) \cup {pref}(\sigma_1)I^*$. Let $\sigma_1'$ be defined such that $\sigma_1 = z_{1}\sigma_1'$. We can similarly form an FSM $M$ from $M_1$ such that $L(M) = L(M_1) \cup {pref}(\{z_{1}\}I\{\sigma_1'\})I^*$. Since each $I_p$ contains only one input we have that $\{z_{1}\}I\{\sigma_1'\}I^*$ and $\{\sigma_1\}I^+$ define the same sets of equivalence classes under $\sim$. Thus, the equivalence classes of ${pref}(\sigma_1)I^*$ and ${pref}(\{z_{1}\}I\{\sigma_1'\})I^*$ under $\sim$ differ only in the one that contains $\sigma_1$ and we know that $\sigma_1 \sim \sigma$. 
We therefore have that $N {\sqsubseteq_s}^k M$, for $k > r+1$, if and only if $\sigma \in {{\mathcal L}}(M_1)$ and we know that this is the case if and only if the instance of the three-in-one SAT problem has a solution. The result follows from the three-in-one SAT problem being NP-hard. Naturally, the results in this section are also relevant when we are looking for tests of length no longer than $k$ that distinguish states or FSMs. Conclusions {#section:conclusions} =========== There are important classes of systems, such as cloud systems, web services and wireless sensor networks, that interact with their environment at physically distributed ports. In testing such a system we place a local tester at each port and the local tester (or user) at port $p$ only observes the events that occur at $p$. It is known that this reduced observational power, under which a set of local traces is observed, can introduce additional controllability and observability problems. This paper has considered the situation in which there is a finite state machine (FSM) model $M$ that acts as the specification for a system that interacts with its environment at physically distributed ports. We considered the implementation relation ${\sqsubseteq_s}$ that requires the set of local traces observed to be consistent with some global trace of the specification. We investigated the problem of defining a language ${{\mathcal {\tilde L}}}(M)$ such that we know that $N {\sqsubseteq_s}M$ if and only if all of the global traces of $N$ are contained in ${{\mathcal {\tilde L}}}(M)$. We showed that ${{\mathcal {\tilde L}}}(M)$ can be uniquely defined but need not be regular. We proved that it is generally undecidable whether $N {\sqsubseteq_s}M$ even if there are only two ports, although we also gave conditions under which this is decidable. An additional consequence of this result is that it is undecidable whether there is a test case (a strategy for each local tester) that is *capable* of distinguishing two states of an FSM or two FSMs. This complements earlier results that show that it is undecidable whether there is a test case that is *guaranteed* to distinguish between two states of an FSM or two FSMs. While these results appear to be related, the proofs relied on very different approaches: the earlier result looked at the problem in terms of multi-player games while this paper developed and used results regarding multi-tape automata. Since it is generally undecidable whether $N {\sqsubseteq_s}M$ we defined a weaker implementation relation ${\sqsubseteq_s}^k$ under which we only consider input sequences of length $k$ or less. This is particularly relevant in situations in which it is known that input sequences of length greater than $k$ need not be considered since, for example, the system must be reset before this limit has been reached. We proved that if we place a bound on $k$ and the number of ports then we can decide whether $N {\sqsubseteq_s}^k M$ in polynomial time but otherwise this problem is NP-hard. There are several avenues for future work. First, there is the problem of finding weaker conditions under which we can decide whether $N {\sqsubseteq_s}M$. In addition, it would be interesting to find conditions under which ${{\mathcal {\tilde L}}}(M)$ can be constructed. There is also the problem of extending the results to situations in which we can make additional observations; for example, we might consider languages such as CSP in which we can also observe refusals. 
Finally, one of the motivations for this work was the problem of deciding whether there is a test case that is capable of distinguishing two states of an FSM and, despite this being undecidable, it would be interesting to develop heuristics for this problem. [^1]: In model-based testing, test automation is based on a model of the expected behaviour of the system or some aspect of this expected behaviour. [^2]: It is decidable whether there is a strategy that is capable of reaching a given state of an FSM. [^3]: We can easily extend the proofs to more general formalisms such as IOTSs.
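As a concrete companion to the results above, the following minimal sketch illustrates the observational equivalence $\sim$ on which they rest: two global traces are related exactly when every port sees the same local trace. The trace encoding, port naming and helper functions are our own illustrative assumptions and are not part of the formal development in this paper.

```python
# Minimal sketch of the observational equivalence ~ from distributed testing.
# A global trace is modelled as a list of steps; each step is a pair
# ((port, input_symbol), outputs), where `outputs` maps a port to the output
# it receives in that step (ports that receive nothing are simply absent).
# This encoding is an illustrative assumption, not the paper's notation.

def local_trace(trace, p):
    """The observation made at port p: its inputs and outputs, in order."""
    events = []
    for (in_port, in_sym), outputs in trace:
        if in_port == p:
            events.append(("?", in_sym))       # input applied at p
        if p in outputs:
            events.append(("!", outputs[p]))   # output received at p
    return events

def observationally_equivalent(t1, t2, ports):
    """t1 ~ t2 iff every local tester sees the same local trace."""
    return all(local_trace(t1, p) == local_trace(t2, p) for p in ports)

# Example: swapping two inputs that occur at different ports cannot be
# detected by any local tester, so the two global traces are related by ~.
t_a = [((1, "x1"), {3: "y"}), ((2, "x2"), {})]
t_b = [((2, "x2"), {}), ((1, "x1"), {3: "y"})]
assert observationally_equivalent(t_a, t_b, ports=[1, 2, 3])
```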
--- abstract: 'Recently, topological quantum states of non-Hermitian systems, exhibiting rich new exotic states, have attracted great attention in condensed-matter physics. As for the demonstration, most of non-Hermitian topological phenomena previously focused on are in one- and two-dimensional systems. Here, we investigate three-dimensional non-Hermitian nodal-line semimetals in the presence of a particle gain-and-loss perturbation. It is found that this perturbation will split the original nodal ring into two exceptional rings (ERs). The topological nature of the bulk electronic structure is characterized by two different topological invariants, namely, the vorticity and the winding number defined for a one-dimensional loop in momentum space, both of which are shown to take half-integer (integer) values when an odd (even) number of ERs thread through the loop. The conventional bulk-surface correspondence in non-Hermitian nodal-line semimetals is found to break down, where the surface zero-energy flat bands are no longer bounded by projections of bulk ERs. Alternatively, a macroscopic fraction of the bulk eigenstates can be localized near the surface, thus leading to the so-called non-Hermitian skin effect.' author: - Huaiqiang Wang - Jiawei Ruan - Haijun Zhang bibliography: - 'reference.bib' title: 'Non-Hermitian nodal-line semimetals with an anomalous bulk-boundary correspondence' --- Introduction ============ The studies on topological states of Hermitian systems, including topological insulators [@Hasan2010; @Qi2010; @chiu2016], topological superconductors [@Qi2010; @Fu2008; @Qi2010; @Sau2010; @he2017chiral], and topological semimetals [@Armitage2018; @Yang2014; @gao2016classification; @yan2017topological; @Wan2011; @Xu2011; @Lu2012; @Weng2015; @Xu2015a; @Lv2015a; @Ruan2016a; @Ruan2016b; @liu2014discovery; @bradlyn2016beyond; @wang2016hourglass; @Yan2017Nodal; @Chen2017Topological; @Bomantara2016; @Wang2017Line], have profoundly deepened our understandings of symmetries and topology in condensed-matter physics. Topological states can be characterized by corresponding topological invariants defined from bulk band structures, which ensure the existence of gapless boundary states through the celebrated bulk-boundary correspondence. Among various topological materials, nodal-line semimetals have attracted much interest and have been intensively studied both theoretically [@Kim2015Dirac; @Bian2016Drumhead; @Yu2015Topological; @Burkov2011; @Phillips2014; @Mullen2015; @Yan2016; @Fang2015; @Zhang2016Quantum; @Hirayama2017Topological] and experimentally [@Bian2016Topological; @Wu2016Dirac; @Schoop2016Dirac]. They have band degeneracies along lines in momentum space, and possess drumhead-like surface states, which hold a potential possibility for realizations of surface superconductivity and surface magnetism when electron-electron correlation is introduced [@Kopnin2011; @Roy2017]. Very recently, there has been growing interest in topological states of non-Hermitian systems [@Gong2018]. Non-Hermiticity is ubiquitous in a diverse range of situations, including open quantum systems [@Carmichael1993; @Rudner2009; @Choi2010; @Diehl2011Topology; @Lee2014; @Malzard2015; @Zhen2015Spawning], optical systems with gain and loss [@Klaiman2008; @Regensburger2012Parity; @Bittner2012; @Hodaei2014Parity; @Feng2014Single; @Elganainy2018Non; @Zhou2018Observation; @Takata2018Photonic], and interacting/disordered systems [@Kozii2017Non; @Papaj2018Bulk; @Zyuzin2018; @Shen2018Quantum; @Zhao2018Condition]. 
The interplay between non-Hermiticity and topology leads to quite distinct properties in non-Hermitian systems, such as the breakdown of the conventional bulk-boundary correspondence [@Xiong2017Why; @Lee2016; @Yao2018Edge; @Yao2018Non; @Kawabata2018; @Alvarez2018Topological; @Kunst2018; @jin2018bulk], the emergence of anomalous edge states [@Lee2016; @Yao2018Edge; @Yao2018Non], and the anomalous localization of bulk eigenstates (“non-Hermitian skin effect”) [@Yao2018Edge; @Yao2018Non; @Martinez2018; @lee2018anatomy]. It has also been shown that non-Hermitian topology could manifest itself in some interesting transport phenomena [@Rudner2016Survival; @Ostahie2016; @Hu2017; @Zhang2017Trans; @Avila2018; @Yang2018; @Mcdonald2018Phase; @Harari2018Topological; @Chen2018Hall; @Philip2018Loss; @Longhi2015Non], for example, the deviation of the Hall conductance of the edge state from the quantized Chern number [@Philip2018Loss; @Chen2018Hall], one-way transport in low-dimensional lattices by an imaginary gauge field [@Longhi2015Non], and the topological insulator laser [@Harari2018Topological]. Up to now, most non-Hermitian topological phenomena previously studied are limited in one-dimensional (1D) and two-dimensional (2D) systems [@Lee2016; @Yao2018Edge; @Yao2018Non; @Kawabata2018; @Alvarez2018Topological; @Kunst2018; @Martinez2018; @Leykam2017; @Shen2018; @Lieu2018; @Klett2017; @Hu2011; @Esaki2011; @Zhou2018Non], and much less effort has been devoted to three-dimensional (3D) systems [@Xu2017Weyl; @Gonz2017; @Cerjan2018; @Carlstrom2018; @Yang2018Nodal]. In this work, we investigate both continuum and lattice models of non-Hermitian nodal-line semimetals in the presence of a particle gain-and-loss term. It is found that such a non-Hermitian perturbation will split each nodal ring into two exceptional rings (ERs). With increasing strength of this perturbation, some of the ERs may shrink and eventually vanish. To characterize the topological property of the bulk band structure, two different topological invariants are used: (1) One is the vorticity [@Shen2018] of a loop around the exceptional points (EPs) generated by cutting the ERs with a 2D slice in the cylinder coordinate. (2) The other is the winding number for a loop in the 3D momentum space, which stems from the chiral symmetry and can be calculated through the definition of a complex angle [@Leykam2017; @Yin2018]. Both invariants take fractional (integer) values when the loop is threaded by an odd (even) number of ERs. Under open boundary conditions (OBCs), the drumhead-like surface bands are no longer bounded by the projections of bulk ERs, thus suggesting the breakdown of conventional bulk-surface correspondence in Hermitian nodal-line semimetals. Intriguingly, not only the drumhead-like surface bands but also a macroscopic fraction of bulk states are found to be localized on the surface, which could be explained by dimensional reduction to 1D non-Hermitian lattice models. This paper is organized as follows. In Sec. II, we first study the bulk-band structure of non-Hermitian nodal-line semimetals through a simple continuum model in Sec. II A and then introduce the two topological invariants, namely, the vorticity and the winding number, in Secs. II B and II C, respectively, to characterize the bulk topology. In Sec. III, we address the issue of non-Hermitian bulk-boundary correspondence, where a lattice model is used to illustrate the band structures under periodic boundary conditions (PBCs) and OBCs in Sec. III A. 
The skin effect of non-Hermitian nodal-line semimetals is discussed in Sec. III B. Section IV concludes this paper. Bulk band from the continuum model ================================== Model description ----------------- ![Illustration of the nodal rings in the $k_{z}=0$ plane in the (a) absence and (b) presence of the non-Hermitian term $i\gamma_{z}\tau_{z}$. (c) The real part and (d) the imaginary part of the energy dispersion in the $k_{z}=0$ plane with the same parameters as in (b). The parameters are chosen as $m=0.5$, $B=v_{z}=1$, and $\gamma_{z}=0.3$ for the continuum model, and remain unchanged in the following unless otherwise specified.](fig1.eps){width="8cm"} A typical two-band spinless nodal-line semimetal can be described by the simple continuum model Hamiltonian [@Fang2015; @Yan2016]: $$\label{Hermitian H} H(\mathbf{k})=\epsilon_{0}(\mathbf{k})\tau_{0}+(m-Bk^{2})\tau_{x}+v_{z}k_{z}\tau_{z},$$ where $k^{2}=k_{x}^{2}+k_{y}^{2}+k_{z}^{2}$, $\tau_{i}$ ($i=x,y,z$) are Pauli matrices acting in the two-orbital subspace, $\tau_{0}$ is the identity matrix, $v_{z}$ denotes the Fermi velocity along the $k_{z}$ direction, and $m$ and $B$ are parameters with the dimension of energy and inverse energy, respectively [@Yan2016]. When $mB>0$, the conduction and valence bands touch along the nodal ring located in the $k_{z}=0$ plane at $k_{x}^{2}+k_{y}^{2}=m/B$ \[see Fig. 1(a)\], while for $mB<0$, the system lies in the trivial insulator phase with an energy gap. Without loss of generality and for simplicity, henceforth, unless stated explicitly, $m$, $B$, and $v_{z}$ are assumed to be positive. The Hermitian nodal ring is protected by the combined inversion and time-reversal symmetry $PT$ [@Zhang2016Quantum], which can be simply represented as the complex conjugate $K$ in a proper orbital basis. Such a symmetry imposes a reality condition on the Hamiltonian as $H(\mathbf{k})=H(\mathbf{k})^{*}$ and restricts the $\tau_{y}$ term to zero. This reduces the number of equations for band degeneracies to two, thus ensuring the emergence of line nodes in the 3D momentum space. In addition, when $\epsilon_{0}(\mathbf{k})=0$, the Hamiltonian in Eq. (\[Hermitian H\]) also satisfies the chiral symmetry, $\tau_{y}H(\mathbf{k})\tau_{y}=-H(\mathbf{k})$, which constrains the whole nodal ring to zero energy. In the presence of a non-Hermitian term $i \gamma_{z}\tau_{z} (\gamma_{z}>0)$ associated with particle gain and loss for the two orbitals, the Hamiltonian becomes: $$\label{Hamiltonian} H(\mathbf{k})=\epsilon_{0}(\mathbf{k})\tau_{0}+(m-Bk^{2})\tau_{x}+(v_{z}k_{z}+i\gamma_{z})\tau_{z}.$$ The energy is now obtained as $$\label{energy} E_{\pm}=\epsilon_{0}(\mathbf{k})\pm\sqrt{(m-Bk^{2})^{2}+v_{z}^{2}k_{z}^{2}-\gamma_{z}^{2}+2iv_{z}k_{z}\gamma_{z}},$$ which is generally complex for nonzero $\gamma_{z}$. Since $\epsilon_{0}(\mathbf{k})$ has no effect on band crossings and eigenstates, unless otherwise specified, it will be set to zero henceforth. Note that the non-Hermitian $i \gamma_{z}\tau_{z} (\gamma_{z}>0)$ term explicitly breaks the $PT$ symmetry of the Hermitian model but preserves the chiral symmetry in the absence of the constant-energy term. To see the fate of the original nodal ring, we focus on the $k_{z}=0$ plane, where the energy becomes $E_{\pm}=\pm\sqrt{\big(m-Bk_{\parallel}^{2}\big)^{2}-\gamma_{z}^{2}}$, with $k_{\parallel}\equiv\sqrt{k_{x}^{2}+k_{y}^{2}}$. When $\gamma_{z}<m$, the original nodal ring splits into two ERs characterized by $Bk^{2}_{\parallel}=m\pm\gamma_{z}$, as shown in Fig. 
1(b). In the $k_{z}=0$ plane, the energy is purely real both inside the inner ER and outside the outer ER, while it is purely imaginary between the two ERs, as demonstrated in Figs. 1(c) and 1(d), respectively. With increasing $\gamma_{z}$, the inner ER shrinks and vanishes beyond the critical value of $\gamma_{z}=m$, where it becomes a point. Intriguingly, an ER appears even for the original gapped phase with negative $m$, as long as $\gamma_{z}>|m|$ is satisfied. Before further discussion, several points need to be clarified concerning the non-Hermitian perturbations and corresponding band degeneracies. First, generally speaking, in non-Hermitian systems, the number of conditions for two-band crossings is two instead of three in the Hermitian case [@Berry2004], and therefore 1D nodal lines are realizable in 3D non-Hermitian systems with three tunable momentum parameters even in the absence of any symmetry, as is the case with the Weyl ER [@Xu2017Weyl] and the present model in Eq. (\[Hamiltonian\]) regardless of the $\epsilon_{0}(\mathbf{k})$ term. Second, if we consider a $PT$-symmetric non-Hermitian perturbation such as an $i\gamma_{y}\tau_{y}$ term to the Hermitian Hamiltonian in Eq. (\[Hermitian H\]), the nodal ring may even evolve into an exceptional surface [@budich2018symmetry; @okugawa2018topological; @zhou2018exceptional]. Third, in contrast to two previous papers, namely, Refs. [@Carlstrom2018] and [@Yang2018Nodal], both of which mainly investigate the possibility of realizing exceptional links from nodal-line semimetals under certain non-Hermitian perturbations, in this paper, based on the nodal-line semimetals under a simple gain-and-loss perturbation, we focus on the topological properties of ordinary ERs without links, as well as the anomalous bulk-surface correspondence. ![ (a) Schematic view of EPs in the $k_{\rho}$-$k_{z}$ plane. Here, green and yellow colors represent the $1/2$ and $-1/2$ vorticities, respectively. Three dashed loops are marked as $L_{1,2,3}$ for the evolution of the complex eigenvalues. The evolution of the two complex eigenvalues of EPs along loops (b) $L_{1}$, (c) $L_{2}$, and (d) $L_{3}$, which are parameterized by $\theta_{L}\in[0,2\pi)$. Their projections onto the complex plane are also presented.](fig2.eps){width="8.5cm"} The vorticity ------------- In contrast to Hermitian band degeneracies consisting of distinct eigenvectors, EPs are ubiquitous in non-Hermitian band structures, where not only the eigenvalues but also the eigenvectors coalesce with each other, thus rendering the corresponding Hamiltonian defective and nondiagonizable [@Berry2004]. When encircling an EP, the constitutive bands get exchanged due to the square root taken in Eq. (\[energy\]), and two loops are required to return to the initial state [@Berry2004; @Moiseyev; @Heiss2012; @Mailybaev2005; @Dembowski2004; @kim2013braid; @Lee2016; @Leykam2017; @Shen2018]. In order to characterize the ERs, we adopt the cylinder-like coordinate and divide each ER into a collection of EPs residing in the 2D $k_{\rho}$-$k_{z}$ slice \[see Fig. 2(a); here, $k_{\rho}$ is allowed to take negative values, which should be distinguished from the conventional cylinder coordinate\]. After this decomposition, we can then resort to the concept of vorticity [@Shen2018] to characterize each EP. First, we consider the case with both the inner and outer ERs ($\gamma_{z}<m$), which are located in the $k_{z}=0$ plane at $k_{\parallel}=\sqrt{(m-\gamma_{z})/B}$ and $\sqrt{(m+\gamma_{z})/B}$, respectively. 
For each $k_{\rho}$-$k_{z}$ slice, altogether four EPs appear at $(k_{\rho},k_{z})=(k_{\pm}^{s},0)$, as shown in Fig. 2(a), with $$\begin{split} \label{EP} k_{\pm}^{s}=\pm\sqrt{(m-s\gamma_{z})/B}, \end{split}$$ where $s=+1$ $(-1)$ for the EPs from the inner (outer) ER. In fact, these EPs can be understood from the non-Hermitian-term-induced splittings of the original Dirac points at $(\pm\sqrt{m/B},0)$ in the 2D $k_{\rho}$-$k_{z}$ slice. By expanding the low-energy effective Hamiltonian to linear order around each EP, we obtain $$H_{\pm}^{s}(\mathbf{q})=(s\gamma_{z}-2Bk_{\pm}^{s}q_{\rho})\tau_{x}+(v_{z}q_{z}+i\gamma_{z})\tau_{z}.$$ The dispersion to the leading order of $\mathbf{q}$ is then derived as $$\label{Dispersion} E_{\pm,\lambda}^{s}(\mathbf{q})=\lambda\sqrt{2\gamma_{z}(-sv_{\pm}^{s}q_{\rho}+iv_{z}q_{z})},$$ where $v_{\pm}^{s}=2Bk_{\pm}^{s}$ and $\lambda=\pm1$ for the two branches of bands. Following Ref. [@Shen2018], the vorticity of each EP can be calculated as $$\nu_{\pm}^{s}=-\frac{1}{2\pi}\oint_{\Gamma}\nabla_{\mathbf{q}}\mathrm{arg}[E_{\pm,+}^{s}(\mathbf{q})-E_{\pm,-}^{s}(\mathbf{q})]\cdot d\mathbf{q}=\pm\frac{s}{2},$$ where $\Gamma$ is a closed loop encircling the EP. A nonzero vorticity for such a contractible closed loop in momentum space indicates a band degeneracy surrounded by $\Gamma$ [@Shen2018]. It should be emphasized that the fractional vorticity is an inherent property of the EP unique to non-Hermitian systems and is well defined in the absence of any symmetry. As an illustration, in Figs. 2(b) and 2(c), we numerically plot the evolution paths of the two bands along the loops $L_{1}$ and $L_{2}$ around the inner and outer EPs $k_{+}^{+}$ and $k_{+}^{-}$, respectively, both of which are parameterized by $\theta_{L}\in[0,2\pi]$. It can be seen that around both $k_{+}^{+}$ and $k_{+}^{-}$, the two bands get switched at $\theta_{L}=2\pi$. However, they wind around each other in opposite directions, namely, clockwise (counterclockwise) for $k_{+}^{+}$ ($k_{+}^{-}$) with $v_{+}^{+}=1/2$ ($v_{+}^{-}=-1/2$), as clearly demonstrated by their projections to the complex plane. More generally, when the loop encloses an odd number of EPs, the two bands swap with each other and the vorticity takes a half-integer value, while when an even number of EPs is enclosed, the vorticity becomes an integer, and the two bands return to their original states, as exemplified by the loop $L_{3}$ enclosing both $k_{-}^{+}$ and $k_{-}^{-}$ in Fig. 2(a), with the bands’ evolution shown in Fig. 2(d). In addition, with increasing $\gamma_{z}$, the two EPs from the inner ER approach each other until $\gamma_{z}=m$, where they meet and annihilate as a result of their opposite vorticities [@Shen2018], which accounts for the disappearance of the inner ER when $\gamma_{z}>m$. The winding number ------------------ To fully capture the topological property of the bulk band, the above calculation of vorticity is insufficient. For example, it cannot distinguish between a loop enclosing two EPs with opposite vorticities and a loop enclosing no EPs since both loops exhibit zero vorticity. Moreover, as the vorticity depends only on the energies, it fails to provide topological properties concerning the eigenstates such as the Berry phase [@Mailybaev2005; @Lieu2018]. However, in non-Hermitian systems, when encircling an EP, two loops are needed to return to the original state, thus making it problematic to calculate the conventional Berry phase for a single loop. 
To circumvent this, in this section, we will calculate the winding number originating from the chiral symmetry of the non-Hermitian Hamiltonian in Eq. (\[Hamiltonian\]) in the absence of $\epsilon_{0}(\mathbf{k})$, which has been shown to be closely related to the non-Hermitian generalization of the Berry phase [@Leykam2017; @garrison1988complex]. By treating $k_{x}$ and $k_{y}$ as parameters, the winding number can be defined for every 1D chain along the $k_{z}$ direction as [@Leykam2017; @Yin2018; @Zhou2017Dynamical] $$\label{winding number} w=\frac{1}{2\pi}\int_{-\infty}^{\infty}dk_{z}\partial_{k_{z}}\phi,$$ where $\phi\equiv\arctan(h_{x}/h_{z})=\arctan[(m-Bk^{2})/(v_{z}k_{z}+i\gamma_{z})]$, with $h_{x}$ and $h_{z}$ representing the components of the $\tau_{x}$ and $\tau_{z}$ terms, respectively, in $H$. (If the alternative definition $\phi\equiv\arctan(h_{z}/h_{x})$ is used, the final result of the winding number will differ only by a sign reversal.) Note that the presence of the non-Hermitian term indicates that $\phi$ is generically complex. ![image](fig3.eps){width="13cm"} When $m>\gamma_{z}$, two ERs appear in the $k_{z}=0$ plane, as shown before. Considering the rotational symmetry of the system, we numerically present the real part of $\phi$ for a 2D $k_{\rho}$-$k_{z}$ slice in Fig. 3(b), with the parameters $m=0.5$, $B=v_{z}=1$, and $\gamma_{z}=0.3$. Here, Re($\phi$) is an odd function of $k_{z}$ and at $k_{z}=0$, it is continuous when $k_{\rho}$ lies between the two ERs \[line B in Fig. 3(a)\], while for $k_{\rho}$ outside this range \[lines A and C in Fig. 3(a)\], it is discontinuous with a $\pi$ jump. Nevertheless, the real part of $\partial_{k_{z}}\phi$ is always continuous with no such jumps, thus validating Eq. (\[winding number\]). In contrast, the imaginary part of $\phi$ is found to be an even and continuous function of $k_{z}$ here, suggesting its derivative Im($\partial_{k_{z}}\phi$) is an odd function and $\mathrm{Im}\phi(k_{z}\rightarrow\infty)=\mathrm{Im}\phi(k_{z}\rightarrow-\infty)$, which consequently does not contribute to the integral in Eq. (\[winding number\]). Finally, the winding number can be explicitly derived as (see Appendix A for the detailed derivation) $$\label{w result} w=\left\{ \begin{array}{ll} -1, & \hbox{for $|k_{\rho}|<k_{\mathrm{in}}$,} \\ -\frac{1}{2}, & \hbox{for $k_{\mathrm{in}}<|k_{\rho}|<k_{\mathrm{out}}$,} \\ 0, & \hbox{for $|k_{\rho}|>k_{\mathrm{out}}$,} \end{array} \right.$$ where $k_{\mathrm{in}}=\sqrt{(m-\gamma_{z})/B}$ and $k_{\mathrm{out}}=\sqrt{(m+\gamma_{z})/B}$, which are the radii of the inner and outer ERs, respectively. This result is numerically supported by the phase diagram of $w$ as a function of both $k_{x}$ and $k_{y}$ in Fig. 3(d), where the boundaries between regions of different $w$ values (solid red lines) exactly correspond to locations of the bulk ER. As a comparison, in Fig. 3(c), we present the $w$ phase diagram with the same parameters in the absence of the $\gamma_{z}$ term. The merging of the two ERs into the Hermitian nodal ring leads to the disappearance of the region with a fractional value of $w$, and recovers the result for a Hermitian nodal-line semimetal. The emergence of the fractional value $w=-1/2$ and integer value $w=-1$ can be understood as follows. 
Although the values of $\phi$ are found to differ by $\pi$ for the two opposite limits $k_{z}\rightarrow\infty$ and $k_{z}\rightarrow-\infty$, their derivatives $\partial_{k_{z}}\phi$ turn out to be the same, thus enabling us to reasonably compact the integral line into a loop by connecting $k_{z}=\infty$ to $k_{z}=-\infty$ (the compactness will be quite natural for a Bloch Hamiltonian in a lattice model with PBCs). As a result, lines A, B, and C, are topologically equivalent to loops $S_{1}$, $S_{2}$, and $S_{3}$, respectively, which are threaded by two, one, and zero ERs. Since the $S_{1}$ loop encloses two EPs, the winding number can be proved to be $\pm1$ [@Lee2016; @Yin2018], with the non-Hermitian-generalized Berry phase $\phi_{B}=\pi$ (mod $2\pi$). This can be understood from the $\pi$ Berry phase for a loop encircling the unperturbed Hermitian nodal ring. For the $S_{2}$ loop encircling only one EP, the winding number is found to take fractional values $\pm1/2$ [@Lee2016], which is related to the fact that $\phi_{B}=\pi$ (mod $2\pi$) only after a path circles twice around an EP [@Lee2016; @Mailybaev2005; @Dembowski2004]. For the $S_{3}$ loop enclosing no EPs, the winding number should obviously take the trivial value zero with $\phi_{B}=0$ (mod $2\pi$). Consequently, the winding number is related to the non-Hermitian Berry phase as $$w\pi\equiv\phi_{B} (\mathrm{mod}\ 2\pi).$$ Note that the $\pm\frac{\pi}{2}$ phase here means the “averaged” phase for a loop [@Mailybaev2005]. When $m<\gamma_{z}$, only the outer ER remains, and it is evident from the above analysis that $w=-1/2$ ($w=0$) inside (outside) this ER. In the above discussion, the constant energy term $\epsilon_{0}(\mathbf{k})$ has been neglected to satisfy the chiral symmetry. However, although the presence of such a term explicitly breaks the chiral symmetry and invalidates the definition of the winding number, the Berry phase argument remains the same since the $\epsilon_{0}(\mathbf{k})$ term does not change the eigenstates. Anomalous bulk-surface correspondence ===================================== In Hermitian systems, by virtue of bulk-boundary correspondence, the emergence of topological surface (edge) states is ensured by relevant topological invariants of bulk bands under PBCs. This rule holds true for Hermitian nodal-line semimetals, where drumhead surface states (flat bands) are expected to be bounded by the projections of bulk nodal rings onto the surface Brillouin zone (BZ) [@Yu2015Topological; @Kim2015Dirac; @Bian2016Drumhead; @Wang2017Line]. However, the generalization of such correspondence to non-Hermitian systems is problematic and has been shown to break down in certain systems [@Lee2016; @Xiong2017Why; @Yao2018Edge; @Yao2018Non; @Kawabata2018; @Kunst2018; @Alvarez2018Topological; @jin2018bulk; @Martinez2018], such as the non-Hermitian Su-Schrieffer-Heeger (SSH) model [@Yao2018Edge; @Kunst2018] and the non-Hermitian Chern insulator [@Yao2018Non; @Kunst2018; @Kawabata2018]. Intriguingly, under OBCs, even a macroscopic number of bulk eigenstates become localized near the boundary, producing the so-called non-Hermitian skin effect [@Xiong2017Why; @Yao2018Edge; @Yao2018Non; @Martinez2018; @Kunst2018]. In this section, we will inspect such effects for a lattice model of a non-Hermitian nodal-line semimetal under PBCs and OBCs. Bloch band from lattice model ----------------------------- ![image](fig4.eps){width="17cm"} By taking $k_{i}\rightarrow \sin k_{i}$ and $k_{i}^{2}\rightarrow2(1-\cos k_{i})$ in Eq. 
(\[Hamiltonian\]), the lattice model Hamiltonian can be obtained as $$\label{lattice model} \begin{split} H=&\big[m-2B(3-\cos k_{x}-\cos k_{y}-\cos k_{z})\big]\tau_{x}\\ &+(v_{z}\sin k_{z}+i\gamma_{z})\tau_{z}, \end{split}$$ where the $\epsilon_{0}(\mathbf{k})$ term has been dropped for simplicity. Band degeneracies are found to occur in the $k_{z}=0$ plane at $$\label{kz0} \cos k_{x}+\cos k_{y}=2-\frac{m\pm\gamma_{z}}{2B}$$ and in the $k_{z}=\pi$ plane at $$\label{kzpi} \cos k_{x}+\cos k_{y}=4-\frac{m\pm\gamma_{z}}{2B}.$$ In the absence of the non-Hermitian $i\gamma_{z}\tau_{z}$ term, a nodal loop appears in the $k_{z}=0$ plane when $0<\frac{m}{2B}<4$, and in the $k_{z}=\pi$ plane when $2<\frac{m}{2B}<6$, as illustrated by the red and blue dashed lines, respectively, in Fig. 3(e), with $m=3$, $B=0.5$, $v_{z}=1$. In the presence of a small $i\gamma_{z}\tau_{z}$ term, analogous to the continuum model, each nodal loop will split into two ERs \[see the solid lines in Fig. 3(e) with $\gamma_{z}=0.6$\]. The energy is also purely imaginary between the two ERs and purely real outside. With increasing $\gamma_{z}$, each inner \[outer\] ER shrinks towards $(k_{x},k_{y})=(0,0)$ \[$(\pi,\pi)$\] and vanishes there beyond a critical value of $\gamma_{z}$ determined by Eqs. (\[kz0\]) and (\[kzpi\]). For example, if $\gamma_{z}$ is increased to $1.2$ in Fig. 3(e), only one ER persists in both the $k_{z}=0$ and $k_{z}=\pi$ planes, as shown by the red and blue solid lines, respectively, in Fig. 3(f). Similar to the continuum model, by treating $k_{x}$ and $k_{y}$ as parameters, the bulk band can also be characterized by the winding number in Eq. (\[winding number\]), where the integral interval of $k_{z}$ should now be replaced by $[-\pi,\pi]$. Depending on the model parameters, the winding number $w$ may take a value of $-1$, $-1/2$, or $0$. Regions of distinct $w$ are bounded by the ERs, as can be seen from Figs. 3(e) and 3(f). The emergence of such values of $w$ also originate from encircling the EPs, as has already been clarified in the continuum model. To examine the bulk-surface correspondence in the non-Hermitian nodal-line semimetals, as a first step, we choose OBCs in the $z$ direction of $N=20$ slabs with the same parameters as those in Fig. 3(e) to numerically calculate the spectrum as a function of both $k_{x}$ and $k_{y}$. Zero-energy surface states are found in the yellow regions in Fig. 4(a), where the projections of the bulk ERs under PBCs are also provided for comparison (blue dashed lines). The discrepancy between the boundaries of the zero-energy flat bands (blue solid lines) and the projections of bulk ERs is obvious, which indeed reflects the breakdown of the usual bulk-surface correspondence of Hermitian nodal-line semimetals. This discrepancy can be well explained as follows. By treating $k_{x}$ and $k_{y}$ as parameters, the Hamiltonian in Eq. (\[lattice model\]) will be effectively reduced to a 1D one: $$\label{1D lattice model} H_{xy}=(m_{xy}+2B\cos k_{z})\tau_{x}+(v_{z}\sin k_{z}+i\gamma_{z})\tau_{z},$$ where $m_{xy}=m-2B(3-\cos k_{x}-\cos k_{y})$. This Hamiltonian takes the same form as the 1D non-Hermitian lattice model in Refs. [@Lee2016; @Martinez2018]. It also bears a very close resemblance to the well-studied non-Hermitian SSH model after taking a basis change $\tau_{z}\rightarrow\tau_{y}$ [@Yao2018Edge; @Kunst2018; @Lieu2018]. For simplicity, we will choose the parameters $B=0.5$ and $V_{z}=1$. Following Refs. 
[@Yao2018Edge] and [@Kunst2018], under OBCs in the $z$ direction, it can be shown that topological phase transitions accompanying the (dis)appearance of boundary zero modes take place at $m_{xy}=\pm\sqrt{\gamma_{z}^{2}+1}$ or $\pm\sqrt{\gamma_{z}^{2}-1}$. This is in striking contrast to the periodic case, where the bulk ERs are projected to $m_{xy}=1\pm\gamma_{z}$ and $-1\pm\gamma_{z}$. Under OBCs, the topologically nontrivial region with boundary zero modes corresponds to [@Yao2018Edge; @Kunst2018] (see Appendix C for a detailed calculation) $$\label{phase} \left\{ \begin{array}{ll} |m_{xy}|<\sqrt{\gamma_{z}^{2}+1}, & \hbox{for $\gamma_{z}<1$;} \\ \sqrt{\gamma_{z}^{2}-1}<|m_{xy}|<\sqrt{\gamma_{z}^{2}+1}, & \hbox{for $\gamma_{z}>1$.} \end{array} \right.$$ This is numerically verified in Fig. 4(a), where surface flat bands are bounded by blue solid lines characterized by $m_{xy}=\pm\sqrt{\gamma_{z}^{2}+1}$ instead of the dashed lines representing bulk ERs. As a further illustration, we plot the absolute \[Fig. 4(b)\], real \[Fig. 4(c)\], and imaginary \[Fig. 4(d)\] values of the full complex energy spectra as a function of $k_{x}$ with fixed $k_{y}=0$. Although the model is non-Hermitian with gain and loss, in some parameter regions, the spectra become purely real, which may result from a $PT$-like symmetry [@Bender1998; @Lee2016; @Martinez2018; @Yao2018Edge]. Moreover, since both the real and imaginary parts of the flat bands equal zero ($|\epsilon|=0$), they should be dynamically stable zero modes. Non-Hermitian skin effect ------------------------- We continue to investigate the exotic non-Hermitian skin effect under OBCs in our system. For the 1D Hamiltonian in Eq. (\[1D lattice model\]), it can be shown that when $m_{xy}<0$ ($m_{xy}>0$), not only the zero modes but also a macroscopic fraction of the bulk eigenstates may be localized near the top (bottom) boundary for a large parameter region [@Martinez2018]. This stems from the parameter $\beta$ describing the behavior of an eigenstate in the $z$ direction as $\phi(z_{n})=\beta^{n}\phi(z_{0})$, with $z_{0}$ denoting the position of the bottom slab. Obviously, $|\beta|<1$ ($|\beta|>1$) corresponds to a state localized near the bottom (top) surface, and $|\beta|=1$ describes an extended state. According to Ref. [@Yao2018Edge], the bulk eigenstates for a long chain require (see Appendix B for details) $$\label{beta} |\beta|=\sqrt{\Big|\frac{m_{xy}-\gamma_{z}}{m_{xy}+\gamma_{z}}\Big|},$$ leading to $|\beta|>1$ ($|\beta|<1$) for $m_{xy}<0$ ($m_{xy}>0$) and $|\beta|=1$ for $m_{xy}=0$. To further characterize the localization property, we calculate the inverse participation ratio (IPR) to measure the localization of a state $\phi_{i}$, which is defined as $\sum_{z}|\phi_{i}(z)|^{4}/[\sum_{z}|\phi_{i}(z)|^{2}]^{2}$ [@Martinez2018]. For extended states, it should be proportional to $1/N$, where $N$ is the total number of lattice sites in the open-boundary direction. Figure 4(e) numerically shows the IPR of a typical bulk eigenstate with the same set of parameters as in Fig. 4(a). It can be seen that the extended states exist only in the vicinity of the lines characterized by $m_{xy}=0$ (black regions), with $|\beta|=1$ as predicted by Eq. (\[beta\]), while the maximum IPR appears around the lines with $m_{xy}=\pm\gamma_{z}$ (white regions), where $|\beta|\rightarrow0$ or $\infty$, implying completely localized states. 
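A minimal numerical sketch of this behavior is given below (our own illustration, not taken from the main calculation: it assumes the real-space tight-binding form of Eq. (\[1D lattice model\]) derived in Appendix B, uses the parameters $B=0.5$, $v_{z}=1$, $\gamma_{z}=0.6$ of Fig. 4, and an arbitrarily chosen number of slabs). It builds the open chain for a few values of $m_{xy}$, including the representative values used below, and prints the mean slab-resolved IPR of the right eigenvectors, which according to the analysis above should be of order $1/N$ only near $m_{xy}=0$.

```python
import numpy as np

def open_chain(m_xy, gamma, B=0.5, vz=1.0, N=60):
    """Open chain for the reduced 1D model, basis (A_1, B_1, ..., A_N, B_N);
    hoppings follow the tight-binding form written out in Appendix B."""
    t1, t2 = B, vz / 2.0
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        a, b = 2 * n, 2 * n + 1
        H[a, a], H[b, b] = 1j * gamma, -1j * gamma          # on-site gain / loss
        H[a, b] = H[b, a] = m_xy                            # intracell inter-orbital
        if n + 1 < N:
            a2, b2 = a + 2, b + 2
            H[a, b2] = H[b2, a] = H[b, a2] = H[a2, b] = t1  # intercell inter-orbital
            H[a, a2], H[a2, a] = -1j * t2, 1j * t2          # intercell intra-orbital, A
            H[b, b2], H[b2, b] = 1j * t2, -1j * t2          # intercell intra-orbital, B
    return H

def mean_ipr(H, N):
    """Average inverse participation ratio of the right eigenvectors,
    with the weight of the two orbitals combined slab by slab."""
    _, vecs = np.linalg.eig(H)
    slab_prob = (np.abs(vecs) ** 2).reshape(N, 2, -1).sum(axis=1)
    slab_prob /= slab_prob.sum(axis=0)
    return np.mean(np.sum(slab_prob ** 2, axis=0))

N, gamma = 60, 0.6
for m_xy in (1.0, gamma, 0.0, -0.5):   # roughly the four representative points of Fig. 4(f)
    print(f"m_xy = {m_xy:+.2f}   mean IPR = {mean_ipr(open_chain(m_xy, gamma, N=N), N):.3f}")
```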
As an illustration, we choose four representative points, A $(\pi/2,0)$, B $(-\arccos(-0.4),0)$, C $(-\pi/2,\pi/2)$, and D $(2\pi/3,\pi/2)$, with $m_{xy}=1$, $\gamma_{z}$, $0$, and $-1/2$, respectively, to plot the wave function distributions $|\phi(z)|^{2}$ in the z direction of both the zero mode and a representative bulk eigenstate in Fig. 4(f). For point A (D), both the zero mode and the bulk eigenstate are localized near the bottom (top) slab, while for point B, both eigenstates are indeed totally localized at the bottom slab, which may also be related to the occurrence of higher-order EPs (HEPs), as marked by red points in Figs. 4(b)-4(d) [@Martinez2018]. For point C, the zero mode is distributed equally on both surfaces, while the bulk state has now become extended. Intriguingly, depending on the parameter $m$, the surface flatbands and a macroscopic fraction of bulk eigenstates may be localized at (i) the bottom surface when $m>5$ ($m_{xy}>0$ is always satisfied), (ii) the top surface when $m<1$ ($m_{xy}<0$ is always satisfied), (iii) both the top and bottom surfaces but at different surface BZ regions when $1<m<5$. For example, we plot the wave function distribution of the state closest to zero energy on the top and bottom slabs, respectively, for $m=3$, $\gamma_{z}=0.6$ \[Figs. 4(g)\] and $m=0.9$, $\gamma_{z}=1.1$ \[Fig. 4(h)\], where distinct localization behaviors between them can clearly be observed from the wave function distribution on opposite boundary slabs. Discussion and Conclusion ========================= Experimentally speaking, although it is quite challenging to tune the gain-and-loss term in condensed matter systems, dissipative waveguide systems and ultracold atomic gas may provide a feasible platform to create and engineer such a non-Hermitian perturbation. For example, the gain-and-loss term for the two “orbitals” can be effectively realized in ultracold atomic systems by using a resonant optical beam or applying a radio frequency pulse to generate an effective decay for one of the two orbitals [@Xu2017Weyl]. Moreover, there have already been several proposals [@Zhang2016Quantum; @Xu2016Dirac] for realizing nodal-line semimetals in ultracold optical lattices, which was recently experimentally observed [@Song2018]. It is also worth mentioning that non-Hermitian Weyl exceptional rings have been experimentally realized in optical waveguide arrays [@Cerjan2018Experimental]. Very recently, electric-circuit realizations of non-Hermitian topological phases were also proposed in Refs. [@luo2018nodal; @ezawa2018electric]. In summary, we have theoretically investigated non-Hermitian nodal-line semimetals, where the non-Hermiticity originates from the introduced particle gain-and-loss perturbation. Through dimensional reduction, two different topological numbers have been used to describe the topology of the bulk bands. By comparing the band structures under PBCs and OBCs, the conventional bulk-surface correspondence in nodal-line semimetals was found to fail in the non-Hermitian case. Furthermore, the non-Hermitian skin effect in our system was also discussed based on the knowledge from 1D non-Hermitian models. This work was supported by the National Natural Science Foundation of China (No. 11674165), the Fok Ying-Tong Education Foundation of China (Grant No. 161006) and the Fundamental Research Funds for the Central Universities (No. 020414380038). Derivation of the winding number ================================ In this section, through the method introduced in Ref. 
[@Yin2018], we explicitly calculate the winding number defined by Eq. (\[winding number\]) in the main text for the Hamiltonian: $$h=h_{z}\tau_{z}+h_{x}\tau_{x},$$ with $h_{x}=m-B(k_{\rho}^{2}+k_{z}^{2})$, $h_{z}=v_{z}k_{z}+i\gamma_{z}$, and $\phi=\arctan(h_x/h_z)$. Here, $m$, $B$, $v_{z}$, and $\gamma_{z}$ are set to be positive without loss of generality. As a complex angle, $\phi$ can be decomposed as $\phi=\phi_{\mathrm{R}}+i\phi_{\mathrm{I}}$, where $\phi_{\mathrm{R}}$ and $\phi_{\mathrm{I}}$ denote the real and imaginary parts of $\phi$, respectively. For later reference, the values of $\phi$ for the two limits $k_{z}\rightarrow\pm\infty$ are given as $$\phi_{k_{z}\rightarrow\pm\infty}=\arctan\Big(\frac{h_{x}}{h_{z}}\Big)_{k_{z}\rightarrow\pm\infty}=\mp\frac{\pi}{2},$$ which are purely real. Through the relation $$e^{2i\phi}=\frac{\cos\phi+i\sin\phi}{\cos\phi-i\sin\phi}=\frac{1+i\tan\phi}{1-i\tan\phi}=\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}},$$ it is obvious that the amplitude and phase parts are related to $\phi_{\mathrm{I}}$ and $\phi_{\mathrm{R}}$, respectively, as $$e^{-2\phi_{\mathrm{I}}}=\Big|\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}}\Big|,$$ and $$e^{2i\phi_{\mathrm{R}}}=\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}}\bigg/\Big|\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}}\Big|.$$ First, since $\phi_{I}$ is found to be a continuous function of $k_{z}$, the imaginary part of the integral in Eq. (\[winding number\]) is obtained as $$\frac{1}{2\pi}\int^{\infty}_{-\infty}dk_{z}\partial_{k_{z}}\phi_{\mathrm{I}}= \frac{\phi_{\mathrm{I}}(k_{z}\rightarrow\infty)-\phi_{\mathrm{I}}(k_{z}\rightarrow-\infty)}{2\pi}=0.$$ Now, consider the relation $$\tan(2\phi_{\mathrm{R}})=\mathrm{Im}\Big(\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}}\Big)\bigg/\mathrm{Re}\Big(\frac{h_{z}+ih_{x}}{h_{z}-ih_{x}}\Big);$$ it can be rewritten as [@Yin2018] $$\tan(2\phi_{\mathrm{R}})=\tan(\phi_{A}+\phi_{B}),$$ with the two real angles defined via [@Yin2018] $$\begin{split} \tan\phi_{A}=&\frac{\mathrm{Re}(h_{x})+\mathrm{Im}(h_{z})}{\mathrm{Re}(h_{z})-\mathrm{Im}(h_{x})}=\frac{m-B(k_{\rho}^{2}+k_{z}^{2})+\gamma_{z}}{v_{z}k_{z}}\\ \tan\phi_{B}=&\frac{\mathrm{Re}(h_{x})-\mathrm{Im}(h_{z})}{\mathrm{Re}(h_{z})+\mathrm{Im}(h_{x})}=\frac{m-B(k_{\rho}^{2}+k_{z}^{2})-\gamma_{z}}{v_{z}k_{z}}. \end{split}$$ Then we can simply get $$\phi_{\mathrm{R}}=n\pi+\frac{1}{2}(\phi_{A}+\phi_{B}),$$ where $n$ is an integer. Note that both $\phi_{A}$ and $\phi_{B}$ exhibit discontinuities at $k_{z}=0$, with $$\begin{split} \phi_{A}(k_{z}\rightarrow 0^{\pm})=&\pm\frac{\pi}{2}\mathrm{sgn}(m+\gamma_{z}-Bk_{\rho}^{2})\\ \phi_{B}(k_{z}\rightarrow 0^{\pm})=&\pm\frac{\pi}{2}\mathrm{sgn}(m-\gamma_{z}-Bk_{\rho}^{2}). \end{split}$$ Moreover, when $k_{z}\rightarrow\pm\infty$, $$\begin{split} \phi_{A}(k_{z}\rightarrow \pm\infty)=\phi_{B}(k_{z}\rightarrow \pm\infty)=\mp\frac{\pi}{2}. \end{split}$$ Finally, we have $$\begin{split} w=&\frac{1}{2\pi}\int_{-\infty}^{\infty}dk_{z}\partial_{k_{z}}\phi_{\mathrm{R}}\\ =&\frac{1}{4\pi}\int_{-\infty}^{\infty}dk_{z}\partial_{k_{z}}(\phi_{A}+\phi_{B})\\ =&\frac{1}{4\pi}\bigg(\Big(\phi_{A}\big|^{+\infty}_{0^{+}}+\phi_{A}\big|_{-\infty}^{0^{-}}\Big) +\Big(\phi_{B}\big|^{+\infty}_{0^{+}}+\phi_{B}\big|_{-\infty}^{0^{-}}\Big)\bigg)\\ =&-\frac{1}{2}-\frac{\mathrm{sgn}(m+\gamma_{z}-Bk_{\rho}^{2})+\mathrm{sgn}(m-\gamma_{z}-Bk_{\rho}^{2})}{4}\\ =&\left\{ \begin{array}{ll} -1, & \hbox{$|k_{\rho}|<\sqrt{m-\gamma_{z}}$;} \\ -\frac{1}{2}, & \hbox{$\sqrt{m-\gamma_{z}}<|k_{\rho}|<\sqrt{m+\gamma_{z}}$;} \\ 0, & \hbox{$|k_{\rho}|>\sqrt{m+\gamma_{z}}$.} \end{array} \right. 
\end{split}$$ This is exactly Eq. (\[w result\]) in the main text. Derivation of $\beta$ for bulk states under OBCs ================================================ ![Schematic illustration of the 1D tight-binding model for the effective momentum-space Hamiltonian in Eq. (\[1D lattice model\]) with $\gamma=\gamma_{z}$, $m=m_{xy}$, $t_{1}=B$, and $t_{2}=v_{z}/2$.](fig5.eps){width="5cm"} In this section, we will present a brief derivation of the condition in Eq. (\[beta\]) for the parameter $\beta$ of bulk states under OBCs. We start from the 1D real-space tight-binding model with two orbitals, $A$ and $B$, in a unit cell for the momentum-space Hamiltonian in Eq. (\[1D lattice model\]) [@Lee2014; @Martinez2018], which is schematically shown in Fig. 5. Here, $t_{1}=B$ represents the intercell interorbital nearest-neighbor (NN) hopping, and $-it_{2}$ and $it_{2}$ with $t_{2}=v_{z}/2$ are the intercell intraorbital NN hoppings for $A$ and $B$ orbitals, respectively, $m=m_{xy}$ denotes the intracell interorbital hopping, and $i\gamma$ ($-i\gamma$) with $\gamma=\gamma_{z}$ is the on-site gain-and-loss term for $A$ ($B$). The real-space wave function satisfies $$\begin{split} it_{2}\psi_{An-1}+t_{1}\psi_{Bn-1}+i\gamma\psi_{An}+m\psi_{Bn}-it_{2}\psi_{An+1}+t_{1}\psi_{Bn+1}&=E\psi_{An},\\ t_{1}\psi_{An-1}-it_{2}\psi_{Bn-1}+m\psi_{An}-i\gamma\psi_{Bn}+t_{1}\psi_{An+1}+it_{2}\psi_{Bn+1}&=E\psi_{Bn}. \end{split}$$ Analogous to Ref. [@Yao2018Edge], by taking the ansatz $(\psi_{An},\psi_{Bn})=\beta^{n}(\psi_{A},\psi_{B})$, we get $$\begin{split} i\Big[t_{2}\big(\frac{1}{\beta}-\beta\big)+\gamma\Big]\psi_{A}+\Big[t_{1}\big(\frac{1}{\beta}+\beta\big)+m\Big]\psi_{B}&=E\psi_{A},\\ -i\Big[t_{2}\big(\frac{1}{\beta}-\beta\big)+\gamma\Big]\psi_{B}+\Big[t_{1}\big(\frac{1}{\beta}+\beta\big)+m\Big]\psi_{A}&=E\psi_{B}.\\ \end{split}$$ This leads to the condition $$\begin{split} E^{2}+\Big[t_{2}\big(\frac{1}{\beta}-\beta\big)+\gamma\Big]^{2}=\Big[t_{1}\big(\frac{1}{\beta}+\beta\big)+m\Big]^{2}, \end{split}$$ from which $\beta$ can be determined. In this paper, we consider the simple case of $t_{1}=t_{2}=t$ ($B=v_{z}/2$), where the above equation can be reduced to $$\label{betaequation} \begin{split} 2t(m+\gamma)\beta^{2}+(m^{2}-\gamma^{2}+4t^{2}-E^{2})\beta+2t(m-\gamma)=0, \end{split}$$ leading to two solutions, $\beta_{1}$ and $\beta_{2}$, which satisfy $$\label{betasolution} \beta_{1}\beta_{2}=\frac{m-\gamma}{m+\gamma}.$$ Through a similar argument in Ref. [@Yao2018Edge] for the general solution, it can be shown that the bulk states of a long chain require $|\beta_{1}|=|\beta_{2}|$. Combined with Eq. (\[betasolution\]), this yields $$\label{betacondition} |\beta|=|\beta_{1}|=|\beta_{2}|=\sqrt{\Big|\frac{m-\gamma}{m+\gamma}\Big|},$$ which is Eq. (\[beta\]) in the main text. When $|\beta|<1$ ($|\beta|>1$), the bulk states are localized at the left (right) end, corresponding to the bottom (top) slab in the main text. Derivation of the topological nontrivial region under OBCs ========================================================== Based on Eq. (\[betaequation\]), in the $E\rightarrow0$ limits, we get $$\beta_{1,2}=-\frac{m-\gamma}{2t}, \quad-\frac{2t}{m+\gamma}.$$ Following Ref. [@Yao2018Edge], the phase boundaries where the bulk states touch zero energy can be determined by inserting Eq. (\[betacondition\]) into $|\beta_{1,2}|$ from the above equation, leading to $$m=\pm\sqrt{\gamma^{2}+4t^{2}} \quad\mathrm{or} \quad\pm\sqrt{\gamma^{2}-4t^{2}},$$ where $t=B=v_{z}/2=1/2$ is chosen in Eq. 
(\[phase\]) of the main text. Then, using the methods introduced in Ref. [@Yao2018Edge], the OBC topological invariant $\chi$ (winding number) for the non-Bloch Hamiltonian obtained by replacing $e^{ik}\rightarrow\beta$ and $e^{-ik}\rightarrow\beta^{-1}$ in Eq. (\[1D lattice model\]) can be readily calculated as $$\chi=\left\{ \begin{array}{ll} 1, & \hbox{$|m_{xy}|<\sqrt{\gamma_{z}^{2}+1}$,} \\ 0, & \hbox{$|m_{xy}|>\sqrt{\gamma_{z}^{2}+1}$,} \end{array} \right.$$ when $\gamma_{z}<1$ and $$\chi=\left\{ \begin{array}{ll} 1, & \hbox{$\sqrt{\gamma_{z}^{2}-1}<|m_{xy}|<\sqrt{\gamma_{z}^{2}+1}$,} \\ 0, & \hbox{$|m_{xy}|>\sqrt{\gamma_{z}^{2}+1}$ or $|m_{xy}|<\sqrt{\gamma_{z}^{2}-1}$,} \end{array} \right.$$ when $\gamma_{z}>1$, which leads to the topological nontrivial region in Eq. (\[phase\]).
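As an independent cross-check of Eq. (\[w result\]), the winding number can also be evaluated numerically: since $e^{2i\phi_{\mathrm{R}}}$ is the phase of $(h_{z}+ih_{x})/(h_{z}-ih_{x})$, accumulating the phase increments of this ratio along $k_{z}$ and dividing by $4\pi$ gives $w$ directly. The short Python sketch below does this for the illustrative parameter choice $m=B=v_{z}=1$, $\gamma_{z}=0.5$ (these numbers are ours, not taken from the main text) and should return $-1$, $-1/2$ and $0$ in the three $k_{\rho}$ regions.

```python
import numpy as np

def winding(k_rho, m=1.0, B=1.0, v_z=1.0, gamma_z=0.5,
            k_max=60.0, n_pts=600_001):
    """w = (1/2pi) * total change of phi_R, with 2*phi_R = arg[(h_z + i h_x)/(h_z - i h_x)]."""
    k_z = np.linspace(-k_max, k_max, n_pts)
    h_x = m - B * (k_rho**2 + k_z**2)
    h_z = v_z * k_z + 1j * gamma_z
    q = (h_z + 1j * h_x) / (h_z - 1j * h_x)
    dphi = np.angle(q[1:] / q[:-1])        # small, branch-cut-free phase increments
    return dphi.sum() / (4.0 * np.pi)

for k_rho in (0.3, 1.0, 1.5):              # inside, between and outside the two rings
    print(f"k_rho = {k_rho}: w = {winding(k_rho):+.3f}")
# expected: -1.000, -0.500, +0.000 (boundaries at |k_rho| = sqrt(m -/+ gamma_z) for B = 1)
```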
--- abstract: 'The commuting graph of a group $G$, denoted by ${\Gamma}(G)$, is the simple undirected graph whose vertices are the non-central elements of $G$ and two distinct vertices are adjacent if and only if they commute. Let ${\mathbb{Z}}_m$ be the commutative ring of equivalence classes of integers modulo $m$. In this paper we investigate the connectivity and diameters of the commuting graphs of $\operatorname{GL}(n,{\mathbb{Z}}_m)$ to contribute to the conjecture that there is a universal upper bound on $\operatorname{diam}({\Gamma}(G))$ for any finite group $G$ when ${\Gamma}(G)$ is connected. For any composite $m$, it is shown that ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ are connected and $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m)))=\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m)))=3$. For $m$ a prime, the instances of connectedness and absolute bounds on the diameters of ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ when they are connected are concluded from previous results.' author: - | Michael Giudici and Aedan Pope[^1]\ School of Mathematics and Statistics\ The University of Western Australia\ 35 Stirling Highway\ Crawley WA 6009\ [email protected], [email protected] title: The diameters of commuting graphs of linear groups and matrix rings over the integers modulo $m$ --- Introduction ============ For a group $G$, we denote the *center* of $G$ by $\operatorname{\mathcal{Z}}(G)$ and $\operatorname{\mathcal{Z}}(G)=\{x\in G | xy=yx~\forall y \in G \}$. If $x$ is an element of $G$, then ${\operatorname{\mathcal{C}}}_G(x)$ denotes the *centraliser* of $x$ in $G$ and ${\operatorname{\mathcal{C}}}_G(x)=\{y\in G|xy=yx\}$. The *commuting graph* of a group, denoted by ${\Gamma}(G)$, is the simple undirected graph whose vertices are the non-central elements of $G$ and two distinct vertices $x$ and $y$ are adjacent if and only if $xy=yx$. We take analogous definitions for the center, centraliser and commuting graph of a ring $R$. A *path* in a graph is an ordered list $a_1, a_2, \ldots , a_k$ of vertices where there is an edge in the graph from $a_i$ to $a_{i+1}$ for all $i$; the path is said to be between $a_1$ and $a_k$ and to be of length $k-1$. A graph is *connected* if and only if there exists a path between any two distinct vertices in the graph. The *distance* between two vertices of a graph, say $x$ and $y$, is the length of the shortest path between $x$ and $y$ in the graph if such a path exists and is $\infty$ otherwise; this is denoted $\operatorname{d}(x,y)$. The *diameter* of a graph $\Gamma$ is the maximum distance between any two vertices in the graph, and is denoted $\operatorname{diam}(\Gamma)=\max\{\operatorname{d}(x,y)| x,y \in \Gamma\}$. We use $\operatorname{M}(n,R)$ to denote the ring of all $n \times n$ matrices over the ring $R$, $\operatorname{GL}(n,R)$ to denote the group of all invertible $n \times n$ matrices over $R$ and $\operatorname{SL}(n,R)$ to denote those with determinant 1. ${\mathbb{Z}}_m$ denotes the commutative ring of equivalence classes of integers modulo $m$. The commuting graphs of groups have been studied heavily, for example in [@brauer; @iranmanesh; @segev; @segev2], and those of rings in [@akbari2; @akbari3; @akbari]. In [@iranmanesh], Iranmanesh and Jafarzadeh conjecture that there is a universal upper bound on the diameter of a connected commuting graph for any finite nonabelian group.
They then determine when the commuting graph of a symmetric or alternating group is connected and that the diameter is at most 5 in this case. The paper [@segev] proves that for all finite classical simple groups over a field of size at least 5, when the commuting graph of a group is connected then its diameter is at most 10. Previous research into the diameters of the commuting graphs of linear groups and matrix rings has primarily covered these over fields. For a field $F$, the authors of [@akbari] show that $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,F))) \leq \operatorname{diam}({\Gamma}(\operatorname{M}(n,F))) \leq 6$ when these graphs are connected and $|F|$ is greater than or equal to $3$. In addition, [@akbari2] provides necessary and sufficient conditions for ${\Gamma}(\operatorname{SL}(n,F))$ to be connected; an upper bound on the diameter of ${\Gamma}(\operatorname{SL}(n,F))$ can be generated from the proof. Our paper adds to this body of evidence supporting the conjecture by calculating the diameter of the commuting graphs of some general linear groups over commutative rings that are not fields. The diameters of the corresponding matrix rings are also calculated. \[composite\] Let $m$ be a composite natural number and $n \geq 2$. Then ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ are connected and $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m)))=\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m)))=3$. Consider when $m$ is a prime. If $m \geq 3$ and $n \geq 2$, by [@akbari2 Corollaries 7 and 11] and [@akbari Theorems 14 and 17], ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ are connected if and only if $n$ is not prime and $4 \leq \operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))) \leq {\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m)) \leq 6$ in this case. In the case of $m=2$ and $n \geq 2$, [@akbari2 Corollary 7] shows that ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_2))$ is connected if and only if $n$ is not a prime number and [@akbari2 Corollary 14] gives that ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_2))$ is connected if and only if $n$ and $n-1$ are not prime numbers. Moreover, $\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_2))) \leq 6$ when it is connected [@akbari Theorem 17]. By modifying the proof of [@segev Theorem 12.5], one can conclude that, for $n \geq 5$, an arbitrary element of $\operatorname{GL}(n,{\mathbb{Z}}_2)=\operatorname{PSL}(n,{\mathbb{Z}}_2)$ is distance at most 3 from a transvection of $\operatorname{GL}(n,{\mathbb{Z}}_2)$ and that the distance between any two transvections is at most 2. Thus for $n\geq 5, \operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_2))) \leq 8$; if $n < 5$ then $n$ or $n-1$ is prime and the graph is disconnected from above. Therefore, for any integers $m$ and $n$ that are greater than $1$, when the corresponding commuting graphs are connected there is a universal upper bound on $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m)))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$. We record here that we have used the <span style="font-variant:small-caps;">Magma</span>[@magma] implementation of the small group database [@smallgroups; @millennium] to calculate the connectedness and diameters of the commuting graphs for all groups of order at most 2000, except for the orders of the form $k2^6$ for $k\neq 9, 4$ composite. 
For connected graphs, the largest diameter found was $6$. We do not know if the bound of 8 for the diameter of $\operatorname{GL}(n,2)$ is sharp. No example of a connected commuting graph of a group with diameter greater than 6 was found in previous literature and so it would be interesting to find examples of diameter greater than 6. Results ======= We begin with the following result pertaining to commuting graphs, which is a generalisation of the disconnectedness of ${\Gamma}(\operatorname{M}(2,F))$ for $F$ a field, concluded in [@akbari3 Remark 9]. \[integraldomain\] If $R$ is an integral domain, then ${\Gamma}(\operatorname{M}(2,R))$, ${\Gamma}(\operatorname{GL}(2,R))$ and ${\Gamma}(\operatorname{SL}(2,R))$ are disconnected. Let $R$ be an integral domain.\ Let $\mathcal{A}=\left\{ \begin{array}{c|c} \begin{bmatrix} a_1 & a_2 \\ 0 & a_1 \end{bmatrix} \in \operatorname{M}(2,R) & a_1, a_2 \in R, a_2 \neq 0 \end{array} \right\} \subseteq \operatorname{M}(2,R) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(2,R))$. Let $A =\begin{bmatrix} a_1 & a_2 \\ 0 & a_1 \end{bmatrix}\in \mathcal{A}$ for some $a_i \in R$ and let $X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(2,R)}(A)\setminus \operatorname{\mathcal{Z}}(\operatorname{M}(2,R))$. Then $$\begin{aligned} XA&=\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}\begin{bmatrix} a_1 & a_2 \\ 0 & a_1 \end{bmatrix} =\begin{bmatrix} a_1x_1 & a_2x_1+a_1x_2 \\ a_1x_3 & a_2x_3+a_1x_4 \end{bmatrix}\\ AX&=\begin{bmatrix} a_1 & a_2 \\ 0 & a_1 \end{bmatrix}\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} =\begin{bmatrix} a_1x_1+a_2x_3 & a_1x_2+a_2x_4 \\ a_1x_3 & a_1x_4 \end{bmatrix}\end{aligned}$$ Since $XA=AX$, this yields $a_1x_1=a_1x_1+a_2x_3$ and $a_2x_1+a_1x_2=a_1x_2+a_2x_4$. Now $a_1x_1=a_1x_1+a_2x_3$ implies $a_2x_3 = 0$. Since $a_2 \neq 0$ and there are no zero divisors in $R$, $x_3 = 0$. Moreover, $a_2x_1+a_1x_2=a_1x_2+a_2x_4$ implies $a_2x_1=a_2x_4$ which yields $x_1=x_4$ by cancelling $a_2$ in the integral domain. Thus $X = \begin{bmatrix} x_1 & x_2 \\ 0 & x_1 \end{bmatrix}$. As $X \notin \operatorname{\mathcal{Z}}(\operatorname{M}(2,R))$, it must not be a scalar matrix and so $x_2 \neq 0$. Therefore $X \in \mathcal{A}$. So in ${\Gamma}(\operatorname{M}(2,R))$, $\mathcal{A}$ forms an isolated connected component. Thus the matrices $B=\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \in \mathcal{A} \cap \operatorname{SL}(2,R)$ and $C=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \in \operatorname{SL}(2,R) \setminus \operatorname{\mathcal{Z}}(\operatorname{SL}(2,R))$ with $C \notin \mathcal{A}$ are in different connected components of ${\Gamma}(\operatorname{M}(2,R))$, ${\Gamma}(\operatorname{GL}(2,R))$ and ${\Gamma}(\operatorname{SL}(2,R))$. Therefore the graphs ${\Gamma}(\operatorname{M}(2,R))$, ${\Gamma}(\operatorname{GL}(2,R))$ and ${\Gamma}(\operatorname{SL}(2,R))$ are disconnected. The following are some useful lemmas on the properties of the ring of integers modulo $m$. \[units\] Let $u,v,s,t$ be pairwise coprime integers. Then for any natural numbers $k,l$, the two integers $us^k+vt^l$ and $st$ are coprime. Let $u,v,s,t$ be pairwise coprime integers and $k, l$ be natural numbers. Let $d=\gcd(us^k+vt^l, st)$. Assume that $d \neq 1$ and let $p$ be a prime that divides $d$. Then $p|st$ and since $p$ is prime, $p|s$ or $p|t$. Without loss of generality, assume $p|s$, and thus $p|us^k$. As $v,s,t$ are pairwise coprime, $p|s$ implies that $p\nmid v$ and $p\nmid t$, thus $p \nmid vt^l$.
So $p \nmid us^k+vt^k$, a contradiction. Therefore $d=1$ and $us^k+vt^l, st$ are coprime. \[dets\] For coprime natural numbers $s$ and $t$ greater than 1, if $X$ and $Y$ are elements of $\operatorname{GL}(n,{\mathbb{Z}}_{st})$ then $sX+tY$ is also an element of $\operatorname{GL}(n,{\mathbb{Z}}_{st})$. Let $s, t$ be coprime natural numbers greater than 1 and $X,Y \in \operatorname{GL}\left(n,{\mathbb{Z}}_{st}\right)$. Then by the Leibniz formula for the determinant of an $n\times n$ matrix, $$\begin{aligned} \det(sX+tY) &= \sum_{\sigma \in S_n} \left(\operatorname{sgn}(\sigma)\prod_{i=1}^{n} (sX+tY)_{i,\sigma(i)}\right)\\ &=\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)(sX_{1,\sigma(1)}+tY_{1,\sigma(1)})(sX_{2,\sigma(2)}+tY_{2,\sigma(2)}) \cdots (sX_{n,\sigma(n)}+tY_{n,\sigma(n)})\\ &=\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)((sX_{1,\sigma(1)}sX_{2,\sigma(2)}\cdots sX_{n,\sigma(n)}) + (tY_{1,\sigma(1)}tY_{2,\sigma(2)}\cdots tY_{n,\sigma(n)})) \\&\text{(expanding the product: every term that is }s^at^bh\text{ for }a,b \geq 1\text{ and some }h\\&\text{ a product of entries of }X\text{ and }Y\text{ is }0\text{ as }s^at^b =0\text{)}\\ &=\sum_{\sigma \in S_n} \left(\operatorname{sgn}(\sigma)\prod_{i=1}^{n} sX_{i,\sigma(i)}+ \operatorname{sgn}(\sigma)\prod_{i=1}^{n} tY_{i,\sigma(i)}\right)\\ &=\sum_{\sigma \in S_n}\left(\operatorname{sgn}(\sigma)\prod_{i=1}^{n} sX_{i,\sigma(i)}\right)+ \sum_{\sigma \in S_n} \left(\operatorname{sgn}(\sigma)\prod_{i=1}^{n} tY_{i,\sigma(i)}\right)\\ &=\det(sX)+\det(tY)\\ &=s^n\det(X) + t^n\det(Y)\end{aligned}$$ Since $\det(X)$ and $\det(Y)$ are units in ${\mathbb{Z}}_{st}$, they are coprime to $s$ and $t$ and so, by Lemma \[units\], $s^n\det(X) + t^n\det(Y)$ is coprime to $st$ and hence is a unit in ${\mathbb{Z}}_{st}$. Therefore $sX+tY$ is invertible and is an element of $\operatorname{GL}(n,{\mathbb{Z}}_{st})$. For the remainder of this paper we use $I, 0, I_{r}$ and $0_{r\times s}$ to denote the identity, zero, $r \times r$ identity and $r \times s$ zero matrices respectively. We also use $E_{i,j}$ to denote the matrix with $(i,j)$ entry equal to 1 and every other entry equal to 0. We now obtain a lower bound of $3$ on the diameters of ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ for an arbitrary $m$ and $n$. For any $n,m \geq 2$, the matrix $P=I_{n} + \begin{bmatrix} 0_{n-1 \times 1} & I_{n-1} \\ 0 & 0_{1 \times n-1} \end{bmatrix} \in \operatorname{GL}(n,{\mathbb{Z}}_m)\setminus\operatorname{\mathcal{Z}}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ has the property that ${\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P) \cap {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P^T)=\operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_m))$. Let $P$ be the matrix described in the hypothesis. Then $$P = \begin{bmatrix} 1 & 1 & 0 & \ldots & 0 \\ 0 & 1 & 1 & \ldots & 0 \\ \vdots & \ddots & \ddots &\ddots& \vdots \\ 0 & \ldots & 0 & 1 & 1 \\ 0 & \ldots & 0 & 0 & 1 \end{bmatrix} \text{ and }P^T = \begin{bmatrix} 1 & 0 & \ldots & 0 & 0\\ 1 & 1 & \ddots & \vdots & \vdots \\ 0 & 1 & \ddots & 0 & 0 \\ \vdots & \vdots & \ddots & 1 & 0 \\ 0 & 0 & \ldots & 1 & 1 \\ \end{bmatrix}$$ Let $X \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P)$. 
So $X = \begin{bmatrix} x_{1,1} & x_{1,2} & \ldots & x_{1,n} \\ x_{2,1} & x_{2,2} & \ldots & x_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \ldots & x_{n,n} \\ \end{bmatrix}$ for some $x_{i,j} \in {\mathbb{Z}}_{m}$. $$\begin{aligned} \text{Then }PX &= \begin{bmatrix} x_{1,1}+x_{2,1} & x_{1,2}+x_{2,2} & \ldots & x_{1,n}+x_{2,n} \\ x_{2,1}+x_{3,1} & x_{2,2}+x_{3,2} & \ldots & x_{2,n}+x_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,1}+x_{n,1} & x_{n-1,2}+x_{n,2} & \ldots & x_{n-1,1n}+x_{n,n} \\ x_{n,1} & x_{n,2} & \ldots & x_{n,n} \end{bmatrix} \\ \text{and }XP &= \begin{bmatrix} x_{1,1} & x_{1,1}+x_{1,2} & \ldots & x_{1,n-1}+x_{1,n} \\ x_{2,1} & x_{2,1}+x_{2,2} & \ldots & x_{2,n-1}+x_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,1}+x_{n,2} & \ldots & x_{n,n-1}+x_{n,n} \end{bmatrix}.\end{aligned}$$ Since $PX=XP$, we obtain $$\begin{array}{ccccc} x_{1,1}+x_{2,1} = x_{1,1} & x_{1,2}+x_{2,2} = x_{1,1}+x_{1,2} & \ldots & x_{1,n}+x_{2,n} = x_{1,n-1}+x_{1,n} \\ x_{2,1}+x_{3,1} = x_{2,1} & x_{2,2}+x_{3,2} = x_{2,1}+x_{2,2} & \ldots & x_{2,n}+x_{3,n} = x_{2,n-1}+x_{2,n}\\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} = x_{n,1} & x_{n,2} = x_{n,1}+x_{n,2} & \ldots & x_{n,n} = x_{n,n-1}+x_{n,n} \\ \end{array}$$ The left column of equations from $PX=XP$ gives $$\begin{aligned} x_{1,1} +x_{2,1}&= x_{1,1}\text{, so } x_{2,1}=0.\\ x_{2,1} +x_{3,1}&= x_{2,1}\text{, so } x_{3,1}=0.\\ &\vdots\\ x_{n-1,1} +x_{n,1}&= x_{n-1,1}\text{, so } x_{n,1}=0.\end{aligned}$$ The second column gives $$\begin{aligned} x_{1,2} +x_{2,2}&= x_{1,1} + x_{1,2}\text{, so }x_{2,2}=x_{1,1}.\\ x_{2,2} +x_{3,2}&= x_{2,1} + x_{2,2}\text{, from above } x_{2,1}=0\text{ and so }x_{3,2}=0.\\ x_{3,2} +x_{4,2}&= x_{3,1} + x_{3,2}\text{, from above } x_{3,1}=0\text{ and so }x_{4,2}=0.\\ &\vdots\\ x_{n-1,2} +x_{n,2}&= x_{n-1,1}+x_{n-1,2}\text{, from above } x_{n-1,1}=0\text{ and so }x_{n,2}=0.\\\end{aligned}$$ The third column gives $x_{3,3}=x_{2,2}$ and then $x_{k,3} = 0$ for all $k \geq 4$. This continues across the columns and thus $x_{i,i}=x_{1,1}$ for all $i$ and $x_{j,k}=0$ whenever $j > k$. So $X$ has the form $$X=\begin{bmatrix} x_{1,1} & x_{1,2} & \ldots & x_{1,n} \\ 0 & x_{1,1} & \ldots & x_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & x_{1,1} \end{bmatrix}$$ Let $Y \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P^T)$ where $Y_{i,j}=y_{i,j}$ for some $y_{i,j}\in {\mathbb{Z}}_{m}$. By similar arithmetic, $Y$ must have the form $$Y=\begin{bmatrix} y_{1,1} & 0 & \ldots & 0 \\ y_{2,1} & y_{1,1} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ y_{n,1} & y_{n,2} & \ldots & y_{1,1} \end{bmatrix}$$ Thus $${\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P) \cap {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_m)}(P^T)= \left\{ \begin{array}{c|c} \begin{bmatrix} s & 0 & \ldots & 0 \\ 0 & s & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & s \end{bmatrix} & s \in {\mathbb{Z}}_{m} \end{array} \right\} = \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_m)). \qedhere$$ \[lowerbound\] For any $m, n \geq 2$, $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m)))$ and $\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m)))$ are at least $3$. The following lemmas discern some properties of ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ when $m$ is a prime power. 
\[aprimepower\] If $p$ is prime and $t \geq 2$ is a natural number then for any $X \in \operatorname{M}(n,{\mathbb{Z}}_{p^t})$ there exists $Y \in \operatorname{M}(n,{\mathbb{Z}}_{p^t})$ such that $X$ commutes with $p^{t-1} Y + I$ and $p^{t-1} Y + I \in \operatorname{GL}(n,{\mathbb{Z}}_{p^t}) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{p^t}))$. Let $p$ be a prime and $t \geq 2$ a natural number. Let $X \in \operatorname{M}(n,{\mathbb{Z}}_{p^t})$. To find $Y$ we will divide into two cases. Firstly, for all $i \neq j$ there exists $u_{i,j} \in {\mathbb{Z}}_{p^t}$ such that $X_{i,j}=pu_{i,j}$. Secondly, there exist distinct $v, w$ such that $X_{v,w} \neq pu$ for any $u \in {\mathbb{Z}}_{p^t}$. Suppose that we have the first case. Then $X$ commutes with $p^{t-1}E_{1,1} + I \in \operatorname{M}(n,{\mathbb{Z}}_{p^t}) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{p^t}))$. Further, $(p^{t-1}E_{1,1} + I)((-p^{t-1})E_{1,1} + I)=p^{t-1}(-p^{t-1})E_{1,1}+I=I$, so $p^{t-1}E_{1,1} + I \in \operatorname{GL}(n,{\mathbb{Z}}_{p^t}) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{p^t}))$ and this case is done. Now suppose that there exist distinct $v, w$ such that $X_{v,w} \neq pu$ for any $u \in {\mathbb{Z}}_{p^t}$. Let $A = p^{t-1} X + I\in \operatorname{M}(n,{\mathbb{Z}}_{p^t})$. Since $p$ is not a factor of $X_{v,w}$, $A_{v,w}=p^{t-1}X_{v,w} \neq 0$ and $v \neq w$ gives that $A$ is not diagonal and thus not a scalar. Now, $A$ clearly commutes with $X$ and $A(-p^{t-1} X + I)=I$, so $A=p^{t-1} X + I \in \operatorname{GL}(n,{\mathbb{Z}}_{p^t}) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{p^t}))$ and this case is also done.
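Lemma \[aprimepower\] is a finite statement for fixed $p$, $t$ and $n$, so its smallest instance can be verified exhaustively. The Python sketch below (the helper functions and their names are ours, not part of the paper) checks it for $p=2$, $t=2$, $n=2$, i.e. over all $256$ elements of $\operatorname{M}(2,{\mathbb{Z}}_4)$.

```python
import itertools
from math import gcd

p, t, n = 2, 2, 2                      # exhaustive check in M(2, Z_4)
mod, q = p ** t, p ** (t - 1)

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % mod
                       for j in range(n)) for i in range(n))

def is_scalar(A):
    return A[0][1] == 0 and A[1][0] == 0 and A[0][0] == A[1][1]

def is_invertible(A):                  # for n = 2 a unit determinant suffices
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % mod
    return gcd(det, mod) == 1

# non-central, invertible candidates of the form  p^{t-1} Y + I
candidates = []
for y in itertools.product(range(p), repeat=n * n):
    C = tuple(tuple((q * y[n * i + j] + (i == j)) % mod for j in range(n))
              for i in range(n))
    if not is_scalar(C) and is_invertible(C):
        candidates.append(C)

# every X in M(2, Z_4) must commute with at least one such candidate
for x in itertools.product(range(mod), repeat=n * n):
    X = (x[0:2], x[2:4])
    assert any(mul(X, C) == mul(C, X) for C in candidates)
print(f"Lemma verified for all {mod ** (n * n)} matrices of M(2, Z_4)")
```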
Therefore ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{p^t}))$ and ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{p^t}))$ are connected and, from the lower bound given by Corollary \[lowerbound\], $\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{p^t})))=\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{p^t})))=3$. Now we obtain some lemmas on the nature of ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_m))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_m))$ when $m$ is the product of two coprime factors. \[amulticomp\] If ${s}, {t}$ are coprime natural numbers greater than 1 then for any $X \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$ there exist $Y \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$ and $k \in {\mathbb{Z}}_{{s}{t}}$ such that ${s}Y + kI \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})}(X) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{{s}{t}}))$. Moreover, if $X \in \operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})$, then $Y$ and $k$ can be chosen so that ${s}Y + kI \in \operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})$. Let ${s}, {t}$ be coprime natural numbers greater than 1. Let $X \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$. Firstly assume that for all $i \neq j$ there exists $u_{i,j} \in {\mathbb{Z}}_{{s}{t}}$ such that $X_{i,j}={t}u_{i,j}$. Then $X$ commutes with ${s}E_{1,1} + I \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}}) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{{s}{t}}))$. Now assume there exist distinct $v, w$ such that $X_{v,w} \neq {t}u$ for any $u \in {\mathbb{Z}}_{{s}{t}}$. Then $X$ commutes with ${s}X + I \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$. Moreover, $X_{v,w}$ is not a multiple of ${t}$ and so when multiplied by ${s}$ does not give zero. Since $v \neq w$, this gives that ${s}X + I$ is not diagonal and thus nonscalar. This covers the first part of the lemma. Now let $X$ be invertible. To find appropriate $Y$ and $k$, we will divide into 3 cases: (1) ${t}= 2$ and for all $i \neq j$ there exists $u_{i,j} \in {\mathbb{Z}}_{{s}{t}}$ such that $X_{i,j}={t}u_{i,j}$. (2) ${t}\neq 2 $ and for all $i \neq j$ there exists $u_{i,j} \in {\mathbb{Z}}_{{s}{t}}$ such that $X_{i,j}={t}u_{i,j}$. (3) There exist distinct $v, w$ such that $X_{v,w} \neq {t}u$ for any $u \in {\mathbb{Z}}_{{s}{t}}$. Case 1: Here ${t}=2$ and $X$ is of the form $$X = \begin{bmatrix} x_{1,1} & {t}u_{1,2} & \ldots & {t}u_{1,n} \\ {t}u_{2,1} & x_{2,2} & \ldots & {t}u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ {t}u_{n,1} & {t}u_{n,2} & \ldots & x_{n,n} \\ \end{bmatrix}\text{ for some }x_{i,i}, u_{i,j} \in {\mathbb{Z}}_{{s}{t}}.$$ Consider that $\det(X)$ is a sum of multiples of permutations of $n$ entries of $X$ with precisely one entry from each row and column. All of the terms in the summation will have a factor of ${t}$ in them except for the $x_{1,1}x_{2,2}\cdots x_{n,n}=\operatorname{tr}(X)$ term. If one of $x_{i,i}$ were a multiple of ${t}$ then so would this term, and $\det(X)=z{t}$ for some $z$. This is not a unit in ${\mathbb{Z}}_{{s}{t}}$ so any such $X$ is not invertible. Therefore, all $x_{i,i}$ coprime to ${t}$, that is, are odd. As ${t}=2$, $z{s}\equiv 0\pmod{{s}{t}}$ if $z$ is even and $z{s}\equiv{s}\pmod{{s}{t}}$ if $z$ is odd. So ${s}x_{i,i}={s}$ for all $i$. Let $A = {s}(E_{1,n}+I) + {t}I \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$. Then $A$ is invertible by Lemma \[dets\]. 
Since $A$ is also not a scalar, $A \in \operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})\setminus \operatorname{\mathcal{Z}}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}}))$. Now $$\begin{aligned} AX =& \begin{bmatrix} ({s}+{t}) x_{1,1} +{s}{t}u_{n,1} & ({s}+{t}) {t}u_{1,2} +{s}{t}u_{n,2}& \ldots &({s}+{t}) {t}u_{1,n}+{s}x_{n,n} \\ ({s}+{t}){t}u_{2,1} & ({s}+{t})x_{2,2} & \ldots & ({s}+{t}){t}u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ ({s}+{t}){t}u_{n,1} & ({s}+{t}){t}u_{n,2} & \ldots &({s}+{t})x_{n,n} \end{bmatrix}\\ =& \begin{bmatrix} ({s}+{t}) x_{1,1} & ({s}+{t}){t}u_{1,2} & \ldots & ({s}+{t}) {t}u_{1,n} + {s}\\ ({s}+{t}){t}u_{2,1} & ({s}+{t})x_{2,2} & \ldots & ({s}+{t}){t}u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ ({s}+{t}){t}u_{n,1} & ({s}+{t}){t}u_{n,2} & \ldots & ({s}+{t})x_{n,n} \end{bmatrix}\\ =& \begin{bmatrix} ({s}+{t}) x_{1,1} & ({s}+{t}){t}u_{1,2} & \ldots & ({s}+{t}) {t}u_{1,n}+ {s}x_{1,1} \\ ({s}+{t}){t}u_{2,1} & ({s}+{t})x_{2,2} & \ldots & ({s}+{t}){t}u_{2,n} + {s}{t}u_{2,1} \\ \vdots & \vdots & \ddots & \vdots \\ ({s}+{t}){t}u_{n,1} & ({s}+{t}){t}u_{n,2} & \ldots & ({s}+{t})x_{n,n}+ {s}{t}u_{n,1} \end{bmatrix}\\ =&XA\end{aligned}$$ So $A \in {\operatorname{\mathcal{C}}}_{\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})}(X) \setminus \operatorname{\mathcal{Z}}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}}))$ and this case is done. Case 2: Here ${t}\neq 2$ and for all $i \neq j$ there exists $u_{i,j} \in {\mathbb{Z}}_{{s}{t}}$ such that $X_{i,j}={t}u_{i,j}$. Let $A={s}(I-2E_{1,1}) + {t}I \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$. Then $A$ is invertible by Lemma \[dets\]. Since ${t}\neq 2$, we have ${s}\neq -{s}$ and so $A_{1,1}=-{s}+{t}\neq {s}+{t}=A_{2,2}$. Thus $A$ is not a scalar. Writing $X_{i,i}=x_{i,i}$ and $X_{i,j}=t u_{i,j}$ for $i \neq j$ as in the previous case, $$\begin{aligned} AX =& \begin{bmatrix} (-{s}+{t}) x_{1,1} & (-{s}+{t}) {t}u_{1,2} & \ldots & (-{s}+{t}) {t}u_{1,n} \\ ({s}+{t}){t}u_{2,1} & ({s}+{t})x_{2,2} & \ldots & ({s}+{t}){t}u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ ({s}+{t}){t}u_{n,1} & ({s}+{t}){t}u_{n,2} & \ldots & ({s}+{t})x_{n,n} \end{bmatrix}\\ =& \begin{bmatrix} (-{s}+{t}) x_{1,1} & {t}^2 u_{1,2} & \ldots & {t}^2 u_{1,n} \\ {t}^2 u_{2,1} & ({s}+{t})x_{2,2} & \ldots & {t}^2 u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ {t}^2 u_{n,1} & {t}^2 u_{n,2} & \ldots & ({s}+{t})x_{n,n} \end{bmatrix}\\ =& \begin{bmatrix} (-{s}+{t}) x_{1,1} & ({s}+{t}) {t}u_{1,2} & \ldots & ({s}+{t}) {t}u_{1,n} \\ (-{s}+{t}){t}u_{2,1} & ({s}+{t})x_{2,2} & \ldots & ({s}+{t}){t}u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ (-{s}+{t}){t}u_{n,1} & ({s}+{t}){t}u_{n,2} & \ldots & ({s}+{t})x_{n,n} \end{bmatrix}\\ =&XA\end{aligned}$$So $A \in {\operatorname{\mathcal{C}}}_{\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})}(X) \setminus \operatorname{\mathcal{Z}}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}}))$ and this case is done. Case 3: Now there exist distinct $v, w$ such that $X_{v,w} \neq {t}u$ for any $u \in {\mathbb{Z}}_{{s}{t}}$. Let $A = {s}X + {t}I \in \operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})$, which is invertible by Lemma \[dets\]. Now $A_{v,w}={s}X_{v,w} \neq 0$ as $X_{v,w}$ is not a multiple of ${t}$. Since $v \neq w$, $A$ is nonscalar. Then $A$ clearly commutes with $X$ and so $A \in {\operatorname{\mathcal{C}}}_{\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})}(X) \setminus \operatorname{\mathcal{Z}}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}}))$ and the lemma is proved. 
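For the smallest modulus with two coprime factors, $m=6$ ($s=2$, $t=3$), both Lemma \[amulticomp\] and the commuting trick exploited in the next lemma (matrices of the form ${s}A+kI$ and ${t}B+\ell I$ always commute, because every cross term carries a factor ${s}{t}=0$) can be confirmed by brute force. A sketch, with helper names of our own choosing:

```python
import itertools
from math import gcd

s, t, n = 2, 3, 2
mod = s * t                                    # work in M(2, Z_6)

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % mod
                       for j in range(n)) for i in range(n))

def is_scalar(A):
    return A[0][1] == 0 and A[1][0] == 0 and A[0][0] == A[1][1]

def is_invertible(A):
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % mod
    return gcd(det, mod) == 1

def has_form(M, c):
    """True iff M = c*A + k*I for some A and k, i.e. M is scalar modulo c."""
    return (all(M[i][j] % c == 0 for i in range(n) for j in range(n) if i != j)
            and (M[0][0] - M[1][1]) % c == 0)

mats = [(e[0:2], e[2:4]) for e in itertools.product(range(mod), repeat=n * n)]
left  = [M for M in mats if has_form(M, s)]    # matrices of the form sA + kI
right = [M for M in mats if has_form(M, t)]    # matrices of the form tB + lI

# the cross terms carry a factor st = 0, so the two families commute elementwise
assert all(mul(L, R) == mul(R, L) for L in left for R in right)

# Lemma [amulticomp]: every X has a non-central commuting partner of the form
# sA + kI, and an invertible one whenever X is itself invertible
for X in mats:
    partners = [L for L in left if not is_scalar(L) and mul(X, L) == mul(L, X)]
    assert partners
    if is_invertible(X):
        assert any(is_invertible(L) for L in partners)
print("checks passed in M(2, Z_6)")
```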
\[multicomp\] If ${s}, {t}$ are coprime natural numbers greater than $1$, then ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}}))$ and ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{{s}{t}}))$ are connected and $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{{s}{t}})))=\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{{s}{t}})))=3$. Let ${s}, {t}$ be coprime natural numbers greater than 1 and write $m={s}{t}$. Let $X, Y$ be arbitrary vertices in ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{m}))$. By Lemma \[amulticomp\], there exist $A, B \in \operatorname{M}(n,{\mathbb{Z}}_{m})$ and $k, \ell \in {\mathbb{Z}}_{m}$ such that ${s}A + kI \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_{m})}(X) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{m}))$ and ${t}B + \ell I \in {\operatorname{\mathcal{C}}}_{\operatorname{M}(n,{\mathbb{Z}}_{m})}(Y) \setminus \operatorname{\mathcal{Z}}(\operatorname{M}(n,{\mathbb{Z}}_{m}))$. Now, $({s}A + kI)({t}B + \ell I)= {s}{t}AB+{s}\ell AI+k{t}IB+k\ell I= 0+\ell{s}IA+{t}kBI +\ell kI= {t}{s}BA+\ell{s}IA+{t}kBI +\ell kI= ({t}B + \ell I)({s}A + kI)$. So $X\sim {s}A + kI\sim {t}B + \ell I\sim Y$ is a path of length $3$ between $X$ and $Y$. Therefore ${\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{m}))$ is connected and, with Corollary \[lowerbound\], $\operatorname{diam}({\Gamma}(\operatorname{M}(n,{\mathbb{Z}}_{m})))=3$. Moreover, if $X, Y \in {\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{m}))$ then, by Lemma \[amulticomp\], $A, B, k$ and $\ell$ can be chosen such that ${s}A + kI, {t}B + \ell I \in \operatorname{GL}(n,{\mathbb{Z}}_{m})$. Thus the intermediate vertices on the path given above can be replaced with invertible ones, and so ${\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{m}))$ is connected and, using Corollary \[lowerbound\], $\operatorname{diam}({\Gamma}(\operatorname{GL}(n,{\mathbb{Z}}_{m})))=3$. Since any composite integer is either a prime power or a product of two coprime factors, the proof of Theorem \[composite\] is concluded by combining Lemmas \[primepower\] and \[multicomp\]. [99]{} S. Akbari, H. Bidkhori and A. Mohammadian, ‘Commuting graphs of matrix algebras’. Communications in Algebra. 36(11) (2008), 4020-4031. S. Akbari, M. Ghandehari, M. Hadian and A. Mohammadian, ‘On commuting graphs of semisimple rings’. Linear Algebra and its Applications. 390 (2004), 345-355. S. Akbari, A. Mohammadian, H. Radjavi and P. Raja, ‘On the diameters of commuting graphs’. Linear Algebra and its Applications. 418 (2006), 161-176. Hans Ulrich Besche, Bettina Eick and E. A. O’Brien, ‘A millennium project: constructing small groups’. International Journal of Algebra and Computation. 12(5) (2001), 623-664. Hans Ulrich Besche, Bettina Eick and E. A. O’Brien, ‘The groups of order at most 2000’. Electronic Research Announcements of the American Mathematical Society. 7 (2001), 1-4. Wieb Bosma, John Cannon and Catherine Playoust, ‘The Magma algebra system. I. The user language’. Journal of Symbolic Computation. 24(3-4) (1997), 235-265. Richard Brauer and K. A. Fowler, ‘On groups of even order’. The Annals of Mathematics. (2)62 (1955), 565-583. A. Iranmanesh and A. Jafarzadeh, ‘On the commuting graph associated with the symmetric and alternating groups’. Journal of Algebra and its Applications. 7 (2008), 129-146. Y. Segev, ‘The Commuting Graph of Minimal Nonsolvable Groups’. Geometriae Dedicata. 88 (2001), 55-66. Y. Segev and G. M. Seitz, ‘Anisotropic groups of type $A_n$ and the commuting graph of finite simple groups’.
Pacific Journal of Mathematics. 202 (2002), 125-225. [^1]: This work was completed while the second author was an Australian Mathematical Sciences Institute Vacation Scholar. The first author holds an Australian Research Fellowship.
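As a small sanity check of Theorem \[composite\], the first composite case $m=4$, $n=2$ is tractable by brute force. The Python sketch below (our own code, not taken from the paper) builds the commuting graphs of $\operatorname{M}(2,{\mathbb{Z}}_4)$ and $\operatorname{GL}(2,{\mathbb{Z}}_4)$ explicitly and computes their diameters by breadth-first search; both runs should report a diameter of $3$.

```python
import itertools
from collections import deque
from math import gcd

def commuting_graph_diameter(mod, invertible_only, n=2):
    def mul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % mod
                           for j in range(n)) for i in range(n))
    def unit_det(A):               # n = 2 only
        return gcd((A[0][0] * A[1][1] - A[0][1] * A[1][0]) % mod, mod) == 1
    mats = [(e[0:2], e[2:4]) for e in itertools.product(range(mod), repeat=n * n)]
    if invertible_only:
        mats = [M for M in mats if unit_det(M)]
    # vertices are the non-central (i.e. non-scalar) elements
    verts = [M for M in mats
             if not (M[0][1] == 0 and M[1][0] == 0 and M[0][0] == M[1][1])]
    adj = [[] for _ in verts]
    for i, A in enumerate(verts):
        for j in range(i + 1, len(verts)):
            if mul(A, verts[j]) == mul(verts[j], A):
                adj[i].append(j)
                adj[j].append(i)
    diam = 0
    for src in range(len(verts)):  # diameter via BFS from every vertex
        dist = [-1] * len(verts)
        dist[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if -1 in dist:
            return None            # disconnected
        diam = max(diam, max(dist))
    return diam

print("diam Gamma(M(2, Z_4))  =", commuting_graph_diameter(4, False))
print("diam Gamma(GL(2, Z_4)) =", commuting_graph_diameter(4, True))
```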
--- abstract: 'We compute the $b$ quark mass from dynamical lattice QCD with clover quarks. The calculation is done at a fixed lattice spacing with sea quark masses as low as half the strange quark mass. Our final result is $\overline{m_b} (\overline{m_b} )$ = 4.25(2)(11) GeV, where the first error is statistical and the last error is the systematic uncertainty.' --- Liverpool Preprint: LTH 629\ [**An unquenched lattice QCD calculation of the mass of the bottom quark** ]{}\ [*UKQCD Collaboration*]{}\ [**C. McNeile, C. Michael, and Gavin Thompson\ Theoretical Physics Division, Dept. of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, UK** ]{}\ Introduction ============ The mass of the bottom quark is a fundamental parameter of the standard model [@Lubicz:2000ch]. To extract the $b$ mass from experiment, QCD corrections must be computed reliably. The best way to do this is use lattice QCD. The different methods of computing the mass of the bottom quark have recently been reviewed by El-Khadra and Luke [@El-Khadra:2002wp]. The particle data table quotes the mass of the b quark in the $\overline{MS}$ scheme at the b mass to lie between 4.0 and 4.5 GeV [@Hagiwara:2002fs]. It is particularly important to reduce the error on the $b$ quark mass, because it is the cause of the largest uncertainty on the determination of $V_{ub}$ from the total inclusive $B$ meson decay $b \rightarrow ul\nu$. El-Khadra and Luke [@El-Khadra:2002wp] note that a 100 MeV error on $m_b$ corresponds to a 6 % error on the $V_{ub}$ CKM matrix element (currently only known to 19 % accuracy [@Hagiwara:2002fs]). In this paper we use unquenched lattice QCD with static-light mesons to extract $m_b$. As we discuss in section \[eq:system\], the error due to the use of static (leading order HQET) approximation is only of order 30 MeV [@Gimenez:2000cj], hence the static limit has an important role for the phenomenology of determining $V_{ub}$. Details of lattice calculations =============================== We used non-perturbatively improved clover fermions in both the sea and valence quarks. The Wilson gauge action is used for the gluons. The full details of the actions and details of the unquenched calculation are described in  [@Allton:1998gi; @Allton:2001sk; @Allton:2004qq]. We use static quarks for the heavy mesons. The lattice binding energy is extracted from a two point correlator using variational smearing techniques. The local two point function is $$\begin{aligned} C(t) & = & \sum_{x} \langle 0 \mid \Phi_B(x,t) \Phi_B^\dagger(x,0) \mid 0 \rangle \\ & = & Z^2 \exp( - a{ \cal E } t)\end{aligned}$$ where $\Phi_B$ is the interpolating operator for static-light mesons. We have already published [@Green:2003zz] an extensive analysis of the spectrum of static-light mesons. Our previous paper [@Green:2003zz] also describes the all-to-all propagators used to improve the statistical accuracy and the fuzzing methods used. In table \[tab:bareRESULTS\] we present our results for the lattice binding energy. All the data sets used $\beta$ = 5.2. Data sets $DF1$ and $DF2$ used a clover coefficient of 1.76, while all the others used the non-perturbative value of 2.0171. The results for the data sets: DF1, DF2 have already been published [@Green:2003zz]. The ensemble size for data sets DF4 and DF5 have been trebled over the results previously published [@Green:2003zz]. The data from ensembles DF5 and DF6 are new. Name No. 
$r_0 m_{PS}$ $\kappa_{sea}$ $\kappa_{val}$ Volume ${a \cal E}$ $\Lambda_{static}$ GeV ------ ----- -------------- ---------------- ---------------- -------------- --------------------- ------------------------ DF1 20 1.92(4) 0.1395 0.1395 $12^3 \; 24$ 0.87(1) 0.59(5) DF2 78 1.94(3) 0.1395 0.1395 $16^3 \; 24$ 0.842(5) 0.55(2) DF3 60 1.93(3) 0.1350 0.1350 $16^3 \; 32$ $0.772^{+7}_{-8}$ 0.69(7) DF4 60 1.48(3) 0.1355 0.1355 $16^3 \; 32$ $0.739^{+9}_{-8}$ 0.66(5) DF5 60 1.82(3) 0.1355 0.1350 $16^3 \; 32$ $0.748^{+9}_{-8}$ 0.68(5) DF6 55 1.06(3) 0.1358 0.1358 $16^3 \; 32$ $0.707^{+14}_{-12}$ 0.64(7) : Lattice binding energy (${a \cal E}$) and physical binding energy ($\Lambda_{static}$) for each data set. []{data-label="tab:bareRESULTS"} Extracting the quark mass {#se:BasicIdea} ========================= We evaluate the mass of a pseudoscalar heavy-light meson from lattice QCD with static heavy quark and compare with the experimental mass value. This gives information about the $b$-quark mass. The strange quark mass is accessible in lattice evaluations, so to minimise extrapolation, we use the $B_s$ meson for this comparison. We still need to extrapolate the sea quark mass to the experimental value and we discuss this later. In this section we will describe the central values for our calculation. We discuss systematic uncertainties in section \[eq:system\]. The quantity ${\cal E}$, from the lattice calculation, contains an unphysical $\frac{1}{a}$ divergence ($\delta m$) that must be subtracted off to obtain the physical binding energy ($\Lambda_{static}$). $$\Lambda_{static} = {\cal E } - \delta m$$ The pole quark mass is determined from $$m_b^{pole} = M_{B_s} - \Lambda_{static}$$ The physical value [@Hagiwara:2002fs] of the meson mass $M_{B_s}$ (5.369 GeV) is used. In the static theory $\delta m$ has been calculated to two loops by Martinelli and Sachrajda [@Martinelli:1998vt]. $$\begin{aligned} a \delta m & = & 2.1173 \alpha_s( \overline{m_b}) + \{ ( 3.707 - 0.225 n_f ) \log(\overline{m_b}a) \nonumber \\ & - & 1.306 - n_f ( 0.104 + 0.1 c_{SW} - 0.403 c_{SW}^2) \} \alpha_s( \overline{m_b})^2 \label{eq:deltaMLOw}\end{aligned}$$ where $n_f$ is the number of sea quarks and $c_{SW}$ is the coefficient of the clover term. We discuss estimates of the next order to $a \delta m$ in section \[eq:system\]. The pole mass (see Kronfeld for a review [@Kronfeld:1998di]) is converted to $\overline{MS}$ using continuum perturbation theory [@Gray:1990yh]. $$m_b^{\overline{MS}}(\mu) = Z_{pm}(\mu) m_b^{pole} + O(1/m_b) \label{eq:masterEq}$$ where $$Z_{pm}(\mu=m_b) = 1 -\frac{4}{3} \frac{\alpha_s( \overline{m_b}) }{ \pi} - (11.66 - 1.04 n_f) ( \frac{\alpha_s( \overline{m_b}) }{ \pi} )^2 \label{eq:ZPMlow}$$ The perturbative correction between the pole mass and the $\overline{MS}$ mass is known to $O(\alpha^3)$ [@Melnikov:2000qh; @Chetyrkin:1999qi]. The perturbative series connecting the pole mass with the $\overline{MS}$ mass is badly behaved due to renormalons (see [@Beneke:1998ui] for a review). The lattice matching is only done to $O(\alpha^2)$, hence we convert the pole mass to $\overline{MS}$ at the same order, using a consistent coupling, so the differences in the series are physical. We use the values of the coupling using the values of $\Lambda_{QCD}$ from the joint UKQCD and QCDSF paper [@Booth:2001qp]. The four loop expression for $\alpha_s$ is used [@Chetyrkin:2000yt] to determine the coupling from $\Lambda_{QCD}$. We consistently use $n_f=2$ in all the perturbative expressions. 
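As a rough illustration of how the quantities in this section combine, the chain ${\cal E}\rightarrow\Lambda_{static}\rightarrow m_b^{pole}\rightarrow\overline{m_b}(\overline{m_b})$ can be coded up directly from equations \[eq:deltaMLOw\], \[eq:masterEq\] and \[eq:ZPMlow\]. In the sketch below the inverse lattice spacing and $\alpha_s(\overline{m_b})$ are illustrative inputs of our own (in the analysis they are fixed by $r_0$ and by the measured $\Lambda_{QCD}$, respectively), so the output only reproduces the quoted central values approximately.

```python
import math

# a*E from Table 1 (DF5) and the physical B_s mass; the inverse lattice spacing
# and alpha_s(m_b) below are ASSUMED illustrative values, not the paper's.
a_E     = 0.748                # lattice binding energy a*E, data set DF5
M_Bs    = 5.369                # GeV
a_inv   = 1.9                  # GeV  (ASSUMED, of the order set by r_0 = 0.525 fm)
alpha_s = 0.145                # ASSUMED n_f = 2 coupling at the b mass
n_f, c_sw = 2, 2.0171

def a_delta_m(mb_bar):
    """Two-loop residual mass term, equation [eq:deltaMLOw]."""
    log_mba = math.log(mb_bar / a_inv)
    two_loop = ((3.707 - 0.225 * n_f) * log_mba - 1.306
                - n_f * (0.104 + 0.1 * c_sw - 0.403 * c_sw ** 2))
    return 2.1173 * alpha_s + two_loop * alpha_s ** 2

def Z_pm():
    """Pole -> MSbar matching at mu = m_b, equation [eq:ZPMlow]."""
    x = alpha_s / math.pi
    return 1.0 - (4.0 / 3.0) * x - (11.66 - 1.04 * n_f) * x ** 2

mb_bar = 4.2                                   # GeV, starting guess
for _ in range(10):                            # iterate the mild log(m_b a) dependence
    Lambda_static = (a_E - a_delta_m(mb_bar)) * a_inv
    mb_bar = Z_pm() * (M_Bs - Lambda_static)

print(f"Lambda_static ~ {Lambda_static:.2f} GeV,  m_b(m_b) ~ {mb_bar:.2f} GeV")
# with these inputs: Lambda_static ~ 0.68 GeV (cf. Table 1) and m_b(m_b) ~ 4.3 GeV
```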
For $\kappa_{sea} = 0.1355$ (0.1350) we use $\Lambda_{QCD}$ = $0.178$ ( $0.173$) MeV [@Booth:2001qp]. We use the same value of $\Lambda_{QCD}$ for the two data sets $DF1$ and $DF2$, where $\Lambda_{QCD}$ has not been computed. In [@Green:2003zz] we estimated the mass of the strange quark using the pseudoscalar made out of strange quarks [@McNeile:2000hf]. This provided $r_0 m_{PS} \equiv 1.84$. This value is close to $r_0 m_{PS} = 1.82^{+3}_{-1}$ for $DF5$ data set [@Allton:2001sk] hence we use the binding energy from that data set as the value at strange. This is a partially quenched analysis. The data sets $DF1$ and $DF2$ also have a sea quark mass close to the strange quark mass [@McNeile:2000hf]. To determine the lattice spacing we use the measured value for $r_0/a$ from the potential with the ‘physical’ value of $r_0$ as 0.525(25) fm [@Dougall:2003hv]. We discuss in more detail the systematic error from the choice of $r_0$ in section \[eq:system\]. Hence our best estimate of $m_b(m_b)$ = $4.25(2)$ GeV from the $DF5$ data set, where the errors are statistical only. Computing the systematic uncertainties {#eq:system} ====================================== Gimenez et al. [@Gimenez:2000cj] discuss the systematic error from the neglect of the $1/m_b$ terms in the static limit. Heavy quark effective field theory parametrises the heavy mass corrections to the mass of a heavy-light meson $M_B$. $$M_B = m_b + \Lambda_{static} - \frac{\lambda_1} {2 m_b} - \frac{ 3 \lambda_2} {2 m_b}$$ where $\Lambda_{static}$ is the static binding energy, $\lambda_1$ is the matrix element due to the insertion of the kinetic energy and $\lambda_2$ is the matrix element due to the insertion of the chromomagnetic operator. The value of $\lambda_2 \sim 0.12 \; {\mathrm GeV}^2$ can be obtained from the experimental mass splitting between the $B^{\star}$ and $B$ mesons. The value of $\lambda_1$ is much harder to estimate. Gimenez et al. [@Gimenez:2000cj] use a range of $\lambda_1$ from -0.5 to 0.0 in GeV$^2$. This includes the determination from quenched lattice QCD of $\lambda_1 = -(0.45 \pm 0.12) {\mathrm GeV}^2$ by Kronfeld and Simone [@Kronfeld:2000gk]. JLQCD have recently tried to compute $\lambda_1$ using NRQCD [@Aoki:2003jf]. As suggested by Gimenez et al. [@Gimenez:2000cj], using a symmetric error of 30 MeV to parameterise the neglected $1/m_b$ terms seems reasonable to us. The specification of the strange quark mass, described in section \[se:BasicIdea\], essentially relies on the physical K mass. Since the $\phi$ meson has a very narrow width, it may also be used to specify the strange quark mass. In their study using essentially the same lattice parameters, JLQCD [@Aoki:2002uc] see approximately a 10% difference between using the $\phi$ and the $K$ to set the strange quark mass. Motivated by JLQCD’s result, we use a symmetric error of 5% as an estimate of the additional uncertainty in our estimate of the strange quark mass. This induces an error of 60 MeV in $m_b(m_b)$ for the $DF5$ data set. The data sets $DF1$, $DF2$ were generated using a different value of $c_{SW}$ to that used to generate data set $DF5$, hence they can not be used to estimate the size of the lattice spacing effects. The comparison of the results between data set $DF1$ and $DF2$ can in principle be used to estimate finite size effects. As the physical size of a size of the lattice changes from 1.83 fm (DF1) to 2.44 fm (DF2), $m_b(m_b)$ changes from 4.33(2) to 4.369(7) GeV. 
Hence, a simple estimate of the finite size effects in date set $DF5$ (size of box 1.77fm) is -39 MeV. In quenched QCD Duncan et al. [@Duncan:1994in] found no finite size effects in the binding energy for lattice lengths: 1.3, 1.8 and 2.2 fm. Although, finite size effects in quenched and unquenched QCD can be very different, we think it more likely that the differences in two data sets is due to a statistical fluctuation on the smaller lattice. This is supported by the fact that UKQCD saw no finite size effects in the light hadron spectrum between DF1 and DF2 [@Allton:1998gi]. The choice of coupling is a systematic error. Gimenez et al. [@Gimenez:2000cj] use $\Lambda_{QCD}^{n_f=2}$ = 300 MeV as the central value. We use the result for $\Lambda_{QCD}$ determined from the $DF4$ and $DF5$ data sets [@Booth:2001qp]. We don’t feel that it is appropriate to use the values of $\Lambda_{QCD}$ from experiment (as done by Gimenez et al. [@Gimenez:2000cj]) even for deriving a systematic error. The agreement between $\Lambda_{QCD}$ from lattice QCD calculations and experiment is not good for calculations that use clover fermions for the sea quarks [@Aoki:2002uc; @Kaneko:2001ux; @Booth:2001qp]. We assume that the discrepancy will be reduced as calculations are done with lighter sea quark masses and finer lattice spacings. To estimate the effect of the chiral extrapolation of the sea quark masses, we extrapolated $\Lambda_{static}$ from data sets $DF4$ and $DF5$ linearly in $(r_0 m_{PS})^2$ to $r_0 m_{PS}$ = 1.93 (the same as for data set $DF3$). The extrapolated result for $\Lambda_{static}$ at $r_0 m_{PS}$ = 1.93 at $\kappa_{sea}$ = 0.1355 was consistent with $\Lambda_{static}$ on data set DF3 ($\kappa_{sea}$ = 0.1350). We see no evidence for the dependence of $\Lambda_{static}$ on the sea quark mass. Gimenez et al. [@Gimenez:2000cj] see a slight increase in the lattice binding energy with decreasing quark mass. There are potentially non-analytic $m_{PS}^3$ terms in the mass dependence of the binding energy [@Goity:1992tp]. To estimate the systematic errors on the perturbative matching we did a number of things. Following Gimenez et al. [@Gimenez:2000cj] we compared taking the product of the two perturbative factors in equation \[eq:masterEq\] against expanding the perturbative expressions and only keeping $O(\alpha_s^2)$ terms. This increases the mass for data set $DF5$ by 25 MeV. In quenched QCD the next order correction to equation \[eq:deltaMLOw\] has been computed numerically by two groups using different techniques [@DiRenzo:2000nd; @Trottier:2001vj]. As the two groups obtained essentially the same result we will use the result of Di Renzo and Scorzato [@DiRenzo:2000nd]. In quenched QCD the next order correction to equation \[eq:deltaMLOw\] is [@Gimenez:2000cj] $$a \delta m^{(3)} = ( \overline{X_2} + 6.48945 ( -3.57877 + \log(\overline{m_b}a) ) ( 3.29596 + \log(\overline{m_b}a) ) ) \alpha_s(\overline{m_b})^3 \label{eq:delMNext}$$ where $\overline{X_2}$ is the number from the numerical calculation. Di Renzo and Scorzato [@DiRenzo:2000nd] obtain $\overline{X_2}$ = 86.2(0.6)(1.0) for quenched QCD. Di Renzo and Scorzato have computed $X_2$ for $n_f=2$ Wilson fermions [@DiRenzo:2004xn]. The new result also involves a lattice calculation of the $\overline{MS}$ coupling for Wilson fermions, so it is not obvious how to incorporate the new result into this analysis. The next order to the connection between the pole mass and $\overline{MS}$ mass is known in the continuum (equation \[eq:ZPMlow\]). 
$$Z_{pm}^{(3)} = - (157.116 - 23.8779 n_f + 0.6527 n_f^2) (\frac{\alpha_s( \overline{m_b})}{\pi})^3 \label{eq:ZPMnextOrder}$$ Because equation \[eq:delMNext\] is known for quenched QCD, we do not use it for our central result. It does seem appropriate to use equation \[eq:delMNext\] and equation \[eq:ZPMnextOrder\] to estimate the systematic errors due to the neglect of higher order terms. We set $n_f$ = 0 in equation \[eq:ZPMnextOrder\]. Adding in the next order term reduces the mass of the bottom quark by 12 MeV for data set $DF5$. Because of limitations in computer time, the unquenched calculations are at fixed lattice spacing (so the continuum limit has not been taken) and fairly heavy sea quark masses are used. This means that different lattice quantities produce slightly different values of the lattice spacing. In [@Dougall:2003hv] the values of $r_0$ from various calculations that used clover fermions are collected together. The results were in the range $r_0$ = 0.5 to 0.55 fm. This motivates our choice of $r_0$ = 0.525(25) fm. Using mass splittings in Upsilon on improved staggered configurations with measurements of the potential MILC [@Aubin:2004wf] quote $r_0$ = 0.467 fm. This is $2.3 \sigma$ from our central value for $r_0$. In the graph [@Davies:2003ik] of spread of variation of lattice spacings from different physical quantities, the most dramatic failures of the quenched approximation occur for $P-S$ and $2S -1S$ mass splittings in Upsilon and the pion decay constant. We speculate that these quantities are more sensitive to the heavy quark potential at the origin that depends on $n_f$ from the running of the coupling (see [@Bernard:2000gd] for a discussion of this). The relatively strong dependence of the Upsilon mass splittings and pion decay constant on the sea quark mass and lattice spacing doesn’t make them a good choice to set the scale for current unquenched calculations with clover fermions. Hence we feel that using the value from MILC for $r_0$ (as advocated in [@Wingate:2003gm]) will artificially inflate the error bars, so we stick to our original estimate of $r_0$ = 0.525(25) fm. The perturbative analysis, used in the this section, assumes that the sea quark mass is zero. However, the sea quark masses used in current calculations with Wilson-like quarks are not negligible. At $\kappa_{sea}$ = 0.1355 and 0.1350, the vector definitions of the sea quark mass (in units of the lattice spacing) are 0.026 and 0.044 respectively. The light quark mass dependence has been computed by Bali and Boyle [@Bali:2002wf]. For light quark masses below 0.1 (in lattice units) Bali and Boyle [@Bali:2002wf] provide a quadratic parameterisation of the light quark mass dependence of $\delta m$. The expression in equation \[eq:deltaMLOw\] gets modified to $$\begin{aligned} a \delta m & = & 2.1171 \alpha_s( \overline{m_b}) + ( ( 3.707 - 0.225 n_f ) \log(\overline{m_b}a) \nonumber \\ & - & 1.306 - n_f ( -0.199 + 0.516 m_q - 0.421 m_q^2)) \alpha_s( \overline{m_b})^2 \label{eq:BaliBoyle}\end{aligned}$$ where we have specialised to $c_{SW} = 1$. Equation \[eq:deltaMLOw\] is a combination of the two loop static self energy and a conversion from the bare coupling to the massless $\overline{MS}$ scheme [@Martinelli:1998vt]. As stressed by Martinelli and Sachrajda [@Martinelli:1998vt], it is important to use a consistent coupling in equation \[eq:masterEq\], so that the poorly behaved perturbative expansion of $Z_{pm}$ cancels with that of $\delta m$. 
This makes it easier to use the massless quark $\overline{MS}$ scheme. The determination of $\Lambda_{QCD}$ includes the effects of the masses of the light quarks [@Booth:2001qp]. The use of \[eq:BaliBoyle\] changes the central value of $m_b(m_b)$ by 1 MeV for the data set DF5. For our final result we use the central value from data set $DF5$. The systematic uncertainties have been discussed in this section. Hence our final result is $$\overline{m_b} (\overline{m_b} ) = (4.25 \pm 0.02 \pm 0.03 \pm 0.03 \pm 0.08 \pm 0.06 ) {\mathrm GeV}$$ where the errors are (from left to right): statistical, perturbative, and neglect of $1/m_b$ terms, ambiguities in the choice of lattice spacing, and error in the choice of the mass of the strange quark. Gimenez et al. [@Gimenez:2000cj] obtain $$\overline{m_b} (\overline{m_b} ) = (4.26 \pm 0.03 \pm 0.05 \pm 0.07 ) {\mathrm GeV}$$ from a simulation at $\beta = 5.6$ and volume = $24^3 \;40$ with two dynamical quark masses, from the $T\chi L $ collaboration. The first error is due to statistics. The second error includes the neglect of the $1/m_b$ terms and the ambiguity in the determination of the lattice spacing. The third error is due to the neglect of higher order corrections in the perturbative matching. Gimenez et al. [@Gimenez:2000cj] used a preliminary result for $\overline{X_2}$, that was relatively imprecise [@Burgio:1999ba] hence their estimate of the higher order effects is looser than ours. Also we used $\Lambda_{QCD}$ determined consistently from this data set, while Gimenez et al. [@Gimenez:2000cj] used continuum based estimates of the coupling. This analysis has recently been updated by Di Renzo and Scorzato [@DiRenzo:2004xn] with the unquenched value of $X_2$. Gimenez et al. [@Gimenez:2000cj] only used one quantity to estimate the lattice spacing, so their error from the ambiguity in the choice of lattice spacing is underestimated in the final result. However, the two effects compensate and the final error is probably representative. It is pleasing that our calculation with a different set of parameters is essentially consistent with that of Gimenez et al. [@Gimenez:2000cj]. Conclusions {#eq:conc} =========== In table \[tab:mbSummary\] we collect some recent results for the mass of the bottom quark from lattice QCD. Our result is consistent with the previous unquenched calculations. Unfortunately, we have not managed to reduce the size of the error bars. The largest error in the recent values for the mass of the bottom quark is due to the spread in different lattice spacings. Heitger and Sommer [@Heitger:2003nj] noted that a change in $r_0$ by 10% changed the value of $m_b(m_b)$ by 150 MeV. -------------------------------------------------------------------------------- Group comment $m_b(m_b)$ GeV ----------------------------------------- ------------ ------------------------- This work unquenched $4.25(2)(11)$ Collins [@Collins:2000sb] unquenched $4.34(7)_{-7}^{+0}$ Gimenez et al. [@Gimenez:2000cj] unquenched $4.26(9)$ Di Renzo and Scorzato [@Gimenez:2000cj] unquenched $4.21 \pm 0.03 \pm 0.05 \pm 0.04$ Bali and Pineda [@Bali:2003jq] quenched $4.19(6)(15)$ Heitger and Sommer [@Heitger:2003nj] quenched $4.12(8)$ -------------------------------------------------------------------------------- : Lattice QCD results for $m_b(m_b)$ in the $\overline{MS}$ scheme. 
The last error on the Bali and Pineda result is an estimate of unquenching.[]{data-label="tab:mbSummary"} The prospects for an improved estimate of the mass of the bottom quark from lattice QCD are quite good. The unquenched calculations with improved staggered quarks produce consistent lattice spacings from many different quantities [@Davies:2003ik]. The numerical calculation of the third order contribution to $\delta m$ has been done for unquenched Wilson fermions [@DiRenzo:2004xn]. Applying the technology of Heitger and Sommer [@Heitger:2003nj] to unquenched calculations would allow a non-perturbative estimate of the mass of the bottom quark that is free from problems with delicate cancellations of poorly converging perturbative expressions. The use of automated perturbative calculations may allow the computation of the bottom quark mass from NRQCD / FNAL type calculations with two loop accuracy [@Trottier:2003bw; @Nobes:2003nc]. We are investigating the use of the static formulation introduced by the ALPHA collaboration [@DellaMorte:2003mn], but the required perturbative (or non-perturbative) factors are not available yet. Some combination of the techniques and projects mentioned in the last paragraph should be able to produce a number of independent calculations, with different systematic errors, of the mass of the bottom quark from unquenched lattice QCD. Acknowledgements ================ The lattice data was generated on the Cray T3E system at EPCC supported by, EPSRC grant GR/K41663, PPARC grants GR/L22744 and PPA/G/S/1998/00777. We are grateful to the ULgrid project of the University of Liverpool for computer time. [10]{} V. Lubicz, Nucl. Phys. Proc. Suppl. [**94**]{}, 116 (2001), hep-lat/0012003, A. X. El-Khadra and M. Luke, Ann. Rev. Nucl. Part. Sci. [**52**]{}, 201 (2002), hep-ph/0208114, Particle Data Group, K. Hagiwara [*et al.*]{}, Phys. Rev. [**D66**]{}, 010001 (2002), V. Gimenez, L. Giusti, G. Martinelli, and F. Rapuano, JHEP [**03**]{}, 018 (2000), hep-lat/0002007, UKQCD, C. R. Allton [*et al.*]{}, Phys. Rev. [**D60**]{}, 034507 (1999), hep-lat/9808016, UKQCD, C. R. Allton [*et al.*]{}, Phys. Rev. [**D65**]{}, 054502 (2002), hep-lat/0107021, UKQCD, C. R. Allton [*et al.*]{}, (2004), hep-lat/0403007, UKQCD, A. M. Green, J. Koponen, C. McNeile, C. Michael, and G. Thompson, (2003), hep-lat/0312007, G. Martinelli and C. T. Sachrajda, Nucl. Phys. [**B559**]{}, 429 (1999), hep-lat/9812001, A. S. Kronfeld, Phys. Rev. [**D58**]{}, 051501 (1998), hep-ph/9805215, N. Gray, D. J. Broadhurst, W. Grafe, and K. Schilcher, Z. Phys. [**C48**]{}, 673 (1990), K. Melnikov and T. v. Ritbergen, Phys. Lett. [**B482**]{}, 99 (2000), hep-ph/9912391, K. G. Chetyrkin and M. Steinhauser, Nucl. Phys. [**B573**]{}, 617 (2000), hep-ph/9911434, M. Beneke, Phys. Rept. [**317**]{}, 1 (1999), hep-ph/9807443, QCDSF-UKQCD, S. Booth [*et al.*]{}, Phys. Lett. [**B519**]{}, 229 (2001), hep-lat/0103023, K. G. Chetyrkin, J. H. Kuhn, and M. Steinhauser, Comput. Phys. Commun. [**133**]{}, 43 (2000), hep-ph/0004189, UKQCD, C. McNeile and C. Michael, Phys. Lett. [**B491**]{}, 123 (2000), hep-lat/0006020, UKQCD, A. Dougall, R. D. Kenway, C. M. Maynard, and C. McNeile, Phys. Lett. [**B569**]{}, 41 (2003), hep-lat/0307001, A. S. Kronfeld and J. N. Simone, Phys. Lett. [**B490**]{}, 228 (2000), hep-ph/0006345, JLQCD, S. Aoki [*et al.*]{}, (2003), hep-lat/0305024, JLQCD, S. Aoki [*et al.*]{}, Phys. Rev. [**D68**]{}, 054502 (2003), hep-lat/0212039, A. Duncan, E. Eichten, J. M. Flynn, B. R. Hill, and H. Thacker, Nucl. Phys. 
Proc. Suppl. [**34**]{}, 444 (1994), hep-lat/9312046, T. Kaneko, Nucl. Phys. Proc. Suppl. [**106**]{}, 133 (2002), hep-lat/0111005, J. L. Goity, Phys. Rev. [**D46**]{}, 3929 (1992), hep-ph/9206230, F. Di Renzo and L. Scorzato, JHEP [**02**]{}, 020 (2001), hep-lat/0012011, H. D. Trottier, N. H. Shakespeare, G. P. Lepage, and P. B. Mackenzie, Phys. Rev. [**D65**]{}, 094502 (2002), hep-lat/0111028, F. Di Renzo and L. Scorzato, (2004), hep-lat/0408015, C. Aubin [*et al.*]{}, (2004), hep-lat/0402030, HPQCD, C. T. H. Davies [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 022001 (2004), hep-lat/0304004, C. W. Bernard [*et al.*]{}, Phys. Rev. [**D62**]{}, 034503 (2000), hep-lat/0002028, M. Wingate, C. T. H. Davies, A. Gray, G. P. Lepage, and J. Shigemitsu, Phys. Rev. Lett. [**92**]{}, 162001 (2004), hep-ph/0311130, G. S. Bali and P. Boyle, (2002), hep-lat/0210033, G. Burgio, F. D. Renzo, M. Pepe, and L. Scorzato, Nucl. Phys. Proc. Suppl. [**83**]{}, 935 (2000), hep-lat/9909169, ALPHA, J. Heitger and R. Sommer, JHEP [**02**]{}, 022 (2004), hep-lat/0310035, S. Collins, The mass of the b quark from lattice nrqcd, 2000, hep-lat/0009040. G. S. Bali and A. Pineda, Phys. Rev. [**D69**]{}, 094001 (2004), hep-ph/0310130, H. D. Trottier, (2003), hep-lat/0310044, M. A. Nobes and H. D. Trottier, (2003), hep-lat/0309086, ALPHA, M. Della Morte [*et al.*]{}, Phys. Lett. [**B581**]{}, 93 (2004), hep-lat/0307021,
--- author: - 'H. Sana[^1]' - 'E. Antokhina' - 'P. Royer' - 'J. Manfroid[^2]' - 'E. Gosset[^3]' - 'G. Rauw$^{\dagger}$' - 'J.-M. Vreux' bibliography: - '/datas6/XMM\_CAT\_PAPER/ngc6231\_Xcat.bib' date: 'Received September 15, 2000; accepted September 15, 2000' subtitle: 'II. Optical light curve and X-ray observations [^4]' title: 'The massive binary CPD$-$41\degr7742 : ' ---

  ------------------------- ----------- ------- ---------- -------
  $P_\mathrm{orb}$ (days)   2.44070     $\pm$   0.00050
  $e$                       0.027       $\pm$   0.006      0.008
  $\omega$ ()               149         $\pm$   10         17
  $T_0$ (HJD $-$2450000)    2400.284    $\pm$   0.067      0.113
  $\gamma_1$ ()             $-15.3$     $\pm$   0.5        1.2
  $K_1$ ()                  167.1       $\pm$   0.9        1.4
  $a_1\sin i$ ()            8.05        $\pm$   0.05       0.07
  $\gamma_2$ ()             $-26.3$     $\pm$   0.7        2.4
  $K_2$ ()                  301.3       $\pm$   1.8        3.0
  $a_2\sin i$ ()            14.52       $\pm$   0.09       0.14
  $q\ (=M_1/M_2)$           1.803       $\pm$   0.015      0.023
  $M_1\sin^3 i$ ()          16.69       $\pm$   0.25       0.39
  $M_2\sin^3 i$ ()          9.25        $\pm$   0.12       0.18
  ------------------------- ----------- ------- ---------- -------

  : Orbital and physical parameters of  as derived from the lines orbital solution presented in . The usual notations have been adopted. $T_0$ is the time of periastron passage and is adopted as phase $\psi=0.0$ . The column to the right provides the revised estimate of the errors, obtained with Monte-Carlo simulation techniques (see Sect. \[sect: discuss\_photo\]). []{data-label="tab: orbit"}

Introduction \[sect: intro\] ============================ In the quest for accurate measurements of fundamental stellar parameters, eclipsing spectroscopic binaries are unique physical laboratories all over the Hertzsprung-Russell diagram. Combined spectroscopic and photometric studies provide a direct determination of the masses and sizes of their stellar components. This is of particular interest in the upper left part of the diagram. Although few in number, massive early-type stars have a large influence on their surroundings through their mechanical and radiative energy input. A detailed knowledge of both their evolution and wind properties is thus crucial in many different contexts. For example, these objects seem to play a key role in the formation of the less massive stars in starburst regions or within the core of OB associations. However, our understanding of massive stars is clearly still fragmentary. Only a few tens of objects have their orbital and physical parameters determined with reasonable accuracy [@Gie03]. The problem of their exact formation mechanism is largely unsolved [@Zin03] and, from the theoretical point of view, their physical parameters (effective temperatures, radii, masses, ...) significantly differ from one study to another [@HM84; @HP89; @VGS96]. The observational masses deduced from atmosphere models are systematically lower than the masses predicted by evolutionary models (the so-called [*mass discrepancy*]{} problem, @HKV92 [@He03]). Fortunately, recent works [@CHE02; @HPN02; @BG02; @MSH02] using line-blanketed atmosphere models and accounting both for the spherical stellar atmosphere and for the stellar winds yielded new effective temperature scales for early-type stars and, simultaneously, led to a better agreement between the spectroscopic and evolutionary masses. In this context, the accurate determination of the fundamental parameters of massive stars, over the whole spectral type and luminosity class range covered by these objects, thus provides the basic material to strengthen our understanding of this particularly important stellar population.
The early-type binary systems are also crucial for the mapping of X-ray emitting plasmas. So far, the most reliable way to constrain the geometry of the hot plasma around stars of various spectral types is through the study of the temporal changes of the X-ray fluxes of eclipsing binaries or rotating stars; the latter only in cases of non-uniform surface distributions of X-ray plasma. A good time coverage of the orbital or rotation cycle is of course critical to provide as complete a description as possible. While late-type stars often experience flaring activity which may considerably complicate the task of mapping their coronae, the situation should, in principle, be much easier in early-type stars. In fact, single early-type objects usually do not display strong X-ray variability [@BS94]. In early-type binaries, a significant fraction of the X-ray emission may however arise in a wind interaction zone. The orbital modulation of their X-ray flux is thus quite common, either because of the changing opacity along the line of sight towards the shock region, or as a consequence of the changing properties of the wind interaction zone in an eccentric binary. In this context, we have undertaken a detailed study of , a double-lined spectroscopic binary located in the core of the young open cluster . In @SHRG03 [ Paper I hereafter], we presented a first accurate orbital solution for the two components of the system. We derived a short period $P=2.44070$ days and a slight but definite eccentricity $e=0.027$. Based on spectroscopic criteria, we proposed a spectral type and a luminosity class of O9 III + B1 III for the two components of the system. However, we outlined the strong ambiguity concerning the quoted luminosity classification. Indeed, the luminosities and radii inferred from the membership in   rather indicate a class V or IV for both components. The analysis of the light curve of the system will allow us to elucidate this question. We refer to for a review of the previous works on the object. In , we did not mention the work of @BL95 in which the authors present a first light curve of , showing a clearly-marked eclipse. We also refer to for details on the spectroscopic analysis of the system. Table \[tab: orbit\] summarizes the computed orbital solution and the constraints obtained on its physical parameters. This second paper will complete our current view of the system by providing the analysis of the photometric light curve and of   X-ray observations. It is organised as follows. After a description of the optical and X-ray data sets and data handling (Sect. \[sect: obs\]), we present the analysis of the system light curve (Sect. \[sect: lc\]). In Sect. \[sect: discuss\_photo\], we combine the newly obtained information with results from and we derive the absolute parameters of the system. The X-ray properties of  are described in Sect. \[sect: X\]. In Sect. \[sect: Xmodel\], we investigate the wind properties and we propose to interpret the X-ray light curve as the signature of a wind interaction. We also present a simple phenomenological model that reproduces reasonably well the observed modulations. Final considerations and conclusions of this work are summarised in Sect. \[sect: ccl\].
---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- HJD mag HJD mag HJD mag HJD mag HJD mag HJD mag 530.8192 8.103 540.8836 8.103 549.9225 8.417 534.8722 8.540 545.8333 8.533 554.8312 8.777 530.8203 8.098 540.9142 8.088 550.7756 8.129 534.8968 8.533 545.8758 8.525 554.8680 8.678 531.8341 8.130 540.9223 8.098 550.8198 8.132 535.7976 8.520 545.9226 8.546 554.9149 8.594 531.8354 8.123 540.9286 8.101 550.8595 8.129 535.8262 8.526 546.7415 8.521 555.7662 8.569 533.8018 8.123 541.7697 8.106 550.8872 8.125 535.8670 8.534 546.7979 8.516 555.8110 8.574 533.8033 8.127 541.7939 8.105 550.9163 8.139 535.9103 8.530 546.8556 8.522 555.8573 8.643 533.8041 8.126 541.8431 8.108 550.9216 8.148 537.7754 8.690 546.9197 8.527 555.9148 8.726 533.8291 8.138 541.8689 8.091 551.7497 8.105 537.8168 8.607 547.7870 8.541 555.9303 8.756 533.8299 8.147 541.8974 8.112 551.7937 8.115 537.8576 8.568 547.8468 8.542 556.7881 8.535 533.8307 8.146 542.7511 8.127 551.8060 8.124 537.8590 8.571 547.8905 8.517 556.8690 8.558 533.8571 8.161 542.7896 8.128 551.8631 8.124 537.9009 8.552 547.9242 8.510 557.7451 8.517 533.8579 8.160 542.8192 8.141 551.9181 8.100 538.7595 8.614 548.7645 8.692 557.7899 8.525 533.8587 8.151 542.8482 8.118 552.7605 8.113 538.8022 8.674 548.8095 8.614 557.8334 8.525 533.8823 8.198 542.8906 8.126 552.8081 8.117 538.8615 8.755 548.8509 8.565 557.8599 8.509 533.8831 8.188 543.7525 8.342 552.8566 8.105 538.9045 8.772 548.8781 8.544 557.9078 8.506 533.8839 8.194 543.8084 8.356 552.8789 8.093 539.7730 8.560 548.9168 8.538 557.9249 8.511 533.9053 8.234 543.8701 8.292 552.8798 8.093 539.8246 8.552 548.9212 8.538 558.6961 8.545 533.9063 8.236 543.8955 8.242 552.9189 8.090 539.8655 8.560 549.7466 8.710 558.7425 8.543 533.9270 8.273 543.9226 8.199 553.7507 8.118 539.8983 8.569 549.8067 8.858 558.7897 8.521 533.9278 8.263 544.7848 8.159 553.7947 8.124 540.7890 8.514 549.8637 8.902 558.8082 8.543 533.9286 8.267 544.8474 8.255 553.8328 8.116 540.8306 8.516 549.9214 8.824 533.9294 8.266 544.8936 8.374 553.8736 8.114 540.8815 8.523 550.7748 8.534 533.9302 8.274 544.9210 8.442 553.9195 8.111 540.9135 8.531 550.8190 8.531 534.7974 8.123 544.9281 8.457 554.7069 8.470 540.9217 8.547 550.8586 8.537 534.8227 8.129 545.7752 8.112 554.7540 8.485 540.9280 8.542 550.8864 8.524 534.8456 8.130 545.8341 8.119 554.7937 8.430 541.7689 8.523 550.9155 8.546 534.8730 8.142 545.8766 8.123 554.8319 8.343 541.7931 8.534 550.9208 8.561 534.8976 8.132 545.9234 8.128 554.8686 8.253 541.8423 8.532 551.7489 8.528 535.7984 8.118 546.7423 8.100 554.9156 8.168 541.8681 8.523 551.7930 8.535 535.8270 8.118 546.7987 8.097 555.7669 8.148 541.8966 8.538 551.8052 8.528 535.8678 8.124 546.8564 8.095 555.8117 8.168 542.7503 8.546 551.8623 8.535 535.9111 8.110 546.9205 8.109 555.8581 8.228 542.7888 8.542 551.9173 8.519 537.7762 8.275 547.7881 8.108 555.9156 8.310 542.8183 8.569 552.7569 8.542 537.8176 8.195 547.8476 8.109 555.9311 8.342 542.8474 8.535 552.7614 8.540 537.8601 8.157 547.8913 8.086 556.7887 8.135 542.8898 8.546 552.8074 8.528 537.9021 8.145 547.9250 8.092 556.8698 8.138 543.7516 8.769 552.8551 8.530 538.7603 8.186 548.7674 8.255 557.7458 8.106 543.8076 8.765 552.8782 8.521 538.8030 8.258 548.8105 8.184 557.7907 8.095 543.8693 8.701 552.9174 8.522 538.8623 8.328 548.8519 8.140 557.8342 8.087 543.8947 8.657 553.7500 8.557 538.9053 8.342 548.8789 8.115 557.8607 8.082 543.9218 8.623 553.7941 8.556 539.7738 8.137 548.9179 8.115 557.9094 8.079 544.7840 8.560 553.8321 8.540 539.8254 8.132 
548.9200 8.119 557.9257 8.087 544.8465 8.672 553.8729 8.554 539.8663 8.140 548.9223 8.113 558.6969 8.127 544.8928 8.784 553.9188 8.550 539.8991 8.142 549.7474 8.302 558.7433 8.103 544.9202 8.858 554.7062 8.887 540.7898 8.096 549.8075 8.449 558.7910 8.099 544.9272 8.863 554.7533 8.914 540.8314 8.101 549.8645 8.490 558.8091 8.113 545.7744 8.530 554.7928 8.840 ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ![Broad-band \[0.5 - 10.0keV\] image of the  core. This 1+2 image combines the two instruments and the six pointings of the campaign for a cumulated effective exposure time of 351.5ks. The source and background extraction regions are shown.[]{data-label="fig: fov"}](2746fig1.ps){width="8.5cm"} Observations and data reduction \[sect: obs\] ============================================= Photometry \[ssect: obs\_photo\] -------------------------------- Between 1997 March 22 and April 19, we observed the core of the open cluster with the 0.6-m Bochum telescope at La Silla, Chile. The Cassegrain focus of the telescope was equipped with a direct camera and a Thomson 7882 charge-coupled device (CCD) detector (384 $\times$ 576 pixels) subtending a full field of view of 3.2 by 4.8 arcmin. The photometric observations have been performed through two narrow band filters: one called $\lambda$4686 addressing the region of the line usually present in massive stars (centre: 4684 Å, FWHM: 30 Å) and another one labelled $\lambda$6051 addressing a region of the continuum free from strong lines (centre: 6051 Å, FWHM: 28 Å). More information on these filters can be found in @RVM98. The typical exposure times were 60s for both filters. Some 112 (resp. 138) useful frames were obtained with the $\lambda$4686 (resp. $\lambda$6051) filter. Flat field calibrations were obtained daily on the floodlit dome. No twilight flat could be acquired due to the narrowness of the filters. Several biases were cautiously acquired at various times during the different nights. The frames were debiased using a master zero frame and, in the absence of overscan, a level value interpolated between the various bias frames taken during the same night. The optical elements close to the CCD proved to be frequently contaminated by dust. Hence, the pixel-to-pixel (high spatial frequency) part of the flat-field calibration had to be carefully extracted from the calibration frames obtained daily. The large scale component of the dome flat fields varied slightly from day to day. This was found to be due to minor changes in the instrumental setup. Night sky superflats proved to be more stable, but yielded a strong systematic vignetting as shown by @MRR01. Consequently, the illumination correction was entirely obtained from the ‘photometric superflats’ based on stellar measurements [see e.g. @Man95]. All reductions were carried out with the National Optical Astronomy Observatories (NOAO) package. The debiased, flat-fielded frames were analyzed with the software [@Ste87], using aperture radii between 2 and 5.5 arcsec. ‘Absolute’ photometry was derived from the large aperture data, using a multi-night, multi-filter algorithm and a few standard stars [@Man93]. This procedure yielded additional reference stars for each field. These secondary standards together with all non-variable stars were used to fix, through a global minimization procedure, the zero points for the individual frames and for each aperture radius, thus performing some kind of global differential photometry. 
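The global zero-point determination described above can be illustrated schematically. The following is a minimal sketch of the general technique (an alternating least-squares solution of a star/frame magnitude matrix), not the pipeline actually used; all names and numbers in it are invented for the example.

```python
import numpy as np

def solve_zero_points(m, n_iter=50):
    """Alternating least-squares solution of m[i, f] ~ M[i] + z[f], where m holds
    instrumental magnitudes of non-variable stars i on frames f (NaN for missing
    measurements), M are star magnitudes and z are the frame zero points."""
    z = np.zeros(m.shape[1])
    for _ in range(n_iter):
        M = np.nanmean(m - z[None, :], axis=1)   # current star magnitudes
        z = np.nanmean(m - M[:, None], axis=0)   # current frame zero points
        z -= np.nanmean(z)                       # remove the arbitrary global offset
    return M, z

# Toy data: 5 constant stars observed on 8 frames with 0.01 mag noise
rng = np.random.default_rng(1)
M_true = np.array([8.2, 9.1, 9.8, 10.3, 11.0])
z_true = rng.normal(0.0, 0.05, size=8)
m = M_true[:, None] + z_true[None, :] + rng.normal(0.0, 0.01, size=(5, 8))
M_fit, z_fit = solve_zero_points(m)
print(np.max(np.abs(z_fit - (z_true - z_true.mean()))))   # ~0.005 mag
```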
Comparing the photometry performed through the different apertures, we noted that a faint companion visible at 3-4 to the W-SW of  has actually no influence on the differential photometry. The final magnitudes are given in Table \[tab: photo\] and correspond to a 25 radius aperture. The expected error on a star of similar brightness as  corresponds to $\sigma$ = 0.007 mag in differential photometry. X-ray observation \[ssect: obs\_xray\] --------------------------------------  was observed with  [@Jansen01_xmm] during the six pointings of the campaign towards  [@SSG04; @SGR05] performed within the guaranteed time programme of the Optical Monitor consortium. The  cameras [@Turner01_mos] were operated in the full frame mode and using the thick filter to avoid contamination by UV/optical light. No  data were collected for  since the star fell on a gap of the  detector. Due to the brightness of the objects in the field of view (FOV), the Optical Monitor was switched off throughout the campaign. The raw data were processed with the Scientific Analysis System () version 5.4.1. For details on the  observations and on the data processing, we refer to the previous work on  – the central target of the FOV – by @SSG04. For the purpose of scientific analysis, we adopted a circular extraction region with a radius of 13.2arcsec and centered on . This radius corresponds to half the distance to the nearest neighbouring X-ray source. Using the  task [calview]{}, we estimated that, at the position of , the adopted extraction region corresponds to an encircled energy fraction of about 64% and 63% respectively for the 1 and 2 instruments. Unfortunately, due to the crowded nature of the  cluster core in the X-rays (see Fig. \[fig: fov\]), the background could not be evaluated in the immediate vicinity of , but had to be taken from the very few source free regions in the cluster core. We adopted three circular background regions – labelled A, B and C on Fig.\[fig: fov\] – centered on $(\alpha, \delta) = (16^\mathrm{h}54^\mathrm{m}31\fs43, -41\degr45\arcmin42\farcs2)$, $(16^\mathrm{h}54^\mathrm{m}23\fs28, -41\degr46\arcmin14\farcs7)$ and $(16^\mathrm{h}53^\mathrm{m}44\fs12, -41\degr53\arcmin34\farcs6)$, and with respective radii of 20, 20 and 25 arcsec. These regions are somewhat offset from the source region but all three are located on the same CCD detector (CCD \#1) as . ------ ------------ -------- ---------------- --------------- ---------------- --------------- ---------------- ---------------- ---------------- --------------- Obs. 
JD $\psi$ \# $-$2450000 \[0.5-10.0\] \[0.5-1.0\] \[1.0-2.5\] \[2.5-10.0\] \[0.5-10.0\] \[0.5-1.0\] \[1.0-2.5\] \[2.5-10.0\] 1 2158.214 0.819 $16.5 \pm 0.9$ $7.7 \pm 0.6$ $ 8.7 \pm 0.6$ $0.1 \pm 0.1$ $14.3 \pm 0.8$ $ 7.7 \pm 0.6$ $ 6.2 \pm 0.5$ $0.4 \pm 0.2$ 2 2158.931 0.113 $29.7 \pm 1.5$ $9.8 \pm 0.8$ $17.0 \pm 0.9$ $2.9 \pm 0.5$ $29.0 \pm 1.5$ $10.1 \pm 0.9$ $16.6 \pm 1.1$ $2.2 \pm 0.5$ 3 2159.796 0.468 $22.8 \pm 1.0$ $9.0 \pm 0.6$ $12.2 \pm 0.6$ $1.6 \pm 0.3$ $23.6 \pm 1.0$ $ 9.2 \pm 0.6$ $13.1 \pm 0.8$ $1.3 \pm 0.3$ 4 2160.925 0.930 $19.0 \pm 1.0$ $8.4 \pm 0.7$ $ 9.5 \pm 0.7$ $1.1 \pm 0.3$ $19.3 \pm 1.1$ $ 9.3 \pm 0.7$ $ 9.1 \pm 0.7$ $1.0 \pm 0.3$ 5 2161.774 0.278 $19.7 \pm 1.0$ $8.5 \pm 0.6$ $10.0 \pm 0.7$ $1.2 \pm 0.3$ $21.1 \pm 1.0$ $ 8.1 \pm 0.6$ $11.4 \pm 0.7$ $1.6 \pm 0.3$ 6 2162.726 0.668 $18.9 \pm 0.9$ $9.5 \pm 0.6$ $ 8.9 \pm 0.7$ $0.5 \pm 0.2$ $20.3 \pm 1.0$ $ 9.2 \pm 0.6$ $10.4 \pm 0.7$ $0.7 \pm 0.2$ ------ ------------ -------- ---------------- --------------- ---------------- --------------- ---------------- ---------------- ---------------- --------------- ![image](2746fig2.ps){width="17cm"} ![The model of  viewed at different orbital phases $\phi$. The corresponding phases using the ephemeris of Table \[tab: orbit\] are, from top to bottom, $\psi=0.85$, 0.95, 0.10 and 0.36. []{data-label="fig: cpdmodel"}](2746fig3.ps){width="8cm"} Using the average count rates in each pointing, we built raw and background-corrected broad-band light curves in the range \[0.5-10.0keV\][^5]. We also extracted light curves in three different energy bands: a soft (S$_\mathrm{X}$) band \[0.5 - 1.0keV\], a medium (M$_\mathrm{X}$) band \[1.0 - 2.5keV\] and a hard (H$_\mathrm{X}$) band \[2.5 - 10.0keV\]. For comparison, we used the background corrected count rates in each pointing as given in the cluster X-ray source catalogue. These latter values were obtained by means of a psf-model fit to the source using the  task [*emldetect*]{} and a spline background function [see details in @SGR05]. While the catalogue count rates turn out to be about 50% larger than the extracted count rates, both are in excellent agreement when these latter are corrected for the encircled energy fraction. The obtained X-ray light curves show clear variability. To increase our time resolution, we extracted light curves with temporal bins of 5ks, over the same energy ranges as stated above. These latter curves were corrected for the various good time intervals that result from the data processing; they will be discussed in Sect. \[sect: X\]. Finally, adopting the same source and background regions, we extracted X-ray spectra for each observation and for each of the two  instruments. For this purpose, we used the redistribution matrix files ([*rmf*]{}) provided by the  instrument teams and we built the appropriate ancillary response files ([*arf*]{}) with the help of the  software. The spectra were binned in such a way as to have at least 10 counts per energy bin. Using the [blanksky]{} files for the  instruments, we extracted the spectra corresponding to the adopted source and background regions. 
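Coming back to the count-rate comparison above: the correction for the finite extraction region is a simple division by the encircled energy fraction. A minimal sketch, assuming the two cameras referred to as 1 and 2 are the two EPIC-MOS units and using the fractions quoted earlier (the example rate is illustrative only):

```python
# Encircled energy fractions estimated with calview for the 13.2 arcsec aperture
eef = {1: 0.64, 2: 0.63}

def correct_rate(raw_rate, raw_err, camera):
    """Scale an aperture count rate (and its error) to the full-PSF value."""
    return raw_rate / eef[camera], raw_err / eef[camera]

rate, err = correct_rate(14.3e-3, 0.8e-3, camera=1)   # counts/s
print(rate / 14.3e-3)   # ~1.56, consistent with catalogue rates being ~50% larger
```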
The impact of the offset in the background regions on the background spectrum, and on the instrumental emission lines in particular, was found to be negligible.\ \[Input\_Par\] [ll]{} Parameters & Description\ $q=M_1/M_2$ & Mass ratio\ $e$ & Eccentricity\ $\omega$ & Longitude of periastron of primary star\ $F_1, F_2$ & Ratio of surface rotation rate to synchronous\ & rotation rate for both stars\ $i$ & Orbital inclination\ $\mu_1$, $\mu_2$ & Roche lobe filling coefficients, $\mu=R/R^*$, where\ & $R$ and $R^*$ are the polar radii for partial and\ & complete filling of the critical Roche lobe at\ & periastron position ($0 < \mu \le 1$)\ $T_1$, $T_2$ & Average effective temperatures of the components\ $\beta_1$, $\beta_2$ & Gravity darkening coefficients (the temperature of\ & an elementary surface area $T=T_{1,2}\times{({g\over{<g>_{1,2}}})}^{\beta_{1,2}}$,\ & where $g$ and $<g>$ are the local and mean gravity\ & accelerations)\ $x_{1,2}$, $y_{1,2}$ & Limb darkening coefficients (see the text)\ $A_1$, $A_2$ & Bolometric albedos (coefficients of reprocessing\ & of the emission of a companion by ”reflection”)\ $l_3$ & Third light\ $\Delta \phi$ & Phase shift between the times of conjunction $t_0$ and\ & of periastron passage $T_0$ (Table \[tab: orbit\])\ $t_0$ & Time of primary eclipse minimum\ Optical light curve analysis\[sect: lc\] ======================================== Photometric light curves were analysed within the framework of the Roche model for an eccentric orbit, similar to Wilson’s [@Wil79] model. The algorithm is described in detail by @Ant88 [@Ant96], here we only briefly describe its main features. The computer code allows one to calculate a radial velocity curve, the monochromatic light curves and absorption line profiles of stars simultaneously, either for a circular or an eccentric orbit. Axial rotation of the stars may be non synchronized with the orbital revolution. Following @Wil79, we assumed that the shapes of the stars coincide with equipotential surfaces in the Roche model at all orbital phases and both stars retain constant volumes during their orbital revolution. The tidally distorted surfaces of the stars are heated by mutual radiation. The intensity of the radiation coming from an elementary area of the stellar surface and its angular dependence are determined by the temperature of the star, gravitational darkening, limb darkening, and heating by radiation from the companion. The input parameters of the model are summarized in Table \[Input\_Par\]. For light curve solution, we fixed some parameters whose values were defined in previous investigations of the system or can be assumed from global stellar properties. Namely, we used the known spectroscopic value of mass ratio $q=M_1/M_2=1.803$, deduced from the data on He I lines . A light curve solution is only sensitive to the temperature ratio between the stars, thus the temperature of one star should be fixed. Usually it is the more reliably determined temperature of the primary star. The spectral types of the stars O9III (primary) and B1III (secondary) were derived in , but we pointed out that adopting a main sequence luminosity class for both components solves much of the inconsistency between the luminosity class III hypothesis and the typical luminosities and radii of giant stars. 
Our preliminary light curve solution resulted in stellar radii also suggesting the luminosity class V for both stars, thus we fixed the average effective temperature of the primary $T_1=34\,000$ K corresponding to an O9V star [@HM84]. This value is also very close to the one given by the new effective temperature scale of O-type dwarfs by @MSH02. [lllll]{} Parameters & $\lambda 4686$ Å& $\lambda 6051$ Å& Simultaneous & Parameter\ & & & solution & status\ $q=M_1/M_2$ & 1.803 & 1.803 & 1.803 & adopted\ $i$ & $77\fdg35 \pm 0\fdg05$ (08) & $77\fdg37 \pm 0\fdg05$ (08) & $77\fdg35 \pm 0\fdg05$ (08) & adjusted\ $e$ & $0.020 \pm 0.001$ (0.006) & $0.020 \pm 0.001$ (0.006) & $0.020 \pm 0.001$ (0.006) & adjusted\ $\omega$ & $33\degr \pm 8\degr$ (19)& $33\degr \pm 8\degr$ (19)& $33\degr \pm 8\degr$ (19)& adjusted\ $\mu_1$ & $0.782 \pm 0.004$ (0.037) & $0.784 \pm 0.004$ (0.037) & $0.783 \pm 0.004$ (0.037) & adjusted\ $\mu_2$ & $0.748 \pm 0.003$ (0.050) & $0.751 \pm 0.003$ (0.050) & $0.749 \pm 0.003$ (0.050) & adjusted\ $T_1$ (K) & $34\,000$ & $34\,000$ & $34\,000$ & adopted\ $T_2$ (K) & $26\,280 \pm 150$ (420) & $26\,230 \pm 150$ (420) & $26\,260 \pm 150$ (420) & adjusted\ $L_1/(L_1+L_2)$ & $0.7380$ & $0.7308$ & $0.7379\ |\ 0.7314$ & computed\ $L_2/(L_1+L_2)$ & $0.2620$ & $0.2692$ & $0.2621\ |\ 0.2686$ & computed\ $F_1$ & 1.0 & 1.0 & 1.0 & adopted\ $F_2$ & 1.0 & 1.0 & 1.0 & adopted\ $\beta_1$ & 0.25 & 0.25 & 0.25 & adopted\ $\beta_2$ & 0.25 & 0.25 & 0.25 & adopted\ $A_1$ & 1.0 & 1.0 & 1.0 & adopted\ $A_2$ & 1.0 & 1.0 & 1.0 & adopted\ $l_3$ & 0.0 & 0.0 & 0.0 & adopted\ $x_1$ & $-0.213$& $-0.188$ & $-0.213\ |\ -0.188$ & adopted\ $y_1$ & 0.724 & 0.643 & $0.724\ |\ \hspace*{3.3mm}0.643 $ & adopted\ $x_2$ & $-0.124$& $-0.112$ & $-0.124\ |\ -0.112$ & adopted\ $y_2$ & 0.663 & 0.559 & $0.663\ |\ \hspace*{3.3mm}0.559 $ & adopted\ $\Delta\,\phi$ & $0.1537 \pm 0.0007$ (0.0011) & $0.1537 \pm 0.0007$ (0.0011) & $0.1537 \pm 0.0007$ (0.0011) & adjusted\ $t_0\ (HJD-2\,450\,000$)& 2399.909 & 2399.909 & 2399.909 & computed\ Relative radii ($R/a$) & &\ $r_1(pole)$ & $0.3127 \pm 0.0016$ (0.0148) & $0.3135 \pm 0.0016$ (0.0148) & $0.3131 \pm 0.0016$ (0.0148)\ $r_1(point)$ & $0.3351 \pm 0.0022$ (0.0203) & $0.3362 \pm 0.0022$ (0.0203) & $0.3357 \pm 0.0022$ (0.0203)\ $r_1(side)$ & $0.3205 \pm 0.0018$ (0.0164) & $0.3214 \pm 0.0018$ (0.0164) & $0.3210 \pm 0.0018$ (0.0164)\ $r_1(back)$ & $0.3290 \pm 0.0020$ (0.0182) & $0.3300 \pm 0.0020$ (0.0182) & $0.3295 \pm 0.0020$ (0.0182)\ $r_2(pole)$ & $0.2268 \pm 0.0009$ (0.0152) & $0.2277 \pm 0.0009$ (0.0152) & $0.2271 \pm 0.0009$ (0.0152)\ $r_2(point)$ & $0.2421 \pm 0.0012$ (0.0210) & $0.2433 \pm 0.0012$ (0.0210) & $0.2425 \pm 0.0012$ (0.0210)\ $r_2(side)$ & $0.2306 \pm 0.0010$ (0.0163) & $0.2316 \pm 0.0010$ (0.0163) & $0.2309 \pm 0.0010$ (0.0163)\ $r_2(back)$ & $0.2384 \pm 0.0011$ (0.0189) & $0.2395 \pm 0.0011$ (0.0189) & $0.2387 \pm 0.0011$ (0.0189)\ Gravity-darkening coefficients $\beta_1=\beta_2=0.25$ and albedos $A_1=A_2=1$ were assumed as typical for early type stars. We used the nonlinear ‘square-root’ limb-darkening law [@DCG92; @DCCG95; @vHa93]: $I(\cos\gamma)=I(1)[1-x(1-\cos\gamma)-y(1-\sqrt{\cos\gamma})]$, where $\gamma$ is the angle between the line of sight and the normal to the surface, $I(1)$ is the intensity for $\gamma = 0$, and $x,y$ are the limb darkening coefficients. As shown by @vHa93, this is the most appropriate limb-darkening law at optical wavelengths for $T\geq10\,000$ K. The rotation of both stars is assumed to be synchronous with the orbital one $F_1=F_2=1$. 
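For reference, the square-root limb-darkening law quoted above is straightforward to evaluate; a short sketch (ours, for illustration only), using the $\lambda$4686 coefficients adopted for the primary:

```python
import numpy as np

def sqrt_limb_darkening(mu, x, y):
    """I(mu)/I(1) = 1 - x*(1 - mu) - y*(1 - sqrt(mu)), with mu = cos(gamma)."""
    return 1.0 - x * (1.0 - mu) - y * (1.0 - np.sqrt(mu))

mu = np.linspace(0.0, 1.0, 201)
intensity = sqrt_limb_darkening(mu, x=-0.213, y=0.724)   # primary, lambda 4686 A
print(intensity[0], intensity[-1])   # limb (mu=0) and disc centre (mu=1) values
```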
The adjustable parameters of the model were the following: the Roche lobe filling coefficients for the primary and secondary $\mu_1,\mu_2$ (calculated for the time of periastron passage), the average effective temperature of the secondary star $T_2$, the orbital inclination $i$, the eccentricity $e$, and the longitude of periastron of the primary $\omega$. While doing the minimization, every model light curve was also shifted along the magnitude axis until the best fit between the model and observed curves was achieved. Initial phases $\psi$ of the observational data points were calculated using the spectroscopic ephemeris of Table \[tab: orbit\]: $HJD = 2\,452\,400.284 + 2{\fd}44070 \times E.$ Since our model assumes an orbital phase $\phi$ equal to zero at the time of conjunction (the secondary star being in front), the observed light curve was then shifted in phase by $\Delta\,\phi$, according to $\psi=\phi-\Delta\phi$. The value of $\Delta\,\phi$ was determined by the minimum of the deviation between the observed and model light curves. The estimation of the adjustable parameters was done with the well-known simplex algorithm (Nelder and Mead’s method) [@Him71; @KL87]. In the vicinity of the minima found, additional calculations were done on a fine grid, in order to explore the details in shape of the deviation surface and to estimate the errors on the parameters. The resulting parameters for the solutions corresponding to $\lambda$4686Å, to $\lambda$6051Å and to the simultaneous adjustment at both wavelengths are presented in Table \[tab: Sol\_Par\]. Two kinds of confidence intervals have been computed and are also given in Table \[tab: Sol\_Par\]. The first one corresponds to a test of the adequacy of the model. The confidence intervals for the parameters are estimated using an absolute critical value of $\chi^2$ corresponding to a significance level of 1%. This first approach rather defines the zones of variation of the parameters that still lead to an acceptance of the model. The obtained error bars are rather small. The second kind of confidence interval corresponds to a critical value defined relative to the obtained minimum $\chi^2$ of the fit, increased by a value corresponding to a significance level of 0.1%. This latter interval corresponds to a 3-$\sigma$ deviation and has been transformed to a 1-$\sigma$ uncertainty for the sake of coherence with the radial velocity adjustment. This approach is reminiscent of a search for the zone where the true values of the parameters lie. Figure \[fig: lcdef\] exhibits the observed light curves corresponding to $\lambda 4686$Å and to $\lambda 6051$Å along with the model predictions of the simultaneous solution. The final model for  viewed at different orbital phases is presented in Fig. \[fig: cpdmodel\].  orbital and physical parameters \[sect: discuss\_photo\] ========================================================= Period $P$ \[ssect: discuss\_period\] ------------------------------------- Since the time base of our photometric campaign is [*only*]{} 28 days long, it provides little constraint on the period. Indeed, the width of the associated peak in the periodogram is about $3.6\times 10^{-2}$d$^{-1}$, yielding an uncertainty of about $2.1\times 10^{-2}$d (corresponding to one tenth of the peak width) on the value of a period determined from the photometric set only. As a consequence, we chose to keep the period fixed at the value determined from the much longer time span of our spectroscopic data set. We thus retain $P=2.44070$d for .
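The phase convention described above can be made explicit with a few lines of code; a minimal sketch (not from the original work), using the ephemeris of Table \[tab: orbit\] and the fitted $\Delta\phi$ of Table \[tab: Sol\_Par\]:

```python
import numpy as np

P = 2.44070            # orbital period (days)
T0 = 2452400.284       # time of periastron passage (HJD)
dphi = 0.1537          # phase shift between conjunction and periastron

def psi(hjd):
    """Spectroscopic phase (psi = 0 at periastron passage T0)."""
    return np.mod((hjd - T0) / P, 1.0)

def phi(hjd):
    """Model phase (phi = 0 at conjunction, secondary in front), psi = phi - dphi."""
    return np.mod(psi(hjd) + dphi, 1.0)

# Consistency check: the time of primary-eclipse minimum of the simultaneous
# solution, t0 = HJD 2452399.909, should correspond to phi ~ 0.
print(phi(2452399.909))   # ~0.0
```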
[lcc]{} Parameters & Primary & Secondary\ $a(R_{\sun})$ &\ $R(R_{\sun})$ & $7.45\pm0.45$ & $5.39\pm0.43$\ $M(M_{\sun})$ & $17.97\pm0.45$ & $9.96\pm0.22$\ $T(K)$ & 34000 & 26260\ $\log(L_\mathrm{bol}/L_{\sun})$ & $4.82\pm0.07$ & $4.09\pm0.10$\ $\log(g)$ & $3.93\pm0.48$ & $3.96\pm0.64$\ $M_\mathrm{V}$ & $-4.00\pm0.21$ & $-2.98\pm0.31$\ Eccentricity $e$ \[ssect: discuss\_ecc\] ---------------------------------------- The values of the eccentricity obtained from the analysis of the light curve and of the radial velocity curve are in excellent agreement. From our data, the separation between the two light minima is indeed clearly different from half an orbital cycle and the  orbit is thus slightly eccentric. Recently, @StB04 led a photometric campaign searching for new variables in . Using the period from , they obtained independent light curves for  in the Strömgren system. Surprisingly, their data set reveals almost perfectly symmetric light curves with the two light minima separated by exactly half a cycle, thus indicating either a non eccentric system or a longitude of periastron very close to 90 or 270. No detailed analysis of the light curve has been published yet, but the differences between the @StB04 observations and ours are quite intriguing. In our data, the ingress of the secondary eclipse has been observed during three different nights spread over the one month run. It is therefore well defined and clearly indicates a slight eccentricity, except if some systematic biases were present. Our observing run lasted for 28 days and the  light curves displayed in Fig. \[fig: lcdef\] show smooth ellipsoidal variations and well behaved eclipses. Spread over at least two years and acquired more recently, the @StB04 data set is larger, especially in the $y$ and $b$ bands, though with some gaps in the phase coverage. Their published light curves display several striking features. First, the primary eclipse seems to vary over the time: it presents different depths over different cycles and shows different ingress and egress shapes. Rapid variations are also observed slightly before the primary eclipse as well as slightly after the secondary one. The right wing of the secondary eclipse displays an inflection point in the $y$ and $b$ bands, while a strange [*bifurcation*]{} is observed in the $u$ band. Finally, even outside the eclipses, the behaviour of the system is clearly not as quiet as in our data set (see Fig. \[fig: lcdef\]). While a change of the period or of the eccentricity with time is hard to explain, a change in the longitude of the periastron could mimic a non-eccentric system. Another hypothesis, also mentioned by @StB04, is that the observed dispersion of their light curves reveal the signature of some kind of activity in . Under this hypothesis, the system could have remained in a quiet state during the 11-cycle duration of our observations, while @StB04 could have observed different activity states during the longer time-span of their campaign. Longitude of periastron $\omega$ \[ssect: discuss\_omega\] ---------------------------------------------------------- The two values for the primary longitude of periastron obtained from the spectroscopy ($\omega=149\degr$) and from the photometry ($\omega=33\degr$) are clearly not consistent. In , we also computed an orbital solution including all published primary RVs and we obtained an argument $\omega=27\pm31\degr$ closer to the latter photometric value. 
In principle, the light curve analysis is a more powerful tool to derive accurate values for $\omega$, again from the separation between the two light minima. In Fig. \[fig: lcdef\], the separation between the primary and secondary eclipses is slightly larger than half a cycle. This indicates that the longitude of periastron is located between 0 and 90, thus rejecting the much larger spectroscopic value. In Paper I, we already noted the large dispersion in the values deduced from data sets associated with different lines, ranging from $\omega=99\degr$ to 190. We tentatively suggested that this was linked to the difficulty to accurately determine the periastron argument in such a slightly eccentric system. From our orbital solution, we however derived a reasonable error bar of 10. While searching for the origin of the discrepancy between the photometric and spectroscopic solutions, we have investigated this point more deeply. Adopting the orbital parameters of Table \[tab: orbit\], we computed a set of orbital solutions, varying the periastron argument from 0 to 360. The obtained curves are very similar in shape; the main difference is a shift in radial velocity of an amplitude of about 10 peak-to-peak. Comparing this with the root-mean-square (r.m.s.) residual of 4.8 of our orbital solution gives us a first impression that the periastron argument is probably loosely constrained by the radial velocity solution and that the quoted error-bar could be underestimated in this particular case. In a second approach, we performed Monte-Carlo simulations adopting a Gaussian distribution of the errors on the measured radial velocities (RVs). For the primary, we adopted a standard deviation of 4.8, thus equal to the r.m.s. residual of our fit. For the secondary component, we accounted for the obtained ratio between the primary and secondary uncertainties, $s_y/s_x=2.1$, as quoted in . Finally, for each observation, we scaled the dispersion according to the relative weighting adopted to compute the orbital solution. For each measured RVs, we randomly drew a series of 10000 simulated RV points from these distributions, so building an equivalent number of simulated data sets. We then computed the corresponding orbital solutions using the same method as the one described in . We finally computed the distributions of the resulting orbital elements. This latter approach allows to estimate the errors assuming a random dispersion of the observed points. This evidently does not account for possible systematic errors or outstanding points. We found that all the orbital parameters follow a Gaussian distribution, centered on the values of Table \[tab: orbit\], except the longitude of periastron (and thus the time of periastron passage). The simulated 1-$\sigma$ dispersions were found to be systematically, but not dramatically, higher than the published uncertainties. The difference is, on average, not larger than 80% but can reach a factor of 3. These new values for the uncertainties are quoted in the right column of Table \[tab: orbit\]. Concerning the distribution of the periastron argument, Fig. \[fig: omega\] shows that it significantly deviates from a Gaussian distribution. The width of the peak however does approximately correspond to the width of an equivalent Gaussian characterized by an estimated standard deviation equal to the one of the simulated distribution and by an equivalent surface. 
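The procedure can be summarised schematically. The sketch below is not the code actually used (the sampling times and fitting details are invented for the illustration): it perturbs the primary radial velocities with Gaussian noise matching the r.m.s. residual, refits a Keplerian orbit with the period held fixed, and collects the distribution of $\omega$.

```python
import numpy as np
from scipy.optimize import least_squares

P = 2.44070                                     # fixed orbital period (days)

def kepler_E(M, e, n_iter=25):
    E = M.copy()
    for _ in range(n_iter):                     # Newton-Raphson for Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def rv_model(t, T0, e, omega, K, gamma):
    M = np.mod(2.0 * np.pi * (t - T0) / P, 2.0 * np.pi)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

true = dict(T0=2452400.284, e=0.027, omega=np.radians(149.0), K=167.1, gamma=-15.3)
t_obs = 2452390.0 + 40.0 * np.random.default_rng(0).random(35)   # hypothetical sampling
rv_obs = rv_model(t_obs, **true)
sigma = 4.8                                     # r.m.s. residual of the primary solution

def residuals(p, t, rv):
    return rv_model(t, *p) - rv

omegas = []
rng = np.random.default_rng(1)
for _ in range(1000):                           # 10000 in the text; fewer here for speed
    rv_sim = rv_obs + rng.normal(0.0, sigma, size=rv_obs.size)
    p0 = [true["T0"], true["e"], true["omega"], true["K"], true["gamma"]]
    fit = least_squares(residuals, p0, args=(t_obs, rv_sim))
    omegas.append(np.degrees(fit.x[2]) % 360.0)

print(np.std(omegas))                           # dispersion of omega (degrees)
```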
We thus retain the 1-$\sigma$ dispersion of the distribution as a good estimator of the typical error on the determined spectroscopic value for $\omega$. As a consequence, while the quoted error on the periastron argument was indeed underestimated in , this new estimate explains rather well the dispersion observed from time to time but still rules out the photometric value $\omega=33\degr$. The origin of the inconsistency between the photometric and spectroscopic values, as well as with the @StB04 data, should be looked for elsewhere. ![Distribution of the longitude of periastron $\omega$ in a set of 10000 simulated orbital solutions built using Monte-Carlo techniques (see text). An equivalent Gaussian characterized by the same estimated mean and standard dispersion and with an equivalent surface has been overplotted. []{data-label="fig: omega"}](2746fig4.ps){width="\columnwidth"} One could indeed think of a possible physical effect that would modify the observed RV curve compared to the [*true*]{} curve of the system. In particular, a modification of the position of the spectral line centroids could produce a different RV value compared to the [*true*]{} velocity of the stars. In such a slightly eccentric binary as , it is also plausible that a small variation in the measured RVs could mimic orbits with quite a different periastron argument. Though the exact nature of the phenomenon is unknown, we tentatively linked it to a possible manifestation of the Barr effect. The Canadian amateur astronomer, J. Miller Barr noted that the longitudes of periastron of spectroscopic binaries are not uniformly distributed between $0^{\circ}$ and $360^{\circ}$ [@Barr08]. Out of 30 spectroscopic binaries with elliptical orbits, apparently only four had $\omega$ between $180^{\circ}$ and $360^{\circ}$, all others had their longitude of periastron in the first two quadrants. Barr advanced two possible explanations for this systematic effect: either the pressure or temperature effects in the atmospheres of the stars shift their spectral lines with respect to their genuine orbital motion, or a non-uniform brightness of the components combined with a large rotational velocity causes the spectral lines to become asymmetric. Although Barr included several Cepheid variables in his sample, a similar effect was (re-)discovered by @Str48 [ see also the discussion by @Bat83 [@Bat88]]. Struve apparently found an excess of systems with $\omega$ in the first quadrant. He suggested that this could be due to streams of gas between the stars which lead to spurious eccentricities and values of $\omega$ in the first quadrant. The existence of the Barr effect was confirmed by the studies of @Fra79 and @How93. @Fra79 used the data from the VIIth Catalogue of orbital elements of spectroscopic binaries and found a distribution of $\omega$ for systems with large eccentricities ($e \geq 0.6$) that shows an excess of systems with $\omega = 0^{\circ}$ and a flat minimum around $\omega = 250^{\circ}$. The effect was most prominent in systems with short orbital periods. @How93 analysed the effect by means of non-parametric statistical tests, restricting his sample to systems with orbital solutions of reasonable quality. He found a statistically significant effect only for systems with orbital periods shorter than 3days. The distribution of $\omega$ peaks at a preferred direction of $\omega \simeq 100^{\circ}$, corresponding to a shallower, longer rising branch in the radial velocity curve and a steeper, shorter falling branch. 
Howarth interpreted this effect as the result of a gas stream from the primary towards the secondary, though no simulation of the phenomenon has been performed to check its exact influence on the RV curve. Time of periastron passage $T_0$ \[ssect: discuss\_t0\] ------------------------------------------------------- The difference between the spectroscopically and photometrically determined times of periastron passage directly results from the inconsistency between the values of the periastron argument derived using the two techniques. This problem has already been extensively described in the previous paragraph (Sect. \[ssect: discuss\_omega\]). We just note here that adopting a periastron argument $\omega =33\degr$ yields a value for the time of periastron passage of $T_0=2\,452\,399.498$ (HJD).\ ![Position of the primary (filled symbol) and secondary (open symbol) components of  in the H-R diagram. A formal error of 1000K has been adopted on the temperatures. Solid lines: evolutionary tracks from @SSM92 for different initial masses. Dotted lines: isochrones ranging, from left to right, from 2 to 10Myr with a step of 2Myr. []{data-label="fig: HR"}](2746fig5.ps){width="\columnwidth"}  physical parameters \[ssect: discuss\_phys\_par\] -------------------------------------------------- Thanks to the light curve analysis, the inclination of the system is now very well constrained. Combining this with the spectroscopic information of Table \[tab: orbit\], we derived absolute values for the system separation and the stellar radii and masses. We also derived the luminosities and surface gravities. The physical parameters of both stars are given in Table \[tab: Abs\_Par\]. With an absolute radius of $R_1 =7.45 \pm 0.45$, the primary component is slightly smaller than typical O9 V stars. @HP89, @SK82 and @VGS96 respectively listed radii of 8, 9.5 and 8.8 . The observed radius is however larger than the typical O9.5 V radius of 7  given by @HP89. Adopting the bolometric correction of @HM84, $BC=-3.3\pm0.1$, we derived a visual absolute magnitude $M_\mathrm{V,1}=-4.00\pm0.21$, fainter than the values of $-$4.5, $-$4.2, $-$4.5 and $-$4.43 respectively reported by @HM84, @HP89, @SK82 and @VGS96, though again in agreement with the slightly later spectral-type O9.5 V. Comparing the obtained values with those of other eclipsing early-type binaries listed by @Gie03 clearly indicates that the physical parameters of the primary in  correspond to the observed range for O9 dwarfs. @VCV97 reported a mass of 19  for the O9 V component in though with a relatively smaller radius ($R=6.13$ ). On the other hand, the  primary is slightly larger and more massive than the O9.5 dwarf components in [@RSA01 $M=14.5$ , $R=4.9$ ], [@PoH91 $M=16.6~+~16.3$ , $R=7.4~+~7.4$ ] or [@SSF94; @HiH95; @BMM97 $M=17.0-17.7$ , $R=5.7-7.7$ ]. The dwarf nature of the primary star is consistent with the derived surface gravity (although the corresponding error is rather large). From the effective temperature calibration of @HM84, the secondary temperature corresponds to a spectral sub-type B0.5, in rough agreement with the B1 spectral type obtained from spectroscopy. Its radius and visual magnitude however fall within the expected range for B1-2 stars [@HM84; @SK82]. The secondary is also slightly smaller and less massive than the B1 V component in [@BHA87 $M=13.5$ , $R=5.9$ ].
All in all, and accounting for the uncertainties on the spectroscopic data, adopting a B1.5 V spectral sub-type for the secondary in  yields a better match between its physical parameters and the typical observed and theoretical values expected for such a star. The locations of the  components in the H-R diagram are shown in Fig. \[fig: HR\] together with the evolutionary tracks of @SSM92. A rough interpolation from these tracks yields initial masses $M_1^{(0)}=23.7$ and $M_2^{(0)}=11.1$ and current ages between 3 and 8 Myr. These ages do well reproduce the range of derived values for the  cluster [see the cluster literature review in @SGR05]. In such a small time span, the actual masses of the stars remain close to their initial masses and are thus quite larger than the observed masses of about 18 and 10 (Table \[tab: Abs\_Par\]). In a binary system, mass exchange between its components, through e.g. Roche lobe overflow, could alter their evolutionary status compared to single star models. From the photometric light curve,  is actually a well detached system. Due to its young age, it is thus very unlikely that the system could have undergone such a phenomenon (now interrupted) in its past history. New evolutionary tracks that account for the effect of rotation could help to investigate this apparent discrepancy. Finally, comparing the absolute magnitudes obtained in Table \[tab: Abs\_Par\] with the visual magnitude $V=8.228$ of  [@SBL98], we estimated the distance of the object. We adopted a colour excess $E(B-V)=0.49$ and $R=3.3$ as derived by @SBL98. We finally obtained $DM=10.92\pm0.16$, in excellent agreement with the cluster average distance modulus $DM=11.07\pm0.04$ [@SGR05]. ![Net  count rates of  as a function of orbital phase and averaged over the duration of each pointing [from @SGR05]. The vertical axes are in units $10^{-3}$. The horizontal [*error bars*]{} indicate the extension in phase of the corresponding pointing.[]{data-label="fig: cpd42_lc"}](2746fig6.ps){width="8cm"} ![[**Top panel:**]{}  background count rates in the \[0.5-10.0keV\] band. [**Middle panel:**]{}    background-corrected count rates in the same energy range. The time binning of these two panels is 5ks. The vertical axes are in units $10^{-3}$cnts$^{-1}$. No correction for the limited encircled energy fraction has been applied. [**Lower panel:**]{} RV curve (in ) and optical light curve (in mag) of . Note the coincidence of the X-ray drop around $\psi=0.35$ and the time of conjunction with the primary star being in front, as well as the lack of coincidence of the secondary eclipse with the passage at the systemic velocity. []{data-label="fig: cpd42_lc5ks"}](2746fig7.ps){width="8.5cm"} X-ray light curves and spectral analysis \[sect: X\] ==================================================== The X-ray light curves of  as seen by the two  cameras are shown in Fig.\[fig: cpd42\_lc\]. The count rates, averaged over the duration of each pointing, were taken from the  X-ray source catalogue of @SGR05 and were obtained using the  task [*emldetect*]{}. The count rates are thus corrected for the effects of exposure, vignetting and finite size of the extraction region. They also account for the background subtraction. It is clear from Fig.\[fig: cpd42\_lc\] that the X-ray emission from  displays strong signs of variability. 
A $\chi^2$ hypothesis test consistently rejects, at the 1% significance level, the null hypothesis of constant rates in the \[0.5 - 10.0 keV\] band and in the M$_\mathrm{X}$ and H$_\mathrm{X}$ bands. Fig. \[fig: cpd42\_lc\] also indicates that the phase coverage of the orbital cycle is almost complete, with only a small gap slightly before phase $\psi=0.2$. To increase our time resolution, we also extracted background-corrected light curves with a time binning of 5ks. Figure \[fig: cpd42\_lc5ks\] shows that the count rate changes by about a factor of two over relatively short time scales. These variations are also seen in the different energy ranges (Fig. \[fig: cpd42\_lc5ks\_eb\]) and are most prominent in the intermediate (M$_\mathrm{X}$) band. As in Fig. \[fig: cpd42\_lc5ks\], they suggest a double-peaked light curve with two broad maxima around phases $\psi\approx 0.1$ and 0.5. From the top panels of Fig. \[fig: cpd42\_lc5ks\], we conclude that the observed modulations are clearly not due to background fluctuations. Note that, in Figs. \[fig: cpd42\_lc5ks\] and \[fig: cpd42\_lc5ks\_eb\], no correction for the limited encircled energy fraction has been applied, neither for the vignetting nor for the exposure. This explains the lower count rates obtained compared to Fig. \[fig: cpd42\_lc\]. ![image](2746f8a.ps){width="5.9cm"} ![image](2746f8b.ps){width="5.9cm"} ![image](2746f8c.ps){width="5.9cm"} One of the main features of the X-ray light curve is a noticeable decrease of the signal between phases $\psi=0.27$ and $0.45$, which almost exactly corresponds to the time of the secondary minimum in the optical light curve. The observed modulations are probably phase-locked since, for example, the two wings of the [*eclipse*]{} have been observed during two different orbital revolutions. However, except near $\psi=0.85$, the different pointings do not overlap in phase. One can therefore not definitively assert the phase-locked behaviour of the observed X-ray light curves. Figure \[fig: cpd42\_lc5ks\] thus seems to indicate two different emission levels: a higher state between $\psi\approx 0.0$ and 0.5, during which the [*eclipse*]{} is observed, and a lower state between $\psi\approx0.6$ and 1.0, where no counterpart of the primary eclipse can be seen. The hardness ratio curves are shown in Fig. \[fig: hr\]. Though the error bars are quite large, they seem to indicate that the emission is slightly softer around phase $\psi=0.3$, i.e. approximately at the time of conjunction, while it is presumably harder at the maximum of the emission. Although the orbit is not strictly circular, the eccentricity is quite small and it is unlikely that the variation of the distance between the two stars plays a significant role in . In consequence, the observed modulations of the X-ray emission are more probably due to a modification of the line of sight towards the system while it is revolving around its centre of mass. The observed X-ray light curves will be discussed in the framework of a wind interaction model presented in the next section (Sect. \[ssect: Xinteract\]). As a next step in the analysis, we attempt to constrain the physical properties of the X-ray emission by adjusting a series of models to the spectra obtained for each pointing. We simultaneously fitted the two  spectra using the  software v.11.2.0 [@arn96]. Using the $B - V$ colours quoted by @BVF99 and @SBL98, we infer a colour excess of about $E(B - V) = 0.49$ for .
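The interstellar column quoted in the next sentence follows from this colour excess through a standard gas-to-dust scaling; a one-line check, assuming the commonly used Bohlin, Savage & Drake (1978) ratio, which may differ slightly from the calibration actually adopted:

```python
E_BV = 0.49            # colour excess derived above
N_H_per_EBV = 5.8e21   # cm^-2 mag^-1 (Bohlin et al. 1978 value; assumed here)
print(N_H_per_EBV * E_BV)   # ~2.8e21 cm^-2
```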
The corresponding ISM neutral hydrogen column density amounts to $N_{\rm H}^{\rm ISM} = 2.8 \times 10^{21}$cm$^{-2}$. In the spectral fits, we thus requested a column density larger or equal to $N_{\rm H}^{\rm ISM}$. The best spectral fits are obtained for a two-temperature [mekal]{} thermal plasma model [@MGvdO85; @Ka92] with two independent absorbing columns. These fits indicate a soft (k$T\sim0.6$keV) slightly absorbed plus a harder (k$T\sim1.0$keV) and more heavily absorbed component. However, they only provide an upper limit on the absorbing column associated to the soft component. As for  [@SSG04], fixing this additional soft column to zero yields even better fits, characterized by more stable solutions. The best-fit parameters are listed in Table \[tab: Xspec\] and tend to indicate that the  X-ray spectrum is significantly harder when the total flux is larger. More accurate information is however difficult to obtain since, as can be deduced from the modulations of the hardness ratios (Fig. \[fig: hr\]), the spectral variations are probably averaged out over the 30ks duration of a pointing. Unfortunately, smaller bin sizes do not allow to obtain spectra of a sufficient quality to derive reliable constraints on the spectral properties. The combined  spectra obtained from the merging of the six  observations are shown in Fig. \[fig: Xspec\_fig\] together with the best fit 2-T model. Though the general quality of the fit is relatively good, the model tends to underestimate the fluxes at high energy ($>4$ keV). This could indicate the existence of a high energy component as well as the presence of the line at 6.7keV. The merged spectra do unfortunately not have a sufficient quality at high energy to constrain this probable additional component. [c c c c c c c c c c c c c]{} $\psi$ & Obs. \# & $N_\mathrm{H, 1}$ & k$T_1$ & $norm_1$ & $N_\mathrm{H, 2}$ & k$T_2$ & $norm_2$ & $\chi^2_{\nu}$ (d.o.f.) 
& $f_\mathrm{X}$ & $f_\mathrm{X,S}$ & $f_\mathrm{X,M}$ & $f_\mathrm{X,H}$\ $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$ & $[6]$ & $[7]$ & $[8]$ & $[9]$ & $[10]$ & $[11]$ & $[12]$ & $[13]$\ \ 0.113 &2 & 0.0 & $0.62^{+.06}_{-.08}$ & $0.81^{+.15}_{-.13}$ & $0.79^{+.33}_{-.28}$ & $1.22^{+.26}_{-.17}$ & $2.71^{+.74}_{-.70}$ & 0.63 (59) & 21.0 & 5.6 & 11.5 & 3.8\ 0.278 &5 & 0.0 & $0.52^{+.14}_{-.18}$ & $0.65^{+.09}_{-.10}$ & $0.74^{+.25}_{-.19}$ & $0.97^{+.27}_{-.13}$ & $1.64^{+.42}_{-.52}$ & 1.21 (60) & 13.0 & 4.7 & 7.0 & 1.3\ 0.468 &3 & 0.0 & $0.52^{+.10}_{-.17}$ & $0.66^{+.09}_{-.10}$ & $0.76^{+.18}_{-.16}$ & $0.95^{+.12}_{-.13}$ & $2.28^{+.65}_{-.56}$ & 0.82 (81) & 15.4 & 4.9 & 8.7 & 1.8\ 0.668 &6 & 0.0 & $0.61^{+.05}_{-.14}$ & $0.75^{+.14}_{-.28}$ & $0.74^{+.41}_{-.36}$ & $0.93^{+.36}_{-.35}$ & $1.17^{+2.10}_{-.52}$ & 0.85 (64) & 12.7 & 5.3 & 6.5 & 0.9\ 0.819 &1 & 0.0 & $0.40^{+.10}_{-.08}$ & $0.76^{+.14}_{-.20}$ & $0.52^{+.27}_{-.21}$ & $0.81^{+.12}_{-.21}$ & $0.94^{+.48}_{-.20}$ & 1.08 (53) & 10.2 & 5.0 & 4.7 & 0.5\ 0.930 &4 & 0.0 & $0.35^{+.19}_{-.09}$ & $0.63^{+.30}_{-.23}$ & $0.46^{+.19}_{-.16}$ & $0.75^{+.14}_{-.08}$ & $1.61^{+.49}_{-.48}$ & 1.22 (46) & 11.8 & 5.2 & 6.0 & 0.6\ \ & 0.0 & $0.59^{+.03}_{-.09}$ & $0.74^{+.06}_{-.08}$ & $0.73^{+.13}_{-.11}$ & $1.05^{+.11}_{-.08}$ & $1.39^{+.20}_{-.10}$ & 1.09 (209) & 14.0 & 5.2 & 7.3 & 1.5\  X-ray properties \[sect: Xmodel\] ================================== X-ray emission from the stellar components \[sect: Xstars\] ----------------------------------------------------------- The X-ray emission from massive stars presumably comes from shell collisions within the lower layers of their winds, which result from the growing of radiatively-driven wind instabilities [@FPP97]. It is expected that the bulk of the emission is produced in a zone extending to about five times the stellar radius. Within a binary system with an inclination close to 90, we thus expect only a small fraction of this extended emission zone to be occulted by the motion of one companion in front of the other. In consequence, because of the much larger emission zone, the eclipses in the X-ray domain are probably not as clearly marked as in the optical. However, the  X-ray light curve (Fig. \[fig: cpd42\_lc5ks\]) shows a clear decrease – around $\psi=0.35$ – almost perfectly synchronized with the optical secondary eclipse. This suggests a different geometry and, probably, the presence of a localized emission component, in addition to the intrinsic emission of the two stars. To match the observed light curve, this component should be occulted around $\psi=0.35$. It should thus be associated either with the primary inner side, or with the secondary inner or outer sides. The emission level also appears to be lower between $\psi=0.6$ and 1.0, thus when the line of sight points both towards the primary inner side or the secondary outer side. The second possibility (i.e. an X-ray emission associated with the secondary inner side) therefore seems to best describe the main features of the X-ray light curve, at least qualitatively. In Sect. \[ssect: Xinteract\], we present a phenomenological model that associates an extra X-ray emission with the secondary inner side. Using the relations of @BSD97 and bolometric luminosities from Table \[tab: Abs\_Par\], we obtained X-ray luminosities of $\log(L_\mathrm{X})=31.51$ and $30.69$ () respectively for the O9 and B1-1.5 components in the band $0.1-2.0$keV. 
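These numbers can be checked against the power-law $L_\mathrm{X}$-$L_\mathrm{bol}$ fit often quoted from @BSD97; whether this is the exact form applied here is our assumption, but it reproduces both values:

```python
import numpy as np

L_sun = 3.85e33                                        # erg/s
log_Lbol = np.array([4.82, 4.09]) + np.log10(L_sun)    # primary, secondary (Table Abs_Par)
log_LX = 1.13 * log_Lbol - 11.89                       # assumed Berghoefer et al. (1997) fit
print(log_LX)                                          # ~[31.51, 30.68]
```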
Accounting for the distance modulus of the cluster $DM=11.07$, this corresponds to unabsorbed fluxes of $f_\mathrm{X}=9.99$ and $1.54\times 10^{-14}$. Though the energy bands considered are slightly different, we can compare these predictions with the values obtained from the X-ray spectral fits (Table \[tab: Xflux\]). It appears that, even at its minimum of emission ($\psi\sim0.82$),  is at least twice brighter than expected from the @BSD97 relations. Part of the gap between the observed and predicted values could however be filled by the following considerations. First, the dispersion around the @BSD97 relations is quite large and does not allow an accurate determination of the X-ray luminosities. Second, @MSC84 reported that the winds from the main sequence B stars in  are particularly strong. The B star in  could thus have a particularly powerful wind for its spectral type, producing stronger shocks within its lower layers and, subsequently, an enhanced X-ray emission. @SNG05 further reported that, in , the B stars seem to follow a brighter  relation than predicted from @BSD97. From this new relation, the B1-1.5 component in  could be at least three times brighter, yielding a luminosity of a few $10^{31}$. -------- --------- ------------------------------- --------------------------------- --------------------------------- --------------------------------- ---------------------- $\psi$ Obs. \# $f_\mathrm{X}^\mathrm{unabs}$ $f_\mathrm{X,S}^\mathrm{unabs}$ $f_\mathrm{X,M}^\mathrm{unabs}$ $f_\mathrm{X,H}^\mathrm{unabs}$ $\log(L_\mathrm{X})$ $[1]$ $[2]$ $[3]$ $[4]$ $[5]$ $[6]$ $[7]$ 0.113 2 36.2 16.4 15.8 4.0 32.06 0.278 5 25.4 14.2 9.8 1.4 31.91 0.468 3 28.7 14.8 12.1 1.8 31.96 0.668 6 25.8 15.5 9.3 1.0 31.92 0.819 1 23.8 16.4 6.9 0.5 31.88 0.930 4 25.7 16.4 8.7 0.6 31.92 27.2 15.4 10.3 1.5 31.95 -------- --------- ------------------------------- --------------------------------- --------------------------------- --------------------------------- ---------------------- : Unabsorbed fluxes (in $10^{-14}$ergcm$^{-2}$s$^{-1}$), i.e. fluxes corrected for the interstellar absorption ($N_\mathrm{H, ISM}=0.28\times 10^{22}$cm$^{-2}$), according to the best-fit models presented in Table \[tab: Xspec\]. The last column gives the total X-ray luminosity (in ) assuming a distance modulus $DM=11.07$.[]{data-label="tab: Xflux"}  wind properties ---------------- The wind properties of the two components of  are not known. We however used the newly derived physical parameters of the stars to get an insight into their wind strengths. We estimated their mass-loss rates using the mass-loss recipes from [@VdKL00; @VdKL01]. We obtained, for the primary, $\log(\dot{M}_1)=-7.06$ (yr$^{-1}$). The temperature of the secondary component however falls within the bi-stability jump region. Using the recommendations from @VdKL00, we estimated the position of the bi-stability jump to be located at about 22800K for the particular stellar parameters of the secondary. This puts the companion on the hot side of the jump, yielding thus $\log(\dot{M}_2)=-8.74$ (yr$^{-1}$). We estimated the terminal wind velocities by first computing the escape velocities and then adopting the average ratio $v_\infty/v_\mathrm{esc}=2.6$ as appropriate for the winds of the stars on the hot side of the stability jump. We respectively obtained terminal velocities of $v_{\infty,1}=2380$ and $v_{\infty,2}=2150$  for the two components of . 
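The escape and terminal velocities quoted above can be reproduced with a short calculation; a sketch, assuming an electron-scattering opacity of 0.34 cm$^2$ g$^{-1}$ in the Eddington factor (the exact constants used in the original analysis may differ slightly):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                       # cgs
M_sun, R_sun, L_sun = 1.989e33, 6.957e10, 3.85e33
sigma_e = 0.34                                  # electron-scattering opacity (cm^2/g), assumed

def v_esc(M, R, L):
    """Effective escape velocity in km/s, including the Eddington factor."""
    Gamma = sigma_e * L * L_sun / (4.0 * np.pi * c * G * M * M_sun)
    return np.sqrt(2.0 * G * M * M_sun * (1.0 - Gamma) / (R * R_sun)) / 1.0e5

# Primary and secondary parameters from Table Abs_Par
for M, R, logL in [(17.97, 7.45, 4.82), (9.96, 5.39, 4.09)]:
    v = v_esc(M, R, 10.0**logL)
    print(round(v), round(2.6 * v))             # v_esc and v_inf = 2.6 v_esc (km/s)
# -> roughly 910/2370 and 830/2150 km/s, close to the quoted 2380 and 2150 km/s
```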
While these values are typical for O-type stars, the secondary terminal velocity seems quite large for a typical B1 dwarf. As stated above, @MSC84 reported particularly strong winds for the B dwarfs in . For example, they derived, for the single B1V star , a terminal wind velocity close to 2300 km s$^{-1}$, thus very near our estimate for the B component in .

A wind interaction in  \[ssect: Xinteract\]
------------------------------------------

Using the estimated wind parameters, we computed the position of the ram pressure equilibrium surface that typically indicates the location of a possible wind-wind collision. For this purpose, we adopted a $\beta=1$ velocity law, as appropriate for hot star winds. Due to its larger mass-loss rate, the primary wind clearly overwhelms the secondary wind and no equilibrium is possible. In consequence, the O-star wind should crash into the B-star surface, preventing the secondary wind from developing towards the primary star. Under the above hypotheses, the primary wind luminosity at the distance of the secondary surface is about $\log(L_\mathrm{w,1})=\log(\frac{\dot{M}_1 v^2_1}{2})\sim 34.7$ (in erg s$^{-1}$). Accounting for the secondary radius and its distance to the primary, a fraction of about 2.7% of the O9V wind is intercepted by the secondary, and we therefore expect the shocked plasma to be heated to temperatures of a few $10^7$ K, thus generating a substantial amount of X-rays. According to the formalism of @Uso92, and using a primary wind pre-shock velocity of 1380 km s$^{-1}$, the X-ray emission generated by such a wind-photosphere interaction should be about $\log(L_\mathrm{X})\approx 32.8$ (erg s$^{-1}$) for a purely radiative interaction (Usov’s Eq. 80) and $\log(L_\mathrm{X})\approx 30.7$ (erg s$^{-1}$) in the adiabatic case (Usov’s Eq. 79, adopting a solar chemical composition for the wind). Following @SBP92, the ratio between the characteristic cooling time and the flow time is $\chi=t_\mathrm{cool}/t_\mathrm{flow}\approx 5.2$, indicating a mainly adiabatic collision. However, the interaction region is immersed in the intense UV photon field of the secondary. Inverse Compton cooling (Comptonization) could thus be significant, yielding a higher cooling rate and hence a lower value of the $\chi$ parameter. In addition, under the influence of the radiative pressure of the secondary, the acceleration of the primary wind may be slowed down (the so-called [*radiative inhibition*]{} effect, @StP94). According to the formalism developed by the latter authors, the mass-loss rate on the axis should not be affected by more than 1%. From a crude interpolation of their results, the primary wind velocity at the secondary surface might however be reduced by about one third. In consequence, the wind kinetic energy would be reduced by a factor of about two. Hence, radiative inhibition might significantly affect the value of the $\chi$ parameter, which depends on the fourth power of the velocity. Assuming that the primary wind only reaches 2/3 of this velocity at the distance of the secondary surface, we obtain $\chi\approx1.0$. Under the combined influence of Comptonization and radiative inhibition, the shock region might thus shift towards the radiative regime. Using the formalism of @GOC97, we also investigated the possibility that sudden radiative braking alters the wind-photosphere interaction. In such a phenomenon, the wind of the primary star could be suddenly brought to a stop by the radiative pressure of the companion.
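The orders of magnitude quoted above can be checked with the following minimal sketch (ours): the kinetic wind luminosity, the solid-angle fraction of the wind intercepted by the secondary, and the cooling parameter written in the form commonly quoted from @SBP92, $\chi \approx v_8^4\, d_{12} / \dot{M}_{-7}$. The mass-loss rate and pre-shock velocity are the values given in the text; the secondary radius and the separation are placeholder values, so the geometric numbers are only indicative.

```python
import math

MSUN_YR = 1.989e33 / 3.156e7       # g s^-1 per (Msun / yr)
RSUN    = 6.957e10                 # cm

mdot1 = 10 ** -7.06                # primary mass-loss rate quoted above (Msun/yr)
v_pre = 1380e5                     # pre-shock wind speed at the secondary (cm/s)
r2    = 5.4 * RSUN                 # secondary radius   -- placeholder value
d     = 17.0 * RSUN                # orbital separation -- placeholder value

# kinetic luminosity carried by the primary wind at the secondary's distance
lw = 0.5 * mdot1 * MSUN_YR * v_pre ** 2
print("log L_w =", round(math.log10(lw), 2))                 # ~34.7

# fraction of the primary wind intercepted by the secondary (solid angle)
frac = 0.5 * (1.0 - math.sqrt(1.0 - (r2 / d) ** 2))
print("intercepted fraction ~", round(100 * frac, 1), "%")   # a few per cent

# cooling parameter; values well above 1 indicate a mainly adiabatic shock
chi = (v_pre / 1e8) ** 4 * (d / 1e12) / (mdot1 / 1e-7)
print("chi ~", round(chi, 1))                                # close to the ~5 quoted above
```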
The main effect of radiative braking is to modify the position of the dynamical ram pressure equilibrium surface by pushing it further away from the secondary star. In certain cases, radiative braking could be strong enough to prevent the primary wind from actually reaching the secondary surface, thus yielding a wind-wind interaction structure rather than a wind-photosphere interaction. Adopting the known stellar parameters, we computed the radiative braking coefficient. We then used different values of the [cak]{} parameters [@CAK75] appropriate for effective temperatures around 30 kK. Depending on the values of these coefficients as given by different authors [@PPK86; @SIH94; @PSL00], radiative braking may or may not disrupt the wind-photosphere interaction, so that no firm conclusion can be drawn on this point. However, even when braking occurs, the interaction is moved only slightly away from the secondary star surface. Though the shock structure would be quite different, the geometry of the emitting region would probably remain rather similar, with an extra emission component mainly located close to the secondary inner surface.

![image](2746f11a.ps){height="5cm"} ![image](2746f11b.ps){height="4.5cm"}

A phenomenological model {#a-phenomenological-model .unnumbered}
------------------------

To estimate the influence of such a wind interaction on the observed X-ray light curve, we built a simple geometrical model presented in Fig. \[fig: th\_view\]. We adopted a circular orbit, spherically symmetric stars and winds, and a $\beta=1$ acceleration law for the primary wind. Assuming a totally radiative interaction, we considered that, when encountering the secondary star surface, the kinetic energy associated with the normal velocity component of the incident wind flow is totally dissipated into thermal energy. We thus computed the amount of energy re-emitted by each element of the secondary surface. Then, accounting for the orbital inclination and the possible occultation by the primary star, we computed the contribution of the interaction to the observed X-ray light curve. We noted above that radiative braking might occur within the system. We caution, however, that this should not alter this simple model much. Indeed, in the case of a wind-wind collision, the interaction region should still be located near the secondary star surface, so that the geometry of the problem would be only slightly modified. The emission from the secondary shock would furthermore be very limited. Indeed, so close to the surface, the radiative acceleration could not yet have been very efficient. The secondary wind velocity is thus probably of the order of the photospheric thermal velocity, therefore close to 20 km s$^{-1}$. Under these hypotheses, the possible contribution of the secondary shock to the total X-ray emission would be about $10^{29}$ erg s$^{-1}$, at least one order of magnitude below the other emission components.

![[**Upper panel:**]{} Predicted unabsorbed flux emitted by a radiative wind-photosphere interaction in . [**Lower panel:**]{} Tuned phenomenological model (thick line) overplotted on the observed X-ray light curves. Filled triangles and open squares respectively show the background-corrected 1 and 2 count rates in the 0.5-10.0 keV energy band. These count rates have been corrected for the limited size of the extraction region. Flux axes are in units of $10^{-14}$ in both panels. The dashed line gives the adopted intrinsic contribution from the two stellar components of .
It acts as a pedestal.[]{data-label="fig: th_lc"}](2746f12a.ps "fig:"){width="8cm"} ![[**Upper panel:**]{} Predicted unabsorbed flux emitted by a radiative wind-photosphere interaction in . [**Lower panel:**]{} Tuned phenomenological model (thick line) overplotted on the observed X-ray light curves. Filled triangles and open squares respectively show the background-corrected 1 and 2 count rates in the 0.5-10.0 keV energy band. These count rates have been corrected for the limited size of the extraction region. Flux axes are in units of $10^{-14}$ in both panels. The dashed line gives the adopted intrinsic contribution from the two stellar components of . It acts as a pedestal.[]{data-label="fig: th_lc"}](2746f12b.ps "fig:"){width="8cm"}

The results of this simple model are presented in Fig. \[fig: th\_lc\] (upper panel) and provide an upper limit on the actual contribution of such an interaction. Indeed, as stated above, the interaction is probably not fully radiative, so that only a fraction of the incoming energy is effectively radiated. Radiative inhibition might also reduce the wind velocity, giving rise to a weaker shock and hence to a weaker emission than considered here. In Fig. \[fig: th\_lc\], the occultation of the interaction zone by the primary is clearly seen ($\psi \sim 0.35$), while the interaction does not provide any contribution when the secondary is turning its outer side to the observer ($\psi\sim0.8-0.9$). In this simple form, the phenomenological model indeed predicts a higher emission level slightly before and after the secondary eclipse, while the secondary inner side is facing the observer, and a lower emission state half a cycle later, when the interaction zone is hidden by the secondary body. It also reasonably reproduces the width of the observed [*eclipse*]{} in the X-ray light curve. In a second step, we apply a moderate tuning to the model in order to investigate to what degree it can match the observed modulations. According to the model, no emission from the interaction is expected around $\psi=0.8-0.9$, and it should only provide a faint contribution at the time of the secondary minimum. At those particular phases, we thus probably observe the intrinsic emission of the two stars which, as explained above, is only slightly affected by the eclipses because of its wide extension. Correcting the observed light curve for the limited encircled energy fraction, the intrinsic emission from the two stars gives about $14\times 10^{-3}$ counts s$^{-1}$ in the two  instruments. From Table \[tab: Xspec\], this approximately corresponds to an observed flux of about $10.2\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$. We also note that the model provides unabsorbed fluxes while the observed count rates have suffered interstellar absorption. Comparing the values of the absorbed and unabsorbed fluxes (Tables \[tab: Xspec\] and \[tab: Xflux\]), we estimate that the ISM material absorbs about half of the flux at the considered energies. At this stage, the model still predicts an emission level much larger than the observed one. To properly match the observations, we have to divide the predicted flux by a further factor of 6, so that the maximum contribution of the interaction zone to the observed flux is now about $10\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$. This last step finds a relative justification in the fact that, as discussed above, the present purely radiative model only provides an upper limit to the X-ray emission emerging from the interaction zone.
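To make the geometry of this phenomenological picture explicit, here is a minimal sketch (ours, not the model used above, which spreads the emission over the heated surface elements): the extra emission is collapsed into a single point-like spot on the inner, primary-facing side of the secondary, and only the foreshortening and the occultation by the primary are applied. The inclination and the eclipse phase are taken from the text; the separation and stellar radii are placeholders.

```python
import numpy as np

incl = np.radians(77.0)            # orbital inclination quoted for the system
a, r1, r2 = 17.0, 7.0, 5.4         # separation and radii (placeholders, in R_sun)
psi_ecl = 0.35                     # phase of the optical secondary eclipse

obs = np.array([np.sin(incl), 0.0, np.cos(incl)])    # unit vector towards the observer

def spot_flux(psi):
    theta = 2.0 * np.pi * (psi - psi_ecl) + np.pi    # secondary behind the primary at psi_ecl
    s = a * np.array([np.cos(theta), np.sin(theta), 0.0])   # secondary centre (primary at origin)
    n = -s / np.linalg.norm(s)                       # outward normal of the inner-side spot
    spot = s + r2 * n                                # spot position
    fore = max(0.0, float(np.dot(n, obs)))           # foreshortening / self-occultation
    behind = np.dot(spot, obs) < 0.0                 # spot on the far side of the primary?
    proj = np.linalg.norm(spot - np.dot(spot, obs) * obs)
    return 0.0 if (behind and proj < r1) else fore   # hidden behind the primary disk

phases = np.linspace(0.0, 1.0, 201)
curve = np.array([spot_flux(p) for p in phases])
# Even this crude version shows the expected pattern: maxima just outside a
# narrow eclipse around psi ~ 0.35 and no contribution while the secondary
# presents its outer side to the observer.
print(phases[curve > 0].min(), phases[curve > 0].max())
```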
In addition, effects that might reduce the shock strength, such as radiative inhibition, are not accounted for. If radiative braking takes place, one might also think that some emission could originate from the trailing arm of the shock cone. In such a configuration, the extra emission from the collision would not drop to zero around $\psi\sim0.8-0.9$. In the current fully radiative model, the plasma immediately cools down after the shock; by construction, it thus cannot produce any extra emission at these particular phases. Such a contribution from the arms of the shock cone would however be less affected by the eclipses in the system and, as a first approximation, one can consider that it has been accounted for in the empirical pedestal adopted in Fig. \[fig: th\_lc\]. The latter is indeed higher than expected from the sole @BSD97 relations for the intrinsic emission of the early-type stars in the system. It is clear that reality is probably different from this idealized situation. It is not our purpose to over-interpret the present model; our aim was to show that, using reasonable assumptions, an interaction region located on, or near, the secondary surface can reproduce the main features of the observed X-ray light curve. From Fig. \[fig: th\_lc\], the time of the beginning of the X-ray eclipse is well reproduced by our model. The right wing of the eclipse is however slightly wider, suggesting that the interaction is more extended on the surface side opposite to the orbital motion. Similarly, the drop in emission around $\psi\approx 0.6$ occurs slightly later than expected from our model, which means that the X-ray emitting region should remain visible slightly longer, a condition which does not tally with the previous suggestion. Clearly, Fig. \[fig: th\_lc\] shows that the observed modulations in the X-ray light curves are dominated by an extra emission component associated with the secondary inner surface. However, the details of the phenomenon could be more complicated, as suggested by the observed delays in the rising and falling branches near $\psi=0.4$ and 0.6. Finally, the hardness ratio curves indicate that the hardest emission is observed at the time of the two emission maxima. This is expected if the extra emission is produced in a wind interaction region, which typically provides harder X-rays than the intrinsic emission from the stars.

Final remarks and conclusions \[sect: ccl\]
===========================================

In the first part of this paper, we presented optical photometry of . Adopting the period obtained from , the analysis of the system light curves indicates that  is a well detached system with an inclination close to $77^\circ$. The obtained curves display two eclipses with a separation slightly different from half an orbital cycle, thus indicating a small eccentricity, in agreement with the results of . Combining the spectroscopic and photometric analyses, we derived the absolute physical parameters of the stellar components and confirmed that the system is formed by two dwarf early-type stars with masses, sizes and luminosities relatively close to the typical values expected from both observational and theoretical works. The photometric and spectroscopic data sets however provide discrepant values for the longitude of periastron. Independent observations by @StB04 also tend to indicate either a periastron argument close to $90^\circ$ or $270^\circ$, or a zero eccentricity. Their light curves, obtained over at least two years, also display intriguing signs of variability.
Clearly, further observations are needed to elucidate these apparent discrepancies. In the second part of the present paper, we focused on recent  X-ray observations of the system. The X-ray emission from  is well described by a two-temperature thermal plasma model with energies close to 0.6 and 1.0 keV, thus slightly harder than the typical emission from early-type stars. The X-ray light curve of the system is clearly variable, both in the total band and in the different energy ranges; the emission level is higher when the primary is in front of the secondary. During the high state, the system shows a drop of its X-ray emission that almost exactly matches the optical secondary eclipse. Assuming that the X-ray light curve is reproducible, we interpreted this as the signature of a wind interaction phenomenon in which the overwhelming primary wind crashes into the secondary star surface. Alternatively, the wind-photosphere interaction could be altered by sudden radiative braking, yielding a wind-wind interaction located close to the secondary surface and thus displaying a similar geometry. We expect this phenomenon to produce a substantial amount of X-rays, which could be the major source of the observed modulations in the  light curves. As a next step, we built a simple phenomenological model that associates an extra X-ray emission component with the inner side of the secondary star surface. Though limited by some simplifying assumptions, this model reproduces the main properties of the observed variations and thus lends further support to our interpretation of the X-ray light curve. At this stage, however, several important questions remain unanswered. The exact influence of the wind interaction, and of the generated X-ray emission, on the secondary surface properties is very difficult to estimate. We carefully inspected the high-resolution, high signal-to-noise spectra from  but could not find any systematic differences in the secondary spectra obtained when this star shows either its inner or its outer face to the observer. As a final check, we put a point-like X-ray source at a distance of $1.1\times R_2$ from the center of the secondary star on the system axis. We assigned to this source a luminosity of $10^{33}$ erg s$^{-1}$, which is probably typical of the wind interaction taking place in the system. The additional heating of the secondary star surface elements closest to the X-ray source amounts to a few tens of Kelvin. For comparison, the heating of the same surface elements by the radiation of the primary component is about 2000-2500 K. This clearly suggests that the heating of the secondary surface by the nearby interaction should be limited. Formed by an O9 plus a B1-1.5 dwarf,  [*a priori*]{} seemed to be an ordinary, well detached system. We however showed that it probably harbours a wind-wind or wind-photosphere interaction. Such a phenomenon could be quite common among close early-type systems. It is thus of particular importance to evaluate its possible impact on the determination of the physical parameters obtained using different observational methods. The possible variable activity of  is an additional motivation to accumulate more data on this particularly interesting early-type binary system. Finally, the present set of observations provides X-ray light curves that cover almost the full orbital cycle of  with reasonable signal-to-noise and time resolution. As discussed in Sect.
\[ssect: Xinteract\], different physical phenomena (radiative inhibition, radiative braking, Comptonization, ...) probably affect the shock structure and, hence, the exact amount of X-ray emission generated by the wind interaction. The development of appropriate tools, both theoretical and numerical, to analyse such high quality X-ray light curves is probably one of the challenges that the new generation of X-ray stellar scientists will have to face in the coming decade, especially to prepare the ground for the next generation of large X-ray observatories. It is a pleasure to thank Dr. I.I. Antokhin for fruitful discussions. The one month run at the Bochum telescope has been made possible thanks to a ‘crédit aux chercheurs’ from the Belgian FNRS. Our Bochum negotiators, H.G. Grothues and R.J. Dettmar, are warmly thanked for their open-minded efficiency. We also acknowledge support from the PRODEX XMM-OM and Integral Projects, as well as contracts P4/05 and P5/36 ‘Pôle d’Attraction Interuniversitaire’ (Belgium). EA acknowledges support from the Russian Foundation for Basic Research (project No 02-02-17524) and the Russian LSS (project No 388.2003.2). [^1]: Research Fellow FNRS (Belgium) [^2]: Research Director FNRS (Belgium) [^3]: Research Associate FNRS (Belgium) [^4]: Based on observations collected at the European Southern Observatory (La Silla, Chile) and with , an ESA Science Mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). [^5]: Expressed in pulse-invariant (PI) channel numbers and considering that one PI approximately corresponds to 1eV, the adopted range is actually PI$\in$ \[500-10000\].
--- abstract: 'We introduce an anisotropic two–dimensional lattice gas model of metal terminated II–VI(001) semiconductor surfaces. Important properties of this class of materials are represented by effective NN and NNN interactions, which result in the competition of two vacancy structures on the surface. We demonstrate that the experimentally observed $c(2\times2)$–$(2\times1)$ transition of the CdTe(001) surface can be understood as a phase transition in thermal equilibrium. The model is studied by means of transfer–matrix and Monte Carlo techniques. The analysis shows that the small energy difference of the competing reconstructions determines to a large extent the nature of the different phases. Possible implications for further experimental research are discussed.' author: - | M.Biehl$^{1,2}$, M. Ahr$^1$, W. Kinzel$^1$, M. Sokolowski$^{2,3}$ and T. Volkmann$^1$\ $^1$Institut für Theoretische Physik und Astrophysik\ $^2$Sonderforschungsbereich 410\ Julius–Maximilians–Universität Würzburg\ Am Hubland, D–97074 Würzburg, Germany\ $^3$ Institut für Physikalische und Theoretische Chemie\ Rheinische Friedrich–Wilhelms–Universität\ Wegelerstr. 12, D–53115 Bonn, Germany title: '**A lattice gas model of II–VI(001) semiconductor surfaces**' ---  \ Two–dimensional lattice gases have served as models of atoms adsorbed to a singular crystal surface, or the terminating layer of such a surface itself, respectively. The interplay of attractive and repulsive short range interactions can result in highly non–trivial features, see e.g. [@schick; @selke; @kinzel; @bartelt; @binder] and references therein. For instance, square lattice systems with infinite NN–repulsion ([*hard squares*]{}) and NNN–attraction display tricritical behavior. At low temperatures a dense, $c(2\times2)$ ordered phase coexists with a disordered phase of low coverage. Here we will investigate a particular model with highly anisotropic attractive and repulsive interactions, which result in a $c(2\times2)$ groundstate, as well. However, this ordering competes with a $(2\times1)$ structure which can prevail locally in the disordered regime. The model parameters are chosen as to represent certain properties of metal terminated II–VI(001) semiconductor surfaces. This class of materials has attracted considerable attention due to their potential technological relevance in the development of optoelectronic devices, for a recent overview see [@zweisechs]. Frequently, (001) surfaces serve as substrates for the growth of II–VI crystals [@cibert] by means of Molecular Beam Epitaxy or Atomic Layer Epitaxy, for instance. Surface reconstructions play an important role in this context and have been the target of experimental studies [@cibert; @tata; @soko]. In contrast to most III–V materials, II–VI(001) surfaces exhibit a fairly small number of possible reconstructions, which are less complex than their III–V counterparts, in general. In the following we will mainly address the CdTe(001) surface, see [@cibert] for a detailed discussion. Apparently, only Cd–terminated (001) surfaces are observed in vacuum [@stm; @inseln]. The underlying, complete Te half–layer provides potential Cd–sites which form a simple square lattice. 
Electron counting rules [@pashley] and similar considerations [@harrison] show that the simultaneous occupation of NN–sites in the \[$1\bar{1}0$\]–direction (termed the $y$–direction in the following) is excluded in the terminating Cd–layer, whereas the occupation of NN–sites along the $[110]$–direction (or $x$–axis, for short) is possible. Therefore, unless excess Cd is deposited, the surface is characterized by a vacancy structure with a maximum Cd–coverage of $\theta = 1/2$. Figure \[figure1\] (a) illustrates the structure of the two relevant configurations which satisfy this constraint at $\theta=1/2$. The $c(2\times2)$ reconstruction is characterized by a staggered ([*checkered*]{}) occupation of the square lattice sites. In the $(2\times1)$ structure, Cd–atoms arrange in rows along the $x$–direction which alternate with rows of vacancies. In principle, the configurations can be transformed into one another by shifting every other column of Cd–atoms by one lattice site. Density functional (DF) calculations have shown that the surface energies of the two competing structures at $\theta=1/2$ and $T=0$ differ only by a small amount $\Delta E$, with the $c(2\times2)$ reconstruction having the slightly lower energy. Qualitatively, this preference can be understood in terms of electron Coulomb interactions, as the distances of neighboring metal atoms are smaller in the $(2\times1)$ arrangement [@garcia]. For ZnSe, a value of $\Delta E \approx 0.03$ eV per potential Zn–site is given in [@garcia; @park; @gundeldip]. According to [@gundeletal], the energy difference is even smaller ($\Delta E \approx 0.016$ eV) for the CdTe(001) surface. This factor should play a crucial role in a phase transition which has been studied for CdTe [@cibert; @tata; @soko]: in vacuum at temperatures below a critical value of about $T_c = 270^oC \pm 10^oC$, the surface displays a mixed $c(2\times2)$–$(2\times1)$ structure with a clear prevalence of the checkered configuration close to (but below) $T_c$. Above $T_c$, the $(2\times1)$ arrangement of Cd–atoms dominates the surface. The observed coverage is in the vicinity of $\theta \approx 0.4$ in both regimes [@soko]. The situation is complicated by the fact that the material begins to sublimate at about the same temperature $T_c$. However, it has been argued that sublimation through step flow would not prevent the surface from achieving an effective equilibrium configuration on terraces [@soko]. The aim of our theoretical investigation is to clarify whether the nature of the above discussed transition can be explained within a thermodynamic equilibrium framework at all, or if non–equilibrium effects play a crucial role. The modeling of reconstructions which are characterized by displacements of atoms from their regular lattice positions usually requires continuous two– or three-dimensional degrees of freedom. A prominent example is the description of W(100) surfaces by XY–models, see e.g. [@w100] and references therein. Here, however, reconstruction occurs via the rearrangement of atoms in vacancy structures, and a description in terms of occupation variables is appropriate. We present here a lattice gas model which takes into account important features of the above discussed II–VI(001) surfaces. We will loosely speak of Cd–atoms in the following, without claiming to reproduce particular properties of CdTe faithfully. In fact, the basic structure of the model would be the same for other II–VI(001) surfaces.
In our simplifying picture we consider only the terminating Cd–layer, represented by a square lattice of sites $(x,y)$ which can be either occupied $(n_{x,y} =1)$ or empty $(n_{x,y}=0)$. The influence of the underlying crystal structure is accounted for by effective pairwise interactions of atoms. In the $y$–direction, an infinite repulsion excludes the simultaneous occupation of NN–sites, i.e. $n_{x,y}=1$ always implies $n_{x,y\pm1}=0$. In the $x$–direction, an attractive interaction favors the occupation of NN–pairs, the strength of which is denoted by $J_x < 0$. A competing attractive interaction of diagonal neighbors (NNN) $J_d < 0$ tends to stabilize the $c(2\times2)$ arrangement of atoms. The total energy of the system is given by $$\label{hamilton} H \, = \, \sum_{x,y} \, n_{x,y} \left( J_d \left[ n_{x+1,y+1} + n_{x+1,y-1} \right] \, + \, J_x \, n_{x+1,y} \, - \, \mu \right),$$ where the sum is over all lattice sites and the (effective) chemical potential $\mu$ controls the mean coverage $\theta = \langle n_{x,y} \rangle \leq 1/2$. Without loss of generality we can choose $J_d =-1$ and thus fix the energy scale. Then $J_x$ controls the energy difference $\Delta E$ (in units of $|J_d|$) between a perfectly ordered $c(2\times2)$ and a perfect $(2\times1)$ arrangement at $\theta=1/2$: $\Delta E \, = \, |2 + J_x| / 2$ (per lattice site). The groundstate of the system is a $c(2\times2)$ ordered configuration with $\theta=1/2$, whenever $J_x > -2$ (and $\mu > -2$). The free energy of the system is obtained from the partition function $Z = \sum_{\{n_{x,y}\}} e^{-\beta H}$, where the temperature $T=1/\beta$ is also measured in units of $|J_d|=1$. The sum is restricted to configurations $\{n_{x,y}\}$ which obey the NN–exclusion in the $y$–direction. We have applied standard transfer matrix (TM) techniques [@julia] to evaluate the logarithm of $Z_L$, the partition sum of a system with $M = L\times N$ lattice sites in the limit $N\to\infty$. Strips of width $L$ with periodic boundary conditions were aligned with the $x$-axis. Hence, only even $L$ allow for the perfect $c(2\times2)$ ordering of the groundstate. Note that the TM is of dimension $2^L\times2^L$, but with a much smaller number $3^L$ of non–zero elements due to the anisotropic repulsion. As a first example we consider the model with $J_x =-1.96$. Figure \[figure1\] (b) shows results for strip width $L=10$ at different temperatures and constant chemical potential $\mu = -1.96$. We have evaluated the coverage $\theta = \langle n_{x,y} \rangle = \sum_{x,y} n_{x,y} / M$ as well as the correlations $$\label{corrs} c_d \, = \, \frac{1}{2} \left\langle n_{x,y} \left( n_{x+1,y+1}+n_{x+1,y-1} \right) \right\rangle \mbox{~~ and~~} c_x \, = \, \left\langle n_{x,y}\, n_{x+1,y} \right\rangle.$$ These measure the probabilities of finding an occupied NN–pair ($c_x$) or NNN–pair ($c_d$) of Cd–atoms, i.e. the contribution of $(2\times1)$– or $c(2\times2)$–dominated regions in the system. Coverage and correlations can be obtained from proper derivatives of $\ln Z_L$, e.g. $$c_d \, = \, \left. \frac{-1}{2\beta}\, \frac{1}{M} \, \frac{\partial}{\partial J_d} \, \ln Z_L \right|_{J_d=-1}$$
or, as in the case of $\theta$ and $c_x$, directly from the relevant eigenvector of the TM [@kinzel; @bartelt]. In addition, Figure \[figure1\] (b) displays results of Monte Carlo simulations of a system with $M=64\times64$ sites. In order to achieve reasonably fast equilibration we have applied a rejection-free algorithm [@newman]; the results are in good agreement with the TM–calculation. In addition to the correlations (\[corrs\]) we determine order parameters which are associated with a perfect $c(2\times2)$ or $(2\times1)$ structure on one of the sublattices: $$m_{2\times1} \, = \frac{1}{M} \, \sum_{x,y}^{x \, \mbox{\scriptsize even}} \! n_{x,y} \mbox{~~~ and ~~~} m_{2\times2} \, = \frac{1}{M} \, \sum_{x,y}^{(x+y) \,\mbox{\scriptsize even}} \! n_{x,y}$$ Large values ($\leq \theta$) of these quantities indicate long range order, whereas a homogeneously disordered occupation of the lattice would yield $m_{2\times2}=m_{2\times1}=\theta/2$. For the sake of breaking the sublattice symmetry, we have initialized the system with $m_{2\times2} =\theta$ for the equilibration dynamics. We have refrained from determining the order parameters within the TM–approach, which would require the introduction of additional staggered fields in the energy function (\[hamilton\]). The TM–formalism offers a more suitable method to localize the phase transition [@bartelt]. In the considered example, one observes a sudden drop of the coverage at $T \approx 0.3$ when $\mu = -1.96$ is held constant. Simultaneously, the system loses its long range order, as indicated by values $m_{2\times2} = m_{2\times1}=\theta/2$ in the simulations. This is also signaled in the properties of the relevant eigenvector in the TM-analysis [@bartelt]. The behavior is consistent with a first order transition, as investigated for similar models with isotropic or anisotropic interactions, see e.g. [@schick; @selke; @kinzel; @bartelt; @binder] and references therein. Here, however, the NNN–correlation $c_d$ also decreases rapidly at the coverage drop, while $c_x$ displays a sudden increase, and $c_x > c_d$ in the high temperature regime. This indicates that the phase transition also affects the short range correlations in the system: atoms order in rows of the $(2\times1)$–type without long range order. At $\theta = 1/2$ the $c(2\times2)$ ordering is always preferred energetically. For significantly smaller coverages, however, the local rearrangement of atoms is possible and can be favorable if $J_x \approx 2 J_d$. Indeed, the degree of the prevalence of $c_x$ over $c_d$ depends strongly on the actual coverage, as will be discussed below. We have followed the prescription outlined by Bartelt et al. [@bartelt] for estimating the coverage discontinuity and the phase boundaries for $L\to\infty$ from three different strip widths. The results as obtained from $L=6,8,10$ are shown in Figure \[figure2\] for the models with $J_x=-1.90$ and $J_x=-1.60$, i.e. $\Delta E = 0.05$ and $0.2$, respectively. At low temperatures (III), an ordered phase with $\theta \approx 1/2$ coexists with a disordered phase of low coverage. At higher temperatures, the system becomes homogeneously disordered (II) or ordered (I), depending on the coverage. For $T\to\infty$, we expect the phase boundary (I/II) to approach the $\theta=1/2$ axis.
In this limit the infinite repulsion should be the only relevant interaction: columns of lattice sites decouple and the system is always disordered. This is in contrast to hard square models with isotropic NN–repulsion, where an extended regime (I) persists for arbitrary temperature. As an additional characteristic of the system we have determined the line $T(\theta)$ where $c_x=c_d$ and extrapolated it for $L\to\infty$. To the right of the dashed lines in Figure \[figure2\], the $c(2\times2)$–structure is prevalent, and vice versa. For small coverage, this characteristic line coincides with the boundary (II/III) of the coexistence region. Hence, for a range of coverages, the transition into disorder is accompanied by a simultaneous and discontinuous change of local ordering from the $c(2\times2)$ to the $(2\times1)$ arrangement of Cd–atoms. We also obtain a rough estimate of the phase diagram from additional Monte Carlo simulations at constant coverage. For this purpose, we apply a non–local algorithm which exchanges empty with occupied sites according to Kawasaki–like rates [@newman]. The system is again initialized in an ordered $c(2\times2)$-configuration for equilibration, and a rapid decrease of $m_{2\times2}$ with increasing $T$ marks the transition into the homogeneously disordered phase. Figure \[figure2\] shows in both diagrams the results for $M=128\times128$, which are in good agreement with the TM–prediction. Within error bars, we obtain the same results by searching for a pronounced maximum in the fluctuations of the order parameters, correlations, or energy. Note that this method is not suitable for detecting the transition into the homogeneously ordered region (I): simulations slow down considerably at almost maximal coverage and, furthermore, (I) and (III) become virtually indistinguishable in small systems. Figure \[figure2\] demonstrates the crucial role that the energy difference $\Delta E$ plays for the nature of the phase transition. With increasing $\Delta E$, the tricritical point shifts to smaller coverage and higher temperature. Even more so does the line which separates $c(2\times2)$ from $(2\times1)$ prevalence. This feature might offer a qualitative explanation for the remarkable fact that the $c(2\times2)$–$(2\times1)$ transition, which was investigated for CdTe in great detail, has not been found in ZnSe so far. There, $\Delta E$ is expected to be significantly larger than for CdTe, and the region of noticeable $(2\times1)$–dominance should indeed be smaller. Note that in the experimental investigation, integrated HRLEED–peak intensities provide information about local correlations, similar to $c_x$ and $c_d$, rather than about long range ordering [@soko]. In summary, our model offers an interpretation of the $c(2\times2)$–$(2\times1)$–transition in CdTe(001) as an equilibrium phase transition. At medium coverage the transition is, with increasing $T$, from a coexistence regime into a homogeneously disordered phase. For a small enough energy difference $\Delta E$, this phase transition is inevitably accompanied by a rearrangement of the vacancy structure from $c(2\times2)$– to local $(2\times1)$–ordering. Of course, some of the detailed experimental findings cannot be accounted for in our simple model; see for instance [@soko] for particular phenomena related to the relaxation of surface strain.
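For readers who wish to experiment with the model, the following is a minimal grand-canonical Metropolis sketch of the lattice gas defined by Eq. (\[hamilton\]) (ours; it is neither the rejection-free algorithm nor the Kawasaki dynamics used above, and the lattice size and number of sweeps are kept deliberately small).

```python
import numpy as np

def mc_run(L=24, T=0.30, mu=-1.96, Jx=-1.96, Jd=-1.0, sweeps=1000, seed=0):
    """Grand-canonical Metropolis sampling of the anisotropic lattice gas:
    hard NN exclusion along y, attractions Jx (NN along x) and Jd (NNN),
    chemical potential mu; T and all couplings in units of |Jd|."""
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    n = ((xs + ys) % 2 == 0).astype(int)          # start in the c(2x2) groundstate
    for _ in range(sweeps * L * L):
        x, y = rng.integers(L, size=2)
        if n[x, y] == 0 and (n[x, (y + 1) % L] or n[x, (y - 1) % L]):
            continue                              # adding here would violate the y-exclusion
        # energy carried by the bonds attached to (x, y) when the site is occupied
        e_site = (Jx * (n[(x + 1) % L, y] + n[(x - 1) % L, y])
                  + Jd * (n[(x + 1) % L, (y + 1) % L] + n[(x + 1) % L, (y - 1) % L]
                          + n[(x - 1) % L, (y + 1) % L] + n[(x - 1) % L, (y - 1) % L])
                  - mu)
        dE = e_site if n[x, y] == 0 else -e_site  # cost of adding / removing the atom
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            n[x, y] ^= 1
    theta = n.mean()
    c_x = (n * np.roll(n, -1, axis=0)).mean()
    c_d = 0.5 * (n * (np.roll(np.roll(n, -1, axis=0), -1, axis=1)
                      + np.roll(np.roll(n, -1, axis=0), 1, axis=1))).mean()
    return theta, c_x, c_d

# Expected trend (cf. the discussion of Figure 1b): below the transition the
# coverage stays close to 1/2 with c_d > c_x; above it the coverage drops and
# the remaining atoms order locally in (2x1)-type rows, so that c_x > c_d.
print(mc_run(T=0.25))
print(mc_run(T=0.40))
```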
For a more quantitative comparison with experiments, additional information is needed. A precise measurement of $\theta$ as a function of the temperature is difficult, but would reveal the path on which the system enters the $(2\times1)$–dominated region in the phase diagram. In a naive attempt to interpret our results quantitatively one would identify the dimensionless critical temperature (in units of ${{\, \left| \,{J_d}\,\right|}}=1$) with $T_c \approx 270^oC$, thus setting the scale for expressing the energy difference ${{\, \left| \,{2+J_x}\,\right|}}/2$ in physical units. For example, the model with $J_x = -1.94$ exhibits the desired transition with $\theta\approx 0.4$ at a temperature $T\approx 0.3$. This would translate into $\Delta E \approx 0.005 eV$ which is significantly smaller than the value $(0.03 eV)$ given in [@garcia; @park; @gundeldip]. DF–calculations yield $\Delta E$ at $T=0$ and the precise effect of higher temperatures on the relation of (free) energies is unknown. Furthermore, recent calculations have shown that the DF–results are very sensitive (up to a factor of about $2$) to the number of atomic layers considered in the calculation [@private]. Hence, a serious quantitative matching is not feasible unless more reliable estimates of $\Delta E$ become available. Another open question is, if and how our results for small values of $\theta$ can be interpreted in the experimental context. Terminating layers of metal atoms with very low coverage are unstable in vacuum and the next (metal) layer is uncovered, see e.g. [@cibert; @stm; @inseln]. However, the presence of excess group VI atoms might stabilize an effective equilibrium situation with small metal coverage. As a test for this hypothesis we suggest to search for the structural transition of the ZnSe(001) surface under mildly Se–rich conditions. Our model also opens the possibility to study the shapes and sizes of domains, e.g. the regions of local $(2\times1)$–dominance in the disordered phase. Experimental data is available for the pronounced anisotropy of such domains [@cibert]. Furthermore, we will study the equilibrium shape of isolated [*islands*]{} of atoms and its dependence on the temperature. This should allow for further comparison with experimental results as reported in [@inseln], for instance.  \ [**Acknowledgment:**]{}  We would like to thank S. Gundel for fruitful discussions. M. Ahr was supported by the Deutsche Forschungsgemeinschaft. [123]{} M. Schick, in: [*Phase Transitions in Surface Films*]{}, eds. J.G. Dash and J. Ruvalds (Plenum, New York, 1980) W. Selke, K. Binder, and W. Kinzel, Surface Science [**125**]{} (1983) 74 W. Kinzel and M. Schick, Phys. Rev. [**B24**]{} (1981) 324 N.C. Bartelt, T.L. Einstein, and L.D. Roelofs, Phys. Rev. [**B34**]{} (1986) 1616 K. Binder and D.P. Landau, Phys. Rev. [**B21**]{} (1980) 1949; Surface Science [**108**]{} (1981) 502 , Kyoto 1999, to be published in: Journal of Crystal Growth (2000) J. Cibert and S. Tatarenko, Defect and Diffusion Forum [**150–151**]{} (1997) 1 S. Tatarenko, B. Daudin, D. Brun, V. Etgens, and M.B. Veron, Phys. Rev. [**B50**]{} (1994) 18479 H. Neureiter, S. Tatarenko, S. Spranger, and M. Sokolowski, Phys. Rev. [**B 62**]{} (2000) 2542 L. Seehofer, V.H. Etgens, G. Falkenberg, M.B. Veron, D. Brun, B. Daudin, S. Tatarenko, and R.L. Johnson, Surface Science [**347**]{} (1996) L55 D. Martrou, J. Eymery, and N. Magnea, Phys. Rev. Lett. [**83**]{} (1999) 2366 M.D. Pashley, Phys. Rev. [**B40**]{} (1989) 10481 W.A. Harrison, J. Vac. Sci. 
Technol. [**16**]{} (1979) 1492 A. Garcia, J. Northrup, J. Vac. Sci. Technol. [**B12**]{} (1994) 2678 C.H. Park, D.J. Chadi, Phys. Rev. [**B49**]{} (1994) 16647 S. Gundel, Diploma thesis, Universit[ä]{}t W[ü]{}rzburg, 1997 S. Gundel, A. Fleszar, W. Faschinger, and W. Hanke, Phys. Rev. [**B59**]{} (1999) 15261 M.R. Baldan, E. Granato, S.C. Ying, Phys. Rev. [**B62**]{} (2000) 2146. J. Yeomans, [*Statistical Mechanics of Phase Transitions*]{}, Oxford University Press (Oxford, 1992) M.E.J. Newman and G.T. Barkema, [*Monte Carlo Methods in Statistical Physics*]{}, Clarendon Press (Oxford, 1999) S. Gundel, private communication.
--- abstract: 'The present study is devoted to the problem of tsunami wave generation. The main goal of this work is two-fold. First of all, we propose a simple and computationally inexpensive model for the description of the sea bed displacement during an underwater earthquake, based on the finite fault solution for the slip distribution under some assumptions on the dynamics of the rupturing process. Once the bottom motion is reconstructed, we study waves induced on the free surface of the ocean. For this purpose we consider three different models approximating the Euler equations of the water wave theory. Namely, we use the linearized Euler equations (we are in fact solving the Cauchy-Poisson problem), a Boussinesq system and a novel weakly nonlinear model. An intercomparison of these approaches is performed. The developments of the present study are illustrated on the 17 July 2006 Java event, where an underwater earthquake of magnitude 7.7 generated a tsunami that inundated the southern coast of Java.' address: - 'LAMA, UMR 5127 CNRS, Université de Savoie, Campus Scientifique, 73376 Le Bourget-du-Lac Cedex, France' - 'IMA, University of Minnesota, Minneapolis, MN 55455, USA' - 'LAMA, UMR 5127 CNRS, Université de Savoie, Campus Scientifique, 73376 Le Bourget-du-Lac Cedex, France' - 'School of Mathematical Sciences, University College Dublin, Belfield, Dublin 4, Ireland' author: - 'Denys Dutykh$^*$' - Dimitrios Mitsotakis - Xavier Gardeil - Frédéric Dias bibliography: - 'biblio.bib' title: On the use of the finite fault solution for tsunami generation problems --- [^1] Introduction ============ Tsunami waves have attracted a lot of attention by researchers. The interest of the scientific community has especially increased after the two megatsunamis in December 2004 [@Syno2006], where nearly 230,000 people in fourteen countries lost their lives, and in March 2011, where 20,000 people lost their lives in Japan. The 2004 event also led Indian Ocean countries to develop Tsunami Warning Systems (TWS) [@Synolakis2005; @Basher2006], unfortunately more on an individual basis than on a collective basis. The most elaborated warning system to date is the Pacific Ocean TWS, which has been developed over several decades by efforts of NOAA’s specialists [@Titov2005; @Gonz]. An operational tsunami wave modeling tool is an essential part of any warning system [@Titov2005; @Tkalich2007a]. Mathematical and numerical models in use should be constantly improved to produce more accurate results in less CPU time [@Imamura1996; @Titov1997; @Dutykh2009a]. In order to study the propagation of a tsunami wave, an initial condition must usually be provided to any numerical model designed for this purpose. The present study is an attempt to improve the construction of the initial tsunami waveform. The set of existing practices described in the literature constitutes the field of the so-called tsunami generation modeling [@Hammack; @Todo; @Dutykh2006; @Dutykh2007a; @Dutykh2007b; @Dutykh2008]. The modeling of tsunami generation was initiated in the early sixties by the prominent work of Kajiura [@kajiura], who proposed the translation of the static sea bed displacement towards the free surface as an initial condition. Classically, the celebrated Okada [@Okada85; @okada92] and sometimes Mansinha & Smylie[^2] [@Mansinha1967; @Mansinha1971] solutions are used to compute the co-seismic sea bed displacements. This approach is still widely used by the tsunami wave modeling community. 
However, significant progress has been recently made in this direction [@Ohmachi2001; @Dutykh2006; @Dutykh2007a; @Dutykh2007b; @Rabinovich2008; @Saito2009; @Dutykh2009a]. In the present study we exploit some recent advances in seismology to reconstruct better co-seismic displacements of a tsunamigenic earthquake. More precisely, we suggest using the so-called finite fault solution developed by Ji and his collaborators [@Bassin2000; @Ji2002], based on static and seismic data inversion. This solution provides multiple fault segments of variable local slip, rake angle and several other parameters. By applying Okada’s solution to each subfault, we reconstruct the sea bed displacement with higher resolution. To our knowledge, this technique has already been employed to model the Kuril islands tsunamis of 15 November 2006 and 13 January 2007, cf. [@Rabinovich2008]. Since Okada’s solution consists of relatively simple closed-form analytical expressions, all computations can be done efficiently enough so that they can be used in a real-time TWS (cf. [@Weinstein2008]). The obvious *sine qua non* condition is that the corresponding finite fault inversion should also be performed in a reasonable time. In the present study we go further in reconstructing the dynamic sea bed displacement according to the rupture propagation speed and the rise time also provided by the finite fault solution. Constructed in this special way, sea bed displacements are then coupled with several water wave models. Among them, there is a novel weakly nonlinear solver based on a formulation involving the Dirichlet-to-Neumann operator which is computed approximately using Fourier transforms. The other two models considered here are the linearized free surface Euler equations and a Boussinesq type system. Developments presented in this paper are illustrated on the example of July 17, 2006 Java event [@Ammon2006]. However, we would like to stress that the methodology presented in this study is quite general and can be applied to many other tsunamigenic earthquakes for which a finite fault solution is available. The paper is organized as follows. In Section \[sec:displ\] we describe the static and dynamic sea bed displacements, while in Section \[sec:fluid\] we present a simple approximate water wave solver with a moving bottom. In Section \[sec:numres\] we study numerically the generation process of a real-world event. An intercomparison of the three models mentioned above is performed. Some important conclusions are drawn in Section \[sec:concl\]. Co-seismic displacement construction {#sec:displ} ==================================== The modeling of tsunami generation is directly related to the problem of the bottom motion during an underwater earthquake. Traditionally, Okada’s solution [@Okada85; @okada92] is used in regimes characterized by an active fault of small or intermediate size, i.e. consisting of one or a few segments (e.g. the great Sumatra 2004 earthquake, [@Syno2006; @Ioualalen2007]). In this case the resulting vertical displacement field is translated to the free surface. This approach is conventionally referred to as [*passive tsunami generation*]{} [@ddk], contrary to the [*active generation*]{} which explicitly involves the bottom motion dynamics [@Dutykh2006]. Since our methods will be illustrated on the example of the July 17, 2006 Java event, we show in Figure \[fig:Java1\_fault\] a typical single-fault based initial condition used for the corresponding tsunami wave modeling [@Yalciner2008]. 
The seismic parameters used to produce this vertical displacement are given in Table \[tab:singleF\]. Fault length, km $80.9$ --------------------------------- ------------- Fault width, km $40.0$ Focal depth, km $20.0$ Slip, m $2.5$ Dip angle $10^\circ$ Slip angle $95^\circ$ Strike angle (clockwise from N) $289^\circ$ : Seismic fault parameters for the Java 2006 event. The corresponding seismic moment can be taken as $M_0 = 2.52\times 10^{27}$ N$\cdot$ m ($M_w = 7.56$).[]{data-label="tab:singleF"} ![Static vertical displacement in meters of the seabed computed with the single fault parameters provided in Table \[tab:singleF\]. The maximum lift is 0.7215 m while the maximum subsidence is 0.4030 m. The $x-$axis is the longitude while the $y-$axis is the latitude. The $y-$axis points to the North.[]{data-label="fig:Java1_fault"}](figs/Java1fault.eps){width="90.00000%"} The celebrated Okada solution [@Okada85; @okada92] is based on two main ingredients — the dislocation theory of Volterra [@volt] and Mindlin’s fundamental solution for an elastic half-space [@mindl1]. Particular cases of this solution were known before Okada’s work, for example the well-known Mansinha & Smylie’s solution [@Mansinha1967; @Mansinha1971]. Usually, all these particular cases differ by the choice of the dislocation and Burger’s vector orientation [@press]. We recall the basic assumptions behind this solution: - The fault is immersed into a linear homogeneous and isotropic half-space - The fault is a Volterra type dislocation - The dislocation has a rectangular shape For more information on Okada’s solution we refer to [@Dutykh2006; @Dias2006; @Dutykh2007a]. The finite fault solution is based on the multi-fault representation of the rupture [@Bassin2000; @Ji2002]. The rupture complexity is reconstructed using a joint inversion of the static and seismic data. The fault’s surface is parametrized by multiple segments with variable local slip, rake angle, rise time and rupture velocity. The inversion is performed in an appropriate wavelet transform space. The objective function is a weighted sum of $L_1$, $L_2$ norms and some correlative functions. With this approach seismologists are able to recover rupture slip details [@Bassin2000; @Ji2002]. This available seismic information is exploited in this study to compute the sea bed displacements produced by an underwater earthquake with higher geophysical resolution. The proposed approach will be directly illustrated on the Java 2006 event. The July 17, 2006 Java earthquake involved thrust faulting in the Java trench and generated a tsunami wave that inundated the southern coast of Java [@Ammon2006; @Fritz2007]. The estimates of the size of the earthquake (cf. [@Ammon2006]) indicate a seismic moment of $6.7 \times 10^{20}$ N$\cdot$ m, which corresponds to the magnitude $M_w = 7.8$. Later this estimate was refined to $M_w = 7.7$ [@Ji2006]. Like other events in this region, this 2006 event had an unusually low rupture speed of $1.0$ – $1.5$ km/s, and occurred near the up-dip edge of the subduction zone thrust fault. According to Ammon [*et al*]{}, most aftershocks involved normal faulting [@Ammon2006]. The rupture propagated approximately $200$ km along the trench with an overall duration of approximately $185$ s. The fault’s surface projection along with ocean ETOPO1 bathymetric map are shown in Figure \[fig:Java\_fault\]. We note that the Indian Ocean bathymetry considered in this study varies between 7186 and 20 meters in the shallowest region. 
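As a simple consistency check of the rupture kinematics quoted above (our estimate, not taken from the cited studies), the reported rupture length and speed bracket the stated overall duration: $$t_{\mathrm{rupture}} \;\approx\; \frac{\ell_{\mathrm{rupture}}}{v_r} \;=\; \frac{200~\mathrm{km}}{1.0\mbox{--}1.5~\mathrm{km/s}} \;\approx\; 130\mbox{--}200~\mathrm{s},$$ which is indeed compatible with the duration of approximately $185$ s reported by Ammon [*et al*]{} [@Ammon2006].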
The estimate of the finite fault inversion for this earthquake was also performed by the Caltech team [@Ozgun2006]. The magnitude estimated in that study was $M_w = 7.9$. In this study we do not present numerical simulations using their data but it is straightforward to apply our algorithms to this case as well. Static displacement ------------------- In order to illustrate the advantages of the proposed approach we will also compute the static co-seismic displacements using the finite fault solution [@Ji2006]. The fault is considered to be the rectangle with vertices located at $(109.20508^\circ$ (Lon), $-10.37387^\circ$ (Lat), $6.24795$ km (Depth)$)$, ($106.50434^\circ$, $-9.45925^\circ$, $6.24795$ km), ($106.72382^\circ$, $-8.82807^\circ$, $19.79951$ km), ($109.42455^\circ$, $-9.74269^\circ$, $19.79951$ km) (see Figure \[fig:Java\_fault\]a). The fault’s plane is conventionally divided into $N_x = 21$ subfaults along strike and $N_y = 7$ subfaults down the dip angle, leading to a total number of $N_x\times N_y = 147$ equal segments. Parameters such as subfault location $(x_c, y_c)$, depth $d_i$, slip $u$ and rake angle $\phi$ for each segment are given in Table \[tab:subfaults\] and can also be downloaded from [@Ji2006]. The elastic constants common to all subfaults and parameters such as dip and slip angles are given in Table \[tab:crust\]. (We note that the slip angle is measured conventionally in the counter-clockwise direction from the North. The relations between the elastic wave celerities $c_p$, $c_s$ and the Lamé coefficients $\lambda$, $\mu$ used in Okada’s solution are given in Appendix \[app:iii\].) $P$-wave celerity $c_p$, m/s 6000 --------------------------------- --------------- $S$-wave celerity $c_s$, m/s 3400 Crust density $\rho$, kg/m$^3$ 2700 Dip angle, $\delta$ $10.35^\circ$ Strike angle (clockwise from N) $289^\circ$ : Geophysical parameters used to model elastic properties of the subduction zone in the region of Java.[]{data-label="tab:crust"} We compute Okada’s solution at the sea bottom by substituting $z=0$ in the geophysical coordinate system and taking the vertical component of the displacement field ${\mathcal{O}}_i({\vec{x}}; \delta, \lambda, \mu, \ldots)$, where $\delta$ is the dip angle, $\lambda$, $\mu$ are the Lamé coefficients (see Appendix \[app:iii\]) and the dots denote the dependence of the function ${\mathcal{O}}({\vec{x}})$ on eight other parameters, cf. [@Dutykh2006]. The resulting co-seismic vertical bottom displacement $\zeta({\vec{x}})$ can be computed as a simple superposition of subfault contributions: $$\zeta ({\vec{x}}) = \sum_{i=1}^{N_x\times N_y} {\mathcal{O}}_i({\vec{x}}; \delta, \lambda, \mu, \ldots)$$ The graph of $\zeta ({\vec{x}})$ is presented in Figure \[fig:ffaultdisp\]. The specific static displacement can be compared with the single fault classical approach depicted on Figure \[fig:Java1\_fault\]. It is worth mentioning that more than one local extrema can be found in this solution due to a higher slip resolution. Hereafter we will adopt the short-hand notation ${\mathcal{O}}_i({\vec{x}})$ for the vertical displacement component of Okada’s solution for the $i^{\mathrm{th}}$ segment having in mind its dependence on various parameters from Tables \[tab:crust\] and \[tab:subfaults\]. ![The vertical displacement of the finite fault solution, cf. [@Ji2006]. The corresponding seismic moment is $M_0 = 3.53\times 10^{27}$ N$\cdot$ m ($M_w = 7.65$). 
The maximum lift is 0.4629 while the maximum subsidence is 0.1997.[]{data-label="fig:ffaultdisp"}](figs/static_vert.eps){width="90.00000%"} Dynamic co-seismic displacements {#sec:dyndisp} -------------------------------- Here, we go even further in the reconstruction of the bottom motion. By making some assumptions on the time dependence of the displacement fields, we can have an insight into the dynamics of the sea bed motion. The finite fault solution provides two additional parameters concerning the rupture dynamics for the July 17, 2006 event — the rupture velocity $v_r = 1.1$ km/s and the rise time $t_r = 8$ s. The epicenter is located at the point ${\vec{x}}_e =$ $(107.345^\circ, -9.295^\circ)$ [@Ji2006]. Given the origin ${\vec{x}}_e$, the rupture velocity $v_r$ and the $i^{\mathrm{th}}$ subfault location ${\vec{x}}_i$ (the full list is provided in Table \[tab:subfaults\]), we define the [*subfault activation times*]{} $t_i$ needed for the rupture to reach the corresponding segment $i$ by the formula: $$t_i = \frac{||{\vec{x}}_e - {\vec{x}}_i ||}{v_r}, \quad i=1,\ldots, N_x\times N_y.$$ For the sake of simplicity and due to the lack of information we assume implicitly that the rupture speed $v_r$ is constant along the fault; however this can be refined in future studies. We will also follow the pioneering idea of J. Hammack [@Hammack1972; @Hammack] developed later in [@Todo; @todo2; @Dutykh2006; @ddk; @Kervella2007] where the maximum bottom deformation is achieved during some finite time (known as the rise time) according to a specific (in an *ad hoc* manner) dynamic scenario. Various scenarios on the time dependence (instantaneous, linear, trigonometric, exponential, etc) can be found in [@Hammack; @ddk; @Dutykh2006]. In this study we will adopt the trigonometric scenario which can be described by the formula: $$T(t) = {\mathcal{H}}(t-t_r) + \frac12{\mathcal{H}}(t){\mathcal{H}}(t_r-t)\bigl(1 - \cos(\pi t/t_r)\bigr),$$ where ${\mathcal{H}}(t)$ is the Heaviside step function. For illustrative purposes this dynamic scenario is represented on Figure \[fig:trigscen\]. Physically the function $T(t)$ represents the time history of the vertical bottom displacement in terms of its final amplitude. We assume that during the rise time temporal interval $[0, t_r]$ the vertical displacement goes from zero to its final stage according to the trigonometric scenario. ![Trigonometric scenario with rise time $t_r = 1$ s.[]{data-label="fig:trigscen"}](figs/trig.eps){width="69.00000%"} Finally, we put together all the ingredients in order to construct the dynamic sea bed motion: $$\label{eq:zeta} \zeta ({\vec{x}}, t) = \sum_{i=1}^{N_x\times N_y} {\mathcal{H}}(t-t_i) T(t-t_i) {\mathcal{O}}_i({\vec{x}}).$$ In the following sections we will present several approaches to couple this dynamic deformation with the hydrodynamic problem to predict waves induced on the ocean’s free surface. Fluid layer solution {#sec:fluid} ==================== Once the sea bed deformation is determined, a water wave problem must be solved in order to compute the free surface motion induced by the ocean bottom shaking. Traditionally this difficulty is circumvented by the simple translation of the static bottom deformation onto the free surface [@kajiura], known as the passive generation approach [@ddk; @Kervella2007]. In this section we present three approximate models to the water wave problem with moving bottom that we will use in combination with the finite-fault solution to study the tsunami generation problem. 
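A minimal computational sketch of the construction in Eq. (\[eq:zeta\]) is given below (ours, not the code used in this study). The rupture speed $v_r = 1.1$ km/s and the rise time $t_r = 8$ s are the values quoted above, but the fault geometry is a toy straight segment and each Okada contribution ${\mathcal{O}}_i$ is replaced by a Gaussian bump of arbitrary $0.1$ m amplitude, purely to illustrate how the activation times and the trigonometric rise enter the superposition.

```python
import numpy as np

def rise(t, t_r=8.0):
    """Trigonometric rise history T(t): 0 before activation, smooth growth
    during the rise time t_r, and 1 afterwards."""
    return np.where(t <= 0.0, 0.0,
                    np.where(t >= t_r, 1.0, 0.5 * (1.0 - np.cos(np.pi * t / t_r))))

def activation_times(subfault_xy, epicentre_xy, v_r=1.1):
    """t_i = |x_e - x_i| / v_r (coordinates in km, v_r in km/s)."""
    return np.linalg.norm(subfault_xy - epicentre_xy, axis=1) / v_r

def bottom_displacement(X, Y, t, subfault_xy, amp, t_act, width=15.0, t_r=8.0):
    """zeta(x, t) = sum_i H(t - t_i) T(t - t_i) O_i(x).  Each O_i is a Gaussian
    bump of amplitude amp[i]: a crude stand-in for the Okada field of that
    subfault, used only to illustrate the time superposition."""
    zeta = np.zeros_like(X)
    for (xc, yc), a, ti in zip(subfault_xy, amp, t_act):
        o_i = a * np.exp(-((X - xc) ** 2 + (Y - yc) ** 2) / (2.0 * width ** 2))
        zeta += rise(t - ti, t_r) * o_i
    return zeta

# Toy fault: 10 subfaults every 20 km along a line, hypothetical 0.1 m bumps
sub_xy = np.column_stack([np.arange(10) * 20.0, np.zeros(10)])
amp = np.full(10, 0.1)
t_i = activation_times(sub_xy, epicentre_xy=np.array([0.0, 0.0]))

X, Y = np.meshgrid(np.linspace(-50.0, 250.0, 151), np.linspace(-100.0, 100.0, 101))
for t in (10.0, 60.0, 120.0):
    z = bottom_displacement(X, Y, t, sub_xy, amp, t_i)
    print(f"t = {t:5.1f} s : max uplift so far = {z.max():.3f} m")
```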
Linearized Euler equations – CP model {#sec:CP}
-------------------------------------

Consider an ideal incompressible fluid of constant density $\rho$. The horizontal projection of the fluid domain $\Omega$ is a subset of ${\mathbb{R}}^2$. The horizontal independent variables are denoted by ${\vec{x}}= (x,y)$ and the vertical one by $z$. The origin of the cartesian coordinate system is chosen such that the surface $z=0$ corresponds to the still water level. The fluid is bounded below by the bottom $z = -h({\vec{x}},t)$ and above by the free surface $z = \eta ({\vec{x}},t)$. Usually we assume that the total depth $H({\vec{x}}, t) := h({\vec{x}},t) + \eta ({\vec{x}},t)$ remains positive, $H ({\vec{x}},t) \geq h_0 > 0$, at all times $t \in [0, T]$. The sketch of the physical domain is shown in Figure \[fig:sketch\]. Classically in water wave modeling, we make the assumption that the free surface is a graph $z = \eta ({\vec{x}},t)$ of a single-valued function. In practice this means that we exclude some interesting phenomena (e.g. wave breaking) which are out of the scope of this modeling paradigm.

*(Figure \[fig:sketch\]: sketch of the fluid domain, with horizontal axis ${\vec{x}}$, vertical axis $z$, the free surface $z=\eta({\vec{x}},t)$, the bottom $z=-h({\vec{x}},t)$ and the origin $O$ at the still water level.)*

The linearized water wave problem consists of the following set of equations [@Hammack1972; @Hammack; @Dutykh2006]: $$\begin{aligned} \label{eq:lin1} \Delta\phi = {\nabla}^2\phi + \partial^2_{zz}\phi &=& 0, \qquad ({\vec{x}}, z) \in \Omega\times [-h, 0], \\ {\partial_t}\eta - {\partial_z}\phi &=& 0, \qquad z = 0, \\ {\partial_t}\phi + g\eta &=& 0, \qquad z = 0, \\ {\partial_t}h + {\partial_z}\phi &=& 0, \qquad z = -h({\vec{x}},t).\label{eq:lin2}\end{aligned}$$ This set of equations together with an initial condition is also often referred to in the literature as the Cauchy-Poisson (CP) problem after the pioneering work of Cauchy [@Cauchy1827]. In view of the specific requirements of the analytical techniques used in the applications, we will assume first that the domain $\Omega = {\mathbb{R}}^2$, i.e. it is unbounded in the horizontal extent, and the bottom has a special form: $$h({\vec{x}}, t) = h_0 - \zeta ({\vec{x}},t),$$ where $h_0$ is some uniform depth and $\zeta({\vec{x}},t)$ is the sea bed displacement due to an underwater earthquake. In Section \[sec:dyndisp\] one possible construction of the bottom displacement was proposed. Using integral transform methods (cf. 
[@Hammack; @Todo; @Dutykh2006; @Kervella2007]), one can derive the following expression for the free surface elevation $\eta({\vec{x}},t)$: $$\begin{gathered} \eta({\vec{x}},t) = \frac{\gamma^2}{2}{\mathcal{F}}^{-1}\Bigl\{ \sum_{i=1}^{n = N_x \times N_y}\frac{{\mathcal{H}}(t-t_i)\hat{\mathcal{O}}_i({\vec{k}})}{(\gamma^2 - \omega^2)\cosh(|{\vec{k}}|h_0)} \bigl( \cos(\omega(t-t_i)) - \cos(\gamma(t-t_i)) + \\ {\mathcal{H}}(t-t_i-t_r)[\cos(\omega(t-t_i-t_r)) + \cos(\gamma(t-t_i))]\bigr) \Bigr\},\end{gathered}$$ where $t_r$ is the rise time defined in Section \[sec:dyndisp\], $\gamma = {\pi}/{t_r}$, $$\omega^2 = g |{\vec{k}}| \tanh (|{\vec{k}}|h_0)$$ and ${\mathcal{F}}^{-1}$ is the inverse Fourier transform (see equation below). A similar expression can also be derived for the velocity potential $\phi ({\vec{x}},z,t)$, however we do not directly need it in our study. This analytical solution will be used below in numerical simulations. It has the advantage of being simple and, thus, computationally inexpensive. However, the flat bottom assumption ($h({\vec{x}}) = h_0 = const$) prevents us from using this solution beyond some small evolution times. The validity of this approximation has already been addressed in the literature [@Kervella2007; @Saito2009] and will be discussed at some point below. The weakly nonlinear (WN) model {#sec:WN} ------------------------------- A tsunami wave during its generation is usually well described by the Cauchy-Poisson problem [@Dutykh2006; @Kervella2007; @Saito2009]. The main reason for this simplification is the fact that a wave of a half meter amplitude represents only a tiny perturbation of over a 4000 m water column. However, the real world bathymetry is generally complex and may contain simultaneously various scales. For example, the subduction zone bathymetry represented on Figure \[fig:Java\_fault\] ranges from 7000 to 20 m and thus, nonlinear effects may be locally important. In order to take into account all realistic bathymetric features and study in detail the initial stages of tsunami propagation we describe below a new numerical model. We consider the physical setting and notation of Section \[sec:CP\]. The governing equations of the classical water wave problem are the following [@Lamb1932; @Stoker1958; @Mei1994; @Whitham1999]: $$\begin{aligned} \Delta\phi = {\nabla}^2\phi + \partial^2_{zz}\phi &=& 0, \qquad ({\vec{x}}, z) \in \Omega\times [-h, \eta], \label{eq:laplace} \\ {\partial_t}\eta + {\nabla}\phi\cdot{\nabla}\eta - {\partial_z}\phi &=& 0, \qquad z = \eta({\vec{x}}, t), \label{eq:kinematic} \\ {\partial_t}\phi + {{\textstyle{1\over2}}}|{\nabla}\phi|^2 + {{\textstyle{1\over2}}}({\partial_z}\phi)^2 + g\eta &=& 0, \qquad z = \eta({\vec{x}},t), \label{eq:bernoulli} \\ {\partial_t}h + {\nabla}\phi\cdot{\nabla}h + {\partial_z}\phi &=& 0, \qquad z = -h({\vec{x}},t), \label{eq:bottomkin}\end{aligned}$$ with $\phi$ the velocity potential, $g$ the acceleration due to gravity force and ${\nabla}= (\partial_x, \partial_y)$ denotes the gradient operator in horizontal Cartesian coordinates. The assumptions of fluid incompressibility and flow irrotationality lead to the Laplace equation (\[eq:laplace\]) for the velocity potential $\phi({\vec{x}}, z, t)$. The main difficulty of the water wave problem lies on the boundary conditions. Equations (\[eq:kinematic\]) and (\[eq:bottomkin\]) express the free-surface kinematic condition and bottom impermeability respectively, while the dynamic condition (\[eq:bernoulli\]) expresses the free surface isobarity. 
The bathymetry $h({\vec{x}},t)$ is decomposed into the static part $h_0({\vec{x}})$ (given e.g. by the ETOPO1 database, cf. Figure \[fig:Java\_fault\]) and the dynamic sea bed displacement $\zeta({\vec{x}},t)$ constructed above in (\[eq:zeta\]): $$\label{eq:bottom} h({\vec{x}},t) = h_0 ({\vec{x}}) - \zeta ({\vec{x}},t).$$ Recently, some weak dissipative effects have also been included in the classical water wave problem (\[eq:laplace\]) – (\[eq:bottomkin\]). For more details on the visco-potential formulation we refer to [@Dias2007; @DutykhDias2007; @Dutykh2007a; @Dutykh2008a; @Dutykh2008b]. In the sequel we will need the unit exterior normals to the fluid domain. It is straightforward to obtain the following expressions for the normals at the free surface and bottom respectively: $${\hat{n}}_f = \frac{1}{\sqrt{1 + |{\nabla}\eta|^2}} [-{\nabla}\eta, 1]^t, \qquad {\hat{n}}_b = \frac{1}{\sqrt{1 + |{\nabla}h|^2}} [-{\nabla}h, -1]^t.$$ In 1968 Zakharov proposed a different formulation of the water wave problem based on the trace of the velocity potential at the free surface [@Zakharov1968]: $$\label{eq:trace} {\varphi}({\vec{x}}, t) := \phi({\vec{x}}, \eta({\vec{x}},t), t).$$ This variable plays the role of generalized momentum in the Hamiltonian description of water waves [@Zakharov1968; @Dias2006a]. The second canonical variable is the free surface elevation $\eta$. Another important ingredient is the normal velocity at the free surface $v_n$ which is defined as: $$\label{eq:normalv} v_n ({\vec{x}},t) := \sqrt{1 + |{\nabla}\eta|^2}\left.{\frac{\partial\phi}{\partial{\hat{n}}_f}}\right|_{z=\eta} = \left.({\partial_z}\phi - {\nabla}\phi\cdot{\nabla}\eta)\right|_{z=\eta}.$$ The boundary conditions (\[eq:kinematic\]) and (\[eq:bernoulli\]) on the free surface can be rewritten in terms of ${\varphi}$, $v_n$ and $\eta$ [@Craig1992; @Craig1993; @Fructus2005]: $$\label{eq:dynamics} \begin{array}{rl} {\partial_t}\eta - {\mathcal{D}}_\eta({\varphi}) &= 0, \\ {\partial_t}{\varphi}+ {{\textstyle{1\over2}}}|{\nabla}{\varphi}|^2 + g\eta - \frac{1}{2(1+|{\nabla}\eta|^2)}\bigl[{\mathcal{D}}_\eta({\varphi}) + {\nabla}{\varphi}\cdot{\nabla}\eta \bigr]^2 &= 0. \end{array}$$ Here we introduced the Dirichlet-to-Neumann operator (D2N) ${\mathcal{D}}_\eta : {\varphi}\mapsto v_n$ [@Coifman1985; @Craig1993] which maps the velocity potential at the free surface ${\varphi}$ to the normal velocity $v_n$. The name of this operator comes from the fact that it denotes a correspondence between Dirichlet data ${\varphi}$ and Neumann data $\sqrt{1+|{\nabla}\eta|^2}\left. \displaystyle{{\frac{\partial\phi}{\partial{\hat{n}}_f}}} \right|_{z=\eta}$ on the free surface. We provide in Appendix \[app:ii\] the complete derivation of Zakharov’s formulation for the water wave problem.

### Numerical evaluation of the D2N operator

We saw above that the water wave problem can be reduced to a system of two PDEs governing the evolution of the canonical variables $\eta$ and ${\varphi}$. In order to solve this system of equations we must be able to compute efficiently the quantity ${\mathcal{D}}_\eta({\varphi})$. In this section we present a simple method for the numerical computation of the D2N operator, which is appropriate for the application of the linearized Euler model in the solution of problems dealing with tsunami generation. This approach is based on the extensive use of Fourier transforms. On the discrete level this transformation can be efficiently implemented with the Fast Fourier Transform (FFT) algorithm [@Cooley1965; @Frigo2005]. 
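On the discrete level, the only building blocks needed in what follows are the wavenumber grids and spectral differentiation. A minimal sketch (Python/NumPy, uniform grid with spacings $dx$, $dy$ assumed, sign conventions consistent with the transforms recalled next) reads:

```python
import numpy as np

def wavenumbers(nx, ny, dx, dy):
    """Discrete wavenumber grids consistent with NumPy's FFT layout."""
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    return KX, KY, np.hypot(KX, KY)

def spectral_gradient(f, KX, KY):
    """Horizontal gradient evaluated spectrally, grad f = F^{-1}[i k F[f]]."""
    f_hat = np.fft.fft2(f)
    return (np.real(np.fft.ifft2(1j * KX * f_hat)),
            np.real(np.fft.ifft2(1j * KY * f_hat)))
```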
The direct ${\mathcal{F}}$ and inverse ${\mathcal{F}}^{-1}$ Fourier transforms in 2D are defined as follows: $$\label{eq:fourier} {\mathcal{F}}[f] = \hat{f}({\vec{k}}) = \int\limits_{{\mathbb{R}}^2} f({\vec{x}}) e^{-i{\vec{k}}\cdot{\vec{x}}}\; d{\vec{x}}, \quad {\mathcal{F}}^{-1}[\hat{f}] = f({\vec{x}}) = \frac{1}{(2\pi)^2}\int\limits_{{\mathbb{R}}^2} \hat{f}({\vec{k}}) e^{i{\vec{k}}\cdot{\vec{x}}}\; d{\vec{k}}.$$ The problem to be solved is $$\begin{aligned} \label{eq:lapl} {\nabla}^2\phi + \partial^2_{zz}\phi &=& 0, \quad ({\vec{x}}, z) \in \Omega\times [-h, \eta], \\ \phi &=& {\varphi}, \quad z=\eta, \label{eq:dirichlet} \\ \sqrt{1 + |{\nabla}h|^2}\displaystyle{{\frac{\partial\phi}{\partial{\hat{n}}_b}}} &=& {\partial_t}h, \quad z=-h. \label{eq:neumann}\end{aligned}$$ Once the function $\phi$ is determined, we must compute its normal derivative on the free surface (\[eq:normalv\]). Since a tsunami wave induces a special flow regime in which the horizontal extent is much more important than the variations in the vertical direction, we can apply the Fourier transform to the Laplace equation (\[eq:lapl\]) as if it were posed in a strip-like domain: $${\frac{d ^2{\hat{\phi}}}{d z^2}} - |{\vec{k}}|^2{\hat{\phi}}= 0.$$ The general exact solution to this ODE can be easily computed: $$\label{eq:gensol} {\hat{\phi}}({\vec{k}}; z) = A({\vec{k}})\cosh(|{\vec{k}}|z) + B({\vec{k}})\sinh(|{\vec{k}}|z).$$ The two unknown functions $A({\vec{k}})$ and $B({\vec{k}})$ must be determined from the boundary conditions (\[eq:dirichlet\]), (\[eq:neumann\]). For the sake of convenience we rewrite the Neumann boundary condition at the bottom (\[eq:neumann\]) in this form: $$\label{eq:forcing} \left.{\frac{\partial\phi}{\partial z}}\right|_{z=-h} = -{\partial_t}h - \left.{\nabla}\phi\right|_{z=-h}\cdot{\nabla}h \equiv f({\vec{x}}, t).$$ The right-hand side will be denoted by $f({\vec{x}},t)$, which implicitly depends on the solution $\phi$. The application of the boundary conditions leads to the following system of linear equations: $$\begin{aligned} \cosh(|{\vec{k}}|\eta)A({\vec{k}}) + \sinh(|{\vec{k}}|\eta)B({\vec{k}}) &=& \hat{{\varphi}} \\ -|{\vec{k}}|\sinh(|{\vec{k}}|h)A({\vec{k}}) + |{\vec{k}}|\cosh(|{\vec{k}}|h)B({\vec{k}}) &=& \hat{f},\end{aligned}$$ which can be easily solved: $$A({\vec{k}}) = \frac{\hat{{\varphi}}\cosh(|{\vec{k}}|h) - \hat{f}\displaystyle{\frac{\sinh(|{\vec{k}}|\eta)}{|{\vec{k}}|}}}{\cosh(|{\vec{k}}|H)}, \quad B({\vec{k}}) = \frac{\hat{{\varphi}}\sinh(|{\vec{k}}|h) + \hat{f}\displaystyle{\frac{\cosh(|{\vec{k}}|\eta)}{|{\vec{k}}|}}}{\cosh(|{\vec{k}}|H)}.$$ Here, $H = h + \eta$ is the total water depth. The knowledge of these functions provides the velocity potential in the whole domain thanks to the general solution (\[eq:gensol\]). Finally, we compute the normal velocity $v_n$ on the free surface (\[eq:normalv\]). If we compute this quantity in Fourier space, the answer will be given immediately by the inverse transform ${\mathcal{F}}^{-1}$. 
The first term of $v_n$ is readily given by the formula $$\left.{\partial_z}{\hat{\phi}}\right|_{z=\eta} = \hat{{\varphi}}|{\vec{k}}|\tanh(|{\vec{k}}|H) + \hat{f}{\mathop{\mathrm{sech}}}(|{\vec{k}}|H).$$ To compute the second term we use the following approximate expression: $$\label{eq:nlinproduct} \widehat{\left.{\nabla}\phi\right|_{z=\eta}\cdot{\nabla}\eta} = {\mathcal{F}}\Bigl[{\mathcal{F}}^{-1}\bigl[i{\vec{k}}\hat{{\varphi}}\bigr]\cdot{\mathcal{F}}^{-1}\bigl[i{\vec{k}}\hat{\eta}\bigr]\Bigr].$$ Equation (\[eq:forcing\]) indicates that the function $f({\vec{x}},t_n)$ depends implicitly on the unknown solution $\phi({\vec{x}}, z, t_n)$. To resolve this implicit dependence, we apply a fixed-point iteration initialized with the value of $f({\vec{x}},t_{n-1})$ from the previous time step: $$\hat{f}^{k+1} = -\widehat{{\partial_t}h} - {\mathcal{F}}\Bigl[\left.{\nabla}\phi\right|_{z=-h}(\hat{f}^k) \cdot{\nabla}h\Bigr], \quad \hat{f}^0 = \hat{f}({\vec{k}}; t_{n-1}).$$ The last product is computed in the physical space: $${\mathcal{F}}\Bigl\{\left.{\nabla}\phi\right|_{z=-h}(f^k)\cdot{\nabla}h\Bigr\} = {\mathcal{F}}\Bigl[ {\mathcal{F}}^{-1}\bigl[\left.\widehat{{\nabla}\phi}\right|_{z=-h}(\hat{f}^k)\bigr]\cdot {\nabla}h\Bigr].$$ Simple computations yield $$\left.\widehat{{\nabla}\phi}\right|_{z=-h}(\hat{f}^k) = i{\vec{k}}\Bigl[ \hat{{\varphi}}{\mathop{\mathrm{sech}}}(|{\vec{k}}|H) - \hat{f}^k\frac{\tanh(|{\vec{k}}|H)}{|{\vec{k}}|} \Bigr].$$ Our numerical experiments show that this iterative procedure is convergent and the tolerance $\varepsilon := ||\hat{f}^{k+1} - \hat{f}^k||_\infty \leq 10^{-5}$ is reached after four iterations on average. The resulting model is only weakly nonlinear since Laplace’s equation is solved using the Fourier transform in a strip-like domain. Consequently, there is an implicit linearization in the solution procedure. However, the WN model, contrary to the CP model, not only takes into account some nonlinear effects but can also be applied efficiently to cases with realistic bathymetry. We note that this model is similar to the first order approximation model proposed in [@Guyenne2007] if in our method we further simplify all expressions by replacing the total water depth $H$ by the undisturbed depth $h$.

Time integration
----------------

Applying the above Fourier-type spectral method to the equations governing the evolution of the canonical variables $\eta$ and ${\varphi}$ leads to a system of ordinary differential equations, i.e. $$\label{ode1} \Phi_t = {\mathcal{A}}(t, \Phi), \qquad \Phi(t_0) = \Phi_0, \qquad \Phi = (\eta,{\varphi})^T.$$ In order to integrate numerically this system of ODEs we apply an integrating factor method analogous to the one used in [@Fructus2005; @Xu2009]. This method decreases the stiffness of the system of ODEs and therefore allows for an efficient application of explicit time integration schemes. We start by extracting the linear part of equations (\[ode1\]): $$\label{ode2} \Phi_t + {\mathcal{L}}\cdot\Phi = {\mathcal{N}}(\Phi),$$ where $ {\mathcal{L}}= \begin{pmatrix} 0 & -\frac{\omega^2}{g} \\ g & 0 \\ \end{pmatrix}$ and $\omega = \sqrt{g|{\vec{k}}|\tanh(|{\vec{k}}|h_0)}$ is the wave frequency corresponding to the wave number $|{\vec{k}}|$. For a general bathymetry we choose the constant $h_0$ to be the mean water depth. (We note that we use the arithmetic average of values provided by the ETOPO1 database in the region under consideration.) 
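Before turning to the nonlinear terms, the evaluation of ${\mathcal{D}}_\eta({\varphi})$ described in the previous subsection can be summarized in the following sketch (Python/NumPy). For readability the total depth entering the hyperbolic coefficients is frozen at a constant $H_0$, whereas the formulas above keep the local value $H = h + \eta$; the sketch is an illustration rather than the code used for the simulations reported below.

```python
import numpy as np

def d2n(phi_s, eta, h, dhdt, dx, dy, H0, f_hat0=None, tol=1e-5, max_iter=50):
    """Spectral evaluation of v_n = D_eta(phi_s) following the derivation above,
    with the total depth frozen at the constant H0 for simplicity."""
    ny, nx = eta.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    k_safe = np.where(k == 0.0, 1.0, k)          # the k = 0 mode is harmless below

    def igrad(f_hat):                            # F^{-1}[i k f_hat]
        return (np.real(np.fft.ifft2(1j * KX * f_hat)),
                np.real(np.fft.ifft2(1j * KY * f_hat)))

    phi_hat = np.fft.fft2(phi_s)
    hx, hy = igrad(np.fft.fft2(h))               # grad h
    ex, ey = igrad(np.fft.fft2(eta))             # grad eta

    # Fixed-point iteration for the bottom forcing f = -h_t - grad(phi)|_{z=-h} . grad h
    f_hat = np.zeros_like(phi_hat) if f_hat0 is None else f_hat0
    for _ in range(max_iter):
        coeff = phi_hat / np.cosh(k * H0) - f_hat * np.tanh(k * H0) / k_safe
        gx, gy = igrad(coeff)                    # grad(phi) evaluated at the bottom
        f_new = np.fft.fft2(-dhdt - (gx * hx + gy * hy))
        if np.max(np.abs(f_new - f_hat)) <= tol:
            f_hat = f_new
            break
        f_hat = f_new

    # v_n = dphi/dz|_{z=eta} - grad(phi)|_{z=eta} . grad(eta)
    dphidz = np.real(np.fft.ifft2(phi_hat * k * np.tanh(k * H0) + f_hat / np.cosh(k * H0)))
    px, py = igrad(phi_hat)                      # surface gradient, cf. the approximation above
    return dphidz - (px * ex + py * ey), f_hat
```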
The term ${\mathcal{N}}(\Phi)$ incorporates the remaining nonlinear terms: $${\mathcal{N}}(\Phi) = \begin{pmatrix} {\mathcal{F}}\bigl\{{\mathcal{D}}_\eta({\varphi})\bigr\} - \frac{\omega^2}{g}\hat{{\varphi}} \\ {\mathcal{F}}\Bigl\{\frac{1}{2(1+|{\nabla}\eta|^2)}\bigl[{\mathcal{D}}_\eta({\varphi}) + {\nabla}{\varphi}\cdot{\nabla}\eta \bigr]^2 - {{\textstyle{1\over2}}}|{\nabla}{\varphi}|^2\Bigr\} \end{pmatrix}.$$ The linear terms can be integrated exactly by the following change of variables: $$\Psi(t) := e^{{\mathcal{L}}(t-t_0)}\Phi(t), \qquad e^{{\mathcal{L}}(t-t_0)} = \begin{pmatrix} \cos(\omega(t-t_0)) & -\frac{\omega}{g}\sin(\omega(t-t_0)) \\ \frac{g}{\omega}\sin(\omega(t-t_0)) & \cos(\omega(t-t_0)) \end{pmatrix}.$$ Consequently, we solve in practice the following system of ODEs: $$\Psi_t = e^{{\mathcal{L}}(t-t_0)}{\mathcal{N}}\bigl(e^{-{\mathcal{L}}(t-t_0)}\Psi\bigr) \equiv {\mathcal{B}}(t,\Psi), \qquad \Psi(t_0) = \Phi_0.$$ This simple modification allows us to take larger CFL numbers, thus improving the overall time stepping performance. Finally, the system of ODEs is discretized by the standard fourth-order Runge-Kutta (RK4) scheme [@Hairer2009]: $$\label{eq:rk4} \begin{array}{rl} \Psi_{n+1} &= \Psi_n + \frac16 \Delta t (k_1 + 2k_2 + 2k_3 + k_4), \\ k_1 &= {\mathcal{B}}(t_n, \Psi_n), \\ k_2 &= {\mathcal{B}}(t_n + \frac12 \Delta t, \Psi_n + \frac12 \Delta t \; k_1), \\ k_3 &= {\mathcal{B}}(t_n + \frac12 \Delta t, \Psi_n + \frac12 \Delta t \; k_2), \\ k_4 &= {\mathcal{B}}(t_n + \Delta t, \Psi_n + \Delta t \; k_3), \end{array}$$ where the subscript refers to the discrete time instance $\Psi_n := \Psi (t_n)$ and $\Delta t$ is the discrete time step: $t_{n+1} = t_n + \Delta t$. In the computations described below, we use a Runge-Kutta (4,5) scheme with an adaptive time step control (cf. [@Dormand1980]). However it is not so fundamentally different from the classical RK4 scheme described above. The BBM-BBM type system ----------------------- When the long wave approximation is applied to the water wave problem – , one obtains the well-known nonlinear shallow water (or Saint-Venant) equations [@SV1871; @Stoker1958; @Whitham1999] which have been extensively used for tsunami simulations [@Imamura1996; @Titov1997; @TS; @DeKaKa; @Dutykh2009a]. If we go further in the asymptotic expansions, some dispersive effects can be included and generally the resulting system is referred to as Boussinesq system [@Boussinesq1872; @BCS; @Madsen03; @Dutykh2007; @DMS1; @DMS2]. In this study we use the Boussinesq system of BBM-BBM type with variable bottom derived in [@Mitsotakis2007]. See also [@Peregrine1967; @Chazel2007]. 
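For completeness, the integrating-factor time stepping described in the previous subsection can be sketched as follows (Python/NumPy). The nonlinear right-hand side ${\mathcal{N}}$ is assumed to be provided by the D2N evaluation discussed above; this is a schematic illustration, not the code used for the computations reported below.

```python
import numpy as np

def expL(tau, omega, g=9.81):
    """Entries of exp(L*tau) acting on (eta_hat, phi_hat) for each wavenumber;
    the limit g*tau is used for the omega = 0 mode."""
    c, s = np.cos(omega * tau), np.sin(omega * tau)
    a21 = np.where(omega == 0.0, g * tau, g * s / np.where(omega == 0.0, 1.0, omega))
    return c, -omega / g * s, a21, c

def apply2x2(m, u, v):
    a11, a12, a21, a22 = m
    return a11 * u + a12 * v, a21 * u + a22 * v

def make_rhs(N, t0, omega, g=9.81):
    """B(t, Psi) = exp(L(t - t0)) N(exp(-L(t - t0)) Psi) for Psi = (eta_hat, phi_hat)."""
    def B(t, Psi):
        eta_hat, phi_hat = apply2x2(expL(-(t - t0), omega, g), Psi[0], Psi[1])
        n_eta, n_phi = N(t, eta_hat, phi_hat)    # nonlinear terms in Fourier space
        return np.stack(apply2x2(expL(t - t0, omega, g), n_eta, n_phi))
    return B

def rk4_step(B, t, Psi, dt):
    """Classical fourth-order Runge-Kutta step for Psi_t = B(t, Psi)."""
    k1 = B(t, Psi)
    k2 = B(t + 0.5 * dt, Psi + 0.5 * dt * k1)
    k3 = B(t + 0.5 * dt, Psi + 0.5 * dt * k2)
    k4 = B(t + dt, Psi + dt * k3)
    return Psi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```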
The system in dimensional variables can be written as: $$\label{eq:Bouss} \begin{array}{rl} \eta_t + {\nabla\cdot}((h_0+\eta){\vec{u}}) + {\nabla\cdot}\left\{A h_0^2[\nabla({\nabla}h_0\cdot{\vec{u}}) + {\nabla}h_0{\nabla\cdot}{\vec{u}}] - bh_0^2{\nabla}\eta_t\right\} + & \\ A\nabla\cdot(h_0^2\nabla\zeta_t) + \zeta_t & = 0, \\ {\vec{u}}_t + g{\nabla}\eta + \frac12 {\nabla}|{\vec{u}}|^2 + B g h_0[{\nabla}({\nabla}h\cdot\nabla\eta) + {\nabla}h_0\Delta\eta] - d h_0^2\Delta{\vec{u}}_t - B h_0{\nabla}\zeta_{tt} &= 0, \end{array}$$ where $A$, $B$, $b$ and $d$ are constants defined as: $$A = \sqrt{\frac{2}{3}}-\frac{2}{3}, \quad B = 1-\sqrt{\frac{2}{3}}, \quad b = d = \frac{1}{6}.$$ The variable ${\vec{u}}({\vec{x}},t)$ denotes the horizontal velocity of the fluid at $z=-h+\sqrt{2/3}(\eta+h)$, and the bathymetry variables $h({\vec{x}},t)$, $h_0({\vec{x}})$, $\zeta ({\vec{x}},t)$ are defined in Section \[sec:displ\]. We integrate numerically the system by using the standard Galerkin/finite element method with ${\mathbb{P}}1$ elements for the spatial discretization coupled with an explicit, second-order Runge-Kutta method for the temporal discretization (so-called improved Euler scheme) [@Hairer2009]. A proof that the semidiscrete system is not stiff and thus that the specific RK method is sufficient can be found in [@Dougalis2010]. In order to obtain a well-posed problem, we impose homogeneous Dirichlet boundary conditions which absorb partially the wave while reflecting only small amplitude oscillatory waves. Moreover, the specific numerical method appears to converge with optimal rate in the $L^2$ and $L^\infty$ norms whether we consider structured or unstructured grids. This is contrary to the analogous initial boundary value problems with zero Dirichlet boundary conditions on ${\vec{u}}$ for the Peregrine system [@Peregrine1967] where the analogous numerical method converges with suboptimal orders on structured and unstructured grids. For more information on the properties and the implementation of the numerical method for a BBM-BBM type system we refer to [@DMS1; @Mitsotakis2007]. Numerical results {#sec:numres} ================= In this section we compare the propagation of a solitary wave when it is used as an initial condition in both the CP and WN models. Moreover, we study the generation and the initial stages of the propagation of the tsunami wave of the July 17, 2006 event. We also present a comparison between the WN, CP and Boussinesq models. Solitary wave propagation ------------------------- Before performing the Java 2006 tsunami generation simulations, we study the propagation of a solitary wave solution to the full water wave problem using the WN and CP models. The initial condition is a solitary wave, computed by using the method presented by Tanaka [@Tanaka1986]. Consider the two-dimensional water wave problem in a channel of uniform depth $h_0 = const$. Since we look for travelling wave solutions, the flow field can be reduced to the steady state by choosing a frame of reference moving with the wave speed $c$. The introduction of dimensionless variables leads to a single scaling parameter, the Froude number ${\mathrm{Fr}}$ defined as ${\mathrm{Fr}}:= \displaystyle{\frac{c}{\sqrt{gh_0}}}$. Hereafter, the governing equations are considered in dimensionless form. The complex velocity potential is classically introduced as $w = \phi + i\psi$, where $\psi$ is the stream function. We choose $\phi = 0$ at the crest and $\psi = 0$ at the bottom. 
The fluid region is then mapped onto the strip $0<\psi<1$, $-\infty<\phi<\infty$ on the plane $w$ with $\psi = 1$ corresponding to the free surface. We introduce the quantity $\Omega = \log\displaystyle{{\frac{d w}{d z}}} = \tau - i\theta$, where $\theta$ is the angle between the velocity vector and horizontal axis $Ox$. The real part $\tau$ is expressed in terms of the velocity magnitude $q$ as $\tau = \log q$. The boundary conditions to be satisfied are the dynamic condition on the free surface and the bottom impermeability which are expressed as $$\label{eq:bcond} {\frac{d q^3}{d \phi}} = -\frac{3}{{\mathrm{Fr}}^2}\,\sin\theta, \quad\mbox{on } \psi = 1 \qquad \mbox{ and } \qquad \theta = 0, \quad \mbox{on } \psi = 0.$$ Consequently, the problem is now transformed into the determination of the complex function $\Omega$, analytic with respect to $w$ within the region of the unit strip $0 < \psi < 1$, decaying at infinity and satisfying the boundary conditions (\[eq:bcond\]). By applying Cauchy’s integral theorem, one can find the following integral equation on the free surface $\psi = 1$: $$-\theta(\phi) - \frac{2}{\pi} \int\limits_{-\infty}^\infty\frac{\theta(\phi)}{({\varphi}-\phi)^2 + 4}\;d{\varphi}= -\frac{1}{\pi}\int\limits_{-\infty}^\infty \frac{({\varphi}- \phi)\tau({\varphi})}{({\varphi}- \phi)^2 + 4}\; d{\varphi}+ \frac{1}{\pi}\mbox{p.v.}\int\limits_{-\infty}^\infty \frac{\tau({\varphi})}{{\varphi}-\phi}\; d{\varphi},$$ where $\tau(\phi)$ and $\theta(\phi)$ denote the traces of the corresponding functions on the free surface $\psi = 1$. The integral equation is solved iteratively. The convergence is tested with respect to the Froude number. Several solitary wave solutions computed in this way are plotted on Figure \[fig:tanaka\] for illustrative purposes. ![Solitary wave solutions of various amplitudes for the full water wave problem. Both $x$ and $\eta$ have been non-dimensionalized by the depth $h_0$.[]{data-label="fig:tanaka"}](figs/tanaka.eps){width="85.00000%"} In order to illustrate the advantages of the proposed WN model over the classical CP solution, we let a solitary wave with amplitude $A/h_0 = 0.1$ propagate up to $T = 80$ (in this section we use dimensionless quantities and time $T$ is non-dimensionalized by $\sqrt{g/h_0}$). We recall that the classical CP solution of – corresponding to the initial free surface height $\left.\eta\right|_{t=0} = \eta_0(x)$ and the velocity potential distribution at the free surface $\left.{\varphi}\right|_{t=0} = {\varphi}_0(x)$, takes the following form: $$\eta ({\vec{x}},t) = {\mathcal{F}}^{-1}\Bigl\{\hat\eta_0({\vec{k}})\cos(\omega t) + \frac{\omega}{g}\hat{\varphi}_0({\vec{k}})\sin(\omega t)\Bigr\},$$ $$\phi ({\vec{x}}, z, t) = {\mathcal{F}}^{-1}\Bigl\{\bigl( \hat{\varphi}_0({\vec{k}})\cos(\omega t) - \frac{g}{\omega}\hat\eta_0({\vec{k}})\sin(\omega t) \bigr) \bigl(\cosh(|{\vec{k}}|z) + \tanh(|{\vec{k}}|h)\sinh(|{\vec{k}}|z)\bigr)\Bigr\},$$ where $\hat\eta_0({\vec{k}}) = {\mathcal{F}}\{\eta_0({\vec{x}})\}$ and $\hat{\varphi}_0({\vec{k}}) = {\mathcal{F}}\{{\varphi}_0({\vec{x}})\}$ are the Fourier transforms of the initial conditions. The solution profiles of both models are presented in Figures \[fig:tansev\] (a)–(e). One observes that the WN model preserves quite well the shape of the solitary wave while shedding a small dispersive tail behind. The CP solution gradually transforms the initial wave into a dispersive tail according to the linear nature of equations –. 
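A minimal sketch of the CP evolution formula above, written for one horizontal dimension as in the present channel test (Python/NumPy, uniform periodic grid assumed), reads:

```python
import numpy as np

def cp_evolve(eta0, phi0, t, dx, h0, g=9.81):
    """Linearized flat-bottom (CP) solution at time t for initial surface
    elevation eta0 and surface potential phi0, given on a uniform periodic
    1-D grid of spacing dx."""
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(eta0.size, d=dx))
    omega = np.sqrt(g * k * np.tanh(k * h0))
    eta_hat = (np.fft.fft(eta0) * np.cos(omega * t)
               + omega / g * np.fft.fft(phi0) * np.sin(omega * t))
    return np.real(np.fft.ifft(eta_hat))
```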
In Figure \[fig:tansev\] (f) we present the normalized amplitude error defined as: $$\epsilon (t) := \frac{|\max\limits_x\{\eta(x,t)\} - A/h_0|}{A/h_0},$$ where $\max\limits_x\{\eta(x,t)\}$ denotes the discrete maximum of the numerical solution and $A/h_0 = 0.1$ is the exact solitary wave amplitude. In both computations a uniform grid of 512 nodes is used. Here, again, we notice a better performance of the WN solver compared to that of the CP solution. This specific experiment shows that the WN model is a better model compared to the CP solution when nonlinear effects must be included for the study of tsunami generation and propagation. The July 17, 2006 tsunami generation simulation ----------------------------------------------- The main purpose of this study is to present a novel methodology for tsunami generation problems. This approach is illustrated on the example of the July 17, 2006 Java tsunami since this event is not completely understood yet and there is an available finite fault solution for the presumed generating underwater earthquake. In this section we show a practical application of the WN method for water waves generated by a moving bottom. Namely, we exploit the bottom motion constructed in Section \[sec:dyndisp\]. The corresponding hydrodynamic problem is solved by the three methods discussed above: the linearized water wave problem (CP), BBM-BBM system and the novel WN model. The solution given by the WN model and the exact solutions to the linearized Euler equations – are computed on a uniform grid of $512 \times 512$ points. The time step $\Delta t$ is chosen adaptively according to the RK(4,5) method proposed in [@Dormand1980]. The BBM-BBM system is solved on a triangular unstructured grid of $86276$ elements. The time integration is performed with the classical RK2 scheme [@Hairer2009] with time step $\Delta t = 0.5$ s. Several snapshots of the free surface elevation computed with the WN model are shown in Figures \[fig:soleuler\] (a) – (f). Analogous contour plots of the solutions of the CP and BBM-BBM models are almost identical and differences cannot be observed within graphical accuracy. Therefore, they are not presented here. The parameters of the bottom motion, bathymetry and computational domain geometry were explained in Section \[sec:displ\]. In this computation, we see a complex process of simultaneous wave evolution together with rupture propagation during approximately 210 s. Namely, the free surface deformed by the rupture of the first subfaults evolves while the rupture continues to propagate along the fault. This kind of fluid/moving bottom interaction cannot be described in the static generation framework, cf. Figure \[fig:gauges\]. In order to compare the three models described above we put eight numerical wave gauges at the following locations: six close to the source ((a) $(107.2^\circ$, $-9.388^\circ)$, (b) $(107.4^\circ$, $-9.205^\circ)$, (c) $(107.6^\circ$, $-9.648^\circ)$, (d) $(107.7^\circ$, $-9.411^\circ)$, (e) $(108.3^\circ$, $-10.02^\circ)$, (f) $(108.2^\circ$, $-9.75^\circ)$) and two further away from the source area ((g) $(108^\circ$, $-10.5^\circ)$, (h) $(108^\circ$, $-9^\circ)$). The locations of the wave gauges are represented by the symbol $\diamond$ on Figure \[fig:gaugespos\] along with the static sea bed displacement. 
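The synthetic gauge records are obtained by interpolating the computed free-surface fields at the gauge positions; a minimal sketch (Python with SciPy, grid vectors assumed ascending, gauge coordinates taken from the list above) reads:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Gauge coordinates (longitude, latitude) of gauges (a)-(h) listed above.
gauges = np.array([[107.2, -9.388], [107.4, -9.205], [107.6, -9.648], [107.7, -9.411],
                   [108.3, -10.02], [108.2, -9.75], [108.0, -10.5], [108.0, -9.0]])

def sample_gauges(lon, lat, eta, gauges):
    """Bilinear interpolation of the free-surface field eta (shape (nlat, nlon))
    at the gauge positions; called at every stored time level to build the
    synthetic records."""
    interp = RegularGridInterpolator((lat, lon), eta, bounds_error=False)
    return interp(gauges[:, ::-1])               # interpolator expects (lat, lon)
```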
![Location of the eight numerical wave gauges (indicated by the symbol $\diamond$) superposed with the static co-seismic bottom displacement.[]{data-label="fig:gaugespos"}](figs/gauges.eps)

The eight wave gauge records are presented in Figures \[fig:gauges\] (a)–(h). In order to show the importance of the dynamics of the rupture process, the records obtained from the static approach are also included. The overall agreement among the three dynamic models appears to be satisfactory. We underline that the CP solution is very close to the other solutions despite the fact that the bathymetric features are neglected. We also note that the specific BBM-BBM type system underestimates by a small amount the maximum wave amplitude compared to the WN model. Further numerical tests showed some sensitivity of the BBM-BBM solution to the bottom motion scenario [@Dutykh2006]. Namely, we can report, for example, that the exponential scenario led to a slightly larger wave amplitude compared to the other models. As expected, the static approach exhibits differences both in the shape and in the arrival time of the waves. Further away from the source area, the CP solution continues to be accurate. This is due to the fact that nonlinearity is not important during the propagation stage of such small amplitude waves.

[![Free surface elevation computed numerically with four models at eight wave gauges located approximately at the local extrema of the static bottom displacement. The elevation (vertical axis) is expressed in meters, while the time (horizontal axis) is in seconds. The models 1, 2 and 4 use the dynamic finite-fault rupture (weakly nonlinear model, linearized Euler equations, BBM-BBM model). The third model uses the static approach (weakly nonlinear model).[]{data-label="fig:gauges"}](figs/gauge1.eps "fig:")]{} [![](figs/gauge2.eps "fig:")]{} [![](figs/gauge3.eps "fig:")]{} [![](figs/gauge4.eps "fig:")]{}
[![](figs/gauge5.eps "fig:")]{} [![](figs/gauge6.eps "fig:")]{} [![](figs/gauge8.eps "fig:")]{} [![](figs/gauge7.eps "fig:")]{}

Conclusions and perspectives {#sec:concl}
============================

In the present work we considered an important issue in the modeling of tsunami generation. Namely, a new method for the construction of dynamic co-seismic sea bed displacements was proposed. This method basically relies on two main ingredients:

- the finite fault solution [@Bassin2000; @Ji2002] gives the slip distribution along the fault
- dynamic sea bed deformation scenarios [@Hammack; @ddk; @Dutykh2006] allow us to take into account available information on the rupture dynamics

To our knowledge, this reconstruction of the bottom motion is new. All developments presented in this paper are illustrated on the example of the July 17, 2006 Java event. Along with the bottom motion construction, we discussed three models to solve approximately the corresponding hydrodynamic problem and compute the induced free surface motions. The July 17, 2006 tsunami generation case was computed with three different models and a comparison was performed. We obtained a surprisingly good agreement between the CP solution and the solutions of the other two models. Recall that in the CP solution the bottom is assumed to be flat. Discrepancies will appear later in time since the bathymetry plays a crucial role in the tsunami propagation. Taking into account the simplicity and the relatively good accuracy of the new WN approximation to the full water wave problem with time dependent variable bottom, we suggest its use for the computation of the initial stages ($\approx 300$ s) of the life of a tsunami. 
The propagation and runup can be computed afterwards by other sophisticated tools [@Titov1997; @Imamura2006; @noaa_report; @Dutykh2009a], some of them being already integrated into tsunami warning systems [@Titov2005; @Weinstein2008]. However we point out that extreme runup values measured after the July, 17 Java 2006 event [@Fritz2007] deserve additional numerical studies. Finite fault parameters {#app:i} ======================= [c|c|c|c|c]{} \ Latitude, ${}^\circ$ & Longitude, ${}^\circ$ & Depth, $km$ & Slip, $cm$ & Rake, ${}^\circ$\ Latitude, ${}^\circ$ & Longitude, ${}^\circ$ & Depth, $km$ & Slip, $cm$ & Rake, ${}^\circ$\ \ -10.33298 & 109.17112 & 6.81260 & 5.01844 & 121.65860\ -10.28919 & 109.04183 & 6.81260 & 4.31652 & 80.93857\ -10.24541 & 108.91254 & 6.81260 & 48.94745 & 85.43047\ -10.20162 & 108.78325 & 6.81260 & 3.60585 & 101.68500\ -10.15784 & 108.65396 & 6.81260 & 0.86479 & 67.04596\ -10.11405 & 108.52467 & 6.81260 & 0.96921 & 99.45411\ -10.07027 & 108.39538 & 6.81260 & 0.62447 & 71.54340\ -10.02648 & 108.26609 & 6.81260 & 0.02449 & 99.44887\ -9.98270 & 108.13680 & 6.81260 & 2.71502 & 119.63240\ -9.93891 & 108.00751 & 6.81260 & 0.57000 & 114.25760\ -9.89513 & 107.87822 & 6.81260 & 14.54725 & 112.71920\ -9.85134 & 107.74893 & 6.81260 & 31.66312 & 107.26750\ -9.80756 & 107.61964 & 6.81260 & 2.74176 & 85.79224\ -9.76377 & 107.49035 & 6.81260 & 3.35868 & 78.97166\ -9.71999 & 107.36105 & 6.81260 & 67.95367 & 64.89334\ -9.67620 & 107.23177 & 6.81260 & 62.33453 & 65.43832\ -9.63242 & 107.10248 & 6.81260 & 35.33318 & 66.90181\ -9.58863 & 106.97318 & 6.81260 & 1.75233 & 101.93900\ -9.54485 & 106.84389 & 6.81260 & 40.63542 & 81.77631\ -9.50106 & 106.71461 & 6.81260 & 84.20831 & 68.95723\ -9.45728 & 106.58531 & 6.81260 & 25.12981 & 66.62241\ -10.24093 & 109.20313 & 8.78887 & 0.68254 & 88.79068\ -10.19714 & 109.07384 & 8.78887 & 30.70282 & 97.90491\ -10.15336 & 108.94455 & 8.78887 & 76.07102 & 99.93182\ -10.10957 & 108.81525 & 8.78887 & 0.56201 & 79.59160\ -10.06579 & 108.68597 & 8.78887 & 0.95023 & 114.32920\ -10.02201 & 108.55668 & 8.78887 & 64.78191 & 121.81120\ -9.97822 & 108.42738 & 8.78887 & 81.31910 & 105.21240\ -9.93443 & 108.29810 & 8.78887 & 137.60680 & 121.72020\ -9.89065 & 108.16881 & 8.78887 & 85.81732 & 88.13734\ -9.84686 & 108.03951 & 8.78887 & 30.61069 & 80.38488\ -9.80308 & 107.91022 & 8.78887 & 60.08308 & 113.75000\ -9.75929 & 107.78094 & 8.78887 & 46.98381 & 96.25403\ -9.71551 & 107.65164 & 8.78887 & 21.69421 & 80.82516\ -9.67173 & 107.52235 & 8.78887 & 11.01957 & 112.63110\ -9.62794 & 107.39307 & 8.78887 & 27.85978 & 75.88463\ -9.58416 & 107.26377 & 8.78887 & 5.96505 & 77.66200\ -9.54037 & 107.13448 & 8.78887 & 3.85634 & 83.57522\ -9.49658 & 107.00520 & 8.78887 & 3.23158 & 113.73070\ -9.45280 & 106.87590 & 8.78887 & 29.89915 & 116.10890\ -9.40902 & 106.74661 & 8.78887 & 65.25044 & 72.60931\ -9.36523 & 106.61732 & 8.78887 & 19.62932 & 65.99193\ -10.14888 & 109.23514 & 10.76514 & 20.60663 & 124.43320\ -10.10510 & 109.10584 & 10.76514 & 69.91051 & 122.64720\ -10.06131 & 108.97655 & 10.76514 & 63.10052 & 99.23547\ -10.01753 & 108.84727 & 10.76514 & 0.63700 & 74.09311\ -9.97374 & 108.71797 & 10.76514 & 1.02761 & 117.53560\ -9.92996 & 108.58868 & 10.76514 & 85.54328 & 123.64950\ -9.88617 & 108.45940 & 10.76514 & 167.18620 & 104.56840\ -9.84239 & 108.33010 & 10.76514 & 202.60880 & 122.12460\ -9.79860 & 108.20081 & 10.76514 & 144.76970 & 81.50333\ -9.75482 & 108.07152 & 10.76514 & 53.97212 & 72.84430\ -9.71103 & 107.94223 & 10.76514 & 79.21021 & 98.66053\ -9.66725 & 107.81294 & 
10.76514 & 82.95619 & 80.81979\ -9.62346 & 107.68365 & 10.76514 & 119.13390 & 74.36982\ -9.57968 & 107.55436 & 10.76514 & 95.90159 & 116.24710\ -9.53589 & 107.42507 & 10.76514 & 36.94965 & 102.32060\ -9.49211 & 107.29578 & 10.76514 & 0.28681 & 81.49704\ -9.44832 & 107.16649 & 10.76514 & 8.06018 & 98.40840\ -9.40454 & 107.03720 & 10.76514 & 3.02927 & 116.89820\ -9.36075 & 106.90791 & 10.76514 & 10.73559 & 74.60908\ -9.31697 & 106.77862 & 10.76514 & 57.94233 & 75.39254\ -9.27318 & 106.64933 & 10.76514 & 60.97223 & 64.77096\ -10.05684 & 109.26714 & 12.74141 & 21.97392 & 121.10740\ -10.01305 & 109.13785 & 12.74141 & 74.47045 & 119.75060\ -9.96927 & 109.00856 & 12.74141 & 17.25334 & 124.09410\ -9.92548 & 108.87927 & 12.74141 & 14.38904 & 87.41515\ -9.88170 & 108.74998 & 12.74141 & 3.03040 & 106.36440\ -9.83791 & 108.62069 & 12.74141 & 8.97587 & 101.53580\ -9.79413 & 108.49140 & 12.74141 & 114.85160 & 115.94270\ -9.75034 & 108.36211 & 12.74141 & 91.90382 & 115.95240\ -9.70656 & 108.23282 & 12.74141 & 64.72478 & 100.08050\ -9.66277 & 108.10353 & 12.74141 & 17.30368 & 123.06770\ -9.61899 & 107.97424 & 12.74141 & 57.09099 & 68.20686\ -9.57520 & 107.84495 & 12.74141 & 64.81193 & 79.84035\ -9.53142 & 107.71566 & 12.74141 & 131.04410 & 76.45924\ -9.48763 & 107.58636 & 12.74141 & 112.11020 & 99.51801\ -9.44385 & 107.45708 & 12.74141 & 60.23628 & 97.77266\ -9.40006 & 107.32778 & 12.74141 & 126.96870 & 80.27277\ -9.35628 & 107.19849 & 12.74141 & 63.39000 & 65.00801\ -9.31249 & 107.06921 & 12.74141 & 0.52621 & 94.79313\ -9.26871 & 106.93991 & 12.74141 & 1.52171 & 66.78681\ -9.22492 & 106.81062 & 12.74141 & 10.96743 & 81.94861\ -9.18114 & 106.68134 & 12.74141 & 2.38062 & 123.04830\ -9.96479 & 109.29915 & 14.71768 & 22.40949 & 123.90350\ -9.92100 & 109.16986 & 14.71768 & 48.62879 & 115.45630\ -9.87722 & 109.04057 & 14.71768 & 5.99559 & 83.81007\ -9.83343 & 108.91128 & 14.71768 & 7.22945 & 123.80940\ -9.78965 & 108.78199 & 14.71768 & 0.10031 & 93.40998\ -9.74586 & 108.65269 & 14.71768 & 0.36991 & 69.37087\ -9.70208 & 108.52341 & 14.71768 & 104.18760 & 123.83230\ -9.65829 & 108.39411 & 14.71768 & 46.12533 & 95.97049\ -9.61451 & 108.26482 & 14.71768 & 0.28679 & 89.56866\ -9.57072 & 108.13554 & 14.71768 & 2.06597 & 80.14312\ -9.52694 & 108.00624 & 14.71768 & 30.55070 & 66.23147\ -9.48315 & 107.87695 & 14.71768 & 73.72994 & 87.91253\ -9.43937 & 107.74767 & 14.71768 & 112.90700 & 92.28181\ -9.39558 & 107.61837 & 14.71768 & 74.73608 & 86.51558\ -9.35180 & 107.48908 & 14.71768 & 121.73820 & 64.68654\ -9.30801 & 107.35979 & 14.71768 & 231.20940 & 65.50779\ -9.26423 & 107.23050 & 14.71768 & 96.55727 & 87.01543\ -9.22044 & 107.10121 & 14.71768 & 28.29534 & 122.55670\ -9.17666 & 106.97192 & 14.71768 & 0.84110 & 70.21989\ -9.13287 & 106.84263 & 14.71768 & 7.99213 & 87.51706\ -9.08909 & 106.71334 & 14.71768 & 1.33281 & 96.33266\ -9.87274 & 109.33115 & 16.69394 & 43.31154 & 121.79150\ -9.82896 & 109.20187 & 16.69394 & 87.17052 & 124.49750\ -9.78517 & 109.07257 & 16.69394 & 61.47630 & 87.10537\ -9.74139 & 108.94328 & 16.69394 & 31.53286 & 70.58137\ -9.69760 & 108.81400 & 16.69394 & 0.70628 & 65.17896\ -9.65382 & 108.68470 & 16.69394 & 5.74160 & 87.70702\ -9.61003 & 108.55541 & 16.69394 & 93.47714 & 107.32000\ -9.56625 & 108.42612 & 16.69394 & 93.55753 & 85.39201\ -9.52246 & 108.29683 & 16.69394 & 47.25525 & 74.24297\ -9.47868 & 108.16754 & 16.69394 & 24.65230 & 124.20110\ -9.43489 & 108.03825 & 16.69394 & 35.63115 & 71.78733\ -9.39111 & 107.90896 & 16.69394 & 25.11757 & 75.27779\ -9.34732 & 107.77967 & 16.69394 & 68.15302 
& 107.42980\ -9.30354 & 107.65038 & 16.69394 & 24.66007 & 112.77880\ -9.25975 & 107.52109 & 16.69394 & 0.50688 & 79.86887\ -9.21597 & 107.39180 & 16.69394 & 119.92850 & 75.03103\ -9.17218 & 107.26250 & 16.69394 & 77.08335 & 110.83160\ -9.12840 & 107.13322 & 16.69394 & 31.65430 & 123.83060\ -9.08461 & 107.00393 & 16.69394 & 11.42768 & 66.47282\ -9.04083 & 106.87463 & 16.69394 & 33.80650 & 115.65650\ -8.99704 & 106.74535 & 16.69394 & 39.47481 & 65.15574\ -9.78069 & 109.36316 & 18.67021 & 35.42621 & 111.95830\ -9.73691 & 109.23387 & 18.67021 & 103.05030 & 124.62650\ -9.69312 & 109.10458 & 18.67021 & 101.38220 & 122.70620\ -9.64934 & 108.97529 & 18.67021 & 76.76701 & 68.20042\ -9.60556 & 108.84600 & 18.67021 & 10.71945 & 77.79713\ -9.56177 & 108.71671 & 18.67021 & 1.32449 & 100.72950\ -9.51799 & 108.58742 & 18.67021 & 37.46857& 124.59330\ -9.47420 & 108.45813 & 18.67021 & 118.99580 & 100.38000\ -9.43042 & 108.32883 & 18.67021 & 79.62616 & 91.56905\ -9.38663 & 108.19955 & 18.67021 & 97.61735 & 109.86430\ -9.34285 & 108.07026 & 18.67021 & 87.67753 & 87.57239\ -9.29906 & 107.94096 & 18.67021 & 15.14859 & 64.75201\ -9.25528 & 107.81168 & 18.67021 & 82.60960 & 71.66805\ -9.21149 & 107.68239 & 18.67021 & 66.06397 & 98.55843\ -9.16771 & 107.55309 & 18.67021 & 0.43085 & 67.81042\ -9.12392 & 107.42381 & 18.67021 & 35.30429 & 124.04570\ -9.08014 & 107.29452 & 18.67021 & 59.17323 & 124.55130\ -9.03635 & 107.16522 & 18.67021 & 15.23214 & 66.82615\ -8.99257 & 107.03593 & 18.67021 & 28.10358 & 76.08198\ -8.94878 & 106.90664 & 18.67021 & 48.09923 & 124.24450\ -8.90500 & 106.77735 & 18.67021 & 42.38682 & 124.42850\ Zakharov’s formulation of the water wave problem {#app:ii} ================================================ In this appendix we recast the governing equations (\[eq:laplace\]) – (\[eq:bottomkin\]) of the water wave problem in a more compact and mathematically more convenient form [@Zakharov1968; @Craig1993]. Using the definition of the normal velocity (\[eq:normalv\]), it is straightforward to rewrite the kinematic free surface condition (\[eq:kinematic\]): $${\partial_t}\eta - {\mathcal{D}}_\eta ({\varphi}) = 0,$$ where ${\varphi}$ is the trace of the velocity potential at the free surface . The time derivative and the horizontal gradient of the velocity potential trace on the free surface can be computed: $$\label{eq:tphis} {\partial_t}{\varphi}= {\partial_t}\phi + {\partial_t}\eta \left.{\partial_z}\phi\right|_{z=\eta} = {\partial_t}\phi + {\mathcal{D}}_\eta({\varphi})\left.{\partial_z}\phi\right|_{z=\eta},$$ and similarly one can compute the horizontal gradient: $$\label{eq:gradphis} {\nabla}{\varphi}= \left.{\nabla}\phi\right|_{z=\eta} + {\nabla}\eta \left.{\partial_z}\phi\right|_{z=\eta}.$$ In order to close the system, we have to express all derivatives of the potential $\phi$ computed at the free surface, in terms of ${\varphi}$, $\eta$ and ${\mathcal{D}}_\eta({\varphi})$. 
From the definition of the normal velocity (\[eq:normalv\]) and the D2N operator one readily obtains: $$\label{eq:phizeta} \left.{\nabla}\phi\right|_{z=\eta}\cdot{\nabla}\eta = \left.{\partial_z}\phi\right|_{z=\eta} - {\mathcal{D}}_\eta({\varphi}).$$ Substituting the last identity into (\[eq:gradphis\]) multiplied by ${\nabla}\eta$, leads to the following expression: $$\label{eq:phiz} \left.{\partial_z}\phi\right|_{z=\eta} = \frac{{\mathcal{D}}_\eta({\varphi}) + {\nabla}{\varphi}\cdot{\nabla}\eta}{1 + |{\nabla}\eta|^2}.$$ Now we have all elements to find the horizontal derivatives of the velocity potential: $$\label{eq:gradphi} \left.{\nabla}\phi\right|_{z=\eta} = {\nabla}{\varphi}- {\nabla}\eta \left.{\partial_z}\phi\right|_{z=\eta} = \frac{(1 + |{\nabla}\eta|^2){\nabla}{\varphi}- {\mathcal{D}}_\eta({\varphi}){\nabla}\eta - ({\nabla}{\varphi}\cdot{\nabla}\eta){\nabla}\eta} {1 + |{\nabla}\eta|^2}.$$ In order to rewrite Bernoulli condition (\[eq:bernoulli\]) in new variables, we make the following observation (using (\[eq:gradphis\]) and (\[eq:phizeta\])): $$\begin{gathered} {{\textstyle{1\over2}}}|{\nabla}\phi|^2 + {{\textstyle{1\over2}}}({\partial_z}\phi)^2 = {{\textstyle{1\over2}}}{\nabla}\phi\cdot{\nabla}\phi + {{\textstyle{1\over2}}}{\partial_z}\phi\;{\partial_z}\phi = \\ = {{\textstyle{1\over2}}}{\nabla}\phi\cdot({\nabla}{\varphi}- {\partial_z}\phi{\nabla}\eta) + {{\textstyle{1\over2}}}{\partial_z}\phi({\mathcal{D}}_\eta({\varphi}) + {\nabla}\phi\cdot{\nabla}\eta) = {{\textstyle{1\over2}}}{\nabla}\phi\cdot{\nabla}{\varphi}+ {{\textstyle{1\over2}}}{\mathcal{D}}_\eta({\varphi}){\partial_z}\phi, \quad z=\eta.\end{gathered}$$ Taking into account this observation and expression (\[eq:tphis\]) for the time derivative of ${\varphi}$, the dynamic condition takes this equivalent form: $${\partial_t}{\varphi}+ g\eta + {{\textstyle{1\over2}}}{\nabla}\phi\cdot{\nabla}{\varphi}- {{\textstyle{1\over2}}}{\mathcal{D}}_\eta({\varphi}){\partial_z}\phi = 0, \quad z = \eta.$$ After substituting expressions (\[eq:phiz\]), (\[eq:gradphi\]) into the last equation and summarizing all the developments made above, we get the following set of dynamic equations equivalent to the complete water wave problem (\[eq:laplace\]) – (\[eq:bottomkin\]): $$\begin{array}{rl} {\partial_t}\eta - {\mathcal{D}}_\eta({\varphi}) &= 0, \\ {\partial_t}{\varphi}+ {{\textstyle{1\over2}}}|{\nabla}{\varphi}|^2 + g\eta - \frac{1}{2(1+|{\nabla}\eta|^2)}\bigl[{\mathcal{D}}_\eta({\varphi}) + {\nabla}{\varphi}\cdot{\nabla}\eta \bigr]^2 &= 0. \end{array}$$ Relations between elastic constants {#app:iii} =================================== In the classical elasticity theory, coefficients in Lamé equations (governing the displacements field in an elastic solid), can be expressed in terms of various sets of physical parameters [@love; @sokol]. The purpose of this Appendix is to recall some relations between them. Lamé coefficients $\lambda$ and $\mu$ can be defined in terms of the Young’s modulus $E$ (having the dimension of the pressure $[Pa]$) and Poisson’s ratio $\nu$ (dimensionless coefficient $0<\nu<1/2$): $$\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}, \quad \mu = \frac{E}{2(1+\nu)},$$ and inversely: $$E = \frac{(3\lambda + 2\mu)\mu}{\lambda + \mu}, \quad \nu = \frac{\lambda}{2(\lambda + \mu)}.$$ The celerities of $P$ and $S$ waves have the following expressions in terms of Lamé coefficients: $$c_p = \sqrt{\frac{\lambda + 2\mu}{\rho}}, \quad c_s = \sqrt{\frac{\mu}{\rho}},$$ where $\rho$ is the density of elastic medium. 
These relations yield $$\mu = \rho c_s^2, \quad \lambda = \rho c_p^2 - 2\mu.$$ Acknowledgement {#acknowledgement .unnumbered} =============== D. Dutykh acknowledges the support from French Agence Nationale de la Recherche, project MathOcean (Grant ANR-08-BLAN-0301-01). F. Dias and D. Dutykh acknowledge the support of the Ulysses Program of the French Ministry of Foreign Affairs under the project 23725ZA. Special thanks go to Professor Costas Synolakis whose work on tsunami waves has been the source of our inspiration. Finally we would like to thank Professor Didier Clamond for very helpful discussions on the numerical simulation of water waves. [^1]: $^*$ Corresponding author [^2]: In fact, the Mansinha & Smylie solution is a particular case of the more general Okada solution.
--- abstract: 'We introduce multiplicative LSTM (mLSTM), a recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level language modelling tasks. In this version of the paper, we regularise mLSTM to achieve 1.27 bits/char on text8 and 1.24 bits/char on Hutter Prize. We also apply a purely byte-level mLSTM on the WikiText-2 dataset to achieve a character level entropy of 1.26 bits/char, corresponding to a word level perplexity of 88.8, which is comparable to word level LSTMs regularised in similar ways on the same task.' author: - | Ben Krause, Iain Murray & Steve Renals\ School of Informatics, University of Edinburgh\ Edinburgh, Scotland, UK\ `{ben.krause,i.murray,s.renals}@ed.ac.uk`\ Liang Lu\ Toyota Technological Institute at Chicago\ Chicago, Illinois, USA\ `{llu}@ttic.edu`\ bibliography: - 'iclr2017\_conference.bib' title: Multiplicative LSTM for sequence modelling --- Introduction ============ Recurrent neural networks (RNNs) are powerful sequence density estimators that can use long contexts to make predictions. They have achieved tremendous success in (conditional) sequence modelling tasks such as language modelling, machine translation and speech recognition. Generative models of sequences can apply factorization via the product rule to perform density estimation of the sequence $x_{1:T} = \{x_1,\dots,x_T\}$, $$P(x_1,\dots,x_T) = P(x_1) P(x_2|x_1)P(x_3|x_2,x_1)\cdots P(x_T|x_1\dots x_{T-1}).$$ RNNs can model sequences with the above factorization by using a hidden state to summarize past inputs. The hidden state vector $h_t$ is updated recursively using the previous hidden state vector $h_{t-1}$ and the current input $x_{t}$ as $$h_t = \mathcal{F}(h_{t-1},x_t),$$ where $\mathcal{F}$ is a differentiable function with learnable parameters. In a vanilla RNN, $\mathcal{F}$ multiplies its inputs by a matrix and squashes the result with a non-linear function such as a hyperbolic tangent ($\tanh$). The updated hidden state vector is then used to predict a probability distribution over the next sequence element, using function $\mathcal{G}$. In the case where $x_{1:T}$ consists of mutually exclusive discrete outcomes, $\mathcal{G}$ may apply a matrix multiplication followed by a softmax function: $$P(x_{t+1}) = \mathcal{G}(h_t).$$ Generative RNNs can evaluate log-likelihoods of sequences exactly, and are differentiable with respect to these log-likelihoods. RNNs can be difficult to train due to the vanishing gradient problem [@Bengio-1994], but advances such as the long short-term memory architecture (LSTM) [@Hochreiter-1997] have allowed RNNs to be successful. Despite their success, generative RNNs (as well as other conditional generative models) are known to have problems with recovering from mistakes [@Graves-2013]. Each time the recursive function of the RNN is applied and the hidden state is updated, the RNN must decide which information from the previous hidden state to store, due to its limited capacity. 
If the RNN’s hidden representation remembers the wrong information and reaches a bad numerical state for predicting future sequence elements, for instance as a result of an unexpected input, it may take many time-steps to recover. We argue that RNN architectures with hidden-to-hidden transition functions that are input-dependent are better suited to recover from surprising inputs. Our approach to generative RNNs combines LSTM units with multiplicative RNN (mRNN) factorized hidden weights, allowing flexible input-dependent transitions that are easier to control due to the gating units of LSTM . We compare this multiplicative LSTM hybrid architecture with other variants of LSTM on a range of character level language modelling tasks. Multiplicative LSTM is most appropriate when it can learn parameters specifically for each possible input at a given timestep. Therefore, its main application is to sequences of discrete mutually exclusive elements, such as language modelling and related problems. Input-dependent transition functions ------------------------------------ RNNs learn a mapping from previous hidden state $h_{t-1}$ and input $x_t$ to hidden state $h_t$. Let $\hat{h}_t$ denote the input to the next hidden state before any non-linear operation: $$\hat{h}(t) = W_{hh}h_{t-1} + W_{hx}x_{t} ,$$ where $W_{hh}$ is the hidden-to-hidden weight matrix, and $W_{hx}$ is the input-to-hidden weight matrix. For problems such as language modelling, $x_t$ is a one-hot vector, meaning that the output of $W_{hx}x_{t}$ is a column in $W_{hx}$, corresponding to the unit element in $x_{t}$. The possible future hidden states in an RNN can be viewed as a tree structure, as shown in Figure \[fig:tree\]. For an alphabet of $N$ inputs and a fixed $h_{t-1}$, there will be $N$ possible transition functions between $h_{t-1}$ and $\hat{h}_t$. The relative magnitude of $W_{hh}h_{t-1}$ to $W_{hx}x_{t}$ will need to be large for the RNN to be able to use long range dependencies, and the resulting possible hidden state vectors will therefore be highly correlated across the possible inputs, limiting the width of the tree and making it harder for the RNN to form distinct hidden representations for different sequences of inputs. However, if the RNN has flexible input-dependent transition functions, the tree will be able to grow wider more quickly, giving the RNN the flexibility to represent more probability distributions. ![Diagram of hidden states of a generative RNN as a tree, where $x_{t}^{(n)}$ denotes which of $N$ possible inputs is encountered at timestep $t$. Given $h_t$, the starting node of the tree, there will be a different possible $h_{t+1}$ for every $x_{t+1}^{(n)}$. Similarly, for every $h_{t+1}$ that can be reached from $h_t$, there is a different possible $h_{t+2}$ for each $x_{t+2}^{(n)}$, and so on.[]{data-label="fig:tree"}](tree.png){width="100.00000%"} In a vanilla RNN, it is difficult to allow inputs to greatly affect the hidden state vector without erasing information from the past hidden state. However, an RNN with a transition function mapping $\hat{h}_t \leftarrow h_{t-1}$ dependent on the input would allow the relative values of $h_{t}$ to vary with each possible input $x_{t}$, without overwriting the contribution from the previous hidden state, allowing for more long term information to be stored. 
This ability to adjust to new inputs quickly while limiting the overwriting of information should make an RNN more robust to mistakes when it encounters surprising inputs, as the hidden vector is less likely to get trapped in a bad numerical state for making future predictions. Multiplicative RNN ------------------ The multiplicative RNN (mRNN) [@Sutskever-2011] is an architecture designed specifically to allow flexible input-dependent transitions. Its formulation was inspired by the tensor RNN, an RNN architecture that allows for a different transition matrix for each possible input. The tensor RNN features a 3-way tensor $W_{hh}^{1:N}$, which contains a separately learned transition matrix $W_{hh}$ for each input dimension. The 3-way tensor can be stored as an array of matrices $$W_{hh}^{(1:N)} = \{ W_{hh}^{(1)},...,W_{hh}^{(N)} \},$$ where superscript is used to denote the index in the array, and $N$ is the dimensionality of $x_t$. The specific hidden-to-hidden weight matrix $W_{hh}^{(x_t)}$ used for a given input $x_t$ is then $$W_{hh}^{(x_t)} = \sum_{n=1}^{N} W_{hh}^{(n)} x_t^{(n)}.$$ For language modelling problems, only one unit of $x_t$ will be on, and $W_{hh}^{(x_t)}$ will be the matrix in $W_{hh}^{(1:N)}$ corresponding to that unit. Hidden-to-hidden propagation in the tensor RNN is then given by $$\hat{h}(t) = W_{hh}^{(x_t)} h_{t-1} + W_{hx}x_{t}.$$ The large number of parameters in the tensor RNN makes it impractical for most problems. mRNNs can be thought of as a shared-parameter approximation to the tensor RNN that uses a factorized hidden-to-hidden transition matrix in place of the normal RNN hidden-to-hidden matrix $W_{hh}$, with an input-dependent intermediate diagonal matrix $\mathrm{diag}(W_{mx}x_t)$. The input-dependent hidden-to-hidden weight matrix, $W_{hh}^{(x_t)}$, is then $$W_{hh}^{(x_t)} = W_{hm} \mathrm{diag}(W_{mx}x_t) W_{mh}.$$ An mRNN is thus equivalent to a tensor RNN using the above form for $W_{hh}^{(x_t)}$. For readability, an mRNN can also be described using intermediate state $m_t$ as follows: $$\begin{aligned} m_t &= (W_{mx} x_t) \odot (W_{mh}h_{t-1}) \\ \hat{h}_t &= W_{hm}m_t + W_{hx}x_{t}.\end{aligned}$$ mRNNs have improved on vanilla RNNs at character level language modelling tasks [@Sutskever-2011; @mikolov2012c], but have fallen short of the more popular LSTM architecture, for instance as shown with LSTM baselines from [@cooijmans2017]. The standard RNN units in an mRNN do not provide an easy way for information to bypass its complex transitions, potentially making it difficult to retain long term information. Long short-term memory ---------------------- LSTM is a commonly used RNN architecture that uses a series of multiplicative gates to control how information flows in and out of internal states of the network [@Hochreiter-1997]. There are several slightly different variants of LSTM, and we present the variant used in our experiments. The LSTM hidden state receives inputs from the input layer $x_t$ and the previous hidden state $h_{t-1}$: $$\hat{h}_t= W_{hx}x_t + W_{hh}h_{t-1}.$$ The LSTM network also has 3 gating units – input gate $i$, output gate $o$, and forget gate $f$ – that have both recurrent and feed-forward connections: $$\begin{aligned} i_t &= \sigma(W_{ix}x_t + W_{ih}h_{t-1}) \\ o_t &= \sigma(W_{ox}x_t + W_{oh}h_{t-1}) \\ f_t &= \sigma(W_{fx}x_t + W_{fh}h_{t-1}),\end{aligned}$$ where $\sigma$ is the logistic sigmoid function.
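As a sanity check on the mRNN formulation above, the following sketch verifies numerically that the factorised update with intermediate state $m_t$ matches the equivalent input-dependent transition matrix $W_{hm}\,\mathrm{diag}(W_{mx}x_t)\,W_{mh}$; the dimensions and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, H, M = 4, 6, 6                  # alphabet, hidden, intermediate sizes (toy)
W_hm = rng.normal(size=(H, M))
W_mh = rng.normal(size=(M, H))
W_mx = rng.normal(size=(M, N))
W_hx = rng.normal(size=(H, N))
h_prev = rng.normal(size=H)

x = np.zeros(N); x[2] = 1.0        # one-hot input

# Factorised form: intermediate state m_t, then the pre-activation h_hat.
m = (W_mx @ x) * (W_mh @ h_prev)
h_hat = W_hm @ m + W_hx @ x

# Equivalent tensor-RNN view: an input-dependent transition matrix.
W_hh_x = W_hm @ np.diag(W_mx @ x) @ W_mh
assert np.allclose(h_hat, W_hh_x @ h_prev + W_hx @ x)
```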
The input gate controls how much of the input to each hidden unit is written to the internal state vector $c_t$, and the forget gate determines how much of the previous internal state $c_{t-1}$ is preserved. This combination of write and forget gates allows the network to control what information should be stored and overwritten across each time-step. The internal state is updated by $$c_{t} = f_t \odot c_{t-1} + i_t \odot \tanh(\hat{h}_t).$$ The output gate controls how much of each unit’s activation is preserved. It allows the LSTM cell to keep information that is not relevant to the current output, but may be relevant later. The final output of the hidden state is given by $$h_t = \tanh(c_t)\odot o_t.$$ LSTM’s ability to control how information is stored in each unit has proven generally useful. Comparing LSTM with mRNN ------------------------ The LSTM and mRNN architectures both feature multiplicative units, but these units serve different purposes. LSTM’s gates are designed to control the flow of information through the network, whereas mRNN’s gates are designed to allow transition functions to vary across inputs. LSTM gates receive input from both the input units and hidden units, allowing multiplicative interactions between hidden units, but also potentially limiting the extent of input-hidden multiplicative interaction. LSTM gates are also squashed with a sigmoid, forcing them to take values between 0 and 1, which makes them easier to control, but less expressive than mRNN’s linear gates. For language modelling problems, mRNN’s linear gates do not need to be controlled by the network because they are explicitly learned for each input. They are also placed in between a product of 2 dense matrices, giving more flexibility to the possible values of the final product of matrices. Multiplicative LSTM =================== Since the LSTM and mRNN architectures are complementary, we propose the multiplicative LSTM (mLSTM), a hybrid architecture that combines the factorized hidden-to-hidden transition of mRNNs with the gating framework from LSTMs. The mRNN and LSTM architectures can be combined by adding connections from the mRNN’s intermediate state $m_t$ (which is redefined below for convenience) to each gating unit in the LSTM, resulting in the following system: $$\begin{aligned} m_t &= (W_{mx} x_t) \odot (W_{mh}h_{t-1}) \\ \hat{h}_t &= W_{hx}x_t + W_{hm}m_{t} \\ i_t &= \sigma(W_{ix}x_t + W_{im}m_{t}) \\ o_t &= \sigma(W_{ox}x_t + W_{om}m_{t}) \\ f_t &= \sigma(W_{fx}x_t + W_{fm}m_{t}).\end{aligned}$$ We set the dimensionality of $m_t$ and $h_t$ equal for all our experiments. We also chose to share $m_t$ across all LSTM unit types, resulting in a model with 1.25 times the number of recurrent weights as LSTM for the same number of hidden units. The goal of this architecture is to combine the flexible input-dependent transitions of mRNNs with the long time lag and information control of LSTMs. The gated units of LSTMs could make it easier to control (or bypass) the complex transitions that result from the factorized hidden weight matrix. The additional sigmoid input and forget gates featured in LSTM units allow even more flexible input-dependent transition functions than in regular mRNNs. Related approaches ================== Many recently proposed RNN architectures use recurrent depth, which is depth between recurrent steps.
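A single step of the mLSTM recurrence defined in the previous section can be sketched as follows; the dimensions, random weights, and omission of bias terms are illustrative simplifications rather than the configuration used in our experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
N, H = 4, 8                                   # alphabet size, hidden size (toy)
# m_t and h_t share the same dimensionality, as in the text above.
W_mx, W_mh = rng.normal(0, 0.1, (H, N)), rng.normal(0, 0.1, (H, H))
W_hx, W_hm = rng.normal(0, 0.1, (H, N)), rng.normal(0, 0.1, (H, H))
W_ix, W_im = rng.normal(0, 0.1, (H, N)), rng.normal(0, 0.1, (H, H))
W_ox, W_om = rng.normal(0, 0.1, (H, N)), rng.normal(0, 0.1, (H, H))
W_fx, W_fm = rng.normal(0, 0.1, (H, N)), rng.normal(0, 0.1, (H, H))

def mlstm_step(h_prev, c_prev, x):
    """One mLSTM step: the mRNN-style intermediate state m_t feeds every gate."""
    m = (W_mx @ x) * (W_mh @ h_prev)          # input-dependent intermediate state
    h_hat = W_hx @ x + W_hm @ m               # candidate pre-activation
    i = sigmoid(W_ix @ x + W_im @ m)          # input gate
    o = sigmoid(W_ox @ x + W_om @ m)          # output gate
    f = sigmoid(W_fx @ x + W_fm @ m)          # forget gate
    c = f * c_prev + i * np.tanh(h_hat)       # internal state update
    h = np.tanh(c) * o                        # hidden state output
    return h, c

x = np.zeros(N); x[1] = 1.0
h, c = mlstm_step(np.zeros(H), np.zeros(H), x)
```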
Recurrent depth allows more non-linearity in the combination of inputs and previous hidden states from every time step, which in turn allows for more flexible input-dependent transitions. Recurrent depth has been found to perform better than other kinds of non-recurrent depth for sequence modelling [@zhang2016]. Recurrent highway networks (RHNs) [@zilly2017] use a more sophisticated recurrent depth that carefully controls propagation through layers using gating units. The gating units also allow for a greater degree of multiplicative interaction between the inputs and hidden units. While adding recurrent depth could improve our model, we believe that maximizing the input-dependent flexibility of the transition function is more important for expressive sequence modelling. Recurrent depth can do this through non-linear layers combining hidden and input contributions, but our method can do this independently of non-linear depth. Another approach, multiplicative integration RNNs (MI-RNNs) [@wu2016], uses Hadamard products instead of addition when combining contributions from input and hidden units. When applied to LSTM, this architecture achieves impressive sequence modelling results. The main difference between multiplicative integration LSTM and mLSTM is that mLSTM applies the Hadamard product in between a product of two matrices. In the case of LSTM, this allows for the potential for greater expressiveness, without significantly increasing the size of the model. Experiments =========== System Setup ------------ Our experiments measure the performance of mLSTM for character-level language modelling tasks of varying complexity[^1]. Our initial experiments, which appeared in previous versions of this work, were mainly designed to compare the convergence and final performance of mLSTM vs LSTM and its deep variants. Our follow up experiments explored training and regularisation of mLSTM in more detail, with the goal of comparing more directly with the most competitive architectures in the literature. Our initial and follow up experiments used slightly different setups; initial experiments used a variant of RMSprop [@tieleman2012], with normalized updates in place of a learning rate. All unnormalized update directions $v_*$, computed by RMSprop, were normalized to have length $\ell$, where $\ell$ was decayed exponentially over training: $$v \leftarrow \frac{\ell}{\sqrt{v_*^T v_*}} v_* .$$ This update rule is similar to applying gradient norm clipping [@pascanu2013], with a very high learning rate balanced out by a very low gradient norm threshold. The initial experiments also used a slightly non-standard version of LSTM (and mLSTM) with the output gate inside of the final tanh of the LSTM cell. This gave us slightly better results in preliminary experiments with very small models, but likely does not make much difference. We use LSTM (RMSprop) and mLSTM (RMSprop) in tables to distinguish results obtained by this initial set of experiments. For our follow up experiments, we use more standard methodology to be more comparable to the literature. We used ADAM [@kingma2014], always starting with an initial learning rate of $0.001$ and decaying this linearly to a minimum learning rate (which was always in the range $0.00005$ to $0.0001$). The mLSTMs used the standard LSTM cell with the output gate outside the tanh. These mLSTMs also used scaled orthogonal initialisations [@saxe2013] for the hidden weights, an initial forget gate bias of 3, and truncated backpropagation lengths from 200 to 250.
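The normalised-update variant of RMSprop described above can be sketched as follows; the hyper-parameter values and the exponential decay schedule shown are illustrative placeholders, not the settings used in our experiments.

```python
import numpy as np

def rmsprop_normalized_step(w, grad, ms, step, *, rho=0.9, eps=1e-8,
                            l0=1e-3, decay=0.9999):
    """RMSprop direction, rescaled to a fixed length ell that decays over training."""
    ms = rho * ms + (1.0 - rho) * grad ** 2         # running mean of squared grads
    v_star = grad / (np.sqrt(ms) + eps)             # unnormalised RMSprop direction
    ell = l0 * decay ** step                        # exponentially decayed update length
    v = ell * v_star / np.linalg.norm(v_star)       # normalise the update to length ell
    return w - v, ms

# Illustrative usage on a toy parameter vector.
w, ms = np.zeros(10), np.zeros(10)
for step in range(100):
    grad = np.random.default_rng(step).normal(size=10)   # placeholder gradient
    w, ms = rmsprop_normalized_step(w, grad, ms, step)
```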
We compared mLSTM to previously reported regular LSTM, stacked LSTM, and RNN character-level language models. We run detailed experiments on the text8 and Hutter Prize datasets [@Hutter2012] to test medium scale character-level language modelling. We test our best model from these experiments on the WikiText-2 dataset [@Merity2016] to measure performance on smaller scale character level language modelling, and to compare with word level models. Previous versions of the paper also report a character level result on the Penn Treebank dataset [@marcus1993] of 1.35 bits/char with an unregularised mLSTM; however, we do not include this experiment in this version as we have no results with our updated training and regularisation methodology. Hutter Prize dataset -------------------- We performed experiments using the Hutter Prize dataset, originally used for the Hutter Prize compression benchmark [@Hutter2012]. This dataset consists mostly of English language text and mark-up language text, but also contains text in other languages, including non-Latin languages. The dataset is modelled using a UTF-8 encoding, and contains 205 unique bytes. In our initial experiments, we compared mLSTMs and 2-layer stacked LSTMs for varying network sizes, ranging from about 3–20 million parameters. These results all used RMSprop with normalized updates, stopping after 4 epochs on the first 95 million characters, with test performance measured on the last 5 million bytes. Hyperparameters for each mLSTM and stacked LSTM were kept constant across all sizes. The results, shown in Figure \[fig:hutter-res\], show that mLSTM gives an improvement across all network sizes. ![Test set performance of mLSTM and stacked LSTM on the Hutter Prize dataset for varying network sizes[]{data-label="fig:hutter-res"}](hutterplot.pdf){width="\textwidth"} ![Cross entropy loss for mLSTM and stacked LSTM immediately following a surprising input[]{data-label="fig:surprise-res"}](surpriseplot.pdf){width="\textwidth"} We hypothesized that mLSTM’s superior performance over stacked LSTM was in part due to its ability to recover from surprising inputs. To test this we looked at each network’s performance after viewing surprising inputs that occurred naturally in the test set by creating a set of the 10% of characters with the largest average loss taken by mLSTM and stacked LSTM. Both networks perform roughly equally on this set of surprising characters, with mLSTM and stacked LSTM taking losses of 6.27 bits/character and 6.29 bits/character respectively. However, stacked LSTM tended to take much larger losses than mLSTM in the timesteps immediately following surprising inputs. One to four time-steps after a surprising input occurred, mLSTM and stacked LSTM took average losses of (2.26, 2.04, 1.61, 1.51) and (2.48, 2.25, 1.79, 1.67) bits per character respectively, as shown in Figure \[fig:surprise-res\]. mLSTM’s overall advantage over stacked LSTM was 1.42 bits/char to 1.53 bits/char; mLSTM’s advantage over stacked LSTM was greater after a surprising input than in general. We also explore more standard training methodology and regularisation methods on this dataset. These experiments all used ADAM, and the standard 90-5-5 training/validation/test split on this dataset. We first consider a standard unregularised mLSTM trained with this methodology.
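For reference, the surprise-recovery comparison described above can be computed from per-character losses roughly as follows; the 10% threshold and the four-step horizon follow the text, while the array handling and function name are an illustrative sketch.

```python
import numpy as np

def recovery_profile(losses_a, losses_b, frac=0.10, horizon=4):
    """Average loss 1..horizon steps after the most surprising characters.

    losses_a, losses_b: per-character losses (bits/char) of two models on the
    same test stream; surprise is measured on the mean of the two losses.
    """
    losses_a, losses_b = np.asarray(losses_a), np.asarray(losses_b)
    mean_loss = 0.5 * (losses_a + losses_b)
    thresh = np.quantile(mean_loss, 1.0 - frac)            # top-frac cutoff
    idx = np.where(mean_loss >= thresh)[0]                  # surprising positions
    idx = idx[idx < len(mean_loss) - horizon]               # keep a full horizon
    prof_a = [losses_a[idx + k].mean() for k in range(1, horizon + 1)]
    prof_b = [losses_b[idx + k].mean() for k in range(1, horizon + 1)]
    return prof_a, prof_b
```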
We then experiment with an mLSTM with a linear embedding layer and weight normalization [@salimans2016] on recurrent weights (mLSTM +emb +WN), which is similar to the mLSTM architecture used in [@radford2017], which was built off our initial work. We also consider regularisation of the latter model with variational dropout [@gal2016] (mLSTM +emb +WN +VD). Variational dropout is a form of dropout [@srivastava2014] where the dropout mask is shared across a sequence. The standard unregularised mLSTM used 1900 hidden units and 20 million parameters. The weight normalized mLSTM used 1900 hidden units, and a linear embedding layer of 400, giving it 22 million parameters. The large embedding layer was used because it was found to work well with dropout. Since this embedding layer is linear, it could potentially be removed during test time by multiplying its incoming and outgoing weight matrices to reduce the number of parameters (however we report parameter numbers with the embedding layer). For the regularised weight normalized mLSTM, we apply a variational dropout of 0.2 to the hidden state and to the embedding layer (dropout masks for both the hidden state and embedding layer were shared across a sequence). We also consider a larger version of the weight normalized mLSTM with 2800 hidden units and 46 million parameters. We increased the dropout in the embedding layer to 0.5 on this model. All results without variational dropout used early stopping on the validation error to reduce overfitting. The results for these experiments are given in Table \[tab:wiki-res\].

| architecture | \# of parameters | test set error |
|:------------------------------------------------------|:----------------:|:--------------:|
| stacked LSTM (7-layer) [@Graves-2013] | 21M | 1.67 |
| stacked LSTM (7-layer) + dynamic eval [@Graves-2013] | 21M | 1.33 |
| MI-LSTM [@wu2016] | 17M | 1.44 |
| recurrent memory array structures [@rocki2016] | | 1.40 |
| feedback LSTM + zoneout [@rocki2016b] | | 1.37 |
| hyperLSTM [@Ha2017] | 27M | 1.34 |
| hierarchical multiscale LSTM [@chung2017] | | 1.32 |
| bytenet decoder [@Kalchbrenner2016] | | 1.31 |
| LSTM (4 layer) + VD + BB tuning [@melis2017] | 46M | 1.30 |
| RHN (rec depth 7) + VD [@zilly2017] | 46M | 1.27 |
| Fast-slow LSTM (rec depth 4) + zoneout [@mujika2017] | 47M | 1.25 |
| **unregularised mLSTM (RMSprop, 4 epoch)** | **20M** | **1.42** |
| **unregularised mLSTM** | **20M** | **1.40** |
| **mLSTM +emb +WN** | **22M** | **1.44** |
| **mLSTM +emb +WN +VD** | **22M** | **1.28** |
| **large mLSTM +emb +WN +VD** | **46M** | **1.24** |

: Hutter Prize dataset test error in bits/char.[]{data-label="tab:wiki-res"}

Interestingly, adding weight normalization and an embedding layer hurt performance in the absence of regularisation. However, when combined with variational dropout, this model outperformed all previous static single model neural network results on Hutter Prize. We did not explore variational dropout applied to mLSTM without weight normalization. Earlier versions of this work also considered dynamic evaluation of mLSTMs on this task; however, this is now in a separate paper focused on dynamic evaluation [@krause2017]. We also tested an MI-LSTM, mLSTM’s nearest neighbor, with a slightly larger size (22M parameters) and a very similar hyperparameter configuration and initialisation scheme[^2] (compared with unregularised mLSTM with no WN). MI-LSTM achieved a relatively poor test set performance of 1.53 bits/char, as compared with 1.40 bits/char for mLSTM under the same settings. The MI-LSTM also converged more slowly, although it eventually did require early stopping like the mLSTM.
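Since variational dropout is central to the regularised models above, the following sketch illustrates the only respect in which it differs from standard dropout, namely that one mask is sampled per sequence and reused at every timestep; the shapes, rates, and names are illustrative.

```python
import numpy as np

def variational_dropout_masks(rng, batch, hidden, emb, p_h=0.2, p_e=0.2):
    """Sample one dropout mask per sequence (not per timestep).

    The same masks are applied to the hidden state and to the embedding layer
    at every step of the sequence, with inverted-dropout scaling at train time.
    """
    mask_h = rng.binomial(1, 1.0 - p_h, size=(batch, hidden)) / (1.0 - p_h)
    mask_e = rng.binomial(1, 1.0 - p_e, size=(batch, emb)) / (1.0 - p_e)
    return mask_h, mask_e

# During training the masks stay fixed within a sequence, e.g. (schematically):
#   h_t, c_t = mlstm_step(h_prev * mask_h, c_prev, e_t * mask_e)
rng = np.random.default_rng(0)
mask_h, mask_e = variational_dropout_masks(rng, batch=32, hidden=1900, emb=400)
```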
While this particular experiment cannot conclusively prove anything about the relative utility of mLSTM vs. MI-LSTM on this task, it does show that the two architectures are sufficiently different to obtain very different results under the same hyper-parameter settings. Text8 dataset ------------- Text8 contains 100 million characters of English text taken from Wikipedia in 2006, consisting of just the 26 characters of the English alphabet plus spaces. This dataset can be found at <http://mattmahoney.net/dc/textdata>. This corpus has been widely used to benchmark RNN character level language models, with the first 90 million characters used for training, the next 5 million used for validation, and the final 5 million used for testing. The results of these experiments are shown in Table \[tab:text8-res\]. The first set of experiments we performed was designed to be comparable to those of @zhang2016, who benchmarked several deep LSTMs against shallow LSTMs on this dataset. The shallow LSTM had a hidden state dimensionality of 512, and the deep versions had reduced dimensionality to give them roughly the same number of parameters. Our experiment used an mLSTM with a hidden dimensionality of 450, giving it slightly fewer parameters than the past work, and our own LSTM baseline with hidden dimensionality 512. mLSTM showed an improvement over our baseline and the previously reported best deep LSTM variant. We also ran experiments to compare a large mLSTM with other reported experiments. We trained an mLSTM with hidden dimensionality of 1900 on the text8 dataset. Unregularised mLSTM was able to fit the training data well and achieved a competitive performance; however, it was outperformed by other architectures that are less prone to over-fitting. We later considered our best training setup from the Hutter Prize dataset, reusing the exact same architecture and hyper-parameters from this task, with the only difference being the number of input characters (27 for text8), which reduces the number of parameters to around 45 million. This well regularised mLSTM was able to achieve a much stronger performance on text8, tying RHNs with a recurrent depth of 10 for the best result on this dataset.

| architecture | test set error |
|:---------------------------------------------------------------|:--------------:|
| mRNN [@mikolov2012c] | 1.54 |
| MI-LSTM [@wu2016] | 1.44 |
| LSTM [@cooijmans2017] | 1.43 |
| batch normalised LSTM [@cooijmans2017] | 1.36 |
| layer-norm hierarchical multiscale LSTM [@chung2017] | 1.29 |
| Recurrent highway networks, rec. depth 10 +VD [@zilly2017] | 1.27 |
| small LSTM [@zhang2016] | 1.65 |
| small deep LSTM (best) [@zhang2016] | 1.63 |
| **small LSTM (RMSprop)** | **1.64** |
| **small mLSTM (RMSprop)** | **1.59** |
| **unregularised mLSTM (RMSprop)** | **1.40** |
| **large mLSTM +emb +WN +VD** | **1.27** |

: Text8 dataset test set error in bits/char. Architectures labelled with small used a highly restrictive hidden dimensionality (512 for LSTM, 450 for mLSTM).[]{data-label="tab:text8-res"}

WikiText-2 ---------- The WikiText-2 dataset has been a common benchmark for very recent advances in word-level language modelling. This dataset contains 2 million training tokens and a vocab size of 33k. Documents are given in non-shuffled order, causing the data to contain more long-range dependencies. We use this dataset to benchmark how our advances in character-level language modelling stack up against word level language models. Character language models generally perform worse than word-level language models on standard English text benchmarks.
One reason for this is that word level language models know the test set vocabulary in advance, whereas character level models model a distribution over all possible words, including out of vocabulary words, making the task inherently more difficult from a character level view. Furthermore, very rare words, which character level models are better equipped to handle than word level models, are mapped to an unknown token in word level models. From the perspective of training, character level language models must model longer range dependencies, and must learn a more complex non-linear fit to capture joint dependencies between characters. Character level models do have an inherent advantage of being able to capture subword language information, motivating their use on traditionally word-level tasks. Character level language models can be compared with word level language models by converting bits per character to perplexity. In this case, we model the data at the UTF-8 byte level. The bits per word can be computed as $$bits/word = bits/symbol \times \frac{symbols/file}{words/file}$$ where in this case, symbols are UTF-8 bytes. The perplexity is then 2 raised to the power of the number of bits per word. The WikiText-2 test set is 245,569 words long, and 1,256,449 bytes long, so each word is on average 5.1165 UTF-8 bytes long. A character level model can also assign word level probabilities directly by taking the product of the probabilities of the characters in a word, including the probability of the character ending the word (either a space or a newline). A byte level model is likely at a slight disadvantage compared with word-level because it must predict some information that gets removed during tokenization (such as spaces vs. newlines), but the perplexity given by the conversion above could at least be seen as an upper bound on the word level perplexity such a model could achieve predicting byte by byte. This is because the entropy of the file after tokenization (which word level models measure) will always be less than or equal to the entropy of the file before tokenization (which byte level models measure). We trained the mLSTM configuration from the Hutter Prize dataset, using an embedding layer, weight normalization, and a variational dropout of 0.5 in both the hidden and embedding layer, to model WikiText-2 at the byte level. This model contained 46 million parameters, which is larger than most word level models that use tied input and output embeddings [@press2017; @inan2017] to share parameters, but similar in size to untied word level models on this dataset. The results are given in Table \[tab:wikitext\].

| architecture | valid | test |
|:----------------------------------------------|:--------:|:--------:|
| LSTM [@grave2017] | 104.2 | 99.3 |
| LSTM + VD (untied) [@inan2017] | 98.8 | 93.1 |
| LSTM + VD (tied) [@inan2017] | 91.5 | 87.0 |
| Pointer Sentinel LSTM [@Merity2016] | 84.8 | 80.8 |
| LSTM (tied) + VD + BB tuning [@melis2017] | 69.1 | 65.9 |
| LSTM + neural cache [@grave2017] | 72.1 | 68.9 |
| LSTM + dynamic eval [@krause2017] | 63.7 | 59.8 |
| AWD-LSTM (tied) [@merity2017] | 68.6 | 65.8 |
| AWD-LSTM (tied) + neural cache [@merity2017] | 53.8 | 52.0 |
| AWD-LSTM (tied) + dynamic eval [@krause2017] | 46.4 | 44.3 |
| **byte mLSTM +emb +WN +VD** | **92.8** | **88.8** |

: WikiText-2 perplexity errors[]{data-label="tab:wikitext"}

Byte mLSTM achieves a byte-level test set cross entropy of $1.2649$ bits/char, corresponding to a perplexity of $88.8$.
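The conversion used above can be reproduced directly from the quoted test-set statistics; the snippet below is purely illustrative arithmetic.

```python
# Converting byte-level cross entropy to a word-level perplexity upper bound,
# using the WikiText-2 test set statistics quoted above.
bits_per_byte = 1.2649           # byte-level test cross entropy of byte mLSTM
test_bytes = 1_256_449           # UTF-8 bytes in the WikiText-2 test set
test_words = 245_569             # words in the WikiText-2 test set

bytes_per_word = test_bytes / test_words           # ~5.1165 bytes per word
bits_per_word = bits_per_byte * bytes_per_word     # ~6.47 bits per word
perplexity = 2 ** bits_per_word                    # ~88.8
print(f"{bytes_per_word:.4f} bytes/word -> perplexity {perplexity:.1f}")
```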
Despite all the disadvantages faced by character level models, byte level mLSTM achieves similar word level perplexity to previous word-level LSTM baselines that also use variational dropout for regularisation. Byte mLSTM does not perform as well as word-level models that use adaptive add-on methods or very recent advances in regularisation/hyper-parameter tuning, however it could likely benefit from these advances as well. Discussion ========== This work combined the mRNN’s factorized hidden weights with the LSTM’s hidden units for generative modelling of discrete multinomial sequences. This mLSTM architecture was motivated by its ability to have both controlled and flexible input-dependent transitions, to allow for fast changes to the distributed hidden representation without erasing information. In a series of character-level language modelling experiments, mLSTM showed improvements over LSTM and its deep variants. mLSTM regularised with variational dropout performed favorably compared with baselines in the literature, outperforming all previous neural models on Hutter Prize and tying the best previous result on text8. Byte-level mLSTM was also able to perform competitively with word-level language models on WikiText-2. Unlike many previous approaches that have achieved success at character level language modelling, mLSTM does not use non-linear recurrent depth. All mLSTMs considered in this work only had 2 linear recurrent transition matrices, whereas comparable works such as recurrent highway networks use a recurrent depth of up to 10 to achieve best results. This makes mLSTM more easily parallelizable than these approaches. Additionally, our work suggests that a large depth is not necessary to achieve competitive results on character level language modelling. We hypothesize that mLSTM’s ability to have very different transition functions for each possible input is what makes it successful at this task. While recurrent depth can accomplish this too, mLSTM can achieve this more efficiently. While these results are promising, it remains to be seen how mLSTM performs at word-level language modelling and other discrete multinomial generative modelling tasks, and whether mLSTM can be formulated to apply more broadly to tasks with continuous or non-sparse input units. We also hope this work will motivate further exploration in generative RNN architectures with flexible input-dependent transition functions. [^1]: Code to replicate our experiments on the Hutter Prize dataset is available at <https://github.com/benkrause/mLSTM>. [^2]: The only difference in settings was the scale for the orthogonally initialised hidden weights; mLSTM used 0.7 and MI-LSTM used 0.5. We believed this was justified because mLSTM uses a product of two matrices, resulting in a spectral radius of 0.49 for this product. Additionally, reducing the scale to 0.5 improved MI-LSTM’s initial convergence rate. Downscaling the orthogonal initialisations was necessary in general because an initial forget gate bias of 3 was used.
--- abstract: 'In this paper, an interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. In particular, each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing both wireless latency and the interference level caused on the ground network along its path. The problem is cast as a dynamic game among UAVs. To solve this game, a deep reinforcement learning algorithm, based on echo state network (ESN) cells, is proposed. The introduced deep ESN architecture is trained to allow each UAV to map each observation of the network state to an action, with the goal of minimizing a sequence of time-dependent utility functions. Each UAV uses ESN to learn its optimal path, transmission power level, and cell association vector at different locations along its path. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, an upper and lower bound for the altitude of the UAVs is derived thus reducing the computational complexity of the proposed algorithm. Simulation results show that the proposed scheme achieves better wireless latency per UAV and rate per ground user (UE) while requiring a number of steps that is comparable to a heuristic baseline that considers moving via the shortest distance towards the corresponding destinations. The results also show that the optimal altitude of the UAVs varies based on the ground network density and the UE data rate requirements and plays a vital role in minimizing the interference level on the ground UEs as well as the wireless transmission delay of the UAV.' author: - | \ \ \ [^1] bibliography: - 'references.bib' title: 'Cellular-Connected UAVs over 5G: Deep Reinforcement Learning for Interference Management' --- Unmanned aerial vehicles (UAV); echo state network (ESN); deep learning; deep reinforcement learning; game theory; path planning Introduction ============ Cellular-connected unmanned aerial vehicles (UAVs) will be an integral component of future wireless networks as evidenced by recent interest from academia, industry, and 3GPP standardizations [@3GPP_standards; @Qualcomm_UAV; @LTEintheSky; @SkyNotLimit; @coexistence_ground_aerial; @christian]. Unlike current wireless UAV connectivity that relies on short-range communication range (e.g., WiFi, bluetooth, and radio waves), cellular-connected UAVs allow beyond line-of-sight control, low latency, real time communication, robust security, and ubiquitous coverage. Such *cellular-connected UAV-user equipments (UEs)* will thus enable a myriad of applications ranging from real-time video streaming to surveillance. Nevertheless, the ability of UAV-UEs to establish line-of-sight (LoS) connectivity to cellular base stations (BSs) is both a blessing and a curse. On the one hand, it enables high-speed data access for the UAV-UEs. On the other hand, it can lead to substantial inter-cell mutual interference among the UAVs and to the ground users. As such, a wide-scale deployment of UAV-UEs is only possible if interference management challenges are addressed [@LTEintheSky; @SkyNotLimit; @coexistence_ground_aerial]. 
While some literature has recently studied the use of UAVs as mobile BSs [@U_globecom; @ferryMessage; @zhang_trajectory_power; @path_planning_WCNC; @mohammad_UAV; @qingqing_UAV; @chen2016caching], the performance analysis of cellular-connected UAV-UEs (*short-handed hereinafter as UAVs*) remains relatively scarce [@LTEintheSky; @SkyNotLimit; @coexistence_ground_aerial; @reshaping_cellular]. For instance, in [@LTEintheSky], the authors study the impact of UAVs on the uplink performance of a ground LTE network. Meanwhile, the work in [@SkyNotLimit] uses measurements and ray tracing simulations to study the airborne connectivity requirements and propagation characteristics of UAVs. The authors in [@coexistence_ground_aerial] analyze the coverage probability of the downlink of a cellular network that serves both aerial and ground users. In [@reshaping_cellular], the authors consider a network consisting of both ground and aerial UEs and derive closed-form expressions for the coverage probability of the ground and drone UEs. Nevertheless, this prior art is limited to studying the impact that cellular-connected UAVs have on the ground network. Indeed, the existing literature [@LTEintheSky; @SkyNotLimit; @coexistence_ground_aerial; @reshaping_cellular] does not provide any concrete solution for optimizing the performance of a cellular network that serves both aerial and ground UEs in order to overcome the interference challenge that arises in this context. UAV trajectory optimization is essential in such scenarios. An online path planning that accounts for wireless metrics is vital and would, in essence, assist in addressing the aforementioned interference challenges along with new improvements in the design of the network, such as 3D frequency resue. Such a path planning scheme allows the UAVs to adapt their movement based on the rate requirements of both aerial UAV-UEs and ground UEs, thus improving the overall network performance. The problem of UAV path planning has been studied mainly for non-UAV-UE applications [@ferryMessage; @zhang_trajectory_power; @path_planning_WCNC; @networked_camera] with [@path_cellular_UAVs] being the only work considering a cellular-connected UAV-UE scenario. In [@ferryMessage], the authors propose a distributed path planning algorithm for multiple UAVs to deliver delay-sensitive information to different ad-hoc nodes. The authors in [@zhang_trajectory_power] optimize a UAV’s trajectory in an energy-efficient manner. The authors in [@path_planning_WCNC] propose a mobility model that combines area coverage, network connectivity, and UAV energy constraints for path planning. In [@networked_camera], the authors propose a fog-networking-based system architecture to coordinate a network of UAVs for video services in sports events. However, despite being interesting, the body of work in [@ferryMessage; @zhang_trajectory_power; @path_planning_WCNC] and [@networked_camera] is restricted to UAVs as BSs and does not account for UAV-UEs and their associated interference challenges. Hence, the approaches proposed therein cannot readily be used for cellular-connected UAVs. On the other hand, the authors in [@path_cellular_UAVs] propose a path planning scheme for minimizing the time required by a cellular-connected UAV to reach its destination. Nevertheless, this work is limited to one UAV and does not account for the interference that cellular-connected UAVs cause on the ground network during their mission. 
Moreover, the work in [@path_cellular_UAVs] relies on offline optimization techniques that cannot adapt to the uncertainty and dynamics of a cellular network. The main contribution of this paper is a novel deep reinforcement learning (RL) framework based on echo state network (ESN) cells for optimizing the trajectories of multiple cellular-connected UAVs in an online manner. This framework will allow cellular-connected UAVs to minimize the interference they cause on the ground network as well as their wireless transmission latency. To realize this, we propose a dynamic noncooperative game in which the players are the UAVs and the objective of each UAV is to *autonomously* and *jointly* learn its path, transmit power level, and association vector. For our proposed game, the UAV’s cell association vector, trajectory optimization, and transmit power level are closely coupled with each other and their optimal values vary based on the dynamics of the network. Therefore, a major challenge in this game is the need for each UAV to have full knowledge of the ground network topology, ground UEs service requirements, and other UAVs’ locations. Consequently, to solve this game, we propose a deep RL ESN-based algorithm, using which the UAVs can predict the dynamics of the network and subsequently determine their optimal paths as well as the allocation of their resources along their paths. Unlike previous studies which are either centralized or rely on the coordination among UAVs, our approach is based on a self-organizing path planning and resource allocation scheme. In essence, two important features of our proposed algorithm are *adaptation* and *generalization*. Indeed, UAVs can take decisions for *unseen* network states, based on the reward they got from previous states. This is mainly due to the use of ESN cells which enable the UAVs to retain their previous memory states. We have shown that the proposed algorithm reaches a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, upper and lower bounds on the UAVs’ altitudes, that guarantee a maximum interference level on the ground network and a maximum wireless transmission delay for the UAV, have been derived. To our best knowledge, *this is the first work that exploits the framework of deep ESN for interference-aware path planning of cellular-connected UAVs*. Simulation results show that the proposed approach improves the tradeoff between energy efficiency, wireless latency, and the interference level caused on the ground network. Results also show that each UAV’s altitude is a function of the ground network density and the UAV’s objective function and is an important factor in achieving the UAV’s target. The rest of this paper is organized as follows. Section \[system\_model\] presents the system model. Section \[game\] describes the proposed noncooperative game model. The deep RL ESN-based algorithm is proposed in Section \[algorithm\]. In Section \[simulation\], simulation results are analyzed. Finally, conclusions are drawn in Section \[conclusion\]. System Model {#system_model} ============ Consider the uplink (UL) of a wireless cellular network composed of a set $\mathcal{S}$ of $S$ ground BSs, a set $\mathcal{Q}$ of $Q$ ground UEs, and a set $\mathcal{J}$ of $J$ cellular-connected UAVs. The UL is defined as the link from UE $q$ or UAV $j$ to BS $s$. Each BS $s \in \mathcal{S}$ serves a set $\mathcal{K}_s\subseteq\mathcal{Q}$ of $K_s$ UEs and a set $\mathcal{N}_s\subseteq\mathcal{J}$ of $N_s$ cellular-connected UAVs. 
The total system bandwidth, $B$, is divided into a set $\mathcal{C}$ of $C$ resource blocks (RBs). Each UAV $j\in \mathcal{N}_s$ is allocated a set $\mathcal{C}_{j,s}\subseteq\mathcal{C}$ of $C_{j,s}$ RBs and each UE $q\in \mathcal{K}_s$ is allocated a set $\mathcal{C}_{q,s}\subseteq\mathcal{C}$ of $C_{q,s}$ RBs by its serving BS $s$. At each BS $s$, a particular RB $c \in \mathcal{C}$ is allocated to *at most* one UAV $j\in \mathcal{N}_s$, or UE $q\in \mathcal{K}_s$. An airborne Internet of Things (IoT) is considered in which the UAVs are equipped with different IoT devices, such as cameras, sensors, and GPS that can be used for various applications such as surveillance, monitoring, delivery and real-time video streaming. The 3D coordinates of each UAV $j \ \in \mathcal{J}$ and each ground user $q \ \in \mathcal{Q}$ are, respectively, $(x_j, y_j, h_j)$ and $(x_q, y_q, 0)$. All UAVs are assumed to fly at a fixed altitude $h_j$ above the ground (as done in [@zhang_trajectory_power; @path_cellular_UAVs; @relaying; @optimization]) while the horizonal coordinates $(x_j, y_j)$ of each UAV $j$ vary in time. Each UAV $j$ needs to move from an initial location $o_j$ to a final destination $d_j$ while transmitting *online* its mission-related data such as sensor recordings, video streams, and location updates. We assume that the initial and final locations of each UAV are pre-determined based on its mission objectives. For ease of exposition, we consider a virtual grid for the mobility of the UAVs. We discretize the space into a set $\mathcal{A}$ of $A$ equally sized unit areas. The UAVs move along the center of the areas $c_a=(x_a, y_a, z_a)$, which yields a finite set of possible paths $\boldsymbol{p}_j$ for each UAV $j$. The path $\boldsymbol{p}_j$ of each UAV $j$ is defined as a sequence of area units $\boldsymbol{p}_j=(a_1, a_2, \cdots, a_l)$ such that $a_1=o_j$ and $a_l=d_j$. The area size of the discretized area units $(a_1, a_2, \cdots, a_A) \in \mathcal{A}$ is chosen to be sufficiently small such that the UAVs’ locations can be assumed to be approximately constant within each area even at the maximum UAV’s speed, as commonly done in the literature [@relaying]. We assume a constant speed $0 < V_j \leq \widehat{V}_j$ for each UAV where $\widehat{V}_j$ is the maximum speed of UAV $j$. Therefore, the time required by each UAV to travel between any two unit areas is constant. Channel Models -------------- We consider the sub-6 GHz band and the free-space path loss model for the UAV-BS data link. The path loss between UAV $j$ at location $a$ and BS $s$, $\xi_{j,s,a}$, is given by [@hourani]: $$\begin{aligned} \xi_{j,s,a} (\mathrm{dB})= 20\ \mathrm{log}_{10} (d_{j,s,a}) + 20\ \mathrm{log}_{10} (\hat{f}) - 147.55,\end{aligned}$$ where $\hat{f}$ is the system center frequency and $d_{j,s,a}$ is the Euclidean distance between UAV $j$ at location $a$ and BS $s$. We consider a Rician distribution for modeling the small-scale fading between UAV $j$ and ground BS $s$ thus accounting for the LoS and multipath scatterers that can be experienced at the BS. In particular, adopting the Rician channel model for the UAV-BS link is validated by the fact that the channel between a given UAV and a ground BS is mainly dominated by a LoS link [@zhang_trajectory_power]. We assume that the Doppler spread due to the mobility of the UAVs is compensated for based on existing techniques such as frequency synchronization using a phase-locked loop [@mengali] as done in [@zhang_trajectory_power] and [@relaying]. 
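For illustration, the air-to-ground path-loss model above and a unit-power Rician small-scale gain can be evaluated as in the following sketch; the distance, carrier frequency, and Rician $K$-factor are illustrative values not specified in this section.

```python
import numpy as np

def uav_bs_path_loss_db(d_m, f_hz):
    """Free-space path loss of the UAV-BS link: distance in metres, frequency in Hz."""
    return 20 * np.log10(d_m) + 20 * np.log10(f_hz) - 147.55

def rician_fading_gain(rng, k_factor=10.0):
    """Small-scale power gain with a dominant LoS component (Rician, unit mean power)."""
    los = np.sqrt(k_factor / (k_factor + 1.0))
    nlos = np.sqrt(1.0 / (2.0 * (k_factor + 1.0))) * (rng.normal() + 1j * rng.normal())
    return np.abs(los + nlos) ** 2

# Illustrative values only: a UAV 300 m from its serving BS at a 2 GHz carrier.
rng = np.random.default_rng(0)
xi = uav_bs_path_loss_db(300.0, 2e9)      # path loss in dB (~88 dB)
g = rician_fading_gain(rng)               # Rician power gain g_{j,s,c,a}
h = g * 10 ** (-xi / 10)                  # channel gain h_{j,s,c,a}
```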
For the terrestrial UE-BS links, we consider a Rayleigh fading channel. For a carrier frequency, $\hat{f}$, of 2 GHz, the path loss between UE $q$ and BS $s$ is given by [@pathloss_ground]: $$\begin{aligned} \zeta_{q,s}(\mathrm{dB}) = 15.3+37.6\ \mathrm{log}_{10}(d_{q,s}),\end{aligned}$$ where $d_{q\textrm{,}s}$ is the Euclidean distance between UE $q$ and BS $s$. The average signal-to-interference-plus-noise ratio (SINR), $\Gamma_{j,s,c,a}$, of the UAV-BS link between UAV $j$ at location $a$ $(a \in \mathcal{A})$ and BS $s$ over RB $c$ will be: $$\begin{aligned} \label{SNIR} \Gamma_{j,s,c,a}=\frac{P_{j,s,c,a} h_{j,s,c,a}}{I_{j,s,c}+B_c N_0},\end{aligned}$$ where $P_{j,s,c,a}=\widehat{P}_{j,s,a}/C_{j,s}$ is the transmit power of UAV $j$ at location $a$ to BS $s$ over RB $c$ and $\widehat{P}_{j,s,a}$ is the total transmit power of UAV $j$ to BS $s$ at location $a$. Here, the total transmit power of UAV $j$ is assumed to be distributed uniformly among all of its associated RBs. $h_{j,s,c,a}=g_{j,s,c,a}10^{-\xi_{j,s,a}/10}$ is the channel gain between UAV $j$ and BS $s$ on RB $c$ at location $a$ where $g_{j,s,c,a}$ is the Rician fading parameter. $N_0$ is the noise power spectral density and $B_{c}$ is the bandwidth of an RB $c$. $I_{j,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power on UAV $j$ at BS $s$ when transmitting over RB $c$, where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) connected to neighboring BSs $r$ and transmitting using the same RB $c$ as UAV $j$. $h_{k,s,c}=m_{k,s,c}10^{-\zeta_{k,s}/10}$ is the channel gain between UE $k$ and BS $s$ on RB $c$ where $m_{k,s,c}$ is the Rayleigh fading parameter. Therefore, the achievable data rate of UAV $j$ at location $a$ associated with BS $s$ can be defined as $R_{j,s,a}=\sum_{c=1}^{C_{j,s}} B_{c} \mathrm{log}_2(1+\Gamma_{j,s,c,a})$. Given the achievable data rate of UAV $j$ and assuming that each UAV is an M/D/1 queueing system, the corresponding latency over the UAV-BS wireless link is given by [@delay_book]: $$\begin{aligned} \label{delay_eqn} \tau_{j,s,a}=\frac{\lambda_{j,s}}{2\mu_{j,s,a}\textrm{(}\mu_{j,s,a}-\lambda_{j,s}\textrm{)}}+\frac{1}{\mu_{j,s,a}},\end{aligned}$$ where $\lambda_{j,s}$ is the average packet arrival rate (packets/s) traversing link $(j,s)$ and originating from UAV $j$. $\mu_{j,s,a}=R_{j,s,a}/\nu$ is the service rate over link $(j,s)$ at location $a$ where $\nu$ is the packet size. On the other hand, the achievable data rate for a ground UE $q$ served by BS $s$ is given by: $$\begin{aligned} R_{q,s}=\sum_{c=1}^{C_{q,s}}B_c\mathrm{log}_2\Big(1+\frac{P_{q,s,c}h_{q,s,c}}{I_{q,s,c}+B_cN_0}\Big),\end{aligned}$$ where $h_{q,s,c}=m_{q,s,c}10^{-\zeta_{q,s}/10}$ is the channel gain between UE $q$ and BS $s$ on RB $c$ and $m_{q,s,c}$ is the Rayleigh fading parameter. $P_{q,s,c}=\widehat{P}_{q,s}/C_{q,s}$ is the transmit power of UE $q$ to its serving BS $s$ on RB $c$ and $\widehat{P}_{q,s}$ is the total transmit power of UE $q$. Here, we also consider equal power allocation among the allocated RBs for the ground UEs. 
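The link-level quantities defined above (per-RB SINR, achievable rate, and the M/D/1 latency) can be sketched as follows for a single UAV-BS link; the RB bandwidth, noise density, packet size, arrival rate, and all numeric inputs are illustrative assumptions rather than the simulation parameters used later in the paper.

```python
import numpy as np

def uav_rate_and_delay(p_tx, gains, interference, bw_rb=180e3, n0=4e-21,
                       lam=100.0, packet_bits=2000.0):
    """Achievable rate and M/D/1 wireless latency of one UAV-BS link.

    p_tx: total UAV transmit power (W), split equally over the allocated RBs;
    gains, interference: per-RB channel gains and interference powers (linear);
    lam: packet arrival rate (packets/s); packet_bits: packet size nu (bits).
    """
    gains, interference = np.asarray(gains), np.asarray(interference)
    p_rb = p_tx / len(gains)                            # equal power per RB
    sinr = p_rb * gains / (interference + bw_rb * n0)   # per-RB SINR
    rate = np.sum(bw_rb * np.log2(1.0 + sinr))          # bits/s over all RBs
    mu = rate / packet_bits                             # service rate, packets/s
    if mu <= lam:
        return rate, np.inf                             # unstable queue
    delay = lam / (2 * mu * (mu - lam)) + 1 / mu        # M/D/1 sojourn time (s)
    return rate, delay

# Illustrative usage: 2 RBs, 100 mW total transmit power.
rate, delay = uav_rate_and_delay(0.1, [1e-9, 2e-9], [1e-13, 5e-14])
```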
$I_{q,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power experienced by UE $q$ at BS $s$ on RB $c$ where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) associated with the neighboring BSs $r$ and transmitting using the same RB $c$ as UE $q$. Problem Formulation ------------------- Our objective is to find the optimal path for each UAV $j$ based on its mission objectives as well as its interference on the ground network. Thus, we seek to minimize: a) the interference level that each UAV causes on the ground UEs and other UAVs, b) the transmission delay over the wireless link, and c) the time needed to reach the destination. To realize this, we optimize the paths of the UAVs jointly with the cell association vector and power control at each location $a \in \mathcal{A}$ along each UAV’s path. We consider a directed graph $G_j=(\mathcal{V}, \mathcal{E}_j)$ for each UAV $j$ where $\mathcal{V}$ is the set of vertices corresponding to the centers of the unit areas $a \in \mathcal{A}$ and $\mathcal{E}_j$ is the set of edges formed along the path of UAV $j$. We let $\boldsymbol{\widehat{P}}$ be the transmission power vector with each element $\widehat{P}_{j,s,a}\in[0, \overline{P}_{j}]$ being the transmission power level of UAV $j$ to its serving BS $s$ at location $a$ where $\overline{P}_{j}$ is the maximum transmission power of UAV $j$. $\boldsymbol{\alpha}$ is the path formation vector with each element $\alpha_{j,a,b}\in\{0,1\}$ indicating whether or not a directed link is formed from area $a$ towards area $b$ for UAV $j$, i.e., if UAV $j$ moves from $a$ to $b$ along its path. $\boldsymbol{\beta}$ is the UAV-BS association vector with each element $\beta_{j,s,a}\in\{0,1\}$ denoting whether or not UAV $j$ is associated with BS $s$ at location $a$. 
Next, we present our optimization problem whose goal is to determine the path of each UAV along with its cell association vector and its transmit power level at each location $a$ along its path $\boldsymbol{p}_j$: $$\begin{gathered} \label{obj} \hspace{-0.5cm}\min_{\boldsymbol{\widehat{P}}, \boldsymbol{\alpha}, \boldsymbol{\beta}}\vartheta\sum_{j=1}^{J}\sum_{s=1}^S \sum_{c=1}^{C_{j,s}}\sum_{a=1}^A \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a} h_{j,r,c,a}}{C_{j,s}}+\varpi \sum_{j=1}^J\sum_{a=1}^A\sum_{b=1, b\neq a}^A\alpha_{j,a,b} + \phi\sum_{j=1}^J\sum_{s=1}^S\sum_{a=1}^A\beta_{j,s,a}\tau_{j,s,a}, $$ $$\begin{aligned} \label{cons_1} \sum_{b=1, b\neq a}^A\alpha_{j,b,a} \leq 1 \;\;\forall j\in \mathcal{J}, a\in \mathcal{A},\end{aligned}$$ $$\begin{aligned} \label{cons_2} \sum_{a=1, a\neq o_j}^A\alpha_{j,o_j,a} \textrm{=} 1 \;\;\forall j\in \mathcal{J}, \sum_{a=1, a\neq d_j}^A\alpha_{j,a,d_j} \textrm{=} 1 \;\;\forall j\in \mathcal{J},\end{aligned}$$ $$\begin{aligned} \label{cons_3} \sum_{a\textrm{=}1, a\neq b}^A\alpha_{j,a,b}-\sum_{f\textrm{=}1, f\neq b}^A\alpha_{j,b,f}\textrm{=} 0 \;\forall j\in \mathcal{J},b\in \mathcal{A} \;(b\neq o_j, b\neq d_j),\end{aligned}$$ $$\begin{aligned} \label{cons_4} \widehat{P}_{j,s,a}\geq\sum_{b=1, b\neq a}^A\alpha_{j,b,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A},\end{aligned}$$ $$\begin{aligned} \label{cons_44} \widehat{P}_{j,s,a}\geq\beta_{j,s,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A},\end{aligned}$$ $$\begin{aligned} \label{cons_6} \sum_{s=1}^S \beta_{j,s,a} - \sum_{b=1, b\neq a}^A\alpha_{j,b,a}=0\;\;\;\forall j\in \mathcal{J}, a\in A,\end{aligned}$$ $$\begin{aligned} \label{cons_7} \sum_{c=1}^{C_{j,s}}\Gamma_{j,s,c,a}\geq\beta_{j,s,a}\overline{\Gamma}_j \;\;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A},\end{aligned}$$ $$\begin{aligned} \label{cons_8} 0\leq \widehat{P}_{j,s,a}\leq \overline{P}_{j} \;\;\forall j\in \mathcal{J}\textrm{,} s\in \mathcal{S}\textrm{,} \; a\in \mathcal{A},\end{aligned}$$ $$\begin{aligned} \label{cons_9} \alpha_{j\textrm{,}a\textrm{,}b}\in\{0\textrm{,}1\}\textrm{,}\; \beta_{j\textrm{,}s\textrm{,}a}\in\{0\textrm{,}1\} \;\;\forall j\in \mathcal{J}\textrm{,}\; s\in \mathcal{S}\textrm{,} \;\;a,b\in \mathcal{A}\textrm{.}\end{aligned}$$ The objective function in (\[obj\]) captures the total interference level that the UAVs cause on neighboring BSs along their paths, the length of the paths of the UAVs, and their wireless transmission delay. $\vartheta$, $\varpi$ and $\phi$ are multi-objective weights used to control the tradeoff between the three considered metrics. These weights can be adjusted to meet the requirements of each UAV’s mission. For instance, the time to reach the destination is critical in search and rescue applications while the latency is important for online video streaming applications. (\[cons\_1\]) guarantees that each area $a$ is visited by UAV $j$ at most once along its path $\boldsymbol{p}_j$. (\[cons\_2\]) guarantees that the trajectory of each UAV $j$ starts at its initial location $o_j$ and ends at its final destination $d_j$. (\[cons\_3\]) guarantees that if UAV $j$ visits area $b$, it should also leave from area $b$ $(b\neq o_j, b\neq d_j)$. (\[cons\_4\]) and (\[cons\_44\]) guarantee that UAV $j$ transmits to BS $s$ at area $a$ with power $\widehat{P}_{j,s,a}>0$ only if UAV $j$ visits area $a$, i.e., $a\in \boldsymbol{p}_j$ and such that $j$ is associated with BS $s$ at location $a$. 
(\[cons\_6\]) guarantees that each UAV $j$ is associated with one BS $s$ at each location $a$ along its path $\boldsymbol{p}_j$. (\[cons\_7\]) guarantees an upper limit, $\overline{\Gamma}_j$, for the SINR value $\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ on RB $c$ at each location $a$, $a\in \boldsymbol{p}_j$. This, in turn, ensures successful decoding of the transmitted packets at the serving BS. The value of $\overline{\Gamma}_j$ is application and mission specific. Note that the SINR check at each location $a$ is valid for our problem since we consider small-sized area units. (\[cons\_8\]) and (\[cons\_9\]) are the feasibility constraints. The formulated optimization problem is a mixed integer non-linear program, which is computationally complex to solve for large networks. To address this challenge, we adopt a distributed approach in which each UAV decides autonomously on its next path location along with its corresponding transmit power and association vector. In fact, a centralized approach requires control signals to be transmitted to the UAVs at all time. This might incur high round-trip latencies that are not desirable for real-time applications such as online video streaming. Further, a centralized approach requires a central entity to have full knowledge of the current state of the network and the ability to communicate with all UAVs at all time. However, this might not be feasible in case the UAVs belong to different operators or in scenarios in which the environment changes dynamically. Therefore, we next propose a distributed approach for each UAV $j$ to learn its path $\boldsymbol{p}_j$ along with its transmission power level and association vector at each location $a$ along its path in an autonomous and online manner. Towards a Self-Organizing Network of an Airborne Internet of Things {#game} =================================================================== Game-Theoretic Formulation -------------------------- Our objective is to develop a distributed approach that allows each UAV to take actions in an autonomous and online manner. For this purpose, we model the multi-agent path planning problem as a finite dynamic noncooperative game model $\mathcal{G}$ with perfect information [@walid_book]. Formally, we define the game as $\mathcal{G}=(\mathcal{J}, \mathcal{T}, \mathcal{Z}_j, \mathcal{V}_j, \Pi_j, u_j)$ with the set $\mathcal{J}$ of UAVs being the agents. $\mathcal{T}$ is a finite set of stages which correspond to the steps required for all UAVs to reach their sought destinations. $\mathcal{Z}_j$ is the set of actions that can be taken by UAV $j$ at each $t \in \mathcal{T}$, $\mathcal{V}_j$ is the set of all observed network states by UAV $j$ up to stage $T$, $\Pi_j$ is a set of probability distributions defined over all $z_j \in \mathcal{Z}_j$, and $u_j$ is the payoff function of UAV $j$. At each stage $t \in \mathcal{T}$, the UAVs take actions simultaneously. In particular, each UAV $j$ aims at determining its path $\boldsymbol{p}_j$ to its destination along with its optimal transmission power and cell association vector for each location $a \in \mathcal{A}$ along its path $\boldsymbol{p}_j$. Therefore, at each $t$, UAV $j$ chooses an action $z_j(t) \in \mathcal{Z}_j$ composed of the tuple $\boldsymbol{z}_j(t)=(\boldsymbol{a}_j(t), \widehat{P}_{j,s,a}(t), \boldsymbol{\beta}_{j,s,a}(t))$, where $\boldsymbol{a}_j(t)$={left, right, forward, backward, no movement} corresponds to a fixed step size, $\widetilde{a}_j$, in a given direction. 
$\widehat{P}_{j,s,a}(t)=[\widehat{P}_{1}, \widehat{P}_{2}, \cdots, \widehat{P}_{O}]$ corresponds to $O$ different maximum transmit power levels for each UAV $j$ and $\boldsymbol{\beta}_{j,s,a}(t)$ is the UAV-BS association vector. For each UAV $j$, let $\mathcal{L}_j$ be the set of its $L_j$ nearest BSs. The observed network state by UAV $j$ at stage $t$, $\boldsymbol{v}_j(t) \in \mathcal{V}_j$, is: $$\begin{aligned} \label{input} \boldsymbol{v}_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{,}\end{aligned}$$ where $\delta_{j,l,a}(t)$ is the Euclidean distance from UAV $j$ at location $a$ to BS $l$ at stage $t$, $\theta_{j,l,a}$ is the orientation angle in the xy-plane from UAV $j$ at location $a$ to BS $l$ defined as $\mathrm{tan}^{-1}(\Delta y_{j,l}/\Delta x_{j,l})$ [@orientation_angle] where $\Delta y_{j,l}$ and $\Delta x_{j,l}$ correspond to the difference in the $x$ and $y$ coordinates of UAV $j$ and BS $l$, $\theta_{j,d_j,a}$ is the orientation angle in the xy-plane from UAV $j$ at location $a$ to its destination $d_j$ defined as $\mathrm{tan}^{-1}(\Delta y_{j,d_j}/\Delta x_{j,d_j})$, and $\{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}}$ are the horizonal coordinates of all UAVs at stage $t$. For our model, we consider different range intervals for mapping each of the orientation angle and distance values, respectively, into different states. Moreover, based on the optimization problem defined in (\[obj\])-(\[cons\_9\]) and by incorporating the Lagrangian penalty method into the utility function definition for the SINR constraint (\[cons\_7\]), the resulting utility function for UAV $j$ at stage $t$, $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$, will be given by: $$\begin{aligned} \label{utility_t} u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))\textrm{=} \begin{cases} \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{+} C\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)<\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)=\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{-} C \textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)>\delta_{j,d_j,a'}(t-1)\textrm{,} \end{cases}\end{aligned}$$ where $\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is defined as: $$\begin{gathered} \hspace{-0.3cm}\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{=}-\vartheta' \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)} - \phi'\tau_{j,s,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \\- \varsigma (\mathrm{min}(0, \sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))-\overline{\Gamma}_j))^2,\end{gathered}$$ subject to (\[cons\_1\])-(\[cons\_6\]), (\[cons\_8\]) and (\[cons\_9\]). $\varsigma$ is the penalty coefficient for (\[cons\_7\]) and $C$ is a constant parameter. 
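For illustration, the stage utility above can be evaluated as in the following sketch, which combines the interference and delay terms of $\Phi$ with the quadratic SINR penalty and the $\pm C$ shaping on progress towards the destination; the weights, inputs, and function name are illustrative and are not part of the proposed algorithm itself.

```python
import numpy as np

def stage_utility(interf_terms, delay, sinr_sum, sinr_target,
                  dist_now, dist_prev,
                  theta=1.0, phi=1.0, varsigma=10.0, C=1.0):
    """Per-stage utility: weighted interference + delay + SINR penalty,
    shaped by +/-C depending on progress towards the destination."""
    # Lagrangian-style penalty: active only when the SINR target is violated.
    penalty = varsigma * min(0.0, sinr_sum - sinr_target) ** 2
    phi_val = -theta * np.sum(interf_terms) - phi * delay - penalty
    if dist_now < dist_prev:         # moved closer to the destination
        return phi_val + C
    if dist_now > dist_prev:         # moved away from the destination
        return phi_val - C
    return phi_val                   # distance unchanged
```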
$a'$ and $a$ are the locations of UAV $j$ at $(t-1)$ and $t$ where $\delta_{j,d_j,a}$ is the distance between UAV $j$ and its destination $d_j$. It is worth noting here that the action space of each UAV $j$ and, thus, the complexity of the proposed game $\mathcal{G}$ increases exponentially when updating the 3D coordinates of the UAVs. Nevertheless, each UAV’s altitude must be bounded in order to guarantee an SINR threshold for the UAV and a minimum achievable data rate for the ground UEs. Next, we derive an upper and lower bound for the optimal altitude of any given UAV $j$ based on the proposed utility function in (\[utility\_t\]). In essence, such bounds are valid for all values of the multi-objective weights $\vartheta '$, $\phi '$, and $\varsigma$. \[theorem\_altitude\] *For all values of $\vartheta '$, $\phi '$, and $\varsigma$, a given network state $\boldsymbol{v}_j(t)$, and a particular action $\boldsymbol{z}_j(t)$, the upper and lower bounds for the altitude of UAV $j$ are, respectively, given by:* $$\begin{aligned} h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))),\end{aligned}$$ $$\begin{aligned} h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))),\end{aligned}$$ *where $\chi$ corresponds to the minimum altitude at which a UAV can fly. $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$* *are expressed as:* $$\begin{gathered} \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2},\end{gathered}$$ *and* $$\begin{aligned} \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)),\end{aligned}$$ *where $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is the minimum altitude that UAV $j$ should operate at with respect to a particular neighboring BS $r$ and is expressed as:* $$\begin{aligned} \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2},\end{aligned}$$ See Appendix A. From the above theorem, we can deduce that the optimal altitude of the UAVs is a function of their objective function, location of the ground BSs, network design parameters, and the interference level from other UEs and UAVs in the network. 
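The bounds in Theorem \[theorem\_altitude\] are straightforward to evaluate numerically. The following is a minimal sketch, not part of the original formulation, that computes $\hat{h}_j^{\mathrm{max}}$ and $\hat{h}_{j,r}^{\mathrm{min}}$ from the quantities above; all numerical values, function names, and array shapes are illustrative assumptions only.

```python
import numpy as np

def h_max(P_hat, C_js, Gamma_bar, f_hat, g, I, B_c, N0, uav_xy, bs_xy, c_light=3e8):
    """Upper altitude bound of Theorem [theorem_altitude] (illustrative sketch).
    g and I are length-C_js arrays of channel gains g_{j,s,c,a} and interference I_{j,s,c}."""
    k = (4 * np.pi * f_hat / c_light) ** 2
    d2 = P_hat / (C_js * Gamma_bar * k) * np.sum(g / (I + B_c * N0))  # squared 3D distance
    horiz2 = np.sum((np.asarray(uav_xy) - np.asarray(bs_xy)) ** 2)
    return np.sqrt(max(d2 - horiz2, 0.0))   # the final bound also takes max with chi

def h_min(P_hat, C_js, f_hat, g_r, I_bar_r, uav_xy, bs_r_xy, c_light=3e8):
    """Lower altitude bound with respect to one neighboring BS r (illustrative sketch)."""
    k = (4 * np.pi * f_hat / c_light) ** 2
    d2 = P_hat * np.sum(g_r) / (C_js * k * np.sum(I_bar_r))
    horiz2 = np.sum((np.asarray(uav_xy) - np.asarray(bs_r_xy)) ** 2)
    return np.sqrt(max(d2 - horiz2, 0.0))

# Example with assumed values: 2 GHz carrier, 2 RBs, 3 dB SINR target.
print(h_max(P_hat=0.1, C_js=2, Gamma_bar=2.0, f_hat=2e9,
            g=np.array([1.0, 0.8]), I=np.array([1e-10, 1e-10]),
            B_c=180e3, N0=4e-21, uav_xy=(100, 50), bs_xy=(0, 0)))
```

In practice, $\hat{h}_j^{\mathrm{min}}$ is obtained by taking the maximum of the per-BS lower bound over all neighboring BSs $r$, and both bounds are clipped from below by the minimum flying altitude $\chi$, as stated in the theorem.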
Therefore, at each time step $t$, UAV $j$ would adjust its altitude level based on the values of $h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, thus adapting to the dynamics of the network. In essence, the derived upper and lower bounds for the optimal altitude of the UAVs allow a reduction of the action space of game $\mathcal{G}$, thus simplifying the process needed for the UAVs to find a solution, i.e., equilibrium, of the game. Next, we analyze the equilibrium point of the proposed game $\mathcal{G}$. Equilibrium Analysis -------------------- For our game $\mathcal{G}$, we are interested in studying the subgame perfect Nash equilibrium (SPNE) in behavioral strategies. An SPNE is a profile of strategies which induces a Nash equilibrium (NE) on every subgame of the original game. Moreover, a *behavioral strategy* allows each UAV to assign a probability distribution over its set of actions at each network state, independently across different network states. Here, note that there always exists at least one SPNE for any finite horizon extensive game with perfect information \[Selten’s Theorem\] [@SPNE_existence]. Let $\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))=(\pi_{j,z_1}(\boldsymbol{v}_j(t)), \pi_{j,z_2}(\boldsymbol{v}_j(t)), \cdots, \pi_{j,\boldsymbol{z}_{\mid Z_j\mid}}(\boldsymbol{v}_j(t))) \in \Pi_j$ be the behavioral strategy of UAV $j$ at state $\boldsymbol{v}_j(t)$ and let $\Delta (\mathcal{Z})$ be the set of all probability distributions over the action space $\mathcal{Z}$. Next, we define the notion of an SPNE. *A behavioral strategy $(\boldsymbol{\pi}^*_1(\boldsymbol{v}_j(t))\textrm{,} \cdots\textrm{,} \boldsymbol{\pi}_J^*(\boldsymbol{v}_j(t))) = (\boldsymbol{\pi}_j^*(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$ constitutes a* subgame perfect Nash equilibrium *if, $\forall j \in \mathcal{J}$, $\forall t \in \mathcal{T}$ and $\forall \boldsymbol{\pi}_j(\boldsymbol{v}_j(t)) \in \Delta (\mathcal{Z})$, $\overline{u}_j(\boldsymbol{\pi}^*_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))\geq \overline{u}_j(\boldsymbol{\pi}_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$.* Therefore, at each state $\boldsymbol{v}_j(t)$ and stage $t$, the goal of each UAV $j$ is to maximize its expected sum of discounted rewards, which is computed as the summation of the immediate reward for a given state along with the expected discounted utility of the next states: $$\begin{gathered} \label{expected_utility} \overline{u}(\boldsymbol{v}_j(t), \boldsymbol{\pi}_j(\boldsymbol{v}_j(t))\textrm{,} \boldsymbol{\pi}_{\textrm{-}j}(\boldsymbol{v}_j(t)))=\mathds{E}_{\boldsymbol{\pi}_j(t)}\left\{\sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l))| \boldsymbol{v}_{j,0}=\boldsymbol{v}_j\right\}\\ \textrm{=}\sum_{\boldsymbol{z}\in\mathcal{Z}} \sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l)) \prod_{j=1}^J \pi_{j\textrm{,}z_j}(\boldsymbol{v}_j(t+l))\textrm{,}\end{gathered}$$ where $\gamma \in (0, 1)$ is a discount factor for delayed rewards and $\mathds{E}_{\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))}$ denotes an expectation over trajectories of states and actions, in which actions are selected according to
$\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))$. Here, $u_j$ is the short-term reward for being in state $\boldsymbol{v}_j$ and $\overline{u}_j$ is the expected long-term total reward from state $\boldsymbol{v}_j$ onwards. Here, note that the UAV’s cell association vector, trajectory, and transmit power level are closely coupled with each other and their corresponding optimal values vary based on the UAVs’ objectives. In a multi-UAV network, each UAV must have full knowledge of the future reward functions at each information set and thus for all future network states in order to find the SPNE. This in turn necessitates knowledge of all possible future actions of all UAVs in the network and becomes challenging as the number of UAVs increases. To address this challenge, we rely on deep recurrent neural networks (RNNs) [@RNN_survey]. In essence, RNNs exhibit dynamic temporal behavior and are characterized by their adaptive memory that enables them to store necessary previous state information to predict future actions. On the other hand, deep neural networks are capable of dealing with large datasets. Therefore, next, we develop a novel deep RL algorithm based on ESNs, a special kind of RNN, for finding an SPNE of our game $\mathcal{G}$. Deep Reinforcement Learning for Online Path Planning and Resource Management {#algorithm} ============================================================================= In this section, we first introduce a deep ESN-based architecture that allows the UAVs to store previous states whenever needed while being able to learn future network states. Then, we propose an RL algorithm based on the proposed deep ESN architecture to learn an SPNE for our proposed game. Deep ESN Architecture {#ESN_architecture} --------------------- ESNs are a type of RNN with feedback connections that belong to the family of reservoir computing (RC) [@RNN_survey]. An ESN is composed of an input weight matrix $\boldsymbol{W}_{\mathrm{in}}$, a recurrent matrix $\boldsymbol{W}$, and an output weight matrix $\boldsymbol{W}_{\mathrm{out}}$. Because only the output weights are altered, ESN training is typically quick and computationally efficient compared to training other RNNs. Moreover, multiple non-linear reservoir layers can be stacked on top of each other resulting in a *deep ESN architecture*. Deep ESNs exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction while preserving the RC training efficiency. They can learn data representations at different levels of abstraction, hence disentangling the difficulties in modeling complex tasks by representing them in terms of simpler ones hierarchically. Let $N_{j,R}^{(n)}$ be the number of internal units of the reservoir of UAV $j$ at layer $n$, $N_{j,U}$ be the external input dimension of UAV $j$, and $N_{j,L}$ be the number of layers in the stack for UAV $j$.
Next, we define the following ESN components: - $\boldsymbol{v}_j(t) \in \mathds{R}^{N_{j,U}}$ as the external input of UAV $j$ at stage $t$, which effectively corresponds to the current network state, - $\boldsymbol{x}^{(n)}_j(t) \in \mathds{R}^{N_{j,R}^{(n)}}$ as the state of the reservoir of UAV $j$ at layer $n$ at stage $t$, - $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ as the input-to-reservoir matrix of UAV $j$ at layer $n$, where $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,U}}$ for $n=1$, and $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n-1)}}$ for $n>1$, - $\boldsymbol{W}_j^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n)}}$ as the recurrent reservoir weight matrix for UAV $j$ at layer $n$, - $\boldsymbol{W}_{j, \mathrm{out}} \in \mathds{R}^{\mid\mathcal{Z}_j\mid \times (N_{j,U}+\sum_{n}N_{j,R}^{(n)})}$ as the reservoir-to-output matrix of UAV $j$, acting on the external input together with the reservoir states of all layers. The objective of the deep ESN architecture is to approximate a function $\boldsymbol{F}_j=(F_j^{1}, F_j^{2}, \cdots, F_j^{N_{j,L}})$ for learning an SPNE for each UAV $j$ at each stage $t$. For each $n=1, 2, \cdots, N_{j,L}$, the function $F_j^{(n)}$ describes the evolution of the state of the reservoir at layer $n$, i.e., $\boldsymbol{x_{j}^{(n)}}(t)=F_j^{(n)}(\boldsymbol{v}_j(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n=1$ and $\boldsymbol{x_{j}^{(n)}}(t)=F_j^{(n)}(\boldsymbol{x}_j^{(n-1)}(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n>1$. $\boldsymbol{W}_{j, \mathrm{out}}$ and $\boldsymbol{x}^{(n)}_j(t)$ are initialized to zero while $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ and $\boldsymbol{W}_j^{(n)}$ are randomly generated. Note that although the dynamic reservoir is initially generated randomly, it is combined later with the external input, $\boldsymbol{v}_j(t)$, in order to store the network states and with the trained output matrix, $\boldsymbol{W}_{j, \mathrm{out}}$, so that it can approximate the reward function. Moreover, the spectral radius of $\boldsymbol{W}_j^{(n)}$ (i.e., the largest eigenvalue in absolute value), $\rho_j^{(n)}$, must be strictly smaller than 1 to guarantee the stability of the reservoir [@echo_state_property]. In fact, the value of $\rho_j^{(n)}$ is related to the variable memory length of the reservoir that enables the proposed deep ESN framework to store necessary previous state information, with larger values of $\rho_j^{(n)}$ resulting in longer memory length. We next define the deep ESN components: the input and reward functions. For each deep ESN of UAV $j$, we distinguish between two types of inputs: the external input, $\boldsymbol{v}_j(t)$, which is fed to the first layer of the deep ESN and corresponds to the current state of the network, and the input that is fed to all other layers $n>1$. For our proposed deep ESN, the input to any layer $n>1$ at stage $t$ corresponds to the state of the previous layer, $\boldsymbol{x}_j^{(n-1)}(t)$. Define $\widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))= u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \prod_{j=1}^J \pi_{j,z_j}(\boldsymbol{v}_j(t))$ as the expected value of the instantaneous utility function $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$ in (\[utility\_t\]) for UAV $j$ at stage $t$.
Therefore, the reward that UAV $j$ obtains from action $\boldsymbol{z}_j$ at a given network state $\boldsymbol{v}_j(t)$: $$\begin{gathered} \label{reward} r_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \textrm{=} \begin{cases} \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t)) \textrm{,} \; \mathrm{if\; UAV} \;j\; \mathrm{reaches} \; d_j\textrm{,}\\ \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t))\textrm{+}\gamma \mathrm{max}_{\boldsymbol{z}_j \in \mathcal{Z}_j} \boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t \textrm{+} 1)\textrm{,} t \textrm{+}1) \\ \hspace{0.4 cm} [\boldsymbol{v}'_j(t), \boldsymbol{x}'^{(1)}_j(t), \boldsymbol{x}'^{(2)}_j(t), \cdots, \boldsymbol{x}'^{(n)}_j(t)]\textrm{,} \; \mathrm{otherwise}\textrm{.} \end{cases}\end{gathered}$$ Here, $\boldsymbol{v}'_j(t+1)$ and $\boldsymbol{x}'^{(n)}_j(t)$, correspond, respectively, to the next network state and reservoir state of layer $(n)$, at stage $(t+1)$, upon taking actions $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ at stage $t$. Fig. \[Deep\_ESN\] shows the proposed reservoir architecture of the deep ESN consisting of two layers. ![Proposed Deep ESN architecture.[]{data-label="Deep_ESN"}](figures/Deep_ESN){width="13cm"} Update Rule Based on Deep ESN ----------------------------- We now introduce the deep ESN’s update phase that each UAV uses to store and estimate the reward function of each path and resource allocation scheme at a given stage $t$. In particular, we consider leaky integrator reservoir units [@leaky_integrator] for updating the state transition functions $\boldsymbol{x}^{(n)}_j(t)$ at stage $t$. Therefore, the state transition function of the first layer $\boldsymbol{x}^{(1)}_j(t)$ will be: $$\begin{aligned} \label{state_1} \boldsymbol{x}^{(1)}_j(t)= (1-\omega_j^{(1)})\boldsymbol{x}_j^{(1)}(t-1)+\omega_j^{(1)}\mathrm{tanh}(\boldsymbol{W}_{j, \mathrm{in}}^{(1)}\boldsymbol{v}_j(t)+\boldsymbol{W}_j^{(1)}\boldsymbol{x}_j^{(1)}(t-1)),\end{aligned}$$ where $\omega_j^{(n)} \in [0, 1]$ is the leaking parameter at layer $n$ for UAV $j$ which relates to the speed of the reservoir dynamics in response to the input, with larger values of $\omega_j^{(n)}$ resulting in a faster response of the corresponding $n$-th reservoir to the input. The state transition of UAV $j$, $\boldsymbol{x}^{(n)}_j(t)$, for $n>1$ is given by: $$\begin{aligned} \label{state_n} \boldsymbol{x}^{(n)}_j(t)= (1-\omega_j^{(n)})\boldsymbol{x}_j^{(n)}(t-1)+\omega_j^{(n)}\mathrm{tanh}(\boldsymbol{W}_{j,\mathrm{in}}^{(n)}\boldsymbol{x}_j^{(n-1)}(t)+\boldsymbol{W}_j^{(n)}\boldsymbol{x}_j^{(n)}(t-1)),\end{aligned}$$ The output $y_j(t)$ of the deep ESN at stage $t$ is used to estimate the reward of each UAV $j$ based on the current adopted action $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ of UAV $j$ and other UAVs $(-j)$, respectively, for the current network state $\boldsymbol{v}_j(t)$ after training $\boldsymbol{W}_{j, \mathrm{out}}$. It can be computed as: $$\begin{aligned} \label{output} y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))=\boldsymbol{W}_{j, \mathrm{out}}(\boldsymbol{z}_j(t), t) [\boldsymbol{v}_j(t), \boldsymbol{x}^{(1)}_j(t), \boldsymbol{x}^{(2)}_j(t), \cdots, \boldsymbol{x}^{(n)}_j(t)].\end{aligned}$$ We adopt a temporal difference RL approach for training the output matrix $W_{j, \mathrm{out}}$ of the deep ESN architecture. 
In particular, we employ a linear gradient descent approach using the reward error signal, given by the following update rule [@RL_ESN]: $$\begin{gathered} \label{W_out} \hspace{-0.2cm}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t\textrm{+}1)\textrm{=}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t)\textrm{+}\lambda_j ( r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) -y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))) [\boldsymbol{v}_j(t)\textrm{,} \\ \boldsymbol{x}^{(1)}_j(t)\textrm{,} \boldsymbol{x}^{(2)}_j(t)\textrm{,} \cdots\textrm{,} \boldsymbol{x}^{(n)}_j(t)]^T\textrm{.}\end{gathered}$$ Here, note that the objective of each UAV is to minimize the value of the error function $e_j(\boldsymbol{v}_j(t))= \left| r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) - y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))\right|$. Proposed Deep RL Algorithm -------------------------- Based on the proposed deep ESN architecture and update rule, we next introduce a multi-agent deep RL framework that the UAVs can use to learn an SPNE in behavioral strategies for the game $\mathcal{G}$. The algorithm is divided into two phases: *training and testing*. In the former, UAVs are trained offline, before they become active in the network, using the architecture of Subsection \[ESN\_architecture\]. The testing phase corresponds to the actual execution of the algorithm, after the weights $\boldsymbol{W}_{j, \mathrm{out}}, \forall j \in \mathcal{J}$, have been optimized; it is implemented on each UAV for execution at run time. **Initialization:**\ $\boldsymbol{\pi}_{j,z_j}(\boldsymbol{v}_j(t))=\frac{1}{\mid \mathcal{Z}_j\mid} \forall t\in \mathcal{T}, z_j \in \mathcal{Z}_j$, $y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_{j}(t))=0$, $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$, $\boldsymbol{W}_{j}^{(n)}$, $\boldsymbol{W}_{j, \mathrm{out}}$.\ **Input:** Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\[input\]). **Step 1: Action selection**\ Each UAV $j$ selects a random action $\boldsymbol{z}_j(t)$ with probability $\epsilon$;\ otherwise, UAV $j$ selects $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$. **Step 2: Location, cell association and transmit power update**\ Each UAV $j$ updates its location, cell association and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\ **Step 3: Reward computation**\ Each UAV $j$ computes its reward values based on (\[reward\]).\ **Step 4: Action broadcast**\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\ **Step 5: Deep ESN update**\ - Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\[state\_1\]) and (\[state\_n\]).\ - Each UAV $j$ computes its output $y_j\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$ based on (\[output\]).\ - The weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\[W\_out\]).\ During the training phase, each UAV aims at optimizing its output weight matrix $\boldsymbol{W}_{j\textrm{,} \mathrm{out}}$ such that the value of the error function $e_j(\boldsymbol{v}_j(t))$ at each stage $t$ is minimized.
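To make the per-stage computation concrete, the sketch below implements the leaky-integrator state updates (\[state\_1\]) and (\[state\_n\]), the output (\[output\]), and the weight update (\[W\_out\]) for a single UAV. Dimensions, initialization ranges, and the reward value are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_reservoir(n_in, n_res, rho=0.9):
    """Random input and reservoir matrices; W is rescaled so its spectral radius is rho < 1."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

n_u, layers, n_actions = 8, [12, 6], 7            # assumed input size, reservoir sizes, |Z_j|
params = [init_reservoir(n_u, layers[0]), init_reservoir(layers[0], layers[1])]
x = [np.zeros(n) for n in layers]                 # reservoir states, initialized to zero
W_out = np.zeros((n_actions, n_u + sum(layers)))  # trained output matrix, initialized to zero
omega, lam = 0.99, 0.01                           # leaking parameter and learning rate

def esn_step(v):
    """Leaky-integrator update of all layers for external input v (eqs. (state_1), (state_n))."""
    inp = v
    for n, (W_in, W) in enumerate(params):
        x[n] = (1 - omega) * x[n] + omega * np.tanh(W_in @ inp + W @ x[n])
        inp = x[n]
    return np.concatenate([v] + x)                # feature vector fed to W_out

def output(feat):
    return W_out @ feat                           # y_j for every action (eq. (output))

# One temporal-difference update (eq. (W_out)) for an assumed state and placeholder reward.
v = rng.uniform(0, 1, n_u)
feat = esn_step(v)
y = output(feat)
z = int(np.argmax(y))                             # greedily selected action index
r = 1.0                                           # placeholder reward value from (reward)
W_out[z] += lam * (r - y[z]) * feat               # update only the row of the chosen action
```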
In particular, the training phase is composed of multiple iterations, each consisting of multiple rounds, i.e., the number of steps required for all UAVs to reach their corresponding destinations $d_j$. At each round, UAVs face a tradeoff between playing the action associated with the highest expected utility and trying out all their actions to improve their estimates of the reward function in (\[reward\]). This in fact corresponds to the exploration and exploitation tradeoff, in which UAVs need to strike a balance between exploring their environment and exploiting the knowledge accumulated through such exploration [@sutton]. Therefore, we adopt the $\epsilon$-greedy policy in which UAVs choose the action that yields the maximum utility value with a probability of $1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}$ while randomly exploring other actions with a probability of $\frac{\epsilon}{\mid\mathcal{Z}_j \mid}$. The strategy over the action space will be: $$\begin{aligned} \pi_{j,z_j}(\boldsymbol{v}_j(t))= \begin{cases} 1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{if} \; \boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_j(t), \boldsymbol{z}_{j}(t) \right),\\ \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{otherwise}. \end{cases}\end{aligned}$$ Based on the selected action $\boldsymbol{z}_j(t)$, each UAV $j$ updates its location, cell association, and transmission power level and computes its reward function according to (\[reward\]). To determine the next network state, each UAV $j$ broadcasts its selected action to all other UAVs in the network. Then, each UAV $j$ updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture according to (\[state\_1\]) and (\[state\_n\]). The output $y_j$ at stage $t$ is then updated based on (\[output\]). Finally, the weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\[W\_out\]). Note that a UAV stops taking any actions once it has reached its destination. A summary of the training phase is given in Algorithm \[training\_algorithm\]. **Input:** Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\[input\]). **Step 1: Action selection**\ Each UAV $j$ selects an action $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$. **Step 2: Location, cell association and transmit power update**\ Each UAV $j$ updates its location, cell association and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\ **Step 3: Action broadcast**\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\ **Step 4: State transition vector update**\ Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\[state\_1\]) and (\[state\_n\]).\ Meanwhile, the testing phase corresponds to the actual execution of the algorithm. In this phase, each UAV chooses its action greedily for each state $\boldsymbol{v}_j(t)$, i.e., $\mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))$, and updates its location, cell association, and transmission power level accordingly.
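A compact sketch of this $\epsilon$-greedy rule (training) and of the purely greedy selection used in the testing phase is given below, assuming the per-action outputs $y_j$ of the deep ESN are available as a vector; it is an illustration under these assumptions rather than the exact implementation.

```python
import numpy as np

def select_action(y, eps=0.1, rng=np.random.default_rng()):
    """epsilon-greedy over the action outputs y (training phase); set eps=0 for testing.
    Picking uniformly with probability eps reproduces the stated probabilities:
    1 - eps + eps/|Z_j| for the greedy action and eps/|Z_j| for every other action."""
    if rng.random() < eps:
        return int(rng.integers(len(y)))   # explore: uniform over Z_j
    return int(np.argmax(y))               # exploit: largest estimated reward
```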
Each UAV then broadcasts its selected action and updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $n$ of the deep ESN architecture based on (\[state\_1\]) and (\[state\_n\]). A summary of the testing phase is given in Algorithm \[testing\_algorithm\]. It is important to note that analytically guaranteeing the convergence of the proposed deep learning algorithm is challenging as it is highly dependent on the hyperparameters used during the training phase. For instance, using too few neurons in the hidden layers results in underfitting, which could make it hard for the neural network to detect the signals in a complicated data set. On the other hand, using too many neurons in the hidden layers can either result in overfitting or an increase in the training time that could prevent the training of the neural network. Overfitting corresponds to the case when the model learns the random fluctuations and noise in the training data set to the extent that it negatively impacts the model’s ability to generalize when fed with new data. Therefore, in this work, we limit our convergence analysis to simulation results (see Section \[simulation\]) showing that, under a reasonable choice of the hyperparameters, convergence is observed for our proposed game. In such cases, it is important to study the convergence point and the convergence complexity of our proposed algorithm. Next, we characterize the convergence point of our proposed algorithm. *If Algorithm \[training\_algorithm\] converges, then the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$.* An SPNE is a strategy profile that induces a Nash equilibrium on every subgame. Therefore, at the equilibrium state of each subgame, there is no incentive for any UAV to deviate after observing any history of joint actions. Moreover, given the fact that an ESN framework exhibits adaptive memory that enables it to store necessary previous state information, UAVs can essentially retain other players’ actions at each stage $t$ and thus take actions accordingly. To show that our proposed scheme guarantees convergence to an SPNE, we use the following result from [@SPNE_existence]. For our proposed game $\mathcal{G}$, the payoff functions in (\[reward\]) are bounded, and the number of players, the state space, and the action space are finite. Therefore, $\mathcal{G}$ is a finite game and hence an SPNE exists. This follows from Selten’s theorem which states that every finite extensive form game with perfect recall possesses an SPNE where the players use behavioral strategies. Here, it is important to note that for finite dynamic games of perfect information, any backward induction solution is an SPNE [@walid_book]. Therefore, given the fact that, for our proposed game $\mathcal{G}$, each UAV aims at maximizing its expected sum of *discounted rewards* at each stage $t$ as given in (\[reward\]), one can guarantee that the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$. This completes the proof. Moreover, it is important to note that the convergence complexity of the proposed deep RL algorithm for reaching an SPNE is $O(J \times A^2)$. Next, we analyze the computational complexity of the proposed deep RL algorithm for practical scenarios in which the number of UAVs is relatively small.
\[proposition\_complexity\] *For practical network scenarios, the computational complexity of the proposed deep RL training algorithm is $O(A^3)$ and reduces to $O(A^2)$ when considering a fixed altitude for the UAVs, where $A$ is the number of discretized unit areas.* Consider the case in which the UAVs can move with a fixed step size in a 3D space. For such scenarios, the state vector $\boldsymbol{v}'_j(t)$ is defined as: $$\begin{aligned} \label{input_3D} \boldsymbol{v}'_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\textrm{,} h_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{,}\end{aligned}$$ For each state $\boldsymbol{v}'_j(t)$, the action of UAV $j$ is a function of the location, transmission power level and cell association vector of all other UAVs in the network. Nevertheless, the number of possible locations of other UAVs in the network is much larger than the possible number of transmission power levels and the size of the cell association vector of those UAVs. Therefore, by the law of large numbers, one can consider the number of possible locations of other UAVs only when analyzing the convergence complexity of the proposed training algorithm. Moreover, for practical scenarios, the total number of UAVs in a given area is considered to be relatively small as compared to the number of discretized unit areas, i.e., $J \ll A$ (3GPP admission control policy for cellular-connected UAVs [@3GPP_standards]). Therefore, by the law of large numbers and given the fact that the UAVs take actions in a parallel fashion, the computational complexity of our proposed algorithm is $O(A^3)$ when the UAVs update their x, y and z coordinates and reduces to $O(A^2)$ when considering fixed altitudes for the UAVs. This completes the proof. From Theorem \[proposition\_complexity\], we can conclude that the computational complexity of the proposed training algorithm is significantly reduced when considering a fixed altitude for the UAVs. This in essence is due to the reduction of the state space dimension when updating the $x$ and $y$ coordinates only. It is important to note here that there exists a tradeoff between the computational complexity of the proposed training algorithm and the resulting network performance. In essence, updating the 3D coordinates of the UAVs at each step $t$ allows the UAVs to better explore the space, thus providing more opportunities for maximizing their corresponding utility functions. Therefore, from both Theorems \[proposition\_complexity\] and \[theorem\_altitude\], the UAVs can update their x and y coordinates only during the learning phase while operating within the upper and lower altitude bounds derived in Theorem \[theorem\_altitude\]. Simulation Results and Analysis {#simulation} =============================== For our simulations, we consider an 800 m $\times$ 800 m square area divided into 40 m $\times$ 40 m grid areas, in which we deploy 15 BSs uniformly at random. All statistical results are averaged over several independent testing iterations during which the initial locations and destinations of the UAVs and the locations of the BSs and the ground UEs are randomized. The maximum transmit power for each UAV is discretized into 5 equally spaced levels. We consider an uncorrelated Rician fading channel with parameter $\widehat{K}=1.59$ [@rician_fading].
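For reference, the main settings just stated can be collected in a small configuration sketch, complementing Table \[parameters\]; any parameter not listed in the text (e.g., the exact power range or UE density) is left out rather than assumed.

```python
# Simulation settings as stated in the text; unspecified values are intentionally omitted.
SIM_CONFIG = {
    "area_m": (800, 800),        # total simulation area
    "cell_m": (40, 40),          # size of each discretized unit area
    "num_bs": 15,                # BSs deployed uniformly at random
    "num_power_levels": 5,       # equally spaced UAV maximum transmit power levels
    "rician_K": 1.59,            # uncorrelated Rician fading parameter
}
```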
The external input of the deep ESN architecture, $\boldsymbol{v}_j(t)$, is a function of the number of UAVs and thus the number of hidden nodes per layer, $N_{j,R}^{(n)}$, varies with the number of UAVs. For instance, $N_{j,R}^{(n)}= 12$ and $6$ for $n=1$ and $2$, respectively, for a network size of 1 and 2 UAVs, and 20 and 10 for a network size of 3, 4, and 5 UAVs. Table \[parameters\] summarizes the main simulation parameters. ![The (a) upper bound for the optimal altitude of the UAVs as a function of the SINR threshold value $(\bar{\Gamma})$ and for different transmit power levels and ground network density and (b) lower bound for the optimal altitude of the UAVs as a function of the interference threshold value $(\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a})$ and for different transmit power levels.[]{data-label="altitude_results"}](figures/maximum_altitude "fig:"){width="10cm"} \ ![The (a) upper bound for the optimal altitude of the UAVs as a function of the SINR threshold value $(\bar{\Gamma})$ and for different transmit power levels and ground network density and (b) lower bound for the optimal altitude of the UAVs as a function of the interference threshold value $(\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a})$ and for different transmit power levels.[]{data-label="altitude_results"}](figures/minimum_altitude "fig:"){width="10cm"} Fig. \[maximum\_altitude\] shows the upper bound for the optimal altitude of UAV $j$ as a function of the SINR threshold value, $\bar{\Gamma}$, for different transmit power levels and ground network densities, based on Theorem \[theorem\_altitude\]. On the other hand, Fig. \[minimum\_altitude\] shows the lower bound for the optimal altitude of UAV $j$ as a function of the interference threshold value, $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$, for different transmit power levels, based on Theorem \[theorem\_altitude\]. From Figs. \[maximum\_altitude\] and \[minimum\_altitude\], we can deduce that the optimal altitude range of a given UAV is a function of network design parameters, ground network data requirements, the density of the ground network, and its observed network state $\boldsymbol{v}_j(t)$. For instance, the upper bound on the UAV’s optimal altitude decreases as $\bar{\Gamma}$ increases while its lower bound decreases as $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$ increases. Moreover, the maximum altitude of the UAV decreases as the ground network gets denser while its lower bound increases as the ground network data requirements increase. Thus, as the data requirements of the ground network increase, a UAV should operate at higher altitudes. A UAV should also operate at higher altitudes when its transmit power level increases due to the increase in the lower and upper bounds of its optimal altitude. ![Path of a UAV for our approach and shortest path scheme.[]{data-label="snapshot"}](figures/snapshot){width="11cm"} Fig. \[snapshot\] shows a snapshot of the path of a single UAV resulting from our approach and from a shortest path scheme. Unlike our proposed scheme, which accounts for other wireless metrics during path planning, the objective of the UAVs in the shortest path scheme is to reach their destinations with the minimum number of steps. Table \[snapshot\_table\] presents the performance results for the paths shown in Fig. \[snapshot\]. From Fig. \[snapshot\], we can see that, for our proposed approach, the UAV selects a path away from the densely deployed area while maintaining proximity to its serving BS in a way that would minimize the steps required to reach its destination.
This path will minimize the interference level that the UAV causes on the ground UEs and its wireless latency (Table \[snapshot\_table\]). From Table \[snapshot\_table\], we can see that our proposed approach achieves 25% increase in the average rate per ground UE and 47% decrease in the wireless latency as compared to the shortest path, while requiring the same number of steps that the UAV needs to reach the destination. ![Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE as compared to the shortest path approach, for different number of UAVs.[]{data-label="scalability"}](figures/scalability){width="11cm"} =0.1cm Fig. \[scalability\] compares the average values of the (a) wireless latency per UAV and (b) rate per ground UE resulting from our proposed approach and the baseline shortest path scheme. Moreover, Table \[steps\_table\] compares the number of steps required by all UAVs to reach their corresponding destinations for the scenarios presented in Fig. \[scalability\]. From Fig. \[scalability\] and Table \[steps\_table\], we can see that, compared to the shortest path scheme, our approach achieves a lower wireless latency per UAV and a higher rate per ground UE for different numbers of UAVs while requiring a number of steps that is comparable to the baseline. In fact, our scheme provides a better tradeoff between energy efficiency, wireless latency, and ground UE data rate compared to the shortest path scheme. For instance, for 5 UAVs, our scheme achieves a 37% increase in the average achievable rate per ground UE, 62% decrease in the average wireless latency per UAV, and 14% increase in energy efficiency. Indeed, one can adjust the multi-objective weights of our utility function based on several parameters such as the rate requirements of the ground network, the power limitation of the UAVs, and the maximum tolerable wireless latency of the UAVs. Moreover, Fig. \[scalability\] shows that, as the number of UAVs increases, the average delay per UAV increases and the average rate per ground UE decreases, for all schemes. This is due to the increase in the interference level on the ground UEs and other UAVs as a result of the LoS link between the UAVs and the BSs. ![Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for different altitudes of the UAVs.[]{data-label="altitude"}](figures/altitude){width="11cm"} Fig. \[altitude\] studies the effect of the UAVs’ altitude on the average values of the (a) wireless latency per UAV and (b) rate per ground UE for different utility functions. From Fig. \[altitude\], we can see that, as the altitude of the UAVs increases, the average wireless latency per UAV increases for all studied utility functions. This is mainly due to the increase in the distance of the UAVs from their corresponding serving BSs which accentuates the path loss effect. Moreover, higher UAV altitudes result in a higher average data rate per ground UE for all studied utility functions mainly due to the decrease in the interference level that is caused from the UAVs on neighboring BSs. Here, there exists a tradeoff between minimizing the average wireless delay per UAV and maximizing the average data rate per ground UE. 
Therefore, alongside the multi-objective weights, the altitude of the UAVs can be varied such that the ground UE rate requirements are met while minimizing the wireless latency for each UAV based on its mission objective. ![Effect of the ground network densification on the average transmit power level of the UAVs along their paths.[]{data-label="power_densification"}](figures/power_densification){width="11cm"} Fig. \[power\_densification\] shows the average transmit power level per UAV along its path as a function of the number of BSs considering two utility functions, one for minimizing the average wireless latency for each UAV and the other for minimizing the interference level on the ground UEs. From Fig. \[power\_densification\], we can see that network densification has an impact on the transmission power level of the UAVs. For instance, when minimizing the wireless latency of each UAV along its path, the average transmit power level per UAV increases from 0.04 W to 0.06 W as the number of ground BSs increases from 10 to 30. In essence, the increase in the transmit power level is the result of the increase in the interference level from the ground UEs as the ground network becomes denser. As a result, the UAVs will transmit using a larger transmission power level so as to minimize their wireless transmission delay. On the other hand, the average transmit power level per UAV decreases from 0.036 W to 0.029 W in the case of minimizing the interference level caused on neighboring BSs. This is due to the fact that as the number of BSs increases, the interference level caused by each UAV on the ground network increases thus requiring each UAV to decrease its transmit power level. Note that when minimizing the wireless latency, the average transmit power per UAV is always larger than in the case of minimizing the interference level, irrespective of the number of ground BSs. Therefore, the transmit power level of the UAVs is a function of their mission objective and the number of ground BSs. ![Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for a fixed altitude of 120m.[]{data-label="densification"}](figures/densification){width="11cm"} Fig. \[densification\] presents the (a) wireless latency per UAV and (b) rate per ground UE for different utilities as a function of the number of BSs and for a fixed altitude of 120 m. From this figure, we can see that, as the ground network becomes denser, the average wireless latency per UAV increases and the average rate per ground UE decreases for all considered cases. For instance, when the objective is to minimize the interference level along with energy efficiency, the average wireless latency per UAV increases from 13 ms to 47 ms and the average rate per ground UE decreases from 0.86 Mbps to 0.48 Mbps as the number of BSs increases from 10 to 30. This is due to the fact that a denser network results in higher interference on the UAVs as well as other UEs in the network. ![Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for various altitudes of the UAVs.[]{data-label="densification_altitude"}](figures/densification_altitude){width="11cm"} Fig. \[densification\_altitude\] investigates the (a) wireless latency per UAV and (b) rate per ground UE for different values of the UAVs’ altitude and as a function of the number of BSs.
From this figure, we can see that as the UAV altitude increases and/or the ground network becomes denser, the average wireless latency per UAV increases. For instance, the delay increases by 27% as the altitude of the UAVs increases from 120 to 240 m for a network consisting of 20 BSs and increases by 120% as the number of BSs increases from 10 to 30 for a fixed altitude of 180 m. This essentially follows from Theorem \[theorem\_altitude\] and the results in Fig. \[maximum\_altitude\] which shows that the maximum altitude of the UAV decreases as the ground network gets denser and thus the UAVs should operate at a lower altitude when the number of BSs increases from 10 to 30. Moreover, the average rate per ground UE decreases as the ground network becomes denser due to the increase in the interference level and increases as the altitude of the UAVs increases. Therefore, the resulting network performance depends highly on both the UAVs altitude and the number of BSs in the network. For instance, in case of a dense ground network, the UAVs need to fly at a lower altitude for applications in which the wireless transmission latency is more critical and at a higher altitude in scenarios in which a minimum achievable data rate for the ground UEs is required. ![The average rate per ground UE as a function of the number of interferer BSs in the state definition $(\emph{L}_j)$.[]{data-label="interferers"}](figures/interferers){width="11cm"} Fig. \[interferers\] shows the effect of varying the number of nearest BSs ($\emph{L}_j$) in the observed network state of UAV $j$, $\boldsymbol{v}_j(t)$, on the average data rate per ground UE for different utility functions. From Fig. \[interferers\], we can see an improvement in the average rate per ground UE as the number of nearest BSs in the state definition increases. For instance, in scenarios in which the UAVs aim at minimizing the interference level they cause on the ground network along their paths, the average rate per ground UE increases by 28% as the number of BSs in the state definition increases from 1 to 5. This gain results from the fact that as $\emph{L}_j$ increases, the UAVs get a better sense of their surrounding environment and thus can better select their next location such that the interference level they cause on the ground network is minimized. It is important to note here, that as $\emph{L}_j$ increases, the size of the external input ($\boldsymbol{v}_j$) increases thus requiring a larger number of neurons in each layer. This in turn increases the number of required iterations for convergence. Therefore, a tradeoff exists between improving the performance of the ground UEs and the running complexity of the proposed algorithm. ![Effect of the learning rate on the convergence of offline training.[]{data-label="learning_rate"}](figures/learning_rate){width="11cm"} Fig. \[learning\_rate\] shows the average of the error function $e_j(\boldsymbol{v}_j(t))$ resulting from the offline training phase as a function of a multiple of 20 iterations while considering different values for the learning rate, $\lambda$. The learning rate determines the step size the algorithm takes to reach the optimal solution and, thus, it impacts the convergence rate of our proposed framework. From Fig. \[learning\_rate\], we can see that small values of the learning rate, i.e., $\lambda =0.0001$, result in a slow speed of convergence. 
On the other hand, for large values of the learning rate, such as $\lambda=0.1$, the error function decays fast for the first few iterations but then remains constant. Here, $\lambda=0.1$ does not lead to convergence during the testing phase, but $\lambda =0.0001$ and $\lambda=0.01$ result in convergence, though requiring a different number of training iterations. In fact, a large learning rate can cause the algorithm to diverge from the optimal solution. This is because large initial learning rates will decay the loss function faster and thus make the model get stuck at a particular region of the optimization space instead of better exploring it. Clearly, our framework achieves better performance for $\lambda=0.01$, as compared to smaller and larger values of the learning rate. We also note that the error function does not reach the value of zero during the training phase. This is due to the fact that, for our approach, we adopt the early stopping technique to avoid overfitting, which occurs when the training error decreases at the expense of an increase in the value of the test error [@RNN_survey]. Conclusion ========== In this paper, we have proposed a novel interference-aware path planning scheme that allows cellular-connected UAVs to minimize the interference they cause on a ground network as well as their wireless transmission latency while transmitting online mission-related data. We have formulated the problem as a noncooperative game in which the UAVs are the players. To solve the game, we have proposed a deep RL algorithm based on ESNs, which is guaranteed to reach an SPNE if it converges. The proposed algorithm enables each UAV to decide on its next location, transmission power level, and cell association vector in an autonomous manner, thus adapting to the changes in the network. Simulation results have shown that the proposed approach achieves a lower wireless latency per UAV and a higher rate per ground UE while requiring a number of steps that is comparable to the shortest path scheme. The results have also shown that a UAV’s altitude plays a vital role in minimizing the interference level on the ground UEs as well as the wireless transmission delay of the UAV. In particular, we have shown that the altitude of the UAV is a function of the ground network density, the UAV’s objective, and the actions of other UAVs in the network. Appendix {#appendix .unnumbered} ======== Proof of Theorem \[theorem\_altitude\] -------------------------------------- For a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the upper bound for the altitude of UAV $j$ can be derived when UAV $j$ aims at minimizing its delay function only, i.e., $\vartheta '=0$. For such scenarios, UAV $j$ should guarantee an upper limit, $\overline{\Gamma}_j$, for the SINR value $\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ on RB $c$ at location $a$ as given in constraint (\[cons\_7\]). Therefore, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\overline{\Gamma}_j$ and beyond which (\[cons\_7\]) is violated.
The derivation of the expression of $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is: $$\begin{aligned} \sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a} = \overline{\Gamma}_j,\end{aligned}$$ $$\begin{aligned} \sum_{c=1}^{C_{j,s}(t)} \frac{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)}\cdot g_{j,s,c,a}(t)}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2 \cdot (I_{j,s,c}(t)+B_cN_0)}= \overline{\Gamma}_j,\end{aligned}$$ $$\begin{aligned} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)} \frac{g_{j,s,c,a}{}(t)}{I_{j,s,c}(t)+B_cN_0} = \overline{\Gamma}_j,\end{aligned}$$ $$\begin{aligned} (d_{j,s,a}^{\mathrm{max}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0},\end{aligned}$$ where $d_{j,s,a}$ is the Euclidean distance between UAV $j$ and its serving BS $s$ at location $a$. Assume that the altitude of BS $s$ is negligible, i.e., $z_s=0$, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ can be expressed as: $$\begin{gathered} \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2},\end{gathered}$$ where $x_s$ and $y_s$ correspond to the x and y coordinates of the serving BS $s$ and $\hat{c}$ is the speed of light. On the other hand, for a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the lower bound for the altitude of UAV $j$ can be derived when the objective function of UAV $j$ is to minimize the interference level it causes on the ground network only, i.e., $\phi '=0$ and $\varsigma=0$. For such scenarios, the interference level that UAV $j$ causes on neighboring BS $r$ at location $a$ should not exceed a predefined value given by $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$[^2]. Therefore, $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ and below which the level of interference it causes on BS $r$ exceeds the value of $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$. 
The derivation of the expression of $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is given by: $$\begin{aligned} \sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)}= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a},\end{aligned}$$ $$\begin{aligned} \label{all_interferers} \sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2 }= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a},\end{aligned}$$ To find $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, we need to solve (\[all\_interferers\]) for each neighboring BS $r$ separately. Therefore, for a particular neighboring BS $r$, (\[all\_interferers\]) can be written as: $$\begin{aligned} \sum_{c=1}^{C_{j,s}(t)} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2}= \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a},\end{aligned}$$ $$\begin{aligned} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2} = \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a},\end{aligned}$$ $$\begin{aligned} (d_{j,r,a}^{\mathrm{min}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}},\end{aligned}$$ where $d_{j,r,a}$ is the Euclidean distance between UAV $j$ and its neighboring BS $r$ at location $a$. Assume that the altitude of BS $r$ is negligible, i.e., $z_r=0$, we have: $$\begin{aligned} \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2},\end{aligned}$$ Therefore, $\hat{h}_{j}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the maximum value of $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ among all neighboring BSs $r$ and is expressed as: $$\begin{aligned} \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)),\end{aligned}$$ where $x_r$ and $y_r$ correspond to the x and y coordinates of other neighboring BSs $r$. This completes the proof. [^1]: A preliminary version of this work has been accepted for publication at the IEEE International Conference on Communications (ICC) 2018 [@ICC_paper]. [^2]: $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ is a network design parameter that is a function of the ground network density, number of UAVs in the network and the data rate requirements of the ground UEs. 
The value of $\bar{I}_{j,r,c,a}$ is in fact part of the admission control policy which limits the number of UAVs in the network and their corresponding interference level on the ground network [@3GPP_standards].
--- abstract: 'A monomial self-map $f$ on a complex toric variety is said to be $k$-stable if the action induced on the $2k$-cohomology is compatible with iteration. We show that under suitable conditions on the eigenvalues of the matrix of exponents of $f$, we can find a toric model with at worst quotient singularities where $f$ is $k$-stable. If $f$ is replaced by an iterate one can find a $k$-stable model as soon as the dynamical degrees $\lambda_k$ of $f$ satisfy $\lambda_k^2>\lambda_{k-1}\lambda_{k+1}$. On the other hand, we give examples of monomial maps $f$, where this condition is not satisfied and where the degree sequences $\deg_k(f^n)$ do not satisfy any linear recurrence. It follows that such an $f$ is not $k$-stable on any toric model with at worst quotient singularities.' address: - 'Department of Mathematics, Indiana University, Bloomington, IN 47405, USA' - | Department of Mathematics, Chalmers University of Technology and the University of Gothenburg\ SE-412 96 G[ö]{}teborg, Sweden author: - 'Jan-Li Lin' - Elizabeth Wulcan date: - - title: | Stabilization of monomial maps in\ higher codimension --- [^1] Introduction {#introduction .unnumbered} ============ When studying the dynamics of a dominant meromorphic self-map $f:X\dashrightarrow X$ on a compact complex manifold $X$, it is often desirable that the action of $f$ on the cohomology of $X$ be compatible with iterations. Following Sibony [@S] and Dinh-Sibony [@DS09] (see also [@FS]) we then say that $f$ is *(algebraically) stable*. More precisely, if $f^*$ denotes the induced action on $H^{2k}(X)$ we say that $f$ is *$k$-stable* if $(f^n)^*=(f^*)^n$ for all $n$. For examples of classes of $k$-stable maps see, e.g., [@DS09; @DS05b]. If $f$ is not stable, one can look for a model $X'$, birational to $X$, where the induced self-map on $X'$ is stable. As shown by Favre [@Fa], this is not always possible to achieve. However, for large classes of surface maps and monomial maps, one can find models $X'\to X$ (with at worst quotient singularities), so that $f$ lifts to a $1$-stable map, see [@DF; @Fa; @FJ; @L1; @JW; @L3]. In this paper we address the question of finding a $k$-stable model for the special class of monomial maps, but for arbitrary $k$. Monomial maps on complex projective space $\P^m$, or more generally, on toric varieties, correspond to integer-valued $m\times m$-matrices, $M_m(\Z)$. For $A\in M_m(\Z)$ with entries $a_{ij}$ we write $f_A$ for the corresponding monomial map $$f_A(z_1,\ldots, z_m)=(z_1^{a_{11}}\cdots z_m^{a_{m1}}, \ldots, z_1^{a_{1m}}\cdots z_m^{a_{mm}})$$ with $(z_1,\ldots, z_m)\in (\C^*)^m$. This mapping is holomorphic on the torus $(\C^*)^m$ and extends as a rational map to $\P^m$ or to any toric variety. It is dominant precisely if $\det A\neq 0$. Note that $f_A^\ell=f_{A^\ell}$. Assume that the eigenvalues of $A\in M_m(\Z)$ are real and satisfy $\mu_1 > \ldots > \mu_m > 0$ or $\mu_1 < \ldots < \mu_m <0$. Then there is a projective toric variety $X$, with at worst quotient singularities, such that $f_A:X\dashrightarrow X$ is $k$-stable for $k=1,\ldots, m-1$. The definition of $k$-stable extends verbatim to toric varieties with at worst quotient singularities, cf. Section  \[maps\]. If the eigenvalues of $A$ only satisfy $|\mu_1|> \ldots > |\mu_m|>0$, it is not always possible to find a stable model, see, e.g., [@JW Example 6.3]. Still, since $A^2$ has positive and distinct eigenvalues, by Theorem A we can find a model so that $f_A^2$ becomes stable.
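As a concrete, purely illustrative check of these eigenvalue conditions, the following short computation verifies, for a hypothetical integer matrix $A$, whether the hypothesis of Theorem A holds for $A$ itself and for $A^2$; the specific matrix is an assumption made only for the example.

```python
import numpy as np

def theorem_A_hypothesis(A, tol=1e-9):
    """True if the eigenvalues are real, nonzero, pairwise distinct,
    and either all positive or all negative (the hypothesis of Theorem A)."""
    mu = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.max(np.abs(mu.imag)) > tol:
        return False
    mu = np.sort(mu.real)
    distinct = np.all(np.diff(mu) > tol)
    return bool(distinct and (np.all(mu > tol) or np.all(mu < -tol)))

A = np.array([[2, 1, 0],
              [0, -3, 1],
              [0, 0, 5]])           # hypothetical example with eigenvalues 2, -3, 5
print(theorem_A_hypothesis(A))       # False: eigenvalues have mixed signs
print(theorem_A_hypothesis(A @ A))   # True: A^2 has distinct positive eigenvalues 4, 9, 25
```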
In fact, there is an $\ell_0$ such that $f^\ell$ is $k$-stable for $\ell\geq \ell_0$ and each $k$. Assume that the eigenvalues of $A\in M_m(\Z)$, ordered so that $|\mu_1|\geq \ldots \geq |\mu_m|$, satisfy $|\mu_{k_j}|>|\mu_{k_j+1}|$ for $j=1,\ldots, s$ and $|\mu_m|>0$. Then there is a projective toric variety $X$, with at worst quotient singularities, and $\ell_0\in \N$, such that $f_A^\ell:X\dashrightarrow X$ is $k_j$-stable for $\ell\geq \ell_0$ and $j=1,\ldots, s$. Recall that the *$k$th degree* $\deg_k(f)$ of the rational self-map $f:\P^m\dashrightarrow \P^m$ is defined as $\deg f^{-1}(L_k)$ where $L_k$ is a generic linear subspace of $\P^m$ of codimension $k$. In [@FW; @L2] it was proved that the *$k$th dynamical degree* $$\lambda_k=\lambda_k(f_A):=\lim_n(\deg_k (f_A^n))^{1/n}$$ of $f_A,$ introduced by Russakovskii-Shiffman [@RS], is equal to $|\mu_1|\cdots |\mu_k|$, if the eigenvalues of $A$ are ordered so that $|\mu_1|\geq \ldots \geq |\mu_m|$. It follows that the condition $|\mu_k|>|\mu_{k+1}|$ is equivalent to $\lambda_k^2>\lambda_{k-1}\lambda_{k+1}$. In general the dynamical degrees satisfy $\lambda_k^2\geq \lambda_{k-1}\lambda_{k+1}$; for this and other basic properties of dynamical degrees, see, e.g., [@DS05a; @G; @RS]. Thus, in particular, Theorem B says that if we are only interested in the action of $f_A^*$ on $H^{2k}(X)$, we can find good models as soon as $\lambda_k^2>\lambda_{k-1}\lambda_{k+1}$. One could ask if this is true for general meromorphic $f:X\dashrightarrow X$. Is it always possible to find a model $X'$ birational to $X$ so that $f^\ell$ is $k$-stable for $\ell$ large enough when $\lambda_k^2>\lambda_{k-1}\lambda_{k+1}$? The problem of finding stable models for $f$ is related to the question whether the degree sequence $\deg_k(f^n)$ satisfies a linear recurrence. For $1\leq k\leq m-1$, assume that the eigenvalues of $A\in M_m(\Z)$ satisfy $$\label{film} |\mu_{k-1}|>|\mu_k|=|\mu_{k+1}|>|\mu_{k+2}|$$ and moreover that $\mu_k/\mu_{k+1}$ is not a root of unity. Then the degree sequence $\deg_{k}(f_A^n)$ does not satisfy any linear recurrence. If $k=1$ or $k=m-1$ the condition on the moduli of the $\mu_j$ should be interpreted as $|\mu_1|=|\mu_2|>|\mu_3|$ and $|\mu_{m-2}|>|\mu_{m-1}|=|\mu_{m}|$, respectively. Assume that $A\in M_m(\Z)$ satisfies the assumption of Theorem C. Then for each toric projective variety $X$ with at worst quotient singularities, $f_A:X\dashrightarrow X$ is not $k$-stable. In fact, Corollary  D follows from a slight generalization of Theorem C, Theorem  C’, which asserts that $\deg_{D,k}(f_A^n)$ does not satisfy any linear recurrence, where $\deg_{D,k}(f_A)$ is the $k$th degree of $f_A$ on a projective toric variety $X$ with respect to an ample divisor $D$ on $X$, see Section \[degreesection\]. Note that if $A$ satisfies the assumption of Theorem  C, then all powers $A^\ell$ of $A$ satisfy the assumption as well. Thus we get that for each $X$ as in Corollary  D and each $\ell\in\N$, $f_A^\ell:X\dashrightarrow X$ is not $k$-stable. It would be interesting to investigate whether one can remove the conditions $|\mu_{k-1}|>|\mu_k|$ and $|\mu_{k+1}|>|\mu_{k+2}|$. Is it true that $f_A$ cannot be made $k$-stable as soon as $|\mu_k|=|\mu_{k+1}|$ and $\mu_k/\mu_{k+1}$ is not a root of unity? For $m=2$, Theorems A and  B follow from [@Fa] and for $m=3$ they follow from [@L3 Theorem 1.1]. Moreover, for $k=1$ Theorem A follows from [@JW Theorem A]. 
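Since the hypotheses of Theorems B and C are stated through the moduli of the eigenvalues of $A$, and these determine the dynamical degrees via $\lambda_k=|\mu_1|\cdots|\mu_k|$, a small numerical sketch may help; the matrix below is only an example, chosen so that $|\mu_1|>|\mu_2|=|\mu_3|$.

```python
import numpy as np

def dynamical_degrees(A):
    """lambda_0, ..., lambda_m of f_A, using lambda_k = |mu_1|...|mu_k|
    with the eigenvalue moduli sorted in decreasing order (see above)."""
    mods = np.sort(np.abs(np.linalg.eigvals(np.asarray(A, dtype=float))))[::-1]
    return np.concatenate(([1.0], np.cumprod(mods)))

# An example matrix with eigenvalues 2, i, -i, so |mu_1| > |mu_2| = |mu_3|.
A = [[2, 0,  0],
     [0, 0, -1],
     [0, 1,  0]]
lam = dynamical_degrees(A)
for k in (1, 2):
    strict = lam[k] ** 2 > lam[k - 1] * lam[k + 1] + 1e-9  # tolerance for rounding
    print(k, round(lam[k], 6), strict)
# Here lambda_1 = lambda_2 = lambda_3 = 2, and the inequality
# lambda_k^2 >= lambda_{k-1} lambda_{k+1} is strict only for k = 1,
# matching the equivalence with |mu_k| > |mu_{k+1}| noted above.
```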
In fact, if $f_A$ is a monomial map on a toric variety $X$, under the assumption $\mu_1> \ldots >\mu_m>0$ one can find a birational modification $\pi: X'\to X$, with at worst quotient singularities, such that the lifted mapping $\pi^{-1}\circ f_A \circ \pi: X'\dashrightarrow X'$ is $1$-stable. Geometrically, $f:X\dashrightarrow X$ is $1$-stable if no iterate of $f$ sends a hypersurface into the indeterminacy set of $f$, see [@FS; @S]. If $f=f_A$ is monomial and $X$ is toric this translates into a certain condition in terms of the action of $A$ on the fan of $X$, see [@L1 Section 4] and [@JW Section 2.4]. The construction of a $1$-stable model $X'\to X$ amounts to carefully refining the fan corresponding to $X$. A model $X'$ that is only birationally equivalent to $X$ can be obtained in a much less technical way and also for a larger class of monomial mappings; for $s=1$ and $k_1=1$, Theorem B appeared in [@L1 Theorem 4.7] and [@JW Theorem B’], cf. Remark  \[compare\]. For $k\geq 2$ we do not in general understand what it means geometrically to be $k$-stable, nor if there is a translation into the language of fans for monomial maps. In this paper we consider a certain class of toric varieties, where the action of $f_A^*$ is particularly simple. Given a basis $\epsilon_1,\ldots, \epsilon_m$ of $\Q^m$ we construct a toric variety $X$, see Section  \[skew\], for which the entries of the matrix of $f_A^*:H^{2k}(X)\to H^{2k}(X)$ are the absolute values of the $k\times k$-minors of $A$ in the basis $\epsilon_j$ (modulo multiplication by a positive constant). It follows that a sufficient condition for $f_A$ to be $k$-stable is that all $k\times k$-minors have the same sign, see Lemma  \[stable\] and Remark \[bakat\]. The basic idea of the proofs of Theorems  A and  B is to find bases $\epsilon_j$ so that this condition is satisfied. The construction will be based on (strictly) totally positive matrices, i.e., matrices whose minors are all (strictly) positive. Typical examples of totally positive matrices are certain Vandermonde matrices. Corollary D is due to Favre [@Fa] for $m=2$; he showed that if $|\mu_1|=|\mu_2|$ and $\mu_1/\mu_2$ is not a root of unity there is no model such that (any power of) $f_A$ is stable. Bedford-Kim [@BK] proved Theorem C for $k=1$ and some cases when $k>1$, see also [@L1 Theorem 4.7]. Following ideas due to Hasselblatt-Propp [@HP] and Bedford-Kim [@BK], we prove Theorem  C by comparing the degree sequence $\deg_k(f_A^n)$ to a certain other sequence $\beta_n$, which satisfies a linear recurrence. If $\deg_k(f_A^n)$ satisfied a linear recurrence the set of $n$ for which $\deg_k(f_A^n)=\beta_n$ would be eventually periodic, which we show cannot be the case, see. To do this we express $\deg_k(f_A^n)$ in terms of minors of $A^n$ using a result from [@FW], which expresses $\deg_{k}(f_A)$ as a mixed volume of certain polytopes, and a method due to Huber-Sturmfels [@HS] of computing mixed volumes of polytopes. **Acknowledgment** We thank Eric Bedford, Mattias Jonsson, and Pavlo Pylyavskyy for fruitful discussions. We also thank the referee for many helpful suggestions. Part of this work was done while the authors were visiting the Institute for Computational and Experimental Research in Mathematics (ICERM); we would like to thank the ICERM for their hospitality. The second author was supported by the Swedish Research Council. 
Toric varieties {#sec:toric} =============== A complex toric variety is a (partial) compactification of the torus $T\cong (\C^*)^m$, which contains $T$ as a dense subset and which admits an action of $T$ that extends the natural action of $T$ on itself. We briefly recall some of the basic definitions, referring to [@Fu] and [@O] for details. Fans and toric varieties {#fansoch} ------------------------ Let $N$ be a lattice isomorphic to $\Z^m$ and let $M=\text{Hom}(N,\Z)$ denote the dual lattice. Set $N_\Q:=N\otimes_\Z \Q$, $N_\R:=N\otimes_\Z \R$, and define $M_\Q$ and $M_\R$ analogously. Let $\R_+$ and $\R_-$ denote the sets of non-negative and non-positive numbers, respectively. A *cone* $\sigma$ in $N_\R$ is a set that is closed under positive scaling. If $\sigma$ is convex and does not contain any line in $N_\R$, it is said to be *strictly convex*. If $\sigma$ is of the form $\sigma=\sum\R_+v_i$ for some $v_i\in N$, we say that $\sigma$ is a *convex rational cone* *generated* by the vectors $v_i$. A *face* of $\sigma$ is the intersection of $\sigma$ and a *supporting hyperplane*, i.e., a hyperplane through the origin such that the whole cone $\sigma$ is contained in one of the closed half-spaces determined by the hyperplane. The *dimension* of $\sigma$ is the dimension of the linear space $\R\sigma$ spanned by $\sigma$. One-dimensional faces of $\sigma$ are called *edges* and one-dimensional cones are called *rays*. Given a ray $\sigma$, the associated *primitive vector* is the first lattice point met along $\sigma$. The *multiplicity* $\text{mult}(\sigma)$ of $\sigma$ is the index of the lattice generated by the primitive elements of the edges of $\sigma$ in the lattice generated by $\sigma$. A $k$-dimensional cone is *simplicial* if it can be generated by $k$ vectors. A cone is *regular* if it is simplicial and of multiplicity one. A *fan* $\Delta$ in $N$ is a finite collection of rational strongly convex cones in $N_\R$ such that each face of a cone in $\Delta$ is also a cone in $\Delta$ and, moreover, the intersection of two cones in $\Delta$ is a face of both of them. Let $\Delta_k$ denote the set of cones in $\Delta$ of dimension $k$. The fan $\Delta$ is said to be *complete* if the union of all cones in $\Delta$ equals $N_\R$. If all cones in $\Delta$ are simplicial then $\Delta$ is said to be *simplicial*, and if all cones are regular, $\Delta$ is said to be *regular*. A fan $\widetilde\Delta$ is a *refinement* of $\Delta$ if each cone in $\Delta$ is a union of cones in $\widetilde\Delta$. A fan $\Delta$ determines a toric variety $X(\Delta)$ obtained by patching together affine toric varieties $U_\sigma$ corresponding to the cones $\sigma\in\Delta$. It is compact if and only if $\Delta$ is complete. Toric varieties are normal and Cohen-Macaulay. The variety $X(\Delta)$ is nonsingular if and only if $\Delta$ is regular. $X(\Delta)$ has at worst quotient singularities, i.e., it is locally the quotient of a smooth variety by the action of a finite group, if and only if $\Delta$ is simplicial, see e.g. [@Fu Section 2.2]. In this case, we will also say that the variety $X(\Delta)$ is [*simplicial*]{}. For any fan $\Delta$ in $N$ there is a fan $\widetilde \Delta$ such that $X(\widetilde \Delta)\to X(\Delta)$ is a resolution of singularities. Cohomology of toric varieties and piecewise linear functions {#coho} ------------------------------------------------------------ Let $\Delta$ be a simplicial complete fan. 
Then the odd cohomology groups of $X:=X(\Delta)$ vanish and the even cohomology groups are generated by varieties invariant under the action of $T$. More precisely $H^{2k}(X):=H^{2k}(X; \R)$ is generated by $T$-invariant varieties of codimension $k$. There is a Hodge decomposition $H^k(X)\otimes_\R \C=\bigoplus_{p+q=k}H^{p,q}(X)$ of the cohomology groups of $X$ and, moreover, $H^{p,q}(X)=0$ if $p\neq q$, see, e.g., [@Da Proposition 12.11] and [@PS Chapter 2.5]. In particular, $$H^{2k}(X)=H^{k,k}(X;\R):=H^{k,k}(X)\cap H^{2k}(X;\R).$$ Each cone $\sigma\in\Delta_k$ determines an irreducible subvariety $V(\sigma)$ of $X$ of codimension $k$ that is invariant under the action of $T$. If we use $[V(\sigma)]$ to denote the class of $V(\sigma)$ in $H^{2k}(X)$, then the classes $[V(\sigma)]$, as $\sigma$ runs through all cones of codimension $k$, generate $H^{2k}(X)$. In particular, each ray $\rho$ in $\Delta$ determines a $T$-invariant prime Weil divisor $D(\rho)$ and these divisors generate $H^{2}(X)$. Since $\Delta$ is simplicial, $$\frac{1}{\text{mult}(\sigma)}[V(\sigma)]=\prod [D(\rho_i)]$$ in $H^{*} (X)$, where $\rho_i$ are the edges of $\sigma$, i.e., the $[D(\rho_i)]$ genererate $H^{*}(X)$ as an algebra. Let $\operatorname{{\mathrm PL}}(\Delta)$ be the set of all continuous functions $h:\bigcup_{\sigma\in\Delta} \sigma \to \R$ that are *piecewise linear with respect to $\Delta$*, i.e., for each cone $\sigma\in\Delta$ there exists $m=m(\sigma)\in M$ with $h|_\sigma=m$. A function in $\operatorname{{\mathrm PL}}(\Delta)$ is said to be *strictly convex* if it is convex and defined by different elements $m(\sigma)$ for different cones $\sigma\in\Delta_m$. A compact toric variety $X(\Delta)$ is projective if and only if there is a strictly convex $h\in \operatorname{{\mathrm PL}}(\Delta)$. We then say that $\Delta$ is *projective*. Functions in $\operatorname{{\mathrm PL}}(\Delta)$ are in one-to-one correspondence with $T$-invariant Cartier divisors. If $D$ is a $T$-invariant Cartier divisor of the form $D=\sum a_i D(\rho_i)$, then the corresponding function $h_D\in \operatorname{{\mathrm PL}}(\Delta)$ is determined by $h_D(v_i)=a_i$ if $v_i$ is a primitive vector for $\rho_i$. Conversely $h\in\operatorname{{\mathrm PL}}(\Delta)$ determines the Cartier divisor $D(h):=\sum h(v_i) D(\rho_i)$. Given $h_1, h_2\in\operatorname{{\mathrm PL}}(\Delta)$, the corresponding divisors are linearly equivalent if and only if $h_1-h_2$ is linear. The function $h_D$ is strictly convex if and only if $D$ is ample. A function $h\in\operatorname{{\mathrm PL}}(\Delta)$ determines a non-empty polyhedron $$P(h) := \{ m \in M_\R , \, m \leq h \} \subset M_\R~;$$ in particular, $$P_D:=P(h_D)=\{m\in M_\R, \, m (v_i)\leq a_i \}.$$ If $h$ is convex, then $P(h)$ is a compact *lattice polytope* in $M_\R$, i.e., it is the convex hull of finitely many points in the lattice $M$. Conversely, if $P\subset M_\R$ is a lattice polytope, then the function $$\label{hp} h_P (u) := \sup \{ m(u), \, m \in P \}$$ is a piecewise linear convex function on $N_\R$. If $h_P\in\operatorname{{\mathrm PL}}(\Delta)$ then $\Delta$ is said to be *compatible* with $P$. We write $D_P$ for the corresponding divisor on $X(\Delta)$. 
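The correspondence between a lattice polytope $P$ and its support function $h_P$ can be made concrete with a few lines of code; the square and the test directions below are arbitrary illustrations, and the printout shows that $h_P$ is given by a single vertex of $P$ on each coordinate quadrant, i.e., it is piecewise linear with respect to the fan of quadrants.

```python
import numpy as np

def h_P(vertices, u):
    """Support function h_P(u) = sup { m(u) : m in P } of the lattice
    polytope P = conv(vertices), evaluated at u in N_R."""
    return max(np.dot(m, u) for m in vertices)

# A lattice square in M_R ~ R^2 (an arbitrary illustrative polytope).
P = [np.array(v) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]

# One test direction per quadrant; in each case a single vertex of P
# realizes the supremum, so h_P is linear on that quadrant.
for u in [(2, 1), (-1, 3), (-2, -5), (4, -1)]:
    print(u, h_P(P, np.array(u)))
```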
Mixed volume and intersection of divisors {#mixed} ----------------------------------------- Given any finite collection of convex compact sets $K_1,\ldots, K_s\subset M_\R$, we let $K_1+\cdots + K_s$ denote the *Minkowski sum* $$K_1 + \cdots + K_s:=\{x_1+\cdots +x_s \mid x_j\in K_j\},$$ and for $r\in\R_+$, we write $rK_j:=\{rx\mid x\in K_j\}$. Let $\operatorname{Vol}$ be the Lebesque measure on $M_\R\cong \R^m$ normalized so that the parallelepiped $$Q_e:=\Bigl\{\textstyle{\sum_{j=1}^m a_j e_j\mid 0\leq a_j\leq 1}\Bigr\},$$ spanned by a basis $e_1,\ldots, e_m$ of $M$, has volume 1. A theorem by Minkowski and Steiner asserts that $\operatorname{Vol}(r_1K_1+\cdots +r_sK_s)$ is a homogeneous polynomial of degree $m$ in the variables $r_1,\ldots, r_s\in \R$. In particular, there is a unique expansion: $$\label{minkowski} \operatorname{Vol}\left (r_1K_1+\cdots +r_sK_s\right )= \sum_{k_1+\cdots +k_s=m} \binom{m}{k_1, \ldots, k_s} \operatorname{Vol}\left (K_1[k_1],\ldots, K_s[k_s]\right )\, r_1^{k_1}\cdots r_s^{k_s};$$ the coefficients $\operatorname{Vol}(K_1[k_1],\ldots, K_s[k_s])\in \R$ are called *mixed volumes*. Here the notation $K_j[k_j]$ refers to the repetition of $K_j$ $k_j$ times. \[rakning\] Pick $u_1, \ldots, u_m\in M_\R$ and let $P_j$ be the line segments $[0,u_j]\subset M_\R$. Then $r_1P_1+\cdots +r_mP_m$ is the parallelepiped $Q_{ru}$, where $ru$ denotes the tuple $r_1u_1,\ldots, r_mu_m$, and so $$\operatorname{Vol}(r_1P_1+\cdots +r_mP_m)= r_1\cdots r_m\operatorname{Vol}(Q_u).$$ Hence $\operatorname{Vol}(P_1,\ldots,P_m)=\operatorname{Vol}(Q_u)/m!$. Note that $\operatorname{Vol}(Q_u)$ is strictly positive if and only if the $u_j$ are linearly independent. If $\Delta$ is compatible with $P_1,\ldots, P_s$, then the intersection product (i.e., the cup product for cohomology classes) of the corresponding divisor classes equals $$\label{snitt} [D_{P_1}]^{k_1}\cdots[D_{P_s}]^{k_s}= m!\operatorname{Vol}(P_1[k_1],\ldots, P_s[k_s])$$ if $k_1+\cdots +k_s=m$, see [@O p. 79]. Monomial maps {#maps} ============= Given a group homomorphism $A: M\to M$, we will write $A$ also for the induced linear maps $M_\Q\to M_\Q$ and $M_\R\to M_\R$. Moreover, we let $\Atrans$ denote the dual map $N\to N$, as well as the dual linear maps $N_\Q\to N_\Q$ and $N_\R\to N_\R$. It turns out to be convenient to use this notation rather than writing $\A$ for the map on $N$ and $\Atrans$ for the map on $M$. Let $\Delta$ be a fan in $N\cong \Z^m$. Then any group homomorphism $\Atrans : N \to N$ gives rise to a rational map $f_A: X(\Delta)\dashrightarrow X(\Delta)$, which is equivariant with respect to the action of $T$. Let $e_1,\ldots , e_m$ be a basis of $M$ and let $e_1^*,\ldots, e_m^*$ be the dual basis of $N$. Then the dual map $A:M\to M$ is of the form $A=\sum a_{ij} e_i\otimes e_j^*$ for some $a_{ij}\in\Z$. If $z_1,\ldots,z_m$ are the induced coordinates on $T$, then $f_A$ is the monomial map $$f_A(z_1,\ldots, z_m)=(z_1^{a_{11}}\cdots z_m^{a_{m1}}, \ldots, z_1^{a_{1m}}\cdots z_m^{a_{mm}})$$ restricted to $T$. Conversely, any rational, equivariant map $f: X(\Delta)\dashrightarrow X(\Delta)$ comes from a group homomorphism $N\to N$, see [@O p.19]. The map $f_A: X(\Delta)\dashrightarrow X(\Delta)$ is holomorphic precisely if $\Atrans:N_\R\to N_\R$ satisfies that for each $\sigma\in \Delta$ there is a $\sigma'\in\Delta$, such that $\Atrans(\sigma)\subseteq \sigma'$. Then $f_A^*[D(h)]=[D(h\circ \Atrans)]$, see, e.g., [@M Chapter 6, Exercise 8], and, moreover, $P(h\circ\Atrans)=A P(h)$. 
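Continuing the small sketch above, the identity $P(h\circ\Atrans)=AP(h)$ can be checked on the level of support functions: in coordinates where the dual map $\Atrans$ is represented by the transpose of the matrix of $A$, the support function of $AP$ is $u\mapsto h_P(A^{t}u)$. The matrix and the polytope below are again arbitrary illustrations.

```python
import numpy as np

def support(vertices, u):
    """Support function of conv(vertices) at u."""
    return max(np.dot(m, u) for m in vertices)

# An illustrative matrix A acting on M_R and an illustrative polytope P(h).
A = np.array([[2, 1],
              [1, 1]])
P = [np.array(v) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
AP = [A @ m for m in P]   # vertices of the polytope A P(h)

# sup_{m in P} <A m, v> = sup_{m in P} <m, A^t v>, so the two columns agree,
# in line with P(h o A^t) = A P(h) above.
for v in [(1, 0), (0, 1), (-1, 2), (3, -2)]:
    v = np.array(v)
    print(support(AP, v), support(P, A.T @ v))
```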
Given a fan $\Delta$ and a group homomorphism $\Atrans: N\to N$ one can find a regular refinement $\widetilde \Delta$ of $\Delta$ such that the induced equivariant map $\tilde f_A:X(\widetilde \Delta)\to X(\Delta)$ is holomorphic. We denote by $\pi$ the modification $ X(\widetilde\Delta)\to X(\Delta)$ induced by the identity map $\operatorname{id}:N\to N$. Furthermore, we have the relation $\tilde f_A = f_A \circ \pi$, i.e., the following diagram commutes. $$\xymatrix{ & X(\widetilde \Delta)\ar[ld]_{\pi}\ar[rd]^{\tilde{f}_A}\\ X(\Delta)\ar@{-->}[rr]_{f_A} & & X(\Delta) }$$ Now the pullback of a $T$-invariant Cartier divisor $D$ under $f_A: X(\Delta)\dashrightarrow X(\Delta)$ is defined as $f_A^* D= \pi_*\tilde f_A^* D$; in fact, this definition does not depend on the particular choice of $\widetilde \Delta$. The divisor $f_A^*D$ is in general only $\Q$-Cartier, cf. [@Fu Chapter 3.3]. Note that, since $H^*(X(\Delta))$ is generated (as an algebra) by Cartier divisors, $f_A$ induces an action $f_A^*$ on $H^*(X(\Delta))$. An important example {#skew} ==================== We will prove Theorems A and B by constructing toric varieties of a certain type. Throughout this paper we let $I=\{i_1,\ldots, i_k\}$ and $J=\{j_1,\ldots, j_\ell\}$ be strictly increasing multi-indices in $\{1,\ldots, m\}$. If $|I|=|J|=k$, we let $B_{IJ}$ denote the minor corresponding to the sub-matrix of $B$ with rows $i_1,\ldots, i_k$ and columns $j_1,\ldots, j_k$. Moreover, we write $[\ell]$ for the multi-index $\{1,\ldots ,\ell\}$ and $I^C$ for the complement $[m]\setminus I$ of $I$. Pick linearly independent vectors $v_1, \ldots, v_m\in N_\Q$ and let $\Delta$ be the fan $$\Delta=\Bigl\{\textstyle\sum_{j=1}^m\R_+\varepsilon_jv_j\Bigr\}_ {\varepsilon=(\varepsilon_1,\ldots,\varepsilon_m)\in\{0,-1,+1\}^m}.$$ In particular, the rays of $\Delta$ are of the form $\R_+ v_j$ and $\R_- v_j$. For simplicity we will assume that $v_j$ is the primitive vector of the ray $\R_+v_j$ for each $j$. Note that $\Delta$ is complete and simplicial and that there are strictly convex functions in $\operatorname{{\mathrm PL}}(\Delta)$; hence the resulting toric variety $X(\Delta)$ is projective and has at worst quotient singularities. If the $v_j$ form a basis of $N$, then $X(\Delta)$ is isomorphic to $(\P^1)^m$. In general we will think of $X=X(\Delta)$ as a “skew product” of $\P^1$s. Note that the rays of $\Delta$ determine divisors $D_j:=D(\R_- v_j)$ and $E_j:=D(\R_+ v_j)$, such that $E_j$ is linearly equivalent to $D_j$ for each $j$. The polytope $P_j:=P_{D_j}$ associated to the divisor $D_j$ is the line segment in $M_\R$ with the origin and $u_j\in M_\R$ as endpoints, where $u_j$ is the point in $M_\R$ such that $\langle v_i, u_j\rangle= \delta_{ij}$ (Kronecker’s delta). Notice that the $u_j$, as vectors, are linearly independent. By [@Fu Section 5.2] $H^{2k}(X)$ will be generated by the intersection (cup) product of divisor classes: $$[D_I]:=[D_{i_1}]\cdots [D_{i_k}]$$ for $I=\{i_1, \ldots, i_k\}\subseteq [m]$. In particular $$f_A^*[D_I]=\sum_{|J|=k}\alpha_{I J} [D_J]$$ for some coefficients $\alpha_{I J}$. Recall that $I^C=[m]\setminus I$, thus from we get $$[D_I]\cdot[D_{J^C}]= \left \{ \begin{array}{cl} m!\operatorname{Vol}(P_1,\ldots, P_m)>0 & \text{if } J=I\\ 0 & \text{otherwise } \end{array} \right. ,$$ cf. Example  \[rakning\]. 
It follows that $$f_A^*[D_I]\cdot [D_{J^C}]= \alpha_{IJ} \cdot m!\operatorname{Vol}(P_1,\ldots, P_m)$$ On the other hand, for $I=\{i_1, \ldots, i_k\}$ and $J^C=\{j_1,\ldots, j_{m-k}\}$, by the projection formula [@FuInt p.325], we have $$\begin{gathered} \label{lala} f_A^*[D_I]\cdot [D_{J^C}] = \pi_* \tilde f_A^*[D_I] \cdot [D_{J^C}] = \tilde f_A^*[D_I]\cdot \pi^*[D_{J^C}] = \\ \tilde f_A^*\big ([D_{i_1}]\cdots [D_{i_k}] \big )\cdot \pi^*\big ([D_{j_1}]\cdots [D_{j_{m-k}}]\big ) = \\ \tilde f_A^*[D_{i_1}]\cdots \tilde f_A^*[D_{i_k}]\cdot \pi^*[D_{j_1}]\cdots \pi^*[D_{j_{m-k}}], \end{gathered}$$ where the last step follows since $\tilde f_A$ and $\pi$ are holomorphic. Recall from Section  \[maps\] that the polytopes associated to $\tilde f_A^*[D_i]$ and $\pi^*[D_j]$ are $AP_i$ and $\operatorname{id}P_j=P_j$, respectively. Thus in light of , $$\begin{gathered} \tilde f_A^*[D_{i_1}]\cdots \tilde f_A^*[D_{i_k}]\cdot \pi^*[D_{j_1}]\cdots \pi^*[D_{j_{m-k}}] = m! \operatorname{Vol}(AP_{i_1}, \ldots, AP_{i_k}, P_{j_1}, \ldots, P_{j_{m-k}}).\end{gathered}$$ Let $A_{IJ}=A_{IJ}(u_j)$ denote the minors of $A:M_\R\to M_\R$ with respect to the basis $u_1,\ldots, u_m$. Then, in light of Example  \[rakning\], $$\operatorname{Vol}(AP_{i_1}, \ldots, AP_{i_k}, P_{j_1}, \ldots, P_{j_{m-k}})= | A_{IJ}| \operatorname{Vol}(P_{1}, \ldots, P_{m}).$$ To conclude, $\alpha_{IJ}=|A_{IJ}|$, and thus we have proved the following result. \[pullback\] Let $\Delta$ be a fan of the form $\Delta=\{\sum_{j=1}^m\R_+\varepsilon_jv_j\}_{\varepsilon\in\{0,-1,+1\}^m}.$ Using the notation above, $$f_A^*[D_I]=\sum_{|J|=k}| A_{IJ}|[D_J].$$ Hence $$(f_A^*)^\ell[D_I]=\sum_{|J_1|=\ldots= |J_{\ell-1}|=|J|=k} | A_{IJ_1}|| A_{J_1J_2}|\cdots| A_{J_{\ell-2}J_{\ell-1}}|| A_{J_{\ell-1}J}|[D_J]$$ and $$(f_A^\ell)^*[D_I]=(f_{A^\ell})^*[D_I]=\sum_{|J|=k} | A^\ell_{IJ}|[D_J],$$ where $ A^\ell_{IJ}$ denotes the $IJ$-minor of $A^\ell$. Recall the Cauchy-Binet formula: $$\label{CB} (AB)_{IJ}=\sum_{|K|=k} A_{IK} B_{KJ}.$$ It follows that a sufficient condition for $(f_A^*)^\ell=(f_A^\ell)^*$ is that $ A_{IJ}\geq 0$ for all $I,J$ or $ A_{IJ} \leq 0$ for all $I,J$. Let us summarize this: \[stable\] Let $\Delta$ be a fan of the form $\Delta=\{\sum_{j=1}^m\R_+\varepsilon_jv_j\}_{\varepsilon\in\{0,-1,+1\}^m}$. Using the notation above, if $ A_{IJ} \geq 0$ for all strictly increasing multi-indices $I=\{i_1,\ldots, i_k\}, J=\{j_1,\ldots, j_k\}\subseteq [m]$ or if $ A_{IJ}\leq 0$ for all $I, J$, then $f_A:X(\Delta)\dashrightarrow X(\Delta)$ is $k$-stable. \[bakat\] One can also construct the fan $\Delta$ above starting from a basis $\epsilon_1,\ldots, \epsilon_m$ of $M_\Q$. For $j=1,\ldots, m$, let $V_j$ be the one-dimensional subspace of $N_\Q$ defined by $$V_j=\{v\in N_\Q\mid \epsilon_\ell(v)=0 \text{ for } \ell \neq j\}.$$ Then each $V_j$ determines two rational rays in $N_\R$, which will be the rays of $\Delta$; more precisely, pick $v_j$ to be a primitive vector of one of the rays in $V_j$ and construct $\Delta$ as above. Now the polytopes $P_{D_j}$ and $P_{E_j}$ will lie in the one-dimensional vector space spanned by $\epsilon_j$. By the choice of $v_j$ we can arrange so that $u_j$ is a positive multiple of $\epsilon_j$. Then the minor $A_{IJ}$ of $A$ in the basis $u_j$ is just a positive constant times the $IJ$-minor $A_{IJ}(\epsilon_j)$ of $A$ in the basis $\epsilon_j$. 
More precisely, if $\epsilon_j=\alpha_j u_j$, then $$A_{IJ}=\frac{\alpha_{i_1}\cdots \alpha_{i_k}}{\alpha_{j_1}\cdots \alpha_{j_k}} A_{IJ}(\epsilon_j).$$ \[projective\] Consider the monomial map $$f_A(z_1,\ldots, z_m)= (z_1^{a_{11}}\cdots z_m^{a_{m1}}, \ldots, z_1^{a_{1m}}\cdots z_m^{a_{mm}}).$$ If all $k\times k$-minors of the matrix $(a_{ij})$ are either all nonnegative or all nonpositive, then $f_A:(\P^1)^m\dashrightarrow (\P^1)^m$ is $k$-stable. In particular, if $(a_{ij})$ is totally positive (or totally negative), $f_A$ is stable on $(\P^1)^m$ for all $k$. Indeed, since $(a_{ij})$ is the matrix of the group homomorphism $A:M\to M$ associated with $f_A$ with respect to the basis $e_j$ of $M$, cf. Section  \[maps\], Lemma  \[stable\] implies that $f_A$ is stable on $$X \Big (\Big \{\sum_{j=1}^m\R_+\varepsilon_je_j^* \Big \}_{\varepsilon\in\{0,-1,+1\}^m}\Big ) = (\P^1)^m.$$ Proof of Theorem A {#proofA} ================== Given a basis $\xi_1, \ldots, \xi_m$ of $M_\R$, and a linear transformation $A$, let $A(\xi_j)$ denote the matrix of $A$ with respect to this basis. Assume that $A$ has distinct positive eigenvalues $\mu_1> \ldots > \mu_m>0$. Then so has the matrix $A(\xi_j)$, for any basis $\xi_1,\ldots,\xi_m$ of $M_\R$. By [@BJ], one can find a strictly totally positive matrix $A^+$ with eigenvalues $\mu_1,\ldots,\mu_m$. Since both matrices $A(\xi_j)$ and $A^+$ are diagonalizable over $\R$ and they have the same set of eigenvalues, it follows that they are conjugate to each other over $\R$. Thus, without loss of generality, we can perform a change of basis and assume that $A(\xi_j)=A^+$. The coefficients and the minors of the matrix $A(\xi_j)$ change continuously as one perturbs the basis $\xi_j$. Moreover, being strictly totally positive is an open condition on the space of matrices. Hence, by perturbing $\xi_j$, we can find a basis $\epsilon_1,\ldots, \epsilon_m$ of $M_\Q$ such that $A(\epsilon_j)$ is strictly totally positive. Given this basis, following Remark \[bakat\], we construct a fan of the form $$\Delta=\Bigl\{\textstyle\sum_{j=1}^m\R_+\varepsilon_j v_j\Bigr\}_{\varepsilon\in\{0,-1,+1\}^m}.$$ In view of Remark \[bakat\], using the notation of Section \[skew\], all $k\times k$-minors $A_{IJ}$ in the basis $u_j$ are then positive for $k=1,\ldots, m-1$, and thus Lemma  \[stable\] asserts that $f_A:X(\Delta)\dashrightarrow X(\Delta)$ is $k$-stable for $k=1,\ldots, m-1$. If $A$ has negative and distinct eigenvalues, by arguments as above, we can find a basis of $M_\Q$ so that the matrix of $A$ is of the form $-B$, where $B$ is (strictly) totally positive. Constructing $\Delta$ as above, the $k\times k$-minors of $A$ in the basis $u_j$ will then all have sign $(-1)^k$ and so, by Lemma  \[stable\], $f_A:X(\Delta)\dashrightarrow X(\Delta)$ is $k$-stable for $k=1,\ldots, m-1$. Proof of Theorem B {#proofB} ================== Given vectors $w_1,\ldots, w_m\in M_\R$ we will write $w_I=w_{i_1}\wedge\cdots \wedge w_{i_k}$ for $I=\{i_1,\ldots, i_k\}\subseteq [m]$. Note that if $A:M_\R\to M_\R$ is a linear map with eigenvalues $\mu_1,\ldots, \mu_m$, then the induced linear map $\Lambda^k A: \Lambda^k M_\R\to \Lambda^k M_\R$ has eigenvalues $\mu_I:=\mu_{i_1}\cdots \mu_{i_k}$ for $I=\{i_1,\ldots, i_k\}\subseteq [m]$. Throughout we will assume that the eigenvalues of $A$ are ordered so that $|\mu_1|\geq \ldots \geq |\mu_m|$. 
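The mechanism behind Theorem A can also be checked numerically: for the $(\P^1)^m$ model of Example \[projective\] the $u_j$ can be taken to be the standard basis, so Proposition \[pullback\] together with the Cauchy-Binet formula reduces $k$-stability to a sign condition on minors. The sketch below verifies this for a Vandermonde matrix, a standard example of a strictly totally positive matrix; it is only an illustration, not the construction used in the proof.

```python
import numpy as np
from itertools import combinations

def minors(A, k):
    """Matrix of all k x k minors of A, rows and columns indexed by the
    increasing multi-indices I, J (the ordering used in Cauchy-Binet)."""
    idx = list(combinations(range(len(A)), k))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

# A Vandermonde matrix with nodes 1 < 2 < 3: strictly totally positive.
A = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 3, 9]], dtype=float)
k = 2
print((minors(A, k) > 0).all())        # True: all 2 x 2 minors are positive

# On the model of Section [skew] the pullback on H^{2k} is |minors|
# (Proposition [pullback]); when all minors share a sign, Cauchy-Binet
# gives (f_A^*)^2 = (f_A^2)^*, i.e. the stability used in Theorem A.
lhs = np.abs(minors(A, k)) @ np.abs(minors(A, k))
rhs = np.abs(minors(A @ A, k))
print(np.allclose(lhs, rhs))           # True
```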
\[orthant\] Given a basis $\rho_1,\ldots,\rho_m$ of $M_\R$, there is a basis $\epsilon_1,\ldots, \epsilon_m$ of $M_\Q$, such that for $k=1,\ldots, m$, $\rho_{[k]}$ lies in the interior of the first orthant $\sigma_k := \sum_{|I|=k}\R_+ \epsilon_I \subset \Lambda^k M_\R$, and, moreover, the hyperplane $H_k\subset\Lambda^k M_\R$, spanned by $\rho_I$, $I\neq [k]$, intersects $\sigma_k$ only at the origin. Pick real numbers $\mu_1>\ldots >\mu_m>0$ and let $A:M_\R\to M_\R$ be a linear map given by a diagonal matrix in the basis $\rho_j$ with diagonal entries $\mu_1, \ldots, \mu_m$. As in the proof of Theorem A we can then choose a basis $\epsilon_1,\ldots, \epsilon_m$ of $M_\Q$ such that $A(\epsilon_j)$ is strictly totally positive. In particular, for a given $k$, $ A_{IJ}(\epsilon_j)>0$ for all $I,J$ such that $|I|=|J|=k$, which means that $\Lambda^kA:\Lambda^k M_\R\to \Lambda^k M_\R$ maps the first orthant $\sigma_k$ into itself. Since the $\rho_j$ are the eigenvectors of $A$, it follows by the Perron-Frobenius theorem that the eigenvector $\rho_{[k]}$ (or $-\rho_{[k]}$) corresponding to the largest eigenvalue $\mu_{[k]}$ of $\Lambda^kA$ is contained in the interior of $\sigma_k$ and, moreover, that $H_k\cap\sigma_k$ is the origin. To prove Theorem B, we choose a basis $\rho_j$ such that the linear map $A:M_\R\to M_\R$ is in real Jordan form, i.e., with blocks $$\begin{bmatrix} \mu_j & 1 & & \\ & \mu_j & \ddots & \\ & & \ddots & 1\\ & & & \mu_j \end{bmatrix} \text{ and } \begin{bmatrix} C_j & I & & \\ & C_j & \ddots & \\ & & \ddots & I\\ & & & C_j \end{bmatrix}, \text{ where } C_j=\begin{bmatrix} a_j & b_j\\ -b_j & a_j \end{bmatrix}$$ and $I$ is the $2\times 2$ identity matrix; we have the first block type for real eigenvalues $\mu_j$ and the second type for complex eigenvalues $a_j \pm i b_j$. We order the blocks so that moduli of the eigenvalues are in decreasing order along the diagonal and the vectors $\rho_j$ so that $\rho_1$ is an eigenvector corresponding to the largest eigenvalue etc. Next, we let $\epsilon_1,\ldots, \epsilon_m$ be a basis of $M_\Q$ constructed as in Lemma \[orthant\], and from $\epsilon_j$, following Remark \[bakat\], we construct a fan $\Delta$ of the form $$\Delta=\Bigl\{\textstyle\sum_{j=1}^m\R_+\varepsilon_j v_j\Bigr\}_{\varepsilon\in\{0,-1,+1\}^m}.$$ Assume that $|\mu_k|>|\mu_{k+1}|$. Then $\mu_{[k]}$ is the unique eigenvalue of $\Lambda^k A: \Lambda^k M_\R\to \Lambda^k M_\R$ of largest modulus. Since $A$, and thus $\Lambda^kA$, is real, $\mu_{[k]}$ is real. Moreover, $\Lambda^k A \rho_{[k]}= \mu_{[k]} \rho_{[k]}$, so that $\rho_{[k]}$ spans the one-dimensional eigenspace of $\mu_{[k]}$. By Lemma  \[orthant\] $\rho_{[k]}$ is the unique (up to scaling) eigenvector of $\Lambda^k A$ that is contained in $\sigma_k$ and the hyperplane in $\Lambda^k M_\R$ spanned by the other eigenvectors intersects $\sigma_k$ only at the origin, and thus, since $\mu_{[k]}$ is the unique eigenvalue of largest modulus, $\sigma_k$ will get attracted to the eigenspace $\R\rho_{[k]}\subseteq \Lambda^k M_\R$. Hence there is an $\ell_k\in\N$, such that $(\Lambda^k A)^\ell\sigma_k\subset \sigma_k$ or $$(\Lambda^k A)^\ell\sigma_k \subset -\sigma_k:=\{x\in M_\R\mid -x\in \sigma_k\}$$ for all $\ell\geq \ell_k$. In particular, for such an $\ell$, $(\Lambda^k A)^\ell \epsilon_I \in \sigma_k$ for all $I=\{i_1,\ldots, i_k\}\subseteq [m]$ or $(\Lambda^k A)^\ell \epsilon_I \in -\sigma_k$ for $(\Lambda^k A)^\ell$ all $I$. 
This means that the entries of $(\Lambda^k A)^\ell$, i.e., the $k\times k$-minors of $A^\ell$, in the basis $\epsilon_j$ are either all positive or all negative. In view of Remark \[bakat\], using the notation of Section \[skew\], this implies that $A^\ell_{IJ}(u_j)\geq 0$ for all $I=\{i_1, \ldots, i_k\}, J=\{j_1, \ldots, j_k\}\subseteq [m]$ or $ A^\ell_{IJ}(u_j)\leq 0$ for all $I, J$. Now Lemma  \[stable\] asserts that $f_A^\ell:X(\Delta)\dashrightarrow X(\Delta)$ is $k$-stable. Finally let $\ell_0=\max_j \ell_{k_j}$. Then $f_A^\ell:X(\Delta)\dashrightarrow X(\Delta)$ is $k_j$-stable for $\ell\geq \ell_0$ and $j=1,\ldots, s$. \[compare\] For $s=1$ and $k_1=1$ Theorem B appeared as Theorem 4.7 in [@L1] and Theorem B’ in [@JW]. Also in these papers the idea of the proof is to find a basis (of $N_\R$) such that the first orthant is mapped into itself and then construct a toric variety as in Section \[skew\]. Degrees of monomial maps {#degreesection} ======================== Let $\Delta$ be a complete simplicial projective fan and let $D$ be an ample divisor on $X(\Delta)$. Then the *$k$th degree of $f_A:X(\Delta)\dashrightarrow X(\Delta)$ with respect to $D$* is defined as $$\deg_{D,k}:=f_A^* D^k\cdot D^{m-k}.$$ If $X(\Delta)=\P^m$ and $\mathcal O(D)=\mathcal O_{\P^m}(1)$, then $\deg_{D,k}$ coincides with the $k$th degree $\deg_k$ as defined in the introduction. We have the following more general version of Theorem C. Indeed, Theorem C corresponds to the case when $X(\Delta)=\P^m$ and $\mathcal O(D)=\mathcal O_{\P^m}(1)$. Let $\Delta$ be a complete simplicial fan and let $D$ be an ample divisor on $X(\Delta)$. For $1\leq k\leq m-1$ assume that the eigenvalues of $A\in M_m(\Z)$ satisfy $$\label{ballong} |\mu_{k-1}|>|\mu_k|=|\mu_{k+1}|>|\mu_{k+2}|$$ and moreover that $\mu_k/\mu_{k+1}$ is not a root of unity. Then the degree sequence $\deg_{D,k}(f_A^n)$ does not satisfy any linear recurrence. If $k=1$ or $k=m-1$ the condition on the moduli of the $\mu_j$ should be interpreted as $|\mu_1|=|\mu_2|>|\mu_3|$ and $|\mu_{m-2}|>|\mu_{m-1}|=|\mu_{m}|$, respectively. Note that there exist maps that satisfy the assumption of Theorem  C’. For example, choose integers $a_1,\ldots, a_{k-1}, b_1,b_2,a_{k+2},\ldots, a_m$ such that $$|a_1|\geq \ldots \geq |a_{k-1}|>\sqrt{b_1^2+b_2^2} > |a_{k+2}|\geq \ldots \geq |a_m|$$ and $b_1+ib_2=\sqrt{b_1^2+b_2^2} \cdot e^{2\pi i \theta}$, where $\theta\notin\Q$. Then (the matrix of) the monomial map $$(z_1,\ldots, z_m)\mapsto (z_1^{a_1},\ldots, z_{k-1}^{a_{k-1}}, z_k^{b_1}z_{k+1}^{b_2}, z_k^{-b_2}z_{k+1}^{b_1}, z_{k+2}^{a_{k+2}}, \ldots, z_m^{a_m})$$ satisfies the assumption of Theorem  C’. Corollary D follows immediately from Theorem C’ and the following result. This is probably well-known, but we include a proof for completeness; we follow the proof of Proposition  3.4 in [@L3]. \[stabdeg\] Assume that $\Delta$ is a simplicial projective fan and let $D$ be an ample divisor on $X(\Delta)$. Assume that the monomial map $f_A:X(\Delta)\dashrightarrow X(\Delta)$ is $k$-stable. Then the degree sequence $\deg_{D,k}(f_A^n)$ satisfies a linear recurrence. For the proof we will need the *Caley-Hamilton theorem*: Let $B\in M_L(\Z)$ and assume that $$\chi (r)=r^L+\varphi_{L-1}r^{L-1}+\cdots +\varphi_1 r+ \varphi_0$$ is the characteristic polynomial of $B$. Then the Caley-Hamilton theorem asserts that $$B^L+\varphi_{L-1}B^{L-1}+\cdots +\varphi_1 B+ \varphi_0 I=0$$ where $I$ is the identity matrix. 
In particular, for each $1\leq i,j\leq L$, the entry $b_{ij}^n=:b_n$ of $B^n$ satisfies the linear recurrence $$\label{caley} \chi (b_n): b_{n+L}+\varphi_{L-1}b_{n+L-1}+\cdots +\varphi_1 b_{n+1}+\varphi_0 b_n=0.$$ Since $D$ is ample, the class $[D]^k$ in $H^{2k}(X)$ is non-zero, where $X=X(\Delta)$, and thus we can extend it to a basis $[D]^k, \theta_1,\ldots, \theta_r$ for $H^{2k}(X)$ such that $\theta_j\cdot [D]^{m-k}=0$ for $j=1,\ldots, r$. Note that then $\deg_{D,k}(f_A)$ is equal to $[D]^k\cdot[D]^{m-k}=[D]^m$ times the $(1,1)$-entry of the matrix $B$ of $f_A^*:H^{2k}(X)\to H^{2k}(X)$ with respect to the basis $[D]^k, \theta_1,\ldots, \theta_r$. Since by assumption $f_A$ is $k$-stable, i.e., $(f_A^n)^*=(f_A^*)^n$ for all $n\in\N$, it follows that $\deg_{D,k}(f_A^n)$ is equal to $[D]^{m}$ times the $(1,1)$-entry of $B^n$. Therefore by the Caley-Hamilton theorem $\deg_{D,k}(f_A^n)=:b_n$ satisfies the linear recurrence , where $\chi$ is the characteristic equation of $B$. Note that Proposition \[stabdeg\] implies that if $A$ satisfies the assumption of Theorem A, $X(\Delta)$ is the toric variety constructed in the proof of Theorem A, and $D$ is an ample divisor on $X(\Delta)$, then the degree sequence $\deg_{D,k}(f_A^n)$ of $f_A:X(\Delta)\dashrightarrow X(\Delta)$ satisfies a linear recurrence. Similarly if $A$ and $X(\Delta)$ are as in the (proof of) Theorem B, then $\deg_{D,k}(f_A^{\ell n})$ satisfies a linear recurrence for $\ell \geq \ell_0$. Moreover, even if $f_A$ is not $k$-stable, as long as we can lift it to a $k$-stable map, we still have a linear recurrence for its degree sequence. \[lift\_stab\_deg\] Suppose that $X$ is a simplicial projective toric variety, and that $\pi: \widetilde X\to X$ is a nonsingular projective modification of $X$ such that $f_A:X\dashrightarrow X$ lifts to a $k$-stable map $f_A:\widetilde X\dashrightarrow \widetilde X$. Then, for any ample divisor $D$ on $X$, the degree sequence $\deg_{D,k}(f_A^n)$ satisfies a linear recurrence. Since $D$ is ample, $\pi^*([D]^k)$ is nonzero. As in the proof of Proposition \[stabdeg\], we can extend it to a basis of $H^{2k}(\widetilde X)$ in such a way that $\deg_{D,k}(f_A)$ is equal to $\pi^*([D]^m)$ times the $(1,1)$-entry of the matrix $B$ of $f^*_A$. Thus, again as in the proof of Proposition \[stabdeg\], $\deg_{D,k}(f_A^n)$ satisfies the linear recurrence given by the characteristic equation of $B$. It follows from Theorem C’ and Proposition \[lift\_stab\_deg\] that if $A$ satisfies the assumption of Theorem C, one cannot $k$-stabilize $f_A$ by blowing up $\P^m$ or any other simplicial projective toric variety. Computing $\deg_{D,k} (f_A)$ ---------------------------- To prove Theorem C’ we will express $\deg_{D,k}(f_A)$ in terms of the $k\times k$-minors of $A$. First, for a $T$-invariant divisor $D$, Proposition 4.1 in [@FW] says that $\deg_{D,k}(f_A)$ can be computed as a mixed volume: $$\label{favrew} \deg_{D,k}(f_A)=m! \operatorname{Vol}\bigl(A P_D [k], P_D[m-k]\bigr).$$ In the case of a general ample divisor $D'$, notice that the degrees only depend on the cohomology class of $D'$, and there is always a $T$-invariant divisor $D$ such that $[D]=[D']$. Next, we will describe a method of computing the mixed volume of polytopes that we learned from Huber-Sturmfels [@HS]. A more detail exposition can be found in their paper. Let $\mathcal P=(P_1,\ldots, P_s)$ be a sequence of polytopes in $\R^m$ such that $P:=P_1+\cdots + P_s$ has dimension $m$. 
A *cell* of $\mathcal P$ is a tuple $\mathcal C = (C_1,\ldots, C_s)$ of polytopes $C_i\subset P_i$. Let $\# C_i$ be the number of vertices of $C_i$ and let $C:=C_1+\cdots + C_s$. A *fine mixed subdivision* of $\mathcal P$ is a collection of cells $\mathcal S=\{\mathcal C^{(1)},\ldots, \mathcal C^{(r)}\}$ such that $C^{(j)}$ has dimension $m$, $$\dim C_1^{(j)}+\cdots +\dim C_s^{(j)}=m ~~~~\text{ and } ~~~~ \# (C_1^{(j)})+\cdots + \# (C_s^{(j)})-s=m$$ for $j=1,\ldots, r$. Moreover, $C^{(j)}\cap C^{(j')}$ is a face of both $C^{(j)}$ and $C^{(j')}$ for all $j,j'$, and $\bigcup_j C^{(j)}=P$. If $\mathcal S$ is a fine mixed subdivsion of $\mathcal P$, then Theorem  2.4 in [@HS] asserts that $$\label{uggla} \operatorname{Vol}(P_1[k_1], \ldots, P_s[k_s])= k_1!\cdots k_s! \cdot\sum_{\mathcal C^{(j)}\in\mathcal S, \dim C^{(j)}_i=k_i, i=1,\ldots, s} \operatorname{Vol}(C^{(j)}).$$ Moreover, Algorithm  2.9 in [@HS] gives a method of finding fine mixed subdivisions; in particular, each sequence of polytopes $\mathcal P$ admits a fine mixed subdivision, where, for each $i,j$, $C^{(j)}_i$ is the convex hull of a subset of the vertices of $P_i$. We want to apply this method to the right hand side of . Assume that $P_D$ has vertices $v_1,\ldots ,v_N$. Then $A P_D$ has vertices $A v_1,\ldots, A v_N$ and thus we can find a fine mixed subdivision $\mathcal S$ of $\big (A P_D, P_D\big )$ with cells of the form $$\mathcal C_{IJ}:=\big (\operatorname{conv}(A v_{i_0},\ldots, A v_{i_k}), \operatorname{conv}(v_{j_0},\ldots,v_{j_{m-k}})\big ),$$ where $\operatorname{conv}(v_{i_0},\ldots,v_{i_{k}})$ denotes the convex hull of $v_{i_0},\ldots,v_{i_{k}}$, for some $I=\{i_0,\ldots, i_k\}$ and $J=\{j_0,\ldots, j_{m-k}\}\subset [N]$. Let $\mathcal S_k\subset \mathcal S$ be the set of cells $\mathcal C_{IJ}$, where $|I|=k+1$. Then gives $$\operatorname{Vol}(A P_D[k], P_D[m-k])= k! (m-k)! \sum_{\mathcal C_{IJ}\in \mathcal S_k} \operatorname{Vol}(C_{IJ}).$$ Note that $C_{IJ}$ is the Minkowski sum of the $k$-simplex $\operatorname{conv}(A v_{i_0},\ldots, A v_{i_k})$ with edges $A(v_{i_1}-v_{i_0}), \ldots, A(v_{i_k}-v_{i_0})$ and the $(m-k)$-simplex $\operatorname{conv}(v_{j_0},\ldots,v_{j_{m-k}})$ with edges $v_{j_1}-v_{j_0},\ldots, v_{j_{m-k}}-v_{j_0}$. From now on, let us fix a basis of $M$. It follows that $k!(m-k)!\operatorname{Vol}(C_{IJ})$ equals the modulus of the determinant of the matrix $B_{IJ}$ with the vectors $A(v_{i_1}-v_{i_0}), \ldots, A(v_{i_k}-v_{i_0})$ and $v_{j_1}-v_{j_0},\ldots, v_{j_{m-k}}-v_{j_0}$ as columns. Since $P_D$ is a lattice polytope, the determinant of $B_{IJ}$ is an integer linear combination of $k\times k$-minors of $A$. Hence $\deg_{D,k}(f_A)$ is of the form $\sum \sigma_{IJ}A_{IJ}$ where $\sigma_{IJ}\in\Z$ and $A_{IJ}$ is the $IJ$-minor of $A$. Observe that the matrix $\sigma$ with entries $\sigma_{IJ}$ only depends on the set of multi-indices $I,J$ such that $C_{IJ}$ is in $\mathcal S_k$ and the sign of the determinant of $B_{IJ}$. Since there are only finitely many choices of $I,J$ and signs, we conclude the following. \[flyg\] There is a finite set $\Sigma$ of matrices $\sigma=(\sigma_{IJ})\in\Z^{{m\choose k}^2}$, such that for each $A:M\to M$ there is a $\sigma=\sigma(A)\in\Sigma$ such that $$\deg_{D,k}(f_A)=\sum_{IJ}\sigma_{IJ} A_{IJ}.$$ Proof of Theorem C’ ------------------- Our proof is much inspired by the proof of Proposition 3.1 in [@BK] and the proof of Proposition 7.3 in [@HP]. 
We will argue by contradiction using a result from combinatorics, which says that if $\alpha_n$ and $\beta_n$ are two sequences that each satisfies a linear recurrence, then the set of $n\in \N$, for which $\alpha_n=\beta_n$, is either finite or eventually periodic, see [@St Chapter 4, Exercise 3]. In particular if $\alpha_n=\beta_n$ for infinitely many $n$, then for some $a,b\in \N$, $$\label{joho} \alpha_{a+b\ell}=\beta_{a+b\ell} \text{ for }\ell \gg 0.$$ Now, let $\alpha_n =\deg_{D,k}(f_A^n)$. By Lemma \[flyg\] $\alpha_n=\sum_{IJ} \sigma_{IJ}(n)A^n_{IJ}$, where $A_{IJ}^n$ is the $IJ$-minor of $A^n$, for some matrix $\sigma(n)\in\Sigma$. Since $\Sigma$ is a finite set, there is at least one $\sigma\in\Sigma$ such that $\sigma(n)=\sigma$ for infinitely many $n$. Pick such a $\sigma=(\sigma_{IJ})\in\Sigma$ and let $\beta_n=\sum_{IJ}\sigma_{IJ}A_{IJ}^n$. Let $\chi (r)$ be the characteristic polynomial of $\Lambda^kA$. By the Cayley-Hamilton theorem the entries $A^n_{IJ}$ of $(\Lambda^k A)^n$, cf. , satisfy the linear recurrence $\chi (A_{IJ}^n)$, see . It follows that $\beta_n$ satisfies the linear recurrence $\chi (\beta_n)$, and $\alpha_n=\beta_n$ for infinitely many $n$. Next, we claim that, if $A$ is as in the assumption, then for each choice of $a,b\in\N$, $\beta_{a+b\ell}<0$ for infinitely many $\ell$. Since the eigenvalues of $A$ satisfy $$|\mu_{k-1}|>|\mu_k|=|\mu_{k+1}|>|\mu_{k+2}|$$ it follows that $\mu_{[k]}=:\mu$ and $\mu_{\{1,\ldots, k-1,k+1\}}$ are the two eigenvalues of $\Lambda^kA$ of largest modulus, and that the other eigenvalues $\mu_{I_3}, \ldots, \mu_{I_{{m\choose k}}}$ are of strictly smaller modulus. Moreover, since $\mu_{k+1}=\bar\mu_{k}$ and $\mu_k/\bar\mu_{k}$ is not a root of unity, it follows that $\mu_{\{1,\ldots, k-1,k+1\}}=\bar\mu$ and that $\mu/\bar\mu$ is not a root of unity. Hence we can write $$(\Lambda^k A)^n= P \begin{bmatrix} \mu^n & 0 & 0 & \cdots \\ 0 & \bar \mu^n & 0 & \cdots \\ 0 & 0 & \mu_{I_3}^n & \\ \vdots & \vdots & & \ddots \end{bmatrix} P^{-1}$$ for some invertible matrix $P$. It follows that $$\beta_n=\sum \sigma_{IJ}A^n_{IJ}=C\mu^n + D\bar \mu^n+\O (|\mu_{I_3}|^n)$$ where $C$ and $D$ are independent of $n$. Since $\beta_n$ is real it follows that $D=\bar C$, so that $$\beta_n=2\Re (C) \cdot\Re (\mu^n) + \O (|\mu_{I_3}|^n).$$ Since $\mu= |\mu|\cdot e^{2\pi i\theta}$ with $\theta\not\in\Q$, it follows that $\arg(\mu^{a+b\ell})$ is dense in $[0,2\pi)$. In particular, $\Re (\mu^{a+b\ell})<0$ for infinitely many $\ell$ and $\Re (\mu^{a+b\ell})>0$ for infinitely many $\ell$, and thus, since $|\mu_{I_3}|<|\mu|$, $\beta_{a+b\ell}<0$ for infinitely many $\ell$. Assume that $\alpha_n$ satisfies a linear recurrence. Then, since $\alpha_n=\beta_n$ for infinitely many $n$, \[joho\] holds for some $a,b$, but this contradicts that $\alpha_n >0$. This proves Theorem C’. [BK2]{} W. Barrett and C. Johnson. *Possible spectra of totally positive matrices.* Linear Algebra Appl. **62** (1984), 231–233. E. Bedford and K. Kim. *Linear recurrences in the degree sequences of monomial mappings.* Ergodic Theory Dynam. Systems **28** (2008), no. 5, 1369–1375. V. I. Danilov. *The geometry of toric varieties.* Uspekhi Mat. Nauk **33** (1978), no. 2(200), 85–134, 247. J. Diller and C. Favre. *Dynamics of bimeromorphic maps of surfaces*. Amer. J. Math. **123** (2001), 1135–1169. T.-C. Dinh and N. Sibony. *Une borne supérieure pour l’entropie topologique d’une application rationnelle*. Ann. of Math. **161** (2005), 1637–1644. T.-C. Dinh and N. Sibony. 
*Dynamics of regular birational maps in $\P^k$.* J. Funct. Anal. **222** (2005), no. 1, 202–216. T.-C. Dinh and N. Sibony. *Super-potentials of positive closed currents, intersection theory and dynamics.* Acta Math. **203** (2009), no. 1, 1–82. C. Favre. *Les applications monomiales en deux dimensions*. Michigan Math. J. **51** (2003), 467–475. C. Favre and M. Jonsson. *Dynamical compactifications of $\C^2$*. Ann. of Math. **173** (2011), 211–248. C. Favre and E. Wulcan. *Degree growth of monomial maps and McMullen’s polytope algebra*. To appear in Indiana Univ. Math. J. J.-E. Forn[æ]{}ss and N. Sibony. *Complex dynamics in higher dimension*, II. In *Modern Methods in Complex Analysis*, Ann. of Math. Stud., vol. 137, Princeton Univ. Press, 1995, pp. 135–182. W. Fulton. *Introduction to toric varieties*. Annals of Mathematics Studies, 131. Princeton University Press, Princeton, NJ, 1993. W. Fulton. *Intersection Theory*. Springer-Verlag, 1998. V. Guedj. *Ergodic properties of rational mappings with large topological degree*. Ann. of Math. **161** (2005), 1589–1607. B. Hasselblatt and J. Propp. *Degree-growth of monomial maps*. Ergodic Theory Dynam. Systems **27** (2007), no. 5, 1375–1397. B. Huber and B. Sturmfels. *A polyhedral method for solving sparse polynomial systems.* Math. Comp. **64** (1995), no. 212, 1541–1555. M. Jonsson and E. Wulcan. *Stabilization of monomial maps*. Michigan Math. J. **60** (2011), 629–660. J.-L. Lin. . To appear in Math. Z. J.-L. Lin. . To appear in Bull. Soc. Math. France. J.-L. Lin. Preprint, arXiv:1204.6258. M. Mustaţă. *Lecture notes on toric varieties*. Available on the author’s webpage: `www.math.lsa.umich.edu/~mmustata`. T. Oda. *Convex bodies and algebraic geometry. An introduction to the theory of toric varieties*. Ergebnisse der Mathematik und ihrer Grenzgebiete, 15. Springer-Verlag, Berlin, 1988. C. Peters and J. Steenbrink. *Mixed Hodge structures.* Ergebnisse der Mathematik und ihrer Grenzgebiete, 52. Springer-Verlag, Berlin, 2008. A. Russakovskii and B. Shiffman. *Value distribution for sequences of rational mappings and complex dynamics*. Indiana Univ. Math. J. **46** (1997), 897–932. N. Sibony. *Dynamique des applications rationnelles de $\mathbf{P}^k$*. In *Dynamique et g[é]{}om[é]{}trie complexes (Lyon, 1997)*. Panor. Synth[è]{}ses, 8 (1999), 97–185. R. Stanley. *Enumerative combinatorics.* Vol. I. Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA, 1986. [^1]:
--- abstract: 'Low-temperature magnetization ($M$) measurements down to 0.1 K have been performed in magnetic fields up to 14.5 T for a single piece of a tiny single-crystalline sample ($\sim 0.2$ mg weight) of the spin-gap system YbAl$_3$C$_3$. At the base temperature of 0.1 K, several metamagnetic transitions were clearly observed for $H \parallel c$ in the range 6 T $< \mu_0H <$ 9 T whereas only two transitions were observed, one at 4.8 T and the other at 6.6 T, for $H \parallel a$. At fields above 9 T, the magnetization becomes almost saturated for both $H \parallel a$ and $H \parallel c$. The present results indicate that a singlet-triplet crossover occurs in a relatively narrow field range, suggesting a rather weak interdimer interaction in spite of the nearly triangular lattice of Yb ions.' author: - Shunichiro - Tomoyoshi - Yasuyuki - Toshiro - Saori - Akira date: Received 4 September 2012 title: 'Singlet-triplet Crossover in the Two-dimensional Dimer Spin System YbAl$_3$C$_3$' --- INTRODUCTION ============ Low-dimensional quantum spin systems have attracted much interest because of their novel ground states dominated by strong quantum fluctuations. Intensive studies have been done in the $d$-electron compounds such as the two dimensional $S=1/2$ dimer spin systems SrCu$_2$(BO$_3$)$_2$ [@Takigawa2010JPSJ] and (CuCl)LaNb$_2$O$_7$ [@Kageyama2005JPSJ], both of which have a singlet ground state. By contrast, not many $4f$-electron compounds have been investigated from a standpoint of quantum spin systems. This is because $4f$-electron compounds generally have a large total angular momentum $J$ that is equal to and above $5/2$, which would hinder intersite quantum fluctuations. Moreover, in the case of metallic $4f$-electron compounds, either long-range Ruderman-Kittel-Kasuya-Yoshida (RKKY) interactions induce a magnetic ordering or a screening by the conduction electrons leads to a singlet Kondo state. These features of $4f$-electron compounds tend to disturb the realization of a quantum spin state. Until recently, only Yb$_4$As$_3$ has been known as a unique $4f$-electron compound in which a one-dimensional pseudo-spin-$1/2$ Heisenberg antiferromagnetic ground state is realized [@Kohgi1997PRB]. Recently, YbAl$_3$C$_3$ has been proposed to be another candidate for a $4f$-electron quantum spin system. YbAl$_3$C$_3$ crystallizes into a hexagonal ScAl$_3$C$_3$-type structure at room temperature, which consists of layers of a Yb triangular lattice separated by Al and C layers. At temperatures below about 80 K ($=T_{\rm s}$), it exhibits a structural phase transition into an orthorhombic structure, accompanied by a slight displacement of the Yb atoms [@Matsumura2008JPSJ]. YbAl$_3$C$_3$ was revealed to have a low carrier concentration of about 0.01 per formula unit [@Ochiai2007JPSJ] and not to show any long-range magnetic ordering at temperatures down to 0.5 K [@Kato2008JPSJ]. The magnetic susceptibility $\chi(T)$ of YbAl$_3$C$_3$ in the high-temperature hexagonal phase at temperatures above 80 K indicates an effective Yb moment of $4.65 \sim 4.66 \mu_{\rm B}$ and a Weiss temperature of $\Theta=-80\sim-120$ K, depending on the field direction, implying the existence of a relatively large antiferromagnetic interaction between the localized Yb$^{3+}$ moments [@Ochiai2007JPSJ]. In the orthorhombic phase, $\chi(T)$ shows a broad peak around 10 K and becomes quite small at lower temperatures, suggesting a non-magnetic ground state with the development of a spin gap [@Ochiai2007JPSJ]. 
Interestingly, in the specific heat $C(T)$ measurements, a Schottky-type anomaly was found around 5 K, whose entropy release was estimated to be exactly $R\ln2$ [@Kato2008JPSJ]. In addition, inelastic neutron scattering spectra exhibit three well-defined peaks around 1.5 meV, indicating the presence of low-energy magnetic excitations, and confirm that the ground-state Kramers doublet of Yb$^{3+}$ is well separated from the excited levels by about 200 K [@Kato2008JPSJ]. From these facts, it has been proposed that the ground state Kramers doublet of Yb$^{3+}$, which can be represented by a pseudo-spin 1/2, forms an antiferromagnetic dimer state with a singlet-triplet energy gap $\Delta$ of about 15 K [@Ochiai2007JPSJ]. Indeed, the low-energy spectra in the inelastic neutron scattering experiment can be interpreted by using singlet-triplet excitations [@Kato2008JPSJ]. For the clarification of the nature of the ground state of the spin-gap system, low-temperature magnetization $M$ measurements provide a powerful tool because $M(T,H)$ depends on low-lying magnetic states. Within an isolated dimer model, the application of a magnetic field induces a step-like magnetization as $T\rightarrow 0$ reflecting a change in the ground state from a spin singlet to a triplet. In addition, in the presence of interdimer interactions and geometrical frustration, the degeneracy of the dimer triplet excited states is removed, and various ground states are expected to appear under magnetic fields. For instance, the two-dimensional $S=1/2$ dimer spin system SrCu$_2$(BO$_3$)$_2$ exhibits magnetization plateaus at 1/8, 1/4, and 1/3 of the saturation magnetization, whose origins have been attributed to the formation of superstructures of the triplet state [@Onizuka2000JPSJ]. In the case of YbAl$_3$C$_3$, the mechanism of the dimer formation is not so obvious because the displacement of the Yb atoms from the original triangular lattice is very small (only $0.1-0.2$%) [@Matsumura2008JPSJ]. Accordingly, one might expect the interdimer interaction to be relatively strong (the dimers are less isolated from each other). The previous $M$ measurements performed at temperatures down to 1.8 K revealed a broad metamagnetic increase in $M(H)$ that could be interpreted as a singlet-triplet crossover [@Ochiai2007JPSJ]. In addition, quite recently, $M(H)$ at fields up to 8 T was investigated at about 0.5 K, and multiple metamagnetic steps were found for $H \parallel c$ [@Hara2012PRB], though several pieces of single crystals were used in the experiments. In order to further examine the nature of the singlet-triplet crossover in magnetic fields, we measured $M(T,H)$ for *one piece* of a single-crystalline sample of YbAl$_3$C$_3$ at lower temperatures down to 0.1 K in higher magnetic fields up to 14.5 T. Experimental ============ Single crystals of YbAl$_3$C$_3$ were grown by using an encapsulated tungsten crucible [@Ochiai2007JPSJ]. The typical weight of the obtained crystal was at most several hundred micrograms. We measured $M(T,H)$ of a tiny single crystal of YbAl$_3$C$_3$ by using a high-resolution capacitive Faraday magnetometer with a vertical field gradient of 8 T/m in a dilution refrigerator [@Sakakibara1994JJAP]. We recently improved the sensitivity of the measurement by a factor of 100 over the previous apparatus, the details of which will be published elsewhere. In this paper, we present the data obtained for two samples: sample A (0.16 mg) and sample B (0.23 mg). 
Because YbAl$_3$C$_3$ is easily decomposed by a reaction with atmospheric oxygen, the sample was wrapped in silver foil with silver paste and then fixed firmly on the sample stage of the magnetometer by using GE varnish. Therefore, a slight misalignment of the crystal orientation may have happened. In all the data presented, the background magnetization of the magnetometer was subtracted. For the measurements with $H \parallel a$, we cooled the sample slowly across $T_{\rm s}$ in a magnetic field of 10 T applied along one of the three equivalent $a$ axes, so that the orthorhombic phase became a single-domain state. Results and Discussion ====================== ![(Color online) Magnetic field dependences of (a) the magnetization $M$ and (b) the differential susceptibility ${\rm d}M/{\rm d}H$ of sample A for $H \parallel a$ at several temperatures. []{data-label="MH_a"}](MH_a.eps){width="3.2in"} Figure \[MH\_a\](a) shows the $H$ dependence of the magnetization, $M(H)$, of sample A for $H \parallel a$ obtained at several temperatures. Whereas $M(H)$ is a gradually increasing function of $H$ at 4.2 K, the increase in $M$ becomes non-monotonic and steeper at lower temperatures. At the base temperature of 0.1 K, the differential susceptibility d$M$/d$H$ exhibits a kink at 4.8 T and a large peak at 6.6 T, as shown in Fig. \[MH\_a\](b). Here, no hysteresis was detected down to 0.1 K between the field increasing and decreasing sweeps. The convex increase in $M(H)$ at low fields below 2 T, which can be fitted using the Brillouin function, is attributable to the decomposed Yb$^{3+}$ impurities in the sample. ![(Color online) Magnetic field dependences of (a) $M$ and (b) ${\rm d}M/{\rm d}H$ of sample A for $H \parallel c$ at several temperatures. []{data-label="MH_c"}](MH_c.eps){width="3.2in"} The application of $H$ along the $c$ axis provides a more striking effect. As shown in Fig. \[MH\_c\](a), the gradual increase of $M(H)$ observed at 4.2 K becomes sharp and splits into multiple steps at lower temperatures. This feature can be seen more clearly in the d$M$/d$H$ data presented in Fig. \[MH\_c\](b): a kink at around 7 T, which is similar to the one observed for $H \parallel a$ at 4.7 T, and more than six peaks in the interval 7.5 T $\le \mu_0H \le 9$ T were observed at 0.1 K. No appreciable hysteresis was observed for $H \parallel c$, either. These multiple steps resemble the fractional magnetization steps observed in SrCu$_2$(BO$_3$)$_2$ [@Onizuka2000JPSJ], though clear plateaus cannot be resolved in the $M(H)$ curve for sample A. ![(Color online) Normalized magnetization $\langle S_\alpha \rangle=(M-\chi_{\rm v}\mu_0H)/g\mu_{\rm B}$ of sample A at 0.1 K as a function of $g\mu_0H$. []{data-label="MH_g"}](MH_g.eps){width="3.2in"} In a previous report [@Ochiai2007JPSJ], the $H$ and the $T$ dependences of $M$ of YbAl$_3$C$_3$ were examined at temperatures above 1.8 K and were compared by using the isolated dimer spin model with an effective anisotropic $g$ factor. In this model, the magnetization normalized by the $g$-factor and the Bohr magneton $\mu_{\rm B}$, which is labeled $\langle S_\alpha \rangle$ ($\alpha=a$, $b$, or $c$), is given by [@Kageyama2005JPSJ; @Ochiai2007JPSJ] $$\begin{aligned} \langle S_\alpha \rangle&=\frac{M(H)-\chi_{\rm v}\mu_0H}{g\mu_{\rm B}} \notag \\ &= \frac{N\sinh(g\mu_{\rm B}\mu_0H/k_{\rm B}T)}{1+\exp(\Delta/k_{\rm B}T)+2\cosh(g\mu_{\rm B}\mu_0H/k_{\rm B}T)}. 
\label{eq1}\end{aligned}$$ Here, $N$, $\chi_{\rm v}$, and $k_{\rm B}$ denote the number of Yb ions, the temperature-independent susceptibility, and the Boltzmann constant, respectively. Accordingly, $\langle S_\alpha \rangle$ is expected to be scaled by $g\mu_0H$; $\langle S_\alpha \rangle$ increases rapidly and saturates to a value 0.5 at around $g\mu_0H \sim 22.5$ T when $\Delta/k_{\rm B}=15$ K and $T=0.1$ K. Figure \[MH\_g\] presents $\langle S_a \rangle$ and $\langle S_c \rangle$ for YbAl$_3$C$_3$ at 0.1 K as functions of $g\mu_0H$, where the values of $g$ determined from a previous magnetic-susceptibility study [@Ochiai2007JPSJ] were used, and $\chi_{\rm v}$ was adjusted so that $M(H)$ in 5 T $\le g\mu_0H \le 10$ T was almost constant. As expected from Eq. , both $\langle S_a \rangle$ and $\langle S_c \rangle$ saturate to be about 0.45 at around $g\mu_0H \sim 20$ T, which means that the magnetic moment of the pseudo-spin $S=1/2$ is almost fully polarized. However, the structures of the magnetization curves do not match between $H \parallel a$ and $H \parallel c$. For instance, $\langle S_a \rangle$ ($\langle S_c \rangle$) keeps increasing with a gradual slope above $g\mu_0H \sim 13$ T until it reaches about half (quarter) the saturation magnetization. Thus, the $M(H)$ behavior at low temperatures cannot be explained by using the simple isolated dimer model. The ratio of the saturation fields, $H^\ast_{\rm c2}$ ($=g\mu_0H \sim 18-20$ T), to the onset field of the triplet crossover, $H^\ast_{\rm c1}$ ($=g\mu_0H \sim 13-15$ T), is estimated to be at most 1.5 for both $H \parallel a$ and $H \parallel c$ in YbAl$_3$C$_3$. This ratio is related to the strength of the interdimer interaction. In the isolated dimer model at 0 K, $H^\ast_{\rm c2}/H^\ast_{\rm c1}=1$. By contrast, when the interdimer coupling is sufficiently strong, $H^\ast_{\rm c2}/H^\ast_{\rm c1}$ becomes larger because the interdimer interaction lifts the degeneracy of the triplet states and makes a wide triplet crossover or induces an ordered state. The observed ratio $H^\ast_{\rm c2}/H^\ast_{\rm c1} \sim 1.5$ in YbAl$_3$C$_3$ indicates that the interdimer interaction is not strong compared with other dimer systems, e.g. SrCu$_2$(BO$_3$)$_2$ ($H^\ast_{\rm c2}/H^\ast_{\rm c1} \gg 3$) [@Onizuka2000JPSJ], (CuCl)LaNb$_2$O$_7$ ($\sim 3$) [@Kageyama2005JPSJ], BaCuSi$_2$O$_6$ ($\sim 2$) [@Jaime2004PRL], and Ba$_3$Cr$_2$O$_8$ ($\sim 2$) [@Kofu2009PRL]. ![(Color online) Field-temperature phase diagrams of sample A for (a) $H \parallel a$ and (b) $H \parallel c$. Dashed lines are guides to the eye. The horizontal axis is a logarithmic scale. []{data-label="HT"}](HT.eps){width="3.2in"} In Fig. \[HT\], we plot the position of a peak or a kink in d$M$/d$H$ as a function of temperature. We define three main regions: a dimer state at low fields, a transition region $H^\ast_{\rm c1}<g\mu_0H<H^\ast_{\rm c2}$, and a fully-polarized state above $H^\ast_{\rm c2}$. Possibly, the transition region consists of a mixture of the singlet and the triplet states. A remarkable feature is the existence of various internal states in the transition region for $H \parallel c$ at low temperatures represented by the several peaks in d$M$/d$H$. Because the boundary of the transition region is not likely to close in the $H-T$ plane, we consider that there is no long-range ordered state. 
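As a quick illustration of this step-like behavior, Eq. (\[eq1\]) can be evaluated numerically. The following minimal Python sketch (illustrative only; the per-ion normalization $N=1$, the field grid, and the physical constants are our own choices, with $\Delta/k_{\rm B}=15$ K and $T=0.1$ K taken from the values quoted above) locates the midpoint of the singlet-triplet step, which falls at $g\mu_0H\simeq k_{\rm B}\Delta/\mu_{\rm B}\approx 22.3$ T, consistent with the saturation scale mentioned above.

```python
# Minimal sketch of the isolated-dimer magnetization of Eq. (eq1), per Yb ion
# (N = 1); Delta/k_B = 15 K and T = 0.1 K follow the text, the grid is arbitrary.
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant (J/K)
mu_B = 9.2740100783e-24   # Bohr magneton (J/T)

def S_dimer(gH, T=0.1, Delta=15.0):
    """<S_alpha> of Eq. (eq1) as a function of g*mu_0*H (tesla)."""
    x = mu_B * gH / (k_B * T)              # Zeeman energy over k_B T
    return np.sinh(x) / (1.0 + np.exp(Delta / T) + 2.0 * np.cosh(x))

gH = np.linspace(0.0, 30.0, 3001)
S = S_dimer(gH)

# Midpoint of the singlet-triplet step, <S> = 0.25, i.e. g*mu_B*mu_0*H = k_B*Delta.
gH_mid = gH[np.argmin(np.abs(S - 0.25))]
print(f"step midpoint: g*mu_0*H ~ {gH_mid:.1f} T "
      f"(k_B*Delta/mu_B = {k_B * 15.0 / mu_B:.1f} T)")
```

At 0.1 K the computed curve is essentially a sharp step between 0 and 0.5, whereas at 4.2 K it is strongly broadened, which is the qualitative trend seen in Figs. \[MH\_a\] and \[MH\_c\].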
![(Color online) Temperature dependences of $M$ of sample A measured (a) in 7, 6.9, 6.8, 6.7, 6.6, and 6.5 T for $H \parallel a$ and (b) in fields of 10, 9, 8.5, 8.35, 8.1, and 7.75 T for $H \parallel c$, respectively, from top to bottom. []{data-label="MT"}](MT.eps){width="3.2in"} Figures \[MT\](a) and \[MT\](b) show the $T$ dependences of $M$ at several fields for $H \parallel a$ and $H \parallel c$, respectively. In the field region where $M(H)$ shows a steep increase, a rather strong increase of $M(T)$ was observed on cooling, although there is no distinct feature that could be ascribed to the manifestation of a phase transition. The stepwise increase in $M(T)$ at 8.35 T for $H \parallel c$ probably comes from the multiple magnetization steps in $M(H)$ for this field direction. ![ (a) $M(H)$ and d$M$/d$H$ of sample A replotted from Fig. \[MH\_c\], and (b) those measured six months later. (c) $M(H)$ and d$M$/d$H$ of sample B measured relatively soon after the crystal growth. All the data presented here have been taken at 0.1 K in fields $H\parallel c$ . []{data-label="sample-dep"}](sample-dep.eps){width="3.2in"} Next, we focus on the sample quality dependence of the multiple magnetization steps. After taking the data shown in Fig. \[MH\_c\], we kept sample A in a vacuum desiccator. Six months later, we measured $M(H)$ for sample A again, and the result is shown in Fig. \[sample-dep\](b). The data in Fig. \[MH\_c\] are also replotted in Fig. \[sample-dep\](a) for comparison. A qualitative difference between the two data can be seen. In Fig. \[sample-dep\](b), the field range of the transition region becomes wider, and even clear plateaus appear. The impurity magnetization in low fields is also slightly enhanced, indicating that the sample is degraded and that the number of decomposed Yb atoms has increased. Obviously, the magnetization process of YbAl$_3$C$_3$ very much depends on the sample quality. This effect might originate from the partial release of the frustration that induced the distribution of the interdimer interaction. This, however, does not mean that the observed multiple magnetization steps are caused by such sample deterioration. Figure \[sample-dep\](c) shows the $M(H)$ for sample B at 0.1 K, which was measured relatively soon after the growth. Sample B can be seen to be of good quality because of less impurity magnetization in low fields. Nevertheless, it shows clear multiple steps with sharp and large peaks of d$M$/d$H$. In addition, the field range of the transition region for sample B is even slightly wider than that for sample A (Fig. \[sample-dep\](a)). Note that the largest two peaks in d$M$/d$H$ are well separated for sample B whereas they tend to merge for sample A. These results strongly suggest that the multiple magnetization steps are intrinsic to the system and are not due to the sample deterioration. Unfortunately, the $M(H)$ for sample B at different $T$ was not investigated before it had decomposed. The overall behavior of $M(H)$ in YbAl$_3$C$_3$ is well interpreted by using a singlet-triplet crossover in a spin-dimer system. In particular, the multiple magnetization steps observed for $H\parallel c$ are reminiscent of the magnetization plateaus seen in the quantum dimer spin system SrCu$_2$(BO$_3$)$_2$, whose origin is the formation of magnetic superstructures of localized triplets. It is highly interesting how the dimer state is realized in the nearly triangular lattice of Yb ions in YbAl$_3$C$_3$. 
If the singlet-triplet crossover of YbAl$_3$C$_3$ is to be studied in more detail, careful investigations using a high-quality single crystal are essential. CONCLUSIONS =========== We have measured the magnetization of a single crystal of YbAl$_3$C$_3$ at low temperatures down to 0.1 K and in magnetic fields up to 14.5 T. For both $H \parallel a$ and $H \parallel c$, a steep increase of the magnetization, ascribed to a singlet-triplet crossover, was observed at 1.8 K. With decreasing temperature, this crossover becomes sharp and splits into multiple steps. At the lowest temperature of 0.1 K, multiple magnetization steps, which resemble the quantized magnetization plateaus found in the quantum dimer spin system SrCu$_2$(BO$_3$)$_2$, were observed only for $H \parallel c$. We found that these multiple magnetization steps strongly depended on the sample quality. The width of the singlet-triplet crossover region for a good-quality sample was relatively narrow compared with those for other $3d$ electron dimer systems. This might indicate that the effective interdimer interaction in YbAl$_3$C$_3$ is not so strong in spite of the nearly triangular lattice configuration of Yb atoms. Further investigations are needed to establish the nature of the novel ground state of YbAl$_3$C$_3$. ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ This work has been partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas (20102007, 23102705) from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and by a Grant-in-Aid for Scientific Research (A) (No. 20244053) from the Japan Society for the Promotion of Science. REFERENCES {#references .unnumbered} ==========
--- abstract: 'The near edge structure (XANES) in K-edge X-ray absorption spectroscopy (XAS) is a widely used tool for studying electronic and local structure in materials. The precise interpretation of these spectra with the help of calculations is hence of prime importance, especially for the study of correlated materials which have a complicated electronic structure per se. The single particle approach, for example, has generally limited itself to the dominant dipolar cross-section. It has long been known however that effects beyond this approach should be taken into account, both due to the inadequacy of such calculations when compared to experiment and the presence of shake-up many-body satellites in core-level photoemission spectra of correlated materials. This effect should manifest itself in XANES spectra and the question is firstly how to account for it theoretically and secondly how to verify it experimentally. By using state-of-the-art first principles electronic structure calculations and 1s photoemission measurements we demonstrate that shake-up many-body effects are present in K-edge XAS dipolar spectra of NiO, CoO and CuO at all energy scales. We show that shake-up effects can be included in K-edge XAS spectra in a simple way by convoluting the single-particle first-principles calculations including core-hole effects with the 1s photoemission spectra. We thus describe all features appearing in the XAS dipolar cross-section of NiO and CoO and obtain a dramatic improvement with respect to the single-particle calculation in CuO. These materials being prototype correlated magnetic oxides, our work points to the presence of shake-up effects in K-edge XANES of most correlated transition metal compounds and shows how to account for them, paving the way to a precise understanding of their electronic structure.' author: - 'M. Calandra' - 'J. P. Rueff' - 'C. Gougoussis' - 'D. Céolin' - 'M. Gorgoi' - 'S. Benedetti' - 'P. Torelli' - 'A. Shukla' - 'D. Chandesris' - 'Ch. Brouder' title: | K-edge X-ray absorption spectra in transition metal oxides beyond the single particle\ approximation: shake-up many body effects. --- Introduction ============ Low-energy excitations of correlated materials are strongly influenced by electron-electron interaction effects beyond the single particle approximation. This is particularly evident in core-level photoemission spectra (XPS) of transition metal compounds where the occurrence of many-body satellites is well documented (for a review see Ref. ). As for x-ray absorption, interpretation of the dipolar K-edge XAS cross-section heavily relies on standard single-particle first principles calculations  [@Gougoussis09a; @Taillefumier; @Hebert_Wien2k; @Joly] that neglect shake-up excitations. Since dipolar L$_{2,3}$ XAS mostly samples d-states of the absorbing atom which are more prone to effects of correlation than p-states, one normally assumes that shake-up effects are visible mostly in L$_{2,3}$ XAS and not in K XAS. However a recent work [@Gougoussis09b] shows that in NiO the single particle dipolar K-edge spectrum misses some near-edge and far-edge features present in the experimental measured one. We concentrate on shake-up many body excitations arising from a valence electron excitation following the creation of a core hole by the incident x-ray [@Agren92]. 
In the past, the description of shake-up effects in core-hole photoemission spectra has been investigated in the framework of quantum-chemical calculations [@Carravetta-12], by using approaches based on model Hamiltonians [@Veenendal93; @Groot_Kotani_Book; @Taguchi] or modified first-principles approaches [@Takahashi-12]. The occurrence of these effects in XPS is well established, but they have also been shown to occur in M$_{4,5}$ edges of mixed-valent compounds [@Gunnarsson] and in L$_{2,3}$ X-ray absorption spectra (XAS) of transition metal and rare-earth compounds [@Hammoud; @Malterre], and were proposed as a possible explanation of the double-peak structure in dipolar K-edge XAS of high-T$_c$ cuprates [@tolentino_PhysRevB.45.8091] and copper compounds in general [@Bair_PhysRevB.22.2767]. However, this attribution was questioned in Refs. [@Kosugi_PhysRevB.41.131; @Gougoussis09a], where the double-peak structure was suggested to be single particle in origin. Nailing down the importance of these effects has been difficult due to complications related to many-body calculations but also to the paucity of experimental 1s photoemission spectra. In this work, following earlier suggestions, we demonstrate that shake-up many-body effects can be included in a simple way in K-edge XAS spectra by convolving the single particle first principles calculations with experimental 1s photoemission spectra, some of which we have freshly measured. We show that this procedure explains all features in K-edge XAS spectra of NiO and CoO and strongly improves the agreement with experimental data in CuO. Our work points out the relevance of these effects in K-edge dipolar XAS of all compounds displaying multiple structures in photoemission spectra. The structure of the paper is the following. In section \[sec:theory\], following Ref. , we briefly sketch the demonstration of the convolution formula relating the many-body XAS cross section to the single-particle one via the photoemission spectrum. We then present experimental details concerning our measured photoemission spectra in sec. \[sec:experiment\]. The experimental results and the theoretical understanding are presented in sec. \[sec:results\]. Theory ====== Shake-up theory {#sec:theory} --------------- Shake-up satellites are many-body peaks present in core-electron spectra. They originate from a valence electron excitation following the creation of a core hole by the incident x-ray [@Agren92]. Quantum chemical calculations of shake-up satellites have been recently reviewed by Carravetta and [Å]{}gren [@Carravetta-12]. An electric dipole transition between two Slater determinants built from the same set of orbitals does not allow for shake-up satellites. Indeed, the orthogonality of the orbitals allows only a single transition from the core level to an empty orbital. Therefore, a shake-up can only be obtained by describing the (initial) state with a linear combination of Slater determinants or by using different orbitals for the initial and final determinants [@Martin-76]. The first approach was extensively used by Sawatzky and collaborators [@Veenendal93]. Here we use dipole transitions between single Slater determinants built from non-orthogonal orbitals, the orbitals of the final state being relaxed in the presence of the core hole. The possibility of describing the electronic state of NiO by a single Slater determinant was suggested in Refs. [@Brandow-77; @Brandow-92; @Harrison-98]. 
Moreover, relaxed Slater determinants can sometimes describe a state much better than the sum of a small number of unrelaxed Slater determinants [@Brandow-77]. A single Slater determinant is also the non-interacting ground state of the Kohn-Sham version of density funtional theory (DFT). The corresponding Kohn-Sham orbitals are usually considered to have no physical meaning. This would be a problem for our approach that calculates electric dipole transitions between these orbitals. The success of DFT calculations of XAS seems to indicate that Kohn-Sham orbitals are physically meaningful and indeed Gidopoulos [@Gidopoulos-11] discovered that the non-interacting Kohn-Sham ground state is the best approximation of the true ground state in a subtle way. To describe his finding, let $h(\mathbf{r})=-\hbar^2\Delta/2m +v(\mathbf{r})$ be a one-body potential and $H_v=\sum_i h(\mathbf{r}_i)$ be the corresponding non-interacting many-body Hamiltonian. Denote by $|\Psi_v\rangle$ the (Slater determinant) ground state of $H_v$ and by $|\Psi\rangle$ the ground state of the interacting Hamiltonian $H$. By the Rayleigh-Ritz minimum principle we have $\langle \Psi|H_v|\Psi\rangle - \langle \Psi_v|H_v|\Psi_v\rangle >0$. Gidopoulos proved that the potential $v$ that minimizes this difference is precisely the Kohn-Sham potential. In that sense, the Kohn-Sham determinant and the Kohn-Sham potential provide the best single-particle description of the ground state of an interacting system. Therefore, it is relevant to describe shake-up processes with non-orthogonal Slater determinants. Other calculations were carried out within this framework by Tyson [@TysonPhD], who could calculate double-electron excitations in XAS [@Chaboy-94] for LN$_{4,5}$-edges. Similar calculations for x-ray photoemission spectroscopy are more common [@Takahashi-12]. Cross section ------------- The manybody X-ray photoemission cross section can be written as [@almbladh] $$\begin{aligned} \sigma_{XPS}(\epsilon_k)&=&\frac{2\pi}{\hbar}\sum_f |\langle k, \Psi_{f} (N-1)|T| \Phi_i(N)\rangle|^2 \nonumber \\ & & \delta(\epsilon_k-\hbar\omega- E_i(N)+E_f(N-1)) \label{eq:crossmbxps}\end{aligned}$$ where $\epsilon_k$ is the photoelectron kinetic energy, $E_i(N)$ is the energy of the $N$ electrons ground state $|\Phi_i(N)\rangle$, $E_f(N-1)$ and $|\Psi_{f}(N-1)\rangle$ characterize the excited energy and state of the $N-1$ electron system with a core hole and $\hbar \omega$ is the energy of the incident X-ray beam. The electric-dipole transition operator is labeled T. The transform $ I_{XPS}(t) $ of the XPS cross-section is defined as, $$\sigma_{XPS}(\epsilon)=2 {\operatorname{Re}}\int_{0}^{+\infty} dt e^{i \epsilon_{+} t} I_{XPS}(t) \label{eq:Ixpst}$$ where $\epsilon_{+}=\epsilon+i \eta$ and Eq. \[eq:Ixpst\] has to be understood as the limit for $\eta\to 0^{+}$. The manybody X-ray absorption cross section in the dipolar approximation can be written as $$\begin{aligned} \sigma_{XAS}(\omega)&=&\frac{2\pi}{\hbar}\sum_f |\langle \Psi_{f}(N)|W |\Phi_i(N)\rangle|^2 \nonumber \\ & &\delta(E_f(N)-E_i(N)-\hbar \omega) \label{eq:crossmbxas}\end{aligned}$$ where now $\omega$ is the energy of the incident beam and $W$ is proportional to the dipole transition operator $M$, namely $W=\sqrt{2\pi\hbar^2\omega\alpha_0} M $. 
Similarly to what was done for the case of XPS and using a similar notation, we can define the transform $ I_{XAS}(t) $ of the XAS cross-section as $$\sigma_{XAS}(\omega)=2 {\operatorname{Re}}\int_{0}^{+\infty} dt\, e^{i \omega_{+} t} I_{XAS}(t) \label{eq:Ixast}$$ where $\omega_{+}=\omega+i\eta$, as in Eq. \[eq:Ixpst\]. Under the assumption that [*both $\Phi_i$ and $\Psi_f^{XPS}$ are single determinant states*]{}, Ohtaka and Tanabe [@Ohtaka83; @OhtakaRMP] demonstrated that $$I_{XAS}(t)=I_{XPS}(t) I_{0}(t) \label{eq:OT}$$ where both terms $I_{XPS}(t)$ and $I_{0}(t)$ (see Eq. 4.43 in Ref. [@OhtakaRMP]) include many-body shake-up processes at all orders. Eq. \[eq:OT\] holds for a generic static core-hole potential. A similar relation was found for the less general case of a contact core-hole potential by Nozières and DeDominicis [@Nozieres69] using the linked cluster theorem. If shake-up processes are neglected only in the $I_{0}(t)$ term, then $I_{0}(t)$ reduces to $I_{XAS}^{sp}(t)$, namely the transform of the single-particle XAS cross section calculated in the presence of a static core-hole potential. Thus, it is possible to include many-body effects in the XAS cross section $\sigma_{XAS}(\omega)$, to some extent, by performing the convolution between the measured XPS cross section $\sigma_{XPS}^{exp.}(\epsilon)$, which fully includes many-body shake-up processes, and the calculated single-particle XAS cross section $\sigma_{XAS}^{sp}(\omega)$, namely $$\sigma_{XAS}(\omega)=\int d\epsilon \, \sigma_{XPS}^{exp.}(\epsilon) \sigma_{XAS}^{sp}(\omega-\epsilon) \label{eq:convolution}$$ In our work the single-particle cross section $\sigma_{XAS}^{sp}(\omega)$ is calculated in the framework of density functional theory with inclusion of a static core hole, and $\sigma_{XPS}^{exp.}(\omega)$ is measured. Technical details ----------------- The single-particle XAS cross section $\sigma_{XAS}^{sp}(\omega)$ is calculated in the framework of density functional theory using the implementation of Ref. [@Gougoussis09a] distributed with the Quantum-Espresso [@QE] distribution. The technical details for the NiO calculation are the same as in ref. [@Gougoussis09b]. We used norm-conserving pseudopotentials with inclusion of semicore states. The energy cutoffs used in the calculations were 140 Ryd and 160 Ryd for CuO and CoO, respectively. In the case of CoO, we neglect the tetragonal structural distortion below the 290 K Néel temperature and adopt magnetic and crystal structures similar to those of NiO. The electron-momentum grids for the Brillouin zone integration and the choice of the supercell for the XAS calculation are the same as for the NiO case in ref. [@Gougoussis09b]. The CuO XAS cross-section was calculated in the supercell obtained by doubling the antiferromagnetic cell along the shortest direction. The antiferromagnetic cell is obtained from the non-magnetic one by defining as new lattice vectors ${\bf a}^{\prime}={\bf a}+{\bf c}$, ${\bf b}^{\prime}={\bf b}$, and ${\bf c}^{\prime}={\bf a}-{\bf c}$, where ${\bf a}$, ${\bf b}$, and ${\bf c}$ are the direct lattice vectors. We then use a $3\times 3\times 3$ electron-momentum grid in the supercell to obtain the self-consistent charge density and a $3\times 3\times 3$ electron-momentum grid in the supercell to calculate the XAS cross-section. Finally, we employ the DFT+U approximation in all cases, with $U=7.75$ eV and $U=11.1$ eV for CoO and CuO, respectively. These values of the Hubbard repulsion are calculated from first principles using the method of ref. [@Cococcioni05]. 
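In practice, Eq. \[eq:convolution\] is a one-dimensional convolution of two spectra sampled on a common energy grid. A schematic Python sketch of such a post-processing step is given below; the file names, the grids, the placement of the XPS main line at zero relative energy, and the normalization of the XPS weight to unity are illustrative assumptions rather than a description of the actual scripts used.

```python
# Schematic evaluation of Eq. (eq:convolution); all file names, grids and the
# unit-area normalization of the XPS spectrum are illustrative assumptions.
import numpy as np

# sigma_sp: single-particle XAS cross-section on a uniform photon-energy grid (eV).
# xps:      measured 1s XPS intensity on a uniform relative-energy grid (eV),
#           with the main line at eps = 0 so that satellites sit at eps > 0.
omega, sigma_sp = np.loadtxt("xas_single_particle.dat", unpack=True)  # hypothetical file
eps, xps = np.loadtxt("xps_1s.dat", unpack=True)                      # hypothetical file

d_omega = omega[1] - omega[0]
d_eps = eps[1] - eps[0]
assert np.isclose(d_omega, d_eps), "the discrete convolution assumes a common step"

xps_norm = xps / (xps.sum() * d_eps)      # unit spectral weight keeps the XAS scale

# sigma_XAS(omega) = int d_eps sigma_XPS(eps) * sigma_XAS^sp(omega - eps)
sigma_many_body = np.convolve(sigma_sp, xps_norm, mode="full")[: len(omega)] * d_eps
```

Because the satellite weight sits at positive relative energy, the convolution transfers part of each single-particle peak to higher photon energy, producing the shake-up replicas discussed below.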
Experiment {#sec:experiment} ========== The Ni 1s photoemission spectra of NiO were measured at the HIKE station of the KMC-1 beamline at BESSY [@Gorgoi2009; @Schaefers2007]. The spectra were recorded with a SCIENTA R4000 photoelectron analyzer placed at 90$^\circ$ from the x-ray beam. The incident x-ray beam (9 keV) was monochromatized by a pair of Si(422) crystals providing a $\sim$500 meV energy bandwidth. To avoid charging effects, a 25 nm thick NiO film was grown on a Ag substrate in the presence of oxygen, and capped by 3 nm of MgO. The NiO growth was found to be fully epitaxial, with the NiO(001) direction parallel to Ag(001), as confirmed by the LEED patterns. The sample was positioned at a grazing angle of 89.99$^\circ$ from the incident x-rays in order to reduce the penetration depth of the photons and enhance the photoelectron yield. The XAS spectra of NiO and CoO were borrowed from Refs. [@Vedrinskii] and [@Modrow], respectively. Results {#sec:results} ======= Nickel Oxide ------------ ![Experimental (circles) and fitted (lines) 1s photoemission spectra in NiO[]{data-label="fig:NiOXPS"}](NiO_HAXPES_fit_2.eps){width="0.9\linewidth"} The measured 1s photoemission spectra of NiO are shown in Fig. \[fig:NiOXPS\]. The fit to the data is consistent with a three-peak structure in the 590-610 eV energy region. The results closely resemble Ni 2p$_{3/2}$ photoemission in NiO [@Taguchi; @Parmigiani99] in this energy region. In the literature, the attribution of the different features in Ni 2p$_{3/2}$ XPS is very controversial and has been subject to several reinterpretations. Van Veenendaal and Sawatzky [@Veenendal93] attributed the main feature at high energy to a 2p$^5$3d$^9\underline{L}$ state, where $\underline{L}$ means a hole in the ligand state. The satellite of the main peak (shoulder) was attributed to non-local screening coming from the nearest-neighbour Ni atoms, while the lower energy satellite at $\approx 596$ eV was attributed to a 2p$^6$3d$^{10}$. Recently this was reconsidered in ref. [@Taguchi], where the main feature was attributed to a 2p$^5$3d$^9$ state in which the ligand hole forms a Zhang-Rice-like, k-dispersing bound state [@Kunes]. The shoulder of the main peak is attributed to a 2p$^5$3d$^9$ state and the lowest energy feature to a 2p$^5$3d$^8$ state. Here we show that, regardless of their attribution, the features measured in Ni 1s XPS of NiO are also present in the dipolar Ni K-edge XAS spectrum of NiO. In Fig. \[fig:NiOconv\] we show the measured and calculated XAS cross sections. The single-particle cross section is generally in good agreement with the dipolar part of the measured spectrum, except for the two peaks indicated by the letters F and H, which are missing in the single-particle calculation. In order to determine whether the missing excitations are many-body in nature, and possibly due to multi-determinant or shake-up processes, we then proceed by using Eq. \[eq:convolution\] and obtain new XAS spectra. We first perform the convolution using the complete three-peak structure of the XPS spectra. We find that the convolution of the DFT-calculated XAS cross-section with the photoemission spectra greatly improves the agreement with experiments. In particular, the missing peaks are now present in the spectrum and a better agreement occurs at all energy scales. The F and H peaks are then replicas of the single-particle E and G peaks, respectively, and are many-body in nature. 
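For reference, the three-peak parametrization used as input to the convolution can be sketched along the following lines; the Gaussian line shapes, the constant background, and the starting positions (only the $\approx 596$ eV satellite position is taken from the discussion above) are assumptions of this illustration, not necessarily the model actually used for Fig. \[fig:NiOXPS\].

```python
# Hedged sketch of a three-component fit to 1s XPS data; line shapes, background
# and starting guesses are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, amp, center, width):
    return amp * np.exp(-0.5 * ((E - center) / width) ** 2)

def three_peaks(E, a1, c1, w1, a2, c2, w2, a3, c3, w3, bkg):
    return (gaussian(E, a1, c1, w1) + gaussian(E, a2, c2, w2)
            + gaussian(E, a3, c3, w3) + bkg)

E, I = np.loadtxt("NiO_1s_xps.dat", unpack=True)   # hypothetical data file

# Starting guesses: main line, its shoulder, and the satellite near 596 eV.
p0 = [I.max(), 605.0, 1.0,
      0.5 * I.max(), 602.0, 1.5,
      0.2 * I.max(), 596.0, 2.0,
      I.min()]
popt, pcov = curve_fit(three_peaks, E, I, p0=p0)
print("fitted peak positions (eV):", popt[1], popt[4], popt[7])
```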
We can further test to what extent this interpretation is robust by altering the XPS spectrum before convolution with the theoretical single-particle XAS calculation. We do this for NiO by artificially suppressing the main peak of the XPS spectrum, leaving the shoulder and the satellite. We find that the resulting absorption spectrum (dashed line in Fig. xxx) is in poorer agreement with the experimental spectrum, supporting our interpretation. Cobalt Oxide ------------ Co 1s photoemission data of CoO are not available in the literature. However, 2p$_{3/2}$ and 3s Co XPS data [@ShenCoO; @Parmigiani99] are extremely similar and composed of two main peaks and a small shoulder at low energy visible only in the 3s data. We therefore consider the 3s photoemission data of Ref. [@Parmigiani99], fit them with a two-peak structure, and neglect the very small shoulder, which is invisible in the 2p photoemission data. The results are shown in Fig. \[fig:CoOconv\]. The situation is very similar to NiO, namely the F peak is missing from the single-particle spectrum and the H peak is weak. Convolution with the photoemission data substantially improves the agreement, although the main edge peak is narrower than in the experimental data. Copper Oxide ------------ The copper oxide CuO has a monoclinic crystal structure with symmetry group C2/c. The dipolar part in CuO was measured in refs. [@Bocharov01; @BocharovTh]. We follow the notation of refs. [@Bocharov01; @BocharovTh] and label the configuration of the crystal with respect to the incident beam by the three angles $(\theta, \phi, \psi)$. These angles correspond to the three rotation angles of the goniometer. In particular, when the three angles are zero, the polarization is parallel to the $\theta$ axis of the goniometer and to the $c$-axis of the crystal. At zero angles, the plate holding the sample is orthogonal to the incident beam and parallel to one of the plaquette chains in the crystal (see Ref. [@BocharovTh], Fig. 10, for more details). Given the low symmetry of the crystal, the polarization dependence of the CuO K-edge XAS spectra is very complicated, as can be seen in Fig. \[fig:cuo\_conv\]. The calculated single-particle spectra are in strong disagreement with experiments. Both the peak positions and the polarization dependence of the intensities disagree with the measured data. In order to see whether the disagreement is due to the lack of many-body effects in the XAS cross-section, we consider the convolution with Cu 1s photoemission spectra [@HiroshimaXPS]. Cu 1s photoemission spectra of CuO are composed of two peaks, usually attributed to 3d$^9$ and 3d$^{10}$. Performing the convolution with the calculated single-particle Cu K-edge XAS leads to an impressive improvement. The convolved spectrum is in much better agreement with the experimental data, demonstrating that the Cu K-edge XAS spectrum of CuO is dominated by shake-up many-body processes. Conclusions =========== We have demonstrated that shake-up processes occur in the dipolar K-edge XAS spectra of NiO, CoO, and CuO. As these are prototype correlated transition metal oxides, we expect these excitations to be present in all XAS data of correlated materials. To be more precise, whenever charge-transfer satellites occur in XPS core-hole spectra, shake-up satellites must also occur in the corresponding X-ray absorption edge, at all energy scales. We have also shown that a practical way to include these effects in first-principles calculations is to perform the convolution with the XPS spectrum at the same edge, as suggested by Eq. 
\[eq:convolution\]. Despite the fact that Eq. \[eq:convolution\] was obtained many years ago [@Nozieres69; @Ohtaka83; @OhtakaRMP], we are currently unaware of other works trying to explicitly apply this equation to K-edge XAS by using state-of-the-art calculations. Our work, which fully includes the core-hole attraction and the Hubbard U at the DFT+U level, demonstrates that this approach is feasible and allows, for the first time, the attribution of all peaks in the NiO and CoO dipolar K-edge XAS spectra. In Eq. \[eq:convolution\] we neglected the additional many-body terms [@Ohtaka83; @OhtakaRMP] that are present in $I_0(t)$. These terms seem to be negligible in NiO and CoO, but could explain the remaining discrepancy between theory and experiment in CuO. Further work is required to calculate these many-body correction terms. Acknowledgement =============== M. C. acknowledges fruitful discussions with F. Mauri. Calculations were carried out at the IDRIS supercomputing center (proposal number: 091202). [99]{} F. de Groot and A. Kotani, [*Core Level Spectroscopy of Solids*]{}, Taylor and Francis, 2008. C. Gougoussis, M. Calandra, A. P. Seitsonen, and F. Mauri, Phys. Rev. B [**80**]{}, 075102 (2009). M. Taillefumier, D. Cabaret, A.-M. Flank, and F. Mauri, Phys. Rev. B [**66**]{}, 195107 (2002). C. Hébert, Micron [**38**]{}, 12 (2007). Y. Joly, Phys. Rev. B [**63**]{}, 125120 (2001). C. Gougoussis, M. Calandra, A. Seitsonen, Ch. Brouder, A. Shukla, and F. Mauri, Phys. Rev. B [**79**]{}, 045118 (2009). H. [Å]{}gren and V. Carravetta, Int. J. Quant. Chem. [**42**]{}, 685 (1992). V. Carravetta and H. [Å]{}gren, Computational x-ray spectroscopy, in [*Computational Strategies for Spectroscopy: From Small Molecules to Nano Systems*]{}, edited by V. Barone (Wiley, Hoboken, 2012), pp. 137–205. M. A. van Veenendaal and G. A. Sawatzky, Phys. Rev. Lett. [**70**]{}, 2459 (1993). M. Taguchi, M. Matsunami, Y. Ishida, R. Eguchi, A. Chainani, Y. Takata, M. Yabashi, K. Tamasaku, Y. Nishino, T. Ishikawa, Y. Senba, H. Ohashi and S. Shin, Phys. Rev. Lett. [**100**]{}, 206401 (2008). M. Takahashi and J. I. Igarashi, Phys. Rev. B [**85**]{}, 085128 (2012). J. C. Fuggle, F. U. Hillebrecht, J.-M. Esteva, R. C. Karnatak, O. Gunnarsson, and K. Schönhammer, Phys. Rev. B [**27**]{}, 4637 (1983). Y. Hammoud, J. C. Parlebas, and F. Gauthier, J. Phys. F [**17**]{}, 503 (1987). D. Malterre, Phys. Rev. B [**43**]{}, 1391 (1991). H. Tolentino, M. Medarde, A. Fontaine, F. Baudelet, E. Dartyge, D. Guay and G. Tourillon, Phys. Rev. B [**45**]{}, 8091 (1992). R. Bair and W. Goddard, Phys. Rev. B [**22**]{}, 2767 (1980). N. Kosugi, Y. Tokura, H. Takagi and S. Uchida, Phys. Rev. B [**41**]{}, 131–137 (1990). R. L. Martin and D. A. Shirley, J. Chem. Phys. [**64**]{}, 3685 (1976). B. H. Brandow, Adv. Phys. [**26**]{}, 651 (1977). B. H. Brandow, J. Alloys Compounds [**181**]{}, 377 (1992). N. M. Harrison, V. R. Saunders, R. Dovesi, and W. C. Mackrodt, Phil. Trans. R. Soc. Lond. A [**356**]{}, 75 (1998). N. I. Gidopoulos, Phys. Rev. A [**83**]{}, 040502 (2011). T. A. Tyson, Ph.D. thesis, Stanford University, 1991. J. Chaboy and T. A. Tyson, Phys. Rev. B [**49**]{}, 5869 (1994). C. O. Almbladh and L. Hedin, in [*Handbook of Synchrotron Radiation*]{}, Vol. 1b (North-Holland, Amsterdam, 1983), p. 607. K. Ohtaka and Y. Tanabe, Phys. Rev. B [**28**]{}, 6833 (1983). K. Ohtaka and Y. Tanabe, Rev. Mod. Phys. [**62**]{}, 929 (1990). P. Nozières and C. T. DeDominicis, Phys. Rev. [**178**]{}, 1097 (1969). P. Giannozzi et al., J. Phys.: Condens. 
Matter **21**, 395502 (2009). M. Cococcioni and S. de Gironcoli, Phys. Rev. B [**71**]{}, 035105 (2005). M. Gorgoi, S. Svensson, F. Schäfers, G. Öhrwall, M. Mertin, P. Bressler, O. Karis, H. Siegbahn, A. Sandell, H. Rensmo, W. Doherty, C. Jung, W. Braun, and W. Eberhardt, Nucl. Instrum. Meth. A [**601**]{}, 48 (2009). F. Schaefers, M. Mertin, and M. Gorgoi, Rev. Sci. Instrum. [**78**]{}, 123102 (2007). R. V. Vedrinskii, V. L. Kraizman, A. A. Novakovich, Sh. M. Elyafi, S. Bocharov, Th. Kirchner, and G. Dräger, Phys. Status Solidi B [**226**]{}, 203 (2001). H. Modrow, S. Bucher, J. J. Rehr, and A. L. Ankudinov, Phys. Rev. B [**67**]{}, 035123 (2003). F. Parmigiani and L. Sangaletti, J. Electron Spectrosc. Relat. Phenom. [**98-99**]{}, 287 (1999). J. Kunes, V. I. Anisimov, S. L. Skornyakov, A. V. Lukoyanov, and D. Vollhardt, Phys. Rev. Lett. [**99**]{}, 156404 (2007). Z.-X. Shen, J. W. Allen, P. A. P. Lindberg, D. S. Dessau, B. O. Wells, A. Borg, W. Ellis, J. S. Kang, S.-J. Oh, I. Lindau, and W. E. Spicer, Phys. Rev. B [**42**]{}, 1817 (1990). S. Bocharov, Th. Kirchner, G. Dräger, O. Sipr, and A. Simunek, Phys. Rev. B [**63**]{}, 045104 (2001). S. Bocharov, Winkelabhängige K-Röntgenabsorption und Elektronenstruktur von 3d-Metallverbindungen, PhD thesis, Martin-Luther-Universität Halle-Wittenberg, 2001. See for example the experimental Cu 1s hard X-ray photoemission collected at the Hiroshima Synchrotron Radiation Center, http://www.hsrc.hiroshima-u.ac.jp/cuprates.htm
--- abstract: 'We analyze the classical phase space of the hydrogen atom in crossed magnetic and circularly polarized microwave fields in the high-frequency regime, using the Chirikov resonance overlap criterion and the renormalization map. These methods are used to compute thresholds to large-scale chaos and to ionization. The effect of the magnetic field is a strong stabilization of a set of invariant tori which bound the trajectories and prevent stochastic ionization. In order to ionize, larger amplitudes of the microwave field are necessary in the presence of a magnetic field.' author: - 'C. Chandre$^{1}$, David Farrelly$^{2}$, and T. Uzer$^{1}$' title: 'Thresholds to Chaos and Ionization for the Hydrogen Atom in Rotating Fields' --- Introduction ============ The chaotic ionization of the hydrogen atom in a variety of external fields is a fundamental problem in nonlinear dynamics and atomic physics. Early work, in particular, focused on the ionization mechanism in a linearly polarized (LP) microwave field [@casa87; @koch95]. This problem is noteworthy because it showed the general applicability of the “Chirikov Resonance Overlap Criterion” [@chir79] to real quantum mechanical systems. This empirical criterion predicts the value at which two nearby resonances overlap and large-scale stochasticity occurs. There is no doubt that the Chirikov criterion is generally robust and, therefore, it is no surprise that Chirikov’s criterion has been tried as a way to predict transitions to chaos and ionization dynamics in more complicated circumstances, e.g., for the hydrogen atom in circularly polarized (CP) microwave fields. However, the success of the Chirikov criterion in describing the LP problem stands in contrast with what seems to have been a somewhat mixed performance when applied to the hydrogen atom in rotating microwave fields. Partly for this reason, a good deal of controversy has surrounded the ionization mechanism and also the interpretation of the Chirikov criterion when applied to ionization in rotating fields. In this article we propose to examine the ionization of the hydrogen atom in a circularly polarized (CP) microwave field. Our analysis will use an advanced method, the renormalization map [@koch99; @chan01R], so as to include higher-order resonances beyond the Chirikov approach. Interest in the CP microwave problem began with experiments by the Gallagher group [@fu90], which revealed a strong dependence of the ionization threshold on the degree of polarization. They explained their CP results by proposing that in a rotating frame, ionization proceeds in roughly the same way as for a static field, i.e., a static field has the same effect whether its coordinate system is rotating or not. In a Comment, Nauenberg [@naue90] argued that the ionization mechanism was substantially more complicated and that the effect of rotation on the ionization threshold had to be taken into account. Fully three-dimensional numerical simulations by Griffiths and Farrelly [@grif92] were able to provide quantitative agreement with the experimental results. Griffiths and Farrelly [@grif92] and Wintgen [@wint91] independently proposed similar models for ionization based on the Runge-Lenz vector.\ Subsequent theoretical work can be divided into three main categories: $(i)$ classical simulations [@kapp93], $(ii)$ quantum simulations, and $(iii)$ resonance overlap studies. 
The two main papers on resonance overlap are by Howard [@howa92] who published the first such study of this system, and by Sacha and Zakrzewski [@sach97]. In both cases, the Hamiltonian was written in an appropriate rotating frame, a choice of ‘zero-order’ actions was made and the Chirikov machinery invoked. Somewhat surprisingly, the paper by Sacha and Zakrzewski [@sach97] disagrees in a number of key areas with the results of Howard [@howa92] as well as with some conclusions drawn from numerical studies [@farr95]. In a paper by Brunello [*et al.*]{} [@brun97], it was shown both numerically and analytically that the actual ionization threshold observed in an experiment must take into account the manner in which the field was turned on; essentially, in some experiments electrons are switched during the field “turn on” directly into unbound regions of phase space. That is, the underlying resonance structure of the Hamiltonian is almost irrelevant because ionization is accomplished by the ramp-up of the field. For this reason, the application of resonance overlap criteria in rotating frames can be quite intricate and this provides some explanation for the apparent inadequacy of the Chirikov criterion - if ionization occurs during the field turn-on time then, of course, resonance overlap is irrelevant. In this article, we study the hydrogen atom driven by a CP microwave field together with a magnetic field lying perpendicular to the polarization plane (CP$\times B$). The magnetic field has been introduced to prevent ionization in the plane. This provides the opportunity to eliminate difficulties associated with the turn on of the field, thus opening up the way to a study of resonance overlap in a more controlled manner in the rotating frame. As noted, without the added magnetic field all the electrons may have gone before the resonances have had a chance to overlap. The paper is organized as follows: Section II introduces the Hamiltonian and provides a discussion of resonance overlap in the rotating frame in both the Chirikov and renormalization approaches. In order to obtain qualitative and quantitative results about the dynamics of the hydrogen atom in CP$\times $B fields, we compare numerically, in Sec. III, Chirikov’s resonance overlap criterion with our renormalization group transformation method. The Chirikov method is found to provide good qualitative results and is useful because of its simplicity. The renormalization transformation is used first to check the qualitative features obtained by the resonance overlap criterion, and to obtain more quantitative results about the dynamics by expanding the Chirikov approach.  Conclusions are in Sec. IV. Hydrogen atom in CP$\times$B fields {#sec:mode} =================================== We consider a hydrogen atom perturbed by a microwave field of amplitude $F$ and frequency $\omega_f$, circularly polarized in the orbital plane, and a magnetic field with cyclotron frequency $\omega_c$. The Hamiltonian in atomic units and in the rotating frame of the microwave field is [@Bfrie91] : $$H(p_x,p_y,x,y)=\frac{p_x^2+p_y^2}{2}- \frac{1}{\sqrt{x^2+y^2}}-\Omega(xp_y-y p_x)+F x +\frac{\omega_c^2}{8}(x^2+y^2), \label{eqn:tro}$$ where $\Omega=\omega_f-\omega_c/2$. Following Ref. [@howa92], we rewrite Hamiltonian (\[eqn:tro\]) in the action-angle variables $(J,L,\theta,\psi)$ of the problem with $\omega_c=0$. The angles $\theta$ and $\psi$ are conjugate respectively to the principal action $J$ and to the angular momentum $L$. 
The Hamiltonian becomes : $$\label{eqn:ham} H(J,L,\theta,\psi)=H_0(J,L,\theta)+ FJ^2\sum_{n=-\infty}^{+\infty} V_n(e)\cos (n\theta+\psi),$$ where the integrable Hamiltonian $H_0$ is $$H_0(J,L,\theta)=-\frac{1}{2J^2}-\Omega L +J^4 \frac{\omega_c^2}{8}% \sum_{n,m=-\infty}^{+\infty} V_m V_{m-n} \cos n\theta .$$ The coefficients $V_n$ of the expansion are the following ones : $$\begin{aligned} && V_0(e)=-\frac{3e}{2}, \\ && V_n(e)=\frac{1}{n}\left[ {\cal J}^{\prime}_n(ne)+ \frac{\sqrt{1-e^2}}{e} {\cal J}_n(ne)\right], \, \mbox{ for } n\not= 0,\end{aligned}$$ where ${\cal J}_n$ is the $n$th Bessel function of the first kind and ${\cal J}_n^{\prime}$ its derivative. The parameter $e$ is given by $$e=\left(1-\frac{L^2}{J^2}\right)^{1/2}.$$ The positions $x$ and $y$ are given by the following formulas : $$\begin{aligned} && x=\sum_{n=-\infty}^{+\infty} V_n(e) \cos (n\theta+\psi), \\ && y=\sum_{n=-\infty}^{+\infty} V_n(e) \sin (n\theta+\psi).\end{aligned}$$ The Hamiltonian (\[eqn:ham\]) can be rescaled in order to withdraw the dependence on $\Omega$. We rescale time by a factor $\Omega$ \[ we divide Hamiltonian (\[eqn:ham\]) by $\Omega$\]. We rescale the actions $J$ and $L$ by a factor $\lambda=\Omega^{1/3}$, i.e. we replace $H(J,L,\theta,\psi)$ by $\lambda H(J/\lambda,L/\lambda,\theta,\psi)$. We notice that this rescaling does not modify the eccentricity $e$. The resulting Hamiltonian becomes : $$\label{eqn:hamham} H=H_0+F^{\prime}J^2 \sum_{n=-\infty}^{+\infty} V_n\cos (n\theta+\psi),$$ where $$\label{eqn:sint} H_0=-\frac{1}{2J^2}-L +J^4 \frac{\omega_c^{\prime 2}}{8}% \sum_{n,m=-\infty}^{+\infty} V_m V_{m-n} \cos n\theta .$$ and $F^{\prime}=F\Omega^{-4/3}$ is the rescaled amplitude of the field and $ \omega^{\prime}_c= \omega_c/\Omega$. In what follows we assume that $ \Omega=1$. Furthermore, for simplicity we assume that $e$ is a parameter of the system equal to the initial eccentricity of the orbit in the Keplerian problem ($\omega_c=0$ and $F=0$).\ In this paper, we consider the high-scaled frequency regime for co-rotating orbits : $\Omega/\omega_K >1$ where the Kepler frequency $\omega_K$ is equal to $1/J^3$, i.e., we consider part of phase space corresponding to $0<J<1$. Study of the integrable part of Hamiltonian (\[eqn:ham\]) --------------------------------------------------------- Several key features of the dynamics of Hamiltonian (\[eqn:ham\]) can be understood by looking at the integrable part. We consider the mean value with respect to $\theta$ of the integrable Hamiltonian (\[eqn:sint\]) : $$\label{eqn:Hint} \tilde{H}_0=-\frac{1}{2J^2}-L +J^4 \frac{\omega_c^{2}}{8}\Vert V\Vert ^2,$$ where $\Vert V\Vert^2=\sum_{n=-\infty}^{+\infty} V_n^2$, or another way for considering $\tilde{H_0}$ is to assume that $J^4\frac{\omega_c^2}{8} \sum_{n\not= 0, m} V_mV_{m-n}\cos n\theta$ is part of the perturbation of Hamiltonian (\[eqn:hamham\]). One of the main features of Hamiltonian $\tilde{H_0}$ is that it does not satisfy the standard twist condition for $\omega_c\not= 0$. Since the Hessian of this Hamiltonian is $$\frac{\partial^2 \tilde{H_0}}{\partial J^2}=\frac{3}{J^4}\left(\frac{1}{2} J^6\omega_c^2 \Vert V\Vert ^2-1\right),$$ the phase space is divided into two main parts : a positive twist region where $ \partial^2\tilde{H_0}/\partial J^2>0$ for $J\geq (\sqrt{2}/\omega_c\Vert V\Vert)^{1/3}$, and a negative one where $\partial^2 \tilde{H_0}/\partial J^2<0$ for $J \leq (\sqrt{2}/\omega_c\Vert V\Vert)^{1/3}$. 
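As a quick numerical illustration (not part of the original analysis), the norm $\Vert V\Vert^2$ entering these expressions can be evaluated directly from the coefficients $V_n(e)$ defined above, which also gives the radius at which the Hessian changes sign; the truncation $n_{\max}=200$ and the values $e=1$, $\omega_c=0.3$ are illustrative choices, the latter two corresponding to the numerical example of Sec. \[sec:resu\].

```python
# Sketch of the twist structure of \tilde{H}_0: ||V||^2 from the coefficients
# V_n(e) (0 < e <= 1) and the radius where d^2 \tilde{H}_0 / dJ^2 changes sign.
# Parameters (e = 1, omega_c = 0.3, n_max = 200) are illustrative only.
import numpy as np
from scipy.special import jv, jvp

def V_norm2(e, n_max=200):
    """||V||^2 = sum_n V_n(e)^2. For integer n the negative-n coefficients
    follow from the reflection identities J_{-n}(-x) = J_n(x) and
    J'_{-n}(-x) = -J'_n(x), i.e. V_{-n} = (1/n)[J'_n(ne) - sqrt(1-e^2)/e J_n(ne)]."""
    n = np.arange(1, n_max + 1)
    fac = np.sqrt(1.0 - e**2) / e
    Vp = (jvp(n, n * e) + fac * jv(n, n * e)) / n     # V_{+n}
    Vm = (jvp(n, n * e) - fac * jv(n, n * e)) / n     # V_{-n}
    return (1.5 * e) ** 2 + np.sum(Vp**2) + np.sum(Vm**2)   # V_0 = -3e/2

e, omega_c = 1.0, 0.3
V2 = V_norm2(e)
J_boundary = (np.sqrt(2.0) / (omega_c * np.sqrt(V2))) ** (1.0 / 3.0)

def hessian(J):   # d^2 \tilde{H}_0 / dJ^2
    return 3.0 / J**4 * (0.5 * omega_c**2 * V2 * J**6 - 1.0)

print(f"||V||^2 = {V2:.3f}, Hessian changes sign at J = {J_boundary:.3f}")
print("Hessian just below / just above:",
      hessian(0.9 * J_boundary), hessian(1.1 * J_boundary))
```

For $e=1$ this gives $\Vert V\Vert^2\simeq 2.5$ and a sign change near $J\simeq 1.44$.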
Both regions are separated by a twistless region ($\partial^2\tilde{H_0}/\partial J^2=0$). We notice that the singularity at $J=0$ is located inside the negative twist region, that the energy in the negative twist region is negative, and that the positive twist and twistless regions do not exist in the absence of magnetic field. Furthermore, the size of the negative twist region decreases like $\omega_c^{-1/3}$ as one increases the magnetic field $\omega_c$, i.e., it shrinks to the singularity of the Hamiltonian ($J=0$).\ The phase space of $\tilde{H_0}$, as well as the one of Hamiltonian (\[eqn:sint\]), is foliated by invariant tori, the main difference being that the invariant tori for $\tilde{H_0}$ are flat in these coordinates. We consider a motion with frequency $\omega$, i.e. such that the dynamics is $\theta(t)=\omega t +\theta(0) \mod 2\pi$ and $J(t)=J(0)$. The associated invariant torus is located at $J_{\omega}$ such that the frequency $\frac{\partial \tilde{H}_0}{ \partial J}$ at $J=J_{\omega}$ is equal to $\omega$. The equation determining $J_{\omega}$ is : $$\frac{\omega_c^2}{2} \Vert V \Vert^2 J_{\omega}^6 -\omega J^3_{\omega}+1=0.$$ There are two real positive solutions of this equation : $$J_{\omega}^{\pm}=\left(\frac{\omega\pm \sqrt{\omega^2-2\omega_c^2\Vert V\Vert ^2}}{\omega_c^2\Vert V\Vert^2}\right)^{1/3}.$$ The condition of existence of an invariant torus with frequency $\omega$ for $\tilde{H_0}$ is $\omega \geq \sqrt{2} \omega_c \Vert V\Vert$. There are two invariant tori with frequency $\omega$: one located at $J_\omega^-$ in the negative twist region is a continuation of the torus with frequency $\omega$ in the absence of magnetic field since $\lim_{\omega_c\to 0} J_{\omega}^- =\omega^{-1/3}$; the other torus located at $J_\omega^+$ in the positive twist region is created far from the singularity $J=0$ as soon as the field is non-zero. Figure \[fig:Joms\] depicts the position of these tori. We notice that as soon as $\omega_c\not= 0$ there is creation of a set of invariant tori far from $J=0$ in the positive twist region.If we increase $\omega_c$, the position of the positive twist torus decreases whereas the position of the negative twist torus increases. Both tori collide at $\omega_c=\omega/\sqrt{2}\Vert V\Vert$ to a twistless invariant torus (of the same frequency $\omega$) located at $ (\omega/2)^{-1/3}$. The approximate location of this twistless torus as a function of $\omega_c$ (for fixed parameter $e$) is plotted in Fig. \[fig:Joms\]. As one increases $\omega_c$, a large portion of the invariant tori with $\omega\in [0,1]$ disappears. [*Remark : First Order Delaunay normalization*]{} [@lanc97] Averaging Hamiltonian (\[eqn:hamham\]) over $\theta$ gives $$\langle H\rangle_{\theta} = -\frac{1}{2J^2}-L +J^4 \frac{\omega_c^2}{8} \Vert V\Vert^2 -\frac{3e}{2}J^2 F\cos\psi.$$ Using the expansion of $\Vert V\Vert^2$ to the second order of $e$, $\Vert V\Vert^2\approx 1+\frac{3e^2}{2}$, and the fact that the previous Hamiltonian does not depend on $\theta$ (thus $J$ is constant), the Hamiltonian reduces to $${\cal K}=-L+\frac{3\omega_c^2}{16}e^2J^4-\frac{3}{2}eJ^2F\cos\psi,$$ which is the Hamiltonian studied in Ref. [@lanc97] to find bifurcation of equilibrium points. Primary main resonances and Chirikov resonance overlap {#sec:chir} ------------------------------------------------------ The positive and negative twist regions have their own sets of primary resonances given by Hamiltonian (\[eqn:hamham\]). 
The approximate locations of these primary resonances $m$:1, denoted $J_m^{\pm}$, are obtained by the condition $m\dot{\theta}+\dot{\psi}\approx 0$. There are two such resonances located approximately at $$\label{eqn:posres} J^{\pm}_m=\left( \frac{1\pm \sqrt{1-2m^2\omega_c^2\Vert V\Vert^2}}{ m\omega_c^2\Vert V \Vert^2}\right)^{1/3}.$$ The resonance located at $J_m^-$ is the continuation of the resonance in the case $\omega_c=0$. The condition of existence of these resonances is $m\sqrt{ 2}\omega_c\Vert V\Vert<1$. Similarly to collisions of invariant tori, collisions of periodic orbits occur when increasing $\omega_c$. Figure \[fig:omse\] depicts the different domains of existence of real $J_m$ for first primary resonances $(m=1,\ldots,5)$ in the plane of parameters $e$-$ \omega_c$. We notice that the most relevant parameter in this problem is the magnetic field. The variations of the dynamics induced by the eccentricity $e$ are smooth. Between two consecutive resonances $m$:1 and $m$+1:1 (if they exist), regular and chaotic motions occur for a typical value of the field $F$. In order to estimate the chaos threshold between these resonances, i.e. the value for which there is no longer any rotational invariant torus acting like a barrier in phase space, Chirikov overlap criterion provides an upper bound but usually quite far from realistic values (obtained by numerical integration for instance). For quantitatively accurate thresholds, a modified criterion is applied : the 2/3-rule criterion. In order to compute the resonance overlap value of the field $F$ between $m$:1 and $m$+1:1 primary resonances, we follow the procedure described in Ref. [@meer82]. First, we change the frame to a rotating one at the phase velocity of resonance $m$:1. We apply the following canonical change of variables : $(\boldsymbol{A}^{\prime},\boldsymbol{\varphi}^{\prime})=( \tilde{N}^{-1}\boldsymbol{A},N\boldsymbol{\varphi})$ where $\u{A}=(J,L)$ and $\u{\varphi}=(\theta,\psi)$ and $$N=\left( \begin{array}{cc} m & 1 \\ 0 & 1 \end{array} \right),$$ and $\tilde{N}$ denotes the transposed matrix of $N$. Hamiltonian (\[eqn:hamham\]) is mapped into $$\begin{aligned} H=&&-\frac{1}{2m^2J^{\prime 2}}-J^{\prime}-L^{\prime}+\frac{\omega_c^2}{8}m^4J^{\prime 4}\sum_{n,n^{\prime}} V_{n^{\prime}}V_{n^{\prime}-n} \cos[n(\theta'-\psi')/m] \\ &&+Fm^2J^{\prime 2}\sum_n V_n\cos[(n\theta'+(m-n)\psi')/m]. \end{aligned}$$ By averaging over the fast variable $\psi'$, the Hamiltonian becomes $$H=-\frac{1}{2m^2J^{\prime 2}}-J^{\prime}+\frac{\omega_c^2}{8}m^4 J^{\prime 4}\Vert V\Vert^2+Fm^2 V_m J^{\prime 2}\cos \theta'.$$ The resulting Hamiltonian is integrable and one can compute the width of the resonance $m$:1 following Ref. [@meer82]. We expand the previous Hamiltonian around $J'_m=J_m/m$ and keep only the quadratic part in $\Delta J'=J'-J'_m$ and the constant term in the action $\Delta J'$ proportional to $\cos \theta'$ : $$H=-\frac{3m^2}{2J_m^4}\left(1-\frac{1}{2}\omega_c^2\Vert V\Vert^2 J_m^6\right) \Delta J^{\prime 2} +FJ_m^2V_m \cos \theta'.$$ We rescale energy by a factor $-3m^2J_m^{-4}(1-\omega_c^2\Vert V\Vert^2J_m^6/2)$ : $$H=\frac{1}{2}\Delta J^{\prime 2} -\frac{FJ_m^6V_m}{3m^2\left(1-\frac{1}{2}\omega_c^2\Vert V\Vert^2 J_m^6\right)}\cos \theta'.$$ Depending on the positive or negative twist region, this rescaling is positive or negative respectively. 
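Before turning to the resonance widths, the locations given by Eq. (\[eqn:posres\]) are easy to tabulate numerically. The short sketch below is illustrative only; it reuses $\Vert V\Vert^2\simeq 2.5$ (the $e=1$ value obtained in the previous sketch) together with an arbitrary $\omega_c=0.1$.

```python
# Positions of the m:1 primary resonances from Eq. (eqn:posres); the inputs
# (||V||^2 ~ 2.5 for e = 1, omega_c = 0.1) are illustrative only.
import numpy as np

def resonance_pair(m, omega_c, V2):
    """Return (J_m^-, J_m^+), or None when m*sqrt(2)*omega_c*sqrt(V2) > 1."""
    disc = 1.0 - 2.0 * m**2 * omega_c**2 * V2
    if disc < 0.0:
        return None                 # the pair has collided and disappeared
    root = np.sqrt(disc)
    denom = m * omega_c**2 * V2
    return ((1.0 - root) / denom) ** (1.0 / 3.0), ((1.0 + root) / denom) ** (1.0 / 3.0)

V2, omega_c = 2.5, 0.1
for m in range(1, 6):
    pair = resonance_pair(m, omega_c, V2)
    if pair is None:
        print(f"{m}:1 resonances do not exist for these parameters")
    else:
        print(f"{m}:1 at J = {pair[0]:.3f} (negative twist) and {pair[1]:.3f} (positive twist)")
```

As $\omega_c\to 0$ the negative-twist position tends to $m^{1/3}$, the familiar CP-field value, and increasing $\omega_c$ pushes each pair together until it disappears; the widths of these resonances are computed next.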
The width of the resonance $m$:1 in the variables $\Delta J= m\Delta J'$ is given by : $$\Delta_m=4 J_m^3\sqrt{\frac{F V_m(e)}{3}}\left| 2-\frac{J_m^3}{m}\right|^{-1/2},$$ since $\omega_c^2\Vert V\Vert^2 J_m^6/2=-1+J_m^3/m$. The 2/3-rule criterion for the critical threshold for the overlap between resonance $m$:1 and $m$+1:1 is reached when the distance between two neighboring primary resonances is equal (up to a factor $2/3$) to the sum of the half-widths of these resonances : $$\frac{2}{3}\vert J_{m+1}-J_m\vert =\frac{1}{2}(\Delta_{m}+\Delta_{m+1}).$$ The critical threshold is given by : $$\label{eqn:critchir} F_m(e,\omega_c)=\frac{(J_{m+1}-J_m)^2}{3\left( J_m^3\sqrt{V_m}\left| 2-\frac{J_m^3}{m}\right|^{-1/2}+J_{m+1}^3 \sqrt{V_{m+1}}\left| 2-\frac{J_{m+1}^3}{m+1}\right|^{-1/2} \right)^2},$$ where $J_m$ stands for either $J_m^+$ or $J_m^-$ given by Eq. (\[eqn:posres\]). Therefore we obtain two critical couplings : one in the positive twist region, $F_m^+(e,\omega_c)$, and one in the negative twist region, $F_m^-(e,\omega_c)$. We notice that for $\omega_c=0$, since $J_m^-=m^{1/3}$, we obtain the formula of Ref. [@sach97] for the chaos threshold in the CP problem. For $\omega_c=1/[\sqrt{2}\Vert V\Vert (m+1)]$, the threshold $F_m(e,\omega_c)$ vanishes. This case corresponds to the collision of the resonances $m$+1:1 (the positive and negative twist ones). Therefore, the Chirikov criterion is valid only for $\omega\leq 1/[\sqrt{2}\Vert V\Vert (m+1)]$, i.e. when the two primary resonances $m$:1 and $m$+1:1 exist in the system.\ We use this criterion in order to study the stability in the different regions of phase space as a function of the magnetic field $\omega_c$ (for small values of the field $\omega_c$) and the eccentricity of the initial orbit $e$. However, since this criterion is purely empirical, we use another method to validate or refine the results : we use the renormalization method which has proved to be a very powerful and accurate method for determining chaos thresholds in this type of models [@chan01R; @chan01a; @chan02a; @chan00d]. We compare the results given by both methods in the region where the criterion applies and we use the renormalization map to compute chaos thresholds when Eq. (\[eqn:critchir\]) does not apply (when one of the main primary resonance $m$:1 has disappeared). Renormalization method {#sec:reno} ---------------------- The Chirikov resonance overlap criterion gives us a value for the chaos threshold between two neighboring primary resonances. This value aims at approximating the value of the field $F$ for which there is no longer any barrier in phase space, and for which large-scale diffusion of trajectories occur between these resonances. The renormalization method gives a more local and more accurate information. This method computes the threshold of break-up of an invariant torus with a given frequency $\omega$. Then by varying $\omega$, we obtain the global information on the transition to chaos. ### Expansion of the Hamiltonian around a torus In order to apply the renormalization transformation as defined in Refs. [@chan99c; @chan00a] for a given torus with frequency $\omega$, we expand Hamiltonian (\[eqn:ham\]) in Taylor series in the action $J$ around $ J=J_{\omega}$. $$-\frac{1}{2 J^2}=\sum_{k=0}^{+\infty} (-1)^{k+1}\frac{k+1}{2 J_{\omega}^{k+2} } \Delta J ^k,$$ where $\Delta J = J-J_{\omega}$. The meanvalue of the quadratic term in $ \Delta J$ of $H_0$ is equal to $\frac{3(\omega J_{\omega}^3-2)}{2 J_{\omega}^4} \Delta J ^2$. 
We rescale the action variables such that this quadratic term is equal to 1/2, i.e. we replace $H(\Delta J,L,\theta,\psi)$ by $\lambda H(\Delta J/\lambda,L/\lambda,\theta,\psi)$ with $ \lambda=3(\omega J_{\omega}^3-2)/J_{\omega}^4$. We notice that this rescaling can be done except for two cases $J_{\omega}=(\omega/2)^{-1/3}$ which is the twistless case, and for $J_{\omega}=0$ which is the singularity. Therefore, as it is defined in Refs. [@chan01R; @chan99c; @chan00a], the renormalization cannot be applied in the twistless region. The Hamiltonian (\[eqn:ham\]) becomes : $$\label{eqn:hamresc} H_F=\omega \Delta J -L +\sum_{k\geq 2}^{+\infty} H_{k,0} \Delta J^k +\sum_{k=0}^4 \sum_{n >0} H_{k,n} \Delta J^k \cos n\theta +F\sum_{k=0}^2 \sum_{n=-\infty}^{+\infty} V_{k,n}\Delta J^k \cos (n\theta+\psi),$$ where $$\begin{aligned} && H_{2,0}=\frac{1}{2}, \\ && H_{3,0}=\frac{J_{\omega}^3(\omega J_{\omega}^3+1)}{9(2-\omega J_{\omega}^3)^2}, \\ && H_{4,0}=\frac{J_{\omega}^6(11-\omega J_{\omega}^3)}{108(2-\omega J_{\omega}^3)^3}, \\ && H_{k,0}=\frac{(k+1)J_{\omega}^{3k-6}}{2 \cdot 3^{k-1}(2-\omega J_{\omega}^3)^{k-1}} \qquad \mbox{ for } k\geq 5, \\ && H_{0,n}=-\frac{3}{4}\omega_c^2(2-\omega J_{\omega}^3)\sum_{m=-\infty}^{+\infty} V_m V_{m-n}, \\ && H_{1,n}=\omega_c^2 J_{\omega}^3 \sum_{m=-\infty}^{+\infty} V_m V_{m-n}, \\ && H_{2,n}=-\frac{\omega J_{\omega}^3 -1}{2-\omega J_{\omega}^3} \cdot \frac{ \sum_{m=-\infty}^{+\infty} V_m V_{m-n}}{\Vert V \Vert ^2}, \\ && H_{3,n}=\frac{2J_{\omega}^3 (\omega J_{\omega}^3 -1)}{9(2-\omega J_{\omega}^3)^2} \cdot \frac{ \sum_{m=-\infty}^{+\infty} V_m V_{m-n}}{\Vert V \Vert ^2}, \\ && H_{4,n}=-\frac{J_{\omega}^6 (\omega J_{\omega}^3 -1)}{54(2-\omega J_{\omega}^3)^3} \cdot \frac{ \sum_{m=-\infty}^{+\infty} V_m V_{m-n}}{\Vert V \Vert ^2}, \\ && V_{0,n}=-3 V_n \frac{2-\omega J_{\omega}^3}{J_{\omega}^2}, \\ && V_{1,n}=2 J_{\omega} V_n, \\ && V_{2,n}=- V_n \frac{J_{\omega}^4}{3(2-\omega J_{\omega}^3)},\end{aligned}$$ for $n\in {\Bbb Z}$. The resulting Hamiltonian is of the form : $$\label{eqn:form} H_F(\boldsymbol{A},\boldsymbol{\varphi})=\boldsymbol{\omega}\cdot \boldsymbol{A} + V(\boldsymbol{\Omega}\cdot\boldsymbol{A}, \boldsymbol{\varphi}),$$ where $\boldsymbol{\omega}=(\omega,-1)$ and $\boldsymbol{\Omega}=(1,0)$, and the coordinates are $\boldsymbol{A}=(\Delta J,L)$ and $\boldsymbol{\varphi} =(\theta,\psi)$. For small $F$, the Kolmogorov-Arnold-Moser (KAM) theorem states that if $\omega$ satisfies a Diophantine condition, the invariant torus with frequency $ \omega $ will persist. In fact, the picture that emerges from numerical simulations is that if $F$ (in absolute value) is smaller than some critical threshold $F_c(\omega)$ then Hamiltonian (\[eqn:ham\]) \[or equivalently (\[eqn:hamresc\])\] has an invariant torus with frequency $\omega$. If $F$ is larger than this critical threshold, the torus is broken. In order to compute numerically the critical function $\omega\mapsto F_c(\omega)$, we use the renormalization method described briefly below. ### Renormalization method {#renormalization-method} Without restriction (up to a rescaling of time), we assume that $\omega\in [0,1]$. 
The renormalization relies upon the continued fraction expansion of the frequency $\omega$ of the torus: $$\omega=\frac{1}{a_0+\frac{1}{a_1+\cdots}} \equiv [a_0,a_1,\ldots].$$ This transformation will act within a space of Hamiltonians $H$ of the form $$\label{eqn:HRG} H(\u{A},\u{\varphi})=\u{\omega}\cdot\u{A}+V(\u{\Omega} \cdot\u{A},\u{\varphi}),$$ where $\u{\Omega}=(1,\alpha)$ is a vector not parallel to the frequency vector $\u{\omega}=(\omega,-1)$. We assume that Hamiltonian (\[eqn:HRG\]) satisfies a non-degeneracy condition : if we expand $V$ into $$V(\u{\Omega}\cdot\u{A},\u{\varphi})=\sum_{\scriptstyle \u{\nu} \in {\mathbb{Z}}^2 \atop k\geq 0} V^{(k)}_{\u{\nu}} (\u{\Omega}\cdot\u{A})^k e^{i\u{\nu}\cdot\u{\varphi}},$$ we assume that $V_{\boldsymbol{0}}^{(2)}$ is non-zero. This restriction means that we cannot explore the twistless region by the present renormalization transformation. The transformation contains essentially two parts : a rescaling and an elimination procedure [@koch99]. [**(1)**]{} *Rescaling* : The first part of the transformation is composed of a shift of the resonances, a rescaling of time and a rescaling of the actions. It acts on a Hamiltonian $H$ as $H'=H\circ {\mathcal T}$ (see Ref. [@chan00a] for details) : $$H'(\u{A},\u{\varphi})=\lambda\omega^{-1} H\left(-\frac{1}{\lambda}N_{a}\u{A}, -N_{a}^{-1}\u{\varphi}\right),$$ with $$\label{eqn:rescaling} \lambda=2\omega^{-1} (a+\alpha)^2 V^{(2)}_{\u{0}},$$ and $$\label{eq:matrix2d} N_{a}=\left(\begin{array}{cc} a & 1\\ 1 & 0 \end{array}\right),$$ and $a$ is the integer part of $\omega^{-1}$ (the first entry in the continued fraction expansion). This change of coordinates is a generalized (far from identity) canonical transformation and the rescaling $\lambda$ is chosen to ensure a normalization condition (the quadratic term in the actions has a mean value equal to 1/2). For $H$ given by Eq. (\[eqn:HRG\]), this expression becomes $$H'(\u{A},\u{\varphi})=\u{\omega}'\cdot\u{A}+ \sum_{\u{\nu},k} V^{\prime (k)}_{\u{\nu}} (\u{\Omega}'\cdot\u{A})^k e^{i\u{\nu}\cdot\u{\varphi}},$$ where $$\begin{aligned} \label{eq:renexp} && \u{\omega}'=(\omega',-1) \; \mbox{ with } \omega'=\omega^{-1}-a,\\ && \u{\Omega}'=(1,\alpha') \; \mbox{ with } \alpha'=1/(a+\alpha),\\ && V^{\prime (k)}_{\u{\nu}}=r_k V_{-N\u{\nu}}^{(k)} \; \mbox{ with } r_k=(-1)^k 2^{1-k}\omega^{k-2}(a+\alpha)^{2-k} \left( V^{(2)}_{\u{0}}\right) ^{1-k}.\end{aligned}$$ We notice that the frequency of the torus is changed according to the Gauss map : $$\label{eqn:gauss} \omega \mapsto \omega'=\omega^{-1}-\left[ \omega^{-1}\right],$$ where $\left[\omega^{-1} \right]$ denotes the integer part of $\omega^{-1}$. Expressed in terms of the continued fraction expansion, it corresponds to a shift to the left of the entries $$\omega=[a_0,a_1,a_2,\ldots]\mapsto \omega'=[a_1,a_2,a_3,\ldots].$$ [**(2)**]{} *Elimination* : The second step is a (connected to identity) canonical transformation ${\mathcal U}$ that eliminates the non-resonant modes of the perturbation in $H'$. Following Ref. [@chan00a], we consider the set $I^- \subset {\mathbb{Z}}^2$ of non-resonant modes as the set of integer vectors $\u{\nu}=(\nu_1,\nu_2)$ such that $|\nu_2|> |\nu_1|$. The canonical transformation ${\mathcal U}$ is such that $H''=H'\circ {\mathcal U}$ does not have any non-resonant mode, i.e.
it is defined by the following equation : $$\label{eq:proj} {\mathbb{I}}^-(H'\circ {\mathcal U})=0,$$ where ${\mathbb{I}}^-$ is the projection operator on the non-resonant modes; it acts on a Hamiltonian (\[eqn:HRG\]) as : $${\mathbb{I}}^- H=\sum_{\scriptstyle \u{\nu}\in I^- \atop k\geq 0} V_{\u{\nu}}^{(k)} (\u{\Omega}\cdot\u{A})^k e^{i\u{\nu}\cdot\u{\varphi}}.$$ We solve Eq. (\[eq:proj\]) by a Newton method following Refs. [@chan98b; @chan00a]. Thus the renormalization acts on $H$ for a torus with frequency $\omega$ as $H''={\mathcal R}(H)=H\circ {\mathcal T} \circ{\mathcal U}$ for a torus with frequency $\omega'$.\ The critical thresholds are obtained by iterating the renormalization transformation $\mathcal{R}$. The main conjecture of the renormalization approach is that if the torus exists for a given Hamiltonian $H$, the iterates ${\mathcal{R}}^n H$ of the renormalization map acting on $H$ converge to some integrable Hamiltonian $H_0$. This conjecture is supported by analytical results in the perturbative regime [@koch99; @koch99b], and by numerical results [@chan00d; @chan01a; @chan01R]. For a one-parameter family of Hamiltonians $\{H_F\}$, the critical amplitude of the perturbation $F_c(\omega)$ is determined by the following conditions : $$\begin{aligned} && {\mathcal R}^nH_{F} \underset{n\to \infty}{\to} H_0(\u{A}) =\u{\omega}\cdot\u{A}+ \frac{1}{2}(\u{\Omega}\cdot\u{A})^2 \quad \mbox{ for } F<F_c(\omega), \label{eqn:Adef1}\\ && {\mathcal R}^nH_{F} \underset{ n\to \infty}{\to} \infty \quad \mbox{ for } F>F_c(\omega).\label{eqn:Adef2} \end{aligned}$$ The critical threshold $F_c(\omega)$ is determined by a bisection procedure.\ In order to obtain the value $F_m$ of chaos threshold between resonances $m$:1 ans $m$+1:1, we vary $\omega$ between $1/(m+1)$ and $1/m$. The critical threshold is given by $F_m=\max_{\omega\in [1/(m+1),1/m]} F_c(\omega)$. Numerical computation of chaos thresholds {#sec:resu} ========================================= Chaos thresholds in the negative twist region {#sec:neg} --------------------------------------------- Figure \[fig:fomn\] represents a typical plot of the critical function $\omega\mapsto F_c^-(\omega)$ in the negative twist region obtained by the renormalization method for $e=1$ and $\omega_c=0.3$. This function vanishes at (at least) all rational values of the frequency $\omega$ since all tori with rational frequency are broken as soon as the field is turned on. The condition of existence of a torus with frequency $\omega$ obtained from the integrable case (\[eqn:sint\]) is $\omega > \sqrt{2}\omega_c\Vert V\Vert$, which is in that case $\omega > 0.67$. We expect the tori with frequency belonging to \[0,0.67\] to be broken by collision with invariant tori in the positive twist region before this value of the field $\omega_c$, as it is the case in Hamiltonian (\[eqn:sint\]). From Fig. \[fig:fomn\], we deduce that for $F>F_c=0.023$, no invariant tori are left in this region of phase space. The frequency of the last invariant torus is equal to $(\gamma+1)/(2\gamma+1)\approx 0.7236$ where $\gamma=(\sqrt{5}-1)/2$. This value of the frequency of the last invariant torus varies with the parameters $e$ and $\omega_c$ (for $\omega_c=0$, see Fig. 2 of Ref. [@chan02a]). 
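As a purely illustrative aside (not part of the original computation), the Gauss-map shift (\[eqn:gauss\]) can be applied directly to the last-torus frequency $(\gamma+1)/(2\gamma+1)$ quoted above; the sketch below, which assumes SymPy for exact arithmetic, generates the continued fraction entries and already exhibits the tail of $1$'s discussed in the next paragraph.

```python
# Continued fraction entries of omega = (gamma+1)/(2*gamma+1) via the Gauss map
# omega -> 1/omega - [1/omega]; the frequency is kept exact with SymPy.
import sympy as sp

gamma = (sp.sqrt(5) - 1) / 2
w = sp.simplify((gamma + 1) / (2 * gamma + 1))   # ~ 0.7236

entries = []
for _ in range(10):
    inv = sp.simplify(1 / w)
    a = int(sp.floor(inv))       # continued fraction entry, the integer part of 1/omega
    entries.append(a)
    w = sp.simplify(inv - a)     # Gauss map: shift the expansion to the left

print(entries)   # expected: [1, 2, 1, 1, 1, ...] -- a noble frequency
```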
The main feature of this frequency is that it remains noble as the parameters $e$ and $\omega_c$ are varied, in the sense that the tail of the continued fraction expansion of this frequency is a sequence of 1's, or equivalently this frequency can be written as $(a\gamma+b)/(c\gamma+d)$ where $a$, $b$, $c$, $d$ are integers such that $ad-bc=\pm 1$.\ We compute chaos thresholds between two successive primary resonances by two methods : the 2/3-rule criterion and the renormalization map. We compute the critical value of the field $F$ for which there is no longer any invariant torus between resonances 1:1 and 2:1, located at $ J_{1}^{-}$ and $J_{2}^{-}$ respectively, as a function of the magnetic field $\omega_c$ and the parameter $e$ in the negative twist region, i.e. $F_1^-(e,\omega_c)=\max_{\omega\in [1/2,1]} F_c^-(\omega,e,\omega_c)$. Figure \[fig:renchirn\] represents two computations : for $e=0.5$ and for $e=1$. For $\omega_c=0$, we obtain the values that have been obtained in Ref. [@chan02a]. The 2/3-rule criterion gives a very good approximation for low values of the field $\omega_c$ (typically for $\omega_c\in [0,0.15]$). For small $\omega_c$, we expand the critical function $F_1^-(\omega_c)$ given by Eq. (\[eqn:critchir\]). The corrections to $ F_c^-$ due to the magnetic field are of order $\omega_c^2$ since $J_n^-=n^{1/3} +O(\omega_c^2).$ We notice that at some value of $\omega_c$, the curves $F_1^-(e,\omega_c)$ given by the Chirikov criterion decrease sharply to zero. We have seen that the approximate condition of existence of the resonance $m$:1, derived from the integrable case (\[eqn:Hint\]), is $\omega_c < 1/(\sqrt{2}m \Vert V \Vert)$. For $e=1$ and $m=2$, this condition is $\omega_c<0.23$ (and for $e=0.5$ this condition becomes $\omega_c<0.30$). This means that the criterion derived in Sec. \[sec:chir\] is inapplicable for $\omega_c$ larger than these values ($J_2^-$ becomes complex). One way of extending this criterion for larger values of $\omega_c$ would be to consider the overlap of higher order resonances. From the renormalization map results, we see that after the 2:1 resonance disappears, there is an increase of stability which can be explained by the fact that the invariant tori in this region of phase space are no longer deformed by this neighboring resonance. We expect this region to be fully broken [*before*]{} 1:1 disappears (i.e. all the region between 1:1 and 2:1 primary resonances has disappeared), and this happens at $\omega_c\approx 0.45$ for $e=1$, and at $\omega_c\approx 0.6$ for $e=0.5$. These results are consistent with the sharp decreases of $F_c$ found by renormalization on Fig. \[fig:renchirn\]. Furthermore, we investigate the chaos thresholds for the different primary resonances $m$:1 by the Chirikov criterion. Using Eq. (\[eqn:critchir\]), we compute the resonance overlap value of $m$:1 and $m$+1:1 for increasing $m=1,\ldots,10$. For a fixed eccentricity $e=1$, Fig. \[fig:Epschirn\] shows the resonance overlap values as a function of the field $\omega_c$. From this figure it emerges that the dynamics in that region is mainly insensitive to the magnetic field for low values of this field. The sharp decreases of the curves $F_m^-$ are due to the disappearance of one of the primary resonances from which the resonance overlap is computed. At these values and for larger $\omega_c$, we expect stability enhancement due to the disappearance of this primary resonance.
For $\omega_c\geq 1/(\sqrt{2}m \Vert V\Vert)$, the critical curve $F_m^-$ sharply decreases to zero due to the disappearance of all the region between primary resonances $m$:1 and $m$+1:1 into the twistless region. For instance, for $m=3$, the critical curve $F_3^-(\omega_c)$ increases slowly with $\omega_c$ for $\omega_c\in [0,0.11]$, then for $\omega_c\in [0.11,0.15]$, we expect stability enhancement by the magnetic field, and for $\omega_c$ larger than 0.15, the region of phase space between 3:1 and 4:1 resonances disappears into the twistless region and $F_3^-(\omega_c)$ vanishes. The regions between resonances of low order ($m$ small) seem to be more stable for $e=1$. However, this feature varies with the parameter $e$ as has been observed in Ref. [@chan02a]. For $e\in [0,0.8]$ and for $\omega_c=0$, the regions near $m$:1 with large $m$ become very stable (and this stabilization is increased by the field for low values of $\omega_c$) and the diffusion of the trajectories is very limited. Therefore, the orbits with eccentricity close to one are the easiest to diffuse in a broad range of phase space. In particular, these orbits can ionize more easily than medium eccentricity ones. This feature is expected to hold for small $\omega_c$ (typically for $\omega_c\leq 0.05$) even if the diffusion is now limited in the negative twist region since we expect the twistless region to be very stable and act as a barrier in phase space [@cast96]. Figure \[fig:fce\] displays the values of $F_m^-(e)$ for $e\in[0,1]$, and for two values of $\omega_c$ : $\omega_c=0.01$ and $\omega_c=0.1$. For $\omega_c=0.1$, since a large portion of phase space between primary resonances with $m$ large has disappeared into the twistless region, the orbits with medium eccentricity become as easy to ionize as the ones with eccentricity close to one in the remaining part of the negative twist region of phase space. For eccentricities close to one, the critical threshold is dominated by the chaos threshold between resonances 1:1 and 2:1. In summary, the effect of the magnetic field $\omega_c$ is to stabilize the dynamics in the negative twist region by successively breaking up primary resonances. Therefore, between two successive resonances with $m$ small, the effect of the magnetic field is expected to be smooth in the sense that the critical thresholds are only slightly changed (increased) from the chaos thresholds in the absence of magnetic field up to some critical value of $\omega_c$ where collisions of periodic orbits occur. Investigating this region by Chirikov resonance overlap yields values which agree qualitatively and quantitatively with the renormalization results for $\omega_c\in [0,0.1]$.\ For small values of $\omega_c$, the qualitative behavior concerning diffusion of trajectories is expected to be the same as the one in the absence of magnetic field even if the diffusion coefficients may be smaller due to the limited region of phase space and the stability enhancement due to the magnetic field. Increasing $\omega_c$ makes the orbits with medium eccentricity easier to diffuse in a more and more reduced region of phase space.
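For reference, the overlap values plotted in Figs. \[fig:Epschirn\] and \[fig:fce\] follow from a direct evaluation of Eq. (\[eqn:critchir\]). The sketch below (added for illustration; the resonance actions $J_m^{\pm}$ of Eq. (\[eqn:posres\]) and the amplitudes $V_m(e)$ are computed elsewhere in the paper and are treated here as inputs supplied by the caller) transcribes that formula.

```python
import math

def chirikov_threshold(J_m, J_m1, V_m, V_m1, m):
    """2/3-rule overlap threshold F_m between the resonances m:1 and (m+1):1,
    transcribing Eq. (eqn:critchir).  J_m, J_m1 are the resonance actions
    (both J^+ or both J^-) and V_m, V_m1 the corresponding amplitudes V_m(e)."""
    # J^3 sqrt(V) |2 - J^3/n|^{-1/2} is the half-width of the n:1 resonance
    # stripped of the common factor 2 sqrt(F/3)
    w = lambda J, V, n: J**3 * math.sqrt(V) / math.sqrt(abs(2.0 - J**3 / n))
    return (J_m1 - J_m)**2 / (3.0 * (w(J_m, V_m, m) + w(J_m1, V_m1, m + 1))**2)

# For omega_c = 0 one has J_m^- = m**(1/3); with the amplitudes V_1, V_2 supplied,
# chirikov_threshold(1.0, 2**(1/3), V_1, V_2, 1) reproduces the CP-problem
# threshold of Ref. [sach97] quoted in the text.
```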
Chaos thresholds in the positive twist region --------------------------------------------- The classical dynamics in the positive twist region of phase space is essential for the stochastic ionization process since it contains the region where the action $J$ is large, and for large $\omega_c$ this region is the predominant one since the negative twist region shrinks to the singularity $J=0$. Figure \[fig:fomp\] shows a typical critical function $\omega\mapsto F_c^+(\omega)$ obtained by the renormalization method for $\omega_c=0.3$ and for $e=1$. This figure is analogous to Fig. \[fig:fomn\]. From Fig. \[fig:fomp\], we deduce that for $F>F_c=0.021$, no invariant tori are left in this region of phase space. For these values of the parameters $e$ and $\omega_c$, the condition of existence of invariant tori, $\omega\geq \sqrt{2}\omega_c\Vert V\Vert$, obtained from the integrable case (\[eqn:sint\]) is $\omega\geq 0.67$. We notice that there is a large chaotic zone for $\omega\in [0.67,0.8]$ (the set of $\omega$ where $F_c^+(\omega)$ is very small). The diffusion of trajectories throughout the region of large $J$ is prevented by the small set of invariant tori with frequency $\omega\in [0.85,0.90]$ for intermediate values of the amplitude of the microwaves $F_c\in [0.015,0.02]$. The frequency of the last invariant torus to survive in this region of phase space is approximately equal to 0.87. The frequency of the last invariant torus in the positive twist region fluctuates in a very erratic way as one varies $\omega_c$. However, we observed that this frequency is always between 0.85 and 0.90 (close to the 1:1 resonance). The changes induced by the magnetic field are stronger in the positive twist region. We compute the critical thresholds between 1:1 and 2:1 resonances in the positive twist region. Figure \[fig:renchirp\] represents the results obtained by renormalization and by the 2/3-rule criterion. In this region, the critical thresholds determined by the 2/3-rule criterion are well below the ones given by the renormalization for significant values of the field. Figure \[fig:Tgamma\] represents the critical threshold for the break-up of invariant tori with frequency $\gamma=(\sqrt{5}-1)/2$. In the negative twist region, the golden mean torus is slightly stabilized by the magnetic field. In the positive twist region, the influence of the field is very strong : first the torus is created, then it is stabilized up to high values of the field, and then it disappears. This figure shows that in contrast to the negative twist region, the influence of the magnetic field in the positive twist region is very strong. The curve for the positive twist torus appears to have non-smooth variations, in contrast to the negative twist torus, for instance at $\omega_c\approx 0.065$. We notice that for the golden mean tori, there is no break-up by collision since the one located in the positive twist region is broken before the expected collision. For small values of the field $\omega_c$, we expand the threshold given by the Chirikov criterion (\[eqn:critchir\]). Since the resonances located at $J_n^+$ do not exist in the absence of the field $\omega_c$, we expect $F_m^+(e,\omega_c)$ to vanish at $ \omega_c=0$. The corrections to $F_c^+$ due to the magnetic field are obtained using the following expansion : $J_n^+=2^{1/3}n^{-1/3}\Vert V \Vert^{-2/3} \omega_c^{-2/3} +O(1)$. Thus we have : $ F_m^+(e,\omega_c)=\alpha_m(e)\omega_c^{2/3}[1+O(\omega_c^{2/3})]$, where $\alpha_m$ are some constants depending on $e$.
This means that the increase of stabilization is very sharp (with infinite slope) for low values of $\omega_c$.\ Furthermore, using Eq. (\[eqn:critchir\]), we compute the 2/3-rule resonance overlap value of $m$:1 and $m$+1:1 resonances for increasing $m=1,\ldots,10$. For a fixed eccentricity $e=1$ , Fig. \[fig:Epschirp\] shows the resonance overlap values as a function of the field $\omega_c$ in the positive twist region. What emerges from this figure is that this region of phase space is strongly stabilized by the magnetic field, and that the curve $m=1$ dominates. However, this feature may vary with the parameter $e$ as it has been observed in Ref. [@chan02a] and in Sec. \[sec:neg\] for the negative twist region. These curves are similar to Fig. \[fig:fce\]. As a result, increasing the magnetic field makes intermediate eccentricity orbits as easy to ionize (diffusion through a part of phase space where $J$ is large) as the ones with eccentricity close to one. This situation is different from the situation without magnetic field. The overall effect of the magnetic field in the positive twist region is basically the same as the one in the negative twist region, i.e., it breaks up the primary resonances and fills up the remaining region by very stable invariant tori, preventing the diffusion of trajectories throughout phase space. Conclusion ========== We have analyzed the classical phase space of the hydrogen atom in crossed magnetic and microwave fields in the high frequency regime. Useful information about the dynamics is provided by the analysis of an integrable part of the Hamiltonian. Accurate information about chaos threshold is obtained by using the renormalization map and the 2/3-rule Chirikov criterion. The global effect of the magnetic field is to stabilize the dynamics and consequently reducing the diffusion of trajectories and the stochastic ionization process. G. Casati, B.V. Chirikov, D.L. Shepelyansky, and I. Guarnieri, Phys. Rep. [ **154**]{}, 77 (1987). P.M. Koch and K.A.H. van Leeuwen, Phys. Rep. [**255**]{}, 289 (1995). B.V. Chirikov, Phys. Rep. [**52**]{}, 263 (1979). H. Koch, Erg. Theor. Dyn. Syst. [**19**]{}, 475 (1999). C. Chandre and H.R. Jauslin, to appear in Physics Reports (2002). P. Fu, T.J. Scholz, J.M. Hettema, and T.F. Gallagher, Phys. Rev. Lett. [**64**]{}, 511 (1990). M. Nauenberg, Phys. Rev. Lett. [**64**]{}, 2731 (1990). J. Griffiths and D. Farrelly, Phys. Rev. A [**45**]{}, R2678 (1992). D. Wintgen, Z. Phys. D [**18**]{}, 125 (1991). P. Kappertz and M. Nauenberg, Phys. Rev. A [**47**]{}, 4749 (1993). J.E. Howard, Phys. Rev. A [**46**]{}, 364 (1992). K. Sacha and J. Zakrzewski, Phys. Rev. A [**55**]{}, 568 (1997). D. Farrelly and T. Uzer, Phys. Rev. Lett. [**74**]{}, 1720 (1995). A.F. Brunello, T. Uzer, and D. Farrelly, Phys. Rev. A [**55**]{}, 3730 (1997). H. Friedrich, [*Theoretical Atomic Physics*]{} (Springer-Verlag, Berlin, 1991). V. Lanchares, M. Inarrea, and J.P. Salas, Phys. Rev. A [**56**]{}, 1839 (1997). B.I. Meerson, E.A. Oks, and P.V. Sasorov, J. Phys. B: At. Mol. Phys. [**15**]{}, 3599 (1982). C. Chandre, Phys. Rev. E [**63**]{}, 046201 (2001). C. Chandre and T. Uzer, to appear in Phys. Rev. E (2002). C. Chandre, J. Laskar, G. Benfatto, and H.R. Jauslin, Physica D [**154**]{}, 159 (2001). C. Chandre and H.R. Jauslin, in [*Mathematical Results in Statistical Mechanics*]{}, edited by S. Miracle-Solé, J. Ruiz, and V. Zagrebnov (World Scientific, Singapore, 1999). C. Chandre and H.R. Jauslin, Phys. Rev. E [**61**]{}, 1320 (2000). C. Chandre, M. 
Govin, H.R. Jauslin, and H. Koch, Phys. Rev. E [**57**]{}, 6612 (1998). J.J. Abad and H. Koch, Commun. Math. Phys. [**212**]{}, 371 (2000). D. del Castillo-Negrete, J.M. Greene, and P.J. Morrison, Physica D [**91**]{}, 1 (1996). ![\[fig:Joms\] Position of the invariant tori with frequency $(\sqrt{5}-1)/2$ (continuous curve) and $(5+\sqrt{5})/10$ (dashed curve) as a function of $\omega_c$ for the integrable Hamiltonian (\[eqn:Hint\]) for $e=1$. The strong continuous curve is the location of the twistless region.](Fig1) ![\[fig:omse\] Existence of primary resonances ($m=1,\ldots,5$) in the plane of parameters $e$-$\omega_c$. The light gray part is the domain of existence of only $m=1$, and in the black part, all the five first resonances exist.](Fig2) ![\[fig:fomn\] Critical function $F_c^-(\omega)$ in the negative twist region, obtained by the renormalization method, for $\omega_c=0.3$ and for $e=1$.](Fig3) ![\[fig:renchirn\] Chaos thresholds $F_1^-(\omega_c)$ between resonances 1:1 and 2:1 in the negative twist region, obtained by the 2/3-rule criterion (dashed curves) and by the renormalization method (continuous curves), as a function of $\omega_c$, for $e=0.5$ and $e=1$.](Fig4) ![\[fig:Epschirn\] Resonance overlap values of $F_m^-(\protect\omega_c)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots ,10$ in the negative twist region for $e=1$.](Fig5) ![\[fig:fce\] Resonance overlap values of $F_m^-(e)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots$ in the negative twist region for $(a)$ $\omega_c=0.01$ and $(b)$ $\omega_c=0.1$.](Fig6a "fig:") ![\[fig:fce\] Resonance overlap values of $F_m^-(e)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots$ in the negative twist region for $(a)$ $\omega_c=0.01$ and $(b)$ $\omega_c=0.1$.](Fig6b "fig:") ![\[fig:fomp\] Critical function $F_c^+(\omega)$ in the positive twist region, obtained by the renormalization method, for $\omega_c=0.3$ and for $e=1$.](Fig7) ![\[fig:renchirp\] Chaos thresholds $F_1^+(\omega_c)$ between resonances 1:1 and 2:1 in the positive twist region, obtained by the 2/3-rule criterion (dashed curves) and by the renormalization method (continuous curves), as a function of $\omega_c$. The upper curves are obtained for $e=1$ and the lower ones are for $e=0.5$.](Fig8) ![\[fig:Tgamma\] Critical threshold $F_c(\gamma,\omega_c)$ for the break-up of the invariant tori with frequency $\gamma=(\sqrt{5}-1)/2$ for $e=1$ in the positive twist region (continuous curve) and in the negative twist region (dashed curve).](Fig9) ![\[fig:Epschirp\] Resonance overlap values of $F_c^+(\protect\omega_c)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots ,10$ in the positive twist region for $e=1$.](Fig10) ![\[fig:fcep\] Resonance overlap values of $F_m^+(e)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots$ in the negative twist region for $(a)$ $\omega_c=0.01$ and $(b)$ $\omega_c=0.1$.](Fig11a "fig:") ![\[fig:fcep\] Resonance overlap values of $F_m^+(e)$ between resonances $m$:1 and $m$+1:1 for $m=1,\ldots$ in the negative twist region for $(a)$ $\omega_c=0.01$ and $(b)$ $\omega_c=0.1$.](Fig11b "fig:")
--- abstract: 'We prove that if $\pi$ is a recursive set of primes, then pointlike sets are decidable for the pseudovariety of semigroups whose subgroups are $\pi$-groups. In particular, when $\pi$ is the empty set, we obtain Henckell’s decidability of aperiodic pointlikes. Our proof, restricted to the case of aperiodic semigroups, is simpler than the original proof.' address: | Department of Mathematics/Computer Science\ New College of Florida 5800 Bay Shore Road Sarasota, Florida 34243-2109\ Department of Mathematics\ University of California at Berkeley\ Berkeley\ CA 94720\ USA\ School of Mathematics and Statistics\ Carleton University\ 1125 Colonel By Drive\ Ottawa, Ontario K1S 5B6\ Canada author: - Karsten Henckell - John Rhodes - Benjamin Steinberg bibliography: - 'standard.bib' date: 'June 2, 2007' title: Aperiodic Pointlikes and Beyond --- Introduction ============ In [@Henckell] the first author showed that aperiodic pointlikes are computable; the companion result for groups was proved by Ash [@Ash]. Recently there has been renewed interest in the decidability of aperiodic pointlikes: the third author used it to compute certain joins [@slice; @slice2]; the authors have recently used it to study aperiodic idempotent pointlikes and stable pairs [@Henckellidem; @Henckellstable; @ourstablepairs]. As a consequence, the Mal’cev product $\pv V\malce \pv A$ is always decidable if $\pv V$ is decidable and the semidirect product $\pv V\ast \pv A$ is decidable so long as $\pv V$ is local and decidable. The original proof of the decidability of aperiodic pointlikes in [@Henckell] is quite long. The key complication is that the aperiodic semigroup used to compute the pointlikes is given in terms of generators of a transformation semigroup. To prove the semigroup is aperiodic, the first author used a complicated Zeiger coding of the Rhodes expansion to show that these generators live inside a wreath product of aperiodic semigroups, and hence generate an aperiodic semigroup. An alternate approach was given by the first author in [@productexp] involving a simpler coding into a wreath product. We prove here a considerable generalization of this result. If $\pi$ is a set of primes, let $\Gpi$ denote the pseudovariety of groups with order divisible only by primes in $\pi$, that is, the pseudovariety of $\pi$-groups. Then $\barGpi$ denotes the pseudovariety of semigroups whose subgroups are $\pi$-groups. For instance, when $\pi=\emptyset$, $\barGpi$ is the pseudovariety of aperiodic semigroups; if $\pi=\{p\}$, then $\barGpi$ is the pseudovariety of semigroups whose subgroups are $p$-groups. Notice that $\barGpi$ has decidable membership if and only if $\pi$ is recursive. We prove in this case that $\barGpi$ has decidable pointlikes. Our construction is inspired by Henckell’s proof [@Henckell], but we sidestep the Zeiger coding by working directly with $\L$-chains. The paper is organized as follows. Given a finite semigroup $T$, we first introduce a certain computable semigroup $\CP$ of $\barGpi$-pointlikes. Then we discuss Schützenberger groups and the notion of a $\pi'$-free element. In the following section, we show how to associate a finite semigroup $S^{\pi}\in \barGpi$ to any finite semigroup $S$. By working with an arbitrary semigroup and axiomatizing the essential properties of Henckell’s original construction, we manage to simplify Henckell’s proof scheme. We draw inspiration from the Grigorchuk school’s theory of self-similar (or automaton) groups [@GNS; @selfsimilar].
The subsequent section shows how to construct a relational morphism from $T$ to $\CP^{\pi}$ such that the inverse image of each element belongs to $\CP$. The construction assumes the existence of a blowup operator on $\CP$, the existence of which is established in the final section. This last bit again simplifies the corresponding construction in [@Henckell]. Pointlikes ========== As usual, if $S$ is a finite semigroup, then $s^{\omega}$ denotes the unique idempotent power of $s$. The notation $S^I$ stands for $S$ with an adjoined identity $I$. The reader is referred to [@Almeida:book; @CP; @Arbib; @Eilenberg; @qtheor] for background and undefined terminology concerning finite semigroups. Let $\pv V$ be a pseudovariety of semigroups and $T$ a semigroup. A subset $Z\subseteq T$ is said to be *$\pv V$-pointlike* if, for all relational morphisms $\p:T\to S$ with $S\in \pv V$, there exists $s\in S$ such that $Z\subseteq s\pinv$. The collection $\pl V T$ of $\pv V$-pointlikes of $T$ is a subsemigroup of the power set $P(T)$ containing the singletons, and it is a downset for the order $\subseteq$. One says that $\pv V$ has decidable pointlikes if one can effectively compute $\pl V T$ from the multiplication table of $T$. See [@Almeidahyp; @Henckell; @slice; @slice2; @delay; @qtheor] for more on pointlikes. If $Z\in P(T)$, let us define $Z^{\omega+\ast} = Z^{\omega}\cdot\bigcup_{n\geq 1} Z^n$. Since products distribute over union in $P(T)$, it follows easily that $$\label{eq:omega+*} ZZ^{\omega+\ast}=Z^{\omega+\ast}=Z^{\omega+\ast}Z.$$ One deduces immediately from this identity that $Z^{\omega+\ast}$ is an idempotent. Observe that if $Z$ is a group element, then $Z^{\omega+\ast}=\bigcup_{n\geq 1} Z^n$. Let $\pi$ be a set of primes; then $\pi'$ denotes the set of primes not belonging to $\pi$. Denote by $\Gpi$ the pseudovariety of $\pi$-groups, that is, groups whose orders only involve primes from $\pi$. Let $\barGpi$ be the pseudovariety of semigroups whose subgroups are $\pi$-groups. As mentioned in the introduction, if $\pi = \emptyset$, then $\Gpi$ is the trivial pseudovariety and $\barGpi$ is the pseudovariety of aperiodic semigroups; if $\pi=\{p\}$, then $\Gpi$ is the pseudovariety of $p$-groups. Notice that the membership problems for $\pi$, $\Gpi$ and $\barGpi$ are equivalent. The following proposition shows that the semigroup of $\barGpi$-pointlikes is closed under unioning up cyclic $\pi'$-subgroups. If $\pi$ is empty, this means one can union up any cyclic subgroup, as was observed by Henckell [@Henckell]. In fact, $\pl A T$ is closed under the operation $Z\mapsto Z^{\omega+\ast}$. \[closedunderomega+star\] Let $\pi$ be a set of primes and $T$ a finite semigroup. Suppose $Z\in \pl {\barGpi} T$ generates a cyclic $\pi'$-group. Then $Z^{\omega+\ast}\in \pl {\barGpi} T$. Suppose the group element $Z$ has order $k$. Let $\p:T\to S$ be a relational morphism with $S\in \barGpi$. Choose $s\in S$ with $Z\subseteq s\pinv$. Then $Z=Z^{\omega}Z\subseteq s^{\omega}s\pinv$. So without loss of generality, we may assume that $s$ is a group element. The order $n$ of $s$ must be prime to $k$, so we can find a positive integer $m$ with $mn\equiv 1\bmod k$. Then $Z = Z^{mn}\subseteq s^{mn}\pinv = s^{\omega}\pinv$. Thus $Z^r\subseteq s^{\omega}\pinv$ for all $r>0$ and so $Z^{\omega+\ast}\subseteq s^{\omega}\pinv$. We conclude $Z^{\omega+\ast}\in \pl {\barGpi} T$. This paper is devoted to proving the following generalization of Henckell’s theorem describing the $\pv A$-pointlike sets [@Henckell].
\[henckellmain\] Let $\pi$ be a set of primes and $T$ be a finite semigroup. Denote by $\CP$ the smallest subsemigroup of $P(T)$ containing the singletons and closed under $Z\mapsto Z^{\omega+\ast}$ whenever $Z$ generates a cyclic $\pi'$-group. Then $\pl {\barGpi} T$ consists of all $X\in P(T)$ with $X\subseteq Y$ for some $Y\in \CP$. Proposition \[closedunderomega+star\] shows that $\CP\subseteq \pl {\barGpi} T$ and hence each of the subsets described in Theorem \[henckellmain\] is indeed $\barGpi$-pointlike. The hard part of the result is proving the converse. Let $\pi$ be a recursive set of prime numbers. Then $\barGpi$-pointlikes are decidable. In [@Henckellidem; @ourstablepairs] it is shown that if $\pv V$ is a pseudovariety such that $\pv A\malce \pv V=\pv V$ and $\pv V$-pointlikes are decidable, then the $\pv V$-idempotent pointlikes are decidable. This in particular applies to pseudovarieties of the form $\barGpi$. Let $\pi$ be a recursive set of prime numbers. Then $\barGpi$-idempotent pointlikes are decidable and hence the Mal’cev product $\pv V\malce \barGpi$ is decidable whenever $\pv V$ is decidable. Schützenberger groups and $\pi'$-free elements {#schutzsec} ============================================== Fix a semigroup $S$. Let $H$ be an $\H$-class of $S$ and set $$\stab H = \{s\in S^I\mid Hs\subseteq H\}.$$ The faithful quotient of the transformation monoid $(H,\stab H)$, denoted $(H,\Gamma_R(H))$, is a transitive regular permutation group called the *Schützenberger group* of $H$ [@CP; @Arbib]. If $H$ is a maximal subgroup, then $\Gamma_R(H)=H$. In general, one can always find a subgroup $\til \Gamma_R(H)\subseteq \stab H$ acting transitively on $H$ with faithful quotient $\Gamma_R(H)$ [@Arbib]. The following proposition is well-known. \[L-classindep\] Let $H$ and $H'$ be $\L$-equivalent $\H$-classes of $S$. Then $\stab H=\stab {H'}$ and one can take $\til \Gamma_R(H)= \til \Gamma_R(H')$. Moreover, the kernels of the natural maps $\til\Gamma_R(H)\to \Gamma_R(H)$ and $\til\Gamma_R(H')\to \Gamma_R(H')$ coincide. In particular, $\Gamma_R(H)\cong \Gamma_R(H')$. Suppose that $H=H_a$, $H'=H_b$ and $ya=b$ with $y\in S^I$. By Green’s lemma, $yH_a=H_b$. The equality $\stab H=\stab {H'}$ is classical [@CP; @Arbib]. Now $H_a=a\til \Gamma_R(H)$, so $b\til \Gamma_R(H) = ya\til \Gamma_R(H) = yH_a=H_b$. Thus $\til \Gamma_R(H)$ is a group in $\stab {H'}$ acting transitively on $H'$. By regularity of the action we can take $\til\Gamma_R(H') = \til \Gamma_R(H)$. The statement about kernels follows since the right stabilizers of any two $\L$-equivalent elements of a semigroup coincide. Similarly, there is a left Schützenberger group $(\Gamma_L(H),H)$ and a subgroup $\til \Gamma_L(H)$ of the left stabilizer of $H$ mapping onto $\Gamma_L(H)$. The groups $\Gamma_L(H)$ and $\Gamma_R(H)$ are isomorphic. In fact, if $h_0\in H$ is a fixed base point and $g\in \Gamma_R(H)$, then the map $\gamma$ sending $g$ to the unique $g\gamma\in \Gamma_L(H)$ with $g\gamma h_0 = h_0 g$ is an anti-isomorphism. In particular, using Proposition \[L-classindep\] and its dual, we see that the Schützenberger group depends up to isomorphism only on the $\J$-class. See [@CP; @Arbib] for details. The following proposition describes when an element belongs to $\stab H$, and hence represents an element of $\Gamma_R(H)$. \[stayinschutz\] Let $H$ be an $\H$-class of $S$. Then $s\in S^I$ belongs to $\stab H$ if and only if, for some $h\in H$, $hs\in H$. Necessity is clear. Suppose $hs\in H$ and $h'\in H$. Then we have $h's\L hs\L h'$. Therefore, $h's\J h'$ and so $h's\R h'$.
This shows $h's\in H$. We now introduce the important notion of $\pi'$-freeness. \[definepiprimefree\] Let $\pi$ be a set of primes. A $\J$-class (respectively, $\L$-, $\R$-class) of $S$ is called *$\pi'$-free* if its Schützenberger group is a $\pi$-group. Likewise, an element of a $\pi'$-free $\J$-class is called $\pi'$-free. We shall need the following well-known and easy to prove lemma. \[primelift\] Let $\p:G\to H$ be an onto group homomorphism and let $h\in H$ have prime order $p$. Then there is an element $g\in G$ of prime power order $p^n$ with $g\p=h$. A $\barGpi$-variant of the Rhodes expansion =========================================== Our goal in this section is to associate a finite semigroup $\suppera\in\barGpi$ to each finite semigroup $S$. The case of $\CP$ will yield a semigroup in $\barGpi$ and a relational morphism that establishes Theorem \[henckellmain\]. Fix a finite semigroup $S$ for this section. Elements of the free monoid $S^*$ will be written as strings $\vec{x}=(x_n,x_{n-1},\ldots,x_1)$. The empty string is denoted $\varepsilon$. We omit parentheses for strings of length $1$. If $n\geq \ell$, define $$\label{defalpha} (x_n,x_{n-1},\ldots,x_1)\alpha_{\ell} = (x_{\ell},x_{\ell-1},\ldots,x_1)$$ and $(x_n,\ldots,x_1)\tau_{\ell} = x_{\ell}$. We identify $\vec x\alpha_1$ with $\vec x\tau_1$, the first letter of $\vec x$. By convention $\vec x\alpha_0=\varepsilon$. Set $(x_n,\ldots,x_1)\omega=x_n$. We use $\vec b\cdot\vec a$ for the concatenation of $\vec b$ and $\vec a$. As the notation suggests, we read strings from right to left. If $P$ is a pre-ordered set, then a *flag* of elements of $P$ is a strict chain $p_n<p_{n-1}<\cdots<p_1$. We also allow an empty flag. Denote by $\flag$ the set of flags for the $\L$-order on $S$. Of course, $\flag$ is a finite set. A typical flag $s_n<_{\L}s_{n-1}<_{\L}\cdots <_{\L} s_1$ shall be denoted $(s_n,s_{n-1},\cdots,s_1)$. We shall also consider the set $\flagc$ of $\L$-chains, that is, all strings $(s_n,s_{n-1},\ldots,s_1)\in S^*$ (including the empty string) such that $s_{i+1}\leq_{\L}s_i$ for all $i$. Of course $\flag\subseteq \flagc\subseteq S^*$. A string $(s_n,\ldots,s_1)$ is termed *$\pi'$-free* if each $s_i$ is $\pi'$-free (see Definition \[definepiprimefree\]). We use $\flaga$ and $\flagca$ to denote the respective subsets of $\flag$ and $\flagc$ consisting of $\pi'$-free strings. There is a natural retraction from $\flagc$ to $\flag$ (mapping $\flagca$ onto $\flaga$), which we proceed to define. Define an elementary reduction to be a rule of the form $(s',s)\to s'$ where $s'\L s$. Elementary reductions are length-decreasing. It is well known and easy to prove that the elementary reductions form a confluent rewriting system and so each element $\vec{x}\in \flagc$ can be reduced to a unique flag $\vec{x}\red\in \flag$, called its *reduction* [@TilsonXII]. Clearly the reduction map is a retract and takes $\flagca$ to $\flaga$. The Rhodes expansion [@TilsonXII] defines a multiplication on $\flag$ using the reduction map. Our constructions are motivated by properties of the Rhodes expansion, but we shall not need this expansion per se. Some key properties of the reduction map, which are immediate from the definition, are recorded in the following lemma. \[reductionmap\] The reduction map $\red$ enjoys the following properties: 1. For $\vec x\in \flagc$, $\vec x\red\omega = \vec x\omega$; 2. Let $\vec b,\vec a,\vec b\cdot\vec a\in \flagc$ and suppose $\vec a\red = (a_{\ell},a_{\ell-1},\ldots,a_1)$.
Then one has $(\vec b\cdot\vec a)\red\alpha_{\ell} = (x_{\ell},a_{\ell-1},\ldots,a_1)$ where $x_{\ell}\L a_{\ell}$. Let us now turn to defining an auxiliary semigroup that will play a role in the proof. \[definechecks\] Denote by $\check S$ the monoid of all functions $f:S\to S$ such that 1. $sf\leq_{\R} s$ for all $s\in S$; 2. $f$ preserves $\L$, i.e. $s\L s'$ implies $sf\L s'f$; 3. There exists $s_f\in S^I$ such that $sf\R s$ implies $sf =ss_f$. Notice that the natural action of $S^I$ on the right of $S$ belongs to $\check S$. The set $\check S$ is a monoid. Clearly $\check S$ contains the identity. The set of functions satisfying the second item is obviously closed under composition. Suppose $f,g\in \check S$. Then $sfg\leq_{\R} sf\leq_{\R} s$. Moreover, if $s\R sfg$, all the inequalities are equalities and so $sfg = sfs_g =ss_fs_g$. In particular, we can take $s_{fg}=s_fs_g$. Let us write $\check S^{\infty}$ for the action monoid of the infinite wreath product $\wr^{\infty} (S,\check S)$ of right transformation monoids $(S,\check S)$. There is a natural action of $\bigwr$ on $S^*$ by length-preserving, sequential functions via the projections $\wr^{\infty}(S,\check S)\to \wr^n(S,\check S)$; to obtain the action on a word of length $n$, project first to $\wr^n(S,\check S)$ and then act. If $F\in \bigwr$ and $\vec a\in S^*$, then there is a unique element ${}_{\vec a}F\in \check S^{\infty}$ such that $(\vec b\cdot \vec a)F = \vec b{}_{\vec a}F\cdot \vec aF$ for all $\vec b\in S^*$; for example, ${}_{\varepsilon}F = F$. Also, if $F\in \bigwr$, then there is a unique element $\sigma_F\in \check S$ such that, for $s\in S$, the equality $sF = s\sigma_F$ holds. In particular, $$\label{wreathrecursions} (s_n,\ldots,s_1)F = (s_n,\ldots,s_2){}_{s_1}F\cdot s_1\sigma_F.$$ So $\sigma_F$ describes the action on the first letter, and must belong to $\check S$, while ${}_{s_1}F\in \check S^{\infty}$ is how $F$ acts on the rest of a string starting with $s_1$. In fact, the wreath recursion (\[wreathrecursions\]) can serve as a recursive definition of what it means to belong to $\bigwr$ (cf. [@GNS; @selfsimilar]). Now in our situation, by definition of $\check S$, there is an element $s_{\sigma_F}\in S^I$ so that $s_1F =s_1\sigma_F\R s_1$ implies $s_1F = s_1s_{\sigma_F}$. Let us consider some examples to illustrate this formalism for infinite iterated wreath products, in particular the wreath recursion (\[wreathrecursions\]). For simplicity, we work with $\wr^{\infty} (\{0,1\},T_2)$ where $T_2$ is the full transformation semigroup on two letters. Notice that the iterated wreath product $\wr^{\infty} (\{0,1\},T_2)$ is isomorphic to the semidirect product $(\wr^{\infty} (\{0,1\},T_2))^2\rtimes T_2$. As a first example, consider the $2$-adic odometer, which adds one to the $2$-adic expansion of an integer (where the least significant bit is the first one read from *right to left*) [@GNS; @selfsimilar]. Let $A$ be the $2$-adic odometer considered above, acting on $\{0,1\}^*$, and let $I$ be the identity function on $\{0,1\}^*$. If a $2$-adic integer has $0$ as its least significant bit, we change the $0$ to a $1$ and then continue with the identity map the rest of the way; if the least significant bit is $1$, we change it to $0$ and we add $1$ to what remains (i.e. perform a carry). So in terms of the wreath recursion (\[wreathrecursions\]), $\sigma_A = (01)$ and ${}_0A = I$, ${}_1A=A$. If we identify $\wr^{\infty} (\{0,1\},T_2)$ with $\wr^{\infty} (\{0,1\},T_2)^2\rtimes T_2$, then $A=((I,A),(01))$. Next we consider the two sections to the unilateral shift.
Consider the functions $F$, $G$ on $\{0,1\}^*$ that send $x_nx_{n-1}\cdots x_1$ to, respectively, $x_{n-1}x_{n-2}\cdots x_10$ and $x_{n-1}x_{n-2}\cdots x_11$. Both of these functions act by remembering the first letter, then resetting it to a predetermined symbol, and then resetting the second letter to the first and so on and so forth. Formally, the wreath recursion is given by $\sigma _{F} = \ov 0$, $\sigma_G=\ov 1$ (where $\ov x$ is the constant map to $x$) and ${}_0F=F={}_0G$, ${}_1F=G={}_1G$. Identifying $\wr^{\infty} (\{0,1\},T_2)$ with $\wr^{\infty} (\{0,1\},T_2)^2\rtimes T_2$, we have $F=((F,G),\ov 0)$ and $G= ((F,G),\ov 1)$. So, for example, the wreath recursion $$(x_nx_{n-1}\cdots x_21)F = (x_nx_{n-1}\cdots x_2)G\cdot 0=x_{n-1}x_{n-2}\cdots x_210$$ holds. Notice that on infinite bit strings, $F$ and $G$ are the two sections to the unilateral shift that erases the first letter. From these examples, the reader should instantly see the connection between iterated wreath products and sequential functions [@Eilenberg; @EilenbergA; @GNS; @selfsimilar]. A subsemigroup $T$ of $\bigwr$ is called *self-similar* if, for all $F\in T$ and $\vec a\in S^*$, one has ${}_{\vec a}F\in T$; so $\bigwr$ itself is self-similar. It is actually enough that, for each letter $s\in S$, one has ${}_sF\in T$. For instance, the group generated by the $2$-adic odometer $A$ is self-similar in $\wr^{\infty} (\{0,1\},T_2)$ since ${}_0A=I$, ${}_1A=A$. Similarly the semigroup generated by the two sections $F$, $G$ to the shift is self-similar since ${}_0F=F={}_0G$, ${}_1F=G={}_1G$. This viewpoint on infinite wreath products is due to Grigorchuk and Nekrashevych [@GNS; @selfsimilar]. \[defbigwrf\] Denote by $\bigwrf$ the collection of all transformations $F\in \bigwr$ such that whenever $(x_n,x_{n-1},\ldots,x_1)F = (y_n,y_{n-1},\ldots, y_1)$ with $x_{n-1}\R y_{n-1}$ and $x_n\R y_n$, there exists $s\in S^I$ with $x_{n-1}s = y_{n-1}$ and $x_ns=y_n$. The element $s$ can depend on the string $(x_n,\ldots,x_1)$. $\bigwrf$ is a self-similar submonoid of $\bigwr$. Clearly it contains the identity. Suppose $F,G\in \bigwrf$ and $$(x_n,x_{n-1},\ldots,x_1)FG = (z_n,z_{n-1},\ldots,z_1)$$ with $x_{n-1}\R z_{n-1}$ and $x_n\R z_n$. Suppose that $(x_n,x_{n-1},\ldots,x_1)F = (y_n,y_{n-1},\ldots,y_1)$. Then, for $i=n-1,n$, we have $x_i\geq_{\R} y_i\geq_{\R} z_i\R x_i$. Thus $x_i\R y_i$ and $y_i\R z_i$, $i=n-1,n$. By assumption, there exist $s,t\in S^I$ with $x_is=y_i$ and $y_it=z_i$, $i=n-1,n$. Then $x_ist=z_i$, for $i=n-1,n$. Hence $\bigwrf$ is a submonoid of $\bigwr$. Self-similarity is immediate from the equation $(x_n,\ldots,x_1){}_{\vec a}F\cdot \vec aF = ((x_n,\ldots,x_1)\cdot \vec a)F$ and the definition of $\bigwrf$. If $s\in S$, define the diagonal operator $\Delta_s:S^*\to S^*$ by $$\label{diagonal} (x_n,x_{n-1},\ldots,x_1)\Delta_s = (x_ns,x_{n-1}s,\ldots,x_1s).$$ It is immediate that $\Delta_s\in \bigwrf$. The next lemma expresses the so-called Zeiger property of $\bigwrf$. \[zeigerprop\] Suppose $\vec x = (x_n,\ldots,x_1)\in \flagc$ and $F\in \bigwrf$ is such that $\vec xF = (y_n,y_{n-1},\ldots,y_1)$ with $x_{n-1}=y_{n-1}$ and $x_n\R y_n$. Then $x_n=y_n$. By definition of $\bigwrf$, there exists $s\in S^I$ such that $x_{n-1}s = y_{n-1}=x_{n-1}$ and $x_ns=y_n$. Since $x_n\leq_{\L}x_{n-1}$, we can write $x_n= ux_{n-1}$ with $u\in S^I$. Then $y_n = x_ns=ux_{n-1}s = ux_{n-1}=x_n$, as required. We now define an important transformation semigroup on $\flagc$.
Let $\mathscr C$ consist of all transformations such that there exists $(\wh f,\ov f)\in \bigwrf\times (\flagc\setminus \{\varepsilon\})$ with $\vec xf = \vec x\wh f\cdot \ov f$. For instance, if $s\in S$, then one readily checks that $(\Delta_s,s)$ defines an element of $\mathscr C$ via the formula: $$(x_n,x_{n-1},\ldots,x_1)(\Delta_s,s) = (x_ns,x_{n-1}s,\ldots, x_1s,s).$$ Such elements correspond to generators of the Rhodes expansion [@TilsonXII]. Notice that in order for $(\wh f,\ov f)$ to define an element of $\mathscr C$ one must have $x\wh f\leq_{\L} \ov f\omega$ for every $x\in S$. It is essential that $\ov f$ is not empty in the definition of $\mathscr C$. \[Cissemigroupandformula\] The set $\mathscr C$ is a semigroup. Let $f,g\in \mathscr C$. We claim $\wh{fg} = \wh{f}{}_{\ov f}\wh{g}$ and $\ov {fg} = (\ov f)\wh g\cdot \ov g$. Indeed, $$\vec xfg = (\vec x\wh f\cdot \ov f)g = (\vec x\wh f\cdot \ov f)\wh g\cdot \ov g = \vec x\wh f{}_{\ov f}\wh g\cdot (\ov f)\wh g\cdot \ov g.$$ As $\bigwrf$ is self-similar, ${}_{\ov f}\wh{g}\in \bigwrf$ and so $fg\in\mathscr C$. An immediate consequence of the definition is: \[wreathlike\] If $f\in \mathscr C$ and $\vec a,\vec b,\vec b\cdot\vec a\in \flagc$, then $(\vec b\cdot \vec a)f = \vec b{}_{\vec a}\wh f\cdot \vec af$. Indeed, a straightforward computation yields $$(\vec b\cdot \vec a)f = (\vec b\cdot \vec a)\wh f\cdot \ov f = \vec b{}_{\vec a}\wh f\cdot (\vec a\wh f\cdot \ov f) = \vec b{}_{\vec a}\wh f\cdot \vec af,$$ proving the lemma. Lemma \[wreathlike\] shows that $\mathscr C$ behaves very much like an iterated wreath product, a property that we shall exploit repeatedly. In fact, an element of $\mathscr C$ is like an asynchronous transducer that outputs $\ov f$ with empty input and then computes synchronously. We are almost prepared to define our semigroup in $\barGpi$. Recall that $\rho$ denotes the reduction map. \[CA\] Let $\mathscr C^{\pi}$ be the subset of $\mathscr C$ consisting of all transformations $f\in \mathscr C$ such that: 1. $\flagca f\subseteq \flagca$; 2. $\red f\red =f\red$. \[CAissemi\] The set $\mathscr C^{\pi}$ is a semigroup. Closure of the first item under composition is clear. The computation $$\label{redishom} \red fg\red = \red f (\red g\red) = (\red f\red)g\red = f\red g\red = fg\red$$ completes the proof that $\mathscr C^{\pi}$ is a semigroup. Notice that allows us to define an action of $\mathscr C^{\pi}$ on $\flaga$ by $\vec x\mapsto \vec xf\rho$ for $f\in \mathscr C^{\pi}$. Let us denote the resulting faithful transformation semigroup by $(\flaga,\suppera)$. Observe that $\flagca$ appears in Definition \[CA\], while in the definition of $\suppera$ we use $\flaga$. Since $\flaga$ is finite, so is $\suppera$. Our goal is to prove $\suppera\in \barGpi$. We do this by showing that if $\vec x\in \flaga$, $p\in \pi'$ and $f\in \suppera$ with $\vec xf^p = \vec x$, then $\vec xf=\vec x$. This implies that $\suppera$ has no cyclic subgroup of prime order belonging to $\pi'$ and hence $\suppera\in \barGpi$. First we make a simple observation. \[omegaRdrops\] Let $f\in \mathscr C$ and $\varepsilon\neq\vec x\in \flagc$. Then $$\vec xf\red\omega = \vec xf\omega = \vec x\wh f\omega\leq_{\R} \vec x\omega.$$ By definition of $\mathscr C$, we have $\vec xf\omega = (\vec x\wh f\cdot \ov f)\omega = \vec x\wh f\omega$. Since $\wh f\in \bigwrf\subseteq \bigwr$, we conclude $\vec x\wh f\omega\leq_{\R} \vec x\omega$. The lemma then follows from Lemma \[reductionmap\]. 
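Before turning to the main theorem, it may help to see the reduction map and the generators $(\Delta_s,s)$ in action on a concrete multiplication table. The following sketch is purely illustrative (it is not the construction used in the decidability proof, and all names are ad hoc); it encodes an $\L$-chain as a Python list whose first entry is the first (rightmost) letter.

```python
# Illustrative sketch: the reduction map "rho" on L-chains and the action of the
# generator (Delta_s, s) of the semigroup C, for a finite semigroup given by its
# multiplication table mult, where elements are 0..n-1 and mult[a][b] = a*b.

def l_below(mult, a, b):
    """a <=_L b, i.e. a lies in S^1 b (a == b or a = x*b for some x)."""
    return a == b or any(mult[x][b] == a for x in range(len(mult)))

def l_equiv(mult, a, b):
    return l_below(mult, a, b) and l_below(mult, b, a)

def reduce_chain(mult, chain):
    """Apply the elementary reductions (s', s) -> s' (s' L-equivalent to s) to an
    L-chain until a flag (strict L-chain) remains; chain[0] is the first letter."""
    out = []
    for s in chain:
        if out and l_equiv(mult, out[-1], s):
            out[-1] = s          # collapse the pair, keeping the newer letter
        else:
            out.append(s)
    return out

def generator(mult, s, chain):
    """Action of (Delta_s, s) followed by reduction: multiply every entry on the
    right by s, prepend s as the new first letter, then reduce."""
    return reduce_chain(mult, [s] + [mult[x][s] for x in chain])

# Tiny example: S = {0, a, 1} with a*a = 0 (indices: 0 -> zero, 1 -> a, 2 -> identity).
mult = [[0, 0, 0],
        [0, 0, 1],
        [0, 1, 2]]
chain = []
for s in (1, 1):          # act twice by the generator corresponding to a
    chain = generator(mult, s, chain)
print(chain)   # [1, 0]: first letter a, then a*a = 0 strictly L-below it -- a flag
```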
Notice if $f\in \mathscr C^{\pi}$, then $\vec xf\red\neq \varepsilon$, for all $\vec x\in \flagc$. Indeed, $\ov f\neq \varepsilon$ implies $\vec xf\red = (\vec x\wh f\cdot \ov f)\red\neq \varepsilon$. Finally, we turn to the main result of this section. \[isaperiodic\] Let $S$ be a finite semigroup. Then $\suppera\in \barGpi$. As $\red:\mathscr C^{\pi}\to \suppera$ is a homomorphism by , it suffices to prove that if $\vec x\in \flaga$, $p\in \pi'$ and $f\in \mathscr C^{\pi}$ are such that $\vec xf^p\red =\vec x$, then $\vec xf\red = \vec x$. Set $\vec x_i = \vec xf^i\red$; so $\vec x_0=\vec x_p = \vec x$. As $f\in \mathscr C^{\pi}$, the above discussion shows $\vec x_i\neq \varepsilon$ for all $i$. Suppose that $\vec x=(x_n,x_{n-1},\ldots,x_1)$. We prove the following critical claim by induction on $\ell$. \[keyclaim\] For each $1\leq i\leq p$ and each $0\leq \ell\leq n$, we have $|\vec x_i|\geq \ell$ and $\vec x_i\alpha_{\ell}=\vec x\alpha_{\ell} = (x_{\ell},x_{\ell-1},\ldots,x_1)$. Let us show how the claim implies the theorem. As $\vec x_1=\vec xf\red$, it suffices from the claim to show that $|\vec x_1|=|\vec x|$. But Lemma \[omegaRdrops\] implies that $$\label{rgoesdown} \vec x\omega\geq_{\R}\vec x_1\omega\geq_{\R}\cdots\geq_{\R}\vec x_{p-1}\omega\geq_{\R}\vec x_p\omega = \vec x\omega$$ and so $x_n = \vec x_1\tau_n\geq_{\L}\vec x_1\omega\R \vec x\omega =x_n$. Therefore, $\vec x_1\omega\L x_n$ and hence, since $\vec x_1$ is a flag, we conclude $|\vec x_1|=n$. This shows that $\vec x=\vec xf\red$ and completes the proof of Theorem \[isaperiodic\] once we establish the claim. The claim is trivial for $\ell=0$. Assume the claim is true for $0\leq \ell\leq n-1$. We prove it for $\ell+1$. Let $\vec a = \vec x\alpha_{\ell} = (x_{\ell},x_{\ell-1},\ldots, x_1)$ and set $\vec x = \vec b\cdot \vec a$; if $\ell=0$, then $\vec a = \varepsilon$ and $\vec b = \vec x$. Then $\vec x_i = \vec b_i\cdot \vec a$ for some $\vec b_i\in \flaga$, each $1\leq i\leq p$, by the inductive hypothesis. First we show $|\vec x_i|\geq \ell+1$. Indeed, implies $\vec x_i\omega\R x_n<_{\L} x_{\ell}$ since $\vec x$ is a flag. We conclude $\vec b_i\neq\varepsilon$, that is $|\vec x_i|\geq \ell+1$, for all $i$. Let us set $\vec z= \vec af\red$. First observe that $|\vec z|\geq \ell$ and $\vec z\alpha_{\ell}=\vec a$. Indeed, $$\vec b_1\cdot \vec a=\vec x_1 = \vec xf\red = (\vec b\cdot \vec a)f\red = (\vec b{}_{\vec a}\wh f\cdot \vec af)\red = (\vec b{}_{\vec a}\wh f\cdot \vec z)\red$$ by Lemma \[wreathlike\] and the confluence of reduction. As each entry of $\vec b{}_{\vec a}\wh f$ is $<_{\J}$-below $x_{\ell}$ (since $\vec x$ is a flag and ${}_{\vec a}\wh f\in\bigwrf$), we must have $\vec z\alpha_{\ell} = \vec a$. There are three cases. Case 1 {#case-1 .unnumbered} ------ Suppose that $|\vec z|\geq\ell+2$. Then an application of Lemma \[wreathlike\] and the second item of Lemma \[reductionmap\] shows that, for any $\vec y\in \flaga$, $$(\vec y\cdot \vec a)f\red\alpha_{\ell+1} = (\vec y{}_{\vec a}\wh f\cdot \vec af)\red\alpha_{\ell+1} = (\vec y{}_{\vec a}\wh f\cdot \vec z)\rho\alpha_{\ell+1} = \vec z\alpha_{\ell+1}.$$ In particular, $(\vec x_i)\alpha_{\ell+1}$ is independent of $i$ and so equals $(\vec x)\alpha_{\ell+1}$, as desired. Case 2 {#case-2 .unnumbered} ------ Suppose that $|\vec z|=\ell+1$. Our goal is to show that $\vec b_i\alpha_1 = x_{\ell+1}$ for all $1\leq i\leq p$. Set $s=\vec z\omega$ for convenience. 
Then we have by Lemma \[wreathlike\] and confluence of the reduction map $$\vec b_i\cdot \vec a=\vec x_{i} = \vec x_{i-1}f\red =(\vec b_{i-1}\cdot \vec a)f\red= ((\vec b_{i-1}){}_{\vec a}\wh f\cdot \vec af)\red= ((\vec b_{i-1}){}_{\vec a}\wh f\cdot \vec z)\red.$$ So the second item of Lemma \[reductionmap\] allows us to deduce $\vec b_{i}\alpha_1\L \vec z\omega=s$. On the other hand, Lemma \[reductionmap\] implies $s=\vec z\omega=\vec af\rho\omega = \vec af\omega$ and so, since $\vec x_if = (\vec b_i){}_{\vec a}\wh f\cdot \vec af\in\flagc$, we must have $$\label{superequation} (\vec b_i\alpha_1){}_{\vec a}\wh f\leq_{\L}\vec af\omega = s\L \vec b_i\alpha_1,\ \text{for all}\ i.$$ ### Subcase 1 {#subcase-1 .unnumbered} Suppose $(\vec b_j\alpha_1){}_{\vec a}\wh f\R \vec b_j\alpha_1$ for some $1\leq j\leq p$. By definition of $\check S$ and of the wreath product there is an element $u\in S^I$ so that if $y\in S$ and $y{}_{\vec a}\wh f\R y$, then $y{}_{\vec a}\wh f=yu$. With this notation, we are assuming $\vec b_j\alpha_1u\R \vec b_j\alpha_1$. By , $\vec b_j\alpha_1u=(\vec b_j\alpha_1){}_{\vec a}\wh f\leq_{\L} \vec b_j\alpha_1$, whence $\vec b_j\alpha_1u\H \vec b_j\alpha_1$. In particular, $u$ represents an element of $\Gamma_R(H_{\vec b_j\alpha_1})$ by Proposition \[stayinschutz\]. Notice in the case of aperiodic pointlikes, this already yields $(\vec b_j\alpha_1){}_{\vec a}\wh f=\vec b_j\alpha_1u=\vec b_j\alpha_1$ and so the following subclaim is essentially trivial in the aperiodic case. \[newclaim\] For $i\geq j$, we have $(\vec b_i\alpha_1){}_{\vec a}\wh f\R \vec b_i\alpha_1$ and $\vec b_i\alpha_1 = \vec b_j\alpha_1u^{i-j}$. We prove the subclaim by induction, the case $i=j$ being by assumption. Assume it is true for $i$. Then $(\vec b_i\alpha_1){}_{\vec a}\wh f\R \vec b_i\alpha_1$ implies $(\vec b_i\alpha_1){}_{\vec a}\wh f=\vec b_i\alpha_1u$. Since $ (\vec b_i\alpha_1){}_{\vec a}\wh f\leq_{\L} \vec b_i\alpha_1$ by and $(\vec b_i\alpha_1){}_{\vec a}\wh f\R \vec b_i\alpha_1$ by the induction hypothesis, we conclude $\vec b_i\alpha_1u = (\vec b_i\alpha_1){}_{\vec a}\wh f\L \vec b_i\alpha_1\L s$, where the last $\L$-equivalence uses . As $\vec b_i$ is a flag, $\vec b_i\tau_2<_{\L} \vec b_i\alpha_1$ and so $(\vec b_i){}_{\vec a}\wh f\tau_2<_{\J} \vec b_i\alpha_1$. Therefore, $(\vec b_i){}_{\vec a}\wh f\tau_2<_{\J}(\vec b_i\alpha_1){}_{\vec a}\wh f$. Recalling $\vec z$ is a flag of length $\ell+1$, reduction is confluent and $(\vec b_i\alpha_1){}_{\vec a}\wh f\L s=\vec z\omega$, we obtain $$\label{longequation} \vec b_{i+1}\alpha_1 = ((\vec b_i){}_{\vec a}\wh f\cdot \vec af)\red\tau_{\ell+1} = ((\vec b_i){}_{\vec a}\wh f\cdot \vec z)\red\tau_{\ell+1}= (\vec b_i\alpha_1){}_{\vec a}\wh f =\vec b_i\alpha_1u$$ Since $\vec b_{i+1}\alpha_1 = \vec b_i\alpha_1u \L \vec b_i\alpha_1$, the second item of Definition \[definechecks\] implies $(\vec b_{i+1}\alpha_1){}_{\vec a}\wh f\L (\vec b_i\alpha_1){}_{\vec a}\wh f= \vec b_{i+1}\alpha_1$, where the equality uses . Since, $(\vec b_{i+1}\alpha_1) {}_{\vec a}\wh f\leq_{\R} \vec b_{i+1}\alpha_1$, we conclude $(\vec b_{i+1}\alpha_1){}_{\vec a}\wh f\R \vec b_{i+1}\alpha_1$, as was required for the subclaim. Also, by induction and $$\vec b_{i+1}\alpha_1 = \vec b_i\alpha_1u = \vec b_j\alpha_1u^{i-j}u = \vec b_j\alpha_1u^{i+1-j},$$ where the first equality uses . This proves Subclaim \[newclaim\]. Since $\vec b_j = \vec b_{j+p}$, Subclaim \[newclaim\] implies $\vec b_j\alpha_1 = \vec b_{j+p}\alpha_1 = \vec b_j\alpha_1u^p$. 
Since $u$ represents an element of $\Gamma_R(H_{\vec b_j\alpha_1})$ and the Schützenberger group is a regular permutation group, it follows $u^p$ represents the identity. But $\vec b_j\alpha_1$ is $\pi'$-free and $p\in \pi'$. We conclude that $u$ represents the identity of $\Gamma_R(H_{\vec b_j\alpha_1})$ and so $\vec b_j\alpha_1u=\vec b_j\alpha_1$. Applying the subclaim, it follows $\vec b_j\alpha_1=\vec b_i\alpha_1$ for all $i\geq j$. Since the sequence $\vec b_i$ is periodic with period $p$, we conclude $\vec b_i\alpha_1$ is independent of $i$ and in particular coincides with $\vec b_p\alpha_1= x_{\ell+1}$, as required. ### Subcase 2 {#subcase-2 .unnumbered} Suppose that $(\vec b_i\alpha_1){}_{\vec a}\wh f<_{\R} \vec b_i\alpha_1$ for all $1\leq i\leq p$. Then since $\vec x_if = (\vec b_i){}_{\vec a}\wh f\cdot \vec af\in \flagca$, we deduce $(\vec b_i)_{\vec a}\wh f\in\flagca$. Therefore, every entry of $(\vec b_i){}_{\vec a}\wh f$ is $<_{\J}$-below $\vec b_i\alpha_1\L s=\vec z\omega$ (see ). Since $|\vec z|=\ell+1$, $$\vec b_{i+1}\alpha_1 = \vec x_if\red\tau_{\ell+1} = ((\vec b_i){}_{\vec a}\wh f\cdot \vec af)\red\tau_{\ell+1} = ((\vec b_i){}_{\vec a}\wh f\cdot \vec z)\red\tau_{\ell+1}=\vec z\omega$$ In particular, $\vec b_i\alpha_1$ is independent of $i$, and so taking $i=p$ shows that $\vec b_i\alpha_1 = \vec b\alpha_1=x_{\ell+1}$, as desired. Case 3 {#case-3 .unnumbered} ------ We now arrive at the final case: when $|\vec z|=\ell$, i.e. $\vec z=\vec a$. This case does not arise when $\ell=0$ since $\varepsilon f\rho = \ov f\rho \neq \varepsilon$. So assume from now on $\ell\geq 1$. This is the only case that makes use of the definition of $\bigwrf$. Observe $$\label{bigequationnew} x_{\ell}=\vec a\omega = \vec z\omega = \vec af\red\omega = \vec af\omega= (\vec a\wh f\cdot \ov f)\omega = \vec a\wh f\omega.$$ Since $\vec x_i = \vec b_i\cdot \vec a$ is a flag, we have the important formula $$\label{anotherusefuleq} \vec b_{i+1}\alpha_1 = \vec x_if\red\tau_{\ell+1} = ((\vec b_i){}_{\vec a}\wh f\cdot \vec af)\red \tau_{\ell+1} = (\vec b_i){}_{\vec a}\wh f\rho\alpha_1\L (\vec b_i\alpha_1){}_{\vec a}\wh f$$ where the last equality uses $|\vec af\rho| = |\vec z|=\ell$, while the $\L$-equivalence comes from the second item of Lemma \[reductionmap\]. The following subclaim will be used to seal the rest of the proof. \[secondclaim\] There do not exist $m>i\geq 0$ such that $\vec b_{m}\alpha_1<_{\J} \vec b_i\alpha_1.$ Indeed, suppose $\vec b_{m}\alpha_1<_{\J} \vec b_i\alpha_1$. Then by , we have $$\vec b_{m+1}\alpha_1\L (\vec b_m\alpha_1){}_{\vec a}\wh f\leq_{\R} \vec b_m\alpha_1<_{\J} \vec b_i\alpha_1.$$ Continuing, we see that $\vec b_n\alpha_1<_{\J} \vec b_i\alpha_1$ for all $n\geq m$. But the sequence $\vec b_n$ is periodic with period $p$, so choosing an appropriate $n$ yields $\vec b_i<_{\J} \vec b_i$. This contradiction establishes Subclaim \[secondclaim\]. Now we are in a position to prove that the $\L$ in is really an equality: $$\label{evenmoreusefulequation} \vec b_{i+1}\alpha_1 =(\vec b_i\alpha_1){}_{\vec a}\wh f.$$ Indeed, by $\vec b_{i+1}\alpha_1\L (\vec b_i\alpha_1){}_{\vec a}\wh f\leq_{\R} \vec b_i\alpha_1$. Subclaim \[secondclaim\] then implies $$\label{whyname} (\vec b_i\alpha_1){}_{\vec a}\wh f\R \vec b_i\alpha_1>_{\J} \vec b_i\tau_2\geq _{\R} (\vec b_i){}_{\vec a}\wh f\tau_2$$ since $\vec b_i$ is a flag. 
From and it follows that indeed $$\vec b_{i+1}\alpha_1 = (\vec b_i){}_{\vec a}\wh f\rho\alpha_1 = (\vec b_i\alpha_1){}_{\vec a}\wh f.$$ Let us now prove by induction on $0\leq i\leq p-1$ that $\vec b_i\alpha_1 = x_{\ell+1}$ (where $\vec b_0=\vec b$). The case $i=0$ is trivial. Suppose that the statement is true for $i$ with $0\leq i\leq p-2$. Then and induction implies $$\vec b_{i+1}\alpha_1 = (\vec b_i\alpha_1){}_{\vec a}\wh f\leq_{\R} \vec b_i\alpha_1 = x_{\ell+1}.$$ Since $\vec b_{i+1}\alpha_1\nless_{\J} \vec b_i\alpha_1$ (by Subclaim \[secondclaim\]), we must in fact have $\vec b_{i+1}\alpha_1\R x_{\ell+1}$. Putting together $(\vec b_i\alpha_1\cdot \vec a)\wh f = (\vec b_i\alpha_1){}_{\vec a}\wh f\cdot \vec a\wh f$ with and yields $$(x_{\ell+1},x_{\ell},\ldots,x_1)\wh f = (\vec b_{i+1}\alpha_1,x_{\ell},y_{\ell-1},\ldots,y_1).$$ Since $x_{\ell+1}\R \vec b_{i+1}\alpha_1$, the Zeiger property (Lemma \[zeigerprop\]) yields $b_{i+1}\alpha_1 = x_{\ell+1}$. This completes the induction that $\vec b_i\alpha_1=x_{\ell+1}$ for all $i$ and thereby finishes the proof of Claim \[keyclaim\]. Theorem \[isaperiodic\] is now proved. Blowup operators ================ Fix a finite semigroup $T$, a set of primes $\pi$ and set $S=\CP$. The salient idea underlying the remainder of the proof, is to construct a retraction belonging to $\bigwrf$. One then “conjugates” the action of the generators of the Rhodes expansion on $\flag$ by this retract to get an action on $\flaga$ belonging to $\suppera$. Let us write $s\leq_{\H} t$ if both $s\leq_{\L} t$ and $s\leq_{\R} t$. \[blowupoperator\] A *preblowup operator* on $S$ is a function $\Hop:S\to S$ satisfying the following properties: 1. $s\Hop = s$ if $s$ is $\pi'$-free; 2. $s\Hop<_{\H} s$ if $s$ is not $\pi'$-free; 3. $s\subseteq s\Hop$ (the “blow up”); 4. There exists a function $m:S\to S$, written $s\mapsto m_s$, such that $s\Hop = sm_s$ and $m_s=m_{s'}$ whenever $s\L s'$. An idempotent preblowup operator is called a *blowup operator*. The element $m_s$ is called the *right multiplier* associated to $s$. \[preblowupisemigroup\] The collection of preblowup operators on $S$ is a finite semigroup. In particular, if there are any preblowup operators on $S$, then there is a blowup operator on $S$. The first three conditions are obviously closed under composition. If $\Hop$ and $\Hop'$ are preblowup operators with respective right multipliers and $s\mapsto n_s$, then $s\Hop\Hop' = sm_sn_{sm_s}$. If $s\L s'$, then $m_s=m_{s'}$ and $sm_s\L s'm_s$. Therefore, $n_{sm_s} = n_{s'm_s}=n_{s'm_{s'}}$. This shows that $\Hop\Hop'$ is a preblowup operator with $m_sn_{sm_s}$ as the right multiplier associated to $s$. The final statement follows from the existence of idempotents in non-empty finite semigroups. The next proposition collects some elementary properties of blowup operators. For the first item, the reader should consult Definition \[definechecks\]. \[blowupprops\] Let $\Hop:S\to S$ be a blowup operator. Then: 1. $\Hop\in \check S$; 2. The image of $\Hop$ is the set of $\pi'$-free elements of $S$; 3. Suppose $y\leq_{\L} s$. Then $y\subseteq ym_s$; 4. If $s$ is $\pi'$-free and $y\leq_{\L} s$, then $y=ym_s$ First we check $B\in \check S$. Since $s\Hop = sm_s$, clearly $s\Hop \leq_{\R} s$. If $s\L t$, then $m_s=m_t$ and so $sB = sm_s\L tm_s=tm_t=tB$. If $s\Hop \R s$, then by the first and second items in the definition of a blowup operator, $s$ is a $\pi'$-free and $s\Hop = s = sI$, so we may take $s_{\Hop} =I$ in the third item of Definition \[definechecks\]. 
For the second item, observe that $\Hop$ fixes an element $s$ if and only if $s$ is $\pi'$-free. Since $\Hop$ is idempotent, it image is its fixed-point set. Turning to the third item, write $y = zs$ with $z\in S^I$. Then we have $$y=zs\subseteq z(s\Hop) = zsm_s = ym_s,$$ as required. For the final item, we have $s=s\Hop=sm_s$. Since $y=zs$, some $z\in S^I$, we have $ym_s = zsm_s =zs=y$. This completes the proof. For the rest of this section we assume the existence of a blowup operator $\Hop$ on $S$; a construction appears in the next section. We proceed to define an “extension” $\wh\Hop$ of $\Hop$ to $\flagc$. Recall that $\Delta_{s}$ is the diagonal operator in $\bigwr$ corresponding to $s$ . \[extendblowup\] Define $\wh\Hop:S^*\to S^*$ recursively by - $\varepsilon\wh\Hop=\varepsilon$ - $(\vec b\cdot s)\wh\Hop = (\vec b\Delta_{m_s})\wh\Hop\cdot s\Hop$. This recursive definition is known as the Henckell formula. Since $s\Hop\leq_{\L}s$, we obtain the following lemma. \[firstdropsinL\] If $\vec x\neq \varepsilon$, then $\vec x\wh\Hop\alpha_1\leq_{\L} \vec x\alpha_1$. We retain the notation from the previous section for the next proposition. \[belongstobigwrf\] The map $\wh\Hop$ belongs to $\bigwrf$. Moreover, $\flagc\wh\Hop=\flagca$ and $\wh\Hop|_{\flagc}$ is idempotent. Since $\Hop\in \check S$ by Proposition \[blowupprops\] and $\Delta_{m_s}\in \bigwr$, it is immediate from the recursive definition that $\wh\Hop\in \bigwr$. Next we verify that $\flagc\wh\Hop\subseteq \flagca$ by induction on length. The base case is trivial. In general, $(\vec b\cdot x)\wh\Hop = (\vec b\Delta_{m_x})\wh\Hop\cdot x\Hop$. Since $\Delta_{m_x}$ preserves $\flagc$, by induction $(\vec b\Delta_{m_x})\wh\Hop\in \flagca$. Since $x\Hop$ is $\pi'$-free (Proposition \[blowupprops\]), if $\vec b=\varepsilon$ we are done. Otherwise, let $x_1$ be the first entry of $\vec b$. Then Lemma \[firstdropsinL\] shows that $(\vec b\Delta_{m_x})\wh\Hop\alpha_1\leq_{\L} x_1m_x$. As $x\Hop = xm_x$ and $x_1\leq_{\L} x$, we see that $x_1m_x\leq_{\L} xm_x$ and so $(\vec b\Delta_{m_x})\wh\Hop\cdot x\Hop$ belongs to $\flagca$. We show by induction on length that $\wh\Hop$ fixes $\flagca$, the case of length $0$ being trivial. If $\vec x=\vec b\cdot x\in\flagca$, then $\vec x\wh\Hop = (\vec b\Delta_{m_x})\wh\Hop\cdot x\Hop$. But Proposition \[blowupprops\], together with the fact that $x$ is $\pi'$-free and $\vec x$ is an $\L$-chain, implies $xB=x$ and $\vec b\Delta_{m_x}=\vec b$. So a simple induction yields $\vec x\wh\Hop =\vec x$. We conclude $\wh\Hop|_{\flagc}$ is idempotent. Finally, we must verify $\wh\Hop\in \bigwrf$. We proceed by induction on length, the cases of length $0$ and $1$ being vacuously true. Suppose $(x_2,x_1)\wh\Hop =(y_2,y_1)$ with $x_1\R y_1$ and $x_2\R y_2$. Then $y_1 = x_1\Hop = x_1m_{x_1}$. On the other hand, $x_2\R y_2=(x_2m_{x_1})\Hop\leq_{\R} x_2m_{x_1}\leq_{\R} x_2$. Thus $x_2m_{x_1}\R (x_2m_{x_1})\Hop$ and so $x_2m_{x_1}$ is $\pi'$-free. Therefore $y_2=(x_2m_{x_1})\Hop=x_2m_{x_1}$. Thus $y_1=x_1m_{x_1}$ and $y_2=x_2m_{x_2}$, showing that the condition in Definition \[defbigwrf\] is satisfied. Suppose now $n>2$ and that $(x_n,x_{n-1},\ldots,x_1)\wh\Hop = (y_n,y_{n-1},\ldots,y_1)$ with $x_i\R y_i$ for $i=n-1,n$. Then $$(y_n,y_{n-1},\ldots,y_1)=((x_n,x_{n-1},\ldots,x_2)\Delta_{m_{x_1}})\wh\Hop\cdot x_1\Hop.$$ Now $x_i\geq_{\R} x_im_{x_1}\geq_{\R} y_i\R x_i$, for $i=n-1,n$. Therefore, $x_im_{x_1}\R y_i$, $i=n-1,n$. Induction provides $s'\in S^I$ with $x_im_{x_1}s' = y_i$, $i=n-1,n$. 
Taking $s=m_{x_1}s'$ yields $x_is=y_i$, for $i=n-1,n$, completing the proof. Another crucial property of $\wh\Hop$ is that it “blows up $\L$-chains”. \[blowupstring\] Let $\vec x=(x_n,\ldots,x_1)\in \flagc$ and set $\vec x\wh\Hop = (y_n,\ldots,y_1)$. Then $x_i\subseteq y_i$ for $i=1,\ldots,n$. The proof is by induction on $n$. For $n=0$, the statement is vacuously true. In general, $\vec x\wh\Hop = (x_nm_{x_1},\ldots,x_2m_{x_1})\wh\Hop\cdot x_1\Hop$. By the definition of a blowup operator $x_1\subseteq x_1\Hop$. Since $\vec x$ is an $\L$-chain, Proposition \[blowupprops\] shows $x_i\subseteq x_im_{x_1}$ for $i=2,\ldots, n$. Induction yields $x_im_{x_1}\subseteq y_i$ for $i=2,\ldots,n$, establishing that $x_i\subseteq y_i$. Recall that if $s\in S$, then $(\Delta_s,s)$ denotes the element of $\mathscr C$ that acts by $\vec x(\Delta_s,s) = \vec x\Delta_s\cdot s$. \[commutewithred\] The equalities $\red(\Delta_s,s)\red = (\Delta_s,s)\red$ and $\red\wh\Hop\red =\wh\Hop\red$ hold. Consider first $(\Delta_s,s)$. Suppose $(x_n,\ldots,x_1)\in \flagc$ with $x_{i+1}\L x_i$. Then $(x_n,\ldots,x_1)(\Delta_s,s) = (x_ns,\cdots, x_1s,s)$ has $x_{i+1}s\L x_is$. It follows that applying first the elementary reduction $(x_{i+1},x_i)\to x_{i+1}$ and then $(\Delta_s,s)$ is the same as applying first $(\Delta_s,s)$ and then the elementary reduction . We conclude $\red(\Delta_s,s)\red = (\Delta_s,s)\red$. Let us now turn to $\wh\Hop\red$. We show by induction on $i$ that if a string with $n>i$ admits an elementary reduction $(x_{i+1},x_i)\to x_{i+1}$ and $\vec x\wh\Hop = (y_n,\ldots,y_1)$, then $(y_{i+1},y_i)\to y_{i+1}$ is an elementary reduction. The base case is $i=1$, i.e. $x_1\L x_2$. Then $$(x_n,\ldots,x_2,x_1)\wh\Hop = ((x_n,\ldots,x_3)\Delta_{m_{x_1}m_{x_2m_{x_1}}})\wh\Hop\cdot\left((x_2m_{x_1})\Hop, x_1m_{x_1}\right)$$ By the definition of a blowup operator, $x_1\L x_2$ implies that $m_{x_1}=m_{x_2}$. Now $y_1=x_1m_{x_1}$. Since $x_2m_{x_1}=x_2m_{x_2} = x_2\Hop$ and $\Hop$ is idempotent, we see that $y_2 = x_2m_{x_1}$. Now $x_2\L x_1$ implies $x_2m_{x_1}\L x_1m_{x_1}$. We conclude $y_2\L y_1$, as required. If $i>1$, then we use that $$(y_n,\ldots,y_1)=(x_nm_{x_1},\ldots,x_2m_{x_1})\wh\Hop\cdot x_1\Hop.$$ Since $x_{i+1}m_{x_1}\L x_im_{x_1}$, the induction hypothesis gives $(y_{i+1},y_i)\to y_{i+1}$ is an elementary reduction. This completes the induction. It is then immediate that $\red\wh\Hop\red = \wh\Hop\red$. For the next proposition, the reader is referred to Definition \[CA\]. \[inCA\] If $s\in S$, then $(\Delta_s,s)\wh\Hop\in \mathscr C^{\pi}$. We saw in the proof of Proposition \[CAissemi\] that the set of transformations $f$ satisfying $\red f\red =f\red$ is a semigroup. As $\wh\Hop:\flagc\to \flagca$ (Proposition \[belongstobigwrf\]), it suffices by Proposition \[commutewithred\], to show that $(\Delta_s,s)\wh\Hop\in \mathscr C$. Both $(\Delta_s,s)$ and $\wh\Hop$ leave $\flagc$ invariant. Now we obtain $$\vec x(\Delta_s,s)\wh\Hop = (\vec x\Delta_s\cdot s)\wh\Hop = (\vec x\Delta_s\Delta_{m_s})\wh\Hop\cdot s\Hop=(\vec x\Delta_{sm_s})\wh\Hop\cdot s\Hop$$ and $\Delta_{sm_s}\wh\Hop\in \bigwrf$ by Proposition \[belongstobigwrf\]. This establishes $(\Delta_s,s)\wh\Hop\in \mathscr C$, completing the proof. Let $S$ be any finite monoid and $\Hop:S\to S$ any idempotent operator satisfying (1), (2) and (4) of Definition \[blowupoperator\]. Then one can define $\wh\Hop:S^*\to S^*$ as per Definition \[extendblowup\] and Proposition \[inCA\] will still hold. 
Condition (3) is just used for Proposition \[blowupstring\] and to construct the relational morphism below. If $(X,M)$ and $(Y,N)$ are faithful transformation semigroups, then a *relational morphism* $\p:(X,M)\to (Y,N)$ is a fully defined relation $\p:X\to Y$ such that, for each $m\in M$, there exists $\til m\in N$ such that $y\pinv m\subseteq y\til m\pinv$ for all $y\in Y$. If $\p:(X,M)\to (Y,N)$ is a relational morphism of faithful transformation semigroups, then the *companion relation* $\til\p:M\to N$ is defined by $m\til\p = \{n\in N\mid y\pinv m\subseteq yn\pinv, \forall y\in Y\}$. It is well known that $\til\p$ is a relational morphism [@Eilenberg]. We define a relational morphism of faithful transformation semigroups $\p:(T^I,T)\to (\flaga,\suppera)$ as follows: we set $I\p = \varepsilon$, while, for $t\in T$, we define $t\p = \{\vec x\in \flaga \mid t\in \vec x\omega\}$. Notice that $\pinv$ coincides with $\omega$ on non-empty strings. \[definetherelmorph\] The relation $\p:T^I\to \flaga$ gives rise to a relational morphism $\p:(T^I,T)\to (\flaga,\suppera)$. Since $t\in \{t\}\subseteq \{t\}\Hop$ and $\{t\}\Hop\in \flaga$, it follows that $\p$ is fully defined. Let $t\in T$. We set $\til t = (\Delta_{\{t\}},\{t\})\wh\Hop\red$. Proposition \[inCA\] shows $\til t\in \suppera$. We need to prove $\vec x\p\inv t\subseteq \vec x\til t\p\inv$. If $\vec x=\varepsilon$, then $\vec x(\Delta_{\{t\}},\{t\})\wh\Hop\red = \{t\}\Hop$. So $\varepsilon\pinv t=\{I\}t=\{t\} \subseteq \{t\}\Hop=\varepsilon \til t\pinv$. If $\vec x\neq\varepsilon$, then we need to show $\vec x\omega t\subseteq \vec x\til t\omega$. Abusing notation, we identify $\{t\}$ with $t$. Then we have $\vec x(\Delta_t,t)\wh\Hop\red = \left((\vec x\Delta_t\cdot t)\wh\Hop\right)\red$. Lemma \[reductionmap\] tells us $\left((\vec x\Delta_t\cdot t)\wh\Hop\right)\red\omega = (\vec x\Delta_t\cdot t)\wh\Hop\omega$. An application of Proposition \[blowupstring\] yields $$(\vec x\Delta_t\cdot t)\wh\Hop\omega\supseteq (\vec x\Delta_t\cdot t)\omega = \vec x\omega t,$$ completing the proof. \[blowupcomputes\] The companion relation $\til\p:T\to \suppera$ satisfies the inequality $f\til\p\inv\subseteq \varepsilon f\omega\in \CP$. Let $t\in f\til\p\inv$. Then $t = It \in \varepsilon\pinv t\subseteq \varepsilon f\pinv = \varepsilon f\omega\in \CP$, as $\varepsilon f\neq \varepsilon$. Hence $f\til\p\inv\subseteq \varepsilon f\omega$, as required. \[almostdone\] If $\CP$ admits a blowup operator, then Theorem \[henckellmain\] holds. That is, $\pl {\barGpi} T = \{X\subseteq T\mid X\subseteq Y\in \CP\}$. We already know $\CP\subseteq \mathsf {PL}_{\barGpi}(T)$. Since $\suppera$ is $\pi'$-free by Theorem \[isaperiodic\], we have that each $\barGpi$-pointlike set is contained in $f\til\p\inv$ for some $f\in \suppera$. An application of Proposition \[blowupcomputes\] then completes the proof. Construction of the blowup operator =================================== We continue to work with our fixed finite semigroup $T$ and to denote $\CP$ by $S$. Our task now consists of constructing a blowup operator for $S$. By Lemma \[preblowupisemigroup\], it suffices to construct a preblowup operator. Our approach is a variation on Henckell’s [@Henckell], which leads to a shorter proof. For this purpose, we need to use Schützenberger groups. We retain the notation for Schützenberger groups introduced in Section \[schutzsec\]. 
For each non-$\pi'$-free $\L$-class $L$, fix an $\H$-class $H_L$ of $L$ and a prime power order element $g_L\in \til \Gamma_R(H_L)$ representing an element of $\Gamma_R(H_L)$ of prime order $p\in \pi'$ (c.f. Lemma \[primelift\]). We are now prepared to define our preblowup operator $\Hop$. If $s\in S$ is a $\pi'$-free element, define $m_s=I$. If $s$ is not $\pi'$-free, define $m_s=g_{L_s}^{\omega+\ast}\in S$. Notice that $g_{L_s}^{\omega+\ast} = \bigcup_{n\geq 1} g_{L_s}^n\supset g_{L_s}$ since $g_{L_s}$ is a group element. Define an operator $\Hop:S\to S$ by $s\Hop = sm_s$. The operator $\Hop$ is a preblowup operator. If $s$ is $\pi'$-free, then $m_s=I$ and $s\Hop = sm_s=s$. The fourth item of Definition \[blowupoperator\] is clearly satisfied by construction. We turn now to the third item. If $s$ is $\pi'$-free, then trivially $s\subseteq s\Hop$. If $s$ is not $\pi'$-free, then since $g_{L_s}\in \til\Gamma_R(H_s)$ by Proposition \[L-classindep\], we have $sg_{L_s}^{\omega} = s$. As $g_{L_s}^{\omega}\subseteq g_{L_s}^{\omega+\ast}$, $$s=sg_{L_s}^{\omega}\subseteq sg_{L_s}^{\omega+\ast} = sm_s=s\Hop,$$ as required. Finally we turn to the second item of Definition \[blowupoperator\]. Suppose that $s\in S$ is not $\pi'$-free. It is immediate from the definition $s\Hop = sm_s\leq_{\R} s$. Let $\gamma:\Gamma_R(H_s)\to \Gamma_L(H_s)$ be the anti-isomorphism given by $sg = g\gamma s$ for $g\in \Gamma_R(H_s)$. Choose, using Lemma \[primelift\], an element $x\in \til \Gamma_L(H_s)$ of order a power of $p$ so that $x$ maps to $g_{L_s}\gamma$ in $\Gamma_L(H_s)$ (where we view $g_{L_s}$ as an element of $\Gamma_R(H_s)$ using Proposition \[L-classindep\] and the projection). Then we have $\bigcup_{n\geq 1}x^n=x^{\omega+\ast}\in S$. We calculate $s\Hop$ as follows: $$\begin{aligned} s\Hop = sg_{L_s}^{\omega+\ast} = s\bigcup_{n\geq 1} g_{L_s}^n = \bigcup_{n\geq 1}sg_{L_s}^n = \bigcup_{n\geq 1}(g_{L_s}\gamma)^ns = \bigcup_{n\geq 1}x^ns = x^{\omega+\ast}s.\end{aligned}$$ We conclude $s\Hop\leq_{\L} s$ and thus $s\Hop\leq_{\H} s$. To establish $s\Hop<_{\H} s$, observe, using , $s\Hop g_{L_s} =sg_{L_s}^{\omega+\ast}g_{L_s} = sg_{L_s}^{\omega+\ast} = s\Hop$. Since $g_{L_s}$ represents a non-trivial element of $\Gamma_R(H_s)$ (c.f. Proposition \[L-classindep\]) and $(H_s,\Gamma_R(H_s))$ is a regular permutation group, we deduce that $s\Hop\notin H_s$. This concludes the proof that $s\Hop <_{\H} s$ when $s$ is not $\pi'$-free. Therefore, $\Hop$ is a preblowup operator, as required. In light of Corollary \[almostdone\], we have now established Theorem \[henckellmain\]. The first and second authors believe that one can make this whole approach work without blowing up null elements.
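The finiteness argument invoked at the end of Lemma \[preblowupisemigroup\] (any element of a non-empty finite semigroup has an idempotent power) is elementary; the following minimal Python sketch illustrates it for a self-map of a finite set, which is all the final statement of that lemma uses. The toy map below is arbitrary and does not model the semigroup $S=\CP$, the preorders $\leq_{\L}$ and $\leq_{\R}$, or $\pi'$-freeness; it is purely illustrative and not part of the argument.

```python
def compose(f, g):
    """Composite in the right-action convention used above: s(f.g) = (s f) g."""
    return {s: g[f[s]] for s in f}

def idempotent_power(f):
    """Return some idempotent power f^n of a self-map f of a finite set."""
    power = dict(f)
    while compose(power, power) != power:
        power = compose(power, f)
    return power

# toy self-map of a six-element set playing the role of a preblowup operator
f = {0: 2, 1: 2, 2: 4, 3: 4, 4: 4, 5: 0}
e = idempotent_power(f)
assert compose(e, e) == e
print("idempotent power:", e)
```

For this particular map the sketch returns $f^3$, the constant map onto $4$, which is idempotent.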
--- abstract: 'We construct a family of 1D and 2D long-range $SU(2)$ spin models as parent Hamiltonians associated with infinite dimensional matrix product states that arise from simple current correlation functions in $SU(2)_k$ WZW models. Our results provide a conformal field theory foundation for recent proposals by Greiter and coauthors regarding the realization of non-abelian chiral spin liquids. We explain, in particular, how the symmetrization procedure for the amplitudes of the ground state wave function suggested in [@Greiter:2009PhRvL.102t7203G] originates from the conformal field theory description.' author: - | Thomas Quella$^{1}$ and Abhishek Roy\ \ $^1$The University of Melbourne\ School of Mathematics and Statistics\ Parkville 3010 VIC, Australia\ [[email protected]@thp.uni-koeln.de]{} title: 'Conformal field theory and the non-abelian $SU(2)_k$ chiral spin liquid' --- Introduction ============ Topological states of matter in two dimensions have been the subject of intensive study since the discovery and theoretical explanation of the fractional quantum Hall (FQH) effect [@Tsui:1982PhRvL..48.1559T; @Laughlin:1983PhRvL..50.1395L]. These states generally arise as the effect of strong interactions and correlations and are thus hard to address from first principles. Most of the theoretical work since then has therefore focused on the description of idealized trial wave functions for ground states and excited states that exhibit the desired physical features, such as topological degeneracies and abelian or non-abelian anyonic statistics [@Laughlin:1983PhRvL..50.1395L; @Moore:1991ks]. Many of these arise in a very natural way from 2D conformal field theory (CFT) which, all at the same time, encodes wave functions, braiding statistics of anyonic excitations and the spectrum of gapless edge modes. These investigations were complemented by the study of so-called parent Hamiltonians involving suitable pseudo-potentials that render the trial wave functions exact eigenstates [@Haldane:1983PhRvL..51..605H]. By now there is an enormous amount of literature on trial wave functions for ground states and excited states in fractional quantum Hall systems. Similar wave functions have also been proposed for the description of quantum spin liquid states [@Kalmeyer:1987PhRvL..59.2095K; @Arovas:1988PhRvL..60..531A; @Kalmeyer:1989PhRvB..3911879K] in 2D spin systems making use of a mapping between spin configurations and hardcore bosons. While originally only abelian chiral spin liquids were discussed, in Ref. [@Greiter:2009PhRvL.102t7203G; @Greiter:2011jp; @Scharfenberger:2011PhRvB..84n0404S] Greiter and coauthors suggested a non-abelian generalization for each non-trivial choice of $SU(2)$ spin $S$. These studies were accompanied by the derivation of a suitable parent Hamiltonian that annihilates the ground state wave function of these non-abelian chiral spin liquids [@Schroeter:2007PhRvL..99i7202S; @Thomale:2009PhRvB..80j4406T; @Greiter:2011jp]. Recently, the construction of topological and other exotic states of matter was picked up again from the perspective of quantum information theory. Numerical studies showed that the bipartite entanglement spectrum of FQHE ground states contains information about the gapless edge modes and hence about the underlying conformal field theory [@Li:2008PhRvL.101a0504L]. This idea can be turned around by trying to encode desired entanglement features into ground state wave functions. 
This is achieved by coupling the physical degrees of freedom to each other through an auxiliary layer of virtual spins or particles that encodes the entanglement. On a technical level this naturally leads to the concept of tensor network states. In that context there is another natural definition of parent Hamiltonians as a frustration-free combination of local interactions that annihilate the desired ground state locally. In the context of 2D topological quantum states it seems appropriate to model the entanglement layer through infinite dimensional matrix product states based on conformal field theory (CFT) correlators since many trial wave functions arise directly from CFT [@Moore:1991ks; @Read:1999PhRvB..59.8084R]. This program has been initiated in Ref. [@Cirac:2010PhRvB..81j4431C] for free bosons and the $SU(2)_1$ WZW model and then further elaborated on in Ref. [@Nielsen:2011py] for the $SU(2)_k$ WZW models, with strong emphasis on $k=1$ and $k=2$. While this, obviously, produced the same quantum states that were discussed previously based on the same CFTs, the approach gave natural access to (long range) parent Hamiltonians by employing identities for correlation functions that are associated with so-called null vectors, hence with the representation theory of the underlying affine Lie algebra symmetry. The same kind of construction was also employed for $SO(N)$ and $SU(N)$ spin models associated with $SO(N)_1$ and $SU(N)_1$ WZW models [@Tu:2013PhRvB..87d1103T; @Tu:2014NuPhB.886..328T; @Bondesan:2014NuPhB.886..483B]. Remarkably, in all $SU(N)$ cases it was shown that the parent Hamiltonians for $k=1$ readily reduce to a (trivial) variation of the long-range Haldane-Shastry model [@Haldane:1988PhRvL..60..635H; @Shastry:1988PhRvL..60..639S] if the field insertions defining the [$\infty$MPS]{}are chosen to lie equidistantly on the unit circle. This is exciting since the Haldane-Shastry model is a paradigmatic model for excitations with purely statistical interactions [@Haldane:1991PhRvL..67..937H], is exactly solvable due to an exact Yangian symmetry [@Haldane:1992PhRvL..69.2021H] and is known to provide an accurate realization of the $SU(N)_1$ WZW models in the thermodynamic limit. In a somewhat disconnected line of development, attempts were made to identify Hamiltonians that realize $SU(2)$ quantum spin systems in one dimension with higher order critical behaviour that is described by an $SU(2)_k$ Wess-Zumino-Witten (WZW) CFT [@Gepner:1986wi]. While these CFTs are known to arise from integrable systems based on $R$-matrices associated with the spin $\frac{k}{2}$-representation of $SU(2)$ [@Babujian:1983NuPhB.215..317B; @Reshetikhin:1991JPhA...24.3299R], the associated Hamiltonians involve higher order spin-spin couplings and are extremely fine-tuned. Several authors have suggested simpler Hamiltonians that are still claimed to be lattice approximations of $SU(2)_k$ WZW models. The authors of Ref. [@Thomale:2012PhRvB..85s5149T] proposed a family of long-range Hamiltonians, for arbitrary level $k$, with quartic three-spin interactions which are based on a 1D reduction of parent Hamiltonians of the 2D non-abelian chiral spin liquid. Shortly after, a truncated short ranged version was studied in [@Michaud:2012PhRvL.108l7202M; @Michaud:2013PhRvB..87n0404M] for $k=2,3,4$ with numerically determined couplings. In both cases, the flow to the $SU(2)_k$ WZW model in the thermodynamic limit was supported by numerical studies.
It is probably fair to say that there is currently no model for higher spin $S$ (or higher level $k$) that has the same simplicity, symmetry and hence theoretical appeal as the Haldane-Shastry model for $S=\frac{1}{2}$ ($k=1$).[^1] Moreover, it should be noted that the [$\infty$MPS]{}construction based on the $SU(2)_k$ WZW model for $k=1$ and $k=2$ that was discussed in the literature [@Cirac:2010PhRvB..81j4431C; @Nielsen:2011py] is based on free field theories, namely a single free boson or three free fermions respectively for which correlation functions are readily available. In contrast, the $SU(2)_k$ WZW models for $k>2$ are genuine interacting conformal field theories. It is thus worth to provide a systematic investigation of the [$\infty$MPS]{}construction for higher values of $k$. This is also interesting in the context of the recent paper [@Greiter:2019arXiv190509728G] which studies the associated non-abelian statistics of such one-dimensional models. In this paper we study an infinite dimensional matrix product state ($\infty$MPS) based on the $SU(2)_k$ WZW model and present explicit formulas for long-range parent Hamiltonians based on null vector conditions and WZW Ward identities. This extends earlier considerations in [@Nielsen:2011py] to general level $k$ and clarifies the CFT origin of the wave function for the non-abelian chiral spin liquid (as discussed in [@Greiter:2014PhRvB..89p5125G]). Let us briefly outline the structure of this paper. Section \[sc:Setup\] is devoted to a discussion of the physical setup for an [$\infty$MPS]{}construction of a spin-$S$ chain and the presentation of arguments that constrain the associated CFT to be an $SU(2)_k$ WZW model with $k=2S$. In Section \[sc:ParentHamiltonian\] the null vectors in that CFT are then used to derive a family of operators that annihilate the [$\infty$MPS]{}and hence allow the construction of a parent Hamiltonian. The precise form of the [$\infty$MPS]{}is discussed in Section \[sc:GroundState\] and compared to the construction of the non-abelian chiral spin liquid as suggested in Ref. [@Greiter:2009PhRvL.102t7203G]. The general parent Hamiltonian for the 2D setup is simplified in Section \[sc:Circle\] for equidistant positions on the circle to investigate the reduction to a 1D setup. Two Appendices \[ap:Notation\] and \[ap:SchwingerBosons\] are used to summarize some elementary identities for the Lie algebra $su(2)$ and provide a brief review of the Schwinger boson construction. The final Appendix \[ap:GeneralG\] addresses the form of simple currents in $G_k$ WZW models in relation to those of products of $G_1$ WZW models for arbitrary (simple) symmetry groups $G$. \[sc:Setup\]Description of the physical setup ============================================= We wish to employ the idea of [$\infty$MPS]{}and parent Hamiltonians [@Cirac:2010PhRvB..81j4431C; @Nielsen:2011py] to construct a spin system consisting of $L$ spins ${\mathbf{S}}_l$ distributed arbitrarily on the complex plane. We will focus on $SU(2)$ spins and individual spins are supposed to transform in the spin-$S$ representation ${\mathcal{V}}$ where $S=\frac{1}{2},1,\frac{3}{2},2,\ldots$ is fixed once and for all. The total Hilbert space of the system is thus ${\mathcal{H}}={\mathcal{V}}^{\otimes L}$. 
To describe such a spin system within the formalism of [$\infty$MPS]{}we, first of all, define a state $$\begin{aligned} \label{eq:iMPS} |\psi\rangle =\sum_{m_1,\ldots,m_L}\bigl\langle\psi_{m_1}(z_1)\cdots\psi_{m_L}(z_L)\bigr\rangle_{\text{CFT}}\,|m_1\cdots m_L\rangle\;,\end{aligned}$$ where $|m_l\rangle$ denotes an orthonormal basis of ${\mathcal{V}}$ and $\psi(z)$ are suitable primary fields in a 2D CFT that also transform in the representation ${\mathcal{V}}$ and $\langle\cdots\rangle_{\text{CFT}}$ is the associated conformal block.[^2] In addition, there are certain consistency requirements that will be addressed below. From the perspective of the state $|\psi\rangle$ the variables $z_l$ are merely parameters but we will interpret them as the positions of the quantum spins ${\mathbf{S}}_l$. In the case at hand, the natural CFTs are $SU(2)_k$ WZW models (with $k=1,2,\ldots$) since these come with the correct symmetry and primary fields. For such a CFT to exhibit a primary field transforming in the spin-$S$ representation we need to have $k\geq2S$. It should be noted that the fusion of primary fields may exhibit multiple channels and hence there is generally a whole space of conformal blocks $\bigl\langle\psi(z_1)\cdots\psi(z_L)\bigr\rangle_{\text{CFT}}$ whose dimension grows exponentially with the number of spins $L$. Hence the definition is, in general, highly ambiguous and moving around the positions of the spins may lead to other states due to monodromies of conformal blocks [@FrancescoCFT]. One (and in our situation the only) way to come up with a unique conformal block[^3] is to choose the fields $\psi(z)$ to be simple currents. A simple current $\psi$ has, by definition, only a single fusion channel when fused with any other field $X$: $\psi\otimes_F X=Y$. In particular, for $SU(2)_k$ the fields are all self-dual and hence simple currents all fuse to the identity field: $\psi\otimes_F\psi={\mathbb{I}}$. This property immediately implies that the associated space of conformal blocks has dimension one (for even $L$) or zero (for odd $L$). We thus restrict our attention to even $L$ in what follows. The only non-trivial simple current of the $SU(2)_k$ WZW model transforms in the representation $S=\frac{k}{2}$. We are hence forced to assume $k=2S$ and this identification will be understood from now on. In Section \[sc:ParentHamiltonian\] we will define (or rather derive) operators ${\mathcal{C}}_l(\{z_i\})$ on ${\mathcal{H}}$ that take the positions $z_i$ as parameters and annihilate the quantum state $|\psi\rangle$. The associated parent Hamiltonian $$\begin{aligned} \label{eq:ParentHamiltonianGeneral} H=\sum_{l=1}^L{\mathcal{C}}_l\bigl(\{z_i\}\bigr)^\dag\cdot{\mathcal{C}}_l\bigl(\{z_i\}\bigr)\end{aligned}$$ is then positive semi-definite (satisfies $H\geq0$) and $|\psi\rangle$ is a zero energy ground state of $H$. We will see that the Hamiltonian $H$ as defined in is closely related to the Hamiltonian found earlier in [@Greiter:2011jp] (without noting the relation to CFT) and further discussed in [@Thomale:2012PhRvB..85s5149T]. For $S=\frac{1}{2}$ and $S=1$ our results readily reduce to those presented in [@Nielsen:2011py]. \[sc:ParentHamiltonian\]Derivation of the parent Hamiltonian from CFT ===================================================================== We will construct the operators ${\mathcal{C}}_l\bigl(\{z_i\}\bigr)$ based on our knowledge of null fields related to the simple currents $\psi(z)$ in the $SU(2)_k$ WZW model where $k=2S$.
This procedure was suggested in [@Cirac:2010PhRvB..81j4431C; @Nielsen:2011py] and carried out explicitly for $S=\frac{1}{2}$ and $S=1$ based on specific formulas for Clebsch-Gordan coefficients. In contrast, we will employ a formalism that is manifestly basis-independent and straightforwardly generalizes to higher values of $S$. Since the primary field $\psi(z)$ is transforming in the representation $S$, the associated null vector is located at energy level 1, regardless of the value of $S$ (see, e.g., [@FrancescoCFT]). The states on energy level 1 can all be represented as $J_{-1}^a|m\rangle$ where $m=-S,-S+1,\ldots,S$ is the magnetic quantum number and $J_n^a$ are the modes of the affine Lie algebra $\widehat{su}(2)_k$ underlying the $SU(2)_k$ WZW model. These states form a representation with respect to the global symmetry group $SU(2)$ which is generated by the zero modes $J_0^a$ and decomposes as $$\begin{aligned} \label{eq:TensorProduct} 1\otimes S =(S+1)\oplus(S)\oplus(S-1)\;,\end{aligned}$$ where $1$ arises since the modes $J_{-1}^a$ transform in the adjoint representation and $S$ refers to the action on the ground states $|m\rangle$.[^4] An illustration of these statements can be found in Figure \[fig:ModuleS\]. The null vectors lie in the top component $(S+1)$ of the tensor product decomposition . Let ${\mathbf{t}}$ and ${\mathbf{S}}$ be the spin generators associated with the action of $SU(2)$ on $J_{-1}^a$ and $|m\rangle$, respectively. Then $C=({\mathbf{t}}+{\mathbf{S}})^2$ denotes the quadratic Casimir operator on this tensor product and the projector onto the null states is given by $$\begin{aligned} {\mathcal{P}}=\frac{(C-C_S)(C-C_{S-1})}{(C_{S+1}-C_S)(C_{S+1}-C_{S-1})}\;, $$ where we have made use of the Casimir eigenvalues $C_J=J(J+1)$. Upon expanding this product, one finds the projector $$\begin{aligned} {\mathcal{P}}=\frac{1}{2S+1}+\frac{S+2}{(S+1)(2S+1)}\,{\mathbf{t}}\cdot{\mathbf{S}} +\frac{({\mathbf{t}}\cdot{\mathbf{S}})^2}{(S+1)(2S+1)}\;.\end{aligned}$$ This expression may be further simplified since ${\mathbf{t}}$ is a spin in the adjoint representation and hence specified by matrix elements ${(t^e)^a}_c=-i{\epsilon^{ea}}_c$ where $\epsilon$ is the completely anti-symmetric Levi-Civita symbol. Using identities summarized in Appendix \[ap:Notation\] we then find $$\begin{aligned} {\bigl[{\mathbf{t}}\cdot{\mathbf{S}}\bigr]^{ab}}_{cd} =-i{\epsilon^a}_{ce}\,{(S^e)^b}_d \quad\text{ and }\quad {\bigl[({\mathbf{t}}\cdot{\mathbf{S}})^2\bigr]^{ab}}_{cd} ={\bigl[S(S+1)\,\delta_c^a\,{\mathbb{I}}-S_cS^a\bigr]^b}_d\;,\end{aligned}$$ where we have emphasized the operator structure in the second tensor factor while keeping explicit matrix indices for the first one. Writing the projector as a matrix in the first two indices only (i.e. as an operator valued matrix), one then obtains $$\begin{aligned} {{\mathcal{P}}^a}_b &=\delta_b^a\,\frac{S+1}{2S+1}\,{\mathbb{I}}-\frac{i(S+2)}{(S+1)(2S+1)}\,{\epsilon^a}_{bc}\,S^c -\frac{S_bS^a}{(S+1)(2S+1)}\;.\end{aligned}$$ This expression can be symmetrized using the commutation relations of $SU(2)$ and this leads to the final result $$\begin{aligned} \label{eq:ProjectorSymmetrized} {{\mathcal{P}}^a}_b &=\delta_b^a\,\frac{S+1}{2S+1}\,{\mathbb{I}}-\frac{i(2S+3){\epsilon^a}_{bc}\,S^c}{2(S+1)(2S+1)} -\frac{S^aS_b+S_bS^a}{2(S+1)(2S+1)}\;.\end{aligned}$$ The symmetrization will facilitate a comparison of our parent Hamiltonian with that proposed in Ref. [@Greiter:2011jp]. 
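Since the closed expression is used repeatedly in what follows, a quick numerical cross-check may be helpful. The sketch below (Python/NumPy; the helper `spin_matrices` and the Levi-Civita array `eps` are introduced here purely for illustration) constructs the projector from the Casimir operator as above and confirms that it is idempotent, has trace $2S+3$, and coincides with the operator-valued matrix ${{\mathcal{P}}^a}_b$ of the symmetrized formula.

```python
import numpy as np

def spin_matrices(s):
    """Cartesian spin matrices S^x, S^y, S^z for spin s (dimension 2s+1)."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)                      # magnetic quantum numbers s, s-1, ..., -s
    sz = np.diag(m).astype(complex)
    sp = np.zeros((d, d), dtype=complex)      # S^+|s,m> = sqrt((s-m)(s+m+1)) |s,m+1>
    for i in range(1, d):
        sp[i - 1, i] = np.sqrt((s - m[i]) * (s + m[i] + 1))
    sm = sp.conj().T
    return np.array([(sp + sm) / 2, (sp - sm) / (2j), sz])

eps = np.zeros((3, 3, 3))                     # Levi-Civita symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

for S in (0.5, 1.0, 1.5, 2.0):
    dS = int(round(2 * S)) + 1
    Smat = spin_matrices(S)
    t = -1j * eps                             # adjoint (spin-1) generators, (t^a)_{bc} = -i eps_{abc}
    T = [np.kron(t[a], np.eye(dS)) + np.kron(np.eye(3), Smat[a]) for a in range(3)]
    C = sum(Ta @ Ta for Ta in T)              # quadratic Casimir on (adjoint) x (spin S)
    CS, CSm, CSp = S * (S + 1), (S - 1) * S, (S + 1) * (S + 2)
    I = np.eye(3 * dS)
    P = (C - CS * I) @ (C - CSm * I) / ((CSp - CS) * (CSp - CSm))
    assert np.allclose(P @ P, P)                   # projector
    assert np.isclose(np.trace(P).real, 2 * S + 3)  # rank = dimension of the spin-(S+1) multiplet
    blocks = P.reshape(3, dS, 3, dS)               # operator-valued matrix P^a_b
    for a in range(3):
        for b in range(3):
            Pab = (a == b) * (S + 1) / (2 * S + 1) * np.eye(dS) \
                - 1j * (2 * S + 3) / (2 * (S + 1) * (2 * S + 1)) * sum(eps[a, b, c] * Smat[c] for c in range(3)) \
                - (Smat[a] @ Smat[b] + Smat[b] @ Smat[a]) / (2 * (S + 1) * (2 * S + 1))
            assert np.allclose(blocks[a, :, b, :], Pab)
print("projector checks passed")
```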
[Figure \[fig:ModuleS\]: weight diagram of the module — the ground states at conformal weight $h_S$ carry spin $S$, the level-one states at $h_S+1$ are reached by $J_{-1}^{\pm}$ and $J_{-1}^z$ and carry the $SU(2)$ representation content $(S+1)\oplus(S)\oplus(S-1)$; the horizontal axis is $J_0^z$, the vertical axis $L_0$.]

By way of construction of the projector ${{\mathcal{P}}^a}_b$, we are guaranteed that ${{\mathcal{P}}^a}_b\psi(z)$ is a null field.[^5] Consequently, all correlation functions involving this field vanish and this allows us to conclude, after using a WZW Ward identity, that $$\begin{aligned} \label{eq:NullFieldCorrelator} 0={{\mathcal{P}}_l^a}_b\,\bigl\langle\psi(z_1)\cdots J_{-1}^b\psi(z_l)\cdots\psi(z_L)\bigr\rangle ={\mathcal{P}}_l^a\bigl(\{z_l\}\bigr)\,\bigl\langle\psi(z_1)\cdots\psi(z_L)\bigr\rangle\;,\end{aligned}$$ where the operator appearing on the right hand side is defined by[^6] $$\begin{aligned} \label{eq:OperatorP} {\mathcal{P}}_l^a\bigl(\{z_i\}\bigr) =\sum_{j(\neq l)}\frac{{{\mathcal{P}}_l^a}_b\,S_j^b}{z_l-z_j}\;.\end{aligned}$$ The subscript $l$ on ${{\mathcal{P}}_l^a}_b$ indicates that the spin operators ${\mathbf{S}}$ present in this matrix, see Eq. , act on the $l$-th tensor factor and hence can be written ${\mathbf{S}}_l$. By Eq.  all operators ${\mathcal{P}}_l^a\bigl(\{z_i\}\bigr)$ annihilate the [$\infty$MPS]{}defined in Eq. , $$\begin{aligned} \label{eq:PAction} {\mathcal{P}}_l^a\bigl(\{z_i\}\bigr)|\psi\rangle =\sum_{j(\neq l)}\frac{{{\mathcal{P}}_l^a}_b\,S_j^b}{z_l-z_j}\,|\psi\rangle =0\;,\end{aligned}$$ and can be used to define an associated parent Hamiltonian along the lines of Eq. . Before we spell out the parent Hamiltonian we will slightly generalize the coordinate dependence of . This is possible in view of the equations $$\begin{aligned} \label{eq:ModPAction} \sum_jS_j^b|\psi\rangle=0\;,\qquad {{\mathcal{P}}_l^a}_bS_l^b &=0\; \quad\text{ and hence }\quad 0=\sum_{j(\neq l)}{{\mathcal{P}}_l^a}_b S_j^b|\psi\rangle\;.\end{aligned}$$ The first equation is just the statement that $|\psi\rangle$ is a singlet, the second equation follows from a straightforward calculation and the last equality is a simple consequence of the former two. We can then combine Eqs.  and with arbitrary coefficients to obtain $$\begin{aligned} \label{eq:OperatorC} {\mathcal{C}}_l^a\bigl(\{z_i\}\bigr)|\psi\rangle =0 \quad\text{ where }\quad {\mathcal{C}}_l^a\bigl(\{z_i\}\bigr) =\sum_{j(\neq l)}\Omega_{lj}\,{{\mathcal{P}}_l^a}_b\,S_j^b\end{aligned}$$ for relatively general choices of the parameters $\Omega_{lj}$ (as functions of the $z_i$). A few of these choices will be discussed below in Section \[sc:Circle\].
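The on-site identity ${{\mathcal{P}}_l^a}_bS_l^b=0$ quoted above, as well as the generic properties of the parent Hamiltonian $H=\sum_{l,a}{\mathcal{C}}_l^{a\dag}{\mathcal{C}}_l^a$ (hermiticity, positive semi-definiteness and global $SU(2)$ invariance), are easy to confirm numerically. The following sketch reuses `spin_matrices` and `eps` from the previous snippet; the choices $S=\frac{1}{2}$, $L=4$ and $\Omega_{lj}=1/(z_l-z_j)$ are for illustration only, and the existence of the zero-energy state $|\psi\rangle$ itself is not re-derived here (that is the content of the CFT argument above).

```python
from functools import reduce
import numpy as np

def proj_blocks(S):
    """Single-site operator-valued matrix P^a_b from the symmetrized projector formula."""
    Smat = spin_matrices(S)
    d = int(round(2 * S)) + 1
    P = np.zeros((3, 3, d, d), dtype=complex)
    for a in range(3):
        for b in range(3):
            P[a, b] = (a == b) * (S + 1) / (2 * S + 1) * np.eye(d) \
                - 1j * (2 * S + 3) / (2 * (S + 1) * (2 * S + 1)) * sum(eps[a, b, c] * Smat[c] for c in range(3)) \
                - (Smat[a] @ Smat[b] + Smat[b] @ Smat[a]) / (2 * (S + 1) * (2 * S + 1))
    return P

S = 0.5
d = int(round(2 * S)) + 1
Smat, P = spin_matrices(S), proj_blocks(S)

# on-site identity behind the second equation above: sum_b P^a_b S^b = 0
for a in range(3):
    assert np.allclose(sum(P[a, b] @ Smat[b] for b in range(3)), 0)

# assemble C_l^a and H = sum_l C_l^dagger . C_l for a toy chain on the unit circle
L = 4
z = np.exp(2j * np.pi * np.arange(1, L + 1) / L)

def site_op(op, site):
    """Embed a single-site operator at the given site of the L-fold tensor product."""
    ops = [np.eye(d, dtype=complex)] * L
    ops[site] = op
    return reduce(np.kron, ops)

H = np.zeros((d ** L, d ** L), dtype=complex)
for l in range(L):
    for a in range(3):
        C = sum(site_op(P[a, b], l) @ site_op(Smat[b], j) / (z[l] - z[j])
                for j in range(L) if j != l for b in range(3))
        H += C.conj().T @ C

Stot = [sum(site_op(Smat[a], j) for j in range(L)) for a in range(3)]
assert np.allclose(H, H.conj().T)                       # Hermitian
assert np.linalg.eigvalsh(H).min() > -1e-9              # positive semi-definite by construction
assert all(np.allclose(H @ Sa, Sa @ H) for Sa in Stot)  # global SU(2) invariance
print("C_l^a / parent Hamiltonian checks passed")
```

The last three checks hold for any choice of $\Omega_{lj}$ since ${\mathcal{C}}_l^a$ transforms as an $SU(2)$ vector and $H$ is assembled as a sum of squares.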
The operators ${\mathcal{C}}_l^a\bigl(\{z_i\}\bigr)$ are used to construct the parent Hamiltonian defined in Eq. . The evaluation of this expression is greatly simplified by the fact that the matrices ${{\mathcal{P}}_l^a}_b$ are projectors and hermitean. The resulting terms are all straightforward to bring into a convenient form except for the last one which is quartic in spin operators. Upon using the commutation relations and some other identities listed in Appendix \[ap:Notation\] we then ultimately find $$\begin{aligned} \label{eq:HSymmetrized} H&=\sum_{l=1}^L{\mathcal{C}}_l^a\bigl(\{z_i\}\bigr)^\dag\kappa_{ab}\,{\mathcal{C}}_l^b\bigl(\{z_i\}\bigr) =\sum_k\sum_{i,j(\neq k)}\bar{\Omega}_{ki}\Omega_{kj}\biggl[ \frac{S+1}{2S+1}\,{\mathbf{S}}_i\cdot{\mathbf{S}}_j -\frac{\delta_{ij}\,{\mathbf{S}}_k\cdot{\mathbf{S}}_i}{2(S+1)(2S+1)}\nonumber\\[2mm] &\qquad\qquad\qquad+\frac{i(2S+3)}{2(S+1)(2S+1)} \,({\mathbf{S}}_i\times{\mathbf{S}}_k)\cdot{\mathbf{S}}_j -\frac{({\mathbf{S}}_i\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_j) +({\mathbf{S}}_j\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_i)}{2(S+1)(2S+1)}\biggr]\;.$$ We note that this Hamiltonian features bilinear and biquadratic interactions[^7] as well as three- and four-spin terms. The three-spin term is special in the sense that it is chiral, i.e. not invariant under time-reversal or a reordering of the spins. However, it may well be absent for specific choices of the parameters $\Omega_{ij}$ as will be discussed in Section \[sc:Circle\]. \[sc:GroundState\]Conformal field theory description of the ground state ======================================================================== To determine the explicit form of the [$\infty$MPS]{}ground state we need to calculate the WZW correlation function $$\begin{aligned} \label{eq:ConformalBlock} \bigl\langle\psi_{m_1}(z_1)\cdots\psi_{m_L}(z_L)\bigr\rangle\;.\end{aligned}$$ We will now show that this correlator follows from a symmetrization argument that closely resembles that given in [@Greiter:2011jp], but here from the perspective of the underlying CFT. We first of all note that there is a diagonal embedding $$\begin{aligned} \label{eq:EmbeddingNaive} SU(2)_k\subset\underbrace{SU(2)_1\times\cdots\times SU(2)_1}_{k\text{ factors}}\;.\end{aligned}$$ However, this embedding is not conformal since the central charges on the two sides of the equation do not coincide: $$\begin{aligned} \frac{3k}{k+2}\neq k\times 1\qquad(\text{except for }k=1)\;.\end{aligned}$$ As we will show momentarily there is still a way of thinking of the simple current $\psi^{(k)}(z)$ in the $SU(2)_k$ WZW model as being built from the simple currents $\psi_\alpha^{(1)}(z)$ (with $\alpha=1,\ldots,k$) of the $SU(2)_1$ WZW models appearing on the right hand side of Eq. . In a first step we make the embedding conformal by considering $$\begin{aligned} \label{eq:Embedding} \frac{SU(2)_1\times\cdots\times SU(2)_1}{SU(2)_k}\times SU(2)_k \ \subset\ \underbrace{SU(2)_1\times\cdots\times SU(2)_1}_{k\text{ factors}}\end{aligned}$$ instead of Eq. . The primary fields of the product theory are associated with representations ${\mathcal{H}}_{(j_1,\ldots,j_L)}={\mathcal{H}}_{j_1}^{(1)}\otimes\cdots\otimes{\mathcal{H}}_{j_L}^{(1)}$ where $j_i=0,\frac{1}{2}$. 
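Before turning to the branching rules, it may help to record the elementary bookkeeping behind this embedding. The short script below (Python; it assumes only the standard $SU(2)_k$ formulas $c=\frac{3k}{k+2}$ and $h_j=\frac{j(j+1)}{k+2}$) lists the central charge of the diagonal coset and checks that the simple current weight satisfies $h^{(k)}=k\,h^{(1)}=\frac{k}{4}$, the relation exploited below.

```python
from fractions import Fraction

def c_su2(k):
    """Central charge of the SU(2)_k WZW model."""
    return Fraction(3 * k, k + 2)

def h_su2(j2, k):
    """Conformal weight of the spin-j primary (j = j2/2) in SU(2)_k."""
    return Fraction(j2 * (j2 + 2), 4 * (k + 2))

for k in range(1, 7):
    c_coset = k * c_su2(1) - c_su2(k)      # central charge of SU(2)_1^k / SU(2)_k
    h_simple = h_su2(k, k)                 # simple current j = k/2
    assert h_simple == k * h_su2(1, 1)     # h^{(k)} = k * h^{(1)} = k/4
    print(k, c_su2(k), c_coset, h_simple)  # e.g. k = 2 gives coset charge 1/2 (Ising)
```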
According to the usual branching rules, these decompose into a direct sum $$\begin{aligned} \label{eq:RepDecomposition} {\mathcal{H}}_{(j_1,\ldots,j_L)} =\bigoplus_{j=0}^{k/2}\,{\mathcal{H}}_{(j_1,\ldots,j_L|j)}\otimes{\mathcal{H}}_j^{(k)}\end{aligned}$$ under the embedding , where ${\mathcal{H}}_{(j_1,\ldots,j_L|j)}$ is a representation of the diagonal coset and ${\mathcal{H}}_j^{(k)}$ one of $SU(2)_k$. Some of the coset representations are in fact trivial in the sense that they do not appear in the decomposition . This is described by selection rules which, in the present case, have the form $$\begin{aligned} \label{eq:SelectionRule} j_1+\cdots+j_L+j\equiv0\mod2\;.\end{aligned}$$ In other words, for a representation $j$ of $SU(2)_k$ to appear in the decomposition  it needs to satisfy Eq. . The simple current we are interested in corresponds to $j=\frac{k}{2}$ and we see that this arises very naturally from the product representation ${\mathcal{H}}_{(\frac{1}{2},\ldots,\frac{1}{2})}$ which corresponds to a certain projection of the product of the $SU(2)_1$ simple currents in the product theory. On the other hand, the latter projection also gives rise to a coset field corresponding to ${\mathcal{H}}_{(\frac{1}{2},\ldots,\frac{1}{2}|\frac{k}{2})}$. We hence still need to explain why this coset field does not play a role and, moreover, specify in more detail what projection to choose in order to single out the $j=\frac{k}{2}$ component of the composition . Let us start with the former task. It is well-known that the presence of selection rules goes hand in hand with the presence of “field identifications” that in the present case read $$\begin{aligned} {\mathcal{H}}_{(j_1,\ldots,j_L|j)} \cong{\mathcal{H}}_{(\frac{1}{2}-j_1,\ldots,\frac{1}{2}-j_L|\frac{k}{2}-j)}\;.\end{aligned}$$ We immediately recognize that this field identification implies $$\begin{aligned} {\mathcal{H}}_{(\frac{1}{2},\ldots,\frac{1}{2}|\frac{k}{2})} \cong{\mathcal{H}}_{(0,\ldots,0|0)}\;,\end{aligned}$$ which is just the vacuum module of the diagonal coset theory. In other words, the simple current of the $SU(2)_k$ WZW model that we are after is paired with the identity field of the coset (or one of its descendants). A comparison of conformal dimensions (see below) implies that the remaining factor is indeed a multiple of the identity field and not a non-trivial descendant. This is good news since this means that the desired correlator of the $SU(2)_k$ simple current can be directly obtained from the product of correlators of the $SU(2)_1$ simple currents, without additional complications due to the coset part. From Young diagram techniques it is evident that the $\frac{k}{2}$ representation of $SU(2)$ arises as the top component in the tensor product $\frac{1}{2}\otimes\cdots\otimes\frac{1}{2}$ ($k$ factors) which is associated with the complete symmetrization of all factors. Hence we obtain the identity[^8] $$\begin{aligned} \label{eq:Psik} \psi^{(k)}(z) =\text{Symmetrization}\bigl(\psi_1^{(1)}(z)\cdots\psi_k^{(1)}(z)\bigr)\end{aligned}$$ relating the simple currents in $SU(2)_k$ and $k$ copies of $SU(2)_1$, where the symmetrization refers to the magnetic quantum numbers $m_\alpha=\pm\frac{1}{2}$ which have been supressed. We note that each of the fields $\psi_l^{(1)}(z)$ contributes $h^{(1)}=\frac{1}{4}$ to the total conformal dimension of the product since they mutually commute. 
This precisely matches $h^{(k)}=\frac{k}{4}$ for the field $\psi^{(k)}(z)$ on the left hand side[^9] and indeed shows that relation holds without the insertion of a descendant in the coset part. For the determination of the desired conformal block  it thus suffices to have knowledge of the $SU(2)_1$ correlation functions which can be calculated by means of a free field construction [@FrancescoCFT]. Instead of reproducing the derivation we just spell out the result [@Cirac:2010PhRvB..81j4431C], $$\begin{aligned} \bigl\langle\psi_{m_1}^{(1)}(z_1)\cdots\psi_{m_L}^{(1)}(z_L)\bigr\rangle =\rho_{{\mathbf{m}}}\prod_{i<j}(z_i-z_j)^{2m_im_j}\;.\end{aligned}$$ The constant $\rho_{{\mathbf{m}}}$ appearing here is given by $$\begin{aligned} \rho_{{\mathbf{m}}} =\prod_{i\text{ odd}}\rho_{m_i} \qquad\text{ where }\quad \rho_m=\begin{cases} \phantom{-}1&,\;m=\frac{1}{2}\\ -1&,\;m=-\frac{1}{2} \end{cases}\end{aligned}$$ and is known as the Marshall sign factor. As was suggested in [@Arovas:1988PhRvL..60..531A] in the context of AKLT models and then adapted to the non-abelian chiral spin liquid in [@Greiter:2009PhRvL.102t7203G], the symmetrization can be carried out quite explicitly in terms of Schwinger bosons, see Appendix \[ap:SchwingerBosons\]. In that formalism and setting $M=\frac{L}{2}$, the state that is relevant for the case $k=1$ is[^10] $$\begin{aligned} |\psi_0^{\text{KL}}\rangle =\sum_{\{\xi_1,\ldots,\xi_M\}} \psi_0^{\text{KL}}(z_{\xi_1},\ldots,z_{\xi_M})\, a_{\xi_1}^\dag\cdots a_{\xi_M}^\dag b_{\eta_1}^\dag\cdots b_{\eta_M}^\dag|0\rangle =\psi_0^{\text{KL}}[{\mathbf{a}}^\dag,{\mathbf{b}}^\dag]|0\rangle\;,\end{aligned}$$ where the sum extends over all subsets $\{\xi_1<\ldots<\xi_M\}\subset\{1,\ldots,L\}$, the subset $\{\eta_1<\ldots<\eta_M\}$ complements the previous one and all entries are ordered as indicated. This state may then easily be generalized to the spin-$S$ representation by defining [@Greiter:2009PhRvL.102t7203G] $$\begin{aligned} \label{eq:SpinS} |\psi_0^S\rangle =\bigl(\psi_0^{\text{KL}}[{\mathbf{a}}^\dag,{\mathbf{b}}^\dag]\bigr)^{2S}|0\rangle\;.\end{aligned}$$ The mathematical situation is depicted in the following diagram,

[Diagram: an array of $L\times k$ spin-$\frac{1}{2}$'s — each of the $k$ rows of $L$ spins is projected onto a singlet ($\rightarrow0$), while each of the $L$ columns is projected onto the maximal spin $\frac{k}{2}$.]

On the one hand we have the Hilbert space of $k$ individual spin systems (it is useful to think about them as chains here) which are each projected into a specific singlet by means of the [$\infty$MPS]{}construction for $k=1$.
On the other hand we have projections to the maximal spin component $\frac{k}{2}$ across the various spin systems for each individual site.[^11] From the perspective of the previous comments it is sensible to first define the state[^12] $$\begin{aligned} |\psi_0\rangle_{k\text{ copies}} =\bigl(\psi_0^{\text{KL}}[{\mathbf{a}}_1^\dag,{\mathbf{b}}_1^\dag]\bigr)\cdots\bigl(\psi_0^{\text{KL}}[{\mathbf{a}}_k^\dag,{\mathbf{b}}_k^\dag]\bigr)|0\rangle\;.\end{aligned}$$ This is a singlet but it still lives in the Hilbert space $((\frac{1}{2})^{\otimes L})^{\otimes k}=(\frac{1}{2})^{\otimes kL}$. The desired projection onto $(\frac{k}{2})^{\otimes L}$ is implemented by means of a complete symmetrization, i.e. by identifying all the bosons: $({\mathbf{a}}_i,{\mathbf{b}}_i)=({\mathbf{a}},{\mathbf{b}})$. This procedure leads directly to the quantum state . This argument proves that the [$\infty$MPS]{}underlying our construction of the parent Hamiltonian indeed agrees with the state introduced by Greiter and Thomale [@Greiter:2009PhRvL.102t7203G]. \[sc:Circle\]Equidistant spins on the unit circle ================================================= The Hamiltonian can be simplified significantly if the parameters $z_l$ are assumed to be distributed equidistantly on the unit circle, i.e. $z_l=e^{\frac{2\pi i}{L}l}$, and the functions $\Omega_{ij}$ are chosen appropriately.[^13] Natural choices for $\Omega_{ij}$ that are consistent with the derivation of the operators ${\mathcal{C}}_l$ in Eq.  are $$\begin{aligned} x_{ij}=\frac{1}{z_i-z_j}\;,\quad w_{ij}=\frac{z_i+z_j}{z_i-z_j} \quad\text{ and }\quad \theta_{ij}=\frac{z_i}{z_i-z_j}\;.\end{aligned}$$ On the unit circle one has $\bar{z}_i=1/z_i$ and this allows to simplify expressions such as $|\Omega_{ij}|^2$ or $\bar{\Omega}_{li}\Omega_{lj}$ that appear in the [$\infty$MPS]{}parent Hamiltonian. In what follows we restrict our attention to the choice $w_{ij}=\frac{z_i+z_j}{z_i-z_j}$. The most significant benefit of this choice is the absence of the cross product term $({\mathbf{S}}_i\times{\mathbf{S}}_k)\cdot{\mathbf{S}}_j$ in Eq.  whenever $i\neq j$. Indeed, this term cancels out in view of the relation $\bar{w}_{ij}=-w_{ij}$ and the resulting symmetry of $\bar{w}_{ki}w_{kj}=-w_{ki}w_{kj}$ in the indices $i$ and $j$. For $i=j\neq k$ on the other hand we can use the identities of Appendix \[ap:Notation\] to write $$\begin{aligned} i({\mathbf{S}}_i\times{\mathbf{S}}_k)\cdot{\mathbf{S}}_j ={\mathbf{S}}_i\cdot{\mathbf{S}}_k\;,\end{aligned}$$ which is bilinear. It is convenient to collect all bilinears into a single sum and to simplify all the constant contributions that arise on the way. To do so we first of all evaluate the sums [@Nielsen:2011py] $$\begin{aligned} \label{eq:WGeneral} \sum_{k(\neq i,j)}\bar{w}_{ki}w_{kj} =2-L-2w_{ij}^2 \quad\text{ and }\quad \sum_{k\neq l}|w_{kl}|^2 =\frac{1}{3}L(L-1)(L-2)\;.\end{aligned}$$ After using the first expression in Eq. , the Hamiltonian features spin-spin interactions without dependence on the parameters $z_l$. These can be rewritten in terms of the identity operator and the square ${\mathbf{S}}_{\text{tot}}^2$ of the total spin operator ${\mathbf{S}}_{\text{tot}}$ using the identity $ \sum_{k\neq l}{\mathbf{S}}_k\cdot{\mathbf{S}}_l ={\mathbf{S}}_{\text{tot}}^2-LS(S+1)$. 
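Both sums, as well as the relation $|w_{ij}|^2=\sin^{-2}\bigl(\tfrac{\pi}{L}(i-j)\bigr)-1$ used in the next step, are easy to confirm numerically for equidistant points; a minimal check (Python/NumPy, with $L=8$ chosen arbitrarily) reads:

```python
import numpy as np

L = 8
z = np.exp(2j * np.pi * np.arange(1, L + 1) / L)
w = lambda i, j: (z[i] + z[j]) / (z[i] - z[j])

# sum_{k != l} |w_{kl}|^2 = L(L-1)(L-2)/3
total = sum(abs(w(k, l)) ** 2 for k in range(L) for l in range(L) if k != l)
assert np.isclose(total, L * (L - 1) * (L - 2) / 3)

# sum_{k != i,j} conj(w_{ki}) w_{kj} = 2 - L - 2 w_{ij}^2   and   |w_{ij}|^2 = 1/sin^2(pi(i-j)/L) - 1
for i in range(L):
    for j in range(L):
        if i == j:
            continue
        s = sum(np.conj(w(k, i)) * w(k, j) for k in range(L) if k not in (i, j))
        assert np.isclose(s, 2 - L - 2 * w(i, j) ** 2)
        assert np.isclose(abs(w(i, j)) ** 2, 1 / np.sin(np.pi * (i - j) / L) ** 2 - 1)
print("circle identities verified for L =", L)
```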
After collecting all terms of the same form we then arrive at[^14] $$\begin{aligned} \label{eq:HPreFinal} H&=\frac{2\pi^2}{L^2}\Biggl[\frac{LS(S+1)^2(L-2)(L+2)}{12(2S+3)} -\frac{1}{4}\sum_{i\neq j}w_{ij}^2 \,{\mathbf{S}}_i\cdot{\mathbf{S}}_j\nonumber\\[2mm] &\qquad\qquad -\sum_k\sum_{i,j(\neq k)}\bar{w}_{ki}w_{kj}\frac{({\mathbf{S}}_i\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_j) +({\mathbf{S}}_j\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_i)}{8(S+1)(2S+3)} -\frac{(S+1)(L-2)}{4(2S+3)}\,{\mathbf{S}}_{\text{tot}}^2\Biggr]\;.\end{aligned}$$ We note that the four-spin term includes biquadratic spin couplings for $i=j$. It remains to obtain a better idea about the coordinate dependence of the interactions, i.e. the dependence on the parameters $z_l$. For the bilinear term the relevant expression is $$\begin{aligned} \label{eq:wij} |w_{ij}|^2 =-w_{ij}^2 =-\frac{4z_iz_j}{(z_i-z_j)^2}-1 =\frac{4}{|z_i-z_j|^2}-1 =\frac{1}{\sin^2\frac{\pi}{L}(i-j)}-1\;.\end{aligned}$$ This clearly reproduces the distance dependence familiar from the Haldane-Shastry Hamiltonian. The simplification of the expression $\bar{w}_{ki}w_{kj}$ requires slightly more thoughts but eventually it is a matter of elementary algebra to show that $$\begin{aligned} \bar{w}_{ki}w_{kj} =\frac{2}{(\bar{z}_k-\bar{z}_i)(z_k-z_j)} +\frac{2}{(\bar{z}_k-\bar{z}_j)(z_k-z_i)}-1\;.\end{aligned}$$ We note that the last expression is multiplying a term that is symmetric in the indices $i$ and $j$. Hence the summation simplifies and we are left with the final expression $$\begin{aligned} \label{eq:HFinal} H&=\frac{2\pi^2}{L^2}\Biggl\{\frac{LS(S+1)\bigl(L^2(S+1)+2S+5\bigr)}{12(2S+3)} +\sum_{k\neq l}\frac{{\mathbf{S}}_k\cdot{\mathbf{S}}_l}{|z_k-z_l|^2} -\frac{L(S+1)+1}{4(2S+3)}\,{\mathbf{S}}_{\text{tot}}^2\nonumber\\[2mm] &\qquad\qquad -\sum_k\sum_{i,j(\neq k)}\frac{({\mathbf{S}}_i\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_j) +({\mathbf{S}}_j\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_i)}{2(S+1)(2S+3)(\bar{z}_k-\bar{z}_i)(z_k-z_j)} +\sum_k\sum_{i,j(\neq k)}\frac{({\mathbf{S}}_i\cdot{\mathbf{S}}_k)({\mathbf{S}}_k\cdot{\mathbf{S}}_j)}{4(S+1)(2S+3)}\Biggr\}\;.\end{aligned}$$ We see that the [$\infty$MPS]{}Hamiltonian features Haldane-Shastry-like spin-spin interactions as well as three-spin interactions, all of them long-ranged. In addition there is a term proportional to ${\mathbf{S}}_{\text{tot}}^2$ that can be interpreted as some sort of chemical potential that is favoring large total spins. It is likely though that this term will be dominated by contributions arising from the final three-spin term and that, overall, small spins will be favoured. When carrying out the sums over $i$ and $j$, the final term can be written in terms of the total spin ${\mathbf{S}}_{\text{tot}}$ but is is unclear how to simplify the sum $\sum_k({\mathbf{S}}_k\cdot{\mathbf{S}}_{\text{tot}})^2$ that arises in this way. A closer inspection reveals that the Hamiltonian is similar but not quite identical to the one that was derived in Ref. [@Greiter:2011jp]. Indeed, while the $z$-dependent (and hence distance dependent) interaction terms precisely agree, the Hamiltonian  features additional terms which have the flavor of chemical potentials. From a naive perspective it looks possible to remedy the issue by working with one of the other two choices for $\Omega_{ij}$ that we have mentioned at the beginning of this section, namely $x_{ij}$ or $\theta_{ij}$. 
Indeed, these different choices are related by $$\begin{aligned} \label{eq:Change} |w_{ij}|^2+1 &=4|\theta_{ij}|^2 =4|x_{ij}|^2 =-\frac{4z_iz_j}{(z_i-z_j)^2} =\frac{4}{|z_i-z_j|^2}\nonumber\\[2mm] \bar{w}_{ki}w_{kj} &=2(\bar{\theta}_{ki}\theta_{kj}+\bar{\theta}_{kj}\theta_{ki})-1 =2(\bar{x}_{ki}x_{kj}+\bar{x}_{kj}x_{ki})-1\;,\end{aligned}$$ so they are guaranteed to give the desired dependence on the parameters $z_l$ when inserted into the Hamiltonian . On the other hand, for both of these choices the three-spin term $({\mathbf{S}}_i\times{\mathbf{S}}_k)\cdot{\mathbf{S}}_j$ does not drop out, which results in a Hamiltonian that is manifestly chiral. While this may be desired in a 2D setting, it seems rather unnatural in one dimension, at least if one wishes to find a realization of the $SU(2)_k$ WZW model. Conclusions =========== In this paper we have applied the [$\infty$MPS]{}construction to $SU(2)_k$ WZW models. Using a basis-independent formalism for spin operators and projectors allowed us to derive explicit parent Hamiltonians for 1D and 2D quantum spin models. We also established a precise correspondence between our conformal field theory approach and the symmetrization approach for trial wavefunctions of a family of non-abelian chiral spin liquids [@Greiter:2009PhRvL.102t7203G]. Even though the wavefunctions agree, there seems to be a slight mismatch in the associated parent Hamiltonians. To our knowledge, this paper has been the first complete and explicit application of the [$\infty$MPS]{}construction to a family of truly interacting conformal field theories after earlier papers on $U(1)$, $SU(2)_1$, $SO(N)_1$ and $SU(N)_1$ WZW models [@Cirac:2010PhRvB..81j4431C; @Nielsen:2011py; @Tu:2013PhRvB..87d1103T; @Tu:2014NuPhB.886..328T; @Bondesan:2014NuPhB.886..483B] which can all be interpreted as free field theories (with the special case $SO(3)_1\cong SU(2)_2$). Our present discussion focused on 2D quantum spin systems on the plane (or Riemann sphere) and closed 1D systems on the circle. It is known that the [$\infty$MPS]{}construction generalizes to 1D systems with open boundary conditions [@Tu:2015PhRvB..92d1119T; @BasuMallick:2016PhRvB..93o5154B] and to systems on the torus [@Nielsen:2014JSMTE..04..007N; @Deshpande:2016JSMTE..01.3102D] and it would be interesting to carry these constructions out for $SU(2)_k$. Moreover, the non-abelian chiral spin liquid we constructed was obtained by combining $k$ [*identical*]{} copies of the state for $k=1$. It would be worthwhile to understand how the more general possibilities of combining $k=1$ theories as discussed in Ref. [@Scharfenberger:2011PhRvB..84n0404S] can be realized in the framework of [$\infty$MPS]{}. Finally, the last few years have seen tremendous efforts to understand abelian and non-abelian spin liquids using what is called a coupled wire construction [@Gorohovsky:2015PhRvB..91x5139G; @Meng:2015PhRvB..91x1106M; @Huang:2016PhRvB..93t5123H] and generalized spin ladders [@Lecheminant:2017PhRvB..95n0406L]. While there are certain similarities to the approach discussed in the present paper it still needs to be verified whether these are merely superficial or whether both constructions have a more intimate relationship. ### Acknowledgements {#acknowledgements .unnumbered} AR and TQ gratefully acknowledge discussions with Roberto Bondesan and collaboration on closely related topics.
This research was conducted by the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (project number CE140100049) and partially funded by the Australian Government. Parts of this work were carried out while the authors were employed at the Institute of Theoretical Physics at the University of Cologne. We would like to thank the DFG for financial support through M. Zirnbauer’s Leibniz Prize, DFG grant no. ZI 513/2-1. Additional support was received from the DFG through the SFB$|$TR 12 “Symmetries and Universality in Mesoscopic Systems” and the Center of Excellence “Quantum Matter and Materials”. \[ap:Notation\]Summary of notation and conventions ================================================== Throughout the paper we will work with a basis of spin operators $S^a$ (with $a=1,2,3$) that satisfy $$\begin{aligned} [S^a,S^b] =i{\epsilon^{ab}}_c\,S^c\;,\end{aligned}$$ where ${\epsilon^{ab}}_c$ is completely antisymmetric in its indices and ${\epsilon^{12}}_3=1$. Indices are raised (and lowered) with the Killing form $\kappa^{ab}=\delta^{ab}$ (and its inverse $\kappa_{ab}=\delta_{ab}$). For certain simplifications we will need the identities $$\begin{aligned} \label{eq:epsidentity} \epsilon^{abc}\,\epsilon_{abd} =2\,\delta^c_d \quad\text{ and }\quad \epsilon^{abe}{\epsilon^{cd}}_e =\delta^{ac}\delta^{bd}-\delta^{ad}\delta^{bc}\;.\end{aligned}$$ In writing these (and other) formulas Einstein’s summation convention is understood, i.e. indices occurring twice on one side of an equation are summed over the relevant range. \[ap:SchwingerBosons\]Schwinger bosons ====================================== The essential idea of the Schwinger boson construction is to realize all finite dimensional irreducible representations of $SU(2)$ on the Fock space of two bosons [@SchwingerBosons]. As we will review shortly, individual representations can be singled out simply by projection onto subspaces of specific boson number. Let $a^\dag,a$ and $b^\dag,b$ be the creation and annihilation operators of the two bosons. By definition, these satisfy $$\begin{aligned} [a,a^\dag]=[b,b^\dag]=1\;,\end{aligned}$$ with all other commutators of these generators vanishing. From these we define spin operators $$\begin{aligned} S^z &=\frac{1}{2}(a^\dag a-b^\dag b)\;,& S^+ &=a^\dag b\;,& S^- &=b^\dag a\end{aligned}$$ that satisfy the desired commutation relations of $SU(2)$. The Fock space of the Schwinger bosons hence realizes a representation of $SU(2)$, containing the irreducible representations $S=0,\frac{1}{2},1,\ldots$. In view of the relation $$\begin{aligned} {\mathbf{S}}^2 =\frac{1}{4}\,(a^\dag a+b^\dag b)(a^\dag a+b^\dag b+2) =S(S+1)\end{aligned}$$ half the total boson number $$\begin{aligned} S=\frac{1}{2}\bigl(a^\dag a+b^\dag b\bigr)\end{aligned}$$ may be interpreted as the spin of the representation that is realized. The Fock space is created from a vacuum vector $|0\rangle$ defined by $a|0\rangle=b|0\rangle=0$. An orthonormal basis of states is obtained by defining $$\begin{aligned} |S,m\rangle =\frac{(a^\dag)^{S+m}}{\sqrt{(S+m)!}} \frac{(b^\dag)^{S-m}}{\sqrt{(S-m)!}}\,|0\rangle\;.\end{aligned}$$ We finally note that $$\begin{aligned} |S,-S\rangle =\frac{(b^\dag)^{2S}}{\sqrt{(2S)!}}\,|0\rangle\;.\end{aligned}$$ This is the state in the spin-$S$ representation with minimal value of $S^z$ (spin maximally down). 
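As a small numerical illustration (not part of the derivation above), the Schwinger boson construction can be checked directly on a truncated Fock space; the truncation level, the choice of spin values and the use of numpy are assumptions made only for this sketch.

```python
# Sanity check of the Schwinger-boson construction: two truncated bosonic
# modes are combined into spin operators, and the SU(2) algebra and the value
# of S^2 are verified on subspaces of fixed total boson number 2S.
import numpy as np

n_max = 6  # truncation of each bosonic Fock space (arbitrary choice)

def destroy(n):
    """Truncated bosonic annihilation operator on n Fock levels."""
    return np.diag(np.sqrt(np.arange(1.0, n)), k=1)

a = np.kron(destroy(n_max), np.eye(n_max))
b = np.kron(np.eye(n_max), destroy(n_max))
ad, bd = a.T, b.T

Sz = 0.5 * (ad @ a - bd @ b)
Sp = ad @ b                      # S^+ = a^dag b
Sm = bd @ a                      # S^- = b^dag a
S2 = Sz @ Sz + 0.5 * (Sp @ Sm + Sm @ Sp)
num = ad @ a + bd @ b            # total boson number operator

print("[S^z, S^+] = S^+ :", np.allclose(Sz @ Sp - Sp @ Sz, Sp))

for S in (0.5, 1.0, 1.5, 2.0):
    # projector onto the subspace with a^dag a + b^dag b = 2S
    P = np.diag(np.isclose(np.diag(num), 2 * S).astype(float))
    ok_algebra = np.allclose(P @ (Sp @ Sm - Sm @ Sp) @ P, 2 * P @ Sz @ P)
    ok_casimir = np.allclose(P @ S2 @ P, S * (S + 1) * P)
    print(f"S = {S}: [S^+,S^-]=2S^z -> {ok_algebra},  S^2=S(S+1) -> {ok_casimir}")
```

All checks return `True` as long as $2S$ stays below the truncation level, illustrating that projection onto a fixed boson number singles out a single irreducible representation.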
The Schwinger bosons may be used to represent the two states of the spin-$\frac{1}{2}$ representation as follows, $$\begin{aligned} |\!\uparrow\rangle =a^\dag|0\rangle \quad\text{ and }\quad |\!\downarrow\rangle =b^\dag|0\rangle\,\end{aligned}$$ and similarly for the three states of the spin-$1$ representation, $$\begin{aligned} |\!\Uparrow\rangle =\frac{1}{\sqrt{2}}(a^\dag)^2|0\rangle\;,\quad |\!\Leftrightarrow\rangle =a^\dag b^\dag|0\rangle \quad\text{ and }\quad |\!\Downarrow\rangle =\frac{1}{\sqrt{2}}(b^\dag)^2|0\rangle\;.\end{aligned}$$ \[ap:GeneralG\]Simple current correlators for other symmetry groups =================================================================== [cccc]{} Type & Group & Simple current(s) & Conformal dimension\ \ $A$ & $SU(N)$ & $j_l=k\omega_l$ & $h_{j_l}=\frac{kl(N-l)}{2N}$\ $B$ & $SO(2N+1)$ & $j=k\omega_1$ & $h_j=\frac{k}{2}$\ $C$ & $SP(2N)$ & $j=k\omega_N$ & $h_j=\frac{kN}{4}$\ $D$ & $SO(2N)$ & $j_1=k\omega_1$ & $h_{j_1}=\frac{k}{2}$\ && $j_2=k\omega_{N-1}$ & $h_{j_2}=\frac{kN}{8}$\ && $j_3=k\omega_N$ & $h_{j_3}=\frac{kN}{8}$\ $E$ & $E_6$ & $j_1=k\omega_1$ & $h_{j_1}=\frac{2k}{3}$\ && $j_2=k\omega_5$ & $h_{j_2}=\frac{2k}{3}$\ $E$ & $E_7$ & $j=k\omega_6$ & $h_j=\frac{3k}{4}$ The considerations for the ground state wave function presented in Section \[sc:GroundState\] carry over to other WZW models at generic level $k$ based on simple Lie groups. This is essentially due to the relation $h_{j_k}=kh_{j_1}$ where $j_k$ is a simple current in the level-$k$ theory that arises from the symmetric combination of $k$ simple currents $j_1$ in $k$ distinct level-$1$ theories based on the same group. The relevant relation may be verified case by case and a more detailed discussion can be found in the book [@Fuchs:1995], see especially the paragraphs around Eqs. (3.5.43) and (5.1.30). Here we content ourselves with the following table that summarizes the simple currents at generic levels (in the labeling conventions of [@FrancescoCFT]), together with their conformal dimensions. \[2\][\#2]{} [10]{} M. [Greiter]{} and R. [Thomale]{}, “Non-abelian statistics in a quantum antiferromagnet,” [[*Phys. Rev. Lett.*]{} [[*102*]{}]{} (2009) 207203](http://dx.doi.org/10.1103/PhysRevLett.102.207203), [[arXiv:0903.4547]{}](http://arxiv.org/abs/0903.4547). D. C. [Tsui]{}, H. L. [Störmer]{}, and A. C. [Gossard]{}, “Two-dimensional magnetotransport in the extreme quantum limit,” [[*Phys. Rev. Lett.*]{} [[*48*]{}]{} (1982) 1559–1562](http://dx.doi.org/10.1103/PhysRevLett.48.1559). R. B. [Laughlin]{}, “[Anomalous quantum Hall effect - An incompressible quantum fluid with fractionally charged excitations]{},” [[*Phys. Rev. Lett.*]{} [[*50*]{}]{} (1983) 1395–1398](http://dx.doi.org/10.1103/PhysRevLett.50.1395). G. W. Moore and N. Read, “[Nonabelions]{} in the fractional quantum [Hall]{} effect,” [[*Nucl. Phys.*]{} [[ *B360*]{}]{} (1991) 362–396](http://dx.doi.org/10.1016/0550-3213(91)90407-O). F. D. M. [Haldane]{}, “[Fractional quantization of the Hall effect - A hierarchy of incompressible quantum fluid states]{},” [[*Phys. Rev. Lett.*]{} [[*51*]{}]{} (1983) 605–608](http://dx.doi.org/10.1103/PhysRevLett.51.605). V. [Kalmeyer]{} and R. B. [Laughlin]{}, “[Equivalence of the resonating-valence-bond and fractional quantum Hall states]{},” [[*Phys. Rev. Lett.*]{} [[*59*]{}]{} (1987) 2095–2098](http://dx.doi.org/10.1103/PhysRevLett.59.2095). D. P. [Arovas]{}, A. [Auerbach]{}, and F. D. M. 
[Haldane]{}, “[Extended Heisenberg models of antiferromagnetism: Analogies to the fractional quantum Hall effect]{},” [[*Phys. Rev. Lett.*]{} [[*60*]{}]{} (1988) 531–534](http://dx.doi.org/10.1103/PhysRevLett.60.531). V. [Kalmeyer]{} and R. B. [Laughlin]{}, “[Theory of the spin liquid state of the Heisenberg antiferromagnet]{},” [[*Phys. Rev.*]{} [[ *B39*]{}]{} (1989) 11879–11899](http://dx.doi.org/10.1103/PhysRevB.39.11879). M. Greiter, [*Mapping of parent Hamiltonians: [From]{} abelian and non-abelian quantum [Hall]{} states to exact models of critical spin chains*]{}, vol. 244 of [*Springer Tracts in Modern Physics*]{}. 2011. [[arXiv:1109.6104]{}](http://arxiv.org/abs/1109.6104). B. [Scharfenberger]{}, R. [Thomale]{}, and M. [Greiter]{}, “Non-abelian statistics and a hierarchy of fractional spin liquids in [spin-S]{} antiferromagnets,” [[*Phys. Rev.*]{} [[ *B84*]{}]{} (2011) 140404](http://dx.doi.org/10.1103/PhysRevB.84.140404), [[ arXiv:1105.4348]{}](http://arxiv.org/abs/1105.4348). D. F. [Schroeter]{}, E. [Kapit]{}, R. [Thomale]{}, and M. [Greiter]{}, “Spin [Hamiltonian]{} for which the chiral spin liquid is the exact ground state,” [[*Phs. Rev. Lett.*]{} [[*99*]{}]{} (2007) 097202](http://dx.doi.org/10.1103/PhysRevLett.99.097202), [[ arXiv:0707.4262]{}](http://arxiv.org/abs/0707.4262). R. [Thomale]{}, E. [Kapit]{}, D. F. [Schroeter]{}, and M. [Greiter]{}, “Parent [Hamiltonian]{} for the chiral spin liquid,” [[*Phys. Rev.*]{} [[ *B80*]{}]{} (2009) 104406](http://dx.doi.org/10.1103/PhysRevB.80.104406), [[ arXiv:0905.3257]{}](http://arxiv.org/abs/0905.3257). H. [Li]{} and F. D. M. [Haldane]{}, “Entanglement spectrum as a generalization of entanglement entropy: [Identification]{} of topological order in non-abelian fractional quantum [Hall]{} effect states,” [[*Phys. Rev. Lett.*]{} [[*101*]{}]{} (2008) 010504](http://dx.doi.org/10.1103/PhysRevLett.101.010504), [[arXiv:0805.0332]{}](http://arxiv.org/abs/0805.0332). N. [Read]{} and E. [Rezayi]{}, “[Beyond paired quantum Hall states: Parafermions and incompressible states in the first excited Landau level]{},” [[*Phys. Rev.*]{} [[ *B59*]{}]{} (1999) 8084–8092](http://dx.doi.org/10.1103/PhysRevB.59.8084), [[cond-mat/9809384]{}](http://arxiv.org/abs/cond-mat/9809384). J. I. [Cirac]{} and G. [Sierra]{}, “Infinite matrix product states, conformal field theory, and the [Haldane-Shastry model]{},” [[*Phys. Rev.*]{} [[ *B81*]{}]{} (2010) 104431](http://dx.doi.org/10.1103/PhysRevB.81.104431), [[ arXiv:0911.3029]{}](http://arxiv.org/abs/0911.3029). A. E. B. Nielsen, J. I. Cirac, and G. Sierra, “Quantum spin [Hamiltonians]{} for the [$SU(2)_k$]{} [WZW]{} model,” [[*J. Stat. Mech.*]{} [[*1111*]{}]{} (2011) P11014](http://dx.doi.org/10.1088/1742-5468/2011/11/P11014), [[arXiv:1109.5470]{}](http://arxiv.org/abs/1109.5470). H.-H. Tu, “Projected [BCS]{} states and spin [Hamiltonians]{} for the [SO(n)$_{1}$]{} [Wess-Zumino-Witten model]{},” [[*Phys. Rev.*]{} [[ *B87*]{}]{} (2013) 041103](http://dx.doi.org/10.1103/PhysRevB.87.041103), [[ arXiv:1210.1481]{}](http://arxiv.org/abs/1210.1481). H.-H. [Tu]{}, A. E. B. [Nielsen]{}, and G. [Sierra]{}, “Quantum spin models for the [$SU(n)_1$]{} [Wess-Zumino-Witten]{} model,” [[*Nucl. Phys.*]{} [[*B886*]{}]{} (2014) 328–363](http://dx.doi.org/10.1016/j.nuclphysb.2014.06.027), [[arXiv:1405.2950]{}](http://arxiv.org/abs/1405.2950). R. [Bondesan]{} and T. [Quella]{}, “Infinite matrix product states for long-range [$SU(N)$]{} spin models,” [[*Nucl. 
Phys.*]{} [[*B886*]{}]{} (2014) 483–523](http://dx.doi.org/10.1016/j.nuclphysb.2014.07.002), [[arXiv:1405.2971]{}](http://arxiv.org/abs/1405.2971). F. D. M. [Haldane]{}, “Exact [Jastrow-Gutzwiller]{} resonating-valence-bond ground state of the spin-1/2 antiferromagnetic [Heisenberg]{} chain with [$1/r^2$]{} exchange,” [[*Phys. Rev. Lett.*]{} [[*60*]{}]{} (1988) 635–638](http://dx.doi.org/10.1103/PhysRevLett.60.635). B. S. [Shastry]{}, “Exact solution of an [$S=1/2$]{} [Heisenberg]{} antiferromagnetic chain with long-ranged interactions,” [[*Phys. Rev. Lett.*]{} [[*60*]{}]{} (1988) 639–642](http://dx.doi.org/10.1103/PhysRevLett.60.639). F. D. M. [Haldane]{}, “[‘Fractional statistics’ in arbitrary dimensions: A generalization of the Pauli principle]{},” [[*Phys. Rev. Lett.*]{} [[*67*]{}]{} (1991) 937–940](http://dx.doi.org/10.1103/PhysRevLett.67.937). F. D. M. [Haldane]{}, Z. N. C. [Ha]{}, J. C. [Talstra]{}, D. [Bernard]{}, and V. [Pasquier]{}, “Yangian symmetry of integrable quantum chains with long-range interactions and a new description of states in conformal field theory,” [[*Phys. Rev. Lett.*]{} [[*69*]{}]{} (1992) 2021–2025](http://dx.doi.org/10.1103/PhysRevLett.69.2021). D. Gepner and E. Witten, “String theory on group manifolds,” [*Nucl. Phys.*]{} [[*B278*]{}]{} (1986) 493. H. M. [Babujian]{}, “[Exact solution of the isotropic Heisenberg chain with arbitrary spins: Thermodynamics of the model]{},” [[*Nucl. Phys.*]{} [[ *B215*]{}]{} (1983) 317–336](http://dx.doi.org/10.1016/0550-3213(83)90668-5). N. [Reshetikhin]{}, “[S-matrices in integrable models of isotropic magnetic chains. I]{},” [[*J. Phys.*]{} [[*A24*]{}]{} (1991) 3299–3309](http://dx.doi.org/10.1088/0305-4470/24/14/017). R. [Thomale]{}, S. [Rachel]{}, P. [Schmitteckert]{}, and M. [Greiter]{}, “Family of [spin-S]{} chain representations of [SU(2)$_{k}$]{} [Wess-Zumino-Witten]{} models,” [[*Phys. Rev.*]{} [[*B85*]{}]{} (2012) 195149](http://dx.doi.org/10.1103/PhysRevB.85.195149), [[arXiv:1110.5956]{}](http://arxiv.org/abs/1110.5956). F. [Michaud]{}, F. [Vernay]{}, S. R. [Manmana]{}, and F. [Mila]{}, “Antiferromagnetic spin-[S]{} chains with exactly dimerized ground states,” [[*Phys. Rev. Lett.*]{} [[*108*]{}]{} (2012) 127202](http://dx.doi.org/10.1103/PhysRevLett.108.127202), [[arXiv:1110.3394]{}](http://arxiv.org/abs/1110.3394). F. [Michaud]{}, S. R. [Manmana]{}, and F. [Mila]{}, “Realization of higher [Wess-Zumino-Witten]{} models in spin chains,” [[*Phys. Rev.*]{} [[ *B87*]{}]{} (2013) 140404](http://dx.doi.org/10.1103/PhysRevB.87.140404), [[ arXiv:1301.5719]{}](http://arxiv.org/abs/1301.5719). F. D. M. [Haldane]{}, “Physics of the ideal semion gas: [Spinons]{} and quantum symmetries of the integrable [Haldane-Shastry]{} spin chain,” in [ *Correlation Effects in Low-Dimensional Electron Systems*]{}, vol. 118 of [ *Springer Series in Solid-State Sciences*]{}, pp. 3–20. Springer, 1994. [[ arXiv:cond-mat/9401001]{}](http://arxiv.org/abs/arXiv:cond-mat/9401001). M. [Greiter]{}, F. D. M. [Haldane]{}, and R. [Thomale]{}, “Non-abelian statistics in one dimension: topological momentum spacings and [SU(2)]{} level $k$ fusion rules,” [*arXiv e-prints*]{} (2019) arXiv:1905.09728, [[arXiv:1905.09728]{}](http://arxiv.org/abs/1905.09728). M. [Greiter]{}, D. F. [Schroeter]{}, and R. [Thomale]{}, “Parent [Hamiltonian]{} for the non-abelian chiral spin liquid,” [[*Phys. Rev.*]{} [[ *B89*]{}]{} (2014) 165125](http://dx.doi.org/10.1103/PhysRevB.89.165125). P. [Di Francesco]{}, P. Mathieu, and D. Senechal, [*[Conformal Field Theory]{}*]{}. 
Graduate Texts in Contemporary Physics. Springer, New York, 1999. H.-H. [Tu]{} and G. [Sierra]{}, “[Infinite matrix product states, boundary conformal field theory, and the open Haldane-Shastry model]{},” [[*Phys. Rev.*]{} [[ *B92*]{}]{} (2015) 041119](http://dx.doi.org/10.1103/PhysRevB.92.041119), [[ arXiv:1504.07224]{}](http://arxiv.org/abs/1504.07224). B. [Basu-Mallick]{}, F. [Finkel]{}, and A. [Gonz[á]{}lez-L[ó]{}pez]{}, “[Integrable open spin chains related to infinite matrix product states]{},” [[*Phys. Rev.*]{} [[ *B93*]{}]{} (2016) 155154](http://dx.doi.org/10.1103/PhysRevB.93.155154), [[ arXiv:1511.08613]{}](http://arxiv.org/abs/1511.08613). A. E. B. [Nielsen]{} and G. [Sierra]{}, “[Bosonic fractional quantum Hall states on the torus from conformal field theory]{},” [[*J. Stat. Mech.*]{} [[*4*]{}]{} (2014) 04007](http://dx.doi.org/10.1088/1742-5468/2014/04/P04007), [[arXiv:1312.5134]{}](http://arxiv.org/abs/1312.5134). A. [Deshpande]{} and A. E. B. [Nielsen]{}, “[Lattice Laughlin states on the torus from conformal field theory]{},” [[*J. Stat. Mech.*]{} [[*1*]{}]{} (2016) 013102](http://dx.doi.org/10.1088/1742-5468/2016/01/013102), [[arXiv:1507.04335]{}](http://arxiv.org/abs/1507.04335). G. [Gorohovsky]{}, R. G. [Pereira]{}, and E. [Sela]{}, “[Chiral spin liquids in arrays of spin chains]{},” [[*Phys. Rev.*]{} [[ *B91*]{}]{} (2015) 245139](http://dx.doi.org/10.1103/PhysRevB.91.245139), [[ arXiv:1503.05050]{}](http://arxiv.org/abs/1503.05050). T. [Meng]{}, T. [Neupert]{}, M. [Greiter]{}, and R. [Thomale]{}, “[Coupled-wire construction of chiral spin liquids]{},” [[*Phys. Rev.*]{} [[ *B91*]{}]{} (2015) 241106](http://dx.doi.org/10.1103/PhysRevB.91.241106), [[ arXiv:1503.05051]{}](http://arxiv.org/abs/1503.05051). P.-H. [Huang]{}, J.-H. [Chen]{}, P. R. S. [Gomes]{}, T. [Neupert]{}, C. [Chamon]{}, and C. [Mudry]{}, “Non-abelian topological spin liquids from arrays of quantum wires or spin chains,” [[*Phys. Rev.*]{} [[ *B93*]{}]{} (2016) 205123](http://dx.doi.org/10.1103/PhysRevB.93.205123), [[ arXiv:1601.01094]{}](http://arxiv.org/abs/1601.01094). P. [Lecheminant]{} and A. M. [Tsvelik]{}, “Lattice spin models for non-abelian chiral spin liquids,” [[*Phys. Rev.*]{} [[ *B95*]{}]{} (2017) 140406](http://dx.doi.org/10.1103/PhysRevB.95.140406), [[ arXiv:1608.05977]{}](http://arxiv.org/abs/1608.05977). J. Schwinger, “On angular momentum,” in [*Quantum Theory of Angular Momentum*]{}, H. V. D. L.C. Biedenharn, ed. Academic Press, 1965. Original report in 1952 unpublished, scan available online. J. Fuchs, [*Affine Lie algebras and quantum groups*]{}. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, 1995. [^1]: The existence of simple Haldane-Shastry-like spin chains with Yangian symmetry has, in fact, been ruled out in [@Haldane:1994cond.mat..1001H] for $S>\frac{1}{2}$. [^2]: In standard CFT conventions, a primary field frequently is defined to be a field associated with a highest weight state. In contrast, in this paper a convention will be used where “primary field” refers to the whole ground state multiplet. In other words, our primary fields are ${\mathcal{V}}$-valued. Here and in what follows we will, moreover, frequently suppress the index $m$ on $\psi$ to avoid cluttering of notation. [^3]: Unique up to an overall phase which is irrelevant for the physical properties of the quantum state . 
[^4]: Mathematically, this follows from $J_0^a(J_{-1}^b\otimes|m\rangle)=[J_0^a,J_{-1}^b]\otimes|m\rangle+J_{-1}^b\otimes J_0^a|m\rangle$ where we introduced (superfluous) tensor product symbols $\otimes$ to make the connection more explicit. [^5]: We suppress the indices in $\psi(z)$ in order to avoid cluttering of notation. [^6]: Here and in what follows we write $j(\neq l)$ to emphasize that the summation is over $j$ while $l$ is fixed. In contrast, $j\neq l$ would mean a sum over $j$ and $l$. [^7]: Biquadratic interactions arise from the quartic interactions when choosing $i=j$. [^8]: The product is non-singular since the fields all live in different tensor factors. [^9]: A spin $j$ primary field has conformal dimension $h_j=j(j+1)/(k+2)$ in the $SU(2)_k$ WZW model. The two results quoted are obtained for $j=\frac{k}{2}$, in the first case after restriction to $k=1$. [^10]: The subscript $0$ indicates that this is meant to be a ground state. [^11]: “Individual site” here refers to an arbitrary numbering of spins which is identical on each of the $k$ spin systems. [^12]: We would like to emphasize that the subscript of the vectors ${\mathbf{a}}_i$ and ${\mathbf{b}}_i$ refers to different chains, not the sites on individual chains. [^13]: Many of the simplifications we present carry through even for arbitrary positions $z_l=e^{i\phi_l}$ on the unit circle where the $z_l$ are assumed to be mutually distinct. Since our main motivation is the comparison of the [$\infty$MPS]{}approach with the recent proposal of Greiter and coauthors [@Greiter:2011jp] we refrain from presenting the general expressions. [^14]: In writing down this expression we rescaled the original Hamiltonian  by a factor $\frac{2\pi^2(2S+1)}{4L^2(2S+3)}$. This is possible without changing any of the fundamental properties of $H$. The rescaling ensures that the Hamiltonian behaves as expected in the thermodynamic limit $L\to\infty$ and is required for a comparison with the results of [@Greiter:2011jp].
--- bibliography: - 'reference.bib' --- Introduction ============ Data from real complex networks shows that correlations exist in various forms, for instance the existence of social relationships and interests in social networks. Degree correlations between neighbors, correlations in income, followers of users and number of likes of specific pages in social networks are some examples, to name a few. These kinds of correlations have several implications for the network structure; for example, degree-degree correlations manifest themselves in the assortativity or disassortativity of the network [@BBV08]. We consider very large complex networks where it is impractical to have a complete picture a priori. Crawling or sampling techniques are used in practice to explore such networks by making use of API calls or HTML scraping. We look into randomized sampling techniques which generate stationary samples. As an example, random walk based algorithms are used in many cases because of the several advantages they offer [@ART10; @Brin1998]. We focus on the extremal properties of the correlated and stationary sequence of characteristics of interest $X_1,\ldots,X_n$, which is a function of the node sequence, the one actually generated by the sampling algorithms. The characteristics of interest can be, for instance, node degrees, node income, number of followers of the node in an OSN, etc. Among these properties, clusters of exceedances of such sequences over high thresholds are studied in particular. A cluster of exceedances is determined as the consecutive exceedances of $\{X_n\}$ over the threshold $\{u_n\}$ between two consecutive non-exceedances [@Ferro; @Markovich2014]. It is important to investigate the stochastic nature of extremes since it allows us to disseminate advertisements or collect opinions more effectively within the clusters. The dependence structure of the sampled sequence exceeding sufficiently high thresholds is measured using a parameter of extreme value theory called the extremal index (EI), $\theta$. It is defined as follows. [@Leadbetter p. 53] \[Def-1\] The stationary sequence $\{X_n\}_{n\ge 1}$, with $F$ as the marginal distribution function and $M_n=\max\{X_1,..., X_n\}$, is said to have the extremal index $\theta\in[0,1]$ if for each $0<\tau <\infty$ there is a sequence of real numbers (thresholds) $u_n=u_n(\tau)$ such that $$\begin{aligned} \lim_{n\to\infty}n(1-F(u_n))&=&\tau \mbox{ and} \label{eq:CCDF_condn_theta} \\ \lim_{n\to\infty}{\mbox{P}}\{M_n\le u_n\}&=&e^{-\tau\theta}. \nonumber\end{aligned}$$ The maximum $M_n$ is related to the EI more explicitly as [@Beirlant p. 381]$^{(\textrm{a})}$ $$\begin{aligned} \label{eq:max_ei_reln} {\mbox{P}}\{M_n\le u_n\}&=& F^{n\theta}(u_n)+o(1).\end{aligned}$$ When $\{X_n\}_{n\ge 1}$ is i.i.d. (for instance under uniform independent node sampling), $\theta=1$ and the point process of exceedances over the threshold $u_n$ converges weakly to a homogeneous Poisson process [@Beirlant Chapter 5]. But when $0\leq \theta <1$, the point process of exceedances converges weakly to a compound Poisson process, which implies that exceedances of high threshold values $u_n$ tend to occur in clusters for dependent data [@Beirlant Chapter 10]. EI has many useful interpretations and applications, such as - Finding the distribution of order statistics of the sampled sequence. These can be used to find quantiles and to predict the $k$th largest value which arises with a certain probability. Specifically, the distribution of the maxima is available via (\[eq:max\_ei\_reln\]), and the quantile of the maxima increases with the EI. 
Hence, for samples with a lower EI, lower values of the maxima can be expected. When the sampled sequence is the sequence of node degrees, this yields many useful results. - The close relation of the extremal index to the distribution and expectation of the size of clusters of exceedances. - The first hitting time of the sampled sequence to $(u_n, \infty)$ is related to EI. Thus, in applications where the aim is to detect large sample values quickly without actually employing exhaustive sampling (which might be very costly), we can compare different sampling procedures by their EI: a smaller EI leads to a longer expected first hitting time. These interpretations are explained later in the paper. The network topology determines the stationary distribution of the characteristics of interest under a sampling technique and is reflected in the EI. This indicates that different sampling algorithms may have different EIs. Our contributions {#our-contributions .unnumbered} ----------------- The main contributions in this work are as follows. We connect extreme value theory of stationary sequences to the sampling of large complex networks and we study the extremal and clustering properties of the sampling process due to correlations. In order to facilitate a painless future study of correlations and clusters of samples in large networks, we propose to abstract the extremal properties into a single and handy parameter, the EI. For any general stationary samples meeting two mixing conditions, we find that knowledge of the bivariate distribution or the bivariate copula is sufficient to compute the EI analytically and thereby to derive many extremal properties. Several useful applications of EI (first hitting time, order statistics and mean cluster size) to analyse large graphs, known only through sampled sequences, are proposed. Degree correlations are explained in detail with a random graph model for which joint degree correlations exist for neighbor nodes. Three different random walk based algorithms that are widely discussed in the literature (see [@ART10] and the references therein) are then reformulated on the degree state space, and the EI is calculated when the joint degree correlation is bivariate Pareto distributed. We establish a general lower bound for the EI in PageRank processes irrespective of the degree correlation model. Finally, two estimation techniques for the EI are provided, and the EI is numerically computed for a synthetic graph with correlated neighbour degrees and for two real networks (the Enron email network and the DBLP network). The paper is organized as follows. In Section \[sec:calcn\_EI\], methods to derive EI are presented. Section \[sec:deg\_corlns\] considers the case of degree correlations. In Section \[sec:graph\] the graph model and correlated graph generation technique are presented. Section \[sec:desc\_rand\_walks\] explains the different types of random walks studied and derives associated transition kernels and joint degree distributions. EI is calculated for the different sampling techniques in Section \[sec:deg\_calcn\_ei\]. In Section \[sec:applcn\_ei\] we provide several applications of the extremal index in graph sampling techniques. In Section \[sec:simulations\] we estimate the extremal index and perform numerical comparisons. Finally Section \[sec:conclu\] concludes the paper. A shorter version of this work has appeared in [@Jithin_2014]. Calculation of Extremal Index (EI) {#sec:calcn_EI} ================================== We consider networks represented by an undirected graph $G$ with $N$ vertices and $M$ edges. 
Since the networks under consideration are huge, we assume it is impossible to describe them completely, i.e., no adjacency matrix is available beforehand. Assume any randomized sampling procedure is employed and let the sampled sequence $\{X_i\}$ be any general sequence. This section explains a way to calculate the extremal index from the bivariate distribution if the sampled sequence satisfies two mixing conditions. \[cond:d\] $$\begin{gathered} \hspace*{-0.4 cm}\Big|{\mbox{P}}(X_{i_1}\leq u_n,\ldots,X_{i_p}\leq u_n,X_{j_1}\leq u_n,\ldots,X_{j_q}\leq u_n )\\ -{\mbox{P}}(X_{i_1}\leq u_n,\ldots,X_{i_p}\leq u_n){\mbox{P}}(X_{j_1}\leq u_n,\ldots,X_{j_q}\leq u_n )\Big|\leq \alpha_{n,l_n},\end{gathered}$$ where $\alpha_{n,l_n} \to 0$ for some sequence $l_n=o(n)$ as $n \to \infty$, for any integers $i_1\leq \ldots <i_p<j_1<\ldots\leq j_q$ with $j_1-i_p>l_n$. \[cond:d”\] $$\lim_{n \to \infty} \Big \{\sum_{j=2}^{r_n}{\mbox{P}}(X_j \leq u_n < X_{j+1}|X_1>u_n) \Big\}= 0,$$ where $(n/r_n)\alpha_{n,l_n} \to 0$ and $l_n/r_n \to 0$ with $\alpha_{n,l_n}$, $l_n$ as in Condition $D(u_n)$ and $r_n$ as $o(n)$. Let $C(u,v)$ be the bivariate copula [@Nelsen2007] ($[0,1]^2 \to [0,1]$) and let $C'$ be its Gâteaux derivative along the direction $(1,1)$. Using Sklar’s theorem [@Nelsen2007 p. 18], with $F$ as the marginal stationary distribution function of the sampling process, $$C(u,u)={\mbox{P}}(X_1\leq F^{-1}(u), X_2\leq F^{-1}(u)).$$ $F^{-1}$ denotes the inverse function of $F$. This representation is unique if the stationary distribution $F(x)$ is continuous. \[prop:theta\] If the sampled sequence is stationary and satisfies conditions $D(u_n)$ and $D''(u_n)$, then the extremal index is given by $$\label{eq:theta_my_expn} \theta=C'(1,1)-1,$$ and $0\leq \theta \leq 1$. From [@Leadbetter_1989], for a stationary sequence $\{X_n\}$ satisfying Conditions $D(u_n)$ and $D''(u_n)$, $\theta = \lim_{n \to \infty} {\mbox{P}}(X_2\leq u_n |X_1>u_n)$. Then $$\begin{aligned} \theta &=&\lim_{n \to \infty} \frac{{\mbox{P}}(X_2 \leq u_n,X_1>u_n)}{{\mbox{P}}(X_1>u_n)}\nonumber \\ &=&\lim_{n \to \infty} \frac{{\mbox{P}}(X_2 \leq u_n)-{\mbox{P}}(X_1 \leq u_n, X_2 \leq u_n)}{{\mbox{P}}(X_1>u_n)} \nonumber \\ &=&\lim_{n \to \infty} \frac{{\mbox{P}}(X_2 \leq u_n)-C\big ({\mbox{P}}(X_1 \leq u_n), {\mbox{P}}(X_2 \leq u_n)\big)}{1-{\mbox{P}}(X_1 \leq u_n)} \nonumber\\ &=&\lim_{x \to 1} \frac{x-C(x,x)}{1-x} \nonumber\\ &=& C'(1,1)-1. \nonumber\end{aligned}$$ The existence of the EI in $[0,1]$ is evident from the definition used in this proof. The condition $D''(u_n)$ can be weakened to the condition $D^{(k)}(u_n)$ presented in [@Chernick_1991], $$\lim_{n \to \infty} n{\mbox{P}}\left(X_1>u_n\geq \max_{2\leq i \leq k} X_i, \max_{k+1\leq j \leq r_n} X_j >u_n \right)=0,$$ where $r_n$ is defined as in $D''(u_n)$. For a stationary sequence, $D^{(2)}(u_n)=D''(u_n)$. If we assume $D^{(k)}$ is satisfied for some $k\geq 2$ along with $D(u_n)$, then, following the proof of Proposition \[prop:theta\], the EI can be derived as $$\theta=C'_k(1)-C'_{k-1}(1),$$ where $C_k(x)$ represents the copula of the $k$-dimensional vector $(x_1,\ldots,x_k)$, $C_k(x_1,\ldots,x_k)$ with $x=x_1=\ldots=x_k$, and $C_{k-1}$ is its $(k-1)$th marginal, $C_{k-1}(x)=C_{k-1}(x_1,\ldots,x_{k-1},1)$ with $x=x_1=\ldots=x_{k-1}$. In some cases it is easier to work with the joint tail distribution. The survival copula $\widehat{C}(\cdot,\cdot)$, which corresponds to $${\mbox{P}}(X_1> x, X_2> x)=\widehat{C}(\overline{F}(x), \overline{F}(x)),$$ with $\overline{F}(x)=1-F(x)$, can also be used to calculate $\theta$. 
It is related to the copula as $\widehat{C}(u,u)=C(1-u,1-u)+2u-1$ [@Nelsen2007 p. 32]. Hence $\theta=C'(1,1)-1=1-\widehat{C}'(0,0)$. The lower tail dependence function of the survival copula is defined as [@Weng2012] $$\lambda(u_1,u_2)=\lim_{t\to 0^{+}}\frac{\widehat{C}(tu_1,tu_2)}{t}.$$ Hence $\widehat{C}'(0,0)=\lambda(1,1)$. $\lambda$ can be calculated for different copula families. In particular, if $\widehat{C}$ is a bivariate Archimedean copula, then it can be represented as $\widehat{C}(u_1,u_2)=\psi(\psi^{-1}(u_1)+\psi^{-1}(u_2))$, where $\psi$ is the generator function and $\psi^{-1}$ is its inverse, with $\psi:[0,\infty]\to[0,1]$ meeting several other conditions. If $\psi$ is a regularly varying distribution with index $-\beta$, $\beta>0$, then $\lambda(x_1,x_2)=(x_1^{-\beta^{-1}}+x_2^{-\beta^{-1}})^{-\beta}$ and $(X_1,X_2)$ has a multivariate regularly varying distribution [@Weng2012]. Therefore, for the Archimedean copula family, the EI is given by $$\label{eq:theta_archim} \theta=1-1/2^{\beta}.$$ As an example, the bivariate Pareto distribution of the form ${\mbox{P}}(X_1> x_1, X_2> x_2)=(1+x_1+x_2)^{-\gamma}$, $\gamma>0$, has an Archimedean copula with generator function $\psi(x)=(1+x)^{-\gamma}$. This gives $\theta=1-1/2^{\gamma}$. The bivariate exponential distribution of the form $${\mbox{P}}(X_1> x_1, X_2> x_2)=1-e^{-x_1}-e^{-x_2}+e^{-(x_1+x_2+\eta x_1x_2)},$$ $0\leq \eta \leq 1,$ also admits an Archimedean copula. Check of conditions $D(u_n)$ and $D''(u_n)$ {#subsec:check_condns} ------------------------------------------- If the sampling technique is based on a Markov chain and the sampled sequence consists of measurable functions of stationary Markov samples, then the sequence is stationary, and [@Brien_1987] proved that another mixing condition, $AIM(u_n)$, which implies $D(u_n)$, is satisfied. Condition $D''(u_n)$ allows clusters with consecutive exceedances and eliminates the possibility of clusters with upcrossings of the threshold $u_n$ (${X_i\leq u_n <X_{i+1}}$). Hence, in those cases where it is tedious to check the condition $D''(u_n)$ theoretically, we can use numerical procedures to measure the ratio of the number of consecutive exceedances to the number of exceedances, and the ratio of the number of upcrossings to the number of consecutive exceedances, in small intervals. Such an example is provided in Section \[sec:deg\_calcn\_ei\]. \[ferreira\_comment\] The EI is derived in [@FerFer] and leads to the same expression as (\[eq:theta\_my\_expn\]). But [@FerFer] assumes that $\{X_n\}$ is sampled from a first order Markov chain. This condition is much stricter than $D(u_n)$ and $D''(u_n)$, which we used to derive (\[eq:theta\_my\_expn\]). For instance, the degrees of node samples obtained from Markov chain based sampling generally do not form a Markov chain, since the node-degree relation is not one-to-one, while $D(u_n)$ holds in such a case and $D''(u_n)$ can be satisfied; see Section \[sec:deg\_calcn\_ei\] for an example. Degree correlations {#sec:deg_corlns} =================== The techniques established in Section \[sec:calcn\_EI\] are very general, applicable to any sampling technique and any sequence of samples satisfying certain conditions. In this section we illustrate the calculation of the extremal index for correlations among degrees. We introduce different sampling techniques throughout this section, though they can be used in the case of any general correlations. We denote the sampled sequence $\{X_i\}$ as $\{D_i \}$ in this section. Description of the model {#sec:graph} ------------------------ We take into account correlation in degrees between neighbor nodes. 
The dependence structure in the graph is described by the joint degree-degree probability density function $f(d_1,d_2)$, with $d_1$ and $d_2$ indicating the degrees of adjacent nodes, or equivalently by the corresponding tail distribution function $\overline{F}(d_1,d_2)={\mbox{P}}(D_1 \ge d_1, D_2 \ge d_2)$, with $D_1$ and $D_2$ representing the corresponding degree random variables (see e.g., [@BBV08; @BPV03; @GDM08]). The probability that a randomly chosen edge has end vertices with degrees $d_1 \leq d \leq d_1+\Delta(d_1)$ and $d_2 \leq d \leq d_2+\Delta(d_2)$ is $(2-\delta_{d_1d_2})f(d_1,d_2)\Delta(d_1)\Delta(d_2)$. Here $\delta_{d_1d_2}=1$ if $d_1=d_2$, zero otherwise. The multiplying factor $2$ appears in the above expression when $d_1 \neq d_2$ because of the symmetry of $f(d_1,d_2)$, $f(d_1,d_2)=f(d_2,d_1)$, due to the undirected nature of the underlying graph, and the fact that both $f(d_1,d_2)$ and $f(d_2,d_1)$ contribute to the edge probability under consideration. The marginal density $f(d_1)$ of $f(d_1,d_2)$ is related to the degree density $f_d(d_1)$ by $$\label{eq:marginal_jt-degree} f(d_1)=\int_{d_2} f(d_1,d_2)d(d_2) \approx \frac{d_1 f_d(d_1)}{{\mbox{E}}[D]},$$ where ${\mbox{E}}[D]$ denotes the mean node degree, $${\mbox{E}}[D]=\left[\int \int \left( \frac{f(d_1,d_2)}{d_1} \right) d(d_1)d(d_2)\right]^{-1}.$$ $f(.)$ can be interpreted as the degree density of a vertex reached by following a randomly chosen edge. The approximation for $f(d_1)$ is obtained as follows: in the R.H.S. of (\[eq:marginal\_jt-degree\]), roughly, $d_1 f_d(d_1)N$ is the number of half edges from nodes with degree around $d_1$ and ${\mbox{E}}[D] N$ is the total number of half edges. From the above description, it can be noted that the knowledge of $f(d_1,d_2)$ is sufficient to describe this random graph model and to generate it. Most of the results in this paper are derived assuming continuous probability distributions for $f(d_1,d_2)$ and $f_d(d_1)$ because an easy and unique way to calculate the extremal index exists for continuous distributions in our setup (more details in Section \[sec:calcn\_EI\]). Moreover, the extremal index might not exist for many discrete valued distributions [@Leadbetter]. ### Random graph generation {#subsubsec:graph-genern} A random graph with a given bivariate joint degree-degree distribution can be generated as follows ([@Newman2002]). 1. A degree sequence is generated according to the degree distribution, $f_d(d)=\frac{f(d)E[D]}{d}$ 2. An uncorrelated random graph is generated with this degree sequence using the configuration model ([@BBV08]) 3. Metropolis dynamics is then applied on the generated graph: choose two edges at random (denoted by the vertex pairs $(v_1,w_1)$ and $(v_2,w_2)$) and measure the degrees $(j_1,k_1)$ and $(j_2,k_2)$ corresponding to these vertex pairs. Generate a random number $y$ according to the uniform distribution on $[0,1]$. If $y\leq \min(1,(f(j_1,j_2)f(k_1,k_2))/(f(j_1,k_1)f(j_2,k_2)))$, then remove the selected edges and construct new ones, $(v_1,v_2)$ and $(w_1,w_2)$. Otherwise keep the selected edges intact. This dynamics will generate the required joint degree-degree distribution. Run the Metropolis dynamics long enough to mix the network. Description of random walks {#sec:desc_rand_walks} --------------------------- In this section, we explain three different random walk based algorithms for exploring the network. 
They have been extensively studied in previous works [@ART10; @Brin1998; @L93], where they are formulated with the vertex set as the state space of the underlying Markov chain on the graph. The walker in these algorithms, after reaching each node, moves to another node randomly by following the transition kernel of the Markov chain. But since the interest in the present work is in the degree sequence, rather than the node sequence, and its extremal properties, we take the degree set as the state space and find appropriate transition kernels. We use ${f}_{\mathscr{X}}$ and ${\mbox{P}}_{\mathscr{X}}$ to represent the probability density function and probability measure under the algorithm $\mathscr{X}$, with the exception that $f_d$ represents the probability density function of degrees. ### Random Walk (RW) In a random walk, the next node to visit is chosen uniformly among the neighbors of the current node. From (\[eq:marginal\_jt-degree\]) we approximate the standard random walk on the degree state space by the following transition kernel, the conditional density that, given the present node has degree $d_t$, the next node has degree $d_{t+1}$, $$\label{eq:RWkernel} f_{RW}(d_{t+1}|d_{t}) \approx \frac{{\mbox{E}}[D] f(d_t,d_{t+1})}{d_t f_d(d_t)}.$$ This approximation is obtained as follows: given that the present node has degree $d_t$, $1/d_t$ is the probability of selecting a neighbor uniformly, and the rest of the terms on the R.H.S. represent the mean number of neighbors with degree around $d_{t+1}$. When $d_t \neq d_{t+1}$, $\frac{{\mbox{E}}[D]N}{2}(2f(d_t,d_{t+1}))$ is the mean number of edges between degrees about $d_t$ and $d_{t+1}$ and $f_d(d_t)N$ is the mean number of nodes with degrees about $d_t$, and thus their ratio represents the mean number of such edges per node with degree about $d_t$, i.e., the mean number of neighbors with degree about $d_{t+1}$. The probability of the other case, $d_t=d_{t+1}$, occurring is zero, as the degrees are assumed to follow a continuous distribution. If the standard random walk on the vertex set is in the stationary regime, its stationary distribution (probability of staying at a particular vertex $i$) is proportional to the degree (see e.g., [@L93]) and is given by $d_i/2M$. Then, in the standard random walk on the degree set, the stationary probability of staying at any node with degree around $d_1$ can be approximated as $Nf_d(d_1)\left(d_1/2M\right)$. Thus $$f_{RW}(d_1) \approx \frac{d_1}{{\mbox{E}}[D]} f_d(d_1).$$ Then, the joint density of the standard random walk is $f_{RW}(d_{t+1}, d_{t}) \approx f(d_t,d_{t+1}).$ ### Check of the approximation {#subsubsec:RW_trans_comp .unnumbered} We provide a comparison of simulated and theoretical values of the transition kernel of RW in Figure \[fig:trans\_kernel\_RW\]. The bivariate Pareto model is assumed for the joint degree-degree tail function of the graph, $$\label{eq:biPareto} \bar{F}(d_1,d_2) = \left(1+\frac{d_1-\mu}{\sigma}+\frac{d_2-\mu}{\sigma}\right)^{-\gamma},$$ where $\sigma$, $\mu$ and $\gamma$ are positive values. In the figure, the number of nodes is $N=5{,}000$, $\mu=10$, $\gamma=1.2$ and $\sigma=15$. These choices of parameters provide $E[D]=21.0052$. At each step the Metropolis dynamics chooses two edges, and it has been run 200,000 times (which provides sufficient mixing). The figure shows satisfactory fitting of the approximation. 
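As a complementary illustration (not the code behind Figure \[fig:trans\_kernel\_RW\]), the sketch below checks the simpler stationary-distribution approximation $f_{RW}(d)\approx d f_d(d)/{\mbox{E}}[D]$ by running a standard random walk on a test graph. The use of a Barabási–Albert graph from the networkx library, the seed, and the walk length are arbitrary choices made only for this illustration; burn-in is ignored.

```python
# Run a standard random walk and compare the empirical stationary degree
# distribution with the approximation pi(d) ~ d * f_d(d) / E[D].
import random
from collections import Counter

import networkx as nx

G = nx.barabasi_albert_graph(5000, 3, seed=1)   # stand-in for the correlated graph
steps = 200_000

node = random.choice(list(G.nodes))
visits = Counter()
for _ in range(steps):
    node = random.choice(list(G.neighbors(node)))   # uniform neighbor = RW step
    visits[G.degree(node)] += 1                     # record degree of visited node

# theoretical prediction: pi(d) proportional to d * (number of nodes of degree d)
deg_count = Counter(dict(G.degree()).values())
norm = sum(d * c for d, c in deg_count.items())     # equals 2M
for d in sorted(deg_count)[:10]:
    empirical = visits[d] / steps
    predicted = d * deg_count[d] / norm
    print(f"d={d:3d}  empirical={empirical:.4f}  predicted={predicted:.4f}")
```

The empirical visit frequencies per degree closely track the degree-biased prediction, which is the discrete analogue of the continuous approximation used above.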
### PageRank (PR) {#sec:PR} PageRank is a modification of the random walk which, with a fixed probability $1-c$, samples a random node uniformly and, with probability $c$, follows the standard random walk transition [@Brin1998]. Its evolution on the degree state space can be described as follows: $$\begin{aligned} \label{eq:PRkernel} f_{PR}(d_{t+1}|d_{t}) &\approx& c\, f_{RW}(d_{t+1}|d_{t}) + (1-c)\,\frac{1}{N}\, N f_d(d_{t+1})\nonumber\\ &=& c\, f_{RW}(d_{t+1}|d_{t}) + (1-c)\, f_d(d_{t+1}).\end{aligned}$$ Here the $1/N$ corresponds to uniform sampling on the vertex set and $\frac{1}{N} N f_d(d_{t+1})$ indicates the net probability of jumping to the nodes with degree around $d_{t+1}$. ### Check of the approximation {#check-of-the-approximation .unnumbered} We provide a consistency check of the approximation derived for the transition kernel by studying the tail behavior of the degree distribution and the PageRank distribution. It is known that, under some strict conditions, for a directed graph PageRank and in-degree have the same tail exponents [@Litvak]. In our formulation in terms of degrees, for an *uncorrelated* and undirected graph, the PageRank of a given degree $d$, $PR(d)$, can be approximated from the basic definition as $$PR(d)=f_{PR}(d)\approx c\;f_{RW}(d)+(1-c)\;f_d(d).$$ This is a deterministic quantity. We are interested in the distribution of the random variable $PR(D)$, the PageRank of a randomly chosen degree class $D$. The PageRank $PR(d)$ is also the long term proportion of time, or probability, that the PageRank process ends in a degree class with degree $d$. This can be scaled suitably to provide rank-type information. Its tail distribution is $$\begin{aligned} P(PR(D)> x)=P\left(c\,f_{RW}(D)+(1-c)\,f_d(D) > x \right), \nonumber\end{aligned}$$ where $D \sim f_d(.)$. The PageRank of any vertex inside the degree class $d$ is $PR(d)/(Nf_d(d))$. The distribution of the PageRank of a randomly chosen vertex $i$, $P(PR(i)>x)$, after appropriate scaling for comparison with the degree distribution, is $P(N\,PR(i)>\hat{d})$, where $\hat{d}=Nx$. Now $$\begin{aligned} P(N\,PR(i)>\hat{d})&=&P\left(N \frac{PR(D)}{Nf_d(D)}>\hat{d}\right) \nonumber \\ &=&P\left(D>\frac{E[D]}{c}\left[\hat{d}-(1-c)\right] \right). \nonumber\end{aligned}$$ This is of the form $P(D>A\hat{d}+B)$ with $A$ and $B$ appropriate constants, and hence it has the same tail exponent as the degree distribution when the graph is *uncorrelated*. There is no convenient expression for the stationary distribution of PageRank, to the best of our knowledge, and it is difficult to come up with an easy to handle expression for the joint distribution. Therefore, along with other advantages, we consider another modification of the standard random walk. ### Random Walk with Jumps (RWJ) {#RWJ} RW sampling leads to many practical issues like the possibility of getting stuck in a disconnected component, biased estimators, etc. RWJ overcomes such problems ([@ART10]). In this algorithm we follow a random walk on a modified graph which is a superposition of the given graph and the complete graph on the same vertex set, with weight $\alpha/N$ on each edge of the complete graph, $\alpha\in[0,\infty]$ being a design parameter ([@ART10]). The algorithm can be shown to be equivalent to choosing $c=d_t/(d_t+\alpha)$ in the PageRank algorithm, where $d_t$ is the degree of the present node. The larger the node’s degree, the less likely is the artificial jump of the process. 
This modification makes the underlying Markov chain time reversible, significantly reduces the mixing time, improves the estimation error and leads to a closed form expression for the stationary distribution. The transition kernel on the degree set, following the PageRank kernel, is $$\begin{aligned} f_{RWJ}(d_{t+1}|d_{t}) &\approx&\frac{d_t}{d_t+\alpha}\,f_{RW}(d_{t+1}|d_{t})+\frac{\alpha}{d_t+\alpha}\,f_d(d_{t+1})\nonumber\\ &=&\frac{{\mbox{E}}[D]\,f(d_t,d_{t+1})+\alpha\,f_d(d_t)\,f_d(d_{t+1})}{(d_t+\alpha)\,f_d(d_t)}.\end{aligned}$$ The stationary distribution for node $i$ (on the vertex set) is $(d_i+\alpha)/(2M+N\alpha)$ and the equivalent stationary probability density function on the degree set, obtained by collecting all the nodes with the same degree, is $$\begin{aligned} \label{eq:JPsta} f_{RWJ}(d_1) &\approx &\left( \frac{d_1+\alpha}{2M+N\alpha}\right)Nf_d(d_1) \nonumber\\ &=&\frac{(d_1+\alpha)f_d(d_1)}{{\mbox{E}}[D]+\alpha},\end{aligned}$$ since $2M/N={\mbox{E}}[D]$. The stationarity of $f_{RWJ}(d_1)$ can be verified by plugging the obtained expression into the stationarity condition of the Markov chain. We have $$\begin{aligned} f_{RWJ}(d_1) &=&\int f_{RWJ}(d_1|d_2)\,f_{RWJ}(d_2)\, d(d_2)\nonumber\\ &\approx&\int \frac{{\mbox{E}}[D]\,f(d_2,d_1)+\alpha\,f_d(d_2)\,f_d(d_1)}{{\mbox{E}}[D]+\alpha}\, d(d_2)\nonumber\\ \label{eq:RWJPmarginal} &\approx&\frac{(d_1+\alpha)f_d(d_1)}{{\mbox{E}}[D]+\alpha},\end{aligned}$$ where (\[eq:marginal\_jt-degree\]) has been applied. Then, the joint density function for the random walk with jumps has the following form, $$f_{RWJ}(d_{t+1}, d_{t}) \approx \frac{{\mbox{E}}[D]\,f(d_{t+1},d_t)+\alpha\,f_d(d_{t+1})\,f_d(d_t)}{{\mbox{E}}[D]+\alpha}.$$ Moreover the associated tail distribution has a simple form, $$\label{eq:tail_RWJ_joint-distbn} f_{RWJ}(D_{t+1} > d_{t+1}, D_{t} > d_{t}) \approx \frac{{\mbox{E}}[D]\overline{F}(d_{t+1},d_t)+\alpha \overline{F}_d(d_{t+1})\overline{F}_d(d_t)}{{\mbox{E}}[D]+\alpha}.$$ Characterizing Markov chain based sampling in terms of degree transitions has some advantages: - In the different random walk algorithms considered on the vertex set, all the nodes with the same degree have the same stationary probability. This also implies that it is more natural to formulate the random walk transition in terms of degrees. - Degree uncorrelatedness in the underlying graph is directly reflected in the joint distribution of the studied sampling techniques. For uncorrelated networks, $f_{RW}(d_1,d_2)=f_{RW}(d_1)f_{RW}(d_2)$, $f_{PR}(d_1,d_2)=f_{PR}(d_1)f_{PR}(d_2)$ and $f_{RWJ}(d_1,d_2)=f_{RWJ}(d_1)f_{RWJ}(d_2)$. Extremal Index for bivariate Pareto Degree Correlation {#sec:deg_calcn_ei} ------------------------------------------------------ As explained in the Introduction, the extremal index is an important parameter for characterizing the dependence and extremal properties of a stationary sequence. We assume that we have waited sufficiently long that the underlying Markov chains of the three graph sampling algorithms are in the stationary regime. Here we derive the EI of RW and RWJ for the model with bivariate Pareto degree correlation (\[eq:biPareto\]) among neighbours. The two mixing conditions $D(u_n)$ and $D''(u_n)$ introduced in Section \[sec:calcn\_EI\] are needed for our EI analysis. Condition $D(u_n)$ is satisfied as explained in Section \[subsec:check\_condns\]. An empirical evaluation of $D''(u_n)$ is provided in Section \[subsub:check\_d2\_emp\]. ### EI for Random Walk sampling {#subsec:theta_ex_RW} We use the expression for the EI given in Proposition \[prop:theta\]. As $f_{RW}(x,y)$ is the same as $f(x,y)$, we have $$\begin{aligned} \widehat{C}(u,u)&=&{\mbox{P}}(D_1>\bar{F}^{-1}(u),D_2>\bar{F}^{-1}(u))\nonumber \\ &=&\left(1+2(u^{-1/\gamma}-1) \right)^{-\gamma} \nonumber \\ \widehat{C}'(u,u)&=& 2(2-u^{1/\gamma})^{-(\gamma+1)}. \nonumber\end{aligned}$$ Thus $\theta=1-\widehat{C}'(0,0)=1-1/2^{\gamma}$. For $\gamma=1$ we get $\theta=1/2$. In this case, we can also use the expression derived earlier for Archimedean copulas. 
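For a quick sanity check of this closed form (an illustration only, not the code used for our experiments), the derivative of the diagonal survival copula can be evaluated numerically; the tail index value and the finite-difference step are the only inputs.

```python
# Numerical check of the EI for RW sampling under the bivariate Pareto model:
# the derivative of C_hat(u,u) = (1 + 2*(u**(-1/gamma) - 1))**(-gamma) at
# u -> 0+ is approximated by a one-sided difference (C_hat(0,0) = 0) and
# compared with the closed form theta = 1 - 2**(-gamma).
gamma = 1.2  # tail index used for the synthetic graph

def survival_copula_diag(u):
    return (1.0 + 2.0 * (u ** (-1.0 / gamma) - 1.0)) ** (-gamma)

eps = 1e-9
theta_numeric = 1.0 - survival_copula_diag(eps) / eps
theta_closed = 1.0 - 2.0 ** (-gamma)
print(theta_numeric, theta_closed)   # both approximately 0.5647
```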
### EI for Random Walk with Jumps sampling Although it is possible to derive the EI as in the RW case above, we provide an alternative way that avoids the calculation of the tail distribution of degrees and the inverse of the RWJ marginal (with respect to the bivariate Pareto degree correlation). Under the assumption of $D''$, $$\label{eq:proof_RWJ_2pareto} \theta = \lim_{n \to \infty} \frac{P(D_2 \le u_n, D_1>u_n)}{P(D_1 > u_n)} = \lim_{n \to \infty} \frac{P(D_1 \ge u_n)-P(D_2 \ge u_n, D_1 \ge u_n)}{P(D_1 > u_n)}.$$ Now, using the threshold condition on the RWJ marginal and the joint tail distribution (\[eq:tail\_RWJ\_joint-distbn\]) of RWJ, we can write$^{(\textrm{b})}$ $$\begin{aligned} \lefteqn{\frac{P(D_1 \ge u_n)-P(D_2 \ge u_n, D_1 \ge u_n)}{P(D_1 > u_n)}} \nonumber\\ &=&\frac{\tau/n+o(1/n) - \frac{E[D]}{E[D]+\alpha}P_{RW}(D_2 \ge u_n, D_1 \ge u_n) - \frac{\alpha}{E[D]+\alpha}O(\tau/n)O(\tau/n) }{\tau/n+o(1/n)} \nonumber\end{aligned}$$ The asymptotics in the last term of the numerator is due to the following: $$\overline{F}_{RWJ}(u_n)=\frac{E[D]}{E[D]+\alpha}\overline{F}(u_n)+\frac{\alpha}{E[D]+\alpha} \overline{F}_d(u_n)=\tau/n+o(1/n),$$ and hence $\overline{F}_d(u_n)=O(\tau/n)$. Therefore (\[eq:proof\_RWJ\_2pareto\]) becomes $$\theta=1-\frac{E[D]}{E[D]+\alpha}\lim_{n \to \infty}P_{RW}(D_2 \ge u_n, D_1 \ge u_n)\,n/\tau.$$ In the case of the bivariate Pareto distribution (\[eq:biPareto\]), we obtain $$\label{eq:EI-RWJ_2pareto} \theta = 1 - \frac{E[D]}{E[D]+\alpha} 2^{-\gamma}.$$ Lower bound on the EI of PageRank --------------------------------- We obtain the following lower bound for the EI of PageRank processes. \[prop:PR\] For the PageRank process on the degree state space, irrespective of the degree correlation structure in the underlying graph, the extremal index satisfies $$\theta \geq (1-c).$$ From [@Brien_1987], the following representation of the EI holds for the degree sequence, $$\label{eq:Brien_EI_repsn} \lim_{n\to\infty}{\mbox{P}}\{M_{1,p_n}\le u_n|D_{1} >u_n\}= \theta,$$ where $\{p_n\}$ is an increasing sequence of positive integers, $p_n=o(n)$ as $n\to\infty$ and $M_{1,p_n}=\max\{D_2,..., D_{p_n}\}$. Let $\mathcal{A}$ be the event that the node corresponding to $D_2$ is selected uniformly among all the nodes, not by following the random walk from the node corresponding to $D_1$. Then ${\mbox{P}}_{PR}(\mathcal{A})=1-c$. Now, with (\[eq:PRkernel\]), $$\begin{aligned} {\mbox{P}}_{PR}(M_{1,p_n}\le u_n|D_{1}>u_n) &\ge& {\mbox{P}}_{PR}(M_{1,p_n}\le u_n, \mathcal{A}\,|D_{1}>u_n)\nonumber\\ &=& {\mbox{P}}_{PR}(\mathcal{A}|D_1>u_n)\,{\mbox{P}}_{PR}(M_{1,p_n}\le u_n|\mathcal{A},D_{1}>u_n)\nonumber\\ &\stackrel{(i)}{=}&(1-c)\,{\mbox{P}}_{PR}(M_{1,p_n}\le u_n)\nonumber\\ &\stackrel{(ii)}{=}&(1-c)\,{\mbox{P}}_{PR}^{(p_n-1)\theta}(D_{1}\le u_n)+o(1)\nonumber\\ &\ge&(1-c)\,{\mbox{P}}_{PR}^{(p_n-1)}(D_{1}\le u_n)+o(1)\nonumber\\ &\stackrel{(iii)}{\sim}& (1-c)(1-\tau/n)^{p_n-1}, \label{eq:proof_prop2_2}\end{aligned}$$ where $\{p_n\}$ is the same sequence as in (\[eq:Brien\_EI\_repsn\]), $(i)$ follows mainly from the observation that, conditioned on $\mathcal{A}$, $\{M_{1,p_n}\le u_n \}$ is independent of $\{D_{1}>u_n\}$, $(ii)$ and $(iii)$ result from the approximations in (\[eq:max\_ei\_reln\]) and (\[eq:CCDF\_condn\_theta\]) respectively, and the intermediate inequality uses $\theta\leq 1$. Assuming $p_n-1=n^{1/2}$ and since $(1-\tau/n)^{p_n-1}\sim e^{-\tau/\sqrt n}\rightarrow 1$ as $n\to\infty$, from (\[eq:Brien\_EI\_repsn\]) and (\[eq:proof\_prop2\_2\]), $$\label{14}\theta \ge 1-c.$$ The PageRank transition kernel on the degree state space does not depend on the random graph model in Section \[sec:graph\]. Hence the derived lower bound on the EI holds for any degree correlation model. Applications of Extremal Index in Network Sampling Processes {#sec:applcn_ei} ============================================================ This section provides several uses of EI for inferring properties of the sampled sequence. 
This emphasizes that the analytical calculation and estimation of EI are practically relevant. The limit of the point process of exceedances, $N_n(.)$, which counts the times, normalized by $n$, at which $\{X_i\}_{i=1}^n$ exceeds a threshold $u_n$, provides many applications of the extremal index. A cluster is considered to be formed by the exceedances in a block of size $r_n$ ($r_n=o(n)$) in $n$, with cluster size $\xi_n=\sum_{i=1}^{r_n}1(X_i>u_n)$, when there is at least one exceedance within $r_n$. The point process $N_n$ converges weakly to a compound Poisson process ($CP$) with rate $\theta \tau$, whose i.i.d. marks follow the limiting distribution of the cluster size, under condition (\[eq:CCDF\_condn\_theta\]) and a mixing condition, and the points of exceedances in $CP$ correspond to the clusters [@Beirlant Section 10.3]. We refer to clusters of this kind as blocks of exceedances. The applications below require a choice of the threshold sequence $\{u_n\}$ satisfying (\[eq:CCDF\_condn\_theta\]). For practical purposes, if a single threshold $u$ is demanded for the sampling budget $B$, we can fix $u=\max\{u_1,\ldots,u_B\}$. The applications in this section are explained with the assumption that the sampled sequence is the sequence of node degrees. But the following techniques are very general and can be extended to any sampled sequence satisfying conditions $D(u_n)$ and $D''(u_n)$. Order statistics of the sampled degrees --------------------------------------- The order statistic $X_{n-k,n}$, the $(n-k)$th order statistic, is related to $N_n(.)$ and thus to $\theta$ by $${\mbox{P}}(X_{n-k,n} \leq u_n)={\mbox{P}}(N_n((0,1])\leq k),$$ where we apply the result of convergence of $N_n$ to $CP$ [@Beirlant Section 10.3.1]. ### Distribution of Maxima The distribution of the maxima of the sampled degree sequence can be derived from (\[eq:max\_ei\_reln\]) when $n\to\infty$. Hence, if the extremal index of the underlying process is known, then from (\[eq:max\_ei\_reln\]) one can approximate the $(1-\eta)$th quantile $x_{\eta}$ of the maximal degree $M_n$ as $${\mbox{P}}\{M_n\le x_{\eta}\}=F^{n\theta}(x_{\eta})={\mbox{P}}^{n\theta}\{X_1\le x_{\eta}\}=1-\eta,$$ i.e. $$\label{eq:quant_theta_genrl} x_{\eta}\approx F^{-1}\left(\left(1-\eta\right)^{1/(n\theta)}\right).$$ In other words, quantiles can be used to find the maxima of the degree sequence with a given probability. For a fixed certainty $\eta$, $x_\eta$ is increasing in $\theta$. Hence, if two sampling procedures have the same marginal distribution, calculating the EI makes it possible to predict how large the attained values can be. A lower EI indicates a lower value of $x_{\eta}$ and a higher EI a higher $x_{\eta}$. For the random walk example in Section \[subsec:theta\_ex\_RW\] for the degree correlation model, with the use of (\[eq:quant\_theta\_genrl\]), we get the $(1-\eta)$th quantile of the maxima $M_n$ as $$x_{\eta}\approx \mu+\sigma\left(\left(1-(1-\eta)^{1/(n \theta)}\right)^{-1/\gamma}-1\right).$$ The following example demonstrates the effect of neglecting correlations on the prediction of the largest degree node. The largest degree, under the assumption of a Pareto degree distribution, can be approximated as $KN^{1/\delta}$ with $K\approx 1$, $N$ the number of nodes and $\delta$ the tail index of the complementary distribution function of degrees [@Kostia2012]. For the Twitter graph (recorded in 2012), $\delta=1.124$ for the out-degree distribution and $N=537,523,432$ [@Maksym_2014]. This gives a largest degree prediction of $59,453,030$. But the actual largest out-degree is $22,717,037$. 
This difference is because the analysis in [@Kostia2012] assumes i.i.d. samples and does not take into account the degree correlation. With knowledge of the EI, the correlation can be taken into account as described above. In the following section, we derive an expression for such a case. ### Estimation of largest degree when the marginals are Pareto distributed It is known that many social networks have degrees asymptotically distributed as Pareto. We find that in these cases the marginal distribution of degrees under the random walk based methods also follows a Pareto distribution (though we have derived this only for the model with degree correlations among neighbors; see Section \[sec:deg\_corlns\]). For any stationary sequence with marginal distribution following the Pareto distribution $\bar{F}(x)=Cx^{-\delta}$, the largest value is $$M_n \approx (n \theta)^{1/\delta} \Big(\frac{C}{\log 2} \Big)^{1/\delta}.$$ From extreme value theory [@Beirlant], it is known that when $\{X_i, i\geq 1 \}$ are i.i.d., $$\label{eq:lar_deg_1} \lim_{n \to \infty} {\mbox{P}}\left(\frac{M_n-b_n}{a_n} \leq x \right)=H_\gamma(x),$$ where $H_{\gamma}(x)$ is the extreme value distribution with index $\gamma$ and $\{a_n\}$ and $\{b_n\}$ are appropriately chosen deterministic sequences. When $\{X_i, i\geq 1 \}$ are stationary with EI $\theta$, the limiting distribution becomes $H'_{\gamma'}(x)$ and it differs from $H_{\gamma}(x)$ only through parameters. $H_{\gamma}(x)=\exp(-t(x))$ with $t(x)=\left(1+\left(\frac{x-\mu}{\sigma} \right) \gamma \right)^{-1/ \gamma}$. With the normalizing constants ($\mu=0$ and $\sigma=1$), $H'_{\gamma'}$ has the same shape as $H_\gamma$ with parameters $\gamma'=\gamma$, $\sigma'=\theta^{\gamma}$ and $\mu'=(\theta^{\gamma}-1)/\gamma$. For the Pareto case, $\overline{F}(x)=Cx^{-\delta}$, $\gamma=1/ \delta$, $a_n=\gamma C^{\gamma} n^{\gamma}$ and $b_n=C^{\gamma}n^{\gamma}$. From (\[eq:lar\_deg\_1\]), for large $n$, $M_n$ is stochastically equivalent to $a_n \chi+b_n$, where $\chi$ is a random variable with distribution $H'_{\gamma'}$. It is observed in [@Kostia2012] that the median of $\chi$ is an appropriate choice for the estimation of $M_n$. The median of $\chi$ is $\mu'+\sigma'\left(\frac{(\log 2)^{-\gamma'}-1}{\gamma'} \right)= (\theta^{\gamma}(\log 2)^{-\gamma}-1)\gamma^{-1}$. Hence, $$\begin{aligned} M_n & \approx & a_n\left(\frac{\theta^\gamma (\log 2)^{-\gamma}-1}{\gamma} \right)+b_n \nonumber\\ &= & (n \theta)^{1/\delta} \left( \frac{C}{\log 2}\right)^{1/ \delta}. \nonumber\end{aligned}$$ Relation to first hitting time and interpretations -------------------------------------------------- The extremal index also gives information about the first time $\{X_n\}$ hits $(u_n,\infty)$. Let $T_n$ be this time epoch. As $N_n$ converges to a compound Poisson process, it can be observed that $T_n/n$ is asymptotically an exponential random variable with rate $\theta \tau$, i.e., $\lim_{n\to \infty}{\mbox{P}}(T_n/n>x)=\exp(-\theta \tau x)$. Therefore $\lim_{n \to \infty}{\mbox{E}}(T_n/n)=1/(\theta \tau)$. Thus, the smaller the EI, the more time it takes to hit the extreme levels as compared to independent sampling. This property can be used to compare different sampling procedures. 
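As a rough numerical illustration of the two applications above (not taken from the paper's experiments), the sketch below evaluates the quantile of the maxima for the Pareto-type marginal and the normalized mean first hitting time, both for i.i.d. sampling ($\theta=1$) and for RW sampling ($\theta=1-2^{-\gamma}$); the sample budget $n$, the confidence level $\eta$ and the value of $\tau$ are arbitrary choices.

```python
# Quantile of the maximum degree and mean first hitting time as functions of
# the extremal index, for the marginal F(x) = 1 - (1 + (x - mu)/sigma)^(-gamma).
mu, sigma, gamma = 10.0, 15.0, 1.2     # marginal parameters of the model
n, eta, tau = 10_000, 0.05, 1.0        # illustrative sample budget and levels

def quantile_of_maximum(theta):
    # x_eta = F^{-1}((1 - eta)^(1/(n*theta))) for the Pareto-type marginal
    p = (1.0 - eta) ** (1.0 / (n * theta))
    return mu + sigma * ((1.0 - p) ** (-1.0 / gamma) - 1.0)

for theta in (1.0, 1.0 - 2.0 ** (-gamma)):          # i.i.d. vs. RW sampling
    x_eta = quantile_of_maximum(theta)
    hit = 1.0 / (theta * tau)                        # E[T_n]/n ~ 1/(theta*tau)
    print(f"theta={theta:.3f}  x_eta={x_eta:10.1f}  E[T_n]/n~{hit:.2f}")
```

The numbers reproduce the qualitative statements above: a smaller EI lowers the predicted maximal degree and lengthens the expected (normalized) time before an extreme level is first reached.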
Relation to mean cluster size
-----------------------------

If condition $D''(u_n)$ is satisfied along with $D(u_n)$, then asymptotically a run of consecutive exceedances follows each upcrossing, i.e., $\{X_n\}$ crosses the threshold $u_n$ at some time epoch, stays above $u_n$ for some time, then crosses $u_n$ downwards and stays below it until the next upcrossing of $u_n$ happens. Such a run is called a cluster of exceedances; it is more practically relevant than the blocks of exceedances introduced at the start of this section, and it is shown in [@Leadbetter_1989] that the two definitions of clusters are asymptotically equivalent, leading to the same cluster size distribution. The expected cluster size converges to the inverse of the extremal index [@Beirlant p. 384], i.e., $$\theta^{-1}=\lim_{n\to\infty}\sum_{j \geq 1}j\pi_n(j),$$ where $\{\pi_n(j),j\geq 1\}$ is the distribution of the size of a cluster of exceedances with $n$ samples. More details about the cluster size distribution and its mean can be found in [@Markovich2014].

Estimation of Extremal Index and Numerical results {#sec:simulations}
==================================================

This section introduces two estimators for the EI. Two types of networks are considered: a synthetic correlated graph and real networks (the Enron email network and the DBLP network). For the synthetic graph, we compare the estimated EI to its theoretical value. For the real networks, we calculate the EI using the two estimators. We take $\{X_i\}$ as the degree sequence and use RW, PR and RWJ as the sampling techniques. The methods described below are general, and are not specific to the degree sequence or to random walk based sampling.

Empirical Copula based estimator
--------------------------------

We have tried different estimators for the EI available in the literature [@Beirlant; @FerFer] and found that estimating the copula and then evaluating its derivative at $(1,1)$ works without the need to choose and optimize the several parameters required by other estimators. We assume that $\{X_i\}$ satisfies $D(u_n)$ and $D''(u_n)$, and we use the corresponding copula expression for the calculation of the EI. The copula $C(u,v)$ is estimated empirically by $$C_n(u,v)=\frac{1}{n}\sum_{k=1}^n \mathbb{I}\left(\frac{R_{i_k}^X}{n+1}\leq u, \frac{R_{i_k}^Y}{n+1}\leq v\right),$$ where $R_{i_k}^X$ denotes the rank of the element $X_{i_k}$ in $\{X_{i_k},1\leq k \leq n\}$, and $Y_{i_k}=X_{i_k+1}$. The sequence $\{X_{i_k}\}$ is chosen from the original sequence $\{X_i \}$ in such a way that $X_{i_k}$ and $X_{i_{k+1}}$ are sufficiently far apart to make them approximately independent. The large-sample distribution of $C_n (u, v)$ is normal and centered at the copula $C(u, v)$. To obtain $\theta$, we use linear least-squares fitting to find the slope at $(1,1)$, or cubic spline interpolation for better results.

Intervals Estimator
-------------------

This estimator does not assume any conditions on $\{X_i\}$, but requires an appropriate choice of the threshold parameter $u$. Let $N=\sum_{i=1}^{n} 1(X_i > u)$ be the number of exceedances of $u$, occurring at time epochs $1 \leq S_1 < \ldots < S_N \leq n$, and let the interexceedance times be $T_i = S_{i+1}- S_{i}$. Then the intervals estimator is defined as [@Beirlant p.
391], $$\hat{\theta}_n(u)=\left\{ \begin{array}{ll} \min(1,\hat{\theta}_n^1(u)), & \text{ if } \max\{T_i : 1 \leq i \leq N -1\} \leq 2,\\ \min(1,\hat{\theta}_n^2(u)), & \text{ if } \max\{T_i : 1 \leq i \leq N -1\} > 2, \end{array} \right.$$ where $$\hat{\theta}_n^1(u)=\frac{2(\sum_{i=1}^{N-1}T_i)^2}{(N-1)\sum_{i=1}^{N-1}T_i^2},$$ and $$\hat{\theta}_n^2(u)=\frac{2(\sum_{i=1}^{N-1}(T_i-1))^2}{(N-1)\sum_{i=1}^{N-1}(T_i-1)(T_i-2)}.$$ We choose $u$ as the $\delta$-percent quantile threshold, i.e., $\delta$ percent of $\{X_i,1\leq i \leq n\}$ falls below $u$. The EI is usually selected from the stability interval in the plot of $\theta$ against $\delta$.

Synthetic graph
---------------

The simulations in this section follow the bivariate Pareto model and the parameters introduced earlier. We use the same set of parameters as in Figure \[fig:trans\_kernel\_RW\], and the graph is generated according to the technique in Section \[subsubsec:graph-genern\]. For the RW case, Figure \[fig:ei\_copula\_estr\] shows the copula estimator together with the theoretical copula, which is based on the continuous bivariate distribution introduced earlier and is given by $$C(u,u)=\left(1+2((1-u)^{-1/\gamma}-1)\right)^{-\gamma}+2u-1.$$ Although the degree sequence takes quantized values, the estimated copula matches the theoretical copula well. The value of the EI is then obtained by cubic interpolation and numerical differentiation at the point $(1,1)$. For the theoretical copula, the EI is $1-1/2^{\gamma}$, with $\gamma=1.2$. Figure \[fig:ei\_ie\_estr\] compares the theoretical value of the EI with the intervals estimate. For the RWJ algorithm, Figure \[fig:ei\_ie\_estr\_RWJ\] shows the intervals estimate and the theoretical value for different $\alpha$, the latter computed from the expression derived earlier. The small difference between theory and simulation is due to the assumption of continuous degrees in the analysis, whereas in practice the degrees are quantized. Figure \[fig:ei\_ie\_estr\_PR\] displays the intervals estimate of the EI with PR sampling. It can be seen that the lower bound proposed in Proposition \[prop:PR\] becomes tighter as $c$ decreases.

### Check of condition $D''$ {#subsub:check_d2_emp}

The mixing conditions $D(u_n)$ and $D''(u_n)$ need to be satisfied in order to use the theory in Section \[sec:calcn\_EI\]. Although the intervals estimator does not require them, these conditions guarantee the existence of the EI. Condition $D(u_n)$ holds in this case, as explained in previous sections; for $D''(u_n)$ we perform the following empirical test. We collect samples for each of the techniques RW, PR and RWJ, and consider intervals of duration $5$, $10$, $15$ and $20$ time samples. The ratio of the number of upcrossings to the number of exceedances, $r_{\textrm{up}}$, and the ratio of the number of consecutive exceedances to the number of exceedances, $r_{\textrm{cluster}}$, are reported in Table \[tab:check\_d2\]. These proportions are averaged over $2000$ occurrences of each of these intervals and over all the different interval durations. The statistics in the table indicate that condition $D''(u_n)$ holds to a good approximation. We have also observed that changes in the parameters do not affect this conclusion.

        $r_{\textrm{up}}$ (\%)   $r_{\textrm{cluster}}$ (\%)
  ----- ------------------------ -----------------------------
  RW    $4$                      $89$
  PR    $7$                      $91$
  RWJ   $5$                      $86$

  : Test of Condition $D''$ in the synthetic graph[]{data-label="tab:check_d2"}

Real network
------------

We consider two real-world networks: the Enron email network and the DBLP network. The data is collected from [@snap].
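As an illustration of how the intervals estimator defined above might be applied to such a sampled degree sequence, a minimal Python sketch is given below; the function and variable names, as well as the quantile-based choice of the threshold $u$, are ours and are intended only as an illustration.

```python
import numpy as np

def intervals_estimator(x, u):
    """Intervals estimator of the extremal index at threshold u,
    for a one-dimensional array x of stationary samples (e.g. sampled degrees)."""
    s = np.flatnonzero(np.asarray(x) > u)   # time epochs of the exceedances
    n_exc = len(s)
    if n_exc < 2:
        return np.nan                        # not enough exceedances
    t = np.diff(s)                           # interexceedance times T_i
    if t.max() <= 2:
        est = 2.0 * t.sum() ** 2 / ((n_exc - 1) * np.sum(t ** 2))
    else:
        est = 2.0 * np.sum(t - 1) ** 2 / ((n_exc - 1) * np.sum((t - 1) * (t - 2)))
    return min(1.0, est)

# Example usage: take u as an upper quantile of the sampled degrees and scan
# several quantile levels, looking for a stability region in the estimates.
# degrees = np.array([...])                  # sampled degree sequence
# for q in (0.90, 0.95, 0.975, 0.99):
#     print(q, intervals_estimator(degrees, np.quantile(degrees, q)))
```

Scanning the quantile level and reading off the plateau in the resulting estimates corresponds to the stability-interval selection described above.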
Both networks pass the check for condition $D''(u_n)$ reasonably well. For RW sampling, Figure \[fig:cop\_estr\_enron\_dblp\] shows the estimated bivariate copula and the corresponding EI. The intervals estimator is presented in Figure \[fig:ie\_estr\_enron\_dblp\]. Based on the plateaus observed in the plots, we take the EI as $0.25$ and $0.2$ for the DBLP and Enron email graphs, respectively. In the case of RWJ sampling, Figures \[fig:ie\_estr\_enron\_RWJ\] and \[fig:ie\_estr\_dblp\_RWJ\] present the intervals estimator for the Enron email and DBLP graphs, respectively. The intervals estimates for PR sampling can be found in Figures \[fig:ie\_estr\_enron\_PR\] and \[fig:ie\_estr\_dblp\_PR\].

Conclusions {#sec:conclu}
===========

In this work, we have connected the extreme value theory of stationary sequences to the sampling of large networks. We show that for any stationary sequence of samples (a function of the node samples) satisfying two mixing conditions, knowledge of the bivariate distribution or the bivariate copula is sufficient to derive many of its extremal properties. The extremal index (EI) encapsulates this relation. We relate the EI to several relevant extremal quantities in networks, such as order statistics, the first hitting time and the mean cluster size. In particular, we model the correlation in the degrees of adjacent nodes and examine samples from random walks on the degree state space. Finally, we obtain estimates of the EI for a synthetic graph with degree correlations and find a good match with the theory; we also calculate the EI for two real-world networks. In the future, we plan to investigate the relation between the assortativity coefficient and the EI, and intend to study the EI in real networks in more detail.

Endnotes {#endnotes .unnumbered}
========

$^{(\textrm{a})}$ $F^{k}(.)$ denotes the $k$th power of $F(.)$ throughout the paper, except when $k=-1$, where it denotes the inverse function.\
$^{(\textrm{b})}$ ‘$\sim$’ stands for asymptotic equality, i.e. $f(x)\sim g(x)\Leftrightarrow f(x)/g(x)\rightarrow 1$ as $x\rightarrow a$, $x\in M$, where the functions $f(x)$ and $g(x)$ are defined on some set $M$ and $a$ is a limit point of $M$. $f(x)=o(g(x))$ means $\lim_{x\to a}f(x)/g(x)=0$. Also, $f(x)=O(g(x))$ indicates that there exist $\delta>0$ and $M>0$ such that $|f(x)| \le M |g(x)|$ for $|x - a| < \delta.$
--- abstract: 'We consider spread-out models of self-avoiding walk, bond percolation, lattice trees and bond lattice animals on $\Zd$, having long finite-range connections, above their upper critical dimensions $d=4$ (self-avoiding walk), $d=6$ (percolation) and $d=8$ (trees and animals). The two-point functions for these models are respectively the generating function for self-avoiding walks from the origin to $x \in \Zd$, the probability of a connection from $0$ to $x$, and the generating function for lattice trees or lattice animals containing $0$ and $x$. We use the lace expansion to prove that for sufficiently spread-out models above the upper critical dimension, the two-point function of each model decays, at the critical point, as a multiple of $|x|^{2-d}$ as $x {\rightarrow}\infty$. We use a new unified method to prove convergence of the lace expansion. The method is based on $x$-space methods rather than the Fourier transform. Our results also yield unified and simplified proofs of the bubble condition for self-avoiding walk, the triangle condition for percolation, and the square condition for lattice trees and lattice animals, for sufficiently spread-out models above the upper critical dimension.' author: - | Takashi Hara\ Graduate School of Mathematics\ Nagoya University\ Chikusa-ku\ Nagoya 464-8602, Japan\ [[email protected]]{}\ [http://www.math.nagoya-u.ac.jp/\~hara]{}\ - | Remco van der Hofstad\ Stieltjes Institute for Mathematics\ Delft University\ Mekelweg 4\ 2628 CD Delft, The Netherlands\ [[email protected]]{}\ [http://ssor.twi.tudelft.nl/\~hofstad]{}\ - | Gordon Slade\ Department of Mathematics\ University of British Columbia\ Vancouver, BC, Canada V6T 1Z2\ [[email protected]]{}\ [http://www.math.ubc.ca/people/faculty/slade/index.html]{} title: 'Critical two-point functions and the lace expansion for spread-out high-dimensional percolation and related models ' ---

Introduction {#sec-intro}
============

Critical two-point functions {#sec-1.1}
----------------------------

In equilibrium statistical mechanical models at criticality, correlations typically decay according to a power law, rather than exponentially as is the case away from the critical point. We consider models of self-avoiding walks, bond percolation, lattice trees and bond lattice animals on the lattice $\Zd$. Let $|x|$ denote the Euclidean norm of $x \in \Zd$. Assuming translation invariance, and denoting the critical two-point function for any one of these models by $U_{p_c}(x,y) = U_{p_c}(y-x)$, the power-law decay is traditionally written as $$U_{p_c}(x) \sim \frac{\mbox{const.}}{|x|^{d-2+\eta}}, \qquad \mbox{as } |x| {\rightarrow}\infty.$$ The critical exponent $\eta$ is known as the [*anomalous dimension*]{}, and depends on the model under consideration. Its value is believed to depend on $d$ but otherwise to be [*universal*]{}, which means insensitive to many details of the model’s definition. The above models have [*upper critical dimensions*]{} $$d_c = \left\{ \begin{array}{ll} 4 & \mbox{(self-avoiding walk)} \\ 6 & \mbox{(percolation)} \\ 8 & \mbox{(lattice trees and lattice animals),} \end{array} \right.$$ above which critical exponents cease to depend on the dimension. Our purpose in this paper is to prove the above power-law decay, with $\eta = 0$, for $d>d_c$, for certain long-range models having a small parameter. The small parameter is used to ensure convergence of the [*lace expansion*]{}. There is now a large literature on the lace expansion, but proving this decay for $d>d_c$ remained an open question. To make this paper more self-contained, a review of the basic steps in the derivation of the lace expansion will be included. All past approaches to the lace expansion have relied heavily on the Fourier transform of the two-point function.
We present a new approach to the lace expansion, based directly in $x$-space. Our approach provides a unified proof of convergence of the expansion, with most of the analysis applying simultaneously to all the models under consideration. There is one model-dependent step in the convergence proof, involving estimation of certain Feynman diagrams. The Feynman diagrams are model-specific, and converge when $d>d_c$. This is the key place where the assumption $d>d_c$ enters the analysis. We use a new method to estimate the relevant Feynman diagrams, based in $x$-space rather than using the Fourier transform. As we will explain in more detail below, weaker versions of have been obtained previously, for the Fourier transform of the two-point function. These statements for the Fourier transform follow as corollaries from our $x$-space results. In addition, our results immediately imply the bubble, square and triangle conditions for sufficiently spread-out models of self-avoiding walks, lattice trees and lattice animals, and percolation, for $d>d_c$. These diagrammatic conditions, which had been obtained previously using Fourier methods, are known to imply existence (with mean-field values) of various critical exponents. For $d \leq d_c$, it remains an open question to prove the existence of $\eta$. In fact, it has not been proved for self-avoiding walk nor for lattice trees or animals that $U_{p_c}(x)$ is even finite for $2 \leq d \leq d_c$. For percolation, the two-point function is a probability, so it is certainly finite. However, it has not been proved for $2 \leq d \leq 6$ that it approaches zero as $|x| {\rightarrow}\infty$, except for $d=2$ [@Kest82]. Such a result is known to imply absence of percolation at the critical point [@AKN87], which, for general dimensions, is an outstanding open problem in percolation theory. For self-avoiding walks, partial results suggesting that $\eta = 0$ for $d=d_c=4$ have been obtained in [@BEI92] for a hierarchical lattice and in [@IM94] for a variant of the Edwards model. Contrary to other critical exponents at the upper critical dimension, no logarithmic factors appear to leading order. It is believed that $\eta >0$ for self-avoiding walk for $2 \leq d <4$ [@MS93]. Interestingly, there is numerical evidence that $\eta <0$ for percolation when $3 \leq d <6$ [@AMAH90], and it has been conjectured that $\eta <0$ also for lattice trees and lattice animals when $2 \leq d < 8$ [@BFG86] (see also [@PS81] for $d=3$ and [@LI79] for $d =8-\epsilon$). The exponent $\eta$ is believed to be related to the exponents $\gamma$ for the susceptibility and $\nu$ for the correlation length by the scaling relation $\gamma =(2-\eta)\nu$. Some exact but nonrigorous values of $\gamma$ and $\nu$ have been predicted (see [@Grim99; @Hugh96; @MS93; @PS81]), which lead to the exact predictions $\eta = \frac{5}{24}$ for 2-dimensional self-avoiding walk and percolation, and $\eta = -1$ for 3-dimensional lattice trees and animals. Main results {#sub-1.2} ------------ The spread-out models are defined in terms of a function $D : \Zd {\rightarrow}[0,\infty)$, which depends on a positive parameter $L$. We will take $L$ to be large, providing a small parameter $L^{-1}$. We will consider only those $D$ which obey the conditions imposed in the following definition. \[def-Dsp1\] Let $h$ be a non-negative bounded function on $\Rd$ which is piecewise continuous, symmetric under the lattice symmetries, supported in $[-1, 1]^{d}$, and normalised so that $\int_{[-1, 1]^{d}}h(x) d^d x = 1$. 
Then for large $L$ we define $$D(x) = \frac{h(x/L)}{\sum_{y \in \Zd} h(y/L)}.$$ Since $\sum_{x\in \Zd}h(x/L) \sim L^{d}$ (using a Riemann sum approximation to $\int_{[-1, 1]^{d}}h(x)d^dx$), the assumption that $L$ is large ensures that the denominator above is nonzero. We also define $\sigma^2 = \sum_x |x|^2 D(x)$. The sum $\sum_x |x|^p D(x)$ can be regarded as a Riemann sum, and is asymptotic to a multiple of $L^p$ for $p>0$. In particular, $\sigma$ and $L$ are comparable. A basic example obeying the conditions of Definition \[def-Dsp1\] is given by the function $h(x)=2^{-d}$ for $x \in [-1,1]^d$, $h(x)=0$ otherwise, for which $D(x) = (2L+1)^{-d}$ for $x \in [-L,L]^d \cap \Zd$, $D(x)=0$ otherwise.

Next, we define the models we consider. Let $\Omega_D = \{ x \in \Zd : D(x) >0\}$. By Definition \[def-Dsp1\], $\Omega_D$ is finite and $\Zd$-symmetric. A [*bond*]{} is a pair of sites $\{x,y\} \subset \Zd$ with $y-x \in \Omega_D$. For $n \geq 0$, an $n$-step [*walk*]{} from $x$ to $y$ is a mapping $\omega : \{0,1,\ldots, n\} {\rightarrow}\Zd$ such that $\omega(0)=x$, $\omega(n)=y$, and $\omega(i+1)-\omega(i) \in \Omega_D$ for $i=0,1,\ldots, n-1$. We sometimes consider a walk to be a set of bonds, rather than a set of sites. Let $\Wcal(x,y)$ denote the set of walks from $x$ to $y$, taking any number of steps. An $n$-step [*self-avoiding walk*]{} is an $n$-step walk $\omega$ such that $\omega(i) \neq \omega(j)$ for each pair $i \neq j$. Let $\Scal(x,y)$ denote the set of self-avoiding walks from $x$ to $y$, taking any number of steps. A [*lattice tree*]{} is a finite connected set of bonds which has no cycles. A [*lattice animal*]{} is a finite connected set of bonds which may contain cycles. Although a tree $T$ is defined as a set of bonds, we write $x \in T$ if $x$ is an endpoint of some bond of $T$, and similarly for lattice animals. Let $\Tcal(x,y)$ denote the set of lattice trees containing $x$ and $y$, and let $\Acal(x,y)$ denote the set of lattice animals containing $x$ and $y$. Given a finite set $B$ of bonds and a nonnegative parameter $p$, we define its [*weight*]{} to be $$W_{p,D}(B) = \prod_{\{x,y\} \in B} pD(y-x).$$ If $B$ is empty, we set $W_{p,D}(\varnothing)=1$. The random walk and self-avoiding walk two-point functions are defined respectively by $$S_p(x) = \sum_{\omega \in \Wcal(0,x)} W_{p,D}(\omega), \qquad \sigma_p(x) = \sum_{\omega \in \Scal(0,x)} W_{p,D}(\omega).$$ For any $d>0$, $\sum_x S_p(x)$ converges for $p < 1$ and diverges for $p>1$, and $p=1$ plays the role of a critical point. It is well-known [@Uchi98] that, for $d>2$, $$S_1(x) \sim \frac{a_d}{\sigma^2 |x|^{d-2}}, \qquad \mbox{as } |x| {\rightarrow}\infty,$$ where $a_d$ is a dimension-dependent constant, so that $\eta = 0$. A standard subadditivity argument [@HM54; @Hugh95; @MS93] implies that $\sum_x \sigma_p(x)$ converges for $p<p_c$ and diverges for $p>p_c$, for some finite positive critical value $p_c$. The lattice tree and lattice animal two-point functions are defined by $$\rho_p (x) = \sum_{T \in \Tcal(0,x)} W_{p,D}(T), \qquad \rho_p^a (x) = \sum_{A \in \Acal(0,x)} W_{p,D}(A).$$ A standard subadditivity argument implies that there are positive finite $p_c$ and $p_c^a$ such that $\sum_x \rho_p^{(a)}(x)$ converges for $p<p_c^{(a)}$ and diverges for $p>p_c^{(a)}$ [@Klar67; @Klei81]. Turning now to bond percolation, we associate independent Bernoulli random variables $n_{\{x,y\}}$ to each bond $\{x,y\}$, with $$\Pbold(n_{\{x,y\}}=1)=p D(x-y) \quad \mbox{and} \quad \Pbold(n_{\{x,y\}}=0)=1- p D(x-y),$$ where $p \in [0, ( \max_x D(x))^{-1}]$. (Note that $p$ is not a probability.) A configuration is a realisation of the bond variables. Given a configuration, a bond $\{x,y\}$ is called [*occupied*]{} if $n_{\{x,y\}}=1$ and otherwise is called [*vacant*]{}.
Let $C(x)$ denote the random set of sites $y$ such that there is a path from $x$ to $y$ consisting of occupied bonds. The percolation [*two-point function*]{} is defined by \_p(x) = \_p(x C(0)), where $\Pbold_p$ is the probability measure on configurations induced by the bond variables. There is a critical value $p_c \in (0,1)$ such that $\sum_{x} \tau_p(x) < \infty$ for $p \in [0,p_c)$ and $\sum_{x} \tau_p(x) = \infty$ for $p \geq p_c$. This critical point can also be characterised by the fact that the probability of existence of an infinite cluster of occupied bonds is $1$ for $p>p_c$ and $0$ for $p<p_c$ [@AB87; @Mens86]. We use $U_p(x)$ to refer to the two-point function of all models simultaneously. We use $p_c$ to denote the critical points for the different models, although they are, of course, model-dependent. In what follows, it will be clear from the context which model is intended. Let a\_d = . We write $O(f(x,L))$ to denote a quantity bounded by $\mbox{const.} f(x,L)$, with a constant that is independent of $x$ and $L$ but may depend on $d$. We define $\epsilon$ by = 2(d-4) & ()\ d-6 & ()\ d-8 & () and write \_2 = 2. Our main result is the following theorem. \[thm-main\] Let $U_{p_c}(x)$ denote the critical two-point function for self-avoiding walk, percolation, lattice trees or lattice animals. Let $d > d_c$, and fix any $\alpha >0$. There is a constant $A= 1+O(L^{-2+\alpha})$ depending on $d$, $L$ and the model, and an $L_0$ depending on $d$, $\alpha$ and the model, such that for $L \geq L_0$ U\_[p\_c]{}(x) = as $|x| {\rightarrow}\infty$. Constants in the error terms for and $A-1$ depend on $\alpha$. We expect that Threorem \[thm-main\] remains true with $\alpha = 0$, but it is convenient in our analysis to allow a small power of $L$ or $|x|$ to enter into error estimates. Results closely related to Theorem \[thm-main\], for nearest-neighbour models in very high dimensions, are proved in [@Hara00] using a different method. The leading asymptotics of the critical random walk two-point function $S_1(x)$ are also given by , with $A=1$. This will be discussed in detail, in Proposition \[prop-A\] below. The second error term in represents an error term in the expansion for random walk, while the first error term represents the difference between random walk and the other models. The fact that the power $|x|^{2-d}$ appears as the leading power in , independent of the precise form of $D$ or the value of large $L$, is an illustration of universality. As was pointed out in Section \[sec-1.1\], it is a consequence of for percolation that there is no percolation at the critical point. In other words, for $d >6$ and for $L$ large, with probability $1$ there is no infinite cluster of occupied bonds when $p=p_c$. There are, however, large emerging structures present at $p=p_c$ that are loosely referred to as the incipient infinite cluster. The result of Theorem \[thm-main\] for percolation provides a necessary ingredient for work of Aizenman [@Aize97] in this regard. Roughly speaking, Aizenman showed that if a (then unproved) weaker statement than holds for $d>6$, then at $p_c$ the largest percolation clusters present within a box of side length $M$ are of size approximately $M^4$ and are approximately $M^{d-6}$ in number. Details can be found in [@Aize97]. Equation  now implies that Aizenman’s conclusions do hold for sufficiently spread-out models with $d>6$. The following corollary will follow immediately from Theorem \[thm-main\]. 
The conclusion of the corollary was proved previously in [@MS93] for self-avoiding walk, in [@HS90a] for percolation, and in [@HS90b] for lattice trees and lattice animals. The corollary is known to imply existence (with mean-field values) of various critical exponents [@AN84; @BA91; @MS93; @TH87]. \[cor-bub\] For $d > d_c$ and $L \geq L_0$, the self-avoiding walk bubble condition, the percolation triangle condition, and the lattice tree and lattice animal square conditions all hold. These diagrammatic conditions are respectively the statements that the following sums are finite: $$\sum_{x \in \Zd} \sigma_{p_c}(x)^2, \quad \sum_{x,y \in \Zd} \!\!\tau_{p_c}(0,x)\tau_{p_c}(x,y)\tau_{p_c}(y,0), \quad \sum_{w,x,y \in \Zd}\!\!\! \rho^{(a)}_{p_c^{(a)}}(w)\rho^{(a)}_{p_c^{(a)}}(x-w) \rho^{(a)}_{p_c^{(a)}}(y-x) \rho^{(a)}_{p_c^{(a)}}(y).$$ Theorem \[thm-main\] implies a related result for the Fourier transform of the critical two-point function. Given an absolutely summable function $f$ on $\Zd$, we denote its Fourier transform by (k) = \_[x ]{} f(x) e\^[ikx]{}, k \^d. In general, can be expected to correspond to \_[p\_c]{}(k) \~ , However, some care is required with this correspondence. In particular, if $\eta=0$ then $U_{p_c}(x)$ is not summable, and hence its Fourier transform is not well-defined. Moreover, the inverse Fourier transform of a function asymptotic to a multiple of $|k|^{-2}$, which does exist for $d>2$, is not necessarily asymptotic to a multiple of $|x|^{2-d}$ without further assumptions. A counterexample is given in [@MS93 page 32]. The situation is well-understood for random walk [@Spit76]. For $d >2$, it is the case that $S_1(x)$ is given by the inverse Fourier transform S\_1(x) = \_[\[-,\]\^d]{} . Therefore, it is reasonable to assert that \_1(k) = , even though $S_1(x)$ is not summable. Our assumptions on $D$ imply that $1-\hat{D}(k) \sim \sigma^2 |k|^2/(2d)$ as $k {\rightarrow}0$. Comparing with and , this gives the $k$-space version of the statement that $\eta = 0$ for random walk. For the models of Theorem \[thm-main\], we have the following corollary. A proof of the corollary will be given in Section \[sec-pfmain\]. The quantity $\hat{U}_{p_c}(k)$ appearing in the corollary represents the Fourier transform of the corresponding $x$-space two-point functions $U_{p_{c}}(x)$, in the sense that the $x$-space two-point functions are given by the inverse Fourier transform of the $k$-space quantities. It will be part of the proof to demonstrate this correspondence. Recall that $\epsilon_2 = \epsilon \wedge 2$. \[cor-k\] For $d > d_c$ and $L \geq L_0$, the Fourier transforms of the critical two-point functions of the models of Theorem \[thm-main\] obey \_[p\_c]{}(k) = , | \_[L]{}(k) | . |k|\^[\_2]{} & (2)\ . |k|\^[2]{} |k|\^[-1]{} & (= 2) as $k {\rightarrow}0$, with an $L$-dependent constant in the error term $\Delta_{L}$. The constant $A$ is the same as the constant of Theorem \[thm-main\]. The conclusion of Corollary \[cor-k\] for self-avoiding walk was established in [@MS93 Theorem 6.1.6], with $|\Delta_L(k)| \leq \mbox{const.}|k|^a$ for any $a < \frac{d-4}{2}\wedge 1$. For percolation, it was proved in [@HS00b Theorem 1.1] that, under the hypotheses of Corollary \[cor-k\], $\lim_{k {\rightarrow}0}|k|^2 \hat{\tau}_{p_c}(k) = A$, with no error estimate but with joint control in the limit $(k,h) {\rightarrow}(0,0)$, where $h$ is a magnetic field. 
For lattice trees, the conclusion of Corollary \[cor-k\] was implicitly proved in [@DS98], with $|\Delta_L(k)| \leq \mbox{const.}|k|^a$ for some unspecified $a>0$. The proof of Theorem \[thm-main\] also yields the following result for the asymptotic behaviour of the critical points of self-avoiding walk and percolation. We do not obtain such a result for lattice trees and lattice animals. Much stronger results have been obtained for nearest-neighbour self-avoiding walk and percolation by pushing lace expansion methods further [@HS95]. See [@Penr94] for related results obtained without using the lace expansion, including for lattice trees. \[cor-pc\] Let $\alpha >0$. For self-avoiding walk and percolation with $d > d_c$, as $L {\rightarrow}\infty$ 1 p\_c 1 + O(L\^[-2+]{}). In [@HS01a; @HS01b], is improved to $1 \leq p_c \leq 1 + O(L^{-d})$ for self-avoiding walk. Overview of the proof {#sub-prfoverview} --------------------- In this section, we isolate four propositions which will be combined in Section \[sec-pfmain\] to prove Theorem \[thm-main\]. We define $I$ by $I(x) = \delta_{0,x}$, and denote the convolution of two functions $f,g$ on $\Zd$ by (f\*g)(x) = \_[y ]{} f(x-y)g(y). Consider the random walk two-point function $S_z(x)$. By separating out the contribution from the zero-step walk, and extracting the contribution from the first step in the other walks, $S_z$ can be seen to obey the convolution equation S\_z = I + (z D\*S\_z). The lace expansion is a modification of this convolution equation, for the models we are considering, that takes interactions into account via a kind of inclusion-exclusion. To state the lace expansion in a unified fashion, a change of variables is required. This change of variables is explained in Section \[sec-le\]. To each $p \leq p_c$, we associate z = p & ()\ p \_[p]{}\^[(a)]{}(0) & (). We denote by $z_{c}$ the value which corresponds to $p_{c}$ in the above definition. It is possible in principle that $z_c=\infty$ for lattice trees and animals, but we will rule out this possibility in Section \[sec-pfmain\], and we proceed in this section under the assumption that $\rho_{p_c}^{(a)}(0) < \infty$. Since the right hand side of is increasing in $p$, it defines a one-to-one mapping. For $p=p(z)$ given by , we also define G\_z(x) = \_p(x) & ()\ \_p\^[(a)]{}(x)/\_p\^[(a)]{}(0) & ()\ \_p(x) & (). We will explain in Section \[sec-le\] how the lace expansion gives rise to a function $\Pi_z$ on $\Zd$ and to the convolution equation G\_z = I + \_z + (z D\*(I+\_z) \* G\_z). The function $\Pi_z$ is symmetric under the symmetries of $\Zd$. For self-avoiding walk, a small modification of the usual analysis [@BS85; @MS93] has been made to write the lace expansion in this form. (A remainder term in the percolation expansion will be shown to vanish, in Section \[sec-pfmain\].) The identity reduces to when $\Pi_z \equiv 0$. Our method involves treating each of the models as a small perturbation of random walk, and the function $\Pi_z(x)$ should be regarded as a small correction to $\delta_{0,x}$. As we will show in Section \[sec-pfmain\], $\Pi_z(x)$ is small uniformly in $x$ and $z \leq z_c$ for large $L$ and decays at least as fast as $|x|^{-(d+2+\epsilon)}$, when $d > d_c$. In particular, $\sum_x |x|^{2+s} |\Pi_z(x)|$ converges for $z \leq z_c$, for any $s < \epsilon$, so $\Pi_z$ has a finite $(2+s)$ moment. We assume the above bounds on $\Pi_z(x)$ in the remainder of this section. Equations  and can be rewritten as I = (I - D)\*S\_= G\_z - \_z - (zD\*(I+\_z) \* G\_z) . 
Let $\lambda \in \Rbold$. Writing G\_z = S\_+(I\*G\_z) - (I\*S\_) and using the first representation of for $I$ in $I*G_z$ and the second in $I*S_\mu$, we obtain G\_z = ((I+ \_z) \* S\_) + (S\_ E\_[z,, ]{} \*G\_z), with E\_[z,, ]{} = \[I - D\] - . By symmetry, odd moments of $E_{z,\mu,\lambda}(x)$ vanish. We fix $\lambda$ and $\mu$ so that the zeroth and second moments also vanish, i.e., \_[x ]{} E\_[z,, ]{}(x) = \_[x ]{} |x|\^2 E\_[z,, ]{}(x)=0. Here we are assuming, as discussed above, that $\Pi_z$ has finite second moment. Thus we take & = & \_[z]{} = ,\ & = & \_z = 1 - \_z\[1-z-z\_x \_z(x)\]. For simplicity, we will write $E_{z}(x) = E_{z,\mu_{z},\lambda_{z}}(x)$. Then becomes G\_[z]{}(x) = \_[z]{} ((I+ \_[z]{}) \* S\_[\_[z]{}]{})(x) + (S\_[\_[z]{}]{} \* E\_[z]{} \*G\_[z]{})(x). The critical point obeys the identity 1 - z\_c - z\_c \_x \_[z\_c]{}(x) = 0, and hence $\mu_{z_c}=1$. To see this, we sum over $x$ to obtain \_[x]{} G\_[z]{}(x) = . The left side is finite below the critical point, but diverges as $z \uparrow z_c$ [@AN84; @BFG86; @MS93]. Under the assumption made above on $\Pi_z$, the critical point thus corresponds to the vanishing of the denominator of . Using the decay of $\Pi_z$ in $x$, we will argue that, at $z_c$, the first term of gives $\lambda_{z_c}[1+ \sum_y \Pi_{z_c}(y)] S_1(x)$ as the leading behaviour of $G_{z_c}(x)$. The second term will be shown to be an error term which decays faster than $|x|^{-(d-2)}$. In terms of the Fourier transform, we understand this second term as follows. By our choice of the parameters $\lambda_z$ and $\mu_z$, $\hat{E}_{z_c}(k)$ should behave to leading order as $k^{2+a}$ for some positive $a$. Assuming that $\hat{G}_{z_c}(k)$ behaves like $k^{-2}$, and since $\hat{S}_1(k)$ behaves like $k^{-2}$ by , the second term of would be of the form $k^{-2+a}$, which should correspond to $x$-space decay of the form $|x|^{-(d-2+a)}$. Our proof will be based on this insight. The proof will require: 1. information about the asymptotics of $S_\mu(x)$ (model-independent), 2. an estimate providing bounds on the decay rate of a convolution in terms of the decay of the functions being convolved (model-independent), 3. a mechanism for proving that $\Pi_z(x)$ decays faster than $|x|^{-(d+2)}$ (model-dependent), and 4. given this decay of $\Pi_z(x)$, an upper bound guaranteeing adequate decay of $(S_{\mu_z}*E_z)(x)$ (model-independent). The third item is the part of the argument that is model-dependent. The restriction $d>d_c$ enters here, in the bounding of certain Feynman diagrams that are specific to the model under consideration. The first ingredient in the above list, namely asymptotics for the random walk generating function, is provided by the following proposition. The proof of the proposition is deferred to Section \[sec-G2ptfcn\]. More general results for the critical generating function $S_1(x)$ can be found in work of Uchiyama [@Uchi98]. However, we are unable to apply results of [@Uchi98] directly, since we need control of the parameter $L$ in our estimates that is not readily extractable from [@Uchi98]. Our proof of Proposition \[prop-A\] will also make use of an analysis of $S_\mu(x)$ for $\mu < 1$, which we will need in proving Proposition \[lem-C\] below. \[prop-A\] Let $d>2$, and suppose $D$ satisfies the conditions of Definition \[def-Dsp1\]. Then, for $L$ sufficiently large, $\alpha > 0$, $\mu \leq 1$ and $x \in \Zd$, S\_(x) & & \_[0,x]{}+ O(),\ S\_1(x) & = & +O( ) . In –, constants in error terms may depend on $\alpha$. 
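For orientation, we record a standard random walk identity related to Proposition \[prop-A\]; it plays only an illustrative role here. Since $|\hat{D}(k)| \leq 1$, and hence $|\mu \hat{D}(k)| < 1$ for $\mu < 1$, iteration of the convolution equation for $S_z$ above gives the absolutely convergent representation $$S_\mu(x) = \sum_{n=0}^{\infty} \mu^n (D^{*n})(x) = \int_{[-\pi,\pi]^d} \frac{e^{-ik\cdot x}}{1-\mu \hat{D}(k)} \, \frac{d^dk}{(2\pi)^d} \qquad (0 \leq \mu < 1),$$ where $D^{*n}$ denotes the $n$-fold convolution of $D$ with itself and $D^{*0}=I$. For $d>2$ the integral converges also at $\mu = 1$, and the corresponding formula for $S_1(x)$ reappears in Section \[sec-pfmain\].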
For the second ingredient in the list above, we will use the following proposition, whose estimates show that the decay rate of functions implies a corresponding decay for their convolution. The elementary proof of the proposition will be given in Section \[sec-conv\]. \[lem-conv\]\ (i) If functions $f, g$ on $\Zd$ satisfy $| f(x) | \leq (|x|+1)^{-a}$ and $| g(x) | \leq (|x|+1)^{-b}$ with $a \geq b >0$, then there exists a constant $C$ depending on $a, b, d$ such that | (f\*g)(x) | C (|x|+1)\^[-(a b)]{} & (a &gt; d )\ C (|x|+1)\^[d-(a + b)]{} & (a &lt; d a+b&gt;d) . \(ii) Let $d>2$, and let $f,g$ be functions on $\Zd$, where $g$ is $\Zd$-symmetric. Suppose that there are $A,B,C >0$ and $s>0$ such that f(x) & = & + O()\ | g(x) | && . Let $s_2 = s \wedge 2$. Then (f\*g)(x) = + e(x) with e(x) = O(C(A+B)(|x|+1)\^[-(d-2+s\_2)]{}) & (s 2)\ O(C(A+B)(|x|+2)(|x|+1)\^[-d]{}) & (s = 2), where the constant in the error term depends on $d$ and $s$. For the third ingredient, we will use the following proposition. The proof of the proposition involves model-dependent diagrammatic estimates, and is given in Section \[sec-Fd\]. \[prop-diagbd\] Let $q<d$, and suppose that G\_z(x) (|x|+1)\^[-q]{} (x 0) with $\beta /L^{q-d}$ bounded away from zero. Then for sufficiently small $\beta$ (which requires $L$ to be large) the following statements hold.\ (a) Let $z \leq 2$, and assume $\frac{1}{2}d<q <d$. For self-avoiding walk, there is a $c$ depending on $d$ and $q$ such that |\_z(x)| c \_[0,x]{} + . (b) Define $p=p(z)$ implicitly by , and fix a positive constant $R$. Let $z$ be such that $\rho_{p(z)}^{(a)}(0) \leq R$, and assume $\frac{3}{4}d < q < d$. For lattice trees or lattice animals, there is a $c$ depending on $d$, $q$ and $R$ such that |\_z(x)| c \_[0,x]{} + . (c) Let $z \leq 2$, and assume $\frac{2}{3}d < q <d$. For percolation, there is a $c$ depending on $d$ and $q$ such that |\_z(x)| c \_[0,x]{} + . The main hypothesis in Proposition \[prop-diagbd\] is an assumed bound on the decay of the two-point function. To motivate the form of the assumption, we first note that $G_z(x)$ cannot be expected to decay faster than $D(x)$. Let $\chi_L$ denote the indicator function of the cube $[-L,L]^d$. By Definition \[def-Dsp1\], D(x) O(L\^[-d]{}) \_L(x) O(L\^[q-d]{}(|x|+1)\^[-q]{}), and the upper bound is achieved when $|x|$ and $L$ are comparable. This helps explain the assumption that $\beta /L^{q-d}$ is bounded away from zero in the proposition. Note that $G_z(0)=1$ for all $z \leq z_c$. We will apply Proposition \[prop-diagbd\] with $q = d-2$. However, to do so, we will have to deal with the fact that *a priori* we do not know that holds for $z$ near $z_c$ with $q = d-2$. Note that, for $q = d-2$, the conditions on $d$ in the above proposition correspond to $d>d_c$, with $d_c$ given by . Also, using $\epsilon$ defined in , all three bounds of the lemma can be unified (after weakening the self-avoiding walk bound) in the form |\_z(x)| c \_[0,x]{} + . Note that $\epsilon > 0$ if and only if $d>d_c$. It is at this stage of the analysis, and *only* here, that the upper critical dimension enters our analysis. Finally, the fourth ingredient is the following proposition. Its proof is model-independent and is given in Section \[sec-CE\]. \[lem-C\] Fix $z \leq z_c$, $0<\gamma <1$, $\alpha >0$ and $\kappa > 0$. Let $\kappa_2 = \kappa \wedge 2$. Assume that $z \leq C$ and that $|\Pi_z(x)| \leq \gamma(|x|+1)^{-(d+2+\kappa)}$. 
Then there is a $c$ depending on $C, \kappa, \alpha$ but independent of $z,\gamma, L$ such that for $L$ sufficiently large | (E\_z \* S\_[\_z]{})(x) | cL\^[-d]{} & (x 0)\ c L\^[\_2]{} (|x|+1)\^[-(d+\_2 -)]{} & (). In , we are interested in the case where $\alpha$ is close to zero (and small compared to $\kappa_2$), so that the upper bound decays faster in $|x|$ than $|x|^{-d}$. It will be crucial in the proof of Proposition \[lem-C\] that $E_z$ is $\Zd$-symmetric, and that we have chosen $\lambda_z$ and $\mu_z$ such that the zeroth and second moments of $E_z$ vanish. The coefficients of terms of order $|x|^{2-d}$ and $|x|^{-d}$, which would typically be present in the convolution of a function decaying like $|x|^{-(d+2+\kappa)}$ with $S_{\mu_z}$, then vanish and hence are absent in the upper bound of . This can partially be seen from the first term of , where the leading term vanishes if and only if $\sum_y g(y)=0$. Proof of the main results {#sec-pfmain} ========================= In this section, we prove Theorem \[thm-main\] and Corollaries \[cor-bub\]–\[cor-pc\], assuming Propositions \[prop-A\]–\[lem-C\]. The proof will be based on the following elementary lemma. The lemma states that under an appropriate continuity assumption, if an inequality implies a stronger inequality, then in fact the stronger inequality must hold. \[lem-P4\] Let $f$ be a nonnegative function defined on an interval $[z_1,z_c)$, and let $a\in (0,1)$ be given. Suppose that 1. $f$ is continuous on the interval $[z_1,z_c)$. 2. $f (z_1) \leq a$. 3. for each $z \in (z_1,z_c)$, if $f(z) \leq 1$ then in fact $f(z) \leq a$. (In other words, one inequality implies a stronger inequality.) Then $f(z) \leq a$ for all $z \in [z_1,z_c)$. By the third assumption, $f(z) \nin (a,1]$ for all $z \in (z_1,z_c)$. By the first assumption, $f(z)$ is continuous in $z \in [z_1,z_c)$. Since $f(z_1) \leq a$ by the second assumption, the above two facts imply that $f(z)$ cannot enter the forbidden interval $(a,1]$ when $z \in (z_1,z_c)$, and hence $f(z) \leq a$ for all $z \in [z_1,z_c)$. We will employ Lemma \[lem-P4\] to prove the following proposition, which lies at the heart of our method. The proposition provides a good upper bound on the critical two-point function for nonzero $x$. There is an additional detail required in the proof for lattice trees and lattice animals, and we therefore treat these models separately from self-avoiding walk and percolation. The relevant difference between the models is connected with the fact that $\sigma_z(0)=\tau_z(0)=1$, whereas $\rho_p^{(a)}(0) >1$ and we do not know [*a priori*]{} that $\rho_{p_c}^{(a)}(0) < \infty$. In the proof, we establish the finiteness of $\rho_{p_c}^{(a)}(0)$. As usual, $\alpha$ should be regarded as almost zero in Proposition \[prop-P4P3\]. \[prop-P4P3\] Fix $d> d_c$ and $\alpha > 0$. For $L$ sufficiently large depending on $d$ and $\alpha$, G\_[z\_c]{}(x) (x 0). In addition, $z_c \leq 1+O(L^{-2+\alpha})$, and, for lattice trees and lattice animals, $\rho_{p_c}^{(a)}(0)<O(1)$. The constants in all the above statements depend only on $d$ and $\alpha$, and not on $L$. We prove the desired bound for $\alpha <\frac{\epsilon \wedge 1}{2}$, because the bound for large $\alpha$ follows from that for small $\alpha$. In the following, let $K$ denote the smallest constant that can be used in the error bound of , i.e.  $K = \sup_{L \geq 1,x \neq 0} L^{2-\alpha} (|x|+1)^{d-2} S_{1}(x) \in (0,\infty)$. 
[*Self-avoiding walk and percolation.*]{} We will prove that $G_z(x)$ obeys the upper bound of uniformly in $z < z_c$. This is sufficient, by the monotone convergence theorem. Let g\_x(z) = (2K)\^[-1]{}L\^[2-]{} (|x|+1)\^[d-2]{} G\_z(x), g(z) = \_[x 0]{} g\_x(z). For self-avoiding walk and percolation, we will employ Lemma \[lem-P4\] with $z_1=1$, f(z) = {g(z) ,z}, and $a$ chosen arbitrarily in $(\frac{1}{2},1)$. We verify the assumptions of Lemma \[lem-P4\] one by one, with the bound then following immediately from Lemma \[lem-P4\]. In the course of the proof, the desired upper bound on $z_c$ will be shown to be a consequence of a weaker bound than , in . Since the proof actually establishes , then follows. [*(i)*]{} Continuity of each $g_x$ on $[0,z_c)$ is immediate from the fact that $\sigma_z(x)$ is a power series with radius of convergence $z_c$, and from the continuity in $z$ of $\tau_z(x)$ proved in [@AKN87]. We need to argue that the supremum of these continuous functions is also continuous. For this, it suffices to show that the supremum is continuous on $[0,z_c-t)$ for every small $t >0$. It is a standard result that $\sigma_z(x)$ and $\tau_z(x)$ decay exponentially in $|x|$, with a decay rate that is uniform in $z \in [0,z_c-t)$ (though not in $L$) [@Grim99; @MS93]. Thus $g_x(z)$ can be made less than any $\delta >0$, uniformly in $z \in [0,z_c-t)$, by taking $|x|$ larger than some $R=R(L,t,\delta)$. However, choosing $x_0$ such that $D(x_0) >0$, we see that $g_{x_0}(z) \geq (2K)^{-1}L^{2-\alpha} (|x_0|+1)^{d-2} z D(x_0) \geq (2K)^{-1}L^{2-\alpha} (|x_0|+1)^{d-2} D(x_0) \equiv \delta_0$. Hence the supremum is attained for $|x| \leq R(L,t,\delta_0)$, which is a finite set, and hence the supremum is continuous and the first assumption of Lemma \[lem-P4\] has been established. [*(ii)*]{} For the second assumption of the lemma, we note that $\tau_1(x) \leq \sigma_1(x) \leq S_1(x)$ and apply the uniform bound of to conclude that $g(1) \leq 1/2$. Since we have restricted $a$ to be larger than $\frac{1}{2}$, this implies $f(1) <a$. [*(iii)*]{} Fix $z \in (1,z_c)$. We assume that $f(z) \leq 1$, which implies G\_z(x) (x 0). We will apply Proposition \[prop-diagbd\] with $q=d-2$ and $\beta = KL^{-2+\alpha}$. Since we have taken $\alpha < \frac{1}{2}$, we have $\beta \ll 1$ and $\beta/L^{q-d} = KL^\alpha \gg 1$ for sufficiently large $L$ depending on $\alpha$. Proposition \[prop-diagbd\] then implies that | \_z(x) | c K L\^[-2+]{} \_[0,x]{} + , where $\epsilon >0$ was defined in . It addition, for percolation, as argued at the end of Section \[sec-percdiagrams\], the remainder term $R_z^{(N)}(x)$ vanishes in the limit $N {\rightarrow}\infty$ under the assumption , yielding the form of the expansion. Summing over $x \in \Zd$ gives \_x G\_z(x) = &gt; 0, which is finite for $z<z_c$. The numerator is positive by , and hence the denominator is also positive. Therefore, since $z \leq 2$ by our assumption that $f(z) \leq 1$, implies that z &lt; 1- z \_x \_z(x) 1 +O(L\^[-2+]{}). Since $a \in (\frac{1}{2},1)$, this implies that $z<2a$ for all $z<z_c$, when $L$ is large. Thus, to prove that $f(z) \leq a$, it suffices to show that $g(z) \leq a$. The bound also implies that $\lambda_z$ and $\mu_z$ are well-defined by –, and that $\lambda_z {\rightarrow}1$ uniformly in $z \leq z_c$. Using the convolution bound of Proposition \[lem-conv\](i), , and the first bound of , it then follows that |(\_z \* S\_[\_z]{})(x)| = (x 0). 
By Proposition \[lem-C\] with $\kappa = 2 \alpha < \epsilon$ and $\gamma = cKL^{-2+\alpha}$, for $L$ large we have | (E\_z \* S\_[\_z]{})(x) | O(L\^[-2+-d]{}) & (x 0)\ O( L\^[-2+3]{})(|x|+1)\^[-(d+)]{} & (). Using the first bound for $0<|x| \leq L$ and the second bound for $|x| \geq L$, we conclude from this that | (E\_z \* S\_[\_z]{})(x) | O(L\^[-4+2]{}) (|x|+1)\^[-(d-2)]{} (x 0). By Proposition \[lem-conv\](i), and , it then follows that | (E\_z \* S\_[\_z]{}\*G\_z) (x) | & & | (E\_z \* S\_[\_z]{}) (x) | + \_[y 0]{}| (E\_z \* S\_[\_z]{})(x-y)| |G\_z(y) | & & = (x 0), where we have used to bound the first term in the first inequality. Using the fact that $\lambda_z = 1+o(1)$ as $L {\rightarrow}\infty$, and the definition of $K$, it then follows from the identity that for $L$ sufficiently large we have G\_z(x) (1+o(1)) S\_1(x) + (x 0). This yields $g(z) \leq a$, and completes the proof for self-avoiding walk and percolation. [*Lattice trees and lattice animals.*]{} We will first prove that $G_z(x)$ obeys the upper bound of uniformly in $z < z_c$. By and the fact that $h$ is bounded, there is a $\delta_1 \geq 1$ such that $D(x) \leq \delta_1 |\Omega_D|^{-1}$ for all $x$. The number of $n$-bond lattice trees or lattice animals containing the origin is less than the number $b_n(L)$ of $n$-bond lattice trees on the Bethe lattice of coordination number $|\Omega_D|$ (the uniform tree of degree $|\Omega_D|$), which contain the origin. A standard subadditivity argument, together with the fact that, as $L {\rightarrow}\infty$, $\lim_{n {\rightarrow}\infty}b_n(L)^{1/n} \sim e|\Omega_D| \leq 3|\Omega_D|$ (see, e.g., [@Penr94]), implies that $b_n(L) \leq (n+1)(3|\Omega_D|)^n$. Therefore, for lattice trees or lattice animals, \_p\^[(a)]{}(0) \_[n=0]{}\^(n+1)(3\_1 p)\^n = . Let $p_1 = \frac{1}{6\delta_1}$. We use $z_1 = p_1 \rho_{p_1}^{(a)}(0)$ in Lemma \[lem-P4\]. Note that $z_1$ is well-defined, since gives $\rho_p^{(a)}(0) \leq 4$ for $p \leq p_1$. In addition, implies that $p_c \geq (3\delta_{1})^{-1} > p_1$, so $z_c > z_1$. We again fix $a \in (\frac{1}{2},1)$, and we use the function $f(z)$ of in Lemma \[lem-P4\], taking now g(z) = \_[x 0]{} g\_[x]{}(z) g\_[x]{}(z) = L\^[2-]{} (|x|+1)\^[d-2]{} G\_[z]{}(x) , The desired bound on $G_z(x)$, for $z<z_c$, together with the desired bound on $z_c$, will follow once we verify the three conditions of Lemma \[lem-P4\]. We verify these conditions now. [*(i)*]{} Continuity of $f(z)$ follows from the exponential decay of $\rho_p^{(a)}(x)$ for $p<p_c$, as in the previous discussion, together with the continuity of $\rho_p^{(a)}(0)$ for $p<p_c$. [*(ii)*]{} By the remarks surrounding the definition of $z_1$, we have $\frac{z_1}{2} \leq \frac{4}{2\cdot 6\delta_1} \leq \frac{1}{3} < a$. Moreover, this implies $z_1 < \frac{2}{3} < 1$. It remains to show that G\_[z\_1]{}(x) (x 0). Since $\rho_{p_1}^{(a)} \geq 1$, we have $G_{z_1}(x) \leq \rho_{p_1}^{(a)}(x)$. Each lattice tree or lattice animal containing $0$ and $x$ can be decomposed in a non-unique way into a walk from $0$ to $x$ with a lattice tree or lattice animal attached at each site along the walk. Therefore $\rho_{p_1}^{(a)}(x) \leq \rho_{p_1}^{(a)}(0) S_{z_1}(x)$. Using $\rho_{p_1}^{(a)}(0) \leq 4$, it follows from Proposition \[prop-A\] that $G_{z_1}(x) \leq 4KL^{-2+\alpha}(|x|+1)^{-(d-2)}$, which implies . [*(iii)*]{} Fix $z \in (z_1,z_c)$. The assumption that $f(z) \leq 1$ implies the bound $\rho_{p}^{(a)}(0) \leq z/p_1 \leq 12\delta_1$, and we take $R = 12\delta_1$ in Proposition \[prop-diagbd\](b). 
We then proceed as in the discussion for self-avoiding walk and percolation. We obtain as before, so that $z < 2a$ as required. The proof of also proceeds as before. The above discussion proves that $G_z(x)$ is bounded by the right side of and that $\rho_p^{(a)}(0) \leq 4$, uniformly in $z<z_c$, and that $z_c \leq 1+O(L^{-2+\alpha})$. The proof is then completed by observing that $\lim_{p\uparrow p_c} \rho_p^{(a)}(x) = \rho_{p_c}^{(a)}(x)$, by monotone convergence. Proposition \[prop-P4P3\] establishes the hypotheses of Proposition \[prop-diagbd\], with $\beta$ proportional to $L^{-2+\alpha}$, $z=z_c$ and $q=d-2$. Hence the hypotheses of Proposition \[lem-C\] are also now established, with $z=z_c$, $\kappa = \epsilon$, and $\gamma =O(L^{-2+\alpha})$. The conclusion of Proposition \[lem-C\] has therefore also been established. Moreover, since Proposition \[prop-P4P3\] gives a bound on $G_z(x)$ uniformly in $z \leq z_c$, the bounds of Proposition \[prop-P4P3\] and Proposition \[lem-C\] hold uniformly in $z \leq z_c$. We will use this in the following. [*Proof of Theorem \[thm-main\].*]{} Fix $z=z_c$, and recall the observation below that $\mu_{z_c}=1$. Define H(x) = \_[z\_c]{} \_[n=0]{}\^ ( (I+\_[z\_c]{})\*(E\_[z\_c]{}\*S\_1)\^[\*n]{}) (x), where the superscript $*n$ denotes an $n$-fold convolution and $(E_{z_c}*S_1)^{*0}=I$. Propositions \[lem-conv\](i), \[prop-diagbd\] and \[lem-C\] guarantee that the series in converges absolutely, and that H(x) = \_[z\_c]{} \_[0,x]{} + O( ) with $\epsilon_2 = \epsilon \wedge 2$. Iteration of then gives G\_[z\_c]{}(x) = (S\_1\*H)(x). By Proposition \[lem-conv\](ii) and the asymptotic formula of , this yields G\_[z\_c]{}(x) = + O( ) + O( ), with $A=\hat{H}(0)$. This proves Theorem \[thm-main\], apart from the assertion that $A=1+O(L^{-2+\alpha})$. The constant $A=\hat{H}(0)$ can be evaluated as follows. Since is absolutely summable over $x$, the Fourier transform (k) = \_[z\_c]{} (1+\_[z\_c]{}(k)) \_[n=0]{}\^ \[\_[z\_c]{}(k) \_1(k)\]\^[n]{} is continuous in $k$. Using , the fact that $E_{z_c}(x)$ decays like $|x|^{-(d+2+ \epsilon)}$, and dominated convergence, we have \_[k 0]{} |k|\^[-2]{} \_[z\_c]{}(k) = \_[k 0]{} \_x E\_[z\_c]{}(x) |k|\^[-2]{} ((kx) - 1 - ) = 0. Since $\hat{S}_1(k)$ diverges like a multiple of $|k|^{-2}$ by , we conclude from and the conclusion of Proposition \[prop-diagbd\] that A = (0) = \_[k 0]{}(k) = \_[z\_c]{} (1+\_[z\_c]{}(0)) = = 1 + O(L\^[-2+]{}). [*Proof of Corollary \[cor-bub\].*]{} The corollary follows immediately from Theorem \[thm-main\] and the convolution bound of Proposition \[lem-conv\](i). To prove Corollary \[cor-k\], the following lemma will be useful. \[lem-Fourier1\] Let $f(x)$ be a $\Zd$-symmetric function which obeys the bound $| f(x) | \leq (|x|+1)^{-(d+2+\kappa)}$ with $\kappa >0$. Then (k) = (0) + \^2 (0) + e(k) with |e(k)| const. |k|\^[2+(2)]{} & (2)\ const. |k|\^[4]{} |k|\^[-1]{} & (= 2). By the $\Zd$-symmetry of $f(x)$, (k) = \_[x]{} f(x) (kx) = (0) + \^2 (0) + \_[x]{} ( ([kx]{}) - 1 + ) f(x) . The expression in brackets of the third term is bounded in absolute value both by $|k|^4|x|^4/24$ and $2+|k|^2 |x|^2/2$. The third term of is therefore bounded by \_[x: |x| |k|\^[-1]{}]{}|x|\^[4]{} | f(x)| + \_[x: |x| &gt; |k|\^[-1]{}]{} ( 2 + ) | f(x) | . Using the assumed upper bound on $f(x)$ then gives and completes the proof. [*Proof of Corollary \[cor-k\].*]{} We first assume $\epsilon \neq 2$, and comment on the minor modifications required for $\epsilon =2$ at the end of the proof. 
Let $\hat{F}_z(k) = 1 - z\hat{D}(k)(1+\hat{\Pi}_z(k))$. For $z<z_c$, as in we have \_z(k) = . As we have noted above, the bounds of Proposition \[prop-diagbd\] have been established with $q=d-2$, uniformly in $z \leq z_c$. Therefore by Lemma \[lem-Fourier1\], we have for $z \leq z_c$ 1 + \_[z]{}(k) & = 1 + \_[z]{}(0) + O\_L(|k|\^[2]{}),\ \_[z]{}(k) & = \_[z]{}(0) + \^2 \_[z]{}(0) + O\_L( |k|\^[2 + (2)]{}), with $L$-dependent error estimates. Also, as observed in , $\hat{F}_z(0) > 0$ for $z<z_c$. Thus we have the infra-red bound 0 &lt; \_z(k) O\_L(|k|\^[-2]{}) uniformly in $z < z_c$. Since $G_{z_c}(x)$ behaves like $|x|^{-(d-2)}$, it is not summable over $x$ and hence the summation defining $\hat{G}_{z_c}(k)$ is not well-defined. We define \_[z\_c]{}(k) = \_[z z\_c]{}\_z(k) = . This is a sensible definition, because $G_{z_c}(x)$ is then given by the inverse Fourier transform of $\hat{G}_{z_c}(k)$. In fact, using monotone convergence in the first step, and and the dominated convergence theorem in the last step (since $d\geq d_c +1 >2$), we have G\_[z\_c]{}(x) = \_[z z\_c]{} G\_z(x) = \_[z z\_c]{} \_[\[-,\]\^d]{} \_z(k) e\^[-ikx]{} = \_[\[-,\]\^d]{} \_[z\_c]{}(k) e\^[-ikx]{} . Since $\hat{F}_{z_c}(0)=0$ by , – then imply \_[z\_c]{}(k) = = . In the last equality, we used and . The case $\epsilon =2$ can be treated by adding an extra factor $\log|k|^{-1}$ to and . [*Proof of Corollary \[cor-pc\].*]{} Recall the elementary fact that for self-avoiding walk and percolation, $p_c=z_c \geq 1$. The corollary then follows immediately from Proposition \[prop-P4P3\]. (The bound $z_c \leq 1+O(L^{-2+\alpha})$ is uninformative concerning $p_c$ for lattice trees and lattice animals, since we have proved only that $\rho_{p_c}^{(a)}(0) \in [1,4]$.) It remains to prove Propositions \[prop-A\]–\[lem-C\]. After reviewing the lace expansion in Section \[sec-le\], these four propositions will be proved in Sections \[sec-G2ptfcn\], \[sec-conv\], \[sec-Fd\] and \[sec-CE\] respectively. The lace expansion ================== \[sec-le\] In this section, we review the key steps in the derivation of the lace expansion. In particular, we will describe how for each of our models the lace expansion gives rise to the convolution equation , which can be written as G\_[z]{}(x) = \_[0,x]{} + \_[z]{}(x) + (zD \* G\_[z]{})(x) + (\_[z]{}\*zD\*G\_[z]{})(x). For self-avoiding walk, the lace expansion was introduced by Brydges and Spencer in [@BS85]. Our treatment of the expansion for self-avoiding walk differs slightly from the usual treatment, to allow for a simultaneous treatment of lattice trees and animals. For percolation, and for lattice trees and lattice animals, the expansions were introduced by Hara and Slade in [@HS90a; @HS90b]. For overviews, see [@HS94; @MS93]. Proofs and further details can be found in the above references. This section, together with Section \[sec-Fd\], contains the model-dependent part of our analysis. Inclusion-exclusion {#sec-le.i-e} ------------------- The expansion can be understood intuitively as arising from repeated use of the inclusion-exclusion relation. We describe this now in general terms, postponing a more precise (but more technical) description to Sections \[sec-le.comb\]–\[sec-le.perc\]. The two-point function for each of the models under consideration is a sum, over geometrical objects, of weights associated with these objects. The geometrical objects are self-avoiding walks, lattice trees, or lattice animals containing the two points $0$ and $x$. 
This is the case also for percolation when $p<p_c$. For example, for the nearest-neighbour model $\tau_p(x)=\sum_{A \in \Acal(0,x)}p^{|A|}(1-p)^{|\partial A|}$ for $p<p_c$, where $\partial A$ represents the boundary bonds of $A$ and $|A|$ is the number of bonds in $A$. We view these geometrical objects as a string of mutually-avoiding beads, as depicted in Figure . For self-avoiding walk, the beads are simply lattice sites, whose mutual avoidance keeps the walk self-avoiding. For lattice trees, the string represents the unique path, or [*backbone*]{}, in the tree from $0$ to $x$, and the beads represent lattice trees corresponding to branches along the backbone. These branches are mutually-avoiding, to preserve the overall tree structure. For lattice animals and percolation, we need to introduce the notion of a pivotal bond. A bond $\{a,b\}$ in $A \in \Acal(x,y)$ is called [*pivotal*]{} for the connection from $x$ to $y$ if its removal would disconnect the animal into two connected components, with $x$ in one component and $y$ in the other. A lattice animal $A$ containing $x$ and $y$ is said to have a *double connection* from $x$ to $y$ if there are two bond-disjoint paths in $A$ between $x$ and $y$, or if $x=y$. For lattice animals, the string in the string of beads represents the pivotal bonds for the connection of $0$ and $x$. The beads correspond to the portions of the animal doubly-connected between pivotal bonds. The mutual avoidance of the beads is required for consistency with the pivotal nature of the pivotal bonds. This picture is the same both for lattice animals and for percolation. The basic idea of the lace expansion is the same in all four models. It consists in approximating the two-point function by a sum of weights of geometrical objects represented by a string of beads, with the interaction between the first bead and all subsequent beads neglected. This treats the model as if it were a Markov process. The approximation causes configurations which do not contribute to the two-point function to be included, and these undesired contributions are then excluded in a correction term. The correction term is then subjected to repeated and systematic further application of inclusion-exclusion. Let $\Dcal(x,y)$ denote the set of all animals having a double connection between $x$ and $y$, and, given a lattice animal, let $\Delta(x)$ denote the set of sites that are doubly-connected to $x$. We define \_p\^[(0)]{} (x) = { [ll]{} 0 & ()\ (1-\_[0,x]{}) \_[A (0,x)]{} W\_[p,D]{}(A) & ()\ (1-\_[0,x]{}) \_p( x (0) ) & () . and a\_p = { [ll]{} 1 & ()\ \_p\^[(a)]{}(0) & (). . The procedure described in the preceding paragraph is implemented by writing U\_p(x) = a\_p \_[0,x]{} + \_p\^[(0)]{}(x) + a\_p(pD\*U\_p)(x) + (\_p\^[(0)]{}\*pD\*U\_p)(x) + R\_p\^[(0)]{}(x). The terms on the right side can be understood as follows. The first term is the contribution due to the case when the string of beads consists of a single bead and $x=0$. The term $\psi_p^{(0)}(x)$ is the contribution due to the case when the string of beads consists of a single bead and $x \neq 0$. The convolutions correspond to the case where the string of beads consists of more than a single bead. The factors $a_p$ and $\psi_p^{(0)}$ together give the contribution from the first bead, the factor $pD$ is the contribution from the first piece of string, and the factor $U_p$ is the contribution of the remaining portion of the string of beads. These two terms neglect the interaction between the first bead and the subsequent beads. 
This is corrected by the correction term $R_p^{(0)}(x)$, which is negative. To understand the correction term, we first restrict attention to the combinatorial models, which excludes percolation. In this case, the correction term simply involves the contributions from configurations in which the first bead intersects some subsequent bead. The contribution due the case where the first such bead is actually the last bead is denoted $-\psi_p^{(1)}(x)$. If the first such bead is not the last bead, then suppose it is the $j^{\rm th}$ bead. The second through $j^{\rm th}$ beads are mutually avoiding, and the $(j+1)^{\rm st}$ through last bead are mutually avoiding, and these two sets of beads avoid each other. We neglect the mutual avoidance between these two sets of beads, making them independent of each other, and add a correction term to exclude the undesired configurations included through this neglect. This leads to the identity R\_p\^[(0)]{}(x) = -\_p\^[(1)]{}(x) - (\_p\^[(1)]{}\*pD\*U\_p)(x) + R\_p\^[(1)]{}(x). The inclusion-exclusion can then be applied to $R_p^{(1)}(x)$, and so on. For percolation, the above procedure can also be applied, but more care is needed in dealing with the probabilistic nature of the weights involved. The form of the terms arising in the expansion for percolation is, however, the same as the above. When the process is continued indefinitely, the result is U\_p(x) = a\_p \_[0,x]{} + \_p(x) + a\_p(pD\*U\_p)(x) + (\_p\*pD\*U\_p)(x), with \_p(x) = \_[N=0]{}\^(-1)\^N \_p\^[(N)]{}(x). The change of variables defined by – then gives our basic identity , once we define \_z(x) = a\_p\^[-1]{} \_p(x). Care is needed for convergence of . We require convergence at $p=p_c$, which demands in particular that the individual terms in the sum over $N$ are finite when $p=p_c$. This will be achieved by taking $d$ greater than the critical dimension $d_c$. The role of large $L$ is to ensure that the terms $\psi_{p_c}^{(N)}(x)$ are not only finite, but grow small with $N$ sufficiently rapidly to be summable. These issues are addressed in detail in Section \[sec-Fd\]. The above discussion has been at an informal level, to establish intuition for the expansion. Our next goal is to make this more precise. Self-avoiding walk, lattice trees and lattice animals {#sec-le.comb} ----------------------------------------------------- For the combinatorial models, an elegant formalism introduced by Brydges and Spencer [@BS85] can be used to make the discussion more precise, using the notion of [*lace*]{}. We discuss this now. Let $R$ be an ordered set $R_0, R_1, \ldots, R_l$ of lattice animals, with $l$ arbitrary. In particular, each $R_j$ may be simply a lattice tree or a single site. Given $R$, we define \_[st]{}(R) = { [rl]{} -1 &\ 0 & . In , the intersection is to be interpreted as the intersection of sets of [*sites*]{} rather than of bonds. For $0 \leq a \leq b$, we also define K\_R\[a,b\] = \_[a s&lt;tb]{}(1+\_[st]{}(R)). The two-point functions for self-avoiding walk, lattice trees, and lattice animals can be rewritten in terms of $K_R$. Given a finite set $B$ of bonds, we let $|B|$ denote its cardinality. For self-avoiding walk, we let $R$ consist of the sites along the walk (the ‘beads’), so that each $R_i$ is the single site $\omega(i)$. Then the two-point function can be written \_p(x) = \_[(0,x)]{} W\_[p,D]{}() K\_[R]{}\[0, ||\]. The sum is over all walks, with or without self-intersections, but $K_R$ is nonzero only for self-avoiding walks, for which $K_R=1$. 
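To make the role of the interaction factor concrete, the following minimal sketch (Python, for the nearest-neighbour model in $d=2$ rather than the spread-out model, and with the common weight $W_{p,D}(\omega)$ of the $n$-step walks omitted) enumerates all $n$-step walks from $0$ to $x$, evaluates $K_R[0,n]=\prod_{0\leq s<t\leq n}(1+\Ucal_{st}(R))$ with $\Ucal_{st}(R)=-1$ exactly when $\omega(s)=\omega(t)$, and checks that the resulting sum counts precisely the self-avoiding walks.

```python
import itertools

def walks(n, x, d=2):
    """All n-step nearest-neighbour walks on Z^d from 0 to x,
    with or without self-intersections (illustration only)."""
    steps = [tuple(s if j == i else 0 for j in range(d))
             for i in range(d) for s in (+1, -1)]
    for choice in itertools.product(steps, repeat=n):
        path = [(0,) * d]
        for step in choice:
            path.append(tuple(p + q for p, q in zip(path[-1], step)))
        if path[-1] == x:
            yield tuple(path)

def K(path):
    """K_R[0,n] = prod_{0 <= s < t <= n} (1 + U_st),
    with U_st = -1 exactly when omega(s) = omega(t)."""
    out = 1
    for s in range(len(path)):
        for t in range(s + 1, len(path)):
            if path[s] == path[t]:
                out *= 0          # 1 + U_st = 0 at a self-intersection
    return out

x, n = (2, 0), 4
weighted_sum = sum(K(p) for p in walks(n, x))
saw_count = sum(1 for p in walks(n, x) if len(set(p)) == len(p))
print(weighted_sum, saw_count)    # the two counts agree
```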
Thus $K_R$ provides the avoidance interaction. For a lattice tree $T \ni 0,x$, we let the $R_i$ denote the branches (the ‘beads’) along the backbone of $T$ joining $0$ to $x$. The two-point function for lattice trees can be written \_p(x) = \_[(0,x)]{} W\_[p,D]{}() K\_R\[0,||\] . The additional sums and product in , compared with , generate the branches attached along the backbone $\omega$, and the factor $K_R$ ensures that the branches do not intersect. For a lattice animal $A \in \Acal(x,y)$, there is a natural order to the set of pivotal bonds for the connection from $x$ to $y$, and each pivotal bond is directed in a natural way, as in the left to right order in Figure . Given two sites $x,y$ and an animal $A$ containing $x$ and $y$, the [*backbone*]{} of $A$ is defined to be the ordered set of directed pivotal bonds for the connection from $x$ to $y$. In general this backbone is not connected. Let $R$ denote the set $R_0,R_1,\ldots$ of connected components which remain after the removal of the backbone from $A$ (the ‘beads’). Let $B= ( (u_1, v_1), ... , (u_{|B|}, v_{|B|}) )$ be an arbitrary finite ordered set of directed bonds. Let $v_0 = 0$ and $u_{|B|+1} = x$. Then the two-point function for lattice animals can be written as \_p\^a (x) = \_[B: |B| 0]{} W\_[p,D]{}(B) K\_R\[0,|B|\]. The lace expansion proceeds by expanding out the product defining $K_R$, in each of –. An elementary but careful partial resummation is then performed, which leads to a result equivalent to that of the inclusion-exclusion procedure described in Section \[sec-le.i-e\]. We will review this procedure now, leading to precise definitions for $\psi_p(x)$ and hence, recalling , also for $\Pi_z(x)$. An essential ingredient is the following definition, in which the notion of lace is defined. It involves a definition of graph connectivity, which for self-avoiding walk has been relaxed in the following compared to the usual definition [@BS85; @MS93], to give a unified form of the expansion for all the models. \[def-lace\] Given an interval $I = [a,b]$ of positive integers, we refer to a pair $\{ s, t\}$ of elements of $I$ as an [*edge*]{}. For $s<t$, we write simply $st$ for $\{ s,t \}$. A set of edges is called a [*graph*]{}. The set of graphs on $[a,b]$ is denoted $\Gcal[a,b]$. A graph $\Gamma$ is said to be [*connected*]{} if, as intervals, $\cup_{st \in \Gamma}[s,t] = [a,b]$. A [*lace*]{} is a minimally connected graph, i.e., a connected graph for which the removal of any edge would result in a disconnected graph. The set of laces on $[a,b]$ is denoted by $\Lcal [a,b]$. Given a connected graph $\Gamma$, the following prescription associates to $\Gamma$ a unique lace ${\sf L}_\Gamma \subset \Gamma$: The lace ${\sf L}_\Gamma$ consists of edges $s_1 t_1, s_2 t_2, ...$ where, for $i \geq 2$, s\_1 = a , t\_1 = {t : at } t\_[i]{} = { t: st , s t\_[i-1]{} } s\_i = { s : st\_i } . The procedure terminates as soon as $t_N=b$. Given a lace $L$, the set of all edges $st \nin L$ such that ${\sf L}_{L\cup \{st\} } = L $ is called the set of edges [*compatible*]{} with $L$ and is denoted $\Ccal (L)$. For $0 \leq a<b$ we define J\_R \[a,b\] = \_[L ]{} \_[st L]{} \_[st]{}(R) \_[s’t’ (L) ]{} ( 1 + \_[s’t’]{}(R) ) . This has a nice interpretation in terms of the beads of Section \[sec-le.i-e\]. In that language, the product over $\Ccal(L)$ in is nonzero precisely when pairs of beads compatible with the lace $L$ avoid each other, as in the product defining $K_R$. 
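The prescription above is algorithmic, and a short sketch may help fix the conventions. The code below (Python) constructs the lace ${\sf L}_\Gamma$ associated to a connected graph $\Gamma$ on $[a,b]$ and determines its compatible edges by testing the condition ${\sf L}_{L\cup\{st\}}=L$ directly; since the displayed prescription is compressed, the maxima and minima used here follow the standard Brydges--Spencer construction and should be read as an assumption.

```python
def lace_of(graph, a, b):
    """Lace L_Gamma of a connected graph on [a,b]: s1 = a,
    t1 = max{t : (a,t) in Gamma}; for i >= 2,
    t_i = max{t : (s,t) in Gamma for some s <= t_{i-1}},
    s_i = min{s : (s,t_i) in Gamma}; stop once t_i = b.
    (Standard Brydges-Spencer choices of max/min assumed.)"""
    edges = {tuple(sorted(e)) for e in graph}
    t = max(tt for ss, tt in edges if ss == a)
    lace = [(a, t)]
    while t < b:
        t_new = max(tt for ss, tt in edges if ss <= t)
        s_new = min(ss for ss, tt in edges if tt == t_new)
        lace.append((s_new, t_new))
        t = t_new
    return lace

def compatible(lace, a, b):
    """Edges st not in the lace whose addition leaves the lace unchanged."""
    base = {tuple(sorted(e)) for e in lace}
    return [(s, t) for s in range(a, b) for t in range(s + 1, b + 1)
            if (s, t) not in base and set(lace_of(base | {(s, t)}, a, b)) == base]

gamma = [(0, 2), (1, 3), (2, 4), (3, 5)]   # a connected graph on [0,5]
L = lace_of(gamma, 0, 5)
print(L)                                   # [(0, 2), (2, 4), (3, 5)]
print(compatible(L, 0, 5))
```

In this language, the compatible edges returned above are exactly the pairs of beads whose mutual avoidance is retained in the product defining $J_R$.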
On the other hand, the product over $L$ is nonzero precisely when the pairs of beads corresponding to lace edges do intersect each other. The number $N=N(L)$ of edges in $L$ corresponds to the superscript in $\psi_p^{(N)}(x)$ in . The function $\psi_p(x)$ is defined, for the different models, by \^[[saw]{}]{}\_p (x) & = & \_ W\_[p,D]{}() J\_,\ \^[[lt]{}]{}\_p (x) & = & \_ W\_[p,D]{}() J\_R \[ 0, || \] ,\ \_p\^[[la]{}]{} (x) & = & (1-\_[0,x]{})\_[R (0,x)]{} W\_[p,D]{}(R)\ && + \_[B : |B| 1]{} W\_[p,D]{}(B) J\_R \[0, |B| \] , for any $p$ for which the right side converges. We then define $z$ in terms of $p$ as in , and introduce $G_z(x)$ and $\Pi_z(x)$ as in and . The following theorem gives the basic convolution equation for the combinatorial models. For self-avoiding walk, the proof involves a minor modification of the standard proof given in [@BS85; @MS93], to account for the relaxed definition of connectivity. For lattice trees and lattice animals, the proof is given in [@HS90b]. \[thm-Gexp\] For any $p<p_c$ for which the series defining $\psi_{p}(x)$ is absolutely summable over $x$ (with absolute values taken inside the sums in –), the convolution equations and hold. [*Sketch of proof.*]{} The proof relies on the elementary identity K\_R\[0,b\] = K\_R\[1,b\] + J\_R\[0,b\] + \_[a=1]{}\^[b-1]{} J\_R\[0,a\]K\_R\[a+1,b\], (b 1). To prove , we first expand the product in to obtain $K_R[0,b] = \sum_{\Gamma \in \Gcal[0,b]} \prod_{st \in \Gamma} U_{st}(R)$. Graphs with no edge containing $0$ contribute $K_R[1,b]$. Graphs with an edge containing $0$ are then partitioned according to the interval supporting the connected component containing $0$, and give rise to the remaining two terms in the identity. In the first term on the right side of , interactions between the first and subsequent beads do not occur, corresponding to the term $a_p(pD*U_p)(x)$ of . The second term gives rise to the term $\psi_p(x)$ of . (For lattice animals, the first term of arises from the case of just one bead, which does not appear in .) The last term represents an effective decoupling of the interaction between beads $0$ to $a$ and beads $a+1$ to $b$, and gives rise to the final term of . For $N \geq 1$, let $\Lcal^{(N)} [a,b]$ denote the set of laces in $\Lcal [a,b]$ consisting of exactly $N$ edges. We define J\^[(N)]{}\_R \[a,b\] = \_[L \^[(N)]{} \[a,b\]]{} \_[st L]{} \_[st]{}(R) \_[s’ t’ (L)]{} ( 1 + \_[s’ t’]{}(R) ) . For $N \geq 1$, the quantity $\psi_p^{(N)}(x)$ discussed in Section \[sec-le.i-e\] then corresponds to $(-1)^N$ times the contribution to – arising from the replacement of $J$ by $J^{(N)}$ in those formulas. This representation of $\psi_p^{(N)}(x)$ leads to the formula $\psi_p(x) = \sum_{N=0}^\infty (-1)^N \psi_p^{(N)}(x)$ (with the $N=0$ term arising only for lattice animals and given by the first term of ). This formula was discussed via the inclusion-exclusion approach in the discussion leading to . Percolation {#sec-le.perc} ----------- The lace expansion discussed in Section \[sec-le.comb\] is combinatorial in nature, but the expansion for percolation is inherently probabilistic. It relies entirely on inclusion-exclusion and does not make use of an interaction term $\Ucal_{st}$. It is interesting that an expansion based on such an interaction can be carried out for oriented percolation [@NY93], which has an additional Markovian structure not present in ordinary percolation. However, this has not been done outside the oriented setting. 
The expansion we present here, based on inclusion-exclusion, applies to quite general percolation models, including oriented percolation. Before giving a precise statement of the expansion, we first revisit the discussion of Section \[sec-le.i-e\]. The discussion of the first application of inclusion-exclusion can be recast as follows, in the context of percolation. Let $g_p^{(0)}(x) = \Pbold_p(x \in \Delta(0))$ denote the probability that $0$ and $x$ are doubly connected. If these two sites are not doubly connected, then there is a first pivotal bond $(u,v)$ for the connection. As in the discussion of lattice animals in Section \[sec-le.comb\], we may regard this bond as being directed. Let $F(0,u,v,x)$ denote the event that $0$ and $x$ are connected, but not doubly connected, and that $(u,v)$ is the first pivotal bond for the connection. We would like to approximate $\Pbold_p(F(0,u,v,x))$ by $(g_p^{(0)}*pD*\tau_p)(x)$, which treats the first bead in the string of beads as independent of the beads that follow. To discuss the error in this approximation, we will use the following definitions. \[def-percterms1\] (a) Given a set of sites $A \subset \Zd$ and a bond configuration, two sites $x$ and $y$ are [*connected in*]{} $A$ if there is an occupied path from $x$ to $y$ having all of its sites in $A$, or if $x=y \in A$. (b) The [*restricted two-point function*]{} is defined by $$\tau_{p}^A(x,y) = \Pbold_{p} (x \; \mbox{and} \; y \; \mbox{are connected in} \; \Zd \backslash A ).$$ (c) Given a bond $\{u,v\}$ and a bond configuration, we define $\tilde{C}^{\{u,v\}}(x)$ to be the set of sites which remain connected to $x$ in the new configuration obtained by setting $\{u,v\}$ to be vacant. It can be shown [@MS93 Lemma 5.5.4] that \_p(F(0,u,v,x)) = pD(v-u) , where $\Ebold$ denotes expectation with respect to $\Pbold_p$. The restricted two-point function in the above identity is a random variable, since the set $\tilde{C}^{\{u,v\}}(0)$ is random. The approximation discussed above amounts to replacing the restricted two-point function simply by $\tau_p(x-v)$, and gives \_p(F(0,u,v,x)) & = & g\_p\^[(0)]{}(u) p D(v-u) \_p(x-v)\ && - p D(v-u). To understand the correction term in , we introduce the following definition. Two sites $x$ and $y$ are [*connected through*]{} $A$ if they are connected in such a way that every occupied path from $x$ to $y$ has at least one bond with an endpoint in $A$, or if $x=y \in A$. Then, by definition, \_p(x-v) - \_p\^[A]{}(v,x) = \_p ( ). Therefore \_p(F(0,u,v,x)) & = & g\_p\^[(0)]{}(u) p D(v-u) \_p(x-v)\ && - p D(v-u). In , we encounter the occurrence of a nested expectation, corresponding to a pair of distinct percolation configurations. This is the analogue for percolation of the occurrence of independent strings of beads in the combinatorial models. The two percolation configurations interact with each other via the event in the inner expectation, which requires a specific kind of intersection between them. An example of a pair of configurations contributing to this nested expectation is depicted in Figure . In the figure, $(u',v')$ is the first pivotal bond for the connection from $v$ to $x$ such that $v$ is connected to $u'$ through $\tilde{C}^{\{u,v\}}(0)$. It is possible that there is no such pivotal bond, corresponding to a picture in which $u'=x$, and in that case no further expansion is performed. 
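Before continuing with the expansion, the set $\tilde{C}^{\{u,v\}}(x)$ of Definition \[def-percterms1\](c) can be made concrete. The sketch below (Python, on a small hypothetical bond configuration chosen purely for illustration) computes it by breadth-first search after the bond $\{u,v\}$ has been made vacant; calling the same routine with no vacant bond returns the ordinary cluster of $x$.

```python
from collections import deque

def cluster(occupied, x, vacant=None):
    """Sites connected to x by occupied bonds, with one bond optionally
    forced vacant; `occupied` is a set of frozensets {a, b} of sites."""
    vacant = frozenset(vacant) if vacant is not None else None
    adj = {}
    for bond in occupied:
        if bond == vacant:
            continue
        a, b = tuple(bond)
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {x}, deque([x])
    while queue:
        site = queue.popleft()
        for nbr in adj.get(site, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# hypothetical occupied bonds on Z^2 (illustration only)
occupied = {frozenset(b) for b in [((0, 0), (1, 0)), ((1, 0), (2, 0)),
                                   ((2, 0), (2, 1)), ((0, 0), (0, 1)),
                                   ((0, 1), (1, 1)), ((1, 1), (1, 0))]}
u, v = (1, 0), (2, 0)
print(cluster(occupied, (0, 0)))                  # the full cluster of 0
print(cluster(occupied, (0, 0), vacant={u, v}))   # C-tilde^{u,v}(0)
```

We now return to the expansion.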
In the case where there is such a pivotal bond, we perform the expansion again by treating the portion of the cluster of $x$ following $u'$ as independent of the portion preceding $u'$, in a manner similar to the first application of inclusion-exclusion performed above. This is discussed in detail in [@HS90a; @MS93], and we now just state the conclusion. In doing so, we will use subscripts to coordinate random sets with the corresponding expectations. For example, we write the subtracted term in as p D(v-u) \_0 to emphasise that the set occurring in the inner expectation is a random set with respect to the outer expectation. We will also make use of the following definition. Given sites $x,y$ and a set of sites $A$, let $E(x,y;A)$ be the event that $x$ is connected to $y$ through $A$ and there is no directed pivotal bond for the connection from $x$ to $y$ whose first endpoint is connected to $x$ through $A$. We make the abbreviation $I_j = I[E(y_j',y_{j+1};\tilde{C}_{j-1})]$ with $\tilde{C}_{j-1} = \tilde{C}^{\{y_j,y_j'\}}_{j-1}(y_{j-1}')$ and $y_0'=0$, and we write $p_{u,v} = pD(v-u)$. In this notation, the situation with $u'=x$ discussed in the previous paragraph makes a contribution to equal to p\_[u,v]{} \_0 . Let \_[p]{}\^[(0)]{}(x) = (1-\_[0,x]{}) \_[p]{} ( x (0) ). For $n \geq 1$, we define \_[p]{}\^[(n)]{}(x) & = & \_[(y\_1,y\_1’)]{} p\_[y\_1,y\_1’]{} …\_[(y\_n,y\_n’)]{} p\_[y\_n,y\_n’]{} \_0 ( I\[y\_1 (0)\]\ && \_1 I\_1 \_2 I\_2 \_3 I\_3 …\_[n-1]{} I\_[n-1]{} \_n I\[E(y\_n’,x;\_[n-1]{})\] ) , where the sums are over directed bonds and [*all*]{} the expectations are nested. Define \_[p]{}\^[(n)]{}(x) = \_[j=0]{}\^n (-1)\^j \_[p]{}\^[(j)]{}(x) and R\_[p]{}\^[(n)]{}(x) & = & \_[(y\_1,y\_1’)]{} p\_[y\_1,y\_1’]{} … \_[(y\_[n+1]{},y\_[n+1]{}’)]{} p\_[y\_[n+1]{},y\_[n+1]{}’]{} \_0 ( I\[y\_1 (0)\]\ && \_1 I\_1 \_2 I\_2 …\_n ( I\_[n]{} (\_[p]{}(x-y\_[n+1]{}’) - \_[p]{}\^[\_n]{}(y\_[n+1]{}’,x) ))). The following theorem is proved in [@HS90a]; see also [@HS94; @MS93]. \[thm-percexp\] For $p<p_c$ and $N \geq 0$, \_[p]{}(x) = \_[0,x]{} + \_[p]{}\^[(N)]{}(x) + ( p D \* \_[p]{}) (x) + ( \_[p]{}\^[(N)]{} \* p D \* \_[p]{}) (x) +(-1)\^[N+1]{}R\_[p]{}\^[(N)]{}(x). As we will show in Section \[sec-percdiagrams\], the limit $N {\rightarrow}\infty$ can be taken in under the hypotheses of Proposition \[prop-diagbd\](c), with the remainder term vanishing in the limit. Defining $\Pi_{p}(x) = \psi_p(x) = \sum_{j=0}^\infty (-1)^j \psi_{p}^{(j)}(x)$, then becomes \_[p]{}(x) = \_[0,x]{} + \_[p]{}(x) + ( p D \* \_[p]{}) (x) + ( \_[p]{} \* p D \* \_[p]{}) (x) . This is equivalent to , with $z=p$ and $G_{z}(x) = \tau_{p}(x)$. Lace expansion diagrams {#sec-Fd} ======================= We begin in Section \[sec-sawdiagrams\] by recalling the well-established procedure by which the lace expansion for self-avoiding walk gives rise to diagrammatic upper bounds for $\psi_p^{(N)}(x)$ [@BS85; @MS93]. We then bound these diagrams to prove Proposition \[prop-diagbd\](a). For the other models, there are also diagrammatic upper bounds. These bounds can be expressed in the form $\psi_{p}^{(N)}(x) \leq M^{(N)}(x,x)$, where $M^{(N)}(x,y)$ is a recursively defined function having a diagrammatic interpretation. In Section \[subsub-general\], we prove Lemma \[lem-Mbound\], a key lemma that will be used to bound $M^{(N)}(x,y)$. 
In Sections \[sec-ltdiagrams\]–\[sec-percdiagrams\], we recall the well-established procedure by which the expansions of Section \[sec-le\] give rise to diagrams [@HS90a; @HS90b] for lattice trees, lattice animals and percolation. We will not provide complete proofs here but attempt only to motivate the diagrams. Once the diagrams have been identified, we estimate them using Lemma \[lem-Mbound\]. This will provide a proof of Proposition \[prop-diagbd\](b-c). In addition, in Section \[sec-percdiagrams\], we will argue that, for percolation, $\lim_{N{\rightarrow}\infty}R^{(N)}_p(x)=0$ under the hypotheses of Proposition \[prop-diagbd\](c). Our bounds here are in contrast to all previous diagrammatic estimates in lace expansion analyses, which have been for $\sum_x \psi_p(x)$ rather than for fixed-$x$ quantities [@BS85; @HS90a; @HS90b]. Self-avoiding walk diagrams {#sec-sawdiagrams} --------------------------- For self-avoiding walk, simplifies to \_p\^[(N)]{} (x) = (-1)\^N \_ W\_[p,D]{}() J\_R\^[(N)]{} \[ 0, || \] . The diagrammatic representation of an expression of the form has been discussed many times in the literature, for example in [@BS85; @MS93]. Here we focus on the differences that arise because of the weakened definition of connectivity used in Definition \[def-lace\]. The factor $\prod_{st \in L}\Ucal_{st}$ in $J^{(N)}$ imposes $N$ bead intersections, which are self-intersections of the random walk. These self-intersections divide the underlying time interval into subintervals, as illustrated in Figure (a). The factor $\prod_{st \in \Ccal(L)}(1+\Ucal_{st})$ in $J^{(N)}$ is then bounded by replacing each factor $1+\Ucal_{st}$ for which $s$ and $t$ lie in distinct subwalks by the factor $1$. This produces a bound that can be interpreted as involving a self-avoiding walk on each time subinterval, with no interaction between the walks corresponding to distinct time intervals. For example, the lace of Figure (a) gives rise to the diagram of Figure (b). A simplification for these diagrams occurs in the case where two lace edges abut and do not overlap. In this case, after discarding the interaction between distinct subwalks, the interaction decouples across the time coordinate where an abuttal occurs. If we define $\pi_p^{(N)}(x)$ to be the contribution to the summation in only from laces with no abuttal, then we are led to 0 \_p\^[(N)]{}(x) \_[m=1]{}\^N \_ (\_p\^[(n\_1)]{} \* \_p\^[(n\_m)]{})(x). The quantity $\pi_p^{(n)}(x)$ is the quantity that has appeared in previous lace expansion analyses [@BS85; @MS93]. We encounter $\psi_p$ instead, because we have used a definition of graph connectivity in Definition \[def-lace\] that is relaxed compared to previous analyses, to achieve a unified treatment with lattice trees and lattice animals. We will bound $\psi_p^{(N)}(x)$ by combining a bound on $\pi_p^{(n)}(x)$ with Proposition \[lem-conv\](i). To bound $\pi_p^{(n)}(x)$, we define ’\_[p]{}(x) & = \_[p]{}(x) - \_[0,x]{},\ A(u,v,x,y) & = \_p’(v-u) \_p(y-u) \_[v,x]{},\ M\^[(2)]{}(x, y) & = \_[p]{}’(x)\^2\_p(y),\ M\^[(n)]{}(x, y) & = \_[u,v]{} M\^[(n-1)]{}(u,v)A(u,v,x,y) (n 3). The standard bounds of [@BS85] can then be written as 0 & \_p\^[(1)]{}(x) \_[0,x]{}\_[v \_D]{} pD(v) \_p’(v) ,\ 0 & \_p\^[(n)]{}(x) M\^[(n)]{}(x,x) (n 2). The power $3q$ in the desired decay $(|x|+1)^{-3q}$ can be understood from the fact that there are three distinct routes from $0$ to $x$ in the diagrams for $M^{(n)}(x,x)$; see Figure (c). 
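To see the recursion and the claimed decay at work, the following sketch (Python) evaluates $M^{(2)}$ and $M^{(3)}$ on a truncated box, using the model two-point function $\sigma_p(x)=(|x|+1)^{-q}$; for speed the example is run in $d=2$ with $2q>d$, an illustrative assumption only, since the proposition itself concerns $d$ above the critical dimension. The factor $\delta_{v,x}$ in $A(u,v,x,y)$ collapses one of the two summations, and the diagonal values $M^{(3)}(x,x)$ are compared with $(|x|+1)^{-3q}$.

```python
import itertools
import numpy as np

R, q = 5, 1.5                        # truncation radius; decay power with 2q > d = 2
pts = list(itertools.product(range(-R, R + 1), repeat=2))

def sigma(x):                        # toy two-point function (illustration only)
    return (np.hypot(*x) + 1.0) ** (-q)

def sigma_prime(x):                  # sigma'_p(x) = sigma_p(x) - delta_{0,x}
    return sigma(x) - (1.0 if x == (0, 0) else 0.0)

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

# M^{(2)}(x,y) = sigma'(x)^2 sigma(y)
M2 = {(x, y): sigma_prime(x) ** 2 * sigma(y) for x in pts for y in pts}

# M^{(n)}(x,y) = sum_{u,v} M^{(n-1)}(u,v) A(u,v,x,y); the delta_{v,x} in A
# leaves a single sum over u, truncated here to the box
M3 = {(x, y): sum(M2[(u, x)] * sigma_prime(sub(x, u)) * sigma(sub(y, u))
                  for u in pts)
      for x in pts for y in pts}

for x in [(1, 0), (3, 0), (5, 0)]:
    ratio = M3[(x, x)] * (np.hypot(*x) + 1.0) ** (3 * q)
    print(x, M3[(x, x)], ratio)      # ratio stays bounded: the (|x|+1)^{-3q} decay
```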
[*Proof of Proposition \[prop-diagbd\](a).*]{} For self-avoiding walk, we have $z=p$, $G_z(x)=\sigma_p(x)$ and $\Pi_z(x)=\psi_p(x)$. The hypotheses of the proposition are that $\sigma_p'(x) \leq \beta(|x|+1)^{-q}$, with $2q>d$, and that $p \leq 2$. Since $\sigma_p(0)=1$, it follows that $\sigma_p(x) \leq (|x|+1)^{-q}$. (The lower bound on $\beta L^{q-d}$ assumed in Proposition \[prop-diagbd\] is not needed for self-avoiding walk.) We must show that \_p(x) c\_[0,x]{} + c\^3 (|x|+1)\^[-3q]{}. By definition of $\sigma_p'$ and the hypotheses, it follows from that 0 \_p\^[(1)]{}(x) 2\^[1-q]{} \_[0,x]{}. By and hypothesis, A(u,v,x,y) \_[v,x]{} . Let S(x) = \_[y ]{} , |[S]{} = \_[x ]{} S(x). Note that $\bar{S} < \infty$ if $2q>d$, by Proposition \[lem-conv\](i). Diagrammatically, $S(x)$ corresponds to an open bubble, and the condition $\bar{S}<\infty$ is closely related to the bubble condition [@MS93 Section 1.5]. We will show that implies there is a constant $C$ such that M\^[(n)]{}(x,y) \^n (C|[S]{})\^[n-2]{} (n 2). This gives the conclusion of Proposition \[prop-diagbd\](a), apart from the fact that the $n=2$ term here has a factor $\beta^2$ rather than the required $\beta^3$. The missing factor of $\beta$ can be recovered by noting that, by definition, $M^{(2)}(x,x)$ can be written as $\sigma_p'(x)^3$, and we obtain one factor of $\beta$ for each factor of $\sigma_p'$. To prove , we use induction on $n$. The case $n=2$ follows immediately from and the assumed bound on $\sigma_p'$. To advance the induction, we assume for $n-1$ and show that it holds also for $n$. The inductive hypothesis and then give M\^[(n)]{}(x,y) \_[u]{} (n 3). It therefore suffices to show that there is a $C$ for which \_[u]{} (n 3). To prove , we consider four cases.\ [*Case 1*]{}: $|u| \geq |x|/2$ and $|u| \geq |y|/2$. In this contribution to , we may bound the factor $(|u|+1)^{-2q}$ above by $2^{2q} (|x|+1)^{-q}(|y|+1)^{-q}$. The remaining summation over $u$ is then bounded above by $\bar{S}$, as required.\ [*Case 2*]{}: $|u| \geq |x|/2$ and $|u| \leq |y|/2$. The second inequality implies that $|y-u| \geq |y|/2$. We then argue as in Case 1.\ [*Case 3*]{}: $|u| \leq |x|/2$ and $|u| \geq |y|/2$. This is the same as Case 2, by symmetry.\ [*Case 4*]{}: $|u| \leq |x|/2$ and $|u| \leq |y|/2$. This follows as above, using $|y-u| \geq |y|/2$ and $|x-u| \geq |x|/2$. The diagram lemma {#subsub-general} ----------------- In this section, we present a lemma that will be useful for the diagrammatic estimates for lattice trees, lattice animals and percolation. It involves a constant $\bar{S}$, which is defined for $q_1, q_2 >0$ by S(x,y)= \_[u,v ]{} , |[S]{} = \_[x,y ]{} S(x,y). It is possible that $\bar{S}=\infty$, depending on the values of $d$ and the $q_i$. However, if 2q\_1+d &gt; 2q\_1+q\_2 &gt; 2d then $\bar{S}<\infty$. In fact, given , it follows from Proposition \[lem-conv\](i) that S(x,y) . Finiteness of $\bar{S}$ is related to the triangle condition for percolation [@AN84] and to the square condition for lattice trees and lattice animals [@TH87]. To see this, we first note the diagrammatic representation of $S(x,y)$ in figure (a). When $q_1=q_2=q$, which is the relevant case for percolation, this corresponds to the open triangle diagram depicted in Figure (b). When $q_1=q$ and $q_2=2q-d$, which is the relevant case for lattice trees and lattice animals, $S$ corresponds to the open square diagram depicted in Figure (c). 
To understand this for the square diagram, we interpret the line decaying with power $q_2$ as arising from a convolution of two two-point functions decaying with power $q$, in accordance with Proposition \[lem-conv\](i). The following lemma is the key lemma that will be used in bounding diagrams for lattice trees, lattice animals and percolation. Its statement involves functions ${A^{(0)}}: {{\mathbb Z}}^{2d}{\rightarrow}[0,\infty)$, $A^{(i)}:{{\mathbb Z}}^{4d}{\rightarrow}[0,\infty)$ for $i \geq 1$, ${A^{(\text{end})}}:{{\mathbb Z}}^{4d}{\rightarrow}[0,\infty)$, and functions $M^{(N)}:{{\mathbb Z}}^{2d} {\rightarrow}[0,\infty)$ defined for $N \geq 1$ by M\^[(N)]{}(x,y) &= \_[u\_1,v\_1,…,u\_[N]{},v\_[N]{} ]{} [A\^[(0)]{}]{}(u\_[1]{},v\_[1]{}) \_[i=1]{}\^[N-1]{} A\^[(i)]{}(u\_[i]{},v\_[i]{},u\_[i+1]{},v\_[i+1]{}) [A\^[()]{}]{}(u\_[N]{},v\_[N]{},x, y). (For $N=1$, the empty product over $i$ is interpreted as $1$.) The proof of Lemma \[lem-Mbound\] can be extended to $q_1<d$ and $q_2$ obeying , but since $q_2 \leq q_1$ in our applications, we add this assumption to simplify the proof. \[lem-Mbound\] Fix $q_2\leq q_1 <d$ obeying , so that $\bar{S} < \infty$. Let $K_0>0$. Suppose that [A\^[(0)]{}]{}(x,y) K\_0 {+ }, and suppose that $A^{(i)}$ for $i \geq 1$ and ${A^{(\text{end})}}$ satisfy A\^[(\*)]{}(u,v,x,y) {+ } with $K_*>0$. Then there is a $C$ depending on $d,q_1,q_2$ such that for $N \geq 1$ M\^[(N)]{}(x,y) ( C |[S]{})\^[N-1]{} (\_[i=0]{}\^[N-1]{}K\_i ) K\_[end]{} {+ }. The proof is by induction on $N$. To deal with the fact that $M^{(N)}$ is not defined literally by a convolution of $M^{(N-1)}$ with ${A^{(\text{end})}}$, we proceed as follows. Let $\tilde{M}^{(N)}$ be the quantity defined by replacing ${A^{(\text{end})}}$ by $A^{(N)}$ in the definition of $M^{(N)}$. Because all the constituent factors in the definitions of $M^{(N)}$ and $\tilde{M}^{(N)}$ obey the same bounds, it suffices to prove that $\tilde{M}^{(N)}$ obeys with $K_{\rm end}$ replaced by $K_N$. We prove this by induction, with the inductive hypothesis that $\tilde{M}^{(N-1)}$ obeys with $K_{\rm end}$ replaced by $K_{N-1}$ and $N$ replaced by $N-1$ on the right side. For $x,y\in \Zd$, let [T]{}(x,y) & = \_[u,v]{} {+ }\ & . This quantity is depicted in Figure . By definition, and using –, \^[(1)]{}(x,y) K\_0K\_1 . By the induction hypothesis, and , \^[(N)]{}(x,y) (C|[S]{})\^[N-2]{} (\_[i=0]{}\^[N]{}K\_i ) . It therefore suffices to show that [T]{}(x,y) C |[S]{} { +}. To prove , we write ${T}(x,y) \leq \sum_{i=1}^4 {T}_i(x,y)$, with ${T}_i(x,y)$ defined to be the contribution to $T(x,y)$ arising from each of the following four cases. In the discussion of these four cases, $C$ denotes a generic constant whose value may change from line to line. [*Case 1.*]{} $|v| \geq |x-v|$ and $|u| \geq |u-y|$. This implies $|v| \geq |x|/2$ and $|u| \geq |y|/2$, so that [T]{}\_1(x,y) C |[S]{} { +}. [*Case 2.*]{} $|v| \geq |x-v|$ and $|u| \leq |u-y|$. This implies $|v| \geq |x|/2$ and $|u-y| \geq |y|/2$. Then [T]{}\_2(x,y) & \_[u,v]{} { + }\ & . The second term of is bounded above by $C\bar{S}(|x|+1)^{-q_1}(|y|+1)^{-q_2}$, as required. We bound the first term using Proposition \[lem-conv\](i) to estimate the two convolutions, obtaining a bound $C(|x|+1)^{-(3q_1+q_2-2d)} (|y|+1)^{-q_2}$. (Here we used the assumption $q_2 \leq q_1$ to ensure that $3q_1-2d>0$, as required to apply Lemma \[lem-conv\](i).) It follows from that $3q_1+q_2-2d >q_1$, which gives the desired result. [*Case 3.*]{} $|v| \leq |x-v|$ and $|u| \geq |u-y|$. 
This implies $|v-x| \geq |x|/2$ and $|u| \geq |y|/2$, and hence [T]{}\_3(x,y) & \_[u,v]{} { + }\ & . Each term is bounded by $C \bar{S}(|x|+1)^{-q_1}(|y|+1)^{-q_2}$, as required. [*Case 4.*]{} $|v| \leq |x-v|$ and $|u| \leq |u-y|$. This implies $|v-x| \geq |x|/2$ and $|u-y| \geq |y|/2$, and hence [T]{}\_4(x,y) 2C |[S]{} . Adding the contributions in the four cases yields and completes the proof. \[rk-diag\] Let [H]{}(z, w, x, y) = \_[u,v]{} . By dividing into four cases according to whether $|z-u|$ is greater than or less than $|y-u|$ and whether $|w-v|$ is greater than or less than $|x-v|$, the above proof can be easily adapted to show that [H]{}(w,z,x,y) , where $\bar{S}$ is defined in with now $q_{1} = q_{2} = q$. This will be used in Section \[sec-percdiagrams\] to analyse percolation. Lattice tree diagrams {#sec-ltdiagrams} --------------------- For lattice trees, the quantity $\psi_p^{(N)}(x)$ ($N \geq 1$) can be understood either as arising from $N$ applications of inclusion-exclusion, along the lines discussed in Section \[sec-le.i-e\], or from the contribution to from laces having $N$ edges, as explained around . Explicitly, \_p\^[(N)]{} (x) = (-1)\^N \_ W\_[p,D]{}() J\_R\^[(N)]{} \[ 0, || \] . For a nonzero contribution to $\psi_{p}^{(N)} (x)$, the factor $\prod_{st \in L} \Ucal_{st}$ in $J^{(N)}$ enforces intersections between the beads $R_s$ and $R_t$, for each $st \in L$. This leads to bounds in which the contribution to $\psi_p^{(N)}(x)$ from the $N$-edge laces can be bounded above by $N$-loop diagrams. We illustrate this in detail only for the simplest case $N=1$. To bound $\psi_p^{(1)} (x)$, we proceed as follows. There is a unique lace $0|\omega|$ consisting of a single edge, and all other edges on $[0,|\omega|]$ are compatible with it. Therefore \_p\^[(1)]{} (x) = - \_ W\_[p,D]{}() \_[0 ||]{} (1 + \_[st]{} ) . After relaxing the last product to $\prod_{1 \leq s < t \leq |\omega|} (1 + \Ucal_{st})$, the trees $R_{1}, \ldots, R_{l}$, together with the bonds of $\omega$ connecting them, can be considered as a single lattice tree connecting $\omega(1)$ and $x$. Writing this tree as $T_{1}$, writing $v = \omega(1)$, and stating the constraint imposed by $\Ucal_{0|\omega|}$ in words, we obtain 0 \_[p]{}\^[(1)]{}(x) & \_[v \_D]{} pD(v) \_[R\_[0]{} (0,0)]{} W\_[p,D]{}(R\_0) \_[T\_[1]{} ( v, x)]{} W\_[p,D]{}(T\_1) & I \[ \] & \_[y]{} \_[v \_D]{} pD(v) \_[R\_[0]{} (0,y)]{} W\_[p,D]{}(R\_0) \_[T\_[1]{} ( v, x)]{} W\_[p,D]{}(T\_1) I \[ () y\] . In , the summations over $R_{0}$ and $T_{1}$ can be performed independently. The summation over $R_0$ simply gives $\rho_p(y)$. For the summation over $T_{1}$, we note that there must be disjoint connections from $v$ to $x$ and from $x$ to $y$, because $y$ is in the last bead of $T_{1}$. Therefore the sum over $T_1$ is bounded above by $\rho_p(x-v)\rho_p(y-x)$. Define \_p(x) = (pD\*\_p)(x) = \_[v \_D]{} pD(v)\_p(x-v). Then the above bound gives 0 \_p\^[(1)]{}(x) \_[y ]{} \_p(x) \_p(y-x) \_p(y). For $N \geq 2$, a similar analysis can be performed, along the lines discussed in [@HS90b]. To state the resulting bound, we define M\^[(0)]{}(x,y) = [A\^[(0)]{}]{}(x, y) = \_p(x) \_[v]{}\_[p]{}(y-v) \_[p]{}(v) and A\^[(i)]{}(u,v,x,y) & = \_p(v-u) , with ${A^{(\text{end})}}= A^{(i)}$. We define $M^{(N)}(x,y)$ ($N \geq 1$) recursively by . Then, for $N \geq 1$, the resulting bound is 0 \_p\^[(N)]{}(x) M\^[(N-1)]{}(x,x). The first few diagrams are depicted in Figure . 
The upper bound differs from the bound of [@HS90b], which uses $\rho_p$ in place of $\tilde{\rho}_p$ in . We could also use the bounds of [@HS90b] here, but the bounds with $\tilde{\rho}_p$ are easier to derive and lead ultimately to the same conclusion. [ *Proof of Proposition \[prop-diagbd\](b) for lattice trees.*]{} For lattice trees, we have $z=p\rho_p(0)$, $G_z(x)=\rho_p(x)/\rho_p(0)$ and $\Pi_z(x)=\psi_p(x)/\rho_p(0)$. The hypotheses of the proposition are that $G_z(x) \leq \beta(|x|+1)^{-q}$ for $x \neq 0$, with $\frac{3}{4}d < q < d$, that there is a constant $R$ such that $\rho_p^{(a)}(0) \leq R$, and that $\beta L^{q-d}$ is bounded away from zero. It follows that $\rho_p(x) \leq R\beta (|x|+1)^{-q}$ for $x \neq 0$. Since $\rho_p(0) \geq 1$, it is sufficient to conclude that \_p(x) c\_[0,x]{} + c\^2 (|x|+1)\^[d-3q]{}, where $c$ may depend on $R$. By definition \_p\^[(a)]{}(x) = pD(x) \_p\^[(a)]{}(0) + \_[v \_D : v x]{} pD(v) \_p\^[(a)]{}(x-v). Note that $p = \sum_{v \in \Omega_D}pD(v) < \rho_p^{(a)}(0) \leq R$. The first term on the right side can be bounded as in , while the second term can be estimated by considering separately the contributions due to $|x| \geq 2L$ and $|x|<2L$. The result is \_p(x) + , where we have invoked the hypothesis that $\beta L^{q-d}$ is bounded away from zero. Therefore, by definition and by Proposition \[lem-conv\](i), M\^[(0)]{}(x,x) c(|x|+1)\^[d-3q]{}. Similarly, $A^{(0)}(x,y)$ of obeys the bound of with $q_1=q$ and $q_2=2q-d$. Moreover, the factor $\beta$ on the right side of can be replaced by $\beta^2$ when $x \neq 0$, since at least one of the two lower lines in the first diagram of Figure (b) must make a nonzero displacement when $x \neq 0$. For $N \geq 1$, we will show that the hypotheses imply M\^[(N)]{}(x,x) , where $C_1$ is a constant. By and , this will complete the proof. The remainder of the proof is devoted to proving . By Proposition \[lem-conv\](i) and the above remarks, the function $A^{(i)}$ defined in obeys A\^[(i)]{}(u,v,x,y) . Hence, applies with $q_1=q$, $q_2=2q-d$. By our assumption on $q$, it follows that $q_2 \leq q_1 <d $ and is satisfied. Therefore, by Lemma \[lem-Mbound\], there is a constant $C_1$ such that M\^[(N)]{}(x,y) \^[N+1]{} C\_1\^N{ + } . This implies and completes the proof for lattice trees. Lattice animal diagrams {#sec-ladiagrams} ----------------------- The determination of the lattice animal diagrams is similar to that for lattice trees. It makes use of [@HS90b Lemma 2.1], which can be rephrased in our present context as follows. \[lem-ladisj\] Given sets of lattice paths $E_i$ ($i = 1,\ldots , n$), let $\Acal_i$ denote the set of lattice animals which contain a path in $E_i$, and let $\Acal$ denote the set of lattice animals which contain disjoint paths in each of $E_1,\ldots, E_n$. Then \_[A ]{} W\_[p,D]{}(A) \_[i=1]{}\^n We denote the first term on the right side of by $\psi_p^{(0)}(x)$ and denote the contribution to the second term due to $J_R^{(N)}[0,|B|]$ by $\psi_p^{(N)}(x)$. By Lemma \[lem-ladisj\], \_p\^[(0)]{}(x) = (1-\_[0,x]{}) \_[A (0,x)]{} W\_[p,D]{}(A) (1- \_[0,x]{})\_p\^a(x)\^2 . By definition, \_[p]{}\^[(1)]{}(x) = - \_[|B|:|B|1]{} W\_[p,D]{}(B) \_[0 |B|]{} \_ (1 + \_[st]{}) , where the sum over $B$ is a sum over $|B|$ bonds $(u_i,v_i)$ with $v_i-u_i \in \Omega_D$, where $v_0=0$ and $u_{|B|+1}=x$. 
After relaxing the avoidance constraint in to $\prod_{1 \leq s < t \leq |B|} (1 + \Ucal_{st})$, the beads $R_{1}, \ldots, R_{|B|}$, together with the pivotal bonds connecting them, can be considered as a single lattice animal connecting $v_{1}$ and $x$. Writing this animal as $A_{1}$, and stating the constraint imposed by $\Ucal_{0|B|}$ in words, we obtain 0 \_[p]{}\^[(1)]{}(x) & \_[(u, v)]{} pD(v-u) \_[R\_[0]{} (0, u)]{} W\_[p,D]{}(R\_[0]{}) \_[A\_[1]{} ( v, x)]{} W\_[p,D]{}(A\_[1]{}) & I \[ \] & \_[y]{} \_[(u, v)]{} pD(v-u) \_[R\_[0]{} (0, u) : R\_0 y]{} W\_[p,D]{}(R\_[0]{}) \_[A\_[1]{} ( v, x)]{} W\_[p,D]{}(A\_[1]{}) I \[() y\] . In , the summations over $R_{0}$ and $A_{1}$ can be performed independently. For the summation over $R_{0}$, we note that there must be a site $w$, and four disjoint connections joining $0$ to $w$, $w$ to $u$, $u$ to $0$, and $w$ to $y$. For the summation over $A_{1}$, there must be disjoint connections joining $x$ to $v$ and $x$ to $y$, because $y$ is in the last bead of $A_{1}$. This is illustrated in Figure , where on the left we show a typical contribution to the one-loop diagram, and on the right we show the connections used to bound it. Therefore, using Lemma \[lem-ladisj\] we obtain 0 \_p\^[(1)]{}(x) \_[u,w,y ]{} \^a\_p(u)\^a\_p(w)\^a\_p(u-w)\^a\_p(y-w)\^a\_p(x-y) \^a\_p(x-u), where $\tilde{\rho}^a_p(x) = (pD*\rho^a_p)(x)$ as in . This diagram is depicted in Figure . The contribution arising from the term with $u=w=0$ equals $\rho_p^a(0)^3$ times the triangle diagram of . Taking the full sum into account, the right side of corresponds diagrammatically to the triangle diagram with its vertex at the origin replaced by a triangle. The above procedure can be extended to bound the higher-order terms. The resulting diagrams are the lattice tree diagrams, with an extra initial triangle as observed for $\psi_p^{(1)}(x)$. Now we define [A\^[(0)]{}]{}(x,y) = \_[p]{}\^[a]{}(x) \_[p]{}\^[a]{}(y) and use the $A^{(i)}=A^{\rm end}$ of (with $\rho$ replaced by $\rho^{a}$) to define $M^{(N)}$ recursively by for $N \geq 1$. Then for $N \geq 1$ we have 0 \_p\^[(N)]{}(x) M\^[(N)]{}(x,x) . The cases $N=1,2$ are depicted in Figure . The bounds described above for lattice animals differ from those of [@HS90b] in two respects. One difference is that the diagrams of [@HS90b] involve additional small triangles that make no significant difference and need not be included. A second difference is that here we are using $\tilde{\rho}^a$ whereas only $\rho^a$ was used in [@HS90b]. It is in fact possible to avoid the use of $\tilde{\rho}^a$, but a more involved argument than the one provided in [@HS90b] is necessary for this. However, the use of $\tilde{\rho}^a$ poses no difficulties and is simpler, so we will use it here. [ *Proof of Proposition \[prop-diagbd\](b) for lattice animals.*]{} The proof proceeds in the same way as for lattice trees. One minor difference for lattice animals is the presence of the term $\psi_p^{(0)}$, for which implies 0 \^[(0)]{}\_p(x) (1-\_[0,x]{}). Since $2q > 3q -d$ by assumption, this is smaller than what is required (second term of ). A second minor change is that the extraction of the extra factor $\beta$ from the bound on $\psi_p^{(1)}$ is slightly different. Percolation diagrams {#sec-percdiagrams} -------------------- For percolation, the BK inequality [@Grim99] plays the role that Lemma \[lem-ladisj\] played for lattice animals. In particular, application of the BK inequality to immediately gives \_p\^[(0)]{}(x) (1-\_[0,x]{}) \_p(x)\^2. 
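The BK inequality underlying this bound can be checked numerically. The sketch below (Python with networkx; a nearest-neighbour bond model on a small finite grid rather than the spread-out model, so the numbers are indicative only) estimates $\Pbold_p(x \in \Delta(0))$ by Monte Carlo, detecting a double connection through the existence of two edge-disjoint occupied paths, and compares it with the square of the estimated two-point function; up to sampling error the first quantity should not exceed the second.

```python
import random
import networkx as nx

def percolation_sample(G, p, rng):
    """Keep each edge of G independently with probability p."""
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges if rng.random() < p)
    return H

def estimate(p=0.4, n=8, trials=2000, seed=0):
    rng = random.Random(seed)
    G = nx.grid_2d_graph(n, n)        # nearest-neighbour grid (illustration only)
    s, t = (0, 0), (3, 0)
    connected = doubly = 0
    for _ in range(trials):
        H = percolation_sample(G, p, rng)
        if nx.has_path(H, s, t):
            connected += 1
            # doubly connected <=> at least two edge-disjoint occupied paths
            if nx.edge_connectivity(H, s, t) >= 2:
                doubly += 1
    tau = connected / trials
    return doubly / trials, tau ** 2

double_prob, tau_squared = estimate()
print(double_prob, tau_squared)       # BK: the first should not exceed the second
```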
Higher order contributions can also be bounded using the BK inequality. For example, application of BK to the contribution to Figure  when $u'=x$ leads to the bound \_p\^[(1)]{}(x) \_[u,w,y,z ]{} \_p(u)\_p(w)\_p(u-w) \_p(y-u) \_p(y-z) \_p(z-w) \_p(x-y) \_p(x-z), where \_p(x) = \_[v \_D]{} pD(v) \_p(x-v). The right side of is depicted in Figure . It involves the two distinct routes $0 {\rightarrow}u {\rightarrow}y {\rightarrow}x$ and $0 {\rightarrow}w {\rightarrow}z {\rightarrow}x$ from $0$ to $x$, which is suggestive of the fact that $\psi_p^{(1)}(x)$ could decay, like , twice as rapidly as the two-point function. To state bounds on $\psi_p^{(N)}(x)$ for general $N$, we define [A\^[(0)]{}]{}(x, y) & = \_[a,b ]{} \_[p]{}(a)\_[p]{}(b)\_[p]{}(a-b) \_p(x-a) \_[p]{}(y-b),\ A\_1(u,v,x,y) & = \_[p]{}(u-v) \_[a,b ]{} \_[p]{}(u-a)\_[p]{}(v-b)\_[p]{}(a-b) \_[p]{}(y-a) \_p(x-b),\ A\_2(u,v,x,y) & = \_[p]{}(y-u) \_[a,b ]{} \_[p]{}(u-a)\_[p]{}(v-a)\_[p]{}(a-b)\_[p]{}(v-b) \_p(x-b),\ A\^[(i)]{}(u,v,x,y) & = A\_1(u,v,x,y) + A\_2(u,v,x,y) (i 1),\ [A\^[()]{}]{}(u, v, x, y) & = \_[p]{}(u-v) \_[p]{}(u-x)\_[p]{}(v-y). The above quantities are depicted in Figure . We define $M^{(N)}$ for $N \geq 1$ by . It then follows from [@HS90a Proposition 2.4] that, for $N \geq 1$, 0 \_p\^[(N)]{}(x) M\^[(N)]{}(x,x). Consistent with , we define $M^{(0)}(x,x) = (1-\delta_{0,x}) \tau_p(x)^2$. We also recall from [@HS90a Proposition 2.4] that for $N \geq 1$ the expansion remainder term $R_p^{(N)}(x)$ of obeys 0 R\_p\^[(N)]{}(x) \_[u ]{} M\^[(N)]{}(u,u) \_p(x-u). We will use this below to conclude that $\lim_{N {\rightarrow}\infty} R_p^{(N)}(x) =0$, assuming the hypotheses of Proposition \[prop-diagbd\](c). The vanishing of this limit was claimed below Theorem \[thm-percexp\] and used under . [*Proof of Proposition \[prop-diagbd\](c).*]{} For percolation, we have $z=p$, $G_z(x)=\tau_p(x)$ and $\Pi_z(x)=\psi_p(x)$. The hypotheses of the proposition are that $\tau_p(x) \leq \beta(|x|+1)^{-q}$ for $x \neq 0$ with $\frac{2}{3}d < q < d$, that $\beta L^{q-d}$ is bounded away from zero, and that $p \leq 2$. It suffices to show that \_p(x) c\_[0,x]{} + c\^2 (|x|+1)\^[-2q]{}. It follows immediately from that the contribution to $\psi_p$ from $\psi_p^{(0)}$ does obey , and we concentrate now on $N \geq 1$. By the assumed bound on $\tau$, we conclude as in that \_p(x) . We will apply Lemma \[lem-Mbound\] with $q_1=q_2=q$. Our assumption on $q$ implies that is satisfied. We also need to verify that ${A^{(0)}}, A^{(i)}, {A^{(\text{end})}}$ obey the assumptions of Lemma \[lem-Mbound\]. It is clear that ${A^{(\text{end})}}$ obeys with $q_{1} = q_{2} = q$ and $K_{\rm end} = O(1)$. For ${A^{(0)}}$, we note the decomposition [A\^[(0)]{}]{}(x,y) = \_[u,v]{} . We can then apply Lemma \[lem-Mbound\], considering the first factor as ${A^{(0)}}$ and the second factor as $A^{({\rm end})}$, to conclude that ${A^{(0)}}$ obeys with $K_0=C\beta$. To check that $A^{(i)} = A_{1} + A_{2}$ obeys with $K=C\beta$, we begin with $A_{2}$. Define $a(u, v, x) = A_{2}(u,v,x,y)/\tau_{p}(y-u)$. This quantity is nothing but ${A^{(0)}}(x-v,u-v)$ of . Therefore, a(u,v, x) , and $A_{2}$ obeys with $q_{1} = q_{2} = q$ and $K=C\beta$. For $A_{1}$, recalling the definition of ${H}$ in , we see that $A_{1}(u,v,x,y)$ obeys the same bound as $C\beta \tau_{p}(u-v) {H}(u, v, x, y)$. By , $A_1$ obeys with $K=C\beta$. It then follows from Lemma \[lem-Mbound\] that for $N \geq 1$ 0 \_p\^[(N)]{}(x) M\^[(N)]{}(x,x) . 
The factor $\beta^N$ here arises from the $\beta$’s present in $A^{(0)}$ and in each of the $N-1$ factors of $A^{(i)}$. This gives an adequate bound for $N \geq 2$. To complete the proof, it suffices to argue that for $N=1$ the power of $\beta$ in can be replaced by $\beta^2$ when $x \neq 0$. This follows from the observation that for $N=1$ and $x\neq 0$, at least two diagram lines in $M^{(1)}(x,x)$ must undergo a nontrivial displacement, and each of these lines contributes a factor $\beta$. [*Proof that $\lim_{N{\rightarrow}\infty} R_p^{(N)}(x)=0$ under hypotheses of Proposition \[prop-diagbd\](c)*]{}. This is an immediate consequence of , , , and Proposition \[lem-conv\](i). Convolution bounds {#sec-conv} ================== In this section, we prove Proposition \[lem-conv\]. [*Proof of Proposition \[lem-conv\].*]{} (i) By definition, | (f\*g)(x) | \_[y: |x-y| |y|]{} + \_[y: |x-y| &gt; |y|]{} . Using $a \geq b$ and the change of variables $z=x-y$ in the second term, we see that | (f\*g)(x) | 2\_[y: |x-y| |y|]{} . In the above summation, $|y| \geq \frac{1}{2}|x|$. Therefore, for $a>d$ we have |(f\*g)(x)| \_[y: |x-y| |y|]{} C (|x|+1)\^[-b]{} . Suppose now that $a<d$ and $a+b>d$. In this case, we divide the sum in according to whether $\frac{1}{2}|x| \leq |y| \leq \frac{3}{2}|x|$ or $|y| \geq \frac{3}{2}|x|$. The contribution to due to the first range of $y$ is bounded above, as in , by \_[y: |x-y| 3|x|/2]{} (|x|+1)\^[d-a]{}, as required. When $|y| \geq \frac{3}{2}|x|$, we have $|y-x| \geq |y|-|x| \geq |y|/3$. Therefore, the contribution to due to the second range of $y$ is bounded above by 3\^b 2 \_[y: |y| 3|x|/2]{} . This completes the proof. \(ii) By (i), the convolution of $g$ with the error term of $f$ gives a result that is $O(BC (|x|+1)^{-(d-2+s)})$. This leaves us with the convolution of the main term with $g$, which is given by \_[y]{} g(y) = + \_[y]{} g(y) . The first term is the desired main term, so it remains to prove that the second term is an error term. We denote the second term by $X$. We consider separately the contributions to $X$ due to $|y|>\frac{1}{2}|x|$ and $|y| \leq \frac{1}{2}|x|$, beginning with the former. This contribution to $X$, which we call $X_1$, is bounded above by |X\_1| AC\_[y: |y| &gt; |x|/2]{} . Using part (i) of the proposition, the first term is bounded above by \_[y ]{} O ( ). The second term of obeys \_[y: |y| &gt; |x|/2]{} = O ( ) . Combining these gives $X_1 = O (AC(|x|+1)^{-(d-2+s)} )$, so $X_1$ is an error term. Next, we consider the contribution to $X$ due to $|y| \leq \frac{1}{2}|x|$, which we denote by $X_2$. We estimate this term by expanding the difference $\frac{1}{(|x-y|+1)^{d-2}} - \frac{1}{(|x|+1)^{d-2}}$ into powers of $y$. Because of the $\Zd$-symmetry of $g(y)$, odd powers of $y$ in the expansion give no contribution. Let $h(t) = [|x|(1+t)+1]^{2-d}$, for $|t|<1$. Using the Fundamental Theorem of Calculus, it can be seen that $|h(t)-h(0)-h'(0)t| \leq c|t|^2 |x|^2 (|x|+1)^{-d}$ when $|t| \leq \frac 12$. Applying this with $t=|x|^{-1}|x-y|-1$ (the case $x=0$ does not contribute), we conclude that | X\_[2]{} | \_[y: |y| |x|/2]{} | g(y) | . Therefore, recalling that $s_2=s\wedge 2$, we have |X\_2| \_[y : |y| |x|/2]{} cAC(|x|+1)\^[-d-2+s\_2]{} & (s 2)\ cAC(|x|+1)\^[-d]{}(|x|+2) & (s=2). This completes the proof. The random walk two-point function {#sec-G2ptfcn} ================================== In this section, we prove Proposition \[prop-A\]. 
We begin in Section \[sec-Cub\] with an elementary proof of the bound \_[0,x]{} S\_(x) \_[0,x]{} + O(L\^[-d]{} ) , which is uniform in $\mu \leq 1$ and $x \in \Zd$. In Section \[sec-intrep\], an integral representation for $S_\mu(x)$ is introduced, which is analysed in Sections \[sb-ab-t&gt;T\]–\[sb-ab-t&lt;T\] for $|x|$ large compared with $L$. The proof of the asymptotic formula is then given in Section \[sec-Casy\]. Once and have been proved, the bound then follows easily. In fact, it suffices to prove for $\mu=1$, since $S_{\mu}(x)$ is increasing in $\mu$. However, for $\mu=1$ follows immediately, by using for $|x| \leq L$ and for $|x| >L$. Proof of the uniform bound {#sec-Cub} -------------------------- In this section, we prove the uniform bound . The lower bound of follows immediately from the facts that $S_\mu(x)$ is positive for all $x$ by definition and that $S_\mu(0)$ receives a contribution $1$ from the zero-step walk. So it remains to prove the upper bound. For this, it suffices to consider $\mu=1$, because $S_{\mu}(x)$ is increasing in $\mu$. In preparation, and for later use, we first note some properties of $D$. By Definition \[def-Dsp1\], $D(x) \leq O(L^{-d})$ and $\sigma \sim \mbox{const.}L$. In addition, it is proved in [@HS01a Appendix A] that there are constants $\delta_2$ and $\delta_3$, such that for $L$ sufficiently large, 1 - (k) & \_[2]{} L\^[2]{} |k|\^[2]{} |k| L\^[-1]{} ,\ 1 - (k) & \_[3]{} . To prove the upper bound of , we rewrite $[1-\Dhat(k)]^{-1}$ as $1 + \Dhat(k) + \Dhat(k)^{2}[1 - \Dhat(k)]^{-1}$, to obtain S\_1 (x) = [\_[\[-,\]\^[d]{}]{}]{} = \_[0,x]{} + D(x) + [\_[\[-,\]\^[d]{}]{}]{} . The second term is $O(L^{-d})$, so it remains to prove that the last term is also $O(L^{-d})$. We first estimate the absolute value of the last term by taking absolute values inside the integral. We then divide the integral over $k$ into two parts, according to whether $|k|$ is greater than or less that $L^{-1}$. For the integral over small $k$, we note that in general $|\Dhat(k)|\leq 1$. Using yields \_[|k|&lt; L\^[-1]{}]{} \_[|k|&lt; L\^[-1]{}]{} = O( L\^[-d]{}) . Also, using , the integral over large $k$ is bounded by \_[{k\^d : |k| L\^[-1]{}}]{} [\_[\[-,\]\^[d]{}]{}]{}(k)\^[2]{} = \_[y]{} D(y)\^[2]{} = O( L\^[-d]{}). This proves . The integral representation {#sec-intrep} --------------------------- To prove the asymptotic formula , which states that S\_1(x) = +O( ), we will use an integral representation for $S_\mu(x)$. By , $S_1(x) \leq O(L^{-d})$ for $x \neq 0$. This immediately implies for $|x| \leq L^{1+\alpha/d}$. It therefore suffices, in what follows, to restrict attention to $|x| \geq L^{1+\alpha/d}$. Although it is sufficient to consider only $\mu =1$ to prove , we consider also $0 \leq \mu <1$, as this will be used in the proof of the main error estimate in Section \[sec-CE\]. Let I\_[t, ]{} (x) = \_[\[-, \]\^[d]{}]{} e\^[-i [kx]{}]{} e\^[-t\[1 - (k)\]]{}. It follows from that $\hat{S}_\mu(k) = [1-\mu\hat{D}(k)]^{-1}$. Thus, for $0 \leq \mu \leq 1$ we have the integral representation S\_(x) & = \_[\[-, \]\^[d]{}]{} = \_[\[-, \]\^[d]{}]{} e\^[-i [kx]{}]{} \_[0]{}\^ dt e\^[-t\[1 - (k)\]]{} = \_[0]{}\^ dt I\_[t,]{}(x) . The integration variable $t$ plays the role of a time variable, with the dominant contribution to $S_1(x)$ due to $t \approx |x|^2/\sigma^{2}$. 
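The formula $\hat{S}_\mu(k) = [1-\mu\hat{D}(k)]^{-1}$ can also be inverted numerically, which gives a quick check on the scale appearing in the uniform bound. The sketch below (Python) does this on a discrete torus for the uniform spread-out step distribution on $0<\|x\|_\infty\leq L$, with $\mu$ taken slightly below $1$ so that the torus sums are finite; the torus, the value of $\mu$, and the specific choice of $D$ are illustrative assumptions.

```python
import itertools
import numpy as np

d, L, N, mu = 3, 4, 64, 0.95          # dimension, spread-out range, torus side, mu < 1
# uniform spread-out step distribution D on {x : 0 < ||x||_inf <= L}
D = np.zeros((N,) * d)
for x in itertools.product(range(-L, L + 1), repeat=d):
    if any(x):
        D[tuple(c % N for c in x)] = 1.0
D /= D.sum()

# hat S_mu(k) = 1 / (1 - mu hat D(k)), inverted by FFT on the torus
Dhat = np.fft.fftn(D)
S = np.fft.ifftn(1.0 / (1.0 - mu * Dhat)).real

print(S[(0,) * d])                    # dominated by the zero-step walk, close to 1
S_off = S.copy()
S_off[(0,) * d] = 0.0
print(S_off.max(), L ** (-d))         # sup over x != 0 of S_mu(x), against the L^{-d} scale
```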
With this in mind, we write $S_\mu(x) = S_\mu^<(x;T) +S_\mu^>(x;T)$ with S\_\^[&lt;]{}(x;T) = \_[0]{}\^[T]{} dt I\_[t, ]{}(x), S\_\^[&gt;]{}(x;T) = \_[T]{}\^ dt I\_[t, ]{}(x) , and choose $T$ to be equal to T\_[x]{} = ( )\^[2 - 2/d]{}, where $\alpha$ is the small parameter of Proposition \[prop-A\]. With this choice, it will turn out that $S_1^>(x)$ contains the leading term of , whereas $S_1^<(x)$ is an error term. Our analysis will make use of different estimates on $I_{t,1}(x)$ for each of these two terms. Integration over $[T,\infty]$ {#sb-ab-t>T} ----------------------------- In this section, we prove that for $|x| \geq L^{1+\alpha/d}$ and $L$ sufficiently large depending on $\alpha$, we have S\_1\^&gt;(x; T\_[x]{}) = \_[T\_[x]{}]{}\^I\_[t,1]{}(x) dt = \_[T\_[x]{}]{}\^p\_t(x) dt +O( ), where p\_t(x) = ( )\^[[d/2]{}]{} ( - ). The proof will make use of the following lemma, which extracts the leading term from $I_{t,1}(x)$. \[lem-ab-large-t\] Let $d>2$, and suppose $D$ obeys Definition \[def-Dsp1\]. Then there are finite $L$-independent constants $\tau$ and $c_1$ such that for $t \geq \tau$ I\_[t,1]{}(x) = p\_t(x) +r\_t(x) with |r\_t(x)| c\_1 L\^[-d]{} t\^[-d/2-1]{} + e\^[-t \_[3]{} ]{} . Before proving Lemma \[lem-ab-large-t\], we show how integration of its bound leads to a proof of . It suffices to show that the integral of the error term $r_t(x)$ in is bounded by the error term of . By we have | \_[T\_[x]{}]{}\^ dt r\_t(x) | c L\^[-d]{} T\_[x]{}\^[-d/2]{} + \_[3]{}\^[-1]{} e\^[-\_[3]{} T\_[x]{}]{} . The second term can be absorbed into the first term. In fact, since $T_{x} \geq cL^{2(\alpha/d)(1-\alpha/d)}$, for any positive $N$ we have e\^[-\_3 T\_[x]{}]{} = c\_N , with the last factor less than $1$ for $L$ and $N$ sufficiently large depending on $\alpha$. In addition, since $|x| > L$ and $d>\alpha$, the first term of is equal to c L\^[-d]{} T\_[x]{}\^[-d/2]{} = c . This proves . [*Proof of Lemma \[lem-ab-large-t\].*]{} By Taylor’s theorem and symmetry, for $k \in [-\pi, \pi]^{d}$ we have 1 - (k) = + R (k) with |R(k)| & \_[x]{} D(x) (x k)\^[4]{} L\^[4]{} |k|\^[4]{} . Let $k_t^{2} =4d \sigma^{-2}t^{-1}\log{t}$. We write $I_{t,1}(x) = \sum_{j=1}^{4} I_{t}^{(j)}(x)$ with I\_[t]{}\^[(1)]{}(x) & = \_ e\^[-i [kx]{}- t \^[2]{} |k|\^[2]{}/(2d)]{},\ I\_[t]{}\^[(2)]{}(x) & = - \_[k\_t &lt;|k| &lt;]{} e\^[-i [kx]{}- t \^[2]{} |k|\^[2]{}/(2d)]{},\ I\_[t]{}\^[(3)]{}(x) & = \_[|k| &lt; k\_[t]{}]{} e\^[-i [kx]{}- t \^[2]{} |k|\^[2]{}/(2d)]{} ( e\^[-t R(k)]{} - 1 ),\ I\_[t]{}\^[(4)]{}(x) & = \_[k \^d : |k| &gt; k\_[t]{} ]{} e\^[-i [kx]{}]{} e\^[-t \[1 - (k)\]]{} . The integrals $I_{t}^{(1)}(x)$ through $I_{t}^{(3)}(x)$ combine to give the contribution to $I_{t,1}(x)$ due to $|k| \leq k_{t}$, while $I_{t}^{(4)}(x)$ represents the contribution from $|k| > k_{t}$. The first integral can be evaluated exactly to give I\_[t]{}\^[(1)]{}(x) = p\_t(x) . We therefore set $r_t(x) = \sum_{j=2}^4 I_{t}^{(j)}(x)$ and show that $r_t(x)$ obeys . By definition, | I\_[t]{}\^[(2)]{}(x) | & \_[k\_t &lt; |k| &lt;]{} e\^[- t \^[2]{} |k|\^[2]{}/(2d)]{} c(t\^[2]{})\^[-d/2]{} e\^[- t \^[2]{} k\_[t]{}\^[2]{} /(4d)]{} c L\^[-d]{} t\^[-d/2 -1]{} . The integral $I_{t}^{(3)}(x)$ is bounded as follows. First, we note that for $|k|< k_{t}$ it follows from and the definition of $k_t$ that $| t R(k) | \leq c (\log t)^{2}/t$, which is less than $1$ for sufficiently large $t$. 
Using the bound $|e^x - 1 | \leq |x|$ for $|x| \leq 1$, and increasing the integration domain to $\Rd$ in the last step, we have | I\_[t]{}\^[(3)]{}(x) | & c \_[|k|&lt; k\_[t]{}]{} e\^[-t \^[2]{} |k|\^[2]{}/(2d)]{} |t R(k)| c tL\^4 \_[|k|&lt; k\_[t]{}]{} e\^[-t \^[2]{} |k|\^[2]{}/(2d) ]{} |k|\^[4]{} c L\^[-d]{} t\^[-d/2-1]{} . Finally we estimate $I_{t}^{(4)}(x)$. We divide the integration domain according to whether $|k|$ is greater than or less than $L^{-1}$. By , as in the contribution due to $|k|\leq L^{-1}$ is at most \_[k\_[t]{} &lt; |k| &lt; ]{} e\^[-t \_2 L\^[2]{} |k|\^[2]{}]{} c L\^[-d]{} t\^[-d/2-1]{} . By , the contribution due to $|k| > L^{-1}$ is at most \_[k \^d : |k| &gt; L\^[-1]{} ]{} e\^[-t \_[3]{} ]{} e\^[-t \_[3]{} ]{} . Combining – then gives the desired bound |r\_t(x) | \_[j=2]{}\^4 |I\_[t]{}\^[(j)]{}(x)| c L\^[-d]{} t\^[-d/2-1]{} + e\^[-t \_[3]{} ]{} . Integration over $[0,T]$ {#sb-ab-t<T} ------------------------ In this section, we prove the following lemma, which will also be used in the main error estimate of Section \[sec-CE\]. \[lem-Smu&lt;bd\] Let $|x|\geq L^{1+\alpha/d}$ and $T\leq T_{x}$. Then for $0 \leq \mu \leq 1$ and sufficiently large $L$ depending on $\alpha$ S\_\^&lt;(x;T) = \_[0]{}\^[T]{} I\_[t, ]{} (x) dt . We will prove this using the following lemma, whose proof involves a standard large deviations argument. \[lem-StIt\] For $x \in \Zd$, $t\geq 0$ and $t_0 = dL\|x\|_\infty /(2\sigma^{2})$, 0 I\_[t,1]{}(x) { [ll]{} & (0 t &lt; )\ & (t t\_0) . . [*Proof of Lemma \[lem-Smu&lt;bd\] assuming Lemma \[lem-StIt\].*]{} We first note that for fixed $x \in \Zd$, $I_{t, \mu}(x)$ is nonnegative and is monotone increasing in $\mu$. In fact, expanding the exponential $e^{t\mu \hat{D}(k)}$ in and interchanging the integral and the sum (justified by absolute convergence), gives I\_[t, ]{}(x) = e\^[-t]{} \_[n=0]{}\^ [\_[\[-,\]\^[d]{}]{}]{} e\^[-i [kx]{}]{} (k)\^[n]{} = e\^[-t]{} \_[n=0]{}\^ D\^[\*n]{}(x), where $D^{*n}$ denotes the $n$-fold $x$-space convolution. Because $D^{*n}(x)$ is nonnegative, this representation immediately implies the non-negativity of $I_{t,\mu}(x)$, together with its monotonicity in $\mu$. Therefore $S_\mu^{<}(x;T)$ is increasing in $\mu$ and in $T$, and it suffices to prove for $\mu=1$ and $T = T_{x}$. In this case, gives \_[0]{}\^[T\_[x]{}]{} dt I\_[t,1]{}(x) \_[0]{}\^[t\_[0]{}]{} dt + \_[t\_[0]{}]{}\^[T\_[x]{}]{} dt . The first integral can be performed exactly. For the second, we use the fact that for $a\geq T$, \_0\^[T]{} dt e\^[-a/t]{} = a \_[a/T]{}\^ du u\^[-2]{} e\^[-u]{} e\^[-a/T]{}. Now choose $T= T_{x}$ and $a=d\|x\|_\infty^{2} /(4\sigma^{2}) \geq T_{x}$, for $|x| \geq L^{1+\alpha/d}$. This gives \_[0]{}\^[T\_[x]{}]{} dt I\_[t,1]{}(x) & & c + T\_[x]{}\^[2]{}\ & & c ( - c ) + c ( )\^[2-4/d]{} ( - c ( )\^[2/d]{} ) . For $|x| \geq L^{1+\alpha/d}$, we have $|x|/L \geq |x|^{(\alpha/d)/(1+\alpha/d)}$. The integral $\int_{0}^{T_{x}} dt \, I_{t,1}(x)$ therefore decays at least as fast as a constant multiple of an exponential of a power of $|x|$, and hence eventually decays faster than $|x|^{-(d+2)}$. This completes the proof of . [*Proof of Lemma \[lem-StIt\].*]{} Since $D$ is supported only on $\|x\|_\infty \leq L$, we have $D^{*n}(x) = 0$ for $\|x\|_\infty > n L$. Since $0 \leq D^{*n}(x) \leq 1$ for all $n$, we can therefore bound using the inequality \_[n=N]{}\^ e\^[t]{} e\^[t]{} ( )\^[N]{} as I\_[t,1]{}(x) = e\^[-t]{} \_[nx\_/L]{} D\^[\*n]{} (x) e\^[-t]{} \_[nx\_/L]{} ( )\^[x\_/L]{} . 
Thus, $I_{t,1}(x)$ decays in $|x|$ more rapidly than any exponential, and we may define the quantity \_[t]{}(s) = \_[x]{} e\^[s x\_[1]{}]{} I\_[t,1]{}(x) (s ). We claim that \_[t]{}(s) = . This follows by interchanging sums in to obtain \_[t]{}(s) = e\^[-t]{} \_[n=0]{}\^ \_[x]{} e\^[s x\_[1]{}]{} D\^[\*n]{} (x) = e\^[-t]{} \_[n=0]{}\^ \^[n]{} = , and then using $D(x)=D(-x)$ and $\sum_{x} D(x) = 1$. By definition, $\phi_{t}(s)= \sum_{y} e^{s |y_{1}|} I_{t,1}(y) \geq e^{s |x_{1}|} I_{t,1}(x)$ for any $x \in \Zd$, and therefore $I_{t,1}(x) \leq e^{- s |x_{1}|} \phi_{t}(s)$. The $\Zd$-symmetry and the formula for $\phi_{t}(s)$ then give I\_[t,1]{}(x) . When $s \leq L^{-1}$, we have $s|y_1| \leq 1$ for any $y$ contributing to $\sum_{y} D(y) [ \cosh (s y_{1}) -1 ]$. Since $\cosh x \leq 1 + x^{2}$ for $|x| \leq 1$, we obtain 0 \_[y]{} D(y) s\^[2]{} \_[y]{} D(y) y\_[1]{}\^[2]{} = s\^[2]{} . Thus, for $s \leq L^{-1}$ we have I\_[t,1]{}(x) . Putting $s=L^{-1}$ in gives the first bound of . The minimum of the right side of is attained at $s = d \| x \|_{\infty}/(2 \sigma^{2} t)$, but we may use only for $s \leq 1/L$. This condition will be valid provided $t \geq t_0$. Using the minimal value of $s$ in gives the second bound of . Proof of the asymptotics {#sec-Casy} ------------------------ We now prove . As discussed below , it suffices to consider $|x| \geq L^{1+\alpha/d}$. For $|x| \geq L^{1+\alpha/d}$, we have already proved a version of having lower limit of integration $T_x$ rather than $0$, since the combination of and gives S\_1(x) = S\_1\^&lt;(x;T\_[x]{}) + S\_1\^&gt;(x;T\_[x]{}) = \_[T\_x]{}\^p\_t(x) dt + O( ) (|x| L\^[1+/d]{}). We will show that holds as stated with lower limit of integration $0$, if $|x| \geq L^{1+\alpha/d}$. For this, it suffices to show that the quantity $R(x) = \int_0^{T_{x}} p_t(x)dt $ obeys the bound R(x) O( ) (|x| L\^[1+/d]{}). By definition, | R(x) | c L\^[-d]{} \_[0]{}\^[T\_x]{} dt t\^[-d/2]{} e\^[-c’|x|\^[2]{}/(tL\^[2]{})]{} . To estimate the right side, we use the fact that \_[0]{}\^[T]{} dt t\^[-b]{} e\^[-a/t]{} = a\^[1-b]{} \_[a/T]{}\^u\^[b-2]{} e\^[-u]{} du a\^[-1]{} T\^[2-b]{} e\^[-a/T]{} & (1&lt;b2)\ C(b)a\^[1-b]{}e\^[-a/2T]{} & (b &gt;2) if $a \geq T >0$ and $b>1$, where $C(b)$ is a $b$-dependent constant. This inequality can be proved for $1<b\leq 2$ using $u^{b-2} \leq (a/T)^{b-2}$. For $b >2$, it can be proved using $e^{-u} \leq e^{-u/2} e^{-a/2T}$. We apply with $a = c|x|^{2}/L^{2}$, which is greater than $T_{x}$ when $|x|\geq L^{1+\alpha/d}$ and $L$ is large. The exponent $b$ equals $d/2$, which is greater than $1$ for $d>2$. The result is that | R(x) | c L\^[-d]{} e\^[-c”(|x|/L)\^[2/d]{}]{} ()\^[q(d)]{}, for some power $q(d)$. For $|x| \geq L^{1+\alpha/d}$ sufficiently large, we therefore have | R(x) | cL\^[-d]{} ()\^[d]{} . This completes the proof of if $|x| \geq L^{1+\alpha/d}$, and hence for all $x$. The main error estimate {#sec-CE} ======================= In this section, we prove Proposition \[lem-C\]. We first obtain bounds on $E_z(x)$ and $\hat{E}_z(k)$ in Section \[sec-Ex\], and then complete the proof of Proposition \[lem-C\] in Section \[sub-errormain\]. Bounds on $E_z$ {#sec-Ex} --------------- First we derive the following bound on $E_{z}(x)$ from the assumed bound on $\Pi_{z}(x)$. \[lem-Epbd\] Under the assumptions of Proposition \[lem-C\], | E\_[z]{}(x)| c & (x = 0)\ c L\^[-d]{} & (0 &lt; |x| &lt; 2L)\ c |x|\^[-(d+2+)]{} & (|x| 2L). By virtue of its definition in , we can write E\_[z]{}(x) = (1-\_z)\_[0,x]{} -(D\*N\_z)(x). 
with N\_[z]{}(x) = \[(1-\_z)+\_zz\_z(0)\]\_[0,x]{} -\_zz\_z(x). To derive bounds on $N_{z}$ and thus on $E_{z}$, we first derive bounds on $\Pi_{z}$ and $\lambda_{z}$. Assuming $| \Pi_{z}(x) | \leq \gamma (|x|+1)^{-(d+2+\kappa)}$, we have \_y | \_[z]{}(y) | c , \_y |y|\^[2]{} | \_[z]{}(y) | c . Also, by the formula for $\lambda_z$ of and our assumption that $z \leq C$, it follows that \_[z]{} =O(1), \_[z]{} -1 = O(). The bounds and imply N\_[z]{}(x) = O() \_[0,x]{} + O( ) , \_[x]{} N\_[z]{}(x) = O(), and hence (D\*N\_[z]{})(x) = \_[|y|L]{} N\_[z]{}(x-y) D(y) = \_[|y|L]{} | N\_[z]{}(x-y) | O(L\^[-d]{}) = O(L\^[-d]{}). By , this proves for $0 \leq |x| < 2 L$. For $|x|\geq 2L$, we note that $|x-y| \geq |x|/2$ when $|y| \leq L$. For such $y$, implies $| N_{z}(x-y) | = O( \gamma |x|^{-(d+2+\kappa)})$, and therefore (D\*N\_[z]{})(x) = \_[|y|L]{} N\_[z]{}(x-y) D(y) = O ( ) \_[|y|L]{} D(y) = O ( ) . Next, we use the above bound on $E_z(x)$ to derive a bound on $\hat{E}_z(k)$. \[lem-Ezhatbd\] Let $\kappa_2 = \kappa \wedge 2$. As $k {\rightarrow}0$, under the assumptions of Lemma \[lem-C\], | \_[z]{}(k) | c L\^[2+\_2]{} |k|\^[2+\_2]{} & (2)\ c |k|\^[4]{} (L\^[4]{} + |k|\^[-1]{}) & ( = 2). The proof proceeds as in the proof of Lemma \[lem-Fourier1\]. Since $\hat{E}_z(0)=\nabla^2\hat{E}_z(0)=0$, as in – we have |\_[z]{}(k)| c|k|\^[4]{} \_[x: |x| |k|\^[-1]{}]{}|x|\^[4]{} | E\_[z]{}(x)| + c \_[x: |x| &gt; |k|\^[-1]{}]{} (1 + |k|\^[2]{}|x|\^[2]{}) | E\_[z]{}(x) |. A calculation using Lemma \[lem-Epbd\] then implies that for $|k| \leq (2L)^{-1}$, | \_[z]{}(k) | c & (2 )\ c & (= 2 ). The above bounds imply for $|k| \leq (2L)^{-1}$. The case $|k| > (2L)^{-1}$ is bounded simply as |\_[z]{}(k)| \_[x]{} | E\_[z]{}(x) | = O(), which satisfies for $|k| > (2L)^{-1}$. Proof of Proposition \[lem-C\] {#sub-errormain} ------------------------------ In this section, we prove Proposition \[lem-C\]. It suffices to consider the case of small $\alpha$. We begin by proving the $x$-dependent bound, which is valid for all $x$. The uniform bound, valid for $x \neq 0$, will follow immediately from this proof. The proof is divided into three cases, according to the value of $x$. We prove the $x$-dependent bound first assuming $\kappa \neq 2$, and comment on the minor modifications for $\kappa =2$ at the end. *Case 1: $x = 0$.* The uniform bound on $S_{\mu_{z}}(x)$ implies that |(E\_[z]{}\*S\_[\_[z]{}]{})(0)| |E\_[z]{}(0)| + O(L\^[-d]{}) \_y |E\_[z]{}(y)| . Lemma \[lem-Epbd\] implies $\sum_{y} |E_{z}(y)| = O(\gamma)$, and hence is $O(\gamma)$ and satisfies . *Case 2: $0 < |x| \leq L^{1+\alpha/(d+\kappa_2)}$.* For arbitrary $x \neq 0$, it follows from Lemma \[lem-Epbd\] and that (E\_[z]{}\*S\_[\_[z]{}]{})(x) & = E\_[z]{}(x) S\_[\_[z]{}]{}(0) + \_[y: y x]{} E\_[z]{}(y) S\_[\_[z]{}]{}(x-y) & = O(L\^[-d]{}) + \_[y]{} |E\_[z]{}(y)| O(L\^[-d]{}) = O(L\^[-d]{}) . This proves the first bound of . Also, when $0 < |x| \leq L^{1+\alpha/(d+\kappa_2)}$, implies | (E\_[z]{}\*S\_[\_z]{})(x) | = O(L\^[-d]{}) = O ( ) O ( ) . *Case 3: $|x| > L^{1+\alpha/(d+\kappa_2)}$.* We fix $T = (\frac{|x|}{2\sigma})^{2-2\alpha/(d+\kappa_2)}$, which is equal to $T_{x/2}$ of with a smaller $\alpha$. We then define $X_1$ and $X_2$ by (E\_[z]{}\*S\_[\_[z]{}]{})(x) = \_[y]{} E\_[z]{}(x-y) S\^[&lt;]{}\_[\_z]{}(y;T) + \_[y]{} E\_[z]{}(x-y) S\^[&gt;]{}\_[\_z]{}(y;T) = X\_[1]{} + X\_[2]{}. 
The contribution $X_{1}$ is further divided as X\_[1]{} = \_[y: |y| |x|/2]{} E\_[z]{}(x-y) S\^[&lt;]{}\_[\_z]{}(y;T) + \_[y: |y| &gt; |x|/2]{} E\_[z]{}(x-y) S\^[&lt;]{}\_[\_z]{}(y;T) = X\_[11]{} + X\_[12]{}. It remains to estimate $X_{11}$, $X_{12}$ and $X_2$. For $X_{12}$, by our choice of $T$ we can use . Since $\sum_{y} | E_{z}(y) | = O(\gamma)$, we obtain | X\_[12]{} | \_[y: |y| &gt; |x|/2]{} | E\_[z]{} (x-y) | \_[y]{}| E\_[z]{} (x-y) | O ( ) = O ( ). For $X_{11}$, we use to obtain S\^[&lt;]{}\_[\_z]{}(y; T) S\_[\_z]{}(y) \_[0,y]{} + O ( ) . Since $|x-y| \geq |x|/2 \geq 2L$ (for large $L$), the third bound of Lemma \[lem-Epbd\] gives | X\_[11]{} | O ( ) O ( ). To control $X_{2}$, we use the integral representation for $S^{>}_{\mu_z}$ to write X\_[2]{} = (E\_[z]{}\*S\^[&gt;]{}\_[\_z]{})(x) = \_[T]{}\^ dt (I\_[t, \_z]{} \* E\_[z]{})(x) = \_[T]{}\^ dt \_[\[-,\]\^d]{} e\^[-i k x]{} \_[t, \_z]{}(k) \_[z]{}(k). By , $\hat{I}_{t,\mu_z}(k) = e^{-t[1-\mu_z\Dhat(k)]}$. Since $1-\mu_z\Dhat \geq [1-\Dhat]/2$, it follows from Lemma \[lem-Ezhatbd\] that | X\_2 | \_[T]{}\^ dt \_[\[-,\]\^d]{} e\^[-t\[1-(k)\]/2]{} cL\^[2+\_2]{} |k|\^[2+\_2]{}. We divide the $k$-integral according to whether $|k|$ is greater or less than $L^{-1}$, as in the analysis of $I^{(4)}_{t}(x)$ in –. This gives \_[\[-,\]\^d]{} e\^[-t\[1-(k)\]/2]{} |k|\^[2+\_2]{} & O(L\^[-(d+2+\_2)]{}) t\^[-(d+2+\_2)/2]{} + O(e\^[-\_[3]{} t]{}). The second error term can be absorbed into the first for $t \geq T$ and sufficiently large $L$, by arguing exactly as was done for . Performing the $t$-integral then gives | X\_2 | & O(L\^[-d]{}) \_[T]{}\^ t\^[-(d+2+\_2)/2]{} dt . Combining , and gives the desired estimate | (E\_z \* S\_[\_z]{})(x) | O( ). for $|x| > L^{1+\alpha/(d+\kappa_2)}$. The case $\kappa =2$ adds extra factors $\log |k|^{-1}$, $|\log t|$, and $\log(|x|/L)$ in , , , and . However, the extra logarithm of can be absorbed in $|x|^{d+\kappa_2-\alpha}$, by slightly increasing the exponent $\alpha$ of . Acknowledgements {#acknowledgements .unnumbered} ================ This work began while all three authors were visiting the Fields Institute and was supported in part by NSERC of Canada. The work of T.H. was also supported in part by a [Grant-in-Aid for Scientific Research (C)]{} of [the Ministry of Education, Science, Sports and Culture]{}  of Japan. [10]{} J. Adler, Y. Meir, A. Aharony, and A.B. Harris. Series study of percolation moments in general dimension. , [**41**]{}:9183–9206, (1990). M. Aizenman. On the number of incipient spanning clusters. , [**485**]{}:551–582, (1997). M. Aizenman and D.J. Barsky. Sharpness of the phase transition in percolation models. , [**108**]{}:489–526, (1987). M. Aizenman, H. Kesten, and C.M. Newman. Uniqueness of the infinite cluster and continuity of connectivity functions for short and long range percolation. , [**111**]{}:505–531, (1987). M. Aizenman and C.M. Newman. Tree graph inequalities and critical behavior in percolation models. , [**36**]{}:107–143, (1984). D.J. Barsky and M. Aizenman. Percolation critical exponents under the triangle condition. , [**19**]{}:1520–1536, (1991). A. Bovier, J. Fröhlich, and U. Glaus. Branched polymers and dimensional reduction. In K. Osterwalder and R. Stora, editors, [*Critical Phenomena, Random Systems, Gauge Theories*]{}, Amsterdam, (1986). North-Holland. D. Brydges, S.N. Evans, and J.Z. Imbrie. Self-avoiding walk on a hierarchical lattice in four dimensions. , [**20**]{}:82–124, (1992). D.C. Brydges and T. Spencer. Self-avoiding walk in 5 or more dimensions. 
, [**97**]{}:125–148, (1985). E. Derbez and G. Slade. The scaling limit of lattice trees in high dimensions. , [**193**]{}:69–104, (1998). G. Grimmett. . Springer, Berlin, 2nd edition, (1999). J.M. Hammersley and K.W. Morton. Poor man’s [Monte]{} [Carlo]{}. , [**16**]{}:23–38, (1954). T. Hara. Critical two-point functions for nearest-neighbour high-dimensional self-avoiding walk and percolation. In preparation. T. Hara and G. Slade. Mean-field critical behaviour for percolation in high dimensions. , [**128**]{}:333–391, (1990). T. Hara and G. Slade. On the upper critical dimension of lattice trees and lattice animals. , [**59**]{}:1469–1510, (1990). T. Hara and G. Slade. Mean-field behaviour and the lace expansion. In G. Grimmett, editor, [*Probability and Phase Transition*]{}, Dordrecht, (1994). Kluwer. T. Hara and G. Slade. The self-avoiding-walk and percolation critical points in high dimensions. , [**4**]{}:197–215, (1995). T. Hara and G. Slade. The scaling limit of the incipient infinite cluster in high-dimensional percolation. [II]{}. [Integrated]{} super-[Brownian]{} excursion. , [**41**]{}:1244–1293, (2000). R. van der Hofstad and G. Slade. A generalised inductive approach to the lace expansion. Preprint, (2000). R. van der Hofstad and G. Slade. The lace expansion on a tree with application to networks of self-avoiding walks. In preparation. B.D. Hughes. , volume 1: Random Walks. Oxford University Press, Oxford, (1995). B.D. Hughes. , volume 2: Random Environments. Oxford University Press, Oxford, (1996). D. Iagolnitzer and J. Magnen. Polymers in a weak random potential in dimension four: rigorous renormalization group analysis. , [**162**]{}:85–121, (1994). H. Kesten. . Birkhäuser, Boston, (1982). D.A. Klarner. Cell growth problems. , [**19**]{}:851–863, (1967). D.J. Klein. Rigorous results for branched polymer models with excluded volume. , [**75**]{}:5186–5189, (1981). T.C. Lubensky and J. Isaacson. Statistics of lattice animals and dilute branched polymers. , [**A20**]{}:2130–2146, (1979). N. Madras and G. Slade. . Birkh[ä]{}user, Boston, (1993). M.V. Menshikov. Coincidence of critical points in percolation problems. , [**33**]{}:856–859, (1986). B.G. Nguyen and W-S. Yang. Triangle condition for oriented percolation in high dimensions. , [**21**]{}:1809–1844, (1993). G. Parisi and N. Sourlas. Critical behavior of branched polymers and the [L]{}ee–[Y]{}ang edge singularity. , [**46**]{}:871–874, (1981). M.D. Penrose. Self-avoiding walks and trees in spread-out lattices. , [**77**]{}:3–15, (1994). F. Spitzer. . Springer, New York, 2nd edition, (1976). H. Tasaki and T. Hara. Critical behaviour in a system of branched polymers. , [**92**]{}:14–25, (1987). K. Uchiyama. Green’s functions for random walks on ${Z}^{N}$. , [**77**]{}:215–240, (1998).
--- abstract: 'By fitting a flexible stellar anisotropy model to the observed surface brightness and line-of-sight velocity dispersion profiles of Draco we derive a sequence of cosmologically plausible two-component (stars $+$ dark matter) models for this galaxy. The models are consistent with all the available observations and can have either cuspy Navarro-Frenk-White or flat-cored dark matter density profiles. The dark matter halos either formed relatively recently (at $z\sim 2\dots 7$) and are massive (up to $\sim 5\times 10^9$ M$_\odot$), or formed before the end of the reionization of the universe ($z\sim 7\dots 11$) and are less massive (down to $\sim 7\times 10^7$ M$_\odot$). Our results thus support either of the two popular solutions of the “missing satellites” problem of $\Lambda$ cold dark matter cosmology – that dwarf spheroidals are either very massive, or very old. We carry out high-resolution simulations of the tidal evolution of our two-component Draco models in the potential of the Milky Way. The results of our simulations suggest that the observable properties of Draco have not been appreciably affected by the Galactic tides after 10 Gyr of evolution. We rule out Draco being a “tidal dwarf” – a tidally disrupted dwarf galaxy. Almost radial Draco orbits (with the pericentric distance $\lesssim 15$ kpc) are also ruled out by our analysis. The case of a harmonic dark matter core can be consistent with observations only for a very limited choice of Draco orbits (with the apocentric-to-pericentric distances ratio of $\lesssim 2.5$).' author: - 'Sergey Mashchenko, Alison Sills, and H. M. P. Couchman' title: Constraining global properties of the Draco dwarf spheroidal galaxy --- INTRODUCTION ============ Galactic dwarf spheroidal galaxies (dSphs) are intriguing objects with a deceptively simple appearance – roughly spheroidal shape, no gas, and no recent star formation. Due to their closeness, these galaxies are studied in significantly more detail than other external galaxies (see the review of @mat98). Despite that, the nature of dSphs and their place in the larger cosmological picture is not well understood. The first wave of enhanced interest in dSphs took off after the pioneer work of @aar83. Based only on the three stars in Draco with measured line-of-sight velocities, he boldly claimed that Draco can be significantly dark matter (DM) dominated. The fact that the stellar velocity dispersion in dSphs is significantly larger than what would follow from the virial theorem for the luminous mass was later confirmed with much larger studies. For example, the latest compilation of @wil04 contained 207 Draco stars with measured line-of-sight velocities. Similar results were obtained for other dSphs as well – both for the Milky Way and M31 satellites. There were attempts to explain the large stellar velocity dispersion in Galactic dSphs without invoking the DM hypothesis. Most notably, @kro97 presented a model where dSphs are considered to be “tidal dwarfs” – virtually unbound stellar streams from dwarf galaxies tidally disrupted in the Milky Way potential. The model of @kro97 appears to be not applicable to Draco as it is unable to reproduce the narrow observed horizontal branch width of this dwarf [@kle03]. In this paper we will present additional evidence against Draco being a tidal dwarf. The overall consensus now appears to be that dSphs do contain significant amounts of DM and are hence the smallest known objects with (indirectly) detected DM. 
This makes them very interesting objects, as they can be an important test bench for modern cosmological models and for the theories of DM. More recently, the interest in dSphs was rejuvenated after the realization that simple (DM-only) cosmological models overpredict the number of the Milky Way satellite galaxies by $1-2$ orders of magnitude [@kly99; @moo99]. This was coined the “missing satellite problem” of cosmology. The original analysis was done under the assumption that DM in dSphs has the same spatial extent as stars. Relaxation of the above assumption led to a suggestion that only the most massive subhalos predicted to populate a Galaxy-sized halo managed to form stars, with the rest of the subhalos staying dark [@sto02]. Another way of solving the missing satellites problem is to assume that only the oldest subhalos formed stars, with the star formation in the younger subhalos being suppressed by the metagalactic ionizing radiation after the reionization of the universe was accomplished around $z\sim 6.5$ [@bul00]. In reality, both mechanisms could have realized [@ric05]. To discriminate between the two above solutions of the missing satellites problem and to place Galactic dSphs in the right cosmological context one has to know the global parameters of these dwarfs, and most importantly, their total DM extent and mass. The traditional approach is to assume that the dwarf is in equilibrium (thus ignoring the possible impact of the Galactic tides), and to solve the Jeans equation to infer the density profile of the DM halo based on the observed surface brightness and line-of-sight velocity dispersion profiles. As the proper motions of individual stars in dSphs are not known, one has to resort to making certain assumptions about the anisotropy in the stellar velocities. Due to a well known degeneracy between the assumed stellar anisotropy and inferred total enclosed mass there are many solutions to the Jeans equation which are consistent with the observations. Another limitation of the above approach is that DM can be traced only within the stellar body extent of the dwarf, so no conclusion can be made about the total mass of the galaxy. It is also not clear at what distance from the dwarf’s center the virial equilibrium assumption breaks down due to Galactic tides. Some work has been done on the impact of tidal forces on the structure of Galactic satellites [e.g. @oh95; @pia95; @hay03; @kaz04], where it was clearly demonstrated that the tidal stripping and shocking is a complex dynamic process. The deficiency of the above work is in using single-component models for the dwarf galaxies, which made it impossible to directly compare the results of the numerical simulations with the observations. In general, stars in dSphs are distributed differently from DM, so they also behave differently in reaction to the external tides. To correctly describe the observable manifestations of the Galactic tides in dSphs it is essential to use two-component (stars $+$ DM) dwarf models [@rea05]. In this paper we place joint constraints on the global properties of Draco (one of the best studied dSphs) by (1) using cosmological predictions for the properties of DM halos, (2) developing very flexible stellar anisotropy model for dSphs, and (3) running a large set of high-resolution simulations of the evolution of two-component dwarfs in the Galactic tidal field. 
We derive a sequence of cosmologically plausible models for Draco which are consistent with all the observed structural and kinematical properties of this dwarf. Our results are consistent with either of the two above solutions for the missing satellite problem. Global Constraints on Draco DM Halo Properties {#global} ============================================== We consider two types of DM halos: theoretically motivated @NFW97 [hereafter NFW] halos with a $\gamma=-1$ density cusp at the center, and observationally motivated @bur95 halos, which have a flat core: $$\begin{aligned} \rho(r)& =& \frac{\rho_s}{r/r_s\, (1+r/r_s)^2}\quad\mbox{(NFW),}\label{eq1}\\ \rho(r)& = &\frac{\rho_s}{(1+r/r_s)\,[1+(r/r_s)^2]}\quad\mbox{(Burkert).}\label{eq2}\end{aligned}$$ Here $\rho_s$ and $r_s$ are the scaling density and radius. At large distances from the center both halos have the same asymptotic density slope of $\gamma=-3$. Analysis of cosmological $N$-body simulations showed that the concentration $c=r_{\rm vir}/r_s$ of low-mass DM halos (with the virial mass $m_{\rm vir}=10^8\dots 10^{11}$ M$_\odot$) has a log-normal distribution with the mean $$\label{eqc} c=\frac{27}{1+z} \left(\frac{m_{\rm vir}}{10^9 M_\odot}\right)^{-0.08}$$ and dispersion 0.14 dex [@bul01 with the correction of @ste02]. Here $r_{\rm vir}$ is the virial radius of the halo and $z$ is the redshift. @ste02 showed that the four dwarf galaxies with a Burkert DM halo profile studied by @bur95 have the same dependence of the DM halo scaling density $\rho_s$ on the scaling radius $r_s$ as do the NFW halos in cosmological simulations. This result was obtained for $z=0$. We assume that it holds true for other redshifts as well, and use the equation (\[eqc\]) to find concentrations of both NFW and Burkert halos. Using the formula of @she99, one can estimate the comoving number density of DM halos per unit $\ln m_{\rm vir}$ and per standard deviation in concentration: $$\begin{aligned} \label{ST2} F\equiv \frac{dn}{d\ln m_{\rm vir} d\nu_c}&&\nonumber\\ = \frac{0.322}{2\pi} \left(1+\frac{1}{\nu^{0.6}}\right) \frac{\rho_m \nu}{S}\left|\frac{dS}{dm_{\rm vir}}\right| \exp\left(-\frac{\nu^2+\nu_c^2}{2}\right).&&\end{aligned}$$ Here $\nu_c$ is the number of standard deviations from the mean concentration, $\nu= (0.707/S)^{1/2} \delta(z)$, where $\delta(z)$ is the critical overdensity for spherical collapse extrapolated linearly to $z=0$, $\rho_m$ is the present day mass density of the universe, and $S$ is the variance of the primordial density field on mass scale $m_{\rm vir}$ extrapolated linearly to $z=0$. To estimate the above parameters we follow @mcs05. Throughout this paper we assume a flat $\Lambda$CDM cosmology and use the following values of the cosmological parameters: $\Omega_m=0.27$, $\Omega_b=0.044$, $H=71$ km s$^{-1}$ Mpc$^{-1}$, and $\sigma_8=0.84$ [@spe03]. It is interesting that one can derive quite general and (stellar) model-independent constraints on the properties of the Draco DM halo by combining the available observational data on this galaxy with the predictions of cosmology. Throughout this paper we assume that the distance to Draco is $82\pm 6$ kpc [@mat98]. It is convenient to consider different DM halo constraints in the plane of two scaling parameters, $\rho_s$ and $r_s$. We summarize the global constraints in Figure \[fig1\]. The most obvious constraint is that the Draco DM halo should have formed before the bulk of the Draco stars formed. 
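Before examining the individual constraints, it is convenient to collect the ingredients defined above in one place. The following minimal Python sketch (our own illustration; the function names and the choice of language are not part of the original analysis) evaluates the NFW and Burkert profiles of equations (\[eq1\])–(\[eq2\]) and the mean concentration relation (\[eqc\]):

\begin{verbatim}
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """NFW density profile: rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def rho_burkert(r, rho_s, r_s):
    """Burkert density profile: rho_s / [(1 + r/r_s) (1 + (r/r_s)^2)]."""
    x = r / r_s
    return rho_s / ((1.0 + x) * (1.0 + x**2))

def mean_concentration(m_vir, z):
    """Mean concentration c = 27/(1+z) (m_vir / 1e9 Msun)^(-0.08);
    the scatter about this mean is log-normal with 0.14 dex dispersion."""
    return 27.0 / (1.0 + z) * (m_vir / 1.0e9)**(-0.08)

# e.g. a 10^9 Msun halo collapsing at z = 7 has a mean concentration of ~3.4
print(f"c = {mean_concentration(1.0e9, 7.0):.2f}")
\end{verbatim}

With these relations in hand, the constraints can be evaluated one by one.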
We assume that the first star burst in Draco took place at least 10 Gyr ago [@mat98], or at $z\geqslant 1.8$. In Figure \[fig1\], the areas to the right of the dashed lines marked “$z=1.8$” correspond to halos older than 10 Gyr. The next constraint, $F\geqslant F_{\rm min}$, comes from the requirement for DM halos to be abundant enough to explain the observed number ($\sim 20$) of dwarf spheroidal galaxies in the Local Group. Following @mcs05, we adopt $F_{\rm min}=0.01$ Mpc$^{-3}$. As the function $F$ does not explicitly depend on $\rho_s$ and $r_s$ (it depends on $m_{\rm vir}$ and $z$), in @mcs05 we designed a numerical method for finding the most probable combination of ($m_{\rm vir}$,$z$) corresponding to given values of ($\rho_s$,$r_s$). As a by-product we also obtain the corresponding value of $\nu_c$. The areas below the solid lines in Figure \[fig1\] correspond to DM halos which were abundant enough to have been progenitors of dwarf spheroidals in the Local Group. Assuming that the tidal field of the Milky Way has not perturbed significantly the stars in the outskirts of Draco out to a distance of $r_{\rm out}\sim 1.2$ kpc from its center (the last observed point in the Draco surface brightness profile of @ode01, see Figure \[fig2\]a), the third constraint can be written as $r_{\rm vir}\geqslant r_{\rm out}$. (It is hard to imagine stars forming beyond the virial radius of its DM halo). The halos in the areas above the dash-dotted lines in Figure \[fig1\] satisfy the above criterion. In Figure \[fig4\] we plot the observed line-of-sight velocity dispersion profile for Draco (from @wil04). It has been noted that in Draco and Ursa Minor the galactic outskirts appear to be kinematically cold [@wil04]. There appears to be no good explanation for this phenomena. Given the fact that in the case of Draco the only (outermost) radial bin which is “cold” contains only 6 stars with measured line-of-sight velocities, and is marginally (at $\sim 1$ sigma level) consistent with the global average for Draco, $\langle \sigma_{\rm los}\rangle=9.5$ km s$^{-1}$, we decided to exclude the last bin from our analysis. Our assumption is that at the distance of $r_1=0.74$ kpc from the Draco center (half-way between the two last points in Figure \[fig4\]) the line-of-sight velocity dispersion for Draco is roughly equal to the global average[^1]: $\sigma_{\rm los}(r_1)\sim 9.5$ km s$^{-1}$ (the circle with a cross in Figure \[fig4\]). At this distance the stellar density in Draco is steeply declining (see Figure \[fig2\]a). As a result, most of stars observed at the projected distance $r_1$ from the Draco center are located roughly in the plane of the sky at the [*spatial*]{} distance $r_1$ from the center of the dwarf. We can then use $\sigma_{\rm los}(r_1)$ as a lower limit of Draco circular velocity at this distance, $V_{c,1}$. Indeed, if one considers the extreme of purely radial orbits, the line-of-sight velocity dispersion at large distance from the galactic center will be close to zero. In the opposite extreme of purely circular orbits, one can show that $\sigma_{\rm los}$ becomes approximately equal to the circular velocity at this distance. We plot the solution of the equation $V_{c,1}=9.5$ km s$^{-1}$ as long-dashed lines in Figure \[fig1\]. The areas above these lines correspond to halos which satisfy the above criterion. The result we derived here is approximate, but of model-independent nature. 
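For any trial pair ($\rho_s$,$r_s$) the fourth constraint can be tested numerically. The sketch below (ours) uses the standard NFW enclosed-mass formula (written out explicitly as equation (\[m\_DM\]) in the next section) to compute the circular velocity at $r_1=0.74$ kpc and compare it with 9.5 km s$^{-1}$:

\begin{verbatim}
import numpy as np

G = 4.30091e-6     # gravitational constant in kpc (km/s)^2 / Msun

def m_nfw(r, rho_s, r_s):
    """Enclosed NFW mass: 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], x = r/r_s."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def v_circ(r, rho_s, r_s):
    """Circular velocity in km/s (stellar mass neglected)."""
    return np.sqrt(G * m_nfw(r, rho_s, r_s) / r)

r1 = 0.74                                                  # kpc
for log_rho_s, log_r_s in [(7.2, 0.45), (9.0, -0.75)]:     # two trial halos
    vc = v_circ(r1, 10**log_rho_s, 10**log_r_s)
    status = "allowed" if vc >= 9.5 else "excluded"
    print(f"log rho_s = {log_rho_s}, log r_s = {log_r_s}: "
          f"V_c(r1) = {vc:.1f} km/s ({status})")
\end{verbatim}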
We present more accurate (but also more model-dependent) treatment in § \[st\_model\], where we fit stellar models to all the reliable observed $\sigma_{\rm los}$ points in Figure \[fig4\]. The last constraint is that the virial mass of the Draco halo is somewhere between “ridiculously” low and high values $10^7$ and $10^{11}$ M$_\odot$ (the area between two dotted lines in Figure \[fig1\]). The lower limit is even lower, by a factor $2-3$, than the classical “mass follows light” estimates [@mat98; @ode01]. The upper limit corresponds to a satellite which would very quickly spiral in to the center of the Milky Way due to dynamical friction. As one can see, the last constraint does not add any new information to our exclusion plots in Figure \[fig1\]. From comparison of Figures \[fig1\]a and \[fig1\]b one can see that all the constraints except for the fourth one ($V_{c,1}\geqslant 9.5$ km s$^{-1}$) are very similar for both NFW and Burkert DM density profiles. This is intimately linked to our assumption that for a given virial mass $m_{\rm vir}$ and redshift $z$ both types of halos have the same concentration – given by equation (\[eqc\]). This automatically makes the scaling radii $r_s$ equal in both cases. At the same time, one can show that the scaling densities for NFW and Burkert halos are related through $$\frac{\rho_{s,\rm Burkert}}{\rho_{s,\rm NFW}} = 2\frac{\ln(1+c)-c/(1+c)}{\ln(1+c)+[\ln(1+c^2)]/2-\arctan c},$$ which is close to unity for realistic halos (the ratio changes from $0.966\dots 0.921$ for $c=3.5\dots 10$). The fourth constraint, unlike the rest of the criteria, does not deal with a global property of a halo (such as $m_{\rm vir}$ and $z$ or their derivatives – $r_{\rm vir}$ and $F$). Instead, it deals with the average DM density within a certain fixed radius – which can be dramatically different for cuspy NFW and flat-cored Burkert models. The shaded areas in Figure \[fig1\] correspond to DM halos which satisfy all of the above global constraints. As one can see, the Draco halo parameters are not particularly well constrained, especially in the case of the NFW profile. Still, one can make a few interesting observations. Firstly, the most restrictive (and useful) constraints are $F\geqslant 0.01$ Mpc$^{-3}$ and $V_{c,1}\geqslant 9.5$ km s$^{-1}$. Secondly, there is plenty of room for Draco to have formed before the end of the reionization of the universe at $z\sim 6.5$ (shaded areas to the right of the dashed lines marked “$z=6.5$” in Figure \[fig1\]). Thirdly, we can derive the range of possible values of different halo parameters. For NFW halos, our exclusion plot implies that $\log m_{\rm vir}=7.7\dots 10.7$, $z=1.8\dots 11$, $r_s=0.16\dots 7.8$ kpc, and $\log \rho_s =7.0\dots 9.1$. (Here units for $m_{\rm vir}$ and $\rho_s$ are M$_\odot$ and M$_\odot$ kpc$^{-3}$, respectively.) For Burkert halos, $\log m_{\rm vir}=7.8\dots 10.5$, $z=3.9\dots 11$, $r_s=0.17\dots 5.0$ kpc, and $\log \rho_s =7.4\dots 9.1$. Interestingly, for both NFW and Burkert cases, Draco could not have formed before $z\sim 11$, or more than 13.2 Gyrs ago. The reason for that is that at larger redshifts DM halos with the virial radii $\geqslant 1.2$ kpc become too rare to correspond to a typical dwarf spheroidal galaxy in the Local Group. 
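As a quick consistency check on the scaling-density ratio quoted above, the values at the two ends of the concentration range can be reproduced directly (an illustrative snippet of ours):

\begin{verbatim}
import numpy as np

def density_ratio(c):
    """rho_s(Burkert) / rho_s(NFW) for halos of equal concentration c."""
    nfw = np.log(1.0 + c) - c / (1.0 + c)
    burkert = np.log(1.0 + c) + 0.5 * np.log(1.0 + c**2) - np.arctan(c)
    return 2.0 * nfw / burkert

for c in (3.5, 10.0):
    print(f"c = {c:4.1f}: ratio = {density_ratio(c):.3f}")
# prints 0.966 and 0.921, matching the range quoted in the text
\end{verbatim}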
The main purpose of generating the exclusion plots in Figure \[fig1\] was to significantly reduce the computational burden in the next step of our analysis, described in the next section, where we use the whole observed line-of-sight velocity dispersion profile along with the surface brightness profile to find the best-fitting stellar models and to further reduce the uncertainty in ($\rho_s$,$r_s$) values for the Draco DM halo. Stellar Model {#st_model} ============= The equilibrium state of a spherically symmetric stellar system can described by the Jeans equation, $$\label{Jeans} \frac{1}{\rho_*}\frac{d(\rho_* \sigma_r^2)}{dr} + \frac{2}{r}\left(\sigma_r^2 - \sigma_t^2\right) = -\frac{d\Phi}{dr}$$ [@bin87], which is obtained by taking the first velocity moment of the collisionless Boltzmann equation. Here $r$ is the distance from the center of the system, $\rho_*$ is the stellar density, $\sigma_r$ and $\sigma_t$ are the one-dimensional stellar radial and tangential velocity dispersions, respectively, and $\Phi$ is the total gravitational potential (due to stars and DM). The radial gradient of the gravitational potential can be calculated as $d\Phi/dr=G[m(r)+m_*(r)]/r^2$, where $m(r)$ and $m_*(r)$ are the enclosed DM and stellar masses, respectively, and $G$ is the gravitational constant. From equations (\[eq1\]–\[eq2\]) we derived $$\begin{aligned} m(r)&=&4\pi r_s^3 \rho_s [\ln(1+x)-x/(1+x)] \hspace{1.2cm} \mbox{(NFW)},\label{m_DM}\\ m(r)&=&2\pi r_s^3 \rho_s [\ln(1+x)+(1/2)\ln(1+x^2)-\arctan x]\nonumber \\ &&\hspace{5cm}\mbox{(Burkert).}\label{m_DM2}\end{aligned}$$ Here $x\equiv r/r_s$. We neglect impact of baryons on DM distribution, as in Draco the stellar density is more than an order of magnitude lower than DM density even at the center of the galaxy. The Plummer density profile, $$\label{rho_*} \rho_*=\rho_0 \left[ 1 + (r/b)^2\right]^{-5/2}$$ [@bin87], is used sometimes to describe simple spherically symmetric stellar systems, such as globular clusters and dwarf spheroidal galaxies. It has a core of size $b$ and a power-law envelope with the slope $\gamma=-5$. We found that a “generalized Plummer law”, $$\label{genplum} \rho_*=\rho_0 \left[ 1 + (r/b)^2\right]^{-\alpha/2},$$ which has a surface density profile of the form $$\label{Sigma} \Sigma=\Sigma_0 \left[ 1 + (R/b)^2\right]^{-(\alpha-1)/2},$$ provides much better fit to the Draco star count profile of @ode01 than the Plummer model and the theoretical @K66 model, if one chooses $\alpha=7$ (see Figure \[fig2\]a) [^2]. Here $\alpha$ is an integer number $\geqslant 2$ and $R$ is the projected distance from the center of the system. For $\alpha \geqslant 4$, the total stellar mass can be calculated as $$M_*=\frac{2\pi \Sigma_0 b^2}{\alpha-3}.$$ For $\alpha=7$, the stellar enclosed mass is $$\label{m_*} m_*(r)=\frac{4\pi \rho_0 r^3}{15} \frac{5+2(r/b)^2}{(1+r^2/b^2)^{5/2}},$$ and $\rho_0=15\Sigma_0/(16 b)$. The $\chi^2$ fitting of equation (\[Sigma\]) to the Draco profile of @ode01 gave $\rho_0=1.08\times 10^7$ M$_\odot$ kpc$^{-3}$ and $b=0.349$ kpc. (We assumed that the stellar $V$-band mass-to-light ratio of Draco stars is $\Upsilon=1.32$, which is an average value of the Salpeter and composite model estimates of @mov98.) Traditionally, in equation (\[Jeans\]) one uses $\sigma_r$ and $\beta\equiv 1-\sigma_t^2/\sigma_r^2$ instead of $\sigma_r$ and $\sigma_t$. The anisotropy parameter $\beta$ is equal to $-\infty$, 0, and 1 for purely tangential (circular), isotropic, and purely radial stellar orbits. 
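Returning briefly to the photometric model: with $\alpha=7$ and the fitted values $\rho_0=1.08\times 10^7$ M$_\odot$ kpc$^{-3}$ and $b=0.349$ kpc, the stellar profile and its integrated mass are easy to evaluate. The sketch below is an illustrative re-implementation of the relations above (not the original fitting code); the implied total stellar mass, $\sim 8\times 10^5$ M$_\odot$, follows directly from them.

\begin{verbatim}
import numpy as np

ALPHA = 7
RHO_0 = 1.08e7     # Msun / kpc^3, best-fit central stellar density
B     = 0.349      # kpc, best-fit scale radius
SIGMA_0 = 16.0 * B * RHO_0 / 15.0        # from rho_0 = 15 Sigma_0 / (16 b)

def surface_density(R):
    """Projected stellar density Sigma_0 [1 + (R/b)^2]^(-(alpha-1)/2)."""
    return SIGMA_0 * (1.0 + (R / B)**2)**(-(ALPHA - 1) / 2)

def m_star(r):
    """Enclosed stellar mass for alpha = 7."""
    x = r / B
    return 4.0*np.pi*RHO_0*r**3/15.0 * (5.0 + 2.0*x**2) / (1.0 + x**2)**2.5

m_total = 2.0 * np.pi * SIGMA_0 * B**2 / (ALPHA - 3)
print(f"total stellar mass: {m_total:.2e} Msun")
print(f"fraction enclosed within 1 kpc: {m_star(1.0) / m_total:.3f}")
\end{verbatim}

The remaining, and far less constrained, ingredient of the model is the velocity anisotropy of the stars.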
We advocate a different anisotropy parameter, $$\eta\equiv \frac{\sigma_r^2-\sigma_t^2}{\sigma_r^2+\sigma_t^2}.$$ Unlike $\beta$, parameter $\eta$ is symmetric: it is equal to $-1$, 0, and 1 for circular, isotropic, and purely radial orbits. The equations connecting $\beta$ and $\eta$ are $\eta=\beta/(2-\beta)$ and $\beta=2\eta/(1+\eta)$. Another useful expression is $\sigma_t^2=\sigma_r^2 (1-\eta)/(1+\eta)$. The Jeans equation (\[Jeans\]) can now be rewritten as $$\label{Jeans2} \frac{1}{\rho_*}\frac{d(\rho_* \sigma_r^2)}{dr} + \frac{4\eta}{1+\eta}\frac{\sigma_r^2}{r} = -\frac{d\Phi}{dr}.$$ As there are two unknown functions in equations (\[Jeans\]) or (\[Jeans2\]), $\sigma_r(r)$ and $\sigma_t(r)$ (or $\beta[r]$, or $\eta[r]$), one customarily assumes the shape of the anisotropy profile and then solves the ordinary differential equation for $\sigma_r(r)$. The boundary condition is $\sigma_r(r\rightarrow\infty)=0$. Traditional choices for the anisotropy profile are (1) $\beta=constant$ (with the special case of $\beta=0$, or isotropic stellar orbits), and (2) the Osipkov-Merritt profile [@osi79; @mer85], $$\beta=\left[ 1+\left(r_a/r\right)^2\right]^{-1}.$$ In the latter case, the stellar system is isotropic at the center, reaches $\beta=0.5$ at $r=r_a$, and becomes purely radially anisotropic in infinity. Unfortunately, the two above choices are very limited. We propose instead a much more flexible anisotropy profile, $$\label{eta} \eta = \eta_0 + (\eta_1-\eta_0)\left[1-\left(\rho_*/\rho_0\right)^{1/\lambda}\right],$$ which is applicable to systems with a flat core (such as generalized Plummer model or King model). Here $\lambda$ is a positive number of order of unity, and $\eta_0$ and $\eta_1$ are asymptotic values of the anisotropy parameter $\eta$ for $r\rightarrow 0$ and $r\rightarrow\infty$, respectively. As one can see, the profile in equation (\[eta\]) does not explicitly depend on $r$, like the Osipkov-Merritt profile. Instead, it depends on the stellar density $\rho_*$. We believe it is a reasonable approach, as in many realistic stellar systems the same dynamical processes shape simultaneously both density and anisotropy profiles. The examples are the collapse of initially homogeneous warm stellar sphere [@alb82; @mas05a] and the expansion of a newly formed stellar system in a spherical galaxy with a DM halo after removal of the leftover gas by the feedback mechanisms [@mcs05]. In both cases, the stellar orbits become increasingly radially anisotropic in the outskirts of the relaxed system, where the density is steeply declining. Both $\beta=constant$ and the Osipkov-Merritt profiles can be considered as special cases of our more general expression in equation (\[eta\]). Indeed, fixing $\eta_0=\eta_1$ would correspond to the case of $\beta=constant$; using the generalized Plummer density profile from equation (\[genplum\]) and setting $\lambda=\alpha/2$ produces the Osipkov-Merritt profile with $r_a=b/\sqrt{2}$. The parameter $\lambda$ in equation (\[eta\]) controls how sensitive the anisotropy parameter is to changes in density. To illustrate this effect, in Figure \[fig2\]b we show anisotropy profiles for Draco with $\eta_0=0.9$, $\eta_1=-0.7$, and $\lambda=0.5,1,3,5$. One can see that by varying $\lambda$ by a factor of a few a large range of possible anisotropy profiles is produced. 
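The behaviour of the proposed anisotropy law is easy to explore numerically. The sketch below (ours) evaluates $\eta(r)$ of equation (\[eta\]) for the generalized Plummer density and the four values of $\lambda$ shown in Figure \[fig2\]b:

\begin{verbatim}
import numpy as np

def eta_profile(r, eta0, eta1, lam, b=0.349, alpha=7):
    """Anisotropy eta(r) driven by the local stellar density."""
    rho_ratio = (1.0 + (r / b)**2)**(-alpha / 2)     # rho_*(r) / rho_0
    return eta0 + (eta1 - eta0) * (1.0 - rho_ratio**(1.0 / lam))

def beta_from_eta(eta):
    """Conventional anisotropy parameter beta = 2 eta / (1 + eta)."""
    return 2.0 * eta / (1.0 + eta)

r = np.array([0.0, 0.1, 0.3, 0.7, 1.2])              # kpc
for lam in (0.5, 1.0, 3.0, 5.0):
    print(lam, np.round(eta_profile(r, 0.9, -0.7, lam), 2))
\end{verbatim}

Setting $\lambda=\alpha/2$ recovers the Osipkov-Merritt profile with $r_a=b/\sqrt{2}$, and $\eta_0=\eta_1$ recovers $\beta={\rm constant}$, as noted above.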
We designed a numerical algorithm to find values of the parameters controlling the shape of the anisotropy function $\eta$ ($\eta_0$, $\eta_1$, and $\lambda$) which would $\chi^2$ minimize the deviation of the simulated line-of-sight velocity dispersion profile from the observed one (circles with error bars in Figure \[fig4\] — we excluded from our analysis the last point as an unreliable one) for given $\rho_s$, $r_s$, and the halo profile (NFW or Burkert). The procedure consists of the following six steps. \(1) We choose values of $\log \rho_s$ and $\log r_s$ from a grid with spacings of 0.45 dex and 0.15 dex, respectively. (The values of the increments were chosen to lead to a factor of two increase in the halo virial mass.) The reference point of the grid is $\log \rho_s =9$ and $\log r_s =0$. Only those grid points are considered which are lying within the shaded zones in Figure \[fig1\]. In the Burkert case, we also considered a few points lying outside of the shaded area in attempt to bracket the point with the best $\chi^2$. All these grid points are shown as circles (either empty or filled) in Figure \[fig3\]. Overall, we considered 35 different DM halo models. \(2) For each of the DM halo models, we consider 1764 different combinations of anisotropy shape parameters $\eta_0=-1,-0.9,-0.8,\dots,1$, $\eta_1=-1,-0.9,-0.8,\dots,1$, and $\lambda=0.5,1,3,5$. For each combination, we solve the Jeans equation (\[Jeans2\]) numerically with the anisotropy, stellar density, enclosed DM mass, and enclosed stellar mass profiles given by equations (\[eta\]), (\[rho\_\*\]), (\[m\_DM\]–\[m\_DM2\]), and (\[m\_\*\]), respectively. We adopt $\alpha=7$, $\rho_0=1.08\times 10^7$ M$_\odot$ kpc$^{-3}$, and $b=0.349$ kpc. The solution of the Jeans equation is the radial velocity dispersion profile $\sigma_r(r)$. \(3) For each of the above $\sim 60,000$ models we generate a spherically symmetric $N$-body stellar model, with the number of particles $N=10,000$. Stellar particles are distributed randomly, with the density profile given by equation (\[rho\_\*\]). Each stellar particle is assigned random values of the radial and two tangential components of the velocity vector: $V_r$, $V_\theta$, and $V_\phi$. All three components are assumed to have a Gaussian distribution, with the dispersions $\sigma_r(r)$, $\sigma_t(r)$, and $\sigma_t(r)$, respectively. This is an approximate method of generating a close-to-equilibrium $N$-body system with an arbitrary density and anisotropy profiles. The accurate method would involve numerically calculating the distribution function, which is a very computationally expensive procedure. This would render our approach unfeasible. \(4) We calculate the line-of-sight velocity dispersion profile for the generated $N$-body stellar models integrated over the same projected radial bins as in the observed profile of @wil04 (the edges of their bins are 0,0.12,0.24,0.36,0.48,0.60,0.72 kpc – Jan Kleyna, private communication). For this we use the projection method of @mas05a [Appendix B] where we explicitly use the spherical symmetry of our stellar system. \(5) We calculate $\chi^2$ difference between the observed line-of-sight velocity dispersion profile (the six reliable points in Figure \[fig4\]) and the modeled one. As the observational error bars are asymmetric, for calculating $\chi^2$ we use the appropriate one-sided value of the standard deviation depending on which side of the observed point the model point is located. 
\(6) For each DM halo model, we choose one of 1764 models, with different $\eta_0$, $\eta_1$, and $\lambda$, which produces the lowest value of $\chi^2$. For most DM halo models, we could not find a good fit to the observed line-of-sight velocity dispersion profile (with some models having $\chi^2>100$): all modeled $\sigma_{\rm los}(r)$ points were either well above or well below the observed ones. Only a few models produced $\chi^2<9.5$ (solid circles in Figure \[fig3\]). It is interesting that for both NFW and Burkert cases the best-fitting models follow very closely a sequence of isothermal stellar models with the total one-dimensional stellar velocity dispersion $\sigma_{\rm tot}=[(\sigma_r^2+2\sigma_t^2)/3]^{1/2}=9.5$ km s$^{-1}$ (thin solid lines in Figure \[fig3\]). We obtained the isothermal solutions by solving the appropriate Jeans equation, $$\label{Jeans3} \frac{1}{\rho_*}\frac{d(\rho_* \sigma_r^2)}{dr} + \frac{3}{r}\left(\sigma_r^2 - \sigma_{\rm tot}^2\right) = -\frac{d\Phi}{dr},$$ with the boundary condition $\sigma_r(0)=\sigma_{\rm tot}$. For isothermal systems the usual Jeans equation requirement $\sigma_r(r\rightarrow\infty)=0$ does not hold in general case, and at some radius $r_{\rm max}$ the solution breaks down when $\sigma_r^2$ becomes negative. The isothermal model lines shown in Figure \[fig3\] were obtained by finding the value of $r_s$ which would maximize $r_{\rm max}$ for given $\rho_s$. Typically $r_{\rm max}\gg 1$ kpc, and only for the NFW model with $\rho_s=10^9$ M$_\odot$ kpc$^{-3}$ did it drop down to 0.6 kpc. The closeness of our best-fitting models to isothermal models does not imply that the isothermal models are the best ones. We calculated $\chi^2$ differences between the observed and modeled line-of-sight velocity dispersion profiles for a few isothermal models, and they were substantially worse than for our best-fitting models. We also plotted $\sigma_{\rm tot}(r)$ profiles for the best-fitting models, and they were not isothermal. The explanation for the closeness of our best-fitting models to the isothermal ones is in the virial theorem. Given that the line-of-sight velocity dispersion profile and the surface brightness profile are fixed, the virial theorem implies that all realistic models should stay close to a certain line in the $(\rho_s,r_s)$ plane. From Figure \[fig3\] one can see that the best-fitting models are well bracketed inside the zones were all the global constrains on $\rho_s$ and $r_s$ from § \[global\] are satisfied. In other words, we have a good agreement between the global constraints and the detailed $\sigma_{\rm los}(r)$ analysis constraints, which were derived in a very different fashion. The exception is the Burkert halos with $\log\rho_s=8.1$, where $\chi^2$ is slightly improving when moving upwards into the area with $F< 0.01$ Mpc$^{-3}$. This is explained by the fact that at this value of $\log\rho_s$ the isothermal sequence is making a sharp upward turn. 
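Step (2) of the algorithm reduces to a one-dimensional quadrature once the Jeans equation (\[Jeans2\]) is written in integrating-factor form, $\rho_*\,g\,\sigma_r^2(r)=\int_r^\infty \rho_* g \,(d\Phi/dr')\,dr'$ with $\ln g(r)=\int 4\eta/[(1+\eta)\,r]\,dr$. A minimal numerical sketch of this step (our own re-implementation, not the authors' code) is:

\begin{verbatim}
import numpy as np

G = 4.30091e-6                          # kpc (km/s)^2 / Msun

def sigma_r_sq(r, rho_star, eta, dphi_dr):
    """Radial velocity dispersion squared from the Jeans equation,
    with the boundary condition sigma_r(r -> infinity) = 0.
    All arguments are arrays tabulated on the (increasing) grid r."""
    h = 4.0 * eta / (1.0 + eta) / r                       # d(ln g)/dr
    ln_g = np.concatenate(([0.0],
            np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(r))))
    g = np.exp(ln_g)
    f = rho_star * g * dphi_dr
    pieces = 0.5 * (f[1:] + f[:-1]) * np.diff(r)
    outer = np.concatenate((np.cumsum(pieces[::-1])[::-1], [0.0]))  # int_r^rmax
    return outer / (rho_star * g)

# example: a model-N3-like NFW halo with isotropic stellar orbits (eta = 0)
r = np.logspace(-2.5, 1.0, 2000)                          # kpc
rho_s, r_s, rho0, b = 10**8.1, 10**-0.3, 1.08e7, 0.349
x, xb = r / r_s, r / b
m_dm = 4*np.pi*rho_s*r_s**3 * (np.log(1 + x) - x/(1 + x))
m_st = 4*np.pi*rho0*r**3/15 * (5 + 2*xb**2) / (1 + xb**2)**2.5
rho_star = rho0 * (1 + xb**2)**(-3.5)
sig_r = np.sqrt(sigma_r_sq(r, rho_star, np.zeros_like(r),
                           G*(m_dm + m_st)/r**2))
print(f"sigma_r(0.2 kpc) = {np.interp(0.2, r, sig_r):.1f} km/s")
\end{verbatim}

Projecting such a model onto the sky and binning it as in step (4) then gives the $\sigma_{\rm los}$ profile that enters the $\chi^2$ of step (5).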
Table \[tab1\]. Parameters of the best-fitting two-component Draco models.

\begin{tabular}{cccccc|cccccccccc}
\hline
Model & Halo & $\log\rho_s$ & $\log r_s$ & $N_{\rm DM}$ & $\varepsilon_{\rm DM}$ & $\lambda$ & $\eta_0$ & $\eta_1$ & $\chi^2$ & $m_{\rm vir}$ & $z$ & $c$ & $r_{\rm vir}$ & $\nu_c$ & $\log F$\\
 & & (M$_\odot$ kpc$^{-3}$) & (kpc) & & (pc) & & & & & (M$_\odot$) & & & (kpc) & & (Mpc$^{-3}$)\\
\hline
N1 & NFW & 7.20 & 0.45 & $10^6$ & 60 & 1 & 0.9 & $-0.7$ & 5.4 & $4.6\times 10^9$ & 2.46 & 5.55 & 15.6 & $-0.68$ & $-1.09$\\
N2 & NFW & 7.65 & 0.00 & $3.16\times 10^5$ & 28 & 1 & 0.7 & $-0.8$ & 6.6 & $5.2\times 10^8$ & 4.50 & 4.80 & 4.80 & $-0.23$ & $-0.15$\\
N3 & NFW & 8.10 & $-0.30$ & $10^5$ & 36 & 1 & 0.8 & $-0.9$ & 6.1 & $1.8\times 10^8$ & 7.10 & 4.55 & 2.28 & 0.54 & 0.08\\
N4 & NFW & 8.55 & $-0.60$ & $10^5$ & 20 & 1 & 0.6 & $-1.0$ & 8.9 & $6.9\times 10^7$ & 9.45 & 5.14 & 1.29 & 1.47 & $-0.11$\\
N5 & NFW & 9.00 & $-0.75$ & $10^5$ & 19 & 1 & 1.0 & $-1.0$ & 8.6 & $8.4\times 10^7$ & 10.7 & 6.92 & 1.23 & 2.80 & $-1.62$\\
B1 & Burkert & 8.10 & 0.15 & $10^6$ & 37 & 1 & 0.7 & 0.0 & 6.6 & $4.5\times 10^9$ & 6.74 & 4.98 & 7.03 & 1.48 & $-1.93$\\
B2 & Burkert & 8.55 & $-0.45$ & $3.16\times 10^5$ & 30 & 1 & 0.7 & $-0.6$ & 6.5 & $2.1\times 10^8$ & 9.64 & 5.17 & 1.84 & 1.82 & $-0.97$\\
B3 & Burkert & 9.00 & $-0.75$ & $10^5$ & 19 & 1 & 0.8 & $-0.9$ & 6.2 & $9.1\times 10^7$ & 11.0 & 6.91 & 1.23 & 2.90 & $-1.83$\\
\hline
\end{tabular}

We list the parameters for the best-fitting models in Table \[tab1\]. We show only the models with $\chi^2<9.5$ located within the globally constrained zones. Since the Burkert halos with $\log\rho_s=8.55$ yield two comparably good models, we chose the one which is closer to the isothermal sequence. Analysis of Table \[tab1\] shows that a comparably good $\sigma_{\rm los}(r)$ fit can be obtained for a very large range in virial masses, from $\sim 7\times 10^7$ to $\sim 5\times 10^9$ M$_\odot$, for both NFW and Burkert halos. Despite the very large differences in DM masses and density profiles, all the best-fitting stellar models have comparable values of the anisotropy parameters: $\lambda =1$ (in other words, the anisotropy $\eta$ is a linear function of the stellar density), $\eta_0\sim 0.8$, and $\eta_1\sim 0\dots -1$. In Figure \[fig4\] we show the line-of-sight velocity dispersion profiles for the eight best-fitting models from Table \[tab1\], integrated over the same projected radial bins as the observed profile. All the models correctly reproduce the observational trend of a slight increase in $\sigma_{\rm los}$ with radius. However, the two models with the largest virial mass (one NFW, one Burkert) produce profiles which are rising rather steeply at the last measured point. This could present a problem if the result of @mun05, that the $\sigma_{\rm los}$ profile in the outskirts of Draco is almost flat, is confirmed with a larger sample of stars. For the less massive models, the line-of-sight velocity dispersion profiles are levelling off at the last measured point, which is more in line with the results of @mun05. Assuming that the best-fitting stellar models follow closely the isothermal sequence, from Figure \[fig3\] we can derive new (slightly better than in the previous section) constraints on Draco's DM halo parameters: $\log m_{\rm vir}\simeq 7.9\dots 9.7$ (for both NFW and Burkert halos), $z=1.8\dots 10$, $r_s=0.21\dots 3.1$ kpc, $\log \rho_s =7.0\dots 8.8$ (for NFW halos), and $z=6.8\dots 11$, $r_s=0.18\dots 1.4$ kpc, $\log \rho_s =8.1\dots 9.0$ (for Burkert halos).
The interesting result is that if [*cosmological halos have a Burkert-like flat-cored DM density profiles, then Draco should have formed before the end of the reionization of the universe at $z\sim 6.5$*]{}. Evolution in the Milky Way Potential ==================================== Possible Orbits in the Galactic Potential {#Orbits} ----------------------------------------- To carry out tidal stripping simulations for Galactic satellites, it is of principal importance to know reasonably well the shape of the gravitational potential of the Milky Way. One popular Milky Way model often used to calculate orbits of Galactic globular clusters and dwarf spheroidals is that of @joh99. This model consists of three components: (1) a disk represented by @miy75 potential with the mass $1.0\times 10^{11}$ M$_\odot$, radial scale-length 6.5 kpc, and scale-height 0.26 kpc, (2) a spherical bulge with a @her90 potential, mass $3.4\times 10^{10}$ M$_\odot$, and scale-length 0.7 kpc, and (3) an isothermal halo with $\sigma=128$ km s$^{-1}$ and a flat core of size 12.0 kpc. The model was designed to reproduce the observed flat Galactic rotation curve between 1 and 30 kpc. There are two major disadvantages of the above model for our work. Firstly, it is axisymmetric, which adds an additional degree of freedom to our already multi-dimensional problem. (Orbits in axisymmetric potentials depend on all three components of the space velocity vector of the satellite, whereas in spherical potentials orbits depend only on the radial, $V_r$, and tangential, $V_t$, space velocity components.) Secondly, the DM halo of @joh99 is an isothermal sphere with the rotation curve asymptotically approaching $V_c=181$ km s$^{-1}$ as $r\rightarrow\infty$, whereas in the cosmological halos the rotation curves are declining in the outskirts. We decided to use a simple static NFW potential, $$\label{pot} \Phi=-4\pi G\varrho_s R_s^3 \ln(1+R/R_s) / R,$$ as the Milky Way model for our project. The scaling radius $R_s$ and density $\varrho_s$ of an NFW halo can be determined from the virial mass $M_{\rm vir}$ and concentration $C$. These quantities are still not very well known for the Milky Way. Traditionally, one uses different Galactic objects (stars, gas, globular clusters, dwarfs spheroidals) with known line-of-sight velocity and, in some cases, proper motion as kinematical tracers of the Galactic potential, with different authors obtaining quite different results. The favored model of @kly02, for example, has a virial mass of $10^{12}$ M$_\odot$ and concentration $10\dots 17$. Other recent NFW halo based models have $M_{\rm vir}=(0.7-1.7)\times 10^{12}$ M$_\odot$, $C=5\dots 12$ [@car05], and $M_{\rm vir}=(0.6-2.0)\times 10^{12}$ M$_\odot$, $C=18$ [@bat05]. Non-NFW models of @sak03 give larger values for the total mass of the Milky Way: $M_{\rm vir}=(1.5-3.0)\times 10^{12}$ M$_\odot$ if Leo I is gravitationally bound to the Galaxy, and $M_{\rm vir}=(1.1-2.2)\times 10^{12}$ M$_\odot$ if not. We adopted an intermediate value for the Galactic virial mass, $M_{\rm vir}=1.5\times 10^{12}$ M$_\odot$. The median concentration of cosmological halos with such mass at $z=0$ is $C=13.2$ [@bul01]. The halo scaling parameters are then $R_s=22.6$ kpc and $\varrho_s=6.0\times 10^6$ M$_\odot$ kpc$^{-3}$, and the virial radius is $R_{\rm vir}=298$ kpc. The radial tidal acceleration $-d^2\Phi/dr^2$ of our NFW halo is comparable to that of the composite model of @joh99. 
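The adopted Galactic potential and the global quantities quoted above are straightforward to reproduce (a sketch of ours):

\begin{verbatim}
import numpy as np

G = 4.30091e-6            # kpc (km/s)^2 / Msun
RS, RHOS = 22.6, 6.0e6    # Milky Way NFW scale radius (kpc), density (Msun/kpc^3)

def phi_mw(R):
    """Static Milky Way potential of eq. [pot], in (km/s)^2."""
    return -4.0 * np.pi * G * RHOS * RS**3 * np.log(1.0 + R / RS) / R

def m_mw(R):
    """Enclosed Milky Way mass in Msun."""
    x = R / RS
    return 4.0 * np.pi * RHOS * RS**3 * (np.log(1.0 + x) - x / (1.0 + x))

C = 13.2
print(f"R_vir = {C * RS:.0f} kpc, M_vir = {m_mw(C * RS):.2e} Msun")
# -> R_vir ~ 298 kpc and M_vir ~ 1.5e12 Msun, as adopted in the text
\end{verbatim}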
As one can see in Figure \[fig7\], the tidal acceleration profile for the NFW halo is located between the two extreme profiles for the @joh99 model (in the Galactic plane and along the polar axis) down to a radius of $\sim 10$ kpc. We obtained the pericentric and apocentric distances, $R_p$ and $R_a$, for closed Draco’s orbits in the potential given by equation (\[pot\]) by solving numerically the following non-linear equation [@bin87]: $$\frac{1}{R^2}+\frac{2[\Phi(R)-\Phi(R_0)]-V_r^2-V_t^2}{R_0^2 V_t^2} = 0,$$ where $R_0$, $V_r$, and $V_t$ are the Draco’s current distance from the Galactic center, radial velocity, and tangential velocity, respectively. If proper motion is known, the two space velocity vector components $V_r$ and $V_t$ can be calculated using the procedure outlined in Appendix. The radial orbital period is obtained by solving numerically the following integral [@bin87]: $$P=2\int_{R_p}^{R_a}\frac{dR}{\sqrt{V_r^2+V_t^2-2[\Phi(R)-\Phi(R_0)]-R_0^2V_t^2/R^2}}.$$ @sch94 published the only available measurement of the proper motion of Draco: $\mu_\alpha\cos\delta=0.6\pm 0.5$ mas yr$^{-1}$ and $\mu_\delta=1.1\pm 0.5$ mas yr$^{-1}$. (We adopted the larger value for the uncertainty from the text of their paper.) When we started this project, we hoped that the measurements of @sch94 could be used to place useful constraints on possible Draco’s orbits in the Milky Way potential. Unfortunately, this is not the case. This can be seen from Figure \[fig5\]a, which shows the proper motion measurements of @sch94 and the locus of the possible proper motion vectors resulting in a closed orbit in the Milky Way potential given by equation (\[pot\]). (We assumed that the halo density drops to zero beyond the virial radius $r_{\rm vir}$, and took into account the uncertainties in the distance to Draco, $d=82\pm 6$ kpc, and its line-of-sight velocity, $V_{\rm los}=-293\pm 2$ km s$^{-1}$, which were taken from @mat98.) One can see that the observational results are virtually inconsistent with Draco moving along a bound orbit around the Milky Way. More quantitatively, the chance for a bound orbit with the radial period $P<8$ Gyr is only 5.5% (or 7.4% for any period, which includes periods much longer than the Hubble time). This is not surprising, as the proper motion measurements of @sch94 imply that Draco moves in the Galactic halo with a staggering speed of $610\pm 190$ km s$^{-1}$ (one sigma error bars), with no realistic Milky Way model being able to keep it gravitationally bound. Given that Draco appears to be a pretty normal dwarf spheroidal galaxy and that the dwarf spheroidals strongly concentrate toward the two large spirals in the Local Group [@mcb04], the Milky Way and the M31 galaxy, it seems to be very unlikely that this dwarf moves along an unbound orbit around our Galaxy, and just by chance happened to be at the present small distance from the Galactic center. We assume instead that Draco moves on a bound orbit, with $P\lesssim 8$ Gyr, and that the proper motion measurements of @sch94 are wrong. Proper motion measurements of the outer Galactic halo objects based on heterogeneous collection of photographic plates spanning a few decades, such as those of @sch94, are notoriously difficult to correct for all possible sources of systematic errors. 
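Given the potential above, the pericentric and apocentric distances and the radial period follow from the two relations quoted earlier in this section. The sketch below (ours) solves them with a standard root finder and quadrature; the integrable square-root singularities of the period integral at the turning points are simply avoided by a small inward offset of the limits, which is adequate for an illustration but would be handled more carefully in a production code. A bound orbit with apocenter inside the outer bracket is assumed.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

G = 4.30091e-6                     # kpc (km/s)^2 / Msun
RS, RHOS = 22.6, 6.0e6             # adopted Milky Way NFW parameters

def phi_mw(R):
    """Milky Way potential of eq. [pot], in (km/s)^2."""
    return -4.0 * np.pi * G * RHOS * RS**3 * np.log(1.0 + R / RS) / R

def peri_apo(R0, Vr, Vt):
    """Pericentric and apocentric distances: roots of
    1/R^2 + [2(Phi(R) - Phi(R0)) - Vr^2 - Vt^2] / (R0^2 Vt^2) = 0."""
    f = lambda R: (1.0 / R**2
                   + (2.0 * (phi_mw(R) - phi_mw(R0)) - Vr**2 - Vt**2)
                   / (R0**2 * Vt**2))
    Rp = brentq(f, 1e-2, R0 * (1.0 - 1e-9))
    Ra = brentq(f, R0 * (1.0 + 1e-9), 2.0e3)
    return Rp, Ra

def radial_period(R0, Vr, Vt, eps=1e-6):
    """Radial period in Gyr from the quadrature quoted in the text."""
    Rp, Ra = peri_apo(R0, Vr, Vt)
    radicand = lambda R: (Vr**2 + Vt**2 - 2.0 * (phi_mw(R) - phi_mw(R0))
                          - R0**2 * Vt**2 / R**2)
    P, _ = quad(lambda R: 2.0 / np.sqrt(radicand(R)),
                Rp * (1 + eps), Ra * (1 - eps))
    return Rp, Ra, P * 0.9778      # kpc / (km/s)  ->  Gyr

# example: a bound orbit passing through R0 = 82 kpc with Vr, Vt in km/s
print(radial_period(82.0, 50.0, 100.0))
\end{verbatim}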
As another example, the proper motion measurement of the same authors [@sch94] for Ursa Minor is more than one sigma away from each of the three more recent results, including two derived with the help of the Hubble Space Telescope [@pia05 their Fig. 16]. Fortunately, even in the case when the proper motion is not known, some constraints can be placed on the orbital elements of Draco. We noticed that the pericentric and apocentric distances for all the bound orbits from Figure \[fig5\]a are not completely independent, and follow closely the solution of the equation $$\label{rarp} \left[R_p^2 + (7000/R_a)^2\right]^{1/2} = 76 \mbox{~kpc},$$ which is a circle with the radius of 76 kpc in the coordinates $x=R_p$, $y=7000/R_a$ (see Figure \[fig5\]b). (Here $R_p$ and $R_a$ are in kpc.) As one can see in Figure \[fig5\]b, despite the observational uncertainties in $d$ and $V_{\rm los}$ and the fact that the proper motion is not known, Draco's bound orbits stay in a relatively narrow zone near the solution of equation (\[rarp\]). Neglecting the spread of the points around the circle, one can then assume that Draco's bound orbits form a one-dimensional family of models, with both $R_p$ and $R_a$ depending only on the polar angle $\theta$, with $\tan\theta=y/x=7000/(R_p R_a)$.

Table \[tab2\]. Adopted Draco orbits in the Milky Way potential.

\begin{tabular}{cccccccc}
\hline
Orbit & $\theta$ & $R_p$ & $R_a$ & $R_a/R_p$ & $P$ & $V_{t,a}$ & $f_P$\\
 & (rad) & (kpc) & (kpc) & & (Gyr) & (km s$^{-1}$) & \\
\hline
1 & 0.37 & 70.1 & 260 & 3.7 & 4.86 & 78.3 & 0.25\\
2 & 0.61 & 62.4 & 162 & 2.6 & 2.99 & 103.9 & 0.47\\
3 & 0.85 & 51.1 & 122 & 2.4 & 2.19 & 112.1 & 0.76\\
4 & 1.09 & 35.7 & 104 & 2.9 & 1.71 & 99.9 & 0.64\\
5 & 1.33 & 18.5 & 96.9 & 5.2 & 1.42 & 65.5 & 0.57\\
6 & 1.54 & 2.47 & 92.1 & 37 & 1.23 & 11.5 & 0.54\\
\hline
\end{tabular}

We chose six different values of the polar angle $\theta$ (shown as radially divergent lines in Figure \[fig5\]b, and listed in Table \[tab2\]) to obtain a sequence of bound Draco orbits. We used the plots shown in Figure \[fig6\] to estimate the typical values of $R_p$ and $R_a$ corresponding to the particular values of the angle $\theta$. In Table \[tab2\] we list the parameters of the derived Draco orbits. The range of covered orbital periods is $\sim 1-5$ Gyr. All the orbits except for the last one have a pericenter at distances where the tidal disruption properties of our NFW halo are comparable to those of the composite Milky Way model of @joh99 [see Figure \[fig7\]]. Orbit 6 was designed to explore the extreme case of a virtually radial orbit. The apocenter of the orbit with the longest period is at 260 kpc, which is at the very edge of the virialized Galactic halo. As one can see from Table \[tab2\], the derived sequence of orbits is not trivial, with the orbits being more eccentric for the longest and shortest periods, and becoming rounder for intermediate periods of $\sim 2$ Gyr. It is interesting to note that six out of eight, or 75%, of the “classical” Galactic dwarf spheroidals (we exclude Sagittarius as being currently tidally disrupted) are located in a rather narrow interval of Galactocentric distances, $R=70\dots 140$ kpc [@mat98]. This includes Draco, and excludes Leo I and II. In Table \[tab2\] we list for each orbit the fraction of time $f_P$ that Draco spends in this interval of Galactocentric distances. Statistically speaking, if Draco is a “normal” dwarf spheroidal, $f_P$ should be around 0.75. Orbit 3 is in this sense the likeliest orbit for Draco, and orbit 1 (and probably 2) are rather unlikely.
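The one-parameter family defined by equation (\[rarp\]) can also be written explicitly: for a polar angle $\theta$ the corresponding orbit has $R_p = 76\cos\theta$ and $R_a = 7000/(76\sin\theta)$ (in kpc). The toy check below (ours) recovers values close to the $R_p$ and $R_a$ entries of Table \[tab2\] (not identical, because the tabulated values were read off Figure \[fig6\] rather than taken from the circle itself):

\begin{verbatim}
import numpy as np

for theta in (0.37, 0.61, 0.85, 1.09, 1.33, 1.54):        # rad, as in Table 2
    Rp = 76.0 * np.cos(theta)                              # kpc
    Ra = 7000.0 / (76.0 * np.sin(theta))                   # kpc
    print(f"theta = {theta:4.2f}:  R_p = {Rp:5.1f} kpc,  R_a = {Ra:5.1f} kpc")
\end{verbatim}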
In Figure \[fig3\] we show as long-dashed lines the halos with the analytical tidal radius $r_{\rm tid}$ equal to 0.85 kpc, which is the radius where the density of Draco stars on the map of @ode01 is two sigma above the noise level. We calculated $r_{\rm tid}$ from $$\frac{m(r_{\rm tid})}{r_{\rm tid}^3} = \left[ 2-\frac{R}{M(R)}\frac{\partial M}{\partial R}\right] \frac{M(R)}{R^3} \label{eq_rtid}$$ [@hay03], where $M(R)$ and $m(r)$ are the enclosed mass for the Milky Way and the satellite, respectively. Even at this large distance from the Draco’s center @ode01 did not see any sign of Draco being tidally distorted by the Milky Way tidal field. From Figure \[fig3\] one can see that all our best-fitting models (solid circles) have tidal radii larger than 0.85 kpc (even for the worst orbit with $R_p=2.5$ kpc). One would naively conclude that the tidal forces are not important for our Draco models. But the reality is more complicated than that. Numerical $N$-body simulations of the evolution of subhalos orbiting in the host halo showed that removal of DM from the outskirts of the satellite results in the expansion of the satellite, which reduces its average density and exposes more DM to the action of the tidal field [@hay03; @kaz04]. Burkert halos have lower average density than NFW halos, and as a result are even easier to disrupt tidally [@mas05b]. Moreover, even well inside the tidal radius, distribution of bound stars can be noticeably distorted by the tidal filed of the Milky Way, which would be at odds with the Draco observations of @ode01. To correctly describe the above effects, we had to resort to high-resolution $N$-body simulations of the tidal disruption of our best-fitting composite (DM $+$ stars) models from Table \[tab1\] in the static potential of the Milky Way. This will be described in the following sections. Isolated Models {#isol} --------------- To run the $N$-body simulations of the tidal stripping of Draco in the static gravitational potential of the Milky Way, we used the parallel tree-code Gadget-1.1 [@spr01]. We generated equilibrium DM halos for our eight models (see Table \[tab1\]) using the prescription in @mas05a. The essence of this approach is to use explicitly the distribution function (DF) to set up the initial distribution of velocity vectors of DM particles (isotropy was assumed). It was argued [@kaz04] that using DFs explicitly is far superior to traditional local Maxwellian approximation for cuspy models such as NFW. To reduce the boundary effects, we truncate DM halos at a distance of two virial radii $r_{\rm vir}$ from the center. This results in virtually no evolution for isolated models within the virial extent of the halos after 10 Gyr. We chose the gravitational softening lengths (separately for DM and stars) to be commensurable with the average interparticle distance: $\varepsilon=0.77 r_h N^{-1/3}$ [@hay03]. Here $r_h$ is the half-mass radius of the system. For stellar particles, $\varepsilon_*=8.6$ pc. For DM particles, $\varepsilon_{\rm DM}$ values are listed in Table \[tab1\]. To set up the initial distribution of stars inside the DM halo, we use the same pseudo-Maxwellian approximation we used to measure the projected line-of-sight velocity dispersion profiles in § \[st\_model\] (step 3). The only difference is that now we use a larger number of stellar particles, $N_*=30,000$, to reduce the Poisson noise in the observable properties of the models. Our baseline number of DM particles is $N_{\rm DM}=10^5$. 
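The analytic tidal radius of equation (\[eq\_rtid\]) used above can be obtained with a one-dimensional root search. A sketch under our assumptions (an NFW satellite, the adopted Milky Way NFW halo, and a model-N3-like parameter choice; not the authors' code):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def m_nfw(r, rho_s, r_s):
    """Enclosed NFW mass: 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], x = r/r_s."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

# adopted Milky Way halo and a model-N3-like satellite (both NFW here)
MW  = dict(rho_s=6.0e6,    r_s=22.6)       # Msun/kpc^3, kpc
SAT = dict(rho_s=10**8.1,  r_s=10**-0.3)

def tidal_radius(R, dR=1e-3):
    """Solve m(r_tid)/r_tid^3 = [2 - (R/M) dM/dR] M(R)/R^3."""
    M = m_nfw(R, **MW)
    dMdR = (m_nfw(R + dR, **MW) - m_nfw(R - dR, **MW)) / (2.0 * dR)
    rhs = (2.0 - R / M * dMdR) * M / R**3
    return brentq(lambda r: m_nfw(r, **SAT) / r**3 - rhs, 1e-3, 50.0)

print(f"r_tid at R = 51 kpc (orbit 3 pericenter): {tidal_radius(51.0):.1f} kpc")
\end{verbatim}

For this combination the root lies at a few kpc, comfortably outside the 0.85 kpc stellar extent, consistent with the long-dashed lines in Figure \[fig3\].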
Test runs showed that in the most massive models (N1,2 and B1,2) a much larger number of DM particles is required to prevent the artificial evolution of the stellar cluster at the center of the halo. We observed the central stellar density being significantly reduced (by more than an order of magnitude in the worst cases) in the low-resolution models. This effect does not depend on the number of stellar particles, and becomes less severe for larger $N_{\rm DM}$ and/or $\varepsilon_{\rm DM}$. As stars are only a trace population in our models, the most obvious explanation for the above artifact is that stars get scattered off the imperfections of the granulated gravitational potential of the DM halo. Setting $N_{\rm DM}=10^6$ for the models N1 and B1, and $N_{\rm DM}=3.16\times 10^5$ for the models N2 and B2 resulted in an acceptable level of the artificial evolution of the surface brightness profile $\Sigma(r)$ of the stellar cluster. In all our isolated models, the change in the central surface brightness was negligible after 10 Gyr of evolution (with the only exception of the model B1, where the change was $-0.4$ dex). The radius corresponding to the outermost reliable Draco isodensity contour of @ode01 with $\Sigma=12,060$ M$_\odot$ kpc$^{-2}$ ($r=0.85$ kpc initially) increased by a mere $0.03-0.05$ dex. Unfortunately, we observed significant evolution in the velocity anisotropy profiles $\eta(r)$ in our isolated models, especially at the center of the halo. All our best-fitting stellar models have a strong radial anisotropy at the center (see Table \[tab1\]). In our $N$-body simulations with live DM halos the stellar core becomes close to isotropic within the first Gyr, suggesting that the particle-particle interactions are not the main culprit, as such effects would take Gyrs to manifest themselves. This became more apparent after we ran simulations for all our models from Table \[tab1\] with $N_*=30,000$ stellar particles and a [*static*]{} DM potential. In this setup, close encounters between particles are extremely rare due to the very low stellar density. In the static models we see the same effect (of slightly smaller magnitude) as in the runs with live DM halos: central radial anisotropy reduces almost to zero after a few crossing times. We do not know the exact reason for the above effect. The possible explanations are (1) the (unknown) DF corresponding to our choice of anisotropy, stellar density, and gravitational potential profiles is well behaved (positive everywhere), but the pseudo-Maxwellian approximation we use to set up the stellar velocities is not good for systems with strong radial anisotropy at the center, and/or (2) the DF is not physical (i.e. is negative) at the center. We also want to point out that a real stellar system cannot have a radial anisotropy all the way to its center, as the radial velocity dispersion diverges in that case as $r\rightarrow 0$. To demonstrate this, we write down the solution of the Jeans equation (\[Jeans2\]) in the case of constant anisotropy [@lok01] $$\label{const} \sigma_r^2 = \frac{1}{r^{\zeta}\rho_*}\int\limits_r^\infty r'^\zeta \rho_* \frac{d\Phi}{dr} dr',$$ where the constant $\zeta\equiv 2\beta=4\eta/(1+\eta)$ is positive for the case of radial anisotropy. The integral in equation (\[const\]) is non-zero for $r=0$, resulting in divergent $\sigma_r$ at the center of the system. Obviously, in a real object radial anisotropy should break down at some radius, changing into isotropy or tangential anisotropy.
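The divergence implied by equation (\[const\]) is easy to see numerically. The sketch below evaluates the constant-anisotropy solution for an assumed Plummer stellar density in an assumed NFW potential (illustrative profiles and parameter values only, not our best-fitting Draco models); $\sigma_r$ grows without bound as $r\rightarrow 0$ because the integral stays finite while $r^{\zeta}\rho_*\rightarrow 0$.

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / M_sun

# Assumed, purely illustrative profiles:
def rho_star(r, a=0.2):                   # Plummer stellar density (unnormalized)
    return (1.0 + (r / a) ** 2) ** (-2.5)

def dphi_dr(r, m_s=3.0e8, r_s=0.7):       # NFW halo: dPhi/dr = G m(r) / r^2
    x = r / r_s
    return G * m_s * (np.log(1.0 + x) - x / (1.0 + x)) / r ** 2

def sigma_r(r, eta=0.5):
    """Constant-anisotropy solution (eq. [const]) with zeta = 4*eta/(1+eta) > 0."""
    zeta = 4.0 * eta / (1.0 + eta)
    integral, _ = quad(lambda rp: rp ** zeta * rho_star(rp) * dphi_dr(rp),
                       r, np.inf, limit=200)
    return np.sqrt(integral / (r ** zeta * rho_star(r)))

for r in (1e-3, 1e-2, 1e-1, 1.0):         # sigma_r blows up as r -> 0
    print(f"r = {r:6.3f} kpc   sigma_r = {sigma_r(r):8.2f} km/s")
```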
It is also hard to expect radially divergent velocity dispersion in a stellar system with a flat core, especially in the case of a flat-cored (Burkert) DM halo. We want to emphasize that even though we cannot guarantee that the stellar models in Table \[tab1\] are physical (especially at the center), in our simulations they quickly relax to a stable configuration, with the surface brightness profile virtually identical to the observed one, and the line-of-sight velocity dispersion profile $\sigma_{\rm los}(r)$ still consistent with the observations. In Figure \[fig8\] we plot $\sigma_{\rm los}(r)$ profiles for our stellar models in a static DM potential after 10 Gyr of evolution, integrated over the same six radial bins as the observational data of @wil04. As one can see, the profiles for the evolved stellar clusters are a reasonably good match to the observations. (The $\chi^2$ measure becomes a factor of two larger in the evolved models.) We also used the isolated runs to estimate the accumulated total energy errors in our models. In the case of live DM halos, the total energy for DM $+$ stars is conserved to better than $\sim 0.1$% (typically $\sim 0.05$%) after 10 Gyr of evolution. To estimate the total energy errors for stars only, we measured the difference in stellar total energy at $t\sim 3$ Gyr (when the cluster has reached a steady state configuration) and at $t=10$ Gyr in our static DM halo simulations. The difference was again $\lesssim 0.1$%.

Tidal Stripping Simulations
---------------------------

We simulated the evolution of the eight stars$+$DM models of Draco from Table \[tab1\] orbiting for 10 Gyr in the static spherically symmetric potential of the Milky Way given by equation (\[pot\]). To run the simulations, we used the parallel version of the multi-stepping tree code Gadget [@spr01]. The number of stellar and DM particles and the corresponding gravitational softening lengths were the same as in the isolated models described in § \[isol\]. Each Draco model was simulated for six different orbits from Table \[tab2\]. Initially, Draco was located at the apocenter of its orbit. Altogether we made 48 tidal stripping simulations. We refer to the models by both the model number from Table \[tab1\] and the orbit number from Table \[tab2\], e.g. model N1-5. Most of the results we give below are for the latest moment of time when the dwarf was located at the current distance of Draco from the Galactic center of $\sim 82$ kpc (we do not make a distinction between the dwarf moving inward or outward), which took place $7.4-10$ Gyr from the beginning of the simulations ($9.4-10$ Gyr for the orbits $3-6$). All our models experienced tidal stripping to different degrees. As Figure \[fig9\]a shows, the DM mass of the gravitationally bound remnant is between 90% (model N5-1) and 0.1% (model N1-6) of the original mass at the end of the simulations. Three of our models became completely gravitationally unbound within 10 Gyr: B1-6 (after 3.2 Gyr and 3 pericenter passages), B2-6 (after 7.8 Gyr and 6 pericenter passages), and B1-5 (after 9.4 Gyr and 7 pericenter passages). In the latter case, the dwarf is still bound when it is located at $\sim 82$ kpc at the end of the simulations (when we compare the results of the simulations with the observed properties of Draco) – but only barely so. It is not surprising that all the unbound models have Burkert halos. Indeed, for a given mass and scaling radius these halos have a lower average density than NFW halos in the central area.
If truncated instantaneously at a certain radius, the total energy of the remnant becomes positive for Burkert halos at a radius $\sim 2.1$ times larger than for NFW halos [@mas05b]. Another result which can be explained is that the most massive halos are easier to strip and disrupt tidally than the less massive ones. All our models (both massive and of lower mass) have a comparable DM density within the observed extent of Draco (because of the virial theorem), so in the point mass approximation they should be equally susceptible to tidal forces. However, the point mass approximation (used to derive equation \[\[eq\_rtid\]\]) breaks down for our most massive halos, as their size becomes comparable to the pericentric distance, so the strongly non-linear components of the tidal force become important. Not surprisingly, our most massive halos on orbit 6 were either totally disrupted (model B1-6) or lost 99.9% of their original mass (model N1-6) after 10 Gyr of evolution. In all of our models a fraction of stars has been tidally stripped by the end of the simulations – even for orbit 1. In two cases (models B1-6 and B2-6), stars have become completely unbound by the time the dwarf was passing at the distance of $\sim 82$ kpc from the Galactic center for the last time. In other cases, the fraction of escaped stars was between $\sim 10^{-4}$ for the models B1-2,3,4 and 96.6% for the model N1-6 (see Figure \[fig9\]b). When analyzing the global properties of the stellar cluster at the end of the simulations, the most obvious result is the fact that the more disruptive the orbit is, the more the system is affected by the tidal shocks experienced near the pericenter of the orbit. (Orbits with a larger number from Table \[tab2\] are more disruptive for two reasons: they have a smaller pericentric distance and a shorter orbital period, so the number of pericentric passages in 10 Gyr is larger.) Dwarfs are puffed up by the tidal shocks, with the central line-of-sight velocity dispersion $\sigma_0$ becoming smaller (Figure \[fig9\]c), central surface brightness decreasing (not shown, but qualitatively similar to the $\sigma_0$ behavior), and the projected half-light radius becoming larger (except for the extreme case of orbit 6, when both tidal shocking and tidal stripping are important, see Figure \[fig9\]d). Our models are highly idealized when it comes to the long-term evolution of stellar tidal streams. On our most disruptive orbits 5 and 6, a significant fraction of stars becomes unbound over the course of the tidal evolution, with most of the tidally stripped stars following very closely the almost radial orbit of the dwarf. From the Sun's location, many of these stars project onto a rather small area of the sky inside or around the apparent location of the dwarf. This can be quite unphysical, as such cold tidal streams are not expected to survive for many gigayears in the Milky Way halo due to its triaxiality and clumpiness (as predicted by cosmology) and due to interaction with baryonic structures in the Galaxy (stellar disk with spiral arms, stellar bar, giant molecular clouds). To circumvent this difficulty, in this section we discuss all the observable model properties for two extreme cases: (a) all stars are taken into account, and (b) only the stars (both bound and unbound) located within the spatial distance of 5 kpc from the center of the dwarf are considered.
In the latter case, the spatial truncation of the tidal stream ensures that only the most recently stripped stars are used for calculating the observable properties of the models. On a more detailed level, the impact of both tidal shocks and tidal stripping and heating can be seen in Figure \[fig11\]. Here we show the surface brightness profiles (panel a) and line-of-sight velocity dispersion profiles (panel b) for the models N1-1,4,5,6. The most obvious effect in Figure \[fig11\] is the global decrease of $\sigma_{\rm los}$ for orbits with smaller pericentric distance (excluding orbit 6), accompanied by a small decrease in the central surface brightness and a slight radial expansion of the system. No obvious tidal truncation or tidal heating is observed in the outskirts of the cluster. These results were obtained for the stars located within the spatial distance of 5 kpc from the center of the dwarf, but in the case when all stars are included the profiles are practically the same. Orbit 6 is a completely different case: one can see the $\sigma_{\rm los}$ being dramatically inflated in the outskirts of the dwarf due to the superposition of tidally removed stars on the dwarf (Figure \[fig11\]b). This effect becomes even more dramatic when we include all the stars, with the line-of-sight velocity dispersion reaching $50-70$ km s$^{-1}$ in the outermost observed bin. As we discussed in the previous paragraph, it is quite unlikely that old tidal streams can stay so well collimated for many gigayears to produce the above effect. But even for the conservative case of considering only freshly stripped stars the steeply growing $\sigma_{\rm los}$ profile for the model N1-6 is grossly inconsistent with the observed profiles of Draco and other dwarf spheroidals, where $\sigma_{\rm los}$ is either not changing or decreasing at large distances from the center. The change in the surface brightness profile for the model N1-6 is also quite dramatic (Figure \[fig11\]a, long-dashed line). The outer $\Sigma(r)$ slope becomes very shallow, which is inconsistent with the observed profiles for Galactic dSphs. The slope becomes even more shallow when we include stars beyond the spatial distance of 5 kpc from the center of the dwarf. Similar behavior (in terms of shallow outer $\Sigma$ profiles and inflated $\sigma_{\rm los}$ in the outskirts of the dwarf) is also observed in other models on orbit 6 (both bound and unbound). The possible exceptions are the models N4-6 and N5-6, which are consistent with the observations of Draco when we consider only freshly stripped stars. We conclude that it is unlikely that Draco and other Galactic dSphs have experienced tidal interactions as dramatic as our models on orbit 6. To facilitate the comparison of our models with the observed stellar isodensity contours of Draco from @ode01, we designed the following projection algorithm. (a) The frame of reference is rotated to place the center of the dwarf on the negative side of the axis $Z$, with the axis $X$ located in the orbital plane of the dwarf and pointing in the direction of the orbital motion.
(b) As the direction of the proper motion of Draco is not known (see discussion in § \[Orbits\]) and the current angle between the vector connecting Draco with the center of the Galaxy and the vector connecting Draco with the Sun is $\varphi=5\fdg 7$, we consider three extreme cases of the possible vantage point location which should encompass the whole range of projected model appearances: view from the Galactic center ($\varphi=0\degr$), view from a point in the orbital plane of Draco located at 82 kpc from the dwarf with $\varphi=5\fdg 7$, and view from a point in the plane which is perpendicular to the orbital plane of Draco at 82 kpc and with $\varphi=5\fdg 7$. We found that due to the fact that for Draco the angle $\varphi$ is small (which is also the case for other Galactic dSphs with the exception of Sagittarius), the observable properties of our models ($\Sigma$ and $\sigma_{\rm los}$ profiles and surface brightness maps) do not depend noticeably on the vantage point we choose – especially for the case when we only include freshly stripped stars. (c) We exclude stars with $z>-8.5$ kpc to avoid contamination of our maps with local tidal stream overdensities which would be discarded by observers as local Galactic stars. (d) We perform perspective projection of the stellar particles smoothed with a Gaussian beam which has a fixed physical size (either 0.15 or 0.5 kpc) and brightness inversely proportional to the square of the distance of the particle from the observer. This procedure makes the “surface brightness” of individual particles independent of the distance from the observer, which is appropriate for spatially resolved clumps of the tidal stream. The most interesting result obtained from the analysis of the surface brightness maps is the lack of the classical S-shaped tidal tails in our models. In the case of significant tidal stripping (Figure \[fig12\], two different vantage points are shown) the stellar isodensity contours change from being spherical near the center of the dwarf to being increasingly elliptical and often off-centered at larger distances. In the cases with less severe stripping, outer contour boxiness is observed in some of the models. For orbits 1-3 the surface density of the tidally removed stars is so low that it is hard to draw any conclusion as to the shape of the tidally distorted isodensity contours (except for the fact that the galaxy is observed against the background and/or foreground of a few-degrees-wide belt of tidally stripped stars). The explanation for the above effect lies in the facts that the tidal stripping is significant only for very eccentric (almost radial) Draco orbits with $R_p\lesssim 20$ kpc, and that currently Draco is located $\gtrsim 10$ kpc away from the apocenter of its orbit (see Table \[tab2\]). Under these circumstances, the line of sight practically coincides with the direction of the tidal elongation of Draco, with both tidal tails seen edge-on. This is an interesting result, as it suggests that the lack of S-shaped isodensity contours in the outskirts of the Galactic dSphs cannot be used to support a claim that the dwarf has not experienced significant tidal stripping in the gravitational potential of the Milky Way. It appears that the presence (absence) of tidally heated stars in the outskirts of dSphs is a much more sensitive indicator of tidal stripping being significant (not significant).
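Step (d) of the projection algorithm above can be illustrated with a short sketch. It uses a small-angle (flat-sky) approximation instead of a full perspective projection, and all names and parameter values are ours and purely illustrative; the key property it reproduces is that a fixed physical beam size combined with inverse-square weighting makes the surface brightness of a particle independent of its distance.

```python
import numpy as np

def surface_brightness_map(pos, fov_deg=3.0, npix=128, beam_kpc=0.15, m_star=1.0):
    """Sketch of projection step (d): each stellar particle is smeared with a
    Gaussian of fixed physical size (beam_kpc) and weighted by 1/d^2, so its
    surface brightness does not depend on its distance from the observer.
    pos : (N, 3) particle coordinates [kpc] in the observer's frame, with the
          dwarf roughly along the -Z axis (rotation steps a-c assumed done)."""
    d = np.linalg.norm(pos, axis=1)
    tx = np.degrees(pos[:, 0] / d)                 # flat-sky angular coordinates
    ty = np.degrees(pos[:, 1] / d)
    grid = np.linspace(-fov_deg / 2, fov_deg / 2, npix)
    gx, gy = np.meshgrid(grid, grid)
    img = np.zeros((npix, npix))
    for x0, y0, di in zip(tx, ty, d):
        sig = np.degrees(beam_kpc / di)              # fixed physical -> angular size
        amp = m_star / di**2 / (2 * np.pi * sig**2)  # 1/d^2 flux spread over the beam
        img += amp * np.exp(-((gx - x0)**2 + (gy - y0)**2) / (2 * sig**2))
    return grid, img
```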
We measured for all our models the critical surface brightness $\Sigma_c$ when the outer isodensity contours become noticeably distorted due to tidal effects. The two models which became gravitationally unbound by the end of the simulations (B1-6 and B2-6) show very distorted contours all the way to the center of the dwarf (see Figure \[fig15\]). They also have dramatically inflated line-of-sight stellar velocity dispersion profiles (up to $50-100$ km s$^{-1}$), and very shallow outer surface brightness slopes. All of the above makes these two models (and most probably any other unbound model for a dSph) completely inconsistent with the observed properties of the Galactic dSphs. Among the bound models, the one which has experienced the most dramatic tidal stripping (model N1-6) is also the one which has the largest value for $\Sigma_c$: $\sim 4\times 10^4$ M$_\odot$ deg$^{-2}$, which would correspond to $\sim 3$ sigma isodensity contour of @ode01. In all other cases, the value of $\Sigma_c$ is significantly lower: $(7.4-17)\times 10^3$ M$_\odot$ deg$^{-2}$ for the models on the orbit 6, and less than $10^3$ M$_\odot$ deg$^{-2}$ for other orbits. For the orbits 1, 2, and 3 $\Sigma_c$ was too small to be measured. These values of $\Sigma_c$ are significantly lower (by a factor of $\gtrsim 1.5$) than the two-sigma detection limit of @ode01. The angular radius of the smallest distorted isodensity contour is $\sim 1^\circ$ for the orbit 6 (with the exception of the models B1, B2, and N1), and $\gtrsim 1\fdg 8$ for other orbits. One very interesting special case is that of the model B1. This model has a halo with a flat DM core of size $r_s\sim 1.4$ kpc (see Table \[tab1\]), so all the observed extent of Draco is within this large harmonic core. In Figure \[fig13\] we show the inner (corresponding to the observed part of Draco) isodensity contours for the models B1-4 and B1-1. Unlike all other models (both NFW and Burkert), here one can see a relatively strong tidal distortion of the contours at distances $\lesssim 0\fdg5$ from the center of the dwarf. The distortions are substantial even for orbit 1 (Figure \[fig13\]b) which has the largest pericentric distance of 70 kpc. The distorted contours are both elliptical and non-concentric. Interestingly, the effect is minimal for the two orbits which have the smallest eccentricity – orbits 2 and 3 with $R_a/R_p\leqslant 2.6$. The distortions are strong for the more eccentric orbits 1, 4, and 5. It appears that it is the variability of the tidal force rather than its strength which is the primary governing factor for the above effect. As the observed stellar isodensity contours of Draco are very regular and concentric [@ode01], results of our simulations suggest that it is unlikely that Draco resides in a large DM harmonic core – unless it happened to move on a close to circular orbit with $R_p\sim 45-65$ kpc and $R_a/R_p\lesssim 2.6$. DISCUSSION ========== In this paper we presented a sequence of composite (stars $+$ DM) models for Draco, listed in Table \[tab1\], which satisfy all the available observational and cosmological constraints. We showed that for the most of physically plausible orbits of Draco in the Galactic potential the tidal forces could not modify the observable properties of our models appreciably after 10 Gyr of evolution. Both “standard” cuspy NFW DM halos and Burkert halos with a flat core provide a reasonably good description of Draco. The properties of a Burkert halo are better constrained by our analysis. 
Most importantly, if Draco has a flat core, it should have formed at or before the end of the reionization of the universe: $z\gtrsim 6.5$. Tidal stripping simulations put even stronger constraints on the flat-core case: we showed in the previous section that our most massive Burkert model B1 would be consistent with the observations only for a very limited range of possible Draco orbits: orbit 6 is ruled out as the model becomes completely unbound with dramatically inflated $\sigma_{\rm los}$ and very shallow $\Sigma$ profiles, whereas for the orbits 1, 4, and 5 (and also 6) we observe significant distortion of inner stellar isodensity contours which is most definitely not consistent with the regular isodensity contour shape observed in Draco by @ode01. Only the lowest eccentricity orbits 2 and 3 are not ruled out by our analysis. An NFW halo is less constrained by the available observations of Draco: the halo formation redshift $z$ is anywhere between $\sim 2$ and $\sim 10$, whereas the initial virial mass could be between $\sim 10^8$ and $\sim 5\times 10^9$ M$_\odot$. In the smaller mass (and larger $z$) limit the halos are so sturdy that even for our most disruptive orbit 6 the observable parameters of the model can still be consistent with the Draco observations after 10 Gyr of tidal evolution. If Draco was accreted by the Milky Way more recently than 10 Gyr ago, the impact of the tidal forces would be even smaller. For more massive NFW halos and for all Burkert models the orbit 6 can be ruled out as the model predicts inflated line-of-sight velocity dispersion in the outskirts of the dwarf which would be at odds with observations. How strong is our case against Draco (and other dSphs) being a “tidal dwarf” – remnants of a dwarf galaxy which are not gravitationally bound at the present time? In the previous section we suggested that the fact that the line-of-sight velocity dispersion is dramatically (by up to an order of magnitude) inflated in our two “tidal dwarf” candidates, models B1-6 and B2-6, can be used to rule out the “tidal dwarf” explanation for Draco. Here we want to caution that a more detailed comparison between the model and observations is required to critically assess our conclusion. Our large estimates of $\sigma_{\rm los}$ were derived for stars with any line-of-sight velocity projected onto the dwarf disk and optionally restricted to lie within the spatial distance of 5 kpc from the dwarf’s center. Many of the high-velocity tidal tail stars responsible for inflating $\sigma_{\rm los}$ would be discarded by observers as “not belonging to the galaxy”. In Figure \[fig14\] we show the distribution of line-of-sight velocities $V_{\rm los}$ for the models B1-6 and B2-6. We show separately histograms for freshly stripped stars (solid lines) and for all stars projected on the disk of the dwarf (dashed lines). As one can see, the situation depends strongly on how recently the dwarf became unbound, and on longevity of the cold tidal streams in the Galactic halo. The model B1-6 became unbound many orbits (almost 7 Gyr) ago, and has a very wide distribution of $V_{\rm los}$ – even for freshly stripped stars. The model B2-6, on the other hand, became unbound only $\sim 2$ Gyr ago, and has more complex distribution of $V_{\rm los}$. 
In this model, the freshly removed stars are virtually all concentrated within a relatively narrow interval, with $\sigma_{\rm los}$ being inflated mainly due to the presence of one high velocity stellar particle with $V_{\rm los}\simeq 160$ km s$^{-1}$. Such stars will definitely be discarded by observers. When we consider all stars (dashed line in Figure \[fig14\]b), the distribution of $V_{\rm los}$ is much wider than in Galactic dSphs. In the case of the model B2-6, the situation thus sensitively depends on how long the tidal stream can stay collimated in the Milky Way potential. Another important piece of evidence against Draco being an unbound stream of stars is presented in Figure \[fig15\]. Here we show surface brightness maps for our two unbound models B1-6 and B2-6. One can see that the contours are irregular and not concentric – even near the center of the dwarf. This is in sharp contrast with the regular appearance of the observed isodensity contours in Draco [@ode01]. Pending the arrival of accurate proper motion measurements for Draco, let us be slightly more definitive in trying to determine the nature and cosmological significance of Draco by assuming that it is moving on orbit 3, which is the most probable one (see § \[Orbits\]). From Figure \[fig9\] one can then infer that if Draco is a cosmological halo, its current DM mass is between $7\times 10^7$ and $3\times 10^9$ M$_\odot$, the fraction of the tidally stripped stars is $<3$%, and the central line-of-sight velocity dispersion $\sigma_0$, central surface brightness $\Sigma_0$, and the half-light radius $r_h$ have changed due to tidal shocks by no more than $-0.07$, $-0.04$, and $0.03$ dex, respectively, in the last 10 Gyr. This orbit has $R_p=51.1$ kpc, placing it well outside of the Galactic disk. Stellar tidal tails produced by our models on this orbit are extremely weak, with the surface brightness sensitivity required to see the isodensity contours distorted due to tidal forces being better than $\sim 200$ M$_\odot$ deg$^{-2}$, or more than two orders of magnitude better than the observations of @ode01. The DM halo could be either NFW or Burkert, and was formed after $z\sim 11$. Our results then support either of the two recently proposed solutions to the “missing satellites” problem [@kly99; @moo99]: that the Galactic dSphs are the most massive subhalos predicted by cosmological simulations to orbit in the halo of a Milky Way sized galaxy [@sto02; @hay03], or that the Galactic dSphs are the halos which managed to form stars before the reionization of the universe was completed around $z\sim 6.5$ [@bul00]. Our analysis suggests that unfortunately there is not yet enough observational data to discriminate between the two above scenarios. Much better quality line-of-sight velocity dispersion profiles, deeper surface brightness maps, and ideally accurate proper motion measurements are required to make further progress in this direction. We would like to mention a few of the most important deficiencies of our tidal stripping model. The first one is due to our use of a spherically symmetric potential for the Milky Way. As a result, we ignore disk shocking, which can be very important for the orbits with $R_p\lesssim 20$ kpc. We tried to partly circumvent this deficiency by considering an orbit with an extremely small pericentric distance: orbit 6 with $R_p\simeq 2.5$ kpc.
Ideally, we would prefer to include the disk in our simulations, but this would result in a significant increase in the number of required models, which would make our project infeasible with the current level of computing power. The second problem is generic to existing tidal stripping simulations of Galactic satellites (dSphs and globular clusters), and is caused by our use of a static potential for the Milky Way halo. In a realistic (live) Galactic halo very massive satellites should experience dynamical friction, which would gradually bring them closer to the center of the Galaxy. Unfortunately, we could not use a simple analytical formula to estimate the impact of the dynamical friction on our results. The dynamical friction would be strongest for orbits 6 and 5, which have the smallest pericentric distances. Our subhalos on these orbits experience dramatic mass loss (up to 90% and 70%, respectively) during the first pericentric passage, rendering the constant satellite mass formula of @bin87 inapplicable. The dynamical friction equation of @col99 does not have this limitation, but it was derived for the special case of a subhalo with a truncated isothermal DM density profile with a core, which is very different from both the NFW and Burkert profiles of our subhalos. We want to emphasize that the inclusion of dynamical friction in our model would make our conclusion, that the observable properties of most of our dwarfs were not affected noticeably by tidal forces, even stronger, as the dwarfs would start off at a larger distance from the Milky Way center where the tidal forces are weaker. In addition, the use of a static potential ignores the impact of the gravitational field of the satellite on the Milky Way halo. This effect can be very important for massive dwarfs on almost radial orbits, with the potential of both the satellite and the Milky Way center violently fluctuating during the pericentric passage, leading to an exchange of energy between the dwarf and the Galactic halo (similar to the mechanism of violent relaxation). The above effect would probably be important only for orbit 6, which we were able to rule out for most of our Draco models. It is important to mention that our models do not cover all possible initial configurations of Draco. In a more general case, one would have to start with arbitrary initial stellar and DM density profiles (with the initial stellar velocity dispersion profile following from eq. \[\[Jeans\]\]). After 10 Gyr of evolution in the Galactic tidal field both profiles could become substantially different, with the line-of-sight velocity dispersion either increasing (due to the projection of unbound stars) or decreasing (due to tidal shocks; see Fig. \[fig11\]). The more general case would require a dramatic increase in supercomputing time, which would make our approach infeasible. We want to emphasize that despite the fact that our models probably do not include all possible initial Draco configurations, they do constitute a family of fully self-consistent models which match all the available observations of Draco well. A potentially important evolutionary factor not included in our model is an interaction between Draco and dark subhalos, predicted to be present in the Milky Way halo in large numbers by $\Lambda$CDM cosmological models. This effect was studied on larger scales by @moo98, who showed that in a cluster environment the galaxy-galaxy harassment can be substantial.
It is not clear if the harassment on a smaller, galactic scale would be of the same order: unlike the cluster scale, where the numbers of modeled and observed galaxies are in good agreement, there is a “missing satellites” issue on galactic scales.

CONCLUSIONS
===========

We presented two one-parameter families (separately for NFW and Burkert DM density profiles) of composite (stars $+$ DM) models for Draco which satisfy all the available observational and cosmological constraints. We showed that these models can survive tidal shocks and stripping on most realistic Draco orbits in the Galactic potential for 10 Gyr, with no appreciable impact on their observable properties. Both NFW and Burkert DM halo profiles are found to be equally plausible for Draco. The DM halos are either massive (up to $\sim 5\times 10^9$ M$_\odot$) and recently formed ($z\sim 2\dots 7$), or less massive (down to $\sim 7\times 10^7$ M$_\odot$) and older ($z\sim 7\dots 11$). Consequently, our results can be used to support either of the two popular solutions of the missing satellites problem – “very massive dwarfs” and “very old dwarfs”. Higher quality observations (line-of-sight velocity dispersion profiles, surface brightness maps, proper motion measurements) are required to further constrain the properties of Galactic dSphs and to place them in the right cosmological context. We would like to thank Jan Kleyna for providing the observed line-of-sight velocity dispersion profile for Draco. The simulations reported in this paper were carried out on the McKenzie cluster at the Canadian Institute for Theoretical Astrophysics.

Derivation of space velocity vector components for Galactic satellites {#prop}
=======================================================================

In this section we derive the components of the space velocity vector of an object with known distance from the Sun $D$, heliocentric line-of-sight velocity $V_{\rm los}$, and two proper motion components $\mu_\alpha \cos\delta$ and $\mu_\delta$. We work with the frame of reference where the center is at the Sun, the axis $X$ is directed toward the Galactic center, the axis $Y$ is pointing at ($l=90\degr$, $b=0$), and the axis $Z$ is directed toward the North Galactic Pole ($b=90\degr$). Here ($l,b$) are the galactic coordinates. The frame of reference is at rest relative to the Galactic center. The Solar velocity vector in this frame of reference is ${\bf V_\odot}=\{10.0,225.25,7.17\}$ km s$^{-1}$ [@deh98]. We assume that the Sun is located at the distance $R_\odot=8.5$ kpc from the Galactic center. To convert the proper motion vector components from the equatorial to the galactic frame of reference, one can use $$\begin{aligned} \mu_l \cos b &=& (\mu_\alpha \cos\delta)\cos\varphi - \mu_\delta \sin\varphi,\\ \mu_b &=& (\mu_\alpha \cos\delta)\sin\varphi + \mu_\delta \cos\varphi,\end{aligned}$$ where the angle $\varphi$ is obtained from $$\begin{aligned} \cos\varphi &=& (\sin\delta_{\rm NGP}-\sin\delta\sin b)/(\cos\delta \cos b), \\ \sin\varphi &=& -\sin(\alpha-\alpha_{\rm NGP})\cos\delta_{\rm NGP}/\cos b.\end{aligned}$$ Here ($\alpha,\delta$) and ($l,b$) are the equatorial and galactic coordinates of the object, respectively, and ($\alpha_{\rm NGP},\delta_{\rm NGP}$) are the equatorial coordinates of the North Galactic Pole (for the $J2000$ equinox, $\alpha_{\rm NGP}=192\fdg 859$ and $\delta_{\rm NGP}=27\fdg 128$).
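A minimal sketch of this proper-motion rotation is given below; the remaining space-velocity components are assembled from the equations that follow. The example uses Draco's approximate J2000 coordinates and an assumed, purely illustrative proper motion (not a measurement).

```python
import numpy as np

A_NGP, D_NGP = np.radians(192.859), np.radians(27.128)   # J2000 North Galactic Pole

def pm_equatorial_to_galactic(alpha_deg, delta_deg, b_deg, mua_cosd, mud):
    """Rotate (mu_alpha*cos(delta), mu_delta) into (mu_l*cos(b), mu_b) using the
    angle phi defined above; proper-motion units are arbitrary but must match."""
    a, d, b = np.radians([alpha_deg, delta_deg, b_deg])
    cosphi = (np.sin(D_NGP) - np.sin(d) * np.sin(b)) / (np.cos(d) * np.cos(b))
    sinphi = -np.sin(a - A_NGP) * np.cos(D_NGP) / np.cos(b)
    return (mua_cosd * cosphi - mud * sinphi,      # mu_l * cos(b)
            mua_cosd * sinphi + mud * cosphi)      # mu_b

# Example: Draco's approximate coordinates with an assumed proper motion of
# (0.6, 1.1) mas/yr, used here only to exercise the rotation.
print(pm_equatorial_to_galactic(260.05, 57.92, 34.7, 0.6, 1.1))
```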
One can show that the three components of the space velocity vector of the object in the frame of reference moving with the Sun are $$\begin{aligned} U_z &=& zV_{\rm los} + D\mu_b(x^2+y^2)^{1/2},\\ U_x &=& [x(V_{\rm los}-zU_z) - D(\mu_l\cos b)y(x^2+y^2)^{1/2}]/(x^2+y^2),\\ U_y &=& [D(\mu_l\cos b)(x^2+y^2)^{1/2} + yU_x]/x,\end{aligned}$$ where $x=\cos l \cos b$, $y=\sin l \cos b$, and $ z=\sin b$ are the components of a unit vector directed from the Sun toward the object, and the units for $D$, $\bf U$, and the two proper motion components ($\mu_l\cos b,\mu_b$) are km, km s$^{-1}$, and rad s$^{-1}$, respectively. In the frame of reference which is at rest relative to the Galactic center, the space velocity vector of the object is ${\bf V}={\bf U} + {\bf V_\odot}$. In the cylindrical Galactic frame of reference, the three components of the space velocity vector of the object are $$\Pi=(S_xV_x+S_yV_y)/(S_x^2+S_y^2)^{1/2}, \quad \Theta=(S_yV_x-S_xV_y)/(S_x^2+S_y^2)^{1/2}, \quad W=V_z,$$ where $\Pi$ is directed outward from the Galactic center in the plane of the Galaxy, $\Theta$ is the circular rotation speed in the plane of the Galaxy (positive for the Sun), $W$ is directed toward the North Galactic Pole, and ${\bf S}=\{xD-R_\odot,yD,zD\}$ is the vector connecting the Galactic center with the object. In the spherical Galactic frame of reference, the radial and tangential velocities of the object are $$V_r=({\bf V}\cdot {\bf S})/|{\bf S}|, \quad V_t=(|{\bf V}|^2-V_r^2)^{1/2}.$$ Aaronson, M. 1983, , 266, L11 Battaglia, G., et al. 2005, , 364, 433 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press) Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2000, , 539, 517 Bullock, J. S., Kolatt, T. S., Sigad, Y., Somerville, R. S., Kravtsov, A. V., Klypin, A. A., Primack, J. R., & Dekel, A. 2001, , 321, 559 Burkert, A. 1995, , 447, L25 Cardone, V. F., & Sereno, M. 2005, , 438, 545 Colpi, M., Mayer, L., & Governato, F. 1999, , 525, 720 Dehnen, W., & Binney, J. J. 1998, , 298, 387 Hayashi, E., Navarro, J. F., Taylor, J. E., Stadel, J., & Quinn, T. 2003, , 584, 541 Hernquist, L. 1990, , 356, 359 Johnston, K. V., Sigurdsson, S., & Hernquist, L. 1999, , 302, 771 Kazantzidis, S., Magorrian, J., & Moore, B. 2004, , 601, 37 King, I. R. 1966, , 71, 64 Klessen, R. S., Grebel, E. K., & Harbeck, D. 2003, , 589, 798 Klypin, A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999, , 522, 82 Klypin, A., Zhao, H., & Somerville, R. S. 2002, , 573, 597 Kroupa, P. 1997, New Astronomy, 2, 139 okas, E. L. 2001, , 327, L21 Mashchenko, S., Carignan, C., & Bouchard, A. 2004, , 352, 168 Mashchenko, S., Couchman, H. M. P., & Sills, A. 2005, , 624, 726 Mashchenko, S., & Sills, A. 2005a, , 619, 243 Mashchenko, S., & Sills, A. 2005b, , 619, 258 Mateo, M. L. 1998, , 36, 435 Mateo, M., Olszewski, E. W., Vogt, S. S., & Keane, M. J. 1998, , 116, 2315 Merritt, D. 1985, , 90, 1027 Miyamoto, M., & Nagai, R. 1975, , 27, 533 Moore, B., Lake, G., & Katz, N. 1998, , 495, 139 Moore, B., Ghigna, S., Governato, F., Lake, G., Quinn, T., Stadel, J., & Tozzi, P. 1999, , 524, L19 Munoz, R. R., et al. 2005, , 631, L137 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, , 490, 493 Odenkirchen, M., et al. 2001, , 122, 2538 Oh, K. S., Lin, D. N. C., & Aarseth, S. J. 1995, , 442, 142 Osipkov, L. P. 1979, Pis’ma Astron. Zh., 5, 77 Piatek, S., & Pryor, C. 1995, , 109, 1071 Piatek, S., Pryor, C., Bristow, P., Olszewski, E. W., Harris, H. C., Mateo, M., Minniti, D., & Tinney, C. G. 2005, , 130, 95 Read, J. 
I., Wilkinson, M. I., Wyn Evans, N., Gilmore, G., & Kleyna, J. T., 2005, , submitted Ricotti, M., & Gnedin, N. Y. 2005, , 629, 259 Sakamoto, T., Chiba, M., & Beers, T. C. 2003, , 397, 899 Scholz, R.-D., & Irwin, M. J. 1994, in IAU Symp. 161, Astronomy from Wide-Field Imaging, ed. H. T. MacGillivray (Dordrecht: Kluwer), 535 Sheth, R. K. & Tormen, G. 1999, , 308, 119 Spergel, D. N., et al. 2003, , 148, 175 Springel, V., Yoshida, N., & White, S. D. M. 2001, New Astronomy, 6, 79 Sternberg, A., McKee, C. F., & Wolfire, M. G. 2002, , 143, 419 Stoehr F., White S. D. M., Tormen G., Springel V., 2002, MNRAS, 335, L84 van Albada, T. S. 1982, , 201, 939 Wilkinson, M. I., Kleyna, J. T., Evans, N. W., Gilmore, G. F., Irwin, M. J., & Grebel, E. K. 2004, , 611, L21 [^1]: [^2]: For the Draco star count of @wil04 the best-fitting value of $\alpha$ is 6.
---
abstract: 'To extend the BLMSSM, we not only add exotic Higgs superfields $(\Phi_{NL},\varphi_{NL})$ to make the exotic leptons heavy, but also introduce the superfields ($Y$,$Y^\prime$), which have couplings with the leptons and exotic leptons at tree level. The resulting model is called the EBLMSSM; it differs from the BLMSSM especially in the exotic slepton (lepton) and exotic sneutrino (neutrino) sectors. We deduce the mass matrices and the needed couplings in this model. To constrain the parameter space, the Higgs boson mass $m_{h^0}$ and the processes $h^0\rightarrow \gamma\gamma$, $h^0\rightarrow VV, V=(Z,W)$ are studied in the EBLMSSM. With the assumed parameter space, we obtain numerical results consistent with the Higgs data from ATLAS and CMS. As a cold dark matter candidate, the relic density of the lightest mass eigenstate of $Y$ and $Y^\prime$ mixing is also studied.'
author:
- 'Shu-Min Zhao$^1$[^1], Tai-Fu Feng$^{1}$[^2], Guo-Zhu Ning$^{1}$[^3], Jian-Bin Chen$^{2}$[^4], Hai-Bin Zhang$^{1}$[^5], Xing Xing Dong$^{1}$'
title: The extended BLMSSM with a 125 GeV Higgs boson and dark matter
---

introduction
============

The total lepton number (L) and baryon number (B) are good symmetries because neither neutrinoless double beta decay nor proton decay has been observed. In the standard model (SM), L and B are global symmetries[@BLWending]. However, the individual lepton numbers $L_i=L_e,~L_\mu,~L_\tau$ are not exact symmetries at the electroweak scale because of neutrino oscillations and the tiny neutrino masses[@neutrinomass]. In the Universe there is a matter-antimatter asymmetry, so the baryon number must be broken. With the detection of the light Higgs $h^0(m_h^0=125.1{\rm GeV})$[@higgs125], the SM is remarkably successful and the Higgs mechanism is compelling. Beyond the SM, supersymmetry[@SUSY] provides a possibility to understand the light Higgs. The minimal supersymmetric extension of the SM (MSSM)[@MSSM] is one of the favored models, where the light Higgs mass at tree level is $m_{h}^{tree}=m_Z|\cos2\beta|$ [@BLfirst]. The one loop corrections to the Higgs mass mainly come from fermions and sfermions, and they depend on the virtual particle masses and the couplings with the Higgs. There are many papers about gauged B and L models, although most of them are non-supersymmetric [@BLnoSYSY]. Extending the MSSM with locally gauged B and L, one obtains the so-called BLMSSM, which was proposed by the authors in Ref.[@BLfirst]. The proton remains stable, as B and L are broken at the TeV scale. Therefore, a large desert between the electroweak scale and the grand unified scale is not necessary. In the BLMSSM, the baryon number is changed by one unit, while the lepton number is broken by an even number of units. R-parity in the BLMSSM is not conserved, and the model can explain the matter-antimatter asymmetry in the Universe. There are some works on the Higgs and dark matter[@dark1] in the BLMSSM[@darkM; @TFBL]. In the framework of the BLMSSM, the light Higgs mass and the decays $h^0\rightarrow \gamma\gamma$ and $h^0\rightarrow VV, V=(Z,W)$ are studied in our previous work[@TFBL]. Some lepton flavor violating processes and CP-violating processes have been studied with the new parameters of the BLMSSM[@zhaolepton]. In the BLMSSM, the exotic leptons are not heavy, because their masses depend only on the parameters $Y_{e_4}\upsilon_d,~Y_{e_5}\upsilon_u$. Here $\upsilon_u$ and $\upsilon_d$ are the vacuum expectation values (VEVs) of the two Higgs doublets $H_u$ and $H_d$.
In general, the Yukawa couplings $Y_{e_4}$ and $Y_{e_5}$ are not large parameters, so the exotic lepton masses are around 100 GeV. The light exotic leptons may lead to the BLMSSM being excluded by high energy physics experiments in the future. To obtain heavy exotic leptons, we add two exotic Higgs superfields to the BLMSSM; they are SU(2) singlets $\Phi_{NL}$ and $\varphi_{NL}$, whose VEVs are $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$[@cd750]. The exotic leptons and the superfields $\Phi_{NL},\varphi_{NL}$ have Yukawa couplings, so $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$ contribute to the diagonal elements of the exotic lepton mass matrix. The exotic leptons thus become heavy, and they should be unstable. Finally, the superfields $Y$ and $Y'$ are also introduced. At tree level, there are couplings for lepton-exotic lepton-$Y(Y')$. It is appealing that this extension of the BLMSSM produces some new cold dark matter candidates, such as the lightest mass eigenstate of $Y$ and $Y'$ mixing. The four-component spinor $\tilde{Y}$ is made up of the superpartners of $Y$ and $Y'$. In this extended BLMSSM (EBLMSSM), we study the lightest CP-even Higgs mass including the one loop corrections. The Higgs decays $h^0\rightarrow \gamma\gamma$ and $h^0\rightarrow VV, ~V=(Z, W)$ are also calculated here. Taking the lightest mass eigenstate of $Y$ and $Y'$ mixing as a cold dark matter candidate, we study the relic density. After this introduction, in Section 2, we introduce the EBLMSSM in detail, including the mass matrices and the couplings different from those in the BLMSSM. The mass of the lightest CP-even Higgs $h^0$ is deduced in Section 3. Section 4 gives the formulation of the Higgs decays $h^0\rightarrow \gamma\gamma$, $h^0\rightarrow VV, ~V=(Z, W)$ and the dark matter relic density. The corresponding numerical results are computed in Section 5. The last section contains the discussion and conclusion.

Extend the BLMSSM
=================

The local gauge group of the BLMSSM [@BLfirst] is $SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{B}\otimes U(1)_{L}$. In the BLMSSM, the exotic lepton masses are obtained from the Yukawa couplings with the two Higgs doublets $H_u$ and $H_d$. The VEVs of $H_u$ and $H_d$ are $\upsilon_u$ and $\upsilon_d$ with the relation $\sqrt{\upsilon_u^2+\upsilon_d^2}=\upsilon\sim 250$ GeV. Therefore, the exotic lepton masses are not very heavy, though they can satisfy the experimental bounds at present. In the future, with the development of high energy experiments, the experimental bounds on the exotic lepton masses will very probably improve. Therefore, we introduce the exotic Higgs superfields $\Phi_{NL}$ and $\varphi_{NL}$ with nonzero VEVs to make the exotic leptons heavy. The heavy exotic leptons should be unstable, so the superfields $Y,Y'$ are introduced accordingly. These introduced superfields lead to tree level couplings for lepton-exotic lepton-$Y(Y')$. In Table I we show the superfields of the EBLMSSM.
  Superfields            $SU(3)_C$   $SU(2)_L$   $U(1)_Y$   $U(1)_B$         $U(1)_L$
  ---------------------- ----------- ----------- ---------- ---------------- --------------
  $\hat{Q}_i$            3           2           1/6        $1/3$            0
  $\hat{u}^c_i$          $\bar{3}$   1           -2/3       -$1/3$           0
  $\hat{d}^c_i$          $\bar{3}$   1           1/3        -$1/3$           0
  $\hat{L}_i$            1           2           -1/2       0                $1$
  $\hat{e}^c_i$          1           1           1          0                -$1$
  $\hat{N}^c_i$          1           1           0          0                -$1$
  $\hat{Q}_4$            3           2           1/6        $B_4$            0
  $\hat{U}^c_4$          $\bar{3}$   1           -2/3       -$B_4$           0
  $\hat{D}^c_4$          $\bar{3}$   1           1/3        -$B_4$           0
  $\hat{Q}_5^c$          $\bar{3}$   2           -1/6       -$(1+B_4)$       0
  $\hat{U}_5$            $3$         1           2/3        $1 + B_4$        0
  $\hat{D}_5$            $3$         1           -1/3       $1 + B_4$        0
  $\hat{L}_4$            1           2           -1/2       0                $L_4$
  $\hat{E}^c_4$          1           1           1          0                -$L_4$
  $\hat{N}^c_4$          1           1           0          0                -$L_4$
  $\hat{L}_5^c$          1           2           1/2        0                -$(3 + L_4)$
  $\hat{E}_5$            1           1           -1         0                $3 + L_4$
  $\hat{N}_5$            1           1           0          0                $3 + L_4$
  $\hat{H}_u$            1           2           1/2        0                $0$
  $\hat{H}_d$            1           2           -1/2       0                $0$
  $\hat{\Phi}_B$         1           1           0          1                0
  $\hat{\varphi}_B$      1           1           0          -1               0
  $\hat{\Phi}_L$         1           1           0          0                -2
  $\hat{\varphi}_L$      1           1           0          0                2
  $\hat{\Phi}_{NL}$      1           1           0          0                -3
  $\hat{\varphi}_{NL}$   1           1           0          0                3
  $\hat{X}$              1           1           0          $2/3 + B_4$      0
  $\hat{X'}$             1           1           0          $-(2/3 + B_4)$   0
  $Y$                    1           1           0          0                $2+L_4$
  $Y'$                   1           1           0          0                $-(2+L_4)$

  : The super fields in the extended BLMSSM (EBLMSSM) \[quarks\]

The superpotential of the EBLMSSM is $$\begin{aligned} &&{\cal W}_{{EBLMSSM}}={\cal W}_{{MSSM}}+{\cal W}_{B}+{\cal W}_{L}+{\cal W}_{X}+{\cal W}_{Y}\;, \nonumber\\ &&{\cal W}_{L}=\lambda_{L}\hat{L}_{4}\hat{L}_{5}^c\hat{\varphi}_{NL}+\lambda_{E}\hat{E}_{4}^c\hat{E}_{5} \hat{\Phi}_{NL}+\lambda_{NL}\hat{N}_{4}^c\hat{N}_{5}\hat{\Phi}_{NL} +\mu_{NL}\hat{\Phi}_{NL}\hat{\varphi}_{NL}\nonumber\\&&\hspace{1.2cm}+Y_{{e_4}}\hat{L}_{4}\hat{H}_{d}\hat{E}_{4}^c+Y_{{\nu_4}}\hat{L}_{4}\hat{H}_{u}\hat{N}_{4}^c +Y_{{e_5}}\hat{L}_{5}^c\hat{H}_{u}\hat{E}_{5}+Y_{{\nu_5}}\hat{L}_{5}^c\hat{H}_{d}\hat{N}_{5} \nonumber\\ &&\hspace{1.2cm} +Y_{\nu}\hat{L}\hat{H}_{u}\hat{N}^c+\lambda_{{N^c}}\hat{N}^c\hat{N}^c\hat{\varphi}_{L} +\mu_{L}\hat{\Phi}_{L}\hat{\varphi}_{L}\;, \nonumber\\&& {\cal W}_{Y}=\lambda_4\hat{L}\hat{L}_{5}^c\hat{Y}+\lambda_5\hat{N}^c\hat{N}_{5}\hat{Y}^\prime +\lambda_6\hat{E}^c\hat{E}_{5}\hat{Y}^\prime+\mu_{Y}\hat{Y}\hat{Y}^\prime\;.\end{aligned}$$ ${\cal W}_{{MSSM}}$ is the superpotential of the MSSM. ${\cal W}_{B}$ and ${\cal W}_{X}$ are the same as the corresponding terms in the BLMSSM[@TFBL]. $W_Y$ contains the terms beyond the BLMSSM, and they include the couplings of lepton-exotic lepton-$Y$ ($l^I-L'-Y$). Therefore, the heavy exotic leptons can decay to leptons and the mass eigenstates of $Y$ and $Y^\prime$ mixing, the lighter of which can be a dark matter candidate. From $W_Y$, one can also obtain the coupling of lepton-exotic slepton-$\tilde{Y}$ ($l^I-\tilde{L}'-\tilde{Y}$), where $\tilde{Y}$ is the four component spinor composed of the superpartners of $Y$ and $Y'$. The new couplings $l^I-L'-Y$ and $l^I-\tilde{L}'-\tilde{Y}$ can give one loop corrections to the lepton anomalous magnetic dipole moment (MDM). They may compensate the deviation between the experimental value and the SM prediction for the muon MDM. The parameter $\mu_Y$ can be a complex number with a non-zero imaginary part, which is a new source of CP violation. Therefore, both new couplings produce one loop diagrams contributing to the lepton electric dipole moment (EDM). Furthermore, if $\lambda_4$ in $\lambda_4\hat{L}\hat{L}_{5}^c\hat{Y}$ is a matrix with non-zero elements related to lepton flavor mixing, this term can enhance lepton flavor violating effects. On the whole, ${\cal W}_{Y}$ enriches the lepton physics to a certain degree, and these subjects will be studied in our later works.
Because of the introduction of the superfields $\Phi_{NL},\varphi_{NL}, Y$ and $Y'$, the soft breaking terms are written as $$\begin{aligned} &&{\cal L}_{{soft}}^{EBLMSSM}={\cal L}_{{soft}}^{BLMSSM} -m_{{\Phi_{NL}}}^2\Phi_{NL}^*\Phi_{NL} -m_{{\varphi_{NL}}}^2\varphi_{NL}^*\varphi_{NL} +(A_{{LL}}\lambda_{L}\tilde{L}_{4}\tilde{L}_{5}^c\varphi_{NL}\nonumber\\&&\hspace{2.0cm} +A_{{LE}}\lambda_{E}\tilde{e}_{4}^c\tilde{e}_{5}\Phi_{NL} +A_{{LN}}\lambda_{NL}\tilde{\nu}_{4}^c\tilde{\nu}_{5}\Phi_{NL} +B_{NL}\mu_{NL}\Phi_{NL}\varphi_{NL}+h.c.)\nonumber\\&&\hspace{2.0cm}+( A_4\lambda_4\tilde{L}\tilde{L}_{5}^cY+A_5\lambda_5\tilde{N}^c\tilde{\nu}_{5}Y^\prime +A_6\lambda_6\tilde{e}^c\tilde{e}_{5}Y^\prime+B_{Y}\mu_{Y}YY^\prime+h.c.). \label{soft-breaking}\end{aligned}$$ Here ${\cal L}_{{soft}}^{BLMSSM}$ is the soft breaking terms of BLMSSM, whose concrete form is in our previous work[@TFBL]. The $SU(2)_L$ doublets $H_{u},H_{d}$ acquire the nonzero VEVs $\upsilon_{u},\upsilon_{d}$. The $SU(2)_L$ singlets $\Phi_{B},\varphi_{B},\Phi_{L},\varphi_{L},\Phi_{NL},\varphi_{NL}$ obtain the nonzero VEVs $\upsilon_{{B}},\overline{\upsilon}_{{B}},\upsilon_{L},\;\overline{\upsilon}_{L}, \upsilon_{NL},\;\overline{\upsilon}_{NL}$ respectively. $$\begin{aligned} &&H_{u}=\left(\begin{array}{c}H_{u}^+\\{1\over\sqrt{2}}\Big(\upsilon_{u}+H_{u}^0+iP_{u}^0\Big)\end{array}\right), ~~~~~~ H_{d}=\left(\begin{array}{c}{1\over\sqrt{2}}\Big(\upsilon_{d}+H_{d}^0+iP_{d}^0\Big)\\H_{d}^-\end{array}\right), \nonumber\\ &&\Phi_{B}={1\over\sqrt{2}}\Big(\upsilon_{B}+\Phi_{B}^0+iP_{B}^0\Big),~~~~~~~~~~~ \varphi_{B}={1\over\sqrt{2}}\Big(\overline{\upsilon}_{B}+\varphi_{B}^0+i\overline{P}_{B}^0\Big), \nonumber\\ &&\Phi_{L}={1\over\sqrt{2}}\Big(\upsilon_{L}+\Phi_{L}^0+iP_{L}^0\Big),~~~~~~~~~~~~ \varphi_{L}={1\over\sqrt{2}}\Big(\overline{\upsilon}_{L}+\varphi_{L}^0+i\overline{P}_{L}^0\Big), \nonumber\\ &&\Phi_{NL}={1\over\sqrt{2}}\Big(\upsilon_{NL}+\Phi_{NL}^0+iP_{NL}^0\Big),~~~~ \varphi_{NL}={1\over\sqrt{2}}\Big(\overline{\upsilon}_{NL}+\varphi_{NL}^0+i\overline{P}_{NL}^0\Big).\end{aligned}$$ Here, we define $\tan\beta=\upsilon_u/\upsilon_d,~\tan\beta_B=\bar{\upsilon}_B/\upsilon_B,~\tan\beta_L=\bar{\upsilon}_L/\upsilon_L$ and $\tan\beta_{NL}=\bar{\upsilon}_{NL}/\upsilon_{NL}$. The VEVs of the Higgs satisfy the following equations $$\begin{aligned} &&|\mu|^2-\frac{g_1^2+g_2^2}{8}(\upsilon_u^2-\upsilon_d^2)+m_{H_d}^2+Re[B\mu]\tan\beta=0,\\&& |\mu|^2+\frac{g_1^2+g_2^2}{8}(\upsilon_u^2-\upsilon_d^2)+m_{H_u}^2+Re[B\mu]\cot\beta=0,\\&& |\mu_B|^2+\frac{g_B^2}{2}(\upsilon_B^2-\bar{\upsilon}_B^2)+m_{\Phi_B}^2-Re[B_B\mu_B]\tan\beta_B=0,\\&& |\mu_B|^2-\frac{g_B^2}{2}(\upsilon_B^2-\bar{\upsilon}_B^2)+m_{\varphi_B}^2-Re[B_B\mu_B]\cot\beta_B=0,\\&& %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |\mu_L|^2-2g_L^2V_L^2+m_{\Phi_L}^2-Re[B_L\mu_L]\tan\beta_L=0,\label{VL1}\\&& |\mu_L|^2+2g_L^2V_L^2+m_{\varphi_L}^2-Re[B_L\mu_L]\cot\beta_L=0,\label{VL2}\\&& |\mu_{NL}|^2-3g_L^2V_L^2+m_{\Phi_{NL}}^2-Re[B_{NL}\mu_{NL}]\tan\beta_{NL}=0,\label{VL3}\\&& |\mu_{NL}|^2+3g_L^2V_L^2 +m_{\varphi_{NL}}^2-Re[B_{NL}\mu_{NL}]\cot\beta_{NL}=0,\label{VL4}\end{aligned}$$ with $V_L^2=\overline{\upsilon}^2_L-\upsilon^2_L+\frac{3}{2}(\overline{\upsilon}^2_{NL}-\upsilon^2_{NL})$. Here, the Eqs.(\[VL1\]) and (\[VL2\]) are similar as the corresponding equations in BLMSSM, but Eqs.(\[VL1\]) and (\[VL2\]) have relation with the new parameters $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$. 
We obtain the new Eqs.(\[VL3\]) and (\[VL4\]) through $\frac{\partial V}{\partial \Phi_{NL}}$ and $\frac{\partial V}{\partial \varphi_{NL}}$, with $V$ denoting the Higgs scalar potential. Here we deduce the mass matrices in the EBLMSSM. Compared with the BLMSSM, the superfields $\Phi_{NL}$ and $\varphi_{NL}$ are introduced and they give corrections to the mass matrices of the slepton, sneutrino, exotic lepton, exotic neutrino, exotic slepton and exotic sneutrino. That is to say, in the EBLMSSM, the mass matrices of the squark, exotic quark, exotic squark, baryon neutralino, MSSM neutralino, $X$ and $\tilde{X}$ are the same as those in the BLMSSM, and their concrete forms can be found in our previous works[@SM14JHEP; @zhaoBL]. Though the mass squared matrices of the slepton and sneutrino in the EBLMSSM are different from those in the BLMSSM, we can obtain the slepton and sneutrino mass squared matrices in the EBLMSSM simply by using the replacement $\overline{\upsilon}^2_L-\upsilon^2_L\rightarrow V_L^2$ in the BLMSSM results. In the BLMSSM, the issue of Landau poles has been discussed in detail by the authors of Ref.[@BLfirst]. Their conclusion is that there are no Landau poles at the low scale due to the new families. In the EBLMSSM, the quark (squark) and exotic quark (exotic squark) sectors are the same as those in the BLMSSM. Therefore, the Landau pole conditions for the Yukawa couplings of the quark (squark) and exotic quark (exotic squark) have the same behavior as in the BLMSSM. The added superfields $(\Phi_{NL},\varphi_{NL}, Y, Y')$ do not have couplings with the gauge fields of $SU(3)_C,SU(2)_L,U(1)_Y$ and $U(1)_B$. So the behavior of the gauge couplings $g_1,g_2,g_3$ and $g_B$ is the same in the BLMSSM and the EBLMSSM. The parts that differ between the BLMSSM and the EBLMSSM are the terms including $\Phi_{NL},\varphi_{NL}, Y$ and $ Y'$. The new terms in the superpotential $\mathcal{W}_L$ are $\lambda_{L}\hat{L}_{4}\hat{L}_{5}^c\hat{\varphi}_{NL} +\lambda_{E}\hat{E}_{4}^c\hat{E}_{5} \hat{\Phi}_{NL}+\lambda_{NL}\hat{N}_{4}^c\hat{N}_{5}\hat{\Phi}_{NL} +\mu_{NL}\hat{\Phi}_{NL}\hat{\varphi}_{NL}$ and they have corresponding relations with $\lambda_{Q}\hat{Q}_{4}\hat{Q}_{5}^c\hat{\Phi}_{B}+\lambda_{U}\hat{U}_{4}^c\hat{U}_{5} \hat{\varphi}_{B}+\lambda_{D}\hat{D}_{4}^c\hat{D}_{5}\hat{\varphi}_{B}+\mu_{B}\hat{\Phi}_{B}\hat{\varphi}_{B}$ in $\mathcal{W}_B$ by the replacements $\hat{L}_{4}\leftrightarrow \hat{Q}_{4}, \hat{L}^c_{5}\leftrightarrow \hat{Q}^c_{5}, \hat{E}^c_{4}\leftrightarrow \hat{U}^c_{4}, \hat{E}_{5}\leftrightarrow \hat{U}_{5}, \hat{N}^c_{4}\leftrightarrow \hat{D}^c_{4},\hat{N}_{5}\leftrightarrow \hat{D}_{5}, \hat{\Phi}_{NL}\leftrightarrow\hat{\varphi}_{B},\hat{\varphi}_{NL}\leftrightarrow\hat{\Phi}_{B}$. The corresponding relations for ${\cal W}_{Y}=\lambda_4\hat{L}\hat{L}_{5}^c\hat{Y}+\lambda_5\hat{N}^c\hat{N}_{5}\hat{Y}^\prime +\lambda_6\hat{E}^c\hat{E}_{5}\hat{Y}^\prime+\mu_{Y}\hat{Y}\hat{Y}^\prime$ and ${\cal W}_{X}=\lambda_1\hat{Q}\hat{Q}_{5}^c\hat{X}+\lambda_2\hat{U}^c\hat{U}_{5}\hat{X}^\prime +\lambda_3\hat{D}^c\hat{D}_{5}\hat{X}^\prime+\mu_{X}\hat{X}\hat{X}^\prime$ are obvious with $\hat{L}\leftrightarrow \hat{Q}, \hat{L}^c_{5}\leftrightarrow \hat{Q}^c_{5}, \hat{E}^c\leftrightarrow \hat{U}^c, \hat{E}_{5}\leftrightarrow \hat{U}_{5}, \hat{N}^c\leftrightarrow \hat{D}^c,\hat{N}_{5}\leftrightarrow \hat{D}_{5}, \hat{X}\leftrightarrow\hat{Y},\hat{X}^\prime\leftrightarrow\hat{Y}^\prime$.
From this analysis, the Landau pole conditions for the gauge coupling $g_L$ and the Yukawa couplings of the exotic leptons should behave similarly to those of the gauge coupling $g_B$ and the Yukawa couplings of the exotic quarks. In conclusion, as in the BLMSSM, there are no Landau poles in the EBLMSSM at the low scale because of the new families. The concrete study of Landau poles for the couplings should use the renormalization group equations, which is tedious, and we shall study this issue in future work.

The mass matrices of exotic lepton (slepton) and exotic neutrino (sneutrino) in EBLMSSM
---------------------------------------------------------------------------------------

In the BLMSSM, the exotic lepton masses are not heavy, because they obtain masses only from $H_u$ and $H_d$. The VEVs of $\Phi_{NL}$ and $\varphi_{NL}$ are $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$, which can be large parameters. So, the EBLMSSM exotic leptons are heavier than those in the BLMSSM. The mass matrix for the exotic leptons reads as $$\begin{aligned} &&-{\cal L}_{{e^\prime}}^{mass}=\left(\begin{array}{ll}\bar{e}_{{4R}}^\prime,&\bar{e}_{{5R}}^\prime\end{array}\right) \left(\begin{array}{ll}-{1\over\sqrt{2}}\lambda_{L}\overline{\upsilon}_{NL},&{1\over\sqrt{2}}Y_{{e_5}}\upsilon_{u}\\ -{1\over\sqrt{2}}Y_{{e_4}}\upsilon_{d},&{1\over\sqrt{2}}\lambda_{E}\upsilon_{NL} \end{array}\right)\left(\begin{array}{l}e_{{4L}}^\prime\\e_{{5L}}^\prime\end{array}\right)+h.c. \label{ELmass}\end{aligned}$$ Obviously, $\overline{\upsilon}_{NL}$ and $\upsilon_{NL}$ enter the diagonal elements of the mass matrix in Eq.(\[ELmass\]). It is easy to obtain heavy exotic lepton masses with large $\overline{\upsilon}_{NL}$ and $\upsilon_{NL}$. If we take $\overline{\upsilon}_{NL}$ and $\upsilon_{NL}$ as zero, the mass matrix is the same as that in the BLMSSM. In fact, the values of $\overline{\upsilon}_{NL}$ and $\upsilon_{NL}$ used here are of TeV order, which produces TeV scale exotic leptons. Heavy exotic leptons can easily accommodate the experimental bounds. The exotic neutrinos are four-component spinors, whose mass matrix is $$\begin{aligned} &&-{\cal L}_{{\nu^\prime}}^{mass}=\left(\begin{array}{ll}\bar{\nu}_{{4R}}^\prime,&\bar{\nu}_{{5R}}^\prime\end{array}\right) \left(\begin{array}{ll}{1\over\sqrt{2}}\lambda_{L}\overline{\upsilon}_{NL},&-{1\over\sqrt{2}}Y_{{\nu_5}}\upsilon_{d}\\ {1\over\sqrt{2}}Y_{{\nu_4}}\upsilon_{u},&{1\over\sqrt{2}}\lambda_{NL}\upsilon_{NL} \end{array}\right)\left(\begin{array}{l}\nu_{{4L}}^\prime\\\nu_{{5L}}^\prime\end{array}\right)+h.c. \label{Nmass-matrix}\end{aligned}$$ As in the exotic lepton case, heavy exotic neutrinos are also obtained. In the BLMSSM, the exotic sleptons of the 4th and 5th generations do not mix, and their mass matrices are both $2\times2$. In the EBLMSSM, the exotic sleptons of the 4th and 5th generations mix together, and their mass matrix is $4\times4$. In the basis $(\tilde{e}_4,\tilde{e}_4^{c*},\tilde{e}_5,\tilde{e}_5^{c*})$, we show the elements of the exotic slepton mass matrix $\mathcal{M}^2_{\tilde{E}}$ in the following form.
$$\begin{aligned} &&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_5^{c*}\tilde{e}_5^{c})= \lambda_L^2\frac{\bar{\upsilon}_{NL}^2}{2}+\frac{\upsilon_u^2}{2}|Y_{e_5}|^2+M^2_{\tilde{L}_5} -\frac{g_1^2-g_2^2}{8}(\upsilon_d^2-\upsilon_u^2)-g_L^2(3+L_4)V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_5^{*}\tilde{e}_5)=\lambda_E^2\frac{\upsilon_{NL}^2}{2} +\frac{\upsilon_u^2}{2}|Y_{e_5}|^2+M^2_{\tilde{e}_5}+\frac{g_1^2}{4}(\upsilon_d^2-\upsilon_u^2) +g_L^2(3+L_4)V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{*}\tilde{e}_4)=\lambda_L^2\frac{\bar{\upsilon}_{NL}^2}{2} +\frac{g_1^2-g_2^2}{8}(\upsilon_d^2-\upsilon_u^2)+\frac{\upsilon_d^2}{2}|Y_{e_4}|^2+M^2_{\tilde{L}_4} +g_L^2L_4V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{c*}\tilde{e}_4^{c})= \lambda_E^2\frac{\upsilon_{NL}^2}{2}-\frac{g_1^2}{4}(\upsilon_d^2-\upsilon_u^2)+\frac{\upsilon_d^2}{2}|Y_{e_4}|^2+M^2_{\tilde{e}_4} -g_L^2L_4V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{*}\tilde{e}_5) =\upsilon_dY_{e_4}^*\lambda_E\frac{\upsilon_{NL}}{2}+\lambda_LY_{e_5}\frac{\bar{\upsilon}_{NL}v_u}{2}, ~~~\mathcal{M}^2_{\tilde{E}}(\tilde{e}_5\tilde{e}_5^{c})=\mu^*\frac{\upsilon_d}{\sqrt{2}}Y_{e_5}+A_{e_5}Y_{e_5}\frac{\upsilon_u}{\sqrt{2}}, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{c}\tilde{e}_5)=\mu_{NL}^*\lambda_E \frac{\bar{\upsilon}_{NL}}{\sqrt{2}}-A_{LE}\lambda_E\frac{\upsilon_{NL}}{\sqrt{2}}, ~~\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4\tilde{e}_5^{c})=-\mu_{NL}^*\frac{\upsilon_{NL}}{\sqrt{2}}\lambda_L+A_{LL}\lambda_L\frac{\bar{\upsilon}_{NL}}{\sqrt{2}}, \nonumber\\&&\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4\tilde{e}_4^{c})=\mu^*\frac{\upsilon_u}{\sqrt{2}}Y_{e_4}+A_{e_4}Y_{e_4}\frac{\upsilon_d}{\sqrt{2}}, ~~~\mathcal{M}^2_{\tilde{E}}(\tilde{e}_5^{c}\tilde{e}_4^{c*}) =-Y_{e_5}\lambda_E\frac{\upsilon_u\upsilon_{NL}}{2}-\lambda_LY_{e_4}^*\frac{\bar{\upsilon}_{NL}v_d}{2}. \label{SE45}\end{aligned}$$ In Eq.(\[SE45\]), the non-zero terms $\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4\tilde{e}_5^{c}), \mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{*}\tilde{e}_5), \mathcal{M}^2_{\tilde{E}}(\tilde{e}_5^{c}\tilde{e}_4^{c*})$ and $\mathcal{M}^2_{\tilde{E}}(\tilde{e}_4^{c}\tilde{e}_5)$ are the reason for the exotic slepton mixing of generations 4 and 5. These mixing terms all include the parameters $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$. It shows that this mixing is caused basically by the added Higgs superfields $\Phi_{NL}$ and $\varphi_{NL}$. Using the matrix $Z_{\tilde{E}}$, we obtain mass eigenstates with the formula $Z^{\dag}_{\tilde{E}}\mathcal{M}^2_{\tilde{E}} Z_{\tilde{E}}=diag(m^2_{\tilde{E}^1},m^2_{\tilde{E}^2},m^2_{\tilde{E}^3},m^2_{\tilde{E}^4})$. 
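In practice the rotation matrix $Z_{\tilde{E}}$ and the mass eigenvalues are obtained by a standard numerical diagonalization. The following is a minimal sketch with `numpy`; the matrix entries are placeholder numbers, not values derived from Eq.(\[SE45\]).

```python
import numpy as np

# Illustrative placeholder entries for a Hermitian 4x4 mass squared matrix in the
# basis (e4, e4^c*, e5, e5^c*); in a real scan the entries come from Eq.(SE45).
M2 = np.array([[2.30, 0.10, 0.05, 0.02],
               [0.10, 2.45, 0.03, 0.04],
               [0.05, 0.03, 2.60, 0.08],
               [0.02, 0.04, 0.08, 2.75]]) * 1.0e6   # GeV^2

# eigh returns ascending eigenvalues m2 and a unitary Z whose columns are the
# eigenvectors, so that Z^dagger M2 Z = diag(m2), as in the text.
m2, Z = np.linalg.eigh(M2)
masses = np.sqrt(m2)                                 # exotic slepton masses in GeV
print(masses)
print(np.allclose(Z.conj().T @ M2 @ Z, np.diag(m2)))
```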
In the same way, the exotic sneutrino mass squared matrix is also obtained $$\begin{aligned} \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_5^{c*}\tilde{\nu}_5^{c})=\lambda_L^2\frac{\bar{\upsilon}_{NL}^2}{2} -\frac{g_1^2+g_2^2}{8}(\upsilon_d^2-\upsilon_u^2) +\frac{\upsilon_d^2}{2}|Y_{\nu_5}|^2+M^2_{\tilde{L}_5} -g_L^2(3+L_4)V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4^{*}\tilde{\nu}_4) =\lambda_L^2\frac{\bar{\upsilon}_{NL}^2}{2}+\frac{g_1^2+g_2^2}{8}(\upsilon_d^2-\upsilon_u^2) +\frac{\upsilon_u^2}{2}|Y_{\nu_4}|^2+M^2_{\tilde{L}_4} +g_L^2L_4V_L^2, \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_5^{*}\tilde{\nu}_5)= \lambda_{NL}^2\frac{\upsilon_{NL}^2}{2}+g_L^2(3+L_4)V_L^2 +\frac{\upsilon_d^2}{2}|Y_{\nu_5}|^2+M^2_{\tilde{\nu}_5}, \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4^{c*}\tilde{\nu}_4^{c})= \lambda_{NL}^2\frac{\upsilon_{NL}^2}{2}-g_L^2L_4V_L^2 +\frac{\upsilon_u^2}{2}|Y_{\nu_4}|^2+M^2_{\tilde{\nu}_4}, \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_5^{c}\tilde{\nu}_4^{c*})= \lambda_{NL}Y_{\nu_5}\frac{\upsilon_{NL}\upsilon_d}{2}-\lambda_LY_{\nu_4}^*\frac{\bar{\upsilon}_{NL}\upsilon_u}{2}, ~~~\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_5\tilde{\nu}_5^{c})= \mu^*\frac{\upsilon_u}{\sqrt{2}}Y_{\nu_5}+A_{\nu_5}Y_{\nu_5}\frac{\upsilon_d}{\sqrt{2}}, \nonumber\\&& \mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4^{c}\tilde{\nu}_5) =\mu_{NL}^*\lambda_{NL}\frac{\bar{\upsilon}_{NL}}{\sqrt{2}}-A_{LN}\lambda_N\frac{\upsilon_{NL}}{\sqrt{2}}, ~~~\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4\tilde{\nu}_5^{c})=\mu_{NL}^*\frac{\upsilon_{NL}}{\sqrt{2}} \lambda_L-A_{LL}\lambda_L\frac{\bar{\upsilon}_{NL}}{\sqrt{2}}, \nonumber\\&&\mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4^{*}\tilde{\nu}_5) =\lambda_LY_{\nu_5}\frac{\bar{\upsilon}_{NL}\upsilon_d}{2}-\frac{\upsilon_u\upsilon_{NL}}{2}\lambda_{NL}Y_{\nu_4}^*,~~~ \mathcal{M}^2_{\tilde{N}}(\tilde{\nu}_4\tilde{\nu}_4^{c})=\mu^*\frac{\upsilon_d}{\sqrt{2}}Y_{\nu_4}+A_{\nu_4}Y_{\nu_4}\frac{\upsilon_u}{\sqrt{2}}.\end{aligned}$$ For the exotic sneutrino, the mixing of generations 4 and 5 is similar as that of exotic slepton. In the base $(\tilde{\nu}_4,\tilde{\nu}_4^{c*},\tilde{\nu}_5,\tilde{\nu}_5^{c*})$, we get the mass squared matrix of the exotic sneutrino, and obtain the mass eigenstates by the matrix $Z_{\tilde{N}}$ through the formula $Z^{\dag}_{\tilde{N}}\mathcal{M}^2_{\tilde{N}} Z_{\tilde{N}}=diag(m^2_{\tilde{N}^1},m^2_{\tilde{N}^2},m^2_{\tilde{N}^3},m^2_{\tilde{N}^4})$. The lepton neutralino mass matrix in EBLMSSM -------------------------------------------- In EBLMSSM, the superfields ($\Phi_{L},\varphi_{L},\Phi_{NL},\varphi_{NL}$) have their SUSY superpartners $(\psi_{\Phi_L},\psi_{\varphi_L},\psi_{\Phi_{NL}},\psi_{\varphi_{NL}})$. They mix with $\lambda_L$, which is the superpartner of the new lepton type gauge boson $Z^\mu_L$. Therefore, we deduce their mass matrix in the base $(i\lambda_L,\psi_{\Phi_L},\psi_{\varphi_L},\psi_{\Phi_{NL}},\psi_{\varphi_{NL}})$ $$\mathcal{M}_L=\left( \begin{array}{ccccc} 2M_L &2\upsilon_Lg_L &-2\bar{\upsilon}_Lg_L&3\upsilon_{NL}g_L &-3\bar{\upsilon}_{NL}g_L\\ 2\upsilon_Lg_L & 0 &-\mu_L& 0 & 0\\ -2\bar{\upsilon}_Lg_L&-\mu_L &0& 0 & 0\\ 3\upsilon_{NL}g_L & 0 & 0 & 0 & -\mu_{NL}\\ -3\bar{\upsilon}_{NL}g_L& 0&0&-\mu_{NL}&0 \end{array}\right).$$ The lepton neutralino mass egeinstates are four-component spinors $X^0_{L_i}=(K_{L_i}^0,\bar{K}_{L_i}^0)^T$, and their mass matrix is diagonalized by the rotation matrix $Z_{NL}$. 
The relations for the components are $$\begin{aligned} &&i\lambda_L=Z_{NL}^{1i}K_{L_i}^0 ,~~~\psi_{\Phi_L}=Z_{NL}^{2i}K_{L_i}^0 ,~~~\psi_{\varphi_L}=Z_{NL}^{3i}K_{L_i}^0, \nonumber\\&&\psi_{\Phi_{NL}}=Z_{NL}^{4i}K_{L_i}^0 ,~~~~~\psi_{\varphi_{NL}}=Z_{NL}^{5i}K_{L_i}^0.\end{aligned}$$ In the BLMSSM, there are no $\psi_{\Phi_{NL}},\psi_{\varphi_{NL}}$, and the basis of the lepton neutralinos is $(i\lambda_L,\psi_{\Phi_L},\psi_{\varphi_L})$, whose mass matrix is $3\times3$. The EBLMSSM extends this matrix to $5\times5$, which includes the BLMSSM results.

The Higgs superfields and $Y$ in EBLMSSM
----------------------------------------

The superfields $\Phi_{L},\varphi_{L},\Phi_{NL},\varphi_{NL}$ mix together and form a $4\times4$ mass squared matrix, which is larger than the corresponding $2\times2$ mass matrix in the BLMSSM. Diagonalizing this mass squared matrix, we obtain four CP even exotic Higgs bosons. $$\begin{aligned} &&\mathcal{M}^2_{\phi}(\Phi_L^0\Phi_L^0)=\frac{1}{2}g_L^2\Big(6\upsilon_L^2-2\bar{\upsilon}_L^2+3(\upsilon_{NL}^2-\bar{\upsilon}_{NL}^2)\Big) +\frac{1}{2}\mu_L^2+\frac{1}{2}m_{\Phi_L}^2, \nonumber\\&&\mathcal{M}^2_{\phi}(\varphi_L^0\varphi_L^0)= \frac{1}{2}g_L^2\Big(6\bar{\upsilon}_L^2-2\upsilon_L^2+3(\bar{\upsilon}_{NL}^2-\upsilon_{NL}^2)\Big)+\frac{1}{2}\mu_L^2 +\frac{1}{2}m_{\varphi_L}^2, \nonumber\\&&\mathcal{M}^2_{\phi}(\Phi_{NL}^0\Phi_{NL}^0) =\frac{1}{2}g_L^2\Big(\frac{27}{2}\upsilon_{NL}^2-\frac{9}{2}\bar{\upsilon}_{NL}^2+3(\upsilon_L^2-\bar{\upsilon}_L^2)\Big) +\frac{1}{2}\mu_{NL}^2+\frac{1}{2}m_{\Phi_{NL}}^2, \nonumber\\&&\mathcal{M}^2_{\phi}(\varphi_{NL}^0\varphi_{NL}^0) =\frac{1}{2}g_L^2\Big(\frac{27}{2}\bar{\upsilon}_{NL}^2-\frac{9}{2}\upsilon_{NL}^2+3(\bar{\upsilon}_L^2-\upsilon_L^2)\Big) +\frac{1}{2}\mu_{NL}^2+\frac{1}{2}m_{\varphi_{NL}}^2, \nonumber\\&&\mathcal{M}^2_{\phi}(\Phi_L^0\varphi_L^0)=-4g_L^2\upsilon_L\bar{\upsilon}_L-\frac{B_L\mu_L}{2},~~~~~~~~~~~~ \mathcal{M}^2_{\phi}(\Phi_L^0\Phi_{NL}^0)=6g_L^2\upsilon_L\upsilon_{NL}, \nonumber\\&&\mathcal{M}^2_{\phi}(\Phi_{NL}^0\varphi_{NL}^0)=-9g_L^2\upsilon_{NL}\bar{\upsilon}_{NL}-\frac{B_{NL}\mu_{NL}}{2}, ~~~\mathcal{M}^2_{\phi}(\varphi_L^0\varphi_{NL}^0)=6g_L^2\bar{\upsilon}_L\bar{\upsilon}_{NL}, \nonumber\\&&\mathcal{M}^2_{\phi}(\varphi_L^0\Phi_{NL}^0)=-6g_L^2\bar{\upsilon}_L\upsilon_{NL},~~~~~~~~~~~~~~~~~~~~~ \mathcal{M}^2_{\phi}(\Phi_L^0\varphi_{NL}^0)=-6g_L^2\upsilon_L\bar{\upsilon}_{NL}.\label{Phimass}\end{aligned}$$ We use $Z_{\tilde{\phi}_L}$ to diagonalize the mass squared matrix in Eq.(\[Phimass\]), and the relations between the mass eigenstates and the components are $$\begin{aligned} &&\Phi_L^0=Z_{\tilde{\phi}_L}^{1i}H_{L_i}^0 ,~~~\varphi_L^0=Z_{\tilde{\phi}_L}^{2i}H_{L_i}^0, ~~~ \Phi_{NL}^0=Z_{\tilde{\phi}_{L}}^{3i}H_{L_i}^0 ,~~~\varphi_{NL}^0=Z_{\tilde{\phi}_{L}}^{4i}H_{L_i}^0.\end{aligned}$$ In the EBLMSSM, the conditions for the exotic CP odd Higgs $P_L^0, \bar{P}_L^0$ are the same as those in the BLMSSM, and they do not mix with the added exotic CP odd Higgs $P_{NL}^0, \bar{P}_{NL}^0$. Here, we show the mass squared matrix for the added exotic CP odd Higgs $P_{NL}^0, \bar{P}_{NL}^0$.
$$\begin{aligned} &&\mathcal{M}^2_{p}(P_{NL}^0P_{NL}^0)=\frac{1}{2}g_L^2\Big(\frac{9}{2}\upsilon_{NL}^2 -\frac{9}{2}\bar{\upsilon}_{NL}^2+3(\upsilon_L^2-\bar{\upsilon}_L^2)\Big)+\frac{1}{2}\mu_{NL}^2+\frac{1}{2}m_{\Phi_{NL}}^2, \nonumber\\&&\mathcal{M}^2_{p}(\bar{P}_{NL}^0\bar{P}_{NL}^0)=\frac{1}{2}g_L^2 \Big(\frac{9}{2}\bar{\upsilon}_{NL}^2-\frac{9}{2}\upsilon_{NL}^2+3(\bar{\upsilon}_L^2 -\upsilon_L^2)\Big)+\frac{1}{2}\mu_{NL}^2+\frac{1}{2}m_{\varphi_{NL}}^2, \nonumber\\&&\mathcal{M}^2_{p}(P_{NL}^0\bar{P}_{NL}^0)=\frac{B_{NL}\mu_{NL}}{2}.\end{aligned}$$ The scalar superfields $Y$ and $Y'$ mix, and their mass squared matrix is deduced here. This situation is similar to that of $X$ and $X'$, so the lightest mass eigenstate of the $Y$ and $Y'$ mixing can be a dark matter candidate. With $S_{Y}=g_{L}^2(2+L_{4})V_L^2$, the concrete form of the mass squared matrix is shown below. To obtain the mass eigenstates, the matrix $Z_Y$ is used through the following formula, with the convention $m_{{Y_1}}^2<m_{{Y_2}}^2$. $$\begin{aligned} Z^{\dag}_{Y}\left( \begin{array}{cc} |\mu_{Y}|^2+S_{Y} &-\mu_{Y}B_{Y} \\ -\mu^*_{Y}B^*_{Y} & |\mu_{Y}|^2-S_{Y}\\ \end{array}\right) Z_{Y}=\left( \begin{array}{cc} m_{{Y_1}}^2 &0 \\ 0 & m_{{Y_2}}^2\\ \end{array}\right), ~~~~~\left( \begin{array}{c} Y_{1} \\ Y_{2}\\ \end{array}\right) =Z_{Y}^{\dag}\left( \begin{array}{c} Y \\ Y'^*\\ \end{array}\right).\label{YY'} \end{aligned}$$ The superpartners of $Y$ and $Y'$ form a four-component Dirac spinor, and the mass term for $\tilde{Y}$ reads $$\begin{aligned} &&-\mathcal{L}^{mass}_{\tilde{Y}}=\mu_Y\bar{\tilde{Y}}\tilde{Y} ,~~~~~~~~~~~~~~~~\tilde{Y} =\left( \begin{array}{c} \psi_{Y'} \\ \bar{\psi}_{Y}\\ \end{array}\right).\end{aligned}$$ The spinor $\tilde{Y}$ and the mixing of the superfields $Y,Y'$ are new ingredients beyond the BLMSSM, which enrich both lepton physics and dark matter physics.

Some couplings with $h^0$ in EBLMSSM
------------------------------------

In the EBLMSSM, the exotic sleptons (sneutrinos) of generations 4 and 5 mix. So the couplings with the exotic sleptons (sneutrinos) differ from the corresponding results in the BLMSSM.
We deduce the couplings of $h^0$ and exotic sleptons $$\begin{aligned} &&\sum_{i,j=1}^4\tilde{E}^{i*}\tilde{E}^{j}h^0\Big[\Big(e^2\upsilon\sin\beta\frac{1-4s_W^2}{4s_W^2c_W^2}(Z_{\tilde{E}}^{4i*}Z_{\tilde{E}}^{4j} -Z_{\tilde{E}}^{1i*}Z_{\tilde{E}}^{1j}) -\frac{\mu^*}{\sqrt{2}}Y_{e_4}Z_{\tilde{E}}^{2i*}Z_{\tilde{E}}^{1j}\nonumber\\&& -\upsilon\sin\beta|Y_{e_5}|^2\delta_{ij}-\frac{A_{E_5}}{\sqrt{2}}Z_{\tilde{E}}^{4i*}Z_{\tilde{E}}^{3j} +\frac{1}{2}\lambda_LY_{e_5}Z_{\tilde{E}}^{3j}Z_{\tilde{E}}^{3i*}\bar{\upsilon}_{NL} -\frac{1}{2}Y_{e_5}^*Z_{\tilde{E}}^{4j}\lambda_EZ_{\tilde{E}}^{2i*}\upsilon_{NL}\Big)\cos\alpha\nonumber\\&&- \Big(e^2\upsilon\cos\beta\frac{1-4s_W^2}{4s_W^2c_W^2}(Z_{\tilde{E}}^{1i*}Z_{\tilde{E}}^{1j}-Z_{\tilde{E}}^{4i*}Z_{\tilde{E}}^{4j}) -\upsilon\cos\beta|Y_{e_4}|^2\delta_{ij}-\frac{A_{E_4}}{\sqrt{2}}Z_{\tilde{E}}^{2i*}Z_{\tilde{E}}^{1j}\nonumber\\&& -\frac{\mu^*}{\sqrt{2}}Y_{e_5}Z_{\tilde{E}}^{4i*}Z_{\tilde{E}}^{3j}-\frac{1}{2}Y_{e_4}^*Z_{\tilde{E}}^{2j}\lambda_LZ_{\tilde{E}}^{4i*}\bar{\upsilon}_{NL} +\frac{1}{2}Z_{\tilde{E}}^{1i*}Y_{e_4}^*\lambda_EZ_{\tilde{E}}^{3j}\upsilon_{NL} \Big)\sin\alpha\Big].\label{hEECP}\end{aligned}$$ In Eq.(\[hEECP\]), different from BLMSSM, there are new terms $(\frac{1}{2}\lambda_LY_{e_5}Z_{\tilde{E}}^{3j}Z_{\tilde{E}}^{3i*}\bar{\upsilon}_{NL} -\frac{1}{2}Y_{e_5}^*Z_{\tilde{E}}^{4j}\lambda_EZ_{\tilde{E}}^{2i*}\upsilon_{NL})\cos\alpha- (\frac{1}{2}Z_{\tilde{E}}^{1i*}Y_{e_4}^*\lambda_EZ_{\tilde{E}}^{3j}\upsilon_{NL} -\frac{1}{2}Y_{e_4}^*Z_{\tilde{E}}^{2j}\lambda_LZ_{\tilde{E}}^{4i*}\bar{\upsilon}_{NL})\sin\alpha$ besides the mixing of generations 4 and 5 slepton. Obviously, these new terms include $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$, which are the VEVs of added Higgs superfields $\Phi_{NL}$ and $\varphi_{NL}$. In the same way, the couplings of $h^0$ and exotic sneutrinos are also calculated $$\begin{aligned} &&\sum_{i,j=1}^4\tilde{N}^{i*}\tilde{N}^{j}h^0\Big[\Big(\frac{e^2}{4s_W^2c_W^2}\upsilon\sin\beta (Z_{\tilde{N}}^{1i*}Z_{\tilde{N}}^{1j}-Z_{\tilde{N}}^{4i*}Z_{\tilde{N}}^{4j}) -\frac{1}{2}Z_{\tilde{N}}^{1i*}Y_{\nu_4}^*\lambda_{NL}Z_{\tilde{N}}^{3i}\upsilon_{NL} \nonumber\\&&-\upsilon\sin\beta|Y_{\nu_4}|^2\delta_{ij}-\frac{A_{N_4}}{\sqrt{2}}Z_{\tilde{N}}^{2i*}Z_{\tilde{N}}^{1j}-\frac{\mu^*}{\sqrt{2}}Y_{\nu_5} Z_{\tilde{N}}^{4i*}Z_{\tilde{N}}^{3j}-\frac{1}{2}Y_{\nu_4}^*Z_{\tilde{N}}^{2j}\lambda_LZ_{\tilde{N}}^{4i*}\bar{\upsilon}_{NL} \Big)\cos\alpha\nonumber\\&&-\Big(\frac{e^2}{4s_W^2c_W^2}\upsilon\cos\beta [Z_{\tilde{N}}^{4i*}Z_{\tilde{N}}^{4j} -Z_{\tilde{N}}^{1i*}Z_{\tilde{N}}^{1j}]-\frac{\mu^*}{\sqrt{2}}Y_{\nu_4} Z_{\tilde{N}}^{2i*}Z_{\tilde{N}}^{1j}-\upsilon\cos\beta|Y_{\nu_5}|^2\delta_{ij}\nonumber\\&& -\frac{A_{N_5}}{\sqrt{2}}Z_{\tilde{N}}^{4i*}Z_{\tilde{N}}^{3j}+\frac{1}{2}Y_{\nu_5}Z_{\tilde{N}}^{3j} \lambda_LZ_{\tilde{N}}^{1i*}\bar{\upsilon}_{NL} +\frac{1}{2}Y_{\nu_5}Z_{\tilde{N}}^{4i*}\lambda_{N^c}Z_{\tilde{N}}^{2j}\upsilon_{NL} \Big)\sin\alpha\Big].\end{aligned}$$ In this coupling, the new terms beyond BLMSSM are $-(\frac{1}{2}Z_{\tilde{N}}^{1i*}Y_{\nu_4}^*\lambda_{NL}Z_{\tilde{N}}^{3i}\upsilon_{NL} +\frac{1}{2}Y_{\nu_4}^*Z_{\tilde{N}}^{2j}\lambda_LZ_{\tilde{N}}^{4i*}\bar{\upsilon}_{NL})\cos\alpha -(\frac{1}{2}Y_{\nu_5}Z_{\tilde{N}}^{3j} \lambda_LZ_{\tilde{N}}^{1i*}\bar{\upsilon}_{NL} +\frac{1}{2}Y_{\nu_5}Z_{\tilde{N}}^{4i*}\lambda_{N^c}Z_{\tilde{N}}^{2j}\upsilon_{NL})\sin\alpha.$ The $h^0-\tilde{L}-\tilde{L}$ coupling has the same form as that in BLMSSM. 
The $h^0-\tilde{\nu}-\tilde{\nu}$ coupling, on the other hand, receives correction terms, but these terms are suppressed by the tiny neutrino Yukawa coupling $Y_\nu$. $$\begin{aligned} &&\sum_{i,j=1}^6\tilde{\nu}^{i*}\tilde{\nu}^jh^0\Big[\sin\alpha\frac{\mu^*}{\sqrt{2}}Y_{\nu}^*Z_{\tilde{\nu}}^{Ii*}Z_{\tilde{\nu}}^{(I+3)j} -\frac{e^2}{4s_W^2c_W^2}B_R^2Z_{\tilde{\nu}}^{Ii*}Z_{\tilde{\nu}}^{Ij}\nonumber\\&&+\cos\alpha\Big( (\lambda_{N^c}\bar{\upsilon}_L -\frac{A_N}{\sqrt{2}})Y_{\nu}^*Z_{\tilde{\nu}}^{Ii*}Z_{\tilde{\nu}}^{(I+3)j}-\upsilon\sin\beta|Y_{\nu}|^2\delta_{ij} \Big)\Big].\end{aligned}$$ Here, $s_W(c_W)$ denotes $\sin\theta_W(\cos\theta_W)$, with $\theta_W$ representing the weak-mixing angle. The concrete form of $B_R^2$ is given in Ref.[@MSSM].

The couplings with $Y$
----------------------

For the dark matter candidate $Y_1$, the necessary tree level couplings are deduced in the EBLMSSM. We show the couplings (lepton-exotic lepton-$Y$) and (neutrino-exotic neutrino-$Y$) $$\begin{aligned} &&\mathcal{L}=\sum_{i,j=1}^2\bar{e}^I\Big(\lambda_4W_L^{1i}Z_Y^{1j*}P_R -\lambda_6U_L^{2i}Z_Y^{2j*}P_L\Big)L'_{i+3}Y_j^*\nonumber\\&& -\sum_{\alpha=1}^6\sum_{i,j=1}^2\bar{X}_{N_\alpha}^0\Big(\lambda_4Z_{N_{\nu}}^{I\alpha*}W_N^{1i}Z_Y^{1j*} P_R+\lambda_5Z_{N_{\nu}}^{(I+3)\alpha}U_N^{2i}Z_Y^{2j*}P_L\Big) N'_{i+3}Y_j^*+h.c.\label{TCLY}\end{aligned}$$ The new gauge boson $Z_L$ couples with the leptons, the neutrinos and $Y$; the concrete forms of these couplings are $$\begin{aligned} &&\mathcal{L}=-\sum_{I=1}^3g_LZ^\mu_L\bar{e}^I\gamma_\mu e^I-\sum_{i,j=1}^2g_L(2+L_4)Z^\mu_LY_i^*i\partial_\mu Y_j \nonumber\\&&-\sum_{I=1}^3\sum_{\alpha,\beta=1}^6g_LZ^\mu_L\bar{\chi}^0_{N_\alpha}(Z_{N_\nu}^{I\alpha*}Z_{N_\nu}^{I\beta}\gamma_\mu P_L +Z_{N_\nu}^{(I+3)\alpha*}Z_{N_\nu}^{(I+3)\beta}\gamma_\mu P_R)\chi^0_{N_\beta}+h.c.\label{TCZL}\end{aligned}$$ $\varphi_L$ gives masses to the light neutrinos through the see-saw mechanism, and $\Phi_L,\varphi_L, \Phi_{NL},\varphi_{NL}$ mix together, producing the lepton Higgs bosons $H^0_{L}$. Then the couplings of $H^0_L YY^*$ and $\bar{\chi}^0_{N}\chi^0_{N}H^0_{L}$ are needed $$\begin{aligned} &&\mathcal{L}=\sum_{i,j=1}^2\sum_{k=1}^4g_L^2(2+L_4)(Z_Y^{1i*}Z_Y^{1j}-Z_Y^{2i*}Z_Y^{2j})\nonumber\\&&\times \Big(v_LZ^{1k}_{\tilde{\phi}_L}-\bar{v}_LZ^{2k}_{\tilde{\phi}_L} +\frac{3}{2}v_{NL}Z^{3k}_{\tilde{\phi}_{NL}}-\frac{3}{2}\bar{v}_{NL}Z^{4k}_{\tilde{\phi}_{NL}}\Big)H^0_{L_k}Y_i^*Y_j. \nonumber\\&&-\sum_{k=1}^4\sum_{\alpha,\beta=1}^6\lambda_{N^c}Z_{N_\nu}^{(I+3)\alpha} Z_{N_\nu}^{(I+3)\beta}Z_{\phi_L}^{2k}\bar{\chi}^0_{N_\alpha} P_L\chi^0_{N_\beta}H^0_{L_k}+h.c.\label{TCHL}\end{aligned}$$

The mass of $h^0$
=================

Similar to the BLMSSM, in the EBLMSSM the mass squared matrix for the neutral CP even Higgs bosons is studied, and in the basis $(H_d^0,\;H_u^0)$ it is written as $$\begin{aligned} &&{\cal M}^2_{even}=\left(\begin{array}{ll}M_{11}^2+\Delta_{11}&M_{12}^2+\Delta_{12}\\ M_{12}^2+\Delta_{12}&M_{22}^2+\Delta_{22}\end{array}\right)\;, \label{MCPEHiggs}\end{aligned}$$ where $M_{11}^2,M_{12}^2,M_{22}^2$ are the tree level results, whose concrete forms can be found in Ref.[@TFBL] $$\begin{aligned} &&\Delta_{11}=\Delta_{11}^{MSSM}+\Delta_{11}^{B}+\Delta_{11}^{L}\;, \nonumber\\ &&\Delta_{12}=\Delta_{12}^{MSSM}+\Delta_{12}^{B}+\Delta_{12}^{L}\;, \nonumber\\ &&\Delta_{22}=\Delta_{22}^{MSSM}+\Delta_{22}^{B}+\Delta_{22}^{L}\;. \label{M-CPE2}\end{aligned}$$ The MSSM contributions are represented by $\Delta_{11}^{MSSM}$, $\Delta_{12}^{MSSM}$ and $\Delta_{22}^{MSSM}$.
The exotic quark(squark) contributions denoted by $\Delta_{11}^{B},\Delta_{12}^{B}$ and $\Delta_{22}^{B}$ are the same as those in BLMSSM[@TFBL]. However, the corrections $\Delta_{11}^{L},\Delta_{12}^{L}$ and $\Delta_{22}^{L}$ from exotic lepton(slepton) are different from those in BLMSSM, because the mass squared matrices of exotic slepton and exotic sneutrino are both $4\times4$ and they relate with $\upsilon_{NL}$ and $\bar{\upsilon}_{NL}$. Furthermore, the exotic leptons and exotic neutrinos are heavier than those in BLMSSM, due to the introduction of $\Phi_{NL}$ and $\varphi_{NL}$. $$\begin{aligned} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% &&\Delta_{11}^{L}={G_{F}Y_{\nu_4}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta}\cdot {\mu^2(A_{{\nu_4}}-\mu\cot\beta)^2\over(m_{{\tilde{N}^1}}^2-m_{{\tilde{N}^2}}^2)^2} g(m_{{\tilde{N}^1}},m_{{\tilde{N}^2}}) +{G_{F}Y_{{\nu_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\Big\{\ln{m_{{\tilde{N}^3}}m_{{\tilde{N}^4}} \over m_{{\nu_5}}^2}\nonumber\\&&\hspace{1.2cm}+{A_{{\nu_5}}(A_{{\nu_5}}-\mu\tan\beta)\over m_{{\tilde{N}^3}}^2-m_{{\tilde{N}^4}}^2} \ln{m_{{\tilde{N}^3}}^2\over m_{{\tilde{N}^4}}^2} +{A_{{\nu_5}}^2(A_{{\nu_5}}-\mu\tan\beta)^2\over(m_{{\tilde{N}^3}}^2-m_{{\tilde{N}^4}}^2)^2} g(m_{{\tilde{N}^3}}, m_{{\tilde{N}^4}})\Big\} \nonumber\\&&\hspace{1.2cm} +{G_{F}Y_{{e_4}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\Big\{{A_{{e_4}}(A_{{e_4}}-\mu\tan\beta)\over m_{{\tilde{E}^1}}^2-m_{{\tilde{E}^2}}^2} \ln{m_{{\tilde{E}^1}}^2\over m_{{\tilde{E}^2}}^2} +{A_{{e_4}}^2(A_{{e_4}}-\mu\tan\beta)^2\over(m_{{\tilde{E}^1}}^2-m_{{\tilde{E}^2}}^2)^2} g(m_{{\tilde{E}^1}}, m_{{\tilde{E}^2}}) \nonumber\\&&\hspace{1.2cm}+\ln{m_{{\tilde{E}^1}}m_{{\tilde{E}^2}} \over m_{{e_4}}^2}\Big\} +{G_{F}Y_{{e_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta}\cdot {\mu^2(A_{{e_5}}-\mu\cot\beta)^2\over(m_{{\tilde{E}^3}}^2-m_{{\tilde{E}^4}}^2)^2} g(m_{{\tilde{E}^3}},m_{{\tilde{E}^4}}) \;,\nonumber\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% &&\Delta_{12}^{L}={G_{F}Y_{{\nu_4}}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta}\cdot {\mu(\mu\cot\beta-A_{{\nu_4}})\over m_{{\tilde{N}^1}}^2-m_{{\tilde{N}^2}}^2} \Big\{\ln{m_{{\tilde{N}^1}}\over m_{{\tilde{N}^2}}}+{A_{{\nu_4}}(A_{{\nu_4}}-\mu\cot\beta) \over m_{{\tilde{N}^1}}^2-m_{{\tilde{N}^2}}^2}g(m_{{\tilde{N}^1}},m_{{\tilde{N}^2}})\Big\} \nonumber\\ &&\hspace{1.2cm} +{G_{F}Y_{{e_4}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\cdot {\mu(\mu\tan\beta-A_{{e_4}})\over m_{{\tilde{E}^1}}^2-m_{{\tilde{E}^2}}^2} \Big\{\ln{m_{{\tilde{E}^1}}\over m_{{\tilde{E}^2}}}+{A_{{e_4}}(A_{{e_4}}-\mu\tan\beta) \over m_{{\tilde{E}^1}}^2-m_{{\tilde{E}^2}}^2}g(m_{{\tilde{E}^1}},m_{{\tilde{E}^2}})\Big\} \nonumber\\ &&\hspace{1.2cm} +{G_{F}Y_{{\nu_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\cdot {\mu(\mu\tan\beta-A_{{\nu_5}})\over m_{{\tilde{N}^3}}^2-m_{{\tilde{N}^3}}^2} \Big\{\ln{m_{{\tilde{N}^3}}\over m_{{\tilde{N}^4}}}+{A_{{\nu_5}}(A_{{\nu_5}}-\mu\tan\beta) \over m_{{\tilde{N}^3}}^2-m_{{\tilde{N}^4}}^2}g(m_{{\tilde{N}^3}},m_{{\tilde{N}^4}})\Big\} \nonumber\\ &&\hspace{1.2cm} +{G_{F}Y_{{e_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta}\cdot {\mu(\mu\cot\beta-A_{{e_5}})\over m_{{\tilde{E}^3}}^2-m_{{\tilde{E}^4}}^2} \Big\{\ln{m_{{\tilde{E}^3}}\over m_{{\tilde{E}^4}}}+{A_{{e_5}}(A_{{e_5}}-\mu\cot\beta) \over m_{{\tilde{E}^3}}^2-m_{{\tilde{E}^4}}^2}g(m_{{\tilde{E}^3}},m_{{\tilde{E}^4}})\Big\} \;,\nonumber\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% &&\Delta_{22}^{L}={G_{F}Y_{{\nu_4}}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta} 
\Big\{{A_{{\nu_4}}(A_{{\nu_4}}-\mu\cot\beta)\over m_{{\tilde{N}^1}}^2-m_{{\tilde{N}^2}}^2} \ln{m_{{\tilde{N}^1}}^2\over m_{{\tilde{N}^2}}^2} +{A_{{\nu_4}}^2(A_{{\nu_4}}-\mu\cot\beta)^2\over(m_{{\tilde{N}^1}}^2-m_{{\tilde{N}^2}}^2)^2} g(m_{{\tilde{N}^1}}, m_{{\tilde{N}^2}}) \nonumber\\ &&\hspace{1.2cm}+\ln{m_{{\tilde{N}^1}}m_{{\tilde{N}^2}} \over m_{{\nu_4}}^2}\Big\} +{G_{F}Y_{{e_4}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\cdot {\mu^2(A_{{e_4}}-\mu\tan\beta)^2\over(m_{{\tilde{E}^1}}^2-m_{{\tilde{E}^2}}^2)^2} g(m_{{\tilde{E}^1}},m_{{\tilde{E}^2}}) \nonumber\\&&\hspace{1.2cm} +{G_{F}Y_{{e_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\sin^2\beta}\Big\{{A_{{e_5}}(A_{{e_5}} -\mu\cot\beta)\over m_{{\tilde{E}^3}}^2-m_{{\tilde{E}^4}}^2} \ln{m_{{\tilde{E}^3}}^2\over m_{{\tilde{E}^4}}^2} +{A_{{e_5}}^2(A_{{e_5}}-\mu\cot\beta)^2\over(m_{{\tilde{E}^3}}^2-m_{{\tilde{E}^4}}^2)^2} g(m_{{\tilde{E}^3}}, m_{{\tilde{E}^4}})\nonumber\\ &&\hspace{1.2cm}+\ln{m_{{\tilde{E}^3}}m_{{\tilde{E}^4}} \over m_{{e_5}}^2}\Big\} +{G_{F}Y_{{\nu_5}}^4\upsilon^4\over4\sqrt{2}\pi^2\cos^2\beta}\cdot {\mu^2(A_{{\nu_5}}-\mu\tan\beta)^2\over(m_{{\tilde{N}^3}}^2-m_{{\tilde{N}^4}}^2)^2} g(m_{{\tilde{N}^3}},m_{{\tilde{N}^4}})\;. \label{app4-1}\end{aligned}$$

the processes $h^0\rightarrow \gamma\gamma, ~h^0\rightarrow VV,~ V=(Z,W)$ and dark matter $Y_1$
===============================================================================================

$h^0$ decays
------------

At the LHC, $h^0$ is produced chiefly from gluon fusion $(gg\rightarrow h^0)$. The one loop diagrams are the leading order (LO) contributions. The virtual top quark loop gives the dominant contribution because of the large Yukawa coupling. Therefore, when the couplings of the new particles to the Higgs are large, they can influence the results noticeably. For $h^0\rightarrow gg$, the EBLMSSM results are the same as those in the BLMSSM and are given by[@htogg; @htoxx] $$\begin{aligned} &&\Gamma_{{NP}}(h^0\rightarrow gg)={G_{F}\alpha_s^2m_{{h^0}}^3\over64\sqrt{2}\pi^3} \Big|\sum\limits_{q,q'}g_{{h^0qq}}A_{1/2}(x_q) +\sum\limits_{\tilde q, \tilde q'}g_{{h^0\tilde{q}\tilde{q}}}{m_{{\rm Z}}^2\over m_{{\tilde q}}^2}A_{0}(x_{{\tilde{q}}})\Big|^2\;, \label{hgg}\end{aligned}$$ with $x_a=m_{{h^0}}^2/(4m_a^2)$. Here, $q$ and $q'$ denote the quarks and exotic quarks, while $\tilde{q}$ and $\tilde{q}'$ denote the squarks and exotic squarks. The concrete expressions for $g_{{h^0qq}},\;g_{{h^0q'q'}},\;g_{{h^0\tilde{q}\tilde{q}}} ,\;g_{{h^0\tilde{q}'\tilde{q}'}}\;(i=1,\;2)$ are given in Ref.[@TFBL]. The functions $A_{1/2}(x)$ and $A_0(x)$ are[@htoxx] $$\begin{aligned} &&A_{1/2}(x)=2\Big[x+(x-1)g(x)\Big]/x^2,~~~~~~A_0(x)=-(x-g(x))/x^2\;,\nonumber\\ &&g(x)=\left\{\begin{array}{l}\arcsin^2\sqrt{x},\;x\le1\\ -{1\over4}\Big[\ln{1+\sqrt{1-1/x}\over1-\sqrt{1-1/x}}-i\pi\Big]^2,\;x>1\;.\end{array}\right. \label{g-function}\end{aligned}$$ The decay $h^0\rightarrow \gamma\gamma$ also arises from loop diagrams, with the leading order contributions coming from the one loop diagrams. In the EBLMSSM, the exotic quarks (squarks) and exotic leptons (sleptons) give new corrections to the decay width of $h^0\rightarrow \gamma\gamma$. Different from the BLMSSM, the exotic leptons in the EBLMSSM are heavier, and the exotic sleptons of the fourth and fifth generations mix together. These features should influence the numerical results of the EBLMSSM prediction for $h^0\rightarrow \gamma\gamma$ to some extent.
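Before quoting the diphoton width, it may be useful to evaluate the loop functions of Eq.(\[g-function\]) numerically. The sketch below is only illustrative; the heavy-mass limits noted in the comments ($A_{1/2}\rightarrow 4/3$ and $A_{0}\rightarrow 1/3$ as $x\rightarrow 0$) are standard properties of these functions and are not taken from the text above.

```python
import numpy as np

def g(x):
    """Eq.(g-function) with x = m_{h^0}^2 / (4 m^2)."""
    if x <= 1.0:
        return np.arcsin(np.sqrt(x))**2
    z = np.sqrt(1.0 - 1.0/x)
    return -0.25 * (np.log((1.0 + z)/(1.0 - z)) - 1j*np.pi)**2

def A_half(x):      # spin-1/2 loop function A_{1/2}(x)
    return 2.0 * (x + (x - 1.0)*g(x)) / x**2

def A_zero(x):      # spin-0 loop function A_0(x)
    return -(x - g(x)) / x**2

# For a particle much heavier than h^0, x -> 0 and the loop functions approach
# their decoupling limits A_{1/2} -> 4/3 and A_0 -> 1/3, which is why TeV-scale
# exotic fermions and scalars can still leave an imprint in h^0 -> gg, gamma gamma.
x = 1.0e-4
print(A_half(x))    # ~ 1.3333
print(A_zero(x))    # ~ 0.3333
```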
The decay width of $h^0\rightarrow \gamma\gamma$ can be expressed as[@htopp] $$\begin{aligned} &&\Gamma_{{NP}}(h^0\rightarrow\gamma\gamma)={G_{F}\alpha^2m_{{h^0}}^3\over128\sqrt{2}\pi^3} \Big|\sum\limits_fN_cQ_{f}^2g_{{h^0ff}}A_{1/2}(x_f)+g_{{h^0H^+H^-}}{m_{{\rm W}}^2\over m_{{H^\pm}}^2}A_0(x_{{H^\pm}}) \nonumber\\&&+g_{{h^0WW}}A_1(x_{{\rm W}}) +\sum\limits_{i=1}^2g_{{h^0\chi_i^+\chi_i^-}}{m_{{\rm W}}\over m_{{\chi_i}}}A_{1/2}(x_{{\chi_i}}) +\sum\limits_{\tilde f}N_cQ_{f}^2g_{{h^0\tilde{f}\tilde{f}}}{m_{ Z}^2\over m_{{\tilde f}}^2} A_{0}(x_{{\tilde{f}}})\Big|^2\;, \label{hpp}\end{aligned}$$ where $g_{{h^0WW}}=\sin(\beta-\alpha)$ and $A_1(x)=-\Big[2x^2+3x+3(2x-1)g(x)\Big]/x^2$. The formulae for $h^0\rightarrow ZZ, WW$ are $$\begin{aligned} &&\Gamma(h^0\rightarrow WW)={3e^4m_{{h^0}}\over512\pi^3s_{ W}^4}|g_{h^0WW}|^2 F({m_{_{\rm W}}\over m_{h^0}}),\;\nonumber\\ &&\Gamma(h^0\rightarrow ZZ)={e^4m_{{h^0}}\over2048\pi^3s_{W}^4c_{W}^4}|g_{h^0ZZ}|^2 \Big(7-{40\over3}s_{W}^2+{160\over9}s_{W}^4\Big)F({m_{Z}\over m_{_{h^0}}}),\end{aligned}$$ with $g_{{h^0ZZ}}=g_{{h^0WW}}$ and $F(x)$ is given out in Ref[@htoww; @htozz]. The observed signals for the diphoton and $ZZ,\;WW$ channels are quantified by the ratios $R_{\gamma\gamma}$ and $R_{VV}, ~V=(Z,W)$, whose current values are $R_{\gamma\gamma}=1.16\pm0.18$ and $R_{VV}=1.19^{+0.22}_{-0.20}$ [@pdg2016]. Dark matter $Y$ --------------- In BLMSSM, there are some dark matter candidates such as: the lightest mass eigenstate of $X X^\prime$ mixing, $\tilde{X}$ the four-component spinor composed by the super partners of $X$ and $X^\prime$. They are studied in Ref.[@darkM]. In EBLMSSM, the dark matter candidates are more than those in BLMSSM, because the lightest mass eigenstate of $Y Y^\prime$ mixing and $\tilde{Y}$ are dark matter candidates. After $U(1)_L$ is broken by $\Phi_L$ and $\Phi_{NL}$, Z2 symmetry is left, which guarantees the stability of the dark matters. There are only two elements (1,-1) in Z2 group. This symmetry eliminates the coupling for the mass eigenstates of $Y Y^\prime$ mixing with two SM particles. The condition for $X$ is similar as that of $Y$, and it is also guaranteed by the Z2 symmetry. In this subsection, we suppose the lightest mass eigenstate of $Y Y^\prime$ mixing in Eq.(\[YY’\]) as a dark matter candidate, and calculate the relic density. So we summarize the relic density constraints that any WIMP candidate has to satisfy. The interactions of the WIMP with SM particles are deduced from the EBLMSSM, then we study its annihilation rate and its relic density $\Omega_D$ by the thermal dynamics of the Universe. The annihilation cross section $\sigma(Y_1 Y_1^* \rightarrow anything)$ should be calculated and can be written as $\sigma v_{rel}=a+bv_{rel}^2$ in the $Y_1Y_1^*$ center of mass frame. $v_{rel}$ is the twice velocity of $Y_1$ in the $Y_1Y_1^*$ c.m. system frame. To a good approximation, the freeze-out temperature($T_F$) can be iteratively computed from[@dark1] $$\begin{aligned} &&x_F=\frac{m_D}{T_F}\simeq\ln[\frac{0.038M_{Pl}m_D(a+6b/x_F)}{\sqrt{g_*x_F}}],\end{aligned}$$ with $x_F\equiv m_D/T_F$ and $m_D=m_{Y_1}$ representing the WIMP mass. $M_{Pl}=1.22\times10^{19}$ GeV is the Planck mass and $g_*$ is the number of the relativistic degrees of freedom with mass less than $T_F$. 
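A minimal sketch of this iteration, combined with the approximate relic-abundance expression quoted in the next equation, is given below. The values of $a$, $b$, $g_*$ and $m_D$ are placeholders and are not derived from the EBLMSSM couplings.

```python
import numpy as np

M_Pl   = 1.22e19           # Planck mass in GeV (as quoted in the text)
g_star = 90.0              # relativistic degrees of freedom at freeze-out (placeholder)
m_D    = 1500.0            # WIMP mass m_{Y_1} in GeV (placeholder)
a, b   = 1.0e-9, 5.0e-10   # placeholder coefficients of sigma*v_rel = a + b*v_rel^2, GeV^-2

# Iterate x_F = ln[ 0.038 M_Pl m_D (a + 6 b/x_F) / sqrt(g_* x_F) ]
x_F = 20.0
for _ in range(50):
    x_F = np.log(0.038 * M_Pl * m_D * (a + 6.0*b/x_F) / np.sqrt(g_star * x_F))

# Relic abundance, using the approximate expression quoted in the next equation
omega_h2 = 1.07e9 * x_F / (np.sqrt(g_star) * M_Pl * (a + 3.0*b/x_F))
print(x_F, omega_h2)
```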
The density of cold non-baryonic matter is $\Omega_D h^2=0.1186\pm 0.0020$[@pdg2016], whose formula is simplified as $$\begin{aligned} \Omega_D h^2\simeq \frac{1.07\times10^9 x_F}{\sqrt{g_*}M_{PL}(a+3b/x_F)\texttt{GeV} }\;.\end{aligned}$$ To obtain $a$ and $b$ in the $\sigma v_{rel}$, we study the $Y_1Y_1^*$ dominate decay channels whose final states are leptons and light neutrinos: 1. $Y_1Y_1^*\rightarrow Z_L \rightarrow\bar{l}^Il^I$; 2. $Y_1Y_1^*\rightarrow Z_L \rightarrow\bar{\nu}^I\nu^I$; 3. $Y_1Y_1^*\rightarrow \varphi_L \rightarrow\bar{\nu}^I\nu^I$; 4. $Y_1Y_1^*\rightarrow L' \rightarrow\bar{l}^Il^I$; 5. $Y_1Y_1^*\rightarrow N' \rightarrow\bar{\nu}^I\nu^I$. Using the couplings in Eqs.(\[TCLY\])(\[TCZL\])(\[TCHL\]), we deduce the results of $a$ and $b$ $$\begin{aligned} &&a=\sum_{l=e,\mu,\tau}\frac{1}{\pi} |\sum_{i=1}^2\frac{m_{L'_i}}{(m_D^2+m_{L'_i}^2)}\lambda_4W_L^{1i}Z_Y^{11*}\lambda_6U_L^{2i}Z_Y^{21*}|^2 \nonumber\\&&\hspace{0.5cm}+\sum_{\chi^0_{N_\alpha=\nu_e,\nu_\mu,\nu_\tau}}\Big\{\frac{g_L^4(2+L_4)^2 }{8\pi}|(Z_Y^{11*}Z_Y^{11}-Z_Y^{21*}Z_Y^{21})\sum_{I=1}^3\sum_{i=1}^4\frac{1} {(4m_D^2-m_{\Phi_i}^2)}\nonumber\\&&\hspace{0.5cm}\times(\lambda_{N^c}Z_{N_\nu}^{(I+3)\alpha} Z_{N_\nu}^{(I+3)\alpha}Z_{\phi_L}^{2i}) (v_LZ^{1i}_{\tilde{\phi}_L}-\bar{v}_LZ^{2i}_{\tilde{\phi}_L} +\frac{3}{2}v_{NL}Z^{3i}_{\tilde{\phi}_{L}}-\frac{3}{2}\bar{v}_{NL}Z^{4i}_{\tilde{\phi}_{L}})|^2 \nonumber\\&&\hspace{0.5cm} +\frac{1}{\pi} |\sum_{i=1}^2\sum_{I=1}^3\frac{m_{N'_i}}{(m_D^2+m_{N'_i}^2)} \lambda_4Z_{N_{\nu}}^{I\alpha*}W_N^{1i}Z_Y^{11*}\lambda_5Z_{N_{\nu}}^{(I+3)\alpha}U_N^{2i}Z_Y^{21*}|^2\Big\}, \nonumber\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% &&b=\sum_{l=e,\mu,\tau}\frac{7m_D^2 }{24\pi}\frac{g_L^4(2+L_4)^2}{(4m_D^2-m_{Z_L}^2)} +\sum_{\chi^0_{N_\alpha=\nu_e,\nu_\mu,\nu_\tau}} \frac{1}{96\pi} \frac{g_L^4(2+L_4)^2m_D^2}{(4m_D^2-m_{Z_L}^2)^2}\nonumber\\&&\hspace{0.5cm}\times\Big( 7+|\sum_{I=1}^3\Big(Z_{N_\nu}^{I\alpha*}Z_{N_\nu}^{I\alpha} -Z_{N_\nu}^{(I+3)\alpha*}Z_{N_\nu}^{(I+3)\alpha}\Big)|^2 \Big).\end{aligned}$$ numerical results ================= $h^0$ decays and $m_{A^0}, m_{H^0}$ ----------------------------------- In this section, we research the numerical results. For the parameter space, the most strict constraint is that the mass of the lightest eigenvector for the mass squared matrix in Eq.(\[MCPEHiggs\]) is around $125.1$ GeV. To satisfy this constraint, we use $m_{h^0}=125.1$ GeV as an input parameter. Therefore, the CP odd Higgs mass should meet the following relation. $$\begin{aligned} &&m_{A^0}^2={m_{h^0}^2(m_{Z}^2-m_{h^0}^2+\Delta_{11}+\Delta_{22})-m_{Z}^2 \Delta_{A}+\Delta_{12}^2-\Delta_{11}\Delta_{22}\over -m_{h^0}^2+m_{Z}^2\cos^22\beta +\Delta_{B}}\;, \label{Higgs-mass1}\end{aligned}$$ where $$\begin{aligned} &&\Delta_{A}=\sin^2\beta\Delta_{11}+\cos^2\beta\Delta_{22}+\sin2\beta \Delta_{12} \;,\nonumber\\ &&\Delta_{B}=\cos^2\beta\Delta_{11}+\sin^2\beta\Delta_{22}+\sin2\beta \Delta_{12}\;. 
\label{Higgs-mass2}\end{aligned}$$ To obtain the numerical results, we adopt the following parameters as $$\begin{aligned} &&Y_{u_4} = 1.2Y_t,~~~Y_{u_5} = 0.6Y_t,~~~ Y_{d_4}=Y_{d_5} = 2Y_b,~~~ g_B = 1/3,~~~\lambda_u=\lambda_d = 0.5,\nonumber\\&& A_{u_4} = A_{d_4} = A_{d_5} = A_{e_4} = A_{e_5} =A_{\nu_4} = A_{\nu_5} =1{\rm TeV},~ \lambda_Q = 0.4,~g_L = 1/6, \nonumber\\&& m_{\tilde{Q}_4} = m_{\tilde{Q}_5} =m_{\tilde{U}_4} = m_{\tilde{U}_5} = m_{\tilde{D}_4} = m_{\tilde{D}_5} = m_{\tilde{\nu}_4} = m_{\tilde{\nu}_5} =1{\rm TeV},~Y_{e_5} = 0.6,\nonumber\\&& \upsilon_{NL} = \upsilon_L =A_b = 3{\rm TeV},~~~~~ \tan\beta_{NL} =\tan\beta_L = 2,~~~~~ \lambda_L = \lambda_{NL} =\lambda_E = 1, \nonumber\\&& m_{\tilde{L}} = m_{\tilde{e}} = 1.4 \delta_{ij}{\rm TeV},~ A_{\tilde{L}} = A_{\tilde{L}'} = 0.5 \delta_{ij}{\rm TeV}~(i,j=1,2,3),~\mu_B = 0.5 {\rm TeV} , \nonumber\\&& A_{BQ}=A_{BU} = A_{BD} = \mu_{NL} =A_{LL} = A_{LE} = A_{LN} = 1{\rm TeV},~Y_{\nu_4} = Y_{\nu_5} = 0.1, \nonumber\\&& m_{\tilde{L}_4} = m_{\tilde{L}_5}= m_{\tilde{E}_4} = m_{\tilde{E}_5}=m_2=1.5{\rm TeV},~m_{\tilde{D}_3}= 1.2{\rm TeV},~B_4 = L_4 = 1.5. \label{canshu}\end{aligned}$$ Here $Y_t$ and $Y_b$ are the Yukawa coupling constants of top quark and bottom quark, whose concrete forms are $Y_t = \sqrt{2} m_t/(\upsilon \sin\beta)$ and $Y_b = \sqrt{2} m_b/(\upsilon \cos\beta)$ respectively. To embody the exotic squark corrections, we calculate the results versus $A_{u_5}$ which has relation with the mass squared matrix of exotic squark. In the left diagram of FIG.\[Au5tu\], $R_{\gamma\gamma}$ and $R_{VV}$ versus $A_{u_5}$ are plotted by the solid line and dashed line respectively with $m_{\tilde{Q}_3}=m_{\tilde{U}_3}=1.2{\rm TeV},~\tan\beta =1.4, ~A_t=1.7{\rm TeV},\upsilon_B =3.6{\rm TeV},~\mu =-2.4{\rm TeV},~\tan\beta_B =1.5$ and $Y_{e_4}=0.5$. In the left diagram of FIG.\[Au5tu\], the solid line($R_{\gamma\gamma}$) and dashed line ($R_{VV}$) change weakly with the $A_{u_5}$. When $A_{u_5}$ enlarges, $R_{\gamma\gamma}$ is the increasing function and $R_{VV}$ is the decreasing function. During the $A_{u_5}$ region $(-1700\sim1000)$ GeV, both $R_{\gamma\gamma}$ and $R_{VV}$ satisfy the experiment limits. The dot-dashed line(dotted line) in the right diagram denotes the Higss mass $m_A^0(m_H^0)$ varying with $A_{u_5}$. The dot-dashed line and dotted line increase mildly with $A_{u_5}$. The value of $m_A^0$ is a little bigger than 500 GeV, while the value of $m_H^0$ is very near 500 GeV. ![The results versus $A_{u_5}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="Au5tu"}](Au51.eps "fig:"){width="2.9in"}   ![The results versus $A_{u_5}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="Au5tu"}](Au52.eps "fig:"){width="2.9in"} For the squark, we assume the first and second generations are heavy, so they are neglected. The scalar top quarks are not heavy, and their contributions are considerable. $A_t$ is in the mass squared matrix of scalar top quark influencing the mass and mixing. The effects from $A_t$ to the ratios $R_{\gamma\gamma}$, $R_{VV}$, Higgs masses $m_{A^0}$ and $m_{H^0}$ are of interest. 
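Before turning to the $A_t$ dependence, we note that all of these scans rely on Eq.(\[Higgs-mass1\]). A minimal numerical sketch of that relation is given below; the radiative corrections $\Delta_{11},\Delta_{12},\Delta_{22}$ are placeholder numbers rather than the full EBLMSSM results of Eq.(\[M-CPE2\]).

```python
import numpy as np

m_h0, m_Z = 125.1, 91.19     # GeV
tan_beta  = 1.4
beta      = np.arctan(tan_beta)

# Placeholder radiative corrections Delta_{11,12,22} in GeV^2; in the paper each
# is the sum of the MSSM, exotic-quark and exotic-lepton pieces of Eq.(M-CPE2).
D11, D12, D22 = 2.0e3, 5.0e2, 2.2e4

D_A = np.sin(beta)**2*D11 + np.cos(beta)**2*D22 + np.sin(2*beta)*D12
D_B = np.cos(beta)**2*D11 + np.sin(beta)**2*D22 + np.sin(2*beta)*D12

m_A0_sq = (m_h0**2*(m_Z**2 - m_h0**2 + D11 + D22) - m_Z**2*D_A
           + D12**2 - D11*D22) / (-m_h0**2 + m_Z**2*np.cos(2*beta)**2 + D_B)
print(np.sqrt(m_A0_sq))      # m_{A^0} in GeV for these placeholder inputs
```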
As $m_{\tilde{Q}_3}=2.4{\rm TeV},~ m_{\tilde{U}_3}=1.2{\rm TeV}, ~\tan\beta =\tan\beta_B =2.15,~\upsilon_B =4.1{\rm TeV},~\mu =-2.05{\rm TeV}, ~Y_{e_4}=0.5$ and $ A_{u_5}=1 {\rm TeV}$. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) versus $A_t$ are shown in the left diagram of FIG.\[NAttu\]. While the right diagram of FIG.\[NAttu\] gives out the Higgs masses $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line). In the $A_t$ region $(2\sim4.8)$ TeV, the $R_{\gamma\gamma}$ varies from 1.25 to 1.34. At the same time, the $R_{VV}$ is in the range $(1.2\sim1.38)$. The dot-dashed line and dotted line are very near. In the $A_t$ region $(3000\sim4000)$ GeV, the masses of Higgs $A^0$ and $H^0$ are around 1000 GeV. In this parameter space, the allowed biggest values of $A^0$ and $H^0$ masses can almost reach 1350 GeV. ![The results versus $A_{t}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NAttu"}](NAt1.eps "fig:"){width="2.9in"}   ![The results versus $A_{t}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NAttu"}](NAt2.eps "fig:"){width="2.9in"} $Y_{e_4}$ is the Yukawa coupling constant that can influence the mass matrix of exotic lepton and exotic slepton. We use $m_{\tilde{Q}_3}= m_{\tilde{U}_3}=1.2{\rm TeV},~\tan\beta =2.3,~\tan\beta_B =1.77, ~A_t=1.7{\rm TeV},~\upsilon_B =5.43{\rm TeV},~\mu =-2.64{\rm TeV} ,~ A_{u_5}=1 {\rm TeV}$ and obtain the results versus $Y_{e_4}$ in the FIG.\[NYe4tu\]. In the left diagram, the $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are around 1.3 and their changes are small during the $Y_{e_4}$ range $(0.05\sim1)$. One can see that in the right diagram $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) possess same behavior versus $Y_{e_4}$. They are both decreasing functions of $Y_{e_4}$ and vary from 1500GeV to 500 GeV. In general, $Y_{e_4}$ effect to the Higgs masses $m_{A^0}$ and $m_{H^0}$ is obvious. ![The results versus $Y_{e_4}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NYe4tu"}](NYe41.eps "fig:"){width="2.9in"}   ![The results versus $Y_{e_4}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NYe4tu"}](NYe42.eps "fig:"){width="2.9in"} $m_{\tilde{Q}_3}$ and $m_{\tilde{U}_3}$ are the diagonal elements of the squark mass squared matrix, and they should affect the results. Supposing $m_{\tilde{Q}_3}= m_{\tilde{U}_3}=M_Q,~\tan\beta =2.1, ~\tan\beta_B =2.24,~A_t=1.7{\rm TeV},~\upsilon_B =3.95{\rm TeV},~\mu =-1.9{\rm TeV} ,~Y_{e_4}=0.6,~ A_{u_5}=1 {\rm TeV}$, we calculate the results versus $M_Q$ and plot the diagrams in the FIG.\[NMQtu\]. It shows that in this figure the solid line, dashed line, dotted line and dot-dashed line are all stable. $R_{\gamma\gamma}$ and $R_{VV}$ are around 1.2. At the same time $m_{A^0}$ and $m_{H^0}$ are about 1 TeV. ![The results versus $M_{Q}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. 
$m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NMQtu"}](NMQ1.eps "fig:"){width="2.9in"}   ![The results versus $M_{Q}$ are shown. $R_{\gamma\gamma}$ (solid line) and $R_{VV}$ (dashed line) are in the left diagram. $m_{A^0}$ (dot-dashed line) and $m_{H^0}$ (dotted line) are in the right diagram.[]{data-label="NMQtu"}](NMQ2.eps "fig:"){width="2.9in"} scalar dark matter $Y_1$ ------------------------ Here, we suppose $Y_1$ as a scalar dark matter candidate. In Ref.[@pdg2016] the density of cold non-baryonic matter is $\Omega_D h^2=0.1186\pm 0.0020$. To obtain the numerical results of dark matter relic density, for consistency the used parameters in this subsection are of the same values as in Eq.(\[canshu\]) if they are supposed. Therefore, we just show the values of the parameters beyond Eq.(\[canshu\]). These parameters are taken as $$\begin{aligned} &&~\mu_Y=1500{\rm GeV},~~~~~ \lambda_5=1,~~~~~ \mu_L =B_L = B_{NL} = 1{\rm TeV},~~~~~\tan\beta=1.4,\nonumber\\&& B_Y=940{\rm GeV},~~~~~m_{\Phi_L}^2=m_{\varphi_L}^2= m_{\Phi_{NL}}^2= m_{\varphi_{NL}}^2= 3 {\rm TeV}^2,~~~~Y_{e_4}=0.5.\label{darkparameter}\end{aligned}$$ With the relation $\lambda_4=\lambda_6=Lm$, we study relic density $\Omega_D$ and $x_F$ versus $Lm$ in the FIG.\[darktu1\]. ![The relic density and $x_F$ versus $Lm$.[]{data-label="darktu1"}](fengduN.eps "fig:"){width="2.9in"}   ![The relic density and $x_F$ versus $Lm$.[]{data-label="darktu1"}](xF.eps "fig:"){width="2.9in"} In the right diagram of FIG.\[darktu1\], the grey area is the experimental results in 3 $\sigma$ and the solid line representing $\Omega_Dh^2$ turns small with the increasing $Lm$. During the $Lm$ region $(0.7\sim1.4)$, $\Omega_Dh^2$ satisfies the experiment bounds of dark matter relic density. $x_F$ is stable and in the region $(23.5 \sim 24)$. Taking $Y_{e_4}=1.3, \lambda_4=\lambda_6=1$ and the other parameters being same as Eq.(\[darkparameter\]) condition, we plot the relic density($x_F$) versus $Y_{e_5}$ in the left (right) diagram of the FIG.\[darktu2\]. In this parameter space, during $Y_{e_5}$ region $(0.1\sim2.5)$, our theoretical results satisfy the relic density bounds of dark matter, and $x_F$ is very near 23.55. Generally speaking, both the solid line and dashed line are very stable. ![The relic density and $x_F$ versus $Y_{e_5}$.[]{data-label="darktu2"}](fengduYe5.eps "fig:"){width="2.9in"}   ![The relic density and $x_F$ versus $Y_{e_5}$.[]{data-label="darktu2"}](xFYe5.eps "fig:"){width="2.9in"} discussion and conclusion ========================= Considering the light exotic lepton in BLMSSM, we add exotic Higgs superfields $\Phi_{NL}$ and $\varphi_{NL}$ to BLMSSM in order to make the exotic leptons heavy. Light exotic leptons may be excluded by the experiment in the future. On the other hand, heavy exotic leptons should not be stable. So we also introduce the superfields $Y$ and $Y'$ to make exotic leptons decay quickly. The lightest mass eigenstate of $Y$ and $Y'$ mixing mass matrix can be a dark matter candidate. Therefore, the exotic leptons are heavy enough to decay to SM leptons and Y at tree level. We call this extended BLMSSM as EBLMSSM, where the mass matrices for the particles are deduced and compared with those in BLMSSM. Different from BLMSSM, the exotic sleptons of 4 and 5 generations mix together forming $4\times4$ mass squared matrix. EBLMSSM has more abundant content than BLMSSM for the lepton physics. 
To confine the parameter space of EBLMSSM, we study the decays $h^0\rightarrow \gamma\gamma$ and $h^0\rightarrow VV, V=(Z,W)$. The CP even Higgs masses $m_{h^0}, m_{H^0}$ and CP odd Higgs mass $m_A^0$ are researched. In the numerical calculation, to keep $m_{h^0}=125.1$ GeV, we use it as an input parameter. In our used parameter space, the values of $R_{\gamma\gamma}$ and $R_{VV}$ both meet the experiment limits. The CP odd Higgs mass $m_{A^0}$ is a little heavier than the CP even Higgs mass $m_{H^0}$. Generally speaking, both $m_{A^0}$ and $m_{H^0}$ are in the region $(500\sim1500)$ GeV. Based on the supposition that the lightest mass eigenstate $Y_1$ of $Y$ and $Y'$ mixing possesses the character of cold dark matter, we research the relic density of $Y_1$. In our used parameter space, $\Omega_Dh^2$ of $Y_1$ can match the experiment bounds. EBLMSSM has a bit more particles and parameters than those in BLMSSM. Therefore, EBLMSSM possesses stronger adaptive capacity to explain the experiment results and some problems in the theory. In our later work, we shall study the EBLMSSM and confine its parameter space to move forward a single step. [**Acknowledgments**]{} Supported by the Major Project of NNSFC (No. 11535002, No. 11605037, No. 11705045), the Natural Science Foundation of Hebei province with Grant No. A2016201010 and No. A2016201069, and the Natural Science Fund of Hebei University with Grants No. 2011JQ05 and No. 2012- 242, Hebei Key Lab of Optic-Electronic Information and Materials, the midwest universities comprehensive strength promotion project. At last, thanks Dr. Tong Li and Dr. Wei Chao very much for useful discussions of dark matter. [99]{} S.R. Elliott and P. Vogel, Ann. Rev. Nucl. Part. Sci. 52 (2002) 115; P. Nath and P. Fileviez Perez, Phys. Rept. 441 (2007) 191. T2K collaboration, Phys. Rev. Lett. 107 (2011) 041801; DAYA-BAY collaboration, Phys. Rev. Lett. 108 (2012) 171803. CMS collaboration, Phys. Lett. B 716 (2012) 30; ATLAS collaboration, Phys. Lett. B 716 (2012) 1. H.P. Nilles, Phys. Rept. 110 (1984) 1; H.E. Haber and G.L. Kane, Phys. Rept. 117 (1985) 75. J. Rosiek, Phys. Rev. D 41 (1990) 3464, hep-ph/9511250. P.F. Perez, Phys. Lett. B 711 (2012) 353; J.M. Arnold, P.F. Perez, B. Fornal, S. Spinner, Phys. Rev. D 85 (2012) 115024; P.F. Perez, M.B. Wise, Phys. Rev. D 84 (2011) 055015. P.F. Perez, M.B. Wise, Phys. Rev. D 82 (2010) 011901; Erratum: Phys. Rev. D 82 (2010) 079901; T.R. Dulaney, P.F. Perez, M.B. Wise, Phys. Rev. D 83 (2011) 023520. G. Jungman, M. Kamionkowski, K. Griest, Phys. Rept. 267 (1996) 195-373; G. Bertone, D. Hooper, J. Silk, Phys. Rept. 405 (2005) 279-390; M. Drees, M.M. Nojiri, Phys. Rev. D 47 (1993) 376-408. P. Ko, Y. Omura, Phys. Lett. B 701 (2011) 363-366. T.F. Feng, S.M. Zhao, H.B. Zhang et al., Nucl. Phys. B 871 (2013) 223. S.M. Zhao, T.F. Feng, H.B. Zhang et al., Phys. Rev. D 92 (2015) 115016; S.M. Zhao, T.F. Feng, X.J. Zhan et al., JHEP 07 (2015) 124. T.F. Feng, X.Q. Li, H.B. Zhang, S.M. Zhao, e-Print: arXiv:1512.06696 \[hep-ph\]. S.M. Zhao, T.F. Feng, H.B. Zhang et al., JHEP 11 (2014) 119. S.M. Zhao, T.F. Feng, B. Yan et al., JHEP 10 (2013) 020; X.X. Dong, S.M. Zhao, H.B. Zhang, F. Wang, T.F. Feng, Chin.Phys. C40 (2016) 093103. J.R. Ellis, M.K. Gaillard, D.V. Nanopoulos, Nucl. Phys. B 106 (1976) 292; M.A. Shifman, A.I. Vainshtein, M.B. Voloshin, V.I. Zakharov, Sov. J. Nucl. Phys. 30 (1979) 711. A. Djouadi, Phys. Rep. 459 (2008) 1. M. Gavela, G. Girardi, C. Malleville, P. Sorba, Nucl. Phys. B 193 (1981) 257. W.Y. Keung, W.J. Marciano, Phys. 
Rev. D 30 (1984) 248; J.F. Gunion, H.E. Haber, G. Kane, S. Dawson, The Higgs Hunter's Guide, Perseus Books, 1990. W. Bernreuther, P. Gonzalez, M. Wiebusch, Eur. Phys. J. C 69 (2010) 31. Particle Data Group collaboration, Chin. Phys. C 40 (2016) 100001.

[^1]: email:[email protected]

[^2]: email:[email protected]

[^3]: [email protected]

[^4]: [email protected]

[^5]: [email protected]
---
author:
- Humberto Gomez
- Ellis Ye Yuan
bibliography:
- 'BerkovitsString.bib'
title: |
  $N$-Point Tree-Level Scattering Amplitude\
  in the New Berkovits’ String
---

Introduction
============

Recently a new formula was proposed by Cachazo, He and Yuan (CHY) to compute the tree-level scattering amplitudes of massless bosons (doubly-colored scalar with cubic self-interaction, pure gluon and pure graviton) in any dimensions [@Cachazo:2013hca; @Cachazo:2013iea], which is constructed upon scattering equations that govern the relation between scattering data and an underlying punctured Riemann sphere in the connected prescription [@Witten:2003nn; @Roiban:2004yf; @Cachazo:2012da; @Cachazo:2012kg; @Cachazo:2013zc; @Cachazo:2013iaa]. This formula has been proven by Britto-Cachazo-Feng-Witten (BCFW) recursion relations [@Dolan:2013isa]. Given the twistor string origin of such a construction, Mason and Skinner found a new ambitwistor string theory whose tree-level scattering produces this formula [@Mason:2013sva]. Moreover, they pointed out that this new version of twistor string can be obtained by taking the chiral infinite tension limit of the ordinary string theory, and they gave an explicit example in the bosonic case. This was extended by Berkovits very recently to the superstring in the pure spinor formalism [@nathannewpaper]. By investigating its connection with the RNS formalism in Mason and Skinner’s discussion, this twistor-like string theory was also claimed to give rise to the scattering-equation-based formula. A particularly interesting aspect of this extension is that, since the pure spinor formalism naturally encodes space-time supersymmetry, this has the potential of extending the original CHY formula to the supersymmetric case, at least in ten dimensions where this twistor-like string theory sits. So as a first step, it is worthwhile to see how the CHY formula arises from Berkovits’ theory in a direct way and how supersymmetry enters the formula. In this short paper, we give a direct proof that at tree level Berkovits’ twistor-like theory of the heterotic string and the type II string is identical to the ten-dimensional $\mathcal{N}=1$ super Yang-Mills (SYM) and type II supergravity (SUGRA) respectively. The proof parallels the work of Mafra, Schlotterer and Stieberger (and also later on with Broedel) on the disk amplitudes of the ordinary superstring in the pure spinor formalism [@npointsmafra; @mafratwo; @Broedel:2013tta]. This is expected since the constructions of vertex operators are very similar between the two theories. The main difference comes with the moduli, which are now holomorphic coordinates on a Riemann sphere instead of ordered coordinates on the real axis that by conformal symmetry describe points on the boundary of a disk. With this, the scattering equations directly make an appearance in the amplitude [@Mason:2013sva; @nathannewpaper], which is a hint that the formula is of the CHY type. Indeed, the final result obtained here shares a similar Kawai-Lewellen-Tye (KLT) structure with the string amplitude given in [@Broedel:2013tta]. And as a result of KLT orthogonality pointed out in [@Cachazo:2013iea], in the heterotic version this just reduces to the SYM amplitude as given in [@mafraSYM], and in the type II version it leads to SUGRA as the KLT of two copies of SYM. Moreover, by applying KLT orthogonality in a different way, this result is exactly equivalent to the original CHY formula when restricting to gluon and graviton scattering. The paper is organized as follows.
We first give a detailed proof for the case of heterotic string in Section \[section2\]. Since the proof for the type II string shares a lot in common, we only discuss in detail the differences in Section \[section3\]. A quick review of Berkovits’ theory in each case is summarized at the beginning of each section. Tree-Level SYM Amplitude {#section2} ======================== The action in Berkovits’ new twistor-like theory for heterotic superstring, which is expected to describe the ${\cal N}=1$ SYM in ten dimensions, is given by [@nathannewpaper] $$\label{action} S= \int d^2z\, (P_m \pb X^m+ p_\a\pb \t^\a + w_\a\pb \l^\a + b\pb c) + S_c,$$ where $\l^\a$ is a ten dimensional pure spinor (with the constraint $\l\g^m\l=0$) and $S_C$ is the worldsheet action for the current algebra. The BRST operator is defined as $$Q=\int dz\, (\l^\a d_\a +c(P_m\p X^m p_\a\p \t^\a + w_\a\p \l^\a + T_c)+ bc\p c),$$ where $d_\a$ is the Green-Schwarz constraint $$\label{gsc} d_\a = p_\a - \frac{1}{2}P_m(\g^m\t)_\a.$$ The massless vertex operator describing the ${\cal N}=1$ SYM multiplet are $$\label{vertexym} \begin{array}{cc} \begin{split} V=& c\, \tilde V^I\, J_I,\\U=& \,\tilde U^I \,J_I, \end{split}& \begin{split} &\qquad\text{Unintegrated},\\&\qquad\text{Integrated}, \end{split} %V=& c\, \tilde V^I\, J_I, \,\, %\qquad\qquad\text {Unintegrated},\\ %U=& \,\tilde U^I \,J_I, \,\,\quad\text {Integrated}, \end{array}$$ where $$\label{vertexym2} \begin{split} \tilde V^I=& e^{ik\cdot X}\l^\a A^I_\a(\t),\\ \tilde U^I=&e^{ik\cdot X} \bar\delta(K\cdot P)[P^m A^I_m + d_\a W^{\a I} + \frac{1}{2}N_{mn}{\cal F}^{mn I}]. \end{split}$$ In the above, $N_{mn}=\frac{1}{2}(\l\g_{mn}w)$, $\{ A_\a,A_m, W^\a,{\cal F}^{mn}\}$ are the ${\cal N}=1$ SYM superfields, and the current $J_I(\s)$ satisfies $$J_I(\s_i)J_J(\s_j)\sim \frac{k\delta_{IJ}}{\s^2_{ij}}+ \frac{f_{IJ}^KJ_K(\s_j)}{\s_{ij}}.$$ From the action (\[action\]) it is simple to read the OPE’s $$\label{ope} \begin{split} &d_\a(\s_i)d_\b(\s_j)\sim -\frac{\g^m_{\a\b}P_m}{\s_{ij}}, \quad d_\a(\s_i)\t^\b(\s_j)\sim \frac{\delta_{\a}^{\b}}{\s_{ij}},\quad d_\a(\s_i)f(X(\s_j),\t(\s_j))\sim \frac{D_\a f(X(\s_j)}{\s_{ij}},\\ & N^{mn}(\s_i)N_{pq}(\s_j)\sim \frac{4N^{[m}_{\,\,\,\,\,\,[p}\delta^{n]}_{q]}}{\s_{ij}}-\frac{6\delta^{m}_{[p}\delta^{n}_{q]}}{\s^2_{ij}},\quad N^{mn}(\s_i)\l^\a(\s_j)\sim -\frac{1}{2}\frac{(\l\g^{mn})^\a}{\s_{ij}}, \quad P^m(\s_i) P_n(\s_j)\sim 0,\\ &P^m(\s_i) f(X(\s_j),\t(\s_j))\sim -\frac{(k^m)f(X(\s_j),\t(\s_j))}{\s_{ij}}, \end{split}$$ where $D_\a:=\p_\a + \frac{1}{2}(\g^m\t)_\a\p_m$ is the covariant derivative. Note that these are the same OPE’s as found in the pure spinor superstring formalism [@nathanps], except for the OPE of $P^m(\s_i) P_n(\s_j)$, which in the pure spinor formalism of ordinary string has a double pole[^1]. The tree-level amplitude prescription is given by the correlation function $$\mathcal{A}_N= \int \prod_{i=1}^{N-2}d\s_i\, \langle V_1(\sigma_1=0)U_2 \cdots U_{N-2} V_{N-1}(\s_{N-1}=1) V_N(\s_N=\infty)\rangle,$$ where the three unintegrated vertex operators $\{V_1(\s_{1}=0), V_{N-1}(\s_{N-1}=1), V_N(\s_N=\infty)\}$ fix the $SL(2,\mathbb{C})$ gauge symmetry on the sphere. Since there is no correlation between $c$, $J_I$ and the vertices $\{\tilde V^I,\tilde U^I\}$, the above formula can be decomposed as $$\label{prescription} \mathcal{A}_N= \int \prod_{i=1}^{N-2}d\s_i \langle c(\s_1)c(\s_{N-1})c(\s_N) \rangle \,\,\langle \tilde V^{I_1}_1\tilde U^{I_2}_2 ... 
\tilde U^{I_{N-2}}_{N-2} \tilde V^{I_{N-1}}_{N-1}\tilde V^{I_{N}}_N\rangle\,\, \langle J_{I_1}J_{I_2} ... J_{I_{N}}\rangle,$$ where the $c$-ghost correlator just produces a Vandermonde factor $$\langle c(\s_1)c(\s_{N-1})c(\s_N) \rangle = \s_{1,N-1}\s_{N-1,N}\s_{N,1}.$$

$X^m$ and $P^m$ Integration
---------------------------

We first perform the phase space integration. In the path integral prescription the $X^m$ effective action contribution is given by $$S[X,P] = -\int d^2\s \left(\frac{1}{2\pi}P_m\pb X^m -i\sum_{i=1}^{N}k_i\cdot X \delta^{(2)}(\s-\s_i)\right).$$ Integrating the zero modes of the $X^m$ field leads to the usual momentum conservation $\delta^{(10)}(\sum_i k^m_i)$. Integration over the non-zero modes implies the constraint $$\pb P^m=-2\pi i\sum_{i=1}^{N}k_i^m \delta^{(2)}(\s-\s_i),$$ which, on the sphere, has the unique solution $$P^m(\s)= \sum_{i=1}^N \frac{-(ik_i^m)}{\s-\s_i}.$$ This solution must be substituted into the integrated vertex operators, i.e., at the vertex $U_i$ we have $$\label{Pope} P^m\,\,\longrightarrow \,\,P^m(\s_i)= \sum_{j\neq i}^N \frac{-(ik_j^m)}{\s_i-\s_j}$$ $$\label{sceq} \bar\delta(k_i\cdot P)\,\,\longrightarrow \,\,\bar\delta(k_i\cdot P(\s_i))= \delta\left(i\sum_{j\neq i}^N \frac{(ik_i)\cdot(ik_j)}{\s_i-\s_j}\right)=\delta\left(i\sum_{j\neq i}^N \frac{s_{ij}}{\s_{ij}}\right),$$ where[^2] $s_{ij}:=(ik_i)\cdot (ik_j)$. The solution (\[Pope\]) is equivalent to considering the OPE given in (\[ope\]), and we can write the Dirac delta as $$\label{scatteringeq} \bar\delta(k_i\cdot P(\s_i))=\delta\left(\sum_{j\neq i}^N \frac{s_{ij}}{\s_{ij}}\right),$$ since the overall $i$ factor does not affect the final answer[^3]. Hence, we can conclude that the integration over the $X^m$ and $P^m$ fields implies the $P^m$ OPE in (\[ope\]), the $N-3$ independent scattering equations and momentum conservation.

Connection to CHY Formula
-------------------------

In order to compute the pure spinor correlator we must note that every single-pole contraction is the same as those given in [@npointsmafra]. This is simple to see, since the only difference between the operators in (\[vertexym2\]) and those used in [@npointsmafra] is the missing term $$\label{missingterm} \p\t^\a A_\a(X,\t)$$ in the definition of the integrated vertices, whose OPE’s involve only double poles (all possible simple poles from this term cancel away in the end). In addition, the operator $\Pi^{m}$ in the ordinary string, which is replaced by $P^m$ in the new vertex operator (\[vertexym2\]), has the OPE’s $$\label{Piope} \Pi^m(\s_i)\Pi_n(\s_j)\sim -\frac{\delta^m_n}{\s^2_{ij}}, \qquad \Pi^m(\s_i) f(X(\s_j),\t(\s_j))\sim -\frac{k^mf(X(\s_j),\t(\s_j))}{\s_{ij}}$$ where $f(X(\s_j),\t(\s_j))$ is any superfield. Note that the only difference between (\[ope\]) and (\[Piope\]) is the double pole. These indicate that all differences enter into the terms with double poles, and so we must have a careful look at these terms before moving on. In [@npointsmafra] it was argued that terms involving double poles always combine to produce a prefactor of the form $$\label{doublepoleprefactor} \frac{(1+s_{ij})}{\sigma^2_{ij}},$$ whose numerator plays the role of canceling the tachyon pole $1/(1+s_{ij})$ produced by integration of the Koba-Nielsen (KN) factor $\prod_{i<j}|\sigma_i-\sigma_j|^{-s_{ij}}$ (since such a pole is expected to be spurious). At the integrand level this means that the double poles are actually spurious as well, and hence the aim is to dissolve the appearance of the double poles.
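Since from here on every $\sigma$-dependent expression is evaluated on the support of the delta constraints (\[scatteringeq\]), it may help to see how those constraints are solved in practice. The following is a minimal numerical sketch for $N=5$ with the gauge fixing $\sigma_1=0$, $\sigma_{N-1}=1$, $\sigma_N\rightarrow\infty$ used in the text; the kinematic invariants come from an explicit massless $2\rightarrow3$ configuration and are an assumption of the example, not data taken from the text.

```python
import numpy as np
from scipy.optimize import fsolve

# Mandelstam invariants from an explicit massless 2 -> 3 configuration (all
# momenta outgoing); momentum conservation and masslessness imply
# sum_{j != i} s_ij = 0 for every i.  These numbers are placeholders.
s = {(1, 2): 16.0, (1, 3): -4.0, (1, 4): -6.0, (1, 5): -6.0,
     (2, 3): -4.0, (2, 4): -6.0, (2, 5): -6.0,
     (3, 4): 4.0, (3, 5): 4.0, (4, 5): 8.0}
def sij(i, j): return s[(min(i, j), max(i, j))]

# Gauge fixing: sigma_1 = 0, sigma_4 = 1, sigma_5 -> infinity.  The terms with
# j = 5 drop out, leaving the equations for i = 2, 3 in the complex unknowns
# sigma_2, sigma_3 (split into real and imaginary parts for the solver).
def equations(x):
    sigma = {1: 0.0, 2: x[0] + 1j*x[1], 3: x[2] + 1j*x[3], 4: 1.0}
    out = []
    for i in (2, 3):
        fi = sum(sij(i, j)/(sigma[i] - sigma[j]) for j in (1, 2, 3, 4) if j != i)
        out += [fi.real, fi.imag]
    return out

rng, solutions = np.random.default_rng(0), []
for _ in range(20):                        # several starts; keep distinct roots
    x = fsolve(equations, rng.normal(size=4))
    if np.max(np.abs(equations(x))) < 1e-9:
        root = (complex(round(x[0], 6), round(x[1], 6)),
                complex(round(x[2], 6), round(x[3], 6)))
        if root not in solutions:
            solutions.append(root)

print(solutions)   # (N-3)! = 2 solutions; here they form a complex-conjugate pair
```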
In the treatment of [@npointsmafra] for the ordinary string, these double poles are dissolved by integration by parts in the presence of the KN factor, e.g., $$\int d\sigma_a\frac{d}{d\sigma_a}\left(\frac{1}{\sigma_{a,b}}\prod_{i<j}|\sigma_i-\sigma_j|^{-s_{ij}}\right) =-\int d\sigma_a\left(\frac{1+s_{ab}}{\sigma_{ab}^2}+\sum_{i\neq a,b}\frac{s_{ai}}{\sigma_{ab}\sigma_{ai}}\right)\prod_{i<j}|\sigma_i-\sigma_j|^{-s_{ij}}=0,$$ and so the effect of this operation is equivalent to the substitution $$\label{substitutionordinarystring} \frac{(1+s_{ab})}{\sigma^2_{ab}}\longrightarrow-\sum_{i\neq a,b}\frac{s_{ai}}{\sigma_{ab}\sigma_{ai}}.$$ It is important to point out that, in the calculation of the ordinary string, the presence of the term (\[missingterm\]) in the integrated vertex and the double pole in (\[Piope\]) contribute and only contribute to the term “$1$” in the numerator of (\[doublepoleprefactor\]), and this “$1$” term receives no contribution from anything else. Here we just show this explicitly in the simplest example at five points. According to the calculation in [@Mafra:2009bz], when fixing $\{\sigma_1,\sigma_4,\sigma_5\}$, the terms with double poles read $$\label{doublepoleOPE5pt} \frac{(1+s_{23})}{\sigma_{23}^2}\,\langle \tilde{V}(\sigma_1)\,[A_\alpha(\sigma_2)W^\alpha(\sigma_3)+A_\alpha(\sigma_3)W^\alpha(\sigma_2)-A_m(\sigma_2)A^m(\sigma_3)]\,\tilde{V}(\sigma_4)\,\tilde{V}(\sigma_5)\rangle,$$ where with a slight abuse of notation we denote $\tilde{V}=\lambda_\alpha A^\alpha$. If we study the contribution from the term (\[missingterm\]), from (\[ope\]) it is easy to see that the only non-trivial OPE’s are[^4] $$\label{vanishing1} (\partial\theta_\alpha A^\alpha)(\sigma_2)\,(d_\beta W^\beta)(\sigma_3)\sim\frac{A_{\alpha}(\sigma_2)W^\alpha(\sigma_3)}{\sigma_{23}^2},\quad (d_\beta W^\beta)(\sigma_2)\,(\partial\theta_\alpha A^\alpha)(\sigma_3)\sim\frac{A_{\alpha}(\sigma_3)W^\alpha(\sigma_2)}{\sigma_{23}^2}.$$ Moreover, the non-trivial OPE among $\Pi_m$’s given in (\[Piope\]) produces an additional term $$\label{vanishing2} (\Pi^mA_m)(\sigma_2)\,(\Pi^nA_n)(\sigma_3)\sim-\frac{A_m(\sigma_2)A^m(\sigma_3)}{\sigma_{23}^2}.$$ When we switch from the ordinary string to the twistor string constructed by Berkovits, one can check that (\[vanishing1\]) and (\[vanishing2\]) are the only OPE’s that cease to contribute to the vertices correlator, and so the change to the result is only to delete the “$1$” from the prefactor $(1+s_{23})$. In general, in the computation of Berkovits’ twistor string, we just need to switch the prefactors $(1+s_{ij})$ to the corresponding $s_{ij}$ [^5]. Now, in the context of the twistor string, there is no longer any KN factor. Instead, since the $\sigma$ variables are evaluated under the delta constraints (\[scatteringeq\]), the way to dissolve the presence of double poles is to apply the substitution on the support of the corresponding scattering equations, e.g., $$\label{substitutiontwistorstring} \frac{s_{ab}}{\sigma^2_{ab}}\longrightarrow-\sum_{i\neq a,b}\frac{s_{ai}}{\sigma_{ab}\sigma_{ai}}.$$ From (\[substitutionordinarystring\]) and (\[substitutiontwistorstring\]), we see that although the differences in OPE’s between the ordinary string and Berkovits’ twistor string lead to different appearances of double-pole terms, after canceling these spurious poles they actually give the same result for the vertices correlator. Due to this fact, we are justified in directly applying the results obtained in [@npointsmafra] $$\label{correlatorpsUV} \begin{split} \langle \tilde V^{I_1}_1\tilde U^{I_2}_2 ...
\tilde U^{I_{N-2}}_{N-2} \tilde V^{I_{N-1}}_{N-1}\tilde V^{I_{N}}_N\rangle =&\delta^{(10)}(\sum_i k^m_i)\,\prod_{i=2}^{N-2}\delta\left(\sum_{j\neq i}^N \frac{s_{ij}}{\s_{ij}}\right)\cdot\\ &\cdot\sum_{\b\in S_{N-3}}A_{YM}(1,\b,N-1,N) \,\,\prod_{k=2}^{N-2}\sum_{m=1}^{k-1}\frac{s_{\b(m)\b(k)}}{\sigma_{\b(m)\b(k)}}, \end{split}$$ where $A_{YM}(1,\b,N-1,N)=A_{YM}(1,\b(2),...,\b(N-3),N-1,N)$ is the SYM scattering amplitude which is given in terms of the BRST building blocks [@mafraSYM]. Furthermore, from [@Broedel:2013tta] we know that by manipulations with partial fraction relations the last factor above can be rewritten as[^6] $$\label{resumKLT} \prod_{k=2}^{N-2}\sum_{m=1}^{k-1}\frac{s_{\b(m)\b(k)}}{\sigma_{\b(m)\b(k)}} =\sigma_{1,N}\sigma_{N,N-1}\sigma_{N-1,1}\sum_{\gamma\in S_{N-3}}\mathcal{S}[\beta|\gamma]\frac{1}{(1,\gamma,N,N-1)},$$ where $$(1,\gamma,N,N-1):= \s_{1\gamma(2)}\s_{\gamma(2)\gamma(3)}...\s_{\gamma(N-2)N}\s_{N,N-1}\s_{N-1,1}$$ denotes the Parke-Taylor factor, and $$\mathcal{S}[\beta|\gamma]:=\prod_{a=1}^{n-2}\left(s_{1,\beta(a)}+\sum_{b=2}^{a-1}\theta(\beta(b),\beta(a))_{\gamma}\, s_{\beta(b),\beta(a)}\right)$$ is the $(n-3)!\times(n-3)!$ based Kawai-Lewellen-Tye (KLT) kernel, with $\theta(a,b)_{\gamma}=1$ if the ordering of the labels $a,b$ is the same in both orderings $\beta$ and $\gamma$, and zero otherwise [@BjerrumBohr:2010ta]. On the other hand, the current algebra correlator gives[^7] $$\label{correlatorpsJ} \langle J_{I_1}J_{I_2}\cdots J_{I_{N}}\rangle=\sum_{\Pi\in S_{N-1}}\frac{\text{Tr}(T^{I_1} T^{\Pi(I_2)}\cdots T^{\Pi(I_N)})}{(1,\Pi(2),\ldots,\Pi(N))}.$$ Due to the delta constraints in , the formula actually reduces to a rational function with the $\{\sigma\}$ variables evaluated on the solutions to the scattering equations. On the support of these equations, it is known from [@Cachazo:2012uq] that the Parke-Taylor factors in can be linearly decomposed onto an $(n-3)!$ basis due to the validity of the Bern-Carrasco-Johansson relations [@Bern:2008qj] $$\frac{1}{(1,\Pi(2),\ldots,\Pi(N))}=\sum_{\alpha\in S_{N-3}}\mathcal{K}[\Pi,\alpha]\frac{1}{(1,\alpha,N-1,N)}$$ in the same way as $$\label{YMdecomposition} A_{YM}(1,\Pi(2),\ldots,\Pi(N))=\sum_{\alpha\in S_{N-3}}\mathcal{K}[\Pi,\alpha]A_{YM}(1,\alpha,N-1,N),$$ with $\mathcal{K}[\Pi,\alpha]$ some function depending only on the kinematic invariants $\{s_{ij}\}$ and the two orderings $\Pi,\alpha$ (which is not relevant to our discussion). At this point, we see that the two copies of the Vandermonde factor $(\s_{1,N-1}\s_{N-1,N}\s_{N,1})$ from the $c$-ghost correlator and combine with the measure and the delta constraints in to form fully permutation invariant and $SL(2,\mathbb{C})$ covariant objects $$\label{measure} \begin{split} \int\frac{d^n\s}{\text{vol }SL(2,\mathbb{C})}&:=\s_{1,N-1}\s_{N-1,N}\s_{N,1}\int\prod_{i=2}^{N-2}d\s_i,\\ {\prod}'\left(\sum\frac{s_{ij}}{\s_{ij}}\right)&:=\s_{1,N}\s_{N,N-1}\s_{N-1,1}\prod_{i=2}^{N-2}\delta\left(\sum_{j\neq i}^N \frac{s_{ij}}{\s_{ij}}\right). \end{split}$$ Hence by assembling the different pieces in , the whole amplitude can be expressed as $$\label{fullYMamplitudeintermediate} \begin{split} \mathcal{A}_N =& \sum_{\Pi\in S_{N-1}}\sum_{\alpha\in S_{N-3}}\text{Tr}(T^{I_1} T^{\Pi(I_2)}\cdots T^{\Pi(I_N)})\mathcal{K}[\Pi,\alpha]\int\frac{d^n\s}{\text{vol }SL(2,\mathbb{C})}{\prod}'\left(\sum\frac{s_{ij}}{\s_{ij}}\right)\frac{1}{(1,\alpha,N-1,N)}\\ &\sum_{\b\in S_{N-3}}A_{YM}(1,\b,N-1,N)\sum_{\gamma\in S_{N-3}}\mathcal{S}[\beta|\gamma]\frac{1}{(1,\gamma,N,N-1)}\,\delta^{(10)}(\sum_i k^m_i).
\end{split}$$ It is easy to see that the part $$\label{mdef} m[\gamma|\alpha]:=\int\frac{d^n\s}{\text{vol }SL(2,\mathbb{C})}{\prod}'\left(\sum\frac{s_{ij}}{\s_{ij}}\right)\frac{1}{(1,\gamma,N,N-1)}\frac{1}{(1,\alpha,N-1,N)}$$ is exactly the double partial amplitude in the doubly colored $\phi^3$ theory computed by CHY formula in [@Cachazo:2013iea]. Since from there we know that as the result of KLT orthogonality $$\label{mrelations} m[\gamma|\alpha]=(\mathcal{S}[\gamma|\alpha])^{-1},$$ reduces to $$\mathcal{A}_N =\delta^{(10)}(\sum_i k^m_i)\,\sum_{\Pi\in S_{N-1}}\sum_{\alpha\in S_{N-3}}\text{Tr}(T^{I_1} T^{\Pi(I_2)}\cdots T^{\Pi(I_N)})\mathcal{K}[\Pi,\alpha]A_{YM}(1,\alpha,N-1,N),$$ which by is indeed the full tree-level amplitude of ten-dimensional $\mathcal{N}=1$ SYM as originally computed in [@mafraSYM]. Supersymmetries are solely encoded into the $(n-3)!$ basis $A_{YM}(1,\alpha,N-1,N)$. On the other hand, for the component amplitude involving gluons only, if we substitute the $A_{YM}(1,\b,N-1,N)$ in by the CHY formula for gluons, when we apply the original KLT orthogonality as stated in [@Cachazo:2013gna], the factor $$\label{pfaffianpart} \sum_{\b\in S_{N-3}}A_{YM}(1,\b,N-1,N)\sum_{\gamma\in S_{N-3}}\mathcal{S}[\beta|\gamma]\frac{1}{(1,\gamma,N,N-1)}$$ becomes a Pfaffian and the entire again reduces to the original CHY formula (apart from the momentum conservation). Tree-Level SUGRA Amplitude {#section3} ========================== In the version of Berkovits’ theory for type II superstring, which is expected to describe type II supergravity in ten dimensions, the action reads [@nathannewpaper] $$\label{actiontypetwo} S= \int d^2z (P_m \pb X^m+ p_\a\pb \t^\a + w_\a\pb \l^\a + \hat p_{\hat \a}\pb \hat \t^{\hat \a} + \hat w_{\hat \a}\pb \hat \l^{\hat \a}),$$ where $\l^\a$ and $\hat \l^{\hat \a}$ are pure spinors. The BRST charge is defined as $$Q=\int dz (\l^\a d_\a +\hat \l^{\hat \a}\hat d_{\hat \a})$$ where $d_\a$($\hat d_{\hat \a}$) is the Green-Schwarz constraint given in (\[gsc\]). The massless vertex operators are the double copy of the vertices defined previously in (\[vertexym\]), but now without $c$-ghost and $J^I$ current. With a little change of notation for later convenience, these are given by $$\label{vertexgrv} \begin{array}{cc} \begin{split} V=& \,e^{ik\cdot X} \tilde V\,\, \tilde{\hat{V}},\\U=& \,e^{ik\cdot X} \bar\delta(K\cdot P)\,\tilde U\,\,\tilde{\hat{U}}, \end{split}& \begin{split} &\qquad\text{Unintegrated},\\&\qquad\text{Integrated}, \end{split} %V=& c\, \tilde V^I\, J_I, \,\, %\qquad\qquad\text {Unintegrated},\\ %U=& \,\tilde U^I \,J_I, \,\,\quad\text {Integrated}, \end{array}$$ where $$\begin{split} \tilde V=& \l^\a A_\a(\t),\\ \tilde U=&P^m A_m + d_\a W^{\a} + \frac{1}{2}N_{mn}{\cal F}^{mn}, \end{split}$$ and the $\{\tilde{\hat{V}},\,\tilde{\hat{U}}\}$ are defined in a similar way (with the hatted version of the fields). Connection to CHY Formula ------------------------- The computation in this case greatly resembles that for the heterotic string, and so here we only summarize the differences. Since there is no correlation between the hatted and non-hatted fields, the tree-level amplitude prescription reads $$\begin{aligned} \label{scatteringgrv} \mathcal{M}_N &= \int \prod_{i=1}^{N-2}d\s_i\, \langle V_1(\sigma_1=0)U_2 \cdots U_{N-2} V_{N-1}(\s_{N-1}=1) V_N(\s_N=\infty)\rangle\nonumber\\ &=\delta^{(10)}(\sum_i k^m_i)\, \int \prod_{i=1}^{N-2}d\s_i\,\,\delta\left(\sum_{j\neq i}^{N}\frac{s_{ij}}{\s_{ij}}\right) \,\,\langle \tilde V_1 \tilde U_2 ... 
\tilde U_{N-2} \tilde V_{N-1}\tilde V_N \rangle\,\,\langle \tilde{\hat{V}}_1\tilde{\hat{U}}_2 ... \tilde{\hat{U}}_{N-2} \tilde{\hat{V}}_{N-1}\tilde{\hat{V}}_N\rangle\end{aligned}$$ where we have already performed the phase space integration, which is the same as that discussed in the SYM case. Each of the remaining correlators above is computed in the same way as that in and . Hence one can check that ${\cal M}_N$ can be expressed as $$\label{sugraformula} {\cal M}_N=\delta^{(10)}(\sum_i k^m_i)\, \sum_{\b\in S_{N-3}}\sum_{\hat \b\in S_{N-3}} A_{YM}(1,\b,N-1,N)\,\, H[\b|\hat\b]\,\,\hat A_{YM}(1,\hat \b,N,N-1),$$ where $$\begin{split} H[\b|\hat\b] =&\int\frac{d^n\s}{\text{vol }SL(2,\mathbb{C})}\,{\prod}'\left(\sum\frac{s_{ij}}{\s_{ij}}\right) \sum_{\gamma\in S_{N-3}}\mathcal{S}[\beta|\gamma]\frac{1}{(1,\gamma,N,N-1)} \sum_{\hat{\gamma}\in S_{N-3}}\mathcal{S}[\hat{\gamma}|\hat{\beta}]\frac{1}{(1,\hat{\gamma},N-1,N)}\\ =&\sum_{\gamma,\hat{\gamma}\in S_{N-3}}\mathcal{S}[\beta|\gamma]m[\gamma|\hat{\gamma}]\mathcal{S}[\hat{\gamma}|\hat{\beta}]. \end{split}$$ Then by the relation it is clear that $${\cal M}_N=\delta^{(10)}(\sum_i k^m_i)\, \sum_{\b\in S_{N-3}}\sum_{\hat \b\in S_{N-3}} A_{YM}(1,\b,N-1,N)\,\, \mathcal{S}[\b|\hat\b]\,\,\hat A_{YM}(1,\hat \b,N,N-1),$$ which is just the KLT relation in constructing SUGRA amplitude from the corresponding SYM amplitude. So we have also confirmed that this theory indeed describes the type II SUGRA at tree level. In the same manner, when restricting to graviton scattering, if in we substitute each of the two SYM amplitudes by their CHY formula, a direct simplification recovers the original CHY formula for pure gravitons. Discussion ========== In this paper we have given a proof by direct computation that at tree level Berkovits’ new twistor-like theory for the heterotic string describes the ten-dimensional $N=1$ SYM and that for the type II string describes the corresponding SUGRA. The computation straightforwardly leads to the scattering-equation-based structure in CHY formula, in similar way as discussed by Mason and Skinner in the RNS formalism. Although the result is manifestly supersymmetric, due to the similar structure with the superstring amplitude given in [@Broedel:2013tta], the supersymmetric structure is completely absorbed into the $(n-3)!$ based $A_{YM}$ factors therein. For this reason, it is not at all obvious whether in SYM there exists any compact integrand which is the supersymmetric analog of the Pfaffian factor as appearing in the CHY formula for pure gluon scattering (at least in ten dimensions). However, by comparing with the CHY formula, we know that if there is such a structure, it has to be equivalent to . So it would be very nice if one can find a non-trivial way to manipulate to make manifest such structure embedded therein. At the time when this paper was being prepared, Adamo, Casali and Skinner published a new work studying Mason and Skinner’s ambitwistor string at one loop [@Adamo:2013tsa]. In particular, the extention of scattering equations to loop levels was proposed, and one-loop amplitudes for NS-NS external states in the type II ambitwistor string were calculated. It is interesting to see how Berkovits’ theory works at loop levels. Moreover, here we have started from the action of a twistor-like string theory, which as claimed in [@nathannewpaper] is the chiral infinite tension limit of the action for the original string theory. 
It is still interesting to see how this chiral limit can be explicitly performed at the amplitude level. The usual way of taking the infinite tension limit of a string amplitude cannot be carried out smoothly, i.e., a direct $\alpha'$-expansion of the integrand (mainly the Koba-Nielsen factor) is not allowed, since it is the singularities from the integration boundaries that are responsible for the field theory counterpart. However, given the analogous structure between the CHY formula and that of the string amplitudes obtained in [@npointsmafra; @mafratwo; @Broedel:2013tta], it is speculated that there might exist a way to smoothly connect the two sides via a novel way of taking the infinite tension limit. If this is true, this chiral limit might have the potential to play that role. The authors would like to thank Nathan Berkovits, Freddy Cachazo and Carlos Mafra for useful discussions, especially Carlos Mafra for pointing out the importance of the double poles from the OPE in the calculation. H.G. is grateful to the Perimeter Institute for Theoretical Physics for warm hospitality during stages of this work. The work of H.G. is supported by FAPESP grants 07/54623-8 and 13/11409-7. This work is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. [^1]: Note that the OPE between $P^m$ and any superfield has the same convention as in [@npointsmafra]. The idea is to apply the results obtained in that paper. [^2]: The definition of $s_{ij}$ matches the one given in [@npointsmafra]. [^3]: Integrating out the phase space $\{P^m, X^m\}$ implies that it is not necessary to consider the OPE between the Dirac delta $\bar\delta(k\cdot P)$ and the superfields. [^4]: Here we only write out the double-pole terms. As stated before, the simple-pole terms from these additional OPE’s eventually cancel each other, and thus are of no interest. [^5]: The authors are grateful to Carlos Mafra for discussions over this issue. [^6]: Rigorously speaking, this identity holds only when $\sigma_N$ is gauge-fixed at infinity. However, for the general gauge where $\sigma_N$ is finite, the requirement of $SL(2,\mathbb{C})$ invariance of $\mathcal{A}_N$ guarantees that the r.h.s. below is the correct answer. This will become obvious later in . [^7]: In addition to the single-trace terms, the current algebra correlator also produces multi-trace terms [@Berkovits:2004hg; @Mason:2013sva]. As stated in [@Mason:2013sva], the multi-trace terms are associated with coupling Yang-Mills to gravity. Here we care about pure Yang-Mills and so we only focus on the single-trace terms.
--- abstract: 'Let $\mathcal{B}$ denote a collection of open bounded sets in $\mathbb{R}^n$, and define the associated maximal operator $M_{\mathcal{B}}$ by $$M_{\mathcal{B}}f(x) \coloneqq \sup_{x \in R \in \mathcal{B}} \frac{1}{|R|}\int_R |f|.$$ The *sharp Tauberian constant of $M_{\mathcal{B}}$ associated to $\alpha$*, denoted by $C_{\mathcal{B}}(\alpha)$, is defined as $$C_{\mathcal{B}}(\alpha) \coloneqq \sup_{E :\, 0 < |E| < \infty}\frac{1}{|E|}\big|\big\{x \in \mathbb{R}^n:\, M_{\mathcal{B}}\chi_E (x) > \alpha\big\}\big|.$$ Motivated by previous work of A. A. Solyanik, we show that if $M_{\mathcal{B}}$ is the uncentered Hardy-Littlewood maximal operator associated to balls, the estimate $$\lim_{\alpha \rightarrow 1^-}C_{\mathcal{B}}(\alpha) = 1$$ holds. Similar results for iterated maximal functions are obtained, and open problems in the field of Solyanik estimates are also discussed.' address: - 'Department of Mathematics, Baylor University, Waco, Texas 76798' - 'Department of Mathematics, Aalto University, P. O. Box 11100, FI-00076 Aalto, Finland' author: - Paul Hagelstein - Ioannis Parissis title: Solyanik Estimates in Harmonic Analysis --- [^1] [^2] Introduction ============ Let $\mathcal{B}$ denote a collection of open sets in $\mathbb{R}^n$, and define the associated geometric maximal operator $M_{\mathcal{B}}$ by $$M_{\mathcal{B}}f(x) \coloneqq \sup_{x \in R \in \mathcal{B}} \frac{1}{|R|}\int_R |f|.$$ For example, if $\mathcal{B}$ were the collection of all cubes or balls in $\mathbb{R}^n$, $M_\mathcal{B}$ would be the uncentered Hardy-Littlewood maximal operator; while if $\mathcal{B}$ were the collection of all rectangles in $\mathbb{R}^n$ with sides parallel to the axes, $M_\mathcal{B}$ would be the strong maximal operator. Due to the importance of these classical operators we adopt the special notation $M_{\operatorname {HL} } $ for the uncentered Hardy-Littlewood maximal operator and $M_{\operatorname S}$ for the strong maximal operator. To avoid confusion, we will sometimes let $M_{\operatorname {HL},b } $ denote the uncentered Hardy-Littlewood maximal operator $M_{\operatorname{HL}}$ with respect to balls, and let $M_{\operatorname {HL} ,c} $ denote the uncentered Hardy-Littlewood maximal operator with respect to cubes. We will also consider the *centered* Hardy-Littlewood maximal operator defined as $$M_{\operatorname {HL},b } ^{\operatorname c} f(x) \coloneqq \sup_{r>0}\frac{1}{|B(x,r)|}\int_{B(x,r)} |f|,$$ where $B(x,r)$ denotes the Euclidean ball of radius $r>0$, centered at $x\in\mathbb R^n$. A similar definition gives $M_{\operatorname {HL},c } ^{\operatorname c}$, defined with respect to centered cubes. Observe that, strictly speaking, these centered operators do not fall under the scope of our general definition for $M_{\mathcal B}$ as there is no collection $\mathcal B$ that will generate $M_{ \operatorname{HL},b } ^{\operatorname c}$ or $M_{ \operatorname{HL},c} ^{\operatorname c}$. This is essentially due to the centered nature of the sets defining $M_{\operatorname{HL}} ^{\operatorname c}$. Given a collection $\mathcal{B}$ as above, we are typically interested in determining if the associated maximal operator $M_\mathcal{B}$ is bounded on $L^p(\mathbb{R}^n)$ for some $p$, and also what are the optimal weak type $(p,p)$ estimates that $M_\mathcal{B}$ satisfies.
For instance, it is well known that the uncentered Hardy-Littlewood maximal operator $M_{\operatorname{HL}}$ is bounded on $L^p(\mathbb{R}^n)$ for all $1 < p \leq \infty$ and that it satisfies the weak type $(1,1)$ estimate: $$| \{x\in\mathbb R^n:\, M_{\operatorname{HL}}f(x) > \alpha \} | \leq \frac{3^n}{\alpha} \|f \|_1.$$ Even weaker conditions on geometric maximal operators are so-called *Tauberian conditions*. The maximal operator $M_{\mathcal{B}}$ is said to satisfy a *Tauberian condition with respect to $\alpha\in(0,1)$* if there is some constant $C$ such that $$| \{x\in\mathbb R^n:\, M_{\mathcal{B}}\chi_E (x) > \alpha \}| \leq C|E|$$ holds for all measurable sets $E$. Note that the previous condition is only supposed to hold for some *fixed* $\alpha\in(0,1)$. Now, if $M_{\mathcal{B}}$ is known to satisfy a weak type $(1,1)$ estimate or to be bounded on $L^{p}$ for some $1 < p < \infty$, then it is easily seen that $M_{\mathcal{B}}$ must satisfy a Tauberian condition with respect to $\alpha$, for *all* $0 < \alpha < 1$. However, a maximal operator $M_{\mathcal{B}}$ can in fact satisfy a Tauberian condition with respect to some $0 < \alpha < 1$ without being $L^p$ bounded for any finite $p$. A quick example of this type of behavior can be exhibited by, say, letting $\mathcal{B}$ be the collection of all sets of the form $[0,1] \cup(x, x+2)$ and observing that, while $M_{\mathcal{B}}$ satisfies a Tauberian condition with respect to 4/5, it is not bounded on $L^p(\mathbb{R})$ for any $1 < p < \infty$. A Tauberian condition on a maximal operator, although quite weak, is still very useful, as was shown by A. Córdoba and R. Fefferman in their work [@CorF] relating the $L^{p}$ bounds of certain multiplier operators to the weak type $\big((\frac{p}{2})', (\frac{p}{2})'\big)$ bounds of associated geometric maximal operators; see [@CorF] for details. Moreover, Hagelstein and Stokolos have shown in [@hs] that, provided $\mathcal{B}$ is a homothecy invariant basis of convex sets in $\mathbb{R}^n$, if $\mathcal{B}$ satisfies a Tauberian condition with respect to *some* $0 < \alpha < 1$, then $M_{\mathcal{B}}$ must be bounded on $L^p(\mathbb{R}^n)$ for sufficiently large $p$. This work has recently been extended by Hagelstein, Luque, and Parissis in [@hlp] to yield weighted $L^p$ bounds on maximal operators satisfying a Tauberian condition with respect to a weighted basis. The issue of *sharp Tauberian constants* is one that has received very little attention until recently. For specificity, given a maximal operator $M_{\mathcal{B}}$, we define the Tauberian constant $C_{\mathcal{B}}(\alpha)$ by $$\label{e.CBalpha} C_{\mathcal{B}}(\alpha) \coloneqq \sup_{ E\subset \R^n :\, 0 < |E| < \infty}\frac{1}{|E|} | \{x \in \mathbb{R}^n:\, M_{\mathcal{B}}\chi_E (x) > \alpha \} |.$$ We note here that, in the relevant literature, the function $\phi_{\mathcal B}:[1,\infty)\to \mathbb R$ defined as $\phi_{\mathcal B}(\lambda)\coloneqq C_{\mathcal{B}}(1/\lambda)$, $\lambda>1$, is many times called the *Halo function* of the collection $\mathcal B$, as for example in [@Gu]. Obviously, it is equivalent to study the function $C_{\mathcal B} (\alpha)$ for $\alpha<1$ which is the setup we adopt in this paper. We will use the special notation $C_{\operatorname {HL},b}$, $C_{\operatorname {HL},c}$ and $C_{\operatorname S}$ for the sharp Tauberian constants corresponding to the basis of balls, cubes, and axes parallel rectangles, respectively. 
For the centered Hardy-Littlewood maximal operator we denote the corresponding sharp Tauberian constants by $C_{\operatorname{HL},b} ^{\operatorname c}$ and $C_{{\operatorname{HL},c} } ^{\operatorname c}$. Now, if the maximal operator $M_{\mathcal{B}}$ satisfies a weak type $(1,1)$ estimate $$| \{x \in \mathbb{R}^n : \, M_{\mathcal{B}}f(x) > \alpha \} | \leq \frac{C}{\alpha} \|f \|_{1},$$ then the associated sharp Tauberian constant $C_{\mathcal{B}}(\alpha)$ must satisfy $$C_{\mathcal{B}}(\alpha) \leq \frac{C}{\alpha}.$$ However, we might expect in many situations $C_{\mathcal{B}}(\alpha)$ to be significantly smaller than $C/ \alpha$. For example, even though the weak type $(1,1)$ bound of the uncentered Hardy-Littlewood maximal operator $M_{\operatorname{HL}}$ acting on functions on $\mathbb{R}$ is 2, we would suspect it unlikely to find a set $E$ contained in $\mathbb{R}$ such that $ | \{x \in \mathbb{R} :\, M_{\operatorname{HL}}\chi_E(x) > .99 \} | = 2|E|$. We will show momentarily that this indeed cannot be the case, and in fact that we must have $$\lim_{\alpha \rightarrow 1^-} C_{\operatorname{HL}}(\alpha)=\lim_{\alpha \rightarrow 1^-} \sup_{E \subset \mathbb{R}^n :\, 0<|E| < \infty}\frac{1}{|E|}\big|\big\{x\in\mathbb R^ n :\, M_{\operatorname{HL}}\chi_E(x) > \alpha\big\}\big| = 1 .$$ The first estimates along the lines of the one above were obtained by Solyanik in [@Solyanik]. In his honor, we call a result of the form $$\lim_{\alpha \rightarrow 1^-}C_{\mathcal{B}}(\alpha) = 1$$ a *Solyanik estimate*. \[t.solyanik\] We have the following Solyanik estimates: $$\lim_{\alpha \rightarrow 1^-} C_{\operatorname{HL},c}(\alpha) =1\quad\text{and}\quad \lim_{\alpha \rightarrow 1^-} C_{\operatorname{S}}(\alpha) =1.$$ In particular, $$C_{\operatorname{HL},c}(\alpha)-1\sim_n \big(\frac{1}{\alpha}-1\big)^\frac{1}{n}\quad \text{and}\quad C_{\operatorname{S}}(\alpha)-1\sim_n \big(\frac{1}{\alpha}-1\big)^\frac{1}{n}.$$ For the sharp Tauberian constant of the centered Hardy-Littlewood maximal operator (with respect to cubes or balls) we have $$\lim_{\alpha \rightarrow 1^-} C_{\operatorname{HL},b} ^{\operatorname c} (\alpha)=1\quad ;\quad\lim_{\alpha \rightarrow 1^-} C_{\operatorname{HL}, c} ^{\operatorname c} (\alpha)=1$$ and in particular $$C_{\operatorname{HL},b} ^{\operatorname c} (\alpha)-1\sim_n \frac{1}{\alpha}-1\quad;\quad C_{\operatorname{HL},c} ^{\operatorname c} (\alpha)-1\sim_n \frac{1}{\alpha}-1$$ as $\alpha\to 1^{-}$. Note that Solyanik’s theorem does not include an estimate for $C_{\operatorname{HL},b}$ associated to the uncentered Hardy-Littlewood maximal operator with respect to balls, $M_{\operatorname{HL},b}$. Indeed, Solyanik concludes the estimate for $C_{\operatorname{HL},c }$ as a corollary of estimate for $C_S$ and thus the methods in his paper do not readily apply to non-centered maximal operators defined with respect to balls. However, the method of Solyanik for centered maximal operators deals equally well with balls or cubes. This is because the basic underlying ingredient for these estimates in the case of centered operators is the Besicovitch covering lemma which works equally well for balls or cubes. 
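Before proceeding, it is instructive to record the simplest one-dimensional computation, which already illustrates the behavior described above. For $E=(0,1)\subset\mathbb{R}$ one readily checks that $$M_{\operatorname{HL}}\chi_{(0,1)}(x)= \begin{cases} 1, & x\in[0,1],\\ 1/x, & x>1,\\ 1/(1-x), & x<0, \end{cases}$$ so that $\{x\in\mathbb{R}:\, M_{\operatorname{HL}}\chi_{(0,1)}(x)>\alpha\}=(1-\tfrac{1}{\alpha},\tfrac{1}{\alpha})$ has measure $\tfrac{2}{\alpha}-1$. This single example already gives the lower bound $C_{\operatorname{HL}}(\alpha)\geq \tfrac{2}{\alpha}-1=1+2\big(\tfrac{1}{\alpha}-1\big)$: the right hand side grows like the weak type $(1,1)$ bound $2/\alpha$ as $\alpha\to0^+$, while it tends to $1$ as $\alpha\to1^-$, with exactly the exponent appearing in Theorem \[t.solyanik\] for $n=1$.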
We now introduce the directional maximal operator $M_{j}$, $j=1,\ldots,n$, acting on $\mathbb R^n$ and defined by $$M_{j}f(x_1, \ldots, x_n) \coloneqq \sup_{s < x_j < t}\frac{1}{t-s}\int_{s}^{t}|f(x_1, \ldots, x_{j-1}, u, x_{j+1}, \ldots, x_n)|\,du.$$ In the next section we will prove a Solyanik estimate for the iterated maximal operator $M_1\cdots M_n$, namely that we have $$\lim_{\alpha \rightarrow 1^-} \sup_{E \subset \mathbb{R}^n :\, 0<|E| < \infty}\frac{1}{|E|}\big|\big\{x \in \mathbb{R}^n:\, M_1\cdots M_n\chi_E(x) > \alpha\big\}\big| = 1.$$ This will be done by proving a Solyanik estimate for $M_{\operatorname{HL}}$ on $\mathbb{R}^{1}$ by a means different than Solyanik did in [@Solyanik] but one enabling us to afterwards apply induction to get the desired estimate for $M_{1}\cdots M_{n}$. Subsequently we will provide a Solyanik estimate for the uncentered maximal operator $M_{\operatorname{HL}, b}$ by utilizing the circle of ideas developed by A. Córdoba and R. Fefferman in their work [@CorF75] relating covering lemmas to weak type bounds of geometric maximal operators. Afterwards we will visit the issue of generalizing Solyanik estimates to encompass maximal operators $M_{\mathcal{B}}$ where $\mathcal{B}$ is a homothecy invariant collection of convex sets. Throughout this paper we will indicate open problems and directions for further research. Notation {#notation .unnumbered} -------- We write $A\lesssim B$ whenever there is a numerical constant $c>0$ such that $A\leq c B$. We also write $A\sim B$ if $A\lesssim B$ and $B\lesssim A$. If the constant $c$ depends for example on the dimension $n$ we will write $A\sim_n B$. Solyanik Estimates for Iterated Maximal Functions ================================================= The key result in this section is the following lemma. \[l.oned\] Let $M_{\operatorname{HL}}$ denote the uncentered Hardy-Littlewood maximal operator acting on functions on $\mathbb{R}$. Let $E \subset \mathbb{R}$, where $|E| < \infty$, and let $0 \leq \gamma < \alpha < 1$. Then $$\label{e1} | \{x \in \mathbb{R} :\, M_{\operatorname{HL}}(\chi_{E} + \gamma \chi_{E^\mathtt{c}})(x) > \alpha \} | \leq \bigg(1 + 4 \frac{1-\alpha}{\alpha - \gamma}\bigg) |E |.$$ We first prove the lemma for $\gamma>0$. Let $f_{E, \gamma}$ be the function defined on $\mathbb{R}$ by $$f_{E, \gamma}(x) = \chi_E(x) + \gamma\chi_{E^\mathtt{c}}(x) .$$ Let $ \{I_{j} \}$ be a countable collection of intervals such that $$\{x \in \mathbb{R}:\, M_{\operatorname{HL}}(\chi_{E} + \gamma \chi_{E^\mathtt{c}})(x) > \alpha \} = \cup_j I_j$$ and such that, for each $j$, $$\frac{1}{|I_j|}\int_{I_j}f_{E, \gamma} > \alpha .$$ We now fix some $\epsilon > 0$. As $M_{\operatorname{HL}}$ is of weak type $(1,1)$ we must have that $$\{x \in \mathbb{R}: \, M_{\operatorname{HL}}(\chi_{E} + \gamma \chi_{E^\mathtt{c}})(x) > \alpha\}$$ is of finite measure. Indeed, if $ M_{\operatorname{HL}}(\chi_{E} + \gamma \chi_{E^\mathtt{c}})(x) > \alpha$ then we must have $M_{\operatorname{HL}}\chi_E(x) > \alpha - \gamma$. Accordingly there exists a finite subcollection $\{I_{j}' \}$ of $ \{I_{j} \}$ such that $$| \{x \in \mathbb{R} :\, M_{\operatorname{HL}}(\chi_{E} + \gamma \chi_{E^\mathtt{c}})(x) > \alpha \} \setminus \cup_j I_{j}' | < \epsilon .$$ Arguing as in [@baf]\*[p. 24]{} we see that there exists a collection of intervals $\{\tilde{I}_j\}_j$ contained in $ \{I'_j \}_j$ such that $\cup_j \tilde{I}_j = \cup_j I'_j$ and $\sum_j \chi_{\tilde{I}_j} \leq 2$. 
Since $\frac{1}{|\tilde{I}_j|} \int_{\tilde{I}_j}f_{E, \gamma} > \alpha$, we have $$|E \cap \tilde{I}_j| + \gamma|\tilde{I}_j \setminus E| > \alpha |\tilde{I}_j|,$$ implying $$\frac{|E \cap \tilde{I}_{j}|}{|\tilde{I}_{j}|} > \frac{\alpha - \gamma}{1 - \gamma}.$$ So $$\begin{split} \big|\big\{x \in \mathbb{R} :\, M_{\operatorname{HL}}f_{E, \gamma}(x) > \alpha\big\}\big| &\leq |E| + \frac{1 - \alpha}{1 - \gamma}\sum|\tilde{I}_j| + \epsilon \notag \\ &\leq |E| + 2 \frac{1 - \alpha}{1 - \gamma}|\cup \tilde{I}_j| + \epsilon. \end{split}$$ As we have shown that $\frac{1}{|\tilde{I}_j|}\int_{\tilde{I}_j} f_{E, \gamma} > \alpha$ implies $$\frac{|E \cap \tilde{I}_j|}{|\tilde{I}_{j}|} > \frac{\alpha - \gamma}{1 - \gamma},$$ we have $$\cup_j \tilde{I}_{j} \subset \bigg\{x \in \mathbb{R}:\, M_{\operatorname{HL}}\chi_{E}(x) > \frac{\alpha - \gamma}{1 - \gamma}\bigg\}.$$ So by the weak type $(1,1)$ bound of 2 of $M_{\operatorname{HL}}$ on $\mathbb{R}^{1}$, we have $$|\cup \tilde{I}_{j}| \leq 2 \frac{1 - \gamma}{\alpha - \gamma}|E|$$ and accordingly $$\big|\big\{x \in \mathbb{R}:\, M_{\operatorname{HL}}f_{E, \gamma}(x) > \alpha\big\}\big| \leq \bigg(1 + 4\frac{1 - \alpha}{\alpha - \gamma}\bigg)|E| + \epsilon.$$ As $\epsilon > 0$ was arbitrary we obtain the desired result in the case $\gamma>0$. Now observe that for any $\alpha,\delta>0$ we have $$|\{x\in \mathbb R:\, M_{\operatorname{HL}}(\chi_E)>\alpha \}|\leq |\{x\in\mathbb R:\, M_{\operatorname{HL}}f_{E,\delta}>\alpha\}|\leq \bigg(1 + 4\frac{1 - \alpha}{\alpha - \delta}\bigg)|E|$$ by the case already proved. Since the left hand side of the estimate above does not depend on $\delta$ we can let $\delta\to 0^+$ to get the lemma for $\gamma=0$ as well. We now iterate the above estimate to yield a Solyanik estimate for the iterated maximal operator $M_{1}\cdots M_{n}$. \[l.iterated\] Setting $\alpha_{0} = 0$ and $0 < \alpha_{1} < 1$, define $\alpha_j$, $j= 2, 3, 4, \ldots, n$ by $$\alpha_j = 1 - (1 - \alpha_1 )^j .$$ Then $$\big|\big\{x \in \mathbb{R}^{n} :\, M_{1}\cdots M_{n}\chi_E(x) > \alpha_n\big\}\big| \leq \bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)^n |E |$$ holds for every measurable set $E$ in $\mathbb{R}^{n}$. We proceed by proving $$| \{x \in \mathbb{R}^{n} : \, M_{1}\cdots M_{N}\chi_E(x) > \alpha_N \} | \leq \bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)^N |E | , \quad N = 1, \ldots, n,$$ by induction on $N$. Note $$| \{x \in \mathbb{R}^n : \, M_{1}\chi_E(x) > \alpha_1 \} | \leq \bigg(1 + 4 \frac{1 - \alpha_1}{\alpha_1}\bigg)|E|$$ holds by Lemma \[l.oned\], seen by setting $\alpha = \alpha_1$, $\gamma = 0$. Suppose now $$| \{x \in \mathbb{R}^{n} : \, M_1 \cdots M_j \chi_{E}(x) > \alpha_j \} | \leq \bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)^j |E |.$$ Let $$E_j \coloneqq \{x \in \mathbb{R}^n :\, M_1 \cdots M_j\chi_E(x) > \alpha_j \}.$$ Observe that the $\alpha_j$ satisfy $$\frac{1 - \alpha_{j+1}}{\alpha_{j+1} - \alpha_j} = \frac{1 - \alpha_j}{\alpha_j - \alpha_{j-1}},$$ implying $$\frac{1 - \alpha_{j+1}}{\alpha_{j+1} - \alpha_j} = \frac{1 - \alpha_j}{\alpha_j - \alpha_{j-1}} = \cdots = \frac{1 - \alpha_1}{\alpha_1}.$$ Also, for any $j$ we have $$\begin{split} M_{j+1}M_{1}\cdots M_j\chi_E (x) &=M_{j+1}(\chi_{E_j}M_{1}\cdots M_j\chi_E+\chi_{E_j ^\mathtt{c}} M_{1}\cdots M_j\chi_E)(x) \\ & \leq M_{j+1}(\chi_{E_j} + \alpha_j \chi_{E_j ^\mathtt{c}})(x). 
\end{split}$$ Hence $$\begin{aligned} &\big|\big\{x \in \mathbb{R}^{n} :\, M_{j+1}M_{1}\cdots M_j\chi_E (x) > \alpha_{j+1}\big\}\big| \\ &\leq \big|\big\{x \in \mathbb{R}^{n} :\, M_{j+1} (\chi_{E_j} + \alpha_j \chi_{E_{j}^\mathtt{c}} )(x) > \alpha_{j+1}\big\}\big| \\ &\leq \bigg(1 + 4\frac{1 - \alpha_{j+1}}{\alpha_{j+1} - \alpha_j}\bigg)|E_j|\quad\quad\text{(by Lemma~\ref{l.oned})} \\ &\leq \bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)\bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)^{j}|E| \\ &\leq \bigg(1 + 4\frac{1 - \alpha_{1}}{\alpha_1}\bigg)^{j+1}|E|.\end{aligned}$$ Since this holds for every measurable set $E$ in $\mathbb{R}^n$, by symmetry we have $$\big|\big\{x \in \mathbb{R}^n :\, M_{1}\cdots M_{j+1}\chi_E(x) > \alpha_{j+1}\big\}\big| \leq \bigg(1 + 4\frac{1 - \alpha_1}{\alpha_1}\bigg)^{j+1}|E|.$$ Setting $j = n - 1$ yields the desired result. \[t.iterated\] Let $0 < \alpha < 1$. Then $$\big|\big\{x \in \mathbb{R}^n :\, M_1\cdots M_n\chi_E(x) > \alpha \big\}\big| \leq \bigg(1 + 4 \frac{(1 - \alpha)^{1/n}}{1 - (1 - \alpha)^{1/n}}\bigg)^n |E| .$$ Accordingly, letting $C_{1\cdots n}(\alpha)$ denote the sharp Tauberian constant with respect to $\alpha$ of $M_1\cdots M_n$, we have $$C_{1\cdots n}(\alpha) - 1 \sim_n (\frac{1}{\alpha} - 1)^{1/n} .$$ Using the notation of the previous lemma, we let $\alpha_n = \alpha$. The corresponding $\alpha_1$ satisfies $$\alpha = 1 - (1 - \alpha_1)^n,$$ implying that $$\alpha_1 = 1 - (1 - \alpha)^{1/n}.$$ The result follows by Lemma \[l.iterated\]. Solyanik Estimates for the Uncentered Hardy-Littlewood maximal operator ======================================================================= The primary goal in this section is to provide a Solyanik estimate for the uncentered Hardy-Littlewood maximal operator $M_{\operatorname{HL}, b}$. \[un.solyanik\] Let $M_{\operatorname{HL}, b}$ denote the non-centered Hardy-Littlewood maximal operator, defined with respect to balls in $\mathbb{R}^n$. Then we have the corresponding Solyanik estimate $$\lim_{\alpha \rightarrow 1^{-}} C_{\operatorname{HL}, b}(\alpha) = 1\;.$$ In particular we have that $$C_{\operatorname{HL}, b}(\alpha) - 1 \lesssim_n \big( \frac{1}{\alpha} - 1 \big)^{\frac{1}{n+1}}$$ as $\alpha \rightarrow 1^{-}$. Let $0 < \alpha < 1$, and let $E$ be a set of finite measure in $\mathbb{R}^n$. Let $\{B_j\}$ be a collection of balls such that $$\left\{x \in \mathbb{R}^n : M_{\operatorname{HL}, b}\chi_E (x) > \alpha\right\} = \cup_j B_j,$$ where every $B_j$ satisfies $$\frac{1}{|B_j|} \int_{B_j}\chi_E > \alpha.$$ Without loss of generality we may assume that $\{B_{j}\}_j$ is a finite collection $\{B_{j}\}_{j=1}^N$ as our estimates of $|\cup B_j|$ will be independent of $N$. We reorder the balls $B_j$ so that they are nonincreasing in size, i.e. $$|B_1| \geq |B_2| \geq \cdots\geq |B_N|.$$ We will now obtain a subcollection $\{\tilde{B}_j\}_j$ using a selection algorithm motivated by ideas of A. Córdoba and R. Fefferman in [@CorF75]. Let $1 > \delta > 0$; here we think of $\delta$ as being very close to 0. We choose $\tilde{B}_1 = B_1$. Assume $\tilde{B}_1, \ldots, \tilde{B}_k$ have been selected and suppose that $\tilde B_k=B_M$ for some positive integer $M<N$. 
We let $\tilde{B}_{k+1}$ be the first $B_j$ on the list $B_{M+1}, B_{M+2}, \ldots,B_N$ such that $$|B_{j} \cap (\cup_{i=1}^k \tilde{B}_i)| \leq (1-\delta) |B_j|.$$ If such a $B_j$ does not exist, the list of selected balls terminates with $\tilde{B}_k.$ Let now $x \in \{x \in \mathbb{R}^n : M_{\operatorname{HL}, b}\chi_E(x) > \alpha\}$ so $x$ necessarily lies in one of the balls $B_j$. Suppose for the moment that $B_j$ is *not* one of the selected balls. Let $B_x$ be a ball of volume $\delta |B_j|$ containing $x$ and contained in $B_{j}$. Since $B_j$ was not selected, $B_x$ must intersect a $\tilde{B}_k$ of size larger than that of $B_j$. As the radius of $B_x$ is less than $\delta^{1/n}$ times the radius of $\tilde{B}_k$, by the triangle inequality we have $x \in (1 + 2 \delta^{1/n})\tilde{B}_k$, where for a ball $B$ in $\mathbb{R}^n$ we let $cB$ denote the $c$-fold concentric dilate of $B$. So $$\{x \in \mathbb{R}^n : M_{\operatorname{HL}, b}\chi_E(x) > \alpha\} \subset \cup_j (1 + 2 \delta^{1/n})\tilde{B}_j.$$ Let now $$\tilde{E}_j \coloneqq \tilde{B}_j \backslash \cup_{i=1}^{j-1}\tilde{B}_i.$$ We have that $$\left|\{x \in \mathbb{R}^n : M_{\operatorname{HL}, b}\chi_E(x) > \alpha\}\right| \leq \sum_j (1 + 2 \delta^{1/n})^n|\tilde{E}_j|.$$ Since for each $j$ we have $\frac{1}{|\tilde{B}_j|}\int_{\tilde{B}_j}\chi_E > \alpha$ and moreover $|\tilde{E}_j| / |\tilde{B}_j| > \delta$, we conclude $$\begin{split} \frac{1}{|\tilde{E}_j|}\int_{\tilde{E}_j}\chi_E &\geq \big[ \alpha|\tilde{B}_j| - (|\tilde{B}_j| - |\tilde{E}_j|)\big]/ |\tilde{E}_j| \\ &\geq 1 - (1-\alpha){|\tilde{B}_j|}/{|\tilde{E}_j|} \geq 1 -(1-\alpha ) \delta^{-1} \\ &\geq [ \delta - (1 - \alpha)]/ \delta. \end{split}$$ Placing an additional restriction on $\delta$ by requiring that $1 > \delta > 1 - \alpha$, we have $$|\tilde{E}_j| < \frac{\delta}{\delta - (1 - \alpha)}|E \cap \tilde{E}_j|.$$ As the $\tilde{E}_j$ are disjoint, we then have $$|\{x \in \mathbb{R}^n : M_{\operatorname{HL}, b}\chi_E(x) > \alpha\}| \leq (1 + 2 \delta^{1/n})^n\frac{\delta}{\delta - (1 - \alpha)}|E|.$$ Setting $\delta = (1 - \alpha)^{\frac{n}{n+1}}$ then yields the desired estimate. We strongly suspect that the bound $( \frac{1}{\alpha} - 1 )^{1/(n+1)}$ is not sharp, as indicated by the following example. \[exa.slab\] Let $E$ be the $n$-dimensional rectangle $$E\coloneqq[-100,100]\times\cdots\times[-100,100]\times[-1,1].$$
![A ball $B$ intersecting the slab $E$; the portion of $B$ inside $E$ has volume $\alpha|B|$.](slab.pdf)
Consider a ball $B$ of radius $1$ intersecting the rectangle $E$ on one of its long sides and away from its corners, so that a $(1-\alpha)$ portion of $|B|$ lies outside $E$. One can calculate that the union of all such balls constitutes a region of measure approximately $(1 + h)|E|$ with $h\simeq_n (\frac{1}{\alpha} - 1)^\frac{2}{n+1}$. We conclude $$C_{\operatorname{HL,b}}(\alpha)-1\gtrsim_n(\frac{1}{\alpha} - 1)^\frac{2}{n+1}.$$ In contrast, by doing a similar calculation with a unit cube $Q$ meeting the set $E$ at an angle $\pi/4$ we get $h\simeq_n (\frac{1}{\alpha} - 1)^\frac{1}{n}$.
This proves the lower bound $$C_{\operatorname{HL},c}(\alpha)-1\gtrsim_n\big(\frac{1}{\alpha} - 1\big)^\frac{1}{n}.$$
![A cube $Q$ intersecting the slab $E$; the portion of $Q$ inside $E$ has volume $\alpha|Q|$.](slab_cube.pdf)
Observe that the latter calculation indicates that the Solyanik estimate for iterated maximal functions provided by Theorem 2 is sharp. Moreover, the fact that the slab example provides a *better* Solyanik estimate for $M_{\operatorname{HL}, b}$ inclines us to believe that Theorem 3 is not sharp, and a more refined argument might prove the following: a\) We have the asymptotic estimate $$C_{\operatorname{HL}, b}(\alpha) - 1 \sim_n \big(\frac{1}{\alpha}- 1\big)^{\frac{1}{n}}$$ as $\alpha\to 1^{-}$. The exponent here is a natural one to consider, as $\big(\frac{1}{\alpha}- 1\big)^{\frac{1}{n}}$ is the sharp Solyanik exponent associated to $M_{\operatorname{HL}, c}$ and $M_1 \cdots M_n$. b\) A stronger asymptotic estimate, motivated by Example \[exa.slab\] above, would be that $$C_{\operatorname{HL}, b}(\alpha) - 1 \sim_n\big( \frac{1}{\alpha} - 1\big)^{\frac{2}{n+1}}$$ as $\alpha\to 1^{-}$. Solyanik estimates for homothecy invariant bases of convex sets =============================================================== With the Solyanik estimates associated to Theorems \[t.solyanik\]-\[un.solyanik\] in hand, it is natural to try to extend these types of results to encompass maximal operators such as the maximal operator with respect to rectangles along lacunary directions. Rather than focus our attention on a particular maximal operator, we will here consider the following more general problem: Let $\mathcal{B}$ denote a collection of open bounded sets in $\mathbb{R}^n$ and $M_{\mathcal{B}}$ the associated geometric maximal operator. Define the associated Tauberian constants $C_{\mathcal{B}}(\alpha)$ by $$C_{\mathcal{B}}(\alpha) \coloneqq \sup_{E :\, 0 < |E| < \infty}\frac{1}{|E|} | \{x \in \mathbb{R}^n:\, M_{\mathcal{B}}\chi_E (x) > \alpha \}|.$$ For which $\mathcal{B}$ do we have $$\lim_{\alpha \rightarrow 1^-}C_{\mathcal{B}}(\alpha) = 1?$$ We would expect that the maximal operator $M_{\mathcal{B}}$ should be somewhat well-behaved in order to have $\lim_{\alpha \rightarrow 1^-}C_{\mathcal{B}}(\alpha) = 1$, as such an estimate would not hold if $\mathcal{B}$ were, say, the collection of all rectangles in $\mathbb{R}^2$. However, simple $L^p$ boundedness of $M_{\mathcal{B}}$, or even a weak type $(1,1)$ bound on $M_{\mathcal{B}}$, is not enough to guarantee that $M_{\mathcal{B}}$ satisfies a Solyanik estimate, as is indicated by the following example of Beznosova and Hagelstein found in [@BH]. \[exa.nonconvex\] Let $\mathcal{B}$ consist of all the homothecies of sets in $\mathbb{R}$ in the collection $$\{((0,1) \cup (x, x+ \epsilon))\cap(0,2):\, x \in (0,2),\, \epsilon > 0\}.$$ The operator $M_{\mathcal{B}}$ is dominated by twice the Hardy-Littlewood maximal operator and hence is bounded on $L^{p}(\mathbb{R})$ for $1 < p \leq \infty$ and is of weak type $(1,1)$. Observe however that $M_{\mathcal{B}}\chi_{(0,1)} = 1$ on $(0,2)$ and hence we have that $\lim_{\alpha \rightarrow 1^{-}} C_{\mathcal{B}}(\alpha) \geq 2$.
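Indeed, given $x\in(0,2)$ and any sufficiently small $\epsilon>0$, the set $R_\epsilon\coloneqq\big((0,1)\cup(x-\tfrac{\epsilon}{2},x+\tfrac{\epsilon}{2})\big)\cap(0,2)$ belongs to $\mathcal{B}$, contains $x$, contains all of $(0,1)$, and has measure at most $1+\epsilon$; hence $\frac{1}{|R_\epsilon|}\int_{R_\epsilon}\chi_{(0,1)}\geq \frac{1}{1+\epsilon}$, and letting $\epsilon\to 0^+$ shows that $M_{\mathcal{B}}\chi_{(0,1)}=1$ on all of $(0,2)$, a set of twice the measure of $(0,1)$.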
Note that the sets in the collection $\mathcal{B}$ above are not all *convex*. We have previously seen convexity play an important role in problems involving Tauberian conditions, examples including the previously mentioned work of Hagelstein and Stokolos [@hs] and Hagelstein, Luque, and Parissis [@hlp]. This naturally leads us to the following conjecture involving convex density bases. (Recall that a *density basis* $\mathcal{B}$ in $\mathbb{R}^n$ is a collection of sets for which $$\lim_{\substack{x \in R \in B \\ \operatorname{diam}(R) \rightarrow 0}}\frac{1}{|R|}\int_R \chi_E = \chi_E (x)$$ holds for a.e. $x\in\R^n$, for every set $E \subset \mathbb{R}^n$ of finite measure. An important result of Busemann and Feller is that the maximal operator $M_{\mathcal{B}}$ associated to a homothecy invariant density basis $\mathcal{B}$ satisfies a Tauberian condition with respect to $\alpha$ for every $\alpha > 0$. See [@BF; @Gu] for details.) \[conj.cont\] Let $\mathcal{B}$ be a homothecy invariant density basis of bounded convex sets in $\mathbb{R}^n$. Then the associated Tauberian constants $C_{\mathcal{B}}(\alpha)$ satisfy $$\lim_{\alpha \rightarrow 1^-}C_{\mathcal{B}}(\alpha) = 1.$$ The following theorem provides some evidence that the above conjecture is on the right track. \[t.singlesol\] Let $\mathcal{B}$ be a homothecy invariant density basis of convex sets in $\mathbb{R}^n$. Then $$\big|\big\{x \in \mathbb{R}^n :\, M_{\mathcal{B}}\chi_E(x) = 1\big\}\big| = |E|$$ holds for every measurable set $E$ in $\mathbb{R}^n$. To appreciate the role that convexity plays in the following argument, observe that the conclusion of this theorem does *not* hold when $\mathcal{B}$ is the homothecy invariant collection of sets indicated in Example \[exa.nonconvex\] above. Let us fix some measurable set $E\subset\mathbb \mathbb R^n$ with $|E|>0$. Since $\mathcal{B}$ is a density basis, for a.e. $x \in \mathbb{R}^n$ we have that $$\lim_{j \rightarrow \infty}\frac{1}{|R_{x,j}|}\int_{R_{x,j}}\chi_E= \chi_E(x),$$ where $R_{x,j}$ is any sequence of sets in $\mathcal{B}$ containing $x$ whose diameters tend to $0$; for this and other basic properties of density bases, see [@Gu]\*[Ch. III]{}. So $$E \subset \{x \in \mathbb{R}^n:\, M_{\mathcal{B}}\chi_E(x) = 1\} \quad\text {a.e.}$$ and in particular $$\label{e.singlesol} |E| \leq | \{x \in \mathbb{R}^n :\, M_{\mathcal{B}}\chi_E(x) = 1 \} | .$$ If $|E| = \infty$ the theorem automatically holds so we may assume without loss of generality that $|E| < \infty$. The rest of the proof is by way of contradiction and the argument is divided into two basic steps. ### Step 1: {#step-1 .unnumbered} Suppose that fails. Then there exists a set $A\subset E^\mathtt{c}$ with $|A|>0$ such that, for every $x\in A$ there exists a sequence of sets $\{R_{x,j}\}_j \subset \mathcal B$ satisfying $x\in R_{x,j}$ for all $j$, $\lim_{j\to +\infty}\operatorname{diam}(R_{x,j})=+\infty$ and $$\label{e.averageto1} \frac{1}{|R_{x,j}|}\int_{R_{x,j}} \chi_E>1-\frac{1}{j},\quad j=2,3,\ldots \, .$$ We now prove this claim. Assuming that fails and letting $$\mathcal H_E\coloneqq \{x \in \mathbb{R}^n :\, M_{\mathcal{B}}\chi_E(x) = 1 \}\setminus E$$ we have that $|\mathcal H_E|>0$. Now let $A$ denote the set $$A= \mathcal H_E \cap\bigg\{x\in E^\mathtt{c}: \lim_{\substack {x\in R\in\mathcal B\\ \operatorname{diam}(R)\to 0}}\frac{1}{|R|}\int_{R}\chi_E =0 \bigg\}.$$ Since $\mathcal B$ is a density basis we have that $|A|=|\mathcal H_E|>0$. We fix $x\in A$. 
Since $x\in \mathcal H _E $ we conclude that for every positive integer $j\geq 2$ there exists a sequence $\{R_{x,j}\}_j\subset \mathcal B$, $x\in R_{x,j}$ for each $j$ and holds. It remains to show that $\lim_{j\to +\infty}\operatorname{diam}(R_x,j)=+\infty$. By the definition of $A$ there exists $\delta=\delta_x>0$ such that $$x\in R\in\mathcal B, \, \operatorname{diam}(R)<\delta \Rightarrow \frac{1}{|R|}\int_R \chi_E <\frac{1}{2}.$$ Furthermore, it is clear that $\inf_j \operatorname{diam}(R_{x,j})\geq c>0$ otherwise the averages in would have a subsequence converging to $0$. The previous discussion and the convexity hypothesis for the collection $\mathcal B$ imply that there exists a homothetic copy $S_{R_j}$ of $R_{x,j}$ with $\operatorname{diam}(S_{R_j})=\frac{1}{2}\min(c,\delta)$ that satisfies $$x\in S_{R_j}\subset R_{x,j} \quad\text{and}\quad \frac{|E\cap S_{R_j}|}{|S_{R_j}|}<\frac{1}{2}.$$ It is essential to notice here that the diameter of $S_{R_j}$ is independent of $j$. We have $$\begin{aligned} 1-\frac{1}{j}\leq \frac{|E\cap R_{x,j}|}{|R_{x,j}|}&=\frac{|E\cap S_{R_j} |}{|R_{x,j}|}+ \frac{|E\cap R_{x,j}\setminus S_{R_j} |}{|R_{x,j}|} \\ &\leq \frac{|E\cap S_{R_j} |}{|R_{x,j}|}+\frac{|R_{x,j}|-|S_{R_j}|}{|R_{x,j}|} \\ &= \frac{|E\cap S_{R_j}|}{|S_{R_j}|}\bigg(\frac{\operatorname{diam}(S_{R_j})}{\operatorname{diam}(R_{x,j})}\bigg)^n+1-\bigg(\frac{\operatorname{diam}(S_{R_j})}{\operatorname{diam}(R_{x,j})}\bigg)^n \\ &\leq 1-\frac{1}{2}\bigg(\frac{\operatorname{diam}(S_{R_j})}{\operatorname{diam}(R_{x,j})}\bigg)^n.\end{aligned}$$ Thus we have $$\operatorname{diam}(R_{x,j}) \geq \frac{\operatorname{diam}(S_{R_j}) }{2^\frac{1}{n}}j^\frac{1}{n}=\frac{\frac{1}{2}\min(c,\delta) }{2^\frac{1}{n}}j^\frac{1}{n} \to +\infty\quad\text{as}\quad j\to+\infty.$$ This proves the claim of the first step. ### Step 2: {#step-2 .unnumbered} Suppose that $\{R_j\}_j$ is a sequence of convex sets whose diameters satisfy $\operatorname{diam}(R_j)\to+\infty$ and $\sup_j |R_j|<+\infty$. Then for any bounded set $B$ we have that $\lim_{j\to+\infty}|B\cap R_j|= 0$. To see this note that every convex set in $\mathbb R^n$ is contained in a rectangle of comparable volume. Thus we can assume that $\{R_j\}_j$ is a sequence of rectangles in $\mathbb R^n$. Since $\sup_j |R_j|<+\infty$ and the diameters of the rectangles $R_j$ tend to infinity we conclude that there is a one-dimensional side $I_j$ of $R_j$ such that $\lim_{j\to+\infty}|I_j|=0$. The claim now follows since $$|R_j\cap B|\leq |I_j| |\operatorname{diam}(B)|^{n-1}\to 0\quad\text{as}\quad j\to+\infty.$$ We can now conclude the proof of theorem. Assuming  does not hold let us consider the set $A$ provided by the first step above. We fix some ball $B(0,r)$ and $x\in A$ and $R_{x,j}\ni x$ as in the first step. Note that, necessarily, $\sup_j|R_{x,j}|<+\infty$ because of the validity of . Thus $$\frac{|B(0,r)^\mathtt{c} \cap E\cap R_{x,j}|}{|R_{x,j}|}\geq \frac{ |E\cap R_{x,j}|}{|R_{x,j}|}-\frac{|B(0,r)\cap R_{x,j}|}{|R_{x,j}|}\to 1\quad\text{as}\quad j\to+\infty$$ by and the statement of the second step. 
This implies that for any $r>0$ and $0<\lambda<1$ we have $$A\subset \{x\in \mathbb R^n: M(\chi_{E\cap B(0,r) ^\mathtt{c}})>\lambda\}.$$ However $\mathcal B$ is a homothecy invariant density basis so by the Tauberian condition we should have $$0<|A|\leq |\{x\in \mathbb R^n: M(\chi_{E\cap B(0,r) ^\mathtt{c}})>\lambda\}|\leq c(\lambda)|E\cap B^\mathtt{c}(0,r)|$$ which is clearly a contradiction since $|E|<+\infty$ and thus $|E\cap B(0,r)^\mathtt{c}|\to 0$ as $r\to+\infty$. We are quickly exhausting all that we know at the moment regarding Solyanik estimates in harmonic analysis. As a closing remark, it is worth noting that Theorem \[t.singlesol\] provides a viable strategy to proving Conjecture \[conj.cont\]. Namely, to prove Conjecture \[conj.cont\] it now suffices to prove the following: Let $\mathcal{B}$ be a homothecy invariant density basis of convex sets in $\mathbb{R}^n$. Suppose for some $\gamma > 1$ we have that, for every $0 < \alpha < 1$, there exists a set $E_{\alpha, \gamma}$ such that $$\big|\big\{x \in \mathbb{R}^n :\, M_{\mathcal{B}}\chi_{E_{\alpha, \gamma}}(x) > \alpha\big\}\big| \geq \gamma |E_{\alpha, \gamma}|.$$ Then there exists a set $E_{\gamma}$ and a constant $c(\gamma)>1$ such that $$\big|\big\{x \in \mathbb{R}^n :\, M_{\mathcal{B}}\chi_{E_{\gamma}}(x) = 1\big\}\big| \geq c(\gamma) |E_{\gamma}|.$$ [^1]: P. H. is partially supported by a grant from the Simons Foundation (\#208831 to Paul Hagelstein). [^2]: I. P. is supported by the Academy of Finland, grant 138738.
--- abstract: 'We give the full set of $S$ matrices for extensions of $D(n)_1$ permutation orbifolds, extending our previous work to the yet unknown case of integer spin spinor currents. The main tool is triality of $SO(8)$. We also provide fixed point resolution matrices for spinor currents of $D(n)_1$ permutation orbifolds with $n$ even and not multiple of four, where the spinor currents have half-integer spin.' author: - | M. Maio$^{1}$ and A.N. Schellekens$^{1,2,3}$\  \  \ \ $^1$Nikhef Theory Group, Amsterdam, The Netherlands\  \ $^2$IMAPP, Radboud Universiteit Nijmegen, The Netherlands\  \ $^3$Instituto de Física Fundamental, CSIC, Madrid, Spain title: 'Complete analysis of extensions of $D(n)_1$ permutation orbifolds' --- Introduction ============ In our previous paper [@Maio:2009kb] we studied the structure of order-two simple currents in permutation orbifolds in two-dimensional conformal field theories [@Belavin:1984vu]. The main tool was the BHS $S$ matrix for the permutation orbifold [@Klemm:1990df; @Borisov:1997nc]. In general they can only be generated from diagonal fields that correspond to simple currents in the mother theory, while their fixed points can come from both the untwisted (diagonal and off-diagonal) and twisted sector. In the same paper we also considered extensions of the permutation orbifold and fixed point resolution. Simple current extensions are useful tools in conformal field theories and string theory [@Schellekens:1989am; @Schellekens:1989dq; @Intriligator:1989zw] but their modular transformation matrices are often quite non-trivial due to fixed points [@Schellekens:1999yg; @Schellekens:1989uf]. In [@Maio:2009kb] we derived $S$ matrices for extensions in the case of $SU(2)_2$, $B(n)_1$ and $D(n)_1$ WZW models [@Knizhnik:1984nr; @Gepner:1986wi]. This was completely done for the first two models but only partially for the $D(n)_1$. In fact, we provided the $S$ matrix for the omnipresent integer spin simple currents for any value of $n$, but sometimes additional currents appear in the $D(n)_1$ model whose fixed points must be resolved as well, in order to use them as extensions. Generically fixed points can arise for integer spin and half-integer spin simple currents [@Schellekens:1990xy]. We will see that this happens for particular ranks of $D(n)_1$ where they must be resolved. In this paper we address those additional problems, providing a complete picture for the fixed point resolution in $D(n)_1$ permutation orbifold. Explicitly, there are two interesting situations where fixed points can occur and that we have not studied so far. When $n$ is multiple of four, $n=4p$ with $p\in \mathbb{Z}$, there are additional integer-spin simple currents coming from the two spinor representations of the $D(n)_1$ WZW model. The spinor fields have weight $h=\frac{n}{8}$ and their symmetric and anti-symmetric representations in the $D(n)_1$ permutation orbifold have weight $h=\frac{n}{4}$. Similarly, when $n=4p+2$, the same two spinor currents generate half-integer spin simple currents in the $D(n)_1$ permutation orbifold. Although the latter cannot be used to extend the chiral algebra, they can be used in combination with half-integer spin currents of another factor in a tensor product. For example, one may tensor the permutation orbifold with an Ising model, and consider the product of the half-integer spin current of the $D(n)_1$ permutation orbifold and the Ising spin field. This is not just of academic interest. 
Extended tensor products of rational conformal field theories are an important tool in explicit four-dimensional string constructions, and in the vast majority of cases one encounters fixed points. For this reason the fixed point resolution matrices we determine here and in our previous paper [@Maio:2009kb] have a range of applicability far beyond the special cases used here to determine them. From previous work [@Fuchs:1996dd], we know that resolving the fixed points is the same as finding a set of $S^J$ matrices, one matrix for each current $J$, acting on the fixed points. The $S^J$ matrices must be unitary and must satisfy the modular constraint $(S^J)^2=(S^JT^J)^3$, where $T^J$ is the $T$ matrix of the extended theory restricted to the fixed points; moreover, for order-two currents, the $S^J$’s must be symmetric. The $S$ matrix of the extended theory is then computed as a Fourier-like transform of the $S^J$ matrices [@Fuchs:1996dd]: it has to be unitary, modular invariant and should give rise to non-negative integer fusion coefficients obtained by the Verlinde formula [@Verlinde:1988sn]. These are non-trivial tests for a good $S$ matrix. There is no known algorithm for determining these matrices in generic rational CFT’s, even if their matrix $S$ is known. In WZW-based models (WZW extensions and coset CFT’s) one can make use of foldings of Dynkin diagrams [@Fuchs:1995zr] to compute the matrices $S^J$. In [@Maio:2009kb] we made use of the fact that the extension currents had spin 1 and led to identifiable CFT’s. This method will not work here except in the special case of $D(4)$, where the spinor currents have spin 1. In that case one can make use of triality of $SO(8)$ to determine the missing fixed point resolution matrices. Although triality does not extend to larger ranks, it turns out that in the other cases the fixed point spectrum is sufficiently similar to allow us to make a general [*ansatz*]{}. The plan of the paper is as follows.\ In section \[D4p permutation orbifolds\] we describe the $D(4p)_1$ permutation orbifolds extended by the two spinor currents and resolve the fixed points. In the special case $p=1$ we use triality of $SO(8)$ to determine the set of $S^J$ matrices. From the case $p=1$ it is indeed possible to generalize the result to arbitrary values of $p$.\ In section \[D4p+2 permutation orbifolds\] we repeat the procedure for $D(4p+2)_1$ permutation orbifolds. We can be brief here, since only a few changes are needed to write down consistent $S^J$ matrices.\ We conclude by illustrating open questions and future directions. $D(4p)_1$ permutation orbifolds {#D4p permutation orbifolds} =============================== By permutation orbifold we mean the procedure of taking the tensor product of a given conformal field theory (that we will sometimes call the mother theory) with itself (we restrict ourselves to two factors in the tensor product, even if it is possible to generalize the product to more than two factors [@Bantay:1997ek; @Bantay:1999us]) and modding out the resulting theory with respect to the permutation symmetry which exchanges the two factors. All the details of the orbifold theory are known from the work of BHS [@Borisov:1997nc]. Once the permutation orbifold is given, we may extend it by any of its integer spin simple currents to derive new theories.
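Incidentally, the consistency requirements recalled above — unitarity, the modular constraint $(S^J)^2=(S^JT^J)^3$, and non-negative integer Verlinde fusion coefficients — are straightforward to test numerically once a candidate matrix has been written down. As a minimal illustration we sketch such a check in Python on the standard Ising model data; this example is chosen only because its modular data are universally known, and it plays no role in the orbifolds studied below.

```python
import numpy as np

# Consistency checks on the standard Ising model modular data (c = 1/2).
# The same three tests apply verbatim to any candidate (extended) S and T.
c = 0.5
h = np.array([0.0, 0.5, 1.0 / 16.0])                 # weights of 1, psi, sigma
S = 0.5 * np.array([[1.0, 1.0, np.sqrt(2.0)],
                    [1.0, 1.0, -np.sqrt(2.0)],
                    [np.sqrt(2.0), -np.sqrt(2.0), 0.0]])
T = np.diag(np.exp(2j * np.pi * (h - c / 24.0)))

# 1) unitarity of S
assert np.allclose(S @ S.conj().T, np.eye(3))
# 2) modular constraint S^2 = (S T)^3 (both sides equal charge conjugation)
assert np.allclose(np.linalg.matrix_power(S @ T, 3), S @ S)
# 3) Verlinde formula: N_{ij}^k = sum_m S_{im} S_{jm} S*_{km} / S_{0m}
N = np.einsum('im,jm,km,m->ijk', S, S, S.conj(), 1.0 / S[0])
assert np.allclose(N.imag, 0.0)
assert np.allclose(N.real, np.round(N.real)) and (np.round(N.real) >= 0).all()
print(np.round(N.real).astype(int))                  # e.g. sigma x sigma = 1 + psi
```

In an actual fixed point resolution one would run the same three tests on each candidate $S^J$, together with the matrix $T^J$ of the extended theory restricted to the fixed points.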
The presence of simple current fixed points makes life difficult, because the new extended $S$ matrix is not trivially known in terms of that of the mother theory, but requires the knowledge of a set of $S^J$ matrices, one for each simple current. Here we start with the $D(n)_1$ WZW model as mother theory and focus on the spinor currents that for even rank $n$ can have (half-)integer spin. This will complete the analysis initiated in [@Maio:2009kb]. Let us fix our notation. The $D(n)_1=SO(N)_1$, $N=2n$, series has central charge $c=\frac{N}{2}$ and four primary fields $\phi_i$ with weight $h_i=0,\frac{N}{16},\frac{1}{2},\frac{N}{16}$ ($i=0,1,2,3$ respectively). The $S$ matrix is given in table \[table S\_Dn\_1\].

  ------------------ --------------- --------------------- ----------------- ---------------------
  $S_{D(n)_1}$       $h=0$           $h=\frac{N}{16}$      $h=\frac{1}{2}$   $h=\frac{N}{16}$
  $h=0$              $\frac{1}{2}$   $\frac{1}{2}$         $\frac{1}{2}$     $\frac{1}{2}$
  $h=\frac{N}{16}$   $\frac{1}{2}$   $\frac{(-i)^n}{2}$    $-\frac{1}{2}$    $-\frac{(-i)^n}{2}$
  $h=\frac{1}{2}$    $\frac{1}{2}$   $-\frac{1}{2}$        $\frac{1}{2}$     $-\frac{1}{2}$
  $h=\frac{N}{16}$   $\frac{1}{2}$   $-\frac{(-i)^n}{2}$   $-\frac{1}{2}$    $\frac{(-i)^n}{2}$
  ------------------ --------------- --------------------- ----------------- ---------------------

  \[table S\_Dn\_1\]

All the four fields of the $D(n)_1$ series are simple currents. In the permutation orbifold, they give rise to four integer spin simple currents, namely $(0,0)$, $(0,1)$, $(2,0)$ and $(2,1)$, and to four simple currents whose spin is not necessarily integer, namely $(1,0)$, $(1,1)$, $(3,0)$ and $(3,1)$. For $n$ a multiple of four, these latter currents also have integer spin. In [@Maio:2009kb] we focused on the former set. Here we want to study the latter, coming from the spinor representations $i=1,3$ of the $D(n)_1$ model. There are already a few observations that we can make. First of all, there exists an automorphism that exchanges the fields $\phi_1$ and $\phi_3$. This will have the consequence that the permutation theories extended by the currents $(1,0)$ and $(3,0)$ will be isomorphic[^1] (the fields having the same weights and the two theories having equal central charge); this holds as well for the extensions by $(1,1)$ and $(3,1)$. Secondly, when $n$ is a multiple of four, i.e. $n=4p$ with $p\in\mathbb{Z}$, the $S$ matrix of the mother $D(n)_1$ theory is the same for every $p$. This will have the consequence that the fusion rules of these currents in the permutation orbifolds are the same for every value of $p$. Putting these two observations together, we conclude that for $n=4p$ there will be only two universal $S^J$ matrices to determine[^2]. Let us illustrate these points with the explicit construction. Consider[^3] the case with arbitrary $n=4p$. The $D(n)_1$ weights are then $h=0,\frac{n}{8},\frac{1}{2},\frac{n}{8}$ and the orbit structure under the additional integer-spin simple currents (all with $h=\frac{n}{4}=p$) is as follows.
---------------- ------------------------------------------------- -- ---------------------------------- $J\equiv(1,0)$ $(\phi_0,\phi_1)$, $h=\frac{n}{8}$ $\Big( (0,0),(1,0) \Big)$, $h=0$ $(\phi_2,\phi_3)$, $h=\frac{n}{8}+\frac{1}{2}$ $\Big( (0,1),(1,1) \Big)$, $h=1$ $\widehat{(0,0)}$, $h=\frac{n}{16}$ $\Big( (2,0),(3,0) \Big)$, $h=1$ $\widehat{(0,1)}$, $h=\frac{n}{16}+\frac{1}{2}$ $\Big( (2,1),(3,1) \Big)$, $h=1$ $\widehat{(1,0)}$, $h=\frac{n}{8}$ $\widehat{(1,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ ---------------- ------------------------------------------------- -- ---------------------------------- ---------------- ------------------------------------------------------------- -- ---------------------------------- $J\equiv(1,1)$ $(\phi_0,\phi_1)$, $h=\frac{n}{8}$ $\Big( (0,0),(1,1) \Big)$, $h=0$ $(\phi_2,\phi_3)$, $h=\frac{n}{8}+\frac{1}{2}$ $\Big( (0,1),(1,0) \Big)$, $h=1$ $\widehat{(2,0)}$, $h=\frac{n}{16}+\frac{1}{4}$ $\Big( (2,0),(3,1) \Big)$, $h=1$ $\widehat{(2,1)}$, $h=\frac{n}{16}+\frac{1}{4}+\frac{1}{2}$ $\Big( (2,1),(3,0) \Big)$, $h=1$ $\widehat{(3,0)}$, $h=\frac{n}{8}$ $\widehat{(3,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ ---------------- ------------------------------------------------------------- -- ---------------------------------- ---------------- ------------------------------------------------- -- ---------------------------------- $J\equiv(3,0)$ $(\phi_0,\phi_3)$, $h=\frac{n}{8}$ $\Big( (0,0),(3,0) \Big)$, $h=0$ $(\phi_1,\phi_2)$, $h=\frac{n}{8}+\frac{1}{2}$ $\Big( (0,1),(3,1) \Big)$, $h=1$ $\widehat{(0,0)}$, $h=\frac{n}{16}$ $\Big( (1,0),(2,0) \Big)$, $h=1$ $\widehat{(0,1)}$, $h=\frac{n}{16}+\frac{1}{2}$ $\Big( (1,1),(2,1) \Big)$, $h=1$ $\widehat{(3,0)}$, $h=\frac{n}{8}$ $\widehat{(3,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ ---------------- ------------------------------------------------- -- ---------------------------------- ---------------- ------------------------------------------------------------- -- ---------------------------------- $J\equiv(3,1)$ $(\phi_0,\phi_3)$, $h=\frac{n}{8}$ $\Big( (0,0),(3,1) \Big)$, $h=0$ $(\phi_1,\phi_2)$, $h=\frac{n}{8}+\frac{1}{2}$ $\Big( (0,1),(3,0) \Big)$, $h=1$ $\widehat{(1,0)}$, $h=\frac{n}{8}$ $\Big( (1,0),(2,1) \Big)$, $h=1$ $\widehat{(1,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ $\Big( (1,1),(2,0) \Big)$, $h=1$ $\widehat{(2,0)}$, $h=\frac{n}{16}+\frac{1}{4}$ $\widehat{(2,1)}$, $h=\frac{n}{16}+\frac{1}{4}+\frac{1}{2}$ ---------------- ------------------------------------------------------------- -- ---------------------------------- Note that in going from the fixed points of $(1,\psi)$ to $(3,\psi)$, the fields $\phi_1$ and $\phi_3$ get interchanged: this provides isomorphic sets of fields in the extensions. The fixed points get splitted into two fields in the extended permutation orbifold and hence all the theories above admit $2\cdot 6+4=16$ fields. By changing $n=4p$, the weights of the orbits and the ones of the fixed points might change, but as we said there are a few things that remain invariant, namely: 1) the fact that the extension by the current $(1,0)$ (resp. $(1,1)$) is isomorphic (up to field reordering) to the one by $(3,0)$ (resp. $(3,1)$), as it can be seen by looking at the weights of the extended fields; 2) the orbit and fixed-point structure (i.e. the fusion rules of the currents with any other field in the permutation orbifold) remains the same for arbitrary $p$; this has the consequence that we will have to determine only two $S^J$ matrices instead of four. 
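The weights quoted in these tables follow directly from the BHS construction [@Borisov:1997nc]; we recall the relevant formulas here for convenience. An untwisted off-diagonal field $(\phi_i,\phi_j)$ has weight $h_i+h_j$, while a twisted field $\widehat{(i,\psi)}$ has
$$h_{\widehat{(i,\psi)}}=\frac{h_i}{2}+\frac{c}{16}+\frac{\psi}{2}\,,\qquad \psi=0,1\,,$$
with mother central charge $c=n$; this reproduces, for instance, $h_{\widehat{(0,0)}}=\frac{n}{16}$ and $h_{\widehat{(2,1)}}=\frac{n}{16}+\frac{1}{4}+\frac{1}{2}$ in the tables above.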
$S^J$ matrices for $D(4p)_1$ permutation orbifolds -------------------------------------------------- We have already noticed that there are in practice only two $S^J$ matrices to determine for the four above-mentioned integer-spin simple currents. So here we derive $S^{J=(1,0)}$ and $S^{J=(1,1)}$; $S^{J=(3,0)}$ and $S^{J=(3,1)}$ are equal to the former two, after proper field ordering. It is instructive to start with the $D(4)_1$ $(p=1)$ case. $SO(8)_1$ is special in the sense that the three non-trivial representations, i.e. the vector ${\bf 8_v}$ and the two spinors ${\bf 8_s}$ and ${\bf 8_c}$, have same weight ($h=\frac{1}{2}$) and same multiplicity (dim$=8$) and can be mapped into each other. This property of $SO(8)$ is triality. Let us now work out the $S^J$ matrices corresponding to the two integer-spin simple currents $J=(1,0)$ and $J=(1,1)$. The extension by $(1,0)$ of the permutation orbifold is isomorphic to an extension of the tensor product of an $SU(8)$ and a $U(1)$ factor as in [@Maio:2009kb]: $$(D(4)_1 \times D(4)_1/ \mathbb{Z}_2)_{(1,0)}=(SU(8)_1 \times U(1)_{128})_{(4,16)}\,,$$ while the extension by $(1,1)$ is isomorphic to the tensor product $D(4)_1 \times D(4)_1$. This is exactly what happened for the already known currents $(2,\psi)$ [@Maio:2009kb]; in fact, due to triality of $SO(8)$, the three theories extended by $(1,\psi)$ $(2,\psi)$ $(3,\psi)$ must be the same. ### $J=(1,0)$ We use the main formula of [@Fuchs:1996dd] $$\label{MainFormula} \tilde{S}_{(a,i)(b,j)}=\frac{|G|}{\sqrt{|U_a||S_a||U_b||S_b|}}\sum_{J\in G}\Psi_i(J) S^J_{ab} \Psi_j(J)^{\star}$$ as done in [@Maio:2009kb] to derive the $S^J$ matrix from the knowledge of the extended matrix $\tilde{S}$ and the permutation orbifold matrix $S^{(0,0)}\equiv S^{BHS}$. The prefactor in (\[MainFormula\]) is a group theoretical factor and the $\Psi_i$’s are the group characters. Our field convention to distinguish between the two splitted fixed points is: $$\begin{aligned} (\phi_0,\phi_1) \,\, \longrightarrow& (1,4) & \&\qquad (7,124) \nonumber \\ (\phi_2,\phi_3) \,\, \longrightarrow& (1,116) & \&\qquad (3,124) \nonumber \\ \widehat{(0,0)} \,\, \longrightarrow& (0,120) & \&\qquad (0,8) \nonumber \\ \widehat{(0,1)} \,\, \longrightarrow& (6,0) & \&\qquad (2,0) \nonumber \\ \widehat{(1,0)} \,\, \longrightarrow& (7,4) & \&\qquad (1,124) \nonumber \\ \widehat{(1,1)} \,\, \longrightarrow& (1,12) & \&\qquad (3,4) \nonumber \\ &&\nonumber\end{aligned}$$ where $(s,u)$ denote a field in the extended theory ($s\equiv s+8$, $u\equiv u+128$). Observe that field one and field two correspond to complementary orbits as explained in [@Maio:2009kb]. We obtain the matrix in table \[table S\^J=10\_p=1\] for the $(D4_1\times D4_1/\mathbb{Z}_2)_{(1,0)}$ orbifold. We denote it by $S^J_{D4}$ for reasons that will become clear later. 
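For the order-two currents at hand, each fixed point comes with its full (untwisted) stabilizer $\mathbb{Z}_2$, so the prefactor in (\[MainFormula\]) equals $\frac{1}{2}$ and the group characters are just signs. The formula then specializes to
$$\tilde{S}_{(a,i)(b,j)}=\frac{1}{2}\left(S^{BHS}_{ab}+\epsilon_i\,\epsilon_j\,S^{J}_{ab}\right)\,,\qquad \epsilon_{1,2}=\pm 1\,,$$
which can be inverted as $S^J_{ab}=\tilde{S}_{(a,1)(b,1)}-\tilde{S}_{(a,1)(b,2)}$; this is the inversion used to extract the entries of the table below.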
  -------------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
  $S^{J\equiv (1,0)}_{D4}$   $(\phi_0,\phi_1)$   $(\phi_2,\phi_3)$   $\widehat{(0,0)}$   $\widehat{(0,1)}$   $\widehat{(1,0)}$   $\widehat{(1,1)}$
  $(\phi_0,\phi_1)$          $0$                 $0$                 $\frac{i}{2}$       $-\frac{i}{2}$      $-\frac{i}{2}$      $-\frac{i}{2}$
  $(\phi_2,\phi_3)$          $0$                 $0$                 $\frac{i}{2}$       $-\frac{i}{2}$      $\frac{i}{2}$       $\frac{i}{2}$
  $\widehat{(0,0)}$          $\frac{i}{2}$       $\frac{i}{2}$       $0$                 $0$                 $\frac{i}{2}$       $-\frac{i}{2}$
  $\widehat{(0,1)}$          $-\frac{i}{2}$      $-\frac{i}{2}$      $0$                 $0$                 $\frac{i}{2}$       $-\frac{i}{2}$
  $\widehat{(1,0)}$          $-\frac{i}{2}$      $\frac{i}{2}$       $\frac{i}{2}$       $\frac{i}{2}$       $0$                 $0$
  $\widehat{(1,1)}$          $-\frac{i}{2}$      $\frac{i}{2}$       $-\frac{i}{2}$      $-\frac{i}{2}$      $0$                 $0$
  -------------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------

  \[table S\^J=10\_p=1\]

One can check that this matrix is unitary ($S^J (S^J)^\dagger=1$) and modular invariant ($(S^J)^2=(S^J T^J)^3$, where $T^J$ is the $T$ matrix restricted to the fixed points), and that it gives non-negative integer fusion coefficients; a short numerical check of the first two properties is sketched at the end of this subsection. Moreover, one can see that unitarity and modular invariance are preserved for $p=1$ mod $4$, so this matrix can also be used in those situations. Observe that rescaling the $S^J$ matrix by a phase does not destroy unitarity, but it does affect modular invariance. By a suitable choice of the phase, it is possible to turn $S^{(1,0)}_{D4}$ into a modular invariant matrix valid for all $p$. The correct choice is: $$\label{S10p} \boxed{S^{(1,0)}= (-i)^{p-1} \cdot S^{(1,0)}_{D4}= e^{-\frac{i\pi}{4}(m-2)} \cdot S^{(1,0)}_{D4}}$$ which we will use for any value of $p$. Here $m=2p$ is an even integer such that $D_{2m}\equiv D_{4p}$. This is again unitary, modular invariant and gives non-negative integer fusion coefficients. Let us make a final comment. What happens when we shift $p\rightarrow p+1$? Under this shift, the fixed point weights change by different amounts. In particular, for the current $(1,0)$ the shifts are $h\rightarrow h+\{\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{1}{4},\frac{1}{2},\frac{1}{2}\}$. The $T^{(1,0)}$ matrix then changes as $T^{(1,0)}\rightarrow e^{-\frac{2\pi i}{3}}\,{\rm diag}(-1,-1,i,i,-1,-1)\cdot T^{(1,0)}$ (the phase in front coming from the central charge), while $S^{(1,0)}$ picks up a phase, $S^{(1,0)}\rightarrow -iS^{(1,0)}$. These changes are such that modular invariance is still preserved for every $p$.
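As promised, here is a quick numerical cross-check of unitarity, symmetry and the modular constraint for $S^{(1,0)}_{D4}$. It is a minimal sketch, not part of the original derivation; it assumes the $p=1$ fixed-point weights listed above, orbifold central charge $c=8$, and the convention $T^J={\rm diag}\,(e^{2\pi i(h_i-c/24)})$ for the restricted $T$ matrix.

```python
# Minimal sketch (illustrative, not from the paper): check that S^{(1,0)}_{D4}
# is unitary, symmetric, and satisfies (S^J)^2 = (S^J T^J)^3.
# Assumed conventions: fixed-point weights h = (1/2, 1, 1/4, 3/4, 1/2, 1) for
# p = 1 and central charge c = 8, so that T^J = diag(exp(2*pi*i*(h - c/24))).
import numpy as np

S = 0.5j * np.array([
    [ 0,  0,  1, -1, -1, -1],
    [ 0,  0,  1, -1,  1,  1],
    [ 1,  1,  0,  0,  1, -1],
    [-1, -1,  0,  0,  1, -1],
    [-1,  1,  1,  1,  0,  0],
    [-1,  1, -1, -1,  0,  0]])

h = np.array([0.5, 1.0, 0.25, 0.75, 0.5, 1.0])
T = np.diag(np.exp(2j * np.pi * (h - 8.0 / 24)))

print(np.allclose(S @ S.conj().T, np.eye(6)))          # unitarity
print(np.allclose(S, S.T))                              # symmetry (order-two current)
print(np.allclose(np.linalg.matrix_power(S, 2),
                  np.linalg.matrix_power(S @ T, 3)))    # (S^J)^2 = (S^J T^J)^3
```

All three checks return `True`; in these conventions both sides of the modular constraint in fact equal $-\mathbb{1}$.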
### $J=(1,1)$

For this current, recall that $$(D(4)_1 \times D(4)_1/ \mathbb{Z}_2)_{(1,1)}\sim D(4)_1 \times D(4)_1\,.$$ The split fixed points correspond to fields in the tensor product theory. We conventionally choose the following scheme, although a few other choices are also possible. $$\begin{aligned} (\phi_0,\phi_1) \,\, \longrightarrow& \phi_0 \otimes \phi_1 & \&\qquad \phi_1 \otimes \phi_0 \nonumber \\ (\phi_2,\phi_3) \,\, \longrightarrow& \phi_2 \otimes \phi_3 & \&\qquad \phi_3 \otimes \phi_2 \nonumber \\ \widehat{(2,0)} \,\, \longrightarrow& \phi_0 \otimes \phi_2 & \&\qquad \phi_2 \otimes \phi_0 \nonumber \\ \widehat{(2,1)} \,\, \longrightarrow& \phi_1 \otimes \phi_3 & \&\qquad \phi_3 \otimes \phi_1 \nonumber \\ \widehat{(3,0)} \,\, \longrightarrow& \phi_0 \otimes \phi_3 & \&\qquad \phi_3 \otimes \phi_0 \nonumber \\ \widehat{(3,1)} \,\, \longrightarrow& \phi_1 \otimes \phi_2 & \&\qquad \phi_2 \otimes \phi_1 \nonumber \\ &&\nonumber\end{aligned}$$ Our strategy to compute $S^J_{D4}$ is then as follows. We first go to the isomorphic tensor product theory and use $$S^J_{(mn)(pq)}=S_{mp}S_{nq}-S_{mq}S_{np}$$ as derived in [@Maio:2009kb] to compute the $S^J$ matrix there, and then we go back to the extended permutation orbifold using the field map above. (As an illustration, for $n=4$ the entry between $(\phi_0,\phi_1)\rightarrow\phi_0\otimes\phi_1$ and $\widehat{(2,0)}\rightarrow\phi_0\otimes\phi_2$ is $S_{00}S_{12}-S_{02}S_{10}=\frac{1}{2}\cdot(-\frac{1}{2})-\frac{1}{2}\cdot\frac{1}{2}=-\frac{1}{2}$, in agreement with the table below.) We obtain the $S^J$ matrix as in table \[table S\^J=11\_p=1\].

  -------------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
  $S^{J\equiv (1,1)}_{D4}$   $(\phi_0,\phi_1)$   $(\phi_2,\phi_3)$   $\widehat{(2,0)}$   $\widehat{(2,1)}$   $\widehat{(3,0)}$   $\widehat{(3,1)}$
  $(\phi_0,\phi_1)$          $0$                 $0$                 $-\frac{1}{2}$      $-\frac{1}{2}$      $-\frac{1}{2}$      $-\frac{1}{2}$
  $(\phi_2,\phi_3)$          $0$                 $0$                 $-\frac{1}{2}$      $-\frac{1}{2}$      $\frac{1}{2}$       $\frac{1}{2}$
  $\widehat{(2,0)}$          $-\frac{1}{2}$      $-\frac{1}{2}$      $0$                 $0$                 $-\frac{1}{2}$      $\frac{1}{2}$
  $\widehat{(2,1)}$          $-\frac{1}{2}$      $-\frac{1}{2}$      $0$                 $0$                 $\frac{1}{2}$       $-\frac{1}{2}$
  $\widehat{(3,0)}$          $-\frac{1}{2}$      $\frac{1}{2}$       $-\frac{1}{2}$      $\frac{1}{2}$       $0$                 $0$
  $\widehat{(3,1)}$          $-\frac{1}{2}$      $\frac{1}{2}$       $\frac{1}{2}$       $-\frac{1}{2}$      $0$                 $0$
  -------------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------

  \[table S\^J=11\_p=1\]

The $S^J$ matrix obtained in this way for $(D(4)_1 \times D(4)_1/ \mathbb{Z}_2)_{(1,1)}$ is unitary and modular invariant, so it is a good matrix for the extended theory. Moreover, this $S^J$ matrix is a good (i.e. unitary and modular invariant) matrix also for $p=1$ mod $4$. In order to make this matrix modular invariant for any $p$, we again multiply by a phase. The choice is the same as before: $$\label{S11p} \boxed{S^{(1,1)}= (-i)^{p-1} \cdot S^{(1,1)}_{D4}= e^{-\frac{i\pi}{4}(m-2)} \cdot S^{(1,1)}_{D4}}$$ which we will use for any value of $p$. This is again unitary, modular invariant and gives non-negative integer fusion coefficients. Again, the shift $n\rightarrow n+16$ changes $S^{(1,1)}$ by a phase and $T^{(1,1)}$ in a more complicated way, but both always in a modular invariant fashion. One can check formulas (\[S10p\]) and (\[S11p\]) in many explicit examples. For instance, one can see that they have good properties by looking at a few values of $p$, but also by considering tensor products like $D(8)_1\times D(12)_1$ or $D(8)_1\times D(16)_1$ and extending with many current combinations $(J_1,J_2)$, where $J_1$ belongs to the first factor and $J_2$ to the second factor. In every example, the fusion rules give non-negative integer coefficients.

$D(4p+2)_1$ permutation orbifolds {#D4p+2 permutation orbifolds}
=================================

So far we have not addressed half-integer spin simple currents. They might also admit fixed points that must be resolved in the extended theory. This happens for the $D(n)_1$ permutation orbifolds with $n=4p+2$. In fact, the four currents $(1,\psi)$ and $(3,\psi)$, with $\psi =0,1$, have weight $h=\frac{2p+1}{2}$ and admit fixed points. The orbit structure for $n=4p+2$ is very similar to the previous situation with $n=4p$, except for the fact that the twisted fields get reshuffled. The fixed point structure is as follows; observe that it is very similar to the structure of the previous case $n=4p$.
---------------- ------------------------------------------------------------ ---------------- ------------------------------------------------------------ $J\equiv(1,0)$ $J\equiv(3,0)$ $(\phi_0,\phi_1)$,$h=\frac{n}{8}$ $(\phi_0,\phi_3)$, $h=\frac{n}{8}$ $(\phi_2,\phi_3)$,$h=\frac{n}{8}+\frac{1}{2}$ $(\phi_1,\phi_2)$, $h=\frac{n}{8}+\frac{1}{2}$ $\widehat{(2,0)}$,$h=\frac{n}{16}+\frac{1}{4}$ $\widehat{(2,0)}$, $h=\frac{n}{16}+\frac{1}{4}$ $\widehat{(2,1)}$,$h=\frac{n}{16}+\frac{1}{4}+\frac{1}{2}$ $\widehat{(2,1)}$,$h=\frac{n}{16}+\frac{1}{4}+\frac{1}{2}$ $\widehat{(1,0)}$, $h=\frac{n}{8}$ $\widehat{(3,0)}$, $h=\frac{n}{8}$ $\widehat{(1,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ $\widehat{(3,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ ---------------- ------------------------------------------------------------ ---------------- ------------------------------------------------------------ $$\label{D 4p+2 scheme}$$ ---------------- ------------------------------------------------- ---------------- ------------------------------------------------- $J\equiv(1,1)$ $J\equiv(3,1)$ $(\phi_0,\phi_1)$, $h=\frac{n}{8}$ $(\phi_0,\phi_3)$, $h=\frac{n}{8}$ $(\phi_2,\phi_3)$, $h=\frac{n}{8}+\frac{1}{2}$ $(\phi_1,\phi_2)$, $h=\frac{n}{8}+\frac{1}{2}$ $\widehat{(0,0)}$, $h=\frac{n}{16}$ $\widehat{(0,0)}$, $h=\frac{n}{16}$ $\widehat{(0,1)}$, $h=\frac{n}{16}+\frac{1}{2}$ $\widehat{(0,1)}$, $h=\frac{n}{16}+\frac{1}{2}$ $\widehat{(3,0)}$, $h=\frac{n}{8}$ $\widehat{(1,0)}$, $h=\frac{n}{8}$ $\widehat{(3,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ $\widehat{(1,1)}$, $h=\frac{n}{8}+\frac{1}{2}$ ---------------- ------------------------------------------------- ---------------- ------------------------------------------------- Again, the current $(1,0)$ (resp. $(1,1)$) generates the same fixed points as the current $(3,0)$ (resp. $(3,1)$), hence we have to determine only two, instead of four, $S^J$ matrices, since $S^{(1,\psi)}=S^{(3,\psi)}$, with $\psi=0,1$. Actually the study of the previous section helps us a lot, since it is easy to generate unitary and modular invariant matrices out of two matrices *numerically* equal to the two $S^J_{D4}$ matrices of tables \[table S\^J=10\_p=1\] and \[table S\^J=11\_p=1\] with the fields ordered as above. More tricky is to check that also the fusion coefficients are non-negative integers if these currents are used in chiral algebra extensions. The more sensible choice is the following. Let us have a closer look at the fixed point structure of the $n=4p$ and the $n=4p+2$ cases. They are very similar, but not quite. The weights of the fixed points of the current $(1,0)$ in the $n=4p$ case have the same expression as the weights of the fixed points of the current $(1,1)$ in the $n=4p+2$ case, and similarly for the $(3,\psi)$ current. So a natural guess for the $S^J$ matrices would involve interchanging the matrices in tables \[table S\^J=10\_p=1\] and \[table S\^J=11\_p=1\]. Equivalently, symmetric and anti-symmetric representations are interchanged in going from $n=4p$ to $n=4p+2$. Hence, we would expect $S^{(1,0)} \sim S^{(1,1)}_{D4}$ and $S^{(1,1)} \sim S^{(1,0)}_{D4}$. This is indeed the case. The unitary and modular invariant[^4] combinations are in fact:[^5] $$\label{S10p2bis} \boxed{S^{(1,0)}= e^{-\frac{i\pi}{4}} \cdot (-i)^{p-1} \cdot S^{(1,1)}_{D4} = e^{-\frac{i\pi}{4}(m-2)} \cdot S^{(1,1)}_{D4}}$$ and $$\label{S11p2bis} \boxed{S^{(1,1)}= e^{-\frac{i\pi}{4}} \cdot (-i)^{p-1} \cdot S^{(1,0)}_{D4} = e^{-\frac{i\pi}{4}(m-2)} \cdot S^{(1,0)}_{D4}}$$ giving also acceptable fusion rules. 
Here $m=2p+1$ is an odd integer such that $D_{2m}\equiv D_{4p+2}$. There are a few comments that we can make here. The first comment regards the labelling of the matrices just given. We observe that the matrix $S^{(1,0)}$ (resp. $S^{(1,1)}$) contains the same fields as the matrix $S^{(1,1)}_{D4}$ (resp. $S^{(1,0)}_{D4}$) except for the fact that the twisted fields corresponding to the spinors are interchanged (but they still have the same weights). We will then keep the same labels as given in the above scheme (\[D 4p+2 scheme\]) and in table \[table S\^J=11\_p=1\] (resp. table \[table S\^J=10\_p=1\]). The second comment regards the periodicity of the modular matrices. Observe that in (\[D 4p+2 scheme\]) a shift $n\rightarrow n+16$ (corresponding to $m\rightarrow m+8$ and $p\rightarrow p+4$) changes all the weights by integers, but the $T^J$ matrices will be invariant. Similarly, the $S^J$ matrices are invariant under the same shift $m\rightarrow m+8$. This happened already for the modular matrices in the $n=4p$ case and it happens here again in the $n=4p+2$ case. Hence, it seems that in comparing phases one should consider situations which have the same $p$ mod $4$. On the other hand, in going from $n=4p$ to $n=4p+2$, the $S^J$ formulas are similar, but there is one main difference, namely $S^{(1,0)}_{D4}$ gets interchanged by $S^{(1,1)}_{D4}$ and this is a completely different matrix. The same consideration that we made after (\[S10p\]) about the shift $p\rightarrow p+1$ can be repeated here. The last comment regards the fusion coefficients. Note that when we check the fusion rules, we cannot do it directly from the single $D(n)_1$ permutation orbifolds, exactly because the spinor currents have half-integer spin. Instead, we have to tensor the $D(n)_1$ theory with another one which also has half-integer spin simple currents (e.g. Ising model or the $D(n)_1$ model itself, maybe with different values of $n$) such that the tensor product has integer spin simple currents that can be used for the extension: those integer spin currents will then have acceptable fusion coefficients. We have checked that this is indeed the case for tensor products of the permutation orbifold CFT’s with the Ising model, and also in extensions of different permutation orbifold CFT’s tensored with each other (we have also performed the latter check for $n=4p$, for combinations of integer spin currents). Conclusion ========== In this paper we have completed the analysis initiated in [@Maio:2009kb] of extensions of $D(n)_1$ permutation orbifolds by additional integer spin simple currents arising when the rank $n$ is multiple of four and by additional half-integer spin simple currents arising when the rank $n$ is even but not multiple of four. In both situations fixed points occur that must be resolved in the extended theory. This means that we have to provide the $S^J$ matrices corresponding to those extra currents $J$. They will allow us to obtain the full $S$ matrix of the extended theory which satisfies all the necessary properties. The currents in question are those corresponding to the spinor representations $i=1$ and $i=3$ of $D(n)_1$, both with weight $h=\frac{n}{8}$. In the permutation orbifold they arise from the symmetric and the anti-symmetric representations of the spinors, both with weight $h=\frac{n}{4}$: so they have integer spin for $n=4p$ ($p$ is integer) and half-integer spin for $n=4p+2$. 
Moreover, they produce pairwise identical extensions of the permutation orbifold, such that there are only two unknown matrices to determine: $S^{(1,\psi)}=S^{(3,\psi)}$ ($\psi=0,1$). The solutions were given in sections \[D4p permutation orbifolds\] and \[D4p+2 permutation orbifolds\] (boxed formulas). This completely solves the fixed point resolution in extension of $D(n)_1$ permutation orbifold. There is still more work to do. First of all, we do not have any general expression yet for the $S^J$ matrix in terms of the $S$ (and maybe $P$) matrix of the mother theory. This should be independent of the particular CFT and/or the particular current used to extend the theory. Secondly, it would be interesting to apply these CFT results in String Theory. Suitable candidates appear to be the minimal models of the $N=2$ superconformal algebra, which are the building blocks of Gepner models [@Gepner:1987vz; @Gepner:1987qi], but this is still work in progress. Acknowledgments {#acknowledgments .unnumbered} =============== This research is supported by the Dutch Foundation for Fundamental Research of Matter (FOM) as part of the program STQG (String Theory and Quantum Gravity, FP 57) and has been partially supported by funding of the Spanish Ministerio de Ciencia e Innovación, Research Project FPA2008-02968. [99]{} M. Maio and A. N. Schellekens, “Fixed Point Resolution in Extensions of Permutation Orbifolds,” arXiv:0905.1632 \[hep-th\]. A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, “Infinite conformal symmetry in two-dimensional quantum field theory,” Nucl. Phys.  B [**241**]{} (1984) 333. V. G. Knizhnik and A. B. Zamolodchikov, “Current algebra and Wess-Zumino model in two dimensions,” Nucl. Phys.  B [**247**]{}, 83 (1984). D. Gepner and E. Witten, “String Theory on Group Manifolds,” Nucl. Phys.  B [**278**]{} (1986) 493. A. Klemm and M. G. Schmidt, “Orbifolds by cyclic permutations of tensor product conformal field theories,” Phys. Lett.  B [**245**]{} (1990) 53. L. Borisov, M. B. Halpern and C. Schweigert, “Systematic approach to cyclic orbifolds,” Int. J. Mod. Phys.  A [**13**]{} (1998) 125 \[arXiv:hep-th/9701061\]. P. Bantay, “Characters and modular properties of permutation orbifolds,” Phys. Lett.  B [**419**]{} (1998) 175 \[arXiv:hep-th/9708120\]. P. Bantay, “Permutation orbifolds,” Nucl. Phys.  B [**633**]{} (2002) 365 \[arXiv:hep-th/9910079\]. A. N. Schellekens and S. Yankielowicz, “Extended Chiral Algebras and Modular Invariant Partition Functions” Nucl. Phys.  B [**327**]{} (1989) 673. A. N. Schellekens and S. Yankielowicz, “Simple currents, modular invariants and fixed points,” Int. J. Mod. Phys.  A [**5**]{} (1990) 2903. A. N. Schellekens and S. Yankielowicz, “Modular invariants from simple currents: an explicit proof,” Phys. Lett.  B [**227**]{} (1989) 387. K. A. Intriligator, “Bonus Symmetry in Conformal Field Theory” Nucl. Phys.  B [**332**]{} (1990) 541. J. Fuchs, A. N. Schellekens and C. Schweigert, “A matrix S for all simple current extensions,” Nucl. Phys.  B [**473**]{} (1996) 323 \[arXiv:hep-th/9601078\]. J. Fuchs, B. Schellekens and C. Schweigert, “From Dynkin diagram symmetries to fixed point structures,” Commun. Math. Phys.  [**180**]{}, 39 (1996) \[arXiv:hep-th/9506135\]. A. N. Schellekens, “Fixed point resolution in extended WZW-models,” Nucl. Phys.  B [**558**]{} (1999) 484 \[arXiv:math/9905153\]. A. N. Schellekens and S. Yankielowicz, “Field identification fixed points in the coset construction,” Nucl. Phys.  B [**334**]{} (1990) 67. G. Pradisi, A. 
Sagnotti and Y. S. Stanev, “Planar Duality In SU(2) WZW Models,” Phys. Lett.  B [**354**]{} (1995) 279 \[arXiv:hep-th/9503207\]. E. P. Verlinde, “Fusion Rules And Modular Transformations In 2d Conformal Field Theory,” Nucl. Phys.  B [**300**]{} (1988) 360. D. Gepner, “Exactly Solvable String Compactifications on Manifolds of SU(N) Holonomy,” Phys. Lett.  B [**199**]{} (1987) 380. D. Gepner, “Space-Time Supersymmetry in Compactified String Theory and Superconformal Models,” Nucl. Phys.  B [**296**]{} (1988) 757. [^1]: The fields $\phi_1$ and $\phi_3$ also have same $P$-matrix entries. In fact, the $P$ matrix for $n=4p$ is $$P=\left( \begin{array}{cccc} (-1)^p & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & (-1)^{p+1} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\,. \nonumber$$ We recall that the $P$ matrix, $P=\sqrt{T}ST^2S\sqrt{T}$, first introduced in [@Pradisi:1995qy], enters the BHS formulas [@Borisov:1997nc] for the $S$ matrix of the permutation orbifold in the twisted sector. [^2]: They will in general depend on $p$ through a phase in order to satisfy modular invariance, since the $T$ matrix depends on $p$. [^3]: The case $n=4$, that we will consider extensively later, is very interesting since it corresponds to $SO(8)_1$ where, due to triality, three out of four fields have equal weight. The extensions by the currents $(1,\psi)$, $(2,\psi)$ and $(3,\psi)$ must produce the same result. The extension by $(2,\psi)$ is already known from [@Maio:2009kb] and from what we said before we also know that the extensions by $(1,\psi)$ and $(3,\psi)$ are equal. Indeed one can check that the extension of the permutations orbifold by $(1,0)$ is $$D(4)_1\times D(4)_1/\mathbb{Z}_2 = (SU(8)_1\times U(1)_{128})_{(4,16\cdot x)}$$ for odd integer $x$. Also, the extension by $(1,1)$ gives a theory which is almost isomorphic to (in the sense of having same weights and same central charge but not dimensions of) the tensor product of two $D(4)_1$’s. [^4]: Modular invariance reads here: $(S^J)^2=(-1)^p i \cdot 1=(S^JT^J)^3$ for $J=(1,0)$ and $(S^J)^2=(-1)^{p-1} i \cdot 1=(S^JT^J)^3$ for $J=(1,1)$, both with imaginary $(S^J)^2$. [^5]: Note that in order to use these relations one must order the six fields as indicated above, without paying attention to the actual labelling of the fixed point fields.
--- author: - | Heng-Yu Chen$^{*,a}$, Koji Hashimoto$^{\dagger,b}$ and Shunji Matsuura$^{\S,c}$\ ${^*}$ [*Department of Physics, University of Wisconsin, Madison, WI 53706, USA*]{}\ ${}^\dagger$ [*Theoretical Physics Laboratory, RIKEN, Saitama 351-0198, Japan*]{}\ ${}^\S$ [*Kavli Institute for Theoretical Physics, University of California,*]{}\ [*Santa Barbara CA 93106-9530, USA*]{}\ $^a$ E-mail:\ $^b$ E-mail:\ $^c$ E-mail:\ title: 'Towards a Holographic Model of Color-Flavor Locking Phase' --- Introduction and Summary {#sec1} ======================== #### In QCD phase diagram, the color-flavor locking (CFL) phase, or more generically, the color superconducting phase, is expected to be present in a region with large chemical potential $\mu$ for baryon number. Perturbative analytic study of this phase (see Ref. [@CFLReviews] for reviews) has mainly been done for very large $\mu$ such that the QCD coupling is weak. However, the issue on possible phase transitions from the hadronic phase at finite $\mu$ has not been addressed, as the system becomes strongly coupled. So far, neither direct experimental search, nor the lattice QCD simulation with “sign problem” have reached such region in the phase space. Holographic techniques from gauge/string duality [@Maldacena:1997re; @Gubser:1998bc] may offer new insights to such issue, as they enable us to probe the strongly coupled region(s) in the phase diagram for QCD-like theories. Although the duality strictly works for large number of colors $N_c \gg 1$, the holographic techniques applied to QCD-like theories (so-called “Holographic QCD”) have been rather successful in reproducing qualitative and semi-quantitative features of low energy QCD dynamics. In this paper, among other things, we shall show that a color-flavor locking occurs for a toy QCD-like theory at zero temperature, when the baryon chemical potential $\mu$ exceeds its critical value.[^1] Before entering the details on how to realize CFL phase in our model, let us summarize here the possible difficulties in obtaining it in holographic QCD. - In gauge/string duality, to treat $N_f$ flavor branes as probes, we typically need to take $N_c\gg N_f$, while the CFL refers to a locking of the $SU(3)$ flavor and the $SU(3)$ color symmetries, [*i.e.*]{} $N_c=N_f$. - In gauge/string duality, usually only gauge-invariant quantities are considered, while in the CFL phase the order parameter is a gauge variant di-quark condensate. The first problem is strictly technical, as when $N_f\sim N_c$, the backreaction of the flavor branes cannot be ignored, and renders it difficult to analyze in supergravity.[^2] Our approach used in this paper is to first separate some finite number of color branes $\tilde{N}_c$([*i.e.*]{} $\tilde{N}_c \ll N_c$), and investigate the locking of $SU(\tilde{N}_c)$ color symmetry with the flavor symmetry. Though this procedure of separation is artificial, our result may suggest a piece of the whole picture. Another concern for the first problem is that in the strict $N_c\to \infty$ limit, the theory does not reveal the CFL phase [@NoCFL]. We don’t consider this concern, since we will not perform a comparison with the chiral density wave (which is supposed to be favored at the strict $N_c\to \infty$ limit) in our toy model, and also because a large but finite value of $N_c$ may give the CFL phase even for the real QCD [@MaybeCFL]. 
As for the second problem above, it is familiar to us that gauge-invariant correlators of QCD-like theories can be computed in their gravity duals, but in fact there are some gauge-covariant quantities which one can also compute in the gauge/string duality.[^3] In this paper, we use holographic techniques for the Coulomb phase of supersymmetric Yang-Mills (SYM) theories [@Douglas:1998tk; @Peet:1998wn], where a part of the gauge symmetry decouples from the rest. When the rank of this decoupled gauge subgroup is small, we may treat these branes in the same way as the probe flavor branes, and their gauge symmetry is manifest in the dual gravity description. We shall describe this in detail later. The toy model we shall focus on is ${\cal N}=4$ SYM coupled to ${\cal N}=2$ fundamental matter hyper multiplets. The holographic dual of this theory was proposed by Karch and Katz [@KarchKatz], as a minimal deformation of the ${\cal N}=4$ SYM to include quarks. The quark superfields are introduced as the lowest excitation on a string connecting $N_c$ D3-(color-)branes and $N_f$ D7-(flavor-)branes. For $N_c\gg N_f$, the D3-branes can be replaced by the $AdS_5\times S^5$ geometry, and the flavor dynamics of the strongly coupled large $N_c$ SQCD can be analyzed by the probe flavor D7-branes in that geometry. The quark mass $m$ is proportional to the distance between the D3-branes and the D7-branes. For zero temperature $T=0$ and $\mu=0$, quarks and gluons are deconfined, while quarks can form deeply bound mesons. The phase structure of this theory has been analysed using the holographic duality [@HoloBaryon; @Karch:2007br; @D3D7phase; @Nakamura:2006xk; @Erdmenger:2007ja], and at leading order in the large $N_c$ expansion it is known that there are two phases in the $(\mu,T)$ diagram: the meson phase and the melted meson phase (see Fig. \[figphase1\]). ![The structure of the phase diagram of the ${\cal N}=2$ SQCD. (Scales in this figure are not accurate. See [@HoloBaryon; @Karch:2007br; @D3D7phase; @Nakamura:2006xk; @Erdmenger:2007ja] for details.) []{data-label="figphase1"}](phase1.eps "fig:"){width="40.00000%"} In both phases, gluons are deconfined. In the meson phase, quarks are bound to form mesons with a discrete spectrum,[^4] while in the melted meson phase, the meson spectrum is continuous, and there appears a nonzero baryon number density. These two phases are characterized by the shape of the probe $N_f$ D7-branes [@Babington:2003vm; @Kruczenski:2003uq; @Kirsch:2004km; @Mateos:2006nu; @Albash:2006ew; @Aharony:2006da]. At finite temperature, the background geometry is an AdS black hole. The meson phase corresponds to the D7-branes staying away from the horizon, which is called the [*“Minkowski embedding”*]{}. On the other hand, in the melted meson phase, the D7-branes touch the horizon (see Fig. \[figembed\]), and this is called the [*“black hole embedding”*]{}. ![Two embeddings of the D7-branes in the geometry. The shaded ball denotes a black hole with a horizon of the topology $S^5$. Left: Minkowski embedding (meson phase). Right: black hole embedding (melted meson phase).
[]{data-label="figembed"}](mink.eps "fig:"){width="70.00000%"} Since the local gauge symmetry $U(N_f)$ on the D7-branes is identified with the global $U(N_f)_{\rm V}$ symmetry of the SQCD via the gauge/string duality, the chemical potential $\mu$ is identified with the value of the temporal component of the overall $U(1)$ gauge field on the coincident D7-branes. In the meson phase, this gauge field is just a constant $\mu$, while in the melted meson phase, there appears electric flux on the D7-branes: this configuration has a lower free energy, which the holographic dual can compute, and is thus favored. The phase transition is first order, and the critical chemical potential for $T=0$ is $\mu_{\rm cr}=m$. In the melted meson phase, the shape of the D7-branes is a spike whose tip is inside the horizon. Electrically charged spikes on D-branes are identified with fundamental strings, so the existence of the electric flux means that the quark number density is nonzero. These points are briefly reviewed in Sec. \[sec2.1\]. Let us explain how the CFL phase of this theory can be realized in its gravity dual. First of all, note that our theory is ${\cal N}=2$ SQCD, which includes squarks carrying the baryon (quark) number. So, once the chemical potential becomes large enough, we expect squark condensation, instead of di-quark condensation. We shall see this squark condensation in this paper: [*this is certainly a CFL, but also a Higgs phase*]{}. As suggested before, we separate $\tilde{N}_c$ D3-branes among the $N_c$ and treat them as probes, $\tilde{N}_c \ll N_c$. The relevant quarks/squarks are strings connecting the $\tilde{N}_c$ D3-branes and the $N_f$ D7-branes. Condensation of strings connecting D$p$-branes and D$(p+4)$-branes is well-known [@Douglas:1995bn]: the D$p$-branes are dissolved into the D$(p+4)$-branes, and the D$p$-branes can be seen as finite size instantons on the D$(p+4)$-branes. Therefore, the CFL Higgs phase of the SQCD is equivalent, via the gauge/string duality, to the situation where the size of the instantons on the probe D7-branes is driven to become larger. We will show in this paper that this is indeed the case, by computing the potential of the instanton size modulus on the D7-brane $U(N_f)$ gauge theory, in the melted meson phase. [The D3-branes are moved onto the D7-branes and dissolve into the D7-branes dynamically.]{} This Higgs phase for $T=0$ was described in Refs. [@Guralnik:2004ve; @Erdmenger:2005bj; @Arean:2007nh] (see also Ref. [@Guralnik:2004wq]), and the potential for the instanton size modulus was considered in the absence of baryon density. At $T=0$, the resultant potential vanishes (we review it in Sec. \[sec2.2\] and \[sec2.3\]), so there is no CFL. Our new point is that including the backreaction of the D7-brane electric flux (Sec. \[sec3\]) generates a nontrivial potential for the instanton size modulus (Sec. \[sec4\]). The new potential has a run-away behavior (see Fig. \[figrunawayT=0\]), causing the instantons to expand, hence the CFL Higgs phase is preferred. This new potential exists only in the melted meson phase, so, for $T=0$, above the critical baryon chemical potential, the CFL Higgs phase appears: this is what we show using the gauge/string duality for the SQCD. The way this new potential emerges is quite intriguing. It is essentially due to a Chern-Simons (CS) term on the D7-branes, $\int {\rm tr}\; F\wedge F\wedge F \wedge C_2$.
The backreaction of the electric flux on the D7-branes generates a nonzero constant Ramond-Ramond (RR) 3-form flux $F_3 = dC_2$ (\[f3back\]).[^5] Substituting this into the CS term, we obtain $\int {\rm tr}\; A\wedge F\wedge F$; thus the electric potential $A_t$ on the D7-branes interacts with the instanton density ${\rm tr}\, F\wedge F$, which gives a nontrivial potential (\[potrho\]).[^6] This generation of $F_3$ can also be thought of as being sourced by baryon vertices, which are nothing but D5-branes wrapping the $S^5$ [@WittenBaryon] (see also Ref. [@Gross:1998gk]). If one smears them, they provide a constant magnetic flux $F_3$ along the $x^1$-$x^2$-$x^3$ directions (Sec. \[sec3.1\]). So, our work is an example of backreacting baryon vertices. We also analyze the thermalized case with $T\neq 0$ (Sec. \[sec5\]). Ref. [@Apreda1] showed that, at finite temperature, a nontrivial potential (\[thermal-pot-fini-T\]) for the size modulus of the instanton on the D7-branes is generated even before including the baryon density. This potential is minimized at a finite value of the instanton size. Therefore a Higgs CFL phase is preferred. Introducing the baryonic density, we analyze the backreaction, and our new CS-type potential (\[back-reac-pot-fini-T\]) adds to it. This addition does not change the result that the instanton size is nonzero, so we still have the Higgs CFL phase. If we naively add up the two potentials (the thermal potential (\[thermal-pot-fini-T\]) given in Ref. [@Apreda1] and our CS-type potential (\[back-reac-pot-fini-T\])), we find that there are two CFL phases: for small values of the baryon density, the minimum is at a finite value of the instanton size, while for large values of the density the minimum sits at infinite size modulus. An expected phase diagram is given in Fig. \[figphase2\]. However, since the potential computed in this paper is valid only for small instanton size, this conclusion is a qualitative one and deserves further study including the full backreaction of the geometry. We conclude by discussing some interesting future directions in Sec. \[sec6\]. ![The structure of the phase diagram of the ${\cal N}=2$ SQCD given by a naive addition of the instanton size potential coming from the backreaction of the geometry. The melted meson phase (corresponding to the black hole embedding of the D7-branes) is divided into two distinct phases. The upper half, denoted as “thermal CFL Higgs phase”, is dominated by the thermal potential of the instanton size, in which the size is roughly equal to the horizon size. The lower half, denoted as “CFL Higgs phase”, is dominated by the potential generated by the backreaction due to the baryon density, in which the instanton size is much larger than the horizon size. []{data-label="figphase2"}](phase2.eps "fig:"){width="40.00000%"}

Instanton on the Flavor Branes {#sec2}
==============================

#### We start by constructing a solution for the equations of motion on the $N_f$ flavor D7-branes which has nonzero instanton number. This solution corresponds to D3-branes dissolved in the flavor D7-branes. In this section, the probe approximation for the D7-branes is adopted, while the important backreaction will be treated in Sec. \[sec3\], and its effect on the solution which we will find in this section will be studied in Sec.
\[sec4\] where the dynamical dissolution of the instantons (D3-branes) due to the finite baryon density is shown. Review of the D3D7 System at Finite Baryon Density {#sec2.1} -------------------------------------------------- #### Let us begin for simplicity, by considering the case with zero temperature, which corresponds to $AdS_5\times S^5$ background in type IIB Supergravity. We shall embed in it a stack of $N_F$ space-time filling D7-branes, with a non-trivial world volume baryonic $U_b(1)$ gauge field turned on. As it turns out, the exact shape of D7s and profile of the gauge field can be analytically solved in such regime [@Karch:2007br], which we shall review in some detail next. The $AdS_5\times S^5$ metric, as generated by the backreaction of $N_c$ D3-branes, is given in Poincare coordinates: $$\begin{aligned} ds^2 &=& \frac{r_6^2}{R^2}\eta_{\mu\nu}dx^\mu dx^\nu +\frac{R^2}{r_6^2}\left(dr_6^2 + r_6^2 ds_5^2\right)\,,~~~\frac{R^4}{\alpha'^2}=4\pi g_s N_c=\lambda \label{AdS5S5metric}\,,\\ g_s C_4&=&\frac{r_6^4}{R^4}dx^0\wedge dx^1\wedge dx^2\wedge dx^3\,, \label{C4}\\ g_s F_5&=&(1+*_{10})d(g_s C_4)=4R^4(d\Omega_5+*_{10} d\Omega_5)\,.\label{F5}\end{aligned}$$ Here we have listed out the RR 4-form field $C_4$ and the self-dual 5-form field strength $F_5$, whereas the string coupling $g_s=e^{\Phi_0}$ remains constant. The indices $\mu\,\nu$ runs over $0,1,2,3$, $\eta_{\mu\nu}$ denotes four dimensional Minkowski metric, and $ds_5^2$ is the metric for a unit five-sphere. For our later purpose, let us also reparametrize the flat six internal dimensional metric as: $$dr_6^2+r_6^2 ds_5^2=dr^2+r^2 ds_3^2+dy^2+dz^2\,,\label{reparaR6}$$ with $r_6^2=r^2+y^2+z^2$. Here $ds_5^2$ ($ds_3^2$) is the metric on the unit $S^5$ ($S^3$). In such coordinates, there exists $U(1) \subset SO(6)$ isometry group which rotates $(y,z)$. Introducing $N_f$ $(\ll N_c)$ probe D7-branes into (\[AdS5S5metric\]), their Dirac-Born-Infeld (DBI) action is given by [@Myers:1999ps] $$\begin{aligned} S_{\rm DBI}^{\rm D7} = -{\cal T}_{\rm D7}\int d^8\xi \; e^{-\Phi} \; {\rm tr} \sqrt{-\det(G_{ab} + 2\pi\alpha'F_{ab})}\,. \label{d7}\end{aligned}$$ Here $\xi^a, a=0,\dots 7$ are the eight dimensional D7-brane world volume coordinates, $G_{ab}$ is the pullback metric and $F_{ab}$ is the worldvolume gauge field, which for now, we shall only turn on the diagonal baryonic $U_b(1)$ component. ${\cal T}_{\rm D7}e^{-\Phi_0} = 1/((2\pi)^7 \alpha'^4 g_s)$ is the tension of the D7-brane. [The trace is taken over the symmetrized gauge indices. Note that the symmetrized trace is valid only up to the fourth order in $\ap$ [@Tseytlin:1997csa; @Hashimoto:1997gm; @Tseytlin:1999dj]. However, it is known that, at this order, the non-abelian DBI equations are solved at least for the instanton configurations.]{} We choose the gauge for the D7-brane worldvolume coordinates as $$(\xi^0,\dots\xi^3)\equiv (t,\dots,x^3)\,,~~~(\xi^4,\dots,\xi^7)\equiv (r,S^3)\, ,$$ so that the D7-branes are spacetime filling and spanning in the four flat internal directions given by $r$ and $S^3$ in (\[reparaR6\]). The D7-branes therefore have asymptotic worldvolume geometry of $AdS_5\times S^3$. The precise D7-brane embedding are specified by the transverse coordinates $(y,z)$, which become D7-brane scalar fields. To preserve the isometry of $S^3$, we have $(y(r),z(r))$; the $U(1)$ isometry further sets $z(r)=0$. 
The induced D7-brane world volume metric is therefore: $$G_{ab}d\xi^a d\xi^b=\frac{(r^2+y(r)^2)}{R^2}\left(\eta_{\mu\nu}dx^\mu dx^\nu\right)+\frac{R^2}{(r^2+y(r)^2)}\left((1+(y'(r))^2)dr^2+r^2 ds_3^2\right)\,,\label{Gab}$$ where $'$ denotes $\frac{d}{dr}$. Turning on only the temporal component of the $U_b(1)$ gauge field $A_t(r)$, which we again take to be dependent purely on $r$, the resultant D7-brane DBI action (density)[^7] is: $$S_{\rm DBI}^{\rm D7}/V_4=\int dr L=-{{{\cal N}}}\int dr \; r^3\sqrt{(1+(y'(r)^2)-(2\pi\alpha' A_t'(r))^2}\label{d7action}\,,$$ where ${{\cal N}}= N_f {\cal T}_{\rm D7} {\rm Vol}(S^3)g_s^{-1}=N_f{\mathcal{T}}_{\rm D7}(2\pi^2)g_s^{-1}$, and factor $N_f$ arises from the trace. As noted in Ref. [@Karch:2007br], the action (\[d7action\]) does not contain explicit dependences on $y(r)$ and $A_t(r)$, their equations of motion yield following constants of motion: $$\begin{aligned} \frac{\delta L}{\delta y'} &=&-{\mathcal{N}} r^3 \frac{y'}{\sqrt{1+(y')^2-(2\pi\alpha' A_t')^2}} =-{{\bf{c}}}\,,\label{constc}\\ \frac{\delta L}{\delta(2\pi\alpha' A_t')} &=&{\mathcal{N}} r^3 \frac{2\pi\alpha'A_t'} {\sqrt{1+(y')^2-(2\pi\alpha' A_t')^2}}={{\bf{d}}}\,. \label{constd}\end{aligned}$$ A useful relation can also be readily deduced $$2\pi\alpha' A_t'(r)=\frac{{{\bf{d}}}}{{{\bf{c}}}}y'(r)\, .\label{Usefulrel}$$ Using this and rearranging (\[constc\]) and (\[constd\]), we obtain $$\begin{aligned} 2\pi\alpha' A_t'(r) = \frac{{{\bf{d}}}}{{{\cal N}}\sqrt{r^6 + r_0^6}}, \quad y'(r) = \frac{{{\bf{c}}}}{{{\cal N}}\sqrt{r^6 +r_0^6}}\,, \label{KOsol}\end{aligned}$$ where we have defined: $$r_0^6=\frac{{{\bf{d}}}^2-{{\bf{c}}}^2}{{{\cal N}}^2}\,.\label{Defr0}$$ We can readily integrate (\[KOsol\]) to obtain the profiles for $y(r)$ and $2\pi\alpha' A_t(r)$: $$\begin{aligned} &&y(r)=\frac{{{\bf{c}}}}{2~3^{1/4}{{\cal N}}r_0^2}{\mathbb{F}} \left(\!\varphi(r),\frac{2+\sqrt{3}}{4}\right) \,,~2\pi\alpha' A_t(r) =\frac{{{\bf{d}}}}{2~3^{1/4}{{\cal N}}r_0^2}{\mathbb{F}} \left(\!\varphi(r),\frac{2+\sqrt{3}}{4}\right)\,,\nonumber\\ \label{yrAr}\\ &&\varphi(r)=\arccos\left(\frac{1-(\sqrt{3}-1)(r/r_0)^2}{1+(\sqrt{3}+1)(r/r_0^2)^2}\right)\,,\label{varphir}\end{aligned}$$ where ${\mathbb{F}}(\varphi,k)$ is the incomplete elliptic integral of the first kind. In the above computations, we have taken ${{\bf{d}}}>{{\bf{c}}}$, the resultant solutions (\[yrAr\]) should be regarded as the zero temperature analog of the aforementioned black hole embedding [@HoloBaryon; @Karch:2007br]. In such case, the D7-branes extend all the way to the “horizon” located at $\sqrt{r^2+y(r)^2}=r_6=0$ ($z(r)$ has been set to zero) and we have used this fact to fix the integration constant. The profile of $y(r)$ in (\[yrAr\]) in $(r,y(r))$ plane displays a sharp peak towards $y(0)=0$ around $r=0$ (or four dimensional cone when sweeping out the $S^3$), and flattens out to approach $2\pi\alpha' m$ as $r\to \infty$, where $m$ is the bare quark mass. This is in contrast with the Minkowski embedding where D7-branes lie at finite distance from the horizon, or $\sqrt{r^2+y(r)^2}>r_H$. In the presence of finite baryon density, it was shown in Ref. [@HoloBaryon] that only black hole embedding is stable and physical, we shall discuss them in more details in section \[sec5\]. 
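To make the structure of this solution concrete, here is a minimal numerical sketch (not from the paper; the values of ${{\cal N}}$, ${{\bf{c}}}$ and ${{\bf{d}}}$ are purely illustrative) that integrates the first-order equations (\[KOsol\]) and reads off the asymptotic plateaus of $y(r)$ and $2\pi\alpha' A_t(r)$:

```python
# Minimal numerical sketch (illustrative constants, not from the paper):
# integrate the profile equations (KOsol) and read off the asymptotic values
# of y(r) and 2*pi*alpha'*A_t(r).
import numpy as np
from scipy.integrate import quad

N_cal, c, d = 1.0, 0.6, 1.0                       # illustrative, with d > c
r0 = ((d**2 - c**2) / N_cal**2) ** (1.0 / 6.0)    # eq. (Defr0)

def yprime(r):                                    # y'(r)
    return c / (N_cal * np.sqrt(r**6 + r0**6))

def aprime(r):                                    # 2*pi*alpha' A_t'(r)
    return d / (N_cal * np.sqrt(r**6 + r0**6))

# Both integrands fall off like 1/r^3, so the profiles approach constants.
y_inf = quad(yprime, 0.0, np.inf)[0]
a_inf = quad(aprime, 0.0, np.inf)[0]
print(y_inf, a_inf, a_inf / y_inf)                # the ratio equals d/c
```

The two plateaus are precisely the asymptotic values that are related to the quark mass and the chemical potential below, and by (\[Usefulrel\]) their ratio is ${{\bf{d}}}/{{\bf{c}}}$.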
Finally one can relate the asymptotic values of $y(r)$ and $2\pi \alpha' A_t(r)$ with the quark mass $m$ and the chemical potential $\mu$ as $y(\infty)\to 2\pi\alpha' m$ and $2\pi \alpha' A_t(\infty)\to 2\pi\alpha' \mu$ [@HoloBaryon], and obtain the following relations [@Karch:2007br]: $$\begin{aligned} && {{\bf{c}}}= \gamma {\cal N} (2\pi\alpha')^3 (\mu^2-m^2)m \, , \label{relm}\\ && {{\bf{d}}}= \gamma {\cal N} (2\pi\alpha')^3 (\mu^2-m^2)\mu \, . \label{relmu}\end{aligned}$$ Here the constant $\gamma=\left(\frac{\sqrt{\pi}}{\Gamma(1/3)\Gamma(7/6)}\right)^{-3}\sim 0.363$. This completes our review on the zero temperature D7-brane embedding in the presence of baryonic $U_b(1)$ gauge field. To realize the color-flavor locking phase, we shall next consider turning on a $SU(N_f)$ instanton configuration within the internal four cycle as a perturbation. Instanton Solution {#sec2.2} ------------------ #### We are ready to consider the non-Abelian part of the $U(N_f)$ gauge group on the flavor D7-branes, including the instantons. In the equations of motion, the overall $U_b(1)$ discussed earlier is coupled to the $SU(N_f)$ subsector where we like to put the instantons representing the D3-branes. As we shall see later, for large ’tHooft coupling $\lambda$, the non-Abelian part can be regarded as a fluctuation around the fixed $U_b(1)$ background (\[yrAr\]). We substitute the $U_b(1)$ solution (\[KOsol\]) of Ref. [@Karch:2007br] into the action and consider only the $SU(N_f)$ non-Abelian part of the action (\[d7\]). We are interested in solutions having the instanton charges in the subspace $(\xi^4, \cdots, \xi^7)$, so we just turn on $SU(N_f)$ $A_i(\xi)$ $(i=4, \cdots, 7)$ among the gauge fields, and let them be dependent on only the coordinates $\xi^i$. In the action, the effective four cycle metric felt by these non-Abelian components is computed as follows. We note that the background $A_t(r)$ can be regarded as an additional transverse scalar field in the D7-brane DBI action, as we are interested in only the space spanned by $(\xi^4, \cdots, \xi^7)$. Indeed, the effective metric for the directions $(\xi^4, \cdots, \xi^7)$ can be written formally as $$\begin{aligned} G_{ij}^{(4)} = g_{ij} + g_{yy} {\partial}_i y {\partial}_j y + g^{tt} {\partial}_i A_t {\partial}_j A_t (2\pi\alpha')^2\,,~~~i,j=4,5,6,7.\label{Gij1}\end{aligned}$$ Since $y(r)$ and $2\pi\alpha' A_t(r)$ are functions of $r=\sqrt{\sum_{i=4}^7 (\xi^i)^2}$, we can rewrite above as $$\begin{aligned} {G}_{ij}^{(4)} = \frac{R^2}{r_6^2} \left(\delta_{ij} + \frac{\xi^i\xi^j}{r^2} \left(y'^2 - (2\pi\alpha' A_t')^2\right)\right).\label{Gij2}\end{aligned}$$ So the determinant in the DBI action, including the non-Abelian field strength $F_{ij}$ in the $SU(N_f)$, is written as $$\begin{aligned} -\det (G_{ab}+2\pi\alpha' F_{ab}) & = & \det \left( \widetilde{G}_{ij}^{(4)} + 2\pi\alpha' F_{ij} \frac{r^2+y^2(r)}{R^2} \right)\end{aligned}$$ where the unwarped effective four cycle metric $\widetilde{G}_{ij}^{(4)}$ is given by $$\begin{aligned} {\widetilde{G}}_{ij}^{(4)} \equiv \delta_{ij} + \frac{\xi_i\xi_j}{r^2}(y'^2 - (2\pi\alpha' A_t')^2) \, .\label{tGij}\end{aligned}$$ Thus the total DBI action including the non-Abelian field strengths $F_{ij}$ is $$\begin{aligned} S_{\rm DBI}^{\rm D7} = -{\cal T}_{\rm D7} \int\! d^4x \int\! d^4\xi \; e^{-\Phi} \; {\rm tr} \sqrt{ \det \left( \widetilde{G}_{ij}^{(4)} + 2\pi\alpha' F_{ij}\frac{r^2+y^2(r)}{R^2} \right)} \,. 
\label{dbiin}\end{aligned}$$ In this expression, note that the prefactor of $F_{ij}$ is suppressed by $\lambda^{-1/2}$. In fact, $2\pi \alpha'/R^2 = 2\pi/\sqrt{\lambda}$, with the relation $R^4 = 4\pi g_s N_c \alpha'^2$. We can therefore regard the instanton as a fluctuation around the fixed $U_b(1)$ background sourced by $A_t$, for a large $\lambda$. In addition to this DBI action, now we also have a Chern-Simons (CS) term by coupling with background RR 4-form $C_4$ given in (\[C4\]) with the non-Abelian field strength $F_{ij}$: $$\begin{aligned} S_{\rm CS}^{\rm D7} = {\cal \mu}_{\rm D7} \int \! d^4x \int \! d^4\xi \; \frac{1}{g_s} \left(\frac{r^2 + y(r)^2}{R^2}\right)^2 \frac{(2\pi\alpha')^2}{8} \; {\rm tr} \left[\epsilon^{ijkl} F_{ij}F_{kl}\right]\,, \label{csd7}\end{aligned}$$ with $\mu_{\rm D7}= {\cal T}_{\rm D7}$ We will show that self-dual configurations of the non-Abelian gauge fields with respect to the metric ${\widetilde{G}}^{(4)}_{ij}$ satisfies a particular property: the $F_{ij}$-dependent part of the DBI action (\[dbiin\]) is completely canceled by the Chern-Simons term (\[csd7\]). This interesting property of the instantons on the D7-branes was explicitly shown for a special case in Ref. [@Arean:2007nh] which treated the case of the flat D7-branes (${{\bf{c}}}={{\bf{d}}}=0$). We use the following formula in generic curved space [@Gibbons:2000mx] $$\begin{aligned} \sqrt{{\rm det} (g+F)} = \sqrt{\det g} + \frac14 \sqrt{{\rm det} g} \left| F_{ij} *_4 F^{ij} \right| \end{aligned}$$ for the self-dual configuration $$\begin{aligned} F_{ij}= *_4 F_{ij}\,. \label{selfdual}\end{aligned}$$ Here the Hodge dual operation $*_4$ is with respect to the effective four cycle metric and defined by a covariant totally antisymmetric tensor $\eta^{ijkl}$, $$\begin{aligned} *F^{ij}\equiv \frac12 \eta^{ijkl}F_{kl}\, , \quad \eta^{ijkl} = \frac{1}{\sqrt{\det g}} \epsilon^{ijkl}\, , \quad \epsilon^{4567}=1\, .\end{aligned}$$ This formula was shown in Ref. [@Gibbons:2000mx] for Abelian field strength, and now if we assume that the non-Abelian DBI action is written with the symmetric trace prescription, this equality also holds for the present non-Abelian case. Once we apply this formula to our DBI action (\[dbiin\]), for the self-dual instanton configuration with respect to the metric ${\widetilde{G}}^{(4)}_{ij}$, we obtain $$\begin{aligned} S_{\rm DBI}^{\rm D7} = -\frac{{\cal T}_{\rm D7}}{g_s} \int \! \!d^4x \!\!\int \!\! d^4\xi \;{\rm tr} \left[ \sqrt{\det {\widetilde{G}}^{(4)}_{ij}} + \frac{(2\pi\alpha')^2}{8} \left(\frac{r^2\! +\! y(r)^2}{R^2}\!\right)^2 \!\epsilon^{ijkl} F_{ij} F_{kl} \right]\,.\end{aligned}$$ Note that we rewrite the Hodge dual by the constant tensor $\epsilon^{ijkl}$. Using the relation ${\cal T}_{\rm D7}=\mu_7$ and $g_s=e^{\Phi}$ is fixed, it is obvious that the $FF$ dependent terms in the DBI is canceled by the CS actions (\[csd7\]). It is interesting that this cancellation occurs not only for the flat D7-branes with no electric flux on it but also our present case, albeit our D7-brane configuration breaks the supersymmetries completely. The state with the D3 branes and the D7 branes connected by the fundamental strings in flat space is supersymmetric. However, in our case, the spike does not extend to infinity, supersymmetry is thus broken. In Ref. [@Arean:2007nh], it was argued that this cancellation is due to the BPS property of the D3D7 system. 
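As a simple check of the determinant identity used above (not needed for the argument, but instructive), take flat space $g_{ij}=\delta_{ij}$ with an abelian self-dual field $F_{45}=F_{67}=f$: then
$$\sqrt{\det(\delta+F)}=\sqrt{(1+f^2)^2}=1+f^2\,,\qquad
\sqrt{\det\delta}+\frac14\sqrt{\det\delta}\,\big|F_{ij}*_4F^{ij}\big|=1+\frac14\cdot 4f^2=1+f^2\,,$$
so the two sides indeed agree in this simplest case.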
Here we could show the same cancellation even with the non-supersymmetric electric flux, and there is therefore no potential on the instanton moduli space. However, in the remaining part of this paper, we will see that in fact a backreaction of this electric flux on the D7-branes will lift the cancellation slightly, and induces a potential term for the instanton moduli space. It is an essential point which we like to focus on in this paper. Conformal Metric and Explicit Instanton Configuration ----------------------------------------------------- #### {#sec2.3} Our self-dual configuration of the non-Abelian field strength is with respect to the curved “effective” metric ${\widetilde{G}}^{(4)}_{ij}$. On the other hand, the simpler case of Ref. [@Arean:2007nh] has a flat metric $\delta_{ij}$ instead. In the following, we show that a coordinate transformation can turn the effective unwarped four cycle metric ${\widetilde{G}}^{(4)}_{ij}$ (\[tGij\]) into a conformally flat metric, so that in the new coordinate the standard BPST instanton configuration suffices. In any conformally flat space, the self-dual equation on it is simply the same as the self-dual equation on the flat space. It is easy to see that the metric ${\widetilde{G}}_{ij}^{(4)}$ (\[tGij\]) can be written in the polar coordinate as $$\begin{aligned} ds^2 = \left(1 + (y'^2-(2\pi\alpha' A_t')^2 )\right)dr^2 + r^2 ds_3^2 \, .\label{z0metric} \end{aligned}$$ Substituting the explicit expressions for $y'(r)$ and $2\pi\alpha' A'_t(r)$ (\[KOsol\]), we can deduce that $$1+y'^2(r)-(2\pi\alpha' A_t'(r))^2 =\frac{r^6}{r^6+r^6_0}\,.$$ To show (\[z0metric\]) is conformally flat, let us consider $$\begin{aligned} ds^2 = \left(\frac{r^6}{r^6+r_0^6}\right)dr^2 + r^2 ds_3^2=S(\tilde{r})^2 \left(d \tilde{r}^2 + \tilde{r}^2 ds_3^2\right)\end{aligned}$$ and solve for $\tilde{r}$ and $S(\tilde{r})$. First the consistency in $S^3$ directions demands that $r=S(\tilde{r})\tilde{r}$, the relevant differential equation in the $r$ and $\tilde{r}$ directions then gives: $$\frac{d\tilde{r}}{\tilde{r}}=\frac{r^2}{\sqrt{r^6+r_0^6}}dr\,. \label{conftrans}$$ Integrating both sides, we can obtain the desired change of variable, $$\tilde{r}=r\left[\frac{1+\sqrt{1+r_0^6/r^6}}{2}\right]^{1/3}. \label{rtrrelation1}$$ The integration constant is fixed so that $r\sim \tilde{r}$ for large $r$. We can also invert the relation (\[rtrrelation1\]) to obtain $$\frac{r}{\tilde{r}}=S(\tilde{r})=\left[1-\frac{r_0^6}{4\tilde{r}^6}\right]^{1/3}\,. \label{rtrrelation2}$$ In this new coordinate $\tilde{r}$, the self-dual configuration is just the familiar BPST instanton. When bringing that to the original coordinate $r$, we obtain a solution to the self-dual equation in the space with the metric $\widetilde{G}_{ij}^{(4)}$. In Sec. \[sec4\], we shall use this explicit coordinate transformation to evaluate the potential for the instanton size moduli. Linearized Supergravity Backreaction {#sec3} ==================================== #### {#section-4} In this section, we shall compute a linearized perturbation to the supergravity background (\[AdS5S5metric\]), (\[C4\]), (\[F5\]), due to the electric field $A_t$ on the D7-branes. Let us first recall that the electric flux, which is responsible for the $U_b(1)$ baryon charge, can be regarded as fundamental strings dissolved in the D7-branes. This is because in the DBI action the electric field is combined with the (pull-back of) NSNS 2-form field $\hat{B}_2$ in a gauge-invariant fashon, $2\pi\alpha' F_{ab} + \hat{B}_{ab}$. 
Such electrified D7-branes can be regarded as a source to the bulk 3-form flux $H_3\equiv dB_2$, acting as small perturbation to the background SUGRA solution. Moreover from the consistent equations of motion of the SUGRA, this also induces RR 3-form flux $F_3$, which we will proceed to extract in two different ways. The induced $F_3$ is important for the dynamics of the instantons on the D7-branes as we will see in the next section. So, in this section, we derive the exact amount of this $F_3$ as a backreaction of the electrified D7-brane configuration, which is $$\begin{aligned} F_{123}^{(3)} = \frac{8\pi^3\alpha'^2 {{\bf{d}}}}{N_c} \, . \label{f3back}\end{aligned}$$ First in Sec. \[sec3.1\], we present an intuitive derivation of the $F_3$ by using smeared baryon vertices. In Sec. \[sec3.2\], we compute the backreaction to the geometry due to the electrified D7-branes. The result of Sec. \[sec3.2\] coincides with that of Sec. \[sec3.1\]. Smeared Baryon Vertices {#sec3.1} ----------------------- The electric fields on the D7-branes are interpreted as fundamental strings connecting the D7-branes and the D3-branes, therefore they are quarks. The number density of them is given by (\[constd\]), quark density $= 2\pi\alpha' {{\bf{d}}}$. This means that the baryon number density is $2\pi\alpha' {{\bf{d}}}/N_c$. The D7-brane spike terminates at the origin $r_6=0$. If we take the flux conservation at the tip of the spike seriously, we need to assume the presence of the baryon vertices surrounding the origin. As is well known, D5-branes wrapping the $S^5$, which are called baryon vertices, can give a charge at which the fundamental strings can end [@WittenBaryon]. In this subsection, we compute a back reaction of these baryon vertices smeared on the plane $x^1$-$x^2$-$x^3$ at $r_6=0$. Our result is (\[f3back\]).[^8] The relevant terms from the type IIB supergravity and D5-brane DBI actions are $$\begin{aligned} -\frac{1}{4 \kappa_{10}^2} \int \! d^{10}x \; \sqrt{-g_{10}} |F_7|^2 + \mu_5 \int C_6 \, ,\label{D5actions}\end{aligned}$$ with $4 \kappa_{10}^2 = 2 (2\pi)^7\alpha'^4$ and $\mu_5 = (2\pi)^{-5} \alpha'^{-3}$. We have also used $F_7=dC_6=*_{10} F_3$. It is enough to consider the explicit component $C_6 = C^{(6)}_{0\theta_1\theta_2\theta_3\theta_4\theta_5} dx^0\wedge d\theta_1\wedge d\theta_2 \wedge d\theta_3 \wedge d\theta_4 \wedge d\theta_5$, then (\[D5actions\]) becomes $$\begin{aligned} \int d^4x dr_6 d\theta_1 d\theta_2 d\theta_3 d\theta_4 d\theta_5 \left[ \frac{-1}{4\kappa_{10}^2} \frac{r_6^3}{R^8}\frac{1} {\sin^4\theta_1\sin^3\theta_2\sin^2\theta_3\sin\theta_4} (\partial_{r_6}C^{(6)}_{0\theta_1\theta_2\theta_3\theta_4\theta_5})^2 \right. \nonumber \\ \left. + \mu_5 \frac{2\pi\alpha' {{\bf{d}}}}{N_c} \delta(r_6-\epsilon) C^{(6)}_{0\theta_1\theta_2\theta_3\theta_4\theta_5} \right],\qquad\end{aligned}$$ where the position of the baryon vertices is specified as $r_6=\epsilon$ with $\epsilon \to 0$. This can be solved as $$\begin{aligned} \partial_{r_6}C^{(6)}_{0\theta_1\theta_2\theta_3\theta_4\theta_5} = \frac{8\pi^3\alpha'^2 {{\bf{d}}}}{N_c} \frac{R^8}{r_6^3} \sin^4\theta_1\sin^3\theta_2\sin^2\theta_3\sin\theta_4 \, ,\end{aligned}$$ where we have also used the explicit expressions for $\kappa_{10}^2$ and $\mu_5$. Taking a Hodge dual in the background $AdS_5\times S^5$, we immediately obtain (\[f3back\]). The above analysis leads to an important consequence which resolves a problem in introducing baryons in D$p$/D$q$ systems. 
The phase structure of fundamental matter at finite baryon density has been studied by introducing electric flux on probe D$q$-branes in the D$p$-brane background [@HoloBaryon; @D3D7phase; @Matsuura:2007zx]. The baryon number there was considered to be carried by free quarks, in the sense that the quark density, or the electric flux, can take any value as long as the total number of strings is an integer. In other words, ${\bf d}$ is quantized in units of 1.[^9] It is natural to ask what happens if one considers baryons instead of quarks in the system. As was pointed out in Refs. [@HoloBaryon; @Matsuura:2007zx] and studied in detail in Ref. [@Seo:2008qc], it turns out that there is no stable baryon vertex solution outside the horizon in the deconfinement phases. A resolution of this problem of the missing baryon vertex is that the baryon vertices undergo a brane/flux transition and leave only RR flux outside the horizon. The DBI part of the D5-brane baryon vertex disappears since its volume element vanishes (the time direction of the geometry shrinks), while the CS term of the D5-brane action remains to source the bulk RR 3-form flux $F_3$. We can in any case see this “remnant” of the baryon vertices by consistently solving the SUGRA equations of motion for the NSNS $B$-field, as in Sec. \[sec3.2\] below. In this section we work at $T=0$, but the role of the horizon is played by the origin $r_6=0$.

A similar transition can be found in a simpler example. Consider $AdS_5 \times S^5$ with $N$ units of the RR flux and put an additional probe D3-brane parallel to the boundary in this spacetime at a certain $r_6$. This is a supersymmetric configuration (the Coulomb phase) and $r_6$ is a modulus. When the brane goes to $r_6=0$, the DBI part of the D3-brane becomes zero. The correct picture for this case is given by $AdS_5 \times S^5$ with $N+1$ units of the flux. Therefore, the probe D3-brane is replaced by a unit of flux. This argument can be applied to the finite temperature case.

It is interesting that the quantization condition of this $F_3$ in (\[f3back\]) shows that the quark number density [**d**]{} is quantized not in units of 1 but in units of $N_c$ [^10]. This suggests that the quarks in the D$p$/D$q$ system should always be thought of as components of baryons. We will see in Sec. \[sec5\] that the analysis of Sec. \[sec3.2\] still applies to a finite temperature system despite the fact that the end points of the strings are hidden inside the black hole horizon.

Backreaction from the D7-brane Electric Flux {#sec3.2}
--------------------------------------------

#### {#section-5}

Instead of assuming the presence of the D5-brane baryon vertices, here we provide an alternative derivation of (\[f3back\]) by solving for the backreaction due to the electric flux on the D7-branes. In this subsection, we demonstrate this by looking at the equation of motion for the NSNS 2-form field $B_2$. For the validity of the approximation adopted in this section, see Sec. \[sec323\].

### Sourcing the Bulk NS-NS B-Field

First, let us examine how the electric flux on the D7-branes can act as a source for the bulk NSNS 2-form field $B_{2}$. The DBI action includes the NSNS B-field as $$\begin{aligned}
S^{\rm D7}_{\rm DBI} = - {\cal T}_{\rm D7} \int \!d^8\xi \; e^{-\Phi}\; {\rm tr} \sqrt{-\det (g_{ab} + 2\pi\alpha' F_{ab} + \hat{B}_{ab})}\,,\label{DBIBaction}\end{aligned}$$ where $\hat{B}_{ab}$ is the induced NSNS $B$-field carried by the fundamental strings in the D7-branes.
We are treating here only the overall $U_b(1)\subset U(N_f)$ sector (\[KOsol\]), so the trace has already produced the factor $N_f$. Since we are interested in a linear perturbation by this source, we expand this action around $\hat{B}=0$ to linear order in the $B$-field: $$\begin{aligned}
S^{\rm D7}_{\rm DBI}\biggm|_{{\cal O}(\hat{B})} = -\int d^4x dr \; \hat{B}_{0r} \left[\frac{\delta L} {\delta (2\pi\alpha' A_t')}\right]_{B=0},\label{d7B2}\end{aligned}$$ where $L$ is the Lagrangian density as defined in (\[d7action\]). This expression follows since the $B$-field appears in the DBI action (\[DBIBaction\]) only in the gauge-invariant combination $2\pi\alpha' F + \hat{B}$. More explicitly, these fields appear as $(2\pi\alpha'A_t'-\hat{B}_{0r})^2$ in the action, hence the additional negative sign. The shape of the D7-branes is specified only by $y(r)$, so the induced $B$-field is just $\hat{B}_{0r} = B_{0y} y'(r) + B_{0r}$. Together with (\[constd\]), we have $$\begin{aligned}
S^{\rm D7}_{\rm DBI}\biggm|_{{\cal O}(B)} = -{{\bf{d}}}\int d^4x dr \; \left( B_{0y} y' + B_{0r} \right)\, .\end{aligned}$$ This is the source term for the bulk NSNS B-field.

For later purposes, we can express $(r,y,z)$ in terms of angular coordinates for the $S^5$, $\{\theta_1,\dots,\theta_5\}$: $$\begin{aligned}
z = r_6 \cos\theta_1 \, , \quad y = r_6 \sin \theta_1 \cos \theta_2 \,, \quad r = r_6 \sin\theta_1 \sin\theta_2 \, ,\label{Deftheta12}\end{aligned}$$ where the remaining $\theta_3,\theta_4,\theta_5$ parametrize the $S^3$ in (\[Gab\]). For our D7-brane embedding specified by $y(r), z=0$, this translates into setting $\theta_1 = \pi/2$. We can further invert the relation $r_6^2=r^2+y(r)^2$, and express $\theta_2$ as a function of $r_6$ via $$\begin{aligned}
\theta_2(r_6) = \arctan \frac{r(r_6)}{y(r(r_6))} \, . \label{deftheta2r6}\end{aligned}$$ So in terms of $r_6$ and these angular coordinates, the source coupling for the NSNS $B$-field is $$\begin{aligned}
S^{\rm D7}_{\rm DBI}\biggm|_{{\cal O}(B)} \!\!\!\! = -{{\bf{d}}}\int\! d^4x \!\int \!dr_6 \;d\Omega_5 \; \frac{\delta(\theta_1-\pi/2) \delta(\theta_2 - \theta_2(r_6))}{2\pi^2\sin^3\theta_2} \left( B_{0r_6} + B_{0 \theta_2} \frac{\partial \theta_2}{\partial r_6} \right).\quad \label{sourcetheta}\end{aligned}$$ Note that to incorporate the D7-brane DBI action into the full 10-dimensional supergravity analysis, we have inserted delta-functions which restrict to the specific embedding we are considering. In particular, we have extended the integral over all the angular coordinates, so we divide by the volume $V_3=2\pi^2$ of the unit 3-sphere.

We will now concentrate on the region $r \sim 0$ to simplify the analysis. The spike has a rigid cone shape around the origin $r=0$. Near the tip of the cone, $r\sim 0$, $y'$ diverges, so there we have $$\begin{aligned}
\theta_2\sim \theta_2^{(0)} \equiv \frac{\sqrt{{{\bf{d}}}^2 - {{\bf{c}}}^2}}{{{\bf{c}}}}\, .\label{theta20}\end{aligned}$$ With this, we can approximate the source term (\[sourcetheta\]) as $$\begin{aligned}
S^{\rm D7}_{\rm DBI}\biggm|_{{\cal O}(B)} = -\frac{{{\bf{d}}}}{2\pi^2}\int\! d^4x \int \!dr_6 \;d\Omega_5 \; \delta(\theta_1-\pi/2) \delta(\theta_2 - \theta_2^{(0)}) \frac{B_{0r_6}}{\sin^3\theta_2^{(0)} }\, . \label{source}\end{aligned}$$

### Extracting the RR 3-Form Flux

We now would like to extract the linearized perturbation $F_3$, which will be crucial for generating the potential on the instanton moduli space.
For this, at the linear order perturbation, it is sufficient to consider the equations of motion for the NSNS $B$-field, with this limiting source term (\[source\]) included, as other equations are affected only at higher orders (we will check this later, see eq. (\[f5kin\]) and discussions thereafter). It will be important that near the $r\sim 0$ region, this source is only for $B_{0r_6}$. The relevant part of the type IIB supergravity action is [@Polchinski2] $$\begin{aligned} S_{\rm B} &=& -\frac{1}{4\kappa_{10}^2} \int d^{10}x \sqrt{-g_{10}}\; e^{-2\Phi} |H_3|^2 +\frac{1}{4\kappa_{10}^2} \int F_5 \wedge B_2 \wedge F_3 \nonumber \\ & & -\frac{1}{4\kappa_{10}^2} \int d^{10}x\sqrt{-g_{10}}\;\frac12 |\tilde{F_5}|^2 . \label{IIBaction}\end{aligned}$$ Here $$\begin{aligned} \tilde{F}_5 \equiv F_5-\frac12 C_2\wedge H_3 + \frac12 B_2 \wedge F_3 \, ,\end{aligned}$$ so, in the third term in (\[IIBaction\]), the term linear in $B_2$ is $$\begin{aligned} -\frac{1}{8\kappa_{10}^2} \int (-C_2\wedge H_3 + B_2 \wedge F_3) \wedge * F_5 \, .\end{aligned}$$ For a self-dual background 5-form flux $F_5 = * F_5$ (\[F5\]), this is equal to the second term of (\[IIBaction\]). Substituting the $AdS_5\times S^5$ background metric and the RR 5-form flux (\[F5\]) and writing out in explicit components, we obtain $$\begin{aligned} S_{\rm B} &=& -\frac{1}{2(2\pi)^7 \alpha'^4 g_s^2} \int d^4x dr_6 d\Omega_5 \; r_6^3 \left[ H_{0r_6\theta_1}^2 + \frac{1}{\sin^2\theta_2}H_{0 r_6 \theta_2}^2 \right] \nonumber \\ &&+\frac{1}{(2\pi)^7\alpha'^4} \int d^4x dr_6 d\Omega_5 \; B_{0r_6}F_{123}^{(3)} 2^4 \pi N_c (\alpha')^2\, .\end{aligned}$$ Here we used the explicit 5-form flux on the $S^5$, $F_5 = 2^4 \pi N_c \alpha'^2 d\Omega_5$ (\[F5\]). Together with the source action (\[source\]), the total equation of motion for the NSNS $B$-field is $$\begin{aligned} 0 &= & \frac{r_6^3}{(2\pi)^7\alpha'^4 g_s^2} \left[ {\partial}_{\theta_1} \left( \sin^4 \theta_1 \sin^3 \theta_2 H_{0r_6\theta_1} \right) + {\partial}_{\theta_2} \left( \sin^2 \theta_1 \sin^3 \theta_2 H_{0r_6\theta_2} \right) \right] \nonumber \\ & & + \frac{1}{(2\pi)^7 \alpha'^4} F_{123}^{(3)} \sin^4\theta_1 \sin^3\theta_2 \; 2^4 \pi N_c \alpha'^2 \nonumber \\ & & - \delta(\theta_1 - \pi/2) \delta (\theta_2-\theta_2^{(0)}) \frac{{{\bf{d}}}}{2\pi^2}\,. \label{nsnseq}\end{aligned}$$ We can recognize this as a 1+2-dimensional electromagnetism on a compact space spanned by $\theta_1$ and $\theta_2$. The first term is a total divergence, so the remaining terms should vanish when we perform an integration over the 2-dimensional space. This condition results in $$\begin{aligned} \frac{1}{(2\pi)^7 \alpha'^4} F_{123}^{(3)} \int_0^\pi\! d\theta_1 \; \sin^4\theta_1 \int_0^\pi\! d\theta_2 \; \sin^3\theta_2 \; (2^4 \pi N_c \alpha'^2) = \frac{{{\bf{d}}}}{ 2\pi^2}\, .\end{aligned}$$ Performing the integration and re-arranging, we obtain the constant RR 3-form flux $F_{123}^{(3)}$ as given in (\[f3back\]). This is the leading order effect of the backreaction of the D7-brane electric flux. Note that the supergravity equation of motion for the $F_3$ flux is trivially satisfied with this constant configuration. Interestingly, this result (\[f3back\]) is the same one obtained previously by solving the $F_3$ equation of motion with the smeared baryon vertices in Sec. \[sec3.1\]. Here we have not assumed the presence of the baryon vertices, but the supergravity equation of motion “knows” the presence for its consistency. 
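For the reader's convenience, we record explicitly the angular integration that leads from the zero-mode condition to (\[f3back\]). Using $\int_0^\pi \sin^4\theta_1\, d\theta_1 = \frac{3\pi}{8}$ and $\int_0^\pi \sin^3\theta_2\, d\theta_2 = \frac{4}{3}$, the integrated form of (\[nsnseq\]) becomes $$\begin{aligned}
\frac{2^4 \pi N_c \alpha'^2}{(2\pi)^7 \alpha'^4}\,\frac{\pi}{2}\, F_{123}^{(3)} = \frac{{{\bf{d}}}}{2\pi^2}\, ,\end{aligned}$$ whose solution is $F_{123}^{(3)} = \frac{16\pi^5\alpha'^2}{N_c}\cdot\frac{{{\bf{d}}}}{2\pi^2} = \frac{8\pi^3\alpha'^2{{\bf{d}}}}{N_c}$, precisely the constant flux (\[f3back\]).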
Dissolution of the Instanton and Color-Flavor Locking {#sec4}
=====================================================

In this section, we show that the backreacted supergravity flux (\[f3back\]) generates a nontrivial potential on the instanton moduli space (Sec. \[sec41\]). This provides a dynamical mechanism that fattens the instanton on the flavor D7-branes. We compute the potential explicitly in Sec. \[sec42\]. This is a dynamical color-flavor locking in holographic QCD, because the size of the instanton on the D7-branes is the vev of a squark of the supersymmetric QCD. Since the squarks are in the bi-fundamental representation of the color and the flavor symmetries, their condensation gives color-flavor locking. The squark condensation means that the theory favors the Higgs phase when the baryon chemical potential $\mu$ is larger than the quark/squark mass $m$. We study the patterns of the symmetry breaking in Sec. \[sec43\].

Additional Chern-Simons Term {#sec41}
----------------------------

#### {#section-6}

Using the backreacted supergravity solution (\[f3back\]), we obtain an additional Chern-Simons term induced on the D7-branes. The general formula for the Chern-Simons couplings on the D7-branes is $$\begin{aligned}
\mu_7 \;{\rm tr} \int \exp(2\pi\alpha' F + B_2)\wedge \sum_q C_q \, .\label{CSD7}\end{aligned}$$ Here the D7-brane RR charge is $\mu_7 = \frac{1}{(2\pi)^7 \alpha'^4}$, and the field strength $2\pi\alpha' F$ now also contains the non-Abelian instanton piece. Formally expanding (\[CSD7\]) out and performing integration by parts, for non-zero $F_{123}^{(3)}$ we obtain the explicit expression in components $$\begin{aligned}
S_{\rm CS} = \frac{1}{8 (2\pi)^4 \alpha'} \int d^4 x F_{123}^{(3)} \int d^4\xi \; {\rm tr} \left[A_0 F_{ij} F_{kl}\epsilon^{ijkl}\right]+\cdots\label{CS}\end{aligned}$$ Here $\cdots$ denotes the terms necessary to form a gauge-invariant CS 5-form $$\begin{aligned}
{\rm tr}\left[ AFF -\frac{1}{2} A^3 F + \frac{1}{10} A^5 \right]\end{aligned}$$ where the wedge product $\wedge$ is omitted. The second and the third terms in the CS action are irrelevant for our subsequent discussions. We substitute the constant $F_{123}^{(3)}$ from the linearized supergravity backreaction (\[f3back\]) to extract the relevant term from $S_{\rm CS}$: $$\begin{aligned}
\frac{\alpha'{{\bf{d}}}}{16\pi N_c} \int d^4 x \int d^4\xi \; {\rm tr} \left[A_0 F_{ij} F_{kl}\epsilon^{ijkl}\right].\label{ACSterm}\end{aligned}$$ This is the leading correction term due to the baryon density ${\bf d}$. Note that the factor $1/N_c$ in front of (\[ACSterm\]) shows that this is indeed a correction to the original D7-brane action.

![ A schematic picture of the charge conservation process. In the left figure, $N_c$ D3-branes are sitting inside the baryon vertex (a D5-brane wrapping $S^5$). From the D3-branes, $N_c$ units of RR 5-form flux emanate. The CS term on the D5-brane creates an electric charge on the worldvolume of the D5-brane [@WittenBaryon], and this generates an electric field on the spiky D7-brane which touches the D5-brane. In the right figure, we move one D3-brane outside the baryon vertex. This D3-brane becomes an instanton (shaded region on the D7-brane spike). The instanton is electrically charged, so the total electric flux going to the asymptotic infinity of the D7-brane worldvolume is conserved.
[]{data-label="figcharge"}](charge.eps "fig:"){width="70.00000%"}

This additional CS term has an important physical meaning. The essence here is quite similar to the generation of the baryon charge in the Sakai-Sugimoto model [@SaSu1], while the use of the instantons here is quite different from there ([*cf.*]{} footnote 6). As discussed in Sec. \[sec1\], we are studying the process of moving one D3-brane from the origin onto the worldvolume of the D7-branes. Once the single D3-brane goes outside the D5-brane wrapping the $S^5$, the RR 5-form flux penetrating the $S^5$ worldvolume of the D5-branes reduces by one unit. This 5-form was responsible, through the CS term on the D5-branes, for producing the electric charges on the D5-branes which are the end points of the fundamental strings. So, by this moving process, the total number of the fundamental strings decreases by a fraction $1/N_c$. Where, then, does the baryon charge go? The answer is the new CS term (\[CS\]). Once the D3-brane gets on the D7-branes, it induces an instanton charge. The instanton number for the single instanton is $$\begin{aligned}
{\rm tr} \int d^4\xi \; F_{ij} F_{kl}\epsilon^{ijkl} = 32\pi^2.\end{aligned}$$ For a small-size instanton, the CS term (\[CS\]) effectively becomes proportional to $$\begin{aligned}
\frac{2\pi\alpha' {{\bf{d}}}}{N_c}\int d^4x d^4\xi \; {\rm tr}A_t \; \delta^4(\xi) \, .\end{aligned}$$ (The coefficient follows from $\frac{1}{8(2\pi)^4\alpha'}\,F_{123}^{(3)}\times 32\pi^2 = \frac{2\pi\alpha'{{\bf{d}}}}{N_c}$.) This means that the instanton (which is the D3-brane dissolved into the D7-brane) carries the electric charge $(2\pi\alpha' {{\bf{d}}})/N_c$. Compared to this amount of charge, the original solution (\[KOsol\]) shows that the spike has an electric charge $2\pi \alpha' {{\bf{d}}}$. We can therefore conclude that pulling one of the $N_c$ D3-branes out decreases the baryon charge by a fraction $1/N_c$. This decrease is indeed offset by the instanton sourcing the electric field, so that the asymptotic expression for the electric field does not change. See Fig. \[figcharge\] for a schematic explanation of this charge conservation.

Potential on the Instanton Moduli Space {#sec42}
---------------------------------------

We have now collected all the pieces for computing the induced potential on the instanton moduli space, and we will show that it drives the instanton(s) into dissolution. Here we first consider the special case where the BPST instanton profile is centered at the origin: $$\begin{aligned}
{\rm tr} F_{ij} F_{kl}\epsilon^{ijkl} = \frac{192 \rho^4}{(\tilde{r}^2 + \rho^2)^4}\,.
\end{aligned}$$ Note that the BPST instanton solution is obtained in the conformally flat $\tilde{r}$ coordinate, not in the $r$ coordinate. Upon substituting into the additional CS term (\[ACSterm\]), we can extract the potential for the size $\rho$, $V_B(\rho)$, via the relation $S=-\int d^4x V(\rho)$ as $$\begin{aligned}
V_B(\rho) = -\frac{12 \rho^4 (2\pi\alpha'{{\bf{d}}})}{N_c} \int_{\frac{r_0}{2^{1/3}}}^\infty \!\! d\tilde{r} \; A_t(r) \frac{\tilde{r}^3}{(\tilde{r}^2 + \rho^2)^4} \, .\end{aligned}$$ Again, note that the argument of the electric potential $A_t$ is $r$, which is related to $\tilde{r}$ by (\[rtrrelation1\]). Integrating by parts (in the $\tilde{r}$ coordinate) and using (\[KOsol\]), we obtain
$$\begin{aligned}
V_B(\rho) &=& -\frac{2\pi\alpha'{{\bf{d}}}}{N_c} \int_0^\infty \! dr \; A'_t(r) \frac{\rho^4 (3\tilde{r}^2 + \rho^2)}{(\tilde{r}^2 + \rho^2)^3} \nonumber \\
& = &-\frac{2\pi\alpha'{{\bf{d}}}}{{\cal N}N_c} \int_0^\infty \! dr \; \frac{{{\bf{d}}}}{\sqrt{ r^6+r_0^6}} \frac{\rho^4 (3\tilde{r}^2(r) + \rho^2)}{(\tilde{r}^2(r) + \rho^2)^3} \, . \label{potrho}\end{aligned}$$

The potential (\[potrho\]) is indeed a monotonically decreasing function of $\rho$, when viewed together with the coordinate redefinition (\[rtrrelation1\]). This can be easily understood by noting the following three facts: (i) $A_t'(r)$ is a monotonically decreasing function of $r$. (ii) The last factor in the integrand of (\[potrho\]) is the instanton density function, which is peaked at $\tilde{r}=0$ and monotonically decreasing in $\tilde{r}$; the width of the function is given by $\rho$ and its integral is normalized (to the instanton number). (iii) The map (\[rtrrelation1\]) between $r$ and $\tilde{r}$ is one-to-one and monotonic.

The instanton modulus potential $V(\rho)$ (\[potrho\]) shows that the system dynamically favors the Higgs phase, $\rho\neq 0$. This is the color-flavor locking in the supersymmetric QCD. The potential $V_B(\rho)$ in (\[potrho\]) does not appear to be expressible in closed form; however, to get a qualitative understanding we can consider the following asymptotic values: $$\begin{aligned}
V_B(\rho=0) = 0\, , \quad V_B(\rho=\infty) = -\frac{2\pi\alpha'{{\bf{d}}}}{N_c} \int_0^\infty A_t' dr = -\frac{2\pi\alpha'{\bf d}\mu}{N_c}\, .\end{aligned}$$ Their difference, $$\begin{aligned}
V_B(\rho=0)-V_B(\rho=\infty) = \frac{2\pi\alpha'{{\bf{d}}}\mu}{N_c} \label{height}\end{aligned}$$ is consistent with the interpretation that we pull one quark per baryon out to infinity in the background chemical potential $\mu$. However, note that the constant $F_{123}^{(3)}$ is obtained only in the vicinity of $r=0$. So, our calculation is strictly valid only for small $\rho$, and the potential height (\[height\]) derived at large $\rho$ should not be regarded as reliable. We however expect that the qualitative physical result (\[height\]) is not modified significantly once the full $F_{123}^{(3)}$ at large radius is taken into account. For completeness, we include the plot of the one-instanton size modulus potential, normalized by the asymptotic value $V_B(\infty)$:

![The plot of $\frac{V_B(\rho)}{|V_B(\infty)|}$ versus $\frac{\rho}{r_0}$[]{data-label="figrunawayT=0"}](plot1.eps){width="45.00000%"}
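Since all overall constants cancel in the ratio $V_B(\rho)/|V_B(\infty)|$, the curve depends only on $\rho/r_0$. The following is a minimal numerical sketch (our own illustration, not code accompanying this work) of that ratio, obtained directly from (\[potrho\]) with $A_t'(r)\propto 1/\sqrt{r^6+r_0^6}$ as in (\[KOsol\]), and with $|V_B(\infty)|$ given by the same integral with the instanton factor replaced by 1:

```python
# Minimal numerical sketch of V_B(rho)/|V_B(infinity)| at T = 0, eq. (potrho).
# Assumes A_t'(r) proportional to 1/sqrt(r^6 + r_0^6), as in (KOsol); all
# overall constants cancel in the ratio.  Units are chosen such that r_0 = 1.
import numpy as np
from scipy.integrate import quad

r0 = 1.0

def rtilde(r):
    # Conformally flat coordinate (rtrrelation1), in the equivalent form
    # rtilde^3 = (r^3 + sqrt(r^6 + r0^6)) / 2, which is regular at r = 0.
    return ((r**3 + np.sqrt(r**6 + r0**6)) / 2.0) ** (1.0 / 3.0)

def integrand(r, rho):
    rt2 = rtilde(r) ** 2
    instanton_factor = rho**4 * (3.0 * rt2 + rho**2) / (rt2 + rho**2) ** 3
    return instanton_factor / np.sqrt(r**6 + r0**6)

# |V_B(infinity)|: the instanton factor tends to 1 as rho -> infinity.
norm, _ = quad(lambda r: 1.0 / np.sqrt(r**6 + r0**6), 0.0, np.inf)

for rho in (0.2, 0.5, 1.0, 2.0, 5.0, 20.0):
    num, _ = quad(integrand, 0.0, np.inf, args=(rho,), limit=200)
    print(f"rho/r0 = {rho:5.1f}   V_B/|V_B(inf)| = {-num / norm:+.4f}")
```

The output decreases monotonically from $0$ towards $-1$ as $\rho/r_0$ grows, reproducing the run-away behavior shown in Fig. \[figrunawayT=0\].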
In the analysis above, we assumed that the center of the BPST instanton is at the origin. However, we can consider a generic situation where the center of the instanton is not at the origin $r=0$. Suppose that the center is at some distance ${\bf X}$ from the origin. It is clear that an expression similar to (\[potrho\]), again valid only at small $r$, would give the qualitative result that the potential $V_B(\rho,{\bf X})$ reaches its minimum at $(\rho,{\bf X})=(\infty, {\rm finite})$ or $(\rho,{\bf X})=({\rm finite}, \infty)$. The latter is in particular an extreme point in the moduli space in the Coulomb phase. There $\rho$ can vanish, and in that case the color-flavor locking does not occur. However, for ${\bf X}\neq 0$, the original gauge group $U(N_c)$ is explicitly broken, as it is in the Coulomb phase.

Instead of substituting the BPST instanton, we can substitute multi-instanton solutions. Suppose we treat $\tilde{N}_c$ instantons. We need to require $\tilde{N}_c \ll N_c$, because in the perturbative treatment of the backreaction the original $AdS_5\times S^5$ background should not be drastically modified. Generically, the instantons are separated from each other.[^11] Our analysis for the single BPST instanton holds also for the multi-instanton case. It then follows that all the size moduli of the instantons are destabilized.

What is Locked? {#sec43}
---------------

Naively speaking, the instanton size is given by the condensation of the squark field, which comes from a fluctuation of a string connecting the D3-branes and the D7-branes. This string transforms in the fundamental representation of the gauge group $U(\tilde{N}_c)$ and in the anti-fundamental representation of the flavor group[^12] $U(N_f)$. Here $U(\tilde{N}_c)$ is a part of the total gauge group $U(N_c)$, [*i.e.*]{} $\tilde{N}_c \leq N_c$; we restrict to this partial color symmetry because, for convenience, we let only $\tilde{N}_c$ instantons dissolve into the worldvolume of the D7-branes. By the squark condensation, the color and the flavor symmetries $U(\tilde{N}_c)\times U(N_f)$ are apparently broken. We shall now determine the symmetry breaking pattern.

The theory on the instantonic D3-branes is constructed from their ADHM data [@Douglas:1995bn]. The ADHM data consists of four $U(\tilde{N}_c)$ adjoint scalar fields, which are combined into two $\tilde{N}_c\times \tilde{N}_c$ complex scalar fields $B_1$ and $B_2$, and the squark fields $I^\dagger$ and $J$, which are complex $N_f\times \tilde{N}_c$ matrix scalar fields. The squark fields transform as $$\begin{aligned}
I^\dagger \mapsto U I^\dagger U_0^{-1}, \quad J \mapsto U J U_0^{-1},
\end{aligned}$$ where $U\in U(N_f)$ and $U_0 \in U(\tilde{N}_c)$. So these fields are in the bi-fundamental representation.

Let us consider ’t Hooft instantons. Then $B_1$ and $B_2$ are diagonal matrices, and the ADHM equations, which are nothing but the BPS equations for the theory on the D3-branes, read $$\begin{aligned}
I I^\dagger = J^\dagger J \, , \quad I J =0 \, .\end{aligned}$$ These of course allow the trivial solution $I^\dagger = J =0$, which corresponds to the zero-size instanton.

First we consider the two-flavor case $N_f=2$. The flavor symmetry is $U(2)$, which is the vector part of the chiral symmetry (the chiral symmetry is explicitly broken by the quark/squark mass in our case). The ’t Hooft instantons whose centers are located at the origin are represented by the solution $$\begin{aligned}
I^\dagger = \left( \begin{array}{cccc} \rho_1 & \rho_2 & \cdots & \rho_{\tilde{N}_c}\\ 0 & 0 & \cdots & 0 \end{array} \right), \quad J = \left( \begin{array}{cccc} 0 & 0 & \cdots & 0 \\ \rho_1 & \rho_2 & \cdots & \rho_{\tilde{N}_c} \end{array} \right). \label{ij}\end{aligned}$$ Here all the $\rho_i$’s are real parameters. These correspond to the sizes of the individual instantons. With this at hand, we can compute the unbroken symmetry. For simplicity, we consider the case of a single instanton, $\tilde{N}_c=1$: $$\begin{aligned}
I^\dagger = \rho \left( \begin{array}{cccc} 1\\ 0 \end{array} \right), \quad J = \rho \left( \begin{array}{cccc} 0 \\ 1 \end{array} \right).\end{aligned}$$ Here $\rho$ is a nonzero constant (for which we computed the potential).
In this case, the transformation which leaves $I^\dagger$ intact is $$\begin{aligned}
U = \left( \begin{array}{cc} e^{i\alpha_1} & 0 \\ 0 & e^{i\alpha_2} \end{array} \right) , \quad U_0 = e^{i\alpha_1}, \quad \alpha_i \in {\mathbb R} \, .\end{aligned}$$ On the other hand, the symmetry which leaves $J$ intact is $$\begin{aligned}
U = \left( \begin{array}{cc} e^{i\alpha_1} & 0 \\ 0 & e^{i\alpha_2} \end{array} \right) , \quad U_0 = e^{i\alpha_2}, \quad \alpha_i \in {\mathbb R}\, .\end{aligned}$$ Therefore, we need to require $\alpha_1=\alpha_2$, and we conclude $$\begin{aligned}
U(1)_{\rm color} \times U(2)_{\rm flavor} \to U(1)_{\rm CFL} \, .\end{aligned}$$ The global part of $U(1)_{\rm color}$ is locked with the diagonal part of $U(2)_{\rm flavor}$, which is the baryonic symmetry $U(1)_B$, and the local part of $U(1)_{\rm color}$ is broken.

This kind of color-flavor locking can be found for general $N_f$. In the case of $\tilde{N}_c=1$, the squark condensates are given by $$\begin{aligned}
I^\dagger = \rho \left( \begin{array}{cccc} 1\\ 0 \\ 0\\ \vdots \\ 0 \end{array} \right), \quad J = \rho \left( \begin{array}{cccc} 0 \\ 1 \\ 0 \\ \vdots\\ 0 \end{array} \right).\end{aligned}$$ A similar analysis shows $$\begin{aligned}
U(1)_{\rm color} \times U(N_f)_{\rm flavor} \to U(1)_{\rm CFL} \times U(N_f-2)_{\rm flavor}\, .\end{aligned}$$ $U(1)_{\rm CFL}$ is a global symmetry which locks a part of the flavor symmetry and the gauge symmetry. Note that for $N_f>2$, this $U(1)_{\rm CFL}$ need not have anything to do with the baryonic symmetry $U(1)_{B}$, since the action of this $U(1)_{\rm CFL}$ can be chosen as $$\begin{aligned}
U = \mbox{diag}(e^{i\alpha}, e^{i\alpha}, e^{-2i\alpha/N_f}, \cdots, e^{-2i\alpha/N_f}), \quad U_0 = e^{i\alpha}, \quad \alpha\in {\mathbb R} \, .\end{aligned}$$

For generic $\tilde{N}_c$, we expect that all the size moduli are driven to have nonzero values (for example, for $N_f=2$ we have (\[ij\])). So the symmetry is broken as $$\begin{aligned}
U(\tilde{N}_c)_{\rm color} \times U(N_f)_{\rm flavor} \to U(1)_{\rm CFL} \times U(N_f-2)_{\rm flavor}\, .\end{aligned}$$ The locking is quite similar to the case of $\tilde{N}_c=1$. Note that we restrict ourselves to the case of the ’t Hooft instantons. Since the ’t Hooft instantons do not cover all of the moduli space of the instantons, there remains a possibility that the unbroken symmetry, in particular the part concerning the gauge symmetry, may be enhanced.

In the D-brane analysis, we used the standard treatment of the Coulomb branch in the AdS/CFT correspondence and treated one D3-brane as a probe by separating it from the rest by hand. This separation procedure is somewhat artificial, but it is required so that the geometry is not drastically modified by a possible backreaction; this is a limitation of our analysis.
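As a small self-contained illustration of this locking pattern (a check of ours, not part of the analysis above), the following snippet verifies that the $N_f=2$, single-instanton ansatz satisfies the ADHM/BPS constraints, and that only the simultaneous phase rotation $U={\rm diag}(e^{i\alpha},e^{i\alpha})$, $U_0=e^{i\alpha}$ leaves the squark vevs invariant:

```python
# Check of the N_f = 2, Ntilde_c = 1 't Hooft ansatz:
#   I^dagger = rho * (1, 0)^T,  J = rho * (0, 1)^T,
# transforming as I^dagger -> U I^dagger U_0^{-1}, J -> U J U_0^{-1}.
import numpy as np

rho = 0.7
Idag = rho * np.array([[1.0], [0.0]], dtype=complex)   # N_f x Ntilde_c
J    = rho * np.array([[0.0], [1.0]], dtype=complex)
I = Idag.conj().T

# ADHM/BPS constraints: I I^dagger = J^dagger J and I J = 0.
assert np.allclose(I @ Idag, J.conj().T @ J)
assert np.allclose(I @ J, 0.0)

def transform(a1, a2, b):
    """U = diag(e^{i a1}, e^{i a2}) in U(2)_flavor, U_0 = e^{i b} in U(1)_color."""
    U = np.diag([np.exp(1j * a1), np.exp(1j * a2)])
    return U @ Idag * np.exp(-1j * b), U @ J * np.exp(-1j * b)

# The diagonal combination alpha_1 = alpha_2 = beta leaves the vevs invariant...
Id2, J2 = transform(0.3, 0.3, 0.3)
assert np.allclose(Id2, Idag) and np.allclose(J2, J)

# ...while a generic element of U(1)_color x U(2)_flavor does not.
Id3, J3 = transform(0.3, -0.1, 0.3)
assert not (np.allclose(Id3, Idag) and np.allclose(J3, J))
print("ADHM constraints hold; only the diagonal U(1)_CFL is left unbroken.")
```

An analogous check with $N_f>2$ component vectors can be used to exhibit the residual $U(N_f-2)_{\rm flavor}$ factor found above.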
Extension to Finite Temperature System {#sec5}
======================================

In this section we explore the system at finite temperature and baryon density. Since the boundary geometry of our $AdS_5\times S^5$ is $R^{1,3}$, the geometry develops a horizon inside $AdS$ at any finite temperature. ${{\cal N}}=4$ SYM theory coupled to ${{\cal N}}=2$ hypermultiplets in the fundamental representation at finite temperature has been studied extensively (see [@Erdmenger:2007cm] for a review). As mentioned in Sec. \[sec1\], there are two brane embeddings in the black hole background: the Minkowski embedding and the black hole embedding (Fig. \[figembed\]). The former describes the configuration of D-branes staying outside of the horizon everywhere, and the latter describes the configuration of D-branes falling into the black hole horizon. The equations of motion and the free energy determine which configuration is realized at given quark mass and baryon density normalized by the temperature: $m/T$ and $2\pi\alpha'{\bf d}/T^3$.

In the case of zero baryon density and no instanton, the Minkowski embedding covers only the higher $m/T$ region while the black hole embedding covers only the lower $m/T$ region [@Mateos:2006nu]. These two are connected by a first order phase transition at a certain critical value $(m/T)_{\rm crit.}$. This phase transition is interpreted in the field theory as meson melting: the spectrum is discrete and the mesons are stable in the Minkowski embedding, while the spectrum is continuous and the mesons are unstable in the black hole embedding. In the case of finite baryon density, the Minkowski embedding is no longer physically allowed and the black hole embedding covers the whole temperature region.

When an instanton is excited on the D7-branes at zero baryon density, the potential for the instanton size moduli takes its minimum at the origin, $\rho=0$, for the Minkowski embeddings and at some finite value, $\rho=\rho_{\rm min}>0$, for the black hole embeddings [@Apreda1]. This means the system is in the Higgs phase above the critical temperature $(m/T)_{\rm crit.}$. Therefore, the system is already in a CFL phase with the squark condensation in the melted meson phase at finite $T$.

The purpose of this section is to study the case with both finite baryon density and the instanton configuration. As we saw in the zero temperature case, the backreaction of the baryon density excites an additional CS term which induces the CFL. We will see how this CS term affects the instanton potential in the finite temperature system.

The D3D7 System at Finite Baryon Density and Temperature {#sec51}
---------------------------------------------------------

The background geometry dual to the finite temperature system is an AdS black hole. In Poincaré-like coordinates, the metric, RR 4-form and the dilaton have the following expressions, in the conventions of Ref. [@HoloBaryon]: $$\begin{aligned}
ds^2&=&{1\over2}{u^2\over R^2} \left(-{f^2\over {\widetilde}{f}}dt^2+{\widetilde}{f} dx_3^2\right) +{R^2\over u^2}\left(du^2+u^2 ds_5^2\right) \cr C&=&{1\over R^4}\left( {u^2\over2}+ {u^4_0\over 2 u^2} \right)^2 d^4x\, ,~~~~~~e^{\Phi}=e^{\Phi_0}\, ,\end{aligned}$$ where $$\begin{aligned}
f=1-{u_0^4\over u^4}\,,~~~{\widetilde}{f}=1+{u_0^4\over u^4}\,,\end{aligned}$$ with $u_0$ the location of the horizon. $u$ and $r_6$ in Sec.
3 are related to each other by the coordinate transformation $$\begin{aligned} u^2=r_6^2+\sqrt{r_6^4-u^4_0} \, .\end{aligned}$$ The regularity of the Euclidean section of this geometry relates the horizon radius $u_0$ and the Hawking temperature, which is interpreted as a temperature of the boundary gauge theory of our concern, as $$\begin{aligned} T={u_{0}\over \pi R^2}\, .\end{aligned}$$ As in the zero temperature case, we foliate $S^5$ with $S^3$ so that $SO(3)$ R-symmetry is manifest: $$\begin{aligned} du^2+u^2ds_5^2 &=&du^2+u^2(d\theta^2+\sin^2{\theta}ds^2_3+\cos^2\theta d\phi^2)\, ,\end{aligned}$$ where $\theta,\phi$ and $\theta_1, \theta_2$ in (\[Deftheta12\]) are related through the following equations $$\begin{aligned} \sin\theta=\sin\theta_1\sin\theta_2\, , ~~~~~\tan\phi={\cos\theta_1\over \sin\theta_1\cos\theta_2}\, .\end{aligned}$$ The probe D7-brane worldvolume is spanned by $(t,x^i,u,S^3)$, and is localized in $\phi$ direction, which we can use rotational symmetry to set $\phi=0$ (corresponding to $\theta_1=\frac{\pi}{2}$). In the new coordinates, the embedding is described by $\chi\equiv \cos\theta$ as a function of $u$. The $U_b(1)$ gauge field $A$ dual to the baryon current on the boundary has only non-zero time component $$\begin{aligned} A=A_t(u)dt.\end{aligned}$$ With these ansätze, the DBI action of the D7-branes is given by $$\begin{aligned} \frac{S_{\rm DBI}^{\rm D7}}{V_4} &=&\int\! du \; {{\mathcal L}}\cr &=& -{{\cal N}}\int\! du \; {u^3f\tilde{f}(1-\chi^2)\over4} \sqrt{1-\chi^2+u^2\dot{\chi}^2-(2\pi\ap \dot{A_t})^2{2\tilde{f}(1-\chi^2)\over f^2}} \end{aligned}$$ where the dot denotes ${d\over du}$ and ${\cal N}$ is as defined below (\[d7action\]). Since the action does not contain $A$ explicitly, the momentum conjugate of $A$ is constant: $$\begin{aligned} {\delta {{\mathcal L}}\over \delta(2\pi\ap \dot{A}_t)}={{\cal N}}{u^3\over 2}{\tilde{f}^2\over f} {(1-\chi^2)^2 (2\pi\ap\dot{A}_t)\over \sqrt{1-\chi^2+u^2\dot{\chi}^2-(2\pi\ap \dot{A_t})^2{2\tilde{f}(1-\chi^2)\over f^2}} } \equiv {\bf D}\, , \end{aligned}$$ or equivalently, $$\begin{aligned} 2\pi\ap\dot{A}= 2\left({{\bf D}\over {{\cal N}}}\right) {f\sqrt{1-\chi^2+u^2\dot{\chi}^2} \over \sqrt{\tilde{f}(1-\chi^2)} \sqrt{u^6\tilde{f}^3(1-\chi^2)^3+8({\bf D}/{{\cal N}})^2 }}\, . \label{ei-dotto}\end{aligned}$$ To derive the equation of motion for $\chi$, we Legendre transform the action with respect to ${\bf D}$ so that $\dot{A}_t$ can be eliminated from the action: $$\begin{aligned} \tilde{{{\mathcal L}}}&=&{{\mathcal L}}-{\delta L\over \delta (2\pi\ap \dot{A}_t)}(2\pi\ap \dot{A}_t)\cr &=&-{{{\cal N}}\over 4}{f\over \sqrt{\tilde{f}}\sqrt{1-\chi^2}} \sqrt{1-\chi^2+u^2\dot{\chi}^2} \sqrt{u^6\tilde{f}^3(1-\chi^2)^3+8({\bf D}/{{\cal N}})^2}\, .\end{aligned}$$ Then the $\chi$ equation is $$\begin{aligned} &&{\partial}_{u} \left( {u^5f\tilde{f}(1-\chi^2)\dot{\chi}\over \sqrt{1-\chi^2+u^2\dot{\chi}^2}}\sqrt{1+{8({\bf D}/{{\cal N}})^2\over u^6\tilde{f}^3(1-\chi^2)^3}} \right)\cr && \quad =-{u^3f\tilde{f}\chi\over \sqrt{1-\chi^2+u^2\dot{\chi}^2}} \sqrt{1+{8({\bf D}/{{\cal N}})^2\over u^6\tilde{f}^3(1-\chi^2)^3}}\cr &&\quad\quad \times\left( 3(1-\chi^2)+2u^2\dot{\chi}^2-24\left({{\bf D}\over {{\cal N}}}\right)^2{1-\chi^2+u^2\dot{\chi}^2 \over u^6\tilde{f}^3(1-\chi^2)^3+8({\bf D/{{\cal N}}})^2} \right).\end{aligned}$$ As studied in Ref. [@Frolov:2006tc], the boundary condition of probe branes at the horizon is determined by the regularity of the induced curvature: $\dot{\chi}|_{u=u_0}=0$. 
With this boundary condition, the solution of this equation of motion near the horizon $u\sim u_0$ is then given by $$\begin{aligned} \chi=\chi_0 -{3\chi_0(1-\chi_0^2)^3\over 4(({\bf D}^2/{{\cal N}}^2u_0^3)+1-\chi^6_0-3\chi_0^2(1-\chi^2_0))}\left(\!{u\over u_0}\!-\!1\!\right)^2\! +{{\mathcal O}}\left(\!\left({u\over u_0}\!-\!1\right)^3\right)\, .\end{aligned}$$ Therefore, the embedding can be approximated as $$\begin{aligned} \chi=\chi_0,~~~\dot{\chi}=0, \label{flat-approx}\end{aligned}$$ for ${u\over u_0}-1$ smaller than ${2\over \sqrt{3}(1-\chi_0^2)^{3/2}} {{\bf D}\over {{\cal N}}u_0^{3/2}}$ with large ${\bf D}$. We again consider the instanton excitations as a perturbation in this background field. At the leading order of ${\bf D}/N_c$, the instanton couples to RR 4-form and the induced metric, and the relevant terms are: $$\begin{aligned} S_{\rm DBI}^{\rm D7}(FF)&=&-N_f{\cal T}_{\rm D7}\int d^4x \int {u^4\over 4R^4}f{\widetilde}{f} \cdot{(2\pi \alpha')^2\over 8}{{\rm Tr}}[F\wedge F] \, , \label{therm-poten-DBI}\end{aligned}$$ $$\begin{aligned} S_{\rm CS}^{\rm D7}(FF)=N_f{\cal T}_{\rm D7}\int d^4x\int {u^4\over 4R^4}\tilde{f}^2\cdot {(2\pi \alpha')^2\over 8} {{\rm Tr}}[F\wedge F] \,. \label{therm-poten-CS}\end{aligned}$$ The instanton $F\wedge F$ lives on an effective four dimensional space whose metric is $$\begin{aligned} {\widetilde}{G}^4_{ij}&=&\left({1\over2}\sqrt{f{\widetilde}{f}}\right) \left(\left({1-\chi^2+u^2 \dot{\chi}^2 \over 1-\chi^2}+{(2\pi \alpha')^2\dot{A}^2\over -{1\over2}{f^2\over {\widetilde}{f}}}\right)du^2 +u^2(1-\chi^2)ds_3^2\right) \cr &=&{\sqrt{f{\widetilde}{f}} \over2} \left( \left(1+{u^2 \dot{\chi}^2 \over 1-\chi^2}\right) {u^6\tilde{f}^3(1-\chi^2)^3 \over u^6\tilde{f}^3(1-\chi^2)^3+8({\bf D}/{{\cal N}})^2}du^2 +u^2(1-\chi^2)ds_3^2\right)\,.\nonumber\\ \label{eff-4-met}\end{aligned}$$ Note that ${\bf D}$ dependence appears only through this metric. The difference between the DBI term and the CS term is the factors of $f$ and $\tilde{f}$ in the integrands. Therefore, the instanton potential vanishes as long as the temperature is zero, even in the presence of finite baryon density as we have seen. CS Term from Backreaction {#CSback} ------------------------- \[sec52\] The next leading order in ${\bf D}/N_c$ comes from the backreaction of the gauge field to RR fields. Similar calculations show that it excites the same constant RR 3-form field near the horizon as that of the zero temperature case with ${\bf d}$ replaced by ${\bf D}$, $$\begin{aligned} F^{(3)}_{123}={8\pi^3\ap^2{\bf D}\over N_c}\, .\end{aligned}$$ Therefore it induces the same CS term $$\begin{aligned} S_{\rm CS}^{\rm D7}({\bf{D}})={\ap{\bf D}\over 16\pi N_c}\int d^4x \int \tr\left[A\wedge F\wedge F\right] . \label{backre-poten-CS}\end{aligned}$$ The remaining issue is to obtain the explicit instanton configuration $\tr\left[ F\wedge F\right]$ on the effective metric ${\widetilde}{G}_{ij}^{(4)}$. As in the zero temperature case, we would like to obtain a conformally flat coordinate ${\widetilde}{u}$ satisfying: $$\begin{aligned} {\widetilde}{G}_{ij}^{(4)}=S({\widetilde}{u})^2(d{\widetilde}{u}^2+{\widetilde}{u}^2ds_3) \, . 
\label{conf-flat-met}\end{aligned}$$ For the approximate embedding near the horizon (\[flat-approx\]), the conformally flat metric (\[conf-flat-met\]) is obtained by $$\begin{aligned} {u^2(1-\chi_0^2)\sqrt{\tilde{f}}\over \sqrt{u^6\tilde{f}^3(1-\chi^2_0)^3+8({\bf D}/{{\cal N}})^2}}du= {d\tilde{u}\over \tilde{u}}\, .\end{aligned}$$ With this conformally flat coordinate, the BPST instanton is given by $$\begin{aligned} \tr\left[ F\wedge F\right]={192\rho^4\over ({\widetilde}{u}^2+\rho^2)^4}d^4{\widetilde}{\xi}\, . \end{aligned}$$ As we have seen, there are three terms contributing the instanton potential for the finite temperature case: the DBI and the CS term for thermal effect, which we denote $V_T$, and the CS term from the backreaction, which we denoted earlier as $V_B$. From (\[therm-poten-DBI\]) and (\[therm-poten-CS\]), the thermal potential is $$\begin{aligned} V_T(\rho)&=&-(S_{\rm DBI}^{\rm D7}(FF)+S_{\rm CS}^{\rm D7}(FF))/V_4 \cr &=&-{{{\cal N}}\over 4R^4} \int \! d{\widetilde}{u} \; u^4 {\widetilde}{f}({\widetilde}{f}-f) \cdot{(2\pi \alpha')^2\over 8} {192\rho^4 \tilde{u}^3\over ({\widetilde}{u}^2+\rho^2)^4} \, . \label{thermal-pot-fini-T}\end{aligned}$$ This potential has a minimum at finite $\rho$ since $V_T(0)=V_T(\infty)=0$. On the other hand, from (\[backre-poten-CS\]), the potential from the backreaction is $$\begin{aligned} V_B(\rho)&=&-S_{\rm CS}^{\rm D7}({\bf D})/V_4 \cr &=&-{{\bf D}^2\over N_c{{\cal N}}} \int_{u=u_0}du { 2f\over \sqrt{\tilde{f}} \sqrt{u^6\tilde{f}^3(1-\chi_0^2)^3+8({\bf D}/{{\cal N}})^2 }} {\rho^4(3{\widetilde}{u}^2+\rho^2)\over({\widetilde}{u}^2+\rho^2)^3} \, . \label{back-reac-pot-fini-T}\end{aligned}$$ The approximation we used to obtain the integrands breaks down for large $u$. However, the following two features may still hold even beyond the approximation: $V_B(0)=0$ and $V_B(\infty)=-{(2\pi\ap){\bf D}\mu\over N_c}$. The first one, $V_B(0)=0$, comes from the fact that the integration of the instanton term, the last term in (\[back-reac-pot-fini-T\]), is zero for $\rho=0$. Therefore, independent of the form of $F_3$ and $\dot{A}_t$, the integration gives zero. The second one, $V_B(\infty)=-{(2\pi\ap){\bf D}\mu\over N_c}$, comes from the physical reason explained in Section 4.2 for the zero temperature case. A numerical analysis suggests that $V_B$ monotonically decreases from zero to $-{(2\pi\ap){\bf D}\mu\over N_c}$. The shape of the total potential $V=V_T+V_B$ then depends on the ratio between them, which is characterized by $$\begin{aligned} {V_B\over V_T} \sim {g_s\over 12\pi} {1\over u_0^4} \left({{\bf D}\over {{\cal N}}}\right)^{{4\over3}}.\end{aligned}$$ Physically, this suggests that the thermal effect dominates when the temperature is high (large $u_0$), while the backreaction effect dominates when the baryon density is high (large ${\bf D}$).[^13] Note that $\left({{\bf D}/ {{\cal N}}}\right)$ can be large up to the order of $(N_c/N_f)$ where the probe flavor brane description breaks down. Therefore, the ratio $(V_B/V_T)$ may become large despite the fact that it is a positive power of $g_s$. Recalling that $V_T$ has a local stable minimum and $V_B$ is a run-away type potential, we conclude that the potential can have three possible behaviors depending on the ratio between ${\bf D}$ and $u_0$. When $(({\bf D}/{{\cal N}})^{4/3}/u_0^4)$ is very small, $V_T$ dominates the potential and it has a local stable minimum. The size of the instanton in this case is about the order of the horizon scale. 
Since observables which have less energy than the temperature have no meaning at finite temperature, the finiteness of the instanton size may be interpreted as a thermal effect. As $(({\bf D}/{{\cal N}})^{4/3}/u_0^4)$ increases, the local minimum becomes metastable and the instanton size $\rho$ eventually runs away to infinity. As $(({\bf D}/{{\cal N}})^{4/3}/u_0^4)$ increases further, $V_B$ dominates the potential and the local minimum disappears. These features are shown in Figure \[finiteTempDesdfig\].

![The plot of $\frac{V(\rho)}{|V(\infty)|}$ versus $\frac{\rho}{r_0}$ for $\chi_0={\sqrt{3}\over2}$. The three lines, from bottom to top, correspond to $\frac{g_s^{3/4}{\bf D}}{{{\cal N}}u_0^3}=$0.4, 0.8, and 10 respectively. We can see that when the baryon density is small compared to the temperature, the thermal potential $V_T$ dominates and the potential has a locally stable minimum. As the baryon density is increased, the relative contribution of $V_B$ becomes larger and eventually the potential becomes of the run-away type. []{data-label="finiteTempDesdfig"}](plot2.eps){width="45.00000%"}

Thermal quantities such as the derivative of the free energy may change discontinuously at a critical value of $(({\bf D}/{{\cal N}})^{4/3}/u_0^4)$. Therefore, this shows a phase transition within the CFL phase. For larger temperature, we have a CFL phase with a finite-size instanton, while for larger baryon density, we have another CFL phase in which the size of the instanton is very large. A schematic picture of the phase diagram is shown in Fig. \[figphase2\]. As in the case of $T=0$, our analysis is valid only in the small-$u$ region and cannot say anything precise about the potential at large $u$.

From the boundary theory point of view, the stability at higher temperature may be understood as coming from the thermal masses of the scalar fields, and the instability at higher density as coming from the tachyonic masses of the scalar fields induced by the chemical potential. Since the supersymmetry is completely broken, the potential might be lifted up by cubic and higher terms and the vevs of the squarks may take finite values. Of course, these expectations are based on a weak coupling analysis of the gauge theory, and the strong coupling dynamics might change the picture. We do not go into detail on this point in this paper.

Discussions {#sec6}
===========

In this paper we have made some preliminary steps towards a holographic model of the color-flavor locking phase; we end with a list of interesting future directions which seem worth exploring.

The phase diagram (Fig. \[figphase2\]) is obtained from the total potential $V_T + V_B$ for the instanton size modulus, but the potential $V_B$ is valid only in a restricted region of $r$, as shown in Sec. \[sec323\]. So, it is important to compute the backreaction valid in the whole region of $r$, in order to explore the phase diagram further.
In particular, for $T=0$ we have shown that there is an instability along the direction of the squark VEV in the melted meson phase. This means that the critical chemical potential dividing the meson and melted meson phases may take a different value, smaller than $\mu=m$. Our method of treating $\tilde{N}_c$ D3-branes among the $N_c$ of them separately cannot reach the true value of the critical chemical potential in the full phase diagram, and this deserves further study. It is possible that there is no vacuum at all, if the potential valid for all $r$ turns out to be of the run-away type. See [@Yamada:2006rx] for a related discussion of the R-charge chemical potential.

A related issue is a possible distinction between the two CFL Higgs phases. We have two CFL phases: one with finite instanton size $\rho$ and the other with $\rho=\infty$. The former is realized mainly by the thermal potential for $\rho$, while the latter is realized when the baryon density dominates. The symmetry breaking patterns look similar to each other. However, we expect that, once the repulsion among electrically charged instantons is included, the remaining symmetries may differ. In addition, the physical solitonic spectra in these CFL vacua may differ from each other.

It would be interesting to study vortex strings in these vacua. The vortex strings in the CFL phase of QCD play important roles in various physical contexts (see Ref. [@Iida:2002ev] for a partial list of related papers), and the D-brane techniques for the CFL phase studied in this paper may be helpful in revealing the properties of those vortex strings. Since the vortex strings are inside the D3-branes, which are instantons on the D7-branes, this suggests that “vortices inside instantons” are possible. This is intriguing in its own right in soliton physics.

It was described in Ref. [@Schafer:1998ef] that in an idealized situation the CFL phase of QCD may be continuously connected to the hadron phase, giving a continuous deformation of the excitation spectrum, named “quark-hadron continuity”. In our case, the dynamically favored CFL phase is in the melted meson phase, so the fluctuation spectrum is continuous, which means that the spectral “continuity” does not make much sense. However, in our ${\cal N}=4$ YM theory coupled to the ${\cal N}=2$ quark hypermultiplet, it is known that the meson phase is continuously connected to a Higgs phase [@Guralnik:2004ve]. This marginal deformation does not cost any energy, and the baryon number density is kept zero. The instanton size modulus is a free parameter (that is, the squark VEV is a flat direction of the theory). In this deformation, it was shown in Ref. [@Erdmenger:2005bj] that the discrete fluctuation spectrum is smoothly deformed. See Fig. 1 of Ref. [@Erdmenger:2005bj]. This phenomenon is analogous to the spectral quark-hadron continuity.

It is well known that the color-flavor locking phase in QCD closely resembles the locking between spin and orbital symmetries found in the so-called “B-phase” of superfluid Helium 3; the setup we consider here therefore seems directly applicable to realizing this in string theory. One can study various thermodynamical properties of such a phase and also consider topological defects, e.g. vortices, in it. Some interesting work relating the D3D7 system to Fermi liquids can be found in Refs. [@FermiLiquid1].

Finally, it would be interesting to study a possible universality of the CFL at finite baryon density among holographic models. In the D4/D6 system considered in Ref.
[@Kruczenski:2003uq], the dual field theory becomes effectively a pure bosonic Yang-Mills theory at low energy [@Witten:1998zw]. The phase structure of this system at finite temperature and baryon density was shown to have universal properties in Ref. [@Matsuura:2007zx]. Therefore, it is expected that when the baryon number density increases, the system becomes unstable and some of the D4-branes would be pulled onto the D6 branes. In this case, the squarks condensation corresponds to an expansion of monopoles on the D6-branes, instead of the instantons. As mentioned in Sec. \[sec3.1\], in the deconfinement phase, the baryon vertex is replaced by a flux while there is no probe brane description[@Seo:2008qc]. On the other hand, in a confining phase, the baryon vertex does have a probe brane description, and the discussion in Sec. \[sec3.1\] at zero temperature does not apply to the case. Therefore, it would be interesting to investigate the possibility of CFL in a confinement phase. We would like to thank Johanna Erdmenger, Aki Hashimoto, Deog-Ki Hong, Elias Kiritsis, Shin Nakamura, Hirosi Ooguri, Shigeki Sugimoto and Seiji Terashima for discussions. We would also like to thank Rob Myers and David Mateos for giving many useful comments on the draft. K.H. thanks Kavli Institute for Theoretical Physics at UCSB for providing an ideal environment for discussions, and thanks Aki Hashimoto for kind hospitality to support his visit to the physics department at University of Wisconsin. He also thanks the Yukawa Institute for Theoretical Physics at Kyoto University, at which this topic was discussed during the workshop YITP-W-09-04 on “Development of Quantum Field Theory and String Theory.” HYC is supported in part by NSF CAREER Award No. PHY-0348093, DOE grant DE-FG-02-95ER40896, a Research Innovation Award and a Cottrell Scholar Award from Research Corporation, and a Vilas Associate Award from the University of Wisconsin. KH. is partly supported by the Japan Ministry of Education, Culture, Sports, Science and Technology. The work of SM is supported in part by National Science Foundation under grant No. PHY05-51164 and Japan Society for the Promotion of Science. Check of Consistency for the Linearized Perturbation {#sec323} ==================================================== To complete the analysis of Sec. \[sec3.2\], we shall now check if this can be regarded as a [*small*]{} backreaction, so that our perturbative treatment for solving the equations of motion of the supergravity is guaranteed. The second term of (\[IIBaction\]) suggests that the nonzero $F_3$ (\[f3back\]) will again backreact the $F_5$ flux. We examine that this backreaction does not spoil the original flux configuration (\[F5\]) too much. To this end, we compare the second term of (\[IIBaction\]) with the $F_5$ kinetic term $$\begin{aligned} \frac{-1}{8\kappa_{10}^2} \int \! d^{10}x \sqrt{-g_{10}} |F_5|^2. \label{f5kin}\end{aligned}$$ We are only interested in order of magnitudes. Solving (\[nsnseq\]), we obtain $$\begin{aligned} B_{0r_6} \sim r_6^{-3} g_s^2 \alpha'^4 {{\bf{d}}}\, .\end{aligned}$$ Using this and (\[f3back\]) (\[F5\]), we evaluate the second term of (\[IIBaction\]) as $$\begin{aligned} \frac{1}{4\kappa_{10}^2} \int F_5 \wedge B_2 \wedge F_3 \sim \int d^4x dr_6 d\Omega_5 \; r_6^{-3} g_s^2 \alpha'^4 {{\bf{d}}}^2 \, . \label{compare1}\end{aligned}$$ On the other hand, the $F_5$ kinetic term (\[f5kin\]) with the flux solution (\[F5\]) gives $$\begin{aligned} \frac{-1}{8\kappa_{10}^2} \int \! 
d^{10}x \sqrt{-g_{10}} |F_5|^2 \sim \int d^4x dr_6 d\Omega_5 \; r_6^{3} g_s^{-2} \alpha'^{-4} \, . \label{compare2}\end{aligned}$$ Requiring (\[compare1\]) to be much smaller than (\[compare2\]), we obtain $$\begin{aligned}
g_s^4 \alpha'^8 {{\bf{d}}}^2 \ll r_6^6 \, . \label{constr1}\end{aligned}$$ This means that, for the backreaction to the 5-form flux $F_5$ to be small, we need to work in this region of $r_6$. On the other hand, we made the assumption $r \ll r_0$ to simplify the source term and obtain (\[source\]). Around the tip, we have the relation $r_6^2 = r^2 + y(r)^2 \sim r^2 (1 + y'(0))$, so this assumption translates to the condition $ r_6^2 ({{\bf{d}}}^2-{{\bf{c}}}^2)/{{{\bf{d}}}^2} \ll r_0^2$, which is equivalent to $$\begin{aligned}
r_6^6 \ll \alpha'^8 g_s^2 N_f^{-2} {{\bf{d}}}^6/({{\bf{d}}}^2-{{\bf{c}}}^2)^2 \, . \label{constr2}\end{aligned}$$ Therefore, in order to have a region of $r_6$ which satisfies the two requirements (\[constr1\]) and (\[constr2\]), we need $$\begin{aligned}
g_s N_f \ll {{\bf{d}}}^2/({{\bf{d}}}^2-{{\bf{c}}}^2) \, . \label{ddc}\end{aligned}$$ With (\[relm\]) and (\[relmu\]), this condition is met if we are close to the critical chemical potential, $$\begin{aligned}
\mu - m \ll \mu\, , m \, .\end{aligned}$$ Throughout this paper, we are working in this regime. Note that when ${{\bf{d}}}^2-{{\bf{c}}}^2 \ll {{\bf{d}}}^2$, for which (\[ddc\]) is satisfied, the D7-brane spike becomes very narrow, and the spike can be well-approximated by fundamental strings. This means that the dilaton backreaction can be safely neglected. The backreaction to the metric is suppressed by $1/N_c$ and is also neglected.

[99]{} K. Rajagopal and F. Wilczek, “The condensed matter physics of QCD,” arXiv:hep-ph/0011333.\ \[-4em\] M. G. Alford, A. Schmitt, K. Rajagopal and T. Schafer, “Color superconductivity in dense quark matter,” Rev. Mod. Phys.  [**80**]{}, 1455 (2008) \[arXiv:0709.4635 \[hep-ph\]\]. J. M. Maldacena, “The large N limit of superconformal field theories and supergravity,” Adv. Theor. Math. Phys.  [**2**]{}, 231 (1998) \[Int. J. Theor. Phys.  [**38**]{}, 1113 (1999)\] \[arXiv:hep-th/9711200\]. S. S. Gubser, I. R. Klebanov and A. M. Polyakov, “Gauge theory correlators from non-critical string theory,” Phys. Lett.  B [**428**]{}, 105 (1998) \[arXiv:hep-th/9802109\].\ \[-4em\] E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys.  [**2**]{}, 253 (1998) \[arXiv:hep-th/9802150\].\ \[-4em\] O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz, “Large N field theories, string theory and gravity,” Phys. Rept.  [**323**]{}, 183 (2000) \[arXiv:hep-th/9905111\]. E. Witten, “Baryons and branes in anti de Sitter space,” JHEP [**9807**]{}, 006 (1998) \[arXiv:hep-th/9805112\].\ \[-4em\] Y. Imamura, “Supersymmetries and BPS configurations on Anti-de Sitter space,” Nucl. Phys. B [**537**]{}, 184 (1999) \[arXiv:hep-th/9807179\].\ \[-4em\] C.G. Callan, A. Guijosa and K.G. Savvidy, “Baryons and string creation from the fivebrane worldvolume action,” Nucl. Phys. B [**547**]{}, 127 (1999) \[arXiv:hep-th/9810092\].\ \[-4em\] B. Craps, J. Gomis, D. Mateos and A. Van Proeyen, “BPS solutions of a D5-brane worldvolume in a D3-brane background from superalgebras,” JHEP [**9904**]{}, 004 (1999) \[arXiv:hep-th/9901060\].\ \[-4em\] C.G. Callan, A. Guijosa, K.G. Savvidy and O. Tafjord, “Baryons and flux tubes in confining gauge theories from brane actions,” Nucl. Phys. B [**555**]{} (1999) 183 \[arXiv:hep-th/9902197\]. D. J. Gross and H.
Ooguri, “Aspects of large N gauge theory dynamics as seen by string theory,” Phys. Rev.  D [**58**]{}, 106002 (1998) \[arXiv:hep-th/9805129\]. F. Benini, F. Canoura, S. Cremonesi, C. Nunez and A. V. Ramallo, “Unquenched flavors in the Klebanov-Witten model,” JHEP [**0702**]{}, 090 (2007) \[arXiv:hep-th/0612118\].\ \[-4em\] F. Benini, F. Canoura, S. Cremonesi, C. Nunez and A. V. Ramallo, “Backreacting Flavors in the Klebanov-Strassler Background,” JHEP [**0709**]{}, 109 (2007) \[arXiv:0706.1238 \[hep-th\]\].\ \[-4em\] R. Casero, C. Nunez and A. Paredes, “Elaborations on the String Dual to N=1 SQCD,” Phys. Rev.  D [**77**]{}, 046003 (2008) \[arXiv:0709.3421 \[hep-th\]\]. R. Harnik, D. T. Larson and H. Murayama, “Supersymmetric color superconductivity,” JHEP [**0403**]{}, 049 (2004) \[arXiv:hep-ph/0309224\]. N. Maru and M. Tachibana, “Color superconductivity from supersymmetry,” Mod. Phys. Lett. [**A20**]{}, 1495 (2005) \[arXiv:hep-ph/0411079\]. D. V. Deryagin, D. Y. Grigoriev and V. A. Rubakov, “Standing wave ground state in high density, zero temperature QCD at large N(c),” Int. J. Mod. Phys.  A [**7**]{}, 659 (1992). E. Shuster and D. T. Son, “On finite-density [QCD]{} at large N(c),” Nucl. Phys.  B [**573**]{}, 434 (2000) \[arXiv:hep-ph/9905448\].\ \[-4em\] B. Y. Park, M. Rho, A. Wirzba and I. Zahed, “Dense QCD: Overhauser or BCS pairing?,” Phys. Rev.  D [**62**]{}, 034015 (2000) \[arXiv:hep-ph/9910347\]. M. R. Douglas and W. Taylor, “Branes in the bulk of anti-de Sitter space,” arXiv:hep-th/9807225. A. W. Peet and J. Polchinski, “UV/IR relations in AdS dynamics,” Phys. Rev.  D [**59**]{}, 065011 (1999) \[arXiv:hep-th/9809022\]. A. Karch and E. Katz, “Adding flavor to AdS/CFT,” JHEP [**0206**]{}, 043 (2002) \[arXiv:hep-th/0205236\]. S. Kobayashi, D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, “Holographic phase transitions at finite baryon density,” JHEP [**0702**]{}, 016 (2007) \[arXiv:hep-th/0611099\].\ \[-4em\] D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, “Holographic phase transitions at finite chemical potential,” JHEP [**0711**]{}, 085 (2007) \[arXiv:0709.1225 \[hep-th\]\]. A. Karch and A. O’Bannon, “Holographic Thermodynamics at Finite Baryon Density: Some Exact Results,” JHEP [**0711**]{}, 074 (2007) \[arXiv:0709.0570 \[hep-th\]\]. R. C. Myers, “Dielectric-branes,” JHEP [**9912**]{}, 022 (1999) \[arXiv:hep-th/9910053\]. A. A. Tseytlin, “On non-abelian generalisation of the Born-Infeld action in string theory,” Nucl. Phys.  B [**501**]{}, 41 (1997) \[arXiv:hep-th/9701125\]. A. Hashimoto and W. Taylor, “Fluctuation spectra of tilted and intersecting D-branes from the Born-Infeld action,” Nucl. Phys.  B [**503**]{}, 193 (1997) \[arXiv:hep-th/9703217\]. A. A. Tseytlin, “Born-Infeld action, supersymmetry and string theory,” arXiv:hep-th/9908105. K. Ghoroku, M. Ishihara and A. Nakamura, “D3/D7 holographic Gauge theory and Chemical potential,” Phys. Rev.  D [**76**]{}, 124006 (2007) \[arXiv:0708.3706 \[hep-th\]\].\ K. Y. Kim, S. J. Sin and I. Zahed, “Dense hadronic matter in holographic QCD,” arXiv:hep-th/0608046.\ N. Horigome and Y. Tanii, “Holographic chiral phase transition with chemical potential,” JHEP [**0701**]{}, 072 (2007) \[arXiv:hep-th/0608198\].\ A. Karch and A. O’Bannon, “Metallic AdS/CFT,” JHEP [**0709**]{}, 024 (2007) \[arXiv:0705.3870 \[hep-th\]\]. S. Nakamura, Y. Seo, S. J. Sin and K. P. Yogendran, “A new phase at finite quark density from AdS/CFT,” J. Korean Phys. Soc.  [**52**]{}, 1734 (2008) \[arXiv:hep-th/0611021\].\ S. Nakamura, Y. Seo, S. J. 
Sin and K. P. Yogendran, “Baryon-charge Chemical Potential in AdS/CFT,” Prog. Theor. Phys.  [**120**]{}, 51 (2008) \[arXiv:0708.2818 \[hep-th\]\]. J. Erdmenger, M. Kaminski and F. Rust, “Holographic vector mesons from spectral functions at finite baryon or isospin density,” Phys. Rev.  D [**77**]{}, 046005 (2008) \[arXiv:0710.0334 \[hep-th\]\].\ J. Erdmenger, M. Kaminski, P. Kerner and F. Rust, “Finite baryon and isospin chemical potential in AdS/CFT with flavor,” JHEP [**0811**]{}, 031 (2008) \[arXiv:0807.2663 \[hep-th\]\]. M. Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, “Meson spectroscopy in AdS/CFT with flavour,” JHEP [**0307**]{}, 049 (2003) \[arXiv:hep-th/0304032\]. D. Arean and A. V. Ramallo, “Open string modes at brane intersections,” JHEP [**0604**]{}, 037 (2006) \[arXiv:hep-th/0602174\].\ R. C. Myers and R. M. Thomson, “Holographic mesons in various dimensions,” JHEP [**0609**]{}, 066 (2006) \[arXiv:hep-th/0605017\]. J. Babington, J. Erdmenger, N. J. Evans, Z. Guralnik and I. Kirsch, “Chiral symmetry breaking and pions in non-supersymmetric gauge / gravity duals,” Phys. Rev.  D [**69**]{}, 066007 (2004) \[arXiv:hep-th/0306018\]. I. Kirsch, “Generalizations of the AdS/CFT correspondence,” Fortsch. Phys.  [**52**]{}, 727 (2004) \[arXiv:hep-th/0406274\]. D. Mateos, R. C. Myers and R. M. Thomson, “Holographic phase transitions with fundamental matter,” Phys. Rev. Lett.  [**97**]{}, 091601 (2006) \[arXiv:hep-th/0605046\].\ D. Mateos, R. C. Myers and R. M. Thomson, “Thermodynamics of the brane,” JHEP [**0705**]{}, 067 (2007) \[arXiv:hep-th/0701132\]. T. Albash, V. G. Filev, C. V. Johnson and A. Kundu, “A topology-changing phase transition and the dynamics of flavour,” Phys. Rev.  D [**77**]{}, 066004 (2008) \[arXiv:hep-th/0605088\].\ A. Karch and A. O’Bannon, “Chiral transition of N = 4 super Yang-Mills with flavor on a 3-sphere,” Phys. Rev.  D [**74**]{}, 085033 (2006) \[arXiv:hep-th/0605120\].\ T. Albash, V. G. Filev, C. V. Johnson and A. Kundu, “Global Currents, Phase Transitions, and Chiral Symmetry Breaking in Large $N_c$ Gauge Theory,” JHEP [**0812**]{}, 033 (2008) \[arXiv:hep-th/0605175\]. O. Aharony, J. Sonnenschein and S. Yankielowicz, “A holographic model of deconfinement and chiral symmetry restoration,” Annals Phys.  [**322**]{}, 1420 (2007) \[arXiv:hep-th/0604161\].\ Y. h. Gao, W. s. Xu and D. f. Zeng, “NGN, QCD(2) and chiral phase transition from string theory,” JHEP [**0608**]{}, 018 (2006) \[arXiv:hep-th/0605138\].\ K. Peeters, J. Sonnenschein and M. Zamaklar, “Holographic melting and related properties of mesons in a quark gluon plasma,” Phys. Rev.  D [**74**]{}, 106008 (2006) \[arXiv:hep-th/0606195\].\ E. Antonyan, J. A. Harvey and D. Kutasov, “The Gross-Neveu model from string theory,” Nucl. Phys.  B [**776**]{}, 93 (2007) \[arXiv:hep-th/0608149\]. M. Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, “Towards a holographic dual of large-N(c) QCD,” JHEP [**0405**]{}, 041 (2004) \[arXiv:hep-th/0311270\]. M. R. Douglas, “Branes within branes,” arXiv:hep-th/9512077.\ E. Witten, “Small Instantons in String Theory,” Nucl. Phys.  B [**460**]{}, 541 (1996) \[arXiv:hep-th/9511030\].\ M. R. Douglas, “Gauge Fields and D-branes,” J. Geom. Phys.  [**28**]{}, 255 (1998) \[arXiv:hep-th/9604198\]. Z. Guralnik, S. Kovacs and B. Kulik, “Holography and the Higgs branch of N = 2 SYM theories,” JHEP [**0503**]{}, 063 (2005) \[arXiv:hep-th/0405127\]. J. Erdmenger, J. Grosse and Z. 
Guralnik, “Spectral flow on the Higgs branch and AdS/CFT duality,” JHEP [**0506**]{}, 052 (2005) \[arXiv:hep-th/0502224\]. D. Arean, A. V. Ramallo and D. Rodriguez-Gomez, “Holographic flavor on the Higgs branch,” JHEP [**0705**]{}, 044 (2007) \[arXiv:hep-th/0703094\]. Z. Guralnik, “Strong coupling dynamics of the Higgs branch: Rolling a Higgs by collapsing an instanton,” Nucl. Phys.  B [**732**]{}, 46 (2006) \[arXiv:hep-th/0412074\].\ Z. Guralnik, S. Kovacs and B. Kulik, “AdS/CFT duality and the Higgs branch of N = 2 SYM,” Fortsch. Phys.  [**53**]{}, 480 (2005) \[arXiv:hep-th/0501154\]. R. Apreda, J. Erdmenger, N. Evans and Z. Guralnik, “Strong coupling effective Higgs potential and a first order thermal phase transition from AdS/CFT duality,” Phys. Rev.  D [**71**]{}, 126002 (2005) \[arXiv:hep-th/0504151\]. G. W. Gibbons and K. Hashimoto, “Non-linear electrodynamics in curved backgrounds,” JHEP [**0009**]{}, 013 (2000) \[arXiv:hep-th/0007019\]. S. Matsuura, “On holographic phase transitions at finite chemical potential,” JHEP [**0711**]{}, 098 (2007) \[arXiv:0711.0407 \[hep-th\]\]. Y. Seo and S. J. Sin, “Baryon Mass in medium with Holographic QCD,” JHEP [**0804**]{}, 010 (2008) \[arXiv:0802.0568 \[hep-th\]\]. J. Polchinski, “String theory. Vol. 2: Superstring theory and beyond,” [*Cambridge, UK: Univ. Pr. (1998) 531 p*]{} K. Hashimoto, “Holographic Nuclei,” Prog. Theor. Phys.  [**121**]{}, 241 (2009) \[arXiv:0809.3141 \[hep-th\]\].\ K. Hashimoto, to appear. H. Hata, T. Sakai, S. Sugimoto and S. Yamato, “Baryons from instantons in holographic QCD,” arXiv:hep-th/0701280.\ K. Hashimoto, T. Sakai and S. Sugimoto, “Holographic Baryons : Static Properties and Form Factors from Gauge/String Duality,” Prog. Theor. Phys.  [**120**]{}, 1093 (2008) \[arXiv:0806.3122 \[hep-th\]\]. K. Hashimoto, T. Sakai and S. Sugimoto, “Nuclear Force from String Theory,” arXiv:0901.4449 \[hep-th\]. T. Sakai and S. Sugimoto, [*“Low energy hadron physics in holographic QCD,”*]{} Prog. Theor. Phys.  [**113**]{}, 843 (2005) \[arXiv:hep-th/0412141\]. J. Erdmenger, N. Evans, I. Kirsch and E. Threlfall, “Mesons in Gauge/Gravity Duals - A Review,” Eur. Phys. J.  A [**35**]{}, 81 (2008) \[arXiv:0711.4467 \[hep-th\]\]. V. P. Frolov, “Merger transitions in brane-black-hole systems: Criticality, scaling, and self-similarity,” Phys. Rev.  D [**74**]{}, 044006 (2006) \[arXiv:gr-qc/0604114\].\ V.P. Frolov, A.L. Larsen and M. Christensen, “Domain wall interacting with a black hole: A new example of critical phenomena,” Phys. Rev. D [**59**]{} (1999) 125008 \[arXiv:hep-th/9811148\].\ M. Christensen, V.P. Frolov and A.L. Larsen, “Soap bubbles in outer space: Interaction of a domain wall with a black hole,” Phys. Rev. D [**58**]{} (1998) 085008 \[arXiv:hep-th/9803158\]. D. Yamada and L. G. Yaffe, “Phase diagram of N = 4 super-Yang-Mills theory with R-symmetry chemical potentials,” JHEP [**0609**]{}, 027 (2006) \[arXiv:hep-th/0602074\]. K. Iida and G. Baym, “Superfluid phases of quark matter. III: Supercurrents and vortices,” Phys. Rev.  D [**66**]{}, 014015 (2002) \[arXiv:hep-ph/0204124\].\ K. Iida, “Magnetic vortex in color-flavor locked quark matter,” Phys. Rev.  D [**71**]{}, 054011 (2005) \[arXiv:hep-ph/0412426\].\ M. M. Forbes and A. R. Zhitnitsky, “Global strings in high density QCD,” Phys. Rev.  D [**65**]{}, 085009 (2002) \[arXiv:hep-ph/0109173\].\ E. J. Ferrer and V. de la Incera, “Magnetic fields boosted by gluon vortices in color superconductivity,” Phys. Rev. Lett.  
[**97**]{}, 122301 (2006) \[arXiv:hep-ph/0604136\]; “Paramagnetism in color superconductivity and compact stars,” J. Phys. A [**40**]{}, 6913 (2007) \[arXiv:astro-ph/0611460\].\ A. P. Balachandran, S. Digal and T. Matsuura, “Semi-superfluid strings in high density QCD,” Phys. Rev.  D [**73**]{}, 074009 (2006) \[arXiv:hep-ph/0509276\].\ E. Nakano, M. Nitta and T. Matsuura, “Non-Abelian Strings in High Density QCD: Zero Modes and Interactions,” Phys. Rev.  D [**78**]{}, 045002 (2008) \[arXiv:0708.4096 \[hep-ph\]\]; “Non-Abelian Strings in Hot or Dense QCD,” Prog. Theor. Phys. Suppl.  [**174**]{}, 254 (2008) \[arXiv:0805.4539 \[hep-ph\]\].\ D. M. Sedrakian, D. Blaschke, K. M. Shahabasyan and M. K. Shahabasyan, “Vortex structure of a neutron star with CFL quark core,” arXiv:0810.3003 \[hep-ph\].\ M. K. Shahabasyan, “Vortex lattice oscillations in rotating neutron stars with quark ’CFL’ cores,” Astrophysics [**52**]{}, 151 (2009) \[Astrofiz.  [**52**]{}, 165 (2009)\].\ M. Eto and M. Nitta, “Color Magnetic Flux Tubes in Dense QCD,” arXiv:0907.1278 \[hep-ph\].\ M. Eto, E. Nakano and M. Nitta, “Color Magnetic Flux Tubes in Dense QCD. II: Effective World-Sheet Theory,” arXiv:0908.4470 \[hep-ph\]. T. Schafer and F. Wilczek, “Continuity of quark and hadron matter,” Phys. Rev. Lett.  [**82**]{}, 3956 (1999) \[arXiv:hep-ph/9811473\]. A. Karch, D. T. Son and A. O. Starinets, “Zero Sound from Holography,” arXiv:0806.3796 \[hep-th\].\ M. Kulaxizi and A. Parnachev, “Holographic Responses of Fermion Matter,” Nucl. Phys.  B [**815**]{}, 125 (2009) \[arXiv:0811.2262 \[hep-th\]\]. E. Witten, “Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys.  [**2**]{}, 505 (1998) \[arXiv:hep-th/9803131\]. [^1]: Disclaimer: Note that our theory is not QCD but rather a supersymmetric generalization of it, and we shall only treat the squark condensation for the CFL. For a field-theoretical treatment of the squark condensation, see for example Ref. [@Harnik:2003ke]. [^2]: There are examples in which fully backreacted geometry is obtained (see for example Ref. [@BRgeometry]), and it would be interesting to generalize our results to those examples. [^3]: Examples of that kind include computations explicitly uses string worldsheets in the dual gravity backgrounds; gluon scattering amplitudes, drag forces, quark-antiquark forces, heavy meson spectroscopy and Regge trajectory. [^4]: The meson spectrum at zero baryon density is studied in Refs. [@Kruczenski:2003be; @Myers:2006qr]. [^5]: The importance of this coupling between the NSNS 2-form and the $F_3$ for the baryons was found in Ref. [@Gross:1998gk]. [^6]: A similar CS mechanism was used for treating baryons [@Hata:2007mb; @Hashimoto:2009ys] in Sakai-Sugimoto holographic model [@SaSu1], but used in a rather different way: the CS term was to stabilize the size of a single baryon in the model. [^7]: We divide out the infinity volume of four Minkowski spacetime $V_4$. [^8]: A related issue on backreaction of baryon vertices was discussed in Ref. [@Hashimoto:2008jq]. [^9]: In this paper this ${\bf d}$ is the density, but one can imagine localized quarks/baryons instead, for the discussion here. [^10]: The flux is smeared along the space directions parallel to the boundary: $x_1,x_2,x_3$. With these directions being non-compact, we do not need to quantize the flux from computational point of view. However, our motivation of this quantization comes from the fact that the flux is sourced by the D5 branes. 
[^11]: Since electrically charged instantons should repel each other, generic configurations should have the instantons separated. This phenomenon is also found in the interaction among baryons [@Hashimoto:2009ys] in the Sakai-Sugimoto model [@SaSu1]; the repulsive core of nucleons is mainly due to this electric repulsion. [^12]: Precisely speaking, the flavor group of the supersymmetric QCD should be $SU(N_f)$, since the overall $U(1)$ transformation of the flavor symmetry can be identified as a global part of a $U(1)$ subgroup of the local gauge group. In the following, we adopt $U(N_f)$ rather than $SU(N_f)$, but the argument goes similarly. [^13]: Note that both potentials are of ${\cal O}(1/N_c)$ for fixed $g_s$ ($\sim$ fixed ratio of $\lambda/N_c$).
--- abstract: 'We study the equation of state (EOS) of nuclear matter as function of density. We expand the energy per particle (E/A) of symmetric infinite nuclear matter in powers of the density to take into account 2,3,$\ldots$,N-body forces. New EOS are proposed by fitting ground state properties of nuclear matter (binding energy, compressibility and pressure) and assuming that at high densities a second order phase transition to the Quark Gluon Plasma (QGP) occurs. The latter phase transition is due to symmetry breaking at high density from nuclear matter (locally color white) to the QGP (globally color white). In the simplest implementation of a second order phase transition we calculate the critical exponent $\delta$ by using Landau’s theory of phase transition. We find $\delta=3$. Refining the properties of the EOS near the critical point gives $\delta=5$ in agreement with experimental results. We also discuss some scenarios for the EOS at finite temperatures.' author: - 'Ruslan Magana$^{a,b)}$, Hua Zheng$^{a)}$ and Aldo Bonasera$^{a,c)}$' title: Virial Expansion of the Nuclear Equation of State --- Introduction ============ In recent years, the availability of new heavy-ion accelerators which are capable of accelerating ions from a few MeV/nucleon to GeV/nucleon has fueled a new field of research loosely referred to as Nuclear Fragmentation. The characteristics of the fragments produced depend on the beam energy and the target-projectile combinations which can be externally controlled. Depending on the beam energy, hard photons, pions, kaons and so on can be produced as well [@1; @2]. Fragmentation experiments could provide informations about the nuclear matter properties and constrain the EOS of nuclear matter. A ’conventional’ EOS provides only limited information about the nuclear matter: the static thermal equilibrium properties[@Csernai]. In heavy ion collisions non-equilibrium processes are very important, thus nuclear transport properties will play an equally important role. If we want to study the EOS at high densities and high temperatures, we have to rely on theoretical estimates as well. The low density behavior of nuclear matter determines the observables mechanism of the final expansion stage in a collision before the break up. The low density part of the nuclear EOS is directly related to the final fragmentation, nuclear compressibility, momentum dependence, etc [@Csernai]. After an energetic nucleus-nucleus collision in the 100 MeV-4 GeV/nucleon beam energy region, many light nuclear fragments, a few heavy fragments and a few mesons (mainly pions) are observed. Thus the initial kinetic energy of the projectile leads to the destruction of the ground state of nuclear matter and converts it into dilute gas of fragments which then loses thermal contact during the break-up or freeze-out stage. One of the standard methods to explore the nuclear EOS is within the framework of the mean field theory. It starts out with a Langrangian including the nucleon field $\Psi$, a scalar meson field $\phi$, and a vector meson field $V_\mu$. Customarily the contribution of the scalar field is described by a quartic polynomial. From conventional nuclear physics we know that there is a stable equilibrium state at the normal nuclear density $\rho_0=0.145-0.17 fm^{-3}$[@3; @4] with a compressibility in the range of $K=180-240$ MeV [@5] and a binding energy of 15-16 MeV/nucleon[@shlomo]. 
In this work we take $\rho_0=0.165 fm^{-3}$ and $K=225$ MeV based on the condition that the mean field potential has a minimum at normal nuclear density. With increasing density, the effects of N-body correlations become more and more important. This is especially true near a phase transition. Furthermore, nucleons are not elementary particles but are made of quarks and gluons, thus N-body forces are expected to be stronger at high densities where the nucleon wave functions strongly overlap. We can take these features into account by expanding the EOS in powers of the density, as customary in the virial expansion of any EOS[@8]. Finally we discuss some properties at finite temperatures assuming either a classical gas or a quantum Fermi system. We show that at the densities and temperatures of interest the classical approximation is not valid. This is at variance with many experimental and theoretical results in heavy ion collisions near the Fermi energy [@albergo; @pocho; @15] which assume the classical approximation to be valid. Quantum corrections have been recently extensively discussed in [@15a] for cluster formation in a low density expanding nuclear system. Nuclear Equation Of State ========================= For a system interacting through two body forces having a short-range repulsion and a longer-range attraction the EOS resembles a Van der Waals one. This is indeed the case for nuclear matter [@1; @2; @Csernai; @10]. A popular approach is to postulate an equation of state which satisfies known properties of nuclei [@shlomo]. The equation for the energy per particle is: $$E/A=22.5\tilde \rho^{\frac{2}{3}}+\frac{A}{2}\tilde \rho+\frac{B}{\sigma+1}\tilde\rho^\sigma \label{2.1}$$ where $\tilde \rho=\rho/\rho_0$ and $\rho_0$ is the normal nuclear density. The first term of eq. (\[2.1\]) refers to the kinetic energy of a Fermi gas. The other terms are due to potential interactions and correlations. To generalize to finite temperature we could use a classical approximation, giving an EOS of the form [@10]: $$P=\rho^2\frac{\partial (E/A)}{\partial \rho}+\rho T \label{2.2}$$ In the following we will test the validity of such an approximation. The requirement of causality provides several theoretical constraints on the EOS [@11] at high densities and limits the choice of the functional form of the compressional energy that can be used in phenomenological EOS (see Fig. \[Fig4\]). Very stiff equations of state may lead to a superluminal speed of sound (see ref. [@12]). However, this is not a problem if the acausality occurs in a region of the phase diagram where matter is in the mixed or plasma phase, because the phase transition softens the EOS. There are some phenomenological parameterizations of the specific energies, like the Linear, Quadratic, Sierk-Nix or Grant-Kapusta ones, which are acausal at sufficiently high densities[@Csernai]. Fortunately the acausality occurs well within or beyond the mixed or plasma phase for all the parameterizations except the Quadratic. The basic parameter is the (isothermal) compressibility, which is defined as $$K=9\frac{\partial P}{\partial\rho}\bigg|_{\rho=\rho_0,T=0} \label{2.3}$$ The three parameters $A$, $B$ and $\sigma$ in eq. (\[2.1\]) are determined by requiring that, at $\rho=\rho_0$, the pressure vanishes, the binding energy is $E/A=-15$ MeV and the compressibility is of the order of 200 MeV (as inferred from the vibrational frequency of the giant monopole resonance[@5]).
Using these conditions, we get $A=-356$ MeV, $B=303$ MeV and $\sigma=7/6$; we will refer to this EOS as CK200 (Conventional, K=200 MeV). If we now modify this approach in accordance with [@13], the compressibility condition is substituted by the requirement that the mean field potential has a minimum at the ground state density: $$U(\rho)=A\tilde \rho+B\tilde\rho^\sigma \label{2.4}$$ where $$\frac{\partial U(\rho)}{\partial \rho}\bigg|_{\rho=\rho_0}=0 \label{2.5}$$ which means that the compressibility at $\rho=\rho_0$ is equal to $K=225$ MeV. We then have the conditions $$\begin{array}{lll} a)&E/A=-15\text{MeV}\\ b)&P=0&\text{at}\hspace{0.5cm}\rho=\rho_0\\ c)&K=225\text{MeV} \end{array} \label{2.6}$$ Solving these equations, we get $A=-210$ MeV, $B=157.5$ MeV and $\sigma=4/3$; we will refer to this EOS as CK225. The form of the EOS is a very delicate subject. For a nuclear system we expect to see a liquid-gas (LG) phase transition at a temperature of the order of 10 MeV and at low density. Under these conditions we can assume that nuclear matter behaves like a classical ideal gas, eq. (\[2.2\]); however, this is just our ansatz. In fact, in this work we will compare the theoretical behavior of the EOS assuming either a classical ideal gas or a Fermi gas. ![The energy per particle of nuclear matter as a function of density using different density-dependent interactions in comparison with the ’conventional’ formulation. The parameter values are given in Table \[table1\]. Symbols: CK225-thin solid line, CK200-dashed line, CCS$\delta$3-dashed dotted line and CCS$\delta$5-thick line.[]{data-label="Fig1"}](1){width="0.9\columnwidth"} In order to calculate the critical point, we impose the conditions that the first and second derivatives of eq. (\[2.2\]) with respect to density are equal to zero; we thus obtain the critical temperature and density, i.e. $T_c=9$ MeV and $\rho_c=0.3\rho_0$. Such values are in some agreement with experimental results [@10]. However, we notice that in order for the classical approximation to be valid, the ratio of the temperature to the Fermi energy, $\epsilon_f(\rho)=36(\rho/\rho_0)^{2/3}MeV$, should be much larger than one. For the values above we get $T_c/\epsilon_f=9/16<1$, which shows that we are still in a quantum regime. The validity of eq.(1) is restricted to densities close to the ground state value. In fact no further constraints are imposed so far for larger densities. Such constraints should come from experimental data in heavy ion collisions and properties of heavy stars. Those data, if available, do not directly give a constraint on the EOS but must be filtered through model calculations. The models in turn need some form of EOS. We propose a new equation for the energy per particle which could be used in microscopic calculations: $$E/A=\tilde \varepsilon_f\tilde\rho^\frac{2}{3}+\sum_{n=1}^k\frac{A_n}{n+1}\tilde \rho^n \label{2.7}$$ where the first term refers to the average kinetic energy of a free Fermi gas with $\tilde \varepsilon_f=3/5\, \varepsilon_f= 22.5$ MeV; the other terms are due to potential interactions and correlations. The $n=1$ term is obtained by taking into account the interaction between pairs of particles, and the subsequent terms must involve the interactions between groups of three, four, etc., particles. The coefficients $A_n$ in the expansion, eq. (\[2.7\]), are called first, second, third, etc., virial coefficients[@8].
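The three conditions of eq. (\[2.6\]) can be solved numerically by the reader who wants to reproduce the CK225 parameters. The following is a minimal sketch (Python with sympy; it is not part of the original derivation, and the initial guess is deliberately placed near the reported solution):

``` python
import sympy as sp

A, B, sig, r = sp.symbols('A B sigma rho', real=True)

# E/A of eq. (2.1); r stands for the reduced density rho/rho_0
EA = sp.Rational(45, 2)*r**sp.Rational(2, 3) + A/2*r + B/(sig + 1)*r**sig
P  = r**2*sp.diff(EA, r)          # pressure (in units of rho_0) at T = 0
K  = 9*sp.diff(P, r)              # compressibility, eq. (2.3)
U  = A*r + B*r**sig               # mean-field potential, eq. (2.4)

eqs = [EA.subs(r, 1) + 15,        # a) E/A = -15 MeV at rho_0
       P.subs(r, 1),              # b) P = 0 at rho_0
       sp.diff(U, r).subs(r, 1)]  # c) minimum of U at rho_0 (equivalent to K = 225 MeV)

sol = sp.nsolve(eqs, (A, B, sig), (-200, 150, 1.3))
print(sol)                                                  # ~ (-210, 157.5, 4/3)
print(K.subs({A: sol[0], B: sol[1], sig: sol[2], r: 1}))    # ~ 225 MeV, as a check
```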
Let’s start by considering three-body forces, $O(\rho^3)$, and assume that at normal density and zero temperature the energy of the ground state is -15 MeV and the pressure is zero. In addition we assume that the mean field potential has a minimum at normal density or, equivalently, $K=225$ MeV. If we do this, we have three conditions and three unknowns, so we can solve the corresponding set of equations. Unfortunately, the solution has no physical meaning because the energy diverges to minus infinity when the density approaches infinity, see table I. For the fourth order of our expansion eq. (\[2.7\]) takes the form $$E/A=\tilde\varepsilon_f\tilde\rho^\frac{2}{3}+\frac{A_1}{2}\tilde\rho+\frac{A_2}{3}\tilde \rho^2+\frac{A_3}{4}\tilde\rho^3+\frac{A_4}{5}\tilde\rho^4 \label{2.8}$$ Here we assume symmetry breaking at high density, from nuclear matter (locally color white) to the QGP (globally color white), which possibly gives a second-order phase transition[@16; @17]. In accordance with the conditions given in eq. (\[2.6\]) we can add two extra constraints based on the conditions of matter close to the critical density of a second-order phase transition at T=0.

  Interactions           $A_1$     $A_2$     $A_3$    $A_4$   $A_5$   $A_6$    $\sigma$
  --------------------- --------- --------- -------- ------- ------- -------- ----------
  **CK200**              -356      303                                         7/6
  **CK225**              -210      157.5                                       4/3
  **3**                  -135      112.5     -30
  **CCS$\bf\delta$3**    -136.89   120.99    -41.32   4.72
  **5**                  -137.59   124.41    -46.51   7.67    -0.47
  **CCS$\bf\delta$5**    -137.96   126.25    -49.46   9.55    -0.92   0.0035

  \[table1\]

At the critical point the first and second derivatives of the pressure with respect to density are equal to zero. Then we have five constraints: $$\begin{array}{lll} a)&E/A=-15\text{MeV}\\ b)&P=0&\text{at}\hspace{0.5cm}\rho=\rho_0\\ c)&K=225\text{MeV}\\ d)&\frac{\partial P}{\partial\rho}\bigg|_{\rho=\rho_c}=0\\ e)&\frac{\partial^2 P}{\partial\rho^2}\bigg|_{\rho=\rho_c}=0 \end{array} \label{3.1}$$ Solving these equations we get the values of $A_1..A_4$ (see table \[table1\]) and obtain the critical point $\tilde \rho_c=2.9354$; we refer to this EOS as CCS$\delta$3. This could be the critical point for a second-order phase transition to the QGP at T=0. In order to include higher order terms we assume that $\frac{\partial^n P}{\partial\rho^n}\bigg|_{\rho=\rho_c}=0$ with $n=1,\ldots,4$ (this assumption will become clear later on); we then solve the resulting nonlinear system to get all the $A_n$ and therefore the critical point $\tilde \rho_c$ at T=0. Here we are taking into account the interaction between pairs of particles and interactions between groups of three, four, five and six particles. For interactions up to five particles, $O(\rho^5)$, with $\frac{\partial^n P}{\partial\rho^n}\bigg|_{\rho=\rho_c}=0$ for n=1-3, the nuclear EOS diverges negatively again, as in the $O(\rho^3)$ case. If we take more terms in our expansion, up to sixth order, we get: $$E/A=\tilde\varepsilon_f\tilde\rho^\frac{2}{3}+\frac{A_1}{2}\tilde\rho+\frac{A_2}{3}\tilde \rho^2+\frac{A_3}{4}\tilde\rho^3+\frac{A_4}{5}\tilde\rho^4+\frac{A_5}{6}\tilde\rho^5+\frac{A_6}{6}\tilde\rho^6 \label{3.2}$$ Imposing the condition that the fourth order derivative of the pressure vanishes as well gives the values of the parameters reported in Table I and a critical density $\tilde \rho_c=5.2$, which we refer to as CCS$\delta$5. Now we can try to apply our equations of state in the high-density domain. A quite promising result is shown in Fig. \[Fig1\], where we compare the different EOS at T=0.
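As an illustration of how the entries of Table \[table1\] can be obtained numerically, the five conditions of eq. (\[3.1\]) for the fourth-order expansion form a closed nonlinear system for $A_1,\ldots,A_4$ and $\tilde\rho_c$. A minimal sketch follows (Python/sympy; the initial guess, chosen near the $O(\rho^3)$ values, is our own and convergence from other starting points is not guaranteed):

``` python
import sympy as sp

A1, A2, A3, A4, r, rc = sp.symbols('A1 A2 A3 A4 rho rho_c', real=True)

# E/A of eq. (2.8), with r the reduced density rho/rho_0
EA = (sp.Rational(45, 2)*r**sp.Rational(2, 3)
      + A1/2*r + A2/3*r**2 + A3/4*r**3 + A4/5*r**4)
P = r**2*sp.diff(EA, r)                    # pressure / rho_0 at T = 0

eqs = [EA.subs(r, 1) + 15,                 # a) E/A = -15 MeV at rho_0
       P.subs(r, 1),                       # b) P = 0 at rho_0
       9*sp.diff(P, r).subs(r, 1) - 225,   # c) K = 225 MeV
       sp.diff(P, r).subs(r, rc),          # d) dP/drho = 0 at rho_c
       sp.diff(P, r, 2).subs(r, rc)]       # e) d^2P/drho^2 = 0 at rho_c

sol = sp.nsolve(eqs, (A1, A2, A3, A4, rc), (-135, 112, -30, 5, 3))
print(sol)   # ~ (-136.89, 120.99, -41.32, 4.72, 2.94), the CCS-delta-3 row of Table 1
```

The same procedure, extended to sixth order with the additional conditions $\partial^3 P/\partial\rho^3=\partial^4 P/\partial\rho^4=0$ at $\rho_c$, should reproduce the CCS$\delta$5 row.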
![The pressure per particle of nuclear matter as a function of density using different density-dependent interactions. Symbols: CK225-thin line, CCS$\delta$3-dashed dotted line and CCS$\delta$5-thick line. []{data-label="Fig2"}](6aa){width="0.9\columnwidth"} The different EOS are very similar near and below the ground state density of nuclear matter while they differ greatly, as expected at higher densities. In particular the EOS softens because of the assumed QGP phase transition. The pressure is shown in Fig. \[Fig2\] at temperature T=0 with a comparison with the conventional EOS. At $\rho=\rho_0$ the pressure is zero and increases largely for the ’conventional’ EOS. ![The compressibility per particle of nuclear matter as a function of density using different density-dependent interactions. Symbols as in Fig.2.[]{data-label="Fig3"}](compre){width="0.9\columnwidth"} We proceed now to the study of the compressibility. We are assuming that at $\rho=\rho_0$, K=225 MeV (apart CK200) as is shown in Fig. \[Fig3\]. A negative compressibility (which indicates an instability region) is obtained at subnuclear densities only, while it becomes zero at the critical point for the QGP phase transition. A negative compressibility gives an imaginary speed of sound since for a particle of mass m [@Landau-Vol6]: $$v_c=\sqrt{\frac{1}{m}\frac{\partial P}{\partial \rho}}.$$ Such a quantity is plotted in Fig.4 as function of density for the different EOS. Two phenomena are worth noticing. First the speed of sound becomes larger than the speed of light for all the EOS (excluding CCS$\delta$5) for $\rho > 4.5\rho_0$. The CCS$\delta$5 EOS gives a superluminal speed of sound at almost twice such a density, well in the region of the QGP. The second property is that the speed of sound becomes imaginary in the instability region, thus a discontinuity is shown in Fig. 4 below normal nuclear matter density. The small discontinuity observed near the critical density of CCS$\delta$5 is due to the numerical solution of the set of equations used to determine the values of the coefficients reported in Table I. We can have a better view of the critical region by zooming on the density as in Fig. 5. Now the discontinuities at lower densities due to the LG phase transition are visible for three EOS. The instability region is very much the same for all the EOS having the same compressibility, which shows that such a region is mainly determined by the ground state properties of the EOS and not by the assumed functional form. As we will show later also the critical point for the LG is the same when the compressibility is the same. At higher densities, the speed of sound becomes zero (discontinuous) at the critical densities of the fourth and sixth order EOS respectively. ![The speed of sound of nuclear matter as a function of density. In the shaded region the principle of causality is broken. Symbols as in Fig.2.[]{data-label="Fig4"}](8){width="0.9\columnwidth"} ![The speed of sound of nuclear matter as a function of density. Symbols as in Fig.2.[]{data-label="Fig5"}](8a){width="0.9\columnwidth"} ![Equation of state surface for a nuclear system with a second-order phase transition to the QGP, CCS$\delta$5. The pressure is reduced by the phase transition near the critical region. 
The pressure increases again at very high densities in the QGP phase.[]{data-label="Fig6"}](3d1){width="01.\columnwidth"} Finite Temperatures =================== In order to study the properties of the EOS at finite temperatures we need to go beyond the classical approximation. A simple functional form could be obtained using a Fermi gas expansion[@8; @10; @15]. The region of validity of the Fermi gas model is related to the ratio of the temperature to the Fermi energy $T/\varepsilon_f(\rho)$. In this scenario eq. (\[2.7\]) takes the form: $$E/A=\tilde \varepsilon_f \tilde \rho^{\frac{2}{3}}+\sum_{n=1}^k\frac{A_n}{n+1} \tilde \rho^n+a_0\tilde\rho^{-\frac{2}{3}}T^2 \label{3.6}$$ where the level density parameter $a_0=1/13.3$ MeV$^{-1}$. Correspondingly the pressure takes the form[@8]: $$P=\rho_0\left[15\tilde \rho^{\frac{5}{3}}+\sum_{n=1}^k\frac{nA_n\tilde \rho^{(n+1)}}{(n+1)}+\frac{2}{3}a_0\tilde \rho^{\frac{1}{3}}T^2\right]$$

![The pressure per particle of nuclear matter as a function of density at some temperatures. Here we are assuming that nuclear matter behaves like a classical ideal gas. The EOS are respectively from top to bottom: CK225, CCS$\delta$3 and CCS$\delta$5.[]{data-label="Fig7"}](pa "fig:")

![Same as Fig.7 but for a Fermi gas.[]{data-label="Fig8"}](pd1 "fig:")

![Behavior near the critical point for $O(\rho^4)$: the pressure per particle of nuclear matter as a function of volume at some temperatures. Here we are assuming that nuclear matter behaves like a Fermi gas.](Pv1p "fig:")

\[Fig9\]

![Behavior near the critical point for $O(\rho^6)$: the pressure per particle of nuclear matter as a function of volume at some temperatures. Here we are assuming that nuclear matter behaves like a Fermi gas.](Pv2p "fig:")

\[Fig10\]

The properties near the critical temperature and density can be obtained for the various EOS by imposing constraints similar to conditions d) and e) of eq. (\[3.1\]). For the case of fourth order, $O(\rho^4)$, the critical temperature and critical density for the LG phase transition have the values $T_c=18.05$ MeV and $\rho_c=0.3724\rho_0$ respectively. In this approach we are assuming interactions between pairs of particles and between groups of three and four particles, which could be fully justified by the fact that nucleons are made of quarks and gluons. For higher order terms, $O(\rho^6)$, the critical temperature and density are $T_c=17.932$ MeV and $\rho_c=0.3717\rho_0$. Similar values are obtained for the CK225 EOS. We notice that the critical values of the LG do not change much by using different EOS. In particular the critical density seems to be almost independent of the assumed classical or quantum statistical properties of nuclear matter. On the other hand, the critical T changes by almost a factor of two when going from the classical to the quantum approximation. We have already noticed that the ratio of T to the corresponding Fermi energy at the critical density was smaller than one when using the classical approximation. If we perform the same calculation for the quantum case we get $T_c/\epsilon_f=18/18.6\approx1$, which implies that higher order terms must be considered in the expansion of the Fermi gas pressure (or energy per particle), but we are $\it{not}$ in the classical regime. A detailed calculation at finite T properly taking into account the Fermi statistics is beyond the scope of this work, but we would expect a slight reduction of the temperature as compared to the value given above.
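The LG critical point quoted above can be located numerically from the Fermi-gas pressure. A minimal sketch is shown below (Python/sympy; the CCS$\delta$3 coefficients are taken from Table \[table1\], and the initial guess near the quoted values is ours, so the printed numbers should only be expected to be close to $T_c\simeq 18$ MeV and $\rho_c\simeq 0.37\rho_0$):

``` python
import sympy as sp

r, T = sp.symbols('rho T', positive=True)
A = [-136.89, 120.99, -41.32, 4.72]     # CCS-delta-3 coefficients from Table 1 (MeV)
a0 = 1/13.3                             # level density parameter (MeV^-1)

# Fermi-gas pressure divided by rho_0, with r the reduced density rho/rho_0
P = (15*r**sp.Rational(5, 3)
     + sum(sp.Rational(n, n + 1)*A[n - 1]*r**(n + 1) for n in range(1, 5))
     + sp.Rational(2, 3)*a0*r**sp.Rational(1, 3)*T**2)

# critical point: dP/drho = 0 and d^2P/drho^2 = 0 simultaneously
crit = sp.nsolve([sp.diff(P, r), sp.diff(P, r, 2)], (r, T), (0.37, 18))
print(crit)     # ~ (0.37, 18): rho_c ~ 0.37 rho_0, T_c ~ 18 MeV
```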
We notice again that the values obtained experimentally for the critical T and $\rho$ suggest that we are in the quantum regime, while the methods used to obtain them are purely classical [@albergo; @pocho; @15] (but see [@15a]). This is analogous to what we have done above, eq.(2), using a classical approximation for finite T. We expect that properly taking into account quantum statistics will dramatically change the values obtained experimentally near the critical point. In the quantum approximation considered above we can obtain $P(\rho, T)$; the result is given in Fig.6 for the CCS$\delta$5 case. This region is very similar for all the EOS for a given compressibility (K=225 MeV in our case). At higher T and/or $\rho$ the pressure flattens because of the QGP phase transition and increases again at higher densities. We can compare the differences between the EOS in more detail in the purely classical approximation, Fig.\[Fig7\], and in the quantum approximation, Fig.\[Fig8\]. We notice the dramatic differences between the two cases, especially at the lowest densities and T where quantum effects are stronger. For completeness we show in Figs.9 and 10 the same cases but as a function of the reduced volume. Critical Phenomena =================== The term critical phenomena refers to the thermodynamic behavior of a system near the critical temperature of a second order phase transition. A simple understanding of this phenomenon can be obtained in the framework of Landau’s theory[@8]. In such a framework it is assumed that near the critical point of a second order phase transition the relevant degree(s) of freedom reduce to a few (order parameters) which reflect basic invariance properties of the Hamiltonian. Below the critical point such invariance(s) is spontaneously broken and it is restored above the critical point; this means that the order parameter is nonzero only below the critical point. Some examples are the magnetization in a ferromagnetic system, or the difference in densities between a gas and its liquid in a liquid-gas phase transition. In Landau’s approach one assumes that there exists a free energy $\Psi$ near the critical point that depends on the order parameter and its conjugate field. In general $\Psi=E/A-TS$, where S is the entropy. Near the critical point we can expand the free energy in terms of the density at the critical temperature (which is zero for our case). Defining the order parameter $\eta=V-V_c$, i.e. the distance from the critical volume, we get $$\Psi=E(V)/A\approx\sum_n\frac{1}{n!}\frac{\partial^{n}E(V)}{\partial V^{n}}\bigg|_{V_c}(V-V_c)^{n}$$ n=0,1...N. We define an external or conjugate field $P(V_c)=-\partial E/\partial V|_{V=V_c}$. We stop our expansion at fourth order, $n=4$, and notice that in the neighborhood of the critical point the conditions of the second order phase transition are $P'(V_c)=0$ and $P''(V_c)=0$ at $T=0$. Thus only the $n=0$ (a constant), $n=1$ and $n=4$ terms survive in the expansion above. Imposing that the free energy has a minimum in the presence of the conjugate field gives[@8]: $$\eta\sim P^{\frac{1}{\delta}}\sim P^{\frac{1}{3}}$$ Therefore $O(\rho^4)$ has the critical exponent $\delta=3$, which corresponds to the ’mean field’ or ’classical’ value[@8]. Experimentally the value $\delta=4-5$[@8] is found. It is now evident how to go beyond the ’mean field’ value using the arguments given above. In particular, if we impose $P'''(V_c)=0$ and continue the expansion in eq.(14) to $n=5$ we easily get $\delta=4$.
However the resulting fifth order coefficient in the virial expansion of the EOS is negative, see table I, thus this case is unphysical. Finally, expanding to $n=6$ and imposing $P''''(V_c)=0$ gives $\delta=5$ and the fitting parameters reported in table I for the CCS$\delta$5 EOS. Notice again that our formulation is perfectly consistent with Landau’s theory as discussed for general thermodynamical systems. Because of this analogy we expect that all other critical exponents are the same as those calculated in Landau’s approach [@8]. Summary and conclusions ======================= In this paper we discussed some nuclear equations of state bridging basic properties of nuclear matter near its ground state and a possible second order phase transition to the quark-gluon plasma at high densities. We have determined a critical density of about three times the normal ground state density for the QGP in the case where the critical exponent is $\delta=3$; this corresponds to the so called mean field or classical value of the critical exponent[@8]. To go beyond this classical value we used Landau’s theory of phase transitions, and for $\delta=5$ we determined an EOS to $O(\rho^6)$ which gives a critical density about five times the ground state density of nuclear matter. Using an MIT bag model, it is possible to estimate a critical density for the QGP of about five times normal nuclear matter density using a bag constant $B^{1/4}=206 $MeV[@16]. Other estimates using phenomenological hadronic and QGP EOS give similar densities but a first order phase transition[@Csernai]. In order to see whether there is a first or second order phase transition, or a simple cross-over from one state to the other, reliable experimental data in the region of high baryonic densities and relatively small temperatures are needed, or data from heavy stars compared to refined theoretical models. Such quantities could be, for instance, collective flow compared to microscopic calculations which implement the EOS discussed here. As we have seen, the pressure at high densities is completely different in the EOS with or without a QGP phase transition. In particular, if the transition is second order then the corresponding EOS cannot be much different from the one estimated here when $\delta=5$. At lower densities a liquid gas phase transition might occur at the critical density $\rho_c=0.37\rho_0$ and $T_c=18$ MeV. Such values are rather independent of the EOS chosen when fixing the compressibility. However the critical temperature depends somewhat on the assumed classical or quantum statistics. In the region of densities and temperature of the LG transition it seems that there is no reason why a classical approximation should work. However, presently most experimental and theoretical results in this region are obtained using classical methods. This is a feature that should be improved in future works in order to have more reliable constraints on the EOS[@15a]. Finally, we found that for odd orders, such as $O(\rho^3)$ and $O(\rho^5)$, this approach is not suitable to describe the basic properties of our system. The even order approximations, $O(\rho^4)$ and $O(\rho^6)$, result in more acceptable behavior of the energy per particle as a function of density. It could be possible that the collective character in regions with a high density of particles is associated with even numbers [@18; @19].
One of us (RM) would like to acknowledge the support by DOE, NSF-REU Program, and the support of many people from Texas A&M Cyclotron Institute and Instituto de Ciencias Nucleares(ICN)-UNAM. We thank prof. J.Natowitz for discussions. [99]{} A. Bonasera, F. Gulminelli and J. Molitoris, Phys. Rep.[**243**]{}, 1 (1994). G. Bertsch and S. Dasgupta Phys. Rep.[**160**]{}, 189 (1988). L.P. Csernai Introduction to Relativistic Heavy Ion Collisions ( Wiley, New York 1994). W. D. Myeres, *Atomic Data Nucl. Data Tables*, **17**, 411 (1978). H. A. Bethe, *Ann. Rev. Nucl. Sci.* **21** 93 (1971). J. P. Blaizot, D. Gogny and B Grammaticos, Nucl. Phys. **A265**, 315 (1976). B.K.Agrawal, S.Shlomo and V.Kim Au, Phys Rev. **C72**, 014310 (2005). Huang K., Statistical Mechanics (J. Wiley and Sons, New York) 1987, 2n ed.;\ Landau L and Lifshits F. Statistical Physics (Pergarmon, New York) 1980. S.Albergo et al., Nuovo Cimento **89**, 1(1985). J.Pochodzalla et al. PRL**75**, 1040(1995). J.B.Natowitz et al., Phys. Rev.**C65**, 034618(2002);\ S. Wuenschel et al., Nucl. Phys.**A843**, 1 (2010). J.B.Natowitz et al., Phys. Rev.Lett.**104**, 202501(2010). A. Bonasera et al., Rivista del Nuovo Cimento **23**, 1(2000). T. S. Olson and W. A. Hiscock, Phys Rev.**C39**, 1818 (1989). C. Grant and J. Kapusta, Phys. Rev. **C32**, 663 (1985);\ A. Goodman, J.Kapusta and A.Z.Mekijan, Phys. Rev.**C30**, 851 (1984). A. Bonasera and M. Di Toro, Lettere at Nuovo Cimento **44**, 172 (1985). C.Y. Wong, Introduction to High-Energy Heavy Ion Collisions (World Scientific Singapore 1994). A. Bonasera, Phys. Rev.**C62**, O52202(R)(2000);\ A.Bonasera, Nucl.Phys.**A681**, 64c(2001);\ S.Terranova and A.Bonasera, Phys. Rev.**C70**, O24906(2004);\ S.Terranova, D.M.Zhou and A.Bonasera, Eur.Phys.J.**A26**, 333(2005);\ Z.G.Tan and A.Bonasera, Nucl.Phys.**A784**, 368(2007). Landau L and Lifshits F. Fluid Mechanics (Pergarmon, New York) 1980. A. Arima and F. Iachello, Phys. Rev. Lett.**35**, 1069 (1975). A. Bohr and B.R. Mottelson, Mat. Phys. Dan. Vid. Selks. **27** No 16 (1953).
--- abstract: 'Using a complete basis set we have obtained an analytic expression for the matrix elements of the Coulomb interaction. These matrix elements are written in a closed form. We have used the basis set of the three-dimensional isotropic quantum harmonic oscillator in order to develop our calculations, which can be useful when treating interactions in localized systems.' author: - Jaime bibliography: - 'articulo.bib' title: 'Analytic Coulomb matrix elements in a three-dimensional geometry' --- \[sec:intro\]Introduction ========================= Having an analytic expression for the Coulomb matrix elements is an important step for several numerical methods, like, for example, exact diagonalization method. In order to describe the Coulomb interaction in three dimensions, we have chosen the basis set of the isotropic harmonic oscillator for the single-particle wave functions, which, in one dimension, is written as $$\psi_{n_x}(x) = \left(a\sqrt{\pi}2^{n_x}n_x!\right)^{-1/2} e^{-\frac{1}{2}x^2/a^2} H_{n_x}(x/a), \label{eq:wf}$$ where $a=\sqrt{\hbar/m\omega}$ is taken as the characteristic unit length. One of the reasons for the election of this particular basis set is the Gaussian Product Theorem, which guarantees that the product of two Gaussian type orbitals (a linear combination of them in our case) centered on two different atoms is a finite sum of Gaussians centered on a point along the axis connecting them. In previous works, several ways to evaluate the two-dimensional matrix elements using different approaches have been studied [@halonen; @stone; @b:tapash], such as restricting to the lowest Landau level due to simplicity reasons [@tsiper; @girvin]. The purpose of this paper is to report an analytic formula for the Coulomb interaction written in closed form. It can be easily implemented by computer means and could help to improve the performance of solid state simulations in which interactions are taken into account. \[sec:elem\]Matrix elements =========================== In order to derive an analytical expression for the Coulomb interaction matrix elements we will proceed starting with the same approach as the one used in Ref. [@chakra], i.e., writing the single-electron wave function and the Coulomb potential as their Fourier transform integrals: $$\begin{aligned} \psi_\lambda(\bm{r}) & = & \frac{1}{(2\pi)^{3/2}}\int \phi_\lambda(\bm{q}) e^{-i\bm{q}\cdot\bm{r}}\,\mathrm{d}\bm{q}, \\ V(\bm{r}) & = & \frac{1}{(2\pi)^{3/2}}\int \tilde{V}(\bm{q}) e^{-i\bm{q}\cdot\bm{r}}\,\mathrm{d}\bm{q},\end{aligned}$$ where $\lambda$ stands for a set of quantum numbers $\left\{n_i\right\}$ and $V(\bm{r_1}-\bm{r_2})=r_{12}^{-1}$ is the Coulomb potential. Now, the two-particle matrix element, which, in real space is written as $$\begin{aligned} \mathcal{V}^{\lambda_1\lambda_2}_{\lambda_3\lambda_4} = \int \psi^\ast_{\lambda_1}(\bm{r}_1) \psi^\ast_{\lambda_2}(\bm{r}_2) V(\bm{r}_1-\bm{r}_2) \nonumber \\ \times \psi_{\lambda_3}(\bm{r}_2) \psi_{\lambda_4}(\bm{r}_1)\, \mathrm{d}\bm{r}_1\bm{r}_2,\end{aligned}$$ is now expressed, in momentum space, as $$\begin{aligned} \mathcal{V}^{\lambda_1\lambda_2}_{\lambda_3\lambda_4} = \frac{1}{(2\pi)^{3/2}} \int \phi^\ast_{\lambda_1}(\bm{q}_1) \phi_{\lambda_4}(\bm{q}_1-\bm{q})\nonumber \\ \times \phi^\ast_{\lambda_2}(\bm{q}_2) \phi_{\lambda_3}(\bm{q}_2+\bm{q}) \tilde{V}(\bm{q})\,\mathrm{d}\bm{q}_1\mathrm{d}\bm{q}_2\mathrm{d}\bm{q}. \label{eq:me2}\end{aligned}$$ Eq. (\[eq:me2\]) can be rewritten in a more convenient and compact form. 
Let us define $C^{\lambda}_{\lambda^\prime}(\bm{q})$ and $D^{\lambda}_{\lambda^\prime}(\bm{q})$ as the following convolution integrals: $$\begin{aligned} C^{\lambda}_{\lambda^\prime}(\bm{q}) & = & \int\phi_\lambda^\ast(\bm{k}) \phi_{\lambda^\prime}(\bm{k}-\bm{q})\,\mathrm{d}\bm{k} \label{eq:C1} \\ & = & \int\psi_\lambda^\ast(\bm{r})\psi_{\lambda^\prime} (\bm{r})e^{-i\bm{q}\cdot\bm{r}}\,\mathrm{d}\bm{r}, \label{eq:C2} \\ D^{\lambda}_{\lambda^\prime}(\bm{q}) & = & \int\phi_\lambda^\ast(\bm{k})\phi_{\lambda^\prime} (\bm{k}+\bm{q})\, \mathrm{d}\bm{k} \label{eq:D1} \\ & = & \int\psi_\lambda^\ast(\bm{r})\psi_{\lambda^\prime} (\bm{r})e^{i\bm{q}\cdot\bm{r}}\, \mathrm{d}\bm{r} \label{eq:D2} \\ & = & C^{\lambda}_{\lambda^\prime}(-\bm{q}).\label{eq:D3}\end{aligned}$$ Substituting Eqs. (\[eq:C1\]) and (\[eq:D1\]) into Eq. (\[eq:me2\]) we obtain $$\mathcal{V}^{\lambda_1\lambda_2}_{\lambda_3\lambda_4} = \frac{1}{(2\pi)^{3/2}} \int C^{\lambda_1}_{\lambda_4}(\bm{q}) D^{\lambda_2}_{\lambda_3}(\bm{q}) \tilde{V}(\bm{q})\,\mathrm{d}\bm{q}.$$ Now, it is straightforward to perform the integral appearing in Eq. (\[eq:C2\]) [@gradshteyn]. Using Cartesian coordinates, it is possible to separate all three variables and integrate independently. For simplicity reasons, let us integrate only along the $x$ variable, the result then reads: $$\begin{aligned} C^{n_x^1}_{n_x^4}(q_x) = \left(\frac{2^{n_{x+}^{14}}}{n_{x+}^{14}!} \frac{n_{x-}^{14}!}{2^{n_{x-}^{14}}} \right)^{1/2} i^{n_x^1+n_x^4} (-1)^{n_{x+}^{14}} \nonumber \\ \times e^{-q_x^2a^2/4} \left( \frac{aq_x}{2} \right)^{|n_x^1-n_x^4|} L_{n_{x-}^{14}}^{|n_x^1-n_x^4|} \left( a^2q_x^2/2\right), \label{eq:Cx}\end{aligned}$$ where $n_i^j$ is the quantum number referring to the $i$-axis of the particle $j$. We have also used the terms $n_{i+}^{jk}$ and $n_{i-}^{jk}$, which are defined as $\max(n_i^j,n_i^k)$ and $\min(n_i^j,n_i^k)$ respectively. The final form for $C^{\lambda_1}_{\lambda_4}(\bm{q})$ will be $$C^{n_x^1n_y^1n_z^1}_{n_x^4n_y^4n_z^4} (\bm{q})= \prod_{i\in\{x,y,z\}} C^{n_i^1}_{n_i^4}(q_i).$$ Using the relation between $D$ and $C$ shown in Eq. (\[eq:D3\]), it is trivial to find out the value of the former convolution integral. It still remains to calculate the Fourier transform $\tilde{V}(\bm{q})$ of the spherically symmetric interaction potential $V(\bm{r})$. $$\tilde{V}(q) = \sqrt{\frac{2}{\pi}}\frac{1}{q^2} \label{eq:fourier}$$ But it will be more convenient to substitute it by $$\tilde{V}(q) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} e^{-(q_x^2+q_y^2+q_z^2)u}\,\mathrm{d}u\label{eq:fourier2}$$ The integration over variables $q_x$, $q_y$ and $q_z$ can be performed all in the same fashion. Using the symmetry of the problem we only need to integrate over one variable, i.e. $q_x$ and then use the same result for $q_y$ and $q_z$. Therefore, integrating over $q_x$ yields: $$\begin{aligned} \int_{-\infty}^{\infty}e^{-(u+a^2/2)q_x^2}\left( \frac{aq_x}{2} \right)^{|n_x^1-n_x^4|+|n_x^2-n_x^3|} \nonumber \\ \times L_{n_{x-}^{14}}^{|n_x^1-n_x^4|}(a^2q_x^2/2) L_{n_{x-}^{23}}^{|n_x^2-n_x^3|}(a^2q_x^2/2) \,\mathrm{d}q_x. \label{eq:laguerre1}\end{aligned}$$ This integral does not vanish if and only if $$|n_x^1-n_x^4|+|n_x^2-n_x^3| = 2s_x,$$ where $s_x=0,1,2,\ldots$. Therefore, using the previous selection rule and the power series for the associated Laguerre polynomial $$L_{n}^{l}(x)=\sum_{k=0}^{n}\frac{1}{k!}\binom{n+l}{n-k}(-x)^{k},$$ we can write Eq. 
(\[eq:laguerre1\]) as $$\begin{aligned} \sum_{k_x=0}^{n_{x-}^{14}} \frac{(-1)^{k_x}}{k_x!} \binom{n_{x+}^{14}}{n_{x-}^{14}-k_x} \sum_{k_x=0}^{n_{x-}^{14}} \frac{(-1)^{k_x^{\prime}}}{k_x^{\prime}!} \binom{n_{x+}^{23}}{n_{x-}^{23}-k_x^{\prime}} \nonumber \\ \times 2^{k_x+k_x^\prime} \left( \frac{a}{2} \right)^{2s_x+2k_x+2k_x^\prime} \nonumber \\ \times \frac{(2s_x+2k_x+2k_x^\prime-1)!!}{(2u+a^2)^{s+k_x+k_x^\prime+1/2}} \sqrt{2\pi}. \label{eq:laguerre2}\end{aligned}$$ Taking into account only the $u$-dependent part in Eq. (\[eq:laguerre2\]) and its symmetric extension for $y$ and $z$ variables, we end up with the last integral which will lead to the final result. This last integral is expressed as: $$\int_{0}^{\infty}\left( 2u+a^2 \right)^{-\Omega-3/2} \,\mathrm{d}u = \frac{1}{1+2\Omega}\frac{1}{a^{1+2\Omega}},$$ where $\Omega=s_x+s_y+s_z+k_x+k_y+k_z+k_x^\prime+k_y^\prime+k_z^\prime$. Finally, collecting all the terms, we end up with the analytic expression for the Coulomb interaction matrix elements: $$\begin{aligned} \mathcal{V}^{n^1_xn^1_yn^1_zn^2_xn^2_yn^2_z}_{n^3_xn^3_yn^3_zn^4_xn^4_yn^4_z} & = & \frac{1}{a} \sqrt{\frac{2}{\pi}}(-1)^{n_x^1 + n_y^1 + n_z^1 + n_x^4 + n_y^4 + n_z^4 - s_x -s_y - s_z} \nonumber \\ & & \left( \frac{2^{n_{x+}^{14}}}{n_{x+}^{14}!} \frac{n_{x-}^{14}!}{2^{n_{x-}^{14}}} \frac{2^{n_{y+}^{14}}}{n_{y+}^{14}!} \frac{n_{y-}^{14}!}{2^{n_{y-}^{14}}} \frac{2^{n_{z+}^{14}}}{n_{z+}^{14}!} \frac{n_{z-}^{14}!}{2^{n_{z-}^{14}}} \right)^{1/2} \left( \frac{2^{n_{x+}^{23}}}{n_{x+}^{23}!} \frac{n_{x-}^{23}!}{2^{n_{x-}^{23}}} \frac{2^{n_{y+}^{23}}}{n_{y+}^{23}!} \frac{n_{y-}^{23}!}{2^{n_{y-}^{23}}} \frac{2^{n_{z+}^{23}}}{n_{z+}^{23}!} \frac{n_{z-}^{23}!}{2^{n_{z-}^{23}}} \right)^{1/2} \nonumber \\ & & \sum_{k_x=0}^{n_{x-}^{14}} \frac{(-1)^{k_x}}{k_x!} \binom{n_{x+}^{14}}{n_{x-}^{14}-k_x} \sum_{k'_x=0}^{n_{x-}^{23}} \frac{(-1)^{k'_x}}{k'_x!} \binom{n_{x+}^{23}}{n_{x-}^{23}-k'_x} \frac{(2s_x+2k_x+2k'_x-1)!!}{2^{2s_x+k_x+k'_x}} \nonumber \\ & & \sum_{k_y=0}^{n_{y-}^{14}} \frac{(-1)^{k_y}}{k_y!} \binom{n_{y+}^{14}}{n_{y-}^{14}-k_y} \sum_{k'_y=0}^{n_{y-}^{23}} \frac{(-1)^{k'_y}}{k'_y!} \binom{n_{y+}^{23}}{n_{y-}^{23}-k'_y} \frac{(2s_y+2k_y+2k'_y-1)!!}{2^{2s_y+k_y+k'_y}} \nonumber \\ & & \sum_{k_z=0}^{n_{z-}^{14}} \frac{(-1)^{k_z}}{k_z!} \binom{n_{z+}^{14}}{n_{z-}^{14}-k_z} \sum_{k'_z=0}^{n_{z-}^{23}} \frac{(-1)^{k'_z}}{k'_z!} \binom{n_{z+}^{23}}{n_{z-}^{23}-k'_z} \frac{(2s_z+2k_z+2k'_z-1)!!}{2^{2s_z+k_z+k'_z}} \nonumber \\ & & \frac{1}{1+2(s_x+s_y+s_z+k_x+k'_x+k_y+k'_y+k_z+k'_z)}. \label{eq:result}\end{aligned}$$ \[sec:rec\]Recurrence ===================== Due to the six summatories appearing in Eq. (\[eq:result\]), if the indices start to grow to values say, just of the order of tenths, the process for calculating a single matrix element can be quite time-consuming, and thus, a real bottleneck for any numerical simulation. Using the recurrence relations that the Hermite polynomials obey, it is possible to find a simple iterative formula for the matrix elements which will accelerate the process of calculating the matrix elements. Let $\{n_-,n_+\}$ be any pair of quantum numbers $\{n_{i-}^{jk},n_{i+}^{jk}\}$ with $i \in \{x,y,z\}$ and $jk \in \{14,23\}$, satisfying $n_+ \ge n_-$. 
Then, the Coulomb matrix elements will satisfy (remaining indices omitted for clarity) $$\begin{aligned} \mathcal{V}_{n_-}^{n_+} & = & \sqrt{\frac{n_++1}{n_-}} \mathcal{V}_{n_--1}^{n_++1} + \sqrt{\frac{n_+}{n_-}} \mathcal{V}_{n_--1}^{n_+-1} \nonumber \\ & & - \sqrt{\frac{n_--1}{n_-}} \mathcal{V}_{n_--2}^{n_+}, \label{eq:it1}\end{aligned}$$ for $n_- > 0$. If we consider the unnormalized matrix elements $$\begin{aligned} \overline{\mathcal{V}}^{n^1_xn^1_yn^1_zn^2_xn^2_yn^2_z}_{n^3_xn^3_yn^3_zn^4_xn^4_yn^4_z} & = & \prod_{i\in\{1,2,3,4\}} (2^{n^i_x}n^i_x!2^{n^i_y}n^i_y!2^{n^i_z}n^i_z!)^{\frac{1}{2}} \nonumber \\ & & \times \mathcal{V}^{n^1_xn^1_yn^1_zn^2_xn^2_yn^2_z}_{n^3_xn^3_yn^3_zn^4_xn^4_yn^4_z},\end{aligned}$$ Eq. (\[eq:it1\]) can be transformed to $$\overline{\mathcal{V}}_{n_-+1}^{n_+} = \overline{\mathcal{V}}_{n_-}^{n_++1} + 2n_+ \overline{\mathcal{V}}_{n_-}^{n_+-1} -2n_- \overline{\mathcal{V}}_{n_--1}^{n_+}. \label{eq:it2}$$ Another interesting recurrence relation, which this time involves the four indices $\{0,n_+\}$ and $\{m_-,m_+\}$, is the following: $$\overline{\mathcal{V}}_{0,m_-}^{n_+,m_++1} = \overline{\mathcal{V}}_{0,m_-}^{n_++1,m_+} + \overline{\mathcal{V}}_{0,m_--1}^{n_+,m_+}.$$
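For small quantum numbers the closed form, Eq. (\[eq:result\]), can also be evaluated directly. The following sketch is an illustrative transcription in Python (the function name and the choice of returning the element in units of $e^2/a$ are ours, not part of the paper); for larger indices the recurrence relations above are the preferable route. For all quantum numbers equal to zero it returns $\sqrt{2/\pi}/a$, which agrees with the analytic expectation value of $1/r_{12}$ for two oscillator ground states.

``` python
from itertools import product
from math import comb, factorial, sqrt, pi

def dfact(n):
    """Double factorial, with the convention (-1)!! = 1."""
    return 1 if n <= 1 else n * dfact(n - 2)

def coulomb_me(n1, n2, n3, n4, a=1.0):
    """Direct evaluation of Eq. (result); n1..n4 are (nx, ny, nz) tuples.
    Result in units of e^2/a (Gaussian units assumed)."""
    lo14 = [min(n1[i], n4[i]) for i in range(3)]
    hi14 = [max(n1[i], n4[i]) for i in range(3)]
    lo23 = [min(n2[i], n3[i]) for i in range(3)]
    hi23 = [max(n2[i], n3[i]) for i in range(3)]
    # selection rule: |n^1_i - n^4_i| + |n^2_i - n^3_i| must be even for every axis
    if any((hi14[i] - lo14[i] + hi23[i] - lo23[i]) % 2 for i in range(3)):
        return 0.0
    s = [(hi14[i] - lo14[i] + hi23[i] - lo23[i]) // 2 for i in range(3)]
    sign = (-1) ** (sum(n1) + sum(n4) - sum(s))
    norm = 1.0
    for i in range(3):
        norm *= sqrt(2.0 ** (hi14[i] - lo14[i]) * factorial(lo14[i]) / factorial(hi14[i]))
        norm *= sqrt(2.0 ** (hi23[i] - lo23[i]) * factorial(lo23[i]) / factorial(hi23[i]))
    total = 0.0
    ranges = [range(lo14[i] + 1) for i in range(3)] + [range(lo23[i] + 1) for i in range(3)]
    for k in product(*ranges):
        kk, kp = k[:3], k[3:]          # k_i and k'_i for i = x, y, z
        term = 1.0
        for i in range(3):
            term *= (-1) ** kk[i] / factorial(kk[i]) * comb(hi14[i], lo14[i] - kk[i])
            term *= (-1) ** kp[i] / factorial(kp[i]) * comb(hi23[i], lo23[i] - kp[i])
            term *= dfact(2 * s[i] + 2 * kk[i] + 2 * kp[i] - 1) / 2.0 ** (2 * s[i] + kk[i] + kp[i])
        omega = sum(s) + sum(kk) + sum(kp)
        total += term / (1 + 2 * omega)
    return sqrt(2.0 / pi) / a * sign * norm * total

# example: all particles in the oscillator ground state
print(coulomb_me((0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)))   # ~ 0.7979 = sqrt(2/pi)
```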
--- abstract: 'We consider a class of nonlinear partial-differential equations, including the spatially homogeneous Fokker-Planck-Landau equation for Maxwell (or pseudo-Maxwell) molecules. Continuing the work of [@f; @g; @fgm], we propose a probabilistic interpretation of such a P.D.E. in terms of a nonlinear stochastic differential equation driven by a standard Brownian motion. We derive a numerical scheme, based on a system of $n$ particles driven by $n$ Brownian motions, and study its rate of convergence. We finally deal with the possible extension of our numerical scheme to the case of the Landau equation for soft potentials, and give some numerical results.' author: - Nicolas Fournier$^1$ title: Particle approximation of some Landau equations --- **Mathematics Subject Classification (2000)**: 82C40, 60K35. **Keywords**: Fokker-Planck-Landau equation, Plasma physics, Stochastic particle systems. Introduction and main results ============================= The equation ------------ Let $S_d$ be the set of symmetric $d\times d$ matrices with real entries, and $S_d^+$ its subset of nonnegative matrices. For $a:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$, we consider the partial differential equation $$\label{eqa} \partial_t f_t(x) =\frac{1}{2}\sum_{i,j=1}^d \partial_i \left\{ \int_{{{{{\mathbb{R}}}^d}}}a_{ij}( x-y) \Big[ f_t( y) \partial_j f_t( x) -f_t( x) \partial_j f_t( y) \Big]dy \right\},$$ where $\partial_t =\frac{\partial }{\partial t}$, $\partial_i = \frac{\partial }{\partial x_i}$ and where the unknown $(f_t)_{t\geq 0}$ is a family of probability density functions $(f_t)_{t\geq 0}$ on ${{{{\mathbb{R}}}^d}}$. The spatially homogeneous Landau (or Fokker-Planck-Landau) equation corresponds, in dimension $d\geq 2$, to the case where for some $\kappa:{{\mathbb{R}}}_+\mapsto {{\mathbb{R}}}_+$, $$\label{aLandau} a_{ij}(z)=\kappa(|z|^2) (|z|^{2}\delta_{ij}-z_iz_j).$$ Physically, one assumes that $\kappa(r)=r^{\gamma/2}$, for some $\gamma \in [-3,1]$. One talks of soft potentials when $\gamma<0$, Maxwell molecules when $\gamma=0$, and hard potentials when $\gamma>0$. We consider in this paper the case of Maxwell molecules, or of pseudo-Maxwell molecules, where $\kappa$ is supposed to be smooth and bounded. [.2cm]{} This equation arises as a limit of the Boltzmann equation when all the collisions become grazing. We refer to Villani [@v; @v2; @v3] and the many references therein for physical and mathematical details on this topic. See Cordier-Mancini [@cm] and Buet-Cordier-Filbet [@bcf] for a review on deterministic numerical methods to solve (\[eqa\]). Notation -------- Let ${{\mathcal P}}={{\mathcal P}}({{{{\mathbb{R}}}^d}})$ be the set of probability measures on ${{{{\mathbb{R}}}^d}}$, and ${{\mathcal P}}_k=\{\mu\in{{\mathcal P}}, m_k(\mu)<\infty\}$, where $m_k(\mu)=\int |x|^k\mu(dx)$. For $x,y\in {{{{\mathbb{R}}}^d}}$, we set $|x|=(\sum_1^d x_i^2){{^\frac{1}{2}}}$, and $(x,y)=x^*y =\sum_1^d x_iy_i$. We consider the norm $|M|=\sup \{|(Mx,x)|, |x|=1\} =\max\{|\lambda|, \lambda$ eighenvalue of $M\}$ on $S_d$. Recall that for $A \in S_d^+$, $\inf \{(Ax,x), |x|=1 \}=1/|A^{-1}|$. All $A\in S_d^+$ admits a unique square root $A{{^\frac{1}{2}}}\in S_d^+$, and we have $|A{{^\frac{1}{2}}}|=|A|{{^\frac{1}{2}}}$. Consider $a:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$. Let $b:{{{{\mathbb{R}}}^d}}\mapsto {{{{\mathbb{R}}}^d}}$ be defined by $b_i(x)=\sum_{j=1}^d \partial_j a_{ij}(x)$. Assume that $|a(x)|+|b(x)|\leq C(1+|x|^2)$ (which is the case when $a$ is defined by (\[aLandau\]) with $\kappa\in C^1_b$). 
A measurable family $(P_t)_{t \geq 0}\subset {{\mathcal P}}_2$ is said to be a weak solution to (\[eqa\]) if for all $t\geq 0$, $\sup_{[0,t]} m_2(P_s) <\infty$ and for all $\varphi \in C^2_b({{{{\mathbb{R}}}^d}})$, $$\label{LW} {{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}\varphi(x)P_t(dx) = {{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}\varphi(x)P_0(dx) + {{\displaystyle}\int _0^t }ds {{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}{{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}P_s(dx)P_s(dy) L\varphi(x,y),$$ where $L\varphi(x,y)= \frac{1}{2}\sum_{i,j=1}^d a_{ij}(x-y)\partial^2_{ij} \varphi(x) + \sum_{i=1}^d b_i(x-y)\partial_i\varphi(x)$. All the terms make sense due to our conditions on $a$, $b$, $P_t$. See Villani [@v2] for a similar formulation. Known results ------------- To our knowledge, the first (and only) paper proving a rate of convergence for a numerical scheme to solve (\[eqa\]) is that of Fontbona-Guérin-Méléard [@fgm]. Their method relies on a stochastic particle system. The aim of this paper is to go further in this direction. Let us thus recall briefly the method of [@fgm], relying on the probabilistic interpretation of (\[eqa\]) developped by Funaki [@f], Guérin [@g]. Let $\sigma:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$ and $b:{{{{\mathbb{R}}}^d}}\mapsto{{{{\mathbb{R}}}^d}}$ be Lipschitz continuous functions, and let $P_0\in{{\mathcal P}}_2$. A ${{{{\mathbb{R}}}^d}}$-valued process $(X_t)_{t\geq 0}$ is said to solve $E_0(P_0,\sigma,b)$ if ${\mathcal{ L}}(X_0)=P_0$, and if for all $t\geq 0$, setting $P_t={\mathcal{ L}}(X_t)$, $$\label{sdew} X_t=X_0+{{\displaystyle}\int _0^t }{{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}\sigma(X_s-x)W_P(dx,ds) + {{\displaystyle}\int _0^t }{{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}b(X_s-x)P_s(dx)ds.$$ Here $W_P(dx,dt)$ is a ${{{{\mathbb{R}}}^d}}$-valued white noise on $[0,\infty)\times {{{{\mathbb{R}}}^d}}$, independent of $X_0$, with independent coordinates, each of which having covariance measure $P_t(dx)dt$ (see Walsh [@w]). Existence and uniqueness in law for $E_0(P_0,\sigma,b)$ have been proved in Guérin [@g]. If furthermore $\sigma(x)\sigma^*(x)=a(x)$ and $b_i(x)=\sum_{j=1}^d \partial_ja_{ij}(x)$, then $(P_t)_{t\geq 0}$ is a weak solution to (\[eqa\]). The condition that $\sigma$ and $b$ are Lipschitz continuous is satisfied in the case of the Landau equation for Maxwell or pseudo-Maxwell molecules. [.2cm]{} In [@fgm], one considers an exchangeable stochastic particle system $(X^{i,n}_t)_{t\geq 0,i=1,\dots,n}$, satisfying a S.D.E. driven by $n^2$ Brownian motions. It is then shown that one may find a coupling between a solution $(X_t^1)_{t\geq 0}$ to $E_0(P_0,\sigma,b)$ and such a particle system in such a way that $${\mathbb{E}}\left[ \sup_{[0,T]} |X^{1,n}_t - X_t^1|^2\right] \leq C_{T} n^{-2/(d+4)},$$ under the condition that $P_0$ has a finite moment of order $d+5$. The proof relies on a clever coupling between the the white noise and $n$ Brownian motions. In particular, one has to assume that $P_t$ has a density for all $t>0$, in order to guarantee the uniqueness of some optimal couplings. 
Another approach ---------------- For $a:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$, $b:{{{{\mathbb{R}}}^d}}\mapsto{{{{\mathbb{R}}}^d}}$ satisfying $|a(x)|+|b(x)|\leq C (1+|x|^2)$ and $\mu \in {{\mathcal P}}_2({{{{\mathbb{R}}}^d}})$, we introduce $$a(x,\mu)= \int_{{{{\mathbb{R}}}^d}}a(x-y)\mu(dy), \quad b(x,\mu)= \int_{{{{\mathbb{R}}}^d}}b(x-y)\mu(dy).$$ For each $x \in {{{{\mathbb{R}}}^d}}$, $\mu\in {{\mathcal P}}_2$, $a(x,\mu)$ is a nonnegative symmetric matrix and thus admits an unique symmetric nonnegative square root ${a^\frac{1}{2}}(x,\mu) := [a(x,\mu)]^{\frac{1}{2}}$. [.2cm]{} Denote by ${{{\mathbf W}_d}}$ the law of the $d$-dimensional Brownian motion, consider $P_0\in{{\mathcal P}}_2$, and let $(X_0,B)\sim P_0 \otimes {{{\mathbf W}_d}}$. We say that a ${{{{\mathbb{R}}}^d}}$-valued process $(X_t)_{t\geq 0}$ solves $E_1(P_0,a,b)$ (or $E_1(P_0,a,b,X_0,B)$ when needed) if ${\mathbb{E}}[\sup_{[0,T]} |X_t|^2]<\infty$ for all $T$ and if for all $t\geq 0$, setting $P_t={\mathcal{ L}}(X_t)$, $$\label{sdeb} X_t=X_0+{{\displaystyle}\int _0^t }{a^\frac{1}{2}}(X_s,P_s)dB_s + {{\displaystyle}\int _0^t }b(X_s,P_s)ds.$$ This equation is nonlinear in the sense that its coefficients involve the law of the solution. Compared to (\[sdew\]), equation (\[sdeb\]) is simpler, since it is driven by a finite-dimensional Brownian motion, and since the nonlinearity does not involve the driving process. However, one may check that at least formally, solutions to (\[sdew\]) and (\[sdeb\]) have the same law. The link with (\[eqa\]) relies on a simple application of the Itô formula. \[lien\] Let $(X_t)_{t\geq 0}$ solve $E_1(P_0,a,b)$. Assume that $b_i=\sum_{j=1}^d \partial_j a_{ij}$, and that $|a(x)|+|b(x)|\leq C(1+|x|^2)$. Then $(P_t)_{t\geq 0}:=({\mathcal{ L}}(X_t))_{t\geq 0}$ is a weak solution to (\[eqa\]). [.2cm]{} The natural linearization of (\[sdeb\]) consists of considering $n$ particles $(X^{i,n}_t)_{t\geq 0, i=1,\dots,n}$ solving $$\label{sden} X_t^{i,n}=X_0^i+ {{\displaystyle}\int _0^t }{a^\frac{1}{2}}\left(X_s^{i,n}, \frac{1}{n}\sum_1^n\delta_{X^{k,n}_s}\right) dB^{i}_s + {{\displaystyle}\int _0^t }b\left(X_s^{i,n},\frac{1}{n}\sum_1^n\delta_{X^{k,n}_s} \right)ds.$$ Here $(X_0^i,B^i)_{i=1,\dots,n}$ are i.i.d. with law $P_0\otimes {{{\mathbf W}_d}}$. We thus use $n$ Brownian motions. When linearizing (\[sdew\]), one needs to use $n^2$ Brownian motions, since the white noise is infinite dimensional. However, one may check that the solution to (\[sden\]) and the particle system built in [@fgm] have the same distribution (provided $\sigma\sigma^*=a$ in [@fgm Equation (4)]). Main results ------------ The main result of this paper is the following. \[main\] Assume that $b$ is Lipschitz continuous, that $a$ is of class $C^2$, with all its derivatives of order $2$ bounded, and that $P_0\in{{\mathcal P}}_2$. \(i) There is strong existence and uniqueness for $E_1(P_0,a,b)$: for any $(X_0,B)\sim P_0\otimes{{{\mathbf W}_d}}$, there is an unique solution $(X_t)_{t\geq 0}$ to $E_1(P_0,a,b,X_0,B)$. \(ii) Let $(X_0^i,B^i)_{i=1,\dots,n}$ be i.i.d. with law $P_0\otimes{{{\mathbf W}_d}}$. There is an unique solution $(X^{i,n}_t)_{t\geq 0,i=1,\dots,n}$ to (\[sden\]). Assume that $P_0\in{{\mathcal P}}_4$, and consider the unique solution $(X^1_t)_{t\geq 0}$ to $E_1(P_0,a,b,X_0^1,B^1)$. 
There is a constant $C_T$ depending only on $d,P_0,a,b,T$ such that $$\label{obj1} {\mathbb{E}}\left[ \sup_{[0,T]} |X^{1,n}_t - X^1_t|^2\right] \leq C_{T} \int_0^T \min\left( n^{-1/2} , n^{-1} \sup_{x\in{{{{\mathbb{R}}}^d}}} (1+|a(x,P_t)^{-1}|) \right) dt \leq C_T n^{-1/2}.$$ In the general case, we thus prove a rate of convergence in $n^{-1/2}$, which is faster than $n^{-2/(d+4)}$. If we have some information on the nondegeneracy of $a(x,P_t)$, then ${a^\frac{1}{2}}(x,\mu)$ is smooth around $\mu \simeq P_s$, and we can get a better rate of convergence. Assume for example that $a$ is uniformly elliptic (which is unfortunately not the case of (\[aLandau\]), since $a(x)x=0$ for all $x\in {{{{\mathbb{R}}}^d}}$). Then $\sup_x |a(x,P_t)^{-1}|\leq \sup_y |a(y)^{-1}| <\infty$, and we get a convergence rate in $n^{-1}$. In the case of the Landau equation for true Maxwell molecules, we obtain the following result. \[cormax\] Consider the Landau equation for Maxwell molecules, where $a$ is given by (\[aLandau\]) with $\kappa\equiv 1$ and $b_i(x)=\sum_{j=1}^d \partial_ja_{ij}(x)=-(d-1)x_i$. Then $a,b$ satisfy the assumptions of Theorem \[main\]. Let $P_0 \in {{\mathcal P}}_4$, and adopt the notation of Theorem \[main\]-(ii). \(i) We have ${\mathbb{E}}[ \sup_{[0,T]} |X^{1,n}_t - X^1_t|^2] \leq C_{T} n^{-1} (1+\log n)$. \(ii) Set $x_0=\int x P_0(dx)$. If $a(x_0,P_0)$ is invertible, then ${\mathbb{E}}[ \sup_{[0,T]} |X^{1,n}_t - X^1_t|^2] \leq C_{T} n^{-1}$. We finally consider the case of pseudo-Maxwell molecules. \[corpm\] Consider the Landau equation for pseudo-Maxwell molecules, where $a$ is given by (\[aLandau\]) with $\kappa \in C^2({{\mathbb{R}}}_+)$, and $b_i(x)=\sum_{j=1}^d \partial_ja_{ij}(x)=-(d-1)\kappa(|x|^2)x_i$. Assume that $\kappa'$ has a bounded support. Then $a,b$ satisfy the assumptions of Theorem \[main\]-(ii). Assume furthermore that $P_0 \in {{\mathcal P}}_4$ has a density with a finite entropy $\int P_0(x)\log P_0(x) dx <\infty$, and that $\kappa$ is bounded below by a positive constant. With the notation of Theorem \[main\], we have ${\mathbb{E}}[ \sup_{[0,T]} |X^{1,n}_t - X^1_t|^2] \leq C_{T} n^{-1}$. Time discretization ------------------- To get a simulable particle system, it remains to discretize time in (\[sden\]). Let $N \geq 1$, and consider $\rho_N(s)= \sum_{k\geq 0} \frac{k}{N}{{\bf 1}}_{s\in[k/N,(k+1)/N)}$. Consider the simulable particle system $(X_t^{i,n,N})_{t\geq 0, i=1,\dots,n}$ defined by $$\label{sdenN} X_t^{i,n,N}=X_0^i+ {{\displaystyle}\int _0^t }{a^\frac{1}{2}}\left(X_{{\rho_N(s)}}^{i,n,N}, \frac{1}{n}\sum_1^n\delta_{X^{k,n,N}_{{\rho_N(s)}}}\right) dB^{i}_s + {{\displaystyle}\int _0^t }b\left(X_{{\rho_N(s)}}^{i,n,N},\frac{1}{n}\sum_1^n\delta_{X^{k,n,N}_{{\rho_N(s)}}} \right)ds.$$ \[totaldisc\] Assume that $b$ is Lipschitz continuous, that $a$ is of class $C^2$, with all its derivatives of order $2$ bounded, and that $P_0\in{{\mathcal P}}_2$. Let $(X_0^i,B^i)_{i=1,\dots,n}$ be i.i.d. with law $P_0\otimes {{{\mathbf W}_d}}$. Consider the unique solutions $(X^{i,n}_t)_{t\geq 0,i=1,\dots,n}$ to (\[sden\]) and $(X^{i,n,N}_t)_{t\geq 0,i=1,\dots,n}$ to (\[sdenN\]). 
Then there is a constant $C_T$ depending only on $d,P_0,a,b,T$ such that $$\label{obj4} {\mathbb{E}}\left[ \sup_{[0,T]}|X^{1,n}_t- X^{1,n,N}_t |^2 \right] \leq C_T N^{-1}.$$ Conclusion ---------- Choosing for example $a,b$, and $P_0$ as in Corollary \[cormax\]-(ii) or as in Corollary \[corpm\], denoting by $(P_t)_{t\geq 0}=({\mathcal{ L}}(X^1_t))_{t\geq 0}$ the weak solution to the corresponding Landau equation, we obtain for any $\varphi \in C^1_b$, by exchangeability, $$\begin{aligned} \sup_{[0,T]}{\mathbb{E}}\left[\left|\frac{1}{n}\sum_1^n \varphi(X^{i,n,N}_t) - {{\displaystyle}\int_{{{{\mathbb{R}}}^d}}}\varphi(x)P_t(dx) \right| \right]\leq C_T ||\varphi'||_\infty \sqrt{n^{-1}+N^{-1}}.\end{aligned}$$ Thus if one simulates the discretized particle system (\[sdenN\]), and if one computes $\frac{1}{n}\sum_1^n \varphi(X^{i,n,N}_t) $, we get an approximation of $\int \varphi(x)P_t(dx)$, with a reasonnable error. Plan of the paper ----------------- In Section \[pr\], we give the proofs of Theorems \[main\] and \[totaldisc\]. Section \[ell\] is devoted to the proofs of Corollaries \[cormax\] and \[corpm\]. In Section \[soft\], we briefly deal with the case of soft potentials, but our theoritical results do not extend well. Numerical results are given in Section \[num\]. Finally an appendix lies at the end of the paper. General proofs {#pr} ============== In the whole section, we assume that $P_0 \in {{\mathcal P}}_2$, that $a:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$ is of class $C^2$, with bounded derivatives of order two, and that $b:{{{{\mathbb{R}}}^d}}\mapsto {{{{\mathbb{R}}}^d}}$ is Lipschitz continuous. We denote by $C$ (resp. $C_T$, $C_{T,p}$) a constant which depend only on $a,b,d,P_0$ (resp. additionally on $T$, on $T,p$) and whose value may change from line to line. [.2cm]{} For $\mu,\nu \in {{\mathcal P}}_2$, we set ${{{\mathcal W}}}^2_2(\mu,\nu) = \min \left\{ {\mathbb{E}}\left[|X-Y|^2 \right] ; \; {\mathcal{ L}}(X)=\mu,{\mathcal{ L}}(Y)=\nu\right\}$. See Villani [@v4] for many informations on the Wasserstein distance ${{{\mathcal W}}}_2$. Preliminaries ------------- Our results are mainly based on the two following Lemmas. \[ll\] For all $\mu,\nu \in {{\mathcal P}}_2$, all $x,y \in {{{{\mathbb{R}}}^d}}$, $$\begin{aligned} &|{a^\frac{1}{2}}(x,\mu)-{a^\frac{1}{2}}(y,\nu)|^2 + |b(x,\mu)-b(y,\nu)|^2 \leq C(|x-y|^2+{{{\mathcal W}}}_2^2(\mu,\nu)), {\nonumber \\}&|{a^\frac{1}{2}}(x,\mu)|^2+ |b(x,\mu)|^2 \leq C(1+m_2(\mu)+|x|^2).\end{aligned}$$ [*Step 1.*]{} For $\mu\in {{\mathcal P}}_2$ fixed, we consider the map $A:{{{{\mathbb{R}}}^d}}\mapsto S_d^+$ defined by $A(x)=a(x,\mu)$. Then $D^2 A(x)= \int_{{{{{\mathbb{R}}}^d}}} D^2 a(x-y)\mu(dy)$, is clearly uniformly bounded. Lemma \[a1\] ensures us that $||D (A{{^\frac{1}{2}}})||_\infty $ is uniformly bounded, so that $|{a^\frac{1}{2}}(x,\mu) - {a^\frac{1}{2}}(y,\mu) |=|A{{^\frac{1}{2}}}(x)-A{{^\frac{1}{2}}}(y)| \leq C|x-y|$. [.2cm]{} [*Step 2.*]{} We now fix $x\in {{{{\mathbb{R}}}^d}}$, and consider $\mu,\nu \in {{\mathcal P}}_2$. We introduce a couple $(X,Y)$ of random variables such that ${\mathcal{ L}}(X)=\mu$, ${\mathcal{ L}}(Y)=\nu$, and ${{{\mathcal W}}}_2^2(\mu,\nu)= {\mathbb{E}}[|X-Y|^2]$. We define $A:{{\mathbb{R}}}\mapsto S_d^+$ by $A(t)={\mathbb{E}}\left[a(x-[tX+(1-t)Y]) \right]$. Then $A(0)={\mathbb{E}}[a(x-Y)]= a(x,\nu)$ while $A(1)={\mathbb{E}}[a(x-X)]=a(x,\mu)$. 
Furthermore, $$|D^2A(t)|= |{\mathbb{E}}[|X-Y|^2 D^2a(x-[tX+(1-t)Y]) ]| \leq ||D^2a||_\infty {\mathbb{E}}[|X-Y|^2]= C {{{\mathcal W}}}_2^2(\mu,\nu).$$ Lemma \[a1\] ensures us that $||(A{{^\frac{1}{2}}})'||_\infty \leq C {{{\mathcal W}}}_2(\mu,\nu)$, so that $|{a^\frac{1}{2}}(x,\mu) - {a^\frac{1}{2}}(x,\nu) |= |A{{^\frac{1}{2}}}(1)-A{{^\frac{1}{2}}}(0)| \leq C {{{\mathcal W}}}_2 (\mu,\nu)$. [.2cm]{} [*Step 3.*]{} The growth estimate (for $a$) follows from the Lipschitz estimate, since $|{a^\frac{1}{2}}(0,\delta_0)|^2=|{a^\frac{1}{2}}(0)|^2 <\infty$, and since ${{{\mathcal W}}}_2^2(\mu,\delta_0)=m_2(\mu)$. [.2cm]{} [*Step 4.*]{} The case of $b$ is much simpler. For $\mu,\nu\in {{\mathcal P}}_2$, we introduce $X,Y$ as in Step 2. Then $|b(x,\mu)-b(y,\nu)|^2=|{\mathbb{E}}[b(x-X)-b(y-Y)]|^2\leq C(|x-y|^2+{\mathbb{E}}[|X-Y|]^2) \leq C(|x-y|^2+{{{\mathcal W}}}_2^2(\mu,\nu))$. The growth estimate follows from the Lipschitz estimate, since $|b(0,\delta_0)|^2=|b(0)|^2 <\infty$. \[nesti\] Let $Y_i$ be i.i.d. ${{{{\mathbb{R}}}^d}}$-valued random variables with common law $\mu\in{{\mathcal P}}_4$. Then $${\mathbb{E}}\left[ \left|a(Y_1,\mu)-a\left(Y_1,\frac{1}{n}\sum_1^n\delta_{Y_i}\right) \right|^2 +\left|b(Y_1,\mu)-b\left(Y_1,\frac{1}{n}\sum_1^n\delta_{Y_i}\right) \right|^2 \right] \leq C \frac{1+m_4(\mu)}{n}.$$ We denote by ${\mathbb{E}}_1$ the expectation concerning only $Y_1$, and by ${\mathbb{E}}_{2,n}$ the expectation concerning only $Y_2,\dots,Y_n$. We observe that for all $i=2,\dots,n$, we have $a(Y_1,\mu)={\mathbb{E}}_{2,n}[a(Y_1-Y_i)]$, whence $a(Y_1,\mu)={\mathbb{E}}_{2,n}[\frac{1}{n-1}\sum_2^n a(Y_1-Y_i)]$. We also have $a(Y_1,\frac{1}{n}\sum_1^n\delta_{Y_i})=\frac{1}{n} \sum_1^n a(Y_1-Y_i)$. As a consequence, $$\begin{aligned} &{\mathbb{E}}\left[\left|a\left(Y_1,\frac{1}{n}\sum_1^n\delta_{Y_i}\right) -a(Y_1,\mu) \right|^2 \right] \leq 2{\mathbb{E}}\left[\left|\frac{1}{n}\sum_1^n a(Y_1-Y_i)- \frac{1}{n-1}\sum_2^{n} a(Y_1-Y_i) \right|^2 \right]{\nonumber \\}&\hskip1.5cm+2{\mathbb{E}}_1\left\{ {\mathbb{E}}_{2,n}\left[\left|\frac{1}{n-1}\sum_2^n a(Y_1-Y_i)- {\mathbb{E}}_{2,n}\left[\frac{1}{n-1}\sum_2^{n} a(Y_1-Y_i)\right] \right|^2 \right] \right\}=:2I_{n}+2J_{n}.\end{aligned}$$ An immediate computation, using that $|a(x)|\leq C(1+|x|^2)$, shows that $I_n \leq C(1+m_4(\mu))/n^2$. On the other hand, since the random variables $Y_1-Y_i$ are i.i.d. under ${\mathbb{E}}_{2,n}$, $$\begin{aligned} J_n \leq& {\mathbb{E}}_1 \left\{ \sum_{k,l=1}^d Var_{2,n} \left(\frac{1}{n-1}\sum_2^n a_{kl}(Y_1-Y_i)\right)\right\} \leq \frac{1}{n-1} {\mathbb{E}}_1 \left\{ \sum_{k,l=1}^d Var_{2,n} a_{kl}(Y_1-Y_2)\right\} {\nonumber \\}\leq & \frac{C}{n-1} {\mathbb{E}}_1\left\{ {\mathbb{E}}_{2,n}\left[|a(Y_1-Y_2)|^2\right]\right\} \leq \frac{C}{n} {\mathbb{E}}\left[|a(Y_1-Y_2)|^2\right] \leq \frac{C}{n}(1+m_4(\mu)),\end{aligned}$$ again since $|a(x)|\leq C(1+|x|^2)$. The same computation holds for $b$, replacing everywhere $m_4(\mu)$ by $m_2(\mu)$, since $|b(x)|\leq C(1+|x|)$. Convergence proofs ------------------ We start this subsection with some moment estimates. \[mom\] (i) Let $(X_t)_{t\geq 0}$ solve $E_1(P_0,a,b)$. Assume that $m_p(P_0)<\infty$ for some $p\geq 2$. Then ${\mathbb{E}}[\sup_{[0,T]}|X_t|^p]<\infty$ for all $T>0$. \(ii) Let $(X^{i,n}_t)_{t\geq 0, i=1,\dots,n}$ solve (\[sden\]). For all $0\leq s \leq t\leq T$, ${\mathbb{E}}[|X_t^{1,n}-X_s^{1,n}|^2] \leq C_T |t-s|$. [*Point (i).*]{} Set $P_t={\mathcal{ L}}(X_t)$. 
Using the Burkholder-Davies-Gundy inequality for the Brownian part, and the Hölder inequality for the drift part, we obtain, for all $0\leq t \leq T$, $$\begin{aligned} {\mathbb{E}}\left[\sup_{[0,t]}|X_s|^p\right]\leq C_p {\mathbb{E}}[|X_0|^p] + C_p {{\displaystyle}\int _0^t }ds {\mathbb{E}}\left[|{a^\frac{1}{2}}(X_s,P_s)|^p \right] + C_{p,T} {{\displaystyle}\int _0^t }ds {\mathbb{E}}\left[|b(X_s,P_s)|^p \right].\end{aligned}$$ But Lemma \[ll\] implies that ${\mathbb{E}}[|{a^\frac{1}{2}}(X_s,P_s)|^p+|b(X_s,P_s)|^p ] \leq C_p {\mathbb{E}}[1+|X_s|^p+m_2(P_s)^{p/2}]$. Furthermore, since $P_s={\mathcal{ L}}(X_s)$ and $p\geq 2$, we deduce that $m_2(P_s)^{p/2}\leq {\mathbb{E}}[|X_s|^p]$. As a conclusion, ${\mathbb{E}}[\sup_{[0,t]}|X_s|^p]\leq C_p {\mathbb{E}}[|X_0|^p] + C_{p,T}\int_0^t ds {\mathbb{E}}[1+|X_s|^p]$, whence the result by the Gronwall Lemma. [.2cm]{} [*Point (ii).*]{} Using the Cauchy-Scharz and Doob inequalities, we see that for $0\leq s \leq t \leq T$, $$\begin{aligned} \label{cc} E\left[ |X^{1,n}_t-X^{1,n}_s|^2\right] \leq& C \int_s^t du {\mathbb{E}}\left[|{a^\frac{1}{2}}(X_u^{1,n},\frac{1}{n}\sum_{1}^n \delta_{X^{i,n}_u})|^2 \right] + C_{T} \int_s^t du {\mathbb{E}}\left[|b(X_u^{1,n},\frac{1}{n}\sum_{1}^n \delta_{X^{i,n}_u})|^2 \right] {\nonumber \\}\leq& C_T \int_s^t du {\mathbb{E}}\left[ 1+ |X^{1,n}_u|^2 +m_2\left(\frac{1}{n}\sum_{1}^n \delta_{X^{i,n}_u} \right)\right] \leq C_T \int_s^t du {\mathbb{E}}\left[ 1+ |X^{1,n}_u|^2 \right].\end{aligned}$$ We used Lemma \[ll\] and that ${\mathbb{E}}[m_2 (\frac{1}{n}\sum_{1}^n \delta_{X^{i,n}_u})]=\frac{1}{n}\sum_{1}^n {\mathbb{E}}[|X^{i,n}_u|^2 ]={\mathbb{E}}[|X^{1,n}_u|^2 ] $ by exchangeability. Applying (\[cc\]) with $s=0$, we get $E[ |X^{1,n}_t|^2] \leq C {\mathbb{E}}[|X_0^1|^2] + C_T\int_0^t du [1+{\mathbb{E}}[|X^{1,n}_u|^2]du$. The Gronwall Lemma allows us to conclude that $\sup_{[0,T]}E[|X^{1,n}_t|^2] \leq C_T$. Applying a second time (\[cc\]), we deduce that $E[ |X^{1,n}_t-X^{1,n}_s|^2] \leq C_T |t-s|$. [*of Theorem \[main\].*]{} We consider $P_0\in{{\mathcal P}}_2$ fixed. [.2cm]{} [*Point (i).*]{} Let $(X_0,B)\sim P_0\otimes{{{\mathbf W}_d}}$. [*Uniqueness.*]{} Assume that we have two solutions $X,Y$ to $E_1(P_0,a,b,X_0,B)$, and set $P_t={\mathcal{ L}}(X_t)$, $Q_t={\mathcal{ L}}(Y_t)$. Using the Cauchy-Schwarz and Doob inequalities, we obtain, for $0\leq t\leq T$, $$\begin{aligned} \label{tech} {\mathbb{E}}\left[\sup_{[0,t]} |X_s-Y_s|^2\right]\leq& C_T {{\displaystyle}\int _0^t }{\mathbb{E}}[|{a^\frac{1}{2}}(X_s,P_s) - {a^\frac{1}{2}}(Y_s,Q_s) |^2 + |b(X_s,P_s) - b(Y_s,Q_s) |^2]ds {\nonumber \\}\leq& C_T {{\displaystyle}\int _0^t }{\mathbb{E}}\left[ |X_s-Y_s|^2 + {{{\mathcal W}}}_2^2(P_s,Q_s) \right]ds \leq C_T {{\displaystyle}\int _0^t }{\mathbb{E}}\left[ |X_s-Y_s|^2\right]ds.\end{aligned}$$ We used Lemma \[ll\] and the obvious inequality ${{{\mathcal W}}}_2^2(P_s,Q_s) \leq {\mathbb{E}}[|X_s-Y_s|^2]$. The Gronwall Lemma allows us to conclude that $X=Y$. [*Existence.*]{} We consider the following Picard iteration: set $X_t^0=X_0$, and define, for $n\geq 0$, $t\geq 0$, $$\label{pic} X^{n+1}_t=X_0+{{\displaystyle}\int _0^t }{a^\frac{1}{2}}(X^n_s,{\mathcal{ L}}(X^n_s))dB_s + {{\displaystyle}\int _0^t }b(X_s^n, {\mathcal{ L}}(X^n_s))ds.$$ We get as in (\[tech\]), for $0\leq t \leq T$, ${\mathbb{E}}[\sup_{[0,t]} |X_s^{n+1}-X_s^{n}|^2]\leq C_T \int_0^t {\mathbb{E}}[ |X_s^n-X_s^{n-1}|^2 ]ds$. 
Thus there classically exists $(X_t)_{t\geq 0}$ such that $\lim_n {\mathbb{E}}[\sup_{[0,T]} |X_t^{n}-X_t|^2]=0$ for all $T$, which implies that $\lim_n\sup_{[0,T]}{{{\mathcal W}}}_2^2({\mathcal{ L}}(X^n_t),{\mathcal{ L}}(X_t)) =0$. Passing to the limit in (\[pic\]), we see that $X$ solves $E_1(P_0,a,b,X_0,B)$. [.2cm]{} [*Point (ii).*]{} First of all, the strong existence and uniqueness for (\[sden\]) follows from standard theory (see e.g. Stroock-Varadhan [@sv]), since for each $i$, the maps $(x_1,\dots,x_n) \mapsto b(x^i,\frac{1}{n}\sum_1^n\delta_{x^k})$ and $(x_1,\dots,x_n) \mapsto {a^\frac{1}{2}}(x^i,\frac{1}{n}\sum_1^n\delta_{x^k})$ are Lipschitz continuous (use Lemmas \[ll\] and \[a2\]). We now consider $(X_0^i,B^i)$ i.i.d. with law $P_0\otimes {{{\mathbf W}_d}}$, the solution $(X^{i,n}_t)_{t\geq 0,i=1,\dots,n}$ to (\[sden\]), and for each $i=1,\dots,n$, the unique solution $(X^i_t)_{t\geq 0}$ to $E_1(P_0,a,b,X_0^i,B^i)$. For each $t\geq 0$, let $P_t={\mathcal{ L}}(X^1_t)=\dots={\mathcal{ L}}(X^n_t)$. Due to the Cauchy-Schwarz and Doob inequalities, for $0\leq t \leq T$, $$\begin{aligned} &{\mathbb{E}}\left[\sup_{[0,t]}|X^{1,n}_s-X^1_s|^2\right] \leq C_T {{\displaystyle}\int _0^t }ds {\mathbb{E}}\Big[|{a^\frac{1}{2}}\left(X^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{X^{i,n}_s}\right) - {a^\frac{1}{2}}(X^{1}_s,P_s)|^2{\nonumber \\}&\hskip6cm + |b\left(X^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{X^{i,n}_s}\right) - b(X^{1}_s,P_s)|^2 \Big] {\nonumber \\}& \leq C_T {{\displaystyle}\int _0^t }ds \Big( {\mathbb{E}}\Big[|{a^\frac{1}{2}}\left(X^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{X^{i,n}_s}\right) - {a^\frac{1}{2}}\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s} \right)|^2 {\nonumber \\}&\hskip4cm + |b\left(X^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{X^{i,n}_s}\right) - b\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s} \right)|^2 \Big] + \Delta_n(s) \Big),\end{aligned}$$ where $$\begin{aligned} \label{deltan} \Delta_n(s):=& {\mathbb{E}}\left[\left|{a^\frac{1}{2}}\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s}\right) - {a^\frac{1}{2}}\left(X^{1}_s,P_s\right)\right|^2 + \left|b\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s}\right) - b\left(X^{1}_s,P_s\right)\right|^2\right]{\nonumber \\}=:& \Delta_n^1(s)+\Delta_n^2(s).\end{aligned}$$ Using Lemmas \[ll\] and \[a2\], we obtain, for $0\leq t \leq T$, $$\begin{aligned} {\mathbb{E}}\left[\sup_{[0,t]}|X^{1,n}_s-X^1_s|^2\right] &\leq C_T \int_0^t ds \left( {\mathbb{E}}\left[|X^{1,n}_s-X^1_s|^2+ {{{\mathcal W}}}^2_2\left(\frac{1}{n}\sum_1^n \delta_{X^{i,n}_s},\frac{1}{n}\sum_1^n\delta_{X^{i}_s}\right)\right] +\Delta_n(s) \right){\nonumber \\}&\leq C_T \int_0^t ds {\mathbb{E}}\left[|X^{1,n}_s-X^1_s|^2+ \frac{1}{n}\sum_1^n |X^{i,n}_s-X^{i}_s|^2\right] + C_T \int_0^t ds \Delta_n(s) {\nonumber \\}& \leq C_T {{\displaystyle}\int _0^t }ds {\mathbb{E}}\left[|X^{1,n}_s-X^1_s|^2\right] + C_T {{\displaystyle}\int _0^t }ds \Delta_n(s)\end{aligned}$$ by exchangeability. The Gronwall Lemma ensures us that $$\begin{aligned} \label{gron} {\mathbb{E}}\left[\sup_{[0,T]}|X^{1,n}_s-X^1_s|^2\right] \leq C_T \int_0^T ds \Delta_n(s).\end{aligned}$$ It remains to estimate $\Delta_n(s)$. The random variables $X^1_s,\dots,X^n_t$ are i.i.d. with law $P_s$. Thus Lemma \[nesti\] shows that $\Delta_n^2(s) \leq C (1+m_4(P_s))/n \leq C_T/n$ for $s\leq T$, due to Lemma \[mom\]-(i) and since $P_0\in {{\mathcal P}}_4$ by assumption. 
Next, we use Lemma \[a1bis\]-(i), the Cauchy-Schwarz inequality, and then Lemma \[nesti\]: for $s\leq T$, $$\begin{aligned} \Delta_n^1(s) \leq& {\mathbb{E}}\left[\left|a\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s}\right) - a\left(X^{1}_s,{\mathcal{ L}}(X^1_s)\right)\right|\right] \leq C\left(\frac{1+m_4(P_s)}{n} \right){{^\frac{1}{2}}}\leq \frac{C_T}{\sqrt n}.\end{aligned}$$ But one may also use Lemma \[a1bis\]-(ii) instead of Lemma \[a1bis\]-(i), and this gives, for $s\leq T$, $$\begin{aligned} \Delta_n^1(s) \leq& {\mathbb{E}}\left[|a(X^1_s,P_s)^{-1}| \left|a\left(X^{1}_s,\frac{1}{n}\sum_1^n \delta_{X^{i}_s}\right) - a\left(X^{1}_s,{\mathcal{ L}}(X^1_s)\right)\right|^2\right]{\nonumber \\}\leq& C \sup_x|a(x,P_s)^{-1}| \left(\frac{1+m_4(P_s)}{n} \right) \leq \frac{C_T}{n}\sup_x|a(x,P_s)^{-1}|.\end{aligned}$$ Thus $\Delta_n(s) \leq C_T n^{-1}+ C_T \min(n^{-1/2}, n^{-1}\sup_x|a(x,P_s)^{-1}|)$. Inserting this into (\[gron\]), we obtain (\[obj1\]). [.2cm]{} [*of Theorem \[totaldisc\].*]{} Using Lemmas \[ll\] and \[a2\], we get as usual (see (\[tech\])), by exchangeability, $$\begin{aligned} {\mathbb{E}}\left[\sup_{[0,t]}|X^{1,n}_s-X^{1,n,N}_s|^2 \right] &\leq C_T{{\displaystyle}\int _0^t }{\mathbb{E}}\left[|X^{1,n}_s-X^{1,n,N}_{{\rho_N(s)}}|^2 + {{{\mathcal W}}}_2^2\left( \frac{1}{n}\sum_1^n\delta_{X^{i,n}_s},\frac{1}{n}\sum_1^n\delta_{X^{i,n,N}_{{\rho_N(s)}}} \right)\right] ds {\nonumber \\}&\leq C_T{{\displaystyle}\int _0^t }{\mathbb{E}}\left[|X^{1,n}_s-X^{1,n,N}_{{\rho_N(s)}}|^2 + \frac{1}{n}\sum_1^n|X^{i,n}_s-X^{i,n,N}_{{\rho_N(s)}}|^2 \right] ds{\nonumber \\}&\leq C_T{{\displaystyle}\int _0^t }{\mathbb{E}}\left[|X^{1,n}_s-X^{1,n,N}_{{\rho_N(s)}}|^2\right] ds{\nonumber \\}&\leq C_T{{\displaystyle}\int _0^t }{\mathbb{E}}\left[|X^{1,n}_s-X^{1,n,N}_s|^2\right] ds + C_T{{\displaystyle}\int _0^t }{\mathbb{E}}\left[|X^{1,n}_s-X^{1,n}_{{\rho_N(s)}}|^2\right] ds.\end{aligned}$$ Using finally Lemma \[mom\]-(ii), and since $|s-{{\rho_N(s)}}|\leq 1/N$, we deduce that ${\mathbb{E}}[|X^{1,n}_s-X^{1,n}_{{\rho_N(s)}}|^2] \leq C_T/N$. The Gronwall Lemma allows us to conclude. Ellipticity estimates {#ell} ===================== We start with the [.2cm]{} [*of Corollary \[cormax\].*]{} Recall here that $a$ is given by (\[aLandau\]) with $\kappa\equiv 1$ and $b(z)=-(d-1)z$. Thus $b$ is Lipschitz continuous, and the second derivatives of $a$ are clearly bounded. We consider a weak solution $(P_t)_{t\geq 0}$ to (\[eqa\]). [.2cm]{} Simple computations using (\[LW\]) (with $\varphi(x)=x_i$, $\varphi(x)=|x|^2$) show that $\partial_t \int x P_t(dx)=0$ and $\partial_t m_2(P_t)=0$. We classically may assume without loss of generality that $\int x P_t(dx)=\int x P_0(dx)=0$. We also assume that $m_2(P_t)=m_2(P_0)>0$ (else $X_t^1=X^{1,n}_t=0$ a.s.). [.2cm]{} We now bound from below $(a(x,P_t)y,y)$ for $x,y \in {{{{\mathbb{R}}}^d}}$, $t\geq 0$. A simple computation, using that $\int x P_t(dx)=0$, shows that $a(x,P_t)=a(x)+a(0,P_t)$. 
Thus for all $t\geq 0$, $x,y\in {{{{\mathbb{R}}}^d}}$, setting $m_2^{ij}(P_t)=\int x_ix_j P_t(dx)$ $$(a(x,P_t)y,y)\geq (a(0,P_t)y,y)=\sum_{i,j}y_iy_j [m_2(P_t)\delta_{ij} -m_2^{ij}(P_t)]= m_2(P_0)|y|^2 -\sum_{i,j}y_iy_jm_2^{ij}(P_t).$$ Using (\[LW\]) with $\varphi(x)=x_ix_j$, we deduce that $$\partial_t m_2^{ij}(P_t)= 2m_2(P_t)\delta_{ij} -2d m_2^{ij}(P_t)= 2m_2(P_0)\delta_{ij} -2d m_2^{ij}(P_t).$$ We thus obtain $$\begin{aligned} \partial_t (a(0,P_t)y,y)=& - \sum_{i,j}y_iy_j \partial_t m_2^{ij}(P_t) = 2 d \sum_{i,j}y_iy_j m_2^{ij}(P_t) - 2m_2(P_0)|y|^2{\nonumber \\}=& 2(d-1)m_2(P_0)|y|^2 - 2d (a(0,P_t)y,y).\end{aligned}$$ Set $\lambda_0=\inf\{(a(0,P_0)y,y), |y|=1\}\geq 0$ and $\lambda_1=\frac{d-1}{d}m_2(P_0)>0$. For all $t\geq 0$, all $x,y\in {{{{\mathbb{R}}}^d}}$, $$\begin{aligned} \label{solex} (a(x,P_t)y,y) \geq & (a(0,P_t)y,y)= (a(0,P_0)y,y)e^{-2dt}+\lambda_1 |y|^2(1-e^{-2dt}) {\nonumber \\}\geq& |y|^2 [\lambda_0 e^{-2dt} + \lambda_1(1-e^{-2dt})].\end{aligned}$$ We now prove point (i). We deduce from (\[solex\]) that $$\begin{aligned} (a(x,P_t)y,y) \geq \lambda_1(1-e^{-2dt})|y|^2.\end{aligned}$$ As a consequence, $|a(x,P_t)^{-1}| \leq 1/[\lambda_1 (1-e^{-2dt})]\leq C/t+C$. Inserting this into (\[obj1\]), we get ${\mathbb{E}}[\sup_{[0,T]}|X^1_t-X^{1,n}_t|^2]\leq C_T\int_0^{T} \min(n^{-1/2},n^{-1}+(nt)^{-1}) dt \leq C_T n^{-1} (1+\log n)$. [.2cm]{} To get (ii), we use (\[solex\]) and that by assumption, $\lambda_0>0$. We deduce that $$(a(x,P_t)y,y) \geq |y|^2 ( \lambda_0 e^{-2dt}+\lambda_1(1-e^{-2dt})) \geq \min(\lambda_0,\lambda_1) |y|^2/2.$$ As a consequence, $|a(x,P_t)^{-1}|\leq 2/ \min(\lambda_0,\lambda_1)$. Inserting this into (\[obj1\]), we get ${\mathbb{E}}[\sup_{[0,T]}|X^1_t-X^{1,n}_t|^2]\leq C_T\int_0^{T} \min(n^{-1/2},n^{-1}) dt \leq C_T n^{-1}$. [.2cm]{} It remains to give the [.2cm]{} [*of Corollary \[corpm\].*]{} Recall here that $a_{ij}(x)=\kappa(|x|^2)(|x|^2\delta_{ij}-x_ix_j)$ and that $b(x)=-(d-1)\kappa(|x|^2)x$, that $\kappa$ is $C^2$ and that $\kappa'$ has a bounded support, so that $a$ has bounded derivatives of order $2$, and $b$ is Lipschitz continuous. We consider a weak solution $(P_t)_{t\geq 0}$ to (\[eqa\]). As previously, we classically have $m_2(P_t)=m_2(P_0)$. Furthermore, it is again classical and widely used that the entropy of $P_t$ is non-increasing, so that $\int P_t(x)\log P_t(x) dx \leq \int P_0(x)\log P_0(x) dx=C<\infty$ for all times, see Villani [@v; @v2; @v3]. If we prove that there is $\lambda_0>0$ such that for all $t\geq 0$, $x,y\in {{{{\mathbb{R}}}^d}}$, $(a(x,P_t)y,y) \geq \lambda_0 |y|^2$, then we deduce that $|a(x,P_t)^{-1}|$ is uniformly bounded, so that the Corollary follows from (\[obj1\]). Observe that setting $\alpha_{ij}(x)=|x|^2\delta_{ij}-x_ix_j$, we have $(a(x,P_t)y,y) \geq \lambda_1 (\alpha(x,P_t)y,y)$, where $\lambda_1>0$ is a lowerbound of $\kappa$. But it is shown in Desvillettes-Villani [@dv Proposition 4] that for $E_0 \in {{\mathbb{R}}}_+, H_0\in {{\mathbb{R}}}_+$, there is a constant $c_{E_0,H_0}>0$ such that for any probability density function $f$ on ${{{{\mathbb{R}}}^d}}$ such that $m_2(f)\leq E_0$ and $\int f(x)\log f(x) dx \leq H_0$, $(\alpha(x,f)y,y) \geq c_{E_0,H_0} |y|^2$. Actually, they consider the case where $\alpha_{ij}(x)=|x|^\gamma(|x|^2\delta_{ij}-x_ix_j)$ for some $\gamma>0$, but one can check that their proof works without modification when $\gamma=0$. We finally obtain $(a(x,P_t)y,y) \geq \lambda_1 c_{E_0,H_0} |y|^2$ for all $t\geq 0$, $x,y\in{{{{\mathbb{R}}}^d}}$, which concludes the proof. 
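Whether the assumption of Corollary \[cormax\]-(ii) holds, and hence whether the $n^{-1}$ rate applies, can be estimated directly from the initial condition: one only needs the smallest eigenvalue of $a(x_0,P_0)$, together with $\lambda_1=\frac{d-1}{d}m_2(P_0)$ from the proof above. A small Monte Carlo sketch (our own naming, not part of the proofs):

```python
# Estimate lambda_0 = smallest eigenvalue of a(x0, P0) and lambda_1 = (d-1)/d * m_2(P0)
# for Maxwell molecules, a_ij(z) = |z|^2 delta_ij - z_i z_j, from a sample of P0.
import numpy as np

def a_matrix(z):
    z = np.asarray(z, dtype=float)
    return np.dot(z, z) * np.eye(len(z)) - np.outer(z, z)

def ellipticity_constants(samples):
    """samples: (m, d) array of i.i.d. draws from P0; the sample is recentred,
    as in the proof of Corollary [cormax]."""
    m, d = samples.shape
    x0 = samples.mean(axis=0)
    a0 = sum(a_matrix(x0 - y) for y in samples) / m     # Monte Carlo estimate of a(x0, P0)
    lam0 = np.linalg.eigvalsh(a0).min()
    lam1 = (d - 1) / d * np.mean(np.sum((samples - x0) ** 2, axis=1))
    return lam0, lam1

rng = np.random.default_rng(0)
print(ellipticity_constants(rng.normal(size=(5000, 2))))  # both close to 1 for a standard Gaussian P0
```

If the first value is bounded away from zero, Corollary \[cormax\]-(ii) gives the $n^{-1}$ rate; otherwise only the $n^{-1}(1+\log n)$ bound of point (i) is guaranteed.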
On soft potentials {#soft} ================== We consider in this section the spatially homogeneous Landau equation for soft potentials, which reads (\[eqa\]) with $a_{ij}(z)=|z|^\gamma(|z|^{2}\delta_{ij}-z_iz_j)$ for some $\gamma\in [-3,0)$, the Coulomb case $\gamma=-3$ being the most interesting from a physical point of view. Then we have $b_i(z)=\sum_{j=1}^d \partial_j a_{ij}(z)=-(d-1)|z|^\gamma z_i$. Simulation with cutoff {#simulation-with-cutoff .unnumbered} ---------------------- We restrict our study to the case where $\gamma \in (-2,0]$. We assume that $P_0$ has finite moments of all orders, and has a density with a finite entropy $\int P_0(x) \log P_0(x) dx <\infty$. For ${{\varepsilon}}>0$ let $\kappa_{{\varepsilon}}:{{\mathbb{R}}}_+\mapsto{{\mathbb{R}}}_+$ be of class $C^2$, nondecreasing, with $\kappa_{{\varepsilon}}(z)=z$ for $z\geq{{\varepsilon}}$, $\kappa_{{\varepsilon}}(z)= {{\varepsilon}}/2$ for $z\in [0,{{\varepsilon}}/2]$, with $|\kappa'_{{\varepsilon}}(z)|+ {{\varepsilon}}|\kappa_{{\varepsilon}}''(z)| \leq C$. Consider then $a_{{\varepsilon}},b_{{\varepsilon}}$ defined as $a,b$ with $|z|^\gamma$ replaced by $[\kappa_{{\varepsilon}}(|z|)]^\gamma$. Then $a_{{\varepsilon}}$ is of class $C^2$, with all its derivatives of order $2$ bounded by $C{{\varepsilon}}^\gamma$, and $b_{{\varepsilon}}$ is Lipschitz continuous with Lipschitz constant $C{{\varepsilon}}^{\gamma}$. We thus may apply Corollary \[corpm\] and Theorem \[totaldisc\]. Denote by $(P^{{\varepsilon}}_t)_{t\geq 0}=({\mathcal{ L}}(X^{1,{{\varepsilon}}}_t))_{t\geq 0}$ a weak solution to (\[eqa\]) with $a_{{\varepsilon}}$ and $P_0^{{\varepsilon}}=P_0$. Then we believe that our results, plus some moment and ellipticity estimates (uniform in ${{\varepsilon}}\in (0,1]$), will give something like ${\mathbb{E}}[\sup_{[0,T]}|X^{1,n,N,{{\varepsilon}}}_t-X^{1,{{\varepsilon}}}_t|^2]\leq (n^{-1}+N^{-1})\exp(C_T{{\varepsilon}}^{2\gamma})$, where $(X_t^{i,n,N,{{\varepsilon}}})_{t\geq 0, i=1,..,n}$ solves (\[sdenN\]) with $a_{{\varepsilon}},b_{{\varepsilon}}$ instead of $a,b$. [.2cm]{} On the other hand, we may apply the techniques introduced in [@fg] to estimate ${{{\mathcal W}}}_2^2(P_t,P^{{\varepsilon}}_t)$, where $(P_t)_{t\geq 0}=({\mathcal{ L}}(X^1_t))_{t\geq 0}$ is a weak solution to (\[eqa\]) with $a$ and $P_0$. We believe that, with a convenient coupling, it is possible to obtain something like $\sup_{[0,T]} {\mathbb{E}}[|X^{1,{{\varepsilon}}}_t-X^1_t|^2] \leq C_{T}{{\varepsilon}}^2$. [.2cm]{} One would thus get $\sup_{[0,T]}{\mathbb{E}}[|X^{1,n,N,{{\varepsilon}}}_t - X^1_t|^2]\leq C_T \left({{\varepsilon}}^2 + \left(n^{-1}+N^{-1}\right)e^{C_T{{\varepsilon}}^{2\gamma}}\right)$. This is of course an awful rate of convergence. It does not seem reasonable to attempt a rigorous proof. Simulation without cutoff {#simulation-without-cutoff .unnumbered} ------------------------- However, the particle system (\[sdenN\]) is still well-defined and simulable for soft potentials (with $\gamma \in [-3,0]$), at least if we replace $\frac{1}{n}\sum_{k} \delta_{X^{k,n,N}_t}$ by $\frac{1}{n}\sum_{k\ne i} \delta_{X^{k,n,N}_t}$ and if $P_0$ has a density. Based on the well-posedness result of [@fg], we hope that, at least when $\gamma \in (-2,0]$, one might obtain the same estimates as in Corollary \[corpm\] and Theorem \[totaldisc\] (under additional conditions on $P_0$). The proof however seems to be quite difficult: we do not know how to get a sufficiently good estimate of quantities like $|X^{i,n,N}_t-X^{j,n,N}_t|^\gamma$.
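For reference, here is a minimal sketch (in Python, our own code) of the simulable system (\[sdenN\]) as it is run in the next section, for the Landau coefficients with exponent $\gamma$ ($\gamma=0$ gives Maxwell molecules; for $\gamma<0$ the self-interaction term is excluded, as discussed above). As noted below, replacing ${a^\frac{1}{2}}$ by a Cholesky factor does not change the law of the scheme.

```python
# Euler scheme (sdenN) for the Landau particle system in dimension d = 2:
# a_ij(z) = |z|^gamma (|z|^2 delta_ij - z_i z_j),  b(z) = -(d-1) |z|^gamma z.
import numpy as np

def simulate(x0, T, N, gamma=0.0, seed=None):
    """x0: (n, d) array of initial particles; N: time steps per unit of time."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n, d = x.shape
    dt = 1.0 / N
    for _ in range(int(T * N)):
        drift = np.zeros_like(x)
        noise = np.zeros_like(x)
        for i in range(n):
            z = x[i] - x                                   # differences x_i - x_k, shape (n, d)
            r2 = np.sum(z * z, axis=1)
            w = np.zeros(n)
            w[r2 > 0] = r2[r2 > 0] ** (gamma / 2)          # |z|^gamma, self term excluded
            a = (np.einsum('k,k,ij->ij', w, r2, np.eye(d))
                 - np.einsum('k,ki,kj->ij', w, z, z)) / n  # a(x_i, empirical measure)
            drift[i] = -(d - 1) * (w[:, None] * z).sum(axis=0) / n
            L = np.linalg.cholesky(a + 1e-12 * np.eye(d))  # any factor with L L^T = a will do
            noise[i] = L @ rng.standard_normal(d)
        x = x + dt * drift + np.sqrt(dt) * noise
    return x
```

The cost per time step is $\mathcal{O}(n^2)$, in agreement with the timings reported in the next section.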
Numerics {#num} ======== Let us first observe that for the Landau equation (\[eqa\]) where $a$ is given by (\[aLandau\]) and $b_i=\sum_j \partial_j a_{ij}$, the simulable particle system (\[sdenN\]) is conservative, in the sense that it preserves, in mean, momentum and kinetic energy: for all $i=1,\dots,n$, all $t\geq 0$, ${\mathbb{E}}[X^{i,n,N}_t]=\int x P_0(dx)$ and ${\mathbb{E}}[|X^{i,n,N}_t|^2]=m_2(P_0)$. [.2cm]{} We consider here the Landau equation for soft potentials, for some $\gamma \in [-3,0]$, described in the previous section, in dimension $d=2$. We use no cutoff procedure in the case $\gamma<0$. We consider the initial condition $P_0$ with density $P_0(x_1,x_2)=f(x_1)g(x_2)$, where $f$ is the Gaussian density with mean $0$ and variance $0.1$, while $g(x)=(f(x-1)+f(x+1))/2$. The momentum and energy of $P_0$ are given by $(0,0)$ and $1.02$. Thus in large time, the solution $P_t$ should converge to the Gaussian distribution with mean $(0,0)$ and covariance matrix $0.51 I_2$, see Villani [@v3]. We use the particle system (\[sdenN\]) with $n$ particles, and $N$ steps per unit of time. Easy considerations show that the cost of computing (\[sdenN\]) until time $T$ is essentially proportional to $TNn^2$, and should not depend too much on $\gamma$. It is, however, considerably faster when $\gamma=0$ for obvious computational reasons. Let us also remark that the law of (\[sdenN\]) does not change when replacing ${a^\frac{1}{2}}$ by any $\sigma$ such that $\sigma(x,\mu)\sigma(x,\mu)^*=a(x,\mu)$. We thus use a Cholesky decomposition, which is numerically quite fast. Let us give an idea of the time needed to perform one time-step: with $\gamma=0$, it takes around $7\cdot 10^{-3}$ seconds ($n=500$), $0.15$ s ($n=2500$), $3.5$ s ($n=12500$), and $13$ s ($n=25000$). The computations are around $10$ times slower when $\gamma<0$. [.2cm]{} From now on we always use $n=5000$ particles, and $N=200$ steps per unit of time. We draw, for different values of $t$ and $\gamma$, the histogram (with $80$ bins) of the second coordinates of $(X_t^{i,n,N})_{i=1,\dots,n}$. The solid curve is the expected asymptotic Gaussian density, with mean $0$ and variance $0.51$. The convergence to equilibrium seems to be slower and slower as $\gamma$ is more and more negative.

(Histograms of the second coordinates of $(X_t^{i,n,N})_{i=1,\dots,n}$ for several values of $t$ and $\gamma$, compared with the limiting Gaussian density; figures omitted.)

For too small values of $\gamma$ (say $\gamma<-2.5$), the numerical results are not so convincing. This is not surprising, since the coefficients are more and more singular as $\gamma$ becomes smaller and smaller. Appendix {#app} ======== The following Lemma can be found in Stroock-Varadhan (when $p=d$) [@sv Theorem 5.2.3], or in Villani [@v5 Theorem 1] (for a more refined statement including all possible values of $p$ and $d$). \[a1\] Let $A:{{\mathbb{R}}}^p \mapsto S_d^+$, for some $p\geq 1$, $d\geq 1$, be of class $C^2$, with all its derivatives of order $2$ bounded. Then $||D (A{{^\frac{1}{2}}})||_\infty \leq C_{p,d} \sqrt{ ||D^2 A ||_{\infty}}$, where $C_{p,d}$ depends only on $p$ and $d$. We also need the following estimates, which are probably standard.
\[a1bis\] For $A,B \in S_d^+$, \(i) there holds $|A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}}| \leq \sqrt{|A-B|}$, \(ii) and $|A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}}| \leq \sqrt{\min(|A^{-1}|,|B^{-1}|)}\times |A-B|$. We start with point (i). Let $\sigma=|A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}}|$. There is a unit vector $e\in{{{{\mathbb{R}}}^d}}$ such that $|(A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}})e|=\sigma$, and we may assume that $(A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}})e=\sigma e$ (else, exchange the roles of $A$ and $B$). Then, using that $B{{^\frac{1}{2}}}$ is nonnegative, $$\begin{aligned} |A-B|\geq (Ae-Be,e)=((A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}})e,(A{{^\frac{1}{2}}}+B{{^\frac{1}{2}}})e)= (\sigma e,\sigma e + 2B{{^\frac{1}{2}}}e) \geq \sigma^2|e|^2=\sigma^2.\end{aligned}$$ We now prove (ii). First observe that $(A{{^\frac{1}{2}}}x,x)\geq |x|^2/ |A^{-1/2}|$ for all $x\in {{{{\mathbb{R}}}^d}}$. As previously, $$\begin{aligned} |A-B|\geq& ((A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}})e,(A{{^\frac{1}{2}}}+B{{^\frac{1}{2}}})e)=\sigma (A{{^\frac{1}{2}}}e,e)+ \sigma(B{{^\frac{1}{2}}}e,e){\nonumber \\}\geq & \sigma |e|^2/ |A^{-1/2}| + \sigma |e|^2/ |B^{-1/2}| = \sigma/\sqrt{|A^{-1}|}+\sigma/\sqrt{|B^{-1}|} \geq \sigma/\sqrt{\min(|A^{-1}|,|B^{-1}|)},\end{aligned}$$ whence $|A{{^\frac{1}{2}}}-B{{^\frac{1}{2}}}|=\sigma \leq \sqrt{\min(|A^{-1}|,|B^{-1}|)}|A-B|$. We conclude this appendix with an elementary fact on the Wasserstein distance. \[a2\] For $x_1,\dots,x_n$, $y_1,\dots,y_n \in {{{{\mathbb{R}}}^d}}$, ${{{\mathcal W}}}^2_2\left(\frac{1}{n}\sum_{1}^n\delta_{x_i},\frac{1}{n}\sum_{1}^n\delta_{y_i} \right) \leq \frac{1}{n}\sum_{1}^n |x_i-y_i|^2$. Let $U$ be uniformly distributed on $\{1,\dots,n\}$, set $X=x_U$ and $Y=y_U$. Then $X\sim \frac{1}{n}\sum_{1}^n\delta_{x_i}$, $Y\sim \frac{1}{n}\sum_{1}^n\delta_{y_i}$, and ${\mathbb{E}}[|X-Y|^2]= \frac{1}{n}\sum_{1}^n |x_i-y_i|^2$.
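A quick numerical sanity check of the two bounds of Lemma \[a1bis\] on random positive definite matrices (purely illustrative):

```python
# Check |A^{1/2} - B^{1/2}| <= sqrt(|A - B|)  and
#       |A^{1/2} - B^{1/2}| <= sqrt(min(|A^{-1}|, |B^{-1}|)) |A - B|   (spectral norm).
import numpy as np

def sqrt_spd(M):                       # symmetric square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

op = lambda M: np.linalg.norm(M, 2)    # spectral norm
rng = np.random.default_rng(1)
for _ in range(100):
    X, Y = rng.standard_normal((2, 4, 4))
    A, B = X @ X.T + 0.1 * np.eye(4), Y @ Y.T + 0.1 * np.eye(4)
    lhs = op(sqrt_spd(A) - sqrt_spd(B))
    assert lhs <= np.sqrt(op(A - B)) + 1e-10
    assert lhs <= np.sqrt(min(op(np.linalg.inv(A)), op(np.linalg.inv(B)))) * op(A - B) + 1e-10
```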
--- author: - 'E.A. Muravleva, I.V. Oseledets' bibliography: - 'final.bib' title: 'Fast low-rank solution of the Poisson equation with application to the Stokes problem' ---

(Figure: the semi-staggered grid used for the discretization, with velocity unknowns at the cell vertices and pressure unknowns at the cell centers; pgf figure omitted.)

Introduction {#sec-1} ============ Solvers for the Poisson equation are important in many different application areas, thus their improvement is of great interest even for special grids and right-hand sides. In this paper we consider a very specific class of Poisson equations: Poisson equations in two and three dimensions on tensor-product uniform grids with *low-rank* right-hand sides. In this case, it is possible to reduce the complexity of the solver to an almost linear complexity in the one-dimensional grid size. Problems of this kind have been considered previously by several authors [@grasedyck-kron-2004; @GHK-ten_inverse_ellipt-2005; @beylkin-2002], and linear complexity is indeed achievable. However, in small dimensions (especially in two dimensions) the constant hidden in $\mathcal{O}(n)$ can be very high, and the full representation is more efficient for a wide range of $n$. The main goal of this paper is, staying within the framework of low-rank approximations, to provide a new and efficient algorithm for the approximate solution of the Poisson equation. To show the effectiveness of our approach, we use it to create a fast solver for the Stokes problem. The Stokes problem is one of the classical problems of mathematical physics. It is often encountered as a subproblem in more complicated problems, such as the Navier-Stokes problem, the unsteady Stokes problem, or flows of non-Newtonian fluids. Numerical methods for the solution of the Stokes problem are a classical topic of computational fluid dynamics, thus any improvement in the efficiency of Stokes solvers (at least in some important cases) is of great interest. The Uzawa method for the Stokes problem requires the repeated solution of Poisson equations. The whole iterative procedure (including matrix-by-vector products, arithmetic operations, and dot products) will be implemented in the low-rank format, which is not always an easy task, since at each step the accuracy has to be monitored to avoid growth of the number of parameters and of the error. The method proposed in this paper has its limitations: it works for a special discretization of the Laplace operator on tensor-product grids and for special right-hand sides, but the class of right-hand sides in two and three dimensions is not small: it includes sufficiently smooth functions (i.e., functions well approximated by polynomials), so-called asymptotically smooth functions, sparse right-hand sides and so on. The approximation work is done on the algebraic level using unified algorithms.
To get better complexity and accuracy, one can use advanced discretization techniques: $hp$-methods, high-order schemes, spectral and pseudospectral approaches, etc. However, they require significant additional work on the analytic level. On the other hand, one can use certain approximations on the discrete level, by approximating the discrete solution using some low-parametric format. As such a format, we will use a low-rank factorization of matrices (in two dimensions) and tensors (in three dimensions). The approach can be used in arbitrary dimension, but we will leave this topic for future work. Model problem: discretization and solution method {#sec-2} ================================================= The Stokes problem is used as a basic problem to illustrate the low-rank techniques. The numerical scheme consists of three basic steps.

1. Take a simple discretization of the Stokes problem (we will use semi-staggered grids).
2. Use a mesh-size-independent convergent iterative scheme (we will use the Uzawa method).
3. Replace each step of the method by operations in the low-parametric format with truncation.

The whole procedure is very simple, and as will be seen later on, most of the steps are also simple in the tensor format. However, to get a more efficient method, several modifications should be made to the “naive” approach. For the Stokes problem, the second step requires the solution of a Poisson problem at each iteration step. Our main result is a new algorithm for the solution of such a problem, based on the cross approximation [@tee-cross-2000; @bebe-2000] in the frequency space. The well-known approaches for the solution of the Poisson equation in low-rank formats rely on a special approximation to the inverse of the Laplace operator by a sum of tensor (Kronecker) products, but in two and three dimensions the complexity can be reduced significantly using a new approach. We consider the Stokes problem in a rectangular domain $\Omega = [0,1]^d$, $d=2,3$. The problem is discretized using the finite difference method (FDM) on semi-staggered grids. For details, see [@om-stockes-2010; @mur-phd-2010; @mur-ker-2008]. We will give only the final matrix form of the discrete problem. There are $d+1$ unknown vectors ($d$ for velocity components and one for pressure). The components of velocity are defined in the vertices of the grid cells, and the pressure is defined in the centers of the cells (see Figure \[flr:grid\]). The grid for pressure is $n_x \times n_y$ in the two-dimensional case, and $n_x \times n_y \times n_z$ in the three-dimensional case. For simplicity we will assume that the grid sizes are equal: $n_x = n_y = n_z = n$. The mesh size is defined as $h = \frac{1}{n}$. The discretized Stokes problem has the following form: $$\label{flr:stprob} \begin{pmatrix} A & B\\ B^T & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}.$$ We use the standard notation: $u$ is the velocity vector, $p$ is the pressure. Here $A$ is a $d \times d$ block diagonal matrix of the form $$A = I_d \otimes \Delta,$$ where $\Delta$ is a discrete Laplace operator which will be defined later on, and $I_d$ is an identity matrix of order $d$. The matrix $B$ is a block matrix of the form $$B = \begin{pmatrix} B_{x}^{\top} & B_{y}^{\top} \end{pmatrix}^{\top}$$ in the two-dimensional case, and $$B = \begin{pmatrix} B_{x}^{\top} & B_{y}^{\top} & B_{z}^{\top} \end{pmatrix}^{\top}$$ in the three-dimensional case.
To define the components of the discrete gradient operators, it is convenient to use the following auxiliary matrices, $G$ and $H$. They are defined as $$G = E - Z, \quad H = E + Z,$$ where $E,Z$ are $(n-1) \times n$ matrices. The matrix $E$ deletes the first element of the vector (and moves all others up by one), and the matrix $Z$ deletes the last element of the vector (and moves all others down by one). In two-dimensional case, the matrices $B_x$ and $B_y$ have the form $$B_x = \frac{1}{4h} G \otimes H, \quad B_y = \frac{1}{4h} H \otimes G.$$ In three-dimensional case, the matrices $B_x$, $B_y$ and $B_z$ have the form: $$B_x = \frac{1}{4h} G \otimes H \otimes H, \quad B_y = \frac{1}{4h} H \otimes G \otimes H, \quad B_z = \frac{1}{4h} H \otimes H \otimes G.$$ Finally, the discrete Laplace operator has the following form: $$\begin{split} & \Delta = B_x B^{\top}_x + B_y B^{\top}_y, \\ & \Delta = B_x B^{\top}_x + B_y B^{\top}_y + B_z B^{\top}_z, \end{split}$$ in two and three dimensions respectively. Let us note that this is a non-standard 9-point stencil in 2D (27-point stencil in 3D) discretization for the Laplace operator, but its main benefit is that it is *consistent* with the discrete gradient operator. The standard approach to solve is the Uzawa method [@benzi-saddle-2005]. The system is reduces to the following equation for the pressure: $$\label{flr:schur} (B^{\top} A^{-1} B) p = B^{\top} A^{-1} f.$$ The matrix of the system is positive semi-definite, and the conjugate-gradient (CG) method is a convenient tool to solve it. An important question is the spectrum of the Schur operator. It is known [@om-stockes-2010; @mur-phd-2010] that it has $2$ zero eigenvalues in 2D, and $3n-1$ eigenvalues in $3d$, and $(n-2)^d$ eigenvalues, equal to $1$. The numerical experiments confirm, that the remaining part of the spectrum lies on $[0,1]$, and is bounded from below by a constant, independent of $h$. The linear system is consistent, i.e., the right-hand side is orthogonal to any vector $q$ in the kernel of the Schur complement, thus the conjugate gradient method effectively works on the subspace, orthogonal to the kernel, where the operator is well-conditioned, thus the total number of iterations does not depend on $h$. Low-rank formats {#sec-3} ================ Our main goal is the solution of the problem in low-rank formats. In two dimensions, the discrete pressure is a vector of length $(n-1)^2$, and it can be naturally considered as an $(n-1) \times (n-1)$ matrix. In a three-dimensional case, the discrete pressure can be naturally considered as an $(n-1) \times (n-1) \times (n-1)$ three-dimensional tensor. We want to approximate those tensors, effectively reducing the number of parameters and the computational cost, while maintaining the required accuracy $\varepsilon$ of the computations. This accuracy should be consistent with the mesh discretization error, which is $\mathcal{O}(h^2)$ for the pressure. Let us recall some basic facts about low-rank tensor decompositions in two and three dimensions. The $n_1 \times n_2$ matrix $M$ is said to be in the low-rank format with rank $r$, if it can be represented as $$\label{flr:lr} M = U V^{\top} = \sum_{\alpha=1}^r u_{\alpha} v_{\alpha}^{\top},$$ where $U$ is an $n_1 \times r$ matrix, and $V$ is an $n_2 \times r$ matrix. The dyadic decomposition can be computed via the singular value decomposition (SVD). The linear operator, acting on a space of $n_1 \times n_2$ matrices, can be represented as a $(n_1 n_2) \times (n_1 n_2)$ matrix. 
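The operators above are straightforward to assemble from one-dimensional pieces. The following sketch (scipy-based, two-dimensional case, with our own naming) builds $G$, $H$, the gradient blocks and the Laplacian, and checks the Kronecker representation of $\Delta$ that is used in the next sections.

```python
# Assemble the 2D semi-staggered operators from the 1D matrices E, Z, G = E - Z, H = E + Z.
import numpy as np
import scipy.sparse as sp

def operators_2d(n):
    h = 1.0 / n
    E = sp.eye(n - 1, n, k=1, format='csr')   # drops the first entry (shift up)
    Z = sp.eye(n - 1, n, k=0, format='csr')   # drops the last entry (shift down)
    G, H = E - Z, E + Z
    Bx = sp.kron(G, H) / (4 * h)              # discrete d/dx: pressure -> velocity nodes
    By = sp.kron(H, G) / (4 * h)              # discrete d/dy
    Lap = Bx @ Bx.T + By @ By.T               # consistent 9-point Laplacian on the velocity grid
    return G, H, Bx, By, Lap

n = 8
G, H, Bx, By, Lap = operators_2d(n)
A1, A2 = (G @ G.T).toarray(), (H @ H.T).toarray()   # tridiag(-1, 2, -1) and tridiag(1, 2, 1)
ref = (np.kron(A1, A2) + np.kron(A2, A1)) / (16.0 / n ** 2)
assert np.allclose(Lap.toarray(), ref)              # Lap = (A1 (x) A2 + A2 (x) A1) / (16 h^2)
```

Only the one-dimensional pieces are needed by the low-rank solver below; the assembled $\Delta$ serves here as a reference.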
The corresponding low-rank format for the matrix is given in the following form. An $(n_1 n_2) \times (n_1 n_2)$ matrix $A$ is said to be in the low-rank (Kronecker) format with rank $R$, if it can be represented as a sum of Kronecker products: $$\label{flr:lrmat} A = \sum_{\alpha=1}^R C_{\alpha} \otimes D_{\alpha},$$ where $C_{\alpha}$ is an $n_1 \times n_1$ matrix, and $D_{\alpha}$ is an $n_2 \times n_2$ matrix, and $\otimes$ is a Kronecker product of matrices. The low-rank format for operators and vectors allows for fast linear algebra operations [@ost-latensor-2009; @ot-hyper-2005; @hkt-iter-2008]. For example, multiplication of a matrix of tensor rank $R$ by a vector of rank $r$ yields a vector of rank $Rr$. An important operation is the so-called *rounding*. Most basic operations (like addition or matrix-by-vector product) increase the rank, thus the reapproximation with some specified accuracy is required. If the matrix is given in the low-rank format, then the rounding procedure can be implemented without the computation of the full SVD. Its complexity with respect to $n,m,r$ is $\mathcal{O}( (n+m) r^2 + r^3)$, i.e., it is linear in the mode size (compared to cubic for the full SVD). In three dimensions, the so-called Tucker format can be very useful: [@Tucker; @lathauwer-svd-2000; @khor-ml-2009]. An $n_1 \times n_2 \times n_3$ three-dimensional tensor $A$ is said to be in the Tucker format, if it can be represented as $$\label{flr:tucker} A(i_1,i_2,i_3) = \sum_{\alpha_1,\alpha_2,\alpha_3} G(\alpha_1,\alpha_2,\alpha_3) U_1(i_1,\alpha_1) U_2(i_2,\alpha_2) U_3(i_3,\alpha_3),$$ where the numbers $\alpha_k$ vary from $1$ to $r_k$. The matrices $U_k = [U_k(i_k,\alpha_k)]$ are called *factors* of the Tucker decomposition, the numbers $r_k$ are called *Tucker ranks*, and the three-dimensional tensor $G$ is called the *Tucker core*. The Tucker format for the matrix is defined in the same fashion. The basic linear operations for the Tucker format can also be defined, as well as a fast rounding procedure with complexity that is linear in the mode size [@ost-latensor-2009]. Fast inversion of the Laplacian in tensor formats {#sec-4} ================================================= Known approaches {#sec-4-1} ---------------- The basic computational core of the algorithm is the solution of the Poisson equation (see ): $$\label{flr:pois} \Delta u = g.$$ We will consider first the two-dimensional case. The generalization to higher dimensions will be outlined further. The matrix $\Delta$ in our case has the form (up to a scaling factor, which is not important) $$\Delta = A_1 \otimes A_2 + A_2 \otimes A_1,$$ where $$A_1 = GG^{\top}, A_2 = HH^{\top}.$$ It is not difficult to see, that the matrices $A_1$ and $A_2$ commute and are diagonalized by the discrete sine transform (DST): $$A_1 = S \Lambda_1 S^{\top}, A_2 = S \Lambda_2 S^{\top},$$ thus the full matrix $\Delta$ can be written as $$\label{flr:diag} \Delta = (S \otimes S) (\Lambda_1 \otimes \Lambda_2 + \Lambda_2 \otimes \Lambda_1) (S^{\top} \otimes S^{\top}).$$ The representation is a textbook way to solve such kind of problems in the full format, by applying the DST, inverting the diagonal matrix, and then applying the DST back. For the low-rank formats, the situation is as follows. The application of the DST can be done in a fast way by computing the DST of the factors $U$ and $V$, and it is an $\mathcal{O}(nr \log n)$ complexity. Moreover, then the rank does not change. 
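This factor-wise transform is a one-liner; the sketch below (our own code, using the orthonormal type-I DST from `scipy.fft`, which is the sine transform that diagonalizes $GG^{\top}$ and $HH^{\top}$) illustrates that the factored form is preserved.

```python
# Apply the DST to a low-rank matrix by transforming its factors only:
# if X = U V^T then S X S^T = (S U)(S V)^T, so the rank does not change.
import numpy as np
from scipy.fft import dst

def dst_factors(U, V):
    """Orthonormal DST-I of U V^T, returned again in factored form."""
    return dst(U, type=1, norm='ortho', axis=0), dst(V, type=1, norm='ortho', axis=0)

# small dense check
m, r = 7, 3
rng = np.random.default_rng(0)
U, V = rng.standard_normal((m, r)), rng.standard_normal((m, r))
k = np.arange(1, m + 1)
S = np.sqrt(2.0 / (m + 1)) * np.sin(np.pi * np.outer(k, k) / (m + 1))   # orthonormal DST-I matrix
Uh, Vh = dst_factors(U, V)
assert np.allclose(S @ (U @ V.T) @ S.T, Uh @ Vh.T)
```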
But the inversion of the diagonal matrix $$\label{flr:diagmat} \Lambda = \Lambda_1 \otimes \Lambda_2 + \Lambda_2 \otimes \Lambda_1,$$ suddenly becomes the bottleneck, since it is an $\mathcal{O}(n^2)$ computation! Thus, the matrix $\Lambda^{-1} \widehat{g}$ should be approximated in the low-rank format. A well-established approach to compute $\Lambda^{-1}$ is based on the approximation by exponential sums [@rokhlin-gauss-1998; @GHK-ten_inverse_ellipt-2005; @grasedyck-kron-2004; @khor-rstruct-2006; @hackbra-expsum-2005]. It uses the following quadrature formula: $$\label{flr:int} \frac{1}{x} = \int_{0}^{\infty} e^{-px} dp \approx \sum_{k=1}^R w_k e^{-p_k x},$$ where $p_k$ and $w_k$ are quadrature nodes and weights. The typical value of $R$ required to achieve a good accuracy is of the order of several tens. The formula can be used to approximate the inverse matrix: $$\label{flr:expsum} \Lambda^{-1} \approx \sum_{k=1}^R w_k e^{-p_k \Lambda},$$ where, for an operator with the separable structure $\Lambda_1 \otimes I + I \otimes \Lambda_2$, each term factorizes as $e^{-p_k \Lambda_1} \otimes e^{-p_k \Lambda_2}$, so that the sum gives a Kronecker-format approximation of the inverse with tensor rank $R$. A new method based on cross approximation {#sec-4-2} ----------------------------------------- Let us estimate the typical complexity of the method discussed in the previous subsection. The main computational task is the multiplication by $\Lambda^{-1}$. Let $r$ be the rank of the vector, and $R$ be the tensor rank of the inverse of $\Lambda$. Then the result would have the rank $Rr$. Thus, in two dimensions, the method will be effective only when $n \geq (Rr) \sim 1000$, since the typical values of $R$ and $r$ are around $30$. The main cost is due to the approximation of the full inverse matrix. However, we only need to multiply this inverse by a vector, and the result is expected to have a small rank. This structure is not used in the methods based on the approximation of the inverse matrix. To obtain a new method, let us look more closely at the main operation: $$\widehat{f} = \Lambda^{-1} \widehat{g},$$ or elementwise: $$\widehat{f}_{ij} = \frac{\widehat{g}_{ij}}{\Lambda_{ij}}.$$ In our case, $$\Lambda_{ij} = (4 - \lambda_j) \lambda_i + (4 - \lambda_i) \lambda_j = \mu_i \nu_j + \nu_i \mu_j, \qquad \mu_i = \lambda_i, \quad \nu_i = 4 - \lambda_i,$$ therefore $$\label{flr:div} \widehat{f}_{ij} = \frac{\widehat{g}_{ij}}{\mu_i \nu_j + \nu_i \mu_j}.$$ The most important observation is that this is an *elementwise division* of a matrix of rank $r$ by a matrix of rank $2$, and we can compute any element of the matrix $\widehat{F} = [\widehat{f}_{ij}]$ in $\mathcal{O}(r)$ operations. Our main assumption is that the result of the division can in turn be approximated by a matrix of a small rank, $\mathcal{O}(r)$. Thus, we can try to approximate the result directly, without approximating $\Lambda^{-1}$. Since we can compute any prescribed element fast, it is natural to use the cross method for the approximation of low-rank matrices (in two dimensions) [@tee-cross-2000; @bebe-2000] and for tensors [@ost-tucker-2008; @ot-ttcross-2010]. In the matrix case, to approximate a matrix with rank $r$ it is sufficient to compute $r$ rows and $r$ columns of it, using a certain heuristic pivoting technique (a quasioptimal choice can be based on the maximal-volume submatrix [@gt-maxvol-2001] using the maxvol algorithm [@gostz-maxvol-2010]). Thus, the expected complexity is $\mathcal{O}(nr^2)$ operations, which is much less than the approximation of the full inverse.
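To make the construction concrete, the sketch below evaluates $\widehat{f}_{ij}$ on demand from the factors of $\widehat{g}$ and the one-dimensional eigenvalues, and runs a cross (ACA) loop with partial pivoting, so that only $\mathcal{O}(r)$ rows and columns of $\widehat{F}$ are ever touched. It is our own simplified code: the pivoting is the basic ACA heuristic rather than the maxvol-based choice of the references.

```python
# Cross (ACA) approximation of fhat_ij = ghat_ij / Lam_ij, with ghat = P Q^T and
# Lam_ij = lam_i (4 - lam_j) + (4 - lam_i) lam_j (times the 1/(16 h^2) scaling).
import numpy as np

def cross_divide(P, Q, lam, scale=1.0, tol=1e-9, rmax=60):
    """Return U, V with fhat ~ U V^T; P, Q are the DST-transformed factors of g."""
    m = len(lam)
    row_of = lambda i: (P[i] @ Q.T) / (scale * (lam[i] * (4 - lam) + (4 - lam[i]) * lam))
    col_of = lambda j: (P @ Q[j]) / (scale * (lam * (4 - lam[j]) + (4 - lam) * lam[j]))
    U, V, used = [], [], set()
    i = 0
    for _ in range(min(rmax, m)):
        row = row_of(i) - sum(u[i] * v for u, v in zip(U, V))    # residual row i
        j = int(np.abs(row).argmax())
        if abs(row[j]) < tol:
            break
        col = col_of(j) - sum(u * v[j] for u, v in zip(U, V))    # residual column j
        U.append(col / row[j])
        V.append(row)
        used.add(i)
        cand = np.abs(U[-1]); cand[list(used)] = -1.0
        i = int(cand.argmax())                                    # next pivot row
    return (np.array(U).T, np.array(V).T) if U else (np.zeros((m, 0)), np.zeros((m, 0)))

# usage: lam_k = 2 - 2 cos(k pi / n), k = 1..n-1, scale = 1 / (16 h^2) for the Laplacian
# assembled earlier; P, Q come from the factor-wise DST of the right-hand side.
```

This is the operation that is timed against the exponential-sums (Hadamard product plus rounding) approach in the experiments below.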
Stokes solver in the low-rank format {#sec-5}
====================================

The Uzawa method requires the solution of the Poisson equation, multiplications by the gradient matrices, and a few vector additions. All these operations are done approximately with some accuracy $\varepsilon$. To solve the linear system with the Schur complement and inexact matrix-by-vector products, we used the inexact GMRES method as implemented in [@dc-tt_gmres-2013], which adapts the accuracy of the intermediate matrix-by-vector products. Although our matrix is positive semidefinite, we decided to use GMRES rather than the conjugate gradient method in this case, since GMRES proved to be much more robust to the approximate inversion of the Laplacian.

Numerical experiments {#sec-6}
=====================

As a numerical test, we have selected the following analytical solution: $$\label{flr:ansin2d} \begin{split} &u = \frac{1}{4 \pi^2} \sin 2 \pi x (1 - \cos 2 \pi y), \\ &v = \frac{1}{4 \pi^2} (1 - \cos 2 \pi x) \sin 2 \pi y, \\ &p = \frac{1}{\pi} \sin 2 \pi x \sin 2 \pi y. \end{split}$$ The corresponding right-hand side is then $$\begin{split}\label{flr:ansin2d-2} &f_u = \sin 2 \pi y, \\ &f_v = \sin 2 \pi x, \\ &g = -\frac{1}{\pi}\sin 2 \pi x \sin 2 \pi y. \end{split}$$ The solution for $p$ has a perfect structure (rank $1$). However, during the iterative process the intermediate ranks may grow considerably, and that increases the complexity. Table \[flr:table-1\] reports the total computation time for the low-rank method and for the same scheme in the full format. The third column gives the error of the approximation with respect to the true solution. For all computations the same threshold $\varepsilon = 5 \cdot 10^{-9}$ was used, which is small enough not to interfere with the discretization error. The low-rank method is slower for small mode sizes ($n \leq 256$) but becomes faster for larger $n$ owing to its better scaling; almost linear scaling in $n$ is also observed.

   n     Time (LR/full)   Rel. error in $p$ (LR/full)
  ------ ---------------- -----------------------------
  64     0.17/0.024       1.2e-03/1.2e-03
  128    0.29/0.061       2.9e-04/2.9e-04
  256    0.54/0.39        7.8e-05/7.8e-05
  512    **1.2/2.3**      1.9e-05/1.9e-05
  1024   **2.8/10**       8.6e-06/8.2e-06

  : Numerical results for the two-dimensional sine example[]{data-label="flr:table-1"}

In Figure \[flr:rank-and-res\] we present the approximate rank of the Krylov vectors in the inexact GMRES and the residual computed in the GMRES method. We see that the rank decreases with the iteration number (which is consistent with the inexact GMRES theory: the later Krylov vectors are approximated with less accuracy), and GMRES converges linearly.

Lid-driven cavity flow {#sec-6-1}
----------------------

The second example is the classical lid-driven cavity test. Homogeneous Dirichlet boundary conditions are imposed on all boundaries for $u$ and $v$, except the upper boundary, which moves with constant velocity (see Figure \[flr:lid\]).
[Figure \[flr:lid\]: sketch of the lid-driven cavity geometry; the top lid moves with $u = 1$, $v = 0$, while $u = 0$, $v = 0$ is imposed on the other three walls.]

The results are presented in Table \[flr:table-2\]. Since we do not know the analytic solution in this case, we compare the solution obtained in the full format with the solution obtained by the low-rank method. The sublinear scaling in the number of unknowns is clearly visible.

  ------ ---------------- ------------
   n      Time (LR/full)   Rel. error
   256    0.89/0.63        6.7e-07
   512    **2.5/3.2**      5.6e-07
   1024   **6.3/13**       5.9e-07
  ------ ---------------- ------------

  : Numerical results for the two-dimensional lid-driven cavity flow[]{data-label="flr:table-2"}

Finally, we compare the new method with the exponential sums approach. We took $n = 1024$ and $\varepsilon = 10^{-7}$, which gave a rank-$35$ approximation of the inverse. The typical rank of the Krylov vectors for the lid-driven cavity test was around $30$ as well. The timing for the Hadamard product followed by rounding was **1.6** seconds, whereas the cross method took **0.07** seconds, i.e., more than 20 times faster.

Conclusions {#sec-7}
===========

We have presented a new way to solve the Poisson equation in the low-rank tensor format using cross approximation in the frequency domain. The idea is quite simple but very effective. Using the new solver, we have implemented the first low-rank solver for the two-dimensional Stokes problem. The low-rank Stokes solver is based on the Uzawa-GMRES approach and is effective and robust in our experiments. Only the two-dimensional case was considered in the numerical experiments, since it is the “worst case” for low-rank methods due to the typically high complexity with respect to the ranks. Even in this setting, the low-rank solver outperforms the full-format finite-difference solver starting from $n = 512$. We plan to implement a three-dimensional variant, where the complexity reduction is expected to be much more pronounced.
---
abstract: 'A key question in extragalactic studies is the determination of the relative roles of stars and AGN in powering dusty galaxies at $z$$\sim$1-3 where the bulk of star-formation and AGN activity took place. In Paper I, we present a sample of $336$ 24$\mu$m-selected (Ultra)Luminous Infrared Galaxies, (U)LIRGs, at $z \sim 0.3$-$2.8$, where we focus on determining the AGN contribution to the IR luminosity. Here, we use hydrodynamic simulations with dust radiative transfer of isolated and merging galaxies to investigate how well the simulations reproduce our empirical IR AGN fraction estimates and determine how IR AGN fractions relate to the UV-mm AGN fraction. We find that: 1) IR AGN fraction estimates based on simulations are in qualitative agreement with the empirical values when host reprocessing of the AGN light is considered; 2) for star-forming galaxy-AGN composites our empirical methods may be underestimating the role of AGN, as our simulations imply $>$50% AGN fractions, $\sim$3$\,\times$ higher than previous estimates; 3) 6% of our empirically classified “SFG" have AGN fractions $\gtrsim$ 50%; while this is a small percentage of SFGs, if confirmed, it would imply that the true number density of AGN may be underestimated; 4) this comparison [depends]{} on the adopted AGN template – those that neglect the contribution of warm dust lower the empirical fractions by up to 2$\times$; and 5) the IR AGN fraction is only a good proxy for the intrinsic UV-mm AGN fraction when the extinction is high ($A_V\gtrsim 1$ or up to and including coalescence in a merger).'
author:
- 'Eric Roebuck, Anna Sajina, Christopher C. Hayward, Alexandra Pope, Allison Kirkpatrick, Lars Hernquist, Lin Yan'
bibliography:
- 'paperrefs.bib'
title: 'The Role of Star-Formation and AGN in Dust Heating of z=0.3-2.8 Galaxies - II. Informing IR AGN fraction estimates through simulations'
---

Introduction
============

Understanding galaxies at $z \sim 1$-$3$ is of key importance to galaxy evolution studies because both the star formation rate density [see @MadauDickinson2014 for a recent review] and quasar number density [@Richards2006b] peak at this epoch. Along with the $M_{\rm{BH}}-\sigma$ relation [@Ferrarese2000], these observations suggest that the accumulation of stellar mass and growth of super-massive black holes are closely tied [see e.g. @Hopkins2006; @Hopkins2008]. The increase in number density of luminous and ultraluminous infrared galaxies (LIRGs and ULIRGs respectively) up to $z \sim 2$ makes them the dominant contributor to the SFR density peak [@Murphy2011; @Magnelli2011; @Casey2012]. However, understanding exactly how much star formation takes place in such systems requires accurate determinations of the fraction of their power output that is due to recent star formation rather than AGN. The high levels of obscuration in such galaxies make answering this question notoriously difficult. Analysis of the infrared spectral energy distribution (IR SED), especially mid-IR spectra when available, has been our best tool to determine the level of AGN activity in such heavily dust obscured systems [e.g. @Armus2007; @Sajina2007; @Yan2007; @Pope2008; @Veilleux2009; @Kirkpatrick2012; @Sajina2012]. The contribution of AGN to the IR luminosity[^1] is typically referred to as the IR “AGN fraction" or $f{\rm{(AGN)_{IR}}}$.
Traditionally, determining $f{\rm{(AGN)_{IR}}}$ is based on assuming that the hot dust giving rise to the mid-IR continuum is exclusively due to an AGN torus, while the far-IR cold dust emission peak is entirely powered by stars [e.g @Polletta2007; @Sajina2007]. The warm dust ($\sim$80-100K) giving rise to the 20-40[$\mu$m]{} continuum is more uncertain as it can be due to star formation [e.g. @Veilleux2009] [or to reprocessing in an NLR]{} [e.g. see @Netzer2015 for a review]. This can account for the typically greater warm dust component in empirical AGN templates [e.g. @Richards2006a; @Mullaney2011] relative to pure AGN torus models [e.g. @Nenkova2008; @Honig2010]. Aside from the uncertainty regarding the role of the NLR, this view ignores the fact that the AGN is embedded in its host galaxy, and the light from it is subject to further processing therein. The effects of this galaxy-scale dust processing of the AGN emission can be investigated by performing radiative transfer on hydrodynamical simulations of galaxies. The goal of this paper is to inform empirical IR AGN fraction estimates by comparing simulated and observed IR SEDs of a sample covering the redshift and luminosity regime most critical to the build-up of stars and black holes in the Universe. This paper is second in a series. Paper I [@Kirkpatrick2015] presents our sample of 343 24$\mu$m-selected $z\sim$0.3-2.8 (U)LIRGs with exceptional coverage from the optical to the far-IR/mm including *Spitzer*/IRS mid-IR spectra. That paper includes a state-of-the-art spectro-photometric analysis of the observed IR SEDs yielding empirical [$f\rm{(AGN)_{IR}}$]{}. In this paper (Paper II), we test whether simulated galaxies can reproduce the observed SEDs of our sample, which covers a wide range in IR AGN fractions; compare the empirical and simulation-based [$f\rm{(AGN)_{IR}}$]{}; and investigate how such IR AGN fractions constrain the intrinsic AGN contribution to the power output of dusty galaxies. In Paper III (Roebuck et al., in prep.), we will present a more detailed comparison between the simulated and observed SEDs, including a discussion of the merger stage/morphology, gas fractions, star formation rates, and stellar masses of the galaxies. The structure of the paper is as follows. In Section \[sec:sample\] we describe the observed data. In Section \[sec:simdata\] we summarize the methodology underlying the <span style="font-variant:small-caps;">gadget</span>+<span style="font-variant:small-caps;">sunrise</span> simulations and present the details of our specific simulation library. In Section \[sec:analysis\] we use our suite of simulations to explore the dependence of IR AGN fraction estimates on the intrinsic AGN fraction, and on parameters such as merger stage, level of obscuration, initial gas fraction and viewing perspective. We then present a direct comparison between empirical SED-fitting based AGN fractions to those implied by the best fitting simulated SED to the observed SED. We include estimates of the systematic uncertainties in the derived AGN fractions. In Section \[sec:discussion\], we discuss the caveats associated both with the simulation-based and empirical AGN fractions. We present the summary and conclusions in Section \[sec:conclusions\]. In the appendix, we investigate the potential dependence of our results on the assumed AGN SED template and the sub-resolution structure of the ISM of the simulated galaxies. ![The luminosity-redshift distribution of our [observed]{} sample. 
The redshifts are based on optical, near-IR or mid-IR spectra. The luminosities are based on the IR SED fitting [@Kirkpatrick2015]. The three classes shown (SFG, Composite and AGN) are based on mid-IR spectral fitting as described in the text.[]{data-label="fig:lumz"}](f1.pdf){width="\columnwidth"}

Observed Data & Empirical Classification {#sec:sample}
========================================

Figure \[fig:lumz\] shows the distribution of our sample of galaxies in redshift and IR luminosity. Full details on the sample selection and coverage are presented in @Kirkpatrick2015; here we provide a brief summary. The sample is representative of a 24[$\mu$m]{} flux-limited selection with $S_{24 \rm{\mu m}}$$>$0.1mJy. Most importantly, all galaxies in the sample have low resolution (R $= \lambda / \Delta \lambda \sim 100$) mid-infrared *Spitzer* IRS spectroscopy[^2] [@houck2004]. In addition, our galaxies have up to 11 broadband photometric points across the IR SED that include *Spitzer* IRAC in the near-IR, MIPS 24 and 70 $\ \rm{\mu m}$, and [*Herschel*]{} PACS and SPIRE photometry [@Fazio2004; @Rieke2004; @Poglitsch2010; @Griffin2010]. From the original sample of 343 galaxies in Paper I [@Kirkpatrick2015] we remove the six galaxies with z$>$2.8, as redshift determinations may be uncertain given our spectral coverage. We additionally remove one galaxy where the spectra may not match the photometry. Our final sample consists of 336 galaxies. In Paper I, we fit the mid-IR SEDs using a linear combination of: 1) a star forming component represented either as the local starburst component of @Brandl2006 or the starburst M82 [@Forster2003]; and 2) an AGN component comprised of a power law, with the slope and normalization as free parameters. Each component has a separate screen extinction based on the MW extinction curve from @Draine2003. The mid-IR AGN fraction [$f\rm{(AGN)_{MIR}}$]{} is calculated by taking the ratio of the AGN component over the total SED integrated over $\sim$5-18$\ \rm{\mu m}$. We adopt the classification scheme from Paper I where star-forming galaxies (SFGs) have [$f\rm{(AGN)_{MIR}}$]{}$<0.2$, AGN have [$f\rm{(AGN)_{MIR}}$]{}$>0.8$, and composites have intermediate values of [$f\rm{(AGN)_{MIR}}$]{}$=0.2-0.8$. [@Kirkpatrick2015 find 70% of their sources have $| f {\rm (AGN)}_{\rm MIR}- f \rm{(AGN)}_{\rm MIR,unextinct}|<0.1$, with an average value of $\sim 0.06$. For this reason in this paper we use the unextincted AGN contribution.]{} In Paper I, we construct full IR SED templates, based on the above mid-IR classification, in bins of IR luminosity and redshift. These full IR SED templates are in turn decomposed with a linear combination of a star-forming component and AGN (represented by the respective templates from @Kirkpatrick2012) to find a total 8-1000[$\mu$m]{} IR AGN fraction [$f\rm{(AGN)_{IR}}$]{} for each template. We find that [$f\rm{(AGN)_{MIR}}$]{} correlates quadratically with [$f\rm{(AGN)_{IR}}$]{} for the template SEDs. This relation is then applied to each source in our sample to derive individual empirical [$f\rm{(AGN)_{IR}}$]{}. These values are denoted as [$f\rm{(AGN)_{IR,emp}}$]{} throughout the rest of this paper. [For a given galaxy the empirical [$f\rm{(AGN)_{IR}}$]{} is consistent whether we use true photometry or photometry based on the best-fit simulated SEDs as described in Section \[sec:agnfrac\].
We test this for three random galaxies (an SFG, a Composite, and an AGN) by taking the best-fit simulated SED, generating synthetic photometry (including IRS spectra) from it, and running this “simulated" photometry through the full analysis of @Kirkpatrick2015. The resulting mid-IR classifications were unchanged, and the 8-1000[$\mu$m]{} AGN fractions were within 20% of the values inferred from the real photometry (well within the systematic uncertainties we derive in Section\[sec:agnfrac\])]{}. A key result of Paper I is that the warm dust component is consistent with being AGN powered – this is seen in particular in that the temperature of the warm dust increases as the mid-IR AGN strength increases. This empirical result is not proof, but is consistent with this warm dust being associated with an NLR as discussed in the Introduction. An even broader result is that AGN dominate the counts above $S_{24 \rm{\mu m}}>0.5$ mJy, but even down to the lowest flux levels (0.1mJy), sources with significant AGN (AGN+Composite classification) account for 40-60% of the counts. Roughly half of these faint AGN are in the Composite population, which would likely be missed by traditional AGN surveys.

Simulated Data {#sec:simdata}
==============

We use a suite of idealized simulations of isolated disk galaxies and galaxy mergers [to compare to our observations]{}. All of the hydrodynamical simulations were presented originally in previous works (see Table\[tab:modeltable\]), but some of the radiative transfer calculations were done specifically for this work (see below). The simulations were performed using a modified version of the <span style="font-variant:small-caps;">gadget-2</span> cosmological $N$-body/smoothed particle hydrodynamics (SPH) code [@Springel2001]. Every $10-100$ Myr, the simulation outputs were post-processed using the <span style="font-variant:small-caps;">sunrise</span> [@Jonsson2006; @Jonsson2010] dust radiative transfer code to yield SEDs of the simulated galaxies for $7$ isotropically distributed camera angles. The success of this approach at reproducing SEDs characteristic of typical star-forming galaxies [@Lanz2014], $z \sim 2$ dusty star-forming galaxies [@Narayanan2010a; @Narayanan2010; @Hayward2012], and AGN [@Snyder2013] makes it a natural choice for comparison with our observed sample. Further details regarding <span style="font-variant:small-caps;">gadget</span> and <span style="font-variant:small-caps;">sunrise</span> and the specific simulation library that we use are given in the subsequent subsections.

Hydrodynamical Simulations {#sec:gadget}
--------------------------

<span style="font-variant:small-caps;">gadget-2</span> [@Springel2005gadget] computes gravitational forces using a tree-based gravity solver. Hydrodynamics is treated using [a modified]{} TreeSPH [@Hernquist1989], in a fully conservative manner [@Springel2002]. The simulations include radiative heating and cooling following @Katz1996. Star formation is modeled by applying the volume-density-dependent Kennicutt-Schmidt relation $\rho_{\rm SFR} \sim \rho_{\rm gas}^{1.5}$ [@Kennicutt1998a] with a density cutoff at $n \sim 0.1 \ \rm{cm^{-3}}$ on a particle-by-particle basis. The normalization of this prescription is tuned to reproduce the galaxy-scale KS relation. We adopt an effective equation of state following the two-phase sub-resolution ISM model of @Springel2003, which accounts for the effects of supernova feedback in the form of heating and the evaporation of gas [@Cox2006b].
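As a schematic illustration of the star-formation prescription just described, the sketch below evaluates the sub-grid SFR density of a gas particle. The normalization constant, units, and function name are placeholders assumed for this example only; in the simulations the normalization is tuned to the galaxy-scale KS relation as noted above.

```python
import numpy as np

M_P = 1.673e-24    # proton mass [g]
N_CUT = 0.1        # star-formation density threshold [cm^-3]
C_KS = 1.0e-4      # hypothetical normalization (tuned in the actual simulations)

def sfr_density(rho_gas):
    """Schematic Kennicutt-Schmidt prescription: rho_SFR ~ rho_gas^1.5,
    applied only above the density cutoff n ~ 0.1 cm^-3."""
    n = rho_gas / M_P                     # hydrogen number density [cm^-3]
    return np.where(n > N_CUT, C_KS * rho_gas**1.5, 0.0)
```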
Explicit stellar winds are not included in our simulations. Each gas particle is self-enriched according to its SFR, assuming a yield of $y \sim 0.02$. We employ the black hole accretion and feedback model of @Springel2005feedback. Each galaxy is initialized with a black hole sink particle with initial mass $10^5 \textrm{M}_{\odot}$ that accretes at the Eddington-limited Bondi-Hoyle rate and [5% of the luminous energy of the AGN is returned to the ISM as feedback in the form of thermal energy]{}, to match the $M-\sigma$ relation [@DiMatteo2005]. The simulations adopt a standard 10% [radiative]{} efficiency such that the AGN luminosity is given by $L_{\rm{bol}} = 0.1 \dot{M}_{\rm{BH}} c^2$ [@Shakura1973]. For additional and more detailed information concerning <span style="font-variant:small-caps;">gadget</span>, see @Springel2005gadget. Radiative Transfer Calculations {#sec:sunrise} ------------------------------- To calculate UV–mm SEDs of the simulated galaxies, we perform dust radiative transfer in post-processing using the 3D Monte Carlo radiative transfer code <span style="font-variant:small-caps;">sunrise</span>, which calculates how emission from stellar and AGN particles in the <span style="font-variant:small-caps;">gadget-2</span> simulations is absorbed, scattered, and re-emitted by dust. Star particles from the <span style="font-variant:small-caps;">gadget-2</span> simulations are treated as single-age stellar populations. Those aged $>10 \ \rm{Myr}$ are assigned <span style="font-variant:small-caps;">starburst99</span> [@Leitherer1999] template SEDs according to their ages and metallicities, whereas those with ages $<10 \ \rm{Myr}$ are assigned templates from [@Groves2008], which include emission from H<span style="font-variant:small-caps;">ii</span> and photodissociation regions (PDRs) surrounding the clusters. Black hole particles, with luminosity defined in Section \[sec:gadget\], are assigned the luminosity-dependent AGN SED templates of [@Hopkins2007 hereafter H07], which are empirical templates based on observations of unreddened quasars. In the IR, these templates match the mean quasar SED of @Richards2006a. Once the spatial distribution and SEDs of sources (i.e. stars and AGN) are specified, the dust density field must be determined. To do so, the <span style="font-variant:small-caps;">gadget</span> gas-phase metal density is projected onto a 3D octree grid initially $200 \ \rm{kpc}$ on a side. To calculate the dust density, it is assumed that $40 \%$ of the metals are in the form of dust [@Dwek1998]. The default dust model is the Milky Way model of @DL07. Grid cells are refined until both the variation in the metal density within a grid cell and the total optical depth through a grid cell are less than specified thresholds or until a maximum number of refinement levels is reached; see @Jonsson2010 for details. We have confirmed that the grid refinement parameters ensure that the SEDs are converged within $\sim 10 \%$ [@Hayward2011]. After the above steps, we propagate $10^7$ photon packets through the grid to calculate how the stellar and AGN emission are absorbed and scattered by dust. Then, the radiation absorbed by dust is re-emitted, assuming that the large grains are in thermal equilibrium. A fraction of the small dust grains are assumed to emit thermally, whereas the rest emit an empirically based PAH template [@Groves2008]. This fraction is fixed to 50$\%$ following @Jonsson2010 to match mid-IR flux ratios from SINGS [@Dale2007][^3]. 
The IR emission is then propagated through the grid to account for dust self-absorption, and the equilibrium dust temperatures are recalculated. This process is iterated until convergence. The final result of the radiative transfer calculation is spatially resolved far-UV–mm SEDs (i.e. integral field spectrograph-like data) of the simulated galaxy seen at different times (every 10-50 Myr for the simulations used in this work) from 7 viewing angles. For the purposes of this work, we sum the SEDs of each pixel to yield the integrated SED of the system. The default behavior of <span style="font-variant:small-caps;">sunrise</span> is to calculate the AGN luminosity using the black hole accretion rate from the <span style="font-variant:small-caps;">gadget-2</span> simulation and assuming a standard radiative efficiency of 10%; we denote these runs as AGN1$\times$. For this work, as in @Snyder2013, we also performed radiative transfer calculations in which we assumed radiative efficiencies of 100% (AGN10$\times$) or 0% (AGN0$\times$). This simulates the effects of short-term stochasticity in the black hole accretion rate [e.g. @Hickox2014] that is not present in the time-averaged accretion rates from the <span style="font-variant:small-caps;">gadget-2</span> simulation snapshots (although the accretion rates corresponding to individual timesteps exhibit considerable variation; @Hayward2014arepo). Moreover, by computing the radiative transfer with the AGN emission disabled, we are able to directly disentangle the effect of AGN emission on the resulting SED. Note that in all cases, the same hydrodynamical simulation is used; i.e. thermal AGN feedback is included assuming a radiative efficiency of 10% [for details see @Snyder2013].

### Host Galaxy ISM Treatment \[sec:ism\]

The ISM treatment used in the hydrodynamic simulations is the two-phase model of @Springel2003 (see Section \[sec:gadget\]). [Each resolution element is implicitly assumed to contain a warm ($>10^5$ K) and cold ($<10^4$ K) gas component, but only a single density and an “effective pressure" is actually evolved.]{} How the sub-resolution ISM structure is treated can affect the resulting SED significantly, as discussed in detail in various previous works [e.g. @Younger2009; @Hayward2011; @Snyder2013; @Lanz2014]. To crudely capture the uncertainty caused by not resolving the full structure of the ISM, we use two extreme cases when performing the radiative transfer. The ‘multiphase off’ treatment uses the total dust density ([spreading the total dust mass uniformly through the cell]{}) to calculate the optical depth of each cell; this yields an upper limit on the optical depth through a cell. The ‘multiphase on’ treatment assumes that the unresolved cold clouds have a negligible volume filling factor and thus removes the dust implicitly contained in this phase (according to the sub-grid model) when performing the radiative transfer. This yields a lower limit on the optical depth through a cell. The effect of each assumption on the mid-IR AGN spectral signatures is discussed in @Snyder2013, who generally find that their results are not significantly dependent on the model adopted. We examine how our results depend on the sub-resolution ISM treatment in Appendix\[sec:agnism\].
M1 & $3.78 \times 10^{9}$ & $0.26$ & M1 \[J10,L14\]
M2 & $1.18 \times 10^{10}$ & $0.21$ & M2 \[J10,L14\]
M3 & $4.23 \times 10^{10}$ & $0.16$ & M3 \[J10,L14\]
M4 & $3.39 \times 10^{10}$ & $0.40$ & vc3 \[S13,H15\]
M5 & $4.08 \times 10^{10}$ & $0.60$ & c5 \[H12\]
M6 & $1.56 \times 10^{10}$ & $0.60$ & c6 \[H11,H12,S13\]
M7 & $2.08 \times 10^{10}$ & $0.80$ & b5 \[H12\]
M8 & $8.00 \times 10^{10}$ & $0.80$ & b6 \[H12\]

Simulation Library {#sec:simlib}
------------------

Table \[tab:modeltable\] shows the 8 progenitors in our simulation library. These progenitors are simulated both as isolated disks and in identical mergers. Mergers are particularly relevant because it is believed that gas inflows during such events are a primary trigger for exciting bright IR activity [@Barnes1992; @Barnes1996; @Mihos1996; @Hopkins2010b]. All mergers use the [tilted prograde-prograde]{} ‘e’ orbit from @Cox2006a (with the exception of [M4]{}, which uses the [retrograde-retrograde]{} ‘c’ orbit). In addition to the 8 mergers performed with the default AGN strength (i.e., assuming a radiative efficiency of 10% to determine the AGN luminosity), we also ran no-AGN variants of the radiative transfer calculations on all mergers excluding M5 and M7[^4]. This enables us to directly determine the effect of the AGN on the UV–mm SED. For most mergers, again excluding M5 and M7, we also performed AGN10$\times$ calculations (i.e., assuming a radiative efficiency of 100%). The AGN0$\times$ calculations are not used to fit the observed SEDs, but are a means of calculating the effect of the AGN on the emergent UV–mm SED after host-galaxy dust reprocessing of the AGN light, i.e., $L_{\rm{AGN,reprocessed}} = L_{\rm{total,AGN1\times}} - L_{\rm{total,AGN0\times}}$. Including the isolated disks, AGN1$\times$, and AGN10$\times$ mergers, our observed SEDs are fit to a suite of 22 simulations. The simulations most analogous to $z \sim 0$ galaxies (M1-M4, Table \[tab:modeltable\]) are prescribed the ‘multiphase on’ ISM treatment (Section \[sec:ism\]) by default. Using a restricted sample, we found that using the ‘multiphase off’ ISM treatment did not significantly alter the goodness-of-fit values obtained. For the more luminous and gas-rich $z \sim 3$ simulations (M5-M8), the ‘multiphase off’ treatment (Section \[sec:ism\]) was used because this treatment was found to yield better agreement with the SEDs of high-redshift dusty star-forming galaxies (see @Hayward2011 for discussion). Both ISM assumptions are only approximations, and the truth should lie somewhere in between. The need for these assumptions could potentially be eliminated (or at least mitigated) via use of state-of-the-art simulations that resolve the structure of the ISM [@Hopkins2013; @Hopkins2014]; this is a focus of ongoing work. In Appendix \[sec:agnism\], we examine the potential effects of the ISM treatment on our results and find that they are negligible. Other factors likely to affect the emergent SED include variations in merger orbital parameters, the adopted AGN template, and the dust model. In Appendix \[sec:agntemp\], we investigate how our results depend on the choice of AGN template [see also discussion in @Snyder2013]. In Paper III (Roebuck et al. in prep.) we explore such systematics further, including running these simulations with different AGN templates, different dust compositions (MW, SMC, and LMC), and different ISM treatments.
The spread in the emergent SED allows us to estimate a model uncertainty which we adopt in our SED fitting (Section \[sec:fitting\]).

Analysis {#sec:analysis}
========

AGN Fractions from Our Simulations {#sec:agnfracs}
----------------------------------

![image](f2.pdf){width="1.95\columnwidth"}

![image](f3.pdf){width="1.95\columnwidth"}

![image](f4.pdf){width="1.95\columnwidth"}

![The ratio of the bolometric to IR AGN fractions ($R$, see Equation\[eq:ratio\_def\]) as a function of [$A_V$(=$-2.5 \log L_{V} / L_{V, {\rm input}}$)]{}. As in Figure \[fig:agncollage\], the colors are time relative to coalescence, the time sampling is in steps of 100 Myr, the data points are the mean values over viewing angle, and the error bars are the standard deviation thereof. We show the full range in initial gas fractions and both fiducial (AGN1$\times$) and boosted (AGN10$\times$) models as respectively the smaller and larger symbols. Overlaid is the best fit relation given in Equation \[eq:fagnratio\]. This shows that, largely independent of initial conditions, as $A_V$ grows larger $R$ approaches unity and vice versa. Thus IR AGN fraction measurements are useful measures of bolometric AGN strength when the extinction is high. []{data-label="fig:fagnvav"}](f5.pdf){width=".95\columnwidth"}

The first question we want to address with our simulations is the total contribution of the AGN to the IR emission of our simulated galaxies – including not only the IR emission associated with the nuclear regions of the galaxy (i.e. from the torus), but also dust in the host galaxy heated by the AGN. Figure \[fig:sedpicture\] helps to visualize the role of the ISM in the host galaxy in re-distributing the emission from both the AGN and the stars[,]{} where the left-hand panel shows the SEDs of stars and AGN before host galaxy processing, and the right-hand panel shows the same after host galaxy processing. The AGN SED post-processing is significantly redder (more IR heavy) compared to the input AGN SED pre-processing[,]{} highlighting the importance of accounting for AGN heating of the host galaxy dust. In this paper, we do not investigate the spatial extent of the AGN-powered FIR emission; this will be discussed in Hayward et al. (in prep.). To determine the total IR emission that results from the AGN after host dust processing, we perform the radiative transfer calculations both with and without the AGN emission and then take the difference between the two SEDs. This is given in Equation\[eq:agn\_host\] below. $${f\rm{(AGN)_{IR}}}=\left(\frac{L_{\rm{total}} - L_{\rm{total,no\,AGN}} }{ L_{\rm{total}} } \right)_{8-1000 \rm{\mu m}} \label{eq:agn_host}$$ where $L_{\rm{total}}$ is the post-radiative transfer SED including both the attenuated stellar and AGN emission in addition to dust emission (the fiducial calculation), and $L_{\rm{total,no\,AGN}}$ is the post-radiative transfer SED for the calculation in which the AGN emission is ignored (the red-orange curve in the right panel of Figure \[fig:sedpicture\]; see Section \[sec:simlib\] for details). The role of the input AGN SED template assumed is addressed in Appendix \[sec:agntemp\]. The key conclusion thereof is that ${f\rm{(AGN)_{IR}}}$ is not strongly dependent on the input AGN SED. This conclusion is due to the spatially and spectrally integrated nature of this AGN fraction. The second key question is, assuming we have perfect knowledge of the AGN contribution to the IR, is this a good measure of the intrinsic AGN fraction?
We obtain this by dividing the integrated AGN luminosity in the 0.1-1000[$\mu$m]{} regime over the integrated total (AGN+stars) luminosity (see left-hand panel in Figure \[fig:sedpicture\]). [While this fraction is missing the X-ray and radio emission, for simplicity, we still refer to it as the *bolometric or “bol" AGN fraction*]{}. This is defined in Equation\[eq:agnintrinsic\] below. $$\centering {f\rm{(AGN)_{bol}}}= \left( \frac{ L_{\rm{AGN}} }{ L_{\rm{stars}} + L_{\rm{AGN}} } \right)_{0.1-1000 \rm{\mu m}} \label{eq:agnintrinsic}$$ where $L_{\rm{stars}}$ is the integrated UV-mm luminosity of the emission from stars, PDRs and H<span style="font-variant:small-caps;">ii</span> regions (the latter two are powered by star formation, so the luminosity would be the same if we considered the intrinsic stellar emission before it is processed in the PDRs and H<span style="font-variant:small-caps;">ii</span> regions). Because we calculate it using the intrinsic SEDs, [${f\rm{(AGN)_{bol}}}$]{} is independent of viewing angle. If we were to calculate it from the post-radiative transfer SEDs, the value inferred from a given line of sight would differ from the intrinsic value, but the intrinsic value would be recovered if we averaged over sufficiently many viewing angles. We remind the reader that the AGN luminosity is calibrated by the accretion rate (see Section \[sec:gadget\]) for $L_{\rm AGN,bol}$ from X-ray through mm, and that $\lesssim10\%$ of the AGN flux is emitted in the X-ray, 0.5-10keV, regime [@Hopkins2007; @Snyder2013]. Lastly, we define the ratio of the above AGN fractions as: $$\centering R=\frac{{f\rm{(AGN)_{bol}}}}{{f\rm{(AGN)_{IR}}}} \label{eq:ratio_def}$$ Trends with Merger Stage {#sec:role_mergerstage} ------------------------ Figure \[fig:agncollage\] [*left*]{} shows the relationship between our AGN fractions (Section \[sec:agnfracs\]), as a function of time for six AGN10$\times$ merger simulations (Section \[sec:simlib\]) which have different initial gas fractions, as indicated. We chose the boosted models because they reach higher AGN fractions making trends easier to see, but as we discuss in the following Section, our final conclusions are valid for both fiducial (AGN1$\times$) and boosted models. The time relative to coalescence, defined as the moment the black hole separation goes to zero, is indicated by the colorbar located in the upper-right panel of the figure. Overall, the higher gas fraction models achieve higher AGN fractions peaking around coalescence when the two fractions are on the 1:1 relation. Lower gas fraction models achieve overall lower AGN fractions, and also fall somewhat short of the 1:1 relation even at coalescence. With these differences, in mind, all models show reasonably good agreement between the two fractions up to and including coalescence, but show much higher ${f\rm{(AGN)_{IR}}}$ vs. ${f\rm{(AGN)_{bol}}}$ post-coalescence. In this regime, the AGN fractions are overall much lower. Figure \[fig:agncollage\] [*right*]{} shows the ratio of ${f\rm{(AGN)_{IR}}}$ to ${f\rm{(AGN)_{bol}}}$ vs IR luminosity. The two tracks seen in this figure correspond to different initial gas fractions (upper track is for models with $f_{\rm gas,init}<0.5$ whereas the lower track is for models with $f_{\rm gas,init}>0.5$). Therefore, for all models, the higher the IR luminosity, the higher the $R$ parameter (i.e. the more closely IR AGN fractions trace UV-mm AGN fractions). For any given model, the highest luminosities are achieved around coalescence. 
However, for a fixed IR luminosity, different initial gas fraction models will correspond to different stages in the merger and will have different $R$ values. For example, a $10^{11}$[$\rm{L}_{\odot}$]{} model galaxy can have $R$ values that differ by a factor of 3 (0.2-0.6), with the higher values corresponding to lower initial gas fraction models nearer coalescence. This degeneracy is lifted for the highest luminosities ($>10^{12}$[$\rm{L}_{\odot}$]{}) where all models correspond to high $R$ values. Aside from lensed systems (e.g. @Sklias2014), $L_{\rm IR} \gtrsim 10^{11}$[$\rm{L}_{\odot}$]{} corresponds to the sensitivity limit for current wide-field IR extragalactic surveys. Because the initial gas fraction and merger stage are difficult to determine observationally, the above finding makes it difficult to know, apart from the highest luminosity sources, whether the IR AGN fraction for a given galaxy is a good proxy for the overall AGN fraction. To try to break this degeneracy observationally, Figure \[fig:fagnratio\] examines more closely the reasons behind the above trends. Here we explicitly show $R$ as a function of time for the gas-rich merger M6 in both the AGN1$\times$ and AGN10$\times$ cases. The right-hand panel shows the time evolution of the SFR, $L_{\rm AGN}$, and $A_V$. Both the SFR and $L_{\rm AGN}$ are strongly peaked around coalescence; by contrast, $R$ (as well as $A_V$) is relatively flat until post-coalescence. Therefore $A_V$ most closely traces the evolution of $R$ with time. For clarity, we only show one model in Figure \[fig:fagnratio\], but the trends are qualitatively the same for all models except that the models with lower $f_{\rm gas,init}$ reach lower maximal $A_V$ values. [Figure \[fig:fagnratio\] shows that $R$ is not a good proxy for the AGN luminosity, which is much more peaked around coalescence. In addition, we found no clear trends between either the bolometric or IR AGN fractions and $R$.]{} In the following section we explore further this dependence on $A_V$ as well as provide a relation for converting the IR AGN fraction to a bolometric (here UV-mm) AGN fraction.

Trends with $A_V$ {#sec:interp_av}
-----------------

Figure \[fig:fagnvav\] shows explicitly the universality of this dependence on $A_V$. Here we show $R$ as a function of $A_V$ for all models, including the default (AGN1$\times$) and boosted (AGN10$\times$) cases. As in Figures \[fig:agncollage\]–\[fig:fagnratio\], the points are averaged over camera angle, with the errorbars showing the standard deviation thereof. We find that $R$ is $\sim$1 for the most obscured models and timesteps ($A_V \gtrsim$ 3), which suggests that, when the optical depth of the host galaxy is high, the IR AGN fraction does trace the inherent AGN-to-stellar strength. This ratio remains $>$0.5 (i.e. the IR AGN fraction traces the intrinsic UV-mm AGN fraction within a factor of 2) for all cases when the dust extinction is high (i.e. $A_V \gtrsim$ 1). The spread between the models is roughly consistent with the spread between camera angles. The only outlier is a point at $A_V$$\sim$3.9 which corresponds to the initial few timesteps in the M6 default (AGN1$\times$) run (see Figure \[fig:fagnratio\]). In this regime, the AGN luminosity is negligibly small, with both AGN fractions tiny. While there might be a true physical effect, it is likely that this outlier is noise related to the very small numbers involved. Other models that cover this regime (the boosted M6 and the boosted and unboosted M4) do not show this behavior, [supporting the view that]{} it is likely an outlier.
Still, this might be a behavior associated with more isolated systems. As none of our isolated disk models have the AGN0$\times$ runs necessary for explicit determination of $R$, we do not investigate this further in the present paper. [The relation between $R$ and $A_V$ is approximated by: ]{} $$R = \left\{ \begin{array}{ll} \frac{-0.52}{A_V} + 1.05 & \quad A_V \geq 1 \\ 0.51 \sqrt{A_V} - 0.01 & \quad A_V < 1 \end{array} \right. \label{eq:fagnratio}$$ [This piecewise form qualitatively represents the trend in Figure \[fig:fagnvav\]; however, we do not compute a formal fit, as the points shown in Figure \[fig:fagnvav\] are binned in time and averaged over viewing perspective for clarity. We tested that this trend is maintained when the raw data are plotted as well. ]{} But why should $R$ drop at lower $A_V$ (typically post-coalescence)? Post-coalescence, the AGN luminosity as well as the SFR both decrease dramatically. However, thanks to the recently built-up stars, the bolometric luminosity is strongly dominated by the now aging stellar population [see also @Donoso2012; @Hayward2014]. The stellar SED here has relatively little UV emission compared to optical/near-IR emission. By contrast, the assumed constant AGN SED shape is rich in UV photons even in the post-coalescence regime (although see Section\[sec:discussion\] for caveats). This allows the galaxy to have a higher IR than UV-mm AGN fraction (since UV photons are much more efficiently absorbed than optical photons). Therefore, post-coalescence, the boost in the optical-NIR coming from the older stellar populations, coupled with the redder stellar SED relative to the AGN, leads to lower $R$ values in this regime. We reiterate that this does not imply high values of ${f\rm{(AGN)_{IR}}}$ post-coalescence compared to earlier times, as ${f\rm{(AGN)_{IR}}}$ is typically much lower post-coalescence (see Figure \[fig:agncollage\]). In addition, differential extinction between the AGN and the stars can also contribute to this effect. In essence, while the attenuation of the starlight (which dominates the visible light) is given by $A_V$, there might be a greater column of dust toward the AGN. [The ISM of these simulated galaxies is smoother]{} than that of real galaxies, and it is thus harder to obtain clear sight lines to an AGN in the presence of significant host dust than in reality. [^5] The error bar on $R$ in Figure \[fig:fagnvav\] represents the spread across the 7 sight lines we track, but this spread is likely an underestimate of the spread we would get in real galaxies, given the smoothness of the simulated galaxies. Indeed, we do not observe cases of significant AGN luminosity when the galaxy is not dusty. [Thus Type-1 QSOs]{} are missing in our model library. Because of selection bias [see @Kirkpatrick2015] they are missing from our observed sample as well. We examine this issue more closely in upcoming papers (Roebuck et al., in prep., and Hayward et al., in prep.). We caution that @Hayward2015 find that the $A_V$ inferred via SED modeling with <span style="font-variant:small-caps;">magphys</span> [@daCunha2008] tends to be less than the true $A_V$ when $A_V \gtrsim$1. When the attenuation is high, the only UV-optical emission originates from the relatively unobscured stars. Thus $A_V$ appears lower than it actually is. This underestimation is most severe for merger simulations near coalescence; however, the difference is typically small ($<$0.2 in $A_V$ for the less extreme systems with $A_V\sim2$).
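For reference, the conversion implied by Equation \[eq:fagnratio\] can be evaluated as in the minimal sketch below; the function name is hypothetical, and, as noted above, the relation is only a qualitative approximation.

```python
import numpy as np

def R_of_Av(Av):
    """Approximate R = f(AGN)_bol / f(AGN)_IR as a function of A_V,
    following the piecewise relation of Equation [eq:fagnratio]."""
    Av = np.asarray(Av, dtype=float)
    return np.piecewise(Av, [Av >= 1.0],
                        [lambda x: -0.52 / x + 1.05,
                         lambda x: 0.51 * np.sqrt(x) - 0.01])

# e.g., a rough bolometric (UV-mm) AGN fraction from an IR AGN fraction:
# f_bol ~ R_of_Av(Av) * f_ir   (most meaningful when A_V >~ 1)
```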
Given the @Hayward2015 caveat above, an observationally determined $A_V$ would be a lower limit to the true value, resulting in a lower limit on $R$ if Equation\[eq:fagnratio\] is used. Because $R$ as a function of $A_V$ flattens at high $A_V$, the effect of this in practice is minimal.

SFG & $ 101$ & $ 0.88 \pm 0.59$ & $ 3.79 \pm 0.97$ & $ 4 \pm 10$ & $ 6 \pm 23$ ($ 4 \pm 11$) & $ 0 \pm 0$
Composite & $ 116$ & $ 0.97 \pm 0.63$ & $ 3.87 \pm 1.06$ & $ 38 \pm 20$ & $ 52 \pm 17$ ($ 41 \pm 22$) & $ 15 \pm 10$
AGN & $ 119$ & $ 1.30 \pm 0.76$ & $ 3.87 \pm 1.25$ & $ 66 \pm 18$ & $ 78 \pm 15$ ($ 77 \pm 17$) & $ 55 \pm 7$

![image](f6.pdf)

SED Fitting {#sec:fitting}
-----------

The analysis so far has been based on the simulated SEDs alone. We now want to compare our simulated SEDs with the SEDs of our sample of observed galaxies in order to directly compare their simulation-based and empirical SED-fitting based AGN fractions. To find the best-fit simulated SED for each galaxy, we make use of both its broadband photometry and mid-IR spectra. To include the [*Spitzer*]{} IRS spectra, we create ‘pseudo’-photometry using $\Delta \lambda = 2 \ \micron$ square filters [@Hernancaballero2007; @Sajina2012] at 16 (except where 16 $\rm{\mu m}$ [*Spitzer*]{} *peak-up* detections are available), 18, 20, 22, 26, 28, 30, and 32 $\rm{\mu m}$. This is done to sample the mid-IR without overwhelming the fitting routine or introducing an arbitrary weighting. In addition, this binning of the IRS spectra brings them closer to the spectral resolution of the simulated SEDs themselves (see Figure \[fig:sedmos\]). For each observed galaxy–model SED pair, we compute a $\chi^2$ value as follows: $$\chi^2= \frac{1}{N_{\nu}-1} \sum_{\nu} \frac{(F_{\nu,{\rm{data}}}-a\times F_{\nu,{\rm{model}}})^2}{\sigma_{\nu,{\rm{data}}}^2+\sigma_{\nu, {\rm{model}}}^2}, \label{eq:chidef}$$ where $N_{\nu}$ is the number of photometric bands, $F_{\nu,{\rm{data}}}$ is the observed flux density in each band, $F_{\nu,\rm{model}}$ is the corresponding simulated flux density, $a$ is a linear scale parameter, $\sigma_{\nu,{\rm{data}}}$ is the observed uncertainty, and $\sigma_{\nu,{\rm{model}}}$ is the estimated model uncertainty, which incorporates the uncertainty associated with fixing certain parameters in the model rather than allowing them to vary (e.g. the dust model); see Paper III (Roebuck et al., in prep.) for details. In cases of missing far-IR data, we additionally constrain our models using the $3 \sigma$ upper limits. When a model exceeds an upper limit for any point past $\lambda_{\rm{obs}}>$70[$\mu$m]{} the $\chi^2$ value is multiplied by 100. We only do this for the far-IR points because they determine the overall luminosity of the galaxy; therefore, without this constraint, we could severely overestimate the luminosity and, by extension, the star formation rate and/or obscured AGN luminosity of the galaxy. The model with the lowest $\chi^2$ is taken to be the best-fit model. Including a variable linear scale parameter (see Equation\[eq:chidef\]) achieves overall better $\chi^2$ values. The linear scaling factor $a$ in Equation\[eq:chidef\] is necessary because we have a discrete simulation library that does not fully sample the relevant parameter space and thus likely does not capture the full variation observed in real SEDs.
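To make the fitting procedure concrete, the sketch below evaluates Equation \[eq:chidef\] for a single observed SED and simulated SED, including the far-IR upper-limit penalty; all function and variable names are hypothetical, and the restriction of the scale factor to $1/3 < a < 3$ anticipates the choice motivated in the next paragraph.

```python
import numpy as np

def fit_one_model(F_data, sig_data, F_model, sig_model,
                  lim_flux=None, lim_model=None, a_bounds=(1.0 / 3.0, 3.0)):
    """Reduced chi^2 of Equation [eq:chidef] for one observed/simulated SED
    pair, with the linear scale factor a restricted to a_bounds.
    lim_flux, lim_model: optional 3-sigma far-IR upper limits
    (lambda_obs > 70 um) and the model fluxes in those bands."""
    w = 1.0 / (sig_data**2 + sig_model**2)
    # best-fit linear scale factor from weighted least squares, then clipped
    a = np.sum(w * F_data * F_model) / np.sum(w * F_model**2)
    a = float(np.clip(a, *a_bounds))
    chi2 = np.sum(w * (F_data - a * F_model)**2) / (len(F_data) - 1)
    # penalize any model that violates a far-IR upper limit
    if lim_flux is not None and np.any(a * lim_model > lim_flux):
        chi2 *= 100.0
    return chi2, a

# the best-fit model is the one with the smallest chi^2 over the whole library:
# best = min(library, key=lambda m: fit_one_model(F_obs, sig_obs, m.F, m.sig)[0])
```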
Because use of this scale factor breaks the physical consistency of the model (i.e., the luminosity and shape of the SED are no longer directly physically connected to a specific hydrodynamical simulation), it is best to restrict the scale factor to be only as large as necessary to achieve acceptable fits. A full examination of the effect of this scaling is deferred to Paper III (Roebuck et al., in prep.), but we list the salient conclusions here. We find that when left as a free parameter, the median best-fit scaling factor is 1.73. This suggests that the initial parameters of our simulations library are well suited to this sample. However, the scale factor distribution has broad tails that extend to above or below a factor of 10. We also find that model degeneracies lead to very broad $\chi^2$ vs. model distributions such that a fit with large scale factor may be only marginally better than one with no scale factor (i.e., for which the forward-modeled SED is used). We also re-fit all sources with a limited range of allowed scale factor, $1/3 < a < 3$. We compare the two cases (free vs. restricted scale factor) using the best-fit $\chi^2$ in each case and an odds ratio defined as $P_1/P_2=\exp(-(\chi^2_1-\chi^2_2)/2)$. We find that the odds ratio is typically $>$0.95 and at the very worst is 0.67. This means that the two cases are essentially equally likely. Based on the above analysis, in the following, we adopt the restricted-scale-factor fitting. Figure \[fig:sedmos\] shows representative best-fit SEDs for galaxies from each of the three spectral classes defined in @Kirkpatrick2015. These SEDs provide visual confirmation of the general goodness of fits obtained for our observed sample using our model library. Overall, the fits are good, with $\sim$94% achieving $\chi^2<3$. Given that the SEDs are *forward-modeled* from hydrodynamical simulations and a restricted scale factor ($1/3 < a < 3$) is used, achieving fits of this quality is a non-trivial feat. Table\[tab:bestfitclass\] shows the median $\chi^2$ values for each class and their standard deviations. All classes show good fits, although the AGN have marginally worse $\chi^2$ values. ![image](f7.pdf){width="1.95\columnwidth"} ![Our best-fit [$f\rm{(AGN)_{IR}}$]{} fractions compared with the empirically derived [$f\rm{(AGN)_{IR,emp}}$]{} for the different mid-IR spectral classes. The [$f\rm{(AGN)_{IR}}$]{} values for the colored symbols come directly from the simulations, but those for the greyscale symbols use Equation\[eq:fagnratio\] and therefore are more uncertain. The magenta line is the best fit ([$f\rm{(AGN)_{IR}}$]{} = $0.88 \times [$[$f\rm{(AGN)_{IR,emp}}$]{}$]^{0.63} + 0.12$, including grey points), [and the greyscale is 2$\sigma$ to the fit]{}. The larger symbols and their errorbars represent the median values for each spectral class and median absolute deviation respectively (the greyscale points include all sources within a class, the colored points are restricted to those with direct measurements of [$f\rm{(AGN)_{IR}}$]{}). The broad agreement between the empirically estimated and simulation-based values supports our mid-IR spectral classification, although our simulated fractions imply the AGN contribution to IR-luminous galaxies, especially the Composite galaxies, may be even higher than previously estimated. 
See the text for discussion on the spread and outliers in this plot.[]{data-label="fig:vempiragn"}](f8.pdf){width="\columnwidth"} AGN Fractions From Simulated SED Fits {#sec:agnfrac} ------------------------------------- Table \[tab:bestfitclass\] also shows the median [$f\rm{(AGN)_{bol}}$]{} and [$f\rm{(AGN)_{IR}}$]{} values as well as the median $A_V$ values of the best-fit simulation SEDs for each of our empirical spectral classes. There is broad agreement with the empirical classification, with SFGs showing the smallest AGN fractions, whereas AGN show the largest AGN fractions. The biggest deviation is seen among the Composite sources where the simulations imply AGN fractions $\sim$3$\times$ greater than our empirical values. Indeed, our results suggests that for both the Composite and AGN classified sources, the median IR AGN fractions are $>$50%. The median $A_V$ values for all sub-classes place them well within the optically-thick regime, consistent with the similarity between their ${f\rm{(AGN)_{IR}}}$ and ${f\rm{(AGN)_{bol}}}$ values. Our simulation-based fits allow us to estimate the systematic uncertainties associated with AGN fraction estimates. This is done by computing a marginalized probability for a particular AGN fraction based on the best $\chi^2$ achieved across all models. In other words, the probability of AGN fraction $i$ scales as $P_i\propto e^{-\chi_{{\rm min},i}^2 /2}$. Figure \[fig:fagn\_mosaic\] shows these probability distributions for [$f\rm{(AGN)_{IR}}$]{} binned by mid-IR classification. [The median probability distributions for each mid-IR classification are shown as the thick black histograms, with one median absolute deviation given as the greyscale.]{} The breadth of these distributions reflects the model degeneracies in that comparable $\chi^2$ values can be achieved with very different AGN fractions. This figure shows that the systematic uncertainties [are significant, $\sigma_{f{\rm (AGN)_{IR}}} \sim $0.4.]{} Comparison with Empirical AGN Fraction Estimates {#sec:emp_comparison} ------------------------------------------------ ![image](f9.pdf) Using the median values, we found broad agreement between our empirical and simulation-based AGN fraction estimates (see Table\[tab:bestfitclass\]). Figure \[fig:vempiragn\] examines this issue more closely by comparing the [$f\rm{(AGN)_{IR}}$]{} value of the best-fit simulated SED to the empirically derived [$f\rm{(AGN)_{IR,emp}}$]{} for each individual galaxy. There is significant scatter, unsurprising given the large systematic uncertainties, but there is a clear trend of larger simulation-based AGN fraction for larger empirical AGN fractions. [The Spearman rank coefficient is $\rho=$0.75 confirming a strong positive correlation between [$f\rm{(AGN)_{IR}}$]{} and [$f\rm{(AGN)_{IR,emp}}$]{}.]{} [The curve shown in Figure\[fig:vempiragn\] is the best-fit power law relation of [$f\rm{(AGN)_{IR}}$]{} = $(0.88 \pm 0.10) \times [$[$f\rm{(AGN)_{IR,emp}}$]{}$]^{0.63 \pm 0.17} + (0.12 \pm 0.05)$. The fit was done using [optimize.curve\_fit]{} from the SciPy library. It was done in linear space assuming uniform $\sigma_{f{\rm (AGN)_{IR}}} \sim $0.4 uncertainties estimated from Figure \[fig:fagn\_mosaic\]. We considered log-space fits as well, however those were complicated by most SFGs having zero [$f\rm{(AGN)_{IR,emp}}$]{} values. We caution that, given the large scatter, the above functional form performed only marginally better than a linear fit with a slope close to 1 but a significant offset (i.e. 
y-intercept) of 0.16. Our preference for the power-law fit is due both to the formally (if marginally) better fit and to the better agreement with the median values (the large symbols in Figure\[fig:vempiragn\]).]{} [The above best-fit suggests]{} that while there is a correlation between the empirical and simulated IR AGN fractions, there is also a significant systematic offset from the expected 1:1 relation. Therefore, our simulations imply the AGN contribution to the IR luminosity of the majority of these sources may be even higher than previously estimated, especially at intermediate empirical AGN fractions (i.e. the composite sources). Nevertheless, the existence of a correlation between the simulation-based and empirical values and the consistency between the empirical and simulation-based classifications (i.e., the sources empirically classified as SFGs have the lowest simulation-based estimates, and those empirically classified as AGN have the highest) suggest that our determinations of the dominant power sources of our IR-selected sources are robust. Even considering the large scatter, a few points appear to be outliers. [Here we define as an outlier any galaxy whose best-fit ${f\rm{(AGN)_{IR}}}$ value is $\geq$2$\sigma$ away from the curve fit seen in Figure \[fig:vempiragn\].]{} For the SFGs, these are sources where [$f\rm{(AGN)_{IR}}$]{} $\gtrsim$0.5, which leads to 6 galaxies (6% of all SFGs). Nearly all of these sources are fit to simulated galaxies that exhibit strong PAH emission despite having a luminous buried AGN (with $<$[$f\rm{(AGN)_{bol}}$]{}$> \sim 0.5$). An example of this, GN\_IRS3, can be seen in Figure \[fig:sedmos2\]. For the AGN, outliers are sources with ${f\rm{(AGN)_{IR}}}$$<$0.3, which leads to 5 galaxies (4% of all AGN). All are $z$$<$1 galaxies where, for all but one, the IRS spectra do not cover the principal PAH features shortward of the 9.7[$\mu$m]{} silicate feature, which makes the empirically measured values very uncertain. An example is MIPS279, shown in Figure \[fig:sedmos2\]. Some of the large scatter in ${f\rm{(AGN)_{IR}}}$ vs [$f\rm{(AGN)_{IR,emp}}$]{} for the Composite sources is due to their being subject both to the effects of underestimated deeply obscured AGN (as in the outlier SFGs) and to overestimated AGN due to poor IRS coverage (as with the AGN outliers). But Composites also have the broadest median probability distribution (see Figure \[fig:fagn\_mosaic\]). There are 8 formal outliers (6 above and 2 below the 2$\sigma$ band in Figure \[fig:vempiragn\]), or $\sim$7% of all Composites. The ones above are essentially the same as the star-forming galaxy outliers (an example is given in Figure \[fig:sedmos2\]; 19456000), whereas the two below are effectively the same as the AGN outliers.

Discussion {#sec:discussion}
==========

Caveats Regarding Simulation-Based AGN Fractions
------------------------------------------------

The spatial resolution of the simulations used here is $\ga 100$ pc, and the ISM tends to be smoother than in reality even on resolved scales because of the use of the @Springel2003 equation of state, which pressurizes the ISM to account for the effects of stellar feedback. The average optical depth through a cell is maximized when the cell is assumed to have constant density; any clumpiness will reduce the mean optical depth, although lines of sight that intersect clumps can have higher optical depths than in the uniform-density case [@Witt1996].
We note that $\sim28\%$ of our galaxies are best fit by “multiphase-on" simulations (see Section \[sec:ism\]); it is therefore possible that the AGN reprocessing is *underestimated* because we simply throw away the dense clumps, which typically account for $\sim 90$% of the dust mass. However, in Appendix B, we examine the effect of the “multiphase-on" vs. “multiphase-off" ISM treatments on the emergent [$f\rm{(AGN)_{IR}}$]{} values and find them to be fully consistent. This issue was investigated by @Hopkins2012, who used a multi-scale technique [@Hopkins2010] to self-consistently simulate gas inflows from galaxy to AGN ‘torus’ scales. In their Figure 8, @Hopkins2012 demonstrate that assuming that the gas is smooth leads to column densities that are systematically greater than those inferred for real AGN, whereas assuming that the ISM is clumpy on sub-resolution scales leads to lower column densities that are in better agreement with the observationally inferred values. However, we note that @Hopkins2012 used a probabilistic method to account for obscuration from clumps, whereas in our ‘multiphase on’ runs, we simply ignore the clumps. We further caution that the details of this analysis cannot be translated to our simulations because of significant differences in the resolution, technique, and assumed equation of state for the ISM. Thus, exactly how sub-resolution clumpiness affects the AGN contribution to the IR emission is still uncertain and will be investigated in more detail in the future using simulations with parsec-scale resolution. That being said, we can empirically judge the degree to which this simulated “smoothing" of the ISM plays a role here by considering the fact that for SFGs and AGN, the simulated and empirical IR AGN fraction estimates typically agree to better than 50%, although the scatter is significant, partly due to the scatter among camera angles but also to model degeneracies. Composite sources show much more discrepant simulated and empirical values (median values differ by 3$\times$), but as they are subject to the same ISM treatments as the SFGs and AGN, this discrepancy is likely real as opposed to a by-product of the limitations in our simulations. Even if we have perfect knowledge of the degree to which an AGN contributes to $L_{\rm{IR}}$, translating this to the overall power balance between AGN and stars is not straightforward. Figure \[fig:agncollage\] shows that [$f\rm{(AGN)_{IR}}$]{} relates to [$f\rm{(AGN)_{bol}}$]{} in a complex manner that depends on both the merger stage and the initial gas-richness of the merger progenitors, although we remove some of this degeneracy by re-casting these in terms of $A_V$. The [$f\rm{(AGN)_{bol}}$]{} and [$f\rm{(AGN)_{IR}}$]{} AGN fractions exhibit a $\sim$1:1 relation for the most gas-rich progenitors and only prior to coalescence, i.e. the highest $A_V$ cases. The last caveat relates to the assumed feeding of the black hole and the emergent AGN SED. Our simulations adopt a standard 10% radiative efficiency and an essentially constant quasar-like AGN SED. Both are appropriate choices for high-accretion, radiatively efficient AGN. However, as the gas density drops in the post-coalescence stage, we expect a transition to lower accretion rate, radiatively inefficient AGN [e.g. @Best2012].
This does not affect our conclusion that the IR AGN fraction in the high $A_V$ regime is a good proxy for the UV-mm AGN fraction, but it does affect the exact relation between these two fractions in the low obscuration/post-coalescence regime. This will be investigated further in future work (Roebuck et al. in prep.). Caveats Regarding Empirically Based AGN Fractions ------------------------------------------------- As we found in Section\[sec:emp\_comparison\], the mid-IR spectral classification works well for $>$90% of both AGN- and SFG-classified sources, and the aforementioned types of outliers constitute a negligible fraction of the sources in these classes. The @Kirkpatrick2015 method may underestimate the AGN contribution to the longer wavelength ($\gtrsim 100 \ {\rm \mu m}$) portion of the FIR SED (i.e. it may attribute some AGN-heated-dust emission to star formation). It is very difficult to empirically test this possibility because AGN photons that reach the extended host galaxy are indistinguishable from stellar photons in the FIR. The caveats regarding the observer-based AGN fractions are effectively highlighted by the outliers in Figure \[fig:vempiragn\] and discussed in Section\[sec:emp\_comparison\]. Apart from a couple of outliers due to incomplete coverage of the principle PAH features, the outliers suggest that some strong AGN sources may be mistaken for weak ones as a result of the AGN being so heavily obscured that the AGN’s mid-IR continuum is reprocessed into the far-IR, leaving behind an SFG-like PAH-dominated mid-IR spectrum. This effect is an issue for 6% of our SFG-classified sources, but is likely behind the much higher median AGN fraction for Composite sources inferred from our analysis compared to the empirical analysis in @Kirkpatrick2015. Overall, the Composites are the only population whose median [$f\rm{(AGN)_{IR}}$]{} values are significantly discrepant from the [$f\rm{(AGN)_{IR,emp}}$]{} values. As discussed above, this is unlikely to be an artifact of the simulations since in that case, we would see it for all classes. @Kirkpatrick2015 show that the Composite population represents $\sim$30% of the 24-[$\mu$m]{} source population brighter than 0.1 mJy (comparable to the AGN-classified population). This 24-[$\mu$m]{} depth is fainter than the evolutionary peak in the number counts [e.g. @Papovich2004], and sources above a comparable flux density have been shown to account for the bulk of the cosmic IR background [CIB; @Dole2006], at its peak. Composite sources are usually not considered when looking at the breakdown of the CIB, and their AGN fraction may be $>$50% based on our simulations. This implies that the AGN contribution to the CIB may be underestimated [e.g. @Jauzac2011], [however the magnitude of this effect is very uncertain at present]{}. To fully constrain the AGN contribution requires a better understanding of the composite fraction of lower redshift IR sources than are covered in our sample [and will be addressed in Kirkpatrick et al. in prep.]{} Summary & Conclusions {#sec:conclusions} ===================== This paper presents an analysis of the accuracy of and systematic uncertainties inherent in determining the AGN contribution to $L_{\rm{IR}}$ based on fitting IR SEDs as well as the relation between this IR AGN fraction and the bolometric AGN fraction. We used a suite of hydrodynamic simulations on which radiative transfer calculations were performed to yield simulated galaxy SEDs. 
These simulations were used to investigate the relations between the IR and bolometric AGN fractions and key properties such as merger stage and level of obscuration. The simulated SEDs were then directly fit to the observed IR photometry of a sample of 336 $z$$\sim$0.3–2.8, $\log (L_{\rm{IR}}) = 10.4$-$13.7$ galaxies spanning the full range in empirically derived AGN fractions [see @Kirkpatrick2015 for details]. Our conclusions are the following: - An AGN fraction measured solely in the infrared (here [$f\rm{(AGN)_{IR}}$]{}) is a good predictor of the intrinsic AGN to stellar strength (here [$f\rm{(AGN)_{bol}}$]{}) but only up to and including coalescence, or conversely while the extinction is high ($A_V \gtrsim 1$). We provide relations to convert empirical IR AGN fraction estimates to bolometric AGN fractions as a function of $A_V$. - Our simulation library well represents our observed sample, as indicated both by the overall goodness of fit (Section \[sec:fitting\]) and the examples presented in Figure \[fig:sedmos\]. A more extensive discussion will be presented in Paper III (Roebuck et al. in prep.) - We provide the first estimate of the systematic uncertainties in deriving the AGN fractions of galaxies. [We estimate that these uncertainties are significant with typical 1$\sigma$ uncertainties of $\sigma_{f{\rm (AGN)_{IR}}} \sim $0.4 ]{}. - Within the above uncertainties, there is agreement between our empirically derived and simulation-based IR AGN fractions (i.e. [$f\rm{(AGN)_{IR}}$]{}). Specifically, both the per-class median [$f\rm{(AGN)_{IR}}$]{} values, and the formal fit between individual [$f\rm{(AGN)_{IR}}$]{} and [$f\rm{(AGN)_{IR,emp}}$]{} values support our previous classification: i.e. empirically classified SFG have the least AGN contribution to their total power output; empirically classified AGN have the most. - However, in detail there are key differences. For Composite sources, we find a significant shift in that their median empirical IR AGN fraction is $\sim$15%, but we infer $>$50% from our simulations. This suggests heavily obscured AGN whose strength is underestimated in empirical methods relying on the observed mid-IR spectra. [In addition, 6% of our empirically classified SFGs have AGN fractions $>$ 50%. Both imply the true number density of luminous AGN may be potentially underestimated.]{} Given the large systematic uncertainties on our estimates, this result requires independent confirmation. - Our empirical AGN fraction estimates rely on an AGN template that is heavy in warm dust emitting at 20-40[$\mu$m]{}. More common ‘torus-only’ AGN templates that have less emission in this regime, will lead to AGN fraction estimates that are 2$\times$ lower and therefore will lead to much greater disagreement with our simulated AGN fractions. We are grateful to the anonymous referee for their careful reading and detailed feedback which improved the content and presentation of this paper. This work is supported by NSF grants AST-1313206 and AST-1312418. C.C.H. is grateful to the Gordon and Betty Moore Foundation for financial support. This work is based in part on data obtained with the [*Spitzer*]{} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work also makes use of [*Herschel*]{} data. [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 
Role of AGN Template {#sec:agntemp} ==================== The key conclusion of this paper is that empirical and simulated IR AGN fraction estimates are broadly consistent. Here we explicitly test to what degree both fractions may or may not be affected by the specific AGN template adopted. The default AGN SED template in our simulations is that of @Hopkins2007; hereafter H07. In @Snyder2013, one of the models in our simulations, M6, was also run with two different choices of AGN template: a face-on ($i=0^{\circ}$) and an edge-on ($i=90^{\circ}$) clumpy torus models from @Nenkova2008; hereafter N08. In @Snyder2013 we concluded that at high levels of obscuration, the choice of input AGN template does not significantly affect the emergent IR SED. This insensitivity arises because IR-selected sources tend to exhibit high dust columns to the AGN because the sources are selected to be dust-obscured. If a relatively unobscured AGN template, such as H07, is used as input for the radiative transfer calculations, much of the UV–optical light is reprocessed into the IR. This effect will mimic having as input a more obscured AGN template, such as the @Nenkova2008 $i=90^{\circ}$ model, where the IR already accounts for most of the AGN bolometric luminosity. We tested that the emergent [$f\rm{(AGN)_{IR}}$]{} fractions are consistent between the runs with the default (H07) vs. face-on or edge-on @Nenkova2008 models. This is as expected since we already concluded that in the highly obscured regime, [$f\rm{(AGN)_{IR}}$]{} is a good proxy for the intrinsic AGN fraction which depends only on the integrated AGN power and therefore is independent of adopted AGN template. By contrast, the empirical IR AGN fraction is very sensitive to the adopted IR AGN template. In @Kirkpatrick2015, we used the pure AGN template of @Kirkpatrick2012 which is empirically derived from weak silicate absorption AGN with star-formation component subtracted; hereafter the K12 template. A similar method, with consistent results was used in @Mullaney2011 to derive the $z=0$ intrinsic AGN template; hereafter the M11 template[^6]. These two templates are compared with the H07 template as well as the edge-on and face-on N08 templates in Figure \[fig:agntemp\]. In all cases, we normalize the templates by their 5-15[$\mu$m]{} continuum luminosity to highlight their differences in the “warm-dust" 20-40[$\mu$m]{} regime. This normalization mimics empirical SED fitting since typically the strength of the AGN relative to star-formation hinges on the level of mid-IR continuum vs. PAH emission – extrapolating from that to the overall IR AGN fraction is then a function of the AGN template adopted. Clearly, the M11 and K12 are fairly consistent with each other and both show significant warm dust emission. At the other extreme the H07 and face-on N08 models both are much hotter, with weaker 20-40[$\mu$m]{} emission. The difference in integrated 8-1000[$\mu$m]{} AGN luminosity between the top and bottom template in Figure \[fig:agntemp\] is 2$\times$. Therefore, if we had adopted a face-on N08 model in our empirical SED fitting, our [$f\rm{(AGN)_{IR,emp}}$]{} fractions would be lower by 2$\times$. Our conclusion on the agreement between empirical and simulations-based IR AGN fractions is therefore conditional on empirical methods adopting warm-dust heavy templates such as the M11 or K12 ones. ![The range of AGN+torus templates, normalized by their 5-15$\rm{\mu m}$ luminosities. 
The H07 template (black solid) is the default template in our simulations, while the K12 template (long dashed curve) is the one adopted in our empirical SED decomposition [@Kirkpatrick2015]. The templates show significant spread in their level of warm dust ($\sim$20–40[$\mu$m]{}) emission. In particular, the 8-1000[$\mu$m]{} luminosity ratio between the warm-dust-heavy (M11 and K12) templates and the warm-dust-light (face-on N08 and H07) templates is $\sim$2. The edge-on N08 template is intermediate and differs from both the warm-dust heavy and light templates by $\sim$50%. These differences should be borne in mind when interpreting other empirical SED decompositions in the literature in the context of the results presented in this paper. See text for full references.[]{data-label="fig:agntemp"}](f10.pdf){width="0.5\columnwidth"} Role of ISM Treatment {#sec:agnism} ===================== ![image](f11.pdf){width="0.5\columnwidth"} Our simulation library is not uniform in terms of the treatment of sub-resolution ISM clumpiness (see Section \[sec:ism\]). Here, we test whether the sub-resolution ISM treatment affects our conclusions. In Figure \[fig:mpfagn\], we plot [$f\rm{(AGN)_{IR}}$]{} corresponding to the fiducial and AGN10$\times$ versions of the M2 and M6 mergers for both the ‘multiphase on’ (i.e. the cold clumps have zero volume filling factor) and “multiphase off” (i.e. the ISM is uniform on sub-resolution scales) treatments. These models sample a range of initial gas fractions and AGN strengths. Figure \[fig:mpfagn\] suggests that our estimates of [$f\rm{(AGN)_{IR}}$]{} are insensitive to the ISM treatment used. Note that the shape of the IR SED can depend on the sub-resolution ISM assumption because, all else being equal, the effective dust temperature of the SED is lower in the ‘multiphase off’ case because the dust mass is greater (see @Lanz2014 and @Safarzadeh2015 for details). These differences, however, are lost in our integrated fraction estimates. Ultimately, this result is expected because in the regime in which the IR AGN fraction is non-negligible, the AGN are heavily obscured even in the ‘multiphase on’ case. [^1]: Throughout this paper, IR refers to the integrated 8-1000[$\mu$m]{} emission. [^2]: Refer to Paper I [@Kirkpatrick2015] for a complete list of the IRS programs involved. [^3]: The question of how much mid-IR continuum is assigned to SF is uncertain, and complicates a direct comparison with our empirical $f{\rm (AGN)}_{\rm MIR}$ fractions. The extent of this effect is beyond the scope of this discussion. In this paper we focus on total IR AGN fractions where this effect is small. [^4]: [As seen in Table \[tab:modeltable\], models M5 and M7 do not occupy drastically different parameter spaces than M6 and M8, respectively. This omission should not significantly affect the conclusions of this paper.]{} [^5]: [The ]{} <span style="font-variant:small-caps;">sunrise</span> [calculations may not result in unobscured AGN because all light-emitting particles are ‘fuzzy’; photons start from a random position within a sphere with radius equal to the gravitational softening, which is significantly larger than the scale of the torus emission. This has the effect of ‘smearing out’ the emission, which leads to less variation in the effective optical depth than you would get if it were calculated for individual lines of sight.
It is only when you allow for a clumpy [ISM on scales below the resolution limit]{} that you can get low-attenuation lines of sight (discussed in @Hopkins2012).]{} [^6]: The phenomenological template adopted in @Sajina2012 was scaled to match the M11 template in the long-wavelength “warm-dust" regime, while allowing for the range between the face-on and edge-on N08 models in the hot-dust regime.
--- abstract: 'We classify log-canonical pairs $(X, \Delta)$ of dimension two with $K_X+\Delta$ an ample Cartier divisor with $(K_X+\Delta)^2=1$, giving some applications to stable surfaces with $K^2=1$. A rough classification is also given in the case $\Delta=0$.' address: - | Marco Franciosi\ Dipartimento di Matematica\ Università di Pisa\ Largo B. Pontecorvo 5\ I-56127 Pisa\ Italy - | Rita Pardini\ Dipartimento di Matematica\ Università di Pisa\ Largo B. Pontecorvo 5\ I-56127 Pisa\ Italy - | Sönke Rollenske\ Fakultät für Mathematik\ Universität Bielefeld\ Universitätsstr. 25\ 33615 Bielefeld\ Germany author: - Marco Franciosi - Rita Pardini - Sönke Rollenske title: 'Log-canonical pairs and Gorenstein stable surfaces with $K_X^2=1$' --- Introduction ============ The study of stable curves and, more generally, stable pointed curves is by now a classical subject. Stable surfaces were introduced by Kollár and Shepherd-Barron in [@ksb88] and it was consequently realized (see, for instance, [@alexeev06; @kollar12; @KollarModuli] and references therein) that this definition can be extended to higher-dimensional varieties and pairs. So the study of (semi-)log-canonical pairs became an important topic in the theory of singular higher-dimensional varieties. Here we consider two-dimensional log-canonical pairs in which the log-canonical divisor is Cartier and has self-intersection equal to 1, and we give some applications to Gorenstein stable surfaces. First we study the case with non-empty boundary: \[thm: pairs\] Let $(X, \Delta)$ be a log-canonical pair of dimension 2 with $\Delta>0$, $K_X+\Delta$ Cartier and ample and $(K_X+\Delta)^2=1$. Then $(X,\Delta)$ belongs to one of the types $(P)$, $(dP)$, $(E_+)$ or $(E_-)$ described in List \[list\]. In particular, Theorem \[thm: pairs\] implies that $X$ is either the projective plane, a del Pezzo surface of degree 1, the symmetric product $S^2E$ of an elliptic curve, or a projective bundle $\IP(\ko_E\oplus\ko_E(x))$ over an elliptic curve with the section of square $-1$ contracted. It came rather as a suprise to us that the list is so short and that in each case the underlying surface itself is Gorenstein. The case in which $\Delta=0$ cannot be described so precisely, since it includes, for instance, all smooth surfaces of general type with $K^2=1$; however in Section \[section: bigandnef\] we give a rough classification, according to the Kodaira dimension of a smooth model of $X$ (see Theorem \[thm:normal-case\]). Although log-canonical pairs are interesting in their own right, our main motivation for proving the above results is that, by a result of Kollár, a non-normal Gorenstein stable surface gives rise to a pair as in Theorem \[thm: pairs\] via normalisation (see Corollary \[cor: main motivation\]). In Section \[section: applications to moduli\], we explain how the above pairs can be used to construct stable surfaces and which pairs can occur as normalisations of stable surfaces for given invariants $K^2 $ and $\chi$. In particular, we show that $\chi(X)\geq0$ for a Gorenstein stable surface $X$ with $K_X^2=1$ improving upon results in [@liu-rollenske13]. We will study the geometry and moduli of Gorenstein stable surfaces with $K^2=1$ more in detail in a subsequent paper, building on the classification results proven here. Acknowledgements {#acknowledgements .unnumbered} ---------------- We are grateful to Wenfei Liu for many discussions on stable surfaces and related topics and to Valery Alexeev for some useful communications. 
The first author is a member of GNSAGA of INDAM. The third author is grateful for support of the DFG through the Emmy-Noether program and SFB 701; he enjoyed the hospitality of HIM in Bonn during the final preparation of this paper. The collaboration benefited immensely from a visit of the third author in Pisa supported by GNSAGA of INDAM. This project was partially supported by PRIN 2010 “Geometria delle Varietà Algebriche” of italian MIUR. Notation and conventions. {#notation-and-conventions. .unnumbered} -------------------------- We work over the complex numbers; all varieties are assumed to be projective and irreducible unless otherwise stated. We do not distinguish between Cartier divisors and invertible sheaves in our notation. For a variety $X$ we denote by $\chi(X)$ the holomorphic Euler-characteristic and by $K_X$ a canonical divisor. Classification of pairs {#sec:pairs} ======================= Let $(X, \Delta)$ be a log-canonical (lc) pair of dimension two (cf. [@Kollar-Mori Def. 2.34] for the definition). We call $(X, \Delta) $ stable if $K_X+\Delta$ is ample and Gorenstein if $K_X+\Delta$ is Cartier. The aim of this section is the classification of Gorenstein stable lc pairs with $(K_X+\Delta)^2=1$ and $\Delta> 0 $. We start by listing and describing quickly the cases that occur in our classification. \[list\]  - $X={\mathbb P}^2$ and $\Delta $ is a nodal quartic. Here $p_a(\Delta)=3$ and $K_X+\Delta={\mathcal O}_{{\mathbb P}^2}(1)$. - $X$ is a (possibly singular) Del Pezzo surface of degree $1$, namely $X$ has at most canonical singularities, $-K_X$ is ample and $K^2_X=1$. The curve $\Delta$ belongs to the system $|-2K_{X}|$, hence $K_X+\Delta=-K_X$ and $p_a(\Delta)=2$. - Let $E$ be an elliptic curve and let $a\colon {\widetilde}X\to E$ be a geometrically ruled surface that contains an irreducible section $C_0$ with $C_0^2=-1$. Namely, ${\widetilde}X={\mathbb P}({\mathcal O}_E\oplus {\mathcal O}_E(-x))$, where $x\in E$ is a point and $C_0$ is the only curve in the system $|{\mathcal O}_X(1)|$. Set $F=a{^{-1}}(x)$: the normal surface $X$ is obtained from ${\widetilde}X$ by contracting $C_0$ to an elliptic Gorenstein singularity of degree 1 and $\Delta$ is the image of a curve $\Delta_0\in |2(C_0+F)|$ disjoint from $C_0$, so $p_a(\Delta)=2$. The line bundle $K_X+\Delta$ pulls back to $C_0+F$ on ${\widetilde}X$. - $X=S^2E$, where $E$ is an elliptic curve. Let $a\colon X\to E$ be the Albanese map, which is induced by the addition map $E\times E \to E$, denote by $F$ the class of a fiber of $a$ and by $C_0$ the image in $X$ of the curve $\{0\}\times E+E\times \{0\}$, where $0\in E$ is the origin, so that $C_0 F=C_0^2=1$. Then $\Delta$ is a divisor numerically equivalent to $3C_0-F$, $p_a(\Delta)=2$ and $K_X+\Delta$ is numerically equivalent to $C_0$. An equivalent description of $X$ is as follows (cf. [@catanese-ciliberto93 §1]). Denote by $\ke$ the only indecomposable extension of the form $0\to {\mathcal O}_E\to \ke\to {\mathcal O}_E(0)\to 0$ and set $X={\mathbb P}(\ke)$: then $C_0$ is the only effective divisor in $|{\mathcal O}_X(1)|$. For completeness, we give in Table \[tab: invariants pairs\] the numerical invariants of the four possible cases. Case $\chi({X})$ $q(X)$ $p_a(\Delta)$ $h^0(K_{X}+\Delta)$ --------- ------------- -------- --------------- --------------------- $(P)$ 1 0 3 3 $(dP)$ 1 0 2 2 $(E_-)$ 1 0 2 2 $(E_+)$ 0 1 2 1 : Invariants of $( X, \Delta)$[]{data-label="tab: invariants pairs"} The rest of the section is devoted to proving Theorem \[thm: pairs\]. 
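Before starting on the proof, we record the elementary computations behind two entries of Table \[tab: invariants pairs\]; this is only a consistency check using the genus formula and adjunction, not part of the argument. For type $(P)$ we have $K_X+\Delta={\mathcal O}_{{\mathbb P}^2}(1)$, so $$(K_X+\Delta)^2=1, \qquad p_a(\Delta)=\frac{(4-1)(4-2)}{2}=3,$$ while for type $(dP)$ we have $\Delta\in|-2K_X|$ with $K_X^2=1$, so $$(K_X+\Delta)^2=(-K_X)^2=1, \qquad p_a(\Delta)=1+\frac{\Delta(\Delta+K_X)}{2}=1+\frac{4K_X^2-2K_X^2}{2}=2.$$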
We start with some general remarks: \[lem: h\^0>=3\] Let $X$ be a normal surface and let $L$ be an ample line bundle of $X$ such that $L^2=1$. Then: 1. every curve $C\in|L|$ is irreducible and $h^0(L)\le 3$ 2. $h^0(L)=3$ if and only if $X={\mathbb P}^2$ and $L={\mathcal O}_{{\mathbb P}^2}(1)$ 3. if $h^0(L)=2$, then the system $|L|$ has one simple base point $P$ that is smooth for $X$. We have $LC=1$, hence $C$ is irreducible, since $L$ is ample. Denote by $\nu\colon {\widetilde}C \to C$ the normalization: since $\deg L|_C=1$, one has $h^0(\nu^*L)\le 2$, with equality holding iff ${\widetilde}C$ is a smooth rational curve. Since $h^0(L|_C)\le h^0(\nu^*L)$, the usual restriction sequence $$0\to {\mathcal O}_X\to {\mathcal O}_X(C)=L\to L|_C\to 0$$ gives $h^0(L)\le 3$. Moreover, if $h^0(L)=3$ then $h^0(L|_C)=h^0(\nu^*L)=2$, $C$ is a smooth rational curve and the system $|L|$ is base point free. The morphism $X\to {\mathbb P}^2$ defined by $|L|$ has degree 1 and is finite, since $L$ is ample, so it is an isomorphism. The last part follows from the previous ones and from the fact that $L^2=1$. \[lem:Dred\] Let $Y$ be a smooth surface, let $D>0$ be a nef and big divisor of $Y$ and let $D\red$ be the underlying reduced divisor. Then: 1. $p_a(D\red)\le p_a(D)$ 2. the natural map $\Pic^0(Y)\to \Pic^0(D\red)$ is injective. One has $h^1(K_Y+D)=0$ by Kawamata-Viehweg’s vanishing, thus taking cohomology in the usual restriction sequence $0\to K_Y\to K_Y+D\to K_D\to 0$ one obtains $$p_a(D)=\chi(K_D)+1=\chi(K_Y+D)-\chi(K_Y)+1=h^0(K_Y+D)-\chi(K_Y)+1.$$ Applying the same argument to $D\red$ one obtains instead the inequality: $$p_a(D\red)\le h^0(K_Y+D\red)-\chi(K_Y)+1,$$ since $h^2(K_Y+D\red)=h^0(-D\red)=0$. Then the claim follows since $h^0(K_Y+D\red)\le h^0(K_Y+D)$. This is a slight generalization of [@CFMM Prop. 1.6] and can be proven exactly by the same argument. Next we fix the notation and the assumptions that we keep throughout the rest of the section: $(X,\Delta)$ is an lc pair satisfying the assumptions of Theorem \[thm: pairs\] and ${\epsilon}\colon {\widetilde}X\to X$ is the minimal desingularization. We set $L:=K_X+\Delta$, and ${\widetilde}L:={\epsilon}^*L$; ${\widetilde}L$ is a nef and big divisor with ${\widetilde}L^2=1$ and $h^0(L)=h^0({\widetilde}L)$. We define the divisor ${\widetilde}\Delta$ by the equality ${\widetilde}L=K_{{\widetilde}X}+{\widetilde}\Delta$ and by requiring that $\epsilon_*{\widetilde}\Delta = \Delta$. \[lem:h0\] In the above set-up: 1. $K_{{\widetilde}X}{\widetilde}L<0$, $h^2({\widetilde}L)=0$ 2. ${\widetilde}X$ is ruled. Using the projection formula, we compute $${\widetilde}L {\widetilde}\Delta=\epsilon^* L(\inverse\epsilon)_* \Delta=L\Delta= (K_{ X}+ \Delta)\Delta,$$ so ${\widetilde}L{\widetilde}\Delta$ is a positive number and it is even, by adjunction. Thus $${\widetilde}L K_{{\widetilde}X} = {\widetilde}L^2 -{\widetilde}L{\widetilde}\Delta=1-{\widetilde}L {\widetilde}\Delta<0 .$$ By Serre duality, we have $h^2({\widetilde}L)=h^0(-{\widetilde}\Delta)=0$, since ${\widetilde}L{\widetilde}\Delta=L\Delta>0$ and ${\widetilde}L$ is nef. Since ${\widetilde}L$ is nef, the condition $K_{{\widetilde}X}{\widetilde}L<0$ implies that $\kappa({\widetilde}X)=-\infty$.
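To make the parity claim in the proof above explicit: by adjunction one has $${\widetilde}L\,{\widetilde}\Delta = L\Delta = (K_X+\Delta)\Delta = 2p_a(\Delta)-2,$$ which is an even integer, and it is positive because $L$ is ample and $\Delta>0$.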
Next we look at the adjoint divisor $K_{{\widetilde}X}+{\widetilde}L$: \[lem:RE\] Assume that $h^0({\widetilde}L)\le 2$; then $K_{{\widetilde}X}{\widetilde}L=-1$, and there are the following two possibilities: - $(R)$: $h^0(K_{{\widetilde}X}+{\widetilde}L)=\chi({\widetilde}X)=1$ and $h^0({\widetilde}L)=2$, - $(E)$: $h^0(K_{{\widetilde}X}+{\widetilde}L)=\chi({\widetilde}X)=0$ and $h^0({\widetilde}L)=2\text{ or } 1$. Since ${\widetilde}L$ is nef and big, Riemann-Roch and Kawamata-Viehweg vanishing give: $$\label{eq:K+L} h^0(K_{{\widetilde}X}+{\widetilde}L)= \chi({\widetilde}X)+\frac{{\widetilde}L^2+K_{{\widetilde}X}{\widetilde}L}{2}=\chi({\widetilde}X)+\frac{1+K_{{\widetilde}X}{\widetilde}L}{2}\le \chi({\widetilde}X),$$ where the last inequality follows by Lemma \[lem:h0\]. Since ${\widetilde}X$ is ruled by Lemma \[lem:h0\], we have $\chi({\widetilde}X)\le 1$, so $h^0(K_{{\widetilde}X}+{\widetilde}L)\le 1$ and if equality holds, then $\chi({\widetilde}X)=1$ and ${\widetilde}LK_{{\widetilde}X}=-1$. Assume $h^0(K_{{\widetilde}X}+{\widetilde}L)=0$. Then equation \[eq:K+L\] implies that either $\chi({\widetilde}X)=1$ and $K_{{\widetilde}X}{\widetilde}L=-3$, or $\chi({\widetilde}X)=0$ and $K_{{\widetilde}X}{\widetilde}L=-1$. In the first case, using Lemma \[lem:h0\] and Riemann-Roch we obtain $h^0({\widetilde}L)\ge \chi({\widetilde}L)= 3$, against the assumptions. In the second case, since $K_{{\widetilde}X}{\widetilde}L=-1$, the same argument gives $h^0({\widetilde}L)\ge \chi({\widetilde}L)=\chi({\widetilde}X)+1$, which gives the listed cases. Case $(R)$ of the above Lemma gives case $(dP)$ in our classification: \[lem:dP\] If $(X, \Delta)$ is as in case $(R)$ of Lemma \[lem:RE\] then it is of type $(dP)$. By Lemma \[lem: h\^0>=3\], the base locus of the pencil $|{\widetilde}L|={\epsilon}^*|L|$ is a simple point ${\widetilde}P$ which is the preimage of a smooth point $P\in X$; by adjunction the general $C\in |{\widetilde}L|$ is a smooth elliptic curve. Blowing up the point $P$ we get an elliptic fibration $p\colon \widehat X\to {\mathbb P}^1$ with a section $\Gamma$. Denote by $Z$ the only effective divisor in $|K_{{\widetilde}X}+{\widetilde}L|$. Since ${\widetilde}L Z=0$, $Z$ does not contain the point ${\widetilde}P$ and it is contained in a finite union of curves of $|{\widetilde}L|$, hence it can be identified with a divisor $Z'$ of $\widehat X$ that is contained in a union of fibers of $p$ and does not intersect the section $\Gamma$. By the Kodaira classification of elliptic fibers, $Z'$ is either $0$ or it is supported on a set $R_1,\dots, R_k$ of $(-2)$-curves; the same is true for $Z$, since $Z'$ does not meet $\Gamma$. In particular, we have $K_{{\widetilde}X}Z=0$, hence $$Z^2=ZK_{{\widetilde}X}+Z{\widetilde}L=0,$$ and therefore $Z=0$ by the Index Theorem. So ${\widetilde}L=-K_{{\widetilde}X}$, $X$ is the anti-canonical model of ${\widetilde}X$ and ${\widetilde}\Delta\in |-2K_{{\widetilde}X}|$. We now turn to studying case $(E)$ of Lemma \[lem:RE\]. This gives rise to the cases $(E_-)$ and $(E_+)$ in our classification, depending on the value of $h^0({\widetilde}L)$. \[lem:proj\] If $(X, \Delta)$ is as in case $(E)$ of Lemma \[lem:RE\], then there exists an elliptic curve $E$ and a vector bundle $\ke$ on $E$ of rank 2 and degree 1 such that ${\widetilde}X={\mathbb P}(\ke)$ and ${\widetilde}L={\mathcal O}_{{\widetilde}X}(1)$. By Lemma \[lem:h0\] and Lemma \[lem:RE\], the surface ${\widetilde}X$ is ruled and $q({\widetilde}X)=1$; we denote by $a\colon {\widetilde}X\to E$ the Albanese map and by $F$ a fiber of $a$.
[**Step 1:**]{} [*one has ${\widetilde}L F=1$*]{}The linear system $|{\widetilde}L|$ is non-empty by Lemma \[lem:RE\]. Fix $C\in |{\widetilde}L|$ and denote by $C\red$ the underlying reduced divisor. One has $p_a(C)=1$ by adjunction and $p_a(C\red)\le 1$ by Lemma \[lem:Dred\]. The natural map $\Pic^0(E)=\Pic^0(X)\to \Pic^0(C\red)$ is an inclusion by Lemma \[lem:Dred\]. Thus $p_a(C\red)=1$ and $\Pic^0(E)\to \Pic^0(C\red)$ is an isomorphism. By [@BLRNeron Ch. 9, Cor. 12], $C\red=C_0+Z$, where $C_0$ is an elliptic curve that is mapped isomorphically onto $E$ by $a$, $Z$ is a sum of smooth rational curves and the dual graph of $C\red$ is a tree. We write $C=bC_0+Z'$, where $b>0$ is an integer and $Z'$ has the same support as $Z$. If $b=1$, then ${\widetilde}L F=1$ as claimed. So assume by contradiction that $b>1$: in this case $1={\widetilde}L^2\ge b{\widetilde}L C_0$ gives ${\widetilde}LC_0=0$. Then $C_0^2<0$, $C_0$ is contracted by ${\widetilde}L$ to an elliptic singularity and it does not intersect any other ${\epsilon}$-exceptional curve. Since ${\widetilde}L$ is nef and ${\widetilde}L C={\widetilde}L^2=1$, there is exactly one component $\Gamma$ of $C$ that has nonzero intersection with ${\widetilde}L$, and $\Gamma$ appears in $C$ with multiplicity 1. In particular, $Z'-\Gamma$ is contracted by $\epsilon$ and therefore $C_0(Z'-\Gamma)=0$. We have $C_0\Gamma \le 1$, since $\Gamma$ is contained in a fiber of $a$. Hence we have $$0=C_0{\widetilde}L=C_0(bC_0+\Gamma+(Z'-\Gamma))=bC_0^2+C_0\Gamma\le 1-b<0,$$ a contradiction. [**Step 2:**]{} [*conclusion of the proof*]{}We claim that $a\colon {\widetilde}X\to E$ is a ${\mathbb P}^1$-bundle. Indeed, assume by contradiction that ${\widetilde}X$ contains an irreducible $(-1)$-curve $\Gamma$: then ${\widetilde}L \Gamma>0$, because ${\widetilde}X\to X$ is the minimal resolution and ${\widetilde}L$ is the pull back of an ample line bundle on $X$. On the other hand ${\widetilde}L\Gamma\le {\widetilde}L F=1$, since $\Gamma$ is contained in a fiber $F$ of $a$. Hence ${\widetilde}L\Gamma=1$. But then we have ${\widetilde}L(F-\Gamma)=0$ and $K_{{\widetilde}X}(F-\Gamma)=-1$, namely $F-\Gamma$ contains a $(-1)$-curve $\Gamma_1$ with ${\widetilde}L\Gamma_1=0$, a contradiction. Finally, we set $\ke=a_*{\widetilde}L$. \[lem:E-+\] Assume we are in case $(E)$ of Lemma \[lem:RE\]. 1. If $h^0({\widetilde}L)=2$, then $(X,\Delta)$ is of type $(E_-)$. 2. If $h^0({\widetilde}L)=1$, then $(X,\Delta)$ is of type $(E_+)$. By Lemma \[lem:proj\] there exists an elliptic curve $E$ and a vector bundle $\ke$ on $E$ of rank 2 and degree 1 such that ${\widetilde}X={\mathbb P}(\ke)$ and ${\widetilde}L={\mathcal O}_{{\widetilde}X}(1)$. Denote by $x\in E$ the point such that $\det \ke={\mathcal O}_E(x)$. We will freely use the general theory of $\IP^1$-bundles and especially the classification of such bundles over an elliptic curve, see [@Hartshorne Ch. V.2]. Assume that $\ke$ is decomposable, i.e., that there are line bundles $A$ and $B$ on $E$ such that $\ke=A\oplus B$. Then we have $\deg A+\deg B=\deg \ke =1$ and $1\le h^0(A)+h^0(B)=h^0({\widetilde}L)\le 2$. So there are three possibilities: - $\deg A=-1$, $\deg B=2$; - $\deg A=0$, $A\ne {\mathcal O}_E$ and $\deg B=1$; - $A={\mathcal O}_E$ and $B={\mathcal O}_E(x)$. We denote by $C_0$ the section of ${\widetilde}X$ corresponding to the surjection $\ke \twoheadrightarrow A$. 
In case (a), the system $|{\widetilde}L|=|{\mathcal O}_{{\widetilde}X}(1)|$ has dimension 1 and has $C_0$ as fixed part, contradicting Lemma \[lem: h\^0&gt;=3\]. So this case does not occur. In case (b), we have ${\widetilde}L C_0=0$, but ${\widetilde}L|_{C_0}$ is non-trivial: this contradicts the assumption that ${\widetilde}L$ is the pull-back of an ample line bundle via the birational map ${\epsilon}\colon {\widetilde}X\to X$. So (c) is the only possibility. In this case $C_0$ is contracted to an elliptic singularity of degree 1 by ${\epsilon}$ and $C_0$ is the only curve contracted by ${\epsilon}$ since $\NS({\widetilde}X)$ has rank 2. We have ${\widetilde}\Delta={\widetilde}L-K_{{\widetilde}X}=3C_0+2F$. Since $K_{{\widetilde}X} ={\epsilon}^*K_X-C_0$ and $\Delta$ does not go through the elliptic singularity of $X$ because the pair $(X,\Delta)$ is lc, we obtain ${\epsilon}^*\Delta={\widetilde}\Delta- C_0=2C_0+2F$ and $(X,\Delta)$ is a log surface of type $(E_-)$. If $\ke$ is indecomposable, then $\ke$ is the only non-trivial extension $0\to{\mathcal O}_E\to\ke\to{\mathcal O}_E(x)\to 0$ and $h^0({\widetilde}L)=h^0(\ke)=1$. Up to a translation in $E$, we may assume that $x$ is the origin $0\in E$. Hence ${\widetilde}X=S^2E$ and $C=C_0={\widetilde}L$ is the image of the curve $\{0\}\times E+E\times \{0\}$ via the quotient map $E\times E\to S^2E$ (cf. description of case $(E_+)$ at the beginning of the section). Since ${\widetilde}L$ is ample, we have ${\widetilde}X=X$, ${\widetilde}L= L$ and ${\widetilde}\Delta=\Delta=L-K_X$ is numerically equivalent to $3C_0-F$. So the pair $(X,\Delta)$ is of type $(E_+)$. Finally, we summarize all the above results: If $h^0({\widetilde}L)\ge 3$, then by Lemma \[lem: h\^0&gt;=3\] we have $X={\mathbb P}^2$ and $L={\mathcal O}_{{\mathbb P}^2}(1)$, and thus $(X,\Delta)$ is of type $(P)$. So we may assume $h^0({\widetilde}L)\le 2$, which by Lemma \[lem:RE\] leaves us with the cases $(R)$ and $(E)$, according to the value of $\chi({{\widetilde}X})$. The first case gives type $(dP)$ by Lemma \[lem:dP\] while the second splits up into the cases $(E_+)$ and $(E_-)$ by Lemma \[lem:E-+\]. This concludes the proof of the Theorem. Applications to stable surfaces {#section: applications to moduli} =============================== In this section we explore some consequences of the classification of pairs in Theorem \[thm: pairs\] for the study of stable surfaces with $K^2=1$. Definitions and Kollár’s gluing construction {#section: definitions} -------------------------------------------- Our main reference for this section is [@KollarSMMP Sect. 5.1–5.3]. ### Stable surfaces Let $X$ be a demi-normal surface, that is, $X$ satisfies $S_2$ and at each point of codimension one $X$ is either regular or has an ordinary double point. We denote by $\pi\colon \bar X \to X$ the normalisation of $X$. Contrary to our previous assumptions $X$ is not assumed irreducible, in particular, $\bar X$ is possibly disconnected. The conductor ideal $ \shom_{\ko_X}(\pi_*\ko_{\bar X}, \ko_X)$ is an ideal sheaf in both $\ko_X$ and $\ko_{\bar X} $ and as such defines subschemes $D\subset X \text{ and } \bar D\subset \bar X,$ both reduced and pure of codimension 1; we often refer to $D$ as the non-normal locus of $X$. \[defin: slc\] The demi-normal surface $X$ is said to have *semi-log-canonical (slc)* singularities if it satisfies the following conditions: 1. The canonical divisor $K_X$ is $\IQ$-Cartier. 2. The pair $(\bar X, \bar D)$ has log-canonical (lc) singularities. 
It is called a stable surface if in addition $K_X$ is ample. In that case we define the geometric genus of $X$ to be $ p_g(X) = h^0(X, K_X) = h^2(X, \ko_X)$ and the irregularity as $q(X) = h^1(X, K_X) = h^1(X, {\mathcal O}_X)$. A Gorenstein stable surface is a stable surface such that $K_X$ is a Cartier divisor. The importance of these surfaces lies in the fact that they generalise stable curves: there is a projective moduli space of stable surfaces which compactifies the Gieseker moduli space of canonical model of surfaces of general type [@KollarModuli]. ### Kollár’s gluing principle {#ssec:kollar-glue} Let $X$ be a demi-normal surface as above. Since $X$ has at most double points in codimension one, the map $\pi\colon \bar D \to D$ on the conductor divisors is generically a double cover and thus induces a rational involution on $\bar D$. Normalising the conductor loci we get an honest involution $\tau\colon \bar D^\nu\to \bar D^\nu$ such that $D^\nu = \bar D^\nu/\tau$ and such that ${\mathrm{Diff}}_{\bar D^\nu}(0)$ is $\tau$-invariant (for the definition of the different see for example [@KollarSMMP 5.11]). \[thm: triple\] Associating to a stable surface $X$ the triple $(\bar X, \bar D, \tau\colon \bar D^\nu\to \bar D^\nu)$ induces a one-to-one correspondence $$\left\{ \text{\begin{minipage}{.12\textwidth} \begin{center} stable surfaces \end{center} \end{minipage}} \right\} \leftrightarrow \left\{ (\bar X, \bar D, \tau)\left|\,\text{\begin{minipage}{.37\textwidth} $(\bar X, \bar D)$ log-canonical pair with $K_{\bar X}+\bar D$ ample, \\ $\tau\colon \bar D^\nu\to \bar D^\nu$ involution s.th.\ ${\mathrm{Diff}}_{\bar D^\nu}(0)$ is $\tau$-invariant. \end{minipage}}\right. \right\}.$$ **Addendum:** In the above correspondence the surface $X$ is Gorenstein if and only if $K_{\bar X}+\bar D$ is Cartier and $\tau$ induces a fixed-point free involution on the preimages of the nodes of $\bar D$. An important consequence, which allows to understand the geometry of stable surfaces from the normalisation, is that $$\label{diagr: pushout} \begin{tikzcd} \bar X \dar{\pi}\rar[hookleftarrow]{\bar\iota} & \bar D\dar{\pi} & \bar D^\nu \lar[swap]{\bar\nu}\dar{/\tau} \\ X\rar[hookleftarrow]{\iota} &D &D^\nu\lar[swap]{\nu} \end{tikzcd}$$ is a pushout diagram. Clearly, if $X$ is Gorenstein then $K_{\bar X}+\bar D=\pi^*K_X$ is an ample Cartier divisor. The converse follows from the classification of slc surface singularities in terms of the minimal semi-resolution [@ksb88 Prop. 4.27]. More precisely, in the Gorenstein case the only singular points of $\bar X$ along $\bar D$ are contained in nodes of $\bar D$ and the different ${\mathrm{Diff}}_{\bar D^\nu}(0)$ is the sum of preimages of the nodes, each with coefficient 1. Thus the $\tau$-invariance of the different gives the action on the preimages of the nodes of $\bar D$. Let $P\in X$ be the image of a node of $\bar D$. If $\tau$ fixes a point in the preimage of $P$ in $\bar D^\nu$ then the exceptional divisor over $P$ in the minimal semi-resolution cannot be a cycle of rational curves. Therefore, by classification the non-normal point $P$ is a quotient of a degenerate cusp and it is not Gorenstein. This proves the remaining claim. Computing the main invariants of a stable surface from its normalisation is not difficult, see for example [@liu-rollenske13 Prop. 2.5]. \[prop: invariants\] Let $X$ be a stable surface with normalisation $(\bar X,\bar D)$. Then $K_X^2 = (K_{\bar X}+\bar D)^2$ and $\chi(X) = \chi({\bar X})+\chi(D)-\chi({\bar D})$. 
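To illustrate how these formulas will be used, consider (anticipating the examples in Section \[sect: examples\]) a del Pezzo surface $\bar X$ of degree 1 with boundary a smooth curve $\bar D\in|-2K_{\bar X}|$ of genus 2 and $\tau$ the hyperelliptic involution, so that $D=\bar D/\tau\isom\IP^1$: then $$K_X^2=(K_{\bar X}+\bar D)^2=(-K_{\bar X})^2=1, \qquad \chi(X)=\chi({\bar X})+\chi(D)-\chi({\bar D})=1+1-(-1)=3.$$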
Note in particular that, by Nakai-Moishezon, a Gorenstein stable surface with $K_X^2 = 1$ is irreducible. Summing up, we now state our main motivation for the classification in Theorem \[thm: pairs\] explicitly: \[cor: main motivation\] Let $X$ be a Gorenstein stable surface with $K_X^2 = 1$ and let $(\bar X, \bar D, \tau)$ be the corresponding triple as above. Then $(\bar X, \bar D)$ is of one of the types classified in Theorem \[thm: pairs\]. Numerology {#section: Numerology} ---------- In this section we feed the classification from Section \[sec:pairs\] into Kollár’s gluing construction. The result is a precise list of the possible normalisations of a non-normal Gorenstein stable surface with $K_X^2 = 1$. We also give the possible values of $\chi(X)$ for each type, showing in particular that there are no Gorenstein stable surfaces with $K_X^2 =1$ and $\chi(X)<0$. We start with a preliminary lemma. In order to state it, we keep the notation from Section \[ssec:kollar-glue\] and introduce some additional numerical invariants of a stable surface $X$: - $\mu_1$, the number of degenerate cusps - $\mu_2$, the number of $\IZ/2\IZ$-quotients of degenerate cusps of $X$ - $\rho$, the number of ramification points of the map $\bar D^\nu \to D^\nu$ - $\bar \mu$, the number of nodes of $\bar D$ \[lem: chi(D)\] Let $X$ be a non-normal stable surface. With the above notation: 1. $\chi(D) = \frac 12\left( \chi({\bar D})-\bar \mu\right)+\frac\rho 4+\mu_1$. 2. If $K_{\bar X}+\bar D$ is Cartier then $\chi(D) \geq 2\chi({\bar D})+\frac \rho 4 +\mu_1$. 3. If $X$ is Gorenstein, then $\chi(D)\geq 2\chi({\bar D})+1$. In addition, if equality holds in (2) or (3), then $\bar D$ is a union of rational curves and has $-3\chi({\bar D})$ nodes. We remark that there exist examples of non-Gorenstein stable surfaces for which the inequalities (2) and (3) of Lemma \[lem: chi(D)\] fail. The curve $\bar D$ has nodes by the classification of lc pairs. Recall that Diagram \[diagr: pushout\] is a pushout diagram in the category of schemes. In particular, the points of $D$ correspond to equivalence classes of points on $\bar D^\nu$ with respect to the relation generated by $x\sim y $ if $\bar \nu(x) =\bar\nu (y)$ or $\tau(x) = y$. Note that if an equivalence class contains the preimage of a node of $\bar D$ then either it contains no fixed point of $\tau$ and the image point is a degenerate cusp, or it contains exactly two fixed points of $\tau$ and the image is a $\IZ/2\IZ$-quotient of a degenerate cusp. (Compare the discussion in [@liu-rollenske12 Sect. 4.2] and [@ksb88 §4].) Thus, of the $2\bar \mu$ preimages of nodes of $\bar D$ in $\bar D^\nu$, exactly $2\mu_2$ are fixed by $\tau$ and there are exactly $\bar\mu +\mu_2$ points in $D^\nu$ that map to images of nodes in $D$. By the normalisation sequences we have $$\begin{gathered} \chi({\bar D^\nu}) = \chi({\bar D}) + \bar \mu,\\ \chi(D) = \chi({D^\nu})-((\bar \mu+\mu_2)-(\mu_1+\mu_2)) = \chi({D^\nu})+\mu_1-\bar \mu. \end{gathered}$$ Combining this with the Hurwitz formula for $\bar D^\nu\to D^\nu$, which gives $$\chi({D^\nu}) = \frac 1 2\chi({\bar D^\nu})+\frac\rho 4,$$ we get $$\chi(D) = \frac 12\chi({\bar D^\nu})+\frac\rho 4+\mu_1-\bar \mu= \frac 12\left( \chi({\bar D})-\bar \mu\right)+\frac\rho 4+\mu_1$$ as claimed in (1). Now assume in addition that $K_{\bar X}+\bar D$ is Cartier. Then, by adjunction (see e.g. [@KollarSMMP Sect. 4.1]), $K_{\bar D}=(K_{\bar X}+\bar D)|_{\bar D}$ is ample, so $\bar D$ is a stable curve.
Therefore, every rational component of the normalisation has at least three marked points mapping to nodes in $\bar D$ and thus $\chi({\bar D^\nu})\leq 2\bar \mu/3$, which implies $-\bar \mu\geq 3\chi({\bar D})$. This gives (2) and proves the last sentence in the statement. Equality in (2) is attained if and only if $\bar D^\nu$ consists of $-2\chi({\bar D})$ rational curves, each with three marked points; then the curve $\bar D$ has $-3\chi({\bar D})$ nodes. In order to prove (3), we only need to show that if equality occurs in (2) and $X$ is Gorenstein, then there is at least one degenerate cusp. But if equality holds in (2) then $\bar D$ has $-3\chi(\bar D)>0$ nodes and, since $X$ is Gorenstein, each node of $\bar D$ maps to a degenerate cusp, that is, $\mu_1>0$. \[thm: possible chi for normalisation types\] There exists a non-normal Gorenstein stable surface with normalisation of given type (as defined and classified in Section \[sec:pairs\]) exactly in the following cases:

              $\chi({ X})=0$   $\chi({ X})=1$   $\chi({ X})=2$   $\chi({ X})=3$
  --------- ---------------- ---------------- ---------------- ----------------
   $(P)$        $\times$         $\times$         $\times$         $\times$
   $(dP)$                        $\times$         $\times$         $\times$
   $(E_-)$                                        $\times$         $\times$
   $(E_+)$                       $\times$         $\times$

One could extend the above numerical analysis to all stable surfaces with $K_X^2=1$ and Gorenstein normalisation $(\bar X, \bar D)$. From a moduli perspective such surfaces do not form a good class: they would include some but not all $2$-Gorenstein surfaces. The restrictions follow from Proposition \[prop: invariants\], the invariants given in Table \[tab: invariants pairs\] and Lemma \[lem: chi(D)\], where in the cases $(E_\pm)$ we use that not all components of $\bar D$ can be rational. The existence of examples is settled below in Section \[sect: examples\]. The above results allow us to refine in the case $K^2=1$ the $P_2$-inequality $\chi\ge -K^2$, proved in [@liu-rollenske13] for Gorenstein stable surfaces: \[cor: chi>=0\] If $X$ is a Gorenstein stable surface with $K_X^2=1$, then $\chi(X)\ge 0$. Let $X$ be a Gorenstein stable surface with $K_X^2 =1$. If $X$ is normal then $\chi(X)\geq 1$ by [@bla94 Theorem 2]. If $X$ is not normal then $\chi(X)\geq 0$ by Theorem \[thm: possible chi for normalisation types\]. Examples {#sect: examples} -------- For completeness, we now provide explicit examples for each case given in Theorem \[thm: possible chi for normalisation types\]. We will analyse such surfaces more systematically in a subsequent paper. By Theorem \[thm: triple\] and Corollary \[cor: main motivation\], for each type we need to specify a (nodal) boundary $\bar D$ and an involution $\tau$ on the normalisation of $\bar D$ which induces a fixed-point-free action on the preimages of the nodes. The holomorphic Euler-characteristic is then computed by Proposition \[prop: invariants\]. The case $(P)$ : Examples with $0\leq \chi(X)\leq 3$ are given in [@liu-rollenske13 Sect. 5.1]. [The case $(dP)$]{} :   - Take $\bar D$ to be a general section in $|-2K_{\bar X}|$, which is smooth, and $\tau$ the hyperelliptic involution. This gives $\chi(X)=3$. - Let $E_1, E_2\in |-K_{\bar X}|$ be two distinct smooth isomorphic curves and fix the intersection point as a base point on both. Let $\bar D = E_1+E_2$ and let $\tau$ be the involution that exchanges the two curves preserving the base-point. Then $\chi(X)=2$. - Assume that $|-K_{\bar X}|$ contains two distinct nodal plane cubics and let $\bar D$ be their union. The normalisation $\bar D^\nu$ consists of two copies of $\IP^1$ each with three marked points which are the preimages of the nodes of $\bar D$.
An involution on $\bar D^\nu$ interchanging the components is uniquely determined by its action on the marked points and we can choose it in such a way that the preimage of the base-point of the pencil is not preserved by the involution (see Figure \[fig: construction\]). One can easily see that this gives a rational curve of genus 2 (not nodal) as non-normal locus, thus $\chi(X)=1$. \[fig: construction\] \[curve/.style =[thick, every loop/.style=[looseness=10, min distance=30]{}]{}, q1/.style=[color=red]{}, q2/.style=[color=green]{}, q3/.style=[color=yellow]{}, scale = 0.6 \] (Q3) at (0,2.3); (Q2) at (1.6,1.2) ; (Q1) at (-1.6,1.2) ; (-2.5, -1.5) rectangle (2.5, 3.5) node \[above right\] [$\bar X$]{}; (-2,-1) node \[ right\] [$\bar D_1$]{} to\[out=45, in=315\] (-1.6, 1.2) to\[out=135, in =170,loop\] () to\[out=350, in = 260\] (.2,3); (-2,-1) node \[left\] [$\bar D_2$]{} to\[out=45, in=315\] (-1.6, 1.2) to\[out=135, in =170,loop\] () to\[out=350, in = 260\] (.2,3); (Q3) circle (2pt) ; at (Q3) \[right\] [$Q_3$]{}; (Q2) circle (2pt); at (Q2) \[above left\] [$Q_2$]{}; (Q1) circle (2pt); at (Q1) \[above right\] [$Q_1$]{}; (Xbar) ; (-2.5, -1.5) rectangle (2.5, 2) node \[above right\] [$X$]{}; (1,1.5) node \[right\][$D$]{} to\[out = 250, in = 30\] (0,0) to\[out = 210, in = 145, loop\] () to\[out = 325, in =355 , loop\] () to\[out =175, in = 5\] (179:1.5cm); (0,0) circle (2pt) node \[below, yshift = -.1cm\] [$P$]{}; \(X) ; (3,1) – (2.5,1) node\[above\] [$Q_{3,1}$]{} – (1,1) node\[above\] [$Q_{1,2}$]{} to (-0.5,1) node\[above\] [$Q_{1,1}$]{}–(-1,1); (2.5,1) circle (2pt); (1,1) circle (2pt); (-0.5,1) circle (2pt); (3,-1)–(2.5,-1) node\[below\] [$Q_{2,2}$]{}–(1,-1) node\[below\] [$Q_{2,1}$]{}–(-0.5,-1) node\[below\] [$Q_{3,2}$]{}–(-1,-1); (2.5,-1) circle (2pt); (1,-1) circle (2pt); (-0.5,-1) circle (2pt); at (3.5, 2) [$\bar D^\nu$]{}; (2.5, 0.5) – (2.5, -0.5); (1, 0.5) – (1, -0.5); (-.5, 0.5) – (-.5, -0.5); at (3,0) [$\tau$]{}; (Dbarnu) ; (3,0) –(2.5,0) circle (2pt) node\[above\] [$P_{3}$]{}–(1,0) circle (2pt) node\[above\] [$P_2$]{}–(-0.5,0) circle (2pt) node\[above\] [$P_1$]{}–(-1,0); at (3.5, 1) [$ D^\nu$]{}; (Dnu) ; (X.north) to node\[right\] [$\pi$]{} (Xbar.south); (Dbarnu.west) to node\[above\] [$\bar\nu$]{} (Xbar.east); (Dbarnu.south) to node\[right\] [$\pi$]{} (Dnu.north); (Dnu.west) to node\[above\] [$\nu$]{} (X.east); The case $(E_-)$ : The divisor $\bar D$ is a curve of arithmetic genus 2, which after pullback to the minimal resolution becomes a degree 2 cover of the base curve of the projective bundle. If $\bar D$ is smooth, choosing as $\tau$ either the hyperelliptic involution or the involution corresponding to the double cover of the elliptic base curve gives the two possible values for $\chi(X)$. The case $(E_+)$ :   - A general $\bar D$ is a smooth curve of genus two and choosing $\tau$ to be the hyperelliptic involution we get $\chi(X) = 2$. - For the numerical Godeaux case let $E\isom \IC/\IZ[{\mathrm{i}}]$. Then multiplication by $1+{\mathrm{i}}$ induces an endomorphism of degree 2 on $E$, that is, an isomorphism $E\isom E/\xi$ for a particular 2-torsion element in $E$. We can choose $\bar D \isom E\cup E/\xi \isom E\cup E$ in case $(E_+)$ (cf. [@catanese-ciliberto93 §2]) and the intersection of the two components is a single point. Thus there is an involution $\tau$ on $\bar D$ with quotient $E$ which exchanges the two components while keeping the base-point. With this choice $\chi(X)=1$. 
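As a consistency check on the last example, $\chi(X)=1$ can be recovered directly from Proposition \[prop: invariants\] and Lemma \[lem: chi(D)\]: here $\bar D$ consists of two elliptic curves meeting in one node, so $p_a(\bar D)=2$, $\chi({\bar D})=-1$ and $\bar\mu=1$; the involution $\tau$ has no fixed points on $\bar D^\nu$ (so $\rho=0$) and the node maps to a degenerate cusp (so $\mu_1=1$), whence $$\chi(D)=\frac 12\left( \chi({\bar D})-\bar \mu\right)+\frac\rho 4+\mu_1=\frac 12(-1-1)+0+1=0, \qquad \chi(X)=\chi({\bar X})+\chi(D)-\chi({\bar D})=0+0+1=1.$$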
Normal Gorenstein stable surfaces with $K^2=1$ {#section: bigandnef} ============================================== In this section we complement the results of Section \[sec:pairs\] by omitting the condition that the boundary should be non-empty, that is, we study Gorenstein log-canonical surfaces $X$ with $K_X$ ample and $K_X^2=1$. In the terminology of Section \[section: definitions\] these are normal Gorenstein stable surfaces and they occur in the compactified Gieseker moduli space. Of course, in this case we cannot hope for a complete picture: for instance surfaces of general type with $K^2=\chi=1$, known as Godeaux surfaces, have been an object of study for decades and a full classification has not been achieved yet. Still, we are able to give a rough description according to the Kodaira dimension of ${\widetilde}X$: \[thm:normal-case\] Let $X$ be a normal Gorenstein stable surface with $K^2_X=1$ and let ${\epsilon}\colon {\widetilde}X\to X$ be its minimal desingularization. Then 1. If $\kappa({\widetilde}X)=2$, then $X$ has canonical singularities. 2. If $\kappa({\widetilde}X)=1$, then ${\widetilde}X$ is a minimal properly elliptic surface and $X$ has precisely one elliptic singularity of degree $1$. 3. If $\kappa({\widetilde}X)=0$, denote by $X_{\min}$ the minimal model of ${\widetilde}X$. Then there exists a nef effective divisor $D_{\min}$ on $X_{\min}$ and a point $P$ such that: - $D^2_{\min}=2$ and $P\in D_{\min}$ has multiplicity $2$ - ${\widetilde}X$ is the blow-up of $X_{\min}$ at $P$ - $X$ is obtained from ${\widetilde}X$ by blowing down the strict transform of $D_{\min}$ and it has either one elliptic singularity of degree 2 or two elliptic singularities of degree 1. 4. If $\kappa({\widetilde}X)=-\infty$, then there are two possibilities: - $\chi({\widetilde}X)=1$ and ${\widetilde}X$ has 1 or 2 elliptic singularities - $\chi({\widetilde}X)=0$, ${\widetilde}X$ has 1, 2 or 3 elliptic singularities; in this case, the exceptional divisors arising from the elliptic singularities are smooth elliptic curves. One can show that all cases actually occur (see for example [@fpr14a]). The proof of Theorem \[thm:normal-case\] occupies the rest of the section. We fix set-up and notations to be kept throughout: $X$ is a normal Gorenstein stable surface with $K^2_X=1$, ${\epsilon}\colon {\widetilde}X\to X$ is the minimal resolution and ${\widetilde}L:={\epsilon}^*K_X$, so ${\widetilde}L$ is a nef and big line bundle with ${\widetilde}L^2=1$. One has ${\widetilde}L=K_{{\widetilde}X}+{\widetilde}D$, where ${\widetilde}D$ is effective and ${\widetilde}L {\widetilde}D=0$. It follows in particular that ${\widetilde}LK_{{\widetilde}X}=1$. By the classification of normal Gorenstein lc singularities (cf. [@ksb88 Thm. 4.21]), the singularities of $X$ are either canonical or elliptic. The elliptic Gorenstein singularities are described in [@reid97 4.21]: denoting by $x_1, \dots x_k\in X$ the elliptic singular points, we can write ${\widetilde}D={\widetilde}D_1+\dots +{\widetilde}D_k$, where ${\widetilde}D_i$ is a divisor supported on ${\epsilon}{^{-1}}(x_i)$ such that $p_a(Z)<p_a({\widetilde}D_i)=1$ for every $0<Z<{\widetilde}D_i$. The divisors ${\widetilde}D_i$ are called the [*elliptic cycles*]{} of ${\widetilde}X$. The degree of the elliptic singularity $x_i$ is the positive integer $-{\widetilde}D_i^2$. 
The invariants of $X$ and ${\widetilde}X$ are related as follows: \[lem:invariants-normal\] In the above set-up: $$p_g(X)=h^0({\widetilde}L)\ge p_g({\widetilde}X), \quad q(X)\le q({\widetilde}X) \quad \chi(X)=\chi({\widetilde}X)+k.$$ By the projection formula we have $h^0({\widetilde}L)=h^0({\epsilon}_*{\widetilde}L)=h^0(K_X)=p_g(X)$; in addition there is an inclusion $H^0(K_{{\widetilde}X})\hookrightarrow H^0({\widetilde}L)$, since ${\widetilde}D$ is effective. The remaining inequalities follow by the 5-term exact sequence associated with the Leray spectral sequence for ${\mathcal O}_{{\widetilde}X}$: $$0\to H^1({\mathcal O}_X)\to H^1({\mathcal O}_{{\widetilde}X})\to H^0(R^1{\epsilon}_*{\mathcal O}_{{\widetilde}X})\to H^2({\mathcal O}_X)\to H^2({\mathcal O}_{{\widetilde}X})\to 0,$$ since $R^1{\epsilon}_*{\mathcal O}_{{\widetilde}X}$ has length 1 at each of the points $x_1, \dots, x_k$ and is zero elsewhere. We start by dealing with the case $\kappa({\widetilde}X)>0$. \[lem:kappa&gt;0\] If $\kappa({\widetilde}X)>0$, then there are the following possibilities: 1. $X$ has canonical singularities 2. ${\widetilde}X$ is a minimal properly elliptic surface and $X$ has precisely one elliptic singularity of degree $1$. Let $\eta\colon {\widetilde}X\to X_{\min}$ be the morphism to the minimal model. Let $M= \eta^*K_{X_{\min}}$, so that $K_{{\widetilde}X}=M+E$, where $E$ is exceptional for $\eta$. We have ${\widetilde}L(M+E)={\widetilde}L K_{{\widetilde}X}={\widetilde}L ^2=1$. Since ${\widetilde}L$ is nef and big and some multiple of $M$ moves, we have ${\widetilde}L M=1$, ${\widetilde}L E=0$. Thus, since ${\widetilde}L$ is the pullback of an ample divisor, $E$ is also contracted by $\epsilon$. Since $\epsilon$ is assumed minimal, there is no $\epsilon$-exceptional $(-1)$-curve, while on the other hand $\eta$ is a composition of blow-ups of a smooth surface. Hence $E=0$, namely ${\widetilde}X$ is minimal. If $\kappa({\widetilde}X)=2$, then the index theorem applied to ${\widetilde}L$ and $K_{{\widetilde}X}$ gives $K^2_{{\widetilde}X}=1$ and $K_{{\widetilde}X}$ and ${\widetilde}L$ are numerically equivalent (otherwise they span a 2-dimensional subspace on which the intersection form is positive). It follows that ${\widetilde}D\geq0$ is numerically trivial, hence ${\widetilde}D=0$ and $K_{{\widetilde}X}={\epsilon}^* K_X$, namely $X$ has canonical singularities. If $\kappa({\widetilde}X)=1$, then ${\widetilde}X$ is minimal properly elliptic and $K^2_{{\widetilde}X}=0$. It follows that $({\widetilde}D_1+\dots + {\widetilde}D_k)K_{{\widetilde}X}={\widetilde}DK_{{\widetilde}X}={\widetilde}L K_{{\widetilde}X}=1$. Since ${\widetilde}D_i K_{{\widetilde}X}>0$ for every $i$, we have $k=1$, namely ${\widetilde}D$ is connected and ${\widetilde}D^2=-1$. Next we consider the case $\kappa({\widetilde}X)=0$: \[lem:normal-kappa0\] If $\kappa({\widetilde}X)=0$, then $X$ is as in Theorem \[thm:normal-case\], . Let $\eta\colon {\widetilde}X\to X_{\min}$ be the morphism to the minimal model, so $\eta$ is a composition of $m$ blow-ups in smooth points $P_1,\dots P_m$, possibly infinitely near. Denote by $E_i$ the total transform on ${\widetilde}X$ of the exceptional curve that appears at the $i$-th blow-up: then $E_i^2=E_iK_{{\widetilde}X}=-1$, $E_iE_j=0$ if $i\ne j$, and $K_{{\widetilde}X}$ is numerically equivalent to $\sum _{i=1}^mE_i$. Observe that each $E_i$ contains at least one irreducible $(-1)$-curve. Since $\epsilon$ is relatively minimal, ${\widetilde}L$ is positive on irreducible $(-1)$-curves. 
Hence we have $1={\widetilde}L K_{{\widetilde}X}=\sum_{i=1}^m{\widetilde}LE_i\ge m$, and we conclude that $m=1$, i.e., $\eta$ is a single blow-up. We set $E=E_1$. Write ${\widetilde}D={\widetilde}D_1+\dots +{\widetilde}D_k$, with the ${\widetilde}D_i$ disjoint elliptic cycles. We have $2=({\widetilde}L - K_{{\widetilde}X})K_{{\widetilde}X}={\widetilde}DK_{{\widetilde}X}={\widetilde}D_1K_{{\widetilde}X}+\dots +{\widetilde}D_kK_{{\widetilde}X}$, thus either $k=1$ and $2={\widetilde}D_1K_{{\widetilde}X}={\widetilde}D_1 E$, or $k=2$ and $1={\widetilde}D_iK_{{\widetilde}X}={\widetilde}D_iE$, for $i=1,2$. In the former case we have ${\widetilde}D_1^2={\widetilde}D^2=-2$, and in the latter case we have ${\widetilde}D_1^2={\widetilde}D_2^2=-1$, since $p_a({\widetilde}D_i)=1$. We set $D_{\min}=\eta_*{\widetilde}D$. The divisor $D_{\min}$ has $D_{\min}^2=2$ and contains $P$ with multiplicity 2. In order to complete the proof we need to show that $D_{\min}$ is nef. Let $\Gamma$ be an irreducible curve of $X_{\min}$ and write $\eta^*\Gamma={\widetilde}\Gamma+\alpha E$, where ${\widetilde}\Gamma$ is the strict transform and $\alpha\ge 0$. We have $\Gamma D_{\min}=(\eta^*\Gamma)(\eta^*D_{\min})=\eta^*\Gamma({\widetilde}L+E)=\eta^*\Gamma {\widetilde}L\ge 0$, since ${\widetilde}L$ is nef. Finally we consider the case $\kappa({\widetilde}X)=-\infty$: If $\kappa({\widetilde}X)=-\infty$, then there are the following possibilities: - $\chi({\widetilde}X)=1$ and ${\widetilde}X$ has 1 or 2 elliptic singularities - $\chi({\widetilde}X)=0$, ${\widetilde}X$ has 2 or 3 elliptic singularities and ${\widetilde}D$ is a union of disjoint smooth elliptic curves. Since ${\widetilde}X$ is ruled, we have $\chi({\widetilde}X)\le 1$, with equality if and only if ${\widetilde}X$ is rational. Assume $\chi({\widetilde}X)\le 0$ and let $a\colon {\widetilde}X\to B$ be the Albanese map, where $B$ is a smooth curve of genus $b>0$. Write ${\widetilde}D={\widetilde}D_1+\cdots +{\widetilde}D_k$; since the general fiber of $a$ is a smooth rational curve and $p_a({\widetilde}D_i)=1$ for all $i$, no ${\widetilde}D_i$ can be contracted to a point by $a$, hence ${\widetilde}D_i$ dominates $B$. It follows that $b=1$ and ${\widetilde}D_i$ contains a smooth elliptic curve $D'_i$. Since ${\widetilde}D_i$ is minimal among the divisors $Z>0$ supported on ${\epsilon}{^{-1}}(x_i)$ and such that $p_a(Z)=1$, it follows that ${\widetilde}D_i=D'_i$. One has $\chi(X)\ge 1$ by [@bla94 Theorem 2] and $\chi(X)\le 3$ by the stable Noether inequality for normal Gorenstein stable surfaces [@sakai80; @liu-rollenske13]. Since $k>0$, Lemma \[lem:invariants-normal\] gives $1\le k\le 3$ if $\chi({\widetilde}X)=0$ and $1\le k\le 2$ if $\chi({\widetilde}X)=1$. [CFML97]{} Valery Alexeev. Higher-dimensional analogues of stable curves. In [*International [C]{}ongress of [M]{}athematicians. [V]{}ol. [II]{}*]{}, pages 515–536. Eur. Math. Soc., Z[ü]{}rich, 2006. R. Blache. Positivity results for [E]{}uler characteristics of singular surfaces. , 215(1):1–12, 1994. Siegfried Bosch, Werner L[ü]{}tkebohmert, and Michel Raynaud. , volume 21 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \[Results in Mathematics and Related Areas (3)\]*]{}. Springer-Verlag, Berlin, 1990. F. Catanese and C. Ciliberto. Symmetric products of elliptic curves and surfaces of general type with [$p_g=q=1$]{}. , 2(3):389–411, 1993. Ciro Ciliberto, Paolo Francia, and Margarida Mendes Lopes. Remarks on the bicanonical map for surfaces of general type. , 224(1):137–166, 1997.
Marco Franciosi, Rita Pardini, and Sönke Rollenske. Gorenstein stable surfaces with [$K_X^2=1$]{} and [$\chi(\mathcal O_X)\neq 1$]{}, 2014. article in preparation. Robin Hartshorne. . Springer-Verlag, New York, 1977. Graduate Texts in Mathematics, No. 52. J[á]{}nos Koll[á]{}r and Shigefumi [M]{}ori. , volume 134 of [ *Cambridge Tracts in Mathematics*]{}. Cambridge University Press, Cambridge, 1998. With the collaboration of C. H. Clemens and A. Corti, Translated from the 1998 Japanese original. Janós Kollár. Moduli of varieties of general type. In G. Farkas and I. Morrison, editors, [*Handbook of Moduli: Volume II*]{}, volume 24 of [*Advanced Lectures in Mathematics*]{}, pages 131–158. International Press, 2012, arXiv:1008.0621. J[á]{}nos Koll[á]{}r. , volume 200 of [ *Cambridge Tracts in Mathematics*]{}. Cambridge University Press, Cambridge, 2013. With a collaboration of S[á]{}ndor Kov[á]{}cs. J[á]{}nos Koll[á]{}r. . 2014. book in preparation. J[á]{}nos Koll[á]{}r and Nick Shepherd-Barron. Threefolds and deformations of surface singularities. , 91(2):299–338, 1988. Wenfei Liu and S[ö]{}nke Rollenske. Pluricanonical maps of stable log surfaces, 2012. arXiv:1211.1291, to appear in Adv. in Math. Wenfei Liu and S[ö]{}nke Rollenske. Geography of [G]{}orenstein stable log surfaces, 2013, arXiv:1307.1999. to appear in TAMS. Miles Reid. Chapters on algebraic surfaces. In [*Complex algebraic geometry ([P]{}ark [C]{}ity, [UT]{}, 1993)*]{}, volume 3 of [*IAS/Park City Math. Ser.*]{}, pages 3–159. Amer. Math. Soc., Providence, RI, 1997. Fumio Sakai. Semistable curves on algebraic surfaces and logarithmic pluricanonical maps. , 254(2):89–120, 1980.
--- abstract: 'The boundary integral method is extended to derive closed integro-differential equations applicable to computation of the shape and propagation speed of a steadily moving spot and to the analysis of dynamic instabilities in the sharp boundary limit. Expansion of the boundary integral near the locus of traveling instability in a standard reaction-diffusion model proves that the bifurcation is supercritical whenever the spot is stable to splitting, so that propagating spots can be stabilized without introducing additional long-range variables.' author: - | L.M. Pismen\ [*Department of Chemical Engineering and Minerva Center for Nonlinear Physics of Complex Systems,\ Technion – Israel Institute of Technology, 32000 Haifa, Israel*]{} title: 'Nonlocal Boundary Dynamics of Traveling Spots in a Reaction-Diffusion System' --- Localized structures in non-equilibrium systems (dissipative solitons) have been studied in both experiments and computations in various applications, including chemical patterns in solutions [@swin] and on surfaces [@imb], gas discharges [@pur99] and nonlinear optics [@firth]. Interest in dynamic solitary structures, in particular in optical [@firth] and gas discharge systems [@purw], has recently been driven by their possible role in information transmission and processing. A variety of observed phenomena can be reproduced qualitatively with the help of simple reaction-diffusion models with separated scales [@ohta; @keros; @meron; @pi94; @osmur]. Extended models of this type included nonlocal interactions due to gas transport [@pi94; @mikh], Marangoni flow [@pi97] or optical feedback [@firth; @piop]. A great advantage of scale separation is the possibility of constructing strongly nonlinear structures analytically in the sharp-interface limit. An alternative approach based on Ginzburg–Landau models supplemented by quintic and/or fourth-order differential (Swift-Hohenberg) terms [@rab] has to rely on numerics in more than one dimension. Dynamical solitary structures are most interesting from the point of view of both theory and potential applications. Existence of traveling spots in sharp-interface models is indicated by the translational instability of a stationary spot [@mikh]. This instability is a manifestation of the general phenomenon of the parity-breaking (Ising–Bloch) bifurcation [@coul; @hagmeron], which takes a single stable front into a pair of counter-propagating fronts forming the front and the back of a traveling pulse. Numerical simulations, however, failed to produce stable traveling spots in the basic activator-inhibitor model, and the tendency of moving spots to spread out laterally had to be suppressed either by global interaction in a finite region [@mikh] or by adding an extra inhibitor with specially designed properties [@bode]. The dynamical problem is difficult for theoretical study, since a moving spot loses its circular shape, and a free-boundary problem is formidable even for the simplest kinetic models. Numerical simulation is also problematic, due to the need to use a fine grid to resolve the sharp gradients of the activator; therefore actual computations were carried out for moderate scale ratios. A large amount of numerical data, such as the inhibitor field far from the spot contour, is superfluous. This could be overcome if it were possible to reduce the PDE solution to the local dynamics of a sharp boundary.
Unfortunately, a purely local equation of front motion [@hagmeron] is applicable only when the curvature far exceeds the diffusion scale of the long-range variable, whereas a spot typically suffers splitting instability [@ohta] before growing so large. On the other hand, the nonlocal boundary integral method [@gope] is applicable only when the inhibitor dynamics is fast compared to the characteristic propagation scale of a front motion, i.e. under conditions when no dynamic instabilities arise and traveling spots do not exist. It is the aim of this Letter, to extend the nonlocal boundary integral method to dynamical problems, and to find out with its help conditions of supercritical bifurcation for steadily moving spots. We consider the standard FitzHugh–Nagumo model including two variables – a short-range activator $u$ and a long-range inhibitor $v$: $$\begin{aligned} \epsilon^2 \tau u_t & = & \epsilon^2 \nabla^2 u + V'(u) - \epsilon v, \label{sueq} \\ v_t & = & \nabla^2 v - v - \nu + \mu u. \label{sveq} \end{aligned}$$ Here $V(u)$ is a symmetric double-well potential with minima at $u=\pm 1$; $\epsilon \ll 1$ is a scale ratio, and other parameters are scaled in such a way that the effects of bias and curvature on the motion of the front separating the up- and down states of the short-range variable are of the same order of magnitude. The local [*normal*]{} velocity of the front is $$c_n = \tau^{-1}(b v -\kappa) + O(\epsilon), \label{eqmot}$$ where $\kappa$ is curvature and $b$ is a numerical factor dependent on the form of $V(u)$; for example, $b=3/\sqrt{2}$ for the quartic potential $V(u)= -\frac{1}{4}(1-u^2)^2$. By definition, the velocity is positive when the down-state $u<0$ advances. In the sharp boundary approximation valid at $\epsilon \ll 1$, a closed equation of motion for a solitary spot propagating with a constant speed can be written by expressing the local curvature in Eq. (\[eqmot\]) with the help of a suitable parametrization of the spot boundary, and resolving Eq. (\[sveq\]) rewritten in a coordinate frame propagating with a speed $c$ (as yet unknown). It is convenient to shift the long-range variable $v= w- \nu +\mu$, so that $w(\infty)=0$ when the up-state $u=1-O(\epsilon)$ prevails at infinity. The stationary equation of $w$ in the coordinate frame translating with the speed [**c**]{} is $${\bf c} \cdot \nabla w + \nabla^2 w - w = 2\mu H, \label{sweq}$$ where, neglecting $O(\epsilon)$ corrections, $H=1$ inside and $H=0$ outside the spot. The solution can be presented in the form of an integral over the spot area $\cal S$: $$w({\bf x} ) = -\frac{\mu}{\pi} \int_{\cal S} {\cal G}({\bf x}-\mbox{\boldmath $\xi$}) d^2 \mbox{\boldmath $\xi$}, \label{wint}$$ where the kernel ${\cal G}$ contains a modified Bessel function $K_0$: $${\cal G}({\bf r}) = \frac{1}{2\pi} e^{-\frac{1}{2}{\bf c \cdot r}} K_0\left( |{\bf r}|\sqrt{1+\mbox{$\frac{1}{4}c^2$}}\right) . \label{wker}$$ This integral can be transformed into a contour integral with the help of the Gauss theorem. To avoid divergent expressions, the contour should exclude the point ${\bf x}=\mbox{\boldmath $\xi$}$. Clearly, excluding an infinitesimal circle around this point does not affect the integral (\[wint\]), since the kernel (\[wker\]) is only logarithmically divergent. Replacing ${\cal G}({\bf r}) =\nabla^2 {\cal G}({\bf r}) + {\bf c} \cdot \nabla {\cal G}({\bf r})$ $({\bf r} \neq 0)$, we transform the integral in Eq. 
(\[wint\]) as $$\begin{aligned} -\int_{\cal S} {\cal G}({\bf x}-\mbox{\boldmath $\xi$}) d^2 \mbox{\boldmath $\xi$} = \int_{\cal S} \nabla_{\mbox{\boldmath $\xi$}} \cdot {\bf H} ({\bf x}-\mbox{\boldmath $\xi$}) d^2 \mbox{\boldmath $\xi$} \cr = \oint_{\Gamma'} {\bf n}(s) \cdot {\bf H} ({\bf x}-\mbox{\boldmath $\xi$}(s)) ds, \label{gauss} \end{aligned}$$ where ${\bf H}({\bf r}) = \nabla{\cal G}({\bf r}) + {\bf c} {\cal G}({\bf r})$ and [**n**]{} is the normal to the contour $\Gamma'$. The vector Green’s function [**H**]{} corresponding to the kernel in Eq. (\[wint\]) is computed as $$\begin{aligned} {\bf H}({\bf r}) &=& e^{-\frac{1}{2}{\bf c}\cdot {\bf r}} \left[\mbox{$\frac{1}{2}$}{\bf c} K_0\left( |{\bf r}| \sqrt{1+\mbox{$\frac{1}{4}$} c^2}\right) \right. \cr &-& \left. \sqrt{1+\mbox{$\frac{1}{4}$} c^2} \frac{{\bf r}}{|{\bf r}|} K_1\left( |{\bf r}| \sqrt{1+\mbox{$\frac{1}{4}$} c^2}\right) \right]. \label{gauss2} \end{aligned}$$ When ${\bf x}$ is a boundary point, $\Gamma'$ consists of the spot boundary $\Gamma$ cut at this point and closed by an infinitesimally small semicircle about ${\bf x}$. The integral over the semicircle equals to $\pi$. Defining the external normal to $\Gamma$ as the tangent ${\bf t}={\bf x}'(s)$ rotated clockwise by $\pi/2$, the required value of the long-range variable on the spot boundary (parametrized by the arc length $s$ or $\sigma$) is expressed, using the 2D cross product $\times$, as $$v (s) = -\nu + \frac{\mu}{\pi} \oint_{\Gamma} {\bf H} ({\bf x}(s)- {\bf x} (\sigma)) \times {\bf x}' (\sigma) d\sigma . \label{woint}$$ To obtain a closed integral equation of a steadily moving spot, it remains to define a shift of parametrization accompanying shape-preserving translation. Recall that Eq. (\[eqmot\]) determines the propagation velocity $c_n$ along the [*normal*]{} to the boundary. In addition, one can introduce arbitrary [ *tangential*]{} velocity $c_t$ which has no physical meaning but might be necessary to account for the fact that each “material point” on a translated contour is, generally, mapped onto a point with a different parametrization even when the shape remains unchanged. The tangential velocity can be defined by requiring that each material point be translated strictly parallel to the direction of motion, i.e. $c_n{\bf n}+c_t{\bf t}={\bf c}$. Taking the cross product with ${\bf c}$ yields $c_t= c_n ({\bf c} \times {\bf t})$ $/ ({\bf c} \cdot {\bf t}).$ Then eliminating $c_t$ gives the normal velocity $c_n = {\bf c} \times {\bf t}$ necessary for translating the contour along the $x$ axis with the velocity $c$. Using this in Eq. (\[eqmot\]) yields the condition of stationary propagation $${\bf c} \times {\bf x}'(s) = \tau^{-1} [b v(s)- \kappa(s)] . \label{shape}$$ The form and the propagation speed of a slowly moving and weakly distorted circular contour can be obtained by expanding Eq. (\[shape\]) in $c=|{\bf c}|$ near the point of traveling bifurcation $\tau=\tau_0$, which is also determined in the course of the expansion. For a circular contour with a radius $a$, Eq. (\[woint\]) takes the form $$\begin{aligned} v(\phi) &=& -\nu + \frac{\mu a}{\pi} \int_{0}^{2\pi} e^{-\frac{1}{2} c a(\cos \phi- \cos \varphi)} \; \times \cr && \left[ \mbox{$\frac{1}{2}$} c \cos \varphi \, K_0\left((2a\sqrt{1+\mbox{$\frac{1}{4}c^2$}} \sin \mbox{$\frac{1}{2}$} |\phi-\varphi| \right) \right. \cr & & + \sin \mbox{$\frac{1}{2}$}|\phi-\varphi| \sqrt{1+\mbox{$\frac{1}{4}c^2$}}\, \; \times \cr && \left. 
K_1\left(2a\sqrt{1+\mbox{$\frac{1}{4}c^2$}} \sin \mbox{$\frac{1}{2}$} |\phi-\varphi| \right) \right] d \varphi, \label{woint1} \end{aligned}$$ where $\phi$ or $\varphi$ is the polar angle counted from the direction of motion. The angular integrals that appear in the successive terms of the expansion are evaluated iteratively, starting from $\Phi_0(a)= \pi I_0(a) K_0(a)$ and using the relations $$\begin{aligned} \Psi_k(a) &=& \int_{0}^{\pi} \sin^{2k+1} \frac{\phi}{2}\, K_1\left( 2 a\sin \frac{\phi}{2}\right) d \phi = - \frac{1}{2}\frac{d \Phi_k}{da}, \cr \Phi_k (a) &=& \int_{0}^{\pi}\!\! \sin^{2k} \frac{\phi}{2}\, K_0 \!\left( 2 a\sin \frac{\phi}{2} \right) \! d\phi = -\frac{1}{2a}\, \frac{d(a\Psi_{k-1})}{da}. \end{aligned}$$ Effect of small boundary distortions on $v$ can be computed directly with the help of Eq. (\[wint\]), where the integration should be carried out only over a small area swept by the displaced spot boundary. This approach is most useful for stability analysis with respect to small perturbations of a known static shape, and is easier than using the expansion of Eq. (\[woint\]) with a perturbed boundary. For a circular spot, we expand the perturbations of both $v$ and $\rho$ in the Fourier series $$\begin{aligned} \widetilde \rho(\phi,t) &=& \rho(\phi)-a = \sum_{n \geq 2} c^n a_n e^{\lambda_n t}\cos n\phi , \cr \widetilde v(\phi,t) &=& \sum_{n \geq 2} \widehat v_n e^{\lambda_n t}\cos n\phi . \label{expand} \end{aligned}$$ The curvature is expressed as $$\begin{aligned} \kappa(\phi) &=& \frac{\rho^2 -2 \rho^2_\phi - \rho\rho_{\phi\phi}}{(\rho^2+\rho^2_\phi)^{3/2}} \cr &=& a^{-1}+ 3(c/a)^2 a_2 e^{\lambda_2 t}\cos 2\phi + O(c^3). \label{curv} \end{aligned}$$ Since the displaced point should remain on the boundary, the distortion $\widetilde \rho(\varphi)$ should be compensated by rigid displacement of the spot by an increment $\widetilde \rho(\phi)$ when $\widetilde v(\phi)$ is computed (see the inset in Fig. \[f1\]). The resulting equation for eigenvalues $\lambda_n$ following from Eq. (\[eqmot\]) is $$\begin{aligned} && \tau \lambda_n = \frac{ n^2-1}{a^2} - \frac{4ab\mu}{\pi^2} \int_0^{\pi} \cos n\phi \, d \phi \; \times \cr && \int_0^\pi [\widetilde \rho(\varphi)- \widetilde \rho(\phi) \cos (\varphi-\phi)] e^{-\frac{1}{2} c a(\cos \phi- \cos \varphi)}\; \times \cr && K_0\left(2a\sqrt{1+ \lambda_n +\mbox{$\frac{1}{4}c^2$}} \sin \mbox{$\frac{1}{2}$} |\phi-\varphi| \right) d \varphi. \label{wint1} \end{aligned}$$ Using the constant zero-order term in the expansion of Eq. (\[woint1\]) together with $\kappa= a^{-1}$ in Eq.(\[eqmot\]) yields the stationarity condition $$\nu = -(ba)^{-1} + \mu a \left[ K_1(a)\,I_0(a) - K_0(a)\,I_1(a) \right] . \label{cstat}$$ A stationary solution stable against collapse or uniform swelling exists in the region in the parametric plane $\mu,\nu$ (Fig. \[f1\]) bounded by the cusped curve C and the axis $\nu=0,\, \mu>2/b$. This curve is drawn as a parametric plot with $\nu(a)$ given by Eq. (\[cstat\]) and $\mu(a)$ by Eq. (\[wint1\]) with $c, \, n$ and $\lambda_0$ set to zero (or, equivalently, by the condition $F_0'(a)=0$, where $F_0(a)$ is the right-hand side of Eq. (\[cstat\]). The first-order term in the expansion of Eq. (\[woint1\]) is proportional to $\cos \phi$, and should compensate at the traveling bifurcation point the left-hand side of Eq. (\[shape\]). 
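As an aside, the stationary branch defined by Eq. (\[cstat\]) is elementary to evaluate numerically. The following minimal sketch (Python with SciPy; it assumes the quartic potential, so $b=3/\sqrt{2}$, and a purely illustrative value of $\mu$) traces $\nu(a)$ at fixed $\mu$ and locates the fold points $F_0'(a)=0$, whose locus as $\mu$ varies is the cusped existence boundary C of Fig. \[f1\]:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

b = 3.0 / np.sqrt(2.0)          # quartic potential V(u) = -(1/4)(1 - u^2)^2

def nu_stationary(a, mu):
    """Right-hand side F_0(a) of Eq. (cstat): the value of nu for which a
    circular spot of radius a is stationary at the given mu."""
    return -1.0 / (b * a) + mu * a * (k1(a) * i0(a) - k0(a) * i1(a))

mu = 5.0                        # illustrative value only
a = np.linspace(0.05, 6.0, 4000)
nu = nu_stationary(a, mu)

# Fold points F_0'(a) = 0; collecting them while sweeping mu traces curve C.
dnu = np.gradient(nu, a)
folds = a[:-1][np.diff(np.sign(dnu)) != 0]
print("fold radii at mu = %.1f:" % mu, folds)
```

With the stationary branch in hand, we return to the balance of the first-order $\cos\phi$ term against the left-hand side of Eq. (\[shape\]).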
This yields the bifurcation condition $$\tau_0 = b\mu a [a(I_1(a)K_0(a) - I_0(a)K_1(a)) + 2I_1(a)K_1(a)] , \label{disp1d}$$ which coincides with the known result obtained by other means [@mikh]. The curve T in Fig. \[f1\] shows the traveling instability threshold for $\tau_0 =1$. The static spot is unstable below this curve; the locus shifts up (to smaller radii) as $\tau$ decreases, and exits the existence domain at $\tau<1/4$. At $\tau>1$, the dominant instability at large radii is a static splitting instability. Its locus, determined by Eq. (\[wint1\]) with $n=2$ and $c=\lambda_2=0$, is the curve S in Fig. \[f1\]. Another possible dynamic instability is breathing instability [@ohta; @haim; @pur99]. Its locus is given by Eq. (\[wint1\]) with $c=n=0$ and $\lambda_0=i\omega$. The frequency $\omega$ as a function of the spot radius $a$ is computed by solving the equation $\tau \omega =a^{-2}$Im $F(a,\omega)/$Re $F(a,\omega)$, where $F(a,\omega)$ is the right-hand side of Eq. (\[wint1\]) computed as $$\begin{aligned} F(a,\omega) &=& 2 \mu a \left[I_1\left(a\sqrt{1+i\omega}\right) K_1\left(a\sqrt{1+i\omega}\right) \right. \cr &-& \left. I_0\left(a\sqrt{1+i\omega}\right) K_0\left(a\sqrt{1+i\omega}\right) \right] . \label{dis00} \end{aligned}$$ The curve B in Fig. \[f1\] shows the bifurcation locus at $\tau=1$. The instability region retreats to small radii (large $\nu$) at large $\tau$ and spreads downwards as $\tau$ decreases. The balloon of stable solutions disappears altogether at $\tau<0.5$ after the tips of both dynamic loci meet on the existence boundary. In the second order, Eq. (\[woint1\]) yields a constant term $$v^{(2,0)} = - \mu a^2 [a(I_1(a)K_0(a) - I_0(a)K_1(a)) + I_1(a)K_1(a)] \label{dis20}$$ and a dipole term $ v^{(2,2)} = q^{(2,2)} \cos 2\phi$, where $$\begin{aligned} q^{(2,2)} = \mbox{$\frac{1}{4}$} \mu a^2 [a(I_0(a)K_1(a) - I_1(a)K_0(a)) \cr - 3I_1(a)K_1(a) +2 I_2(a)K_2(a)] . \label{dis22} \end{aligned}$$ The constant term is positive and causes contraction of the average radius of the moving spot by an increment $\widetilde a = - a^2 c^2 bv^{(2,0)}$. The second-order dipolar term in the right-hand side of Eq. (\[shape\]), $\widetilde v^{(2,2)} = \widetilde q^{(2,2)} a_2 \cos 2\phi$, as well as the third-order first harmonic term, $\widetilde v ^{(3,1)} = \widetilde q^{(3,1)} a_2 \cos \phi$, needed for the solvability condition to follow, are read from Eq. (\[wint1\]) with $n=2$ and $\lambda_2=0$, respectively, in zero and first order in $c$: $$\begin{aligned} \widetilde q^{(2,2)} &=& - 3a^{-2} + 2b\mu [I_1(a)K_1(a)- I_2(a)K_2(a)], \label{disp22} \\ \widetilde q^{(3,1)} &=& b\mu a^2 I_1(a)K_1(a) . \label{disp31} \end{aligned}$$ The coefficient $\widetilde q^{(2,2)}$ vanishes at the splitting instability threshold (curve S in Fig. \[f1\]), and must be negative when the circular spot is stable. Consequently, the distortion amplitude is $a_2 = - q^{(2,2)}/ \widetilde q^{(2,2)} <0$, so that the dipole term causes contraction of the moving spot in the direction of motion and expansion in the normal direction. Continuing the expansion to the third order, we compute the first harmonic term contributing to the solvability condition. The latter has the form $ \widetilde \tau c = k c^3$, where $ \widetilde \tau = \tau-\tau_0$ and the coefficient $k$ determining the character of the bifurcation is computed as $$k = b\mu \left( q^{(3,1)} - \tau_0'(a) a^2 v^{(2,0)} - \widetilde q ^{(3,1)} q ^{(2,2)}/ \widetilde q ^{(2,2)}\right). 
\label{disp3}$$ The first term is the coefficient at the first harmonic in the third order of the expansion of Eq. (\[woint1\]). The second term takes into account the second-order radius correction to the first-order first harmonic term. The last term gives the effect of dipolar shape distortion; it becomes dominant when the locus of splitting instability is approached. Stable traveling solution should be observed beyond the traveling instability threshold, i.e. at $\widetilde \tau <0$; hence, the condition of supercritical bifurcation is $k<0$. The numerical check of the symbolically computed expression shows that the traveling bifurcation is always supercritical when the spot is stable to splitting. The traveling solution bifurcating supercritically must be stable, at least close to the bifurcation point where it inherits stability of the stationary spot to other kinds of perturbations. The third harmonic term that appears in the third order of the expansion delineates, together with the second-order dipolar term, the characteristic shape of a translating spot, pointed in the direction of motion and spread sidewise, as in the inset in Fig. \[f1\], which has been also observed in numerical simulations [@bode]. Beyond the range of the bifurcation expansion, the shape, as well as the propagation speed can be determined by solving numerically Eq. (\[shape\]) with $v(s)$ given by Eq. (\[gauss2\]) and curvature computed using the fully nonlinear expression in Eq. (\[curv\]). Although the boundary integral method reduces a PDE to a 1D integro-differential equation, the equation is rather difficult. Iterative numerical solution [@dima] tends to break down rather close to the bifurcation point, as soon as the shape distortion becomes strong enough to flatten the spot at the back side. Since the boundary integral equation is non-evolutionary, there is no way to distinguish between a purely numerical failure of convergence and a physical instability that would lead to lateral spreading observed in PDE simulations [@mikh]. The above bifurcation expansion proves that a stable traveling solution does exist in the basic model (\[sueq\]), (\[sveq\]) in the sharp boundary limit. The result is applicable at $1 \gg c\gg \sqrt{\epsilon}$. It can be extended straightforwardly to models with more than one long-range variable, provided all long-range equations are linear. Stable traveling spot solutions should be, indeed, more robust in an extended model where they have been obtained in PDE simulations [@bode], whereas in the basic model they require fine parametric tuning aided by the analytical theory. #### Acknowledgement. {#acknowledgement. .unnumbered} This work has been supported by the German–Israeli Science Foundation. [9]{} G. Li, Q. Ouyang, and H.L. Swinney, J. Chem. Phys. [**105**]{}, 10830 (1996). G. Haas , M. Bär, I.G. Kevrekidis, P.B. Rasmussen, H.-H. Rotermund , and G. Ertl, Phys. Rev. Lett. [**75**]{}, 3560 (1995). I. Müller, E. Annelt and H.-G. Purwins, Phys. Rev. Lett. [ **82**]{}, 3428 (1999). W.J. Firth and A.J. Scroggie, , 1623 (1996). L.M. Portsel, Yu.A. Astrov, I. Reimann, E. Annelt and H.-G. Purwins, J. Appl. Phys. [**85**]{}, 3960 (1999). T. Ohta, M. Mimura, and R. Kobayashi, Physica (Amsterdam) [**D 34**]{} 115 (1989). B.S. Kerner and V.V. Osipov, Usp. Fiz. Nauk. [**157**]{}, 201 (1989) \[Sov. Phys. Usp. [**32**]{}, 101 (1989)\]. E. Meron, Phys. Rep. [**218**]{}, 1 (1992). L.M. Pismen, J. Chem. Phys. [**101**]{} 3135 (1994). C.B. Muratov and V.V. Osipov, Phys. Rev. 
[**E 53**]{}, 3101 (1996). K. Krischer and A. Mikhailov, Phys. Rev. Lett. [**73**]{}, 3165 (1994) L.M. Pismen, Phys. Rev. Lett. [**78**]{}, 382 (1997). L.M. Pismen, Phys. Rev. [**75**]{}, 228 (1995). I.S. Aranson, K.A. Gorshkov, A.S. Lomov and M.I. Rabinovich, Physica (Amsterdam) [**D 42**]{}, 435 (1990); W. van Saarloos and P.C. Hohenberg, Physica (Amsterdam) [**D 56**]{}, 303 (1992); H. Sakaguchi and H.R. Brand, Physica [**D 97**]{}, 274 (1996); K. Ouchi and H. Fujisaka, Phys. Rev. [**E 54**]{}, 3895 (1996). P. Coullet , J. Lega, B. Houchmanzadeh, and J. Lajzerowicz, Phys. Rev. Lett. [**65**]{}, 1352 (1990). A. Hagberg and E. Meron, Nonlinearity [**7**]{}, 805 (1994). C.P. Schenk, M. Or-Guil, M. Bode, and H.-G. Purwins, Phys. Rev. Lett. [**78**]{}, 3781 (1997). D.M. Petrich and R.E. Goldstein, Phys. Rev. Lett [**E 72**]{}, 1120 (1994); R.E. Goldstein, D.J. Muraki, and D.M. Petrich, Phys. Rev. [**E 53**]{}, 3933 (1996). D. Haim, G. Li, Q. Ouyang, W.D. McCormick, H.L. Swinney, A. Hagberg, and E. Meron, Phys. Rev. Lett. [**77**]{}, 190 (1996). L.M. Pismen and D. Kazhdan, unpublished.
--- abstract: 'High-harmonic spectroscopy driven by circularly-polarized laser pulses and their counter-rotating second harmonic is a new branch of attosecond science which currently lacks quantitative interpretations. We extend this technique to the mid-infrared regime and record detailed high-harmonic spectra of several rare-gas atoms. These results are compared with the solution of the Schrödinger equation in three dimensions and calculations based on the strong-field approximation that incorporate accurate scattering-wave recombination matrix elements. A quantum-orbit analysis of these results provides a transparent interpretation of the measured intensity ratios of symmetry-allowed neighboring harmonics in terms of (i) a set of propensity rules related to the angular momentum of the atomic orbitals, (ii) atom-specific matrix elements related to their electronic structure and (iii) the interference of the emissions associated with electrons in orbitals co- or counter-rotating with the laser fields. These results provide the foundation for a quantitative understanding of bi-circular high-harmonic spectroscopy.' author: - Denitsa Baykusheva - Simon Brennecke - Manfred Lein - Hans Jakob Wörner title: 'Signatures of electronic structure in bi-circular high-harmonic spectroscopy' --- [^1] [^2] High-harmonic spectroscopy driven by circularly-polarized laser fields superimposed with their counter-rotating second harmonic is a new technique that attracts considerable attention because of its wide application potential, such as the characterization of dynamical symmetries in atoms and molecules [@baykusheva16a]. This new research direction has been opened by the pioneering demonstration of high-harmonic generation in such laser fields [@eichmann95a] and its theoretical interpretation [@long95a; @zuo95a; @milosevic00a]. The potential of this early work has only recently been fully exploited for the generation of circularly-polarized high-harmonic radiation [@fleischer14a], including its extension to high photon energies [@fan15a] and its applications [@kfir15a]. Bi-circular high-harmonic spectroscopy (BHHS) has also received considerable theoretical attention, in particular due to its sensitivity to atomic and molecular symmetry [@medisauskas15a; @mauger2016; @reich16a; @hasovic16a; @odzak16a; @bandrauk16a], molecular chirality [@smirnova15a] and spin polarization [@milosevic16a; @ayuso16a]. However, both experimental and theoretical results have remained very scarce, such that the sensitivity of BHHS to the various aspects of electronic structure remains largely unknown and it is not clear which theories reach quantitative accuracy. In this letter, we address these challenges by reporting a joint experimental and theoretical analysis of the fundamental working principles of BHHS in rare-gas atoms. Studying the intensity ratios of neighboring harmonic orders $3q+1$ and $3q+2$ ($q\in\mathbb{N}$) allowed by symmetry, we observe striking differences between the spectra of neon and argon. The experimental results are qualitatively well reproduced by solving the time-dependent Schrödinger equation (TDSE) in three dimensions. We generalize our results by developing a model based on the quantum-orbit analysis within the strong-field approximation (SFA) that allows for complex-valued electron trajectories, similar to Ref. [@milosevic00a], but incorporates accurate scattering-wave recombination matrix elements [@woerner09a; @le09a]. 
These results are also in good agreement with the experimental data, although significant deviations are found close to the ionization threshold. The analysis of the SFA results allows us to separate the contributions of strong-field ionization and photorecombination to the observed ratios. Our work establishes propensity rules for BHHS based on the angular momentum of atomic orbitals. It additionally reveals the manifestations of orbital-specific radial structures in BHHS. In particular, the sign change of the radial 3p$\rightarrow$d photoionization matrix element of argon with energy, responsible for the Cooper minimum [@cooper62a], is shown to cause a reversal of the relative intensities of neighboring harmonics. The experimental setup is similar to the one described in Ref. [@baykusheva16a], but is augmented by the capability of generating bi-circular laser fields at wavelengths longer than the standard 800/400-nm of titanium-sapphire lasers. This new development compensates the lower cut-off energies achieved in bi-circular laser fields by an increase of the ponderomotive energy, such that the spectra recorded in argon extend beyond the region of the Cooper minimum. This is achieved by pumping a high-energy optical parametric amplifier (HE-TOPAS, Light Conversion) with up to 6.5 mJ, 30 fs laser pulses centered at 800 nm at a repetition rate of 1 kHz to generate 1.5 mJ pulses with $\sim$ 40 fs duration centered in the vicinity of 1400 nm, that are subsequently frequency doubled to $\sim$700 nm in a nonlinear crystal. We use a Mach-Zehnder interferometer equipped with dedicated dichroic mirrors for each wavelength pair. High-harmonic spectra are generated in a thin supersonic beam generated by expansion of neon or argon through a pulsed nozzle with a diameter of 250 $\mu$m at a stagnation pressure of $\sim$5 bar. The high-harmonic spectra are recorded with a flat-field spectrometer consisting of a concave 1200 lines/mm grating, a microchannel-plate-phosphor-screen assembly and a charge-coupled device camera. Figure 1 shows a typical bi-circular high-harmonic spectrum recorded in neon using 800/400-nm laser pulses and a spectrum recorded in argon using 1404/702-nm pulses. The different laser parameters, given in the caption of Fig. 1, were chosen to keep the Keldysh parameter (defined on the basis of the fundamental field) constant. Whereas the neon spectrum displays a monotonically decreasing intensity envelope from the ionization potential (dotted line) to the cut-off, the argon spectrum reveals a suppression around photon energies of $\sim$ 45 eV, i.e. in the region of the Cooper minimum. We however note already, that this position is lower than the 53-eV position observed in the case of linear HHS [@woerner09a]. ![(color online) Experimental high-harmonic spectra generated with counter-rotating circularly polarized femtosecond laser pulses in rare-gas atoms. a) High-harmonic spectrum of neon obtained with 800/400 nm pulses with intensities 3.0/1.8$\times 10^{14}$ W/cm$^2$, b) High-harmonic spectrum of argon obtained with 1404/702 nm pulses with peak intensities of 7.2/6.0$\times 10^{13}$ W/cm$^2$. ](Fig1.pdf){width="50.00000%"} Importantly, BHHS offers an additional observable compared to linear HHS, i.e. the intensity ratios of the neighboring allowed harmonic orders $I(3q+1)/I(3q+2)$. This ratio is a robust observable because it is insensitive to the slow variation of the grating and detector sensitivities with photon energy. It is shown in Fig. 
2 as a function of $q$ and the photon energy (using the relation $E=\hbar\omega(3q+1)$ with $\omega$ the fundamental angular frequency). The experimental data are shown as squares with error bars representing twice the standard deviation of multiple measurements taken under nominally identical experimental conditions. Neon displays a very large intensity ratio ($>18$) close to the ionization threshold and a rapid, monotonic decrease of this ratio with increasing photon energy. Argon, in contrast, shows a much richer variation of the ratio with photon energy, with a weak decrease of the ratio from $\sim$29 eV ($q=10$) upwards, followed by an inversion of the ratio at $\sim$40 eV ($q=14$) and a second inversion at $\sim$51 eV ($q=18$). The finding that the ratio is mostly larger than one can be interpreted as due to the less likely absorption of photons from the second harmonic field when its intensity is smaller than the fundamental intensity [@dorney17a]. A quantitative understanding must, however, consider the specific atomic structure as explained in the following. ![(color online) Ratios of integrated intensities of neighboring high-harmonic orders as indicated on the vertical axis, as a function of the integer $q$ for Ne, 800/400 nm (a) and Ar, 1404/702 nm (b). The symbols represent the experimental data. The lines correspond to different theoretical models discussed in the text using the intensities given in the caption of Fig. 1. The theoretical results have been shifted by $+6.2$ eV in the case of Ar (see text for discussion).](Fig2.pdf){width="50.00000%"} We now discuss the different theoretical models to which the experimental data will be compared. The TDSE is solved numerically in three dimensions in the length gauge and the single-active-electron approximation. The driving field consists of a circularly polarized fundamental that rotates counterclockwise and a circularly polarized second harmonic that rotates clockwise in the x-y-plane. We use a $\text{cos}^2$-pulse envelope with 30 cycles duration (10.9 cycles FWHM in intensity) and effective potentials taken from Refs. [@tong05a] and [@mueller1998] in the case of neon and argon, respectively. The latter is known to provide the most accurate results for argon, although the Cooper minimum predicted by this potential lies a few electron-Volts below its experimentally established position [@woerner09a; @higuet11a]. The TDSE is propagated using the pseudo-spectral method described in [@tong1997; @murakami2013]. The outermost sub-shell of the ground state consists of three degenerate $p$ orbitals: $p_+$, $p_-$ and $p_0$, where the index indicates the z-component of the orbital angular momentum. Since the $p_0$ orbital has a node in the polarization plane, its contribution to the harmonic signal is negligible. Therefore the contributions from the $p_+$ and $p_-$ orbitals are summed coherently to obtain harmonic spectra and ratios between neighboring harmonics. For each orbital, the harmonic signal is obtained from the Fourier-transformed time-dependent dipole acceleration. We develop a deeper understanding of the observed effects by turning to calculations based on the SFA [@lewenstein94a], which was extended to bi-circular driving fields in Ref. [@milosevic00a]. 
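For concreteness, the bi-circular driving field that enters both the TDSE and the SFA calculations is easy to write down explicitly. The short sketch below (Python/NumPy) uses the 800/400-nm intensities quoted for neon in Fig. 1 and the 30-cycle $\cos^2$ envelope of the TDSE calculations; applying the same envelope to both colors and setting the relative phase to zero are assumptions made only for illustration:

```python
import numpy as np

def field_amplitude_au(intensity_w_cm2):
    # Peak field in atomic units; 1 a.u. of intensity corresponds to ~3.51e16 W/cm^2.
    return np.sqrt(intensity_w_cm2 / 3.51e16)

def omega_au(wavelength_nm):
    # Angular frequency in atomic units: omega = 45.563 / lambda[nm].
    return 45.563 / wavelength_nm

w1, w2 = omega_au(800.0), 2.0 * omega_au(800.0)
E1, E2 = field_amplitude_au(3.0e14), field_amplitude_au(1.8e14)

n_cycles = 30                          # total cos^2 pulse duration
T = n_cycles * 2.0 * np.pi / w1
t = np.linspace(0.0, T, 20000)
env = np.cos(np.pi * (t - T / 2.0) / T) ** 2

# Counterclockwise fundamental plus clockwise second harmonic in the x-y plane;
# the combined Lissajous figure has the familiar three-fold "clover" symmetry.
Ex = env * (E1 * np.cos(w1 * t) + E2 * np.cos(w2 * t))
Ey = env * (E1 * np.sin(w1 * t) - E2 * np.sin(w2 * t))
```

This three-fold symmetry is what suppresses harmonic orders $3q$ and fixes the polarization of the allowed orders $3q+1$ and $3q+2$ discussed below.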
In order to capture system-specific effects, the photorecombination matrix elements traditionally evaluated in the plane-wave approximation in the SFA, are replaced by numerically exact matrix elements based on outgoing scattering waves $\Psi^{(+)}$ of the same model potentials as used in the TDSE [@morishita2008; @woerner09a]. The application of the saddle-point approximation [@lewenstein94a; @milosevic00a] leads to a three-step model consisting of a sequence of ionization, propagation and recombination [@corkum1993; @frolov2009]. As a consequence of the combined spatio-temporal symmetries of the system and the laser field the total induced dipole moment contains contributions of three equivalent electron trajectories per optical cycle of the fundamental laser field [@mauger2016]. Harmonics of orders $3q+1$ are polarized as the fundamental and harmonics of orders $3q+2$ are polarized as the second harmonic [@becker1999; @milosevic00a]. Therefore the dipole operator $\textbf{d}$ is projected on the relevant polarization vector $\textbf{e}_{\pm}=\textbf{e}_x\pm{\mathrm{i}}\textbf{e}_y$. We sum coherently over the contributions of the two initial atomic orbitals $p_+$ and $p_-$ with wave functions denoted as $\psi_{m=1}$ and $\psi_{m=-1}$ in what follows. Due to phase-matching conditions for the interaction region behind the laser focus, only the shortest trajectory has a significant contribution to the HHG signal. We have confirmed this statement by a numerical analysis of the macroscopic signal where we apply the theoretical methods described in Refs. [@salieres96a; @balcou97a; @gaarde08a] to bi-circular fields. This leads to the following expression for the harmonic intensity: $$\begin{aligned} \label{model} I_{3q\pm1}= &9& \, \left|P(\boldsymbol{k}_s,t_s,t'_s)\right|^2 \\ \nonumber &\times& \left|\sum_{m=\pm1} d_{\textrm{rec},m}^{\pm}(\textbf{v}(\textbf{k}_s,t_s)) d_{\textrm{ion},m}(\textbf{v}(\textbf{k}_s,t'_s))\right|^2, \end{aligned}$$ with the complex-valued time of ionization $t_s'$, recombination $t_s$ and momentum ${\bf k}_s=-\int_{t'_s}^{t_s} \mathrm{d}t''\textbf{A}(t'')/(t_s-t'_s)$. The velocity of the electron is $\textbf{v}(\textbf{k},t)=\textbf{k}+\textbf{A}(t)$ with $\textbf{A}(t)=-\int^t\textbf{E}(t')\mathrm{d}t'$. Here and in what follows, we use atomic units unless otherwise stated. The times of ionization $t'_s$ and recombination $t_s$ are given as solutions of the corresponding saddle-point equations $$\begin{aligned} \textbf{v}(\textbf{k}_s,t'_s)^2/2\,=\,&-I_p \label{condion} \\ \textbf{v}(\textbf{k}_s,t_s)^2/2\,=\,& n\omega-I_p, \label{condrec}\end{aligned}$$ with the harmonic order $n$ and the ionization potential $I_p$. All factors that are independent of the atomic orbitals are collected in the prefactor $P(\boldsymbol{k}_s,t_s,t'_s)$. The first transition matrix element $d_{\textrm{rec},m}^{\pm}=\langle \psi_m | \textbf{e}_{\pm}^{*}\cdot \textbf{d}| \Psi_{\textbf{v}(\textbf{k}_s,t_s)}^{(+)}\rangle$ describes the recombination step and the second one $d_{\textrm{ion},m}=\langle \textbf{v}(\textbf{k}_s,t'_s)|\psi_m\rangle$ describes the ionization step. The continuum states in the ionization process (only) are described as plane waves $|\textbf{v}\rangle$ [@perelomov1966; @barth2011]. Electron tunnelling entirely takes place in imaginary time and results in a complex velocity $\textbf{v}(\textbf{k}_s,t'_s)$ of the electron at $t'_s$. 
Electron propagation in the laser field after ionization takes place in real time and results in the complex-valued velocity vector $\textbf{v}(\textbf{k}_s,t_s)$ at $t_s$. The imaginary part of the recombination velocity does not vanish in the adiabatic limit ($I_p\rightarrow 0$). Over the plateau of the harmonic spectrum the prefactor in Eq. (\[model\]) is only weakly energy dependent. Hence it approximately cancels out in the calculation of intensity ratios between neighboring harmonic orders and was neglected to obtain the dotted curves in Fig. 2. The agreement between the TDSE (full red line in Fig. 2a) and the experiment is very good in the case of neon. We note that the intensity ratios between neighboring harmonics strongly depend on the intensity ratio $I(\omega)/I(2\omega)$ of the fundamental and its second harmonic. Remaining discrepancies between experiment and theory may thus be partially attributed to the limited accuracy to which $I(\omega)/I(2\omega)$ in the generation region is known. In the case of argon, the agreement between the experiment and the TDSE results is also good, especially in the high-energy region, after a global shift of $+6.2$ eV has been applied to the theoretical results. This global shift is consistent with earlier work [@woerner09a; @le09a; @morishita08a] and can therefore be attributed to limitations of the effective-potential approach in describing high-harmonic spectra. The results of the SFA calculations agree well with those of the TDSE calculations and the experimental data in the high-energy part of both spectra, but the SFA systematically over-estimates the intensity ratios at low photon energies. A detailed analysis suggests that this discrepancy mainly originates from an overestimation of the asymmetry of the recombination step, that we attribute to a failure of the saddle-point approximation at very low kinetic energies. The contributions of each of the ionization and recombination matrix elements are separately shown in Fig. 3. The panels (a) show the magnitudes of the ionization matrix elements, evaluated for the complex velocities at ionization as a function of the emitted photon energy. Interestingly, we find that the orbitals representing electrons co-rotating with the fundamental laser field dominate for low emitted photon energies, whereas the counter-rotating electrons dominate the contribution at high photon energies. Similar results have been obtained in [@ayuso16a]. The exact crossing point of the two curves depends on the intensity ratio of the two components of the bi-circular field and the ionization potential of the atom. It shifts to higher photon energies with increasing relative intensity of the second-harmonic field. ![(color online) a) Amplitudes of the matrix elements for ionization of $p_+$ and $p_-$ orbitals by bi-circular laser fields. The wavelengths and intensities are the same as those used in Fig. 2 and are given in the caption of Fig. 1, b) Amplitudes of the photorecombination matrix elements, c) Amplitudes of the total induced dipole moments responsible for the emission of the symmetry-allowed harmonics 3$q$+1 (magenta) and 3$q$+2 (blue), d) relative phases of the contributions from $p_+$ and $p_-$ orbitals to the results of panels (c).](Fig3.pdf){width="50.00000%"} The panels (b) show the magnitudes of the recombination matrix elements. 
In the case of neon (left column), the matrix element describing recombination to the orbital co-rotating with the fundamental ($p_+$) under emission of a photon of co-rotating polarization (${\bf e}_+$) dominates ($d^+_{rec,+}$), followed by recombination to the counter-rotating orbital under emission of a photon of counter-rotating polarization ($d^-_{rec,-}$). In the case of real-valued recombination velocities, these matrix elements would have equal moduli. The imaginary part of the recombination velocities is responsible for the observed differences between $d^+_{rec,+}$ and $d^-_{rec,-}$, which are very large at low energies in neon. We note that this effect is also responsible for the deviation of the ratios from unity in the case of the 1s-shell of helium, that were observed in [@baykusheva16a]. The ratio $\left|d^+_{rec,+}/d^-_{rec,-}\right|$ is found to decrease with increasing relative intensity of the second-harmonic field, for both Ne and Ar. The smaller amplitudes of the $d^+_{rec,-}$ and $d^-_{rec,+}$ matrix elements are expected because of the Fano-Bethe propensity rules [@fano85a; @medisauskas15a]. In the case of argon (Fig. 3b, right), the dominance of $d^+_{rec,+}$ and $d^-_{rec,-}$ over $d^+_{rec,-}$ and $d^-_{rec,+}$ only holds below 30 eV. The breakdown of the Fano-Bethe propensity rule is caused by the fact that the $d^+_{rec,+}$ and $d^-_{rec,-}$ matrix elements go through zero at 46.4 eV. Since these two matrix elements contain only the contribution of the $\epsilon d\rightarrow 3p$ transition, the zero crossing is a direct consequence of the sign reversal of the $\epsilon d\rightarrow 3p$ radial matrix element at this energy. The $d^+_{rec,-}$ and $d^-_{rec,+}$ matrix elements, in contrast, additionally contain a contribution from the $\epsilon s \rightarrow 3p$ transition matrix element, which varies monotonically with energy and does not change sign. We note that the argon results in Fig. 3 have not been shifted in energy, in contrast to the theoretical results shown in Fig. 2b (see caption). Figure 3c shows the magnitude of the total induced dipole moment for the harmonics of orders $3q+1$ (${\bf e}_+$-polarized, magenta) and $3q+2$ (${\bf e}_-$-polarized, blue) as full lines. In addition to the amplitude effects shown in panels (a) and (b), these results are influenced by the relative phase of the emission from the $p_+$ and $p_-$ orbitals, which is shown in the panels (d). The strong emission of $3q+1$ harmonics at low photon energies in neon originates from the large magnitude of $d^+_{rec,+}$ on one hand and the constructive interference between emissions from the $p_+$ and $p_-$ orbitals on the other hand, as revealed by panel (d). Similarly, the much weaker emission of the $3q+2$ harmonics is caused by the small magnitude of $d^-_{rec,-}$ and the destructive interference at low energies. These effects result in the observed very large values of the intensity ratio at low energies in neon. We further investigate the relative importance of the ionization and recombination matrix elements by setting the ionization matrix elements to unity. This leads to the dashed curves shown in Fig. 3c. The effect of the unequal ionization amplitudes leads to a more rapid merging of the two full curves at high energies in neon. The much smaller ratios in the case of argon, as compared to neon, and the two-fold inversion of the ratio as a function of the photon energy also become immediately apparent from Fig. 3c. 
Both inversions are largely unchanged by the effect of unequal ionization amplitudes. The higher-lying crossing of the dashed curves occurs exactly at the position where $d^+_{rec,+}=d^-_{rec,-}=0$. The full lines cross in the immediate vicinity of this point, which shows that the higher-lying inversion of the intensity ratio in argon lies very close to the position where the $\epsilon d\rightarrow 3p$ radial matrix element changes sign. We note that all qualitative observations made in the case of Ar, i.e. the two-fold inversion of the ratio and the mentioned dependencies of ionization and recombination amplitudes on the ratio of the driving fields, have also been made in the case of Xe (not shown). Our work thus suggests that the qualitative differences between Ne and the heavier rare gases originate in the existence of radial nodes in the $np$ atomic wave functions for $n\ge 3$, which are also the prerequisite for the existence of Cooper minima [@cooper62a]. This study marks the beginning of a quantitative understanding of bi-circular high-harmonic spectra. We have shown that the intensity ratios of neighboring harmonic orders are an important observable in BHHS that is particularly sensitive to the electronic structure of the target. These ratios were found to be subject to propensity rules that reflect the angular momentum of the probed orbital. In particular, strong-field ionization in BHHS was found to favor orbitals describing electrons that rotate in the same direction as the fundamental field ($p_+$) for quantum orbits emitting low photon energies and the p$_-$ orbitals at high photon energies (Fig. 3a). This result appears to be general, at least for rare gas atoms. Photorecombination in BHHS was found to preferentially lead to the emission of circularly-polarized radiation of the same direction of rotation as the orbital (Fig. 3b). Our study has additionally highlighted the importance of interference between high-harmonic emission from $p_+$ and $p_-$ orbitals as illustrated in Fig. 3d. Finally, the ratios also strongly depend on the radial structure of the orbital wave function through the atom-specific photorecombination matrix elements. The results summarized in this letter outline the foundation for the quantitative interpretation of BHHS, which can now be extended to molecules [@baykusheva16a], to time-dependent electronic wave packets in neutral molecules [@kraus13b; @baykusheva14a; @walt17a] and to probing sub-cycle electronic dynamics, such as charge migration [@kraus15b]. D.B. and H.J.W. acknowledge support from an ERC starting grant (project no. 307270-ATTOSCOPE) and the Swiss National Science Foundation (SNSF) through project no. 200021\_159875. S.B. and M.L. acknowledge support from the Deutsche Forschungsgemeinschaft in the frame of the Priority Programme Quantum Dynamics in Tailored Intense Fields. S.B. thanks the Studienstiftung des deutschen Volkes for financial support. [42]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{} , , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.116.123001>. , , , , , , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.51.R3414>. , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.52.2262>. , ****, (). , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.61.063403>. , , , , , ****, (), <http://dx.doi.org/10.1038/nphoton.2014.108>. 
, , , , , , , , , , , ****, (), <http://www.pnas.org/content/112/46/14206.abstract>. , , , , , , , , , , , ****, (), <http://dx.doi.org/10.1038/nphoton.2014.293>. , , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.115.153001>. , , , ****, (), <http://stacks.iop.org/0953-4075/49/i=10/a=10LT01>. , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.117.133902>. , , , , pp. (), ISSN , <http://dx.doi.org/10.1080/00268976.2016.1257830>. , , , ****, (), <https://link.aps.org/doi/10.1103/PhysRevA.94.033419>. , , , ****, (). , , , ****, (), ISSN , <http://stacks.iop.org/0953-4075/48/i=23/a=234005>. , ****, (), <https://link.aps.org/doi/10.1103/PhysRevA.93.051402>. , , , , , **** (). , , , , , ****, (), <http://link.aps.org/abstract/PRL/v102/e103901>. , , , , , ****, (), <http://link.aps.org/abstract/PRA/v80/e013401>. , ****, (). , , , , , , , , , , , , , ****, (). , ****, (). , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.81.1207>. , , , , , , , , , , , ****, (), <https://link.aps.org/doi/10.1103/PhysRevA.83.053401>. , ****, (), ISSN , <http://www.sciencedirect.com/science/article/pii/S0301010497000633>. , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.88.063419>. , , , , , ****, (). , , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.100.013903>. , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.71.1994>. , , , , ****, (), <http://stacks.iop.org/0953-4075/42/i=3/a=035601>. , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.60.1721>. , , , , , ****, (). , , , , ****, (). , , , ****, (). , , , p. (). , ****, (), <http://link.aps.org/doi/10.1103/PhysRevA.84.063415>. , , , , ****, (pages ) (), <http://link.aps.org/abstract/PRL/v100/e013903>. , ****, (), <https://link.aps.org/doi/10.1103/PhysRevA.32.617>. , , , , , , ****, (), <http://link.aps.org/doi/10.1103/PhysRevLett.111.243005>. , , , , , ****, (). , , , , , , , , ****, (), <http://dx.doi.org/10.1038/ncomms15651>. , , , , , , , , , , , ****, (), <http://www.sciencemag.org/content/350/6262/790.abstract>. [^1]: D.B. and S.B. contributed equally to this work. [^2]: D.B. and S.B. contributed equally to this work.
--- abstract: 'In this paper we investigate the problem of optimal cache placement under secrecy constraints in heterogeneous networks, where small-cell base stations are equipped with caches to reduce the overall backhaul load. For two models of eavesdropping attacks, we formally derive the necessary conditions for secrecy and the corresponding achievable backhaul rate. In particular, we formulate optimal cache placement with secrecy constraints as a convex optimization problem. We then thoroughly investigate the backhaul rate performance of the heterogeneous network with secrecy constraints using numerical simulations. We compare the system performance with and without secrecy constraints and analyze the influence of the system parameters, such as the file popularity, the size of the library files and the caching capabilities of the small-cell base stations, on the overall performance of our optimal caching strategy. Our results highlight the considerable impact of the secrecy requirements on the overall caching performance of the network.' author: - bibliography: - 'distributed\_caching.bib' title: On Edge Caching with Secrecy Constraints --- Introduction {#sec:intro} ============ System Model {#sec:model} ============ Secrecy Performance of MDS Coded Caching {#sec:perf} ======================================== Numerical Illustrations {#sec:num} ======================= Conclusions {#sec:conclusions} ===========
--- abstract: 'Load shedding is the last and most expensive control action against system collapse and blackout. Achieving efficient emergency control to stabilize the power system after severe disturbances requires meeting two key objectives: first, preventing further cascading outages, i.e. saving the available and determinant power system elements, and second, issuing proper control actions to stabilize the power system within the permissible time frame. In this paper, online contingency analysis is performed to monitor the secure and reliable operation of transmission lines. Load shedding locations are continuously updated based on the loading rate/thermal limit of the lines to prevent their outage. Simulation of severe contingencies carried out on the 39-bus IEEE standard test system in DIgSILENT PowerFactory validates the efficiency of the proposed method.' author: - - - title: Centralized Load Shedding Based on Thermal Limit of Transmission Lines Against Cascading Events --- Introduction ============ Blackouts rarely happen in power systems, but their rarity does not diminish their significance. Power system design should be robust/resilient against uncertainties and unmodeled dynamics, and should be able to withstand/tolerate a wide range of combined contingencies and cascading events in order to protect the stability of the whole power system, or of as much of it as possible. To this aim, proper preventive and self-healing control actions should be considered for the different operating conditions of the power system. In large-scale power systems with thousands of complex electrical components, even though the current state variables may be available online for monitoring in the control center, the operators may not be able to make complicated decisions during serious contingencies, especially when the state variables evolve quickly compared with the time frame available for bringing them back above their minimum permissible boundaries [@505]. Load shedding is the last and most costly countermeasure/barrier against blackouts, and it should be minimized according to the cost function of the control action [@508; @103]. Load shedding is basically initiated/activated if the available primary and/or secondary control actions fail to stabilize the system within their corresponding time spans. Therefore, besides the tolerable ranges of the key state variables of the power system, e.g. frequency and voltage, time is also an important and determinant factor, especially for the protection system. Most recently proposed methods for centralized load shedding are based merely on frequency dynamics [@508; @510; @523]. The number of papers in which voltage dynamics are also included is not considerable [@506; @113; @507; @103; @102]. Monitoring the frequency and voltage dynamics of the power system is essential in stability studies, but these quantities do not reflect the reliable operation of determinant resources such as transmission lines. In other words, although a highly overloaded line may temporarily transmit the required active and reactive power, keeping the voltage and frequency profiles inside their permissible boundaries, the overloaded line may not remain available beyond the tolerable time of its protection relay. If the loading rate of a highly overloaded line is not alleviated in time, its overcurrent protection relay may be triggered, resulting in the outage of the line.
Under such circumstances, even though the state variables seem to be in the safe range, they may suddenly and unexpectedly collapse, initiating the next successive cascading events [@103; @102]. The current amplitudes of transmission lines are available in the control center through the existing Wide Area Measurement System (WAMS), which collects data from Phasor Measurement Units (PMUs), or synchrophasors, typically installed at substations at the transmission level. Monitoring of Loading Rate of Transmission Lines {#sec:3} ================================================ The thermal overload protection provides tripping or alarming based on a thermal model of line currents. The curve and time dial settings of the inverse time-overcurrent characteristic of protection relays conform to the *IEEE C37.112* and *IEC 255-03* standards [@1023]. The maximum available time before activation of the overcurrent relay of line $k$ can be calculated from the rate of thermal overload and the following inverse time tripping characteristic: $$\label{eq:1} T_k^{oc}(t) = \frac{\alpha }{{{({r_k^{oc}(t)})}^{\gamma}} - 1} + \beta$$ The parameters $\alpha$, $\beta$ and $\gamma$ are constants chosen to provide the desired curve characteristics from the aforementioned standards according to the application. $r_k^{oc}(t)$ represents the overcurrent rate with respect to the chosen current setting/pickup ($i_k^{s}$) of the relay, and should be greater/less than one for trip/reset operation of the relay: $$\label{eq:2} r_k^{oc}(t)=\frac{i_k(t)}{i_k^{s}}$$ ![Typical bus-line topology[]{data-label="fig:1"}](bus.png){width="0.7\linewidth"} For the mathematical formulation of the proposed method, a network with $b$ transmission lines ($T_1$-$T_b$) and $n$ buses ($B_1$-$B_n$) is considered. Fig. \[fig:1\] shows a typical bus-line connection with power flow direction, regardless of the other electrical elements present. $B_i$, $L_i$ and $T_k$ stand for bus $i$, load $i$ and line $k$, respectively. The outage risk index of an overloaded line is defined based on its loading rate in order to prevent further cascading events. To relieve the overload of a line, e.g. $T_k$ in Fig. \[fig:1\], only alleviation of the output power flow of the destination bus ($B_j$) can help, i.e. reducing the flow on the transmission line/s with outbound power flow and shedding the existing load/s ($L_j$). Therefore, the load/s of the destination bus ($B_j$) should be assigned higher priority for interruption. To this aim, the power flow direction and the loading rate ($r_k^{oc}(t)$) are considered. The power flow directions of the lines, e.g. $T_k$, can be summarized in the incidence matrix ($A(t)$), which describes the network topology, i.e.
the connection between buses and lines: $$\label{eq:3} A_{n \times b}(t) = \kbordermatrix{ & 1& \hdots & k & \hdots & b\\ \vdots & \hdots & \hdots & 0 & \hdots & \hdots\\ i & \hdots & \hdots & +{D_k}(t) & \hdots & \hdots\\ \vdots & \hdots & \hdots & 0 & \hdots & \hdots\\ j & \hdots & \hdots & -{D_k}(t) & \hdots & \hdots\\ \vdots & \hdots & \hdots & 0 & \hdots & \hdots }$$ where ${D_k}(t)$ represents the direction of the power flow ${P_k}(t)$ of line $T_k$: $$\label{eq:4} A_{i,k}(t)=D_k(t) \buildrel \Delta \over = \left\{ {\begin{array}{*{20}{c}} +1&{{P_k}(t)\text{ enters bus } i}\\ {-1}&{{P_k}(t)\text{ leaves bus } i}\\ ~~0&\text{ not incident with bus } i \end{array}} \right.$$ The vector $r^{oc}(t)$ collects the loading rates of the lines defined in (\[eq:2\]): $$\label{eq:5} r^{oc}(t)=\left[ {\begin{array}{*{20}{c}} {r_1^{oc}(t)}\\ \vdots \\ {r_k^{oc}(t)}\\ \vdots \\ {r_b^{oc}(t)} \end{array}} \right]$$ The Impact Factor (IF) of the load installed at bus $i$ on its transmission lines, considering both the power flow direction ($A_{i,k}(t)$) and the line loading rate ($r_k^{oc}(t)$), can be determined as follows: $$\label{eq:6} IF_{i,k}(t)=A_{i,k}(t) \cdot r_k^{oc}(t)$$ Assume that $c$ is the index of the line with the maximum Impact Factor (IF) seen from load $i$ over the range $k\in[1,...,b]$: $$\label{eq:10} IF_{i,c}(t) = {\mathop {max}\limits_{k = 1}^b (IF_{i,k}(t))}$$ To sort the network loads and thereby prioritize their interruption order, the maximum available time before outage of their overloaded transmission line/s, resulting from the aforementioned loading rate, is considered. To this aim, the inverse trip time calculated in (\[eq:1\]) associated with the most overloaded line, i.e. the line with index $c$, is assigned to the corresponding load bus: $$\label{eq:11} T_i^{oc}(t) = T_c^{oc}(t)$$ ![39 bus IEEE standard test system[]{data-label="fig:4"}](391.png){width="1\linewidth"} Simulation Setup {#sec:11} ================ In order to evaluate the performance of the proposed scheme, the 39 bus IEEE standard test system shown in Fig. \[fig:4\] is chosen as the case study in the DIgSILENT PowerFactory software [@106; @512; @107]. Synchronous generators are equipped with the IEEE standard governor GOV-IEESGO and automatic voltage regulator AVR-IEEEX1. Moreover, the dependency of the active and reactive power of the loads on voltage and frequency is considered [@a1; @a2; @a3; @ashk]. The loading rate of the transmission lines, as a percentage of their nominal current, is assumed to be 85 %. Simulation Results {#sec:12} ================== ![Frequency at different buses[]{data-label="fig:5"}](f.png){width="1.0\linewidth"} ![Voltage at different buses[]{data-label="fig:6"}](v.png){width="1.0\linewidth"} ![Affected loads by load shedding scheme[]{data-label="fig:7"}](p.png){width="1.0\linewidth"} ![Loading rate of transmission lines[]{data-label="fig:8"}](l.png){width="1.0\linewidth"} Outage at second 1 of generator 1, connected to bus 30 and generating almost 1012 MW (17.7 % of the total generation), is chosen as a severe contingency (Fig. \[fig:4\]). Figs. \[fig:5\]-\[fig:8\] show the simulation results for 20 s. Before the disturbance, the power system is in steady state with the frequency stable at 60 Hz (Fig. \[fig:5\]) and the bus voltages close to their nominal value of 1 pu (Fig. \[fig:6\]). Figs. \[fig:7\] and \[fig:8\] show the active power of the loads and the loading rate of the transmission lines (equal to 85% prior to the event), respectively. Outage of the largest generator of the system, i.e.
G1, at 1 s causes a sharp plunge in the frequency and voltage profiles. Since the frequency dynamics are generally smoother than the voltage dynamics, owing to the inertia of the system, the frequency returns to a value close to nominal. Meanwhile, some minor local or inter-area oscillation modes are also observable in the frequency curves [@521; @109]. The sudden voltage drop at nearby buses from 1 to 0.4 pu (Fig. \[fig:6\]) causes a reduction of the loads’ active power, e.g. load 30 drops from 1100 to 550 MW, as can be seen in Fig. \[fig:7\]. The dependency of the loads’ active and reactive power on the voltage dynamics explains this phenomenon, which prevents the frequency from dropping as fast as the voltage profile does. Depending on the composition of the loads, it may even improve the frequency temporarily, as can be seen around 2 s. The reduction of the loads’ active power consequently alleviates the load-generation imbalance, which leads to a partial recovery of the bus voltages. The recovery of the voltages, and thereafter of the loads’ active power, brings the load-generation imbalance back to its real level. This means that the frequency starts to fall, even though the voltages are at or above their nominal values. Under such a condition, the system suffers from a frequency event rather than a voltage stability issue. The demanded active power is transferred to the event location from far-away generation units, which increases the loading burden on the transmission lines along the way to the destination. Fig. \[fig:8\] indicates that the loading rates of the lines connected to buses 1, 8 and 9 increase to between 6 and 8 times their nominal current, which may trigger their overcurrent protection relays and hence their sudden outage. The proposed technique continuously monitors the transmission line currents to identify the loads causing the overloading of lines. The loads connected to the ends of the most overloaded lines are assigned the highest priorities for curtailment. As can be seen, the loading rates of the lines prone to outage are reduced in time by proper load shedding decisions at the best locations, before operation of the corresponding protection relays. According to Fig. \[fig:7\], the first three feeders of load 30 are successively disconnected by three stages of the load shedding relay, followed by interruption of the first stage of loads 27 and 18 and of the last stage of load 30, to bring the frequencies, voltages and loading rates back to the acceptable range. Conclusion ========== To prevent cascading outages of transmission lines following contingencies due to operation of their overcurrent protection relays, their thermal limit criterion is incorporated into the load shedding control action. Proper load shedding locations are determined online by performing contingency analysis based on the thermal limits of the transmission lines. The loads supplied by the most overloaded transmission lines are shed first. [1]{} N. A. E. R. C. (NERC), NERC PRC-006, standard on automatic underfrequency load shedding, Reliability Standards for the Bulk Electric Systems of North America, pp. 1387–1461, 2014. L. Sigrist, A UFLS scheme for small isolated power systems using rate-of-change of frequency, IEEE Transactions on Power Systems, no. 99, pp. 1–2, 2014. B. Hoseinzadeh, F. Faria Silva, and C. Leth Bak, Adaptive tuning of frequency thresholds using voltage drop data in decentralized load shedding, IEEE Transactions on Power Systems, vol. 30, no. 4, pp. 2055–2062, July 2015. C. Reddy, S. Chakrabarti, and S.
Srivastava, A sensitivity-based method for under-frequency load shedding, IEEE Transactions on Power Systems, vol. 29, no. 2, pp. 984–985, 2014. J. Laghari, H. Mokhlis, M. Karimi, A. Abu Bakar, and H. Mohamad, A new under-frequency load shedding technique based on combination of fixed and random priority of loads for smart grid applications, IEEE Transactions on Power Systems, no. 99, pp. 1–9, 2014. S. AbdElwahid, A. Babiker, A. Eltom, and G. Kobet, Hardware implementation of an automatic adaptive centralized underfrequency load shedding scheme, IEEE Transactions on Power Delivery, vol. 29, no. 6, pp. 2664–2673, 2014. B. Hoseinzadeh, A power system emergency control scheme in the presence of high wind power penetration, Department of Energy Technology, pp. 1–123, 2015. M. Abedini, M. Sanaye-Pasand, and S. Azizi, Adaptive load shedding scheme to preserve the power system stability following large disturbances, IET Generation, Transmission & Distribution, vol. 8, no. 12, pp. 2124–2133, 2014. B. Hoseinzadeh, F. Faria Silva, and C. Leth Bak, Power system stability using decentralized under frequency and voltage load shedding, IEEE PES General Meeting Conference Exposition, pp. 1–5, 2014. IEEE standard inverse-time characteristic equations for overcurrent relays, IEEE Std, 1997. B. Hoseinzadeh, F. Faria Silva, and C. Leth Bak, Active power deficit estimation in presence of renewable energy sources, IEEE PES General Meeting Conference Exposition, pp. 1–5, 2015. H. Bevrani, M. Watanabe, and Y. Mitani, Power system monitoring and control. John Wiley and Sons, 2014. B. Hoseinzadeh, F. Faria Silva, and C. Leth Bak, Decentralized power system emergency control in the presence of high wind power penetration, IEEE PES General Meeting Conference Exposition, pp. 1–5, 2015. A. Zeinalzadeh, R. Ghorbani, and J. Yee, Stochastic model of voltage variations in the presence of photovoltaic systems, IEEE American Control Conference (ACC), July. A. Zeinalzadeh, R. Ghorbani, and E. Reihani, Optimal power flow problem with energy storage, voltage and reactive power control, The 45th ISCIE International Symposium on Stochastic Systems Theory and Its Applications, Okinawa, Nov 2013. A. Zeinalzadeh and V. Gupta, Pricing energy in the presence of renewables, ArXiv e-prints, arXiv:1611.08006, Nov. 2016. A. Zeinalzadeh and V. Gupta, Minimizing risk of load shedding and renewable energy curtailment in a microgrid with energy storage, ArXiv e-prints, arXiv:1611.08000, Nov. 2016. S. Chandra, D. Gayme, and A. Chakrabortty, Coordinating wind farms and battery management systems for inter-area oscillation damping: A frequency-domain approach, IEEE Transactions on Power Systems, vol. 29, no. 3, pp. 1454–1462, May 2014. B. Hoseinzadeh, F. Faria Silva, and C. Leth Bak, Decentralized coordination of load shedding and plant protection considering high share of RESs, IEEE Transactions on Power Systems, 2015.
--- abstract: 'Accurate and computationally efficient modeling of systems of interacting electrons is an outstanding problem in theoretical and computational materials science. For materials where strong electronic interactions are primarily of a localized character and act within a subspace of localized quantum states on separate atomic sites (e.g., in transition metal and rare-earth compounds), their electronic behaviors are typically described by the Hubbard model and its extensions. In this work, we describe BoSS (Boson Slave Solver), a software implementation of the slave-boson method appropriate for describing a variety of extended Hubbard models, namely $p-d$ models that include both the interacting atomic sites (“$d$” states) and non-interacting or ligand sites (“$p$” states). We provide a theoretical background, a description of the equations solved by BoSS, an overview of the algorithms used, the key input/output and control variables of the software program, and tutorial examples of its use featuring band renormalization in SrVO$_3$, Ni $3d$ multiplet structure in LaNiO$_3$, and the relation between the formation of magnetic moments and insulating behavior in SmNiO$_3$. BoSS interfaces directly with popular electronic structure codes: it can read the output of the Wannier90 software package [@mostofi_wannier90:_2008; @wannier90website] which postprocesses results from workhorse electronic structure software such as Quantum Espresso [@QE] or VASP [@VASP].' address: - 'Center for Computational Quantum Physics, Flatiron Institute, 162 5th Avenue, New York, New York 10010, USA' - 'Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, Illinois 60208, United States' - 'Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA' author: - 'Alexandru B. Georgescu' - Minjung Kim - 'Sohrab Ismail-Beigi' bibliography: - 'main.bib' title: 'Boson Slave Solver (BoSS) v1.1 ' --- electronic structure, correlated electrons, slave boson, Hubbard model, spinon **Program summary**\ Developer’s repository link: bitbucket.org/yalebosscode/boss\ Licensing provisions: Creative Commons by 4.0 (CC by 4.0)\ Programming language: MATLAB [@MATLAB2019]\ Nature of problem: The BoSS approach, a type of slave-boson method, provides approximate solutions to interacting electron problems described by Hubbard models in a computationally efficient manner. Hubbard models are widely used to describe materials systems with strongly localized electron-electron interactions. The interacting fermion problem is mapped onto two separate, but easier, coupled quantum problems: non-interacting fermions moving on a lattice (spinons) via tunneling between nearby atomic orbitals, and interacting slave bosons that live on individual atomic sites. A self-consistent description of the two degrees of freedom requires matching of mean particle numbers (spinons and bosons) on each site as well as the renormalization of tunneling events for one set of particles due to the fluctuations of the other set of particles. 
The method can be used to describe the interacting electronic ground state of a particular electronic configuration, or more generally it can find the minimum energy electronic configuration by searching over various symmetry broken phases (e.g., magnetic configurations, configurations with unequal occupation of nominally equivalent atomic orbitals, etc.)\ Solution method: The spinon and slave-boson problems are each represented as Hermitian eigenvalue problems where the lowest energy (eigenvalue) state is sought. The present implementation uses dense matrix diagonalization for the spinon problem and can use either dense or sparse matrix diagonalization for the boson problem. Particle number matching between the two descriptions is achieved by adjustment of Lagrange multipliers which represent potential energies for the bosons: their appropriate values are found by applying Newton’s method to match spinon and boson occupancies. Self-consistency of tunneling processes is achieved by simple fixed point iteration (solving spinon, then slave, then spinon, etc.). Minimization of the energy uses gradient descent with adjustable step size.\ Additional comments including Restrictions and Unusual features: Most users will prepare the input data for BoSS by running band structure calculations on a material, e.g., density functional theory (DFT) using available software packages such as Quantum Espresso [@QE]. Post processing of these calculations to create a spatially localized basis set provides the input to BoSS: most users will create the localized description by using software that transforms the electronic description into a Wannier function basis such as Wannier90 [@mostofi_wannier90:_2008] which BoSS interfaces with by default. However, one can bypass this approach and create BoSS input files manually to describe specific desired localized electron models.\ References:\ http://bitbucket.org/yalebosscode/boss\ http://www.wannier.org/\ https://www.quantum-espresso.org/\ https://www.mathworks.com/products/matlab.html Introduction ============ One of the long-standing areas of interest in condensed matter physics involves the role and effect of electron-electron interactions on the observable properties of materials. Due to the interactions, the motions of different electrons in the material become correlated with each other in a complex manner. Standard tools for efficient, realistic and first principles modelling of the electronic states of materials are based on single-particle (also called mean or band field) theories: one assumes that each electron moves separately in a single shared potential field, and thus each electron has a well-defined state; the shared electronic potential is created in a self-consistent manner due to the averaged inter-electronic forces created by all the electrons. The workhorse theoretical implementation is density functional theory (DFT) [@hohenberg_inhomogeneous_1964; @kohn_self-consistent_1965] which has had a history of success in describing many key properties of materials (stability of various crystal phases, thermodynamic and vibrational properties, a variety of chemical reactions, etc.) [@DFTSUCCESS]. Extensions to DFT to deal with stronger electronic interactions include the widely used DFT+U approach for localized interactions [@Anisimov1991; @anisimov_first-principles_1997], and more generally meta-GGAs and hybrid functionals [@Becke1993; @Perdew1996; @ComparisonMeta; @MetaGGALimits; @ConstraintMeta; @SCAN; @CompareSCAN].
Single-particle approaches do not describe the correlation of electrons explicitly in the distribution of electrons among electronic states: a single configuration consisting of independent electronic states is assumed. However, there are electronic phenomena where the correlations lead to important effects: e.g., quasiparticle spectral weights and lifetimes, electron energy band width renormalization, and most generally excited state properties. Materials phenomena where explicit inclusion of correlations in the calculations is important and understood to play a key role in the physics include energy band renormalization [@Nekrasov2006], unconventional superconductivity [@Lichtenstein2000; @Mandal2017], magnetism and colossal magnetoresistance [@Ramakrishnan2004; @Hariki2017; @Kim2015], electronic spectroscopy of Mott insulating states [@Ren2006], metal-insulator transitions [@VanadiumDioxide; @Rondinelli2008], and coupled structural and orbital symmetry breaking [@polarizationantoine]. The brute force approach of simply including more electronic configurations in the calculations leads to an impractical computational cost that grows exponentially in the number of electrons. This has led to significant research into theoretical methods that go beyond the single-particle description in an efficient manner. Dynamical mean field theory (DMFT) [@georges_dynamical_1996; @kotliar_electronic_2006] has emerged as a standard tool to describe explicitly electronic correlations in systems where localized electronic orbitals on a subset of atoms in the material dominate the electronic correlations; the method becomes [*ab initio*]{} when coupled to DFT (DFT+DMFT) [@georges_dynamical_1996; @kotliar_electronic_2006]. This approach has been able to describe a wide range of physical phenomena that stem from localized electronic correlations [@KT2004; @kotliar_electronic_2006]. To date, most DMFT calculations use adjustable parameters to describe the strength of local electronic interactions, but the parameters can now be more quantitatively justified via [*ab initio*]{} calculation [@Seth2017; @cRPA]. The application of DMFT is most obvious in cases where a material with a symmetric crystalline structure is expected to show strong electronic interaction effects: examples include insulating behavior in the high temperature paramagnetic phase where localized magnetic moments fluctuate in time (e.g., NiO above its Néel temperature [@Ren2006]), or where interaction-driven bandwidth and quasiparticle weight renormalization is significant, e.g., in correlated metals such as SrVO$_3$ [@Nekrasov2006]. Other concepts emerging from DMFT are the site-selective transition in rare earth nickelates [@Park2012], the interrelationship between lattice and electronic degrees of freedom in transition metal oxide heterostructures [@AlexPNAS], and the physics of materials driven by the Hund’s exchange interaction [@JANUS]. However, DMFT can be computationally costly when applied to systems containing multiple inequivalent correlated atomic sites, relevant to studying complex materials or heterostructures of multiple materials. The calculations can cost a significant amount of computational time and may be outside the routine budget of many research groups. For example, in the authors’ experience, it takes around 1-2 minutes running on a laptop to obtain an electronic structure and a resulting band structure within DFT for the correlated metal SrVO$_3$ that has a 5 atom formula unit cell.
However, a DFT+DMFT calculation of the corresponding electronic structure with sufficient accuracy to obtain a spectral function would take 500 CPU hours within the context of a minimal model that only treats the 3 vanadium t$_{2g}$ bands explicitly in a one-shot manner without self-consistency (this order of magnitude estimate depends on the computational approach and convergence details employed). A more complex model including more bands and charge self-consistently will increase the time requirements by half an order of magnitude. Spin-orbit coupling terms that are straightforward to include in DFT can render the DFT+DMFT calculations within existing approaches close to intractable due to the well-known sign problem, although there are recent efforts to alleviate this problem [@SignProblem]. Again, these numerical estimates are for a small five-atom simulation cell. Therefore, there have been parallel developments of methods similar to DMFT that are more approximate but much less expensive computationally. Recently, particular effort has been put into methods such as the Gutzwiller [@Brinkman1970; @Kotliar1986; @Buenemann1997; @Gutzwiller; @LDAGutzwiller; @Wang2010; @Hugo] as well as slave-boson approaches. Since the original Kotliar-Ruckenstein slave-boson method which permitted numerical calculations at finite Coulomb interaction parameters [@Kotliar1986], a variety of new slave-boson methods have been developed and applied to real materials including the slave-rotor [@Florens2002; @Florens2004], slave-spin (in multiple varieties) [@DeMedici2005; @Yu2012; @DeMedici2014; @DeMedici2017] and the rotationally invariant slave-boson [@Fresard2002; @Fresard1996; @RotationallyInvariant] methods. Slave-boson methods of this form have been applied to elucidate the physics of RNiO$_3$ materials [@Lau2013a] with similar phenomenological predictive power to DFT+DMFT but at much lower computational cost, as well as to study Hund’s physics in Fe pnictides [@Yu2012; @DeMedici2017]. We proposed [@georgescu_generalized_2015; @georgescu_symmetry_2017] a generalized formalism based on the slave-rotor and slave-spin methods: by noticing the commonality between the two methods, we can straightforwardly build slave-boson models that allow different levels of fine-grained description of the electronic interactions (i.e., separate or aggregated description of spin and/or orbital degrees of freedom or various combinations of them). At the same time, our formalism corrects the weak-interaction limit of the slave-rotor method. Separately, our approach allows for spontaneous symmetry breaking (e.g., ordered magnetic states). Our approach is instantiated in the BoSS software, which this paper describes in detail. Our paper also provides examples of how to use this method to reproduce physics that is normally difficult to obtain from DFT alone. General theoretical framework ============================= The slave boson approach used in BoSS solves, approximately, for the ground-state properties and electronic excitations of an interacting electronic system described by a Hubbard Hamiltonian. Detailed theoretical descriptions of the approach can be found in prior publications [@georgescu_generalized_2015; @georgescu_symmetry_2017], so we will briefly summarize the ideas behind the method and then focus primarily on the formalism as it connects directly to the BoSS software implementation. 
A typical Hubbard Hamiltonian for interacting electrons is written in a basis of localized atomic-like orbitals: each atomic site, indexed by $i$, has a set of localized orbitals indexed by $m$ and spin $\sigma\in{\pm1}$. The Hamiltonian has the form $$\hat H = \sum_{ii'mm'\sigma} t_{imi'm'\sigma} \hat c_{im\sigma}^\dag \hat c_{i'm'\sigma} + \sum_i \hat H^{(i)}_{int}\,. \label{eq:origH}$$ The $\hat c_{im\sigma}$ ($\hat c_{im\sigma}^\dag$) are electron annihilation (creation) field operators for the localized state $im\sigma$, the $t_{imi'm'\sigma}$ are spin-conserving tunneling (hopping) matrix elements between two localized states $im\sigma$ and $i'm'\sigma$, and the electron-electron interactions occur on each atomic site separately; the form of $\hat H^{(i)}_{int}$ will be specified further below. (The diagonal elements $t_{imim\sigma}$, which are called the on-site energies of the localized states $im\sigma$, are included automatically in the first term of the Hamiltonian; separately, in this work, the tunneling terms do not carry an overall minus sign in front unlike other common definitions of the Hubbard model.) Thus, the Hubbard Hamiltonian encodes the wave-like nature of electrons via the first tunneling term in $\hat H$ (also called the hopping or kinetic term) as well as electron-electron interactions in the second term. Solving for the ground state wave function $\ket{\Psi_0}$ of such a Hamiltonian for many electrons is very difficult and a central challenge in modern electronic structure theory: computationally efficient approximate solutions are of great interest to the research community. Introducing the slave bosons ---------------------------- The slave-boson approach is one such approximation. One separates the fermionic behavior from the inter-electron charged interactions by introducing a spinless charged bosonic “slave” degree of freedom at the atomic sites along with neutral fermion degrees of freedom with spin called spinons (i.e., one splits the original charged and spin-1/2 electron into a charged but spinless slave boson and a chargeless but spinful fermion with spin 1/2). The mathematical separation is given by $$\hat c_{im\sigma} = \hat f_{im\sigma} \hat O_{i\alpha} \ , \ \hat c^\dag_{im\sigma} = \hat f^\dag_{im\sigma} \hat O^\dag_{i\alpha}$$ where $\hat f_{im\sigma}$ ($\hat f^\dag_{im\sigma}$) are fermionic annihilation field operators for the spinons, and $\hat O_{i\alpha}$ ($\hat O^\dag_{i\alpha}$) lower (raise) the number of slave bosons by one. The index $\alpha$ of the slave bosons on site $i$ describes a disjoint set of the $\{m\sigma\}$ indices belonging to that site. Choosing how the $\{m\sigma\}$ are partitioned into the disjoint sets $\{\alpha\}$ defines the type of slave boson model being used. For example, the coarsest model lumps all $\{m\sigma\}$ on a site into a single bosonic degree of freedom so $\alpha$ is nil and $\hat O_{i\alpha}=\hat O_i$; the most detailed model has a separate bosonic mode for each unique spin+orbital combination so $\alpha=m\sigma$. Other models can include having two bosons per site to account for the two values of $\sigma$ while lumping all $m$ together, or alternatively having the bosons describe the $m$ states with both spin $\sigma=\pm1$ lumped together. The number of bosons in channel $\alpha$ ranges from zero to the maximum number of electrons $M_\alpha^{max}$ that could be accommodated by the spin+orbital combinations belonging to $\alpha$.
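To make the role of the partitioning concrete, the following minimal sketch (written in Python with purely illustrative, hypothetical names; BoSS itself is implemented in MATLAB, and none of these names appear in its code) simply enumerates the three choices just described for a hypothetical three-orbital $t_{2g}$ shell: one boson mode for the whole site, one mode per spatial orbital $m$, and one mode per individual $m\sigma$ combination.

```python
# Illustrative bookkeeping only (hypothetical names): partition the local
# states (m, sigma) of one correlated site into slave-boson modes alpha.
from itertools import product

orbitals = ["xy", "xz", "yz"]              # e.g., a t2g shell
spins = (+1, -1)
states = list(product(orbitals, spins))    # all (m, sigma) combinations on the site

# Coarsest model: a single boson mode counts every electron on the site.
coarse = {"site": states}

# Intermediate model: one boson mode per spatial orbital m (both spins lumped).
per_orbital = {m: [(m, s) for s in spins] for m in orbitals}

# Finest model: one boson mode per individual (m, sigma) combination.
per_spin_orbital = {(m, s): [(m, s)] for (m, s) in states}

# M_alpha^max is the number of member states, i.e. the largest boson number
# that channel alpha can hold.
M_max = {alpha: len(members) for alpha, members in per_orbital.items()}
print(M_max)   # {'xy': 2, 'xz': 2, 'yz': 2}
```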
The matrix representation of the boson lowering operator $\hat O_{i\alpha}$ in the basis of the number of bosons is given by the $(M_\alpha^{max}+1) \times (M_\alpha^{max}+1)$ matrix $$O_{i\alpha} = \left( \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \ldots & 1 \\ C_{i\alpha} & 0 & 0 & \ldots & 0 \end{array} \right) \label{eq:Omatrix}$$ where the choice of constants $C_{i\alpha}$ is described further below. Further details and derivation of the structure of the operators and matrices can be found in our prior publications [@georgescu_generalized_2015; @georgescu_symmetry_2017]. The Hamiltonian now takes the form $$\hat H = \sum_{ii'mm'\sigma} t_{imi'm'\sigma} \hat f_{im\sigma}^\dag\hat O_{i\alpha}^\dag \hat f_{i'm'\sigma}\hat O_{i'\alpha'} + \sum_i \hat H^{(i)}_{int}\,. \label{eq:HfandO}$$ The index $\alpha$ labels the partitioning of states $im\sigma$ while $\alpha'$ those of $i'm'\sigma$. The main point is that the slave bosons carry the electron charge so the interaction terms $\hat H^{(i)}_{int}$ only act on the bosonic subspace. (For the on-site contributions $im=i'm'$, we remove the $\hat O_{i\alpha}^\dag\hat O_{i\alpha}$ operator as its presence does not change anything [@georgescu_generalized_2015].) Exact solution of the original problem posed by the Hamiltonian of Eq. (\[eq:origH\]) was hard enough, but the addition of new bosonic degrees of freedom on top of the fermionic spinons makes for an even harder problem. This is because, when solving for the ground state of the Hamiltonian of Eq. (\[eq:HfandO\]), one must additionally impose the constraint that the boson and fermion numbers track each other exactly at each site in order to not introduce new quantum states to the new spinon+slave problem that did not exist in the original electron-only problem: one must restrict oneself to the subspace of states $\ket{\Xi}$ in the enlarged spinon+slave Hilbert space that obey the constraint $$\hat N_{i\alpha}\ket{\Xi} = \hat n_{i\alpha}\ket{\Xi} \label{eq:exactconstraint}$$ for every site $i$ and slave mode $\alpha$ because the electron charge (carried by the bosons) must follow the spin of the electron (carried by the spinons) as the particles move about the lattice. The number operator $\hat N_{i\alpha}$ counts the number of slave bosons at site $i$ in mode $\alpha$, while the corresponding number of spinons $\hat n_{i\alpha}$ is defined by $$\hat n_{i\alpha} \equiv \sum_{(m\sigma)\in\alpha} \hat f_{im\sigma}^\dag \hat f_{im\sigma}\,.$$ Approximations and self-consistent equations -------------------------------------------- The slave boson method makes progress by separating the spinon and slave boson behaviors in order to end up with two simpler coupled problems. Namely, the ground state $\ket{\Psi_0}$ of the Hamiltonian of Eq. (\[eq:HfandO\]) is approximated as a product of a spinon wave function $\ket{\psi_f}$ and a slave wave function $\ket{\phi_s}$, $\ket{\Psi_0}\approx\ket{\psi_f}\ket{\phi_s}$. This approximation means that we can only enforce the constraint of Eq.
(\[eq:exactconstraint\]) on average: $${\langle \hat N_{i\alpha} \rangle}_s = {\langle \hat n_{i\alpha} \rangle}_f \label{eq:avgconstraint}$$ In addition, as explained in our prior work [@georgescu_symmetry_2017], finding the optimal spinon and slave states corresponds to a variational minimization of the total energy functional $$\begin{gathered} E_{tot} = \sum_{ii'mm'\sigma} t_{imi'm'\sigma} {\langle \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f {\langle \hat O_{i\alpha}^\dag \hat O_{i'\alpha'} \rangle}_s + \sum_i {\langle \hat H^{(i)}_{int} \rangle}_s\\ - \lambda_f \left[\braket{\psi_f|\psi_f} -1\right] - \lambda_s \left[\braket{\phi_s|\phi_s} -1\right] -\sum_{i\alpha}h_{i\alpha}\left[{\langle \hat n_{i\alpha} \rangle}_f - {\langle \hat N_{i\alpha} \rangle}_s\right] \\ -\sum_{im\sigma} b_{im\sigma}\left[{\langle \hat f^\dag_{im\sigma}\hat f_{im\sigma} \rangle}_f - \nu_{im\sigma}\right] \label{eq:Etotfull}\end{gathered}$$ where the shorthands for spinon and slave expectations are $${\langle \hat X \rangle}_f \equiv \bra{\psi_f}\hat X\ket{\psi_f} \ \ , \ \ {\langle \hat Y \rangle}_s \equiv \bra{\phi_s}\hat Y\ket{\phi_s}\,.$$ Above, four sets of Lagrange multipliers have been introduced: $\lambda_f$ and $\lambda_s$ enforce normalization of the states $\ket{\psi_f}$ and $\ket{\phi_s}$ (i.e., $\braket{\psi_f|\psi_f}=\braket{\phi_s|\phi_s}=1$), the $h_{i\alpha}$ enforce the averaged constraint of Eq. (\[eq:avgconstraint\]), and the “magnetic fields” $b_{im\sigma}$ control the spinon occupancies and ensure $\nu_{im\sigma}={\langle \hat f^\dag_{im\sigma}\hat f_{im\sigma} \rangle}_f$. We note that when all the constraints are obeyed, the energy $E_{tot}$ corresponds to the expectation value of the Hamiltonian $\hat H$ over the approximate product ground state $\ket{\psi_f}\ket{\phi_s}$ and is therefore a variational energy. Minimization of $E_{tot}$ over the two wave functions $\ket{\psi_f}$ and $\ket{\phi_s}$ leads to two separate eigenvalue problems: $$\hat H_f \ket{\psi_f} = E_f\ket{\psi_f} \ \ , \ \ \hat H_s\ket{\phi_s} = E_s\ket{\phi_s} \label{eq:spinonandslavehpsiepsi}$$ where the spinon Hamiltonian $\hat H_f$ is $$\hat H_f = \sum_{ii'mm'\sigma} t_{imi'm'\sigma}{\langle \hat O_{i\alpha}^\dag \hat O_{i'\alpha'} \rangle}_s \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} -\sum_{i\alpha}h_{i\alpha}\hat n_{i\alpha} -\sum_{im\sigma} b_{im\sigma}f^\dag_{im\sigma}\hat f_{im\sigma} \label{eq:Hspinonlittleb}$$ and the slave Hamiltonian $\hat H_s$ is $$\hat H_s = \sum_{ii'mm'\sigma} t_{imi'm'\sigma} {\langle \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f \hat O_{i\alpha}^\dag \hat O_{i'\alpha'} + \sum_i \hat H^{(i)}_{int}\\ +\sum_{i\alpha}h_{i\alpha}\hat N_{i\alpha}\,. \\ \label{eq:Hslave}$$ The two eigenvalue equations in (\[eq:spinonandslavehpsiepsi\]) must be solved self-consistently since averages over slave operators enter into the spinon Hamiltonian (and vice versa). In addition to self-consistency, the $h_{i\alpha}$ must be adjusted to ensure that ${\langle \hat N_{i\alpha} \rangle}_s = {\langle \hat n_{i\alpha} \rangle}_f$ is obeyed. In practice, it is very difficult to solve these equations as written because of the opposite signs with which the $h_{i\alpha}$ enter the two Hamiltonians: increasing $h_{i\alpha}$ in $\hat H_f$ of Eq. (\[eq:Hspinonlittleb\]) stabilizes larger electron occupancy on site $i$ for the spinons but does the opposite for the slaves governed by $\hat H_s$ of Eq. (\[eq:Hslave\]). 
This leads to difficulties in reaching self-consistency as well as in stabilizing broken symmetry electronic phases (e.g., magnetism) [@georgescu_symmetry_2017]. The simple solution [@georgescu_symmetry_2017] is to notice that it is the sum $h+b$ that appears in $\hat H_f$ but only $h$ in $\hat H_s$: since $h$ and $b$ are independent, one can define a new variable $B=h+b$ for the spinons so that the particle matching problem is greatly simplified. Namely, for some fixed values of $B_{im\sigma}$, one solves for the ground state $\ket{\psi_f}$ of $$\hat H_f = \sum_{ii'mm'\sigma} t_{imi'm'\sigma}{\langle \hat O_{i\alpha}^\dag \hat O_{i'\alpha'} \rangle}_s \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} -\sum_{im\sigma} B_{im\sigma}f^\dag_{im\sigma}\hat f_{im\sigma} \label{eq:HspinonbigB}$$ as well as the ground state $\ket{\phi_s}$ of $\hat H_s$ of Eq. (\[eq:Hslave\]) self-consistently in terms of the expectations ${\langle \hat O^\dag_{i\alpha}\hat O_{i'\alpha'} \rangle}_s$ and ${\langle \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f$ while the only job of the $h_{i\alpha}$ is to ensure the slave boson occupancies match the spinon occupancies ${\langle \hat N_{i\alpha} \rangle}_s = {\langle \hat n_{i\alpha} \rangle}_f$. One then minimizes the total energy $E_{tot}$ versus $B_{im\sigma}$ to describe the final ground state of the system. This “one-sided” particle number matching is much more stable and efficient [@georgescu_symmetry_2017], and BoSS uses this “big $B$” approach. A final point regards how the constants $C_{i\alpha}$ in the $\hat O_{i\alpha}$ operators of Eq. (\[eq:Omatrix\]) are chosen. For an exact solution of the ground state of the interacting problem, the actual value of the $C_{i\alpha}$ is irrelevant since those entries are never accessed [@georgescu_generalized_2015; @georgescu_symmetry_2017]. However, for an approximate treatment, their choice matters. Their values are fixed by ensuring that the non-interacting limit of the spinon+slave problem matches the non-interacting limit of the original electronic problem. Namely, solving the ground state of the spinon Hamiltonian of Eq. (\[eq:HspinonbigB\]) should generate the same solution as solving the original Hamiltonian of Eq. (\[eq:origH\]) with $\hat H^{(i)}_{int}=0$. This means that the two sets of parameters $h_{i\alpha}$ and $C_{i\alpha}$ must be adjusted when solving the non-interacting slave problem (Hamiltonian $\hat H_s$ of Eq. (\[eq:Hslave\]) with $\hat H^{(i)}_{int}=0$) to ensure that both ${\langle \hat O_{i\alpha}^\dag \hat O_{i'\alpha'} \rangle}_s=1$ and ${\langle \hat N_{i\alpha} \rangle}_s={\langle \hat n_{i\alpha} \rangle}_f$. The resulting values of $C_{i\alpha}$ are then used without further change when solving the interacting problem. Specific slave-boson problem solved by BoSS ------------------------------------------- The discussion above has described the general aspects and philosophy of the slave-boson problem underlying the BoSS software. We now describe the specific form(s) of the Hubbard model and slave bosons used by BoSS to flesh out the method. The type of Hubbard model solved by BoSS is a “$pd$ model”.
The localized basis $im\sigma$ is split into two categories: (i) one subset consists of the strongly interacting or electronically correlated “$d$” states with non-zero $\hat H^{(i)}_{int}$ and associated slave boson modes $\hat O_{i\alpha}$ on the correlated atomic sites $i$, and (ii) the remaining non-interacting “$p$” states on uncorrelated atomic sites with no local interactions ($\hat H^{(i)}_{int}=0$) and no associated slave bosons ($\hat O_{i\alpha}=1$). This nomenclature derives from the physics of transition metal oxide materials where the transition metals host very localized $d$ atomic orbitals for which electronic repulsions are strong, whereas the electronegative oxygen atoms that bond with and link the transition metal atoms have $2p$ orbitals that are filled with electrons and are weakly interacting. (The correlated orbitals can also refer to the localized $f$ electrons of lanthanide- or actinide-based materials.) The $pd$ formalism used below is very much inspired by prior work using slave rotor bosons to study oxides of nickel [@lau_theory_2013]. ![Illustration of the typical crystal structure of transition metal oxides and the orbitals in the $pd$ model. (a) The unit cell of the cubic perovskite oxide SrVO$_3$: each V is bonded to six O atoms forming an octahedral cage (in blue); the Sr form a stabilizing cubic lattice of positive ions but do not participate significantly in the electronically conducting states. This unit cell is periodically repeated in all three directions (i.e., a 3D tiling) to create the crystal. (b) Schematic top view of the VO$_2$ layer in the xy plane. Each V $d$ orbital (green hatched lobes) overlaps with its neighboring O $p$ orbitals (white lobes); the O $p$ orbitals are the bridges between neighboring V sites. The interactions are non-zero on the correlated $d$ states, here localized on the V atoms.[]{data-label="fig:TMOschematic"}](Figure-pd.pdf){width="4.5in"} In transition metal oxides, the transition metal atoms bond with nearest neighbor oxygen atoms. Hence, the largest tunneling matrix elements $t_{imi'm'\sigma}$ are between the localized states of a transition metal atom and those of its oxygen neighbors. See Figure \[fig:TMOschematic\] for an illustration. Thus, when constructing the slave Hamiltonian $\hat H_s$, only these nearest neighbor $t$ elements are retained. Since the $p$ states on the oxygens do not have any associated slave modes, the slave Hamiltonian for such a $pd$ model turns into a sum of separate $d$ site Hamiltonians: $$\hat H_s = \sum_{i\in d} \hat H_s^{(i)} \label{eq:Hssumoveri}$$ where $$\begin{gathered} \hat H_s^{(i)} = \sum_{m\sigma\in i} \sum_{i'm'\in p} \left\{[t_{imi'm'\sigma} {\langle \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f] \hat O_{i\alpha}^\dag + [t_{i'm'im\sigma} {\langle \hat f_{i'm'\sigma}^\dag \hat f_{im\sigma} \rangle}_f] \hat O_{i\alpha} \right\} \\ + \hat H^{(i)}_{int} +\sum_{\alpha}h_{i\alpha}\hat N_{i\alpha}\,. \label{eq:Hsi}\end{gathered}$$ The label $d$ refers to the set of all the correlated localized states, $p$ labels all the uncorrelated localized states, $i$ is a particular correlated site with correlated states $m\sigma$, and $\alpha$ is the partitioning index of the $m\sigma$ for correlated site $i$. We note that the spinon expectations ${\langle f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f$ renormalize the original tunneling matrix elements $t$. The structure of the slave-boson problem described in Eqs.
(\[eq:Hssumoveri\],\[eq:Hsi\]) means that solving each correlated site $i$ separately is an exact solution to the interacting boson problem for this type of model [@lau_theory_2013]. Thus the slave ground state $\ket{\phi_s}$ is a simple product over the ground states of the separate correlated sites: $\ket{\phi_s}=\prod_{i\in d}\ket{\phi^{(i)}_s}$. We now specify the form of the interaction part $\hat H^{(i)}_{int}$ on atomic site $i$ which can contain up to three terms depending on the specific type of slave-boson model being employed (i.e., the partitioning indexed by $\alpha$), $$\hat H^{(i)}_{int} = \hat H^{(i)}_{int,1} + \hat H^{(i)}_{int,2} + \hat H^{(i)}_{int,3}\,.$$ Even the coarsest slave model must count the total number of slave bosons on site $i$ (i.e., a model where the $\alpha$ takes on a single value and refers to all $m\sigma$ on site $i$ so $\hat O_{i\alpha}=\hat O_i$). Therefore, the first interaction term $\hat H^{(i)}_{int,1}$ that depends only on the total boson number is always included. It takes the form of a charging energy using a Hubbard parameter $U_i$: $$\hat H^{(i)}_{int,1} = \frac{U_i}{2} \left(\hat N_i - {\langle \hat N_i \rangle}_0\right)^2 \,. \label{eq:Hinti1}$$ Here, $\hat N_i = \sum_{\alpha}\hat N_{i\alpha}$ is the total slave-boson number operator on site $i$, and ${\langle \hat N_i \rangle}_0$ is a reference mean occupation number used for double counting corrections (see Section \[sec:doublecounting\] below). This interaction term is a charging energy that punishes charge fluctuations away from the mean value ${\langle \hat N_i \rangle}_0$. A second interaction term $\hat H^{(i)}_{int,2}$ may be non-zero if the slave decomposition being used is able to resolve individual spatial states labeled by $m$. In this case, one can distinguish between electronic repulsions when occupying the same orbital index $m$ with two electrons versus two different orbitals $m\ne m'$. The added interaction term depends on an additional Hubbard parameter $U'_i$ for inter-orbital interactions: $$\hat H^{(i)}_{int,2} = \frac{U'_i-U_i}{2} \left[ \left(\hat N_i - {\langle \hat N_i \rangle}_0\right)^2 - \sum_m \left(\hat N_{im} - {\langle \hat N_{im} \rangle}_0\right)^2\right]\,. \label{eq:Hinti2}$$ The occupation $\hat N_{im}=\sum_{\alpha|m\in\alpha} \hat N_{i\alpha}$ counts the number of bosons in spatial state $m$. An equivalent way to write this interaction term is $$\hat H^{(i)}_{int,2} =\frac{U_i'-U_i}{2} \sum_{m\ne m'} \left(\hat N_{im} - {\langle \hat N_{im} \rangle}_0\right)\left(\hat N_{im'} - {\langle \hat N_{im'} \rangle}_0\right)$$ which shows that this interaction is a correction to the $H^{(i)}_{int,1}$ term accounting for occupation fluctuations of different spatial orbitals. A final third term $\hat H^{(i)}_{int,3}$ is added if the slave-boson model can resolve different spin directions $\sigma$. This interaction represents the classic Hund’s term that lowers the energy due to same spin electron pairing on a site. Using the Hund’s interaction parameter $J$, it has the form $$\hat H^{(i)}_{int,3} = -\frac{J_i}{2} \sum_\sigma \left(\hat N_{i\sigma} - {\langle \hat N_{i\sigma} \rangle}_0\right)^2 \label{eq:Hinti3}$$ where $\hat N_{i\sigma}=\sum_{\alpha|\sigma\in\alpha} \hat N_{i\alpha}$ counts the total number of slave bosons with spin $\sigma$. Having specified the form of the interaction term in $\hat H_s$, the remaining matter is the choice of the $C_{i\alpha}$ in the slave $\hat O_{i\alpha}$ operators.
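As a concrete illustration of Eqs. (\[eq:Hinti1\])-(\[eq:Hinti3\]), the following minimal sketch (written in Python with purely illustrative, hypothetical names; it is not taken from the BoSS code) assembles the diagonal of $\hat H^{(i)}_{int}$ in the boson number basis for the finest slave model, where each mode $\alpha$ corresponds to a single $m\sigma$ combination so that all three terms can be resolved.

```python
# Hedged sketch (hypothetical helper, not BoSS code): diagonal of H_int^(i) of
# Eqs. (Hinti1)-(Hinti3) in the slave-boson number basis for one correlated site.
import itertools
import numpy as np

def interaction_diagonal(modes, U, Uprime, J, N0, Nm0, Nsigma0):
    """modes: dict alpha -> (m, sigma, M_max).  N0, Nm0, Nsigma0 are the
    double-counting reference occupations <N_i>_0, <N_im>_0 and <N_isigma>_0."""
    alphas = list(modes)
    ranges = [range(modes[a][2] + 1) for a in alphas]    # n_alpha = 0 .. M_max
    diag = []
    for occ in itertools.product(*ranges):               # every boson configuration
        n = dict(zip(alphas, occ))
        N_tot = sum(n.values())
        N_m, N_s = {}, {}
        for a, (m, s, _) in modes.items():
            N_m[m] = N_m.get(m, 0) + n[a]
            N_s[s] = N_s.get(s, 0) + n[a]
        E = 0.5 * U * (N_tot - N0) ** 2                                   # charging term
        E += 0.5 * (Uprime - U) * ((N_tot - N0) ** 2
             - sum((N_m[m] - Nm0[m]) ** 2 for m in N_m))                  # inter-orbital term
        E -= 0.5 * J * sum((N_s[s] - Nsigma0[s]) ** 2 for s in N_s)       # Hund's term
        diag.append(E)
    return np.array(diag)
```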
Since each correlated site $i$ has a separate slave Hamiltonian $\hat H^{(i)}_s$, the number of degrees of freedom is matched: if we set $\hat H^{(i)}_{int}=0$ (i.e., $U=U'=J=0$) and solve the slave problem, we have to match two conditions ${\langle \hat O_{i\alpha} \rangle}_s=1$ and ${\langle \hat N_{i\alpha} \rangle}_s={\langle \hat n_{i\alpha} \rangle}_f$ with two free parameters $h_{i\alpha}$ and $C_{i\alpha}$. This concludes the theoretical specification of the BoSS slave problem. The spinon Hamiltonian for the BoSS $pd$ model takes the form $$\begin{gathered} \hat H_f = \sum_{im\sigma\in d}\sum_{i'm'\in p} \left\{[t_{imi'm'\sigma}{\langle \hat O_{i\alpha}^\dag \rangle}_s] \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} + [t_{i'm'im\sigma} {\langle \hat O_{i\alpha} \rangle}_s ] \hat f_{i'm'\sigma}^\dag \hat f_{im\sigma} \right\} \\ + \left[\sum_{im\sigma\in d}\sum_{i'm'\in d} + \sum_{im\sigma\in p}\sum_{i'm'\in p} \right] \left\{t_{imi'm'\sigma} \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma}\right\} \\ -\sum_{im\sigma} B_{im\sigma}f^\dag_{im\sigma}\hat f_{im\sigma}\,. \label{eq:HfBoSS}\end{gathered}$$ As explained above, the only modifications to the original tunneling elements $t$ are those between $p$ and $d$ localized states (the factors of ${\langle \hat O_{i\alpha} \rangle}_s$ above). The remainder of the tunneling matrix elements are unchanged. This concludes the theoretical specification of the BoSS spinon problem. Periodic systems and Bloch states --------------------------------- The above formalism is applicable both to isolated systems, such as molecules, and to extended materials such as crystalline solid state materials. However, for crystalline systems which have a periodic arrangement of atoms over macroscopic length scales, one typically describes them using periodic boundary conditions which then permits use of Bloch’s theorem to greatly reduce the size of the problem: one can replace a large simulation cell with periodic boundary conditions by instead dealing with the much smaller primitive unit cell under “twisted” boundary conditions. In the solid state language, one uses $k$-sampling over a uniform grid of Bloch wave vectors $k$ in the first Brillouin zone (Born-von Karman boundary conditions) [@ashcroft_solid_1976]. Within the BoSS approach, only the spinons are aware of the $k$-sampling because the slave problem is solved in a completely localized manner, i.e., one site at a time, and is thus unaffected by the long-range electronic boundary conditions. For the spinons, each $k$ vector is associated with its own Hamiltonian $$\begin{gathered} \hat H_f^{(k)} = \sum_{im\sigma\in d}\sum_{i'm'\in p} \left\{[t^{(k)}_{imi'm'\sigma}{\langle \hat O_{i\alpha}^\dag \rangle}_s] \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} + [t^{(k)}_{i'm'im\sigma} {\langle \hat O_{i\alpha} \rangle}_s ] \hat f_{i'm'\sigma}^\dag \hat f_{im\sigma} \right\} \\ + \left[\sum_{im\sigma\in d}\sum_{i'm'\in d} + \sum_{im\sigma\in p}\sum_{i'm'\in p} \right] \left\{t^{(k)}_{imi'm'\sigma} \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma}\right\} \\ -\sum_{im\sigma} B_{im\sigma}f^\dag_{im\sigma}\hat f_{im\sigma}\,, \label{eq:Hfk}\end{gathered}$$ where the sums over $im\sigma$ and $i'm'\sigma$ now run only over the localized states in a single unit cell, and $$t^{(k)}_{imi'm'\sigma} = \sum_R e^{ik\cdot R}\, t_{imi'm'\sigma}(R)\,, \label{eq:tk}$$ and $R$ sums over the lattice vectors identifying all the primitive unit cells inside the periodic supercell.
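As a small illustration of Eq. (\[eq:tk\]) (a hedged sketch only; the container `t_real` and all other names are assumptions rather than BoSS variables), the Bloch-summed hopping matrix at a given $k$ can be assembled from the real-space hoppings as follows.

```python
# Sketch of Eq. (tk): assemble t^(k) from real-space hoppings t_real[R][m, m'],
# where R = (n1, n2, n3) labels the cell offset in lattice coordinates.
import numpy as np

def t_of_k(t_real, kpoint, lattice_vectors):
    """kpoint is Cartesian; lattice_vectors is a 3x3 array with rows a1, a2, a3."""
    norb = next(iter(t_real.values())).shape[0]
    tk = np.zeros((norb, norb), dtype=complex)
    for R, tmat in t_real.items():
        R_cart = np.dot(R, lattice_vectors)      # R = n1*a1 + n2*a2 + n3*a3
        tk += np.exp(1j * np.dot(kpoint, R_cart)) * tmat
    return tk    # Hermitian provided t_real obeys t(-R) = t(R)^dagger
```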
Spinon averaged quantities are also averaged over the $k$ points: e.g., the average ${\langle \hat X \rangle}_f$ is given by $N_k^{-1}\sum_k {\langle \hat X \rangle}_f^{(k)}$, where ${\langle \hat X \rangle}_f^{(k)}$ is the average over the ground state of $ \hat H_f^{(k)}$ and $N_k$ is the number of $k$ points. Relation to prior work, double counting correction {#sec:doublecounting} -------------------------------------------------- The formalism above differs from our prior work [@georgescu_generalized_2015; @georgescu_symmetry_2017] in two ways. The minor difference is that the above BoSS approach aims to solve for the ground state of a $pd$ Hubbard model, while the prior work states the problem generally or applies it to a simpler $d$ only model where all the localized states are correlated and centered on transition metal sites. This boils down primarily to differences in notation and the factors involved in the rescaling of the tunneling terms in $\hat H_f$ and $\hat H_s$. The major difference is that (a) all the electron-electron interaction terms in BoSS are contained only in the slave boson sector of the problem, and (b) there are reference occupation values such as ${\langle \hat N_i \rangle}_0$ in the interaction terms. These differences stem from how the BoSS approach is meant to be used in practice. Our BoSS approach is intended to be used as a post-processing step to a mean field band structure calculation based on, e.g., DFT. Namely, the BoSS model takes as input the DFT description and then tries to correct its deficiencies. It assumes that electron-electron interactions at the mean field level, where electrons interact via averaged potentials and thus the description is of the single-particle type, are already included in the $t_{imi'm'\sigma}$ values. Hence, the interactions that are missing from the mean-field approach are those due to fluctuations in the number of electrons on the correlated sites as described by the slave-boson part. However, since the mean field approach already describes certain types of electron-electron interactions, we want to avoid including these interactions twice and erroneously double counting them. Double counting corrections have a long history and are an important part of any approach using localized basis sets for interacting electron problems [@kotliar_electronic_2006]. In the end, one posits a physically motivated correction that is exact in some limit. For BoSS, the interaction terms in Eqs. (\[eq:Hinti1\],\[eq:Hinti2\],\[eq:Hinti3\]) are written in an explicit form showing that they are non-zero when the electron number in a set of localized correlated states fluctuates away from an average value such as ${\langle \hat N_i \rangle}_0$. Physically, we expect that the slave boson theory should give no corrections to the mean field description when the electron number fluctuations about the mean field values are zero. Hence, we choose the double counting reference electron occupations ${\langle \hat N_i \rangle}_0$, ${\langle \hat N_{im} \rangle}_0$ and ${\langle \hat N_{i\sigma} \rangle}_0$ in Eqs. (\[eq:Hinti1\],\[eq:Hinti2\],\[eq:Hinti3\]) to be those obtained from solving the BoSS problem with no added interactions, i.e., with $U_i=U'_i=J_i=0$ or $\hat H^{(i)}_{int}=0$. While BoSS has been designed to be a post processor for a mean field calculation in order to add missing Hubbard-type physics, one can easily use the BoSS framework to (approximately) solve a Hubbard model itself.
One simply sets ${\langle \hat N_i \rangle}_0=0$ in the interaction terms of Eqs. (\[eq:Hinti1\],\[eq:Hinti2\],\[eq:Hinti3\]) and proceeds to solve the resulting problem. Algorithms used in BoSS ======================= Before describing the software implementation of BoSS, it is helpful to describe briefly the numerical algorithms used by BoSS to solve the slave-boson problem. Describing the algorithms first helps set the stage for the necessarily more detailed and low-level software implementation description. The most basic problem BoSS must solve over and over is the computation of the ground state expectations of the spinon density matrix ${\langle \hat f^\dag_{im\sigma}\hat f_{i'm'\sigma} \rangle}_f$ and the slave expectation ${\langle \hat O^\dag _{i\alpha}\hat O_{i'\alpha'} \rangle}_s$. While formally these expectations are for the ground state wave function of $\hat H_f$ (Eq. \[eq:Hfk\]) and $\hat H_s$ (Eq. \[eq:Hsi\]), respectively, in practice we use a low but finite temperature Boltzmann distribution to compute them: thermal averaging naturally averages over degenerate manifolds, provides numerical stability for near degenerate states, and accelerates sampling of the Fermi surface for metallic spinon systems. For the non-interacting spinon Hamiltonian of Eq. (\[eq:Hfk\]) at a $k$ point, BoSS sets up a square Hermitian Hamiltonian matrix $H^{(k,\sigma)}$ for each spin channel $\sigma$ with off-diagonal $(d,p)$ entries given by $t^{(k)}_{imi'm'\sigma}{\langle O_{i\alpha} \rangle}_s$, other off-diagonal entries $t^{(k)}_{imi'm'\sigma}$, and diagonal entries $t^{(k)}_{imim\sigma}-B_{im\sigma}$. BoSS diagonalizes this matrix to obtain the band energies $\epsilon^{(k,\sigma)}_{n}$ and orthonormal eigenvectors $u_{im,n}^{(k,\sigma)}$. The expectation is then computed using the Fermi-Dirac distribution via $${\langle f^\dag_{im\sigma}f_{i'm'\sigma} \rangle}_f = \frac{1}{N_k}\sum_{k,n} \frac{u_{im,n}^{(k,\sigma)}\ {u_{i'm',n}^{(k,\sigma)}}^*}{1+\exp[\beta(\epsilon_{n}^{(k,\sigma)}-\mu)]}\,,$$ where $\beta=1/(k_BT)$ is the inverse thermal energy. The chemical potential $\mu$ is determined by ensuring the correct mean number of total electrons $N_e$ per simulation cell, $$N_e = \frac{1}{N_k}\sum_{k,\sigma,n} \frac{1}{1+\exp[\beta(\epsilon_{n}^{(k,\sigma)}-\mu)]}\,.$$ The unique value of $\mu$ is determined efficiently by the bisection algorithm [@press_numerical_2007] since the summand is monotonically increasing in $\mu$. For the slave Hamiltonian operator on each site, $\hat H_s^{(i)}$ of Eq. (\[eq:Hsi\]), the corresponding Hamiltonian matrix is computed in the number representation where the $\hat O_{i\alpha}$ operators have the matrix elements given by Eq. (\[eq:Omatrix\]), and the slave number operators $\hat N_{i\alpha}$ are diagonal matrices. Diagonalization of the Hamiltonian produces eigenenergies $E^{(i)}_n$ and eigenstates $\ket{n^{(i)}}$ that are used to compute averages of any slave-based operator $\hat X^{(i)}$ on site $i$ via $${\langle \hat X^{(i)} \rangle}_s = \frac{1}{Z^{(i)}}\sum_n \bra{n^{(i)}}\hat X^{(i)}\ket{n^{(i)}}e^{-\beta E_n^{(i)}} \ \ , \ \ Z^{(i)} = \sum_n e^{-\beta E_n^{(i)}}\,.$$ Given the sparsity of Eq. (\[eq:Omatrix\]), the slave Hamiltonian is also sparse so BoSS can employ sparse matrix methods to store and diagonalize the Hamiltonian thereby saving significant memory and computational effort.
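The bisection search for $\mu$ can be sketched as follows (a minimal illustration with hypothetical names, not the BoSS routine itself); it assumes the band energies $\epsilon_n^{(k,\sigma)}$ for all $k$ and $\sigma$ have already been collected into a single array.

```python
# Hedged sketch: bisection for the chemical potential mu so that the
# Fermi-Dirac-weighted band occupancies give the target electron count per cell.
import numpy as np

def fermi(eps, mu, beta):
    return 1.0 / (1.0 + np.exp(beta * (eps - mu)))       # Fermi-Dirac occupation

def find_mu(band_energies, n_electrons, n_k, beta, tol=1e-10):
    """band_energies: flat array of eps_n^(k,sigma) over all k, sigma and n."""
    eps = np.asarray(band_energies, dtype=float)
    lo, hi = eps.min() - 10.0 / beta, eps.max() + 10.0 / beta
    count = lambda mu: fermi(eps, mu, beta).sum() / n_k   # mean electrons per cell
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count(mid) < n_electrons:
            lo = mid          # occupancy increases monotonically with mu
        else:
            hi = mid
    return 0.5 * (lo + hi)
```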
A further saving comes from the fact that only states within a few $\beta^{-1}$ of the lowest energy contribute to the thermal averaging, so the diagonalization need only return a small subset of the lowest energy eigenvalues and associated eigenvectors. The next higher level problem BoSS must attack is finding the lowest energy state of the slave Hamiltonian $\hat H_s^{(i)}$ of Eq. (\[eq:Hsi\]) while matching certain conditions which always include matching specified spinon occupancies ${\langle \hat n_{i\alpha} \rangle}_f$. The first case is that one is seeking to find the constants $C_{i\alpha}$ that are needed to define the $O_{i\alpha}$ matrices: one adjusts both $C_{i\alpha}$ and $h_{i\alpha}$ to match ${\langle \hat N_{i\alpha} \rangle}_s={\langle \hat n_{i\alpha} \rangle}_f$ as well as ensure that ${\langle \hat O_{i\alpha} \rangle}_s=1$. Due to the lack of interactions, each channel $\alpha$ can be solved separately so this represents a two-dimensional search in $(h_{i\alpha},C_{i\alpha})$ to match two conditions. The second case is that one is solving the interacting slave-boson problem in which case the different $\alpha$ bosons on the same site $i$ are coupled so that one must search over the entire set of $\{h_{i\alpha}\}$ at each site $i$ to match all the spinon occupancies ${\langle \hat n_{i\alpha} \rangle}_f$. Our experience shows that due to the relatively well behaved nature of both cases, a modified Newton’s algorithm is sufficient to efficiently solve both problems. Both problems are of the generic form $f(x)-y=0$ where we search for a vector $x$ that satisfies the equation for a fixed vector $y$. The derivative matrix $df|_x$ is computed numerically by finite differences, and our modified Newton algorithm for going from Newton step $j$ to $j+1$ is $$x_{j+1}=x_j - \alpha_{j+1} (df|_x)^{-1} (f(x_j)-y)\,.$$ The scaling factor $\alpha_j=1$ defines the textbook Newton’s algorithm. However, to avoid instability and overshooting, we dynamically update $\alpha_j$ based on progress toward a solution, as measured by the size of the residual $e(x)\equiv \|f(x)-y\|$ (the standard Euclidean norm). If $e(x)$ is worsened compared to the previous step, i.e., $e(x_j)> e(x_{j-1})$, then we reduce $\alpha_{j+1} = \alpha_j / 3$ to take a conservative small step towards the solution. But if $e(x_j)\le e(x_{j-1})$, we instead push $\alpha_{j+1}$ towards unity via $\alpha_{j+1}=2\alpha_j(2-\alpha_j)$. The computationally costly part of this approach is evaluation of the derivative matrix $df|_x$ when many boson modes $\alpha$ exist on a site. To gain efficiency, BoSS will calculate the $df|_x$ matrix once and use it for some user-specified number of Newton steps before recomputing it (i.e., Picard’s method instead of Newton’s method for the intermediate steps). One level higher is to solve the spinon+slave problem self-consistently for some specified set of “big $B$” values $B_{im\sigma}$. We have found this numerical problem to be [*surprisingly smooth*]{}: a simple fixed point iteration algorithm is sufficient for rapid convergence.
The fixed-point iteration works as follows: given some state of the spinon+slave system at fixed $B_{im\sigma}$, BoSS uses the current spinon averages ${\langle \hat f_{im\sigma}^\dag\hat f_{i'm'\sigma} \rangle}_f$ to set up and solve the slave problem over all correlated sites $i$, which provides updated averages ${\langle \hat O_{i\alpha} \rangle}_s$; then, these updated averages are used to set up and solve the spinon problem and to update the ${\langle \hat f_{im\sigma}^\dag\hat f_{i'm'\sigma} \rangle}_f$; and the process is repeated until the magnitude of the successive changes of the spinon occupancies ${\langle \hat f_{im\sigma}^\dag\hat f_{im\sigma} \rangle}_f$ over the correlated sites drops below a tolerance value.

At the highest level, BoSS must minimize the total energy $E_{tot}$ of Eq. (\[eq:Etotfull\]) over the $B_{im\sigma}$. When all required constraints are met and the BoSS $pd$ approach is used, $E_{tot}$ takes the simpler form $$E_{tot}(B) = \sum_{ii'mm'\sigma} t_{imi'm'\sigma} {\langle \hat f_{im\sigma}^\dag \hat f_{i'm'\sigma} \rangle}_f {\langle \hat O_{i\alpha}^\dag \rangle}_s{\langle \hat O_{i'\alpha'} \rangle}_s + \sum_i {\langle \hat H^{(i)}_{int} \rangle}_s \label{eq:Etotused}$$ where the vector $B$ contains all the $B_{im\sigma}$ values. The $B$-dependence of $E_{tot}$ comes from the spinons via their Hamiltonian of Eq. (\[eq:HfBoSS\]). BoSS minimizes this energy by simple gradient descent in $B$ with adjustable step size. The gradient $\nabla_B E_{tot}(B)$ is computed numerically by finite differences of the components of $B$. The update step is $B_{j+1}=B_j - \gamma_{j+1}\nabla_B E_{tot}(B_j)$. The scaling factor $\gamma_j>0$ is adjusted based on progress in lowering the energy. If the energy went down, i.e., $E_{tot}(B_{j})<E_{tot}(B_{j-1})$, then the step size is increased via $\gamma_{j+1}=G\gamma_j$ with a growth factor $G>1$ (the default value is $G=1.1$). However, if the energy went up compared to the previous step, the step size is reduced via $\gamma_{j+1}=\gamma_j / R$ with $R>1$ (the default value is $R=3$). This simple algorithm attempts to adjust the steps in $B$ to be as large as possible while still decreasing $E_{tot}$. The minimization is terminated when successive changes of $E_{tot}$ fall below a tolerance.

Software implementation
=======================

The BoSS software is implemented in the MATLAB [@MATLAB2019] programming and software environment. This environment is widely available on many computational platforms and allows for rapid software development and testing, as well as plotting and visualization. In what follows, we briefly describe the program flow, input/output, and key variables in BoSS. File names or key variable names associated with a particular routine or setting are typeset as `filename` or `variablename` below.

BoSS program flow
-----------------

The main program file `mainprogram.m` and the important subroutine file `setup_system.m` (which reads the input data) reside in the top-level directory of the BoSS package, while the remaining subroutines are in a `functions/` subdirectory. A high-level overview of the software is provided by the flowchart in Figure \[fig:flowchart\].

![High level schematic flowchart of the main control flow in BoSS.
The main program flow (left column) relies on two subroutines: “SCF loop” (defined in the center column) and “Minimize $E_{tot}$” (defined in the right column which also relies on SCF loop).[]{data-label="fig:flowchart"}](Figure-flowchart.pdf){width="5in"} The main program (`mainprogram.m`) calls a subroutine to initialize key variables (`setup_system.m`) and then does a self-consistent field (SCF) calculation of the slave-boson problem (`functions/SCFloop.m`) before reporting on the solution; if requested, the main flow calls the `functions/minimize_Etot.m` subroutine before reporting on final results. The minimization of $E_{tot}$ via gradient descent also relies on `functions/SCFloop.m` to find a self-consistent solution at a given value of the $B_{im\sigma}$ variables. The SCF loop implements a simple self-consistency loop over the spinon occupations of the correlated orbitals by calling lower level routines which use the algorithms discussed in the previous section. The main computational subroutines that these high level activities depend on are: - `functions/solve_spinon_then_slave.m` and\ `functions/solve_slave_then_spinon.m` : these two similar routines solve the spinon and slave problems in the order specified by their file names. - `functions/slave_driver.m` : loops over correlated sites and solves the slave problem on each site while matching spinon occupancies. - `functions/Csearch.m` : solves the slave problem on a site at zero interaction strength to find the $C_{i\alpha}$ values for that site that give ${\langle O_{i\alpha} \rangle}_s=1$ (while also matching spinon occupancies using the $h_{i\alpha}$). - `functions/hsearch.m` : solves the interacting slave problem on a site while matching spinon occupancies by adjusting the $h_{i\alpha}$. - `functions/HamSlave.m` : low level computational routine that sets up the interacting slave-boson Hamiltonian problem on a given site and finds the ground state via diagonalization. - `functions/solvespinon_fixedN` : solves the spinon problem over all $k$ vectors for a fixed number of electrons. - `functions/buildHspinon` ,\ `functions/diagHspinon` , and\ `functions/diagH_kspinon` : A set of routines that loop over the $k$ vectors, build the spinon Hamiltonian at each $k$, and then diagonalize them to find the spinon eigenvalues and eigenvectors at each $k$. - `functions/calcrhospinon.m` and\ `functions/findmu.m` : compute the spinon density matrix ${\langle \hat f^\dag_{im\sigma}\hat f_{i'm'\sigma} \rangle}_f$ by summing over the $k$ and using the eigenvectors at each $k$ together with the Fermi-Dirac occupancies computed using the spinon eigenvalues and chemical potential $\mu$. Input hopping/tunneling elements -------------------------------- The most important input to BoSS is the localized orbital (tight-binding) model specified by the tunneling matrix elements $t_{imi'm'\sigma}$. These are read from a plain text file in the format output by the Wannier90 software package [@mostofi_wannier90:_2008; @wannier90website] for computing maximally localized Wannier functions [@marzari_maximally_1997; @souza_maximally_2001; @marzari_maximally_2012]. The tight-binding data file output by Wannier90 is a plain text file named `<base>_hr.dat` (where `<base>` is a placeholder for a name chosen by the Wannier90 user). Typically, the Wannier90 program is run as a post-processing step to a first principles DFT calculation to produce a localized basis describing the electronic structure. 
However, one can generate a hand-written `<base>_hr.dat` file to describe some desired tight-binding problem (see the explanation of the tutorials in Sec. \[sec:tutorial\] below). The text file `<base>_hr.dat` is generally quite long and therefore slow to process, so BoSS requires that the user perform a one-time preprocessing of this file to convert it into MATLAB binary form for rapid read access. This is accomplished by the supplied `convert_hrdat_to_bin.m` function that can process v1.1 or v1.2 formatted Wannier90 `<base>_hr.dat` files to produce the binary version. It is the binary files that are read by the subroutine `setup_system.m` during the execution of the BoSS program. In fact, BoSS reads two files of tight-binding data since there are two independent spin channels ($\sigma=\pm1$ or “up”/“down” spin): the file names are set by the variables `hrbinfileup` and `hrbinfiledn` in `setup_system.m`. This allows one to deal with spin-polarized tight-binding representations; if no spin polarization is present (or desired), one simply makes the two file names identical.

Key input/control variables
---------------------------

BoSS has a large number of input and control variables that are defined and set to various values in `setup_system.m`. We refer the reader to examples in the software package for a full, commented list of the variables. Here, we highlight the meaning and implications of the more important variables. The BoSS programming philosophy is that all input or control variables are defined and initialized in the file `setup_system.m`: the rest of the program, subroutines, and functions should not contain other such variables or arbitrary numerical values (which have significant influence over the program execution or output).

The important high-level variables common to both the spinon and slave problems are:

- `corbs` and `porbs` : two integer arrays containing lists of localized orbitals that are correlated and uncorrelated (i.e., interacting and non-interacting), respectively. The numbering of orbitals is that of the input Wannier90 representation.

- `occtol` : main electron occupancy tolerance for self-consistency and number matching. This value is used to decide if the SCF loop is converged (when `corb` spinon occupancies change by less than this magnitude between successive iterations) as well as the maximum difference allowed between slave and spinon occupancies when searching over $h_{i\alpha}$.

- `tijtol` : tunneling elements $t_{imi'm'\sigma}$ smaller in magnitude than this number (in eV) are set to zero. This is useful for significantly reducing the size of the tight-binding representation, which typically contains many small entries between spatially far apart orbitals. However, it may change the non-interacting spinon bands away from the ones defined by the Wannier90 output. Setting this to zero retains all input tunneling elements.

- `minimize_Etot_over_Bfield` : a flag deciding if minimization of $E_{tot}$ over $B_{im\sigma}$ is to be performed (a non-zero value turns it on).

Most of the variables controlling the spinon behavior are members of the structure `spinoninfo`. The key ones are:

- `spinoninfo.dim` : controls the dimensionality of the $k$-sampling. If equal to 2, $k$ vectors sample only the $xy$ plane; if equal to 3, $k$ vectors sample in all three spatial directions.

- `spinoninfo.nk` : the number of evenly spaced $k$ samples along each axial direction being sampled. The sampling directions are along the primitive reciprocal lattice vectors.
- `spinoninfo.kT` : temperature (in eV) for the Fermi-Dirac distribution converting spinon energies to occupancies.

- `spinoninfo.Ne` : the total number of spinons (i.e., electrons) in each unit cell. This is the value the chemical potential $\mu$ search targets.

- `spinoninfo.Bfield` : initial values of the $B_{im\sigma}$ (in eV) that control the spinon occupancies. These are updated if minimization is turned on.

The key variables controlling the slave bosons are members of `slaveinfo`:

- `slaveinfo.nsites` : the number of correlated sites.

- `slaveinfo.nslavespersite` : the number of slave modes per site.

- `slaveinfo.allowedOccs` : an integer array specifying the set of allowed slave occupancy numbers on a correlated site. For example, if a single boson describes the occupancy of the entire $d$ shell, which has 5 spatial orbitals and two spin channels, then set this to `[0:10]`; in the other extreme, where each boson describes a unique spin+orbital combination, set this to `[0:1]`.

- `slaveinfo.ncorbsperslave` : the number of spatial orbitals per slave mode. If the value is one, then the slave model can resolve individual spatial orbitals and the value of $U'$ is used in the interaction Hamiltonian.

- `slaveinfo.spinresolved` : if set to one, the slave modes can distinguish the two spin indices $\sigma$, and this turns on the use of $J$ and the Hund’s interaction term (setting to zero turns this off).

- `spinoninfo.U` , `spinoninfo.Up` , `spinoninfo.J` : arrays specifying the $U,U',J$ values (in eV) for each correlated site.

- `slaveinfo.Oavgtol` : the tolerance within which ${\langle O_{i\alpha} \rangle}=1$ when solving the non-interacting slave problem for the $C_{i\alpha}$.

- `slaveinfo.kTslave` : temperature (in eV) for the Boltzmann distribution used to compute the slave-boson averages.

When minimization is performed, the structure `miniminfo` contains the variables controlling the minimization. The most critical variable is the energy tolerance `miniminfo.Etottol` (in eV) for changes of $E_{tot}$ during minimization: when the successive change of $E_{tot}$ between gradient descent steps drops below this tolerance, the minimization is terminated.

Key working variables
---------------------

The BoSS program flow has a number of variables that are modified as the final self-consistent and/or minimized solution is computed. Here we focus on a few basic and key variables, and the reader may consult the software package for other variables and how they are computed or used. The variables of interest are:

- `dcount` : a $2\times n_c$ array containing the spinon occupancies ${\langle \hat f^\dag_{im\sigma}\hat f_{im\sigma} \rangle}_f$ of the correlated localized orbitals, where $n_c$ is the length of the `corb` array (i.e., the number of correlated spatial orbitals). The rows refer to the spin index $\sigma$ and the columns to the spatial orbitals in the order specified in `corb`.

- `Oavg` : a $2\times n_c$ array containing the slave averages ${\langle \hat O_{i\alpha} \rangle}_s$. Correlated localized states belonging to the same $i\alpha$ index have the same `Oavg` values.

- `Eint` : expectation value of the total electron-electron interaction energy, the second term on the right-hand side of Eq. (\[eq:Etotused\]).

- `Eband` : expectation value of the hopping energy, the first term on the right-hand side of Eq. (\[eq:Etotused\]).

- `Etot` : the sum `Eband + Eint`.
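To make the preceding description concrete, the sketch below shows one way the main input variables might be set inside `setup_system.m`. The numerical values are placeholders chosen purely for illustration; they do not correspond to any tested input deck from the package.

```matlab
% Illustrative settings in the style of setup_system.m (placeholder values).
corbs  = 1:5;                  % correlated Wannier orbitals (e.g., a d shell)
porbs  = 6:14;                 % uncorrelated Wannier orbitals (e.g., ligand p)
occtol = 1e-4;                 % occupancy tolerance for SCF and number matching
tijtol = 1e-3;                 % drop hopping elements smaller than this (eV)
minimize_Etot_over_Bfield = 0; % no minimization over B in this run

spinoninfo.dim = 3;            % sample k vectors in all three directions
spinoninfo.nk  = 8;            % samples along each primitive reciprocal vector
spinoninfo.kT  = 0.01;         % eV
spinoninfo.Ne  = 19;           % electrons per simulation cell
spinoninfo.U   = [10];         % U  for each correlated site (eV)
spinoninfo.Up  = [6];          % U' for each correlated site (eV)
spinoninfo.J   = [2];          % J  for each correlated site (eV)

slaveinfo.nsites         = 1;
slaveinfo.nslavespersite = 5;   % one slave per spatial orbital
slaveinfo.ncorbsperslave = 1;
slaveinfo.spinresolved   = 0;   % slaves do not resolve spin
slaveinfo.allowedOccs    = 0:2; % each slave then holds 0, 1, or 2 electrons
slaveinfo.Oavgtol        = 1e-4;
slaveinfo.kTslave        = 0.01; % eV

miniminfo.Etottol = 1e-6;      % eV (used only if minimization is turned on)
```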
Tutorial examples {#sec:tutorial}
=================

The BoSS software package is distributed with four examples forming an introductory tutorial. The first example is about the electronic structure of SrVO$_3$, a metallic and non-magnetic cubic perovskite transition metal oxide whose observed electronic bands show significant quantitative differences from the DFT-calculated ones for a 5-atom primitive unit cell. The second example is about how one can create a Wannier90-formatted `<base>_hr.dat` file easily to describe a desired Hubbard model. The third example shows the effect of having the $B_{im\sigma}$ symmetry-breaking fields, and how they can be determined via minimization of the total energy $E_{tot}$. The fourth example shows how comparing two different slave models for the same material, LaNiO$_3$, can give insight into the key physics. We will summarize key aspects of the examples below, and refer the reader to the software package’s tutorial documentation and downloadable files for full details.

![Output of BoSS run for SrVO$_3$ (Example 1). The slave model is the “orbital slave” for the V 3$d$ Wannier orbitals: 5 slaves per V atom, one slave per spatial orbital, no spin resolution for the slaves so allowed slave occupancies are {0,1,2}; interaction strengths of $U=U'=12$ eV and $J=0$ eV are used for the V 3$d$ orbitals. The left panel shows the band structure for both spin channels (which are identical due to lack of spin polarization): green shows the original Wannier (DFT) bands at $U=U'=J=0$ and red shows the renormalized spinon bands. The right panel shows the projected spinon density of states (PDOS) onto the $d$ (V 3$d$) orbitals and $p$ (O 2$p$) orbitals for both the original and renormalized spinon bands (negative PDOS refers to spin down and positive to spin up). The Gaussian broadening for the PDOS has a standard deviation of 0.05 eV. []{data-label="fig:svospinonbands"}](output_SVO_bands.pdf "fig:"){width="2.3in"} ![Output of BoSS run for SrVO$_3$ (Example 1). The slave model is the “orbital slave” for the V 3$d$ Wannier orbitals: 5 slaves per V atom, one slave per spatial orbital, no spin resolution for the slaves so allowed slave occupancies are {0,1,2}; interaction strengths of $U=U'=12$ eV and $J=0$ eV are used for the V 3$d$ orbitals. The left panel shows the band structure for both spin channels (which are identical due to lack of spin polarization): green shows the original Wannier (DFT) bands at $U=U'=J=0$ and red shows the renormalized spinon bands. The right panel shows the projected spinon density of states (PDOS) onto the $d$ (V 3$d$) orbitals and $p$ (O 2$p$) orbitals for both the original and renormalized spinon bands (negative PDOS refers to spin down and positive to spin up). The Gaussian broadening for the PDOS has a standard deviation of 0.05 eV. []{data-label="fig:svospinonbands"}](output_SVO_pdos.pdf "fig:"){width="2.3in"}

: Bulk SrVO$_3$ has a cubic perovskite structure with a five-atom primitive unit cell with no observed spin polarization or other symmetry breaking. A $p$-$d$ model is used with O 2$p$ and V 3$d$ Wannier orbitals (14 orbitals per unit cell). The full tutorial files include details of the DFT calculations, including input files for the Quantum Espresso DFT package [@QE], as well as the Wannier90 input file and the output `SVO_hr.dat` tight-binding description. Running the tutorial produces the band structure and projected densities of states (PDOS) shown in Figure \[fig:svospinonbands\].
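The PDOS curves in these panels are obtained by Gaussian-broadening the spinon eigenvalues, weighted by their orbital projections. A minimal sketch of such a broadening is given below; this is our illustration rather than the BoSS plotting code, and all names are ours.

```matlab
% Sketch: Gaussian-broadened projected density of states.
% energies: eigenvalues (eV); weights: |orbital projection|^2 per eigenvalue;
% egrid: energy grid for the output; sigma: Gaussian standard deviation (eV).
function pdos = gaussian_pdos_sketch(energies, weights, egrid, sigma)
  pdos = zeros(size(egrid));
  for n = 1:numel(energies)
    pdos = pdos + weights(n) ...
         * exp(-(egrid - energies(n)).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
  end
end
```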
The main observation is that the spinon bands for the V 3$d$ conduction bands (those crossing the chemical potential $\mu$) become systematically narrowed in energy compared to the bare DFT bands, which corresponds to an effective mass enhancement by a factor of $\approx$ 2. This is the primary effect of the local electronic interaction on the conducting electronic bands. Figure \[fig:svobossexptdmft\] shows a direct comparison of BoSS electronic spectra to available experimental and DMFT data: as no effort was made to fine-tune the parameters in the BoSS calculation, the comparison to prior work is very encouraging.

![Comparison of the electronic band structure of SrVO$_3$ from (a) BoSS, (b) experimental data from angle-resolved photoemission spectroscopy (ARPES) [@svoexpt], and (c) DFT+DMFT [@svodmft]. The results in (a) show the non-interacting bands (dashed green labeled “bare”) and the interacting spinon bands (solid red). The BoSS calculation uses $U=12$ eV, $J=2$ eV, and $U'=8$ eV for the V $3d$ manifold of orbitals, and ten slave modes are used per V atom (one slave per spatial orbital and spin combination) with allowed slave occupancies of {0,1}. []{data-label="fig:svobossexptdmft"}](SVOexptdmft.pdf){width="4.5in"}

: The aim of this example is to show how easy it is to create a tight-binding representation file `<base>_hr.dat` by hand and thus create a manually specified Hubbard model. The example creates a simple one-dimensional chain of alternating $d$ and $p$ sites. Interested readers can examine the software package files for this example.

: SmNiO$_3$ is a perovskite-structured material with an insulating and antiferromagnetic ground state whose unit cell contains 80 atoms. Instead of describing the full complexity of this system, this tutorial example focuses on a simpler description based on a 10-atom unit cell (two formula units) containing two inequivalent Ni cations: a “breathing mode” distortion exists in this material at low temperatures whereby one Ni atom has a larger oxygen octahedron surrounding it while the other Ni has a smaller octahedron. This distortion is accompanied by a transition from a non-magnetic metal at high temperature to an insulating and magnetic system at low temperatures. The magnetic structure in this small unit cell is taken as ferromagnetic for simplicity. The tutorial files provide details of calculations with $B_{im\sigma}=0$ and $B_{im\sigma}\ne0$, as well as the minimization over $B_{im\sigma}$ that yields the final optimal state of the system. Here we will simply compare the magnetic and non-magnetic solutions. Figure \[fig:snoB0Bmin\] compares the band structure of the two extremes: the optimized description with lowest $E_{tot}$ is insulating and magnetic, in agreement with experiment (the non-magnetic calculation is metallic). Regardless of the magnetic state, the interactions reduce the width of the energy bands.

![Band structure output of a BoSS run for SmNiO$_3$ (Example 3) with $U=12$ eV, $U'=8$ eV and $J=2$ eV. The simulation cell has two formula units (10-atom cell) with two inequivalent Ni sites. Only the Ni $e_g$ orbitals (two per Ni) are treated as correlated (“$d$”) orbitals with the remaining $t_{2g}$ Ni 3$d$ orbitals and O 2$p$ orbitals treated as uncorrelated (“$p$”). The left two panels show the electronic bands resulting without any magnetism ($B_{im\sigma}=0$): the two spin channels are necessarily identical and the system is metallic (incorrect compared to experiment).
The right two panels show the energy bands of the minimal $E_{tot}$ system with optimal $B_{im\sigma}$: the majority spin channel (spin=1) has two filled $e_g$ bands while all $e_g$ bands for minority spins (spin=2) are above $\mu$ and empty. The spinon bands correctly predict an insulating material.[]{data-label="fig:snoB0Bmin"}](output_SNO_Bfield=0_bands.pdf "fig:"){width="2.3in"} ![Band structure output of a BoSS run for SmNiO$_3$ (Example 3) with $U=12$ eV, $U'=8$ eV and $J=2$ eV. The simulation cell has two formula units (10-atom cell) with two inequivalent Ni sites. Only the Ni $e_g$ orbitals (two per Ni) are treated as correlated (“$d$”) orbitals with the remaining $t_{2g}$ Ni 3$d$ orbitals and O 2$p$ orbitals treated as uncorrelated (“$p$”). The left two panels show the electronic bands resulting without any magnetism ($B_{im\sigma}=0$): the two spin channels are necessarily identical and the system is metallic (incorrect compared to experiment). The right two panels show the energy bands of the minimal $E_{tot}$ system with optimal $B_{im\sigma}$: the majority spin channel (spin=1) has two filled $e_g$ bands while all $e_g$ bands for minority spins (spin=2) are above $\mu$ and empty. The spinon bands correctly predict an insulating material.[]{data-label="fig:snoB0Bmin"}](output_SNO_Bfield=minim_bands.pdf "fig:"){width="2.3in"}

: LaNiO$_3$ is a conducting transition metal oxide in which electronic interactions are known to lead to quantitative and observable changes of the electronic bands. We choose a simple cubic unit cell for LaNiO$_3$ (one formula unit), which is the simplest representation and also allows for direct comparison to prior DMFT calculations.

![Band structure output of BoSS for cubic LaNiO$_3$ (Example 4) run with $U=10$ eV, $U'=6$ eV and $J=2$ eV without any magnetism. There are two correlated spatial orbitals on the Ni (the $e_g$ orbitals, $d_{3z^2-r^2}$ and $d_{x^2-y^2}$) and all remaining Ni and O 2$p$ orbitals are in the uncorrelated set. Bare green dashed bands are the DFT-LDA results, and the solid red curves are the BoSS results for a slave model with full orbital and spin resolution (i.e., 4 slave modes, one for each spin and orbital combination, with occupancies of either 0 or 1). The two high-energy bands of $e_g$ character cross the chemical potential $\mu$: the crossing in the $\Gamma-X$ direction is highlighted by the blue circle, and the slopes of the bands at the crossings are the velocities $v_F^0$ (bare bands) and $v_F$ (spinon bands).[]{data-label="fig:lnobands"}](LNO5_bands_spin+orb_U10J2.pdf){width="3in"}

The electronic bands for this system are displayed in Figure \[fig:lnobands\]. We see that electronic interactions have a strong quantitative effect on the energy bands and make them narrower when compared to the non-interacting (bare) bands. To quantify this effect, it is customary to compute ratios of the slopes of the bands (called the Fermi velocities, $v_F$) as they cross the chemical potential: the interaction reduces the bandwidth and thus the slope, and the ratio of the non-interacting to interacting slope, $v_F^0/v_F$, is often quoted as the “effective mass enhancement factor” and as a measure of the effect of electronic interactions and correlations on the energy bands.
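A simple way to extract such a slope numerically from a band sampled along $\Gamma-X$ is sketched below; this is our own illustration and not a BoSS routine. The mass enhancement factor is then the ratio `vF0/vF` computed from the bare and spinon bands on the same path.

```matlab
% Sketch: estimate the Fermi velocity of a sampled band as the finite-difference
% slope at its crossing of the chemical potential mu.
% kpath: cumulative distance along the k path; eband: band energies (eV).
function vF = fermi_velocity_sketch(kpath, eband, mu)
  s = sign(eband - mu);
  idx = find(s(1:end-1) .* s(2:end) <= 0, 1);  % first interval containing a crossing
  assert(~isempty(idx), 'band does not cross mu on this path');
  vF = abs((eband(idx+1) - eband(idx)) / (kpath(idx+1) - kpath(idx)));
end
```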
| $v_F^0/v_F$ | $J=0$ eV | $J=1$ eV | $J=2$ eV | $J=3$ eV |
|-------------|----------|----------|----------|----------|
| $U=10$ eV   | 1.31     | 1.30     | 1.33     | 1.36     |
| $U=12$ eV   | 1.44     | 1.43     | 1.49     | 1.58     |
| $U=14$ eV   | 1.58     | 1.59     | 1.68     | 1.80     |

: Renormalization of the Fermi velocity along the $\Gamma-X$ direction for cubic LaNiO$_3$: $v_F^0$ is the DFT-LDA value and $v_F$ is the value for the spinon energy bands calculated by BoSS using the $U$ and $J$ values listed ($U'=U-2J$ throughout). The slave model used has two slave modes on the Ni site representing the two $e_g$ Ni 3$d$ orbitals ($d_{3z^2-r^2}$ and $d_{x^2-y^2}$), and each slave mode can have occupancies in the set $\{0,1,2\}$ (no explicit resolution of the spin degree of freedom).[]{data-label="tab:vfnloorbslave"}

| $v_F^0/v_F$ | $J=0$ eV | $J=1$ eV | $J=2$ eV | $J=3$ eV |
|-------------|----------|----------|----------|----------|
| $U=10$ eV   | 1.50     | 1.78     | 1.35     | 2.99     |
| $U=12$ eV   | 1.64     | 1.99     | 2.73     | 3.49     |
| $U=14$ eV   | 1.78     | 2.18     | 2.99     | 3.84     |

: Renormalization of the Fermi velocity along the $\Gamma-X$ direction for cubic LaNiO$_3$. The nomenclature is identical to Table \[tab:vfnloorbslave\], but the slave model used has four slave modes on the Ni site, one slave for each unique combination of spin channel and spatial $e_g$ orbital; each slave mode can have occupancies in the set $\{0,1\}$.[]{data-label="tab:vfnlspinoorbslave"}

Tables \[tab:vfnloorbslave\] and \[tab:vfnlspinoorbslave\] show the dependence of this slope ratio on the interaction parameters $(U,U',J)$ for two different slave models. Experimental measurements [@eguchi_fermi_2009] and prior theoretical work [@deng_hallmark_2012] find that the ratio is approximately 3. As the tables show, this numerical value is achievable by fine-tuning the parameters in one of the slave models but not the other. One of the features of BoSS is that it permits one to compare the two slave models in detail to see which physical effects create the difference and lead to a better description of the actual material. For example, looking at the two tables, why does the spin+orbital description generate larger, and more physically reasonable, mass renormalizations that are quite sensitive to the $J$ value? To answer this, we can compare two spin+orbital calculations done with $\{U=U'=10,J=0\}$ and with $\{U=10,U'=6,J=2\}$ (all in eV). Upon examining the interacting slave ground state for these two cases, we find the wave functions illustrated graphically in Figure \[fig:nislavestates\].

![Comparison of the spin+orbital slave ground state wave function of the Ni $e_g$ subsystem in cubic LaNiO$_3$. The figure shows results for $U=U'=10,J=0$ (top part) and $U=10,U'=6,J=2$ (bottom part), where all parameters are in eV. Each wave function is written as a superposition of configurations of the $e_g$ manifold: each circle represents a configuration; the left and right sides of each circle represent the two $e_g$ spatial orbitals; the occupancy of each orbital is indicated by the presence (or absence) of blue arrows, which also indicate the spin occupancy. The top wave function is dominated by two-electron configurations which then equally sample the orbitals and spin states (the remaining configurations have much smaller amplitudes). The bottom wave function has a strong preference for spin-aligned two-electron configurations due to the Hund’s coupling.
[]{data-label="fig:nislavestates"}](LNO5_nislavestates.png){width="4.0in"}

As the figure shows, when $J=0$ and $U'=U$, the ground state has no preference between the different two-electron configurations: the system fluctuates between all six possible two-electron configurations equally and only rarely visits configurations with fewer or more electrons. However, once $J>0$, this two-electron and two-orbital system can lower its energy by favoring the two spin-aligned configurations at the expense of other configurations: this greatly reduces the configurational fluctuations, which in turn suppresses tunneling between Ni sites and thus the velocity of electron motion in the associated energy bands. These effects have been described in prior literature as a feature of “Hund’s metals” [@Yin2011; @Hirjibehedin2015; @PhysRevLett.117.247001; @hundscouplingAGLdMMJ2013]. What we are highlighting is the ease with which the BoSS approach allows one to identify the basic physics by suppressing or enhancing the mechanism via changes in the slave model: e.g., the results in Table \[tab:vfnloorbslave\] are much less sensitive to the interaction parameters when compared to those in Table \[tab:vfnlspinoorbslave\] because the former has no explicit description of the electron spin state and thus no way of selecting the spin-aligned configurations.

Outlook
=======

The existing BoSS framework described in this paper is easy to modify and test. Hence, it should be applied to a broad range of interacting electron systems to understand its performance, strengths, and limitations in terms of correctly predicting materials properties. With the software available in open source form, accomplishing this important task is up to the theoretical materials physics community. In terms of improved methodology and capabilities for the future, we identify a number of possible improvements in order of increasing difficulty. First, the current software assumes that all the correlated atomic sites must have identical slave-boson models (i.e., the same slave $\alpha$ indices). This limitation is easy to address by creation of improved data structures to handle each site separately. Fortunately, the software already permits site-dependent values of the $U,U',J$ parameters. Second, at present the software computes and reports the total energy $E_{tot}$, the mean occupations ${\langle \hat f^\dag_{im\sigma}\hat f_{im\sigma} \rangle}_f$ and ${\langle N_{i\alpha} \rangle}_s$, as well as the full spinon density matrix ${\langle \hat f^\dag_{im\sigma}\hat f_{i'm'\sigma} \rangle}_f$, and, spectroscopically, the spinon energy bands and projected densities of states. Direct comparison to experimental spectroscopies, however, requires computation of the electron spectral function (of which the spinon energy bands form only one part). Since both the spinon and slave eigenstates are computed by BoSS, all the inputs required for computing the spectral function within a slave-boson formalism are available in principle. In practice, additional code and data structures must be implemented for the calculation of the spectral function after the BoSS solution is found. Third, and more ambitiously, it is preferable to relax the current reliance on having the electron spin index $\sigma$ as a quantum number for electrons.
While this does permit the description of magnetic systems with collinear magnetic ordering, it does not permit arbitrary magnetic states or the description of spin-orbit coupled materials where the spatial ($m$) and spin ($\sigma$) degrees of freedom are necessarily mixed. This will require reorganization of key data structures and more significant modification of the software stack. A BoSS framework that can describe spin-orbit coupled electrons will enable a more realistic handling of materials containing 4$d$ and 5$d$ transition metal atoms. Acknowledgement {#acknowledgement .unnumbered} =============== The initial development of BoSS was supported primarily by the National Science Foundation via the grant NSF MRSEC DMR-1119826. The Flatiron Institute is a division of the Simons Foundation. A. B. G. also acknowledges discussions with A. J. Millis and H. U. R. Strand.
--- abstract: 'The problem of determining extremal hypergraphs containing at most $r$ isomorphic copies of some element of a given hypergraph family was first studied by Boros et al. in 2001. There are not many hypergraph families for which exact results are known concerning the size of the corresponding extremal hypergraphs, except for those equivalent to the classical Turán numbers. In this paper, we determine the size of extremal $k$-uniform hypergraphs containing at most one pair of 2-intersecting edges for $k\in\{3,4\}$. We give a complete solution when $k=3$ and an almost complete solution (with eleven exceptions) when $k=4$.' author: - 'Yeow Meng Chee[^1]' - 'Alan C. H. Ling[^2]' title: 'On Extremal $k$-Graphs Without Repeated Copies of 2-Intersecting Edges[^3]' ---

combinatorial design, hypergraph, packing 05B05, 05B07, 05B40, 05D05 10.1137/060675915

Introduction
============

A [*set system*]{} is a pair $G=(X,{{\mathcal A}})$, where $X$ is a finite set and ${{\mathcal A}}\subseteq 2^X$. The members of $X$ are called [*vertices*]{} or [*points*]{}, and the members of ${{\mathcal A}}$ are called [*edges*]{} or [*blocks*]{}. The [*order*]{} of $G$ is the number of vertices $|X|$, and the [*size*]{} of $G$ is the number of edges $|{{\mathcal A}}|$. The set $K$ is called a [*set of block sizes*]{} for $G$ if $|A|\in K$ for all $A\in{{\mathcal A}}$. $G$ is called a $k$-[*uniform hypergraph*]{} (or $k$-[*graph*]{}) if $\{k\}$ is a set of block sizes for $G$. A $2$-graph is also known simply as a [*graph*]{}. A pair of edges is said to be $t$-[*intersecting*]{} if they intersect in at least $t$ points. The $k$-graph of size two whose two edges intersect in exactly $t$ points is denoted $\Lambda(k,t)$. Let ${{\mathcal F}}$ be a family of $k$-graphs. Boros et al. [@Borosetal:2001] introduced the function $T(n,{{\mathcal F}},r)$, which denotes the maximum number of edges in a $k$-graph of order $n$ containing no $r$ isomorphic copies of a member of ${{\mathcal F}}$. So $T(n,{{\mathcal F}},1)$ is just the classical Turán number ex$(n,{{\mathcal F}})$ [@Bollobas:1978]. A family of $k$-graphs ${{\mathcal F}}$ is said to [*grow polynomially*]{} if there exist $c>0$ and a nonnegative integer $s$ such that, for every $m$, there are at most $cm^s$ members in ${{\mathcal F}}$ having exactly $m$ edges. The following theorem is established in [@Borosetal:2001].

\[boros\] Let ${{\mathcal F}}$ be a family of $k$-graphs which grows polynomially with parameters $c$ and $s$. Then, for $n$ sufficiently large, $$\begin{aligned} T(n,{{\mathcal F}},r) < & ~{\rm ex}(n,{{\mathcal F}}) + (c\cdot (r-1)\cdot s! + 1){\rm ex}(n,{{\mathcal F}})^{(s+1)/(s+2)} \\ & ~+ 2(c\cdot (r-1)\cdot s! + 1)^2 {\rm ex}(n,{{\mathcal F}})^{s/(s+2)}.\end{aligned}$$

For $k\geq 3$, let ${{\mathcal F}}(k)$ be the family of $k$-graphs consisting of two 2-intersecting edges; that is, ${{\mathcal F}}(k)=\{\Lambda(k,t) : 2\leq t\leq k-1\}$. $T(n,{{\mathcal F}}(k),1)$, which is the Turán number ex$(n,{{\mathcal F}}(k))$, is equal to the following well-studied parameters in design theory and coding theory:

- $D(n,k,2)$, the maximum number of blocks in a 2-$(n,k,1)$ packing [@MillsMullin:1992], and

- $A(n,2(k-1),k)$, the maximum number of codewords in a binary code of length $n$, minimum distance $2(k-1)$, and constant weight $k$ [@MacWilliamsSloane:1977].

Despite much effort, the exact value of $T(n,{{\mathcal F}}(k),1)$ is known for all $n$ only when $k=3$ [@Schonheim:1966; @Spencer:1968] and $k=4$ [@Brouwer:1979].
Even for $k=5$, there are an infinite number of $n$ for which $T(n,{{\mathcal F}}(5),1)$ is not yet determined. In this paper, we determine $T(n,{{\mathcal F}}(k),2)$ for all $n$ when $k=3$ and for all but 11 values of $n$ when $k=4$.

Design-theoretic preliminaries
==============================

Our determination of $T(n,{{\mathcal F}}(k),2)$, $k\in\{3,4\}$, makes extensive use of combinatorial designs. In this section, we review some design-theoretic constructs and some prior results that are needed in our solution. For positive integers $i\leq j$, the set $\{i, i+1,{\dotsc},j\}$ is denoted $[i,j]$. The set $[1,j]$ is further abbreviated as $[j]$. A $k$-graph $(X,{{\mathcal A}})$ of order $n$ is a [packing of pairs by $k$-tuples]{}, more commonly known as a 2-$(n,k,1)$ [*packing*]{}, if every 2-subset of $X$ is contained in at most one block of ${{\mathcal A}}$. The [*leave*]{} of $(X,{{\mathcal A}})$ is the graph $L=(X,{{\mathcal E}})$, where ${{\mathcal E}}$ consists of all 2-subsets of $X$ that are not contained in any blocks of ${{\mathcal A}}$. We also say that $(X,{{\mathcal A}})$ is a 2-$(n,k,1)$ packing [*leaving*]{} $L$. Given a graph $G$, the maximum size of a 2-$(n,k,1)$ packing whose leave contains $G$ is denoted $m(n,k,G)$. Note that the maximum size of a 2-$(n,k,1)$ packing, $D(n,k,2)$, is the quantity $m(n,k,G)$ when $G$ is the empty graph.

For all $n\geq 0$, we have $$\begin{aligned} D(n,3,2) & = & \begin{cases} \left\lfloor \frac{n}{3} \left\lfloor \frac{n-1}{2} \right\rfloor \right\rfloor -1&\text{if $n\equiv 5$ {\rm (mod 6)},} \\ \left\lfloor \frac{n}{3} \left\lfloor \frac{n-1}{2} \right\rfloor \right\rfloor&\text{otherwise.} \end{cases}\end{aligned}$$

\[Bpacking\] For all $n\geq 0$, we have $$\begin{aligned} D(n,4,2) & = & \begin{cases} \left\lfloor \frac{n}{4} \left\lfloor \frac{n-1}{3} \right\rfloor \right\rfloor -1&\text{if $n\equiv 7$ or $10$ {\rm (mod 12)} and $n\notin\{10,19\}$,} \\ \left\lfloor \frac{n}{4} \left\lfloor \frac{n-1}{3} \right\rfloor \right\rfloor -1&\text{if $n\in\{9,17\}$,} \\ \left\lfloor \frac{n}{4} \left\lfloor \frac{n-1}{3} \right\rfloor \right\rfloor -2&\text{if $n\in\{8,10,11\}$,} \\ \left\lfloor \frac{n}{4} \left\lfloor \frac{n-1}{3} \right\rfloor \right\rfloor -3&\text{if $n=19$,} \\ \left\lfloor \frac{n}{4} \left\lfloor \frac{n-1}{3} \right\rfloor \right\rfloor&\text{otherwise.} \end{cases}\end{aligned}$$

A [*pairwise balanced design*]{} (PBD) is a set system $(X,{{\mathcal A}})$ such that every 2-subset of $X$ is contained in exactly one block of ${{\mathcal A}}$. If a PBD is of order $n$ and has a set of block sizes $K$, we denote it by PBD$(n,K)$. If a member $k\in K$ is superscripted with a “$\star$” (written “$k^\star$”), it means that the PBD has exactly one block of size $k$. We require the following results on the existence of PBDs.

\[FH\] There exists a ${\rm PBD}(n,\{3,5^\star\})$ if and only if $n\equiv 5$ [(mod 6)]{}.

\[RS\] There exists a ${\rm PBD}(n,\{4,f^\star\})$ if and only if $n\geq 3f+1$, and

1. $n\equiv 1$ or $4$ [(mod 12)]{} and $f\equiv 1$ or $4$ [(mod 12)]{} or

2. $n\equiv 7$ or $10$ [(mod 12)]{} and $f\equiv 7$ or $10$ [(mod 12)]{}.

Let $(X,{{\mathcal A}})$ be a set system, and let ${{\mathcal G}}=\{G_1,{\dotsc},G_s\}$ be a partition of $X$ into subsets, called [*groups*]{}.
The triple $(X,{{\mathcal G}},{{\mathcal A}})$ is a [*group divisible design*]{} (GDD) when every 2-subset of $X$ not contained in a group appears in exactly one block, and $|A\cap G|\leq 1$ for all $A\in{{\mathcal A}}$ and $G\in{{\mathcal G}}$. We denote a GDD $(X,{{\mathcal G}},{{\mathcal A}})$ by $K$-GDD if $K$ is a set of block sizes for $(X,{{\mathcal A}})$. The [*type*]{} of a GDD $(X,{{\mathcal G}},{{\mathcal A}})$ is the multiset $[|G| : G\in{{\mathcal G}}]$. When more convenient, we use the exponentiation notation to describe the type of a GDD: A GDD of type $g_1^{t_1}{\dotsc}g_s^{t_s}$ is a GDD where there are exactly $t_i$ groups of size $g_i$, $i\in[s]$. The following results on the existence of $\{4\}$-GDDs are useful. \[H\] There exists a $\{3\}$-[GDD]{} of type $g^t$ if and only if $t\geq 3$, $g^2{t\choose 2}\equiv 0$ [(mod 3)]{}, and $g(t-1)\equiv 0$ [(mod 2)]{}. \[BSH\] There exists a $\{4\}$-[GDD]{} of type $g^t$ if and only if $t\geq 4$ and 1. $g\equiv 1$ or $5$ [(mod 6)]{} and $t\equiv 1$ or $4$ [(mod 12)]{} or 2. $g\equiv 2$ or $4$ [(mod 6)]{} and $t\equiv 1$ [(mod 3)]{} or 3. $g\equiv 3$ [(mod 6)]{} and $t\equiv 0$ or $1$ [(mod 4)]{} or 4. $g\equiv 0$ [(mod 6)]{}, with the two exceptions of types $2^4$ and $6^4$, for which $\{4\}$-[GDD]{}s do not exist. \[B\] A $\{4\}$-[GDD]{} of type $2^u5^1$ exists if and only if $u=0$, or $u\equiv 0$ [(mod 3)]{} and $u\geq 9$. \[KS\] There exists a $\{4\}$-[GDD]{} of type $3^tu^1$ if and only if $t=0$, or $t\geq (2u+3)/3$ and 1. $t\equiv 0$ or $1$ [(mod 4)]{} and $u\equiv 0$ or $6$ [(mod 12)]{} or 2. $t\equiv 0$ or $3$ [(mod 4)]{} and $u\equiv 3$ or $9$ [(mod 12)]{}. \[GL0\] There exists a $\{4\}$-[GDD]{} of type $2^tu^1$ for $t=0$ and for each $t\geq 6$ with $t\equiv 0$ [(mod 3)]{}, $u\equiv 2$ [(mod 3)]{}, and $2\leq u\leq t-1$, except for $(t,u)=(6,5)$ and except possibly for $(t,u)\in\{(21,17)$, $(33,23)$, $(33,29)$, $(39,35)$, $(57,44)\}$. \[GL\] There exists a $\{4\}$-[GDD]{} of type $12^tu^1$ for $t=0$ and for each $t\geq 4$ and $u\equiv 0$ [(mod 4)]{} such that $0\leq u\leq 6(t-1)$. An [*incomplete transversal design of group size*]{} $n$, [*block size*]{} $k$, and [*hole size*]{} $h$ is a quadruple $(X,{{\mathcal G}},H,{{\mathcal A}})$ such that 1. $(X,{{\mathcal A}})$ is a $k$-graph of order $nk$; 2. ${{\mathcal G}}$ is a partition of $X$ into $k$ subsets (called [*groups*]{}), each of cardinality $n$; 3. $H\subseteq X$, with the property that, for each $G\in{{\mathcal G}}$, $|G\cap H|=h$; and 4. every 2-subset of $X$ is - contained in the [*hole*]{} $H$ and not contained in any blocks or - contained in a group and not contained in any blocks or - contained in neither a hole nor a group and contained in exactly one block of ${{\mathcal A}}$. Such an incomplete transversal design is denoted ${\rm TD}(k,n)-{\rm TD}(k,h)$. \[HZ\] For $n>h>0$, a ${\rm TD}(4,n)-{\rm TD}(4,h)$ exists if and only if $n\geq 3h$ and $(n,h)\not=(6,1)$. Packings with leaves containing specified graphs ================================================ In this section, we relate the problem of determining $T(n,{{\mathcal F}}(k),2)$ to that of determining $m(n,k,G)$ for $G$ isomorphic to $K_4-e$, $K_5-e$, and $2\circ K_4$ (edge-gluing of two $K_4$’s) when $k\in\{3,4\}$. These graphs are shown in Figures \[fig3.1\]–\[fig3.3\], respectively. 
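For concreteness (using our own vertex labels rather than those in the figures), these three graphs can be written explicitly as $$K_4-e:\ \{A\subset[4] : |A|=2\}\setminus\{\{3,4\}\},\qquad K_5-e:\ \{A\subset[5] : |A|=2\}\setminus\{\{4,5\}\},$$ $$2\circ K_4:\ \{A\subset[4] : |A|=2\}\cup\{A\subset[3,6] : |A|=2\},$$ so that $2\circ K_4$ consists of two copies of $K_4$ sharing the edge $\{3,4\}$.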
\[m+2\] There exists a $3$-graph of order $n$ and size $m$ containing exactly one copy of an element of ${{\mathcal F}}(3)$ if and only if there exists a $2$-$(n,3,1)$ packing of size $m-2$ with a leave containing $K_4-e$ as a subgraph.

![$K_4-e$.[]{data-label="fig3.1"}](K4-e.pdf){width="0.8in"}

![$K_5-e$.[]{data-label="fig3.2"}](K5-e.pdf){width="1in"}

![$2\circ K_4$.[]{data-label="fig3.3"}](G2.pdf){width="1in"}

${{\mathcal F}}(3)$ contains only a single 3-graph, $\Lambda(3,2)$. Let $(X,{{\mathcal A}})$ be a $3$-graph of order $n$ and size $m$ containing exactly one copy of $\Lambda(3,2)$. Then there exist exactly two blocks $A,B\in{{\mathcal A}}$, with $|A\cap B|=2$. Let $P=(X,{{\mathcal A}}\setminus\{A,B\})$. Then $P$ is a $2$-$(n,3,1)$ packing of size $m-2$ with a leave containing the 2-subsets of $X$ that occur in $A$ and $B$, which together form a $K_4-e$. This construction is reversible.

The following holds: $$T(n,{{\mathcal F}}(3),2)=\max\{ T(n,{{\mathcal F}}(3),1), m(n,3,K_4-e)+2\}.$$

If a 3-graph contains no two isomorphic copies of $\Lambda(3,2)$, then either it contains no copies, in which case its maximum size is given by $T(n,{{\mathcal F}}(3),1)$, or else it contains exactly one copy, in which case its maximum size is given by $m(n,3,K_4-e)+2$.

The proofs for the following two lemmas are similar to that for Lemma \[m+2\] and are thus omitted.

\[2K4\] There exists a $4$-graph of order $n$ and size $m$ containing exactly one copy of $\Lambda(4,2)$ if and only if there exists a $2$-$(n,4,1)$ packing of size $m-2$ with a leave containing $2\circ K_4$ as a subgraph.

\[K5-e\] There exists a $4$-graph of order $n$ and size $m$ containing exactly one copy of $\Lambda(4,3)$ if and only if there exists a $2$-$(n,4,1)$ packing of size $m-2$ with a leave containing $K_5-e$ as a subgraph.

The following holds: $$T(n,{{\mathcal F}}(4),2)=\max\{T(n,{{\mathcal F}}(4),1), m(n,4,2\circ K_4)+2, m(n,4,K_5-e)+2\}.$$

${{\mathcal F}}(4)$ contains the graphs $\Lambda(4,2)$ and $\Lambda(4,3)$. So if a 4-graph contains no two isomorphic copies of an element of ${{\mathcal F}}(4)$, then either it contains none of them, in which case its maximum size is given by $T(n,{{\mathcal F}}(4),1)$, or else it contains exactly one of $\Lambda(4,2)$ or $\Lambda(4,3)$. In the former case, its maximum size is $m(n,4,2\circ K_4)+2$ by Lemma \[2K4\], and, in the latter case, its maximum size is $m(n,4,K_5-e)+2$ by Lemma \[K5-e\].

Determining $T(n,{{\mathcal F}}(3),2)$
======================================

When $n\equiv 1$ or $3$ [(mod 6)]{}, a 2-$(n,3,1)$ packing of size $T(n,{{\mathcal F}}(3),1)$ has the property that every pair of distinct points is contained in exactly one block. Such a 2-$(n,3,1)$ packing is called a [*Steiner triple system*]{} of order $n$ and is denoted STS$(n)$. Let $P=(X,{{\mathcal A}})$ be a 2-$(n,3,1)$ packing. When $n\equiv 1$ or 3 (mod 6), the leave $L=(X,{{\mathcal E}})$ of $P$ must satisfy:

1. $|{{\mathcal E}}|\equiv 0$ (mod 3), and

2. the degree of every vertex in $L$ is even.

Any $L$ containing $K_4-e$ as a subgraph and satisfying conditions (i) and (ii) above has at least nine edges. Hence, the maximum size of a $2$-$(n,3,1)$ packing with a leave containing $K_4-e$ is at most $\frac{1}{3}({n\choose 2}-9)$. We show below that there indeed exists such a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-9)$.

There exists a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-9)$, with a leave containing $K_4-e$, for every $n\equiv 1$ or $3$ [(mod 6)]{}.
Let $(X,{{\mathcal A}})$ be an STS$(n)$. Suppose there exist three blocks in ${{\mathcal A}}$ of the form $\{1,2,3\}$, $\{1,4,5\}$, and $\{3,4,a\}$. Then deleting these three blocks gives a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-9)$ with a leave containing $K_4-e$. Hence, it suffices to show that we can always find such a 3-block configuration in any STS$(n)$. To see that this is true, pick any two intersecting blocks in an STS$(n)$, say, $\{1,2,3\}$ and $\{1,4,5\}$. As the third block, take the unique block containing the 2-subset $\{3,4\}$.

Next, we consider $n\equiv 5$ (mod 6). In this case, ${n\choose 2}\equiv 1$ (mod 3). So if the leave of a $2$-$(n,3,1)$ packing contains $K_4-e$, then it must contain at least seven edges. Therefore, such a packing can have at most $\frac{1}{3}({n\choose 2}-7)$ blocks. We show below that this upper bound can be met using pairwise balanced designs.

There exists a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-7)$, with a leave containing $K_4-e$, for every $n\equiv 5$ [(mod 6)]{}.

Let $(X,{{\mathcal A}})$ be a PBD$(n,\{3,5^\star\})$ with $[5]$ as the block of size five. The existence of such a PBD is provided by Theorem \[FH\]. Deleting the block of size five from this PBD and adding the block $\{1,2,3\}$ yields the desired 2-$(n,3,1)$ packing.

For $n\equiv 0$, 2, or 4 (mod 6), every vertex in the leave $L$ of a $2$-$(n,3,1)$ packing is of odd degree. If $L$ contains $K_4-e$, then $L$ must have at least four vertices of degree at least three. The minimum possible number of edges in $L$, if $L$ contains $K_4-e$, is therefore $n/2+4$. It follows that the number of blocks in a 2-$(n,3,1)$ packing with a leave containing $K_4-e$ is at most $\left\lfloor\frac{1}{3}({n\choose 2}-\frac{n}{2}-4)\right\rfloor$.

There exists a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-\frac{n}{2}-4)$, with a leave containing $K_4-e$, for every $n\equiv 4$ [(mod 6)]{}.

Let $(X,{{\mathcal A}})$ be a PBD$(n+1,\{3,5^\star\})$, which exists by Theorem \[FH\]. Let $x$ be a point contained in the block of size five. Then $(X\setminus\{x\},{{\mathcal B}})$, where $$\begin{aligned} {{\mathcal B}}& = & \{A\in{{\mathcal A}}:\text{$x\not\in A$ and $|A|=3$}\}\end{aligned}$$ is the desired $2$-$(n,3,1)$ packing.

There exists a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-\frac{n}{2}-6)$, with a leave containing $K_4-e$, for every $n\equiv 0$ or $2$ [(mod 6)]{}.

Consider a $\{3\}$-GDD of type $2^{n/2}$, which exists whenever $n\equiv 0$ or 2 (mod 6) by Theorem \[H\]. Without loss of generality, we may assume $\{1,2\}$ is a group and $\{1,3,4\}$ is a block in this GDD. There is a unique block of the form $\{2,3,a\}$. Deleting the blocks $\{1,3,4\}$ and $\{2,3,a\}$ from this GDD gives a $2$-$(n,3,1)$ packing of size $\frac{1}{3}({n\choose 2}-\frac{n}{2}-6)$, with a leave containing $K_4-e$.

This completes our determination of $m(n,3,K_4-e)$. We summarize our results above as follows.

\[D3(2,2)\] For all $n\geq 0$, we have $m(n,3,K_4-e)=\frac{1}{3}({n\choose 2}-f(n))$, where $$\begin{aligned} f(n) & = & \begin{cases} n/2+6&\text{if $n\equiv 0$ or {\rm 2 (mod 6)},} \\ 9&\text{if $n\equiv 1$ or {\rm 3 (mod 6)},} \\ n/2+4&\text{if $n\equiv 4$ {\rm (mod 6)},} \\ 7&\text{if $n\equiv 5$ {\rm (mod 6)}.} \end{cases}\end{aligned}$$

Determining $T(n,{{\mathcal F}}(4),2)$
======================================

We now determine $T(n,{{\mathcal F}}(4),2)$.
The case $n\equiv 1$ or $4\pmod{12}$
------------------------------------

The leave $L=(X,{{\mathcal E}})$ of a 2-$(n,4,1)$ packing must satisfy:

1. $|{{\mathcal E}}|\equiv 0$ [(mod 6)]{}, and

2. every vertex in $L$ has degree $\equiv 0$ [(mod 3)]{}.

Any leave containing $K_5-e$ or $2\circ K_4$ as a subgraph and satisfying conditions (i) and (ii) above has at least 18 edges. So $m(n,4,G)\leq \frac{1}{6}({n\choose 2}-18)$ for $G\in\{K_5-e,2\circ K_4\}$. We show below that this bound can be met with a finite number of possible exceptions. The [*cocktail party graph*]{} CP$(n)$ is the unique $(2n-2)$-regular graph on $2n$ vertices. We begin with an observation on CP$(4)$ (shown in Figure \[figcp4\]).

![[CP(4)]{}.[]{data-label="figcp4"}](CP4.pdf){width="1.2in"}

\[CP4K5-e\] [CP]{}$(4)$ contains an edge-disjoint union of a $K_5-e$ and a $K_4$.

Without loss of generality, we may take the vertex set and edge set of the CP$(4)$ as $[8]$ and $\{A\subset [8]: |A|=2\}\setminus\{\{i,i+4\}: i\in[4]\}$, respectively. Consider the subsets of edges ${{\mathcal E}}_1=\{A\subset\{1,2,3,5,8\} : |A|=2\}\setminus\{\{1,5\}\}$ and ${{\mathcal E}}_2=\{A\subset\{1,4,6,7\} : |A|=2\}$. ${{\mathcal E}}_1$ is the edge set of a $K_5-e$, ${{\mathcal E}}_2$ is the edge set of a $K_4$, and they are disjoint.

\[CP4G2\] [CP$(4)$]{} contains an edge-disjoint union of a $2\circ K_4$ and a $K_4$.

Without loss of generality, we may take the vertex set and edge set of the CP$(4)$ as $[8]$ and $\{A\subset [8]: |A|=2\}\setminus\{\{i,i+4\}: i\in[4]\}$, respectively. Consider the subsets of edges ${{\mathcal E}}_1=\{A\subset[4] : |A|=2\}\cup(\{A\subset [3,6] : |A|=2\}\setminus\{\{3,4\}\})$ and ${{\mathcal E}}_2=\{A\subset\{1,6,7,8\} : |A|=2\}$. ${{\mathcal E}}_1$ is the edge set of a $2\circ K_4$, ${{\mathcal E}}_2$ is the edge set of a $K_4$, and they are disjoint.

Let $G\in\{K_5-e,2\circ K_4\}$ and $n \equiv 1$ or $4$ [(mod 12)]{}. If there exists a $2$-$(n,4,1)$ packing leaving ${\rm CP}(4)$, then there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-18)$ with a leave containing $G$.

A 2-$(n,4,1)$ packing whose leave is ${\rm CP}(4)$ has size $\frac{1}{6}({n\choose 2}-24)$. We have seen from Lemmas \[CP4K5-e\] and \[CP4G2\] that we can add one more block of size four to this packing to give a 2-$(n,4,1)$ packing with a leave containing $G$.

In view of the above lemma, we now focus on constructing 2-$(n,4,1)$ packings leaving CP$(4)$.

\[TD\] Let $n\geq 6$. If there exists a [PBD]{}$(n+f,\{4,f^\star\})$, then there exists a $2$-$(4n+f,4,1)$ packing leaving ${\rm CP}(4)$.

Take a ${\rm TD}(4,n)-{\rm TD}(4,2)$ $(X,{{\mathcal G}},H,{{\mathcal A}})$, which exists by Theorem \[HZ\], and for each $G\in{{\mathcal G}}$, let $(G\cup F,{{\mathcal A}}_G)$ be a PBD$(n+f,\{4,f^\star\})$, where $F$ is the block of size $f$ in the PBD. Consider the set system $(Y,{{\mathcal B}})$, where $Y=X\cup F$, and ${{\mathcal B}}={{\mathcal A}}\cup(\cup_{G\in{{\mathcal G}}} {{\mathcal A}}_G)$ (note that the block $F$ is included only once). $(Y,{{\mathcal B}})$ is a 4-graph of order $4n+f$ having the property that every 2-subset of $X\cup F$ is contained in exactly one block of ${{\mathcal B}}$, except for those 2-subsets $\{a,b\}$, with $a\in G\cap H$ and $b\in G'\cap H$ for distinct $G,G'\in {{\mathcal G}}$, which are not contained in any blocks of ${{\mathcal B}}$. $(Y,{{\mathcal B}})$ therefore gives the required 2-$(4n+f,4,1)$ packing leaving ${\rm CP}(4)$.
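As a quick check of this construction (our arithmetic, not part of the original argument), the uncovered pairs are precisely those joining hole points of distinct groups, so the leave is indeed the cocktail party graph on the eight hole points, and the packing has the size quoted above: $$\bigl|E({\rm CP}(4))\bigr| = {8\choose 2} - 4 = 24, \qquad |{{\mathcal B}}| = \tfrac{1}{6}\Bigl({4n+f\choose 2}-24\Bigr).$$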
Let $n \equiv 1$ or $4$ [(mod 12)]{} such that $n \geq 40$ and $n\not\in\{73,76,85\}$. Then there exists a $2$-$(n,4,1)$ packing leaving ${\rm CP}(4)$. Taking a PBD$(n+f,\{4,f^\star\})$, with $(n,f)\in\{$(9,4), (12,1), (13,0), (15,1), (16,0), (21,4), (24,1), (25,0), (27,1), (28,0)$\}$, whose existence is provided by Theorem \[RS\], and applying Lemma \[TD\] give 2-$(n,4,1)$ packings leaving CP$(4)$ for $n\in\{40$, $49$, $52$, $61$, $64$, $88$, $97$, $100$, $109$, $112\}$. By Theorem \[RS\], there exists a PBD$(n,\{4,40^\star\})$ for all $n\equiv 1$ or $4$ [(mod 12)]{} and $n\geq 121$. Break up the block of size 40 in this PBD with the blocks of a 2-$(40,4,1)$ packing leaving CP$(4)$ to obtain a 2-$(n,4,1)$ packing leaving CP$(4)$. Let $n \equiv 1$ or $4$ [(mod 12)]{} such that $n \geq 40$ and $n\not\in\{73,76,85\}$. Then $m(n,4,G)=\frac{1}{6}({n\choose 2}-18)$ for $G\in\{K_5-e,2\circ K_4\}$. The case $n\equiv 7$ or $10\pmod{12}$ ------------------------------------- The leave $L=(X,{{\mathcal E}})$ must satisfy: 1. $|{{\mathcal E}}|\equiv 3$ [(mod 6)]{}, and 2. every vertex in $L$ has degree $\equiv 0$ [(mod 3)]{}. We first consider the case when $L$ contains $K_5-e$. Any such $L$ satisfying the conditions (i) and (ii) above must have at least 15 edges. So $m(n,4,K_5-e)\leq \frac{1}{6} ({n\choose 2}-15)$. ![$K_{3,4}+3e$.[]{data-label="figg3"}](G3.pdf){width="1.2in"} \[fig5.2\] When $L$ contains $2\circ K_4$, $L$ must also have at least 15 edges. Suppose $L$ contains $2\circ K_4$ and has 15 edges. Then $L$ must have at least two vertices, each of degree at least six. Let $a$ be the number of degree three vertices, and let $b$ be the number of vertices with degree greater than three in $L$. Then we have $3a+6b\leq 30$ (counting the edges), $b\geq 2$ (considering the two vertices of degree five in $2\circ K_4$), and $a+b\geq 7$ (considering the presence of vertices with degree at least six). These inequalities imply that $2\leq b\leq 3$ and $a+b\leq 8$. So the possible degree sequences for $L$ are ${{\mathcal D}}_1=(6,6,6,3,3,3,3)$ and ${{\mathcal D}}_2=(6,6,3,3,3,3,3,3)$. Note that we suppress including vertices of degree zero in the degree sequence of $L$. There is a unique graph with degree sequence ${{\mathcal D}}_1$, namely, the graph in Figure \[fig5.2\], obtained by adding to $K_{3,4}$ three edges connecting the vertices in the part of the bipartition with three vertices. This graph does not contain $2\circ K_4$. Hence, $L$ cannot have degree sequence ${{\mathcal D}}_1$. If $L$ contains $2\circ K_4$ and has degree sequence ${{\mathcal D}}_2$, then since $2\circ K_4$ has degree sequence $(5,5,3,3,3,3)$, the two vertices of nonzero degree not in $2\circ K_4$ cannot both be adjacent to the two vertices of degree five in $2\circ K_4$. But this prevents these two vertices having degree three, a contradiction. Hence $L$ cannot have degree sequence ${{\mathcal D}}_2$. It follows that the leave of any 2-$(n,4,1)$ packing containing $2\circ K_4$ must have at least 21 edges, and we have $m(n,4,2\circ K_4)\leq\frac{1}{6}({n\choose 2}-21)$. The following shows that these bounds can be met. \[K7\] $K_7$ contains an edge-disjoint union of a $K_5-e$ and a $K_4$. Take the vertex set of the $K_7$ as $[7]$. Consider the subsets of edges ${{\mathcal E}}_1=\{A\subset [5] : |A|=2\}\setminus\{\{4,5\}\}$ and ${{\mathcal E}}_2=\{A\subset [4,7]:|A|=2\}$. Then ${{\mathcal E}}_1$ is the edge set of a $K_5-e$, ${{\mathcal E}}_2$ is the edge set of a $K_4$, and they are disjoint. 
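As a small sanity check for the construction used in the next proof (our arithmetic), deleting a single block of size four from a $K_7$ leaves exactly the minimum number of edges permitted by the bound above: $${7\choose 2}-{4\choose 2} = 21-6 = 15 .$$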
Let $n\equiv 7$ or $10$ [(mod 12)]{} such that $n\geq 7$ and $n\not\in\{10,19\}$. Then $m(n,4,K_5-e)=\frac{1}{6}({n\choose 2}-15)$. Let $(X,{{\mathcal A}})$ be a PBD$(n,\{4,7^\star\})$ with $F$ as the block of size seven, whose existence is provided by Theorem \[RS\], and let $B$ be any 4-subset of $F$. Then $(X,({{\mathcal A}}\cup\{B\})\setminus\{F\})$ is a 2-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-15)$ leaving $K_7-K_4$, which contains $K_5-e$ by Lemma \[K7\]. Let $n\equiv 7$ or $10$ [(mod 12)]{} such that $n\geq 7$ and $n\not\in\{10,19\}$. Then $m(n,4,2\circ K_4)=\frac{1}{6}({n\choose 2}-21)$. Observe that any 2-$(n,4,1)$ packing leaving $K_7$ has size $\frac{1}{6}({n\choose 2}-21)$. The theorem now follows for $n=7$ trivially and for $n\geq 22$ from the existence of a PBD$(n,\{4,7^\star\})$ provided by Theorem \[RS\]. The case $n\equiv 2$, $5$, $8$, or $11\pmod{12}$ ------------------------------------------------ The leave $L=(X,{{\mathcal E}})$ must have vertices all of degree $1$ [(mod 3)]{}. Furthermore, $|{{\mathcal E}}|\equiv 1$ [(mod 6)]{} when $n\equiv 2$ or $11$ [(mod 12)]{}, and $|{{\mathcal E}}|\equiv 4$ [(mod 6)]{} when $n\equiv 5$ or $8$ [(mod 12)]{}. If $L$ contains $K_5-e$, then $L$ must have at least five vertices, each of degree at least four and the remaining vertices each of degree at least one. Hence, $L$ must have at least $\frac{1}{2}(n+15)$ edges when $n\equiv 5$ or $11$ [(mod 12)]{} and at least $\frac{1}{2}(n+24)$ edges when $n\equiv 2$ or $8$ [(mod 12)]{}. Consequently, $$\begin{aligned} m(n,4,K_5-e) & \leq & \begin{cases} \frac{1}{6}({n\choose 2}-\frac{n+15}{2})&\text{if $n\equiv 5$ or $11$ {\rm (mod 12)},}\\[3pt] \frac{1}{6}({n\choose 2}-\frac{n+24}{2})&\text{if $n\equiv 2$ or $8$ {\rm (mod 12)}.} \end{cases}\end{aligned}$$ If $L$ contains $2\circ K_4$, then $L$ must have at least two vertices, each of degree at least seven, at least four vertices each of degree at least four, and the rest of the vertices each of degree one. Hence, $L$ must have at least $\frac{1}{2}(n+24)$ edges when $n\equiv 2$ or $8$ [(mod 12)]{} and at least $\frac{1}{2}(n+27)$ edges when $n\equiv 5$ or $11$ [(mod 12)]{}. Consequently, $$\begin{aligned} m(n,4,2\circ K_4) & \leq & \begin{cases} \frac{1}{6}({n\choose 2}-\frac{n+24}{2})&\text{if $n\equiv 2$ or $8$ {\rm (mod 12)},}\\[3pt] \frac{1}{6}({n\choose 2}-\frac{n+27}{2})&\text{if $n\equiv 5$ or $11$ {\rm (mod 12)}.} \end{cases}\end{aligned}$$ These bounds can be met with the following constructions. ### The value of $m(n,4,K_5-e)$ Let $n\equiv 5$ or $11$ [(mod 12)]{} such that $n=5$ or $n\geq 23$. Then we have $m(n,4,K_5-e)=\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+15))$. Let $(X,{{\mathcal G}},{{\mathcal A}})$ be a $\{4\}$-GDD of type $2^{(n-5)/2}5^1$, which exists by Theorem \[B\]. Then $(X,{{\mathcal A}})$ is a 2-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+15))$ with a leave containing $K_5$, and hence $K_5-e$. \[small14\] There exists a $2$-$(14,4,1)$ packing of size $12$ having a leave containing $K_5-e$. Let $(X,{{\mathcal A}})$ be a maximum $2$-$(13,4,1)$ packing, which has size 13 by Theorem \[Bpacking\]. Let $\infty\not\in X$ and $A\in{{\mathcal A}}$. Then $(X\cup\{\infty\},{{\mathcal A}}\setminus\{A\})$ is a 2-$(14,4,1)$ packing of size 12 with a leave containing $K_5$ (whose edges are the 2-subsets of $A\cup\{\infty\}$). Let $n\equiv 2$ or $8$ [(mod 12)]{} such that $n=14$ or $n\geq 44$. Then we have $m(n,4,K_5-e)=\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$. 
Let $(X,{{\mathcal G}},{{\mathcal A}})$ be a $\{4\}$-GDD of type $2^{(n-14)/2}14^1$, which exists by Theorem \[GL0\]. Let $G\in{{\mathcal G}}$ be the group of cardinality 14, and let $(G,{{\mathcal B}})$ be a 2-$(14,4,1)$ packing of size 12 having a leave containing $K_5-e$, whose existence is provided by Theorem \[small14\]. Then $(X,{{\mathcal A}}\cup{{\mathcal B}})$ is a 2-$(n,4,1)$ packing having a leave containing $K_5-e$. The size of this packing is $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n-14)-{14\choose 2})+12= \frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$.

### The value of $m(n,4,2\circ K_4)$

\[4GDD28\] If there exists a $\{4\}$-[GDD]{} of type $[g_1,{\dotsc},g_s]$ with $s\geq 3$ and a $\{4\}$-[GDD]{} of type $2^{g_i/2 +1}$ for each $i\in [s]$, then there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$ with a leave containing $2\circ K_4$, where $n=2+\sum_{i=1}^s g_i$.

Suppose that $(X,{{\mathcal G}},{{\mathcal A}})$ is a $\{4\}$-GDD of type $[g_1,{\dotsc},g_s]$, where ${{\mathcal G}}=\{G_1,{\dotsc},G_s\}$ and $|G_i|=g_i$ for $i\in [s]$. Let $Y=\{\infty_1,\infty_2\}$, where $\infty_1,\infty_2\not\in X$, and let $(G_i\cup Y,{{\mathcal H}}_{G_i},{{\mathcal A}}_{G_i})$ be a $\{4\}$-GDD of type $2^{g_i/2+1}$ such that $$\begin{aligned} \begin{cases} Y\in{{\mathcal H}}_{G_i}&\text{if $i\in[s-2]$,} \\ \text{$Y$ is contained in a block $A_{G_i}\in{{\mathcal A}}_{G_i}$}&\text{if $i\in\{s-1,s\}$.} \end{cases}\end{aligned}$$ Construct a 4-graph $(X\cup Y,{{\mathcal B}})$ of order $2+\sum_{i=1}^s g_i$, where $$\begin{aligned} {{\mathcal B}}& = & {{\mathcal A}}\cup \left(\bigcup_{i=1}^{s} {{\mathcal A}}_{G_i}\right) \setminus \{ A_{G_{s-1}}, A_{G_{s}} \}.\end{aligned}$$ It is easy to see that $(X\cup Y,{{\mathcal B}})$ is a 2-$(2+\sum_{i=1}^s g_i,4,1)$ packing. Also, the 2-subsets of $A_{G_{s-1}}$ and $A_{G_{s}}$ are not contained in any blocks of ${{\mathcal B}}$. So the leave of $(X\cup Y,{{\mathcal B}})$ contains $2\circ K_4$ as a subgraph. It remains to compute the size of $(X\cup Y,{{\mathcal B}})$. The 2-subsets of $X\cup Y$ that are not contained in any blocks of ${{\mathcal B}}$ are precisely the elements of ${{\mathcal H}}_{G_i}$ for $i\in [s]$ and the 2-subsets of $A_{G_{s-1}}$ and $A_{G_s}$. Since $Y$ appears precisely $s$ times among these 2-subsets, the total number of distinct 2-subsets of $X\cup Y$ that are not contained in any blocks of ${{\mathcal B}}$ is $\sum_{i=1}^s (g_i/2+1)+12-(s-1)=n/2+12$, where $n=2+\sum_{i=1}^s g_i$. Hence $|{{\mathcal B}}|=\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$, as required.

\[4GDD511\] If there exists a $\{4\}$-[GDD]{} of type $[g_1,{\dotsc},g_s]$ with $s\geq 3$, a $\{4\}$-[GDD]{} of type $2^{g_i/2+1}$ for each $i\in [s-1]$, and a $\{4\}$-[GDD]{} of type $2^{(g_s-3)/2}5^1$, then there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+27))$ with a leave containing $2\circ K_4$, where $n=2+\sum_{i=1}^s g_i$.

Suppose that $(X,{{\mathcal G}},{{\mathcal A}})$ is a $\{4\}$-GDD of type $[g_1,{\dotsc},g_s]$, where ${{\mathcal G}}=\{G_1, {\dotsc}, G_s\}$ and $|G_i|=g_i$ for $i\in[s]$.
Let $Y=\{\infty_1,\infty_2\}$, where $\infty_1,\infty_2\not\in X$, and let $(G_i\cup Y,{{\mathcal H}}_{G_i},{{\mathcal A}}_{G_i})$ be a $\{4\}$-GDD of type $2^{g_i/2+1}$ such that $$\begin{aligned} \begin{cases} Y\in{{\mathcal H}}_{G_i}&\text{if $i\in[s-3]$,} \\ \text{$Y$ is contained in a block $A_{G_i}\in{{\mathcal A}}_{G_i}$}&\text{if $i\in\{s-2,s-1\}$.} \end{cases}\end{aligned}$$ Further, let $(G_s\cup Y, {{\mathcal H}}_{G_s},{{\mathcal A}}_{G_s})$ be a $\{4\}$-GDD of type $2^{(g_s-3)/2}5^1$ such that $Y$ is contained in the group $H\in{{\mathcal H}}_{G_s}$ of cardinality five. Now form the 4-graph $(X\cup Y,{{\mathcal B}})$ of order $2+\sum_{i=1}^s g_i$, where $$\begin{aligned} {{\mathcal B}}& = & {{\mathcal A}}\cup \left(\bigcup_{i=1}^{s} {{\mathcal A}}_{G_i}\right) \cup \{H\setminus\{\infty_1\}\} \setminus \{A_{G_{s-2}},A_{G_{s-1}}\}.\end{aligned}$$ It is easy to see that $(X\cup Y,{{\mathcal B}})$ is a 2-$(2+\sum_{i=1}^s g_i,4,1)$ packing. Also, the 2-subsets of $A_{G_{s-2}}$ and $A_{G_{s-1}}$ are not contained in any blocks of ${{\mathcal B}}$. So the leave of $(X\cup Y,{{\mathcal B}})$ contains $2\circ K_4$ as a subgraph. It remains to compute the size of $(X\cup Y,{{\mathcal B}})$. The 2-subsets of $X\cup Y$ that are not contained in any blocks of ${{\mathcal B}}$ are precisely the 2-subsets of $A_{G_{s-2}}$ and $A_{G_{s-1}}$ and the 2-subsets of elements of ${{\mathcal H}}_{G_i}$ for $i\in [s]$, except for the 2-subsets of $H\setminus\{\infty_1\}$. Since $Y$ appears precisely $s$ times among these 2-subsets, the total number of distinct 2-subsets of $X\cup Y$ that are not contained in any blocks of ${{\mathcal B}}$ is $\sum_{i=1}^{s-1} (g_i/2+1)+(g_s-3)/2+(10-6)+12-(s-1) =\frac{1}{2}(n+27)$, where $n=2+\sum_{i=1}^s g_i$. Hence $|{{\mathcal B}}|=\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+27))$, as required.

For all $n\equiv 2$ [(mod 12)]{}, $n\geq 50$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$ with a leave containing $2\circ K_4$.

Apply Lemma \[4GDD28\] with $\{4\}$-GDDs of type $12^{(n-2)/12}$ and type $2^7$, which exist by Theorem \[BSH\].

For $n=29$ and for all $n\equiv 5$ [(mod 12)]{}, $n\geq 101$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+27))$ with a leave containing $2\circ K_4$.

Apply Lemma \[4GDD511\] with $\{4\}$-GDDs of type $12^{(n-29)/12}27^1$, which exist by Theorem \[GL\], $\{4\}$-GDDs of type $2^7$, which exist by Theorem \[BSH\], and $\{4\}$-GDDs of type $2^{12}5^1$, which exist by Theorem \[B\].

For $n=20$ and for all $n\equiv 8$ [(mod 12)]{}, $n\geq 68$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+24))$ with a leave containing $2\circ K_4$.

Apply Lemma \[4GDD28\] with $\{4\}$-GDDs of type $12^{(n-20)/12}18^1$, which exist by Theorem \[GL\], and $\{4\}$-GDDs of types $2^7$ and $2^{10}$, which exist by Theorem \[BSH\].

For $n=23$ and for all $n\equiv 11$ [(mod 12)]{}, $n\geq 83$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-\frac{1}{2}(n+27))$ with a leave containing $2\circ K_4$.

Apply Lemma \[4GDD511\] with $\{4\}$-GDDs of type $12^{(n-23)/12}21^1$, which exist by Theorem \[GL\], $\{4\}$-GDDs of type $2^7$, which exist by Theorem \[BSH\], and $\{4\}$-GDDs of type $2^95^1$, which exist by Theorem \[B\].

The case $n\equiv 0$, $3$, $6$, or $9\pmod{12}$
-----------------------------------------------

The leave $L=(X,{{\mathcal E}})$ must have vertices all of degree $2$ [(mod 3)]{}.
Furthermore, $|{{\mathcal E}}|\equiv 0$ [(mod 6)]{} when $n\equiv 0$ or $9$ [(mod 12)]{}, and $|{{\mathcal E}}|\equiv 3$ [(mod 6)]{} when $n\equiv 3$ or $6$ [(mod 12)]{}. If $L$ contains $K_5-e$ or $2\circ K_4$, then $L$ must have at least six vertices each of degree at least five and the remaining vertices each of degree at least two. Hence, $L$ must have at least $n+9$ edges when $n\equiv 6$ or $9$ [(mod 12)]{} and at least $n+12$ edges when $n\equiv 0$ or $3$ [(mod 12)]{}. Consequently, for $G\in\{K_5-e,2\circ K_4\}$, we have $$\begin{aligned} m(n,4,G) & \leq & \begin{cases} \frac{1}{6}({n\choose 2}-(n+9))&\text{if $n\equiv 6$ or $9$ {\rm (mod 12)},}\\[5pt] \frac{1}{6}({n\choose 2}-(n+12))&\text{if $n\equiv 0$ or $3$ {\rm (mod 12)}.} \end{cases}\end{aligned}$$ These bounds can again be met with the following constructions.

For $n=6$ and for all $n\equiv 6$ or $9$ [(mod 12)]{}, $n\geq 21$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-(n+9))$ with a leave containing $G$, where $G\in\{K_5-e,2\circ K_4\}$.

Let $(X,{{\mathcal G}},{{\mathcal A}})$ be a $\{4\}$-GDD of type $3^{(n-6)/3}6^1$, which exists by Theorem \[KS\]. Then $(X,{{\mathcal A}})$ is a 2-$(n,4,1)$ packing with a leave containing $K_6$, and hence $K_5-e$ and $2\circ K_4$. The size of $(X,{{\mathcal A}})$ is easily verified: $|{{\mathcal A}}|=\frac{1}{6}({n\choose 2}-\frac{n-6}{3}{3\choose 2}-{6\choose 2})= \frac{1}{6}({n\choose 2}-(n+9))$.

\[15\] There exists a $2$-$(15,4,1)$ packing of size $13$ with a leave containing $G$, where $G\in\{K_5-e,2\circ K_4\}$.

*Proof.* The 13 blocks of a $2$-$(15,4,1)$ packing with a leave containing $K_5-e$ are $$\begin{array}{ccccc} $\{2,6,13,14\}$, & $\{3,6,9,10\}$, & $\{4,7,9,13\}$, & $\{4,5,6,12\}$, & $\{1,6,11,15\}$, \\ [3pt] $\{3,7,11,14\}$, & $\{2,7,8,15\}$, & $\{1,8,9,14\}$, & $\{3,12,13,15\}$, & $\{2,9,11,12\}$, \\[3pt] $\{1,7,10,12\}$, & $\{5,10,14,15\}$, & $\{5,8,11,13\}$. \end{array}$$ The 13 blocks of a $2$-$(15,4,1)$ packing with a leave containing $2\circ K_4$ are $$\begin{array}{ccccc} $\{1,8,12,13\}$, & $\{6,8,11,14\}$, & $\{4,6,9,15\}$, & $\{3,7,8,9\}$, & $\{2,8,10,15\}$, \\ [3pt] $\{2,9,13,14\}$, & $\{4,5,7,14\}$, & $\{1,6,7,10\}$, & $\{1,5,11,15\}$, & $\{2,7,11,12\}$, \\ [3pt] $\{4,10,11,13\}$, & $\{3,12,14,15\}$, & $\{5,9,10,12\}$.\qquad\endproof \end{array}$$

For all $n\equiv 0$ or $3$ [(mod 12)]{}, $n\geq 48$, there exists a $2$-$(n,4,1)$ packing of size $\frac{1}{6}({n\choose 2}-(n+12))$ with a leave containing $G$, where $G\in\{K_5-e,2\circ K_4\}$.

Let $(X,{{\mathcal G}},{{\mathcal A}})$ be a $\{4\}$-GDD of type $3^{(n-15)/3}15^1$, which exists by Theorem \[KS\]. Let $Y$ be the group of cardinality 15 in ${{\mathcal G}}$ and $(Y,{{\mathcal B}})$ be a 2-$(15,4,1)$ packing of size 13 with a leave containing $G$, which exists by Lemma \[15\]. Then $(X,{{\mathcal A}}\cup{{\mathcal B}})$ is a 2-$(n,4,1)$ packing with a leave containing $G$. The size of $(X,{{\mathcal A}}\cup{{\mathcal B}})$ is easily verified: $|{{\mathcal A}}\cup{{\mathcal B}}|=\frac{1}{6}({n\choose 2}-\frac{n-15}{3}{3\choose 2}-{15\choose 2})+13= \frac{1}{6}({n\choose 2}-(n+12))$.
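As an illustrative check on Lemma \[15\] (added here for the reader's convenience; it is not part of the original computations, and the script and its names are ours), the following short Python script verifies that the first list of 13 blocks displayed in that lemma is a $2$-$(15,4,1)$ packing and that its leave contains a copy of $K_5-e$ on $[5]$ with missing edge $\{4,5\}$.

```python
from itertools import combinations

# The 13 blocks of the first 2-(15,4,1) packing in Lemma [15]
# (the one whose leave contains K_5 - e), copied verbatim from the text.
blocks = [
    (2, 6, 13, 14), (3, 6, 9, 10), (4, 7, 9, 13), (4, 5, 6, 12), (1, 6, 11, 15),
    (3, 7, 11, 14), (2, 7, 8, 15), (1, 8, 9, 14), (3, 12, 13, 15), (2, 9, 11, 12),
    (1, 7, 10, 12), (5, 10, 14, 15), (5, 8, 11, 13),
]

# In a 2-(15,4,1) packing every pair of points lies in at most one block.
covered = [frozenset(p) for b in blocks for p in combinations(b, 2)]
assert len(covered) == len(set(covered)), "some pair is covered twice"

# The leave consists of the uncovered pairs; it should contain the nine
# edges of K_5 - e on {1,...,5} with the edge {4,5} removed.
all_pairs = {frozenset(p) for p in combinations(range(1, 16), 2)}
leave = all_pairs - set(covered)
k5_minus_e = {frozenset(p) for p in combinations(range(1, 6), 2)} - {frozenset({4, 5})}

print(len(blocks), len(leave), k5_minus_e <= leave)   # expected: 13 27 True
```

The leave has ${15\choose 2}-13\cdot 6=27$ edges, in agreement with the lower bound $n+12=27$ obtained in the display above.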
Remaining small orders
----------------------

The values of $n$ for which $m(n, 4, K_5-e)$ and $m(n,4,2\circ K_4)$ remain undetermined are as follows:

  $m(n,4,K_5-e)$:       $n=$ 8, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 24, 25, 26, 27, 28, 32, 36, 37, 38, 39, 73, 76, 85;

  $m(n,4,2\circ K_4)$:  $n=$ 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 24, 25, 26, 27, 28, 32, 35, 36, 37, 38, 39, 41, 44, 47, 53, 56, 59, 65, 71, 73, 76, 77, 85, 89.

For $n=19$, we have the following tighter upper bound.

For $G\in\{K_5-e,2\circ K_4\}$, we have $m(19,4,G) \leq 24$.

Suppose that there is a $2$-$(19,4,1)$ packing of size 25 with a leave containing $G$. Since $G$ contains a $K_4$, we may add the corresponding block to this packing, giving a $2$-$(19,4,1)$ packing of size 26. This is a contradiction, since $D(19,4,2)=25$.

For values of $n<16$, it is possible to determine $m(n,4,G)$, $G\in\{K_5-e,2\circ K_4\}$, via exhaustive search. Let $H$ be a specific subgraph of $K_n$ isomorphic to $G$. We form a graph $\Gamma_n$ whose vertex set is the set of all $K_4$’s of $K_n-H$, and two vertices in $\Gamma_n$ are adjacent if and only if the corresponding $K_4$’s are edge-disjoint. Then $m(n,4,G)$ is equal to the size of a maximum clique in $\Gamma_n$. We used [Cliquer]{}, an implementation of Österg[å]{}rd’s exact algorithm for maximum cliques [@Ostergard:2002], to determine the size of maximum cliques in $\Gamma_n$, for $n\leq 15$. When $n\geq 16$, it is infeasible to use [Cliquer]{}, so we resort to a stochastic local search heuristic to construct packings of the required size directly. The results of our computation are summarized in Table \[comput\], while the blocks of the actual packings are listed in Appendices A and B.

  ---------------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  $n$                     8    9   10   11   12   13   16   17   18   19   20   24   25
  $m(n,4,K_5-e)$          1    2    3    4    6    9    ?    ?   21   24   28   40    ?
  $n$                    26   27   28   32   36   37   38   39   73   76   85
  $m(n,4,K_5-e)$         50   52    ?    ?   97    ?    ?    ?    ?    ?    ?
  $n$                     8    9   10   11   12   13   14   16   17   18   19   24   25
  $m(n,4,2\circ K_4)$     1    2    3    4    6    9   11    ?    ?   21   24   40    ?
  $n$                    26   27   28   32   35   36   37   38   39   41   44   47   53
  $m(n,4,2\circ K_4)$     ?   52    ?    ?    ?    ?    ?    ?    ?    ?    ?    ?    ?
  $n$                    56   59   65   71   73   76   77   85   89
  $m(n,4,2\circ K_4)$     ?    ?    ?    ?    ?    ?    ?    ?    ?
  ---------------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----

  : Values of $m(n,4,K_5-e)$ and $m(n,4,2\circ K_4)$ for some small values of $n$. A question mark indicates a value that remains unknown.[]{data-label="comput"}

Piecing things together
-----------------------

The results in previous subsections can be summarized as follows.

\[exK5-e\] For all $n\geq 5$, we have $m(n,4,K_5-e)=\frac{1}{6}({n\choose 2}-f(n))$, where $$\begin{aligned} f(n) = \begin{cases} 18&\text{if $n\equiv 1$ or $4$ {\rm (mod 12)}, $n\not=13$,} \\ 15&\text{if $n\equiv 7$ or $10$ {\rm (mod 12)}, $n\not\in\{10,19\}$,} \\ (n+24)/2&\text{if $n\equiv 2$ or $8$ {\rm (mod 12)}, $n\not=8$,} \\ (n+15)/2&\text{if $n\equiv 5$ or $11$ {\rm (mod 12)}, $n\not=11$,} \\ n+9&\text{if $n\equiv 6$ or $9$ {\rm (mod 12)}, $n\not=9$,} \\ n+12&\text{if $n\equiv 0$ or $3$ {\rm (mod 12)}, $n\not=12$,} \\ 22&\text{if $n=8$,} \\ 24&\text{if $n=9$,} \\ 27&\text{if $n=10$,} \\ 31&\text{if $n=11$,} \\ 30&\text{if $n=12$,} \\ 24&\text{if $n=13$,} \\ 27&\text{if $n=19$,} \end{cases}\end{aligned}$$ except possibly for $n\in\{16, 17, 25, 28, 32, 37, 38, 39, 73, 76, 85\}$.
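Before stating the companion result for $2\circ K_4$, we note that the exhaustive search described in the preceding subsection is easy to prototype. The sketch below is our own illustration (a naive branch-and-bound rather than the [Cliquer]{} implementation actually used, and all function and variable names are ours); it builds the compatibility graph $\Gamma_n$ for a fixed copy $H$ of $G$ and computes the size of a maximum clique, reproducing the entry $m(10,4,K_5-e)=3$ of Table \[comput\].

```python
from itertools import combinations

def max_clique_of_gamma_n(n, forbidden_edges):
    """Size of a maximum clique of Gamma_n for the fixed copy H of G.

    Vertices of Gamma_n are the K_4's of K_n - H, i.e. the 4-subsets of
    {1,...,n} inducing no edge of H; two such K_4's are adjacent exactly
    when they are edge-disjoint (share at most one point).  By the
    reduction described above, this clique number equals m(n,4,G).
    A naive branch-and-bound is used, so this is only practical for
    very small n.
    """
    forbidden = {frozenset(e) for e in forbidden_edges}
    blocks = [b for b in combinations(range(1, n + 1), 4)
              if not any(frozenset(p) in forbidden for p in combinations(b, 2))]
    compatible = [[len(set(a) & set(b)) <= 1 for b in blocks] for a in blocks]

    best = 0

    def extend(chosen, candidates):
        nonlocal best
        if len(chosen) + len(candidates) <= best:
            return  # cannot beat the current record
        if not candidates:
            best = max(best, len(chosen))
            return
        for k, i in enumerate(candidates):
            extend(chosen + [i],
                   [j for j in candidates[k + 1:] if compatible[i][j]])

    extend([], list(range(len(blocks))))
    return best

# H = a fixed copy of K_5 - e on {1,...,5}, with the edge {4,5} removed.
H = [p for p in combinations(range(1, 6), 2) if set(p) != {4, 5}]
print(max_clique_of_gamma_n(10, H))   # expected: 3, the n=10 entry of Table [comput]
```

Such a brute force is only feasible for the smallest orders; the values reported in Table \[comput\] for $n\leq 15$ were obtained with [Cliquer]{}, and the larger orders were handled by the stochastic local search mentioned above.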
\[ex1D422\] For all $n\geq 6$, we have $m(n,4,2\circ K_4)=\frac{1}{6}({n\choose 2}-f(n))$, where $$\begin{aligned} f(n) = \begin{cases} 18&\text{if $n\equiv 1$ or $4$ {\rm (mod 12)}, $n\not=13$,} \\ 21&\text{if $n\equiv 7$ or $10$ {\rm (mod 12)}, $n\not\in\{10,19\}$,} \\ (n+24)/2&\text{if $n\equiv 2$ or $8$ {\rm (mod 12)}, $n\not\in\{8,14\}$,} \\ (n+27)/2&\text{if $n\equiv 5$ or $11$ {\rm (mod 12)}, $n\not=11$,} \\ n+9&\text{if $n\equiv 6$ or $9$ {\rm (mod 12)}, $n\not=9$,} \\ n+12&\text{if $n\equiv 0$ or $3$ {\rm (mod 12)}, $n\not=12$,} \\ 22&\text{if $n=8$,} \\ 24&\text{if $n=9$,} \\ 27&\text{if $n=10$,} \\ 31&\text{if $n=11$,} \\ 30&\text{if $n=12$,} \\ 24&\text{if $n=13$,} \\ 25&\text{if $n=14$,} \\ 27&\text{if $n=19$,} \end{cases}\end{aligned}$$ except possibly for [ $n\in\{16$, 17, 25, 26, 28, 32, 35, 36, 37, 38, 39, 41, 44, 47, 53, 56, 59, 65, 71, 73, 76, 77, 85, $89\}$. ]{} Conclusion ========== Theorems \[D3(2,2)\], \[exK5-e\], and \[ex1D422\] can be expressed more succinctly in terms of $D(n,3,2)$ and $D(n,4,2)$ as follows. For all $n\geq 4$, $$\begin{aligned} m(n,3,K_4-e)+2 = \begin{cases} D(n,3,2)&\text{if $n\equiv 0$, $2$, or $5$ {\rm (mod 6)}}, \\ D(n,3,2)-1&\text{if $n\equiv 1$, $3$, or $4$ {\rm (mod 6)}}. \end{cases}\end{aligned}$$ For all $n\geq 5$, $$\begin{aligned} m(n,4,K_5-e)+2 = \begin{cases} D(n,4,2)+1&\text{if $n\equiv 5$, $6$, $7$, $9$, $10$, or $11$ {\rm (mod 12)}, } \\ & \text{$n\not\in\{9,10,11\}$,} \\ D(n,4,2)&\text{if $n\equiv 0$, $2$, $3$, or $8$ {\rm (mod 12)}, $n\not\in\{8,12\}$,} \\ D(n,4,2)-1&\text{if $n\equiv 1$ or $4$ {\rm (mod 12)}, $n\not=13$,} \\ n-5&\text{if $n\in\{8,9,10,11\}$,} \\ 8&\text{if $n=12$,} \\ 11&\text{if $n=13$,} \\ \end{cases}\end{aligned}$$ except possibly for $n\in\{16, 17, 25, 28, 32, 37, 38, 39, 73, 76, 85\}$. For all $n\geq 6$, $$\begin{aligned} m(n,4,2\circ K_4)+2=\begin{cases} D(n,4,2)+1&\text{if $n\equiv 6$ or $9$ {\rm (mod 12)}, $n\not=9$,} \\ D(n,4,2)&\text{if $n\equiv 0$, $2$, $3$, $5$, $7$, $8$, $10$, or $11$ {\rm (mod 12)},} \\ & \text{$n\not\in\{8,10,11,12,14\}$,} \\ D(n,4,2)-1&\text{if $n\equiv 1$ or $4$ {\rm (mod 12)}, $n\not=13$,} \\ n-5&\text{if $n\in\{8,9,10,11\}$,} \\ 8&\text{if $n=12$,} \\ 11&\text{if $n=13$,} \\ 13&\text{if $n=14$,} \\ \end{cases}\end{aligned}$$ except possibly for [ $n\in\{16$, 17, 25, 26, 28, 32, 35, 36, 37, 38, 39, 41, 44, 47, 53, 56, 59, 65, 71, 73, 76, 77, 85, $89\}$.]{} These have the following consequences. For all $n\geq 4$, $T(n,{{\mathcal F}}(3),2)=D(n,3,2)$. For all $n\geq 6$, $$\begin{aligned} T(n,{{\mathcal F}}(4),2)=\begin{cases} D(n,4,2)+1&\text{if $n\equiv 5$, $6$, $7$, $9$, $10$, or $11$ {\rm (mod 12)}, } \\ & \text{$n\not\in\{9,10,11\}$,} \\ D(n,4,2)&\text{if $n\equiv 0$, $1$, $2$, $3$, $4$, or $8$ {\rm (mod 12)}, } \\ & \text{$n\not\in\{8,12,13\}$,} \\ n-5&\text{if $n\in\{8,9,10,11\}$,} \\ 8&\text{if $n=12$,} \\ 11&\text{if $n=13$,} \end{cases}\end{aligned}$$ except possibly for $n\in\{16, 17, 25, 28, 32, 37, 38, 39, 73, 76, 85\}$. Some maximum $2$-$(n,4,1)$ packings with a leave containing $K_5-e$ =================================================================== In each case, the edges of the $K_5-e$ in the leave are ${[5]\choose 2}\setminus\{ \{4,5\}\}$. The blocks of a maximum $2$-$(10,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $\{4,5,6,7\}$, $\{3,7,8,9\}$, $\{1,6,8,10\}$. 
The blocks of a maximum $2$-$(18,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{4,8,12,16\}$, & $\{3,6,7,8\}$, & $\{3,11,13,16\}$, & $\{2,9,15,16\}$, & $\{10,11,12,14\}$, \\ $\{2,7,11,17\}$, & $\{4,9,13,14\}$, & $\{1,6,9,17\}$, & $\{5,13,17,18\}$, & $\{3,14,15,17\}$, \\ $\{2,8,14,18\}$, & $\{4,7,10,15\}$, & $\{2,6,10,13\}$, & $\{1,8,11,15\}$, & $\{4,6,11,18\}$, \\ $\{5,8,9,10\}$, & $\{1,10,16,18\}$, & $\{5,7,14,16\}$, & $\{3,9,12,18\}$, & $\{1,7,12,13\}$, \\ $\{5,6,12,15\}$. \end{array}$$ The blocks of a maximum $2$-$(19,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{8,14,17,18\}$, & $\{2,9,13,14\}$, & $\{3,7,12,14\}$, & $\{1,10,14,19\}$, & $\{4,5,10,18\}$, \\ $\{4,6,14,16\}$, & $\{6,11,18,19\}$, & $\{4,11,13,17\}$, & $\{3,8,15,19\}$, & $\{5,12,13,19\}$, \\ $\{1,9,12,18\}$, & $\{3,13,16,18\}$, & $\{2,7,15,18\}$, & $\{3,9,10,17\}$, & $\{4,7,9,19\}$, \\ $\{2,16,17,19\}$, & $\{5,6,7,17\}$, & $\{2,6,8,12\}$, & $\{10,12,15,16\}$, & $\{7,8,10,13\}$, \\ $\{5,8,9,16\}$, & $\{5,11,14,15\}$, & $\{1,7,11,16\}$, & $\{1,6,13,15\}$. \end{array}$$ The blocks of a maximum $2$-$(20,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{4,6,16,18\}$, & $\{3,12,16,20\}$, & $\{1,10,11,15\}$, & $\{9,12,14,19\}$, & $\{2,7,10,12\}$, \\ $\{6,7,15,19\}$, & $\{9,10,17,18\}$, & $\{4,9,11,13\}$, & $\{4,12,15,17\}$, & $\{4,5,10,19\}$, \\ $\{1,8,12,18\}$, & $\{3,13,18,19\}$, & $\{5,8,14,17\}$, & $\{1,16,17,19\}$, & $\{1,7,13,14\}$, \\ $\{2,6,13,17\}$, & $\{11,14,18,20\}$, & $\{2,8,11,19\}$, & $\{5,6,11,12\}$, & $\{5,13,15,20\}$, \\ $\{3,8,9,15\}$, & $\{8,10,13,16\}$, & $\{3,7,11,17\}$, & $\{2,14,15,16\}$, & $\{3,6,10,14\}$, \\ $\{5,7,9,16\}$, & $\{4,7,8,20\}$, & $\{1,6,9,20\}$. \end{array}$$ The blocks of a maximum $2$-$(24,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{12,14,15,18\}$, & $\{3,6,16,18\}$, & $\{6,9,10,13\}$, & $\{5,9,15,22\}$, & $\{3,9,11,21\}$, \\ $\{4,8,15,19\}$, & $\{1,18,21,22\}$, & $\{12,16,17,19\}$, & $\{11,12,22,23\}$, & $\{4,9,23,24\}$, \\ $\{4,5,6,12\}$, & $\{3,10,12,24\}$, & $\{5,8,21,24\}$, & $\{6,14,17,21\}$, & $\{1,8,12,13\}$, \\ $\{6,19,22,24\}$, & $\{4,16,20,21\}$, & $\{2,18,19,23\}$, & $\{1,7,17,23\}$, & $\{3,17,20,22\}$, \\ $\{1,11,16,24\}$, & $\{2,13,14,16\}$, & $\{2,7,10,21\}$, & $\{5,7,14,20\}$, & $\{8,10,17,18\}$, \\ $\{13,18,20,24\}$, & $\{2,9,12,20\}$, & $\{7,8,16,22\}$, & $\{3,7,13,19\}$, & $\{2,15,17,24\}$, \\ $\{5,11,13,17\}$, & $\{13,15,21,23\}$, & $\{10,11,19,20\}$, & $\{1,9,14,19\}$, & $\{4,7,11,18\}$, \\ $\{1,6,15,20\}$, & $\{3,8,14,23\}$, & $\{2,6,8,11\}$, & $\{5,10,16,23\}$, & $\{4,10,14,22\}$. 
\end{array}$$ The blocks of a maximum $2$-$(26,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{4,17,22,24\}$, & $\{3,11,17,20\}$, & $\{5,7,18,22\}$, & $\{4,16,18,23\}$, & $\{1,7,19,25\}$, \\ $\{14,21,22,23\}$, & $\{1,10,18,26\}$, & $\{2,11,21,26\}$, & $\{3,6,7,23\}$, & $\{11,14,16,19\}$, \\ $\{12,20,24,26\}$, & $\{4,7,14,26\}$, & $\{3,9,16,22\}$, & $\{6,10,15,16\}$, & $\{3,10,12,19\}$, \\ $\{7,8,15,17\}$, & $\{4,9,13,19\}$, & $\{5,12,13,21\}$, & $\{15,19,22,26\}$, & $\{5,19,23,24\}$, \\ $\{4,12,15,25\}$, & $\{3,15,18,21\}$, & $\{8,9,21,25\}$, & $\{6,12,17,18\}$, & $\{5,8,16,26\}$, \\ $\{2,7,9,12\}$, & $\{9,17,23,26\}$, & $\{1,8,20,22\}$, & $\{5,9,11,15\}$, & $\{7,10,21,24\}$, \\ $\{1,13,14,15\}$, & $\{6,19,20,21\}$, & $\{7,13,16,20\}$, & $\{10,11,22,25\}$, & $\{2,6,13,22\}$, \\ $\{2,16,24,25\}$, & $\{9,14,18,20\}$, & $\{2,8,18,19\}$, & $\{1,6,9,24\}$, & $\{4,6,8,11\}$, \\ $\{5,6,14,25\}$, & $\{8,10,13,23\}$, & $\{11,13,18,24\}$, & $\{2,10,14,17\}$, & $\{3,13,25,26\}$, \\ $\{3,8,14,24\}$, & $\{2,15,20,23\}$, & $\{1,11,12,23\}$, & $\{4,5,10,20\}$, & $\{1,16,17,21\}$. \end{array}$$ The blocks of a maximum $2$-$(27,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{2,7,16,21\}$, & $\{7,20,26,27\}$, & $\{5,17,25,27\}$, & $\{5,15,21,23\}$, & $\{5,6,11,22\}$, \\ $\{13,21,22,27\}$, & $\{3,8,11,26\}$, & $\{6,15,17,24\}$, & $\{4,5,7,19\}$, & $\{1,6,18,27\}$, \\ $\{3,18,21,24\}$, & $\{2,11,12,13\}$, & $\{9,13,16,23\}$, & $\{10,11,14,15\}$, & $\{3,14,23,27\}$, \\ $\{4,8,15,18\}$, & $\{14,19,22,24\}$, & $\{1,10,19,23\}$, & $\{3,12,16,20\}$, & $\{2,8,23,24\}$, \\ $\{5,8,9,20\}$, & $\{4,12,14,21\}$, & $\{4,9,11,27\}$, & $\{3,6,10,25\}$, & $\{8,14,16,17\}$, \\ $\{2,15,19,27\}$, & $\{9,12,15,22\}$, & $\{3,7,13,15\}$, & $\{1,8,12,25\}$, & $\{3,9,17,19\}$, \\ $\{19,20,21,25\}$, & $\{2,6,14,20\}$, & $\{6,8,13,19\}$, & $\{7,11,24,25\}$, & $\{1,11,17,21\}$, \\ $\{4,10,17,20\}$, & $\{9,10,21,26\}$, & $\{10,16,24,27\}$, & $\{4,16,22,25\}$, & $\{7,12,17,18\}$, \\ $\{1,7,9,14\}$, & $\{2,17,22,26\}$, & $\{11,16,18,19\}$, & $\{5,12,24,26\}$, & $\{1,15,16,26\}$, \\ $\{5,10,13,18\}$, & $\{1,13,20,24\}$, & $\{18,20,22,23\}$, & $\{2,9,18,25\}$, & $\{4,6,23,26\}$, \\ $\{13,14,25,26\}$, & $\{7,8,10,22\}$. 
\end{array}$$ The blocks of a maximum $2$-$(36,4,1)$ packing with a leave containing $K_5-e$ ------------------------------------------------------------------------------ $$\begin{array}{ccccc} $\{7,10,17,35\}$, & $\{11,15,26,36\}$, & $\{6,16,25,29\}$, & $\{1,12,24,28\}$, & $\{3,13,34,35\}$, \\ $\{30,31,35,36\}$, & $\{21,23,28,34\}$, & $\{1,14,19,35\}$, & $\{8,9,28,32\}$, & $\{15,18,21,25\}$, \\ $\{3,18,26,27\}$, & $\{1,8,25,30\}$, & $\{3,10,23,29\}$, & $\{6,28,30,33\}$, & $\{15,19,23,24\}$, \\ $\{4,14,17,34\}$, & $\{7,13,26,28\}$, & $\{10,19,22,36\}$, & $\{6,11,12,34\}$, & $\{1,7,11,29\}$, \\ $\{5,13,17,25\}$, & $\{14,24,26,31\}$, & $\{13,19,27,29\}$, & $\{1,20,23,26\}$, & $\{2,22,31,34\}$, \\ $\{14,23,25,36\}$, & $\{5,16,24,33\}$, & $\{4,18,29,33\}$, & $\{4,21,26,32\}$, & $\{8,22,26,29\}$, \\ $\{9,11,22,25\}$, & $\{12,18,20,32\}$, & $\{2,11,20,21\}$, & $\{11,13,31,32\}$, & $\{10,14,30,32\}$, \\ $\{3,9,33,36\}$, & $\{3,11,24,30\}$, & $\{24,29,32,36\}$, & $\{7,18,24,34\}$, & $\{7,19,21,31\}$, \\ $\{3,7,12,25\}$, & $\{2,8,13,24\}$, & $\{2,7,14,16\}$, & $\{5,7,8,20\}$, & $\{10,11,16,28\}$, \\ $\{5,6,18,31\}$, & $\{8,11,14,18\}$, & $\{3,17,19,32\}$, & $\{10,20,25,31\}$, & $\{4,5,11,19\}$, \\ $\{16,18,19,30\}$, & $\{16,20,34,36\}$, & $\{3,6,15,20\}$, & $\{4,8,10,12\}$, & $\{6,9,13,14\}$, \\ $\{9,17,20,24\}$, & $\{13,20,22,33\}$, & $\{4,6,7,36\}$, & $\{1,13,18,36\}$, & $\{5,26,30,34\}$, \\ $\{1,6,22,32\}$, & $\{16,21,27,35\}$, & $\{12,13,21,30\}$, & $\{2,9,18,35\}$, & $\{12,17,29,31\}$, \\ $\{8,17,21,36\}$, & $\{7,9,23,30\}$, & $\{20,28,29,35\}$, & $\{2,15,29,30\}$, & $\{4,20,27,30\}$, \\ $\{1,15,16,17\}$, & $\{1,9,27,31\}$, & $\{4,15,28,31\}$, & $\{12,14,15,33\}$, & $\{9,12,19,26\}$, \\ $\{25,26,33,35\}$, & $\{6,10,24,27\}$, & $\{3,14,21,22\}$, & $\{2,23,32,33\}$, & $\{5,14,27,28\}$, \\ $\{5,12,22,23\}$, & $\{25,27,32,34\}$, & $\{3,8,16,31\}$, & $\{4,13,16,23\}$, & $\{8,19,33,34\}$, \\ $\{2,6,17,26\}$, & $\{5,15,32,35\}$, & $\{2,19,25,28\}$, & $\{9,10,15,34\}$, & $\{6,8,23,35\}$, \\ $\{1,10,21,33\}$, & $\{7,15,22,27\}$, & $\{4,22,24,35\}$, & $\{11,17,27,33\}$, & $\{2,12,27,36\}$, \\ $\{5,9,21,29\}$, & $\{17,18,22,28\}$. \end{array}$$ Some maximum $2$-$(n,4,1)$ packings with a leave containing $2\circ K_4$ ======================================================================== The blocks of a maximum $2$-$(18,4,1)$ packing with a leave containing $2\circ K_4$ ----------------------------------------------------------------------------------- $$\begin{array}{ccccc} $\{3,9,10,12\}$, & $\{1,7,9,16\}$, & $\{4,7,17,18\}$, & $\{1,5,13,17\}$, & $\{5,9,11,14\}$, \\ $\{1,6,14,18\}$, & $\{4,6,8,9\}$, & $\{2,7,10,14\}$, & $\{3,14,16,17\}$, & $\{5,8,10,18\}$, \\ $\{1,8,12,15\}$, & $\{6,10,15,17\}$, & $\{3,7,8,13\}$, & $\{3,11,15,18\}$, & $\{6,7,11,12\}$, \\ $\{2,12,16,18\}$, & $\{10,11,13,16\}$, & $\{2,8,11,17\}$, & $\{2,9,13,15\}$, & $\{4,5,15,16\}$, \\ $\{4,12,13,14\}$. 
\end{array}$$ The blocks of a maximum $2$-$(19,4,1)$ packing with a leave containing $2\circ K_4$ ----------------------------------------------------------------------------------- $$\begin{array}{ccccc} $\{3,8,9,16\}$, & $\{5,12,16,18\}$, & $\{11,13,15,16\}$, & $\{1,12,13,19\}$, & $\{6,8,10,11\}$, \\ $\{1,10,14,16\}$, & $\{6,12,15,17\}$, & $\{6,7,9,13\}$, & $\{4,13,14,18\}$, & $\{1,9,15,18\}$, \\ $\{5,9,14,19\}$, & $\{4,6,16,19\}$, & $\{4,7,8,12\}$, & $\{5,8,13,17\}$, & $\{2,8,14,15\}$, \\ $\{3,10,17,18\}$, & $\{4,5,10,15\}$, & $\{2,9,10,12\}$, & $\{1,5,7,11\}$, & $\{2,7,16,17\}$, \\ $\{4,9,11,17\}$, & $\{3,7,15,19\}$, & $\{3,11,12,14\}$, & $\{2,11,18,19\}$. \end{array}$$ The blocks of a maximum $2$-$(24,4,1)$ packing with a leave containing $2\circ K_4$ ----------------------------------------------------------------------------------- $$\begin{array}{ccccc} $\{4,13,14,21\}$, & $\{3,12,16,20\}$, & $\{3,15,21,22\}$, & $\{3,17,18,23\}$, & $\{7,13,15,23\}$, \\ $\{4,12,19,22\}$, & $\{1,9,15,19\}$, & $\{4,7,8,18\}$, & $\{6,10,15,18\}$, & $\{9,11,17,21\}$, \\ $\{3,10,11,13\}$, & $\{5,9,14,22\}$, & $\{1,11,14,18\}$, & $\{1,6,8,21\}$, & $\{6,7,14,20\}$, \\ $\{2,13,19,24\}$, & $\{10,19,20,21\}$, & $\{5,11,12,15\}$, & $\{8,11,16,24\}$, & $\{2,8,10,17\}$, \\ $\{2,16,21,23\}$, & $\{5,8,13,20\}$, & $\{4,6,9,16\}$, & $\{1,7,10,16\}$, & $\{2,7,11,22\}$, \\ $\{2,9,18,20\}$, & $\{14,15,16,17\}$, & $\{8,9,12,23\}$, & $\{10,12,14,24\}$, & $\{3,7,9,24\}$, \\ $\{13,16,18,22\}$, & $\{6,17,22,24\}$, & $\{1,12,13,17\}$, & $\{5,7,17,19\}$, & $\{3,8,14,19\}$, \\ $\{4,15,20,24\}$, & $\{1,20,22,23\}$, & $\{4,5,10,23\}$, & $\{6,11,19,23\}$, & $\{5,18,21,24\}$. \end{array}$$ The blocks of a maximum $2$-$(27,4,1)$ packing with a leave containing $2\circ K_4$ ----------------------------------------------------------------------------------- $$\begin{array}{ccccc} $\{6,12,17,21\}$, & $\{1,5,10,19\}$, & $\{4,8,10,27\}$, & $\{4,14,21,22\}$, & $\{19,21,24,25\}$, \\ $\{2,11,23,26\}$, & $\{3,10,20,24\}$, & $\{3,9,15,21\}$, & $\{12,20,26,27\}$, & $\{8,11,12,25\}$, \\ $\{10,15,18,26\}$, & $\{3,8,19,26\}$, & $\{4,11,13,20\}$, & $\{9,22,24,26\}$, & $\{3,7,22,27\}$, \\ $\{1,15,22,23\}$, & $\{5,14,24,27\}$, & $\{3,12,14,23\}$, & $\{9,12,13,19\}$, & $\{2,9,17,27\}$, \\ $\{3,13,18,25\}$, & $\{4,7,12,15\}$, & $\{6,14,16,20\}$, & $\{5,7,9,11\}$, & $\{6,15,25,27\}$, \\ $\{10,17,22,25\}$, & $\{5,20,23,25\}$, & $\{2,10,12,16\}$, & $\{4,6,18,19\}$, & $\{5,12,18,22\}$, \\ $\{3,11,16,17\}$, & $\{6,8,13,22\}$, & $\{1,13,16,27\}$, & $\{2,13,15,24\}$, & $\{5,8,15,17\}$, \\ $\{5,16,21,26\}$, & $\{13,14,17,26\}$, & $\{7,10,13,21\}$, & $\{2,19,20,22\}$, & $\{1,6,11,24\}$, \\ $\{7,17,18,20\}$, & $\{1,8,20,21\}$, & $\{1,7,25,26\}$, & $\{11,14,15,19\}$, & $\{4,9,16,25\}$, \\ $\{7,16,19,23\}$, & $\{8,16,18,24\}$, & $\{11,18,21,27\}$, & $\{4,17,23,24\}$, & $\{1,9,14,18\}$, \\ $\{2,7,8,14\}$, & $\{6,9,10,23\}$. \end{array}$$ [10]{} , [*Extremal Graph Theory*]{}, London Math. Soc. Monogr. 11, Academic Press, Harcourt Brace Jovanovich, London, 1978. , [*Covering non-uniform hypergraphs*]{}, J. Combin. Theory Ser. B, 82 (2001), pp. 270–284. , [*Optimal packings of [$K\sb{4}$]{}’s into a [$K\sb{n}$]{}*]{}, J. Combin. Theory Ser. A, 26 (1979), pp. 278–297. , [*Group divisible designs with block-size four*]{}, Discrete Math., 20 (1977), pp. 1–10. , [*Minimal coverings of pairs by triples*]{}, Pacific J. Math., 8 (1958), pp. 709–719. 
, [*Group divisible designs with block size four and group type [$g\sp um\sp 1$]{} for small [$g$]{}*]{}, Discrete Math., 285 (2004), pp. 97–120. , [*Balanced incomplete block designs and related designs*]{}, Discrete Math., 11 (1975), pp. 255–369. , [*Existence of orthogonal [L]{}atin squares with aligned subsquares*]{}, Discrete Math., 59 (1986), pp. 69–78. , [*Small group-divisible designs with block size four*]{}, J. Statist. Plann. Inference, 58 (1997), pp. 111–118. , [*The Theory of Error-Correcting Codes*]{}, North-Holland, Amsterdam, 1977. , [*Coverings and packings*]{}, in Contemporary Design Theory, J. H. Dinitz and D. R. Stinson, eds., Wiley-Intersci. Ser. Discrete Math. Optim., Wiley, New York, 1992, pp. 371–399. , [*A fast algorithm for the maximum clique problem*]{}, Discrete Appl. Math., 120 (2002), pp. 197–207. , [*On the existence of incomplete designs of block size four having one hole*]{}, Util. Math., 35 (1989), pp. 119–152. , [*On maximal systems of [$k$]{}-tuples*]{}, Studia Sci. Math. Hungar., 1 (1966), pp. 363–368. , [*Maximal consistent families of triples*]{}, J. Combin. Theory, 5 (1968), pp. 1–8. [^1]: Interactive Digital Media R&D Program Office, Media Development Authority, 140 Hill Street, 179369 Singapore, the Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, 637616 Singapore, and the Department of Computer Science, School of Computing, National University of Singapore, 117590 Singapore ([email protected]). The research of this author was supported by the Singapore Ministry of Education Research Grant T206B2204. [^2]: Department of Computer Science, University of VT, Burlington, VT 05405 ([email protected]). [^3]: Received by the editors November 26, 2006; accepted for publication (in revised form) July 20, 2007; published electronically October 31, 2007. sidma/21-3/67591.html
--- abstract: 'This article is concerned with some weighted norm inequalities for the so-called horizontal (i.e. involving time derivatives) area integrals associated to a non-negative self-adjoint operator satisfying a pointwise Gaussian estimate for its heat kernel, as well as the corresponding vertical (i.e. involving space derivatives) area integrals associated to a non-negative self-adjoint operator satisfying in addition a pointwise upper bounds for the gradient of the heat kernel. As applications, we obtain sharp estimates for the operator norm of the area integrals on $L^p(\RN)$ as $p$ becomes large, and the growth of the $A_p$ constant on estimates of the area integrals on the weighted $L^p$ spaces.' address: - ' Ruming Gong, Department of Mathematics, Sun Yat-sen (Zhongshan) University, Guangzhou, 510275, P.R. China' - ' Lixin Yan, Department of Mathematics, Sun Yat-sen (Zhongshan) University, Guangzhou, 510275, P.R. China' author: - Ruming Gong   and   Lixin Yan title: | Weighted $L^p$ estimates for the area integral\ \[2pt\] associated to self-adjoint operators --- \[section\] \[theorem\][Proposition]{} \[theorem\][Lemma]{} \[theorem\][Definition]{} \[theorem\][Corollary]{} \[theorem\][Example]{} \[theorem\][Remark]{} Introduction ============= [**1.1. Background.**]{}Let $\varphi\in C_0^{\infty}({\RN})$ with $\int \varphi =0.$ Let $\varphi_t(x)=t^{-n}\varphi(x/t), t>0$, and define the Lusin area integral by $$\begin{aligned} \label{e1.1} S_{\varphi}(f)(x)=\bigg(\int_{|x-y|<t} \big|f\ast \varphi_t(y)\big|^2 {dy \, dt\over t^{n+1}}\bigg)^{1/2}.\end{aligned}$$ A celebrated result of Chang-Wilson-Wolff ([@CWW]) says that for all $w\geq 0$, $w\in L_{\rm loc}^{1}(\RN)$ and all $f\in \mathcal{S}(\RN)$, there is a constant $C=C(n, \varphi)$ independent of $w$ and $f$ such that $$\begin{aligned} \label{e1.2} \int_{\RN}S^2_{\varphi}(f)w\, dx\leq C\int_{\RN}|f|^2Mw\, dx,\end{aligned}$$ where $Mw$ denotes the Hardy-Littlewood maximal operator of $w$. The fact that $\varphi$ has compact support is crucial in the proof of Chang, Wilson and Wolff. In [@CW], Chanillo and Wheeden overcame this difficulty, and they obtained weighted $L^p$ inequalities for $1<p<\infty$ of the area integral, even when $\varphi$ does not have compact support, including the classical area function defined by means of the Poisson kernel. From the theorem of Chang, Wilson and Wolff, it was already observed in [@FP] that R. Fefferman and Pipher obtained sharp estimates for the operator norm of a classical Calderón-Zygmund singular integral, or the classical area integral for $p$ tending to infinity, e.g., $$\begin{aligned} \label{e1.3} \big\|S_{\varphi}(f)\big\|_{L^p(\RN)}\leq C p^{1/2} \big\|f\big\|_{L^p(\RN)}\end{aligned}$$ as $p\rightarrow \infty.$\ [**1.2. Assumptions, notation and definitions.**]{} In this article, our main goal is to provide an extension of the result of Chang-Wilson-Wolff to study some weighted norm inequalities for the area integrals associated to non-negative self-adjoint operators, whose kernels are not smooth enough to fall under the scope of [@CWW; @CW; @W]. 
The relevant classes of operators is determined by the following condition: [**Assumption $(H_1)$.**]{} Assume that $L$ is a non-negative self-adjoint operator on $L^2({\mathbb R}^{n}),$ the semigroup $e^{-tL}$, generated by $-L$ on $L^2(\RN)$, has the kernel $p_t(x,y)$ which satisfies the following Gaussian upper bound if there exist $C$ and $c$ such that for all $x,y\in {\mathbb R}^{n}, t>0,$ $$|p_{t}(x,y)| \leq \frac{C}{t^{n/2} } \exp\Big(-{|x-y|^2\over c\,t}\Big). \leqno{(GE)}$$ Such estimates are typical for elliptic or sub-elliptic differential operators of second order (see for instance, [@Da] and [@DOS]).\ For $f\in {\mathcal S}(\RN)$, define the (so called vertical) area functions $S_{P}$ and $S_{H}$ by $$\begin{aligned} \label{e1.4} S_{P}f(x)&=&\bigg(\int_{|x-y|<t} |t\nabla_y e^{-t\sqrt{L}} f(y)|^2 {dy dt\over t^{n+1}}\bigg)^{1/2},\\ S_{H}f(x)&=&\bigg(\int_{|x-y|<t} |t\nabla_y e^{-t^2L} f(y)|^2 {dy dt\over t^{n+1}}\bigg)^{1/2}, \label{e1.5}\end{aligned}$$ as well as the (so-called horizontal) area functions $s_{p}$ and $s_{h}$ by $$\begin{aligned} \label{e1.6} s_{p}f(x)&=&\bigg(\int_{|x-y|<t} |t\sqrt{L} e^{-t\sqrt{L}} f(y)|^2 {dy dt\over t^{n+1}}\bigg)^{1/2},\\ s_{h}f(x)&=&\bigg(\int_{|x-y|<t} |t^2L e^{-t^2L} f(y)|^2 {dy dt\over t^{n+1}}\bigg)^{1/2}.\label{e1.7}\end{aligned}$$ It is well known (cf. e.g. [@St; @G]) that when $L=\Delta$ is the Laplacian on $\RN$, the classical area functions $S_{P}, S_{H}, s_{p}$ and $s_{h}$ are all bounded on $L^p(\RN), 1<p<\infty.$ For a general non-negative self-adjoint operator $L$, $L^p$-boundedness of the area functions $S_{P}, S_{H}, s_{p}$ and $s_{h}$ associated to $L$ has been studied extensively – see for examples [@A], [@ACDH], [@ADM], [@CDL], [@St1] and [@Y], and the references therein. [**1.3. Statement of the main results.**]{} Firstly, we have the following weighted $L^p$ estimates for the area functions $s_{p}$ and $s_{h}.$ \[th1.1\]   Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels satisfy Gaussian bounds $(GE)$. If $w\geq 0$, $w\in L_{\rm loc}^{1}(\RN)$ and $f\in \mathcal{S}(\RN)$, then $$\begin{aligned} \hspace{-2.5cm} &{\rm (a)}& \hspace{0.1cm} \int_{\RN}s_h(f)^pw\, dx\leq C(n,p)\int_{\RN}|f|^pMw\, dx,\ \ \ 1<p\leq 2,\\ \hspace{-2.5cm}&{\rm (b)}& \hspace{0.1cm}\int_{\{s_h(f)>\lambda\}} wdx\leq {C(n)\over \lambda}\int_{\RN}|f| Mwdx,\ \ \ \lambda>0, \\ \hspace{-2.5cm}&{\rm (c)}& \hspace{0.1cm} \int_{\RN} s_h(f)^pwdx\leq C(n,p)\int_{\RN}|f|^p(Mw)^{p/2}w^{-(p/2-1)}dx, \ \ \ 2<p<\infty.\end{aligned}$$ Also, estimates (a), (b) and (c) hold for the operator $s_p.$ To study weighted $L^p$-boundedness of the (so-called vertical) area integrals $S_{P}$ and $S_{H}$, one assumes in addition the following condition: [**Assumption $(H_2)$.**]{} Assume that the semigroup $e^{-tL}$, generated by $-L$ on $L^2(\RN)$, has the kernel $p_t(x,y)$ which satisfies a pointwise upper bound for the gradient of the heat kernel. That is, there exist $C$ and $c$ such that for all $x,y\in{\RN}, t>0$, $$\big|\nabla_x p_t(x,y)\big|\leq {C\over t^{(n+1)/2}} \exp\Big(-{|x-y|^2\over c\,t}\Big). \leqno{(G)}$$ Then the following result holds. \[th1.2\]   Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels satisfy conditions $(GE)$ and $(G)$. If $w\geq 0$, $w\in L_{\rm loc}^{1}(\RN)$ and $f\in \mathcal{S}(\RN)$, then $(a), (b)$ and $(c)$ of Theorem \[th1.1\] hold for the area functions $S_{P}$ and $S_{H}$. Let us now recall a definition. 
We say that a weight $w$ is in the Muckenhoupt class $A_p, 1<p<\infty,$ if $$\begin{aligned} \|w\|_{A_p}\equiv \sup_Q \bigg({1\over |Q|}\int_Q w(x)dx \bigg)\bigg({1\over |Q|}\int_Q w(x)^{-1/(p-1)}dx \bigg)^{p-1}<\infty.\end{aligned}$$ $\|w\|_{A_p}$ is usually called the $A_p$ constant (or characteristic, or norm) of the weight. The case $p=1$ is understood by replacing the second factor by $(\inf_Q w)^{-1}$, which is equivalent to the definition above. Observe the duality relation: $$\|w\|_{A_p}=\|w^{1-p'}\|^{p-1}_{A_{p'}}.$$

Following the method of R. Fefferman and Pipher, we can use Theorems \[th1.1\] and \[th1.2\] to establish $L^p$ estimates for the area integrals as $p$ becomes large.

\[th1.3\] Let $T$ be any of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}.$ Under the assumptions of Theorems \[th1.1\] and \[th1.2\], there exists a constant $C$ such that for all $w\in A_1$, the following estimate holds: $$\begin{aligned} \label{e1.8} \|Tf\|_{L^2_w(\RN)}\leq C\|w\|_{A_1}^{1/2}\|f\|_{L^2_w(\RN)}.\end{aligned}$$ This inequality implies that, as $p\rightarrow \infty$, $$\begin{aligned} \label{e1.9} \|Tf\|_{L^p(\RN)}\leq Cp^{1/2}\|f\|_{L^p(\RN)}.\end{aligned}$$

The next result we will prove is the following.

\[th1.4\] Let $T$ be any of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}.$ Under the assumptions of Theorems \[th1.1\] and \[th1.2\], there exists a constant $C$ such that for all $w\in A_p$, the following estimate holds for all $f\in L^p_w(\RN), 1<p<\infty$: $$\begin{aligned} \label{e1.10} \|T f\|_{L^p_w(\RN)}\leq C\|w\|_{A_p}^{\beta_p+1/(p-1)} \|f\|_{L^p_w(\RN)} \ \ (1<p<\infty),\end{aligned}$$ where $\beta_p=\max\{1/2,1/(p-1)\}.$

We should mention that Theorems \[th1.1\] and \[th1.2\] are of some independent interest, and they provide an immediate proof of weighted $L^p$ estimates of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}$ on $L^p_w(\RN), 1<p<\infty $ and $ w\in A_p $ (see Lemma \[le5.1\] below). In the proofs of Theorems \[th1.1\] and \[th1.2\], the main tool is that each area integral is controlled by $\gL$ pointwise: $$\begin{aligned} \label{e3.2} Tf(x) \leq C\gL(f)(x),\ \ \ x\in\RN,\end{aligned}$$ where $T$ is any of $S_P$, $S_H$, $s_p$ and $s_h$, and $\gL$ is defined by $$\begin{aligned} \label{e3.1} \hspace{1cm} \gL(f)(x)=\Bigg(\iint_{\RR_{+}^{n+1}} \bigg({t\over t+|x-y|}\bigg)^{n\mu} |\Psi(t\sqrt{L})f(y)|^2{dydt\over t^{n+1}}\Bigg)^{1/2}, \ \ \mu>1\end{aligned}$$ with some $\Psi\in{\mathcal S}(\RR)$. The idea of using $\gL$ to control the area integrals is due to Calderón and Torchinsky [@CT] (see also [@CW] and [@W]). Note that the singular integral $\gL$ does not satisfy the standard regularity condition of a so-called Calderón-Zygmund operator; thus standard techniques of Calderón-Zygmund theory ([@CW; @W]) are not applicable. The lack of smoothness of the kernel was indeed the main obstacle, and it was overcome by using the method developed in [@CD; @DM], together with some estimates on heat kernel bounds, finite propagation speed of solutions to the wave equation, and the spectral theory of non-negative self-adjoint operators.

The layout of the paper is as follows. In Section 2 we recall some basic properties of heat kernels and finite propagation speed for the wave equation, and build the necessary kernel estimates for functions of an operator, which are useful in the proof of the weak-type $(1,1)$ estimate for the area integrals.
In Section 3 we will prove that the area integral is controlled by $\gL$ pointwise, which implies Theorems \[th1.1\] and \[th1.2\] for $p=2$, and then we employ the R. Fefferman-Pipher’s method to obtain sharp estimates for the operator norm of the area integrals on $L^p(\RN)$ as $p$ becomes large. In Section 4, we will give the proofs of Theorems \[th1.1\] and \[th1.2\]. Finally, in Section 5 we will prove our Theorem \[th1.4\], which gives the growth of the $A_p$ constant on estimates on the weighted $L^p$ spaces. Throughout, the letter “$c$" and “$C$" will denote (possibly different) constants that are independent of the essential variables. Notation and preliminaries ========================== Let us recall that, if $L$ is a self-adjoint positive definite operator acting on $L^2({\mathbb R}^n)$, then it admits a spectral resolution $$\begin{aligned} L=\int_0^{\infty} \lambda dE(\lambda).\end{aligned}$$ For every bounded Borel function $F:[0,\infty)\to{\mathbb{C}}$, by using the spectral theorem we can define the operator $$\begin{aligned} \label{e2.1} F(L):=\int_0^{\infty}F(\lambda)\,dE_{L}(\lambda).\end{aligned}$$ This is of course, bounded on $L^2({\mathbb R}^n)$. In particular, the operator $\cos(t\sqrt{L})$ is then well-defined and bounded on $L^2({\mathbb R}^{n})$. Moreover, it follows from Theorem 3 of [@CS] that if the corresponding heat kernels $p_{t}(x,y)$ of $e^{-tL}$ satisfy Gaussian bounds $(GE)$, then there exists a finite, positive constant $c_0$ with the property that the Schwartz kernel $K_{\cos(t\sqrt{L})}$ of $\cos(t\sqrt{L})$ satisfies $$\begin{aligned} \label{e2.2} \hspace{1cm} {\rm supp}K_{\cos(t\sqrt{L})}\subseteq \big\{(x,y)\in {\mathbb R}^{n}\times {\mathbb R}^{n}: |x-y|\leq c_0 t\big\}.\end{aligned}$$ See also [@CGT] and [@Si]. The precise value of $c_0$ is inessential and throughout the article we will choose $c_0=1$. By the Fourier inversion formula, whenever $F$ is an even, bounded, Borel function with its Fourier transform $\hat{F}\in L^1(\mathbb{R})$, we can write $F(\sqrt{L})$ in terms of $\cos(t\sqrt{L})$. More specifically, we have $$\begin{aligned} \label{e2.3} F(\sqrt{L})=(2\pi)^{-1}\int_{-\infty}^{\infty}{\hat F}(t)\cos(t\sqrt{L})\,dt,\end{aligned}$$ which, when combined with (\[e2.2\]), gives $$\begin{aligned} \label{e2.4} \hspace{1cm} K_{F(\sqrt{L})}(x,y)=(2\pi)^{-1}\int_{|t|\geq |x-y|}{\hat F}(t) K_{\cos(t\sqrt{L})}(x,y)\,dt,\qquad \forall\,x,y\in{\mathbb R}^{n}.\end{aligned}$$ The following result is useful for certain estimates later. \[le2.1\] Let $\varphi\in C^{\infty}_0(\mathbb R)$ be even, $\mbox{supp}\,\varphi \subset (-1, 1)$. Let $\Phi$ denote the Fourier transform of $\varphi$. Then for every $\kappa=0,1,2,\dots$, and for every $t>0$, the kernel $K_{(t^2L)^{\kappa}\Phi(t\sqrt{L})}(x,y)$ of the operator $(t^2L)^{\kappa}\Phi(t\sqrt{L})$ which was defined by the spectral theory, satisfies $$\begin{aligned} \label{e2.5} {\rm supp}\ \! K_{(t^2L)^{\kappa}\Phi(t\sqrt{L})} \subseteq \big\{(x,y)\in \RN\times \RN: |x-y|\leq t\big\}\end{aligned}$$ and $$\begin{aligned} \label{e2.6} \big|K_{(t^2L)^{\kappa}\Phi(t\sqrt{L})}(x,y)\big| \leq C \, t^{-n}\end{aligned}$$ for all $t>0$ and $x,y\in \RN.$ The proof of this lemma is standard (see [@SW] and [@HLMMY]). We give a brief argument of this proof for completeness and convenience for the reader. For every $\kappa=0,1,2,\dots$, we set $\Psi_{\kappa, t}(\zeta):=(t\zeta)^{2\kappa}\Phi(t\zeta)$. 
Using the definition of the Fourier transform, it can be verified that $$\widehat{\Psi_{\kappa,t}}(s)=(-1)^{\kappa} {1\over t}\psi_{\kappa}({s\over t}),$$ where we have set $\psi_{\kappa} (s)={d^{2\kappa}\over ds^{2\kappa}}\varphi(s)$. Observe that for every $\kappa=0,1,2,\dots$, the function $\Psi_{\kappa,t}\in{\mathcal S}(\mathbb R)$ is an even function. It follows from formula (\[e2.4\]) that $$\label{e2.7} K_{(t^2L)^{\kappa}\Phi(t\sqrt{L})}(x,y) =(-1)^{\kappa}{1\over 2\pi}\int_{|st|\geq |x-y|} {d^{2\kappa }\over ds^{2\kappa}}\varphi({s})K_{\cos(st\sqrt{L})}(x,y)\,ds.$$ Since $\varphi\in C^{\infty}_0(\mathbb R)$ and $\mbox{supp}\,\varphi \subset(-1, 1)$, (\[e2.5\]) follows readily from this. Note that for any $m\in {\Bbb N}$ and $t>0$, we have the relationship $$(I+tL)^{-m}={1\over (m-1)!} \int\limits_{0}^{\infty}e^{-tsL}e^{-s} s^{m-1} ds$$ and so when $m>n/4$, $$\begin{aligned} \big\| (I+tL)^{-m} \big\|_{L^2\rightarrow L^{\infty}}\leq {1\over (m-1)!} \int\limits_{0}^{\infty} \big\| e^{-tsL}\big\|_{L^2\rightarrow L^{\infty}} e^{-s} s^{m-1} ds\leq C t^{-n/4} \end{aligned}$$ for all $t>0.$ Now $ \big\| (I+tL)^{-m} \big\|_{L^1\rightarrow L^{2}}=\big\| (I+tL)^{-m} \big\|_{L^2\rightarrow L^{\infty}}\leq C t^{-n/4}$, and so $$\begin{aligned} \big\|(t^2L)^{\kappa}\Phi(t\sqrt{L})\big\|_{L^1\rightarrow L^{\infty}} \leq \big\| (I+t^2L)^{2m}(t^2L)^{\kappa}\Phi(t\sqrt{L}) \big\|_{L^2\rightarrow L^2} \big\| (I+t^2L)^{-m} \big\|^2_{L^2\rightarrow L^{\infty}}. \end{aligned}$$ The $L^2$ operator norm of the last term is equal to the $L^{\infty}$ norm of the function $(1+t^2|s|)^{2m} (t^2|s|)^{\kappa}\Phi(t\sqrt{|s|})$ which is uniformly bounded in $t>0$. This implies that (\[e2.6\]) holds. The proof of this lemma is concluded. \[le2.2\] Let $\varphi\in C^{\infty}_0(\mathbb R)$ be even function with $\int \varphi =1$, $\mbox{supp}\,\varphi \subset (-1/10, 1/10)$. Let $\Phi$ denote the Fourier transform of $\varphi$ and let $\Psi(s)=s^{2n+2}\Phi^3(s)$. Then there exists a positive constant $C=C({n,\Phi})$ such that the kernel $K_{\Psi(t\sqrt{L})(1-\Phi(r\sqrt{L}))}(x,y)$ of $ \Psi(t\sqrt{L})(1-\Phi(r\sqrt{L}))$ satisfies $$\begin{aligned} \label{e2.8} \big|K_{\Psi(t\sqrt{L})(1-\Phi(r\sqrt{L}))}(x,y)\big| \leq C\, { r\over t^{n+1}}\Big(1+{|x-y|^2\over t^2}\Big)^{-(n+1)/2}\end{aligned}$$ for all $t>0, r>0$ and $x, y\in \RN$. By rescaling, it is enough to show that $$\begin{aligned} \label{e2.9} |K_{\Psi(\sqrt{L})(1-\Phi(r\sqrt{L}))}(x,y)| \leq C{r}\big(1+{|x-y|^2}\big)^{-(n+1)/2}.\end{aligned}$$ Let us prove (\[e2.9\]). One writes $\Psi(s)=\Psi_1(s)\Phi^2(s)$, where $\Psi_1(s)=s^{2n+2}\Phi(s)$. Then we have $\Psi(\sqrt{L})=\Psi_1(\sqrt{L})\Phi^2(\sqrt{L})$. It follows from Lemma 2.1 that $|K_{\Phi(\sqrt{L}) }(z,y)|\leq C$ and $K_{\Phi(\sqrt{L}) }(z,y)=0$ when $|z-y|\geq 1.$ Note that if $|z-y|\leq 1$, then $\big(1+|x-y|\big) \leq 2(1+|x-z|)$. Hence, $$\begin{aligned} &&\hspace{-1cm}\Big|\big(1+|x-y|\big)^{n+1}K_{\Psi(\sqrt{L})(1-\Phi(r\sqrt{L}))}(x,y)\Big|\\ &&= \big(1+|x-y|\big)^{n+1}\Big|\int_{\RN} K_{\Psi_1(\sqrt{L})(1-\Phi(r\sqrt{L}))\Phi(\sqrt{L})}(x,z) K_{\Phi(\sqrt{L}) }(z,y) dz\Big|\\ &&\leq C\int_{\RN}\big|K_{\Psi_1(\sqrt{L})(1-\Phi(r\sqrt{L}))\Phi(\sqrt{L})}(x,z)\big|\big(1+|x-z| \big)^{n+1}dz.\end{aligned}$$ By symmetry, we will be done if we show that $$\begin{aligned} \label{e2.10} \int_{\RN}\big|K_{\Psi_1(\sqrt{L})(1-\Phi(r\sqrt{L}))\Phi(\sqrt{L})}(x,z)\big|\big(1+|x-z| \big)^{n+1}dx\leq Cr.\end{aligned}$$ Let $G_r(s)=\Psi_1(s)(1-\Phi(rs))$. 
Since $G_r(s)$ is an even function, apart from a $(2\pi)^{-1}$ factor we can write $$G_r(s)=\int^{+\infty}_{-\infty}\widehat{G_r}(\xi){\rm cos}(s\xi)d\xi,$$ and by (\[e2.3\]), $$\begin{aligned} \label{e2.11}\Psi_1(\sqrt{L})(1-\Phi(r\sqrt{L}))\Phi(\sqrt{L}) =\int^{+\infty}_{-\infty}\widehat{G_r}(\xi){\rm cos}(\xi\sqrt{L})\Phi(\sqrt{L})d\xi.\end{aligned}$$ By Lemma 2.1 again, it can be seen that $K_{{\rm cos}(\xi\sqrt{L})\Phi(\sqrt{L})}(x,z)=0$ if $|x-z|\geq 1+|\xi|.$ Using the unitarity of $\cos(\xi\sqrt{L})$, estimates (\[e2.5\]) and (\[e2.6\]), we have $$\begin{aligned} \int_{\RN} \big|K_{{\rm cos}(\xi\sqrt{L})\Phi(\sqrt{L})}(x,z)\big|dx &=& \int_{\RN} \big|\cos(\xi\sqrt{L}) \big(K_{\Phi(\sqrt{L})}(\cdot\ , z)\big)(x)\big|dx\nonumber\\ &\leq& (1+|\xi|)^{n/2} \big\| \cos(\xi\sqrt{L}) \big(K_{\Phi(\sqrt{L})}(\cdot\ , z)\big)\big\|_{L^2(\RN)} \nonumber\\ &\leq& (1+|\xi|)^{n/2} \big\| K_{\Phi(\sqrt{L})}(\cdot\ , z) \big\|_{L^2(\RN)} \nonumber\\ &\leq& (1+|\xi|)^{n/2}.\end{aligned}$$ This, in combination with (\[e2.11\]), gives $$\begin{aligned} \label{e2.12} {\rm LHS\ \ of \ \ } (\ref{e2.10}) \ &\leq& C\int^{+\infty}_{-\infty} |\widehat{G_r}(\xi)| \, (1+|\xi|)^{2n+1}\, d\xi\nonumber\\ &\leq& C\Big(\int^{+\infty}_{-\infty} |\widehat{G_r}(\xi)|^2 \, (1+|\xi|)^{4n+4}\, d\xi\Big)^{1/2}\nonumber\\ &\leq& C\big\|G_r\|_{W^{2n+2, \,2}(\RN)}.\end{aligned}$$ Next we estimate the term $\big\|G_r\|_{W^{2n+2, \,2}(\RN)}$. Note that $G_r(s)=\Psi_1(s)(1-\Phi(rs)), \Phi(0)=\widehat{\varphi}(0)=\int \varphi =1$ and $\Phi=\widehat{\varphi}\in \mathcal{S}(\RR),$ also $\Psi_1(s)=s^{2n+2}\Phi(s).$ We have $$\begin{aligned} \label{e2.13} \hspace{1cm} \|G_r\|_{L^2}^2= \int_{\RR} |\Psi_1(s)|^2|1-\Phi(rs)|^2ds \leq C \|\Phi'\|^2_{L^\infty} \int_{\RR} |\Psi_1(s)|^2\, (rs)^2\,ds \leq C r^2.\end{aligned}$$ Moreover, observe that for any $k\in{\mathbb N}$, $\big|{d^k\over ds^k}\big(1-\Phi(rs)\big)\big|=r^k|\Phi^{(k)}(rs)|\leq Crs^{1-k}.$ By Leibniz’s rule, we obtain $$\begin{aligned} \label{e2.14} \Big\|{d^{2n+2}\over ds^{2n+2}}G_r(s)\Big\|_{L^2}&=& \Big\|{d^{2n+2}\over ds^{2n+2}}\Big(\Psi_1(s)\big(1-\Phi(rs)\big)\Big)\Big\|_{L^2(\RN)}\nonumber\\ &\leq&\sum_{m+k=2n+2}\Big\|{d^{m}\over ds^{m}}\Big(s^{2n+2}\Phi\Big) {d^{k}\over ds^{k}}\Big(1-\Phi(rs)\Big)\Big\|_{L^2(\RN)}\nonumber\\ &\leq&Cr \sum_{m=0}^{2n+2}\Big\|s^{m-(2n+1)}{d^{m}\over ds^{m}}\Big(s^{2n+2}\Phi\Big) \Big\|_{L^2(\RN)}\nonumber\\ &\leq&Cr.\end{aligned}$$ From estimates (\[e2.13\]) and (\[e2.14\]), it follows that $\big\|G_r\|_{W^{2n+2, \,2}(\RN)}\leq Cr$. This, in combination with (\[e2.12\]), shows that the desired estimate (\[e2.10\]) holds, and concludes the proof of Lemma 2.2. Finally, for $s>0$, we define $${\Bbb F}(s):=\Big\{\psi:{\Bbb C}\to{\Bbb C}\ {\rm measurable}: \ \ |\psi(z)|\leq C {|z|^s\over ({1+|z|^{2s}})}\Big\}.$$ Then for any non-zero function $\psi\in {\Bbb F}(s)$, we have that $\{\int_0^{\infty}|{\psi}(t)|^2\frac{dt}{t}\}^{1/2}<\infty$. Denote by $\psi_t(z)=\psi(tz)$. 
It follows from the spectral theory in [@Yo] that for any $f\in L^2(\RN)$, $$\begin{aligned} \Big\{\int_0^{\infty}\|\psi(t\sqrt{L})f\|_{L^2(\RN)}^2{dt\over t}\Big\}^{1/2} &=&\Big\{\int_0^{\infty}\big\langle\,\overline{ \psi}(t\sqrt{L})\, \psi(t\sqrt{L})f, f\big\rangle {dt\over t}\Big\}^{1/2}\nonumber\\ &=&\Big\{\big\langle \int_0^{\infty}|\psi|^2(t\sqrt{L}) {dt\over t}f, f\big\rangle\Big\}^{1/2}\nonumber\\ &=& \kappa \|f\|_{L^2(\RN)}, \label{e2.15}\end{aligned}$$ where $\kappa=\big\{\int_0^{\infty}|{\psi}(t)|^2 {dt/t}\big\}^{1/2},$ an estimate which will be often used in the sequel. An auxiliary $\gL$ function =========================== The $\gL$ function ------------------ Let $\varphi\in C^{\infty}_0(\mathbb R)$ be even function with $\int \varphi =1$, $\mbox{supp}\,\varphi \subset (-1/10, 1/10)$. Let $\Phi$ denote the Fourier transform of $\varphi$ and let $\Psi(s)=s^{2n+2}\Phi^3(s)$ (see Lemma 2.2 above). We define the $\gL$ function by $$\begin{aligned} \label{e3.1} \hspace{1cm} \gL(f)(x)=\Bigg(\iint_{\RR_{+}^{n+1}} \bigg({t\over t+|x-y|}\bigg)^{n\mu} |\Psi(t\sqrt{L})f(y)|^2{dydt\over t^{n+1}}\Bigg)^{1/2}, \ \ \mu>1.\end{aligned}$$ In this section, we will show that the area integrals $s_p$, $s_h $, $s_H$ and $ S_H$ are all controlled by $\gL$ pointwise. To achieve this, we need some results on the kernel estimates of the semigroup. Firstly, we note that the Gaussian upper bounds for $p_t(x,y)$ are further inherited by the time derivatives of $p_{t}(x,y)$. That is, for each $k\in{\mathbb N}$, there exist two positive constants $c_k$ and $C_k$ such that $$\begin{aligned} \label{e3.0} \Big|{\partial^k \over\partial t^k} p_{t}(x,y) \Big|\leq \frac{C_k}{ t^{n /2+k} } \exp\Big(-{|x-y|^2\over c_k\,t}\Big)\end{aligned}$$ for all $t>0$, and $x, y\in {\mathbb R}^{n}$. For the proof of (\[e3.0\]), see [@Da] and [@Ou], Theorem 6.17. Note that in the absence of regularity on space variables of $p_t(x,y)$, estimate (\[e3.0\]) plays an important role in our theory. \[le3.1\] Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels $p_t(x,y)$ of the semigroup $e^{-tL}$ satisfy Gaussian bounds $(GE)$. Then for every $\kappa=0,1, ..., $ the operator $ (t\sqrt{L})^{2\kappa} e^{-t\sqrt{L}}$ satisfies $$\begin{aligned} \label{e3.00}\hspace{1cm} \big|K_{(t\sqrt{L})^{2\kappa} e^{-t\sqrt{L}}}(x,y)\big|\leq C_\kappa t^{-n}\Big(1+{ |x-y|\over t}\Big)^{-(n+2\kappa+1)}, \ \ \forall t>0\end{aligned}$$ for almost every $x,y\in \RN.$ The proof of (\[e3.00\]) is simple. Indeed, the subordination formula $$e^{-t\sqrt{L}}={1\over \sqrt{\pi}}\int^\infty_0e^{-u}u^{-1/2}e^{-{t^2\over 4u}L}du$$ allows us to estimate $$\begin{aligned} \big|K_{(t\sqrt{L})^{2\kappa} e^{-t\sqrt{L}}}(x,y)\big|&\leq& C_\kappa\int_0^{\infty}{e^{-u}\over\sqrt{u}}\Big({t^2\over u}\Big)^{-n/2}\exp \Big(-{u|x-y|^2\over c t^2}\Big)u^\kappa du\\ &\leq&C_\kappa t^{-n}\int_0^{\infty}e^{-u}u^{n/2+\kappa-1/2} \exp \Big(-{u|x-y|^2\over c t^2}\Big)\, du\\ &\leq&C_\kappa t^{-n}\Big(1+{|x-y| \over t }\Big)^{-(n+2\kappa+1)}\end{aligned}$$ for every $t>0$ and almost every $x,y\in \RN$. \[le3.2\] Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels $p_t(x,y)$ of the semigroup $e^{-tL}$ satisfy $(GE)$ and ($G$). 
Then for every $\kappa=0,1, ..., $ the operator $ t^{2\kappa+1}\nabla (L^\kappa e^{-t^2L} )$ satisfies $$\begin{aligned} \Big|K_{t^{2\kappa+1}\nabla (L^\kappa e^{-t^2L} )}(x,y)\Big| &\leq& C t^{-n} \exp\Big(-{|x-y|^2\over c \,t^2}\Big), \ \ \ \forall t>0 \end{aligned}$$ for almost every $x,y\in \RN.$ Note that $ t^{2\kappa+1}\nabla (L^\kappa e^{-t^2L} )= t\nabla e^{-{t^2\over 2}L} \circ (t^2L)^\kappa e^{-{t^2\over 2}L}.$ Using (\[e3.0\]) and the pointwise gradient estimate ($G$) of heat kernel $p_t(x,y)$, we have $$\begin{aligned} \Big|K_{t^{2\kappa+1}\nabla (L^\kappa e^{-t^2L} )}(x,y)\Big|&=& \Big| \int_{\RN} K_{t\nabla e^{-{t^2\over 2}L}}(x,z) K_{(t^2L)^\kappa e^{-{t^2\over 2}L}} (z,y)dz\Big|\nonumber\\ &\leq& C t^{-2n}\int_{\RN} \exp\Big(-{|x-z|^2\over c \,t^2}\Big) \exp\Big(-{|z-y|^2\over c \,t^2}\Big) dz\nonumber\\ &\leq& C t^{-n} \exp\Big(-{|x-y|^2\over c \,t^2}\Big) \end{aligned}$$ for every $t>0$ and almost every $x,y\in \RN$. Now we start to prove the following Propositions \[prop3.3\] and  \[prop3.4\]. \[prop3.3\] Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels satisfy condition $(GE)$. Then for $f\in {\mathcal S}(\RN),$ there exists a constant $C=C_{n,\mu,\Psi}$ such that the area integral $s_p$ satisfies the pointwise estimate: $$\begin{aligned} \label{e3.4} s_pf(x) \leq C\gL(f)(x).\end{aligned}$$ Estimate (\[e3.4\]) also holds for the area integral $s_h$. \[prop3.4\] Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels satisfy conditions $(GE)$ and $(G)$. Then for $f\in {\mathcal S}(\RN),$ there exists a constant $C=C_{n,\mu,\Psi}$ such that the area integral $S_P$ satisfies the pointwise estimate: $$\begin{aligned} \label{e3.2} S_Pf(x) \leq C\gL(f)(x).\end{aligned}$$ Estimate (\[e3.2\]) also holds for the area integral $S_H$. \[Proofs of Propositions \[prop3.3\] and  \[prop3.4\]\]  Let us begin to prove (\[e3.2\]). By the spectral theory ([@Yo]), for every $f\in {\mathcal S}(\RN)$ and every $\kappa \in{\mathbb N}$, $$\begin{aligned} f =C_\Psi\int^\infty_0 (t^2L)^{\kappa}e^{-t^2{L}}\Psi(t\sqrt{L}) f {dt\over t}\end{aligned}$$ with $C^{-1}_{\Psi}=\int^\infty_0 t^{2\kappa} e^{-t^2} \Psi(t ){dt/t}$, and the integral converges in $L^2(\RN)$. Recall the subordination formula: $$e^{-t\sqrt{L}} ={1\over \sqrt{\pi}}\int^\infty_0 e^{-u} {u}^{-1/2} e^{-{t^2\over 4u}L} du.$$ One writes $$\begin{aligned} \label{e3.666} \hspace{-1cm} s\nabla e^{-s\SL}f(y) &=& {1\over \sqrt{\pi}}\int^\infty_0e^{-u} {u}^{-1/2}s\nabla e^{-{s^2\over 4u}L}f(y)du \nonumber\\ &=&{ C_{\Psi} \over \sqrt{\pi}}\int^{\infty}_{0}\int^\infty_0e^{-u} {u}^{-1/2} st^{2\kappa}\nabla\big(L^\kappa e^{-({s^2\over 4u}+t^2)L}\big)\Psi(t\sqrt{L})f (y){dt\, du\over t}.\end{aligned}$$ Fix $\kappa=[{n(\mu-1)\over 2}]+1$. 
Using Lemma \[le3.2\] and the Hölder inequality, we can estimate (\[e3.666\]) as follows: $$\begin{aligned} &&\hspace{-1cm}|s\nabla e^{-s\SL}f(y)|\\ &\leq&C \int^{\infty}_0\int^{\infty}_0\int_{\RN}e^{-u}u^{-1/2}st^{2\kappa}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y-z|^2/c({s^2\over 4u}+{t^2 })} |\Psi(t\SL)f(z)|{dzdtdu\over t}\\ &\leq&C A\cdot B,\end{aligned}$$ where $$\begin{aligned} A^2 &= & \int^{\infty}_0\int^{\infty}_0\int_{\RN}|\Psi(t\SL)f(z)|^2e^{-u} {u}^{-1/2}s t^{2\kappa} \Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y-z|^2/c({s^2\over 4u}+{t^2 })} {dzdtdu\over t} \end{aligned}$$ and $$\begin{aligned} B^2 &=& \int^{\infty}_0\int^{\infty}_0\int_{\RN}e^{-u}u^{-1/2}st^{2\kappa}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y-z|^2/c({s^2\over 4u}+{t^2 })} {dzdtdu\over t}\\ &=&C \int^{\infty}_0\int^{\infty}_0\int_{0}^{\infty}e^{-u}u^{-1/2}st^{2\kappa}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa}e^{-r^2 } r^{n-1}{drdtdu\over t}\\ &\leq& C \int^{\infty}_0\int^{\infty}_0 e^{-u} \, v^{2\kappa}\Big(1+{v^2 }\Big)^{-1/2-\kappa} { du dv\over v} \\ &\leq& C.\end{aligned}$$ Note that in the first equality of the above term $B$, we have changed variables $|y-z|\rightarrow r({s^2\over 4u}+{t^2 })^{1/2}$ and $t\rightarrow v ({s^2/u})^{1/2}$. Hence, $$\begin{aligned} &&\hspace{-0.3cm}|s\nabla e^{-s\SL}f(y)|^2\\ &&\leq C \int^{\infty}_0\int^{\infty}_0\int_{\RN}|\Psi(t\SL)f(z)|^2e^{-u}u^{-1/2}st^{2\kappa}\Big({s^2\over 4u} +{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y-z|^2/c({s^2\over 4u}+{t^2 })} {dzdtdu\over t}.\end{aligned}$$ Therefore, we put it into the definition of $S_p$ to obtain $$\begin{aligned} &&\hspace{-0.6cm}S^2_P(f)(x) = \int^{\infty}_0\int_{|x-y|<s}|s\nabla e^{-s\SL}f(y)|^2{dyds\over s^{n+1}}\\ &&\leq C\iint_{\RR^{n+1}_{+}}|\Psi(t\SL)f(z)|^2\\ &&\hspace{0.2cm}\times\Bigg(\int^{\infty}_0\int^{\infty}_0 \int_{|x-y|<s}e^{-u}u^{-1/2}st^{2\kappa+n}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y-z|^2/c({s^2\over 4u}+{t^2 })} {dydsdu\over s^{n+1}}\Bigg) {dzdt\over t^{n+1}}.\end{aligned}$$ We will be done if we show that $$\begin{aligned} \label{e3.3} && \hspace{-1.2cm} \int^{\infty}_0\int^{\infty}_0\int_{|v-y|<s}e^{-u}u^{-1/2}st^{2\kappa+n}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|y|^2/c({s^2\over 4u}+{t^2 })} {dydsdu\over s^{n+1}}\\ &\leq& C\Big({t\over t+|v|}\Big)^{n\mu},\nonumber\end{aligned}$$ where we set $x-z=v,$ and we will prove estimate (\[e3.3\]) by considering the following two cases. [*Case 1.*]{} $|v|\leq t.$  In this case, it is easy to show that $$\begin{aligned} {\rm LHS\ of \ (\ref{e3.3})} &&\leq C \int^{\infty}_0\int^{\infty}_0 e^{-u}u^{-1/2}st^{2\kappa+n}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2} { dsdu\over s }\\ &&\leq C \int^{\infty}_0\int^{\infty}_0 e^{-u}u^{-1/2}\sqrt{u}s\big({s^2}+1\big)^{-1/2-\kappa-n/2} { dsdu\over s }\\ &&\leq C.\end{aligned}$$ But $|v|\leq t$, so $$\Big({t\over t+|v|}\Big)^{n\mu}\geq C_{n,\mu}.$$ This implies that (\[e3.3\]) holds when $|v|\leq t.$ [*Case 2.*]{} $|v|> t$. 
In this case, we break the integral into two pieces: $$\begin{aligned} \int^\infty_0\int^{|v|/2}_0\int_{|v-y|<s}\cdots +\int^\infty_0\int^{\infty}_{|v|/2}\int_{|v-y|<s}\cdots =:I+{\it II}.\end{aligned}$$ For the first term, note that $|y|\geq |v|-|v-y|>|v|/2.$ This yields $$\begin{aligned} I&\leq&\int^{\infty}_0\int^{|v|/2}_0\int_{|v-y|<s}e^{-u}u^{-1/2}st^{2\kappa+n} \Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2}e^{-|v|^2/c({s^2\over 4u}+{t^2 })} {dydsdu\over s^{n+1}}\\ &\leq&C\int^{\infty}_0\int^{\infty}_0e^{-u}u^{-1/2}st^{2\kappa+n} \Big({s^2\over 4u}+{t^2}\Big)^{-1/2-\kappa-n/2} \Big({s^2\over 4u}+{t^2 }\Big)^{n\mu/2}/|v|^{n\mu} { dsdu\over s }\\ &\leq&C\Big({t\over |v|}\Big)^{n\mu}\int^{\infty}_0\int^{\infty}_0 e^{-u}u^{-1/2}st^{2\kappa+n-n\mu}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa-n/2+n\mu/2} { dsdu\over s }\\ &\leq&C\Big({t\over |v|}\Big)^{n\mu}\int^{\infty}_0\int^{\infty}_0e^{-u} s \big({s^2+1}\big)^{-1/2-\kappa-n/2+n\mu/2} { dsdu\over s }\\ &\leq&C\Big({t\over |v|}\Big)^{n\mu},\end{aligned}$$ where we used the condition $\kappa=[{n(\mu-1)\over2}]+1$ in the last inequality. Since $|v|>t$, it follows that $I\leq C_{n,\mu}\Big({t \over t+|v|}\Big)^{n\mu}.$ For the term $\it{II}$, we have $$\begin{aligned} \it{II} &\leq&C\int^{\infty}_0\int_{|v|/2}^{\infty}\int_{0}^{\infty}e^{-u} u^{-1/2}st^{2\kappa+n}\big({s^2\over 4u}+{t^2 }\big)^{-1/2-\kappa-n/2}e^{-r^2/ ({s^2\over 4u}+{t^2 })}r^{n-1} {drdsdu\over s^{n+1}}\\ &\leq&C\int^{\infty}_0\int_{|v|/2}^{\infty} e^{-u}u^{-1/2}st^{2\kappa+n}\Big({s^2\over 4u}+{t^2 }\Big)^{-1/2-\kappa} { dsdu\over s^{n+1}}\\ &\leq&C\int^{\infty}_0\int_{|v|/(4\sqrt{u}t)}^{\infty} e^{-u}u^{-1/2}(\sqrt{u})^{1-n} \Big({s^2+1}\Big)^{-1/2-\kappa} { dsdu\over s^{n}}\\ &\leq&C\int^{\infty}_0 e^{-u}u^{-1/2}(\sqrt{u})^{1-n} \Big({\sqrt{u}\, t\over |v|}\Big)^{2\kappa+n} du\\&\leq&C\Big({t\over |v|}\Big)^{2\kappa+n}\\ &\leq& C \Big({t\over t+|v|}\Big)^{n\mu},\end{aligned}$$ since $\kappa=[{n(\mu-1)\over2}]+1$ and $|v|>t$. Combining [*Cases 1*]{} and [*2*]{}, we obtain estimate (\[e3.3\]), and the proof of estimate (\[e3.2\]) is complete. The same argument gives estimate (\[e3.2\]) for the area integral $S_H$, and this completes the proof of Proposition \[prop3.4\]. For the area functions $s_p$ and $s_h$, Proposition \[prop3.3\] follows by the same argument, using either estimate (\[e3.0\]) or Lemma \[le3.1\] in place of Lemma \[le3.2\] in the proof of estimate (\[e3.2\]); we omit the details. Weighted $L^2$ estimate of $\gL$. --------------------------------- \[th3.5\] Let $\mu>1$. Then there exists a constant $C=C_{n,\mu,\Phi}$ such that for all $w\geq 0$ in $L_{loc}^{1}(\RN)$ and all $f\in {\mathcal S}(\RN)$, we have $$\begin{aligned} \label{e3.6} \int_{\RN}\gL (f)^2wdx\leq C\int_{\RN}|f|^2Mwdx.\end{aligned}$$ The proof essentially follows the arguments of [@CWW] and [@CW] for the classical area function.
Note that by Lemma \[le2.1\], the kernel $K_{\Psi(t\sqrt{L})}$ of the operator $\Psi(t\sqrt{L})$ satisfies supp $K_{\Psi(t\sqrt{L})}\subseteq \big\{ (x,y)\in \RN\times \RN: |x-y|\leq t\big\}.$ By (\[e3.1\]), one writes $$\begin{aligned} \label{e3.7}\hspace{1cm} \int_{\RN}\gL (f)^2wdx &=& \int_{\RN} \int_{\RR_{+}^{n+1}} \bigg({t\over t+|x-y|}\bigg)^{n\mu} |\Psi(t\sqrt{L})f(y)|^2{dydt\over t^{n+1}} dx\nonumber\\ &=& \int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})f(y)|^2\bigg({1\over t^n}\int_{\RN}w(x) \Big({t\over t+|x-y|}\Big)^{n\mu}dx\bigg){dydt\over t}.\end{aligned}$$ For $k$ an integer, set $$A_k=\Big\{(y,t):\ 2^{k-1}<{1\over t^n}\int_{\RN}w(x) \Big({t\over t+|x-y|}\Big)^{n\mu}dx\leq 2^k\Big\}.$$ Then $$\begin{aligned} \label{e3.8} {\rm RHS \ of \ (\ref{e3.7})} \ \leq \sum_{k\in \mathbb{Z}}2^k \int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})f(y)|^2\chi_{A_k}(y,t){dydt\over t}.\end{aligned}$$ We note that if $(y,t)\in A_k$, then since $\mu>1$, $$2^{k-1}\leq {1\over t^n}\int_{\RN}w(x) \Big({t\over t+|x-y|}\Big)^{n\mu}dx\leq CMw(y).$$ Now if $|y-z|<t$, then $t+|x-y|\approx t+|x-z|$. Thus if $|y-z|<t$ and $(y,t)\in A_k,$ $$2^{k-1}\leq {C\over t^n}\int_{\RN}w(x) \Big({t\over t+|x-z|}\Big)^{n\mu}dx\leq CMw(z).$$ In particular, if $(y,t)\in A_k$ and $|y-z|<t$, then $z\in E_k=\{z:\ Mw(z)\geq C2^k\}$. Now since ${\rm supp}\ \! K_{ \Psi(t\sqrt{L})}(y,z) \subseteq \big\{(y,z)\in \RN\times \RN: |y-z|\leq t\big\}$, for $(y,t)\in A_k$, $$\Psi(t\sqrt{L})f(y)=\int_{|y-z|<t}K_{ \Psi(t\sqrt{L})}(y,z)f(z)dz =\int_{\RN}K_{ \Psi(t\sqrt{L})}(y,z)f(z)\chi_{E_k}(z)dz.$$ Therefore, $$\begin{aligned} {\rm RHS \ of \ (\ref{e3.7})} \ &\leq& \sum_{k\in \mathbb{Z}}2^k \int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})(f\chi_{E_k})(y)|^2\chi_{A_k}(y,t){dydt\over t}\\ &\leq& \sum_{k\in \mathbb{Z}}2^k \int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})(f\chi_{E_k})(y)|^2{dydt\over t}\\ &=& \sum_{k\in \mathbb{Z}}2^k \int_0^{\infty}\|\Psi(t\sqrt{L})(f\chi_{E_k})\|_{L^2(\RN)}^2{ dt\over t}\\ &=& C_{\Psi} \sum_{k\in \mathbb{Z}}2^k \| f\chi_{E_k} \|_{L^2(\RN)}^{2}\end{aligned}$$ with $C_{\Psi}=\int^\infty_0|\Psi(t)|^2{dt/t}<\infty,$ and the last inequality follows from the spectral theory (see [@Yo]). By interchanging the order of summation and integration, we have $$\begin{aligned} \int_{\RN}\gL (f)^2wdx&\leq& C \sum_{k\in \mathbb{Z}}2^k\int_{\RN}|f|^2\chi_{E_k}dx\\ &\leq& C\int_{\RN}|f|^2\Big(\sum_{k\in \mathbb{Z}}2^k\chi_{E_k}\Big)dx\\ &\leq& C \int_{\RN}|f|^2Mwdx.\end{aligned}$$ This concludes the proof of the theorem. As a consequence of Propositions \[prop3.3\],  \[prop3.4\] and Theorem \[th3.5\], we have the following analogy for the area function of the result of Chang, Wilson and Wolff. \[c3.3\] Let $T$ be of the area integrals $s_h$, $s_p$, $S_{P}$ and $S_{H}.$ Under assumptions of Theorems \[th1.1\] and  \[th1.2\], there exists a constant $C$ such that for all $w\geq 0$ in $L_{loc}^{1}(\RN)$ and all $f\in {\mathcal S}(\RN)$, $$\begin{aligned} \int_{\RN}|Tf|^2wdx\leq C\int_{\RN}|f|^2Mwdx.\end{aligned}$$ Proof of Theorem 1.3 -------------------- Let $T$ be of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}.$ For $w\in A_1$, we have $Mw(x)\leq \|w\|_{A_1}w(x)$ for a.e. $x\in \RN$. According to Corollary \[c3.3\], $$\begin{aligned} \int_{\RN}T(f)^2wdx\leq C\int_{\RN}|f|^2Mwdx\leq C\|w\|_{A_1}\int_{\RN}|f|^2wdx.\end{aligned}$$ This implies (\[e1.8\]) holds. For (\[e1.9\]), we follow the method of Cordoba and Rubio de Francia (see pages 356-357, [@FP]). 
Let $p>2$ and take $f\in L^p(\RN).$ Then from duality, we know that there exist some $\varphi\in L^{(p/2)'}(\RN),$ with $\varphi\geq0,$ $\|\varphi\|_{L^{(p/2)'}(\RN)}=1$, such that $$\|Tf\|^2_{L^p(\RN)}\leq \int_{\RN}|Tf|^2\varphi dx.$$ Set $$v=\varphi +{M\varphi\over 2\|M\|_{L^{(p/2)'}(\RN)}}+{M^2\varphi\over (2\|M\|_{L^{(p/2)'}(\RN)})^2}+\cdots$$ following Rubio de Francia’s familiar method (Here $\|M\|_{L^{(p/2)'}(\RN)}$ denotes the operator norm of the Hardy-Littlewood maximal operator on $L^{(p/2)'}(\RN)$). Then $\|v\|_{L^{(p/2)'}(\RN)}\leq 2$ and $\|v\|_{A_1}\leq 2\|M\|_{L^{(p/2)'}(\RN)}\equiv O(p)$ as $p\to \infty.$ Therefore $$\begin{aligned} \|Tf\|^2_{L^p(\RN)}&\leq& \int_{\RN}|Tf|^2\varphi dx\\ &\leq&\int_{\RN}|Tf|^2v dx\\ &\leq&C\|v\|_{A_1}\int_{\RN}|f|^2v dx\\ &\leq&Cp\|f\|^2_{L^p(\RN)}.\end{aligned}$$ This proves (\[e1.9\]), and then the proof of this theorem is complete. $\Box$ Note that in Theorem \[e1.3\], when $L=-\Delta$ is the Laplacian on $\RN$, it is well known that estimate (\[e1.9\]) of the classical area integral on $L^p(\RN)$ is sharp, in general (see, e.g., [@FP]). Proofs of Theorems 1.1 and 1.2 ============================== Note that from Propositions \[prop3.3\] and  \[prop3.4\], the area functions $S_H, S_P, s_H$ and $s_p$ are all controlled by the $\gL$ function. In order to prove Theorems 1.1 and 1.2, it suffices to show the following result. \[th4.1\]   Let $L$ be a non-negative self-adjoint operator such that the corresponding heat kernels satisfy Gaussian bounds $(GE)$. Let $\mu>3$. If $w\geq 0$, $w\in L_{\rm loc}^{1}(\RN)$ and $f\in {\mathcal S}(\RN)$, then $$\begin{aligned} \hspace{-2.5cm}&{\rm (a)}& \hspace{0.1cm}\int_{\{\gL (f)>\lambda\}} wdx\leq {c(n)\over \lambda}\int_{\RN}|f| Mwdx,\ \ \ \lambda>0,\\ \hspace{-2.5cm} &{\rm (b)}& \hspace{0.1cm} \int_{\RN}\gL (f)^pw\, dx\leq c(n,p)\int_{\RN}|f|^pMw\, dx,\ \ \ 1<p\leq 2, \\ \hspace{-2.5cm}&{\rm (c)}& \hspace{0.1cm} \int_{\RN} \gL (f)^pwdx\leq c(n,p)\int_{\RN}|f|^p(Mw)^{p/2}w^{-(p/2-1)}dx, \ \ \ 2<p<\infty.\end{aligned}$$ Weak-type $(1,1)$ estimate -------------------------- We first state a Whitney decomposition. For its proof, we refer to Chapter 6, [@St]. \[le4.2\] Let $F$ be a non-empty closed set in $\RN$. Then its complement $\Omega$ is the union of a sequence of cubes $Q_k$, whose sides are parallel to the axes, whose interiors are mutually disjoint, and whose diameters are approximately proportional to their distances from $F$. More explicitly: \(i) $\Omega=\RN\setminus F=\bigcup\limits_{k=1}^{\infty}Q_k.$ \(ii) $Q_j\bigcap Q_k=\varnothing$ if $j\neq k$. \(iii) There exist two constants $c_1, c_2 > 0$, (we can take $c_1 = 1$, and $c_2 = 4$), so that $$c_1 {\rm diam}(Q_k)\leq {\rm dist}(Q_k,\ F)\leq c_2 {\rm diam}(Q_k).$$ Note that if $\Omega$ is an open set with $\Omega=\bigcup\limits_{k=1}^{\infty}Q_k$ a Whitney decomposition, then for every $\varepsilon:\ 0<\varepsilon<1/4,$ there exists $N\in \mathbb{N}$ such that no point in $\Omega$ belongs to more than $N$ of the cubes $Q_{k}^{\ast}$, where $Q_{k}^{\ast}=(1+\varepsilon)Q_k.$ Since $g^{\ast}_{\mu',\Psi}(f)\leq \gL(f)$ whenever $\mu'\geq \mu$, it is enough to prove ($a$) of Theorem \[th4.1\] for $3<\mu<4.$ Since $\gL$ is subadditive, we may assume that $f\geq0$ in the proof (if not we only need to consider the positive part and the negative part of $f$). For $\lambda>0,$ we set $\Omega=\{x\in\RN:\ Mf(x)>\lambda\}$. 
By [@FS] it follows that $$\begin{aligned} \label{e4.2} \int_{\Omega}wdx\leq {C\over \lambda}\int_{\RN}|f|Mwdx.\end{aligned}$$ Let $\Omega=\cup Q_j$ be a Whitney decomposition, and define $$\begin{aligned} h(x)&=&\left\{ \begin{array}{ll} f(x), & x\notin \Omega \\ [12pt] {1\over |Q_j|}\int_{Q_j}f(x)dx, & x\in Q_j \end{array} \right.\\[12pt] b_j(x)&=&\left\{ \begin{array}{ll} f(x)-{1\over |Q_j|}\int_{Q_j}f(x)dx, & x\in Q_j \\ [12pt] 0, & x\notin Q_j. \end{array} \right.\end{aligned}$$ Then $f=h+\sum_jb_j$, and we set $b=\sum_jb_j.$ As in [@St], we have $|h|\leq C\lambda$ a.e. By (\[e4.2\]), it suffices to show $$\begin{aligned} \label{e4.3} w\{x\notin\Omega:\ \gL(f)(x)>\lambda\}\leq {C\over \lambda}\int_{\RN}|f|Mwdx.\end{aligned}$$ By Chebychev’s inequality and Theorem \[th3.5\], $$\begin{aligned} w\{x\notin\Omega:\ \gL(h)(x)>\lambda\}&\leq& {1\over \lambda^2} \int_{\RN}\gL(h)^2(w\chi_{\RN\setminus\Omega})dx\\ &\leq& {C\over \lambda^2} \int_{\RN}|h|^2 M(w\chi_{\RN\setminus\Omega})dx\\ &\leq&{C\over \lambda } \int_{\RN}|h| M(w\chi_{\RN\setminus\Omega})dx\end{aligned}$$ since $|h|\leq C\lambda$ a.e. By definition of $h$, the last expression is at most $$\begin{aligned} \label{e4.4} {C\over \lambda } \int_{\RN}|f| Mwdx +\sum_j{C\over \lambda } \int_{Q_j}\Big({1\over |Q_j|}\int_{Q_j}|f(z)|dz\Big) M(w\chi_{\RN\setminus\Omega})(x)dx.\end{aligned}$$ From the property (iii) of Lemma \[le4.2\], we know that for $x,z\in Q_j$ there is a constant $C$ depending only on $n$ so that $M(w\chi_{\RN\setminus\Omega})(x)\leq CM(w\chi_{\RN\setminus\Omega})(z)$. Thus (\[e4.4\]) is less than $${C\over \lambda } \int_{\RN}|f| Mwdx +\sum_j{C\over \lambda } \int_{Q_j}\Big({1\over |Q_j|}\int_{Q_j}|f(z)|Mw(z)dz\Big) dx\leq {C\over \lambda } \int_{\RN}|f| Mwdx.$$ This gives $$w\{x\notin\Omega:\ \gL(h)(x)>\lambda\}\leq{C\over \lambda } \int_{\RN}|f| Mwdx.$$ Therefore, estimate (\[e4.3\]) will follow if we show that $$\begin{aligned} \label{eb} w\{x\notin\Omega:\ \gL(b)(x)>\lambda\}\leq{C\over \lambda } \int_{\RN}|f| Mwdx.\end{aligned}$$ To prove (\[eb\]), we follow an idea of [@DM] to decompose $b=\sum_jb_j=\sum_j\Phi_j(\sqrt{L})b_j + \sum_j\big(1-\Phi_j(\sqrt{L})\big)b_j,$ where $\Phi_j(\sqrt{L})=\Phi\Big({\ell(Q_j)\over32}\sqrt{L}\Big),$ $\Phi$ is the function as in Lemma \[le2.2\] and $\ell(Q_j)$ is the side length of the cube $Q_j.$ See also [@CD]. So, it reduces to show that $$\begin{aligned} \label{ei1} w\{x\notin\Omega:\ \gL\Big(\sum_j\Phi_j(\sqrt{L})b_j\Big)(x)>\lambda\}\leq{C\over \lambda } \int_{\RN}|f| Mwdx\end{aligned}$$ and $$\begin{aligned} \label{ei2} w\{x\notin\Omega:\ \gL\Big(\sum_j\big(1-\Phi_j(\sqrt{L})\big)b_j\Big)(x)>\lambda\}\leq{C\over \lambda } \int_{\RN}|f| Mwdx.\end{aligned}$$ By Chebychev’s inequality and Theorem \[th3.5\] again, we have $$\begin{aligned} {\rm LHS\ \ of \ \ (\ref{ei1})}\ &\leq&{C\over \lambda^2}\int_{\RN}\Big|\gL\Big(\sum_j\Phi_j(\sqrt{L})b_j\Big)\Big|^2 (w\chi_{\RN\setminus\Omega})dx\\ &\leq&{C\over \lambda^2}\int_{\RN}\Big| \sum_j\Phi_j(\sqrt{L})b_j \Big|^2 M(w\chi_{\RN\setminus\Omega})dx.\end{aligned}$$ Note that $\Phi_j(\sqrt{L})=\Phi\Big({\ell(Q_j)\over32}\sqrt{L}\Big),$ it follows from Lemma \[le2.1\] that ${\rm supp}\ \Phi_j(\sqrt{L})b_j\subset {{17 }Q_j/16} $ and $\big|K_{\Phi_j(\sqrt{L})}(x,y)\big|\leq C/\ell(Q_j)$. 
Hence, the above inequality is at most $$\begin{aligned} {C\over \lambda^2}\sum_j \int_{\RN}\Big| \Phi_j(\sqrt{L})b_j \Big|^2 M(w\chi_{\RN\setminus\Omega})dx.\end{aligned}$$ This, together with Lemma \[le2.1\] and the definition of $b$, yields $$\begin{aligned} {\rm LHS\ \ of \ \ (\ref{ei1})}\ &\leq& {C\over \lambda^2}\sum_j\int_{{17 }Q_j/16}\Big({\ell(Q_j)^{-n}}\int_{Q_j} |b(y)|dy \Big)^2 M(w\chi_{\RN\setminus\Omega})(x)dx\\ &\leq& {C\over \lambda^2}\sum_j\int_{{17 }Q_j/16}\Big( {1\over|Q_j|}\int_{Q_j}|f(y)| dy \Big)^2 M(w\chi_{\RN\setminus\Omega})(x)dx\\ &\leq& {C\over \lambda }\sum_j\int_{{17 }Q_j/16}\Big( {1\over|Q_j|}\int_{ Q_j}|f(y)| dy \Big) M(w\chi_{\RN\setminus\Omega})(x)dx\\ &\leq& {C\over \lambda }\sum_j{1\over|Q_j|}\int_{{17 }Q_j/16} \int_{ Q_j}|f(y)|M(w\chi_{\RN\setminus\Omega})(y) dy dx\\ &\leq&{C\over \lambda }\int_{\RN}|f |Mwdy.\end{aligned}$$ This proves the desired estimate (\[ei1\]). Next we turn to estimate (\[ei2\]). It suffices to show that $$\sum_j\int_{\RN\setminus\Omega}\gL\Big( \big(1-\Phi_j(\sqrt{L})\big)b_j\Big)wdx \leq C\int_{\RN}|f|Mwdx.$$ Further, the above inequality reduces to prove the following result: $$\begin{aligned} \label{ei22} \int_{\RN\setminus\Omega}\gL\Big( \big(1-\Phi_j(\sqrt{L})\big)b_j\Big)wdx \leq C\int_{Q_j}|f|Mwdx.\end{aligned}$$ Let $x_j$ denote the center of $Q_j$. Let us estimate $\Psi(t\sqrt{L})\big(1-\Phi_j(\sqrt{L})\big)b_j(y)=:\Psi_{jt}(\sqrt{L})b_j(y)$ by considering two cases: $t\leq \ell(Q_j)/4$ and $t>\ell(Q_j)/4$. [*Case 1. $t\leq \ell(Q_j)/4$*]{}.  In this case, we use Lemma \[le2.1\] to obtain $$\begin{aligned} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big|&\leq& |\Psi(t\sqrt{L})b_j(y)|+\Big|\Psi(t\sqrt{L})\Phi_j(\sqrt{L})b_j(y)\Big|\\ &\leq& \Big|\int_{Q_j}K_{\Psi(t\sqrt{L})}(y,z)b(z)dz\Big|\\ &&+\Big|\int_{{17\over16}Q_j} K_{\Psi(t\sqrt{L})}(y,z)\Big(\int_{Q_j}K_{\Phi_j(\sqrt{L})}(z,x)b(x)dx\Big)dz\Big|\\ &\leq& C\|b_j\|_{ 1}t^{-n}.\end{aligned}$$ [*Case 2. $t> \ell(Q_j)/4$*]{}. Using Lemma \[le2.2\], we have $$\begin{aligned} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big| & \leq&\int_{\RN}\Big|K_{\Psi(t\sqrt{L})\big(1-\Phi_j(\sqrt{L})\big)}(y,z)\Big||b_j(z)|dz\\ & \leq& C \|b_j\|_{ 1}\ell(Q_j)t^{-n-1}.\end{aligned}$$ From the property (iii) of Lemma \[le4.2\], we know that if $x\notin\Omega,$ then $|x-x_j|> (\sqrt{n}+1/2)\ell(Q_j)$. By Lemma \[le2.1\], we have $\Psi(t\sqrt{L})\big(1-\Phi_j(\sqrt{L})\big)b_j(y)=0$ unless $|y-x_j|\leq t+(1/32+\sqrt{n}/2)\ell(Q_j).$ Note that for $x\notin\Omega$, $0<t\leq \ell(Q_j)/4$ and $|y-x_j|\leq t+(1/32+\sqrt{n}/2)\ell(Q_j)$, $|x-y|\geq|x-x_j|-|y-x_j|>{\sqrt{n}/2+7/32\over \sqrt{n}+1/2}|x-x_j|$. Denote $F_j=:\{y:\ |y-x_j|<(9/32+\sqrt{n}/2)\ell(Q_j)\}$. Then for $x\notin\Omega$ and $\mu>3$, we have $$\begin{aligned} &&\hspace{-1cm}\bigg(\int^{\ell(Q_j)/4}_{0}\int_{F_j} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big|^2 \Big({t\over t+|x-y|}\Big)^{n\mu}{dydt\over t^{n+1}}\bigg)^{1/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j)^{n/2}\over |x-x_j|^{n\mu/2}} \bigg(\int^{\ell(Q_j)/4}_{0} t^{n\mu-2n-n-1}dt\bigg)^{1/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j)^{n/2}\over |x-x_j|^{n\mu/2}}\ell(Q_j)^{(n\mu-3n)/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j)^{-n}\over (1+|x-x_j|/\ell(Q_j))^{n\mu/2}} \\ &&\leq C |f\chi_{Q_j}|\ast \tau_{\ell(Q_j)}(x),\end{aligned}$$ where $\tau_{\ell(Q_j)}(x)=1/(1+|x|)^{n\mu/2}\in L^1(\RN)$. For the next part of the integral we consider two cases: $n=1$ and $n>1$. 
Note that for $x\notin\Omega$, $\ell(Q_j)/4<t\leq |x-x_j|/4$ and $y\in E_{jt}=:\{y:\ |y-x_j|\leq t+(1/32+\sqrt{n}/2)\ell(Q_j)\}$, $|x-y|\geq|x-x_j|-|y-x_j|>{\sqrt{n}/4+11/32\over \sqrt{n}+1/2}|x-x_j|$. Thus for $3<\mu<4$, if $n=1$, $$\begin{aligned} &&\hspace{-1cm}\bigg(\int_{\ell(Q_j)/4}^{|x-x_j|/4}\int_{E_{jt}} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big|^2 \Big({t\over t+|x-y|}\Big)^{n\mu}{dydt\over t^{n+1}}\bigg)^{1/2}\\[3pt] &&\leq C {\|b_j\|_1\ell(Q_j)\over |x-x_j|^{ \mu/2}} \bigg(\int_{\ell(Q_j)/4}^{|x-x_j|/4} t^{-4+1+\mu-2}dt\bigg)^{1/2}\\[3pt] &&\leq C {\|b_j\|_1\ell(Q_j) \over |x-x_j|^{ \mu/2}}\ell(Q_j)^{( \mu-4)/2}\\[3pt] &&\leq C {\|b_j\|_1\ell(Q_j)^{-1}\over (1+|x-x_j|/\ell(Q_j))^{ \mu/2}} \\[3pt] &&\leq C |f\chi_{Q_j}|\ast \sigma_{\ell(Q_j)}(x),\end{aligned}$$ where $\sigma_{\ell(Q_j)}(x)=1/(1+|x|)^{ \mu/2} $. On the other hand, for $3<\mu<4$, if $n>1$, $$\begin{aligned} &&\hspace{-1cm} \bigg(\int_{\ell(Q_j)/4}^{|x-x_j|/4}\int_{E_{jt}} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big|^2 \Big({t\over t+|x-y|}\Big)^{n\mu}{dydt\over t^{n+1}}\bigg)^{1/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j)\over |x-x_j|^{ n\mu/2}} \bigg(\int_{\ell(Q_j)/4}^{|x-x_j|/4} t^{-2n-2+n\mu+n-n-1}dt\bigg)^{1/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j) \over |x-x_j|^{ n\mu/2}}\ell(Q_j)^{( n\mu-2n-2)/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j)^{-n}\over (1+|x-x_j|/\ell(Q_j))^{ n+1}} \\ &&\leq C |f\chi_{Q_j}|\ast P_{\ell(Q_j)}(x),\end{aligned}$$ where $P_{\ell(Q_j)}(x)=1/(1+|x|)^{ n+1} $. Finally, since $t/(t+|x-y|)\leq 1$, so $$\begin{aligned} &&\hspace{-1cm}\bigg(\int^{\infty}_{|x-x_j|/4}\int_{E_{jt}} \big|\Psi_{jt}(\sqrt{L})b_j(y)\big|^2 \Big({t\over t+|x-y|}\Big)^{n\mu}{dydt\over t^{n+1}}\bigg)^{1/2}\\ &&\leq C {\|b_j\|_1\ell(Q_j) } \bigg(\int^{\infty}_{|x-x_j|/4} t^{-2n-3}dt\bigg)^{1/2}\\ &&\leq C |f\chi_{Q_j}|\ast P_{\ell(Q_j)}(x).\end{aligned}$$ Therefore, if $x\notin\Omega$, and $n>1$, then $\gL\Big(\big(1-\Phi_j(\sqrt{L})\big)b_j\Big)(x)\leq C |f\chi_{Q_j}|\ast P_{\ell(Q_j)}(x).$ And $$\begin{aligned} \int_{\RN\setminus\Omega}\gL\Big( \big(1-\Phi_j(\sqrt{L})\big)b_j\Big)wdx &\leq& C \int_{\RN\setminus\Omega}|f\chi_{Q_j}|\ast P_{\ell(Q_j)}wdx\\ &\leq& C \int_{Q_j}|f|( P_{\ell(Q_j)}\ast w)dx\\ &\leq& C \int_{Q_j}|f|Mwdx.\end{aligned}$$ If $n=1$ we get the same thing, but with $P$ replaced by $\sigma.$ This concludes the proof of (\[e4.3\]). And the proof of this theorem is complete. Estimate for $2<p<\infty$ ------------------------- We proceed by duality. If $h(x)\geq0$ and $h\in L^{(p/2)'}(wdx)$, then $$\begin{aligned} \int_{\RN}\gL(f)^2hwdx=\int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})f(y)|^2{1\over t} \bigg({1\over t^n}\int_{\RN}h(x)w(x)\Big({t\over t+|x-y|}\Big)^{n\mu}dx\bigg)dydt.\end{aligned}$$ Set $$E_k=\Big\{(y,t):\ {1\over t^n}\int_{\RN}h(x)w(x)\Big({t\over t+|x-y|}\Big)^{n\mu}dx\sim2^k\Big\}.$$ Note that if $|y-z|<t$, then $t+|x-y|\sim t+|x-z|$. 
Thus, if $(y,t)\in E_k$ and $|y-z|<t$, then $$\begin{aligned} 2^k<{1\over t^n}\int_{\RN}h(x)w(x)\Big({t\over t+|x-y|}\Big)^{n\mu}dx\sim {1\over t^n}\int_{\RN}h(x)w(x)\Big({t\over t+|x-z|}\Big)^{n\mu}dx.\end{aligned}$$ The last expression is at most $$\begin{aligned} &&\hspace{-1.2cm}C\sum_{j=0}^{\infty}{1\over2^{jn\mu}}{1\over t^n}\int_{B(z,2^jt)}hwdx\\ &=&C\sum_{j=0}^{\infty}{1\over2^{jn(\mu-1)}} {w(B(z,2^jt))\over (2^jt)^n}{1\over w(B(z,2^jt))}\int_{B(z,2^jt)}hwdx\\ &\leq&C\sum_{j=0}^{\infty}{1\over2^{jn(\mu-1)}}Mw(z)M_w(h)(z)\\ &\leq&C Mw(z)M_w(h)(z),\end{aligned}$$ where $$M_w(h)(z)=\sup_{t>0}\Big({1\over w(B(z,t))}\int_{B(z,t)}hwdx\Big).$$ Recall that ${\rm supp}\ K_{\Psi(t\sqrt{L})}(y,\cdot)\subset B(y,t)$. Since for $(y,t)\in E_k$ and $|y-z|<t$ we have $z\in A_k=\{z:\ Mw(z)M_w(h)(z)\geq C_{n,\mu}2^k\},$ it follows that for $(y,t)\in E_k$, $\Psi(t\sqrt{L})f(y)=\Psi(t\sqrt{L})(f\chi_{A_k})(y).$ Thus, $$\begin{aligned} &&\hspace{-1.2cm}\int_{\RR^{n+1}_{+}}|\Psi(t\sqrt{L})f(y)|^2{1\over t} \bigg({1\over t^n}\int_{\RN}h(x)w(x)\Big({t\over t+|x-y|}\Big)^{n\mu}\bigg)dydt\\ &&\leq \sum_k 2^{k+1}\int_{E_k}|\Psi(t\sqrt{L})f(y)|^2{dydt\over t}\\ &&= \sum_k 2^{k+1}\int_{E_k}|\Psi(t\sqrt{L})(f\chi_{A_k})(y)|^2{dydt\over t}\\ &&\leq C \sum_k 2^{k+1}\int_{\RN}| f|^2\chi_{A_k}{dy }\\ &&\leq C \int_{\RN}| f|^2 {MwM_w(h) }{dy }.\end{aligned}$$ Applying the Hölder inequality with exponents $p/2$ and $(p/2)'$, we obtain the bound $$C \bigg(\int_{\RN}| f|^p (Mw)^{p/2}w^{-(p/2-1)}{dy }\Big)^{2/p} \Big(\int_{\RN}M_w(h)^{(p/2)'}w{dy }\bigg)^{(p-2)/p}.$$ However, since $M_w$ is the centered maximal function, we have $$\int_{\RN}M_w(h)^{(p/2)'}wdx\leq C_{n,p}\int_{\RN}h^{(p/2)'}wdx,$$ by a standard argument based on the Besicovitch covering lemma. Since $h$ is arbitrary, we obtain our result. Proof of Theorem \[th1.4\] =========================== In order to prove Theorem \[th1.4\], we first prove the following result. \[le5.1\] Let $T$ be of the area functions $s_h$, $s_p$, $S_{P}$, $S_{H}$ and $\gL$ with $\mu>3$. Under assumptions of Theorems \[th1.1\],  \[th1.2\] and  \[th4.1\], for $w\in A_p,\ 1<p<\infty$, we have $$\begin{aligned} \label{e5.1}\|Tf\|_{L^p_w(\RN)}\leq C \|f\|_{L^p_w(\RN)}\end{aligned}$$ where constant $C$ depends only on $p$, $n$ and $w$. Let $T$ be of the area functions $s_h$, $s_p$, $S_{P}$, $S_{H}$ and $\gL$ with $\mu>3$. Note that if $w\in A_1$, then $Mw\leq Cw$ a.e. By Theorems \[th1.1\],  \[th1.2\] and  \[th4.1\], $T$ is bounded on $L^p_w(\RN), 1<p<\infty,$ for any $w\in A_1,$ i.e., $$\|Tf\|_{L^p_w(\RN)}\leq C \|f\|_{L^p_w(\RN)}.$$ By extrapolation theorem, these operators are all bounded on $L^p_w(\RN), 1<p<\infty,$ for any $w\in A_p,$ and estimate (\[e5.1\]) holds. For the detail, we refer the reader to pages 141-142, Theorem 7.8, [@D]. Going further, we introduce some definitions. Given a weight $w$, set $w(E)=\int_E w(x)dx$. The non-increasing rearrangement of a measurable function $f$ with respect to a weight $w$ is defined by (cf. [@CR]) $$\begin{aligned} f^\ast_w(t)=\sup_{w(E)=t} \inf_{x\in E}|f(x)|\ \ \ \ (0<t<w(\RN)).\end{aligned}$$ If $w\equiv1$, we use the notation $f^\ast(t)$. Given a measurable function $f$, the local sharp maximal function $M^\sharp_{\lambda}f$ is defined by $$\begin{aligned} M^\sharp_{\lambda}f(x)=\sup_{Q\ni x}\inf_{c}\big((f-c)\chi_Q\big)^\ast(\lambda|Q|)\ \ \ (0<\lambda<1).\end{aligned}$$ This function was introduced by Strömberg [@S], and motivated by an alternate characterization of the space $BMO$ given by John [@J]. 
\[le5.4\] For any $w\in A_p$ and for any locally integrable function $f$ with $f^\ast_w(+\infty)=0$ we have $$\begin{aligned} \label{e5.2} \|Mf\|_{L^p_w(\RN)}\leq C\|w\|_{A_p}^{\gamma_{p,q}}\cdot\|M^\sharp_{\lambda_n} (|f|^q)\|^{1/q}_{L^{p/q}_w(\RN )}\ \ \ (1<p<\infty,1\leq q<\infty),\end{aligned}$$ where $\gamma_{p,q}=\max\{1/q,1/(p-1)\}$, $C$ depends only on $p,q$ and on the underlying dimension $n$, and $\lambda_n$ depends only on $n$. For the proof of this lemma, see Theorem 3.1 in [@L1]. \[pro5.5\] Let $\gL$ be a function with $\mu>3$ in (\[e3.1\]). Then for any $f\in C^\infty_0(\RN)$ and for all $x\in \RN,$ $$M^\sharp_{\lambda}\big(\gL(f)^2\big)(x)\leq CMf(x)^2,$$ where $C$ depends on $\lambda,\mu,\Psi$ and $n$. Given a cube $Q$, let $T(Q)=\{(y,t):\ y\in Q,0<t<l(Q)\}$, where $\ell(Q)$ denotes the side length of $Q$. For $(y,t)\in T(Q)$, using (\[e2.5\]) of Lemma \[le2.1\] we have $$\begin{aligned} \label{e5.3} \Psi(t\sqrt{L})f(y)=\Psi(t\sqrt{L})(f\chi_{3Q})(y).\end{aligned}$$ Now, fix a cube $Q$ containing $x$. For any $z\in Q$ we decompose $\gL(f)^2$ into the sum of $$I_1(z)=\iint_{T(2Q)}|\Psi(t\sqrt{L})f(y)|^2\Big({t\over t+|z-y|}\Big)^{n\mu}{dydt\over t^{n+1}}$$ and $$I_2(z)=\iint_{\RR^{n+1}_{+} \setminus T(2Q)}|\Psi(t\sqrt{L})f(y)|^2\Big({t\over t+|z-y|}\Big)^{n\mu}{dydt\over t^{n+1}}.$$ From Theorem \[th4.1\], we know that for $\mu>3$, $\gL(f)$ is of weak type $(1, 1)$. Then using (\[e5.3\]), we have $$\begin{aligned} \label{e5.4} (I_1)^\ast(\lambda|Q|)&\leq& \Big(\gL(f\chi_{6Q})\Big)^{\ast}(\lambda|Q|)^2\\ &\leq&\Big({C\over \lambda|Q|}\int_{6Q}|f|\Big)^2\leq CMf(x)^2.\nonumber\end{aligned}$$ Further, for any $z_0 \in Q$ and $(y, t)\notin T (2Q)$, by the Mean Value Theorem, $$(t+|z-y|)^{-n\mu}-(t+|z_0-y|)^{-n\mu}\leq C\ell(Q)(t+|z-y|)^{-n\mu-1}.$$ From this and (\[e5.3\]), using Lemma \[le2.1\] again and $\mu>3$, we have $$\begin{aligned} &&\hspace{-1.2cm}|I_2(z)-I_2(z_0)|\\ &&\leq C\ell(Q)\iint_{\RR^{n+1}_{+}\setminus T(2Q)}t^{n\mu}|\Psi(t\sqrt{L})f(y)|^2\Big({1\over t+|z-y|}\Big)^{n\mu+1}{dydt\over t^{n+1}}\\ &&\leq C\sum^\infty_{k=1}{1\over 2^k}{1\over (2^k\ell(Q))^{n\mu}} \iint_{T(2^{k+1}Q)\setminus T(2^{k}Q)}t^{n\mu}|\Psi(t\sqrt{L})f(y)|^2{dydt\over t^{n+1}}\\ &&\leq C\sum^\infty_{k=1}{1\over 2^k}{|2^{k+1}Q|\over (2^k\ell(Q))^{n\mu}}\Big(\int^{2^{k+1}\ell(Q)}_0t^{n\mu-3n-1}dt\Big) \Big(\int_{6\cdot2^kQ}|f|\Big)^2\\ &&\leq C\sum^\infty_{k=1}{1\over 2^k}\Big({1\over |2^{k+1}Q|}\int_{6\cdot2^kQ}|f|\Big)^2\leq CMf(x)^2.\end{aligned}$$ Combining this estimate with (\[e5.4\]) yields $$\begin{aligned} \inf_{c}\Big((\gL(f)^2-c)\chi_Q\Big)^\ast(\lambda|Q|)&\leq& \big((I_1+I_2-I_2(z_0))\chi_Q\big)^\ast(\lambda|Q|)\\ &\leq&(I_1)^\ast(\lambda|Q|)+CMf(x)^2\\ &\leq& CMf(x)^2, \end{aligned}$$ which proves the desired result. Then we have the following result. \[th5.4\] Let $T$ be of the area functions $s_h$, $s_p$, $S_{P}$,$S_{H}$ and $\gL$ with $ \mu>3.$ Under assumptions of Theorems \[th1.1\],  \[th1.2\] and  \[th4.1\], for $w\in A_p,\ 1<p<\infty$, if $\|f\|_{L^p_w(\RN )}<\infty$, then $$\begin{aligned} \label{e5.5} \Big(\int_{\RN}\big(M(Tf)\big)^pwdx\Big)^{1/p}\leq C\|w\|_{A_p}^{\beta_p}\Big(\int_{\RN} \big(M( f)\big)^pwdx\Big)^{1/p},\end{aligned}$$ where $\beta_p=\max\{1/2,1/(p-1)\}$, and a constant $C$ depends only on $p$ and $n$. Suppose $T=\gL.$ From Lemma \[le5.1\], we know that $\gL$ is bounded on $L^p_w(\RN ) $ when $w\in A_p$. Therefore, assuming that $\|f\|_{L^p_w(\RN)}$ is finite, we clearly obtain that $(\gL)^\ast_w(+\infty)=0 $. 
Letting $\gL(f)$ instead of $f$ in (\[e5.2\]) with $q=2$ and applying Proposition \[pro5.5\], we get $$\Big(\int_{\RN}\big(M(\gL(f))\big)^pwdx\Big)^{1/p}\leq C\|w\|_{A_p}^{\beta_p}\Big(\int_{\RN} \big(M( f)\big)^pwdx\Big)^{1/p}.$$ Under assumptions of Theorems \[th1.1\] and  \[th1.2\], it follows that the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}$ are all controlled by $\gL$ pointwise. So we have the estimate (\[e5.5\]) for $s_h$, $s_p$, $S_{P}$ and $S_{H}$. Then the proof of this theorem is complete. In [@B], Buckley proved that for the Hardy-Littlewood maximal operator, $$\begin{aligned} \label{m} \|M\|_{L^p_w(\RN)}\leq C\|w\|_{A_p}^{1/(p-1)}\ \ (1<p<\infty),\end{aligned}$$ and this result is sharp. From (\[m\]) and Theorem \[th5.4\], there exists a constant $C=C(T, n, p)$ such that for all $w\in A_p$, $$\begin{aligned} \label{5.7} \|T\|_{L^p_w(\RN)}\leq C\|w\|_{A_p}^{ {1\over p-1}+ \, \max\big\{{1\over 2},\, {1\over p-1}\big\}} \ \ \ \ \ \ (1<p<\infty),\end{aligned}$$ where $T$ is of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H}.$ This proves Theorem \[th1.4\]. [**Remarks.**]{}  \(i) Note that when $L=-\Delta$ is the Laplacian on $\RN$, it is well known that the exponents $\beta_p$ of (\[e5.5\]) in Theorem \[th5.4\] is best possible, in general (see, e.g., Theorem 1.5, [@L1]). \(ii) For the classical area function $S_{\varphi}$ in (\[e1.1\]), the result of Theorem \[th1.4\] was recently improved by A. Lerner in [@L2], i.e., there exists a constant $C=C(S_{\varphi}, n, p)$ such that for all $w\in A_p, 1<p<\infty$, $$\begin{aligned} \label{e5.8} \|S_{\varphi}\|_{L^p_w(\RN)}\leq C\|w\|_{A_p}^{ \, \max\big\{{1\over 2},\, {1\over p-1}\big\}},\end{aligned}$$ and the estimate (\[e5.8\]) is the best possible for all $1<p<\infty.$ However, we do not know whether one can deduce the same bounds (\[e5.8\]) for the $L^p_w$ operator norms of the area functions $s_h$, $s_p$, $S_{P}$ and $S_{H},$ and they are of interest in their own right. Note that sharp weighted optimal bounds for singular integrals has been studied extensively, see for examples, [@CMP; @HLRSUV; @LOP1; @LOP2; @P] and the references therein. \(iii) Finally, for $f\in {\mathcal S}(\RN)$, we define the (so called vertical) Littlewood-Paley-Stein functions ${\mathcal G}_P $ and $ {\mathcal G}_H$ by $$\begin{aligned} {\mathcal G}_P(f)(x)&=&\bigg(\int_0^{\infty} |t\nabla_x e^{-t\sqrt{L}} f(x)|^2{ dt\over t }\bigg)^{1/2},\\ {\mathcal G}_H(f)(x)&=&\bigg(\int_0^{\infty} |t\nabla_x e^{-t^2L} f(x)|^2 { dt\over t }\bigg)^{1/2}, \end{aligned}$$ as well as the (so-called horizontal) Littlewood-Paley-Stein functions $ g_p$ and $g_h$ by $$\begin{aligned} g_p(f)(x)&=&\bigg(\int_0^{\infty} |t\sqrt{L} e^{-t\sqrt{L}} f(x)|^2 { dt\over t }\bigg)^{1/2},\\ g_h(f)(x)&=&\bigg(\int_0^{\infty} |t^2L e^{-t^2L} f(x)|^2 { dt\over t }\bigg)^{1/2}. \end{aligned}$$ One then has the analogous statement as in Theorems 1.1, 1.2, 1.3 and 1.4 replacing $s_p, s_h, S_P, S_H $ by $g_p, g_h, {\mathcal G}_P, {\mathcal G}_H$, respectively. [**Acknowledgment**]{}:  The research of Lixin Yan is supported by NNSF of China (Grant No. 10771221) and National Science Foundation for Distinguished Young Scholars of China (Grant No. 10925106). [99999]{} 0.12cm P.Auscher, On necessary and sufficient conditions for $L^p$-estimates of Riesz transforms associated to elliptic operators on $\RR$ and related estimates. [*Memoirs of the Amer. Math. Soc.*]{} 186, no. 871 (2007). 0.12cm P. Auscher, T. Coulhon, X.T. Duong and S. 
Hofmann, Riesz transform on manifolds and heat kernel regularity. [*Ann. Sci. École Norm. Sup.*]{}, [**37**]{} (2004), 911-957. 0.12cm P. Auscher, X.T. Duong and A. McIntosh, Boundedness of Banach space valued singular integral operators and Hardy spaces. Unpublished preprint (2005). 0.12cm P. Auscher, J.M. Martell, Weighted norm inequalities, off-diagonal estimates and elliptic operators. Part I: General operator theory and weights, [*Adv. in Math.*]{}, [**212**]{} (2007), 225-276. 0.12cm S. M. Buckley, Estimates for operator norms on weighted spaces and reverse Jensen inequalities. [*Trans. Amer. Math. Soc.,*]{} [**340**]{} (1993), 253-272. 0.12cm A.P. Calderón and A. Torchinsky, Parabolic maximal function associated with a distribution. [*Adv. Math.*]{}, [**16**]{} (1975), 1-64. 0.12cm S.-Y.A. Chang, J.M. Wilson and T. Wolff, Some weighted norm inequalities concerning the Schrödinger operator. [*Comm. Math. Helv.,*]{} [**60**]{} (1985), 217-246. 0.12cm S. Chanillo and R. L. Wheeden, Some weighted norm inequalities for the area integral. [ *Indiana Univ. Math. J.*]{}, [**36**]{} (1987), 277-294. 0.12cm J. Cheeger, M. Gromov and M. Taylor, Finite propagation speed, kernel estimates for functions of the Laplacian and the geometry of complete Riemannian manifolds. [*J. Differential Geom.*]{}, [**17**]{} (1982), 15-53. 0.12cm K.M. Chong, N.M. Rice, Equimeasurable rearrangements of functions. Queen's Papers in Pure and Applied Mathematics, vol. 28, Queen's University, Kingston, Ont., 1971. 0.12cm T. Coulhon and X.T. Duong, Riesz transforms for $1\leq p\leq 2$. [*Trans. Amer. Math. Soc.,*]{} [**351**]{} (1999), 1151-1169. 0.12cm T. Coulhon, X.T. Duong and X.D. Li, Littlewood-Paley-Stein functions on complete Riemannian manifolds for $1\leq p\leq 2.$ [*Studia Math.,*]{} [**154**]{} (2003), 37-57. 0.12cm T. Coulhon and A. Sikora, Gaussian heat kernel upper bounds via Phragmén-Lindelöf theorem. [*Proc. Lond. Math. Soc.,*]{} [**96**]{} (2008), 507-544. 0.12cm D. Cruz-Uribe, J.M. Martell and C. Perez, Sharp weighted estimates for classical operators (2010), available at http://arxiv.org/abs/1001.4254. 0.12cm E.B. Davies, [*Heat kernels and spectral theory*]{}, Cambridge Univ. Press, 1989. 0.12cm J. Duoandikoetxea, [*Fourier analysis.*]{} Translated and revised from the 1995 Spanish original by David Cruz-Uribe. Graduate Studies in Mathematics, 29. American Mathematical Society, Providence, RI, 2001. 0.12cm X.T. Duong and A. McIntosh, Singular integral operators with non-smooth kernels on irregular domains. [*Rev. Mat. Iberoamericana*]{}, [**15**]{} (1999), 233-265. 0.12cm X.T. Duong, E.M. Ouhabaz and A. Sikora, Plancherel-type estimates and sharp spectral multipliers. [*J. Funct. Anal.*]{}, [**196**]{} (2002), 443-485. 0.12cm R. Fefferman and J. Pipher, Multiparameter operators and sharp weighted inequalities. [ *Amer. J. Math.*]{}, [**119**]{} (1997), 337-369. 0.12cm C. Fefferman and E.M. Stein, Some maximal inequalities. [ *Amer. J. Math.*]{}, [**92**]{} (1971), 107-115. 0.12cm J. García-Cuerva and J.L. Rubio de Francia, [*Weighted Norm Inequalities and Related Topics*]{}, North-Holland Mathematics Studies [**116**]{}, North Holland, Amsterdam, 1985. 0.12cm L. Grafakos, [*Modern Fourier Analysis*]{}, Springer-Verlag, Graduate Texts in Mathematics [**250**]{}, Second Edition, 2008. 0.12cm S. Hofmann, G.Z. Lu, D. Mitrea, M. Mitrea and L.X. Yan, Hardy spaces associated to non-negative self-adjoint operators satisfying Davies-Gaffney estimates. To appear in [*Memoirs of Amer. Math. Soc.*]{}, 2010. 0.12cm T.
Hytönen, M. Lacey, M. Reguera, E. Sawyer, I. Uriarte-Tuero and A. Vagharshakyan, Weak and strong type $A_p$ estimates for Calderón-Zygmund operators (2010), available at http://arxiv.org/abs/1006.2530. 0.12cm F. John, Quasi-isometric mappings. Seminari 1962-1963 di Analisi, Algebra, Geometria e Topologia, Rome, 1965. 0.12cm A.K. Lerner, On some sharp weighted norm inequalities. [ *J. Funct. Anal.*]{}, [**232**]{} (2006), 477-494. 0.12cm A.K. Lerner, Sharp weighted norm inequalities for Littlewood-Paley operators and singular integrals (2010), available at http://arXiv.org/abs/1005.1422. 0.12cm A.K. Lerner, S. Ombrosi and C. Perez, $A_1$ bounds for Calderón-Zygmund operators related to a problem of Muckenhoupt and Wheeden, [*Math. Res. Lett.*]{}, [**16**]{} (2009), 149-156. 0.12cm A.K. Lerner, S. Ombrosi and C. Perez, Sharp $A_1$ bounds for Calderón-Zygmund operators and the relationship with a problem of Muckenhoupt and Wheeden, [*Int. Math. Res. Not. IMRN*]{}, [**6**]{} (2008), Art. ID rnm 161, 11. 0.12cm E.M. Ouhabaz, Analysis of heat equations on domains. [*London Math. Soc. Monographs*]{}, Vol. [**31**]{}, Princeton Univ. Press (2004). 0.12cm C. Perez, A course on singular integrals and weights. Preprint (2010). 0.12cm J.-O. Strömberg, Bounded mean oscillation with Orlicz norms and duality of Hardy spaces. [ *Indiana Univ. Math. J.*]{}, [**28**]{} (1979), 511-544. 0.12cm A. Sikora, Riesz transform, Gaussian bounds and the method of wave equation. [*Math. Z.*]{}, [**247**]{} (2004), 643-662. 0.12cm A. Sikora and J. Wright, Imaginary powers of Laplace operators. [*Proc. Amer. Math. Soc.*]{}, [**129**]{} (2001), 1745-1754. 0.12cm E.M. Stein, [*Singular integrals and differentiability properties of functions*]{}. Princeton Univ. Press, [**30**]{}, (1970). 0.12cm E.M. Stein, [*Topics in harmonic analysis related to the Littlewood-Paley theory*]{}. Princeton Univ. Press, (1970). 0.12cm J.M. Wilson, Weighted norm inequalities for the continuous square functions. [ *Trans. Amer. Math. Soc.*]{}, [**314**]{} (1989), 661-692. 0.12cm L.X. Yan, Littlewood-Paley functions associated to second order operators. [ *Math. Z.*]{}, [**246**]{} (2004), 655-666. 0.12cm K. Yosida, [*Functional Analysis*]{} (Fifth edition). Springer-Verlag, Berlin, 1978.
--- abstract: 'We utilize cosmological hydrodynamic simulations to study the formation of Population III (Pop III) stars in dark matter halos exposed to strong ionizing radiation. We simulate the formation of three halos subjected to a wide range of ionizing fluxes, and find that for high flux, ionization and photoheating can delay gas collapse and star formation up to halo masses significantly larger than the atomic cooling threshold. The threshold halo mass at which gas first collapses and cools increases with ionizing flux for intermediate values, and saturates at a value approximately an order of magnitude above the atomic cooling threshold for extremely high flux (e.g. $\approx 5 \times 10^8 ~ M_\odot$ at $z\approx6$). This behavior can be understood in terms of photoheating, ionization/recombination, and Ly$\alpha$ cooling in the pressure-supported, self-shielded gas core at the center of the growing dark matter halo. We examine the spherically-averaged radial velocity profiles of collapsing gas and find that a gas mass of up to $\approx 10^{6}~ M_\odot$ can reach the central regions within $3~{\rm Myr}$, providing an upper limit on the amount of massive Pop III stars that can form. The ionizing radiation increases this limit by a factor of a few compared to strong Lyman-Werner (LW) radiation alone. We conclude that the bright HeII 1640 Å emission recently observed from the high-redshift galaxy CR7 cannot be explained by Pop III stars alone. However, in some halos, a sufficient number of Pop III stars may form to be detectable with future telescopes such as the *James Webb Space Telescope* (JWST).' author: - | Eli Visbal$^{1,2}$[^1], Greg L. Bryan$^{1,2}$, Zoltán Haiman$^{2, 3}$\ $^1$Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY, 10003, U.S.A.\ $^2$Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY, 10027, U.S.A.\ $^3$Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY, 10003, U.S.A. bibliography: - 'paper.bib' title: 'What is the Maximum Mass of a Population III Galaxy?' --- stars:Population III–galaxies:high-redshift–cosmology:theory Introduction ============ Cosmological simulations predict that the first metal-free Pop III stars form in $\sim 10^{5-6} M_{\odot}$ dark matter “minihalos” in the early Universe [for a recent review, see @2015ComAC...2....3G]. Molecular hydrogen (${\rm H}_2$) is essential for star formation in these halos because their virialized gas does not reach sufficient temperatures to cool through atomic hydrogen transitions. Hydrogen molecules can be photodissociated by LW radiation (11.2-13.6 eV photons), and as the star formation density of the Universe increases, a LW background builds up which suppresses star formation in small minihalos [e.g. @1997ApJ...476..458H; @2001ApJ...548..509M; @2007ApJ...671.1559W; @2008ApJ...673...14O; @2011MNRAS.418..838W; @2014MNRAS.445..107V]. Eventually, in regions with sufficiently strong LW radiation, Pop III star formation is suppressed in all halos up to the virial temperature where atomic cooling becomes efficient, $T_{\rm vir} \approx 10^4 ~{\rm K}$, corresponding to a redshift-dependent halo mass of $M \approx 3 \times 10^7 \left ( \frac{1+z}{11}\right )^{-3/2} M_\odot$ [@2001PhR...349..125B]. Preventing star formation up to this mass with LW radiation alone requires a very high flux (Regan et al. 2016, submitted). 
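As a rough numerical illustration, the short Python sketch below simply evaluates the fitting formula quoted above for the atomic cooling threshold; the function name is illustrative only, and nothing beyond the quoted scaling is assumed.

```python
def m_atomic_cooling(z):
    """Halo mass [M_sun] at which T_vir ~ 1e4 K, using the scaling
    M ~ 3e7 ((1+z)/11)^(-3/2) M_sun quoted above."""
    return 3.0e7 * ((1.0 + z) / 11.0) ** -1.5

for z in (20, 10, 6):
    print(f"z = {z:2d}:  M_atm ~ {m_atomic_cooling(z):.1e} M_sun")
# z = 20:  M_atm ~ 1.1e+07 M_sun
# z = 10:  M_atm ~ 3.0e+07 M_sun
# z =  6:  M_atm ~ 5.9e+07 M_sun
```

For comparison, the saturated collapse mass quoted in the abstract ($\approx 5\times10^8~M_\odot$ at $z\approx6$) lies roughly an order of magnitude above the $z\approx6$ value.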
LW feedback along with inefficient metal mixing may result in significant Pop III star formation during the epoch of reionization or even at much later times [@2006Natur.440..501J; @2007ApJ...659..890W; @2009ApJ...694..879T]. Generally it has been thought that Pop III stars do not form in halos much larger than the atomic cooling threshold. This is because the lifetimes of massive ($M \gtrsim 100 ~ M_\odot$) Pop III stars are only a few Myr , leading to prompt enrichment of metals via supernovae winds and a transition to Population II (Pop II) stars. However, it has been proposed that Pop III stars could form in much more massive halos if star formation is suppressed in all progenitor halos via strong ionizing radiation [@2010MNRAS.404.1425J; @2016MNRAS.460L..59V; @2016arXiv161004249Y]. This was suggested recently as a possible interpretation of the high-redshift Ly$\alpha$ emitter “CR7” [@2015ApJ...808..139S; @2015MNRAS.453.2465P; @2016MNRAS.460L..59V], which has a strong HeII 1640 Å recombination line that could be produced by massive Pop III stars. Using the stellar models of (assuming an ionizing escape fraction of zero, gas temperature of 30,000 K, and electron density of $n_e = 100~{\rm cm^{-3}}$), we find that total stellar masses of $\sim 3\times 10^7~M_\odot$ or $\sim 10^7 M_\odot$ are required to generate the HeII luminosity of CR7 [@2015ApJ...808..139S] if all stars are $120~M_\odot$ or $1000~M_\odot$ zero age main sequence Pop III stars, respectively. In addition to correctly interpreting CR7, understanding this mode of Pop III star formation is important because it may lead to a larger, and thus more easily observable, total stellar mass in Pop III galaxies. Massive Pop III-forming halos would be promising targets for next-generation telescopes, such as *JWST*, leading to the first detection of metal-free stars. In this paper, we utilize cosmological hydrodynamics simulations to investigate photoionization feedback on pristine dark matter halos, leading to Pop III star formation in halos significantly above the atomic cooling threshold. Previous work has investigated the photoevaporation of minihalos due to ionizing radiation [@2005MNRAS.361..405I; @2004MNRAS.348..753S]. There has also been a large body of work seeking to understand the suppression of star formation in more massive dark matter halos due to reionization [e.g. @1994ApJ...427...25S; @1996ApJ...465..608T; @1998MNRAS.296...44G; @2000ApJ...542..535G; @2004ApJ...601..666D; @2006MNRAS.371..401H; @2008MNRAS.390..920O; @2013MNRAS.432L..51S; @2014MNRAS.444..503N]. [@2004ApJ...601..666D] find, using 1D, spherically symmetric simulations, that photoionization feedback is not important in halos with circular velocity $v_{\rm c}\gtrsim 10~{\rm km~s^{-1}}$, when subjected to the mean ultraviolet background at $z\gtrsim 10$. Here we consider much higher ionizing fluxes, as might be expected from nearby star-forming galaxies, which results in feedback being important at significantly larger halo masses. Much of the previous work assumes that the key physical scale in the problem is the Jeans mass of ionized photoheated gas at the mean cosmic density, or a related, time-averaged version of this quantity [the so-called filtering mass; @1998MNRAS.296...44G]. Recently, [@2014MNRAS.444..503N] have put forward a physical picture that goes beyond this approximation, considering the effects of gravity, cooling, pressure, and self-shielding on gas as it collapses into a halo. 
This model is able to reproduce the gas mass fraction within dark matter halos as a function of halo mass and redshift. We note that, in general, the goal of these previous works was to find either the characteristic halo mass at which gas accretion can occur or the mass at which halos contain a gas fraction of 50 per cent (relative to the total cosmic mean ratio of baryons to dark matter). In this paper, we pose and address a closely related, yet distinct question: [*At what halo mass can *any* gas first cool and form stars in a pristine halo subjected to ionizing radiation?*]{} Naively, one might expect that if gas is ionized before any star formation occurs, pressure will prevent it from collapsing and forming stars before it reaches the Jeans/filtering mass at the mean gas density of the Universe. We find that modeling the delay of star formation in pristine dark matter halos considering only the Jeans/filtering scale does not provide a complete physical picture. Instead, the essential physics can be described as follows. After gas is photoionized and photoheated, it forms a pressure supported core inside the center of the forming dark matter halo. As the halo grows, the density of this core increases and recombinations increase the neutral fraction. Eventually, the heating timescale in the self-shielded core becomes significantly longer than the dynamical timescale, leading to runaway collapse and star formation. The halo mass at which this occurs depends on the intensity of the ionizing flux. For extremely high flux, we find masses roughly an order of magnitude higher than the atomic cooling threshold. The simulations presented below quantify the mass at which star formation first occurs, for several halos across a wide range of photoionizing fluxes. For the three halos tested, we find that an ionizing photon flux of at least $\approx 10^6 ~{\rm s^{-1} ~ cm^{-2}}$ is required to suppress star formation significantly above the atomic cooling threshold. At high-redshift, this is most likely to occur in close proximity to large star-forming galaxies [@2010MNRAS.404.1425J; @2016MNRAS.460L..59V]. We examine the radial velocity profiles of gas in our simulations to investigate the total mass of Pop III stars formed. We find an upper limit of $\approx 10^6~M_\odot$ within $3~{\rm Myr}$ after collapse (corresponding to the lifetime of massive Pop III stars). This limit varies by a factor of a few depending on the halo and applied ionizing flux (it may also be decreased by a factor of a few by molecular hydrogen cooling, as discussed below). Even though the mass of halos hosting Pop III stars can be increased by more than an order of magnitude, the increase in limits on stellar mass is found to be more modest (a factor of a few increase for very high flux). As discussed below, we find that these limits suggest that CR7 cannot be explained solely by Pop III stars. Nevertheless, the maximum total stellar masses of Pop III galaxies we infer are large enough to be observable with *JWST*. This paper is structured as follows. In §2, we discuss the details of our cosmological simulations. We explain the physics of delayed star formation in a pristine halo subjected to strong ionizing radiation in §3. In §4, we use our simulations to approximate an upper limit on the amount of gas which could form Pop III stars once a halo is massive enough to overcome photoionization feedback. Finally, we summarize our results and conclusions in §5. 
Throughout we assume a $\Lambda$CDM cosmology consistent with : $\Omega_{\rm m} = 0.32$, $\Omega_{\Lambda} = 0.68$, $\Omega_{\rm b} = 0.049$, $h=0.67$, $\sigma_8=0.83$, and $n_{\rm s} = 0.96$. Simulations =========== Basic Setup ----------- We employ cosmological hydrodynamics simulations performed with the adaptive mesh refinement code <span style="font-variant:small-caps;">enzo</span> [@2014ApJS..211...19B], to study the delay of Pop III star formation in dark matter halos subjected to strong ionizing radiation. We simulate three different halos (“halos A, B, and C”) which were first identified in dark matter only simulations and then re-simulated with “zoom-in” runs where only the region of halo formation is refined. The initial conditions for all of the simulations were computed with the <span style="font-variant:small-caps;">music</span> software package [@2011MNRAS.415.2101H]. Our zoom-in initial conditions each contain three nested grids. Halo A is taken from an $L=3.5 h^{-1}~ {\rm Mpc}$ box and the most refined region is 0.13 times this length. Halos B and C are taken from an $L=2 h^{-1}~ {\rm Mpc}$ box and the lengths of the most refined regions are 0.18 and 0.16 times the total box length, respectively. Halos A, B, and C span a range of masses at fixed redshift. When no ionizing flux is included, they reach the atomic cooling threshold at $z\approx 21$, $15$, and $12$ respectively. For all of our runs, the full box has cell and particle resolutions of $128^3$. For most of our runs, the most refined region has an initial resolution of $512^3$ and a maximum refinement level of 18 for gas and 13 for dark matter particles. This resolution corresponds to a dark matter particle mass of $36000$, $7000$, and $7000~ M_\odot$ and an initial baryon cell mass of $6500$, $1200$, and $1200~ M_\odot$ for halos A, B, and C, respectively. For a few high-resolution convergence checks, we increase the initial mass resolution of the refined region by a factor of 8. Treatment of Ionizing Radiation ------------------------------- We treat ionizing radiation in our simulations with an approximate method that we have incorporated into <span style="font-variant:small-caps;">enzo</span> version 2.5. This treatment is based on [@2013MNRAS.430.2427R], which provides fitting functions to the photoionization rate as a function of local density that match cosmological simulations with radiative transfer in the post-reionization Universe. We assume a uniform photoionization rate throughout the entire box, modified by a local self-shielding factor estimated from the temperature and total (neutral+ionized) hydrogen number density. The values of the photoionization rate are then given by the following relation: $$\Gamma = f_{\rm sh} \Gamma_{\rm bg},$$ where $\Gamma_{\rm bg}$ is the photoionization rate due to a uniform ionizing background without shielding and $f_{\rm sh}$ is the local shielding factor which depends on the temperature and hydrogen density, $$\label{shield_eqn} f_{\rm sh} = 0.98 \left [ 1 + \left ( \frac{n_{\rm H}}{n_{\rm H, sh}} \right )^{1.64} \right ]^{-2.28} + 0.02 \left [ 1 + \frac{n_{\rm H}}{n_{\rm H, sh}} \right ]^{-0.84},$$ where $$n_{\rm H, sh} = 5 \times 10^{-3} {\rm cm}^{-3} \left ( \frac{T}{10^4 ~{\rm K}} \right )^{0.17} \left ( \frac{\Gamma_{\rm bg}}{10^{-12} ~{\rm s}^{-1}} \right )^{2/3}.$$ The same shielding factor is applied to photoheating as a result of hydrogen ionization. 
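A minimal Python sketch of this prescription, assuming only the fitting functions quoted above (with $n_{\rm H}$ in ${\rm cm^{-3}}$, $T$ in K, and $\Gamma_{\rm bg}$ in ${\rm s^{-1}}$), is given below; the function and variable names are illustrative and do not correspond to the actual <span style="font-variant:small-caps;">enzo</span> routine.

```python
def gamma_shielded(n_H, T, gamma_bg):
    """Self-shielded photoionization rate Gamma = f_sh * Gamma_bg,
    following the Rahmati et al. (2013) fits quoted above.
    n_H      : total (neutral + ionized) hydrogen number density [cm^-3]
    T        : gas temperature [K]
    gamma_bg : unattenuated photoionization rate [s^-1]
    """
    # characteristic self-shielding density n_H,sh
    n_sh = 5.0e-3 * (T / 1.0e4) ** 0.17 * (gamma_bg / 1.0e-12) ** (2.0 / 3.0)
    x = n_H / n_sh
    # suppression factor f_sh from the fitting formula above
    f_sh = 0.98 * (1.0 + x ** 1.64) ** -2.28 + 0.02 * (1.0 + x) ** -0.84
    return f_sh * gamma_bg

# f_sh -> 1 for n_H << n_H,sh, while the rate is strongly suppressed for n_H >> n_H,sh:
print(gamma_shielded(n_H=1e-4, T=2e4, gamma_bg=1e-12))  # close to gamma_bg
print(gamma_shielded(n_H=1.0,  T=2e4, gamma_bg=1e-12))  # much smaller than gamma_bg
```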
When computing the relative hydrogen ionization and heating rates, we assume a $T_*=3\times 10^4 ~{\rm K}$ black-body spectrum. We do not include any helium photoionization or photoheating. For the spectra of Pop II stars, we do not expect that including helium would change our results. However, for harder spectra (e.g. from active galactic nuclei), double ionization of helium could potentially heat ionized gas to higher temperatures, leading to larger collapse masses than discussed below. We note that the functional form of the shielding factor is calibrated in [@2013MNRAS.430.2427R] with cosmological simulations including radiative transfer at lower redshifts than we explore here. However, as we demonstrate next, this prescription gives similar results to using the (computationally much more expensive) ray-tracing radiative transfer available in <span style="font-variant:small-caps;">enzo</span>. The main goal of this paper is to determine how the halo mass and redshift when gas first collapses and forms stars depends on the intensity of ionization radiation. To test the accuracy of our approximate method, we compare a simulation of halo A, performed with the method described above, to a simulation which includes radiative transfer. For the simulation with radiative transfer, we utilize <span style="font-variant:small-caps;">enzo</span> version 3.0 with the ray tracing radiative transfer algorithm described in [@2011MNRAS.414.3458W]. When performing the simulation with radiative transfer, we insert a radiation particle at the center of the box at $z=30$ which emits $7.5 \times 10^{54}$ photons per second. The photon energies are in 6 bins spaced evenly in log-space such that 10 per cent of these photons are above the hydrogen ionization threshold. This setup is meant to resemble a halo forming in pristine gas exposed to ionizing radiation from a large Pop II galaxy/galaxies in a large HII bubble. In the case of the local shielding approximation, we simulate the same halo with a uniform ionizing background of $1.3 \times 10^8 {\rm photons~s^{-1} cm^{-2}}$ (also turned on at $z=30$). The background is chosen to roughly match the ionizing flux at the location of halo collapse (at $z\approx 14$), in the simulation with radiative transfer. This collapse location is approximately $7~{\rm kpc}$ from the location of the radiation particle at the time of collapse. For this comparison example, we have chosen a large flux to test our approximate method in the high-flux case. We do not repeat the same comparison for other values of the flux, due to the high computational cost of utilizing radiation transfer. Figure \[RT\_fig\] shows the maximum gas density within Halo A as a function of redshift for both simulation methods. We emphasize that the radiative transfer simulation gives very similar results to the much less computationally demanding shielding approximation. For the remainder of this work, we exclusively discuss runs performed with this approximate method. For comparison we also plot the shielding approximation case with 2000 times lower flux, leading to collapse at $T_{\rm vir} \approx 10^4~{\rm K}$. The higher ionizing flux significantly delays gas collapse and star formation. ![\[RT\_fig\] The maximum gas density in halo A as a function of redshift. The three curves showing runaway collapse at $z\approx 14$ are for a strong ionizing source, with ionizing flux of $\sim 10^8 ~{\rm photons~s^{-1} cm^{-2}}$. 
The solid curve is simulated with <span style="font-variant:small-caps;">enzo</span>’s ray tracing treatment of radiative transfer, while the dashed and dot-dashed curves are computed with the simpler self-shielding approximation discussed in §2.2. This demonstrates that the simpler and less numerically expensive prescription gives a consistent collapse redshift with the radiative transfer run. The dotted curve is for flux 2000 times lower, which results in collapse at the atomic cooling threshold much earlier on. The dashed and dot-dashed curves include and exclude molecular hydrogen cooling, respectively (as discussed in §2.3). ](RT_check){width="90mm"} Molecular Hydrogen ------------------ <span style="font-variant:small-caps;">enzo</span> has options to include a chemical network with 6 species (${\rm H}$, ${\rm He}$, ${\rm H^+}$, ${\rm He^+}$, ${\rm He^{++}}$, and ${\rm e}^-$), as well as an expanded network which also includes ${\rm H_2}$, ${\rm H^-}$, and ${\rm H_2^+}$. The main focus of this paper is to understand the impact of ionizing radiation on Pop III star formation assuming that LW radiation can suppress star formation up to the atomic cooling threshold. For this reason, we do not perform a detailed study of the effect of molecules, however we briefly discuss their impact here. For the comparison shown in Figure \[RT\_fig\], the case with radiative transfer includes molecular hydrogen as well as a uniform LW background with a value $J_{\rm LW} = 100 \times J_{21}$, defined such that the ${\rm H_2}$ photodissociation rate equals $ k_{\rm H_2} = 1.42 \times 10^{-12} J_{\rm LW} / J_{21} ~ {\rm s^{-1}} $ and $J_{21} = 10^{-21}~ {\rm erg~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$. This photodissociation rate is modified by the self-shielding function described in [@2011MNRAS.418..838W]. We run the simulation both with molecular hydrogen and the same LW background as well as a case without any molecular hydrogen. We find that including LW feedback and molecules results in approximately the same collapse redshift as not including any molecules. When performing similar simulations on halos B and C with the shielding approximation and molecular hydrogen, we find that turning on ionization at $z=20$ actually enhances ${\rm H_2}$ formation and leads to collapse before the halo reaches the atomic cooling threshold (even with a $J_{\rm LW}=100 \times J_{21}$ background). We suspect that our assumption of a spatially uniform and isotropic photoionizing background may be artificially promoting collapse, since the outer regions ionize and heat first, simultaneously compressing the halo from all sides unrealistically. This does not occur in our examples with halo A above because the ionizing radiation is turned on before a minihalo has formed. Ignoring this likely unrealistic effect, collapse should be triggered by H line cooling. For this reason, for the rest of our simulations we do not include any molecular hydrogen. The results in Figure \[RT\_fig\] suggest that this should have a relatively small impact on the redshift and halo mass of collapse. However, molecular hydrogen may have a larger impact on our estimated limits of Pop III stellar mass formed, as we discuss in §4. Ionization Feedback =================== In order to characterize the impact of photoionization feedback on Pop III star formation, we simulate the formation of halos A, B, and C over a wide range of ionizing fluxes. 
We run each simulation until it reaches the maximum level of refinement during the runaway cooling and collapse of gas, which should immediately precede star formation. In Figure \[m\_fig\], we plot the mass, redshift, and virial temperature at collapse for each halo as a function of ionizing flux. For simplicity, we plot the virial temperature as defined in [@2001PhR...349..125B] assuming $\mu=1.22$, which corresponds to neutral gas (even though for high fluxes the gas is ionized). We quote all flux values in terms of a reference flux corresponding to $2\times 10^{54}$ ionizing photons per second from a source at a distance of $50~{\rm kpc}$, $F_0 = 6.7\times 10^6~ {\rm photons~s^{-1}~cm^{-2}}$. We note the merger tree analysis from [@2016MNRAS.460L..59V] demonstrates that a $M = 6.6 \times 10^{11} ~ M_\odot$ dark matter halo at $z \approx 7$ with a 10 per cent star formation efficiency and an ionizing photon escape fraction of $f_{\rm esc}=0.1$ will produce $\sim 2 \times 10^{53}$ ionizing photons per second over a range of $\Delta z \approx 10 $ in redshift (see their Figure 1). Thus, at a separation of $\sim 50~ {\rm kpc}$ from such a halo, the flux will be $0.1F_0$ up to very high redshift. For halo A, the flux is turned on at $z=30$ and for halos B and C the flux is turned on at $z=20$. We find that as long as there is sufficient time for the intergalactic medium (IGM) to be reionized before halo formation, changing the start time of the source does not have a large impact on our results. For example, we performed a run with $F_0$ flux starting at $z=16$ on halo C and found collapse at almost exactly the same redshift as the case starting at $z=20$. To check for convergence, we simulated halo B with mass resolution 8 times higher in the refinement region than our other runs. In Figure \[m\_fig\], we plot both the low- and high-resolution results for collapse time, redshift and virial temperature. The close match between the different resolutions suggests that our results have converged. ![image](M_vs_flux){width="80mm"} ![image](Tvir_vs_flux){width="80mm"} ![image](z_vs_flux){width="80mm"} For the lowest fluxes, collapse happens at $T_{\rm vir}\approx 10^4~{\rm K}$ for all of our halos. At intermediate flux (the precise intensity varies for each halo), the collapse mass increases with flux, and for high flux the mass dependence on flux again becomes very weak. Examining our runs in detail gives us a clear picture of the physical processes resulting in this behavior. The low flux collapse at $T_{\rm vir}\approx 10^4~{\rm K}$ is simply because this is the lowest temperature at which atomic hydrogen cooling is efficient (as discussed above, we assume molecules have been dissociated by LW radiation). Gas in smaller halos cannot cool and therefore does not undergo runaway collapse. At intermediate and high fluxes ($\gtrsim F_0$ for halos A and B, and $\gtrsim 0.1 F_0$ for halo C), ionization significantly delays collapse, increasing the collapse mass, by an order of magnitude or more. In these cases, the gas is ionized and photoheated to $\approx 2 \times 10^4~K$ before the formation of the halo. This photoheated gas then settles in the halo forming a pressure-supported core in quasi-hydrostatic equilibrium. This can be seen in Figure \[prof1\_fig\], where we plot the density, ionization, and temperature profiles for halo B at $z=15$ and $z=12$ in the $F_0$ ionizing flux case. This is significantly before the collapse at $z\approx8$. 
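The flux values quoted in this section follow from simple geometric dilution; a minimal Python sketch (assuming isotropic emission and no intervening absorption; the helper name is illustrative) reproduces them before we turn to the structure of the photoheated core.

```python
import math

KPC_CM = 3.086e21  # centimetres per kiloparsec

def ionizing_flux(ndot_ion, d_kpc):
    """Ionizing photon flux [photons s^-1 cm^-2] at a distance d_kpc [kpc]
    from a source emitting ndot_ion ionizing photons per second,
    assuming isotropic emission and no intervening absorption."""
    return ndot_ion / (4.0 * math.pi * (d_kpc * KPC_CM) ** 2)

print(ionizing_flux(2e54, 50.0))         # ~6.7e6, i.e. F_0
print(ionizing_flux(2e53, 50.0))         # ~6.7e5, i.e. ~0.1 F_0
# comparison source of Section 2.2: 10 per cent of 7.5e54 photons/s are
# ionizing, evaluated at the ~7 kpc distance of the collapse site
print(ionizing_flux(0.1 * 7.5e54, 7.0))  # ~1.3e8
```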
In Figure \[prof1\_fig\], we also use the spherically averaged profiles to plot each side of the equation for hydrostatic equilibrium, $$\label{hydro_eqn} \left |\frac{dP}{dr} \right | = \frac{G M_{\rm tot}(r) \rho_{\rm gas}(r)}{r^2},$$ where $M_{\rm tot}(r)$ is the total mass enclosed within radius $r$, and $\rho_{\rm gas}$ is the gas density. It is clear that the gas core in the halo is in quasi-hydrostatic equilibrium. ![image](profile1_rho){width="80mm"} ![image](profile1_T){width="80mm"} ![image](profile1_HI){width="80mm"} ![image](hydro2){width="80mm"} As the halo increases in mass, the density of the gas core increases to maintain quasi-hydrostatic equilibrium. This can be seen in Figure \[rho\_vs\_z\_fig\], where we plot the maximum density of the core in halo B as a function of redshift for a range of fluxes. As the density increases, so does the recombination rate and the neutral fraction. Eventually, the density reaches a level where the core is no longer stable and runaway collapse occurs. To better understand how this collapse is triggered, we plot the photoheating, hydrogen cooling, and dynamical timescales in the core as a function of redshift in Figure \[timescales\_fig\]. We have shown the case of halo B with a background flux of $F_0$. The cooling/heating times, $t_{\rm cool}$ and $t_{\rm p-heat}$, are defined as the total thermal energy density divided by the cooling/heating rates. The dynamical time is given by the free-fall time for a spherically symmetric mass distribution, $t_{\rm dyn} = \sqrt{\frac{3\pi}{32 G\rho_{\rm tot}}}$, where the density includes both dark matter and gas. At early times, we see that $t_{\rm cool} \approx t_{\rm p-heat} < t_{\rm dyn}$. This is consistent with our earlier finding that the core is in quasi-static equilibrium and means that the core properties only evolve slowly as the halo grows in mass. As time goes on and the potential well of the halo deepens, the dynamical time drops and the thermal equilibrium timescale increases. Once the photoheating and cooling timescales are comparable to the dynamical timescale, the core becomes unstable and collapse begins. During collapse, the cooling time is roughly equal to the dynamical time and compressional heating is balanced by hydrogen cooling. Thus, $t_{\rm p-heat} \gtrsim t_{\rm dyn}$ is the key criterion for collapse to be triggered. Suppressing numerical factors and physical constants, the heating timescale goes as $$t_{\rm p-heat} \propto \frac{T}{f_{\rm sh} F_{\rm bg} \mu f_{\rm HI} }, \label{theat_eqn}$$ where $\mu$ is the mean molecular weight, which depends on the neutral fraction, $f_{\rm HI}$. Leading up to collapse for the case plotted in Figure \[timescales\_fig\], the temperature, dark matter density, and neutral fraction (which is $f_{\rm HI} \approx 1)$ in the core do not vary quickly. Thus, the increase in $t_{\rm p-heat}$ is driven by the increase in self-shielding, which is changed mostly by the increasing gas density. We note that in the case where $f_{\rm HI} \ll 1$, replacing the equilibrium value for $f_{\rm HI}$ gives us a heating time scaling of $t_{\rm p-heat} \propto T^{3/2}/\rho_{\rm gas}$. Thus, the reason the increased self-shielding raises $t_{\rm p-heat}$ in the present example is due to the fact that the gas becomes mostly neutral well before collapse. Generally, the density, ionization fraction, and temperature evolution of the gas core determines the point at which the collapse criterion ($t_{\rm p-heat} \gtrsim t_{\rm dyn}$) is satisfied. 
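
For reference, the two timescales entering this criterion can be evaluated directly from the core properties. The sketch below (Python, cgs units) uses the free-fall time defined above; the photoionization rate `gamma_HI` and the mean energy deposited per ionization `e_heat` are placeholder values for illustration only, not the ones adopted in our runs, and helium is ignored.

```python
import numpy as np

G   = 6.674e-8   # cm^3 g^-1 s^-2
k_B = 1.381e-16  # erg K^-1

def t_dyn(rho_tot):
    """Free-fall time of the total (gas + dark matter) density [g cm^-3]."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho_tot))

def t_photoheat(n_H, T, f_HI, f_sh, gamma_HI=3.0e-13, e_heat=6.0e-12):
    """Schematic photoheating time: thermal energy density over heating rate.
    n_H [cm^-3], T [K]; gamma_HI [s^-1] and e_heat [erg] are placeholders."""
    n_part  = n_H * (2.0 - f_HI)                      # H nuclei plus free electrons
    u_th    = 1.5 * n_part * k_B * T                  # erg cm^-3
    heating = n_H * f_HI * f_sh * gamma_HI * e_heat   # erg cm^-3 s^-1
    return u_th / heating

# Runaway collapse is expected once t_photoheat(...) exceeds t_dyn(...).
```
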
In principle, one may be able to analytically estimate the equilibrium values of these quantities at each redshift by solving the equations of hydrostatic equilibrium in a fixed dark matter potential. However, the precise results may depend on the chosen boundary conditions. We defer these calculations to future work. From Eq. \[theat\_eqn\], it is clear that self-shielding has an important impact on when collapse occurs. This has been verified in our simulations. We repeated the simulation of halo B with $F_{\rm bg} = F_0$, but without any self-shielding (i.e. $f_{\rm sh} = 1$) and found that collapse was delayed. In this case, collapse occurs approximately at $z=7.3$, which is even slightly later than in the case of $F_{\rm bg} = 100F_0$ when self-shielding is included.

We note that our simulations do not include pressure from Ly$\alpha$ radiation produced during cooling. In principle, this could provide some additional support against gravity, delaying collapse to larger masses. In [@2010ApJ...712L..69S], one-zone models were utilized to analyze the collapse of pristine gas including Ly$\alpha$ trapping (see also [@2006ApJ...652..902S]). They find that for increased optical depth to Ly$\alpha$ scattering, the overall hydrogen cooling rate is not greatly changed (see their Figure 3). This is because at high density two-photon decays lead to cooling through optically thin continuum photons. That there is not a significant reduction in the overall cooling rate suggests that hydrogen cooling radiation can escape on a timescale similar to the dynamical time or faster. Thus, due to this relatively short trapping timescale, we do not expect Ly$\alpha$ pressure to have a large impact on the results presented above. We defer a more detailed study of this effect to future work.

For the highest fluxes we simulate, the collapse mass depends very weakly on flux. This is because once a halo reaches a very high mass, corresponding roughly to the virial temperature surpassing the photoheated gas temperature, the density of the core rapidly increases. The density quickly grows so large that even for our most extreme flux, the gas rapidly recombines and collapses. In Figure \[prof2\_fig\], we plot an example of the density, ionization fraction, and temperature profiles at collapse.

![\[rho\_vs\_z\_fig\] The maximum gas density versus redshift for halo B. The three curves are for fluxes of $F_0$, $10F_0$, and $100F_0$, with higher fluxes resulting in collapse at increasingly lower redshift.](rho_vs_z){width="85mm"}

![\[timescales\_fig\] [ The hydrogen cooling, photoheating, and dynamical timescales as a function of redshift for halo B with $F_{\rm bg} = F_0$. These quantities are computed in the cell with the highest gas density (initially in the quasi-hydrostatically supported gas core). Collapse is triggered when $t_{\rm p-heat} > t_{\rm dyn}$. The increase in $t_{\rm p-heat}$ leading up to collapse is driven primarily by the increased self-shielding of the gas core.]{}](timescales){width="85mm"}

![image](profile2_rho){width="80mm"} ![image](profile2_T){width="80mm"} ![image](profile2_HI){width="80mm"} ![image](profile2_HII){width="80mm"}

Limits on Pop III Stellar Mass
==============================

In this section, we discuss the total mass of a Pop III starburst which can form in dark matter halos subjected to photoionization feedback.
While simulating the detailed processes of star formation is beyond the scope of this paper, we estimate upper limits on the Pop III stellar mass by examining the radial velocity profiles of the gas at collapse. This is shown in Figures \[stellar\_fig1\] and \[stellar\_fig2\], where we plot the infall time $t_{\rm inf} = r/v_{\rm rad}$ (where $v_{\rm rad}$ is the spherically averaged radial velocity), versus the gas mass enclosed within radius $r$. This shows how much gas can reach the central region of a halo for a range of timescales. We focus mostly on how much gas can reach the center by $3~{\rm Myr}$, because this is the lifetime of $\sim 80~M_\odot$ Pop III stars . After this, gas may be polluted by metals from the first stellar generation, and subsequently form Pop II stars. For completeness, we also consider $20~{\rm Myr}$, which corresponds to the lifetime of a $9~M_\odot$ Pop III star . We note that this timescale is probably too long to be relevant because not having stars with masses $\gtrsim 100~M_\odot$ would require an unrealistically bottom-heavy initial mass function (IMF). Keeping the star-forming gas pristine for this long would require very inefficient metal mixing. In Figure \[stellar\_fig1\], we plot infall times for halo A. In the left panel we show the impact of resolution. For strong ionizing feedback, our high- and normal-resolution runs agree well above $M(r)\approx 10^6 M_\odot$ (corresponding to $t_{\rm inf} \approx 3 ~{\rm Myr}$). Below this value, our normal resolution contains more noise and shows a factor of a few larger mass at a given infall time. For comparison we also include a case with weak feedback and find that a factor $\sim 3$ less gas can reach the center within a few Myr. In the right panel of Figure \[stellar\_fig1\], we show the impact of including/excluding molecular hydrogen (with a $J_{LW}=100\times J_{21}$ LW background) and of our local self-shielding approximation. We find that the self-shielding approximation and the full radiative transfer simulation with molecular hydrogen give very similar results. However, compared to not including molecules, molecular cooling decreases the mass which can reach the center significantly (e.g. factor of $\sim 4$ for $3~{\rm Myr}$, from $\approx 2\times 10^6 M_\odot$ to $\approx 5\times 10^5 M_\odot$ ). Thus, for the results shown for halos B and C discussed next, the upper limits are rather conservative and could be a factor of a few lower than what we find without including molecular hydrogen. We suspect that molecular hydrogen increases the infall time because it cools the gas to lower temperatures, reducing the sound speed and creating weak shocks at lower velocities which slows this gas. This is consistent with the results of [@2010MNRAS.402.1249S], which show that the infall velocity in a collapsing atomic cooling halo is approximately equal to the sound speed. In Figure \[stellar\_fig2\], we plot the infall time for halos B and C over a large range in ionizing flux. We find that within $3~ {\rm Myr}$, our limiting gas mass is between $\approx 2\times 10^5 - 2\times 10^6~ M_\odot$. For this infall time, we find that strong ionizing radiation can increase the infalling mass by a factor of a few, but with scatter from run to run. Interestingly, if we allow additional time for star formation (due to longer-lived stars or slower metal mixing), we can reach significantly higher masses. 
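
The bookkeeping behind these limits is straightforward; a minimal sketch follows, with the spherically averaged profile arrays standing in for the actual simulation output (all names below are hypothetical):

```python
import numpy as np

def mass_reaching_center(r, v_rad, m_gas_enc, t_max):
    """Upper limit on the gas mass that can reach the halo center within t_max.
    Uses the infall time t_inf = r / |v_rad| of each radial shell; r, v_rad
    (negative for infall) and the enclosed gas mass m_gas_enc are spherically
    averaged profiles in consistent units."""
    t_inf = np.full_like(r, np.inf, dtype=float)
    infalling = v_rad < 0.0
    t_inf[infalling] = r[infalling] / np.abs(v_rad[infalling])
    within = t_inf <= t_max
    return m_gas_enc[within].max() if within.any() else 0.0
```

Evaluating this with progressively larger `t_max` reproduces the trend noted above, namely that allowing more time for star formation lets substantially more gas reach the center.
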
For example, when halo C is exposed to flux $\gtrsim F_0$, approximately $10^7~M_\odot$ of gas can reach the center of the halo within $\sim 20~{\rm Myr}$, approximately 5 times more than in the lowest flux shown. Finally, we consider the observational prospects of Pop III stars formed in massive halos due to LW and photoionization feedback. Of the three halos we simulate, halo C seems the most likely to produce an observable starburst due to its higher infall mass than halo B and lower redshift than halo A. For strong feedback, $\sim 10^6~{\rm M_\odot}$ and $\sim 10^7~{\rm M_\odot}$ of gas can fall to the center of halo C within 3 Myr and 20 Myr, respectively. We note that including molecular hydrogen would likely reduce these masses by a factor of a few. [@2011ApJ...740...13Z] computed the total mass of a Pop III galaxy which could be observed in a 100-hour integration with *JWST*. They found that within 3 Myr of a starburst $\sim {\rm a~few~} \times 10^4~ M_\odot$ and $\sim {\rm a~few~} \times 10^5~ M_\odot$ of Pop III stars could be detected at $10-\sigma$ for maximal and no-nebular flux contribution, respectively (at $z \sim 6$). Thus, if a significant fraction of the gas which makes it to the center of halo C forms Pop III stars, it could potentially be observable with *JWST*. Halo C subjected to an ionizing flux of $0.1F_0$ is similar to the case of photoionization feedback by a $6.6 \times 10^{11}~M_\odot$ star-forming dark matter halo discussed in [@2016MNRAS.460L..59V]. It was estimated that a comoving number density of $\approx 10^{-7}~{\rm Mpc}^{-3}$ of Pop III starbursts would be visible at $z=6.6$ if star formation was suppressed by photoionization feedback up to $\sim 10^9 ~M_\odot$ in halo mass. Since we find that star formation occurs in smaller halos, we expect a higher space density. We note that a number density of $10^{-7}~{\rm Mpc}^{-3}$ corresponds to $\approx 0.002$ per *JWST* field of view per unit redshift at $\approx6$. Thus, it may not be possible to find Pop III galaxies in halos much more massive than the atomic cooling threshold in blind searches. A better strategy is likely to be searching near massive galaxies which have already been discovered. Pristine halos with $T_{\rm vir}\approx 10^4~{\rm K}$ are likely to be more common (since they only require strong LW radiation), and may form similar masses of Pop III stars (perhaps a factor of a few less as suggested by the limits discussed above). Our limits suggest that CR7 cannot be explained by Pop III stars alone, as its luminosity requires $\sim 10^7 ~ M_\odot$ of massive Pop III stars. However, photoionization feedback could have played a role in keeping its metallicity low for much of its halo formation history. ![image](stellar_mass1){width="80mm"} ![image](stellar_mass2){width="80mm"} ![image](stellar_mass208){width="88mm"} ![image](stellar_mass205){width="88mm"} Discussion and Conclusions ========================== We have performed cosmological zoom-in simulations to study the delay of Pop III star formation due to strong photoionization feedback. We utilize a local self-shielding approximation based on [@2013MNRAS.430.2427R], which for our test case is consistent with the ray tracing radiative transfer available in <span style="font-variant:small-caps;">enzo</span>, but is much less computationally demanding. 
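
For reference, a minimal sketch of a Rahmati et al. (2013)-type suppression factor for the photoionization rate is given below; the coefficients are the published fit from that paper and are quoted here only for illustration, while the exact form and the self-shielding density scale adopted in our runs are described in §2.2.

```python
def uvb_suppression(n_H, n_ssh):
    """Photoionization-rate suppression Gamma/Gamma_bg as a function of the
    hydrogen number density n_H [cm^-3]; n_ssh is the self-shielding density
    scale (itself a function of temperature and the unshielded rate)."""
    x = n_H / n_ssh
    return 0.98 * (1.0 + x**1.64) ** (-2.28) + 0.02 * (1.0 + x) ** (-0.84)
```
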
Using this approximation, we determine the redshift and halo mass of runaway gas collapse immediately preceding star formation for three different dark matter halos over a wide range in ionizing fluxes. We find that for low ionizing flux, collapse first occurs at $T_{\rm vir} \approx 10^4~{\rm K}$. This is because it is the lowest temperature where atomic hydrogen cooling is efficient (we assume molecular hydrogen is photodissociated by a strong LW background). At intermediate flux, the collapse is delayed significantly, with the length of the delay depending on the strength of the ionizing flux. For halos A and B, feedback becomes important at $F_0$, while for halo C it becomes important at $0.1F_0$. These high fluxes (required to exist already at high redshift, far in advance of collapse) suggest that for ionization feedback to become important, a halo must be near one or more large star-forming galaxies. At fluxes $\gtrsim 10F_0$, the threshold halo mass at collapse reaches a plateau and depends only weakly on flux, with stars forming in halos of $\approx 5\times 10^8 ~ M_\odot$. This is roughly an order of magnitude higher than the low-flux case (corresponding to a factor of 5 higher in $T_{\rm vir}$). We note that while we have only considered redshift-independent fluxes (with some turn-on time), it may be possible for the flux to be lower at higher redshifts, ramping up over time, and still result in similar delays of star formation. We leave an investigation of different flux histories for future work.

We note that for an escape fraction of 0.1 in ionizing photons and 1 in LW photons, the LW background corresponding to $F_0$ is $J_{\rm LW} = 150 \times J_{21}$. For flux values $\gtrsim 10F_0$, the LW background could exceed the critical intensity required to form a supermassive star, leading to a so-called direct collapse black hole (DCBH) [@2003ApJ...596...34B; @2016arXiv160902142W; @2010MNRAS.402.1249S; @2014MNRAS.445..544S]. Thus, for these extremely high fluxes it is unclear what the final state of the collapsing gas will be, since DCBH formation has not been studied in this context of massive halos and strong, long-lived ionizing radiation. [@2016MNRAS.459.3377R] studied DCBH formation including photoionization, but in smaller halos and with a shorter duration of ionizing radiation than we consider here.

Previously it has been suggested that the mass at which star formation is suppressed is set by the Jeans mass of photoheated gas at the cosmic background density, because below this mass pressure forces support the gas against gravity. We find that star formation can occur at much smaller masses. For reference, the Jeans mass of $T=2\times 10^4 ~{\rm K}$ gas at the mean cosmic density at $z=6$ is $M_{\rm J} \sim 3 \times 10^{10} M_\odot$. Assuming the gas was locally ionized in the distant past, the filtering mass [@2000ApJ...542..535G] is $M_{\rm F} \sim 5 \times 10^{9} M_\odot$, including the factor of 8 correction from [@2009MNRAS.399..369N] (see Eqs. 2 and 8 in @2014MNRAS.444..503N). Even for extremely high ionizing fluxes, we find about an order of magnitude lower halo masses at collapse.

By examining the outputs of our simulations, we were able to understand the physical processes which set the collapse mass. For strong ionizing feedback, after the gas is ionized, it settles into an ionized core in quasi-hydrostatic equilibrium. As the halo grows, the density of the core increases, leading to an increasing recombination rate.
Eventually, the photoheating timescale in the self-shielded core becomes shorter than the dynamical timescale, leading to runaway collapse and star formation. For intermediate fluxes, the higher the flux, the longer this takes to happen. Once the virial temperature of the halo significantly exceeds the temperature of the photoheated gas, the density increases rapidly and even very large flux cannot delay collapse beyond $T_{\rm vir}\approx 3\times 10^4 ~{\rm K}$. To study the total stellar mass of Pop III starbursts, we examine the radial velocity profiles of collapsing gas at the end of our simulations. This allows us to put upper limits on the star formation which can occur within a given duration of time after collapse. Our halo C represents the most promising possibility of an observable mass of Pop III stars. We find that for strong feedback, $\sim 10^6~{\rm M_\odot}$ and $\sim 10^7~{\rm M_\odot}$ of gas reach the center of the halo C within 3 Myr and 20 Myr (corresponding to the lifetimes of 80 and 9 $M_\odot$ stars, respectively). Including molecular hydrogen would likely reduce these numbers by a factor of a few. On the other hand, photo-heating by sources with a harder spectrum to higher temperatures could potentially increase the masses. However for the spectra considered here, we point out that photoionization flux increases the halo mass of collapse much more than the possible star forming gas. These Pop III galaxy mass limits suggest that CR7 cannot be explained solely by a Pop III starburst formed from photoionization feedback (with a Pop II ionizing source), as this would require $\approx 10^7 ~M_\odot$ of massive ($\gtrsim 100 ~M_\odot$) Pop III stars to generate the HeII 1640 Å emission line [@2015ApJ...808..139S; @2015MNRAS.453.2465P; @2016MNRAS.460L..59V]. This is consistent with the recent observations by [@2016arXiv160900727B], who find evidence of oxygen lines in Spitzer/IRAC data. If a significant fraction of the infalling gas mass we find in our simulations forms Pop III stars, they may be observable with future telescopes such as *JWST*. Our simulations suggest that there should not be a dramatic difference in total stellar mass between halos with strong or weak ionizing feedback (assuming both have molecular cooling suppressed by LW radiation). Pop III starbursts formed near bright galaxies are likely to be a factor of a few times more massive than those with weak ionizing feedback, but may be hosted by over an order of magnitude more massive dark matter halos. Thus, the ratio of stellar mass to halo mass is significantly lower in halos which experienced strong ionizing feedback. In principle, this may be observable (through e.g. differences in the velocity dispersion of stars). In future work, it will be important to simulate the detailed process of star formation and internal metal mixing after the initial collapse to better estimate the total number and mass of Pop III stars formed. It will also be useful to perform a detailed estimate of the cosmic number density of these Pop III starbursts which will be observable with *JWST* or future ground-based telescopes such as the *Thirty Meter Telescope* and the *European Extremely Large Telescope*. Acknowledgements {#acknowledgements .unnumbered} ================ We thank John Regan for useful discussions. The Flatiron Institute is supported by the Simons Foundation. EV was also supported by the Columbia Prize Postdoctoral Fellowship in the Natural Sciences. 
ZH was supported by NASA grant NNX15AB19G and by a Simons Fellowship in Theoretical Physics. GLB was supported by NASA grant NNX15AB20G and NSF grants AST-1312888 and AST-1615955. The computations in this paper were carried out on the NASA Pleiades supercomputer. [^1]: [email protected]
---
abstract: 'In this paper we present visible-range light curves of the irregular Uranian satellites Sycorax, Caliban, Prospero, Ferdinand and Setebos, taken with the Kepler Space Telescope in the course of the K2 mission. Thermal emission measurements of Sycorax and Caliban obtained with the Herschel/PACS and Spitzer/MIPS instruments were also analysed and used to determine the size, albedo and surface characteristics of these bodies. We compare these properties with the rotational and surface characteristics of irregular satellites in other giant planet systems, and also with those of main belt and Trojan asteroids and trans-Neptunian objects. Our results indicate that the Uranian irregular satellite system likely went through a more intense collisional evolution than the irregular satellites of Jupiter and Saturn. The surface characteristics of the Uranian irregular satellites seem to resemble those of Centaurs and trans-Neptunian objects more than those of the irregular satellites around other giant planets, suggesting the existence of a compositional discontinuity in the young Solar system inside the orbit of Uranus.'
author:
- 'A. Farkas-Takács'
- 'Cs. Kiss'
- 'A. Pál'
- 'L. Molnár'
- 'Gy. M. Szabó'
- 'O. Hanyecz'
- 'K. Sárneczky'
- 'R. Szabó'
- 'G. Marton'
- 'M. Mommert'
- 'R. Szakáts'
- 'T. Müller'
- 'L.L. Kiss'
title: 'Properties of the irregular satellite system around Uranus inferred from K2, Herschel[^1] and Spitzer observations'
---

Introduction
============

In terms of orbital dynamics, giant planets possess two distinct types of satellites. *Regular* satellites are characterized by low-eccentricity, always prograde orbits very close to the planet’s equatorial plane, within $\sim0.05\,r_H$, where $r_H$ is the radius of the Hill sphere of the host planet. In contrast, *irregular* satellites have moderate-to-high eccentricities and inclinations, with prograde or retrograde orbits up to $0.65\,r_H$ from their host planets. The existence of two classes of satellites reflects two different evolutionary paths: regular satellites likely formed in the same subnebula as the host planet, while irregular satellites could not have formed at their present orbits. The currently most accepted scenario is that they were captured within the planet’s subnebula in the last phase of planet formation onto temporary orbits, then settled through some kind of angular momentum loss [see @Nicholson for a review]. There were several deep surveys of irregular satellites in the 2000s that established the basis of the currently known set of irregular satellites [@Gladman02; @Gladman01; @Holman03; @Sheppard03; @SheppardIAUC; @Sheppard05; @Sheppard06]. These surveys provided the main orbital characteristics of the satellites found and allowed the identification of orbital groupings/families around the specific planets [@Nicholson]. Unlike the members of other small body populations in the Solar System, irregular satellites may have remained close to their formation locations, and their compositions may be intermediate between those of the main belt asteroids and the icy trans-Neptunian objects. The physical characterization of these satellites is, however, still a challenging task due to the large distances and the typical sizes below 100 km, even in the case of the closest, Jovian system.
Among these characteristics, light curves provide information on the shape and/or surface albedo variegations and may give hints of the internal structure and strength in the case of fast rotators; the distribution of rotational frequencies and amplitudes are important properties of a small body population [@Pravec2002]. High-quality light curve is available for the largest Jovian irregular, Himalia [@Pilcher], rotational properties are known for the Jovian satellites Lysithea, Ananke, Carme and Sinope [@Luu] and the Cassini spacecraft provided rotation periods for many irregular satellites in the Saturnian system [@Denk2013; @Denk2014; @Denk2015]. In the Uranian system, @Maris01 [@Maris07] performed the investigation of the light curves of irregular satellites and obtained rotational characteristics for Sycorax, Caliban, Prospero and Setebos. However, in these latter cases the results are based on sparsely sampled data due to the observing capabilities of the telescopes and the large distance (hence faintness) of these satellites. Broad-band colors are the most readily available tools that can be used to characterize the surface of the irregular satellites [see @Nicholson and references therein]. In the Jovian system the colors are similar to those of carbonaceous asteroids and Jovian Trojans, while in the Saturnian system they show somewhat redder surfaces. However, in both systems the colors are still far from that of the red material typically found in the Kuiper belt. In the Uranian system the colors show a wide variety, but there are certainly satellites that show typical “Kuiper belt" colors [@Maris01; @Maris07; @Romon01; @Grav04]. Well-established size and albedo values are available for a limited sample only, mainly from data of space probes – e.g. Himalia and Phoebe by Cassini [@Porco03; @Porco05], or Nereid by Voyager-2 [@Thomas1991]. Recently, thermal emission data obtained with the MIPS camera of the Spitzer Space Telescope [@MIPS] and the PACS camera of the Herschel Space Observatory [@PACS] also provided independent size and albedo estimates for Sycorax [@Lellouch13] and Nereid [@Kiss2016]. As we noted above, ground-based observations could not place strong constraints on the rotation of most of the irregular satellites observed earlier. However, it is demonstrated in recent works [@Pal15; @Pal16; @Szabo2016; @Szabo2017] that data from the extended *Kepler* mission [@Howell14 K2] can be very effectively used to obtain rotational light curves of Solar system bodies due to uninterrupted photometric time series of several tens of days in length, including main belt and Jovian Trojan asteroids, and trans-Neptunian objects, even at the brightness level of typical irregular satellites. Lately, a thorough light curve analysis of the Neptunian irregular satellite Nereid was also performed [@Kiss2016], showing the great capabilities of K2 measurements for this kind of applications. In this paper we present the results of Uranian irregular satellite observations performed with the *Kepler* space telescope in Campaign 8 of the K2 mission. We provide light curves and derive rotational characteristics for Sycorax, Caliban, Prospero, Setebos and Ferdinand (Sect. \[sect:rotation\]). In addition, we use thermal emission measurements of the Spitzer Space Telescope and the Herschel Space Observatory to derive more accurate size and albedo for Sycorax; we also give constraints on the size and albedo of Caliban, Trinculo and Ferdinand (Sect. 
\[sect:thermal\]) based on Herschel/PACS observations. Our results are compared with the properties of other irregular satellites and other small body populations of our Solar system (Sects. \[sect:rotcomp\] and \[sect:albedocolor\]). Observations ============ Kepler/K2 measurements ---------------------- Continuous photometry from the *Kepler* space telescope may provide accurate and unbiased rotation rates and amplitudes for solar system targets. The telescope gathers light in a wide visual band spanning from 420 to 900 nm, and follows a step–and–stare scheme in the K2 mission, observing different fields for up to 80 days along the Ecliptic plane [@Howell14]. *Kepler* observed Uranus and its vicinity during Campaign 8 of the mission for 78.73 days, between 2016 January 4.55 and March 23.28. Apart from the planet, four irregular moons, Caliban, Setebos, Sycorax, and Prospero, were also proposed and selected for observations (GO8039, PI: A. Pál)[^2]. We applied the same pipeline for the reduction of the *Kepler* observations that we used in previous works, e.g., to determine the light variations and rotation rates of various Trans-Neptunian Objects, main-belt and Trojan asteroids, or of the moon Nereid [@Kiss2016; @Pal15; @Pal16; @Szabo2016; @Szabo2017]. The method is based on the FITSH[^3] software package [@Pal2012]: the processing steps are detailed in the previous papers and in the companion paper that describes the light curves of main-belt asteroids drifting though the same image mosaic [@Molnar2017], also providing information on the limitations and capabilities. In short, we created mosaic images from the individual Target Pixel Files (TPFs) which contain the time-series photometric information (time, flux, flux error, background) for each downloaded pixel around a given target, as opposed to light curve files that contain a pipeline-extracted brightness summed in an aperture as a function of time. For more details the reader is directed to Kepler Archive Manual[^4]. We then derived the astrometric solutions for the mosaic images, using the USNO-B1.0 catalog [@USNO], where the K2 full-frame images from the campaign were exploited as initial hints for the source cross matching. Then, we registered the images into the same reference system, and subtracted a median image from each image. This median image was created from a selection of individual images that did not include the obvious diffractions pattern contamination from Uranus. We then applied aperture photometry at the positions of the satellites. The sharp images of the stars that were shifted to compensate the attitude changes of the telescope create characteristic residuals in the differential images that may contaminate our photometry. Therefore we filtered out the epochs when the scatter of the background pixels in the photometric annulus was high. The per-cadence photometric uncertainty values were derived from the shot noise of Kepler and from the estimated background noise. All observations were collected in long cadence mode, with a sampling of 29.4 min. Three of the proposed satellites fell into, or near the large mosaic that also covered the apparent motion of Uranus (Fig. \[fig:k2map\]). Other satellites were also present in the same mosaic, and we successfully detected the light variations of a fourth one, Ferdinand. Prospero fell onto an adjacent CCD module, and its motion was covered with a narrow band of pixels. 
In that case we collected the Target Pixel Files of nearby, unrelated targets into a small mosaic around the track of Prospero in order to have a good astrometric solution. The log of observations is summarized in Table \[tab:keplerlog\]. The duty cycle there shows the ratio of accepted photometric data points to the number of cadences when the satellites were present on the images. We note that Ferdinand was observed twice, at the beginning and the end of the campaign: the two rows in the table refer to the entire length of the observation and to the sections when Ferdinand was on the Kepler CCD array, respectively. The temporal distribution of data points we used for photometry is presented in Fig. \[fig:sampling\]. We also checked other satellites that fell on the CCD array, e.g., Stephano (24.5 mag) or Trinculo (25.5 mag), but found no meaningful signals in their photometry.

| Name | Start (TBJD) | Length (d) | Points | Duty cycle | K$_p$ (mag) |
|------------|---------|-------|------|------|-------|
| Caliban    | 7419.16 | 22.23 | 1019 | 0.93 | 21.99 |
| Setebos    | 7416.15 | 28.85 | 495  | 0.35 | 22.87 |
| Sycorax    | 7418.55 | 13.67 | 599  | 0.89 | 20.18 |
| Prospero   | 7416.48 | 28.89 | 793  | 0.56 | 22.94 |
| Ferdinand  | 7392.12 | 74.01 | 270  | 0.07 | 23.12 |
| *on array* |         | 16.14 |      | 0.32 |       |

: Log of the *Kepler* observations. TBJD means truncated BJD, BJD-2450000. The last line shows the values for Ferdinand if we ignore the 57.87 d long period it spent off the detector array during the campaign. Duty cycle is 1.0 when all photometric points are retained and 0 when all had to be discarded. K$_p$ is the mean brightness of the target in the Kepler photometric system.[]{data-label="tab:keplerlog"}

![image](uranus-k2fov.png){width="\textwidth"}

![Temporal distribution of data points used for photometry and light curve construction for Caliban, Setebos, Sycorax, Prospero and Ferdinand. $\Delta$JD is the date from the start of Campaign 8. \[fig:sampling\]](sampling.png){width="\columnwidth"}

Infrared data
-------------

### Sycorax

Sycorax was observed with the MIPS camera of the Spitzer Space Telescope [@MIPS] at two epochs, on December 27 and 29, 2008, both at 24 and 70$\mu$m. The summary of these observations is given in Table \[table:thermal\] below. Sycorax was successfully detected at both epochs and at both wavelengths. We used the same data reduction and photometry pipeline as in @stansberry2008 [@stansberry2012]. The MIPS instrument team data analysis tools [@Gordon2005] were used to produce flux-calibrated images for each band, and the contribution of background objects was subtracted [see @stansberry2008]. Aperture photometry was performed both on the original and the final images, and the final flux values were obtained using the aperture corrections by @Gordon2007 and @Engelbracht2007. Color correction of the in-band fluxes was done following [@Stansberry2007]. The flux densities obtained are presented in Table \[table:thermal2\].

![Spitzer/MIPS 24 and 70$\mu$m images of Sycorax. The top and bottom rows present the images corresponding to the AORKEYs 28832512 and 28832768, respectively.
On each image, red circles mark the target aperture and yellow apertures mark the positions used for background determinations [see @stansberry2008; @stansberry2012 for details] \[fig:mipsall\]](mips_all.png){width="8.5cm"}

| Instrument | AORKEY/OBSID | Target | Band ($\mu$m) | Date (JD) | r (AU) | $\Delta$ (AU) | $\alpha$ (deg) |
|---|---|---|---|---|---|---|---|
| Spitzer/MIPS | 28832512 | Sycorax | 24 | 2454827.702 | 20.081 | 19.672 | 2.68 |
| | 28832512 | | 70 | 2454827.718 | 20.081 | 19.672 | 2.68 |
| | 28832768 | | 24 | 2454829.672 | 20.081 | 19.704 | 2.71 |
| | 28832768 | | 70 | 2454829.688 | 20.081 | 19.704 | 2.71 |
| Herschel/PACS | 1342221837-38 | Sycorax | 70 | 2455710.589 | 20.084 | 20.519 | 2.62 |
| | 1342221875-76 | | 70 | 2455710.939 | 20.084 | 20.514 | 2.62 |
| | 1342221839-40 | | 100 | 2455710.617 | 20.084 | 20.519 | 2.62 |
| | 1342221877-78 | | 100 | 2455710.966 | 20.084 | 20.513 | 2.63 |
| | 1342221837-40 | | 160 | 2455710.589 | 20.084 | 20.519 | 2.62 |
| | 1342221875-78 | | 160 | 2455710.939 | 20.084 | 20.514 | 2.62 |
| Herschel/PACS | 1342236891-92 | Caliban | 70 | 2455933.979 | 20.119 | 20.359 | 2.73 |
| | 1342237436-37 | | 70 | 2455940.001 | 20.117 | 20.457 | 2.63 |
| | 1342236891-92 | | 160 | 2455933.979 | 20.119 | 20.359 | 2.73 |
| | 1342237436-37 | | 160 | 2455940.001 | 20.117 | 20.457 | 2.63 |
| Herschel/PACS | 1342236891-92 | Trinculo | 70 | 2455933.979 | 20.051 | 20.293 | 2.73 |
| | 1342237436-37 | | 70 | 2455940.001 | 20.048 | 20.390 | 2.64 |
| | 1342236891-92 | | 160 | 2455933.979 | 20.051 | 20.293 | 2.73 |
| | 1342237436-37 | | 160 | 2455940.001 | 20.048 | 20.390 | 2.64 |
| Herschel/PACS | 1342236891-92 | Ferdinand | 70 | 2455933.979 | 19.930 | 20.174 | 2.75 |
| | 1342237436-37 | | 70 | 2455940.001 | 19.929 | 20.173 | 2.65 |
| | 1342236891-92 | | 160 | 2455933.979 | 19.930 | 20.174 | 2.75 |
| | 1342237436-37 | | 160 | 2455940.001 | 19.929 | 20.173 | 2.65 |

: Summary and observing geometry of the Spitzer/MIPS and Herschel/PACS measurements used in this paper.[]{data-label="table:thermal"}

| Target | Detector/filter | $\lambda_{\mathrm{eff}}$ ($\mu$m) | F$_i$ (mJy) | $C_{\lambda}$ | $F_m$ (mJy) | H$_\mathrm{V/R}$ (mag) |
|---|---|---|---|---|---|---|
| Sycorax | MIPS 24 | 23.68 | 3.017$\pm$0.045 | 0.96$\pm$0.01 | 3.14$\pm$0.16 | 7.50$\pm$0.04 (Grav et al., 2004) |
| | MIPS 70 | 71.42 | 14.68$\pm$2.78 | 0.92$\pm$0.01 | 16.07$\pm$2.89 | |
| | MIPS 24 | 23.68 | 3.109$\pm$0.046 | 0.96$\pm$0.01 | 3.17$\pm$0.17 | |
| | MIPS 70 | 71.42 | 19.12$\pm$2.70 | 0.92$\pm$0.01 | 20.78$\pm$3.11 | |
| | PACS 70 | 70.0 | 16.7$\pm$0.6 | 0.98$\pm$0.01 | 17.0$\pm$1.0 | |
| | PACS 100 | 100.0 | 15.3$\pm$1.6 | 1.00$\pm$0.01 | 15.3$\pm$1.8 | |
| | PACS 160 | 160.0 | 5.3$\pm$3.1 | 1.04$\pm$0.01 | 5.5$\pm$3.2 | |
| Caliban | PACS 70 | 70.0 | 1.4$\pm$0.8 | 0.98$\pm$0.01 | 1.4$\pm$0.8 | 9.16$\pm$0.016 (Grav et al., 2004) |
| | PACS 160 | 160.0 | $<$3 mJy | 1.04$\pm$0.01 | $<$3 mJy | |
| Trinculo | PACS 70 | 70.0 | $<$0.8 mJy | 0.98$\pm$0.01 | $<$0.8 mJy | 11.92$\pm$0.18 (Grav et al., 2004) |
| | PACS 160 | 160.0 | $<$3 mJy | 1.04$\pm$0.01 | $<$3 mJy | |
| Ferdinand | PACS 70 | 70.0 | $<$0.8 mJy | 0.98$\pm$0.01 | $<$0.8 mJy | 12.5$\pm$0.1 (R) (this work) |
| | PACS 160 | 160.0 | $<$3 mJy | 1.04$\pm$0.01 | $<$3 mJy | |

: In-band (F$_i$) and colour-corrected monochromatic ($F_m$) flux densities with the applied colour-correction factors ($C_{\lambda}$), and the absolute magnitudes used in the thermal modeling.[]{data-label="table:thermal2"}

Sycorax was also observed in dedicated observations with the PACS camera of the Herschel Space Observatory, in the framework of the ‘TNOs are Cool!’ Herschel Open Time Key Program [@Muller2009]. The flux densities derived from these observations have already been presented in @Lellouch13. We used both the Spitzer/MIPS and Herschel/PACS data for the modeling of the thermal emission of the satellite (see Sect. \[sycoraxthermal\]).

### Serendipitous Herschel/PACS observations of irregular satellites

![Locations of three potentially observable irregular satellites on the Herschel/PACS 70$\mu$m Level-2.5 map of the two combined OBSIDs 1342237436/37.
\[fig:pacsimagewithmoons\]](pacs_image_with_moons.png){width="8.5cm"}

Caliban was identified as potentially present on some far-infrared maps taken with the PACS camera of the Herschel Space Observatory. Herschel/PACS observed the environment of Uranus at two epochs: on January 7, 2012 (OBSIDs: 1342236891/92, scan and cross-scan) and on January 13, 2012 (OBSIDs: 1342237436/37), under the proposal ID OT1\_ddan01\_1 in both cases. All four measurements used the 70/160$\mu$m filter combination. The data reduction pipeline we used is the same as the one used in the ‘TNOs are Cool!’ Herschel Open Time Key Programme @Muller2009, described in detail in @Kiss2014, and identical to the one we used to reduce the Herschel/PACS maps of Nereid [@Kiss2016]. As our aim was to obtain photometry of a point source, we used the [photProject()]{} task with high-pass filtering to create maps from the time domain detector data. The [photProject()]{} task performs a simple coaddition of the frames using the drizzle method [@Fruchter], and the high-pass filtering applies a sliding median filter on individual pixel timelines. More details on the procedure can be found in the PACS Data Reduction Guide[^5]. Maps were created from the detector scans in the co-moving frame of Uranus, which was practically identical to that of the satellites due to the small relative velocities of Uranus and the satellites ($<$0.5$''\,$h$^{-1}$). We identified a faint source at both epochs in the 70$\mu$m band at the expected location of Caliban, obtained from the NASA Horizons System considering the Herschel-centric observing geometry (see Fig. \[fig:calibanpacs\]), and derived a combined flux of F$_{70}$=1.4$\pm$0.8 mJy. No obvious source could be identified on the 160$\mu$m maps, and the general photometric accuracy obtained using the implanted source method [see @Kiss2014] defined a 1-$\sigma$ upper limit of F$_{160}$$<$3 mJy for the 160$\mu$m brightness of Caliban.

![image](caliban_pacs_70_all.png){width="\textwidth"}

While Trinculo and Ferdinand could potentially be present on the same set of Herschel/PACS images as Caliban, they were detected neither at 70$\mu$m nor at 160$\mu$m, therefore we consider that their flux densities are below 0.8 mJy and 3 mJy at 70 and 160$\mu$m, respectively.

Konkoly Observatory 1m-telescope observations of Sycorax
--------------------------------------------------------

![Sloan r’-band light curve of Sycorax observed with the 1m-RCC telescope of Konkoly Observatory, folded with the double period of P=6.9162 h, obtained from the combination of the K2 and 1m-RCC measurements. \[fig:sycoraxrcc\]](sycorax-dpeak-20151105-rcc.pdf){width="8.5cm"}

Sycorax was also observed on the night of 2015 November 05/06 with the 1-m Ritchey-Chrétien-Coudé (RCC) telescope of the Konkoly Observatory, located at Piszkéstető Mountain Station. In total, 70 frames were acquired with an exposure time of 300 seconds each, using an Andor iXon-888 electron-multiplying CCD (EMCCD) camera. Although it is not particularly relevant for such long exposures, we note here that the camera was operated in frame transfer readout mode in order to have an effectively zero dead time between subsequent frames. The frames were taken in the Sloan r’ filter, and we used the SDSS-III DR9 catalogue [@SDSSDR9] for the reference magnitudes of the comparison stars. Standard calibration procedures and aperture photometry were performed using the various tasks of the FITSH package [@Pal2012]. The resulting light curve is plotted in Fig. \[fig:sycoraxrcc\].
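
For completeness, the folding used for Fig. \[fig:sycoraxrcc\] (and for the K2 light curves below) is a trivial operation; a minimal sketch, with times and period in the same units:

```python
import numpy as np

def phase_fold(t, period, t0=None):
    """Return rotational phases in [0, 1) for a trial period."""
    if t0 is None:
        t0 = np.min(t)
    return ((t - t0) / period) % 1.0

# e.g. phases = phase_fold(jd_mid, 6.9162 / 24.0)   # double-peaked period in days
```
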
Thermal emission models\[sect:thermal\]
=======================================

To model the thermal emission of some of our targets, infrared monochromatic flux densities were derived from the in-band flux densities by applying the appropriate color corrections, based on the surface temperatures of the targets [see @Muller2011; @mipshandbook for the Herschel/PACS and Spitzer/MIPS color corrections, respectively], see also Table \[table:thermal2\]. We used these monochromatic flux densities and the observing geometry parameters listed in Table \[table:thermal\] to constrain the thermal emission models of our targets by calculating the $\chi^2$ values of the modeled and observed monochromatic flux densities [see e.g. @Vilenius2014 for details]. We applied either the Near-Earth Asteroid Thermal Model [NEATM, @Harris98] or a thermophysical model [TPM, @Lagerros1; @Lagerros2; @Lagerros3; @ML2002] to obtain the model flux densities. A TPM was only used for Sycorax, as for the other satellites the limited amount of thermal data (low number of degrees of freedom) makes it meaningless to run a complex TPM instead of the simpler NEATM. We used our own NEATM code written in IDL[^6], and a TPM code developed by J.S.V. Lagerros and T.G. Müller (see the references above).

Sycorax {#sycoraxthermal}
-------

![Best-fit NEATM results of Sycorax. Flux densities of the Spitzer/MIPS (24 and 71$\mu$m) and Herschel/PACS (70, 100 and 160$\mu$m) measurements are marked by diamonds and filled circles. The dotted and dashed curves correspond to the same set of best-fit NEATM parameters seen at the epochs of the Spitzer/MIPS and Herschel/PACS measurements. \[fig:sycoraxneatm\]](Sycorax_neatm00.pdf){width="8.5cm"}

The thermal emission of Sycorax was modeled using the flux densities listed in Table \[table:thermal2\], applying a NEATM model. Due to the notable difference in observing geometry at the PACS and MIPS epochs, for a specific model (given size and beaming parameter) the corresponding observation geometries were considered at the PACS and MIPS epochs separately, and the $\chi^2$ values were derived accordingly. The best-fit model is presented in Fig. \[fig:sycoraxneatm\], where the flux densities of the MIPS and PACS epochs are shown individually. The NEATM model provided a best-fit effective diameter and albedo estimate of D=165$\pm$13 km and $p_V$=0.065$^{+0.015}_{-0.011}$, with a beaming parameter of $\eta$=1.20$^{+0.25}_{-0.20}$. In addition to the NEATM model we also applied thermophysical modeling [TPM, see @ML98; @ML2002 and references therein], considering an absolute magnitude of H$_\mathrm{V}$=7.50$\pm$0.04 [@Grav04], a default slope parameter of G=0.15 and a wavelength-dependent emissivity [@ML2002]. We used P=6.9162 h for the rotation period, as determined from the combination of the K2 and Konkoly 1m-RCC measurements (see Sect. \[sect:sycoraxrot\]); possible thermal inertia values were considered in the $\Gamma$=0.1–50 [$\mathrm{J\,m^{-2}\,s^{-1/2}\,K^{-1}}$]{} range and surface roughness values were allowed between $\rho$=0.1 and 0.9. As the spin-axis orientation of Sycorax is not known, we considered three possible scenarios: a pole-on one (spin-axis orientation of $\lambda$=356°, $\beta$$\approx$0° in ecliptic coordinates), an equator-on one ($\lambda$=0°, $\beta$=90°), and an intermediate one ($\lambda$=356°, $\beta$=45°). The equator-on and intermediate solutions provide a low best-fit reduced $\chi^2$ of [$\chi^2_{\rm r}$]{}$<$1, and give very similar best-fit values for the size.
However, no acceptable solution with sufficiently low [$\chi^2_{\rm r}$]{} could be obtained for the pole-on case ([$\chi^2_{\rm r}$]{}$\gg$1 in all cases), and a non-pole-on configuration is also supported by the presence of a definite visible-range light curve. Our best estimates for the size and albedo are D=157$_{-15}^{+23}$ km and $p_V$=0.07$_{-0.01}^{+0.02}$, associated with a likely intermediate surface roughness ($\rho$$\approx$0.5) and a close-to-equator-on spin axis configuration. Based on this analysis, very low levels of thermal inertia ($\Gamma$$<$1 [$\mathrm{J\,m^{-2}\,s^{-1/2}\,K^{-1}}$]{}) can be excluded for Sycorax.

![Ratio of the observed thermal infrared fluxes of Sycorax to those obtained in the best-fit thermophysical model, as detailed in the text. \[fig:sycoraxtpm\]](obs2tpm_sycorax.png){width="8cm"}

The diameter and geometric albedo obtained by our analysis are close to the best-fit values obtained by @Lellouch13 using Herschel/PACS data alone (D=165$_{-42}^{+36}$ km, $p_V$=0.049$_{-0.017}^{+0.038}$), but in our case with smaller error bars. However, the beaming parameter is much better constrained with the consideration of the Spitzer/MIPS fluxes, as the Herschel/PACS data alone could not restrict the models further due to the lack of short wavelength ($\la$40$\mu$m) data [$\eta$=1.26$_{-0.78}^{+0.92}$ in @Lellouch13]. Our best-fit $\eta$=1.20 is very close to the median values obtained for Centaurs and trans-Neptunian objects based on a large sample of Spitzer/MIPS and Herschel/PACS measurements [$\eta$$\approx$1.2, @stansberry2008; @Lellouch13]. The thermal inertia value of $\Gamma$=3–4 [$\mathrm{J\,m^{-2}\,s^{-1/2}\,K^{-1}}$]{} we obtained for Sycorax is also in agreement with the $\Gamma$=5$\pm$1 [$\mathrm{J\,m^{-2}\,s^{-1/2}\,K^{-1}}$]{} value found by @Lellouch13 for Centaurs at heliocentric distances $<$25 AU.

Small irregular satellites on Herschel/PACS images
--------------------------------------------------

For all potentially detectable satellites we used a NEATM model with a constant emissivity of $\epsilon$=0.9 to estimate the expected flux densities in the 70$\mu$m Herschel/PACS band for a range of diameters/geometric albedos and beaming parameters. The beaming parameter $\eta$ was allowed to vary between 0.6 and 1.6, while the diameters were chosen to match a V-band geometric albedo range of 0.01$\leq$$p_V$$\leq$0.3. Phase integrals were calculated applying both the geometric albedo-dependent phase integral developed for the outer Solar system by @Brucker2009 and the ’standard’ value using the canonical slope parameter of G=0.15 [see e.g. @Muinonen2010]. The difference between the 70$\mu$m thermal emission flux densities obtained with the two methods was $\leq$4$\mu$Jy in all cases, which is negligible for our purposes. As described above, Caliban was tentatively detected on the Herschel/PACS 70$\mu$m maps with a combined monochromatic flux density of F$_{70}$=1.4$\pm$0.8 mJy. A generally assumed dark surface of $p_\mathrm{V}$$\approx$0.04 and the corresponding diameter of $\sim$70 km would produce a 70$\mu$m flux density of F$_{70}$$>$2.4 mJy, easily detectable on our Herschel/PACS maps at more than 3$\sigma$, given the 0.8 mJy 70$\mu$m flux uncertainty of the maps. The fact that the detected flux density of Caliban is *smaller* than that already indicates a brighter surface.
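
The link between absolute magnitude, geometric albedo and effective diameter used throughout this section is the standard one; a minimal sketch:

```python
import numpy as np

def diameter_km(H, p_V):
    """Effective diameter from absolute magnitude and geometric albedo,
    D = 1329 km * 10**(-H/5) / sqrt(p_V)."""
    return 1329.0 * 10.0 ** (-H / 5.0) / np.sqrt(p_V)

# With H_V = 9.16 for Caliban: diameter_km(9.16, 0.22) ~ 42 km and
# diameter_km(9.16, 0.15) ~ 50 km, matching the NEATM solutions quoted below.
```
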
We obtained D=42$_{-12}^{+20}$ km and $p_V$=0.22$_{-0.12}^{+0.20}$ as the best-fit NEATM solution using the single available 70$\mu$m flux density, with no real constraint on the beaming parameter in our originally chosen 0.6$\le$$\eta$$\le$1.6 range; the best-fit model has a corresponding $\eta$ of 0.8. The 70$\mu$m flux of Caliban can, however, be equally well fitted with D$\approx$50 km and $p_V$$\approx$0.15, using a higher $\eta$ value of $\sim$1.4. The uncertainties in the size and albedo due to the unconstrained beaming parameter are reflected in the errors of the best-fit values quoted above. The geometric albedo we obtained for Caliban may seem surprisingly high in the Uranian irregular satellite system, taking into account the low albedo of Sycorax. On the other hand, relatively bright surfaces exist among other irregular satellites, e.g. Nereid has a surface with $p_V$$>$20% [@Kiss2016], but it is notably larger than Caliban, $\sim$350 km in effective diameter.

![NEATM modeling results for Caliban. The black filled circle with error bar shows the only available measurement at 70$\mu$m (Herschel/PACS). The red curve represents the best-fit result (D=42 km, $p_V$=0.22 and $\eta$=0.8), while the gray area contains the model curves that are compatible with the 70$\mu$m flux (see the text for details). \[fig:calibanneatm\]](Caliban_neatm0.pdf){width="8.5cm"}

| Satellite | P (h) | $\Delta$m (mag) | Ref. | f$_0$ (cycle day$^{-1}$) | P (h) | $\Delta$m (mag) | comm. | P$_{s}$ (h) |
|---|---|---|---|---|---|---|---|---|
| Sycorax | 4.12$\pm$0.04 | 0.032$\pm$0.008$^s$ | M01 | 6.9374$\pm$0.0083 | 6.9190$\pm$0.0082 | 0.121$\pm$0.020 | K2,d | 67.3423 |
| | 3.60$\pm$0.02 | 0.067$\pm$0.004$^s$ | M07 | 6.9402$\pm$0.0013 | 6.9162$\pm$0.0013 | 0.120$\pm$0.019 | K2+1m,d | |
| Caliban | 2.66$\pm$0.04 | 0.13$\pm$0.01$^s$ | M01 | 4.8249$\pm$0.0092 | 9.948$\pm$0.019 | 0.16$\pm$0.03 | K2,d | 7.0026 |
| Prospero | 4.55$\pm$0.04 | 0.22$\pm$0.03$^s$ | M07 | 3.359$\pm$0.044 | 7.145$\pm$0.092 | 0.41$\pm$0.07 | K2,s | 16.5871 |
| Setebos | 4.38$\pm$0.05 | 0.189$\pm$0.03$^s$ | M07 | 5.640$\pm$0.022 | 4.255$\pm$0.017 | 0.27$\pm$0.06 | K2,s | 13.1705 |
| Ferdinand | | | | 2.027$\pm$0.039 | 11.84$\pm$0.22 | 0.54$\pm$0.09 | K2,s | 82.715 |

: Light curve properties of the Uranian irregular satellites: previously published periods and amplitudes (P, $\Delta$m; M01=@Maris01, M07=@Maris07; $^s$ marks sinusoidal amplitudes), and the frequencies (f$_0$), periods, and peak-to-peak amplitudes derived in this work (comm.: data used and whether the solution is double- or single-peaked, d/s), together with the corresponding stroboscopic periods P$_s$.[]{data-label="table:periods"}

Trinculo and Ferdinand were not detected on the Herschel/PACS images. However, considering the 0.8 mJy 70$\mu$m flux uncertainties as an upper limit for both targets, we can put some constraints on their geometric albedos and diameters. We note that for Ferdinand we calculated the R-band absolute magnitude using data in the Minor Planet Circular MPEC-2003-S105, assuming an R-band specific linear phase correction of $\beta_{\mathrm R}$=0.119 mag deg$^{-1}$ [@Belskaya2008], and obtained H$_\mathrm{R}$=12.06$\pm$0.15 mag. For Trinculo we used the value provided by @Grav04 (see also Table \[table:thermal2\]). The 0.8 mJy 1$\sigma$ upper limit indicates a geometric albedo of $>$0.03 for both satellites, and correspondingly their diameters are D$<$50 km.

Rotational characteristics from K2 measurements \[sect:rotation\]
==================================================================

We searched for significant periodicities using the Fourier method as implemented in the *Period04* program package [@Lenz] and also the Lomb-Scargle periodogram in the *gatspy* Python package[^7]. We got very similar results in several test cases, therefore we decided to stick to the Lomb-Scargle periods.
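
For illustration, an equivalent search can be sketched with the astropy implementation of the Lomb-Scargle periodogram (our analysis used *Period04* and *gatspy*; the frequency limits below are arbitrary example values, not the ones adopted in this work):

```python
import numpy as np
from astropy.timeseries import LombScargle

# t [days], flux, flux_err: per-cadence photometry of one satellite
ls = LombScargle(t, flux, dy=flux_err)
freq, power = ls.autopower(minimum_frequency=0.2,
                           maximum_frequency=20.0)   # cycles per day
f_best = freq[np.argmax(power)]
P_single_hours = 24.0 / f_best                       # candidate single-peaked period
```
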
We note that the errors of the individual photometric points are taken into account in the period search. Only those signals were considered that were significant at the 3$\sigma$ level compared to the local background noise of the periodogram. We phase-folded the light curves with the best period and its double value, then decided which gave a better fit based on a visual inspection. As we have shown previously [@Szabo2016], the period determination of Solar system objects with K2 long cadence measurements is solid if the coverage exceeds five days and the duty cycle is above 60%. These conditions are fulfilled for three of our targets, as shown in Table \[tab:keplerlog\]. The other two, Setebos and Ferdinand, have lower duty cycles, but the long baseline of the observations compensates for this. Overall, we were able to derive reliable solutions for most Uranian irregular satellites in our sample. The results are presented in Figs. \[fig:rot1\], \[fig:rot2\] and in Table \[table:periods\] below. Table \[table:periods\] also lists the rotation periods obtained from previous investigations. We emphasize that we do not accept all formally significant periods, but simply choose the one with the highest amplitude.

Sycorax \[sect:sycoraxrot\]
---------------------------

@Maris01 performed the first detailed study of the light curve in the $R$ band using the 3.6m ESO NTT telescope at La Silla, on 1999 October 8 and 9. The amplitude of the light variation they found was $A$=0.032$\pm$0.008 mag with a P=4.12$\pm$0.04 h period. Measurements taken with the VLT in 2005 [@Maris07] provided the most likely light curve period and amplitude of $P$=3.6$\pm$0.02 h and $A$=0.067$\pm$0.004 mag. Our K2 measurements revealed a well-defined rotational period with a frequency of f=6.9374$\pm$0.0083 cycle day$^{-1}$ (P=3.458$\pm$0.001 h, see Fig. \[fig:rot1\]). We assume that the light curve of Sycorax is double-peaked, which is supported by the slight asymmetry of the light curve when it is folded with the half frequency / double rotation period (Fig. \[fig:rot1\]). This gives a double-peaked rotation period of P=6.9190$\pm$0.0082 h. The single-peaked period is very close to that obtained by @Maris07 using VLT measurements. The peak-to-peak light curve amplitude obtained from the K2 data is A=0.12$\pm$0.02 mag, consistent with the sinusoidal amplitude obtained by @Maris07. There is a second significant peak in the frequency diagram at 0.35 cycle day$^{-1}$ (Fig. 9, uppermost panel). Such secondary periods are often explained by tumbling rotation or a companion. Although a companion around Sycorax (a moon of a moon) would be an intriguing possibility, we suggest that this second peak is a sampling artifact. The interaction of a strictly periodic sampling (such as that of *Kepler*) and a strictly periodic process (such as the rotation of the target) results in periodic phase shifts in the sampling, and hence in the emergence of a stroboscopic period that can modulate the timing, brightness, shape, etc. In @Szabo2013 we calculated the stroboscopic period as $P_s /P = 1/ {\rm min}( \, ] P/C[ , \, ] 1-P/C [ \, )$, where $P_s$ is the stroboscopic period, $P$ is the observed actual period, $C$ is the cadence, and $] \, [$ denotes the fractional part. Substituting the rotation period (6.9162$\pm$0.0013 h) and the cadence (1765.5 s) into the formula, the stroboscopic period is calculated to be 9.5–10.0 times the rotation period.
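
This estimate is easy to reproduce; a minimal sketch of the relation above, with the cadence value quoted in the text:

```python
def strobe_ratio(P_hours, cadence_s=1765.5):
    """P_s / P = 1 / min(]P/C[, 1 - ]P/C[), with ]x[ the fractional part."""
    x    = P_hours * 3600.0 / cadence_s
    frac = x - int(x)
    return 1.0 / min(frac, 1.0 - frac)

# strobe_ratio(6.9162) ~ 9.7, i.e. a stroboscopic period of ~67 h for Sycorax.
```
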
Since the ratio of the two detected periods is 9.97, this is perfectly compatible with the stroboscopic origin of the long-period brightness variation. Stroboscopic periods were calculated from the single-peak periods of the other targets, too (see below), as listed in Table \[table:periods\]. We note that a stroboscopic period is not accepted as a possible rotation period simply on an amplitude criterion, but is rejected due to its stroboscopic nature. Since the series of ground-based observations with the 1m telescope of Konkoly Observatory was obtained roughly 3 months before the K2 measurements and the precision of the K2 period is high, one can extrapolate the rotation cycle counts in an unambiguous manner over such a relatively short interval. This allows us to combine the two series of measurements (K2 and 1m RCC) to get an accurate rotational frequency, for which we obtained n=3.47012$\pm$0.00067 cycle day$^{-1}$, i.e. P=6.9162$\pm$0.0013 h.

![image](rot_sycorax_3.png){width="\textwidth"} ![image](rot_caliban_3.png){width="\textwidth"} ![image](rot_setebos_3.png){width="\textwidth"} ![image](rot_ferdinand_3.png){width="\textwidth"} ![image](rot_prospero_3.png){width="\textwidth"}

Smaller satellites
------------------

#### Caliban:

@Maris01 determined a light curve period of 2.66 h that is not confirmed by our data (see Fig. \[fig:rot1\]). Instead, we obtained a most likely frequency of f=4.8249$\pm$0.0092 cycle day$^{-1}$, and the asymmetry of the folded light curve indicates that the real rotation period corresponds to the half frequency, i.e. P=9.948$\pm$0.019 h. The single-peak period of Caliban is the only one apart from that of Sycorax for which the corresponding stroboscopic frequency (P$_s$=7.0026 h or f$_s$=3.4256 cycle day$^{-1}$) is close to a significant peak in Fig. \[fig:rot1\].

#### Prospero:

In our analysis the least unambiguous light curve period was obtained for Prospero. We identified a most likely frequency of 3.359$\pm$0.044 cycle day$^{-1}$ (P=7.145$\pm$0.092 h), but a strong secondary peak is also visible at f=4.415$\pm$0.045 cycle day$^{-1}$, which corresponds to a single-peak rotation period of P=5.346$\pm$0.055 h, very close to the light curve period obtained by @Maris07.

#### Setebos:

For this satellite we confirm the light curve period of 4.38$\pm$0.05 h obtained by @Maris07, as we derived a most likely rotation period of P=4.255$\pm$0.017 h, very close to the previously mentioned value, without indication of a double-peaked light curve.

#### Ferdinand:

The light curve period of Ferdinand was not determined earlier. The most likely frequency of 2.027$\pm$0.039 cycle day$^{-1}$ corresponds to a rather long rotation period of 11.84$\pm$0.22 h, the longest one in our sample (assuming a single-peaked light curve). Such long rotation periods are, however, not rare: e.g. in the Saturnian system, 10 out of 16 irregular satellites in a recently studied sample have rotation periods longer than that of Ferdinand [@Denk2013].

Comparison with the rotational characteristics of other irregular satellites and asteroids\[sect:rotcomp\]
===========================================================================================================

The rotation of small body populations in the Solar system is often characterized by the so-called spin barrier, a critical rotation period at which a rubble-pile asteroid would fly apart due to its centripetal acceleration.
This spin barrier is well established for Main Belt asteroids; the critical rotation period is $\sim$2.2 h (dashed horizontal line in Fig. \[fig:spin\]), resulting in a critical density estimate of $\rho_{crit}$$\sim$2.0 g cm$^{-3}$, using the formula by @Pravec+Harris. In Fig. \[fig:spin\] we plot the rotation period versus size for various small body populations as well as for the irregular satellites of the giant planet systems.

![image](uirr_spin_barrier_diam.jpg){width="\textwidth"}

Considering our sample, the *median* rotational frequencies are notably higher in the Uranian system ($\approx$3.4 cycle day$^{-1}$) than in the other giant planet systems ($\approx$2 cycle day$^{-1}$ for Jupiter, Saturn and Neptune). Also, assuming the double-peak rotation periods for Sycorax and Caliban provides us with critical densities of $\rho_\mathrm{crit}$$\le$0.76 g cm$^{-3}$. When the single-peak periods are considered for all Uranian irregular satellites, $\rho_\mathrm{crit}$ still remains below 1 g cm$^{-3}$. These values are below the upper limit of $\sim$2 g cm$^{-3}$ of main belt asteroids, but higher than the $\sim$0.5 g cm$^{-3}$ obtained for e.g. Jovian Trojans [@Szabo2017], the typical densities of cometary nuclei and trans-Neptunian objects [@AHearn2011; @Brown2013; @Vilenius2014], and also the critical densities that can be estimated for the irregular satellites of the other giant planet systems, based on rotational light curves alone. However, e.g., the mass of the largest Jovian irregular, Himalia, has been estimated from its perturbations on other satellites [@Emelyanov], and it gives an independent estimate of the density, $\rho$$>$2.6 g cm$^{-3}$, using the size obtained during the Cassini flyby [effective radius of $\sim$67 km, @Porco03]. This is significantly larger than the critical density of $\sim$0.2 g cm$^{-3}$ that can be obtained from the rotation period of 7.78h and the light curve amplitude of 0.20$\pm$0.01 mag [@Pilcher]. The Uranian irregular satellites in our sample are in the size range where the rotational frequency distribution of main belt asteroids starts to follow a Maxwellian [D$\ga$40 km, @Pravec2002]. While our sample is small and certainly not unbiased, the large median rotational frequency of the Uranian irregular satellites (3.4 cycle day$^{-1}$) may indicate that this irregular satellite system had a collisional evolution different from those around Jupiter and Saturn, with the Uranian irregulars having suffered a higher number of and/or more energetic collisions. The median rotation period of 7.1h in the Uranian system is close to that obtained for Centaurs [7.35h, @Duffard2009] and somewhat smaller than that of trans-Neptunian objects [8.6h, @Thirouin2014]; however, these populations certainly went through a different collisional evolution than the Uranian irregular satellite system.

The albedo-colour diversity of irregular satellites \[sect:albedocolor\]
========================================================================

![Albedo and colour characteristics of irregular satellites of the giant planet systems. Purple, red, green and blue correspond to the satellites of Jupiter, Saturn, Uranus and Neptune, respectively. Error bars mark the uncertainties in geometric albedo and color. Pale dots in the background correspond to the data of outer Solar system objects (Centaurs and trans-Neptunian objects), taken from @L14. Europa, Io, Ganymede, Callisto, Nereid and Triton are marked with their initials.
References: [@Buratti1983], [@Cruikshank1982], [@Denk2013], @Grav03 [@Grav04; @Grav2015], [@Hick2004], [@Karkoschka2001], [@Kiss2016], [@Lellouch13], [@Millis1975], [@Morrison2009], [@Rettig2001], [@Showalter2006], [@Simonelli1984], [@Smith1989]. \[fig:pvcolour\]](pv_slope00.pdf){width="8.5cm"}

In Fig. \[fig:pvcolour\] we plot the colors [represented by spectral slopes, @LJ] versus the geometric albedos of those irregular satellites for which this information is available. The Jovian irregular satellites are typically found in the same albedo-color region as the Centaurs/trans-Neptunian objects with dark-neutral surfaces (pale blue dots in Fig. \[fig:pvcolour\]). This is also the characteristic region for cometary nuclei and Jovian Trojan asteroids [see e.g. @L14]. Only two Jovian irregular satellites show red surfaces, both extremely red and dark, already outside the bright-red group of outer Solar system objects (pale red dots in Fig. \[fig:pvcolour\]). Saturnian irregulars (red symbols) clearly belong to the dark-neutral group. Here we included Hyperion, too (the highest-albedo Saturnian point in Fig. \[fig:pvcolour\]); although strictly speaking it is not an irregular satellite, it shows characteristics different from the typical regular Saturnian satellites (elongated shape, highly cratered surface, likely porous interior). The Neptunian irregular Nereid is also likely a dark-neutral object. Triton, however, is clearly distinct from all other irregulars and, as expected from its large size, resembles the group of large dwarf planets more than any other irregular satellite – in its case internal processes (cryovolcanism) may have significantly altered the original surface. The surface of Triton, as well as those of the large regular satellites, is more similar to the largest dwarf planets (green symbols in Fig. \[fig:pvcolour\]) and the members of the Haumea collisional family (yellow symbols). The two irregular Uranian satellites for which albedo and color data are available, Sycorax and Caliban, both seem to fall into the bright-red group, along with some regular Uranian satellites (Puck, Miranda, Ariel, Umbriel, Titania, Oberon). Currently no irregular satellite in the other giant planet systems can be assigned to this albedo-color group. Although our sample is limited, the location of the irregular satellites on the albedo-color diagram may indicate that the surfaces of satellites in the Uranian system resemble those of the bright-red trans-Neptunian objects. Irregular satellites in the Jovian and Saturnian systems, and also Nereid, are generally darker and more neutral in color. If the surfaces of the Uranian irregular satellites and those in the other giant planet systems are intrinsically different, and not a consequence of a different evolution of the surfaces, this may be a further indication of a compositional discontinuity in the young Solar system. This discontinuity should have existed close to the heliocentric distance of Uranus, caused by the same processes that induced the bimodality among Centaurs and trans-Neptunian objects [@L14].

The research leading to these results has received funding from the European Union's Horizon 2020 Research and Innovation Programme, under Grant Agreement No. 687378; from the K-115709, PD-116175, and GINOP-2.3.2-15-2016-00003 grants of the National Research, Development and Innovation Office (NKFIH, Hungary); and from the LP2012-31 grant of the Hungarian Academy of Sciences. L. M.
was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. Funding for the *Kepler* and K2 missions are provided by the NASA Science Mission Directorate. The data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. This work is based in part on archival data obtained with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. The authors thank the hospitality the Veszprém Regional Centre of the Hungarian Academy of Sciences (MTA VEAB), where part of this project was carried out. We also thank our referee for the helpful comments and suggestions. A’Hearn, M. F. 2011, ARA&A, 49, 281 Ahn, C.P., Alexandroff, R., Allende, P.C., et al., 2012, ApJS, 203, 21 Bauer, J.M., Buratti, B.J., Simonelli, D.P., Owen, W.M., 2004, ApJL, 610, L57 Belskaya, I.N., Levasseur-Regourd, A.-C., Shkuratov, Y.G., Muinonen, K., 2008, in The Solar System Beyond Neptune, Surface Properties of Kuiper Belt Objects and Centaurs from Photometry and Polarimetry, p.115 (Tucson, AZ: Univ. Arizona Press) Brown, M. E., 2013, ApJ, 778, L34 Bonniel Buratti & Joseph Veverka, 1983, Icarus, 55, 93 Brucker, M., Grundy, W.M., Stansberry, J.A., et al., 2009, Icarus, 201, 284 Colbert, J., and the MIPS Instrument and MIPS Instrument Support Teams, MIPS Instrument Handbook, 2011, Version 3.0 () Cruikshank, D.P. & Brown, R.H., 1982, Icarus, 50, 82 Denk, T. & Mottola, S., 2013, American Astronomical Society, DPS meeting \#45, id.406.08 Denk, T. & Mottola, S., 2013, American Astronomical Society, DPS meeting \#46, id.304.09 Denk, T. & Mottola, S., 2013, American Astronomical Society, DPS meeting \#47, id.412.02 Duffard, R., Ortiz, J.-L., Thirouin, A., Santos-Sanz, P., Morales, N., 2009, A&A, 505, 1283 Emelyanov, N.V., Archinal, B. A., a’Hearn, M. F., et al., 2005, A&A, 438, L33 Engelbracht, C.W., Blaylock, M., Su, K.Y.L., et al., 2007, PASP, 119, 994 Fruchter, A.S., Hook, R.N., 2002, PASP, 114, 144 Gladman, B. J., Nicholson, P. D., Burns, J. A., et al., 1998, Nature, 392, 897 Gladman, B., Kavelaars, JJ, Holman, M., et al., 2000, Icarus, 147, 320 Gladman, B., Kavelaars, J. J., Holman, M., Nicholson, P. D., et al., 2001, Nature, 412, 163 Gomes-Júnior, A. R., Assafin, M., Vieira-Martins, R., et al., 2015, A&A 580, A76 Gordon, K.D., Rieke, G.H., Engelbracht, C.W., et al., 2005, PASP, 117, 503 Gordon, K.D., Engelbracht, C.W., Fadda, D., et al., 2007, PASP, 119, 1019 Grav, T.,Holman, M. J., Gladman, B. J., Aksnes, K., 2003, Icarus, 166, 33 Grav, T., Holman, M. J., Fraser, W. C., 2004, ApJ, 613, L77 Grav, T., Bauer, J. M., Mainzer, A. K., et al., 2015, ApJ, 809, 3G Harris, A. W., 1998, Icarus, 131, 291 Heppenheimer, T. A., & Porco, C. 1977, Icarus, 30, 385 Hicks, M.D., Buratti, B.J., 2004, Icarus, 171, 210 Holman, M., Gladman, B., Kavelaars, JJ, et al., 2000, DPS 32.4201 Holman, M., Kavelaars, J., Grav, T., et al., 2003, IAU Circular 8047 Howell, S.B., Sobeck, C., Haas, M., et al., 2014, PASP, 126, 398 Karkoschka, E., 2001, Icarus, 151, 51K Kavelaars, J. J., Holman, M. 
J., Grav, T., et al., 2004, Icarus, 169, 474 Kiss, Cs., Müller, T.G., Vilenius, E., et al., 2014, Experimental Astronomy, 37, 161 Kiss, Cs., Pál, A., Farkas-Takács, A., et al., 2016, MNRAS, 457, 2908 Lacerda, P., Fornasier, S., Lellouch, E., et al., 2014, ApJL, 793, L2 Lagerros, J. S. V., 1996, A&A, 310, 1011 Lagerros, J. S. V., 1997, A&A, 325, 1226 Lagerros, J. S. V., 1998, A&A, 332, 1123 Lenz, P., & Breger, M. 2005, Commun. Asteroseismol., 146, 53 Lellouch, E., Santos-Sanz, P., Lacerda, P., et al., 2013, A&A, 557, A60 Luu, J., 1991, AJ, 102, 1213 Luu, J. X., Jewitt, D. C., 1990, AJ, 99, 1985 Maris, M., Carraro, G., Cremonese, G., Fulle, M., 2001, AJ, 121, 2800 Maris, M., Carraro, G., Parisi, M. G., 2007, A&A, 472, 311 Millis, R.L.& Thompson, D.T., 1975, Icarus, 26, 408 Molnár, L., et al., 2017, ApJS, submitted, arXiv:1706.06056 Monet, D.G., Levine, S.E., Canzian, B., et al., 2003, AJ, 125, 984 Morrison, S.J., Thomas, P.C., Tiscareno, M.S., Burns, J.A., Veverka, J., 2009, Icarus, 204, 262 Müller, T.G., Lagerros, J.S.V., 1998, A&A, 338, 340 Muinonen, K., Belskaya, I.N., Cellino, A., et al., 2010, Icarus, 209, 542 Müller, Th.G., Lellouch, E. Böhnhardt, H, et al., Earth, Moon, and Planets, 105, 209-219 Mueller, M., Stansberry, J., Mommert, M. & Grundy, W., 2012, “TNO Diameters And Albedos: The Final MIPS Dataset”, AAS DPS meeting, \#44, \#310.13 Müller, T. G. & Lagerros, J. S. V., 2002, A&A, 381, 324 Müller, T., Okumura, K., Klaas, U., 2011, “PACS Photometer Passbands and Colour Correction Factors for Various Source SEDs”, PICC-ME-TN-038 (Herschel/PACS calibration report) Nicholson, P.D., Cuk, M., Sheppard, S.S., et al., 2008, Irregular Satellites of the Giant Planets, in: The Solar System Beyond Neptune, p.411 (Tuscon, AZ: Univ. Arizona Press) Pál, A., 2012, MNRAS, 421, 1825 Pál, A., Szabó, R., Szabó, Gy. M., 2015, ApJ, 804, L45 Pál, A., Kiss, Cs., Thomas M.G.et al., 2016, AJ, 151, 117 Parisi, M. G., Carraro, G., Maris, M., Brunini, A., 2008, A&A 482, 657 Pilcher, F., Mottola, S., Denk, T., 2012, Icarus, 219, 741 Poglitsch A. et al., 2010, A&A, 518, L2 Pollack, J. B., Burns, J. A., Tauber, M. E., 1979, Icarus, 37, 587 Porco, C. C., West, R. A., McEwen, A., et al., 2003, Science, 299, 1541 Porco, C. C., et al., 2005, Science, 307, 1237 Pravec, P., Harris, A. W., Michalowski, T., 2002, Asteroid Rotations, in: Asteroids III, Univ. of Arizona Press. Pravec, P., & Harris, A. W., 2000, Icarus, 148, 12 Rettig, T. W., Walsh K. & Consolmagno, G., 2001, Icarus, 154, 313 Rieke, G. H., Young, E. T., Engelbracht, C. W., et al., 2004, ApJS, 154, 25 Romon, J., de Bergh, C., Barucci, M. A. et al., 2001, A&A, 376, 310 Schaefer, B. E., Tourtellotte, S. W., Rabinowitz, D. L., Schaefer, M. W., 2008, Icarus, 196, 225 Sheppard, S. S., Jewitt, D., 2003, Science, 423, 261 Sheppard, S. S., Gladman, B., Marsden, B. G., 2003, IAU Circular 8116 Sheppard, S. S., Jewitt, D., Kleyna, J., 2005, AJ, 129, 518 Sheppard, S. S., Jewitt, D., Kleyna, J., 2006, AJ, 132, 171 Showalter M. R., Lissauer, J. J., 2006, Science, 311, 973 Simonelli D. P. & Veverka J., 1984, Icarus, 59, 406 Smith, B. A., Soderblom, L. A., Banfield, D., et al., 1989, Science, 246, 1422 Stansberry, J.A., Gordon, K.D., Bhattacharya, B., et al., 2007, PASP, 119, 1038 Stansberry, J., Grundy, W.M., Brown, M.E., et al., 2008, Physical Properties of Kuiper Belt and Centaur Objects: Constraints from the Spitzer Space Telescope, in: The Solar System Beyond Neptune, p.161 (Tuscon, AZ: Univ. 
Arizona Press) Stansberry, J.A., Grundy, W.M., Mueller, M., et al., 2012, Icarus, 219, 676 Szabó, R., Szabó, Gy.M., Dálya, G., et al., 2013, A&A, 553, A17 Szabó, R., Pál, A., Sárneczky, K., et al., 2016, A&A, 596, A40 Szabó, M. Gy., Pál., A., Kiss, Cs., et al., 2017, A&A, 599, 44 Thirouin, A., Noll, K.S., Ortiz, J.-L., Morales, N., 2014, A&A, 569, A3 Thomas P., Veverka J., Helfenstein P., 1991, J. Geophys. Res. Suppl., 96, 19253 Vilenius, E., Kiss, Cs., Müller, Th. G., et al. 2014, A&A, 564, A35 [^1]: Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. [^2]: https://keplerscience.arc.nasa.gov/k2-approved-programs.html [^3]: http://fitsh.net [^4]: https://archive.stsci.edu/kepler/manuals/archive\_manual.pdf [^5]: see http://herschel.esac.esa.int/hcss-doc-14.0/ for version 14 documentation [^6]: Interactive Data Language, Harris Geospatial Solutions [^7]: https://github.com/astroML/gatspy/
--- abstract: 'For iridates with large, spatially extended $5d$ orbitals, it may be anticipated that distant-neighbor interactions play a crucial role in their ground state properties. From this perspective, we investigate the magnetic structure of Sr$_{2}$IrO$_{4}$ by including interactions beyond first and second neighbors, via supercell modeling. Using first-principles scalar relativistic methods, it is found that the minimum in total energy among various magnetic structures corresponds to a $\uparrow$$\uparrow$$\downarrow$$\downarrow$ type antiferromagnetic ordering of the Ir ions, for which the magnitude of the electronic gap, that of the Ir local moments, and the two-peaked structure in the optical conductivity spectra of Sr$_{2}$IrO$_{4}$ are found to be in good agreement with the experiments. The results provide unequivocal evidence that the electronic gap in Sr$_{2}$IrO$_{4}$ originates from an unconventional antiferromagnetic ordering of the Ir ions, thereby classifying the system as a Slater magnet rather than a spin-orbit coupling driven $J_{eff}$ $=$ $\frac{1}{2}$ Mott insulator.' author: - 'Vijeta Singh$^{1,2}$ and J. J. Pulikkotil$^{1,2}$' title: 'Evidence of Slater-type mechanism as origin of insulating state in Sr$_{2}$IrO$_{4}$' ---

Sr$_{2}$IrO$_{4}$ is an insulator at all temperatures [@Kim2; @Kim1; @Jin; @Rao; @Fisher; @Cao; @Castaneda; @Kini; @Klein] and undergoes an antiferromagnetic transition below $240$ K [@Cao; @Castaneda; @Kini; @Klein; @Chikara; @Crawford; @Cava; @Shimura]. Assuming that the strength of spin-orbit coupling (SOC) is comparable with that of the crystal field interaction, the Coulomb correlation and Hund’s coupling, a new quantum paradigm has been proposed [@Kim2]. In this model, the crystal-field-split Ir $t_{2g}$ states are further split by SOC into a four-fold degenerate $J_{eff}$ $=$ $\frac{3}{2}$ quartet and a two-fold $J_{eff}$ $=$ $\frac{1}{2}$ doublet. With Ir in its $+4$ formal valence state, the low energy $J_{eff}$ $=$ $\frac{3}{2}$ states are fully filled with two electrons each, leaving the $J_{eff}$ $=$ $\frac{1}{2}$ doublet singly occupied. Furthermore, since the bandwidth of the $J_{eff}$$=$$\frac{1}{2}$ doublet is rather narrow, the Coulomb correlation splits it into an upper and a lower Hubbard band, thereby rendering the ground state of the system insulating [@Kim2; @Kim3]. The model successfully accounts for both electron localization and the insulating state on an equal footing and derives consistent support from resistivity measurements, photo-emission spectroscopy, optical conductivity, absorption spectroscopy and model-Hamiltonian based calculations [@Kim2; @Kim1; @Chikara; @Takayama]. However, a few observations have also hinted at itinerant characteristics in Sr$_{2}$IrO$_{4}$ [@Kini; @Arita; @Li; @Yamasaki; @Piovera]. Scanning tunneling microscopy finds that the electronic gap emerges in the close vicinity of the magnetic transition [@Li], whereas optical conductivity measurements deduce a strong reduction of the optical gap with increasing temperature [@Moon]. Also, a metal-insulator transition is observed in the ultrafast dynamics of photo-excited carriers, which indicates an underlying Slater-type mechanism in Sr$_{2}$IrO$_{4}$ [@Hsieh].
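As a point of reference for the $J_{eff}$ picture invoked above, the SOC splitting of the $t_{2g}$ manifold can be illustrated with a minimal numerical sketch (this is not part of the present calculations; it assumes the standard mapping of the $t_{2g}$ states onto an effective $l=1$ angular momentum, which introduces a sign change in the SOC term):

```python
import numpy as np

# effective l = 1 angular momentum matrices (hbar = 1), basis m = +1, 0, -1
Lz = np.diag([1.0, 0.0, -1.0])
Lp = np.zeros((3, 3)); Lp[0, 1] = Lp[1, 2] = np.sqrt(2.0)   # raising operator
Lx, Ly = (Lp + Lp.T) / 2.0, (Lp - Lp.T) / 2.0j

# spin-1/2 matrices
Sx, Sy, Sz = (np.array(m) / 2.0 for m in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]))

lam = 1.0  # SOC strength, arbitrary units
H = -lam * (np.kron(Lx, Sx) + np.kron(Ly, Sy) + np.kron(Lz, Sz))  # t2g-projected SOC
print(np.round(np.linalg.eigvalsh(H), 3))
# four states at -lam/2 (the J_eff = 3/2 quartet) and two at +lam (the J_eff = 1/2 doublet)
```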
Magnetic susceptibility and isothermal magnetization measurements find the effective paramagnetic moment and the saturation moment to be $0.5$ $\mu_{B}$ and $0.14$ $\mu_{B}$, respectively, both far below the expected spin-only value of $1$ $\mu_{B}$ for a localized spin of $S$ $=$ $\frac{1}{2}$ [@Cao; @Kini; @Ye]. The reduction in the magnitude of the Ir moment indicates strong hybridization between the Ir $5d$ and O $2p$ orbitals. In addition, Sr$_{2}$IrO$_{4}$ displays weak ferromagnetism, which is attributed to spin canting [@Kim1; @Crawford; @Kim3; @Ye]. It has been addressed in terms of nontrivial exchange interactions accounting for the strong coupling of the orbital magnetization to the lattice [@Jackeli; @Liu1]. Although the weak ferromagnetism vanishes with increasing pressure, the system retains the insulating ground state [@Haskel]. The effect is attributed to an increased tetragonal crystal field, thereby substantiating the interplay of structural distortions and SOC, which affects the balance between the isotropic magnetic coupling and the anisotropic Dzyaloshinskii-Moriya interaction. It has also been highlighted that the distorted in-plane bond angle in Sr$_{2}$IrO$_{4}$ can be tuned through magnetic field [@Ge] and epitaxial strain [@Serrao]. Besides, the in-plane anisotropy and the inter-layer coupling are also seen to play an important role in the low-field magnetic behavior of Sr$_{2}$IrO$_{4}$ [@Gim]. Therefore, in view of these experimental findings, it is clear that there is a subtle interplay of SOC, crystal field, Coulomb correlations, magnetic exchange interactions, and the local chemistry of the underlying IrO$_{6}$ motifs in Sr$_{2}$IrO$_{4}$. Significantly, what appears less emphasized for Sr$_{2}$IrO$_{4}$ is the effect of distant-neighbor interactions on the magnetism and the electronic structure properties. The magnetic structure refined from neutron diffraction corresponds to a non-collinear Néel-type ordering of the Ir spins in the crystallographic $a-b$ plane, with the spin orientation rigidly tracking the staggered rotation of the IrO$_{6}$ octahedra along the $c$-axis [@Ye]. However, since the Ir $5d$ orbitals are spatially extended and hybridize strongly with the neighboring O $2p$ orbitals, it may be anticipated that the magnetic interactions in the $a-b$ plane extend over significantly more distant neighbors than those along the $c$-axis. The antiferromagnetic ordering temperature, as high as $240$ K, can well be regarded as one such consequence of distant-neighbor magnetic exchange interactions. Here, we present a comprehensive investigation of the electronic and magnetic structure of Sr$_{2}$IrO$_{4}$ by means of first-principles density functional theory. To include interactions beyond first nearest neighbors, we model a few antiferromagnetic structures on an underlying supercell of dimension $2a$$\times$$2a$$\times$$c$, where $a$ and $c$ are the tetragonal lattice parameters of Sr$_{2}$IrO$_{4}$. Consistent with previous works, we find that the local approximations to the exchange-correlation potential, such as the local density approximation (LDA) [@LDA-PW] and the generalized gradient approximation (GGA-PBE) [@GGA-PBE], fail to capture the antiferromagnetic insulating ground state of Sr$_{2}$IrO$_{4}$. However, using the modified Becke-Johnson potential (mBJ) [@Tran], we find that the equilibrium corresponds to an unconventional $\uparrow$$\uparrow$$\downarrow$$\downarrow$ type antiferromagnetic ordering of the Ir ions in the $a$-$b$ plane.
The predictive powers of the calculation are substantiated by the consistency it yields with the experiments. The magnitude of the insulating gap and that of the Ir local moment and, the double peak structure in the materials optical absorption spectra are found to be in good agreement with the experiments. These findings suggest that the underlying mechanism that drives Sr$_{2}$IrO$_{4}$ as an antiferromagnetic insulator is Slater-type, which is in stark contrast with the widely discussed SOC driven $J_{eff}$ $=$ $\frac{1}{2}$ Mott model. Calculations are based on the full potential linearized augmented plane-wave (FP-LAPW) method as implemented in the Wien2k code [@Blaha]. The lattice parameters were adopted to the experimental values, with $a$$=5.48$ Å, and $c=25.83$ Å [@Crawford], and the position coordinates of the Sr and O ions were fully relaxed. The ground state properties were obtained using well-converged basis sets using the Wien2k parameters; $R_{MT}$$K_{max}$$=$ $7$, $G_{max}$$=$$24$ a.u.$^{-1}$and $l_{max}$ $=$$7$ [@Blaha]. Additional local orbitals were also used to account for the semi-core Ir $5p$ states. The exchange correlation potential to the crystal Hamiltonian was considered in mBJ formalism [@Tran]. Few collinear magnetic structures with different initial Ir spin alignment were considered in the study. These are shown in Table \[TAB\_STRUCTURE\], which are described in terms of the Ir spin alignment in the first, second, third and fourth near neighbors designated as $d_{NN}^{(i)}$; $i$ $=$ $1$, $4$. Neglecting non-collinearity, AF1 then represents the experimentally determined structure and FM represents ferromagnetic ordering. In LDA spin polarized calculations, all structures converged to a paramagnetic metallic solution. However, in GGA the AF3 spin configuration converged to an antiferromagnetic metallic solution with an Ir moment of $0.2$ $\mu_{B}$, while all other structures converged to a nonmagnetic solution. The AF3 structure was $-1.4$ meV/f.u lower in energy in comparison to its non-magnetic counterpart. A schematic representation of the AF3 structure is shown in Fig.\[FIG\_AF3\_STRUCTURE\]. The AF3 unit cell consists of $16$ formula units, with an underlying *Pnna* symmetry of crystal lattice dimensions $a$ $=$ $5.48$$\textrm{\AA}$, $b=$ $25.83$$\textrm{\AA}$ and $c$ $=$ $10.96$$\textrm{\AA}$. [&gt;p[1.2cm]{}&gt;p[1.45cm]{}&gt;p[1.3cm]{}&gt;p[1.3cm]{}&gt;p[1.3cm]{}&gt;p[1.25cm]{}]{} & Space group & $d_{NN}^{(1)}$ ($3.88$$\textrm{\AA}$) & $d_{NN}^{(2)}$ ($5.48$$\textrm{\AA}$) & $d_{NN}^{(3)}$ ($7.01$ $\textrm{\AA}$) & $d_{NN}^{(4)}$ ($7.75$ $\textrm{\AA}$)[\ ]{} AF1 & $I$-$4_{2}d$   & $4$($\downarrow$) & $4$($\uparrow$) & $4$($\uparrow$) $4$($\downarrow$)   & $4$($\uparrow$)[\ ]{}AF2 & $P4_{1}2_{1}2$   & $2$($\uparrow$) $2$($\downarrow$) & 4($\downarrow$) & $4$($\uparrow$) $4$($\downarrow)$   & $4$($\uparrow$)[\ ]{}AF3 & $Pnna$   & $2$($\uparrow$) $2$($\downarrow$) & $2$($\uparrow$) $2$($\downarrow$) & $4$($\uparrow$) $4$($\downarrow$)   & 4($\downarrow$)[\ ]{}FM & $I4_{1}/acd$ & $4$($\uparrow$) & $4$($\uparrow$) & $8$($\uparrow$)   & $4$($\uparrow$)[\ ]{} ![\[FIG\_AF3\_STRUCTURE\]The schematic representation of the AF3 structure showing the antiferromagnetic ordering of Ir moments in the $a-b$ plane of Sr$_{2}$IrO$_{4}$.](Fig01) It is well known that the electron density representation of the Coulomb potential in both LDA and GGA leads to an unphysical self interaction. 
As a result, these approximations tend to reduce the self-repulsion of electrons thereby stabilizing artificially delocalized electronic states [@Cohen; @Mori]. Among various correction schemes that have been proposed [@Heyd; @Bechstedt; @Georges], we adopt to the mBJ formalism. With $t$ and $\rho$ representing the kinetic energy density and electron density, respectively, a screening term of the form $\sqrt{\frac{t}{\rho}}$ is introduced in the mBJ exchange potential, the contribution of which is calculated by $\frac{\left|\nabla\rho\right|}{\rho}$ [@Tran]. As a result, regions with low density are associated with higher positive potential thereby increasing the energy of these states [@Tran; @Koller]. The mBJ formalism is applicable for Sr$_{2}$IrO$_{4}$ and also for other iridates [@Vijeta2; @Vijeta1] since the states in the vicinity of Fermi energy are predominantly anti-bonding in character. It should be noted that the anti-bonding orbitals have less electron density, thus the choice of mBJ exchange potential for iridates is well justified. In Fig.\[DOS-AFM-MBJ\], we show the mBJ generated total, atom resolved and Ir $5d$ resolved density of states (DOS) of Sr$_{2}$IrO$_{4}$ with AF3 spin configuration in the Ir sub-lattice. The spectra reveal Sr$_{2}$IrO$_{4}$ to be an insulator with an electronic gap of $0.47$ eV, consistent with the experimental value of $0.54$ eV [@Moon]. Here, we note that the magnitude of the insulating gap in Sr$_{2}$IrO$_{4}$ have been reported between $0.1$$-$$0.6$ eV, with the lowest determined from the resistivity data fit using a thermal activation model [@Shimura; @Ge] and also from the earlier GGA+U+SOC calculations [@Kim2; @Jin]. The highest value of the insulating gap of $0.6$ eV follows from the density of states measurements using the scanning tunneling spectroscopy [@Dai]. Intermediate values of the gap are reported from the optical conductivity, resonant inelastic x-ray scattering [@Kim2; @Jin; @Kim3] and also from the photoemission spectroscopy measurements [@Kim2; @Wang; @Wojek]. ![\[DOS-AFM-MBJ\](color online): The mBJ density of states of the AF3 antiferromagnetic structure. (a) Total and atom resolved partial density of states for the AFM unit-cell and, (b) Ir $5d$ resolved partial density of states per Ir atom. Here, the O1 and O2 atoms represent the apical and in-plane O atoms, respectively. **** The broken line through energy zero represents the reference Fermi energy. ](Fig2) In accordance with the ionic model which associates Ir $5d$ manifold with five electrons, the electronic gap is found to reside well within the Ir $t_{2g}$ manifold which extends over the range $E$(eV) $\subset$ $\left[-1.5,\,0.8\right]$. Four distinct localized features, which are predominantly of Ir $d_{xz}$ / $d_{yz}$ orbital characters are observed in the spectra. Three of them are in the occupied part of the spectra centered at $-1.32$ , $-0.77$ and $-0.31$ eV below $E_{F}$ and, the fourth peak at $0.55$ eV above $E_{F}$. The position of these bands indicate to three Ir $t_{2g}$ inter-band transitions with the first, second and third transition energies being $\simeq$ $0.86$ eV , $1.32$ eV and $1.87$eV, respectively. These energies are reasonably in good agreement with the optical conductivity measurements, where two transition peaks labeled $\alpha$ and $\beta$ were determined to be at $\simeq$ $0.5$ eV and $1$ eV, respectively [@Moon; @Sohn]. 
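The quoted transition energies follow directly from the peak positions; a minimal arithmetic check (peak values copied from the text):

```python
occupied = [-0.31, -0.77, -1.32]   # Ir d_xz/d_yz peaks below E_F [eV]
unoccupied = 0.55                  # localized peak above E_F [eV]
print([round(unoccupied - e, 2) for e in occupied])   # -> [0.86, 1.32, 1.87]
```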
However, unlike the $d_{xz}$ / $d_{yz}$ states, the $d_{xy}$ states appear relatively more widespread on the energy scale. We note that the Ir-O distance in Sr$_{2}$IrO$_{4}$ corresponds to $1.98$ $\textrm{ \AA}$ and $2.04$ $\textrm{\AA},$ for in-plane and apical O ions, respectively. For the in-plane O2 ions, the $2p_{z}$ orbitals hybridize with the Ir $d_{xz}$ and $d_{yz}$ orbitals, while the O $2p_{x}$ and $2p_{y}$ mix with the $d_{xy}$, $d_{z^{2}}$ and $d_{x^{2}-y^{2}}$ orbitals. On the other hand, for apical O1 ions the $2p_{z}$ orbitals hybridize with the Ir $d_{z^{2}}$ and the $2p_{x}$ / $2p_{y}$ mix with the $d_{xz}$ / $d_{yz}$ states. Thus, the crystal chemistry suggests a mixing of the Ir $t_{2g}$ and $e_{g}$ states in Sr$_{2}$IrO$_{4}$ primarily due to the rotation of the IrO$_{6}$ octahedra. The rotation of the octahedra mixes the otherwise orthogonal Ir $d_{xy}$ and $d_{x^{2}-y^{2}}$ orbitals and consequently push the $d_{xy}$ states below the Fermi energy. The $d_{xy}$ and $d_{x^{2}-y^{2}}$ hybridization also results in a pseudo-gap like feature which is manifested $\simeq$$-0.6$ eV below $E_{F}$. Further, the valence band energy integration of the orbitals states showed that the $d_{xy}$+ $d_{x^{2}-y^{2}}$ orbitals are occupied with $\simeq$ $2$ electrons per Ir ion, while the electron occupation in the $d_{xz}$+$d_{yz}$+$d_{z^{2}}$ sums to $3$ electrons per Ir ion, with $d_{xz}$ and $d_{yz}$ occupancy being $1.15$ and $1.29$ electrons, respectively. Also, the integrated DOS of the $d_{xy}$ / $d_{yz}$ orbitals above $E_{F}$ was determined to be $\simeq$$1$ e$^{-}$per Ir ion. Thus, we find that the scalar relativistic calculations with exchange potential as described in the mBJ formalism predicts Sr$_{2}$IrO$_{4}$ to be an antiferromagnetic insulator. The magnitude of the local magnetic moment at the Ir sites in the AF3 structure was calculated as $\simeq$ $0.57$ $\mu_{B}$. The value is significantly higher than those determined from experiment, the latter which deduce the value as $0.2$ $\mu_{B}$ [@Ye]. The overestimation of the Ir local moment might be due to the PBE-GGA functional in the calculation. However, the Ir magnetic moment is found to be much smaller than spin only value of $1$ $\mu_{B}$ anticipated for a $S$ $=$ $\frac{1}{2}$ system. This may be partly attributed to the strong hybridization of the Ir $5d$ $-$ O $2p$ orbitals. The effects of hybridization are also manifested on the induced moments at the O sites. We note that the AF3 structure has a $\uparrow$$\uparrow$$\downarrow$$\downarrow$ type antiferromagnetic ordering of Ir ions in the $a-b$ plane of the tetragonal unit cell. For those in-plane O ions which bridge the Ir ions in the $a$-$b$ plane with same polarization, *i.e.,* $\uparrow$$\uparrow$ or $\downarrow$$\downarrow$ the induced magnetic moment is calculated as $\simeq$ $0.12$ $\mu_{B}$ while for oppositely polarized Ir ions the moment is $\simeq$ $0.06$ $\mu_{B}$ . The induced moments on the apical O ions were found to be $0.03$ $\mu_{B}$. One of the well accepted methods to validate the electronic structure is by its optical response. In experiments, a double-peak structure with maxima around $0.5$ eV and $1$ eV have been observed, with the former peak being relatively sharper than the latter [@Kim3; @Sohn; @Lee]. 
These peaks are associated with two Ir $d-d$ transitions, which in terms of the $J_{eff}$ model are due to the transitions from occupied $J_{eff}$ $=$ $\frac{3}{2}$ and $\frac{1}{2}$ states to the unoccupied $J_{eff}$ $=$ $\frac{1}{2}$ states. The spectra has been well reproduced by the LDA+U+SO calculations, thereby suggesting the importance and interplay of SOC and Coulomb correlations in Sr$_{2}$IrO$_{4}$ [@Kim2]. ![\[Optical-conductivity\]The calculated optical conductivity spectra of Sr$_{2}$IrO$_{4}$ using the scalar relativistic Hamiltonian with an underlying AF3 structure. Two peaks correspond to Ir $d-d$ transitions and are positioned at $0.82$ eV and $1.32$ eV, respectively. The energy difference between the peaks corresponds to a magnitude of $0.5$ eV.](Fig3) In Fig. \[Optical-conductivity\], we show the optical conductivity calculated for Sr$_{2}$IrO$_{4}$ with the underlying AF3 structure. Consistent with the experimental spectra, we obtain two characteristic peaks, centered on the energy scale ** at $\simeq$ $0.82$ eV and $1.32$ eV, respectively. We note that the position of the peaks are shifted to higher energies in comparison with experiments [@Sohn; @Propper] which is primarily due to the larger band gap estimated ($0.57$ eV) in our calculations. However, what is very consistent between the experiments and that of our results is the energy difference between the two peaks, which is found to be $0.5$ eV. Our results, therefore show that the origin of electronic gap in Sr$_{2}$IrO$_{4}$ is associated with the long ranged antiferromagnetic interactions, and hence a Slater-insulator. ![\[DOS-AFM-MBJ-SOC\](color online): The mBJ-GGA+SOC density of states of the AF3 antiferromagnetic structure. (a) Total and atom resolved partial density of states for the AFM unit-cell and, (b) The Ir $5d$ resolved partial density of states. Here, the O1 and O2 atoms represent the apical and in-plane O atoms, respectively. **** The broken line through energy zero represents the reference Fermi energy. ](Fig4) We also investigated the effect of spin-orbit coupling (SOC) on the electronic structure of Sr$_{2}$IrO$_{4}$. The GGA-mBJ+SOC density of states are shown in Fig.\[DOS-AFM-MBJ-SOC\]. In the calculation the SOC was included for the valence states through the second variational step with the mBJ scalar relativistic basis, where states up to $10$ Ry above $E_{F}$ were included in the basis expansion. While the overall features of bonding states are more or less unaltered, we find noticeable changes in the anti-bonding region. The Ir $5d$ bands are more broadened manifesting an enhanced hybridization of the $t_{2g}$ states with the O $2p$ orbitals. The hybridization not only is found to reduce the band gap to $0.17$ eV, but also decreases the magnitude of the Ir local magnetic moment to $0.47$ $\mu_{B}$. ![\[DOS-AFM-GGA\_U2\](color online): The density of states of the AF3 antiferromagnetic structure calculated in the GGA+U$_{eff}$ (U$_{eff}$ $=$ $2$ eV) Hamiltonian, showing the total and atom resolved partial density of states. Here, the O1 and O2 atoms represent the apical and in-plane O atoms, respectively. **** The broken line through energy zero represents the reference Fermi energy. ](Fig5) So to check whether the insulating gap is originally due to the unconventional antiferromagnetic ordering of Ir spins and not pertained to the choice of the exchange functional, we also performed GGA+U$_{eff}$ calculations, with $U_{eff}$ $=$ $2$ eV. 
Quite interestingly, the overall features in the density of states (Fig.\[DOS-AFM-GGA\_U2\]) were found very much similar to that obtained with the mBJ-GGA. However, the calculated band gap and Ir local moment was $0.11$ eV and $0.41$ $\mu_{B}$, respectively. In general, our results following a comprehensive set of calculations concisely and convincingly show that SOC is lesser significant interaction in rendering Sr$_{2}$IrO$_{4}$ an antiferromagnetic insulating ground state. In summary, using the first-principles density functional theory based scalar relativistic calculations with exchange potential described in mBJ formalism, we find that Sr$_{2}$IrO$_{4}$ is an unconventional Slater-type antiferromagnetic system. The calculated magnitude of the electronic gap, that of the Ir local moment and, the two peak structure in the materials optical conductivity are found to be very consistent with the experimental results. Contrary to the present understanding that Sr$_{2}$IrO$_{4}$ is a SOC driven $J_{eff}$ Mott insulator, our calculations show that the role of of SOC in Sr$_{2}$IrO$_{4}$ is of lesser significance in rendering the system its insulating ground state. Our results, which are based on density functional theory, are expected to stimulate further experimental works with an objective to unravel the magnetic structure of the system and the nature of Ir magnetism, thereby providing robust understanding of iridates, in general. [10]{} B. J. Kim, H. Jin, S. J. Moon, J.-Y. Kim, B.-G. Park, C. S. Leem, J. Yu, T. W. Noh, C. Kim, S.-J. Oh, J.-H. Park, V. Durairaj, G. Cao, and E. Rotenberg, Phys. Rev. Lett. 101, 076402 (2008). B. J. Kim, H. Ohsumi, T. Komesu, S. Sakai, T. Morita, H. Takagi, and T. Arima, Science 323, 1329 (2009). H. Jin, H. Jeong, T. Ozaki, and J. Yu, Phys. Rev. B 80, 075112 (2009). M. V. R. Rao, V. G. Sathe, D. Sornadurai, B. Panigrahi, and T. Shripathi, J. Phys. Chem. Solids 61, 1989 (2000). B. Fisher, J. Genossar, A. Knizhnik, L. Patlagan, and G. M. Reisner, J. Appl. Phys. 101, 123703 (2007). G. Cao, J. Bolivar, S. McCall, J. E. Crow, and R. P. Guertin, Phys. Rev. B, 57, R11039 (1998). N. S. Kini, A. M. Strydom, H. S. Jeevan, C. Geibel, and S. Ramakrishnan, J. Phys. Condens. Matter 18, 8205 (2006). C. Cosio-Castaneda, G. Tavizon, A. Baeza, P. de la Mora, and R. Escudero, J. Phys. Condens. Matter 19, 446210 (2007). Y. Klein and I. Terasaki, J. Phys. Condens. Matter 20, 295201 (2008). S. Chikara, O. Korneta, W. P. Crummett, L. E. DeLong, P. Schlottmann, and G. Cao, Phys. Rev. B 80, 140407 (2009). T. Takayama, A. Matsumoto, G. Jackeli and H. Takagi, Phys. Rev. B 94, 224420 (2016) M. K. Crawford, M. A. Subramanian, R. L. Harlow, J. A. Fernandez-Baca, Z. R. Wang, and D. C. Johnston, Phys. Rev. B 49, 9198 (1994). R. J. Cava, B. Batlogg, K. Kiyono, H. Takagi, J. J. Krajewski, W. F. Peck, L. W. Rupp, and C. H. Chen, Phys. Rev. B 49, 11890 (1994). T. Shimura, Y. Inaguma, T. Nakamura, M. Itoh, and Y. Morii, Phys. Rev. B 52, 9143 (1995). J. Kim, D. Casa, M. H. Upton, T. Gog, Y. J. Kim, J. F. Mitchell, M. van Veenendaal, M. Daghofer, J. van den Brink, G. Khaliullin and B. J. Kim, Phys. Rev. Lett. 108, 177003 (2012). R. Arita, J. Kune¨, A. V. Kozhevnikov, A. G. Eguiluz, and M. Imada, Phys. Rev. Lett. 108, 086403 (2012). Q. Li, G. Cao, S. Okamoto, J. Yi, W. Lin, B. Sales, J. Yan, R. Arita, J. Kunes, A. Kozhevnikov, A. Eguiluz, M. Imada, Z. Gai, M. Pan, and D. Mandrus, Sci. Rep. 3, 3073 (2013). A. Yamasaki, H. Fujiwara, A. Higashiya, A. Irizawa, O. Kirilmaz, F. Pfaff, P. Scheiderer, J. 
Gabel, M. Sing, T. Muro, M. Yabashi, K. Tamasaku, H. Sato, H. Namatame, M. Taniguchi, A. Hloskovskyy, H. Yoshida, H. Okabe, M. Isobe, J. Akimitsu, W. Drube, R. Claessen, T. Ishikawa, S. Imada, A. Sekiyama, and S. Suga, Phys. Rev. B 89, 121111(R) (2014). C. Piovera, V. Brouet, E. Papalazarou, M. Caputo, M. Marsi, A. Taleb-Ibrahimi, B. J. Kim, and L. Perfetti, Phys. Rev. B 93, 241114(R) (2016). F. Ye, S. Chi, B. C. Chakoumakos, J. A. Fernandez-Baca, T. Qi and G. Cao, Phys. Rev. B 87, 140406 (2013). J. P. Perdew and Y. Wang Phys. Rev. B 45, 13244 (1992). J. P. Perdew, K. Burke, and M. Ernzerhof Phys. Rev. Lett. 77, 3865 (1996) D. Hsieh, F. Mahmood, D. H. Torchinsky, G. Cao, and N. Gedik Phys. Rev. B 86, 035128 (2012). G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009). P. Liu, S. Khmelevskyi, B. Kim, M. Marsman, D. Li, X.-Q. Chen, D. D. Sarma, G. Kresse, and C. Franchini, Phys. Rev. B 92, 054428 (2015). D. Haskel, G. Fabbris, M. Zhernenkov, P. P. Kong, C. Q. Jin, G. Cao, and M. van Veenendaal, Phys. Rev. Lett. 109, 027204 (2012). M. Ge, T. Qi, O. Korneta, D. DeLong, P. Schlottmann, W. Crummett, and G. Cao, Phys. Rev. B 84, 100402 (2011). C. R. Serrao, J. Liu, J. T. Heron, G. Singh-Bhalla, A. Yadav, S. J. Suresha, R. J. Paull, D. Yi, J. H. Chu, M. Trassin, A. Vishwanath, E. Arenholz, C. Frontera, J. Zelezny, T. Jungwirth, X. Marti and R. Ramesh, Phys. Rev. B 87, 085121 (2013). Y. Gim, A. Sethi, Q. Zhao, J. F. Mitchell, G. Cao and S. L. Cooper, Phys. Rev. B 93, 024405 (2016). P. Blaha, K. Schwarz, G. Madsen, D. Kvasicka, and J. Luitz, computer code WIEN2K, Technical University of Vienna, Vienna (2001). F. Tran and P. Blaha, Phys. Rev. Lett. 102, 226401 (2009). V. Singh and J. J. Pulikkotil, Phys. Chem. Chem. Phys., 18, 26300 (2016). A. J. Cohen, P. Mori-Sanchez, and W. Yang, Science 321, 792 (2008). P. Mori-Sanchez, A. J. Cohen, and W. Yang, Phys. Rev. Lett. 100, 146401 (2008). J. Heyd, J. E. Peralta, G. E. Scuseria, and R. L. Martin, J. Chem. Phys. 123, 174101 (2005). F. Bechstedt, F. Fuchs, and G. Kresse, Phys. Status Solidi B 246, 1877 (2009). A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Rev. Mod. Phys. 68, 13 (1996). D. Koller, F. Tran, and P. Blaha, Phys. Rev. B 83, 195134 (2011). V Singh and J. J. Pulikkotil, Phys. Chem. Chem. Phys. 18, 26300 (2016). V Singh and J. J. Pulikkotil, Comput. Mater. Sci. 153, 97-102 (2018). S. J. Moon, Hosub Jin, W. S. Choi, J. S. Lee, S. S. A. Seo, J. Yu, G. Cao, T. W. Noh, and Y. S. Lee Phys. Rev. B 80, 195110 (2009). C. H. Sohn, M.-C. Lee, H. J. Park, K. J. Noh, H. K. Yoo, S. J. Moon, K. W. Kim, T. F. Qi, G. Cao, D.-Y. Cho, and T. W. Noh, Phys. Rev. B 90, 041105(R) (2014). J. Dai, E. Calleja, G. Cao and K. McElroy, Phys. Rev. B 90, 041102 (2014). Q. Wang, Y. Cao, J. A. Waugh, S. R. Park, T. F. Qi, O. B. Korneta, G. Cao, and D. S. Dessau, Phys. Rev. B 87, 245109 (2013). B. M. Wojek, M. H. Berntsen, S. Boseggia, A. T. Boothroyd, D. Prabhakaran, D. F. McMorrow, H. M. Rønnow, J. Chang, and O. Tjernberg, J. Phys. Condens. Matter 24, 415602 (2012). D. Pröpper, A. N. Yaresko, M. Höppner, Y. Matiks, Y-L. Mathis, T. Takayama, A. Matsumoto, H. Takagi, B. Keimer, and A. V. Boris, Phys. Rev. B 94, 035158 (2016). J. S. Lee, Y. Krockenberger, K. S. Takahashi, M. Kawasaki, and Y. Tokura, Phys. Rev. B, 85, 035101 (2012).
--- abstract: | The aim of this paper is to develop and analyze high-order time stepping schemes for solving semilinear subdiffusion equations. We apply the $k$-step BDF convolution quadrature to discretize the time-fractional derivative with order $\alpha\in (0,1)$, and modify the starting steps in order to achieve optimal convergence rate. This method has already been well-studied for the linear fractional evolution equations in Jin, Li and Zhou [@JinLiZhou:correction], while the numerical analysis for the nonlinear problem is still missing in the literature. By splitting the nonlinear potential term into an irregular linear part and a smoother nonlinear part, and using the generating function technique, we prove that the convergence order of the corrected BDF$k$ scheme is $O(\tau^{\min(k,1+2\alpha-\epsilon)})$, without imposing further assumption on the regularity of the solution. Numerical examples are provided to support our theoretical results.\ [**Keywords:**]{} semilinear subdiffusion, convolution quadrature, $k$-step BDF, initial correction, error estimate.\ [**AMS subject classifications 2010:**]{} 65M60, 65N30, 65N15, 35R11 author: - 'Kai Wang[^1]' - 'Zhi Zhou[^2]' title: 'High-order Time Stepping Schemes for Semilinear Subdiffusion Equations [^3]' --- **Introduction** {#Se:intr} ================ Fractional partial differential equations (PDEs) have been drawing increasing attention over the past several decades, due to their capability to describe anomalous diffusion processes, in which the mean square variance of particle displacements grow sublinearly/superlinear with the time, instead of the linear growth for a Gaussian process. Nowadays those models have been successfully employed in many practical applications, including dynamics of single-molecular protein [@kou2008stochastic], flow in highly heterogeneous aquifer[@berkowitz2002physical] and thermal diffusion in fractal domains[@nigmatullin1986realization], to name but a few; see [@Metzler:2014] for an extensive list. The aim of this paper is to study high-order time stepping schemes for solving the initial-boundary value problem for the semilinear subdiffusion equation: $$\label{Eqn:fde} \left\{\begin{aligned} \dalt u-\Delta u&=f(u)&&\text{ in } \Omega\times (0,T),\\ u &=0 &&\text{ on } \pt\Omega\times (0,T),\\ u(0)&=u_0 &&\text{ in }\Omega , \end{aligned}\right.$$ where $\Omega$ denotes a bounded, convex domain in $\R^d$ with smooth boundary, and $\Delta$ denotes the Laplacian on $\Omega$ with a homogenous Dirichlet boundary condition. Here $\dalt u$ denotes the left-sided Caputo fractional derivative of order $\al\in(0,1)$ with respect to $t$ and it is defined by [@KilbasSrivastavaTrujillo:2006 pp.91] $$\label{McT} \partial_t^\alpha u(t):= \frac{1}{\Gamma(1-\alpha)} \int_0^t(t-s)^{-\alpha}u'(s)\, \d s ,\quad \mbox{with}\quad \Gamma(z):=\int_0^\infty s^{z-1}e^{-s}\d s.$$ Throughout the paper, we assume that the initial data $u_0$ is smooth and compatible with the homogeneous Dirichlet boundary condition, and $f:\mathbb{R}\rightarrow \mathbb{R}$ is a globally smooth function, e.g., $f \in C^3(\mathbb{R})$. Moreover, we assume that the nonlinear subdiffusion problem has a unique global solution $u\in C([0,T]\times\bar\Omega)$. One typical example is the time-fractional Allen-Cahn equation, i.e., $f(u)=u-u^3$, whose well-posedness and smoothing properties have already been investigated in [@DuYangZhou:AllenCahn]. 
High-order time stepping schemes for solving the linear time-fractional evolution problems have been intensively studied in recent years; see [@JinLazarovZhou:overview] (and the references therein) for a concise overview. Roughly speaking, there are two prominent types of schemes: piecewise polynomial interpolation (e.g., [@alikhanov2015new; @GaoSunZhang:2014; @lin2007finite; @Sun:2006]) and convolution quadrature (CQ) (e.g., [@CuestaLubichPalencia:2006; @Jin:SISC2016; @YusteAcedo:2005; @Zeng:2013]). To the first group belongs the popular method using a piecewise linear interpolation (also known as L1 scheme). Lin and Xu [@lin2007finite] developed the scheme for fractional diffusion, and analyzed the stability and convergence rate; see also [@Sun:2006]. The discretization has a local truncation error $O(\tau^{2-\alpha})$ where $\tau$ denotes the step size in time, provided that the solution is smooth enough in time. The argument could be extended to high-order methods using piecewise polynomial interpolation [@alikhanov2015new; @GaoSunZhang:2014]. In the second group, CQ developed by Lubich [@Lubich:1986; @Lubich:1988] provided a systematic framework to construct high-order numerical schemes, and has been the foundation of many early works. Due to its particular construction, it naturally inherits the stability and accuracy of standard linear multistep methods, which greatly facilitates the analysis of resulting numerical schemes. However, for both techniques with uniform meshes, the desired convergence rates can be obtained only if data is sufficiently smooth and compatible, which is generally not valid. Otherwise, most of popular schemes can only achieve a first-order accuracy [@JinLazarovZhou:L1; @JinLiZhou:correction]. For the linear problem, the desired high-order convergence rates can be restored by correcting the first several time steps [@Jin:SISC2016; @JinLiZhou:correction; @Yan:2018L1], even for nonsmooth problem data. See also [@Stynes:2017; @LiaoLiZhang:2018] for the application of [L1]{} scheme with graded meshes, [@McLeanMustapha:2015; @Mustapha:2014; @MustaphaMcLean:2013] for an analysis of discontinuous Galerkin method and [@ChenXuHesthaven:2015; @LiXu:2009; @Zayernouri:2015uni] for studies of spectral methods. However, there is fewer work on nonlinear subdiffusion problems. The first rigorous analysis was given in [@JinLiZhou:nonlinear], where Jin et al. proposed a general framework for mathematical and numerical analysis of the nonlinear equation with a globally Lipschitz continuous potential term $f(u)$. A time stepping scheme based on backward Euler CQ scheme or L1 method was studied and a uniform-in-time convergence rate $O(\tau^\alpha)$ was proved. Then it was proved in [@Karaa:nonlinear] that the convergence rate of the backward Euler CQ scheme is $O(\tau)$ at a fixed time even for the nonsmooth data. As far as we know, there is no theoretical study on high-order schemes for the nonlinear problem based on confirmed solution regularity. Therefore, in this paper, we aim to study high-order time stepping schemes based on CQ generated by $k$-step BDF method. This work is motivated by our preceding studies on the corrected BDF$k$ schemes for linear subdiffusion equations [@Jin:SISC2016; @JinLiZhou:correction]. To discretize the fractional derivative, we let $0=t_0<t_1<\ldots<t_N=T$ be a uniform partition of the time interval $[0,T]$, with grid points $t_n=n\tau$ and step size $\tau=T/N$. 
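As an aside, the piecewise-linear (L1) quadrature mentioned above is easy to illustrate on exactly this uniform mesh; the following minimal sketch is our own illustration, not taken from the cited works, and the test function $u(t)=t^2$ and parameter values are arbitrary:

```python
import numpy as np
from math import gamma

def caputo_L1(u_vals, alpha, tau):
    """L1 approximation of the Caputo derivative at the grid points t_1, ..., t_N."""
    N = len(u_vals) - 1
    j = np.arange(N)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights
    du = np.diff(u_vals)                            # u_{m+1} - u_m
    d = np.empty(N)
    for n in range(1, N + 1):
        # sum_{j=0}^{n-1} b_j (u_{n-j} - u_{n-j-1})
        d[n - 1] = np.dot(b[:n], du[:n][::-1])
    return d / (gamma(2 - alpha) * tau ** alpha)

alpha, T, N = 0.5, 1.0, 200
tau = T / N
t = np.linspace(0.0, T, N + 1)
approx = caputo_L1(t ** 2, alpha, tau)
exact = gamma(3) / gamma(3 - alpha) * t[1:] ** (2 - alpha)
print(np.max(np.abs(approx - exact)))   # decreases under mesh refinement,
                                        # in line with the O(tau^{2-alpha}) truncation error
```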
Upon rewriting the Caputo derivative $\partial_t^\alpha u$ as a Riemann-Liouville one [@KilbasSrivastavaTrujillo:2006 pp. 91], we consider the following fully implicit time stepping scheme: for the given initial value $u_0$, find $u_n$, $n=1,2,\ldots, N$, such that $$\begin{aligned} \label{TD-scheme} \begin{aligned} &\bar\partial_\tau^\alpha (u_n-u_0)-\Delta u_n= f(u_n) , \end{aligned}\end{aligned}$$ where $u_n,\,n=1,2,\cdots,N$ are the approximations to the exact solutions $u(t_n)$, and $\bdalt\varphi_n$ denotes the convolution quadrature generated by $k$-step BDF, $k=1,2,\cdots,6$ with the definition $$\label{Eq:BDF-CQ} \bdalt\varphi_n:=\frac{1}{\tau^\al}\sum_{i=0}^{n} {\omega_i^{(\alpha)}}\varphi_{n-i}.$$ The coefficients $\{\omega_i^{(\alpha)}\}_{i=0}^{\infty}$ can be computed either by the fast Fourier transform[@podlubny1998fractional; @sousa2012approximate] or recursion[@wu2014determination] in the following series expansion $$\label{eqn:gen-k} \delta_\tau(\xi)^\al=\frac{1}{\tau^\al} \sum_{i=0}^{\infty} {\omega_i^{(\alpha)}}\xi^i\quad\text{with}\quad \delta_\tau(\xi)=\frac{1}{\tau} \sum_{i=1}^{k} \frac{1}{i}(1-\xi)^i.$$ For linear subdiffusion problem, it has been shown in [@JinLiZhou:correction] that the scheme is only first-order accurate in general. However, the optimal order $O(\tau^k)$ of the BDF$k$ scheme could be restored by correcting the first $k-1$ steps. For example, we split the source term $f$ into $f(t)=f(0) + (f(t)-f(0))$ and approximate $f(0)$ by $\bar\partial_\tau \partial_t^{-1}f(0)$, with a similar treatment of the initial data. This leads to a simple modification at the first step and restores the $O(\tau^2)$ accuracy for any fixed $t_n>0$ [@LubichSloanThomee:1996; @CuestaLubichPalencia:2006; @Jin:SISC2016]. This motivates us to decompose the nonlinear potential term $f(u)$ by $$\label{eqn:taylor} f(u(t))= f(u_0) + f'(u_0) (u(t)-u_0) + R(u(t);u_0).$$ Then the residue part, $R(u(t);u_0)=O((u(t)-u_0)^2)$, is more regular in the time direction. As a result, the semilinear equation can be reformulated by $$\label{eqn:fde-m} \begin{aligned} \dalt u(t)-(\Delta + f'(u_0)I) u(t)&= f(u_0) - f'(u_0)u_0 + R(u(t);u_0), \end{aligned}$$ where $I$ denotes the identity operator. Therefore, by letting $$\label{eqn:A} g_0 = f(u_0) - f'(u_0)u_0\quad\text{and} \quad A=\Delta + f'(u_0)I,$$ we can modify the BDF$k$ scheme by $$\label{eqn:BDF-CQ-m} \left\{\begin{aligned} &\bdalt (u-u_0)_n-Au_n=g_0+a_n^{(k)}(Au_0+g_0)+R(u_n;u_0),\quad &&1\leq n\leq k-1,\\ &\bdalt (u-u_0)_n-Au_n=g_0+R(u_n;u_0),\quad &&k\leq n\leq N, \end{aligned}\right.$$ where the unknown coefficients $a_n^{(k)}$ were given in [@JinLiZhou:correction Table 1]. 
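These correction coefficients are tabulated below. Independently of them, the CQ weights $\{\omega_i^{(\alpha)}\}$ defined above can be generated by a short routine; the following is a minimal sketch (the helper is ours, and it assumes the standard J.C.P. Miller recurrence for a fractional power of a power series):

```python
import numpy as np
from math import comb

def bdf_cq_weights(alpha, k, N):
    """Coefficients omega_i^(alpha), i = 0..N, of delta(xi)^alpha, where
    delta(xi) = sum_{i=1}^{k} (1 - xi)^i / i  (the factor tau^{-alpha} is kept separate)."""
    a = np.zeros(N + 1)
    for i in range(1, k + 1):                      # polynomial coefficients of delta(xi)
        for j in range(min(i, N) + 1):
            a[j] += comb(i, j) * (-1.0) ** j / i
    w = np.zeros(N + 1)
    w[0] = a[0] ** alpha                           # a_0 = 1 + 1/2 + ... + 1/k > 0
    for n in range(1, N + 1):                      # J.C.P. Miller recurrence
        s = sum(((alpha + 1) * j - n) * a[j] * w[n - j] for j in range(1, min(n, k) + 1))
        w[n] = s / (n * a[0])
    return w

# sanity check: for k = 1 (backward Euler CQ) the weights are (-1)^i * binom(alpha, i)
print(bdf_cq_weights(0.5, 1, 4))   # -> [ 1., -0.5, -0.125, -0.0625, -0.0390625]
```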
  BDF$k$   $a_1^{(k)}$           $a_2^{(k)}$           $a_3^{(k)}$         $a_4^{(k)}$           $a_5^{(k)}$
  -------- --------------------- --------------------- ------------------- --------------------- ------------------
  $k=2$    $\frac{1}{2}$
  $k=3$    $\frac{11}{12}$       $-\frac{5}{12}$
  $k=4$    $\frac{31}{24}$       $-\frac{7}{6}$        $\frac{3}{8}$
  $k=5$    $\frac{1181}{720}$    $-\frac{177}{80}$     $\frac{341}{240}$   $-\frac{251}{720}$
  $k=6$    $\frac{2837}{1440}$   $-\frac{2543}{720}$   $\frac{17}{5}$      $-\frac{1201}{720}$   $\frac{95}{288}$

  : The coefficients $a_n^{(k)}$[]{data-label="Tab:ank"}

By rearranging terms, the modified BDF$k$ scheme is equivalent to $$\label{eqn:BDF-CQ-m-r} \left\{\begin{aligned} &\bdalt (u-u_0)_n-\Delta u_n= a_n^{(k)}(\Delta u_0 + f(u_0)) + f(u_n),\quad &&1\leq n\leq k-1,\\ &\bdalt (u-u_0)_n-\Delta u_n=f(u_n),\quad &&k\leq n\leq N, \end{aligned}\right.$$ which is consistent with the BDF$k$ scheme for the linear subdiffusion problem [@Jin:SISC2016; @JinLiZhou:correction]. The main result of this paper is to derive an error estimate in $L^\infty(\Omega)$ for the novel time stepping scheme . In particular, if $u_0\in \{u\in C(\bar\Omega),~u=0~\text{on}~\partial\Omega,~\text{and}~ \Delta u \in C(\bar\Omega) \}$, we prove that (see Theorem \[thm:error\]) $$\label{eqn:err-0} \| u_n - u(t_n) \|_{L^\infty(\Omega)} \le c_T t_n^{\alpha - \min(k,1+2\alpha-\ep)} \tau^{\min(k,1+2\alpha-\ep)}.$$ This estimate is interesting, because the source term $f(u)\in {W^{1+\alpha-\ep,1}(0,T;L^\infty\II)}$ in general, which is nonsmooth in the time direction, and intuitively one would only expect the convergence order $O(\tau^{\min(k,1+\alpha-\epsilon)})$ [@JinLiZhou:correction Table 8]. However, the estimate indicates that the best convergence rate of the BDF$k$ scheme is almost $O(\tau^{1+2\alpha})$. The restriction of the convergence order comes from the low regularity of the remainder $R(u;u_0)$, even though the initial data $u_0$ is smooth and compatible with the boundary condition. This phenomenon contrasts sharply with the normal parabolic counterpart, i.e., $\alpha=1$. For example, in [@CrouzeixThomee] it has been proved that time stepping schemes for the semilinear parabolic equation fail to achieve the best convergence rate only when the initial data is not regular enough. The rest of the paper is organized as follows. In section \[sec:prelim\], we provide some preliminary results about the solution regularity which will be intensively used in the error estimation. The error analysis of the time stepping scheme is established in section \[sec:error\]. Then the fully discrete scheme is analyzed in section \[sec:fully\]. Finally, in section \[sec:numerics\], we present some numerical results which support and illustrate our theoretical findings. Throughout this paper, the notation $c$ denotes a generic constant, which may vary at different occurrences, but it is always independent of the time step size $\tau$ and spatial mesh size $h$.

Preliminary results {#sec:prelim}
===================

In this section, we shall present some regularity results which will be used extensively in the next section. As introduced above, we always assume that the semilinear subdiffusion problem has a unique global solution $u\in C([0,T]\times\bar \Omega)$ (e.g., the time-fractional Allen-Cahn equation [@DuYangZhou:AllenCahn]).
[Solution representation]{} --------------------------- First, we introduce a representation of the solution to problem by Laplace transform. For simplicity, we let $g(t):=f(u(t))$ and $w(t):=u(t)-u_0$. Then it is easy to see that the function $w(t)$ satisfies the equation $$\dalt w(t)-\Delta w(t)=\Delta u_0+g(t)$$ with initial condition $w(0)=0$. Taking Laplace transform, denoted by $\hat{}$ , we have $$z^\al{\hat{w}}(z)-\Delta {\hat{w}}(z)=z^{-1}\Delta u_0+\hat{g}(z),$$ which implies that ${\hat{w}}(z)=(z^\al-\Delta )^{-1}(z^{-1}\Delta u_0+\hat{g}(z))$. With inverse Laplace transform and convolution rule, the solution $u(t)$ can be explicitly expressed by $$\label{eqn:sol-rep} u(t)=(I + F(t)\Delta) u_0+\int_{0}^{t} E(t-s)f(u(s))\d s,$$ where the operators $F(t)$ and $E(t)$ are defined by $$\label{eqn:op} F(t)=\frac{1}{2\pi\i}\int_{\Gamma_{\theta,\delta}} e^{zt}z^{-1}(z^\al-\Delta )^{-1}\d z\quad\text{and}\quad E(t)=\frac{1}{2\pi\i}\int_{\Gamma_{\theta,\delta}} e^{zt}(z^\al-\Delta )^{-1}\d z,$$ respectively, where $\Gamma_{\theta,\delta}$ denotes the integral contour $$\Gamma_{\theta,\delta}=\{z\in\mathbb{C}:|z|=\delta,|\arg z|\leq \theta\} \cup \{z\in\mathbb{C}: z=\rho e^{\pm\i\theta},\rho\geq\delta\},$$ oriented with an increasing imaginary part with a fixed angle $\theta\in(\pi/2,\pi)$. In this paper, we shall derive some estimates in $L^\infty(\Omega)$ norm, which requires the resolvent estimate. Let us consider the second-order partial differential operator $$L u = -\Delta u + q u,$$ with the homogeneous Dirichlet boundary condition. Here we assume that $q\in L^\infty(\Omega)$ and $q(x)\ge 0$ for all $x\in \Omega$. This implies that $L$ is positively definite, i.e., $$(L u , u) \ge c \| \nabla u \|_{L^2(\Omega)}^2,\qquad \text{for all}~~u\in H_0^1(\Omega).$$ Then the following resolvent estimate holds: for any angle $\phi\in(\pi/2,\pi)$ (see [@Bakaev Theorem 1.1], [@Bakaev:2003 Theorem 2.1] or [@Stewart Theorem 1]) $$\label{eqn:resol} \| (z + L)^{-1} \|_{C(\bar\Omega)\rightarrow C(\bar\Omega)} \le c |z|^{-1} \qquad \text{for}~z\in \Sigma_{\phi}= \{ z\in\mathbb{C}\backslash \{ 0 \}: \text{arg}(z) \in (-\phi, \phi) \}.$$ From now on, we assume that the initial condition $u_0$ is smooth enough and compatible to the homogenous Dirichlet boundary condition, i.e., $$\label{eqn:ini-D} u_0 \in D=\{u\in C(\bar\Omega),~u=0~\text{on}~\partial\Omega,~\text{and}~ \Delta u \in C(\bar\Omega) \}.$$ Then by the resolvent estimate , it is easy to observe that the operators $F$ and $E$, defined in , satisfy the following regularity estimate that for $\ell=0,1,2,\ldots,$ $$\label{eqn:smoothing} t \| \partial_t^{(\ell)}E(t) v \|_{L^\infty(\Omega)} + \| \partial_t^{(\ell)}F(t) v \|_{L^\infty(\Omega)} \le c t^{\alpha-\ell} \| v \|_{L^\infty(\Omega)}\qquad \forall ~~v\in C(\bar\Omega).$$ The estimates with $L^2(\Omega)$-norm have already been confirmed in [@JinLiZhou:nonlinear Lemma 3.4] by using resolvent estimate. The proof of is similar to that, and hence is omitted here. [Solution regularity]{} ----------------------- With the help of , we are ready to state the following lemma on the regularity of the solution to the nonlinear subdiffusion equation . \[thm:reg-u\] We assume that $u_0\in D$ with the space $D$ defined by . Besides, suppose that the problem has a unique global solution $u\in C([0,T]\times\Omega)$. 
Then $u\in C^{\alpha}([0,T];C(\bar\Omega))\cap C^\ell((0,T];C(\bar\Omega))$, $\ell=1,2,3$, and it satisfies the a priori estimate $$\label{Eq:inf} \| \partial_t^\ell u(t) \|_{L^\infty(\Omega)} \le c t^{\alpha-\ell} ,\quad \text{for}~\ell=1,2,3,$$ where the constant $c$ depends on $\alpha, T$ and $u_0$. The Hölder continuity $u\in C^{\alpha}([0,T];C(\bar\Omega))$ and the estimate with $\ell=1$ are direct results of the solution representation , the estimate , and the Banach fixed point theorem. The argument is identical to the proof of [@JinLiZhou:nonlinear Theorem 3.1], and hence omitted here. Now we turn to the estimate with $\ell=2$, which requires more discussion. First, we take derivative on the solution representation and obtain $$\label{eqn:der1} \begin{aligned} u'(t)&= \frac{d}{dt} F(t) \Delta u_0 + \frac{d}{dt} \int_0^t E(s) f(u(t-s))\,\d s \\ &= E(t) [\Delta u_0 + f(u_0)] + \int_0^t E(s) f'(u(t-s)) u'(t-s)\,\d s, \\ \end{aligned}$$ where we use the fact that $F'(t) = E(t)$. Here we note that both $E(t)$ and $u'(t)$ are weakly singular near $t=0$. Therefore, we multiply $t^{2-\alpha}$ on to compensate for the singularity before differentiation. Then $$\label{Eq:tuprime} \begin{aligned} t^{2-\al}u'(t)&=t^{2-\al}E(t)[\Delta u_0+f(u_0)]+t^{2-\al} \int_0^t E(s) f'(u(t-s)) u'(t-s)\,\d s \\ &=t^{2-\al}E(t)(\Delta u_0+f(u_0))+t^{1-\al}\int_{0}^{t} (t-s)E(t-s)f'(u(s))u'(s)\d s\\ &\quad +t^{1-\al}\int_{0}^{t} E(s)f'(u(t-s))(t-s)u'(t-s)\d s\\ & =: \sum_{i=1}^3 I_i(t). \end{aligned}$$ After taking derivative of the first term $I_1$, we apply the estimate to obtain that $$\label{Eq:I1} \begin{aligned} \|\partial_tI_1(t)\|_{L^\infty(\Omega)}&=\Big\|\Big((2-\al)t^{1-\al}E(t)+t^{2-\al}E'(t)\Big)[\Delta u_0+f(u_0)]\Big\|_{L^\infty(\Omega)}\\ &\leq ct^{1-\al}\|E(t)[\Delta u_0+f(u_0)]\|_{L^\infty(\Omega)}+t^{2-\al}\|E'(t)[\Delta u_0+f(u_0)]\|_{L^\infty(\Omega)}\\ &\le c\| \Delta u_0+f(u_0) \|_{L^\infty(\Omega)}, \end{aligned}$$ where we use the fact that $\Delta u_0+f(u_0) \in C(\bar\Omega)$. The derivative of the second term $I_2$ in can be estimate analogously. Using the estimate , we have $$\lim_{t\rightarrow 0} \| t E(t) \|_{ C(\bar \Omega) \rightarrow C(\bar \Omega)} = 0,$$ which together with the triangle’s inequality and for $\ell=1$ implies that $$\begin{aligned} \|\partial_tI_2(t)\|_{L^\infty(\Omega)}&\le c t^{-\alpha}\int_{0}^{t} (t-s) \|E(t-s)f'(u(s))u'(s)\|_{L^\infty(\Omega)} \d s\\ &\quad + ct^{1-\alpha} \int_{0}^{t} \|E(t-s)f'(u(s))u'(s)\|_{L^\infty(\Omega)} \d s \\ &\quad + ct^{1-\alpha} \int_{0}^{t} (t-s) \|E'(t-s)f'(u(s))u'(s)\|_{L^\infty(\Omega)} \d s \\ &\le c t^{-\alpha}\int_{0}^{t} (t-s)^\alpha s^{\alpha-1} \d s + ct^{1-\alpha} \int_{0}^{t}(t-s)^{\alpha-1} s^{\alpha-1} \d s \le c_T. \end{aligned}$$ Similarly, the derivative of [the]{} third term can be bounded by $$\begin{aligned} \|\partial_tI_3(t)\|_{L^\infty(\Omega)}&\le ct^{-\al}\int_{0}^{t} s\|E(t-s)f'(u(s))u'(s)\|_{L^\infty(\Omega)} \d s\\ &\quad + ct^{1-\alpha} \int_{0}^{t} \|E(t-s)[f'(u(s))u'(s)+sf''(u(s))(u'(s))^2]\|_{L^\infty(\Omega)} \d s \\ &\quad + ct^{1-\alpha} \int_{0}^{t} s \|E(t-s) f'(u(s))u''(s)\|_{L^\infty(\Omega)} \d s \\ &\le c_T + ct^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}s \| u''(s)\|_{L^\infty(\Omega)} \d s. 
\end{aligned}$$ As a result, we achieve at $$\|\partial_t[t^{2-\alpha}u'(t)]\|_{L^\infty(\Omega)} \le c + ct^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}s \| u''(s)\|_{L^\infty(\Omega)} \d s.$$ Then we apply for $\ell=1$ again and use the triangle inequality to obtain that $$\label{eqn:est1} \begin{aligned} t^{2-\alpha}\|u''(t)\|_{L^\infty(\Omega)} &\le c + c t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}s \| u''(s)\|_{L^\infty(\Omega)} \d s. \end{aligned}$$ In order to derive a uniform bound of $t^{2-\alpha}\|u''(t)\|_{L^\infty(\Omega)}$, we multiply $e^{-\sigma t}$ on the inequality for some parameter $\sigma>0$ to be determined, and obtain that $$\label{eqn:gr-01}\begin{aligned} &\quad e^{-\sigma t}t^{2-\alpha}\|u''(t)\|_{L^\infty(\Omega)}\\ &\le ce^{-\sigma t} + ce^{-\sigma t} \int_{0}^{t} t^{1-\alpha} (t-s)^{\alpha-1}s \| u''(s)\|_{L^\infty(\Omega)} \d s\\ &\le ce^{-\sigma t} + c \Big[\max_{t\in[0,T]}e^{-\sigma t}t^{2-\alpha}\| u''(t)\|_{L^\infty(\Omega)} \Big] \int_{0}^{t} t^{1-\alpha} (t-s)^{\alpha-1}e^{-\sigma(t-s)} s^{\alpha-1} \d s\\ &\le ce^{-\sigma t} + c \big(T/\sigma\big)^{\frac\alpha2}\max_{t\in[0,T]}e^{-\sigma t}t^{2-\alpha}\| u''(t)\|_{L^\infty(\Omega)}, \end{aligned}$$ where we use the estimate that $$\label{eqn:gr-02}\begin{aligned} \int_{0}^{t} t^{1-\alpha} (t-s)^{\alpha-1}e^{-\sigma (t-s)} s^{\alpha-1} \d s &= t^{\alpha}\int_{0}^{1} e^{-\sigma t s} s^{\alpha-1} (1-s)^{\alpha-1} \d s\\ &= \big(t/\sigma\big)^{\frac\alpha2} \int_{0}^{1} [e^{-\sigma t s}(\sigma ts)^{\frac\alpha2}] s^{\frac\alpha2-1} (1-s)^{\alpha-1} \d s\\ &\le c \big(t/\sigma\big)^{\frac\alpha2}\int_{0}^{1} s^{\frac\alpha2-1} (1-s)^{\alpha-1} \d s \le c \big(T/\sigma\big)^{\frac\alpha2}. \end{aligned}$$ Finally, by choosing a sufficient large $\lambda$ such that $2c \big(T/\sigma\big)^{\frac\alpha2}< 1$, we obtain that $$\label{eqn:gr-03} \max_{s\in[0,T]} e^{-\lambda s}s^{2-\al}\|u''(s)\|_{L^\infty(\Omega)}\leq c,$$ which confirms the assertion with $\ell=2$. Now we turn to the case $\ell=3$ and give a brief proof. The basic idea of this argument is identical to that of $\ell=2$. With the definition of $I_i$ in and the estimate of the solution operator $E(t)$, we have the bound that $$\label{Eq:I1-2} \begin{aligned} \|\partial_{tt}I_1(t)\|_{L^\infty(\Omega)} &\le c \sum_{k=0}^2 t^{k-\al}\Big\|\frac{\d^k}{\d t^k}E(t)[\Delta u_0+f(u_0)]\Big\|_{L^\infty(\Omega)} \le c t^{-1}. \end{aligned}$$ For the second term, we use the splitting $$I_2 = t^{-\alpha}\int_0^t (t-s)^2 E(t-s)f'(u(s))u'(s) \, \d s + t^{-\alpha}\int_0^t (t-s) E(t-s) s f'(u(s))u'(s)\, \d s,$$ and the fact that $$\lim_{t\rightarrow 0} \| t E(t) \|_{ C(\bar \Omega) \rightarrow C(\bar \Omega)} + \| tu'(t) \|_{L^\infty(\Omega)} = 0,$$ and hence derive that $$\begin{aligned} \|\partial_{tt}I_2(t)\|_{L^\infty(\Omega)}&\le c \sum_{k=0}^2 t^{-(2-k)-\al} \int_0^t \sum_{m=0}^k (t-s)^{2+m-k} \big|\big|\frac{\d^{m}}{\d t^{m}}E(t-s)[f'(u(s))u'(s)]\big|\big|_{L^\infty(\Omega)} \,\d s\\ &\quad + c \sum_{k=0}^1 t^{-(2-k)-\al} \int_0^t \big|\big|\frac{\d^{k}}{\d t^{k}}[(t-s)E(t-s)] [s(f'(u(s))u'(s)]\big|\big|_{L^\infty(\Omega)} \,\d s\\ &\quad + c t^{-2-\al} \int_0^t \big|\big|\frac{\d}{\d t}[(t-s)E(t-s)] [\partial_s(s(f'(u(s))u'(s))]\big|\big|_{L^\infty(\Omega)} \,\d s. 
\end{aligned}$$ Then the estimates and with $\ell=1,2$ imply that $$\label{Eq:I2-2} \begin{aligned} \|\partial_{tt}I_2(t)\|_{L^\infty(\Omega)}&\le c \sum_{k=0}^2 t^{-(2-k)-\al} \int_0^t (t-s)^{\alpha-k+1}s^{\alpha-1}\,\d s + c \sum_{k=0}^1 t^{-(2-k)-\al} \int_0^t (t-s)^{\alpha-k} s^\alpha \,\d s\\ &\quad + c t^{-2-\alpha}\int_0^t (t-s)^{\alpha-1} (s^{\alpha-1} +s^{2\alpha-1})\,\d s \le c t^{\alpha-1} . \end{aligned}$$ The same argument also works for the third term $I_3$ in : $$\begin{aligned} \|\partial_{tt}I_3(t)\|_{L^\infty(\Omega)}&\le c \sum_{k=0}^2 t^{-(2-k)-\al} \int_0^t \sum_{m=0}^k s^{2+m-k} \big|\big|E(t-s)[\partial_s^m(f'(u(s))u'(s))]\big|\big|_{L^\infty(\Omega)} \,\d s\\ &\quad + c \sum_{k=0}^1 t^{-(2-k)-\al} \int_0^t \big|\big| (t-s)E(t-s) \partial_s^{m}[s(f'(u(s))u'(s)]\big|\big|_{L^\infty(\Omega)} \,\d s\\ &\quad + c t^{-2-\al} \int_0^t \big|\big|\frac{d}{dt}[(t-s)E(t-s)] [\partial_s(s(f'(u(s))u'(s))]\big|\big|_{L^\infty(\Omega)} \,\d s\\ &\le c t^{\alpha-1} + t^{-\alpha}\int_0^t (t-s)^{\alpha-1}s^2 \| u^{(3)}(s) \|_{L^\infty(\Omega)} \,\d s. \end{aligned}$$ As a result, we conclude that $$\|\partial_{tt}[t^{2-\alpha}u'(t)]\|_{L^\infty(\Omega)} \le c t^{-1} + ct^{-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}s^2 \| u''(s)\|_{L^\infty(\Omega)} \d s.$$ Then we apply the estimate for $\ell=1,2$ and obtain that $$t^{3-\alpha}\| \partial_t^3 u (t)]\|_{L^\infty(\Omega)} \le c + ct^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}s^2 \| u''(s)\|_{L^\infty(\Omega)} \d s.$$ Finally, the arguments in - yield the desired assertion for $\ell=3$. \[rem:CD\] Under the condition that $u_0\in D$, it is not always valid that $(I + F(t)\Delta) u_0 \in C([0,T];D)$. This is because $\Delta (I + F(t)\Delta) u_0$ is compatible with the homogeneous Dirichlet boundary condition for all $t>0$, while the initial condition $\Delta u_0\in C(\bar \Omega)$. As a result, $\Delta (I + F(t)\Delta) u_0$ is not continuous to the initial condition with $L^\infty(\Omega)$ norm. However, if $$u_0\in D = \{u,\Delta u\in C(\bar\Omega)~\text{and}~u=\Delta u=0~\text{on}~\partial\Omega \},$$ then we can conclude that $(I + F(t)\Delta)u_0\in C([0,T];D)$. Regularity of remainder $R(u(t);u_0)$ ------------------------------------- Recall the expansion of the nonlinear term in . The regularity of the remainder part $R(u;u_0)$ plays an important role in the error analysis. This motivates us to derive regularity results of $R(u;u_0)$ in the Bochner-Sobolev spaces. For any $s\ge 0$ and $1\le p < \infty$, we denote by $W^{s,p}(0,T;B)$ the space of functions $v:(0,T)\rightarrow B$, with the norm defined by interpolation, where $B$ denotes a Banach space. Equivalently, the space is equipped with the quotient norm $$\label{quotient-norm} \|v\|_{W^{s,p}(0,T;B)}:= \inf_{\widetilde v}\|\widetilde v\|_{W^{s,p}({\mathbb R};B)} ,$$ where the infimum is taken over all possible extensions $\widetilde v$ that extend $v$ from $(0,T)$ to ${\mathbb R}$. For any $0<s< 1$, the Sobolev–Slobodeckij seminorm $|\cdot|_{W^{s,p}(0,T;B)}$ is defined by $$\label{eqn:SS-seminorm} | v |_{W^{s,p}(0,T;B)}^p := \int_0^T\hskip-5pt\int_0^T \frac{\|v(t)-v(\xi)\|_{B}^p}{|t-\xi|^{1+ps}} \,\d t \, \d\xi ,$$ and the full norm $\|\cdot\|_{W^{k+s,p}(0,T;B)}$, with $k\ge 0$ and $k\in \mathbb{N}$, is defined by $$\|v\|_{W^{k+s,p}(0,T;B)}^p = \sum_{m=0}^k\|\partial_t^m v \|_{L^p(0,T;B)}^p+|\partial_t^k v |_{W^{s,p}(0,T;B)}^p .$$ Then the regularity of $R(u;u_0)$ is shown in the following theorem. \[thm:reg-R\] Suppose that the assumptions in Theorem \[thm:reg-u\] hold. 
Then the remainder part $R(u;u_0)$, which is defined by , has the regularity $$R(u(t);u_0)\in W^{1+2\al-\epsilon,1}(0,T;C(\bar\Omega))\cap C^3((0,T];C(\bar\Omega))$$ for any arbitrary small $\epsilon>0$. By the definition of $R(u(t);u_0)$ and the integral form of the remainder in Taylor’s expansion, we may rewrite $R(u(t);u_0)$ as $$\label{Eq:reRu} R(u(x,t);u_0)=\int_{u_0(x)}^{u(x,t)} (u(x,t)-\xi)f''(\xi) \, \d \xi.$$ It is easy to observe that $$\begin{aligned} \partial_t^3 R(u(x,t);u_0)& = \partial_t^2 \Big(u'(x,t)\int_{u_0(x)}^{u(x,t)} f''(\xi)\, \d \xi\Big)\\ & = u'''(x,t) \int_{u_0(x)}^{u(x,t)} f''(\xi)\, \d\xi +3u'(x,t)u''(x,t)f''(u(x,t))\\ &\quad +(u'(x,t))^3f'''(u(x,t)). \end{aligned}$$ Then using the facts that $f$ is smooth and $u\in C^3((0,T];C(\bar\Omega))$ by Theorem \[thm:reg-u\], we conclude that $ R(u;u_0) \in C^3((0,T];C(\bar\Omega))$. Therefore, it suffices to show that $ R(u(t);u_0)\in W^{1+2\al-\epsilon,1}(0,T;C(\bar\Omega))$. To this end, we shall confirm this claim by investigating the following two cases. **Case 1. $\alpha\in(0,1/2)$.** Obviously, we have $R(u ;u_0)\in C([0,T]\times\bar \Omega)$. Define $$\label{Eq:DtRu} w(x,t)=\partial_t R(u(x,t);u_0)=\int_{u_0(x)}^{u(x,t)} u'(x,t)f''(\xi)\, \d \xi.$$ Then we observe that $$\begin{aligned} \|w(t)\|_{L^\infty(\Omega)} &\leq c\|u(t)-u_0\|_{L^\infty(\Omega)}\|u'(t)\|_{L^\infty(\Omega)} \max_{t\in[0,T]}\|f''(u(t))\|_{L^\infty(\Omega)} \leq c t^{2\al-1}, \end{aligned}$$ where the last inequality follows from the fact that $u\in C^{\alpha}([0,T];C(\bar \Omega))$ and $\|u'(t)\|_{L^\infty(\Omega)}\le ct^{\alpha-1}$, by Theorem \[thm:reg-u\]. Then the similar argument also yields that $$\label{eqn:wt} \begin{aligned} \|w'(t)\|_{L^\infty(\Omega)} \leq& c\Big(\|u''(t)\|_{L^\infty(\Omega)} \| u(t)-u_0 \|_{L^\infty(\Omega)} + \|u'(t)\|_{L^\infty(\Omega)}^2 \Big) \max_{t\in[0,T]} \| f''(u(t)) \|_{L^\infty(\Omega)}\\ \leq& ct^{2\alpha-2}. \end{aligned}$$ Then according to the Sobolev–Slobodeckij seminorm , we have for any $\epsilon\in(0,2\alpha)$ $$\begin{aligned} | w |_{W^{2\alpha-\epsilon,1}(0,T;L^\infty(\Omega))} &= \int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{\|w(t)-w(s)\|_{L^\infty(\Omega)}}{|t-s|^{2\al+1-\epsilon}} \d t\d s= \int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{\|\int_{s}^{t} w'(y)\d y\|_{L^\infty(\Omega)} }{|t-s|^{2\al+1-\epsilon}} \d t\d s\\ &\leq \int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{|\int_{s}^{t} \| w'(y)\|_{L^\infty(\Omega)} \d y|}{|t-s|^{2\al+1-\epsilon}} \d t\d s. \end{aligned}$$ Now by applying the estimate , we arrive at $$\begin{aligned} | w |_{W^{2\alpha-\epsilon,1}(0,T;L^\infty(\Omega))} & \leq c\int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{|\int_{s}^{t} y^{2\al-2}\d y|}{|t-s|^{2\al+1-\epsilon}} \d t\d s =c\int_{0}^{1}\hskip-5pt\int_{0}^{1} \frac{|\xi^{2\al-1}- \zeta^{2\al-1}|}{|\xi-\zeta|^{2\al+1-\epsilon}} \d \xi\d \zeta \\ & =c\bigg(\int_{0}^{1}\hskip-5pt\int_{0}^{\xi} \frac{\zeta^{2\al-1}-\xi^{2\al-1}}{(\xi-\zeta)^{2\al+1-\epsilon}} \d \zeta\d \xi +\int_{0}^{1}\hskip-5pt\int_{\xi}^{1} \frac{\xi^{2\al-1}-\zeta^{2\al-1}}{(\zeta-\xi)^{2\al+1-\epsilon}} \d \zeta\d \xi\bigg)\\ &=2c\int_{0}^{1}\int_{0}^{\xi} \frac{\zeta^{2\al-1}-\xi^{2\al-1}}{(\xi-\zeta)^{2\al+1-\epsilon}} \d \zeta\d \xi =2c\int_{0}^{1}\xi^{-1+\epsilon}\d \xi \int_{0}^{1} \frac{ t^{2\al-1}-1}{(1-t)^{2\al+1-\epsilon}} \d t\\ &\le c_\epsilon \int_{0}^{1} \frac{ t^{2\al-1} -1 }{(1-t)^{2\al+1-\epsilon}} \d t. 
\end{aligned}$$ Then the assertion that $w\in W^{2\al-\epsilon,1}(0,T;L^\infty(\Omega))$, and hence $R(u;u_0)\in W^{1+2\al-\epsilon,1}(0,T;L^\infty(\Omega))$, follows from the observation that $$\begin{aligned} \int_{0}^{1} \frac{ t^{2\al-1} -1 }{(1-t)^{2\al+1-\epsilon}} \d t &\le \Big( \int_{0}^{\frac12}+ \int_{\frac12}^{1} \Big) \frac{ t^{2\al-1} -1 }{(1-t)^{2\al+1-\epsilon}} \d t \\ &\le c + c \lim_{t\rightarrow 1} \frac{t^{2\al-1} -1 }{(1-t)^{2\al-\epsilon}} + c\int_{\frac12}^{1} \frac{ t^{2\al-2} }{(1-t)^{2\al-\epsilon}} \d t \le c. \end{aligned}$$ **Case 2. $\alpha\in (1/2,1)$.** In this case, using the estimate , it is easy to see that $ \|\partial_t w (t)\|_{L^\infty(\Omega)} \in L^1(0,T)$. Then our aim is to show that $$\partial_{t}w \in {W^{2\alpha-1-\epsilon,1}(0,T;L^\infty(\Omega))}.$$ Using the expression $$\partial_{tt} w (x,t)=(u'(x,t))^3 f'''(u(x,t))+2u'(x,t)u''(x,t)f''(u(x,t))+u'''(x,t) \int_{u_0(x)}^{u(x,t)}f''(\xi)\d \xi$$ and Theorem \[thm:reg-u\], we derive that $$\label{eqn:wtt} \begin{aligned} \|w_{tt}\|_{L^\infty(\Omega)}&\leq c \|u'(t)\|_{L^\infty(\Omega)}^3 +c\| u'(t)\|_{L^\infty(\Omega)}\|u''(t)\|_{L^\infty(\Omega)} \\ &\quad +c\|u'''(t)\|_{L^\infty(\Omega)} \|u(t)-u_0\|_{L^\infty(\Omega)}\\ &\leq c(t^{3\al-3}+t^{2\alpha-3}) \le ct^{2\alpha-3}. \end{aligned}$$ Recalling the Sobolev–Slobodeckij seminorm , we have for any $\epsilon\in(0,2\alpha-1)$ $$\begin{aligned} | w_t |_{W^{2\alpha-1-\epsilon,1}(0,T;L^\infty(\Omega))} &= \int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{\|w_t(t)-w_t(s)\|_{L^\infty(\Omega)}}{|t-s|^{2\al-\epsilon}} \d t\d s\\ &\le \int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{|\int_{s}^{t} \| \partial_{yy}w(y)\|_{L^\infty(\Omega)}\d y | }{|t-s|^{2\al-\epsilon}} \d t\d s. \end{aligned}$$ Then we apply the estimate and derive that $$\begin{aligned} | w_t |_{W^{2\alpha-1-\epsilon,1}(0,T;L^\infty(\Omega))} &\leq c\int_{0}^{T}\hskip-5pt\int_{0}^{T} \frac{|\int_{s}^{t} y^{2\al-3}\d y|}{|t-s|^{2\al-\epsilon}} \d t\d s =c\int_{0}^{1}\hskip-5pt\int_{0}^{1} \frac{|\xi^{2\al-2}- \zeta^{2\al-2}|}{|\xi-\zeta|^{2\al -\epsilon}} \d \xi\d \zeta \\ & =c\bigg(\int_{0}^{1}\hskip-5pt\int_{0}^{\xi} \frac{\zeta^{2\al-2}-\xi^{2\al-2}}{(\xi-\zeta)^{2\al -\epsilon}} \d \zeta\d \xi +\int_{0}^{1}\hskip-5pt\int_{\xi}^{1} \frac{\xi^{2\al-2}-\zeta^{2\al-2}}{(\zeta-\xi)^{2\al -\epsilon}} \d \zeta\d \xi\bigg)\\ &=2c\int_{0}^{1}\int_{0}^{\xi} \frac{\zeta^{2\al-2}-\xi^{2\al-2}}{(\xi-\zeta)^{2\al -\epsilon}} \d \zeta\d \xi =2c\int_{0}^{1}\xi^{-1+\epsilon}\d \xi \int_{0}^{1} \frac{ t^{2\al-2}-1}{(1-t)^{2\al -\epsilon}} \d t\\ &\le c, \end{aligned}$$ where the last inequality is a direct consequence of the fact that $$\begin{aligned} \int_{0}^{1} \frac{ t^{2\al-2} -1 }{(1-t)^{2\al -\epsilon}} \d t &\le \Big( \int_{0}^{\frac12}+ \int_{\frac12}^{1} \Big) \frac{ t^{2\al-2} -1 }{(1-t)^{2\al -\epsilon}} \d t \\ &\le c + c \lim_{t\rightarrow 1} \frac{t^{2\al-2} -1 }{(1-t)^{2\al-1-\epsilon}} + c\int_{\frac12}^{1} \frac{ t^{2\al-3} }{(1-t)^{2\al-1-\epsilon}} \d t \le c. \end{aligned}$$ Therefore, we obtain that $\partial_{t}w \in {W^{2\alpha-1-\epsilon,1}(0,T;L^\infty(\Omega))}$, and thus $R(u;u_0)\in {W^{2\alpha+1-\epsilon,1}(0,T;L^\infty(\Omega))}$ for any $\alpha\in(1/2,1)$. In conclusion, Cases 1 and 2 together confirm the desired assertion for $\alpha\in(0,1/2)\cup(1/2,1)$. The critical case $\alpha=1/2$ follows directly from the argument of Case 1, and hence the proof is completed.
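The exponents in Theorem \[thm:reg-R\] can also be read off from an informal scaling argument, which summarizes the computations above. Since $R(u;u_0)$ is the second-order Taylor remainder of $f$ at $u_0$ and $\|u(t)-u_0\|_{L^\infty(\Omega)}\le ct^{\alpha}$ by Theorem \[thm:reg-u\], one formally has $$R(u(t);u_0)=f(u(t))-f(u_0)-f'(u_0)\big(u(t)-u_0\big)=O\big(\|u(t)-u_0\|_{L^\infty(\Omega)}^2\big)=O(t^{2\alpha}),$$ and correspondingly $\partial_t R=O(t^{2\alpha-1})\in L^1(0,T)$ and $\partial_t^2 R=O(t^{2\alpha-2})$, which is precisely the blow-up rate compatible with membership in $W^{1+2\alpha-\epsilon,1}(0,T;L^\infty(\Omega))$. The proof above makes this rigorous through the pointwise bounds on $w=\partial_t R$ and its derivatives.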
Error analysis of modified BDF schemes {#sec:error}
======================================

The aim of this section is to present a complete error analysis for the high-order time stepping scheme . To begin with, we assume that the nonlinear term is globally Lipschitz continuous, i.e., there exists a constant $c_L$ such that $$\label{eqn:GL} | f (s) - f (t)| \le c_L |t-s| \qquad \text{for all}~~t,s\in\mathbb{R}.$$ We shall establish the numerical analysis under assumption , and then extend the argument to the case without that assumption.

Existence and uniqueness of the time stepping solution
------------------------------------------------------

In the analysis presented in the next subsection, we will always assume that the fully implicit scheme admits a unique solution. It is easy to confirm this assumption, provided that is valid. At each time level, the fully implicit scheme requires solving a nonlinear elliptic problem $$\label{eqn:nonlinear} (b_0I - \tau^\alpha \Delta) v = w + \tau^\alpha f(v)$$ with homogeneous Dirichlet boundary condition and some function $w\in C(\bar\Omega)$. Next we show that there exists a unique solution to in $C(\bar \Omega)$. By defining the operator $M: C(\bar \Omega)\rightarrow C(\bar\Omega)$ as $$M v = (b_0I - \tau^\alpha \Delta)^{-1} (w+\tau^\alpha f(v) ),$$ and applying the resolvent estimate , we observe that for any $v_1, v_2\in C(\bar\Omega)$ $$\| M v_1 - M v_2\|_{L^\infty(\Omega)} = \| (b_0\tau^{-\alpha}I - \Delta)^{-1} ( f(v_1) - f(v_2)) \|_{L^\infty(\Omega)} \le c c_L\tau^\alpha \| v_1-v_2 \|_{L^\infty(\Omega)}.$$ Then for $\tau$ small enough, $M$ is a contraction mapping, and hence there exists a unique $v\in C(\bar\Omega)$ such that $M(v)=v$, i.e., the nonlinear elliptic problem has a unique solution. As a result, we conclude that the fully implicit time stepping scheme admits a unique sequence of functions $\{ u_n \}_{n=1}^N$ via mathematical induction.

Error analysis of the BDF scheme for the linear problem
-------------------------------------------------------

The fundamental idea of the error estimation is to apply the representation of the time stepping solution by a contour integral in $\mathbb{C}$, which has been extensively used in existing studies [@LubichSloanThomee:1996; @JinLazarovZhou:L1; @jin2017analysis; @JinLiZhou:correction; @Yan:2018L1]. We shall apply this technique to derive error estimates in the $L^\infty(\Omega)$ norm. Note that the operator $A$ defined in is self-adjoint, but not negative definite. However, the spectrum of $A$ has an upper bound, since $f'(u_0)\in L^\infty(\Omega)$. Now we define $$\label{eqn:la} \lambda= \max\big(1,\|f'(u_0)\|_{L^\infty(\Omega)}\big),$$ and observe that $L=\Delta + (f'(u_0)-\lambda)I $ is self-adjoint and negative definite.
According to the resolvent estimate for $L$, we have the following resolvent estimate: for $v\in C(\bar\Omega)$ $$\label{eqn:resol-2} \begin{aligned} \|(z-A)^{-1}v\|_{ L^\infty(\Omega)}&=\big\|\big((z-\lambda) -\big(\Delta + (f'(u_0)-\lambda)I\big)\big)^{-1}v \big\|_{ L^\infty(\Omega)} \leq c_\phi|z-\lambda|^{-1} \|v \|_{ L^\infty(\Omega)}, \end{aligned}$$ for all $$\label{eqn:sig-la} z\in\Sigma_{\lambda,\phi}:=\{z\in\mathbb{C}\backslash\{\lambda\} :|\arg (z-\lambda)|<\phi \}~~ \text{and}~~ \phi\in(\pi/2,\pi).$$ To analyze the fully implicit BDF scheme, we shall start with the linear problem with a time-independent source term $$\label{eqn:v} \partial_t^\alpha v(t) - A v(t) = g_0 \quad\text{with}~~t\in(0,T],\quad \text{and} ~~ v(0)=u_0.$$ Then the time stepping scheme reads $$\label{eqn:v-disc} \left\{\begin{aligned} &\bdalt (v-u_0)_n-Av_n=g_0+a_n^{(k)}(Au_0+g_0),\quad &1\leq n\leq k-1,\\ &\bdalt (v-u_0)_n-Av_n=g_0,\quad &k\leq n\leq N \end{aligned}\right.$$ with $v_0 = u_0$. The next lemma gives an estimate of the difference between $v(t_n)$ and $v_n$. \[lem:vv\] Let $v(t)$ and $v_n$ be the solutions of and , respectively. We assume that the conditions in Theorem \[thm:reg-u\] hold true. Then there exists $\tau_0>0$, such that for $\tau\le\tau_0$ the following error estimate holds $$\begin{aligned} \|v_n-v(t_n)\|_{L^\infty(\Omega)} & \leq c \tau^k t_n^{\al-k}\|g_0 + Au_0\|_{L^\infty(\Omega)}, \end{aligned}$$ where the constant $c$ depends on $\alpha, k$ and $T$. Let $w(t)=v(t)-u_0$. Then the linear problem can be reformulated as $$\label{Eq:resubw} \dalt w-Aw=Au_0+g_0 \quad\text{with}~~t\in(0,T],\quad \text{and} ~~ w(0)=0.$$ After taking the Laplace transform, we derive that $$\widehat{w}(z)= z^{-1}(z^\alpha-A)^{-1} (Au_0+g_0)$$ for any $z$ in the resolvent set of $A$. By the inverse Laplace transform, the function $w(t)$ can be expressed explicitly by $$\begin{aligned} w(t) =\frac{1}{2\pi \i}\int_{\sigma_0-\i\infty}^{\sigma_0+\i\infty} e^{zt}K(z)(Au_0+g_0)\, \d z \end{aligned}$$ with $\sigma_0$ such that $ (\sigma_0)^\alpha > \lambda$, where $\lambda$ is given in and the kernel $K(z)$ is defined by $$\label{eqn:K} K(z)=z^{-1}(z^\al-A)^{-1}.$$ Now we deform the integral contour and obtain that $$\label{eqn:solrep-w} w(t) =\frac{1}{2\pi \i}\int_{\Gamma_{\theta,\sigma}} e^{zt}K(z)(Au_0+g_0)\, \d z$$ where $\sigma\in(\lambda^{1/\alpha}, \sigma_0]$ and the contour $\Gamma_{\theta,\sigma}$ is defined by $$\label{eqn:Gamma} \Gamma_{\theta,\sigma}=\{z\in\C: z=\sigma + \rho e^{\pm \i\theta},\rho\geq 0\}\qquad \text{with any}~~ \theta\in(\pi/2,\pi),$$ oriented with an increasing imaginary part. Similarly, we may derive the integral representation of $v_n$ in the complex domain. By letting $w_n = v_n - u_0$, we can reformulate the time stepping scheme as $$\label{eqn:w-disc} \left\{\begin{aligned} &\bdalt w_n-Aw_n=(1+a_n^{(k)})(Au_0+g_0),\quad &1\leq n\leq k-1,\\ &\bdalt w_n-Aw_n=Au_0+g_0,\quad &k\leq n\leq N \end{aligned}\right.$$ with $w_0 = 0$. By multiplying by $\xi^n$ and summing over $n$, we have $$\sum_{n=1}^{\infty} \xi^n\bdalt w_n-\sum_{n=1}^{\infty} \xi^n Aw_n=\bigg(\sum_{n=1}^{\infty} \xi^n +\sum_{n=1}^{k-1} \xi^na_n^{(k)}\bigg)(Au_0+g_0).$$ For any given sequence $(f^n)_{n=0}^\infty$, let $\widetilde{f}(\xi):=\sum_{n=0}^{\infty}f^n\xi^n$ denote its generating function.
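For instance, the part of the right-hand side of the scheme above that is independent of $n$, namely $Au_0+g_0$, has generating function $\sum_{n\ge 1}\xi^n\,(Au_0+g_0)=\frac{\xi}{1-\xi}\,(Au_0+g_0)$; this is the origin of the factor $\xi/(1-\xi)$ appearing below, while the starting corrections contribute the additional finite sum $\sum_{n=1}^{k-1}a_n^{(k)}\xi^n\,(Au_0+g_0)$.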
Since $w_0=0$, according to properties of discrete convolution, we have the identity $$\sum_{n=1}^{\infty} \xi^n\bdalt w_n=\delta_\tau(\xi)^\al\widetilde{W}(\xi),$$ where $\delta_\tau(\xi)$ denotes the generating function of the standard BDF$k$ method . Therefore $$(\delta_\tau(\xi)^\al -A)\widetilde{w}=\bigg(\frac{\xi} {1-\xi}+\sum_{n=1}^{k-1}\xi^na_n^{(k)}\bigg)(Au_0+g_0).$$ By the $A(\theta_k)$-stability of the BDF$k$ method [@HairerWanner:1996 pp. 251], for any $\xi$ such that $|\xi|=\rho\in(0,\frac12]$, there exists $\tau_0$ small enough such that $\delta_{\tau_0}(\frac12)^\alpha>\lambda+c_0$, and we can find an angle $\theta_0\in(\pi/2,\pi)$ such that $\delta_\tau(\xi)^\alpha \in \Sigma_{\lambda+c_0,\theta_0}$ for all $\tau \le \tau_0$ and hence the operator $(\delta_\tau(\xi)^\al -A)$ is invertible. Then $$\widetilde{w}(\xi)=K(\delta_\tau(\xi))\tau^{-1}\mu(\xi)(Au_0+g_0),$$ where $\mu(\xi)=\delta(\xi)(\frac{\xi}{1-\xi}+\sum_{n=1}^{k-1}\xi^na_n^{(k)}).$ Let $\rho\in(0,\frac12]$ and $\tau\le \tau_0$, it is easy to see that $\widetilde{w}(\xi)$ is analytic with respect to $\xi$ in the circle $|\xi|=\rho$ on the complex plane, then with the change of variables $\xi=e^{-z\tau}$ and Cauchy’s integral formula, we have the following expression $$\label{eqn:solrep-wn} \begin{aligned} w_n&=\frac{1}{2\pi\i}\int_{|\xi|=\rho} \xi^{-n-1}\widetilde{w}(\xi) \d\xi\\ &=\frac{1}{2\pi\i}\int_{\Gamma^\tau} e^{zt_n}K(\delta_\tau(e^{-z\tau}))\mu(e^{-z\tau})(Au_0+g_0)\d z\\ &=\frac{1}{2\pi\i}\int_{\Gamma^\tau_{\theta,\sigma_0}} e^{zt_n}K(\delta_\tau(e^{-z\tau}))\mu(e^{-z\tau})(Au_0+g_0)\d z, \end{aligned}$$ where $\Gamma^\tau:=\{z=-\ln(\rho)/\tau+\i y: y\in\mathbb{R},\,|y|\leq \pi/\tau\}$ and $\Gamma^\tau_{\theta,\sigma}=\{z\in\Gamma_{\theta,\sigma}:|\operatorname{Im}(z)|\leq \pi/\tau\}$ with $\sigma= -\ln(\frac12)/\tau_0$. The deformation of contour from $\Gamma^\tau$ to $\Gamma^\tau_{\theta,\sigma_0}$ in the last equation is achieved due to the analyticity and periodicity of the function $e^{zt_n}K(\delta_\tau(e^{-z\tau}))\mu(e^{-z\tau})$. Then there exists $\theta\in(\pi/2,\pi)$ close to $\pi/2$ such that $\delta_\tau(e^{-z\tau})^\alpha \in \Sigma_{\lambda+c_0,\theta_0+\epsilon}$ for some small $\epsilon>0$. Now we recall the properties of the generating function $\delta_\tau(\xi)$ and correction term $\mu(\xi)$, which have already been established in [@JinLiZhou:correction eq. (2.13) and Theorem B.1.]. In particular, in case that $z\in \Gamma^\tau_{\theta,\sigma}$ there holds that $$\label{eqn:app} \begin{aligned} & c_1|z|\leq |\delta_\tau(e^{-z\tau})|\leq c_2|z|, &&|\delta_\tau(e^{-z\tau})-z|\le c\tau^k|z|^{k+1},\\ & |\delta_\tau(e^{-z\tau})^\alpha-z^\alpha|\leq c\tau^k|z|^{k+\alpha}, && |\mu(e^{-z\tau})-1| \leq c \tau^k |z|^k.\end{aligned}$$ To derive an estimate for $w_n - w(t_n)$, we compare those two solution representations and . To this end, we use the splitting $$\label{eqn:split} \begin{aligned} u_n-u(t_n)&=w_n -w(t_n)\\ &=\frac{1}{2\pi\i}\int_{\Gamma^\tau_{\theta,\sigma}} e^{zt_n}(K(\delta_\tau(e^{-z\tau}))\mu(e^{-z\tau})-K(z))(Au_0+g_0)\d z\\ &\quad-\frac{1}{2\pi \i}\int_{\Gamma_{\theta,\sigma}\backslash \Gamma^\tau_{\theta,\sigma}} e^{zt_n}K(z)(Au_0+g_0)\d z:=I-II. \end{aligned}$$ Next, we shall bound these two terms separately. 
By the resolvent estimate and approximation properties , we have $$\label{eqn:KK} \begin{aligned} &\|[K(\delta_\tau(e^{-z\tau}))-K(z)]\psi\|_{L^\infty(\Omega)} \\ \leq& |\delta_\tau(e^{-z\tau})^{-1}-z^{-1}|\|(\delta_\tau(e^{-z\tau})^\al-A)^{-1}\psi\|_{L^\infty(\Omega)}\\ &+ |z|^{-1}\|[(\delta_\tau(e^{-z\tau})^\al-A)^{-1}-(z^\al-A)^{-1}]\psi\|_{L^\infty(\Omega)}\\ =&c\tau^k|z|^{k-1}\|(\delta_\tau(e^{-z\tau})^\al-A)^{-1}\psi \|_{L^\infty(\Omega)}\\ &+ |z|^{-1} |z^\al-\delta_\tau(e^{-z\tau})^\al| \|(\delta_\tau(e^{-z\tau})^\al-A)^{-1} (z^\al-A)^{-1}\psi\|_{L^\infty(\Omega)}\\ &\leq c\tau^k|z|^{k-1}|\delta_\tau(e^{-z\tau})^\al-\lambda|^{-1}(1+|z|^{ \al }|z^\al-\lambda|^{-1}) \| \psi \|_{L^\infty(\Omega)} \end{aligned}$$ for $\psi \in C(\bar\Omega)$. For any $z=\sigma+\rho e^{i\theta}$ with $\rho\le1$, we have the uniform bound that $$\label{eqn:smallrho} |\delta_\tau(e^{-z\tau})^\al-\lambda|^{-1} + |z^\al-\lambda|^{-1}\le c$$ since $\delta_\tau(e^{-z\tau})^\al \in \Sigma_{\lambda+c_0,\theta_0+\epsilon}$. Besides, for $z=\sigma+\rho e^{i\theta}$ with $\rho>1$, it holds that $$\label{eqn:largerho-1} |z^\al-\lambda|^{-1} \le c|z^\alpha|^{-1} \le c\rho^{-\alpha},$$ and similarly, using the fact that $\delta_\tau(e^{-z\tau})^\al \in \Sigma_{\lambda+c_0,\theta_0+\epsilon}$ and the approximation properties of generating functions in , we have for $\rho>1$ $$\label{eqn:largerho-2} |\delta_\tau(e^{-z\tau})^\al-\lambda|^{-1} \le c| \delta_\tau(e^{-z\tau})^\al|^{-1} \le c |z|^{-\alpha} \le c\rho^{-\alpha}.$$ The same argument also gives the same bound for $z=\sigma+\rho e^{-i\theta}$. Now for the first term in , we have $$\begin{aligned} \|I\|_{L^\infty(\Omega)}&=\|\frac{1}{2\pi\i}\int_{\Gamma^\tau_{\theta,\sigma}} e^{zt_n}(K(\delta_\tau(e^{-z\tau}))\mu(e^{-z\tau})-K(z))(Au_0+g_0)\d z \|_{L^\infty(\Omega)}\\ &\leq \|\frac{1}{2\pi\i}\int_{\Gamma^\tau_{\theta,\sigma}} e^{zt_n}K(\delta_\tau(e^{-z\tau}))(\mu(e^{-z\tau})-1)(Au_0+g_0)\d z \|_{L^\infty(\Omega)}\\ &\quad + \|\frac{1}{2\pi\i}\int_{\Gamma^\tau_{\theta,\sigma}} e^{zt_n}(K(\delta_\tau(e^{-z\tau}))-K(z))(Au_0+g_0)\d z \|_{L^\infty(\Omega)}=: I_1 + I_2.\end{aligned}$$ The term $I_1$ can be bounded using estimates , and $$\begin{aligned} I_1 &\le c\tau^k\|Au_0+g_0\|_{L^\infty(\Omega)} \int_{\Gamma^\tau_{\theta,\sigma}} e^{\operatorname{Re}(z)t_n}|z|^{k-1}|z^\al-\lambda|^{-1}|\d z|\\ &\le c e^{\sigma t_n}\tau^k\|Au_0+g_0\|_{L^\infty(\Omega)} \Big( \int_0^1 e^{-c \rho t_n} \d \rho + \int_1^\infty e^{-c \rho t_n}\rho^{k-1-\alpha} \d \rho \Big)\\ &\le c_T \tau^k t_n^{\alpha-k}\|Au_0+g_0\|_{L^\infty(\Omega)}. \end{aligned}$$ Similarly, we apply - to derive a proper bound for $I_2$ $$\begin{aligned} I_2 &\le c\tau^k\|Au_0+g_0\|_{L^\infty(\Omega)} \int_{\Gamma^\tau_{\theta,\sigma}} e^{\operatorname{Re}(z)t_n}|z|^{k-1} |\delta_\tau(e^{-z\tau})^\al-\lambda|^{-1}(1+|z|^{ \al }|z^\al-\lambda|^{-1})|\d z|\\ &\le c e^{\sigma t_n} \tau^k\|Au_0+g_0\|_{L^\infty(\Omega)} \Big( \int_0^1 e^{-c \rho t_n} \d \rho + \int_1^\infty e^{-c \rho t_n}\rho^{k-1 -\alpha} \d \rho \Big)\\ &\le c_T \tau^k t_n^{\alpha-k}\|Au_0+g_0\|_{L^\infty(\Omega)}. 
\end{aligned}$$ Finally, we bound the second term in by using the resolvent estimate : $$\begin{aligned} \|II\|_{L^\infty(\Omega)} &\leq c e^{\sigma t_n}\|Au_0+g_0\|_{L^\infty(\Omega)}\int_{\pi/(\tau\sin\theta)}^{\infty} e^{-c\rho t_n }\rho^{-1}|(\sigma + \rho e^{\i\theta})^\al-\lambda|^{-1}\d \rho\\ &\leq c_T\tau^k\|Au_0+g_0\|_{L^\infty(\Omega)}\int_{\pi/(\tau\sin\theta)}^{\infty} e^{-c\rho t_n }\rho^{k-1-\alpha} \d \rho\quad(\text{since } 1\leq \tau^k|z|^k)\\ &\leq c_T\tau^kt_n^{\al-k}\|Au_0+g_0\|_{L^\infty(\Omega)}. \end{aligned}$$ This completes the proof of the lemma. \[eqm:const\] The generic constant $c$ in Lemma \[lem:vv\] depends on the terminal time $T$ with $c(T)\sim O(e^{\sigma T})$ for some $\sigma>0$. Therefore, the error estimate in Lemma \[lem:vv\] is not uniform in $T$ and hence it is not suitable for long-time estimate. This is because the operator $A=\Delta + f'(u_0) I$ might not be negative definite, and hence the solution might blow up exponentially as $T\rightarrow \infty$. In case that $A$ is negative definite, we can obtain an error estimate which is uniform in large terminal time (e.g., [@LiWangZhou:2020; @JinLiZhou:correction]). Now we turn to the subdiffusion problem driven by a general source term: $$\label{eqn:w-conti} \partial_t^\alpha w(t) - A w(t) = g(t) \quad\text{with}~~t\in(0,T],\quad \text{and} ~~ w(0)=0,$$ whose time stepping scheme reads $$\label{eqn:w-discere} \begin{aligned} \bdalt w_n-Aw_n=g_n:=g(t_n),\quad &1\leq n\leq k-1, \quad \text{with}\quad w_0=0. \end{aligned}$$ The time stepping solution $w_n$ can be represented by a discrete convolution $$\label{eqn:disc-con} w_n = \tau \sum_{j=1}^n E_\tau^{n-j}g_j, ~~\text{where}~~ E_{\tau}^n = \frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{\theta,\sigma}^\tau } e^{zt_n} ({ \delta_\tau(e^{-z\tau})^\alpha}+A )^{-1}\,\d z.$$ The angle $\theta$ and parameter $\sigma$ are chosen as those in the proof of Lemma \[lem:vv\]. Then by , and , we derive that $$\label{eqn:est-E} \begin{aligned} \| E_{\tau}^n \psi\|_{L^\infty(\Omega)} &= \bigg\|\frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{\theta,\sigma}^\tau } e^{zt_n} ({ \delta_\tau(e^{-z\tau})^\alpha}+A)^{-1}\psi\,\d z \bigg\|_{L^\infty(\Omega)} \\ &\le ce^{\sigma T} \Big(\int_{1}^{\frac{\pi}{\tau\sin\theta}} e^{-c\rho t_n } \rho^{-\alpha} \d \rho + \int_{0}^1 e^{-c\rho t_n} \d \rho\Big) \le c_T (t_n+\tau)^{\alpha-1} . \end{aligned}$$ Therefore, it holds the stability that $$\label{eqn:stab} \|w_n\|_{L^\infty(\Omega)} \le c_T\Big(\tau \sum_{j=1}^n t_{n-j+1}^{\alpha-1} \| g_j \|_{L^\infty(\Omega)}\Big).$$ Here we assume that the source term $g$ satisfies certain compatibility condition, e.g., $$g^{(j)}(0)=0, \qquad j=0,1,2,\ldots,k-1.$$ For such a source term $g$, by using the resolvent estimates and the technique in the proof of Lemma \[lem:vv\], the estimate of $w_n-w(t_n)$ can be done similarly (hence omitted) as that given in [@jin2017analysis Lemma 3.7], i.e., for all $\ell=1,2,\ldots,k$ $$\begin{aligned} \| w(t_n) - w_n \|_{L^\infty(\Omega)} &\le c_T \tau^\ell \int_0^{t_n}(t_n-s)^{\alpha-1} \| g^{(\ell)}(s) \|_{L^\infty(\Omega)}\,\d s \\ &\le c_T \tau^\ell \Big(t_n^{\alpha-1} \| g \|_{W^{\ell,1}((0,t_n/2);L^\infty(\Omega))} + t_n^\alpha \| g \|_{C^{\ell}([t_n/2,t_n];L^\infty(\Omega))} \Big). \end{aligned}$$ Then by the interpolation, we have the following estimate. \[lem:source\] Suppose that $g\in W^{\ell+s,1}((0,T);C(\bar\Omega))\cap C^{\ell+1}((0,T);C(\bar\Omega))$ and $g^{(j)}=0$ with $\ell\in \mathbb{N}^+$, $s\in(0,1)$ and $j=0,1,\ldots,\ell$. 
Let $w(t)$ and $w_n$ be the solutions of and , respectively. Then the following error estimate holds $$\| w(t_n) - w_n \|_{L^\infty(\Omega)} \le c t_n^{\alpha-1} \tau^{\min(k,\ell+s)},$$ where the constant $c$ depends only on $\alpha,~g$ and $T$. Error analysis of the BDF scheme for nonlinear problem ------------------------------------------------------ Now we turn to the error estimate for the fully implicit scheme . To this end, we begin with the following lemma, which provides a discrete Hölder bound of the time stepping solution to . \[lem:sol-bound\] Assume that the same conditions in Theorem \[thm:reg-u\] holds valid and further $f$ satisfies . Let $\{u_n\}_{n=1}^N$ be the solution to the time stepping scheme . Then we have $$\max_{1 \le n\le N} \| u_n \|_{L^\infty(\Omega)} + \max_{1 \le n\le N} t_n^{-\alpha} \| u_n - u_0 \|_{L^\infty(\Omega)} \le c , \qquad \text{for}~~n=1,2,\ldots,N,$$ where the constant $c$ depends on $T, u_0, f$ but is independent of $\tau$ and $N$. By the preceding argument, we have the following representation of $u_n$: $$\label{eqn:disc-con2} u_n = u_0 + F_\tau^{n} (Au_0 +g_0) + \tau \sum_{j=1}^n E_\tau^{n-j} R(u_j; u_0)$$ as well as the bound that (by Theorem \[lem:vv\] and the estimate ) $$\| F_\tau^{n} \|_{C(\bar\Omega)\rightarrow C(\bar\Omega)} \le c_T t_n^\alpha\quad\text{and}\quad \| E_\tau^{n}\|_{C(\bar\Omega)\rightarrow C(\bar\Omega)} \le c_T t_n^{\alpha-1}.$$ Therefore, by the Lipchitz continuity of the modified potential term $\bar f(s)$, we have $$\label{eqn:disc-con3} \begin{aligned} \| u_n - u_0 \|_{L^\infty(\Omega)} &\le c_T t_n^\alpha + \tau \sum_{j=1}^n t_{n-j+1}^{\alpha-1} \| R(u_j; u_0)\|_{L^\infty(\Omega)} \\ &\le c_T t_n^\alpha + \tau \sum_{j=1}^n t_{n-j+1}^{\alpha-1} \Big(\| f(u_j) - f(u_0)\|_{L^\infty(\Omega)} + \| f'(u_0) (u_j-u_0)\|_{L^\infty(\Omega)}\Big) \\ &\le c_T t_n^\alpha + \tau \sum_{j=1}^n t_{n-j+1}^{\alpha-1} \| u_j-u_0 \|_{L^\infty(\Omega)}. \end{aligned}$$ Then by the discrete Grönwall’s inequality [@Elliott:1992 Lemma 7.1], we obtain that $$\| u_n - u_0 \|_{L^\infty(\Omega)} \le c t_n^{\alpha},$$ where the constant $c$ depends on $T, u_0, f$, but it is independent of $\tau$ and $N$. Finally, the uniform bound of $\| u_n \|_{L^\infty(\Omega)}$ follows from the triangle inequality. Now we are ready to state our main theorem in the section. \[thm:error\] Assume that the same conditions in Theorem \[thm:reg-u\] holds valid and further $f$ satisfies . Let $u(t)$ be the solution of the semilinear subdiffusion problem and $\{u_n\}_{n=1}^N$ be the solution of fully implicit scheme . Then the following error estimate holds $$\| u_n - u(t_n) \|_{L^\infty(\Omega)} \le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)},$$ for any $t_n > 0$ and arbitrarily small $\ep>0$. Here the constant $c$ depends on $T, u_0, f, \ep$, but it is independent of $\tau$ and $N$. To begin with, we split the solution $u(t)$ into two components $$u(t) = v(t) + w(t),$$ where $v$ is the solution of , and $w$ satisfies with $g=R(u;u_0)$. Similarly, the time stepping solution can also be separated by $$u_n = v_n + w_n,$$ where $v_n$ is the solution of , and $w_n$ satisfies with $g_n= R(u_n;u_0)$, i.e., by $$w_n = \tau\sum_{j=1}^n E_\tau^{n-j} R(u_j;u_0).$$ We note that the difference between $v(t_n)$ and $v_n$ has been estimated in Lemma \[lem:vv\]. 
In order to study $w_n - w(t_n)$, we use an intermediate solution $\bar w_n$ which satisfies with [$g_n=R(u(t_n);u_0)$]{} and can be represented by $$\bar w_n = \tau\sum_{j=1}^n E_\tau^{n-j} R(u(t_j);u_0).$$Then the regularity of $R(u;u_0)$ proved in Theorem \[thm:reg-R\] and the estimate in Lemma \[lem:source\] yield $$\| \bar w_n - w(t_n) \|_{L^\infty(\Omega)} \le c \tau^{1+2\alpha-\epsilon} t_n^{\ep-\alpha-1}.$$ To sum up, we derive a bound of $e_n =u_n - u(t_n)$: $$\begin{aligned} \|e_n\|_{L^\infty(\Omega)} &= \| v_n - v(t_n) \|_{L^\infty(\Omega)} + \|\bar w_n - w(t_n) \|_{L^\infty(\Omega)} + \| w_n - \bar w_n \|_{L^\infty(\Omega)} \\ &\le c \tau^k t_n^{\alpha-k} + c \tau^{1+2\alpha-\epsilon} t_n^{\alpha-1} + \tau\sum_{j=1}^n \| E_\tau^{n-j} [R(u_j;u_0)- R(u(t_j);u_0)]\|_{L^\infty(\Omega)}\\ &\le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)} + c\tau\sum_{j=1}^n t_{n-j+1}^{\alpha-1} \| R(u_j;u_0)- R(u(t_j);u_0)\|_{L^\infty(\Omega)}.\end{aligned}$$ Recalling Lemma \[lem:sol-bound\] and the fact that $u\in C^\alpha([0,T];C(\bar\Omega))$, we obtain that $$\begin{aligned} &\quad \| R(u_j;u_0)- R(u(t_j);u_0)\|_{L^\infty(\Omega)} \\ &\le \Big\| \int_{u_0}^{u(t_j)} (u(t_j)-u_j) f''(s)\,ds \Big\|_{L^\infty(\Omega)} + \Big\| \int_{u (t_j)}^{u_j} (u_j - s) f''(s) \,ds \Big\|_{L^\infty(\Omega)}\\ &\le c \Big(\|u_j - u_0\|_{L^\infty(\Omega)} + \| u(t_j)- u_0\|_{L^\infty(\Omega)} \Big) \| e_n \|_{L^\infty(\Omega)} \\ &\le c t_n^\alpha \| e_n \|_{L^\infty(\Omega)}, \end{aligned}$$ and hence we arrive at the estimate $$\begin{aligned} \|e_n\|_{L^\infty(\Omega)} &\le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)} + c\tau\sum_{j=1}^n t_{n-j+1}^{\alpha-1} t_j^\alpha \| e_j \|_{L^\infty(\Omega)}.\end{aligned}$$ After multiplying $t_n^\alpha$ on both sides, we have $$\begin{aligned} t_n^\alpha \|e_n\|_{L^\infty(\Omega)} &\le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{2\alpha -\min (k, 1+2\alpha-\epsilon)} + ct_n^\alpha \tau\sum_{j=1}^n t_{n-j+1}^{\alpha-1} t_j^\alpha \| e_j \|_{L^\infty(\Omega)}.\end{aligned}$$ Noting that $2\alpha -\min (k, 1+2\alpha-\epsilon)>-1$, we apply the Grönwall’s inequality [@Elliott:1992 Lemma 7.1] for $t_n^\alpha \|e_n\|_{L^\infty(\Omega)}$ and derive that $$\begin{aligned} t_n^\alpha \|e_n\|_{L^\infty(\Omega)} \le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{2\alpha -\min (k, 1+2\alpha-\epsilon)}.\end{aligned}$$ This completes the proof. \[rem:bfisf-2\] The result in Theorem \[thm:error\] implies a uniform-in-time error $$\max_{1\le n\le N} \| u_n - u(t_n) \|_{L^\infty(\Omega)} \le c \tau^{\alpha-\epsilon},$$ for some small $\ep>0$. This result is consistent with the error estimate in [@JinLiZhou:nonlinear]. \[rem:bfisf-3\] The error estimate in Theorem \[thm:error\] indicates that the best convergence rate of the corrected BDF$k$ scheme is almost of order $O(\tau^{\min(k,1+2\alpha)})$, due to the low regularity of the remainder $R(u;u_0)$ (see Theorem \[thm:reg-R\] and Lemma \[lem:source\]). The reason is that $u$ is nonsmooth in the time direction, even though the initial condition is smooth and compatible with the boundary condition. This phenomena contrasts sharply with its normal parabolic counterpart, i.e., $\alpha=1$. For instance, it has been proved in [@CrouzeixThomee] that the time stepping schemes of the semilinear parabolic equation are able to achieve a better convergence rate in case of regular initial data. 
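The predicted order $O(\tau^{\min(k,1+2\alpha)})$ can already be probed on a scalar caricature of the semilinear subdiffusion problem, in which the Laplacian is replaced by multiplication with $-\mu$ for some $\mu>0$. The following self-contained Python sketch is a rough check of this kind; the nonlinearity, the parameters, the tolerances and all names are illustrative choices, and the reference solution is simply the same scheme on a much finer time grid.

```python
# Rough order check on a scalar caricature of the problem:
#   d_t^alpha u + mu*u = f(u),  u(0) = u0,   (Laplacian replaced by -mu)
# discretized with the corrected BDF-k convolution quadrature scheme above.
import numpy as np
from math import comb

def cq_weights(alpha, k, N):
    g = np.zeros(N + 1)
    for j in range(1, k + 1):
        for m in range(j + 1):
            g[m] += (-1) ** m * comb(j, m) / j
    w = np.zeros(N + 1)
    w[0] = g[0] ** alpha
    for n in range(1, N + 1):
        w[n] = sum((alpha * (n - m) - m) * g[n - m] * w[m] for m in range(n)) / (n * g[0])
    return w

# Starting-correction coefficients a_n^{(k)} from Table above (k = 2, 3 shown).
A_COEF = {2: [0.5], 3: [11 / 12, -5 / 12]}

def solve(alpha, k, N, T=1.0, mu=1.0, u0=0.5,
          f=lambda u: u - u ** 3, df=lambda u: 1 - 3 * u ** 2):
    tau = T / N
    w = cq_weights(alpha, k, N) / tau ** alpha
    a = A_COEF[k]
    v = np.zeros(N + 1)                          # v_n = u_n - u0, v_0 = 0
    for n in range(1, N + 1):
        hist = np.dot(w[1:n + 1], v[n - 1::-1])  # sum_{j=1}^n w_j v_{n-j}
        corr = a[n - 1] * (-mu * u0 + f(u0)) if n <= k - 1 else 0.0
        x = v[n - 1]                             # Newton iteration for the scalar step
        for _ in range(50):
            g_val = (w[0] + mu) * x + hist + mu * u0 - f(x + u0) - corr
            x -= g_val / (w[0] + mu - df(x + u0))
        v[n] = x
    return v[N] + u0

alpha, k = 0.5, 3
ref = solve(alpha, k, 4096)                      # fine reference solution
errs = [abs(solve(alpha, k, N) - ref) for N in (32, 64, 128, 256)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(errs)
print(rates)                                     # expected to approach min(k, 1 + 2*alpha) = 2
```

For $\alpha=0.5$ and $k=3$ the printed rates should approach $\min(k,1+2\alpha)=2$; replacing the starting correction by zero (setting `corr = 0.0`) should degrade the observed order towards one, in line with the uncorrected results reported in Table \[Tab:casea-2\].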
Numerical analysis without globally Lipschitz condition {#ssec:local-lip} ------------------------------------------------------- The preceding analysis could be easily extended to the nonlinear subdiffusion problem without the globally Lipschitz condition . For completeness, we briefly sketch the argument in this section. Under the assumptions in Theorem \[thm:reg-u\] and letting $$b = \| u \|_{L^\infty((0,T)\times\Omega)} + 1,$$ we are able to define a smooth function $\bar{f}$ such that $$\label{eqn:GL-2-0} \bar f(s) = f(s)\qquad \text{for all}~~-b\le s \le b,$$ and it is globally Lipschitz continuous $$\label{eqn:GL-2} |\bar f (s) - \bar f (t)| \le c_L |t-s| \qquad \text{for all}~~t,s\in\mathbb{R}.$$ Then we consider the BDF scheme with potential term $\bar f$ instead of $f$ $$\label{eqn:BDF-CQ-m-2-r} \left\{\begin{aligned} &\bdalt (u-u_0)_n-\Delta u_n= a_n^{(k)}(\Delta u_0 + f(u_0)) + \bar f(u_n),\quad &&1\leq n\leq k-1,\\ &\bdalt (u-u_0)_n-\Delta u_n= \bar f(u_n),\quad &&k\leq n\leq N. \end{aligned}\right.$$ Then under the condition we know that admits a unique solution. Meanwhile, Theorem \[thm:error\] indicates a uniform-in-time error estimate $$\max_{1\le n\le N} \| u_n - u(t_n) \|_{L^\infty(\Omega)} \le c \tau^{\alpha-\epsilon},$$ for some small $\ep>0$, where the constant $c$ depends on $T, u_0, \bar f, \ep$. As a result, for $\tau < \tau_0$ such that $c \tau_0^{\alpha-\epsilon}=1$, we have $$\max_{1\le n\le N}\| u_n \|_{L^\infty(\Omega)} \le \max_{1\le n\le N} \| u(t_n) \|_{L^\infty(\Omega)} + c\tau^{\alpha-\epsilon} \le b+1.$$ Therefore $\bar f(u_n) = f(u_n)$ for all $1\le n\le N$, and the modified time stepping scheme is identical to the original one (or equivalently ). Then we have the following corollary. \[cor:error\] Assume that the same conditions in Theorem \[thm:reg-u\] hold valid. Let $u(t)$ be the solution of the semilinear subdiffusion problem and $\{u_n\}_{n=1}^N$ be the solution of fully implicit scheme . Then the following error estimate holds $$\| u_n - u(t_n) \|_{L^\infty(\Omega)} \le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)},$$ for any $t_n > 0$ and arbitrarily small $\ep>0$. Here the constant $c$ depends on $T, u_0, f, \ep$, but it is independent of $\tau$ and $N$. \[rem:comp-CLP\] In [@CuestaLubichPalencia:2006], Cuesta et. al studied a second-order BDF method for solving a related (but different) subdiffusion model $$\label{eqn:fde2} u - \partial_t^{-\alpha} \Delta u = u_0 + \partial_t^{-1} f(u)\quad \text{with}\quad u(0)=u_0,$$ under the assumption that $f$ is sufficiently smooth and the solution $u$ can be expanded as $$\label{eqn:exp} u(t) = \sum_{m,l\ge0;m+l\alpha<2} c_{m,l} t^{m+l\alpha} + v(t) \quad \text{with}\quad c_{ml}\in D(\Delta)~~ \text{and}~~ v\in C^2([0,T];D(\Delta)).$$ This assumption requires stronger compatibility conditions of $u_0$ and $f(u_0)$. As a simple example, we consider the homogeneous problem, i.e. $f\equiv0$. In this case, the solution of the subdiffusion problem can be expanded by Mittag-Leffler function as $$u(t) = E_{\alpha,1}(\Delta t^\alpha) u_0 = \sum_{k=0}^\infty \frac{1}{\Gamma(\alpha k+1)} \Big((\Delta)^k u_0\Big) t^{\alpha k}.$$ Then assumption requires that $u_0\in D((-\Delta)^{1+2/\alpha})$ which is stronger than what we assumed in this work. Fully discrete scheme and error analysis {#sec:fully} ======================================== In this section, we will briefly discuss the fully discrete scheme for solving the nonlinear subdiffusion equation . 
We shall start with a spatially semidiscrete scheme for problem based on the standard Galerkin finite element method (see e.g., [@Jin:2013; @JinLazarovZhou:overview] for linear subdiffusion problems). For $h\in(0,h_0]$, $h_0>0$, we denote by $\mathcal{T}_h = \{K_j\}$ a triangulation of $\Omega_h=\text{Int}(\cup \overline K_j)$ into mutually disjoint open face-to-face simplices $K_j$. Assume that all vertices of $\mathcal{T}_h$ on $\partial\Omega_h$ lie on $\partial\Omega$. We also assume that $\{\mathcal{T}_h\}$ is globally quasi-uniform, i.e., $|K_j|\ge c h^d$ with a given $c>0$. Let $X_h$ be the finite dimensional space of continuous piecewise linear functions associated with $\mathcal{T}_h$ that vanish outside $\Omega_h$. Then we define the $L^2(\Omega)$ projection $P_h:L^2(\Omega)\to X_h$ and the Ritz projection $R_h:H_0^1(\Omega)\rightarrow X_h$ respectively by $$\begin{split} (P_h \varphi,v_h) &=(\varphi,v_h) , \quad \forall\, v_h\in X_h,\\ (\nabla R_h v,\nabla v_h) &= (\nabla v,\nabla v_h),\quad \forall\, v_h\in X_h. \end{split}$$ The semidiscrete scheme reads: find $u_h(t) \in X_h$ such that $$\label{eqn:semi-FEM} (\partial_t^\alpha u_h (t), v_h) + (\nabla u_h(t), \nabla v_h) = (f(u_h(t)),v_h)\quad\text{for all}~v_h\in X_h,$$ with $u_h(0) = R_h u_0$. Let $\Delta_h:X_h\rightarrow X_h$ denote the Galerkin finite element approximation of the Dirichlet Laplacian $\Delta$, defined by $$(\Delta_hw_h,v_h):=-(\nabla w_h,\nabla v_h),\quad \forall\, w_h,v_h\in X_h .$$ Then the spatially semidiscrete scheme can be written as $$\label{eqn:semi-FEM-r} \partial_t^\alpha u_h (t) - \Delta_h u_h(t) = P_h f(u_h), \quad \text{with}~ u_h(0)=R_hu_0.$$ With the Laplace transform and convolution rule, $u_h(t)$ can be explicitly expressed by $$\label{eqn:sol-rep-semi} u_h(t)=(I + F_h(t)\Delta_h) u_h(0)+\int_{0}^{t} E_h(t-s)P_hf(u_h(s))\d s,$$ where the operators $F_h(t)$ and $E_h(t)$ are defined by $$\label{eqn:op-semi} F_h(t)=\frac{1}{2\pi\i}\int_{\Gamma_{\theta,\delta}} e^{zt}z^{-1}(z^\al-\Delta_h )^{-1}\d z\quad\text{and}\quad E_h(t)=\frac{1}{2\pi\i}\int_{\Gamma_{\theta,\delta}} e^{zt}(z^\al-\Delta_h )^{-1}\d z,$$ respectively. Recall that the discrete Laplacian satisfies the resolvent estimate in the $L^\infty\II$ sense (cf. [@Bakaev:2003 Theorem 1.1]), i.e., for any angle $\phi\in(\pi/2,\pi)$, $$\begin{aligned} \label{eqn:reg-disc-inf} \| (z-\Delta_h)^{-1} w_h \|_{L^\infty \II } \le c |z|^{-1} \| w_h \|_{L^\infty \II} \quad \forall ~ z \in\Sigma_\phi.\end{aligned}$$ This immediately implies the following smoothing properties: $$\begin{aligned} \label{eqn:reg-disc-op-01} \|F_h(t)\Delta_h v_h\|_{L^\infty\II} + t^{1-\alpha}\| E_h(t) v_h\|_{L^\infty\II} + t \| E_h(t) \Delta_h v_h\|_{L^\infty\II} \le c\|v_h\|_{L^\infty\II}\quad \forall~v_h\in X_h,\end{aligned}$$ which plays an important role in the error analysis. Note that the $L^\infty\II$-norm error analysis of the scheme remains scarce, even though the $L^2\II$-norm estimate has been completely understood (cf. [@Karaa:nonlinear; @JinLiZhou:nonlinear]). For completeness, we shall provide an error estimate in the $L^\infty\II$-norm.

Spatially semidiscrete scheme for the linear problem
----------------------------------------------------

First we recall some error estimates for the following linear subdiffusion equation: $$\label{PDEv-linear} \partial_t^\alpha v(t)-\Delta v(t)=g(t), \quad\,\,\forall t\in (0,T] ,$$ where $g$ is a given source function, and $v(0)\in D$ is the given initial condition.
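Before recalling the error estimates for the linear problem, the following small Python sketch spells out the discrete objects just introduced on a one-dimensional uniform mesh (the paper itself works on a two-dimensional domain, so this is only a caricature): the P1 mass and stiffness matrices, the discrete Laplacian $\Delta_h$, and the projections $P_h$ and $R_h$ applied to a smooth function. The test function and the element-wise Simpson quadrature are illustrative choices.

```python
# A 1-D, uniform-mesh caricature of the finite element objects defined above.
import numpy as np

Msub = 64                              # number of subintervals; h = 1/Msub
h = 1.0 / Msub
x = np.linspace(0.0, 1.0, Msub + 1)
xi = x[1:-1]                           # interior nodes (homogeneous Dirichlet data)
n = xi.size

# P1 mass matrix (phi_i, phi_j) and stiffness matrix (phi_i', phi_j')
M = h / 6.0 * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
S = 1.0 / h * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Discrete Laplacian: (Delta_h w_h, v_h) = -(w_h', v_h')  <=>  M (Delta_h w) = -S w
Delta_h = -np.linalg.solve(M, S)

v = lambda t: np.sin(np.pi * t)        # smooth test function with v(0) = v(1) = 0

# L^2 projection P_h v: solve M c = b with b_i = (v, phi_i) (Simpson rule per element)
b_mass = h / 3.0 * (v(xi) + v(xi - h / 2) + v(xi + h / 2))
P_h_v = np.linalg.solve(M, b_mass)

# Ritz projection R_h v: solve S c = b with b_i = (v', phi_i'), exact for hat functions
b_stiff = (2.0 * v(xi) - v(xi - h) - v(xi + h)) / h
R_h_v = np.linalg.solve(S, b_stiff)

# P_h v is O(h^2)-close to v at the nodes; in 1-D the Ritz projection coincides
# with the nodal interpolant, so the second number is near machine precision.
print(np.max(np.abs(P_h_v - v(xi))), np.max(np.abs(R_h_v - v(xi))))
```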
The semidiscrete FEM for seeks $v_h(t)\in X_h$ such that $$\label{eqn:semidiscrete-linear} \partial_t^\alpha v_h(t)-\Delta_h v_h(t)=P_hg(t), \quad\,\,\forall t\in (0,T],$$ with $v_h(0)=R_hv(0)$. Recall that $R_h$ has the almost stability property [@Thomee:2006 eq. (6.60)] $$\label{eqn:ritz-stab} \| R_h w \|_{L^\infty\II} \le c\ell_h \| w\|_{L^\infty\II},\quad \text{with}~~\ell_h=\max(1, \log(1/h)).$$ To derive the error estimate of , we need the following lemma for the Ritz projection $R_h$, where the proof relies on the smoothing property of the solution operator $F(t)$:$$\begin{aligned} \label{eqn:reg-F} \| \Delta F(t) w \|_{L^p \II} \le c \| w \|_{L^p \II}, \quad \text{for all} ~~p\in[1,\infty).\end{aligned}$$ This follows directly from the representation and the resolvent estimate [@Ouhabaz:1995 Theorem 3.1] $$\| (z - \Delta)^{-1} w \|_{L^p\II} \le c_p |z|^{-1}\| w \|_{L^p\II}\quad \forall~~ z \in\Sigma_\phi,~\phi\in(\pi/2,\pi), ~p\in [1,\infty).$$ \[lem:space-error-linear-0\] Let $v$ be the solution of the linear problem . Then there holds $$\begin{aligned} \| (v-R_hv)(t) \|_{L^\infty\II} \le ch^2\ell_h^2 \big(\| \Delta v(0) \|_{L^\infty\II} + \int_0^t \| g'(s) \|_{L^\infty\II}\,ds\big)\end{aligned}$$ with $\ell_h=\max(1, \log(1/h))$. Let $I_h$ be the Lagrange interpolation operator. Then we have $$v-R_hv = (R_h - I)(v - I_h v)$$ and hence by and the approximation property of $I_h$, we derive for $2 \le p<\infty $ $$\| v-R_hv \|_{L^\infty} \le c \ell_h \| v - I_h v \|_{L^\infty\II} \le c h^{2-2/p}\ell_h \| v \|_{W^{2,p}\II}.$$ Now using the full elliptic regulariy, we have for $2 \le p<\infty $ [@Thomee:2006 eq. (6.78)] $$\| v \|_{W^{2,p}\II} \le c p \| \Delta v \|_{L^p\II}.$$ Recalling the solution representation , we have $$\begin{split} \Delta v&= \Delta (I + F(t)\Delta) v(0) + \int_0^t \Delta E(t-s) g(s)\,\d s\\ &= \Delta (I + F(t)\Delta) v(0) + \int_0^t \Delta F(t-s) g'(s)\,\d s - \Delta (F(0) g(t) - F(t)g(0))\\ \end{split}$$ Now we apply the smoothing property and arrive at $$\begin{aligned} \| \Delta v\|_{L^p\II} &\le c\| \Delta v(0) \|_{L^p\II} + c\int_0^t \| g'(s) \|_{L^p\II}\,ds. \end{aligned}$$ Then the desired result follows immediately by choosing $p=\ell_h$. The semidiscrete solution $v_h$ satisfies the following error estimate. 
\[lem:space-error-linear\] For the semidiscrete solution $v_h$ to problem , [there holds, with $\ell_h=\max(1, \log(1/h))$, that]{} $$\begin{aligned} \max_{t\in[0,T]} \|v_h(t)-v(t)\|_{L^2(\Omega)} \leq ch^2 \ell_h^3 \big(\| \Delta v(0) \|_{L^\infty\II} + \int_0^t \| g'(s) \|_{L^\infty\II}\,ds\big).\end{aligned}$$ We use the splitting $v_h -v = (v_h - P_h v ) + (P_h v - v) = : \psi + \theta.$ By Lemma \[lem:space-error-linear-0\] and [@Douglas:1974 Corollary], it is easy to see for all $t\in[0,T]$ $$\begin{aligned} \label{eqn:the} \| \theta(t) \|_{L^\infty \II} + \| (P_h v- R_h v)(t) \|_{L^\infty \II} \le c h^2\ell_h^2 \big(\| \Delta v(0) \|_{L^\infty\II} + \int_0^t \| g'(s) \|_{L^\infty\II}\,\d s\big).\end{aligned}$$ Besides, we note that $\psi$ satisfies the equation $$\begin{aligned} \partial_t^\alpha \psi (t) - \Delta_h \psi (t) = \Delta_h (R_h-P_h) v(t), \quad \text{with}~\psi(0) = (R_h - P_h)v.\end{aligned}$$ Therefore, by the representation , we arrive at $$\begin{aligned} \psi (t) &= (I+F_h(t)\Delta_h)(R_h - P_h)v(0) + \int_0^t E_h(t-s) \Delta_h (R_h-P_h) v(s)\,ds =: I_1+I_2.\end{aligned}$$ The estimate of $I_1$ follows directly from and $$\begin{aligned} \| I_1 \|_{L^\infty\II} &\le c \| (R_h - P_h)v(0) \|_{L^\infty\II} \le c h^2\ell_h^2 \| \Delta v(0) \|_{L^\infty\II} .\end{aligned}$$ For the second term, we apply the inverse inequality for finite element functions, as well as and , to obtain that $$\begin{aligned} \| I_2 \|_{L^\infty\II} &\le c h^{-2\epsilon} \int_0^t (t-s)^{-1+\epsilon} \| (R_h-P_h) v(s) \|_{L^\infty\II}\,\d s\\ &\le c \epsilon^{-1} h^{2-2\epsilon} \ell_h^2 \big(\| \Delta v(0) \|_{L^\infty\II} + \int_0^t \| g'(s) \|_{L^\infty\II}\,\d s\big)\end{aligned}$$ by choosing $ \epsilon=1/\ell_h$, then we complete the proof of the lemma. Error analysis for the nonlinear problem ---------------------------------------- Now we turn to the nonlinear problem . The following lemma provides an error estimate of the semidiscrete scheme . \[lem:error-semi-2\] Assume that the same conditions in Theorem \[thm:reg-u\] [hold]{} valid. Then the semidiscrete problem has a unique solution $u_h\in C([0,T]\times\bar\Omega)$, which satisfies $$\begin{aligned} \label{error-estimate-FEM} \max_{0\leq t\leq T}\|u(t)-u_h(t)\|_{L^\infty(\Omega)}\le c h^2\ell_h^3,\quad \text{with}~~\ell_h=\max(1, \log(1/h)).\end{aligned}$$ To begin with, we assume that the nonlinear term $f:\mathbb{R}\rightarrow\mathbb{R}$ is Globally Lipschitz continuous. Then, by the argument in [@JinLiZhou:nonlinear Theorem 3.1], the existence and uniqueness of the solution $u_h$ hold. It remains to establish the estimate . To this end, we define $v_h(t)$ as the solution of $$\begin{aligned} \partial_t^\al v_h(t) - \Delta_h v_h(t) = P_h f(u(t)), \quad \text{with}\quad v_h(0) =R_h u_0.\end{aligned}$$ This together with Lemma \[lem:space-error-linear\] and Theorem \[thm:reg-u\] yields the following estimate for $t \ge 0$ $$\label{eqn:wh} \begin{split} \| (u-v_h)(t) \|_{L^2(\Omega)} \le ch^2\ell_h^3. 
\end{split}$$ Meanwhile, we note that $\rho_h:=v_h-u_h$ satisfies the following equation $$\partial_t^\al \rho_h(t) -\Delta_h \rho_h(t) = P_h f (u(t)) - P_h f(u_h(t)), \quad \text{with}\quad \rho_h(0)=0.$$ Then, by the smoothing property , the Lipschitz continuity of $f$ and the stability of $P_h$ in $L^\infty\II$ [@Douglas:1974], we derive that $$\begin{split} \|\rho_h(t)\|_{L^\infty\II} &\le \int_0^t \| E_h(t-s) P_h [f (u(s)) - f(u_h(s)) ] \|_{L^\infty\II}\,\d s \\ &\le c \int_0^t (t-s)^{\alpha-1}\| P_h [f (u(s)) - f(u_h(s)) ] \|_{L^\infty\II}\,\d s\\ &\le c \int_0^t (t-s)^{\alpha-1}\| u(s) - u_h(s) \|_{L^\infty\II}\,\d s\\ &\le ch^2\ell_h^3+ c \int_0^t (t-s)^{\alpha-1} \| \rho_h(s) \|_{L^\infty\II}\,\d s. \end{split}$$ Then by Grönwall’s inequality, we have $$\max_{t\in[0,T]}\| \rho_h(t) \|_{L^\infty\II} \le c h^2\ell_h^3.$$ This and directly imply the desired result. Then the same argument as the one in Section \[ssec:local-lip\] helps to remove the globally Lipschitz condition. Finally, we consider the fully discrete scheme: find $U_h^n$ such that $$\label{eqn:BDF-CQ-m-fully} \left\{\begin{aligned} &\bdalt (U_h^n-U_h^0)-\Delta_h U_h^n= a_n^{(k)}(\Delta_h U_h^0 + f(U_h^0)) + f(U_h^n),\quad &&1\leq n\leq k-1,\\ &\bdalt (U_h^n-U_h^0)-\Delta_h U_h^n=f(U_h^n),\quad &&k\leq n\leq N. \end{aligned}\right.$$ Then by the resolvent estimate , all the arguments in Sections \[sec:prelim\] and \[sec:error\] work for the spatially discrete problems and . Therefore we have the following corollary. \[cor:error-fully-0\] Assume that the same conditions as in Theorem \[thm:reg-u\] hold valid. Let $u_h(t)$ be the solution of the semidiscrete scheme and $\{U_h^n\}_{n=1}^N$ be the solution of the fully discrete scheme . Then the following error estimate holds $$\| U^n_h - u_h(t_n) \|_{L^\infty(\Omega)} \le c \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)},$$ for any $t_n > 0$ and arbitrarily small $\ep>0$. Here the constant $c$ depends on $T, u_0, f, \ep$, but it is independent of $h$, $\tau$ and $N$. This corollary together with Lemma \[lem:error-semi-2\] immediately leads to the error estimate of the fully discrete scheme . \[thm:error-fd\] Assume that the same conditions as in Theorem \[thm:reg-u\] hold valid. Let $u(t)$ be the solution of the semilinear subdiffusion problem and $\{U_h^n\}_{n=1}^N$ be the solution of the fully discrete scheme . Then for $\ell_h=\max(1,\log({1/h}))$, the following error estimate holds $$\| U_h^n - u(t_n) \|_{L^\infty(\Omega)} \le c h^2\ell_h^3+ \tau^{\min(k,1+2\alpha-\epsilon)} t_n^{\alpha -\min (k, 1+2\alpha-\epsilon)},$$ for any $t_n > 0$ and arbitrarily small $\ep>0$. The constant $c$ depends on $T, u_0, f, \ep$, but it is independent of $h$, $\tau$ and $N$.

Numerical experiments {#sec:numerics}
=====================

In this section, we present numerical results to illustrate and support our theoretical findings. We consider the nonlinear subdiffusion model with $\Omega=(0,1)^2$ $$\label{Eqn:fde-numeric} \left\{\begin{aligned} \dalt u-\frac1{10}\Delta u&=4(u-u^3)&&\text{ in } \Omega\times (0,T),\\ u &=0 &&\text{ on } \pt\Omega\times (0,T),\\ u(0)&=u_0 &&\text{ in }\Omega , \end{aligned}\right.$$ In the computation, we divided the domain $\Omega$ into regular right triangles with $M$ equal subintervals of length $h$ on each side of the domain. The numerical solutions are computed by using the fully discrete scheme . In each time step, we solved the nonlinear elliptic problem by Newton’s iteration.
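To make the inner solver concrete, the following rough Python sketch performs one such Newton solve on a one-dimensional finite difference caricature of the model above; the diffusion coefficient $1/10$ and the nonlinearity $f(u)=4(u-u^3)$ are taken from the model, while the mesh, the time step, the assembled right-hand side of the first corrected step and all names are illustrative choices rather than the actual implementation used for the experiments.

```python
# A 1-D finite-difference caricature of the inner Newton solve performed at each
# time level of the fully discrete scheme: with leading CQ weight b0 = w_0/tau^alpha
# and all already-known terms collected in `rhs`, the step solves
#   (b0*I + 0.1*L) U = rhs + f(U),   L = -Delta_h (tridiagonal),   f(u) = 4*(u - u^3).
import numpy as np

m = 99                                    # interior grid points, h = 1/(m+1)
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2   # -Delta_h

f  = lambda u: 4.0 * (u - u ** 3)
df = lambda u: 4.0 * (1.0 - 3.0 * u ** 2)

def newton_step(rhs, b0, U_guess, tol=1e-12, maxit=25):
    """Solve (b0*I + 0.1*L) U - f(U) = rhs by Newton's method."""
    A = b0 * np.eye(m) + 0.1 * L
    U = U_guess.copy()
    for _ in range(maxit):
        G = A @ U - f(U) - rhs
        dU = np.linalg.solve(A - np.diag(df(U)), G)
        U -= dU
        if np.max(np.abs(dU)) < tol:
            break
    return U

# Illustration: the first corrected step (k = 2, a_1^{(2)} = 1/2) from u0(x) = x(1-x).
alpha, tau = 0.5, 1e-2
b0 = 1.5 ** alpha / tau ** alpha          # leading BDF2 weight: delta(0)^alpha = (3/2)^alpha
u0 = x * (1.0 - x)
Du0 = -0.1 * (L @ u0)                     # (1/10) * Delta_h u0
rhs = b0 * u0 + 0.5 * (Du0 + f(u0))       # history term + starting correction
U1 = newton_step(rhs, b0, u0)
print(np.max(np.abs(U1 - u0)))            # the first update has size O(tau^alpha)
```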
We fixed the spatial mesh size $h=1/100$, [computed the numerical solution $\{ U_{h}^N \}$ with temporal step size $\tau=T/N$ with $T=1$, $N=100\times 2^\ell$, $\ell=0,1,\ldots,4$]{} and reported $$\begin{aligned} e_\tau= \big\|U_h^N-u_h(t_N)\big\|_{L^\infty\II}.\end{aligned}$$ Since the semidiscrete solution $u_h$ is unavailable, we compute reference solutions on a finer mesh, i.e., the fully discrete solution $U_h^N$ with $h=1/100$, $N=20000$ and $k=6$. We consider the following problem data: $$\begin{aligned} u_0(x,y) = 4x(1-x)y(1-y), \end{aligned}$$ where the initial condition satisfies $$\begin{aligned} u_0, \Delta u_0 \in C(\bar\Omega)\qquad \text{and}\qquad u=0~~\text{on}~\partial\Omega.\end{aligned}$$ Therefore, our assumptions on initial condition (i.e., $u_0\in D$) are fulfilled. In Table \[Tab:casea\], we present numerical results of the corrected $k$-step BDF scheme . Numbers in brackets are the theoretical convergence rates. Numerical results show that the convergence rate is $O(\tau^{\min(k,1+2\alpha)})$. For example, in case that $\alpha=0.7$, we observe an $O(\tau^{2.4})$ rate of BDF$k$ scheme with $k=3,4,5,6$, but an $O(\tau^2)$ rate in case that $k=2$. This is in good agreement with our theoretical results. In Table \[Tab:casea-2\], we present numerical results for uncorrected $k$-step BDF schemes . We observe that all schemes are first-order accurate. This phenomena has already been reported for the linear fractional evolution equations [@Jin:SISC2016; @JinLiZhou:correction]. This implies the necessity of the modification in the starting steps. -5pt $\al$ $k\backslash \ell$ 0 1 2 3 4 rate ------- -------------------- ---------- ---------- ---------- ---------- ---------- ----------------------- $k=2$ 2.94e-06 9.99e-07 3.45e-07 1.20e-07 4.19e-08 $\approx$ 1.52 (1.60) $k=3$ 2.43e-06 8.90e-07 3.21e-07 1.14e-07 3.99e-08 $\approx$ 1.51 (1.60) 0.3 $k=4$ 4.36e-06 1.57e-06 5.58e-07 1.96e-07 6.82e-08 $\approx$ 1.52 (1.60) $k=5$ 9.73e-06 3.45e-06 1.21e-06 4.23e-07 1.46e-07 $\approx$ 1.53 (1.60) $k=6$ 5.17e-09 1.70e-09 5.60e-10 1.85e-10 6.09e-11 $\approx$ 1.60 (1.60) $k=2$ 2.79e-06 7.53e-07 2.02e-07 5.44e-08 1.45e-08 $\approx$ 1.91(2.00) $k=3$ 6.42e-07 1.75e-07 4.63e-08 1.20e-08 3.07e-09 $\approx$ 1.97 (2.00) 0.5 $k=4$ 8.63e-07 2.24e-07 5.77e-08 1.47e-08 3.74e-09 $\approx$ 1.98 (2.00) $k=5$ 1.52e-06 3.93e-07 1.01e-07 2.57e-08 6.53e-09 $\approx$ 1.98 (2.00) $k=6$ 8.57e-09 2.15e-09 5.38e-10 1.34e-10 3.37e-11 $\approx$ 2.00 (2.00) $k=2$ 3.13e-06 7.88e-07 1.98e-07 4.97e-08 1.25e-08 $\approx$ 2.00 (2.00) $k=3$ 8.57e-08 1.97e-08 4.13e-09 8.31e-10 1.63e-10 $\approx$ 2.35 (2.40) 0.7 $k=4$ 1.05e-07 1.99e-08 3.79e-09 7.20e-10 1.37e-10 $\approx$ 2.39 (2.40) $k=5$ 1.55e-07 2.97e-08 5.66e-09 1.08e-09 2.05e-10 $\approx$ 2.39 (2.40) $k=6$ 1.05e-08 2.00e-09 3.78e-10 7.18e-11 1.36e-11 $\approx$ 2.40 (2.40) : Corrected BDF$k$ scheme at [$T=1$]{} with $h=1/100$ and $\tau=1/(100\times 2^\ell)$ []{data-label="Tab:casea"} -5pt $\al$ $k\backslash \ell$ 0 1 2 3 4 rate ------- -------------------- ---------- ---------- ---------- ---------- ---------- ----------------------- $k=2$ 6.01e-05 2.99e-05 1.49e-05 7.47e-06 3.73e-06 $\approx$ 1.00 (1.00) $k=3$ 5.99e-05 2.99e-05 1.49e-05 7.46e-06 3.73e-06 $\approx$ 1.00 (1.00) 0.3 $k=4$ 5.99e-05 2.99e-05 1.49e-05 7.45e-06 3.73e-06 $\approx$ 1.00 (1.00) $k=5$ 5.98e-05 2.99e-05 1.49e-05 7.45e-06 3.72e-06 $\approx$ 1.00 (1.00) $k=6$ 9.72e-06 4.85e-06 2.43e-06 1.21e-06 6.06e-07 $\approx$ 1.00 (1.00) $k=2$ 1.05e-04 5.24e-05 2.61e-05 1.31e-05 6.53e-06 $\approx$ 1.00 
          $k=3$                1.05e-04   5.23e-05   2.61e-05   1.31e-05   6.53e-06   $\approx$ 1.00 (1.00)
  0.5     $k=4$                1.05e-04   5.22e-05   2.61e-05   1.31e-05   6.53e-06   $\approx$ 1.00 (1.00)
          $k=5$                1.05e-04   5.22e-05   2.61e-05   1.31e-05   6.53e-06   $\approx$ 1.00 (1.00)
          $k=6$                3.85e-05   1.92e-05   9.62e-06   4.81e-06   2.40e-06   $\approx$ 1.00 (1.00)
          $k=2$                1.60e-04   8.00e-05   3.99e-05   2.00e-05   9.97e-06   $\approx$ 1.00 (1.00)
          $k=3$                1.60e-04   7.98e-05   3.99e-05   1.99e-05   9.97e-06   $\approx$ 1.00 (1.00)
  0.7     $k=4$                1.60e-04   7.98e-05   3.99e-05   1.99e-05   9.97e-06   $\approx$ 1.00 (1.00)
          $k=5$                1.60e-04   7.98e-05   3.99e-05   1.99e-05   9.97e-06   $\approx$ 1.00 (1.00)
          $k=6$                1.10e-04   5.50e-05   2.75e-05   1.38e-05   6.88e-06   $\approx$ 1.00 (1.00)

  : Uncorrected BDF$k$ scheme at [$T=1$]{} with $h=1/100$ and [$\tau=1/(100\times 2^\ell)$]{} []{data-label="Tab:casea-2"}

Acknowledgements {#acknowledgements .unnumbered}
================

The authors are grateful to Prof. Buyang Li for his suggestion and valuable comments on an earlier version of the paper.

[10]{}

M. Al-Maskari and S. Karaa. Numerical approximation of semilinear subdiffusion equations with nonsmooth initial data. , 57(3):1524–1544, 2019.

A. A. Alikhanov. A new difference scheme for the time fractional diffusion equation. , 280:424–438, 2015.

N. Y. Bakaev. Maximum norm resolvent estimates for elliptic finite element operators. , 41(2):215–239, 2001.

N. Y. Bakaev, V. Thomée, and L. B. Wahlbin. Maximum-norm estimates for resolvents of elliptic finite element operators. , 72(244):1597–1610, 2003.

B. Berkowitz, J. Klafter, R. Metzler, and H. Scher. Physical pictures of transport in heterogeneous media: Advection-dispersion, random-walk, and fractional derivative formulations. , 38(10):9–1, 2002.

F. Chen, Q. Xu, and J. S. Hesthaven. A multi-domain spectral method for time-fractional differential equations. , 293:157–172, 2015.

M. Crouzeix and V. Thomée. On the discretization in time of semilinear parabolic equations with nonsmooth initial data. , 49(180):359–377, 1987.

E. Cuesta, C. Lubich, and C. Palencia. Convolution quadrature time discretization of fractional diffusion-wave equations. , 75(254):673–696, 2006.

J. Douglas, Jr., T. Dupont, and L. Wahlbin. The stability in [$L^{q}$]{} of the [$L^{2}$]{}-projection into finite element function spaces. , 23:193–197, 1974/75.

Q. Du, J. Yang, and Z. Zhou. An analysis of nonlocal-in-time Allen-Cahn equations. .

C. M. Elliott and S. Larsson. Error estimates with smooth and nonsmooth data for a finite element method for the [C]{}ahn-[H]{}illiard equation. , 58(198):603–630, S33–S36, 1992.

G. Gao, Z. Sun, and H. Zhang. A new fractional numerical differentiation formula to approximate the [C]{}aputo fractional derivative and its applications. , 259:33–50, 2014.

E. Hairer and G. Wanner. , volume 14 of [*Springer Series in Computational Mathematics*]{}. Springer-Verlag, Berlin, second edition, 1996. Stiff and differential-algebraic problems.

B. Jin, R. Lazarov, and Z. Zhou. Error estimates for a semidiscrete finite element method for fractional order parabolic equations. , 51(1):445–466, 2013.

B. Jin, R. Lazarov, and Z. Zhou. An analysis of the [L]{}1 scheme for the subdiffusion equation with nonsmooth data. , 36(1):197–221, 2016.

B. Jin, R. Lazarov, and Z. Zhou. Two fully discrete schemes for fractional diffusion and diffusion-wave equations with nonsmooth data. , 38(1):A146–A170, 2016.

B. Jin, R. Lazarov, and Z. Zhou. Numerical methods for time-fractional evolution equations with nonsmooth data: a concise overview. , 346:332–358, 2019.

B. Jin, B. Li, and Z. Zhou.
An analysis of the [C]{}rank–[N]{}icolson method for subdiffusion. , 38(1):518–541, 2017. B. Jin, B. Li, and Z. Zhou. Correction of high-order [BDF]{} convolution quadrature for fractional evolution equations. , 39(6):A3129–A3152, 2017. B. Jin, B. Li, and Z. Zhou. Numerical analysis of nonlinear subdiffusion equations. , 56(1):1–23, 2018. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo. . Elsevier Science B.V., Amsterdam, 2006. S. C. Kou et al. Stochastic modeling in nanoscale biophysics: subdiffusion within proteins. , 2(2):501–535, 2008. B. Li, K. Wang, and Z. Zhou. Long-time [A]{}ccurate [S]{}ymmetrized [I]{}mplicit-explicit [BDF]{} [M]{}ethods for a [C]{}lass of [P]{}arabolic [E]{}quations with [N]{}on-self-adjoint [O]{}perators. , 58(1):189–210, 2020. X. Li and C. Xu. A space-time spectral method for the time fractional diffusion equation. , 47(3):2108–2131, 2009. H.-l. Liao, D. Li, and J. Zhang. Sharp error estimate of the nonuniform [L]{}1 formula for linear reaction-subdiffusion equations. , 56(2):1112–1133, 2018. Y. Lin and C. Xu. Finite difference/spectral approximations for the time-fractional diffusion equation. , 225(2):1533–1552, 2007. C. Lubich. Discretized fractional calculus. , 17(3):704–719, 1986. C. Lubich. Convolution quadrature and discretized operational calculus. i. , 52(2):129–145, 1988. C. Lubich, I. H. Sloan, and V. Thom[é]{}e. Nonsmooth data error estimates for approximations of an evolution equation with a positive-type memory term. , 65(213):1–17, 1996. W. McLean and K. Mustapha. Time-stepping error bounds for fractional diffusion problems with non-smooth initial data. , 293:201–217, 2015. R. Metzler, J.-H. Jeon, A. G. Cherstvy, and E. Barkai. Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. , 16:24128, 37 pp., 2014. K. Mustapha, B. Abdallah, and K. M. Furati. A discontinuous [P]{}etrov-[G]{}alerkin method for time-fractional diffusion equations. , 52(5):2512–2529, 2014. K. Mustapha and W. McLean. Superconvergence of a discontinuous [G]{}alerkin method for fractional diffusion and wave equations. , 51(1):491–515, 2013. R. Nigmatullin. The realization of the generalized transfer equation in a medium with fractal geometry. , 133(1):425–430, 1986. E.-M. Ouhabaz. Gaussian estimates and holomorphy of semigroups. , 123(5):1465–1474, 1995. I. Podlubny. , volume 198. Elsevier, 1998. E. Sousa. How to approximate the fractional derivative of order $1<\alpha\leq2$. , 22(04):1250075, 2012. H. B. Stewart. Generation of analytic semigroups by strongly elliptic operators. , 199:141–162, 1974. M. Stynes, E. O’Riordan, and J. L. Gracia. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. , 55(2):1057–1079, 2017. Z. Sun and X. Wu. A fully discrete difference scheme for a diffusion-wave system. , 56(2):193–209, 2006. V. Thom[é]{}e. Galerkin finite element methods for parabolic problems (springer series in computational mathematics). 2006. R. Wu, H. Ding, and C. Li. Determination of coefficients of high-order schemes for riemann-liouville derivative. , 2014, 2014. Y. Yan, M. Khan, and N. J. Ford. An analysis of the modified [L]{}1 scheme for time-fractional partial differential equations with nonsmooth data. , 56(1):210–227, 2018. S. B. Yuste and L. Acedo. An explicit finite difference method and a new von [N]{}eumann-type stability analysis for fractional diffusion equations. , 42(5):1862–1874, 2005. M. Zayernouri, M. Ainsworth, and G. 
E. Karniadakis. A unified [P]{}etrov-[G]{}alerkin spectral method for fractional [PDE]{}s. , 283:1545–1569, 2015. F. Zeng, C. Li, F. Liu, and I. Turner. The use of finite difference/element approaches for solving the time-fractional subdiffusion equation. , 35(6):A2976–A3000, 2013. [^1]: Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong. (`[email protected]`) [^2]: Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong. (`[email protected], [email protected]`) [^3]: The research of K. Wang is partially supported by a Hong Kong RGC grant (Project No. 15300817), and that of Z. Zhou by a start-up grant from the Hong Kong Polytechnic University and Hong Kong RGC grant No. 25300818.