--- abstract: 'We propose a mechanical graphene analogue made of stainless steel beads held in place by a properly designed periodic magnetic field. A stable granular structure, free of mechanical borders and with well-predicted wave dynamics, is experimentally constructed. First we report the dispersion relation together with evidence of the Dirac points. Theoretical analysis shows that, compared to genuine or other artificial graphene analogues, unconventional edge modes exist on the free zigzag and armchair boundaries, together with novel bulk modes composed of in-plane extended translations but rotations localized at the edges. We observe the existence of edge modes on the free zigzag boundary, and we reveal experimentally a robust turning effect of edge waves from the zigzag to the armchair/zigzag boundary, even in the absence of a full band gap for bulk modes. Our work shows that granular graphene can serve as an excellent experimental platform to study novel Dirac, topological and nonlinear wave phenomena.' author: - 'L.-Y. Zheng' - 'F. Allein' - 'V. Tournat' - 'V. Gusev' - 'G. Theocharis' title: 'Granular Graphene: direct observation of novel edge states on zigzag and armchair boundaries' --- Introduction ============ Graphene, a single layer of carbon atoms arranged in a honeycomb lattice, has recently emerged as an appealing system for fundamental studies in condensed matter physics, and in particular of Dirac physics phenomena [@Neto; @Sarma; @Geim; @Waka]. The groundbreaking experiments on graphene were recognized with the 2010 Nobel Prize. Despite this enormous progress, it remains very difficult to design or modify graphene at will at the nanoscale. This has led many researchers to propose and study artificial microscopic, and even macroscopic, graphene analogues for further fundamental studies. 
These settings include molecules [@review], ultracold atoms [@ultracold], photons [@RechtsmanNature; @DuboisNC; @ZhangPRL; @AblowitzPRA] and phonons [@TorrentPRL; @YuNM] in honeycomb lattices. Edge waves in finite crystals have long been studied in condensed matter physics [@Shockley; @Tamm]. In genuine graphene, zero-energy electronic states are predicted in nanoribbons with zigzag/bearded boundaries [@Fujita; @Nakada] and have been confirmed by means of scanning-tunneling microscopy [@Kobayashi; @Niimi]. However, armchair boundaries do not support electronic edge states unless defects appear on the edges [@Kobayashi] or the system is anisotropic [@Kohmoto]. Interest in edge states has been significantly renewed by recent advances in the study of topological physics. It has been shown that robust edge states/modes can appear in topological insulators [@Hasan; @Qi], crystalline [@Fu] and higher-order [@Benalcazar] topological insulators. In these systems, the existence of edge states is directly connected with the topological properties of the bulk bands. This is also the case for the electronic edge states of graphene, since they are related to the winding number of the bulk eigenmodes [@Ryu; @Mong; @Delplace]. In addition to genuine graphene, an extensive body of work has been published over the last few years studying edge waves in different artificial graphene structures, particularly in photonics. A number of edge modes have been observed experimentally, such as conventional flat bands as well as unconventional edge branches on zigzag and bearded boundaries [@Kuhl; @Plotnik; @Milicevic]. Regarding armchair boundaries, the previous reports of armchair edge waves are either in anisotropic microwave artificial graphene [@BellecNJP], or in a photonic graphene-like structure of coupled micropillars [@MilićevićPRL], where the existence of edge states is due to the coupling of $p_{x,y}$ photonic orbitals. 
Considering the study of vibrational edge waves, the existence of edge modes has been predicted in genuine [@SavinPRB] and granular graphene [@ZhengEML]. Moreover, it has also been shown that in mechanical graphene analogues both flat and dispersive unconventional edge modes can be found at zigzag edges under fixed boundary conditions [@Coriolis; @Kariyado]. However, to the best of our knowledge, there has been no experimental observation of edge waves in mechanical graphene to date. In this work, we propose another type of artificial graphene, granular graphene, which can be thought of as a mechanical analogue of graphene in which the carbon atoms are replaced by macroscopic elastic beads and the chemical bonds are substituted by contact interactions with various stiffnesses. Compared to genuine graphene or other mechanical graphene structures, granular graphene possesses extra physical features that make it very appealing from a fundamental point of view. One of these is the existence of multiple degrees of freedom (translations and rotations) [@MerkelPRL; @HiraiwaPRL; @PichardPRB; @AlleinAPL]. This, in combination with the honeycomb lattice geometry, leads to Dirac cones in the dispersion relation [@TournatNJP; @ZhengUltrasonics] and to topological helical edge waves [@ZhengPRB]. From an experimental standpoint, previous reports on two-dimensional (2D) granular crystals have usually focused on closely-packed hexagonal or square lattices with mechanical constraints located on the borders [@GillesPRL; @CostePRE2008; @PrlChiara]. Thus far, however, there has been no direct observation of Dirac cones or of edge wave propagation in granular graphene. The obstacles blocking further experimental investigation of granular systems include difficulties in constructing different structures and stability issues, particularly for looser packings like the honeycomb structure. 
![\[fig1\] **Presentation of the Magneto-Granular Graphene (MGG).** (**A**) Cut-view schematics of the MGG where the magnetic field created by the permanent magnets induces attractive forces between the particles. (**B**) Top view of the assembled MGG, composed of 820 beads. (**C**) Experimental set-up for detecting wave propagation in the MGG. (**D**) Close-up schematic of the MGG. The blue box highlights a unit cell at position ($\mathrm{m},\mathrm{n}$) containing the two sublattice particles, labeled $A$ and $B$. (**E**) Considered interactions between beads in the MGG.](figure1_SA_V4){width="8.5cm"} In this article, we present how these structural difficulties are overcome using an external periodic magnetic field. The proposed magneto-granular graphene (MGG) is structurally stable and acts as a nearly free-standing granular structure. We then experimentally obtain the dispersion curves and evidence of the Dirac point and, for the first time, directly observe edge wave propagation in the MGG. We show theoretically that in the MGG edge waves can exist not only on free zigzag, but also on free armchair boundaries. More importantly, for a range of frequencies we show the existence of novel bulk modes whose translations are extended in the bulk of the structure while their rotations are localized at the edges. In the same frequency range, when translations are also constrained at the edges, i.e., in partial band gaps of the bulk modes, edge modes can appear on the free zigzag or on both the free zigzag and armchair edges. This leads to an interesting turning effect of edge waves from a zigzag to an armchair boundary in the frequency range where edge modes appear on both free zigzag and armchair edges. 
Aside from the topological wave mechanism, where edge transport occurs in the full gap for bulk waves and is protected by the bulk topology [@ZhuPRB], the turning effect demonstrated here originates from the coexistence of wave solutions on the zigzag and armchair edges over a certain frequency range. The role of topology in the MGG, as in recent works on higher-order topological insulators, remains an intriguing open question. Experimental set-up and modeling ================================ The MGG is depicted in Fig. \[fig1\]B, where $820$ stainless steel beads (diameter $d=7.95$ mm, density $\rho=7678$ kg/m$^3$, Young’s modulus $E=190$ GPa and Poisson’s ratio $\nu=0.3$) are precisely placed in a honeycomb lattice, in contact with one another. This layout stems from a properly designed external magnetic field induced by permanent cylindrical NdFeB magnets (remanent magnetization $1.37$ T, diameter $6$ mm, and length $13$ mm) placed in a honeycomb configuration within the wood matrix, Fig. \[fig1\]A. The external periodic magnetic field magnetizes the elastic beads, resulting in equivalent pre-compression forces between beads and thus a mechanically stable structure. Between the elastic beads and the substrate, a thin layer of rubber (thickness $2$ mm) has been set to minimize the mechanical coupling of the granular graphene with the substrate, and to damp the transmission of elastic waves into the wood matrix. The experimental set-up is shown in Fig. \[fig1\]C. In-plane motion is excited by the driver connected to a piezoelectric transducer (*Panametrics* V3052). Each bead in the structure exhibits one rotation $\varphi$ around the out-of-plane $z-$axis and two in-plane translations $u$ and $v$ along the $x-$ and $y-$axes, respectively. The $u$ and $v$ components of each bead can be monitored separately by two laser vibrometers, which are sensitive to changes in the optical path length along the beam direction. 
Regarding the mechanical contact interactions between adjacent beads, we consider normal, shear and bending interactions, characterized by the contact rigidities $\xi_n$, $\xi_s$, and $\xi_b$ respectively, Fig. \[fig1\]E. Once the pre-compression has been determined (measured to be $\sim1.55$ N), $\xi_n$, $\xi_s$ and $\xi_b$ can be obtained from Hertzian contact mechanics [@Johnson; @Mindlin], see Methods. For the three types of interactions between adjacent beads, Fig. \[fig1\]E, the elongations corresponding to the effective normal $n_\beta$, shear $s_\beta$, and bending $b_\beta$ contact springs can be expressed as, \[efs\] $$\begin{aligned} n_\beta&=&(u_{\beta}-u_\alpha)\boldsymbol e_x \boldsymbol e_\beta+(v_{\beta}-v_\alpha)\boldsymbol e_y \boldsymbol e_\beta, \\ s_\beta&=&(u_{\beta}-u_\alpha)\boldsymbol e_x \boldsymbol l_\beta +(v_{\beta}-v_\alpha)\boldsymbol e_y \boldsymbol l_\beta -\dfrac{d}{2}(\varphi_\beta+\varphi_\alpha), \\ b_\beta&=&\dfrac{d}{2}(\varphi_\beta-\varphi_\alpha),\end{aligned}$$ where $\alpha=A,B$ is the sublattice index. In the honeycomb structure, each sublattice bead is in contact with three other beads, denoted by the neighboring index $\beta=1,2,3$. We define $\boldsymbol e_\beta$ as the unit vectors pointing from the center of bead $\alpha$ to the centers of its $\beta$-th neighbors. $\boldsymbol e_x$, $\boldsymbol e_y$ and $\boldsymbol e_z$ represent the unit vectors along the $x-$, $y-$ and $z-$axes, respectively. $\boldsymbol l_\beta$ are unit vectors normal to $\boldsymbol e_\beta$ and $\boldsymbol e_z$, of the form $\boldsymbol l_\beta=\boldsymbol e_z\times\boldsymbol e_\beta$. As displayed in Fig. 
\[fig1\]D, we can label the sublattice $\alpha$ in a normalized coordinate $(\mathrm{m},\mathrm{n})$ (with the bead center positions serving as the coordinates) by $\alpha_{\mathrm{m},\mathrm{n}}$, where $\mathrm{m}$, $\mathrm{n}$ are integers representing the normalized center positions of the beads along the $x-$ and $y-$axes, respectively. On site $(\mathrm{m}, \mathrm{n})$, the equations of motion can be expressed as follows, \[efm\] $$\begin{aligned} M\ddot{u}_{\alpha,\mathrm{m},\mathrm{n}}&=&\sum_{\beta}(\xi_n n_\beta\boldsymbol e_x\boldsymbol e_\beta+\xi_s s_\beta\boldsymbol e_x\boldsymbol l_\beta), \\ M\ddot{v}_{\alpha,\mathrm{m},\mathrm{n}}&=&\sum_{\beta}(\xi_n n_\beta\boldsymbol e_y\boldsymbol e_\beta +\xi_s s_\beta \boldsymbol e_y \boldsymbol l_\beta), \\ I\ddot{\varphi}_{\alpha,\mathrm{m},\mathrm{n}}&=& \dfrac{d}{2}\sum_{\beta}(\xi_s s_\beta +\xi_b b_\beta).\end{aligned}$$ Above, $M$ is the mass of a bead and $I$ is its moment of inertia. Overdots denote differentiation with respect to time. It can be seen from Eqs.  that bending interactions cannot lead to displacement of the beads, Eqs. (2a), (2b), while normal interactions do not contribute to the rotation of the beads, Eq. (2c). Based on the equations of motion in Eqs. 
, wave dynamics in the MGG can be described by, \[eq1\] $$\boldsymbol{\ddot{U}}_{\mathrm{m},\mathrm{n}}^A = S_0 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^A+ S_1 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^B+S_2 \boldsymbol{U}_{\mathrm{m-1},\mathrm{n+1}}^B+S_3 \boldsymbol{U}_{\mathrm{m-1},\mathrm{n-1}}^B,$$ $$\boldsymbol{\ddot{U}}_{\mathrm{m},\mathrm{n}}^B = D_0 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^B+ D_1 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^A+D_2 \boldsymbol{U}_{\mathrm{m+1},\mathrm{n+1}}^A+D_3 \boldsymbol{U}_{\mathrm{m+1},\mathrm{n-1}}^A,$$ where $\boldsymbol U_{\mathrm{m},\mathrm{n}}^\alpha=[u_\alpha;v_\alpha;\Phi_\alpha]_{\mathrm{m},\mathrm{n}}$, with $\Phi=\varphi d/2$, is the motion vector of particle $\alpha$ in the normalized coordinates. $S_i$ and $D_i$ ($i=0,1,2,3$) are $3\times3$ matrices, see the supplemental materials (SM). By applying Bloch periodic boundary conditions along both the $x-$ and $y-$axes, i.e., $\boldsymbol U_{\mathrm{m},\mathrm{n}}^\alpha=\boldsymbol U^\alpha e^{i \omega t -i q_x \mathrm{m} - i q_y \mathrm{n} }$ with the normalized wave vectors $q_x=k_xd/2$, $q_y=\sqrt{3}k_yd/2$, Eqs.  can be mapped onto an eigenvalue problem which leads to the dispersion curves of an infinite MGG, as shown in Figs. \[fig2\]A and  \[fig2\]B. Since the MGG in the experiments is of finite size $21\times41$, there are free zigzag edges at positions $(\mathrm{m},\mathrm{n})=(1,\mathrm{n})$, $(\mathrm{m},\mathrm{n})=(21,\mathrm{n})$ and free armchair edges at $(\mathrm{m},\mathrm{n})=(\mathrm{m},1)$, $(\mathrm{m},\mathrm{n})=(\mathrm{m},41)$. At the mechanically free boundaries, obtained by removing part of the neighbors of the edge beads, the beads interact with fewer neighboring beads than in the bulk. 
Therefore, the boundary conditions are derived from the cancellation of the interactions between the removed beads and the edge beads, which leads to the following boundary conditions: \[eq2\] $$M_0 \boldsymbol{U}_{1,\mathrm{n}}^B+D_1 \boldsymbol{U}_{1,\mathrm{n}}^A = 0,$$ $$M_1 \boldsymbol{U}_{21,\mathrm{n}}^A+S_1 \boldsymbol{U}_{21,\mathrm{n}}^B =0,$$ for the zigzag edges, and, \[eq3\] $$M_2 \boldsymbol{U}_{\mathrm{m},1}^A+S_3 \boldsymbol{U}_{\mathrm{m}-1,0}^B =0,$$ $$M_3 \boldsymbol{U}_{\mathrm{m},1}^B+ D_3 \boldsymbol{U}_{\mathrm{m}+1,0}^A =0,$$ $$M_4 \boldsymbol{U}_{\mathrm{m},41}^A + S_2 \boldsymbol{U}_{\mathrm{m}-1,42}^B =0,$$ $$M_5 \boldsymbol{U}_{\mathrm{m},41} ^B+ D_2 \boldsymbol{U}_{\mathrm{m}+1,40}^A=0,$$ for the armchair edges, where $M_j$ ($j=0,1,2,3,4,5$) are $3\times3$ matrices, see SM. To account for dissipation, a phenomenological on-site damping term [@BoechlerNat], $-1/\tau\,\boldsymbol{\dot{U}}_{\mathrm{m},\mathrm{n}}^\alpha$, has also been introduced into the right-hand side of Eqs. , with $\tau$ characterizing the decay time of the waves. This coefficient has been chosen to fit the experimental results. More details on the implementation of dissipation can be found in the Methods section. ![image](figures2_SA_V5){width="16.5cm"} Dispersion curves and Dirac point ================================= To measure the MGG dispersion, in-plane motion has been excited using a frequency sweep from $500$ Hz to $35$ kHz by the bead-driver located at position $(1, 22)$. The $u$, $v$ components of particle $B$ in each unit cell are collected by the laser vibrometers. By scanning all particles $B$, the spatial frequency signals of the translations are obtained, which in turn yield the dispersion curves through a double Fourier transform. Figures \[fig2\]A and \[fig2\]B present the dispersion curves of an infinite granular graphene without dissipation. 
The color scale reflects the weights of the $u$ (red curves) and $v$ (green curves) components in each mode. The corresponding numerical dispersion curves, mimicking the experimental process, are displayed in Figs. \[fig2\]C-\[fig2\]D, while the experimental ones are shown in Figs. \[fig2\]E-\[fig2\]F for the $u$ and $v$ components, respectively. Figures \[fig2\]C-\[fig2\]F indicate that up to $\sim 20$ kHz the experimental dispersion curves are in good agreement with both the theoretical and numerical curves, since these branches are translation-dominated. As expected, the branches with frequencies above $\sim 20$ kHz are absent because these modes are rotation-dominated and therefore not easily detected by the laser vibrometers. Interestingly, Figs. \[fig2\]E and \[fig2\]F reveal a band crossing at the K point around $10$ kHz. The observation of this crossing provides evidence of the Dirac cone in the MGG, originating from the honeycomb lattice symmetry. As shown in Methods, considering the in-plane motion there are theoretically two Dirac cones in granular graphene, with Dirac frequencies $\omega_\pm$. The band crossing around $10$ kHz corresponds to the Dirac point $\omega_-$ at the K point of the Brillouin zone (BZ). Note that another Dirac cone is also predicted around $\omega_+\sim 24$ kHz, Figs. \[fig2\]A-\[fig2\]B. However, this Dirac point is not visible in Figs. \[fig2\]E and \[fig2\]F because the translational signals around $24$ kHz are weak and thus hidden in the color scale. In order to observe the $\omega_+$ Dirac point in the MGG, another set of experiments has been performed around the target frequency. Experimentally, we choose the source to be a frequency sweep excitation from $18$ kHz to $26$ kHz. However, two main difficulties must still be overcome: (1) Collection of the weak translational signals. 
Since the modes are dominated by rotation over this frequency region, the translational components are consequently weak. Thus, these rotation-dominated modes are not easily detected by the laser vibrometers, which are only sensitive to changes due to displacements of the beads. When dissipation is also taken into account, the weak translational motion becomes even weaker due to attenuation during propagation. (2) The resolution of the Dirac point. Since the number of eigenmodes is related to the size of the MGG, a larger structure yields more eigenmodes, which in turn leads to a better resolution of the dispersion around the Dirac point. However, as explained in (1), a large sample is disadvantageous for the measurement because the translational signals of particles far from the source can be too weak to be measured by the vibrometers. As a compromise, in this experiment for the $\omega_+$ Dirac point, we chose a sample size of $11\times41$. This keeps the resolution along $q_y$ unchanged, but decreases the length along $q_x$ to reduce the influence of attenuation on the translational signal. A new sample of size $11\times41$ was constructed with stainless steel beads (diameter $8$ mm, density $7650$ kg/m$^3$, Young’s modulus $210$ GPa and Poisson’s ratio $0.3$); the pre-compression between beads in this sample is measured to be around $F_0=1$ N. From these parameters, we calculate the Dirac frequency to be $\omega_+\sim 22.585$ kHz. By scanning all the particles $B$ and performing the 2D real-to-reciprocal space Fourier transformation, the iso-frequency contour at a given frequency can be obtained. In Fig. \[fig2\]G, we show the iso-frequency contours at the Dirac frequency $\omega_+$ obtained experimentally and numerically, considering the same sample size and dissipation. 
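The K-point degeneracy at the heart of these measurements can be illustrated with a stripped-down Bloch calculation. The sketch below is a scalar toy model, not the full three-degree-of-freedom MGG problem (whose $S_i$, $D_i$ matrices are given in the SM): it keeps a single degree of freedom per bead and nearest-neighbor springs with placeholder unit stiffness and mass, yet already produces the touching of the two branches at the K point that defines a Dirac cone.

```python
import numpy as np

def bloch_frequencies(qx, qy, k=1.0, m=1.0):
    """Eigenfrequencies of a scalar honeycomb lattice at Bloch vector (qx, qy):
    one degree of freedom per bead, nearest-neighbor springs of stiffness k.
    A toy stand-in for the 6x6 eigenvalue problem built from S_i and D_i."""
    a1 = np.array([1.5, np.sqrt(3) / 2])   # lattice vectors, bead spacing = 1
    a2 = np.array([1.5, -np.sqrt(3) / 2])
    q = np.array([qx, qy])
    f = 1 + np.exp(1j * q @ a1) + np.exp(1j * q @ a2)   # structure factor
    # 2x2 Hermitian Bloch dynamical matrix for the A and B sublattices
    D = (k / m) * np.array([[3, -f], [-np.conj(f), 3]])
    return np.sqrt(np.linalg.eigvalsh(D))  # branches sorted: omega_- <= omega_+

# The structure factor vanishes at the K point, so the two branches touch there
K = (2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3)))
w_minus, w_plus = bloch_frequencies(*K)
```

Away from K the branches split linearly, giving the conical dispersion; in the actual MGG the three coupled degrees of freedom per bead yield two such cones, at $\omega_-$ and $\omega_+$.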
We observe that the mode at $\omega_+$, displayed by the high values in the iso-frequency contours, is only present around the K points, which provides evidence of the Dirac point in the MGG. ![\[fig3\] **Zigzag and armchair edge wave predictions.** (**A**) Edge wave dispersion curves for the zigzag and armchair edges. (**B**) Zoom around $20$ kHz. The gray regions represent the bulk modes, the red (blue) lines correspond to the edge wave branches. The modes marked in the edge branches at: $24.82$ kHz by the yellow dot in (**A**), $19.88$ kHz by the green dots in (**B**), and $19.62$ kHz by the black dot in (**B**) are displayed in Fig. \[fig4\](**D-F**).](figure_3_SA_V2){width="8.0cm"} ![image](figure_4_SA){width="16.5cm"} Edge waves ========== Another interesting feature that appears in the dispersion of the finite-sized MGG is the existence of branches in the $\Gamma$K and MK directions around $20$ kHz (green ellipses in Figs. \[fig2\]C-\[fig2\]F). These branches correspond to edge waves; in this section, we study them first theoretically and then experimentally. By considering free boundaries in Eqs.  and , the edge wave dispersion for the zigzag and armchair edges is calculated, see Fig. \[fig3\]A. In the calculations of the edge dispersion, we assume that the free zigzag (armchair) edges are located at $\mathrm{m}=1$ and $\mathrm{m}=21$ ($\mathrm{n}=1$ and $\mathrm{n}=41$), while the structure is infinite along the $y-$ ($x-$) axis. Therefore, based on Eqs.  and the boundary conditions in Eqs.  (Eqs. ) along with the Bloch periodic conditions along the $y-$ ($x-$) axis, the edge wave dispersions in Fig. \[fig3\]A are obtained. The gray regions correspond to bulk wave solutions, while the red (blue) curves correspond to the zigzag (armchair) edge wave solutions. In total, two edge branches for the zigzag and three for the armchair edges are present. This increased number of edge states, and especially the existence of edge states at the armchair edge, is not encountered in genuine graphene. 
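For contrast, the conventional zigzag flat band of genuine graphene can be reproduced with a minimal electronic tight-binding sketch; the hopping amplitude and ribbon width below are placeholder values, unrelated to the MGG parameters. At $k=\pi$ the zigzag-ribbon Bloch Hamiltonian decouples its two outermost sites, yielding the well-known pair of zero-energy edge states.

```python
import numpy as np

def zigzag_ribbon_h(k, n_rows=10, t=1.0):
    """Tight-binding Bloch Hamiltonian of a graphene zigzag nanoribbon.
    Across the ribbon width the sites form a chain whose hoppings alternate
    between 2*t*cos(k/2) (in-line bonds) and t (bonds linking zigzag lines)."""
    n = 2 * n_rows
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = 2 * t * np.cos(k / 2) if i % 2 == 0 else t
    return h

# At k = pi the alternating hopping vanishes: the two outermost sites decouple
# and contribute two (numerically) zero-energy, edge-localized states.
E = np.linalg.eigvalsh(zigzag_ribbon_h(np.pi))
```

In the MGG, by contrast, the free boundary conditions break the symmetry responsible for this flat band, which is why the edge spectrum discussed here looks qualitatively different.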
Similar edge modes have only been reported in photonic lattices with orbital bands [@MilićevićPRL], or in mechanical honeycomb lattices with purely in-plane translations [@Coriolis; @Kariyado], namely 2D mass-spring honeycomb systems with two translational degrees of freedom per sublattice. In granular graphene, however, there are three degrees of freedom per sublattice (two translations $u$, $v$ and one rotation $\varphi$). As we will see, this significantly enriches the edge physics of the system. Considering edge waves in the rotation-dominated region (above $\sim 22$ kHz), Fig. \[fig3\]A shows that no edge modes are found at the zigzag boundary. This differs from conventional graphene, where a flat band of zero-energy edge modes exists at the zigzag edges [@Kohmoto; @Ryu]. In mechanical graphene, similar flat-band edge states can be found under fixed boundary conditions. As noted in Ref. \[[@Kariyado]\], free boundary conditions, like the ones used in this work, break the chiral symmetry on the free zigzag edges, leading to the absence of a flat band of edge modes. However, a novel armchair edge branch appears, whose topological origin deserves further investigation. In Fig. \[fig3\]B, we present a close-up of the edge wave dispersion around $20$ kHz for the zigzag and armchair cases, respectively. Interestingly, there is an overlapping region from $\sim 19.84$ kHz to $\sim 20.07$ kHz where edge modes can be found on both the zigzag and armchair edges while, due to the absence of a full bulk gap, bulk modes also exist. To shed more light on the edge physics of the MGG, we carry out an eigenmode analysis of the structure. Since the MGG is of a finite size $21\times41$, a dynamical equation describing the wave behavior of the MGG can be derived from Eqs. $-$ by taking into account all the coordinate indices $\mathrm{m}$ and $\mathrm{n}$. 
Consequently, the eigenvalue problem from the dynamical equation of the MGG can be solved directly. In Fig. \[fig4\], we show several eigenmodes around the edge wave frequencies. Starting from the rotation-dominated region, we show two eigenmodes with eigenfrequencies $24.80$ kHz in Fig. \[fig4\]A and $24.82$ kHz in Fig. \[fig4\]D. The color scales of the three components indicate that in this frequency region the modes are dominated by rotation. This is consistent with the measured dispersion curves in Figs. \[fig2\]E and  \[fig2\]F, where modes are not detected above $\sim 22$ kHz since rotational signals cannot be recorded by the laser vibrometers. It can also be seen that the eigenmode in Fig. \[fig4\]A is extended, since all parts of the structure are involved in the motion. In contrast, the eigenmode in Fig. \[fig4\]D is localized, with the motion mostly confined to the free armchair boundaries. Note that, due to the boundaries, the eigenmodes of a finite-size graphene can be viewed as superpositions of the bulk and edge modes of the infinite MGG. Vibrational modes strongly localized at the edges of the structure are referred to as edge modes of the MGG. Therefore, the extended mode at $24.80$ kHz can be viewed as a mode dominated by contributions from the bulk modes of the infinite MGG in the gray region of Fig. \[fig3\], while the eigenmode in Fig. \[fig4\]D is dominated by the mode marked by the yellow dot in the armchair edge branch in Fig. \[fig3\]A. Regarding the overlapping region, two eigenmodes close to each other, at $19.89$ kHz and $19.88$ kHz, are presented in Figs. \[fig4\]B and \[fig4\]E, respectively. The eigenmode in Fig. \[fig4\]B is also extended, with translations involving most of the structure, while the eigenmode in Fig. \[fig4\]E is localized, similar to the one in Fig. 
\[fig4\]D but with motion constrained to both the zigzag and armchair boundaries. Thus, this eigenmode at $19.88$ kHz has a dominant contribution from the edge modes of the infinite granular graphene in the zigzag and armchair edge branches labelled by the green dots in Fig. \[fig3\]B. The structure of this eigenmode confirms the predicted existence of edge branches in both the zigzag and armchair ribbons (blue and red curves in Fig. \[fig3\]B). Finally, in the region just below the overlapping region ($\sim 19.61$ kHz to $\sim 19.84$ kHz), Figs. \[fig4\]C and \[fig4\]F, the behavior of the eigenmodes is quite similar to those in Figs. \[fig4\]B and \[fig4\]E but with no motion on the armchair boundaries. The eigenmode at $19.62$ kHz, Fig. \[fig4\]F, is confined to the zigzag edges only, indicating a dominant contribution from the edge mode in the zigzag branch of the infinite graphene marked by the black dot in Fig. \[fig3\]B. The extended modes in the region from $\sim 19.61$ kHz to $\sim 20.58$ kHz manifest another intriguing behavior. As shown in Fig. \[fig4\]B, the translational components $u,v$ of these eigenmodes are spread over the whole finite structure, but the rotation is localized only at the boundaries. In Fig. \[fig4\]C, similar properties are observed, but now the rotational component is confined to the zigzag boundaries only, like the rotation of the edge mode in Fig. \[fig4\]F. This highlights a novel feature of the dynamics of finite-size granular graphene: one can find modes whose translations are extended throughout the structure while their rotations are localized at the boundaries. To the best of our knowledge such behavior has not been reported in other graphene structures; the rich wave physics here originates from the extra rotational degree of freedom. 
As a result, the rotation of the beads can behave very differently from their translation in the MGG. This could lead to interesting applications, such as rotation-isolation devices in more general mechanical systems and advanced wave control. The existence of edge waves in the MGG can be confirmed directly in experiment. To observe edge wave propagation, the experimental set-up is the same as in Fig. \[fig1\]C, while a harmonic signal of duration $10$ ms with an initial linear ramp has been used as the source. All particles $B$ are again scanned to record the $u$, $v$ components. Figure \[fig5\] displays the measured total displacement amplitude ($\sqrt{u^2+v^2}$) at two separate times for a signal at $20$ kHz. As shown in Fig. \[fig5\]A, at $t=1.7$ ms the displacements are primarily localized on the zigzag edge while decaying into the bulk. Although bulk waves are also excited at this frequency, their decay is expected due to both dissipation and 2D geometrical spreading, which allows a clearer observation of the elastic edge wave at $20$ kHz. The numerical simulation of the experimental process is shown in Fig. \[fig5\]C, where the translational components of only the particles $B$ are shown. A good agreement between experiment and simulation is achieved. ![image](figure_5_SA_V2){width="16.5cm"} ![image](figure_6_SA_V2){width="16.5cm"} Novel turning effect of edge waves ================================== We now turn our attention to the frequency range in which edge modes coexist on both the zigzag and armchair boundaries. For example, since $20$ kHz lies in this frequency range, one should expect that when a zigzag edge wave at $20$ kHz reaches the corner, it can be mode-converted into an armchair edge wave. To observe this phenomenon, the spatial pattern at $t=3.7$ ms is depicted in Figs. \[fig5\]B and \[fig5\]D. 
Indeed, wave motion is seen to be localized on both the zigzag and armchair boundaries. This is further confirmed by the close-up of the experimental spatial pattern of motion at the lower MGG corner presented in Fig. \[fig5\]E. To demonstrate the turning effect more clearly, we have chosen two particles, marked by black and blue hexagons in Fig. \[fig5\]E, and we plot their time evolution in Fig. \[fig5\]F. In addition, Fig. \[fig5\]G provides the spatial distribution of the edge waves, obtained from experiments and simulations by focusing on rows $\mathrm{n}=22$, $\mathrm{n}=32$ and column $\mathrm{m}=10$, as labeled by arrows in Fig. \[fig5\]B. In rows $\mathrm{n}=22$ and $\mathrm{n}=32$ the motion distribution of the mode shows a similar profile (the amplitude is normalized to the first bead on the left zigzag edge), confirming the edge mode character: the motion propagates along the edge while decaying very fast into the bulk (over a distance of around $x=9d$). For $\mathrm{m}=10$, i.e., the bottom panel of Fig. \[fig5\]G, the translational signal also reveals a localized profile close to the armchair boundary, confirming that the movement of beads on the armchair edge is due to the turning effect and not to bulk modes. Note that, as indicated in Figs. \[fig5\]B and \[fig5\]D, due to dissipation and the slow propagation velocity, the edge wave at $20$ kHz on the armchair edge is damped before propagating a long distance, e.g. $15d$. Further investigation of zigzag and armchair edge wave dynamics, both with and without losses, can be found in the SM. Note also that the spatial pattern in Fig. \[fig5\]B shows a small asymmetry of the wave propagation in the upward and downward directions in the experiments. This is most likely due to uncertainties in the pre-compression forces and to an asymmetric excitation of motion caused by a small misalignment between the driving bead and the set-up. 
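The localization seen in such profiles can be quantified by a decay length extracted from the normalized amplitudes. The sketch below is a toy illustration with an assumed exponential profile and a made-up localization length (the values are not fitted to the experimental data): a log-linear least-squares fit recovers the decay length from a profile normalized to the edge bead.

```python
import numpy as np

# Toy edge-wave profile: amplitude decaying exponentially into the bulk,
# sampled at successive bead positions x = 0, d, 2d, ...
d = 7.95e-3                    # bead diameter [m]
ell = 2.0 * d                  # assumed localization length (illustrative)
x = np.arange(12) * d          # twelve beads into the bulk
amp = np.exp(-x / ell)         # normalized to the edge bead

# A log-linear least-squares fit recovers the localization length
slope, _ = np.polyfit(x, np.log(amp), 1)
ell_fit = -1.0 / slope
```

With these placeholder values the amplitude drops to about one percent within roughly nine bead diameters, in line with the qualitative picture of a wave confined to the edge.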
We now investigate edge wave propagation along a corner with an angle of 120$\degree$, connecting a zigzag boundary to another zigzag boundary. The experimental MGG is still of finite size $21\times41$, with a free zigzag boundary at $(\mathrm{m},\mathrm{n})=(1,\mathrm{n})$. In order to build the second zigzag boundary, the particles above the line connecting the positions $(\mathrm{m},\mathrm{n}) = (1,11)$ and $(\mathrm{m},\mathrm{n}) = (12,1)$ are removed, see Fig. \[fig6\]. The experimental set-up for the zigzag-to-zigzag edge wave measurement is the same as that shown in Fig. \[fig1\]C, where a harmonic signal of duration $10$ ms with an initial linear ramp and a frequency of $20$ kHz has been used as the source at position $(\mathrm{m},\mathrm{n}) = (1,26)$. All particles $B$ are again scanned to record the $u$, $v$ components. Since edge modes are found on the zigzag edges around $20$ kHz, one expects that when the zigzag edge wave reaches the 120$\degree$ corner, it can turn onto the other zigzag boundary. To observe this phenomenon, Fig. \[fig6\]A displays the measured total displacement amplitude ($\sqrt{u^2+v^2}$) at a given time after the steady state has been reached. The numerical simulation of the experimental process is shown in Fig. \[fig6\]B, where the translational components of only the particles $B$ are shown. Indeed, wave motion is seen to be localized on the new zigzag edge, and a good agreement between experiment and simulation is achieved. Bulk waves are also present because there is no gap for bulk modes at this frequency. In addition, Fig. \[fig6\]C provides greater detail on the spatial distribution of the zigzag edge waves at a given time by focusing on row $\mathrm{n}=18$ and on the cut line from $(\mathrm{m},\mathrm{n}) = (5,8)$ to $(\mathrm{m},\mathrm{n}) = (9, 20)$, as labeled by arrows in Fig. \[fig6\]B. As shown in Fig. 
\[fig6\]C, the two profiles have a similar form, with the translational signals becoming very weak (the amplitude is normalized to the first bead of each zigzag edge) after a distance of around $x=9d$. This confirms that the bead movement on the new zigzag boundary is due to the turning effect from one zigzag edge to the other and not to bulk modes.

Conclusions
===========

In this work, we propose a new artificial graphene, called magneto-granular graphene. This structure is composed of stainless steel beads in contact, placed in a properly designed magnetic field. The latter magnetizes the beads, resulting in equivalent pre-compression forces between beads and thus in a mechanically stable structure free of mechanical borders. The MGG proposed in this work can serve as an excellent experimental benchmark for fundamental studies of Dirac and edge wave physics in mechanical systems. Considering the wave behavior in the MGG, we first obtain the dispersion relation and the Dirac points. Then, we turn our attention to the edge physics of the structure. We show that the MGG supports unconventional edge waves that can exist also on armchair free boundaries, in contrast to genuine graphene and other artificial graphene analogues. In addition, we show that for a range of frequencies the structure supports edge vibrations on both the zigzag and armchair boundaries. Interestingly, in this region the bulk modes are extended in their translational motions but localized at the edges in their rotational motions. Such a unique behavior has not, to the best of our knowledge, been reported before. Novel applications such as rotational isolators could then be designed using the MGG or other flexible mechanical metamaterials with rotational elements [@Qian; @Babaee; @Deng]. Moreover, we also demonstrated that the coexistence of edge wave solutions on both zigzag and armchair boundaries leads to a turning effect from zigzag to armchair/zigzag free boundaries. 
This does not require a full bulk gap, which is normally necessary in the scenario of pseudospin topologically-protected wave propagation, as in the case of helical edge waves. The role of topology in the MGG, as in the recent works on higher-order topological insulators, remains an intriguing open question and might lead to the study of novel topological phases in mechanical systems. Finally, taking advantage of the intrinsic nonlinearities of granular crystals, the MGG proposed herein offers a perfect platform to explore a wide array of novel nonlinear bulk and edge waves in mechanical graphene, similar to the solitons [@Nesterenko; @Chong], nonlinear waves [@CostePRE; @JobPRL; @LeonardExpMech; @CabaretPRL] and breathers [@BoechlerPRL; @TheocharisPRE] studied in simpler crystal structures.

This work has been funded by RFI Le Mans Acoustique in the framework of the APAMAS and Sine City LMac projects and by the project CS.MICRO funded under the program Etoiles Montantes of the Region Pays de la Loire, and partly funded by the Acoustic Hub project.

Contact description in granular graphene
========================================

Considering the in-plane motion in the magneto-granular graphene (MGG), there are normal, shear and bending interactions characterized by the contact rigidities $\xi_n$, $\xi_s$ and $\xi_b$, respectively, as represented in Fig. \[fig1\]E. For the macroscopic elastic spheres of the MGG, the contact between the beads can be modeled by Hertzian contact theory [@Johnson; @Mindlin]. This leads to the following expressions for the rigidities: \[eq0\] $$\begin{aligned} \xi_n &=& \left(\dfrac{3R}{4} F_0 \right)^{1/3}E^{2/3}(1-\nu^2)^{-2/3}, \\ \xi_s &=& \left(6 F_0 R \right)^{1/3}E^{2/3}\dfrac{(1-\nu^2)^{1/3}}{(2-\nu)(1+\nu)},\end{aligned}$$ where $R$ is the radius of the bead, $E$ is Young’s modulus, $\nu$ is Poisson’s ratio, and $F_0$ is the normal precompression between the beads. 
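As a sanity check on these expressions, the rigidities can be evaluated numerically. The short sketch below is an illustration (not part of the original analysis): it uses the bead parameters quoted in this appendix ($E=190$ GPa, $\nu=0.3$, $d=7.95$ mm, $\rho=7678$ kg/m$^3$, $F_0\approx1.55$ N), together with the bending-rigidity estimate and the Dirac-frequency formula discussed below; $\mathrm{P}=MR^2/I=5/2$ is the standard value for a solid sphere.

```python
import math

# Bead and contact parameters quoted in the text (assumed here)
E   = 190e9      # Young's modulus, Pa
nu  = 0.3        # Poisson's ratio
d   = 7.95e-3    # bead diameter, m
rho = 7678.0     # density, kg/m^3
F0  = 1.55       # measured precompression, N
R   = d / 2

# Hertzian contact rigidities, Eqs. above
xi_n = (0.75 * R * F0) ** (1/3) * E ** (2/3) * (1 - nu**2) ** (-2/3)
xi_s = (6 * F0 * R) ** (1/3) * E ** (2/3) * (1 - nu**2) ** (1/3) / ((2 - nu) * (1 + nu))

# Contact radius and bending rigidity (rough estimate discussed below)
delta = (0.75 * R * F0 / E) ** (1/3) * (1 - nu**2) ** (1/3)
xi_b  = xi_n * (delta / R) ** 2

# Dirac frequencies at the K point (formula from the "Calculation of
# Dirac points" section); P = M R^2 / I = 5/2 for a solid sphere
M = rho * (4/3) * math.pi * R**3
P = 5 / 2
g = 3 * (xi_n + xi_s + 2 * P * (xi_b + xi_s))
h = 72 * P * (xi_n * xi_b + xi_n * xi_s + xi_b * xi_s)
w_plus  = math.sqrt((g + math.sqrt(g**2 - h)) / (4 * M))
w_minus = math.sqrt((g - math.sqrt(g**2 - h)) / (4 * M))

print(f"xi_n ~ {xi_n:.2e} N/m, xi_s ~ {xi_s:.2e} N/m, xi_b ~ {xi_b:.0f} N/m")
print(f"Dirac frequencies: {w_minus/(2*math.pi)/1e3:.1f} and {w_plus/(2*math.pi)/1e3:.1f} kHz")
```

The resulting rigidities land close to the values quoted in the text (within the precision of the measured $F_0$), and the Dirac frequencies fall around the tens-of-kHz range probed in the experiments.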
According to a previous study [@ZhengUltrasonics], the bending rigidity can be roughly estimated by $$\begin{aligned} \xi_b &\sim & \xi_n \left(\dfrac{\delta}{R}\right)^2, \end{aligned}$$ where $\delta$ is the radius of the contact surface between two beads, which is given by $$\begin{aligned} \delta &=& \left(\dfrac{3R}{4E}F_0\right)^{1/3}(1-\nu^2)^{1/3}.\end{aligned}$$ As long as the physical parameters of the beads are known and the precompression $F_0$ is measured, $\xi_n$, $\xi_s$ and $\xi_b$ can be obtained. In our experiment, the stainless steel beads have the following parameters: Young’s modulus $E= 190$ GPa, Poisson’s ratio $\nu=0.3$, diameter $d=7.95$ mm, and density $\rho=7678$ kg/m$^3$. The precompression was measured to be approximately $F_0 \sim 1.55$ N. This leads to the rigidities: $\xi_n\simeq 6.19\times10^{6}$ N/m, $\xi_s\simeq 5.09\times10^{6}$ N/m, and $\xi_b\simeq 3.04\times10^{2}$ N/m.

Calculation of Dirac points
===========================

Let us consider the modes at the corner (K point) of the BZ. Applying the periodic boundary condition, i.e., $\boldsymbol U_{\mathrm{m},\mathrm{n}}^\alpha=\boldsymbol U^\alpha e^{i \omega t -i q_x \mathrm{m} - i q_y \mathrm{n} }$, the equations of motion lead to two degenerate modes at the K point: $$\omega_{D_\pm}=\sqrt{\dfrac{g\pm \sqrt{g^2-h}}{4M}}.$$ Above, $g=3[\xi_n+\xi_s+2\mathrm{P}(\xi_b+\xi_s)]$ and $h=72\mathrm{P}(\xi_n\xi_b+\xi_n\xi_s+\xi_b\xi_s)$, with $\mathrm{P}=MR^2/I$. These degenerate modes originate from the symmetry of the honeycomb lattice and correspond to two Dirac points with frequencies $\omega_{D_\pm}$ at the K point.

Dissipation
===========

In order to compare the experimental results with the numerical simulations, the attenuation of the wave during propagation has to be considered. 
In our model, the attenuation is implemented as a phenomenological linear viscous on-site dissipation [@BoechlerNat], characterized by a decay time $\tau$ for the elastic waves, which can take different values depending on the displacement polarization. This leads to extra terms on the right-hand side of Eqs. (\[eq1\]), \[eq\_dissipation\] $$\begin{aligned} \boldsymbol{\ddot{U}}_{\mathrm{m},\mathrm{n}}^A & = & S_0 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^A+ S_1 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^B+S_2 \boldsymbol{U}_{\mathrm{m}-1,\mathrm{n}+1}^B \nonumber \\ & & +S_3 \boldsymbol{U}_{\mathrm{m}-1,\mathrm{n}-1}^B - \frac{1}{\tau} \boldsymbol{\dot{U}}_{\mathrm{m},\mathrm{n}}^A,\end{aligned}$$ $$\begin{aligned} \boldsymbol{\ddot{U}}_{\mathrm{m},\mathrm{n}}^B & = & D_0 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^B+ D_1 \boldsymbol{U}_{\mathrm{m},\mathrm{n}}^A+D_2 \boldsymbol{U}_{\mathrm{m}+1,\mathrm{n}+1}^A \nonumber \\ & & + D_3 \boldsymbol{U}_{\mathrm{m}+1,\mathrm{n}-1}^A - \frac{1}{\tau} \boldsymbol{\dot{U}}_{\mathrm{m},\mathrm{n}}^B.\end{aligned}$$ Therefore, the wave dynamics of the MGG in the presence of dissipation can be described by combining the boundary conditions in Eqs. (4) and Eqs. (5) with Eqs. (\[eq\_dissipation\]). Equations (\[eq\_dissipation\]) are second-order ordinary differential equations in time. As a consequence, we numerically obtain the time evolution of elastic wave propagation in the structure by solving Eqs. (\[eq\_dissipation\]) with the fourth-order Runge-Kutta method. More details about the time evolution of elastic wave propagation in two-dimensional granular crystals can be found in Ref. [@ZhengPRB]. By fitting the experimental results with the numerical ones, we estimate that $\tau$ is about 1 ms for both polarizations of displacement in our experiment. [99]{} A. H. C. Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, The electronic properties of graphene. Rev. Mod. Phys. **81**, 109 (2009). S. D. Sarma, S. Adam, E. H. 
Hwang, and E. Rossi, Electronic transport in two-dimensional graphene. Rev. Mod. Phys. **83**, 407 (2011). A. K. Geim and K. S. Novoselov, The rise of graphene. Nature Mater. **6**, 183–191 (2007). K. Wakabayashi, K. Sasaki, T. Nakanishi, and T. Enoki, Electronic states of graphene nanoribbons and analytical solutions. Sci. Technol. Adv. Mater. **11**, 054504 (2010). M. Polini, F. Guinea, M. Lewenstein, H. C. Manoharan, and V. Pellegrini, Artificial honeycomb lattices for electrons, atoms and photons. Nature Nanotechnology [**8**]{}, 625 (2013). L. Tarruell, D. Greif, T. Uehlinger, G. Jotzu, and T. Esslinger, Creating, moving and merging Dirac points with a Fermi gas in a tunable honeycomb lattice. Nature [**483**]{}, 302 (2012). M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, Photonic Floquet topological insulators. Nature (London) [**496**]{}, 196 (2013). M. Dubois, C. Shi, X. Zhu, Y. Wang, and X. Zhang, Observation of acoustic Dirac-like cone and double zero refractive index. Nat. Commun. [**8**]{}, 14871 (2017). X. Zhang and Z. Liu, Extremal transmission and beating effect of acoustic waves in two-dimensional sonic crystals. Phys. Rev. Lett. [**101**]{}, 264303 (2008). M. J. Ablowitz, S. D. Nixon, and Y. Zhu, Conical diffraction in honeycomb lattices. Phys. Rev. A [**79**]{}, 053830 (2009). D. Torrent and J. Sanchez-Dehesa, Acoustic analogue of graphene: Observation of Dirac cones in acoustic surface waves. Phys. Rev. Lett. [**108**]{}, 174301 (2012). S.-Y. Yu, X.-C. Sun, X. Ni, Q. Wang, X.-J. Yan, C. He, X.-P. Liu, L. Feng, M.-H. Lu, and Y.-F. Chen, Surface phononic graphene. Nature Mater. [**15**]{}, 1243–1247 (2016). W. Shockley, On the surface states associated with a periodic potential. Phys. Rev. **56**, 317 (1939). I. Tamm, Über eine mögliche Art der Elektronenbindung an Kristalloberflächen. I. Z. Phys. **76**, 849 (1932). https://doi.org/10.1007/BF01341581. M. Fujita, K. Wakabayashi, K. 
Nakada, and K. Kusakabe, Peculiar localized state at zigzag graphite edge. J. Phys. Soc. Jpn. **65**, 1920 (1996). K. Nakada, M. Fujita, G. Dresselhaus, and M. S. Dresselhaus, Edge state in graphene ribbons: Nanometer size effect and edge shape dependence. Phys. Rev. B **54**, 17954 (1996). Y. Kobayashi, K. Fukui, T. Enoki, K. Kusakabe, and Y. Kaburagi, Observation of zigzag and armchair edges of graphite using scanning tunneling microscopy and spectroscopy. Phys. Rev. B **71**, 193406 (2005). Y. Niimi, T. Matsui, H. Kambara, K. Tagami, M. Tsukada, and H. Fukuyama, Scanning tunneling microscopy and spectroscopy of the electronic local density of states of graphite surfaces near monoatomic step edges. Phys. Rev. B **73**, 085421 (2006). M. Kohmoto and Y. Hasegawa, Zero modes and edge states of the honeycomb lattice. Phys. Rev. B **76**, 205402 (2007). M. Z. Hasan and C. L. Kane, Topological insulators. Rev. Mod. Phys. **82**, 3045 (2010). X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors. Rev. Mod. Phys. **83**, 1057 (2011). L. Fu, Topological crystalline insulators. Phys. Rev. Lett. **106**, 106802 (2011). W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Quantized electric multipole insulators. Science **357**, 61 (2017). S. Ryu and Y. Hatsugai, Topological origin of zero-energy edge states in particle-hole symmetric systems. Phys. Rev. Lett. **89**, 077002 (2002). R. S. K. Mong and V. Shivamoggi, Edge states and the bulk-boundary correspondence in Dirac Hamiltonians. Phys. Rev. B **83**, 125109 (2011). P. Delplace, D. Ullmo, and G. Montambaux, Zak phase and the existence of edge states in graphene. Phys. Rev. B **84**, 195452 (2011). U. Kuhl, S. Barkhofen, T. Tudorovskiy, H.-J. Stöckmann, T. Hossain, L. de Forges de Parny, and F. Mortessagne, Dirac point and edge states in a microwave realization of tight-binding graphene-like structures. Phys. Rev. B **82**, 094308 (2010). Y. Plotnik, M. C. Rechtsman, D. Song, M. Heinrich, J. M. Zeuner, S. Nolte, Y. Lumer, N. Malkova, J. Xu, A. 
Szameit, Z. Chen, and M. Segev, Observation of unconventional edge states in ’photonic graphene’. Nature Mater. **13**, 57–62 (2014). M. Milićević, T. Ozawa, P. Andreakou, I. Carusotto, T. Jacqmin, E. Galopin, A. Lemaître, L. Le Gratiet, I. Sagnes, J. Bloch, and A. Amo, Edge states in polariton honeycomb lattices. 2D Mater. **2**, 034012 (2015). M. Bellec, U. Kuhl, G. Montambaux, and F. Mortessagne, The existence of topological edge states in honeycomb plasmonic lattices. New J. Phys. [**16**]{}, 113023 (2014). M. Milićević, T. Ozawa, G. Montambaux, I. Carusotto, E. Galopin, A. Lemaître, L. Le Gratiet, I. Sagnes, J. Bloch, and A. Amo, Orbital edge states in a photonic honeycomb lattice. Phys. Rev. Lett. [**118**]{}, 107403 (2017). A. V. Savin and Y. S. Kivshar, Vibrational Tamm states at the edges of graphene nanoribbons. Phys. Rev. B [**81**]{}, 165418 (2010). L.-Y. Zheng, V. Tournat, and V. Gusev, Zero-frequency and extremely slow elastic edge waves in mechanical granular graphene. Extreme Mechanics Letters [**12**]{}, 55–64 (2017). Y.-T. Wang, P.-G. Luan, and S. Zhang, Coriolis force induced topological order for classical mechanical vibrations. New J. Phys. [**17**]{}, 073031 (2015). T. Kariyado and Y. Hatsugai, Manipulation of Dirac cones in mechanical graphene. Sci. Rep. [**5**]{}, 18107 (2015). A. Merkel, V. Tournat, and V. Gusev, Experimental evidence of rotational elastic waves in granular phononic crystals. Phys. Rev. Lett. [**107**]{}, 225502 (2011). M. Hiraiwa, M. Abi Ghanem, S. P. Wallen, A. Khanolkar, A. A. Maznev, and N. Boechler, Complex contact-based dynamics of microsphere monolayers revealed by resonant attenuation of surface acoustic waves. Phys. Rev. Lett. [**116**]{}, 198001 (2016). H. Pichard, A. Duclos, J.-P. Groby, and V. Tournat, Two-dimensional discrete granular phononic crystal for shear wave control. Phys. Rev. B [**86**]{}, 134307 (2012). F. Allein, V. Tournat, V. E. Gusev, and G. Theocharis, Tunable magneto-granular phononic crystals. Appl. 
Phys. Lett. [**108**]{}, 161903 (2016). V. Tournat, I. P[é]{}rez-Arjona, A. Merkel, V. Sanchez-Morcillo, and V. Gusev, Elastic waves in phononic monolayer granular membranes. New J. Phys. [**13**]{}, 073042 (2011). L.-Y. Zheng, H. Pichard, V. Tournat, G. Theocharis, and V. Gusev, Zero-frequency and slow elastic modes in phononic monolayer granular membranes. Ultrasonics [**69**]{}, 201–214 (2016). L.-Y. Zheng, G. Theocharis, V. Tournat, and V. Gusev, Quasitopological rotational waves in mechanical granular graphene. Phys. Rev. B [**97**]{}, 060101(R) (2018). B. Gilles and C. Coste, Low-frequency behavior of beads constrained on a lattice. Phys. Rev. Lett. [**90**]{}, 174302 (2003). C. Coste and B. Gilles, Sound propagation in a constrained lattice of beads: High-frequency behavior and dispersion relation. Phys. Rev. E [**77**]{}, 021302 (2008). A. Leonard and C. Daraio, Stress wave anisotropy in centered square highly nonlinear granular systems. Phys. Rev. Lett. [**108**]{}, 214301 (2012). H. Zhu, T.-W. Liu, and F. Semperlotti, Design and experimental observation of valley-Hall edge states in diatomic-graphene-like elastic waveguides. Phys. Rev. B [**97**]{}, 174301 (2018). K. L. Johnson, Contact Mechanics. Cambridge Univ. Press, 1985. R. D. Mindlin, Compliance of elastic bodies in contact. Journal of Applied Mechanics [**16**]{}, 259 (1949). N. Boechler, G. Theocharis, and C. Daraio, Bifurcation-based acoustic switching and rectification. Nature Materials [**10**]{}, 665–668 (2011). K. Qian, D. J. Apigo, C. Prodan, Y. Barlas, and E. Prodan, Topology of the valley-Chern effect. Phys. Rev. B [**98**]{}, 155138 (2018). S. Babaee, J. T. B. Overvelde, E. R. Chen, V. Tournat, and K. Bertoldi, Reconfigurable origami-inspired acoustic waveguides. Sci. Adv. [**2**]{}, e1601019 (2016). B. Deng, P. Wang, Q. He, V. Tournat, and K. Bertoldi, Metamaterials with amplitude gaps for elastic solitons. Nat. Commun. [**9**]{}, 3410 (2018). C. Chong, M. A. Porter, P. G. 
Kevrekidis, and C. Daraio, Nonlinear coherent structures in granular crystals. J. Phys.: Condens. Matter [**29**]{}, 413002 (2017). V. F. Nesterenko, Dynamics of Heterogeneous Materials. Springer-Verlag, New York, 2001. C. Coste, E. Falcon, and S. Fauve, Solitary waves in a chain of beads under Hertz contact. Phys. Rev. E [**56**]{}, 6104–6117 (1997). S. Job, F. Melo, A. Sokolow, and S. Sen, How Hertzian solitary waves interact with boundaries in a 1D granular medium. Phys. Rev. Lett. [**94**]{}, 178002 (2005). A. Leonard, F. Fraternali, and C. Daraio, Directional wave propagation in a highly nonlinear square packing of spheres. Experimental Mechanics [**53**]{}, 327–337 (2013). J. Cabaret, P. B[é]{}quin, G. Theocharis, V. Andreev, V. E. Gusev, and V. Tournat, Nonlinear hysteretic torsional waves. Phys. Rev. Lett. [**115**]{}, 054301 (2015). N. Boechler, G. Theocharis, S. Job, P. G. Kevrekidis, M. A. Porter, and C. Daraio, Discrete breathers in one-dimensional diatomic granular crystals. Phys. Rev. Lett. [**104**]{}, 244302 (2010). G. Theocharis, M. Kavousanakis, P. G. Kevrekidis, C. Daraio, M. A. Porter, and I. G. Kevrekidis, Localized breathing modes in granular crystals with defects. Phys. Rev. E [**80**]{}, 066601 (2009).
**Tetiana Kasirenko and Aleksandr Murach**\ (Institute of Mathematics, National Academy of Sciences of Ukraine, Kyiv) **ELLIPTIC PROBLEMS WITH BOUNDARY CONDITIONS OF HIGH ORDERS IN HÖRMANDER SPACES** In a class of inner product Hörmander spaces, we investigate a general elliptic problem for which the maximum of the orders of the boundary conditions is greater than or equal to the order of the elliptic equation. The order of regularity for these spaces is an arbitrary radial positive function RO-varying at infinity in the sense of Avakumović. We prove that the operator of the problem under investigation is bounded and Fredholm on appropriate pairs of the Hörmander spaces indicated. A theorem on the isomorphism generated by this operator is proved. For generalized solutions to this problem, we establish a local a priori estimate and prove a theorem about their local regularity in Hörmander spaces. As an application, we obtain new sufficient conditions under which given derivatives of the solutions are continuous. 
**1. Introduction.** The central result of the theory of general elliptic boundary-value problems in bounded domains with smooth boundary states that these problems are Fredholm in appropriate pairs of Sobolev or Hölder function spaces (see, e.g., the survey [@Agranovich97 §2], the handbook [@FunctionalAnalysis72 Ch. III, §6], and the monographs [@Hermander63; @LionsMagenes71; @Triebel95]). This result has various applications, among them theorems on the improved regularity of solutions of elliptic problems. However, the classical Sobolev and Hölder scales are not calibrated finely enough for a number of problems arising in analysis and in the theory of differential equations. In this connection, L. Hörmander [@Hermander63] introduced broad classes of normed function spaces for which the regularity index of distributions is not a number but a rather general weight function of the frequency variables. Hörmander [@Hermander63; @Hermander83] applied these spaces to the investigation of the solvability and the regularity of solutions of linear partial differential equations. Hörmander spaces and their various generalizations have found applications in mathematical analysis, the theory of differential equations, and the theory of stochastic processes (see 
the monographs [@Jacob010205; @MikhailetsMurach14; @NicolaRodino10; @Paneah00; @Triebel01]). Recently, V. A. Mikhailets and the second author of this paper [@MikhailetsMurach05UMJ5; @MikhailetsMurach06UMJ2; @MikhailetsMurach06UMJ3; @MikhailetsMurach07UMJ5; @MikhailetsMurach06UMJ11; @MikhailetsMurach08UMJ4; @Murach08MFAT2; @MikhailetsMurach12BJMA2; @MikhailetsMurach14] developed a theory of solvability of general elliptic systems on smooth manifolds and of elliptic boundary-value problems in classes of Hilbert Hörmander spaces obtained by interpolation with a function parameter between pairs of Hilbert Sobolev spaces. The regularity index for these spaces is a radial function regularly varying at infinity in the sense of Karamata [@Karamata30a] (see the monographs [@Seneta76; @BinghamGoldieTeugels89]). By means of the method of interpolation with a function parameter between Hilbert spaces, the main results of the Sobolev theory of elliptic equations and problems were extended to the indicated Hörmander spaces. These results were supplemented in [@Murach09UMJ3; @ZinchenkoMurach13UMJ11; @ZinchenkoMurach14JMathSci; @AnopMurach14MFAT2; @AnopMurach14UMJ7; @ChepurukhinaMurach15UMJ5; @AnopKasirenko16MFAT4] for wider classes of Hilbert Hörmander spaces. We note that this interpolation method has also proved fruitful in the theory of parabolic initial-boundary-value problems [@LosMikhailetsMurach17CPAA; @LosMurach17OpenMath]. The theory built so far treats exclusively elliptic problems in which the orders of the boundary conditions are lower than the order of the elliptic equation. The aim of this paper is to supplement this theory with results on the solvability and the properties of solutions of elliptic problems in which the order of at least one boundary condition is greater than or equal to the order of the elliptic equation. We investigate these problems in a class of Hilbert Hörmander spaces whose regularity index is an arbitrary radial function RO-varying at infinity in the sense of Avakumović [@Avakumovic36] (see the monograph [@Seneta76]). 
This class was singled out in [@MikhailetsMurach09Dop3; @MikhailetsMurach13UMJ3] and called the extended Sobolev scale. It contains the refined Sobolev scale and consists of all Hilbert spaces that are interpolation spaces for pairs of Hilbert Sobolev spaces. The paper consists of 7 sections. Section 1 is this introduction. In Section 2 we state the elliptic boundary-value problem under investigation and consider the problem formally adjoint to it with respect to a special Green formula. In Section 3 we give the definition of the Hörmander function spaces that form the extended Sobolev scale. Section 4 contains the main results of the paper on the properties of the problem under investigation in Hörmander spaces. In Section 5, as an application of the main results, we obtain sufficient conditions for the continuity of generalized derivatives of solutions of the problem under investigation, in particular, conditions for its generalized solution to be classical. Section 6 is devoted to interpolation with a function parameter between pairs of Hilbert spaces and its application to the extended Sobolev scale. The results of the paper are proved in the final Section 7. **2. Statement of the problem.** Let $\Omega$ be an arbitrary bounded domain in the Euclidean space $\mathbb{R}^{n}$, where $n\geq2$. We assume that the boundary $\Gamma$ of this domain is an infinitely smooth compact manifold of dimension $n-1$, the $C^\infty$-structure on $\Gamma$ being induced by the space $\mathbb{R}^{n}$. In the domain $\Omega$ we consider the following boundary-value problem: $$\begin{gathered} \label{1f1} Au=f\quad\mbox{in}\quad\Omega,\\ B_{j}u=g_{j}\quad\mbox{on}\quad\Gamma, \quad j=1,...,q. \label{1f2}\end{gathered}$$ Here $$A:=A(x,D):=\sum_{|\mu|\leq 2q}a_{\mu}(x)D^{\mu}$$ is a linear differential operator on $\overline{\Omega}:=\Omega\cup\Gamma$ of arbitrary even order $2q\geq\nobreak2$, and each $$B_{j}:=B_{j}(x,D):=\sum_{|\mu|\leq m_{j}}b_{j,\mu}(x)D^{\mu}$$ is a linear boundary differential operator on $\Gamma$ of arbitrary order $m_{j}\geq0$. 
All coefficients $a_{\mu}(x)$ and $b_{j,\mu}(x)$ of these differential operators are infinitely smooth complex-valued functions on $\overline{\Omega}$ and $\Gamma$, respectively. Throughout the paper, functions and distributions are assumed to be complex-valued, so all function spaces under consideration are complex. In the above formulas and below we use the following standard notation: $\mu:=(\mu_{1},\ldots,\mu_{n})$ is a multi-index with nonnegative integer components, $|\mu|:=\mu_{1}+\cdots+\mu_{n}$, and $D^{\mu}:=D_{1}^{\mu_{1}}\ldots D_{n}^{\mu_{n}}$, where $D_{k}:=i\partial/\partial x_{k}$ for each $k\in\{1,...,n\}$, $i$ is the imaginary unit, and $x=(x_1,\ldots,x_n)$ is an arbitrary point of the space $\mathbb{R}^{n}$. We also set $D_{\nu}:=i\partial/\partial\nu$, where $\nu(x)$ is the unit vector of the inner normal to the boundary $\Gamma$ at the point $x\in\Gamma$. Throughout the paper we assume that the boundary-value problem \eqref{1f1}, \eqref{1f2} is elliptic in the domain $\Omega$; i.e., the differential operator $A$ is properly elliptic on $\overline{\Omega}$, and the system $B:=(B_{1},\ldots,B_q)$ of boundary differential operators satisfies the Lopatinskii condition with respect to $A$ on $\Gamma$ (see, e.g., the survey [@Agranovich97 Sec. 1.2] or the handbook [@FunctionalAnalysis72 Ch. III, §6, Subsecs. 1, 2]). **Example 1.** Consider the boundary-value problem that consists of the differential equation \eqref{1f1}, where the differential operator $A$ is properly elliptic on $\overline{\Omega}$, and of the boundary conditions $$\frac{\partial^{k+j-1}u}{\partial\zeta^{k+j-1}}+\sum_{|\mu|< k+j-1}b_{j,\mu}(x)D^{\mu}u=g_j\quad\text{on}\quad\Gamma, \quad j=1,...,q.$$ Here $k\geq0$ is an integer, and $\zeta:\Gamma\to\mathbb{R}^{n}$ is an infinitely smooth field of vectors $\zeta(x)$ nontangent to $\Gamma$ at the points $x\in\Gamma$. It is verified directly that this boundary-value problem is elliptic in the domain $\Omega$. If $0\leq k\leq q$, it is regular elliptic (see, e.g., [@Triebel95 Sec. 5.2.1, Remark 4]). 
An important particular case of this problem is obtained by setting $A:=\Delta^{q}$, with $\Delta$ the Laplace operator, and $\zeta(x):=\nu(x)$ for all $x\in\Gamma$. From now on we assume that $$m:=\max\{m_{1},\ldots,m_{q}\}\geq2q.$$ With the problem \eqref{1f1}, \eqref{1f2} we associate the linear mapping $$\label{1f3} u\mapsto(Au,Bu)=(Au,B_{1}u,\ldots,B_{q}u),\quad\mbox{where}\quad u\in C^{\infty}(\overline{\Omega}).$$ The aim of this paper is to investigate the properties of the extension by continuity of this mapping in appropriate pairs of Hörmander function spaces. To describe the range of this extension we need the following special Green formula [@KozlovMazyaRossmann97 formula (4.1.10)]: $$\begin{gathered} (Au,v)_{\Omega}+\sum_{j=1}^{m-2q+1}(D_{\nu}^{j-1}Au,w_{j})_{\Gamma}+ \sum_{j=1}^{q}(B_{j}u,h_{j})_{\Gamma}=\\ =(u,A^{+}v)_{\Omega}+\sum_{k=1}^{m+1}\biggl(D_{\nu}^{k-1}u,K_{k}v+ \sum_{j=1}^{m-2q+1}R_{j,k}^{+}w_{j}+ \sum_{j=1}^{q}Q_{j,k}^{+}h_{j}\biggr)_{\Gamma},\end{gathered}$$ where $u,v\in C^{\infty}(\overline{\Omega})$ and $w_{1},\ldots,w_{m-2q+1},h_{1},\ldots,h_{q} \in C^{\infty}(\Gamma)$, while $(\cdot,\cdot)_{\Omega}$ and $(\cdot,\cdot)_{\Gamma}$ denote the inner products in the Hilbert spaces $L_{2}(\Omega)$ and $L_{2}(\Gamma)$ of functions square integrable over $\Omega$ and $\Gamma$ with respect to the Lebesgue measures. 
Here $A^{+}$ is the differential operator formally adjoint to $A$; i.e., $$(A^{+}v)(x):=\sum_{|\mu|\leq2q}D^{\mu}(\overline{a_{\mu}(x)}v(x)).$$ Moreover, all $R_{j,k}^{+}$ and $Q_{j,k}^{+}$ are tangential differential operators formally adjoint, with respect to $(\cdot,\cdot)_{\Gamma}$, to $R_{j,k}$ and $Q_{j,k}$, respectively, and the tangential linear differential operators $R_{j,k}:=R_{j,k}(x,D_{\tau})$ and $Q_{j,k}:=Q_{j,k}(x,D_{\tau})$ are taken from the representations of the boundary differential operators $D_{\nu}^{j-1}A$ and $B_{j}$ in the form $$\begin{gathered} D_{\nu}^{j-1}A(x,D)=\sum_{k=1}^{m+1}R_{j,k}(x,D_{\tau})D_{\nu}^{k-1},\quad j=1,\ldots,m-2q+1,\\ B_{j}(x,D)=\sum_{k=1}^{m+1}Q_{j,k}(x,D_{\tau})D_{\nu}^{k-1},\quad j=1,\ldots,q.\end{gathered}$$ Note that $\mathrm{ord}\,R_{j,k}\leq 2q+j-k$ and $\mathrm{ord}\,Q_{j,k}\leq m_{j}-k+1$; of course, $R_{j,k}=0$ if $k\geq2q+j+1$ and $Q_{j,k}=0$ if $k\geq m_{j}+2$. Finally, each $K_{k}:=K_{k}(x,D)$ is a certain linear boundary differential operator on $\Gamma$ of order $\mathrm{ord}\,K_{k}\leq2q-k$ with coefficients of class $C^{\infty}(\overline{\Omega})$. The special Green formula leads to the following boundary-value problem in the domain $\Omega$: $$\begin{gathered} \label{1f4} A^{+}v=\omega\quad\mbox{in}\quad\Omega,\\ K_{k}v+\sum_{j=1}^{m-2q+1}R_{j,k}^{+}w_{j}+ \sum_{j=1}^{q}Q_{j,k}^{+}h_{j}=\theta_{k}\quad \mbox{on}\quad\Gamma,\quad k=1,...,m+1. \label{1f5}\end{gathered}$$ Besides the unknown function $v$ on $\Omega$, this problem contains $m-q+1$ additional unknown functions $w_{1},\ldots,w_{m-2q+1},h_{1},\ldots,h_{q}$ on the boundary $\Gamma$. The problem \eqref{1f4}, \eqref{1f5} is called formally adjoint to the problem \eqref{1f1}, \eqref{1f2} with respect to the special Green formula considered. It is known [@KozlovMazyaRossmann97 Theorem 4.1.1] that the boundary-value problem \eqref{1f1}, \eqref{1f2} is elliptic if and only if the formally adjoint problem \eqref{1f4}, \eqref{1f5} is elliptic as a boundary-value problem with additional unknown functions on the boundary of the domain. 
**Example 2.** Let us write the special Green formula for the elliptic boundary-value problem $$\label{1f1ex2} \Delta u=f\;\;\text{in}\;\;\Omega,\qquad \frac{\partial^{2}u}{\partial\nu^{2}}=g\;\;\text{on}\;\;\Gamma,$$ posed in the disk $\Omega:=\{(x_1,x_2)\in \mathbb{R}^{2}: x_{1}^2+x_{2}^2<1\}$. Note that $\Delta u=\partial_{\nu}^{2}u-\partial_{\nu}u+\partial_{\varphi}^{2}u$ on $\Gamma$; here $\partial_{\nu}:=\partial/\partial\nu=-\partial/\partial\varrho$ and $\partial_{\varphi}:=\partial/\partial\varphi$, with $(\varrho,\varphi)$ being the polar coordinates. Applying the second classical Green formula for the Laplace operator, we obtain $$\begin{gathered} (\Delta u,v)_{\Omega}+(\Delta u,w)_{\Gamma}+ (\partial_{\nu}^{2}u,h)_{\Gamma}=\\ =(u,\Delta v)_{\Omega}-(\partial_{\nu}u,v)_{\Gamma}+ (u,\partial_{\nu}v)_{\Gamma}+(\partial_{\nu}^{2}u- \partial_{\nu}u+\partial_{\varphi}^{2}u,w)_{\Gamma}+ (\partial_{\nu}^{2}u,h)_{\Gamma}=\\ =(u,\Delta v)_{\Omega}+ (u,\partial_{\nu}v+\partial_{\varphi}^{2}w)_{\Gamma}+ (\partial_{\nu}u,-v-w)_{\Gamma}+ (\partial_{\nu}^{2}u,w+h)_{\Gamma}\end{gathered}$$ for arbitrary functions $u,v\in C^{\infty}(\overline{\Omega})$ and $w,h\in C^{\infty}(\Gamma)$. Hence the special Green formula for the boundary-value problem \eqref{1f1ex2} takes the form $$\begin{gathered} (\Delta u,v)_{\Omega}+(\Delta u,w)_{\Gamma}+ (\partial_{\nu}^{2}u,h)_{\Gamma}=\\ =(u,\Delta v)_{\Omega}+ (u,\partial_{\nu}v+\partial_{\varphi}^{2}w)_{\Gamma}+ (D_{\nu}u,-iv-iw)_{\Gamma}+(D_{\nu}^{2}u,-w-h)_{\Gamma}.\end{gathered}$$ Therefore the boundary-value problem $$\begin{gathered} \Delta v=\omega\quad\mbox{in}\quad\Omega,\\ \partial_{\nu}v+\partial_{\varphi}^{2}w=\theta_{1},\quad -iv-iw=\theta_{2},\quad -w-h=\theta_{3}\quad\mbox{on}\quad\Gamma\end{gathered}$$ is formally adjoint to the problem \eqref{1f1ex2} with respect to this Green formula. The resulting formally adjoint problem contains two additional unknown functions $w$ and $h$ on $\Gamma$. **3. 
Hörmander spaces and the extended Sobolev scale.** We investigate the elliptic boundary-value problem \eqref{1f1}, \eqref{1f2} in appropriate pairs of the Hilbert Hörmander spaces [@Hermander63 Sec. 2.2] that form the extended Sobolev scale introduced in [@MikhailetsMurach09Dop3; @MikhailetsMurach13UMJ3]. Let us recall the definition of these spaces and some of their properties needed in what follows. For the Hörmander spaces used in this paper, the regularity index of distributions is a function parameter $\alpha\in\mathrm{RO}$. By definition, the class $\mathrm{RO}$ consists of all Borel measurable functions $\alpha:\nobreak[1,\infty)\rightarrow(0,\infty)$ for which there exist numbers $b>1$ and $c\geq1$ such that $c^{-1}\leq\alpha(\lambda t)/\alpha(t)\leq c$ for all $t\geq1$ and $\lambda\in[1,b]$ (the constants $b$ and $c$ may depend on $\alpha$). Such functions are called RO-varying at infinity. The class RO was introduced by V. G. Avakumović [@Avakumovic36] in 1936 and has been studied in sufficient detail (see, e.g., the monographs [@Seneta76 Appendix 1] and [@BinghamGoldieTeugels89 Secs. 2.0 – 2.2]). This class admits the following simple description: $$\alpha\in\mathrm{RO}\;\;\Leftrightarrow\;\;\alpha(t)=\exp\Biggl(\beta(t)+ \int\limits_{1}^{\:t}\frac{\gamma(\tau)}{\tau}\;d\tau\Biggr)\;\, \mbox{for}\;\,t\geq1,$$ where the real-valued functions $\beta$ and $\gamma$ are Borel measurable and bounded on the half-line $[1,\infty)$ (see, e.g., [@Seneta76 Appendix 1, Theorem 1]). The following property of the class $\mathrm{RO}$ is important for us: for every function $\alpha\in\mathrm{RO}$ there exist numbers $s_{0},s_{1}\in\mathbb{R}$, $s_{0}\leq s_{1}$, and $c_{0},c_{1}>0$ such that $$\label{Ax=b3} c_{0}\lambda^{s_{0}}\leq\frac{\alpha(\lambda t)}{\alpha(t)}\leq c_{1}\lambda^{s_{1}} \quad\mbox{for all}\quad t\geq1,\;\;\lambda\geq1$$ (see [@Seneta76 Appendix 1, Theorem 2]). 
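To make the two-sided bound concrete, here is a small numerical illustration of our own (not from the paper): for the regularly varying weight $\alpha(t)=t^{1/2}(1+\ln t)$, both of whose indices of variation equal $1/2$, the inequality holds on a sample grid with $s_0=0.4$, $s_1=0.6$, $c_0=1$ and $c_1=5$.

```python
import math

# Illustrative RO-varying weight (our own choice, not from the paper):
# alpha(t) = t^(1/2) * (1 + ln t), regularly varying of order s = 1/2.
def alpha(t):
    return math.sqrt(t) * (1.0 + math.log(t))

# Check c0 * lam**s0 <= alpha(lam*t)/alpha(t) <= c1 * lam**s1 on a grid,
# with s0 = 0.4 < 1/2 < 0.6 = s1 and constants c0 = 1, c1 = 5.
s0, s1, c0, c1 = 0.4, 0.6, 1.0, 5.0
ok = all(
    c0 * lam**s0 <= alpha(lam * t) / alpha(t) <= c1 * lam**s1
    for t in (1.0, 10.0, 1e3, 1e6)
    for lam in (1.0, 2.0, 10.0, 1e3, 1e6)
)
print(ok)
```

The logarithmic factor is absorbed into the slack between the exponents $s_0 < 1/2 < s_1$, which is exactly how the bound accommodates weights that are not pure powers.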
Put $$\begin{gathered} \sigma_{0}(\alpha):= \sup\,\{s_{0}\in\mathbb{R}:\,\mbox{the left inequality in \eqref{Ax=b3} holds}\},\\ \sigma_{1}(\alpha):=\inf\,\{s_{1}\in\mathbb{R}:\,\mbox{the right inequality in \eqref{Ax=b3} holds}\}.\end{gathered}$$ The numbers $\sigma_{0}(\alpha)$ and $\sigma_{1}(\alpha)$ are, respectively, the lower and upper Matuszewska indices [@Matuszewska64] of the function $\alpha\in\mathrm{RO}$ (see also the monograph [@BinghamGoldieTeugels89 Sec. 2.1.2]). Of course, $-\infty<\sigma_{0}(\alpha)\leq\sigma_{1}(\alpha)<\infty$. Let us give some typical examples of functions that are RO-varying at infinity. **Example 3.** Consider a continuous function $\alpha:[1,\infty)\rightarrow(0,\infty)$ such that $$\alpha(t):=t^{s}(\ln t)^{r_{1}}(\ln\ln t)^{r_{2}}\ldots(\underbrace{\ln\ldots\ln}_{k\;\mbox{\small times}} t)^{r_{k}}\quad\mbox{for}\quad t\gg1.$$ Here the integer $k\geq1$ and the real numbers $s,r_{1},\ldots,r_{k}$ are chosen arbitrarily. The function $\alpha$ belongs to the class $\mathrm{RO}$, and for it $\sigma_{0}(\alpha)=\sigma_{1}(\alpha)=s$. In general, the class $\mathrm{RO}$ contains every measurable function $\alpha:[1,\infty)\rightarrow(0,\infty)$ that is bounded and bounded away from zero on every compact set and is regularly varying at infinity in the sense of J. Karamata [@Karamata30a]. The latter property means that $\alpha(\lambda t)/\alpha(t)\to\lambda^{s}$ as $t\to\infty$ for some $s\in\mathbb{R}$. The Matuszewska indices of such a function equal the number $s$, which is called the order of variation of the function at infinity. Regularly varying functions are widely used in mathematics (see the monographs [@Seneta76; @BinghamGoldieTeugels89]). **Example 4.** Let $\theta\in\mathbb{R}$, $\delta>0$, and $r\in(0,1]$. Put $$\alpha(t):=\left\{ \begin{array}{ll} t^{\theta+\delta\sin(\ln\ln t)^{r}}\; &\hbox{for}\;t>e,\\ t^{\theta}\; &\hbox{for}\;1\leq t\leq e.
\end{array}\right.$$ Then $\alpha\in\mathrm{RO}$, with $\sigma_{0}(\alpha)=\theta-\delta$ and $\sigma_{1}(\alpha)=\theta+\delta$ [@Chepuruhina15Coll2 Example 6]. Let $\alpha\in\mathrm{RO}$. We define the Hörmander space $H^{\alpha}$ first on $\mathbb{R}^{n}$, where the integer $n\geq1$, and then on $\Omega$ and $\Gamma$. This space consists of distributions (generalized functions), which it is convenient for us to treat as *anti*linear functionals on the corresponding space of test functions. By definition, the linear space $H^{\alpha}(\mathbb{R}^{n})$ consists of all tempered distributions $w$ on $\mathbb{R}^{n}$ whose Fourier transform $\widehat{w}$ is locally Lebesgue integrable on $\mathbb{R}^{n}$ and satisfies the condition $$\int\limits_{\mathbb{R}^{n}} \alpha^2(\langle\xi\rangle)\,|\widehat{w}(\xi)|^2\,d\xi <\infty,$$ where $\langle\xi\rangle:=(1+|\xi|^{2})^{1/2}$ is the smoothed modulus of the vector $\xi\in\mathbb{R}^{n}$. This space is endowed with the inner product $$(w_{1},w_{2})_{H^{\alpha}(\mathbb{R}^{n})}:= \int\limits_{\mathbb{R}^{n}} \alpha^2(\langle\xi\rangle)\, \widehat{w_{1}}(\xi)\,\overline{\widehat{w_{2}}(\xi)}\,d\xi$$ and the corresponding norm $$\|w\|_{H^{\alpha}(\mathbb{R}^{n})}:= (w,w)_{H^{\alpha}(\mathbb{R}^{n})}^{1/2}$$ and is Hilbert and separable with respect to this norm. The space $H^{\alpha}(\mathbb{R}^{n})$ is the Hilbert isotropic case of the spaces $\mathcal{B}_{p,k}$ introduced and investigated by L. Hörmander in [@Hermander63 Sec. 2.2] (see also his monograph [@Hermander83 Sec. 10.1]). Namely, $H^{\alpha}(\mathbb{R}^{n})=\mathcal{B}_{p,k}$ if $p=2$ and $k(\xi)=\alpha(\langle\xi\rangle)$ for $\xi\in\mathbb{R}^{n}$. Note that in the Hilbert case $p=2$ the Hörmander spaces coincide with the spaces introduced by L. R. Volevich and B. P. Paneah [@VolevichPaneah65 § 2]. If $\alpha(t)\equiv t^{s}$ for some $s\in\mathbb{R}$, then $H^{\alpha}(\mathbb{R}^{n})=:H^{(s)}(\mathbb{R}^{n})$ is the Sobolev Hilbert space of order $s$.
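As a quick numerical sanity check of Example 3, the following Python sketch (our illustration, not part of the original text) verifies on a grid that $\alpha(t)=t^{s}(\ln t)^{r_{1}}$ satisfies the defining RO bound $c^{-1}\leq\alpha(\lambda t)/\alpha(t)\leq c$ for $\lambda\in[1,b]$; the sample values $s=1.5$, $r_1=-2$, $b=2$ are chosen arbitrarily.

```python
import math

def alpha(t, s=1.5, r1=-2.0):
    # Example 3 with k = 1: alpha(t) = t^s * (ln t)^{r1}, taken for t >= e
    return t**s * math.log(t)**r1

def ro_bounds(b=2.0, s=1.5, r1=-2.0):
    """Smallest and largest observed ratio alpha(lam*t)/alpha(t)
    over a grid t in [e, 1e6], lam in (1, b]."""
    ratios = []
    t = math.e
    while t < 1e6:
        for k in range(1, 21):
            lam = 1 + (b - 1) * k / 20
            ratios.append(alpha(lam * t, s, r1) / alpha(t, s, r1))
        t *= 1.7
    return min(ratios), max(ratios)

lo, hi = ro_bounds()
# Every ratio stays within fixed positive bounds, consistent with alpha
# belonging to RO with Matuszewska indices sigma_0 = sigma_1 = s.
```

The logarithmic factor only perturbs the ratio by a slowly varying amount, which is why both Matuszewska indices collapse to the power $s$.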
In general, $$\label{1f7} s_{0}<\sigma_{0}(\alpha)\leq\sigma_{1}(\alpha)<s_{1}\;\Rightarrow\; H^{(s_1)}(\mathbb{R}^{n})\hookrightarrow H^{\alpha}(\mathbb{R}^{n})\hookrightarrow H^{(s_0)}(\mathbb{R}^{n}),$$ with both embeddings being continuous and dense. Following [@MikhailetsMurach13UMJ3; @MikhailetsMurach14], we call the class of function spaces $\{H^{\alpha}(\mathbb{R}^{n}):\alpha\in\mathrm{RO}\}$ the extended Sobolev scale on $\mathbb{R}^{n}$. Its analogues for $\Omega$ and $\Gamma$ are constructed in the standard way (see [@MikhailetsMurach15ResMath1 p. 4] and [@MikhailetsMurach09Dop3 p. 30]). Let us give the corresponding definitions; now $n\geq2$. By definition, the linear space $H^{\alpha}(\Omega)$ consists of the restrictions to the domain $\Omega$ of all distributions $w\in H^{\alpha}(\mathbb{R}^{n})$ and is endowed with the norm $$\|v\|_{H^{\alpha}(\Omega)}:= \inf\bigl\{\,\|w\|_{H^{\alpha}(\mathbb{R}^{n})}:\, w\in H^{\alpha}(\mathbb{R}^{n}),\ w=v\;\,\mbox{in}\;\,\Omega\,\bigr\},$$ where $v\in H^{\alpha}(\Omega)$. The space $H^{\alpha}(\Omega)$ is Hilbert and separable with respect to this norm, and the set $C^{\infty}(\overline{\Omega})$ is dense in it. The linear space $H^{\alpha}(\Gamma)$ consists, roughly speaking, of all distributions on $\Gamma$ that yield elements of $H^{\alpha}(\mathbb{R}^{n-1})$ in local coordinates. Let us give a detailed definition. Choose arbitrarily a finite atlas from the $C^{\infty}$-structure on the manifold $\Gamma$ formed by local charts $\pi_j:\mathbb{R}^{n-1}\leftrightarrow \Gamma_{j}$, where $j=1,\ldots,\varkappa$. Here the open sets $\{\Gamma_{1},\ldots,\Gamma_{\varkappa}\}$ form a covering of the manifold $\Gamma$. Also choose functions $\chi_j\in C^{\infty}(\Gamma)$, where $j=1,\ldots,\varkappa$, that form a partition of unity on $\Gamma$ satisfying the condition $\mathrm{supp}\,\chi_j\subset \Gamma_j$.
By definition, the linear space $H^{\alpha}(\Gamma)$ consists of all distributions $h$ on $\Gamma$ such that $(\chi_{j}h)\circ\pi_{j}\in H^{\alpha}(\mathbb{R}^{n-1})$ for every $j\in\{1,\ldots,\varkappa\}$. Here $(\chi_{j}h)\circ\pi_{j}$ is the representation of the distribution $h$ in the local chart $\pi_{j}$. The space $H^{\alpha}(\Gamma)$ is endowed with the norm $$\|h\|_{H^{\alpha}(\Gamma)}:=\biggl(\sum_{j=1}^{\varkappa}\, \|(\chi_{j}h)\circ\pi_{j}\|_ {H^{\alpha}(\mathbb{R}^{n-1})}^{2}\biggr)^{1/2}.$$ It is Hilbert and separable with respect to this norm and, up to equivalence of norms, does not depend on the choice of the atlas and the partition of unity [@MikhailetsMurach09Dop3 p. 32]. The set $C^{\infty}(\Gamma)$ is dense in $H^{\alpha}(\Gamma)$. The function spaces just defined form the extended Sobolev scales $\{H^{\alpha}(\Omega):\alpha\in\mathrm{RO}\}$ and $\{H^{\alpha}(\Gamma):\alpha\in\mathrm{RO}\}$ on $\Omega$ and $\Gamma$, respectively. They contain the Hilbert scales of Sobolev spaces: if $\alpha(t)\equiv t^{s}$ for some $s\in\mathbb{R}$, then $H^{\alpha}(\Omega)=:H^{(s)}(\Omega)$ and $H^{\alpha}(\Gamma)=:H^{(s)}(\Gamma)$ are the Sobolev Hilbert spaces of order $s$. We note the following property of these scales, which follows from [@Hermander63 Theorems 2.2.2 and 2.2.3]. Let $\alpha,\eta\in\mathrm{RO}$ and $\Lambda\in\{\Omega,\Gamma\}$. The function $\alpha/\eta$ is bounded in a neighbourhood of infinity if and only if $H^{\eta}(\Lambda)\hookrightarrow H^\alpha(\Lambda)$. This embedding is continuous and dense. It is compact if and only if $\alpha(t)/\eta(t)\rightarrow0$ as $t\rightarrow\infty$. In particular, property holds if $\mathbb{R}^{n}$ is replaced in it by $\Omega$ or $\Gamma$, the embeddings then being compact and dense. **4. Main results.** Let us formulate our results on the properties of the elliptic boundary-value problem , in the Hörmander spaces $H^{\alpha}$ considered above.
For them, the regularity index will have the form $\alpha(t)\equiv\varphi(t)t^{s}$, where $\varphi\in\mathrm{RO}$ and $s\in\mathbb{R}$. In order not to indicate the argument $t$ in the exponent, we will use the function parameter $\varrho(t):=t$ of the argument $t\geq1$ and write $\alpha$ in the form $\varphi\varrho^s$. If $\varphi\in\mathrm{RO}$, then, of course, $\varphi\varrho^s\in\mathrm{RO}$ and $\sigma_j(\varphi\varrho^s)=\sigma_j(\varphi)+s$ for each $j\in\{0,1\}$. Denote by $N$ the linear space of all solutions $u\in C^{\infty}(\overline{\Omega})$ to the boundary-value problem , in the case where $f=0$ in $\Omega$ and every $g_{j}=0$ on $\Gamma$. Also denote by $N_{\star}$ the linear space of all solutions $$(v,w_{1},\ldots,w_{m-2q+1},h_{1},\ldots,h_{q})\in C^{\infty}(\overline{\Omega})\times(C^{\infty}(\Gamma))^{m-q+1}$$ to the formally adjoint boundary-value problem , in the case where $\omega=0$ in $\Omega$ and every $\theta_{k}=0$ on $\Gamma$. Since both problems are elliptic in $\Omega$, the spaces $N$ and $N_{\star}$ are finite-dimensional [@KozlovMazyaRossmann97 Corollary 4.1.1]. **Theorem 1.** *Let $\varphi\in\mathrm{RO}$ and $\sigma_0(\varphi)>m+1/2$. Then the mapping extends uniquely (by continuity) to a bounded operator $$\label{1f8} (A,B):H^{\varphi}(\Omega)\rightarrow H^{\varphi\varrho^{-2q}}(\Omega)\oplus\bigoplus_{j=1}^{q} H^{\varphi\varrho^{-m_j-1/2}}(\Gamma)=: \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma).$$ This operator is Fredholm. Its kernel equals $N$, and its range consists of all vectors $(f,g_1,\ldots,g_q)\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$ such that $$\label{1f9} \begin{gathered} (f,v)_\Omega+\sum_{j=1}^{m-2q+1}(D_{\nu}^{j-1}f,w_{j})_{\Gamma}+ \sum_{j=1}^{q}(g_j,h_{j})_{\Gamma}=0 \\ \mbox{for all} \quad (v,w_{1},\ldots,w_{m-2q+1},h_{1},\ldots,h_{q})\in N_{\star}.
\end{gathered}$$ The index of the operator equals $\dim N-\dim N_{\star}$ and does not depend on $\varphi$.* As before, in formula the notations $(\cdot,\cdot)_{\Omega}$ and $(\cdot,\cdot)_{\Gamma}$ stand for the inner products in the Hilbert spaces $L_{2}(\Omega)$ and $L_{2}(\Gamma)$, respectively. Here, according to Proposition 4 given below in Sec. 6, for every function $f\in H^{\varphi\varrho^{-2q}}(\Omega)$, where $\sigma_0(\varphi)>m+1/2$, the images $$D_{\nu}^{j-1}f\in H^{\varphi\varrho^{-2q-j+1/2}}(\Gamma)\subset L_{2}(\Gamma)$$ under the boundary operator $D_{\nu}^{j-1}$ of order $j-1\leq m-2q$ are well defined. In connection with Theorem 1, recall that a bounded linear operator $T:E_{1}\rightarrow E_{2}$, where $E_{1}$ and $E_{2}$ are Banach spaces, is called Fredholm if its kernel $\ker T$ and cokernel $E_{2}/T(E_{1})$ are finite-dimensional. If this operator is Fredholm, then its range is closed in the space $E_{2}$ (see, e.g., [@Hermander85 Lemma 19.1.1]) and it has the finite index $$\mathrm{ind}\,T:=\dim\ker T-\dim(E_{2}/T(E_{1})).$$ In particular, for the elliptic boundary-value problem from Example 2 one verifies directly that $\dim N=\dim N_{\star}=3$, and therefore the index of the operator equals zero. Note that the condition $\sigma_0(\varphi)>m+1/2$ in Theorem 1 cannot be discarded or weakened. In particular, if $\varphi(t)\equiv t^{s}$ for some real $s\leq m_{j}+1/2$ and integer $j\in\{1,\ldots,q\}$, then the mapping $u\mapsto B_{j}u$, where $u\in C^{\infty}(\overline{\Omega})$, cannot be extended to a continuous linear operator from the Sobolev space $H^{(s)}(\Omega)$ to the linear topological space $\mathcal{D}'(\Gamma)$ of all distributions on $\Gamma$ (see, e.g., [@MikhailetsMurach14 Remark 3.5]). In the case where $N=\{0\}$ and $N_{\star}=\{0\}$, the operator realizes an isomorphism between the spaces $H^{\varphi}(\Omega)$ and $\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$. This follows from Theorem 1 and the Banach inverse operator theorem.
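For illustration, we spell out what Theorem 1 asserts for the problem of Example 2 (a sketch added here; the identifications $2q=2$, $q=1$, $m=m_{1}=2$ follow from that example):

```latex
% Theorem 1 specialized to Example 2: \Delta u = f in \Omega, \partial_\nu^2 u = g on \Gamma.
% Here 2q = 2, q = 1, m = m_1 = 2, so the assumption reads \sigma_0(\varphi) > 5/2,
% and the bounded Fredholm operator acts as
(A,B)\colon H^{\varphi}(\Omega)\rightarrow
H^{\varphi\varrho^{-2}}(\Omega)\oplus H^{\varphi\varrho^{-5/2}}(\Gamma).
% Since m - 2q + 1 = 1, the solvability condition \eqref{1f9} becomes
(f,v)_{\Omega}+(f,w)_{\Gamma}+(g,h)_{\Gamma}=0
\quad\text{for all}\quad (v,w,h)\in N_{\star},
% and, as noted above, \dim N = \dim N_\star = 3, so the index of (A,B) is zero.
```

Note how the single boundary term $(f,w)_{\Gamma}$ is exactly the Vainberg–Grushin-type expression $\sum_{j=1}^{m-2q+1}(D_{\nu}^{j-1}f,w_{j})_{\Gamma}$ with $j=1$.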
In the general situation, the operator induces an isomorphism between certain subspaces of these spaces of finite codimension. It is convenient to construct these subspaces and the projectors onto them as follows. Consider the decomposition of the space $H^{\varphi}(\Omega)$, where $\sigma_0(\varphi)>0$, into the direct sum of subspaces $$\begin{gathered} \label{1f10} H^{\varphi}(\Omega)=N\dotplus\bigl\{u\in H^{\varphi}(\Omega):\,(u,w)_\Omega=0\;\;\mbox{for all} \;\;w\in N\bigr\}.\end{gathered}$$ This equality holds true because it is the restriction of the decomposition of the space $L_{2}(\Omega)$ into the orthogonal sum of the subspace $N$ and its complement. As to the decomposition of the space $\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$, we use the following result. **Lemma 1.** *There exists a finite-dimensional space $G\subset C^{\infty}(\overline{\Omega})\times(C^{\infty}(\Gamma))^{q}$ such that for every $\varphi\in\mathrm{RO}$ with $\sigma_0(\varphi)>m+1/2$ the space $\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$ admits the decomposition into the direct sum of subspaces $$\begin{gathered} \label{1f11} \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)=G\dotplus \bigl\{(f,g_1,\ldots,g_q)\in \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma): \mbox{condition \eqref{1f9} holds}\bigr\};\end{gathered}$$ moreover, $\dim G=\dim N_{\star}$.* Denote by $P$ and $Q$ the oblique projectors of the spaces $H^{\varphi}(\Omega)$ and $\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$, respectively, onto the second summands in the sums and parallel to the first summands. Of course, these projectors do not depend on $\varphi$. **Theorem 2.** *Let $\varphi\in\mathrm{RO}$ and $\sigma_0(\varphi)>m+1/2$. Then the restriction of the mapping to the subspace $P(H^{\varphi}(\Omega))$ is an isomorphism $$\label{1f12} (A,B):\,P(H^{\varphi}(\Omega))\leftrightarrow Q(\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)).$$* Let us investigate the properties of generalized solutions to the elliptic boundary-value problem , in Hörmander spaces. Recall the definition of such solutions.
Put $$H^{m+1/2+}(\Omega):= \bigcup_{\substack{\alpha\in\mathrm{RO}:\\\sigma_{0}(\alpha)>m+1/2}} H^{\alpha}(\Omega)=\bigcup_{s>m+1/2}H^{(s)}(\Omega);$$ here the last equality holds true in view of property . According to Theorem 1, for every function $u\in H^{m+1/2+}(\Omega)$ the vector $$(f,g):=(f,g_{1},\ldots,g_{q}):=(A,B)u \in L_{2}(\Omega)\times(L_{2}(\Gamma))^{q}$$ is well defined. We call the function $u$ a (strong) generalized solution to the boundary-value problem , with right-hand side $(f,g)$. **Theorem 3.** *Let the parameters $\varphi\in\mathrm{RO}$ and $\lambda\in\mathbb{R}$ satisfy the inequalities $\sigma_0(\varphi)>m+1/2$ and $0<\lambda<\sigma_0(\varphi)-m+1/2$, and let the functions $\chi,\eta\in C^{\infty}(\overline{\Omega})$ satisfy the condition $\eta=1$ in a neighbourhood of $\mathrm{supp}\,\chi$. Then there exists a number $c=c(\varphi,\lambda,\chi,\eta)>0$ such that every function $u\in H^{\varphi}(\Omega)$ satisfies the estimate $$\label{1f14} \|\chi u\|_{H^{\varphi}(\Omega)}\leq c\,\bigl(\|\eta(A, B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega, \Gamma)}+\|\eta u\|_{H^{\varphi\varrho^{-\lambda}}(\Omega)}\bigr).$$ Here $c$ does not depend on $u$.* ***Remark* 1.** In the case where $\chi=\eta=1$, inequality is a global a priori estimate for the generalized solution $u$ to the elliptic boundary-value problem , . In this case, the condition $\lambda<\sigma_0(\varphi)-m+1/2$ can be removed. In general, inequality is a local a priori estimate for the solution $u$. Indeed, for every nonempty open (in the topology of $\overline{\Omega}$) subset of the set $\overline{\Omega}$, one can choose functions $\chi,\eta$ so that they satisfy the condition of Theorem 3 and their supports lie in this subset. If $0<\lambda\leq1$, then one may take $\chi(A, B)u$ instead of $\eta(A, B)u$ in inequality . Let us investigate the regularity of generalized solutions to the elliptic boundary-value problem , . Let $V$ be an open set in $\mathbb{R}^{n}$ that has a nonempty intersection with the domain $\Omega$.
Put $\Omega_0:=\Omega\cap V$ and $\Gamma_{0}:=\Gamma\cap V$ (the case $\Gamma_{0}=\varnothing$ is possible). For an arbitrary parameter $\alpha\in\mathrm{RO}$, we introduce local analogues of the spaces $H^{\alpha}(\Omega)$ and $H^{\alpha}(\Gamma)$. By definition, the linear space $H^{\alpha}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$ consists of all distributions $u\in\mathcal{D}'(\Omega)$ such that $\chi u\in H^{\alpha}(\Omega)$ for every function $\chi\in C^{\infty}(\overline{\Omega})$ with $\mathrm{supp}\,\chi\subset\Omega_0\cup\Gamma_{0}$. Here, as usual, $\mathcal{D}'(\Omega)$ denotes the linear topological space of all distributions in $\Omega$. The topology in the linear space $H^{\alpha}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$ is given by the seminorms $u\mapsto\|\chi u\|_{H^{\alpha}(\Omega)}$, where $\chi$ is an arbitrary function from the definition of this space. Analogously, the linear space $H^{\alpha}_{\mathrm{loc}}(\Gamma_{0})$ consists of all distributions $h\in\nobreak\mathcal{D}'(\Gamma)$ such that $\chi h\in H^{\alpha}(\Gamma)$ for every function $\chi\in C^{\infty}(\Gamma)$ with $\mathrm{supp}\,\chi\subset\Gamma_{0}$. The topology in the linear space $H^{\alpha}_{\mathrm{loc}}(\Gamma_{0})$ is given by the seminorms $h\mapsto\|\chi h\|_{H^{\alpha}(\Gamma)}$, where $\chi$ is an arbitrary function from the definition of this space. **Theorem 4.** *Let a function $u\in H^{m+1/2+}(\Omega)$ be a generalized solution to the elliptic boundary-value problem , whose right-hand sides satisfy the condition $$\label{th4-cond} (f,g)\in H^{\varphi\varrho^{-2q}}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0}) \oplus\bigoplus_{j=1}^{q} H^{\varphi\varrho^{-m_{j}-1/2}}_{\mathrm{loc}}(\Gamma_{0})=: \mathcal{H}^{\varphi\varrho^{-2q}}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$$ for some function parameter $\varphi\in\mathrm{RO}$ such that $\sigma_{0}(\varphi)>m+1/2$. Then the solution $u\in H^{\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$.* Let us note important special cases of this theorem.
If $\Omega_{0}=\Omega$ and $\Gamma_{0}=\Gamma$, then the local spaces $H^{\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$ and $\mathcal{H}^{\varphi\varrho^{-2q}}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$ coincide with the spaces $H^{\varphi}(\Omega)$ and $\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$, respectively. Therefore, Theorem 4 asserts that the regularity of the generalized solution $u$ increases globally, that is, in the whole domain $\Omega$ up to its boundary $\Gamma$. If $\Gamma_{0}=\varnothing$ and $\Omega_{0}=\Omega$, then according to this theorem the regularity of the solution $u$ increases in neighbourhoods of all interior points of the closed domain $\overline\Omega$. Theorems 1 – 4 or their versions are known in the case of Sobolev spaces, when $\varphi(\cdot)\equiv1$; see, e.g., the fundamental paper by S. Agmon, A. Douglis, and L. Nirenberg [@AgmonDouglisNirenberg59 Sec. 5], the monographs by Ya. A. Roitberg [@Roitberg96 Chaps. 4, 7], V. A. Kozlov, V. G. Maz'ya, and J. Rossmann [@KozlovMazyaRossmann97 Chap. 4], and G. Eskin [@Eskin11 Chap. 7], and the survey by M. S. Agranovich [@Agranovich97 § 2]. Note that, apparently, B. R. Vainberg and V. V. Grushin [@VainbergGrushin67b § 4, formula (76)] were the first to observe that the description of the range of the operator $(A,B)$ must involve an expression of the form $$\sum_{j=1}^{m-2q+1}(D_{\nu}^{j-1}f,w_{j})_{\Gamma}.$$ We will prove Theorems 1 – 4 and Lemma 1 in Sec. 7, where we will also substantiate Remark 1. **5. Applications.** As an application of Hörmander spaces, we give sufficient conditions for the continuity of generalized derivatives (of a prescribed order) of solutions to the elliptic boundary-value problem , . These conditions are derived from Theorem 4 and Hörmander's embedding theorem [@Hermander63 Theorem 2.2.7]. For the extended Sobolev scale, the latter can be stated as follows: let $0\leq p\in\mathbb{Z}$ and $\varphi\in\mathrm{RO}$; then $$\label{v} \int\limits_1^{\infty} t^{2p+n-1}\varphi^{-2}(t)\,dt<\infty\;\;\Leftrightarrow\;\; H^\varphi(\Omega)\subset C^p(\overline{\Omega}),$$ the embedding being continuous; see
[@ZinchenkoMurach13UMJ11 Lemma 2] or [@MikhailetsMurach14 Proposition 2.6(vi)]. Note that in the Sobolev case, when $\varphi(t)\equiv t^{s}$ for some $s\in\mathbb{R}$, this property becomes the Sobolev embedding theorem: $$s>p+n/2\;\;\Leftrightarrow\;\; H^{(s)}(\Omega)\hookrightarrow C^p(\overline{\Omega}).$$ **Theorem 5.** *Let an integer $p\geq0$ be given. Suppose that a function $u\in H^{m+1/2+}(\Omega)$ is a generalized solution to the elliptic boundary-value problem , whose right-hand sides satisfy the condition for some function parameter $\varphi\in\mathrm{RO}$ such that $\sigma_{0}(\varphi)>m+1/2$ and $$\label{1f15} \int\limits_1^{\infty}t^{2p+n-1}\varphi^{-2}(t)dt<\infty.$$ Then $u\in C^{p}(\Omega_{0}\cup\Gamma_{0})$.* ***Remark* 2.** Condition is sharp in Theorem 5. Namely, let $0\leq p\in\mathbb{Z}$, $\varphi\in\mathrm{RO}$, and $\sigma_{0}(\varphi)>m+1/2$; then the implication $$\label{implication} \bigl(u\in H^{m+1/2+}(\Omega)\;\,\mbox{and}\;\, (A,B)u\in\mathcal{H}^{\varphi\varrho^{-2q}}_ {\mathrm{loc}}(\Omega_{0},\Gamma_{0})\bigr)\;\Rightarrow\; u\in C^{p}(\Omega_{0}\cup\Gamma_{0})$$ implies that $\varphi$ satisfies condition . Let us state a sufficient condition under which the generalized solution $u$ to the boundary-value problem , is classical, that is, $u\in C^{2q}(\Omega)\cap C^{m}(U_{\sigma}\cup \Gamma)$ for some number $\sigma>0$, where $U_{\sigma}:=\{x\in\Omega:\mathrm{dist}(x,\Gamma)<\sigma\}$. If the solution $u$ to this problem is classical, then the left-hand sides of the problem are calculated by means of classical derivatives and are continuous functions on $\Omega$ and $\Gamma$, respectively.
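The convergence condition \eqref{1f15} is easy to probe numerically. The Python sketch below (our illustration, with sample parameters chosen for demonstration) examines $\varphi(t)=t^{p+n/2}(\ln t)^{r}$, the borderline family between the Sobolev and Hörmander regimes: the integrand $t^{2p+n-1}\varphi^{-2}(t)$ then reduces to $t^{-1}(\ln t)^{-2r}$, so the integral converges precisely when $2r>1$.

```python
import math

def partial_integral(r, upper, n_steps=200000):
    """Midpoint rule for the tail integral of condition (1f15) with
    phi(t) = t^{p+n/2} (ln t)^r, whose integrand is t^{-1} (ln t)^{-2r}.
    The substitution u = ln t turns it into the integral of u^{-2r}
    over [1, ln(upper)]."""
    a, b = 1.0, math.log(upper)
    h = (b - a) / n_steps
    return sum(((a + (k + 0.5) * h) ** (-2 * r)) * h for k in range(n_steps))

# r = 1: 2r > 1, the partial integrals increase toward the finite limit 1.
convergent = [partial_integral(1.0, 10.0**e) for e in (3, 6, 12)]
# r = 0.5: 2r = 1, the partial integrals grow without bound (like ln ln upper).
divergent = [partial_integral(0.5, 10.0**e) for e in (3, 6, 12)]
```

In the convergent case $u\in C^{p}$ is guaranteed by Theorem 5 even though the Sobolev condition $s>p+n/2$ fails for the pure power $t^{p+n/2}$, which is exactly the refinement the function parameter provides.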
**Theorem 6.** *Let a function $u\in H^{m+1/2+}(\Omega)$ be a generalized solution to the elliptic boundary-value problem , where $$\begin{gathered} \label{1f16} f\in H^{\varphi_1\varrho^{-2q}}_{\mathrm{loc}}(\Omega,\varnothing)\cap H^{\varphi_2\varrho^{-2q}}_{\mathrm{loc}}(U_{\sigma},\Gamma),\\ g_j\in H^{\varphi_2\varrho^{-m_j-1/2}}(\Gamma)\quad\mbox{for every}\quad j\in\{1,\ldots,q\} \label{1f17}\end{gathered}$$ for some number $\sigma>0$ and parameters $\varphi_1,\varphi_2\in\mathrm{RO}$ satisfying the conditions $\sigma_0(\varphi_1)>m+1/2$, $\sigma_0(\varphi_2)>m+1/2$, and $$\begin{gathered} \label{f18} \int\limits_1^{\infty}t^{2q+n-1}\varphi_{1}^{-2}(t)dt<\infty,\\ \int\limits_1^{\infty}t^{2m+n-1}\varphi_{2}^{-2}(t)dt<\infty. \label{f19}\end{gathered}$$ Then the solution $u$ is classical.* Theorems 5 and 6 and Remark 2 will be substantiated in Sec. 7. **6. Interpolation with a function parameter.** The Hörmander spaces that form the extended Sobolev scale can be obtained by interpolation with a function parameter of pairs of Sobolev Hilbert spaces. This fact will play a key role in the proof of Theorem 1. The method of interpolation with a function parameter of Hilbert spaces first appeared in the paper by C. Foiaş and J.-L. Lions [@FoiasLions61 p. 278]. It is a natural generalization of the classical interpolation method of J.-L. Lions [@LionsMagenes71 Chap. 1, Sec. 5] and S. G. Krein [@FunctionalAnalysis72 p. 253] to the case where a sufficiently general function, rather than a number, serves as the interpolation parameter. Let us give the definition of interpolation with a function parameter of pairs of Hilbert spaces and its properties needed in the sequel. We will follow the monograph [@MikhailetsMurach14 Sec. 1.1]. For our purposes, it suffices to restrict ourselves to the case of separable Hilbert spaces.
Let an ordered pair $X:=[X_{0},X_{1}]$ of separable complex Hilbert spaces $X_{0}$ and $X_{1}$ be given such that $X_{1}$ is a dense linear manifold in the space $X_{0}$ and there exists a number $c>0$ such that $\|w\|_{X_{0}}\leq c\,\|w\|_{X_{1}}$ for every $w\in X_{1}$ (briefly, the continuous and dense embedding $X_{1}\hookrightarrow X_{0}$ holds). We call such a pair $X$ admissible. For it, there exists a self-adjoint positive-definite operator $J$ in the Hilbert space $X_{0}$ with domain $X_{1}$ such that $\|Jw\|_{X_{0}}=\|w\|_{X_{1}}$ for every $w\in X_{1}$. The operator $J$ is called the generating operator for $X$ and is uniquely determined by the pair $X$. Denote by $\mathcal{B}$ the set of all Borel measurable functions $\psi:\nobreak(0,\infty)\rightarrow(0,\infty)$ that are bounded away from zero on every set $[r,\infty)$ and bounded on every segment $[a,b]$, where $r>0$ and $0<a<b<\infty$. Let $\psi\in\mathcal{B}$. In the space $X_{0}$, the operator $\psi(J)$, in general unbounded, is defined as a function of $J$ by means of the spectral theorem. Denote by $[X_{0},X_{1}]_\psi$ or, briefly, $X_{\psi}$ the domain of the operator $\psi(J)$ endowed with the inner product $$(w_1, w_2)_{X_\psi}:=(\psi(J)w_1,\psi(J)w_2)_{X_0}$$ and the corresponding norm $\|w\|_{X_\psi}=(w,w)_{X_\psi}^{1/2}$. The space $X_\psi$ is Hilbert and separable, and the continuous and dense embedding $X_\psi \hookrightarrow X_0$ holds. A function $\psi\in\mathcal{B}$ is called an interpolation parameter if for all admissible pairs $X=[X_0, X_1]$ and $Y=[Y_0, Y_1]$ of Hilbert spaces and for any linear mapping $T$ given on $X_0$ the following condition is satisfied: if for each $j\in\{0,1\}$ the restriction of the mapping $T$ to the space $X_{j}$ is a bounded operator $T:X_{j}\rightarrow Y_{j}$, then the restriction of the mapping $T$ to the space $X_\psi$ is also a bounded operator $T:X_{\psi}\rightarrow Y_{\psi}$.
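A minimal finite-dimensional sketch of this construction (our illustration, not from the text): take $X_{0}=\mathbb{C}^{n}$ with the Euclidean norm and let $X_{1}$ be the same space renormed by $\|w\|_{X_{1}}=\|Jw\|_{X_{0}}$ for a diagonal positive-definite matrix $J$; then $\|w\|_{X_{\psi}}=\|\psi(J)w\|_{X_{0}}$ is computed directly from the spectral calculus.

```python
import numpy as np

# Generating operator J: diagonal, positive definite; X_1 is C^3 renormed
# by ||w||_{X_1} = ||J w||_{X_0}, where X_0 carries the Euclidean norm.
J = np.diag([1.0, 4.0, 9.0])

def norm_psi(w, psi):
    """Norm of w in X_psi = dom(psi(J)), ||w||_{X_psi} = ||psi(J) w||_{X_0};
    psi(J) is formed by applying psi to the eigenvalues of J."""
    psi_J = np.diag(psi(np.diag(J)))
    return np.linalg.norm(psi_J @ w)

w = np.array([1.0, 1.0, 1.0])
n0 = norm_psi(w, lambda t: np.ones_like(t))  # psi = 1 recovers the X_0 norm
n1 = norm_psi(w, lambda t: t)                # psi(t) = t recovers the X_1 norm
nh = norm_psi(w, lambda t: np.sqrt(t))       # psi(t) = t^{1/2}: intermediate space
# By Cauchy-Schwarz, ||w||_{X_psi}^2 <= ||w||_{X_0} ||w||_{X_1} for psi(t) = t^{1/2},
# the finite-dimensional shadow of the interpolation inequality.
```

In infinite dimensions $J$ is unbounded, but the formula for $\|\cdot\|_{X_{\psi}}$ is the same, applied through the spectral theorem.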
In this case, we say that the space $X_\psi$ is obtained by interpolation with the function parameter $\psi$ of the pair $X$. A function $\psi\in\mathcal{B}$ is an interpolation parameter if and only if it is pseudoconcave in a neighbourhood of infinity, that is, equivalent there to some concave positive function. This fact follows from J. Peetre's theorem [@Peetre68] on the description of all interpolation functions of positive order. Let us state the above-mentioned interpolation property of the extended Sobolev scale. **Proposition 1.** *Let a function $\alpha\in\mathrm{RO}$ and real numbers $s_0$, $s_1$ be given such that $s_0<\sigma_0(\alpha)$ and $s_1>\sigma_1(\alpha)$. Put $$\label{1f20} \psi(t)= \begin{cases} \;t^{{-s_0}/{(s_1-s_0)}}\, \alpha\bigl(t^{1/{(s_1-s_0)}}\bigr)&\text{if}\quad t\geq1, \\ \;\alpha(1)&\text{if}\quad0<t<1. \end{cases}$$ Then the function $\psi\in\mathcal{B}$ is an interpolation parameter, and the following equality of spaces holds together with equivalence of norms in them: $$\bigl[H^{(s_0)}(\Lambda),H^{(s_1)}(\Lambda)\bigr]_{\psi}= H^{\alpha}(\Lambda),$$ where $\Lambda\in\{\mathbb{R}^{n},\Omega,\Gamma\}$. If $\Lambda=\mathbb{R}^{n}$, then the norms in these spaces are equal.* This proposition is proved in [@MikhailetsMurach14 Theorems 2.19 and 2.22] for $\Lambda\in\{\mathbb{R}^{n},\Gamma\}$ and in [@MikhailetsMurach15ResMath1 Theorem 5.1] for $\Lambda=\Omega$. Note that the extended Sobolev scale is closed with respect to interpolation with a function parameter [@MikhailetsMurach14 Theorem 2.18] and coincides (up to equivalence of norms) with the class of all Hilbert spaces that are interpolation spaces for pairs of Sobolev Hilbert spaces [@MikhailetsMurach14 Theorem 2.24]. The latter property follows from V. I. Ovchinnikov's theorem [@Ovchinnikov84 Sec. 11.4] on the description of all Hilbert spaces that are interpolation spaces for a given pair of Hilbert spaces.
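As a concrete illustration of Proposition 1 (our example, not from the original text), take $\alpha(t)=t^{\sigma}(1+\ln t)$, for which $\sigma_{0}(\alpha)=\sigma_{1}(\alpha)=\sigma$, and any $s_{0}<\sigma<s_{1}$; then formula \eqref{1f20} evaluates explicitly:

```latex
% Formula \eqref{1f20} for alpha(t) = t^sigma (1 + ln t), s_0 < sigma < s_1:
\psi(t)=t^{-s_{0}/(s_{1}-s_{0})}\,
\alpha\bigl(t^{1/(s_{1}-s_{0})}\bigr)
= t^{(\sigma-s_{0})/(s_{1}-s_{0})}
\Bigl(1+\frac{\ln t}{s_{1}-s_{0}}\Bigr),
\qquad t\geq1,
% a power with exponent in (0,1) times a slowly varying factor, hence
% pseudoconcave near infinity, i.e. an interpolation parameter.
```

Interpolating the Sobolev pair $[H^{(s_{0})}(\Lambda),H^{(s_{1})}(\Lambda)]$ with this $\psi$ yields the Hörmander space $H^{\alpha}(\Lambda)$, which is strictly finer than any single Sobolev space of order $\sigma$.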
Recall that the property of a Hilbert space $H$ to be an interpolation space for an admissible pair $X=[X_0,X_1]$ means the following: the continuous embeddings $X_1\hookrightarrow H\hookrightarrow X_0$ hold, and every linear operator bounded on each of the spaces $X_0$ and $X_1$ is also bounded on $H$. Let us state two general properties of interpolation [@MikhailetsMurach14 Theorems 1.7 and 1.5] that will be used in our proofs. **Proposition 2.** *Let two admissible pairs $X=[X_0,X_1]$ and $Y=[Y_0,Y_1]$ of Hilbert spaces be given. Suppose, in addition, that a linear mapping $T$ is given on $X_0$ such that its restrictions to the spaces $X_j$, where $j=0,1$, are bounded Fredholm operators $T:X_j\rightarrow Y_j$ that have a common kernel and the same index. Then for an arbitrary interpolation parameter $\psi\in\mathcal{B}$ the bounded operator $T:X_\psi\rightarrow Y_\psi$ is Fredholm with the same kernel and index, and its range equals $Y_\psi\cap T(X_0)$.* **Proposition 3.** *Let a finite number of admissible pairs $[X_{0}^{(j)},X_{1}^{(j)}]$ of Hilbert spaces be given, where $j=1,\ldots,q$. Then for an arbitrary function $\psi\in\mathcal{B}$ the following equality of spaces holds together with equality of norms in them: $$\biggl[\,\bigoplus_{j=1}^{q}X_{0}^{(j)},\, \bigoplus_{j=1}^{q}X_{1}^{(j)}\biggr]_{\psi}=\, \bigoplus_{j=1}^{q}\bigl[X_{0}^{(j)},\,X_{1}^{(j)}\bigr]_{\psi}.$$* Linear differential operators with smooth coefficients are bounded on pairs of appropriate Hörmander spaces. Namely, the following result holds true. **Proposition 4.** *$\mathrm{(i)}$ Let $L$ be a linear differential expression of order $l\geq0$ on $\overline{\Omega}$ with coefficients of class $C^{\infty}(\overline{\Omega})$.
Then the mapping $u\mapsto Lu$, where $u\in C^{\infty}(\overline{\Omega})$, extends uniquely (by continuity) to a bounded linear operator $$L:H^\alpha(\Omega)\rightarrow H^{\alpha\varrho^{-l}}(\Omega)$$ for every parameter $\alpha\in\mathrm{RO}$.* $\mathrm{(ii)}$ Let $K$ be a boundary linear differential expression of order $k\geq\nobreak0$ on the boundary $\Gamma$ with coefficients of class $C^{\infty}(\Gamma)$. Then the mapping $u\mapsto Ku$, where $u\in C^{\infty}(\overline{\Omega})$, extends uniquely (by continuity) to a bounded linear operator $$K:H^\alpha(\Omega)\rightarrow H^{\alpha\varrho^{-k-1/2}}(\Gamma)$$ for every parameter $\alpha\in\mathrm{RO}$ such that $\sigma_{0}(\alpha)>k+1/2$. In the Sobolev case, when $\alpha(t)\equiv t^{s}$, Proposition 4 is well known. From this, the case of an arbitrary $\alpha\in\mathrm{RO}$ is deduced by means of interpolation with a function parameter on the basis of Proposition 1. **7. Proofs.** Let us prove Theorems 1 – 6 and Lemma 1 and substantiate Remarks 1 and 2. ***Proof of Theorem* 1.** In the Sobolev case, when $\varphi=\varrho^{s}$ with real $s>m+1/2$, this theorem is known except for the indicated connection of the finite-dimensional space $N_{\star}$ with the formally adjoint problem , . In this form, Theorem 1 is contained in a result proved in the monograph by Ya. A. Roitberg [@Roitberg96 Theorem 4.1.3]. In full, but under the additional assumption $s\in\mathbb{Z}$, Theorem 1 is contained in a result established in the monograph by V. A. Kozlov, V. G. Maz'ya, and J. Rossmann [@KozlovMazyaRossmann97 Corollary 4.1.1]. Let us show that the conclusion of this theorem holds in full for fractional $s$ as well. According to [@Roitberg96 Theorem 4.1.3], the mapping extends by continuity to a bounded Fredholm operator $$\label{roit-oper} (A,B):H^{s,(m+1)}(\Omega)\rightarrow H^{s-2q,(m+1-2q)}(\Omega) \oplus\bigoplus_{j=1}^{q}H^{(s-m_j-1/2)}(\Gamma)=: \mathcal{Q}^{s-2q}(\Omega,\Gamma)$$ for arbitrary $s\in\mathbb{R}$.
Here $H^{s,(r)}(\Omega)$, where $s\in\mathbb{R}$ and $1\leq r\in\mathbb{Z}$, is the Sobolev Hilbert space modified in the sense of Roitberg [@Roitberg96 Sec. 2.1]. In particular, if $s\geq0$ and $s\notin\{1/2,\ldots,r-1/2\}$, then $H^{s,(r)}(\Omega)$ is, by definition, the completion of the space $C^{\infty}(\overline{\Omega})$ with respect to the norm $$\|u\|_{H^{s,(r)}(\Omega)}:= \biggl(\|u\|_{H^{(s)}(\Omega)}^{2}+ \sum_{k=1}^{r}\;\|(D_{\nu}^{k-1}u)\!\upharpoonright\!\Gamma\| _{H^{(s-k+1/2)}(\Gamma)}^{2}\biggr)^{1/2}.$$ Note that the continuous embedding $H^{s+\delta,(r)}(\Omega)\hookrightarrow H^{s,(r)}(\Omega)$ holds for $\delta>0$. Moreover, if $s>r-1/2$, then the spaces $H^{s,(r)}(\Omega)$ and $H^{(s)}(\Omega)$ are equal as completions of $C^{\infty}(\overline{\Omega})$ with respect to equivalent norms. Therefore, the operator , where $\varphi=\varrho^{s}$, and the operator coincide for $s>r-1/2$. According to the above-mentioned result [@Roitberg96 Theorem 4.1.3], the kernel of the operator coincides with $N$, and the range consists of all vectors $(f,g_1,\ldots,g_q)\in\mathcal{Q}^{s-2q}(\Omega,\Gamma)$ that satisfy condition , in which the role of $N_{\star}$ is played by some finite-dimensional space that lies in $C^{\infty}(\overline{\Omega})\times(C^{\infty}(\Gamma))^{m-q+1}$ and does not depend on $s$. This immediately implies the equality $$(A,B)(H^{s_{2},(m+1)}(\Omega))=\mathcal{Q}^{s_{2}-2q}(\Omega,\Gamma)\cap (A,B)(H^{s_{1},(m+1)}(\Omega))\quad\mbox{for}\quad s_{1}<s_{2}.$$ In particular, $$\label{proof-th1-a} (A,B)(H^{(s)}(\Omega))=\mathcal{Q}^{s-2q}(\Omega,\Gamma)\cap (A,B)(H^{m,(m+1)}(\Omega))\quad\mbox{for}\quad m+1/2<s\in\mathbb{R}.$$ According to [@KozlovMazyaRossmann97 Theorem 4.1.4], the space $(A,B)(H^{m,(m+1)}(\Omega))$ consists of all vectors $(f,g_1,\ldots,g_q)\in\mathcal{Q}^{m-2q}(\Omega,\Gamma)$ that satisfy condition , where $(\cdot,\cdot)_{\Gamma}$ is also understood as the extension by continuity of the inner product in $L_{2}(\Gamma)$.
Therefore, for every real $s>m+1/2$, the range $(A,B)(H^{(s)}(\Omega))$ of the operator , where $\varphi=\varrho^{s}$, is as claimed in Theorem 1. Thus, in the Sobolev case this theorem is substantiated. In the general situation, we will prove it by means of interpolation with a function parameter of pairs of certain Sobolev spaces. By assumption, $\varphi\in\mathrm{RO}$ and $\sigma_0(\varphi)>m+1/2$. Choose real numbers $l_{0}$ and $l_{1}$ such that $m+1/2<l_{0}<\sigma_{0}(\varphi)$ and $\sigma_{1}(\varphi)<l_{1}$. The mapping extends by continuity to the bounded Fredholm operators $$\label{01f21} (A,B):\,H^{(l_{i})}(\Omega)\rightarrow H^{(l_{i}-2q)}(\Omega)\oplus \bigoplus_{j=1}^{q} H^{(l_{i}-m_{j}-1/2)}(\Gamma)=: \mathcal{H}^{(l_{i}-2q)}(\Omega,\Gamma),\quad i\in\{0,1\},$$ acting in Sobolev spaces. These operators have the common kernel $N$ and the same index, equal to $\dim N-\dim N_{\star}$. Moreover, $$\begin{gathered} \label{01f22} (A,B)(H^{(l_{i})}(\Omega))= \bigl\{(f,g)\in\mathcal{H}^{(l_{i}-2q)}(\Omega,\Gamma): \mbox{condition \eqref{1f9} holds}\bigr\}.\end{gathered}$$ Define the function $\psi\in\mathcal{B}$ by formula , in which we put $\alpha:=\varphi$. This function is an interpolation parameter by Proposition 1. Therefore, by Proposition 2, the boundedness and Fredholm property of both operators imply the boundedness and Fredholm property of the operator $$\label{01f23} (A,B):\bigl[H^{(l_{0})}(\Omega),H^{(l_{1})}(\Omega)\bigr]_{\psi}\to \bigl[\mathcal{H}^{(l_{0}-2q)}(\Omega,\Gamma), \mathcal{H}^{(l_{1}-2q)}(\Omega,\Gamma)\bigr]_{\psi}.$$ It is the restriction of the operator with $i=0$. Let us show that  is the operator from Theorem 1.
By Propositions 1 and 3 we have the following equalities of spaces, together with the equivalence of the norms in them: $$\begin{gathered} \bigl[H^{(l_{0})}(\Omega),H^{(l_{1})}(\Omega)\bigr]_{\psi}= H^{\varphi}(\Omega),\\ \bigl[\mathcal{H}^{(l_{0}-2q)}(\Omega,\Gamma), \mathcal{H}^{(l_{1}-2q)}(\Omega,\Gamma)\bigr]_{\psi}= \bigl[H^{(l_{0}-2q)}(\Omega),H^{(l_{1}-2q)}(\Omega)\bigr]_{\psi}\oplus\\ \oplus\bigoplus_{j=1}^{q}\bigl[H^{(l_{0}-m_{j}-1/2)}(\Gamma), H^{(l_{1}-m_{j}-1/2)}(\Gamma)\bigr]_{\psi}= \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma).\end{gathered}$$ Hence the bounded Fredholm operator  acts in the pair of spaces . Since this operator is the extension by continuity of the mapping , it is the operator . By Proposition 2, the kernel of this operator and its index coincide with the common kernel $N$ and the common index $\dim N-\dim N_{\star}$ of the operators . Moreover, the range of the operator  equals $$\begin{gathered} (A,B)(H^{\varphi}(\Omega))= \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)\cap (A,B)(H^{(l_{0})}(\Omega))=\\ =\bigl\{(f,g)\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma):\, \mbox{condition \eqref{1f9} holds}\bigr\}.\end{gathered}$$ Here we have used the equality  and the embedding $$\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)\hookrightarrow \mathcal{H}^{(l_{0}-2q)}(\Omega,\Gamma),$$ which follows from property , since $l_{0}<\sigma_{0}(\varphi)$. Thus all the properties of the operator  stated in Theorem 1 are proved. Theorem 1 is proved. ***Proof of Lemma* 1.** We use the bounded Fredholm operator  for $s:=m$. According to [@KozlovMazyaRossmann97 Theorem 4.1.4], the dimension of the cokernel of this operator equals $\dim N_{\star}$. The linear manifold $C^{\infty}(\overline{\Omega})\times(C^{\infty}(\Gamma))^{q}$ is dense in the space $\mathcal{Q}^{m-2q}(\Omega,\Gamma)$.
Hence, by [@HohbergKrein57 Lemma 2.1], there exists a finite-dimensional space $G\subset C^{\infty}(\overline{\Omega})\times(C^{\infty}(\Gamma))^{q}$ such that $$\label{proof-lemma-a} \mathcal{Q}^{m-2q}(\Omega,\Gamma)=G\dotplus (A,B)(H^{m,(m+1)}(\Omega)).$$ It follows that $\dim G=\dim N_{\star}$. Let the number $s$ satisfy $m+1/2<s<\sigma_{0}(\varphi)$. Then the continuous embeddings $$\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)\hookrightarrow \mathcal{H}^{(s-2q)}(\Omega,\Gamma)=\mathcal{Q}^{s-2q}(\Omega,\Gamma) \hookrightarrow\mathcal{Q}^{m-2q}(\Omega,\Gamma)$$ hold by  and by the fact that the spaces $H^{s-2q,(m+1-2q)}(\Omega)$ and $H^{(s-2q)}(\Omega)$ coincide up to equivalence of norms whenever $s-2q>m+1-2q-1/2$, as noted in the proof of Theorem 1. In addition, $G\subset\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$. Therefore the equality  implies the formula $$\label{proof-lemma-b} \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)=G\dotplus \bigl((A,B)(H^{m,(m+1)}(\Omega))\cap \mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)\bigr).$$ According to [@KozlovMazyaRossmann97 Theorem 4.1.4], the range $(A,B)(H^{m,(m+1)}(\Omega))$ of the operator , where $s=m$, consists of all vectors $(f,g_1,\ldots,g_q)\in\mathcal{Q}^{m-2q}(\Omega,\Gamma)$ that satisfy the condition . Hence the second summand in the sum  consists of all vectors $(f,g_1,\ldots,g_q)\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$ that satisfy . Thus  turns into the equality . In the latter, by the above reasoning, the space $G$ does not depend on $s$. Lemma 1 is proved. ***Proof of Theorem* 2.** By Theorem 1, $N$ is the kernel and $Q(\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma))$ is the range of the operator . Hence the restriction of the mapping  to the space $P(H^{\varphi}(\Omega))$ is a bounded linear bijective operator. Consequently, it is an isomorphism by Banach's inverse operator theorem. Theorem 2 is proved.
***Proof of Theorem* 3.** In the case $\chi=\eta=1$ this theorem follows from the finite dimensionality of the kernel and the closedness of the range of the operator , proved in Theorem 1, and from the compactness of the embedding $H^{\varphi}(\Omega)\hookrightarrow H^{\varphi\varrho^{-\lambda}}(\Omega)$. This is asserted by Peetre's lemma [@Peetre61 Lemma 3]. In this case $\lambda$ is an arbitrary positive number. Thus there exists a number $\tilde{c}=\tilde{c}(\varphi,\lambda)>0$ such that every function $v\in H^{\varphi}(\Omega)$ satisfies the global a priori estimate $$\label{global-estimate} \|v\|_{H^{\varphi}(\Omega)}\leq\tilde{c}\, \bigl(\|(A, B)v\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}+ \|v\|_{H^{\varphi\varrho^{-\lambda}}(\Omega)}\bigr).$$ Let us derive Theorem 3 from this estimate for $\lambda=1$. First note that the inequality $\lambda<\sigma_0(\varphi)-m+1/2$ imposed in the hypothesis of this theorem holds for $\lambda=1$. Choose an arbitrary function $u\in H^{\varphi}(\Omega)$, and let the functions $\chi,\eta\in C^{\infty}(\overline{\Omega})$ be as in the hypothesis of Theorem 3. Taking $v:=\chi u\in H^{\varphi}(\Omega)$ and $\lambda:=1$ in the estimate , we write $$\label{1f29} \|\chi u\|_{H^{\varphi}(\Omega)}\leq\tilde{c}\, \bigl(\|(A, B)(\chi u)\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega, \Gamma)}+\|\chi u\|_{H^{\varphi\varrho^{-1}}(\Omega)}\bigr).$$ Commuting the operator of multiplication by $\chi$ with the differential operators $A$ and $B_{1},\ldots,B_{q}$, we obtain the equality $$(A,B)(\chi u)=(A,B)(\chi\eta u)=\chi(A,B)(\eta u)+(A',B')(\eta u)= \chi(A,B)u+(A',B')(\eta u).$$ Here $A'$ is a certain linear differential operator on $\overline{\Omega}$ of order $\mathrm{ord}\,A'\leq 2q-1$, and $B':=(B_{1}',\ldots,B_{q}')$ is a collection of certain boundary linear differential operators on $\Gamma$ whose orders satisfy $\mathrm{ord}\,B_{j}'\leq m_{j}-1$ for every $j\in\{1,\ldots,q\}$. All the coefficients of the operators $A'$ and $B_{j}'$ belong to $C^{\infty}(\overline{\Omega})$ and $C^{\infty}(\Gamma)$, respectively.
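To illustrate the structure of the commuted terms (a minimal example of our own, not part of the original argument): taking $A=-\Delta$, so that $q=1$, the Leibniz rule gives

```latex
A(\chi u) = -\Delta(\chi u)
          = -\chi\,\Delta u - 2\,\nabla\chi\cdot\nabla u - (\Delta\chi)\,u
          = \chi A u + A'u,
\qquad
A'u := -2\,\nabla\chi\cdot\nabla u - (\Delta\chi)\,u,
```

so that $\mathrm{ord}\,A'=1=2q-1$ and the coefficients of $A'$ are smooth. Since these coefficients are supported in $\mathrm{supp}\,\chi$ and $\eta=1$ in a neighbourhood of $\mathrm{supp}\,\chi$, one may replace $u$ by $\eta u$ in $A'u$, which is the form used above.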
Thus, $$\label{1f30} (A,B)(\chi u)=\chi(A,B)u+(A',B')(\eta u).$$ By Proposition 4 we have the inequality $$\label{1f30b} \|(A',B')(\eta u)\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)} \leq c_{1}\|\eta u\|_{H^{\varphi\varrho^{-1}}(\Omega)}.$$ Here and below in this proof, $c_{1},\ldots,c_{7}$ denote certain positive numbers that do not depend on $u$. On the basis of formulas  – we obtain the inequalities $$\begin{gathered} \|\chi u\|_{H^{\varphi}(\Omega)}\leq \tilde{c}\, \bigl(\|\chi(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}+ \|(A',B')(\eta u)\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)} +\|\chi u\|_{H^{\varphi\varrho^{-1}}(\Omega)}\bigr)\leq\\ \leq\tilde{c}\, \|\chi(A, B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}+ \tilde{c}\,c_{1}\|\eta u\|_{H^{\varphi\varrho^{-1}}(\Omega)}+ \tilde{c}\,\|\chi u\|_{H^{\varphi\varrho^{-1}}(\Omega)}.\end{gathered}$$ Here, by Proposition 4, $$\begin{gathered} \|\chi u\|_{H^{\varphi\varrho^{-1}}(\Omega)}= \|\chi\eta u\|_{H^{\varphi\varrho^{-1}}(\Omega)}\leq c_{2}\|\eta u\|_{H^{\varphi\varrho^{-1}}(\Omega)}.\end{gathered}$$ Hence $$\label{proof-th3} \|\chi u\|_{H^{\varphi}(\Omega)}\leq c_{3} \bigl(\|\chi(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}+ \|\eta u\|_{H^{\varphi\varrho^{-1}}(\Omega)}\bigr).$$ This inequality implies the required estimate , since $$\|\chi(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}= \|\chi\eta(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}\leq c_{4}\|\eta(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}$$ by Proposition 4. Theorem 3 is thus proved in the case $\lambda=1$. Its conclusion then also holds for $0<\lambda<1$. Let us now prove the theorem in the case $$\label{proof-th3-a} 1<\lambda<\sigma_0(\varphi)-m+1/2.$$ For every real number $l\geq1$, denote by $\mathcal{P}_{l}$ the conclusion of Theorem 3 in the case $\lambda=l$.
Namely, $\mathcal{P}_{l}$ denotes the following statement: for arbitrary functions $\varphi\in\mathrm{RO}$ and $\chi,\eta\in C^{\infty}(\overline{\Omega})$ satisfying the conditions $\sigma_0(\varphi)>m+1/2$, $l<\sigma_0(\varphi)-m+1/2$, and $\eta=1$ in a neighbourhood of $\mathrm{supp}\,\chi$, there exists a number $c=c(\varphi,l,\chi,\eta)>0$ such that every function $u\in H^{\varphi}(\Omega)$ satisfies the inequality  with $\lambda=l$. The validity of the statement $\mathcal{P}_{1}$ has been proved above. Choose arbitrary real numbers $l\geq1$ and $\delta\in(0,1]$. Let us prove that $\mathcal{P}_{l}\Rightarrow\mathcal{P}_{l+\delta}$. Assume that the statement $\mathcal{P}_{l}$ is true. Let the functions $\varphi\in\mathrm{RO}$ and $\chi,\eta\in C^{\infty}(\overline{\Omega})$ satisfy the conditions $\sigma_0(\varphi)>m+1/2$, $l+\delta<\sigma_0(\varphi)-m+1/2$, and $\eta=1$ in a neighbourhood of $\mathrm{supp}\,\chi$. Then there exists a function $\eta_{1}\in C^{\infty}(\overline{\Omega})$ such that $\eta_{1}=1$ in a neighbourhood of $\mathrm{supp}\,\chi$ and $\eta=1$ in a neighbourhood of $\mathrm{supp}\,\eta_{1}$. By assumption, there exists a number $c_{5}>0$ such that every function $u\in H^{\varphi}(\Omega)$ satisfies the estimate $$\label{proof-th3-b} \|\chi u\|_{H^{\varphi}(\Omega)}\leq c_{5}\bigl (\|\eta_{1}(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)} +\|\eta_{1}u\|_{H^{\varphi\varrho^{-l}}(\Omega)}\bigr).$$ Since $\sigma_0(\varphi\varrho^{-l-\delta+1})>m+1/2$, the statement $\mathcal{P}_{1}$ yields the estimate $$\label{proof-th3-c} \begin{gathered} \|\eta_{1}u\|_{H^{\varphi\varrho^{-l}}(\Omega)}\leq \|\eta_{1}u\|_{H^{\varphi\varrho^{-l-\delta+1}}(\Omega)}\leq\\ \leq c_{6}\bigl(\|\eta(A,B)u\|_ {\mathcal{H}^{\varphi\varrho^{-l-\delta+1-2q}}(\Omega,\Gamma)} +\|\eta u\|_{H^{\varphi\varrho^{-l-\delta}}(\Omega)}\bigr). \end{gathered}$$ Moreover, $$\label{proof-th3-d} \|\eta_{1}(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}= \|\eta_{1}\eta(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)} \leq c_{7}\|\eta(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}.$$ On the basis of the estimates  – we write $$\|\chi u\|_{H^{\varphi}(\Omega)}\leq c_{5}c_{7} \|\eta(A,B)u\|_{\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)}+ c_{5}c_{6}\bigl(\|\eta(A,B)u\|_ {\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)} +\|\eta u\|_{H^{\varphi\varrho^{-l-\delta}}(\Omega)}\bigr),$$ i.e., we have obtained the inequality  with $\lambda=l+\delta$. The implication $\mathcal{P}_{l}\Rightarrow\mathcal{P}_{l+\delta}$ is justified. We can now prove Theorem 3 in the case . By what has already been proved, the chain of implications $$\mathcal{P}_{1}\Rightarrow\mathcal{P}_{2}\Rightarrow\ldots \Rightarrow\mathcal{P}_{[\lambda]}\Rightarrow \mathcal{P}_{\lambda}$$ is valid, where the statement $\mathcal{P}_{1}$ is true and $\mathcal{P}_{\lambda}$ is the conclusion of Theorem 3 in the case under consideration (as usual, $[\lambda]$ denotes the integer part of the number $\lambda$). Hence this conclusion is true as well. Theorem 3 is proved. In Remark 1, the second and the last sentences require justification. The second sentence is justified in the first paragraph of the proof of this theorem, while the last sentence is a direct consequence of the estimate . ***Proof of Theorem* 4.** We first justify this theorem in the case where $\Omega_{0}=\Omega$ and $\Gamma_{0}=\Gamma$. By assumption, $u\in H^{(s)}(\Omega)$ for some real number $s$ such that $m+1/2<s<\sigma_{0}(\varphi)$, and $(f,g)=(A,B)u\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$. Hence $$(f,g)\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma) \cap(A,B)(H^{(s)}(\Omega))=(A,B)(H^{\varphi}(\Omega));$$ here the equality holds by Theorem 1. Consequently, along with the condition $(A,B)u=(f,g)$, the equality $(A,B)v=(f,g)$ holds for some $v\in H^{\varphi}(\Omega)$.
Hence $(A,B)(u-v)=0$, which by Theorem 1 implies the inclusion $w:=u-v\in N\subset C^{\infty}(\overline{\Omega})$. Therefore $u=v+w\in H^{\varphi}(\Omega)$. In the case under consideration, Theorem 4 is proved. Let us prove it in the general case. We argue following the scheme given in [@AnopKasirenko16MFAT4 p. 308]. Choose an arbitrary open set $V_{1}\subset\mathbb{R}^{n}$ such that $\overline{V_{1}}\subset V$ and $\Omega\cap V_{1}\neq\varnothing$, and put $\Omega_1:=\Omega\cap V_1$ and $\Gamma_{1}:=\Gamma\cap V_1$. Let us prove that $u\in H^{\varphi}_{\mathrm{loc}}(\Omega_{1},\Gamma_{1})$. Let the functions $\chi,\eta\in C^{\infty}(\overline{\Omega})$ be such that their supports lie in $\Omega_{0}\cup\Gamma_{0}$, with $\chi=1$ in a neighbourhood of $\Omega_{1}\cup\Gamma_{1}$ and $\eta=1$ on $\mathrm{supp}\,\chi$. By assumption, $u\in H^{(s)}(\Omega)$ for some $s\in\mathbb{R}$ such that $m+1/2<s<\sigma_{0}(\varphi)$, and $(A,B)u=(f,g)\in \mathcal{H}^{\varphi\varrho^{-2q}}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$. Hence $$(A,B)(\chi u)=\eta(A,B)(\chi u)=\eta(f,g)-\eta(A,B)((1-\chi)u).$$ Using the projector $P_{\star}$ from Theorem 2, we write $(A,B)(\chi u)=P_{\star}(\eta(f,g))+F$, where $$F:=(1-P_{\star})(\eta(f,g))-\eta(A,B)((1-\chi)u).$$ Since $P_{\star}(\eta(f,g))\in P_{\star}(\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma))$, we have $$F=(A,B)(\chi u)-P_{\star}(\eta(f,g))\in P_{\star}\bigl(\mathcal{H}^{\varrho^{s-2q}}(\Omega,\Gamma)\bigr).$$ By Theorem 2, there exist functions $u_1\in H^{\varphi}(\Omega)$ and $u_2\in H^{(s)}(\Omega)$ such that $(A,B)u_1=P_{\star}(\eta(f,g))$ and $(A,B)u_2=F$. Then $(A,B)(\chi u-u_1-u_2)=0$, whence $$w:=\chi u-u_1-u_2\in N\subset C^{\infty}(\overline{\Omega})$$ by Theorem 1. Note that $F\in\mathcal{H}^{\varrho^{l-2q}}_{\mathrm{loc}}(\Omega_{1},\Gamma_{1})$ for every real number $l>\sigma_{1}(\varphi)$, since $(1-P_{\star})(\eta(f,g))\in N_{\star}$ and $\eta(A,B)((1-\chi)u)=0$ on $\Omega_{1}\cup\Gamma_{1}$.
Hence $$u_2\in H_{\mathrm{loc}}^{\varrho^l}(\Omega_{1},\Gamma_{1})\subset H^{\varphi}_{\mathrm{loc}}(\Omega_{1},\Gamma_{1})$$ by the theorem on the local increase in regularity of solutions of elliptic boundary-value problems in Sobolev spaces (see, e.g., [@Roitberg96 Theorem 7.2.1]). Thus, $$\chi u=u_1+u_2+w\in H^{\varphi}_{\mathrm{loc}}(\Omega_{1},\Gamma_{1}).$$ Consequently, $\zeta u=\zeta\chi u\in H^{\varphi}(\Omega)$ for every function $\zeta\in C^{\infty}(\overline{\Omega})$ satisfying $\mathrm{supp}\,\zeta\subset\Omega_{1}\cup\Gamma_{1}$; that is, $u\in H^{\varphi}_{\mathrm{loc}}(\Omega_{1},\Gamma_{1})$. Now $u\in H^{\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$ by the choice made of the set $V_{1}$. Theorem 4 is proved. ***Proof of Theorem* 5.** Choose arbitrarily a point $x\in\Omega_{0}\cup\Gamma_{0}$ and a function $\chi\in C^{\infty}(\overline{\Omega})$ such that $\mathrm{supp}\,\chi\subset\Omega_0\cup\Gamma_0$ and $\chi=1$ in some neighbourhood $V(x)$ of the point $x$. Theorem 4, condition , and the equivalence  imply the inclusion $\chi u\in H^{\varphi}(\Omega)\subset C^{p}(\overline{\Omega})$. Hence $u\in C^{p}(V(x))$. Since the point $x$ was chosen arbitrarily, we conclude that $u\in C^{p}(\Omega_{0}\cup\Gamma_{0})$. Theorem 5 is proved. Let us justify Remark 2. Let $0\leq p\in\mathbb{Z}$, $\varphi\in\mathrm{RO}$, and $\sigma_{0}(\varphi)>m+1/2$. Assume that the implication  is true. Let $V$ be an open ball such that $\overline{V}\subset\Omega_{0}$. Choose an arbitrary function $v\in H^{\varphi}(V)$. By the definition of the space $H^{\varphi}(V)$, we have $v=u\!\upharpoonright\!V$ for some $u\in H^{\varphi}(\Omega)$. Since $(A,B)u\in\mathcal{H}^{\varphi\varrho^{-2q}}(\Omega,\Gamma)$, by  we obtain the inclusion $u\in C^{p}(\Omega_{0}\cup\Gamma_{0})$. Hence $v\in C^{p}(\overline{V})$. Thus $H^{\varphi}(V)\subset C^{p}(\overline{V})$, which implies the condition  by . Remark 2 is justified.
***Proof of Theorem* 6.** The inclusion $u\in C^{2q}(\Omega)$ follows from conditions  and  by Theorem 5, in which we put $p:=2q$, $\varphi:=\varphi_1$, $\Omega_{0}:=\Omega$, and $\Gamma_{0}:=\varnothing$. The inclusion $u\in C^{m}(U_{\sigma}\cup \Gamma)$ follows from conditions , , and  by the same theorem, in which we take $\Omega_{0}:=U_{\sigma}$ and $\Gamma_{0}:=\Gamma$. Thus the solution $u$ is classical. Theorem 6 is proved. [99]{} *Agranovich M. S.* Elliptic boundary problems // Encycl. Math. Sci. Vol. 79. Partial differential equations, IX. – Berlin: Springer, 1997. – P. 1 – 144. *Функциональный анализ* / Edited by S. G. Krein. – Москва: Наука, 1972. – 544 с. *Hörmander L.* Linear partial differential operators. – Berlin: Springer, 1963. – 285 p. (Russian translation: *Хермандер Л.* Линейные дифференциальные операторы с частными производными. – Москва: Мир, 1965. – 380 с.) *Lions J.-L., Magenes E.* Problèmes aux limites non homogènes et applications. Vol. 1. – Paris: Dunod, 1968. – 372 p. (Russian translation: *Лионс Ж.-Л., Мадженес Э.* Неоднородные граничные задачи и их приложения. – Москва: Мир, 1971. – 372 с.) *Triebel H.* Interpolation theory, function spaces, differential operators (2-nd edn). – Heidelberg: Johann Ambrosius Barth, 1995. – 532 p. (Russian edition: *Трибель Х.* Теория интерполяции, функциональные пространства, дифференциальные операторы. – М.: Мир, 1980. – 664 с.) *Hörmander L.* The analysis of linear partial differential operators. II: Differential operators with constant coefficients. – Berlin: Springer, 1983. – viii+391 p. (Russian translation: *Хермандер Л.* Анализ линейных дифференциальных операторов с частными производными. Т. 2. – Москва: Мир, 1986. – 456 с.) *Jacob N.* Pseudodifferential operators and Markov processes: In 3 volumes. – London: Imperial College Press, 2001, 2002, 2005. – xxii+493 p., xxii+453 p., xxviii+474 p. *Mikhailets V. A., Murach A. A.* Hörmander spaces, interpolation, and elliptic problems.
– Berlin, Boston: De Gruyter, 2014. – xii+297 p. (A Russian edition is available as arXiv:1106.3214.) *Nicola F., Rodino L.* Global Pseudodifferential Calculus on Euclidean spaces. – Basel: Birkhäuser, 2010. – x+306 p. *Paneah B.* The oblique derivative problem. The Poincaré problem. – Berlin: Wiley–VCH, 2000. – 348 p. *Triebel H.* The structure of functions. – Basel: Birkhäuser, 2001. – xii+425 p. *Mikhailets V. A., Murach A. A.* Elliptic operators in a refined scale of functional spaces // Ukrainian Math. J. – 2005. – **57**, № 5. – P. 817 – 825. *Mikhailets V. A., Murach A. A.* Refined scales of spaces and elliptic boundary-value problems. I // Ukrainian Math. J. – 2006. – **58**, № 2. – P. 244 – 262. *Mikhailets V. A., Murach A. A.* Refined scales of spaces and elliptic boundary-value problems. II // Ukrainian Math. J. – 2006. – **58**, № 3. – P. 398 – 417. *Mikhailets V. A., Murach A. A.* Refined scales of spaces and elliptic boundary-value problems. III // Ukrainian Math. J. – 2007. – **59**, № 5. – P. 744 – 765. *Mikhailets V. A., Murach A. A.* Regular elliptic boundary-value problem for homogeneous equation in two-sided refined scale of spaces // Ukrainian Math. J. – 2006. – **58**, № 11. – P. 1748 – 1767. *Mikhailets V. A., Murach A. A.* Elliptic operator with homogeneous regular boundary conditions in two-sided refined scale of spaces // Ukr. Math. Bull. – 2006. – **3**, № 4. – P. 529 – 560. *Mikhailets V. A., Murach A. A.* An elliptic boundary-value problem in a two-sided refined scale of spaces // Ukrainian Math. J. – 2008. – **60**, № 4. – P. 574 – 597. *Murach A. A.* Douglis-Nirenberg elliptic systems in the refined scale of spaces on a closed manifold // Methods Funct. Anal. Topology. – 2008. – **14**, № 2. – P. 142 – 158. *Mikhailets V. A., Murach A. A.* The refined Sobolev scale, interpolation, and elliptic problems // Banach J. Math. Anal. – 2012. – **6**, № 2. – P. 211 – 281. *Karamata J.* Sur certains “Tauberian theorems” de M. M.
Hardy et Littlewood // Mathematica (Cluj). – 1930. – **3**. – P. 33 – 48. *Seneta E.* Regularly varying functions. – Berlin: Springer, 1976. – 112 p. (Russian translation: *Сенета Е.* Правильно меняющиеся функции. – М.: Наука, 1985. – 144 с.) *Bingham N. H., Goldie C. M., Teugels J. L.* Regular variation. – Cambridge: Cambridge Univ. Press, 1989. – 512 p. *Murach A. A.* On elliptic systems in Hörmander spaces // Ukrainian Math. J. – 2009. – **61**, № 3. – P. 467 – 477. *Zinchenko T. N., Murach A. A.* Douglis–Nirenberg elliptic systems in Hörmander spaces // Ukrainian Math. J. – 2013. – **64**, № 11. – P. 1672 – 1687. *Zinchenko T. N., Murach A. A.* Petrovskii elliptic systems in the extended Sobolev scale // J. Math. Sci. (New York). – 2014. – **196**, № 5. – P. 721 – 732. *Anop A. V., Murach A. A.* Parameter-elliptic problems and interpolation with a function parameter // Methods Funct. Anal. Topology. – 2014. – **20**, № 2. – P. 103 – 116. *Anop A. V., Murach A. A.* Regular elliptic boundary-value problems in the extended Sobolev scale // Ukrainian Math. J. – 2014. – **66**, № 7. – P. 969 – 985. *Chepurukhina I. S., Murach A. A.* Elliptic boundary-value problems in the sense of Lawruk on Sobolev and Hörmander spaces // Ukrainian Math. J. – 2015. – **67**, № 5. – P. 764 – 784. *Anop A. V., Kasirenko T. M.* Elliptic boundary-value problems in Hörmander spaces // Methods Funct. Anal. Topology. – 2016. – **22**, № 4. – P. 295 – 310. *Los V., Mikhailets V. A., Murach A. A.* An isomorphism theorem for parabolic problems in Hörmander spaces and its applications // Communications on Pure and Applied Analysis. – 2017. – **16**, № 1. – P. 69 – 97. *Los V., Murach A.* Isomorphism theorems for some parabolic initial-boundary value problems in Hörmander spaces // Open Mathematics. – 2017. – **15**. – P. 57 – 76. *Avakumović V. G.* O jednom O-inverznom stavu // Rad Jugoslovenske Akad. Znatn. Umjetnosti. – 1936. – **254**. – P. 167 – 186. *Михайлец В. А., Мурач А.
А.* Об эллиптических операторах на замкнутом многообразии // Доп. НАН України. – 2009. – № 3. – С. 13–19. *Mikhailets V. A., Murach A. A.* Extended Sobolev scale and elliptic operators // Ukrainian Math. J. – 2013. – **65**, № 3. – P. 435 – 447. *Kozlov V. A., Maz’ya V. G., Rossmann J.* Elliptic boundary value problems in domains with point singularities. – Providence: Amer. Math. Soc., 1997. – 414 p. *Matuszewska W.* On a generalization of regularly increasing functions // Studia Math. – 1964. – **24**. – P. 271 – 279. *Чепурухіна І. С.* Еліптичні крайові задачі за Б. Лавруком у розширеній соболєвській шкалі // Диференціальні рівняння і суміжні питання. Зб-к праць Ін-ту математики НАН України. – Т. 12, № 2. – Київ: Ін-т математики НАН України, 2015. – С. 338 – 374. *Волевич Л. Р., Панеях Б. П.* Некоторые пространства обобщенных функций и теоремы вложения // Успехи мат. наук. – 1965. – **20**, № 1. – С. 3 – 74. (English translation: *Volevich L. R., Paneah B. P.* Certain spaces of generalized functions and embedding theorems // Russian Math. Surveys. – 1965. – **20**, № 1. – P. 1 – 73.) *Mikhailets V. A., Murach A. A.* Interpolation Hilbert spaces between Sobolev spaces // Results Math. – 2015. – **67**, № 1. – P. 135 – 152. *Hörmander L.* The analysis of linear partial differential operators. III: Pseudodifferential operators. – Berlin: Springer, 1985. – viii+525 p. (Russian translation: *Хермандер Л.* Анализ линейных дифференциальных операторов с частными производными. Т. 3. – Москва: Мир, 1987. – 696 с.) *Agmon S., Douglis A., Nirenberg L.* Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions. I // Comm. Pure Appl. Math. – 1959. – **12**, № 4. – P. 623 – 727. (Russian translation: *Агмон С., Дуглис А., Ниренберг Л.* Оценки вблизи границы решений эллиптических уравнений в частных производных при общих граничных условиях. I. – Москва: Изд. иностр. лит., 1962. – 206 с.) *Roitberg Ya.
A.* Elliptic boundary value problems in the spaces of distributions. – Dordrecht: Kluwer Acad. Publishers, 1996. – xii+415 p. *Eskin G.* Lectures on Linear Partial Differential Equations. – Providence, RI: Amer. Math. Soc., 2011. – 410 p. *Вайнберг Б. Р., Грушин В. В.* О равномерно неэллиптических задачах. II // Математический сборник. – 1967. – **73(115)**, № 4. – С. 126 – 154. (English translation: *Vainberg B. R., Grushin V. V.* Uniformly nonelliptic problems, II // Sb. Math. – 1967. – **2**. – P. 111 – 133.) *Foiaş C., Lions J.-L.* Sur certains théorèmes d’interpolation // Acta Scient. Math. Szeged. – 1961. – **22**, № 3–4. – P. 269 – 282. *Peetre J.* On interpolation functions. II // Acta Sci. Math. (Szeged). – 1968. – **29**, № 1–2. – P. 91 – 92. *Ovchinnikov V. I.* The methods of orbits in interpolation theory // Mathematical Reports **1**. – London: Harwood Academic Publishers, 1984. – P. 349 – 515. *Гохберг И. Ц., Крейн М. Г.* Основные положения о дефектных числах, корневых векторах и индексах линейных операторов // Успехи матем. наук. – 1957. – **12**, № 2. – С. 43 – 118. (English translation: *Gohberg I. C., Krein M. G.* The basic propositions on defect numbers, root numbers, and indices of linear operators // Amer. Math. Soc. Transl., Ser. 2. – 1960. – **13**. – P. 185 – 264.) *Peetre J.* Another approach to elliptic boundary problems // Commun. Pure and Appl. Math. – 1961. – **14**, № 4. – P. 711 – 731.
--- abstract: 'Spin-orbit coupling plays an important role in various properties of very different materials. Moreover, efforts are underway to control the degree and quality of spin-orbit coupling in materials, with a concomitant control of transport properties. We calculate the frequency-dependent optical conductivity in systems with both Rashba and Dresselhaus spin-orbit coupling. We find that when the linear Dresselhaus spin-orbit coupling is tuned to be equal to the Rashba spin-orbit coupling, the interband optical conductivity disappears. This is taken to be the signature of the recovery of SU(2) symmetry. The presence of the cubic Dresselhaus spin-orbit coupling modifies the dispersion relation of the charge carriers and the velocity operator. Thus the conductivity is modified, but the interband contribution remains suppressed at most, but not all, photon energies for a cubic coupling of reasonable magnitude. Hence, such a measurement can serve as a diagnostic probe of engineered spin-orbit coupling.' author: - Zhou Li$^1$ - 'F. Marsiglio$^{2}$' - 'J. P. Carbotte$^{1,3}$' title: Vanishing of interband light absorption in a persistent spin helix state --- Spin-orbit coupling in semiconductors [@wolf01] and at the surface of three-dimensional topological insulators [@Hasan; @Moore1; @Qi; @Bernevig; @Fu; @Chen1; @Li3], where protected metallic surface states exist, plays a crucial role in their fundamental physical properties. Similarly, pseudospin leads to novel properties in graphene [@Mac; @Novo2; @Zhang] and other two-dimensional membranes, such as single-layer $MoS_{2}$ [@Mak1; @Splendiani; @Lebegue; @Lee; @Li12; @Li13] and silicene [@Drum; @Aufray; @Stille; @Ezawa1; @Ezawa2]. In particular, $MoS_{2}$ has been discussed within the context of valleytronics, where the valley degree of freedom can be manipulated with the aim of encoding information in analogy to spintronics.
Spin-orbit coupling has also been realized in zincblende semiconductor quantum wells [@awschalom09; @Walser; @bernevig06] and in neutral atomic Bose-Einstein condensates [@lin11] at very low temperature [@bloch08]. ![image](Fig1.eps){height="6.2in" width="6.2in"} [Fig.1. Spin texture in the conduction band as a function of momentum $k_{x}/k_{0}$, $k_{y}/k_{0}$ for various values of Rashba ($\alpha_1$), Dresselhaus ($\beta_{1}$), and cubic Dresselhaus ($\beta_{3}$) spin-orbit coupling. In the case of purely Rashba coupling (upper left frame), the spin is locked in the direction perpendicular to momentum, while for linear Dresselhaus coupling (upper right frame) the y-component of spin is of opposite sign to that of its momentum. For the persistent spin helix state (lower left frame) all spins are locked in the $3\pi/4$ direction and oppositely directed on either side of this critical direction. The lower right frame shows the spin texture for a case with all three kinds of coupling. ]{} \[fig1\] [Fig.2. Band structure of the conduction and valence band (Eq. (\[eigenvalues\])) as a function of momentum $k_{x}/k_{0}$, $k_{y}/k_{0}$ for various values of Rashba ($\alpha_1$), Dresselhaus ($\beta_{1}$), and cubic Dresselhaus ($\beta_{3}$) spin-orbit coupling. The left two panels are for pure Rashba $\alpha_1=1.0, \beta_1=0.0, \beta_{3}=0.0$ (top panel) and for Rashba equal to Dresselhaus $\alpha_1=0.5, \beta_1=0.5, \beta_{3}=0.0$ (bottom panel). The right two panels are for $\alpha_1=0.4, \beta_1=0.4, \beta_{3}=0.3$ (top panel) and $\alpha_1=0.2, \beta_1=0.8, \beta_{3}=0.3$ (bottom panel). The dispersion curves are profoundly changed from the familiar Dirac cone of the pure Rashba case when $\beta_1$ and $\beta_{3}$ are switched on. In the contour plots, red refers to energy $0.2E_0$ and dark green refers to energy $-0.2E_0$. ]{} \[fig2\] ![image](Fig3.eps){height="6.2in" width="5.5in"} [Fig.3. The interband contribution to the longitudinal optical conductivity of Eq.
(\[Cond\]) for various values of $\alpha_{1}$ and $\beta_{1}$ as labeled, with $\beta_3$ set to zero. In the top frame the chemical potential was set at $\mu/E_{0}=0.2$ and in the bottom $\mu/E_{0}=-0.2$.]{} \[fig3\] ![image](Fig4.eps){height="6.5in" width="6.5in"} [Fig.4. Joint density of states $D(\omega)$ (top two panels) defined in Eq. (\[DOS\]), which involves the same transitions as does the interband conductivity (bottom two panels) of Eq. (\[Cond\]) but without the critical weighting $\frac{(V_{x}S_{2}+V_{y}S_{1})^{2}}{(S_{1}^{2}+S_{2}^{2})\omega}$. The left column is for positive chemical potential $\mu/E_{0}=0.2$ and the right column for $\mu/E_{0}=-0.2$.]{} \[fig4\] ![image](Fig5.eps){height="7.4in" width="4.2in"} [Fig.5. Color contour plot of the energy difference $2\sqrt{S_{1}^{2}+S_{2}^{2}}\equiv E_{+}-E_{-}$, as a function of momentum ($k_{x},k_{y}$) in units of $k_{0}$ for $\alpha_1=0.4, \beta_1=0.4, \beta_{3}=0.3$ (top panel) and $\alpha_1=0.2, \beta_1=0.8, \beta_{3}=0.3$ (bottom panel). ]{} \[fig5\] In some systems both Rashba [@rashba60] and Dresselhaus [@dresselhaus] spin-orbit couplings are manipulated, the former arising from an inversion asymmetry of the grown layer while the latter comes from the bulk crystal. In general, spin-orbit coupling leads to rotation of the spin of charge carriers as they change their momentum, because SU(2) symmetry is broken. In momentum space this has been observed by angle-resolved photoemission spectroscopy (ARPES) as the phenomenon of spin-momentum locking. In the special situation when the strengths of the Rashba and Dresselhaus spin-orbit couplings are tuned to be equal, SU(2) symmetry is recovered and a persistent spin helix state is found [@awschalom09; @Walser; @bernevig06]. This state is robust against any spin-independent scattering. However, it can be destroyed by the cubic Dresselhaus term, which is usually tuned to be negligible.
To describe these effects we consider a model Hamiltonian describing a free electron gas with kinetic energy given simply by ${\hbar ^{2}k^{2}}/(2m)$, which describes charge carriers with effective mass $m$. We also include spin-orbit coupling terms, with linear Rashba ($\alpha_1 $) and Dresselhaus ($\beta _{1}$) couplings, along with a cubic Dresselhaus ($\beta_{3}$) term. The Hamiltonian is $$\begin{aligned} \hat{H}_{0}=\frac{\hbar ^{2}k^{2}}{2m} \hat{I}+\alpha_1 (k_{y}\hat{\sigma} _{x}-k_{x}\hat{\sigma} _{y}) +\beta _{1}(k_{x}\hat{\sigma} _{x}-k_{y}\hat{\sigma} _{y})-\beta _{3}(k_{x}k_{y}^{2}\hat{\sigma} _{x}-k_{y}k_{x}^{2}\hat{\sigma} _{y}). \label{H0}\end{aligned}$$ Here $\hat{\sigma} _{x},\hat{\sigma} _{y}$ and $\hat{\sigma} _{z}$ are the Pauli matrices for spin (or pseudospin in a neutral atomic Bose-Einstein condensate) and $\hat{I}$ is the unit matrix. For units we use a typical wave vector $k_{0}\equiv m\alpha_{0}/\hbar^{2}$ with corresponding energy $E_{0}=m\alpha^{2}_{0}/\hbar^{2}$, where $\alpha_{0}$ is a representative spin-orbit coupling which has quite different values for semiconductors ($\alpha_{0}/\hbar\approx10^{5}m/s$, estimated from Ref. [@bernevig06]) and cold atoms ($\alpha_{0}/\hbar\approx0.1m/s$, estimated from Ref. [@lin11]). The mass of a cold atom is at least 1000 times that of an electron, and the wavelength of the laser used to trap the atoms is at least 1000 times larger (estimated from Ref. [@lin11]) than the lattice spacing in semiconductors. In this report we study the dynamic longitudinal optical conductivity of such a spin-orbit coupled 2D electron gas. We find that the interband optical absorption disappears when the Rashba coupling strength is tuned to be equal to the Dresselhaus coupling strength.
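As a numerical cross-check of the spectrum of Eq. (\[H0\]), the sketch below (our own construction, not from the paper: units with $\hbar=1$ and $m=1/2$ so the kinetic term is simply $k^{2}$, and $S_{1,2}$ taken as the coefficients of $\hat{\sigma}_{x,y}$ read off Eq. (\[H0\]) in our sign convention) builds the $2\times2$ Hamiltonian and compares exact diagonalization with the closed-form bands $E_{\pm}=\hbar^{2}k^{2}/2m\pm\sqrt{S_{1}^{2}+S_{2}^{2}}$:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

HBAR, M = 1.0, 0.5  # units in which hbar^2 k^2 / (2m) = k^2

def hamiltonian(kx, ky, a1, b1, b3):
    """2x2 Hamiltonian of Eq. (H0) at momentum (kx, ky)."""
    eps = HBAR**2 * (kx**2 + ky**2) / (2 * M)
    s1 = a1 * ky + b1 * kx - b3 * kx * ky**2    # coefficient of sigma_x
    s2 = -a1 * kx - b1 * ky + b3 * ky * kx**2   # coefficient of sigma_y
    return eps * I2 + s1 * sx + s2 * sy

def bands(kx, ky, a1, b1, b3):
    """Closed-form valence/conduction bands E_-/E_+ = eps(k) -/+ sqrt(S1^2+S2^2)."""
    eps = HBAR**2 * (kx**2 + ky**2) / (2 * M)
    s1 = a1 * ky + b1 * kx - b3 * kx * ky**2
    s2 = -a1 * kx - b1 * ky + b3 * ky * kx**2
    gap = np.sqrt(s1**2 + s2**2)
    return eps - gap, eps + gap

# compare with exact diagonalization on a grid of momenta
for kx in np.linspace(-2, 2, 9):
    for ky in np.linspace(-2, 2, 9):
        em, ep = bands(kx, ky, 0.4, 0.4, 0.3)
        ev = np.linalg.eigvalsh(hamiltonian(kx, ky, 0.4, 0.4, 0.3))
        assert np.allclose(ev, [em, ep])
print("closed-form bands match exact diagonalization")
```

Scanning such a grid for the parameter sets of Fig. 2 reproduces the strong warping of the dispersion once $\beta_1$ and $\beta_3$ are switched on.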
We discuss the effect of nonlinear (cubic) Dresselhaus coupling on the shape of the interband conductivity and the effect of the asymmetry between the conduction and valence band which results from a mass term in the dispersion curves. Results ======= We compute the optical conductivity (see Methods section) as a function of frequency, for various electron fillings and spin-orbit coupling strengths. In all our figures we will use a dimensionless definition of spin-orbit coupling; for example, the choice of values designated in the lower right frame of Fig. 1, $\alpha_1=0.2, \beta_1=0.3$, and $\beta_{3}=0.3$, really means $\alpha_1/\alpha_{0}=0.2,\beta_1/\alpha_{0}=0.3$, and $\beta_{3}k^{2}_{0}/\alpha_{0}=0.3$. In Fig. 1 we plot the spin direction in the conduction band as a function of momentum for several cases. The top left frame is for pure Rashba coupling, in which case spin is locked to be perpendicular to momentum [@Hasan] as has been verified in spin angle-resolved photoemission spectroscopy studies [@Lanzara1; @Lanzara2; @Xu; @Wang]. The top right frame gives results for pure linear Dresselhaus coupling (no cubic term $\beta_{3}=0$). The spin pattern is now quite different; the direction of the spin follows the mirror image of the momentum about the x-axis. The lower left frame for equal linear Rashba and Dresselhaus coupling is the most interesting to us here. All spins are locked in one direction, namely $\theta=3\pi/4$ with those in the bottom (upper) triangle pointing parallel (anti-parallel) to the $3\pi/4$ direction, respectively. This spin arrangement corresponds to the persistent spin helix state of Ref. [@awschalom09; @Walser; @bernevig06]. The condition $\alpha_1 = \beta_1$ and $\beta_{3}=0$ is a state of zero Berry phase [@Shen1] and was also characterized by Li [*et al.*]{} [@Shen] as a state in which the spin transverse “force" due to spin-orbit coupling cancels exactly. 
Finally, the right lower frame includes a contribution from the cubic Dresselhaus term of Eq. (\[H0\]) and shows a more complex spin arrangement. Spin textures have been the subject of many recent studies [@Lanzara1; @Lanzara2; @Xu; @Wang; @Khom]. In Fig. 2 we present results for the dispersion curves in the conduction and valence band $E_{+/-}(k)$ of Eq. (\[eigenvalues\]) as a function of momentum $k$. The two left panels are pure Rashba (top) and Rashba equal to Dresselhaus (bottom; see also Fig. 1 of Ref. [@Mars], where only the contour plot of the valence band is shown). The two right panels include the cubic Dresselhaus warping term, which profoundly affects the band structure. The optical conductivity is obtained through transitions from one electronic state to another. In general these can be divided into two categories: intraband transitions, involving states within the same band, and interband transitions. Here we focus on interband transitions; the interband optical conductivity is given by $$\sigma _{xx}(\omega )=\frac{e^{2}}{i\omega }\frac{1}{4\pi ^{2}} \int_{0}^{k_{cut}}kdkd\theta \frac{(V_{x}S_{2}+V_{y}S_{1})^{2}}{S_{1}^{2}+S_{2}^{2}} \biggl[\frac{f(E_{+})-f(E_{-})}{\hbar \omega -E_{+}+E_{-}+i\delta } - \frac{f(E_{+})-f(E_{-})}{\hbar \omega -E_{-}+E_{+}+i\delta }\biggr], \label{Cond}$$ where $f(x)=1/(e^{(x-\mu)/k_{B}T}+1)$ is the Fermi-Dirac distribution function with $\mu$ the chemical potential. For $\beta _{3}=0$ and $\beta _{1}=\alpha_1 $, we have a cancellation in the optical matrix element, $V_{x}S_{2}+V_{y}S_{1}=0 $; remarkably, the interband contribution vanishes. This result is central to our work and shows that in the persistent spin helix state the interband contribution to the dynamic longitudinal optical conductivity vanishes. This is the optical signature of the existence of the spin helix state, which exhibits remarkable properties. With $\beta _{3}=0$ the optical matrix element is $(\beta _{1}^{2} - \alpha_1^{2})k_{y}/\hbar$. 
Thus, pure Rashba or pure (linear) Dresselhaus coupling will both lead to exactly the same conductivity, although the states (and spin texture) involved differ by a phase factor of $\pi$. When they are both present in equal amounts this phase leads to a cancellation which reduces the interband transitions to zero, as the two contributions need to be added before the square is taken. Of course the joint density of states, widely used to discuss optical absorption processes, remains finite. It is given by $$D(\omega )=\frac{1}{4\pi ^{2}}\int_{0}^{k_{cut}}kdkd\theta \ [f(E_{+})-f(E_{-})] Im \biggl[\frac{1}{\hbar \omega -E_{+}+E_{-}+i\delta} - \frac{1}{\hbar \omega -E_{-}+E_{+}+i\delta }\biggr] \label{DOS}$$ and will be contrasted with the interband optical conductivity below. We first focus on the case $\beta_3 = 0$. The interband conductivity is shown in Fig. 3 as a function of frequency for positive (top frame) and negative (bottom frame) chemical potential ($\mu/E_0 = \pm 0.2$). It is clear that there is a considerable difference between the two cases, and there is also considerable variation with the degree of Rashba vs. Dresselhaus coupling. This will be discussed further below. Most important is that for equal amounts of Rashba and Dresselhaus coupling, the interband conductivity is identically zero for all frequencies. What is the impact of a finite value of $\beta_3$? In Fig. 4 we show both the joint density of states (top two panels) and the interband conductivity (bottom two panels) for non-zero $\beta_3$ for $\mu/E_0 = 0.2$ (left panels) and $\mu/E_0 = -0.2$ (right panels). Various combinations of $\alpha_1 $, $\beta_{1}$ and $\beta_{3}$ are shown as labeled on the figure. There is a striking asymmetry between positive and negative values of the chemical potential. 
This asymmetry has its origin in the quadratic term ${\hbar^{2}k^{2}}/{(2m)}$ of the Hamiltonian (\[H0\]), which adds positively to the energy in both the valence and conduction bands, while the Dirac-like contribution is negative ($s=-1$) and positive ($s=+1$), respectively \[see Eq. (\[eigenvalues\])\]. While the quadratic piece drops out of the energy denominator in Eq. (\[Cond\]), it remains in the Fermi factors $f(E_+)$ and $f(E_-)$. Several features of these curves are noteworthy. They all have van Hove singularities which can be traced to extrema in the energy difference $E_{+}-E_{-}=2\sqrt{S_{1}^{2}+S_{2}^{2}}$. Taking $\beta _{3}=0$ for simplicity, this energy becomes $2k\sqrt{\alpha_{1} ^{2}+\beta_{1}^{2}+2\alpha_1 \beta _1 \sin (2\theta )}$, which depends on the direction ($\theta$) of momentum $\mathbf{k}$ but has no minimum or maximum as a function of $|\mathbf{k}|=k$. To get an extremum one needs to have a non-zero cubic Dresselhaus term. This gives dispersion curves which flatten out with increasing values of $k$. The dependence of the energy $E_{+}-E_{-}$ on momentum is illustrated in Fig. 5, where we provide a color plot for this energy as a function of $k_{x}/k_{0}$ and $k_{y}/k_{0}$ for two sets of spin-orbit parameters $\alpha_1=0.4, \beta_1=0.4, \beta_{3}=0.3$ (top panel) and $\alpha_1=0.2, \beta_1=0.8, \beta_{3}=0.3$ (bottom panel). Note that the saddle points correspond to the most prominent van Hove singularities in the joint density of states (and conductivity) in Fig. 4. The van Hove singularities are at about $1.4E_0$ ($k_x=k_y$ in momentum space) in the top frame of Fig. 5 and at about $2E_0$ ($k_x=k_y$) and $0.9E_0$ ($k_x=-k_y$) in the bottom. Discussion ========== The optical conductivity is often characterized by the joint density of states, $D(\omega )$, which has a finite onset at small energies. This is well known in the graphene literature where interband transitions start exactly at a photon energy equal to twice the chemical potential. 
Here this still holds approximately in all the cases considered in Fig. 4 except for the solid red curve in the two left side frames. In this case $\alpha_1 =\beta _{1}=0.4$ and $\beta_{3}$ is non zero. If $\beta _{3}$ is small the energy $\sqrt{S_{1}^{2}+S_{2}^{2}}$ would be approximately equal to $\sqrt{2}k\alpha_{1}\sqrt{1+\sin 2\theta }$, which is zero for $\theta =3\pi/4$, the critical angle in the spin texture of the lower left frame of Fig. 1 for which all spins are locked in this direction. This means that only the quadratic term ${\hbar ^{2}k^{2}}/{(2m)}$ and cubic Dresselhaus term contribute to the dispersion curve in this direction and there is no linear (in $k$) graphene-like contribution. Thus, the onset of the interband optical transition no longer corresponds to $\omega =2\mu $. Considering the case of positive $\mu$, for the direction $\theta=3\pi/4$, $(k/k_{0})^{2}/2+\beta _{3}(k/k_{0})^{3}$ is the dominant contribution to the energy which is equal to $\mu/E_{0}$ and the minimum photon energy is now $2\beta _{3}(k/k_{0})^{3}$, which could be very small as is clear from the figure. For negative values of $\mu $ the onset is closer to $2|\mu|/E_{0}$ because in this case the momentum at which the chemical potential crosses the band dispersion is given by $(k/k_{0})^{2}/2-\alpha_1(k/k_{0})=-\mu/E_{0}$ (the cubic term is ignored because it is subdominant for small $k/k_{0}$ compared to the linear term). Now the photon energy onset will fall above $2|\mu|/E_{0}$, at a value dependent on $\alpha_1$. While the optical conductivity Eq. (\[Cond\]) requires a non-zero joint density of states Eq. (\[DOS\]), the additional weighting of $(V_{x}S_{2}+V_{y}S_{1})^{2}$ in $\sigma _{xx}(\omega )$ can introduce considerable changes to its $\omega$ dependence [@May] as we see in Fig. 3 and Fig. 4. In the top frame of Fig. 
3, $\beta _{3}=0$ and there are no van Hove singularities because the Dirac contribution to the dispersion curves simply increases with increasing $k$. The solid black and dashed red curves both reduce to the pure graphene case with onset exactly at $2\mu $ and flat background beyond. The dotted red curve for mixed linear Dresselhaus and Rashba is only slightly different. The onset is near but below $2\mu $ and the background has increased in amplitude. It is also no longer completely flat to high frequency; instead it has a kink near $\hbar \omega /E_{0}\approx 1.7$ after which it drops. The dash-dotted black curve for $\alpha_1 =0.4$ and $\beta _{1}=0.6$ has changed completely with background reduced to near zero but with a large peak corresponding to an onset which has shifted to a value much less than $2\mu$. Finally for $\alpha_1 =\beta _{1}$ the entire interband transition region is completely depleted as we know from Eq. (\[Cond\]). In Fig. 4 there is (non-zero) cubic Dresselhaus coupling present. The solid red curves, for which $\alpha_1 =\beta_{1}$ but with $\beta _{3}=0.3$ illustrate that the conductivity on the left (positive $\mu$) is non-zero, and $\beta_3 = 0$ is necessary for a vanishing interband conductivity at all photon energies. We see, however, that these transitions have been greatly reduced below what they would be in graphene for all photon energies except for a narrow absorption peak at $\omega $ much less than $2\mu$. For negative values of $\mu$, on the other hand, even with $\beta_3 \ne 0$ the conductivity is zero. The experimental observation of such a narrow low energy peak together with high energy van Hove singularities could be taken as a measure of nonzero $\beta_{3}$. It is interesting to compare these curves for the conductivity with the joint density of states (lower frames). The color and line types are the same for both panels. 
The onset energy as well as the energies of the van Hove singularities are unchanged in going from the joint density of states to the conductivity. Also, as is particularly evident in the dotted black and short dashed red curves, the $1/\omega$ factor in $\sigma _{xx}(\omega )$ leads to a nearly flat background for the conductivity as compared with a region of nearly linear rise in the density of states. This is true for both positive and negative values of $\mu$. In conclusion, we have calculated the interband longitudinal conductivity as a function of photon energy for the case of combined Rashba and Dresselhaus spin-orbit coupling. We have also considered the possibility of a cubic Dresselhaus contribution. We find that in the persistent spin helix state when the spins are locked at an angle of $3\pi/4$ independent of momentum, which arises when the linear Rashba coupling is equal to the linear Dresselhaus coupling, the interband optical transitions vanish and there is no finite energy absorption from these processes. Only the Drude intraband transitions will remain. When the cubic Dresselhaus term is nonzero the cancellation is no longer exact but we expect interband absorption to remain strongly depressed for photon energies above $2\mu$ as compared, for example, to the universal background value found in single layer graphene. We propose interband optics as a sensitive probe of the relative size of Rashba and Dresselhaus spin-orbit coupling as well as cubic corrections. 
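The van Hove energies quoted above can be cross-checked numerically. Along the diagonal $k_x=k_y$ (and, for the anisotropic parameter set, the antidiagonal $k_x=-k_y$) the splitting $E_+-E_-=2\sqrt{S_1^2+S_2^2}$ has a maximum only when $\beta_3\neq 0$, and this maximum is the saddle-point energy that sets the singularity. The sketch below (dimensionless units as before; the momentum grid is an arbitrary choice) recovers the quoted values of roughly $1.4E_0$, $2E_0$, and $0.9E_0$:

```python
import numpy as np

def gap(kx, ky, a1, b1, b3):
    """Band splitting E_+ - E_- = 2*sqrt(S1^2 + S2^2),
    with k in units of k0 and energies in units of E0."""
    S1 = a1 * ky + b1 * kx - b3 * kx * ky**2
    S2 = a1 * kx + b1 * ky - b3 * ky * kx**2
    return 2.0 * np.sqrt(S1**2 + S2**2)

def saddle_energy(a1, b1, b3, along="diag"):
    """Maximum of the splitting along kx = ky ('diag') or
    kx = -ky ('antidiag'); finite only because beta_3 > 0
    flattens the dispersion at large k."""
    q = np.linspace(0.0, 1.5, 20001)
    kx = q
    ky = q if along == "diag" else -q
    return gap(kx, ky, a1, b1, b3).max()

print(saddle_energy(0.4, 0.4, 0.3))              # ~ 1.4 E0 (Fig. 5, top)
print(saddle_energy(0.2, 0.8, 0.3))              # ~ 2 E0  (Fig. 5, bottom)
print(saddle_energy(0.2, 0.8, 0.3, "antidiag"))  # ~ 0.9 E0 (Fig. 5, bottom)
```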
Methods ======= The optical conductivity is given by $$\begin{aligned} \sigma _{xx}(\omega )=\frac{e^{2}}{i\omega }\frac{1}{4\pi ^{2}} \int_{0}^{k_{cut}}kdkd\theta T\sum_{l}Tr\langle \hat{v}_{x}\widehat{G}(\mathbf{k,}\omega _{l})\hat{v}_{x}\widehat{G}(\mathbf{k,}\omega _{n}+\omega _{l})\rangle _{i\omega _{n}\rightarrow \omega +i\delta }.\end{aligned}$$ Here $T$ is the temperature, $Tr$ denotes a trace over the $2\times2$ matrix, and $\omega _{n}=(2n+1)\pi T$ and $\omega _{l}=2l\pi T$ are the fermion and boson Matsubara frequencies, respectively, with $n$ and $l$ integers. To obtain the conductivity, which is a real-frequency quantity, we make an analytic continuation from imaginary $i\omega _{n}$ to $\omega + i\delta$, where $\omega$ is real and $\delta $ is an infinitesimal. The velocity operators $\hat{v}_{x}$ and $\hat{v}_{y}$ are given by $$\begin{aligned} \hat{v}_{x} &=&\frac{\partial H_{0}}{\hbar \partial k_{x}}=V_{I}\hat{I}+V_{x}\hat{\sigma} _{x}+V_{y}\hat{\sigma} _{y} \notag \\ \hat{v}_{y} &=&\frac{\partial H_{0}}{\hbar \partial k_{y}}=V_{I}^{\prime }\hat{I}+V_{x}^{\prime }\hat{\sigma} _{x}+V_{y}^{\prime }\hat{\sigma} _{y}.\end{aligned}$$ Here $V_{I} = \hbar k_{x}/{m}$, $V_{x}=(\beta _{1} - \beta _{3} k_{y}^{2})/\hbar$, $V_{y}= (-\alpha_1 + 2\beta _{3}k_{y}k_{x})/\hbar$, $V_{I}^{\prime} = \hbar k_{y}/{m}$, $V_{x}^{\prime }=(\alpha_1 - 2\beta _{3} k_{y} k_{x})/\hbar $ and $V_{y}^{\prime} = (-\beta _{1} + \beta_3 k_{x}^{2})/\hbar$. 
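The velocity components above can be cross-checked by differentiating the Hamiltonian numerically; the same sketch also confirms the interband matrix-element identity $V_{x}S_{2}+V_{y}S_{1}=(\beta_{1}^{2}-\alpha_1^{2})k_{y}/\hbar$ at $\beta_{3}=0$ used in the Results (dimensionless units with $\hbar=m=1$; the test momenta and couplings are arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def S12(kx, ky, a1, b1, b3):
    """S1 and S2 as defined in Methods."""
    S1 = a1 * ky + b1 * kx - b3 * kx * ky**2
    S2 = a1 * kx + b1 * ky - b3 * ky * kx**2
    return S1, S2

def velocities(kx, ky, a1, b1, b3):
    """Pauli components V_I, V_x, V_y of v_x = dH/dk_x (hbar = m = 1)."""
    return kx, b1 - b3 * ky**2, -a1 + 2.0 * b3 * kx * ky

def hamiltonian(kx, ky, a1, b1, b3):
    S1, S2 = S12(kx, ky, a1, b1, b3)
    return 0.5 * (kx**2 + ky**2) * np.eye(2) + S1 * sx - S2 * sy

# central finite-difference check of v_x against dH/dk_x
kx, ky, a1, b1, b3 = 0.7, -0.4, 0.2, 0.3, 0.3
h = 1e-6
dH = (hamiltonian(kx + h, ky, a1, b1, b3)
      - hamiltonian(kx - h, ky, a1, b1, b3)) / (2 * h)
VI, Vx, Vy = velocities(kx, ky, a1, b1, b3)
assert np.allclose(dH, VI * np.eye(2) + Vx * sx + Vy * sy, atol=1e-6)

# matrix element (Vx*S2 + Vy*S1) = (b1^2 - a1^2) * ky at b3 = 0 ...
S1, S2 = S12(kx, ky, a1, b1, 0.0)
_, Vx0, Vy0 = velocities(kx, ky, a1, b1, 0.0)
assert np.isclose(Vx0 * S2 + Vy0 * S1, (b1**2 - a1**2) * ky)
# ... and it vanishes identically when a1 == b1 (persistent spin helix)
S1, S2 = S12(kx, ky, 0.4, 0.4, 0.0)
_, Vx0, Vy0 = velocities(kx, ky, 0.4, 0.4, 0.0)
assert abs(Vx0 * S2 + Vy0 * S1) < 1e-12
```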
The Green’s function can be written as [@grimaldi06] $$\widehat{G}(\mathbf{k},\omega _{n})=\frac{1}{2}\sum_{s=\pm }(\hat{I}+s\mathbf{F}_{\mathbf{k}}\cdot \hat{\mathbf{\sigma }})G_{0}(\mathbf{k},s,\omega_{n})$$ where $\mathbf{F}_{\mathbf{k}} = (S_{1},-S_{2},0)/\sqrt{S_{1}^{2}+S_{2}^{2}}$, $$G_{0}(\mathbf{k},s,\omega _{n})=\frac{1}{i\hbar \omega _{n}+\mu -\frac{\hbar ^{2}k^{2}}{2m}-s\sqrt{S_{1}^{2}+S_{2}^{2}}}$$ and $$\begin{aligned} S_{1} &=&(\alpha_1 k_{y}+\beta _{1}k_{x}-\beta _{3}k_{x}k_{y}^{2}) \notag \\ S_{2} &=&(\alpha_1 k_{x}+\beta _{1}k_{y}-\beta _{3}k_{y}k_{x}^{2})\end{aligned}$$ The wave function is given by $$\Psi _{\mathbf{k,}\pm } | 0> =\frac{1}{\sqrt{2}} \biggl[ c_{{\bf k}, \uparrow}^\dagger | 0> \pm \frac{S_{1}-iS_{2}}{\sqrt{S_{1}^{2}+S_{2}^{2}}} c_{{\bf k}, \downarrow}^\dagger | 0> \biggr], \label{eigenstates_old}$$ with corresponding eigenvalues $$E_{\pm }=\frac{\hbar ^{2}k^{2}}{2m}\pm \sqrt{S_{1}^{2}+S_{2}^{2}}. \label{eigenvalues}$$ Here $c_{{\bf k}, \uparrow}^\dagger(c_{{\bf k}, \downarrow}^\dagger)$ creates a particle with momentum $\bf k$ and spin up (down). The spin expectation values work out to be $$\begin{aligned} S_{x}&=&\frac{\hbar }{2}\langle \Psi _{\mathbf{k,}\pm }|\sigma _{x}|\Psi _{\mathbf{k,}\pm }\rangle =\pm \frac{\hbar}{2}\frac{S_{1}}{\sqrt{S_{1}^{2}+S_{2}^{2}}} \notag \\ S_{y}&=&\frac{\hbar }{2}\langle \Psi _{\mathbf{k,}\pm }|\sigma _{y}|\Psi _{\mathbf{k,}\pm }\rangle =\pm \frac{\hbar}{2}\frac{-S_{2}}{\sqrt{S_{1}^{2}+S_{2}^{2}}} \notag \\ S_{z}&=&\frac{\hbar }{2}\langle \Psi _{\mathbf{k,}\pm }|\sigma _{z}|\Psi _{\mathbf{k,}\pm }\rangle = 0.\end{aligned}$$ These formulas allow us to calculate the spin texture, as well as the optical conductivity as given in Eq. (\[Cond\]). Acknowledgments =============== This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institute for Advanced Research (CIFAR), and Alberta Innovates. 
Author Contributions ==================== ZL carried out the calculations, and all authors, ZL, FM, and JPC, contributed equally to the development of the work. Competing financial interests ============================= The authors declare no competing financial interests. [99]{} Wolf, S. A. *et al.*, Spintronics: A Spin-Based Electronics Vision for the Future. *Science* **294**, 1488-1495 (2001). Hasan, M. Z. and Kane, C. L., Colloquium: Topological insulators. *Rev. Mod. Phys.* **82**, 3045-3067 (2010). Moore, J. E., The birth of topological insulators. *Nature* **464**, 194-198 (2010). Qi, X.-L. and Zhang, S.-C., The quantum spin Hall effect and topological insulators. *Physics Today* **63**, 33-38 (2010). Bernevig, B. A., Hughes, T. L. and Zhang, S.-C., Quantum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells. *Science* **314**, 1757-1761 (2006). Fu, L., Kane, C. L. and Mele, E. J., Topological Insulators in Three Dimensions. *Phys. Rev. Lett.* **98**, 106803 (2007). Chen, Y.-L. *et al.*, Experimental Realization of a Three-Dimensional Topological Insulator, $Bi_{2}Te_{3}$. *Science* **325**, 178-181 (2009). Li, Zhou and Carbotte, J. P., Hexagonal warping on optical conductivity of surface states in Topological Insulator $Bi_{2}Te_{3}$. *Phys. Rev. B* **87**, 155416 (2013). Pesin, D. and MacDonald, A. H., Spintronics and pseudospintronics in graphene and topological insulators. *Nature Materials* **11**, 409-416 (2012). Novoselov, K. S. *et al.*, Electric field effect in atomically thin carbon films. *Science* **306**, 666-669 (2004). Zhang, Y., Tan, Y.-W., Stormer, H. L. and Kim, P., Experimental observation of the quantum Hall effect and Berry’s phase in graphene. *Nature* **438**, 201-204 (2005). Mak, K. F., Lee, C., Hone, J., Shan, J. and Heinz, T. F., Atomically Thin $MoS_2$: A New Direct-Gap Semiconductor. *Phys. Rev. Lett.* **105**, 136805 (2010). Splendiani, A. *et al.*, Emerging Photoluminescence in Monolayer $MoS_2$. 
*Nano Lett.* **10**, 1271-1275 (2010). Lebègue, S. and Eriksson, O., Electronic structure of two-dimensional crystals from ab initio theory. *Phys. Rev. B* **79**, 115409 (2009). Lee, C. *et al.*, Anomalous Lattice Vibrations of Single- and Few-Layer $MoS_2$. *ACS Nano* **4**, 2695-2700 (2010). Li, Zhou and Carbotte, J. P., Longitudinal and spin/valley Hall optical conductivity in single layer $MoS_{2}$. *Phys. Rev. B* **86**, 205425 (2012). Li, Zhou and Carbotte, J. P., Impact of electron-phonon interaction on dynamic conductivity of gapped Dirac fermions: Application to single layer $MoS_{2}$. *Physica B* **421**, 97-104 (2013), http://dx.doi.org/10.1016/j.physb.2013.04.030 Drummond, N. D., Zólyomi, V. and Fal’ko, V. I., Electrically tunable band gap in silicene. *Phys. Rev. B* **85**, 075423 (2012). Aufray, B. *et al.*, Graphene-like silicon nanoribbons on Ag(110): A possible formation of silicene. *Appl. Phys. Lett.* **96**, 183102 (2010). Stille, L., Tabert, C. J. and Nicol, E. J., Optical signatures of the tunable band gap and valley-spin coupling in silicene. *Phys. Rev. B* **86**, 195405 (2012). Ezawa, M., A topological insulator and helical zero mode in silicene under an inhomogeneous electric field. *New J. Phys.* **14**, 033003 (2012). Ezawa, M., Spin-Valley Optical Selection Rule and Strong Circular Dichroism in Silicene. *Phys. Rev. B* **86**, 161407(R) (2012). Koralek, J. D. *et al.*, Emergence of the persistent spin helix in semiconductor quantum wells. *Nature* **458**, 610-613 (2009). Walser, M. P., Reichl, C., Wegscheider, W. and Salis, G., Direct mapping of the formation of a persistent spin helix. *Nature Phys.* **8**, 757-762 (2012). Bernevig, B. A., Orenstein, J. and Zhang, S.-C., Exact SU(2) Symmetry and Persistent Spin Helix in a Spin-Orbit Coupled System. *Phys. Rev. Lett.* **97**, 236601 (2006). Lin, Y. J., Jimenez-Garcia, K. and Spielman, I. B., Spin-orbit-coupled Bose-Einstein condensates. *Nature* **471**, 83-86 (2011). 
See also Ozawa, T. and Baym, G., Ground-state phases of ultracold bosons with Rashba-Dresselhaus spin-orbit coupling. *Phys. Rev. A* **85**, 013612 (2012). Bloch, I., Dalibard, J. and Zwerger, W., Many-body physics with ultracold gases. *Rev. Mod. Phys.* **80**, 885-964 (2008). Rashba, E. I., Properties of semiconductors with an extremum loop. 1. Cyclotron and combinational resonance in a magnetic field perpendicular to the plane of the loop. *Sov. Phys. Solid State* **2**, 1109 (1960). Dresselhaus, G., Spin-Orbit Coupling Effects in Zinc Blende Structures. *Phys. Rev.* **100**, 580-586 (1955). Jozwiak, C. *et al.*, Widespread spin polarization effects in photoemission from topological insulators. *Phys. Rev. B* **84**, 165113 (2011). Jozwiak, C. *et al.*, Photoelectron spin-flipping and texture manipulation in a topological insulator. *Nature Phys.* **9**, 293-298 (2013). Xu, S.-Y. *et al.*, Topological Phase Transition and Texture Inversion in a Tunable Topological Insulator. *Science* **332**, 560 (2011). Wang, Y. H. *et al.*, Observation of a Warped Helical Spin Texture in $Bi_{2}Se_{3}$ from Circular Dichroism Angle-Resolved Photoemission Spectroscopy. *Phys. Rev. Lett.* **107**, 207602 (2011). Shen, S.-Q., Spin Hall effect and Berry phase in two-dimensional electron gas. *Phys. Rev. B* **70**, 081311(R) (2004). Li, J., Hu, L. and Shen, S.-Q., Spin resolved Hall effect driven by spin-orbit coupling. *Phys. Rev. B* **71**, 241305(R) (2005). Khomitsky, D. V., Electric-field induced spin textures in a superlattice with Rashba and Dresselhaus spin-orbit coupling. *Phys. Rev. B* **79**, 205401 (2009). Li, Zhou, Covaci, L. and Marsiglio, F., Impact of Dresselhaus versus Rashba spin-orbit coupling on the Holstein polaron. *Phys. Rev. B* **85**, 205112 (2012). Maytorena, J. A., Lopez-Bastidas, C. and Mireles, F., Spin and charge optical conductivities in spin-orbit coupled systems. *Phys. Rev. B* **74**, 235313 (2006). Grimaldi, C., Cappelluti, E. 
and Marsiglio, F., Off-Fermi surface cancellation effects in spin-Hall conductivity of a two-dimensional Rashba electron gas. *Phys. Rev. B* **73**, 081303(R) (2006); Spin-Hall Conductivity in Electron-Phonon Coupled Systems. *Phys. Rev. Lett.* **97**, 066601 (2006).
--- abstract: 'Model-based reinforcement learning (MBRL) aims to learn a dynamics model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatch seriously impacts the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring that the distributions of multi-step rollouts match. Based on this claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN. We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one. Our experiments also show that the proposed *model imitation* method outperforms the state-of-the-art in terms of sample complexity and average return.' author: - 'Yueh-Hua Wu$^{1,2}$[^1], Ting-Han Fan$^{3}$, Peter J. Ramadge$^{3}$, Hao Su$^{2}$' bibliography: - 'reference.bib' date: | $^1$ National Taiwan University\ $^2$ University of California San Diego\ $^3$ Princeton University title: 'Model Imitation for Model-Based Reinforcement Learning' --- Introduction ============ Reinforcement learning (RL) has become of great interest because many real-world problems can be modeled as sequential decision-making problems. Model-free reinforcement learning (MFRL) is favored for its capability of learning complex tasks when interactions with environments are cheap. However, in the majority of real-world problems, such as autonomous driving, interactions are extremely costly, so MFRL becomes infeasible. 
One critique of MFRL is that it does not fully exploit past queries of the environment, and this motivates us to consider model-based reinforcement learning (MBRL). In addition to learning an agent policy, MBRL also uses the queries to learn the dynamics of the environment that our agent is interacting with. If the learned dynamics are accurate enough, the agent can acquire the desired skill by simply interacting with the simulated environment, so that the number of samples to collect in the real world can be greatly reduced. As a result, MBRL has become one of the possible solutions to reduce the number of samples required to learn an optimal policy. Most previous works on MBRL adopt supervised learning with $\ell_2$-based errors [@luo2018slbo; @kurutach18metrpo; @clavera2018mbmpo] or maximum likelihood [@janner2019trust] to obtain an environment model that synthesizes real transitions. These non-trivial developments imply that optimizing a policy on a synthesized environment is a challenging task. Because the estimation error of the model accumulates as the trajectory grows, it is hard to train a policy on a long synthesized trajectory. On the other hand, training on short trajectories makes the policy short-sighted. This issue is known as the planning horizon dilemma [@langlois2019benchmarking]. As a result, despite having a strong intuition at first sight, MBRL has to be designed meticulously. Intuitively, we would like to learn a transition model in such a way that it can reproduce the trajectories that have been generated in the real world. Since the attained trajectories are sampled according to a certain policy, directly employing supervised learning may not necessarily lead to the mentioned result, especially when the policy is stochastic. The resemblance in trajectories matters because we estimate the policy gradient by generating rollouts; however, the one-step model learning adopted by many MBRL methods does not guarantee this. 
Some previous works propose multi-step training [@luo2018slbo]; however, experiments show that model learning fails to benefit much from the multi-step loss. We attribute this outcome to the essence of supervised learning, which fundamentally preserves only the one-step transition, so the similarity between real trajectories and the synthesized ones cannot be guaranteed. In this work, we propose to learn the transition model via distribution matching. Specifically, we use WGAN [@wgan] to match the distributions of state-action-next-state triples $(s,a,s')$ in real/learned models so that the agent policy can generate similar trajectories when interacting with either the true transition or the learned transition. Figure \[fig:illustrate\] illustrates the difference between methods based on supervised learning and distribution matching. Different from the ensemble methods proposed in previous works, our method is capable of generalizing to unseen transitions with only *one* dynamics model, because merely incorporating multiple models does not change the fact that one-step (or few-step) supervised learning fails to imitate the distribution of multi-step rollouts. Concretely, we gather some transitions in the real world according to a policy. To learn the real transition, we then sample fake transitions from our synthesized model with the same policy. The synthesized model serves as the generator in the WGAN framework, and there is a critic that discriminates between the two sets of transition data. We update the generator and the critic alternately until the synthesized data cannot be distinguished from the real data, which, as we show later, theoretically gives $T\rightarrow T'$. 
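A full WGAN implementation is beyond the scope of a short sketch, but the quantity the critic estimates, the Wasserstein-1 distance between real and synthesized transition samples, has a closed form in one dimension via sorted samples. The toy example below (hypothetical linear dynamics with Gaussian noise; all numbers are illustrative, not from the paper) samples next states under a fixed stochastic policy and shows the distance shrinking as the synthesized noise scale approaches the real one:

```python
import numpy as np

def w1_1d(x, y):
    """Empirical Wasserstein-1 distance between two equal-size 1-D
    samples: mean absolute difference of the sorted samples
    (the monotone coupling is optimal in 1-D)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
n = 20000
s = rng.uniform(-1, 1, n)      # states visited under a fixed policy
a = rng.normal(0.0, 0.5, n)    # actions from that (stochastic) policy

# "real" transition: s' = s + a + noise with std 0.3
real_next = s + a + rng.normal(0.0, 0.3, n)

# candidate synthesized transitions with a mis-specified noise scale;
# the W1 distance decreases as the scale approaches the true 0.3
for sigma in [1.0, 0.6, 0.3]:
    fake_next = s + a + rng.normal(0.0, sigma, n)
    print(sigma, w1_1d(real_next, fake_next))
```

In the actual method the critic is a Lipschitz-constrained network over $(s,a,s')$ triples, but the objective it approximates is exactly this distance.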
Our contributions are summarized below: - We propose an MBRL method called model imitation (MI), which enforces the learned transition model to generate rollouts similar to the real ones so that the policy gradient is accurate; - We theoretically show that the transition can be learned by MI in the sense that $T\rightarrow T'$ by consistency and the difference in cumulative rewards $\lvert R(T)-R(T')\rvert$ is small; - To stabilize model learning, we deduce guarantees for our sampling technique and investigate training across WGANs; - We experimentally show that MI is more sample efficient than state-of-the-art MBRL and MFRL methods and outperforms them on four standard tasks. ![Distribution matching enables the learned transition to generate similar rollouts to the real ones even when the policy is stochastic or the initial states are close. On the other hand, training with supervised learning does not ensure rollout similarity and the resulting policy gradient may be inaccurate. This figure considers a fixed policy sampling in the real world and a transition model.[]{data-label="fig:illustrate"}](illustration.pdf) Related work {#relatedwork} ============ In this section, we introduce our motivation inspired by learning from demonstration (LfD) [@schaal1997learning] and give a brief survey of MBRL methods. Learning from Demonstration --------------------------- A straightforward approach to LfD is to leverage behavior cloning (BC), which reduces LfD to a supervised learning problem. Even though learning a policy via BC is time-efficient, it cannot imitate a policy without sufficient demonstrations because the error may accumulate without the guidance of the expert [@ross2011dagger]. Generative Adversarial Imitation Learning (GAIL) [@ho2016generative] is another state-of-the-art LfD method that learns an optimal policy by utilizing generative adversarial training to match the occupancy measure [@syed2008apprenticeship]. 
GAIL learns an optimal policy by matching the distribution of the trajectories generated from an agent policy with the distribution of the given demonstration. [@ho2016generative] shows that the two distributions match if and only if the agent has learned the optimal policy. One of the advantages of GAIL is that it only requires a small amount of demonstration data to obtain an optimal policy, but it requires a considerable number of interactions with environments for the generative adversarial training to converge. Our intuition is that we analogize transition learning (TL) to learning from demonstration (LfD). In LfD, trajectories sampled from a fixed transition are given, and the goal is to learn a policy. On the other hand, in TL, trajectories sampled from a fixed policy are given, and we would like to imitate the underlying transition. That being said, from LfD to TL, we interchange the roles of the policy and the transition. It is therefore tempting to study the counterpart of GAIL in TL; i.e., *learning the transition by distribution matching*. Fortunately, by doing so, the pros of GAIL remain while the cons are insubstantial in MBRL because sampling with the learned model is considered to be much cheaper than sampling in the real one. That GAIL learns a better policy than BC does suggests that distribution matching possesses the potential to learn a better transition than supervised learning. Model-Based Reinforcement Learning ---------------------------------- A deterministic transition model is usually optimized with an $\ell_2$-based error. [@Nagabandi18], an approach that uses supervised learning with mean-squared error as its objective, is shown to perform well under fine-tuning. To alleviate model bias, some previous works adopt ensembles [@kurutach18metrpo; @jacob2018steve], where multiple transition models with different initializations are trained at the same time. 
In a slightly more complicated manner, [@clavera2018mbmpo] utilizes meta-learning to gather information from multiple models. Lastly, on the theoretical side, SLBO [@luo2018slbo] is the first algorithm that develops from solid theoretical properties for model-based deep RL via a joint model-policy optimization framework. For a stochastic transition, maximum likelihood estimation or moment matching is a natural way to learn a synthesized transition, which is usually modeled by the Gaussian distribution. Following this idea, Gaussian processes [@Kupcsik2013gauss; @Deisenroth2015gauss] and Gaussian processes with model predictive control [@Kamthe2017gauss] are introduced as uncertainty-aware versions of MBRL. Similar to the deterministic case, to mitigate model bias and foster stability, an ensemble method for probabilistic networks [@kurtland2018pets] is also studied. An important distinction between training a deterministic or a stochastic transition is that although the stochastic transition can model the noise hidden within the real world, the stochastic model may also induce instability if the true transition is deterministic. This is a potential reason why an ensemble of models is adopted to reduce variance. Background ========== Reinforcement Learning ---------------------- We consider the standard Markov Decision Process (MDP) [@sutton1998introduction]. An MDP is represented by a tuple $\langle \mathcal{S},\mathcal{A},T, r,\gamma\rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $T(s_{t+1}\vert s_t,a_t)$ is the transition density of state $s_{t+1}$ at time step $t+1$ given action $a_t$ made under state $s_t$, $r(s,a)$ is the reward function, and $\gamma\in(0,1)$ is the discount factor. A stochastic policy $\pi(a\vert s)$ is a density of action $a$ given state $s$. Let the initial state distribution be $\alpha$. 
The performance of the triple $(\alpha,\pi,T)$ is evaluated as the expected cumulative reward in the $\gamma$-discounted infinite horizon setting: $$\begin{aligned}
\label{eq:cumulative_reward}
R(\alpha,\pi,T)=\mathbb{E}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\Big\lvert \alpha, \pi,T\right]=\mathbb{E}\left[\sum_{t=0}^{H-1} r(s_t,a_t)\Big\lvert \alpha, \pi,T\right].\end{aligned}$$ Equivalently, $R(\alpha,\pi,T)$ is the expected cumulative reward in a length-$H$ trajectory $\{s_t,a_t\}_{t=0}^{H-1}$ generated by $(\alpha,\pi,T)$ with $H\sim \text{Geometric}(1-\gamma)$. When $\alpha$ and $T$ are fixed, $R(\cdot)$ becomes a function that depends only on $\pi$, and reinforcement learning algorithms [@sutton1998introduction] aim to find a policy $\pi$ that maximizes $R(\pi)$.

Occupancy Measure
-----------------

Given initial state distribution $\alpha(s)$, policy $\pi(a|s)$ and transition $T(s'|s,a)$, the normalized occupancy measure $\rho_T^{\alpha,\pi}(s,a)$ generated by $(\alpha,\pi,T)$ is defined as $$\rho_T^{\alpha,\pi}(s,a) = \sum\limits_{t=0}^{\infty} (1-\gamma)\gamma^t \mathbb{P}(s_t=s,a_t=a|\alpha,\pi,T) =(1-\gamma)\,{\mathbb{E}}_H\Big[\sum\limits_{t=0}^{H-1} \mathbb{P}(s_t=s,a_t=a|\alpha,\pi,T)\Big],
\label{eq:occupancy}$$ where $\mathbb{P}(\cdot)$ is the probability measure and will be replaced by a density function if $\mathcal{S}$ or $\mathcal{A}$ is continuous. Intuitively, $\rho_T^{\alpha,\pi}(s,a)$ is the distribution of $(s,a)$ in a length-$H$ trajectory $\{s_t,a_t\}_{t=0}^{H-1}$ with $H\sim \text{Geometric}(1-\gamma)$ following the laws of $(\alpha,\pi,T)$. From [@Syed08], the relation between $\rho_T^{\alpha,\pi}$ and $(\alpha,\pi,T)$ is characterized by the Bellman flow constraint. Specifically, $x=\rho_T^{\alpha,\pi}$ as defined in Eq. \[eq:occupancy\] is the unique solution to: $$x(s,a)=\pi(a|s)\Big[(1-\gamma)\alpha(s) + \gamma \int x(s',a')T(s|s',a')ds'da'\Big],~~~x(s,a)\geq 0.
\label{eq:bellman}$$ In addition, Theorem 2 of [@Syed08] gives that $\pi(a|s)$ and $\rho_T^{\alpha,\pi}(s,a)$ have a one-to-one correspondence with $\alpha(s)$ and $T(s'|s,a)$ fixed; i.e., $\pi(a|s)\triangleq\frac{\rho(s,a)}{\int \rho(s,a)da}$ is the only policy whose occupancy measure is $\rho$. With the occupancy measure, the cumulative reward Eq. \[eq:cumulative\_reward\] can be represented as $$R(\alpha,\pi,T)={\mathbb{E}}_{(s,a)\sim \rho^{\alpha,\pi}_T}[r(s,a)]/(1-\gamma).
\label{eq:occu_cum_reward}$$ The goal of maximizing the cumulative reward can then be achieved by adjusting $\rho^{\alpha,\pi}_T$, and this motivates us to adopt distribution matching approaches like WGAN [@wgan] to learn a transition model.

Theoretical Analysis for WGAN {#sec:matching}
=============================

In this section, we present a consistency result and error bounds for WGAN [@wgan]. All proofs of the following theorems and lemmas can be found in Appendix \[apdx:proofs\]. In the setting of MBRL, the training objective for WGAN is $$\underset{T'}{\min}~\underset{{\left\lVert f\right\rVert}_L\leq 1}{\max}~ {\mathbb{E}}_{(s,a)\sim \rho_T,~s'\sim T(\cdot|s,a)}[f(s,a,s')]-{\mathbb{E}}_{(s,a)\sim \rho_{T'},~s'\sim T'(\cdot|s,a)}[f(s,a,s')].
\label{eq:wgan}$$ By Kantorovich-Rubinstein duality [@Villani2008_opt_transport], the optimal value of the inner maximization is exactly $W_1(p(s,a,s')\lvert\rvert p'(s,a,s'))$, where $p(s,a,s')=\rho_T(s,a)T(s'|s,a)$ is the discounted distribution of $(s,a,s')$ and $p'(s,a,s')=\rho_{T'}(s,a)T'(s'|s,a)$ is its synthesized counterpart. Thus, by minimizing over the choice of $T'$, we are essentially finding $p'$ that minimizes $W_1(p(s,a,s')\lvert\rvert p'(s,a,s'))$, which gives the consistency result. \[prop:consistent-wgan\] Let $T$ and $T'$ be the true and synthesized transitions respectively. If WGAN is trained to its optimal point, we have $$T(s'|s,a)=T'(s'|s,a),~\forall (s,a)\in\text{Supp}(\rho_{T}),$$ where $\text{Supp}(\rho_{T})$ is the support of $\rho_{T}$.
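The Bellman flow constraint and the identity in Eq. \[eq:occu\_cum\_reward\] can be verified numerically in a small tabular MDP. The following numpy sketch (function names and the fixed-point scheme are ours, not the paper's implementation) iterates the flow constraint to obtain $\rho_T^{\alpha,\pi}$ and checks it against a direct discounted-return computation:

```python
import numpy as np

def occupancy_measure(alpha, pi, T, gamma, n_iter=2000):
    """Normalized occupancy rho(s,a) by fixed-point iteration of the Bellman
    flow constraint (tabular case).
    alpha: (S,), pi: (S,A), T: (S,A,S) with T[s,a,t] = P(s_next = t | s, a)."""
    rho = np.zeros_like(pi)
    for _ in range(n_iter):
        inflow = np.einsum('sa,sat->t', rho, T)  # discounted flow into each state
        rho = pi * ((1 - gamma) * alpha + gamma * inflow)[:, None]
    return rho

def cumulative_reward(alpha, pi, T, r, gamma, horizon=2000):
    """Direct evaluation of R = sum_t gamma^t E[r(s_t, a_t)]."""
    p, total = alpha.copy(), 0.0
    for t in range(horizon):
        sa = pi * p[:, None]                     # joint distribution of (s_t, a_t)
        total += gamma ** t * np.sum(sa * r)
        p = np.einsum('sa,sat->t', sa, T)        # state distribution at t + 1
    return total
```

On any random tabular MDP, the two agree: `cumulative_reward(...)` matches `np.sum(rho * r) / (1 - gamma)`, which is exactly Eq. \[eq:occu\_cum\_reward\].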
The support constraint is inevitable because the training data is sampled from $\rho_T$, and guaranteeing anything beyond it can be difficult. Still, we will empirically show that the support constraint is not an issue in our experiments because performance improves rapidly at the beginning of training, indicating that $\text{Supp}(\rho_{T})$ may be large enough initially. Now that training with WGAN gives a consistent estimate of the true transition, it is sensible to train a synthesized transition upon it. However, the consistency result is too restrictive, as it only discusses the optimal case. The next step is to analyze the non-optimal situation and observe how the cumulative reward deviates w.r.t. the training error. Let $\rho_T(s,a),~\rho_{T'}(s,a)$ be the normalized occupancy measures generated by the true transition $T$ and the synthesized one $T'$. If the reward function is $L_r$-Lipschitz and the training error of WGAN is $\epsilon$, we have $|R(T)- R(T')| \leq \epsilon L_r/(1-\gamma)$. \[thm:err-wgan\] Theorem \[thm:err-wgan\] indicates that if WGAN is trained properly, i.e., with small $\epsilon$, the cumulative reward on the synthesized trajectory will be close to that on the true trajectory. As MBRL aims to train a policy on the synthesized trajectory, the accuracy of the cumulative reward over the synthesized trajectory is thus the *bottleneck*. Theorem \[thm:err-wgan\] also implies that WGAN’s error is linear in the (expected) length of the trajectory $(1-\gamma)^{-1}$. This is in sharp contrast to the error bounds in most RL literature, where the dependency on the trajectory length is usually quadratic [@syed2010; @ross2011dagger], or of even higher order. Since WGAN gives us a better estimate of the cumulative reward in the learned model, the policy update becomes more accurate.
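Theorem \[thm:err-wgan\] can be illustrated numerically in one dimension, where the $W_1$ distance between two equal-size empirical distributions is simply the mean absolute difference of the sorted samples. The sketch below (distributions and the reward $r(x)=|x|$ are our own illustrative choices) checks that the reward gap never exceeds the bound $\epsilon L_r/(1-\gamma)$:

```python
import numpy as np

def w1_empirical(x, y):
    """Exact 1-Wasserstein distance between two equal-size 1-D empirical
    distributions: mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
gamma = 0.95
# Samples standing in for draws from rho_T and rho_T' over a 1-D space.
true_samples = rng.normal(0.0, 1.0, size=10_000)
model_samples = rng.normal(0.1, 1.0, size=10_000)  # a slightly biased model

eps = w1_empirical(true_samples, model_samples)    # plays the role of the WGAN error
reward = np.abs                                    # r(x) = |x| is 1-Lipschitz, so L_r = 1
gap = abs(np.mean(reward(true_samples)) - np.mean(reward(model_samples))) / (1 - gamma)
bound = eps / (1 - gamma)                          # Theorem's bound with L_r = 1
```

Because $|{\mathbb{E}}_p[r]-{\mathbb{E}}_q[r]|\leq L_r W_1(p,q)$ holds for any 1-Lipschitz $r$, `gap <= bound` holds exactly for the empirical measures, not just in expectation.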
Model Imitation for Model-Based Reinforcement Learning {#sec:model-imitation}
======================================================

In this section, we present a practical MBRL method called model imitation (MI) that incorporates the transition learning mentioned in Section \[sec:matching\].

Sampling Technique for Transition Learning
------------------------------------------

Because synthesized trajectories digress from real ones in the long term, it is hard to train the WGAN directly on long synthesized trajectories. To tackle this issue, we use the synthesized transition $T'$ to sample $N$ short trajectories with initial states sampled from the true trajectory. To analyze this sampling technique, let $\beta<\gamma$ be the discount factor of the short trajectories so that the expected length is ${\mathbb{E}}[L]=(1-\beta)^{-1}$. Let $\rho_{T'}^\beta$, $\widehat{\rho}_T^\beta$, $\rho_T^\beta$, $\rho_T$ be the normalized occupancy measures of synthesized short trajectories, empirical true short trajectories, true short trajectories, and true long trajectories, respectively. The 1-Wasserstein distance can be bounded by $$W_1(\rho_{T'}^\beta\lvert\rvert \rho_T)\leq W_1(\rho_{T'}^\beta\lvert\rvert \widehat{\rho}_T^\beta) + W_1(\widehat{\rho}_T^\beta\lvert\rvert \rho_T^\beta) + W_1(\rho_T^\beta \lvert\rvert \rho_T).$$ $W_1(\rho_{T'}^\beta\lvert\rvert \widehat{\rho}_T^\beta)$ is upper bounded by the training error of WGAN on short trajectories, which can be small empirically because the short ones are easier to imitate. $W_1(\widehat{\rho}_T^\beta\lvert\rvert \rho_T^\beta)={\mathbb{E}}_L[O((NL)^{-1/d})]=O(((1-\beta)/N)^{1/d}/\beta)$ by [@Canas2012wdist_bound] and Lemma \[lemma:geo\_frac\_moment\], where $d$ is the dimension of $(s,a)$. $W_1(\rho_T^\beta \lvert\rvert \rho_T)\leq \text{diam}(\mathcal{S\times A})(1-\gamma)\beta/(\gamma-\beta)$ by Lemma \[lemma:short\_occu\_bound\] and $W_1\leq D_{TV}\text{diam}(\mathcal{S\times A})$ [@dist_bounds], where $\text{diam}(\cdot)$ is the diameter.
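The branching scheme above can be sketched as follows. The `model_step` and `policy` callables are hypothetical stand-ins for $T'$ and the fixed sampling policy; each rollout starts from a real state and has random length $L\sim\text{Geometric}(1-\beta)$, so ${\mathbb{E}}[L]=(1-\beta)^{-1}$:

```python
import numpy as np

def sample_short_rollouts(real_states, model_step, policy, n_traj, beta, rng):
    """Sample n_traj short synthesized trajectories. Each starts from a state
    observed in the real environment and has length L ~ Geometric(1 - beta)."""
    trajs = []
    for _ in range(n_traj):
        s = real_states[rng.integers(len(real_states))]  # branch from a real state
        length = rng.geometric(1.0 - beta)               # L >= 1, E[L] = 1/(1-beta)
        traj = []
        for _ in range(length):
            a = policy(s, rng)
            s_next = model_step(s, a, rng)
            traj.append((s, a, s_next))
            s = s_next
        trajs.append(traj)
    return trajs
```

With a deterministic toy model `s_next = s + a`, the empirical mean length concentrates around $(1-\beta)^{-1}$, matching the analysis above.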
The second term encourages $\beta$ to be large while the third term does the opposite. Besides, $\beta$ need not be large if $N$ is large enough; in practice we may sample $N$ short trajectories to reduce the error from $W_1(\rho_{T'}\lvert\rvert \rho_T)$ to $W_1(\rho_{T'}^\beta\lvert\rvert \rho_T)$. Finally, since $\rho_{T'}^\beta$ is the occupancy measure we train on, from the proof of Theorem \[thm:err-wgan\] we deduce that $$|R(T)-R(T')|\leq W_1(\rho_{T'}^\beta\lvert\rvert \rho_T)L_r/(1-\gamma).$$ Thus, WGAN may perform better under this sampling technique.

Empirical Transition Learning
-----------------------------

To learn the real transition based on the occupancy measure matching mentioned in Section \[sec:matching\], we employ a transition learning scheme that aligns the distribution of $(s,a,s')$ between the real and the learned environments. Inspired by how GAIL [@ho2016generative] learns to align $(s,a)$ via solving an MDP with rewards extracted from a discriminator, we formulate an MDP with rewards from a discriminator over $(s,a,s')$. Specifically, the WGAN critic $f(s,a,s')$ in Eq. \[eq:wgan\] is used as the (pseudo) reward $r(s,a,s')$ of our MDP. Interestingly, there is a duality between GAIL and our transition learning: for GAIL, the transition is fixed and the objective is to train a policy to maximize the cumulative pseudo rewards, while for our transition learning, the policy is fixed and the objective is to train a synthesized transition to maximize the cumulative pseudo rewards. In practice, since the policy is updated alternately with the synthesized model, we are required to train a number of WGANs as the policy changes. Although the generators across WGANs correspond to the same transition and can be similar, we observe that WGAN may get stuck at a local optimum when we switch from one WGAN training to another.
The reason is that, unlike GAN, which mimics the Jensen-Shannon divergence so that its inner maximization is upper bounded by $\log(2)$, WGAN mimics the Wasserstein distance, whose inner maximization is unbounded from above. Intuitively, such unboundedness makes the WGAN critic so strong that the WGAN generator (the synthesized transition) cannot find a way out and gets stuck at a local optimum. We therefore modify the WGAN objective to alleviate this situation. To ensure boundedness, for a fixed $\delta>0$, we introduce cut-offs in the WGAN objective so that the inner maximization is upper bounded by $2\delta$: $$\underset{T'}{\min}~\underset{{\left\lVert f\right\rVert}_L\leq 1}{\max}~ {\mathbb{E}}_{\substack{(s,a)\sim \rho_T\\s'\sim T(\cdot|s,a)}}[\min(\delta,f(s,a,s'))]+{\mathbb{E}}_{\substack{(s,a)\sim \rho_{T'}\\s'\sim T'(\cdot|s,a)}}[\min(\delta,-f(s,a,s'))].
\label{eq:trun-wgan}$$ As $\delta\rightarrow\infty$, Eq. \[eq:trun-wgan\] recovers the WGAN objective, Eq. \[eq:wgan\]. Therefore, this is a truncated version of WGAN. To comprehend Eq. \[eq:trun-wgan\] further, notice that it is equivalent to $$\begin{split}
&\underset{T'}{\min}~\underset{{\left\lVert f\right\rVert}_L\leq 1}{\max}~ {\mathbb{E}}_{\substack{(s,a)\sim \rho_T\\s'\sim T(\cdot|s,a)}}[\min(0,f(s,a,s')-\delta)]+{\mathbb{E}}_{\substack{(s,a)\sim \rho_{T'}\\s'\sim T'(\cdot|s,a)}}[\min(0,-f(s,a,s')-\delta)]\\
\Leftrightarrow~&\underset{T'}{\min}~\underset{{\left\lVert f\right\rVert}_L\leq 1}{\min}~ {\mathbb{E}}_{\substack{(s,a)\sim \rho_T\\s'\sim T(\cdot|s,a)}}[\max(0,\delta-f(s,a,s'))]+{\mathbb{E}}_{\substack{(s,a)\sim \rho_{T'}\\s'\sim T'(\cdot|s,a)}}[\max(0,\delta+f(s,a,s'))],
\end{split}
\label{eq:trun-wgan2}$$ which is a hinge loss version of the generative adversarial objective. This WGAN variant is introduced in [@lim2017geometric], where a consistency result is provided; further experiments are evaluated in [@zhang2018self].
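The equivalence between the truncated and hinge forms follows from the identity $\min(\delta,f)=\delta-\max(0,\delta-f)$, which gives truncated $=2\delta-$ hinge per sample. A minimal numpy check (function names are ours):

```python
import numpy as np

def truncated_critic_objective(f_real, f_fake, delta):
    """Inner maximization of the truncated objective; upper bounded by 2*delta."""
    return np.mean(np.minimum(delta, f_real)) + np.mean(np.minimum(delta, -f_fake))

def hinge_critic_loss(f_real, f_fake, delta):
    """Hinge-loss form: minimized instead of maximized."""
    return (np.mean(np.maximum(0.0, delta - f_real))
            + np.mean(np.maximum(0.0, delta + f_fake)))
```

Since the two differ only by the constant $2\delta$ and a sign, maximizing the truncated objective over the critic is the same as minimizing the hinge loss.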
According to [@lim2017geometric], the inner minimization can be interpreted as a soft-margin SVM. Consequently, it provides a geometric intuition of margin maximization, which potentially enhances robustness. Finally, because the objective of transition learning is to maximize the cumulative pseudo rewards on the MDP, $T'$ does not directly optimize Eq. \[eq:trun-wgan2\]. Note that the truncation only appears in the inner minimization: $$\underset{{\left\lVert f\right\rVert}_L\leq 1}{\min}~ {\mathbb{E}}_{\substack{(s,a)\sim \rho_T\\s'\sim T(\cdot|s,a)}}[\max(0,\delta-f(s,a,s'))]+{\mathbb{E}}_{\substack{(s,a)\sim \rho_{T'}\\s'\sim T'(\cdot|s,a)}}[\max(0,\delta+f(s,a,s'))],
\label{eq:hinge-wgan}$$ which gives us a WGAN critic $f(s,a,s')$. As mentioned, $f$ will be the pseudo reward function. Later, we will introduce a transition learning version of PPO [@schulman2017proximal] to optimize the cumulative pseudo reward. Since the occupancy measure $\rho_{T'}^{\pi}$ depends on both the synthesized transition and the policy, performing model and policy updates alternately implies solving several generative adversarial learning problems; however, those problems share a common solution, the real transition $T$. That is to say, the optimal generators we derive across WGAN trainings should be similar. This consistency, nonetheless, does not apply to the discriminator when the WGAN objective is used. This can be seen by considering the case when the generator can perfectly produce the desired distribution, $T'\approx T$, so that the objective becomes $$\begin{aligned}
\max_w\mathbb{E}_{s,a,s'\sim T'}\left[D(s,a,s')\right]-\mathbb{E}_{s,a,s'\sim T}\left[D(s,a,s')\right]=0,\end{aligned}$$ which suggests that the output of the discriminator $D$ can be any value between $-\infty$ and $\infty$. To maintain consistency and stability across WGAN trainings, we utilize hinge loss as the adversarial loss [@zhang2018self; @lim2017geometric].
Let our synthesized model be parameterized by $\phi$ and the discriminator $D$ by $w$; the hinge version of the generative adversarial objective becomes: $$\begin{aligned}
\label{eq:hingegan}
\min_\phi\max_w\mathbb{E}_{s,a,s'\sim T_\phi}[\min (0, D_w(s,a,s')-1)]+\mathbb{E}_{s,a,s'\sim T}[\min(0, -D_w(s,a,s')-1)].\end{aligned}$$ In addition to the outstanding performance observed in different generation tasks [@lim2017geometric; @brock2018large], hinge loss greatly reduces the space of possible values to $[-1,1]$ when $T_\phi\approx T$. We incorporate gradient penalty [@gulrajani2017improved], which has been shown to improve generated samples for non-saturating GANs [@fedus2017many; @kurach2019large]. It should be noted that the tuples $(s,a,s')$ in Eq. (\[eq:hingegan\]) are not i.i.d. samples, and it is therefore necessary to address the credit assignment problem [@sleeman1982learning; @sutton1985temporal] so that the model can identify which of its generated transitions resemble the real ones and which do not. We cast this optimization problem as an MDP $\langle \mathcal{X}, V, \mathcal{T}, r, \gamma\rangle$, where $\mathcal{X}\coloneqq \mathcal{S}\times \mathcal{A}$, $V\coloneqq \mathcal{S}$, $\mathcal{T}(x_{t+1} \vert v_t)$ is a pseudo transition, and $r=D_w(x,v)$ is the reward function.

Initialize policy $\pi_\theta$, transition model $T_\phi$, WGAN critic $f_w$, and environment dataset $\mathcal{D}_\text{env}$. Then, for each iteration: (i) take actions in the real environment according to $\pi_\theta$, collecting data $\mathcal{D}_i$, and set $\mathcal{D}_\text{env}\leftarrow\mathcal{D}_\text{env}\cup\mathcal{D}_i$; (ii) pre-train $T_\phi$ and $f_w$ by optimizing Eq. \[eq:hinge-wgan\] and \[eq:tloss\] with $\mathcal{D}_i$ and $\mathcal{D}_\text{env}$; (iii) optimize Eq. \[eq:hinge-wgan\] and \[eq:tloss\] over $\phi$ and $w$ with $\mathcal{D}_i$; (iv) update $\pi_\theta$ by TRPO on the data generated by $T_\phi$.

After modifying the WGAN objective, to cover both the stochastic and (approximately) deterministic scenarios, the synthesized transition is modeled by a Gaussian distribution $T'(s'|s,a)=T_\phi(s'\vert s,a)\sim\mathcal{N}(\mu_\phi(s,a),\Sigma_\phi(s,a))$. Although the underlying transitions of tasks like MuJoCo [@todorov2012mujoco] are deterministic, modeling them by a Gaussian does not harm the transition learning empirically. Recall that the synthesized transition is trained on an MDP whose reward function is the critic of the truncated WGAN. To achieve this goal with proper stability, we employ PPO [@schulman2017proximal], which is an efficient approximation of TRPO [@schulman2015trust]. Note that although PPO is originally designed for policy optimization, it can be adapted to transition learning with a fixed sampling policy and the PPO objective (Eq. 7 of [@schulman2017proximal]) $$\mathcal{L}_\text{PPO}(\phi)=\widehat{{\mathbb{E}}}_t\Big[ \min(r_t(\phi)\widehat{A}_t,~\text{clip}(r_t(\phi),1-\epsilon,1+\epsilon)\widehat{A}_t) \Big],$$ where $$r_t(\phi)=\frac{T_\phi(s_{t+1}|s_t,a_t)}{T_{\phi_{\text{old}}}(s_{t+1}|s_t,a_t)},~~~\widehat{A}_t:~\text{advantage function derived from the pseudo reward}~f(s_t,a_t,s_{t+1}).$$ To enhance the stability of transition learning, in addition to PPO we also optimize the maximum likelihood, which can be regarded as a regularizer. We empirically observe that jointly optimizing both the maximum likelihood and the PPO objective attains a better transition model for policy gradient.
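The clipped surrogate above carries over to transition learning unchanged, with the density ratio taken over $T_\phi$ instead of a policy. A minimal numpy sketch (the function name is ours; real implementations work with autograd tensors):

```python
import numpy as np

def ppo_transition_objective(logp_new, logp_old, adv, eps=0.2):
    """Clipped PPO surrogate with the transition-density ratio
    r_t = T_phi(s_{t+1}|s_t,a_t) / T_phi_old(s_{t+1}|s_t,a_t),
    computed from log-densities for numerical stability."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Maximized w.r.t. phi; the advantages come from the pseudo reward f(s,a,s').
    return np.mean(np.minimum(ratio * adv, clipped * adv))
```

The elementwise `min` keeps the objective pessimistic: a ratio outside $[1-\epsilon,1+\epsilon]$ yields no extra credit for positive advantages and full penalty for negative ones.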
The overall loss of the transition learning becomes $$\begin{aligned}
\label{eq:tloss}
\mathcal{L}_\text{transition}=-\mathcal{L}_\text{PPO}+\alpha\mathcal{L}_\text{mle},\end{aligned}$$ where $\mathcal{L}_\text{mle}$ is the MLE loss, which is policy-agnostic and can be estimated with all collected real transitions. For more implementation details, please see Appendix \[apdx:details\]. We consider a training procedure similar to SLBO [@luo2018slbo], which accounts for the fact that the value function depends on the varying transition model. As a result, unlike most MBRL methods, which perform only one model-policy update pair per round of real-environment sampling, SLBO proposes to take multiple update pairs per round so that the objective composed of the model loss and the value loss can be optimized. Our proposed *model imitation (MI)* method is summarized in Algorithm \[algo:mi\].

Experiments
===========

In this section, we would like to answer the following questions: (1) Does the proposed *model imitation* outperform the state-of-the-art in terms of sample complexity and average return? (2) Does the proposed *model imitation* benefit from distribution matching, and is it superior to its model-free and model-based counterparts, TRPO and SLBO?

![Learning curves of our MI versus two model-free and four model-based baselines. The solid lines indicate the mean of five trials and the shaded regions suggest standard deviation.[]{data-label="fig:summary"}](summary.pdf)

To fairly compare algorithms and enhance reproducibility, we adopt the open-sourced environments released along with a model-based benchmark paper [@langlois2019benchmarking], which is based on a physical simulation engine, MuJoCo [@todorov2012mujoco]. Specifically, we evaluate the proposed algorithm MI on four continuous control tasks including Hopper, HalfCheetah, Ant, and Reacher.
For the hyper-parameters mentioned in Algorithm \[algo:mi\] and coefficients such as the entropy regularization $\lambda$, please refer to Appendix \[apdx:hyper\]. We compare to two model-free algorithms, TRPO [@schulman2015trust] and PPO [@schulman2017proximal], to assess the benefit of the proposed model imitation, since our MI (Algorithm \[algo:mi\]) uses TRPO for the policy gradient that updates the agent policy. We also compare MI to four model-based methods. SLBO [@luo2018slbo] gives a theoretical guarantee of monotonic improvement for model-based deep RL and proposes to update a joint model-policy objective. PETS [@kurtland2018pets] proposes to employ uncertainty-aware dynamics models with sampling-based uncertainty propagation to capture both aleatoric and epistemic uncertainty. METRPO [@kurutach18metrpo] shows that insufficient data may cause instability and proposes to use an ensemble of models to regularize the learning process. STEVE [@jacob2018steve] dynamically interpolates among model rollouts of various horizon lengths and favors those whose estimates have lower error. Figure \[fig:summary\] shows the learning curves for all methods. In Hopper, HalfCheetah, and Ant, MI converges fairly fast and learns a policy significantly better than its competitors’. In Ant, even though MI does not improve the performance much beyond its initial value, the fact that it maintains the average return at around 1,000 indicates that MI can capture a better transition than other methods do with only 5,000 real transitions. Even though we do not employ an ensemble of models, the curves show that our learning does not suffer from high variance. In fact, the performance shown in Figure \[fig:summary\] indicates that the variance of MI is lower than that of methods incorporating ensembles such as METRPO and PETS. The questions raised at the beginning of this section can now be answered.
The learned model enables TRPO to explore the world without directly accessing real transitions, and therefore TRPO equipped with MI needs far fewer interactions with the real world to learn a good policy. Even though MI is based on the training framework proposed in SLBO, the additional distribution matching component allows the synthesized model to generate rollouts similar to those of the real environment, which empirically gives superior performance because we rely on long rollouts to estimate the policy gradient. To better understand the performance presented in Figure \[fig:summary\], we further compare MI with the benchmarked RL algorithms recorded in [@langlois2019benchmarking], including state-of-the-art MFRL methods such as TD3 [@fujimoto2018td3] and SAC [@haarnoja2018soft]. It should be noted that the reported results of [@langlois2019benchmarking] are the final performance after 200k time-steps, whereas we only use up to 100k time-steps to train MI. Table \[tab:comparison\] indicates that MI significantly outperforms most of the MBRL and MFRL methods with $50\%$ fewer samples, which verifies that MI is more sample-efficient by incorporating distribution matching.

         Hopper   HalfCheetah   Ant    Reacher
  ------ -------- ------------- ------ ---------
  MBRL   8/10     10/10         8/10   8/10
  MFRL   3/4      2/4           4/4    3/4

  : Proportion of benchmarked RL methods that are inferior to MI in terms of a $5\%$ t-test. $x/y$ indicates that among $y$ approaches, MI is significantly better than $x$ approaches. The detailed performance can be found in Table 1 of [@langlois2019benchmarking]. It should be noted that the reported results in [@langlois2019benchmarking] are the final performance after 200k time-steps, whereas ours use no more than 100k time-steps.
\[tab:comparison\]

Conclusion
==========

We have pointed out that state-of-the-art methods concentrate on learning synthesized models in a supervised fashion, which does not guarantee that the policy is able to reproduce a similar trajectory in the learned model, and therefore the model may not be accurate enough to estimate long rollouts. We have proposed to incorporate WGAN to achieve occupancy measure matching between the real transition and the synthesized model, and we have theoretically shown that matching implies closeness in cumulative rewards between the synthesized model and the real environment. To enable stable training across WGANs, we have suggested using a truncated version of WGAN to prevent training from getting stuck at local optima. The empirical success of WGANs in applications such as imitation learning indicates their potential to learn the transition with fewer samples than supervised learning. We have confirmed this experimentally by showing that MI converges much faster and obtains a better policy than state-of-the-art model-based and model-free algorithms.

Acknowledgement {#acknowledgement .unnumbered}
===============

The authors would like to acknowledge the National Science Foundation for the grant RI-1764078 and Qualcomm for the generous support.

Proofs {#apdx:proofs}
======

Proof for WGAN
--------------

Let $\alpha(s), \pi(a|s), T'(s'|s,a)$ be the initial state distribution, policy, and synthesized transition, respectively. Let $T$ be the true transition and $p(s,a,s')=\rho_{T}(s,a)T(s'|s,a)$ be the discounted distribution of the triple $(s,a,s')$. If the WGAN is trained to its optimal point, we have $$T(s'|s,a)=T'(s'|s,a),~\forall (s,a)\in\text{Supp}(\rho_{T}).$$ Because the loss function of WGAN is the 1-Wasserstein distance, we know $p(s,a,s')=p'(s,a,s')$ at its optimal points. Plugging this into the Bellman flow constraint Eq.
(\[eq:bellman\]), $$\begin{aligned}
\rho_{T'}(s,a)&=\pi(a|s)\Big[(1-\gamma)\alpha(s) + \gamma \int \rho_{T'}(s',a')T'(s|s',a')ds'da'\Big]\\
&= \pi(a|s)\Big[(1-\gamma)\alpha(s) + \gamma \int p'(s',a',s)ds'da'\Big]\\
&\overset{p=p'}{=} \pi(a|s)\Big[(1-\gamma)\alpha(s) + \gamma \int p(s',a',s)ds'da'\Big] = \rho_{T}(s,a).
\end{aligned}$$ That is, $$\text{WGAN~is~opt.}\Leftrightarrow p(s,a,s')=p'(s,a,s')\overset{\text{Bellman}}{\Leftrightarrow} \rho_{T}(s,a)=\rho_{T'}(s,a).$$ Finally, recalling $p(s,a,s')\triangleq\rho_{T}(s,a)T(s'|s,a)~\text{and}~p'(s,a,s')\triangleq\rho_{T'}(s,a)T'(s'|s,a)$, we arrive at $$\text{WGAN~is~opt.~iff~}T(s'|s,a)=T'(s'|s,a),~\forall (s,a)\in\text{Supp}(\rho_{T}).$$ Let $\rho_T(s,a),~\rho_{T'}(s,a)$ be the normalized occupancy measures generated by the true transition $T$ and the synthesized one $T'$. Suppose the reward is $L_r$-Lipschitz. If the training error of WGAN is $\epsilon$, then $|R(T)- R(T')| \leq \epsilon L_r/(1-\gamma)$. Observe that the occupancy measure $\rho_T(s,a)$ is a marginal distribution of $p(s,a,s')=\rho_T(s,a)T(s'|s,a)$. Because the distance between marginals is upper bounded by the distance between the joint distributions, we have $$W_1(\rho_T(s,a)\lvert\rvert \rho_{T'}(s,a)) \leq W_1(p(s,a,s')\lvert\rvert p'(s,a,s')) = \epsilon,$$ where $W_1$ is the 1-Wasserstein distance.
Then, the cumulative reward is bounded by $$\begin{split}
R(T)&=\frac{1}{1-\gamma}\int r(s,a)\rho_T(s,a)dsda=R(T')+\frac{1}{1-\gamma}\int r(s,a)\big(\rho_T(s,a)-\rho_{T'}(s,a)\big)dsda\\
&=R(T')+\frac{L_r}{1-\gamma}\int \frac{r(s,a)}{L_r}\big(\rho_T(s,a)-\rho_{T'}(s,a)\big)dsda\\
&\leq R(T')+\frac{L_r}{1-\gamma}\underset{{\left\lVert f\right\rVert}_L\leq 1}{\sup}\int f(s,a)\big(\rho_T(s,a)-\rho_{T'}(s,a)\big)dsda\\
&=R(T')+\frac{L_r}{1-\gamma}\underset{{\left\lVert f\right\rVert}_L\leq 1}{\sup}{\mathbb{E}}_{(s,a)\sim \rho_T}[f(s,a)]-{\mathbb{E}}_{(s,a)\sim \rho_{T'}}[f(s,a)]\\
&=R(T')+\frac{L_r}{1-\gamma}W_1(\rho_T\lvert\rvert\rho_{T'})\leq R(T')+\epsilon\frac{L_r}{1-\gamma},
\end{split}$$ where the first inequality holds because $r(s,a)/L_r$ is 1-Lipschitz and the last equality follows from Kantorovich-Rubinstein duality [@Villani2008_opt_transport]. Since the $W_1$ distance is symmetric, the same conclusion holds if we interchange $T$ and $T'$, so we arrive at $$|R(T)-R(T')|\leq \epsilon L_r/(1-\gamma).$$

Lemmas for Sampling Techniques
------------------------------

Let $L\sim \text{Geometric}(1-\beta)$. If $d>1$, then ${\mathbb{E}}[L^{-1/d}]= O((1-\beta)^{1/d}/\beta)$. \[lemma:geo\_frac\_moment\] $${\mathbb{E}}[L^{-1/d}] = \sum\limits_{i=1}^\infty i^{-1/d} (1-\beta)\beta^{i-1}=\frac{1-\beta}{\beta}\sum\limits_{i=1}^\infty \frac{\beta^i}{i^{1/d}}=\frac{1-\beta}{\beta}\text{Li}_{1/d}(\beta),$$ where $\text{Li}$ is the polylogarithm function. From [@Wood1992polylog], its limiting behavior is $$\text{Li}_{1/d}(e^{-\mu})=\Gamma(1-1/d) \mu^{1/d-1},~\text{as~}\mu\rightarrow 0^+,$$ where $\Gamma$ is the gamma function. Since $e^{-\mu}\rightarrow 1-\mu$ when $\mu\rightarrow 0^+$, we know that when $\beta\rightarrow 1^-$, $\text{Li}_{1/d}(\beta)\rightarrow \Gamma(1-1/d)(1-\beta)^{1/d-1}$. Finally, since $\Gamma(1-1/d)$ is a constant for fixed $d>1$, we conclude that ${\mathbb{E}}[L^{-1/d}]= O((1-\beta)^{1/d}/\beta)$.
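As an illustrative check of Lemma \[lemma:geo\_frac\_moment\] (function names and constants are ours), one can truncate the series numerically and compare it with the claimed rate:

```python
import numpy as np

def frac_moment(beta, d, n_terms=200_000):
    """E[L^(-1/d)] for L ~ Geometric(1 - beta), by truncating the series
    sum_i i^(-1/d) (1 - beta) beta^(i - 1)."""
    i = np.arange(1, n_terms + 1)
    return np.sum(i ** (-1.0 / d) * (1 - beta) * beta ** (i - 1))

def rate_ratio(beta, d=2):
    """Ratio of E[L^(-1/d)] to the claimed rate (1 - beta)^(1/d) / beta.
    It stays bounded and approaches Gamma(1 - 1/d) (= sqrt(pi) for d = 2)
    as beta -> 1, consistent with the polylogarithm asymptotics."""
    return frac_moment(beta, d) / ((1 - beta) ** (1.0 / d) / beta)
```

For $d=2$ the ratio increases with $\beta$ toward $\Gamma(1/2)=\sqrt{\pi}\approx 1.77$, confirming the $O((1-\beta)^{1/d}/\beta)$ scaling.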
Let $\rho_T(s,a)$ be the normalized occupancy measure generated by the triple $(\alpha,\pi,T)$ with discount factor $\gamma$. Let $\rho_T^\beta(s,a)$ be the normalized occupancy measure generated by the triple $(\rho_T,\pi,T)$ with discount factor $\beta$. If $\gamma>\beta$, then $D_{TV}(\rho_T\lvert\rvert \rho_T^\beta)\leq (1-\gamma)\beta/(\gamma-\beta)$. \[lemma:short\_occu\_bound\] By definition of the occupancy measure we have $$\begin{split}
&\rho_T(s,a)=\sum\limits_{i=0}^\infty (1-\gamma)\gamma^i f_i(s,a).\\
&\rho_T^\beta(s,a)=\sum\limits_{i=0}^\infty \sum\limits_{j=0}^i (1-\gamma)\gamma^{i-j}(1-\beta)\beta^j f_i(s,a),
\end{split}$$ where $f_i(s,a)$ is the density of $(s,a)$ at time $i$ if generated by the triple $(\alpha,\pi,T)$. The TV distance is bounded by $$\begin{split}
D_{TV}(\rho_T\lvert\rvert \rho_T^\beta )&\leq \frac{1}{2}\sum\limits_{i=0}^\infty\Big|(1-\gamma)\gamma^i - \sum\limits_{j=0}^i(1-\gamma)\gamma^{i-j}(1-\beta)\beta^j \Big|=\frac{1}{2}\sum\limits_{i=0}^\infty (1-\gamma)\gamma^i\Big| 1-\sum\limits_{j=0}^i (1-\beta)\Big(\frac{\beta}{\gamma}\Big)^j \Big|\\
&=\frac{1}{2}\sum\limits_{i=0}^\infty (1-\gamma)\gamma^i\frac{1}{\gamma-\beta}\Big|-\beta(1-\gamma) + \Big(\frac{\beta}{\gamma}\Big)^{i+1}(1-\beta)\gamma \Big|\\
&\overset{(*)}{=} \frac{(1-\gamma)\beta}{\gamma-\beta}\sum\limits_{i=0}^{M-1} \Big(-(1-\gamma)\gamma^i + (1-\beta)\beta^i\Big)=\frac{(1-\gamma)\beta}{\gamma-\beta}(\gamma^{M}-\beta^{M})\\
&\leq \frac{(1-\gamma)\beta}{\gamma-\beta},
\end{split}$$ where $(*)$ comes from the fact that $-\beta(1-\gamma)+(\frac{\beta}{\gamma})^{i+1}(1-\beta)\gamma$ is strictly decreasing in $i$. Since $\gamma>\beta$, its sign flips from $+$ to $-$ at some index, say $M$. Finally, the sums of the absolute values over $\sum_{i=0}^{M-1}$ and over $\sum_{i=M}^\infty$ are equal because total probability is conserved, so the excess on one side equals the deficit on the other; this removes the factor $1/2$.
Experiments
===========

Implementation Details {#apdx:details}
----------------------

We normalize states according to the statistics derived from the first batch of states from the real world. To ensure stability, we maintain the same mean $\mu_0$ and standard deviation $\sigma_0$ throughout the training process. Instead of directly predicting the next state, we estimate the state difference $s_{t+1}-s_t$ [@kurutach18metrpo; @luo2018slbo]. Since we incorporate state normalization, the transition network is trained to output $(s_{t+1}-s_t-\mu_0)/\sigma_0$. To enhance state exploration, we sample real transitions according to an exploration policy $\beta\sim \mathcal{N}(\mu_\theta(s), \sigma)$, where $\mu_\theta(s)$ is the mean of our Gaussian-parameterized policy $\pi_\theta$ and $\sigma$ is a fixed standard deviation. In addition, since we model the transition as a Gaussian distribution, we found that matching $\rho_{T'}^{\alpha, \pi_\theta}$ with $\rho_{T}^{\alpha, \beta}$ is empirically more stable and more sample-efficient than matching $\rho_{T'}^{\alpha, \beta}$ with $\rho_{T}^{\alpha, \beta}$. For the policy update, it is shown that using the mean $\mu_\phi$ of the Gaussian-parameterized transition can accelerate policy optimization and better balance exploration and exploitation. In order to enforce the Lipschitz constraint on the WGAN critic $f$, we employ gradient penalty [@gulrajani2017improved] with weight $10$.

Hyperparameters {#apdx:hyper}
---------------

HalfCheetah Hopper Reacher Ant -------------------------- ------------- -------- --------- ----- $N$ $\alpha$ 10 $n_\text{transition}$ $n_\text{policy}$ 20 60 100 30 horizon for model update 10 30 entropy regularization : List of hyper-parameters adopted in our experiments.[]{data-label="tab:hyper"}

[^1]: Equal contributions.
--- abstract: 'The main sequence of galaxies, a correlation between the star formation rates and stellar masses of galaxies, has been observed out to $z\sim4$. Galaxies within the scatter of the correlation are typically interpreted to be secularly evolving, while galaxies with star formation rates elevated above the main sequence are interpreted to be undergoing interactions or to be Toomre-unstable disks with starbursting clumps. In this paper we investigate the recent merger histories of three dusty star forming galaxies, identified by their bright submillimeter emission at $z\sim1.5$. We analyze rest-frame optical and UV imaging and rest-frame optical emission line kinematics from slit spectra obtained with MOSFIRE on Keck I, calculate Gini and M$_{20}$ statistics for each galaxy, and conclude that two are merger-driven while the third is an isolated disk galaxy. The disk galaxy lies $\sim$4$\times$ above the main sequence, one merger lies within the scatter of the main sequence, and one merger lies $\sim$4$\times$ below the main sequence. This hints that the location of a galaxy with respect to the main sequence may not be a useful discriminator of the recent star formation history of high- galaxies at $z\sim1$.' author: - 'Patrick M. Drew, Caitlin M. Casey, Asantha Cooray, and Katherine E. Whitaker' title: 'Three Dusty Star Forming Galaxies at $z\sim1.5$: Mergers and Disks on the main sequence' --- \[firstpage\]

Introduction
============

The majority of star forming galaxies form a correlation between their star formation rates (SFR) and stellar masses [e.g. @Rodighiero11a; @Whitaker12a; @Sargent14a; @Schreiber15a]. Often referred to as the main sequence of star forming galaxies (MS), the correlation has a tight scatter of $\approx$0.3dex in [e.g. @Brinchmann04a; @Noeske07a; @Daddi07a; @Elbaz07a; @Speagle14a; @Tacchella16a].
A second population, composed of starburst galaxies (SB), lies at star formation rates a few times higher than the MS at fixed $M_{\star}$ and does not have a tight scatter [e.g. @Rodighiero11a; @Sargent14a]. In addition to these two star forming populations, there is a population of quiescent galaxies that lie below the main sequence [e.g. @Noeske07a; @Tacchella16a; @Leslie16a]. The small scatter in the MS is typically interpreted to be the result of smooth star formation driven by the net gas inflow and outflow rate and the gas consumption rate [e.g. @Dutton10a; @Bouche10a; @Tacchella16a; @Scoville17a], while galaxies with enhanced star formation rates are typically observed to be undergoing major mergers or short-lived starburst events (e.g. @Mihos96a; @Di-Matteo08a [@Bournaud11a; @Rodighiero11a; @Whitaker12a; @Silverman18a]). Indeed, the strongest SB galaxies in the local Universe are nearly all observed to be undergoing major mergers (e.g. @Joseph85a; @Armus87a; @Sanders96a). Dusty star forming galaxies (DSFGs) are a submillimeter-identified class of galaxy that are selected for their bright dust emission and characteristically elevated SFRs. They likely peak in number density around $z\sim2$ [@Casey14a] and represent a crucial phase in the evolution of $z=0$ giant elliptical galaxies [e.g. @Swinbank04a; @Swinbank06a; @Engel10a; @Michaowski10b; @Menendez-Delmestre13a; @Toft14a]. The physical cause of their bright sub-mm emission is still a matter of debate (see review by @Casey14a). Those with the highest star formation rates, of order a few times 10$^{3}$ M$_{\odot}$ yr$^{-1}$, are virtually all driven by major mergers [e.g. @Swinbank04a; @Greve05a; @Alaghband-Zadeh12a]. However, many studies conclude they are not unanimously merger driven, especially at more modest SFRs of $\sim$10$^2$ M$_{\odot}$ yr$^{-1}$ [e.g. @Tacconi08a; @Genel08a; @Casey11a; @Swinbank10a; @Swinbank11a; @Bothwell10a; @Bothwell13a; @Hodge12a; @Drew18a; @McAlpine19a].
The selection of DSFGs is typically based on a flux density cutoff in the sub-mm, canonically $S_{\nu}$ $\gtrsim$ 2–5 mJy, or SFR $\gtrsim$ 100 M$_{\odot}$ yr$^{-1}$ for galaxies at $z\gtrsim 1$ [e.g. @Smail97a; @Barger98a; @Hughes98a; @Eales99a]. This corresponds to a selection for high SFRs. Some studies find DSFGs lie on the high-mass end of the main sequence at $z>2$ [@Michaowski12a; @Michaowski14a] and suggest that this implies they are driven by gas accretion rather than merging [@Michaowski17a]. Given their selection, for a given bin of SFR it is stellar mass that determines their location with respect to the main sequence. With increasing redshift, the normalization of the MS increases while the typical scatter remains the same [e.g. @Elbaz07a; @Daddi07a; @Rodighiero10a; @Whitaker14a; @Speagle14a; @Scoville17a]. There is tension in the literature over whether mergers outside the local Universe lie above or within the MS because the SFRs of SB galaxies in the local Universe are comparable to those of galaxies on the MS at moderate redshifts. Some studies find mergers predominantly lie in the SB regime above the MS at $z>1$ [e.g. @Kartaltepe12a; @Hung13a; @Cibinel19a], or predominantly occupy higher SFRs, irrespective of the stellar mass [e.g. @Kartaltepe10a; @Ellison13a]. Other studies suggest that the stellar masses typically measured for the elevated SB population at $z\sim2$ are unreliable and that most SB galaxies actually comprise the high-$M_{\star}$ end of the MS [e.g. @Michaowski12a; @Michaowski14a; @Koprowski14a].
  [Source]{}                           [450.25]{}                               [450.27]{}                               [850.95]{}
  ------------------------------------ ---------------------------------------- ---------------------------------------- ----------------------------------------
  RA                                   10:00:28.58                              09:59:42.92                              09:59:59.80
  Dec                                  +02:19:28.3                              +02:21:45.1                              +02:27:07.4
  $z_{\rm spec}$                       1.515                                    1.531                                    1.555
  M$_{\star}$ (M$_{\odot}$)            (3.4$\pm$0.5)$\times$10$^{11}$           (3.0$\pm$0.6)$\times$10$^{11}$           (3.8$\pm$3.0)$\times$10$^{10}$
  L$_{\rm IR}$ ([[L$_{\odot}$]{}]{})   (1.5$^{+0.7}_{-0.5}$)$\times$10$^{12}$   (4.1$\pm$0.5)$\times$10$^{12}$           (3.0$^{+1.2}_{-0.9}$)$\times$10$^{12}$
  SFR (M$_{\odot}$ yr$^{-1}$)          157$^{+80}_{-53}$                        382$^{+51}_{-45}$                        373$^{+110}_{-90}$
  Gini                                 0.53$\pm$0.01                            0.63$\pm$0.01                            0.48$\pm$0.01
  M$_{20}$                             $-$1.725                                 $-$1.808                                 $-$1.654
  Physical Driver                      Merger                                   Merger                                   Disk

\[tab:physical\] [ – Positions are from @Casey17a. M$_{\star}$ is estimated using MAGPHYS [@da-Cunha08a] with the HIGHZ extension [@da-Cunha15a]. The errors on M$_{\star}$ differ slightly from @Casey17a because they are estimated using a range of SFHs from a continuous star formation history to an instantaneous burst, following the procedure of @Hainline11a. The errors on [L$_{\rm IR}$]{} and, as a direct result, SFR are estimated following the procedure of @casey12a.]{} The determination of galaxy classification is vital to addressing the question of what role mergers play in galaxy evolution through cosmic time. Numerous techniques have been employed to determine galaxy classification including imaging, kinematics, and non-parametric analyses (see review by @Conselice14a). Imaging studies classify galaxies based on visual signatures of mergers, interactions, or disks in images [e.g. @Hubble26a; @deVaucouleurs63a; @Abraham96a; @Abraham96b; @Brinchmann98a; @Kartaltepe15a]. While large imaging studies make this kind of analysis relatively easy to perform, they may fall short in a few key ways at $z\gtrsim1$ when trying to distinguish mergers from disks. At these redshifts, the optical waveband begins to probe rest-frame UV emission originating from younger stars.
Not only is this light more highly dust obscured than at longer rest-frame wavelengths, but the UV is not a sensitive probe of the bulk of the stellar mass, which most closely resembles the distribution of the total mass of the galaxy. Additionally, kinematically regular galaxies may appear morphologically disturbed in imaging [e.g. @Bournaud08a], because galaxies at $z>1$ tend to be clumpier than their low-$z$ counterparts [e.g. @Abraham96a; @Elmegreen06a]. This may lead to disks being misclassified as mergers or irregular galaxies. Also, surface brightness dimming may hide features of a merger, such as tidal tails (e.g. @Hibbard97a). Kinematic studies overcome some of the issues associated with imaging studies. They measure the motions of gas inside galaxies via observations of the Doppler shift of emission lines. Kinematic observations of disk galaxies exhibit smooth rotational fields, while mergers show more complex velocity fields (see @Glazebrook13a for a review). Kinematic studies are still susceptible to misclassification, however, especially if observed near or shortly after coalescence or if observed at low spatial resolution [e.g. @Hung15a]. Non-parametric analyses [e.g. @Abraham03a; @Lotz04a; @Lotz08a] rely on the grouping of galaxies in parameter space to identify galaxy morphology and assembly history. The strength of this technique is that it can be applied to any type of galaxy without prior knowledge about the form the model should take. Its weakness is that the distinction between classes may not be as clear as with imaging or kinematic studies [@Conselice14a]. In this paper we present rest-frame UV and rest-frame optical imaging, a rest-frame optical emission line kinematic analysis, and a non-parametric analysis of three DSFGs at $z\sim1.5$ as case studies of high SFR-selected galaxies. Our goal is to identify their physical drivers.
We then compare their location in the SFR–$M_{\star}$ plane with their physical drivers to investigate whether merging DSFGs lie above, within, or below the MS. Section \[sec:obs\] of this paper describes our observations and data reduction, Section \[sec:results\] presents our imaging and kinematic analyses, Section \[sec:Gini-M20\] describes Gini-M$_{20}$ statistics, Section \[sec:MS\] discusses the galaxies in the context of the MS, and Section \[sec:conc\] summarizes. Throughout this work we adopt a [*Planck*]{} $\Lambda$CDM cosmology with H$_{0} = 67.7$ [[kms$^{-1}$Mpc$^{-1}$]{}]{}, $\Omega_{\Lambda} = 0.6911$ [@Planck-Collaboration16a] and a Chabrier initial mass function [IMF; @Chabrier03a]. Observations {#sec:obs} ============ ![image](snr_combined.pdf){width="99.00000%"} The spectroscopic data presented in this manuscript were obtained with the Multi-Object Spectrometer For Infra-Red Exploration [MOSFIRE; @Mclean10a; @McLean12a] on Keck I as part of a spectroscopic follow-up campaign to measure the redshifts of DSFGs identified at flux densities $>$12.4 mJy and $>$2.4 mJy at 450 [$\mu$m]{} and 850 [$\mu$m]{} respectively with [Scuba-2]{} in the COSMOS field [see @Casey13a; @Casey17a]. The galaxies in the present paper were selected from their parent sample based on their high signal to noise ratio (SNR) and spatially resolved [H$\alpha$]{} and \[NII\] emission. The three galaxies presented in this paper are the only ones of the original 114 targets in the @Casey17a sample with sufficiently resolved emission to allow for a kinematic analysis. One galaxy, named 850.95, is also published in @Drew18a [hereafter D18] as an observational counter-example to the hypothesis that galaxies at intermediate redshifts may have declining rotation curves. The rotation curve of this galaxy is flat in the outer galaxy, much like typical disk galaxy rotation curves at $z=0$.
In this paper we discuss the [H$\alpha$]{} kinematics of 850.95 and refer the reader to D18 for a discussion of its dark matter content. The three galaxies presented in this paper, 450.25, 450.27, and 850.95, have prefixes of either 450 or 850, corresponding to the wavelength at which they were initially identified in @Casey13a. Only 450.27 is detected at both 450 [$\mu$m]{} and 850 [$\mu$m]{}. Table \[tab:physical\] lists basic characteristics of each galaxy. For additional details about the parent sample and its selection, see @Casey17a. H-band spectroscopic observations of 450.25, 450.27, and 850.95 were obtained on 2013 December 31 at W. M. Keck Observatory. The full width at half maximum (FWHM) of the seeing was 0.85$''$. Galaxy 450.25 was observed for a total integration time of 2880 s, 450.27 for 1320 s, and 850.95 for 1920 s. The slit width was set to 0.7$''$ and a 1.5$''$ ABBA nod pattern was used between exposures. The spectra were reduced with the MOSFIRE Data Reduction Pipeline[^1], and one-dimensional spectra were extracted using the [iraf]{}[^2] package, [apall]{}. Apertures of extraction were placed on each pixel with an aperture radius of half the average seeing. Adjusting the aperture size does not significantly change the radial velocity and velocity dispersion measurements presented in the following sections. Variance weighting was used in the spectral extraction. The figure above shows SNR spectra for 450.25 and 450.27, including [H$\alpha$]{}, \[NII\], and continuum emission. See figure 1 in @Drew18a for the SNR spectrum of 850.95. The white crosses denote the seeing in the vertical dimension and the combined instrument resolution and seeing in the horizontal dimension. The negative images to the north and south of the galaxy are characteristic of the data coaddition step associated with the nod pattern. Slits were randomly oriented with respect to the galaxies because MOSFIRE slit masks prevent custom orientations for individual galaxies when observing in multiplex mode.
We simultaneously fit Gaussians to [H$\alpha$]{}, [\[N[ii]{}\]]{}$\lambda$6548, and [\[N[ii]{}\]]{}$\lambda$6583 in the extracted apertures using the [iraf]{} package, [splot]{}, forcing the centroids to have fixed spacing and the Gaussian widths to be tied. We exclude pixels contaminated with telluric emission from the fits. Errors in fit centroids and widths are derived using 1000 Monte Carlo perturbations of the data by sky noise. The bottom panels of figures \[fig:PV45025\] and \[fig:PV45027\] show the position-velocity and position-dispersion diagrams measured from the MOSFIRE spectra. These will be discussed further in Section \[sec:results\]. We find no evidence of broad line emission in any of the spectra, and @Casey17a finds no evidence of X-ray emission or mid-IR SED slopes in these galaxies that would indicate luminous active galactic nuclei are present. The imaging includes H band from the UltraVISTA survey (rest-frame $\sim$6400 [$\mbox{\AA}$]{}; @McCracken12a) and *Hubble* F814W (rest-frame $\sim$3200 [$\mbox{\AA}$]{}; @Koekemoer07a) data. While Y, J, H, and Ks imaging is available for these galaxies from the UltraVISTA survey, we present the H band because it matches the band the spectra are observed in. The H band imaging is consistent with the longest-wavelength imaging available from the UltraVISTA survey, the Ks band. These observations are the longest-wavelength high spatial resolution imaging available. The F814W imaging is the longest-wavelength *HST* imaging available. The images are presented in the top rows of Figures \[fig:PV45025\] and \[fig:PV45027\] and will be discussed further in Section \[sec:results\]. Kinematics and Morphologies {#sec:results} =========================== ![The top panels show ground-based H-band (rest-frame $\sim$6400 [$\mbox{\AA}$]{}; @McCracken12a) and [*HST*]{} F814W (rest-frame $\sim$3200 [$\mbox{\AA}$]{}; @Koekemoer07a) imaging on the left and right, respectively, with the MOSFIRE slit overplotted in white.
The tidal tails seen in both images suggest 450.25 is undergoing an interaction or is in the early stages of a merger. The bottom panels show position-velocity and position-dispersion diagrams measured along the MOSFIRE slit. They are disordered fields showing no symmetry.[]{data-label="fig:PV45025"}](45025_combined.pdf "fig:"){width="0.99\columnwidth"} \[fig:image45025\] ![The top panels show ground-based H-band (rest-frame $\sim$6400 [$\mbox{\AA}$]{}) and [*HST*]{} F814W (rest-frame $\sim$3200 [$\mbox{\AA}$]{}) imaging on the left and right, respectively, with the MOSFIRE slit overplotted in white. The continuum emission running perpendicular to and outside of the MOSFIRE slit in the H band image likely belongs to a galaxy at higher redshift not physically associated with 450.27 because the colors of the two systems are very different. The bottom panels show position-velocity and position-dispersion diagrams measured along the MOSFIRE slit. The velocity curve looks Keplerian, but the dispersion curve is not symmetric in the magnitudes of velocities measured on each side of the galaxy.[]{data-label="fig:PV45027"}](45027_combined.pdf){width="0.99\columnwidth"} We classify our galaxies based on their position-velocity and position-dispersion diagrams into two categories: mergers or disks. In the position-velocity diagram we expect a disk to have a smooth, symmetric velocity field about a single spatial axis. In the position-dispersion diagram we expect a peak centered on the spatial symmetry axis of the position-velocity diagram. Evidence for a merger would be disrupted position-velocity and position-dispersion fields and/or a discontinuity in velocity between two galaxies [e.g. @Glazebrook13a and references therein]. 450.25 ------ The top row of Figure \[fig:PV45025\] shows H-band (left; rest-frame $\sim$6400 [$\mbox{\AA}$]{}) and [*Hubble*]{} filter F814W imaging (right; rest-frame $\sim$3200 [$\mbox{\AA}$]{}) of 450.25. Tidal tails connect two galaxies and indicate this is a merging system.
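The Monte Carlo error procedure of Section \[sec:obs\] (perturbing the spectrum by the sky noise and re-measuring 1000 times) can be sketched as below. For brevity the centroid here is a flux-weighted mean rather than the tied Gaussian fits performed with [splot]{}; the function name and defaults are illustrative, not from our pipeline.

```python
import numpy as np

def mc_centroid_error(x, flux, sky_sigma, n_trials=1000, seed=0):
    """Monte Carlo centroid uncertainty: add sky noise to the spectrum
    and re-measure the line centroid, then take the spread of the
    resulting centroids as the 1-sigma error."""
    rng = np.random.default_rng(seed)
    centroids = np.empty(n_trials)
    for i in range(n_trials):
        perturbed = flux + rng.normal(0.0, sky_sigma, size=flux.size)
        weights = np.clip(perturbed, 0.0, None)  # avoid negative weights
        centroids[i] = np.sum(x * weights) / np.sum(weights)
    return centroids.std()
```

The same loop applies to any fitted quantity (widths, amplitudes) by swapping the measurement inside the loop.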
The straight-line separation between the two main galaxy components is of order 50 kpc, indicating this is an early stage merger or interaction prior to coalescence. The H-band emission is not well fit by a Sérsic profile [@Sersic63a]. There are significant warps in the fit residual map caused by the merger. The H-band Sérsic fitting process is discussed further in Section \[sec:Gini-M20\]. The classification of merger is determined without the need to consider the position-velocity and position-dispersion diagrams; however, we present them here for completeness. The bottom two panels of Figure \[fig:PV45025\] show the position-velocity and position-dispersion diagrams of 450.25 measured from the MOSFIRE spectrum. The velocity and dispersion fields show disorder characteristic of a merger. We do not correct the measured velocities for inclination or spatial resolution effects. 450.27 {#subsec:450.27} ------ ![Gini-M$_{20}$ diagram with our sample overplotted. Squares indicate classifications of mergers, while the circle indicates a disk, as determined by our morphological and kinematic analyses. The black boundary lines are adopted from @Lotz08a, which are calibrated to galaxies between $0.2<z<1.2$ in the EGS *HST* survey. While 450.25 is undergoing a merger, it falls in the disk region. This may be due to the fact that it is observed in an early stage of the merger and H-band emission is therefore not yet concentrated in a small region. The imaging and kinematic analysis of 450.27 was somewhat ambiguous (see Section \[subsec:450.27\]), but given its location in the Gini-M$_{20}$ diagram we conclude it is a merger. Errors on Gini are calculated by performing 10$^4$ statistical bootstrap measurements of the pixels associated with each galaxy.
Errors on M$_{20}$ were not calculated.[]{data-label="fig:Gini-M20"}](GINI-M20.pdf){width="0.99\columnwidth"} The top row of Figure \[fig:PV45027\] shows H-band (left; rest-frame $\sim$6400 [$\mbox{\AA}$]{}) and [*HST*]{} filter F814W (right; rest-frame $\sim$3200 [$\mbox{\AA}$]{}) imaging of 450.27. The H-band image shows a perpendicular emission component approximately 1$\arcsec$ to the west of the targeted component that is not visible in the rest-frame UV. We conclude this is an unassociated galaxy at higher redshift based on the very different photometric colors of the two components, though confirmation of this would require follow-up observations. The *HST* image shows two distinct star forming knots with emission centroids separated by $\sim$9 kpc. These knots may be two star forming knots in a single galaxy or they may be two separate galaxies close to coalescence. The knots are each well-fit by Sérsic profiles with bright central cores, suggesting perhaps that they could be two separate galaxies close to coalescence. The northern knot seen in the upper right panel of Figure \[fig:PV45027\] has a Sérsic index of $n=1.2$ with a half light radius of $r_{1/2}=1.9$ kpc and the southern knot has a Sérsic index of $n=0.6$ with a half light radius of $r_{1/2}=2.8$ kpc. However, the rest-frame UV is expected to be $>$90% extincted in galaxies with star formation rates as high as that of 450.27 [@Whitaker17a]. The H-band emission is not well fit by a Sérsic profile because the best-fit parameters are unphysical. Its residual map shows two peaks coincident spatially with the UV knots. The H-band Sérsic fitting process is discussed further in Section \[sec:Gini-M20\]. The bottom two panels of Figure \[fig:PV45027\] show position-velocity and position-dispersion diagrams for 450.27. The total range in radial velocities is $\sim$600 [kms$^{-1}$]{}, which is a lower limit because we do not correct for galaxy inclination. The velocity curve looks Keplerian. It is smoothly varying and symmetric. The velocity dispersion, on the other hand, looks disturbed.
It is asymmetric in the magnitude of velocities on either side of the galaxy. This may be a signature of a galaxy merger or interaction. Recent works by @Hung15a and @Simons19a demonstrate that mergers at high redshift may display disk-like kinematics when observed at low spatial resolution. As we will discuss in Section \[sec:Gini-M20\], this galaxy lies in the merger region of the Gini-M$_{20}$ diagram. Taken together with the imaging and kinematics, we conclude this is likely a merging system. 850.95 {#section-1} ------ The third galaxy we analyze is 850.95, which is first presented by D18. Figure 2 of D18 shows the galaxy in H-band (left; rest-frame $\sim$6400[$\mbox{\AA}$]{}) and *Hubble* F814W imaging (right; rest-frame $\sim$3200[$\mbox{\AA}$]{}). The H-band emission profile is best fit by an exponential disk with a Sérsic index of $n=1.29{\raisebox{.2ex}{$\scriptstyle\pm$}}0.03$ and a disk inclination of $i=87{\raisebox{.2ex}{$\scriptstyle\pm$}}2^{\circ}$. The right panel of Figure 2 in D18 shows an offset between the rest-frame UV and dust continuum emission, possibly as a result of the near edge-on orientation of the disk. Figure 4 in D18 shows position-velocity and position-dispersion diagrams of 850.95 with clear kinematic signatures of ordered disk rotation. The position-velocity diagram shows a smooth, symmetric velocity field with a flat outer-galaxy rotation velocity of $285{\raisebox{.2ex}{$\scriptstyle\pm$}}12$[kms$^{-1}$]{}. The curve is well-fit by an arctangent function. The position-dispersion diagram shows a smooth, symmetric, centrally peaked dispersion field with a systemic ionized gas velocity dispersion of $48{\raisebox{.2ex}{$\scriptstyle\pm$}}4$[kms$^{-1}$]{}. The exponential disk emission profile along with the smooth arctangent velocity profile strongly suggest 850.95 is an example of a DSFG that is not undergoing a major merger. 
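The arctangent velocity profile fit to 850.95 is commonly written as $v(r) = v_{\rm sys} + (2/\pi)\, v_{\rm flat} \arctan(r/r_t)$, rising over a turnover radius $r_t$ and flattening at $v_{\rm flat}$. A minimal sketch of this model follows; the parameter names are illustrative, not those of D18.

```python
import numpy as np

def arctan_rotation_curve(r, v_sys, v_flat, r_t):
    """Arctangent rotation-curve model: v -> v_sys at r = 0 and
    v -> v_sys + v_flat as r >> r_t (the flat outer-galaxy velocity)."""
    return v_sys + (2.0 / np.pi) * v_flat * np.arctan(r / r_t)
```

With $v_{\rm flat} = 285$ [kms$^{-1}$]{}, the value reported for 850.95, the model asymptotes to the observed flat outer rotation velocity.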
Gini and M$_{20}$ Diagnostics {#sec:Gini-M20} ============================= Next we compute the non-parametric diagnostics Gini and M$_{20}$ (e.g. @Abraham03a, @Lotz04a, @Lotz08a) for each of our galaxies. The Gini statistic quantifies how the light is distributed throughout a galaxy. A Gini value of 1 would imply all the emission originates from a single pixel, while a Gini value of 0 would imply a uniform light distribution across multiple pixels. M$_{20}$ measures the second moment of the galaxy’s brightest 20% of pixels relative to the total second moment. These two quantities have been shown to roughly separate mergers, ellipticals, and disk galaxies [e.g. @Lotz08a]. In order to perform the analysis, first we isolate the pixels associated with each galaxy above a threshold SNR of 8 in the H-band imaging using the Python package, Photutils [@Bradley19a]. Thresholds are chosen to be as low as possible while still separating unrelated field galaxies in the segmentation maps. Next, we run the Photutils source deblending routine on galaxy 450.27 in order to separate the two perpendicular components seen in the H-band imaging (see Figure \[fig:PV45027\]). Whether or not this deblending is performed does not change the Gini-M$_{20}$ classification of merger for this galaxy. Finally, we use the Python package, Statmorph [@Rodriguez-Gomez19a], to measure Gini and M$_{20}$ statistics from the segmentation maps. Statmorph also simultaneously fits Sérsic profiles to the pixels associated with each galaxy in the segmentation maps, the results of which we discussed in previous sections. Figure \[fig:Gini-M20\] shows our galaxies in Gini-M$_{20}$ space. Galaxy 450.27 lies within the merger region, while galaxies 450.25 and 850.95 lie within the disk region. Our kinematic and morphological analyses conclude 450.25 is a merger, 850.95 is a disk, and 450.27 is possibly a merger, although it is a bit ambiguous.
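The two statistics defined above can be computed directly from an image. The sketch below follows the standard definitions of @Lotz04a rather than the Statmorph implementation we actually used; for brevity the M$_{20}$ center is the flux-weighted centroid instead of the moment-minimizing center.

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of the pixel fluxes: 0 for a perfectly uniform
    light distribution, 1 if a single pixel holds all the flux."""
    f = np.sort(np.abs(np.ravel(pixels)))
    n = f.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

def m20(image):
    """M20: log ratio of the second moment of the brightest 20% of the
    flux to the total second moment; more negative = more concentrated."""
    y, x = np.indices(image.shape)
    f = image.ravel().astype(float)
    xc = np.sum(x.ravel() * f) / f.sum()  # flux-weighted centroid
    yc = np.sum(y.ravel() * f) / f.sum()
    m_i = f * ((x.ravel() - xc) ** 2 + (y.ravel() - yc) ** 2)
    order = np.argsort(f)[::-1]           # brightest pixels first
    csum = np.cumsum(f[order])
    idx = np.searchsorted(csum, 0.2 * f.sum())
    bright = order[:idx + 1]              # pixels holding 20% of flux
    return np.log10(m_i[bright].sum() / m_i.sum())
```

In practice both statistics should be evaluated only over the segmentation-map pixels of a single source, as described above.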
Considering this together with the location of 450.27 in the Gini-M$_{20}$ diagram, we conclude that it is indeed a merger. The Gini-M$_{20}$ plot in Figure 7 of @Lotz08a shows that the overwhelming majority of galaxies that lie in the merger region of Gini-M$_{20}$ space are true mergers. This figure also shows that mergers may lie in any region of Gini-M$_{20}$ space. Galaxy 450.25 is confirmed via imaging to be undergoing a merger or interaction (see Figure \[fig:PV45025\]), but it lies within the disk region of the Gini-M$_{20}$ diagram. This may be caused by the fact that it is in an early merger/interaction stage, so the H-band emission is not yet concentrated in just a few pixels. Main Sequence of Galaxies {#sec:MS} ========================= ![Our sample plotted against three main sequence fits from the literature. Our galaxies comprise the high stellar mass end of the MS. The orange line is the fit from @Rodighiero11a to data between $1.5<z<2.5$, the green line is the fit from @Whitaker14a to data between $1.5<z<2.0$, and the purple line is the fit from @Koprowski16a to data at $z>1.5$. The shaded regions are ${\raisebox{.2ex}{$\scriptstyle\pm$}}$0.3 dex from each fit, which is the typical 1$\sigma$ scatter observed in the distribution [e.g. @Whitaker14a]. []{data-label="fig:MS"}](MS.pdf){width="0.99\columnwidth"} Now we consider the merger/disk classifications in the context of the main sequence of star forming galaxies. The star formation rates and stellar masses of our sample are reported by @Casey17a, who measure these quantities using the high-$z$ extension of MAGPHYS [@da-Cunha08a; @da-Cunha15a] using multi-wavelength photometry (UV through sub-mm) from the COSMOS collaboration [@Capak07a; @Laigle16a].
In the present paper, to account for systematic uncertainties on stellar mass caused by the assumption of a star formation history, we follow the procedure of @Hainline11a, which was developed to estimate uncertainties on $M_{\star}$ for similarly selected DSFGs at $z\sim2$. To summarize, we take the errors on $M_{\star}$ to be half the difference between the stellar masses estimated using instantaneous burst histories and those estimated using continuous star formation histories. Works by @Michaowski12a [@Michaowski14a] demonstrate that different assumed SFHs can strongly affect the derived $M_{\star}$. This systematic uncertainty from the choice of SFH is larger than those reported by the MAGPHYS fits. Figure \[fig:MS\] shows our sample plotted on the main sequence of star forming galaxies at $z\sim1.5$ from three works in the literature. The first [@Rodighiero11a] is fitted to far- and near-IR selected galaxies between $1.5<z<2.5$ in the COSMOS and GOODS-South fields using combined UV and IR SFRs. The second [@Whitaker14a] is fitted to galaxies from the 3D-HST photometric catalogs between $1.5<z<2.0$ using combined UV and IR SFRs. The third [@Koprowski16a] is fitted to 850 $\mu$m selected galaxies at $z>1.5$ from the S2CLS survey [@Geach17a] using UV through sub-mm SEDs. Galaxy 450.27 is the only galaxy to lie within the 0.3 dex scatter of the main sequence. Galaxy 850.95 is the only disk galaxy in our sample, but it lies at a SFR $\sim4 \times$ greater than the MS. Galaxy 450.25 is a merger with visible tidal tails, but it lies at a SFR $\sim4 \times$ less than the MS. The overall distribution of the galaxies is consistent with the works of @Michaowski12a [@Michaowski14a; @Michaowski17a] and @Koprowski16a, who find that DSFGs roughly comprise the high-$M_{\star}$ end of the MS. In particular, all three galaxies fit within the scatter of the samples at $1.0<z<1.5$ and $1.5<z<2.0$ in Figure 10 of @Michaowski17a.
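A galaxy's offset from a log-linear MS fit of the form $\log_{10} {\rm SFR}_{\rm MS} = a + b\,(\log_{10} M_{\star} - 10)$ is a one-line computation. In the sketch below the coefficients $a$ and $b$ are placeholders, not the published fits plotted above; substitute the coefficients of @Rodighiero11a, @Whitaker14a, or @Koprowski16a as appropriate.

```python
import numpy as np

def offset_from_ms(sfr, mstar, a=1.0, b=0.8):
    """Offset in dex of a galaxy from a log-linear main-sequence fit
    log10(SFR_MS) = a + b * (log10(M*) - 10). Coefficients a, b are
    placeholders; a galaxy is within the MS scatter if |offset| < 0.3."""
    log_sfr_ms = a + b * (np.log10(mstar) - 10.0)
    return np.log10(sfr) - log_sfr_ms
```

An offset of $+0.6$ dex corresponds to the factor-of-$\sim$4 elevation quoted for 850.95, and $-0.6$ dex to the suppression of 450.25.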
@Michaowski17a suggest that, because DSFGs comprise the high-$M_{\star}$ end of the MS, major mergers are not a dominant driver of their star formation rates. Mergers are short-lived events that are expected to elevate the star formation rates of galaxies above the main sequence. The present paper, as well as many other works [e.g. @Di-Matteo08a; @Hung13a; @Cibinel19a], demonstrates that merging systems may lie on the main sequence. @Puglisi19a find that up to 50% of the most massive galaxies on the MS at $z\sim1.3$ may have star formation driven by merging. Major merger activity may not enhance star formation rates at every stage of a merger (e.g. @Bergvall03a; @Di-Matteo08a [@Jogee09a; @Narayanan15a; @Fensch17a; @Silva18a]). Additionally, high resolution hydrodynamical simulations by @Fensch17a show that mergers at high redshift have lower star formation efficiencies compared with those at low redshift. We caution the reader that a detailed analysis of the merger classifications of DSFGs on the high-$M_{\star}$ end of the MS needs to be performed before it can be concluded that they are not undergoing merging. It is interesting that the SFR of 450.25, a clear early stage merger, is $\sim$4$\times$ lower than the MS value at its $M_{\star}$. Quiescence is known to correlate with galaxy compactness [e.g. @Bell12a; @Lee18a], but the location of 450.25 in Gini-M$_{20}$ space demonstrates the galaxy is not compact. It is possible that 450.25 was previously quenched and the merger has yet to fully turn star formation on again. A study of the molecular gas content of this galaxy would be illuminating. Summary {#sec:conc} ======= Mergers or galaxy interactions drive the star formation rates of galaxies lying at higher SFRs than the main sequence at $z=0$ [e.g. @Mihos96a]; however, at higher redshift, where the star formation rates of galaxies at all $M_{\star}$ are elevated, the distinction between these two populations is less clear.
Kinematic observations of DSFGs in the literature reveal a mix between merging and secularly evolving galaxies [e.g. @Swinbank06a; @Alaghband-Zadeh12a], but such studies are limited to small sample sizes because they are observationally expensive. The determination of where DSFGs lie in the SFR–$M_{\star}$ plane is important to help determine their importance in galaxy evolution through the cosmos. Some studies find that DSFGs sit above the main sequence [e.g. @Hainline11a], but recent work by @Michaowski12a [@Michaowski14a; @Michaowski17a] has determined that the average DSFG actually comprises the high stellar mass end of the MS, and concludes that this is evidence that major mergers do not drive their star formation rates. In this paper we combine imaging, kinematic, and non-parametric analyses to determine whether three sub-mm identified DSFGs at $z\sim1.5$ have merger-driven or secular recent SFHs. We find two to be undergoing mergers or interactions and one to be an isolated disk galaxy. Rest-frame UV and optical imaging of galaxy 450.25 shows tidal tails, which are clear evidence it is undergoing a merger or interaction. The rest-frame optical emission is not well fit by a Sérsic profile because the best-fit parameter values are unphysical. This is due to warps caused by the interaction/merger. Position-velocity and position-dispersion diagrams reveal disorder characteristic of a merger. The Gini coefficient and M$_{20}$ values, $G=0.534{\raisebox{.2ex}{$\scriptstyle\pm$}}0.006$ and M$_{20} = -1.725$, place 450.25 in the disk region of the Gini-M$_{20}$ diagram, possibly because this is an early stage merger or interaction. Rest-frame UV imaging of galaxy 450.27 shows two distinct star forming knots which we conclude are likely two galaxy cores close to coalescence. Rest-frame optical imaging shows one distinct emission region rather than two knots, although it is observed through seeing with a FWHM comparable to the separation between the two UV knots.
The rest-frame optical emission is not well fit by a Sérsic profile because the best-fit parameter values are unphysical. The position-velocity diagram looks Keplerian, while the position-dispersion diagram looks asymmetric in the magnitude of velocity dispersion on each side of the galaxy. The Gini coefficient and M$_{20}$ values, $G=0.633^{+0.007}_{-0.005}$ and M$_{20} = -1.808$, place 450.27 in the merger region of the Gini-M$_{20}$ diagram. Rest-frame optical imaging of galaxy 850.95, published by D18, is well fit by an exponential disk profile with Sérsic index $n=1.29{\raisebox{.2ex}{$\scriptstyle\pm$}}0.03$. The rest-frame UV emission is clumpy and is offset from the dust continuum emission detected with ALMA. The position-velocity and position-dispersion diagrams show clear signatures of rotation with a velocity profile well-fit by an arctangent function and a centrally peaked, symmetric dispersion curve. The Gini coefficient and M$_{20}$ values, $G=0.481{\raisebox{.2ex}{$\scriptstyle\pm$}}0.005$ and M$_{20} = -1.654$, place 850.95 in the disk region of the Gini-M$_{20}$ diagram. Despite its disk classification, 850.95 sits at a SFR $\sim$4$\times$ above the MS, placing it in the starburst regime. Despite merging activity, 450.27 lies within the scatter of the MS, and 450.25 lies $\sim$4$\times$ below the MS. It is unexpected that a merging galaxy has a lower star formation rate than typical galaxies on the main sequence. Further investigation as to the cause of the suppressed star formation is needed. Our sample hints that perhaps the specific star formation rate (SFR/$M_{\star}$) is not a useful discriminator of the recent merger history of high-$M_{\star}$ galaxies at $z>1$. A detailed investigation of a statistical sample of DSFGs is needed to determine the recent star formation histories of DSFGs both on and off the main sequence and the physics driving their star formation rates.
The authors thank Justin Spilker and Jorge Zavala for useful discussions, as well as the anonymous reviewer who provided valuable comments and suggestions. P.D. acknowledges financial support by NASA Keck PI Data Awards 2012B-U039M, 2011B-H251M, 2012B-N114M, and 2017A-N136M, NSF grants AST-1714528 and AST-1814034, and the University of Texas at Austin College of Natural Sciences. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This research made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com, Astropy, a community-developed core Python package for Astronomy [@Astropy13a], and the python packages Matplotlib [@Matplotlib07a], Numpy [@Numpy11a], and Pandas [@McKinney10a]. , R. G., [Tanvir]{}, N. R., [Santiago]{}, B. X., [et al.]{} 1996, , 279, L47 , R. G., [van den Bergh]{}, S., [Glazebrook]{}, K., [et al.]{} 1996, , 107, 1 , R. G., [van den Bergh]{}, S., & [Nair]{}, P. 2003, , 588, 218 , S., [Chapman]{}, S. C., [Swinbank]{}, A. M., [et al.]{} 2012, , 424, 2232 , L., [Heckman]{}, T., & [Miley]{}, G. 1987, , 94, 831 , [Robitaille]{}, T. P., [Tollerud]{}, E. J., [et al.]{} 2013, , 558, A33 , A. J., [Cowie]{}, L. L., [Sanders]{}, D. B., [et al.]{} 1998, , 394, 248 , E. F., [van der Wel]{}, A., [Papovich]{}, C., [et al.]{} 2012, , 753, 167 , N., [Laurikainen]{}, E., & [Aalto]{}, S. 2003, , 405, 31 , M. S., [Chapman]{}, S. 
C., [Tacconi]{}, L., [et al.]{} 2010, , 405, 219 , M. S., [Smail]{}, I., [Chapman]{}, S. C., [et al.]{} 2013, , 429, 3047 , N., [Dekel]{}, A., [Genzel]{}, R., [et al.]{} 2010, , 718, 1001 , F., [Daddi]{}, E., [Elmegreen]{}, B. G., [et al.]{} 2008, , 486, 741 , F., [Chapon]{}, D., [Teyssier]{}, R., [et al.]{} 2011, , 730, 4 Bradley, L., Sip[ő]{}cz, B., Robitaille, T., [et al.]{} 2019, astropy/photutils: v0.6, doi:10.5281/zenodo.2533376 , J., [Charlot]{}, S., [White]{}, S. D. M., [et al.]{} 2004, , 351, 1151 , J., [Abraham]{}, R., [Schade]{}, D., [et al.]{} 1998, , 499, 112 , P., [Aussel]{}, H., [Ajiki]{}, M., [et al.]{} 2007, , 172, 99 , C. M. 2012, , 425, 3094 , C. M., [Narayanan]{}, D., & [Cooray]{}, A. 2014, , 541, 45 , C. M., [Chapman]{}, S. C., [Neri]{}, R., [et al.]{} 2011, , 415, 2723 , C. M., [Chen]{}, C.-C., [Cowie]{}, L. L., [et al.]{} 2013, , 436, 1919 , C. M., [Cooray]{}, A., [Killi]{}, M., [et al.]{} 2017, , 840, 101 , G. 2003, , 115, 763 , A., [Daddi]{}, E., [Sargent]{}, M. T., [et al.]{} 2019, , 485, 5631 , C. J. 2014, , 52, 291 , E., [Charlot]{}, S., & [Elbaz]{}, D. 2008, , 388, 1595 , E., [Walter]{}, F., [Smail]{}, I. R., [et al.]{} 2015, , 806, 110 , E., [Dickinson]{}, M., [Morrison]{}, G., [et al.]{} 2007, , 670, 156 , G. 1963, , 8, 31 , P., [Bournaud]{}, F., [Martig]{}, M., [et al.]{} 2008, , 492, 31 , P. M., [Casey]{}, C. M., [Burnham]{}, A. D., [et al.]{} 2018, , 869, 58 , A. A., [van den Bosch]{}, F. C., & [Dekel]{}, A. 2010, , 405, 1690 , S., [Lilly]{}, S., [Gear]{}, W., [et al.]{} 1999, , 515, 518 , D., [Daddi]{}, E., [Le Borgne]{}, D., [et al.]{} 2007, , 468, 33 , S. L., [Mendel]{}, J. T., [Scudder]{}, J. M., [Patton]{}, D. R., & [Palmer]{}, M. J. D. 2013, , 430, 3128 , B. G., & [Elmegreen]{}, D. M. 2006, , 650, 644 , H., [Tacconi]{}, L. J., [Davies]{}, R. I., [et al.]{} 2010, , 724, 233 , J., [Renaud]{}, F., [Bournaud]{}, F., [et al.]{} 2017, , 465, 1934 , J. E., [Dunlop]{}, J. 
S., [Halpern]{}, M., [et al.]{} 2017, , 465, 1789 , S., [Genzel]{}, R., [Bouch[é]{}]{}, N., [et al.]{} 2008, , 688, 789 , K. 2013, , 30, e056 , T. R., [Bertoldi]{}, F., [Smail]{}, I., [et al.]{} 2005, , 359, 1165 , L. J., [Blain]{}, A. W., [Smail]{}, I., [et al.]{} 2011, , 740, 96 , J. E., & [Vacca]{}, W. D. 1997, , 114, 1741 , J. A., [Carilli]{}, C. L., [Walter]{}, F., [et al.]{} 2012, , 760, 11 , E. P. 1926, , 64, 321 , D. H., [Serjeant]{}, S., [Dunlop]{}, J., [et al.]{} 1998, , 394, 241 , C.-L., [Sanders]{}, D. B., [Casey]{}, C. M., [et al.]{} 2013, , 778, 129 , C.-L., [Rich]{}, J. A., [Yuan]{}, T., [et al.]{} 2015, , 803, 62 , J. D. 2007, Computing in Science and Engineering, 9, 90 , S., [Miller]{}, S. H., [Penner]{}, K., [et al.]{} 2009, , 697, 1971 , R. D., & [Wright]{}, G. S. 1985, , 214, 87 , J. S., [Sanders]{}, D. B., [Le Floc’h]{}, E., [et al.]{} 2010, , 721, 98 , J. S., [Dickinson]{}, M., [Alexander]{}, D. M., [et al.]{} 2012, , 757, 23 , J. S., [Mozena]{}, M., [Kocevski]{}, D., [et al.]{} 2015, , 221, 11 , A. M., [Aussel]{}, H., [Calzetti]{}, D., [et al.]{} 2007, , 172, 196 , M. P., [Dunlop]{}, J. S., [Micha[ł]{}owski]{}, M. J., [Cirasuolo]{}, M., & [Bowler]{}, R. A. A. 2014, , 444, 117 , M. P., [Dunlop]{}, J. S., [Micha[ł]{}owski]{}, M. J., [et al.]{} 2016, , 458, 4321 , C., [McCracken]{}, H. J., [Ilbert]{}, O., [et al.]{} 2016, , 224, 24 , B., [Giavalisco]{}, M., [Whitaker]{}, K., [et al.]{} 2018, , 853, 131 , S. K., [Kewley]{}, L. J., [Sanders]{}, D. B., & [Lee]{}, N. 2016, , 455, L82 , J. M., [Primack]{}, J., & [Madau]{}, P. 2004, , 128, 163 , J. M., [Davis]{}, M., [Faber]{}, S. M., [et al.]{} 2008, , 672, 177 , S., [Smail]{}, I., [Bower]{}, R. G., [et al.]{} 2019, arXiv e-prints, arXiv:1901.05467 , H. J., [Milvang-Jensen]{}, B., [Dunlop]{}, J., [et al.]{} 2012, , 544, A156 . 2010, van der Walt, S., Millman, J. (Eds.), Proc. 9th Python Sci. Conf., 51 , I. S., [Steidel]{}, C. C., [Epps]{}, H., [et al.]{} 2010, in , Vol. 
7735, Ground-based and Airborne Instrumentation for Astronomy III, 77351E–77351E–12 , I. S., [Steidel]{}, C. C., [Epps]{}, H. W., [et al.]{} 2012, in , Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, 84460J , K., [Blain]{}, A. W., [Swinbank]{}, M., [et al.]{} 2013, , 767, 151 , M., [Hjorth]{}, J., & [Watson]{}, D. 2010, , 514, A67 , M. J., [Dunlop]{}, J. S., [Cirasuolo]{}, M., [et al.]{} 2012, , 541, A85 , M. J., [Hayward]{}, C. C., [Dunlop]{}, J. S., [et al.]{} 2014, , 571, A75 , M. J., [Dunlop]{}, J. S., [Koprowski]{}, M. P., [et al.]{} 2017, , 469, 492 , J. C., & [Hernquist]{}, L. 1996, , 464, 641 , D., [Turk]{}, M., [Feldmann]{}, R., [et al.]{} 2015, , 525, 496 , K. G., [Weiner]{}, B. J., [Faber]{}, S. M., [et al.]{} 2007, , 660, L43 , [Ade]{}, P. A. R., [Aghanim]{}, N., [et al.]{} 2016, , 594, A13 , A., [Daddi]{}, E., [Liu]{}, D., [et al.]{} 2019, , 877, L23 , G., [Cimatti]{}, A., [Gruppioni]{}, C., [et al.]{} 2010, , 518, L25 , G., [Daddi]{}, E., [Baronchelli]{}, I., [et al.]{} 2011, , 739, L40 , V., [Snyder]{}, G. F., [Lotz]{}, J. M., [et al.]{} 2019, , 483, 4140 , D. B., & [Mirabel]{}, I. F. 1996, , 34, 749 , M. T., [Daddi]{}, E., [B[é]{}thermin]{}, M., [et al.]{} 2014, , 793, 19 , C., [Pannella]{}, M., [Elbaz]{}, D., [et al.]{} 2015, , 757, A74 , N., [Lee]{}, N., [Vanden Bout]{}, P., [et al.]{} 2017, , 837, 150 , J. L. 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41 , A., [Marchesini]{}, D., [Silverman]{}, J. D., [et al.]{} 2018, , 868, 46 , J. D., [Daddi]{}, E., [Rujopakarn]{}, W., [et al.]{} 2018, , 868, 75 , R. C., [Kassin]{}, S. A., [Snyder]{}, G. F., [et al.]{} 2019, arXiv e-prints, arXiv:1902.06762 , I., [Ivison]{}, R. J., & [Blain]{}, A. W. 1997, , 490, L5 , J. S., [Steinhardt]{}, C. L., [Capak]{}, P. L., & [Silverman]{}, J. D. 2014, , 214, 15 , A. M., [Chapman]{}, S. C., [Smail]{}, I., [et al.]{} 2006, , 371, 465 , A. M., [Smail]{}, I., [Chapman]{}, S. C., [et al.]{} 2004, , 617, 64 —. 
2010, , 405, 234 , A. M., [Papadopoulos]{}, P. P., [Cox]{}, P., [et al.]{} 2011, , 742, 11 , S., [Dekel]{}, A., [Carollo]{}, C. M., [et al.]{} 2016, , 457, 2790 , L. J., [Genzel]{}, R., [Smail]{}, I., [et al.]{} 2008, , 680, 246 , S., [Smol[č]{}i[ć]{}]{}, V., [Magnelli]{}, B., [et al.]{} 2014, , 782, 68 , S., [Colbert]{}, S. C., & [Varoquaux]{}, G. 2011, Computing in Science and Engineering, 13, 22 , K. E., [Pope]{}, A., [Cybulski]{}, R., [et al.]{} 2017, , 850, 208 , K. E., [van Dokkum]{}, P. G., [Brammer]{}, G., & [Franx]{}, M. 2012, , 754, L29 , K. E., [Franx]{}, M., [Leja]{}, J., [et al.]{} 2014, , 795, 104 [^1]: <http://keck-datareductionpipelines.github.io/MosfireDRP/> [^2]: [iraf]{} is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
--- address: | Mathematical Sciences\ Durham University\ UK author: - Lukas Lewark bibliography: - 'main.bib' date: 'November 21, 2013' title: 'Rasmussen’s spectral sequences and the [$\mathfrak{sl}_N$]{}-concordance invariants' --- [^1] Introduction ============ #### *Acknowledgements* This paper is extracted from my thesis [@these], and I thank Christian Blanchet for having been my adviser. Thanks to Andrew Lobb for comments on a first version of the paper. The Khovanov-Rozansky homologies {#sec:overview} ================================ The reduced-unreduced spectral sequence {#sec:sseq} ======================================= From [<span style="font-variant:small-caps;">Homflypt</span>]{}-homology to the [$\mathfrak{sl}_N$]{}-concordance invariants {#sec:tool} ============================================================================================================================ Slice-torus knot concordance invariants {#sec:slicetorus} ======================================= Linear independence of some of the [$\mathfrak{sl}_N$]{}-concordance invariants {#sec:examples} =============================================================================== [^1]: Supported by the EPSRC-grant EP/K00591X/1.
[**Statistical Physics for Humanities:** ]{} [**A Tutorial**]{} Dietrich Stauffer [ The image of physics is connected with simple “mechanical” deterministic events: that an apple always falls down, that force equals mass times acceleration. Indeed, applications of such concepts to social or historical problems go back two centuries (population growth and stabilisation, by Malthus and by Verhulst) and use “differential equations”, as recently reviewed by Vitanov and Ausloos \[2011\]. However, since even today’s computers cannot follow the motion of all air molecules within one cubic centimeter, the probabilistic approach has become fashionable since Ludwig Boltzmann invented Statistical Physics in the 19th century. Computer simulations in Statistical Physics deal with single particles, a method called agent-based modelling in fields which adopted it later. Particularly simple are binary models where each particle has only two choices, called spin up and spin down by physicists, bit zero and bit one by computer scientists, and voters for the Republicans or for the Democrats in American politics (where one human is simulated as one particle). Neighbouring particles may influence each other, and the Ising model of 1925 is the best-studied example of such models. This text will explain to the reader how to program the Ising model on a square lattice (in the Fortran language); starting from there the readers can build their own computer programs. Some applications of Statistical Physics outside the natural sciences will be listed.]{} Introduction ============ [*Learning by Doing*]{} is the intention of this tutorial: readers should learn how to construct their own models and to program them, not learn about the great works of the author \[Stauffer et al 2006\] and the lesser works of his competitors \[Billari et al 2006\]. 
Already 25 centuries ago, Empedokles is reported to have compared humans to fluids: Some are easy to mix, like wine and water; and some, like oil and water, refuse to mix. Newspapers can give you recent human examples, and physicists and others have taken up the challenge to study selected problems of history and other humanities \[Castellano et al 2009\] with methods similar to physics. In the opposite direction, Prados \[2009\] uses the physics dream of a “unified field theory” to describe his history of the Vietnam war. The next section will recommend ways to construct models, the following one how to program a simple Ising model on a square lattice, and a concluding section will list some applications. An appendix will introduce the Fortran programming language. Model Building ============== What is a Model? ---------------- “Models” in physics and in this tutorial usually deal with the single elements of a system and how their interactions produce the behaviour of the whole system. Outside of physics, a model may just be any mathematical law approximating reality. Thus the statement that human adult mortality increases exponentially with age is often called the Gompertz model by demographers but the Gompertz rule by physicists; the latter use the Penna model of individuals undergoing genetic mutations and Darwinian selection to simulate a large population perhaps obeying Gompertz \[Stauffer et al 2006\]. Binary versus more complicated models ------------------------------------- If no previous work on a general model is known, I recommend starting with binary variables where each particle has only two choices, called spin up and spin down by physicists, bit zero and bit one by computer scientists, occupied or empty for percolation, and voters for the Republicans or for the Democrats in American politics (where one human is simulated as one particle). 
Of course reality is more complicated, but we want to [*understand*]{} reality: Does it agree with the simplest possible model? (Simulations for pilot training etc. are different \[Bridson and Batty 2010\].) Thus we follow the opinion of Albert Einstein that a model should be as simple as possible, but not simpler. If you want to simulate traffic jams \[Chowdhury et al. 2000\] in cities, the colour of the cars is quite irrelevant, but for visibility in the dark the colour matters. Once the binary case has been studied, one can go to more than two choices. If three groups are fighting each other, obviously three choices are needed in a model \[Lim 2007\]. Even in a two-party political system like the USA, other candidates were important in Florida for the US presidential elections of 2000. The opinions of people \[Malarz et al 2011\] are in reality continuous and can be modelled by one or several real numbers between zero and unity, or between minus infinity and plus infinity. Nevertheless it is standard practice in opinion polls to allow only a few choices like full agreement, partial agreement, neutrality, partial disagreement, full disagreement. And in elections one can only vote among the discrete number of candidates or parties which are on the ballot. In the Kosovo opinion of the International Court of Justice (July 22, 2010) one judge criticised the binary tradition of either legal or illegal, stating that tolerable is in between; nevertheless the court majority stayed with the binary logic of not illegal. Physicists like to call the binary variables “spins”, but readers from outside physics should refrain from studying the quite complicated spin concept in quantum mechanics. Spins are simply up and down (1 or 0; 1 or –1). Similarly, don’t be deterred if physicists talk about a Hamiltonian; in most cases this is just the energy from high school. 
Humans are neither Spins nor Atoms ---------------------------------- Of course, that is true, but it does not exclude that humans can be modelled like spins. Modern medicine blurred the boundary between life and death; nevertheless we usually talk about people having been born in a certain year, and having died in another year, as if they were binary up-down variables. Reasons for death are complicated and I don’t even know mine yet; nevertheless demographers like Gompertz estimated probabilities for dying at some age. Such probabilities are rather useless for predicting the death of one individual, but averaged over many people they may give quite accurate results. When I throw one coin I do not know how it will fall; when I throw a thousand coins, usually about half of them fall on one side and the others on the other side (law of large numbers). If I throw 1000 coins and all show “head”, most likely I cheated. Thus to simulate one person’s opinion and decisions on a computer does not seem to be realistic; to do the same for millions of people may give good average properties, like the number of deaths at the age of 80 to 81 years, or the fraction of voters selecting the parties in an upcoming election. The whole insurance industry is based on this law of large numbers. Humans are not spins, but many humans together might be studied well by spin models. Deterministic or Statistical ? ------------------------------ Non-physicists often believe that physics deals with deterministic rules: The apple falls down from the tree and not up; force equals mass times acceleration; etc. (Or they have heard of quantum-mechanical probability and apply that to large systems where such quantum effects should be negligibly small.) In this sense the cause of World War I was seen as a consequence of the arms race, modelled by deterministic differential equations for averages \[Richardson 1935\]. 
And the decay of empires was described \[Geiss 2008\] as starting at the geographical periphery, since the influence from the center decreases towards zero with increasing distance, just as the gravitational force between the sun and its planets; see also \[Diamond 1997, epilog\]. Because of its historical importance let us look into Richardson’s papers of 1935: Two opposing (groups of) nations change their preparedness $x$ for war because of three reasons: 1) the war preparedness of the other side; 2) fatigue and expense; 3) dissatisfaction with existing peace treaties. Reasons 1 and 3 increase and reason 2 decreases the war preparedness; reason 1 is proportional to the $x$ of the opponent, reason 2 to the own $x$, and reason 3 is independent of $x_1$ and $x_2$. Even complete disarmament ($x_1=x_2=0$) at one moment does not help if there is dissatisfaction with the existing peace. And if reason 1 is stronger than reason 2 for both sides, both $x$ increase exponentially with time towards infinity. His second paper details the mathematical solutions for these linear coupled inhomogeneous differential equations. More recently, the widespread use of computers shifted the emphasis to more realistic [*probabilistic*]{} models, using random numbers to simulate the throwing of coins or other statistical methods. This “Statistical Physics” and a simple example are the subject of the next section. Statistical Physics and the Ising Model ======================================= Boltzmann Distribution ---------------------- No present computer can simulate the motion of all air molecules in a cubic centimeter. Fortunately, Ludwig Boltzmann about 150 years ago invented a simple rule. The molecules move at temperature $T$ with a velocity which can change all the time but follows a statistical distribution: The probability for a velocity $v$ is proportional to exp($-E/T$), where $E$ is the kinetic energy of the molecule due to its velocity. 
The same principle applied to a binary choice, where a particle can be in two states A and B with energies $E_A$ and $E_B$, means that the two probabilities are $$p_A = \frac{1}{Z} \exp(-E_A/T) ; \quad p_B = \frac{1}{Z} \exp(-E_B/T) ; \eqno (1a)$$ $$Z = \exp(-E_A/T) + \exp(-E_B/T) \eqno (1b)$$ since the sum over all probabilities must be unity. More generally, a configuration with energy $E$ is in thermal equilibrium found with probability $$p = \frac{1}{Z} \exp(-E/T) ; \quad Z = \sum \exp(-E/T) \eqno (2)$$ where the sum runs over all possible states of the system. $Z$ is called the partition function, one of the rare cases where the German word for it, Zustandssumme = sum over states, is clearer and shorter. The temperature $T$ is measured neither in Celsius (centigrade) nor Fahrenheit: $T=0$ at the absolute zero temperature (about 273 degrees Celsius below the freezing temperature of water), and moreover $T$ is measured in energy units. (If $T$ is measured in Kelvin, the corresponding energy is $k_BT$ where $k_B$ is the Boltzmann constant, set to unity in the present tutorial.) The function exp$(x)$ is the exponential function, also written as $e^x$, which for integer $x$ means the product of $x$ factors $e \simeq 2.71828$; $e^x = 2^{x/0.69315} = 10^{0.4343x}$. Eqs.(1,2) can be regarded as axioms on which Statistical Physics is built, like the Parallel Axiom of Euclidean Geometry, but in some cases they can be derived from other principles. Humanities are allowed to use these ideas of Boltzmann since history institutes were named after him. Ising Model ----------- In 1925, Ernst Ising (born in the heart of the city this author lives in) finished his doctoral dissertation on a model for ferromagnetism, which became famous two decades later and was shown to apply to liquid-vapour transitions half a century later. We assume that each site of a lattice (e.g. 
a square lattice where each site $i$ has four neighbours: clockwise up, right, down, left) carries a spin $S_i=\pm 1$ (up or down). Neighbouring spins “want” to be parallel, i.e. they have an energy $-J$ if they are in the same state and an energy $+J$ if they are in two different states. Moreover, a “magnetic” field $H$ between minus infinity and plus infinity (also called $B$) tries to orient the spins in its own direction. The total energy then is $$E = -J \sum_{<ij>} S_i S_j - H \sum_i S_i \eqno (3)$$ where the first sum goes over all unordered pairs of neighbour sites $i$ and $j$. Thus the “bond” between sites $A$ and $B$ appears only once in this sum, and not twice (for $i=A, j=B$ as well as for $i=B, j=A$). The second sum runs over all sites of the system. Thus $2J$ is the energy to break one bond, and $2H$ is the energy to flip a spin from the direction of the field into the opposite direction. As discussed before in the Boltzmann subsection, the higher the energy $E$ is, the lower is the probability to observe this spin configuration; at infinitely high temperatures $T$ all configurations are equally probable; at $T=0$ all spins must be parallel to each other and to the field $H$ in equilibrium. The “magnetisation” $M$ is the number of up spins minus the number of down spins, $$M = \sum_i S_i \quad . \eqno (4)$$ Computer simulations of this Ising model will be described in the appendix. Applied to human beings, this Ising model could represent two possible opinions in a population; everybody tries to convince the neighbours of the own opinion ($J$), and in addition the government $H$ may try to convince the whole population of its own opinion. The temperature then gives the tendency of the individuals not to think like the majority of their neighbours and the government. Zero temperature thus means complete conformity, and infinite temperature completely random opinions. 
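Eqs.(1)-(4) can be checked on a small example. The following sketch, written in Python rather than the tutorial's Fortran purely for compactness, computes the energy of Eq.(3) on a small periodic square lattice and the Boltzmann probabilities of Eq.(2) for two configurations:

```python
import numpy as np

def ising_energy(spins, J=1.0, H=0.0):
    """Energy of Eq.(3) on a periodic square lattice of +-1 spins.
    Each bond is counted once (right and down neighbours only)."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    bond = -J * np.sum(spins * right + spins * down)
    field = -H * np.sum(spins)
    return bond + field

def boltzmann_weights(energies, T):
    """Normalised probabilities of Eq.(2) for a list of energies."""
    w = np.exp(-np.asarray(energies) / T)
    return w / w.sum()

L = 4
all_up = np.ones((L, L), dtype=int)      # ground state, energy -2*J*L*L
one_flip = all_up.copy()
one_flip[0, 0] = -1                      # one spin down: 4 broken bonds
e0 = ising_energy(all_up)
e1 = ising_energy(one_flip)
p = boltzmann_weights([e0, e1], T=2.27)
```

Flipping one spin out of the fully ordered state breaks its four bonds, raising the energy by $8J$, so the ordered configuration keeps the larger Boltzmann weight at any finite temperature.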
Theories with paper and pencil in two dimensions as well as computer simulations give $M(T,H)$. In particular, for $H=0$ and in more than one dimension, the equilibrium magnetisation $M = \pm M_0$ is a non-zero spontaneous magnetisation for $T < T_c$ and is zero for $T \ge T_c$ where $T_c$ is the critical or Curie temperature (named after Pierre Curie, not his more famous wife Marie Curie). On the square lattice, $T_c/J \simeq 2.27$ is known exactly, in three dimensions only numerically; in one dimension there is no transition to a spontaneous magnetisation, as Ernst Ising had shown: $T_c = 0$. The above model obeys Isaac Newton’s law $actio = - reactio$: The sun attracts the earth with the same force as the earth attracts the sun, only in opposite direction. Human relations can also be unsymmetrical: He loves her but she does not love him. Then the bond between sites $i$ and $j$ may be directed instead of the usual case of an undirected bond. For example, if $i$ influences $j$ but $j$ does not influence $i$, flipping the spin at $j$ from parallel ($S_j=+S_i$) to antiparallel ($S_j=-S_i)$ to the spin at $i$ can cost an energy $2J$ while flipping $S_i$ at constant $S_j$ costs nothing. In this case no unique energy $E(S_i,S_j)$ is defined for this spin pair and thus such models have been much less studied in the physics literature. (In this example, one could gain a lot of energy from nothing by the cyclic process of flipping $j$ from antiparallel to parallel, gaining energy $2J$, then flipping $i$ from parallel to antiparallel, costing nothing, then flipping $j$ again and so on. Such a perpetuum mobile does not exist in physics.) Outside physics, of course, one can commit such crimes against energy conservation and forget all probabilities proportional to exp($-E/T$). Instead one can assume arbitrary probabilities, as long as their sum equals one. 
For example, if an element has three possible states A, B and C, then one may assume that with probability $p$ an A becomes B, a B becomes C, and a C becomes A; with probability $1-p$ the element does not change, and there is no backward process from C to B to A to C. Then one has a circular perpetuum mobile, which may be realistic for some social processes. Physicists sometimes distinguish between dynamics (when the changes are determined fully by energy or force) and kinetics (when additional assumptions, like the probabilities of eqs.(1), are made); probabilities independent of energy/force are then kinetics, as is most of the material described here. And usually, to find a stationary or static equilibrium, one has to wait for many non-equilibrium iterations in the simulation (in a static situation, nothing moves anymore; in a stationary simulation the averages are nearly constant since changes in one direction are mostly cancelled by changes of other elements in the opposite direction). There are many choices; one needs not mathematics, but mathematical thinking: precise and step-by-step. Warning against Mean Field Approximation ---------------------------------------- [*This subsection contains many formulas and can be skipped*]{}. If you want to get answers by paper and pencil, you can use the mean field approximation (also called molecular field approximation), which in economics corresponds to the approximation by a representative agent. Approximate in the first sum of Eq.(3) the $S_j$ by its average value, which is just the normalised magnetisation $m = M/L^2 = \sum_i S_i/L^2$. Then the energy is $$E = -J \sum_{<ij>} S_i m - H \sum_i S_i = -H_{eff} \sum_i S_i$$ with the effective field $$H_{eff} = H + J\sum_j m = H + qJm$$ where the latter sum runs over the $q$ neighbours only and is proportional to the magnetisation $m$. Thus the energy $E_i$ of spin $i$ is no longer coupled to other spins $j$ and equals $\pm H_{eff}$. 
The probabilities $p$ for up and down orientations, according to Eq.(1), are now $$p(S_i=+1) = \frac{1}{Z} \exp(H_{eff}/T) ; \quad p(S_i=-1) = \frac{1}{Z} \exp(-H_{eff}/T)$$ and thus $$m = p(S_i=+1) - p(S_i=-1) = \tanh(H_{eff}/T) = \tanh[(H + qJm)/T]$$ with the function tanh$(x) = (e^x - e^{-x})/(e^x+ e^{-x})$. This implicit equation can be solved graphically; for small $m$ and $H/T$, tanh$(x) = x-x^3/3 + \dots$ gives $$H/T = (1-T_c/T)m + \frac{1}{3} m^3 + \dots ; \quad T_c = qJ$$ related to Lev Davidovich Landau’s theory of 1937 for critical phenomena ($T$ near $T_c$, $m$ and $H/T$ small) near phase transitions. All this looks very nice except that it is wrong: In the one-dimensional Ising model, $T_c$ is zero instead of the mean field approximation $T_c=qJ$. The larger the number of neighbours and the dimensionality of the lattice are, the more accurate is the mean field approximation. Basically, the approximation of replacing $S_iS_j$ by an average $S_im$ takes into account the influence of $S_j$ on $S_i$ but not the fact that this $S_i$ again influences $S_j$, creating a feedback. If the mathematics of this subsection looks deterrent, just ignore it; you are recommended to use computer simulations of single interacting spins, and not mean field theories. Outside of physics such simulations are often called “agent based” \[Billari et al 2006, Bonabeau 2002\]; presumably the first one was the Metropolis algorithm published in 1953 by the group of Edward Teller, who is historically known from the US hydrogen bomb and the Strategic Defense Initiative (Star Wars, SDI). Applications ============ Schelling Model for Social Segregation -------------------------------------- Economics Nobel laureate Thomas C. Schelling advised the US government on war and peace in the 1960s and published a methodologically crucial paper (cited more than 500 times a decade later) \[Schelling 1971, Fossett 2011, Henry et al 2011\] which introduced methods of statistical physics to sociology. 
Each Ising spin corresponds to one of two ethnic groups (black and white in big US cities), both of which prefer not to be surrounded by the other group. The above Ising model then shows that for $T < T_c$ segregation emerges without any outside control: The simulated region becomes mostly white or mostly black. Unfortunately, Schelling simulated a more complicated version of the Ising lattice at $T=0$ which did not give large “ghettos” like Harlem in New York City, but only small clusters of predominantly white and black residences. Only by additional randomness \[Jones 1985\] (cited only 8 times, mostly by physicists) can “infinitely” large ghettos appear. It is easier to just simulate the above Ising model \[Sumour et al 2008,2011\], which also allows people of one group to move into another city (or away from the simulated region) to be replaced by residents of the other group. Sumour et al also cite earlier Schelling-type simulations by physicists. For nearly three decades physicists ignored the Schelling model; now the sociologists ignore the physics simulations of the last decade and the much earlier Jones \[1985\] paper. As examples we give two Ising-model figures from Müller et al \[2008\] where people also increase their amount $T$ of tolerance = social temperature if they see that their whole neighbourhood belongs to the same group as they themselves. And afterwards they slowly forget and reduce their tolerance. With changing forgetting rate one may either observe lots of small clusters, Fig.1, or one big ghetto, Fig.2. At about the same time as this Schelling paper, physicist Weidlich started his sociodynamics approach to apply the style of physics to social questions \[Weidlich 2000\], see also \[Galam 2008\]. Sociophysics and Networks ------------------------- A good overall review of the sociophysics field (of which the Schelling model is just one example) was given by Castellano et al \[2009\]. 
Of particular interest is the reproduction of universal properties of election results with many candidates: The curves of how many candidates got $n$ votes each are similar to each other \[Castellano 2009\]. Car traffic \[Chowdhury et al 2000\], economic markets \[Bouchaud and Potters 2010, Bonabeau 2002\], opinion dynamics \[Malarz et al 2011\], stone-age culture \[Shennan 2001\], histophysics \[Lam 2002\], languages \[Schulze et al 2008\], Napoleon’s decision before the battle of Waterloo \[Mongin 2008\], religion \[Ausloos and Petroni 2009\], political secession from a state \[Lustick 2011\], demography on social networks \[Fent et al. 2011\], insurgent wars in Iraq and Afghanistan \[Johnson et al. 1944\], ... are other applications. For example, non-physicists \[Holman et al 2011\] have acknowledged that physicists Serva and Petroni (2008) may have been the first to use Levenshtein distances to calculate the ages of language groups. (These distances are differences between words for the same meaning in different languages.) Students in class may sit on a square lattice, but normally humans do not. (In universities, mostly only a part of the lattice is occupied, which physicists simulate as a “dilute” square lattice.) They may be connected by friendship or job not only with nearest neighbours but also with people further away. Such networks, investigated for a long time by sociologists \[Stegbauer and Haeussling 2010\], were studied by physicists intensively for a dozen years \[Barabási 2002, Albert and Barabási 2002, Bornholdt and Schuster 2003, Cohen and Havlin 2010\]. In the Watts-Strogatz (or “small world”) network a random fraction of nearest-neighbour bonds is replaced by bonds with sites further away, selected randomly from the whole lattice. If that fraction approaches unity, one obtains the Erdös-Rényi networks, a limit of percolation theory \[Flory 1941\]. 
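The Watts-Strogatz rewiring just described can be sketched in a few lines. This toy Python version (names and parameters are ours, not taken from the cited papers) keeps the number of bonds fixed while replacing a fraction $p$ of them by random long-range bonds:

```python
import random

def watts_strogatz(n=100, k=4, p=0.1, seed=1):
    """Watts-Strogatz 'small world' sketch: a ring of n sites, each
    bonded to its k nearest neighbours, then each bond is rewired
    with probability p to a randomly chosen distant site."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))      # each ring bond stored once
    net = set()
    for (a, b) in edges:
        if rng.random() < p:                 # rewire this bond
            c = rng.randrange(n)
            while (c == a or (a, c) in net or (c, a) in net
                   or (a, c) in edges or (c, a) in edges):
                c = rng.randrange(n)
            net.add((a, c))
        else:                                # keep the ring bond
            net.add((a, b))
    return net

net = watts_strogatz()
```

Even a small rewiring fraction $p$ introduces shortcuts across the ring, which is what makes the average path length grow only logarithmically with the number of sites.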
More realistic are the scale-free Barabási-Albert networks, where the network starts with a small core and then each newly added site forms a bond with a randomly selected already existing member of the network. The selection probability is proportional to the number of bonds which the old member has already acquired: Famous people get more attention and more “friends” than others; no lattice is assumed here anymore. In all these networks, the average number of bonds needed to connect two randomly selected sites increases logarithmically with the number of sites in the network, whereas for $d$-dimensional lattices this average number of bonds increases more strongly, as a power law with exponent $1/d$. Having simulated one network, one can also study connected sets of networks or other social networks \[Watts, Dodds, Newman 2002\], or demography on them \[Fent et al 2011\]. The latest application is Statistical Justice: In May 2011, John Demjanjuk was sentenced for having helped in 1943 in the murder of more than 28,000 Dutch Jews in the Nazi concentration camp of Sobibór. One knows who was deported, but not who survived the transport from the Netherlands to Poland. And one does not know which duties the accused had there on which day. Thus all acts of the camp guards were regarded as having helped in their murder, and the number near 28,000 was estimated from the average death rate during the transports. The verdict thus gave neither the name of a murder victim nor the day of a murder, but was entirely based on statistical averages \[Times 2011\].

Appendix: How to Program the Ising Model
========================================

The following Fortran manual and program are both short and should encourage the reader to learn this technique.
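Before turning to Fortran, the preferential-attachment growth rule described above can be sketched as well; this is a hypothetical Python illustration (not from the paper), using the standard trick that sampling uniformly from the list of bond endpoints selects an old site with probability proportional to its degree:

```python
import random

def barabasi_albert(n, seed=1):
    """Grow a scale-free network: start from one bond, then let each new
    site attach to an existing site chosen with probability proportional
    to its current number of bonds."""
    random.seed(seed)
    endpoints = [0, 1]            # bond 0-1 is the initial core
    edges = [(0, 1)]
    for new in range(2, n):
        old = random.choice(endpoints)   # degree-proportional choice
        edges.append((old, new))
        endpoints += [old, new]
    return edges

net = barabasi_albert(1000)
degree = {}
for a, b in net:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

The oldest sites typically end up as hubs with many bonds, the network analogue of famous people collecting the most “friends”.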
Fortran Manual
--------------

Fortran (= formula translator) is an early language (above machine code or assembler) for computer programming; many others followed, and in particular C$^{++}$ is widespread, but nevertheless this tutorial uses Fortran, which is closer to plain English and makes it easy to find a typical programming error (using an array outside its defined bounds). If your Fortran program is called [name.f]{}, it can be compiled with [f95 -O name.f]{} (or 77 instead of 95; [O]{} = optimisation), and executed with [./a.out]{} (or just [a.out]{}). The just mentioned error message appears when using [f95 -fbounds-check name.f ; ./a.out]{}, but execution then is much slower, and thus [-O]{} should be used instead after error correction. Fortran commands usually start in column 7 and end before column 73. A C in column 1 signifies a comment for the reader, to be ignored by the computer. In column 6 we write a 1 if this line is a continuation of the previous line, while columns 2 to 5 are reserved for labels, i.e. numbers to control the flow of commands. For example, [GOTO 7]{} means to jump to the line labelled by 7. Variable names start with a letter; names starting with I, J, K, L, M, N signify integers without rounding errors; other names are real (floating-point) numbers and nearly always have rounding errors. The operations $+, -, *, /$ and [SQRT, COS, SIN, EXP]{} etc. have their usual meaning, except that [N/M]{} is always rounded downwards to an integer value; e.g. 3/5 is zero. Also [I = X]{} means rounding downwards. Since the natural logarithm is not an integer, it is denoted by [ALOG]{} instead of log. Decisions are made automatically through [IF]{} conditions, where [.GT.]{} means greater than, with analogous meanings for [.LT., .GE., .LE., .EQ., .NE., .NOT., .AND., .OR.]{} . A loop is executed by [DO 99 K = M, N]{}, which means that all lines from this line down to and including the line with label 99 are executed for $k=m, m+1, m+2, ..., n$. One may put inner loops into outer loops, if needed.
Arrays need to be declared at the beginning of a program, for example through a dimension line like [DIMENSION C(L)]{}. Here, if $C$ has an arbitrary dimension $L$, then $L$ must be given a value before this dimension statement, through a line like [PARAMETER (L=1001)]{}, and must not be changed throughout the program. Similarly, variables can be initialised via a data line like [DATA T/0.9/]{}, but only once at the beginning of the program, not later again. Results are best printed out through [PRINT]{} statements. Thereafter execution should stop with a [STOP]{} line, followed by an [END]{} line. The statement [N = N + 1]{} is not an equality (which then could be simplified to the nonsensical 0 = 1) but a command to the computer: to find the place in the memory where the variable [n]{} is stored, to get the value of [n]{} from there, to add one to it, and to store the sum in that same memory place as the new value for [n]{}. Some computer languages therefore use := instead of the simpler but misleading = sign. Normally it does not matter whether or not CAPITAL letters are used. The computer language Basic is rather similar to Fortran. Now we present a complete program to simulate the Ising model on the square lattice.

Ising Model Program
-------------------

    c heat bath 2D Ising in a field
          parameter(L=1001,Lmax=(L+2)*L)
          dimension is(Lmax),ex(-4:4)
          data t,mcstep,iseed/0.90,1000,1/,h/+0.50/,ex/9*0.0/
          print *, '#', L,mcstep,iseed,t,h
          x=rand(iseed)
          Lp1=L+1
          LspL=L*L+L
          L2p1=2*L+1
          do 1 i=1,Lmax
        1 is(i)=1
          do 2 ie=-4,4,2
          x=exp(-ie*2.0*0.4406868/t-h)
        2 ex(ie)=x/(1.0+x)
          do 3 mc=1,mcstep
          mag=0
          do 4 i=Lp1,LspL
    c     if(i.ne.L2p1) goto 6
    c     do 5 j=1,L
    c   5 is(j+LspL)=is(j+L)
        6 ie=is(i-1)+is(i+1)+is(i-L)+is(i+L)
          is(i)=1
          if(rand().lt.ex(ie)) is(i)=-1
        4 mag=mag+is(i)
    c     do 7 i=1,L
    c   7 is(i)=is(i+L*L)
        3 if(mc.eq.(mc/100)*100) print *, mc, mag
          stop
          end

The parameter line fixes the size of the $L \times L$ square; the sites in it are numbered by one index, typewriter style. Thus the right neighbour of site $2L$ is $2L+1$ and sits on the left end of the next line: Helical boundary conditions.
The lower neighbour of site $2L$ is $2L+L$, the upper neighbour is $2L-L$, and the left neighbour is $2L-1$. The spins in the top and bottom buffer lines ($1 \dots L$ and $L^2+L+1 \dots L^2+2L$) stay in their initial up orientation; if instead one wants periodic boundary conditions in the vertical direction (better to reduce boundary influence), one has to omit the five comment symbols C at and before loops 5 and 7. The temperature enters the data line in units of $T_c$, i.e. $T = 0.9 T_c$ in this example; the known value $J/T_c = 0.44068\dots$ is used nine lines later. In the same line the field, in units of $k_BT$, is given as 0.5. In the line after the first print statement, [rand(iseed)]{} initialises the random number generator (see the next subsection for a warning and an improvement); a different seed integer gives different random numbers. Later [rand()]{} produces the next “random” number between 0 and 1 from the last one in a reproducible but hardly predictable way. Loop 2 determines the Boltzmann probabilities [ex]{} needed for Eqs.(1) and finishes the initialisation. Loop 3 makes [mcstep]{} iterations (time steps = Monte Carlo steps per spin). Loop 4 runs over all spins except those in the two buffer lines and determines the local interaction energy [ie]{} as the sum over the four neighbour spins. Then we set the spin to $+1$, and if the conditions of Eqs.(1) so require, instead it is set to $-1$. (Generally, a command is executed with probability $p$ if the random number [rand()]{} is smaller than $p$.) We print out the magnetisation only every hundred time steps in order to avoid too much data on the computer screen. The results are plotted in Fig.3 and show that the temperature is very low: Even though it is only 10 percent below the critical temperature $T_c$, the magnetisation barely changes from its initial value 1002001 and is after 100 iterations already in equilibrium, also due to the applied field.
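The helical index arithmetic can be checked directly; here a small Python sketch (the function name is ours, not part of the Fortran program) that mirrors the neighbour expressions used in loop 4:

```python
def neighbours(i, L):
    """Helical boundary conditions: the sites of the L x L square are
    numbered typewriter-style by a single index, so the four neighbours
    of site i are simply i-1, i+1, i-L and i+L, and the right neighbour
    of site 2L is 2L+1 on the left end of the next line."""
    return i - 1, i + 1, i - L, i + L

L = 1001
left, right, up, down = neighbours(2 * L, L)
```

This reproduces the statements in the text: the lower neighbour of site $2L$ is $2L+L$, the upper one is $2L-L$, and the left one is $2L-1$.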
In three instead of two dimensions, without a field, and at temperatures closer to $T_c$, longer times are needed for equilibration. Perhaps you find it more interesting to simulate revolutions, as in Fig.1b, where the field $h$ was equal to the fraction of overturned spins (suggestion of Sorin Solomon for a Bornholdt-type model). So, what is difficult about computations? Do you know a shorter Fortran manual?

Random Numbers
--------------

The above [rand]{} produces random numbers in an easily programmed way, but often this may be slow and/or bad, or the used algorithm is unknown to the user. It is better to program random number generation explicitly. If you multiply by hand two nine-digit integers, you may easily predict the first and the last digit of the product, but hardly the digits in the middle, except by tediously doing the whole multiplication correctly. Similarly, if [ibm]{} is a 32-bit odd integer, then the product [ibm\*16807]{} is again an odd integer, and normally requires 46 bits. (A bit = binary digit is a zero or one in a computer.) The computer throws away the leading bits and keeps the least significant 32 bits. The first bit gives the sign; thus plus times plus gives minus in about half the cases, in contrast to what you learned in elementary school. The last of these remaining bits is predictably always set to one (odd integers), but the leading (most significant) bits are quite random. (Actually, your computer may do something very similar when you call [rand()]{}.) More precisely, they are pseudo-random; in order to search for errors one wants to get exactly the same random numbers when one repeats a simulation with the same seed.
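The 32-bit multiplication can be mimicked outside Fortran as well; here a hypothetical Python sketch (Python integers are unbounded, so we discard the leading bits by hand and reinterpret the top bit as the sign):

```python
def next_ibm(ibm):
    """One step of the multiplicative generator discussed above: multiply
    the odd 32-bit integer by 16807, keep only the least significant
    32 bits, and interpret them as a signed integer -- so the product of
    two positive numbers comes out negative in about half the cases."""
    ibm = (ibm * 16807) & 0xFFFFFFFF   # throw away the leading bits
    if ibm >= 1 << 31:                 # reinterpret as signed 32-bit
        ibm -= 1 << 32
    return ibm

ibm = 4711                 # any odd seed
signs = []
for _ in range(1000):
    ibm = next_ibm(ibm)
    signs.append(ibm < 0)  # the quasi-random sign bit
```

The last bit indeed stays one (odd integers remain odd), while the sign flips irregularly from step to step.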
These random 32-bit integers [ibm]{} between $-2147483647$ and $+2147483647 = 2^{31}-1$ can be transformed into real numbers through [ran=factor\*ibm+0.5]{} where [factor = 0.5/2147483647]{}, but it is more efficient to normalize the probabilities $p$ (here [ex/(1.0+ex)]{}) to the full interval of 32-bit integers through [(2.0\*p-1.0)\*2147483647]{}, once at the beginning of the simulation. Then the next random integers [ibm]{} simply have to be compared with this normalized probability. In the above Ising program, one then stores the Boltzmann probabilities as [dimension iex(-4:4)]{} at the beginning, calculates these normalized integers once in the initialisation, and later merely needs to compare each new random integer [ibm]{} with [iex(ie)]{} in loop 4. With 32 bits the pseudo-random integers are repeated after $2^{29}$ such multiplications with 16807, which is a rather small number for today’s personal computers. It is better to use 64-bit integers, declared via [integer\*8]{} at the beginning of the program, in order to get many more different random numbers, with the probabilities normalized analogously to the full 64-bit integer range. Now the quality is much better without much loss in speed; unfortunately one now can make many more programming errors involving these random numbers.

History
-------

Now comes a list of extended abstracts, a few pages each, about what this author finds interesting in recent history, available on request from [email protected]: Who is to blame for World War I\ No miracle on the Marne, 9/9/1914\ Lies and Art.
231 of Versailles Peace Treaty 1919\ Was Hitler’s 1941 attack against the Soviet Union a preemptive war?\ Had Hitler nearly gotten Moscow in 1941?\ Why was there no joint Japanese attack when Hitler attacked the Soviet Union?\ The Sea Battle of Leyte, 25 October 1944\ Did Soviet tanks approach Tehran in March 1946?\ Missed chance for peace in Korea, October 1950?\ Stalin’s proposal of March 1952 for a united Germany\ 1956: West German finger on the nuclear trigger?\ Tank confrontation at Checkpoint Charlie 10/1961\ The 1962 Cuban Missile crisis: security or prestige?\ Lyndon B. Johnson (1908-1973) and the Dominican crisis (US Invasion 1965)\ 1990: East Germany into NATO?\ Kosovo War 1999\ The start of the Libyan war, March 2011\ Thanks to S. Wichmann, T. Hadzibeganovic, M. Ausloos, and T. Fent for a critical reading of the manuscript. Albert R. and Barabási A.L. \[2002\], “Statistical mechanics of complex networks”, [*Reviews of Modern Physics*]{} [**74**]{}, 47-97. Ausloos M. and Petroni F. \[2009\], “Statistical dynamics of religion evolutions”, [*Physica A*]{} [**388**]{}, 4438-4444; M. Ausloos \[2010\], “On religion and language evolutions seen through mathematical and agent based models”, in [*Proceedings of the First Interdisciplinary CHESS Interactions Conference*]{}, C. Rangacharyulu and E. Haven, Eds., World Scientific, Singapore, pp. 157-182. Barabási A.L. \[2002\] [*Linked*]{}, Perseus, Cambridge. Billari F.C., Fent T., Prskawetz A., and Scheffran J. \[2006\] [*Agent-based computational modelling*]{}, Physica-Verlag, Heidelberg. Bonabeau E. \[2002\] “Agent-based modelling: Methods and techniques for simulating human systems”, [*Proc. Natl. Acad. Sci. USA*]{} [**99**]{}, 7280-7287. Bornholdt S. and Schuster H.G. \[2003\] [*Handbook of graphs and networks*]{}, Wiley-VCH, Weinheim. Bouchaud J.P. and Potters M. \[2009\] [*Theory of financial risks and derivative pricing*]{}, Cambridge University Press, Cambridge. Bridson R.
and Batty C. \[2010\] “Computational physics in film”, [*Science*]{} [**330**]{}, 1756-1757. Castellano C., Fortunato S., Loreto V. \[2009\] “Statistical physics of social dynamics”, [*Rev. Mod. Physics*]{} [**81**]{}, 591-646. Chowdhury D., Santen L., Schadschneider A. \[2000\] “Statistical physics of vehicular traffic and some related systems”, [*Physics Reports*]{} [**329**]{}, 199-329. Cohen R. and Havlin S. \[2010\] [*Complex Networks*]{}, Cambridge University Press, Cambridge. Diamond, J. \[1997\] [*Guns, Germs, and Steel*]{}, Norton, New York. Fent T., Diaz B.A., Prskawetz A. \[2011\] “Family policies in the context of low fertility and social structure”, [*Vienna Inst. Demogr. Working Paper*]{} 2/2011 (www.oeaw.at/vid). Flory, P.J. \[1941\] “Molecular size distribution in three-dimensional polymers: I, II, III”, [*J. Am. Chem. Soc.*]{} [**63**]{}, 3083, 3091, 3096. Fossett, M. \[2011\] “Generative models of segregation”, [*J. Math. Sociology*]{} [**35**]{}, 114-145. Galam S. \[2008\] “Sociophysics: A review of Galam models”, [*Int. J. Mod. Phys. C*]{} [**19**]{}, 409-440. Geiss, I. \[2008\] [*Geschichte im Überblick*]{}, Anaconda, Köln. Henry, A.D., Pralat, P., Zhang, C.-Q. \[2011\] [*Proc. Natl. Acad. Sci. USA*]{} [**108**]{}, 8605-8610. Holman, E.W. and 14 coauthors \[2011\] “Automated dating of the world’s language families based on lexical similarity”, preprint for [*Current Anthropology*]{}. Johnson, N., Carran, S., Botner, J., Fontaine, K., Laxague, N., Nuetzel, P., Turnley, J., Tivnan, B. \[2011\] “Pattern in escalations in insurgent and terrorist activity”, [*Science*]{} [**333**]{}, 81-84. Jones F.L. \[1985\] “Simulation models of group segregation”, [*Aust. New Zeal. J. Sociol.*]{} [**21**]{}, 431-444. Lim M., Metzler R., Bar-Yam Y. \[2007\] “Global pattern formation and ethnic/cultural violence”, [*Science*]{} [**317**]{}, 1540-1544. See also Hadzibeganovic T. et al \[2008\], [*Physica A*]{} [**387**]{}, 3242-3252. Lustick I.S.
\[2011\] “Secession of the center: A virtual probe of the prospects for Punjabi secessionism in Pakistan and the Secession of Punjabistan”, [*Journal of Artificial Societies and Social Simulation*]{} [**14**]{}, issue 1, paper 7 (electronic only via jasss.soc.surrey.ac.uk). Malarz K., Gronek P. and Ku[ł]{}akowski K. \[2011\] “Zaller-Deffuant model of mass opinion”, [*Journal of Artificial Societies and Social Simulation*]{} [**14**]{}, issue 1, paper 2 (electronic only via jasss.soc.surrey.ac.uk). Mongin P. \[2008\] “Retour à Waterloo - Histoire militaire et théorie des jeux”, [*Annales. Histoire, Sciences Sociales*]{} [**63**]{}, 39-69. Müller, K., Schulze, C., Stauffer, D. \[2008\] “Inhomogeneous and self-organized temperature in Schelling-Ising model”, [*Int. J. Mod. Phys. C*]{} [**19**]{}, 385-391. Prados, J. \[2009\] [*Vietnam*]{}, University Press of Kansas, Lawrence, p. xiii. Richardson L.F. \[1935\] “Mathematical psychology of war”, [*Nature*]{} [**135**]{}, 830-831 and [**136**]{}, 1025-1026. Schelling T.C. \[1971\] “Dynamic models of segregation”, [*J. Math. Sociol.*]{} [**1**]{}, 143-186. Schulze C., Stauffer D., and Wichmann S. \[2008\] “Birth, survival and death of languages by Monte Carlo simulation”, [*Comm. Comput. Phys.*]{} [**3**]{}, 271-294. Shennan S. \[2001\] “Demography and cultural innovation: a model and its implications for the emergence of modern human culture”, [*Cambridge Archeol. J.*]{} [**11**]{}, 5-16. Stauffer D., Moss de Oliveira S., de Oliveira P.M.C., Sá Martins J.S. \[2006\] [*Biology, sociology, geology by computational physicists*]{}, Elsevier, Amsterdam. Stegbauer C. and Haeussling R. (eds.) \[2010\] [*Handbuch Netzwerkforschung*]{}, VS-Verlag, Wiesbaden. Sumour M.A., El-Astal, A.H., Radwan M.M., Shabat, M.M. \[2008\] “Urban segregation with cheap and expensive residences”, [*Int. J. Mod. Phys. C*]{} [**19**]{}, 637-645. Sumour M.A., Radwan M.M., Shabat, M.M.
\[2011\] “Highly nonlinear Ising model and social segregation”, arXiv:1106.5574 (electronically only on arXiv.org section physics). Times: nytimes.com May 12, 2011, “Demjanjuk”. Vitanov, N.K. and Ausloos, M.R. \[2011\] “Knowledge epidemics and population dynamics models for describing idea diffusion”, in [*Models of Science Dynamics - Encounters between Complexity Theory and Information Sciences*]{}, ed. by A. Scharnhorst, K. Boerner, P. van den Besselaar, Springer, Berlin Heidelberg (forthcoming). Watts, D.J., Dodds, P.S., and Newman, M.E.J. \[2002\] “Identity and search in social networks”, [*Science*]{} [**296**]{}, 1302-1305. Weidlich W. \[2000\] [*Sociodynamics; a systematic approach to mathematical modelling in the social sciences*]{}, Harwood Academic Publishers; 2006 reprint: Dover, Mineola (New York). Dietrich Stauffer is a retired professor of theoretical physics and has studied history (mostly 20th century, mostly diplomatic) since retirement. Before that he worked on Monte Carlo simulations of, e.g., Ising models, percolation, ageing, and opinion dynamics. Institute for Theoretical Physics, Cologne University, D-50923 Köln, Euroland
--- abstract: 'Astrophysical observations indicate that there is roughly five times more dark matter in the Universe than ordinary baryonic matter [@DM-Review], with an even larger amount of the Universe’s energy content due to dark energy [@DE-Review]. So far, the microscopic properties of these dark components have remained shrouded in mystery. In addition, even the five percent of ordinary matter in our Universe has yet to be understood, since the Standard Model of particle physics lacks any consistent explanation for the predominance of matter over antimatter [@BAU-review]. Inspired by these central problems of modern physics, we present here a direct search for interactions of antimatter with dark matter, and place direct constraints on the interaction of ultra-light axion-like particles — one of the dark-matter candidates — and antiprotons. If antiprotons exhibit a stronger coupling to these dark-matter particles than protons, such a CPT-odd coupling could provide a link between dark matter and the baryon asymmetry in the Universe. We analyse spin-flip resonance data acquired with a single antiproton in a Penning trap [@SmorraNature] in the frequency domain to search for spin-precession effects from ultra-light axions with a characteristic frequency governed by the mass of the underlying particle. Our analysis constrains the axion-antiproton interaction parameter $f_a/C_{\overline{p}}$ to values greater than $0.1$ to $0.6$ GeV in the mass range from $2 \times 10^{-23}$ to $4 \times 10^{-17}\,$eV/$c^2$, improving over astrophysical antiproton bounds by up to five orders of magnitude. In addition, we derive limits on six combinations of previously unconstrained Lorentz-violating and CPT-violating terms of the non-minimal Standard Model Extension [@Ding2016].' author: - 'C. Smorra' - 'Y. V. Stadnik' - 'P. E. Blessing' - 'M. Bohman' - 'M. J. Borchert' - 'J. A. Devlin' - 'S. Erlewein' - 'J. A. Harrington' - 'T. Higuchi' - 'A. Mooser' - 'G. Schneider' - 'M. 
Wiesinger' - 'E. Wursten' - 'K. Blaum' - 'Y. Matsuda' - 'C. Ospelkaus' - 'W. Quint' - 'J. Walz' - 'Y. Yamazaki' - 'D. Budker' - 'S. Ulmer' title: 'Direct limits on the interaction of antiprotons with axion-like dark matter' --- A variety of experiments are aiming for the detection of axions and axion-like particles to identify the microscopic nature of dark matter [@NPAM-Review; @AxionReview]. Axions are light spinless bosons ($m_a \ll 1~\textrm{eV}/c^2$) originally proposed to resolve the strong CP problem of quantum chromodynamics [@Kim2010Review], and later identified as excellent dark-matter candidates. Although limits have been placed on their interaction strengths with photons, electrons, gluons and nucleons [@AxionReview; @Stadnik2018Review], direct information on the interaction strength with antimatter is lacking. The interactions in the Standard Model have equal couplings to conjugate fermion/antifermion pairs, since the combined charge-, parity- and time-reversal (CPT) invariance is embedded as a fundamental symmetry in the Standard Model. CPT invariance has been tested with high sensitivity in recent precision measurements on antihydrogen, antiprotonic helium, and antiprotons [@SmorraNature; @JerryAntiproton; @ALPHA; @Masaki; @UlmerNature2015; @SchneiderScience2017], and so far no indications for a violation have been found. In contrast, the non-observation of primordial antimatter and the matter excess in our Universe are a tremendous challenge for the Standard Model, since the tiny amount of CP-violation contained in the Standard Model is insufficient to reproduce the matter content by more than eight orders of magnitude [@BAU-review]. However, the discovery of an asymmetric coupling of dark-matter particles to fermions and antifermions may provide an important clue to improve our understanding of dark matter and the baryon asymmetry. 
Such an asymmetric coupling may in principle arise for axion-like particles if the underlying theory is non-local [@Greenberg], and we test for possible signatures in the spin transitions of a single antiproton.\ The canonical axion and axion-like particles (collectively referred to as “axions” below) can be hypothetically produced in the early Universe by non-thermal mechanisms, such as “vacuum misalignment” [@Marsh2015Review]. Subsequently, they form a coherently oscillating classical field: $a \approx a_0 \cos(\omega_a t)$, where the angular frequency is given by $\omega_a \approx m_a c^2 / \hbar$. Here, $m_a$ is the axion mass, $c$ the speed of light and $\hbar$ the reduced Planck constant. The axion field carries the energy density $\rho_a \approx m_a^2 a_0^2 /2$, which may comprise the entire local cold dark matter energy density $\rho_\textrm{DM}^\textrm{local} \approx 0.4~\textrm{GeV/cm}^3$ [@Catena2010]. Assuming that axions are the main part of the observed dark matter, a lower mass bound of $m_a \gtrsim 10^{-22}$ eV is imposed by the requirement that the reduced axion de Broglie wavelength does not exceed the dark-matter halo size of the smallest dwarf galaxies ($\sim 1\,$kpc).\ Fermions may interact with axions by a so-called derivative interaction causing spin precession [@Stadnik2014A]. In the non-relativistic limit, the relevant part of this interaction can be described by the time-dependent Hamiltonian [@Stadnik2014A; @nEDM2017]: $$\begin{aligned} \label{NR_Hamiltonian} H_{\textrm{int}} (t) \approx \frac{C_{\bar{p}} a_0}{2 f_a} \sin(\omega_a t) ~ {\boldsymbol{\sigma}}_{\bar{p}} \cdot {\boldsymbol{p}}_a \, , \end{aligned}$$ where ${\boldsymbol{\sigma}}_{\bar{p}}$, ${\boldsymbol{p}}_a$ and $C_{\bar{p}}/f_a$ are the Pauli spin-matrix vector of the antiproton, the axion-field momentum vector, and the axion-antiproton interaction parameter, respectively. We note that the fundamental theory to produce a CPT-odd operator like in Eq. 
(\[NR\_Hamiltonian\]) with $C_{\bar{p}} \ne C_p$ would need to be non-local [@Greenberg].\ The leading-order shift of the antiproton spin-precession frequency due to the interaction in Eq. (\[NR\_Hamiltonian\]) is given by: $$\begin{aligned} \label{Axion_antiproton_anomalous_shift} \delta \omega_L^{\bar{p}}(t) &\approx \frac{C_{\bar{p}} m_a a_0 \left| {\boldsymbol{v}}_a \right|}{f_a} \left[ A \cos(\Omega_\textrm{sid} t + \alpha) + B \right] \sin(\omega_a t) \, ,\end{aligned}$$ where $\left| {\boldsymbol{v}}_a \right| \sim 10^{-3} c$ is the average speed of the galactic axions with respect to the Solar System, $\Omega_\textrm{sid} \approx 7.29 \times 10^{-5}~\textrm{s}^{-1}$ is the sidereal angular frequency, and $\alpha \approx -25^\circ$, $A \approx 0.63$, and $B \approx -0.26$ are parameters determined by the orientation of the experiment relative to the galactic axion dark matter flux [@NASA_Coordinates] (see the supplementary information). We note that the time-dependent perturbation of the antiproton spin-precession frequency in Eq. (\[Axion\_antiproton\_anomalous\_shift\]) has three underlying angular frequencies: $\omega_1 = \omega_a$, $\omega_2 = \omega_a + \Omega_\textrm{sid}$, and $\omega_3 = \left|\omega_a - \Omega_\textrm{sid} \right|$, which for our experiment orientation have approximately evenly-distributed power between the three modes.\ The experimental data to search for the dark-matter effect were acquired using the Penning-trap system of the BASE collaboration [@SmorraEPJST2015] at CERN’s Antiproton Decelerator (AD). We have determined the antiproton magnetic moment $\mu_{\overline{p}}$ by measuring the ratio of the antiproton’s Larmor frequency $\nu_L$ and the cyclotron frequency $\nu_c$. 
In a time-averaged measurement, this results directly in a measurement of $\mu_{\overline{p}}$ in units of the nuclear magneton $\mu_N$: $$\begin{aligned} \left(\frac{\nu_L} {\nu_c}\right)_{\overline{p}} = \frac{g_{\overline{p}}}{2} = -\frac{\mu_{\overline{p}}}{\mu_N},\end{aligned}$$ which can be expressed in terms of the antiproton $g$-factor $g_{\overline{p}}$. The relevant part of the apparatus for this measurement is shown in Fig. \[fig:EXP\]. We used a multi-trap measurement scheme with two single antiprotons to determine $\mu_{\overline{p}}$ 350-times more precisely than in the best single-trap measurement [@HiroNC2017]. Our multi-trap measurement scheme is described in detail in Ref. [@SmorraNature].\ The measurement of $\nu_L/\nu_c$ takes place in the homogeneous precision trap, see Fig. \[fig:EXP\] (a). The cyclotron antiproton is used to determine the cyclotron frequency $\nu_c\approx 29.7\,$MHz with a relative precision of a few parts per billion (ppb) [@UlmerNature2015] from the spectra of image-current signals such as those shown in Fig. \[fig:EXP\] (b). For the measurement of $\nu_L$, the cyclotron antiproton is moved by voltage ramps into the park trap, and the Larmor antiproton is shuttled into the precision trap. We drive spin transitions in the precision trap using an oscillating magnetic field with a frequency $\nu_{\text{rf}}\approx 82.85\,$MHz. To observe these spin transitions, we need to identify the initial and the final spin state of each spin-flip drive in the precision trap. To this end, we transport the Larmor antiproton into the analysis trap and use the continuous Stern-Gerlach effect [@DehmeltCSG], where a strong magnetic curvature of about $3\times 10^{5}\,$T/m$^2$ couples the magnetic moment of the antiproton to its axial motion. As a consequence, spin transitions become observable as an axial-frequency shift of $\Delta\nu_{z,\mathrm{SF}} =\pm 172(8)\,$mHz. 
The spatial separation of the analysis trap from the precision trap strongly reduces line broadening effects from the magnetic inhomogeneity of the analysis trap in the frequency-ratio measurement, which is the key technique to enable precision measurements of $\mu_{\overline{p}}$ at the ppb level. The spin-state identification in the analysis trap is performed in a sequence of axial frequency measurements with interleaved resonant spin-flip drives, as shown in Fig. \[fig:EXP\] (c). The average fidelity of correctly identifying spin-transitions in the presence of axial frequency fluctuations is $\approx 80\,\%$ [@SmorraNature].\ To determine the antiproton $g$-factor, we measured the spin-flip probability $P_{\mathrm{SF,PT}}$ as a function of the frequency ratio $\Gamma= \nu_{\text{rf}} / \nu_{c}$ in the precision trap, which resulted in the antiproton spin-flip resonance shown in Fig. \[fig:EXP\] (d). The data consist of 933 spin-flip experiments recorded over 85 days from 05.09.2016 to 27.11.2016. The measurement cycle time of the resonance was not constant mainly due to the statistical nature of the spin-state readout. The median cycle frequency was about $0.38\,$mHz $\approx$ (44$\,$min)$^{-1}$. The spin-flip drive duration was $t_{\textrm{rf}} = $ 8$\,$s with a constant drive amplitude for all data points. The drive frequency was varied in a range of $\pm 45\,$ppb ($\pm 3.7\,$Hz) around the expected Larmor frequency. The time-averaged value of $\mu_{\overline{p}}$ was extracted by matching the lineshape of an incoherent Rabi resonance to the data, which resulted in $g_{\overline{p}}/2 = 2.792\,847\,344\,1(42)$ with a relative uncertainty of 1.5 ppb [@SmorraNature].\ The frequency shift in Eq. (\[Axion\_antiproton\_anomalous\_shift\]) causes a time-dependent detuning of the drive and the Larmor frequency in each spin-flip experiment. 
In the following, we consider slow dynamic effects on spin transitions, where $\omega_a/(2\pi) \ll 1 /t_{\textrm{rf}} = 125\,$mHz, so that the variation of the effective Larmor frequency is negligible during the spin-flip drive and does not affect the spin motion on the Bloch sphere. Each spin-flip experiment at the drive time $t$ probes the “instantaneous value” of the Larmor frequency $\omega_L+\delta \omega_L^{\bar{p}}(t)$.\ To conclude whether or not an axion-antiproton coupling is observed, we perform a hypothesis test based on a test statistic $q = - 2 \ln \lambda$, where $\lambda$ denotes the likelihood ratio (see the supplementary information). We compare the zero-hypothesis model $H_0$ with $\delta \omega_L^{\bar{p}}(t)=0$ and extended models $H_{b}(\omega)$, which add an oscillation with frequency $\omega$ to $H_0$, with amplitude $b(\omega) \geq 0$ and phase $\phi(\omega)$ as free parameters. The test statistic is evaluated for a set of fixed frequencies with a frequency spacing of 60$\,$nHz, which is narrower than the detection bandwidth of our measurement $\approx 1/(T_{\mathrm{meas}})=130\,$nHz. We consider the frequency range $5\,\mathrm{nHz} \leq \omega_i/(2 \pi) \leq 10.49\,\mathrm{mHz}$ in this evaluation and perform a multiple hypothesis test with $N_0= 174\,876$ test frequencies. The test statistic as a function of the test frequency is shown in Fig. \[fig:teststat\] for the experimental data. To define detection thresholds, we make use of Wilks’ theorem to obtain the test-statistic distribution for zero oscillation data, and correct for the look-elsewhere effect (see the supplementary information for details). Based on this, we find that our highest value $q_{\mathrm{max}} = 25.4$ in the entire evaluated frequency range corresponds to a local $p$-value of $p_L = 3 \times 10^{-6}$.
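The frequency scan can be illustrated with a toy version in Python (an illustration with invented numbers, not the collaboration's analysis code): for Gaussian white noise of known variance, adding the oscillation $b\sin(\omega t+\phi)$ is a linear least-squares problem in $c_1=b\cos\phi$ and $c_2=b\sin\phi$, and $q=-2\ln\lambda$ is simply the resulting reduction in chi-square, which under the zero hypothesis follows a chi-square distribution with two degrees of freedom:

```python
import math
import random

def q_statistic(t, y, omega, sigma=1.0):
    """Likelihood-ratio statistic q = -2 ln(lambda) for adding one
    oscillation c1*sin(wt) + c2*cos(wt) to Gaussian data of known noise
    sigma: solve the 2x2 normal equations and return the chi-square
    reduction of the best fit relative to the zero hypothesis."""
    s = [math.sin(omega * ti) for ti in t]
    c = [math.cos(omega * ti) for ti in t]
    ss = sum(si * si for si in s)
    cc = sum(ci * ci for ci in c)
    sc = sum(si * ci for si, ci in zip(s, c))
    sy = sum(si * yi for si, yi in zip(s, y))
    cy = sum(ci * yi for ci, yi in zip(c, y))
    det = ss * cc - sc * sc
    c1 = (cc * sy - sc * cy) / det
    c2 = (ss * cy - sc * sy) / det
    return (c1 * sy + c2 * cy) / sigma**2   # chi-square reduction

random.seed(2)
t = [i * 2640.0 for i in range(933)]        # ~44 min cycles, 933 points
noise = [random.gauss(0.0, 1.0) for _ in t]
omega_true = 2 * math.pi * 1e-5             # invented trial frequency
signal = [ni + 3.0 * math.sin(omega_true * ti) for ni, ti in zip(noise, t)]
q_noise = q_statistic(t, noise, omega_true)
q_signal = q_statistic(t, signal, omega_true)
```

On pure noise $q$ stays at the few-unit level expected for two degrees of freedom, while the injected oscillation produces a large value; in the experiment, the largest $q$ over all trial frequencies is what gets converted into the local $p$-value quoted above.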
This results in a global $p$-value for our multi-hypothesis test of $p_G = 0.254$, which represents the probability that rejecting $H_0$ in favor of any of the alternative models $H_{b}(\omega)$ is wrong. Consequently, we find no significant indication for a periodic interaction of the antiproton spin at the present measurement sensitivity, and conclude that our measurement is consistent with the zero hypothesis in the tested frequency range.\ To set experimental amplitude limits, we apply the $CL_s$ method [@PDG2018] and first extract amplitude limits for single-mode oscillations $b_{\mathrm{up}}(\omega)$ with 95$\,\%$ confidence level. The results of $b_{\mathrm{up}}(\omega)$ are shown in Fig. \[fig:tHist\] (a). In the frequency range $190\,\mathrm{nHz} \leq \omega/(2 \pi) \leq 10\,\mathrm{mHz}$, the mean limit on $b_{\mathrm{up}}$ is 5.5$\,$ppb, which corresponds to an energy resolution of $\sim 2 \times 10^{-24}$ GeV. At lower frequencies $\omega/(2\pi) < 130\,\mathrm{nHz}$, we have sampled only a fraction of an oscillation period. Here, we consider the reduced variation of the Larmor frequency during the measurement and marginalise the quoted limit $b_{\mathrm{up}}(\omega)$ over the starting phase (see the supplementary information). To constrain the axion-antiproton coupling coefficient $f_a/C_{\overline{p}}$, we assume that the axion field has a mean energy density equal to the average local dark matter energy density $\rho^{\mathrm{local}}_{\mathrm{DM}} \approx$ 0.4 GeV/cm$^{3}$ during the measurement, and use Eq. (\[Axion\_antiproton\_anomalous\_shift\]) to relate $f_a/C_{\overline{p}}$ to the amplitude limits. Since the axion-antiproton coupling would produce almost equal amplitudes at the main frequency $\omega_1$ and the sideband frequencies $\omega_{2,3}$, we place limits on the coupling coefficient considering all three detection modes (see the supplementary information). 
The evaluated limits on the coupling coefficient in the mass range $2 \times 10^{-23}\,$eV/$c^2 < m_a < 4 \times 10^{-17}\,$eV/$c^2$ are shown in Fig. \[fig:tHist\] (b). The sensitivity of our measurement is mass-independent in the range $m_a \gtrsim 10^{-21}\,$eV/$c^2$, and the amplitude limit is defined by the value of the test statistic at the evaluated mass $q(m_a)$. For $q(m_a) \approx 0$, we obtain $f_a/C_{\overline{p}} > 0.6\,$GeV, which represents the most stringent limit we can set based on our data. In the low-mass range $m_a \lesssim 10^{-21}\,$eV/$c^2$, the amplitude limit on the main frequency $\omega_1$ becomes weaker, similar to the behaviour in Fig. \[fig:tHist\](a). The limits in this mass range are dominated by the sideband signals $\omega_{2,3} \approx \Omega_{\mathrm{sid}}$, which remain in the optimal frequency range of our measurement. We also marginalise these limits over the starting phase to account for the possibility of being near a node of the axion field during a measurement (see the supplementary information). These effects lead to less stringent limits for the coupling coefficient for low masses. We conclude that we set limits on the axion-antiproton coupling coefficient ranging from $0.1\,$GeV to $0.6\,$GeV in the tested mass range. For comparison, the most precise matter-based laboratory bounds on the axion-nucleon interaction in the same mass range are at the level $f_a/C_N \sim 10^4 - 10^6~\textrm{GeV}$ [@nEDM2017; @NuclSpinComag]. As in the earlier matter-based studies [@nEDM2017; @NuclSpinComag], we do not marginalise our detection limits over possible fluctuations of the axion amplitude $a_0$. We note that preliminary investigations in the recent work [@arXiv1905] suggest that, if such amplitude fluctuations are taken into account for sufficiently light axions, then the inferred limits may be weakened by up to an order of magnitude at 95$\,\%$ C.L.\ Our laboratory bounds are compared to astrophysical bounds in Fig. 
\[fig:tHist\] (b). In particular, we consider the bremsstrahlung-type axion emission process from antiprotons $\bar{p} + p \to \bar{p} + p + a$ in supernova 1987A, which had a maximum core temperature of $T_\textrm{core} \sim 30~\textrm{MeV}$ and a proton number density of $n_p \sim 5 \times 10^{37}~\textrm{cm}^{-3}$ [@Raffelt2008LNP]. For an estimate, we treat the supernova medium as being dilute (non-degenerate). In thermal equilibrium, this gives the antiproton number density of $n_{\bar{p}} \approx n_p e^{-2 \xi_p / T_\textrm{core}}$, where the proton chemical potential $\xi_p$ is given by $m_p - \xi_p \sim 10~\textrm{MeV}$. In the limit of a dilute medium, the axion emission rate from antiprotons scales as $\Gamma_{\bar{p} p \to \bar{p} p a} \propto n_p n_{\bar{p}} \left(C_{\bar{p}}/f_a \right)^2$, whereas the usual axion emission rate from protons scales as $\Gamma_{p p \to p p a} \propto n_p^2 \left(C_p/f_a \right)^2$ [@Raffelt2008LNP; @Axions_SN1996]. Supernova bounds on the axion-proton interaction from the consideration of the effect on the observed neutrino burst duration vary in the range of $f_a/C_p \gtrsim 10^8 - 10^9~\textrm{GeV}$ for $m_a \lesssim T_\textrm{core} \sim 30~\textrm{MeV}$, depending on the specific nuclear physics calculations employed [@PDG2018; @Raffelt2008LNP]. Using the “middle-ground” value and rescaling to the axion-antiproton interaction, we obtain the supernova bound $f_a/C_{\bar{p}} \gtrsim 10^{-5}~\textrm{GeV}$ for $m_a \lesssim 30~\textrm{MeV}$, which is up to 5 orders of magnitude weaker than our laboratory bound in the relevant mass range. 
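The five-orders-of-magnitude rescaling of the supernova bound can be reproduced numerically: the axion emission rate from antiprotons carries one factor of $n_{\bar p}/n_p = e^{-2\xi_p/T_\textrm{core}}$ relative to the proton rate, so the bound on $f_a/C_{\bar p}$ is weaker by $\sqrt{n_{\bar p}/n_p}$. A rough sketch (taking $m_p - \xi_p = 10~\textrm{MeV}$ and, as our stand-in for the “middle-ground” value, the geometric mean of the quoted $10^8$–$10^9$ GeV range):

```python
import math

T_core = 30.0        # MeV, supernova core temperature
m_p = 938.3          # MeV, proton mass
xi_p = m_p - 10.0    # MeV, proton chemical potential (m_p - xi_p ~ 10 MeV)

# Thermal antiproton-to-proton density ratio in a dilute (non-degenerate) medium.
ratio = math.exp(-2.0 * xi_p / T_core)

# "Middle-ground" supernova bound on the axion-proton coupling,
# taken here as the geometric mean of the quoted 1e8-1e9 GeV range.
f_over_Cp = math.sqrt(1e8 * 1e9)  # GeV

# The emission rate scales linearly in n_pbar/n_p, so the coupling bound
# rescales by the square root of the density ratio.
f_over_Cpbar = f_over_Cp * math.sqrt(ratio)
print(f"f_a/C_pbar >~ {f_over_Cpbar:.1g} GeV")  # ~1e-5 GeV, as in the text
```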
Indirect limits on the axion-antiproton interaction from other astrophysical sources (such as active stars and white dwarfs) are even weaker, since their core temperatures are much lower than those reached in supernovae.\ The non-minimal Standard Model Extension (SME) predicts an apparent oscillation of the antiproton Larmor frequency either at the frequency $\Omega_\textrm{sid}$ or $2\,\Omega_\textrm{sid}$ mediated by Lorentz-violating and in some cases CPT-violating operators added to the Standard Model [@Ding2016]. With $P_L(\Omega_\textrm{sid}) = 0.336$ and $P_L(2\,\Omega_\textrm{sid}) = 0.328$, we conclude that the zero hypothesis cannot be rejected for these two frequencies, and obtain amplitude limits of $b_{\mathrm{up}}(\Omega_\textrm{sid}) \leq 5.3\,$ppb and $b_{\mathrm{up}}(2\,\Omega_\textrm{sid}) \leq 5.2\,$ppb with 95$\,\%$ C.L. Using these limits and the orientation of our experiment [@HiroNC2017], we constrain six combinations of time-dependent coefficients in the non-minimal SME [@Ding2016]: $ |\tilde{b}^{*X}_p| < 9.7\times 10^{-25}\,\mathrm{GeV}, |\tilde{b}^{*Y}_p| < 9.7\times10^{-25}\,\mathrm{GeV}, |\tilde{b}^{*XX}_{F,p}-\tilde{b}^{*YY}_{F,p}| < 5.4\times 10^{-9}\,\mathrm{GeV}^{-1}, |\tilde{b}^{*(XZ)}_{F,p}| < 3.7\times 10^{-9}\,\mathrm{GeV}^{-1}, |\tilde{b}^{*(YZ)}_{F,p}| < 3.7\times 10^{-9}\,\mathrm{GeV}^{-1}, |\tilde{b}^{*(XY)}_{F,p}| < 2.7\times 10^{-9}\,\mathrm{GeV}^{-1}.$ These coefficients are constrained for the first time, since we had only been able to set limits on effects causing a non-zero time-averaged difference of the proton and antiproton magnetic moments [@SmorraNature; @SchneiderScience2017; @HiroNC2017].\ In conclusion, our slow-oscillation analysis of the antiproton spin-flip resonance provides the first limits on axion coupling coefficients with an antiparticle probe. 
Similar searches can be performed for other antiparticles, namely positrons and anti-muons, from frequency-domain analyses of their ($g$-2) measurements [@Dehmeltg-2; @Muong-2].

$\dagger$ Present affiliation: Research Center for Nuclear Physics, Osaka University, 10-1 Mihogaoka, Ibaraki, Osaka 567-0047, Japan

Bertone, G., Hooper, D. & Silk, J., Particle dark matter: evidence, candidates and constraints, Phys. Rept. **405**, 279 (2005).
Frieman, J. A., Turner, M. S. & Huterer, D., Dark Energy and the Accelerating Universe, Ann. Rev. Astron. Astrophys. **46**, 385 (2008).
Dine, M. & Kusenko, A., Origin of the matter-antimatter asymmetry, Rev. Mod. Phys. **76**, 1 (2004).
Smorra, C. *et al.*, A parts-per-billion measurement of the antiproton magnetic moment, Nature **550**, 371 (2017).
Ding, Y. & Kostelecky, V. A., Lorentz-violating spinor electrodynamics and Penning traps, Phys. Rev. D **94**, 056008 (2016).
Safronova, M. S., Budker, D., DeMille, D., Kimball, D. & Derevianko, A., Search for new physics with atoms and molecules, Rev. Mod. Phys. **90**, 025008 (2018).
Graham, P. W. *et al.*, Experimental Searches for the Axion and Axion-Like Particles, Annu. Rev. Nucl. Part. Sci. **65**, 458 (2015).
Kim, J. E. & Carosi, G., Axions and the Strong CP Problem, Rev. Mod. Phys. **82**, 557 (2010).
Stadnik, Y. V. & Flambaum, V. V., Searches for New Particles Including Dark Matter with Atomic, Molecular and Optical Systems, arXiv:1806.03115.
Gabrielse, G., Khabbaz, A., Hall, D., Heimann, C., Kalinowsky, H. & Jhe, W., Precision mass spectroscopy of the antiproton and proton using simultaneously trapped particles, Phys. Rev. Lett. **82**, 3198 (1999).
Ahmadi, M. *et al.*, Characterization of the 1S-2S transition in antihydrogen, Nature **557**, 71 (2018).
Hori, M. *et al.*, Buffer-gas cooling of antiprotonic helium to $1.5$ to $1.7\,$K, and antiproton-to-electron mass ratio, Science **354**, 610 (2016).
Ulmer, S. 
*et al.*, High-precision comparison of the antiproton-to-proton charge-to-mass ratio, Nature **524**, 196 (2015).
Schneider, G. *et al.*, Double-trap measurement of the proton magnetic moment at 0.3 parts per billion precision, Science **358**, 1081 (2017).
Greenberg, O. W., CPT Violation Implies Violation of Lorentz Invariance, Phys. Rev. Lett. **89**, 231602 (2002).
Marsh, D. J. E., Axion cosmology, Phys. Rept. **643**, 1 (2016).
Catena, R. & Ullio, P., A novel determination of the local dark matter density, J. Cosmol. Astropart. Phys. **08** (2010) 004.
Stadnik, Y. V. & Flambaum, V. V., Axion-induced effects in atoms, molecules, and nuclei: Parity nonconservation, anapole moments, electric dipole moments, and spin-gravity and spin-axion momentum couplings, Phys. Rev. D **89**, 043522 (2014).
Abel, C. *et al.*, Search for Axionlike Dark Matter through Nuclear Spin Precession in Electric and Magnetic Fields, Phys. Rev. X **7**, 041034 (2017).
NASA LAMBDA – Tools, <http://lambda.gsfc.nasa.gov/toolbox/tb_coordconv.cfm>, accessed February 6, 2018.
Smorra, C. *et al.*, BASE - The Baryon Antibaryon Symmetry Experiment, Eur. Phys. J. Special Topics **224**, 3055 (2015).
Smorra, C. *et al.*, Observation of individual spin quantum transitions of a single antiproton, Phys. Lett. B **769**, 1 (2017).
Mooser, A. *et al.*, Direct high-precision measurement of the magnetic moment of the proton, Nature **509**, 596 (2014).
Nagahama, H. *et al.*, Sixfold improved single particle measurement of the magnetic moment of the antiproton, Nat. Commun. **8**, 14084 (2017).
Dehmelt, H. & Ekström, P., Proposed g-2 delta-omegaz experiment on single stored electron or positron, Bull. Am. Phys. Soc. **18**, 727 (1973).
Tanabashi, M. *et al.*, 2018 Review of Particle Physics, Phys. Rev. D **98**, 030001 (2018).
Wu, T. *et al.*, Search for Axionlike Dark Matter with a Liquid-State Nuclear Spin Comagnetometer, Phys. Rev. Lett. **122**, 191302 (2019).
Centers, G. P. 
*et al.*, Stochastic amplitude fluctuations of bosonic dark matter and revised constraints on linear couplings, arXiv:1905.13650.
Raffelt, G. G., Astrophysical Axion Bounds, Lect. Notes Phys. **741**, 51 (2008).
Keil, W., Janka, H.-T., Schramm, D. N., Sigl, G., Turner, M. S. & Ellis, J., Fresh look at axions and SN 1987A, Phys. Rev. D **56**, 2419 (1997).
Van Dyck, R. S., Schwinberg, P. B. & Dehmelt, H. G., New High-Precision Comparison of Electron and Positron g Factors, Phys. Rev. Lett. **59**, 26 (1987).
Bennett, G. W. *et al.*, Search for Lorentz and CPT Violation Effects in Muon Spin Precession, Phys. Rev. Lett. **100**, 091602 (2008).

**Acknowledgements**\ We acknowledge technical support by the Antiproton Decelerator group, CERN’s cryolab team, and all other CERN groups which provide support to Antiproton Decelerator experiments. We acknowledge discussions with Yunhua Ding about the SME limits, and Achim Schwenk and Kai Hebeler for sharing computing equipment for the Monte-Carlo studies. We acknowledge financial support by RIKEN, MEXT, the Max-Planck Society, the Max-Planck-RIKEN-PTB Center for Time, Constants and Fundamental Symmetries, the European Union (Marie Skłodowska-Curie grant agreement No 721559), the Humboldt-Program, the CERN fellowship program and the Helmholtz-Gemeinschaft. Y.V.S. was supported by a Humboldt Research Fellowship from the Alexander von Humboldt Foundation. D.B. acknowledges the support by the DFG Reinhart Koselleck project, the ERC Dark-OsT advanced grant (project ID 695405), the Simons and the Heising-Simons Foundations.\ **Author contributions**\ This analysis was triggered by S.U., Y.V.S. and C.S.. C.S. analysed the experimental data, based on which Y.V.S. provided the theoretical interpretation and derived the given constraints, which were discussed with D.B.. The manuscript was written by S.U., C.S., and Y.V.S., and edited by D.B.. 
All co-authors discussed and approved the manuscript.\ **Financial interests**\ The authors declare no competing financial interests.\ **Data availability**\ The datasets analyzed for this study will be made available on reasonable request.\ **Code availability**\ The analysis codes will be made available on reasonable request.\ **Author information**\ Reprints and permission information are available at www.nature.com/reprints. Correspondence and requests for materials should be addressed to C.S. [[email protected]]([email protected]) or S.U. [[email protected]]([email protected]).\ ![image](FIG1.pdf){width="15.0cm"} ![**Results of the signal detection.** The test statistic $q(\nu)$ as a function of the frequency $\nu$ is shown as the gray line for the experimental data. The red dashed lines mark the detection thresholds for the global hypothesis test corresponding to 1 (32$\,\%$), 3 (0.27$\,\%$) and 5 standard deviations $\sigma_G$ (5.7$\times10^{-7})$ rejection error for the global test. The black dotted lines show the corresponding statistical significance $\sigma_L$ for a single local test up to 5 $\sigma_L$.[]{data-label="fig:teststat"}](Fig2.pdf){width="8.0cm"} ![image](Fig3.pdf){width="15.0cm"}
--- author: - Dori Bejleri and David Stapleton bibliography: - 'tangent.bib' title: The tangent space of the punctual Hilbert scheme --- Introduction {#introduction .unnumbered} ============ The purpose of this paper is to study the Zariski tangent space of the punctual Hilbert scheme parametrizing subschemes of a smooth surface which are supported at a single point. We give a lower bound on the dimension of the tangent space in general and show the bound is sharp for subschemes of the affine plane cut out by monomials. Let $S$ be a smooth connected complex surface, and denote by $\Hns$ the Hilbert scheme parametrizing length $n$ subschemes of $S$. Fogarty [@fogarty] showed $\Hns$ is smooth and irreducible. We write $\Sns$ for the $n$th symmetric power of $S$. The Hilbert-Chow morphism: $$h:\Hns {\rightarrow}\Sns,$$ which sends a length $n$ subscheme to its cycle, is invaluable in the study of $\Hns$. We denote by $P_n$ the *nth punctual Hilbert scheme* which is the reduced fiber of $h$ over a multiplicity $n$ cycle in $\Sns$. Thus $P_n$ parametrizes length $n$ subschemes supported at a single point. Note that $P_n$ is the same for any smooth surface so throughout we assume $S \cong \CC^2$. The Hilbert-Chow morphism and the punctual Hilbert scheme have attracted a great deal of attention. Beauville  [@Beauville] has shown that if $S$ is a K3 surface then $\Hns$ is a projective holomorphic symplectic variety (one of the few known examples). Mukai  [@Mukai] gave a description of the symplectic form in terms of the pairing on $\operatorname{Ext}^1(I,I)$. For general surfaces, $h$ gives a crepant resolution. Briançon  [@Briancon] has shown that $P_n$ is irreducible, and Haiman  [@haiman] has shown that $P_n$ is the scheme-theoretic fiber of $h$ and that $P_n$ is actually a local complete intersection scheme. The Betti numbers of the punctual Hilbert scheme were computed by Ellingsrud and Stromme  [@EllingStromme]. 
Iarrobino [@Iarrobino] showed the Hilbert scheme of length $n$ subschemes of $\AA^k$ is reducible when $k\ge 3$ and $n$ is large, and Erman [@Erman] showed that these Hilbert schemes can acquire arbitrary singularities. Huibregtse  [@Huibregtse1; @Huibregtse2] studied questions of irreducibility and smoothness of a variety related to $P_n$ which consists of subschemes of $\Hns$ whose sum in the Albanese variety of $S$ is constant. There is a natural tautological vector bundle $\tnl$ on $\Hns$ whose fiber at a point corresponding to the length $n$ subscheme $\xi \subset S$ is the $2n$-dimensional vector space $H^0(S,T_S|_\xi)$. In [@Stapleton], the second author showed there is a natural injection of sheaves $$\alpha_n: \tnl {\rightarrow}T_\Hns$$ and that $\tnl$ is the log-tangent sheaf of the exceptional divisor of $h$. Thus it is natural to expect the degeneracy loci of $\alpha_n$ are connected to the singularities of the exceptional divisor of $h$. To make this precise we relate the rank of $\alpha_n$ to the dimension of the Zariski tangent space of the punctual Hilbert scheme. \[MT\] If $\xi\subset \CC^2$ is a length $n$ subscheme supported at the origin, then $$\dim(T_{[\xi]}P_n) \ge 2n - \rk(\alpha_n|_{[\xi]}) = \crk(\alpha_n|_{[\xi]}).$$ Moreover equality holds when the ideal of $\xi$ is generated by monomials. When $\xi$ is a monomial subscheme, the ideal of $\xi$ (written $I_\xi \subset \CC[x,y]$) has an associated Young diagram $\mu_{\xi} \subset \NN^2$ defined as $$\mu_{\xi} := \{ (i,j) \in \NN^2 | x^i y^j \notin I_\xi \}.$$ For example when $I_\xi = (y^4,x^2y^2,x^3y,x^7)$, the length of $\xi$ is 14 and we associate to $\xi$ the following Young diagram: ![image](youngdiagram.eps){height="4.5cm"} An elementary statistic associated to $\mu_{\xi}$ is given by tracing the top perimeter of the Young diagram from the top left to the bottom right and keeping track of the horizontal and vertical steps. 
For example in the above figure we have a sequence of horizontal steps $\Delta h = (2,1,4)$ and vertical steps $\Delta v =(2,1,1)$. \[MC\] If $\xi$ is defined by monomials, and $\mu_\xi$ is the corresponding Young diagram then $$\rk(\alpha_n|_{[\xi]})=\Big( \begin{array}{c} \textrm{maximum of horizontal} \\ \textrm{steps of }\mu_\xi \end{array} \Big) + \Big( \begin{array}{c} \textrm{maximum of vertical} \\ \textrm{steps of }\mu_\xi \end{array} \Big).$$ In our example, we have $\rk(\alpha_n|_{[\xi]})=4+2=6$, so $\dim T_{[\xi]}P_n = 28-6=22$. To prove the inequality in Theorem A we remark that the cokernel of the derivative $$dh: \Omega_\Snc {\rightarrow}\Omega_\Hnc$$ restricted to $[\xi] \in P_n$ is the cotangent space of $P_n$. This follows from Haiman’s result that $P_n$ is the scheme-theoretic fiber of $h$. Moreover $\Hnc$ is equipped with a holomorphic symplectic form [@nakajima §1.4] which gives an isomorphism $\omega: T_\Hnc \cong \Omega_\Hnc$. So to prove the inequality, it suffices to show there is a map: $$i:h^*\Omega_\Snc {\rightarrow}\tnc$$ such that $dh = \omega \circ \alpha_n \circ i$. The derivative $dh$ factors through $h^*\Omega_\Snc^{\vee \vee}$, a reflexive sheaf, so it is enough to define $i$ away from codimension 2. Away from codimension 2 the map $h$ is étale locally a product of the resolution of an $A_1$ singularity with a smooth variety. So the inequality follows after a computation in the case of an $A_1$ singularity, using the interpretation of $\tnl$ as the log-tangent sheaf of the exceptional divisor of $h$. To carry out the computation in Theorem B we note that the rank of $\alpha_n$ at some $[\xi] \in \Hnc$ is the rank of the map: $$H^0(\CC^2,T_{\CC^2}|_\xi) {\rightarrow}{\operatorname{Hom}}(I_\xi,\Oc_\xi)$$ in the normal sequence of $\xi \subset \CC^2$ so we can carry out the computation on $\CC^2$. 
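The statistic in Theorem B is purely combinatorial and easy to check mechanically. Below is a small sketch (the function names and the generator encoding are ours) that recomputes $\Delta h$, $\Delta v$, $\rk(\alpha_n|_{[\xi]})$ and $\dim T_{[\xi]}P_n$ for the example $I_\xi = (y^4,x^2y^2,x^3y,x^7)$ above:

```python
def young_diagram(gens):
    """Cells (i, j) with x^i y^j not in the monomial ideal generated by
    {x^a y^b : (a, b) in gens}; pure powers of x and y are assumed among
    the generators, so the diagram is finite."""
    in_ideal = lambda i, j: any(i >= a and j >= b for (a, b) in gens)
    max_i = max(a for (a, _) in gens)
    max_j = max(b for (_, b) in gens)
    return {(i, j) for i in range(max_i) for j in range(max_j)
            if not in_ideal(i, j)}

def steps(mu):
    """Horizontal runs and vertical drops read off the top profile of mu,
    traced from the top left to the bottom right."""
    width = max(i for (i, _) in mu) + 1
    heights = [sum(1 for (a, _) in mu if a == i) for i in range(width)]
    dh, dv, run = [], [], 1
    for k in range(1, width):
        if heights[k] == heights[k - 1]:
            run += 1
        else:
            dh.append(run)
            dv.append(heights[k - 1] - heights[k])
            run = 1
    dh.append(run)
    dv.append(heights[-1])
    return dh, dv

gens = [(0, 4), (2, 2), (3, 1), (7, 0)]   # I = (y^4, x^2 y^2, x^3 y, x^7)
mu = young_diagram(gens)
n = len(mu)                                # length of xi: 14
dh, dv = steps(mu)
rank_alpha = max(dh) + max(dv)             # 4 + 2 = 6
dim_tangent = 2 * n - rank_alpha           # 28 - 6 = 22
print(n, dh, dv, rank_alpha, dim_tangent)
```

Applied instead to `gens = [(0, 4), (1, 2), (2, 1), (5, 0)]`, i.e. the second example $I_\xi = (y^4,xy^2,x^2y,x^5)$ used later in the paper, the same helpers return $\Delta h = (1,1,3)$ and $\Delta v = (2,1,1)$.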
To show that equality holds in Theorem A for subschemes of $\CC^2$ cut out by monomials, our main computational tool is the affine chart that Haiman introduced in  [@haiman] for $\Hnc$ and the description Haiman gave of the cotangent space at monomial subschemes. Using these tools we explicitly compute the rank of $dh$ at points in $\Hnc$ corresponding to monomial subschemes and show that for $\xi \subset \CC^2$ cut out by monomials: $$\rk(dh|_{[\xi]}) =\Big( \begin{array}{c} \textrm{maximum of horizontal} \\ \textrm{steps of }\mu_\xi \end{array} \Big) + \Big( \begin{array}{c} \textrm{maximum of vertical} \\ \textrm{steps of }\mu_\xi \end{array} \Big).$$ We would like to thank our respective advisors Dan Abramovich and Robert Lazarsfeld for their advice and encouragement throughout this project. We are also grateful for conversations with Shamil Asgarli, Kenneth Ascher, Aaron Bertram, Mark de Cataldo, Johan de Jong, Lawrence Ein, Eugene Gorsky, Tony Iarrobino, Daniel Litt, Diane Maclagan, Mark McLean, Luca Migliorini, Mircea Mustaţǎ, Hiraku Nakajima, John Ottem, Giulia Saccà, David Speyer, and Zili Zhang. The proof of the inequality in Theorem A ======================================== In this section we prove the inequality in Theorem A. We start by recalling the main properties of the Hilbert scheme of points that we will need. Let $\Zcaln$ be the universal family of the Hilbert scheme of points on $S$; then $\Zcaln \subset S \times \Hns$ comes with two natural projections, $p_1: \Zcaln {\rightarrow}S$ and $p_2: \Zcaln {\rightarrow}\Hns$. If $\Ec$ is a vector bundle on $S$, then *the tautological bundle associated to* $\Ec$ is $\Enl:= p_{2*}p_1^*\Ec$. 
The map $\alpha_n$ is obtained by looking at the normal sequence of the inclusion $\Zcaln \subset S\times \Hns$, $$0 {\rightarrow}T_{\Zcaln} {\rightarrow}T_{S \times \Hns}|_\Zcaln \cong p_1^* (T_S) \oplus p_2^* (T_\Hns) \xrightarrow{\beta_n} \Homc(I_\Zcaln/I_\Zcaln^2,\Oc_\Zcaln).$$ Applying $p_{2*}(-)$, we see that $p_1^* (T_S)$ pushes forward to $\tnl$ and $\Homc(I_\Zcaln/I_\Zcaln^2,\Oc_\Zcaln)$ pushes forward to $T_{\Hns}$. Then $$\alpha_n:=p_{2*}(\beta_n|_{p_1^*(T_S)}).$$ The symmetric power $\Snc$ is the quotient of $(\CC^2)^{n}$ by the permutation action of the symmetric group on $n$ elements: $\SymG$. The Hilbert-Chow morphism: $$h: \Hnc {\rightarrow}\Snc$$ maps a point corresponding to a subscheme $[\xi]$ to the $n$-cycle: $$h([\xi])=\sum_{p \in \mathrm{Supp}(\xi)}\mathrm{length}_{\CC}(\Oc_{\xi,p})\cdot [p].$$ The exceptional divisor of $h$, denoted by $B_n$, consists of non-reduced subschemes. \[difseq\] It is always true that for a map of schemes $f: X {\rightarrow}Y$, if $p \in Y$ and $q$ lies in the scheme-theoretic fiber $f^{-1}(p)$, then $$T_q f^{-1}(p)\cong Coker(df|_q: f^*\Omega_Y|_q {\rightarrow}\Omega_X|_q)^{\vee}.$$ We want to compute the dimension of the Zariski tangent space of $P_n$. Since, as Haiman showed [@haiman Prop. 2.10], the variety $P_n$ is the scheme-theoretic fiber of $h$, it suffices to compute the corank of $dh$. In particular, the inequality in Theorem A is equivalent to $$\crk(dh|_{[\xi]}) \geq \crk(\alpha_n|_{[\xi]}).$$ Recall there is a holomorphic symplectic form $\omega_n \in H^0(\Hnc,\wedge^2 \Omega_\Hnc)$ on $\Hnc$ [@nakajima §1.4] which gives an isomorphism $\omega_n:T_\Hnc \cong \Omega_\Hnc$. 
To bound the corank of $dh$, it suffices to prove that the map $dh$ factors through $\omega_n \circ \alpha_n$; that is, there is a map $h^*\Omega_\Snc {\rightarrow}\tnc$ whose composition with $\omega_n \circ \alpha_n: \tnc {\rightarrow}\Omega_\Hnc$ equals $dh$. As $\omega_n \circ \alpha_n$ is injective, it suffices to show that $\omega_n \circ \alpha_n(\tnc)$ contains the image of $dh$. To do this, we need the following lemma. \[reflexive\] Suppose $X$ is a normal variety and $\Fc_1,\Fc_2 \subset \Ec$ are subsheaves of a torsion free sheaf on $X$. If $\Fc_2$ is reflexive then the following are equivalent: 1. $\Fc_1 \subset \Fc_2$ as subsheaves of $\Ec$. 2. There is an open subset $V\subset X$ with codimension 2 complement such that $\Fc_1|_V \subset \Fc_2|_V$ as subsheaves of $\Ec|_V$. 3. There is an étale neighborhood $i:U {\rightarrow}X$ such that the complement of $i(U) = V$ has codimension 2 and $i^* \Fc_1 \subset i^*\Fc_2$ as subsheaves of $i^*\Ec$. It is clear $(1)$ implies $(2)$. Now we show the reverse. We remark that $\Fc_1$ is torsion-free, so it includes into its reflexive hull $\Fc_1 \hookrightarrow \Fc_1^{\vee \vee}$. The inclusion $\Fc_1|_V \subset \Fc_2|_V$ as submodules of $\Ec$ extends to an inclusion $\Fc_1^{\vee \vee}|_V \subset \Fc_2|_V$ as any map to a reflexive module factors through the reflexive hull. Now an inclusion of reflexive modules on a normal variety outside codimension 2 uniquely extends to an inclusion on the whole variety. This follows from the fact that reflexive sheaves are *normal* (see [@OSS p. 76] for the smooth case). Thus we have an inclusion $\Fc_1^{\vee \vee} \subset \Fc_2$. Therefore we have $\Fc_1 \subset \Fc_1^{\vee \vee} \subset \Fc_2$ as submodules of $\Ec$. Flatness of $i$ proves $(2)$ implies $(3)$. 
For the reverse, faithful flatness of $i$ mapping onto $V$ gives an inclusion $\Fc_1|_V \subset \Fc_2|_V$ and $V$ is an open set whose complement has codimension $2$. Let $\Symtwo = \langle(12)\rangle \le \SymG$ be the subgroup which exchanges the first 2 elements. Denote by $\Delta \subset (\CC^2)^n$ the big diagonal fixed by $\Symtwo$. The quotient map $\sigma: (\CC^2)^n {\rightarrow}\Snc$ factors as $\sigma = j \circ \tau$, where $\tau: (\CC^2)^n {\rightarrow}(\CC^2)^n/\Symtwo$ is the quotient by $\Symtwo$ and $j: (\CC^2)^n/\Symtwo {\rightarrow}\Snc$ is the induced map. After appropriate change of coordinates $(\CC^2)^n/\Symtwo \cong \aone \times (\CC^2)^{n-1}$, and the symplectic form on the smooth locus of $\Snc$ pulls back and extends to the product symplectic form on the smooth locus of $\aone \times (\CC^2)^{n-1}$. Recall that $\aone \times (\CC^2)^{n-1}$ admits a symplectic resolution: $$h_0\times id_{(\CC^2)^{n-1}}: \cotanc \times (\CC^2)^{n-1} {\rightarrow}\aone \times (\CC^2)^{n-1}$$ by blowing up $\tau(\Delta)$. Here $\cotanc$ denotes the cotangent bundle of $\PP^1$ with the standard symplectic structure $\omega_\cotanc$ and $$h_0: \cotanc {\rightarrow}\aone$$ is the minimal resolution of the $A_1$ surface singularity with exceptional divisor $E$. Let $V$ denote the unramified locus of $j$; it is the complement of the image of all the big diagonals except $\Delta$. What is important for us is that $j(V)$ contains all points in $\Snc$ where at most 2 points come together. The following lemma says that if $i:U{\rightarrow}\Hnc$ is the base change of the étale neighborhood $V$ along $h$ then $U$ satisfies the conditions of Lemma \[reflexive\](3), so that we can check the inclusion ${\operatorname{im}}(dh) \subset \omega_n \circ \alpha_n(\tnl)$ by pulling back to $U$. 
\[etale\] Let $U := \Hnc \times_\Snc V$ denote the fiber product, with projections $h': U {\rightarrow}V$ and $i: U {\rightarrow}\Hnc$. Then $i$ is étale and the complement of $i(U)$ has codimension 2. Moreover $U \subset \cotanc \times (\CC^2)^{n-1}$ is such that $h_0\times id_{(\CC^2)^{n-1}}|_U = h'$ and the restriction of the symplectic form from $\cotanc \times (\CC^2)^{n-1}$ equals $i^*(\omega_n)$. This is essentially the proof that $\Hnc$ admits a holomorphic symplectic form and we refer the interested reader to  [@Beauville p. 766] or [@nakajima §1.4]. In [@Stapleton Theorem B] the second author proved the map $\alpha_n$ induces an isomorphism of $\tnl$ with the subsheaf $\dern$ which consists of vector fields tangent to $B_n$. To set up the proof of the inequality in Theorem A we consider the symplectic resolution of the $A_1$-singularity and prove that the log-tangent sheaf $\derc$ is isomorphic to the image of $dh_0$ as subsheaves of $\Omega_\cotanc$. 
\[aone\] The symplectic isomorphism $\omega_\cotanc: T_{\cotanc} \cong \Omega_{\cotanc}$ restricts to an isomorphism of subsheaves: $$\omega_\cotanc|_{\derc}: \derc {\rightarrow}dh_0(h_0^*\Omega_{\aone}).$$ We have two exact sequences $$0 {\rightarrow}\derc {\rightarrow}T_\cotanc {\rightarrow}\Oc_E(E) {\rightarrow}0 \quad \textrm{and} \quad 0 {\rightarrow}dh_0(h_0^*\Omega_\aone) {\rightarrow}\Omega_\cotanc {\rightarrow}\Omega_E {\rightarrow}0,$$ whose middle terms are identified by $\omega_\cotanc$. If $v\in \derc(U)$ is any logarithmic vector field then $v$ is tangent to $E$, so for any point $p\in E$ with $v|_p \ne 0$ we know $v|_p$ generates the tangent space of $E$. On the other hand the pairing of the 1-form $\omega_\cotanc(v)|_E$ with $v|_p$ vanishes by skew symmetry of $\omega_\cotanc$. So the restricted 1-form $\omega_\cotanc(v)|_E$ vanishes identically. Thus $\omega_\cotanc$ induces maps $\derc {\rightarrow}dh_0(h_0^*\Omega_\aone)$ and $\Oc_E(E) {\rightarrow}\Omega_E$ making a commuting diagram of the two sequences, with an injection on the left and a surjection on the right. But the surjection on the right is an isomorphism as these are isomorphic line bundles on $E$, thus $\omega_\cotanc|_\derc$ is also an isomorphism. According to Remark \[difseq\], it is enough to show that as subsheaves of $\Omega_\Hnc$ we have the containment $dh(h^*\Omega_\Snc) \subset \omega_n \circ \alpha_n(\tnc)$. If $i: U {\rightarrow}\Hnc$ is the étale open set from Lemma \[etale\], then by Lemma \[reflexive\] it is enough to show that $i^*(dh(h^*\Omega_\Snc)) \subset i^*(\omega_n \circ \alpha_n(\tnl))$ as subsheaves of $i^* \Omega_\Hnc = \Omega_U$. Let $E'$ denote the exceptional divisor of $h'$. By Lemma \[etale\], we have a fiber square with $i$ étale and $i^{-1}(B_n) = E'$. 
It follows that $$i^*(dh(h^*\Omega_\Snc)) = dh'(h'^*\Omega_V),\hspace{5mm}\text{and}\hspace{5mm}i^*(\tnc) = \derp.$$ For the second equality we use the interpretation of $\alpha_n(\tnc)$ as the log-tangent sheaf of $B_n$ [@Stapleton Theorem B]. On the one hand, the exceptional divisor $E' = U \cap (E\times (\CC^2)^{n-1})$ is locally a product, so the log-tangent sheaf of $E'$ splits as a direct sum: $$\derp =(p^*\derc \oplus q^* T_{(\CC^2)^{n-1}})|_U$$ where $p$ and $q$ denote projection of $\cotanc \times (\CC^2)^{n-1}$ onto $\cotanc$ and $(\CC^2)^{n-1}$ respectively. On the other hand, $h' = (h_0\times id_{(\CC^2)^{n-1}})|_{U}$ so the subsheaf $dh'(h'^*\Omega_V)$ splits as a direct sum: $$dh'(h'^*\Omega_V) = (p^*dh_0(h_0^*\Omega_{\aone}) \oplus q^*\Omega_{(\CC^2)^{n-1}})|_U \subset (p^*\Omega_\cotanc \oplus q^*\Omega_{(\CC^2)^{n-1}})|_U = \Omega_U.$$ Finally by Lemma \[etale\], $i^* \omega_n$ is the same as the restriction of the product symplectic form on $\cotanc \times (\CC^2)^{n-1}$. Therefore it suffices to check that the symplectic form on $\cotanc \times (\CC^2)^{n-1}$ identifies $p^*dh_0(h_0^*\Omega_{\aone}) \oplus q^*\Omega_{(\CC^2)^{n-1}}$ with $p^*\derc \oplus q^* T_{(\CC^2)^{n-1}}$. As $i^*\omega_n$ is a product symplectic form it respects this direct sum decomposition. The second factors are clearly identified and the first factors are identified by Lemma \[aone\]. The above proof actually shows there is an isomorphism: $$h^*(\Omega_{\Snc})^{\vee \vee} \cong \tnc,$$ that is $\tnc$ is the reflexive hull of $h^*(\Omega_\Snc)$. Computing the rank of $\alpha_n$ at monomial subschemes ======================================================= In this section we show how to compute the rank of $\alpha_n$ at monomial subschemes, proving . During the proof, we exhibit the computation on an example subscheme $\xi \subset \CC^2$ with $I_\xi = (y^4,xy^2,x^2y,x^5)$. Let $\xi \subset \CC^2$ be a length $n$ subscheme whose ideal $I_\xi$ is defined by monomials. 
As in the introduction we associate to $\xi$ the Young diagram (see Figure a) $\mu=\mu_\xi \subset \NN^2$ defined as $$\mu := \{ (i,j) \in \NN^2 | x^i y^j \notin I_\xi \}.$$ We associate to $\mu$ the elementary statistic given by tracing the top perimeter of $\mu$ from the top left to the bottom right and recording the horizontal steps $\Delta h$ and the vertical steps $\Delta v$ (see Figure b). ![Figure b. In our example $\xi$ we have $\Delta h = (1,1,3)$ and $\Delta v = (2,1,1)$.](example1.eps "fig:"){width=".8\textwidth"} Our aim is to compute $\rk(\alpha_n|_{[\xi]})$. There are natural isomorphisms: $\tnc|_{\xi} \cong H^0(T_{\CC^2}|_{\xi}) \cong \CC[\xi]\ddx \oplus \CC[\xi]\ddy$ and $T_\Hnc|_{[\xi]} \cong {\operatorname{Hom}}(I_\xi,\CC[\xi])$. Moreover the map $\alpha_n|_\xi$ is exactly the map in the normal sequence associated to $\xi \subset \CC^2$ which maps any restricted derivation $\delta \in T_{\CC^2}$ to a homomorphism by: $\displaystyle \begin{array}{lr} \alpha_n|_{[\xi]}: \CC[\xi] \ddx \oplus \CC[\xi] \ddy \to {\operatorname{Hom}}(I_\xi, \CC[\xi]), & \delta \mapsto \Big( \begin{array}{c} I_{\xi} \xrightarrow{\alpha_n|_{[\xi]}(\delta)} \CC[\xi] \\ f \mapsto \delta(f)|_{\xi} \end{array} \Big) \end{array}$. As $I_\xi$ is generated by monomials we can decompose $I_\xi = \bigoplus_{\NN^2 \setminus \mu} \CC\cdot x^iy^j$ as a $\CC$-vector space. Moreover, the ring of functions on $\xi$ has a monomial $\CC$-vector space basis $\CC[\xi] = \bigoplus_{\mu} \CC \cdot x^iy^j$. Observe that for any *monomial* derivation $x^iy^j \ddx$ or $x^iy^j \ddy$ the associated homomorphism in ${\operatorname{Hom}}(I_\xi,\CC[\xi])$ maps our basis of $I_\xi$ to our basis of $\CC[\xi]$ up to possible scaling. This makes it possible to understand these homomorphisms combinatorially. For example $\ddy$ acts up to scaling by decreasing the power of $y$ by 1, which on $\NN^2$ is a shift down operator, annihilating any $(i,j)$ of the form $(i,0)$ (see Figure c). 
The derivation $y \ddx$ acts by shifting left by 1 and shifting up by 1 (see Figure d: a schematic of $y\ddx$ up to scaling). More importantly, all monomials are eigenvectors for $x\ddx$ and $y\ddy$. Therefore, the homomorphisms associated to $x\ddx$ and $y\ddy$ are 0 in ${\operatorname{Hom}}(I_\xi,\CC[\xi])$, and any multiple $x^{i+1}y^j\ddx$ or $x^iy^{j+1}\ddy$ for $i,j\ge0$ is 0 as a homomorphism $I_\xi \to \CC[\xi]$. So the only possible nonzero homomorphisms coming from monomial derivations are of the form $y^j\ddx$ or $x^i\ddy$. Finally we must determine which powers $y^j \ddx$ and $x^i \ddy$ give rise to nonzero homomorphisms. The derivation $y^j \ddx$ acts on $\NN^2$ by shifting to the left 1 and up $j$. This implies that when $j \ge \mathrm{max} (\Delta v)$, $y^j \ddx$ maps all $x^iy^j$ for $(i,j) \in \NN^2 \setminus \mu$ (the ideal) to other points in $\NN^2 \setminus \mu$ (back into the ideal). Thus the associated homomorphism in ${\operatorname{Hom}}(I_\xi,\CC[\xi])$ is 0. Likewise if $i \ge \mathrm{max}(\Delta h)$ then the derivation $x^i \ddy$ is in the kernel of $\alpha_n|_{[\xi]}$. Lastly it is clear that distinct monomial homomorphisms $y^j\ddx$ for $0\le j < \mathrm{max} (\Delta v)$ and $x^i \ddy$ for $0\le i < \mathrm{max} (\Delta h)$ give rise to linearly independent homomorphisms in ${\operatorname{Hom}}(I_\xi,\CC[\xi])$, proving $\rk(\alpha_n|_{[\xi]}) = \mathrm{max} (\Delta h) + \mathrm{max} (\Delta v)$. Computing the dimension of tangent spaces at monomial subschemes of $\CC^2$ ========================================================================== In this section we prove equality in the when $\xi \subset \CC^2$ is cut out by monomials.
By Remark 1 and it suffices to show: If $\xi \subset \CC^2$ is a monomial subscheme then $$\rk(dh|_{[\xi]}) = \Big( \begin{array}{c} \textrm{maximum of horizontal} \\ \textrm{steps of }\mu_\xi \end{array} \Big) + \Big( \begin{array}{c} \textrm{maximum of vertical} \\ \textrm{steps of }\mu_\xi \end{array} \Big).$$ Our main computational tool is Haiman’s affine charts centered at $[\xi]$. We review without proof the properties of the Haiman chart that we will need and refer the interested reader to [@haiman §2]. If $\mu = \mu_\xi \subset \NN^2$ is the Young diagram associated to $\xi$, then the monomials $${\mathcal{B}}_\mu:=\{x^i y^j \mid (i,j)\in \mu\}$$ give global sections of the rank $n$ tautological bundle $\ocn$ by pulling back and pushing forward. Moreover these sections globally generate $\ocn$ in an open neighborhood $U_\mu \subset \Hnc$ of $[\xi]$. $U_\mu$ is the *Haiman chart centered at $\xi$*. In particular the rank $n$ vector bundle $\ocn$ is trivialized on each $U_\mu$ and ${\mathcal{B}}_\mu$ gives an unordered basis for the free sheaf $\ocn|_{U_\mu}$. Note that $U_\mu$ is a $(\CC^*)^2$-invariant neighborhood of $[\xi]$ that consists of: $$U_\mu = \Big\{ [\chi] \in \Hnc \Big| \begin{array}{l} \CC[x,y]/I_\chi \textrm{ is spanned as a } \CC\textrm{-vector} \\ \textrm{space by monomials in }{\mathcal{B}}_\mu \end{array}\Big\}.$$ Indeed $U_\mu$ is affine [@haiman Prop. 2.2] and the ring of functions on $U_\mu$ is generated by functions $c^{r,s}_{i,j}$ (with $(r,s)$ and $(i,j)\in\NN^2$) whose value $c^{r,s}_{i,j}([\chi])$ at $[\chi] \in \Hnc$ is defined by: $$\label{eq_1} x^ry^s = \sum_{(i,j) \in \mu} c^{r,s}_{i,j}([\chi]) x^iy^j \mod I_\chi.$$ It is convenient to depict $c^{r,s}_{i,j}$ by an arrow pointing from $(r,s)$ to $(i,j)$.
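At the torus-fixed point $[\xi]$ itself, reduction modulo a monomial ideal is trivial, so the values $c^{r,s}_{i,j}([\xi])$ can be written down directly: every coordinate whose arrow has head different from its tail vanishes at $[\xi]$, which is why those coordinates lie in the maximal ideal. A small sketch (our own naming and encoding of $\mu$):

```python
def c(r, s, i, j, mu):
    """Value of the Haiman coordinate c^{r,s}_{i,j} at the monomial
    point [xi]: x^r y^s reduces mod I_xi to itself if (r,s) lies in mu,
    and to 0 if x^r y^s lies in the ideal."""
    if (r, s) in mu:
        return 1 if (i, j) == (r, s) else 0
    return 0

mu = {(0, 0), (1, 0), (0, 1)}       # I_xi = m^2, a length-3 subscheme
# A "diagonal" coordinate survives; every arrow with head != tail dies:
print(c(1, 0, 1, 0, mu), c(2, 0, 0, 0, mu), c(1, 2, 0, 0, mu))  # 1 0 0
```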
Denote by $\fm_{[\xi]} \subset \Oc_{U_\mu}$ the maximal ideal of $[\xi] \in U_\mu$; then the cotangent space $\fm_{[\xi]}/\fm_{[\xi]}^2$ is generated by classes of functions $c^{r,s}_{i,j}$ corresponding to arrows with heads in $\mu$ and tails in $\NN^2\setminus \mu$. We now state the key *Haiman relations* for these arrows modulo $\fm_{[\xi]}^2$: - (HR1), see [@haiman eqn. 2.18]. Translating an arrow horizontally or vertically does not change the class it represents modulo $\fm_{[\xi]}^2$ provided the head remains in $\mu$ and the tail remains outside of $\mu$. - (HR2), see [@haiman eqn. 2.18]. If an arrow can be translated so that its head crosses the $x$-axis or the $y$-axis and the tail remains in $\NN^2 \setminus \mu$, then it represents a class that vanishes modulo $\fm_{[\xi]}^2$. - (HR3), see [@haiman p. 211]. Any strictly southwest pointing arrow vanishes modulo $\fm_{[\xi]}^2$. The set of equivalence classes of nonvanishing arrows under the Haiman relations forms a basis for the cotangent space (see Figure e and Figure f for examples of these relations; in our example we have the Haiman relations $c^{12}_{00} = c^{32}_{20} = c^{41}_{3,-1} = 0$ in $\fm_{[\xi]}/\fm_{[\xi]}^2$, verifying HR3). Let $R = {\mathbb{C}}[x_1,\ldots,x_n,y_1,\ldots,y_n]^{\SymG}$ be the coordinate ring of $\Snc = (\CC^2)^n/\SymG$. This ring is generated by the *polarized power sums* [@Weyl]: $$p_{r,s} = \sum_{i = 1}^n x_i^ry_i^s.$$ We can describe the pullback of the functions $p_{r,s}$ along $h$ [@haiman p. 208] as: $$h^*(p_{r,s}) = {\operatorname{Tr}}(x^ry^s:\ocn {\rightarrow}\ocn)$$ where $x^ry^s$ is viewed as an endomorphism of $\CC[x,y]/I_\xi$ for $[\xi] \in \Hnc$.
Thus $dh|_{[\xi]}$ is the map on cotangent spaces induced by $h^*$ and its image is spanned by the classes ${\operatorname{Tr}}(x^ry^s) \mod \fm_{[\xi]}^2$. We need to compute the derivative of ${\operatorname{Tr}}(x^ry^s)$ in $\fm_{[\xi]}/\fm_{[\xi]}^2$. For all $[\chi] \in U_\mu$ we can write $x^r y^s \in \operatorname{End}(\CC[x,y]/I_\chi)$ as a matrix using the basis ${\mathcal{B}}_\mu$. Thus we compute the trace: $${\operatorname{Tr}}(x^ry^s) = \sum_{(h,k) \in \mu} c^{r + h, s+k}_{h,k}$$ as an element of $H^0(U_\mu,\Oc_{U_\mu})$. By the discussion preceding the proof, the image of $dh|_{[\xi]}$ is generated by $$d(h^*(p_{r,s})) = d {\operatorname{Tr}}(x^ry^s) \equiv \sum_{(h,k) \in \mu} c^{r + h, s+k}_{h,k} \mod \fm_{[\xi]}^2.$$ Using the description of the cotangent space as linear combinations of equivalence classes of arrows on the Young diagram $\mu$, $d(h^*(p_{r,s}))$ is a sum of arrows of slope $s/r$. Whenever both $s$ and $r$ are nonzero, then these arrows are pointing southwest and so by (HR3) they vanish modulo $\fm_{[\xi]}^2$. (Figures g and h: these arrows depict $d(h^*(p_{4,0}))$ modulo $\fm_{[\xi]}^2$; by applying (HR1) and (HR2) to shift up and to the left we see $d(h^*(p_{4,0}))=0$.) When $s = 0$, $d(h^*(p_{r,0}))$ is a sum of horizontal arrows of length $r$. If $r > \max(\Delta h)$, then by (HR1) we can slide each horizontal arrow up and to the left until the head of the arrow leaves the first quadrant (see Figures g and h). Therefore by (HR2), $d(h^*(p_{r,0})) = 0 \mod \fm_{[\xi]}^2$.
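The trace formula can also be sanity-checked at the monomial point itself, where the multiplication matrix of $x^ry^s$ in the basis $\mathcal{B}_\mu$ is a 0/1 shift matrix; its trace vanishes for every $(r,s)\neq(0,0)$, consistent with $h^*(p_{r,s})$ lying in $\fm_{[\xi]}$ there. A sketch (our own encoding of $\mu$; the example diagram is illustrative):

```python
import numpy as np

# Trace of multiplication by x^r y^s on C[x,y]/I_xi at a monomial point.
# mu is the Young diagram (the monomial basis); reduction mod a monomial
# ideal sends a monomial to itself (if it lies in mu) or to 0.

def mult_matrix(r, s, mu):
    basis = sorted(mu)
    idx = {m: k for k, m in enumerate(basis)}
    M = np.zeros((len(basis), len(basis)))
    for (h, k) in basis:
        target = (h + r, k + s)
        if target in idx:            # x^{h+r} y^{k+s} survives reduction
            M[idx[target], idx[(h, k)]] = 1.0
    return M

mu = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)}    # n = 5
print(np.trace(mult_matrix(0, 0, mu)),           # 5.0: the identity
      np.trace(mult_matrix(1, 0, mu)),           # 0.0
      np.trace(mult_matrix(1, 1, mu)))           # 0.0
```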
For $1 \le r \le \max(\Delta h)$, we get that at least one of these arrows is nonzero since we cannot slide any arrow of length $r$ past the max horizontal jump in the diagram while still keeping the head in $\mu$. By the same argument we see that $d(h^*(p_{0,s}))$ is a sum of vertical arrows of length $s$ and is nonzero if and only if $1 \le s \le \max(\Delta v)$. Since translation preserves both the length and direction of an arrow, the set $$\{d(h^*(p_{r,0})) \ : \ 1 \le r \le \max(\Delta h)\} \cup \{d(h^*(p_{0,s})) \ : \ 1 \le s \le \max(\Delta v)\}$$ is a linearly independent set of size $\max(\Delta v) + \max(\Delta h)$ which generates ${\operatorname{im}}(dh|_{[\xi]}) \subset \fm_{[\xi]}/\fm_{[\xi]}^2$ and it follows that $\rk(dh|_{[\xi]}) = \max(\Delta h) + \max(\Delta v) = \rk(\alpha_n|_{[\xi]}).$ Theorem A gives a lower bound on the tangent space dimension of $P_n$. On the other hand, we can obtain upper bounds by taking torus degenerations. In particular, $$2n - \rk(\alpha_n|_{[\xi]}) \le \dim T_{[\xi]}P_n \le \min\{\dim(T_{[\chi]}P_n) \ : [\xi] \text{ degenerates to } [\chi]\}$$ where $I_\chi$ is a monomial ideal. In fact Proposition $6$ holds more generally for *formally monomial* subschemes, that is, $\xi$ such that there exist formal coordinates around $0 \in \CC^2$ for which $I_\xi$ is a monomial ideal. Recall that a subscheme $\xi \subset \CC^2$ is *curvilinear* if it is contained in a smooth curve. Let $\xi$ be a monomial subscheme. Then $[\xi] \in P_n$ is a smooth point if and only if $\xi$ is a curvilinear subscheme. A monomial subscheme $\xi$ is curvilinear if and only if $\mu$ is either a single row or a single column of $n$ blocks. In each of these cases one easily sees that $\max(\Delta h) + \max(\Delta v) = n+1$ so that $\dim T_{[\xi]}P_n = n - 1$ is the dimension of $P_n$. Suppose $\xi$ is not curvilinear. Let $a$ and $b$ be the length of the largest row and largest column of $\mu$ respectively. 
Then $a + b \le n + 1$ with equality if and only if $\mu$ is a hook. If $\mu$ is a hook, then $\mu$ must have horizontal step sequence $\Delta h = (1,\alpha)$ and vertical step sequence $\Delta v = (\beta,1)$. Since the horizontal steps add up to $a$ and the vertical steps add up to $b$, it follows that $$\max(\Delta h) + \max(\Delta v) = \alpha + \beta < \alpha + \beta + 2 = a + b = n+1.$$ On the other hand, if $\mu$ is not a hook, then $\max(\Delta h) + \max(\Delta v) \le a + b < n + 1$. In either case, $\rk(\alpha_n|_{[\xi]}) < n + 1$ so that $\dim T_{[\xi]}P_n = 2n - \rk(\alpha_n|_{[\xi]}) > n - 1 = \dim P_n$. It also follows from Theorem A that the maximally singular points of $P_n$ are precisely the $k^{th}$ order neighborhoods of the origin. If $\dim T_{[\xi]}P_n = 2n - 2$ then $I_{\xi} = \fm^k$ where $\fm$ is the maximal ideal of $0 \in \CC^2$. We have an action of $(\CC^*)^2$ on $\Hnc$ with fixed points corresponding to monomial subschemes so that $P_n$ is invariant. Consider a 1-parameter subgroup $\sigma : \CC^* \to (\CC^*)^2$ which acts on $\Hnc$ with the same fixed points. The limits $$\lim_{t\to0}\sigma(t)\cdot[\xi] = [\chi] \enspace \text{ and } \enspace \lim_{t\to\infty}\sigma(t)\cdot[\xi] = [\zeta],$$ exist by properness of $P_n$ and they are monomial subschemes. Then by Remark 8 we have: $$2n-2 =\dim T_{[\xi]}P_n \le \dim T_{[\chi]}P_n \enspace \text{ and } \enspace 2n-2 =\dim T_{[\xi]}P_n \le \dim T_{[\zeta]}P_n.$$ Thus $\rk(\alpha_n|_{[\chi]}) = \rk(\alpha_n|_{[\zeta]}) \le 2$. This is only possible if the Young diagrams $\mu_\chi$ and $\mu_\zeta$ are staircases, that is $I_\chi = I_\zeta = \fm^k$ and $\zeta = \chi$. Thus both the degenerations occur in the Haiman chart $U_\chi$ so degeneration gives a map from $\PP^1$ to $U_\chi$. Since $U_\chi$ is affine, the map is constant and $I_\xi = I_\chi = \fm^k$. If $xy \in I_{\xi}$ then $\dim T_{[\xi]}P_n \le n+1$. 
The fact that $xy \in I_\xi$ implies the only Haiman charts that contain $[\xi]$ correspond to Young diagrams which are hooks. Therefore $\xi$ can only degenerate to monomial schemes with hooks for Young diagrams. An easy computation using the and the shows that if $\chi\subset \CC^2$ is a monomial subscheme with a hook for a Young diagram then $\dim T_{[\chi]}P_n = n-1$ or $n+1$. Then we are done by Remark 8.
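The smoothness criterion can be checked mechanically: combining $\rk(\alpha_n|_{[\xi]}) = \max(\Delta h)+\max(\Delta v)$ with $\dim T_{[\xi]}P_n = 2n - \rk(\alpha_n|_{[\xi]})$, a few column-height profiles (our own encoding of $\mu$) recover $n-1$ exactly in the curvilinear cases and $2n-2$ for the staircases $\fm^k$:

```python
def steps_from_heights(heights):
    """Perimeter steps of the Young diagram with the given (weakly
    decreasing) column heights."""
    dh, dv, run = [], [], 0
    for c, h in enumerate(heights):
        run += 1
        nxt = heights[c + 1] if c + 1 < len(heights) else 0
        if nxt < h:
            dh.append(run)
            dv.append(h - nxt)
            run = 0
    return dh, dv

def tangent_dim(heights):
    """dim T_[xi] P_n = 2n - (max dh + max dv) at a monomial point."""
    n = sum(heights)
    dh, dv = steps_from_heights(heights)
    return 2 * n - (max(dh) + max(dv))

print(tangent_dim([1] * 6))    # single row, n=6, curvilinear: 5 = n-1
print(tangent_dim([6]))        # single column, curvilinear:   5 = n-1
print(tangent_dim([2, 1]))     # I = m^2, n=3: 4 = 2n-2 > n-1
print(tangent_dim([3, 2, 1]))  # staircase m^3, n=6: 10 = 2n-2
```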
--- abstract: 'We study the heat current flowing between two baths consisting of harmonic oscillators interacting with a qubit through a spin-boson coupling. An explicit expression for the generating function of the total heat flowing between the hot and cold baths is derived by evaluating the corresponding Feynman-Vernon path integral under the non-interacting blip approximation (NIBA). This generating function satisfies the Gallavotti-Cohen fluctuation theorem, both before and after performing the NIBA. We also verify that the heat conductivity is proportional to the variance of the heat current, retrieving the well-known fluctuation dissipation relation. Finally, we present numerical results for the heat current.' author: - Erik Aurell - Brecht Donvil - Kirone Mallick bibliography: - 'lit.bib' date: 'October, 2019' title: Large Deviations and Fluctuation Theorem for the Quantum Heat Current --- Introduction ============ The flow of a non-vanishing macroscopic current of energy, charge, matter or information, that breaks time-reversal invariance, is a fingerprint of non-equilibrium behavior. A paradigmatic model for such a situation consists of a small system, with a finite number of degrees of freedom, that connects two large reservoirs in different thermodynamic states. The ensuing stationary state cannot be described by the standard laws of thermodynamics: in particular, the steady-state statistics is not given by a Gibbs ensemble. The theoretical analysis of simple models, whether classical or quantum, provides us with a wealth of information about far-from-equilibrium physics and has stimulated numerous studies in the last two decades [@KamenevBook; @WeissBook; @HaRa2006; @open; @Rivas; @MiPl17; @Derrida2007; @Gianni2014]. Quantum systems based on nanoscale integrated circuits are very effective for the study of quantum phenomena and are good candidates for possible applications.
This is due to their macroscopic size and the ensuing ability to manipulate them. For any application minimizing or controlling the heat flow is essential. Therefore there has been a great deal of experimental [@PeSo13; @PekolaNature2015; @GaVi2015; @ViAn2016; @PeBa2017; @KaPek2017; @Ronzani2018; @SeGu2019] and theoretical interest [@Gasparinetti1; @CampisiHeatEngines; @Donvil2018; @DonvilCal] in studying the heat flow in such circuits. The vast majority of theoretical studies has been focused on the weak coupling regime, for which well-controlled approximation schemes are available. For example, in the case of a small system interacting with an environment, it is possible to integrate out the bath from the full dynamics and express the resulting system dynamics in terms of a Lindblad equation [@LI76; @open; @HaRa2006]. In this case, heat currents can be studied in terms of energy changes of the system. However, the weak coupling assumption deviates from exact treatments quantitatively and qualitatively already at moderately low couplings [@TuSt2019]. There have been various earlier studies in the strong coupling regime. In [@Carrega2016; @Motz2018; @Aurell2019] the authors start from the generating function of the heat current to study its first moment. Based on the polaron transform, the authors of [@Segal2005] obtain an analytical expression for the heat current through an $N$ level system. Numerical studies include simulations based on hierarchical equations of motion [@Tanimura1989; @Tanimura2014; @Tanimura2015; @Kato2015; @Kato2016; @deVega2017] the quasi-adiabatic propagator path integral (QuAPI)[@Makri1998; @Boudjada2014], the iterative full counting statistics path integral [@Kilgour2019], the multi-configuration time-dependent Hartree (MCTDH) approach [@Velizhanin2008], the Stochastic Liouvillian algorithm [@StockburgerMak99], and other Monte Carlo approaches [@Saito2013]. 
Other recent contributions are [@Esposito2015; @Newman2017; @Ronzani2018; @Dou2018; @Eisert18; @KwUm2018; @AurellKawai]. In this paper we consider a qubit coupled to two (or more) thermal baths as in [@Carrega2016; @Aurell2019] and focus on the generating function for the heat current between the baths, defined as the change in their internal energy. Following [@Aurell2019], this generating function can be written explicitly as Feynman-Vernon type path integral. Relying on a modified version of the non-interacting blip approximation (NIBA), an expression for the first moment of the generating function, i.e. the average heat current, was obtained in [@Aurell2019]. In the present work, we are able to go much further: under the NIBA, we perform the path-integral over the histories of the qubit and we derive an expression for the full generating function, leading to analytical expressions for all moments of the heat currents. Another main result of this paper is that we confirm the Gallavotti-Cohen relation before and after the NIBA, which therefore preserves this fundamental symmetry (see [@EvansCM; @GCohen1; @LeSpo1999; @Maes; @Derrida2007; @Gaspard2009] and references therein). Finally, we find a fluctuation dissipation relation between the variance of the heat current and the thermal conductivity. The paper is structured as follows. In section \[sec:model\], we briefly introduce the spin-boson model that we shall analyze. In section \[sec:Generating-function\], the generating function of the heat current is calculated after performing the NIBA approximation. In section \[sec:G-C\] we discuss the Gallavotti-Cohen relation before and after the NIBA. The heat conductivity is derived in section \[sec:thermal-power\]. In section \[sec:fluc-diss\] we invert the Laplace transform of the generating functions for small $\alpha$ and obtain a fluctuation-dissipation relation between the variance of the heat current and the heat conductivity. 
Finally, in section \[sec:numerical\] we numerically evaluate the first moment of the generating function. Technical details are provided in the appendices. The model {#sec:model} ========= The spin-boson model is a prototype for understanding quantum coherence in the presence of dissipation [@Chakra82; @Bray82; @Leggett84; @Hakim1985; @Leggett1987]. It can be viewed as a variant of the Caldeira-Leggett model in which a quantum particle interacts with a bath of quantum-mechanical oscillators. In the spin-boson model, a two-level system modeled by a spin-1/2 degree of freedom is put in contact with one or more heat-baths. The literature on the subject is vast and we refer the reader to some reviews and to the references therein [@Leggett1987; @Cedraschi2001; @LeHur2008; @WeissBook]. In this paper, we shall study two baths made of harmonic oscillators that interact with a qubit via the spin-boson interaction. Although there is no direct interaction between the baths, energy will be transferred through the qubit. The Hamiltonian governing the total evolution of the qubit and of the baths is given by $$H=H_S+H_C+H_H+H_{CS}+H_{HS}.$$ The qubit Hamiltonian is given by $$H_S=-\hbar\frac{\Delta}{2}\sigma_x+\frac{\epsilon}{2}\sigma_z.$$ The cold bath and hot bath Hamiltonians are given by $$H_C=\sum_{b\in C}\frac{p_b^2}{2m_b}+\frac{1}{2}m_b \omega_b^2q_b^2$$ $$H_H=\sum_{b\in H}\frac{p_b^2}{2m_b}+\frac{1}{2}m_b \omega_b^2q_b^2.$$ Finally, the system-bath interactions are of the spin-boson type [@Leggett1987] $$H_{CS}=-\sigma_z\sum_{b\in C}C_bq_b$$ $$H_{HS}=-\sigma_z\sum_{b\in H}C_bq_b .$$ The effects of the environment are embodied in the spectral density of the environmental coupling [@WeissBook] (one for each bath): $$J^{H/C}(\omega)=\sum_{b\in H/C}\frac{(C^{H/C}_b)^2}{2m_b\omega_b} \delta(\omega - \omega_b) \, .
\label{OhmicJ}$$ We shall assume an Ohmic spectrum with an exponential cut-off determined by the frequency $\Omega$ $$\label{eq:def-expcutoff} J^{H/C}(\omega)=\frac{2}{\pi} \eta_{H/C} {\omega}\exp\left(-\frac{\omega}{\Omega}\right)$$ We denote by $U_t$ the unitary evolution operator of the total system and assume that the baths are initially at thermal equilibrium and are prepared in Gibbs states at different temperatures. For an initial state of the qubit $|i\rangle$ and a final state $|f\rangle$, the generating function of the heat current is defined as $$\begin{aligned} \label{genbath} G_{i,f}(\vec{\alpha},t)=&\operatorname{Tr}\langle f|e^{i(\alpha_HH_H+\alpha_C H_C)/\hbar}U_te^{-i(\alpha_HH_H+\alpha_C H_C)/\hbar}\nonumber\\&\qquad\times \Big(\rho_{\beta_C}\otimes\rho_{\beta_H}\otimes|i\rangle \langle i| \Big) \, U^\dagger_t|f\rangle,\end{aligned}$$ with $\vec{\alpha}=(\alpha_H,\alpha_C)$. The trace is taken over all the degrees of freedom of the baths. This generating function will allow us to calculate all the moments of the heat current: for example, taking the first derivative with respect to $\alpha_C$ and setting $\vec{\alpha}$ to zero gives the change in expected energy of the cold bath $$\begin{aligned} -i\hbar\partial_{\alpha_C}G_{i,f}(\vec{\alpha})\big|_{\vec{\alpha}=0}&=&\text{tr}(H_C\rho(t))-\text{tr}(H_C\rho(0)) \nonumber \\ &=&\Delta E_C.\end{aligned}$$ Calculation of the generating function {#sec:Generating-function} ====================================== The first step of the calculation is to rewrite the trace in equation as a Feynman-Vernon-type path integral [@WeissBook].
After integrating over the cold and the hot bath, the expression for the generating function is given by [@Aurell2019] $$\begin{aligned} \label{genNobath} &G_{i,f}(\vec{\alpha},t)=\int_{i,f}{\mathop{}\!\mathrm{D}}X{\mathop{}\!\mathrm{D}}Ye^{\frac{i}{\hbar}S_0[X]-\frac{i}{\hbar}S_0[Y]}\mathcal{F}_{\vec{\alpha}}[X,Y],\end{aligned}$$ where $\mathcal{F}_{\vec{\alpha}}$ is the influence functional. The paths $X$ and $Y$ are the forward and backward paths of the qubit; they take values $\pm1$. In the absence of interactions with the baths, the dynamics of the qubit are fully described by the free qubit action $S_0$. The effect of the influence functional is to generate interactions between the forward and backward paths; it also embodies the dependence on the parameters $\vec{\alpha}$. $$\begin{aligned} \mathcal{F}_{\vec{\alpha}}[X,Y]=e^{\frac{i}{\hbar}(S_{i,\alpha_C}^C[X,Y]+S_{i,\alpha_H}^H[X,Y])} \nonumber \\ \qquad\times e^{ -\frac{1}{\hbar}(S_{r,\alpha_C}^C[X,Y]+S_{r,\alpha_H}^H[X,Y])},\end{aligned}$$ where the real part of the interaction action is given by $$\begin{aligned} \label{eq:Sr} &S_{r,\alpha_{H/C}}^{H/C}[X,Y]=\int_{t_i}^{t_f}{\mathop{}\!\mathrm{d}}t\int_{t_i}^{t}{\mathop{}\!\mathrm{d}}s \bigg((X_tX_s+Y_tY_s)k^{H/C}_r(t-s)\nonumber\\&-X_tY_sk^{H/C}_r(t-s+\alpha_{H/C})-X_sY_tk^{H/C}_r(t-s-\alpha_{H/C})\bigg)\end{aligned}$$ and the imaginary part is defined as $$\begin{aligned} \label{eq:Si} & S_{i,\alpha_{H/C}}^{H/C}[X,Y]=\int_{t_i}^{t_f}{\mathop{}\!\mathrm{d}}t\int_{t_i}^{t}{\mathop{}\!\mathrm{d}}s \bigg((X_tX_s-Y_tY_s)k^{H/C}_i(t-s)\nonumber\\ &+X_tY_sk^{H/C}_i(t-s+\alpha_{H/C})-X_sY_tk^{H/C}_i(t-s-\alpha_{H/C})\bigg)\end{aligned}$$ The kernels that appear in these expressions are $$k_i^{H/C}(t-s)=\sum_b\frac{(C^{H/C}_b)^2}{2m_b\omega_b}\sin(\omega_b(t-s))$$ and $$k_r^{H/C}(t-s)=\sum_b\frac{(C^{H/C}_b)^2}{2m_b\omega_b}\coth\left(\frac{\hbar\omega_b\beta_{H/C}}{2}\right)\cos(\omega_b(t-s)).$$ The integral of the bath degrees of freedom being performed,
the generating function is given as the qubit path-integral over two binary paths. This remaining expression cannot be calculated exactly; in the next section, we shall evaluate the generating function by resorting to the non-interacting blip approximation (NIBA). Performing the NIBA {#subsec:niba} ------------------- Originally, the idea of the NIBA was proposed in [@Leggett1987] to compute transition probabilities between states of the qubit: this corresponds to taking $\alpha=0$ in (\[genbath\]). The paths $X$ and $Y$ being binary, there are only two possibilities at a given time: either $X = Y$ (a [*Sojourn*]{}) or $X = -Y$ (a [*Blip*]{}). The NIBA approximation relies on two assumptions (explained in [@Leggett1987]): \(i) The typical Blip-interval time $\Delta t_B$ is much shorter than the typical Sojourn-interval time $\Delta t_S$: $\Delta t_B\ll\Delta t_S$. \(ii) Correlations decay over times much smaller than the typical Sojourn interval $\Delta t_S$. Under these assumptions, the only nonzero contributions to the time integrals in the interaction part of the action and are obtained when - $t$ and $s$ are in the same Blip-interval - $t$ and $s$ are in the same Sojourn-interval - $t$ is in a Sojourn and $s$ is in an adjacent Blip interval - $t$ and $s$ are both in Sojourn-intervals separated by one Blip. Thus, the strategy to perform the NIBA is to break up the integrals and over the whole time interval into a sum of the surviving parts, which can be evaluated separately. In the present work, we extend the NIBA to include nonzero $\alpha$ (see also [@Aurell2019]), which leads to a time shift in some of the kernels in the action and . In the framework of our approximation, we consider values of $\alpha_{H/C}$ such that $\alpha_{H/C}\ll \Delta t_S$. Following the same reasoning as for $\alpha_{H/C}=0$, it is clear that, under this additional assumption, the same terms as before can be nonzero.
In Appendix \[app:NIBA\], we explicitly calculate the five different surviving terms [^1] after the NIBA. The resulting expression for the generating function can be written in terms of a transfer matrix $M(\alpha,t)$: $$\begin{aligned} \label{generatinTransf} &G_{\uparrow \uparrow}(\vec\alpha,t)+G_{\uparrow\downarrow}(\vec\alpha,t)=\begin{pmatrix} 1&1 \end{pmatrix}\sum_{n=0}^{+\infty}(-1)^n\left(\frac{\Delta}{2}\right)^{2n}\nonumber\\&\times\int {\mathop{}\!\mathrm{d}}t_1\hdots {\mathop{}\!\mathrm{d}}t_{2n}\mathbf{M}(\alpha,\Delta_{2n})\mathbf{M}(\alpha,\Delta_{2n-2})\hdots\mathbf{M}(\alpha,\Delta_2)\begin{pmatrix} 1\\0 \end{pmatrix}\end{aligned}$$ where $\Delta_{2j} = t_{2j} - t_{2j-1}$. The transfer matrix $\mathbf{M}$ is given by $$\begin{aligned} \label{eq:transf} &\mathbf{M}(\vec\alpha,t)= \begin{pmatrix} A(t) &-B(\vec\alpha,t)\\-C(\vec\alpha,t)&D(t) \end{pmatrix}\end{aligned}$$ Note that only the off-diagonal elements of the transfer matrix depend on $\alpha$. The functions $A,B,C$ and $D$ that appear as matrix elements in $\mathbf{M}$ are determined once the NIBA has been performed. 
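In Laplace space each convolution in this series becomes a matrix product, so the sum is a matrix geometric series and resums to a resolvent, $\sum_{n\ge0}(-1)^n(\Delta/2)^{2n}\lambda^{-n}\tilde{\mathbf{M}}^n = \big(I+(\Delta/2)^2\tilde{\mathbf{M}}/\lambda\big)^{-1}$, valid when the spectral radius of $(\Delta/2)^2\tilde{\mathbf{M}}/\lambda$ is below 1. A minimal numerical check (the matrix entries are illustrative stand-ins, not the NIBA kernels):

```python
import numpy as np

# Resummation of the blip series in Laplace space:
#   sum_n (-1)^n (Delta/2)^{2n} lam^{-n} Mt^n = (I + (Delta/2)^2 Mt/lam)^{-1}

Delta, lam = 1.0, 2.0
Mt = np.array([[0.4, -0.3],
               [-0.2, 0.5]])          # stand-in for \tilde M(alpha, lam)

z = (Delta / 2) ** 2 / lam            # expansion parameter, |z*Mt| < 1
partial = np.zeros((2, 2))
term = np.eye(2)
for n in range(200):                  # partial sums of the series
    partial += term
    term = term @ (-z * Mt)

resolvent = np.linalg.inv(np.eye(2) + z * Mt)
print(np.allclose(partial, resolvent))   # True
```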
Their values are given by \[eq:def-matrixelements\] $$A(t)=\cos\frac{1}{\hbar}\left(Z_C^+(t)+Z_H^+(t)-\epsilon t\right)e^{-\frac{1}{\hbar}(\Gamma_C^+(t)+\Gamma_H^+(t))}$$ $$\begin{aligned} &B(\vec\alpha,t)=e^{-\frac{1}{\hbar}(\Gamma_C^-(\alpha_C,t)+\Gamma_H^-(\alpha_H,t)+2i(R_C(\alpha_C,t)+R_H(\alpha_H,t))}\nonumber\\&\times \cos\frac{1}{\hbar}\left((Z_C^-+2iF_C)(\alpha_C,t)+(Z_H^-+ 2i F_H)(\alpha_H,t)+\epsilon t\right)\end{aligned}$$ $$\begin{aligned} &C(\vec\alpha,t)=e^{-\frac{1}{\hbar}(\Gamma_C^-(\alpha_C,t)+\Gamma_H^-(\alpha_H,t) +2 i(R_C(\alpha_C,t)+R_H(\alpha_H,t))}\nonumber\\&\times \cos\frac{1}{\hbar}\left((Z_C^-+2iF_C)(\alpha_C,t)+(Z_H^-+ 2i F_H)(\alpha_H,t)-\epsilon t\right)\end{aligned}$$ $$D(t)=\cos\frac{1}{\hbar}\left(Z_C^+(t)+Z_H^+(t)+\epsilon t\right)e^{-\frac{1}{\hbar}(\Gamma_C^+(t)+\Gamma_H^+(t))}$$ All the auxiliary functions $Z_j^{\pm},\Gamma_j^{\pm}, R_j$ and $F_j$, where the index $j=C,H$ refers to the cold or the hot bath are determined in the Appendix A. Assuming a Ohmic spectral density with exponential cut-off with frequency $\Omega$ (\[OhmicJ\]), the explicit expressions of these functions are given in the following equations: \[eq:def-functions2\] $$Z^+_{j}(t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{\sin(\omega t)}{\omega}$$ $$Z_j^-(\alpha_j,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{\sin(\omega t)}{\omega}\cos(\omega\alpha_j)$$ \[eq:def-functions\] $$\begin{aligned} &\Gamma^+_j(t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1-\cos(\omega t)}{\omega}\coth\left(\frac{\omega\hbar\beta_j}{2}\right)\end{aligned}$$ $$\begin{aligned} \Gamma^-_j(\alpha_j,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1-\cos(\omega t)\cos(\omega\alpha_j)}{ \omega} \coth(\frac{\omega\hbar\beta_j}{2})\end{aligned}$$ $$\begin{aligned} &R_j(\alpha_j,t)=\frac{\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{\sin(\omega\alpha_j)}{ 
\omega}\cos(\omega t)\end{aligned}$$ $$\begin{aligned} F_j(\alpha_j,t)= \frac{\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{\coth\left(\frac{\omega\hbar\beta_j}{2}\right)}{ \omega} \sin(\omega t)\sin(\omega\alpha_j)\end{aligned}$$ The behaviour of these functions is shown in Figure \[fig:functions\]. We shall denote by $\tilde{\phi}$ the Laplace transform of a function $\phi(\vec\alpha, t)$, defined as follows: $$\tilde{\phi}(\vec\alpha,\lambda)=\int_0^{\infty} {\mathop{}\!\mathrm{d}}t\, e^{-\lambda t} \phi(\vec\alpha,t).$$ Then, taking the Laplace transform of leads us to $$\begin{aligned} &\tilde{G}_{\uparrow \uparrow}(\vec\alpha,\lambda)+\tilde{G}_{\uparrow\downarrow}(\vec\alpha,\lambda)=\nonumber\\&\quad\lambda^{-1}\begin{pmatrix} 1&1 \end{pmatrix}\left(\sum_{n=0}^{+\infty}(-1)^n\left(\frac{\Delta}{2}\right)^{2n}\lambda^{-n}\mathbf{\tilde{M}}^n(\alpha,\lambda)\right)\begin{pmatrix} 1\\0 \end{pmatrix}\end{aligned}$$ We call $\lambda_+(\vec\alpha,\lambda)$ and $\lambda_-(\vec\alpha,\lambda)$ the eigenvalues of the $2\times2$ matrix $\mathbf{\tilde{M}}(\vec\alpha,\lambda)$, with corresponding left eigenvectors $v_+(\vec\alpha,\lambda)$ and $v_-(\vec\alpha,\lambda)$ and right eigenvectors $w_+(\vec\alpha,\lambda)$ and $w_-(\vec\alpha,\lambda)$.
We have $$\begin{aligned} \lambda_\pm(\vec\alpha,\lambda) &= \frac{1}{2}\Big(\tilde A(\lambda) + \tilde D(\lambda)\Big) \\ & \pm \frac{1}{2}\sqrt{\big(\tilde A(\lambda)-\tilde D(\lambda)\big)^2+4 \tilde B(\vec\alpha,\lambda)\tilde C(\vec\alpha,\lambda)}\nonumber\end{aligned}$$ Finally, the Laplace transform of the generating function takes a simpler form in the eigenbasis of $\mathbf{\tilde{M}}$: $$\begin{aligned} \label{eq:lapeig} &\tilde{G}_{\uparrow \uparrow}(\vec\alpha,\lambda)+\tilde{G}_{\uparrow \downarrow}(\vec\alpha,\lambda)\nonumber\\&\qquad=\frac{Q_+(\vec\alpha,\lambda)}{\lambda+\left(\frac{\Delta}{2}\right)^2\lambda_+(\vec\alpha,\lambda)}+\frac{Q_-(\vec\alpha,\lambda)}{\lambda+\left(\frac{\Delta}{2}\right)^2\lambda_-(\vec\alpha,\lambda)},\end{aligned}$$ where we defined the amplitudes $$Q_\pm=\begin{pmatrix} 1&1 \end{pmatrix}v_\pm w^T_\pm \begin{pmatrix} 1\\0 \end{pmatrix}.$$ The fluctuation theorem pre- and post-NIBA {#sec:G-C} ========================================== Time Reversal ------------- We define the time reversal for the qubit state paths $X(t)$ and $Y(t)$ as \[eq:timerev\] $$\begin{aligned} X_R(t)&=Y(t_f+t_i-t)\\ Y_R(t)&=X(t_f+t_i-t),\end{aligned}$$ see Figure \[fig:rev\].
To illustrate this time reversal, let us note that for $\vec{\alpha}=0$, the generating function can be written as $$\begin{aligned} &G_{i,f}(0)=\operatorname{Tr}_{H,C}(\rho_{\beta_C}\otimes\rho_{\beta_H}\langle i|U|f\rangle \langle f|U^\dagger|i\rangle)\end{aligned}$$ Expressing the trace as a path integral and computing the trace over the bath variables gives $$\begin{aligned} &G_{i,f}(\vec{\alpha})=\int_{i,f}{\mathop{}\!\mathrm{D}}X{\mathop{}\!\mathrm{D}}Ye^{-\frac{i}{\hbar}S_0[X]+\frac{i}{\hbar}S_0[Y]}\mathcal{F}_R[X,Y],\end{aligned}$$ With influence functional $$\mathcal{F}_{R}[X,Y]=e^{\frac{i}{\hbar}(S_{i,R}^C+S_{i,R}^H)[X,Y]-\frac{1}{\hbar}(S_{r,R}^C+S_{r,R}^H)[X,Y]},$$ where we defined the real part of the action as $$\begin{aligned} &S_{r,R}^{H/C}[X,Y]=\int_{t_i}^{t_f}{\mathop{}\!\mathrm{d}}t\int_{t_i}^{t}{\mathop{}\!\mathrm{d}}s \bigg((X_tX_s+Y_tY_s)k^{H/C}_r(t-s)\nonumber\\&\quad-X_tY_sk^{H/C}_r(t-s)-X_sY_tk^{H/C}_r(t-s)\bigg)\end{aligned}$$ and the imaginary part $$\begin{aligned} &S_{i,R}^{H/C}[X,Y]=\int_{t_i}^{t_f}{\mathop{}\!\mathrm{d}}t\int_{t_i}^{t}{\mathop{}\!\mathrm{d}}s \bigg((Y_tY_s-X_tX_s)k^{H/C}_i(t-s)\nonumber\\&\quad+X_tY_sk^{H/C}_i(t-s)-X_sY_tk^{H/C}_i(t-s)\bigg)\end{aligned}$$ Now taking $X(t)\rightarrow X_R(t)$ and $Y(t)\rightarrow Y_R(t)$, retrieves the expression for the generating function for $\vec{\alpha}=0$. 
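On a discrete time grid the reversal amounts to reversing the two arrays and swapping their roles; in particular it is an involution, and since $X_R - Y_R$ is the reversal of $Y - X$, blip intervals map to blip intervals and sojourn intervals to sojourn intervals. A quick sketch (grid size and path values illustrative):

```python
import numpy as np

# Discrete version of the time reversal (eq:timerev): on a grid
# t_k = t_i + k*dt, X_R(t) = Y(t_f + t_i - t) and Y_R(t) = X(t_f + t_i - t)
# just reverse the two +/-1 arrays and swap their roles.

def reverse(X, Y):
    return Y[::-1], X[::-1]

rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=8)
Y = rng.choice([-1, 1], size=8)

XR, YR = reverse(X, Y)
XRR, YRR = reverse(XR, YR)          # applying the reversal twice
print(np.array_equal(XRR, X) and np.array_equal(YRR, Y))        # True
print(np.array_equal(XR - YR, (Y - X)[::-1]))  # blips map to blips: True
```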
The Gallavotti-Cohen symmetry ----------------------------- Let us define the time-reversed generating function as $$\begin{aligned} \label{eq:genrev1} &G^R_{fi}(\alpha_H,\alpha_C,t)=\operatorname{Tr}_{H,C} \langle i|e^{i(\alpha_HH_H+\alpha_C H_C)/\hbar}U^\dagger_t \nonumber\\&\quad\times e^{-i(\alpha_HH_H+\alpha_C H_C)/\hbar}(\rho_{\beta_C}\otimes\rho_{\beta_H}\otimes|f\rangle \langle f|)U_t|i\rangle.\end{aligned}$$ Before performing the NIBA, it is straightforward to show from the definition of the generating function and that the Gallavotti-Cohen relation holds: $$\begin{aligned} \label{eq:g-c} &G_{if}(i\beta_H-\alpha_H/\hbar,i\beta_C-\alpha_C/\hbar,t)= G^R_{fi}(\alpha_H,\alpha_C,t).\end{aligned}$$ After integrating out the bath, the above equation can be checked using the time reversal defined in . It is possible to show that the Gallavotti-Cohen relation still holds after performing the NIBA. In order to do so we perform the NIBA on the time-reversed generating function $G^R_{fi}(\alpha_H,\alpha_C,t)$ following the same procedure as outlined in Subsection \[subsec:niba\]. The result is of the form , with transfer matrix $$\begin{aligned} \label{eq:transfrev} \bar{\mathbf{M}}(\vec\alpha,t)= 2 \begin{pmatrix} D(t) &-C(\vec \alpha,t) \\-B(\vec \alpha,t) & A(t) \end{pmatrix},\end{aligned}$$ On the other hand, one can calculate that $$\begin{aligned} \label{eq:transfiv} {\mathbf{M}}(i(\beta_H,\beta_C)-\vec\alpha,t)= 2 \begin{pmatrix} A(t) &-C(\vec \alpha,t) \\-B(\vec \alpha,t) & D(t) \end{pmatrix}.\end{aligned}$$ Note that in the time reversal , we interchange the meaning of $X$ and $Y$, as illustrated in Figure \[fig:rev\]. Interchanging the roles of $X$ and $Y$ means flipping the diagonal elements in the transfer matrix. Thus ${\mathbf{M}}(i(\beta_H,\beta_C)-\vec\alpha,t)$ and $\bar{\mathbf{M}}(\vec\alpha,t)$ are equivalent, proving that the Gallavotti-Cohen relation remains true after performing the NIBA.
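The equivalence invoked in the last step can be made concrete at the level of spectra: the two matrices share the trace $A+D$ and the determinant $AD-BC$, hence they have the same eigenvalues, which is what governs the long-time behavior of the resummed generating function. A quick numerical check with illustrative values for $A,B,C,D$:

```python
import numpy as np

# The two transfer matrices appearing in the Gallavotti-Cohen argument,
# [[D, -C], [-B, A]] and [[A, -C], [-B, D]], have the same trace A + D
# and determinant AD - BC, hence the same spectrum.

rng = np.random.default_rng(0)
A, B, C, D = rng.uniform(0.1, 1.0, size=4)
M_rev = np.array([[D, -C], [-B, A]])
M_gc = np.array([[A, -C], [-B, D]])

ev = lambda M: np.sort_complex(np.linalg.eigvals(M))
print(np.allclose(ev(M_rev), ev(M_gc)))   # True
```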
(Figure \[fig:rev\]: the forward paths $X(t)$ and $Y(t)$, attached to the amplitudes $\langle f| U_t|i\rangle$ and $\langle i| U^\dagger_t|f\rangle$, and the time-reversed paths ${X}_R(t)$ and ${Y}_R(t)$, attached to $\langle i| U_t|f\rangle$ and $\langle f| U^\dagger_t|i\rangle$, on the time axis running from $t_i$ to $t_f$.) Thermal power {#sec:thermal-power} ============= In this section we are interested in studying the thermal power between the two baths. The thermal power is defined as $$\label{eq:thermal-power} \Pi(\beta_C,\beta_H) = \lim_{t\to\infty}\frac{\left<\Delta E_c\right>}{t}$$ where $\beta_C$ and $\beta_H$ are the inverse temperatures of respectively the cold bath and the hot bath, and $E_c$ is the energy of the cold bath. We will show that $\Pi(\beta,\beta)=0$, as one would physically expect: in the steady state there is no heat transfer between two baths at the same temperature.
Furthermore, we calculate the thermal conductivity $\kappa$, which is defined by the expansion in the small difference $\Delta\beta$ between the inverse temperatures of the two baths $$\Pi(\beta,\beta+\Delta\beta) = \kappa\Delta\beta +O(\Delta\beta^2).$$ Our starting point is a result by the authors of [@Aurell2019] for the form of the thermal power $$\label{eq:thermal-power-AM} \Pi =\left(\frac{\Delta}{2}\right)^2 \left( \frac{p_-}{p_++p_-}\pi_{\uparrow} + \frac{p_+}{p_++p_-}\pi_{\downarrow} \right).$$ Let us introduce the characteristic functions $$\begin{aligned} C_C(t) &=& e^{-\frac{1}{\hbar}\Gamma_C^+(t)+ \frac{i}{\hbar}Z^+_C(t)} \\ C_H(t) &=& e^{-\frac{1}{\hbar}\Gamma_H^+(t)+ \frac{i}{\hbar}Z^+_H(t)},\end{aligned}$$ which allow us to conveniently write the coefficients of \[eq\] $$\begin{aligned} p_-=2\int_0^\infty dt\, A(t)= \int_{-\infty}^\infty dt\,C_C(t)C_H(t)e^{-i\epsilon t}\end{aligned}$$ $$\begin{aligned} p_+=2\int_0^\infty dt\, D(t)= \int_{-\infty}^\infty dt\,C_C(t)C_H(t)e^{i\epsilon t}\end{aligned}$$ and $$\begin{aligned} \pi_{\uparrow}&=-2i\hbar\int_0^\infty dt\, \partial_\alpha C(\alpha,t)\big|_{\alpha=0}\nonumber\\&=-{i\hbar}\int_{-\infty}^\infty dt\,\frac{dC_C(t)}{dt}C_H(t)e^{i\epsilon t}\end{aligned}$$ $$\begin{aligned} \pi_{\downarrow}&=-2i\hbar\int_0^\infty dt\, \partial_\alpha B(\alpha,t)\big|_{\alpha=0}\nonumber\\&=-i{\hbar}\int_{-\infty}^\infty dt\,\frac{dC_C(t)}{dt}C_H(t)e^{-i\epsilon t}\end{aligned}$$ Two baths with the same temperatures ------------------------------------ When both baths have the same temperature, we expect the steady state heat transfer to be zero $$\label{eq:thermal-power-zero} \Pi(\beta,\beta) = 0$$ Via an analytic continuation argument outlined in Appendix \[sec:analytical\], we find that $$\begin{aligned} \label{eq:analcont1} p_+(\beta_C,\beta_H) = &\int d t C_C(t+i\Delta\beta\hbar) C_H(t) e^{-\frac{i}{\hbar}\epsilon t} e^{-\epsilon\beta_H} \end{aligned}$$ and $$\begin{aligned} \label{eq:analcont2} \pi_\downarrow(\beta_C,\beta_H)
&={i\hbar} \int d t \frac{d C_C(t+i\Delta\beta\hbar)}{dt} C_H(t) e^{\frac{i}{\hbar}\epsilon t} e^{\epsilon\beta_H} \end{aligned}$$ with $\Delta\beta=\beta_C-\beta_H$. When both temperatures are equal, these relations transform to $$\begin{aligned} \label{eq:D-def-4} p_+(\beta,\beta) &=& e^{-\epsilon\beta} p_-(\beta,\beta)\end{aligned}$$ and $$\begin{aligned} \label{eq:pi} \pi_{\uparrow}(\beta,\beta) &=& - e^{-\epsilon\beta}\pi_{\downarrow}(\beta,\beta).\end{aligned}$$ Equations and directly give us $$\begin{aligned} \label{eq:thermal-power-AM-same} \Pi(\beta,\beta) &=& \left(\frac{\Delta}{2}\right)^2 \frac{1}{p_++p_-}\left(p_-\pi_{\uparrow} + p_+\pi_{\downarrow}\right) \nonumber \\ &=&\left(\frac{\Delta}{2}\right)^2 \frac{e^{-\epsilon\beta}}{p_++p_-} \left(-p_-\pi_{\downarrow} +p_-\pi_{\downarrow}\right) = 0\end{aligned}$$ Thermal Conductivity {#sec:almost-same-temperatures} -------------------- To obtain an explicit formula for the thermal conductivity $\kappa$ one should expand in the difference $\Delta\beta=\beta_C-\beta_H$ between the inverse temperatures of the two baths. Differentiating the denominator ($p_++p_-$) gives no contribution, as it multiplies the combination $p_-\pi_{\uparrow} + p_+\pi_{\downarrow}$, which vanishes to zeroth order. We can therefore write $$\begin{aligned} \label{eq:thermal-power-AM2} \kappa &=&\left(\frac{\Delta}{2}\right)^2 \frac{1}{p_++p_-}\Big( \partial_{\beta_C}(p_-)\pi_{\uparrow} + p_- \partial_{\beta_C}(\pi_{\uparrow}) \nonumber \\ && \quad + \partial_{\beta_C}(p_+)\pi_{\downarrow} + p_+ \partial_{\beta_C}(\pi_{\downarrow})\Big)\end{aligned}$$ All terms on the right hand side are evaluated at $\beta_C=\beta_H=\beta$. The calculation of $\kappa$ is presented in Appendix \[sec:appTherm\].
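The cancellation used above can be confirmed with a short symbolic computation; in this sketch $e_b$ stands for $e^{-\epsilon\beta}$, and the equal-temperature detailed-balance relations are put in by hand.

```python
import sympy as sp

pm, piD, eb, Delta = sp.symbols('p_minus pi_down e_b Delta', positive=True)
pp  = eb * pm        # p_+ = e^{-eps*beta} p_-
piU = -eb * piD      # pi_up = -e^{-eps*beta} pi_down

# thermal power at equal temperatures
Pi = (Delta / 2)**2 * (pm * piU + pp * piD) / (pp + pm)
assert sp.simplify(Pi) == 0  # Pi(beta, beta) vanishes identically
```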
The idea of the calculation is to write out $\partial_{\beta}(p_-)\pi_{\uparrow}$ and $p_- \partial_{\beta}(\pi_{\uparrow})$, and to keep track of how the terms generated in the partial derivatives $\partial_{\beta}(p_-)$ and $\partial_{\beta}(\pi_{\uparrow})$ change as the integral variable $t$ is shifted to $t+i\hbar\beta$. The result is $$\begin{aligned} \label{eq:kappafinal} \kappa=&-\frac{(\hbar\Delta)^2}{4(p_++p_-)}\nonumber\\&\times(p_+\tilde{B}''(0,0)+4\tilde{B}'(0,0)\tilde{C}'(0,0)+p_-\tilde{C}''(0,0)),\end{aligned}$$ where $\tilde{B}$ and $\tilde{C}$ are the Laplace transforms of the matrix elements defined in equation and the prime denotes the derivative with respect to the first variable. Fluctuation-dissipation relation {#sec:fluc-diss} ================================ In this section we invert the Laplace transform of the generating function up to second order in $\alpha$. This gives us direct access to the first and second moments of the heat current. Concretely, we look for poles of equation by constructing a function $$\label{eq:lamnu} \lambda(\alpha)=\lambda_0+\lambda_1\alpha+\lambda_2\alpha^2+O(\alpha^3),$$ which solves $$\label{eq:laml} \lambda(\alpha)+\left(\frac{\Delta}{2}\right)^2\lambda_-(\alpha,\lambda(\alpha))=0$$ at all orders in $\alpha$. Hence for small $\alpha$, we have, in the long time limit $$\begin{aligned} G_{i,f}(\alpha,t)= &\text{Res}\left(\frac{e^{\lambda t}Q_-(\alpha,\lambda)}{\lambda(\alpha)+\left(\frac{\Delta}{2}\right)^2\lambda_-(\alpha,\lambda)},\lambda(\alpha)\right)\nonumber\\ =&\frac{e^{\lambda(\alpha)t}Q_-(\alpha,\lambda(\alpha))}{1+\left(\frac{\Delta}{2}\right)^2\dot\lambda_-(\alpha,\lambda(\alpha))}\end{aligned}$$ (Note that in the long time limit the contribution of $\lambda_+$ is exponentially subdominant.) Keeping in mind that $\lambda_-(0,\lambda)=0$, the zeroth order of equation gives
$$\lambda_0=0.$$ Equation to the first order in $\alpha$ translates to $$\lambda_1+\left(\frac{\Delta}{2}\right)^2\lambda'_-(0,\lambda_0)+\left(\frac{\Delta}{2}\right)^2\dot\lambda_-(0,\lambda_0)\lambda_1=0,$$ where the prime denotes the derivative with respect to $\alpha$, the first variable, and a dot a derivative with respect to $\lambda$. After some algebra, we find $$\lambda_1=i\left(\frac{\Delta}{2}\right)^2\frac{p_+\,\pi_\downarrow+p_-\pi_\uparrow}{\hbar(p_++p_-)}.$$ Similarly, an expression can be obtained for $\lambda_2$. In equilibrium, when $\beta_H=\beta_C$, $$\lambda_2=\left(\frac{\Delta}{2}\right)^2\bigg(\frac{p_-\,C''(0,0)+p_+\,B''(0,0)+4 C'(0,0) B'(0,0)}{p_++p_-}\bigg).$$ Writing the explicit expression for $Q_-(\alpha,\lambda(\alpha))$, straightforward algebra shows that $$Q_-(\alpha,\lambda(\alpha))=1+O(\alpha^3),$$ and we obtain that the generating function is given by $$\begin{aligned} &G_{i,f}(\alpha)= e^{(\lambda_1\alpha+\lambda_2\alpha^2+O(\alpha^3))t} \times \hskip 3cm \nonumber\\&\bigg\{1 -\left(\frac{\Delta}{2}\right)^2(\dot\lambda_-'(0,0)+\lambda_-''(0,0)\lambda_1)\alpha \nonumber\\&+\bigg(\left(\frac{\Delta}{2}\right)^2\dot\lambda_-'(0,0)\bigg)^2\alpha^2-\left(\frac{\Delta}{2}\right)^2\dot\lambda_-''(0,0)\alpha^2+O(\alpha^3)\bigg\} \nonumber \end{aligned}$$ The first moment of the heat current is $$\langle \Delta E\rangle=-i\hbar t\lambda_1$$ which correctly leads to the heat current defined in .
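The perturbative pole construction can be illustrated on a toy implicit equation with the same structure, $\lambda+c\,\lambda_-(\alpha,\lambda)=0$ with $\lambda_-(0,\lambda)=0$; the kernel $\lambda_-=\sin\alpha\,(1+\lambda)+\alpha^2$ and the value $c=1/25$ are of course invented for the illustration only.

```python
import sympy as sp

a, l, l1, l2 = sp.symbols('alpha lam l1 l2')
c = sp.Rational(1, 25)                  # plays the role of (Delta/2)^2
lam_minus = sp.sin(a) * (1 + l) + a**2  # toy lambda_-(alpha, lam), zero at alpha = 0

# substitute lambda(alpha) = l1*alpha + l2*alpha^2 and match orders in alpha
F = (l1 * a + l2 * a**2) + c * lam_minus.subs(l, l1 * a + l2 * a**2)
ser = sp.expand(sp.series(F, a, 0, 3).removeO())
sol = sp.solve([ser.coeff(a, 1), ser.coeff(a, 2)], [l1, l2])

# compare with a direct numerical root at small alpha
lam_num = sp.nsolve((l + c * lam_minus).subs(a, 0.01), l, 0)
approx  = sol[l1] * 0.01 + sol[l2] * 0.01**2
assert abs(lam_num - approx) < 1e-6
```

Here the first-order coefficient comes out as $\lambda_1=-c$, exactly what the first-order condition above gives for this kernel (since $\lambda'_-(0,0)=1$ and $\dot\lambda_-(0,0)=0$).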
The variance of the heat current is then given by $$\text{Var}[\Delta E]=-\hbar^2 t(2\lambda_2-2(\dot\lambda_-'(0,0)+\lambda_-''(0,0)\lambda_1)\lambda_1)+O(t) \nonumber$$ In equilibrium, where $\lambda_1=0$, we find that $$\begin{aligned} &\lim_{t\to\infty}\frac{1}{t}\text{Var}[\Delta E]= -2\hbar^2\lambda_2\\ &=-2\left(\frac{\hbar\Delta}{2}\right)^2\bigg(\frac{p_-\,C''(0,0)+p_+\, B''(0,0)+4 C'(0,0)B'(0,0)}{p_++p_-}\bigg) \nonumber\end{aligned}$$ Comparing to , we find the following identity $$\lim_{t\to\infty}\frac{1}{t}\text{Var}[\Delta E]=2\kappa,$$ which proves the fluctuation-dissipation relation. Numerical evaluation of the Generating function {#sec:numerical} =============================================== For our numerical analysis we consider the parameters $\epsilon=1\,\mathrm{K}\,k_B$, $\hbar\Delta=0.01\epsilon$ and $\Omega=100\epsilon/\hbar$. The thermal power is completely determined by the functions $Z^+_{H/C}(t)$ and $\Gamma^+_{H/C}(t)$, defined in and . For the Ohmic spectral density $J(\omega)$ with exponential cutoff , these functions have analytic solutions [@Leggett1987] $$Z^+_j(t)=\eta_{j} \tan^{-1}(\Omega t)$$ $$\Gamma_j^+(t)=\frac{1}{2}\eta_{j}\log\left(1+\Omega^2t^2\right) +\eta_{j}\log\left(\frac{\hbar\beta_{j}}{\pi t}\sinh\frac{\pi t}{\hbar\beta_j}\right).$$ Figure \[fig:power\] shows the heat current to the hot bath (dashed) and to the cold bath (full lines) as a function of the coupling strength $\eta_H$, with $\eta_C=\hbar$ constant. The purple curve shows the heat currents for $\beta_H=(0.2\,\mathrm{K}\, k_B)^{-1}$ and $\beta_C=(0.1\,\mathrm{K}\, k_B)^{-1}$. The blue curve shows the heat currents with the temperatures of the hot and cold bath exchanged. The curves show rectification of the heat current: the current changes direction when the temperatures of the baths are exchanged, but the magnitudes are not equal. Let $P_C$ be the power to the cold bath and $P^R_C$ the power to the cold bath when the temperatures of the baths have been exchanged.
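These closed forms also allow a direct numerical check of the detailed-balance relation $p_+=e^{-\epsilon\beta}p_-$ of Section \[sec:thermal-power\]. The sketch below uses $\hbar=k_B=1$ and illustrative parameter values; since the closed forms for $\Gamma^+$ assume $\hbar\beta\Omega\gg1$, the relation is only expected to hold up to small corrections.

```python
import numpy as np
from scipy.integrate import quad

eps, Omega, beta, eta = 1.0, 100.0, 1.0, 1.0   # illustrative values, eta per bath

def Gamma(t):
    x = np.pi * t / beta
    # log(sinh(x)/x) with a safe small-x limit
    thermal = np.log(np.sinh(x) / x) if abs(x) > 1e-8 else x * x / 6.0
    return 0.5 * eta * np.log(1.0 + (Omega * t)**2) + eta * thermal

def C_tot(t):
    # C_C(t)*C_H(t), both baths at the same inverse temperature beta
    return np.exp(-2.0 * Gamma(t) + 2.0j * eta * np.arctan(Omega * t))

def p(sign):
    # p_-: sign=-1, p_+: sign=+1; since C_tot(-t) = conj(C_tot(t)), the
    # integral over the real line is twice the real part over t > 0
    f = lambda t: 2.0 * (C_tot(t) * np.exp(1j * sign * eps * t)).real
    val, _ = quad(f, 0.0, 50.0, limit=400, points=[0.01, 0.1, 1.0, 10.0])
    return val

p_minus, p_plus = p(-1), p(+1)
ratio = p_plus / p_minus        # should be close to exp(-eps*beta)
```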
To quantify the amount of rectification, we define the rectification index as [@SeGu2019] $$\label{eq:def-rect} R=\frac{\text{max}(|P_{C}|,|P^R_{C}|)}{\text{min}(|P_{C}|,|P^R_{C}|)}.$$ The rectification index is shown in Figure \[fig:rectification\] for different ranges of the hot-bath temperature as a function of the coupling parameter $\eta_H$. Larger temperature gradients lead to higher rectification. The influence of a third bath with temperature $T_E$, weakly coupled to the qubit, on the rectification index $R$ is shown in Figure \[fig:thirdbath\]. The cold bath has constant coupling $\eta_C=\hbar$, the third bath has coupling $\eta_E=0.1\hbar$, and the coupling of the hot bath ranges from $0$ to $1.5\hbar$. The dashed (black) line displays the rectification index without the third bath. For small temperatures $T_E\leq T_C$ the rectification is amplified; for large temperatures $T_E> T_C$ the rectification decreases. Discussion ========== In this paper we have studied the thermal power through a qubit between two thermal baths. Earlier studies of the model [@Segal2005; @Aurell2019] were concerned with the first moment of the generating function. We here went beyond these and, under the non-interacting blip approximation (NIBA), we derived an explicit expression for the generating function of the heat current.
The Laplace transform of the cumulant generating function of the heat current is a large deviation function (or rate function) that allows one to quantify rare events. In equilibrium, it can be shown that rate functions are simply related to the traditional thermodynamic potentials such as entropy or free energy [@Touchette]. Far from equilibrium, large deviation functions can be defined for a large class of dynamical processes and are good candidates for playing the role of generalized potentials [@Derrida2007]. In classical physics, a few exact solutions for the large deviations in some integrable interacting particle models have been found and a non-linear hydrodynamic theory for macroscopic fluctuations has been developed [@Derrida2007; @Gianni2014]. In the quantum case, exact results are rarer; the most pertinent is a series of remarkable calculations performed for the XXZ open spin chain interacting with boundary reservoirs within the Lindblad framework [@ProsenPRL; @ProsenReview]. In the present work, our aim was to study the heat transport in the spin-boson model, starting from the microscopic model that embodies the qubit and the reservoirs. We hence do not rely on a Markovian assumption, but only on the assumption that the tunnelling element is small, as is inherent to the NIBA. Our analysis begins with the exact expression of the generating function in terms of a Feynman-Vernon type path integral, from which we derived a full analytical formula for the generating function of the heat current. Although this result is not exact because it is based on the NIBA, we believe that it can be used as a benchmark and compared with numerical simulations or with other, more elaborate, approximation schemes [@LeHur2008]. As a numerical example we studied the first moment of the generating function, the heat current. We found that this shows rectification when the coupling strengths of the qubit to the two baths are not equal.
When the temperature gradient is flipped, the current changes direction, but it does not have the same magnitude in both directions and therefore breaks Fourier's law of heat conduction. A very important property satisfied by the generating function is the Gallavotti-Cohen fluctuation theorem, which embodies at the macroscopic scale the time-reversal invariance of the microscopic dynamics. The fluctuation theorem implies in particular the fluctuation-dissipation relation and the Onsager reciprocity rules when different currents are present [@EvansCM; @GCohen1; @Gallavotti2; @Gaspard2009; @GaspardOnsag1]. The fact that the formal definition of the generating function obeys the Gallavotti-Cohen symmetry is rather straightforward to obtain. In addition we have shown that this relation remains true even after the NIBA. This means that the NIBA respects the fundamental symmetries of the underlying model, or equivalently, that the spin-boson problem with the NIBA is by itself a thermodynamically consistent model. One consequence is that the fluctuation-dissipation relation is retrieved under the NIBA. Indeed, we explicitly calculated the first and second moments of the heat. When the temperature difference between the baths is small, we found the heat conductivity $\kappa$ as the first moment of the heat per unit time divided by the temperature difference. The variance of the heat at equilibrium, when both temperatures are the same, is then per unit time proportional to $\kappa$. We emphasize that the Gallavotti-Cohen relation is valid far from equilibrium and implies relations between response coefficients at arbitrary orders [@GaspardNL]. Our result, within the NIBA framework, allows us to calculate these coefficients and it would be of great interest to compare these predictions with experimental measurements. Acknowledgements {#ack .unnumbered} ================ We gratefully acknowledge discussions with Jukka Pekola and Dmitry Golubev.
This work was initiated at the Nordita program "New Directions in Quantum Information" (Stockholm, April 2019). We thank Nordita, Quantum Technology Finland (Espoo, Finland), and International Centre for Theory of Quantum Technologies (Gdańsk, Poland) for their financial support for this event. The work of B.D. is supported by DOMAST. Surviving terms of the NIBA {#app:NIBA} =========================== It is convenient to define the sojourn-index $$\chi_t=X_t+Y_t$$ such that during a sojourn $X_t=Y_t=\frac{1}{2}\chi_t$, and the blip-index $$\xi_t=X_t-Y_t,$$ such that during a blip $X_t=-Y_t=\frac{1}{2}\xi_t$. Since we will be performing two time integrals, we will need the second primitive functions $K_i$, $K_r$ of $k_i(t-s)$ and $k_r(t-s)$. Note that the primitive functions have an extra minus sign, due to the fact that we are integrating over $-s$ \[eq:prim\] $$K^{H/C}_i(t)=\sum_b\frac{(C^{H/C}_b)^2}{2m_b\omega^3_b}\sin(\omega_b t)$$ $$K^{H/C}_r(t)=\sum_b\frac{(C^{H/C}_b)^2}{2m_b\omega^3_b}\coth\left(\frac{\hbar\omega_b\beta_{H/C}}{2}\right)\cos(\omega_b t)$$ Blip-Blip --------- We consider a blip interval that runs from a time $t^*$ to $t^*+\Delta t_B$. #### Imaginary part of the action Notice that in the same blip interval $X_t=X_s=-Y_t=-Y_s$, hence $X_tX_s=Y_tY_s=1$ and $X_tY_s=Y_tX_s=-1$. This means that the term proportional to $X_tX_s-Y_tY_s$ in the imaginary part of the action will not contribute. The remaining terms, which we denote by $R(\vec\alpha,t)=R^{H}(\alpha_H,t)+R^{C}(\alpha_C,t)$, give $$\begin{aligned} R^j(\alpha_j,t)-\frac{1}{2}K^j_i(\alpha_j)&=-\frac{1}{4}\int_{t^*}^{t^*+\Delta t_B}\int_{t^*}^{t^*+\Delta t_B}{\mathop{}\!\mathrm{d}}t {\mathop{}\!\mathrm{d}}s\,k^j_i(t-s+\alpha_j)\nonumber\\&=\frac{1}{4}( K^j_i(\Delta t_B+\alpha_j)+K^j_i(-\Delta t_B+\alpha_j)-2K^j_i(\alpha_j))\nonumber\\ &=\frac{1}{2}\sum_b \frac{(C^j_b)^2}{2m_b\omega_b^3}\sin(\omega_b\alpha_j)\cos(\omega_b\Delta t_B)-\frac{1}{2}K^j_i(\alpha_j),\end{aligned}$$ where $j=H$ or $C$.
We isolated the $\frac{1}{2}K^j_i(\alpha_j)$ term to anticipate a cancellation with Sojourn-Sojourn terms. #### Real part of the action For the real part, all terms contribute. The result is $C(\vec\alpha,\Delta t_B)=C^H(\alpha_H,\Delta t_B)+C^C(\alpha_C,\Delta t_B)$, with $$\begin{aligned} \label{eq:bbre} & C^j(\alpha,\Delta t_B)\equiv\frac{1}{4}\int_{t^*}^{t^*+\Delta t_B}\int_{t^*}^{t}{\mathop{}\!\mathrm{d}}t{\mathop{}\!\mathrm{d}}s (2k^j_r(t-s)+k^j_r(t-s+\alpha)+k^j_r(t-s-\alpha))\nonumber\\&=\frac{1}{4}(-2K^j_r(\Delta t_B)+2K^j_r(0)-K^j_r(\Delta t_B+\alpha)-K^j_r(\Delta t_B-\alpha)+2K^j_r(\alpha))\nonumber\\ &=\frac{1}{2}\sum_b\frac{(C^j_b)^2}{2m_b \omega_b^3}\coth\left(\frac{\omega_b\hbar\beta_j}{2}\right)[\cos(\omega_b\alpha)+1][1-\cos(\omega_b\Delta t_B)]\end{aligned}$$ Blip-Sojourn ------------ We consider a blip interval running from $t^*-\Delta t_B$ to $t^*$ and the ensuing sojourn interval from $t^*$ to $t^*+\Delta t_S$. #### Imaginary part The contribution from the imaginary part of the action is $\chi\xi X_-(\vec\alpha,\Delta t_B)$, $X_-(\vec\alpha,\Delta t_B)=X^H_-(\alpha_H,\Delta t_B)+X^C_-(\alpha_C,\Delta t_B)$ with $$\begin{aligned} X^j_{-}(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*}^{t^*+\Delta t_S}{\mathop{}\!\mathrm{d}}t\int_{t^*-\Delta t_B}^{t^*}{\mathop{}\!\mathrm{d}}s\, (2k^j_i(t-s)-k^j_i(t-s+\alpha)-k^j_i(t-s-\alpha))\nonumber\\ &=\frac{1}{4}\bigg(2K^j_i(\Delta t_S)-2K^j_i(\Delta t_S+\Delta t_B)+2K^j_i(\Delta t_B)-2K^j_i(0)\nonumber\\&\quad-K^j_i(\Delta t_S+\alpha)+K^j_i(\Delta t_S+\Delta t_B+\alpha)-K^j_i(\Delta t_B+\alpha)+K^j_i(\alpha)\nonumber\\&\quad-K^j_i(\Delta t_S-\alpha)+K^j_i(\Delta t_S+\Delta t_B-\alpha)-K^j_i(\Delta t_B-\alpha)+K^j_i(-\alpha)\bigg)\end{aligned}$$ Following the NIBA, we have $K^j_i(\Delta t_S)=K^j_i(\Delta t_S\pm\alpha)=K^j_i(\Delta t_S+\Delta t_B)=K^j_i(\Delta t_S+\Delta t_B\pm\alpha)$, which leads to a significant simplification of the above equation; we find $$\begin{aligned} &X^j_{-}(\alpha,\Delta t_B)=\frac{1}{4}(2K^j_i(\Delta
t_B)-K^j_i(\Delta t_B-\alpha)-K^j_i(\Delta t_B+\alpha))\\ &=\frac{1}{2}\sum_b\frac{(C^j_b)^2}{2m_b\omega_b^3}\sin(\omega_b\Delta t_B)[1-\cos(\omega_b\alpha)]\end{aligned}$$ #### Real part The real part gives $\chi\xi F_-(\vec\alpha,\Delta t_B)$, $F_-(\vec\alpha,\Delta t_B)=F^H_-(\alpha_H,\Delta t_B)+F^C_-(\alpha_C,\Delta t_B)$ $$\begin{aligned} F^j_{-}(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*}^{t^*+\Delta t_S}{\mathop{}\!\mathrm{d}}t\int_{t^*-\Delta t_B}^{t^*}{\mathop{}\!\mathrm{d}}s\,(k^j_r(t-s+\alpha)-k^j_r(t-s-\alpha))\nonumber\\&=\frac{1}{4}\bigg( K^j_r(\Delta t_S+\alpha)+K^j_r(\Delta t_B+\alpha)-K^j_r(\Delta t_B+\Delta t_S+\alpha)-K^j_r(\alpha)\nonumber\\&\quad-K^j_r(\Delta t_S-\alpha)-K^j_r(\Delta t_B-\alpha)+K^j_r(\Delta t_B+\Delta t_S-\alpha)+K^j_r(-\alpha)\bigg)\end{aligned}$$ Under the same argument as for the imaginary part, we get $$\begin{aligned} F^j_{-}(\alpha,\Delta t_B)&=\frac{1}{4}\bigg(K^j_r(\Delta t_B+\alpha)-K^j_r(\Delta t_B-\alpha)\bigg)\\ =&-\frac{1}{2}\sum_b\frac{(C^j_b)^2}{2m_b\omega_b^3}\coth\left(\frac{\omega_b\hbar\beta_j}{2}\right)\sin(\omega_b\Delta t_B)\sin(\omega_b\alpha)\end{aligned}$$ Sojourn-Blip ------------ The sojourn interval runs from $t^*-\Delta t_S$ to $t^*$ and the blip interval from $t^*$ to $t^*+\Delta t_B$. #### Imaginary part This calculation is similar to the Blip-Sojourn term, but with fewer cancellations.
$$\begin{aligned} X^j_+(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*}^{t^*+\Delta t_B}{\mathop{}\!\mathrm{d}}t\int_{t^*-\Delta t_S}^{t^*}{\mathop{}\!\mathrm{d}}s\, (2k^j_i(t-s)+k^j_i(t-s+\alpha)+k^j_i(t-s-\alpha))\nonumber\\ &=\frac{1}{4}\bigg(2K^j_i(\Delta t_S)-2K^j_i(\Delta t_S+\Delta t_B)+2K^j_i(\Delta t_B)-2K^j_i(0)\nonumber\\&\quad+K^j_i(\Delta t_S+\alpha)-K^j_i(\Delta t_S+\Delta t_B+\alpha)+K^j_i(\Delta t_B+\alpha)-K^j_i(\alpha)\nonumber\\&+K^j_i(\Delta t_S-\alpha)-K^j_i(\Delta t_S+\Delta t_B-\alpha)+K^j_i(\Delta t_B-\alpha)-K^j_i(-\alpha)\bigg)\end{aligned}$$ Again, under the NIBA, we have $K^j_i(\Delta t_S)=K^j_i(\Delta t_S\pm\alpha)=K^j_i(\Delta t_S+\Delta t_B)=K^j_i(\Delta t_S+\Delta t_B\pm\alpha)$, which gives $$\begin{aligned} X^j_+(\alpha,\Delta t_B)&=\frac{1}{4}(2K^j_i(\Delta t_B)+K^j_i(\Delta t_B-\alpha)+K^j_i(\Delta t_B+\alpha))\nonumber\\&=\frac{1}{2}\sum_b\frac{(C^j_b)^2}{2m_b\omega_b^3}\sin(\omega_b\Delta t_B)[1+\cos(\omega_b\alpha)].\end{aligned}$$ #### Real part $$\begin{aligned} F^j_+(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*}^{t^*+\Delta t_B}{\mathop{}\!\mathrm{d}}t\int_{t^*-\Delta t_S}^{t^*}{\mathop{}\!\mathrm{d}}s\,(-k^j_r(t-s+\alpha)+k^j_r(t-s-\alpha))\nonumber\\&=\frac{1}{4}\bigg( -K^j_r(\Delta t_S+\alpha)-K^j_r(\Delta t_B+\alpha)+K^j_r(\Delta t_B+\Delta t_S+\alpha)+K^j_r(\alpha)\nonumber\\&+K^j_r(\Delta t_S-\alpha)+K^j_r(\Delta t_B-\alpha)-K^j_r(\Delta t_B+\Delta t_S-\alpha)-K^j_r(-\alpha)\bigg)\end{aligned}$$ Under the same argument as for the imaginary part, we get $$\begin{aligned} F^j_+(\alpha,\Delta t_B)&=\frac{1}{4}\bigg(-K^j_r(\Delta t_B+\alpha)+K^j_r(\Delta t_B-\alpha)\bigg)\nonumber\\&=\frac{1}{2}\sum_b\frac{(C_b^j)^2}{2m_b\omega_b^3}\coth\left(\frac{\omega_b\hbar\beta_j}{2}\right)\sin(\omega_b\Delta t_B)\sin(\omega_b\alpha).\end{aligned}$$ Note that $F_+=-F_-$. Sojourn-Sojourn --------------- The first sojourn interval runs from $t^*$ to $t^*+\Delta t_{S_1}$ and the second sojourn interval from $t^*+\Delta t_{S_1}$ to $t^*+\Delta t_{S_1}+\Delta t_{S_2}$.
#### Imaginary part We find $$\begin{aligned} B^j(\alpha)\equiv&\frac{1}{4}\int_{t^*}^{t^*+\Delta t_S}\int_{t^*}^{t^*+\Delta t_S}{\mathop{}\!\mathrm{d}}t {\mathop{}\!\mathrm{d}}s\,k^j_i(t-s+\alpha)\nonumber\\&=\frac{1}{4}(2K^j_i(\alpha)-K^j_i(\Delta t_S+\alpha)-K^j_i(-\Delta t_S+\alpha))\nonumber\\&=\frac{1}{2}K^j_i(\alpha)\nonumber\\&=\frac{1}{2}\sum_b\frac{(C^j_b)^2}{2m_b\omega_b^3}\sin(\omega_b\alpha)\end{aligned}$$ #### Real part $$\begin{aligned} D^j(\alpha)&=\frac{1}{4}\int_{t^*}^{t^*+\Delta t_S}{\mathop{}\!\mathrm{d}}t\int_{t^*}^{t}{\mathop{}\!\mathrm{d}}s \,(2k^j_r(t-s)-k^j_r(t-s+\alpha)-k^j_r(t-s-\alpha))\nonumber\\&=\frac{1}{4}\bigg(2K^j_r(0)-2K^j_r(\Delta t_S)+K^j_r(\Delta t_S +\alpha)-K^j_r(\alpha)+K^j_r(\Delta t_S -\alpha)-K^j_r(-\alpha)\bigg)\nonumber\\&=\frac{1}{2}(K^j_r(0)-K^j_r(\alpha))\\&=\frac{1}{2}\sum_b\frac{(C_b^j)^2}{2m_b\omega_b^3}\coth\left(\frac{\omega_b\hbar\beta_j}{2}\right)[1-\cos(\omega_b\alpha)]\end{aligned}$$ There will also be cancellations between $D$ and $C$. Sojourn-(Blip)-Sojourn ---------------------- The first sojourn interval runs from $t^*$ to $t^*+\Delta t_{S_1}$ and the second sojourn interval from $t^*+\Delta t_{S_1}+\Delta t_{B}$ to $t^*+\Delta t_{S_1}+\Delta t_{B}+\Delta t_{S_2}$.
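Two pieces of bookkeeping in this appendix can be checked symbolically for a single bath oscillator (unit amplitude, overall factors of $\frac{1}{4}$ and the coefficient $(C_b^j)^2/2m_b\omega_b^3$ dropped): (i) square-domain double integrals such as the one defining $B^j$ above reduce to combinations of the second primitives; (ii) the per-oscillator trigonometric combinations of the blip-blip, sojourn-sojourn and sojourn-blip-sojourn terms recombine as used later for $\Gamma^\pm_j$.

```python
import sympy as sp

t, s, T, al, w = sp.symbols('t s T alpha omega', positive=True)

# (i) square-domain double integral of k_i(t-s+alpha), one oscillator;
# K is the second primitive (with the extra sign from integrating over -s)
k = sp.sin(w * (t - s + al))
K = lambda u: sp.sin(w * u) / w**2
I = sp.integrate(sp.integrate(k, (s, 0, T)), (t, 0, T))
assert sp.simplify(I - (2 * K(al) - K(T + al) - K(-T + al))) == 0

# (ii) combinations entering Gamma^+/- per oscillator, with a = omega*alpha,
# d = omega*Delta t_B and the common coth factor dropped
a, d = sp.symbols('a d')
C = sp.Rational(1, 2) * (sp.cos(a) + 1) * (1 - sp.cos(d))   # blip-blip
D = sp.Rational(1, 2) * (1 - sp.cos(a))                     # sojourn-sojourn
S = sp.Rational(1, 2) * sp.cos(d) * (sp.cos(a) - 1)         # sojourn-blip-sojourn
assert sp.expand(C + D + S - (1 - sp.cos(d))) == 0              # Gamma^+: alpha drops out
assert sp.expand(C + D - S - (1 - sp.cos(a) * sp.cos(d))) == 0  # Gamma^-
```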
These terms come with a factor $\chi_j\chi_{j+1}$. #### Imaginary part $$\begin{aligned} \Lambda^j(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*+\Delta t_{S_1}+\Delta t_{B}}^{t^*+\Delta t_{S_1}+\Delta t_{B}+\Delta t_{S_2}}{\mathop{}\!\mathrm{d}}t \int_{t^*}^{t^*+\Delta t_{S_1}}{\mathop{}\!\mathrm{d}}s\,(k^j_i(t-s+\alpha)-k^j_i(t-s-\alpha))\nonumber\\&=\frac{1}{4}(-K^j_i(\Delta t_B+\alpha)+K^j_i(\Delta t_B-\alpha))\nonumber\\&=-\frac{1}{2} \sum_b\frac{(C_b^j)^2}{2m_b\omega_b^3}\cos(\omega_b\Delta t_B)\sin(\omega_b\alpha)\end{aligned}$$ #### Real part $$\begin{aligned} \Sigma^j(\alpha,\Delta t_B)&=\frac{1}{4}\int_{t^*+\Delta t_{S_1}+\Delta t_{B}}^{t^*+\Delta t_{S_1}+\Delta t_{B}+\Delta t_{S_2}}{\mathop{}\!\mathrm{d}}t \int_{t^*}^{t^*+\Delta t_{S_1}}{\mathop{}\!\mathrm{d}}s\,(2k^j_r(t-s)-k^j_r(t-s+\alpha)-k^j_r(t-s-\alpha))\nonumber\\ &=\frac{1}{4}(K^j_r(\Delta t_B+\alpha)+K^j_r(\Delta t_B-\alpha)-2 K^j_r(\Delta t_B))\nonumber\\ &=\frac{1}{2}\sum_b\frac{(C_b^j)^2}{2m_b\omega_b^3}\coth\left(\frac{\omega_b\hbar\beta_j}{2}\right)\cos(\omega_b\Delta t_B)[\cos(\omega_b\alpha)-1]\end{aligned}$$ Transfer matrix --------------- The generating function, using the terms calculated in the last subsections, is $$\begin{aligned} \label{eq:actSS} &G_{S\rightarrow S}(\alpha)=\sum_{n=0}^{+\infty}(-1)^n\left(\frac{\Delta}{2}\right)^{2n}\int {\mathop{}\!\mathrm{d}}t_1\hdots {\mathop{}\!\mathrm{d}}t_{2n}\,\sum_{\substack{\chi_1,\hdots,\chi_n=\pm 1, \\\xi_1,\hdots,\xi_n=\pm 1}}\exp\bigg(-\frac{i}{\hbar}\epsilon\sum_i\xi_{i}(t_{2i}-t_{2i-1})\bigg)\nonumber\\ &\times\exp\bigg(\frac{i}{\hbar}\sum_{j=H,C}\sum_i\chi_{i}\xi_{i+1}X^j_+(\alpha_j,\Delta_{2i+2})+\chi_i\xi_i X^j_-(\alpha_j,\Delta_{2i})+\chi_i\chi_{i+1}\Lambda^j(\alpha_j,\Delta_{2i+2})+R^j(\alpha_j,\Delta_{2i})\bigg)\nonumber\\ &\times\exp\bigg(-\frac{1}{\hbar}\sum_{j=H,C}\sum_i\chi_{i}\xi_{i+1}F^j_+(\alpha_j,\Delta_{2i+2})+\chi_i\xi_i F^j_-(\alpha_j,\Delta_{2i})+\chi_i\chi_{i+1}\Sigma^j(\alpha_j,\Delta_{2i+2})+C^j(\alpha_j,\Delta_{2i})\bigg)\end{aligned}$$ To express the
resulting generating function in terms of a transfer matrix, it is convenient to first define for $j=H$ or $C$ $$Z^+_{j}(t)=X^j_+(\alpha,t)+X^j_-(\alpha,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1}{\omega}\sin(\omega t)$$ $$Z_j^-(\alpha,t)=X^j_+(\alpha,t)-X^j_-(\alpha,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1}{ \omega}\sin(\omega t)\cos(\omega\alpha)$$ and $$\begin{aligned} &\Gamma^+_j(t)=C^j(\alpha,t)+D^j(\alpha,t)+\Sigma^j(\alpha,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1}{\omega}\coth\left(\frac{\omega\hbar\beta_j}{2}\right)(1-\cos(\omega t))\end{aligned}$$ $$\begin{aligned} &\Gamma^-_j(\alpha,t)=C^j(\alpha,t)+D^j(\alpha,t)-\Sigma^j(\alpha,t)=\frac{2\eta_j}{\pi}\int_0^\Omega{\mathop{}\!\mathrm{d}}\omega\, \frac{1}{ \omega}\coth\left(\frac{\omega\hbar\beta_j}{2}\right)(1-\cos(\omega t)\cos(\omega\alpha))\end{aligned}$$ which allows us to write the generating function as . Analytic continuation {#sec:analytical} ===================== Suppose that all functions are analytic in the strip $0\leq \Im t \leq \hbar\beta_H$ (note that in Appendix E of [@Leggett1987] the authors assume an analytic continuation to negative imaginary values of $t$; however they consider the function $G(t)$ related to the function $C(t)$ by $G(t)=e^{-C(t)}$). Then any of the integrals, say $p_+$, can be written as $$\begin{aligned} \label{eq:D-def-2} p_+ &=& \int d t C_C(t+i\beta_H\hbar) C_H(t+i\beta_H\hbar) e^{\frac{i}{\hbar}\epsilon (t+i\beta_H\hbar)} \end{aligned}$$ The exponents in $C_C$ and $C_H$ are sums over bath oscillators. Each oscillator $b$ contributes $$\begin{aligned} \hbox{Term} &=& \frac{1}{2m_b\omega_b}\left(-\coth\frac{\omega_b\hbar\beta}{2}(1- \cos \omega_b t) +i\cdot \sin \omega_b t\right)\end{aligned}$$ where $\beta$ is $\beta_H$ or $\beta_C$.
Evaluating first the oscillators in the hot bath, we use $$\begin{aligned} \label{eq:cos-t} \cos\omega_b \left(t +i\hbar\beta\right) &=& \cos\omega_b t \cosh\hbar\omega_b\beta -i\sin\omega_b t \sinh\hbar\omega_b\beta \\ \label{eq:sin-t} \sin\omega_b \left(t +i\hbar\beta\right) &=& \sin\omega_b t \cosh\hbar\omega_b\beta +i\cos\omega_b t \sinh\hbar\omega_b\beta \end{aligned}$$ which with $\coth\frac{\omega_b\hbar\beta_H}{2}$ from above can be combined into $$\begin{aligned} \label{eq:cos-t-1} \cos\omega_b t\left(\coth\frac{\omega_b\hbar\beta_H}{2} \cosh\hbar\omega_b\beta_H - \sinh\hbar\omega_b\beta_H\right) &=& \cos\omega_b t \coth\frac{\omega_b\hbar\beta_H}{2} \\ \label{eq:sin-t-1} i\sin\omega_b t \left(-\coth\frac{\omega_b\hbar\beta_H}{2} \sinh\hbar\omega_b\beta_H + \cosh\hbar\omega_b\beta_H\right) &=& - i\sin\omega_b t\end{aligned}$$ Hence $$\begin{aligned} C_H(t+i\beta_H\hbar) &=& \overline{C_H(t)}= C_H(-t)\end{aligned}$$ For the oscillators in the cold bath we consider ($\Delta\beta=\beta_C-\beta_H$) $$\begin{aligned} C_C(t+i\beta_H\hbar) &=& C_C(t-i\hbar\Delta\beta+i\beta_C\hbar) = \overline{C_C(t-i\hbar\Delta\beta)}= C_C(-t+i\hbar\Delta\beta)\end{aligned}$$ Inserting back into the expression for $p_+$, this means $$\begin{aligned} \label{eq:D-def-3} p_+(\beta_C,\beta_H) &=& \int d t C_C(-t+i\Delta\beta\hbar) C_H(-t) e^{\frac{i}{\hbar}\epsilon t} e^{-\epsilon\beta_H} \nonumber \\ \end{aligned}$$ Thermal Conductivity {#sec:appTherm} ==================== ### The partial derivative of $p_+$ {#sec:D-term} For $p_+$ one finds $$\begin{aligned} \label{eq:D-def-5} \partial_{\beta_C}p_+ &=& \int d t \partial_{\beta_C}\left(\log C_C(t)\right)_{\beta_C=\beta} C_C(t)C_H(t) e^{\frac{i}{\hbar}\epsilon t} \end{aligned}$$ where $$\begin{aligned} \partial_{\beta_C}\log C_C(t)&=& \sum_{b\in C} \frac{1}{2m_b\omega_b}(1-\cos\omega_b t)\frac{1}{\sinh^2\frac{\omega_b\hbar\beta_C}{2}}\frac{\omega_b\hbar}{2} \nonumber \end{aligned}$$ Changing $t$ to $t+i\hbar\beta$ will change $C_C(t)C_H(t)
e^{\frac{i}{\hbar}\epsilon t}$ to $C_C(-t)C_H(-t) e^{-\frac{i}{\hbar}\epsilon (-t)}e^{-\epsilon\beta}$, similarly as in Appendix \[sec:analytical\]. The logarithmic derivative on the other hand changes in the convenient way: $$\begin{aligned} \label{eq:D-def-6} \partial_{\beta_C}\log C_C(t+i\hbar\beta;\beta_C=\beta)&=& \sum_{b\in C} \frac{\omega_b\hbar}{4m_b\omega_b}(1-\cos\omega_b t)\frac{1}{\sinh^2\frac{\omega_b\hbar\beta}{2}} + \sum_{b\in C} \frac{\omega_b\hbar}{4m_b\omega_b}\cos\omega_b t (-2) \nonumber \\ &+& \sum_{b\in C} \frac{\omega_b\hbar}{4m_b\omega_b}\sin\omega_b t (2i) \coth\frac{\omega_b\hbar\beta}{2}\end{aligned}$$ The two last terms can be compared to $$\begin{aligned} \label{eq:comparison} \partial_{t}\log C_C(t)&=& \partial_{t}\left(\sum_{b\in C} \frac{1}{2m_b\omega_b}\left[-(1-\cos\omega_b t)\coth\frac{\omega_b\hbar\beta}{2} +i\sin\omega_b t\right]\right)\nonumber \\ &=& \sum_{b\in C} \frac{1}{2m_b\omega_b}\left[-\omega_b\sin(\omega_b t)\coth\frac{\omega_b\hbar\beta}{2} +i\omega_b\cos\omega_b t\right]\end{aligned}$$ Eq.  can therefore be rewritten as $$\begin{aligned} \label{eq:D-def-7} \partial_{\beta_C}\log C_C(t+i\hbar\beta;\beta_C=\beta)&=& \partial_{\beta_C}\log C_C(-t;\beta_C=\beta) + i\hbar \partial_{s}\log C_C(s;\beta_C=\beta)|_{s=-t}\end{aligned}$$ We can now change the integral variable from $t$ to $-t$ which gives $$\begin{aligned} \label{eq:D-def-8} \partial_{\beta}(p_+)\pi_{\downarrow} &=& - \partial_{\beta}(p_-)\pi_{\uparrow} -{\hbar^2} \int d t \partial_t\left(C_C(t)\right) C_H(t) e^{\frac{i}{\hbar}\epsilon t} \int d t \partial_t\left(C_C(t)\right) C_H(t) e^{-\frac{i}{\hbar}\epsilon t} \end{aligned}$$ The sign is determined as follows: $\pi_{\uparrow}$ changes sign when it goes to $\pi_{\downarrow}$, but $p_+$ does not. There is a factor $i\hbar$ in the definition of $\pi_{\uparrow}$ and another one in the second term in $\partial_{\beta_C}\log C_C(t+i\hbar\beta)$. Taken together this gives $-(i\hbar) (-i\hbar)=-\hbar^2$.
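The two hyperbolic identities underlying the analytic continuation (Appendix \[sec:analytical\]) can be checked symbolically, writing $x=\hbar\omega_b\beta$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)      # x = hbar*omega_b*beta
coth = sp.cosh(x / 2) / sp.sinh(x / 2)

id1 = coth * sp.cosh(x) - sp.sinh(x) - coth   # coth(x/2)*cosh(x) - sinh(x) = coth(x/2)
id2 = -coth * sp.sinh(x) + sp.cosh(x) + 1     # -coth(x/2)*sinh(x) + cosh(x) = -1

assert sp.simplify(id1.rewrite(sp.exp)) == 0
assert sp.simplify(id2.rewrite(sp.exp)) == 0
```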
### The partial derivative of $\pi_{\uparrow}$ {#sec:pi-up-term} This term can be evaluated in practically the same way as the other one. One starts from $$\begin{aligned} \partial_{\beta}\pi_{\downarrow} &=-i& \partial_{\beta}\left(\int\cdots \partial_t(C_C(t))\cdots\right) \nonumber \\ &=-i& \left(\int\cdots C_C(t)\partial_{\beta}(\log C_C(t)) \partial_t(\log C_C(t))\cdots\right) \nonumber \\ && -i \left(\int\cdots C_C(t)\partial^2_{t\beta}(\log C_C(t))\cdots\right) \end{aligned}$$ One now treats $\partial_{\beta}(\log C_C(t))$ in the same way as in . The first term will then give something proportional to $(\partial_t(\log C_C(t)))^2$ and the second something proportional to $\partial_{tt}(\log C_C(t))$. Combining we have $$C_C\left((\partial_t(\log C_C(t)))^2+\partial_{tt}(\log C_C(t))\right)=\partial_{tt}C_C$$ This means that we can write $$\begin{aligned} \label{eq:D-def-8-bis} p_+\partial_{\beta}(\pi_{\downarrow}) &=& - p_-\partial_{\beta}(\pi_{\uparrow}) -{\hbar^2 }\int d t \partial_{tt}\left(C_C(t)\right) C_H(t) e^{\frac{i}{\hbar}\epsilon t} \int d t C_C(t) C_H(t) e^{-\frac{i}{\hbar}\epsilon t} \end{aligned}$$ The sign is determined as follows: $\pi_{\downarrow}$ changes sign when it goes to $\pi_{\uparrow}$, but the terms with two time derivatives do not change sign. The factors $i\hbar$ and $-i\hbar$ are the same as before. ### Combination {#sec:combination} Inserting and , using that $$\frac{1}{2}\int d t \partial_{tt}\left(C_C(t)\right) C_H(t) e^{\frac{i}{\hbar}\epsilon t} =\tilde{C}''(0,0)$$ $$\frac{1}{2}\int d t \partial_{tt}\left(C_C(t)\right) C_H(t) e^{\frac{-i}{\hbar}\epsilon t} =\tilde{B}''(0,0)$$ and symmetrizing one has $$\kappa=-\frac{(\hbar\Delta)^2}{4(p_++p_-)}(p_+\tilde{B}''(0,0)+4\tilde{B}'(0,0)\tilde{C}'(0,0)+p_-\tilde{C}''(0,0))$$ [^1]: For case c., the blip can be before or after the sojourn. These give different contributions and are calculated separately. Hence there are five different surviving terms.
--- abstract: 'We discuss Stark deflectometry of micro-modulated molecular beams for the enrichment of biomolecular isomers as well as single-wall carbon nanotubes and we demonstrate the working principle of this idea with fullerenes. The sorting is based on the species-dependent polarizability-to-mass ratio $\alpha/m$. The device is compatible with a high molecular throughput, and the spatial micro-modulation of the beam permits a fine spatial resolution and a high sorting sensitivity.' author: - 'Hendrik Ulbricht, Martin Berninger, Sarayut Deachapunya, André Stefanov and Markus Arndt' title: Gas phase sorting of nanoparticles --- Sorting of nanoparticles is essential for many future nanotechnologies. Nanoparticles can generally be sorted by their different physical or chemical properties. The objective is to prepare or enrich a particular species with a distinct property. In the case of carbon nanotubes, the sorting of species with different metallicity is essential for many applications such as the realization of field effect transistors, light emitting diodes or conducting wires [@Avouris2001]. Here sorting can, for instance, be achieved by exploiting the tube’s dielectric properties in a liquid environment [@Krupke2003a]. Chemical methods for the selection and separation of carbon nanotubes are also currently being investigated [@Arnold2006a]. Complementary to these efforts, the manipulation of large clusters and molecules in the gas phase has also attracted growing interest over recent years, in particular with applications in molecule metrology [@Bonin1997a; @Compagnon2001a; @Berninger2007a; @Deachapunya2007a]. Since many nanoparticles, among them biomolecules or carbon nanotubes, exist in various isomers and conformations, it is intriguing to investigate sorting methods in the gas phase which select the particles according to their polarizability-to-mass ratio $\alpha/m$ instead of their mass alone.
A large number of classical deflection experiments have been performed in the past (for a review see [@Bonin1997a]) which employ the deflection of a well-collimated neutral beam in the presence of a static transverse inhomogeneous electric field. In this arrangement, one can usually choose between a wide molecular beam of high flux and a narrow beam with a lower total signal whose lateral shift can be determined with higher precision. Here we present a method for sorting nanoparticle beams which combines high transmission [*and*]{} high resolution. This can be achieved by imprinting a very fine spatial modulation onto the molecular beam. ![(a) Three grating deflection setup. The third grating can be shifted to scan the nanoparticle fringe pattern. Particles with different $\alpha/m$ are separated by their different deflection shifts in the electrode field as identified in (b). The grating position can be set to preferentially transmit one species while blocking the others. After the sorting, the molecules may be deposited on a target or detected by ionization. []{data-label="setup"}](figure1.eps "fig:"){width="\columnwidth"}\ Our starting point is a three-grating matter-wave interferometer which we have described before [@Brezger2003a]. As shown in Fig. \[setup\], it is composed of three micro-machined gratings, which prepare, sort and detect the molecules. The combination of the first two gratings modulates the particle flux so as to generate a periodic particle density pattern in the plane of the third grating. All gratings and also the molecular micro-modulation have identical periods. The density pattern or contrast function can therefore be revealed by scanning the third grating while counting all transmitted molecules, as shown in Fig. 2. Our device is usually operated in a quantum mode, with molecular masses and velocities chosen so as to reveal fundamental quantum phenomena related to matter-wave diffraction [@Arndt2005b].
However, the same device can also be used in a Moiré or shadow mode [@Oberthaler1996a], where the molecules can be approximated by classical particles. This applies in particular to fast and very massive molecules where quantum wave effects may be too small to be observed. Our setup then still combines a fine spatial micro-modulation with much relaxed requirements on the collimation of the beam. This allows us to increase the spatial resolution in any beam-displacement measurements by several orders of magnitude over earlier experiments without micro-imprint. A beam-displacement may for instance be caused by an inhomogeneous electric field acting on the polarizability of the particle. In our experiment of Fig. \[setup\], a pair of electrodes close to the second grating generates a constant force field $F_{x} = \alpha (\mathbf{E\nabla} )E_{x}$, which shifts the molecular fringe pattern along the x-axis by $$\label{shift} \Delta s_x \propto (\alpha/m)\cdot (\mathbf{E} \mathbf{\nabla})E_{x}/v_y^{2}.$$ Here $v_y$ is the beam velocity in the forward direction. Deflection measurements then allow us to derive precise values for the polarizability of the molecules, as recently demonstrated [@Berninger2007a; @Deachapunya2007a]. Here we extend the operation of our deflectometer to the classical Moiré mode with biomolecules and carbon nanotubes, and we turn the previous molecular measurement into an active sorting method for molecular species that differ in $\alpha/m$. ![Predicted fringe pattern for YGW and YWG tripeptides. (a) shows the calculated density distribution after the third grating without applying any voltage to the deflecting electrodes: The full curve is the YWG and $--$ belongs to the YGW peptide. (b) indicates that already at 7.5 kV both biomolecules can be separated and therefore maximally enriched. The calculation also takes into account the dispersive interaction of the molecules with the metal gratings.
The transmission function is periodic in $x$ over many thousands of lines, with a grating constant of 990 nm in this example. \[ywg\_ygw\_shift\]](figure2.eps "fig:"){width="7cm"}\ For a first illustration we discuss and simulate the relative enrichment of a 50:50 mixture of the tripeptide Tyrosine-Glycine-Tryptophan (YGW) and its isomer YWG, which differ only by the swapped position of Glycine and Tryptophan in the amino acid sequence. Their masses are equal (m=460u) but their susceptibilities $\chi\rm(YWG)=100\,\AA ^3$ and $\chi \rm(YGW)=480\,\AA ^3$ differ by almost a factor of five [@Antoine2003a]. The susceptibility [@Antoine2002a] $\chi = \alpha + \langle \mu_{z}^{2}\rangle/(k_{B}T)$ includes the orientation-averaged square of the projection of the electric dipole moment onto the direction of the external field, $\langle \mu_{z}^2 \rangle$, and $T$ is the molecule temperature. With this definition, the polarizability in Eq. \[shift\] may be replaced by $\chi$ if the molecules also possess a permanent electric dipole moment. For the two isomers the molecular fringe shifts will then differ by a factor of five, if all other beam parameters are equal. Therefore, when the three gratings are designed for maximum fringe contrast in the molecular beam close to the third grating, we may choose the electric field such that one sort of peptide will be transmitted by the deflectometer while its isomer will be blocked and deposited on the third grating. The transmitted beam will then reveal a significant enrichment of one particular isomer. To quantify the sorting process we define the maximal *enrichment* of two mixed species $P_{1}$ and $P_{2}$ as: $$\label{enrichment} \eta=\max_{x}\{\tilde{S}_{P_{1}}(x)-\tilde{S}_{P_{2}}(x)\} ,$$ where $\tilde{S}_{P_{i}}(x)= S(x)/[S_{max}+S_{min}]$ is the normalized signal of the Moiré curve associated with the peptide $P_{i}$ (see Fig. 2), and $x$ is the position of the third grating.
This definition is based on the fact that each isomer will form a fringe pattern with its own intensity, fringe visibility and beam shift in the external field gradient. Since the enrichment is meant to include only the effects of the sorting machine, the signals of both species are normalized to their average beam fluxes. The definition is chosen such that $\eta = 0$ for equal normalized transmission of both species through the three-grating arrangement, and $\eta = 1$ if one species is blocked while the other is fully transmitted. For small polypeptides, the combination of a pulsed beam source with a pulsed laser detection scheme may allow us to select a mean velocity of $v_y$= 340 m/s with a relative spread of $\Delta v_y/v_y$ = 0.5%. We now assume a grating separation of $L=38.5\,$cm, a grating constant of 990 nm, and a grating open fraction of $f=0.2$, i.e., gap openings of 200 nm. Inserting all these parameters we find a relative enrichment for YWG as high as $\eta = 0.97$. The high expected degree of separation can also be seen in Fig. 2b. Here, the voltage has been optimized to $(\mathbf{E\nabla} )E_{x}= 1.05\times 10^{13}$ V$^2$/m$^3$ in order to maximize the transmitted content of this isomer. The required field can be generated between two convex 5 cm long electrodes at a potential difference of U=7.5 kV and a minimum distance of 4 mm. Beyond the sorting of biomolecules, the selection of carbon nanotubes with a defined internal structure is a challenge that has attracted great interest [@Avouris2001]. Our deflectometer proposal differs from earlier methods [@Krupke2003a; @Arnold2006a] in that it is vacuum compatible and therefore better suited for a certain class of technological applications. It also differs from a recently patented suggestion for sorting free nanotube beams by laser fields [@Zhang2005] in that the use of microfabricated gratings allows us to combine an uncollimated molecular beam with a method of high spatial resolution.
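The isomer example above can be sketched numerically. The snippet below is an illustration only (not code from the experiment): part 1 reproduces the factor-of-five ratio of the Stark shifts from Eq. \[shift\], and part 2 evaluates Eq. \[enrichment\] for two *assumed* sinusoidal Moiré curves — the visibilities (0.8, 0.6) and the relative shift (0.35 of a period) are invented parameters, and real transmission curves are not sinusoidal.

```python
import numpy as np

# Part 1: the Stark shifts scale as chi/(m*v_y^2) (Eq. shift), so with equal
# masses and velocities the isomer shift ratio reduces to chi(YGW)/chi(YWG).
chi_ywg, chi_ygw = 100.0, 480.0       # susceptibilities in Angstrom^3
shift_ratio = chi_ygw / chi_ywg
print(shift_ratio)                    # 4.8 -- "almost a factor of five"

# Part 2: enrichment eta = max_x [S1_norm(x) - S2_norm(x)] (Eq. enrichment)
# for two assumed sinusoidal Moire curves; species 2 is Stark-shifted.
g = 990e-9                            # grating period in m

def moire_signal(x, visibility, shift):
    return 1.0 + visibility * np.cos(2*np.pi*(x - shift)/g)

x = np.linspace(0.0, g, 2001)
s1 = moire_signal(x, 0.8, shift=0.0)      # mostly transmitted species
s2 = moire_signal(x, 0.6, shift=0.35*g)   # shifted, mostly blocked species
s1n = s1 / (s1.max() + s1.min())          # normalise by S_max + S_min
s2n = s2 / (s2.max() + s2.min())
eta = float(np.max(s1n - s2n))
print(round(eta, 3))                      # 0.625 for these assumed parameters
```

By construction $\eta$ stays between 0 (identical normalized curves) and 1 (one species fully blocked), matching the definition in the text.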
![Reduced longitudinal polarizability $\alpha_{\parallel}$ of SWCNTs versus length and diameter. The two surfaces represent $\alpha_{\parallel}$’s of metallic and semiconducting nanotubes of a typical diameter [@Hagen03] and possible length distribution [@heller2002].\[polari\]](figure3.eps "fig:"){width="8.3cm"}\ In the following we will assume that it is in principle possible – even though technically difficult at present – to generate a free molecular beam of single-wall carbon nanotubes (SWCNTs) with an assumed length distribution between 50 nm and 150 nm, an arbitrary mixture of chiralities, and diameters between 0.7 and 1.3 nm. To simulate the Moiré fringes for these nanotubes we first need to determine their $\alpha/m$ ratio. Their mass can be computed from the number of carbon atoms per unit cell [@Avouris2001]. The static polarizability of nanotubes is extremely anisotropic and we have to consider separately both the transverse and the longitudinal value per carbon atom, i.e., the reduced polarizabilities. The reduced transverse static polarizability of a carbon nanotube is independent of its metallicity but it is proportional to its radius $R$. For SWCNTs it can be approximated by $\alpha_{\perp red} \sim 1.3\AA^3$/atom [@benedict95], a value very similar to that of $C_{60}$ or medium-sized alkali clusters [@knight85]. The longitudinal polarizability of semiconducting tubes $\alpha_{\parallel s}$ depends on their band gap energy $E_{g}$ [@benedict95] according to $\alpha _{\parallel s} \propto \left(R/E_{g}^2\right)$. We use $\alpha_{\parallel s} \approx 8.2 R^{2}+20.5$ (with $R$ in nm and $\alpha_{\parallel s}$ in $\AA^3$/atom) for $R\geq 0.35$nm [@Kozinsky06]. Even for semiconducting SWCNTs the reduced longitudinal polarizability thus already exceeds the transverse value by about a factor of ten and the polarizability of medium-sized metal clusters by about a factor of two [@DeHeer1993a]. This relation for $\alpha _{\parallel s}$ cannot be applied to metallic tubes because of their vanishing band gap, $E_{g}=0$.
We therefore approximate short metallic tubes of length $l$ by perfectly conducting hollow cylinders  [@Mayer2005a] and find for their axial polarizability $$\label{pol_metallic} \alpha_{\parallel m} = \frac{l^3}{24(\ln(l/R)-1)}\left(1+\frac{4/3 - \ln 2}{\ln(l/R)-1}\right).$$ This value exceeds that of equally long semiconducting tubes by a factor between ten and one hundred. In Fig. \[polari\] we plot the reduced polarizabilities for a range of different tube diameters and lengths. The clear separation between metallic and semiconducting tubes in this diagram indicates that mixtures of these species will be well separable in a Moiré-deflection experiment. The [*reduced*]{} longitudinal polarizability of semiconducting tubes does not scale with the tube’s length, since both their mass and their polarizability grow linearly with it. The separation process will therefore also work for nanotubes beyond the parameter range of Fig. \[polari\] [@Krupke2003a]. With all masses and polarizabilities at hand, we now proceed to simulate the Moiré fringe patterns. In Fig. \[cntshift\] we show the simulations for two 100 nm long semiconducting (17,0) and metallic (9,0) nanotubes flying at 100 m/s with a velocity spread of $\Delta v_y/v_y = 1\%$ through a setup with metallic gratings separated by L=38.5 cm. The grating period is now set to g = 10 $\rm \mu m$ and the open fraction is again f=0.2, which would permit a fringe contrast of 100% for pointlike classical particles without polarizability. ![Predicted fringe pattern for semiconducting (17,0) and metallic (9,0) carbon nanotubes. (a) illustrates the ideal Moiré case: The full curve is the (17,0) and $--$ belongs to the (9,0) tube without Casimir-Polder (CP) and maximally aligned at 0.58 kV. (b) shows the influence of the dispersive interaction between material grating and nanotube: $--$ is the Moiré pattern.
$-\cdot-$ includes the CP interaction for the (17,0) and the full curve for the (9,0) tube at 0 kV [@Bonin1997a] with maximum alignment, i.e., without rotation. (c) is the complete analysis including CP and full rotational averaging: The full curve is the (9,0) and $--$ is the (17,0) tube at 0.9 kV. \[cntshift\]](figure4.eps "fig:"){width="7cm"}\ The semiconducting tube is computed to have R=0.67 nm, m=$3.2\times10^{-22}$kg, $\alpha_{\perp} = 2.6\times 10^{4}\,{\AA}^3$ and $\alpha_{\parallel} = 3.8\times 10^5\,{\AA}^3$. The metallic tube has R=0.36 nm, m=$1.7\times 10^{-22}$kg, $\alpha_{\perp} = 9.5\times 10^{3}\,{\AA}^3$ and $\alpha_{\parallel} = 1.1\times 10^7\,{\AA}^3$. In the beginning we assume that all nanotubes are maximally aligned with respect to the external electric force field, i.e., along the x-axis. At a deflection field of $(\mathbf{E} \mathbf{\nabla})E_{x} = 1.4\times 10^{12}\,V^2/m^3$, the metallic tube’s fringe shift of 5200 nm would largely surpass the 150 nm shift of the semiconducting molecules. One can easily find a voltage that will enrich the metallic tubes in the beam by shifting their fringe maxima until they fall onto the openings of the third grating, while the semiconducting tubes will be blocked by the grating bars. In this idealized picture the enrichment could reach almost 100% (Fig. \[cntshift\]A). We now extend this simple model to include the attractive Casimir-Polder (CP) potential between the aligned molecules and ideally conducting grating walls in the approximation of long distances $r$: $$\label{cp} U(r) = -\frac{3 \hbar c }{8 \pi} \frac{\alpha}{r^{4}}$$ [@Casimir48]. The influence of the CP interaction is demonstrated in Fig. \[cntshift\] (b). The fringe contrast is reduced due to the deflection of the tubes in the grating’s potential.
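The polarizability values quoted above can be recovered from the two approximations in the text; the sketch below (an illustration with the paper's numbers, not the authors' code) does so, with lengths in nm, polarizabilities in Å³, and 1 nm³ = 1000 Å³.

```python
import math

# Sketch reproducing the SWCNT polarizability estimates quoted in the text.

def alpha_semi_per_atom(R):
    """Reduced longitudinal polarizability of a semiconducting SWCNT in
    Angstrom^3/atom: alpha ~ 8.2 R^2 + 20.5 for R >= 0.35 nm (R in nm)."""
    return 8.2*R**2 + 20.5

def alpha_metal(l, R):
    """Axial polarizability (Angstrom^3) of a short metallic tube modelled
    as a perfectly conducting hollow cylinder, Eq. (pol_metallic)."""
    L = math.log(l/R) - 1.0
    return 1e3 * l**3 / (24.0*L) * (1.0 + (4.0/3.0 - math.log(2.0))/L)

# (17,0) semiconducting tube, R = 0.67 nm: ~24 Angstrom^3 per atom, i.e.
# roughly 3.8e5 Angstrom^3 in total for the ~1.6e4 atoms of a 100 nm tube.
print(round(alpha_semi_per_atom(0.67), 1))

# (9,0) metallic tube, R = 0.36 nm, l = 100 nm: ~1e7 Angstrom^3, i.e. the
# order of magnitude quoted for alpha_parallel of the metallic tube.
print(f"{alpha_metal(100.0, 0.36):.2e}")
```

The two orders of magnitude of difference between `alpha_metal` and the per-tube semiconducting value is what makes the metallic/semiconducting separation robust against the tube's exact chirality.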
For this simulation metal gratings are assumed; a larger enrichment could be maintained if the metal gratings were replaced by dielectric materials or even by gratings made of light [@Nairz2001a; @Gerlich2007a]. We also have to consider that any nanotube beam in the foreseeable future will carry molecules in highly excited rotational states. Each orientation of the nanotube with respect to the external electrode field is associated with a different fringe shift, since the relative contributions of the transverse and longitudinal polarizabilities depend on this orientation. Fig. \[cntshift\](c) shows an average of all Moiré curves now including both the full rotational distribution function [@Bonin1997a] and the CP interaction. The expected fringe visibility still amounts to 77% for the semiconducting (17,0) tubes and to 31% for the metallic (9,0) ones. As can be seen from Fig. \[cntshift\](c) this will allow a significant enrichment of the metallic tubes. The predicted value for the enrichment reaches $\eta (17,0) = 0.4$ for the semiconducting tubes and $\eta (9,0) = 0.6$ for the metallic ones. It is interesting to see that our reasoning still holds generally for all other chiralities. Metallic and semiconducting tubes will always be separable with a good probability, because of the huge variation in polarizabilities. ![A) Interference pattern without any voltage applied to the electrodes. B) Separation of $C_{60}$ (circles) and $C_{70}$ (squares) at an electrode voltage of 14 kV. The phase shift difference is $\delta = 171$ nm. Interference contrasts are normalized to the same height. C) Comparison between expected (dotted line) and observed maximal $C_{60}$ enrichment at 0 kV and 14 kV in the existing setup (crosses) with f=0.46 and $\Delta v/v = 15\%$. The potential for larger fullerene enrichment with an optimized interferometer with g=990 nm, f=0.2 and $\Delta v/v = 1\%$ is indicated by the solid line.
\[c60shift\]](figure5.eps "fig:"){width="6.5cm"}\ To demonstrate the working principle of our three-grating sorting machine we have performed experiments with the fullerenes $C_{60}$ and $C_{70}$ in an existing Talbot-Lau interferometer with three identical gold gratings with a period of g = 990 nm and an open fraction of f=0.46. We detect the content of the different molecular species using a quadrupole mass spectrometer (QMS Extrel, 2,000u). The two fullerenes C$_{60}$ and C$_{70}$ differ in their mass by a factor of 7/6. Their polarizability ratio was measured in a related experiment to be $\alpha_{C_{70}}/\alpha_{C_{60}} =1.22$ [@Berninger2007a]. The velocities in this mixture were $191$ m/s for $C_{60}$ and $184$ m/s for $C_{70}$, both with a velocity spread of 15% from a thermal source. Fig. \[c60shift\](a) shows the fringe contrast of the two fullerenes without any voltage applied to the electrodes. Even at $U=$0 kV we already observe a slight enrichment due to the different fringe visibilities for $C_{60}$ and $C_{70}$. Applying a voltage of 14 kV then results in the phase-shift difference shown in Fig. \[c60shift\](b). Fig. \[c60shift\](c) plots the measured and expected enrichments of $C_{60}$, which are in rather good agreement. The observed phase shift ratio $\Delta s (C_{70})/\Delta s(C_{60}) = 1.14$ fits well with our theoretical estimate (Eq. \[shift\]) of $1.13$, including the statistical and systematic error of 4% in our experiment. For our experiment in Fig. \[c60shift\](b) we find a rather moderate $C_{60}$ enrichment of $\eta(C_{60})=0.08$. This is obviously not yet optimized and it is interesting to discuss the factors that influence it in the present and in future experiments. Secondly, the fringe contrast is very sensitive to the van der Waals interaction between the molecules and the grating walls. This attractive potential modulates the fringe visibility and it does this differently for different polarizabilities and molecule velocities.
This influence can be reduced by choosing a wider grating period or by resorting to optical phase gratings, as mentioned before [@Brezger2003a]. Thirdly, the Stark deflection itself is dispersive (Eq. \[shift\]). A finite velocity spread leads to a reduction of the interference contrast with increasing electric field. And while the fringes in our present experiment would tend to wash out beyond a deflection voltage of U=14 kV, pulsed beams of biomolecules [@Marksteiner2006a] with $\Delta v_y/v_y \sim 0.1...1\%$ would be essentially free of such a restriction. Fourthly, the polarizability ratio is rather small for the two fullerene species. In contrast to that, $\alpha/m$ may vary by $\sim500$% for isomers of small polypeptides [@Antoine2003a] and even by a factor of up to one hundred for carbon nanotubes of different chirality [@benedict95]. In this respect all future experiments will be simpler compared to our present demonstration. The very good quantitative agreement between our experiment and the model expectations, shown in Fig. \[c60shift\](c), proves that we do understand the relevant processes in the present study. The solid line in Fig. \[c60shift\](c) shows the expected $C_{60}$ enrichment in an interferometer setup which is optimized for sorting instead of quantum demonstrations. In conclusion, we have shown that $\alpha/m$-variations can be used to sort neutral nanoparticles even in wide molecular beams. Our simulations show that the relative enrichment may even get close to 100% for biomolecular isomers and it will still be significant ($\sim 60\%$) for single-wall carbon nanotubes. The working principle is illustrated by the enrichment of $C_{60}$ out of a mixed molecular beam composed of $C_{60}$ and $C_{70}$ fullerenes. The sorting scheme works in general for nanoparticles which can be transferred into a free molecular beam and which differ in their $\alpha/m$ ratio.
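As a closing consistency check, the theoretical shift ratio of 1.13 quoted in the fullerene comparison follows in one line from Eq. \[shift\] (a sketch assuming identical field settings for both species, as in the experiment):

```python
# Check of the theoretical C70/C60 shift ratio from Eq. (shift):
# Delta_s is proportional to alpha/(m*v_y^2) at fixed deflector settings.
alpha_ratio = 1.22            # alpha(C70)/alpha(C60), measured value
mass_ratio = 7.0 / 6.0        # m(C70)/m(C60) = 840u/720u
v_c60, v_c70 = 191.0, 184.0   # beam velocities in m/s

shift_ratio = (alpha_ratio / mass_ratio) * (v_c60 / v_c70)**2
print(round(shift_ratio, 2))  # 1.13, the estimate quoted in the text
```

The slightly slower $C_{70}$ beam enlarges the ratio beyond the bare $\alpha/m$ factor of $1.22 \cdot 6/7 \approx 1.05$, consistent with the measured value of 1.14.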
Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported by the Austrian Science Funds (FWF) within the projects START177 and SFB F1505. We acknowledge fruitful discussions with Klaus Hornberger. S. D. acknowledges financial support by a Royal Thai government scholarship.
--- abstract: 'Accurate forecasts are vital for supporting the decisions of modern companies. To improve statistical forecasting performance, forecasters typically select the most appropriate model for each given time series. However, statistical models usually presume some data generation process, while making strong distributional assumptions about the errors. In this paper, we present a new approach to time series forecasting that relaxes these assumptions. A target series is forecasted by identifying similar series from a reference set (déjà vu). Instead of extrapolating, the future paths of the similar reference series are aggregated and serve as the basis for the forecasts of the target series. “Forecasting with similarity” is a data-centric approach that tackles model uncertainty without depending on statistical forecasting models. We offer definitions for deriving both the point forecasts and the corresponding prediction intervals. We evaluate the approach using a rich collection of real data and show that it results in good forecasting accuracy, especially for yearly series. Finally, while traditional statistical approaches underestimate the uncertainty around the forecasts, our approach results in upper coverage levels that are much closer to the nominal values.' 
address: - 'School of Economics and Management, Beihang University, Beijing, China' - 'Forecasting and Strategy Unit, School of Electrical and Computer Engineering, National Technical University of Athens, Greece' - 'School of Management, University of Bath, UK' - 'School of Statistics and Mathematics, Central University of Finance and Economics, Beijing, China' author: - Yanfei Kang - Evangelos Spiliotis - Fotios Petropoulos - Nikolaos Athiniotis - Feng Li - Vassilios Assimakopoulos bibliography: - 'My\_Collection.bib' title: 'Déjà vu: forecasting with similarity' --- Forecasting ,Dynamic Time Warping ,M Competitions ,Time Series Similarity ,Empirical Evaluation Introduction {#sec:sec1} ============ Effective forecasting is crucial for various functions of modern companies. Forecasts are used to make decisions concerning business operations, finance, strategy, planning, and scheduling, among others. Despite its importance, forecasting is not a straightforward task. The inherent uncertainty renders the provision of perfect forecasts impossible. Nevertheless, reducing the forecast error as much as possible is expected to bring significant monetary savings. We identify the search for an “optimal” model as the main challenge to forecasting. Existing statistical forecasting models implicitly assume an underlying data generating process (DGP) coupled with distributional assumptions of the forecast errors that do not essentially hold in practice. [@Petropoulos2018-cl] suggest that three sources of uncertainty exist in forecasting: model, parameter, and data. They found that merely tackling the model uncertainty is sufficient to bring most of the performance benefits. This result reconfirms George Box’s famous quote, “all models are wrong, but some are useful.” It is not surprising that researchers increasingly avoid using a single model, and opt for combinations of forecasts from multiple models [@Jose2008; @Kolassa2011; @Bergmeir2016-su; @Monteros2019-oy]. 
We argue that there is another way to avoid selecting a single model: to select no models at all. This study provides a new approach to forecasting that does not require the estimation of any forecasting models, while also exploiting the benefits of cross-learning [@Makridakis2019-oy]. With our proposed approach, a target series is compared against a set of reference series attempting to identify similar ones (déjà vu). The point forecasts for the target series are the average of the future paths of the most similar reference series. The prediction intervals are based on the distribution of the reference series, calibrated for low sampling variability. Note that no model extrapolations take place in our approach. The proposed approach has several advantages compared to existing methods, namely (i) it tackles both model and parameter uncertainties, (ii) it does not use time series features or other statistics as a proxy for determining similarity, and (iii) no explicit assumptions are made about the DGP or the distribution of the forecast errors. We evaluate the proposed forecasting approach using the M3 competition data [@Makridakis2000a]. Our approach results in good point forecast accuracy, significantly better than that of statistical approaches for the yearly data frequency. Also, forecasting with similarity offers better estimation of forecast uncertainty, which would allow achieving higher customer service levels. Finally, simple combinations of the similarity approach with statistical ones result in performance that is much better than each approach separately. The rest of the paper is organized as follows. In the next section, we present an overview of the existing literature and provide our motivation behind “forecasting with similarity”. Section \[sec:methodology\] describes the methodology for the proposed forecasting approach, while section \[sec:evaluation\] presents the experimental design and the results.
Section \[sec:discussions\] offers our discussions and insights, as well as implications for research and practice. Finally, section \[sec:conclusions\] provides our concluding remarks. Background research {#sec:literature} =================== Forecast model selection {#sec:selection_combination} ------------------------ When forecasting with numerous time series, forecasters typically try to enhance forecasting accuracy by selecting the most appropriate model from a set of alternatives. The solution might involve either aggregate selection, where a single model is used to extrapolate all the series, or individual selection, where the most appropriate model is used per series [@Fildes1989aa]. The latter approach can provide substantial improvements if forecasters are indeed in a position to select the best model [@2001537Fildes]. Unfortunately, this is far from reality due to the presence of data, model, and parameter uncertainties [@Kourentzes2014291; @Petropoulos2018-cl]. In this respect, individual selection becomes a complicated problem and forecasters have to balance the potential gains in forecasting accuracy and the additional complexity introduced. Automatic forecasting algorithms test multiple forecasting models and select the ‘best’ based on some criterion. The criteria include information criteria, e.g., the likelihood of a model penalised by its complexity [@Hyndman2002; @Hyndman2008b], or rules based on forecasting performance on past windows of the data [@Tashman00]. Other approaches to model selection involve discriminant analysis [@SHAH1997489], time-series features [@Petropoulos2014152], and expert rules [@ADYA2001143]. An interesting alternative is to apply cross-learning so that the series are clustered based on an array of features and the best model is selected for their extrapolation [@Kang2017345; @Spiliotis2019M3]. 
In any case, the difference between two models might be small, and the selection of one over the other might be purely due to chance. The small differences between models also result in different models being selected when different criteria or cost functions are used [@Billah2006-jg]. Moreover, the features and the rules considered may not be adequate for describing every possible pattern of data. As a result, in most cases, a clear-cut choice of the ‘best’ model does not exist, because all models are simply rough approximations of reality. The non-existence of a DGP and forecast model combination {#sec:no_dgp} --------------------------------------------------------- Time series models that are usually offered by the off-the-shelf forecasting software packages have over-simplified assumptions (such as the normality of the residuals and stationarity), which do not necessarily hold in practice. As a result, it is impossible for these models to capture the actual DGP of the data perfectly. One could work towards defining a complex multivariate model [@Svetunkov_undated-vo], but this would lead to all kinds of new problems, such as data limitations and the inability to accurately forecast some of the exogenous variables, which are identified as significant. As a solution to the above problem, forecasting researchers have been combining forecasts from different models [@Bates1969; @Clemen1989; @Makridakis1983c; @Timmermann2006; @Claeskens2016]. The main advantage of combining forecasts is that it reduces the uncertainty related to model and parameter determination, and decreases the risk of selecting a single and inadequate model. Moreover, combining different models enables capturing multiple patterns. Thus, forecast combinations lead to more accurate and robust forecasts with lower error variances [@Hibon2005].
Through the years, the forecast combination puzzle [@Claeskens2016], i.e., the fact that optimal weights often perform poorly in applications, has been both theoretically and empirically examined. Many alternatives have been proposed to exploit the benefits of combination, including Akaike’s weights [@Kolassa2011], temporal aggregation levels [@Kourentzes2014291], bagging [@Bergmeir2016-su; @Petropoulos2018-cl], and hierarchies [@Hyndman2011; @ATHANASOPOULOS201760], among others. Moreover, simple combinations have been shown to perform well in practice [@Petropoulos2019SCUM]. In spite of the improved performance offered by forecast combination, some primary difficulties, e.g., (i) determining the pool of models being averaged, (ii) identifying their weights, and (iii) estimating multiple models, prevent forecast combination from being widely applied by practitioners.

Forecasting with similar series {#sec:review_similar}
-------------------------------

An alternative to fitting statistical models to the historical data would be to explore whether similar patterns have appeared in the past. The motivation behind this argument originates from the work on structured analogies by [@Green2007-ts]. Structured analogies is a framework for eliciting human judgment in forecasting. Given a forecasting challenge, a panel of experts is assembled and asked to independently and anonymously provide a list of analogies that are similar to the target problem, together with the degree of similarity and their outcomes. A facilitator calculates the forecasts for the target situation by averaging the outcomes of the analogous cases weighted by the degree of their likeness. Given the core framework of structured analogies described above, several modifications have been proposed in the literature. Such an approach is practical in cases where no historical data are available [e.g., @Nikolopoulos2015-ia], which renders the application of statistical algorithms impossible.
Forecasting by analogy has also been used in tasks related to new product forecasting [@Goodwin2013-hu; @Wright2015-ji; @Hu2019-sa], in which the demand and the life-cycle curve parameters can be estimated based on the historical demand values and life-cycles of similar products. Even when historical information is available, sharing information across series has been shown to improve forecasting performance. A series of studies attempted to estimate the seasonality on a group level instead of a series level [e.g., @Mohammadipour2012; @Zhang2013-dg; @Boylan2014-sl]. When series are arranged in hierarchies, it is possible to have similarities in seasonal patterns among products that belong to the same category. This renders their estimation on an aggregate level more accurate, especially for shorter series where few seasonal cycles are available. The use of cross-sectional information for time series forecasting tasks is a feature of the two best-performing approaches by @Smyl2019-oy and @Monteros2019-oy in the recent M4 forecasting competition [@Makridakis2019-oy]. [@Smyl2019-oy] propose a hybrid approach that combines exponential smoothing with neural networks. The hierarchical estimation of the parameters utilises learning across series but also focuses on the idiosyncrasies of each series. [@Monteros2019-oy] use cross-learning based on the similarity of the features in collections of series to estimate the combination weights assigned to a pool of forecasting methods. [@Nikolopoulos2016-gr] explored the value of identifying similar patterns within a series of intermittent nature (where the demand for some periods is zero). They proposed an approach that uses nearest neighbours to predict incomplete series of consecutive periods with non-zero demand values based on past occurrences of non-zero demands.
Their study involves, to the best of our knowledge, the first statistical approach to directly use similar observed instances from the past to predict future outcomes. In this paper, we suggest that searching for similar patterns can be extended from within-series to across-series, but also from intermittent to fast-moving demand data.

Methodology {#sec:methodology}
===========

Given a set of rich and diverse reference series, the objective of “forecasting with similarity” is to find the series most similar to a target series, average their future paths, and use this average as the forecast for the target series. We assume that the target series, $y$, has a length of $n$ observations and a forecasting horizon of $h$. Series in the reference set shorter than $n+h$ are not considered. Series longer than $n+h$ are truncated, keeping the last $n+h$ values. The first $n$ values are used for measuring similarity and the last $h$ values serve as the future paths. We end up with a matrix $Q$ of size $m \times (n+h)$. Each row of $Q$ represents the $n+h$ values of a (truncated) reference series, and $m$ is the number of reference series. A particular reference series is denoted by $Q(i)$, where $i \in \{1, \dots, m\}$, $Q(i)_{1, \dots, n}$ is the historical data, and $Q(i)_{n+1, \dots, n+h}$ represents the future path. The proposed approach consists of the following steps.

1. **Removing seasonality**, if a series is identified as seasonal.
2. **Smoothing** by estimating the trend component through time series decomposition.
3. **Scaling** to render the target and possible similar series comparable.
4. **Measuring similarity** by using a set of distance measures.
5. **Forecasting** by aggregating the paths of the most similar series.
6. **Inverse scaling** to bring the forecasts for the target series back to its original scale.
7. **Recovering seasonality**, if the target series is found seasonal in Step 1.
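To make the pipeline concrete, the core of Steps 3–6 can be sketched in a few lines of Python (the methods discussed in this paper are R-based; pure Python is used here for brevity). This is a minimal sketch under simplifying assumptions: Steps 1, 2, and 7 are omitted, the $\mathcal{L}_1$ distance stands in for the measures of Step 4, and the toy series are hypothetical.

```python
from statistics import median

def similarity_forecast(y, refs, h, k=3):
    """Steps 3-6 of the pipeline, applied to already deseasonalised and
    smoothed series (Steps 1-2 and 7 omitted): scale by the forecast
    origin, rank reference series by L1 distance, and take the median
    of the k nearest future paths."""
    n = len(y)
    y_s = [v / y[-1] for v in y]                     # Step 3: scale by origin
    scored = []
    for q in refs:                                   # each q has n + h values
        origin = q[n - 1]
        hist = [v / origin for v in q[:n]]
        fut = [v / origin for v in q[n:n + h]]
        d = sum(abs(a - b) for a, b in zip(y_s, hist))  # Step 4: L1 distance
        scored.append((d, fut))
    scored.sort(key=lambda t: t[0])
    paths = [fut for _, fut in scored[:k]]           # k most similar series
    f_s = [median(p[j] for p in paths) for j in range(h)]  # Step 5: median
    return [v * y[-1] for v in f_s]                  # Step 6: inverse scaling

# Toy data: the target is a linear trend; the references share its shape.
y = [float(t) for t in range(1, 11)]                              # n = 10
refs = [[c * t for t in range(1, 14)] for c in (0.5, 2.0, 10.0)]  # n + h = 13
print(similarity_forecast(y, refs, h=3))  # [11.0, 12.0, 13.0]
```

Because scaling divides by the forecast origin, the references here become identical to the target after Step 3 regardless of their level, so the forecast simply continues the shared trend.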
In the following subsections, we describe these steps in detail. Section \[sec:preprocessing\] describes the preprocessing of the data (Steps 1, 2, 3, 6, and 7), section \[sec:similarity\] provides the details regarding similarity measurement and forecasting (Steps 4 and 5), while section \[sec:PI\] explains how prediction intervals are derived.

Preprocessing {#sec:preprocessing}
-------------

When dealing with diverse data, preprocessing becomes essential for effectively forecasting with similarity. This is because the process of identifying similar series is complicated when multiple seasonal patterns and randomness are present, and the scales of the series to be compared differ. If the reference series are not representative of the target series, or the reference set lacks diversity, the chances of observing similar patterns are further decreased. In order to deal with this problem, we consider three steps which are applied sequentially. The first step removes the seasonality if the series is identified as seasonal. By doing so, the target series is more likely to effectively match with multiple reference series, at least when dissimilarities are present due to different seasonal patterns. In the second step, we smooth the seasonally adjusted series to remove randomness and possible outliers from the data, which further reduces the risk of identifying only a few similar series. Finally, we scale the target and the reference series to the same magnitude, so that their values are directly comparable. The preprocessing is applied to both the reference and the target series.

### Seasonal adjustment {#sec:seasonality}

Seasonal adjustment is performed by utilizing the “Seasonal and Trend decomposition using Loess” (STL) method presented by [@Cleveland1990] and implemented in the *stats* package for R. In brief, STL decomposes the series into the trend, seasonal, and remainder components, assuming additive interactions among them.
An adjustment is only considered if the series is identified as seasonal through a seasonality test. The test [@Assimakopoulos2000521; @Fioruci2016] checks for autocorrelation significance on the $s^{\text{th}}$ term of the autocorrelation function (ACF), where $s$ is the frequency of the series (e.g., $s=12$ for monthly data). Thus, given a series of $\hat{n}\geq3s$ observations, frequency $s>1$, and a confidence level of 90%, a seasonal adjustment is considered only if $$|\text{ACF}_s| > 1.645\sqrt{\frac{1+2\sum_{i=1}^{s-1} \text{ACF}_i^2}{\hat{n}}},$$ where $\hat{n}$ is equal to $n$ and $n+h$ for the target and the reference series, respectively. Non-seasonal series ($s=1$) and series with fewer observations than three seasonal periods are not tested and are not assumed to be seasonal. As some series may display multiplicative seasonality, the Box-Cox transformation [@BC1964] is applied to each series before the STL [@Bergmeir2016-su]. The Box-Cox transformation is defined as $$u'= \left\{ \begin{array}{ll} \log(u), & \lambda = 0 \\ (u^\lambda -1)/\lambda, & \lambda \neq 0,\\ \end{array} \right.$$ where $u$ is a time series vector and $\lambda \in [0,1]$ is selected using the method of [@Guerrero1993], as implemented in the *forecast* package for R [@forecastR]. Note that after removing the seasonal component from the transformed series, the inverse transformation is applied to the rest of the components (sum of the trend and remainder) to obtain the seasonally adjusted series. As the forecasts produced from the seasonally adjusted series do not contain seasonal information, we need to reseasonalise them in Step 7. Moreover, since the seasonal component removed is Box-Cox transformed, the forecasts must also be transformed using the same $\lambda$ calculated earlier. Having recovered the seasonality on the transformed forecasts, a final inverse transformation is applied.
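As an illustration, the seasonality test above can be sketched in Python (a minimal sketch with hypothetical helper names; the sample ACF is computed directly from its definition, and the toy series are made up):

```python
from math import sqrt

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x)
    return sum((x[t] - mu) * (x[t - lag] - mu) for t in range(lag, n)) / var

def is_seasonal(x, s, z=1.645):  # z = 1.645 for a 90% confidence level
    """ACF-based seasonality test: significant autocorrelation at lag s."""
    n = len(x)
    if s <= 1 or n < 3 * s:      # non-seasonal or too-short series: not tested
        return False
    limit = z * sqrt((1 + 2 * sum(acf(x, i) ** 2 for i in range(1, s))) / n)
    return abs(acf(x, s)) > limit

# A strongly seasonal toy series (period 4) versus a pure linear trend.
seasonal = [10, 20, 30, 5] * 6
trend = list(range(24))
print(is_seasonal(seasonal, 4), is_seasonal(trend, 4))  # True False
```

The trend series also has a large lag-4 autocorrelation, but its lower-lag autocorrelations inflate the significance limit, so it is correctly not flagged as seasonal.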
### Smoothing {#sec:smoothing}

Smoothing is performed by utilizing the Loess method, as presented by [@Cleveland1992] and implemented in the *stats* package for R. In short, a local model is computed, with the fit at point $t$ being a weighted average of the neighbouring points, with weights that decrease with the distance between the neighbours and point $t$. Similarly to STL, Loess decomposes the series into the trend and remainder components. Thus, by using the trend component, outliers and noise are effectively removed, and it is easier to find similar series. Moreover, smoothing can help us obtain a more representative forecast origin (last historical value of the series), potentially improving forecasting accuracy [@SPILIOTIS201992].

### Scaling {#sec:scaling}

Scaling refers to translating the target and the reference series to the same magnitude so that they are comparable to each other. This process can be done in various ways, such as by dividing each value of a time series by a simple summary statistic (max, min, mean, etc.), by restricting the values within a specific range (such as in $[0, 1]$), or by applying a standard score. Since the forecast origin is the most crucial observation in terms of forecasting, we divide each point by this specific value. A similar approach has been successfully applied by [@Smyl2019-oy]. A different scaling needs to be considered to avoid divisions by zero if either the target or the reference series contain zero values. Finally, inverse scaling is applied in Step 6 to return the forecasts to the original level of the target series once they have been produced. This is achieved by multiplying each forecast by the origin.

Similarity & forecasting {#sec:similarity}
------------------------

One disadvantage of forecasting using a statistical model is that a DGP is explicitly assumed, although it might be difficult or even impossible to capture in practice.
Notwithstanding, our proposed methodology searches a set of reference series to identify patterns similar to those of the target series we need to forecast. Given the preprocessed target series, $\tilde{y}$, and the $m$ preprocessed reference series, $\tilde{Q}$, we search for similar series as follows: For each series, $i$, in the reference set, $\tilde{Q}(i)$, we calculate the distance between its historical values, $\tilde{Q}(i)_{1, \dots, n}$, and those of the target series using a distance measure. The result of this process is a vector of $m$ distances, one for each pair of the target and an available reference series. In terms of measuring distances, we consider three alternatives. The first one is the $\mathcal{L}_1$ norm, which is equivalent to the sum of the absolute deviations between $\tilde{y}$ and $\tilde{Q}(i)_{1, \dots, n}$. The second measure is the $\mathcal{L}_2$ norm (Euclidean distance), which is equivalent to the square root of the sum of the squared deviations. The third alternative utilises dynamic time warping (DTW), an algorithm that identifies alternative alignments between the points of two series so that their total distance is minimized. In contrast to the previous two measures, DTW allows various matches among the points of the series being compared, meaning that $\tilde{y}_t$ can be matched either with $\tilde{Q}(i)_t$, as done with $\mathcal{L}_1$ and $\mathcal{L}_2$, or with previous/following points of $\tilde{Q}(i)_t$, even if these points have already been used in other matches. Although some restrictions are still present when employing DTW, it does introduce more flexibility to the process, allowing the identification of similar series that may display differences when examined locally.
The three distance measures are formally expressed as $$\begin{aligned} d_{\mathcal{L}_1}(\tilde{y}, \tilde{Q}(i)_{1, \dots, n})& = {\left\lVert\tilde{y}_t-\tilde{Q}(i)_t\right\rVert}_1, \\ d_{\mathcal{L}_2}(\tilde{y}, \tilde{Q}(i)_{1, \dots, n}) &= {\left\lVert\tilde{y}_t-\tilde{Q}(i)_t\right\rVert}_2, \\ d_\text{DTW}(\tilde{y}, \tilde{Q}(i)_{1, \dots, n}) &= D(n,n),\end{aligned}$$ where $D(n,n)$ is computed recursively as $$D(v,w) = |\tilde{y}_v-\tilde{Q}(i)_w| + \min \left\{ \begin{matrix} D(v, w-1)\\ D(v-1, w-1)\\ D(v-1, w)\\ \end{matrix} \right\}. \label{eq:DTWrec}$$ Equation (\[eq:DTWrec\]) returns the total variation of the two vectors $\tilde{y}_{1, \dots, v}$ and $\tilde{Q}(i)_{1, \dots, w}$. Note that DTW assumes a mapping path from $(1,1)$ to $(n,n)$ with an initial condition of $D(1,1) = |\tilde{y}_1-\tilde{Q}(i)_1|$. Having computed the distances between $\tilde{y}$ and $\tilde{Q}$, a subset of reference series is chosen for aggregating their future paths and, therefore, forecasting the target series. This is done by selecting the $k$ most similar series, i.e., the series that display the smallest distances, as determined by the selected measure. In our experiment, we consider different $k$ values to investigate the effect of pool size on forecasting accuracy, and we demonstrate that any value higher than 100 is a reasonable choice. Essentially, we propose that the future paths of the most similar series can form the basis for calculating the forecasts for the target series. Indeed, we do so by considering statistical aggregation of these future paths: the median is calculated for each planning horizon. This is an appealing approach in the sense that it does not involve statistical forecasting in the traditional way, i.e., fitting statistical models and extrapolating patterns. Instead, the real outcomes of a set of similar series are used to derive the forecasts.
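The recursion of Equation (\[eq:DTWrec\]) translates directly into a dynamic program. The sketch below (Python, with hypothetical toy series) also contrasts DTW with the lock-step $\mathcal{L}_1$ distance: a series and its shifted copy are far apart under $\mathcal{L}_1$ but at zero DTW distance, because DTW may repeat points to absorb the shift.

```python
def dtw(a, b):
    """DTW distance with absolute-deviation local cost, following the
    recursion in Equation (eq:DTWrec); borders are padded with infinity
    so that D(1,1) reduces to the stated initial condition."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for v in range(1, n + 1):
        for w in range(1, m + 1):
            cost = abs(a[v - 1] - b[w - 1])
            D[v][w] = cost + min(D[v][w - 1], D[v - 1][w - 1], D[v - 1][w])
    return D[n][m]

# b is a copy of a shifted by one step.
a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 1, 0, 0]
l1 = sum(abs(x - y) for x, y in zip(a, b))
print(dtw(a, a), dtw(a, b), l1)  # 0.0 0.0 4
```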
The proposed forecasting approach is demonstrated via a toy example, visualized in Figure \[fig:toy\]. The top panel presents the original target series, as well as the seasonally adjusted and smoothed one. The middle panel presents the preprocessed series (scaled values) together with the 100 most similar reference series used for extrapolation. Finally, the bottom panel compares the rescaled and reseasonalised forecasts to the actual future values of the target series.

![A toy example visualizing the methodology proposed for forecasting with similarity. First, the target series is seasonally adjusted and smoothed (top panel). Then, the series is scaled, and similar reference series are used to determine its future path through aggregation (middle panel). Finally, the computed forecast is rescaled and reseasonalised to obtain the final forecast. The M495 series of the M3 Competition data set is used as the target series. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)[]{data-label="fig:toy"}](toyexample.eps){width="4in"}

Note that the above description assumes that Step 5 (forecasting and aggregation) is completed before inverse scaling (Step 6) and the recovery of seasonality (Step 7). Equally, one could consider that Steps 6 and 7 are applied to each of the most similar reference series, providing in this way $k$ possible paths on the scale of the target series and including the seasonal pattern identified in section \[sec:seasonality\]. We denote these rescaled and reseasonalised reference series as $\check{Q}_t$. The aggregation of these series would lead to the same point forecasts. Additionally, they can be used as the basis for estimating the forecast uncertainty.
Similarity & prediction intervals {#sec:PI}
---------------------------------

Time series forecasting uncertainty is usually quantified by prediction intervals, whose width depends on the forecastability of the target series. With a model-based forecasting approach, although one can usually obtain a theoretical prediction interval, the performance of such an interval depends upon the length of the series, the accuracy of the model, and the variability of the model parameters. Alternatively, one simple option would be to bootstrap the historical time series and calculate the prediction intervals based on summary statistics of the bootstrapped samples [e.g., @thombs1990bootstrap; @andre2002forecasting]. Such a procedure is model-dependent, in that it assumes that a known model provides a good fit to the data and requires specifying the distribution of the error sequence associated with the model process. Our interest is to find appropriate prediction intervals that quantify the uncertainty of the forecasts based on our similarity approach. We use the variability information from the rescaled and reseasonalised reference series, $\check{Q}_t$, as the source of the prediction interval bounds. However, we find that directly using the quantiles or variance of the reference series may lead to lower-than-nominal coverage due to the similarity (or low sampling variability) of the reference series. To this end, we propose a straightforward data-driven approach, in which the $(1-\alpha)100\%$ prediction interval for a forecast $f_t$ is based on calibrated $\alpha/2$ and $1-\alpha/2$ quantiles of the selected reference series $\check{Q}_t$ for the target $y_{t}$.
The lower and upper bounds of the prediction interval are defined as $$\label{eq:PI} L_t= (1-\delta)~F^{-1}_{\check{Q}_t}(\alpha/2) \mathrm{~and~} U_t= (1+\delta)~F^{-1}_{\check{Q}_t}(1-\alpha/2),$$ respectively, where $F^{-1}_{\check{Q}_t}$ is the quantile function based on the selected reference series $\check{Q}_t$, and $\delta$ is a calibrating factor. To evaluate the performance of the generated prediction intervals, we consider a scoring rule, the mean scaled interval score (MSIS), which is defined as $$\begin{aligned} \label{eq:msis} \mathrm{MSIS} = \frac{1}{h}\frac{\sum_{t=n+1}^{n+h}(U_t-L_t)+\frac{2}{\alpha}(L_t-y_t)\bm{1}\left\{ y_t < L_t\right\} + \frac{2}{\alpha}(y_t - U_t)\bm{1}\left\{y_t>U_t\right\}}{\frac{1}{n-s}\sum_{t=s+1}^{n} \vert y_t-y_{t-s} \vert},\end{aligned}$$ where $n$ is the sample size, $s$ is the length of the seasonal period, and $h$ is the forecasting horizon. We aim to find an optimal calibrating factor $0 \leq \delta \leq 1$ that minimizes the prediction uncertainty score (MSIS). To do so, the target series $y$ is first split into training and testing periods, denoted as $y_{1, \dots, n-h}$ and $y_{n-h+1, \dots, n}$, respectively. We run the proposed forecasting approach on $y_{1, \dots, n-h}$ and apply a grid search over the sequence of values $\delta \in \{0, 0.01, 0.02, \cdots, 1\}$ to find the optimal calibrating factor $\delta^*$ that minimizes the MSIS value of the obtained prediction intervals of $y_{1, \dots, n-h}$. In the end, we obtain the prediction interval of $y$ by plugging the optimal calibrating factor $\delta^*$ into Equation (\[eq:PI\]).

Evaluation {#sec:evaluation}
==========

Design {#sec:design}
------

In this paper, we aim to forecast the yearly, quarterly, and monthly series of the M3 forecasting competition [@Makridakis2000a].
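The calibration step can be sketched as follows (Python; `lo_q` and `up_q` stand for the $\alpha/2$ and $1-\alpha/2$ quantiles taken from the rescaled reference series, and all toy numbers are hypothetical):

```python
def msis(y_future, L, U, y_hist, s, alpha=0.05):
    """Mean scaled interval score (Equation eq:msis) for a single series."""
    h, n = len(y_future), len(y_hist)
    scale = sum(abs(y_hist[t] - y_hist[t - s]) for t in range(s, n)) / (n - s)
    score = 0.0
    for y, lo, hi in zip(y_future, L, U):
        score += hi - lo                   # interval width
        if y < lo:                         # penalty for missing below
            score += 2 / alpha * (lo - y)
        if y > hi:                         # penalty for missing above
            score += 2 / alpha * (y - hi)
    return score / (h * scale)

def calibrate(y_valid, lo_q, up_q, y_hist, s):
    """Grid search for delta in {0, 0.01, ..., 1} that widens the quantile
    bounds (Equation eq:PI) and minimises MSIS on a validation window."""
    return min((d / 100 for d in range(101)),
               key=lambda d: msis(y_valid,
                                  [(1 - d) * q for q in lo_q],
                                  [(1 + d) * q for q in up_q],
                                  y_hist, s))

# Toy validation window: the raw quantile bounds are slightly too narrow,
# so a small positive delta is selected.
delta = calibrate([15, 16], [14.5, 15.0], [15.0, 15.5],
                  [10, 12, 11, 13, 12, 14], s=1)
print(delta)  # 0.04
```

The heavy $2/\alpha$ miss penalty makes it cheaper to widen the bounds until the validation observations are covered, after which additional widening only adds width and is penalised, which is why a small interior $\delta^*$ is found.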
This is a widely used data set in the forecasting literature, with the corresponding research paper having been cited more than $1400$ times, according to Google Scholar (as of 07/11/2019). The number of the yearly, quarterly, and monthly series is presented in Table \[tab:data\], together with a five-number summary of their lengths and the forecasting horizon per frequency.

  Frequency    Series   Min   Q1    Q2    Q3   Max   Horizon
  ----------- ------- ----- ---- ----- ----- ----- ---------
  Yearly          645    14   15    19    30    41         6
  Quarterly       756    16   36    44    44    64         8
  Monthly        1428    48   78   115   116   126        18
  Total          2829

  : The number of the target series, their lengths, and the forecasting horizon for each data frequency. \[tab:data\]

In order to assess the impact of the series length, we produce forecasts not only using all the available history for each target series, but also considering shorter historical samples by truncating the long series and keeping the last few years of their history. This is of particular interest for forecasting practice as in many enterprise resource planning systems, such as SAP, only a limited number of years is usually available. Table \[tab:data2\] shows the cuts (years of history kept) considered per frequency.

  Frequency    Cuts (years)
  ----------- -------------------------------
  Yearly       6, 10, 14, 18, 22, 26, 30, 34
  Quarterly    3, 4, 5, 6, 7, 8, 9, 10
  Monthly      3, 4, 5, 6, 7, 8, 9, 10

  : The cuts of the target series considered. \[tab:data2\]

For the purpose of forecasting based on similarity described in the previous section, we need a rich and diverse enough set of reference series. For this purpose, we use the yearly, quarterly, and monthly subsets of the M4 competition [@Makridakis2019-oy], which consist of $23000$, $24000$, and $48000$ series, respectively. The lengths of these series are, on average, greater than those of the M3 competition, with median values of $29$, $88$, and $202$ for the yearly, quarterly, and monthly frequencies, respectively.
The point forecast accuracy is measured in terms of the Mean Absolute Scaled Error [MASE: @Hyndman06]. MASE is a scaled version of the mean absolute error, with the scaling being the mean absolute error of the seasonal naive method on the historical data. MASE is widely accepted in the forecasting literature [e.g., @Franses2016-pj]. [@Makridakis2019-oy] also use this measure to evaluate the point forecasts of the submitted entries of the M4 forecasting competition. Across all horizons of a single series, the MASE value can be calculated as $$\text{MASE} = \frac{1}{h} \frac{ \sum\nolimits_{t=n+1}^{n+h} {|y_{t}-f_{t}|} } {\frac{1}{n-s} \sum\nolimits_{t=s+1}^{n} |y_{t}-y_{t-s}|},$$ where $y_t$ and $f_t$ are the actual observation and the forecast for period $t$, $n$ is the sample size, $s$ is the length of the seasonal period, and $h$ is the forecasting horizon. Lower MASE values are better. Because MASE is scale-independent, averaging across series is possible. To assess the prediction intervals, we set $\alpha=0.05$ (corresponding to 95% prediction intervals) and consider four measures: MSIS, coverage, upper coverage, and spread. MSIS is calculated as in Equation \[eq:msis\]. Coverage measures the percentage of times that the true values lie inside the prediction intervals. Upper coverage measures the percentage of times that the true values are not larger than the upper bounds of the prediction intervals: a proxy for achieved service levels. Spread refers to the mean difference of the upper and lower bounds, scaled similarly to MSIS: a proxy for holding costs [@svetunkov2018old].
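For concreteness, MASE for a single series can be computed as below (a Python sketch with hypothetical numbers; the seasonal naive forecast simply repeats the last observed seasonal cycle). The interval measures listed above are formalised in the equations that follow.

```python
def mase(y_future, f, y_hist, s):
    """Mean absolute scaled error: the MAE of the forecasts scaled by the
    in-sample MAE of the seasonal naive method."""
    h, n = len(y_future), len(y_hist)
    scale = sum(abs(y_hist[t] - y_hist[t - s]) for t in range(s, n)) / (n - s)
    return sum(abs(y - fc) for y, fc in zip(y_future, f)) / (h * scale)

# Hypothetical series with seasonal period s = 2.
y_hist = [100, 120, 110, 130, 105, 125]
snaive = y_hist[-2:]                     # repeat the last seasonal cycle
print(mase([110, 130], snaive, y_hist, s=2))  # 0.666...
```

A value below 1 means the forecast beats the in-sample seasonal naive benchmark on average; being scale-free, such values can be averaged across series of different magnitudes.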
They are calculated as $$\begin{aligned} \mathrm{Coverage}& = \frac{1}{h}\sum_{t=n+1}^{n+h}\bm{1}\left\{ y_t > L_t ~\&~ y_t < U_t\right\}, \\ \mathrm{Upper~coverage}& = \frac{1}{h}\sum_{t=n+1}^{n+h}\bm{1}\left\{ y_t < U_t\right\}, \\ \mathrm{Spread}& = \frac{\frac{1}{h}\sum_{t=n+1}^{n+h}(U_t-L_t)}{\frac{1}{n-s} \sum_{t=s+1}^{n} |y_{t}-y_{t-s}|},\end{aligned}$$ where $y_t$, $L_t$, and $U_t$ are the actual observation and the lower and upper bounds of the corresponding prediction interval for period $t$, $n$ is the sample size, and $h$ is the forecasting horizon. Note that the target values for the Coverage and Upper coverage are $95\%$ and $97.5\%$, respectively. Deviation from these values suggests under- or over-coverage. Lower MSIS and Spread values are better.

Investigating the performance of forecasting with similarity
------------------------------------------------------------

In this section, we focus on the performance of forecasting with similarity and explore the different settings, such as the choice of the distance measure, the pool size of similar reference series (number of aggregated series, $k$), and the effect of preprocessing. Once the optimal settings are identified, in the next subsection we compare the performance of our proposition against that of two robust benchmarks for different sizes of the historical sample. Table \[tab:exploresimilarity\] presents the MASE results of forecasting with similarity for each data frequency separately, as well as across all frequencies (Total). The summary across frequencies is a weighted average based on the series counts for each frequency. Moreover, we present the results for each distance measure ($\mathcal{L}_1$, $\mathcal{L}_2$, and DTW) in rows and various values of $k$ in columns.
  Frequency   Measure               1       5      10      50     100         500     1000
  ----------- ----------------- ------- ------- ------- ------- ------- ----------- -------
  Yearly      $\mathcal{L}_1$     3.289   2.837   2.787   2.689   2.668       2.632   2.634
              $\mathcal{L}_2$     3.333   2.866   2.785   2.703   2.684       2.638   2.639
              $\text{DTW}$        3.270   2.835   2.730   2.656   2.641   **2.623**   2.637
  Quarterly   $\mathcal{L}_1$     1.312   1.205   1.175   1.136   1.135       1.127   1.126
              $\mathcal{L}_2$     1.336   1.199   1.162   1.138   1.134       1.126   1.127
              $\text{DTW}$        1.293   1.177   1.158   1.117   1.115   **1.115**   1.116
  Monthly     $\mathcal{L}_1$     1.004   0.908   0.887   0.871   0.870       0.867   0.869
              $\mathcal{L}_2$     1.008   0.910   0.891   0.871   0.869       0.866   0.868
              $\text{DTW}$        1.001   0.895   0.875   0.861   0.861   **0.857**   0.857
  Total       $\mathcal{L}_1$     1.607   1.427   1.397   1.356   1.351       1.339   1.340
              $\mathcal{L}_2$     1.626   1.433   1.395   1.360   1.354       1.339   1.341
              $\text{DTW}$        1.597   1.413   1.373   1.339   1.335   **1.329**   1.332

  : The performance of the forecasting with similarity approach for different distance measures and pool sizes of similar reference series ($k$, in columns). \[tab:exploresimilarity\]

A comparison across the different values for the number of reference series, $k$, suggests that large pools of representative series provide better performance. At the same time, the improvements seem to taper off when $k>100$. Based on the reference set we use in this study, we identify a sweet spot at $k=500$. The analysis presented in section \[sec:similarity-versus-model-based\] therefore uses this pool size. In any case, we find that both the size of the reference set and its similarity with the target series affect the selection of the value of $k$. Table \[tab:exploresimilarity\] also shows that $\mathcal{L}_1$ and $\mathcal{L}_2$ perform almost indistinguishably across all frequencies. DTW almost always outperforms the other two distance measures. However, the differences are small, of the order of $10^{-2}$ in our study.
Given that DTW is more computationally intensive than $\mathcal{L}_1$ and $\mathcal{L}_2$ (approximately $\times6$, $\times10$, and $\times27$ for the yearly, quarterly, and monthly frequencies, respectively), we further investigate the statistical significance of the achieved performance improvements. To this end, we apply the Multiple Comparisons with the Best (MCB) test, which examines whether the average (across series) ranking of each distance measure is significantly different from the others (for more details on MCB, please see [@Koning2005]). With MCB, when the confidence intervals of two methods overlap, their ranked performances are not statistically different. The analysis is done for $k=500$. The results are presented in Figure \[fig:MCB1\]. We observe that DTW results in the best-ranked performance, which, however, is not statistically different from that of the other two distance measures. We argue that if the computational cost is a concern, one may choose between $\mathcal{L}_1$ and $\mathcal{L}_2$. Otherwise, DTW is preferable, both in terms of average forecast accuracy and mean ranks. In the analysis below, we focus on the DTW distance measure.

![MCB significance tests for the three distance measures for each data frequency.[]{data-label="fig:MCB1"}](MCB1.eps){width="\textwidth"}

The aforementioned results are based on the application of preprocessing (as described in section \[sec:preprocessing\]), including seasonal adjustment and smoothing, before searching for similar series. We now investigate the improvements offered by seasonal adjustment and smoothing. Table \[tab:explorepreprocessing\] presents the MASE results for DTW across different $k$ values, with and without the preprocessing described in sections \[sec:seasonality\] and \[sec:smoothing\]. Note that the scaling process (as described in section \[sec:scaling\]) is always applied, to make the target and reference series comparable.
Smoothing does not appear to improve the forecast accuracy significantly for the yearly frequency. On the contrary, seasonal adjustment and smoothing are of great importance for the seasonal data (quarterly and monthly), where the drop in the MASE values is substantial. The difference between the yearly and the other frequencies can be explained by the lack of seasonal patterns and the smaller sample sizes in the former case, which allow for easier identification of similar series. Regardless, preprocessing always provides similar or better accuracy, so it is recommended with the forecasting with similarity approach.

  Frequency   Adjustment & smoothing       1       5      10      50     100         500     1000
  ----------- ------------------------ ------- ------- ------- ------- ------- ----------- -------
  Yearly      NO                         3.544   2.821   2.735   2.644   2.639       2.626   2.641
              YES                        3.270   2.835   2.730   2.656   2.641   **2.623**   2.637
  Quarterly   NO                         1.657   1.411   1.359   1.384   1.396       1.419   1.422
              YES                        1.293   1.177   1.158   1.117   1.115   **1.115**   1.116
  Monthly     NO                         1.263   1.077   1.020   1.011   1.012       1.040   1.060
              YES                        1.001   0.895   0.875   0.861   0.861   **0.857**   0.857
  Total       NO                         1.888   1.564   1.502   1.483   1.486       1.503   1.517
              YES                        1.597   1.413   1.373   1.339   1.335   **1.329**   1.332

  : The performance of forecasting with similarity, with and without seasonal adjustment and smoothing, for different pool sizes ($k$, in columns). The DTW distance measure is considered. \[tab:explorepreprocessing\]

Similarity versus model-based forecasts {#sec:similarity-versus-model-based}
---------------------------------------

Having identified the optimal settings (DTW, $k=500$, and preprocessing) for forecasting with similarity, abbreviated from now on simply as ‘Similarity’, in this subsection we turn our attention to comparing the accuracy of our approach against well-known forecasting benchmarks. We use two benchmark methods.
The forecasts of the first method derive from the optimally selected exponential smoothing model, with the selection based on Akaike’s Information Criterion corrected for small sample sizes ($\text{AIC}_c$). This optimal selection occurs per series, so a different model may be selected for each series. We use the implementation available in the *forecast* package for the R statistical software, and in particular the `ets()` function [@Hyndman2008b]. The second benchmark is the simple (equally-weighted) combination of three exponential smoothing models: Simple Exponential Smoothing, Holt’s linear trend Exponential Smoothing, and Damped trend Exponential Smoothing. This combination is applied to the seasonally adjusted data (multiplicative classical decomposition) if the data display seasonal patterns according to the seasonality test described in section \[sec:seasonality\]. This combination approach has been used as a benchmark in international forecasting competitions [@Makridakis2000a; @Makridakis2019-oy] and is usually abbreviated as SHD. Figure \[fig:benchmarks\] shows the accuracy of Similarity against the two benchmarks, ETS and SHD. The comparison is made for various historical sample sizes to examine the effect of data availability. We observe:

- In the yearly frequency, Similarity always outperforms the two benchmarks, regardless of the length of the available history. It is worth mentioning that ETS improves when not all available observations are used for model fitting (truncated target series). Using just the last $14$ years of the historical samples gives the best accuracy in the yearly frequency for ETS. SHD and Similarity perform better when more data are available.

- In the quarterly frequency, Similarity is again overall better than the two benchmarks. The only exception is when the series are very short ($3$ years of history), where ETS outperforms Similarity.
Finally, for long series, the performance of SHD is close to that of Similarity.

- In the monthly frequency, ETS is better than Similarity, which is in turn better than SHD. This is especially true for short histories; the performance differences between the three approaches are indistinguishable when longer series are available. Lengthier monthly series generally result in improved performance up to a point: if more than $7$ or $8$ years of data are available, then the changes in forecasting accuracy are small.

![Benchmarking the performance of Similarity against ETS and SHD for various historical sample sizes. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)[]{data-label="fig:benchmarks"}](Benchmarks.eps){width="3.6in"}

Figure \[fig:benchmarks\] also shows the performance of the simple forecast combination of ETS and Similarity (“ETS-Similarity”)[^1]. The rationale is that these two forecasting approaches are diverse in nature (model-based versus data-centric) but robust when applied separately, so we expect that their combination will also perform well [@Lichtendahl2019]. We observe that this simple combination performs on par with Similarity for the yearly frequency and is much better than any other approach at the seasonal frequencies. Overall, the simple combination ETS-Similarity is the best approach. This suggests that model-based and data-centric approaches offer complementary benefits in terms of forecasting performance; solely focusing on one or the other might not be ideal.

Finally, we compare the differences in the ranked performance of the three approaches (ETS, SHD, and Similarity) and the one combination (ETS-Similarity) in terms of their statistical significance (MCB). The results are presented in the nine panels of Figure \[fig:MCB2\] for each frequency (in rows) and short, medium, and long historical samples (in columns).
We observe:

- Similarity is significantly better than ETS and SHD for the short and long yearly series. At the same time, Similarity performs statistically similarly to ETS and SHD for the other frequencies.

- The simple combination of ETS and Similarity is always ranked 1^st^. Moreover, its performance is statistically significantly better than that of ETS and SHD for all frequencies and historical sample sizes (their intervals do not overlap). Similarity and ETS-Similarity are not statistically different at the yearly frequency, but the combination approach is better at the seasonal ones.

![MCB significance tests for ETS, SHD, Similarity, and ETS-Similarity for each data frequency and various sample sizes.[]{data-label="fig:MCB2"}](MCB2.eps){width="\textwidth"}

Evaluating uncertainty estimation {#sec:evalPI}
---------------------------------

We first investigate the importance of the calibration procedure for prediction intervals by exploring the relationship between the forecastability of the target series and the selected calibrating factor $\delta^*$. We follow @Kang2017345 and use the spectral entropy to measure the “forecastability” of a time series as $$\begin{aligned}
\mathrm{Forecastability} = 1 + \int_{-\pi}^{\pi} \hat f_y(\gamma) \log \hat f_y(\gamma) \mathrm{d} \gamma,\end{aligned}$$ where $\hat f_y(\gamma)$ is an estimate of the normalised spectral density of the time series $y$, describing the importance of frequency $\gamma$. A larger value of Forecastability suggests that the time series contains more signal and is easier to forecast. On the other hand, a smaller value indicates more uncertainty about the future, which suggests that the time series is harder to forecast.
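To make the spectral-entropy measure concrete, the sketch below is a discrete analogue of the integral above (a hypothetical helper, not the authors' implementation): the spectrum is estimated with a raw periodogram normalised to sum to one, and the entropy is scaled by its maximum, $\log n$, so that noise-like series score near 0 and strongly patterned series near 1.

```python
import numpy as np

def forecastability(y):
    """Spectral-entropy-based forecastability, mapped to [0, 1].

    Hypothetical sketch: raw periodogram normalised to a density,
    entropy divided by log(n); high values = easy to forecast.
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    p = np.abs(np.fft.rfft(y)) ** 2   # raw periodogram
    p = p[1:]                         # drop the zero-frequency bin
    p = p / p.sum()                   # normalise to a density
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(p.size)
    return 1.0 - entropy

# A deterministic seasonal signal scores high; white noise scores low.
t = np.arange(120)
seasonal = forecastability(np.sin(2 * np.pi * t / 12))
noise = forecastability(np.random.default_rng(0).normal(size=120))
```

In this toy metric, a pure sinusoid concentrates all spectral power in one frequency bin and scores near 1, while white noise spreads power almost uniformly and scores much lower, mirroring the behaviour of the measure used in the paper.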
Figure \[fig:deltaVSforecastability\] depicts the relationship between forecastability and $\delta^*$ for the M3 series by showing the scatter plots of the aforementioned variables for yearly, quarterly, and monthly data, as well as the complete dataset. The corresponding nonparametric loess regression curves are also shown. Along the top and right margins of each scatter plot, we show the histograms of forecastability and $\delta^*$ to present their distributions. From Figure \[fig:deltaVSforecastability\], we find that time series with lower forecastability values yield higher calibrating factors $\delta^*$. That is, to obtain a more appropriate prediction interval, we need to calibrate more for time series that are harder to forecast. The forecastability of a large proportion of the monthly data is weak when compared to that of the yearly and quarterly data, which makes the overall dataset hard to forecast. The nonparametric loess regression curves indicate a strong dependence between forecastability and the calibrating factor, which is strong evidence in favour of applying a calibrating factor to the prediction intervals of hard-to-forecast time series.

![Relationship between forecastability and the optimal calibrating factor ($\delta^*$) using a nonparametric loess regression curve (blue line) for yearly (top left), quarterly (top right), monthly (bottom left) and overall (bottom right) data in M3. The top and right margins of each subplot are the histograms of forecastability and the optimal calibrating factor $\delta^*$, respectively. []{data-label="fig:deltaVSforecastability"}](deltaVSforecastability.pdf){width="90.00000%"}

We proceed by comparing the forecasting performance based on the calibrated prediction intervals of Similarity and other benchmarks. Table \[table:PIevaluation\] shows the performance of Similarity against the two benchmarks, ETS and SHD, regarding prediction intervals.
The performance of the forecast combination of ETS and Similarity (ETS-Similarity) is also shown. Our findings are as follows:

- For yearly data, Similarity outperforms both ETS and SHD according to MSIS, while also providing higher coverage and upper coverage. The spread of the prediction intervals given by Similarity is only slightly larger than that of ETS (SHD results in the tightest prediction intervals). Therefore, we conclude that Similarity significantly outperforms ETS and SHD for yearly data.

- For quarterly and monthly data, Similarity displays similar performance to that of ETS. However, it yields significantly higher upper coverage, at the cost of a somewhat larger spread.

- Across all data frequencies, the simple combination of ETS and Similarity achieves the best performance regarding MSIS and (upper) coverage levels.

  ---------------- ------------ -------------- -------------------- -----------
                   MSIS         Coverage (%)   Upper coverage (%)   Spread
                                Target: 95%    Target: 97.5%
  *Yearly*
  ETS              30.616       84.341         89.664               12.346
  SHD              35.488       80.439         86.744               **8.782**
  Similarity       23.182       88.372         **95.065**           13.591
  ETS-Similarity   **22.437**   **90.904**     94.677               12.968
  *Quarterly*
  ETS              10.717       87.153         92.659               4.688
  SHD              11.027       87.219         91.981               **4.398**
  Similarity       10.556       87.087         95.635               5.213
  ETS-Similarity   **9.240**    **91.402**     **96.114**           4.950
  *Monthly*
  ETS              6.342        92.032         94.293               4.094
  SHD              6.885        90.799         93.600               **4.039**
  Similarity       6.666        91.068         96.152               4.563
  ETS-Similarity   **5.765**    **94.141**     **96.678**           4.328
  ---------------- ------------ -------------- -------------------- -----------

  : Benchmarking the performance of Similarity against ETS, SHD, and ETS-Similarity with regard to MSIS, coverage, upper coverage and spread of prediction intervals.[]{data-label="table:PIevaluation"}

Discussions {#sec:discussions}
===========

Statistical time series forecasting typically involves selecting or combining the most accurate forecasting model(s) per series, a complicated task significantly affected by data, model, and parameter uncertainties.
On the other hand, nowadays, big data allows forecasters to improve forecasting accuracy through cross-learning, i.e., by extracting information from multiple series of similar characteristics. This practice has proved highly promising, primarily through the exploitation of advanced machine learning algorithms and fast computers [@Makridakis2019-oy]. Our results confirm that data-centric solutions offer a number of advantages over traditional model-based ones, relaxing the assumptions made by the models while also allowing for more flexibility. Thus, we believe that extending forecasting from within series to across series is a promising direction for forecasting research. An important advantage of our forecasting approach over other cross-learning ones is that similarity is derived directly from the data; it does not depend on the extraction of a feature vector that indirectly summarizes the characteristics of the series [@Petropoulos2014152; @Kang2017345; @Kang2019-ar]. As a result, the uncertainty related to the choice and definition of the features used for matching the target to the reference series is effectively mitigated. Moreover, no explicit rules are required for determining what kind of statistical forecasting model(s) should be used per case [@Monteros2019-oy]. Instead of specifying a pool of forecasting models and an algorithm for assigning these models to the series, a distance measure is defined and exploited for evaluating similarity. Finally, forecasting models are replaced by the true future paths of the similar reference series.

Our results are significant for the practice of Operational Research (OR) and Operations Management (OM), with more accurate forecasts translating into better decisions. Forecasting is an important driver for reducing inventory-associated costs and waste in supply chains [for a comprehensive review on supply chain forecasting, see @Syntetos2016-ew].
Small improvements in forecast accuracy are usually amplified in terms of the inventory utility, namely inventory holding and achieved target service levels [@Syntetos2010]. At the same time, forecast accuracy is also essential to other areas of OR, such as humanitarian operations and logistics [@Rodriguez-Espindola2018-mr; @Kovacs2019-tq], and healthcare management [@Brailsford2011-ra; @Willis2018-us]. While point forecasts are oftentimes directly used in inventory settings, we show that forecasting with similarity allows for a better estimation of the forecast uncertainty compared to ETS or SHD. The upper coverage rates of our approach are superior to those of the statistical approaches, directly pointing to higher achieved customer service levels. This is achieved with a minimal increase in the average spread of the prediction intervals, suggesting a small difference in the corresponding holding cost.

Our study also has implications for software providers of forecasting support systems. We offer our code as an open-source solution together with a web interface[^2] (developed in R and Shiny) where a target series can be forecasted through similarity, as described in section \[sec:methodology\], using the large M4 competition data set as the reference set. We argue that our approach is straightforward to implement based on existing solutions, offering a competitive alternative to traditional statistical modelling. Forecasting with similarity can expand the existing toolboxes of forecasting software. Given that no single approach is best for all cases, a selection framework (such as time series cross-validation) can optimally pick between statistical models or forecasting with similarity based on past forecasting performance. However, the computational time is a critical factor that should be carefully taken into consideration, especially when forecasting massive data collections.
This is particularly true in supply chain management, where millions of item-level forecasts must be produced on a daily basis [@SEAMAN2018822]. An advantage of our approach is that the computational tasks in forecasting with similarity can easily be parallelised, in contrast to multivariate models. Moreover, since the DTW distance measure is more computationally intensive than the two other measures presented in this study, an option would be to select between them based on the results of an ABC-XYZ analysis [@RAMANATHAN2006695]. This analysis is based on the Pareto principle (the 80/20 rule), i.e., the expectation that a minority of cases has a disproportionate impact on the whole. In this respect, the target series could first be classified as A, B, or C, according to their importance/cost, and as X, Y, or Z, based on how difficult they are to forecast accurately. Then, series in the AZ class (important but difficult to forecast) could be predicted using DTW, while the rest could use another, less computationally intensive distance measure.

Forecasting with similarity relies on the availability of a rich collection of reference series. To achieve appealing forecasting performance, the reference dataset should be as representative (see @Kang2019-ar for a more rigorous definition) of the target series as possible, which is increasingly easy to achieve in business contexts because of data accumulation. To illustrate and empirically demonstrate the effectiveness of the approach, we use the M4 competition data set as a reference. This data set is considered to represent reality appropriately [@Spiliotis2019M3]. However, if our approach is to be applied to the data of a specific company or sector, then it would make sense for the reference set to be derived from data of that company/sector so as to be as representative as possible.
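To make the computational trade-off around DTW concrete, the following is a textbook $O(nm)$ dynamic-time-warping distance (a hypothetical helper, not the paper's exact implementation) with an optional band constraint; narrowing the band is one practical way to keep DTW affordable for series outside the AZ class.

```python
import numpy as np

def dtw_distance(a, b, window=None):
    """Dynamic time warping distance between two (pre-scaled) series.

    Textbook O(n*m) dynamic-programming implementation. `window` is
    an optional Sakoe-Chiba band that limits how far the warping path
    may stray from the diagonal, reducing the cost to O(n*window).
    """
    n, m = len(a), len(b)
    w = max(window or max(n, m), abs(n - m))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Unlike the Euclidean or L1 distances, DTW matches similarly shaped but time-shifted patterns at zero cost, which is what makes it both more flexible and more expensive.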
In the case that it is challenging to identify appropriate reference series for the target series, generating series with the desirable characteristics [@Kang2019-ar] is an option. We have empirically tested our approach on three representative data frequencies: yearly, quarterly, and monthly. We have no reason to believe that our approach would not perform well for higher frequency data, such as weekly, daily, or hourly. If multiple seasonal patterns appear, as could be the case for the hourly frequency with periodicity within a day (every 24 hours) and within a week (every 168 hours), then a multiple seasonal decomposition needs to be applied instead of the standard STL (the *forecast* package for R offers the `mstl()` function for this purpose). On the other hand, our approach is not suitable as-is for intermittent demand data, where the demand values for several periods are equal to zero. In this case, one could try forecasting with similarity without applying data preprocessing. A similar approach was proposed by @Nikolopoulos2016-gr, who focused on identifying patterns within intermittent demand series rather than across series.

Concluding remarks {#sec:conclusions}
==================

In this paper, we introduce a new approach to forecasting that uses the future paths of similar reference series to forecast a target series. The advantages of our proposition are that it is model-free, in the sense that it does not rely on statistical forecasting models, and that, as a result, it does not assume an explicit DGP. Instead, we argue that history repeats itself (déjà vu) and that the current data patterns will resemble the patterns of other already observed series. The proposed approach is data-centric and relies on the availability of a rich, representative reference set of series, a not so unreasonable requirement in the era of big data.
We examined the performance of the new approach on a widely-used data set and benchmarked it against two robust forecasting methods, namely the automatic selection of the best model from the exponential smoothing family (ETS) and the equally-weighted combination of Simple, Holt, and Damped exponential smoothing (SHD). We find that, for most frequencies, the new approach is more accurate than the benchmarks. Moreover, forecasting with similarity is able to better estimate the uncertainty of the forecasts, resulting in better upper coverage levels, which are crucial for fulfilling customer demand. Finally, we propose a simple combination of model-based and model-free forecasts, which results in an accuracy that is always significantly better than that of either approach separately.

The innovative proposition of forecasting with similarity and without models points towards several future research paths. For example, in this study we do not differentiate the reference series so as to match the industry/field of the target series. It would be interesting to explore whether such matching would further improve the accuracy of forecasting with similarity.

References {#references .unnumbered}
==========

[^1]: Other simple combinations of ETS, SHD, and Similarity were also tested, having on average the same or worse performance than the ETS-Similarity simple combination.

[^2]: Available here: <https://fotpetr.shinyapps.io/similarity/>
---
abstract: 'We study neutron matter at and near the unitary limit using a low-momentum ring diagram approach. By slightly tuning the meson-exchange CD-Bonn potential, neutron-neutron potentials with various $^1S_0$ scattering lengths, such as $a_s=-12070fm$ and $+21fm$, are constructed. Such potentials are renormalized with rigorous procedures to give the corresponding $a_s$-equivalent low-momentum potentials $V_{low-k}$, with which the low-momentum particle-particle hole-hole ring diagrams are summed up to all orders, giving the ground state energy $E_0$ of neutron matter for various scattering lengths. At the limit of $a_s\rightarrow \pm \infty$, our calculated ratio of $E_0$ to that of the non-interacting case is found to be remarkably close to a constant of 0.44 over a wide range of Fermi momenta. This result reveals a universality that is well consistent with recent experimental and Monte-Carlo computational studies of low-density cold Fermi gases at the unitary limit. The overall behavior of this ratio obtained with various scattering lengths is presented and discussed. Ring-diagram results obtained with $V_{low-k}$ and those with $G$-matrix interactions are compared.'
author:
- 'L.-W. Siu, T. T. S. Kuo'
- 'R. Machleidt'
title: |
    Low-momentum ring diagrams\
    of neutron matter at and near the unitary limit
---

Introduction
============

Back in 1999, Bertsch [@bishop01] formulated a many-body problem, asking: what are the ground-state properties of a two-species fermion system that has a zero-range interaction and an infinite scattering length? This problem was originally set up as a parameter-free model for a fictitious neutron matter. Recently, as experiments on trapped cold alkali gases have achieved major breakthroughs, degenerate Fermi gases with a tunable scattering length (including $\pm \infty$) have become accessible in laboratories [@fbre]. Since then, cold Fermi systems have attracted growing attention.
The term ‘unitary limit’ has been used by many authors to refer to the special scenario in a low-density two-species many-body system where the scattering length between particles approaches infinity. More specifically, at the unitary limit, the scattering length $a_s$, the Fermi momentum $k_F$, and the range of the interaction $r_{\mbox{int}}$ satisfy $|a_s| \gg k_F^{-1} \gg r_{\mbox{int}}$. Under this condition, the atoms are ‘strongly interacting’, and a full theoretical description of their properties is a challenging task in many-body theory. Universal behavior is expected to show up in various aspects, including ground-state properties as discussed below, collective excitations [@strin04; @bulgac05; @hei04; @kinast04; @kinast04b; @altmeyer07; @wright07], and thermodynamic properties [@bulgac05b; @bulgac06; @bulgac07; @kinast05; @thomas05]. Such universality can be naively understood as the ‘dropping’ of the scattering length $a_s$ out of the problem, leaving $k_F$ as the only relevant length scale. In particular, the ground-state energy $E_0$ is expected to be proportional to that of the non-interacting gas, $E_0^{free}$ [@baker99], that is, $E_0/E_0^{free}=\xi$, or equivalently $$\frac{E_0}{A}=\frac{3}{5}\frac{k_F^2}{2}\xi$$ ($\hbar=m=1$), $A$ being the number of particles. The universal constant $\xi$ is of great interest, and many attempts have been made to derive it analytically or determine it experimentally. Theoretical calculations suggest that $\xi$ is between 0.3 and 0.7. For example, an early work based on different Padé approximations gives $\xi=0.326, 0.568$ [@baker99]. Diagrammatic approaches give 0.326 with the Galitskii resummation [@hei01], 0.7 with the ladder approximation [@hei01], and 0.455 with a diagrammatic BCS-BEC crossover theory [@perali04]. Other theoretical approaches have also been used, including the $\epsilon$ expansion, which gives $\xi=0.475$ in [@nishi06] and [@chen06], and the variational formalism, which gives 0.360 in [@hauss07].
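For reference, the free-gas energy per particle entering the ratio $\xi$ is the standard textbook average of the kinetic energy $k^2/2$ over the Fermi sphere (with $\hbar=m=1$):

$$\frac{E_0^{free}}{A}=\frac{\int_0^{k_F}\frac{k^2}{2}\,k^2\,dk}{\int_0^{k_F}k^2\,dk}
=\frac{3}{5}\,\frac{k_F^2}{2},$$

so that multiplying by $\xi$ reproduces the expression for $E_0/A$ given above.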
The four most recent experimental measurements are listed in Table \[exp\]. Though the experimental results are consistent with each other, the experimentally determined value of $\xi$ still carries relatively large error bars ($\sim$10%). By far the best estimates of $\xi$ are considered to be those from quantum Monte-Carlo methods, giving $\xi=0.44(1)$ [@carlson03] and 0.42(1) [@astra04].

  $\xi$                    Authors                  Ref.
  ------------------------ ------------------------ --------------
  0.36(15)                 Bourdel [*et al.*]{}     [@bourdel05]
  0.51(4)                  Kinast [*et al.*]{}      [@kinast05]
  0.46(5)                  Partridge [*et al.*]{}   [@part06]
  $0.46^{+0.05}_{-0.12}$   Stewart [*et al.*]{}     [@stewart06]

  : Comparison of recent experimental values of $\xi$.\[exp\]

Cold and dilute neutron matter is a special class of cold Fermi system with great importance in astrophysics. Its properties at resonance have attracted much interest recently [@schwenk05; @lee06]. In this work, we report results from low-momentum ring-diagram calculations of the ground-state energy of neutron matter at and near the unitary limit. As is well known, the $^1S_0$ channel of neutron matter has a fairly large scattering length $a_s$ ($-18.97fm$), which is nonetheless finite. Here, by adjusting the interaction parameters of the CD-Bonn potential [@cdbonn], we construct ‘tuned’ neutron interactions with different $a_s$’s, such as $-9.83fm$, $-12070fm$ and $+21fm$ (the last of which possesses a bound state). For a wide range of neutron densities, the case of $a_s=-12070fm$ can be considered equivalent to the unitary limit, namely $a_s\rightarrow -\infty$. We compute the ground-state energy of neutron matter, with the inter-neutron potentials being these ‘tuned’ CD-Bonn potentials, in two steps: renormalization followed by ring summation.
We first renormalize the neutron interactions with a T-matrix equivalence renormalization method [@bogner01; @bogner02; @coraggio02; @schwenk02; @bogner03; @jdholt], where the high-momentum components beyond a decimation scale $\Lambda$ are integrated out. This gives the corresponding low-momentum interactions $V_{low-k}$ with the scattering lengths preserved. Then, we calculate the ground-state energy by summing the particle-particle hole-hole ($pphh$) ring diagrams [@song87] to all orders. In this ring summation, we employ a model-space approach; namely, the summation is carried out within a model space characterized by $\{k\leq \Lambda\}$. We shall closely examine how our results differ from similar calculations with a different renormalized interaction, the Brueckner $G$-matrix, on which the Brueckner-Hartree-Fock (BHF) method is based. The BHF method has been widely used for treating strongly interacting nuclear many-body problems [@bethe; @jeholt]. However, BHF is a lowest-order reaction-matrix ($G$-matrix) theory and may be improved in several respects. To take care of the short-range correlations, the ladder diagrams of two particles interacting via the bare interaction are summed to all orders in BHF. However, this method does not include diagrams representing hole-hole correlations, such as diagram (iii) of Fig. 1. Note that this diagram has repeated $pphh$ interactions as well as self-energy insertions on both hole and particle lines. Another drawback of the traditional BHF is that it employs a discontinuous single-particle (s.p.) spectrum which has a gap at the Fermi surface $k_F$. To improve upon these drawbacks, Song et al. [@song87] formulated a $G$-matrix ring-diagram method for nuclear matter, with which the $pphh$ ring diagrams such as diagrams (i) to (iii) of Fig. 1 are summed to all orders. This ring-diagram method has been applied to nuclear matter and has given satisfactory results [@song87].
The $V_{low-k}$ ring-diagram method used in this work closely parallels that of [@song87], with one significant difference: the interaction used in the $G$-matrix ring-diagram method is energy dependent (the Brueckner $G$-matrix is energy dependent, as we shall discuss later), which considerably complicates the calculation. $V_{low-k}$ provides a cleaner and simpler implementation of such an all-order ring summation. We first outline the ring-diagram approach in section II. Section III details the derivation of the low-momentum interaction from the CD-Bonn potential. Our main results from the $V_{low-k}$ ring-diagram method are presented in section IV. There we present our results for the ground-state energy and the ratio $E_0/E_0^{free}$ obtained with potentials of various scattering lengths. A fixed-point criterion for determining the decimation scale $\Lambda$ is discussed, and one can also find a comparison of the ground-state energies obtained with the two different methods: the $V_{low-k}$ and the $G$-matrix ring-diagram methods. We summarize and discuss our work in the last section.

Low-momentum ring diagrams
==========================

In this section, we describe how we calculate the ring diagrams for the ground-state energy shift $\Delta E_0$, defined as the difference $(E_0-E_0^{free})$, where $E_0$ is the true ground-state energy and $E_0^{free}$ is the corresponding quantity for the non-interacting system. In the present work, we consider the $pphh$ ring diagrams as shown in Fig. 1. We shall calculate the all-order sum, denoted as $\Delta E_0 ^{pp}$, of such diagrams. Our calculation is carried out within a low-momentum model space $\{k\leq \Lambda\}$, and each vertex of the diagrams is the renormalized effective interaction corresponding to this model space.
Two types of such interactions will be employed, one being the energy-independent $V_{low-k}$ and the other being the energy-dependent $G$-matrix interaction. Let us consider the former first. In this case, $\Delta E_0^{pp}$ can be written [@song87] as $$\begin{aligned} \Delta E_0^{pp}&=& \frac{-1}{2\pi i}\int _{-\infty}^{\infty} d\omega e^{i\omega 0^+} tr_{<\Lambda}[F(\omega)V_{low-k}\nonumber\\ && +\frac{1}{2}(F(\omega)V_{low-k})^2 +\frac{1}{3}(F(\omega)V_{low-k})^3+\cdots]\end{aligned}$$ where $F$ is the free $pphh$ propagator $$F_{ab}(\omega)= \frac{\bar n_a \bar n_b}{\omega-(\epsilon_a+\epsilon_b)+i0^+} -\frac{ n_a n_b}{\omega-(\epsilon_a+\epsilon_b)-i0^+}$$ with $n_a=1$ for $k_a\leq k_F$, $n_a=0$ for $k_a>k_F$, and $\bar n_a=(1-n_a)$. We now introduce a strength parameter $\lambda$ and a $\lambda$-dependent Green function $G^{pp}(\omega,\lambda)$ defined by $$G^{pp}(\omega,\lambda)=F(\omega) +\lambda F(\omega)V_{low-k}G^{pp}(\omega,\lambda).$$ The energy shift then takes the following simple form when expressed in terms of $G^{pp}$, namely $$\Delta E_0^{pp}=\frac{-1}{2\pi i}\int_0^1 d\lambda \int_{-\infty}^{\infty}d\omega\, e^{i\omega 0^+}tr_{<\Lambda} [G^{pp}(\omega,\lambda) V_{low-k}].$$ Using Lehmann’s representation for $G^{pp}$, one can show that $$\label{eng} \Delta E^{pp}_0=\int_0^1 d\lambda \Sigma_m\Sigma_{ijkl<\Lambda}Y_m(ij,\lambda)Y_m^*(kl,\lambda) \langle ij|V_{low-k}|kl \rangle,$$ where the transition amplitudes $Y$ are given by the following RPA equation: $$\begin{aligned} &&\sum _{ef}[(\epsilon_i+\epsilon_j)\delta_{ij,ef}+ \lambda(1-n_i-n_j)\langle ij|V_{low-k}|ef\rangle] \nonumber \\ && \times Y_m(ef,\lambda) =\omega_mY_m(ij,\lambda);~~(i,j,e,f)<\Lambda. \label{rpa}\end{aligned}$$ The index $m$ denotes states dominated by hole-hole components, namely, states that satisfy $\langle Y_m|\frac{1}{Q}|Y_m\rangle=-1$ with $Q(i,j)=(1-n_i-n_j)$. We have used the HF s.p.
spectrum given by $V_{low-k}$, namely $$\label{sp} \epsilon_k = \hbar^2k^2/2m +\sum _{h<k_F}\langle kh|V_{low-k}|kh\rangle$$ for both holes and particles with $k\leq \Lambda$. Thus, the propagators of the diagrams shown in Fig. 1 all include HF insertions to all orders. The above spectrum is continuous up to $\Lambda$. The above ring-diagram method is a renormalization-group approach for a momentum model space defined by a momentum boundary $\Lambda$, with the space of momenta greater than $\Lambda$ integrated out. The resulting effective interaction for the model space is $V_{low-k}$, which is energy independent. This renormalization procedure can, however, also lead to a model-space effective interaction which is energy dependent. The $G$-matrix ring-diagram method of [@song87] is of the latter type. Formally, these two approaches should be equivalent. In the present work, we carry out ring-diagram calculations using both approaches; it is of interest to compare their results. In the following, let us briefly describe the $G$-matrix ring-diagram method [@song87]. Here each vertex of Fig. 1 is a model-space $G$-matrix interaction, denoted as $G^M$. It is defined by $$G^M_{ijkl}(\omega)=V_{ijkl}+\sum_{rs}V_{ijrs}\frac{Q^M(rs)} {\omega-k_r^2-k_s^2+i0^+}G^M_{rskl}(\omega)$$ where $k_r^2$ stands for the kinetic energy $\hbar^2k_r^2/2m$, and similarly for $k_s^2$. The Pauli projection operator $Q^M$ ensures that the intermediate states lie above $k_F$ and outside the model space; namely, it is defined by $$\begin{aligned} Q^M(rs)&=&1,~ if~ \max(k_r,k_s)>\Lambda~ and~ \min(k_r,k_s)>k_F \nonumber \\ &=&0,~ otherwise.\end{aligned}$$ In the above, $k_F<\Lambda$. In Ref. [@song87], $\Lambda$ is chosen to be $\sim 3 fm^{-1}$. Note that the above $G^M$ is energy dependent; namely, it depends on the energy variable $\omega$. However, $\omega$ is not a free parameter; it is determined in a self-consistent way.
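As a side check of the coupling-constant trick used above for $\Delta E_0^{pp}$, i.e., the equivalence between the $tr$-log series in powers of $FV_{low-k}$ and the $\lambda$ integration of $tr[G^{pp}V_{low-k}]$, one can verify the underlying matrix identity numerically on a toy two-level model. The matrices below are arbitrary small-norm stand-ins for $F$ and $V_{low-k}$, not physical inputs:

```python
import numpy as np

# Arbitrary small-norm stand-ins for the propagator F and interaction V.
F = np.array([[0.20, 0.05], [0.05, 0.10]])
V = np.array([[0.30, 0.10], [0.10, 0.40]])
M = F @ V

# tr-log series:  tr[ FV + (FV)^2/2 + (FV)^3/3 + ... ]
series = sum(np.trace(np.linalg.matrix_power(M, n)) / n for n in range(1, 60))

# Coupling-constant integration of tr[ G(lam) V ] with
# G(lam) = F + lam F V G(lam), i.e. G V = (1 - lam F V)^{-1} F V.
lam = np.linspace(0.0, 1.0, 2001)
vals = np.array([np.trace(np.linalg.inv(np.eye(2) - l * M) @ M) for l in lam])
integral = np.sum((vals[1:] + vals[:-1]) / 2.0) * (lam[1] - lam[0])

# The two evaluations agree, as the formal manipulation requires.
```

This is only the algebraic skeleton (the $\omega$ integration and the RPA structure are absent), but it illustrates why integrating over the interaction strength $\lambda$ resums the entire series of repeated $FV_{low-k}$ insertions.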
For example, the model-space s.p. spectrum is given by the following self-consistent equations: $$\epsilon _a=\frac{\hbar^2k_a^2}{2m}+\langle a|U|a \rangle;$$ $$\begin{aligned} \langle a |U|a \rangle &=&\sum_{h\leq k_F}\langle a,h|G^M(\omega=\epsilon_a +\epsilon_h)|a,h\rangle, ~a<\Lambda \nonumber \\ &=&0, ~ otherwise.\end{aligned}$$ In the above, $U$ is the s.p. potential and $\epsilon$ the model-space s.p. energy, which is determined self-consistently with the energy variable of $G^M$. Note that this s.p. spectrum does not have a gap at $k_F$; it is continuous up to $\Lambda$. When $\Lambda=k_F$ is chosen, the above reduces to the self-consistent BHF s.p. spectrum. When calculating the ring diagrams using $G^M$, its energy variable is also determined self-consistently. In terms of $G^M$, the all-order sum of the $pphh$ ring diagrams is [@song87] $$\Delta E^{pp}_0=\int_0^1 d\lambda \sum_{m}\sum_{ijkl(<\Lambda)}Y_m(ij,\lambda)Y_m^*(kl,\lambda) G^M_{kl,ij}(\omega_m^-)$$ where the transition amplitudes $Y_m$ and eigenvalues $\omega_m^-$ are given by the following self-consistent RPA equation: $$\begin{aligned} &&\sum _{ef}[(\epsilon_i+\epsilon_j)\delta_{ij,ef}+ \lambda(1-n_i-n_j)L_{ij,ef}(\omega)]Y_m(ef,\lambda)\nonumber\\ &&=\mu_m(\omega,\lambda)Y_m(ij,\lambda);~~(i,j,e,f)<\Lambda.\end{aligned}$$ The index $m$ again denotes states dominated by hole-hole components. The vertex function $L$ is obtained from the two- and one-body diagrams of first order in $G^M$ [@song87]. The above equation is solved with the self-consistency condition that the energy variable of $L$ equals the eigenvalue, namely $$\omega=\mu_m(\omega,\lambda)\equiv \omega_m^-(\lambda).$$ Compared with the $V_{low-k}$ ring-diagram calculation described earlier, the above $G$-matrix calculation is clearly more complicated. Because of the energy dependence of the interaction $G^M$, the above equations have to be solved self-consistently, both for the s.p. spectrum and for the RPA equations.
To attain this self-consistency, it is necessary to use iteration methods, and this procedure is often numerically involved. In contrast, a ring-diagram calculation using the energy-independent interaction $V_{low-k}$ is much simpler. As mentioned earlier, we shall carry out ring-diagram calculations using both methods.

$V_{low-k}$ with infinite scattering length
============================================

To carry out the above ring-diagram calculation, we need the low-momentum potential $V_{low-k}$. Since we are interested in neutron matter at and near the unitary limit (infinite scattering length), we need $V_{low-k}$ potentials of definite scattering lengths, including $\pm \infty$, so that the dependence of our results on the scattering length can be investigated. In the present work, we have chosen a two-step procedure to construct such potentials, so that the resulting potentials are close to realistic neutron potentials. We first construct bare potentials $V^a$ based on a realistic nucleon-nucleon potential; these potentials are tuned so that they have definite scattering lengths. Renormalized low-momentum potentials $V_{low-k}^a$ are then obtained from $V^a$ using a renormalization procedure which preserves the scattering length. We start from the high-precision CD-Bonn [@cdbonn] nucleon-nucleon potential. For this potential, the scattering length of the $^1S_0$ channel is already fairly large ($-18.97 fm$), and it is found to depend rather sensitively on the interaction parameters. Thus, by slightly tuning the interaction parameters of the CD-Bonn potential, we have obtained a family of $^1S_0$ neutron potentials of definite scattering lengths, which we denote as $V^a$. Our tuning procedure will be discussed in section IV(A). Recently, there have been a number of studies of the low-momentum nucleon-nucleon potential $V_{low-k}$ [@bogner01; @bogner02; @coraggio02; @schwenk02; @bogner03; @jdholt].
$V_{low-k}$ is obtained from a bare nucleon-nucleon potential by integrating out the high-momentum components, under the restriction that the deuteron binding energy and the low-energy phase-shifts are preserved. The $V_{low-k}$ obtained from different realistic potentials ( CD-Bonn [@cdbonn], Argonne [@argonne] , Nijmegen [@nijmegen] and Idaho [@chiralvnn]) all flow to a unique potential when the cut-off momentum is lowered to around $2fm^{-1}$. The above $V_{low-k}$ is obtained using a T-matrix equivalence renormalization procedure [@bogner01; @bogner02; @coraggio02; @schwenk02; @bogner03; @jdholt]. Since this procedure preserves the half-on-shell T-matrix, it of course preserves the scattering length. Thus this procedure is suitable for constructing $V_{low-k}^a$, the low-momentum interaction with definite scattering length. Using this procedure, we start from the $T$-matrix equation $$T(k',k,k^2) = V^a(k',k) + \int _0 ^{\infty} q^2 dq \frac{V^a(k',q)T(q,k,k^2 )} {k^2-q^2 +i0^+ } ,$$ where $V^a$ is a modified CD-Bonn potential of scattering length $a$. Notice that in the above the intermediate state momentum $q$ is integrated from 0 to $\infty$. We then define an effective low-momentum T-matrix by $$\begin{aligned} T_{low-k }(p',p,p^2) &=& V^a_{low-k }(p',p) \nonumber \\ \nonumber &+& \int _0 ^{\Lambda} q^2 dq \frac{V^a_{low-k }(p',q) T_{low-k} (q,p,p^2)} {p^2-q^2 +i0^+ },\\\end{aligned}$$ where the intermediate state momentum is integrated from 0 to $\Lambda$, the momentum space cut-off. We require the above T-matrices to satisfy the condition $$T(p',p,p^2 ) = T_{low-k }(p',p, p^2 ) ;~( p',p) \leq \Lambda.$$ The above equations define the effective low momentum interaction $V_{low-k}^a$. The iteration method of Lee-Suzuki-Andreozzi [@suzuki80; @andre96] has been used in calculating $V_{low-k}^a$ from the above T-matrix equivalence equations. From now on, we shall denote $V_{low-k}^a$ simply as $V_{low-k}$. 
Results
=======

Low-momentum interactions and scattering lengths
------------------------------------------------

To study neutron matter at the unitary limit, we first need a realistic neutron-neutron interaction that would lead to a huge $^1S_0$ scattering length $a_s$ and a small effective range $r_e$. We obtain such an interaction by ‘tuning’ the meson mass $m_\sigma$ in the usual CD-Bonn potential. The exchange of a lighter meson generates a stronger attraction, therefore making the scattering length $a_s$ more negative until a bound state is formed. As one ‘tunes’ across the bound state, $a_s$ passes from $-\infty$ to $+\infty$, eventually becoming less and less positive. In this work, this $m_\sigma$ ‘tuning’ is taken as a manual adjustment of the strength of the neutron-neutron potential. Of great interest is that this ‘tuning’ may naturally come from the density dependence of the nucleon-nucleon potential via the mechanism of Brown-Rho (BR) scaling[@brown91; @brown04; @rapp99], which suggests that the in-medium meson masses should [*decrease*]{}. At normal nuclear matter density, the masses of the $\rho$, $\omega$ and $\sigma$ mesons are all expected to decrease by about $15\%$ [@rapp99] compared to their masses in free space. This decrease will enhance not only the attraction from $\sigma$ but also the repulsion from $\rho$ and $\omega$. As a preliminary study, we shall tune only $m_{\sigma}$ in the present work. To compensate for the repulsive effect from $\rho$ and $\omega$ (which are not tuned in the present work), we shall tune $m_{\sigma}$ only slightly, namely by a few percent. We shall consider the above BR scaling to be compatible with neutron matter of moderate density ($k_F \sim 1fm^{-1}$). In a future publication, we plan to carry out further studies, including the tuning of the $\rho$- and $\omega$-meson masses.
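The sign structure of $a_s$ across a bound-state threshold can be illustrated with a toy attractive square well (not the CD-Bonn potential itself), for which the scattering length is known in closed form, $a_s = R\,[1-\tan(KR)/(KR)]$ with $K=\sqrt{mV_0}/\hbar$ and $R$ the well range: as the well deepens toward the first bound state at $KR=\pi/2$, $a_s\to-\infty$, and just past threshold it reappears at $+\infty$. A minimal sketch of this behavior:

```python
import math

def a_s_over_R(KR):
    """Scattering length (in units of the well range R) of an attractive
    square well: a_s/R = 1 - tan(KR)/(KR), with K = sqrt(m*V0)/hbar."""
    return 1.0 - math.tan(KR) / KR

# Deepening the well drives a_s -> -infinity as KR -> pi/2 from below
# (first bound state about to form); past threshold a_s flips to large
# positive values -- the same mechanism exploited by tuning m_sigma.
for KR in (1.40, 1.50, 1.55, 1.60, 1.65):
    print(f"KR = {KR:.2f}  a_s/R = {a_s_over_R(KR):+10.2f}")
```

The same qualitative trend underlies Table \[meson\]: a small change in the attraction near threshold produces an enormous change in $a_s$.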
  name               $m_\sigma(MeV)$   $a_s(fm)$   $r_e(fm)$
  ------------------ ----------------- ----------- -----------
  original CD-Bonn   452               -18.97      2.82
  CD-Bonn-10         460               -9.827      3.11
  CD-Bonn-42         447               -42.52      2.66
  CD-Bonn-$\infty$   442.85            -12070.00   2.54
  CD-Bonn+$\infty$   442.80            +5121.00    2.54
  CD-Bonn+21         434               +21.01      2.31

  : $m_\sigma$ in the original CD-Bonn potential is tuned to give neutron-neutron potentials with different scattering lengths.\[meson\]

Various ‘tuned’ CD-Bonn potentials are listed in Table \[meson\]. From there one can see the sensitivity of the scattering length to the change in $m_\sigma$. At $m_\sigma\approx442MeV$, namely about a $2\%$ decrease from the original value, $a_s\approx-12000fm$. Notice that the effective ranges of the CD-Bonn potentials are larger than the actual ranges of the potentials themselves. For example, $r_e$ for the original CD-Bonn potential is $2.82fm$, considerably larger than the range of one-pion exchange. Within the range of Fermi momenta from $0.8fm^{-1}$ to $1.5fm^{-1}$ used in our computation below, $a_s\approx-12000fm$ is obviously enormous compared to any length scale in the system; thus we expect the neutron matter to be at the unitary limit, i.e., no different from the limiting case $a_s=-\infty$. For convenience, we name this potential CD-Bonn-$\infty$. Following the renormalization procedures already described in Section III, we obtain the low-momentum potentials $V_{low-k}$ for the CD-Bonn potentials listed above. A comparison of the diagonal matrix elements of these $V_{low-k}$’s (with a fixed cut-off momentum $\Lambda$) is shown in Figure \[compare\_cdbs\]. It is of interest that the strength of $V_{low-k}$ changes only weakly with the scattering length. For example, it changes by merely about $10\%$ from $a_s=-18.97fm$ to $-12070fm$.
Ground-state energy and the universal constant $\xi$
----------------------------------------------------

Here we shall present our major results, namely the ground-state energies $E_0$ of neutron matter at and close to the unitary limit from the summation of low-momentum ring diagrams to all orders. Following the potential renormalization procedure described in section III, we first calculate $V_{low-k}$ for certain chosen values of the decimation scale $\Lambda$. Then the all-order sum of the $pphh$ ring diagrams is calculated using the above $V_{low-k}$. As introduced in Section II, the details of the summation of the $pphh$ ring diagrams can be found in Ref.[@song87]. How to choose the decimation scale $\Lambda$ is clearly an important step in our calculation, and in the present work we shall use a stable-point, or ‘fixed-point’, criterion in deciding $\Lambda$. Before discussing this criterion, let us first present some of our results for the ground-state energy per particle $(E_0/A)$. In Fig. 3 we present such results for four $a_s$ values, calculated with the $\Lambda$’s determined by the above criterion. (The details of this determination will be described a little later.) As shown in the figure, $E_0/A$ does not change strongly with $a_s$. The ratios $\xi=E_0/E_0^{free}$ are then readily obtained, as shown in Fig. 4. It is of interest that the ratios for the four $a_s$ cases are all weakly dependent on $k_F$. To help understand this behavior, we plot in Fig. 5 the potential energy per particle $PE/A$ (namely $\Delta E_0^{pp}/A$ of Eq.(6)) versus $k_F^2$, for the same four $a_s$ cases. It is rather impressive that they all appear to be straight lines. We have fitted the ‘lines’ in the figure to the equation $PE/A=(\hbar^2/m)\left(\beta k_F^2+\gamma\right)$: we have found $(\beta,\gamma)$ =(-0.1370, 0.0002), (-0.1498, -0.0008), (-0.1649, -0.0035) and (-0.1797, -0.0082), respectively, for $a_s$ = $-9.87fm$, $-18.97fm$, $-12070fm$ and $+21.0fm$.
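As a rough consistency check (our own bookkeeping, not a calculation from the paper): if one assumes the kinetic contribution equals the free-gas value, $E_0^{free}/A=\tfrac{3}{5}\hbar^2k_F^2/2m=0.3\,(\hbar^2/m)k_F^2$, the fitted coefficients translate directly into $\xi = 1+(\beta+\gamma/k_F^2)/0.3$, reproducing the weak $k_F$-dependence of the ratios:

```python
# Recover xi = E0/E0_free from the fitted PE/A = (hbar^2/m)(beta*kF^2 + gamma),
# assuming (our simplification) the kinetic part equals the free-gas value
# E0_free/A = (3/5) hbar^2 kF^2 / (2m) = 0.3 (hbar^2/m) kF^2.
fits = {  # a_s (fm) -> (beta, gamma), values quoted in the text
    -9.87:   (-0.1370,  0.0002),
    -18.97:  (-0.1498, -0.0008),
    -12070.: (-0.1649, -0.0035),
    +21.0:   (-0.1797, -0.0082),
}

def xi(beta, gamma, kF):
    return 1.0 + (beta + gamma / kF**2) / 0.3

for a_s, (b, g) in fits.items():
    print(a_s, [round(xi(b, g, kF), 3) for kF in (0.8, 1.0, 1.5)])
```

For the near-unitary fit this gives $\xi\approx0.43$-$0.45$ over $k_F=(0.8-1.5)fm^{-1}$, consistent with the narrow window quoted below.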
The rms deviations for the above fits are all very small (all less than 0.0013), confirming that they are indeed very close to straight lines. The above results are of interest, and are consistent with those shown in Fig. 4. In fact, the ratios of Fig. 4 are determined by the ‘slopes’ of these ‘lines’. Before further discussing our results, let us now address the question of how to determine the decimation scale $\Lambda$. There are basically two considerations: the first concerns the experimental NN scattering phase shifts on which realistic NN potentials are based; the second is the dependence of our results on $\Lambda$. Realistic NN potentials [@cdbonn; @argonne; @nijmegen; @chiralvnn] are constructed to reproduce the experimental NN phase shifts up to $E_{lab}\approx 300MeV$. This suggests that $\Lambda$ is about $2fm^{-1}$, as beyond this scale NN potential models are not experimentally constrained and are thus rather uncertain (model dependent) [@bogner03]. We now turn to the dependence of our results on $\Lambda$. As described in Section II, $V_{low-k}$ is used in the determination of the H.F. single-particle spectrum (see Eq.\[sp\]), the transition amplitudes $Y$ in the RPA equation (see Eq.\[rpa\]), and finally, the ground-state energy $E_0$ (see Eq. \[eng\]). Intuitively, $E_0$ should exhibit a non-trivial $\Lambda$-dependence. For various Fermi momenta, this dependence has been studied and is found to be remarkably mild. As an example, let us present in Fig. 6 our results obtained with the potential CD-Bonn-$\infty$. For $\Lambda=(2.0-2.6)fm^{-1}$, it is seen that $\xi$ actually varies by a rather small amount (note that the range of our plot is from 0.438 to 0.444). Furthermore, the $\Lambda$ dependence of $\xi$ shows up as a curve with a minimum. The final choice of $\Lambda$ is based on the criterion that $E_0$ should be stable against changes in $\Lambda$.
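The fixed-point search itself is elementary. The sketch below uses hypothetical $\xi(\Lambda)$ samples (illustrative numbers only, mimicking the shallow minimum of Fig. 6) and locates the stationary point $d\xi/d\Lambda=0$ from the parabola through three equally spaced samples around the smallest value:

```python
# Hypothetical, equally spaced samples of xi(Lambda) with a shallow
# minimum near Lambda = 2.3 fm^-1 (illustrative values only, chosen to
# stay within the 0.438-0.444 window quoted for Fig. 6).
h = 0.1
Lam = [2.0 + h * i for i in range(7)]              # 2.0 ... 2.6 fm^-1
xi  = [0.439 + 0.05 * (L - 2.3) ** 2 for L in Lam]

# Vertex of the parabola through the three points around the minimum
# sample; this is where dE0(Lambda)/dLambda = 0.
i = xi.index(min(xi))
y0, y1, y2 = xi[i - 1], xi[i], xi[i + 1]
Lam_fixed = Lam[i] + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2)
print(Lam_fixed)   # ~2.3
```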
As shown in the figure, an obvious stable point, or fixed point, defined by $dE_0(\Lambda)/d\Lambda=0$, is found at about $2.3fm^{-1}$. Thus we have used $\Lambda=2.3 fm^{-1}$ for CD-Bonn-$\infty$. We found that the position of the fixed point is almost the same for the different Fermi momenta in the range $(0.8-1.5)fm^{-1}$. The same procedure is applied to the original CD-Bonn and the other tuned potentials. The fixed points, also with a negligible dependence on $k_F$, are found to be $2.15fm^{-1}$, $2.25fm^{-1}$ and $2.4fm^{-1}$, respectively, for the CD-Bonn potentials of scattering lengths $-9.8fm$, $-18.9fm$ (the original CD-Bonn), and $+21.01fm$. The above fixed-point $\Lambda$’s have been used for the results presented in Figs. 3-5. Of great significance is the ratio of the ground-state energy to that of the non-interacting case, namely $E_0/E_0^{free}$. At the unitary limit, it is expected to be a universal constant, named $\xi$. This constant is of great importance as it determines the equation of state of all low-density cold Fermi gases. At the unitary limit, our data on $E_0/E_0^{free}$ all lie within a narrow window from 0.437 to 0.448. This result confirms universality over fermion densities in the wide range $(1.73-11.40)\times10^{-2}fm^{-3}$. Most importantly, the numerical value of $\xi$ is remarkably close to that from Monte Carlo methods, which is believed to be by far the best estimate. Astrakharchik [*et al.*]{} obtained 0.42(1) based on a square-well potential and particle density $nR_0^{3}=10^{-6}$ (where $R_0$ is the potential range). Carlson [*et al.*]{} obtained 0.44(1) based on a ‘cosh potential’ and particle density $n\mu^{-3}=0.020$ (where $2/\mu$ is the effective range). In our case, $n\Lambda^{-3}=(1.4-9.4)\times 10^{-3}$ (where $\Lambda=2.3fm^{-1}$ is the decimation scale in the renormalization). These works, including ours, employ very different interactions and various particle densities. Still, the values of $\xi$ agree remarkably well.
In Figure \[beke\_kf\_allcdbs\] we contrast the data from CD-Bonn-$\infty$ with those from the original CD-Bonn and the other tuned potentials. Even though the $^1S_0$ scattering length of the original CD-Bonn is already fairly large ($a_s=-18.97fm$), the equation of state, as predicted from the ratio $E_0/E_0^{free}$, still differs significantly from the unitary limit. As seen in our data with the CD-Bonn-$\infty$ potential, at the unitary limit the ratio $E_0/E_0^{free}=0.44$ is practically independent of the underlying neutron density $n$.

Comparison with G-matrix results
--------------------------------

As discussed in section II, our ring-diagram calculations are based on a model-space framework. A model space is defined by the momenta $\{k\leq\Lambda\}$, where $\Lambda$ is the decimation scale. The space with $k>\Lambda$ is integrated out, resulting in a model-space effective interaction $V_{eff}$. We have used so far the energy-independent $V_{low-k}$ for $V_{eff}$. Alternatively, one can also use the energy-dependent $G^M$-matrix (of section II) as $V_{eff}$. These two approaches are formally equivalent, and we have carried out calculations to check this equivalence. We have repeated the ring-diagram summation with the energy-independent $V_{low-k}$ replaced by the energy-dependent model-space Brueckner $G^M$-matrix, carrying out a fully self-consistent computation in summing up the $pphh$ ring diagrams. The exact procedures of Ref.[@song87] are followed (section II). Ring diagrams within a model space up to a cut-off momentum $\Lambda$ are summed to all orders. We found that the ground-state energy is rather insensitive to the choice of $\Lambda$. See Figure \[beke\_kf\_cdbinf\] for the data of CD-Bonn-$\infty$ and CD-Bonn(-18.97), obtained with $\Lambda=2.3fm^{-1}$ and $2.25fm^{-1}$, respectively. As illustrated, the two methods, namely ring-diagram summation with $V_{low-k}$ and that with the $G^M$-matrix, are fully consistent.
This is a remarkable and reassuring result, as their calculational procedures are vastly different. For the $G^M$ case, the s.p. spectrum, the RPA amplitudes $Y$ and energies $\omega^-_m$ are all calculated self-consistently, while for the $V_{low-k}$ case no such self-consistent procedures are needed. Clearly the $V_{low-k}$ ring-diagram method is more desirable.

Schematic effective interaction at unitary limit
------------------------------------------------

At the unitary limit, the simple equation of state $E_0=\xi E_0^{free}$ of neutron matter suggests a rather counter-intuitive nature of the underlying system: strongly interacting fermions can essentially be described by a non-interacting picture with an effective mass. This unexpected ‘simplicity’ can best be captured by a schematic interaction. To illustrate this, let us consider neutron matter confined in a closed Fermi sea $|\Phi_0(k_F)\rangle$; in other words, we consider neutron matter in a one-dimensional model space. We denote the effective interaction for this model space as $V_{FS}$. Then the potential energy per particle is $$\begin{aligned} \frac{PE}{A}&=& \langle\Phi_0(k_F)|V_{FS}|\Phi_0(k_F)\rangle/A \nonumber \\ &=&\frac{8}{\pi} \int_0^{k_F}\left(1-\frac{3k}{2k_F}+\frac{k^3}{2k_F^3}\right) \langle k|V_{FS}|k\rangle k^2 \mathrm{d}k\end{aligned}$$ where $k$ is the relative momentum. Suppose we take $V_{FS}$ as a contact effective interaction $$V_{FS}=\frac{1}{\frac{S}{a_s}-\frac{2}{\pi}k_F}$$ ($\hbar=m=1$), where $S$ is a positive parameter with $S\ll |a_s|$. When $S$=1 and $k_F$ is replaced by $\Lambda$, $V_{FS}$ is the same as the effective interaction of the pion-less effective field theory [@bogner03; @schafer05]. Substituting the above into Eq.(19) gives $$\xi=1+\frac{5}{9}\frac{1}{\frac{\pi}{2}\frac{S}{a_sk_F}-1}.$$ At the unitary limit (infinite $a_s$), the above gives $\xi$=4/9, independent of $k_F$, which is practically the same as the result for $\xi$(-12070) of Fig. 4.
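The algebra leading from the $PE/A$ integral and the contact interaction above to the closed form for $\xi$ is easy to verify numerically. The sketch below (with $\hbar=m=1$) evaluates the integral for a constant $\langle k|V_{FS}|k\rangle$ by a midpoint rule and compares it with the closed-form $\xi$:

```python
import math

def V_FS(a_s, kF, S):
    # Contact effective interaction, hbar = m = 1.
    return 1.0 / (S / a_s - (2.0 / math.pi) * kF)

def xi_closed(a_s, kF, S):
    # Closed form: xi = 1 + (5/9) / ((pi/2) S/(a_s kF) - 1).
    return 1.0 + (5.0 / 9.0) / ((math.pi / 2.0) * S / (a_s * kF) - 1.0)

def xi_integrated(a_s, kF, S, n=20000):
    # PE/A with a constant matrix element, by midpoint rule, divided by
    # the free-gas energy E0_free/A = 0.3 kF^2.
    V, h, pe = V_FS(a_s, kF, S), kF / n, 0.0
    for i in range(n):
        k = (i + 0.5) * kF / n
        pe += (1.0 - 1.5 * k / kF + 0.5 * (k / kF) ** 3) * V * k * k * h
    pe *= 8.0 / math.pi
    return 1.0 + pe / (0.3 * kF ** 2)

print(xi_closed(1e12, 1.0, 1.0))       # unitary limit: ~4/9 = 0.4444...
print(xi_closed(-18.97, 1.0, 1.25))    # ~0.50, cf. the values quoted below
```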
The above also gives $\xi$ for finite $a_s$. At the unitary limit, we expect $V_{FS}$ to be unique. For finite $a_s$ (away from the unitary limit), it is not expected to be unique, and the parameter $S$ is expected to depend on the underlying potential. As shown in Fig. 4, we have calculated $\xi$ using the CD-Bonn potentials of finite scattering lengths. These results can also be qualitatively described by the above equation. For instance, for $S=1.25$ and $k_F=1.0$, the above equation gives $\xi$= 0.54, 0.50 and 0.39, respectively, for $a_s$=$-9.87fm$, $-18.97fm$ and $+21.01fm$. In short, certain main features of our results obtained from ring-diagram calculations with the CD-Bonn potentials can be qualitatively reproduced by the above simple contact effective interaction.

Summary
=======

In conclusion, we have carried out a detailed study of neutron matter at and close to the unitary limit with a low-momentum ring-diagram approach. By slightly tuning the realistic CD-Bonn potential, we have obtained $^1S_0$ neutron potentials of specific scattering lengths, in particular the CD-Bonn-$\infty$ one with $a_s$ of $-12070 fm$. By integrating out their momentum components beyond a decimation scale $\Lambda$, we obtain renormalized low-momentum interactions $V_{low-k}$ of the same specific scattering lengths. The ground-state energy $E_0$ of neutron matter is then calculated by summing up the $pphh$ ring diagrams to all orders within the model space $\{k<\Lambda\}$. A fixed-point criterion is used to determine the decimation scale $\Lambda$. We have carried out ring-diagram calculations using two types of renormalized interactions, the energy-independent $V_{low-k}$ and the energy-dependent $G$-matrix, and the results given by the two are nearly identical. The $V_{low-k}$ ring-diagram method has a simpler formalism and is also more suitable for numerical calculation.
For the CD-Bonn-$\infty$ potential, the ratio $E_0/E_0^{free}$ is found to be very near a universal constant of 0.44 over the neutron density range $(1.73-11.40)\times10^{-2}fm^{-3}$. Our result agrees well with recent experimental measurements and Monte Carlo computations on cold Fermi gases at the unitary limit.

We thank G.E. Brown, E. Shuryak, T. Bergmann and A. Schwenk for many helpful discussions. This work is supported in part by the U.S. Department of Energy under grant DE-FG02-88ER40388, and by the U.S. National Science Foundation under Grant PHY-0099444.

R. F. Bishop, Int. J. Mod. Phys. B [**15**]{}, iii (2001), ‘‘Many-Body Challenge Problem’’ by G. F. Bertsch.
C. A. Regal and D. S. Jin, Phys. Rev. Lett. [**90**]{}, 230404 (2003); S. Jochim, M. Bartenstein, G. Hendl, J. Hecker Denschlag, R. Grimm, A. Mosk, M. Weidemüller, Phys. Rev. Lett. [**89**]{}, 273202 (2002); C. H. Schunck, M. W. Zwierlein, C. A. Stan, S. M. F. Raupach, W. Ketterle, A. Simoni, E. Tiesinga, C. J. Williams, and P. S. Julienne, Phys. Rev. A [**71**]{}, 045601 (2005).
S. Stringari, Europhys. Lett. [**65**]{}, 749 (2004).
A. Bulgac and G. F. Bertsch, Phys. Rev. Lett. [**94**]{}, 070401 (2005).
H. Heiselberg, Phys. Rev. Lett. [**93**]{}, 040402 (2004).
J. Kinast, S.L. Hemmer, M.E. Gehm, A. Turlapov, and J.E. Thomas, Phys. Rev. Lett. [**92**]{}, 150402 (2004).
J. Kinast, A. Turlapov, and J. E. Thomas, Phys. Rev. A [**70**]{}, 051401(R) (2004).
A. Altmeyer, S. Riedl, C. Kohstall, M. J. Wright, R. Geursen, M. Bartenstein, C. Chin, J. Hecker Denschlag, and R. Grimm, Phys. Rev. Lett. [**98**]{}, 040401 (2007).
M.J. Wright, S. Riedl, A. Altmeyer, C. Kohstall, E.R. Sánchez Guajardo, J. Hecker Denschlag, and R. Grimm, Phys. Rev. Lett. [**99**]{}, 150403 (2007).
A. Bulgac, Phys. Rev. Lett. [**95**]{}, 140403 (2005).
A. Bulgac, Joaquín E. Drut, and Piotr Magierski, Phys. Rev. Lett. [**96**]{}, 090404 (2006).
A. Bulgac, Joaquín E. Drut, and Piotr Magierski, Phys. Rev. Lett. [**99**]{}, 120401 (2007).
J. E. Thomas, J. Kinast, and A. Turlapov, Phys. Rev. Lett. [**95**]{}, 120402 (2005).
G. A. Baker, Jr., Phys. Rev. C [**60**]{}, 054311 (1999).
H. Heiselberg, Phys. Rev. A [**63**]{}, 043606 (2001).
G. M. Bruun, Phys. Rev. A [**70**]{}, 053602 (2004).
A. Perali, P. Pieri, and G. C. Strinati, Phys. Rev. Lett. [**93**]{}, 100404 (2004).
Y. Nishida and D. T. Son, Phys. Rev. Lett. [**97**]{}, 050403 (2006).
R. Haussmann, W. Rantner, S. Cerrito, and W. Zwerger, Phys. Rev. A [**75**]{}, 023610 (2007).
J.-W. Chen and E. Nakano, Phys. Rev. A [**75**]{}, 043620 (2007).
J. Carlson, S.-Y. Chang, V. R. Pandharipande, and K. E. Schmidt, Phys. Rev. Lett. [**91**]{}, 050401 (2003).
G. E. Astrakharchik, J. Boronat, J. Casulleras, and S. Giorgini, Phys. Rev. Lett. [**93**]{}, 200404 (2004).
T. Bourdel, L. Khaykovich, J. Cubizolles, J. Zhang, F. Chevy, M. Teichmann, L. Tarruell, S. J. J. M. F. Kokkelmans, and C. Salomon, Phys. Rev. Lett. [**93**]{}, 050401 (2004).
J. Kinast, A. Turlapov, J. E. Thomas, Q. Chen, J. Stajic, and K. Levin, Science [**307**]{}, 1296 (2005).
G. B. Partridge, W. Li, R. I. Kamar, Y.-A. Liao, and R. G. Hulet, Science [**311**]{}, 503 (2006).
J. T. Stewart, J. P. Gaebler, C. A. Regal, and D. S. Jin, Phys. Rev. Lett. [**97**]{}, 220406 (2006).
A. Schwenk and C. J. Pethick, Phys. Rev. Lett. [**95**]{}, 160401 (2005).
D. Lee and T. Schäfer, Phys. Rev. C [**73**]{}, 015202 (2006).
R. Machleidt, Phys. Rev. C [**63**]{}, 024001 (2001).
S.K. Bogner, T.T.S. Kuo and L. Coraggio, Nucl. Phys. [**A684**]{}, 432 (2001).
S.K. Bogner, T.T.S. Kuo, L. Coraggio, A. Covello and N. Itaco, Phys. Rev. C [**65**]{}, 051301(R) (2002).
L. Coraggio, A. Covello, A. Gargano, N. Itako, T.T.S. Kuo, D.R. Entem and R. Machleidt, Phys. Rev. C [**66**]{}, 021303(R) (2002).
A. Schwenk, G.E. Brown and B. Friman, Nucl. Phys. [**A703**]{}, 745 (2002).
S.K. Bogner, T.T.S. Kuo and A. Schwenk, Phys. Rep. [**386**]{}, 1 (2003).
T. Schäfer, C.-W. Kao, S.R. Cotanch, Nucl. Phys. [**A762**]{}, 82 (2005).
J.D. Holt, T.T.S. Kuo and G.E. Brown, Phys. Rev. C [**69**]{}, 034329 (2004).
H.Q. Song, S.D. Yang and T.T.S. Kuo, Nucl. Phys. [**A462**]{}, 491 (1987).
H. A. Bethe, Annu. Rev. Nucl. Sci. [**21**]{}, 93 (1971).
J.W. Holt and G.E. Brown, p. 239 in Hans Bethe and His Physics (World Scientific, July 2006, edited by G.E. Brown and C.-H. Lee).
R. B. Wiringa, V.G.J. Stoks and R. Schiavilla, Phys. Rev. C [**51**]{}, 38 (1995).
V.G.J. Stoks, R.A.M. Klomp, C.P.F. Terheggen and J.J. de Swart, Phys. Rev. C [**49**]{}, 2950 (1994).
D.R. Entem, R. Machleidt, Phys. Rev. C [**68**]{}, 041001 (2003).
K. Suzuki and S. Y. Lee, Prog. Theor. Phys. [**64**]{}, 2091 (1980).
F. Andreozzi, Phys. Rev. C [**54**]{}, 684 (1996).
G.E. Brown, M. Rho, Phys. Rev. Lett. [**66**]{}, 2720 (1991).
G.E. Brown, M. Rho, Phys. Rept. [**396**]{}, 1 (2004).
R. Rapp, R. Machleidt, J.W. Durso and G.E. Brown, Phys. Rev. Lett. [**82**]{}, 1827 (1999).
--- abstract: 'We carefully consider the interplay between ferromagnetism and the Kondo screening effect in the conventional Kondo lattice systems at finite temperatures. Within an effective mean-field theory for small conduction electron densities, a complete phase diagram has been determined. In the ferromagnetic ordered phase, there is a characteristic temperature scale to indicate the presence of the Kondo screening effect. We further find two distinct ferromagnetic long-range ordered phases coexisting with the Kondo screening effect: spin fully polarized and partially polarized states. A continuous phase transition exists to separate the partially polarized ferromagnetic ordered phase from the paramagnetic heavy Fermi liquid phase. These results may be used to explain the weak ferromagnetism observed recently in the Kondo lattice materials.' author: - 'Yu Liu$^{1}$, Guang-Ming Zhang$^{1}$, and Lu Yu$^{2}$' title: Weak ferromagnetism with the Kondo screening effect in the Kondo lattice systems --- The most important issue in the study of heavy fermion materials is the interplay between the Kondo screening and the magnetic interactions among local magnetic moments mediated by the conduction electrons.[@Stewart-2001; @Lohneysen-2007; @Steglich-2010] The former effect favors the formation of the Kondo singlet state in the strong Kondo coupling limit, while the latter interactions tend to stabilize a magnetically long-range ordered state in the weak Kondo coupling limit. In between these two distinct phases, there exists a magnetic phase transition.
Although such a phase transition was suggested by Doniach many years ago,[@Doniach1977; @Lacroix1979] the complete finite temperature phase diagram for the Kondo lattice systems has not been derived from a microscopic theory.[@Q-Si] At half-filling of the conduction electrons, the antiferromagnetic long-range order dominates over the local magnetic moments, which can be partially screened by the conduction electrons in the intermediate Kondo coupling regime.[@Zhang2000a; @Assaad2001; @Ogata2007; @Assaad-2008] Very recently, close to the magnetic phase transition, weak ferromagnetism below the Kondo temperature has been discovered in the Kondo lattice materials UCu$_{5-x}$Pd$_{x}$ (Ref.), URh$_{1-x}$Ru$_{x}$Ge (Ref.), YbNi$_{4}$P$_{2}$ (Ref.), YbCu$_{2}$Si$_{2}$ (Ref.), and Yb(Rh$_{0.73}$Co$_{0.27}$)$_{2}$Si$_{2}$ (Ref.). So an interesting question arises as to whether the ferromagnetic long-range order can coexist with the Kondo screening effect. To account for the ferromagnetism within the Kondo lattice model, one can assume that the number of conduction electrons per local moment, $n_{c}$, is far away from half-filling, where the ferromagnetic correlations dominate in the small Kondo coupling regime.[@IK-1991; @Sigrist-1992; @Li-1996; @Si] Similar to the interplay between the antiferromagnetic correlations and the Kondo screening effect argued by Doniach,[@Doniach1977] a schematic finite temperature phase diagram can be conjectured for the interplay between the ferromagnetic correlations and the Kondo screening effect. In Fig. 1, the Curie temperature is plotted as a function of the Kondo coupling. For small Kondo couplings, the ferromagnetic ordering (Curie) temperature is larger than the single-impurity Kondo temperature. When the Kondo coupling is strong enough, the Curie temperature is suppressed completely.
However, there is an important issue as to whether there should be a characteristic temperature scale inside the ferromagnetic ordered phase to signal the presence of the Kondo screening effect. If so, there may exist two distinct ferromagnetic ordered phases: a pure ferromagnetic phase with a small Fermi surface consisting of conduction electrons only, and a ferromagnetic phase with an enlarged Fermi surface including both conduction electrons and local magnetic moments, coexisting with the Kondo screening. ![(Color online) The schematic phase diagram expected from the interplay between ferromagnetic correlations and the Kondo screening effect. $T_{C}^{0}$ denotes the Curie temperature in the absence of the Kondo effect, and $T_{K}^{0}$ represents the Kondo temperature without the ferromagnetic correlations.](pd.eps) In our previous paper,[@Li-Zhang-Yu] we have carefully studied the possible ground-state phases within an effective mean-field theory. In particular, for $0.16<n_{c}<0.82$ and close to the magnetic phase transition, the local moments can be only partially screened by the conduction electrons, and the remaining uncompensated parts develop the ferromagnetic long-range order. Depending on the Kondo coupling strength, the resulting ground state is either a spin fully polarized or a partially polarized ferromagnetic phase, according to the quasiparticles around the Fermi energy. The existence of the spin fully polarized coexistent Kondo ferromagnetic phase has been confirmed by recent dynamical mean-field calculations in infinite dimensions and density-matrix renormalization group calculations in one dimension, where such a state is referred to as the spin-selective Kondo insulator.[@Peters-2012] In the present paper, we will derive a finite-temperature phase diagram of the Kondo lattice model, similar to Fig. 1, for small conduction electron densities.
Below the Curie temperature, we find for the first time a characteristic temperature scale that signals the Kondo screening effect. Moreover, there exist a spin fully polarized phase and a partially polarized ferromagnetic long-range ordered phase coexisting with the Kondo screening effect. The former phase spans a large area of the phase diagram, while the latter phase occupies only a very narrow region close to the phase boundary of the paramagnetic heavy Fermi liquid phase. Furthermore, a second-order phase transition occurs from the spin partially polarized ferromagnetic ordered state to the paramagnetic heavy Fermi liquid state, and the transition line becomes very steep close to the quantum critical point. Our results may be used to explain the weak ferromagnetism and quantum critical behavior observed in YbNi$_{4}$P$_{2}$.[@Krellner-2011] The Hamiltonian of the Kondo lattice systems is defined by $$\mathcal{H}=\sum_{\mathbf{k},{\sigma }}\epsilon _{\mathbf{k}}c_{\mathbf{k}\sigma }^{\dagger }c_{\mathbf{k}\sigma }+J_{K}\sum_{i}\mathbf{\sigma }_{i}\cdot \mathbf{S}_{i},$$ where $\epsilon _{\mathbf{k}}$ is the dispersion of the conduction electrons, $\mathbf{\sigma }_{i}=\frac{1}{2}\sum_{{\alpha }\beta }c_{i\alpha }^{\dagger }\mathbf{\tau }_{\alpha \beta }c_{i\beta }$ is the spin density operator of the conduction electrons, $\mathbf{\tau }$ is the vector of Pauli matrices, and the Kondo coupling strength $J_{K}>0$. When the localized spins are denoted by $\mathbf{S}_{i}=\frac{1}{2}\sum_{{\alpha }\beta }f_{i\alpha }^{\dagger }\mathbf{\tau }_{\alpha \beta }f_{i\beta }$ in the pseudo-fermion representation, the projection onto the physical subspace has to be implemented by a local constraint $\sum_{\sigma }f_{i\sigma }^{\dagger }f_{i\sigma }=1$.
It is straightforward to decompose the Kondo spin exchange into longitudinal and transversal parts $$\mathbf{\sigma }_{i}\cdot \mathbf{S}_{i}=\sigma _{i}^{z}S_{i}^{z}-\frac{1}{4}[(c_{i\uparrow }^{\dagger }f_{i\uparrow }+f_{i\downarrow }^{\dagger }c_{i\downarrow })^{2}+(c_{i\downarrow }^{\dagger }f_{i\downarrow }+f_{i\uparrow }^{\dagger }c_{i\uparrow })^{2}],$$ where the longitudinal part describes the polarization of the conduction electrons, giving rise to the usual RKKY interaction among the local moments, while the transverse part represents the spin-flip scattering of the conduction electrons by the local moments, yielding the local Kondo screening effect.[@Lacroix1979; @Zhang2000a] The latter effect has been investigated by various approaches, in particular those based on a $1/N$ expansion[@read; @coleman; @burdin] ($N$ is the degeneracy of the localized spin). However, the competition between these two interactions determines the possible ground states of the Kondo lattice systems. Let us first review the effective mean-field theory for the ground state used in our previous study.[@Li-Zhang-Yu] We introduce two ferromagnetic order parameters, $m_{f}=\left\langle S_{i}^{z}\right\rangle $ and $m_{c}=\langle \sigma _{i}^{z}\rangle $, to decouple the longitudinal exchange term, while a hybridization order parameter $V=\langle c_{i\uparrow }^{\dagger }f_{i\uparrow }+f_{i\downarrow }^{\dagger }c_{i\downarrow }\rangle $ is introduced to decouple the transverse exchange term. We also introduce a Lagrange multiplier $\lambda $ to enforce the local constraint, which becomes the chemical potential in the mean-field approximation.
Then the mean-field Hamiltonian in momentum space can be written in the compact form $$\mathcal{H}_{MF}=\sum_{\mathbf{k},{\sigma }}\left( c_{\mathbf{k}\sigma }^{\dagger },f_{\mathbf{k}\sigma }^{\dagger }\right) \left( \begin{array}{cc} \epsilon _{\mathbf{k}{\sigma }} & -\frac{J_{K}V}{2} \\ -\frac{J_{K}V}{2} & \lambda _{{\sigma }} \end{array} \right) \left( \begin{array}{c} c_{\mathbf{k}\sigma } \\ f_{\mathbf{k}\sigma } \end{array} \right) +\mathcal{N}\varepsilon _{0},$$ where $\epsilon _{\mathbf{k}{\sigma }}=\epsilon _{\mathbf{k}}+\frac{J_{K}m_{f}}{2}{\sigma }$, $\lambda _{{\sigma }}=\lambda +\frac{J_{K}m_{c}}{2}{\sigma }$, $\varepsilon _{0}=\frac{J_{K}V^{2}}{2}-J_{K}m_{c}m_{f}-\lambda $, ${\sigma =\pm 1}$ denotes the up and down spin orientations, and $\mathcal{N}$ is the total number of lattice sites. The quasiparticle excitation spectra are thus obtained as $$E_{\mathbf{k}{\sigma }}^{\pm }=\frac{1}{2}\left[ \epsilon _{\mathbf{k}{\sigma }}+\lambda _{{\sigma }}\pm \sqrt{\left( \epsilon _{\mathbf{k}{\sigma }}-\lambda _{{\sigma }}\right) ^{2}+(J_{K}V)^{2}}\right] ,$$ where four quasiparticle bands with spin splitting appear. Using the equation-of-motion method, the single-particle Green functions can be derived, and the corresponding densities of states can be calculated and expressed as $$\begin{gathered} \rho _{c}^{\sigma }(\omega )=\rho _{c}^{0}[\theta (\omega -\omega _{1\sigma })\theta (\omega _{2\sigma }-\omega )+\theta (\omega -\omega _{3\sigma })\theta (\omega _{4\sigma }-\omega )], \notag \\ \rho _{f}^{\sigma }(\omega )=\left( \frac{J_{K}V/2}{\omega -\lambda _{{\sigma }}}\right) ^{2}\rho _{c}^{\sigma }(\omega ),\end{gathered}$$ where $\theta (\omega )$ is the step function and a constant density of states of the conduction electrons has been assumed, $\rho _{c}^{0}=\frac{1}{2D}$, with $D$ the half-width of the conduction electron band.
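As a quick sanity check, the closed-form spectra $E_{\mathbf{k}\sigma }^{\pm }$ can be compared against direct diagonalization of the $2\times2$ Bloch matrix above; the parameter values below are purely illustrative (not taken from the paper):

```python
import numpy as np

# Illustrative (assumed) parameters, in units of the half-bandwidth D = 1.
JK, V, mf, mc, lam, eps_k = 1.5, 0.4, 0.3, -0.05, 0.2, -0.35

for sigma in (+1, -1):
    eps_s = eps_k + 0.5 * JK * mf * sigma        # epsilon_{k sigma}
    lam_s = lam + 0.5 * JK * mc * sigma          # lambda_sigma
    H = np.array([[eps_s, -0.5 * JK * V],
                  [-0.5 * JK * V, lam_s]])
    rad = np.sqrt((eps_s - lam_s) ** 2 + (JK * V) ** 2)
    E_pm = np.sort([0.5 * (eps_s + lam_s - rad),
                    0.5 * (eps_s + lam_s + rad)])
    # Closed-form E^{+/-} must coincide with the matrix eigenvalues.
    assert np.allclose(np.linalg.eigvalsh(H), E_pm)
print("spectra check passed")
```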
The four quasiparticle band edges can be expressed as $$\begin{aligned} \omega _{1\sigma }& =\frac{1}{2}\left[ \epsilon _{\sigma }-D+\lambda _{{\sigma }}-\sqrt{(\epsilon _{\sigma }-D-\lambda _{{\sigma }})^{2}+(J_{K}V)^{2}}\right] , \\ \omega _{2\sigma }& =\frac{1}{2}\left[ \epsilon _{\sigma }+D+\lambda _{{\sigma }}-\sqrt{(\epsilon _{\sigma }+D-\lambda _{{\sigma }})^{2}+(J_{K}V)^{2}}\right] , \\ \omega _{3\sigma }& =\frac{1}{2}\left[ \epsilon _{\sigma }-D+\lambda _{{\sigma }}+\sqrt{(\epsilon _{\sigma }-D-\lambda _{{\sigma }})^{2}+(J_{K}V)^{2}}\right] , \\ \omega _{4\sigma }& =\frac{1}{2}\left[ \epsilon _{\sigma }+D+\lambda _{{\sigma }}+\sqrt{(\epsilon _{\sigma }+D-\lambda _{{\sigma }})^{2}+(J_{K}V)^{2}}\right] ,\end{aligned}$$where $\epsilon _{\sigma }=\frac{J_{K}m_{f}}{2}{\sigma }$ and $\omega _{1\sigma }<\omega _{2\sigma }<\omega _{3\sigma }<\omega _{4\sigma }$. Then using the spectral representation of the Green functions, we derive the mean-field equations at finite temperatures as follows $$\begin{aligned} \int_{-\infty }^{+\infty }d\omega f(\omega )\left[ \rho _{c}^{+}(\omega )+\rho _{c}^{-}(\omega )\right] &=&n_{c}, \notag \\ \int_{-\infty }^{+\infty }d\omega f(\omega )\left[ \rho _{c}^{+}(\omega )-\rho _{c}^{-}(\omega )\right] &=&2m_{c}, \notag \\ \sum\limits_{\sigma }\int_{-\infty }^{+\infty }d\omega f(\omega )\frac{\rho _{c}^{\sigma }(\omega )}{\left( \lambda _{\sigma }-\omega \right) ^{2}}\left( \frac{J_{K}V}{2}\right) ^{2} &=&1, \notag \\ \sum\limits_{\sigma }\int_{-\infty }^{+\infty }d\omega f(\omega )\frac{\sigma \rho _{c}^{\sigma }(\omega )}{\left( \lambda _{\sigma }-\omega \right) ^{2}}\left( \frac{J_{K}V}{2}\right) ^{2} &=&2m_{f}, \notag \\ \sum\limits_{\sigma }\int_{-\infty }^{+\infty }d\omega f(\omega )\frac{\rho _{c}^{\sigma }(\omega )}{\left( \lambda _{\sigma }-\omega \right) }\left( \frac{J_{K}V}{2}\right) &=&V,\end{aligned}$$where $f(\omega )=1/\left[ 1+e^{(\omega -\mu )/T}\right] $ is the Fermi distribution function. 
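Since the band edges are just the eigenvalues of the $2\times 2$ mean-field matrix evaluated at the bottom ($\epsilon _{\mathbf{k}}=-D$) and top ($\epsilon _{\mathbf{k}}=+D$) of the conduction band, the closed-form expressions and the ordering $\omega _{1\sigma }<\omega _{2\sigma }<\omega _{3\sigma }<\omega _{4\sigma }$ can be checked numerically. A small sketch with illustrative parameter values (our choice, not from the text):

```python
import numpy as np

# illustrative parameters for the sketch only
JK, V, lam, m_c, m_f, D = 0.6, 0.4, 0.05, -0.1, 0.2, 1.0

for sigma in (+1, -1):
    eps_s = 0.5 * JK * m_f * sigma
    lam_s = lam + 0.5 * JK * m_c * sigma
    r = lambda e: np.sqrt((e - lam_s) ** 2 + (JK * V) ** 2)

    # closed-form band edges quoted in the text
    w1 = 0.5 * (eps_s - D + lam_s - r(eps_s - D))
    w2 = 0.5 * (eps_s + D + lam_s - r(eps_s + D))
    w3 = 0.5 * (eps_s - D + lam_s + r(eps_s - D))
    w4 = 0.5 * (eps_s + D + lam_s + r(eps_s + D))
    assert w1 < w2 < w3 < w4  # ordering of the four edges

    # the edges coincide with the extrema of the two hybridized bands
    e = np.linspace(eps_s - D, eps_s + D, 2001)
    E_lo = 0.5 * (e + lam_s - r(e))
    E_hi = 0.5 * (e + lam_s + r(e))
    assert np.isclose(E_lo.min(), w1) and np.isclose(E_lo.max(), w2)
    assert np.isclose(E_hi.min(), w3) and np.isclose(E_hi.max(), w4)

print("band edges consistent with the quasiparticle spectra")
```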
To make the magnetic interaction between the nearest neighboring local moments ferromagnetic, we should confine the density of conduction electrons to $n_{c}<0.82$, according to the previous mean field study.[@Fazekas1991] The position of the chemical potential $\mu $ with respect to the band edges is very important. At zero temperature, there are two different situations. The corresponding schematic local density of states is displayed in Fig.2. For $\omega _{1-}<\mu <\omega _{2+}$, both the lower spin-up and spin-down quasiparticle bands are partially occupied, corresponding to the spin partially polarized ferromagnetic state. However, for $\omega _{2+}<\mu <\omega _{2-}$, the lower spin-up quasiparticle band is completely occupied, while the lower spin-down quasiparticle band is only partially occupied, corresponding to the spin fully polarized ferromagnetic state.[@Beach-Assaad] An energy gap $\Delta _{\uparrow }$ exists in the spin-up quasiparticle band, and there is a plateau in the total magnetization: $m_{c}+m_{f}=(1-n_{c})/2$. The ground-state phase diagram has been obtained in our previous study.[@Li-Zhang-Yu] When $n_{c}<0.16$, the spin-polarized ferromagnetic phase is the ground state in the large Kondo coupling region. For $0.16<n_{c}<0.82$, the ground state is given by the spin partially polarized ferromagnetic phase in the weak Kondo coupling limit, while in the intermediate Kondo coupling regime, both spin fully polarized and partially polarized ferromagnetic ordered phases with a finite value of the hybridization parameter $V$ may appear, depending on the value of the Kondo coupling strength. For a strong Kondo coupling, the pure Kondo paramagnetic phase is the ground state. There is a continuous transition from the spin partially polarized ferromagnetic ordered phase to the Kondo paramagnetic phase. Now we calculate the finite temperature phase diagram. 
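Before turning to finite temperatures, we note that the plateau value quoted above follows from a simple counting argument (our sketch, not spelled out in the text): when the lower spin-up quasiparticle band is completely filled it accommodates exactly one electron per site, so $n_{\uparrow }=1$, while the total density per site is $n_{c}+1$, giving $n_{\downarrow }=n_{c}$; hence $$m_{c}+m_{f}=\frac{1}{2}\left( n_{\uparrow }-n_{\downarrow }\right) =\frac{1}{2}(1-n_{c}).$$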
First of all, if the temperature is high enough, all order parameters must disappear, so the conduction electrons and local moments are decoupled. As the temperature is decreased down to the Curie temperature of the pure ferromagnetic phase $T_{C}^{0}$, both $m_{c}$ and $m_{f}$ approach zero, but the ratio $m_{c}/m_{f}$ remains finite. The self-consistent equations give rise to $$\lambda =\mu ,\quad m_{c}\approx -\frac{J_{K}}{4D}m_{f},\quad m_{f}=-\frac{J_{K}}{8T_{C}^{0}}m_{c}, \label{mc-mf-1}$$and, combining the last two relations ($m_{f}=\frac{J_{K}^{2}}{32DT_{C}^{0}}m_{f}$), the Curie temperature $T_{C}^{0}$ can be estimated as $$T_{C}^{0}=\frac{J_{K}^{2}}{32D},$$which is independent of the density of conduction electrons, similar to the characteristic energy scale given by the RKKY interaction. On the other hand, if $J_{K}$ is large enough, the system must be in the Kondo paramagnetic phase. As the Kondo coupling decreases, the Kondo singlets are destabilized. When $T\rightarrow T_{K}^{0}$, the hybridization vanishes, and the self-consistent equations can be reduced to $$\begin{aligned} \frac{1}{D}\int\limits_{-D}^{D}\frac{d\omega }{e^{(\omega -\mu )/T_{K}^{0}}+1} &=&n_{c}, \notag \\ \frac{J_{K}}{2D}\int\limits_{-D}^{D}d\omega \frac{\tanh (\frac{\omega -\mu }{T_{K}^{0}})}{\omega -\mu } &=&1.\end{aligned}$$By numerically solving these two equations, the Kondo temperature in the paramagnetic phase $T_{K}^{0}$ can be obtained, which is the same characteristic energy scale as that derived from the $1/N$ expansion.[@read; @coleman; @burdin] After obtaining $T_{C}^{0}$ and $T_{K}^{0}$, we expect that the pure ferromagnetic phase exists for $T_{C}^{0}>T_{K}^{0}$ in the small Kondo coupling limit, while for $T_{K}^{0}>T_{C}^{0}$ the Kondo screening is present, and the competition between the ferromagnetic correlations and the Kondo screening effect should be taken into account more carefully. In the presence of the Kondo screening, the corresponding Curie temperature $T_{C}$ can still be defined. 
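The two reduced equations can be solved numerically for $(\mu, T_{K}^{0})$. Below is a minimal sketch for the flat band, with illustrative values $n_{c}=0.2$ and $J_{K}/D=0.8$ (our choices, not parameters or results from the text): the filling condition is bisected for $\mu$ at fixed $T$, and the gap equation is then bisected for $T$.

```python
import numpy as np

D, n_c, JK = 1.0, 0.2, 0.8            # illustrative parameters only
w = np.linspace(-D, D, 4001)[1:-1]    # integration grid inside the band
dw = w[1] - w[0]

def filling(mu, T):
    """(1/2D) * integral over the band of the Fermi function."""
    x = np.clip((w - mu) / T, -60.0, 60.0)
    return np.sum(1.0 / (np.exp(x) + 1.0)) * dw / (2.0 * D)

def solve_mu(T):
    """Bisect the filling condition for the chemical potential."""
    lo, hi = -3.0 * D, 3.0 * D
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if filling(mid, T) < n_c else (lo, mid)
    return 0.5 * (lo + hi)

def gap_lhs(T):
    """Left-hand side of the reduced gap equation at mu(T)."""
    mu = solve_mu(T)
    t = (w - mu) / T
    # tanh(t)/(w - mu) has a removable singularity at w = mu
    integrand = np.where(np.abs(t) < 1e-9, 1.0 / T, np.tanh(t) / (w - mu))
    return JK / (2.0 * D) * np.sum(integrand) * dw

lo, hi = 1e-3, 5.0 * D                # bracket for T_K^0
for _ in range(60):                   # gap_lhs decreases with T
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_lhs(mid) > 1.0 else (lo, mid)
TK0 = 0.5 * (lo + hi)
print(f"T_K^0/D = {TK0:.3f}")
```

The same nested-bisection structure extends to the full five mean-field equations, at the cost of iterating over more unknowns.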
As $T\rightarrow T_{C}$, the magnetic moments $m_{c}$ and $m_{f}$ approach zero, but their ratio remains finite, $m_{c}/m_{f}\neq 0$. The self-consistent equations Eq.(5) can be solved numerically, leading to the Curie temperature $T_{C}$ and the mean-field parameters $\mu $, $\lambda $, $V$, and $m_{c}/m_{f}$. On the other hand, when the ferromagnetism is present, we can also introduce the Kondo temperature $T_{K}$ incorporating the hybridization effect. When $V\rightarrow 0$ and $T\rightarrow T_{K}$, the numerical solution of the self-consistent equations gives rise to the Kondo temperature $T_{K}$ and the mean-field parameters $\mu $, $\lambda $, $m_{c}$, and $m_{f}$. The resulting phase diagram is shown in Fig.3 for $n_{c}=0.2$. As the Kondo coupling $J_{K}$ increases from a small value, the Curie temperature $T_{C}$ first increases up to a maximal value, and then continuously decreases to zero at $J_{K}^{c2}=1.133D$. For small values of $J_{K}/D<0.41$, the Kondo temperature $T_{K}$ vanishes. However, when $J_{K}/D>0.41$, the Kondo temperature curve consists of two parts, meeting each other precisely at the Curie temperature ($T_{K}=T_{C}$). Inside the ferromagnetic ordered phase, $T_{K}$ starts from a finite value and then decreases down to zero at $J_{K}^{c1}=0.506D$; while in the paramagnetic phase, $T_{K}$ follows the behavior of the bare Kondo temperature $T_{K}^{0}$. ![(Color online) The finite temperature phase diagram at $n_{c}=0.2$. In addition to the pure ferromagnetic ordered phase ($V=0$, $m_{c}\neq 0$ and $m_{f}\neq 0$) and the Kondo paramagnetic phase ($V\neq 0$, $m_{c}=m_{f}=0$), there are two different ferromagnetic ordered phases coexisting with the Kondo screening ($V\neq 0$, $m_{c}\neq 0$ and $m_{f}\neq 0$): the spin fully polarized phase ($\Delta _{\uparrow }\neq 0$) and the spin partially polarized phase ($\Delta _{\uparrow }=0$). 
The boundary between the pure ferromagnetic ordered phase and the coexisting ferromagnetic ordered phases actually corresponds to a crossover rather than a phase transition.](Phase-diagram-T.eps) In the coexistence region, the chemical potential $\mu $ may lie inside the energy gap of the spin-up quasiparticles (as shown in Fig.2), and we can calculate the lowest excitation energy defined by $\Delta _{\uparrow }\equiv \mu -\omega _{2+}$. In Fig.4, we show $\Delta _{\uparrow }$ as a function of $T$ with fixed Kondo coupling parameters $J_{K}/D=0.6$, $0.7$, $0.733$, $0.9$, and $1.0$, respectively. It is clearly demonstrated that the gap $\Delta _{\uparrow }$ exhibits a non-monotonic behavior as the temperature increases. Notice that $J_{K}/D=0.733$ corresponds to the critical value between the spin fully polarized phase and the partially polarized phase at zero temperature. When $\Delta _{\uparrow }\rightarrow 0$, the characteristic temperature $T_{C}^{P}$ is determined, leading to the phase boundary separating the spin fully polarized and the partially polarized ferromagnetic ordered phases. The spin fully polarized ferromagnetic ordered phase spans a large area in the coexistence region, while the spin partially polarized phase sits only in a narrow strip close to the phase boundary of the paramagnetic heavy Fermi liquid phase. Thus the partially polarized ferromagnetic ordered phase is expected to appear before the system enters the paramagnetic metallic phase. ![(Color online) The energy gap of spin-up quasiparticles $\Delta _{\uparrow }$ as a function of temperature $T$ in the coexisting phase at $n_{c}=0.2$.](spin-gap.eps) Moreover, the magnetizations of the local moments and the conduction electrons, $m_{f}$ and $m_{c}$, are calculated as a function of the Kondo coupling strength $J_{K}$ for $T=0.0025D$ and $0.0075D$, as shown in Fig.5, respectively. 
It is clear that $m_{c}$ has the opposite sign to $m_{f}$, due to the antiferromagnetic coupling between the local moments and the conduction electrons. In order to display the Kondo screening effect, we have also plotted the hybridization parameter $V$ as a function of $J_{K}$ for the same fixed temperatures. For the low temperature shown in Fig.5a, the Kondo screening effect emerges inside the ferromagnetic ordered phase, and a small drop is induced in both magnetizations $m_{f}$ and $m_{c}$. When the magnetization vanishes, the hybridization $V$ has a cusp. However, for the higher temperature shown in Fig.5b, the ferromagnetic ordering appears inside the Kondo screened region. The cusps in the hybridization curve are induced when the ferromagnetic order parameters start to emerge or vanish. ![(Color online) The ferromagnetic magnetizations and hybridization parameter as a function of the Kondo coupling $J_{K}$ at fixed temperature for $n_{c}=0.2$. (a) $T=0.0025D$, (b) $T=0.0075D$.](Order-Parameter-T.eps) ![(Color online) The magnetizations and hybridization parameter as a function of temperature for a given Kondo coupling strength at $n_{c}=0.2$. (a), (b), (c), (d), (e), and (f) correspond to $J_{K}/D=0.25$, $0.45$, $0.60$, $1.0$, $1.1$, and $1.2$, respectively. ](Order-Parameter-JK.eps) The magnetizations $m_{c}$ and $m_{f}$ and the hybridization parameter $V$ have also been calculated as a function of temperature $T$ for fixed Kondo coupling strength $J_{K}$, as shown in Fig.6. For a small value of $J_{K}/D=0.25$, as the temperature is increased, the magnetic moments $m_{c}$ and $m_{f}$, in the absence of the Kondo screening, decrease down to zero at the Curie temperature $T_{C}$ (see Fig.6a). In contrast, for a large value of $J_{K}/D=1.2$, the system is in a paramagnetic heavy Fermi liquid phase without ferromagnetic order (see Fig.6f). For $J_{K}/D=0.45$, the Kondo screening effect starts to appear in the presence of ferromagnetic ordering. 
When the ferromagnetic order disappears at $T_{C}$, the hybridization reaches a maximal value and then decreases down to zero at $T_{K}$ (see Fig.6b). For the larger values of the Kondo coupling, $J_{K}/D=0.6$, $1.0$, and $1.1$, the Kondo screening effect dominates over the whole temperature range, and the ferromagnetic ordered phase occurs only in a small region, as displayed in Fig.6c, Fig.6d, and Fig.6e, respectively. These three figures demonstrate the interplay between the Kondo screening effect and the ferromagnetic correlations in the presence of thermal fluctuations. It is important to emphasize that all the results are obtained within the effective mean field theory. When the fluctuation effects are incorporated properly beyond the mean field level, the above phase transitions related to the Kondo screening effect will be changed into a *crossover*. Since the Kondo screening order parameter, i.e., the effective hybridization, is not associated with a static long-range order, a finite $V$ does not correspond to any spontaneous symmetry breaking. Therefore, in the obtained finite temperature phase diagram Fig.3, only the Curie temperature $T_{C}$ (the solid line) represents a true phase transition. Finally, it is important to mention a new Kondo lattice system, YbNi$_{4}$P$_{2}$, recently discovered through distinct anomalies in susceptibility, specific heat, and resistivity measurements.[@Krellner-2011] Growing out of a strongly correlated Kramers doublet ground state with a Kondo temperature $T_{K}\sim 8$ K, the ferromagnetic ordering temperature is severely reduced to $T_{c}=0.17$ K, with a small magnetic moment $m_{f}\sim 0.05\mu _{B}$. Here we would like to attribute the small ordered moment and the substantially reduced Curie temperature to the presence of the Kondo screening effect, see Fig.6c, Fig.6d, and Fig.6e. The experimental results can thus be understood in terms of our effective mean field theory. 
The quantum critical behavior observed experimentally requires a quantum critical point separating the ferromagnetic ordered phase from the Kondo paramagnetic phase at zero temperature, which is also consistent with our finite temperature phase diagram. Further detailed calculations concerning the thermodynamic properties of the heavy fermion ferromagnetism are left for future research. In summary, within an effective mean-field theory for small conduction electron densities $0.16<n_c<0.82$, we have derived the finite temperature phase diagram. Inside the ferromagnetic ordered phase, a characteristic temperature scale signaling the Kondo screening effect has been found for the first time. In addition to the pure ferromagnetic phase, there are two distinct ferromagnetic long-range ordered phases coexisting with the Kondo screening effect: a spin fully polarized phase and a partially polarized phase. A second-order phase transition and a quantum critical point have been found to separate the spin partially polarized ferromagnetic ordered phase and the paramagnetic heavy Fermi liquid phase. To some extent, our mean field theory has captured the main physics of the Kondo lattice systems, which provides an alternative interpretation of the weak ferromagnetism observed experimentally. The authors acknowledge the support from NSF-China. [*Note added*]{}. The ferromagnetic quantum critical point in the heavy fermion metal YbNi$_4$(P$_{0.92}$As$_{0.08}$)$_2$ has been further confirmed [@Steppke] by precision low temperature measurements: the Grüneisen ratio diverges upon cooling to $T=0$ K. [99]{} G. R. Stewart, Rev. Mod. Phys. **73**, 797 (2001). H. v. Lohneysen, A. Rosch, M. Vojta, and P. Wölfle, Rev. Mod. Phys. **79**, 1015 (2007). Q. Si and F. Steglich, Science **329**, 1161 (2010). S. Doniach, Physica B & C **91**, 231 (1977). C. Lacroix and M. Cyrot, Phys. Rev. B **20**, 1969 (1979). Q. Si, Physica B **378**, 23 (2006); Phys. 
Status Solidi B **247**, 476 (2010). G. M. Zhang and L. Yu, Phys. Rev. B **62**, 76 (2000). S. Capponi and F. F. Assaad, Phys. Rev. B **63**, 155114 (2001). H. Watanabe and M. Ogata, Phys. Rev. Lett. **99**, 136401 (2007). L. C. Martin and F. F. Assaad, Phys. Rev. Lett. **101**, 066404 (2008). O. O. Bernal, D. E. MacLaughlin, H. G. Lukefahr, and B. Andraka, Phys. Rev. Lett. **75**, 2023 (1995). N. T. Huy *et al.*, Phys. Rev. B **75**, 212405 (2007). C. Krellner *et al.*, New J. Phys. **13**, 103014 (2011). A. Fernandez-Panella, D. Braithwaite, B. Salce, G. Lapertot, and J. Flouquet, Phys. Rev. B **84**, 134416 (2011). S. Lausberg *et al.*, arXiv:1210.1345. V. Y. Irkhin and M. I. Katsnelson, Z. Phys. B **82**, 77 (1991). M. Sigrist, K. Ueda, and H. Tsunetsugu, Phys. Rev. B **46**, 175 (1992). Z. Z. Li, M. Zhuang, and M. W. Xiao, J. Phys.: Condens. Matter **8**, 7941 (1996). S. J. Yamamoto and Q. Si, Proc. Nat. Acad. Sci. **107**, 15704 (2010). G. B. Li, G. M. Zhang, and L. Yu, Phys. Rev. B **81**, 094420 (2010). R. Peters, N. Kawakami, and T. Pruschke, Phys. Rev. Lett. **108**, 086402 (2012); R. Peters and N. Kawakami, Phys. Rev. B **86**, 165107 (2012). N. Read and D. N. Newns, J. Phys. C **16**, 3273 (1983). P. Coleman, Phys. Rev. B **29**, 3035 (1984). S. Burdin, A. Georges, and D. R. Grempel, Phys. Rev. Lett. **85**, 1048 (2000). P. Fazekas and E. Müller-Hartmann, Z. Phys. B **85**, 285 (1991). K. S. D. Beach and F. F. Assaad, Phys. Rev. B **77**, 205123 (2008); S. Viola Kusminskiy, K. S. D. Beach, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. B **77**, 094419 (2008). A. Steppke *et al.*, Science **339**, 933 (2013).
--- abstract: 'In this paper we investigate more characterizations and applications of $\delta$-strongly compact cardinals. We show that, for a cardinal ${\kappa}$, the following are equivalent: (1) ${\kappa}$ is $\delta$-strongly compact, (2) for every regular ${\lambda}\ge {\kappa}$ there is a $\delta$-complete uniform ultrafilter over ${\lambda}$, and (3) every product space of $\delta$-Lindelöf spaces is ${\kappa}$-Lindelöf. We also prove that in the Cohen forcing extension, the least ${\omega}_1$-strongly compact cardinal is a precise upper bound on the tightness of the products of two countably tight spaces.' address: 'Faculty of Science and Engineering, Waseda University, Okubo 3-4-1, Shinjyuku, Tokyo, 169-8555 Japan' author: - Toshimichi Usuba title: 'A note on $\delta$-strongly compact cardinals' --- Introduction ============ Bagaria and Magidor [@BM1; @BM2] introduced the notion of $\delta$-strongly compact cardinals, which is a variant of strongly compact cardinals. Let ${\kappa}$ and $\delta$ be uncountable cardinals with $\delta \le {\kappa}$. ${\kappa}$ is *$\delta$-strongly compact* if for every set $A$, every ${\kappa}$-complete filter over $A$ can be extended to a $\delta$-complete ultrafilter. $\delta$-strongly compact cardinals, especially in the case $\delta={\omega}_1$, have various characterizations and many applications; see Bagaria-Magidor [@BM1; @BM2], Bagaria-da Silva [@BS], and Usuba [@U1; @U2]. In this paper, we investigate more characterizations and applications of $\delta$-strongly compact cardinals. Ketonen [@K] characterized strongly compact cardinals by the existence of uniform ultrafilters, where a filter $F$ over a cardinal ${\lambda}$ is *uniform* if ${\left\vert {X} \right\vert}={\lambda}$ for every $X \in F$. Ketonen proved that an uncountable cardinal ${\kappa}$ is a strongly compact cardinal if, and only if, for every regular ${\lambda}\ge {\kappa}$, there exists a ${\kappa}$-complete uniform ultrafilter over ${\lambda}$. 
We prove a similar characterization for $\delta$-strongly compact cardinals. \[prop3.3\] Let ${\kappa}$ and $\delta$ be uncountable cardinals with $\delta \le {\kappa}$. Then ${\kappa}$ is $\delta$-strongly compact if, and only if, for every regular ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete uniform ultrafilter over ${\lambda}$. In [@BM2], Bagaria and Magidor characterized ${\omega}_1$-strongly compact cardinals in terms of topological spaces. Let $\mu$ be a cardinal. A topological space $X$ is *$\mu$-Lindelöf* if every open cover of $X$ has a subcover of size $<\mu$. An ${\omega}_1$-Lindelöf space is called a *Lindelöf space*. Bagaria and Magidor proved that a cardinal ${\kappa}$ is ${\omega}_1$-strongly compact if and only if every product space of Lindelöf spaces is ${\kappa}$-Lindelöf. Using Theorem \[prop3.3\], we generalize this result as follows: \[thm3\] Let $\delta \le {\kappa}$ be uncountable cardinals. Then the following are equivalent: 1. ${\kappa}$ is $\delta$-strongly compact. 2. For every family $\{X_i \mid i \in I\}$ of $\delta$-Lindelöf spaces, the product space $\prod_{i \in I} X_i$ is ${\kappa}$-Lindelöf. We turn to another topological property, the tightness. For a topological space $X$, the *tightness number* $t(X)$ of $X$ is the minimum infinite cardinal ${\kappa}$ such that whenever $A \subseteq X$ and $p \in \overline{A}$ (where $\overline{A}$ is the closure of $A$ in $X$), there is $B \subseteq A$ with ${\left\vert {B} \right\vert} \le {\kappa}$ and $p \in \overline{B}$. If $t(X)={\omega}$, $X$ is called a *countably tight* space. The product of countably tight spaces need not be countably tight: a typical example is the sequential fan $S({{\omega}_1})$. It is a Fréchet–Urysohn space, but the square of $S({{\omega}_1})$ has uncountable tightness. 
It is also known that if ${\kappa}$ is a regular uncountable cardinal and the set $\{\alpha<{\kappa}\mid {\mathord{\mathrm{cf}}}(\alpha)={\omega}\}$ has a non-reflecting stationary subset, then $t(S({\kappa})^2) ={\kappa}$ (see Eda-Gruenhage-Koszmider-Tamano-Todorčević [@EGKTT]). In particular, under $V=L$, the tightness of the product of two Fréchet–Urysohn spaces can be arbitrarily large. In view of these facts, we show that an ${\omega}_1$-strongly compact cardinal gives an upper bound on the tightness of the product of two countably tight spaces. \[thm4\] If ${\kappa}$ is ${\omega}_1$-strongly compact, then $t(X \times Y) \le {\kappa}$ for all countably tight spaces $X$ and $Y$. We also show that an ${\omega}_1$-strongly compact cardinal is a *precise* upper bound in the Cohen forcing extension. \[thm5\] Let ${\mathbb{C}}$ be the Cohen forcing notion, and let $G$ be $(V, {\mathbb{C}})$-generic. Then for every cardinal ${\kappa}$ the following are equivalent in $V[G]$: 1. ${\kappa}$ is ${\omega}_1$-strongly compact. 2. For all countably tight spaces $X$ and $Y$, we have $t(X \times Y) \le {\kappa}$. 3. For all countably tight Tychonoff spaces $X$ and $Y$, we have $t(X \times Y) \le {\kappa}$. Here we present some definitions and facts which will be used later. For an uncountable cardinal ${\kappa}$ and a set $A$, let ${\mathcal{P}}_{\kappa}A=\{x \subseteq A \mid {\left\vert {x} \right\vert}<{\kappa}\}$. A filter $F$ over ${\mathcal{P}}_{\kappa}A$ is *fine* if for every $a \in A$, we have $\{x \in {\mathcal{P}}_{\kappa}A\mid a \in x\} \in F$. For uncountable cardinals $\delta \le {\kappa}$, the following are equivalent: 1. ${\kappa}$ is $\delta$-strongly compact. 2. For every cardinal ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete fine ultrafilter over ${\mathcal{P}_\kappa \lambda}$. 3. For every set $A$ with ${\left\vert {A} \right\vert} \ge {\kappa}$, there exists a $\delta$-complete fine ultrafilter over ${\mathcal{P}}_{\kappa}A$. 4. 
For every cardinal ${\lambda}\ge {\kappa}$, there exists an elementary embedding $j: V \to M$ into some transitive model $M$ such that $\delta \le \mathrm{crit}(j) \le {\kappa}$ and there is a set $A \in M$ with ${\left\vert {A} \right\vert}^{M}<j({\kappa})$ and $j``{\lambda}\subseteq A$, where $\mathrm{crit}(j)$ denotes the critical point of $j$. If ${\kappa}$ is $\delta$-strongly compact, then there is a measurable cardinal $\le {\kappa}$. On uniform ultrafilters ======================= In this section we give a proof of Theorem \[prop3.3\]. It can be obtained by a series of arguments in Ketonen [@K] with some modifications. Suppose ${\kappa}$ is $\delta$-strongly compact for some uncountable $\delta \le {\kappa}$. Then for every regular ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete uniform ultrafilter over ${\lambda}$. Fix a regular ${\lambda}\ge {\kappa}$, and take an elementary embedding $j:V \to M$ such that $\delta \le \mathrm{crit}(j)\le {\kappa}$, and there is $A \in M$ with $j``{\lambda}\subseteq A \subseteq j({\lambda})$ and ${\left\vert {A} \right\vert}^M<j({\kappa})$. Then we have $\sup(j``{\lambda})<j({\lambda})$, since $j({\lambda})$ is regular in $M$ and ${\left\vert {A} \right\vert}^M<j({\kappa}) \le j({\lambda})$. Now define an ultrafilter $U$ over ${\lambda}$ by $X \in U \iff \sup(j``{\lambda}) \in j(X)$. It is clear that $U$ is a $\delta$-complete uniform ultrafilter over ${\lambda}$. For the converse direction, we need several definitions and lemmas. Let $U$ be an ${\omega}_1$-complete ultrafilter over some set $A$. Let $\mathrm{Ult}(V, U)$ denote the ultrapower of $V$ by $U$, and we identify the ultrapower with its transitive collapse. Let $j:V \to M \approx \mathrm{Ult}(V, U)$ be the elementary embedding induced by $U$. Let $id_A$ denote the identity map on $A$, and for a function $f$ on $A$, let $[f]_U \in M$ denote the equivalence class of $f$ modulo $U$. We know $[f]_U=j(f)([id_A]_U)$. Let $\mu$ and $\nu$ be cardinals with $\mu \le \nu$. 
An ultrafilter $U$ over some set $A$ is said to be *$(\mu, \nu)$-regular* if there is a family $\{X_\alpha \mid \alpha<\nu\}$ of measure one sets of $U$ such that for every $a \in [\nu]^\mu$, we have $\bigcap_{\alpha \in a} X_\alpha=\emptyset$. We note that if $\nu$ is regular and $U$ is $(\mu, \nu)$-regular, then ${\left\vert {X} \right\vert} \ge \nu$ for every $X \in U$. \[5.4\] Let $\mu \le \nu$ be cardinals where $\nu$ is regular. Let $U$ be an ${\omega}_1$-complete ultrafilter over some set $A$, and $j: V \to M \approx \mathrm{Ult}(V, U)$ an elementary embedding induced by $U$. Then $U$ is $(\mu, \nu)$-regular if and only if ${\mathord{\mathrm{cf}}}^M(\sup(j``\nu))<j(\mu)$. First suppose $U$ is $(\mu, \nu)$-regular, and let $\{X_\alpha \mid \alpha<\nu\}$ be a witness. Let $j(\{X_\alpha \mid \alpha<\nu\})= \{Y_\alpha \mid \alpha<j(\nu)\}$. Let $a=\{\alpha<\sup(j``\nu) \mid [id_A]_U \in Y_\alpha\} \in M$. We know $j``\nu \subseteq a$, hence $a$ is unbounded in $\sup(j``\nu)$, and ${\mathord{\mathrm{cf}}}^M(\sup(j``\nu))\le {\left\vert {a} \right\vert}^M$. By the choice of $a$, we have $\bigcap_{\alpha \in a} Y_\alpha \neq \emptyset$. Hence we have ${\left\vert {a} \right\vert}^M<j(\mu)$, and ${\mathord{\mathrm{cf}}}^M(\sup(j``\nu))<j(\mu)$. For the converse, suppose ${\mathord{\mathrm{cf}}}^M(\sup(j``\nu))<j(\mu)$. Take a function $f:A \to \nu+1$ such that $[f]_U=j(f)([id_A]_U)=\sup(j``\nu)$ in $M$. Then $Z=\{x \in A \mid {\mathord{\mathrm{cf}}}(f(x))<\mu\} \in U$. For each $x \in Z$, take $c_x \subseteq f(x)$ such that ${\mathord{\mathrm{ot}}}(c_x)={\mathord{\mathrm{cf}}}(f(x))$ and $\sup(c_x)=f(x)$. Then, by induction on $i<\nu$, we can take a strictly increasing sequence ${\langle {\nu_i \mid i<\nu} \rangle}$ in $\nu$ such that $\{x\in Z \mid [\nu_i, \nu_{i+1}) \cap c_x \neq \emptyset\} \in U$ as follows. Suppose $\nu_i$ is defined for all $i<j$. If $j$ is limit, since $\nu$ is regular, we have $\sup\{\nu_i \mid i<j\}<\nu$. 
Then take $\nu_j<\nu$ with $\sup\{\nu_i \mid i<j\}<\nu_j$. Suppose $j=k+1$. Consider $c_{[id_A]_U} \subseteq j(f)([id_A]_U)=\sup(j``\nu)$. $c_{[id_A]_U}$ is unbounded in $\sup(j``\nu)$. Pick some $\xi \in c_{[id_A]_U}$ with $j(\nu_k)<\xi$, and take $\nu_j<\nu$ with $\xi<j(\nu_j)$. Then $\nu_j$ works. Finally, let $X_i=\{x \in Z \mid [\nu_i, \nu_{i+1}) \cap c_x \neq \emptyset\} \in U$. We check that $\{X_i \mid i<\nu\}$ witnesses that $U$ is $(\mu, \nu)$-regular. So take $a \in [\nu]^\mu$, and suppose $x \in \bigcap_{i \in a} X_i$. Then $[\nu_i, \nu_{i+1}) \cap c_x \neq \emptyset$ for every $i \in a$. Since ${\langle {\nu_i \mid i<\nu} \rangle}$ is strictly increasing, we have ${\left\vert {c_x} \right\vert} \ge \mu$, which contradicts the choice of $c_x$. \[5.6\] Let ${\kappa}$ and $\delta$ be uncountable cardinals with $\delta \le {\kappa}$. Then the following are equivalent: 1. ${\kappa}$ is $\delta$-strongly compact. 2. For every regular ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete $({\kappa}, {\lambda})$-regular ultrafilter over some set $A$. Suppose ${\kappa}$ is $\delta$-strongly compact. Fix a regular cardinal ${\lambda}\ge {\kappa}$, and take a $\delta$-complete fine ultrafilter $U$ over ${\mathcal{P}_\kappa \lambda}$. For $\alpha<{\lambda}$, let $X_\alpha=\{x \in {\mathcal{P}_\kappa \lambda}\mid \alpha \in x\} \in U$. Then the family $\{X_\alpha \mid \alpha<{\lambda}\}$ witnesses that $U$ is $({\kappa}, {\lambda})$-regular. For the converse, pick a cardinal ${\lambda}\ge {\kappa}$. By (2), there is a $\delta$-complete $({\kappa}, {\lambda}^+)$-regular ultrafilter $W$ over some set $A$. Take an elementary embedding $i:V \to N \approx \mathrm{Ult}(V, W)$. We have ${\mathord{\mathrm{cf}}}^N(\sup(i``{\lambda}^+))<i({\kappa})$ by Lemma \[5.4\]. 
By the elementarity of $i$, one can check that for every stationary $S \subseteq \{\alpha<{\lambda}^+ \mid {\mathord{\mathrm{cf}}}(\alpha)={\omega}\}$, we have that $i(S) \cap \sup(i``{\lambda}^+)$ is stationary in $\sup(i``{\lambda}^+)$ in $N$ (actually in $V$). (e.g., see [@BM2]). Fix a stationary partition $\{S_i \mid i<{\lambda}\}$ of $\{\alpha<{\lambda}^+ \mid {\mathord{\mathrm{cf}}}(\alpha)={\omega}\}$, and let $i(\{S_i \mid i<{\lambda}\})=\{S'_\alpha \mid \alpha<i({\lambda})\}$. Let $a=\{\alpha \in i({\lambda}) \mid S'_\alpha \cap \sup(i``{\lambda}^+)$ is stationary in $\sup(i``{\lambda}^+)$ in $N\}$. We have $a \in N$ and $i``{\lambda}\subseteq a$. Moreover, since ${\mathord{\mathrm{cf}}}^N(\sup(i``{\lambda}^+)) <i({\kappa})$, we have ${\left\vert {a} \right\vert}^N<i({\kappa})$. Hence $a \in i({\mathcal{P}}_{\kappa}{\lambda})$, and the filter $U$ over ${\mathcal{P}}_{\kappa}{\lambda}$ defined by $X \in U \iff a \in i(X)$ is a $\delta$-complete fine ultrafilter over ${\mathcal{P}}_{\kappa}{\lambda}$. Let ${\lambda}$ be an uncountable cardinal and $U$ an ultrafilter over ${\lambda}$. $U$ is *weakly normal* if for every $f:{\lambda}\to {\lambda}$ with $\{\alpha<{\lambda}\mid f(\alpha)<\alpha\} \in U$, there is $\gamma<{\lambda}$ such that $\{\alpha<{\lambda}\mid f(\alpha)<\gamma\} \in U$. \[5.3\] Let ${\lambda}$ be a regular cardinal, and $\delta \le {\lambda}$ an uncountable cardinal. If ${\lambda}$ carries a $\delta$-complete uniform ultrafilter, then ${\lambda}$ carries a $\delta$-complete weakly normal uniform ultrafilter as well. Let $U$ be a $\delta$-complete uniform ultrafilter over ${\lambda}$, and $j: V \to M \approx \mathrm{Ult}(V, U)$ be an elementary embedding induced by $U$. Since $U$ is uniform, we have $\sup(j``{\lambda})\le [id_{\lambda}]_U<j({\lambda})$. Then define $W$ by $X \in W \iff \sup(j``{\lambda}) \in j(X)$. It is easy to see that $W$ is a required weakly normal ultrafilter. 
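For completeness, we sketch the verification (a routine argument, not spelled out above). Uniformity: if $X \subseteq {\lambda}$ has ${\left\vert {X} \right\vert}<{\lambda}$, then $X$ is bounded by some $\gamma<{\lambda}$ since ${\lambda}$ is regular, so $j(X) \subseteq j(\gamma)<\sup(j``{\lambda})$ and hence $X \notin W$. Weak normality: if $\{\alpha<{\lambda}\mid f(\alpha)<\alpha\} \in W$, then $j(f)(\sup(j``{\lambda}))<\sup(j``{\lambda})$, so $j(f)(\sup(j``{\lambda}))<j(\gamma)$ for some $\gamma<{\lambda}$, and therefore $\{\alpha<{\lambda}\mid f(\alpha)<\gamma\} \in W$. Finally, $W$ is $\delta$-complete since $\mathrm{crit}(j) \ge \delta$: for any sequence ${\langle {X_i \mid i<\eta} \rangle}$ of $W$-measure one sets with $\eta<\delta$, we have $\sup(j``{\lambda}) \in \bigcap_{i<\eta}j(X_i)=j(\bigcap_{i<\eta}X_i)$.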
The following is immediate: \[2.7\] Let ${\lambda}$ be a regular cardinal, and $U$ an ${\omega}_1$-complete weakly normal ultrafilter over ${\lambda}$. Let $j: V \to M \approx \mathrm{Ult}(V, U)$ be an elementary embedding induced by $U$. Then $[id_{\lambda}]_U=\sup(j``{\lambda})$. Hence $U$ is $(\mu, {\lambda})$-regular if and only if $\{\alpha <{\lambda}\mid {\mathord{\mathrm{cf}}}(\alpha)<\mu\} \in U$. Let $A$ be a non-empty set, and $U$ an ultrafilter over $A$. Let $X \in U$, and for each $x \in X$, let $W_x$ be an ultrafilter over some set $A_x$. Then the *$U$-sum* of $\{W_x \mid x \in X\}$ is the collection $D$ of subsets of $\{{\langle {x,y } \rangle} \mid x \in X, y \in A_x\}$ such that for every $Y$, $Y \in D \iff \{x \in X \mid \{y\in A_x \mid {\langle {x,y} \rangle} \in Y\} \in W_x\} \in U$. $D$ is an ultrafilter over the set $\{{\langle {x,y} \rangle} \mid x \in X, y \in A_x \}$, and if $U$ and the $W_x$’s are $\delta$-complete, then so is $D$. Let ${\kappa}$ and $\delta$ be uncountable cardinals with $\delta \le {\kappa}$. Suppose for every regular ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete uniform ultrafilter over ${\lambda}$. Then ${\kappa}$ is $\delta$-strongly compact. First suppose ${\kappa}$ is regular. To show that ${\kappa}$ is a $\delta$-strongly compact cardinal, by Lemma \[5.6\], it is enough to see that for every regular ${\lambda}\ge {\kappa}$, there exists a $\delta$-complete $({\kappa}, {\lambda})$-regular ultrafilter over ${\lambda}$. We prove this by induction on ${\lambda}$. For the base step ${\lambda}={\kappa}$, by Lemma \[5.3\], we can take a $\delta$-complete weakly normal uniform ultrafilter $U$ over ${\kappa}$. Then $\{\alpha<{\kappa}\mid {\mathord{\mathrm{cf}}}(\alpha)<{\kappa}\} \in U$, hence $U$ is $({\kappa}, {\kappa})$-regular by Lemma \[2.7\]. 
Let ${\lambda}>{\kappa}$ be regular, and suppose for every regular $\mu$ with ${\kappa}\le \mu<{\lambda}$, there exists a $\delta$-complete $({\kappa}, \mu)$-regular ultrafilter $U_\mu$ over $\mu$. Fix a $\delta$-complete weakly normal uniform ultrafilter $U$ over ${\lambda}$. If $\{\alpha <{\lambda}\mid {\mathord{\mathrm{cf}}}(\alpha)<{\kappa}\} \in U$, then $U$ is $({\kappa}, {\lambda})$-regular by Lemmas \[5.4\] and \[2.7\], and we are done. Suppose $\{\alpha<{\lambda}\mid {\mathord{\mathrm{cf}}}(\alpha) \ge {\kappa}\} \in U$. Let $X^*=\{\alpha<{\lambda}\mid {\mathord{\mathrm{cf}}}(\alpha) \ge {\kappa}\}$. For $\alpha \in X^*$, let $W_\alpha=U_{{\mathord{\mathrm{cf}}}(\alpha)}$, a $\delta$-complete $({\kappa}, {\mathord{\mathrm{cf}}}(\alpha))$-regular ultrafilter over ${\mathord{\mathrm{cf}}}(\alpha)$. Let $B=\{{\langle {\alpha,\beta} \rangle} \mid \alpha \in X^*, \beta<{\mathord{\mathrm{cf}}}(\alpha)\}$. Note that ${\left\vert {B} \right\vert}={\lambda}$. Let us consider the $U$-sum $D$ of $\{W_\alpha \mid \alpha \in X^*\}$. $D$ is a $\delta$-complete ultrafilter over $B$. We claim that $D$ is $({\kappa}, {\lambda})$-regular, and then we can easily take a $\delta$-complete $({\kappa}, {\lambda})$-regular ultrafilter over ${\lambda}$. For $\alpha \in X^*$, let $j_\alpha:V \to M_\alpha \approx \mathrm{Ult}(V, W_\alpha)$ be an elementary embedding induced by $W_\alpha$. Let $g_\alpha :{\mathord{\mathrm{cf}}}(\alpha) \to \alpha+1$ be a function which represents $\sup(j_\alpha``\alpha)$. Note that, since $W_\alpha$ is $({\kappa}, {\mathord{\mathrm{cf}}}(\alpha))$-regular, we have ${\mathord{\mathrm{cf}}}^{M_\alpha}(\sup(j_\alpha``\alpha))= {\mathord{\mathrm{cf}}}^{M_\alpha}(\sup(j_\alpha``{\mathord{\mathrm{cf}}}(\alpha))) <j_\alpha({\kappa})$, so $\{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid {\mathord{\mathrm{cf}}}(g_\alpha(\beta))<{\kappa}\} \in W_\alpha$. Let $i:V \to N \approx \mathrm{Ult}(V, D)$ be an elementary embedding induced by $D$.
Define the function $g$ on $B$ by $g(\alpha,\beta)=g_{\alpha}(\beta)$. We see that $\sup(i``{\lambda})=[g]_D$. First, for $\gamma<{\lambda}$, we have $X^* \setminus \gamma \in U$, and $\{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid g_\alpha(\beta) \ge \gamma\} \in W_\alpha$ for all $\alpha \in X^*\setminus \gamma$. This means that $\{{\langle {\alpha, \beta} \rangle} \in B \mid g(\alpha,\beta) \ge \gamma\} \in D$, and $i(\gamma)<[g]_D$. Next, take a function $h$ on $B$ with $[h]_D <[g]_D$. Then $\{{\langle {\alpha,\beta} \rangle} \in B \mid h(\alpha,\beta)<g(\alpha,\beta)\} \in D$, and $X'=\{\alpha \in X^* \mid \{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid h(\alpha,\beta)<g(\alpha,\beta)\} \in W_\alpha\} \in U$. For $\alpha \in X'$, we know $\{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid h(\alpha,\beta)<g(\alpha,\beta)\} \in W_\alpha$. Because $g(\alpha,\beta)=g_\alpha(\beta)$ represents $\sup(j_\alpha``\alpha)$, there is some $\gamma_\alpha<\alpha$ such that $\{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid h(\alpha,\beta)<\gamma_\alpha\} \in W_\alpha$. Now, since $U$ is weakly normal and $\gamma_\alpha<\alpha$ for $\alpha \in X'$, there is some $\gamma<{\lambda}$ such that $\{\alpha \in X' \mid \gamma_\alpha <\gamma\} \in U$. Then we have $[h]_D<i(\gamma)<\sup(i``\lambda)$. Finally, since $\{\beta<{\mathord{\mathrm{cf}}}(\alpha) \mid {\mathord{\mathrm{cf}}}(g(\alpha,\beta))<{\kappa}\} \in W_\alpha$ for every $\alpha \in X^*$, we have $\{{\langle {\alpha,\beta} \rangle} \in B \mid {\mathord{\mathrm{cf}}}(g(\alpha,\beta))<{\kappa}\} \in D$; this means that ${\mathord{\mathrm{cf}}}^N([g]_D)={\mathord{\mathrm{cf}}}^N(\sup(i``{\lambda}))<i({\kappa})$, and $D$ is $({\kappa}, {\lambda})$-regular. If ${\kappa}$ is singular, take a $\delta$-complete weakly normal uniform ultrafilter $U$ over ${\kappa}^+$. We have $\{\alpha <{\kappa}^+ \mid {\mathord{\mathrm{cf}}}(\alpha) \le {\kappa}\} \in U$, and $\{\alpha<{\kappa}^+ \mid {\mathord{\mathrm{cf}}}(\alpha)<{\kappa}\} \in U$ since ${\kappa}$ is singular.
Then $U$ is $({\kappa}, {\kappa}^+)$-regular. The rest is the same as in the case where ${\kappa}$ is regular. This completes the proof of Theorem \[prop3.3\]. Using Theorem \[prop3.3\], we also have the following characterization of $\delta$-strongly compact cardinals. \[2.10\] Let $\delta \le {\kappa}$ be uncountable cardinals. Then the following are equivalent: 1. ${\kappa}$ is $\delta$-strongly compact. 2. For every regular ${\lambda}\ge {\kappa}$, there is an elementary embedding $j:V \to M$ into some transitive model $M$ with $\delta \le \mathrm{crit}(j) \le {\kappa}$ and $\sup(j``{\lambda})<j({\lambda})$. 3. For every regular ${\lambda}\ge {\kappa}$, there is an elementary embedding $j:V \to M$ into some transitive model $M$ with $\delta \le \mathrm{crit}(j)$ and $\sup(j``{\lambda})<j({\lambda})$. For (1) $\Rightarrow$ (2), suppose ${\kappa}$ is $\delta$-strongly compact. Then for every regular ${\lambda}\ge {\kappa}$, there is a $\delta$-complete fine ultrafilter over ${\mathcal{P}_\kappa \lambda}$. If $j:V \to M$ is the ultrapower embedding induced by this ultrafilter, then the critical point of $j$ is between $\delta$ and ${\kappa}$, and $\sup(j``{\lambda})<j({\lambda})$. \(2) $\Rightarrow$ (3) is trivial. For (3) $\Rightarrow$ (1), it is enough to see that every regular ${\lambda}\ge {\kappa}$ carries a $\delta$-complete uniform ultrafilter. Let ${\lambda}\ge {\kappa}$ be regular. Take an elementary embedding $j:V \to M$ with $\delta \le \mathrm{crit}(j)$ and $\sup(j``{\lambda})<j({\lambda})$. Define $U \subseteq {\mathcal{P}}({\lambda})$ by $X\in U \iff \sup(j``{\lambda}) \in j(X)$. It is easy to check that $U$ is a $\delta$-complete uniform ultrafilter over ${\lambda}$. Bagaria and Magidor [@BM2] proved that the least $\delta$-strongly compact cardinal must be a limit cardinal. We can prove the following slightly stronger result using Theorem \[prop3.3\].
For a regular cardinal $\nu$ and $f, g \in {}^\nu \nu$, define $f \le^* g$ if the set $\{\xi<\nu \mid f(\xi) >g(\xi)\}$ is bounded in $\nu$. A family $F \subseteq {}^\nu \nu$ is *unbounded* if there is no $g \in {}^\nu \nu$ such that $f\le^*g$ for every $f \in F$. Then let $\mathfrak{b}_\nu=\min\{{\left\vert {F} \right\vert} \mid F \subseteq {}^\nu \nu$ is unbounded$\}$. Note that $\mathfrak b_\nu$ is regular and $\nu^+ \le \mathfrak b_\nu \le 2^\nu$. Let $\delta$ be an uncountable cardinal, and suppose ${\kappa}$ is the least $\delta$-strongly compact cardinal. Then for every cardinal $\mu<{\kappa}$, there is a regular $\nu$ with $\mu \le \nu<\mathfrak b_\nu<{\kappa}$. As an immediate consequence, ${\kappa}$ is a limit cardinal. Fix $\mu<{\kappa}$. Take a regular $\nu$ as follows. If $\mu \ge \delta$, by the minimality of ${\kappa}$, there is a regular $\nu \ge \mu$ such that $\nu$ carries no $\delta$-complete uniform ultrafilter. We know $\nu <{\kappa}$ since ${\kappa}$ is $\delta$-strongly compact. If $\mu <\delta$, let $\nu=\mu^+$. Then $\nu$ is regular with $\nu \le \delta \le {\kappa}$. We show that $\mathfrak b_\nu<{\kappa}$ in both cases. Let ${\lambda}=\mathfrak b_\nu$, and suppose to the contrary that ${\lambda}\ge {\kappa}$. By Corollary \[2.10\], we can find an elementary embedding $j:V \to M$ with $\delta \le \mathrm{crit}(j) \le {\kappa}$ and $\sup(j``{\lambda})<j({\lambda})$. Then we have $\sup(j``\nu)=j(\nu)$; otherwise, we could take a $\delta$-complete uniform ultrafilter $U=\{X \subseteq \nu \mid \sup(j``\nu) \in j(X)\}$ over $\nu$. If $\mu \ge \delta$, this contradicts the choice of $\nu$. Suppose $\mu<\delta$. Note that $U$ is in fact ${\mathrm{crit}}(j)$-complete. Since $\nu \le \delta \le \mathrm{crit}(j) \le \nu$ (a uniform ultrafilter over $\nu$ cannot be $\nu^+$-complete), we have $\mathrm{crit}(j)=\nu$. However this is impossible since $\nu$ is a successor cardinal but $\mathrm{crit}(j)$ is measurable. Fix an unbounded set $F \subseteq {}^\nu \nu$ with size ${\lambda}$.
Let $F=\{f_\alpha \mid \alpha<{\lambda}\}$. Consider $j(F)=\{f'_\alpha \mid \alpha<j({\lambda})\}$. Let $\gamma=\sup(j``{\lambda})<j({\lambda})$. By the elementarity of $j$, the set $\{f'_\alpha \mid \alpha<\gamma\}$ is bounded in $j({}^\nu \nu)$ in $M$. Thus there is $g' \in j({}^\nu \nu)$ such that $f'_\alpha \le^* g'$ for every $\alpha<\gamma$. Take $g \in {}^\nu \nu$ so that $g'(j(\xi)) \le j(g(\xi))$ for every $\xi<\nu$; this is possible since $\sup(j``\nu)=j(\nu)$. Since $F$ is unbounded, there is $\alpha<{\lambda}$ with $f_\alpha \not \le^* g$. On the other hand, $j(f_\alpha)=f'_{j(\alpha)} \le^* g'$, thus there is $\eta<\nu$ such that $j(f_\alpha(\xi))=f'_{j(\alpha)}(j(\xi)) \le g'(j(\xi))$ for every $\xi \ge \eta$. But then $j(f_\alpha(\xi)) \le g'(j(\xi)) \le j(g(\xi))$, hence $f_\alpha (\xi) \le g(\xi)$, for every $\xi \ge \eta$; this contradicts $f_\alpha \not\le^* g$. For an uncountable cardinal $\delta$, is the least $\delta$-strongly compact cardinal a strong limit cardinal? Or a fixed point of the $\aleph$- or $\beth$-function? On Products of $\delta$-Lindelöf spaces ======================================= In this section we give a proof of Theorem \[thm3\]. The direction (2) $\Rightarrow$ (1) follows from the same proof as in [@BM2]. For the converse direction in the case $\delta={\omega}_1$, an algebraic method was used in [@BM2]. We give a direct proof, the idea of which comes from Gorelic [@G]. Now suppose ${\kappa}$ is not $\delta$-strongly compact. By Theorem \[prop3.3\], there is a regular cardinal ${\lambda}\ge {\kappa}$ such that ${\lambda}$ carries no $\delta$-complete uniform ultrafilter. Let ${\mathcal{F}}$ be the family of all partitions of ${\lambda}$ with size $<\delta$, that is, each ${\mathcal{A}}\in {\mathcal{F}}$ is a family of pairwise disjoint subsets of ${\lambda}$ with $\bigcup {\mathcal{A}}={\lambda}$ and ${\left\vert {{\mathcal{A}}} \right\vert}<\delta$. Let $\{{\mathcal{A}}^\alpha \mid \alpha<2^{\lambda}\}$ be an enumeration of ${\mathcal{F}}$.
For $\alpha<2^{\lambda}$, let $\delta_\alpha={\left\vert {{\mathcal{A}}^\alpha} \right\vert}<\delta$, and $\{A^\alpha_\xi \mid \xi<\delta_\alpha\}$ be an enumeration of ${\mathcal{A}}^\alpha$. We identify $\delta_\alpha$ with a discrete space; it is trivially $\delta$-Lindelöf. We show that the product space $X=\prod_{\alpha<2^{\lambda}} \delta_\alpha$ is not ${\kappa}$-Lindelöf. For $\gamma<{\lambda}$, define $f_\gamma \in X$ as follows: for $\alpha<2^{\lambda}$, since ${\mathcal{A}}^\alpha$ is a partition of ${\lambda}$, there is a unique $\xi<\delta_\alpha$ with $\gamma \in A^\alpha_\xi$. Then let $f_\gamma(\alpha)=\xi$. Let $Y=\{f_\gamma \mid \gamma<{\lambda}\}$. It is clear that ${\left\vert {Y} \right\vert}={\lambda}$. For every $g \in X$, there is an open neighborhood $O$ of $g$ such that ${\left\vert {O \cap Y} \right\vert}<{\lambda}$. Suppose not. Then the family $\{A^\alpha_{g(\alpha)} \mid \alpha<2^{\lambda}\}$ has the finite intersection property; moreover, for any finitely many $\alpha_0,\dotsc, \alpha_n<2^{\lambda}$, the intersection $\bigcap_{i \le n} A^{\alpha_i}_{g(\alpha_i)}$ has cardinality ${\lambda}$. Hence we can find a uniform ultrafilter $U$ over ${\lambda}$ extending $\{A^\alpha_{g(\alpha)} \mid \alpha<2^{\lambda}\}$. By our assumption, $U$ is not $\delta$-complete. Then we can take a partition ${\mathcal{A}}$ of ${\lambda}$ with size $<\delta$ such that $A \notin U$ for every $A \in {\mathcal{A}}$. We can take $\alpha<2^{\lambda}$ with ${\mathcal{A}}={\mathcal{A}}^\alpha$. But then $A^\alpha_{g(\alpha)} \in {\mathcal{A}}$ and $A^\alpha_{g(\alpha)} \in U$, a contradiction. For each $g \in X$, take an open neighborhood $O_g$ of $g$ with ${\left\vert {O_g \cap Y} \right\vert}<{\lambda}$. Let ${\mathcal{U}}=\{O_g \mid g \in X\}$. ${\mathcal{U}}$ is an open cover of $X$, but it has no subcover of size $<{\lambda}$ because ${\left\vert {Y} \right\vert}={\lambda}$.
Hence ${\mathcal{U}}$ witnesses that $X$ is not ${\lambda}$-Lindelöf, hence not ${\kappa}$-Lindelöf. This completes our proof.\ By the same proof, we have: Let ${\kappa}$ be an uncountable cardinal, and $\delta<{\kappa}$ a cardinal. Then the following are equivalent: 1. ${\kappa}$ is $\delta^+$-strongly compact. 2. Identifying $\delta$ with a discrete space, for every cardinal ${\lambda}$, the product space $\delta^{\lambda}$ is ${\kappa}$-Lindelöf. On products of countably tight spaces ===================================== We prove Theorems \[thm4\] and \[thm5\] in this section. For a topological space $X$ and $Y \subseteq X$, let $\overline{Y}$ denote the closure of $Y$ in $X$. \[4.1\] Let $S$ be an uncountable set and $U$ a $\sigma$-complete ultrafilter over the set $S$. Let $X$ be a countably tight space, and $\{O_s \mid s \in S\}$ a family of open sets in $X$. Define the set $O \subseteq X$ by $x \in O \iff \{s \in S \mid x \in O_s\} \in U$ for $x \in X$. Then $O$ is open in $X$. It is enough to show that $\overline{X \setminus O} \subseteq X \setminus O$. Take $x \in \overline{X \setminus O}$, and suppose to the contrary that $x \in O$, that is, $\{s \in S \mid x \in O_s\} \in U$. Since $X$ is countably tight, there is a countable $A \subseteq X \setminus O$ with $x \in \overline{A}$. For each $y \in A$, we have $\{s \in S \mid y \notin O_s\} \in U$. Since $A$ is countable and $U$ is $\sigma$-complete, there is $s \in S$ such that $y \notin O_s$ for every $y \in A$ but $x \in O_s$. Then $A \subseteq X \setminus O_s$. Since $O_s$ is open, we have $\overline{X \setminus O_s} \subseteq X \setminus O_s$. Hence $x \in \overline{A} \subseteq \overline{X \setminus O_s} \subseteq X \setminus O_s$, and $x \notin O_s$. This is a contradiction. The following proposition immediately yields Theorem \[thm4\]. \[prop4.2\] Suppose ${\kappa}$ is ${\omega}_1$-strongly compact, and $\mu \le {\kappa}$ is the least measurable cardinal.
Let $I$ be a set with ${\left\vert {I} \right\vert}<\mu$, and $\{X_i \mid i \in I\}$ a family of countably tight spaces. Then $t(\prod_{i \in I}X_i) \le {\kappa}$. More precisely, for every $A \subseteq \prod_{i \in I} X_i$ and $f \in \overline{A}$, there is $B \subseteq A$ such that ${\left\vert {B} \right\vert}<{\kappa}$ and $f \in \overline{B}$. Take $A \subseteq \prod_{i \in I}X_i$ and $f \in \overline{A}$. We will find $B \subseteq A$ with ${\left\vert {B} \right\vert}<{\kappa}$ and $f \in \overline{B}$. Since ${\kappa}$ is ${\omega}_1$-strongly compact, we can find a $\sigma$-complete fine ultrafilter $U$ over ${\mathcal{P}}_{\kappa}(\prod_{i \in I}X_i)$. Note that $U$ is in fact $\mu$-complete. We show that $\{s \in {\mathcal{P}}_{\kappa}(\prod_{i \in I}X_i) \mid f \in \overline{A \cap s}\} \in U$. Suppose not and let $E =\{s \in {\mathcal{P}}_{\kappa}(\prod_{i \in I}X_i)\mid f \notin \overline{A \cap s}\} \in U$. For each $s \in E$, since $f \notin \overline{A \cap s}$, we can choose finitely many $i_0^s, \dotsc, i_n^s \in I$ and open sets $O_{i_k}^s \subseteq X_{i_k}$ respectively such that $f(i_k^s) \in O_{i_k}^s$ for every $k \le n$ but $\{g \in A\cap s \mid \forall k \le n\,(g(i_k^s) \in O^s_{i_k})\}=\emptyset$. Since $U$ is $\mu$-complete and ${\left\vert {I} \right\vert}<\mu$, we can find $i_0,\dotsc, i_n \in I$ such that $E'=\{s \in E \mid \forall k \le n\,(i_{k}^s=i_k)\} \in U$. For each $i_k$, let $O_{i_k} \subseteq X_{i_k}$ be the set defined by $x \in O_{i_k} \iff \{s \in E' \mid x \in O_{i_k}^s\} \in U$. By Lemma \[4.1\], each $O_{i_k}$ is open in $X_{i_k}$ with $f(i_k) \in O_{i_k}$. Since $f \in \overline{A}$, there is $h \in A$ such that $h(i_k) \in O_{i_k}$ for every $k \le n$. Because $U$ is fine, we can take $s \in E'$ with $h \in A \cap s$ and $h(i_k) \in O^s_{i_k}$ for every $k \le n$. Then $h \in \{g \in A \cap s \mid \forall k \le n\,(g(i_k) \in O^s_{i_k})\}$, a contradiction. Hence we may pick $s$ with $f \in \overline{A \cap s}$; letting $B=A \cap s$, we have ${\left\vert {B} \right\vert}<{\kappa}$ and $f \in \overline{B}$. 1.
The restriction “${\left\vert {I} \right\vert}<\mu$” in Proposition \[prop4.2\] cannot be eliminated. If $I$ is an infinite set and $\{X_i \mid i \in I\}$ is a family of $T_1$ spaces with ${\left\vert {X_i} \right\vert}\ge 2$, then $t(\prod_{i \in I} X_i) \ge {\left\vert {I} \right\vert}$; for each $i \in I$ take distinct points $x_i, y_i \in X_i$. For each finite subset $a \subseteq I$, define $f_a \in \prod_{i \in I} X_i $ by $f_a(i)=x_i$ if $i \in a$, and $f_a(i) =y_i$ otherwise. Let $X=\{f_a \mid a \in [I]^{<{\omega}}\}$, and $g$ the function with $g(i)=x_i$ for $i \in I$. Then $g \in \overline{X}$ but for every $Y \subseteq X$ with ${\left\vert {Y} \right\vert}<{\left\vert {I} \right\vert}$ we have $g \notin \overline{Y}$. 2. On the other hand, we do not know if Proposition \[prop4.2\] can be improved as follows: If ${\kappa}$ is the least ${\omega}_1$-strongly compact cardinal and $I$ is a set with size $<{\kappa}$, then the product of countably tight spaces indexed by $I$ has tightness $\le {\kappa}$. Recall that the Cohen forcing notion ${\mathbb{C}}$ is the poset $2^{<{\omega}}$ with the reverse inclusion order. \[4.4\] Let ${\kappa}$ be a cardinal which is not ${\omega}_1$-strongly compact. Let ${\mathbb{C}}$ be the Cohen forcing notion, and $G$ be $(V, {\mathbb{C}})$-generic. Then in $V[G]$, there are regular $T_1$ Lindelöf spaces $X_0$ and $X_1$ such that $X_0^n$ and $X_1^n$ are Lindelöf for every $n<{\omega}$, but the product space $X_0 \times X_1$ has an open cover which has no subcover of size $<{\kappa}$. Let $X_0$ and $X_1$ be the spaces constructed in the proof of Proposition 3.1 in [@U1]; we sketch the construction for completeness. Fix ${\lambda}\ge {\kappa}$ such that ${\mathcal{P}}_{\kappa}{\lambda}$ has no $\sigma$-complete fine ultrafilter. Let $\mathrm{Fine}({\mathcal{P}_\kappa \lambda})$ be the set of all fine ultrafilters over ${\mathcal{P}_\kappa \lambda}$.
Identifying ${\mathcal{P}_\kappa \lambda}$ with a discrete space, $\mathrm{Fine}({\mathcal{P}_\kappa \lambda})$ is a closed subspace of the Stone-Čech compactification of ${\mathcal{P}_\kappa \lambda}$, hence $\mathrm{Fine}({\mathcal{P}_\kappa \lambda})$ is a compact Hausdorff space. Let $\{{\mathcal{A}}_\alpha \mid \alpha<\mu\}$ be an enumeration of all countable partitions of ${\mathcal{P}_\kappa \lambda}$, and for $\alpha<\mu$, fix an enumeration $\{A^\alpha_n \mid n<{\omega}\}$ of ${\mathcal{A}}_\alpha$. Take a $(V, {\mathbb{C}})$-generic $G$. Let $r=\bigcup G$, which is a function from ${\omega}$ to $\{0,1\}$. Let $a=\{n<{\omega}\mid r(n)=0\}$, and $b=\{n<{\omega}\mid r(n)=1\}$. In $V[G]$, we define $X_0$ and $X_1$ as follows. The underlying set of $X_0$ is $\mathrm{Fine}({\mathcal{P}_\kappa \lambda})^V$, the set of all fine ultrafilters over ${\mathcal{P}_\kappa \lambda}$ in $V$. The topology of $X_0$ is generated by the family $\{ \{U \in \mathrm{Fine}({\mathcal{P}_\kappa \lambda})^V \mid A \in U, A^{\alpha_i}_n \notin U$ for every $i \le k$ and $n \in a\}\mid A \in V$, $A \subseteq ({\mathcal{P}_\kappa \lambda})^V$, $\alpha_0,\dotsc, \alpha_k <\mu\}$. The space $X_1$ is defined in a similar way, replacing $a$ by $b$. $X_0$ and $X_1$ are zero-dimensional regular $T_1$ Lindelöf spaces in $V[G]$. We know that $X_0 \times X_1$ has an open cover which has no subcover of size $<{\kappa}$. In addition, one can check that $X_0^n$ and $X_1^n$ are Lindelöf for every $n<{\omega}$ (see the proof of Proposition 3.9 in [@U1]). For a Tychonoff space $X$, let $C_p(X)$ be the space of all continuous functions from $X$ to the real line ${\mathbb{R}}$ with the topology of pointwise convergence. For a topological space $X$, the Lindelöf degree $L(X)$ is the minimum infinite cardinal ${\kappa}$ such that every open cover of $X$ has a subcover of size $\le {\kappa}$. Hence $X$ is Lindelöf if and only if $L(X)={\omega}$. \[cp\] Let $X$ be a Tychonoff space, and $\nu$ a cardinal.
Then $L(X^n) \le \nu$ for every $n<{\omega}$ if and only if $t(C_p(X)) \le \nu$. In particular, each finite power of $X$ is Lindelöf if and only if $C_p(X)$ is countably tight. Let ${\kappa}$ be a cardinal which is not ${\omega}_1$-strongly compact. Let ${\mathbb{C}}$ be the Cohen forcing notion, and $G$ be $(V, {\mathbb{C}})$-generic. Then in $V[G]$, there are regular $T_1$ Lindelöf spaces $X_0$ and $X_1$ such that $C_p(X_0)$ and $C_p(X_1)$ are countably tight and $t(C_p(X_0) \times C_p(X_1)) \ge {\kappa}$. Let $X_0$ and $X_1$ be the spaces in Lemma \[4.4\]. By Theorem \[cp\], $C_p(X_0)$ and $C_p(X_1)$ are countably tight. It is clear that $C_p(X_0) \times C_p(X_1)$ is homeomorphic to $C_p(X_0 \oplus X_1)$, where $X_0 \oplus X_1$ is the topological sum of $X_0$ and $X_1$. We have $L((X_0 \oplus X_1)^2) \ge L(X_0 \times X_1) \ge {\kappa}$, hence $t(C_p(X_0) \times C_p(X_1)) \ge {\kappa}$ by Theorem \[cp\] again. Combining these results we have Theorem \[thm5\]: Let ${\mathbb{C}}$ be the Cohen forcing notion, and $G$ be $(V, {\mathbb{C}})$-generic. Then for every cardinal ${\kappa}$ the following are equivalent in $V[G]$: 1. ${\kappa}$ is ${\omega}_1$-strongly compact. 2. For all countably tight spaces $X$ and $Y$ we have $t(X \times Y) \le {\kappa}$. 3. For all countably tight Tychonoff spaces $X$ and $Y$ we have $t(X \times Y) \le {\kappa}$. 4. For all regular $T_1$ Lindelöf spaces $X$ and $Y$, if $C_p(X)$ and $C_p(Y)$ are countably tight then $t(C_p(X) \times C_p(Y)) \le {\kappa}$. Theorem \[thm5\] is a consistency result, and the following natural question arises: In ZFC, is the least ${\omega}_1$-strongly compact cardinal a precise upper bound on the tightness of products of two countably tight spaces? How about Fréchet-Urysohn spaces? To answer this question for the Fréchet-Urysohn case, we have to consider spaces other than $C_p(X)$, because if $C_p(X)$ and $C_p(Y)$ are Fréchet-Urysohn, then so is $C_p(X) \times C_p(Y)$.
This can be verified as follows. It is known that if $X$ is compact Hausdorff, then $X$ is scattered if and only if $C_p(X)$ is Fréchet-Urysohn (Pytkeev [@P2], Gerlits [@Ger]). In addition, the compactness assumption can be weakened to the assumption that each finite power of $X$ is Lindelöf. Gewand [@Ge] proved that if $X$ and $Y$ are Lindelöf and $X$ is scattered, then $X \times Y$ is Lindelöf as well. Finally we present another application of Lemma \[4.1\]. Bagaria and da Silva [@BS] proved that if ${\kappa}$ is ${\omega}_1$-strongly compact, $X$ is a first countable space, and every subspace of $X$ with size $<{\kappa}$ is normal, then $X$ itself is normal. Using Lemma \[4.1\], we can weaken the first countability assumption to countable tightness. Let ${\kappa}$ be an ${\omega}_1$-strongly compact cardinal, and $X$ a countably tight topological space. If every subspace of $X$ with size $<{\kappa}$ is normal, then the whole space $X$ is also normal. Take disjoint closed subsets $C$ and $D$ of $X$. Let $U$ be a $\sigma$-complete fine ultrafilter over ${\mathcal{P}}_{\kappa}X$. By the assumption, for each $s \in {\mathcal{P}}_{\kappa}X$, the subspace $s$ is normal. Hence we can find open sets $O_s$ and $V_s$ such that $s \cap C \subseteq O_s$, $s \cap D \subseteq V_s$, and $O_s \cap V_s \cap s=\emptyset$. Define $O$ and $V$ by $x \in O \iff \{s \in {\mathcal{P}}_{\kappa}X \mid x \in O_s\} \in U$ and $x \in V \iff \{s \in {\mathcal{P}}_{\kappa}X \mid x \in V_s\} \in U$. By Lemma \[4.1\], we know that $O$ and $V$ are open, and it is easy to see that $O$ and $V$ are disjoint, $C \subseteq O$, and $D \subseteq V$. [100]{} A. V. Arhangel’skiĭ, *On some topological spaces that arise in functional analysis*, Russian Math. Surveys Vol. 31, No. 5 (1976), 14–30. J. Bagaria, M. Magidor, *Group radicals and strongly compact cardinals*. Trans. Am. Math. Soc. Vol. 366, No. 4 (2014), 1857–1877. J. Bagaria, M. Magidor, *On ${\omega}_1$-strongly compact cardinals*. J. Symb.
Logic Vol. 79, No. 1 (2014), 268–278. J. Bagaria, S. G. da Silva, *${\omega}_1$-strongly compact cardinals and normality*. Preprint. K. Eda, G. Gruenhage, P. Koszmider, K. Tamano, S. Todorčević, *Sequential fans in topology*. Topol. Appl. 67 (1995), 189–220. J. Gerlits, *Some properties of $C(X)$ II*. Topol. Appl. 15(3) (1983), 255–262. M. E. Gewand, *The Lindelöf degree of scattered spaces and their products*. J. Aust. Math. Soc., Ser. A 37 (1984), 98–105. I. Gorelic, *On powers of Lindelöf spaces*. Comment. Math. Univ. Carol. Vol. 35, No. 2 (1994), 383–401. J. Ketonen, *Strong compactness and other cardinal sins*. Ann. Math. Logic Vol. 5 (1972), 47–76. E. G. Pytkeev, *The tightness of spaces of continuous functions*. Russian Math. Surveys Vol. 37, No. 1 (1982), 176–177. E. G. Pytkeev, *Sequentiality of spaces of continuous functions*. Russian Math. Surveys Vol. 37, No. 5 (1982), 190–191. T. Usuba, *$G_\delta$-topology and compact cardinals*. Fund. Math. Vol. 246 (2019), 71–87. T. Usuba, *A note on the tightness of $G_\delta$-modification*. To appear in Topol. Appl.
--- abstract: 'For the ATLAS Pixel Detector fast readout electronics has been successfully developed and tested. Particular attention was given to the ability to detect small charges of the order of $5,000~e^{-}$ within $25~$ns in the harsh radiation environment of the LHC, together with the challenge of coping with the huge amount of data generated by the 80 million channels of the Pixel Detector. For the integration of the $50~\mu$m pitch hybrid pixel detector, reliable bump bonding techniques using either lead-tin or indium bumps have been developed and successfully tested for large scale production.' address: | Universität Bonn, Physikalisches Institut\ Nußallee 12, D-53115 Bonn, Germany author: - 'F. Hügging' title: 'Front-End electronics and integration of ATLAS pixel modules' --- \ [on behalf of the ATLAS Pixel Collaboration[@tdr]]{} atlas, silicon detector, pixel, front-end electronics, deep sub-micron, hybridization, bump-bonding\ [*PACS*]{}: 06.60.Mr, 29.40.Gx Introduction {#sec:intro} ============ The Pixel Detector of the ATLAS experiment at the LHC, which is currently under construction at CERN, Geneva, is the crucial part for good secondary vertex resolution and high b-tagging capability. Therefore high spatial resolution and fast read-out in a multi-hit environment are needed, which leads to the use of a hybrid silicon pixel detector. This detector will consist of three barrel layers and three disks in each forward direction to provide at least three space points up to $|\eta | = 2.5$ for particles with a transverse momentum greater than $500$ MeV [@tdr]. The smallest detector unit will be a module made of one silicon pixel sensor and sixteen readout chips connected to the sensor with high density bump bonding techniques. Each module consists of about $50,000$ pixel cells of size $50\mu m \cdot 400\mu m$ to reach the required spatial resolution of $12\mu m$ in the $r\phi$-direction.
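As a rough cross-check (my numbers, not from the paper): the purely binary resolution of a $50~\mu$m pitch is $50/\sqrt{12} \approx 14~\mu$m, so the quoted $12~\mu$m presumably relies on charge sharing between neighbouring pixels.

```python
import math

def binary_resolution_um(pitch_um: float) -> float:
    """RMS position resolution of a binary (single-pixel) hit:
    standard deviation of a uniform distribution over one pitch."""
    return pitch_um / math.sqrt(12.0)

# 50 um pitch in the r-phi direction -> ~14.4 um binary resolution
print(round(binary_resolution_um(50.0), 1))
```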
The main requirement for the ATLAS pixel modules is high radiation tolerance of all components in the harsh radiation environment close to the interaction point. The integrated design fluence is $10^{15}cm^{-2}$ 1 MeV neutron equivalent, coming predominantly from charged hadrons, over ten years for the outer barrel layers and disks and five years for the innermost layer. Secondly, the total need of about $1,800$ modules requires very good testability of all components before assembly and high fault tolerance in long-term operation, as the modules are not supposed to be exchanged during the whole foreseen ten-year lifetime of the ATLAS experiment. The material budget for the whole pixel detector is very strict in order to affect the detector parts further out as little as possible. This leads to a total amount of material of less than $1.2~\%$ of one radiation length per module, not including further services and support structures. Module concept {#sec:concept} ============== A cross-section of an ATLAS pixel module can be seen in figure \[fig:x-sec\]. The module basically consists of a so-called bare module, i.e. the sixteen readout chips bump bonded to the silicon pixel sensor. The size of the module is roughly given by the size of the sensor, $18.6\cdot 63.0~$mm$^2$. The readout chips are placed in two rows of eight. The front-end chips are slightly longer than the sensor width in order to be able to reach the wire-bond pads of the chips. The interconnection is done with fine pitch bump bonding, either with Pb/Sn by IZM[^1] or with indium by AMS[^2]. The total active area of one module is $16.4\cdot 60.8~$mm$^2$ with $46,080$ pixels. For routing the signals to the MCC [@mcc] and further to off-module electronics, and for providing the analog and digital supply voltages to the readout chips, a two-layer high density interconnect (a flexible Kapton cable) is used. The flexible hybrid also carries the MCC chip, which performs control and clock operations.
It is glued to the backside of the sensor and has roughly the size of the sensor tile. To connect the flex hybrid with the front-end chips and with the MCC, standard wire-bond techniques are used; altogether around 350 wire-bonds are needed for a complete module. The module is glued chips-down to the mechanical support, which also serves as cooling structure. The whole power consumption of about $4~$W per module has to be dissipated through the readout chips to allow an operation of the module at $-6^{\circ}$C. Front-End electronics {#sec:fe} ===================== Each FE chip contains 2880 pixel cells arranged in 18 columns times 160 rows. Every pixel cell consists of a charge sensitive amplifier, a discriminator for zero suppression and a digital readout logic to transport the hit information to the periphery. The signal charge can be determined by measuring the width of the discriminator output signal. Hits are temporarily stored in one of the 64 buffers per column pair located at the chip periphery until a trigger signal selects them for readout. Hits with no corresponding trigger signal are discarded after the programmable trigger latency of up to 256 clock cycles of 25 ns. All hits corresponding to a trigger are sent through one serial LVDS link to the module controller chip, which builds full module events containing the hit information of all 16 FE chips on a module. Figure \[fig:pixel\] shows a block diagram of the most important elements in the pixel cell. Charge deposited in the sensor is brought to the pixel through the bump bond pad, which connects to the input of a charge sensitive amplifier. An inverting folded cascode amplifier is fed back by a capacitor of $C_{f}=6.5~$fF integrated into the lower metal layers of the bump bond pad. The feedback capacitor is discharged by a constant current so that a triangular pulse shape is obtained.
As a consequence of this particular pulse shape, the width of the discriminator output signal is nearly proportional to the deposited charge. This signal, called ’time over threshold’ or ToT, is measured in units of the bunch crossing clock (25 ns) and provides analogue information for every hit with a resolution of 4-6 bit. The feedback current $I_{f}$ is set globally with an 8 bit on-chip bias DAC so that a compromise between a good ToT resolution and small dead time in the pixel can be found. The constant feedback current generation is part of a circuit which compensates for detector leakage current. This must be sourced by the electronics because the pixel sensor is dc-coupled to the readout chip. The compensation circuit can cope with more than 100 nA detector leakage current per pixel. The threshold generation of each discriminator is done within every single pixel. A small threshold dispersion between the 2880 pixels is crucial in order to achieve low threshold settings without an increased noise hit rate. Therefore, 7 individually programmable trim bits in each pixel cover a wide range of thresholds between $1,000~e^{-}$ and $10,000~e^{-}$. These local trim bits affect both sides of a differential pair amplifier, leading to a linear threshold vs. DAC behaviour in the central part of the covered threshold range. Furthermore, a globally set 5 bit DAC, the so-called GDAC, allows to move the threshold for the whole chip without losing the trimming between the individual pixels. Every pixel contains a calibration circuit used to inject known charges into the input node. This circuit generates a voltage step which is connected to one of two injection capacitors for different charge ranges. The output signal of the discriminator in every pixel can be disabled with a local mask bit. A fast OR of a programmable subset of pixels is available for testing purposes and for self-triggered operation of the chip, by using this OR signal, after delaying it, as a trigger.
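A toy model of the ToT response described above (constant-current discharge of a triangular pulse, so the width above threshold grows linearly with the charge); the threshold charge and feedback current used here are illustrative assumptions, not values from the paper:

```python
E_CHARGE = 1.602e-19  # electron charge [C]

def tot_clocks(q_e: float,
               q_thr_e: float = 4000.0,   # assumed discriminator threshold [e-]
               i_f: float = 2e-9,         # assumed feedback current [A]
               clock_ns: float = 25.0) -> int:
    """Time over threshold in bunch-crossing clocks for a triangular
    pulse discharged at constant current: width ~ (Q - Q_thr) / I_f."""
    if q_e <= q_thr_e:
        return 0
    width_ns = (q_e - q_thr_e) * E_CHARGE / i_f * 1e9
    return int(width_ns // clock_ns)

# ToT grows (nearly) linearly with the deposited charge
print(tot_clocks(14000.0), tot_clocks(24000.0))
```

Doubling the charge above threshold doubles the ToT, which is why a single global $I_{f}$ setting trades ToT resolution against pixel dead time.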
The 14 configuration bits used in every pixel are stored in SEU-tolerant DICE cells [@dice] which can be written and read back by means of a shift register running vertically through the column. The layout of this cell has been significantly changed with respect to the previous chip (FE-I1) in order to improve the SEU tolerance. For the globally set DACs a basic triple-redundancy scheme with majority readback is used. The basic philosophy of the readout is to associate each hit with a unique bunch crossing by recording a time stamp when the rising edge of the discriminator occurs. The time of the falling edge is memorized as well in order to calculate the ToT as the difference of the two values in units of the bunch crossing clock. The two 8-bit RAM cells used for this purpose are classical static memory cells. The readout scheme can be divided into four elementary tasks running on the chip in parallel. First, time stamps generated by an 8-bit Gray counter clocked with the bunch crossing clock are stored in the pixel for the rising and falling edges of the discriminator. After receiving the falling edge a hit flag is set and the pixel is ready to be processed. Secondly, as soon as hits are flagged to the Column Control Logic, the uppermost hit pixel is requested to send its rising and falling edge time stamps together with its ID down the column, where this information is stored in a free location of the End-of-Column (EoC) buffer pool. The pixel is then cleared and the scan continues the search for other hit pixels. Thirdly, the hit stays in the EoC buffer until the trigger latency has elapsed. The leading-edge time stamps of the hits in the buffer are therefore permanently compared to the actual time stamp minus the fixed but programmable latency. When the comparison is true and a trigger signal is present, the hit is flagged as valid for readout; otherwise it is discarded. 
Incoming triggers are counted on the chip, and the trigger number is stored together with the flag in the EoC buffer so that several triggers do not lead to confusion. A list of pending triggers is kept in a FIFO. Lastly, a Readout Controller initiates the serial readout of hit data as soon as pending triggers are present in the Trigger FIFO; the EoC buffers are searched for valid data with the correct trigger numbers. The column and row address and the ToT of these hits are serialized and sent to the MCC. The Readout Controller adds a start-of-event and an end-of-event word together with error and status bits to the data stream. The recent chip (FE-I2) has been designed in a quarter-micron technology with 6 metal layers using radiation-tolerant layout rules [@snoeys]. Tests of the first batches received back from the vendor from May 2003 on showed that the design is fully functional, except for a race condition in the control block which made it necessary to operate the chip with a reduced digital supply voltage of 1.6 V instead of the nominal 2.0 V. This problem has been fixed via a redistribution of the clock signal, by slightly changing 2 metal layers in the back-end processing (FE-I2.1). Results of prototype modules {#sec:results} ============================ Several tens of modules have been built with the older chip generation (FE-I1) in order to qualify the bumping process and the whole module production chain. As an example of the overall good quality of these modules, figure \[fig:source\] shows the hitmap and spectrum obtained with one module illuminated from the top by an Am$^{241}$ source, using the self-trigger capability of the chip. Fewer than 10 out of $46,080$ pixels do not see source hits, demonstrating the excellent bump quality of this module, which is representative of almost all modules. For every channel the ToT response was calibrated individually using the integrated charge generation circuit. 
Afterwards this calibration was applied to the source measurement in order to obtain the shown spectrum, which is a sum over all pixels. Since no clustering of the hit data was applied the spectrum is relatively broad, but one can clearly see the main $60~$keV photo-peak roughly at the expected value of $16,600~e^{-}$. More results obtained with FE-I1 modules can be found in [@jgrosse]. Recently the first modules with the newer chip generation (FE-I2.1) have been built. Figures \[fig:threshold\] and \[fig:noise\] show the threshold distribution and noise measured for a whole module after trimming the individual pixels to a threshold of roughly $3,100~e^{-}$. All these measurements were done with the internal charge injection circuit. The overall threshold dispersion reached is only $40~e^{-}$, which has to be compared to the value of $100~e^{-}$ reached by the older chip generation. Note further that there is not a single pixel with a threshold lower than $2,900~e^{-}$. This shows the high tuning capability of this chip, which is important to reach small thresholds on the whole module without any extra noisy pixels, especially after irradiation. The measured mean noise of the module, $150~e^{-}$ for the standard pixels, is much better than the specification of $300~e^{-}$; even the long and the so-called ganged pixels located in the inter-chip regions, which show a higher noise of $170~e^{-}$ and $270~e^{-}$ respectively due to their higher input capacitance, are well below this specification. The most demanding requirement of the LHC is the time-correct hit detection within one bunch crossing cycle (25 ns) together with the strict power budget of less than 4 W for the whole 16-chip module. Therefore it is not the real threshold that matters but the in-time threshold, which takes into account only hits that arrive within a time slot of 20 ns after the injection. 
Using a special delay circuit inside the MCC, measurements with the needed accuracy are possible, and figure \[fig:twalk\] shows the results for the same module tuned to a threshold of $3,100~e^{-}$. Because the timing is very sensitive to the input capacitance of the amplifier, the in-time threshold is slightly different for the different kinds of pixels. Overall an in-time threshold of $4,200~e^{-}$ is reached, and even the special pixels do not show a threshold higher than $4,600~e^{-}$, meaning that only an extra charge of $1,100~e^{-}$ and $1,500~e^{-}$ respectively is needed to meet the timing requirement of ATLAS. These results are much improved with respect to the previous chip generation, where the in-time threshold was of the order of $5,000$-$6,000~e^{-}$ at a tuned threshold of $3,000~e^{-}$. This comfortably meets the specification of $5,000~e^{-}$, which follows from the fact that after irradiation damage only half of the initial charge of $20,000~e^{-}$ is available. More extensive studies have been done with these modules, including beam test measurements and measurements during and after proton irradiation up to 100 MRad or $2\cdot 10^{15}~$n$_{eq}$cm$^{-2}$, well above the design fluences of 50 MRad and $1\cdot 10^{15}~$n$_{eq}$cm$^{-2}$ respectively. All these measurements confirmed the good quality of the chip and the whole module; in particular it was shown that these modules meet all requirements and are able to operate in the environment of ATLAS and the LHC. Conclusions {#sec:conclusions} =========== The recent ATLAS pixel Front-End chip generation has been successfully produced, assembled into full-size modules and tested. Improvements concerning the timing behaviour and the chip threshold tuning capability with respect to the previous chip generation have been obtained. All further results, including beam tests and irradiation, confirmed the overall good quality of the chip, which reaches all requirements. 
By building more than 100 modules with high quality, a lot of experience has been gained with the fine-pitch bump bonding process for pixel modules, using two different vendors and technologies, showing that the target of building the $2,000$ modules needed for ATLAS in the next two years is feasible. [9]{} ATLAS Pixel Detector Technical Design Report, [*CERN LHCC 98-13*]{} (1998). R. Beccherle et al., [*Nucl. Instr. and Meth.*]{} [**A492**]{} (2002) 117–133. W. Snoeys et al., [*Nucl. Instr. and Meth.*]{} [**A439**]{} (2000) 349–360. T. Calin, M. Nicolaidis, R. Velazco, [*IEEE Trans. Nucl. Science*]{}, [**Vol. 43, No. 6**]{}, Dec. 1996, 2874–2878. J. Grosse-Knetter, these proceedings. [^1]: Institut für Zuverlässigkeit und Mikrointegration, Berlin, Germany. [^2]: Alenia Marconi Systems, Roma, Italy.
--- abstract: 'We present a method for the construction of ensembles of random networks that consist of a single connected component with a given degree distribution. This approach extends the construction toolbox of random networks beyond the configuration model framework, in which one controls the degree distribution but not the number of components and their sizes. Unlike configuration model networks, which are completely uncorrelated, the resulting single-component networks exhibit degree-degree correlations. Moreover, they are found to be disassortative, namely high-degree nodes tend to connect to low-degree nodes and vice versa. We demonstrate the method for single-component networks with ternary, exponential and power-law degree distributions.' author: - Ido Tishby - Ofer Biham - Eytan Katzav - Reimer Kühn title: Generating random networks that consist of a single connected component with a given degree distribution --- Introduction ============ Network models provide a useful description of a broad range of phenomena in the natural sciences and engineering as well as in the economic and social sciences. This realization has stimulated increasing interest in the structure of complex networks, and in the dynamical processes that take place on them [@Albert2002; @Dorogovtsev2003; @Dorogovtsev2008; @Newman2010; @Barrat2012; @Hofstad2013; @Havlin2010; @Estrada2011; @Latora2017]. One of the central lines of inquiry has been concerned with the existence of a giant connected component that is extensive in the network size. In the case of Erdős-Rényi (ER) networks, the critical parameters for the emergence of a giant component in the thermodynamic limit were identified and the fraction of nodes that reside in the giant component was determined [@Erdos1959; @Erdos1960; @Erdos1961; @Bollobas1984]. These studies were later extended to the broader class of configuration model networks [@Molloy1995; @Molloy1998]. 
The configuration model framework enables one to construct an ensemble of random networks whose degree sequences are drawn from a desired degree distribution, with no degree-degree correlations. The resulting network ensemble is a maximum entropy ensemble under the condition of the given degree distribution. A simple example of a configuration model network is the random regular graph, in which all the nodes are of the same degree, $k=c$. For random regular graphs with $c \ge 3$ the giant component encompasses the whole network [@Bollobas2001]. However, in general, configuration model networks often exhibit a coexistence between a giant component, which is extensive in the network size, and many finite components, which are non-extensive trees. This can be exemplified by the case of ER networks, which exhibit a Poisson degree distribution of the form $$P(k) = \frac{e^{-c}c^k}{k!}, \label{eq:Poisson}$$ where $c=\langle K \rangle$ is the mean degree. ER networks with $0 < c < 1$ consist of finite tree components. At $c=1$ there is a percolation transition, above which the network exhibits a coexistence between the giant component and the finite components. In the asymptotic limit, the size of the giant component is $N_1 = g N$, where $N$ is the size of the whole network and the parameter $g=g(c)$, which vanishes for $c \le 1$, increases monotonically for $c > 1$. At $c = \ln N$ there is a second transition, above which the giant component encompasses the entire network [@Bollobas2001]. In the range of $1 < c < \ln N$, where the giant and finite components coexist, the structure and statistical properties of the giant component differ significantly from those of the whole network. In particular, the degree distribution of the giant component differs from $P(k)$ and it exhibits degree-degree correlations. 
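For ER networks the self-consistency condition for the giant component fraction reduces to $g = 1 - e^{-cg}$, which is easy to solve numerically. A minimal sketch (fixed-point iteration; the starting point and tolerance are implementation choices):

```python
import math

def er_giant_component_fraction(c, tol=1e-12, max_iter=10_000):
    """Fraction g of nodes in the giant component of an ER network with mean
    degree c, from the self-consistency relation g = 1 - exp(-c * g)."""
    if c <= 1.0:          # below or at the percolation transition
        return 0.0
    g = 1.0               # start from the fully-connected guess
    for _ in range(max_iter):
        g_new = 1.0 - math.exp(-c * g)
        if abs(g_new - g) < tol:
            break
        g = g_new
    return g

# g vanishes for c <= 1 and grows monotonically above the transition.
print(er_giant_component_fraction(0.5))            # 0.0
print(round(er_giant_component_fraction(2.0), 4))  # 0.7968
```

The iteration converges linearly for any $c > 1$, since the map $g \mapsto 1 - e^{-cg}$ is a contraction near the nontrivial fixed point.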
Recently, we developed a theoretical framework for the analytical calculation of the degree distribution and the degree-degree correlations in the giant component of configuration model networks [@Tishby2018]. In particular, this framework provides an analytical expression for the degree distribution of the giant component, denoted by $P(k|1)$, in terms of the degree distribution $P(k)$ of the whole network. We applied this approach to the most commonly studied configuration model networks, namely with Poisson, exponential and power-law degree distributions. We have shown that the degree distribution of the giant component enhances the weight of the high-degree nodes and depletes the low-degree nodes, with respect to the whole network. Moreover, we found that the giant component is disassortative, namely high-degree nodes preferentially connect to low-degree nodes and vice versa. This appears to be a crucial feature that helps to maintain the integrity of the giant component. In this paper we introduce a method for the construction of ensembles of random networks that consist of a single connected component with a given degree distribution, $P(k|1)$. This is done by inverting the equations that express the degree distribution of the giant component $P(k|1)$ in terms of the degree distribution $P(k)$ of the whole network. When a configuration model network is constructed with the degree distribution $P(k)$ obtained from the inversion process, its giant component is found to exhibit the desired degree distribution $P(k|1)$. We apply this approach to the construction of ensembles of random networks that consist of a single connected component with ternary, exponential and power-law degree distributions. The paper is organized as follows. In Sec. II we present the configuration model network ensemble. In Sec. III we present a method for the construction of a single-component network with a given degree distribution. In Sec. 
IV we analyze the properties of the resulting single-component networks. In particular, we present analytical expressions for the degree-degree correlations and the assortativity coefficient. In Sec. V we apply this methodology for the construction of networks that consist of a single connected component and exhibit ternary, exponential and power-law distributions. The results are discussed in Sec. VI and summarized in Sec. VII. The configuration model ======================= The configuration model network ensemble is an ensemble of uncorrelated random networks whose degree sequences are drawn from a given degree distribution, $P(k)$. In theoretical studies one often considers the asymptotic case in which the network size is infinite. In computer simulations, the network size $N$ is finite and the degree distribution is bounded from above and below such that $k_{\rm min} \le k \le k_{\rm max}$. For example, the commonly used choice of $k_{\rm min}=1$ eliminates the possibility of isolated nodes in the network. Choosing $k_{\rm min}=2$ also eliminates the leaf nodes. Controlling the upper bound is important in the case of finite networks with degree distributions that exhibit fat tails, such as power-law degree distributions. The configuration model ensemble is a maximum entropy ensemble under the condition that the degree distribution $P(k)$ is imposed [@Newman2001; @Newman2010]. In this paper we focus on the case of undirected networks. To generate a network instance drawn from an ensemble of configuration model networks of $N$ nodes, with a given degree distribution $P(k)$, one draws the degrees of the $N$ nodes independently from $P(k)$. This gives rise to a degree sequence of the form $k_1,k_2,\dots,k_N$ (where $\sum k_i$ must be even). 
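The drawing procedure just described can be sketched as follows. This is a minimal illustration; the even-sum repair by resampling a single node is one simple choice among several, and the example distribution is illustrative.

```python
import random

def draw_degree_sequence(n, pk):
    """Draw a degree sequence of n nodes from the distribution pk, given as a
    dict {degree: probability}. If the total degree is odd, one node is
    resampled until the sum is even (the sum must equal 2L for L edges);
    this assumes pk contains degrees of both parities."""
    degrees = list(pk.keys())
    weights = list(pk.values())
    seq = random.choices(degrees, weights=weights, k=n)
    while sum(seq) % 2 != 0:
        seq[random.randrange(n)] = random.choices(degrees, weights=weights)[0]
    return seq

# Example with an illustrative distribution supported on k = 1..4.
pk = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
seq = draw_degree_sequence(1000, pk)
assert sum(seq) % 2 == 0
```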
Configuration model networks do not exhibit degree-degree correlations, which means that the conditional degree distribution of random neighbors of a random node of degree $k$ satisfies $P(k'|k)=k' P(k')/\langle K \rangle$ and does not depend on $k$. Also, the local structure of the network around a random node is typically a tree structure. A central feature of configuration model networks and other random networks above the percolation transition is the small-world property, namely the fact that the mean distance scales like $\langle L \rangle \sim \ln N$. Moreover, it was shown that scale-free networks for which $P(k) \propto k^{-\gamma}$ may be ultrasmall, depending on the exponent $\gamma$. In particular, for $2 < \gamma < 3$ their mean distance scales like $\langle L \rangle \sim \ln \ln N$ [@Cohen2003]. Configuration model networks in which $k_{\rm min}=1$ exhibit three different phases. In the sparse network limit, below the percolation transition, they consist of many finite tree components. Above the percolation transition there is a coexistence of a giant component and finite tree components. In the dense network limit there is a second transition, above which the giant component encompasses the whole network. In this paper we focus on the intermediate domain in which the giant and finite components coexist. The size of the giant component is determined by the degree distribution $P(k)$. The construction of configuration model networks ------------------------------------------------ For the computer simulations presented below, we draw random network instances from an ensemble of configuration model networks of $N$ nodes, which follow a given degree distribution, $P(k)$. For each network instance we generate a degree sequence of the form $k_1, k_2,\dots,k_N$, as described above. For the discussion below it is convenient to list the degree sequence in a decreasing order of the form $k_1 \ge k_2 \ge \dots \ge k_N$. 
It turns out that not every possible degree sequence is graphic, namely admissible as a degree sequence of a network. Therefore, before trying to construct a network with a given degree sequence, one should first confirm the graphicality of the degree sequence. To be graphic, a degree sequence must satisfy two conditions. The first condition is that the sum of the degrees is an even number, namely $\sum_{i=1}^N k_i = 2 L$, where $L$ is an integer that represents the number of edges in the network. The second condition is expressed by the Erd[ő]{}s-Gallai theorem, which states that an ordered sequence of the form $k_1 \ge k_2 \ge \dots \ge k_N$ is graphic if and only if the condition $$\sum_{i=1}^n k_i \le n(n-1) + \sum_{i=n+1}^N \min (k_i,n) \label{eq:EG}$$ holds for all values of $n$ in the range $1 \le n \le N-1$ [@Erdos1960b; @Choudum1986]. A convenient way to construct a configuration model network is to prepare the $N$ nodes such that each node $i$ is connected to $k_i$ half edges or stubs [@Newman2010]. At each step of the construction, one connects a random pair of stubs that belong to two different nodes $i$ and $j$ that are not already connected, forming an edge between them. This procedure is repeated until all the stubs are exhausted. The process may get stuck before completion in a case in which all the remaining stubs belong to the same node or to pairs of nodes that are already connected. In such a case one needs to perform some random reconnections in order to complete the construction. The degree distribution of the giant component ---------------------------------------------- Consider a configuration model network of $N$ nodes with a degree distribution, $P(k)$. To obtain the probability $g$ that a random node in the network belongs to the giant component, one needs to first calculate the probability $\tilde g$ that a random neighbor of a random node, $i$, belongs to the giant component of the reduced network that does not include the node $i$. 
The probability $\tilde g$ is determined by [@Havlin2010] $$1 - {\tilde g} = G_1(1 - {\tilde g}), \label{eq:tg}$$ where $$G_1(x) = \sum_{k=1}^{\infty} x^{k-1} {\widetilde P}(k) \label{eq:G1}$$ is the generating function of ${\widetilde P}(k)$, and $${\widetilde P}(k) = \frac{k}{\langle K \rangle} P(k) \label{eq:tilde}$$ is the degree distribution of nodes that are sampled as random neighbors of random nodes. Using $\tilde g$, one can then obtain the probability $g$ from the equation $$g = 1 - G_0(1 - {\tilde g}), \label{eq:g}$$ where $$G_0(x) = \sum_{k=0}^{\infty} x^{k} P(k) \label{eq:G0}$$ is the generating function of $P(k)$. Given that $G_0(x)$ and $G_1(x)$, defined by Eqs. (\[eq:G0\]) and (\[eq:G1\]), respectively, are probability generating functions, they satisfy $G_0(1)=G_1(1) =1$. This property entails that $\tilde g =0$ is always a solution of Eq. (\[eq:tg\]). This (trivial) solution implies $g=0$ and describes a subcritical network, in which case the key question is whether other solutions with $\tilde g >0$, hence $g>0$, exist as well. In configuration model networks that do not include any isolated nodes (of degree $k=0$) and leaf nodes (of degree $k=1$), namely $k_{\rm min} \ge 2$, the generating functions satisfy $G_0(0) = 0$ and $G_1(0)=0$, so that $\tilde g = 1$ is also a solution; it corresponds to the case where the giant component encompasses the whole network and $g=\tilde g=1$. This implies that in such networks both $x=0$ and $x=1$ are fixed points of both $G_0(x)$ and $G_1(x)$. Furthermore, it can be shown that in networks whose degree distributions satisfy the condition that $k_{\rm min} \ge 2$ and $k_{\rm max} \ge 3$ there are no other (nontrivial) fixed points for $G_0(x)$ and $G_1(x)$ with $0 < x < 1$ [@Bonneau2017]. This means that in such networks the giant component encompasses the whole network. Here we are interested in configuration model networks that exhibit a coexistence between the giant and the finite components. 
Such coexistence appears for degree distributions that support a non-trivial solution of Eq. (\[eq:tg\]), in which $0 < \tilde g < 1$. A necessary condition for such a solution is the existence of leaf nodes of degree $k=1$, namely, $P(1) > 0$. Therefore, we focus here on degree distributions in which $k_{\rm min}=1$. For the analysis presented below we introduce an indicator variable $\Lambda \in \{0,1\}$, where $\Lambda=1$ indicates that an event takes place on the giant component and $\Lambda=0$ indicates that it happens on one of the finite components. In this notation, the probability that a random node resides on the giant component is $P(\Lambda = 1) = g$, and the probability that it resides on one of the finite components is $P(\Lambda=0) = 1 - g$. Similarly, the probability that a random neighbor of a random node resides on the giant component is $\widetilde P(\Lambda = 1) = \tilde g$ and the probability that it resides on one of the finite components is $\widetilde P(\Lambda=0) = 1 - \tilde g$. A node, $i$, of degree $k$ resides on the giant component if at least one of its $k$ neighbors resides on the giant component of the reduced network from which $i$ is removed. Therefore, the probability $g_k$ that a random node of degree $k$ resides on the giant component is given by $$g_k = P(\Lambda = 1 | k)= 1 - (1 - \tilde g)^k, \label{eq:L1k}$$ while the probability that such a node resides on one of the finite components is $$P(\Lambda = 0 | k) = 1 - g_k = (1 - \tilde g)^k. \label{eq:L0k}$$ Using Bayes’ theorem, one can show that the degree distribution, conditioned on the giant component, is given by [@Tishby2018] $$P(k | \Lambda =1) = \frac{ 1- (1-\tilde g)^k}{g} P(k), \label{eq:kL1}$$ while the degree distribution, conditioned on the finite components, is given by $$P(k | \Lambda =0) = \frac{ (1-\tilde g)^k}{1 - g} P(k). 
\label{eq:kL0}$$ The mean degree of the giant component is $$\mathbb{E}[K | \Lambda=1] = \frac{1-(1-\tilde g)^2}{g} \langle K \rangle, \label{eq:EkL1}$$ while the mean degree on the finite components is $$\mathbb{E}[K | \Lambda=0] = \frac{(1-\tilde g)^2}{1-g} \langle K \rangle, \label{eq:EkL0}$$ where $$\langle K \rangle = \sum_{k=0}^{\infty} k P(k)$$ is the mean degree of the whole network. In the rest of the paper, for the sake of brevity, we will drop the indicator $\Lambda$ and use $P(k|0)$ and $P(k|1)$ to denote the degree distribution on the finite components and on the giant component, respectively. Similarly, we will use $\mathbb{E}[K |0]$ ($\mathbb{E}[K |1]$) to denote the expected degree on the finite (giant) component. It is interesting to mention that just above the percolation transition, when the giant component just emerges, $\mathbb{E}[K |1] \rightarrow 2$ [@Tishby2018; @Tishby2018b]. This will be important in the rest of the paper, because it means that if one wants to generate a network that forms a single component with a given degree distribution $P(k|1)$, the mean of this distribution must satisfy $\mathbb{E}[K|1] \ge 2$. From a different angle, a single tree component of $N$ nodes satisfies $\mathbb{E}[K|1] = 2-2/N$ [@Katzav2018], thus $\mathbb{E}[K|1] \rightarrow 2$ in the asymptotic limit. Above the percolation transition cycles start to emerge in the giant component, and $\mathbb{E}[K|1]$ gradually increases. As the network becomes more dense, the fraction of nodes, $g$, that reside on the giant component increases. When $g \rightarrow 1$ the giant component encompasses the whole network. The value of $\mathbb{E}[K|1]$ at which $g \rightarrow 1$ depends on the degree distribution. 
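Eqs. (\[eq:tg\]), (\[eq:g\]) and (\[eq:kL1\]) can be evaluated numerically for any supercritical degree distribution with finite support. A minimal sketch, with an illustrative distribution chosen so the fixed point can be checked by hand:

```python
def giant_component_stats(pk, tol=1e-12, max_iter=100_000):
    """Solve 1 - tg = G_1(1 - tg) by fixed-point iteration and return
    (tg, g, pk_giant), where pk_giant is the degree distribution conditioned
    on the giant component. pk is a dict {degree: probability}, assumed to be
    supercritical with P(1) > 0, so that 0 < tg < 1."""
    mean_k = sum(k * p for k, p in pk.items())

    def G0(x):
        return sum(p * x**k for k, p in pk.items())

    def G1(x):
        return sum(k * p * x**(k - 1) for k, p in pk.items()) / mean_k

    u = 0.0                       # u = 1 - tg; start away from the trivial u = 1
    for _ in range(max_iter):
        u_new = G1(u)
        if abs(u_new - u) < tol:
            break
        u = u_new
    tg = 1.0 - u
    g = 1.0 - G0(u)
    pk_giant = {k: (1.0 - u**k) / g * p for k, p in pk.items()}
    return tg, g, pk_giant

# Illustrative example: P(1) = 0.5, P(2) = 0.2, P(3) = 0.3, so <K> = 1.8 and
# the nontrivial fixed point is u = 5/9 exactly (tg = 4/9).
tg, g, pk1 = giant_component_stats({1: 0.5, 2: 0.2, 3: 0.3})
```

As expected from Eq. (\[eq:kL1\]), the conditional distribution `pk1` is normalized, depletes the leaves and enhances the high-degree nodes relative to `pk`.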
The size of the giant component ------------------------------- The expectation value of the size of the giant component of a configuration model of $N$ nodes with a degree distribution $P(k)$ is given by $$\langle N_1 \rangle = N g, \label{eq:N_1}$$ where $g$ is given by Eq. (\[eq:g\]). However, in any single network instance the size $N_1$ of the giant component may deviate from $\langle N_1 \rangle$. Below we consider the distribution $P(N_1)$ of the sizes of the giant components obtained in an ensemble of configuration model networks of $N$ nodes with degree distribution $P(k)$. To get a rough idea about the form of $P(N_1)$, one may assume, for simplicity, that each node independently resides on the giant component with probability $g$, with no correlations between different nodes. In such a case, $P(N_1)$ would follow a binomial distribution that converges to a Gaussian distribution whose mean is given by Eq. (\[eq:N\_1\]). The variance of such a distribution is given by $${\rm Var}(N_1) = N \sum_{k=1}^{N-1} g_k (1-g_k) P(k), \label{eq:VarN1}$$ where $g_k$ is given by Eq. (\[eq:L1k\]). In dense networks that exhibit a narrow degree distribution, such that $g_k$ is only weakly dependent on $k$, Eq. (\[eq:VarN1\]) can be approximated by $${\rm Var}(N_1) = N g (1-g). \label{eq:VarN1app}$$ In the case of ER networks [@Kang2016; @Bollobas2013], in which $P(k)$ is a Poisson distribution, as in Eq. (\[eq:Poisson\]), it was shown that $P(N_1)$ is a Gaussian distribution whose mean is given by Eq. (\[eq:N\_1\]) and its variance is given by $${\rm Var}(N_1) = \frac{Ng(1-g)}{1 - \langle K \rangle(1-g)}.$$ For configuration model networks with other degree distributions there are rigorous results for the size distribution of the giant component only in the weakly supercritical range [@Riordan2012; @Bollobas2013], which is just above the percolation phase transition. 
More precisely, in configuration model networks the percolation transition follows the Molloy-Reed criterion [@Molloy1995; @Molloy1998], namely, it takes place at $\langle K(K-1) \rangle/\langle K \rangle = 1$. Just above the transition, in the limit $\epsilon=\langle K(K-1) \rangle/\langle K \rangle - 1 \rightarrow 0^{+}$, the distribution $P(N_1)$ is a Gaussian distribution whose mean is $$\langle N_1 \rangle = \frac{2 \langle K \rangle^2}{\langle K(K-1)(K-2) \rangle} \epsilon N$$ and its variance is given by $${\rm Var}(N_1) = \frac{2 \langle K \rangle}{\epsilon} N.$$ This means that at the percolation transition the variance of $N_1$ diverges, and starts decreasing above the transition. There are no rigorous results in the full supercritical range, but following the ER case, it is plausible that the normality of $P(N_1)$ still holds, at least for a degree distribution $P(k)$ with a finite variance, while the variance ${\rm Var}(N_1)$ decreases. The main conclusion of this discussion is that sufficiently far above the percolation transition, where the giant component is not too small, the size fluctuations of the giant component become negligible as $N$ is increased. The construction of a single-component network with a given degree distribution =============================================================================== Here we present a method for the construction of a network that consists of a single component whose degree sequence is effectively drawn from a given degree distribution, denoted by $P(k|1)$. The approach is based on the construction of a configuration model network whose degree sequence is drawn from a suitable degree distribution $P(k)$, such that its giant component exhibits the desired degree distribution, $P(k|1)$. Inverting Eq. 
(\[eq:kL1\]) we find that in order to obtain a giant component whose degree distribution is $P(k|1)$, the degree distribution of the whole network should be $$P(k) = \frac{g}{1-(1-\tilde g)^k} P(k|1), \label{eq:pk2}$$ where $\tilde g$ is given by Eq. (\[eq:tg\]) and $g$ is given by Eq. (\[eq:g\]). The mean degree of the whole network will thus be $$\langle K \rangle = \sum_{k=1}^{\infty} \frac{g k }{1-(1-\tilde g)^k} P(k|1).$$ In order to obtain an ensemble of single-component networks whose mean size is $\langle N_1 \rangle$, the size of the configuration model networks from which these giant components are obtained should be $$N = \frac{\langle N_1 \rangle}{g}. \label{eq:N1g}$$ For the analysis below it is useful to introduce the generating functions for the degree distribution conditioned on the giant component, namely $$G_0^1(x) = \sum_{k=1}^{\infty} x^{k} P(k|1) \label{eq:G01}$$ and $$G_1^1(x) = \sum_{k=1}^{\infty} \frac{ k x^{k-1} }{ {\mathbb E}[K|1]} P(k|1). \label{eq:G11}$$ These generating functions are related to each other by the equation $$G_1^1(x) = \frac{ \frac{d }{dx} G_0^1(x) }{ \frac{d }{dx} G_0^1(x) \vert_{x=1}} . \label{eq:G0G1}$$ In order to calculate the probability $\tilde g$, we utilize Eq. 
(\[eq:tg\]), where we express $P(k)$ and $\langle K \rangle$ in terms of $P(k|1)$, and obtain $$1 - \tilde g = \frac{ \sum\limits_{k=1}^{\infty} \frac{ k\left(1-\tilde g \right)^{k-1} } {1 - (1-\tilde g)^k} P(k|1) } { \sum\limits_{k=1}^{\infty} \frac{ k }{1 - (1-\tilde g)^k} P(k|1) }.$$ Using the Taylor expansion of $(1-x)^{-1}$, which takes the form $$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n, \label{eq:taylor}$$ where $0 < x < 1$, to express the term $1/[1-(1-\tilde g)^k]$ as a power series in $(1-\tilde g)^k$, we obtain $$1 - \tilde g = \frac{ \sum\limits_{k=1}^{\infty} k (1-\tilde g)^{k-1} \sum\limits_{n=0}^{\infty} (1-\tilde g)^{kn} P(k|1) } { \sum\limits_{k=1}^{\infty} k \sum\limits_{n=0}^{\infty} (1-\tilde g)^{kn} P(k|1) }.$$ Multiplying both sides by $1-\tilde g$ and exchanging the order of summations in the numerator and denominator, we obtain $$(1 - \tilde g)^2 = \frac{ \sum\limits_{n=1}^{\infty} (1-\tilde g)^n \sum\limits_{k=1}^{\infty} k (1-\tilde g)^{n(k-1)} P(k|1) } { \sum\limits_{n=0}^{\infty} (1-\tilde g)^n \sum\limits_{k=1}^{\infty} k (1-\tilde g)^{n(k-1)} P(k|1) }.$$ Adding and subtracting the $n=0$ term in the numerator, this equation can be expressed in the form $$(1 - \tilde g)^2 = 1 - \frac{ {\mathbb E}[K|1] } { \sum\limits_{n=0}^{\infty} (1-\tilde g)^n \sum\limits_{k=1}^{\infty} k (1-\tilde g)^{n(k-1)} P(k|1) }. \label{eq:1mtgs}$$ Using the generating function $G_1^1(x)$, Eq. (\[eq:1mtgs\]) can be written in the form $$(1 - \tilde g)^2 = 1 - \frac{ 1 } { \sum\limits_{n=0}^{\infty} (1-\tilde g)^n G_1^1[(1-\tilde g)^n] },$$ or in the form $$\tilde g (2-\tilde g) \sum_{n=0}^{\infty} (1-\tilde g)^n G_1^1[(1-\tilde g)^n] = 1. \label{eq:g2mg}$$ This is an implicit equation that should be solved in order to obtain the parameter $\tilde g$. For some degree distributions one can obtain a closed form analytical expression for $\tilde g$, while for other distributions it should be calculated numerically. 
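One direct numerical route is to truncate the sum over $n$ in Eq. (\[eq:g2mg\]) and locate the nontrivial root by bisection. This is a sketch under the assumption that the physical root is the only sign change of the residual in $(0,1)$ (the trivial root $\tilde g = 1$ is excluded by the bracket); the ternary example distribution is illustrative.

```python
def solve_tg_from_pk1(pk1, n_terms=500, iters=80):
    """Solve Eq. (g2mg) for tg, given the target giant-component degree
    distribution pk1 = {k: prob} with E[K|1] >= 2. The infinite sum over n
    is truncated at n_terms and the root is found by bisection, assuming a
    single sign change of the residual inside (0, 1)."""
    ek1 = sum(k * p for k, p in pk1.items())

    def G1_1(x):  # generating function G_1^1 of Eq. (G11)
        return sum(k * p * x**(k - 1) for k, p in pk1.items()) / ek1

    def residual(tg):
        u = 1.0 - tg
        s = sum(u**n * G1_1(u**n) for n in range(n_terms))
        return tg * (2.0 - tg) * s - 1.0

    lo, hi = 1e-6, 1.0 - 1e-6   # residual < 0 near 0, > 0 just below tg = 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Ternary example with P(1|1) = 0.3, P(3|1) = 0.7, so E[K|1] = 2.4 >= 2.
tg = solve_tg_from_pk1({1: 0.3, 3: 0.7})
```

The truncation at `n_terms` is safe away from the transition, where $(1-\tilde g)^n$ decays quickly; close to $\tilde g \to 0$ more terms are needed.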
A useful approximation scheme would be to replace the sum in Eq. (\[eq:g2mg\]) by an integral. To improve the accuracy of this approximation, it is useful to first separate the $n=0$ and the $n=1$ terms from the rest of the sum and obtain $$\tilde g (2-\tilde g) \left[ 1 + (1-\tilde g) G_1^1(1-\tilde g) + \sum_{n=2}^{\infty} (1-\tilde g)^n G_1^1[(1-\tilde g)^n] \right] = 1. \label{eq:g2mg12}$$ Using Eq. (\[eq:G0G1\]) we find that $$x^n G_1^1(x^n) = \frac{ \frac{\partial}{\partial n} [G_0^1(x^n)] }{ {\mathbb E}[K|1] \ln x }. \label{eq:xnGxn}$$ Replacing the sum $\sum_{n=2}^{\infty}$ in Eq. (\[eq:g2mg12\]) by an integral of the form $\int_{3/2}^{\infty} dn$ and carrying out the integration using Eq. (\[eq:xnGxn\]), we obtain $$\tilde g (2-\tilde g) \left[ 1 + (1-\tilde g) G_1^1(1-\tilde g) - \frac{G_0^1\left[ (1-\tilde g)^{3/2} \right]}{{\mathbb E}[K|1] \ln (1-\tilde g)} \right] =1. \label{eq:gtimplicit}$$ This equation is easier to handle than Eq. (\[eq:g2mg\]), although usually it can be solved only numerically. Other, more precise schemes could be devised by treating more individual terms of the sum in Eq. (\[eq:g2mg\]) separately, say up to $n=2$ or $n=3$, and approximating the tail of the sum by an integral. Our experience tells us that for the cases considered in this paper, using the $n=1$ scheme provides values of $\tilde g$ that differ by at most a few percent from the exact value. Once the parameter $\tilde g$ is known, the parameter $g$ can be obtained from $$1 - g = \sum_{k=1}^{\infty} \frac{g (1-\tilde g)^k }{1 - (1-\tilde g)^k} P(k|1).$$ Extracting $g$ we obtain $$g = \frac{1}{ 1 + \sum\limits_{k=1}^{\infty} \frac{(1-\tilde g)^k}{1 - (1-\tilde g)^k} P(k|1) }.$$ Expanding the denominator according to Eq. (\[eq:taylor\]) and exchanging the order of the summations, we obtain $$g = \frac{1}{ 1 + \sum\limits_{n=1}^{\infty} G_0^1[(1-\tilde g)^n] }.
\label{eq:gshort}$$ To conclude, in order to obtain an ensemble of single-component networks whose mean size is $\langle N_1 \rangle$, with degree sequences that are effectively drawn from $P(k|1)$, one constructs an ensemble of configuration model networks whose size $N$ is given by Eq. (\[eq:N1g\]) and whose degree distribution $P(k)$ is given by Eq. (\[eq:pk2\]). The giant components of these networks are the desired single-component networks. The mean degree $\langle K \rangle$ of the configuration model networks is $$\langle K \rangle = \frac{g}{1-(1-\tilde g)^2} \mathbb{E}[K | 1]. \label{eq:<K>_EK}$$ Note that it is also possible to control the exact size of the single-component network. Consider the case in which the desired size of a given instance of the single-component network is $\lfloor \langle N_1 \rangle \rfloor$, namely the integer part of $\langle N_1 \rangle$. In a case in which the size of the giant component $n_1$ comes out smaller than $\lfloor \langle N_1 \rangle \rfloor$, one should add nodes to the configuration model network until the giant component reaches the desired size. The degrees of the added nodes are drawn from $P(k)$. To add a node of even degree $k$ to the network one picks randomly $k/2$ edges that connect $k$ distinct nodes. One then cuts each edge in the middle to generate $k$ stubs. The $k$ stubs of the new node are then connected to these $k$ stubs. In the case of nodes of odd degrees, $k$ and $k'$, one picks randomly $(k+k')/2$ edges and cuts them in the middle to generate $k+k'$ stubs. The stubs of the two new nodes are then connected randomly to these $k+k'$ stubs. In a case in which $n_1$ comes out larger than $\lfloor \langle N_1 \rangle \rfloor$ one should delete random nodes (one at a time for even-degree nodes and in pairs for odd-degree nodes), until the giant component is reduced to the desired size.
The open stubs that remain from the edges of each deleted node are then randomly connected to each other in pairs. Properties of single component random networks ============================================== While configuration model networks are completely uncorrelated, their giant components exhibit degree-degree correlations. In particular, below we prove the observation made in Ref. [@Tishby2018] that the giant components are disassortative. Interestingly, this property has recently been demonstrated in percolating clusters [@Mizutaka2018]. The joint degree distribution of a pair of adjacent nodes in a configuration model network with degree distribution $P(k)$ is given by [@Tishby2018] $$\widehat P(k,k'|1) = \frac{1 - (1-\tilde g)^{k+k'-2}}{1-(1-\tilde g)^2} \frac{k}{\langle K \rangle} P(k) \frac{k'}{\langle K \rangle} P(k').$$ Expressing $P(k)$ and $P(k')$ in terms of $P(k|1)$ and $P(k'|1)$, respectively, using Eq. (\[eq:pk2\]), we obtain $$\widehat P(k,k'|1) = W(k,k') \frac{k}{{\mathbb E}[K|1]} P(k|1) \frac{k'}{{\mathbb E}[K|1]} P(k'|1),$$ where $$W(k,k') = \tilde g (2-\tilde g) \frac{1-(1-\tilde g)^{k+k'-2}}{[1-(1-\tilde g)^k][1-(1-\tilde g)^{k'}]}$$ accounts for the degree-degree correlations between adjacent nodes. For example, $W(1,1)=0$, reflecting the fact that pairs of nodes of degree $k=1$ on the giant component cannot share an edge, because in that case they would form an isolated dimer. Also, one can verify that $W(k,2)=1$ for all values of $k \ge 1$. This means that nodes of degree $k=2$ are distributed randomly in the giant component and are not correlated to the degrees of their neighboring nodes. The degree-degree correlations between nodes of degree $k \ge 3$ and leaf nodes of degree $k'=1$ are given by $$W(k,1) = 1 + \frac{1 - \tilde g - (1-\tilde g)^{k-1}}{1-(1-\tilde g)^k} > 1.$$ Thus, there is a positive correlation between leaf nodes and nodes of degree $k \ge 3$.
Moreover, the correlation becomes stronger as $k$ increases. Below we show that $W(k,k') \le 1$ for $k,k' \ge 3$, hence the degree-degree correlations between pairs of nodes of degrees $k,k' \ge 3$ are negative. To this end we denote $\tilde h = 1 - \tilde g$, which satisfies $0 < \tilde h < 1$. Expressing $W(k,k')$ in terms of $\tilde h$, we obtain $$W(k,k';\tilde h) = (1-{\tilde h}^2) \frac{1-{\tilde h}^{k+k'-2}}{(1-{\tilde h}^k)(1-{\tilde h}^{k'})}.$$ The diagonal terms, obtained for $k=k'$, are given by $$f(k;\tilde h) = W(k,k;\tilde h) = (1-{\tilde h}^2) \frac{1-{\tilde h}^{2k-2}}{(1-{\tilde h}^k)^2}.$$ For $k=3$ we obtain $$f(k=3;\tilde h) = \frac{ (1+\tilde h)^2 (1+\tilde h^2) }{( 1+\tilde h + \tilde h^2 )^2}.$$ Differentiating $f(k=3;\tilde h)$ with respect to $\tilde h$, we obtain $$\frac{\partial}{\partial \tilde h} f(k=3;\tilde h) = - \frac{2 \tilde h (1-\tilde h^2) }{(1+\tilde h+\tilde h^2)^3} < 0,$$ for $0 < \tilde h < 1$. Therefore, the function $f(k=3;\tilde h)$ is a monotonically decreasing function of $\tilde h$. This implies that $$f(k=3;\tilde h) \le f(k=3;\tilde h=0) =1,$$ with equality taking place only at $\tilde h=0$. Considering the degree, $k$, as a continuous variable and taking the derivative of $f(k;\tilde h)$ with respect to $k$, we obtain $$\frac{\partial}{\partial k} f(k;\tilde h) = - \frac{ 2 \tilde h^k (1-\tilde h^2)(1-\tilde h^{k-2}) \ln \left( \frac{1}{\tilde h} \right) } {(1-\tilde h^k)^3} < 0$$ for $k > 2$ and $0 < \tilde h < 1$. This means that $f(k;\tilde h)$ is a monotonically decreasing function of $k$ for any $0 < \tilde h < 1$. We thus conclude that $W(k,k) < 1$ for all values of $k \ge 3$ and $0 < \tilde h < 1$. In order to show that $W(k,k') < 1$ for all $k,k' \ge 3$, it is sufficient to show that under these conditions $W(k,k')$ is a monotonically decreasing function of $k'$ for all values of $0 < \tilde h < 1$.
This is shown by differentiating $W(k,k';\tilde h)$ with respect to $k'$, which leads to $$\frac{ \partial }{ \partial k'} W(k,k';\tilde h) = - \frac{ \tilde h^{k'} (1-\tilde h^2)(1-\tilde h^{k-2}) \ln \left( \frac{1}{\tilde h} \right) } { (1-\tilde h^k)(1-\tilde h^{k'})^2 } < 0$$ where $k > 2$ and $0 < \tilde h < 1$. This means that for any combination of $k,k' \ge 3$, where $k' > k$, the correlation function $W(k,k')$ satisfies $W(k,k') < W(k,k) < 1$. We thus conclude that pairs of adjacent nodes of degrees $k,k' \ge 3$ exhibit negative degree-degree correlations. The probability that a node connected to a random edge in the giant component is of degree $k$ is given by [@Tishby2018] $$\widehat P(k|1) = \frac{k}{ {\mathbb E}[K|1] } P(k|1).$$ The assortativity coefficient [@Newman2002b] of the giant component is given by [@Tishby2018] $$r = \frac{ \sum_{k,k' \ge 2} (k-1)(k'-1) \widehat P(k,k'|1) - \left[ \sum_{k \ge 2} (k-1) \widehat P(k|1) \right]^2 } { \sum_{k \ge 2} (k-1)^2 \widehat P(k|1) - \left[ \sum_{k \ge 2} (k-1) \widehat P(k|1) \right]^2 }.$$ Since the degree-degree correlations between pairs of adjacent nodes of degrees $k,k' \ge 3$ are negative, the assortativity coefficient of the giant component must satisfy $r < 0$. This is an essential property of the giant components of configuration model networks, which is required in order to maintain the integrity of the giant component. Applications to specific network models ======================================= In this section we apply the methodology developed above for the construction of networks that consist of a single connected component, with a prescribed degree distribution, $P(k|1)$, for some popular ensembles of random networks. 
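Before specializing to particular distributions, the generic pipeline described above — draw a degree sequence from $P(k)$, wire it up as a configuration model, and keep the giant component — can be prototyped in a few lines. The sketch below is ours (the helper names are purely illustrative, not from any library); self-loops are dropped and multi-edges collapsed, a common simplification that slightly perturbs the degree sequence but is immaterial here.

```python
import random
from collections import defaultdict

def stub_matching(degrees, rng):
    # Configuration model by uniform stub matching; self-loops are
    # discarded and multi-edges collapsed (simplified variant).
    stubs = [i for i, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    adj = defaultdict(set)
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def largest_component(adj, n):
    # Largest connected component (list of nodes), found by BFS/DFS.
    seen, best = set(), []
    for s in range(n):
        if s in seen or not adj.get(s):
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(comp) > len(best):
            best = comp
    return best

rng = random.Random(12345)
N = 2000
# Ternary P(k) with p3 = 0.6, well above the percolation threshold p3 = p1/3
degrees = rng.choices([1, 2, 3], weights=[0.2, 0.2, 0.6], k=N)
if sum(degrees) % 2 == 1:       # the total number of stubs must be even
    degrees[0] += 1
adj = stub_matching(degrees, rng)
giant = largest_component(adj, N)
```

For these parameters the ternary formulas above give $g \approx 0.97$, so the extracted component covers nearly the whole network; nodes to be added or deleted to hit an exact target size would be handled as described at the end of the previous section.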
Construction of a single-component network with a ternary degree distribution ----------------------------------------------------------------------------- The properties of the giant component of a random network are sensitive to the abundance of nodes of low degrees, particularly nodes of degree $k=1$ (leaf nodes) and $k=2$. Nodes of degree $k=0$ (isolated nodes) are excluded from the giant component and their weight in the degree distribution of the whole network has no effect on the properties of the giant component. Therefore, it is useful to consider a simple configuration model in which all nodes are restricted to a small number of low degrees. Here we consider a configuration model network with a ternary degree distribution of the form [@Newman2010] $$P(k) = p_1 \delta_{k,1} + p_2 \delta_{k,2} + p_3 \delta_{k,3}, \label{eq:ternary}$$ where $\delta_{k,n}$ is the Kronecker delta, and $p_1+p_2+p_3=1$. The mean degree of such a network is given by $$\langle K \rangle = p_1 + 2 p_2 + 3 p_3.$$ The generating functions of the degree distribution are $$G_0(x) = p_1 x + p_2 x^2 + p_3 x^3, \label{eq:G0tri}$$ and $$G_1(x) = \frac{ p_1 + 2 p_2 x + 3 p_3 x^2}{p_1 + 2 p_2 + 3 p_3}. \label{eq:G1tri}$$ Solving Eq. (\[eq:tg\]) for $\tilde g$, with $G_1(x)$ given by Eq. (\[eq:G1tri\]), we find that $$\tilde g = \begin{dcases} 0 & \ \ \ \ p_3 \le \frac{p_1}{3} \\ 1 - \frac{p_1}{3p_3} & \ \ \ \ p_3 > \frac{p_1}{3}. \end{dcases}$$ Using Eq. (\[eq:g\]) to evaluate the parameter $g$, where $G_0(x)$ is given by Eq. (\[eq:G0tri\]), we find that $$g = \begin{dcases} 0 & \ \ \ \ p_3 \le \frac{p_1}{3} \\ 1 - \left( \frac{p_1}{3p_3} \right) p_1 - \left( \frac{p_1}{3 p_3} \right)^2 p_2 - \left( \frac{p_1}{3 p_3} \right)^3 p_3 & \ \ \ \ p_3 > \frac{p_1}{3}. \end{dcases}$$ Thus, the percolation threshold is located at $p_3 = p_1/3$. This can be understood intuitively by recalling that the finite components exhibit a tree structure.
In a tree that includes a single node of degree $k=3$ there must be three leaf nodes of degree $k=1$. In the giant component, which includes cycles, there must be more than one node of degree $3$ for every three nodes of degree $1$. This is not likely to occur in a case in which $p_3 < p_1/3$. Using the normalization condition, we find that for any given value of $p_2$, a giant component exists for $p_3 > (1-p_2)/4$. Using Eq. (\[eq:kL1\]), we obtain the degree distribution of the giant component, which is given by $$P(k | 1) = \left[ \frac{ 1 - \left( \frac{p_1}{3 p_3} \right)^k } { 1 - \left( \frac{p_1 }{3 p_3} \right) p_1 - \left( \frac{p_1 }{3 p_3 } \right)^2 p_2 - \left( \frac{p_1 }{3 p_3} \right)^3 p_3 } \right] P(k), \label{eq:Pk1ter}$$ where $k=1, 2, 3$ and $P(k)$ is given by Eq. (\[eq:ternary\]). These results enable us to construct a giant connected component with a desired ternary degree distribution, given by $P(k|1)$, $k=1,2,3$, where $\sum_{k=1}^3 P(k|1)=1$. To this aim, we need to express the degree distribution $P(k)$ of the whole network, given by Eq. (\[eq:ternary\]), in terms of the given degree distribution $P(k|1)$ of the giant component. We should first evaluate the parameter $\tilde g$, which is given by $$\tilde g = 1 - \frac{p_1}{3 p_3}.$$ Using Eq. (\[eq:Pk1ter\]) to calculate the ratio $P(1|1)/P(3|1)$, we obtain $$\frac{P(1|1)}{3 P(3|1)} = \frac{1}{1 + \left( \frac{p_1}{3 p_3} \right) + \left( \frac{p_1}{3 p_3} \right)^2 } \ \frac{p_1}{3 p_3}$$ Solving for $p_1/(3 p_3)$ we obtain $$\frac{p_1}{3 p_3} = \frac{1}{2} \left[ \frac{3 P(3|1)}{ P(1|1)} - 1 - \sqrt{ \left( \frac{3 P(3|1)}{P(1|1)} + 1 \right) \left( \frac{3 P(3|1)}{P(1|1)} - 3 \right) } \right].$$ Therefore $$\tilde g = \frac{1}{2} \left[ 3 - \frac{3 P(3|1)}{ P(1|1)} + \sqrt{ \left( \frac{3 P(3|1)}{P(1|1)} + 1 \right) \left( \frac{3 P(3|1)}{P(1|1)} - 3 \right) } \right]. 
\label{eq:tgternary3}$$ The next step is to evaluate the parameter $g$, which is given by $$g = \frac{1}{ 1 + \sum\limits_{k=1}^{3} \frac{(1-\tilde g)^k}{1 - (1-\tilde g)^k} P(k|1) }.$$ Simplifying the expression we obtain $$g = \frac{\tilde g}{P(1|1) + \frac{1}{2-\tilde g}P(2|1) + \frac{1}{3-3 \tilde g+\tilde g^2}P(3|1)}. \label{eq:gternary}$$ Using the normalization condition of the probabilities $P(k|1)$ to express $P(2|1)$ in terms of $P(1|1)$ and $P(3|1)$ we obtain $$g = \frac{\tilde g (2-\tilde g)}{1+(1-\tilde g)P(1|1) - \frac{(1-\tilde g)^2}{3-3 \tilde g+\tilde g^2}P(3|1)}. \label{eq:gternary3}$$ The degree distribution of the whole network is given by Eq. (\[eq:ternary\]), where $$\begin{aligned} p_1 &=& \frac{g}{\tilde g} P(1|1) \nonumber \\ p_2 &=& \frac{g}{\tilde g (2-\tilde g)} P(2|1) \nonumber \\ p_3 &=& \frac{g}{\tilde g (3 - 3 \tilde g + \tilde g^2)} P(3|1). \label{eq:p1p2p3}\end{aligned}$$ Thus, in order to obtain an ensemble of single-component networks of mean size $\langle N_1 \rangle$, whose degree sequences are drawn from a given ternary degree distribution $P(k|1)$, one generates an ensemble of configuration model networks with a degree distribution $P(k)$, given by Eq. (\[eq:ternary\]), where $p_1,p_2$ and $p_3$ are given by Eq. (\[eq:p1p2p3\]). The size of the configuration model networks should be $N=\langle N_1 \rangle/g$, where $g$ is given by Eq. (\[eq:gternary\]). ![ (Color online) Analytical results for the fraction of nodes $g$ (solid line), and the fraction of random neighbors of random nodes, $\tilde g$ (dashed line), that reside on the giant component, in a configuration model network whose giant component exhibits a ternary degree distribution $P(k|1)$, expressed by Eq. (\[eq:Pk1ter\]), with $P(K=2|1)=0$, as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. The simulation results (circles), obtained for $N=10^4$, are in very good agreement with the analytical results. 
[]{data-label="fig:1"}](fig1){width="7cm"} In Fig. \[fig:1\] we present analytical results for the probability $g$, obtained from Eq. (\[eq:gternary3\]), that a randomly selected node resides on the giant component (solid line), in a configuration model network whose giant component exhibits a ternary degree distribution with $P(K=2|1)=0$, as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. We also show the probability $\tilde g$, obtained from Eq. (\[eq:tgternary3\]), that a random neighbor of a random node resides on the giant component (dashed line). As discussed above, both $g$ and $\tilde g$ vanish for $c<2$, since there are no giant components with mean degrees smaller than $2$. For $c>2$ both $g$ and $\tilde g$ exhibit a steep rise as $c$ is increased, reaching $g=\tilde g=1$ at $c=3$, where the giant component encompasses the whole network. The results obtained from computer simulations (circles) with $N=10^4$ are found to be in very good agreement with the analytical results. Construction of a single component network with an exponential degree distribution ---------------------------------------------------------------------------------- Consider a configuration model network whose giant component exhibits an exponential degree distribution of the form $$P(k|1) = A e^{- \alpha k},$$ where $k \ge k_{\rm min}$. Here we focus on the case of $k_{\rm min}=1$, for which the normalization factor is $A=e^{\alpha} - 1$. The mean degree is given by $$c={\mathbb E}[K|1] = \frac{1}{1 - e^{- \alpha}}.$$ For the analysis below, it is convenient to parametrize the degree distribution in terms of the mean degree $c$. Plugging in $\alpha = \ln c - \ln (c-1)$ we obtain $$P(k|1) = \frac{1}{c} \left( \frac{c-1}{c} \right)^{k-1}, \label{eq:exp}$$ where $k \ge 1$. The mean degree of nodes that reside on the giant component is ${\mathbb E}[K|1]=c$. As noted above, a giant component exists only for $c \ge 2$. 
This implies that $\alpha$ must satisfy the condition $\alpha \le \ln 2$. Inserting $P(k|1)$ from Eq. (\[eq:exp\]) into Eqs. (\[eq:G01\]) and (\[eq:G11\]) and carrying out the summations, we find that the generating functions for a giant component with an exponential degree distribution take the form $$G_0^1(x) = \frac{x}{c - x(c-1)} \label{eq:G01exp}$$ and $$G_1^1(x) = \frac{1}{\left[ c + (1-c) x \right]^2}. \label{eq:G11exp}$$ Plugging in $x=(1-\tilde g)^n$ in Eq. (\[eq:G11exp\]) and inserting the result into Eq. (\[eq:g2mg\]), we obtain that $\tilde g$ is given by $$\tilde g (2-\tilde g) \sum_{n=0}^{\infty} \frac{ (1-\tilde g)^n } { \left[ c + (1-c) (1-\tilde g)^n \right]^2 } = 1.$$ This is an implicit equation for $\tilde g$ in terms of the mean degree $c$, which is essentially equivalent to Eq. (\[eq:g2mg\]) for the case of the exponential distribution. It should be solved numerically in order to obtain $\tilde g = \tilde g(c)$. Following the general approximation scheme presented in section VI we solve instead Eq. (\[eq:gtimplicit\]), which for the exponential distribution case can be written explicitly in the following simpler form $$\tilde g (2-\tilde g) \left\{ 1 + \frac{1-\tilde g}{ (1 - \tilde g + c \tilde g)^2} - \frac{(1-\tilde g)^{3/2}}{ c \left[ c + (1-c) (1-\tilde g)^{3/2} \right] \ln (1-\tilde g) } \right\} = 1. \label{eq:gtexp}$$ To calculate the parameter $g$, we use Eq. (\[eq:gshort\]). Plugging in the generating function $G_0^1(x)$ of the exponential degree distribution, given by Eq. (\[eq:G01exp\]), we obtain $$g = \left[ 1 + \sum\limits_{n=1}^{\infty} \frac{(1-\tilde g)^n}{c-(c-1)(1-\tilde g)^n} \right]^{-1}, \label{eq:gexp1}$$ where $\tilde g$ is given by Eq. (\[eq:gtexp\]). In the case of the exponential distribution we have a useful approximation scheme which is similar to the one used in the self-consistent equation for $\tilde g$. This amounts to separating the first term from the rest of the sum in Eq.
(\[eq:gexp1\]), and replacing the sum by an integral. This yields $$g = \left[ 1 + \frac{1-\tilde g}{c-(c-1)(1-\tilde g)} + \int_{3/2}^{\infty} \frac{(1-\tilde g)^n}{c-(c-1)(1-\tilde g)^n} dn \right]^{-1}. \label{eq:gexp2}$$ Carrying out the integration, we obtain $$g = \left[ 1 + \frac{1-\tilde g}{c-(c-1)(1-\tilde g)} + \frac{ \ln \left[{1-\left(\frac{c-1}{c}\right)(1-\tilde g)^{3/2}}\right]}{(c-1) \ln (1-\tilde g)} \right]^{-1}. \label{eq:gexp}$$ It turns out that this expression is accurate to within less than one percent of the full expression (\[eq:gexp1\]), even close to the percolation transition. In order to obtain a single-component network of $N_1$ nodes with a given exponential degree distribution, $P(k|1)$, one generates a configuration model network with the degree distribution $P(k)$, given by Eq. (\[eq:pk2\]), where $\tilde g$ is given by Eq. (\[eq:gtexp\]), $g$ is given by Eq. (\[eq:gexp\]) and $P(k|1)$ is given by Eq. (\[eq:exp\]). In Fig. \[fig:2\] we present analytical results for the probability $g$, obtained from Eq. (\[eq:gexp\]), that a randomly selected node resides on the giant component (solid line), in a configuration model network whose giant component exhibits an exponential degree distribution, as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. We also show analytical results for the probability $\tilde g$, obtained from Eq. (\[eq:gtexp\]), that a random neighbor of a random node resides on the giant component (dashed line). As in the case of the ternary degree distribution, both $g$ and $\tilde g$ vanish for $c<2$, while for $c>2$ they exhibit a steep rise as $c$ is increased. The results of computer simulations (circles) with $N=10^4$ are in very good agreement with the analytical results.
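As a concrete illustration, the full implicit equation for $\tilde g$ displayed above and the full sum of Eq. (\[eq:gexp1\]) can be evaluated numerically. The sketch below is ours, not the authors' code: the sums are truncated once the terms drop below machine precision, and bisection is used for the nontrivial root (recall that $\tilde g = 1$ always solves the implicit equation trivially).

```python
def lhs_exp(gt, c, tol=1e-15):
    # gt*(2-gt) * sum_{n>=0} (1-gt)^n / [c + (1-c)(1-gt)^n]^2,
    # i.e. the left-hand side of the implicit equation for gt
    h, s, n = 1.0 - gt, 0.0, 0
    while True:
        x = h ** n
        term = x / (c + (1.0 - c) * x) ** 2
        s += term
        n += 1
        if term < tol:
            return gt * (2.0 - gt) * s

def solve_gt_exp(c, lo=0.01, hi=0.999):
    # bisection for the nontrivial root: lhs_exp < 1 below it, > 1 above it
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs_exp(mid, c) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

def g_exp_full(c, gt, tol=1e-15):
    # Eq. (gexp1): g = [1 + sum_{n>=1} (1-gt)^n / (c - (c-1)(1-gt)^n)]^(-1)
    h, s, n = 1.0 - gt, 0.0, 1
    while h ** n > tol:
        s += h ** n / (c - (c - 1.0) * h ** n)
        n += 1
    return 1.0 / (1.0 + s)

gt25 = solve_gt_exp(2.5)     # nontrivial root of the implicit equation at c = 2.5
g25 = g_exp_full(2.5, gt25)  # fraction of nodes residing on the giant component
```

Both $\tilde g$ and $g$ increase monotonically with $c$ in the range $2 < c < 3$, reproducing the behaviour shown in Fig. \[fig:2\].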
![ (Color online) The fraction of nodes, $g$ (solid line), and the fraction of random neighbors of random nodes $\tilde g$ (dashed line), that reside on the giant component, in a configuration model network whose giant component exhibits an exponential degree distribution, $P(k|1)$, expressed by Eq. (\[eq:exp\]), as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. As discussed in the text, the minimal value of the mean degree of the giant component is $c=2$. Thus, for $c<2$ both $g=0$ and $\tilde g=0$, while for $c>2$ the parameters $g$ and $\tilde g$ quickly increase. The simulation results (circles), obtained for $N=10^4$, are in very good agreement with the analytical results. []{data-label="fig:2"}](fig2){width="8cm"} In Fig. \[fig:3\] we present analytical results (dashed lines) for the degree distributions $P(k)$ and simulation results for the corresponding degree sequences ($\times$) of the configuration model networks whose giant components exhibit exponential degree distributions with mean degrees $c={\mathbb E}[K|1]$, where $c=2.1$ (a), $c=2.5$ (b) and $c=3.0$ (c). The degree sequences of the resulting single-component networks (circles) fit perfectly with the desired exponential degree distributions (solid lines), given by Eq. (\[eq:exp\]). It is found that on the giant component the abundance of nodes of degree $k=1$ is depleted with respect to their abundance in the whole network, while the abundance of nodes of higher degrees is enhanced. ![ (Color online) Analytical results (dashed lines) for the degree distributions $P(k)$ and simulation results for the corresponding degree sequences with $N=10^4$ ($\times$), of configuration model networks whose giant components exhibit exponential degree distributions (solid lines) of the form $P(k|1)$, given by Eq. (\[eq:exp\]), with mean degree $c={\mathbb E}[K|1]$, where $c=2.1$ (a), $c=2.5$ (b) and $c=3.0$ (c). 
The degree sequences of the resulting single-component networks (circles) fit perfectly with the desired exponential degree distributions (solid lines). It is found that on the giant component the abundance of nodes of degree $k=1$ is depleted, while the abundance of nodes of higher degrees is slightly enhanced. This feature is most pronounced in the dilute network limit, in which the fraction of nodes that reside on the giant components is small. []{data-label="fig:3"}](fig3a "fig:"){width="6.4cm"} ![ (Color online) Analytical results (dashed lines) for the degree distributions $P(k)$ and simulation results for the corresponding degree sequences with $N=10^4$ ($\times$), of configuration model networks whose giant components exhibit exponential degree distributions (solid lines) of the form $P(k|1)$, given by Eq. (\[eq:exp\]), with mean degree $c={\mathbb E}[K|1]$, where $c=2.1$ (a), $c=2.5$ (b) and $c=3.0$ (c). The degree sequences of the resulting single-component networks (circles) fit perfectly with the desired exponential degree distributions (solid lines). It is found that on the giant component the abundance of nodes of degree $k=1$ is depleted, while the abundance of nodes of higher degrees is slightly enhanced. This feature is most pronounced in the dilute network limit, in which the fraction of nodes that reside on the giant components is small. []{data-label="fig:3"}](fig3b "fig:"){width="6.4cm"}\ ![ (Color online) Analytical results (dashed lines) for the degree distributions $P(k)$ and simulation results for the corresponding degree sequences with $N=10^4$ ($\times$), of configuration model networks whose giant components exhibit exponential degree distributions (solid lines) of the form $P(k|1)$, given by Eq. (\[eq:exp\]), with mean degree $c={\mathbb E}[K|1]$, where $c=2.1$ (a), $c=2.5$ (b) and $c=3.0$ (c). 
The degree sequences of the resulting single-component networks (circles) fit perfectly with the desired exponential degree distributions (solid lines). It is found that on the giant component the abundance of nodes of degree $k=1$ is depleted, while the abundance of nodes of higher degrees is slightly enhanced. This feature is most pronounced in the dilute network limit, in which the fraction of nodes that reside on the giant components is small. []{data-label="fig:3"}](fig3c "fig:"){width="6.4cm"} In Fig. \[fig:4\] we present the mean degree $\langle K \rangle$ (dashed line), obtained from Eq. (\[eq:<K>\_EK\]), of a configuration model network whose giant component exhibits an exponential degree distribution with mean degree $c={\mathbb E}[K|1]$, as a function of $c$. The mean degree, $c$, of the giant component (solid line) is also shown for comparison. It is found that for dilute networks $\langle K \rangle$ is significantly smaller than $c$ and the gap between the two curves shrinks as the network becomes denser. The simulation results (circles), obtained for $N=10^4$, are found to be in very good agreement with the analytical results. ![ (Color online) Analytical results (dashed line) and simulation results, obtained for $N=10^4$ (circles), for the mean degree $\langle K \rangle$ of a configuration model network whose giant component exhibits an exponential degree distribution with mean degree $c={\mathbb E}[K|1]$, as a function of ${\mathbb E}[K|1]$. For comparison we also present the analytical results (solid line) and simulation results (circles) for the mean degree ${\mathbb E}[K|1]$ of the giant component. It is found that in the dilute network limit $\langle K \rangle$ is significantly smaller than $c={\mathbb E}[K|1]$ and the two curves converge as the network becomes denser.
[]{data-label="fig:4"}](fig4){width="7cm"} Construction of a single component network with a power-law degree distribution ------------------------------------------------------------------------------- Consider a configuration model network whose giant component exhibits a power-law degree distribution of the form $$P(k|1) = \frac{A}{ k^{\gamma} }, \label{eq:PLnorm1}$$ for $k_{\rm min} \le k \le k_{\rm max}$. Here we focus on the case of $k_{\rm min}=1$. In this case, the normalization coefficient is $$A = \frac{1}{ \zeta(\gamma) - \zeta(\gamma,k_{\rm max}+1) }, \label{eq:PLnorm2}$$ where $\zeta(s,a)$ is the Hurwitz zeta function and $\zeta(s)=\zeta(s,1)$ is the Riemann zeta function [@Olver2010]. In order to avoid correlations, the network size must satisfy the condition $N > (k_{\rm max})^2/\langle K \rangle$ [@Bianconi2008; @Bianconi2009; @Janssen2015]. The mean degree is given by $$c = {\mathbb E}[K|1] = \frac{ \zeta(\gamma-1) - \zeta(\gamma-1,k_{\rm max}+1) } { \zeta(\gamma) - \zeta(\gamma,k_{\rm max}+1) }. \label{eq:Kmsf}$$ As noted above, a single connected component with a degree distribution $P(k|1)$ exists only if the condition $\mathbb{E}[K|1] \ge 2$ is satisfied. This implies that for a given value of $k_{\rm max}$ there exists a critical value of $\gamma$, denoted by $\gamma_c(k_{\rm max})$, such that a giant component exists only for $\gamma < \gamma_c(k_{\rm max})$. The value of $\gamma_c(k_{\rm max})$ is obtained by solving Eq. (\[eq:Kmsf\]) for $\gamma$ under the condition that $c=2$. In the special case of $k_{\rm max} \rightarrow \infty$ one obtains $\gamma_c(k_{\rm max}) \rightarrow \gamma_c(\infty) = 3.4787...$, which is a solution of the equation $\zeta(\gamma-1) = 2 \zeta(\gamma)$. The second moment of the degree distribution is $${\mathbb E}[K^2|1] = \frac{ \zeta(\gamma-2) - \zeta(\gamma-2,k_{\rm max}+1) } { \zeta(\gamma) - \zeta(\gamma,k_{\rm max}+1) }. 
\label{eq:K2msf}$$ For $\gamma \le 2$, in the asymptotic limit of $N \rightarrow \infty$, the mean degree ${\mathbb E}[K|1]$ diverges in the limit $k_{\rm max} \rightarrow \infty$. For $2 < \gamma \le 3$, in the asymptotic limit, the mean degree is bounded while the second moment ${\mathbb E}[K^2|1]$ diverges. For $\gamma > 3$ both moments are bounded. The generating functions of $P(k|1)$ for a giant component with a power-law degree distribution are $$G_0^1(x) = \frac{ {\rm Li}_{\gamma}(x) - x^{k_{\rm max}+1} \Phi(x,\gamma,k_{\rm max}+1) }{\zeta(\gamma) - \zeta(\gamma,k_{\rm max}+1)} \label{eq:G0sf}$$ and $$G_1^1(x) = \frac{ {\rm Li}_{\gamma-1}(x) - x^{k_{\rm max}+1} \Phi(x,\gamma-1,k_{\rm max}+1)} {x \left[\zeta(\gamma-1) - \zeta(\gamma-1,k_{\rm max}+1) \right]}, \label{eq:G1sf}$$ where ${\rm Li}_{\gamma}(x)$ is the polylogarithm function. Inserting the expressions for the two generating functions into Eq. (\[eq:gtimplicit\]), we obtain $$\begin{aligned} \tilde g (2-\tilde g) \left[ 1 + (1-\tilde g) \frac{ {\rm Li}_{\gamma-1}(1-\tilde g) - (1-\tilde g)^{k_{\rm max}+1} \Phi(1-\tilde g,\gamma-1,k_{\rm max}+1)} {(1-\tilde g) \left[\zeta(\gamma-1) - \zeta(\gamma-1,k_{\rm max}+1) \right]} \right. \nonumber \\ \left. - \frac{{\rm Li}_{\gamma}[(1-\tilde g)^{3/2}] - (1-\tilde g)^{3(k_{\rm max}+1)/2} \Phi[(1-\tilde g)^{3/2},\gamma,k_{\rm max}+1] } {\ln (1-\tilde g) [\zeta(\gamma-1) - \zeta(\gamma-1,k_{\rm max}+1)] } \right] =1. \label{eq:gtpowerlaw}\end{aligned}$$ This is an implicit equation for $\tilde g$ in terms of the exponent $\gamma$ and the upper cutoff $k_{\rm max}$, which should be solved numerically. The parameter $g$ is then obtained from Eq. (\[eq:gshort\]). Inserting $G_0^1(x)$ from Eq. (\[eq:G0sf\]) into Eq. (\[eq:gshort\]), we obtain $$g = \left[ 1 + \sum\limits_{n=1}^{\infty} \frac{ {\rm Li}_{\gamma}[(1-\tilde g)^n] - (1-\tilde g)^{n(k_{\rm max}+1)} \Phi[(1-\tilde g)^n,\gamma,k_{\rm max}+1] }{\zeta(\gamma) - \zeta(\gamma,k_{\rm max}+1)} \right]^{-1}.
\label{eq:gpowerlaw}$$ In order to generate an ensemble of single-component networks whose mean size is $\langle N_1 \rangle$, which exhibit a given power-law degree distribution $P(k|1)$, one generates configuration model networks of size $N = \langle N_1 \rangle/g$ with the degree distribution $P(k)$, given by Eq. (\[eq:pk2\]), where $\tilde g$ is given by Eq. (\[eq:gtpowerlaw\]), $g$ is given by Eq. (\[eq:gpowerlaw\]) and $P(k|1)$ is given by Eq. (\[eq:PLnorm1\]). Note that for $\gamma \ge 2$, in the limit of $k_{\rm max} \rightarrow \infty$ one obtains that $g \rightarrow g_{\infty} < 1$. This means that in configuration model networks which exhibit a power-law degree distribution with $\gamma \ge 2$ the giant component does not encompass the whole network regardless of the value of $k_{\rm max}$. Consequently, the approach presented here is applicable and useful for the construction of single-component random networks with power-law degree distributions for the whole range of $2 \le \gamma \le \gamma_c(\infty)$. In Fig. \[fig:5\] we present analytical results (solid line), obtained from Eq. (\[eq:Kmsf\]), for the mean degree, $c={\mathbb E}[K|1]$, of the giant component of a configuration model network, for which the giant component exhibits a power-law degree distribution, $P(k|1)$, given by Eq. (\[eq:PLnorm1\]), as a function of the exponent $\gamma$ for $2 < \gamma < 2.4$. The upper cutoff of the degree distribution is $k_{\rm max}=100$. The dashed line, presented for $\gamma > 2.4$, is still a solution of Eq. (\[eq:Kmsf\]). However, it does not describe the mean degree of a giant component, because in this regime $c < 2$ while the mean degree of a giant component must satisfy $c \ge 2$. The results for the mean degrees of the network instances constructed using this method (circles) are in perfect agreement with the analytical results. It is found that the mean degree decreases as $\gamma$ is increased.
![ (Color online) The mean degree, $c={\mathbb E}[K|1]$, of the giant component of a configuration model network (solid line) with a power-law degree distribution \[Eq. (\[eq:PLnorm1\])\], as a function of the exponent $\gamma$, for $\gamma \ge 2$ with $k_{\rm max}=100$, given by Eq. (\[eq:Kmsf\]). The mean degree decreases as $\gamma$ is increased. For $\gamma > 2.4$ the solid line is replaced by a dashed line, which is still a solution of Eq. (\[eq:Kmsf\]). However, it does not describe the mean degree of a giant component, because in this regime $c < 2$ while the mean degree of a giant component must satisfy $c \ge 2$. The results for the mean degrees of the single component networks constructed using this method (circles) are in perfect agreement with the analytical results. []{data-label="fig:5"}](fig5){width="7cm"} In Fig. \[fig:6\] we show analytical results for the values of the parameters $g$ (solid line) and $\tilde g$ (dashed line) of a configuration model network whose giant component exhibits a power-law degree distribution, as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. As discussed above, both $g$ and $\tilde g$ vanish for $c<2$, since there are no giant components with mean degrees lower than $2$. For $c>2$ the parameters $g$ and $\tilde g$ gradually increase. This is in contrast to the case of the exponential degree distribution, shown in Fig. \[fig:2\], in which $g$ and $\tilde g$ increase more steeply. The simulation results (circles) for $g$, obtained from network instances constructed using this method with $k_{\rm max}=100$ and $N=4 \times 10^4$, are found to be in good agreement with the analytical results, while the results for $\tilde g$ are somewhat noisy. ![ (Color online) The parameters $g$ (solid line) and $\tilde g$ (dashed line) of a configuration model network whose giant component exhibits a power-law degree distribution of the form $P(k|1)$, given by Eq.
(\[eq:PLnorm1\]), as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. As discussed in the text the minimal value of the mean degree of a giant component with a power-law degree distribution is $c=2$. Thus, for $c<2$ both $g=0$ and $\tilde g=0$. For $c>2$ the parameters $g$ and $\tilde g$ gradually increase. This is in contrast to the case of the exponential degree distribution, shown in Fig. \[fig:2\], in which $g$ and $\tilde g$ increase more steeply. []{data-label="fig:6"}](fig6){width="7cm"} In Fig. \[fig:7\] we present analytical results (dashed lines) for the degree distributions $P(k)$ \[given by Eq. (\[eq:pk2\]), where $\tilde g$ is the solution of Eq. (\[eq:gtpowerlaw\]) and $g$ is given by Eq. (\[eq:gpowerlaw\])\] and simulation results for the corresponding degree sequences ($\times$) of the configuration model networks whose giant components exhibit power-law degree distributions, with $\gamma=2.01$ (a), $\gamma=2.2$ (b) and $\gamma=2.35$ (c). The degree sequences of the resulting single-component networks (circles) fit perfectly with the desired power-law distributions (solid lines), given by Eq. (\[eq:PLnorm1\]). ![ (Color online) Analytical results (dashed lines) for the degree distributions $P(k)$ and simulation results with $N=4 \times 10^4$ for the corresponding degree sequences ($\times$) of configuration model networks whose giant components exhibit power-law degree distributions (solid lines), of the form $P(k|1)$, given by Eq. (\[eq:PLnorm1\]), with $\gamma=2.01$ (a), $\gamma=2.2$ (b) and $\gamma=2.35$ (c), and with $k_{\rm max}=100$. The degree sequences of the resulting single-component networks (circles), fit perfectly with the desired power-law degree distributions (solid lines). It is found that on the giant component the abundance of nodes of degree $k=1$ is depleted, while the abundance of nodes of higher degrees is enhanced. 
This feature is most pronounced in the dilute network limit, in which the fraction of nodes that reside on the giant components is small. []{data-label="fig:7"}](fig7a "fig:"){width="6.4cm"} ![ []{data-label="fig:7"}](fig7b "fig:"){width="6.4cm"}\ ![
[]{data-label="fig:7"}](fig7c "fig:"){width="6.4cm"} In Fig. \[fig:8\] we present analytical results (dashed line) for the mean degree $\langle K \rangle$ of a configuration model network whose giant component exhibits a power-law degree distribution, given by Eq. (\[eq:PLnorm1\]) with $k_{\rm max}=100$, as a function of the mean degree $c={\mathbb E}[K|1]$ of the giant component. The mean degree $c$ of the giant component (solid line) is also shown for comparison. It is found that in the dilute network limit $\langle K \rangle$ is much smaller than $c={\mathbb E}[K|1]$. The gap between the two curves slightly decreases as the network becomes more dense, but the two curves do not converge. This is due to the fact that even for the largest value of ${\mathbb E}[K|1]$ that can be obtained with $k_{\rm max}=100$ the giant component does not encompass the whole network. The gap between $\langle K \rangle$ and $c$ can be decreased further by increasing the value of $k_{\rm max}$. However, in order to maintain the whole network uncorrelated its size $N$ should satisfy $N > (k_{\rm max})^2/\langle K \rangle$ [@Bianconi2008; @Bianconi2009; @Janssen2015]. The results obtained from computer simulations (circles) with $N=4 \times 10^4$ are found to be in very good agreement with the analytical results. ![ (Color online) The mean degree $\langle K \rangle$ of a configuration model network whose giant component exhibits a power-law degree distribution with mean degree $c={\mathbb E}[K|1]$, as a function of ${\mathbb E}[K|1]$ (dashed line). The mean degree ${\mathbb E}[K|1]$ of the giant component (solid line) is also shown for comparison. It is found that in the dilute network limit $\langle K \rangle$ is much smaller than ${\mathbb E}[K|1]$.
The gap between the two curves slightly decreases as the network becomes more dense, but the two curves do not converge. The simulation results (circles), obtained for $N=4 \times 10^4$, are in very good agreement with the analytical results. []{data-label="fig:8"}](fig8){width="7cm"} Discussion ========== While configuration model networks are random and uncorrelated, their giant components exhibit correlations between the degrees of adjacent nodes. These degree-degree correlations and the assortativity coefficients of the giant components were studied in Ref. [@Tishby2018]. The giant components were found to be disassortative, namely high-degree nodes tend to connect preferentially to low-degree nodes and vice versa. Moreover, it was found that as the network approaches the percolation transition from above and the giant component decreases in size, its structure becomes more distinct from the structure of the overall network. In particular, the degree distribution of the giant component deviates more strongly from the degree distribution of the whole network, the degree-degree correlations become stronger and the assortativity coefficient becomes more negative. The disassortativity of the giant component helps to maintain its integrity. For example, the probability that a pair of nodes of degrees $k=k'=1$, which reside on the giant component, connect to each other must vanish; otherwise they would form an isolated dimer. This means that nodes of degree $k=1$ preferentially connect to nodes of higher degrees. As a result, high-degree nodes preferentially connect to nodes of degree $k=1$. In fact, the giant component exhibits degree-degree correlations of all orders. These correlations are required in order to exclude the possibility that a randomly selected node belongs to an isolated component of any finite size [@Tishby2018].
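The dimer argument can be verified directly in simulation: in any configuration-model instance, an edge joining two nodes of degree one forms an isolated dimer, i.e., a component of size two, and therefore can never belong to the giant component. A small self-contained check (illustrative parameters; union-find is used to label components; all names are ours):

```python
import random
from collections import Counter

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

rng = random.Random(1)
ks = list(range(1, 101))
weights = [k ** (-2.5) for k in ks]
deg = rng.choices(ks, weights=weights, k=4000)

stubs = [v for v, k in enumerate(deg) for _ in range(k)]
if len(stubs) % 2:                      # ensure an even number of stubs
    deg[0] += 1
    stubs.append(0)
rng.shuffle(stubs)
edges = list(zip(stubs[::2], stubs[1::2]))

parent = list(range(len(deg)))
for u, v in edges:
    ru, rv = find(parent, u), find(parent, v)
    if ru != rv:
        parent[ru] = rv

sizes = Counter(find(parent, v) for v in range(len(deg)))
giant_root = max(sizes, key=sizes.get)

# Edges of the largest component that join two degree-1 nodes: by the
# dimer argument this list must be empty.
bad_edges = [(u, v) for u, v in edges
             if deg[u] == 1 and deg[v] == 1 and find(parent, u) == giant_root]
```

The check is deterministic: a degree-1--degree-1 edge is its own component of size two, so it is excluded from the largest component whenever the latter has more than two nodes.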
Interestingly, disassortativity was found to be prevalent in a broader class of scale-free networks which exhibit correlations and can be explained by entropic considerations [@Johnson2010; @Williams2014]. The methodology introduced in this paper enables the construction of random networks that consist of a single connected component of $N_1$ nodes with a given degree distribution $P(k|1)$. The desired network consists of the giant component of a suitable configuration model network of $N$ nodes and degree distribution $P(k)$. For a given value of $N$, the size $N_1$ of the giant component exhibits fluctuations satisfying ${\rm Var}(N_1) \propto N$; the relative fluctuations thus vanish in the asymptotic limit. We also present an adjustment procedure for the case in which a specific value of $N_1$ is required. The construction of random networks that consist of a single connected component with a given degree distribution is expected to be useful for the analysis of empirical networks. A common practice in the study of empirical networks is to generate an ensemble of randomized networks with the same degree sequence as the empirical network. One then compares structural and statistical properties of the empirical network to the corresponding properties of the randomized networks. The differences between the empirical network and its randomized counterparts may imply some significant functional or evolutionary properties of the empirical network. Stated more technically, randomized networks serve as null models for empirical networks [@Bianconi2008; @Bianconi2009; @Coolen2009; @Annibale2009; @Roberts2011; @Roberts2013; @Coolen2017]. This approach was utilized in the study of network motifs, which are over-represented in empirical networks compared to the corresponding randomized networks [@Shen2002; @Kashtan2004].
It was also used in the analysis of degree-degree correlations, the assortativity coefficient and the clustering coefficient [@Maslov2004; @Park2003; @Holme2007], and in the study of the distribution of shortest path lengths [@Giot2003]. A randomized network with the same degree sequence as a given empirical network can be constructed in two different ways. One way is to generate a configuration model network with the given degree sequence obtained from the empirical network. Another way is to start from the empirical network and apply a series of rewiring steps. In each rewiring step one picks two random edges, $i-j$ and $i'-j'$, and then exchanges them such that $i$ becomes connected to $j'$ and $i'$ becomes connected to $j$. In a case in which either the $i-j'$ edge or the $i'-j$ edge already exists, the step is rejected. After a large number of such rewiring steps one obtains a randomized network which maintains the degree sequence of the empirical network. In some cases one may be interested in finding the degree distribution from which the given degree sequence of the empirical network is most likely to arise. Consider an empirical network of $N$ nodes, whose degree sequence is given by $\{ n_k^{\rm E} \}$, $k=1,2,\dots,k_{\rm max}$, where $n_k^{\rm E}$ is the number of nodes of degree $k$ and $\sum_k n_k^{\rm E} = N$. The degree distribution from which this degree sequence is most likely to emerge is given by $$P(k) = \frac{n_k^{\rm E}}{N}, \label{eq:Pkemp}$$ where $k=1,2,\dots,k_{\rm max}$. Sampling the degrees of $N$ nodes from this distribution, the probability to obtain a degree sequence of the form $\{ n_k \}$, $k=1,2,\dots,k_{\rm max}$, is $$P(\{n_k\}) = \frac{N!}{\prod_{k=1}^{k_{\rm max}} n_k!} \prod_{k=1}^{k_{\rm max}} P(k)^{n_k}.$$ Configuration model networks with degree sequences that are drawn from the degree distribution $P(k)$, given by Eq. (\[eq:Pkemp\]), provide a broader class of randomized networks for the given empirical networks.
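The rewiring procedure described above can be sketched in a few lines (a minimal implementation for simple undirected graphs; swaps that would create a self-loop or a duplicate edge are rejected, as in the text; the function name is ours):

```python
import random

def rewire(edge_list, n_steps, rng):
    """Degree-preserving randomization: pick two random edges i-j and i'-j'
    and exchange them to i-j' and i'-j; reject the step if it would create
    a self-loop or an edge that already exists."""
    edges = [tuple(sorted(e)) for e in edge_list]
    present = set(edges)
    m = len(edges)
    for _ in range(n_steps):
        a, b = rng.sample(range(m), 2)
        (i, j), (ip, jp) = edges[a], edges[b]
        e1 = tuple(sorted((i, jp)))
        e2 = tuple(sorted((ip, j)))
        if i == jp or ip == j or e1 in present or e2 in present:
            continue                     # rejected step
        present.discard(edges[a])
        present.discard(edges[b])
        present.update((e1, e2))
        edges[a], edges[b] = e1, e2
    return edges

# Example usage on a small simple graph:
edges0 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
edges1 = rewire(edges0, 500, random.Random(7))
```

Each accepted swap replaces the endpoint multiset $\{i,j\},\{i',j'\}$ by $\{i,j'\},\{i',j\}$, so the degree sequence is preserved exactly.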
While their degree sequences are not identical to the degree sequence of the empirical network, their statistical properties are closely related. This is a grand-canonical approach to the sampling problem. While some empirical networks consist of a single connected component, such as transportation networks and brain networks [@Wandelt2019], other networks consist of many isolated components of various sizes, such as networks of adoption of innovations or products [@Karsai2016] and mobile phone calling networks [@Li2014]. The distribution of sizes of these components has been studied in the context of subcritical networks and provides a useful characterization of the network structure [@Katzav2018]. In a case in which one of the isolated components is particularly large (and asymptotically encompasses a macroscopic fraction of the network size), it is referred to as the giant component. In such a case the network exhibits a coexistence between the giant component and many finite components. Here we focus on the properties of the giant component, namely the degree distribution, degree-degree correlations, clustering coefficient and size. The size of the giant component, $N_1$, depends on the size of the whole network, $N$, and on the fraction of nodes, $0 < g < 1$, that reside on the giant component. In computer simulations the value of $g$ varies between different network instances in a given network ensemble, following a distribution $P(g)$ that is characteristic of the given ensemble. In empirical networks it is difficult to find many network instances that are drawn from the same statistical ensemble. Therefore, it is difficult to find a direct analog of $P(g)$ in empirical networks. In a case in which the empirical network under study consists of a single connected component, it is desirable that the corresponding randomized networks will also consist of a single connected component.
The procedures described above may produce randomized networks that consist of multiple components (such as a giant component and many finite components), even in a case in which the empirical network consists of a single connected component. The size of the giant component of the randomized network depends on its degree sequence and can be determined using methods of percolation theory. The methodology presented in this paper provides a way to obtain a randomized network that consists of a single connected component. Consider an empirical network of $N_1$ nodes that consists of a single connected component with degree sequence $\{ n_k \}$. Using Eq. (\[eq:Pkemp\]) one obtains the most probable degree distribution $P(k|1)$ for the given degree sequence. Using the procedure presented in this paper, one obtains the size $N$ and the degree distribution $P(k)$ of a configuration model network whose giant component is the desired randomized network. Summary ======= We presented a method for the construction of ensembles of random networks that consist of a single connected component of any desired size $N_1$ with a pre-defined degree distribution $P(k|1)$. The construction is done by generating a configuration model network with a suitable degree distribution $P(k)$ and size $N$, whose giant component is of size $N_1$ and its degree distribution is $P(k|1)$. This approach is based on the inversion of the relation between $P(k)$ and $P(k|1)$, which was presented in Ref. [@Tishby2018]. It extends the construction toolbox of random networks beyond the configuration model framework, in which one controls the network size and the degree distribution but has no control over the number of network components and their sizes. The capability of generating single component random networks with a desired degree distribution is expected to be instrumental in the effort to elucidate the statistical properties of such networks at the local and global scales. [10]{} R. Albert and A.-L. 
Barabási, Statistical mechanics of complex networks, [*Rev. Mod. Phys.*]{} [**74**]{}, 47 (2002). S.N. Dorogovtsev and J.F.F. Mendes, [*Evolution of networks: From biological networks to the Internet and WWW*]{}, (Oxford University Press, Oxford, 2003). S.N. Dorogovtsev, A.V. Goltsev and J.F.F. Mendes, [*Critical phenomena in complex networks*]{}, [*Rev. Mod. Phys.*]{} [**80**]{}, 1275 (2008). R. van der Hofstad, Random graphs and complex networks (Eindhoven, 2013); available at https://www.win.tue.nl/~rhofstad/NotesRGCN2013.pdf M.E.J. Newman, [*Networks: An introduction*]{}, (Oxford University Press, Oxford, 2010). S. Havlin and R. Cohen, [*Complex networks: Structure, robustness and function*]{}, (Cambridge University Press, New York, 2010). E. Estrada, [*The structure of complex networks: theory and applications*]{}, (Oxford University Press, Oxford, 2011). A. Barrat, M. Barthélemy and A. Vespignani, [*Dynamical processes on complex networks*]{}, (Cambridge University Press, Cambridge, 2012). V. Latora, V. Nicosia and G. Russo, [*Complex Networks: Principles, Methods and Applications*]{}, (Cambridge University Press, Cambridge, 2012). P. Erd[ő]{}s and A. Rényi, On random graphs I, [*Publicationes Mathematicae*]{} [**6**]{}, 290 (1959). P. Erd[ő]{}s and A. Rényi, On the evolution of random graphs, [*Publ. Math. Inst. Hung. Acad. Sci.*]{} [**5**]{}, 17 (1960). P. Erd[ő]{}s and A. Rényi, On the evolution of random graphs II, [*Bull. Inst. Int. Stat.*]{} [**38**]{}, 343 (1961). B. Bollobás, The evolution of random graphs, [*Trans. Amer. Math. Soc.*]{} [**286**]{}, 257 (1984). M. Molloy and A. Reed, A critical point for random graphs with a given degree sequence, [*Random Structures and Algorithms*]{} [**6**]{}, 161 (1995). M. Molloy and A. Reed, The size of the giant component of a random graph with a given degree sequence, [*Combin., Prob. and Comp.*]{} [**7**]{}, 295 (1998). B. Bollobás, Random graphs (Cambridge University Press, Cambridge, 2001). I. Tishby, O.
Biham, E. Katzav and R. Kühn, Revealing the micro-structure of the giant component in random graph ensembles, [*Phys. Rev. E*]{} [**97**]{}, 042318 (2018). M.E.J. Newman, S.H. Strogatz and D.J. Watts, Random graphs with arbitrary degree distributions and their applications, [*Phys. Rev. E*]{} [**64**]{}, 026118 (2001). R. Cohen and S. Havlin, Scale-free networks are ultrasmall, [*Phys. Rev. Lett.*]{} [**90**]{}, 058701 (2003). P. Erd[ő]{}s and T. Gallai, Gráfok el[ő]{}írt fokszámú pontokkal, [*Matematikai Lapok*]{} [**11**]{}, 264 (1960). S.A. Choudum, A simple proof of the Erd[ő]{}s-Gallai theorem on graph sequences, [*Bulletin of the Australian Mathematical Society*]{} [**33**]{}, 67 (1986). H. Bonneau, A. Hassid, O. Biham, R. Kühn and E. Katzav, Distribution of shortest cycle lengths in random networks, [*Phys. Rev. E*]{} [**96**]{}, 062307 (2017). I. Tishby, O. Biham, R. Kühn and E. Katzav, Statistical analysis of articulation points in configuration model networks, [*Phys. Rev. E*]{} [**98**]{}, 062301 (2018). E. Katzav, O. Biham, and A.K. Hartmann, Distribution of shortest path lengths in subcritical Erd[ő]{}s-Rényi networks, [*Phys. Rev. E.*]{} [**98**]{}, 012301 (2018). M. Kang, Giant components in random graphs, [*The IMA Volumes in Mathematics and its Applications*]{} [**159**]{}, page 235, edited by A. Beveridge et al. (Springer International Publishing Switzerland, 2016). B. Bollobas and O. Riordan, The phase transition in the Erd[ő]{}s-Rényi random graph process, [*Bolyai Society Mathematical Studies*]{} [**25**]{}, 59 (2013). O. Riordan, The phase transition in the configuration model, [*Combinatorics, Probability and Computing*]{} [**21**]{}, 265 (2012). S. Mizutaka and T. Hasegawa, Disassortativity of percolating clusters in random networks, [*Phys. Rev. E*]{} [**98**]{}, 062314 (2018). M.E.J. Newman, Assortative mixing in networks, [*Phys. Rev. Lett.*]{} [**89**]{}, 208701 (2002). F.W.J. Olver, D.M. Lozier, R.F. Boisvert and C.W. 
Clark, [*NIST handbook of mathematical functions*]{} (Cambridge University Press, Cambridge, 2010). G. Bianconi, The entropy of randomized network ensembles, [*Europhys. Lett.*]{} [**81**]{}, 28005 (2008). G. Bianconi, Entropy of network ensembles, [*Phys. Rev. E*]{} [**79**]{}, 036114 (2009). A.J.E.M. Janssen and J.S.H. van Leeuwaarden, Giant component sizes in scale-free networks with power-law degrees and cutoffs, [*EPL*]{} [**112**]{}, 68001 (2015). S. Johnson, J.J. Torres, J. Marro and M.A. Munoz, Entropic origin of disassortativity in complex networks, [*Phys. Rev. Lett.*]{} [**104**]{}, 108702 (2010). O. Williams and C.I. Del Genio, Degree Correlations in Directed Scale-Free Networks, [*Plos One*]{} [**9**]{}, e110121 (2014). T. Coolen, A. Annibale and E. Roberts, [*Generating Random Networks and Graphs*]{}, (Oxford University Press, Oxford, 2017). A.C.C. Coolen, A. De Martino and A. Annibale, Constrained Markovian dynamics of random graphs, [*J. Stat. Phys.*]{} [**136**]{}, 1035 (2009). A. Annibale, A.C.C. Coolen, L.P. Fernandes, F. Fraternali and J. Kleinjung, Tailored graph ensembles as proxies or null models for real networks I: tools for quantifying structure, [*J. Phys. A*]{} [**42**]{}, 485001 (2009). E.S. Roberts, T. Schlitt and A.C.C. Coolen, Tailored graph ensembles as proxies or null models for real networks II: results on directed graphs, [*J. Phys. A*]{} [**44**]{}, 275002 (2011). E.S. Roberts, A. Annibale and A.C.C. Coolen, Tailored Random Graph Ensembles, [*J. Phys.: Conf. Ser.*]{} [**410**]{}, 012097 (2013). S.S. Shen-Orr, R. Milo, S. Mangan and U. Alon, Network motifs in the transcriptional regulation network of Escherichia coli, [*Nature Genetics*]{} [**31**]{}, 64 (2002) N. Kashtan, S. Itzkovitz, R. Milo and U. Alon, Topological generalizations of network motifs, [*Phys. Rev. E*]{} [**70**]{}, 031909 (2004). S. Maslov, K. Sneppen and A. 
Zaliznyak, Detection of topological patterns in complex networks: correlation profile of the internet, [*Physica A*]{} [**333**]{}, 529 (2004). J. Park and M.E.J. Newman, Origin of degree correlations in the Internet and other networks, [*Phys. Rev. E*]{} [**68**]{}, 026112 (2003). P. Holme and J. Zhao, Exploring the assortativity-clustering space of a network’s degree sequence, [*Phys. Rev. E*]{} [**75**]{}, 046111 (2007). L. Giot et al., A Protein Interaction Map of Drosophila melanogaster, [*Science*]{} [**302**]{}, 1727 (2003). S. Wandelt, X. Sun, E. Menasalvas, A. Rodriguez-González and M. Zanin, On the use of random graphs as null model of large connected networks, [*Chaos, Solitons and Fractals*]{} [**119**]{}, 318 (2019). M. Karsai, G. Iniguez, R. Kikas, K. Kaski and J. Kertész, Local cascades induced global contagion: how heterogeneous thresholds, exogenous effects, and unconcerned behaviour govern online adoption spreading, [*Scientific Reports*]{} [**6**]{}, 27178 (2016). M.-X. Li, Z.-Q. Jiang, W.-J. Xie, S. Micciche, M. Tumminello, W.-X. Zhou and R.N. Mantegna, A comparative analysis of the statistical properties of large mobile phone calling networks, [*Scientific Reports*]{} [**4**]{}, 5132 (2014).
--- abstract: 'We have investigated the conductance spectra of Sn-Bi$_2$Se$_3$ interface junctions down to 250 mK and in different magnetic fields. A number of conductance anomalies were observed below the superconducting transition temperature of Sn, including a small gap different from that of Sn, and a zero-bias conductance peak growing at lower temperatures. We discuss the possible origins of the smaller gap and the zero-bias conductance peak. These phenomena support the formation of a proximity-effect-induced chiral superconducting phase at the interface between the superconducting Sn and the strong spin-orbit coupling material Bi$_2$Se$_3$.' author: - Fan Yang - Yue Ding - Fanming Qu - Jie Shen - Jun Chen - Zhongchao Wei - Zhongqing Ji - Guangtong Liu - Jie Fan - Changli Yang - Tao Xiang - Li Lu title: 'Proximity effect at superconducting Sn-Bi$_2$Se$_3$ interface' --- INTRODUCTION ============ Due to strong spin-orbit coupling (SOC), electrons in the surface states (SS) of a topological insulator (TI) become completely helical, forming a new category of half metals [@1; @2; @3]. Among many exciting features of TIs, the exotic physics at the interface between a three-dimensional (3D) TI and an $s$-wave superconductor is of particular interest. According to theoretical predictions, novel superconductivity with effectively spinless $p_x+ip_y$ pairing symmetry will be induced via the proximity effect, and Majorana bound states will emerge at the edges [@Bolech2007; @26; @29; @28; @27]. Several experimental schemes have been proposed to test the predictions, but the progress reported so far is limited to the observation of a supercurrent in Al-Bi$_2$Se$_3$-Al junctions [@Swiss]. It is not clear whether this is because the predicted exotic properties are suppressed by the existence of bulk states (BS) which are present in most transport measurements. Nevertheless, there are signs that the majority of electrons are still significantly helical in the presence of BS.
For example, the magneto-resistance of Bi$_2$Se$_3$ exhibits an unusually robust weak anti-localization behavior [@22q; @23; @25q; @24q; @YYwang], indicating the existence of a Berry phase $\pi$ in the band structure of those electrons involved in transport measurements. Therefore, it is possible that some of the novel properties originally predicted for ideal TIs are still experimentally observable even in the presence of BS. In this Article, we report our experimental investigation on the conductance spectra of superconductor-normal metal (S-N) interface junctions made of Sn film and Bi$_2$Se$_3$ single crystalline flake with BS, where Sn is a simple $s$-wave superconductor, and Bi$_2$Se$_3$ is a typical 3D TI candidate [@Fangzhong]. Several anomalies were found, including a double-gap structure that develops below the superconducting transition temperature of Sn and a zero-bias conductance peak that grows at lower temperatures. We will discuss the possible origins of these phenomena, and show that they can be interpreted in terms of the formation of a proximity-effect-induced chiral superconducting phase at the interface. \[sec:experiment\]EXPERIMENT: S-Bi$_2$Se$_3$ JUNCTIONS ==================================================== The Bi$_2$Se$_3$ flakes used in this experiment were mechanically exfoliated from a high quality single crystal, and those with thickness of $\sim$100 nm were transferred to degenerately doped Si substrates with a 300 nm-thick SiO$_{\rm 2}$ for device fabrication. Two Pd electrodes were first deposited onto a selected flake. Then, an insulating layer of heavily-overexposed PMMA photoresist with a $1\times 1 \mu{\rm m}^2$ hole at the center was fabricated on top of the flake. Finally, 200-nm-thick Sn electrodes were patterned and deposited via sputtering. The device structure and measurement configuration are illustrated in Figs. 1(b) and 1(c).
Pseudo-four-terminal measurement was performed in a $^3$He cryostat by using lock-in amplifiers, with an ac excitation current of 1 $\mu$A at 30.9 Hz. Determined from Hall effect measurements, the thin flakes of Bi$_2$Se$_3$ used in this experiment have a typical carrier density of 10$^{18}$ cm$^{-3}$ and a typical mobility of 5000 cm$^{2}/$Vs at $T=$ 1.6 K. The Sn films deposited show a sharp superconducting transition at $T_{\rm c}\approx$ 3.8 K and with a critical field $H_{\rm c}$ less than 60 mT at 300 mK, indicating their high quality in terms of superconductivity, in spite of the granular morphology \[Fig. 1(c)\] due to self-annealing at room temperature. For the study of proximity effect, a clean interface and a relatively small junction resistance are necessary. If the interfacial barrier strength is too high, no proximity effect will occur. In order to improve the contact between Sn and Bi$_2$Se$_3$, some of the devices were treated with Ar ion etching in a reactive ion etching system, to remove the possible remnant photoresist in the junction area prior to Sn deposition (with a pressure of 100 mTorr, a power of 50 W and for $\sim$10 s). Since Ar does not react with Bi$_2$Se$_3$, the etching is generally a physical process. Ar etching was found to enhance the transparency of the junction. However, good contact can still be achieved without etching. The primary features of the conductance spectra were found to be similar for devices with comparable interfacial resistance, regardless of the treatment prior to Sn deposition. More than a dozen devices were fabricated and measured at $T$=1.6 K; six of them were further investigated down to 250 mK. All devices exhibited qualitatively similar features. In this Article, we show the data taken from three typical devices, labeled as \#1, \#2 and \#3. The normal-state resistances (taken at $T=4$ K) of these devices are 13.5 $\Omega$, 7.5 $\Omega$ and 10.2 $\Omega$, respectively.
Figure 1(a) shows the measured zero-bias differential conductance $G$ as a function of $T$ for devices \#1 and \#2, where the data are normalized to their values above $T_{\rm c}\approx 3.8$ K. With decreasing $T$, the conductance increases abruptly below $T_{\rm c}$, and reaches a peak with a maximum enhancement of 3.9% for device \#1 and 7.7% for device \#2, then the conductance drops gradually until a turning point at $\sim$1.2 K. Below this temperature, the conductance increases, deviating from the saturation tendency expected from the BTK theory [@30]. The deviation at 250 mK is $\sim$8% for device \#1 and $\sim$3% for device \#2. By applying a magnetic field $B$=0.1 T, all the low-temperature structures on the $G-T$ curves were removed, indicating that they are closely related to the superconductivity of Sn. In BTK theory [@30], the normalized zero-bias $dI/dV$ of an S-N junction, $Y$, can be written as: $$Y(Z,T)=\left.\frac{I_{NS}(Z,T)}{I_{NN}(Z)}\right|_{eV\rightarrow0}=(1+Z^{2})\int_{-\infty}^{\infty}\left(-\frac{\partial f_{0}}{\partial E}\right)[2A(E)+C(E)+D(E)]\,dE$$ where the dimensionless parameter $Z$ describes the barrier strength of the S-N junction, $f_{0}(E,T)$ is the Fermi distribution, $A(E)$, $C(E)$ and $D(E)$ are functions defined in the BTK theory. At $T=0$ this equation can be simplified to: $$\left.Y\right|_{T=0}=\frac{2(1+Z^{2})}{(1+2Z^{2})^{2}}$$ With this formula, one can estimate the $Z$ value of a device using its saturated conductance at low temperatures. In Fig. 1(a), the normalized $dI/dV$ for devices \#1 and \#2 saturate to about 0.82 and 1.04, as indicated by the solid lines, which yield barrier strengths of $Z=$0.66 and 0.54 for devices \#1 and \#2, respectively.
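Extracting $Z$ from the saturated zero-bias conductance amounts to inverting the $T=0$ formula, which decreases monotonically from $Y=2$ at $Z=0$. A short numerical sketch (function names are ours; the values 0.82 and 1.04 are the saturated conductances quoted above):

```python
def btk_zero_bias(Z):
    """Normalized zero-bias conductance at T = 0 in BTK theory:
    Y = 2 (1 + Z^2) / (1 + 2 Z^2)^2."""
    z2 = Z * Z
    return 2.0 * (1.0 + z2) / (1.0 + 2.0 * z2) ** 2

def barrier_strength(Y, z_max=10.0, n_iter=100):
    """Invert Y(Z) by bisection, using that Y decreases monotonically in Z."""
    lo, hi = 0.0, z_max
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if btk_zero_bias(mid) > Y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the saturated values 0.82 and 1.04 this gives $Z \approx 0.66$ and $Z \approx 0.54$, consistent with the barrier strengths quoted in the text.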
Such barriers are not in the transparent limit (i.e., $Z=0$) nor in the tunneling limit ($Z\gg$1), which, on the one hand, ensures a reasonably strong proximity effect at the interface, and on the other hand enables us to probe the density of states. In Fig. 2 and Fig. 3 we show the conductance spectra, namely the bias voltage ($V_{\rm bias}$) dependence of differential conductance, of device \#1 measured at different temperatures and in different magnetic fields. Each curve is normalized to its high bias value in region III. Three unusual features were observed, as elaborated below. The first feature is a bump-like enhancement, together with sharp dips at the two sides. It develops at temperatures immediately below $T_{\rm c}$ in regions I and II, as marked in Fig. 2, and is best seen at high $T$ when the gaps are largely undeveloped. It corresponds to the abrupt increase of conductance in the $G-T$ curves just below $T_{\rm c}$. With decreasing $T$ the bump structure evolves and extends to a bias voltage several times larger than the superconducting gap of Sn. In the meantime, gap structures develop around zero bias voltage, which will be discussed later. The position of the dips is device-dependent. It is very close to zero bias voltage at temperatures just below $T_c$, but can reach as high as 8 mV at low temperatures for high resistance junctions in this experiment (data not shown). The dips can be safely attributed to current-driven destruction of superconductivity in the local Sn film surrounding the junction. We note that the use of large measurement current (up to 200 $\mu$A here) is unavoidable in proximity effect studies where the junctions usually need to be reasonably transparent.
Estimation shows that the local current density at the step edge of the window of our junctions could reach $\sim 10^4$ to $10^5$ A/cm$^{2}$ at $V_{\rm bias}$=2 mV (for a 10 $\Omega$ junction), which may well exceed the critical current density of the Sn film. The non-monotonic $T$ dependence of the dip position, as shown in Fig. 2 and Fig. 4 for devices \#1 and \#3, and in Fig. 7 for comparative Sn-graphite devices, suggests that the detailed destruction process might involve local heating and thermal conduction, which has a significant temperature dependence below $\sim$1 K (see Section \[sec:comparative\]). The second feature in the conductance spectra is a double-gap structure that develops on the enhanced conductance background, as shown in region I of Fig. 2(a). The borders of the two gaps are indicated by the arrows. The development of this structure is responsible for the drop of the $G-T$ curves below the peak temperatures in Fig. 1(a). For device \#1, the first (bigger) gap is $\Delta_1$=0.59 mV, which matches the superconducting gap of Sn, and the second (smaller) gap is $\Delta_2$=0.21 mV, only about 1/3 of the first one. It should be noted that for most devices (10 out of 12) only the smaller gap was clearly observed. In Figs. 4(a) and 4(d) we show two such examples, observed on devices \#3 and \#2, respectively. For device \#3, a faint structure can still be resolved at the $\Delta_1$ position, as indicated by the blue arrows in Fig. 4(a), and is best seen on the curve taken at 1.2 K. The existence of two distinct gaps clearly indicates that the smaller gap is not the superconducting gap of Sn. It should arise from some new superconducting phase formed at the interface, which will be further discussed later. The third feature in the conductance spectra is a zero-bias conductance peak (ZBCP) that develops at low temperatures. This ZBCP is responsible for the conductance increment on the $G-T$ curves below $\sim$1.2 K. 
The peak height grows almost linearly with decreasing $T$. It reaches 13.4% and 16.4% of the normal-state conductance at the lowest temperature of this experiment, 250 mK, for devices \#1 and \#3, respectively, as shown in Figs. 2(c) and 4(b). The peak width [@Note_ZBCP_width] decreases with decreasing $T$ at low temperatures, as shown in Figs. 2(d) and 4(c). Both the height and the width of the ZBCP can be suppressed by applying a magnetic field, as shown in Figs. 3(c) and 3(d). The temperature dependence of the peak width contains important information about the origin of the ZBCP. When we plot the ZBCP against bias current instead of bias voltage, as shown in Figs. 5(a) and 5(b) for devices \#1 and \#3 respectively, the peak width decreases with decreasing $T$ as well \[Fig. 5(c)\]. This indicates that the ZBCP originates from some kind of resonance whose width is controlled by thermal broadening. \[sec:comparative\]Comparative Experiment: S-Graphite Junctions =============================================================== In order to examine whether the conductance anomalies observed in Sn-Bi$_{\rm 2}$Se$_{\rm 3}$ interfacial junctions are intrinsic properties of the interface between the $s$-wave superconductor Sn and a helical metal, we have made two more devices for comparison by replacing Bi$_{\rm 2}$Se$_{\rm 3}$ with bulk graphite, while keeping other parameters the same. We chose graphite because it is similar to Bi$_{\rm 2}$Se$_{\rm 3}$ in carrier density and mobility, but with much weaker spin-orbit coupling strength [@yao]. The two devices, made of graphite flakes of thickness $\sim$ 100 nm, are labeled S1 and S2, respectively. Their optical images are shown in Fig. 6. In Fig. 7(d) we show the measured zero-bias differential conductance $G$ as a function of temperature for these two devices. The data are normalized to their values above $T_{\rm c} \approx 3.8$ K (around 6 $\Omega$ for both devices). 
The $G-T$ curves are similar to those of the Sn-Bi$_{\rm 2}$Se$_{\rm 3}$ devices, except that there are no upturns below $\sim$1.2 K. The conductance decreases monotonically with decreasing temperature down to 250 mK without saturation. Figures 7(b) and 7(c) show the conductance spectra of devices S1 and S2, respectively, taken at different temperatures. A single gap develops at low temperatures. The coherence peak of that gap is located at $\sim$0.5 mV, which is close to the superconducting gap of the Sn electrode. Neither a zero-bias peak nor a second smaller gap was observed. This single-gap structure can be well understood within the BTK theory. There is a sharp dip at each side of the conductance spectrum, beyond which the curve becomes flat and featureless. As discussed in Section \[sec:experiment\], we attribute the dip structure to the local destruction of superconductivity of the Sn film near the junction. The fact that similar dips were found in both Sn-Bi$_{\rm 2}$Se$_{\rm 3}$ and Sn-graphite junctions suggests that these dips are not specifically related to Bi$_{\rm 2}$Se$_{\rm 3}$. The magnetic field dependencies of the conductance spectra of devices S1 and S2 are plotted in Figs. 8(a) and 8(b), respectively. All the structures in the spectrum can be removed by applying a magnetic field higher than the $H_{\rm c}$ of Sn, indicating that they are related to the superconductivity of the Sn electrode. In summary, our comparative experiment on Sn-graphite devices reveals only a single-gap structure; neither a second smaller gap nor a ZBCP was observable, unlike in the Sn-Bi$_{\rm 2}$Se$_{\rm 3}$ devices. The data also show that the non-monotonic temperature dependence of the dip position is unrelated to the use of Bi$_{\rm 2}$Se$_{\rm 3}$. 
Discussion: Possible Origins of the ZBCP and the Double-Gap Structure ===================================================================== The general trends of the $G-T$ and $G-V_{\rm bias}$ curves of the Sn-Bi$_2$Se$_3$ devices can be understood within the framework of the BTK theory [@30], which was developed in the early 1980s to describe the two-particle process at S-N interfaces. However, this theory can explain neither the appearance of a second gap nor the ZBCP at low temperatures. Previously, ZBCPs were also observed in some S-N [@a; @b; @c; @d; @f] and S-insulator-N [@e] junctions, and were explained by several different mechanisms. The first possible mechanism is related to incoherent accumulation of Andreev reflections (AR), which happens when there is a large probability of backscattering due to, e.g., the involvement of the other surface of the normal-metal thin film [@b; @c]. ZBCPs of this kind usually grow immediately below $T_c$. The ZBCP observed here seems irrelevant to this mechanism, since it has sensitive $T$ and $B$ dependencies and appears at much lower temperatures. The second possible mechanism is related to coherent scattering of carriers near the interface due to phase conjugation between the electron’s and the hole’s trajectories, leading to an enhanced AR probability [@i]. This mechanism is also expressed in a more general way by using random matrix theory [@h]. A ZBCP caused by this mechanism is sensitive to both temperature and magnetic field, since it involves a coherent loop. However, these theories do not take into account the strong SOC and its resulting Berry phase. In the presence of strong SOC, the phase accumulated by the incident electron along its path cannot be canceled by the retro-reflected hole, i.e., the phase difference between the $N^{\rm th}$ and the $(N+1)^{\rm th}$ reflected hole is not zero. Furthermore, the theory in Ref. 
\[27\] suggests that this kind of ZBCP often appears in junctions with a relatively strong scattering rate, and that the value of the conductance peak will not exceed the conductance of the normal state, whereas in our experiment the ZBCP can be higher than the normal-state conductance, as shown in Fig. 1(a) for device \#2. Therefore, we believe that the ZBCP observed in our experiments is not caused by the aforementioned constructive interference. The third explanation is to phenomenologically attribute the ZBCP to a pair current flowing between the superconducting electrode and the proximity-induced superconducting phase [@a]. A ZBCP of this type would behave like the critical supercurrent of a Josephson junction: as temperature decreases, the critical current first increases and then saturates. Its peak width, when plotted against bias current, would therefore be expected to increase with decreasing $T$. However, the ZBCP in this experiment shrinks with decreasing $T$, as can be seen in Fig. 5. Therefore, the pair-current picture seems inapplicable to our results. Another possible mechanism of the ZBCP involves unconventional superconductivity with an asymmetric orbital order parameter [@SrRuO; @5; @4; @CR_Hu; @6; @7; @8; @9]. For example, in the $p$-wave superconductor Sr$_2$RuO$_4$, the order parameter undergoes a sign change upon reflection at the S-N edge in the $ab$-plane, giving rise to an Andreev bound state at the Fermi energy and a ZBCP in tunneling measurements [@SrRuO]. Having ruled out the other possible mechanisms to the best of our knowledge, we believe that this mechanism involving unconventional superconductivity is most likely responsible for the appearance of the ZBCP in this experiment. Our entire picture for the observed phenomena is as follows. 
With decreasing $T$ to below the $T_{\rm c}$ of Sn, the proximity effect develops at the S-N interface via two-particle exchange processes, i.e., Cooper pairs are exchanged from the Sn side to the Bi$_2$Se$_3$ side, and entangled quasi-particle pairs are exchanged back in a time-reversal process, known as AR. Unlike in the usual proximity effect, where only a single gap of the parent superconductor is seen, the observation of a second gap here indicates the formation of a new S’-N interface, where S’ is not Sn but a proximity-effect-induced superconducting Bi$_2$Se$_3$ phase. The original Sn-Bi$_2$Se$_3$ interface becomes an S-S’ interface whose role vanishes in resistance measurements, so that the gap structure related to Sn is either absent or largely suppressed. We speculate that the suppression of back-scattering due to strong SOC and the high electron mobility in Bi$_2$Se$_3$ help maintain the coherence of the two-particle AR process in the space and time domains, thus stabilizing the proximity-effect-induced new superconducting phase (S’ phase) in a substantially large volume of Bi$_2$Se$_3$. With further decreasing $T$, the new superconducting phase becomes uniformly developed, and the coherence peaks at the two shoulders of the gap appear. The ZBCP grows simultaneously, presumably also related to the improved uniformity of the new superconducting phase at low temperatures. Since no ZBCP is observed in similar devices made of graphite, which has negligible SOC, one naturally speculates that strong SOC is the cause of the ZBCP in Sn-Bi$_2$Se$_3$ devices. The important role of strong SOC for the electron states in Bi$_2$Se$_3$ has been revealed by the unusually robust behavior of electron weak-anti-localization [@22q; @23; @25q; @24q; @YYwang], a phenomenon which mainly occurs in 2D electron systems. It indicates the existence of a Berry phase $\pi$ in the electrons’ band structure. 
The new superconducting phase formed in such a background, with inter-locked momentum and spin degrees of freedom, is believed to effectively possess a spinless $p_x+ip_y$ pairing symmetry. The asymmetric orbital part of the order parameter, inherited from the Berry curvature of the bands, forms resonant bound states at the S’-N interface due to the interference between the incoming and reflected waves there, hence giving rise to the observed ZBCP. In such a picture, it is natural that the peak width becomes narrower with reduced thermal broadening at lower $T$. Conclusion ========== To summarize, we have investigated the conductance spectra of S-N junctions between the $s$-wave superconductor Sn and the strong-SOC material Bi$_2$Se$_3$. A small gap different from that of Sn was clearly resolved, together with a ZBCP growing at low temperatures. The results indicate the formation of a new superconducting phase with unconventional pairing symmetry at the interface. Our work should encourage future experiments to search for Majorana fermions and other pertinent properties by employing hybrid structures of $s$-wave superconductors and topological insulator-related materials. We would like to thank H. F. Yang and C. Q. Jin for experimental assistance, and L. Fu, Z. Fang, X. Dai, Q. F. Sun, X. C. Xie and S. C. Zhang for stimulating discussions. This work was supported by the National Basic Research Program of China from the MOST under contracts No. 2009CB929101 and 2011CB921702, by the NSFC under contracts No. 11174340 and 11174357, and by the Knowledge Innovation Project and the Instrument Developing Project of CAS. 
*Note added in revision*: After the submission of this manuscript, observation of a supercurrent and possible evidence of Pearl vortices were reported in W-Bi$_2$Se$_3$-W junctions [@Pennsylvania], a ZBCP was observed on a normal metal-Cu$_{\rm x}$Bi$_2$Se$_3$ point contact [@Ando], and evidence of perfect Andreev reflection of the helical mode was obtained in InAs/GaSb quantum wells [@RRDu]. [10]{} X.-L. Qi and S.-C. Zhang, Physics Today **63**, 33 (2010). M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. **82**, 3045 (2010) and references therein. J. E. Moore, Nature **464**, 194-198 (2010) and references therein. C. J. Bolech and E. Demler, Phys. Rev. Lett. **98**, 237002 (2007). L. Fu and C. L. Kane, Phys. Rev. Lett. **100**, 096407 (2008). L. Fu and C. L. Kane, Phys. Rev. Lett. **102**, 216403 (2009). Y. Tanaka, T. Yokoyama and N. Nagaosa, Phys. Rev. Lett. **103**, 107002 (2009). K. T. Law, P. A. Lee and T. K. Ng, Phys. Rev. Lett. **103**, 237001 (2009). B. Sacépé, J. B. Oostinga, J. Li, A. Ubaldini, N. J. G. Couto, E. Giannini and A. F. Morpurgo, arXiv:1101.2352v1 (2011). J. G. Checkelsky, Y. S. Hor, M.-H. Liu, D.-X. Qu, R. J. Cava, and N. P. Ong, Phys. Rev. Lett. **103**, 246601 (2009). J. Chen, H.-J. Qin, F. Yang, J. Liu, T. Guan, F.-M. Qu, G.-H. Zhang, J.-R. Shi, X.-C. Xie, C.-L. Yang, K.-H. Wu, Y. Q. Li and L. Lu, Phys. Rev. Lett. **105**, 176602 (2010). H.-T. He, G. Wang, T. Zhang, I.-K. Sou, and J.-N. Wang, arXiv:1008.0141 (2010). M.-H. Liu, C.-Z. Chang, Z.-C. Zhang, Y. Zhang, W. Ruan, K. He, L.-L. Wang, X. Chen, J.-F. Jia, S.-C. Zhang, Q.-K. Xue, X.-C. Ma, and Y.-Y. Wang, arXiv:1011.1055 (2010). J. Wang, A. M. DaSilva, C.-Z. Chang, K. He, J. K. Jain, N. Samarth, X.-C. Ma, Q.-K. Xue, and M. H. W. Chan, arXiv:1012.0271v1 (2010). H.-J. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang and S.-C. Zhang, Nature Phys. **5**, 438 (2009). G. E. Blonder, M. Tinkham and T. M. Klapwijk, Phys. Rev. B **25**, 4515 (1982). 
See Supplemental Material at \[URL inserted by publisher\] for supporting data. There is about $10\ \mu$V of smearing on the ZBCP by the small ac voltage used in the measurement. Y.-G. Yao, F. Ye, X.-L. Qi, S.-C. Zhang, and Z. Fang, Phys. Rev. B **75**, 041401(R) (2007). A. Kastalsky, A. W. Kleinsasser, L. H. Greene, R. Bhat, F. P. Milliken and J. P. Harbison, Phys. Rev. Lett. **67**, 3026 (1991). C. Nguyen, H. Kroemer and E. L. Hu, Phys. Rev. Lett. **69**, 2847 (1992). P. Xiong, G. Xiao and R. B. Laibowitz, Phys. Rev. Lett. **71**, 1907 (1993). N. Kim, H.-J. Lee, J.-J. Kim, J.-O. Lee, J. W. Park, K.-H. Yoo, S. Lee, and K. W. Park, Solid State Commun. **115**, 29 (2000). N. Agraït, J. G. Rodrigo and S. Vieira, Phys. Rev. B **46**, 5814 (1992). A. Vaknin and Z. Ovadyahu, J. Phys.: Cond. Matter **9**, L303 (1997). B. J. van Wees, P. de Vries, P. Magnée and T. M. Klapwijk, Phys. Rev. Lett. **69**, 510 (1992). C. W. J. Beenakker Phys. Rev. B **46**, 12841 (1992). F. Laube, G. Goll, H. v. Löhneysen, M. Fogelström and Lichtenberg, Phys. Rev. Lett. **84**, 1595 (2000). L. J. Buchholtz and G. Zwicknagl, Phys. Rev. B **23**, 5788 (1981). C. Bruder, Phys. Rev. B **41**, 4017 (1990). C. R. Hu, Phys. Rev. Lett. **72**, 1526 (1994). M. Yamashiro, Y. Tanaka, and S. Kashiwaya, Phys. Rev. B **56**, 7847 (1997). C. Honerkamp and M. Sigrist, J. Low Temp. Phys. **111**, 895 (1998). S. Kashiwaya and Y. Tanaka, Rep. Prog. Phys. **63**, 1641 (2000). A. Calzolari, D. Daghero, R. S. Gonnelli, G. A. Ummarino, V. A. Stepanov, R. Masini, M. R. Cirnberle and M. Ferretti, J. Phys. Chem. Solids **67**, 597 (2006). D.-M. Zhang, J. Wang, A. M. DaSilva, J. S. Lee, H. R. Gutierrez, M. H. W. Chan, J. Jain and N. Samarth, Phys. Rev. B **84**, 165120 (2011). S. Sasaki, M. Kriener, K. Segawa, K. Yada, Y. Tanaka, M. Sato, Y. Ando, Phys. Rev. Lett. **107**, 217001 (2011). I. Knez, R. Du and G. Sullivan, arXiv:1106.5819v1 (2011).
--- abstract: 'We prove a result on the singularities of ball quotients $\Gamma\backslash{{\mathbb C}H^n}$. More precisely, we show that a ball quotient has canonical singularities under certain restrictions on the dimension $n$ and the underlying lattice. We also extend this result to the toroidal compactification $(\Gamma\backslash{{\mathbb C}H^n})^*$.' address: 'Institut für Algebraische Geometrie, Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany' author: - Niko Behrens title: Singularities of ball quotients --- Introduction ============ Modular varieties are much studied objects in algebraic geometry. An example is the moduli space of polarised K3 surfaces which is a modular variety of orthogonal type. Similar modular varieties also occur in the context of irreducible symplectic manifolds. V.A. Gritsenko, K. Hulek and G.K. Sankaran proved that the compactified moduli space of polarised K3 surfaces of degree $2d$ has canonical singularities. This result was used to show that this moduli space is of general type if $d>61$ (cf. [@MR2336040]). In this paper we shall consider ball quotients $\Gamma\backslash{{\mathbb C}H^n}$ where $\Gamma$ is an arithmetic subgroup of the group of unitary transformations of a hermitian lattice $\Lambda$ of signature $(n,1)$, where $\Lambda{\cong}{{\mathcal O}}^{n+1}$ for ${{\mathcal O}}$ the ring of integers of some number field ${{{\mathbb Q}(\sqrt{ D })}},\ D<0$. Varieties that arise in such a way often also have an interpretation as moduli spaces. Examples appear in D. Allcock’s work [@MR1949641] as the moduli space of cubic threefolds and in the work of D. Allcock, J. Carlson and D. Toledo [@MR1910264] as the moduli space of cubic surfaces. There are also papers of S. 
Kondō [@MR1780433; @MR2306153] on the moduli space of ordered 5 points on ${\mathbb P}^1$, which appears as a two-dimensional ball quotient, or on the moduli space of plane quartic curves, which is birational to a quotient of a $6$-dimensional complex ball. Ball quotient surfaces $B^2_{\mathbb C}/\Gamma$ were studied by R.-P. Holzapfel. Among other things he calculated formulae for the Euler number $e(\overline{B^2_{\mathbb C}/\Gamma})$ and the index $\tau(\overline{B^2_{\mathbb C}/\Gamma})$ for a smooth model of the Baily-Borel compactification, and studied arithmetic aspects of ball quotient surfaces, e.g. [@MR653917; @MR1685419]. We give an outline of the organisation and the results of this paper. In section \[SectionDefofObjects\] we will recall the general definitions and basic properties of the objects that will be studied in the following sections. Furthermore the problem will be reduced to the local study of the action of the stabiliser subgroup $G$ on the tangent space $T_{[\omega]}{{\mathbb C}H^n}$. In section \[SectionInterior\] we will provide a criterion which implies that $\Gamma\backslash{{\mathbb C}H^n}$ has canonical singularities, using methods similar to those of [@MR2336040]. This will be achieved by studying the representations of the action of $G$ on the tangent space ${\operatorname{Hom}}({\mathbb W},{\mathbb C}^{n+1}/{\mathbb W})$ and applying the Reid-Tai criterion. For this one first studies elements that do not act as quasi-reflections and then reduces the general situation to previous results. For ball quotients $\Gamma\backslash{{\mathbb C}H^n}$ the existence of toroidal compactifications $(\Gamma\backslash{{\mathbb C}H^n})^*$ follows from the general theory described in [@MR0457437]. Since all cusps in the Baily-Borel compactification are $0$-dimensional, these toroidal compactifications are unique. We will describe them in section \[SectionBoundary\]. 
We can then apply the results of section \[SectionInterior\] and prove the main result: The projective variety $(\Gamma\backslash{{\mathbb C}H^n})^*$ has canonical singularities for $n\geq 13$, provided the discriminant of the number field ${{{\mathbb Q}(\sqrt{ D })}}$ associated to the lattice is not equal to $-3$, $-4$ or $-8$. As in the orthogonal case this result can be used in the study of the Kodaira dimension of unitary modular varieties. This is the motivation of our work. Acknowledgements {#acknowledgements .unnumbered} ---------------- This article is based on my PhD thesis. I want to thank my advisor K. Hulek for his guidance and support. I would also like to thank V.A. Gritsenko and G.K. Sankaran for various helpful discussions. First definitions and properties {#SectionDefofObjects} ================================ We first introduce the objects that we will study in the following sections. Let ${{{\mathbb Q}(\sqrt{ D })}}$ be an imaginary quadratic number field, i.e. $D<0$ a squarefree integer, and ${{{\mathcal O}}}={{\mathcal O}}_{{{\mathbb Q}(\sqrt{ D })}}$ the corresponding ring of integers. Let $\Lambda$ be an ${{\mathcal O}}$-lattice of signature $(n,1)$, i.e. a free ${{\mathcal O}}$-module of rank $n+1$ with a hermitian form of signature $(n,1)$. Therefore we have an isomorphism $\Lambda{\cong}{{\mathcal O}}^{n,1}$. The hermitian form given by this lattice will be denoted by $h(\cdot,\cdot)$. When we fix a basis we get the isomorphism $$\psi:\ \Lambda\otimes_{{\mathcal O}}{\mathbb C}{\cong}{\mathbb C}^{n,1}.$$ The form induced by $\psi$ will also be denoted by $h(\cdot,\cdot).$ Starting with the lattice $\Lambda$ we define the $n$-dimensional complex hyperbolic space as $$\begin{aligned} {{\mathbb C}H^n}:=\{[\omega]\in{\mathbb P}(\Lambda\otimes_{{\mathcal O}}{\mathbb C});\ h(\omega,\omega)<0\}.\end{aligned}$$ By definition ${{\mathbb C}H^n}$ has a natural underlying lattice structure given by $\Lambda$. Due to G. 
Shimura there is the identification ${{\mathbb C}H^n}{\cong}U(n,1)/(U(n)\times U(1))$, cf. [@MR0156001]. For future use we define $$\begin{aligned} U(\Lambda):=\text{group of automorphisms of}\ \Lambda.\end{aligned}$$ After choosing a suitable basis $U(\Lambda)_{\mathbb C}:=U(\Lambda)\otimes_{{\mathcal O}}{\mathbb C}{\cong}U(n,1)$. Now let $\Gamma<U(\Lambda)$ be a subgroup of finite index. We denote the [*$n$-dimensional ball quotient*]{} by $$\begin{aligned} \Gamma\backslash{{\mathbb C}H^n}.\end{aligned}$$ This ball quotient is a quasi-projective variety by [@MR0216035]. It can be compactified using toroidal compactification which gives rise to a unique projective variety $(\Gamma\backslash{{\mathbb C}H^n})^*$. One can give a description of the ramification divisors. For this purpose let $$\begin{aligned} f_\Gamma:\ {{\mathbb C}H^n}{\longrightarrow}\Gamma\backslash{{\mathbb C}H^n}\label{mapf_Gamma}\end{aligned}$$ be the quotient map. The elements fixing a divisor in ${{\mathbb C}H^n}$ are the quasi-reflections. Thus the ramification divisors of $f_\Gamma$ are the fixed loci of elements of $\Gamma$ acting as quasi-reflections. As [@MR2336040] and [@MR1255698] did for the K3 (orthogonal) case we will investigate the local action on the tangent space. Fix a point $[\omega]\in{{\mathbb C}H^n}$ and define the stabiliser of $[\omega]$: $$\begin{aligned} G:=\Gamma_{[\omega]}:=\{g\in\Gamma;\ g[\omega]=[\omega]\}.\end{aligned}$$ This group is finite by results of [@MR1685419 4.1.2] or [@MR0314766 pp. 1]. Next define for $\omega\in\Lambda\otimes_{{\mathcal O}}{\mathbb C}$ the line ${\mathbb W}:={\mathbb C}\omega$ corresponding to $[\omega]$. Then we can define the following sublattices of the lattice $\Lambda$: $$\begin{aligned} S:={\mathbb W}^\perp\cap \Lambda,\ T:=S^\perp\cap\Lambda, \end{aligned}$$ where the orthogonal complements are taken with respect to the form $h(\cdot,\cdot)$. As before we can complexify these lattices. 
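As an aside, the defining condition $h(\omega,\omega)<0$ is easy to experiment with numerically. The following sketch (illustrative, not from the paper) assumes the standard diagonal hermitian form of signature $(n,1)$ on ${\mathbb C}^{n+1}$, with the negative-definite direction in the last coordinate:

```python
def h(v, w):
    """Standard hermitian form of signature (n,1): the sum of the first n
    coordinate products minus the product of the last coordinates."""
    n = len(v) - 1
    return sum(v[i] * w[i].conjugate() for i in range(n)) - v[n] * w[n].conjugate()

def in_complex_hyperbolic_space(omega):
    """[omega] lies in CH^n exactly when h(omega, omega) < 0."""
    return h(omega, omega).real < 0  # h(omega, omega) is always real
```

For instance, $(0,\dots,0,1)$ represents a point of ${{\mathbb C}H^n}$, while $(1,0,\dots,0)$ does not.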
We denote the resulting vector spaces by $$S_{\mathbb C}:=S\otimes_{{\mathcal O}}{\mathbb C},\ T_{\mathbb C}:=T\otimes_{{\mathcal O}}{\mathbb C}.$$ Now we have to study some properties of these lattices. Some proofs will be similar to those in [@MR2336040 2.1]. \[LemmaScapT=0\] $S_{\mathbb C}\cap T_{\mathbb C}=\{0\}$. Let $x\in S_{\mathbb C}\cap T_{\mathbb C}$. Then $h(x,x)=0$, since $x\in T_{\mathbb C}=S_{\mathbb C}^\perp$. Therefore it suffices to show that $h(\cdot,\cdot)$ is positive definite on $S_{\mathbb C}$. Consider ${\mathbb W}\subset\Lambda_{\mathbb C}$ with ${\mathbb C}$-basis $\{\omega\}$. Then $h(\omega,\omega)<0$ as $[\omega]\in{{\mathbb C}H^n}$. Hence the hermitian form has signature $(0,1)$ on ${\mathbb W}$ and thus signature $(n,0)$ on ${\mathbb W}^\perp$. By definition $S_{\mathbb C}\subset {\mathbb W}^\perp$ and the result follows. To describe the singularities we will study the action of the stabiliser $G$ on the tangent space $T_{[\omega]}{{\mathbb C}H^n}$. Therefore we need a more concrete description. The tangent space is known for the Grassmannian variety $G(1,n+1)$, e.g. [@MR770932 Chapter II, §2]. Hence we get $T_{[\omega]}{{\mathbb C}H^n}={\operatorname{Hom}}({\mathbb W},{\mathbb C}^{n+1}/{\mathbb W})$. From now on we denote this tangent space by $V:={\operatorname{Hom}}({\mathbb W},{\mathbb C}^{n+1}/{\mathbb W})$ and investigate the quotient $G\backslash V$ in more detail. Representations of cyclic groups over quadratic number fields ------------------------------------------------------------- Before we state the first results we have to study the behaviour of representations of the cyclic group ${{\mathbb Z}/ d{\mathbb Z}}$ over a given quadratic number field for an integer $d>1$. We denote the $d$th cyclotomic polynomial by $\phi_d$. The classification of irreducible representations will depend on whether $\phi_d$ is irreducible over ${{{\mathbb Q}(\sqrt{ D })}}$ or not. 
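Whether $\phi_d$ remains irreducible can be decided by a classical criterion: ${{{\mathbb Q}(\sqrt{ D })}}$ is contained in the cyclotomic field ${\mathbb Q}(\zeta_d)$ exactly when the discriminant of ${{{\mathbb Q}(\sqrt{ D })}}$ divides $d$, and $\phi_d$ factors over ${{{\mathbb Q}(\sqrt{ D })}}$ precisely in that case. A minimal sketch (illustrative, not from the paper) encoding this test:

```python
def field_discriminant(D):
    """Discriminant of Q(sqrt(D)) for a squarefree integer D:
    D itself if D = 1 (mod 4), and 4D otherwise."""
    return D if D % 4 == 1 else 4 * D

def cyclotomic_poly_reducible(D, d):
    """phi_d is reducible over Q(sqrt(D)) iff Q(sqrt(D)) lies inside
    Q(zeta_d), which happens iff the field discriminant divides d."""
    return d % abs(field_discriminant(D)) == 0
```

For example, $\phi_3$ splits over ${\mathbb Q}(\sqrt{-3})={\mathbb Q}(\zeta_3)$, while $\phi_4$ stays irreducible there.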
For the following we denote by $\left(\frac{\cdot}{\cdot}\right)$ the [*Kronecker symbol*]{}. \[PropositionirredRepsoverqnf\] Let $\rho:\ {{\mathbb Z}/ d{\mathbb Z}}{\longrightarrow}{\operatorname{Aut}}(W)$ be a representation of ${{\mathbb Z}/ d{\mathbb Z}}$ on the $d$-dimensional vector space $W$ over ${{{\mathbb Q}(\sqrt{ D })}}$. Then - there is a unique irreducible faithful representation $V_d$ if $\phi_d$ is irreducible. The eigenvalues of $\rho|_{V_d}(\zeta_d)$ are the primitive $d$th roots of unity. - there are two irreducible faithful representations $V_d',V_d''$ if $\phi_d$ is reducible. The eigenvalues of $\rho|_{V_d'}(\zeta_d)$ are the primitive $d$th roots of unity $\zeta_d^a$ with ${{\left(\frac{D}{a}\right)}}=1$ for $a\in{\left({\mathbb Z}/ d{\mathbb Z}\right)^*}$. The eigenvalues of $\rho|_{V_d''}(\zeta_d)$ are $\zeta_d^a$ for the remaining $a\in{\left({\mathbb Z}/ d{\mathbb Z}\right)^*}$, i.e. the $a$ with ${{\left(\frac{D}{a}\right)}}=-1$. This follows from results of L. Weisner [@MR1502846] and standard calculations in representation theory. For specific $D$ we can rephrase this as follows: - If $D>0$, then for each eigenvalue $\zeta_d^a$ of $\rho|_{V_d'}(\zeta_d)$ the complex conjugate $\zeta_d^{d-a}$ is an eigenvalue as well. - If $D<0$, then for each eigenvalue $\zeta_d^a$ of $\rho|_{V_d'}(\zeta_d)$ the complex conjugate $\zeta_d^{d-a}$ is an eigenvalue of $\rho|_{V_d''}(\zeta_d)$. As the Proposition shows, we have two different irreducible representations in case (ii). Let $d$ be a positive integer. - Sometimes we do not want to specify which of the representations $V_d$ resp. $V_d'$ or $V_d''$ we take, and only refer to ${{\mathcal V}}_d$, which will denote the appropriate representation in the given situation. - We define ${\mathbb V}_d:={{\mathcal V}}_d\otimes_{{{\mathbb Q}(\sqrt{ D })}}{\mathbb C}$. 
The interior {#SectionInterior} ============ In this section we study the decomposition of ${{{\mathbb Q}(\sqrt{ D })}}$-vector spaces associated to the lattices $S$ and $T$ under the action of a cyclic group. This enables us to state results on canonical singularities for ball quotients $\Gamma\backslash{{\mathbb C}H^n}$. $G$ acts on $S$ and $T$. $G$ acts on ${\mathbb W}$ and on $\Lambda$, hence on $S={\mathbb W}^\perp\cap \Lambda$ and on $T=S^\perp\cap\Lambda$. The spaces $S_{\mathbb C}$ and $T_{\mathbb C}$ are $G$-invariant subspaces of the vector space $\Lambda_{\mathbb C}$. We will only give a proof for $S_{\mathbb C}$, as the proof for $T_{\mathbb C}$ is similar. Let $y\in S_{\mathbb C}$, $\omega\in{\mathbb W}$ and $g\in G$. Then $$\begin{aligned} 0=h(y,\omega)=h(g(y),g(\omega))=\overline{\alpha(g)}\cdot h(g(y),\omega).\end{aligned}$$ As $\alpha(g)\neq 0$ we get $h(g(y),\omega)=0$, i.e. $g(y)\in S_{\mathbb C}$. The group $G$ has been defined as the stabiliser of $[\omega]$ and therefore the equation $$g(\omega)=\alpha(g)\omega$$ holds for all $g\in G$, where $$\alpha:\ G{\longrightarrow}{\mathbb C}^*$$ is a group homomorphism. Denote its kernel by $ G_0:=\ker \alpha$. Analogously to the above we define $$S_{{{\mathbb Q}(\sqrt{ D })}}:=S\otimes_{{\mathcal O}}{{{\mathbb Q}(\sqrt{ D })}}\ \text{and}\ T_{{{\mathbb Q}(\sqrt{ D })}}:=T\otimes_{{\mathcal O}}{{{\mathbb Q}(\sqrt{ D })}}.$$ The group $G_0$ acts trivially on $T_{{{\mathbb Q}(\sqrt{ D })}}$. Let $x\in T_{{{\mathbb Q}(\sqrt{ D })}}$ and $g\in G_0$. Then $$h(\omega,x)=h(g(\omega),g(x))=h(\omega,g(x))$$ and $x-g(x)\in{\mathbb W}^\perp\cap\Lambda_{{{\mathbb Q}(\sqrt{ D })}}=S_{{{\mathbb Q}(\sqrt{ D })}}$. Therefore the result follows by Lemma \[LemmaScapT=0\]. The quotient $G/G_0$ is a subgroup of ${\operatorname{Aut}}{\mathbb W}{\cong}{\mathbb C}^*$ and therefore cyclic. The order of this group will be denoted by $r_\omega:={\operatorname{ord}}(G/G_0)$. 
The space $T_{{{\mathbb Q}(\sqrt{ D })}}$ decomposes as a $G/G_0$-module - into a direct sum of $V_{r_\omega}$’s, i.e. $\varphi(r_\omega)$ divides $\dim T_{{{\mathbb Q}(\sqrt{ D })}}$, if $V_{r_\omega}$ is irreducible over ${{{\mathbb Q}(\sqrt{ D })}}$, - into a direct sum of $V_{r_\omega}'$’s and $V_{r_\omega}''$’s, in particular $\frac{\varphi(r_\omega)}{2}$ divides $\dim T_{{{\mathbb Q}(\sqrt{ D })}}$, if there exists a decomposition $V_{r_\omega}=V_{r_\omega}'\oplus V_{r_\omega}''$ over ${{{\mathbb Q}(\sqrt{ D })}}$. It remains to show that the only element having $1$ as an eigenvalue on $T_{\mathbb C}$ is the identity element in $G/G_0$. This suffices as $G/G_0{\cong}\mu_{r_\omega}$ and by the Chinese Remainder Theorem $({\mathbb Z}/r_\omega{\mathbb Z})^*{\cong}(({\mathbb Z}/p_1{\mathbb Z})^*)^{a_1}\times\dots\times(({\mathbb Z}/p_t{\mathbb Z})^*)^{a_t}$ for suitable $p_i$ and $a_i$. Assume that $g\in G-G_0$ with $g(x)=x$ for some $x\in T_{\mathbb C}$. Then $$h(\omega,x)=h(g(\omega),g(x))=\alpha(g)\cdot h(\omega,x).$$ As $\alpha(g)\neq 1$ we get $h(\omega,x)=0$ and therefore $x=0$. For $g\in G$ the space $T_{{{\mathbb Q}(\sqrt{ D })}}$ decomposes as a $g$-module into a direct sum of $V_r$’s resp. $V_{r}'$’s or $V_{r}''$’s of dimension $\varphi(r)$ resp. $\frac{\varphi(r)}{2}$. The proof is similar to that of Lemma \[LemmaDecompofTasGmodG0module\]. Reid-Tai criterion ------------------ Let $M={\mathbb C}^k$, let $A\in {\operatorname{GL}}(M)$ be of order $l$ and fix a primitive $l$th root of unity $\zeta$. Consider the eigenvalues $\zeta^{a_1},\dots,\zeta^{a_k}$ of $A$ on $M$, where $0\leq a_i<l$. Define the [*Reid-Tai sum*]{} of $A$ as $$\Sigma(A):=\sum_{i=1}^k \frac{a_i}{l}.$$ Let $H$ be a finite subgroup of ${\operatorname{GL}}(M)$ without quasi-reflections. Then $M/H$ has canonical singularities if and only if $$\Sigma(A)\geq1$$ for every $A\in H$, $A\not=I$. [@MR927963 (4.11)] and [@MR669424 Theorem 3.3]. 
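The criterion is mechanical to evaluate once the eigenvalue exponents of each group element are known. A minimal sketch (illustrative; the function names are not from the paper), applied to the classical cyclic quotient singularities of type $\frac{1}{3}(1,1)$ and $\frac{1}{3}(1,2)$:

```python
def reid_tai_sum(exponents, order):
    """Reid-Tai sum of a diagonalized element with eigenvalues zeta^{a_i},
    where zeta is a fixed primitive `order`-th root of unity."""
    return sum((a % order) / order for a in exponents)

def passes_reid_tai(elements):
    """Reid-Tai criterion for a finite group acting without quasi-reflections:
    `elements` lists (exponent_tuple, order) pairs, one per group element;
    the quotient is canonical iff every non-identity element has sum >= 1."""
    return all(reid_tai_sum(a, l) >= 1
               for a, l in elements
               if any(x % l for x in a))  # skip the identity
```

The generator of type $\frac{1}{3}(1,1)$ has Reid-Tai sum $\frac{2}{3}<1$, so that quotient is not canonical, whereas both non-trivial elements of type $\frac{1}{3}(1,2)$ have sum exactly $1$.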
Singularities in the interior
-----------------------------

Now we will apply the Reid-Tai criterion to $G\backslash V$. But as the techniques used do not work for general $D$ we will restrict ourselves to the case $D<D_0$, where $D_0:=-3$. For a rational number $q$ we denote the [*fractional part*]{} of $q$ by $\{q\}$. Remember that the eigenvalue of $g$ on ${\mathbb W}$ is $\alpha(g)$, where the order of $\alpha(g)$ is $r$. First we will give a bound on $r$. \[LemmagnoQRphi(r)geq10\] Suppose $g$ does not act as a quasi-reflection on $V$ and $D<0$. Then the Reid-Tai sum satisfies $\Sigma(g)\geq1$, if $\varphi(r)\geq10$. If we assume additionally $D<D_0$ then this holds even for $\varphi(r)=4$. We denote the copy of $V_r\otimes{\mathbb C}$ resp. $V_r'\otimes{\mathbb C}$ or $V_r''\otimes{\mathbb C}$ that contains $\omega$ by ${\mathbb V}_r^\omega$. Now choose a primitive $m$th root of unity $\zeta$ and let $0<k_i<r$ be the $\varphi(r)$ integers coprime to $r$. We consider the eigenvalue $\alpha(g)=\zeta^{\frac{mk_1}{r}}$ of $g$ on ${\mathbb W}$, i.e. on ${\mathbb W}^\vee$ this leads to the eigenvalue $\overline{\alpha(g)}=:\zeta^{\frac{mk_2}{r}}$. Additionally we have to consider the eigenvalues of $g$ on $V_r^\omega\cap{\mathbb C}^{n+1}/{\mathbb W}$, where the definition of $V_r^\omega$ is similar to the one of ${\mathbb V}_r^\omega$. Thus on this space we will have eigenvalues $\zeta^{\frac{mk_i}{r}}$ for some $k_i\in A-\{k_1\}$. Here $A=A_D^r$ is a subset of $\{k_1,\dots,k_{\varphi(r)}\}$ depending on the decomposition behavior of $\phi_r$ with $k_1\in A$ and $\# A=\varphi(r)$ resp. $\#A=\frac{\varphi(r)}{2}$ (cf. Proposition \[PropositionirredRepsoverqnf\]). Now consider the eigenvalues of $g$ on ${\operatorname{Hom}}({\mathbb W},{\mathbb V}_r^\omega\cap{\mathbb C}^{n+1}/{\mathbb W})$, which are $\zeta^\frac{mk_2}{r}\zeta^\frac{mk_i}{r}$ for $k_i\in A-\{k_1\}$. Thus $\Sigma(g)\geq \sum_{k_i\in A-\{k_1\}} \left\{\frac{k_2+k_i}{r}\right\}$.
First we want to show that there are only finitely many choices for $r$ that lead to a contribution less than $1$ to the Reid-Tai sum. Therefore look at the estimate $\sum_{k_i\in A-\{k_1\}} \left\{\frac{k_2+k_i}{r}\right\}\geq \sum _{j=1}^{\frac{\varphi(r)}{2}-1} \frac{j}{r}$ and study the prime decomposition $r=p_1^{a_1}\cdots p_s^{a_s}$, where we assume $p_i<p_j$ for $i<j$. It is now easy to show that this sum contributes at least $1$ to $\Sigma(g)$, unless we are in one of the following cases:

$r=2^a\cdot p^b\cdot q$

    a     b   p     q
  ------- --- --- -------
   $<$3    1   3   $<$11
    1      1   3    11
    1      1   3    13
    1      2   3    5
    1      1   5    7

$r=p^aq^b$

   $a$   $b$   $p$     $q$
  ----- ----- ----- ----------
    1     1     2    $\leq$19
    1     1     3      5,7
    2     1     2     $<$11
   2,3    2     2       3
    1     2     2       5
    3     1     2      $<$7
    4     1     2       3
    2     1     3       2
    3     1     3       2

$r=p^a$

     a        p
  --------- -------
      1      $<$11
      2        3
   $\leq5$     2

For these remaining values of $r$ we can calculate the contribution of $g$ on ${\operatorname{Hom}}({\mathbb W},{\mathbb V}_r^\omega\cap{\mathbb C}^{n+1}/{\mathbb W})$ to $\Sigma(g)$ in more detail as $$\begin{aligned} \operatorname{mc}(r)&:=&\min_{\substack{D<0\\ \text{suitable}}}\min_{k_2\in A} \sum_{\substack{k_i\in A-\{k_1\}\\ {{\left(\frac{D}{k_i}\right)}}={{\left(\frac{D}{r-k_2}\right)}}}} \left\{\dfrac{k_2+k_i}{r}\right\}.\end{aligned}$$ By ‘suitable’ we mean that we only consider number fields ${{{\mathbb Q}(\sqrt{ D })}}$ that lead to case (ii) in Proposition \[PropositionirredRepsoverqnf\]. If there is no such number field we have to omit the first ‘$\min$’ and the Kronecker symbol in the definition of $\operatorname{mc}(r)$. As there are only finitely many such number fields, computer calculation yields $\operatorname{mc}(r)\geq1$ for all $r$ with $\varphi(r)\geq 10$, and also for $\varphi(r)=4$ if we restrict to $D<D_0$. \[Remark\_mcrforothervalues\] The same calculations of $\operatorname{mc}(r)$ show that $\Sigma(g)\geq1$ for $r=9,16,18$ and no restriction on $D<0$. \[LemmagnoQRR=1,2\] Assume that $g\in G$ does not act as a quasi-reflection on $V$.
Additionally let $r=1,2$ and $D\neq -1,-2$. Then $\Sigma(g)\geq1$. As $r=1,2$ we have $\alpha(g)=\pm1$. By a statement analogous to [@MR2336040 Proposition 2.9] we get that $g$ is not of order $2$ and $g^2$ acts trivially on $T_{\mathbb C}$ but not on $S_{\mathbb C}$. Therefore let $g$ act on the subspace ${\operatorname{Hom}}({\mathbb W},{\mathbb V}_d)\subset V$ as $\pm{{\mathcal V}}_d$ with $d>2$, for a representation ${{\mathcal V}}_d$ from the decomposition of $S_{\mathbb C}$ as a $g$-module over ${{{\mathbb Q}(\sqrt{ D })}}$. This contributes at least $$\begin{aligned} \min_{\substack{D<0\\\text{suitable}}}\min_{r\in\{1,2\}}\min_{\alpha=\pm1}\sum_{\substack{(k_i,d)=1\\ {{\left(\frac{D}{k_i}\right)}}=\alpha}} \left\{\dfrac{1}{r}+\dfrac{k_i}{d}\right\}\geq \sum_{j=1}^{\frac{\varphi(d)}{2}} \dfrac{j}{d}\label{ProofinequalityHom(WW,VVd)for_r=1,2}\end{aligned}$$ to $\Sigma(g)$. As in the proof of Lemma \[LemmagnoQRphi(r)geq10\] we have to modify this expression if ${{\mathcal V}}_d=V_d$, and with analogous arguments we can reduce this to a question about a finite number of $d$’s. Computer calculation for these $d$ shows that they contribute at least $1$ unless $d=8$. But ${{\mathcal V}}_8=V_8$ for $D\neq-1,-2$ and therefore we can choose complex conjugate eigenvalues. Now we can state a general result if $g$ is not a quasi-reflection. \[TheoremforgnoQRandngeq11\] Suppose $g\in G$ does not act as a quasi-reflection on $V$. Then $\Sigma(g)\geq1$, if $D<D_0$ and $n\geq11$. Let $m$ be the order of $g$ and $\zeta$ be a primitive $m$th root of unity. On the space ${\operatorname{Hom}}({\mathbb W},{\mathbb V}_d)\subset V$ the element $g$ has eigenvalues $\zeta^{\frac{mc}{r}}\zeta^{\frac{mk_i}{d}}$ for fixed $0<c<r$ with $(c,r)=1$, and $k_i\in A$, where $A=A_D^d$ is defined as in the proof of Lemma \[LemmagnoQRphi(r)geq10\], so $\# A=\dim_{\mathbb C}{\mathbb V}_d$.
Therefore the contribution of $g$ on this subspace is given by $$\sum_{k_i\in A}\left\{\frac{c}{r}+\frac{k_i}{d}\right\}.$$ This is greater than or equal to $\sum_{j=1}^{\frac{\varphi(d)}{2}}\frac{j}{d}$ for $d\not\in\varphi{^{-1}}(\{2,4,6,8\})$, and this contributes less than $1$ if $$\begin{aligned} d&=&1,2,\dots,10,12,14,15,16,18,20,22,24,26,28,30,36,\notag\\&&40,42,48,54,60,66,84,90\label{listofdofroughestimation}\end{aligned}$$ (as in the proof of Lemma \[LemmagnoQRR=1,2\]). We can calculate the contribution for each $d$ and $r$, but to simplify calculations define $$\begin{aligned} c_{\operatorname{min}}(d)&:=&\min_{0\leq a<d}\sum_{\substack{0<b<d\\(b,d)=1}}\left\{\dfrac{b+a}{d}\right\},\ \text{resp.}\\ c_{\operatorname{min}}^{{\operatorname{red}}}(d)&:=&\min_{\substack{D<0\\ \text{suitable}}}\min_{\alpha=\pm1}\min_{0\leq a<d}\sum_{\substack{0<b<d\\(b,d)=1\\ {{\left(\frac{D}{b}\right)}}=\alpha}}\left\{\dfrac{b+a}{d}\right\}.\end{aligned}$$ If there exists at least one imaginary quadratic number field for which the cyclotomic polynomial $\phi_d$ is reducible we have to calculate $c_{\operatorname{min}}^{{\operatorname{red}}}(d)$ for those $D$. If there exists no such $D$ we will use $c_{\operatorname{min}}(d)$. Both expressions only depend on $d$ and are lower bounds for the contribution to $\Sigma(g)$ as shown in [@MR2336040 Proof of Theorem 2.10]. By computer calculation all $d$ except $d=1,2,3,4,6,7,8,12,14,15,20,24,30$ contribute at least $1$ to the Reid-Tai sum.
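The unreduced bound $c_{\operatorname{min}}(d)$ is a finite minimisation and easy to reproduce. The following Python sketch (ours, not the author's code) implements it and recovers that only $d$ with $\varphi(d)\leq2$ give an unreduced value below $1$; the reduced variant $c_{\operatorname{min}}^{{\operatorname{red}}}(d)$ would additionally restrict $b$ by the Kronecker symbol, which we omit here.

```python
from fractions import Fraction
from math import gcd

def c_min(d):
    """min over 0 <= a < d of the sum of {(b + a)/d} over all b coprime to d."""
    return min(
        sum(Fraction((b + a) % d, d) for b in range(1, d) if gcd(b, d) == 1)
        for a in range(d)
    )

# Only finitely many d can contribute less than 1 via this unreduced bound:
candidates = [d for d in range(2, 31) if c_min(d) < 1]
```

Running this gives `candidates == [2, 3, 4, 6]`, consistent with the fact that the smaller reduced values (e.g. for $d=8,12$) only arise once the Kronecker-symbol restriction is imposed.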
For these $d$ we get $$\begin{aligned} & c_{\min}^{{\operatorname{red}}}(30)=11/15,\ c_{\min}^{{\operatorname{red}}}(24)=5/6,\ c_{\min}^{{\operatorname{red}}}(20)=4/5,\ c_{\min}^{{\operatorname{red}}}(15)=11/15,&\\ &c_{\min}^{{\operatorname{red}}}(14)=4/7,\ c_{\min}^{{\operatorname{red}}}(12)=1/3,\ c_{\min}^{{\operatorname{red}}}(8)=1/4,&\\ & c_{\min}^{{\operatorname{red}}}(7)=4/7,\ c_{\min}^{{\operatorname{red}}}(6)=0,\ c_{\min}^{{\operatorname{red}}}(4)=0,\ c_{\min}^{{\operatorname{red}}}(3)=0.&\end{aligned}$$ As we know, $T_{\mathbb C}$ decomposes into a direct sum of ${\mathbb V}_r$’s, while we can assume that the space $S_{\mathbb C}$ decomposes into a direct sum of ${\mathbb V}_d$’s where $d\in\{1,2,3,4,6,7,8,12,14,15,20,24,30\}$. In the following we only consider $D<D_0$, and a count of dimensions leads to the equation $$\begin{aligned} \dim {\mathbb V}_r\cdot \lambda +\nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6+\frac{6}{2}\nu_7+\frac{4}{2}\nu_8&&\notag\\ +\frac{4}{2}\nu_{12}+\frac{6}{2}\nu_{14}+\frac{8}{2}\nu_{15}+\frac{8}{2}\nu_{20}+\frac{8}{2}\nu_{24}+\frac{8}{2}\nu_{30}&=&n+1,\label{Equationdimensioncountrepresentations}\end{aligned}$$ where $\lambda$ denotes the multiplicity of ${\mathbb V}_r$ in $T_{\mathbb C}$ and $\nu_d$ denotes the multiplicity of ${\mathbb V}_d$ in $S_{\mathbb C}$. Note that we can assume ${{\mathcal V}}_d=V_d'$ or $V_d''$ for $d\in\{7,8,12,14,15,20,24,30\}$. If not, it would contribute at least $1$ to $\Sigma(g)$, as shown in [@MR2336040 Theorem 2.10]. We denote by $\nu_r$ the multiplicity of ${\mathbb V}_r$ in $\Lambda_{\mathbb C}/{\mathbb V}_r^\omega$ as a $g$-module, where ${\mathbb V}_r^\omega$ as before denotes the copy that contains $\omega$.
Now we can calculate the (minimal) contribution of ${\operatorname{Hom}}({\mathbb W},{\mathbb V}_d)$ to $\Sigma(g)$ as $$\begin{aligned} \sum_{(a,d)=1}\left\{\dfrac{a}{d}+\dfrac{k_1}{r}\right\}\ \text{resp.}\ \min_{ D<D_0}\min_{\alpha=\pm1}\sum_{\substack{(a,d)=1\\ {{\left(\frac{D}{a}\right)}}=\alpha}}\left\{\dfrac{a}{d}+\dfrac{k_1}{r}\right\}.\end{aligned}$$ According to Lemma \[LemmagnoQRphi(r)geq10\] and Remark \[Remark\_mcrforothervalues\] we have to investigate the cases $r\in\{3,4,6\}=\varphi{^{-1}}(2)$, $r\in\{7,14\}\subset\varphi{^{-1}}(6)$ and $r\in\{15,20,24,30\}\subset\varphi{^{-1}}(8)$. Now we have to study different cases:

- Let $\varphi(r)=2$. The contributions of the ${\mathbb V}_d$ with $\varphi(d)\geq4$ are greater than or equal to $1$ and (\[Equationdimensioncountrepresentations\]) becomes $$\begin{aligned} \nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6=n+1-2=n-1.\notag\end{aligned}$$ For the $6$ possible cases of the choice of $(r,k_1)$, namely $r\in\{3,4,6\}$ and $k_1\in\{1,r-1\}$, the other contributions are at least

  $d$   contribution
  ----- --------------
  1     1/6
  2     1/6
  3     1/3
  4     1/2
  6     1/3

  In all cases we see $\Sigma(g)\geq1$ if $n-1\geq6$.

- Let $r=7,14$. We can assume $D=-7$, since if this is not the case explicit calculations show that ${\mathbb V}_r^\omega$ will contribute at least $1$ to $\Sigma(g)$. Equation (\[Equationdimensioncountrepresentations\]) becomes $$\begin{aligned} \nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6+3\nu_7+3\nu_{14}=n-2\notag\end{aligned}$$ and the contributions are

  $d$   contribution
  ----- --------------
  1     1/14
  2     1/14
  3     3/7
  4     4/7
  6     3/7
  7     4/7
  14    4/7

  and $4/7$ from ${\mathbb V}_r^\omega$. So we may assume that $\nu_3=\nu_4=\nu_6=\nu_7=\nu_{14}=0$, because otherwise the contribution will be $\geq1$. So $\Sigma(g)\geq1$, if $\nu_1+\nu_2\geq6$ resp. $n\geq8$.

- Let $r=15,20,24,30$. Analogously to the last case we can assume that $D=-5,-6,-15$.

- Let $D=-5$.
Hence we get the equation $$\begin{aligned} \nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6+4\nu_{20}=n-3.\notag\end{aligned}$$ The contributions are

  $d$   contribution
  ----- --------------
  1     1/30
  2     1/30
  3     5/12
  4     8/15
  6     5/12
  20    4/5

and $4/5$ from ${\mathbb V}_r^\omega$. So $\Sigma(g)\geq1$ unless $\nu_1+\nu_2\leq5$ resp. $n\leq8$.

- Let $D=-6$, giving the equation $$\begin{aligned} \nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6+4\nu_{24}=n-3.\notag\end{aligned}$$ The contributions of ${\mathbb V}_{24}$ and ${\mathbb V}_r^\omega$ are $5/6$. So $\Sigma(g)\geq1$ unless $\nu_1+\nu_2\leq4$ resp. $n\leq7$.

- The last case is $D=-15$. So we get the equation $$\begin{aligned} \nu_1+\nu_2+2\nu_3+2\nu_4+2\nu_6+4\nu_{15}+4\nu_{30}=n-3.\notag\end{aligned}$$ The contributions of ${\mathbb V}_{15}$, ${\mathbb V}_{30}$ and ${\mathbb V}_r^\omega$ are $11/15$. So $\Sigma(g)\geq1$, if $\nu_1+\nu_2\geq8$ resp. $n\geq11$.

Now we can state a first result about the singularities of ball quotients. Let $D<D_0$ and $n\geq11$. Then $\Gamma\backslash{{\mathbb C}H^n}$ has canonical singularities away from the branch divisors. This directly follows from Theorem \[TheoremforgnoQRandngeq11\], the Reid-Tai criterion and the discussion for the map (\[mapf\_Gamma\]). For the rest of this section we will now study elements $h=g^k$ that act as quasi-reflections on the tangent space $V$. We will start by describing how $\Lambda_{{{\mathbb Q}(\sqrt{ D })}}$ decomposes as a $g$-module. Let $h=g^k$ be a quasi-reflection on $V$ for $g\in G$ and $n\geq2$. As a $g$-module we have a decomposition of the form $$\Lambda_{{{\mathbb Q}(\sqrt{ D })}}{\cong}{{\mathcal V}}_{m_0}\oplus\bigoplus_j{{\mathcal V}}_{m_j}$$ for some $m_i\in{\mathbb N}$. Then

1. $(m_0,k)=m_0$ and $2(m_j,k)=m_j$, or $2(m_0,k)=m_0$ and $(m_j,k)=m_j$ for $j\geq1$ in the cases $D<D_0$ and $D=-2$,

2. $(m_0,k)=m_0$ and $l(m_j,k)=m_j$, or $l(m_0,k)=m_0$ and $(m_j,k)=m_j$, $l\in\{2,4\}$, for $j\geq1$ in the case $D=-1$,

3.
$(m_0,k)=m_0$ and $l(m_j,k)=m_j$, or $l(m_0,k)=m_0$ and $(m_j,k)=m_j$, $l\in\{2,3,6\}$, for $j\geq1$ in the case $D=-3$.

As a $g$-module $\Lambda_{{{\mathbb Q}(\sqrt{ D })}}$ decomposes into ${{\mathcal V}}_r^\omega\oplus\bigoplus_{i}{{\mathcal V}}_{d_i}$ for some $d_i\in{\mathbb N}$. As $h$ is a quasi-reflection on $V$, all but one of the eigenvalues on $V$ must be $1$. First fix an $i$. Now define ${{\mathcal V}}_d:={{\mathcal V}}_{d_i}$ and $d':=\frac{d}{(k,d)}$; then the eigenvalues of $h$ on ${{\mathcal V}}_d$ are primitive $d'$th roots of unity of multiplicity $\frac{\dim {{\mathcal V}}_d}{\dim {{\mathcal V}}_{d'}}$. We want to give restrictions on the $d_i$:

1. $\dim {{\mathcal V}}_{d'}\leq2$: Assume that the dimension is at least $3$. One can choose three distinct eigenvalues $\zeta, \zeta', \zeta''$ on ${{\mathcal V}}_{d'}$, such that $h$ has eigenvalues $\alpha(h){^{-1}}\zeta, \alpha(h){^{-1}}\zeta'$ and $\alpha(h){^{-1}}\zeta''$ on $V$, and at most one of these eigenvalues can be $1$, which contradicts that $h$ is a quasi-reflection.

2. $\frac{\dim{{\mathcal V}}_d}{\dim {{\mathcal V}}_{d'}}=2{\Rightarrow}\dim{{\mathcal V}}_{d'}=1$: Assume $\dim {{\mathcal V}}_{d'}\geq2$ under the given condition. Denote two of the $\dim{{\mathcal V}}_{d'}$ eigenvalues of multiplicity $2$ of $h$ on ${{\mathcal V}}_d$ by $\zeta,\zeta'$. So one would have the eigenvalues $\alpha(h){^{-1}}\zeta$ and $\alpha(h){^{-1}}\zeta'$ of multiplicity $2$ on $V$, again a contradiction.

3. $\dim{{\mathcal V}}_d\geq2,\dim{{\mathcal V}}_{d'}=1{\Rightarrow}\text{ the eigenvalue of } h \text{ on } {{\mathcal V}}_d \text{ is } \alpha(h)$: If $\zeta$ is the eigenvalue of $h$ on ${{\mathcal V}}_d$ with $\zeta\neq\alpha(h)$, then $\alpha(h)^{-1}\zeta\neq1$ would be an eigenvalue on $V$ of multiplicity $\dim{{\mathcal V}}_d\geq2$.

4. $\dim{{\mathcal V}}_{d'}=2{\Rightarrow}\dim{{\mathcal V}}_d=2$: Let $\dim{{\mathcal V}}_d>2$. There are two eigenvalues $\zeta\neq\zeta'$ of $h$ on ${{\mathcal V}}_d$ of multiplicity greater than or equal to $2$.
Hence we have on $V$ the eigenvalues $\alpha(h){^{-1}}\zeta$ and $\alpha(h){^{-1}}\zeta'$ of the same multiplicity.

5. The case $\dim{{\mathcal V}}_{d'}=\dim{{\mathcal V}}_d=2$ cannot occur: Let $\dim{{\mathcal V}}_{d'}=\dim{{\mathcal V}}_d=2$ with eigenvalues $\zeta,\zeta'$ of $h$ on ${{\mathcal V}}_d$. Without loss of generality we can assume that $\zeta=\alpha(h)$. If not we would have eigenvalues $\alpha(h){^{-1}}\zeta\neq1$ and $\alpha(h){^{-1}}\zeta'\neq1$ on $V$. There can be no other summand ${{\mathcal V}}_{d_1}$ in the decomposition of $\Lambda_{{{\mathbb Q}(\sqrt{ D })}}$, as this summand would give an eigenvalue $\neq1$ (the dimension of ${{\mathcal V}}_{d'}$ would have to be $1$, but as $\zeta=\alpha(h)$ and $\zeta$ is a primitive $d'$th root of unity, this cannot happen). There are two eigenvalues of $h$ on ${\mathbb V}_r^\omega$ (because of $\dim{{\mathcal V}}_{d'}=2$) which we will call $\alpha(h)$ and $\zeta''$ with multiplicity $\frac{\dim{{\mathcal V}}_r}{2}$ (the denominator is $\dim{{\mathcal V}}_{d'}$). Therefore the multiplicities of the eigenvalues have to be $1$, because $\alpha(h){^{-1}}\zeta''\neq1$ is an eigenvalue on $V$. But then we will have two eigenvalues $\neq1$ on $V$ (namely $\alpha(h){^{-1}}\zeta'$ and $\alpha(h){^{-1}}\zeta''$). Hence $\dim{{\mathcal V}}_{d'}=1$ follows.

Now we want to study ${{\mathcal V}}_r$. Let $r':=\frac{r}{(k,r)}$. We claim that $\dim{{\mathcal V}}_{r'}=1$.

1. $\dim{{\mathcal V}}_{r'}\leq2$: Assume that $\dim{{\mathcal V}}_{r'}>2$, i.e. $h$ has on ${{\mathcal V}}_r^\omega$ at least three distinct eigenvalues $\alpha(h),\zeta,\zeta'$, which will give rise to eigenvalues $\alpha(h){^{-1}}\zeta\neq1$ and $\alpha(h){^{-1}}\zeta'\neq1$ on $V$.

2. $\dim{{\mathcal V}}_{r'}=2{\Rightarrow}n=1$: We know $\dim{{\mathcal V}}_{d'}=1$ from above. Let $\zeta$ be the eigenvalue of $h$ on ${{\mathcal V}}_d$ of multiplicity $\dim{{\mathcal V}}_d$.
Clearly $\zeta\neq\alpha(h)$, because of dimension reasons. So we get the eigenvalue $\alpha(h){^{-1}}\zeta$ on $V$, and hence $\Lambda_{{{\mathbb Q}(\sqrt{ D })}}={{\mathcal V}}_r^\omega$ and ${\operatorname{rk}}\Lambda=2$. By the assumption $n\geq2$ we get $\dim{{\mathcal V}}_{r'}=1$. Putting this all together we get as an $h$-module $$\begin{aligned} \Lambda_{{{\mathbb Q}(\sqrt{ D })}}&{\cong}&{{\mathcal V}}_r^\omega\oplus\bigoplus_i{{\mathcal V}}_{d_i},\end{aligned}$$ where the eigenvalues of $h=g^k$ on

1. ${{\mathcal V}}_r^\omega$ are primitive $r'$th roots of unity ($\dim{{\mathcal V}}_{r'}=1$) of multiplicity $\dim{{\mathcal V}}_r$,

2. ${{\mathcal V}}_{d_i}$ are primitive $d_i'$th roots of unity ($\dim{{\mathcal V}}_{d_i'}=1$) of multiplicity $\dim{{\mathcal V}}_{d_i}$.

\[CorollaryQRareinducedbyhwhichlookas\] The quasi-reflections on $V$ are induced by elements $h\in U(\Lambda)$, such that

1. $\pm h$ acts as a reflection on $\Lambda_{\mathbb C}$, if $D<D_0$ or $D=-2$,

2. $h^4\sim I$, if $D=-1$,

3. $h^6\sim I$, if $D=-3$.

One has to check all possibilities for $\alpha(h)$. It is enough to investigate quotients $V/\left< g\right>$ since $V/G$ has canonical singularities if $V/\left< g\right>$ has canonical singularities for all $g\in G$. This was shown in [@MR2336040 Proof of Lemma 2.14]. Assume for the quasi-reflection $h=g^k$ that $k>1$ is minimal with this property. Then the quotient $V':=V/\left< h \right>$ is smooth. Let $h$ be of order $l$, so $g$ has order $lk$. Now consider the eigenvalues $\zeta^{a_1},\dots,\zeta^{a_n}$ of $g$ on $V$, where $\zeta$ denotes a primitive $lk$th root of unity. Now we consider the action of the group ${{\langle{g}\rangle}}/{{\langle{h}\rangle}}$ on $V'$. Using analogous arguments as before we want to describe the action of $g^f{{\langle{h}\rangle}}\in{{\langle{g}\rangle}}/{{\langle{h}\rangle}}$ on $V'$. The differential of $g^f{{\langle{h}\rangle}}$ on $V'$ has eigenvalues $\zeta^{fa_1},\dots,\zeta^{fa_{n-1}},\zeta^{lfa_n}$.
Now we have to modify the Reid-Tai sum as $$\begin{aligned} \Sigma'(g^f):=\left\{\dfrac{fa_n}{k}\right\}+\sum_{i=1}^{n-1}\left\{\dfrac{fa_i}{lk}\right\}.\label{DefinitionSigmaprime}\end{aligned}$$ \[LemmacasesinteriorSigmaandSigmaprimegeq1\] The variety $\Gamma\backslash{{\mathbb C}H^n}$ has canonical singularities, if

- $\Sigma(g)\geq1$ for all $g\in\Gamma$ no power of which is a quasi-reflection, and

- $\Sigma'(g^f)\geq1$ for $1\leq f<k$, where $h=g^k$ is a quasi-reflection.

[@MR2336040 Lemma 2.14]. \[PropositionSigmaprimegeq1interior\] Let $h=g^k$ be as above, $D<D_0$ and $n\geq12$. Then $ \Sigma'(g^f)\geq1$ for $1\leq f<k$. We know from the former results that all eigenvalues on ${{\mathcal V}}_r^\omega$ are $\alpha(h)$, where $$\begin{aligned} \alpha(h)&=& \begin{cases} \pm1,& D<D_0\ \text{or}\ D=-2,\\ \pm1, \zeta_4,&D=-1,\\ \pm1,\zeta_3,\zeta_6,&D=-3. \end{cases}\end{aligned}$$ By a detailed analysis of the decomposition of $\Lambda_{\mathbb C}$ into ${{{\mathbb Q}(\sqrt{ D })}}$-irreducible pieces there is exactly one eigenvalue $\lambda\not=\alpha(h)$ on $\Lambda_{\mathbb C}$, since only one eigenvalue on $V$ is not $1$. This eigenvalue $\lambda$ will appear on one ${{\mathcal V}}_d$. As all eigenvalues of $g$ on ${{\mathcal V}}_d$ are primitive $d$th roots of unity they all have the same order. We know that $\lambda$ must have multiplicity $1$ on $\Lambda_{\mathbb C}$, so $\dim{{\mathcal V}}_d=1$. This implies $$\begin{aligned} d=\begin{cases} 1,2,\\1,2,4,\\1,2,3,6. \end{cases}\end{aligned}$$ Denote by $v$ the eigenvector of $g$ corresponding to the eigenvalue $\zeta^{a_n}$. Then $v$ clearly comes from ${{\mathcal V}}_d$ and therefore $\left< v\right>={\operatorname{Hom}}({\mathbb W},{\mathbb V}_d)$.
If $\delta$ is the primitive generator of ${{\mathcal V}}_d\cap\Lambda$ then $h(\delta,\delta)>0$, since ${{\mathcal V}}_d\subset W_{{{\mathbb Q}(\sqrt{ D })}}^\perp$, where $W_{{{\mathbb Q}(\sqrt{ D })}}\otimes_{{{\mathbb Q}(\sqrt{ D })}}{\mathbb C}{\cong}{\mathbb W}$ and $W_{{{\mathbb Q}(\sqrt{ D })}}$ is a ${{{\mathbb Q}(\sqrt{ D })}}$-vector space. The form $h(\cdot,\cdot)$ is negative definite on ${\mathbb W}$ as shown in the proof of Lemma \[LemmaScapT=0\]. If we define the sublattice $\Lambda'\subset\Lambda$ as $\Lambda':=\delta^\perp$, this lattice has signature $(n-1,1)$. Now $\left<g\right>/\left<h\right>$ acts on $\Lambda'$ as a subgroup of $U(\Lambda')$. Therefore $$\Sigma'(g^f)=\left\{\dfrac{f{a_n}}{k}\right\}+\Sigma(g^f{{\langle{h}\rangle}})$$ and $g^f{{\langle{h}\rangle}}\in U(\Lambda')$. Analogously to the proof of [@MR2336040 Proposition 2.15] we can give the following argument: we claim that $g^f{{\langle{h}\rangle}}$ is not a quasi-reflection on $\Lambda'$. If it were, the eigenvalues of $g^f$ on $\Lambda'$ would be as in Corollary \[CorollaryQRareinducedbyhwhichlookas\]. Thus the order of the eigenvalue on ${{\mathcal V}}_d$ would be $$\begin{aligned} d=\begin{cases} 1,2,\\1,2,4,\\1,2,3,6. \end{cases}\end{aligned}$$ So ${\operatorname{ord}}g^f$ divides $l$, and therefore $g^f\in{{\langle{h}\rangle}}$. Hence the group ${{\langle{g}\rangle}}/{{\langle{h}\rangle}}$ has no quasi-reflections and we can apply Theorem \[TheoremforgnoQRandngeq11\] for $n-1\geq11$. \[Theoremngeq12interiorcansings\] Let $n\geq 12$ and $D<D_0$. Then $\Gamma\backslash{{\mathbb C}H^n}$ has canonical singularities. This follows directly from Lemma \[LemmacasesinteriorSigmaandSigmaprimegeq1\], Theorem \[TheoremforgnoQRandngeq11\] and Proposition \[PropositionSigmaprimegeq1interior\].
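The bookkeeping behind the modified sum (\[DefinitionSigmaprime\]) is mechanical once the eigenvalue exponents of $g$ are fixed. The Python sketch below (ours, with sample numbers of our own) evaluates $\Sigma'(g^f)$ for a diagonal $g$ of order $lk$ whose $k$th power is a quasi-reflection acting through the last coordinate:

```python
from fractions import Fraction

def sigma_prime(exponents, f, k, l):
    """Sigma'(g^f) = {f*a_n/k} + sum_{i<n} {f*a_i/(l*k)} for a diagonal g
    of order l*k with eigenvalue exponents a_1,...,a_n (last one is the
    coordinate on which h = g^k acts nontrivially)."""
    *head, a_n = exponents
    return (Fraction((f * a_n) % k, k)
            + sum(Fraction((f * a) % (l * k), l * k) for a in head))

# Sample (ours): l = 3, k = 4, g = diag(zeta_12^3, zeta_12).
# Then h = g^4 = diag(1, zeta_3) is a quasi-reflection, and
# Sigma'(g) = {1/4} + {3/12} = 1/2 < 1, so such a g would violate
# the bound required by the proposition.
```

The exponent on the distinguished coordinate is reduced modulo $k$ rather than $lk$, reflecting that the quotient coordinate is $x_n^l$.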
The boundary {#SectionBoundary}
============

Now we want to state a result on the singularities of the toroidal compactification $(\Gamma\backslash{{\mathbb C}H^n})^*$ of the quasi-projective variety $\Gamma\backslash{{\mathbb C}H^n}$. For a complex matrix $A$ we write ${^{H}\!{A}}$ instead of $^T{\overline{A}}$. We consider isotropic subspaces $E_{{{\mathbb Q}(\sqrt{ D })}}$ with respect to the form $h(\cdot,\cdot)$. As the form is of signature $(n,1)$ they are $1$-dimensional. To each isotropic subspace there corresponds a $0$-dimensional boundary component or cusp $F$. First we choose a basis such that $h(\cdot,\cdot)$ is given by the matrix $Q'$, i.e. it can be written in the form $$h(x,y)={^{H}\!{y}}Q'x.$$ The next goal is to find a basis such that the form behaves well in later calculations. There exists a basis $b_1,\dots,b_{n+1}$ of $\Lambda_{{{\mathbb Q}(\sqrt{ D })}}$, such that

1. $b_1$ is a basis of $E_{{{\mathbb Q}(\sqrt{ D })}}$ and $b_1,\dots,b_n$ is a basis of $E_{{{\mathbb Q}(\sqrt{ D })}}^\perp$,

2. the hermitian form is written with respect to this basis as $$\begin{aligned} Q:=(h(b_i,b_j))_{1\leq i,j\leq n+1}=\left( \begin{array}{c|c|c} 0 &0&a\\ \hline 0&B&0\\ \hline {\overline{a}}& 0 & 0 \end{array} \right),\end{aligned}$$ where $a\in{{{\mathbb Q}(\sqrt{ D })}}$ and $B={^{H}\!{B}}$.

First we can show that $Q'$ has to be of the form $ Q'=\left( \begin{array}{c|c|c} 0 &0&a\\ \hline 0&B&c\\ \hline {\overline{a}}& {^{H}\!{c}} & d \end{array} \right)$. The upper zeroes in the matrix $Q'$ and $Q'={^{H}\!{Q'}}$ directly follow from the fact that $h(E_{{{\mathbb Q}(\sqrt{ D })}},e)=0$ for all $e\in E_{{{\mathbb Q}(\sqrt{ D })}}^\perp$ and $h(x,y)=\overline{h(y,x)}$. The rest is similar to [@MR2336040 Proof of Lemma 2.24]. The matrix $B$ represents the hermitian form $h$ on $E_{{{\mathbb Q}(\sqrt{ D })}}^\perp/E_{{{\mathbb Q}(\sqrt{ D })}}$ and is therefore invertible.
Thus one can define $$\begin{aligned} N:=\left( \begin{array}{c|c|c} 1 &0&r'\\ \hline 0&I_{n-1}&r\\ \hline 0& 0 & 1 \end{array} \right),\end{aligned}$$ where $r:=-B{^{-1}}c\in{{{\mathbb Q}(\sqrt{ D })}}^{n-1}$. Choose $r'$ such that it satisfies the equation $$\begin{aligned} d-{^{H}\!{c}}B{^{-1}}c+\overline{r'}a+\overline{a}r'=0.\end{aligned}$$ This is possible as the first two summands are real by definition and the other two are the complex conjugate of each other and therefore their sum is real. Now $$\begin{aligned} {^{H}\!{N}}Q'N&=&\left( \begin{array}{c|c|c} 0 &0&a\\ \hline 0&B&Br+c\\ \hline \overline{a}& {^{H}\!{r}}B+{^{H}\!{c}} & \delta \end{array} \right),\end{aligned}$$ with $\delta:=\overline{a}r'+({^{H}\!{r}}B+{^{H}\!{c}})r+\overline{r'}a+{^{H}\!{r}}c+d$. But $$Br+c=B(-B{^{-1}}c)+c=0.$$ Because of the definition of $r$ and $r'$ we achieve $$\begin{aligned} \delta&=&\overline{a}r'+{^{H}\!{(-B{^{-1}}c)}}B(-B{^{-1}}c)+{^{H}\!{c}}(-B{^{-1}}c)+\overline{r'}a+{^{H}\!{(-B{^{-1}}c)}}c+d\notag\\ &=&\underbrace{\overline{a}r'+\overline{r'}a-{^{H}\!{c}}{^{H}\!{(B{^{-1}})}}c+d}_{=0} +\underbrace{{^{H}\!{c}}{^{H}\!{(B{^{-1}})}}B B{^{-1}}c-{^{H}\!{c}}B{^{-1}}c}_{=0}\notag\\ &=&0.\notag\end{aligned}$$ Note that ${^{H}\!{(B{^{-1}})}}=B{^{-1}}$. Altogether this gives the result. To construct the compactification we follow [@MR0457437]. Therefore we first calculate the stabiliser subgroup. \[LemmacalculationN(F)\] Let $N(F)\subset\Gamma_{\mathbb R}$ be the stabiliser subgroup corresponding to the cusp $F$. Then $$\begin{aligned} N(F)&=&\left\{g=\left( \begin{array}{c|c|c} u&v&w\\ \hline 0&X&y\\ \hline 0&0 & z \end{array} \right);\begin{array}{c} z\overline{u}=1,\ {^{H}\!{X}}BX=B,\\ {^{H}\!{X}}By+{^{H}\!{v}}az=0,\\ {^{H}\!{y}}By+\overline{z}\overline{a}w+za\overline{w}=0\end{array}\right\}.\end{aligned}$$ This follows directly when we study the $g\in\Gamma_{\mathbb R}$ that map $b_1$ to a multiple of itself, and drop all $g$ that do not respect the form defined by $Q$.
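The normalisation ${^{H}\!{N}}Q'N$ above can be checked numerically. The sketch below (ours) takes $n=2$, so that $B$ is a $1\times1$ block, picks arbitrary sample values for $a$, $B$, $c$, $d$, and verifies that the result has the block shape of $Q$; the explicit formula for $r'$ is one convenient solution of the defining real equation.

```python
# Sample data (ours): d real, B a positive real 1x1 block, a and c complex.
a, B, c, d = 2 + 1j, 3.0, 1 - 2j, 5.0

r = -c / B                              # r := -B^{-1} c
rho = d - (c.conjugate() * c / B).real  # d - c^H B^{-1} c, a real number
rp = -rho * a / (2 * abs(a) ** 2)       # solves conj(r')a + conj(a)r' = -rho

N = [[1, 0, rp], [0, 1, r], [0, 0, 1]]
Qp = [[0, 0, a], [0, B, c], [a.conjugate(), c.conjugate(), d]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Hermitian transpose of N, then Q = N^H Q' N.
NH = [[complex(N[j][i]).conjugate() for j in range(3)] for i in range(3)]
Q = mat_mul(mat_mul(NH, Qp), N)
# Q now has (numerically) the block form [[0,0,a],[0,B,0],[conj(a),0,0]].
```

Both off-diagonal blocks and the lower-right entry vanish to machine precision, matching the computation of $Br+c$ and $\delta$ above.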
\[LemmaW(F)\] The unipotent radical is $$\begin{aligned} W(F)&=&\left\{g=\left( \begin{array}{c|c|c} 1&v&w\\ \hline 0&I_{n-1}&y\\ \hline 0&0 & 1 \end{array} \right);\begin{array}{c} By+{^{H}\!{v}}a=0,\\ {^{H}\!{y}}By+\overline{a}w+a\overline{w}=0\end{array}\right\}.\end{aligned}$$ The group $W(F)$ is by definition the subgroup of $N(F)$ consisting of all unipotent elements. Therefore an element $g\in W(F)$ has to be of the form $$g=\left( \begin{array}{c|c|c} 1&v&w\\ \hline 0&X&y\\ \hline 0&0 & 1 \end{array} \right),$$ where $X=I_{n-1}+T$ with $T$ strictly upper triangular. So it remains to show that $T=0$. As $B$ is definite and $X$ is unipotent, the statement follows by induction on $n$. \[LemmaU(F)\] The centre of $W(F)$ is then given by the group $$\begin{aligned} U(F)&=&\left\{g=\left( \begin{array}{c|c|c} 1&0&iax\\ \hline 0&I_{n-1}&0\\ \hline 0&0 & 1 \end{array} \right);\ x\in {\mathbb R}\right\}.\end{aligned}$$ The first condition of $W(F)$ gives $ v={^{H}\!{\left(-\frac{1}{a}By\right)}}. $ Now we will use that $U(F)$ is the centre of $W(F)$, i.e. $${\operatorname{centre}}(W(F))=\left\{g\in W(F);\ gg'=g'g\ \text{for all}\ g'\in W(F)\right\}.$$ These products for $g, g'\in W(F)$ lead to $$\begin{array}{crcl} &vy'&=&v'y\\ {\Longrightarrow}&{^{H}\!{\left(-\frac{1}{a}By\right)y'}}&=&{^{H}\!{\left(-\frac{1}{a}By'\right)y}}\\ {\Longleftrightarrow}&{^{H}\!{y}}By'-{^{H}\!{\left({^{H}\!{y}}B y'\right)}}&=&0. \end{array}$$ Clearly the last equivalence implies that ${^{H}\!{y}}By'\in{\mathbb R}$ for every $y'$. The matrix $B$ has full rank as it is invertible and thus $B\cdot{\mathbb C}^{n-1}={\mathbb C}^{n-1}$. Therefore set $z':=By'\in{\mathbb C}^{n-1}$.
Now we rephrase the property from above as $$\begin{aligned} {^{H}\!{y}}z'\ \text{is real for all}\ z'\in{\mathbb C}^{n-1}.\label{conditionyzprimeisreal}\end{aligned}$$ As this is true for all vectors we can choose $z'$ to be $$z'={^{T}\!{(0,\dots,0,1,0,\dots,0)}},$$ where the only coordinate not equal to $0$ is the $j$th. For this choice in (\[conditionyzprimeisreal\]) only the $j$th coordinate of $y$ remains and therefore $\overline{y}_j\in{\mathbb R}$. Now let $$z'={^{T}\!{(0,\dots,0,\sqrt{D},0,\dots,0)}}.$$ Then (\[conditionyzprimeisreal\]) becomes $\overline{y}_j\cdot\sqrt{D}\in{\mathbb R}$, and as $D<0$ this means $y_j\in i{\mathbb R}$. Hence $$y_j\in {\mathbb R}\cap i{\mathbb R}=\{0\}.$$ As $j$ was chosen arbitrarily, this holds for every entry, i.e. $y=0$ and therefore also $v=0$. So we have to study the remaining condition $\overline{a}w+\overline{w}a=0$. We want to describe $w$ more specifically, i.e. in terms of $a$. For this we write $w=c+id$ and $a=e+if$. So we get $$\begin{aligned} \overline{a}w+\overline{w}a=2(ec+df)=0.\end{aligned}$$ Assuming $e\neq0$ this implies $c=-d\frac{f}{e}$ and for this reason $w=-d\frac{f}{e}+id$, $d\in{\mathbb R}$. Therefore $w\in{\mathbb R}\left(-\frac{f}{e}+i\right)=i{\mathbb R}(e+if)=ia{\mathbb R}$. The case $f\neq0$ is similar. \[LemmaU(F)ZZisomZZ\] $U(F)_{\mathbb Z}=U(F)\cap\Gamma{\cong}{\mathbb Z}$. As $\Gamma\subset{\operatorname{GL}}(n+1,{{\mathcal O}})$ it is clear that $iax\in{{\mathcal O}}$. Also note that $x\in{\mathbb R}$. First consider the case $D\equiv 2,3\mod4$. Therefore $$\begin{aligned} iax=c+d\sqrt{D}\ \text{for some}\ c,d\in{\mathbb Z}.\label{Darstellung_iaxinU(F)ZZforD23}\end{aligned}$$ Additionally we know that $a\in{{{\mathbb Q}(\sqrt{ D })}}$ and hence $a=e+f\sqrt{D}$ for some $e,f\in{\mathbb Q}$.
Thus we can write equation (\[Darstellung\_iaxinU(F)ZZforD23\]) as $$\begin{aligned} & i(e+f\sqrt{D})x=c+d\sqrt{D}\\ {\Leftrightarrow}& f\sqrt{-D}x+iex=c+d\sqrt{D}.\end{aligned}$$ Therefore $fx\sqrt{-D}\in{\mathbb Z}\ \text{and}\ iex\in{\mathbb Z}\sqrt{D}$, so we get $x\in\frac{1}{f(-D)}{\mathbb Z}\sqrt{-D}\cap \frac{1}{e}{\mathbb Z}\sqrt{-D}$. As $e,f\in{\mathbb Q}$ write $e=\frac{p}{q}, f=\frac{r}{s}$ in lowest terms and set $\tilde{x}\sqrt{-D}=x$, hence $$\tilde{x}\in\frac{s}{r(-D)}{\mathbb Z}\cap \frac{q}{p}{\mathbb Z}.$$ Define $D':=-D$ and let $c_1,c_2$ be given by $c_1sp={\operatorname{lcm}}(sp,rD'q)$ and $c_2rD'q={\operatorname{lcm}}(sp,rD'q)$. We claim $$\begin{aligned} \frac{s}{rD'}{\mathbb Z}\cap \frac{q}{p}{\mathbb Z}=\frac{{\operatorname{lcm}}(sp,rD'q)}{rD'p}{\mathbb Z}.\end{aligned}$$ We will first prove ‘$\supset$’. Let $\eta\in \dfrac{{\operatorname{lcm}}(sp,rD'q)}{rD'p}{\mathbb Z}$. Thus we can write with $c_1,c_2$ defined as above and $c\in{\mathbb Z}$: $$\begin{aligned} \eta=\dfrac{{\operatorname{lcm}}(sp,rD'q)}{rD'p}c&=&\dfrac{c_1sp}{rD'p}c=\dfrac{c_2rD'q}{rD'p}c\\ &=&\dfrac{c_1s}{rD'}c=\dfrac{c_2q}{p}c.\end{aligned}$$ We have to find $a=a(c),b=b(c)\in{\mathbb Z}$, such that we can write $\eta$ in the form $\dfrac{s}{rD'}a,\dfrac{q}{p}b$. Now let $a:=c_1c,\ b:=c_2c$ and with this choice $\eta$ lies in $\dfrac{s}{rD'}{\mathbb Z}$ and in $\frac{q}{p}{\mathbb Z}$ and therefore in $\frac{s}{rD'}{\mathbb Z}\cap\frac{q}{p}{\mathbb Z}$. Now we deal with ‘$\subset$’. Choose $\eta\in\frac{s}{rD'}{\mathbb Z}\cap\frac{q}{p}{\mathbb Z}$, i.e. there exist $a,b\in{\mathbb Z}$ with $$\begin{aligned} \eta=\dfrac{s}{rD'}a=\dfrac{q}{p}b.\label{equationetaausdurchschnitt} \end{aligned}$$ We have to show that there exists a $c(a,b)=c\in{\mathbb Z}$ with $\eta=\dfrac{{\operatorname{lcm}}(sp,rD'q)}{rD'p}c$. Now let $c:=\dfrac{b}{c_2}=\dfrac{a}{c_1}$.
Writing the first part of (\[equationetaausdurchschnitt\]) with this choice of $c$ leads to $$\begin{aligned} \eta=\dfrac{s}{rD'}cc_1=\dfrac{spc_1}{rD'p}c=\dfrac{{\operatorname{lcm}}(sp,rD'q)}{rD'p}c.\end{aligned}$$ The other case is analogous. So it remains to show that this choice of $c$ leads to an integer. This can be seen in the following way: By (\[equationetaausdurchschnitt\]) we get $sap=qbrD'$, and multiplying this by $c_1c_2$ gives $$\begin{array}{rrcl} &sapc_1c_2&=&qbrD'c_1c_2\\ {\Longleftrightarrow}& ac_2{\operatorname{lcm}}(sp,rD'q)&=&bc_1{\operatorname{lcm}}(sp,rD'q)\\ {\Longleftrightarrow}&ac_2&=&bc_1. \end{array}$$ We know that $c_1$ and $c_2$ are coprime, as they are defined via the least common multiple. From this and the equation above it follows that $c_1$ divides $a$ and $c_2$ divides $b$. Thus $c\in{\mathbb Z}$ as required. The case $D\equiv 1\mod4$ is similar. This leads to the construction of a toroidal compactification. We have a ${\mathbb Z}$-lattice of rank $1$ in the complex vector space $U(F)_{\mathbb C}=U(F)\otimes_{\mathbb Z}{\mathbb C}$. To give a local compactification of $\Gamma\backslash{{\mathbb C}H^n}$ we will choose coordinates on ${{\mathbb C}H^n}$, namely $(t_1:\dots:t_{n+1})$. By the definition of ${{\mathbb C}H^n}$ we can assume that $t_{n+1}=1$. We will compactify ${{\mathbb C}H^n}$ locally in the direction of the cusp $F$. Therefore we will denote the partial quotient by $${{\mathbb C}H^n}(F):={{\mathbb C}H^n}/U(F)_{\mathbb Z}.$$ By standard calculations this can be identified with $$\begin{aligned} {{\mathbb C}H^n}(F){\cong}{\mathbb C}^*\times{\mathbb C}^{n-1}.\end{aligned}$$ For this identification we introduce new variables $\alpha$ and $\underline{w}=(w_2,\dots,w_n)$: $$\begin{aligned} t_1&\mapsto& \alpha\in{\mathbb C}^*,\\ t_i &\mapsto& w_i\in{\mathbb C},\ 2\leq i\leq n.\end{aligned}$$ We need an explicit description of the action of the group $N(F)$ on ${{\mathbb C}H^n}(F)$.
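The intersection identity used in the proof of Lemma \[LemmaU(F)ZZisomZZ\] can also be confirmed by brute force on sample parameters. The following Python sketch (ours; the sample values are arbitrary) compares the smallest positive common element of the two subgroups of ${\mathbb Q}$ with the predicted generator:

```python
from fractions import Fraction
from math import lcm  # math.lcm requires Python >= 3.9

def intersection_generator(alpha, beta, bound=2000):
    """Smallest positive element of alpha*Z intersected with beta*Z,
    found by brute force over small multiples."""
    alpha_multiples = {alpha * k for k in range(1, bound)}
    for k in range(1, bound):
        if beta * k in alpha_multiples:
            return beta * k
    raise ValueError("no common multiple found below the bound")

# Check  s/(rD') Z  intersect  q/p Z  =  lcm(sp, rD'q)/(rD'p) Z
# on a few sample parameter tuples (ours, not from the text):
for s, r, Dp, q, p in [(3, 2, 5, 4, 7), (1, 1, 3, 2, 5), (5, 3, 2, 3, 4)]:
    predicted = Fraction(lcm(s * p, r * Dp * q), r * Dp * p)
    assert intersection_generator(Fraction(s, r * Dp), Fraction(q, p)) == predicted
```

Exact `Fraction` arithmetic makes the set-membership test reliable; floating point would not be safe here.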
\[LemmaN(F)actsonCHn\] If $$\begin{aligned} g=\left( \begin{array}{c|c|c} u&v&w\\ \hline 0&X&y\\ \hline 0&0 & z \end{array} \right)\in N(F),\end{aligned}$$ then $g$ acts on ${{\mathbb C}H^n}$ by $$\begin{aligned} \alpha&\mapsto&\frac{1}{z}\left(\frac{\alpha}{\overline{z}}+v\underline{w}+w\right),\notag\\ \underline{w}&\mapsto&\frac{1}{z}\left(X\underline{w}+y\right).\notag\end{aligned}$$ This easily follows from the computation $$\begin{aligned} \left( \begin{array}{c|c|c} u&v&w\\ \hline 0&X&y\\ \hline 0&0 & z \end{array} \right) \left(\begin{array}{c} \alpha\\ \underline{w}\\1 \end{array}\right)= \left(\begin{array}{c} u\alpha+v\underline{w}+w\\ X\underline{w}+y\\z \end{array}\right)= \left(\begin{array}{c} \dfrac{u\alpha+v\underline{w}+w}{z}\\ \dfrac{X\underline{w}+y}{z}\\1 \end{array}\right). \end{aligned}$$ and the property $u=(\overline{z}){^{-1}}$ from Lemma \[LemmacalculationN(F)\]. Define the algebraic torus $T$ as $$\begin{aligned} T:=U(F)_{\mathbb C}/U(F)_{\mathbb Z}{\cong}{\mathbb C}^*.\end{aligned}$$ We define a variable $\theta$ on $T$ by $$\begin{aligned} \theta:= \exp_a(\alpha):=\left\{ \begin{array}{cl} e^{\frac{2\pi rD'p}{a{\operatorname{lcm}}(sp,rD'q)\sqrt{-D}}\alpha}=:e^{\frac{2\pi i}{\sigma}\alpha}, &D\equiv2,3\mod4,\\ e^{\frac{4\pi rD'p}{a{\operatorname{lcm}}(sp,rD'q)\sqrt{-D}}\alpha}=:e^{\frac{2\pi i}{\sigma}\alpha},&D\equiv1\mod4, \end{array}\right.\end{aligned}$$ where we use the same notation as in the proof of Lemma \[LemmaU(F)ZZisomZZ\]. This variable has to be invariant under the action of $U(F)_{\mathbb Z}$, i.e. $\alpha\mapsto\alpha+iax=\alpha+\sigma b$ for a $b\in{\mathbb Z}$ and $\sigma$ as above. Let $g\in G(F)=N(F)_{\mathbb Z}/U(F)_{\mathbb Z}$ and suppose that $g$ has order $m>1$. We will also write $g$ if we think of $g$ as an element of $N(F)$. If we want to compactify $\Gamma\backslash{{\mathbb C}H^n}$ locally around the cusp $F$ that means that we allow $\theta=0$. 
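The two defining properties of the torus coordinate $\theta$ — invariance under the lattice action $\alpha\mapsto\alpha+\sigma b$ and $\theta\to0$ toward the cusp — can be illustrated numerically. In this sketch $\sigma$ is treated as a generic positive real standing in for the period computed above (an assumption made purely for illustration):

```python
import cmath

SIGMA = 3.0  # generic positive period, standing in for the sigma defined above

def theta(alpha):
    """The torus coordinate theta = exp(2*pi*i*alpha/sigma)."""
    return cmath.exp(2j * cmath.pi * alpha / SIGMA)

alpha0 = 0.4 + 1.3j
# invariance under the U(F)_Z-action alpha -> alpha + sigma*b, b an integer
assert abs(theta(alpha0 + 5 * SIGMA) - theta(alpha0)) < 1e-9
# theta -> 0 as Im(alpha) -> +infinity: the boundary point added at the cusp
assert abs(theta(alpha0 + 200j)) < 1e-12
```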
So we add $\{0\}\times{\mathbb C}^{n-1}$ to the boundary modulo the action of $G(F)$ which extends uniquely to the boundary. We now want to apply the techniques from section \[SectionInterior\] to the boundary. Suppose now that $g$ fixes the boundary point $(0,\underline{w}_0)$ for some $\underline{w}_0\in{\mathbb C}^{n-1}$. Let $\zeta^{a_i}$ be the eigenvalues of the action of $g$ on the tangent space, where $\zeta$ denotes a primitive $m$th root of unity. Thus we can, as before, define the Reid-Tai sum $\Sigma(g)$. \[PropositiongnoQRfixingboundarySigmag\_geq1\] Suppose no power of $g$ acts as a quasi-reflection at the boundary point $(0,\underline{w}_0)$ and $D<D_0$. Then $\Sigma(g)\geq1$. As $D<D_0$ we can assume $z=\pm1$, because $z$ is invertible in ${{\mathcal O}}$ as $z\overline{u}=1$ by Lemma \[LemmacalculationN(F)\]. Now we have to determine the action of $g$ on the tangent space. This is for obvious reasons given by the matrix $$\begin{aligned} J= \left(\begin{array}{cc} \exp_a(\pm(v\underline{w}_0+w)) & 0\\ \ast & \pm X \end{array}\right).\end{aligned}$$ We denote the order of $X$ by $m_X$ and investigate the decomposition of the representation $X$. As before the representation decomposes into a direct sum of ${{\mathcal V}}_d$’s. We have to distinguish two cases. First assume that $m_X>2$. In this case we are in the situation of Lemma \[LemmagnoQRR=1,2\], as we are in case $D<D_0$ and the only irreducible $1$-dimensional representations are $V_1$ and $V_2$. So by the lemma we get $\Sigma(g)\geq1$. Now let $m_X=1$ or $m_X=2$. The action of $-1\in\Gamma$ is trivial and so we can get $z=1$ by replacing $g$ by $-g$. Assume $m_X=1$ and hence $X=I$. As the element $g$ fixes the boundary point $(0,\underline{w}_0)$ we get $y=0$ from Lemma \[LemmaN(F)actsonCHn\] and then by the group relations of Lemma \[LemmacalculationN(F)\] we have $v=0$ since ${^{H}\!{v}}a=0$.
So the element $g$ has to have the form $$g=\left( \begin{array}{c|c|c} 1&0&w\\ \hline 0&I&0\\ \hline 0&0 & 1 \end{array} \right),$$ and hence $g\in U(F)_{\mathbb Z}$. This implies that $g\in N(F)_{\mathbb Z}/ U(F)_{\mathbb Z}$ is the identity. Finally we have to check the case $m_X=2$. So $g^2\in U(F)_{\mathbb Z}$, and therefore we get the following relations, where $\sigma$ is as before: $$\begin{aligned} v+vX&=&0,\label{equationv+vX=0}\\ Xy+y&=&0,\notag\\ 2w+vy&\equiv&0\mod \sigma.\label{congruence2w+vycong0}\end{aligned}$$ We only consider the case $D\equiv2,3\mod4$ as the case $D\equiv1\mod4$ is analogous. Define $t:=v\underline{w}_0+w$ which is the argument of the exponential map in the matrix $J$. We want to show $2t\equiv0\mod\sigma{\mathbb Z}$ as this implies $\exp_a(t)=\pm1$. We will now use $\underline{w}_0=X\underline{w}_0+y$ as $g$ fixes the boundary point and the relations (\[equationv+vX=0\]), (\[congruence2w+vycong0\]). Hence we get $$\begin{aligned} 2t=2v\underline{w}_0+2w&\equiv&2v\underline{w}_0-vy\\ &=&v\underline{w}_0+v\underline{w}_0-vy=v\underline{w}_0+v(\underline{w}_0-y)\\ &=&v\underline{w}_0+vX\underline{w}_0=v(I+X)\underline{w}_0\\ &\equiv&0\mod \sigma.\end{aligned}$$ Therefore all the eigenvalues on the tangent space are $\pm1$, as $X$ has order $2$ and $\exp_a(t)=\pm1$ for $t$ as above. So there are two possibilities: all but one of the eigenvalues are $+1$, so $g$ acts as a reflection (in this case all quasi-reflections have order $2$), or there are at least two eigenvalues $-1$ and the remaining are $+1$, so we will have $\Sigma(g)\geq1$. \[Corollarynotfixingboundarydivisor\] At the boundary there are no divisors over a dimension $0$ cusp $F$ that are fixed by a non-trivial element of $N(F)_{\mathbb Z}/U(F)_{\mathbb Z}$ in the case $D<D_0$. Each divisor at the boundary has $\theta=0$. The only elements fixing a divisor are the quasi-reflections. 
The variable $\theta$ corresponds to the entry $\exp_a(\pm(v\underline{w}_0+w))$ from the induced action on the tangent space. From the proof of Proposition \[PropositiongnoQRfixingboundarySigmag\_geq1\] each matrix $X$ belonging to a quasi-reflection has order greater than $1$. Thus no divisor $\theta=0$ is fixed. Finally we have to consider quasi-reflections at the boundary, which will be done as in section \[SectionInterior\]. Therefore define $\Sigma'(g)$ for $g\in G(F)$ as in (\[DefinitionSigmaprime\]). \[Propositionh=gkQRSigmaprimegeq1\] Let $g\in G(F)$ be such that $h=g^k$ is a quasi-reflection. Assume that $n\geq13$ and $D<D_0$. Then $\Sigma'(g^f)\geq1$ for every $1\leq f< k$. The proof is similar to the proof of [@MR2336040 Proposition 2.30]. We will again study the action of $h$ on the tangent space. If $\exp_a(t)$ is the eigenvalue not equal to $1$, then $X^f$ contributes at least $1$ to $\Sigma'(g^f)$. Now denote this unique eigenvalue of $h$ on the tangent space by $\zeta\not=1$. Let $\nu$ be the exceptional eigenvector of $h$ with the property $h(\nu)=\zeta\cdot \nu$. Consider the decomposition of $X$ as a $g$-module and assume that $\nu$ occurs in the representation ${{\mathcal V}}_d$. The dimension of ${{\mathcal V}}_d$ has to be $1$ as otherwise it would contribute another eigenvalue not equal to $1$. Now we study the $g$-module $$E_{{{\mathbb Q}(\sqrt{ D })}}^\perp/(E_{{{\mathbb Q}(\sqrt{ D })}}+{{{\mathbb Q}(\sqrt{ D })}}\nu),$$ which is $(n-2)$-dimensional. We can refer to Theorem \[TheoremforgnoQRandngeq11\] as long as $D<D_0$. So $\Sigma(g)\geq1$ if $n-2\geq11$ and thus $\Sigma'(g)\geq1$. Let $n\geq13$ and $D<D_0$. Then the toroidal compactification $({{\mathbb C}H^n}/\Gamma)^*$ of ${{\mathbb C}H^n}/\Gamma$ has canonical singularities. Furthermore, there are no fixed divisors in the boundary.
This is a consequence of Theorem \[Theoremngeq12interiorcansings\], Proposition \[PropositiongnoQRfixingboundarySigmag\_geq1\], Corollary \[Corollarynotfixingboundarydivisor\] and Proposition \[Propositionh=gkQRSigmaprimegeq1\]. [AMRT75]{} E. Arbarello, M. Cornalba, P. A. Griffiths, and J. Harris. , volume 267 of [ *Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*]{}. Springer-Verlag, New York, 1985. Daniel Allcock, James A. Carlson, and Domingo Toledo. The complex hyperbolic geometry of the moduli space of cubic surfaces. , 11(4):659–724, 2002. Daniel Allcock. The moduli space of cubic threefolds. , 12(2):201–223, 2003. A. Ash, D. Mumford, M. Rapoport, and Y. Tai. . Math. Sci. Press, Brookline, Mass., 1975. Lie Groups: History, Frontiers and Applications, Vol. IV. W. L. Baily, Jr. and A. Borel. Compactification of arithmetic quotients of bounded symmetric domains. , 84:442–528, 1966. V. A. Gritsenko, K. Hulek, and G. K. Sankaran. The [K]{}odaira dimension of the moduli of [$K3$]{} surfaces. , 169(3):519–567, 2007. R.-P. Holzapfel. Invariants of arithmetic ball quotient surfaces. , 103:117–153, 1981. Rolf-Peter Holzapfel. . Aspects of Mathematics, E29. Friedr. Vieweg & Sohn, Braunschweig, 1998. Shigeyuki Kond[ō]{}. On the [K]{}odaira dimension of the moduli space of [$K3$]{} surfaces. , 89(3):251–299, 1993. Shigeyuki Kond[ō]{}. A complex hyperbolic structure for the moduli space of curves of genus three. , 525:219–232, 2000. Shigeyuki Kond[ō]{}. The moduli space of 5 points on [$\Bbb P^1$]{} and [$K3$]{} surfaces. In [*Arithmetic and geometry around hypergeometric functions*]{}, volume 260 of [*Progr. Math.*]{}, pages 189–206. Birkhäuser, Basel, 2007. Miles Reid. Young person’s guide to canonical singularities. In [*Algebraic geometry, [B]{}owdoin, 1985 ([B]{}runswick, [M]{}aine, 1985)*]{}, volume 46 of [*Proc. Sympos. Pure Math.*]{}, pages 345–414. Amer. Math. Soc., Providence, RI, 1987. Goro Shimura. 
On analytic families of polarized abelian varieties and automorphic functions. , 78:149–192, 1963. Goro Shimura. . Publications of the Mathematical Society of Japan, No. 11. Iwanami Shoten, Publishers, Tokyo, 1971. Kanô Memorial Lectures, No. 1. Yung-Sheng Tai. On the [K]{}odaira dimension of the moduli space of abelian varieties. , 68(3):425–439, 1982. Louis Weisner. Quadratic fields in which cyclotomic polynomials are reducible. , 29(1-4):377–381, 1927/28.
--- abstract: 'We show that equivalence of deterministic linear tree transducers can be decided in polynomial time when their outputs are interpreted over the free group. Due to the cancellation properties offered by the free group, the required constructions are not only more general, but also simpler than the corresponding constructions for proving equivalence of deterministic linear tree-to-word transducers.' author: - Raphaela Löbel - Michael Luttenberger - Helmut Seidl bibliography: - 'lit.bib' title: Equivalence of Linear Tree Transducers with Output in the Free Group ---
--- abstract: 'Narrowband IoT (NB-IoT) is the latest IoT connectivity solution presented by the 3GPP. NB-IoT introduces coverage classes and provides a significant link budget improvement by allowing repeated transmissions by nodes that experience high path loss. However, those repetitions necessarily increase the energy consumption and the latency in the whole NB-IoT system. The extent to which the whole system is affected depends on the scheduling of the uplink and downlink channels. We address this question, not treated previously, by developing a tractable model of NB-IoT access protocol operation, comprising message exchanges in random-access, control, and data channels, both in the uplink and downlink. The model is then used to analyze the impact of channel scheduling as well as the interaction of coexisting coverage classes, through derivation of the expected latency and battery lifetime for each coverage class. These results are subsequently employed in the investigation of the latency-energy tradeoff in NB-IoT channel scheduling, as well as in determining the optimized operation points. Simulation results show the validity of the analysis and confirm that channel scheduling has a significant impact on the latency and lifetime performance of NB-IoT devices.' author: - | Amin Azari$^*$, Guowang Miao$^*$, Čedomir Stefanović$^+$, and Petar Popovski$^+$\ $^*$KTH Royal Institute of Technology, $^+$Aalborg University\ Email: {aazari,guowang}@kth.se, {cs,petarp}@es.aau.dk bibliography: - 'bibl.bib' title: 'Latency-Energy Tradeoff based on Channel Scheduling and Repetitions in NB-IoT Systems' --- Introduction {#intr} ============ The Internet of Things (IoT) is behind two of the three major drivers of next-generation wireless networks, namely massive IoT connectivity, mission-critical IoT connectivity, and enhanced mobile broadband (eMBB) [@5g_iot].
Due to the fundamental differences in characteristics and service requirements between IoT and legacy traffic in cellular networks, which are seen in the massive number of connected devices, short packet sizes, and long battery lifetimes, revolutionary connectivity solutions have been proposed and implemented by industry [@lif_com; @mag_all]. The most prominent examples of such solutions are SigFox, introduced in 2009, and LoRa, introduced in 2015, both implemented in the unlicensed band, i.e., 868 MHz in Europe [@mag_all; @int1]. On the other hand, the accommodation of IoT traffic over cellular networks has been investigated by the 3GPP, proposing evolutionary solutions like LTE Cat1 and LTE Cat-M [@emtc; @ltemm]. Recently, these efforts have also been complemented by the introduction of revolutionary solutions like NB-IoT [@ciot]. ![NB-IoT features frequency-division duplex for uplink and downlink [@wp]. Downlink/uplink NP channels and signals are time multiplexed, as depicted in the figure.[]{data-label="sf"}](./channel3.png){width="\columnwidth"} NB-IoT represents a big step towards the realization of massive IoT connectivity over cellular networks [@nbiot]. Communication in NB-IoT systems takes place in a narrow, $200 \, \mathrm{kHz}$ bandwidth, resulting in more than $20 \, \mathrm{dB}$ link budget improvement over the legacy LTE. This enables smart devices deployed in remote areas, e.g., basements, to communicate with the base station (BS). As the legacy signaling and communication protocols were designed for large bandwidths, NB-IoT introduces a solution with five new narrowband physical (NP) channels [@prim; @wp], see Fig. \[sf\]: random access channel (NPRACH), uplink shared channel (NPUSCH), downlink shared channel (NPDSCH), downlink control channel (NPDCCH), and broadcast channel (NPBCH).
NB-IoT also introduces four new physical signals: demodulation reference signal (DMRS) that is sent with user data on NPUSCH, narrowband reference signal (NRS), narrowband primary synchronization signal (NPSS), and narrowband secondary synchronization signal (NSSS). Prior works on NB-IoT investigated preamble design for access reservation of devices over NPRACH [@nb_ra; @nbiotaa], uplink resource allocation to the connected devices [@nb_sch], coverage and capacity analysis of NB-IoT systems in rural areas [@cell1], coverage of NB-IoT with consideration of external interference due to deployment in guard band [@nbi_cov], and impact of channel coherence time on coverage of NB-IoT systems in [@sasan]. Further, in [@nbt], energy consumption of IoT devices in data transmission over NB-IoT systems in normal, robust, and extreme coverage scenarios has been investigated. The results obtained in [@nbt] illustrate that NB-IoT significantly reduces the energy consumption with respect to the legacy LTE, due to the existence of the deep sleep mode for the devices that are registered to the BS. In this paper, we address an important and so far untreated problem: when and how much resources to allocate to NPRACH, NPUSCH, NPDCCH, and NPDSCH in coexistence scenarios, where BS is serving NB-IoT devices with random activations that belong to different coverage classes. The solution to this problem has a significant impact on the service execution and devices’ performance, as the resource allocation to different channels faces inherent tradeoffs. The essence of the tradeoff can be explained as follows. If random access opportunities (NPRACH) occur frequently, less uplink radio resources remain for uplink data channel (NPUSCH), which increases the latency in data transmissions. On the other hand, if NPRACH is scheduled infrequently, latency and energy consumption in access reservation increase due to the extended idle-listening time and increased collision probability. 
Further, as device scheduling for uplink/downlink channels is performed over NPDCCH, infrequent scheduling of this channel may lead to wasted uplink resources in NPUSCH and increased latency in data transmissions. Conversely, if NPDCCH occurs frequently, the latency and energy consumption of transmissions over NPUSCH will increase. Another important aspect studied in the paper is the impact of signal repetitions that are used by the devices that are located far away from the BS on battery lifetime and latency performance of other devices in the system. The remainder of the paper is structured as follows. In the next section, we outline the motivation for the development of a NB-IoT-specific analysis of channel scheduling and the reasons why the existing LTE models can not be used, and then we list the contributions of the paper. Section III is devoted to the system model. Section IV presents the analysis. Investigation of the operational tradeoffs and performance evaluation are presented in Section V. Concluding remarks are given in Section VI. Motivation and Contributions ============================ The literature on latency and energy analysis and optimization for LTE networks is mature [@4g; @4ge]. Furthermore, latency and energy tradeoffs in IoT scheduling over LTE networks were investigated in [@access; @eem; @eel]. 
However, although the NB-IoT access networking is heavily inspired by LTE, there are several crucial differences that prevent the use of the LTE models: (i) in NB-IoT, all communications happen in a single LTE resource block, and hence, control, broadcast, random access, and data channels are multiplexed on the same radio resource, (ii) a set of coverage classes has been defined, which enables devices experiencing extreme pathloss values to become connected by leveraging on repetitions of transmitted signals, and (iii) the control plane has been adapted to IoT characteristics, enabling the devices to become disconnected for several hours while they are registered to the BS, which is not possible in LTE. Further, the introduction of coverage classes also brings the novel concerns that are related to coexistence scenarios, where devices from different coverage classes are served within a cell and, thus, mutually impact their communication with the BS. For example, one may consider a scenario in which the uplink is mainly occupied by random access and data transmission of devices with poor coverage, when high numbers of repetitions are required. In such cases, the random access and data channels for other classes can not be scheduled frequently, which will affect their latency and energy performance. In order to properly address the distinguishing features of NB-IoT, in this paper we extend the latency/energy models in [@sg; @sg1; @access; @nbt], incorporate the NB-IoT channel multiplexing, and consider coexistence of devices from a diverse set of coverage classes in the same cell. Specifically, the main contributions of this work are: - Derivation of a tractable analytical model of channel scheduling problem in NB-IoT systems that considers message exchanges on both downlink/uplink channels, from synchronization to service completion. 
- Derivation of closed-form analytical expressions for service latency and energy consumption, and derivation of the expected battery lifetime model for devices connected to the network. - Investigation of a latency-energy tradeoff in channel scheduling for NB-IoT systems. - Investigation of the interaction among the coverage classes coexisting in the system: performance loss in one coverage class due to an increase in number of connected devices from another coverage class. System Model ============ NB-IoT Access Networking ------------------------ Assume a NB-IoT cell with a base station located in its center, and $N$ devices uniformly distributed in it. In general, there are $C$ coverage classes defined in an NB-IoT cell, where the BS assigns a device to a class based on the estimated path loss between them and informs the device of its assignment. Class $j$, $\forall j$, is characterized by the number of replicas $c_j$ that must be transmitted per original data/control packet. For example, based on the specifications in [@wp], each device belonging to group $j$ shall repeat the preamble transmitted over NPRACH $c_j\in \{1,2,4,8,16,32,64,128\}$ times. Further, denote by $f_j$ the fraction of devices belonging to class $j$, by $S$ the number of communication sessions that a typical IoT device performs *daily* and by $p$ the probability that a device requests uplink service. The arrival rates of uplink/downlink service requests to the system are, respectively: $$\begin{aligned} \label{eq:Gs} G_u = \frac{N \, S \, p}{24 \cdot 3600} \; \mathrm{sec}^{-1} , \; G_d = \frac{N \,S\,(1 - p)\,}{24 \cdot 3600} \; \mathrm{sec}^{-1}.\end{aligned}$$ ![Communications exchanges and power consumption in NB-IoT access networking. Note: Reference signals, including NRS, NPSS, NSSS, and master information block (MIB), are broadcasted regularly; here we show only a single realization. 
[]{data-label="cp"}](./com_proc.png){width="3.5in"} Initially, when an NB-IoT device requires an uplink/downlink service, it first listens for the cell information, i.e., NPSS and NSSS, through which it synchronizes with the BS. Then, the device performs access reservation by sending a random access (RA) request to the BS over NPRACH. The BS answers a successfully received RA by sending the random access response (RAR) message over NPDCCH, indicating the resources reserved for serving the device. Finally, the device sends/receives data to/from the BS over NPUSCH/NPDSCH channels, which, depending on the application, may be followed by an acknowledgment (ACK) [@wp]. In contrast to LTE, a device that is connected to the BS can go to the *deep sleep* state [@ciot Section 7.3], from which it can become reconnected just by transmitting a RA request accompanied by a random number [@ciot Fig. 7.3.4.5-1]. This new functionality aims to address the inefficient handling of IoT communications by LTE [@eel; @nbt], as it significantly saves energy due to the fact that IoT devices do not need to restart all steps of the connection establishment procedure. Fig. \[cp\] represents the access protocol exchanges for NB-IoT, as described in [@ciot Section 7.3].[^1] Problem Formulation ------------------- Based on the model presented in Fig. \[cp\], the expected latencies in uplink/downlink communication in class $j$ are, respectively: $$\begin{aligned} D_{{u}_j} &{=} D_{\text{sy}_j} {+} D_{\text{rr}_j} {+} D_{\text{tx}_j} \nonumber\\ D_{{d}_j} &{=} D_{\text{sy}_j} {+} D_{\text{rr}_j} {+} D_{\text{rx}_j}\label{e1}\end{aligned}$$ where $D_{\text{sy}_j}$, $D_{\text{rr}_j}$, $D_{\text{tx}_j}$, $D_{\text{rx}_j}$ are the expected time spent in synchronization, resource reservation, data transmission in uplink service, and data reception in downlink service, respectively.
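The arrival rates in and the latency decomposition above translate directly into code. The sketch below is illustrative only; the numeric values (20000 devices, 12 sessions per day, 80% uplink) mirror the style of the parameters used later in the paper but are assumptions of this sketch:

```python
def arrival_rates(n_devices, s_daily, p_uplink):
    """Uplink/downlink service request rates G_u, G_d in 1/s."""
    total = n_devices * s_daily / (24 * 3600)
    return total * p_uplink, total * (1 - p_uplink)

def expected_latency(d_sy, d_rr, d_data):
    """D_u or D_d: synchronization + resource reservation + data phase."""
    return d_sy + d_rr + d_data

# illustrative values: 20000 devices, 12 sessions/day, 80% uplink requests
g_u, g_d = arrival_rates(n_devices=20000, s_daily=12, p_uplink=0.8)
```

The same `expected_latency` helper serves both the uplink case (with the transmission delay) and the downlink case (with the reception delay).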
Similarly, the models of expected energy consumption of an uplink/downlink communication in class $j$ are: $$\begin{aligned} \mathcal E_{{u}_j} & = E_{\text{sy}_j} + E_{\text{rr}_j} + E_{\text{tx}_j} + E_s \nonumber\\ \mathcal E_{{d}_j} & = E_{\text{sy}_j} + E_{\text{rr}_j} + E_{\text{rx}_j} + E_s\label{e4}\end{aligned}$$where $E_{\text{sy}_j}$, $E_{\text{rr}_j}$, $E_{\text{tx}_j}$, $E_{\text{rx}_j}$, and $E_s$ are the expected device energy consumption in synchronization, resource reservation, data transmission in uplink service, data reception in downlink service, and optional communications like acknowledgment, respectively. Since the energy consumption of a typical IoT device involved in a reporting application can be modeled as a semi-regenerative Poisson process with regeneration point at the end of each reporting period [@nl], one may define the expected battery lifetime as the ratio between stored energy and energy consumption per reporting period. In this case, the expected battery lifetime can be derived as: $$\begin{aligned} L_j= \frac{E_0 }{S p \mathcal E_{u_j}+ S ( 1-p ) \mathcal E_{d_j}} \; [\mathrm{day}] \label{e5}\end{aligned}$$ where $E_0$ is the energy storage at the device battery. In order to derive closed-form latency and energy consumption expressions, e.g., model $E_{{\text{rr}}_j}$ and $D_{{\text{rr}}_j}$, in the sequel we analytically investigate the performance impacts of channel scheduling, arrival traffic, and coexisting coverage classes on the performance indicators of interest. Analysis ======== As mentioned in Section II, in NB-IoT systems the control, data, random access, and broadcast channels are multiplexed on the same set of radio resources. Thus, their mutual impact in both uplink and downlink directions is significant, which is not the case in legacy LTE due to the wide set of available radio resources. In the following, we propose a queuing model of NB-IoT access networking, which captures these interactions.
Queuing Model of NB-IoT Access Protocol --------------------------------------- Fig. \[qu\] depicts the queuing model of NB-IoT access networking, comprising operation of NP random access, control, and data channels. The gray circle represents the uplink server serving two channel queues, NPRACH and NPUSCH, while the yellow circle represents the downlink channel serving three channel queues, NPDCCH, NPDSCH, as well as the reference signals, such as NPSS. Let $t_j$ be the average time interval between two consecutive scheduling of NPRACH of class $j$ and $M_j$ the number of orthogonal random access preambles available in it. The duration of scheduled NPRACH of class $j$ is $c_j \, \tau$, where $\tau$ is the unit length, equal to the NPRACH period for the coverage class with $c_j=1$. The inter-arrival times between two NPRACH periods in NB-IoT can vary from $40 \, \mathrm{ms}$ to $2.56 \, \mathrm{s}$ [@wp]. Further, let $b$ denote the fraction of time in which reference signals are scheduled in a downlink radio frame, e.g., NPBCH, NPSS, and NSSS. Five subframes in every two consecutive downlink frames are allocated to reference signals [@wp], implying $b=0.2$. Finally, a semi-regular scheduling of NPDCCH has been proposed by 3GPP in order to prevent waste of resources in the uplink channel when BS serves another device with poor coverage in the downlink [@snpdcch]; we denote by $d$ the average time interval between two consecutive NPDCCH instances. In the next section, we derive closed-form expressions for components of latency and battery lifetime models, given in -. ![Queuing model of the NB-IoT access networking. The yellow and gray circles represent servers for downlink and uplink channels, respectively. []{data-label="qu"}](./queue.png){width="3.5in"} Derivations {#der} ----------- $D_{\text{sy}_j}$ in is a function of the coverage class $j$. Its average value has been reported in [@ciot Sec. 7.3]. 
$D_{\text{rr}_j}$ is given by: $$\begin{aligned} D_{\text{rr}_j} = \sum\nolimits_{\ell=1}^{N_{r_{\max}}} (1-\mathcal P_j)^{\ell-1}\mathcal P_j \ell ( D_{\text{ra}_j}+D_{\text{rar}_j} )\end{aligned}$$ in which $N_{r_{\max}}$ represents the maximum allowed number of attempts, ${\mathcal P_j}$ the probability of successful resource reservation in an attempt that depends on the number of devices in the class attempting the access, $D_{\text{ra}_j}$ the expected latency in sending a RA message, and $D_{\text{rar}_j}$ the expected latency in receiving the RAR message. $D_{\text{ra}_j}$ is a function of the time interval between consecutive scheduling of NPRACHs and is equal to $0.5 \, t_j+c_j \tau$, while $D_{\text{rar}_j}$ depends on the operation of NPDCCH. NPDCCH can be seen as a queuing system in which the downlink server (see Fig. \[qu\]) visits the queue every $d$ seconds and serves the existing requests. Thus, $D_{\text{rar}_j}$ consists of i) waiting for NPDCCH to occur, which takes on average $d/2$ seconds, ii) time interval spent waiting to be served when NPDCCH occurs, denoted by $D_w$, and iii) transmission time, denoted by $D_{t_j}$. We first characterize $D_w$. When the server visits the NPDCCH queue, on average there are: $$\begin{aligned} \mathcal Q= \sum\nolimits_{j=1}^{C}f_j(G_u+G_d)\max{\{d,t_j\}}+\lambda_b \, d\end{aligned}$$ requests waiting to be served, where the first term in $\mathcal Q$ corresponds to NPRACH-initiated random access requests, see , and $\lambda_b\, d$ models the arrival of BS-initiated control signals, see Fig. \[qu\]. Thus, the average waiting time before the service of a newly arrived RA message starts is $D_w = 0.5 \, \mathcal Q \, \mathcal D_{t}$, where $\mathcal D_t$ is the average service time in NPDCCH. Using $u$ as the average control packet transmission time, the average transmission time for class $j$ is $D_{t_j}=c_j \, u$.
Thus: $$\begin{aligned} \mathcal D_t = \sum\nolimits_{j=1}^C f_j \mathcal D_{t_j} = \sum\nolimits_{j=1}^C f_j c_j u\end{aligned}$$ and $D_{\text{rar}_j}$ becomes: $$\label{e7} D_{\text{rar}_j}=0.5 \, d + 0.5 \, \mathcal Q \, \mathcal D_t+ c_j \, u.$$ Resource reservation of a device over NPRACH is successful if its transmitted preamble does not collide with other nodes’ preambles, which happens with probability $\mathcal P_{j_{\text{RACH}}}$, and the RA response is received within period $T_{\text{th}}$, which happens with probability $\mathcal P_{j_{\text{RAR}}}$. Thus, the probability of successful resource reservation can be approximated as $\mathcal P_j = \mathcal P_{j_{\text{RACH}}}\, \mathcal P_{j_{\text{RAR}}}$. For a device belonging to class $j$, there are $M_j$ orthogonal preambles available every $t_j$ seconds, during which it contends on average with $\mathcal N_j= f_j ( G_u+G_d ) t_j $ devices. Then, $\mathcal P_{j_\text{RACH}}$ is derived as: $$\begin{aligned} \label{e6} \mathcal P_{j_\text{RACH}}= \sum\nolimits_{k=2}^N \frac{(\mathcal N_j)^ke^{-\mathcal N_j}}{k!} \left( \frac{ M_j - 1 }{M_j} \right)^{k-1}.\end{aligned}$$ The cumulative distribution function of service time for a device and the sum of service times for $n > 1$ devices are: $$\begin{aligned} \mathcal F_1(x) = \sum\nolimits_{j=1}^C f_j H(x - c_j u), \\\mathcal F_n(x) = \sum\nolimits_{j=1}^C f_j \mathcal F_{n - 1}(x - c_j u) \nonumber\end{aligned}$$ respectively, where $ H(x)$ is the unit step function. Then, $\mathcal P_{j_{\text{RAR}}}$, which is the probability that RAR is received within $T_{\text{th}}$, is: $$\begin{aligned} \mathcal P_{j_{\text{RAR}}} = & 1 - \nonumber \\ & \sum_{K=2}^{\infty}\sum_{k=1}^{K -1}\frac{k}{K}\frac{\mathcal Q^Ke^{-\mathcal Q}}{K!} \left(1 - \mathcal F_{K - k}(T_{\text{th}}) \right) \mathcal F_{K - k - 1}(T_{\text{th}}).\end{aligned}$$ $D_{\text{tx}_j}$ is a function of scheduling of NPUSCH.
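The RAR latency and the preamble-success expressions above can be evaluated directly. The sketch below (written for this text, with the Poisson sum truncated at a finite `n_max`, an assumption for numerical purposes) computes them iteratively to avoid overflowing factorials:

```python
from math import exp

def d_rar(d, q_avg, d_t_avg, c_j, u):
    """Expected RAR latency: half an NPDCCH period + queueing + transmission."""
    return 0.5 * d + 0.5 * q_avg * d_t_avg + c_j * u

def p_rach(n_j, m_j, n_max=500):
    """Preamble-success term: sum over k >= 2 of Pois(k; N_j) * ((M_j-1)/M_j)^(k-1)."""
    term = exp(-n_j)          # Poisson pmf at k = 0
    total = 0.0
    for k in range(1, n_max + 1):
        term *= n_j / k       # Poisson pmf at k, computed incrementally
        if k >= 2:
            total += term * ((m_j - 1) / m_j) ** (k - 1)
    return total
```

As one would expect, more preambles (larger $M_j$) or fewer contenders (smaller $\mathcal N_j$) increase the success probability.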
Operation of NPUSCH can be seen as a queuing system in which the server handles requests in a fraction of each uplink frame that is allocated to NPUSCH; this fraction is $w=1-\sum\nolimits_{j=1}^{C} {c_j \tau}/{t_j}$. Arrival of service requests to the NPUSCH can be modeled as a batch Poisson process (BPP), as resource reservation happens only in NPRACH periods. The mean batch-size is: $$\begin{aligned} \mathcal G= \frac{1}{C} \sum\nolimits_{j=1}^C f_jG_ut_j\end{aligned}$$ and the rate of batch arrivals is $\sum\nolimits_{j=1}^C 1/t_j$. The uplink transmission time is determined by the packet size and coverage class $j$. We assume that the packet length follows a general distribution with the first two moments equal to $l_{1}$ and $l_{2}$. Then, the transmission (i.e., service) time for the uplink packet follows a general distribution with the first two moments: $$\begin{aligned} s_1 = \sum\nolimits_{j=1}^{C} \frac{f_j c_j{l_{1}}}{{\mathcal R_j}w} \text{ and } s_2 = \sum\nolimits_{j=1}^{C} \frac{f_j c_j^2{l_{2}}}{{\mathcal R_j^2}w^2} \nonumber\end{aligned}$$ where $\mathcal R_j$ is the average uplink transmission rate for class $j$. This queuing system is a BPP/G/1 system, hence, using the results from [@booktt], one can derive the latency in data transmission for class $j$ as: $$\label{e8} D_{\text{tx}_j}= \frac{\rho s_2}{2 s_1(1 - \rho)} + \frac{ \mathcal G s_1 }{2 (1 - \rho)} + \frac{c_j l_1}{\mathcal R_jw}$$ where $\rho=\sum\nolimits_{j=1}^C \mathcal G s_1/t_j$. Similarly, performance of NPDSCH can be seen as a queuing system in which the server visits the queue in a fraction of frame time and serves the requests. This fraction comprises the subframes in which NPDCCH, NPBCH, NPSS, and NSSS are not scheduled, and can be derived similarly as: $$\begin{aligned} y = 1 - b - \frac{\mathcal Q}{d} \, {\sum\nolimits_{j=1}^C f_j c_j u}.
\end{aligned}$$ The arrival of downlink service requests to the NPDSCH queue can also be seen as a BPP, as they arrive only after NPRACH has occurred. The mean batch size is: $$\begin{aligned} \mathbb G = \frac{1}{C} \sum\nolimits_{j=1}^C f_jG_dt_j\end{aligned}$$ and the arrival rate is $\sum\nolimits_{j=1}^C 1/t_j$. The downlink transmission time is determined by the packet size and the coverage class $j$. Assuming that the packet length follows a general distribution with first two moments $m_{1}$ and $m_{2}$, the first two moments of the distribution of the packet transmission time are: $$\begin{aligned} h_1 = \sum\nolimits_{j=1}^{C} \frac{f_j c_j{m_1}}{{ \mathbb R_j}y } \text{ and } h_2 = \sum\nolimits_{j=1}^{C} \frac{f_j c_j^2{m_2}}{{\mathbb R_j^2}y^2}\nonumber\end{aligned}$$ where $\mathbb R_j$ is the average downlink data rate for coverage class $j$. Defining $\nu=\sum\nolimits_{j=1}^C \frac{\mathbb G h_1}{t_j}$, the latency in data reception $D_{\text{rx}_j}$ becomes: $$D_{\text{rx}_j}= \frac{\nu h_2}{2 h_1(1 - \nu)}{+}\frac{ \mathbb G h_1 }{2 (1 - \nu)}{+} \frac{c_j m_1}{\mathbb R_jy}.$$ Finally, we derive the average energy consumption of an uplink/downlink service. Denote by $\xi$, $P_I$, $P_c$, $P_l$, and $P_{t_j}$ the power amplifier efficiency, idle power consumption, circuit power consumption of transmission, listening power consumption, and transmit power consumption for class $j$, respectively.
Then, $$\begin{aligned} & E_{\text{sy}_j} = P_l D_{\text{sy}_j}\\ & E_{\text{rar}_j} =P_l D_{\text{rar}_j} \label{ee1}\\ & E_{\text{rr}_j} = \sum\nolimits_{l=1}^{N_{r_{\max}}} l \, (1-\mathcal P_j)^{l-1} \mathcal P_j( E_{\text{ra}_j}+E_{\text{rar}_j} ) \\ & E_{\text{ra}_j} = ( D_{\text{ra}}-c_j \tau ) P_I+ c_j \tau ( P_c+\xi P_{t_j} ) \\ & E_{\text{tx}_j} = ( D_{\text{tx}_j}- \frac{c_j l_1} { \mathcal R_j w } ) P_I+ ( P_c+\xi P_{t_j} ) \frac{c_j l_1}{\mathcal R_jw} \\ & E_{\text{rx}_j} = ( D_{\text{rx}_j} - \frac{c_j m_1}{\mathbb R_jy} ) P_I+ P_l \frac{c_j m_1}{\mathbb R_jy}\label{eee}\end{aligned}$$ from which the battery lifetime model is derived as: $$\begin{aligned} L_j = E_0 \Big( & S p [E_{\text{sy}_j} + E_{\text{rr}_j} + E_{\text{tx}_j} + E_s] \, + \nonumber \\ & S ( 1-p ) [ E_{\text{sy}_j} + E_{\text{rr}_j} + E_{\text{rx}_j} + E_s] \Big)^{-1} .\end{aligned}$$

  category   parameters                                               values
  ---------- -------------------------------------------------------- -----------------------------------------------------------------
  Traffic    $N$, $S$, $p$                                            $20000$, $0.5 \, \mathrm{h}^{-1}$, $0.8$
  Traffic    $l_1$, $m_1$, $T_{th}$                                   $500$, $5 \, \mathrm{Kbit}$, $2 \, \mathrm{s}$
  Traffic    $u$, $\tau$, $\lambda_b$, $b$                            $2 \, \mathrm{ms}$, $10 \, \mathrm{ms}$, 1/, $0.2$
  Traffic    $f_1$, $f_2$                                             0.5, 0.5
  Power      $P_t$, $P_c$, $P_I$, $P_l$                               $0.2$, $0.01$, $0.01$, $0.1 \, \mathrm{W}$
  Coverage   $c_1, c_2, M_1, M_2$                                     $1$, $2$, $16$, $16$
  Coverage   $\mathcal R_1, \mathcal R_2, \mathbb R_1, \mathbb R_2$   $5, 5, 15, 15 \, \mathrm{Kbit/s}$
  Other      $E_0$, $D_{\text{sy}_1}$, $D_{\text{sy}_2}$              $1 \, \mathrm{KJ}$, $0.33 \, \mathrm{s}$, $0.66 \, \mathrm{s}$
  Other      Commun. frame (CF)                                       10 ms

  : Parameters for performance analysis.[]{data-label="part"}

Performance Evaluation
======================

In this section, we validate the derived expressions, highlight performance tradeoffs in channel scheduling, find optimized system operation points, and identify the mutual impact among the coexisting coverage classes.
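As a first, self-contained sanity check of the expressions, the sketch below composes Eq. (\[e8\]) with the lifetime model above. All numeric inputs are illustrative stand-ins loosely inspired by Table \[part\] (two classes, deterministic packet lengths so $l_2 = l_1^2$, and an assumed uplink arrival rate); it is not the simulator used for the figures.

```python
def uplink_latency(f, c, t, R, G_u, l1, l2, tau, j):
    """D_tx for class j (Eq. (e8)); times in seconds, sizes in bits."""
    C = len(f)
    w = 1 - sum(c[k] * tau / t[k] for k in range(C))           # NPUSCH share
    G = sum(f[k] * G_u * t[k] for k in range(C)) / C           # mean batch size
    s1 = sum(f[k] * c[k] * l1 / (R[k] * w) for k in range(C))  # 1st moment
    s2 = sum(f[k] * c[k] ** 2 * l2 / (R[k] ** 2 * w ** 2) for k in range(C))
    rho = sum(G * s1 / t[k] for k in range(C))                 # offered load
    assert 0 < rho < 1, "BPP/G/1 queue must be stable"
    return (rho * s2 / (2 * s1 * (1 - rho))
            + G * s1 / (2 * (1 - rho)) + c[j] * l1 / (R[j] * w))

def lifetime(E0, S, p, E_sy, E_rr, E_tx, E_rx, E_s):
    """L_j: stored energy over the mean energy drawn per second."""
    drain = S * (p * (E_sy + E_rr + E_tx + E_s)
                 + (1 - p) * (E_sy + E_rr + E_rx + E_s))
    return E0 / drain

# Illustrative inputs: per-class f, c, t (s), R (bit/s); G_u in arrivals/s.
D_tx = uplink_latency([0.5, 0.5], [1, 2], [0.065, 0.065],
                      [5000, 5000], 2.2, 500, 500 ** 2, 0.01, j=0)
print(D_tx, lifetime(1000, 0.5 / 3600, 0.8, 0.33, 0.05, 0.2, 0.1, 0.01))
```

With these inputs the queue is stable ($\rho \approx 0.6$), and the class with the larger repetition factor sees the larger latency, as the last term of Eq. (\[e8\]) predicts.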
System parameters are presented in Table I. Fig. \[comp\] compares the analytical lifetime and latency expressions derived in Section \[der\] (dashed curves) against the simulation results (solid curves) for class 1 devices. The $x$-axis represents $t$, the average time between two consecutive schedulings of random access resources. It is evident that the simulation results, including battery lifetime and service latency in uplink and downlink, match well with the respective analytical results. Fig. \[mu\] shows the mutual impact of two coexisting coverage classes in a cell, i.e., class 1 and class 2. The $y$-axis represents the expected battery lifetime for both classes, while the $x$-axis represents the number of repetitions for class 2, i.e., $c_2$. An increase in $c_2$ increases the amount of radio resources that are used for signal repetitions (i.e., coverage extension) of devices in class 2. This results in an increased latency for both class 1 and class 2 devices, and hence increases the energy consumption per reporting period and decreases the battery lifetime. It can also be seen that an increase in the fraction of nodes belonging to class 2 adversely impacts the battery lifetime performance of class 1 devices. For instance, increasing $c_2$ from 11 to 13 decreases the average battery lifetime of class 1 nodes by $6 \, \%$ when $f_1=0.95$ (i.e., $ f_2=0.05$) and by $28 \, \%$ when $f_1=0.90$ (i.e., $ f_2=0.1 $). Nevertheless, the extended coverage enables devices in class 2 to become connected to the BS, i.e., it provides deeper coverage in indoor areas. Fig. \[bl\] shows the expected battery lifetime versus $t$ and $d$, i.e., the time intervals between two consecutive schedulings of NPRACH and NPDCCH, respectively, for the same coexistence scenario. Increasing $t$ at first increases the lifetime of devices in both classes, as it provides more resources for NPUSCH scheduling and decreases the time spent in data transmission, i.e., $D_{tx}$.
After a certain point, increasing $t$ reduces the lifetime due to the increase in the expected time spent in resource reservation. Similarly, increasing $d$ at first increases the lifetime by providing more resources for NPDSCH, decreasing the time spent in data reception, $D_{rx}$, while after a certain point it decreases the lifetime by increasing the expected time spent in resource reservation. The impact of $t$ and $d$ on the latency of uplink/downlink services is shown in Fig. \[ud\]/Fig. \[dd\]. If the uplink/downlink latency, or the battery consumption, represents the only optimization objective, it is straightforward to derive the optimized operation points. However, Figs. \[bl\]-\[dd\] show that overall optimization of the objectives is coupled in conflicting ways. This is illustrated in Fig. \[opa\], which shows the normalized lifetime and latency for class 1 when one of the parameters, $d$ or $t$, is fixed. For instance, when $d=2$ ms, the downlink and uplink latencies are minimized for $t=25\, \textrm{ms}$ and $t=200\, \textrm{ms}$, respectively, and the lifetime is maximized for $t=65\, \textrm{ms}$. Also, when $t=200$ ms, the downlink and uplink latencies are minimized for $d=200\, \textrm{ms}$ and $d=2\, \textrm{ms}$, respectively, and the lifetime is maximized for $d=10\, \textrm{ms}$. Finally, Figs. \[bl\]-\[dd\] show that the latency- and lifetime-optimized resource allocation strategies differ on a per-class basis; thus, selecting the optimized values of $t$ and $d$ depends on the required quality of service (lifetime and/or latency) for each class. ![Mutual impact among two coexisting classes in a cell versus the number of repetitions for the second class ($C = 2$, $c_1 = 1$, $f_2 =1 - f_1$, $\tau =2 \, \mathrm{ms}$, $d =10 \, \mathrm{ms}$, $t = 65 \, \mathrm{ms}$).[]{data-label="mu"}](./muti.png){width="3.5in"} ![Performance as a function of $t$ and $d$, which are the time intervals between two consecutive schedulings of NPRACH and NPDCCH, respectively.
[]{data-label="figam"}](./lif.png "fig:"){width="3.5in"} ![Performance as a function of $t$ and $d$, which are the time intervals between two consecutive schedulings of NPRACH and NPDCCH, respectively. []{data-label="figam"}](./udel.png "fig:"){width="3.5in"} ![Performance as a function of $t$ and $d$, which are the time intervals between two consecutive schedulings of NPRACH and NPDCCH, respectively. []{data-label="figam"}](./ddel.png "fig:"){width="3.5in"}

Conclusion
==========

NB-IoT access protocol scheduling has been investigated, and a tractable queuing model has been proposed to investigate the impact of scheduling on service latency and battery lifetime. Using the derived closed-form expressions, it has been shown that the scheduling of random access, control, and data channels cannot be treated separately, as the expected latencies and energy consumptions in different channels are coupled in conflicting ways. Furthermore, the derived analytical model has been leveraged to investigate the performance impact of serving devices experiencing high pathloss, and thus in need of more signal repetitions, on the latency and battery lifetime performance of other nodes. Finally, given the set of provisioned radio resources for NB-IoT and the arrival traffic, optimized scheduling policies minimizing the experienced latency and maximizing the expected battery lifetime have been investigated.

Acknowledgment {#acknow .unnumbered}
==============

The research presented in this paper was supported in part by the Advanced Connectivity Platform for Vertical Segment (ACTIVE) project and in part by the European Research Council (ERC Consolidator Grant Nr. 648382 WILLOW) within the Horizon 2020 Program.

[^1]: For the sake of completeness, we also mention another novel reconnection scheme designed for NB-IoT, in which a device can request to resume its previous connection after receiving the random access response (RAR) [@prim Section III].
Towards this end, it needs to respond to the RAR message by transmitting its previous connection ID as well as the cause for resuming the connection.
---
abstract: 'Leopoldt’s Conjecture is a statement about the relationship between the global and local units of a number field. Roughly, the conjecture states that the ${\mathbb{Z}}_p$-rank of the diagonal embedding of the global units into the product of [*all*]{} local units equals the ${\mathbb{Z}}$-rank of the global units. The variation we consider asks: Can we say anything about the ${\mathbb{Z}}_p$-rank of the diagonal embedding of the global units into the product of [*some*]{} local units? We use the $p$-adic Schanuel Conjecture to answer the question in the affirmative, and moreover we give a value for the ${\mathbb{Z}}_p$-rank (of the diagonal embedding of the global units into the product of [*some*]{} local units) in terms of the ${\mathbb{Z}}$-rank of the global units and a property of the local units included in the product.'
address: |
  Department of Mathematics\
  Bates College\
  Lewiston Maine, USA
author:
- Dawn Nelson
title: 'A Variation on Leopoldt’s Conjecture: Some Local Units instead of All Local Units'
---

Introduction
============

H.W. Leopoldt proposed his conjecture relating global and local units in 1962 [@Leo]. Since then his conjecture has been much studied. It has been generalized, strengthened, and weakened, and it continues to be actively studied because of its connections to other areas of number theory. We continue the tradition of considering variations on Leopoldt’s Conjecture. Leopoldt’s Conjecture appears in the study of $p$-adic zeta-functions [@Col] [@Sfp], $K$-theory [@Kol], and Iwasawa theory [@Gil] [@NSW]. In particular, Leopoldt’s Conjecture is related to the splitting of exact sequences of Iwasawa modules [@KW] [@Wint]. The conjecture is also equivalent to computing the dimension of a certain Galois cohomology group [@NSW] [@NQD]. A variation on the conjecture has been used in Galois deformation theory [@CM].
Throughout this paper we will consider Galois number fields $M$ where ${\mathcal{O}}_M$ is the ring of integers of $M$ and ${\mathcal{O}}_M^*$ is the group of [*global units*]{} in ${\mathcal{O}}_M$. We will also fix a prime number $p$ and use ${\mathfrak{p}}$ to denote prime ideals in ${\mathcal{O}}_M$ above $p$. Then $M_{\mathfrak{p}}$ is the completion of $M$ with respect to the non-archimedean ${\mathfrak{p}}$-adic value, ${\mathcal{O}}_{\mathfrak{p}}$ is the ring of integers of $M_{\mathfrak{p}}$, and ${\mathcal{O}}_{\mathfrak{p}}^*$ is the group of [*local units*]{}. Let $\pi$ be the uniformizer of ${\mathfrak{p}}$ in ${\mathcal{O}}_{\mathfrak{p}}$ and define the [*principal local units*]{}: ${\mathcal{O}}^*_{{\mathfrak{p}},1} := 1+\pi{\mathcal{O}}_{\mathfrak{p}}$. We can state Leopoldt’s Conjecture informally: Define the diagonal embedding: $$\begin{aligned} \Delta:{\mathcal{O}}_M^*&\rightarrow \prod_{{\mathfrak{p}}|p}{\mathcal{O}}^*_{{\mathfrak{p}}}\notag\\ u&\mapsto(u,\ldots,u).\notag\end{aligned}$$ Then $\operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*=\operatorname{rank}_{{\mathbb{Z}}_p}(\Delta{\mathcal{O}}_M^*)$. See Section \[2ways\] for the precise formulation which involves principal local units and a topological closure. Leopoldt’s Conjecture has been proven in some special cases. For example, using a method outlined by J. Ax [@Ax] and A. Brumer’s [@Br] $p$-adic version of a theorem of A. Baker [@Ba], one can prove that the conjecture is true for abelian extensions of ${\mathbb{Q}}$, for CM fields with abelian maximal real subfields, and for abelian extensions of imaginary quadratic fields. A paper by M. Laurent proves that the conjecture is true for Galois extensions that satisfy certain conditions on absolutely irreducible characters of the Galois group [@Lau]. In 2009, P. Mihăilescu announced a proof for all number fields [@Mih]. 
The variation we consider concerns the map: $$\Delta_\Gamma:{\mathcal{O}}_M^*\rightarrow\prod_{{\mathfrak{p}}\in \Gamma}{\mathcal{O}}^*_{{\mathfrak{p}}},$$ where $\Gamma$ is a subset of the primes above the fixed $p$, $\Gamma\subset\{{\mathfrak{p}}\subset {\mathcal{O}}_M\,\big|\,{\mathfrak{p}}|p\}$. We explore the question: Can we say anything about $\operatorname{rank}_{{\mathbb{Z}}_p}(\Delta_\Gamma{\mathcal{O}}_M^*)$? The answer: Yes! Our strongest result can be stated informally: Assume both Schanuel’s Conjecture and the $p$-adic version. Let $t$ be a constant determined by which ${\mathfrak{p}}$ are in $\Gamma$.[^1] Let $M$ be real or CM over ${\mathbb{Q}}$. Then $\operatorname{rank}_{{\mathbb{Z}}_p}(\Delta_\Gamma{\mathcal{O}}_M^*) = \operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*- t +1.$ In the case of complex, non-CM, extensions we have the weaker result: Assume the $p$-adic Schanuel Conjecture. Let $t$ be a constant determined by which ${\mathfrak{p}}$ are in $\Gamma$. Let $M$ be a complex, non-CM, extension of ${\mathbb{Q}}$. Then $\operatorname{rank}_{{\mathbb{Z}}_p}(\Delta_\Gamma{\mathcal{O}}_M^*) \geq \operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*- t +1.$ See Section \[calc\] for the formal statements of these theorems, which involve principal local units and topological closures. As we know from linear algebra, the rank of an image can be expressed as the rank of a matrix. Thus in this paper we consider an appropriate matrix, one whose entries are $p$-adic logarithms, and we use transcendence theory (in particular Schanuel’s Conjecture) to calculate its rank.

Notation {#nota}
========

For $q\in{\mathbb{Q}}_p$ let $|q|_p$ be the usual $p$-adic absolute value. It has a unique extension to the algebraic closure of ${\mathbb{Q}}_p$ and to ${\mathbb{C}}_p$, the completion of the algebraic closure of ${\mathbb{Q}}_p$. By abuse of notation, we also let $|\cdot|_p$ denote the absolute value on that extension.
For $\{x\in {\mathbb{C}}_p:|x-1|_p<1\}$, define the $p$-adic logarithm $\log_p(x)$ by the usual power series $$\log_p(X)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}(X-1)^n}{n}.$$ We can uniquely extend $\log_p$ to all of ${\mathbb{C}}_p^*$ so that $\log_p(ab)=\log_p(a)+\log_p(b)$ and $\log_p(\zeta p^s)=0$ for all roots of unity $\zeta$ and $s\in {\mathbb{Z}}$. For $\{x\in {\mathbb{C}}_p:|x|_p<p^{-1/(p-1)}\}$, define the $p$-adic exponential $\exp_p(x)$ by the usual power series $$\exp_p(X)=\sum_{n=0}^{\infty}\frac{X^n}{n!}.$$ The $p$-adic exponential does not extend uniquely to all of ${\mathbb{C}}_p$. Note that $\log_p$ is injective on $\{x\in {\mathbb{C}}_p:|x-1|_p<p^{-1/(p-1)}\}$. Moreover, the function $\log_p$ gives an isomorphism between the multiplicative group $\{x\in {\mathbb{C}}_p:|x-1|_p<p^{-1/(p-1)}\}$ and the additive group $\{x\in {\mathbb{C}}_p:|x|_p<p^{-1/(p-1)}\}$. The function $\exp_p$ is the inverse of $\log_p$ on these groups. Let $G=\operatorname{Gal}(M/{\mathbb{Q}})$ and $|G|=n$. Define $E_p:=\operatorname{Emb}(M,{\mathbb{C}}_p)$ to be the set of all embeddings of $M$ into ${\mathbb{C}}_p$. Similarly define $E:=\operatorname{Emb}(M,{\mathbb{C}})$ to be the set of all embeddings of $M$ into ${\mathbb{C}}$. For $\tau\in E$, we can define $\overline{\tau}\in E$ so that $\overline{\tau}(m) = \overline{\tau(m)}$ where $\overline{\tau(m)}$ is the complex conjugate of ${\tau}(m)$. The relationships between $G$, $E$, and $E_p$ will be important in what follows. Although $G$ is a group and $E$ and $E_p$ are only sets, all three have the same cardinality. For any fixed $\tau\in E$ we have $E=\{\tau\circ g\,|\,g\in G\}$. Similarly, for any fixed $\sigma\in E_p$ we have $E_p=\{\sigma\circ g\,|\,g\in G\}$. Moreover, the following lemma defines natural bijections between $E$ and $E_p$.
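For an integer $x\equiv 1 \pmod p$, the truncated logarithm series above can be evaluated with exact rational arithmetic, and the homomorphism property $\log_p(ab)=\log_p(a)+\log_p(b)$ checked to high $p$-adic precision. A minimal sketch follows; the prime $5$, the inputs $6$ and $11$, and the truncation depth are arbitrary choices for illustration.

```python
from fractions import Fraction

def vp(q: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational number."""
    n, d, v = q.numerator, q.denominator, 0
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def log_p(x: int, p: int, terms: int = 60) -> Fraction:
    """Truncated series sum_{n>=1} (-1)^{n+1} (x-1)^n / n, valid for |x-1|_p < 1."""
    assert (x - 1) % p == 0, "series converges p-adically only when p | x - 1"
    return sum(Fraction((-1) ** (n + 1) * (x - 1) ** n, n)
               for n in range(1, terms + 1))

p, a, b = 5, 6, 11                          # a, b are both 1 (mod 5)
diff = log_p(a * b, p) - log_p(a, p) - log_p(b, p)
print(vp(diff, p))                          # large: the two sides agree p-adically
```

The difference of the truncated sums is nonzero as a rational number but has very large $5$-adic valuation, reflecting that the identity holds exactly for the full series.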
\[bij\] Let $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ be an isomorphism.[^2] Then there is a bijection from $E_p$ to $E$ given by $$(\psi.\sigma)(m):=\psi(\sigma(m))$$ where $\sigma\in E_p$ and $m\in M$. Similarly, given an isomorphism from ${\mathbb{C}}$ to ${\mathbb{C}}_p$ there is a bijection from $E$ to $E_p$. Throughout this paper we will rely on the existence of a special global unit. Because of the similarity to Minkowski units as described in [@Nar], we call this unit a [*weak Minkowski unit*]{}. In fact, in the case of real extensions, our definition and the usual definition agree. An element ${\varepsilon}$ in ${\mathcal{O}}_M^*$ is a [*weak Minkowski unit*]{} if the ${\mathbb{Z}}$-module generated by $g{\varepsilon}$, for all $g \in \operatorname{Gal}(M/{\mathbb{Q}})$, is of finite index in ${\mathcal{O}}_M^*$. Under the usual definition, there are extensions for which a Minkowski unit does not exist and there are extensions for which it is not yet known whether or not a Minkowski unit exists [@Nar]. On the other hand, the case for weak Minkowski units is settled. \[unit\] For all finite Galois extensions $M$ of ${\mathbb{Q}}$ there exists a weak Minkowski unit. Moreover, we can find a weak Minkowski unit ${\varepsilon}$ such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. This proof uses many of the same steps as one proof of Dirichlet’s Unit Theorem (see [@ANT Theorem 5.9]). Let $r=\operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*$ and $|G|=n$. Dirichlet’s Unit Theorem relates $r$ and $n$: $$r=\begin{cases} n-1 & \text{if }M\text{ is real}, \\ \frac{n}{2}-1 & \text{if }M\text{ is complex}. \end{cases}$$ If $M/{\mathbb{Q}}$ is totally real take $E=\operatorname{Emb}(M,{\mathbb{R}})= \{\tau_1,\ldots,\tau_{r+1}\}$. If $M/{\mathbb{Q}}$ is complex take $E=\operatorname{Emb}(M,{\mathbb{C}})= \{\tau_1,\ldots,\tau_{r+1},\overline{{\tau}_1},\ldots,\overline{{\tau}_{r+1}}\}$.
Number the $g\in G$ so that $\tau_1=\tau_ig_i$ and $\tau_1=\overline{{\tau}_i}{g}_{r+1+i}$. Consider the map $\hat\tau$: $$\begin{array}{|rcl|rcl|}\hline \multicolumn{3}{|l|}{\mbox{if }M/{\mathbb{Q}}\mbox{ is real}}&\multicolumn{3}{|l|}{\mbox{if }M/{\mathbb{Q}}\mbox{ is complex}}\\\hline \hat\tau:M&\rightarrow &{\mathbb{R}}^{r+1}=:X&\hat\tau:M&\rightarrow &{\mathbb{C}}^{r+1}=:X\\ \alpha&\mapsto&(\tau_1\alpha,\ldots,\tau_{r+1}\alpha)&\alpha&\mapsto&(\tau_1\alpha,\ldots,\tau_{r+1}\alpha)\\\hline \end{array}$$ Let $\vec{x}=(x_1,\ldots, x_{r+1})$ be in $X$. Define $$|\operatorname{Nm}(\vec{x})|:=\begin{cases} \prod|x_i| & \text{if }M\text{ is real}, \\ \prod|x_i|^2 & \text{if }M\text{ is complex}. \end{cases}$$ This definition agrees with the usual definition of the norm of an element in a number field, i.e., for $\alpha\in{\mathcal{O}}_M$, $|\operatorname{Nm}(\hat\tau(\alpha))|=|\operatorname{Nm}_{M/{\mathbb{Q}}}(\alpha)|$. In what follows, we always take $\vec{x}$ to be in the set $X':=\left\{\vec{x}\in X \,\Big|\, \frac{1}{2}\leq|\operatorname{Nm}(\vec{x})|\leq1\right\}$. For $\vec{x},\vec{y}\in X$, define $\vec{x}\cdot\vec{y}$ to be componentwise multiplication and $\vec{x}\cdot\hat\tau({\mathcal{O}}_M):=\{\vec{x}\cdot\hat\tau(\alpha)\,|\,\alpha\in{\mathcal{O}}_M\}$. In $X$, $\hat\tau({\mathcal{O}}_M)$ is a full lattice with fundamental domain having finite calculable volume $V$ [@ANT Proposition 4.26]. Moreover, $\vec{x}\cdot\hat\tau({\mathcal{O}}_M)$ is a full lattice in $X$ with fundamental domain having volume $V'(\vec{x})=V|\operatorname{Nm}(\vec{x})|$ and $V'(\vec{x})\leq V$. Take a subset, $T\subset X$, that is compact, convex, and symmetric with respect to the origin and such that vol$(T)\geq2^mV'(\vec{x})$ for all $\vec{x}\in X'$ ($m=\dim_{\mathbb{R}}X$). By Minkowski’s Theorem,[^3] for all $\vec{x}\in X'$ there exists a nonzero $\beta\in{\mathcal{O}}_M$ such that $\vec{x}\cdot\hat\tau(\beta)\in T$. 
Points in $T$ have bounded coordinates and thus bounded norm, so for some fixed $N\in{\mathbb{R}}$ $$|\operatorname{Nm}(\vec{x}\cdot\hat\tau(\beta))|\leq N,$$ and $$|\operatorname{Nm}_{M/{\mathbb{Q}}}(\beta)|= |\operatorname{Nm}(\hat\tau(\beta))|\leq N/|\operatorname{Nm}(\vec{x})|\leq 2N.$$ Now consider all ideals $\beta{\mathcal{O}}_M$ where $\beta$ is such that there is some $\vec{x}\in X'$ with $\vec{x}\cdot\hat\tau(\beta)\in T$. The norm of $\beta$ is bounded; thus there are only finitely many such ideals, call them $\{\beta_1{\mathcal{O}}_M,\ldots,\beta_t{\mathcal{O}}_M\}$. So for any $\beta$ with $\vec{x}\cdot\hat\tau(\beta)\in T$ we have $\beta{\mathcal{O}}_M=\beta_j{\mathcal{O}}_M$ for some $j$. Hence there exists a unit $\epsilon$ such that $\beta=\beta_j\epsilon$ and $\vec{x}\cdot\hat\tau(\epsilon)\in \hat\tau(\beta_j^{-1})\cdot T$. Define $T':=\hat\tau(\beta_1^{-1})T\cup\ldots\cup\hat\tau(\beta_t^{-1})T$; it is bounded and independent of any $\vec{x}$. The previous paragraph shows that for each $\vec{x}\in X'$ there exists a unit $\epsilon$ such that $\vec{x}\cdot\hat\tau(\epsilon)\in T'$ and hence $\vec{x}\cdot\hat\tau(\epsilon)$ has bounded coordinates independent of $\vec{x}$. We are now ready to construct our weak Minkowski unit. Choose $\vec{x}$ so that for $k\neq 1$ the coordinates $x_k$ are very large compared to those of $T'$ and $x_1$ is very small, so that $|\operatorname{Nm}(\vec{x})|=1$. This $\vec{x}$ is in $X'$, so there exists a unit ${\varepsilon}$ (which will be our weak Minkowski unit) such that $\vec{x}\cdot\hat\tau({\varepsilon})\in T'$ and hence has bounded coordinates, i.e., $|x_k\tau_k({\varepsilon})|\leq L$ for some $L\in {\mathbb{R}}$. For $k\neq1$, we chose $x_k$ large enough so that $|\tau_k({\varepsilon})|<1$. Hence $|\tau_ig_j({\varepsilon})|<1$ if $\tau_ig_j\neq\tau_1$, i.e., if $i\neq j$. So $\log|\tau_ig_j({\varepsilon})|<0$ for $i\neq j$.
For any fixed $j$, $g_j{\varepsilon}\in{\mathcal{O}}_M^*$; hence $\displaystyle\sum_{i=1}^{r+1}\log|\tau_ig_j({\varepsilon})|=0$. So for $j\neq r+1$ $$\displaystyle\sum_{i=1}^{r}\log|\tau_ig_j({\varepsilon})|=-\log|\tau_{r+1}g_j({\varepsilon})|>0.$$ Combining this with the fact that $\log|\tau_ig_j({\varepsilon})|<0$ for $i\neq j$, linear algebra tells us that the $r\times r$ matrix $(\log|\tau_ig_j({\varepsilon})|)_{i,j=1,\ldots, r}$ is invertible. The invertibility of the matrix implies that $\{g_1{\varepsilon},\ldots,g_{r}{\varepsilon}\}$ are multiplicatively independent. Since the ${\mathbb{Z}}$-rank of ${\mathcal{O}}_M^*$ is $r$, the $r$ multiplicatively independent elements $\{g_1{\varepsilon},\ldots,g_{r}{\varepsilon}\}$ generate a ${\mathbb{Z}}$-module of finite index in ${\mathcal{O}}_M^*$. Moreover, the ${\mathbb{Z}}$-module generated by $g{\varepsilon}$, for all $g\in G$, also has finite index in ${\mathcal{O}}_M^*$. Thus ${\varepsilon}$ is a weak Minkowski unit. By Lemma \[FLT\] below, there exists an $s$ such that $|\sigma_i g_j {\varepsilon}^s-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. If ${\varepsilon}$ is a weak Minkowski unit then so is ${\varepsilon}^{s}$. Thus, if the first weak Minkowski unit we find does not satisfy the desired condition, we can replace it with one that does. \[FLT\] If $u\in{\mathcal{O}}^*_{M}$ then for all $\sigma\in E_p$, $\left|\sigma \left(u^{p^f-1}\right)-1\right|_p<1$, where $f=[{\mathcal{O}}_M/{\mathfrak{p}}{\mathcal{O}}_M:{\mathbb{Z}}/p{\mathbb{Z}}]$ and ${\mathfrak{p}}$ is a prime ideal above $p$. Lemma \[FLT\] is a relative of Fermat’s Little Theorem. \[r\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$. Let ${\varepsilon}$ be a weak Minkowski unit.
Then $\operatorname{rank}\left(\log|\tau_ig_j({\varepsilon})|\right)_{\tau_i\in E,\,g_j\in G}=\operatorname{rank}_{\mathbb{Z}}O_M^*.$ In Proposition \[unit\] we showed that for specially numbered $\tau_i$ and $g_j$ the matrix $(\log|\tau_ig_j{\varepsilon}|)_{i,j=1,\ldots,r}$ is invertible. Thus it has rank equal to $r:=\operatorname{rank}_{\mathbb{Z}}O_M^*$. If we increase the size of the matrix by one row and one column to $(\log|\tau_ig_j{\varepsilon}|)_{i,j=1,\ldots,r+1}$, the rank remains $r$ because for fixed $j$ $$\displaystyle\sum_{i=1}^{r+1}\log|\tau_ig_j({\varepsilon})|=0.$$ So in the case of totally real extensions, we have finished the proof. In the case of a complex extension, when we change the matrix to $\left(\log|\tau_ig_j({\varepsilon})|\right)_{\tau_i\in E,\,g_j\in G}$ each new row is dependent on a previous row because, for any $\tau\in E$ and $x\in M$, $|\tau(x)||\bar{\tau}(x)|=|\tau(x)|^2$. Thus the rank remains $r$. Finally, we note that swapping rows or columns does not affect the rank of a matrix; hence we can number the rows and columns of $\left(\log|\tau_ig_j({\varepsilon})|\right)_{\tau_i\in E,\,g_j\in G}$ independently of the special numbering used in the proof of the proposition. In what follows $c$ will always denote an element of $G$ that is induced by complex conjugation, i.e., one of the elements defined in Lemma \[com\] below. \[com\] Let $M$ be a complex Galois extension of ${\mathbb{Q}}$. Then for all $\tau\in E$ there exists $c\in G$ such that the following diagram commutes. $$\xymatrix{M\ar_{c}[d]\ar^{\overline{\tau}}[dr]\\ M\ar_{\tau}[r]&{\mathbb{C}}}$$ Moreover, elements induced by complex conjugation have the following properties.

- Let $c$ and $c'$ be elements of $G$ induced by complex conjugation; then there exists an $h\in G$ such that $c=hc'h^{-1}$.

- Also, for any $c$ and $h$ in $G$, $hch^{-1}$ is induced by complex conjugation.
Leopoldt’s Conjecture: Two Ways {#2ways}
===============================

We state two precise versions of Leopoldt’s Conjecture that are equivalent for Galois number fields. Let $\Delta$ be the diagonal embedding of the global units ${\mathcal{O}}_M^*$ into the product of local units $\prod_{{\mathfrak{p}}| p}{\mathcal{O}}_{\mathfrak{p}}^*$ and define $X:=\Delta^{-1}\left(\prod_{{\mathfrak{p}}|p}{\mathcal{O}}_{{\mathfrak{p}},1}^*\right)$, i.e., $X$ is the inverse image of the product of the principal local units. Let $\overline{\Delta(X)}$ denote the topological closure of $\Delta(X)$ in $\prod_{{\mathfrak{p}}|p}{\mathcal{O}}_{{\mathfrak{p}},1}^*$. \[DX\] Let $M$ be a number field. Then the ${\mathbb{Z}}$-rank of ${\mathcal{O}}_M^*$ equals the ${\mathbb{Z}}_p$-rank of $\overline{\Delta(X)}$. Leopoldt’s Conjecture can also be stated in a way that contains a matrix with $p$-adic logarithms, and in so doing partially reveals the conjecture’s relationship with the $p$-adic regulator. \[mat\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$, with $g_j\in G=\operatorname{Gal}(M/{\mathbb{Q}})$ and $\sigma_i\in E_p:=\operatorname{Emb}(M,{\mathbb{C}}_p)$. Let ${\varepsilon}\in {\mathcal{O}}_M^*$ be such that $\{g_1{\varepsilon},\ldots,g_n{\varepsilon}\}$ generates a finite index subgroup of ${\mathcal{O}}_M^*$ and such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $i,j\in\{1,\ldots,n\}$.
Then $$\operatorname{rank}\left(\log_p(\sigma_i g_j {\varepsilon})\right)_{\sigma_i\in E_p,\,g_j\in G}=\operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*.$$ Since $\operatorname{rank}_{\mathbb{Z}}{\mathcal{O}}_M^*=\operatorname{rank}\left(\log|\tau_i g_j{\varepsilon}|\right)_{{\tau_i\in E},\,g_j\in G}$, the conclusion of Leopoldt’s Conjecture can be stated in terms of the equality of the ranks of two matrices $$\operatorname{rank}\left(\log_{{p}}(\sigma_i g_j{\varepsilon})\right)_{{\sigma_i\in E_p},\,g_j\in G} =\operatorname{rank}\left(\log|\tau_i g_j{\varepsilon}|\right)_{{\tau_i\in E},\,g_j\in G}.$$ Note that the entries of the matrix on the left are $p$-adic logarithms, while the entries on the right are obtained by first taking the complex modulus and then taking the real logarithm. Also notice that the rows of the left matrix are indexed by elements in $E_p$ whereas the rows of the right matrix are indexed by elements in $E$.

The Variation {#var}
=============

Our variation can be stated precisely in a manner similar to Conjecture \[DX\]. Let $\Gamma\subset\{{\mathfrak{p}}\subset {\mathcal{O}}_M\,\big|\,{\mathfrak{p}}|p\}$ and $\Delta_\Gamma$ be the diagonal embedding $$\Delta_\Gamma:{\mathcal{O}}_M^*\rightarrow\prod_{{\mathfrak{p}}\in \Gamma}{\mathcal{O}}^*_{{\mathfrak{p}}}.$$ Define $Y:=\Delta_\Gamma^{-1}\left(\prod_{{\mathfrak{p}}\in\Gamma}{\mathcal{O}}_{{\mathfrak{p}},1}^*\right)$, i.e., $Y$ is the inverse image of the product of some principal local units. Let $\overline{\Delta_\Gamma(Y)}$ denote the topological closure of $\Delta_\Gamma(Y)$ in $\prod_{{\mathfrak{p}}\in\Gamma}{\mathcal{O}}_{{\mathfrak{p}},1}^*$. We will consider the value of $\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})$. Equivalently, mimicking the relationship between Conjectures \[DX\] and \[mat\], we will consider the rank of a matrix whose entries are $p$-adic logarithms.
In fact, the matrix we consider will be the one from Conjecture \[mat\] with rows removed, i.e., with the $\sigma_i$ in a subset of $E_p$. Fix ${\mathfrak{p}}_0\in\Gamma$. Call the map $M\rightarrow M_{{\mathfrak{p}}_0}\rightarrow {\mathbb{C}}_p$, $\sigma_0$. It is in $E_p$. Next define the surjection $$\begin{aligned} E_p&\rightarrow\{{\mathfrak{p}}|p\}\notag\\ \sigma=\sigma_0g&\mapsto g^{-1}{\mathfrak{p}}_0\notag\end{aligned}$$ Finally, define $S_p:=\{\sigma=\sigma_0g\,|\,g^{-1}{\mathfrak{p}}_0\in\Gamma\}$. \[equivV\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$. Let ${\varepsilon}$ be a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. Keep the definitions and notation introduced above. Then $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})=\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}.$$ Note that when $\Gamma$ contains all primes above $p$ then $S_p=E_p$, and this proposition claims that the two versions of Leopoldt’s Conjecture stated above are equivalent. We will need a lemma. \[conv\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$. Let $u_1,\ldots, u_t\in {\mathcal{O}}_M^*$ and $a_1,\ldots,a_t\in {\mathbb{Z}}_p$ with at least one non-zero. The product $u_1^{a_1}\cdots u_t^{a_t}=1$ in $M_{\mathfrak{p}}$ for all ${\mathfrak{p}}\in\Gamma$ ($\Gamma$ is any non-empty subset of primes above $p$) if and only if, for a fixed place ${\mathfrak{p}}_0\in\Gamma$, $(hu_1)^{a_{1}}\cdots (hu_t)^{a_{t}}=1$ in $M_{{\mathfrak{p}}_0}$ for all $h\in H:=\{h\in G\,|\,h{\mathfrak{p}}={\mathfrak{p}}_0 \mbox{ for some }{\mathfrak{p}}\in\Gamma\}$. If $u_1^{a_1}\cdots u_t^{a_t}=1$ in $M_{\mathfrak{p}}$ for all ${\mathfrak{p}}\in\Gamma$ then there exists $a_{j,n}\in {\mathbb{Z}}$ such that $p$-adically $a_{j,n}\rightarrow a_j$ and $u_1^{a_{1,n}}\cdots u_t^{a_{t,n}}\rightarrow1$ in $M_{\mathfrak{p}}$ for all ${\mathfrak{p}}\in\Gamma$.
Recall that the absolute values corresponding to each prime are related by $|x|_{{\mathfrak{p}}_i}=|gx|_{{\mathfrak{p}}_j}$ for $g\in G$ such that ${\mathfrak{p}}_j=g{\mathfrak{p}}_i$. Thus $(hu_1)^{a_{1,n}}\cdots (hu_t)^{a_{t,n}}\rightarrow1$ in $M_{{\mathfrak{p}}_0}$ for all $h\in H$. Hence $(hu_1)^{a_{1}}\cdots (hu_t)^{a_{t}}=1$ in $M_{{\mathfrak{p}}_0}$ for all $h\in H$. The argument for the converse is similar and is left to the reader. First we prove that $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})\geq\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}.$$ Let $$\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}=t.$$ For contradiction assume $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})<t.$$ Then for all $t$-sized subsets $\{g_1,\ldots,g_t\}\subset G$, the $\{\Delta(g_1{\varepsilon}),\ldots, \Delta(g_t{\varepsilon})\}$ are ${\mathbb{Z}}_p$-dependent. So there exists at least one non-zero $a_j\in{\mathbb{Z}}_p$ such that $(g_1{\varepsilon})^{a_1}\cdots(g_t{\varepsilon})^{a_t}=1$ in $M_{\mathfrak{p}}$ for all ${\mathfrak{p}}\in\Gamma$. Thus for a fixed ${\mathfrak{p}}_0$, Lemma \[conv\] guarantees that $$\label{Prod} (hg_1{\varepsilon})^{a_{1}}\cdots (hg_t{\varepsilon})^{a_{t}}=1$$ in $M_{{\mathfrak{p}}_0}$ for all $h\in H$. As above, $\sigma_0$ is the map $M\rightarrow M_{{\mathfrak{p}}_0}\rightarrow {\mathbb{C}}_p$. Apply the $p$-adic logarithm to Equation \[Prod\]: $$\label{22} \sum_{j=1}^t a_j\log_p(\sigma_0 hg_j{\varepsilon})=0$$ for all $h\in H$. Since the defining conditions on $H$ and $S_p$ are related, Equation \[22\] becomes $$\sum_{j=1}^t a_j\log_p(\sigma_i g_j{\varepsilon})=0$$ for all $\sigma_i\in S_p.$ Recall that this was true for all $t$-sized subsets of $G$. So we have contradicted the fact that $\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}=t$. 
Second we prove that $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})\leq\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}.$$ Let $\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})=t$. So there exists some $t$-sized subset of $\{g_1{\varepsilon},\ldots,g_n{\varepsilon}\}$ whose elements are ${\mathbb{Z}}_p$-independent in $\overline{\Delta_\Gamma(Y)}$. Without loss of generality, assume that $g_1{\varepsilon},\ldots,g_t{\varepsilon}$ are ${\mathbb{Z}}_p$-independent. For contradiction, suppose $\operatorname{rank}\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}<t$. Then there exist $a_1,\ldots,a_t\in {\mathbb{C}}_p$, not all zero, such that $$\sum_{j=1}^t a_j\log_p(\sigma_i g_j{\varepsilon})=0 \mbox{ for all }\sigma_i\in S_p.$$ Hence with ${\mathfrak{p}}_0$ and $\sigma_0$ fixed as above we have $$\label{eq1}\sum_{j=1}^ta_j\log_p(\sigma_0h g_j{\varepsilon})=0 \mbox{ for all }h\in H.$$ Recall $$\begin{aligned} H&:=\{h\in G\,|\,h{\mathfrak{p}}={\mathfrak{p}}_0 \mbox{ for some }{\mathfrak{p}}\in\Gamma\}\\ &\phantom{:}=\{h\in G\,|\,h^{-1}{\mathfrak{p}}_0={\mathfrak{p}}\mbox{ for some }{\mathfrak{p}}\in\Gamma\}. \end{aligned}$$ Let $D_{{\mathfrak{p}}_0}$ be the decomposition group of ${\mathfrak{p}}_0$; left multiplication by its elements permutes $H$. Indeed, let $d\in D_{{\mathfrak{p}}_0}$. Then $d h\in H$ because $h^{-1}d^{-1}{\mathfrak{p}}_0=h^{-1}{\mathfrak{p}}_0\in\Gamma$.
Apply $d$ to Equation (\[eq1\]); since Galois elements commute with the $p$-adic logarithm, $$\sum_{j=1}^t d a_j\log_p(d\sigma_0h g_j{\varepsilon})=0\mbox{ for all }h\in H,$$ and $$\label{eq2}\sum_{j=1}^t d a_j\log_p(\sigma_0 hg_j{\varepsilon})=0\mbox{ for all }h\in H.$$ Letting $d$ vary we have $$\label{eq3} \sum_{d\in D_{{\mathfrak{p}}_0}} \sum_{j=1}^t d a_j\log_p(\sigma_0 hg_j{\varepsilon})=\sum_{j=1}^t \mbox{Trace}_{M_{{\mathfrak{p}}_0}/{\mathbb{Q}}_p}(a_j)\log_p(\sigma_0 hg_j{\varepsilon})=0$$ for all $h\in H$. Note that $\mbox{Trace}(a_j)\in {\mathbb{Q}}_p$. We may assume some $a_j=1$, so that $\mbox{Trace}(a_j)\neq 0$ for at least one $j$. After clearing denominators, Equation (\[eq3\]) translates to: $$\sum_{j=1}^t b_j\log_p(\sigma_0 h g_j{\varepsilon})=0$$ for all $h\in H$, where $b_j\in{\mathbb{Z}}_p$ and at least one is non-zero. There exists $b\in {\mathbb{Z}}$ such that for all $j$ and for all $h$, the $bb_j\log_p(\sigma_0 hg_j{\varepsilon})$ are in the region of ${\mathbb{C}}_p$ for which $\exp_p$ is defined. Hence $$\prod_{j=1}^t (\sigma_0 hg_j{\varepsilon})^{bb_j}=1$$ for all $h\in H$. So in $M_{{\mathfrak{p}}_0}$, $(hg_1{\varepsilon})^{bb_{1}}\cdots (hg_t{\varepsilon})^{bb_{t}}=1$ for all $h\in H$. Lemma \[conv\] implies that $(g_1{\varepsilon})^{bb_{1}}\cdots (g_t{\varepsilon})^{bb_{t}}=1$ in $M_{\mathfrak{p}}$ for all ${\mathfrak{p}}\in\Gamma$, which contradicts the fact that $g_1{\varepsilon},\ldots,g_t{\varepsilon}$ are ${\mathbb{Z}}_p$-independent. In summary, the question posed in the introduction can be restated: [**Question.**]{} What is the rank of the matrix $\left(\log_p(\sigma_i g_j{\varepsilon})\right)_{\sigma_i\in S_p,\,g_j\in G}$?
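Both Lemma \[conv\] and the integer multiplier $b$ above rest on the same mechanism: a ${\mathbb{Z}}_p$-exponent can be approximated $p$-adically by integers, and for $u\equiv1\pmod p$ the corresponding integer powers of $u$ converge. A minimal numeric sketch of this convergence (toy values $p=5$, $u=6$, target exponent $a=-1$, integer approximants $a_n=5^n-1$; not part of the paper's argument):

```python
# Toy illustration: the integers a_n = p^n - 1 converge 5-adically to a = -1,
# and for u = 1 + p the powers u^{a_n} converge to u^{-1}.
# Concretely, u^{p^n - 1} * u = u^{p^n} == 1 (mod p^{n+1}), since
# v_5(6^{5^n} - 1) = v_5(6 - 1) + v_5(5^n) = n + 1.
p, u = 5, 6  # u = 1 + p, so |u - 1|_p < 1 and the p-adic log/exp arguments apply

for n in range(1, 6):
    a_n = p**n - 1           # integer approximant of a = -1 in Z_5
    mod = p**(n + 1)
    # u^{a_n} agrees with u^{-1} modulo p^{n+1}
    assert (pow(u, a_n, mod) * u) % mod == 1
```

As $n$ grows, $u^{a_n}$ matches $u^{-1}$ to ever-higher $5$-adic precision, which is exactly the approximation step used in the proof of Lemma \[conv\].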
Recall that the conclusion of Leopoldt’s Conjecture can be stated: $$\operatorname{rank}\left(\log_{{p}}(\sigma_i g_j{\varepsilon})\right)_{{\sigma_i\in E_p},\,g_j\in G} =\operatorname{rank}\left(\log|\tau_i g_j{\varepsilon}|\right)_{{\tau_i\in E},\,g_j\in G}.$$ It seems reasonable that the answer to our question will involve removing rows from the matrix on the right-hand side. Since the remaining rows of the left-hand matrix are indexed by $S_p$, we need to define a subset of $E$ to index the rows of the right-hand matrix. For each isomorphism $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ consider the bijection between $E$ and $E_p$ as defined in Lemma \[bij\] and define $S_\psi:=\{\psi.\sigma\,|\,\sigma\in S_p\}\subset E$. We propose the following conjecture. \[GLC\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$. Let ${\varepsilon}$ be a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. Let $S_p$ be a subset of $E_p$. Let $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ be an isomorphism and for each $\psi$ let $S_\psi$ be a subset of $E$ as defined previously. Then $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}=\max_\psi\{\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}\}.$$ When $S_p=E_p$ this conjecture is equivalent to Leopoldt’s. This paper’s main result conditionally proves half of the conjecture, i.e., assuming the $p$-adic Schanuel Conjecture we prove that the left-hand side is greater than or equal to the right-hand side. Moreover, we conditionally prove equality when $M$ is either totally real or CM over ${\mathbb{Q}}$. Schanuel’s Conjecture ===================== S. Schanuel formulated his conjecture in the 1960s. It is a generalization of several theorems proving that large classes of numbers are transcendental.
The [ Hermite-Lindemann-Weierstrass Theorem]{} (1880s) implies that $e^\alpha$ is transcendental for non-zero algebraic $\alpha$. The [ Gel’fond-Schneider Theorem]{} (1934) implies that $\alpha^\beta$ is transcendental for algebraic $\alpha\neq0,1$ and algebraic $\beta\notin{\mathbb{Q}}$. [ Baker’s Theorem]{} (1966) implies that $\alpha_1^{\beta_1}\cdots \alpha_n^{\beta_n}$ is transcendental for non-zero algebraic $\alpha_i$ and algebraic $\beta_i$ with $1,\beta_1,\ldots,\beta_n$ linearly independent over ${\mathbb{Q}}$. Schanuel’s Conjecture would imply the transcendence of many more numbers including $e+\pi,e\pi,\pi^e,e^e,\pi^\pi,\log\pi,$ and $(\log2)(\log3)$. More generally, Schanuel’s Conjecture is a statement about the algebraic independence of sets of logarithms. It can be formulated for either complex or $p$-adic logarithms. [**Schanuel’s Conjecture: Logarithmic Formulation.**]{} [*Let non-zero $\alpha_1, \ldots, \alpha_n$ be algebraic over ${\mathbb{Q}}$ and suppose that, for a choice of values of the multi-valued logarithm, $\log\alpha_1, \ldots, \log\alpha_n$ are linearly independent over ${\mathbb{Q}}$. Then $\log\alpha_1, \ldots,$ $\log\alpha_n$ are algebraically independent over ${\mathbb{Q}}$.*]{} [**$p$-adic Schanuel Conjecture: Logarithmic Formulation.**]{} [*Let non-zero $\alpha_1, \ldots, \alpha_n$ be algebraic over ${\mathbb{Q}}$ and contained in an extension of ${\mathbb{Q}}_p$. If $\log_p\alpha_1, \ldots, \log_p\alpha_n$ are linearly independent over ${\mathbb{Q}}$, then $\log_p\alpha_1, \ldots, \log_p\alpha_n$ are algebraically independent over ${\mathbb{Q}}$.*]{} The Main Theorem ================ The presentation in this section uses techniques proposed by M. Waldschmidt [@Wald].
For the remainder of this paper we assume that - [$M$ is a finite Galois extension of ${\mathbb{Q}}$ with $G=\operatorname{Gal}(M/{\mathbb{Q}})$ and $|G|=n$.]{} - ${\varepsilon}\in {\mathcal{O}}_M^*$ is a weak Minkowski unit, i.e., $\{g_1{\varepsilon},\ldots,g_n{\varepsilon}\}$ generates a finite index subgroup of ${\mathcal{O}}_M^*$. Moreover $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $\sigma_i\in E_p$ and $g_j\in G$. - $p$ is any fixed prime in ${\mathbb{Z}}$. - $S_p$ is a fixed subset of $E_p$. - $\psi$ denotes an isomorphism from ${\mathbb{C}}_p$ to ${\mathbb{C}}$. - $S_\psi:=\{\psi.\sigma\,|\,\sigma\in S_p\}\subset E$. \[rem\] Let $M$ be a finite Galois extension of ${\mathbb{Q}}$, with $g_j\in G=\operatorname{Gal}(M/{\mathbb{Q}})$, $\sigma_i\in E_p:=\operatorname{Emb}(M,{\mathbb{C}}_p)$, and $\tau_i\in E:=\operatorname{Emb}(M,{\mathbb{C}})$. Let ${\varepsilon}\in {\mathcal{O}}_M^*$ be a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $i, j$. Let $S_p$ and $S_\psi$ be as defined above. Assume the $p$-adic Schanuel Conjecture. Then $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}\geq\max_\psi\{\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}\}.$$ Let $\Gamma$, $\Delta_\Gamma$, $Y$, and $S_p$ be as defined in Section \[var\]. Keep the notations and assumptions of the theorem. Then $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})\geq\max_\psi\{\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}\}.$$ We will need the following algebraic lemma. For its proof see [@Wald]. \[poly\] Let $K$ be a field, $A_1,\ldots,A_t$ be elements in $K[T_1,\ldots,T_m]$, and $P\in K[T_1,\ldots,T_m,T_{m+1},\ldots,T_{m+t}]$. 
If the polynomial $P(T_1,\ldots,T_m,A_1,\ldots,A_t)$ is the zero polynomial in $K[T_1,\ldots,T_m]$, then $P$ is an element of the ideal of $K[T_1,\ldots,T_m,T_{m+1},\ldots,T_{m+t}]$ generated by the polynomials $\{T_{m+l}-A_l\}_{1\leq l\leq t}$. Set $\mathfrak{R}=(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}$ and $\mathfrak{r}=\operatorname{rank}(\mathfrak{R})$. Also for a fixed $\psi$ set $\mathfrak{l}=\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}$. For contradiction, assume that $\mathfrak{r}<\mathfrak{l}$. Express the determinants of all $\mathfrak{l}\times \mathfrak{l}$ minors of an $|S_p|\times |G|$ matrix as polynomials in $|S_p|\,|G|=:v$ indeterminates: $P_k(T)\in{\mathbb{Q}}[T]$ (where $T=T_1, \ldots, T_{v}$). Since $\mathfrak{r}<\mathfrak{l}$, when we evaluate these polynomials at the entries of $\mathfrak{R}$ we have $P_k(\log_p(\sigma_ig_j{\varepsilon}))=0$. Let $F$ be the ${\mathbb{Q}}$-subvector space of ${\mathbb{C}}_p$ generated by the entries of $\mathfrak{R}$ and define $m=\dim_{\mathbb{Q}}(F)$. Number the $v$ entries of the matrix by mapping $l\in\{1,\ldots,v\}$ to the entry $\log_p(\sigma_{i(l)}g_{j(l)}{\varepsilon})$ so that the first $m$ entries $\left\{\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})\right\}_{1\leq s\leq m}$ are a basis of $F$. By hypothesis the $\{\sigma_{i(s)}g_{j(s)}{\varepsilon}\}_{1\leq s\leq m}$ are algebraic over ${\mathbb{Q}}$ and are in an extension of ${\mathbb{Q}}_p$. By choice the $\{\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})\}_{1\leq s\leq m}$ are linearly independent over ${\mathbb{Q}}$. Thus the $p$-adic Schanuel Conjecture implies that the $\{\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})\}_{1\leq s\leq m}$ are algebraically independent over ${\mathbb{Q}}$.
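The bookkeeping in this step, expressing each minor determinant as a polynomial in the matrix entries so that rank deficiency means all of those polynomials vanish, can be sketched on a toy matrix (sympy; the matrices below are illustrative and unrelated to $\mathfrak{R}$):

```python
from itertools import combinations
from sympy import Matrix

def all_minors_vanish(M, l):
    """True iff every l x l minor determinant of M is zero, i.e. rank(M) < l."""
    rows, cols = M.shape
    return all(
        M.extract(list(ri), list(ci)).det() == 0
        for ri in combinations(range(rows), l)
        for ci in combinations(range(cols), l)
    )

# A rank-1 matrix: every 2x2 minor vanishes, so the rank is below 2.
M = Matrix([[1, 2, 3],
            [2, 4, 6]])
assert M.rank() == 1
assert all_minors_vanish(M, 2)

# Perturb one entry: some 2x2 minor is non-zero, so the rank is 2.
N = Matrix([[1, 2, 3],
            [2, 4, 7]])
assert N.rank() == 2
assert not all_minors_vanish(N, 2)
```

The proof exploits exactly this equivalence: treating the $P_k$ as polynomials lets Schanuel-type algebraic independence force them to vanish identically, not just at one point.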
Let $a_{l},a_{ls}\in{\mathbb{Z}}$ with $a_{l}>0$ be such that for all $l$ (with $1\leq l\leq v$) $$\label{sumlog2} a_{l}\log_p(\sigma_{i(l)}g_{j(l)}{\varepsilon})=\sum_{s=1}^m a_{ls}\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon}).$$ Number the indeterminates so that $T_1,\ldots,T_m$ correspond to the basis elements of $F$. Using Equation (\[sumlog2\]), define $A_l\in{\mathbb{Q}}[T_1,\ldots,T_m]$ as $$A_l=\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s.$$ Now consider $P_k'(T_1,\ldots,T_m)=P_k(T_1,\ldots,T_m,A_{m+1},\ldots,A_{v})\in{\mathbb{Q}}[T_1,\ldots,T_m]$. We still have $P_k'\left(\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})\right)=0.$ But since the $\{\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})\}_{1\leq s\leq m}$ are algebraically independent over ${\mathbb{Q}}$, $P_k'$ must be identically $0$ for all $k$. Thus Lemma \[poly\] implies that, for all $k$, $P_k$ is in the ideal generated by $\{T_l-A_l\}_{1\leq l\leq v}$, i.e., in the ideal generated by $\{T_l-\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s\}_{1\leq l\leq v}$. Let $a_{l},a_{ls}$ be as in Equation (\[sumlog2\]). For each $l$ there exists $b\in{\mathbb{Z}}_{>0}$ such that for all $s$, $ ba_{ls}\log_p(\sigma_{i(s)}g_{j(s)}{\varepsilon})$ and $ba_l\log_p(\sigma_{i(l)}g_{j(l)}{\varepsilon})$ are in the region of ${\mathbb{C}}_p$ for which $\exp_p$ is the inverse of $\log_p$. 
Thus, after multiplying both sides of Equation (\[sumlog2\]) by $b$, we can apply the $p$-adic exponential function and $$(\sigma_{i(l)}g_{j(l)}{\varepsilon})^{a_l b}=\prod_{s=1}^m (\sigma_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}b}.$$ For an isomorphism $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$, Lemma \[bij\] gives us a bijection between the elements of $E_p$ and $E$: $$\xymatrix{\tau_i:M\ar^{\sigma_i}[r]&{\mathbb{C}}_p\ar^{\psi}_{\cong}[r]&{\mathbb{C}}}.$$ So $$\begin{aligned} \psi(\sigma_{i(l)}g_{j(l)}{\varepsilon})^{a_l b}&=\prod_{s=1}^m \psi(\sigma_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}b}\\ (\tau_{i(l)}g_{j(l)}{\varepsilon})^{a_l b}&=\prod_{s=1}^m (\tau_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}b}.\end{aligned}$$ Next apply the usual complex modulus to both sides: $$|\tau_{i(l)}g_{j(l)}{\varepsilon}|^{a_l b}=\prod_{s=1}^m |\tau_{i(s)}g_{j(s)}{\varepsilon}|^{a_{ls}b}.$$ Then apply the real logarithm to both sides: $$\label{cx2} a_l\log\left|\tau_{i(l)}g_{j(l)}{\varepsilon}\right|=\sum_{s=1}^m a_{ls}\log\left|\tau_{i(s)}g_{j(s)}{\varepsilon}\right|.$$ Equation (\[cx2\]) combined with the fact that all $P_k$ are in the ideal generated by $\{T_l-\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s\}_{1\leq l\leq v}$ implies that for all $k$, $$P_k(\log\left|\tau_{i(1)}g_{j(1)}{\varepsilon}\right|,\ldots,\log\left|\tau_{i(v)}g_{j(v)}{\varepsilon}\right|)=0.$$ But $\operatorname{rank}\left(\log|\tau_ig_j{\varepsilon}|\right)_{\tau_i\in S_\psi,\,g_j\in G}=\mathfrak{l}$. Thus the determinant of at least one $\mathfrak{l}\times \mathfrak{l}$ minor is non-zero. Hence we have a contradiction. So for every $\psi$ we have $\operatorname{rank}\mathfrak{R}\geq \operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}$. Despite the existence of infinitely many isomorphisms between ${\mathbb{C}}_p$ and ${\mathbb{C}}$, there are only finitely many subsets $S_\psi$ of the finite set $E$. 
Thus $\operatorname{rank}\mathfrak{R}\geq\max_\psi\{\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}\}.$ \[SchL\] For a finite Galois extension of ${\mathbb{Q}}$, the $p$-adic Schanuel Conjecture implies Leopoldt’s Conjecture. For Leopoldt’s Conjecture, $S_p=E_p$ and so for all $\psi$, $S_\psi=E$. So Theorem \[rem\] implies $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in E_p,\,g_j\in G}\geq\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in E,\,g_j\in G}.$$ We prove the reverse inequality directly. By Corollary \[r\], $\operatorname{rank}_{\mathbb{Z}}({\mathcal{O}}_M^*)= \operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in E,\,g_j\in G}$. Let $X$ and $\Delta$ be as defined in Section \[2ways\]. $X$ is a finite index subgroup of ${\mathcal{O}}_M^*$ and so $\operatorname{rank}_{\mathbb{Z}}(X) = \operatorname{rank}_{\mathbb{Z}}({\mathcal{O}}_M^*)$. Since ${\mathbb{Z}}\subset{\mathbb{Z}}_p$, $\operatorname{rank}_{\mathbb{Z}}(X)\geq\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta X})$. Proposition \[equivV\] implies $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta X})=\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in E_p,\,g_j\in G}.$$ Together these inequalities and equalities imply $$\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in E,\,g_j\in G}\geq\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in E_p,\,g_j\in G}.$$ Computing Rank {#calc} ============== In the following subsections we will compute a lower bound for $\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}$ and in some cases we will compute it exactly. We do this by looking more closely at $S_p$, or more precisely at the removed rows $E_p\smallsetminus S_p$. Fix $p$ and $S_p$. 
Let $$\mathfrak{R}=(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}.$$ Let $r$ be the ${\mathbb{Z}}$-rank of the global units, i.e., $r= \operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in E,\,g_j\in G}$. For $x\in{\mathbb{R}}$, define $$x^+=\begin{cases} x & x\geq0, \\ 0 & x<0. \end{cases}$$ Real Extensions --------------- \[real\] Let $M$ be a totally real Galois extension of ${\mathbb{Q}}$. Let $t=|E_p \smallsetminus S_p|$. Assume the $p$-adic Schanuel Conjecture. Then $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}=r-(t-1)^+.$$ If $t=0$, then we are in the case of Leopoldt’s Conjecture, $\operatorname{rank}\mathfrak{R}=r$, and the formula holds. So we now assume $t>0$. Clearly, for all $\psi$, $|E \smallsetminus S_\psi|=|E_p \smallsetminus S_p|=t$. We showed in the proof of Proposition \[unit\] and in Corollary \[r\] that the $r\times r$ matrix $(\log|\tau_ig_j{\varepsilon}|)_{i,j=1,\ldots,r}$ and the $n\times n$ matrix $(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in E,\,g_j\in G}$ both have rank equal to $r$. Recall that $r=n-1$ for totally real Galois extensions. Thus for all $\psi$ $$\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}=(r+1)-t=r-(t-1).$$ Hence Theorem \[rem\] implies $\operatorname{rank}\mathfrak{R}\geq r-(t-1)$. But since $\mathfrak{R}$ has $n-t=(r+1)-t=r-(t-1)$ rows, $\operatorname{rank}\mathfrak{R}\leq r-(t-1)$. The proposition follows. Let $\Gamma$, $\Delta_\Gamma$, and $Y$ be as defined in Section \[var\]. Keep the notations and assumptions of the proposition. Then $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})=r-(t-1)^+.$$ Complex Extensions ------------------ Let $M$ be a [complex]{} Galois extension of ${\mathbb{Q}}$. Recall that we have fixed $S_p\subset E_p$ and for each isomorphism $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ there is a corresponding $S_\psi\subset E$.
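The rank formulas in this section all take the shape $r-(k-1)^+$, where $k$ counts removed data (removed rows for Proposition \[real\], and analogous counts in the complex case below). A hypothetical helper encoding the truncation $x^+$ (function names are ours), checked against the boundary cases:

```python
def plus(x):
    """The truncation x^+ : equals x when x >= 0, and 0 otherwise."""
    return x if x >= 0 else 0

def truncated_rank(r, k):
    """Rank of the form r - (k-1)^+, as in Proposition [real] with k = t."""
    return r - plus(k - 1)

r = 7                              # e.g. a totally real field of degree 8: r = n - 1
assert truncated_rank(r, 0) == r   # k = 0: the Leopoldt case, full rank r
assert truncated_rank(r, 1) == r   # removing a single row costs nothing
assert truncated_rank(r, 3) == r - 2
```

The $k=1$ case makes the point of the formula visible: deleting one row never lowers the rank, because the deleted row was already a linear combination of the others.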
Throughout ${\varepsilon}$ is a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. \[L\] The rank of $$\mathfrak{L}_\psi:=(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}$$ is determined by the number of pairs of complex conjugate embeddings in $E\smallsetminus S_\psi$. In particular, if $v_\psi$ is the number of pairs of complex conjugate embeddings in $E\smallsetminus S_\psi$, then $\operatorname{rank}\mathfrak{L}_\psi=r-(v_\psi-1)^+$. If there are $v_\psi$ pairs of complex conjugate embeddings in $E\smallsetminus S_\psi$, then there are exactly $\frac{n}{2}-v_\psi$ embeddings $\tau_1,\ldots,\tau_{n/2-v_\psi}$ in $S_\psi$ such that $\tau_j\neq \overline{\tau_i}$ for all $i,j\in\{1,\ldots,\frac{n}{2}-v_\psi\}.$ We proved in Section \[nota\] that $$\operatorname{rank}(\log|\tau_i g_j{\varepsilon}|)_{i=1,\ldots,r;\,g_j\in G}=r=\frac{n}{2}-1$$ when $E$ is ordered $E=\{\tau_1,\ldots,\tau_{r+1},\overline{\tau_1},\ldots,\overline{\tau_{r+1}}\}$. Thus, if $v_\psi>0$, the rows of $\mathfrak{L}_\psi$ that correspond to $\tau_1,\ldots,\tau_{n/2-v_\psi}$ are linearly independent. Any other row, corresponding to some $\tau_i\in S_\psi\smallsetminus\{\tau_1,\ldots,\tau_{n/2-v_\psi}\}$, is dependent via the relation $\log|\tau_i g_j{\varepsilon}|=\log|\overline{\tau_i}g_j{\varepsilon}|$. Thus $$\operatorname{rank}\mathfrak{L}_\psi=\begin{cases} \frac{n}{2}-1 & \text{if } v_\psi=0, \\ \frac{n}{2}-v_\psi & \text{if } v_\psi>0. \end{cases}$$ So $\operatorname{rank}\mathfrak{L}_\psi=r-(v_\psi-1)^+$. Fix $\sigma \in E_p$. Then we can write $E_p\smallsetminus S_p=\{\sigma g_1,\ldots,\sigma g_k\}$ for some $g_i\in G$. So for each isomorphism $\psi$, $$\begin{aligned} E\smallsetminus S_\psi & = \{\psi.\sigma g_1,\ldots,\psi.\sigma g_k\}\\ &= \{\tau_\psi g_1,\ldots,\tau_\psi g_k\},\; \tau_\psi\in E.
\end{aligned}$$ On the other hand, for each $\tau \in E$ there exists an isomorphism $\psi_\tau:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ such that $\psi_\tau.\sigma=\tau$. (The proof of the existence of $\psi_\tau$ is similar to the proof of the existence of an isomorphism between ${\mathbb{C}}_p$ and ${\mathbb{C}}$.) Thus for each $\tau\in E$ there exists an isomorphism $\psi_\tau$ such that $\{\tau g_1,\ldots, \tau g_k\}=E\smallsetminus S_{\psi_\tau}$. Hence the collection of sets $\{\tau g_1,\ldots, \tau g_k\}_{\tau\in E}$ coincides with the collection of sets $\{E\smallsetminus S_\psi\}_\psi$. This is independent of the choice of $\sigma$. Therefore, despite there being infinitely many isomorphisms $\psi$, there are at most $n$ different sets $E\smallsetminus S_\psi$. Hence there are only finitely many $v_\psi$, where $v_\psi$ is the number of pairs of complex conjugate embeddings in $E\smallsetminus S_\psi$. So we can define $v=\min_\psi\{v_\psi\}$. Let $M$ be a complex Galois extension of ${\mathbb{Q}}$. Let $v=\min_\psi\{v_\psi\}$. Assume the $p$-adic Schanuel Conjecture. Then $$\operatorname{rank}\left(\log_p(\sigma_ig_j{\varepsilon})_{\sigma_i\in S_p,\,g_j\in G}\right) \geq r-(v-1)^+.$$ This is immediate from Proposition \[L\] and Theorem \[rem\]. This result depends on examining each $E\smallsetminus S_\psi$ to calculate $v_\psi$. We can get a better result that only requires us to consider $E_p\smallsetminus S_p$. First we need a lemma relating pairs of complex conjugate embeddings to elements of $G$ induced by complex conjugation. \[cc=c\] Let $\tau\in E$ and $c\in G$ be such that $\overline{\tau}=\tau c$. Then the number of pairs of complex conjugate embeddings in $\{\tau g_1,\ldots, \tau g_k\}$ equals the number of right cosets of $\{\operatorname{id},c\}$ in $\{g_1,\ldots,g_k\}$. Let $\{g,cg\}$ be a right coset in $\{g_1,\ldots,g_k\}$. Then $\{\tau g,\tau cg\}$ are embeddings in $\{\tau g_1,\ldots, \tau g_k\}$.
For all $m\in M$, $$\begin{aligned} \tau cg(m)&=\tau c(g(m))\\ &=\overline{\tau(g(m))}\\ &=\overline{\tau g(m)} \end{aligned}$$ Thus $\tau g$ and $\tau cg$ are a pair of complex conjugate embeddings. Let $\tau g_j$ and $\tau g_i$ be a pair of complex conjugate embeddings in $\{\tau g_1,\ldots, \tau g_k\}$, with $g_j,g_i\in \{g_1,\ldots,g_k\}$. Then for all $m\in M$ $$\begin{aligned} \tau g_j(m) &=\overline{\tau g_i(m)}\\ &=\overline{\tau(g_i(m))}\\ &=\tau c(g_i(m))\\ &=\tau c g_i(m). \end{aligned}$$ Since $\tau$ is injective, $g_j(m)=cg_i(m)$ for all $m\in M$. Hence $g_j=cg_i$. Thus $\{g_j,g_i\}$ is a right coset of $\{\operatorname{id},c\}$. Since every coset determines a unique pair of complex conjugates and vice versa, the lemma is proven. Pick $\sigma \in E_p$ and write $E_p\smallsetminus S_p=\{\sigma g_1,\ldots,\sigma g_k\}$ for some $g_i\in G$. Define $J_\sigma:=\{g_1,\ldots,g_k\}$. Let $C$ be the set of elements in $G$ induced by complex conjugation. Then for $c\in C$, define $t_c$ to be the number of right cosets of $\{\operatorname{id},c\}$ in $J_\sigma$ and define $t=\min_{c\in C}\{t_c\}$. \[horse\] Let $M$ be a complex Galois extension of ${\mathbb{Q}}$. Define $v_\psi$ to be the number of pairs of complex conjugate embeddings in $E\smallsetminus S_\psi$, $v:=\min_\psi\{v_\psi\}$, $t_c$ to be the number of right cosets of $\{\operatorname{id},c\}$ in $J_\sigma$, and $t:=\min_{c\in C}\{t_c\}$. Then $t$ is independent of the choice of $\sigma$ and $v=t.$ Choose $\sigma \in E_p$ and write $E_p\smallsetminus S_p=\{\sigma g_1,\ldots,\sigma g_k\}$. Then $J_\sigma=\{g_1,\ldots,g_k\}$. For each isomorphism $\psi$ there exists $\tau_\psi\in E$ such that $E\smallsetminus S_\psi=\{\tau_\psi g_1,\ldots, \tau_\psi g_k\}$. Let $c_\psi\in C$ be such that $\overline{\tau_\psi}=\tau_\psi c_\psi$. Then Lemma \[cc=c\] implies that $v_\psi=t_{c_\psi}$. 
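The count $t_c$ of right cosets $\{g,cg\}$ of $\{\operatorname{id},c\}$ lying inside $J_\sigma$ can be checked on a toy permutation group; below $S_3$ stands in for $G$ and an order-2 element $c$ plays the role of complex conjugation (illustrative only, not a number-field computation):

```python
def compose(a, b):
    """Composition a after b of permutations given as tuples (i -> a[b[i]])."""
    return tuple(a[b[i]] for i in range(len(b)))

def t_c(c, J):
    """Number of right cosets {g, c*g} of {id, c} contained in the set J."""
    cosets = {frozenset({g, compose(c, g)}) for g in J}
    return sum(1 for coset in cosets if coset <= J)

# S_3 acting on {0, 1, 2}; c swaps 0 and 1, an order-2 "conjugation".
e   = (0, 1, 2)
c   = (1, 0, 2)
t13 = (2, 1, 0)

J = frozenset({e, c, t13})
# The coset {e, c} lies in J; the coset of t13 is {t13, c*t13} = {(2,1,0), (2,0,1)},
# which is not contained in J.
assert compose(c, t13) == (2, 0, 1)
assert t_c(c, J) == 1
```

By Lemma \[cc=c\], this count of cosets is exactly the count of complex-conjugate pairs among $\{\tau g\,|\,g\in J\}$, which is what $v_\psi$ measures in Proposition \[horse\].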
On the other hand, for each $c\in C$ there exists $\tau\in E$ such that $\bar{\tau}=\tau c$, and for this $\tau$ there exists an isomorphism $\psi$ such that $E\smallsetminus S_\psi=\{\tau g_1,\ldots,\tau g_k\}$. Again Lemma \[cc=c\] implies that $t_c=v_\psi$. Hence $\min_\psi\{v_\psi\}=\min_{c\in C}\{t_c\}$. So $v=t$. Since $v$ is independent of $\sigma$, so is $t$. \[tt\] Let $M$ be a complex Galois extension of ${\mathbb{Q}}$. Let $t=\min_{c\in C}\{t_c\}$ (as defined above). Assume the $p$-adic Schanuel Conjecture. Then $$\operatorname{rank}\left(\log_p(\sigma_ig_j{\varepsilon})_{\sigma_i\in S_p,\,g_j\in G}\right) \geq r-(t-1)^+.$$ \[ss\] Let $M$ be a complex Galois extension of ${\mathbb{Q}}$. Let $\Gamma$, $\Delta_\Gamma$, and $Y$ be as defined in Section \[var\]. Let $t=\min_{c\in C}\{t_c\}$. Assume the $p$-adic Schanuel Conjecture. Then $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})\geq r-(t-1)^+.$$ The CM Case ----------- We continue to let ${\varepsilon}$ be a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $g_j\in G$ and $\sigma_i\in E_p$. [If we assume that $M$ is CM over ${\mathbb{Q}}$,]{} then Theorem \[rem\] and Corollaries \[tt\] and \[ss\] can be strengthened in two ways: we can remove the “maximum” and “minimum” qualifications and we can replace the inequalities with equalities. The keys to removing the “maximum” and “minimum” qualifications are the following Lemma and the next Proposition. \[uniq\] Let $M$ be CM over ${\mathbb{Q}}$. For all $\tau\in E$ there exists a [*unique*]{} $c\in G$ such that the following diagram commutes. $$\xymatrix{M\ar_c[d]\ar^{\overline{\tau}}[dr]\\ M\ar_{\tau}[r]&{\mathbb{C}}}$$ \[ind\] Let $M$ be a CM extension of ${\mathbb{Q}}$. Then $\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,g_j\in G}$ is independent of the isomorphism $\psi:{\mathbb{C}}_p\rightarrow {\mathbb{C}}$. 
Write $E_p\smallsetminus S_p=\{\sigma g_1,\ldots,\sigma g_k\}$ and $J_\sigma:=\{g_1,\ldots,g_k\}$. In the proof of Proposition \[horse\], we proved that for all $v_\psi$ there exists $c\in C$ such that $v_\psi=t_c$. But since $M$ is CM over ${\mathbb{Q}}$, there is only one element in $C$; denote it $c$. So for all isomorphisms $\psi$ $$\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,g_j\in G}=r-(v_\psi-1)^+=r-(t-1)^+,$$ where $t=t_c$ is the number of right cosets of $\{\operatorname{id},c\}$ in $J_\sigma$. The key to replacing the inequalities with equalities is the following lemma. \[rtun\] Let $M$ be a CM extension of ${\mathbb{Q}}$ with $m_1,m_2\in \mathcal{O}_M^*$ and let $\tau_i,\tau_j\in E$. If $$| \tau_im_1 | = |\tau_jm_2 |,$$ where $|\cdot|$ is the usual complex modulus, then there is a root of unity $\zeta$ such that $$\zeta( \tau_im_1 )= \tau_jm_2.$$ Since $M$ is a Galois extension, $\tau_i(M)=\tau_j(M)$. Denote this subfield of ${\mathbb{C}}$ as $K$. Since $M$ is isomorphic to $K$, $K$ is a CM extension of ${\mathbb{Q}}$ having the same degree over ${\mathbb{Q}}$ as $M$ does. Let $n$ be the degree of $K$ over ${\mathbb{Q}}$. Let $K^+$ be the totally real subfield of $K$. Hence $K$ is a degree 2 extension of $K^+$ and $K^+$ is a degree $\frac{1}{2}n$ extension of ${\mathbb{Q}}$. Dirichlet’s Unit Theorem implies $\operatorname{rank}_{\mathbb{Z}}\mathcal{O}_K^*=\frac{1}{2}n-1$ because $K$ is a complex extension of ${\mathbb{Q}}$ and $\operatorname{rank}_{\mathbb{Z}}\mathcal{O}_{K^+}^{*}=\frac{1}{2}n-1$ because $K^+$ is a totally real extension of ${\mathbb{Q}}$. Since $\mathcal{O}_{K}^{*}$ and $\mathcal{O}_{K^+}^{*}$ have the same finite rank and $\mathcal{O}_{K^+}^{*}$ is a subgroup of $\mathcal{O}_{K}^{*}$, $\mathcal{O}_{K}^{*}\big/\mathcal{O}_{K^+}^{*}$ is a finite group.
The following series of calculations on elements of $K$ shows that given $| \tau_im_1 | = |\tau_jm_2 |$ there exists $\zeta$ with $|\zeta|=1$ such that $\zeta( \tau_im_1 )= \tau_jm_2$. ($c$ is as defined in Lemma \[uniq\].) $$\begin{aligned} | \tau_im_1 | &= |\tau_jm_2 |\\ (\tau_icm_1)( \tau_im_1 ) &=(\tau_jcm_2)( \tau_jm_2)\\ (\tau_jcm_2)^{-1}(\tau_icm_1)( \tau_im_1 ) &=( \tau_jm_2) \end{aligned}$$ Let $\zeta = (\tau_jcm_2)^{-1}(\tau_icm_1)$. $$\begin{aligned} \zeta( \tau_im_1 )&= \tau_jm_2\\ | \zeta|\,| \tau_im_1 |&= | \tau_jm_2 |\\ |\zeta|&=1 \end{aligned}$$ Since $m_1,m_2\in \mathcal{O}_M^*$, $\tau_im_1, \tau_jm_2 \in \mathcal{O}_{K}^{*}$. Thus $\zeta\in\mathcal{O}_{K}^{*}$. Since $\mathcal{O}_{K}^{*}\big/\mathcal{O}_{K^+}^{*}$ is a finite group, there exists $q\in{\mathbb{Z}}$ such that $\zeta^q\in \mathcal{O}_{K^+}^{*}\subset{\mathbb{R}}$. Thus since $|\zeta|=1$, $|\zeta^q|=1$. Combining this with $\zeta^q\in {\mathbb{R}}$ we have $\zeta^q=\pm1$. Hence $\zeta^{2q}=1$, so $\zeta$ is a root of unity. \[cm\] Let $M$ be CM over ${\mathbb{Q}}$. Let ${\varepsilon}\in {\mathcal{O}}_M^*$ be a weak Minkowski unit such that $|\sigma_i g_j {\varepsilon}-1|_p<1$ for all $i,j$. Let $S_p$ be a subset of $E_p$ and $S_\psi$ be a subset of $E$ as defined previously. Assume both Schanuel’s Conjecture and the $p$-adic version. Then for any isomorphism $\psi:{\mathbb{C}}_p\rightarrow{\mathbb{C}}$ $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}=\operatorname{rank}(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}.$$ Before we begin the proof, note the following important corollaries. \[CMt\] Keep the assumptions of the previous theorem. For any choice of $\sigma$, define $J_\sigma$ as before. Let $c$ be the unique element of $G$ induced by complex conjugation and let $t$ be the number of right cosets of $\{\operatorname{id}, c\}$ in $J_\sigma$.
Then $$\operatorname{rank}(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,g_j\in G}=r-(t-1)^+.$$ Let $\Gamma$, $\Delta_\Gamma$, and $Y$ be as defined in Section \[var\]. Keep the assumptions of the previous Theorem and Corollary. Then $$\operatorname{rank}_{{\mathbb{Z}}_p}(\overline{\Delta_\Gamma(Y)})= r-(t-1)^+.$$ Let $\mathfrak{R}=(\log_p(\sigma_ig_j{\varepsilon}))_{\sigma_i\in S_p,\,g_j\in G}$ and $\mathfrak{L}_\psi=(\log|\tau_ig_j{\varepsilon}|)_{\tau_i\in S_\psi,\,g_j\in G}$. Define $\mathfrak{r}:=\operatorname{rank}\mathfrak{R}$. By Proposition \[ind\] the rank of $\mathfrak{L}_\psi$ is the same for all $\psi$; we denote this common rank by $\mathfrak{l}$. By Theorem \[rem\] we know that $\mathfrak{r}\geq\mathfrak{l}$. For contradiction, assume that $\mathfrak{r}>\mathfrak{l}$. This proof is very similar to the proof of Theorem \[rem\]. Express the determinants of all $\mathfrak{r}\times \mathfrak{r}$ minors of an $|S_p|\times |G|$ matrix as polynomials in $|S_p|\,|G|=:v$ indeterminates, $P_k(T)\in{\mathbb{Q}}[T]$ (where $T=T_1, \ldots, T_{v}$). Since $\mathfrak{r}>\mathfrak{l}$, when we evaluate the polynomials at the entries of $\mathfrak{L}_\psi$ we have $P_k(\log|\tau_ig_j{\varepsilon}|)=0$. Let $F$ be the ${\mathbb{Q}}$-subvector space of ${\mathbb{R}}$ generated by the entries of $\mathfrak{L}_\psi$ and define $m=\dim_{\mathbb{Q}}(F)$. Number the $v$ entries of the matrix by mapping $l\in\{1,\ldots,v\}$ to the entry $\log|\tau_{i(l)}g_{j(l)}{\varepsilon}|$ so that the first $m$ entries $\left\{\log|\tau_{i(s)}g_{j(s)}{\varepsilon}|\right\}_{1\leq s\leq m}$ are a basis of $F$. Since $M$ is a finite extension of ${\mathbb{Q}}$, ${\varepsilon}\in M$ is algebraic over ${\mathbb{Q}}$, hence so are $\tau_{i(s)}g_{j(s)}{\varepsilon}$ and $|\tau_{i(s)}g_{j(s)}{\varepsilon}|$. Thus the $\{|\tau_{i(s)}g_{j(s)}{\varepsilon}|\}_{1\leq s\leq m}$ satisfy the hypotheses of Schanuel’s Conjecture (not the $p$-adic version).
So the $\{\log|\tau_{i(s)}g_{j(s)}{\varepsilon}|\}_{1\leq s\leq m}$ are algebraically independent over ${\mathbb{Q}}$. Let $a_{l},a_{ls}\in{\mathbb{Z}}$ with $a_{l}>0$ be such that for all $l$, $1\leq l\leq v$, $$\label{sumlog3} a_{l}\log|\tau_{i(l)}g_{j(l)}{\varepsilon}|=\sum_{s=1}^m a_{ls}\log|\tau_{i(s)}g_{j(s)}{\varepsilon}|.$$ Number the indeterminates so that $T_1,\ldots,T_m$ correspond to the basis elements of $F$. Using Equation (\[sumlog3\]), define $A_l\in{\mathbb{Q}}[T_1,\ldots,T_m]$ as $$A_l=\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s.$$ Now consider $P_k'(T_1,\ldots,T_m)=P_k(T_1,\ldots,T_m,A_{m+1},\ldots,A_{v})\in{\mathbb{Q}}[T_1,\ldots,T_m]$. We still have $P_k'\left(\log|\tau_{i(s)}g_{j(s)}{\varepsilon}|\right)=0.$ Since the $\{\log|\tau_{i(s)}g_{j(s)}{\varepsilon}|\}_{1\leq s\leq m}$ are algebraically independent over ${\mathbb{Q}}$, $P_k'$ must be identically $0$ for all $k$. Thus Lemma \[poly\] implies that, for all $k$, $P_k$ is in the ideal generated by $\{T_l-A_l\}_{1\leq l\leq v}$, i.e., in the ideal generated by $\{T_l-\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s\}_{1\leq l\leq v}$. Equation (\[sumlog3\]) is true in ${\mathbb{R}}$, so we apply the real exponential function $$\label{93}|\tau_{i(l)}g_{j(l)}{\varepsilon}|^{a_l }=\prod_{s=1}^m |\tau_{i(s)}g_{j(s)}{\varepsilon}|^{a_{ls}}.$$ Then Lemma \[rtun\] implies that $$\zeta(\tau_{i(l)}g_{j(l)}{\varepsilon})^{a_l }=\prod_{s=1}^m (\tau_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}},$$ where $\zeta$ is a root of unity in ${\mathbb{C}}$. 
For an isomorphism $\phi:{\mathbb{C}}\rightarrow{\mathbb{C}}_p$, Lemma \[bij\] gives us a bijection between the elements of $E_p$ and $E$: $$\xymatrix{\sigma_i:M\ar^(.6){\tau_i}[r]&{\mathbb{C}}_p\ar^{\phi}_{\cong}[r]&{\mathbb{C}}}.$$ So $$\begin{aligned} \phi(\zeta)\phi(\tau_{i(l)}g_{j(l)}{\varepsilon})^{a_l }&=\prod_{s=1}^m \phi(\tau_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}}\\ \phi(\zeta)(\sigma_{i(l)}g_{j(l)}{\varepsilon})^{a_l }&=\prod_{s=1}^m (\sigma_{i(s)}g_{j(s)}{\varepsilon})^{a_{ls}}.\end{aligned}$$ Next we apply the $p$-adic logarithm to both sides, noting that $\phi(\zeta)$ is a root of unity in ${\mathbb{C}}_p$ and that the $p$-adic logarithm of a root of unity equals 0: $$\begin{aligned} \log_p(\phi(\zeta))+a_l\log_p\left(\sigma_{i(l)}g_{j(l)}{\varepsilon}\right)&=\notag\\a_l\log_p\left(\sigma_{i(l)}g_{j(l)}{\varepsilon}\right)&=\sum_{s=1}^m a_{ls}\log_p\left(\sigma_{i(s)}g_{j(s)}{\varepsilon}\right).\label{cx3}\end{aligned}$$ Now Equation (\[cx3\]) combined with the fact that all $P_k$ are in the ideal generated by $\{T_l-\sum_{s=1}^m \frac{a_{ls}}{a_l}T_s\}_{1\leq l\leq v}$ implies that for all $k$, $$P_k(\log_p\left(\sigma_{i(1)}g_{j(1)}{\varepsilon}\right),\ldots,\log_p\left(\sigma_{i(v)}g_{j(v)}{\varepsilon}\right))=0.$$ But $\operatorname{rank}\mathfrak{R}=\mathfrak{r}$, so the determinant of at least one $\mathfrak{r}\times \mathfrak{r}$ minor is non-zero. Hence we have a contradiction.

[^1]: See Section \[calc\] for details on $t$.

[^2]: The proof that such an isomorphism exists requires Zorn’s Lemma (equivalently, the Axiom of Choice). In fact, there are infinitely many isomorphisms between ${\mathbb{C}}_p$ and ${\mathbb{C}}$.

[^3]: [**Minkowski’s Theorem.**]{} *Let $D$ be the fundamental domain of a lattice. Let $T$ be a subset of a real vector space of dimension $m$ that is compact, convex, and symmetric in the origin. If $\mathrm{vol}(T)\geq2^m\,\mathrm{vol}(D)$, then $T$ contains a point of the lattice other than the origin.*
--- abstract: | In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is ${\#\P}$-hard. Aaronson’s proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant’s seminal proof of the same fact in 1979. Nevertheless, it did not show ${\#\P}$-hardness of the permanent for any class of matrices which was not previously known. In this paper, we present a collection of *new* results about matrix permanents that are derived primarily via these linear optical techniques. First, we show that the problem of computing the permanent of a real orthogonal matrix is ${\#\P}$-hard. Much like Aaronson’s original proof, this implies that even a multiplicative approximation remains ${\#\P}$-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field $\mathbb{F}_{p^4}$ for $p \neq 2, 3$. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. Finally, we use more elementary arguments to prove ${\#\P}$-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer. bibliography: - 'bibliography.bib' title: New Hardness Results for the Permanent Using Linear Optics --- Acknowledgments =============== We would like to thank Scott Aaronson for posing the question which led to this paper and for his comments on this paper. We would also like to thank Rio LaVigne and Michael Cohen for some key mathematical insights.
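The characteristic-2 claim in the abstract above, that the permanent coincides with the determinant, is elementary: since $\operatorname{sgn}(\sigma)\equiv 1 \pmod 2$, the two signed sums over the symmetric group agree term by term modulo 2. A brute-force illustration over all permutations (exponential time, small matrices only; the sample matrix is arbitrary):

```python
from itertools import permutations

def perm_and_det(a):
    """Return (permanent, determinant) of a square integer matrix by
    summing over all permutations (O(n! * n^2) -- illustration only)."""
    n = len(a)
    per = det = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][sigma[i]]
        # parity of the permutation: count inversions
        inv = sum(sigma[i] > sigma[j] for i in range(n) for j in range(i + 1, n))
        per += prod
        det += (-1) ** inv * prod
    return per, det

p, d = perm_and_det([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
print(p, d, (p - d) % 2)  # the permanent and determinant always agree mod 2
```

Over a field of characteristic 2 the determinant is computable in polynomial time by Gaussian elimination, which is exactly why the permanent cannot remain hard there, in contrast to the characteristic-$\neq 2,3$ orthogonal case discussed in the abstract.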
\[s.2e6+a5+a2\] Now, let $\L$ be the section given by . The pair $(\B,\L)$ is plotted in Figure \[fig.2e6+a5+a2\], and the singular fibers are listed in \[s.inflection\]. The generators for $\pi_F$ are $\zeta_1=\Ga$, $\zeta_2=\Gb$, $\zeta_3=\Gd$, $\zeta_4=\Gg$, and the relations are: $$\alignat2 &[\Gd,\Ga\Gb]=1,\quad \Gd\Ga\Gb\Ga=\Gb\Ga\Gb\Gd &&\text{(the cusp $x=0$)},\\\allowdisplaybreak &(\Gg\Gd)^3=(\Gd\Gg)^3&& \text{(the tangency point $x\approx3.55$)},\\\allowdisplaybreak &(\Gd\Gg\Gd)\Gg(\Gd\Gg\Gd)\1=\Gb&& \text{(the vertical tangent $x=4$)},\\\allowdisplaybreak &[\Gd,\Gg\Ga\Gg\1]=1&& \text{(the transversal intersection $x\approx4.94$)},\\\allowdisplaybreak &(\Ga\Gb\Gd\Gg)^2=1&\qquad&\text{(the relation at infinity)}. \endalignat$$ Let $\Gd^2=1$ and pass to the generators $\Ga$, $\bGa$, $\Gb$, $\bGb$, $\Gg$, $\bGg$, see Lemma \[bpi->pi\]. Then, in addition to the cusp relations  (or ) and relation at infinity , we obtain $$\gather \Gg\bGg\Gg=\bGg\Gg\bGg, \eqtag\label{eq.4.1}\\\allowdisplaybreak \bGg\Gg\bGg\1=\Gb,\quad\Gg\bGg\Gg\1=\bGb \eqtag\label{eq.4.2}\\\allowdisplaybreak \Gg\Ga\Gg\1=\bGg\bGa\bGg\1. \eqtag\label{eq.4.3} \endgather$$ Thus, $$\pi_1(\Cp2\sminus B)=G_2'':=\bigl<\Ga,\Gb,\Gg,\bGg\bigm| \text{$(\Ga\Gb)^3=(\Gb\Ga)^3$, \eqref{eq.3.5}, \eqref{eq.4.1}--\eqref{eq.4.3}}\bigr>, \eqtag\label{eq.G2.2}$$ where $\bGa$ and $\bGb$ are the words given by . Note that one can eliminate either $\bGg$, using , or $\Gb$, using . Extending the braid monodromy beyond the cusp of $B$ (to the negative values of $x$), we obtain the following statement.
\[e6.a5.2\] Let $\Gd_1$, $\Ga_1$, $\Gb_1$, $\Gg_1$ be the basis in a fiber $F'=\{x=\const\ll0\}$ shown in Figure \[fig.basis2\], right. Then, considering $\Ga_1$, $\Gb_1$, and $\Gg_1$ as elements of $\bpi$, one has $\Ga_1=\bGb$, $\Gb_1=\bGb\1\bGa\bGb$, and $\Gg_1=\Gg$. \[e6->>pi.a5.2\] Let $\MB$ be a Milnor ball about a type $\bE_6$ singular point of $B$. Then the inclusion homomorphism $\pi_1(\MB\sminus B)\to\pi_1(\Cp2\sminus B)$ is onto. In view of  and , the elements $\bGa=\Ga_1\Gb_1\Ga_1\1$, $\bGb=\Ga_1$, and $\Gg=\Gg_1$ generate the group.

Comparing the two groups
------------------------

Let $B'$ and $B''$ be the sextics considered in \[s.2e6+a5+a2.1\] and \[s.2e6+a5+a2.2\], respectively, so that their fundamental groups are $G_2'$ and $G_2''$. As explained in Eyral, Oka [@EyralOka], the profinite completions of $G_2'$ and $G_2''$ are isomorphic (as the two curves are conjugate over an algebraic number field). Whether $G_2'$ and $G_2''$ themselves are isomorphic is still an open question. Below, we suggest an attempt to distinguish the two groups geometrically. \[a5->>pi\] Let $\MB$ be a Milnor ball about the type $\bA_5$ singular point of $B'$. Then the inclusion homomorphism $\pi_1(\MB\sminus B')\to\pi_1(\Cp2\sminus B')$ is onto. According to , the group $\pi_1(\Cp2\sminus B')=G_2'$ is generated by $\Ga$ and $\Gb$, which are both in the image of $\pi_1(\MB\sminus B')$. \[a5not->>pi\] Let $\MB$ be a Milnor ball about the type $\bA_5$ singular point of $B''$. Then the image of the inclusion homomorphism $\pi_1(\MB\sminus B'')\to\pi_1(\Cp2\sminus B'')$ does not contain $\Gg$ or $\bGg$. If true, Conjecture \[a5not->>pi\] together with Proposition \[a5->>pi\] would provide a topological distinction between the pairs $(\Cp2,B')$ and $(\Cp2,B'')$. Note that, according to [@JAG], the two pairs are not diffeomorphic.
Other symmetric sets of singularities {#s.stable}
-------------------------------------

The set of singularities $(3\bE_6)$ is obtained by perturbing $\L$ in Section \[s.3e6+a1\] to a section tangent to $\B$ at the cusp and transversal to $\B$ otherwise. This procedure replaces  with $\bGb=\Gb$ or, alternatively, introduces a relation $\Gs_3=\Gs_1$ in . The resulting group is $\BG3/(\Gs_1\Gs_2)^3$. The sets of singularities of the form $(2\bE_6\splus2\bA_2)\splus{\ldots}$ are obtained by perturbing $\L$ in Section \[s.2e6+2a2+a3\]. If $\L$ is perturbed to a double tangent (the set of singularities $(2\bE_6\splus2\bA_2)\splus2\bA_1$), relation  is replaced with $[\Gb,\bGb]=1$. Then,  turns to $\Ga\bGb^2\Ga\Gb^2=1$, and  turns to $$\Gb\underline{\Ga\1\Gb\Ga}\Gb\1=\bGb\underline{\Ga\1\bGb\Ga}\bGb\1.$$ Replacing the underlined expressions using the braid relations  converts this relation to $\Gb^2\Ga\Gb\2=\bGb^2\Ga\bGb\2$, i.e., $[\Ga,\bGb^2\Gb\2]=1$. As explained in \[s.3e6+a1\], the map $\Gb\mapsto\Gs_1$, $\Ga\mapsto\Gs_2$, $\bGb\mapsto\Gs_3$ establishes an isomorphism $\pi_1(\Cp2\sminus B)=\BG4/\Gs_2\Gs_1^2\Gs_2\Gs_3^2$. Any other perturbation of $\L$ produces an extra point of transversal intersection with $\B$, replacing  with $\Gb=\bGb$. The resulting group is $\BG3/(\Gs_1\Gs_2)^3$. Finally, the sets of singularities $(2\bE_6\splus\bA_5)\splus\bA_1$ and $(2\bE_6\splus\bA_5)$ are obtained by perturbing the inflection tangency point of $\L$ and $\B$ in Section \[s.2e6+a5+a2.1\]. This procedure replaces  with $\bGb=\Gb$. Then, from the first relation in  one has $\bGa=\Ga$, relation  results in $\Gg=\bGb=\Gb$, and relation  turns to $(\Ga\Gb^2)^2=1$. Hence, the group is $\BG3/(\Gs_1\Gs_2)^3$. (Note that $(\Gs_1\Gs_2^2)^2=(\Gs_1\Gs_2)^3$ in $\BG3$.)
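The parenthetical identity $(\Gs_1\Gs_2^2)^2=(\Gs_1\Gs_2)^3$ follows by hand from the braid relation $\Gs_1\Gs_2\Gs_1=\Gs_2\Gs_1\Gs_2$; it can also be checked mechanically in the reduced Burau representation of $\BG3$. A minimal sketch with exact rational arithmetic (the parameter value $t=2$ is an arbitrary choice; since the two words are equal in $\BG3$, their images agree for every $t$):

```python
from fractions import Fraction

t = Fraction(2)  # arbitrary; equality in B_3 gives matrix equality for all t
# reduced Burau images of the standard generators sigma_1, sigma_2 of B_3
s1 = [[-t, 1], [0, 1]]
s2 = [[1, 0], [t, -t]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def word(*ms):
    """Multiply out a word in the generators, left to right."""
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = mul(out, m)
    return out

lhs = word(s1, s2, s2, s1, s2, s2)   # (s1 s2^2)^2
rhs = word(s1, s2, s1, s2, s1, s2)   # (s1 s2)^3, a central element of B_3
print(lhs == rhs)  # True: both equal t^3 times the identity matrix
```

The reduced Burau representation is faithful for $\BG3$, so for this particular group such a matrix check is conclusive rather than merely suggestive.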
Proof of Theorem \[th.proper\] {#proof.proper}
------------------------------

The fact that the perturbation epimorphisms $G_2',G_2''\onto\BG3/(\Gs_1\Gs_2)^3$ are proper is proved in Eyral, Oka [@EyralOka], where it is shown that the Alexander module of a sextic with the set of singularities $(2\bE_6\splus\bA_5)\splus\bA_2$ has a torsion summand $\Z_2\times\Z_2$, whereas the Alexander modules of all other groups listed in Theorem \[th.group\] can easily be shown to be $\Z[t]/(t^2-t+1)$. (In other words, the abelianization of the commutant of $G_2'$ or $G_2''$ is equal to $\Z_2\times\Z_2\times\Z\times\Z$, whereas for all other groups it equals $\Z\times\Z$.) The epimorphism $$\Gf_0\:G_0=\BG4/\Gs_2\Gs_1^2\Gs_2\Gs_3^2\onto\BG3/(\Gs_1\Gs_2)^3$$ is considered in Oka, Pho [@OkaPho]. One can observe that both braids $\Gs_2\Gs_1^2\Gs_2\Gs_3^2$ and $(\Gs_1\Gs_2)^3$ in the definition of the groups are pure, i.e., belong to the kernels of the respective canonical epimorphisms $\BG{n}\onto\BG{n}/\Gs_1^2=\SG{n}$. Furthermore, $\Gf_0$ takes each of the standard generators $\Gs_1$, $\Gs_2$, $\Gs_3$ of $\BG4$ to a conjugate of $\Gs_1$. Hence, the induced epimorphism $G_0/\Gf^{-1}(\Gs_1^2)=\SG4\onto\BG3/\Gs_1^2=\SG3$ is proper, and so is $\Gf_0$. A similar argument applies to the epimorphism $\Gf_3\:G_3\onto G_0$, which takes each generator $\Ga$, $\Gb$, $\bGb$ of $G_3$ to a conjugate of $\Gs_1\in G_0$. The induced epimorphism $$G_3/\Gf_3^{-1}(\Gs_1^2)=\SL(2,\Bbb F_3)\onto G_0/\Gs_1^2=\SG4=\PSL(2,\Bbb F_3)$$ is proper; hence, so is $\Gf_3$. (Alternatively, one can compare $G_3/\Gf_3^{-1}(\Gs_1^4)$ and $G_0/\Gs_1^4$, which are finite groups of order $3\cdot2^9$ and $3\cdot2^6$, respectively. The finite quotients of $G_3$ and $G_0$ were computed using [GAP]{} [@GAP].)

Perturbations\[S.perturbations\]
================================

Perturbing a singular point {#s.perturbations}
---------------------------

Consider a singular point $P$ of a plane curve $B$ and a Milnor ball $\MB$ around $P$.
Let $B'$ be a nontrivial (i.e., not equisingular) perturbation of $B$ such that, during the perturbation, the curve remains transversal to $\partial\MB$. \[pert.E6\] In the notation above, let $P$ be of type $\bE_6$. Then $B'\cap\MB$ has one of the following sets of singularities: $2\bA_2\splus\bA_1$: one has $\pi_1(\MB\sminus B')=\BG4$; $\bA_5$ or $2\bA_2$: one has $\pi_1(\MB\sminus B')=\BG3$; $\bD_5$, $\bD_4$, $\bA_4\splus\bA_1$, $\bA_4$, $\bA_3\splus\bA_1$, $\bA_3$, $\bA_2\splus k\bA_1$ ($k=0$, $1$, or $2$), or $k\bA_1$ ($k=0$, $1$, $2$, or $3$): one has $\pi_1(\MB\sminus B')=\Z$. The perturbations of a simple singularity are enumerated by the subgraphs of its Dynkin graph, see E. Brieskorn [@Brieskorn] or G. Tjurina [@Tjurina]. For the fundamental group, observe that the space $\MB\sminus B$ is diffeomorphic to $\Cp2\sminus(C\cup L)$, where $C\subset\Cp2$ is a plane quartic with a type $\bE_6$ singular point, and $L$ is a line with a single quadruple intersection point with $C$. Then, the perturbations of $B$ inside $\MB$ can be regarded as perturbations of $C$ keeping the point of quadruple intersection with $L$, see [@quintics], and the perturbed fundamental group $\pi_1(\Cp2\sminus(C'\cup L))\cong\pi_1(\MB\sminus B')$ is found in [@groups]. \[pert.A5\] In the notation above, let $P$ be of type $\bA_5$. Then $B'\cap\MB$ has one of the following sets of singularities: $2\bA_2$: one has $\pi_1(\MB\sminus B')=\BG3$; $\bA_3\splus\bA_1$ or $3\bA_1$: one has $\pi_1(\MB\sminus B')=\Z\times\Z$; $\bA_4$, $\bA_3$, $\bA_2\splus\bA_1$, $\bA_2$, or $k\bA_1$ ($k=0$, $1$, or $2$): one has $\pi_1(\MB\sminus B')=\Z$. \[pert.A2\] In the notation above, let $P$ be of type $\bA_2$. Then $B'\cap\MB$ has the set of singularities $\bA_1$ or $\varnothing$, and one has $\pi_1(\MB\sminus B')=\Z$.
Both statements are a well known property of type $\bA$ singular points: any perturbation of a type $\bA_p$ singular point has the set of singularities $\bigsplus\bA_{p_i}$ with $d=(p+1)-\sum(p_i+1)\ge0$, and the group $\pi_1(\MB\sminus B')$ is given by $\<\Ga,\Gb\,|\,\Gs^s\Ga=\Ga,\ \Gs^s\Gb=\Gb\>$, where $\Gs$ is the standard generator of the braid group $\BG2$ acting on $\<\Ga,\Gb\>$ and $s=1$ if $d>0$ or $s=\gcd(p_i+1)$ if $d=0$. \[E6.onto\] Let $B$ be a plane sextic of torus type with at least two type $\bE_6$ singularities, and let $\MB$ be a Milnor ball about a type $\bE_6$ singular point of $B$. Then the inclusion homomorphism $\pi_1(\MB\sminus B)\to\pi_1(\Cp2\sminus B)$ is onto. The proposition is an immediate consequence of Corollaries \[e6->>pi\], \[e6->>pi.a3\], \[e6->>pi.a5.1\], and \[e6->>pi.a5.2\]. \[E6.perturbed\] Let $B$ be a plane sextic of torus type with at least two type $\bE_6$ singular points, and let $B'$ be a perturbation of $B$. If at least one of the type $\bE_6$ singular points of $B$ is perturbed as in , then $\pi_1(\Cp2\sminus B')=\CG6$. If at least one of the type $\bE_6$ singular points of $B$ is perturbed as in  and $B'$ is still of torus type, then $\pi_1(\Cp2\sminus B')=\BG3/(\Gs_1\Gs_2)^3$. Let $\MB$ be a Milnor ball about the type $\bE_6$ singular point in question. Due to Proposition \[E6.onto\], the inclusion homomorphism $\pi_1(\MB\sminus B)\to\pi_1(\Cp2\sminus B)$ is onto. Hence, in case , there is an epimorphism $\Z\onto\pi_1(\Cp2\sminus B')$, and in case , there is an epimorphism $\BG3\onto\pi_1(\Cp2\sminus B')$. In the former case, the epimorphism above implies that the group is abelian, hence $\CG6$. In the latter case, the central element $(\Gs_1\Gs_2)^3\in\BG3$ projects to $6\in\Z=\BG3/[\BG3,\BG3]$; since the abelianization of $\pi_1(\Cp2\sminus B')$ is $\CG6$, the epimorphism above must factor through an epimorphism $G:=\BG3/(\Gs_1\Gs_2)^3\onto\pi_1(\Cp2\sminus B')$.
On the other hand, since $B'$ is assumed to be of torus type, there is an epimorphism $\pi_1(\Cp2\sminus B')\onto G$, and since $G\cong\PSL(2,\ZZ)$ is Hopfian (being obviously residually finite), each of the two epimorphisms is bijective. \[A5.perturbed\] Let $B$ be a plane sextic as in \[s.2e6+a5+a2.1\], and let $B'$ be a perturbation of $B$ such that the type $\bA_5$ singular point is perturbed as in  or . Then one has $\pi_1(\Cp2\sminus B')=\CG6$. Due to Proposition \[a5->>pi\] and Lemma \[pert.A5\], the group of the perturbed sextic $B'$ is abelian. Since $B'$ is irreducible, $\pi_1(\Cp2\sminus B')=\CG6$. \[A2.perturbed\] Let $B$ be a plane sextic as in \[s.2e6+2a2+a3\], and let $B'$ be a perturbation of $B$ such that an inner type $\bA_2$ singular point of $B$ is perturbed to $\bA_1$ or $\varnothing$. Then one has $\pi_1(\Cp2\sminus B')=\CG6$. Let $P$ be the inner type $\bA_2$ singular point perturbed, and let $\MB$ be a Milnor ball about $P$. In the notation of Section \[s.2e6+2a2+a3\], the group $\pi_1(\MB\sminus B)$ is generated by $\Ga$ and $\Gb$ (or $\bGa=\Ga$ and $\bGb$ for the other point), and the perturbation results in an extra relation $\Ga=\Gb$. Then  implies $\bGb=\Gb$, and the group is cyclic. \[tab.nontorus\] Sextics with abelian fundamental group

Abelian perturbations
---------------------

Theorem \[th.nontorus\] below lists the sets of singularities obtained by perturbing at least one inner singular point from a set listed in Table \[tab.list\], not covered by Nori’s theorem [@Nori], and not appearing in [@degt.8a2]. \[th.nontorus\] Let $\Sigma$ be a set of singularities obtained from one of those listed in Table \[tab.nontorus\] by several (possibly none) perturbations $\bA_2\to\bA_1,\varnothing$ or $\bA_1\to\varnothing$. Then $\Sigma$ is realized by an irreducible plane sextic, not of torus type, whose fundamental group is $\CG6$.
Altogether, perturbations as in Theorem \[th.nontorus\] produce $244$ sets of singularities not covered by Nori’s theorem; $117$ of them are new as compared to [@degt.8a2]. Each set of singularities in question is obtained by a perturbation from one of the sets of singularities listed in Table \[tab.list\]. Furthermore, the perturbation can be chosen so that at least one type $\bE_6$ singular point is perturbed as in , or the type $\bA_5$ singular point is perturbed as in , or at least one inner cusp is perturbed to $\bA_1$ or $\varnothing$. According to [@degt.8a2], any such (formal) perturbation is realized by a family of sextics, and due to Corollaries , \[A5.perturbed\], and \[A2.perturbed\], the perturbed sextic has an abelian fundamental group.

Non-abelian perturbations
-------------------------

In this section, we treat the few perturbations of torus type that can be obtained from Table \[tab.list\] and do not appear in [@degt.8a2]. \[th.torus\] Each of the eight sets of singularities listed in Table \[tab.torus\] is realized by an irreducible plane sextic of torus type whose fundamental group is $\BG3/(\Gs_1\Gs_2)^3$. \[tab.torus\] Sextics of torus type

Theorem \[th.torus\] covers two tame sextics: $(\bE_6\splus2\bA_5)$ and $(3\bA_5)$. The fundamental groups of these curves were first found in Oka, Pho [@OkaPho]. As in the previous section, we perturb one of the sets of singularities listed in Table \[tab.list\], this time making sure that each type $\bE_6$ singular point is perturbed as in  or  (or is not perturbed at all), each type $\bA_5$ singular point is perturbed as in  (or is not perturbed at all), none of the inner cusps is perturbed, and at least one type $\bE_6$ singular point is perturbed as in . (Note that, in the case under consideration, the inner cusps are those appearing from the cusp of $\B$.)
From the arithmetic description of curves of torus type given in [@degt.Oka] (see also [@JAG]) it follows that any perturbation satisfying – above preserves the torus structure; then, in view of , Corollary  implies that the resulting fundamental group is $\BG3/(\Gs_1\Gs_2)^3$. \[not.new\] If all type $\bE_6$ singular points are perturbed as in  (or not perturbed at all), the study of the fundamental group would require more work; in particular, one would need an explicit description of the homomorphism $\pi_1(\MB_{\bE_6}\sminus B)\onto\pi_1(\MB_{\bE_6}\sminus B')$. On the other hand, it is easy to show that such perturbations do not give anything new compared to [@degt.8a2]. (In fact, using [@JAG], one can even show that the deformation classes of the sextics obtained are the same; it suffices to prove the connectedness of the deformation families realizing the sets of singularities $(\bE_6\splus\bA_5\splus2\bA_2)\splus\bA_2\splus\bA_1$ and $(\bE_6\splus4\bA_2)\splus\bA_3\splus\bA_1$, which are maximal in the context.) For this reason, we do not consider these perturbations here.

\[Bri\] E. Brieskorn, *Singular elements of semi-simple algebraic groups*, Actes Congrès Intern. Math. (Nice, 1970), vol. 2, 279–284. \[Brieskorn\]

\[D1\] A. Degtyarev, *Isotopy classification of complex plane projective curves of degree $5$*, Algebra i Analiz 1 (1989), no. 4, 78–101 (Russian); English transl. in Leningrad Math. J. 1 (1990), no. 4, 881–904. \[quintics\]

\[D2\] A. Degtyarev, *Quintics in $\C\roman{p}^2$ with nonabelian fundamental group*, Algebra i Analiz 11 (1999), no. 5, 130–151 (Russian); English transl. in Leningrad Math. J. 11 (2000), no. 5, 809–826. \[groups\]

\[D3\] A. Degtyarev, *On deformations of singular plane sextics*, J. Algebraic Geom. 17 (2008), 101–135. \[JAG\]

\[D4\] A. Degtyarev, *Oka’s conjecture on irreducible plane sextics*, J. London Math. Soc.; arXiv:math.AG/0701671. \[degt.Oka\]

\[D5\] A. Degtyarev, *Zariski $k$-plets via dessins d’enfants*, Comment. Math. Helv.; arXiv:0710.0279. \[degt.kplets\]

\[D6\] A. Degtyarev, *On irreducible sextics with non-abelian fundamental group*, Fourth Franco-Japanese Symposium on Singularities (Toyama, 2007); arXiv:0711.3070. \[degt.Oka3\]

\[D7\] A. Degtyarev, *Irreducible plane sextics with large fundamental groups*, arXiv:0712.2290. \[degt.8a2\]

\[D8\] A. Degtyarev, *Stable symmetries of plane sextics*, arXiv:0802.2336. \[symmetric\]

\[EO\] C. Eyral, M. Oka, *Fundamental groups of the complements of certain plane non-tame torus sextics*, Topology Appl. 153 (2006), no. 11, 1705–1721. \[EyralOka\]

\[GAP\] The GAP Group, *GAP — Groups, Algorithms, and Programming*, Version 4.4.10, 2007 ([http://www.gap-system.org]{}). \[GAP\]

\[vK\] E. R. van Kampen, *On the fundamental group of an algebraic curve*, Amer. J. Math. 55 (1933), 255–260. \[vanKampen\]

\[No\] M. V. Nori, *Zariski conjecture and related problems*, Ann. Sci. Éc. Norm. Sup. (4) 16 (1983), 305–344. \[Nori\]

\[OP1\] M. Oka, D. T. Pho, *Classification of sextics of torus type*, Tokyo J. Math. 25 (2002), no. 2, 399–433. \[OkaPho.moduli\]

\[OP2\] M. Oka, D. T. Pho, *Fundamental group of sextics of torus type*, Trends in Singularities, Trends Math., Birkhäuser, Basel, 2002, 151–180. \[OkaPho\]

\[Oz\] A. Özgüner, *Classical Zariski pairs with nodes*, M.Sc. thesis, Bilkent University, 2007. \[Aysegul\]

\[Tju\] G. Tjurina, *Resolution of singularities of flat deformations of double rational points*, Funktsional. Anal. i Prilozhen. 4 (1970), no. 1, 77–83 (Russian); English transl. in Functional Anal. Appl. 4 (1970), no. 1, 68–73. \[Tjurina\]

\[T\] H. Tokunaga, *$(2,3)$-torus sextics and the Albanese images of $6$-fold cyclic multiple planes*, Kodai Math. J. 22 (1999), no. 2, 222–242. \[Tokunaga\]

\[Z\] O. Zariski, *On the problem of existence of algebraic functions of two variables possessing a given branch curve*, Amer. J. Math. 51 (1929), 305–328. \[Zariski\]

\[Ya\] J.-G. Yang, *Sextic curves with simple singularities*, Tohoku Math. J. (2) 48 (1996), no. 2, 203–227. \[Yang\]
--- author: - 'Akihisa Koga$^1$, Tetsuya Minakawa$^1$, Yuta Murakami$^1$, and Joji Nasu$^2$' bibliography: - './refs.bib' title: ' Spin transport in the Quantum Spin Liquid State in the $S=1$ Kitaev model: role of the fractionalized quasiparticles ' --- The Kitaev model has attracted much interest since the proposal of the quantum spin model by A. Kitaev [@Kitaev2006] and the suggestion of its implementation in real materials [@Jackeli2009]. The model, composed of direction-dependent Ising exchange interactions on a honeycomb lattice, is exactly solvable, and its ground state is a quantum spin liquid (QSL) with short-range spin correlations. In this model, quantum spins are fractionalized into localized and itinerant Majorana fermions due to the quantum many-body effect [@Kitaev2006; @frac1; @frac2; @frac3]. Signatures of these Majorana fermions have recently been observed as a half-quantized plateau in thermal quantum Hall experiments on the candidate material $\alpha$-$\rm RuCl_3$ [@Plumb; @Kubota; @Sears; @Majumder; @Kasahara]. Furthermore, it has been theoretically clarified that distinct energy scales ascribed to the fractionalization appear in thermodynamic properties, such as a double-peak structure in the specific heat [@Nasu1; @Nasu2], which stimulates further theoretical and experimental investigations of the spin fractionalization [@Chaloupka_2010; @Yamaji_2014; @Katukuri_2014; @Suzuki_2015; @Yamaji_2016; @Kato_2017]. Recently, the generalization of the Kitaev model to arbitrary spins [@Baskaran] has been studied [@Suzuki_2017; @S1Koga; @Oitmaa; @Minakawa1; @MixedKoga; @Stavropoulos; @Lee; @Dong; @Zhu; @Khait], where similar double peaks in the specific heat have been reported [@S1Koga]. Therefore, spin fractionalization is naively expected even in the spin-$S$ Kitaev model, although the model is no longer exactly solvable.
In our previous manuscript [@Minakawa2], we studied the real-time dynamics of the $S=1/2$ Kitaev model by means of the Majorana mean-field theory. It was found that, even in the Kitaev QSL with extremely short-ranged spin correlations, a spin excitation propagates in the bulk without spin polarization. This suggests that the spin transport is not caused by a change of the local magnetization but is mediated by the itinerant Majorana fermions. Therefore, real-time simulation of spin transport is a promising approach to examine the existence of itinerant quasiparticles in the spin-$S$ Kitaev model. In this paper, we investigate the real-time dynamics of the $S=1$ Kitaev model on the honeycomb lattice with two edges by means of the exact diagonalization method. We demonstrate that, after a pulsed magnetic field is applied to one of the edges, oscillations of the spin moments do not appear in the bulk but are induced in the other edge region under a small static magnetic field. These results are essentially the same as those for the $S=1/2$ Kitaev model discussed in our previous paper [@Minakawa2]. Therefore, our results support the existence of spin fractionalization in the $S=1$ Kitaev model. ![ (a) $S=1$ Kitaev model on the honeycomb lattice. Red, blue, and green lines represent the $x$, $y$, and $z$ bonds, respectively. Clusters used in the exact diagonalizations are indicated by the black lines. (b) Plaquette with sites labeled 1–6, used to define the local operator $W_p$.
[]{data-label="fig:model"}](cluster.pdf){width="\linewidth"} Let us consider the $S=1$ Kitaev model on the honeycomb lattice, described by the Hamiltonian $$\begin{aligned} \mathcal{H}&=\mathcal{H}_0+\mathcal{H}',\label{H}\\ \mathcal{H}_0&=-J\sum_{\langle i,j \rangle_x}S_i^xS_j^x -J\sum_{\langle i,j \rangle_y}S_i^yS_j^y -J\sum_{\langle i,j \rangle_z}S_i^zS_j^z,\\ \mathcal{H}'&=-\sum_ih_iS_i^z,\end{aligned}$$ where $S_i^\gamma$ is the $\gamma(=x,y,z)$ component of an $S=1$ spin operator at the $i$th site of the honeycomb lattice, $J$ is the ferromagnetic exchange between the nearest-neighbor spin pairs $\langle ij\rangle_\gamma$ on the $\gamma$-bond, and $h_i$ is the site-dependent magnetic field applied in the $z$-direction. The model is schematically shown in Fig. \[fig:model\](a). We note that the Hamiltonian Eq. (\[H\]) has a parity symmetry. This is readily seen by considering the conventional local basis sets $|m\rangle$ with $m=-1,0,1$, which are the eigenstates of $S^z$. The interactions $S_i^xS_j^x$ and $S_i^yS_j^y$ increment or decrement each of $m_i$ and $m_j$ by one, while $S_i^zS_j^z$ and $S_i^z$ do not change them. Therefore, the Hamiltonian Eq. (\[H\]) has a global parity symmetry for $S_{tot}^z(=\sum_i S_i^z)$; in other words, the operator $P_z=\exp\left[ i\pi S_{tot}^z\right]$ commutes with the Hamiltonian. This leads to the absence of magnetization in the $x$ and $y$ directions at every site, $\langle S_i^x\rangle=\langle S_i^y\rangle=0$, since $S_i^x$ and $S_i^y$ change the parity. Hence, magnetization can appear only in the $z$-direction. In the absence of a magnetic field ($h_i=0$), the ground-state and finite-temperature properties of the $S=1$ Kitaev model have been discussed previously [@Baskaran; @Suzuki_2017; @S1Koga; @Oitmaa; @MixedKoga; @Minakawa1; @Stavropoulos; @Lee; @Dong; @Zhu; @Khait]. In this case, the Kitaev model has a local $Z_2$ symmetry on each plaquette.
The corresponding operator [@Baskaran; @MixedKoga] is given as $$\begin{aligned} W_p=\exp\left[i\pi \left(S_1^x+S_2^y+S_3^z+S_4^x+S_5^y+S_6^z\right)\right],\label{eq:Wp}\end{aligned}$$ where the site indices $1,2, \cdots,6$ are introduced for a plaquette $p$, as shown in Fig. \[fig:model\](b). This operator satisfies $W_p^2=1$ and $[\mathcal{H}_0, W_p]=0$. Furthermore, $W_p$ commutes with $W_q$ on any plaquette $q$. Therefore, the Hilbert space of the Hamiltonian $\mathcal{H}_0$ can be classified into subspaces ${\cal S}[\{w_p\}]$ specified by the set of $w_p(=\pm 1)$, the eigenvalues of $W_p$. When a state belongs to a given subspace, $|\psi\rangle=|\psi;\{w_p\}\rangle$, the expectation value of the spin operator at the $i$th site vanishes, $\langle \psi|S_i^\gamma|\psi \rangle=0$. This can be proved whenever there exists a plaquette $p$ satisfying the anticommutation relation $\{S_i^\gamma, W_p\}=0$. In fact, $\langle \psi|\{S_i^\gamma, W_p\}|\psi\rangle= \langle \psi|S_i^\gamma W_p|\psi\rangle + \langle \psi|W_pS_i^\gamma|\psi\rangle =2w_p\langle \psi|S_i^\gamma|\psi\rangle=0$. Furthermore, examining $\langle \psi|\{S_i^\gamma, W_p\}S_j^\gamma|\psi\rangle$, one obtains $\langle \psi|S_i^\gamma S_j^\gamma|\psi\rangle=0$ except when the sites $i$ and $j$ are located on the same $\gamma$ bond. Therefore, the existence of the local conserved quantity guarantees that the ground state of the $S=1$ Kitaev model with $h=0$ is a quantum spin liquid state with extremely short-ranged spin-spin correlations [@Baskaran]. This discussion also applies to the original Hamiltonian $\mathcal{H}$ with a nonuniform magnetic field $h_i$. When no magnetic field is applied to the $i(=1, 2, 4, 5)$th sites of a plaquette $p$ in Fig. \[fig:model\](b), $[\mathcal{H}, W_p]=0$. Then, the Hilbert space is classified by the eigenvalues $w_p$ for $p\in{\cal P}$, where ${\cal P}$ stands for the set of plaquettes satisfying such commutation relations.
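The single-site mechanism behind these (anti)commutation arguments can be checked directly: for $S=1$, $e^{i\pi S^z}=\mathrm{diag}(-1,1,-1)$ and $e^{i\pi S^x}$ reduces to the constant matrix written below, and each anticommutes with the two other spin components. A small pure-Python check using the standard $S=1$ spin matrices in the $S^z$ basis (an illustration only, not the exact-diagonalization code used in the paper):

```python
s = 1 / 2 ** 0.5  # matrix elements of S=1 operators in the S^z basis
Sx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Sy = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Sz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
Uz = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]   # exp(i*pi*Sz) for S = 1
Ux = [[0, 0, -1], [0, -1, 0], [-1, 0, 0]]  # exp(i*pi*Sx) for S = 1

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def anti(a, b):
    """Anticommutator {a, b} = ab + ba."""
    ab, ba = mul(a, b), mul(b, a)
    return [[ab[i][j] + ba[i][j] for j in range(3)] for i in range(3)]

def is_zero(m, tol=1e-12):
    return all(abs(m[i][j]) < tol for i in range(3) for j in range(3))

# exp(i*pi*Sz) anticommutes with S^x and S^y, so <S^x> = <S^y> = 0 in a
# parity eigenstate; exp(i*pi*Sx), a single-site factor of W_p, likewise
# anticommutes with S^y and S^z.
print(is_zero(anti(Uz, Sx)), is_zero(anti(Uz, Sy)))  # True True
print(is_zero(anti(Ux, Sy)), is_zero(anti(Ux, Sz)))  # True True
```

Since $W_p$ factorizes into such single-site exponentials, these relations are exactly what produces $\{S_i^\gamma, W_p\}=0$ whenever the factor at site $i$ involves a different spin component.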
Then, we obtain $\langle S_i^z\rangle=0$ at the $i(=1, 2, 4, 5)$th sites of the plaquette $p$ in ${\cal P}$ since $\{S_i^z, W_p\}=0$. On the other hand, as for the plaquettes not belonging to ${\cal P}$, the corresponding operators do not commute with the Hamiltonian due to the presence of the magnetic field, and one cannot prove the absence of the spin moments at sites 1, 2, 4, and 5. If the number of plaquettes not belonging to ${\cal P}$ is nonzero, the correlation functions between the corresponding spins can be finite in general. This results from the lack of the local $Z_2$ symmetry in the Kitaev model, and it is therefore highly nontrivial whether or not correlations indeed exist between such spins, in particular when they are separated by a quantum spin liquid region with extremely short-ranged spin-spin correlations. In this paper, to discuss spin-spin correlations in the Kitaev model, we examine spin transport in a system where the QSL region lies between the regions under the magnetic field \[see Fig. \[junc\](a)\]. Before showing the results, we briefly examine how a uniform magnetic field $h(=h_i)$ affects the $S=1$ Kitaev model in the bulk. By making use of exact diagonalization for several clusters with periodic boundary conditions (see Fig. \[fig:model\]), we calculate the magnetization $m_i^z(=\langle S_i^z\rangle)$, as shown in Fig. \[magpro\]. ![ Magnetization process in the $S=1$ Kitaev systems with $N=12, 18a, 18b, 20a$, and $20b$. The vertical dashed line represents the jump singularity of $m^z$ in the $18a$ cluster. []{data-label="magpro"}](magpro.pdf){width="\linewidth"} When $h=0$, the QSL ground state is realized with $m^z=0$. Switching on the magnetic field, a magnetic moment is immediately induced, as shown in Fig. \[magpro\]. Around $h_c/J\sim 0.02$, the magnetization rapidly increases. In this region, a large system-size dependence is observed.
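As a minimal illustration of how such magnetization curves are obtained by exact diagonalization, the sketch below (a toy single $x$-bond assuming NumPy, not the honeycomb clusters of Fig. \[magpro\]) computes the ground-state moment in a uniform field:

```python
import numpy as np

Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)
Sx = (Sp + Sp.T) / 2
I3 = np.eye(3)

def magnetization(h, J=1.0):
    """Ground-state <S_1^z> for a single x-bond in a uniform z field."""
    H = -J * np.kron(Sx, Sx) - h * (np.kron(Sz, I3) + np.kron(I3, Sz))
    w, V = np.linalg.eigh(H)
    g = V[:, 0]                        # lowest eigenvector
    return float(g @ (np.kron(Sz, I3) @ g))

assert abs(magnetization(0.0)) < 1e-8   # no moment at h = 0
assert magnetization(0.05) > 0          # a field immediately induces a moment
```

Even this toy bond reproduces the qualitative feature of Fig. \[magpro\] at small fields: zero moment at $h=0$ and a moment growing continuously once the field is switched on.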
This appears to be consistent with the reported $h_c\sim 0.01J$ for a magnetic field in the (111) direction [@Lee; @Zhu; @Khait]. On the other hand, we have confirmed that the ground state always belongs to the subspace with even parity, including in the $N=18a$ cluster with a jump singularity in the magnetization process. Therefore, it is still unclear whether or not a phase transition to the polarized state occurs in the thermodynamic limit. We also note that these results for the $S=1$ system are similar to those for the $S=1/2$ Kitaev model, where a phase transition to the polarized state occurs at $h/J\sim 0.042$ within the mean-field theory [@NasuMF]. Therefore, we believe that there exists an energy scale characteristic of the spin excitations in the $S=1$ Kitaev model. In the following, we deal with the system under a tiny magnetic field $(<h_c)$ to discuss the existence of spin fractionalization in the $S=1$ Kitaev model. To study spin transport in the $S=1$ Kitaev model, we consider the site-dependent magnetic field defined as $$h_i=\left\{ \begin{array}{ll} h_L(t)& i\in \textrm{L}\\ 0 & i\in \textrm{B}\\ h_R&i\in \textrm{R} \end{array} \right.,$$ where $h_L$ ($h_R$) is the time-dependent (static) magnetic field applied to the left (L) \[right (R)\] region. In the bulk (B) region, no magnetic field is applied, and the QSL state is always realized without magnetization. The system is schematically shown in Fig. \[junc\](a). ![ (a) Schematic picture of the Kitaev system on the honeycomb lattice with two edges. The lattice is composed of three regions. The static magnetic field $h_R$ is applied in the right (R) region, where the magnetization appears. In the bulk (B) region, no magnetic field is applied and the QSL state is realized without the magnetization. A time-dependent pulsed magnetic field is introduced in the left (L) region. (b) 20-site cluster used in the exact diagonalization.
The numbers represent the index of the lattice site. []{data-label="junc"}](junc.pdf){width="\linewidth"} We outline our real-time simulations by means of exact diagonalization. The initial ground state $|\psi\rangle$ at $t\rightarrow -\infty$ is obtained by means of the Lanczos and inverse iteration methods. The time evolution of the wave function is calculated from the time-dependent Schrödinger equation $$\begin{aligned} i\frac{d}{dt}|\psi(t)\rangle=\mathcal{H}(t)|\psi(t)\rangle.\end{aligned}$$ Then, we compute the magnetization and the nearest-neighbor spin-spin correlation on the $\gamma$-bond, which are given as $$\begin{aligned} m_i^z(t)&=&\langle \psi(t)|S_i^z|\psi(t)\rangle,\\ C_{ij}(t)&=&\langle \psi(t)|S_i^\gamma S_j^\gamma|\psi(t)\rangle.\end{aligned}$$ In this study, we introduce the pulsed magnetic field in the L region, which is explicitly given as $$\begin{aligned} h_L(t)=\frac{A}{\sqrt{2\pi\sigma^2}}\exp\left[ -\frac{t^2}{2\sigma^2}\right],\end{aligned}$$ where $A$ and $\sigma$ are the magnitude and width of the Gaussian pulse. In the following, we fix $A=1$ and $\sigma=2/J$. ![ Real-time evolution of the change in the local magnetization in the system with $h_R/J=0.01$ after the introduction of the pulsed magnetic field with $A=1$ and $\sigma=2/J$ shown as the dashed line (offset for clarity). Dotted lines represent the results for the pulse with $A=-1$ and $\sigma=2/J$. []{data-label="mag_t"}](mag_t.pdf){width="\linewidth"} In this calculation, we examine the 20-site cluster with two edges, where the periodic boundary condition is imposed in the direction perpendicular to the edge, as shown in Fig. \[junc\](b). In this cluster, the R and L regions each include four sites. Twelve sites lie in the B region, where no magnetic field is applied.
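The time integration can be sketched on a toy two-site bond (illustrative only, assuming NumPy; the 20-site cluster and Lanczos initialization of the paper are not reproduced here). A fourth-order Runge-Kutta step propagates $i\,d|\psi(t)\rangle/dt=\mathcal{H}(t)|\psi(t)\rangle$ through the Gaussian pulse:

```python
import numpy as np

Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)
Sx = (Sp + Sp.T) / 2
I3 = np.eye(3)

J, A, sigma = 1.0, 1.0, 2.0
H0 = -J * np.kron(Sx, Sx)                 # a single x-bond as a toy system
Sz1 = np.kron(Sz, I3)                     # S^z on site 1 (the pulsed site)

def h_L(t):
    """Gaussian pulse of the text, with magnitude A and width sigma."""
    return A / np.sqrt(2 * np.pi * sigma**2) * np.exp(-t**2 / (2 * sigma**2))

def H(t):
    return H0 - h_L(t) * Sz1

# initial state: ground state of H0 at t -> -infinity
w, V = np.linalg.eigh(H0)
psi = V[:, 0].astype(complex)

t, dt, mz = -10.0, 0.01, []
while t < 10.0:
    # one RK4 step of i dpsi/dt = H(t) psi
    f = lambda s, p: -1j * (H(s) @ p)
    k1 = f(t, psi)
    k2 = f(t + dt / 2, psi + dt / 2 * k1)
    k3 = f(t + dt / 2, psi + dt / 2 * k2)
    k4 = f(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    mz.append(float(np.real(psi.conj() @ (Sz1 @ psi))))   # m_1^z(t)

assert abs(np.linalg.norm(psi) - 1.0) < 1e-6   # RK4 preserves the norm well
```

The same loop, with $\mathcal{H}(t)$ replaced by the full cluster Hamiltonian and $m_i^z(t)$, $C_{ij}(t)$ measured at every step, is the structure of the simulations reported below.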
Although the cluster we treat is small, the spin transport characteristic of the Kitaev system is expected to be captured, since the B region contains four plaquettes with the local $Z_2$ symmetry, which is crucial for the peculiar spin transport. In fact, we have confirmed that, in the initial state $(t\rightarrow -\infty)$, the site-dependent magnetization $m_i^z$ appears only in the R region $(m_9^z=m_{19}^z=0.217$ and $m_{10}^z=m_{20}^z=0.220$). Figure \[mag\_t\] shows the time dependence of the change in the spin moment $\Delta m^z_i[=m_i^z(t)-m_i^z(-\infty)]$ after the pulsed magnetic field is introduced. The magnetic moments at sites 1 and 2 in the L region are induced by the pulsed magnetic field. On the other hand, no magnetic moment appears in the B region, which is consistent with the existence of the local conserved quantity. Meanwhile, after some delay, spin oscillations are induced at sites 9 and 10 in the R region. This means that the wave packet triggered by the magnetic pulse in the L region reaches the R region through the B region without spin oscillations. Since $m_i^x(t)=m_i^y(t)=0$ at all sites, we can say that the spin moment plays no role in the spin transport in the $S=1$ Kitaev model. This remarkable phenomenon is similar to that in the $S=1/2$ Kitaev model [@Minakawa2], where the spin transport is mediated by the itinerant Majorana fermions. Therefore, our results suggest the existence of itinerant quasiparticles not associated with the spin polarization even in the $S=1$ Kitaev model. Namely, we expect that in the $S=1$ Kitaev model without a static magnetic field, the spin degree of freedom is fractionalized into two kinds of quasiparticles, itinerant and localized, owing to the existence of the local $Z_2$ symmetry. The pulsed magnetic field in the L region creates both itinerant and localized quasiparticle excitations, while only the former propagate through the whole system.
In the R region, the itinerant and localized quasiparticles are hybridized by the static magnetic field due to the lack of the local $Z_2$ symmetry, leading to finite spin oscillations. The scenario for the spin transport is schematically shown in Fig. \[junc\](a). Spin fractionalization in the $S=1$ Kitaev model has been suggested by thermodynamic properties such as a double-peak structure in the specific heat [@S1Koga]. We note that the higher-temperature peak is closely related to the nearest-neighbor spin-spin correlations $C_{ij}$. It is known that the higher-temperature peak in the $S=1/2$ case corresponds to the motion of the itinerant Majorana fermions. Therefore, one can expect that the flow of $C_{ij}$ can be regarded as the motion of the itinerant quasiparticles in the $S=1$ Kitaev model. ![ Real-time evolution of the change in the nearest-neighbor correlations on the (a) $x$-bond, (b) $y$-bond, and (c) $z$-bond in the B and R regions of the $S=1$ Kitaev system with $h_R/J=0.01$, $A=1$, and $\sigma=2/J$. Pairs of numbers indicate the two sites coupled by the Kitaev exchanges in the 20-site cluster shown in Fig. \[junc\]. []{data-label="fig:corr"}](corr.pdf){width="\linewidth"} Figures \[fig:corr\](a), \[fig:corr\](b) and \[fig:corr\](c) show the real-time evolution of the change in the nearest-neighbor spin correlations on the $x$-, $y$-, and $z$-bonds outside the L region. Oscillations appear in the spin-spin correlations on all exchanges, even in the B region. We also find that the change in these quantities becomes smaller with increasing distance from the L region. Correspondingly, the characteristic time at which $|\Delta C_{ij}|$ takes its first maximal value (shown as arrows in Fig. \[fig:corr\]) becomes longer. This implies that the wave packet created by the pulsed magnetic field in the L region indeed flows to the R region through the B region.
The second maximal values are considered to be caused by the reflection of the flow at the right edge, since the peaks shift to the left side as time elapses. We also note an interesting dependence of the phenomena on the pulsed field, which originates from the $Z_2$ symmetry [@Minakawa2]. The system has the local $Z_2$ symmetry in the L region before the pulsed magnetic field is applied. In this case, each eigenstate is specified by a set of the eigenvalues of $W_p$ in the L region. The Hamiltonian for the magnetic pulse has off-diagonal elements between distinct subspaces. An important point is that the operator $S_i^z$, in general, flips two eigenvalues of $W_p$ for the adjacent plaquettes sharing the $z$-bond that connects the $i$th site and its pair site. Therefore, if the ground state belongs to the subspace with the configuration $\{w_p\}$, only even-order perturbations in the pulsed magnetic field contribute to the expectation value of an operator $O$ satisfying $[O, W_p]=0$ with $p\in L$. This means that this expectation value is independent of the sign of the pulsed field. To confirm this, we calculate the time-dependent spin moments after applying the pulsed magnetic field in the $-z$ direction ($A=-1$). The results are shown as the dotted lines in Fig. \[mag\_t\]. In the L region, the magnetic moments are induced in the direction of the applied field. By contrast, in the B and R regions, the results do not depend on the sign of the pulsed magnetic field. Finally, we briefly comment on the nature of the low-lying excitations in the $S=1$ Kitaev system [@S1Koga; @Lee; @Dong; @Zhu; @Khait]. The real-time simulation, in principle, allows us to clarify whether the system is gapped or gapless by examining the velocity and decay rate of the wave packet created by a pulsed field with small $A$ and/or large $\sigma$. However, in small-size numerical calculations, it is hard to evaluate these quantities due to interference effects from multiple reflections.
Therefore, further numerical calculations for larger systems are necessary to clarify the elementary excitations in the $S=1$ Kitaev model, which is now under consideration. In summary, we have studied the real-time dynamics of the $S=1$ Kitaev model on the honeycomb lattice. When a pulsed magnetic field is applied to one of the edges of the system, spin oscillations never appear in the bulk, while they do appear at the other edge. Similar behavior appears in the $S=1/2$ Kitaev model, where fractionalized Majorana fermions flow through the system. Therefore, our results suggest the existence of spin fractionalization in the $S=1$ Kitaev model, with the spin transport mediated by the fractionalized itinerant quasiparticles. This behavior should be common to the spin-$S$ Kitaev models, which is consistent with thermodynamic properties such as the double-peak structure in the specific heat and the half-plateau in the entropy [@S1Koga]. It is also important to clarify the spin transport in the spin-$S$ Kitaev models, for which finite-temperature calculations suggest that the entropy of the quantum spins is split in half between the itinerant and localized quasiparticles. These interesting problems remain as future issues. Parts of the numerical calculations were performed on the supercomputing systems at ISSP, the University of Tokyo. This work was supported by Grant-in-Aid for Scientific Research from JSPS, KAKENHI Grant Nos. JP19K23425 (Y.M.), JP19H05821, JP18K04678, JP17K05536 (A.K.), JP16H02206, JP18H04223, JP19K03742 (J.N.), by JST CREST (JPMJCR1901) (Y.M.), and by JST PRESTO (JPMJPR19L5) (J.N.).
--- abstract: 'This paper studies an optimal ON-OFF control problem for a class of discrete event systems with real-time constraints. Our goal is to minimize the overall costs, including the operating cost and the wake-up cost, while still guaranteeing the deadline of each individual task. In particular, we consider the homogeneous case in which it takes the same amount of time to serve each task and each task needs to be served by $d$ seconds upon arrival. The problem involves two subproblems: *(i)* finding the best time to wake up the system and *(ii)* finding the best time to let the system go to sleep. We study the two subproblems in both off-line and on-line settings. In the off-line case, in which all task information is known a priori, we combine sample path analysis and dynamic programming to come up with the optimal solution. In the on-line scenario, where future task information is completely unknown, we show that the optimal time to wake up the system can be obtained without relying on future task arrivals. We also perform competitive analysis for on-line control and derive the competitive ratios for both deterministic and random controllers.' author: - Lei Miao - Lijian Xu - Dingde Jiang date: 'Received: date / Accepted: date' title: 'Optimal On-Off Control for a Class of Discrete Event Systems with Real-Time Constraints[^1] ' --- Introduction ============ There exist a large number of Discrete Event Systems (DESs) that involve allocation of resources to satisfy real-time constraints. One commonality of these DESs is that certain tasks must be completed by their deadlines in order to guarantee Quality-of-Service (QoS).
Examples arise in wireless networks and computing systems, where communication and computing tasks must be transmitted/processed before the information they contain becomes obsolete [@MiaoMaoCGCTONS] [@Liu00], and in manufacturing systems, where manufacturing tasks must be completed before the specified time in the production schedule [@PepCas00]. Another commonality of these DESs is that they all require the minimization of cost (e.g., energy). An interesting question then arises naturally: *how can we allocate resources to such DESs so that the cost is minimized and the real-time constraints are also satisfied?* To answer this question, one often has to study the trade-off between minimizing the cost and satisfying the real-time constraints: processing the tasks at a higher speed makes it easier to satisfy the real-time constraints but harder to reduce the cost; conversely, processing the tasks at a lower speed makes it harder to satisfy the real-time constraints but easier to reduce the cost. This trade-off is often referred to as the energy-latency trade-off and has been widely studied in the literature [@MiaoMaoCGCTONS] [@GamNaPraUyZaInf02] [@ZaferTON09]. In this paper, our objective is to utilize the energy-latency trade-off to minimize the cost while guaranteeing the real-time constraint for each task. Different from most existing papers that assume the system’s service rate (the control variable) is a continuous function of time, we assume that the DES only operates at one of the two states: ON and OFF. One motivating example of such DES is wireless sensor networks, in which operation simplicity must be maintained. For example, the radio of a ZigBee wireless device can be either completely off or transmitting at a fixed-rate, e.g., 250kb/s in the 2.4GHz band. Another difference between this paper and others is that we assume that a wake-up cost is incurred whenever the system transits from the OFF state to the ON state. 
In this paper, we solve both off-line and on-line optimal ON-OFF control problems. Our main contributions are two-fold: *(i)* We combine *sample path analysis* and *Dynamic Programming* (DP) to obtain the optimal off-line solution and *(ii)* We perform competitive analysis and derive the competitive ratios of both deterministic and random on-line controllers. Some results of this paper were previously presented in two conference papers: [@MiaoXuWTS2015] and [@Miao2017ACC_ON_OFF_Control], which primarily focus on off-line control. One new contribution of this paper is the competitive analysis for on-line control. Another new contribution is that we introduce an idling cost in the system model. We point out that the addition of this idling cost makes our problem formulation more realistic because such a cost often exists in real-world applications. For example, energy is consumed when a motor is spinning without any load attached and when a sensor is turned on but not actively processing information. In this journal version, we improve some proofs to incorporate the idling cost; we also move all the proofs and tables to the appendix in order to enhance the continuity of the analysis in the paper. The organization of the rest of the paper is as follows: in Section \[Sec\_related\_work\], we discuss related work; we introduce the system model and formulate our optimization problem in Section \[Sec\_model\]; the off-line and on-line results are presented in Sections \[Sec\_offline\] and \[Sec\_online\], respectively; finally, we conclude in Section \[Sec\_conclusions\]. Related Work {#Sec_related_work} ============ There are two lines of work that are closely related to this paper. One is transmission scheduling for wireless networks, in which the transmission rate of a wireless device is adjusted so as to minimize the transmission cost and satisfy real-time constraints.
This line of work was initially studied in [@TON-Energy-Efficient_Wireless], with follow-up work in [@GamNaPraUyZaInf02], where a homogeneous case is considered, assuming all packets have the same deadline and number of bits. By identifying some properties of this convex optimization problem, Gamal et al. propose the “MoveRight" algorithm in [@GamNaPraUyZaInf02] to solve it iteratively. However, the rate of convergence of the MoveRight algorithm is only obtainable for a special case of the problem when all packets have identical energy functions; in general the MoveRight algorithm may converge slowly. Zafer et al. [@Zafer09TAC] study an optimal rate control problem over a time-varying wireless channel, in which the channel state is modeled as a Markov process. In particular, they consider the scenario in which $B$ units of data must be transmitted by a common deadline $T,$ and they obtain an optimal rate-control policy that minimizes the total energy expenditure subject to short-term average power constraints. In [@Zafer07ITA] and [@Zafer08TIT], the case of identical arrival times and individual deadlines is studied by Zafer et al. In [@NeedlyInfocom07], the case of identical packet size and identical delay constraint is studied by Neely et al. They extend the result to the case of individual packet size and identical delay constraint in [@NeedlyWirelessNetworks09]. In [@ZaferTON09], Zafer et al. use a graphical approach to analyze the case in which each packet has its own arrival time and deadline. However, there are certain restrictions in their setting; for example, packets that arrive later must have later deadlines. Wang and Li [@WangToWC2013] analyze scheduling problems for bursty packets with strict deadlines over a single time-varying wireless channel. Assuming slotted transmission and changeable packet transmission order, they are able to exploit structural properties of the problem to come up with an algorithm that solves the off-line problem.
In [@PoulakisTOVT2013], Poulakis et al. also study energy-efficient scheduling problems for a single time-varying wireless channel. They consider a finite-horizon problem where each packet must be transmitted before $D_{\max }.$ Optimal stopping theory is used to find the optimal transmission start time within $[0,$ $D_{\max }]$ so as to minimize the expected energy consumption and the average energy consumption per unit of time. Zhong and Xu [@ZhongXuInfocom2008] formulate optimization problems that minimize the energy consumption of a set of tasks with task-dependent energy functions and packet lengths. In their problem formulation, the energy functions include both transmission energy and circuit power consumption. To obtain the optimal solution for the off-line case with backlogged tasks only, they develop an iterative algorithm, RADB, whose complexity is $O(n^{2})$ ($n$ is the number of tasks). The authors show via simulation that the RADB algorithm achieves good performance when used in on-line scheduling. [@MiaoMaoCGCTONS] studies a transmission control problem with task-dependent cost functions and arbitrary task arrival times, deadlines, and numbers of bits. The authors propose a GCTDA algorithm that solves the off-line problem efficiently by identifying certain critical tasks. The GCTDA algorithm is an extension of the CTDA algorithm [@MaoCTDA] designed by Mao and Cassandras for dynamic voltage scaling related applications. They extend the CTDA algorithm to multilayer scenarios in [@mao2014optimal]. Our model is different from all the above works in that the system operates in one of the discrete modes and a wake-up cost is incurred at each time instant that the system transitions from the OFF to the ON state. The other line of research studies ON-OFF scheduling in Wireless Sensor Networks (WSNs).
Solutions in the Medium Access Control (MAC) layer, such as the S-MAC protocol [@YeEstrinToN04], have been developed to coordinate neighboring sensors’ ON-OFF schedules in order to reduce both energy consumption and packet delay. These approaches do not provide a specific end-to-end latency guarantee. In [@WeiPaschalidis], routing problems are considered in WSNs where each sensor switches between ON and OFF states. The authors formulate an optimization problem to pick the best path that minimizes the weighted sum of the expected energy cost and the exponent of the latency probability. In another work [@NingCassSensorSleeping1], Ning and Cassandras formulate a dynamic sleep control problem in order to reduce the energy consumed in listening to an idle channel. The idea is to sample the channel more frequently when it is likely to have traffic and less frequently when it is not. The authors extend their work in [@NingCassSensorSleeping2] by formulating an optimization problem with the goal of minimizing the expected total energy consumption at the transmitter and the receiver. Dynamic programming is used to come up with an optimal policy that is shown to be more effective in cost saving than a fixed sleep time. [@CohenKapToN09] studies ON-OFF scheduling in wireless mesh networks. Assuming a fixed routing tree topology for task transmission, that each child in the tree knows exactly when its parents will wake up, and that traffic is generated only by the leaves of the tree, the authors formulate and solve an optimization problem that minimizes the total transmission energy cost while satisfying the latency and maximum energy constraints on each individual node. The major difference between this paper and the existing ones in this line of research is that we study a system with a real-time constraint for each individual task. To the best of our knowledge, ON-OFF scheduling with a real-time constraint for each individual task has not been studied extensively.
It is worth noting that there also exist papers related to the service rate control problem, in which the optimal service rate policy of either single-server or multi-server queueing systems is derived in order to minimize an average cost. A recent representative work along this line can be found in [@xia2017service], where Xia et al. study the service rate problem for tandem queues with power constraints. They formulate the model as a Markov decision process with constrained action space and use sensitivity-based optimization techniques to derive the conditions of optimal service rates, the optimality of the vertices of the feasible domain for linear and concave operating cost, and an iterative algorithm that provides the optimal solution. Our problem formulation is different from these works in two aspects: *(i)* We consider tasks with real-time constraints and *(ii)* We include a system wake-up cost on top of the service cost. System Model and Problem Formulation {#Sec_model} ==================================== We consider a finite-horizon scenario in which a DES processes $N$ tasks with real-time constraints. In particular, task $i$, $i=1,\ldots ,N,$ has arrival time $a_{i}$ (generally random), deadline $d_{i}=a_{i}+d$, and requires $B$ operations. Both $d$ and $B$ are constants. In the *off-line* setting, we assume that the task arrival time $a_{i}$ is known to the controller a priori. The DES can only operate in one of two modes: ON and OFF. When it is in the OFF mode, there is no operating cost. When it is in the ON or active mode, the system can either be *busy* or *idling*. When the system is busy, it processes the tasks at a constant rate $R$ with a fixed operating cost $C_{B}$ per unit time. When the system is idling, no tasks are waiting to be served, and the system cost is $C_{I}$ ($C_{I}\le C_{B}$) per unit time.
Furthermore, we assume that whenever a transition from the OFF mode to the ON mode occurs, a fixed wake-up cost $C_{W}$ is incurred; examples of such costs include the large amount of current (known as inrush current) required when a motor is turned on, the energy needed to initialize electric circuits when the RF radio is turned on in a wireless device, and so on. Note that the wake-up cost may also include a system wearout cost if the system can only be turned on a certain number of times during its lifetime. In our previous work in [@MiaoXuWTS2015] and [@Miao2017ACC_ON_OFF_Control], $C_I=C_B$. As we will show later, when $C_I$ is different from $C_B$, the analysis is not significantly harder, and the off-line optimal solution can still be obtained by DP. Our system model above is quite generic and is applicable to a wide range of engineering applications; for example, one can use ultra-low power wake-up receivers [@pletcher200952] to conserve energy in WSNs. Next, we formulate the off-line optimization problem. As we mentioned earlier, the task information is known to the controller a priori in the off-line setting. Our objective is to find the optimal ON and OFF time periods so as to *(i)* finish all the tasks by their deadlines and *(ii)* minimize the cost. Suppose the system is woken up at $t_{1},$ put to sleep at $t_{2}$ $(t_{1}<t_{2}),$ and kept active from $t_{1}$ to $t_{2}.$ Then, we call the time interval $[t_{1},t_{2}]$ an **Active Period** **(AP)**. In any AP, the periods during which the system is actively serving tasks are known as **Busy Periods** **(BPs)**. The rest of the time periods in that $AP$ are known as **Idle Periods** **(IPs)**. Let $r(t)$ be the rate at which the system is capable of serving tasks at time $t$. It is piecewise constant, and at any given time $t$ it can only be either $0$ (when the system is OFF) or $R$ (when the system is ON). See Fig.
\[offline\_illustration\] for an illustration of what $r(t)$ looks like and how the APs are formed. Note that $r(t)$ is not the actual service rate, since the system is only serving tasks during the **BPs**, not the **IPs**. ![Off-line control illustration.[]{data-label="offline_illustration"}](offline_illustration.pdf){height="1.6in" width="2.8253in"} We now introduce the control variables. Our first control variable is $\alpha ,$ the number of APs. The second control variable is an $\alpha \times 2$ array **t** that contains $2\alpha $ time instants. These time instants satisfy $$t_{i,1}<t_{i,2}<t_{j,1}<t_{j,2},\text{ }\forall i,j\in \{1,\ldots ,\alpha \},\text{ }i<j$$ and define $\alpha $ APs. See Fig. \[offline\_illustration\] for an illustration. The off-line problem $Q(1,N)$ can then be formulated as: $$\begin{gathered} \min_{\alpha ,\mathbf{t}}\text{ }J=\alpha C_{W}+\sum_{i=1}^{\alpha }[C_{I}(t_{i,2}-t_{i,1}-\tau_{i,B})+C_{B}\tau_{i,B}] \\ \text{s.t. }\int_{\max (a_{j},x_{j-1})}^{x_{j}}r(t)dt=B, \\ x_{j}\leq d_{j},\text{ }x_{0}=0,\text{ }j=1,\ldots ,N \\ r(t)=R\sum_{i=1}^{\alpha }[u(t-t_{i,1})-u(t-t_{i,2})]\end{gathered}$$ where $x_{j}$ is the departure time of task $j$, $u(t)$ is the unit step function, and $\tau_{i,B}$ is the length of the busy periods in the *i-th* AP. The first constraint ensures that exactly $B$ operations are executed for each task. The second one is the real-time constraint. The third one makes sure that the processing rate is $R$ only during each AP. Note that $\tau_{i,B}$ depends on the number of tasks served in $AP_{i}$.
To represent $\tau_{i,B}$, we use $N_{i}^{S}$ and $N_{i}^{E}$ to denote the first (starting) task and the last (ending) task in $AP_{i}$, respectively: $$\begin{gathered} N_{i}^{E}=\underset{j\in \{1,\ldots ,N\}}{\arg \max }(d_{j}\le t_{i,2}) \\ N_{i}^{S}=\underset{j\in \{1,\ldots ,N\}}{\arg \min }(a_{j}\ge t_{i,1}) \\ \tau_{i,B}=\max((N_{i}^{E}-N_{i}^{S}+1)\frac{B}{R},0)\end{gathered}$$ Notice that $Q(1,N)$ above may not always be feasible. Consider the case in which $N$ tasks arrive at the same time and need to be served in $d$ seconds. In order to meet the deadlines of all the tasks, we must have $R\geq \frac{NB}{d}$. Since $R$ is a constant, this condition obviously does not hold when $N$ is large. In this paper, we only consider the case in which $Q(1,N)$ is indeed feasible, and we have the following assumption on the task arrival rate. \[feasibility\_assumption\]Within any time interval of $d$ seconds, the number of task arrivals must not exceed $\lfloor \frac{d}{\beta }\rfloor ,$ where $\beta =B/R$ is the time it takes to process a single task. We emphasize that $d$ in Assumption \[feasibility\_assumption\] is the deadline of each task upon arrival. To make the problem more interesting, we also assume that $\lfloor \frac{d}{\beta }\rfloor >1.$ \[Lemma\_feasibility\]Under Assumption \[feasibility\_assumption\], $Q(1,N)$ is always feasible. $Q(1,N)$ is a hard optimization problem, due to the nondifferentiable terms in the constraints and the objective function. It cannot be easily solved by standard optimization software. In what follows, we will first discuss optimal off-line control, using which we will then establish the results for on-line control. Off-line Control {#Sec_offline} ================ In this section, we focus on the off-line control problem, in which all task arrivals are known to us a priori. We need to find out when the system should wake up and start to serve the first task in an AP.
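Before doing so, note that Assumption \[feasibility\_assumption\] can be checked directly from the arrival times. Below is a minimal sketch (illustrative only; the half-open window convention $[a_i, a_i+d)$ and the function name are our own choices, not the paper's notation):

```python
def is_feasible(arrivals, d, beta):
    """Check that every window of length d contains at most
    floor(d / beta) arrivals, where beta = B / R is the time
    needed to process a single task."""
    limit = int(d // beta)
    arrivals = sorted(arrivals)
    for i, a in enumerate(arrivals):
        # count arrivals falling in the window [a, a + d)
        in_window = sum(1 for b in arrivals[i:] if b < a + d)
        if in_window > limit:
            return False
    return True

print(is_feasible([0, 19], d=10, beta=1))    # True
print(is_feasible([0, 0, 0], d=2, beta=1))   # False: 3 arrivals, limit is 2
```

Only the windows starting at an arrival need to be examined, since shifting a window left to the nearest arrival can only increase its arrival count.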
Similar to the “just-in-time" idea exploited in [@GamNaPraUyZaInf02] for adaptive modulation, the system should wake up as late as possible so that it may potentially reduce the active time. The question is how late the system should wake up. This is answered by the following results. \[Lemma\_LateStartIsBetter\]Suppose that tasks $\{k,\ldots ,n\}$ are all the tasks served in an AP on the optimal sample path of $Q(1,N)$ and starting the AP at either $t_{A}$ or $t_{B},$ $a_{k}\leq t_{A}<t_{B}\leq d_{k}-\beta$, is feasible. Then, $$C_{k,\ldots ,n}^{A}\geq C_{k,\ldots ,n}^{B}$$ where $C_{k,\ldots ,n}^{A}$ and $C_{k,\ldots ,n}^{B}$ are the corresponding costs of serving tasks $\{k,\ldots ,n\}$ in the AP for the two different starting times $t_A$ and $t_B$, respectively. Lemma \[Lemma\_LateStartIsBetter\] indicates that an AP on the optimal sample path of $Q(1,N)$ should be started as late as possible. We now utilize this result to figure out when exactly the first task $k$ should be served. \[Lemma\_When\_to\_Start\_case1\]If tasks $\{k,\ldots ,n\}$ are all the tasks served in an AP on the optimal sample path of $Q(1,N)$ and the number of task arrivals in $[a_{k},d_{k}-\beta )$ is $0$, then the optimal starting time to transmit task $k$ is $d_{k}-\beta $, i.e., $$x_{k}^{\ast }=d_{k},$$ where $x_{k}^{\ast }$ is the optimal departure time of task $k$. Lemma \[Lemma\_When\_to\_Start\_case1\] shows that we can delay the transmission of the first task in an AP to $\beta $ seconds before its deadline, provided that there are no other arrivals before that time. Next, we discuss the case in which there exist other task arrivals before $d_{k}-\beta $. \[Lemma\_When\_to\_Start\]Suppose task $k$ is the first task in an AP on the optimal sample path of $Q(1,N)$ and the number of task arrivals in $[a_{k},d_{k}-\beta )$ is $m,$ $0<m\leq \lfloor \frac{d}{\beta }\rfloor -1$ .
Let $$\delta _{j}=\beta (j-k)-(a_{j}-a_{k}) \label{Lemma2_1}$$ $$z=\underset{j=k+1,\ldots ,k+m}{\arg \max }\{\delta _{j}\}$$ The optimal starting time to serve task $k$ is: $$\left\{ \begin{array}{cc} d_{k}-\beta , & \text{if }\delta _{z}\leq 0 \\ d_{k}-\beta -\delta _{z}, & \text{if }\delta _{z}>0 \end{array} \right.$$ Having discussed when to wake up the system, we now find out when the system should go to sleep. Apparently, the optimal time to end an AP depends on future task information. In what follows, we first establish some results that identify the end of an AP based on future task arrival information. $\label{TasksApartEndAP}$If $d_{j}+C_{W}/C_{I}<a_{j+1},$ $j\in \{1,\ldots ,N-1\},$ then task $j$ ends an AP on the optimal sample path of $Q(1,N)$. Lemma \[TasksApartEndAP\] basically indicates that if the deadline of task $j$ is at least $C_{W}/C_{I}$ seconds apart from the next task arrival, then task $j$ ends an AP on the optimal sample path. Note that this is a sufficient, but not necessary, condition for an AP ending on the optimal sample path. In some cases, whether a task should end an AP is determined not only by the next arrival, but also by all subsequent ones. Let $d_{0}=-\infty $ and $a_{N+1}=\infty .$ We introduce the following definition. Consecutive tasks $\{k,\ldots ,n\},$ $1\leq k\leq n\leq N,$ belong to a *super active period* (SAP) in problem $Q(1,N)$ if $d_{k-1}+C_{W}/C_{I}<a_{k},$ $d_{n}+C_{W}/C_{I}<a_{n+1},$ and $d_{j}+C_{W}/C_{I}\geq a_{j+1},$ $\forall j\in \{k,\ldots ,n-1\}.$ Each SAP contains one or more APs. SAPs can be easily identified by simply examining all the task deadlines and arrival times and applying Lemma \[TasksApartEndAP\]. This implies that instead of working on the original problem $Q(1,N)$, we now only need to focus on each SAP, which is essentially a subproblem $Q(k,n)$. We now define our decision points in each SAP.
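Before doing so, we note that the two ingredients derived above, the late-as-possible starting time of Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\] and the SAP decomposition based on Lemma \[TasksApartEndAP\], can be sketched as follows. This is a minimal sketch with our own function names; arrivals are assumed sorted and indexed from 0.

```python
def optimal_start(arrivals, k, d, beta):
    """Latest feasible start of an AP whose first task is k:
    d_k - beta, corrected by delta_z = max_j [beta*(j-k) - (a_j - a_k)]
    over the arrivals falling in [a_k, d_k - beta)."""
    dk = arrivals[k] + d                      # deadline of task k
    deltas = [beta * (j - k) - (arrivals[j] - arrivals[k])
              for j in range(k + 1, len(arrivals))
              if arrivals[j] < dk - beta]
    return dk - beta - max(max(deltas, default=0.0), 0.0)

def split_saps(arrivals, d, CW, CI):
    """Split tasks into super active periods: task j ends a SAP
    whenever d_j + CW/CI < a_{j+1}."""
    gap = CW / CI
    saps, current = [], [0]
    for j in range(len(arrivals) - 1):
        if arrivals[j] + d + gap < arrivals[j + 1]:
            saps.append(current)
            current = []
        current.append(j + 1)
    saps.append(current)
    return saps
```

For the two-task example used later ($a_1=0$, $a_2=19$, $d=10$, $\beta=1$), `optimal_start` returns the wake-up times $9$ and $28$ quoted in the text.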
A decision point $x_{t}$, $t\in \{k,\ldots ,n-1\}$, is the departure time of task $t$ that satisfies $x_{t}<a_{t+1}.$ If $x_{t} \ge a_{t+1}$, then $x_{t}$ is not a decision point because the system should stay active at $x_{t}$ and process task $t+1$. At each decision point, the control decision is to let the system either go to sleep or stay awake. Let us take a look at some examples, in which $d=10,$ $C_{W}=10,$ and $C_{B}=C_{I}=C=1.$ Note that $C_B$ and $C_I$ could be different in general; for simplicity, we set them equal to each other in the examples. We also assume that $B=R,$ i.e., it takes one unit of time to complete a task. Fig. \[SP\_1\_Scenario\_1\] and Fig. \[SP\_2\_Scenario\_1\] show two different sample paths for a simple two-task scenario: $a_{1}=0$ and $a_{2}=19.$ In both sample paths, task $1$’s optimal wake-up time is determined by Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\]. The only decision point is $x_{1}$, at which the system needs to decide if it should go to sleep or stay awake. In particular, the system in Fig. \[SP\_1\_Scenario\_1\] wakes up at $t_{1}=9$, finishes task $1$ at its deadline $d_{1}=10,$ stays awake, and finishes task $2$ at $t_{2}=20.$ The total cost is: $C_{W}+C(t_{2}-t_{1})=21.$ In Fig. \[SP\_2\_Scenario\_1\], the system wakes up at $t_{1}=9$, finishes task $1$ at its deadline $d_{1}=t_{2}=10,$ and goes to sleep. Then, it wakes up at $t_{3}=28$ (once again determined by Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\]) and finishes task $2$ at $t_{4}=29.$ The total cost of this case is: $2C_{W}+C[(t_{2}-t_{1})+(t_{4}-t_{3})]=22.$ It is evident that at decision point $x_{1}=10,$ the optimal control is to let the system stay awake (shown in Fig. \[SP\_1\_Scenario\_1\]). ![Sample path \#1 of scenario \#1.[]{data-label="SP_1_Scenario_1"}](examples_11.pdf){height="1.2in" width="2.12in"} ![Sample path \#2 of scenario \#1.[]{data-label="SP_2_Scenario_1"}](examples_12.pdf){height="1.2in" width="2.12in"} Now, let us consider another scenario (Fig. \[SP\_1\_Scenario\_2\] and Fig.
\[SP\_2\_Scenario\_2\]), in which we keep the previous tasks $1$ and $2$ unchanged and add task $3$. Our first decision point is again at $x_{1}=10.$ In Fig. \[SP\_1\_Scenario\_2\], the system wakes up at $t_{1}=9$, finishes task $1$ at its deadline $d_{1}=10,$ stays awake, finishes task $2$ at time $20,$ stays awake, and finally finishes task $3$ at time $t_{2}=30.$ The total cost is: $C_{W}+C(t_{2}-t_{1})=31.$ In Fig. \[SP\_2\_Scenario\_2\], the system wakes up at $t_{1}=9$, finishes task $1$ at its deadline $d_{1}=t_{2}=10,$ and goes to sleep. Then, it wakes up at $t_{3}=28$ and finishes tasks $2$ and $3$ at $t_{4}=30.$ The total cost of this case is: $2C_{W}+C[(t_{2}-t_{1})+(t_{4}-t_{3})]=23.$ It is evident that at decision point $x_{1}=10,$ the optimal control is to let the system go to sleep (shown in Fig. \[SP\_2\_Scenario\_2\]). ![Sample path \#1 of scenario \#2.[]{data-label="SP_1_Scenario_2"}](examples_21.pdf){height="1.2in" width="2.12in"} ![Sample path \#2 of scenario \#2.[]{data-label="SP_2_Scenario_2"}](examples_22.pdf){height="1.2in" width="2.12in"} From the above examples, we can conclude that the optimal decision on whether the system should stay awake or go to sleep when it finishes all on-hand tasks depends on future task arrivals (task $3$ in the examples above). A first look at the problem seems to suggest that, in the worst case, the system may have to decide whether it should go to sleep or stay awake after each task departure; the total number of possible sample paths could then be as high as $2^{N}$, which makes the problem intractable when $N$ is large. However, a closer look indicates that the off-line optimal ON-OFF control problem can be solved by dynamic programming (DP), which has been widely used to solve a large class of problems with special structural properties. In the context of DES, however, its usage has been very limited to date.
For example, in [@MaoCTDA] and [@MiaoMaoCGCTONS], where the problem formulation is similar to the one in this paper, neither the CTDA nor the GCTDA algorithm is DP-based. We will show next that for the DES studied in this paper, DP and sample path analysis can be used together to obtain the optimal solution. In particular, this is done by introducing two types of tasks: starting and following. In problem $Q(k,n)$, where tasks $\{k,\ldots ,n\}$ form an SAP, the first task of any AP is called a *starting* task. Tasks that are not starting tasks are known as *following* tasks. Since the case $k=n$ is trivial, we assume that $k<n$ in our analysis. Note that APs that contain only one task do not have following tasks. Any task $i\in\{k,\dots,n\}$ must be either a starting task or a following one. We are interested in finding the optimal cost of serving tasks $\{i,\ldots ,n\}$, and we use $Q^{S}(i,n)$ and $Q^{F}(i,n)$ to denote the optimization problems of serving tasks $\{i,\ldots ,n\}$ when task $i$ is a starting and a following task, respectively. Note that in these two problems, only tasks $\{i,\ldots,n\}$ are served; the remaining tasks $\{k,\ldots,i-1\}$ are not considered. In problem $Q^{F}(i,n)$, the system is active when task $i$ arrives; therefore, task $i$ will be served right after its arrival. Let $J_{i}^{S}$ and $J_{i}^{F}$ be the minimum costs of $Q^{S}(i,n)$ and $Q^{F}(i,n)$, respectively. When $i=n$, we can easily calculate $J_{n}^{S}$ and $J_{n}^{F}:$ $J_{n}^{S}=C_{W}+C_{B}\beta ,\text{ }J_{n}^{F}=C_{B}\beta.$ Note that $J_{n}^{F}$ does not include the wake-up cost $C_{W}$, since by assumption, task $n$ is a following task. The operating cost, $C_{B}\beta $, is identical in both cases. Suppose that $J_{i}^{S}$ and $J_{i}^{F},$ $i\in \{k+1,\ldots ,n\},$ are both known; the next step is to find $J_{i-1}^{S}$ and $J_{i-1}^{F}.$ We first focus on $J_{i-1}^{S}.$ By assumption, task $i-1$ is a starting task.
We use Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\] to find the optimal starting time of task $i-1$ in problem $Q^{S}(i-1,n).$ Let the optimal starting time be $s_{i-1,n}^{i-1}.$ For tasks in $\{i,\ldots ,n\}$, find task $l$ that satisfies the following:$$\label{l_for_J_S} \begin{gathered} s_{i-1,n}^{i-1}+(j-i+1)\beta \ge a_{j}, \forall j\in \{i-1,\ldots ,l-1\}, \\ \text{and }s_{i-1,n}^{i-1}+(l-i+1)\beta < a_{l} \end{gathered}$$ If task $l$ does not exist, then we have the trivial case in which the system is always busy serving tasks $\{i-1,\ldots ,n\}$, and there is a single AP that starts from $s_{i-1,n}^{i-1}$ and ends at $s_{i-1,n}^{i-1}+(n-i+2)\beta .$ In this case, $J_{i-1}^{S}=C_{W}+(n-i+2) \beta C_{B}$. We now consider the more interesting case that task $l$ does exist. In particular, $$J_{i-1}^{S}=\min (V_{i-1,l}^{SS}+J_{l}^{S},V_{i-1,l}^{SF}+J_{l}^{F}) \label{J_S_minimum}$$where $V_{i-1,l}^{SS}$ is the cost of serving tasks $\{i-1,\ldots ,l-1\}$ when task $l$ is a starting task:$$\label{V_SS} V_{i-1,l}^{SS}=C_{W}+(l-i+1)\beta C_{B}$$ $V_{i-1,l}^{SF}$ is the cost of serving tasks $\{i-1,\ldots ,l-1\}$ when task $l$ is a following task:$$\label{V_SF} V_{i-1,l}^{SF}=C_{W}+(l-i+1)\beta C_{B}+[a_{l}-s_{i-1,n}^{i-1}-(l-i+1)\beta]C_{I}$$ We now focus on $J_{i-1}^{F}.$ We emphasize again that in this case, task $i-1$ sees an active system upon its arrival; it will be served right away since it is the first task in $Q^{F}(i-1,n)$. For tasks in $\{i,\ldots ,n\}$, find task $l$ that satisfies the following:$$\label{l_for_J_F} \begin{gathered} a_{i-1}+(j-i+1)\beta \ge a_{j}, \forall j\in \{i-1,\ldots ,l-1\}, \\ \text{ and }a_{i-1}+(l-i+1)\beta < a_{l} \end{gathered}$$Once again, task $l$ may not exist, and this corresponds to the case that the system is always busy serving tasks $\{i-1,\ldots ,n\}.$ In this case, there is a single AP that starts from $a_{i-1}$ and ends at $a_{i-1}+(n-i+2)\beta .$ We have $J_{i-1}^{F}=(n-i+2) \beta C_{B}$.
We now consider the more interesting case that task $l$ does exist. We have: $$J_{i-1}^{F}=\min (V_{i-1,l}^{FS}+J_{l}^{S},V_{i-1,l}^{FF}+J_{l}^{F}) \label{J_F_minimum}$$where $V_{i-1,l}^{FS}$ is the cost of serving tasks $\{i-1,\ldots ,l-1\}$ when task $l$ is a starting task:$$\label{V_FS} V_{i-1,l}^{FS}=(l-i+1)\beta C_{B}$$ $V_{i-1,l}^{FF}$ is the cost of serving tasks $\{i-1,\ldots ,l-1\}$ when task $l$ is a following task:$$\label{V_FF} V_{i-1,l}^{FF}=(l-i+1)\beta C_{B}+[a_{l}-a_{i-1}-(l-i+1)\beta]C_{I}$$ In Table \[Table\_Q\_k\_n\], we show the algorithm that returns the optimal cost of $Q(k,n)$. This algorithm involves two more algorithms that return the optimal costs of $Q^{S}(i-1,n)$ (Table \[Table\_Q\_S\]) and $Q^{F}(i-1,n)$ (Table \[Table\_Q\_F\]), respectively. \[theorem\_optimal\]$J_{k}^{S}$ is the optimal cost of problem $Q(k,n)$. We have proved that when the algorithm in Table \[Table\_Q\_k\_n\] stops, $J_{k}^{S}$ is the optimal cost of problem $Q(k,n)$. The corresponding optimal control, i.e., the starting time and ending time of each AP, can be traced back iteratively by identifying the $J_{l}^{S}$ or $J_{l}^{F}$ that each $J_{i-1}^{S}$ or $J_{i-1}^{F}$ points to. The procedure is provided in Table \[Table\_Control\]. Next, we use the example in Fig. \[SP\_1\_Scenario\_2\] and Fig. \[SP\_2\_Scenario\_2\] to show how the above algorithms work. We have three tasks $1,$ $2,$ and $3$ belonging to an SAP ($k=1$ and $n=3$). Initially, $J_{n}^{S}=J_{3}^{S}=C_{W}+C\beta =11,$ and $J_{n}^{F}=J_{3}^{F}=C\beta =1.$ In the first iteration $(i=n=3)$, we calculate $J_{i-1}^{S}$ and $J_{i-1}^{F}.$ To calculate $J_{i-1}^{S}=J_{2}^{S},$ we first figure out $s_{2,3}^{2}=28.$ Then, we find that no task $l$ satisfies (\[l\_for\_J\_S\]). Therefore, tasks $2$ and $3$ form a single AP in problem $Q^{S}(2,3)$, and $J_{2}^{S}=12.$ To calculate $J_{i-1}^{F}=J_{2}^{F},$ we identify that task $l=3$ satisfies (\[l\_for\_J\_F\]).
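For readers who prefer code, the whole $J^{S}/J^{F}$ recursion over one SAP can be sketched as below. This is a minimal sketch of our own, not the pseudocode of Table \[Table\_Q\_k\_n\] verbatim; tasks are 0-based, and since the text does not state $a_3$ explicitly, we take $a_3=29$, a value consistent with the numbers in this example.

```python
def dp_cost(a, d, beta, CW, CB, CI):
    """Optimal cost J^S of the first task of one SAP with sorted
    arrivals a, common relative deadline d, and service time beta."""
    n = len(a)
    JS, JF = [0.0] * n, [0.0] * n
    JS[n - 1] = CW + CB * beta          # last task as a starting task
    JF[n - 1] = CB * beta               # last task as a following task

    def opt_start(f):
        # Start as late as possible (Lemmas on the optimal starting time).
        dk = a[f] + d
        deltas = [beta * (j - f) - (a[j] - a[f])
                  for j in range(f + 1, n) if a[j] < dk - beta]
        return dk - beta - max(max(deltas, default=0.0), 0.0)

    def find_l(f, s):
        # First task l whose arrival breaks the busy period started at s.
        for l in range(f + 1, n):
            if s + (l - f) * beta < a[l]:
                return l
        return None

    for f in range(n - 2, -1, -1):
        s = opt_start(f)                # task f as a starting task
        l = find_l(f, s)
        if l is None:
            JS[f] = CW + (n - f) * beta * CB
        else:
            VSS = CW + (l - f) * beta * CB
            VSF = VSS + (a[l] - s - (l - f) * beta) * CI
            JS[f] = min(VSS + JS[l], VSF + JF[l])
        l = find_l(f, a[f])             # task f as a following task
        if l is None:
            JF[f] = (n - f) * beta * CB
        else:
            VFS = (l - f) * beta * CB
            VFF = VFS + (a[l] - a[f] - (l - f) * beta) * CI
            JF[f] = min(VFS + JS[l], VFF + JF[l])
    return JS[0]
```

With the example parameters ($d=10$, $C_W=10$, $C_B=C_I=1$, $\beta=1$), the sketch reproduces the costs $21$ for the two-task scenario and $23$ for the three-task scenario.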
We then use (\[J\_F\_minimum\]) to obtain $J_{2}^{F}=\min (V_{2,3}^{FS}+J_{3}^{S},V_{2,3}^{FF}+J_{3}^{F})=\min (1+J_{3}^{S},10+J_{3}^{F})=11.$ In the final iteration $(i=n-1=2)$, we only need to calculate $J_{i-1}^{S}=J_{1}^{S}.$ Because $s_{1,3}^{1}=9$ and task $l=2$ satisfies (\[l\_for\_J\_S\]), we use (\[J\_S\_minimum\]) to calculate $J_{1}^{S}:$ $J_{1}^{S}=\min (V_{1,2}^{SS}+J_{2}^{S},V_{1,2}^{SF}+J_{2}^{F})=\min (11+J_{2}^{S},20+J_{2}^{F})=23.$ This is the optimal cost obtained in Fig. \[SP\_2\_Scenario\_2\]. If we follow the procedure in Table \[Table\_Control\], we will get exactly the same optimal solution as shown in Fig. \[SP\_2\_Scenario\_2\]. The details are omitted. Next, we use simulation results to show how the optimal solution performs compared with a naive approach, in which the controller simply goes to sleep when there is no backlog and wakes up when a new task arrives. Let the optimal-to-naive ratio be the ratio between the optimal cost and the cost of the naive controller. Fig. \[Fig\_optimal\_naive\] shows how the optimal-to-naive ratio varies when the task arrival process and the wake-up cost $C_W$ change. In the simulation, we have $100$ runs that correspond to $100$ maximum interarrival times from $1ms$ to $100ms$ with step size $1ms$. $1000$ tasks and various $C_W$ values are used in each run. The interarrival time between two adjacent tasks is uniformly distributed between $0$ and the maximum interarrival time in each run. The values of the other parameters are as follows: $d=20ms$, $C_B$=30mW, $C_I$=100$\mu$W, and $\beta=1ms$. ![\[fig:Fig4\]Optimal-to-naive ratio under various wake-up costs and interarrival times[]{data-label="Fig_optimal_naive"}](Cb_30000_Ci_100.pdf){width="70.00000%"} We have a couple of observations. First, the cost saving of the optimal solution is greater when $C_W$ is larger.
Second, the maximum cost saving occurs when the interarrival time is neither too small nor too large: when it is too small, a single AP is sufficient to complete all the tasks, and the optimal and the naive solutions are essentially the same; when it is too large, many APs are needed, and the advantage of the optimal controller becomes smaller. As we can see from the result, the cost saving of the DP algorithm in the $C_W=28mJ$ case is as large as $50\%$, and it will be even greater when $C_W$ is higher.

On-line Control {#Sec_online}
===============

In the previous section, we combined structural properties of the optimal sample path and dynamic programming to find the optimal solution to the off-line control problem. In this section, we study on-line control, where future task arrival information is unknown to the controller. Essentially, the controller needs to decide the starting time and ending time of each AP.

Starting an AP
--------------

We first focus on the following questions: how can we determine the best time to start an AP in on-line control, and how different is it from the optimal time in off-line control? ![On-line control: starting an AP[]{data-label="start_of_AP"}](FlowChart.pdf){height="2.4in" width="2.8253in"} Fig. \[start\_of\_AP\] shows the proposed on-line control mechanism for determining the wake-up time. It is an iterative algorithm that dynamically adjusts the wake-up time based upon the backlog and the newly available task information. Initially, right after the first task arrives, the scheduled wake-up time is $a_{1}+d-\beta$ (determined by Lemma \[Lemma\_When\_to\_Start\_case1\]). If there are other task arrivals before the scheduled wake-up time, the controller will recalculate the wake-up time using the results in Lemma \[Lemma\_When\_to\_Start\]; otherwise, the system will be woken up at the scheduled time. \[decision\_after\_arrival\]Suppose that tasks $\{k,\ldots ,n\}$ form an AP on the optimal sample path of $Q(1,N)$.
If the system is OFF before task $k$ arrives in on-line control, then the wake-up time returned by the on-line control mechanism in Fig. \[start\_of\_AP\] is optimal. Lemma \[decision\_after\_arrival\] indicates that for on-line control, the lack of future task information does not incur any penalty when starting an AP: the optimal time to start an AP can be determined iteratively using the backlog and the newly available task information. We now turn our attention to ending an AP in on-line control.

Ending an AP
------------

When all backlogged tasks have been served in an on-line setting, the controller needs to decide when to end an AP and put the system to sleep. This decision depends on future task information and the values of the idling cost $C_I$ and the wake-up cost $C_W$. For example, if the next task $t+1$ arrives very soon, the optimal control at decision point $x_{t}$ might be to let the system stay active; conversely, if the next task $t+1$ arrives after a long time, then the system should perhaps go to sleep at decision point $x_{t}$. When some future task information is known, techniques such as Receding Horizon Control (RHC) can be utilized to make decisions. In this paper, we focus on the scenario that no future task information is available at all. In general, the control at each decision point is the following: let the system stay awake for another $\theta_{t}$ seconds. If no task arrives within these $\theta_{t}$ seconds, then put the system to sleep once they elapse; otherwise, serve the newly arrived tasks and wait for the next decision point. Note that the subscript $t$ indicates that $\theta_{t}$ could be different at each decision point. Let $J^{*}$ be the optimal cost of the off-line problem $Q(1,N)$ and $\widetilde{J}$ be the cost of the on-line controller. Our objective is to develop competitive on-line controllers which can quantify their worst-case performance deviation from the optimal off-line solution.
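As a minimal sketch of this decision rule (the per-gap accounting below and the function names are our own framing, not the paper's notation), the extra cost charged at a sequence of decision points, together with the off-line benchmark that knows every gap in advance, can be written as:

```python
def online_idle_cost(idle_gaps, thetas, CW, CI):
    """Cost charged at decision points by the stay-awake-for-theta rule.
    idle_gaps[t]: time from decision point t to the next task arrival.
    thetas[t]:    how long the controller is willing to stay awake."""
    cost = 0.0
    for gap, theta in zip(idle_gaps, thetas):
        if gap <= theta:
            cost += CI * gap          # next task arrives while still awake
        else:
            cost += CI * theta + CW   # idle theta seconds, sleep, wake later
    return cost

def offline_idle_cost(idle_gaps, CW, CI):
    """With full future knowledge, each gap costs min(CI*gap, CW):
    stay awake for short gaps, sleep through long ones."""
    return sum(min(CI * g, CW) for g in idle_gaps)
```

If every gap slightly exceeds $\theta$, the on-line rule pays $C_I\theta + C_W$ per gap while the benchmark pays only $C_W$; with $\theta = C_W/C_I$ this is the factor-of-two behavior behind the deterministic competitive ratio discussed next.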
One challenge of competitive analysis is to find the worst-case scenario. In our problem, the unnecessary cost in on-line control occurs when the system is idling: the controller must decide if and when to sleep. Therefore, the worst case occurs when each AP contains only one task, so that the decision has to be made over and over again for every single task. This property actually simplifies our analysis; in particular, we tackle the competitive ratio problem from two different angles: a deterministic controller and a randomized one.

### Deterministic Controller

We first consider a deterministic controller in which $\theta_{t}$ is a fixed constant value $\theta$. The on-line controller is *c-competitive* if $\widetilde{J}(I,\theta)\le c J^{*}, \forall I\in\mathscr{I}$, where $\mathscr{I}$ is the set of all possible task arrival instances and $I$ is one task arrival instance. $c$ is called the competitive ratio of the deterministic on-line controller and is essentially the *upper bound (i.e., worst case)* of the ratio between the on-line cost $\widetilde{J}$ and the off-line optimal cost $J^{*}$. \[Lemma\_deterministic\_competitive\_ratio\] The best competitive ratio $c^*$ of a deterministic controller is obtained when $\theta=C_W/C_I$, and $\lim_{N\to\infty} c^*=(2+\gamma)/(1+\gamma)$, where $N$ is the number of tasks and $\gamma=C_B\beta/C_W$. Lemma \[Lemma\_deterministic\_competitive\_ratio\] shows that the competitive ratio of a deterministic algorithm depends on the ratio between $C_B\beta$, the cost of serving one task, and $C_W$, the cost of waking up the system. If this ratio is very small, then the competitive ratio is close to $2$; if the ratio is very large, then the competitive ratio is close to $1$.

### Randomized Controller

Alternatively, we assume that $\theta_{t}$ is determined by a randomized algorithm that returns a value according to a certain probability distribution $P$.
During on-line control, the controller is essentially playing a game with an adversary (i.e., the task arrival process). Our job is to find the optimal probability distribution and the corresponding competitive ratio. We point out that the competitive ratio of a randomized on-line algorithm $A$ is defined with respect to a specific type of adversary. In this paper, we assume an oblivious adversary [@ben1994power], in which the worst instance for the randomized algorithm $A$ is chosen without the knowledge of the realization of the random variable used by $A$. We say randomized algorithm $A$ is *c-competitive* if $E_{P}[\widetilde{J}(A,I)] \le cJ^{*}(I), \forall I \in \mathscr{I}$, where $\widetilde{J}(A,I)$ is the cost of algorithm $A$ under task arrival instance $I$ in on-line control and $J^{*}(I)$ is the corresponding off-line optimal cost. Note that the task arrival instance $I$ must be fixed before the expectation is taken. The competitive ratio of randomized algorithm $A_{P}$ (algorithm $A$ using probability distribution $P$) is: $$c(A_{P})=\underset{I\in\mathscr{I}}{\sup}\frac{E_{P}[\widetilde{J}(A_{P},I)]}{J^{*}(I)}$$ Our goal is to find the probability distribution that yields the best competitive ratio $c^{*}$: $$\label{minimax} c^{*}=\underset{P}{\inf}\text{ }\underset{I\in\mathscr{I}}{\sup}\frac{E_{P}[\widetilde{J}(A_{P},I)]}{J^{*}(I)}$$ This is essentially a minimax problem, and one way of solving it is to use Yao’s minimax principle [@yao1977probabilistic], which states that a randomized algorithm may be viewed as a random choice between deterministic algorithms; in particular, the competitive ratio of a randomized algorithm against any oblivious adversary is the same as that of the best deterministic algorithm under the worst-case distribution of the adversary’s input. In our case, the adversary’s input is the task arrival instance after each AP. Let its probability distribution be $G$.
Using Yao’s principle and von Neumann’s minimax theorem, we get: $$\label{maxmin} c^{*}=\underset{G}{\sup}\text{ }\underset{A\in\mathscr{A}}{\inf}\frac{E_{G}[\widetilde{J}(A,I_{G})]}{J^{*}(I_{G})}$$ where $\mathscr{A}$ is the set of all randomized algorithms, $I_{G}$ is a specific task arrival instance under probability distribution $G$, and the expectation is now performed with respect to $G$. We now use the following lemma to find $c^*$. \[lemma\_competitive\_random\] The best competitive ratio $c^*$ of a randomized controller is obtained when $\theta_t$ is a random variable $X$, whose probability density function is $$\begin{aligned} \label{f_x} f_X(x) &=&\left\{ \begin{array}{cc} \frac{1}{\frac{C_W}{C_I}(e-1)}e^{x/(C_W/C_I)}, & \text{if }x\leq C_W/C_I \\ 0, & \text{if }x>C_W/C_I \end{array} \right. \end{aligned}$$ When this controller is used, $\lim_{N\to\infty} c^*=(\gamma+1.58)/(\gamma+1)$, where $\gamma=C_B\beta/C_W$. Lemma \[lemma\_competitive\_random\] shows that the competitive ratio of a randomized controller also depends on the ratio between $C_B\beta$ and $C_W$. If this ratio is very small, then the competitive ratio is close to $e/(e-1)\approx 1.58$; if the ratio is very large, then the competitive ratio is close to $1$.

Conclusions {#Sec_conclusions}
===========

In this paper, we study the optimal ON-OFF control problem for a class of DESs with real-time constraints. The DESs have operating costs $C_{B}$ and $C_{I}$ per unit time and wake-up cost $C_{W}$. Our goal is to switch the system between the ON and the OFF states so as to minimize cost and satisfy real-time constraints. In particular, we consider a homogeneous case in which all tasks have the same number of operations and each one’s deadline is $d$ seconds after its arrival time.
For the off-line scenario in which all task information is known to the controller a priori, we show that the optimal solution can be obtained via a two-fold decomposition: $(i)$ super active periods that contain one or more active periods can be identified easily using the task arrival times and deadlines, and $(ii)$ the optimal solution to each super active period can be obtained using dynamic programming. Simulation results show that, compared with a simple heuristic, the cost saving of the DP algorithm can be 50% or more. In on-line control, we show that the best time to start an AP can be obtained via an iterative algorithm and is guaranteed to be the same as in the off-line problem. To decide the best time to end an AP in the on-line setting, where no future task arrival information is available, we evaluate both deterministic and randomized controllers and derive their competitive ratios; these results quantify the worst-case on-line performance deviation from the optimal off-line solution. L. Miao, J. Mao, and C. G. Cassandras, “Optimal energy-efficient downlink transmission scheduling for real-time wireless networks,” *IEEE Transactions on Control of Network Systems*, in print, DOI 10.1109/TCNS.2016.2545099. J. W. S. Liu, *Real-Time Systems*. NJ: Prentice Hall Inc., 2000. D. L. Pepyne and C. G. Cassandras, “Optimal control of hybrid systems in manufacturing,” *Proceedings of the IEEE*, vol. 88, no. 7, pp. 1108–1123, 2000. A. E. Gamal, C. Nair, B. Prabhakar, E. Uysal-Biyikoglu, and S. Zahedi, “Energy-efficient scheduling of packet transmissions over wireless networks,” in *Proceedings of IEEE INFOCOM*, vol. 3, 23-27, New York City, USA, 2002, pp. 1773–1782. M. Zafer and E. Modiano, “A calculus approach to energy-efficient data transmission with quality-of-service constraints,” *IEEE/ACM Trans. on Networking*, vol. 17, pp. 898–911, 2009. L. Miao and L.
Xu, “Optimal wake-up scheduling for energy efficient fixed-rate wireless transmissions with real-time constraints,” in *14th Wireless Telecommunications Symposium*, New York, NY, 2015. L. Miao, “Optimal on-off scheduling for a class of discrete event systems with real-time constraints,” in *American Control Conference*, Seattle, WA, USA, May 2017. E. Uysal-Biyikoglu, B. Prabhakar, and A. E. Gamal, “Energy-efficient packet transmission over a wireless link,” *IEEE/ACM Transactions on Networking*, vol. 10, pp. 487–499, Aug. 2002. M. Zafer and E. Modiano, “Minimum energy transmission over a wireless channel with deadline and power constraints,” *IEEE Trans. on Automatic Control*, vol. 54, pp. 2841–2852, 2009. ——, “Delay-constrained energy efficient data transmission over a wireless fading channel,” in *IEEE Information Theory and Applications Workshop*, San Diego, CA, USA, Jan-Feb 2007. ——, “Optimal rate control for delay-constrained data transmission over a wireless channel,” *IEEE Trans. on Information Theory*, vol. 54, pp. 4020–4039, 2008. W. Chen, M. Neely, and U. Mitra, “Energy efficient scheduling with individual packet delay constraints: offline and online results,” in *IEEE Infocom*, Anchorage, Alaska, USA, May 2007. W. Chen, U. Mitra, and M. Neely, “Energy-efficient scheduling with individual packet delay constraints over a fading channel,” *ACM Wireless Networks*, vol. 15, pp. 601–618, 2009. X. Wang and Z. Li, “Energy-efficient transmissions of bursty data packets with strict deadlines over time-varying wireless channels,” *IEEE Trans. on Wireless Communications*, vol. 12, pp. 2533–2543, 2013. M. I. Poulakis, A. D. Panagopoulos, and P. Constantinou, “Channel-aware opportunistic transmission scheduling for energy-efficient wireless links,” *IEEE Trans. on Vehicular Technology*, vol. 62, pp. 192–204, 2013. X. Zhong and C. Xu, “Online energy efficient packet scheduling with delay constraints in wireless networks,” in *IEEE Infocom*, Phoenix, AZ, April 2008.
J. Mao, C. G. Cassandras, and Q. Zhao, “Optimal dynamic voltage scaling in energy-limited nonpreemptive systems with real-time constraints,” *IEEE Trans. on Mobile Computing*, vol. 6, no. 6, pp. 678–688, 2007. J. Mao and C. G. Cassandras, “Optimal control of multilayer discrete event systems with real-time constraint guarantees,” *IEEE Transactions on Systems, Man, and Cybernetics: Systems*, vol. 44, no. 10, pp. 1425–1434, 2014. W. Ye, J. S. Heidemann, and D. Estrin, “Medium access control with coordinated adaptive sleeping for wireless sensor networks,” *IEEE/ACM Trans. on Networking*, vol. 12, pp. 493–506, 2004. W. Lai and I. C. Paschalidis, “Routing through noise and sleeping nodes in sensor networks: latency vs. energy trade-offs,” in *the 45th IEEE Conference on Decision and Control*, San Diego, CA, USA, 2006, pp. 2716–2721. X. Ning and C. G. Cassandras, “Dynamic sleep time control in event-driven wireless sensor networks,” in *the 45th IEEE Conference on Decision and Control*, San Diego, CA, USA, 2006, pp. 2722–2727. ——, “Optimal dynamic sleep time control in wireless sensor networks,” in *the 47th IEEE Conference on Decision and Control*, Cancun, Mexico, 2008, pp. 2332–2337. R. Cohen and B. Kapchits, “An optimal wake-up scheduling algorithm for minimizing energy consumption while limiting maximum delay in a mesh sensor network,” *IEEE/ACM Transactions on Networking*, vol. 17, pp. 570–581, 2009. L. Xia, D. Miller, Z. Zhou, and N. Bambos, “Service rate control of tandem queues with power constraints,” *IEEE Transactions on Automatic Control*, vol. 62, no. 10, pp. 5111–5123, 2017. N. M. Pletcher, S. Gambini, and J. Rabaey, “A 52 $\mu$W wake-up receiver with $-72$ dBm sensitivity using an uncertain-IF architecture,” *IEEE Journal of Solid-State Circuits*, vol. 44, no. 1, pp. 269–280, 2009. S. Ben-David, A. Borodin, R. Karp, G. Tardos, and A. Wigderson, “On the power of randomization in on-line algorithms,” *Algorithmica*, vol. 11, no. 1, pp. 2–14, 1994. A. C.-C.
Yao, “Probabilistic computations: Toward a unified measure of complexity,” in *18th Annual Symposium on Foundations of Computer Science*. IEEE, 1977, pp. 222–227. A. R. Karlin, M. S. Manasse, L. A. McGeoch, and S. Owicki, “Competitive randomized algorithms for nonuniform problems,” *Algorithmica*, vol. 11, no. 6, pp. 542–571, 1994.

APPENDIX

**Proof of Lemma \[Lemma\_feasibility\]:** Consider the solution in which the system is woken up at $a_{1}$ and stays active until $d_{N}.$ Because in any $d$ seconds, $$Rd\geq \lfloor \frac{d}{\beta }\rfloor B,$$ that is, the number of task departures is not less than the number of task arrivals, the backlog is always zero at the integer multiples of $d$ seconds. This means that all task arrivals can be served within $d$ seconds. Hence, the proposed solution is always feasible under Assumption \[feasibility\_assumption\]. $\blacksquare $ **Proof of Lemma \[Lemma\_LateStartIsBetter\]:** Because $$a_{k}\leq t_{A}<t_{B}\leq d_{k}-\beta$$ we have $$x_{j}^{A}\leq x_{j}^{B},\text{ }j=k,\ldots ,n \label{departure_early}$$ where $x_{k}^{A},\ldots ,x_{n}^{A}$ and $x_{k}^{B},\ldots ,x_{n}^{B}$ are the task departure times in the two sample paths, respectively. That is, the departure time of task $j$ in sample path $A$ is no later than that in sample path $B$. Note that (\[departure\_early\]) holds because the system stays on in the AP and $t_{A}<t_{B}.$ Let $t_{EA}$ and $t_{EB}$ be the ending times of the AP when the starting time is $t_A$ and $t_B$, respectively. We have: $$C_{k,\ldots ,n}^{A}=C_{W}+C_{B}(n-k+1)\beta+C_I[t_{EA}-t_A-(n-k+1)\beta]$$ $$C_{k,\ldots ,n}^{B}=C_{W}+C_{B}(n-k+1)\beta+C_I[t_{EB}-t_B-(n-k+1)\beta]$$ where the three terms in each equation correspond to the wake-up cost, the cost of serving the $n-k+1$ tasks, and the idling cost, respectively.
Since $t_A<t_B$, the idling time of sample path $A$ is not less than that of sample path $B$, i.e., $t_{EA}-t_A \ge t_{EB}-t_B$. Therefore, $$C_{k,\ldots ,n}^{A}\geq C_{k,\ldots ,n}^{B} \text{ }\blacksquare$$ **Proof of Lemma \[Lemma\_When\_to\_Start\_case1\]:** Time $d_{k}-\beta $ is the latest time to start serving task $k$ in order to meet its hard deadline requirement. Invoking Lemma \[Lemma\_LateStartIsBetter\], we only need to show that $Q(1,N)$ is still feasible when we delay the service of task $k$ until $d_{k}-\beta .$ Under Assumption \[feasibility\_assumption\] and in the worst case, there could be $\lfloor \frac{d}{\beta }\rfloor -1$ tasks $\{k+1,\ldots ,k+\lfloor \frac{d}{\beta }\rfloor -1\}$ arriving at $d_{k}-\beta .$ This means that at time $d_{k}-\beta ,$ we have $\lfloor \frac{d}{\beta }\rfloor $ tasks in the backlog. If we start serving all these tasks $\{k,\ldots ,k+\lfloor \frac{d}{\beta }\rfloor -1\}$ at $d_{k}-\beta $, then it takes at most $d$ seconds to serve all of them, and each task’s deadline is met. Again, under Assumption \[feasibility\_assumption\], the earliest time that task $k+\lfloor \frac{d}{\beta }\rfloor $ can arrive is $d_{k}.$ If the system stays active at $d_{k}+d-\beta ,$ then task $k+\lfloor \frac{d}{\beta }\rfloor $ can be served by its deadline $d_{k}+d.$ Similarly, all subsequent tasks can be served by their deadlines. Therefore, $Q(1,N)$ is still feasible after we postpone task $k$’s service time to $d_{k}-\beta .$ $\blacksquare $ **Proof of Lemma \[Lemma\_When\_to\_Start\]:** Invoking Lemma \[Lemma\_LateStartIsBetter\] again, we need to show that $d_{k}-\beta $ and $d_{k}-\beta -\delta _{z}$ are the latest feasible starting times for the two cases, respectively.
In the worst case, there could be $\lfloor \frac{d}{\beta }\rfloor -m-1$ tasks arriving at $d_{k}-\beta .$ As we have shown in Lemma \[Lemma\_When\_to\_Start\_case1\], these tasks and all subsequent ones can be served before their deadlines as long as we start the AP no later than $d_{k}-\beta .$ Therefore, we only need to focus on the tasks that arrive before $d_{k}-\beta .$ *Case 1:*$\ \delta _{z}\leq 0.$ This implies that $$\frac{a_{j}-a_{k}}{j-k}\geq \beta ,\text{ for }j\in \{k+1,\ldots ,k+m\} \label{All_greater_than_beta}$$ In this case, $d_{k}-\beta $ is the latest possible starting time for task $k.$ We need to show that $Q(1,N)$ is still feasible when we start serving task $k$ at $d_{k}-\beta ,$ i.e., starting to serve tasks at this time will satisfy the real-time constraints for tasks $\{k+1,\ldots ,k+m\}$. If task $k$ starts service at $d_{k}-\beta ,$ then the departure time $x_{j}$ of task $j$, $j\in \{k+1,\ldots ,k+m\},$ is $$x_{j}=d_{k}+\beta (j-k)$$ From (\[All\_greater\_than\_beta\]), we have $$x_{j}\leq d_{k}+a_{j}-a_{k}=a_{j}+d=d_{j}$$ Thus, the deadlines of all the tasks $\{k+1,\ldots ,k+m\}$ are met, and $d_{k}-\beta $ is the optimal starting time. *Case 2:* $\delta _{z}>0$. We need to show that $d_{k}-\beta -\delta _{z}$ is a feasible starting time for all tasks $\{k,\ldots ,k+m\}.$ We first show causality. The starting time $s_{j}$ of task $j\in \{k+1,\ldots ,k+m\}$ is $$s_{j}=d_{k}-\beta -\delta _{z}+\beta (j-k) \label{Lemma2_3}$$ Using (\[Lemma2\_1\]), $$\begin{aligned} s_{j} &=&d_{k}-\beta -[\beta (z-k)-(a_{z}-a_{k})]+\beta (j-k) \label{Sj} \\ &=&d_{k}-\beta +\beta (j-z)+(a_{z}-a_{k}) \notag \\ &=&d-\beta +\beta (j-z)+a_{z} \notag\end{aligned}$$ We prove $s_{j}\geq a_{j}$ by contradiction.
Suppose $s_{j}<a_{j},$ we have $$d-\beta +\beta (j-z)+a_{z}<a_{j}\text{, i.e.,}$$$$d-\beta +\beta (j-z)<a_{j}-a_{z} \label{inequality}$$ 1\) When $k<j\leq z\leq k+m,$ we have $$a_{j}-a_{z}\leq 0 \label{Inequality_1}$$By Assumption \[feasibility\_assumption\], $$\beta (j-z)=-\beta (z-j)\geq -(d-\beta ),$$i.e., $$d-\beta +\beta (j-z)\geq 0 \label{Inequality_2}$$Combining (\[Inequality\_1\]) and (\[Inequality\_2\]), (\[inequality\]) is not true. 2\) When $k<z<j\leq k+m$, we have $$d-\beta >a_{j}-a_{z}>0$$and $$d-\beta +\beta (j-z)>d$$Combining the two inequalities, we conclude that (\[inequality\]) is not true either. We can now assert $$s_{j}\geq a_{j}$$which satisfies causality. Next, we show the departure time of each task $j\in \{k+1,\ldots ,k+m\}$ is before the task’s deadline. Again, we use $x_{j}$ to denote the departure time of task $j,$ and $$x_{j}=s_{j}+\beta$$Invoking (\[Sj\]), $$x_{j}=d+\beta (j-z)+a_{z}$$We need to show $$x_{j}=d+\beta (j-z)+a_{z}\leq a_{j}+d,$$i.e., $$\beta (j-z)+a_{z}\leq a_{j} \label{proof_feasibility}$$ From (\[Lemma2\_1\]), we have $$\begin{aligned} \delta _{j} &=&\beta (j-k)-(a_{j}-a_{k})\leq \\ \delta _{z} &=&\beta (z-k)-(a_{z}-a_{k}),\text{ } \\ j &=&k+1,...,k+m\end{aligned}$$Rearranging the terms above, we obtain (\[proof\_feasibility\]). Finally, the departure time of task $z$ is exactly $a_{z}+d,$ indicating that $d_{k}-\beta -\delta _{z}\ $is the latest possible time to start serving task $z$. $\blacksquare $ **Proof of Lemma \[TasksApartEndAP\]:** We use $x_{j}^{\ast }$ and $s_{j+1}^{\ast }$ to denote the departure time of task $j$ and the starting time of task $j+1,$ respectively, on the optimal sample path of $Q(1,N)$. 
Using Lemma \[Lemma\_feasibility\], we have $$x_{j}^{\ast }\leq d_{j} \label{x<d}$$ From causality, $$s_{j+1}^{\ast }\geq a_{j+1} \label{s>a}$$ By assumption, we have $$a_{j+1}-d_{j}>C_{W}/C_{I} \label{a-d>cw/ca}$$ Combining (\[x<d\]), (\[s>a\]), and (\[a-d>cw/ca\]), we get $$s_{j+1}^{\ast }-x_{j}^{\ast }>C_{W}/C_{I} \label{s-x>cw/ca}$$ Next, we use a contradiction argument to prove the lemma. Let the optimal sample path of $Q(1,N)$ be $sp^{\ast }$, with corresponding cost $J^{\ast }$. Suppose that task $j$ does not end an **AP** on $sp^{\ast }$. It means that the system stays active from $x_{j}^{\ast }$ to $s_{j+1}^{\ast }.$ The optimal cost is then $J^{\ast }=(s_{j+1}^{\ast }-x_{j}^{\ast })C_{I}+J_{R},$ where $J_{R}$ is the rest of the cost beyond the time interval $[x_{j}^{\ast },s_{j+1}^{\ast }].$ Consider another sample path $sp^{^{\prime }}$, which is identical to $sp^{\ast }$, except that the system goes to sleep at $x_{j}^{\ast }$ and wakes up at $s_{j+1}^{\ast }.$ The system cost is now $J^{^{\prime }}=C_{W}+J_{R}.$ Using (\[s-x>cw/ca\]), we obtain $J^{^{\prime }}<J^{\ast },$ which contradicts the assumption that $sp^{\ast }$ is the optimal sample path. $\blacksquare $ **Proof of Theorem \[theorem\_optimal\]:** We prove it by induction. *Step 1*: Task $n$ can either be a starting task or a following task.
When it is a starting task, it is obvious that $J_{n}^{S}$ is the optimal cost of $Q^{S}(n,n).$ When it is a following task, it is also obvious that $J_{n}^{F}$ is the optimal cost of $Q^{F}(n,n).$ *Step 2*: Suppose that $J_{j}^{S}$ is the optimal cost of problem $Q_{j}^{S}(j,n),$ and $J_{j}^{F}$ is the optimal cost of problem $Q_{j}^{F}(j,n),$ $j\in \{i,\ldots ,n\}.$ We need to show that $J_{i-1}^{S}$ and $J_{i-1}^{F}$ are the optimal costs of problems $Q_{i-1}^{S}(i-1,n)$ and $Q_{i-1}^{F}(i-1,n),$ respectively. Since the proofs are similar, we only show that $J_{i-1}^{S}$ is the optimal cost of problem $Q_{i-1}^{S}(i-1,n).$ By assumption, task $i-1$ is a starting task. We can use Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\] to find $s_{i-1,n}^{i-1},$ the optimal starting time of task $i-1$. We now discuss two cases: Case 1: Task $l$ that satisfies (\[l\_for\_J\_S\]) does not exist. It implies that $s_{i-1,n}^{i-1}+(j-i+1)\beta >a_{j},\forall j\in \{i-1,\ldots ,n\},$ i.e., the system is busy serving tasks whenever a task $j\in \{i-1,\ldots ,n\}$ arrives. Therefore, there is no reason to go to sleep, and tasks $\{i-1,\ldots ,n\}$ form a single AP. From Line 14 of Table \[Table\_Q\_S\], $J_{i-1}^{S}=C_{W}+(n-i+2) \beta C_{B}$ is the optimal cost of problem $Q^{S}(i-1,n)$. Case 2: Task $l$ that satisfies (\[l\_for\_J\_S\]) does exist. In this case, task $l$ has not arrived when task $l-1$ departs the system. It has two subcases: the system should either go to sleep when task $l-1$ departs or stay awake (and serve task $l$ when it arrives). The subcase that yields a smaller cost is the optimal solution, and this is calculated in (\[J\_S\_minimum\]). $\blacksquare $ **Proof of Lemma \[decision\_after\_arrival\]:** We consider two cases. *Case 1:* The optimal wake-up time is $d_{k}-\beta .$ This happens when either Lemma \[Lemma\_When\_to\_Start\_case1\] or the $\delta _{z}\leq 0$ case of Lemma \[Lemma\_When\_to\_Start\] applies.
The on-line control mechanism picks the same wake-up time upon the arrival of task $k$, and it does not change. Therefore, the wake-up time in on-line control is the same as the optimal wake-up time on the optimal sample path. *Case 2:* The optimal wake-up time is $d_{k}-\beta -\delta _{z}.$ This happens when the $\delta _{z}>0$ case of Lemma \[Lemma\_When\_to\_Start\] applies. In on-line control, the initial wake-up time is set to $d_{k}-\beta .$ With the arrival of tasks between $a_{k}$ and $d_{k}-\beta ,$ this scheduled time is adjusted to $d_{k}-\beta -\delta _{j}$, for some $j\in \{k+1,\ldots ,k+m\}.$ By definition of $\delta _{z},$ we have $$\begin{aligned} d_{k}-\beta -\delta _{j} &\geq &d_{k}-\beta -\delta _{z} \\ &=&d_{k}-\beta -[\beta (z-k)-(a_{z}-a_{k})]\text{ } \\ &=&d-\beta -\beta (z-k)+a_{z} \\ &\geq &a_{z}\end{aligned}$$ The above implies that all intermediate wake-up times and the optimal wake-up time are after the arrival of task $z$. Therefore, the on-line control policy is able to wake up the system at the optimal time $d_{k}-\beta -\delta _{z}$ after task $z$ arrives. $\blacksquare $ **Proof of Lemma \[Lemma\_deterministic\_competitive\_ratio\]:** The worst case happens when each AP only contains a single task. After each task is served, the system stays active for $\theta$ seconds and then goes to sleep; it wakes up again after the next task arrives. For any $\theta$, we have the ratio between the on-line cost and the optimal cost: $$\label{c_of_theta} c(\theta)=\frac{C_W+NC_B\beta+(N-1)(C_I\theta+C_W)}{C_W+NC_B\beta+(N-1)\min(C_I\theta,C_W)}$$ where the numerator is the on-line cost and the denominator is the off-line cost. Note that both costs have three terms: the first term $C_W$ is the wake-up cost for serving the very first task; the second term $NC_B\beta$ is the actual cost of serving the $N$ tasks; and the last term is the cost between two adjacent tasks.
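A quick numerical check of (\[c\_of\_theta\]) is straightforward. In the sketch below the parameter values are illustrative, and $\gamma=C_B\beta/C_W$ as in the lemma's final ratio; the grid search confirms that $\theta^{*}=C_W/C_I$ minimizes the ratio and that, for large $N$, the minimum approaches $(2+\gamma)/(1+\gamma)$:

```python
def c_ratio(theta, N, C_W, C_B, beta, C_I):
    # Eq. (c_of_theta): on-line over off-line cost, single-task APs
    online  = C_W + N*C_B*beta + (N - 1)*(C_I*theta + C_W)
    offline = C_W + N*C_B*beta + (N - 1)*min(C_I*theta, C_W)
    return online / offline

C_W, C_B, beta, C_I, N = 4.0, 1.0, 2.0, 1.0, 10**6   # illustrative values
gamma = C_B*beta / C_W
thetas = [0.1*k for k in range(1, 101)]               # grid over theta
best = min(thetas, key=lambda t: c_ratio(t, N, C_W, C_B, beta, C_I))
assert abs(best - C_W/C_I) < 1e-9                     # minimizer theta* = C_W/C_I
assert abs(c_ratio(best, N, C_W, C_B, beta, C_I) - (2 + gamma)/(1 + gamma)) < 1e-3
```

The ratio is decreasing in $\theta$ below $C_W/C_I$ and increasing above it, so the grid minimum lands exactly at the break-even point.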
We can rewrite (\[c\_of\_theta\]) as $$c(\theta)=\frac{C_W/N+C_B\beta+(N-1)(C_I\theta+C_W)/N}{C_W/N+C_B\beta+(N-1)\min(C_I\theta,C_W)/N}$$ It follows that $$\lim_{N\to\infty}c(\theta)=\frac{C_B\beta+(C_I\theta+C_W)}{C_B\beta+\min(C_I\theta,C_W)}.$$ Because $$\frac{C_I\theta+C_W}{\min(C_I\theta,C_W)} \ge 2$$ and the equality holds when $\theta=C_W/C_I$, we have $$c^*=\lim_{N\to\infty}c(C_W/C_I)=\frac{C_B\beta+2C_W}{C_B\beta+C_W}=\frac{2+\gamma}{1+\gamma}\text{ } \blacksquare$$ **Proof of Lemma \[lemma\_competitive\_random\]:** Similar to the deterministic algorithm case, the worst case also occurs when each AP only contains a single task. At the $i$-th decision point, the system stays active for $\theta_t=X$ seconds, where $X$ is a random variable returned by algorithm $A$, and then goes to sleep if no task arrives during this period. For serving $N$ tasks, the ratio between the on-line cost and the optimal cost is: $$\label{c_of_theta_random} c(\theta_t)=\frac{C_W+NC_B\beta+(N-1)E_G[\widetilde{J}_b(A,I_G)]}{C_W+NC_B\beta+(N-1)J_b^*(I_G)}$$ where the numerator is the on-line cost and the denominator is the off-line cost. As in the deterministic case, the three terms are the wake-up cost for the very first task, the actual cost of serving the $N$ tasks, and the cost between two adjacent tasks. The expectation is taken with respect to $G$ because of the insight provided by equation (\[maxmin\]). We can rewrite (\[c\_of\_theta\_random\]) as $$c(\theta_t)=\frac{C_W/N+C_B\beta+(N-1)E_G[\widetilde{J}_b(A,I_G)]/N}{C_W/N+C_B\beta+(N-1)J_b^*(I_G)/N}.$$ It follows that $$\label{lim_random} \lim_{N\to\infty}c(\theta_t)=\frac{C_B\beta+E_G[\widetilde{J}_b(A,I_G)]}{C_B\beta+J_b^*(I_G)}.$$ Let $y$ be the time it takes for the next task to arrive after the system finishes serving the previous task.
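With $y$ so defined, the key property — that the expected-to-optimal cost ratio between two adjacent tasks does not depend on $y$ — can be checked by direct numerical integration. The sketch below assumes the standard ski-rental density $p(x)=e^{x/b}/\big(b(e-1)\big)$ on $[0,b]$ with $b=C_W/C_I$, which we take to be the distribution referred to in (\[f\_x\]):

```python
import math

def expected_ratio(y, C_W, C_I, n=50000):
    """E[cost(X, y)] / min(C_I*y, C_W) with X ~ p(x) = e^{x/b}/(b(e-1)) on [0, b]."""
    b = C_W / C_I                               # ski-rental break-even time
    p = lambda x: math.exp(x/b) / (b*(math.e - 1))
    # stay active for X: pay C_I*X idling plus a wake-up C_W if the task comes later,
    # or just C_I*y idling if it arrives first
    cost = lambda x: C_I*x + C_W if x < y else C_I*y
    h = b / n                                   # midpoint-rule quadrature
    e_cost = h * sum(cost((i + 0.5)*h) * p((i + 0.5)*h) for i in range(n))
    return e_cost / min(C_I*y, C_W)

for y in (0.3, 1.0, 2.5):                       # inter-task gaps, all <= b = 3
    assert abs(expected_ratio(y, C_W=3.0, C_I=1.0) - math.e/(math.e - 1)) < 1e-3
```

For every gap $y\le b$ the ratio comes out as $e/(e-1)\approx1.58$, consistent with the claim below and with [@karlin1994competitive].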
Similar to other on-line scheduling scenarios such as the ski rental and the snoopy caching problems [@karlin1994competitive], it can be seen via variational analysis that $E_{G}[\widetilde{J}_b(A,I_{G})]/J_b^{*}(I_{G})$ is uniform with respect to $y$, i.e., it is independent of $y$. Letting $\tilde{c}=E_{G}[\widetilde{J}_b(A,I_{G})]/J_b^{*}(I_{G})$, our goal is to come up with the best algorithm $A$ that minimizes $\tilde{c}$. It has been shown in [@karlin1994competitive] that $\tilde{c}^*$ is $e/(e-1)\approx 1.58$, and the probability distribution $P$ that achieves this ratio is given in (\[f\_x\]). Note that $J_b^*(I_G)=\min(C_I\times y,C_W)$. The impact of $\tilde{c}$ on (\[lim\_random\]) is the greatest when $J_b^*(I_G)$ takes the maximum value $C_W$. In this case, $E_G[\widetilde{J}_b(A,I_G)]$ takes the value $1.58C_W$. Therefore, $$\lim_{N\to\infty} c^*=\frac{C_B\beta+1.58C_W}{C_B\beta+C_W}=\frac{\gamma+1.58}{\gamma+1}\text{ } \blacksquare$$ ---- ---------------------------------------------------------------------------- 1. $J_{n}^{S}=C_{W}+C_{B}\beta ,\text{ }J_{n}^{F}=C_{B}\beta ,\text{ and }$ $\text{set both }J_{n}^{S}\rightarrow next\text{ and }J_{n}^{F}\rightarrow next\text{ to NULL.}$ 2. $for$ $(i=n;i-k>=1;i--)$ $\{$ 3.   Initialize $J_{i-1}^{S}\rightarrow next\text{ and }J_{i-1}^{F}\rightarrow next$ to NULL 4.   Solve $Q^{S}(i-1,n)$ 5.   Solve $Q^{F}(i-1,n)$ 6. } ---- ---------------------------------------------------------------------------- : The algorithm that returns the optimal cost of $Q(k,n)$[]{data-label="Table_Q_k_n"} ----- ---------------------------------------------------------------------------------------------------- 1. Use Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\] to find $s_{i-1,n}^{i-1},$ the optimal starting time of task $i-1$ 2. If (there exists $l$ that satisfies (\[l\_for\_J\_S\])) { 3.    Calculate $V_{i-1,l}^{SS}$ and $V_{i-1,l}^{SF}$ using (\[V\_SS\]) and (\[V\_SF\]), respectively 4.    
If ($V_{i-1,l}^{SS}+J_{l}^{S}\leq V_{i-1,l}^{SF}+J_{l}^{F})$ { 5. $\ \ \ \ \ \ J_{i-1}^{S}=V_{i-1,l}^{SS}+J_{l}^{S}$ 6.       $J_{i-1}^{S}\rightarrow next$ $=$ $J_{l}^{S}$ 7.    } 8.    else { 9. $\ \ \ \ \ \ J_{i-1}^{S}=V_{i-1,l}^{SF}+J_{l}^{F}$ 10.       $J_{i-1}^{S}\rightarrow next=$ $J_{l}^{F}$ 11.    } 12. } 13. else { // single AP case 14.    $J_{i-1}^{S}=C_{W}+(n-i+2) \beta C_{B}$ 15. } ----- ---------------------------------------------------------------------------------------------------- : The algorithm that returns the optimal cost of $Q^{S}(i-1,n)$[]{data-label="Table_Q_S"} ----- -------------------------------------------------------------------------------------------------- 1. If (there exists task $l$ that satisfies (\[l\_for\_J\_F\])) { 2.  Calculate $V_{i-1,l}^{FS}$ and $V_{i-1,l}^{FF}$ using (\[V\_FS\]) and (\[V\_FF\]), respectively 3.   If ($V_{i-1,l}^{FS}+J_{l}^{S}\leq V_{i-1,l}^{FF}+J_{l}^{F})$ { 4. $\ \ \ \ J_{i-1}^{F}=V_{i-1,l}^{FS}+J_{l}^{S}$ 5.     $J_{i-1}^{F}\rightarrow next=$ $J_{l}^{S}$ 6.  } 7.  else { 8. $\ \ \ \ J_{i-1}^{F}=V_{i-1,l}^{FF}+J_{l}^{F}$ 9.    $J_{i-1}^{F}\rightarrow next= $ $J_{l}^{F}$ 10.  } 11. } 12. else { //single AP case 13.    $J_{i-1}^{F}=(n-i+2) \beta C_{B}$ 14. } ----- -------------------------------------------------------------------------------------------------- : The algorithm that returns the optimal cost of $Q^{F}(i-1,n)$[]{data-label="Table_Q_F"} ----- ------------------------------------------------------- 1. $J=J_{k}^{S},$ $i=J.task=J^{\prime }s$ subscript, and $J.type=J^{\prime }s$ superscript 2. while ( $J\rightarrow next$ is not NULL){ 3.   $J^{\prime }=J->next$ 4.  $\ next\_task=J^{\prime }.task$ 5.   $next\_type=J^{\prime }.type$ 6.   If $(J.type=``S")${ 7.      AP starts at $s_{i,n}^{i};$ 8.   } 9.   If ($next\_type=``S"$) { 10.      AP ends after task $next\_task-1$ is served; 11.      $J=J^{\prime }$ and $i=J.task$; continue; 12.   } 13.   If ($next\_type=``F"$) { 14.      
Keep the system active through $a_{next\_task}$ 15.   } 16.   $J=J^{\prime }$ and $i=J.task$ 17. } ----- ------------------------------------------------------- : The procedure that returns the optimal control to $Q(k,n)$[]{data-label="Table_Control"} [^1]: The authors’ work is supported in part by a start-up funding provided by Middle Tennessee State University.
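The backward recursion of Tables \[Table\_Q\_k\_n\]–\[Table\_Q\_F\] can be illustrated with a compact sketch. It is a deliberately simplified variant: each AP simply wakes at the arrival of its first task, so the delayed-start optimization of Lemmas \[Lemma\_When\_to\_Start\_case1\] and \[Lemma\_When\_to\_Start\] and the deadline constraints are omitted, and the transition costs $V^{SS},V^{SF},V^{FS},V^{FF}$ are folded into a single block cost. It nevertheless reproduces the sleep/stay-awake threshold $C_W/C_I$ of Lemma \[TasksApartEndAP\]:

```python
def ap_block_cost(a, i, j, beta, C_W, C_B, C_I):
    # One AP serving tasks i..j, waking at a[i] (simplified: no delayed start).
    x = a[i]
    for t in a[i:j + 1]:
        x = max(x, t) + beta                  # back-to-back service with queueing
    busy = (j - i + 1)*beta
    return C_W + C_B*busy + C_I*(x - a[i] - busy)

def optimal_cost(a, beta, C_W, C_B, C_I):
    # Backward recursion over the task that ends each AP (cf. Table Q_k_n).
    n = len(a)
    J = [0.0]*(n + 1)                         # J[i]: optimal cost for tasks i..n-1
    for i in range(n - 1, -1, -1):
        J[i] = min(ap_block_cost(a, i, j, beta, C_W, C_B, C_I) + J[j + 1]
                   for j in range(i, n))
    return J[0]

# Gap of 9 s between departure and next arrival, > C_W/C_I = 3: two APs win.
assert abs(optimal_cost([0.0, 10.0], beta=1.0, C_W=3.0, C_B=1.0, C_I=1.0) - 8.0) < 1e-9
# Gap of 1 s, < C_W/C_I: idling through it in a single AP is cheaper.
assert abs(optimal_cost([0.0, 2.0], beta=1.0, C_W=3.0, C_B=1.0, C_I=1.0) - 6.0) < 1e-9
```

In the first instance the recursion chooses to sleep between the two tasks (cost $2C_W+2\beta C_B=8$); in the second it keeps one AP and pays one second of idling instead of a second wake-up.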
--- abstract: 'The volume of a black hole under a noncommutative spacetime background is found to be infinite, in contrast with the surface area of a black hole, or its Bekenstein-Hawking (BH) entropy, which is well known to be finite. Our result rules out the possibility of interpreting the entropy of a black hole by counting the number of modes wrapped inside its surface, provided the final evaporation stage is properly treated. It implies that the statistical interpretation of the BH entropy can be independent of the volume, provided spacetime is noncommutative. The effect of radiation back reaction is found to be small and does not influence the above conclusion.' author: - Baocheng Zhang - Li You title: Infinite Volume of Noncommutative Black Hole Wrapped by Finite Surface --- Introduction ============ The interior of a black hole is not causally connected to its exterior. As a result, an external observer is prohibited from gaining any information about the interior of a black hole, including the value of its volume. The volume of a black hole was first investigated by Parikh [@mkp06]. Subsequent studies invoked a slicing-invariant definition with different time-like Killing vectors [@dg05; @dm10; @bl10; @cgp11; @bl13]. Recently, Christodoulou and Rovelli (CR) suggested a different approach [@cr15], which was followed by investigations in several spacetime backgrounds [@bj15; @yco15; @yco152]. The CR volume $V_{\mathrm{CR}}$ is defined as the largest volume bounded by the event horizon. Such a definition paints a different picture for the final stage of a collapsed black hole, allowing it to possess a very large interior instead of shrinking to a point. The volume of a black hole is often discussed in connection with the information loss paradox [@swh76]. The life of a black hole is often considered to consist of two distinct segments: formation followed by evaporation, assuming the latter starts only after the former is completed.
Such a hypothesis prompts a natural but seldom asked question: where is the information about the collapsed matter at the instant separating the two segments, i.e., when formation is completed but evaporation is yet to begin? For a Schwarzschild black hole, plausible answers consider information stored in its interior, or distributed on its horizon as was initially postulated in the quantum hairs discussion [@kw89]. The existence of quantum hairs implies that information could reside on the horizon before evaporation. Recently, Hawking *et al.* showed how to implant soft hairs on the horizon, with soft degrees of freedom proportional to the area of the horizon in Planck units, based on physical processes of an elegant mechanism exploiting the soft graviton theorem [@hps16]. The possibility that information can hide in the interior, however, is not ruled out. To establish a firm answer, following the conversation among Jacobson, Marolf, and Rovelli [@jmr05], Parikh suggested the illuminating idea [@mkp06] of constructing families of spacetimes whose horizon areas or surfaces are bounded but whose enclosed volumes can be arbitrarily large. The entropies of such black holes must be independent of their volumes. The amount of information associated with the Bekenstein-Hawking (BH) entropies would then have to be distributed over their horizons. For stationary black holes in both three and four dimensions, Parikh did not provide concrete constructions [@mkp06]. Christodoulou and Rovelli showed $V_{\mathrm{CR}}\sim3\sqrt{3}\pi M^{2}v$ for a Schwarzschild black hole [@cr15], with $M$ the (initial) mass and $v$ the advanced time. This interesting result satisfies the requirement of Parikh [@mkp06], i.e., the volume becomes infinite for $v\to\infty$. However, due to Hawking radiation [@swh74], a Schwarzschild black hole evaporates and disappears at $v\sim M^{3}$.
Such an estimate for $v$ limits the volume of the (initial) black hole to a large but finite value $V_{\mathrm{CR}}\sim$ $M^{5}$. Is this volume sufficient to house enough modes to explain the BH entropy statistically? A recent calculation [@zbc15] suggests a negative answer, although it finds that the entropy calculated by counting modes housed in this volume is proportional to the surface area. However, its treatment of the final evaporation stage might be inadequate [@zbc15]. An improved treatment would likely lead to a revised mass loss rate as evaporation approaches the Planck scale, which will alter the estimated lifetime of a black hole. This Letter presents our effort towards this goal by invoking spacetime noncommutativity, which is capable of treating the final evaporation stage without problematic singularities [@nss06; @ans07]. Adoption of spacetime noncommutativity leads to different black holes [@ss031; @ss032] and their associated thermodynamics (see [@pn09] for a review and the references therein). In noncommutative spacetime, the singularity in the interior of a black hole disappears, and a remnant always remains after black hole evaporation [@acn87; @coy15], which helps to remove the so-called Hawking paradox of a diverging temperature as the black hole radius shrinks to zero. We show that such an approach overcomes the uncertainty of the earlier result [@zbc15] by providing an improved description of the final evaporation stage, with which we establish a firm answer to whether the interior of a noncommutative black hole is large enough to explain the BH entropy. We find a concrete example of a black hole with an infinite volume but a finite horizon area. Throughout this paper, we use units with $G=c=\hbar=k_{B}=1$. Noncommutative black hole ========================= We begin by briefly reviewing the noncommutative Schwarzschild black hole and its thermodynamics.
The spacetime coordinate $x^{\mu}$ becomes noncommuting [@nss06] $$\left[ x^{\mu},x^{\nu}\right] =i\theta \epsilon^{\mu\nu}, \label{non-com}$$ with the noncommutative parameter $\theta$ of dimension length squared. It is a constant required by Lorentz invariance and unitarity [@ss04], and $\epsilon^{\mu\nu}$ is a real antisymmetric tensor. The commutation relation Eq. (\[non-com\]) gives $\Delta x^{\mu}\Delta x^{\nu}\geq\frac{1}{2}\theta$, an analogue of the Heisenberg uncertainty relation, which dictates that spacetime has no point-like structure and is free from gravitational singularities. The direct application of noncommutative coordinates to black holes is inconvenient, so in this paper we adopt the idea of Ref. [@nss06], where the spatial noncommutative effect is attributed to a modified energy-momentum tensor acting as the source, while the Einstein tensor is unchanged. In flat spacetime, noncommutativity eliminates point-like structures in favor of smeared distributions [@ss031; @ss032]. When applied to spacetime in gravity, one can simply make the corresponding substitutions. Instead of the Dirac $\delta$-function $\rho_{\theta}\left( r\right) =M\delta(r)$ usually employed for a point mass $M$ at the origin in commutative spacetime, the smearing leads to a Gaussian distribution $$\rho_{\theta}(r) =\frac{M}{(4\pi\theta)^{\frac{3}{2}}}e^{-\frac{r^{2}}{4\theta}}, \label{nmd}$$ of width $\sim\sqrt{\theta}$, with which the energy-momentum tensor was identified for a self-gravitating droplet of anisotropic fluid [@nss06]. Solving the Einstein equation gives $$\begin{aligned} ds^{2} =& -\left[ 1-\frac{4M}{r\sqrt{\pi}}\gamma\left( \frac{3}{2},\frac{r^{2}}{4\theta}\right) \right] dt^{2}\nonumber\\ & +\left[ 1-\frac{4M}{r\sqrt{\pi}}\gamma\left( \frac{3}{2},\frac{r^{2}}{4\theta}\right) \right] ^{-1}dr^{2}+r^{2}d\Omega^{2}, \label{nsc}\end{aligned}$$ with the lower incomplete gamma function $\gamma(\nu,x) =\int_{0}^{x}t^{\nu-1}e^{-t}dt$, which for $\nu=3/2$ approaches $\sqrt{\pi}/2$ as $r\to\infty$.
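The metric function of Eq. (\[nsc\]) is easy to probe numerically. The sketch below (units with $\sqrt{\theta}=1$) uses the closed form $\gamma(3/2,x)=\frac{\sqrt{\pi}}{2}\operatorname{erf}(\sqrt{x})-\sqrt{x}\,e^{-x}$, which follows from integrating the definition by parts; it checks that $-g_{tt}$ is regular at the origin, that the outer horizon of a heavy hole sits at the Schwarzschild value $2M$, and that the horizon mass obtained from $g_{tt}(r_h)=0$ has its minimum near $M_{0}\simeq1.9\sqrt{\theta}$ at $r_{0}\simeq3.0\sqrt{\theta}$:

```python
import math

theta = 1.0                                  # units with sqrt(theta) = 1

def gamma_low_32(x):
    # lower incomplete gamma(3/2, x) = (sqrt(pi)/2) erf(sqrt(x)) - sqrt(x) e^{-x}
    return math.sqrt(math.pi)/2*math.erf(math.sqrt(x)) - math.sqrt(x)*math.exp(-x)

def f(r, M):
    # f(r) = -g_tt of the noncommutative metric
    return 1 - 4*M/(r*math.sqrt(math.pi))*gamma_low_32(r*r/(4*theta))

# The smeared source keeps the metric regular at the origin: f -> 1 as r -> 0.
assert abs(f(1e-6, 10.0) - 1) < 1e-9

# Outer horizon of a heavy hole (M = 10) sits at the Schwarzschild value 2M.
lo, hi = 3.0, 40.0                           # f(lo) < 0 < f(hi)
for _ in range(80):                          # bisection on f(r) = 0
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f(mid, 10.0) < 0 else (lo, mid)
r_h = 0.5*(lo + hi)
assert abs(r_h - 20.0) < 1e-6

# Horizon mass M(r_h); its minimum is the remnant mass M_0.
rs = [1.0 + 0.001*k for k in range(5001)]    # r_h in [1, 6]
Ms = [math.sqrt(math.pi)*r/(4*gamma_low_32(r*r/(4*theta))) for r in rs]
i = min(range(len(Ms)), key=Ms.__getitem__)
assert abs(rs[i] - 3.0) < 0.1 and abs(Ms[i] - 1.9) < 0.05
```

For $M\gg\sqrt{\theta}$ the incomplete gamma saturates at $\sqrt{\pi}/2$, so the outer horizon reproduces $r_h=2M$ up to exponentially small corrections.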
For $\theta\rightarrow0$, $\gamma(\nu,x)$ reduces to the usual $\Gamma(\nu)$-function and the noncommutative metric Eq. (\[nsc\]) becomes the commutative Schwarzschild metric. The condition $g_{tt}(r_{h})=0$ gives the event horizon $$r_{h}=\frac{4M}{\sqrt{\pi}}\gamma\left( \frac{3}{2},\frac{r_{h}^{2}}{4\theta }\right) \equiv\frac{4M}{\sqrt{\pi}}\gamma_{h}, \label{rh}$$ which takes the minimum $r_{h}^{(\min)}=r_{0}\simeq3.0\sqrt{\theta}$ at $M_{0}\simeq1.9\sqrt{\theta}$, determined by ${dM}/dr_{h}=0$, where the two horizons merge into one. In particular, no horizon exists below $M_{0}$; this regime is not of concern in this paper. The temperature is obtained for the static noncommutative metric Eq. (\[nsc\]), $$T_{h}=\frac{1}{4\pi}\left. \frac{dg_{tt}}{dr}\right\vert _{r=r_{h}}=\frac {1}{4\pi r_{h}}\left(1-\frac{r_{h}^{3}}{4\theta^{\frac{3}{2}} \gamma_{h}} {e^{-\frac{r_{h}^{2}}{4\theta}}}\right), \label{temp}$$ which reaches its maximum at $M\simeq2.4\sqrt{\theta}$ and decreases to zero at $M=M_{0}$, as shown by the red solid line in Fig. \[fig1\]. When $\theta\rightarrow0$, $T_{h}$ reduces to $T_{H}={1}/({8\pi M})$, as in commutative spacetime, denoted by the blue dashed line. From the first law of black hole thermodynamics $TdS=dM$, we find the entropy $$S_{h}=\int \frac{dM}{T}\simeq 4\pi M^{2}\left( 1-\frac{4M}{\sqrt {\pi\theta}}e^{-\frac{M^{2}}{\theta}}\right), \label{ncbe}$$ up to terms of order $e^{-{\frac{M^{2}}{\theta}}}/{\sqrt{\theta}}$. ![(Color online) The temperatures $T$ for an evaporating Schwarzschild black hole of (initial) mass $M$. The red solid (blue dashed) line refers to noncommutative (commutative) spacetime. []{data-label="fig1"}](fig1.eps){width="2.75in"} Volume and Entropy ================== It was shown [@cr15; @cl16] that the definition of the CR volume applies not only to Schwarzschild black holes but also to any other spherically symmetric spacetime.
Therefore, as described in the last section, the interior of the noncommutative black holes (\[nsc\]) admits the CR volume. To this end, one rewrites the metric (\[nsc\]) in terms of ingoing Eddington-Finkelstein coordinates, $$ds^{2}=-f\left( r\right) dv^{2}+2dvdr+r^{2}d\varphi^{2}+r^{2}\sin^{2}\varphi d\phi^{2}, \label{ef}$$ with $f(r)=1-\frac{4M}{r\sqrt{\pi}}\gamma\left( \frac{3}{2},\frac{r^{2}}{4\theta}\right)$, and the advanced time $v=t+r^{\ast}$ for $r^{\ast}\equiv\int^{r}\frac{dr^{\prime}}{f(r^{\prime})}$. As shown before [@cr15; @bj15], the CR volume is mostly related to the region that is not causally connected with matter fallen far into the black hole, so the volume is given by the integral $$V_{\mathrm{NCR}}=\int^{v}\max\left[F(r)\,\right] dvd\varphi d\phi,$$ which is dominated by its upper limit $v$, the lower integration limit being irrelevant, as pointed out in Ref. [@bj15]. We then find the maximal value of the integrand $F(r)=r^{2}\sqrt{\frac{4M}{r\sqrt{\pi}}\gamma\left( \frac{3}{2},\frac{r^{2}}{4\theta}\right) -1}$ by setting $\frac{dF}{dr}=0$; the maximum is found to occur at $$r\simeq r_{n}\left(1-\frac{r_{n}}{\sqrt{\pi\theta}}e^{-{\frac{r_{n}^{2}}{4\theta}}}\right), \label{ncrm}$$ with $r_{n}={3}M/{2}$. Carrying out the integration, we find $$V_{\mathrm{NCR}}\simeq3\sqrt{3}\pi M^{2}v\left( 1-\frac{2r_{n}}{\sqrt {\pi\theta}}e^{-{\frac{r_{n}^{2}}{4\theta}}}\right), \label{ncrv}$$ which is explicitly modified by $\theta$, apart from implicit $\theta$-dependence, e.g., in $v$. Next we discuss the entropy associated with $V_{\mathrm{NCR}}$. We recast the metric Eq. (\[ef\]) into the form $$ds^{2}=-dT^{2}-\left[f(r)\dot{v}^{2}-2\dot{v}\dot{r}\right] d\lambda^{2} +r^{2}d\Omega^{2}, $$ with the transformation $dv=\frac{-1}{\sqrt{-f}}dT+d\lambda$ and $dr=\sqrt{-f}dT$.
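The location of the maximum of $F(r)$ can be confirmed numerically. For $M\gg\sqrt{\theta}$ the correction in (\[ncrm\]) is exponentially small, so the maximum should sit essentially at $r_{n}=3M/2$, with $\max F=3\sqrt{3}M^{2}/4$ in the commutative limit (a sketch, units $\sqrt{\theta}=1$):

```python
import math

theta, M = 1.0, 20.0                        # units sqrt(theta) = 1; M >> sqrt(theta)

def gamma_low_32(x):
    # lower incomplete gamma(3/2, x) = (sqrt(pi)/2) erf(sqrt(x)) - sqrt(x) e^{-x}
    return math.sqrt(math.pi)/2*math.erf(math.sqrt(x)) - math.sqrt(x)*math.exp(-x)

def F(r):
    # integrand r^2 sqrt(-f(r)); nonzero only between the two horizons
    g = 4*M/(r*math.sqrt(math.pi))*gamma_low_32(r*r/(4*theta)) - 1
    return r*r*math.sqrt(g) if g > 0 else 0.0

rs = [5.0 + 0.001*k for k in range(35001)]  # scan r in [5, 40]
r_max = max(rs, key=F)
assert abs(r_max - 1.5*M) < 0.01            # maximum at r ~ 3M/2, cf. Eq. (ncrm)
assert abs(F(r_max) - 3*math.sqrt(3)/4*M*M) < 0.01
```

Multiplying $\max F=3\sqrt{3}M^{2}/4$ by the angular integration and the advanced time reproduces the $3\sqrt{3}\pi M^{2}v$ scaling of (\[ncrv\]).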
Since the volume refers to the late time $v$ at $r\simeq r_{n}\left(1-\frac{r_{n}}{\sqrt{\pi\theta}}e^{-{\frac{r_{n}^{2}}{4\theta}}}\right)$, we take the constant-$T$ hypersurface to count the number of quantum field modes that can be housed in $V_{\mathrm{NCR}}$. With suitable modifications consistent with the uncertainty relationship [@skr01; @cmt02; @lx02], the earlier method [@zbc15] remains applicable to more general cases of quantum gravity effects, including the spacetime noncommutativity discussed here. The commutation between conjugate positions $x_{i}$ and momenta $p_{j}$ is unchanged, $[x_{i},p_{j}]=i\delta_{ij}$ [@ss031; @ss032], i.e., the uncertainty relation $\Delta x_{i}\Delta p_{i}\sim2\pi$ is retained. Now the phase space is labeled by $\{\lambda,\varphi,\phi,p_{\lambda},p_{\varphi},p_{\phi }\}$, and the volume element takes the form $d\lambda d\varphi d\phi dp_{\lambda}dp_{\varphi}dp_{\phi}/(2\pi)^{3}$. Compared with the commutative spacetime discussed earlier [@zbc15], the only change concerns the factor $f(r)$ when computing the noncommutative entropy. We find the same form [@zbc15], $$S_{\mathrm{NCR}}=\frac{\pi^{2}V_{\mathrm{NCR}}}{45\beta_{h}^{3}}, \label{ncrve}$$ with both $V_{\mathrm{NCR}}$ and $\beta_{h}$ modified by noncommutativity. To study $S_{\mathrm{NCR}}$ in more detail, we need to specify the time $v$. As shown clearly by the red solid line in Fig. \[fig1\], the evaporation of a noncommutative black hole involves two stages; before the maximal temperature is reached, it is similar to what happens in a commutative Schwarzschild black hole; after that, the temperature decreases to zero and a cold remnant of mass $M_{0}\simeq1.9\sqrt{\theta}$ is left behind. First, we investigate the influence of back reaction. As is well known, the dynamic evolution of a spherically symmetric black hole due to Hawking radiation can be described by the Vaidya metric [@pv51].
With this metric, the CR volume was already calculated and the change was found to be insignificant [@yco152; @cl16], where the advanced time $v$ was estimated using the Stefan-Boltzmann law (see Eq. (\[sbl\]) below), since the black hole mass was taken to be greater than the Planck mass [@sm95]. However, the associated entropy is yet to be investigated for this case. ![(Color online) Entropies $S$ for a commutative Schwarzschild black hole (of initial mass $M$). The red (green) solid line denotes the entropy associated with the volume when back reaction is included (excluded). The blue dashed line denotes the noncommutative BH entropy $S_{h}$ of Eq. (\[ncbe\]). []{data-label="fig2"}](fig2.eps){width="2.75in"} Transforming the mass into $M'(v)=\Theta(v)(M^{3}-3Bv)^{\frac{1}{3}}$, with $\Theta(v)$ the Heaviside step function and the parameter $B\sim10^{-3}$ related to the back reaction [@sm95], the CR volume can be reexpressed as [@yco152; @cl16] $$V_{\mathrm{CR}}^{\prime}\simeq3\sqrt{3}\pi M^{2}v\left( 1-\frac{9B}{2M^{2}}\right) ,$$ with which we can compute the entropy associated with the CR volume, including the effect of back reaction, using Eq. (\[ncrve\]). Figure \[fig2\] illustrates clearly that the entropy associated with the volume is insufficient for a statistical interpretation of the Bekenstein-Hawking entropy, even when the back reaction is included. This interesting result rules out the possibility of balancing black hole information loss with a huge but finite volume [@cr15; @cl16; @ao16]. It is also seen from Fig. \[fig2\] quantitatively that the influence of back reaction is small for our purpose. In what follows we will not include modifications from the back reaction in our calculations. According to Ref. [@mkp07], in the first evaporation stage, the mass loss rate for a black hole is given by the Stefan-Boltzmann law $$\frac{dM}{dv}=-\frac{1}{\gamma M^{2}},\ \ \gamma>0.
\label{sbl}$$ The specific value for the constant $\gamma$ does not influence the present study. Integrating from an initial black hole mass $M$, one finds $v\sim\gamma( M^{3}-M_{f}^{3}) \sim\gamma M^{3}$, with $M_{f}$ ($\ll$ $M$) being the critical mass where Eq. (\[sbl\]) becomes invalid. Assuming the first evaporation stage dominates $v$, omitting the second evaporation stage, and with the definitions for $M$ and $\beta_{h}=T_{h}^{-1}$, we find $$S_{\mathrm{NCR}}\sim\frac{\sqrt{3}\gamma M^{2}}{7680}\left( 1-\frac{3M e^{-\frac{9M^{2}}{16\theta}}}{\sqrt{\pi\theta}}\right) \left(1-\frac {2M e^{-{\frac{M^{2}}{\theta}}}}{\sqrt{\pi\theta}}\right)^{3}. \nonumber$$ The surface area for a noncommutative black hole can be expressed approximately as $A_{h}=16\pi M^{2}\left( 1-\frac{2M}{\sqrt{\pi\theta}}e^{-\frac{M^{2}}{\theta}}\right)^{2}$. Thus the entropy $S_{\mathrm{NCR}}$ associated with the noncommutative volume is proportional to the horizon area of a noncommutative black hole. We can rewrite it as $S_{\mathrm{NCR}}\sim \frac{\sqrt{3}\gamma\varepsilon\left( \theta\right) }{122880\pi}A_{h}$, where $\varepsilon\left( \theta\right) =\left( 1-\frac{3M}{\sqrt{\pi\theta }}e^{-\frac{9M^{2}}{16\theta}}\right) \left( 1-\frac{2M}{\sqrt{\pi\theta}}e^{-{\frac{M^{2}}{\theta}}}\right)$ approaches unity if $v$ is approximated by the first evaporation stage. Figure \[fig3\] compares $S_{\mathrm{NCR}}$ with $S_{h}$. The entropy associated with the noncommutative volume remains clearly insufficient for a statistical interpretation of the BH entropy if $v$ only accounts for the first evaporation stage. ![(Color online) The entropy $S$ associated with the volume of a noncommutative black hole in red solid line with estimated $v$ limited to the first evaporation stage. The blue dashed line denotes the noncommutative BH entropy $S_{h}$. 
[]{data-label="fig3"}](fig3.eps){width="2.75in"} A refined description with noncommutative spacetime can incorporate the second evaporation stage [@pn09], which offers a scenario different from the final explosion with diverging temperature of a commutative Schwarzschild black hole shown in Fig. \[fig1\]. Modifications due to noncommutative spacetime become important in the final evaporation stage, especially when $M_{f}$ approaches $M_{0}$. In this regime, the temperature can be approximated as $T_{h}\simeq\alpha (M_{f}-M_{0})$, with $\alpha=\frac{dT_{h}}{dM}|_{r_{h}=r_{0}}$. The same analysis as in Ref. [@mkp07] gives for large $v$, $$v\sim\frac{1}{\left(M_{f}-M_{0}\right)^{3}}.$$ The final evaporation stage thus needs an infinite time, although the net change to the black hole radius will only be several $\sqrt{\theta}$. While counter-intuitive at first sight, this result is consistent with the third law of thermodynamics: a zero-temperature state cannot be reached in a finite number of steps or within a finite time. The statistical entropy in Eq. (\[ncrve\]) formally remains the same as in commutative spacetime, except for the noncommutative modifications in the expression of the volume. We thus immediately arrive at $$V_{\mathrm{NCR}}\simeq3\sqrt{3}\pi\frac{M_{f}^{2}\left( 1-\frac{2r_{n}}{\sqrt{\pi\theta}}e^{-{\frac{r_{n}^{2}}{4\theta}}}\right) }{\left( M_{f}-M_{0}\right)^{3}}, \label{ncref}$$ for $v\rightarrow\infty$. It constitutes an example of a black hole with an infinite volume wrapped by a finite horizon, since $V_{\mathrm{NCR}}$ is divergent when $M_{f}\to M_{0}$, as shown in Fig. \[fig4\]. From Eq. (\[ncrve\]), we obtain $$S_{\mathrm{NCR}}\sim\eta\alpha^{3}A_{h},$$ with $\eta=\frac{\sqrt{3}\pi^{2}\left(1-\frac{2r_{n}}{\sqrt{\pi\theta}} e^{-{\frac{r_{n}^{2}}{4\theta}}}\right)}{15\times16\left( 1-\frac{2M}{\sqrt{\pi\theta}}e^{-{\frac{M^{2}}{\theta}}}\right)^{2}} \simeq0.05$. It diverges as well, although at a slower rate, as shown in Fig.
\[fig4\], because $\alpha\rightarrow\infty$ when $r_{h}\rightarrow r_{0}$ from $\frac {dM}{dr_{h}}|_{r_{h}=r_{0}}=0$ as discussed earlier. The noncommutative entropy is larger than the noncommutative BH entropy below a critical value near $M_{0}$ determined by the curve crossing. A noncommutative black hole that evaporates down to a mass below this critical value can thus house a larger information storage capacity than the BH entropy. The crossing point shown in Fig. \[fig4\] is approximate, since the graphed volume-associated entropy is estimated using (divergent) $v$ from the second evaporation stage only. ![(Color online) The comparison between the noncommutative volume $V$ (black solid line) and the associated entropy $S$ (red solid line). The blue dashed line denotes the noncommutative BH entropy $S_{h}$ of Eq. (\[ncbe\]). []{data-label="fig4"}](fig4.eps){width="2.75in"} The divergent volume of Eq. (\[ncref\]) is thus a general feature of noncommutative black holes irrespective of their other details. This shows that a noncommutative black hole can possess an infinite CR volume. It belongs to a class of black holes with a finite surface \[BH entropy Eq. (\[ncbe\])\] but an infinite interior. Such a result supports the interpretation that the BH entropy might be independent of the interior of a black hole. Conclusion and Discussion ========================= In conclusion, we have studied the CR volume based on thermodynamics for a Schwarzschild black hole in noncommutative spacetime, which allows for a well-described final evaporation stage and results in a finite cold remnant at zero temperature. We find that the CR volume is similar in expression to the result in commutative spacetime, except for a modification prefactor that depends on the noncommutative parameter. The improved estimate for the advanced time $v$ leads to a divergent volume. Thus we show that the noncommutative Schwarzschild black hole represents a class of black holes with an infinite volume wrapped inside a finite surface.
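The infinite evaporation time behind this conclusion can be illustrated with a minimal numerical sketch (ours, not from the paper; toy units). It assumes the late-stage loss rate $dM/dv=-k\,(M-M_{0})^{4}$ that follows from $T_{h}\simeq\alpha\,(M-M_{0})$ together with a Stefan-Boltzmann-type law, so that $v\sim(M_{f}-M_{0})^{-3}$ diverges as $M_{f}\to M_{0}$; the constant $k$ and the masses are arbitrary toy values:

```python
# Late-stage toy model: dM/dv = -k*(M - M0)**4 integrates to
# (Mf - M0)**-3 - (Mi - M0)**-3 = 3*k*v, so the advanced time needed
# to evaporate down to a final mass Mf diverges like (Mf - M0)**-3
# as Mf -> M0 (all quantities in arbitrary toy units).
k, M0, Mi = 1.0, 1.0, 2.0

def advanced_time(Mf):
    """Advanced time to go from initial mass Mi down to Mf > M0."""
    return ((Mf - M0) ** -3 - (Mi - M0) ** -3) / (3.0 * k)

# Halving the distance to the remnant mass costs ~8x more time;
# a factor of 10 costs ~1000x.
v1 = advanced_time(M0 + 1e-2)
v2 = advanced_time(M0 + 1e-3)
ratio = v2 / v1   # close to (1e-2 / 1e-3)**3 = 1000
```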
Our work also sheds light on the information loss paradox. Assuming unitarity, regardless of the "firewall" [@amps13], our result implies that information for a noncommutative black hole is stored on the horizon, consistent with the proposed idea of soft hair, and this information can be carried away by Hawking radiation. Such a scenario has been studied by several groups [@jdb93; @zcy09; @zcy11; @gy13], despite the lack of a microscopic mechanism for information transfer. If information were not carried away by the radiation, the assumed unitarity would become questionable. Irrespective of whether radiation back reaction is included or not, we find that the interior of a commutative black hole houses an insufficient number of modes to account for the BH entropy, as shown in Fig. \[fig2\]. Introducing noncommutative spacetime but neglecting the important second evaporation stage does not change this conclusion, as shown in Fig. \[fig3\]. The refined treatment for the final evaporation stage by using noncommutative spacetime, on the other hand, results in a divergent volume, which shows that the finite BH entropy is statistically independent of the black hole interior. Reaching the remnant of a noncommutative black hole requires an infinite time, which implies that although information might be stored in the interior of a black hole, in order to preserve unitarity the complete process of evaporation needs an infinite time. This seemingly unpleasant outcome essentially heralds the breakdown of unitarity if information were housed in the black hole interior and not carried away by the radiation. Acknowledgments =============== B.C.Z. is supported by NSFC (No. 11374330 and No. 91636213) and by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (No. CUG150630). The work of L.Y. is supported by MOST 2013CB922004 of the National Key Basic Research Program of China, and by NSFC (No. 11374176 and No. 91421305). [99]{} M. K. Parikh, Phys. Rev.
D **73**, 124021 (2006). D. Grumiller, arXiv: gr-qc/0509077 B. S. DiNunno and R. A. Matzner, Gen. Relativ. Gravit. **42**, 63 (2010). W. Ballik and K. Lake, arXiv: 1005.1116. M. Cvetič, G. W. Gibbons, D. Kubiznák, and C. N. Pope, Phys. Rev. D **84**, 024037 (2011). W. Ballik and K. Lake, Phys. Rev. D **88**, 104038 (2013). M. Christodoulou and C. Rovelli, Phys. Rev. D **91**, 064046 (2015). I. Bengtsson and E. Jakobsson, Mod. Phys. Lett. A **30**, 1550103 (2015). Y. C. Ong, JCAP **04**, 003 (2015). Y. C. Ong, Gen. Relativ. Gravit. **47**, 88 (2015). S. W. Hawking, Phys. Rev. D **14**, 2460 (1976). L. M. Krauss and F. Wilczek, Phys. Rev. Lett. **62**, 1221 (1989). S. W. Hawking, M. J. Perry, and A. Strominger, Phys. Rev. Lett. **116**, 231301 (2016). T. Jacobson, D. Marolf, and C. Rovelli, Int. J. Theor. Phys. **44**, 1807 (2005). S. W. Hawking, Nature (London) **248**, 30 (1974); Commun. Math. Phys. **43**, 199 (1975). B. Zhang, Phys. Rev. D **92**, 081501(R) (2015). P. Nicolini, A. Smailagic, and E. Spallucci, Phys. Lett. B **632**, 547 (2006). S. Ansoldi, P. Nicolini, A. Smailagic, and E. Spallucci, Phys. Lett. B **645**, 261 (2007). A. Smailagic and E. Spallucci, J. Phys. A **36**, L517 (2003). A. Smailagic and E. Spallucci, J. Phys. A **36**, L467 (2003). P. Nicolini, Int. J. Mod. Phys. A **24**, 1229 (2009). Y. Aharonov, A. Casher, and S. Nussinov, Phys. Lett. B **191**, 51 (1987). P. Chen, Y. C. Ong, and D.-h. Yeom, Phys. Rept. **603**, 1 (2015). A. Smailagic and E. Spallucci, J. Phys. A **37**, 1 (2004). M. Christodoulou and T. D. Lorenzo, arXiv: 1604.07222. S. K. Rama, Phys. Lett. B **519**, 103 (2001). L. N. Chang, D. Minic, N. Okamura, and T. Takeuchi, Phys. Rev. D **65**, 125028 (2002). L. Xiang, Phys. Lett. B **540**, 9 (2002). P. Vaidya, Proc. Indian Acad. Sci. A **33**, 264 (1951). S. Massar, Phys. Rev. D **52**, 5857 (1995). A. Ori, Gen. Relativ. Gravit. **48**, 9 (2016). Y. S. Myung, Y. -W. Kim, and Y. -J. Park, JHEP **02**, 012 (2007). A. 
Almheiri, D. Marolf, J. Polchinski, and J. Sully, JHEP **02**, 062 (2013). J. D. Bekenstein, Phys. Rev. Lett. **70**, 3680 (1993). B. Zhang, Q. Y. Cai, L. You, and M. S. Zhan, Phys. Lett. B **675**, 98 (2009). B. Zhang, Q. Y. Cai, M. S. Zhan, and L. You, Euro. Phys. Lett. **94**, 20002 (2011). S. B. Giddings and Y. Shi, Phys. Rev. D **87**, 064031 (2013).
Introduction ============ Coherent states and their applications in theoretical and technological fields have been the subject of many studies [@Art01-01], [@Art01-02], [@Art01-03], [@Art01-04], [@Art01-05], [@Art01-06], [@Art01-07], [@Art01-08], [@Art01-09], [@Art01-10], [@Art01-11], [@Art01-12], [@Art01-13], [@Art01-14], [@Art01-15], [@Art01-16], [@Art01-17], [@Art01-18], [@Art01-19] and [@Art01-20].\ Coherent states play an important role in quantum physics. Among their important properties is the fact that the mean value of the position operator in these states perfectly reproduces the classical behavior in the case of the harmonic oscillator. These states also saturate the Heisenberg inequality. Many generalizations have been proposed for the notion of coherent states for systems which are more complex than the harmonic oscillator. The path taken by supersymmetric quantum mechanics is a very simple one. When a Hamiltonian can be written as a product of an operator and its adjoint (plus a constant), one is basically in the situation of the harmonic oscillator with a creation and a destruction operator. The coherent states in this context are defined as the eigenvectors of the destruction operator. It has been shown that these states saturate a generalized uncertainty relation which is not the Heisenberg one [@Art01-01].\ For some potentials which play an important role in physics and chemistry (for example, the Morse potential), the SUSY coherent states have been computed using a clever trick which bypasses the resolution of the Riccati equation for the determination of the superpotential [@Art01-01]. But when it comes to the study of the mean value of the position or the momentum operator, one is confronted by the fact that these coherent states are not normalizable. One may try an approach using wave packets but the analysis becomes cumbersome. The path taken here is the following.
For the harmonic oscillator, the coherent states are normalizable. The superpotential is a first-degree polynomial. To get an insight into the subject, we study an ad hoc system whose superpotential is polynomial. For simplicity and to ensure normalizability, we take it to be of third degree. The physical potential is then a sixth degree polynomial whose classical trajectories can be computed. The coherent states can be obtained in closed form. We then proceed to the calculation of the mean value of the position operator for these states. Comparison is then made with the classical trajectories.\ The paper is organized as follows. The second section is a quick reminder of the formalism of SUSY coherent states that we need. The third section deals with the calculation of the mean value of the position operator in general. We apply it to a simple toy model in the fourth section. The treatment of the harmonic oscillator is put in the Appendix and shows that the analysis captures what is known by other methods. A Quick Reminder of Coherent States and SUSYQM ============================================== There are many definitions of coherent states which are not exactly equivalent.\ Let us first consider the case of the harmonic oscillator [@Art01-27]. One has the following properties: - The map $\mathbb{C} \owns z \longrightarrow \lvert z \rangle \in L^2 (\mathbb{R})$ is continuous. - $\lvert z \rangle$ is an eigenvector of the annihilation operator: $a \lvert z \rangle = z \lvert z \rangle$. - The coherent states family resolves the unity $\frac{1}{\pi} \ \int_{\mathbb{C}} \lvert z \rangle \langle z \rvert d^2 z = \mathbf{1}$. - The coherent states saturate the Heisenberg inequality: $\Delta q \ \Delta p = \frac{1}{2}$. - The coherent states family is temporally stable.
- The mean value (or “lower symbol”) of the Hamiltonian mimics the classical energy-action relation: $\check{H}(z) = \langle z \lvert \mathbf{\hat{H}} \rvert z \rangle = \omega \lvert z \rvert^2 + \frac{1}{2} $. - The coherent states family is the orbit of the ground state under the action of the Weyl-Heisenberg displacement operator: $\lvert z \rangle = e^{ z a^\dag - \overline{z} a } \lvert 0 \rangle \equiv D(z) \lvert 0\rangle$. - The coherent states provide a straightforward quantization scheme: Classical state $z \longrightarrow \lvert z \rangle \langle z \rvert$ Quantum state. - The mean value of the position operator in these states reproduces the classical trajectories. Let us now consider a system with an infinite set of discrete eigenenergies whose eigenstates resolve unity. $$\begin{aligned} \begin{split} \label{eqa27} \hat{H} \lvert \psi_n \rangle &= E_n \lvert \psi_n \rangle,\\ \langle \psi_m \lvert \psi_n \rangle &= \delta_{m,n},\\ \sum_{m=1}^{+\infty} \lvert \psi_m \rangle \langle \psi_m \rvert &= 1. \end{split}\end{aligned}$$ One introduces the raising and lowering operators by the relations $$\begin{aligned} \begin{split} \label{eqa73} A^- \lvert \psi_n \rangle &= \sqrt{k(n)} \lvert \psi_{n-1} \rangle,\\ A^+ \lvert \psi_n \rangle &= \sqrt{k(n+1)} \lvert \psi_{n+1} \rangle,\\ \rho(n)& = \prod_{i=1}^n k(i), \ \rho(0) = 1. \end{split}\end{aligned}$$ For the harmonic oscillator $k(i) = i$.\ The coherent states can be defined as the eigenvectors of the lowering operator [@Art01-31].
$$\label{eqa31} A^- \lvert z, \alpha \rangle = z \lvert z, \alpha \rangle.$$ It is possible to show that the time evolution sends a coherent state to another coherent state, with different parameters.\ All coherent states can be obtained by acting on the ground state by the displacement operator.\ The definition adopted by Klauder [@Art01-16] is that the coherent states are given by $$\label{eqa72} \lvert \psi(z) \rangle = \frac{1}{\sqrt{{\cal{N}}(\lvert z \rvert^2)}} \sum_{n \in I} \frac{z^n}{\sqrt{\rho(n)}} \lvert \psi_n \rangle,$$ where $\cal{N}$ is the normalization factor.\ The general squeezed coherent states for a quantum system with an infinite discrete energy spectrum were defined in [@Art01-24] by the relation $$\label{eqa76} ( A^- + \gamma A^+ ) \psi(z, \gamma) = z \psi(z, \gamma),$$ with $z, \gamma \in \mathbb{C}$.\ For a system whose discrete spectrum is finite (for example, the Morse potential) another generalization was studied [@Art01-24].\ It resembles the previous expression, but the sum now runs over a finite number of indices.\ Let us now turn to SUSYQM. Consider a system described by a potential $V_0$ which admits a ground state $u^{(0)}$ with energy $\varepsilon$ satisfying the Schrödinger equation $$\label{eqa64} -\frac{1}{2} u^{(0)''} + V_0 u^{(0)} = \varepsilon u^{(0)}.$$ Introduce the function $$\label{eqa63} \alpha_1 = \frac{u^{(0)'}}{u^{(0)}}$$ and the new potential $$\label{eqa61} V_1 = V_0 - \alpha_1'.$$ Looking at the Hamiltonians $$\label{eqa57} H_0 = - \frac{1}{2} \frac{d^2}{dx^2} + V_0(x), \ H_1 = - \frac{1}{2} \frac{d^2}{dx^2} + V_1(x),$$ one sees that they can be factorized in the following way $$\begin{aligned} \begin{split} \label{eqa65} H_0 &=A_1 A_1^+ + \varepsilon,\\ H_1 & = A_1^+ A_1 + \varepsilon, \end{split}\end{aligned}$$ where the operators involved are given by the formulas $$\label{eqa59} A_1 = \frac{1}{\sqrt{2}} \bigg\lbrack \frac{d}{dx} + \alpha_1(x) \bigg\rbrack, \ A_1^+ = \frac{1}{\sqrt{2}} \bigg\lbrack - \frac{d}{dx} + \alpha_1(x)
\bigg\rbrack.$$ One has the intertwining relations $$\label{eqa58} H_1 A_1^+ = A_1^+ H_0, \ H_0 A_1 = A_1 H_1,$$ from which one finds the spectrum of $H_1$ (knowing that of $H_0$) and the link between their eigenstates. So much for factorization and intertwining. The introduction of SUSYQM can then proceed as follows. One introduces the operators $$\label{eqa69} Q = \begin{pmatrix} 0 & 0 \\ A_1 & 0 \end{pmatrix}, \ Q^+ = \begin{pmatrix} 0 & A_1^+ \\ 0 & 0 \end{pmatrix} ,$$ and sees that the Hamiltonian takes the form $$\begin{aligned} \begin{split} \label{eqa71} H_{SUSY} &= \big \lbrace Q, Q^+ \big\rbrace \\ & = \begin{pmatrix} H_1 - \varepsilon & 0\\ 0 & H_0 - \varepsilon \end{pmatrix}. \end{split}\end{aligned}$$ Introducing $$\begin{aligned} \begin{split} \label{eqa70} Q_1 = \frac{1}{\sqrt{2}} \left( Q^+ + Q \right),\\ Q_2 = \frac{1}{i \sqrt{2}} \left( Q^+ - Q \right), \end{split}\end{aligned}$$ one has $$\begin{aligned} \begin{split} \label{eqa68} \lbrack Q_i , H_{SUSY}\rbrack &= 0,\\ \big \lbrace Q^+_i , Q_j \big\rbrace &= \delta_{i j} H_{SUSY}, \ i, j =1,2. \end{split}\end{aligned}$$ Our Approach ============ We are studying a one dimensional system with a rescaled dimensionless coordinate $q$. Suppose one can write the physical potential $V(q)$ of such a system in terms of a function $x(q)$ by the relation $$\label{eq01} V(q)-E_0=\frac{1}{2}\Big\lbrack x^2(q)+\frac{dx(q)}{dq} \Big\rbrack.$$ Then one can factorize the Hamiltonian $$\label{eq02} \hat{H}=\hat{A^\dag}\hat{A}+E_0,$$ where the operators $\hat{A}$ and $\hat{A^\dag}$ have the form $$\label{eq03} \hat{A}=\frac{1}{\sqrt{2}} \Big\lbrack \frac{d}{dq}-x(q) \Big\rbrack ;\ \hat{A^\dag}=\frac{1}{\sqrt{2}}\Big\lbrack -\frac{d}{dq}-x(q) \Big\rbrack;$$ so that $$\label{eq03a} \Big\lbrack \hat{A} , \hat{A^\dag} \Big\rbrack = - \frac{dx(q)}{dq}.$$ The function $x(q)$ is called the superpotential. Eq. and are reminiscent of the harmonic oscillator.
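As a sanity check, the factorization and the commutator above can be verified symbolically. The following sketch (ours, not part of the paper) uses sympy with a generic superpotential $x(q)$ and a generic test function $f(q)$:

```python
import sympy as sp

# Symbolic check of the SUSY factorization: with A = (d/dq - x)/sqrt(2)
# and A^dag = (-d/dq - x)/sqrt(2), the product A^dag A acting on a test
# function f equals -f''/2 + (V - E0) f, where V - E0 = (x**2 + x')/2,
# and the commutator [A, A^dag] acting on f equals -x' f.
q = sp.symbols('q')
x = sp.Function('x')(q)
f = sp.Function('f')(q)

A = lambda g: (sp.diff(g, q) - x * g) / sp.sqrt(2)
Adag = lambda g: (-sp.diff(g, q) - x * g) / sp.sqrt(2)

product = sp.expand(Adag(A(f)))
expected = sp.expand(-sp.diff(f, q, 2) / 2 + (x**2 + sp.diff(x, q)) / 2 * f)
commutator = sp.expand(A(Adag(f)) - Adag(A(f)))
```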
Using that similarity, the coherent states are defined as the eigenfunctions of the “annihilation operator” $\hat{A}$. Using Eq. one is led to an equation with separable variables. The solution reads $$\label{eq04} \psi_H(q, \alpha)=N \exp{\Big\lbrack \sqrt{2}\alpha q+\int_0^q x(\xi) \, d\xi \Big\rbrack}.$$ In Eq., $\alpha$ is a complex variable which characterizes the coherent states. When trying to implement the SUSY treatment to a system whose physical potential is known, the difficult part is the resolution of the Riccati equation given in Eq..\ For the harmonic oscillator, one introduces the rescaled variables $$\label{eq05} Q = \sqrt{\frac{\hbar}{m\omega}}\ q \hspace{5mm} ; \hspace{5mm} \ P =\sqrt{m \omega\hbar} \ p,$$ and the rescaled potential $$\label{eq06} U (Q)=\hbar\omega \ V (q).$$ The superpotential is given by $$\label{eq07} x(q)=-q,$$ so that the coherent state takes the form $$\label{eq08} \psi_H(q, \alpha)=N \exp{( \sqrt{2} \alpha q-\frac{1}{2}q^2 )},$$ which is normalizable.\ For the Morse potential, the superpotential is given by [@Art01-01] $$\label{eq09} x(q)=\frac{- s + \exp{(-q \hspace{1mm} \sqrt{2\chi_e}})}{\sqrt{2 \chi_e}} + \sqrt{\frac{\chi_e}{2}},$$ where the parameters $s$ and $\chi_e$ are related to the Morse potential’s energy levels: $$\label{eq10} E_n = s \left( n+\frac{1}{2}\right)-\chi_e \left( n+\frac{1}{2} \right)^2.$$ The coherent state then takes the form $$\label{eq11} \psi_H(q, \alpha)= \exp{\Big\lbrack -\frac{1}{2\chi_e} \exp{(-q \hspace{1mm} \sqrt{2\chi_e})}-\frac{s-\chi_e}{\sqrt{2\chi_e}}q+\sqrt{2}\alpha q\Big\rbrack }.$$ Clearly, such a function is not square integrable and the mean values cannot be computed. The aim of this paper is to construct a superpotential such that the corresponding coherent states are normalizable and the computation of the mean values not too complicated.
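For the harmonic-oscillator case, the eigenvalue property of the coherent state is easy to verify symbolically. The following is our own minimal check (not part of the paper), again using sympy:

```python
import sympy as sp

# For the superpotential x(q) = -q, check that
# psi_H = exp(sqrt(2)*alpha*q - q**2/2) satisfies A psi_H = alpha psi_H,
# where A = (d/dq - x)/sqrt(2) is the "annihilation operator".
q = sp.symbols('q', real=True)
alpha = sp.symbols('alpha')
x = -q
psi = sp.exp(sp.sqrt(2) * alpha * q - q**2 / 2)
A_psi = (sp.diff(psi, q) - x * psi) / sp.sqrt(2)
residual = sp.simplify(A_psi - alpha * psi)
```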
The quantity we are interested in is the mean value $$\label{eq12} \langle \hat{q}\rangle =\frac{\langle \psi_{S ,t} \lvert \hat{q}\rvert \psi_{S ,t} \rangle }{\langle \psi_{S, t} \lvert \psi_{S, t} \rangle }.$$ One has to realize that the states given in Eq. were obtained in the Heisenberg picture: the wave function does not depend on time. To pass to the Schrödinger picture one has to use the evolution operator $$\label{eq13} \lvert \psi_{S,t} \rangle = \exp {( -\frac{i}{\hbar}\hat{\mathbf{H}}t )} \rvert \psi_H\rangle,$$ where the Hamiltonian takes the form $$\label{eq14} \hat{\mathbf{H}}=\hbar \omega \Big\lbrack \frac{\hat{p}^2}{2} + \hat{V}(q) \Big\rbrack.$$ The factorization of the Hamiltonian and the evolution operator can be used to obtain the wave function in the Schrödinger picture as a power series $$\label{eq15} \lvert \psi_{S,t}\rangle = \exp(- \frac{i}{\hbar} E_0 t ) .\sum_{k=0}^{\infty} (-i)^k \frac{(\omega t)^k}{k!}(\hat{A^+}\hat{A})^k \lvert \psi_H \rangle.$$ The mean value then reads $$\label{eq16} \langle \psi_{S ,t} \lvert \hat{q} \rvert \psi_{S ,t}\rangle = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{(-1)^m i^{m+n}}{m!
n!} t^{m+n} \langle \psi_H \lvert \hat{H}^n \hat{q} \hat{H}^m \rvert \psi_H \rangle.$$ The relation $$\label{eq17} \hat{A^\dag}=-\hat{A}-\sqrt{2}x(q)$$ is obtained by adding the two relations given in Eq..\ Since the wave function we are studying is an eigenvector of the “annihilation operator”, we find $$\label{eq18} \hat{A^\dag}\hat{A}\lvert \psi_H \rangle = \alpha \Big\lbrack -\alpha-\sqrt{2}x(q)\Big\rbrack \rvert \psi_H \rangle.$$ Subtracting the relations in , we similarly find $$\label{eq19} \frac{\partial}{\partial q} \lvert \psi_H \rangle = \Big\lbrack \sqrt{2}\alpha + x(q) \Big\rbrack \rvert \psi_H \rangle.$$ This allows us to obtain the second order contribution of the wave function $$\label{eq20} (\hat{A^\dag}\hat{A})^2 \lvert \psi_H \rangle = \Big\lbrack \alpha \frac{\sqrt{2}}{2} \frac{d^2x(q)}{dq^2} + \left( 2\alpha^2+ \sqrt{2} \alpha x(q) \right) \frac{dx(q)}{dq} + \left( \sqrt{2}\alpha x(q)+\alpha^2 \right)^2 \Big\rbrack \rvert \psi_H\rangle.$$ This suggests the introduction of special functions $f_n$ such that $$\label{eq21} (\hat{A^\dag}\hat{A})^n \lvert \psi_H \rangle = f_n(q,\alpha) \lvert \psi_H \rangle .$$ From the previous considerations, one derives the recurrence formula $$\label{eq22} f_{n+1}=-\frac{1}{2} \frac{\partial^2f_n}{\partial q^2} - \Big\lbrack \sqrt{2} \alpha + x(q) \Big\rbrack \frac{\partial f_n}{\partial q} - \Big\lbrack \sqrt{2} \alpha x(q) + \alpha^2 \Big\rbrack f_n.$$ Note that $$\label{eq23} f_0(q,\alpha)=1.$$ Working in the position representation, the time dependent wave function $$\label{eq24} \psi_S (q, t, \alpha)=\exp{(-\frac{i}{\hbar} E_0 t)}\sum_{m=0}^{\infty} \frac{(-i\omega t)^m}{m!}\Big\lbrack (\hat{A^\dag}\hat{A})^m \psi_H(q,\alpha)\Big\rbrack$$ takes the form $$\label{eq25} \psi_S (q, t, \alpha)=\exp{(-\frac{i}{\hbar} E_0 t)}\sum_{m=0}^{\infty} \frac{(-i\omega t)^m}{m!}f_m(q,\alpha)\psi_H(q,\alpha).$$ The position mean value is then given by an infinite double sum $$\label{eq26} _S\langle \psi \lvert
\hat{q} \rvert \psi\rangle_S = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{i^{m+n }(-1)^n}{m!n!} C_{m, n} \ (\omega t)^{m+n}$$ where the coefficients $C_{m, n}$ are the integrals $$\label{eq27} C_{m, n}= \int_{-\infty}^{+\infty} dq \, q f^*_m (q,\alpha) f_n (q,\alpha) \psi_H^* (q,\alpha) \psi_H (q,\alpha)\ .$$ The double summation can be rewritten as a power series in the time parameter $$\label{eq28} _S\langle \psi \lvert \hat{q}\rvert \psi\rangle_S = \sum_{l=0}^{\infty} (\omega t)^l \Omega_l,$$ with $$\label{eq29} \Omega_l=i^l (-1)^l\sum_{m=0}^{l} \frac{(-1)^m}{m!(l-m)!}C_{m, l-m}.$$ Explicitly, we can write $$\begin{aligned} \begin{split} \label{eqa02} \Omega_0 &= C_{0, 0}, \\ \Omega_1 &= -i \left( C_{0,1} - C_{1,0} \right), \\ \Omega_2 &= \frac{1}{2!} \left( -C_{0,2} + 2 C_{1,1} - C_{2,0} \right), \\ \Omega_3 &= i \lbrack \ \frac{1}{2!} \left( C_{2,1} - C_{1,2} \right) + \frac{1}{3!} \left( C_{0,3} - C_{3,0} \right) \ \rbrack, \\ \Omega_4 &= \frac{1}{4!} \left( C_{0,4} + C_{4,0} \right) - \frac{1}{3!} \left( C_{1,3} + C_{3,1} \right) + \frac{1}{2! \ 2!} \ C_{2,2},\\ &\cdots \end{split}\end{aligned}$$ Eq. is written using the fact that the quantity $_S \langle \psi \lvert \psi \rangle_S$ is an inessential constant.\ One can consider that Eq. and Eq. give the answer to our question about the mean value of the position operator in this context. For any practical case, one has to compute the integrals of Eq. and perform the appropriate summation.\ At this point one has to point out some technical difficulties. The first one is that the recursion relations of Eq. can quickly lead to large formulas even for simple superpotentials. The second one is that it is not always possible to have an analytical expression for the integrals appearing in Eq.. Third, one has to be careful about the order at which one can stop the series of Eq. to obtain a reliable estimate.\ To test our approach, we first used it on the harmonic oscillator.
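The recurrence for the $f_n$ can itself be checked symbolically. The sketch below (ours, not from the paper) iterates the recurrence twice from $f_0=1$ with a generic superpotential and compares the result with the closed second-order expression quoted above for $(\hat{A^\dag}\hat{A})^2 \lvert \psi_H \rangle$:

```python
import sympy as sp

# Iterate f_{n+1} = -f_n''/2 - (sqrt(2)*alpha + x) f_n'
#                   - (sqrt(2)*alpha*x + alpha**2) f_n
# from f_0 = 1 and compare f_2 with the closed expression
# for (A^dag A)^2 acting on psi_H.
q, alpha = sp.symbols('q alpha')
x = sp.Function('x')(q)
s2 = sp.sqrt(2)

def next_f(f):
    return (-sp.diff(f, q, 2) / 2
            - (s2 * alpha + x) * sp.diff(f, q)
            - (s2 * alpha * x + alpha**2) * f)

f1 = next_f(sp.Integer(1))
f2 = sp.expand(next_f(f1))

f2_closed = sp.expand(alpha * s2 / 2 * sp.diff(x, q, 2)
                      + (2 * alpha**2 + s2 * alpha * x) * sp.diff(x, q)
                      + (s2 * alpha * x + alpha**2)**2)
```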
The results are good and reproduce some important characteristics of the coherent states as known in the literature. The solution to the classical equations of motion is given by $$\label{eqq01} q_{class}(t) = C_1 \cos{\omega t}+ C_2 \sin{\omega t},$$ where $ C_1$ and $C_2$ are constants.\ This can be written as a power series $$\label{eqq02} q(t)=A_0 + A_1 \omega t + A_2 (\omega t)^2 + \cdots$$ where one has $$\label{eqq03} \frac{A_2}{A_0}=-\frac{1}{2}, \ \frac{A_3}{A_1}=-\frac{1}{6}, \cdots$$ We can write the “quantum trajectory” as $$\label{eqq04} \langle \psi_{S,t} \lvert \hat{q} \rvert \psi_{S,t} \rangle = \sum_{l=0}^{+\infty} \Omega_l . (\omega t)^l.$$ It can be shown that the quantum trajectory verifies the same relations, i.e. $$\label{eqqq02} \frac{\Omega_2}{\Omega_0}=-\frac{1}{2}, \ \frac{\Omega_3}{\Omega_1}=-\frac{1}{6}, \cdots$$ To keep our presentation light, we have put this treatment in the Appendix. A Toy Model =========== The generalization of coherent states considered here was introduced in [@Art01-01]. For many potentials used in theoretical chemistry, the corresponding states are not normalizable and this leads to technical difficulties when one is interested in mean values. We want to restrict ourselves to systems with square integrable generalized coherent states. In the context of quantum supersymmetry, the most important ingredient is the superpotential. For the harmonic oscillator, the superpotential is linear in the position. The next nontrivial case is a polynomial superpotential. A second degree superpotential is readily seen to lead to a non-normalizable generalized coherent state. This leads us to study a toy model whose superpotential is given by $$\label{eq30} x(q)=x_1 q -\frac{1}{3} x_1^2 q^3,$$ with $$\label{eq31} x_1 < 0,$$ where $x_1$ is a free parameter.\ The corresponding physical potential is a sixth degree polynomial (See Eq.)
$$\label{eq32} U(Q)= \hbar\omega ( U_0 + U_2 Q^2+ U_4 Q^4+ U_6 Q^6).$$ Its coefficients are related to those of the rescaled superpotential by $$\label{eq33} U_0 = 0; \ U_2 = 0; \ U_4 = -(\frac{\hbar}{m \omega})^2 \frac{x_1^3}{3}; \ U_6 = (\frac{\hbar}{m \omega})^3 \frac{x_1^4}{18}.$$ It has to be noted that if one begins with the potential, the characteristic frequency is given by $$\label{eq34} \hbar \omega=\frac{U^3_4}{U^2_6}.$$ This potential has the form shown in Fig. 1. ![](imarticle01.jpg) Classically, all the trajectories are bounded and periodic, with the period $$\label{eq35} T_{class}=4\sqrt{\frac{m}{2}} \int_0^{Q_+} \frac{1}{\sqrt{E-U(Q)}} dQ$$ which depends on the energy of the system because $Q_+$ is such that $E - U(Q_+)=0$.\ We now analyze the mean value of the position operator for the corresponding coherent states and compare it with the classical trajectories. From Eq. one easily sees that the functions $f_n$ will be polynomial in the variable $q$. This greatly simplifies the computations.
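The coefficient pattern above can be recovered directly from the superpotential. The sketch below (ours, not from the paper) expands $V(q)-E_0=\frac{1}{2}(x^2+x')$ for the cubic superpotential: the $q^2$ term cancels, the quartic and sextic coefficients come out as $-x_1^3/3$ and $x_1^4/18$, and the leftover constant $x_1/2$ is absorbed in $E_0$:

```python
import sympy as sp

# Expand V(q) - E0 = (x**2 + x')/2 for x(q) = x1*q - x1**2*q**3/3
# and read off the polynomial coefficients in q.
q, x1 = sp.symbols('q x1')
x = x1 * q - x1**2 * q**3 / 3
V = sp.expand((x**2 + sp.diff(x, q)) / 2)
poly = sp.Poly(V, q)
```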
We thus introduce the coefficients $\tilde{f}_{n,k} (\alpha)$ by $$\label{eq36} f_n (q,\alpha)=\sum_{k=0}^{3n} \tilde{f}_{n,k}(\alpha) q^k$$ with $$\label{eq37} \tilde{f}_{0,0}=\tilde{f}_{0,0}^\star=1.$$ The coefficients we have introduced obey the following recursion relations which naturally come from Eq.: $$\begin{aligned} \begin{split} \label{eq39} \tilde{f}_{n+1,0}&=- \tilde{f}_{n,2} - \sqrt{2} \alpha \tilde{f}_{n,1} - \alpha^2 \tilde{f}_{n,0}; \\ \tilde{f}_{n+1,1}&=-3 \tilde{f}_{n,3} -2 \sqrt{2} \alpha \tilde{f}_{n,2} - (x_1 + \alpha^2) \tilde{f}_{n,1} - \sqrt{2} \alpha x_1\tilde{f}_{n,0} ;\\ \tilde{f}_{n+1,2}&=-6 \tilde{f}_{n,4} -3 \sqrt{2} \alpha \tilde{f}_{n,3} - (2x_1 + \alpha^2) \tilde{f}_{n,2} - \sqrt{2} \alpha x_1 \tilde{f}_{n,1} ;\\ \tilde{f}_{n+1,l}&=-\frac{1}{2} (l+1)(l+2) \tilde{f}_{n,l+2} - \sqrt{2} \alpha (l+1) \tilde{f}_{n,l+1} - (x_1 l + \alpha^2) \tilde{f}_{n,l} - \sqrt{2} \alpha x_1 \tilde{f}_{n,l-1}\\ &+ \frac{1}{3} x_1^2 (l-2)\tilde{f}_{n, l-2}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, l-3}\ (3 \leq l \leq 3n-2); \\ \tilde{f}_{n+1,3n-1}&= -3n \sqrt{2} \alpha \tilde{f}_{n,3n} -[(3n-1)x_1+ \alpha^2] \tilde{f}_{n,3n-1} - \sqrt{2} \alpha x_1 \tilde{f}_{n,3n-2}\\ &+ \frac{1}{3} x_1^2 (3n-3)\tilde{f}_{n, 3n-3}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-4}; \\ \tilde{f}_{n+1,3n}&= -(3n x_1+ \alpha^2) \tilde{f}_{n,3n}- \sqrt{2} \alpha x_1 \tilde{f}_{n,3n-1}+ \frac{1}{3} x_1^2 (3n-2)\tilde{f}_{n, 3n-2}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-3};\\ \tilde{f}_{n+1,3n+1}&= - \sqrt{2} \alpha x_1 \tilde{f}_{n,3n}+ \frac{1}{3} x_1^2 (3n-1)\tilde{f}_{n, 3n-1}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-2};\\ \tilde{f}_{n+1,3n+2}&= n x_1^2 \tilde{f}_{n,3n} +\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-1};\\ \tilde{f}_{n+1,3n+3}&= \frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n}. \end{split}\end{aligned}$$ We now have to compute $$\label{eq38} C_{m,n}=\sum_{k=0}^{3m}\sum_{p=0}^{3n} \int_{-\infty}^{+\infty} dq.
q^{k+p+1} \tilde{f^\star}_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ .$$ Here $\beta=\sqrt{2}\,\mathrm{Re}\,\alpha$ arises from the product $\psi_H^* \psi_H$. From Eq., we have $$\label{eq40} C_{0,0}= \int_{-\infty}^{+\infty} dq. q\ \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ .$$ The integrals in Eq. have the generic form $$\label{eq41} J_n = \int_{-\infty}^{+\infty} dq. q^n\ \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ .$$ At this point, there are two ways to tackle the computation. The first insight is that one may need to compute only the integral $J_0$. The second one will use recursion relations.\ We begin with the first approach. Its main interest lies in the fact that it leads to analytical expressions. Its main limitation is that it works only for large values of the real part of the parameter $\alpha$ describing the coherent state.\ In short, from Eq. it can be shown that $$\label{eq43} J_n = \frac{1}{2^n} \frac{d^n}{d\beta^n} J_0.$$ For convenience, we introduce the function $$\label{eq50} g(q)=2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4.$$ Our integral then becomes $$\label{eqa07} J_0 = \int_{-\infty}^{\infty} dq \ \exp{[ g(q) ]}.$$ To evaluate $J_0$, we shall use the saddle point approximation. This is justified because the integrand decays very rapidly, owing to the fourth degree term with a negative coefficient in the exponential. The extrema of the integrand satisfy the equation $$\label{eq44} q^3-\frac{3}{x_1}q-\frac{3\beta}{x_1^2}=0.$$ This equation is a particular case of the following $$\label{eq45} q^3+a_1q^2+a_2q+a_3=0.$$ The solution to such an equation can be recast using the Cardano formula.
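The real root of this cubic can be checked numerically. The sketch below (ours, with toy values $x_1=-1$, $\beta=2$) builds the root with the standard Cardano construction, which is spelled out next, and compares it with numpy's root finder:

```python
import numpy as np

# Saddle-point equation q**3 - (3/x1)*q - 3*beta/x1**2 = 0 with x1 < 0.
x1, beta = -1.0, 2.0

# Standard Cardano construction for the depressed cubic q**3 + p*q + r = 0.
p = -3.0 / x1
r = -3.0 * beta / x1**2
R = -r / 2.0                      # = 3*beta/(2*x1**2)
Q = p / 3.0                       # = -1/x1, positive for x1 < 0
D = Q**3 + R**2                   # D > 0 here: a single real root
S = np.cbrt(R + np.sqrt(D))
T = np.cbrt(R - np.sqrt(D))       # np.cbrt handles the negative argument
q0 = S + T

# Cross-check against numpy's companion-matrix root finder.
roots = np.roots([1.0, 0.0, p, r])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
```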
One introduces the intermediate quantities $$\label{eq46} Q=\frac{-1}{x_1}; R=\frac{3\beta}{2x_1^2};$$ $$\label{eq47} D = Q^3+R^2; S=\sqrt[3]{R+\sqrt{D}};\ T=\sqrt[3]{R-\sqrt{D}}.$$ The only real extremum (actually a maximum) is found at $$\label{eq48} q_0= S+T-\frac{1}{3}a_1.$$ One finally arrives at the following expression $$\label{eq49} q_0=\sqrt[3]{\frac{3\beta}{2x_1^2}+\sqrt{\frac{9\beta^2}{4x_1^4}-\frac{1}{x_1^3}}}+\sqrt[3]{\frac{3\beta}{2x_1^2}-\sqrt{\frac{9\beta^2}{4x_1^4}-\frac{1}{x_1^3}}}.$$ Let us first expand the function $g$, given by , near its maximum $$\label{eqa10} g(q) = g(q_0) + \frac{1}{2!} g''(q_0) ( q - q_0)^2 + \frac{1}{3!} g'''(q_0) ( q - q_0)^3 + \frac{1}{4!} g^{(4)}(q_0) ( q - q_0)^4 + \cdots$$ Introducing the centered and rescaled variable $y$ by $$\label{eqa11} q = q_0 + \frac{y}{\sqrt{- g''(q_0)}}$$ one obtains $$\label{eqa14} g(q) = g(q_0) - \frac{1}{2!} y^2 + A y^3 + B y^4 + \cdots$$ where the parameters $A$ and $B$ are given by $$\label{eqa16a} A = \frac{1}{3!} \ \frac{g^{(3)}(q_0)}{\lbrack - g''(q_0) \rbrack^{\frac{3}{2}}}, \ B = \frac{1}{4!} \ \frac{g^{(4)}(q_0)}{\lbrack - g''(q_0) \rbrack^2}.$$ This leads to a sum of gamma functions $$\label{eqa17} J_0 \simeq \frac{\exp[g(q_0)]}{\sqrt{-g''(q_0)}} \ \int_{-\infty}^{+\infty} dy \exp{\left(-\frac{1}{2}y^2\right)} \ \exp{\left(Ay^3+By^4 + \cdots \right)}$$ From this, we shall derive the domain of validity of our approach. To have an asymptotic series for the quantity under investigation, we need $A$ and $B$ to be negligible. A plot of these functions shows this to be true for large values of the parameter $\beta$. ![](imarticle04.jpg) We use this to simplify our formulas.
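At leading order (neglecting $A$ and $B$) this is the standard Laplace approximation $J_0 \simeq \sqrt{2\pi/(-g''(q_0))}\, e^{g(q_0)}$, which can be checked against direct quadrature. The sketch below is ours, not from the paper, with toy values $x_1=-1$, $\beta=10$:

```python
import numpy as np
from scipy.integrate import quad

# Leading-order Laplace (saddle-point) estimate of
# J0 = \int exp(g(q)) dq, g(q) = 2*beta*q + x1*q**2 - x1**2*q**4/6,
# for the toy values x1 = -1, beta = 10 (large-beta regime).
x1, beta = -1.0, 10.0
g = lambda q: 2*beta*q + x1*q**2 - x1**2 * q**4 / 6

# Real maximum of g: root of the cubic q**3 - (3/x1)*q - 3*beta/x1**2 = 0.
roots = np.roots([1.0, 0.0, -3.0 / x1, -3.0 * beta / x1**2])
q0 = roots[np.abs(roots.imag) < 1e-9].real[0]
g2 = 2*x1 - 2*x1**2 * q0**2          # g''(q0), negative at the maximum

J0_laplace = np.sqrt(2*np.pi / (-g2)) * np.exp(g(q0))
# The integrand is negligible outside [-10, 15] for these values.
J0_quad, _ = quad(lambda q: np.exp(g(q)), -10.0, 15.0)
rel_err = abs(J0_laplace - J0_quad) / J0_quad
```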
For large values of $\beta$, the maximum of the function $g$ occurs at $$\label{eqa15} q_0 \simeq \left( \frac{3\beta}{x_1^2} \right)^{\frac{1}{3}},$$ (which comes from Eq) because in that limit $$\label{eqa16b} A \sim \frac{1}{\beta^{\frac{1}{3}}}, \ B \sim \frac{-4 x_1^2}{\beta^{\frac{4}{3}}}.$$ One then finds the dominant contribution (in the limit $\beta \longrightarrow \infty)$ to be given by $$\label{eqa21} J_0 \simeq \sqrt{\pi} \exp{\left( z_0 \beta^{\frac{2}{3}}+ z_1 \beta^{\frac{4}{3}} \right)} . \ \left( z_2 + z_3 \beta^{\frac{2}{3}} \right)^{-\frac{1}{2}}$$ where the constant coefficients are given by $$\label{eqa22} z_0 = -x_1 \left( \frac{3\beta}{x_1^2} \right)^{\frac{2}{3}}, \ z_1 = \frac{x_1^2}{2} \left( \frac{3\beta}{x_1^2} \right)^{\frac{4}{3}}, \ z_2 = -x_1, \ z_3 = x_1^2 \left( \frac{3\beta}{x_1^2} \right)^{\frac{2}{3}}.$$ For the rest of our treatment, it is simpler to write this result directly in terms of $q_0$: $$\label{eqa23} J_0 = \sqrt{\pi} \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{1}{2}}\exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}.$$ One can now use the relation between $J_n$ and $J_0$. For the first two, one gets $$\begin{aligned} \begin{split} \label{eqa25} J_1 = \frac{\sqrt{\pi}}{2} q_0^{-1}& \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{3}{2}} \left( 1 - 4 x_1 q_0^2 + 2 x_1^2 q_0^4 \right) \exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}, \\ J_2 = \frac{\sqrt{\pi}}{4} x_1^{-1} q_0^{-4}& \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{5}{2}}\\ &\times ( 1 +2 x_1 q_0^2 -10 x_1^2 q_0^4 + 22 x_1^3 q_0^6 -16 x_1^4 q_0^8 + 4 x_1^5 q_0^{10} ) \exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}. \end{split}\end{aligned}$$ It should be noted that only the dominant term in each polynomial expression is relevant: the contributions coming from the coefficients $A$ and $B$ in Eq. are ignored when computing the dominant term.
This approach, which gives analytical formulas, will be used later to compute the time scale at which the classical solution begins to deviate from the mean value of the position operator for the coherent states studied here.\ The second approach uses integration by parts. One has $$\label{eqa25a} J_n = \int_{-\infty}^{+\infty} dq \ q^n \ \exp{\lbrack g(q) \rbrack}.$$ Introducing $u = q^n \ \exp{(x_1 q^2 - \frac{1}{6} x_1^2 q^4)}$ and $dv = \exp{(2\beta \ q)}dq$, one gets $$\begin{aligned} \begin{split} \label{eqa26} J_3 &= \frac{3}{x_1} J_1 + \frac{3\beta}{x_1^2} J_0 , \\ J_{n+3} &= \frac{3n}{2 \ x_1^2} J_{n-1}+ \frac{3\beta}{x_1^2} J_n + \frac{3}{x_1} J_{n+1} \ ; n \geq 1. \end{split}\end{aligned}$$ These relations are exact: they apply to small as well as to large values of $\beta$. They can be used in the following way. For a given value of $\beta$, one computes $J_0, J_1$ and $J_2$ numerically; all the others are then obtained from the preceding recursion formulas.\ Finally, we use the mean value of the position operator given in Eq.. We first write explicitly the relation contained in Eq. $$\label{eqa03} C_{m,n} = \sum _{k=0}^{3m} \sum _{p=0}^{3n} \tilde{f}^\star_{m,k} \ \tilde{f}_{n,p} \ J_{k+p+1}.$$ Combining this with the expressions of the $C_{m, n}$ (Eq.)
then leads to $$\begin{aligned} \begin{split} \label{eqa04} \Omega_0 &= J_1, \\ \Omega_1 = i \ & \Big\lbrack \left( - \tilde{f}_{1,0} + \tilde{f}^\star_{1,0} \right) J_1 + \left( - \tilde{f}_{1,1} + \tilde{f}^\star_{1,1} \right) J_2 \Big\rbrack, \\ \Omega_2 = - \frac{1}{2} & \ \Big\lbrace \ \left( \tilde{f}_{2,0} + \tilde{f}^\star_{2,0} - 2\tilde{f}^\star_{1,0} \tilde{f}_{1,0} \right) J_1 \\ &+ \Big\lbrack \tilde{f}_{2,1} + \tilde{f}^\star_{2,1} - 2 \left( \tilde{f}^\star_{1,0} \tilde{f}_{1,1} + \tilde{f}_{1,0} \tilde{f}^\star_{1,1} \right) \Big\rbrack J_2 + \ \left( \tilde{f}_{2,2} + \tilde{f}^\star_{2,2} - 2\tilde{f}^\star_{1,1} \tilde{f}_{1,1} \right ) J_3 \ \Big\rbrace, \\ \Omega_3 = \frac{i}{6} & \ \Big\lbrace \ \Big\lbrack \left( \tilde{f}_{3,0} - \tilde{f}^\star_{3,0} \right) + 3 \left( - \tilde{f}^\star_{1,0} \tilde{f}_{2,0} + \tilde{f}_{1,0} \tilde{f}^\star_{2,0} \right) \Big\rbrack J_1 \\ &+ \Big\lbrack \left( \tilde{f}_{3,1} - \tilde{f}^\star_{3,1}\right) + 3 \left( -\tilde{f}^\star_{1,0} \tilde{f}_{2,1} - \tilde{f}_{2,0} \tilde{f}^\star_{1,1} + \tilde{f}_{1,0}\tilde{f}^\star_{2,1} + \tilde{f}_{1,1} \tilde{f}^\star_{2,0} \right) \Big\rbrack J_2 \\ &+ \ \Big\lbrack \left( \tilde{f}_{3,2} - \tilde{f}^\star_{3,2} \right) + 3 \left( -\tilde{f}^\star_{1,0} \tilde{f}_{2,2} - \tilde{f}_{2,1} \tilde{f}^\star_{1,1} + \tilde{f}_{1,0}\tilde{f}^\star_{2,2} + \tilde{f}_{1,1} \tilde{f}^\star_{2,1} \right) \Big\rbrack J_3 \\ & + \Big\lbrack \left( \tilde{f}_{3,3} - \tilde{f}^\star_{3,3} \right) + 3 \left( - \tilde{f}^\star_{1,1} \tilde{f}_{2,2} + \tilde{f}_{1,1} \tilde{f}^\star_{2,2} \right) \Big\rbrack J_4 \Big\rbrace,\\ &\cdots \end{split}\end{aligned}$$ On the other hand, the classical trajectories are analytical functions of time. 
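The recursion relations of Eq. \[eqa26\] can be checked directly against numerical quadrature. The sketch below is an illustration of ours (with the illustrative values $x_1=-1$, $\beta=4/3$ used later in the text); simple trapezoidal quadrature on a finite window is accurate here because the integrand decays like $\exp(-x_1^2 q^4/6)$.

```python
import math

def J(n, x1, beta, L=6.0, N=4001):
    # trapezoidal quadrature of J_n over [-L, L]; the quartic term makes
    # the integrand negligible outside this window for moderate parameters
    h = 2.0 * L / (N - 1)
    total = 0.0
    for i in range(N):
        q = -L + i * h
        g = 2.0 * beta * q + x1 * q * q - x1 * x1 * q ** 4 / 6.0
        w = 0.5 if i in (0, N - 1) else 1.0
        total += w * q ** n * math.exp(g)
    return total * h

x1, beta = -1.0, 4.0 / 3.0
j = [J(n, x1, beta) for n in range(5)]

# n = 0 case of the recursion:  J_3 = (3*beta/x1^2) J_0 + (3/x1) J_1
lhs3, rhs3 = j[3], 3.0 * beta / x1 ** 2 * j[0] + 3.0 / x1 * j[1]
# n = 1 case:  J_4 = 3/(2 x1^2) J_0 + (3*beta/x1^2) J_1 + (3/x1) J_2
lhs4, rhs4 = j[4], 1.5 / x1 ** 2 * j[0] + 3.0 * beta / x1 ** 2 * j[1] + 3.0 / x1 * j[2]
```

Both identities hold to the accuracy of the quadrature, as the exact integration-by-parts derivation predicts.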
Rather than relying on their periodic character, we shall consider the equation of motion $$\label{eq64} q''(t)-\mu q^3(t)-\sigma q^5(t)=0,$$ with $ \mu=-\frac{4}{3} \omega^2 x_1^3$ and $ \sigma= \frac{1}{3} \omega^2 x_1^4 $.\ One sees that a power series solution of the form $$\label{eq65} q_{class}(t)=\sum_{i=0}^{\infty} A_i . (\omega t)^i$$ leads to the following relations between the coefficients $$\begin{aligned} \begin{split} \label{eq66} A_2 & = \frac{1}{6}(A^5_0 - 4A^3_0); \\ A_3 & = \frac{1}{18}( 5A_1 A^4_0 - 12A_1 A^2_0); \\ A_4 & = \frac{1}{216}( 5A_0^9 - 32A^7_0 + 48A^5_0 + 60A^2_1 A^3_0 - 72 A^2_1 A_0); \\ A_5 & = \frac{1}{1080}(85A_1 A^8_0 - 432A_1 A^6_0 + 432 A_1 A^4_0 + 180 A^3_1 A^2_0 - 72 A^3_1);\\ \cdots \end{split}\end{aligned}$$ The trajectories would be identical if these relations remained satisfied when the constants $A_i$ are replaced by the coefficients $\Omega_i$. In our case, requiring the relative deviation to stay small, $$\label{eqz01} \Big \lvert \frac{q_{cl}(t) - q_{moy} (t)}{q_{cl} (t)} \Big \rvert \approx \varepsilon \ll 1,$$ leads to the time bound $$\label{eqz02} \lvert \omega t \rvert \leq \varepsilon^{\frac{1}{2}} \Big \lvert \frac{x_1^2}{f(x_1, \beta)} \Big \rvert^{\frac{1}{2}}$$ where $$\begin{aligned} \begin{split} \label{eqz03} f(x_1, \beta) = &6x_1 \Big\lbrack -4 \gamma^2 \left( 3x_1 \beta^4 \right)^{\frac{1}{3}} - \left( 9 x_1^2 \beta^2 \right)^{\frac{1}{3}} \left( \beta^2 + \gamma^2 \right) - 2 \beta^2 \gamma^2 + x_1 \left( \beta^2 - \gamma^2 \right) \Big\rbrack\\ & + \pi^2 \exp{\Big\lbrack 4 \left( -x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4 \right) \Big\rbrack} - 4 \pi x_1^2 \exp{\Big\lbrack 2 \left( -x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4 \right) \Big\rbrack}.
\end{split}\end{aligned}$$ In terms of the $\Omega_i$, this bound reads $$\label{eqz04} \lvert \omega t \rvert \leq \varepsilon^{\frac{1}{2}} \Big \lvert \frac{6 \Omega_0}{ \Omega_0^5 - 4 \Omega_0^3 - 6 \Omega_2} \Big \rvert^{\frac{1}{2}}$$ This holds for times short compared to the intrinsic period of the system.\ **Conclusion**\ In this paper, we devised a general method for the computation of the quantum trajectories of generalized coherent states in the supersymmetric context. Our approach was successfully applied to the harmonic oscillator. We then applied it to one of the simplest superpotentials whose coherent states are normalizable. For this specific case, we found the timescale after which the classical and quantum trajectories begin to differ significantly. Although technical difficulties arise, our approach can also be used to analyze the product of the uncertainties in the position and momentum operators and see how it evolves with respect to the absolute bound given by the Heisenberg inequality.\ Some mathematical issues have not been addressed here. One of them is the convergence of the series obtained for the mean value of the position operator. This is a very difficult point, since we do not have an explicit formula for the coefficients $\Omega_l$. But in principle, since the problem is well posed, the results should be meaningful.\ The method followed here gives the position as an analytic series in time. One has to sum a finite number of terms to obtain an approximation, and such a finite sum is a polynomial. This explains why, after some time, the graph diverges; it simply means we are outside the domain where the approximation is valid.\ The SUSY structure has been used extensively (see Eq., Eq., Eq.). This culminated in the recurrence formula of Eq., which gives the $n$th contribution to the wave function. The superpotential $x(q)$ appears many times because it results from the commutation relations between the operators $A$ and $A^\dag$ (Eq.).
The coefficients $C_{m,n}$ are not easy to find analytically, even for simple superpotentials like the one studied here. We devised recipes which tackle this successfully; this was the case of the saddle point approximation, whose domain of validity was given explicitly.\ It should be emphasized that the coherent states studied here were not constructed as superpositions of the quantum energy eigenstates, so questioning their normalizability on that basis does not make sense. The state is normalizable in the Heisenberg picture; since the evolution operator which relates it to the Schrödinger picture is unitary, its time-evolved counterpart has the same norm. Let us finish with some formal considerations which can be misleading. It has been argued, in the case of coherent states defined as infinite superpositions of energy eigenstates, that in most cases the wave function so obtained is only formal, i.e. it does not converge when considering, for example, imaginary times. First, we have no reason to turn to imaginary times in this work. Secondly, the definition of coherent states adopted here is a priori different, so there is no reason that claim, if true, should apply here. Finally, our work can be seen as a particular illustration of the Ehrenfest theorem: in the most general cases, the classical and quantum trajectories are not the same.\ **Appendix**\ We give here the results obtained by our treatment when applied to the harmonic oscillator. The functions $f_n$ are polynomial $$\label{eq69} f_n(q, \alpha)=\sum_{k=0}^n \tilde{f}_{n,k}(\alpha) q^k; \tilde{f}_{0,0}= \tilde{f}_{0,0}^\star=1.$$ The coefficients we need are given by the following integrals $$\begin{aligned} \begin{split} \label{eq70} C_{m,n}&=\sum_{k=0}^{m}\sum_{p=0}^{n} \int_{-\infty}^{+\infty} dq. q^{k+p+1} \tilde{f}^\star_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \psi_H^\star (q,\alpha) \psi_H (q,\alpha)\ ; \\ C_{m,n}&=\sum_{k=0}^{m}\sum_{p=0}^{n} \int_{-\infty}^{+\infty} dq.
q^{k+p+1} \tilde{f^\star}_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \exp{(2\beta q-q^2)}\ \end{split}\end{aligned}$$ where, by definition, the coefficient $\beta$ is given by $$\label{eq71} 2\beta=\sqrt{2}(\alpha+\alpha^\star).$$ Here there are fewer recursion relations: $$\begin{aligned} \begin{split} \label{eq72} \tilde{f}_{n+1,0}&=- \tilde{f}_{n,2} - \sqrt{2} \alpha \tilde{f}_{n,1} - \alpha^2 \tilde{f}_{n,0}; \\ \tilde{f}_{n+1,l}&=-\frac{1}{2} (l+1)(l+2) \tilde{f}_{n,l+2} - \sqrt{2} \alpha (l+1) \tilde{f}_{n,l+1} + ( l - \alpha^2) \tilde{f}_{n,l} + \sqrt{2} \alpha \tilde{f}_{n,l-1};\\ \tilde{f}_{n+1,n-1}&=- \sqrt{2} \alpha n \tilde{f}_{n,n} +(n-1-\alpha^2) \tilde{f}_{n,n-1} + \sqrt{2} \alpha \tilde{f}_{n,n-2}; \\ \tilde{f}_{n+1,n}&=(n-\alpha^2)\tilde{f}_{n,n} + \sqrt{2} \alpha \tilde{f}_{n,n-1}; \\ \tilde{f}_{n+1,n+1}&= \sqrt{2} \alpha \tilde{f}_{n,n}. \end{split}\end{aligned}$$ The coefficients $C_{m,n}$ are given by integrals of the product of an exponential and a power of the variable $q$ (see Eq.). In fact, the only integral one needs is $$\label{eq73} J_0 =\int_{-\infty}^{+\infty} dq. \exp{(-q^2+2\beta q)}\,$$ because $$\label{eq74} \int_{-\infty}^{+\infty} dq.
q^{k+p+1}\exp{(2\beta q-q^2)}\ =\sqrt{\pi}\ \frac{1}{2^{k+p+1}}\ \frac{\partial^{k+p+1}}{\partial \beta^{k+p+1}} \Big\lbrack \exp{(\beta^2)} \Big\rbrack.$$ The coefficients can now be recast in the form $$\label{eq75} C_{m,n}=\sqrt{\pi} \exp{\beta^2}\sum_{k=0}^{m}\sum_{p=0}^{n} \tilde{f^\star}_{m,k}(\alpha)f_{n,p}(\alpha)\ \frac{1}{2^{k+p+1}}\ P_{k+p+1}(\beta).$$ The polynomials $P_s$ are defined by the property $$\label{eq76} \frac{d^s}{d\beta^s}\exp{(\beta^2)}=\exp{(\beta^2)}\ P_s(\beta).$$ One readily finds that they obey the recursion relations $$\begin{aligned} \label{eq77} P_{s+1}(\beta)&=&2\beta P_s(\beta) + \frac{d}{d\beta}P_s(\beta); \\ \nonumber P_0(\beta)&=&1.\end{aligned}$$ Let us now compare the quantum and the classical trajectories for the harmonic oscillator using our approach.\ For the quantum behavior, one finds $$\begin{aligned} \begin{split} \label{eq80} \Omega_0 & = \sqrt{\pi} \ \beta \ \exp{(\beta^2)},\\ \Omega_1 &= i \sqrt{\frac{\pi}{2}} \ \beta \ \exp{(\beta^2)},\\ \Omega_2 &=-\frac{1}{2} \sqrt{\pi}\ \exp{(\beta^2)},\\ \Omega_3 &=-\frac{i}{6} \sqrt{\frac{\pi}{2}} \ \beta \ \exp{(\beta^2)},\\ \Omega_4 &=\frac{1}{24} \sqrt{\pi}\ \exp{(\beta^2)},\\ &\cdots \end{split}\end{aligned}$$ The ratios $$\begin{aligned} \begin{split} \label{eq85} &\frac{\Omega_2}{\Omega_0}=-\frac{1}{2},\ \frac{\Omega_4}{\Omega_0}=\frac{1}{24},\ \frac{\Omega_6}{\Omega_0}=-\frac{1}{720},\ \frac{\Omega_8}{\Omega_0}=\frac{1}{40 320},\ \frac{\Omega_{10}}{\Omega_0}=-\frac{1}{3 628 800},\ \cdots \\ &\frac{\Omega_3}{\Omega_1}=-\frac{1}{6},\ \frac{\Omega_5}{\Omega_1}=\frac{1}{120},\ \frac{\Omega_7}{\Omega_1}=-\frac{1}{5 040},\ \frac{\Omega_9}{\Omega_1}=\frac{1}{362 880},\ \cdots \end{split}\end{aligned}$$ lead us to conclude that the behavior of the classical trajectory given in Eq. is recovered, at least at lowest order.
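The recursion of Eq. \[eq77\] for the polynomials $P_s$ is easy to run mechanically. A short sketch (our own illustration, with coefficients stored lowest degree first):

```python
def next_P(P):
    # P holds the coefficients [c0, c1, ...] of P_s(beta);
    # returns the coefficients of P_{s+1} = 2*beta*P_s + dP_s/dbeta
    out = [0.0] * (len(P) + 1)
    for k, c in enumerate(P):
        out[k + 1] += 2.0 * c      # 2*beta * (c * beta^k)
        if k >= 1:
            out[k - 1] += k * c    # derivative of c * beta^k
    return out

table = [[1.0]]                    # P_0 = 1
for _ in range(3):
    table.append(next_P(table[-1]))
# table[1] = [0, 2]        -> P_1 = 2*beta
# table[2] = [2, 0, 4]     -> P_2 = 4*beta^2 + 2
# table[3] = [0, 12, 0, 8] -> P_3 = 8*beta^3 + 12*beta
```

The first few entries agree with differentiating $\exp(\beta^2)$ by hand.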
One can go to higher order and verify this still works.\ ![](imarticle02d.jpg) The same calculation, done in section 3 for our toy model, yields the plot of $q(t)$ below. Figure 3 shows a kind of oscillation over a small interval of time.\ We were able to plot the function $q(t)$ for the parameters $x_1=-1$ and $\beta=\frac{4}{3}$. This, however, does not completely determine the coherent state’s parameter $\alpha$: the only information is $Re(\alpha)=\frac{2\sqrt{2}}{3}$ and $Im(\alpha)\leq0$. The “kind of oscillations” appeared for the smallest values of $Im(\alpha)$ and the smallest times, as shown in Figure 3. ![](imarticle03.jpg) [99]{} Molski, M., J. Phys. A : Math. Theor. 42 165301 (2009). Mikulski, D., Konarski, J., Krzysztof, E., Molsky, M., Kabaciński, S. J., Math. Chem. (2015) 53 : 2018. Sabi Takou D., Avossevou G. Y. H., Kounouhewa. 10.1515/Phys-2015-0021. Mikulski, D., Molski, M., Konarski, J., Krzysztof, E., Journal of Mathematical Chemistry 52(1) - January 2014. J. Math. Chem. (2014). Mikulski, D., Konarski, J., Krzysztof, E., Molsky, M., Annals of Physics 339 : 122-134- December 2013. Pawlowski, J., Szumniak, P., Bednarek, S., Phys. Rev. B 94, 155407-2016. Clark, L.A., Stokes, A., Beige, A., Phys. Rev. A94, 023840 (2016). Popov, D., Shi-Hai Dong, Pop, N., Sajfert, V., Şimon, S., Annals of Physics, Volume 39, December 2013, Pages 122-134. Drăgănescu, G. E., Physica Scripta, Volume 2013, T153, March 23rd, 2013. Yahiaoui, S. A., Bentaiba, M., Journal of Physics A : Mathematical and Theoretical, Volume 45(44), January 2012. Ruby, V. C., Senthilvelan, M., J. Math. 51, 052106 (2010). Malkiewicz, P., Affine Coherent States in Quantum Cosmology. ArXiv : 1512.04304v1\[gr-qc\]. De Lima Rodrigues, R., De Lima, A. F., De Araújo Ferreira, K., Vaidya, A. N., ArXiv : hep-th/0205175v7. Bergeron, H., Gazeau, J. P., Youssef, A. ArXiv : quant-ph/1007.3876. Balondo Iyela, D., Govaerts, J., Hounkonnou, M.
N., Journal of Mathematical Physics 54, 093502 (2013). Antoine, J. P., Gazeau, J. P., Monceau, P., Klauder, J. R., Penson, K. A., Journal of Mathematical Physics, Volume 42, 2349(2001). Aremua, I., Gazeau, J. P., Hounkonnou, M. N., Journal of Physics A: Mathematical and Theoretical 45(33). November 2011. Gazeau, J. P., delOlmo, M. A. , Annals of Physics Volume 330, March 2013, Pages 220-245. Bergeron, H., Gazeau, J. P., Siegl, P., Youssef, A., EPL (Europhysics Letters), Volume 92, Number 6. El Baz, M., Fresneda, R., Gazeau, J. P., Hassouni, Y., Journal of Physics A: Mathematical and Theoretical Volume 43, Number 38. Hounkonnou, M., N., Arjikab, S., Baloïtchac, E., Journal of Mathematical Physics 55, 123502 (2014). Bergeron, H., Siegl, P., Youssef, A., Journal of Physics A: Mathematical and Theoretical, Volume 45, Number 24. Angelova, M., Hertz, A., Hussin, V., J. Phys. A: Math. Theor. 45 244007(2012), ArXiv: 1111.1974v3 \[math-ph\]. Nieto, M., M., 1997, ArXiv : quant-ph/9708012. Perelomov, A.M. (1986) Generalized Coherent States and their Applications, Springer-Verlag, Berlin. Klauder, J., R., Penson, K., A., JM 2001, Phys.Rev.A. 64013817. Kuang, L.M., Wang, F.B., Zhou, Y.G. (1993), Phys. Lett. A, 183, 1. Gazeau, J.-P., Maquet, A. (1979), Phys. Rev. A, 20, 727. Castañeda, J. A., Hernández, M. A., Jáuregui, R. (2008), Phys. Rev. A, 78, 78. Gazeau, J., -P. Coherent States in Quantum Physics, 2009, Wiley-VCh Verlag GmbH & Co. KGaA, Weinheim. Benedict, M.G, Molnar, B. (1999), Phys. Rev. A, 60, R1737. Molnar, B., Benedict, M.G, Bertrand, J. (2001)l, J. Phys. A : Math. Gen., 34, 3139. Kastrup, H.A. (2007) , Ann. Phys. (Leipzig), 7–8, 439. Kuang, L.M., Wang, F.B., Zhou, Y.G. (1994), J. Mod. Opt., 41, 1307. Nieto, M.M., Simmons, Jr., L.M. (1987), Phys. Rev. Lett., 41, 207. Antoine, J.-P., Gazeau, J.-P., Monceau, P., Klauder, J.R., Penson, K.A. (2001)l, J. Math. Phys., 42, 2349. Barut, A.O. and Girardello, L. (1971), Commun. Math. Phys., 21, 41. 
Klauder, J.R., Skagerstam, B.S. (eds) (1985) Coherent states. Applications in physics and mathematical physics, World Scientific Publishing Co., Singapore. Garcia de Leon, P., Gazeau, J.-P., Quéva, J. (2008), Phys. Lett. A, 372, 3597. Cooper, F., Khare, A., Sukhatme, U., Phys. Rep. 251, 267-385. Khare, A., Sukhatme, U., P., 1993, J. Phys. A : Math. Gen. 26 L901-4. Klauder, J., R., 1996, J. Phys. A : Math. Gen. 291, 293, 8. Gerry, G., C., Kiefer, J., 1988, Phys. Rev. A 37 665-771. Gazeau, J.-P. and Klauder, J.R. (1999), J. Phys. A : Math. Gen., 32, 123. Khais, S., Levine, R. D., Phys. Rep. A 41 2301-05. Gerry, G., C., 1985, Phys. Rev. A 31 2721-23. Cooper, I., L 1992, Kiefer, J. Phys. A. : Math. Gen. 25 1671-83.
--- author: - Wei Wang date: 'Received  2005; accepted  2005' title: Millisecond pulsar population in the Galactic center and high energy contributions ---

Motivations
===========

Millisecond pulsars are old pulsars which could have been members of binary systems and been recycled to millisecond periods, having formed from low mass X-ray binaries in which the neutron stars accreted sufficient matter from white dwarf, evolved main sequence star, or giant donor companions. The current population of these rapidly rotating neutron stars may either be single (having evaporated its companion) or have remained in a binary system. Observationally, millisecond pulsars generally have a period $< 20$ ms and a dipole magnetic field $< 10^{10}$ G. According to this criterion, we select 133 millisecond pulsars from the ATNF Pulsar Catalogue [^1]. Figure 1 shows the distribution of these MSPs in our Galaxy; they fall into two populations: the Galactic field (1/3) and globular clusters (2/3). In the Galactic bulge region, there are four globular clusters, including the famous Terzan 5, in which 27 new millisecond pulsars were discovered (Ransom et al. 2005). ![The distribution of the observed millisecond pulsars in the Milky Way. The grey contour is the electron density distribution from Taylor & Cordes (1993).](msp_gal.eps){width="7cm"} Recently, deep [*Chandra*]{} X-ray surveys of the Galactic center (GC) revealed a multitude of point X-ray sources ranging in luminosity from $\sim 10^{32} - 10^{35}$ ergs s$^{-1}$ (Wang, Gotthelf, & Lang 2002a) over a field covering a $ 2 \times 0.8$ square degree band, and from $\sim 3 \times 10^{30} - 2 \times 10^{33}$ ergs s$^{-1}$ in a deeper but smaller field of $17' \times 17'$ (Muno et al. 2003). More than 2000 weak unidentified X-ray sources were discovered in Muno’s field. The origin of these weak unidentified sources is still in dispute.
Some source candidates have been proposed: cataclysmic variables, X-ray binaries, young stars, supernova ejecta, pulsars or pulsar wind nebulae. EGRET on board the [*Compton GRO*]{} has identified a central ($<1^\circ$) $\sim 30 {\rm MeV}-10$ GeV continuum source (2EG J1746-2852) with a luminosity of $\sim 10^{37}{\rm erg\ s^{-1}}$ (Mattox et al. 1996). Further analysis of the EGRET data yielded the diffuse gamma ray spectrum in the Galactic center. The photon spectrum can be well represented by a broken power law with a break energy at $\sim 2$ GeV (see Figure 2, Mayer-Hasselwander et al. 1998). Recently, Tsuchiya et al. (2004) have detected sub-TeV gamma-ray emission from the GC using the CANGAROO-II Imaging Atmospheric Cherenkov Telescope. Recent observations of the GC with the air Cerenkov telescope HESS (Aharonian et al. 2004) have shown a significant source centered on Sgr A$^*$ above energies of 165 GeV with a spectral index $\Gamma=2.21\pm 0.19$. Some models, e.g. gamma-rays related to the massive black hole, inverse Compton scattering, and mesonic decay resulting from cosmic rays, have difficulty producing the hard gamma-ray spectrum with a sharp turnover at a few GeV. However, the gamma-ray spectrum toward the GC is similar to the gamma-ray spectra emitted by middle-aged pulsars (e.g. Vela and Geminga) and millisecond pulsars (Zhang & Cheng 2003; Wang et al. 2005a). We will therefore argue that a pulsar population possibly exists in the Galactic center region. First, normal pulsars are not likely to be a major contributor, for the following reasons. The birth rate of normal pulsars in the Milky Way is about 1/150 yr (Arzoumanian, Chernoff, & Cordes 2002). As the mass in the inner 20 pc of the Galactic center is $\sim 10^8 {\rm ~M}_{\odot}$ (Launhardt, Zylka, & Mezger 2002), the birth rate of normal pulsars in this region is only $10^{-3}$ of that in the entire Milky Way, or $\sim$ 1/150 000 yr.
We note that the rate may be increased to as high as $\sim 1/15000$ yr in this region if the star formation rate in the nuclear bulge was higher than in the Galactic field over the last $10^7 - 10^8$ yr (see Pfahl et al. 2002). Few normal pulsars are likely to remain in the Galactic center region, since only a fraction ($\sim 40\%$) of normal pulsars in the low velocity component of the pulsar birth velocity distribution (Arzoumanian et al. 2002) would remain within the 20 pc region of the Galactic center studied by Muno et al. (2003) on timescales of $\sim 10^5$ yrs. Mature pulsars can remain active as gamma-ray pulsars up to 10$^6$ yr and have the same gamma-ray power as millisecond pulsars (Zhang et al. 2004; Cheng et al. 2004), but given the birth rate of pulsars in the GC, the number of gamma-ray mature pulsars is no higher than 10. On the other hand, there may exist a population of old neutron stars with low space velocities which have not escaped the Galactic center (Belczynski & Taam 2004). Such neutron stars could have been members of binary systems and been recycled to millisecond periods, having formed from low mass X-ray binaries in which the neutron stars accreted sufficient matter from white dwarf, evolved main sequence star, or giant donor companions. The current population of these millisecond pulsars may either be single or have remained in a binary system. Binary population synthesis in the GC (Taam 2005, private communication) shows that more than 200 MSPs are produced through the recycling scenario and stay in Muno’s field.

Contributions to high energy radiation in the Galactic Center
=============================================================

Millisecond pulsars could remain active as high energy sources throughout their lifetime after birth. Thermal emission from the polar caps of millisecond pulsars contributes to the soft X-rays ($kT < 1$ keV, Zhang & Cheng 2003).
Millisecond pulsars could be gamma-ray (GeV) emission sources through the synchro-curvature mechanism predicted by outer gap models (Zhang & Cheng 2003). At the same time, millisecond pulsars can have strong pulsar winds which interact with the surrounding medium and the companion stars to produce X-rays through synchrotron radiation, and possibly TeV photons through inverse Compton scattering (Wang et al. 2005b). This scenario is also supported by the Chandra observations of the millisecond pulsar PSR B1957+20 (Stappers et al. 2003). Finally, millisecond pulsars are potential positron sources: positrons are produced through the pair cascades near the neutron star surface in the strong magnetic field (Wang et al. 2005c). Hence, if there exists a millisecond pulsar population in the GC, these unresolved MSPs will contribute to the high energy radiation observed toward the GC: the unidentified weak X-ray sources, the diffuse gamma-rays from GeV to TeV energies, and the 511 keV emission line. In this section, we discuss these contributions separately. ![The diffuse gamma-ray spectrum in the Galactic center region within 1.5$^\circ$ and the 511 keV line emission within 6$^\circ$. The INTEGRAL and COMPTEL continuum spectra are from Strong (2005), the 511 keV line data point from Churazov et al. (2005), EGRET data points from Mayer-Hasselwander et al. (1998), HESS data points from Aharonian et al. (2004), CANGAROO data points from Tsuchiya et al. (2004). The solid and dashed lines are the simulated spectra of 6000 MSPs according to the different period and magnetic field distributions in globular clusters and the Galactic field respectively. The dotted line corresponds to the inverse Compton spectrum from MSPs.](gammaray.eps){width="10cm"}

Weak unidentified Chandra X-ray sources
---------------------------------------

More than 2000 new weak X-ray sources ($L_x>3\times 10^{30} {\rm erg\ s^{-1}}$) have been discovered in Muno’s field (Muno et al. 2003).
Since the thermal component is soft ($kT < 1$ keV) and absorbed by interstellar gas for sources at the Galactic center, we consider only the non-thermal emission from pulsar wind nebulae as the main contributor to the X-ray sources observed by Chandra (Cheng, Taam, Wang 2005). Typically, these millisecond pulsar wind nebulae have an X-ray luminosity (2-10 keV) of $10^{30-33} {\rm erg\ s^{-1}}$, with a power-law spectral photon index from 1.5-2.5. According to binary population synthesis in Muno’s field, about 200 MSPs are produced through the recycling scenario and stay in the region, assuming a total galactic star formation rate (SFR) of $1 M_\odot {\rm yr^{-1}}$ and a galactic center contribution to star formation of 0.3%. The galactic SFR may be higher than the adopted value by a factor of a few (e.g. Gilmore 2001), and the contribution of the galactic center nuclear bulge region may also be larger than the adopted value (Pfahl et al. 2002). The actual number of MSPs in the region could then increase to 1000 (Taam 2005, private communication). So the MSP nebulae could be a significant contributor to these unidentified weak X-ray sources in the GC. In addition, we should emphasize that some high speed millisecond pulsars ($>100$ km s$^{-1}$) can contribute to the observed elongated X-ray features (e.g. four identified X-ray tails have $L_x\sim 10^{32-33} {\rm erg\ s^{-1}}$ with photon index $\Gamma\sim 2.0$, see Wang et al. 2002b; Lu et al. 2003; Sakano et al. 2003), which are good pulsar wind nebula candidates.

Diffuse gamma-rays from GeV to TeV
----------------------------------

To study the contribution of millisecond pulsars to the diffuse gamma-ray radiation from the Galactic center, e.g.
fitting the spectral properties and total luminosity, we first need to know the period and surface magnetic field distribution functions of the millisecond pulsars, which are derived from the observed pulsar data in globular clusters and the Galactic field (Wang et al. 2005a). We assume there are $N$ MSPs in the GC within $\sim 1.5^\circ$, each with an emission solid angle $\Delta \Omega \sim$ 1 sr and the $\gamma$-ray beam pointing in the direction of the Earth. We then sample the period and magnetic field of these MSPs by the Monte Carlo method according to the observed distributions of MSPs in globular clusters and the Galactic field separately. We first calculate the fractional size of the outer gap: $f\sim 5.5P^{26/21}B_{12}^{-4/7}$. If $f < 1$, the outer gap can exist and the MSP can emit high energy $\gamma$-rays. We can thus build a superposed spectrum of $N$ MSPs to fit the EGRET data, and find that about 6000 MSPs could significantly contribute to the observed GeV flux (Figure 2). The solid line corresponds to the distributions derived from globular clusters, and the dashed line to those from the Galactic field. We can also calculate the inverse Compton scattering from the wind nebulae of 6000 MSPs, which could contribute to the TeV spectrum toward the GC. In Figure 2, the dotted line is the inverse Compton spectrum, where we have assumed the typical parameters of MSPs, $P=3$ ms, $B=3\times 10^8$ G, and, in the nebulae, an electron energy spectral index $p=2.2$ and an average magnetic field $\sim 3\times 10^{-5}$ G. We predict the photon index around TeV: $\Gamma=(2+p)/2=2.1$, which is consistent with the HESS spectrum but deviates from the CANGAROO data.

511 keV emission line
---------------------

The Spectrometer on the International Gamma-Ray Astrophysical Laboratory (SPI/INTEGRAL) detected a strong and extended positron-electron annihilation line emission in the GC.
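For concreteness, the outer-gap selection step described in the previous subsection can be sketched as a small Monte Carlo. The distributions below are crude placeholders (uniform in period, log-uniform in field), not the observed distributions used in the actual simulation, and the formula is applied assuming $P$ in seconds with $B_{12}=B/10^{12}$ G:

```python
import random

def outer_gap_fraction(P, B):
    # f ~ 5.5 P^(26/21) B_12^(-4/7); P in seconds, B in gauss (assumed units)
    return 5.5 * P ** (26.0 / 21.0) * (B / 1e12) ** (-4.0 / 7.0)

def active_fraction(n=10000, seed=1):
    # fraction of sampled MSPs whose outer gap exists (f < 1),
    # i.e. which count as gamma-ray emitters in the superposed spectrum
    random.seed(seed)
    hits = 0
    for _ in range(n):
        P = random.uniform(1.5e-3, 20e-3)    # placeholder period distribution
        B = 10 ** random.uniform(8.0, 9.5)   # placeholder field distribution
        if outer_gap_fraction(P, B) < 1.0:
            hits += 1
    return hits / n
```

With the typical values quoted above ($P=3$ ms, $B=3\times 10^8$ G) one gets $f\approx 0.4<1$, so such a pulsar keeps an active outer gap, while a slow, weak-field MSP ($P=20$ ms, $B=10^8$ G) has $f>1$.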
The spatial distribution of the 511 keV line appears centered on the Galactic center (bulge component), with no contribution from a disk component (Teegarden et al. 2005; Knödlseder et al. 2005; Churazov et al. 2005). The analysis of Churazov et al. (2005) suggested that the positron injection rate is up to $10^{43}\ e^+{\rm s^{-1}}$ within $\sim 6^\circ$. The SPI observations present a challenge to present models of the origin of the galactic positrons, e.g. supernovae. Recently, Cassé et al. (2004) suggested that hypernovae (Type Ic supernovae/gamma-ray bursts) in the Galactic center may be possible positron sources. Moreover, annihilation of light dark matter particles into $e^\pm$ pairs (Boehm et al. 2004) has also been proposed as a potential origin of the 511 keV line in the GC. It has been suggested that millisecond pulsar winds are positron sources, the positrons resulting from $e^\pm$ pair cascades near the neutron star surface in the strong magnetic field (Wang et al. 2005c). Moreover, MSPs remain active for nearly a Hubble time, so they are continuous positron injection sources. For the typical parameters $P=3$ ms, $B=3\times 10^8$ G, the positron injection rate is $\dot{N}_{e^\pm}\sim 5\times 10^{37}{\rm s^{-1}}$ for a single millisecond pulsar (Wang et al. 2005c). How many MSPs are there in this region? In §2.2, 6000 MSPs contribute to the gamma-rays within 1.5$^\circ$, while the diffuse 511 keV emission has a size of $\sim 6^\circ$. We do not know the distribution of MSPs in the GC, so we simply scale the number of MSPs as $6000\times (6^\circ/1.5^\circ)^2\sim 10^5$, assuming the number density of MSPs is distributed as $\rho_{MSP}\propto r_c^{-1}$, where $r_c$ is the scaling size of the GC. The total positron injection rate from the millisecond pulsar population is then $\sim 5\times 10^{42}$ e$^+$ s$^{-1}$, which is consistent with the present observational constraints.
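The scaling above is simple arithmetic; for the record, a sketch with the values as quoted in the text:

```python
# Scale the MSP count from the gamma-ray region (1.5 deg) to the
# 511 keV region (6 deg), then estimate the total positron injection rate.
n_gamma = 6000                       # MSPs within 1.5 deg (Sec. 2.2)
n_511 = n_gamma * (6.0 / 1.5) ** 2   # area scaling: 96,000 ~ 1e5
rate_per_msp = 5e37                  # e+ s^-1 for P = 3 ms, B = 3e8 G
total_rate = n_511 * rate_per_msp    # ~ 5e42 e+ s^-1, as quoted
```

The result, $\approx 4.8\times 10^{42}$ e$^+$ s$^{-1}$, indeed rounds to the quoted $\sim 5\times 10^{42}$ e$^+$ s$^{-1}$.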
Moreover, our scenario of a millisecond pulsar population as possible positron sources in the GC has the advantage of explaining the diffuse morphology of the 511 keV line emission without requiring the strong turbulent diffusion that would otherwise be needed to transport the positrons over a few hundred pc, and it predicts that the line intensity distribution follows the mass distribution of the GC, which may be tested by future high resolution observations.

Summary
=======

In the present paper, we propose that there exist three possible MSP populations: globular clusters, the Galactic field, and the Galactic center. The population of MSPs in the GC is still an assumption, but it seems reasonable. Importantly, the MSP population in the GC could contribute to several high energy phenomena observed by different present missions. An MSP population can contribute to the weak unidentified Chandra sources in the GC (e.g. more than 200 sources in Muno’s field), especially to the elongated X-ray features. The unresolved MSP population can significantly contribute to the diffuse gamma-rays detected by EGRET in the GC, and possibly to the TeV photons detected by HESS. Furthermore, MSPs in the GC or bulge could be potential positron sources. Identification of a millisecond pulsar in the GC would be interesting and important. However, because the electron density in the direction of the GC is very high, it is difficult to detect millisecond pulsars with present radio telescopes. At present, we suggest that X-ray studies of the sources in the GC are probably a feasible way to find millisecond pulsars with [*Chandra*]{} and [*XMM-Newton*]{}. W. Wang is grateful to K.S. Cheng, Y.H. Zhao, Y. Lu, K. Kretschmer, R. Diehl, A.W. Strong, R. Taam, and the organizers of this conference at Hanas in August 2005. This work is supported by the National Natural Science Foundation of China under grants 10273011 and 10573021. [99]{} Aharonian, F. et al.
2004, A&A, 425, L13
Arzoumanian, Z., Chernoff, D.F., & Cordes, J. M. 2002, ApJ, 568, 289
Belczynski, K., & Taam, R. E. 2004, ApJ, 616, 1159
Boehm, C. et al. 2004, Phys. Rev. Lett., 92, 101301
Cassé, M. et al. 2004, ApJ, 602, L17
Cheng, K. S. et al. 2004, ApJ, 608, 418
Cheng, K. S., Taam, R. E., & Wang, W. 2005, ApJ submitted
Churazov, E. et al. 2005, MNRAS, 357, 1377
Gilmore, G. 2001, in Galaxy Disks and Disk Galaxies, eds. J.G. Funes & E.M. Corsini, San Francisco, ASP, 3
Knödlseder, J. et al. 2005, A&A, 441, 513
Launhardt, R., Zylka, R., & Mezger, P. G. 2002, A&A, 384, 112
Lu, F., Wang, Q.D., & Lang, C. 2003, AJ, 126, 319
Mattox, J.R. et al. 1996, ApJ, 461, 396
Mayer-Hasselwander, H.A. et al. 1998, A&A, 335, 161
Muno, M. P., et al. 2003, ApJ, 589, 225
Pfahl, E.D., Rappaport, S., & Podsiadlowski, P. 2002, ApJ, 571, L37
Ransom, S.M. et al. 2005, Science, 307, 892
Sakano, M. et al. 2003, MNRAS, 340, 747
Strong, A.W. 2005, private communication
Stappers, B. W. et al. 2003, Science, 299, 1372
Taylor, J.H., & Cordes, J.M. 1993, ApJ, 411, 674
Teegarden, B. J. et al. 2005, ApJ, 621, 296
Tsuchiya, K. et al. 2004, ApJ, 606, L115
Wang, Q.D., Gotthelf, E.V., & Lang, C.C. 2002a, Nature, 415, 148
Wang, Q.D., Lu, F., & Lang, C. 2002b, ApJ, 581, 1148
Wang, W., Jiang, Z.J., & Cheng, K.S. 2005a, MNRAS, 358, 263
Wang, W. et al. 2005b, MNRAS, 360, 646
Wang, W., Pun, C.S.J., & Cheng, K.S. 2005c, A&A in press, astro-ph/0509760
Zhang, L., & Cheng, K.S. 2003, A&A, 398, 639
[^1]: http://www.atnf.csiro.au/research/pulsar/psrcat/
--- author: - 'Li Dai$^{1*}$, Dianlong Yu $^2$, Zheng Xie $^1$' title: 'On the Leaders’ Graphical Characterization for Controllability of Path Related Graphs' --- **abstract** The problem of leader location plays an important role in the controllability of undirected graphs. The concept of a minimal perfect critical vertex set is introduced with the aid of the eigenvectors of the Laplacian matrix. Using this notion, the problem of finding the minimum number of controllable leader vertices is transformed into the problem of finding all minimal perfect critical vertex sets. Necessary and sufficient conditions for special minimal perfect critical vertex sets are provided, such as the minimal perfect critical 2 vertex set and the minimal perfect critical vertex sets of paths or path related graphs. Furthermore, the leader location problem for path graphs is solved completely by the algorithm provided in this paper. An interesting result, that a minimal perfect critical 3 vertex set never exists, is also proved. **keyword** controllability, leaders location, multi-agent system, path, generalized star Introduction ============ Inspired by the swarming behaviors of biological systems and great promise in numerous applications, the field of controllability of multi-agent systems has been studied extensively in recent years [@Ali; @ZhijianAndHai; @ShimaAndMohammad]. By introducing the concept of matching, the paper of Liu et al. [@Yangyu], published in Nature in 2011, gives a method to find the minimum leader set for directed networks. However, as pointed out by Ji in [@ZhijianAndHai], when the topological structure of the system is undirected, how to locate the leaders and what minimum number of leaders ensures controllability are still difficult and largely open problems. 
Literature Review ----------------- The neighbor-based controllability of an undirected graph under a single leader was first formulated by Tanner in [@Tanner], and a necessary and sufficient condition expressed in terms of eigenvalues and eigenvectors was derived. In the case of multiple leaders, other algebraic conditions were developed in subsequent works. These algebraic conditions lay the foundation for understanding the interaction between the topological structure of an undirected graph and its controllability, and they also serve as the theoretical basis of this paper. Research efforts on characterizing controllability from a graphical point of view, aimed at building controllable topologies, were also motivated by [@Tanner]. Many kinds of uncontrollable topologies were characterized, such as a graph symmetric with respect to the anchored nodes [@RahmaniAndMesbahi], quotient graphs [@MartiniAndEgerstedt], nodes with the same number of neighbors [@JiAndLin], and controllability destructive nodes [@Zhijian]. Useful tools and methods were developed to study the controllability of undirected graphs, such as downer branches for tree graphs [@ZhijianAndHai], zero forcing sets [@ShimaAndMohammad; @Monshizadeh], equitable partitions [@Meng; @Rahmani; @Cesar; @Martini; @Camlibel], leader and follower subgraphs [@JiAndLin], $\lambda$-core vertices [@Sciriha; @Farrugia], and Distance-to-Leaders (DL) vectors [@Yazicioglu]. Omnicontrollable systems are defined in [@Farrugia]; in such systems, the choice of leader vertices that control the follower graph is arbitrary. The minimal controllability problem (MCP), which aims to determine the minimum number of state variables that need to be actuated to ensure the system's controllability, was studied in [@Olshevsky; @Pequito]. In [@ZhaoAndGuan], two algorithms are established for selecting the fewest leaders that preserve controllability, and an algorithm for choosing leaders' locations to maximize non-fragility is also designed. 
Necessary and sufficient conditions characterizing all and only the nodes from which a path or cycle network system is controllable were also provided in the literature. Although many scholars have devoted themselves to research on the controllability of undirected graphs and achieved many remarkably strong and elegant results, this problem has not been solved yet. It is well known that any undirected simple connected graph on $n$ vertices is always $(n-1)$-omnicontrollable. To ensure minimal controllability, which vertices should be selected as leaders is important. Therefore, our aim is to find a method that gives a direct interpretation of the leader vertices from a graph-theoretic vantage point. In this spirit, we provide a new concept, the minimal perfect critical vertex set, to identify the potential leader vertices. This provides a new direction for the study of the controllability of undirected systems. Notations and Preliminary Results --------------------------------- Let $G=(V,E)$ be an undirected and unweighted simple graph, where $V=\{v_1,v_2,\cdots,v_n\}$ is the vertex set and $E=\{v_iv_j|v_i\,\, and \,\,v_j\in V\}$ is the edge set, an edge $v_iv_j$ being an unordered pair of distinct vertices in $V$. If $v_iv_j\in E$, then $v_i$ and $v_j$ are said to be *adjacent*, or *neighbors*. $N_S(v_i)=\{v_j\in S| v_iv_j\in E(G)\}$ denotes the neighboring set of $v_i$ in $S$, where $S\subset V$. The cardinality of $S$ is denoted by $|S|$. $G[S]$ is the induced subgraph, whose vertex set is $S$ and whose edge set is $\{v_iv_j\in E(G)|v_i,v_j\in S\}$. The *valency matrix* $\Delta(G)$ of a graph $G$ is a diagonal matrix with rows and columns indexed by $V$, in which the $(i,i)$-entry is the degree of vertex $v_i$, i.e. $|N_G(v_i)|$. Any undirected simple graph can be represented by its *adjacency matrix*, $D(G)$, a symmetric matrix with 0-1 elements. The element in position $(i,j)$ of $D(G)$ is 1 if vertices $v_i$ and $v_j$ are adjacent and 0 otherwise. 
The symmetric matrix defined as: $$\mathbf{L}(G)=\mathbf{\Delta}(G)-\mathbf{D}(G)$$ is the *Laplacian* of $G$. The Laplacian is always symmetric and positive semidefinite, and the algebraic multiplicity of its zero eigenvalue is equal to the number of connected components of the graph. For a connected graph, the $n$-dimensional eigenvector associated with the single zero eigenvalue is the vector of ones, $\textbf{1}_n$. Throughout this paper, it is assumed without loss of generality that $F$ denotes the follower vertex set, whose vertices play the follower role, and that the vertices in $\overline{F}$ are leaders (driver nodes), where $\overline{F}=V \backslash F$ denotes the complement of $F$. For a vector $\mathbf{y}$, let $\mathbf{y}|_S$ denote the vector obtained from $\mathbf{y}$ after deleting the elements in $\overline{S}$. Let $\mathbf{L}_{S\rightarrow T}$ denote the matrix obtained from $\mathbf{L}$ after deleting the rows in $\overline{S}$ and the columns in $\overline{T}$. The system described by an undirected graph $G$ is said to be controllable (for convenience, $G$ is controllable) if it can be driven from any initial state to any desired state in finite time. If the followers' dynamics is (see (4) in [@Tanner]) $$\dot{\mathbf{x}}=\mathbf{Ax}+\mathbf{Bu},$$ where $\mathbf{x}$ captures the state of the system, i.e. the stack vector of all $x_i$ corresponding to follower vertices $v_i\in F$, and $\mathbf{u}$ is the external control input vector imposed by the controller and injected into only some of the vertices, namely the leaders, then the system is controllable with the follower vertex set $F$ if and only if the $N \times NM$ controllability matrix $$\mathbf{C}=[\mathbf{B},\mathbf{AB},\mathbf{A^2B},\cdots, \mathbf{A^{N-1}B}]$$ has full row rank, that is $rank(\mathbf{C})=N,$ where $\mathbf{B}=\mathbf{L}_{F\rightarrow \overline{F}}$ and $\mathbf{A}=\mathbf{L}_{F\rightarrow F}$. 
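As a quick sanity check of the rank condition, the following plain-Python sketch works through the path $P_3$ with leader $v_1$ (a toy example chosen here only for illustration; vertices are 0-indexed):

```python
# Kalman rank condition for the path P_3 (v1 - v2 - v3) with leader v1,
# i.e. F = {v2, v3}: C = [B, AB] must have full row rank N = 2.
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]  # Laplacian Delta - D of P_3

F = [1, 2]                                  # follower indices
leader = 0
A = [[L[i][j] for j in F] for i in F]       # L_{F->F} = [[2, -1], [-1, 1]]
B = [L[i][leader] for i in F]               # L_{F->F_bar}, a single column
AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
C = [[B[0], AB[0]], [B[1], AB[1]]]          # controllability matrix [B, AB]

det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
print(det != 0)  # True: rank(C) = 2 = N, so P_3 is controllable from v1
```

Repeating the computation with `leader = 1` gives identical columns in `C` (rank 1), i.e. the system is uncontrollable from $v_2$.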
This is the mathematical condition for controllability, well known as Kalman's controllability rank condition [@Yangyu; @Kalman; @Brockett]. (Figure \[TheLeadersSelectionOn\]: the path $P_3$ with vertices $v_1$, $v_2$, $v_3$; the external control input is injected at $v_1$.) For example, if the vertex $v_1$ is selected as leader, the system is controllable (see fig.\[TheLeadersSelectionOn\]). But if $v_2$ plays the leader role, it is NOT controllable. This paper addresses the graphical characterization of leaders that ensure the system's controllability. In most real systems, such as multi-agent systems or complex networks, we are particularly interested in identifying the minimum number of leaders whose control is sufficient to fully control the system's dynamics. In terms of eigenvalues and eigenvectors of submatrices of the Laplacian, a necessary and sufficient algebraic condition on controllability was presented. \[proposition1\] The undirected graph $G$ is controllable under the leader vertex set $\overline{F}$ if and only if $\mathbf{L}$ and $\mathbf{L}_{F\rightarrow F}$ share no common eigenvalue. \[proposition2\] [@Meng; @Zhijian] The undirected graph $G$ is controllable under the leader vertex set $\overline{F} $ if and only if $\mathbf{y}|_{\overline{F}}\neq \mathbf{0}$ for every eigenvector $\mathbf{y}$ of $\mathbf{L}$. Proposition \[proposition2\] gives the algebraic characteristics of a leader vertex set. It is worth noting that the eigenvector $\textbf{y}$ in Proposition \[proposition2\] is arbitrary. Therefore, when $\textbf{L}$ has repeated eigenvalues, it is not sufficient to examine only a set of linearly independent eigenvectors; all the eigenvectors with zero components must be verified as well. 
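Both propositions can be seen at work on the same $P_3$ toy example (a plain-Python sketch; checking Proposition \[proposition2\] in full would require all eigenvectors, so the single eigenvector below only exhibits the failure mode):

```python
# Propositions [proposition1]/[proposition2] on the path P_3 (0-indexed).
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]

# y = (1, 0, -1) is an eigenvector of L with eigenvalue 1:
y = [1, 0, -1]
Ly = [sum(L[i][j] * y[j] for j in range(3)) for i in range(3)]
assert Ly == [1 * yi for yi in y]

# Proposition [proposition2]: y vanishes at v2 (index 1), so {v2} cannot be
# the leader set; y is nonzero at v1, consistent with v1 being a valid leader.
print(y[1] == 0)   # True: leader set {v2} fails the test for this eigenvector

# Proposition [proposition1] view: with leader v2, F = {v1, v3} and
# L_{F->F} is the 2x2 identity, whose eigenvalue 1 is shared with L.
A = [[L[0][0], L[0][2]], [L[2][0], L[2][2]]]
print(A)  # [[1, 0], [0, 1]]
```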
From the point of view of numerical calculation, this verification is computationally expensive and difficult to implement. It is clear that the topology of the interconnection graph $G$ completely determines its controllability properties. So this paper focuses on the graph-theoretic characterization of the leader vertices. The remainder of this paper is organized as follows. In Section \[section2\], we provide three new concepts: the critical vertex set, the perfect critical vertex set and the minimal perfect critical vertex set. A necessary and sufficient condition for $S$ to be a minimal perfect critical 2 vertex set is presented. An interesting result, that a minimal perfect critical 3 vertex set never exists, is also proved in Section \[section2\]. Section \[section3\] is the main part of this paper. In this section, we provide an algorithm to locate all leader vertices of a path by finding all of its minimal perfect critical vertex sets. Graphs constructed by adding paths incident to one vertex $v_0$ are investigated in Section \[section4\]. Finally, our conclusions are summarized in Section \[section5\]. Minimal Perfect Critical Vertex Set {#section2} =================================== According to Proposition \[proposition2\], for any $S\subset V$ with $S\neq \emptyset$, if there exists an eigenvector $\textbf{y}$ of the Laplacian matrix $\textbf{L}$ such that $\mathbf{y}|_{S}=\mathbf{0}$, then $S$ cannot be used as a leader vertex set. So, in order to locate the leaders of a graph $G$, the following concepts are proposed. Three Definitions {#subsection2.1} ----------------- \[definition1\] (**critical vertex set**) Let $S$ be a nonempty subset of $V$. If there exists an eigenvector $\textbf{y}$ such that $\textbf{y}|_{\overline{S}}=\textbf{0}$, then $S$ is called a critical vertex set (CVS) and $\textbf{y}$ an inducing eigenvector. $S$ is called a critical $k$ vertex set if $|S|=k$. 
\[definition2\] (**perfect critical vertex set**) Let $S$ be a critical vertex set. If there exists an inducing eigenvector $\textbf{y}$ satisfying $\textbf{y}|_{\{v_i\}}\ne 0$ $(\forall v_i\in S)$, then $S$ is called a perfect critical vertex set (PCVS). $S$ is called a perfect critical $k$ vertex set if $|S|=k$. \[definition 3\] (**minimal perfect critical vertex set**) A perfect critical vertex set is called a minimal perfect critical vertex set (MPCVS) if no proper subset of it is a perfect critical vertex set. $S$ is called a minimal perfect critical $k$ vertex set if $|S|=k$. \[remark-definition\] By the definitions, $V$ is trivially a CVS and a PCVS, induced by the eigenvector $\textbf{1}_n$. $V$ is a MPCVS if and only if $G$ is controllable under any single vertex selected as leader, i.e. $G$ is omnicontrollable. (Figure \[figureForDifferentMPCVS\]: a graph on vertices $v_1,\dots,v_7$, in which $v_1$, $v_2$, $v_3$ are pendant vertices attached to $v_4$, and $v_4v_5$, $v_5v_6$, $v_6v_7$, $v_7v_4$ form a 4-cycle.) For example, see fig.\[figureForDifferentMPCVS\]: $\{v_1,v_5\}$ is not a CVS, and $\{v_1,v_2,v_3,v_4\}$ is a CVS but not a PCVS. $S=\{v_1,v_2,v_3\}$ is a PCVS but not a MPCVS. $S_1=\{v_1,v_2\}$, $S_2=\{v_1,v_3\}$, $S_3=\{v_2,v_3\}$ and $S_4=\{v_5,v_7\}$ are all the MPCVSs of the graph in fig.\[figureForDifferentMPCVS\]. \[remarkForPropositionStatedWithMPVCS\] Proposition \[proposition2\] can be restated as: the undirected graph $G$ is controllable under the leader vertex set $\overline{F} $ if and only if $S \bigcap \overline{F}\ne \emptyset$ for each MPCVS $S$. It is Remark \[remarkForPropositionStatedWithMPVCS\] that inspired us to study MPCVSs, because it reveals a close relationship between MPCVSs and minimum leader sets. 
In other words, when we find all the MPCVSs of $G$, we find the minimum leader set and hence the minimum number of leader vertices. For example, by Remark \[remarkForPropositionStatedWithMPVCS\] and the 4 MPCVSs above, the graph $G$ in fig.\[figureForDifferentMPCVS\] is not controllable under any single leader (since $\bigcap_{i=1}^{4}S_i=\emptyset$), nor under any two leaders. So a minimum leader set is $\{v_i,v_j,v_k\}$, where $v_i,v_j$ come from $\{v_1,v_2,v_3\}$ and $v_k$ from $\{v_5,v_7\}$. Therefore, the minimum number of leaders is 3. Moreover, many MPCVSs have typical graphical characteristics. For example, all 4 MPCVSs of $G$ in fig.\[figureForDifferentMPCVS\] have the graphical structure stated in Theorem \[theorem3\]. This is another reason for us to investigate MPCVSs. Sufficient Conditions for Critical Vertex Set {#subsection2.2} --------------------------------------------- For an undirected graph, the Laplacian matrix $\textbf{L}$ is symmetric, so its eigenvectors can be chosen orthogonal to each other. Knowing that $\textbf{1}_n$ is an eigenvector of $\textbf{L}$, it is immediate that all the other eigenvectors of $\textbf{L}$ are orthogonal to $\bold{1}_n$; that is, for every such eigenvector $ \bold{y}$, $$\label{eq1} \textbf{1}_{n}^T\textbf{y}=\sum_{i=1}^{n}y_i=0.$$ The equality in (\[eq1\]) is used throughout the paper. If $S$ is a CVS, then $$\label{equationFor|S|>=2} |S|\geq 2.$$ Indeed, suppose $|S|=1$; without loss of generality, $S=\{v_1\}$. Let $\textbf{y}=(y_1,y_2,\cdots,y_n)^T$ be the inducing eigenvector associated with an eigenvalue $\lambda$; then $\textbf{Ly}=\lambda \textbf{y}$ and $\textbf{y}|_{\overline{S}}=\textbf{0}$. By $\textbf{y}|_{\overline{S}}=\textbf{0}$ and (\[eq1\]), $\textbf{y}|_{S}=\textbf{0}$, i.e. $\textbf{y}=\textbf{0}$, which contradicts the fact that $\textbf{y}$ is an eigenvector. 
Further, since no subset $S$ with $|S|=1$ is a CVS, by Remark \[remarkForPropositionStatedWithMPVCS\], $G$ is controllable with the leader vertex set $\overline{F}$ when $|F|=1$. Now we are going to investigate the properties of critical vertex sets. First, a sufficient condition for $S$ to be a CVS is provided in the following Proposition \[proposition3\], which describes a special case of the symmetry-based uncontrollability results. \[proposition3\] Let $G$ be an undirected connected graph of order $n$, $S\subset V$ and $|S|\geq 2$. If for any $v\in \overline{S}$, either $N_S(v)=\emptyset$ or $N_S(v)=S$, then $S$ is a critical vertex set. **Proof** Let $|\{v\in \overline{S}|N_S(v)=S\}|=m$; then $$\textbf{L}_{S\rightarrow S}-m\textbf{I}_{|S|}$$ is the Laplacian of the subgraph $G[S]$, where $\textbf{I}_{|S|}$ denotes the $|S|$-dimensional identity matrix. Considering (\[eq1\]), there exists an eigenvector $\textbf{y}_S$ of the Laplacian $\textbf{L}_{S\rightarrow S}-m\textbf{I}_{|S|}$ such that $\textbf{1}_{|S|}^T\textbf{y}_S=0$. Set the vector $\textbf{y}$ as $\textbf{y}|_S=\textbf{y}_S$ and $\textbf{y}|_{\overline{S}}=\textbf{0}$. It can be seen that $$\label{eq2} \textbf{Ly}=\left[ \begin{array}{ll} \textbf{L}_{S\rightarrow S}& \textbf{L}_{S\rightarrow \overline{S}} \\ \textbf{L}_{\overline{S}\rightarrow S}& \textbf{L}_{\overline{S}\rightarrow \overline{S}} \end{array}\right] \left[\begin{array}{c} \textbf{y}_S\\ \textbf{0}\end{array}\right]=\left[\begin{array}{c} \lambda \textbf{y}_S\\ \textbf{L}_{\overline{S}\rightarrow S}\textbf{y}_S \end{array}\right].$$ Noticing that the rows of the matrix $\textbf{L}_{\overline{S}\rightarrow S}$ consist either entirely of $-1$s or entirely of zeros, the conclusion follows from $\textbf{1}_{|S|}^T\textbf{y}_S=0$. For example, by Proposition \[proposition3\], $\{v_1,v_3\}$ is a CVS; therefore, the graph $G$ in fig.\[TheLeadersSelectionOn\] is uncontrollable when $v_2$ is selected as leader. 
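The construction in the proof of Proposition \[proposition3\] can be checked numerically on the graph of fig.\[figureForDifferentMPCVS\]. In the sketch below (plain Python; the edge set is read off the figure and should be treated as an assumption), every vertex outside $S=\{v_5,v_7\}$ is adjacent to either both or neither of its vertices, and the vector supported on $S$ with entries summing to zero is indeed an eigenvector:

```python
# Vertices v1..v7 are indexed 0..6; edges inferred from fig. [figureForDifferentMPCVS].
edges = [(0, 3), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (6, 3)]
n = 7
L = [[0] * n for _ in range(n)]   # Laplacian built edge by edge
for i, j in edges:
    L[i][j] = L[j][i] = -1
    L[i][i] += 1
    L[j][j] += 1

# Candidate inducing eigenvector for S = {v5, v7}: y = e5 - e7.
# Every vertex outside S sees either both of v5, v7 or neither, so the -1
# contributions cancel row by row and L y = 2 y.
y = [0, 0, 0, 0, 1, 0, -1]
Ly = [sum(L[i][j] * y[j] for j in range(n)) for i in range(n)]
print(Ly == [2 * yi for yi in y])  # True: S = {v5, v7} is a CVS
```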
\[remark3\] The condition provided in Proposition \[proposition3\] implies that some critical vertex sets are closely related to equitable partitions. For example, let $S$ be the followers and $\overline{S}$ be the leaders. From earlier results in the literature [@Martini], we know that in the case of Proposition \[proposition3\] the maximal relaxed equitable partition would put all the leaders into a single cell, hence the system is uncontrollable. But some other critical vertex sets have nothing to do with equitable partitions or almost equitable partitions (AEP, see [@Cesar]). For example, let $S$ be the perfect critical vertex set in fig.\[PerfectCriticalVertexSetIsCloselyRelatedTo\](a). The partition obtained by putting all the vertices in $\overline{S}$ into a single cell is not an AEP. Minimal Perfect Critical 2 and 3 Vertex Sets -------------------------------------------- Armed with the above properties, critical $k$ vertex sets with $k\leq 3$ can be determined directly from their graphical characterization. This is achieved via a detailed analysis of the inducing eigenvector. \[lemma1\] Let $G$ be an undirected connected graph and $S$ be a perfect critical $k$ vertex set; then for any $v\in \overline{S}$, $|N_S(v)|\ne 1$ and $|N_S(v)|\ne k-1$. **Proof** Let $S=\{v_1,v_2,\cdots,v_k\}$ be a perfect critical vertex set and $\textbf{y}=(y_1,y_2,\cdots,y_k,0,0,\cdots,0)^T$ be the inducing eigenvector. $y_i\ne 0$ $(\forall 1\leq i \leq k)$ since $S$ is a perfect critical vertex set. For any $v\in \overline{S}$, suppose $|N_S(v)|=1$; without loss of generality, say $vv_1\in E$ and $vv_i\notin E(G)$ $(\forall i\neq 1)$. Then $\textbf{L}_{\{v\}\rightarrow V}\textbf{y}=-y_1\neq 0$. On the other hand, $\textbf{y}|_{\overline{S}}=\textbf{0}$ and $v \in \overline{S}$, so $\textbf{L}_{\{v\}\rightarrow V}\textbf{y}=0$; this is a contradiction. Together with (\[eq1\]), $|N_S(v)|\ne k-1$ can be proved similarly. 
By (\[equationFor|S|>=2\]), a critical 2 vertex set is also a minimal perfect critical 2 vertex set: by (\[eq1\]), the two components of the inducing eigenvector on $S$ sum to zero, so neither of them vanishes. Theorem \[theorem3\] then follows from Lemma \[lemma1\] and Proposition \[proposition3\]. \[theorem3\] Let $G$ be an undirected connected graph, $S\subset V$ and $|S|=2$. Then $S$ is a minimal perfect critical 2 vertex set if and only if for all $v\in \overline{S}$, either $N_S(v)=\emptyset$ or $N_S(v)=S$. For example, for the graph $G$ in fig.\[figureForDifferentMPCVS\], all its 4 MPCVSs can be recognized by the graphical characterization stated in Theorem \[theorem3\]. \[remark5\] From Theorem \[theorem3\], we know that a perfect critical 2 vertex set is what is named *twin nodes* by [@Biyikoglu], and it is also the double controllability destructive (DCD) node tuple given by [@Zhijian]. Hence, one can see that the perfect critical vertex set is an extension and generalization of twin nodes and controllability destructive nodes. But the minimal perfect critical 3 vertex set is not the same as the triple controllability destructive nodes (*TCD nodes*) named by [@Zhijian], because we will prove that a minimal perfect critical 3 vertex set does not exist. That is the following Theorem \[theorem4\]. \[theorem4\] Let $G$ be an undirected connected graph, $S\subset V$ and $|S|=3$. Then $S$ is NOT a minimal perfect critical vertex set. **Proof** Suppose $S$ is a minimal perfect critical vertex set. Consider the subgraph $G[S]$; all 4 possible topology structures of $G[S]$ are depicted in fig. \[AllPossibleTopologyStructures\]. 
(Figure \[AllPossibleTopologyStructures\]: the four possible topologies of $G[S]$ for $|S|=3$: no edge, one edge, a path, and a triangle; in each case one vertex is marked black and the other two white.) For each topology of $G[S]$ in fig.\[AllPossibleTopologyStructures\], let $T$ be the set of white vertices and $u$ the black vertex; then either $N_T(u)=\emptyset$ or $N_T(u)=T$. By Lemma \[lemma1\], $\forall v\in\overline{S}$, either $|N_S(v)|=0$ or $|N_S(v)|=3$. Noticing that $\overline{T}=\{u\}\bigcup\overline{S}$, by Proposition \[proposition3\], $T$ is a critical vertex set, and hence a perfect critical 2 vertex set properly contained in $S$. This contradicts the minimality of $S$. \[remark4\] There do exist perfect critical 3 vertex sets, but there does not exist any minimal perfect critical 3 vertex set. For example, in fig.\[figureForDifferentMPCVS\], $S=\{v_1,v_2,v_3\}$ is a perfect critical vertex set because there exists an eigenvector $\textbf{y}$ such that $\textbf{y}|_{\overline{S}}=\textbf{0}$ and $\textbf{y}|_{v_i}\neq 0$ $(\forall v_i\in S)$. But, by Theorem \[theorem3\] and Definition \[definition 3\], $S$ is not a MPCVS. 
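The example in Remark \[remark4\] can also be verified directly. Assuming the same edge set read off fig.\[figureForDifferentMPCVS\], the vector below is an eigenvector with eigenvalue 1 that vanishes exactly on $\overline{S}$ for $S=\{v_1,v_2,v_3\}$, confirming that $S$ is a PCVS (its subset $\{v_1,v_2\}$ is already critical by Theorem \[theorem3\], so $S$ is not minimal):

```python
# Same 7-vertex graph as before (0-indexed; edges inferred from the figure).
edges = [(0, 3), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (6, 3)]
n = 7
L = [[0] * n for _ in range(n)]
for i, j in edges:
    L[i][j] = L[j][i] = -1
    L[i][i] += 1
    L[j][j] += 1

# y is nonzero on all of S = {v1, v2, v3} and zero elsewhere; its entries
# sum to zero at the common neighbor v4, so L y = 1 * y.
y = [1, 1, -2, 0, 0, 0, 0]
Ly = [sum(L[i][j] * y[j] for j in range(n)) for i in range(n)]
print(Ly == y)  # True: {v1, v2, v3} is a perfect critical vertex set
```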
(Figure \[PerfectCriticalVertexSetIsCloselyRelatedTo\]: two graphs that differ only in the edges inside the 4-vertex set $S$; (a) $S$ is a minimal perfect critical vertex set, (b) $S$ is not a critical vertex set.) Although there does not exist a minimal perfect critical 3 vertex set, a minimal perfect critical 4 vertex set does exist; see fig.\[PerfectCriticalVertexSetIsCloselyRelatedTo\](a). From Theorem \[theorem3\], a perfect critical 2 vertex set is completely determined by the relationship between $\overline{S}$ and $S$, and has nothing to do with the interconnection topology of the subgraph $G[S]$. But, unlike the perfect critical 2 vertex set, the topology of $G[S]$ does affect whether $S$ is a minimal perfect critical 4 vertex set or not; see fig.\[PerfectCriticalVertexSetIsCloselyRelatedTo\]. The properties that a perfect critical $k$ ($k\geq 4$) vertex set must possess still need to be characterized from both algebraic and graphical perspectives; developing such a characterization is one of the directions of our current research. Minimal Perfect Critical Vertex Set of Path {#section3} =========================================== In this section, we solve the leader location problem for paths completely by means of MPCVSs. Spectral Properties {#subsection3.1} -------------------- A path graph $P_n$ is a finite sequence of vertices $v_1,v_2,\cdots,v_n$ starting with $v_1$ and ending with $v_n$ such that consecutive vertices are adjacent. 
A subset $S\subset V$ is said to be an isolated vertex set if there are no edges among the vertices in $S$. If $S$ is a perfect critical vertex set of the path $P_n$, then by Lemma \[lemma1\], $\overline{S}$ must be an isolated vertex set. So, without loss of generality, let $\overline{S}=\{v_{i_1},v_{i_2},\cdots,v_{i_k}\}$ be an isolated vertex set with $1< i_1< i_2<\cdots < i_k < n$. Let $S_{i_0}=\{v_1,v_2,\cdots,v_{i_1-1}\}$, $S_{i_1}=\{v_{i_1+1},v_{i_1+2},\cdots,v_{i_2-1}\},\cdots$, $S_{i_k}=\{v_{i_k+1},v_{i_k+2},\cdots,v_{n}\}$. Recalling Lemma \[lemma1\], we know that $1<i_1$ and $i_k<n$, i.e., $$\label{m>1} |S_{i_0}|\geq 1\,\, and \,\,|S_{i_k}|\geq 1.$$ It is easy to see that the matrix $\textbf{L}_{S\rightarrow S}$ is a block matrix of the following form $$\label{LStoS=blocked} \textbf{L}_{S\rightarrow S}=\left[\begin{array}{cccc} \textbf{L}_{S_{i_0}\rightarrow S_{i_0}} & \textbf{0} & \textbf{0} &\textbf{0} \\ \textbf{0} & \textbf{L}_{S_{i_1}\rightarrow S_{i_1}} & \textbf{0} & \textbf{0} \\ \textbf{0} & \textbf{0} & \ddots & \textbf{0} \\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{L}_{S_{i_k}\rightarrow S_{i_k}} \end{array}\right]$$ By rearranging the rows and columns we can always write the Laplacian of $P_n$ in the following form: $$\label{L=blocked} \textbf{L}=\left[ \begin{array}{cc} \textbf{L}_{S\rightarrow S} & \textbf{L}_{S\rightarrow \overline{S}}\\ \textbf{L}_{\overline{S}\rightarrow S} & \textbf{L}_{\overline{S}\rightarrow\overline{S}} \end{array} \right].$$ Since in the path $P_n$ only consecutive vertices are adjacent, every row vector $\textbf{L}_{\{v_{i_j}\}\rightarrow S}(j=1,2,\cdots,k)$ has exactly 2 elements equal to $-1$ while all its other elements are 0. That is $$\label{Lvi-S} \textbf{L}_{\{v_{i_j}\}\rightarrow S}=(0,0,\cdots,0, \stackrel{\stackrel{v_{i_j-1}}{\uparrow}}{-1}, \stackrel{\stackrel{v_{i_j+1}}{\uparrow}}{-1},0,\cdots,0) (j=1,2,\cdots,k)$$ For a path, the matrices on the right side of (\[LStoS=blocked\]) have a similar structure. 
Let $m$ and $M$ denote the dimensions of the following useful matrices $\textbf{D}_m$ and $\textbf{B}_M$, respectively. These matrices play an important role in determining the locations of leaders under which the controllability of paths can be realized. The first submatrix in (\[LStoS=blocked\]) can be written as $\textbf{L}_{S_{i_0}\rightarrow S_{i_0}}=\textbf{D}_{|S_{i_0}|}$. By the symmetric permutation reversing all the components, the last submatrix in (\[LStoS=blocked\]) can be written as $\textbf{L}_{S_{i_k}\rightarrow S_{i_k}}=\textbf{D}_{|S_{i_k}|}$. The other submatrices in (\[LStoS=blocked\]) can be written as $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}=\textbf{B}_{|S_{i_j}|}(j=1,2,\cdots,k-1)$. $$\textbf{D}_m=\left[ \begin{array}{cccccccc} 1 & -1 & 0 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & \cdots & 0 & 0 &0 \\ \vdots & \vdots & \vdots& \vdots & & \vdots & \vdots & \vdots \\ 0& 0 & 0 & 0 & \cdots & 0 & -1 & 2 \\ \end{array} \right]_{m\times m},$$ $$\textbf{B}_M=\left[ \begin{array}{cccccccc} 2 & -1 & 0 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & \cdots & 0 & 0 &0 \\ \vdots & \vdots & \vdots& \vdots & & \vdots & \vdots & \vdots \\ 0& 0 & 0 & 0 & \cdots & 0 & -1 & 2 \\ \end{array} \right]_{M\times M}.$$ Naturally, we are going to investigate the spectral properties of $\textbf{D}_m$ and $\textbf{B}_M$. For convenience, we introduce some useful notation. For any $\lambda\in[0,4]$, $\theta$ is called the angle associated with $\lambda$, where $\theta$ is defined by $\cos\theta=\frac{2-\lambda}{2}$, $\sin\theta=\frac{\sqrt{4\lambda-\lambda^2}}{2}$ and $\theta\in[0,\pi]$. If $\lambda$ is an eigenvalue, then $\theta$ is called an eigenangle. Let $\phi_i(\lambda)$ and $\psi_i(\lambda)$ be the $i$-th sequential principal minors of $\det(\lambda \textbf{I}-\textbf{D}_m)$ and $\det(\lambda \textbf{I}-\textbf{B}_M)$, respectively; then we have the following useful lemmas. \[spectralOf-Dm\] Let $\textbf{y}=(y_1,y_2,\cdots,y_m)$ be an eigenvector of $\textbf{D}_m$. 
(i) $\lambda$ is an eigenvalue of $\textbf{D}_m$ if and only if the associated angle satisfies $\theta\in\left\{\frac{(2l-1)\pi}{2m+1}|1\leq l \leq m\right\}$. (ii) $y_i=\phi_{i-1}(\lambda)y_1,(i=2,3,\cdots,m).$ **Proof** (i) By applying the Laplace expansion to the last row of $\lambda \textbf{I}-\textbf{D}_m$, the following recurrence formula holds: $$\phi_m(\lambda)=(2-\lambda)\phi_{m-1}(\lambda)-\phi_{m-2}(\lambda).$$ From the Geršgorin disk theorem, it follows that every eigenvalue satisfies $\lambda\leq 4$, which means $\lambda^2-4\lambda\leq 0$. Solving this recurrence and taking $\phi_1(\lambda)=1-\lambda$ into consideration, we have $$\label{theta-of-Dm=0} \phi_m(\lambda)=\frac{\cos{\frac{(2m+1)\theta}{2}}}{\cos\frac{\theta}{2}}.$$ This concludes the first part of the proof. \(ii) The claim can be verified via mathematical induction from the fact that $(\lambda\textbf{I}-\textbf{D}_m)\textbf{y}=\textbf{0}.$ Similarly, we have \[spectralOf-BM\] Let $\textbf{y}=(y_1,y_2,\cdots,y_M)$ be an eigenvector of $\textbf{B}_M$. (i) $\lambda$ is an eigenvalue of $\textbf{B}_M$ if and only if the associated angle satisfies $\theta\in\left\{\frac{h\pi}{M+1}|1\leq h \leq M\right\}$. (ii) $y_i=\psi_{i-1}(\lambda)y_1,(i=2,3,\cdots,M).$ \[common-eigenvalue-of-Lsiktosik\] If $S$ is a perfect critical vertex set of $P_n$, then all the $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}(j=0,1,\cdots,k)$ in (\[LStoS=blocked\]) have at least one common eigenvalue. **Proof** If $S$ is a perfect critical vertex set, then there exists an eigenvector $\textbf{y}$ such that $\textbf{y}|_{\overline{S}}=\textbf{0}$ and $\textbf{y}|_{\{v\}}\neq 0$ for all $v\in S$. From (\[L=blocked\]), we have $\textbf{L}_{S\rightarrow S}\textbf{y}|_S=\lambda\textbf{y}|_S.$ Now, considering (\[LStoS=blocked\]), all of the $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}(j=0,1,\cdots,k)$ have the common eigenvalue $\lambda$, because the $\textbf{y}|_{S_{i_j}}\neq \textbf{0}$ are corresponding eigenvectors. 
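The eigenangle formulas of Proposition \[spectralOf-Dm\](i) and Proposition \[spectralOf-BM\](i) can be checked numerically through the minor recurrences themselves. The sketch below (plain Python; $m=4$, $M=2m=8$ are sample dimensions) evaluates $\phi_m$ and $\psi_M$ via $\phi_i=(2-\lambda)\phi_{i-1}-\phi_{i-2}$ (and likewise for $\psi$) at $\lambda=2-2\cos\theta$ for the claimed eigenangles and confirms that they vanish:

```python
import math

def phi(m, lam):
    """Minor recurrence for D_m: phi_0 = 1, phi_1 = 1 - lam."""
    a, b = 1.0, 1.0 - lam
    for _ in range(m - 1):
        a, b = b, (2.0 - lam) * b - a
    return b

def psi(M, lam):
    """Minor recurrence for B_M: psi_0 = 1, psi_1 = 2 - lam."""
    a, b = 1.0, 2.0 - lam
    for _ in range(M - 1):
        a, b = b, (2.0 - lam) * b - a
    return b

m, M = 4, 8  # M = 2m, as in Lemma [|Dm|=|BM|]
for l in range(1, m + 1):
    theta = (2 * l - 1) * math.pi / (2 * m + 1)
    lam = 2 - 2 * math.cos(theta)       # lambda associated with theta
    assert abs(phi(m, lam)) < 1e-9      # lam is an eigenvalue of D_m
    assert abs(psi(M, lam)) < 1e-9      # ... and of B_M, since M + 1 = 2m + 1
print("all", m, "eigenangles of D_m are shared with B_{2m}")
```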
\[equal-dimension-for-BM\] If $S$ is a perfect critical vertex set of $P_n$, then for the sets $S_{i_j}(j=0,1,\cdots,k)$ in (\[LStoS=blocked\]), the following equalities hold: (i) $|S_{i_0}|=|S_{i_k}|.$ (ii) $|S_{i_1}|=|S_{i_2}|=\cdots=|S_{i_{k-1}}|.$ **Proof** (i) Set $|S_{i_0}|=m_1$ and $|S_{i_k}|=m_2$, and suppose that $m_1\neq m_2$; without loss of generality, $m_2>m_1$. From Lemma \[common-eigenvalue-of-Lsiktosik\], $\textbf{L}_{S_{i_0}\rightarrow S_{i_0}}(=\textbf{D}_{m_1})$ and $\textbf{L}_{S_{i_k}\rightarrow S_{i_k}}(=\textbf{D}_{m_2})$ share an eigenvalue $\tilde{\lambda}$; together with Proposition \[spectralOf-Dm\](i), they share an eigenangle $\tilde{\theta}$. That means there exist $l_1,l_2$ such that $$\frac{2l_1-1}{2m_1+1}\pi=\frac{2l_2-1}{2m_2+1}\pi=\tilde{\theta}.$$ Since $m_2>m_1$ and $\phi_{m_1}(\tilde{\lambda})$ is the $m_1$-th sequential principal minor of $\det(\tilde{\lambda}\textbf{I}-\textbf{D}_{m_2})$, by Proposition \[spectralOf-Dm\](ii), the $(m_1+1)$-th element of the eigenvector $\textbf{y}|_{S_{i_k}}$ is zero. This contradicts the fact that $S$ is a perfect critical vertex set. The proof of (ii) can be carried out in the same manner as (i). \[|Dm|=|BM|\] If $S$ is a perfect critical vertex set of $P_n$, then for the sets $S_{i_0}$ and $S_{i_1}$ in (\[LStoS=blocked\]), either $S_{i_1}$ is empty or $|S_{i_1}|=2|S_{i_0}|.$ **Proof** If $S_{i_1}$ is empty, the proof is trivial; thus, let $|S_{i_1}|=M>0$ and $|S_{i_0}|=m$. From Lemma \[common-eigenvalue-of-Lsiktosik\], $\textbf{L}_{S_{i_0}\rightarrow S_{i_0}}$ and $\textbf{L}_{S_{i_1}\rightarrow S_{i_1}}$ have a common eigenvalue $\tilde{\lambda}$. Let $\textbf{y}$ be the inducing eigenvector of the perfect critical vertex set $S$. According to Proposition \[spectralOf-Dm\](i) and Proposition \[spectralOf-BM\](i), there exist integers $l_0,h_0$, with $1\leq l_0\leq m$ and $1\leq h_0\leq M$, such that $$\label{M=2m} \frac{(2l_0-1)}{2m+1}=\frac{h_0}{M+1}.$$ We claim that $2l_0-1$ and $2m+1$ are coprime. 
Otherwise, recalling the proof of Lemma \[equal-dimension-for-BM\], at least one entry of the eigenvector $\textbf{y}|_{S_{i_0}}$ would vanish. Similarly, $h_0$ and $M+1$ are coprime, too. That implies $2m+1=M+1$, i.e. $M=2m$. \[lemma5\] If $S$ is a minimal perfect critical vertex set, then for the vertex set $S_{i_0}$ in (\[LStoS=blocked\]) with $|S_{i_0}|=m$, the number $2m+1$ is an odd prime. **Proof**From Lemma \[|Dm|=|BM|\], Proposition \[spectralOf-Dm\](i) and Proposition \[spectralOf-BM\](i), we know that the submatrices $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}$ have the following $m$ common eigenangles: $$\label{eq3} \{\frac{1}{2m+1}\pi,\frac{3}{2m+1}\pi,\cdots,\frac{2m-1}{2m+1}\pi\}.$$ Suppose $2m+1$ is not prime; then there exist two factors $p_1,p_2>1$ such that $2m+1=p_1p_2$. Since $2m+1$ is odd, both $p_1$ and $p_2$ are odd. Therefore, $\frac{1}{p_2}\pi=\frac{p_1}{2m+1}\pi$ is one of the eigenangles in (\[eq3\]). By Proposition \[spectralOf-Dm\](ii), the $\frac{p_2+1}{2}$-th element of the eigenvector associated with the eigenangle $\frac{p_1}{2m+1}\pi$ is zero. This contradicts the fact that $S$ is an MPCVS. Equivalence Characterization of MPCVSs of Path Graphs ----------------------------------------------------- The following Theorem \[theVertexNumberOfPath\] provides an equivalence characterization of the MPCVSs of path graphs. \[theVertexNumberOfPath\] Let $S$ be a vertex set of the path $P_n$ such that $\overline{S}=\{v_{i_1},v_{i_2},\cdots,v_{i_k}\}$ is isolated. Let $|S_{i_0}|=m$, $|S_{i_1}|=M$. Then $S$ is a minimal perfect critical vertex set if and only if the following assertions hold: \(i) $|S_{i_k}|=|S_{i_0}|$, $|S_{i_1}|=|S_{i_2}|=\cdots=|S_{i_{k-1}}|$. \(ii) $M=0$ or $M=2m$. \(iii) $2m+1$ is an odd prime. **Proof**The necessity is proved in Lemma \[equal-dimension-for-BM\], Lemma \[|Dm|=|BM|\] and Lemma \[lemma5\]. Sufficiency: Case 1: $M>0$.
Since $M=2m$, all of the eigenangles in (\[eq3\]) are common eigenangles of all submatrices $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}(j=0,1,\cdots,k)$. None of the eigenangles in (\[eq3\]) is an eigenangle of any sequential principal minor of $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}(j=0,1,\cdots,k)$, because of condition (iii). So, from Proposition \[spectralOf-Dm\](ii) and Proposition \[spectralOf-BM\](ii), we know that no eigenvector of $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}(j=0,1,\cdots,k)$ associated with the common eigenangles has a zero element. Therefore, we only need to prove that there exists an eigenvector $\textbf{y}$ of $\textbf{L}$ such that $\textbf{y}|_{\overline{S}}=\textbf{0}$. Arbitrarily select a common eigenangle $\theta$ in (\[eq3\]) and a real number $y_1\neq 0$. By Proposition \[spectralOf-Dm\](ii), there exists an eigenvector of $\textbf{L}_{S_{i_0}\rightarrow S_{i_0}}$ associated with the common eigenangle $\theta$, say $\textbf{y}^{(i_0)}$, such that $\textbf{y}^{(i_0)}|_{v}\neq 0(\forall v\in S_{i_0})$. For the same reason, there exists an eigenvector of $\textbf{L}_{S_{i_j}\rightarrow S_{i_j}}$ associated with $\theta$, say $\textbf{y}^{(i_j)}$, such that $\textbf{y}^{(i_j)}|_v\neq 0(\forall v\in S_{i_j})$ and $$\label{yi=-yi-1} \textbf{y}^{(i_j)}|_{v_{i_j+1}}=-\textbf{y}^{(i_{j-1})}|_{v_{i_j-1}}(j=1,2,\cdots,k).$$ Set the vector $\textbf{y}$ as $\textbf{y}|_{S_{i_j}}=\textbf{y}^{(i_j)}(j=0,1,\cdots,k)$ and $\textbf{y}|_{\overline{S}}=\textbf{0}$. Armed with what we have proved above, we know that $\textbf{y}|_{v}\neq 0(\forall v\in S)$ and $\textbf{L}_{\{v_i\}\rightarrow S}\textbf{y}=0$ (see (\[Lvi-S\]) and (\[yi=-yi-1\])).
Therefore, $$\textbf{Ly}=\left[ \begin{array}{cc} \textbf{L}_{S\rightarrow S} & \textbf{L}_{S\rightarrow \overline{S}}\\ \textbf{L}_{\overline{S}\rightarrow S} & \textbf{L}_{\overline{S}\rightarrow\overline{S}} \end{array} \right] \left[ \begin{array}{c} \textbf{y}|_{S}\\ \textbf{0} \end{array} \right]=\left[ \begin{array}{c} \textbf{L}_{S\rightarrow S}\textbf{y}|_{S}\\ \textbf{L}_{\overline{S}\rightarrow S}\textbf{y}|_{S} \end{array} \right]=\left[ \begin{array}{c} \lambda\textbf{y}|_{S}\\ \textbf{0} \end{array} \right]=\lambda\textbf{y}.$$ This means that the vector $\textbf{y}$ is an eigenvector of $\textbf{L}$. Case 2: $M=0$. The proof is similar to Case 1 and is immediate once one notices that $\textbf{L}_{S_{i_0}\rightarrow S_{i_0}}=\textbf{D}_m=\textbf{L}_{S_{i_k}\rightarrow S_{i_k}}$. If $S$ is a perfect critical vertex set of the path $P_n$, then by Theorem \[theVertexNumberOfPath\], we have $n=m+(k-1)2m+m+k=k(2m+1)$. That is, $n$ must have the odd prime factor $2m+1$. Therefore, a straightforward consequence of Theorem \[theVertexNumberOfPath\] is that a path graph $P_n$ has a perfect critical vertex set if and only if $n$ is not a power of 2. The following corollary follows directly from Theorem \[theVertexNumberOfPath\]; it has been proved in [@parlangeli] by using different mathematical tools. \[n=2powerof2\] Let $n=2^{l_0}$ for some $l_0\in \mathbb{N}$; then the path $P_n$ is controllable with any vertex selected as leader, i.e. $P_n$ is omnicontrollable. Algorithm and Examples ---------------------- In fact, Theorem \[theVertexNumberOfPath\] describes all minimal perfect critical vertex sets of the path graph $P_n$. Next, we provide a method to locate the leader vertices, namely the following Algorithm I. **Algorithm I** ------------------------------------------------------------------- 1: input: $n=2^{l_0}p_1^{l_1}p_2^{l_2}\cdots p_t^{l_t}$. 2: initialize: $N=\{p_1,p_2,\cdots,p_t\}$, $F=\emptyset$, $j=0$.
3: while $N\neq \emptyset$, for some $p\in N$ do 4: $j=j+1$, $k=\frac{n}{p}, m=\frac{p-1}{2}$, $F_j=\emptyset$ 5: for $l=0: k-1$ do 6: $i=(m+1)+l(2m+1)$ 7: $F_j=F_j\bigcup\{v_i\}$ 8: end for 9: $N=N\setminus \{p\}$ 10: $F=F\bigcup F_j $ 11: end while 12: output: $\overline{F}$ For a path $P_n$, the set $\overline{F}$ obtained by Algorithm I is the set of leaders. That is, $P_n$ is controllable with a leader vertex located in $\overline{F}$, and only with those vertices. For example, let $n=6$, which is even. Since $n=2\times 3$ and 3 is its only odd prime factor, by Algorithm I, $p=3, k=\frac{n}{p}=2, m=\frac{p-1}{2}=1$, $F_1=\{v_i|i=(m+1)+l(2m+1),0\leq l \leq k-1\}=\{v_2,v_5\}$. So, any vertex in $\{v_1,v_3,v_4,v_6\}$ can be selected as leader. Let $n=18$, which is also even but has 3 as a repeated factor. By Algorithm I, only one follower vertex set needs to be considered: $p=3, k=\frac{n}{p}=6, m=\frac{p-1}{2}=1$, $F_1=\{v_i|i=(m+1)+l(2m+1),0\leq l \leq k-1\}=\{v_2,v_5,v_8,v_{11},v_{14},v_{17}\}$. So, any vertex in $\overline{F_1}$ can be selected as leader. In the case of repeated factors, Algorithm I locates the leader vertices much more efficiently than the method provided in [@parlangeli]. What is more, Algorithm I can easily be applied to much larger path graphs. For example, let $n=105$; since $n=3\times5\times7$, only three follower vertex sets need to be calculated, namely $F_1=\{v_i|i=(1+1)+l\times(2\times1+1),0\leq l\leq 34 \}$, $F_2=\{v_i|i=(2+1)+l\times(2\times2+1),0\leq l\leq 20 \}$, $F_3=\{v_i|i=(3+1)+l\times(2\times3+1),0\leq l\leq 14 \}$.
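The computations in these examples can be reproduced by a direct implementation of Algorithm I. A minimal Python sketch (a sanity check, not part of the paper's method), with vertices represented by their integer indices:

```python
def odd_prime_factors(n):
    """Distinct odd prime factors of n, by trial division."""
    factors, d = set(), 3
    while n % 2 == 0:
        n //= 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 2
    if n > 1:
        factors.add(n)
    return sorted(factors)

def leaders(n):
    """Algorithm I: vertex indices of P_n that can serve as the single leader."""
    F = set()
    for p in odd_prime_factors(n):
        k, m = n // p, (p - 1) // 2
        # follower set F_j = {v_i : i = (m+1) + l*(2m+1), 0 <= l <= k-1}
        F |= {(m + 1) + l * (2 * m + 1) for l in range(k)}
    return sorted(set(range(1, n + 1)) - F)

assert leaders(6) == [1, 3, 4, 6]       # the n = 6 example
assert leaders(18) == [1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18]
assert leaders(8) == list(range(1, 9))  # n = 2^3: every vertex works
```

For $n=2^{l_0}$ there is no odd prime factor, so the follower set is empty and every vertex can act as leader, in agreement with Corollary \[n=2powerof2\].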
$F=F_1\bigcup F_2 \bigcup F_3$ and the leaders are located in the vertex set $\overline{F}$: $\overline{F}=\{v_1, v_6, v_7, v_9, v_{10}, v_{12}, v_{15}, v_{16}, v_{19}, v_{21}, v_{22}, v_{24}, v_{27}, v_{30}, v_{31}, v_{34},$ $v_{36}, v_{37}, v_{40}, v_{42}, v_{45}, v_{49}, v_{51}, v_{52}, v_{54}, v_{55}, v_{57}, v_{61}, v_{64}, v_{66}, v_{69}, v_{70},$ $v_{72}, v_{75}, v_{76}, v_{79}, v_{82}, v_{84}, v_{85}, v_{87}, v_{90}, v_{91}, v_{94}, v_{96}, v_{97}, v_{99}, v_{100}, v_{105}\}$. That is, $P_{105}$ is controllable with any one of these 48 vertices as leader, and only with these. Minimal Perfect Critical Vertex Set of Graphs Based on Path {#section4} =========================================================== Path graphs are the simplest and most basic graph structures. Some graphs can be constructed by adding paths. The minimal perfect critical vertex sets of these graphs are investigated in what follows. Let $G$ be a graph and $v_0\in V(G)$. We use $G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$ to denote the graph obtained by adding $P_{n_1},P_{n_2},\cdots,P_{n_t}$ to $G$ incident to $v_0$, as shown in fig.\[treesByAddingPath\].
(200,80)(0,0) (10,50)[(40,25)]{} (15,65) (30,55) (24,62) (35,67) (45,62) (0,10) (22,70)[$G$]{} (12,28) (20,40) (0.5,10.5)[(2,3)[4]{}]{} (5,17.5)[(2,3)[0.5]{}]{} (6,19)[(2,3)[0.5]{}]{} (7,20.5)[(2,3)[0.5]{}]{} (8,22)[(2,3)[3.5]{}]{} (12.5,28.8)[(2,3)[7]{}]{} (20.4,40.5)[(2,3)[9.5]{}]{} (15,1)[$(a)\,\,\,G(v_0)+P_n$]{} (-3,12)[$v_1$]{} (4,30)[$v_{n-1}$]{} (15,42)[$v_n$]{} (30,57)[$v_0$]{} (70,50)[(40,25)]{} (75,65) (90,55) (84,62) (95,67) (105,62) (60,10) (82,70)[$G$]{} (72,28) (80,40) (60.5,10.5)[(2,3)[4]{}]{} (65,17.5)[(2,3)[0.5]{}]{} (66,19)[(2,3)[0.5]{}]{} (67,20.5)[(2,3)[0.5]{}]{} (68,22)[(2,3)[3.5]{}]{} (72.5,28.8)[(2,3)[7]{}]{} (80.3,40.5)[(2,3)[9.5]{}]{} (70,1)[$(b)\,\,\,G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$]{} (56,12)[$v_1^{(1)}$]{} (62,30)[$v_{n_1-1}^{(1)}$]{} (74,42)[$v_{n_1}^{(1)}$]{} (89,57)[$v_0$]{} (85,10) (85.1,10.5)[(1,9)[0.5]{}]{} (85.7,16)[(1,9)[0.1]{}]{} (86,17.8)[(1,9)[0.1]{}]{} (86.2,19.6)[(1,9)[0.1]{}]{} (86.4,21.2)[(1,9)[0.5]{}]{} (87,26.2) (87.2,26.6)[(1,9)[1.3]{}]{} (88.6,39) (88.4,39.6)[(1,9)[1.7]{}]{} (78,12)[$v_1^{(2)}$]{} (89,26.2)[$v_{n_2-1}^{(2)}$]{} (90.6,39)[$v_{n_2}^{(2)}$]{} (100,20)[(1,0)[1]{}]{} (102,20)[(1,0)[1]{}]{} (104,20)[(1,0)[1]{}]{} (125,10) (124.5,10.5)[(-7,9)[3]{}]{} (121,14.9)[(-7,9)[0.5]{}]{} (120,16.1)[(-7,9)[0.5]{}]{} (119,17.3)[(-7,9)[0.5]{}]{} (118,18.5)[(-7,9)[5.4]{}]{} (112.3,26) (112,26.5)[(-7,9)[9.3]{}]{} (102.3,39) (101.8,39.8)[(-7,9)[11.3]{}]{} (125,12)[$v_1^{(t)}$]{} (114,26)[$v_{n_t-1}^{(t)}$]{} (104,39)[$v_{n_t}^{(t)}$]{} Consider Lemma \[lemma1\], see fig.\[treesByAddingPath\](a), for any $S\subset \{v_1,v_2,\cdots,v_n\}$, we know that $S$ is not a perfect critical vertex set of $G(v_0)+P_n$. Therefor, we study the graphs $G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$ and $t\geq 2$, see fig.\[treesByAddingPath\](b). Let $S$ be a minimal perfect critical vertex set of $G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$. 
By Lemma \[lemma1\], the matrix $\textbf{L}_{S\rightarrow S}$ also has the form illustrated in (\[LStoS=blocked\]), and Lemma \[common-eigenvalue-of-Lsiktosik\] also holds. Again by Lemma \[lemma1\], the last vertex $v_{n_l}^{(l)}$ belongs to $S$ whenever $S\bigcap V(P_{n_l})\neq\emptyset$. So, from what we have proved in Section \[section3\], we know immediately that there exist exactly two paths, say $P_{n_i},P_{n_j}$, such that $S\bigcap V(P_{n_i})\neq\emptyset, S\bigcap V(P_{n_j})\neq\emptyset$ and $S\bigcap V(P_{n_k})=\emptyset(k\neq i,j)$. Further, we have the following Theorem \[theorem5\]. \[theorem5\] Let $G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$ be the graph in fig.\[treesByAddingPath\](b) and $t\geq 2$. Then there exists a minimal perfect critical vertex set $S$, where $S\subset V(P_{n_i})\bigcup V(P_{n_j})$, if and only if $2n_i+1$ and $2n_j+1$ have a common divisor greater than 1. **Proof**Necessity: Notice that if $S$ is an MPCVS of $G(v_0)+\{P_{n_1},P_{n_2},\cdots,P_{n_t}\}$, then $S$ is an MPCVS of the path $P_{n_i}+v_0+P_{n_j}$. So, by Lemma \[|Dm|=|BM|\] and Lemma \[lemma5\], we have $n_i=m+k_i(2m+1), n_j=m+k_j(2m+1)$ and $m>0$ (by (\[m>1\])). Hence $2n_i+1$ and $2n_j+1$ have a common divisor $2m+1$ greater than 1. Sufficiency: Let $p>1$ be a common divisor of $2n_i+1$ and $2n_j+1$; $p$ is odd. Let $p=p_1^{l_1}p_2^{l_2}\cdots p_q^{l_q}$, where the $p_i$ are primes. Set $m=\frac{p_1-1}{2}\geq 1$ and $$S_1=V(P_{n_i})\backslash\{v_k^{(n_i)}|1\leq k \leq n_i, k\equiv m+1 \ (\mathrm{mod}\ 2m+1)\}.$$ $$S_2=V(P_{n_j})\backslash\{v_k^{(n_j)}|1\leq k \leq n_j, k\equiv m+1 \ (\mathrm{mod}\ 2m+1)\}.$$ Taking $S=S_1\bigcup S_2$ and recalling the sufficiency proof of Theorem \[theVertexNumberOfPath\], we see that $S$ is an MPCVS. Next, we provide some examples to illustrate how to use Theorem \[theorem5\] to discover MPCVSs of graphs constructed by adding paths.
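The existence condition in Theorem \[theorem5\] reduces to a gcd test. A minimal sketch (the function name is ours, not the paper's), checked against the leg lengths $n_1=4$, $n_2=7$, $n_3=3$, $n_4=1$ of the generalized-star example below:

```python
from math import gcd

def mpcvs_in_two_legs(ni, nj):
    # Theorem 5: an MPCVS contained in V(P_ni) ∪ V(P_nj) exists iff
    # 2*ni + 1 and 2*nj + 1 share a divisor greater than 1
    return gcd(2 * ni + 1, 2 * nj + 1) > 1

# leg lengths of the generalized-star example: n1=4, n2=7, n3=3, n4=1
assert mpcvs_in_two_legs(4, 7)      # gcd(9, 15) = 3
assert mpcvs_in_two_legs(4, 1)      # gcd(9, 3) = 3
assert mpcvs_in_two_legs(7, 1)      # gcd(15, 3) = 3
assert not mpcvs_in_two_legs(3, 4)  # gcd(7, 9) = 1: no MPCVS involving P_3
```

The negative case explains why the leg $P_3$ of the example contributes to no MPCVS: $2\times 3+1=7$ is coprime to $9$, $15$ and $3$.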
(100,35) (0,4.5)[(40,30)]{} (45,19) (44.5,19.5)[(-3,1)[12]{}]{} (32,23.8) (31.2,24.1)[(-3,1)[12]{}]{} (18.5,28.4) (17.7,28.7)[(-4,1)[9]{}]{} (7.8,31.2) (45,21)[$v_7$]{} (32,25.8)[$v_3$]{} (18.5,30.4)[$v_2$]{} (7,27.2)[$v_1$]{} (44.5,18.5)[(-3,-1)[12]{}]{} (31.8,14.4) (31.2,14.1)[(-3,-1)[11.4]{}]{} (19,10,1) (18.4,10.1)[(-4,-1)[10]{}]{} (7.8,7.7) (30.8,16.4)[$v_6$]{} (17,12,6)[$v_5$]{} (5.8,9.7)[$v_4$]{} (60,19) (45.6,19)[(1,0)[13.5]{}]{} (74,19) (60.6,19)[(1,0)[12.8]{}]{} (58,21)[$v_8$]{} (72,21)[$v_9$]{} (79,9)[(18,20)]{} (74.5,19.5)[(2,1)[12]{}]{} (87,25.5) (74.5,18.5)[(2,-1)[12]{}]{} (87,12.5) (89,25)[$v_{11}$]{} (89,12)[$v_{10}$]{} (12,18)[$S_1$]{} (90,18)[$S_2$]{} Example 1, fig.\[ExampleForLeaderSetOfTrees\] comes from [@ZhijianAndHai]. By Theorem \[theorem5\], $S_1=\{v_1,v_2,\cdots,v_6\}$ and $S_2=\{v_{10},v_{11}\}$($S_2$ can also be discovered by Theorem \[theorem3\]) are two MPCVSs. So, in order to make sure that the system is controllable, the minimum number of leader vertices is 2 and one of the leaders comes from $S_1$ and the other comes from $S_2$. (120,40) (10,35) (25,35) (40,35) (55,35) (80,35) (105,35) (8,38)[$v_1$]{} (23,38)[$v_2$]{} (38,38)[$v_3$]{} (53,38)[$v_4$]{} (78,38)[$v_0$]{} (102,38)[$v_{15}$]{} (10.6,35)[(1,0)[13.7]{}]{} (25.6,35)[(1,0)[13.7]{}]{} (40.6,35)[(1,0)[13.7]{}]{} (55.6,35)[(1,0)[23.8]{}]{} (80.6,35)[(1,0)[23.8]{}]{} (10,5) (10.5,5.4)[(7,3)[8]{}]{} (19.3,9.1) (19.8,9.5)[(7,3)[8]{}]{} (28.6,13.2) (29.1,13.6)[(7,3)[8]{}]{} (37.9,17.3) (38.4,17.7)[(7,3)[8]{}]{} (47.2,21.4) (47.7,21.8)[(7,3)[8]{}]{} (56.5,25.5) (57.0,25.9)[(7,3)[8]{}]{} (65.8,29.6) (66.3,30.0)[(9,3)[13.3]{}]{} (8,7)[$v_{5}$]{} (17.3,11.1)[$v_{6}$]{} (26.6,15.2)[$v_{7}$]{} (35.9,19.3)[$v_{8}$]{} (45.2,23.4)[$v_{9}$]{} (54.5,27.5)[$v_{10}$]{} (63.8,31.6)[$v_{11}$]{} (80.2,34.3)[(1,-2)[3]{}]{} (83.5,27.7) (83.7,27)[(1,-2)[3]{}]{} (87,20.4) (87.2,19.7)[(1,-2)[3]{}]{} (90.5,13.1) (84.5,28.7)[$v_{14}$]{} (88,21.6)[$v_{13}$]{} (91.5,14.1)[$v_{12}$]{} Example 2. 
Generalized stars are a useful kind of graph constructed from paths. All MPCVSs of a generalized star graph can be found by Theorem \[theorem5\]. A *star* with $n$ vertices is a graph consisting of one vertex $ v_0$ in the center and $n-1$ vertices adjacent to $v_0$ but not adjacent to each other. A *generalized star* is the graph obtained from a star by replacing each edge by a path of arbitrary length. These paths are called legs, and generalized stars are also called *spiders*. See fig.\[figureForGeneralizedStar\]. Let $P_1=v_1v_2v_3v_4$, $P_2=v_5v_6\cdots v_{11}$, $P_3=v_{12}v_{13}v_{14}$, $P_4=v_{15}$. It is easily seen that $n_1=4, n_2=7, n_3=3, n_4=1$. By Theorem \[theorem5\], there exist MPCVSs $S_1\subset V(P_1)\bigcup V(P_2)$, $S_2\subset V(P_1)\bigcup V(P_4)$, $S_3\subset V(P_2)\bigcup V(P_4)$. Further, by Algorithm I, we know that $S_1=\{v_1,v_3,v_4,v_5,v_7,v_8,v_{10},v_{11}\}$, $S_2=\{v_1,v_3,v_4,v_{15}\}$, $S_3=\{v_5,v_7,v_8,v_{10},v_{11},v_{15}\}$. Arbitrarily selecting two black vertices in fig.\[figureForGeneralizedStar\], say $v_i$ and $v_j$, belonging to different paths, the generalized star is controllable with the leaders $\{v_i, v_j\}$. The minimum number of leaders is 2. Conclusion {#section5} ========== Neighbor-based controllability of undirected graphs has received special attention in recent years. However, the understanding of the roles of leaders, the minimum number of leaders, and especially the leader location problem in undirected graphs is still largely incomplete. A major effort in this paper is to provide a method to determine the leaders directly from the topological structure of an undirected graph. These efforts also enlarge the understanding of the leaders' role in undirected graph controllability. To do this, we introduced the concepts of critical vertex set, perfect critical vertex set and minimal perfect critical vertex set.
These concepts indicate that certain vertices with special graphical characterizations should be selected as leaders. Necessary and sufficient conditions are proposed to uncover some special minimal perfect critical vertex sets. Theorem \[theorem3\] describes the graphical characterization of minimal perfect critical 2-vertex sets. Theorem \[theorem4\] proves that minimal perfect critical 3-vertex sets do not exist. Theorem \[theVertexNumberOfPath\] completely describes the MPCVSs of paths, and Theorem \[theorem5\] can be used to discover some special MPCVSs of graphs constructed by adding paths. All these results clearly indicate where leaders are located, reveal the effect of the topological structure on controllability, and promote further study of the controllability of undirected graphs. [99]{} Dorri A, Kanhere SS, Jurdak R. Multi-agent systems: a survey. IEEE Access, 2018: 28573-28593. Ji ZJ, Lin H, Yu HS. Leaders in multi-agent controllability under consensus algorithm and tree topology. Systems & Control Letters. 2012, 61(9): 918-925. Mousavi SS, Haeri M, Mesbahi M. On the Structural and Strong Structural Controllability of Undirected Networks. IEEE Transactions on Automatic Control, 2018; 63(7): 2234-2241. Liu YY, Slotine JJ, Barabasi AL. Controllability of complex networks. Nature. 2011; 473(7346): 167-173. Tanner HG. On the controllability of nearest neighbor interconnections. 43rd IEEE Conference on Decision and Control, December 14-17, 2004: 2467-2472. Ji M, Egerstedt M. A Graph-Theoretic Characterization of Controllability for Multi-agent Systems. Proceedings of the 2007 American Control Conference, Marriott Marquis Hotel at Times Square, New York City, USA, July 11-13, 2007: 4588-4593. Ji ZJ, Wang ZD, Lin H, Wang Z. Interconnection topologies for multi-agent coordination under leader-follower framework. Automatica. 2009; 45(12): 2857-2863. Wang L, Jiang FC, Xie GM, Ji ZJ. Controllability of multi-agent systems based on agreement protocols. Science in China.
Series F: Information Sciences, 2009; 52(11): 2074-2088. Ji ZJ, Yu HS. A New Perspective to Graphical Characterization of Multiagent Controllability. IEEE Transactions on Cybernetics. 2017; 47(6): 1471-1483. Rahmani A, Mesbahi M. On the controlled agreement problem. Proceedings of the 2006 American Control Conference, Minneapolis, Minnesota, USA, June 14-16, 2006: 1376-1381. Martini S, Egerstedt M, Bicchi A. Controllability decompositions of networked systems through quotient graphs. 47th IEEE Conference on Decision and Control, Fiesta Americana Grand Coral Beach, Cancun, Mexico, December 9-11, 2008. Ji ZJ, Lin H, Lee T. Nodes with the same number of neighbors and multi-agent controllability. Proceedings of the 30th Chinese Control Conference, Yantai, China, July 22-24, 2011: 4792-4796. Monshizadeh N, Camlibel MK, Trentelman HL. Strong targeted controllability of dynamical networks. 54th IEEE Conference on Decision and Control, 2015: 4782-4787. Rahmani A, Ji M, Mesbahi M, Egerstedt M. Controllability of multi-agent systems from a graph-theoretic perspective. SIAM Journal on Control and Optimization, 2009; 48(1): 162-186. Aguilar CO, Gharesifard B. Almost equitable partitions and new necessary conditions for network controllability. Automatica, 2017; 80: 25-31. Martini S, Egerstedt M, Bicchi A. Controllability analysis of multi-agent systems using relaxed equitable partitions. International Journal of Systems, Control and Communications, 2010; 2(1/2/3): 100-121. Camlibel MK, Zhang S, Cao M. Comments on ’Controllability analysis of multi-agent systems using relaxed equitable partitions’. International Journal of Systems Control and Communications, 2012; 4(1/2): 72-75. Sciriha I. Graphs with a common eigenvalue deck. Linear Algebra and its Applications, 2009; 430(1): 78-85. Farrugia A, Sciriha I. Controllability of undirected graphs. Linear Algebra and its Applications, 2014; 454: 138-157. Yazicioglu AY, Abbas W, Egerstedt M. Graph Distances and Controllability of Networks.
IEEE Transactions on Automatic Control.2016; 61(12):4125-4130. Olshevsky A. Minimal controllability problems. IEEE Transactions on Control of Network Systems, 2014;1(3): 249-258. Pequito S, Ramos G, Kar S, Aguiar AP, Ramos J. The robust minimal controllability problem. Automatica, 2017;82: 261-268. Zhao B, Guan YQ, Wang L. Non-fragility of multi-agent controllability. Science in China Series F: Information Science, 2018;61(5): 052202. Liu XZ, Ji ZJ. Controllability of multiagent systems based on path and cycle graphs. International Journal of Robust and Nonlinear Control. 2018;28(1):296-309. Parlangeli G, Notarstefano G. On the reachability and observability of path and cycle graphs. IEEE Transactions on Automatic Control, 2012;57(3):743-748. Kalman RE. Mathematical description of linear dynamical systems. Journal of the Society for Industrial and Applied Mathematics. Series A, 1963;1(2): 152-192. Brockett RW. Finite Dimensional Linear Systems. Wiley and Sons,1970. Biyikoglu T, Leydold J, Stadler PF. Laplacian Eigenvectors of Graphs. Lecture Notes in Mathematics, Heidelberg, Germany, Springer, 2007.
--- abstract: | The trade-off between relevance and fairness in personalized recommendations has been explored in recent works, with the goal of minimizing learned discrimination towards certain demographics while still producing relevant results. We present a fairness-aware variation of the Maximal Marginal Relevance (MMR) re-ranking method which uses representations of demographic groups computed using a labeled dataset. This method is intended to incorporate fairness with respect to these demographic groups. We perform an experiment on a stock photo dataset and examine the trade-off between relevance and fairness against a well-known baseline, MMR, by using human judgment to examine the results of the re-ranking when using different fractions of a labeled dataset, and by performing a quantitative analysis on the ranked results of a set of query images. We show that our proposed method can incorporate fairness in the ranked results while obtaining higher precision than the baseline, and our case study shows that even a limited amount of labeled data can be used to compute the representations to obtain fairness. This method can be used as a post-processing step for recommender systems and search. 
author: - Chen Karako - Putra Manggala bibliography: - 'sample-bibliography.bib' title: 'Using Image Fairness Representations in Diversity-Based Re-ranking for Recommendations' --- &lt;ccs2012&gt; &lt;concept&gt; &lt;concept\_id&gt;10002951.10003317.10003338.10003345&lt;/concept\_id&gt; &lt;concept\_desc&gt;Information systems Information retrieval diversity&lt;/concept\_desc&gt; &lt;concept\_significance&gt;500&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10002951.10003260.10003261.10003267&lt;/concept\_id&gt; &lt;concept\_desc&gt;Information systems Content ranking&lt;/concept\_desc&gt; &lt;concept\_significance&gt;500&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10002951.10003260.10003261.10003271&lt;/concept\_id&gt; &lt;concept\_desc&gt;Information systems Personalization&lt;/concept\_desc&gt; &lt;concept\_significance&gt;100&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;/ccs2012&gt;
--- abstract: 'The interplay between magnetism and superconductivity in the newly discovered heavy-fermion superconductor CePt$_3$Si has been investigated using the zero-field $\mu$SR technique. The $\mu$SR data indicate that the whole muon ensemble senses spontaneous internal fields in the magnetic phase, demonstrating that magnetism occurs in the whole sample volume. This points to a microscopic coexistence between magnetism and heavy-fermion superconductivity.' author: - 'A. Amato' - 'E. Bauer' - 'C. Baines' bibliography: - 'amato\_general.bib' title: 'On the Coexistence Magnetism/Superconductivity in the Heavy-Fermion Superconductor CePt$_3$Si' --- Introduction ============ In recent years, strongly correlated electron systems have played a leading role in solid state physics. The importance of this research field is illustrated by the discovery of novel phases in metals, intermetallics and oxides at low temperatures. One of the most relevant examples is the discovery of unconventional superconductivity in heavy-fermion systems. Unconventional superconductivity seems to result from the nature of the mechanism providing the attractive force necessary for the Cooper-pair formation. In conventional superconductors, the electrons are paired in a spin-singlet zero-angular-momentum state ($L=0$), which results from the fact that their binding is described in terms of the emission and absorption of phonons. This leads to the formation of an isotropic superconducting gap in the electronic excitations over the whole Fermi surface. On the other hand, heavy-fermion superconductivity is observed to show a close interplay with magnetic fluctuations. This seems to indicate that the attractive effective interaction between the strongly renormalized heavy quasiparticles in the superconducting heavy-fermion systems is not provided by the electron-phonon interaction as in ordinary superconductors, but rather is mediated by electronic spin fluctuations. This non-conventional (i.e. 
non-BCS) mechanism is believed to lead to an unconventional configuration of the heavy-fermion superconducting state, which may involve anisotropic, nonzero-angular-momentum states ($L \ne 0$, see Ref.  for a review and references therein). An additional feature in a number of systems is the observation of an apparent coexistence of heavy-fermion superconductivity and static magnetism. However, at ambient pressure, such a coexistence was until recently confirmed solely in uranium-based heavy-fermion systems and ruled out in cerium-based ones. Such conclusions were deduced from microscopic studies, in particular from the sensitive $\mu$SR technique [@amato_RMP_1997]. In this context, the example of the first discovered heavy-fermion superconductor CeCu$_2$Si$_2$ is exemplary, as it exhibits a competition between both ground states, i.e. magnetism and superconductivity do not coexist, but appear as two different, mutually exclusive ground states of the same subset of electrons. Such competition was first discovered by $\mu$SR [@luke_PRL_1994; @feyerherm_PHYSREVB_1997] and only recently confirmed by neutron studies [@stockert_TBP]. Recently, another Ce-based heavy-fermion system, namely CePt$_3$Si, was found [@bauer_PRL_2004] showing antiferromagnetism and superconductivity ($T_{\text N} = 2.2$ K and $T_c = 0.75$ K) at ambient pressure. This material crystallizes in a tetragonal structure (space group $P4mm$) lacking a center of inversion symmetry. This feature is presently attracting much interest, since spin-triplet unconventional superconductivity was until now thought to require such inversion symmetry in order to obtain the necessary degenerate electron states [@anderson_PRB_1984; @frigeri_PRL_2004]. In this article, we present $\mu$SR studies aiming to test, at the microscopic level, the coexistence between the magnetic and superconducting states. 
Experiment ========== CePt$_3$Si was prepared by high frequency melting and subsequently heat treated at 880$^{\circ}$C for one week. Phase purity was evidenced by x-ray diffraction. The $\mu$SR experiments were carried out at the Swiss Muon Source of the Paul Scherrer Institute (Villigen, Switzerland). Measurements were performed on the GPS and LTF instruments of the $\pi$M3 beamline, using a He-flow cryostat (base temperature 1.7 K) and a $^3$He-$^4$He dilution refrigerator (base temperature 0.03 K), respectively. In order to avoid a depolarizing background $\mu$SR signal, the sample was glued onto a high-purity silver holder. Measurements on both instruments were performed on the same sample and showed a very good agreement in the overlapping temperature range. Measurements were performed in zero applied field (ZF), with an external-field compensation of the order of $\pm 20$ mOe. Results and discussion ====================== ZF $\mu$SR is a local probe measurement of the magnetic field at the muon stopping site(s) in the sample. If the implanted polarized muons are subject to magnetic interactions, their polarization becomes time dependent, ${\mathbf P}_{\mu}(t)$. By measuring the asymmetric distribution of positrons emitted when the muons decay as a function of time, the time evolution of $P_{\mu}(t)$ can be deduced. The function $P_{\mu}(t)$ is defined as the projection of ${\mathbf P}_{\mu}(t)$ along the direction of the initial polarization: $P_{\mu}(t) ={\mathbf P}_{\mu}(t)\cdot {\mathbf P}_{\mu}(0)/P_{\mu}(0)=G(t)P_{\mu}(0)$. Hence, the depolarization function $G(t)$ reflects the normalized muon-spin autocorrelation function $G(t)=\langle{\mathbf S}(t)\cdot{\mathbf S}(0)\rangle/S(0)^2$, which depends on the average value, distribution, and time evolution of the internal fields, and therefore contains all the physics of the magnetic interactions of the muon inside the sample [@blundell_CONTEMP_1997]. 
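In the paramagnetic phase analyzed next, $G(t)$ reduces to the static Gaussian Kubo-Toyabe form of Eq. (\[equation\_cept3si\_kt\]). A brief numerical sketch (Python, purely illustrative; the rate $\Delta = 0.06$ MHz is the fitted value quoted below) confirming the limits $G(0)=1$ and $G(t\to\infty)\to 1/3$, the latter being the origin of the "1/3-terms" appearing later in the magnetic-phase analysis:

```python
import numpy as np

def kubo_toyabe(t, delta):
    # static Gaussian Kubo-Toyabe depolarization function
    return 1/3 + 2/3 * (1 - delta**2 * t**2) * np.exp(-delta**2 * t**2 / 2)

delta = 0.06  # MHz, the rate fitted to the paramagnetic-phase data

assert np.isclose(kubo_toyabe(0.0, delta), 1.0)  # full polarization at t = 0
assert np.isclose(kubo_toyabe(1e3, delta), 1/3)  # long-time recovery to 1/3
# initial decay is Gaussian: G(t) ~ exp(-delta^2 t^2) for t << 1/delta
assert np.isclose(kubo_toyabe(0.5, delta), np.exp(-delta**2 * 0.25), atol=1e-3)
```

The $1/3$ long-time value reflects the fraction of muon spins initially parallel to the local static field, which does not depolarize.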
Above $T_{\text N}$, the time evolution of the muon polarization is best described by the well-known Kubo-Toyabe function [@kubo_MAGNRES_1967]: $$\label{equation_cept3si_kt} G^{\text {para}}(t) = \frac{1}{3} + \frac{2}{3}(1-\Delta^2 t^2)\exp(-\frac{\Delta^2 t^2}{2})~,$$ where $\Delta^2/\gamma_{\mu}^2$ represents the second moment of the local field distribution at the muon site ($\gamma_{\mu}$ is the gyromagnetic ratio of the muon). Such a depolarization function is characteristic of a paramagnetic state where the muon depolarization is solely due to the dipolar fields of the nuclear moments ($^{29}$Si and $^{195}$Pt). In the paramagnetic state, the electronic magnetic moments are often not observable by $\mu$SR due to their fast fluctuation rates. By contrast, the nuclear magnetic moments appear static within the $\mu$SR time window and create a Gaussian field distribution at the muon stopping site, leading to the Kubo-Toyabe depolarization function reported in Eq. (\[equation\_cept3si\_kt\]). Note that this function possesses an initial Gaussian character \[$\simeq \exp(-\Delta^2t^2)$ for $t \ll \Delta^{-1}$\], as observed in the data reported in Fig. \[figure\_cept3si\_raw\]. Fitting Eq. (\[equation\_cept3si\_kt\]) to the data provides a depolarization rate $\Delta = 0.06~\text{MHz}$ corresponding to a field distribution width of $\sim0.7$ G at the muon site, in line with theoretical values computed for several possible stopping sites. ![\[figure\_cept3si\_raw\] Example of ZF $\mu$SR signals obtained in polycrystalline CePt$_3$Si in the paramagnetic phase (10 K), the magnetic phase (1 K) and below the superconducting transition (0.1 K). The lines represent fits obtained using Eqs. (\[equation\_cept3si\_kt\]) and (\[equation\_cept3si\_afm\]). 
Note that for clarity, the fit for the data obtained at 1 K is omitted.](figure_cept3si_raw){width="8cm"} Below $T_{\text N}$, clear spontaneous oscillations are detected in the $\mu$SR signal, indicating the occurrence of static finite magnetic fields sensed by the muons and arising from static electronic magnetic moments. In the antiferromagnetic state, the $\mu$SR signal is best described by the sum of two components, i.e.: $$\begin{aligned} \label{equation_cept3si_afm} G^{\text {AF}}(t) &= &A_1\big[\tfrac{1}{3} \exp(-\lambda_1 t) +\nonumber\\ & & \quad\,\,\,\,\tfrac{2}{3} \exp(-\lambda_1' t) \cos (2 \pi \nu_1 t + \phi_1)\big] +\nonumber\\ & & A_2\big[\tfrac{1}{3} \exp(-\lambda_2 t) + \nonumber\\ & & \quad\,\,\,\,\tfrac{2}{3} \exp(-\lambda_2' t) \cos (2 \pi \nu_2 t + \phi_2)\big]~.\end{aligned}$$ These components indicate the presence of two magnetically inequivalent muon stopping sites sensing internal fields $\lvert {\mathbf B}_{\mu}^i\rvert=2\pi\nu_i/\gamma_{\mu}$. As expected for a polycrystalline sample, the “$1/3$-term” of each component represents the fraction of the muons possessing an initial polarization along the direction of the internal field. Therefore, the depolarization rates related to these fractions ($\lambda_i$) reflect solely the internal spin dynamics, whereas the depolarization rates $\lambda_i'$ arise from both dynamical and static effects. The temperature evolution of the spontaneous frequencies $\nu_i$ is reported in Fig. \[figure\_cept3si\_frequencies\]. The values of the frequencies at $T \rightarrow 0$ correspond to internal field values of $\sim$ 160 G and 10 G, respectively. ![\[figure\_cept3si\_frequencies\] Temperature dependence of the spontaneous $\mu$SR frequencies $\nu_1$ and $\nu_2$ obtained by fitting Eq. (\[equation\_cept3si\_afm\]) to the $\mu$SR data. 
The measurements were performed in a polycrystalline CePt$_3$Si sample.](figure_cept3si_freq){width="8cm"} Very recently, neutron scattering experiments determined the magnetic structure of CePt$_3$Si [@metoki_JPCM_2004]. Magnetic Bragg reflections observed at wave vector values ${\mathbf Q} = (0,0,1/2)$ and $(1,0,1/2)$ indicate that the magnetic moments align ferromagnetically in the basal plane and stack antiferromagnetically along the $c$ axis with a strongly reduced value of $0.16~\mu_{\text B}$. By considering this magnetic structure, the values of the spontaneous $\mu^+$-frequencies provide information for a tentative determination of the muon stopping sites in the tetragonal structure. By assuming that the magnetic moments point along the $a$ or $b$ axis, the most probable muon sites are located at two different 1(b) Wyckoff-positions, i.e. $(1/2,1/2,0.65)$ for the low-frequency component and $(1/2,1/2,0.82)$ for the high-frequency component. These sites are respectively located in the center of the Pt-plane formed by the Pt(1) ions and between the planes formed by Pt(1) and Pt(2) ions (see Fig. \[figure\_cept3si\_structure\], notation from Ref. ). In addition, for these sites, the calculated field distributions due to nuclear dipole moments are found in reasonable agreement with the depolarization rate $\Delta$ observed in the paramagnetic regime \[see Eq. (\[equation\_cept3si\_kt\])\]. Note also that both sites have the same multiplicity, which is in line with the observation that $A_1 \simeq A_2$ as shown on Fig. \[figure\_cept3si\_asymmetry\]. ![\[figure\_cept3si\_structure\] Crystal structure of CePt$_3$Si. The smallest spheres represent the muon stopping sites discussed in the text.](figure_cept3si_structure){width="7cm"} ![\[figure\_cept3si\_asymmetry\] Temperature dependence of the amplitudes $A_1$ and $A_2$ of the spontaneous $\mu$SR frequencies in CePt$_3$Si \[see Eq. 
(\[equation\_cept3si\_afm\])\].](figure_cept3si_asymmetry){width="8cm"} The first relevant observation is that $A_1 + A_2 = 1$ for all temperatures below $T_{\text N}$. This means that the whole muon ensemble is sensing the magnetic state, which in turn unambiguously demonstrates that the *whole* sample volume is involved in the magnetic phase below $T_{\text N}$. Together with thermodynamical studies demonstrating that superconductivity has a bulk character, the present observation indicates a microscopic coexistence between magnetism and superconductivity. A similar conclusion was very recently drawn from NMR studies performed at different frequencies [@yogi_PRL_2004]. Note that the conclusion obtained by $\mu$SR is independent of the exact knowledge of the muon stopping sites. The behavior observed here in CePt$_3$Si is opposite to the one reported for CeCu$_2$Si$_2$ (see above), where the magnetic state is expelled from the sample upon cooling below $T_c$. The observed coexistence in CePt$_3$Si is reminiscent of the situation observed in U-based heavy-fermion systems, such as UPd$_2$Al$_3$ (Ref. ) or UNi$_2$Al$_3$ (Ref. ), where a model of two independent electronic subsets, localized and itinerant (responsible for magnetism and superconductivity, respectively), was proposed in view of similar microscopic studies [@feyerherm_PRL_1994] and thermodynamic measurements [@caspary_PRL_1993]. ![\[figure\_cept3si\_freq\_norm\] Temperature dependence of the spontaneous $\mu$SR frequencies normalized to their values at $T_c = 0.75$ K. Note the very slight change below $T_c$. The symbols correspond to those of Fig. \[figure\_cept3si\_frequencies\].](figure_cept3si_freq_norm){width="8cm"} Upon cooling the system into the superconducting state, the $\mu$SR data suggest a slight change of the absolute spontaneous internal fields at the muon sites. As shown on Fig. 
\[figure\_cept3si\_freq\_norm\], one observes, for $T < T_c$, a slight reduction and increase of the low and high frequency signals, respectively. Note that such changes are at the limit of the measurement accuracy. In any case, two possible explanations could be invoked for these changes. The first one would be to consider a coupling between the superconducting and magnetic order parameters, reminiscent of the situation observed in UPt$_3$ [@aeppli_PRL_1989]. Alternatively, the frequency changes could have a simpler origin, since the muon senses interstitial fields and therefore only indirectly probes the strength of the magnetic order parameter. Hence, in addition to the dipolar interaction, the static 4$f$ magnetic moments will change the spin polarization of the conduction electrons at the muon site [@amato_RMP_1997], which results in an increased hyperfine field acting on the muon. Such a contribution is a function of the density of normal electron states and will therefore be affected upon cooling the sample into the superconducting state. Below $T_c$, one expects a decrease in absolute value due to the opening of the superconducting gap. Depending on the muon stopping site and due to the oscillatory character of the RKKY interaction between the static 4$f$ moments and the conduction electrons, a decrease of the hyperfine field contribution can actually lead to either an increase or a decrease of the total internal fields at the muon site, as possibly observed in the present $\mu$SR data. Conclusion ========== Our zero-field $\mu$SR data have demonstrated the bulk character of the antiferromagnetic state in the heavy-fermion superconductor CePt$_3$Si, suggesting therefore a microscopic coexistence between magnetism and superconductivity. 
In addition, a slight change of the $\mu$SR response upon cooling the sample below $T_c$ can be ascribed to a coupling of the superconducting and magnetic order parameters and/or to the decrease of the hyperfine contact contribution acting on the muon.  \ The $\mu$SR measurements reported here were performed at the Swiss Muon Source, Paul Scherrer Institute, Switzerland. Parts of the work were supported by the Austrian FWF (Fonds zur Förderung der wissenschaftlichen Forschung) project P16370.
--- abstract: 'We [*derive*]{} the exact configuration space path integral, together with the way to evaluate it, from the Hamiltonian approach for any quantum mechanical system in flat spacetime whose Hamiltonian has at most two momentum operators. Starting from a given, covariant or non-covariant, Hamiltonian, we go from the time-discretized path integral to the continuum path integral by introducing Fourier modes. We prove that the limit $N \rightarrow \infty$ for the terms in the perturbation expansion (“Feynman graphs”) exists by demonstrating that the series involved are uniformly convergent. [*All*]{} terms in the expansion of the exponent in $<x| \exp (- \Delta t \hat{H} / \hbar) |y>$ contribute to the propagator (even at order $\Delta t$!). However, in the time-discretized path integral the only effect of the terms with $\hat{H}^2$ and higher is to cancel terms which naively seem to vanish for $N \rightarrow \infty$ but, in fact, are nonvanishing. The final result is that the naive correspondence between the Hamiltonian and the Lagrangian approach is correct, after all. We explicitly work through the example of a point particle coupled to electromagnetism. We compute the propagator to order $(\Delta t)^2$ both with the Hamiltonian and the path integral approach and find agreement.' author: - | Kostas Skenderis[^1] and Peter van Nieuwenhuizen[^2]\ \ Institute for Theoretical Physics,\ State University of New York at Stony Brook,\ Stony Brook, NY 11794-3840 title: ON THE HAMILTONIAN APPROACH AND PATH INTEGRATION FOR A POINT PARTICLE MINIMALLY COUPLED TO ELECTROMAGNETISM --- Introduction. ============= Path integrals are often first written down in a symbolic way as an integral over paths of the exponent of an action, and then defined by some time-discretization. Of course, there are many ways in which to implement time-discretization. In some instances, rules have been discovered which lead to desirable answers for the path integral. 
A well-known example is the mid-point rule for the interaction $\int dt A_j(x) \dot{x}^j$ of a point particle coupled to an electromagnetic potential. This rule leads to gauge invariance of the terms proportional to $\Delta t$ in the propagator[@schul] but it breaks gauge invariance in the terms of order $(\Delta t)^2$. The terms of higher order in $\Delta t$ are needed for the evaluation of anomalies (see below). In general, starting from a continuum path integral, there is no preferred way to discretize it. One might take the point of view that different discretizations simply correspond to different theories. In this article we take a different point of view. We take the Hamiltonian formalism as starting point, and shall [*deduce*]{} both the action $S_{\rm config}$ to be used in the configuration space path integral, and the way this path integral should be evaluated (“the measure”). We mean by the expressions “to be used” and “should be” that in this way the path integral formalism exactly reproduces the propagator of the Hamiltonian formalism. Of course, in the Hamiltonian $\hat{H}(\hat{x}^i,\hat{p}_j)$ there is a priori a corresponding ambiguity in the ordering of the operators $\hat{x}^i$ and $\hat{p}_j$. However, in several examples, the Hamiltonian of a quantum mechanical model is, in fact, the regulator of the Jacobians for symmetry transformations of a corresponding [*field*]{} theory, and these regulators are uniquely fixed by requiring certain symmetries of the field theory to be maintained at the quantum level[@A_W; @diaz; @fio; @vanfio]. For example, in [@diaz] regulators are constructed which maintain Weyl (local scale) invariance but as a consequence break Einstein (general coordinate) invariance. Thus: the field theory and the choice of which symmetries are free of anomalies fixes the regulator, the regulator is the Hamiltonian of a corresponding quantum mechanical model, and the operator ordering of this Hamiltonian is thus fixed. 
For this reason we consider Hamiltonians of the form $ \hat{H}=\hat{p}^2 + a^i(\hat{x}) \hat{p}_i + b(\hat{x})$ whose operator ordering is fixed in this way but whose coefficients $a^i(\hat{x})$ and $b(\hat{x})$ are not restricted except that we assume that they are regular functions; they may correspond to covariant or non-covariant Hamiltonians. The results of this paper [*prove*]{} which path integral (including, of course, the way to evaluate it) corresponds to which regulator (Hamiltonian). For chiral anomalies[@A_W] this precise correspondence was not needed due to their topological nature, but for trace anomalies[@fio; @vanfio] and other anomalies of non-topological nature, the precise correspondence is crucial. Having obtained the 1-1 correspondence, it is then also possible to start with a particular action in the path integral (the latter to be evaluated as derived below) and to find the corresponding Hamiltonian operator. This will usually be the case when one is dealing with quantum field theories. For example, when one is dealing with renormalizable field theories or when the theory has certain symmetries the action may be known, and this will fix the operator ordering and the terms in the Hamiltonian. In the Hamiltonian approach the propagator is defined by $$<x,t_2|y,t_1> = <x| \exp (-\frac{\Delta t}{\hbar} \hat{H})|y>, \; \Delta t=t_2-t_1,$$ and evaluated, following Feynman, by inserting a complete set of momentum eigenstates $$<x,t_2|y,t_1> = \int dp <x| \exp (-\frac{\Delta t}{\hbar} \hat{H})|p><p|y>.$$ Expanding the exponent, and moving in each term $(-\Delta t \hat{H} / \hbar)^n /n!$ the $\hat{x}$ operators to the left and the $\hat{p}$ operators to the right, one obtains an [*unambiguous*]{} answer for the propagator. No regularization is needed. However, one must keep track of the commutators between $\hat{x}^i$ and $\hat{p}_j$. 
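This ordering bookkeeping can be made concrete with a one-dimensional symbolic sketch (ours, not from the paper; here $a$ stands for $e/c$, $A(x)$ is an arbitrary smooth function, and the Hamiltonian anticipates the minimally coupled form considered below). Acting on a plane wave $<x|p>$, all operator-ordering effects amount to a single term $\frac{i\hbar a}{2} A'(x)$ on top of the classical symbol:

```python
import sympy as sp

x, p, hbar, a = sp.symbols('x p hbar a', real=True)  # a plays the role of e/c
A = sp.Function('A')(x)                              # arbitrary smooth A(x)
psi = sp.exp(sp.I * p * x / hbar)                    # plane wave <x|p> (unnormalized)

def P(f):
    """Momentum operator -i*hbar*d/dx in the position representation."""
    return -sp.I * hbar * sp.diff(f, x)

def H(f):
    """H = (1/2)(p_op - a*A(x))(p_op - a*A(x)), applied as operators."""
    D = lambda g: P(g) - a * A * g
    return D(D(f)) / 2

# Claimed full symbol: classical piece plus the ordering (commutator) term.
h_claimed = (p - a * A)**2 / 2 + sp.I * hbar * a * sp.diff(A, x) / 2
residual = sp.simplify(sp.expand(H(psi) - h_claimed * psi))
# residual == 0: the operator ordering is fully captured by i*hbar*a*A'/2.
```

The nonzero ordering term is exactly the kind of contribution that the naive classical symbol misses, which is the point of the discussion that follows.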
It is often assumed that it is sufficient to expand the exponent only to first order in $\Delta t$, and to re-exponentiate the result $$\begin{aligned} <x| \exp (-\frac{\Delta t}{\hbar} \hat{H})|p> & = &\exp [-\frac{\Delta t}{\hbar} h(x,p)]<x|p> \; \mbox{(false!)} \label{la} \\ <x|\hat{H}|p>& \equiv &h(x,p)<x|p>. \label{linapp}\end{aligned}$$ This is incorrect for Hamiltonians with derivative coupling: for nonlinear sigma models where the $\hat{p}^2$ term is multiplied by a function of $\hat{x}$ (“the metric”)[@graham; @book; @bas] or for Hamiltonians with a term $A(\hat{x}) \cdot \hat{p}$. We shall consider the Hamiltonian $$\hat{H} = \frac{1}{2} \Big( \hat{p}_i - (\frac{e}{c}) A_i(\hat{x}) \Big) \Big( \hat{p}^i - (\frac{e}{c}) A^i(\hat{x}) \Big) + V(\hat{x}),$$ for arbitrary but nonsingular $A_i(\hat{x})$ and $V(\hat{x})$, which is obviously the most general Hamiltonian of the form $ \hat{H}=\hat{p}^2 + a^i(\hat{x}) \hat{p}_i + b(\hat{x})$, and show that there are terms proportional to $\Delta t$ in the propagator which are due to commutators between $\hat{p}_i$ and $A_j(\hat{x})$. In fact, [*all*]{} terms in the expansion of the exponential give such contributions[@graham; @book; @bas]! Because the commutators $[\hat{p}^i, \hat{x}_j]=-i \hbar \delta^i_j$ are proportional to $\hbar$, the propagator becomes a series in $\hbar$, $\Delta t$ and $(x-y)^i$ with coefficients which are functions of $x$. When we use the term “of order $(\Delta t)^k$” we mean all terms which differ from the leading term by a factor $(\Delta t)^k$, counting $(x-y)^i$ as $(\Delta t)^{1/2}$. The terms of order $\hbar$ w.r.t. the classical result correspond to one-loop corrections in the path integral approach, and can be written in terms of the classical action as the Van Vleck determinant[@vleck; @witt]. Terms of higher order in $\hbar$ in the propagator can be computed straightforwardly (though tediously) in the Hamiltonian approach, again without need to specify a regularization. 
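The role of these commutators can be checked symbolically. In the one-dimensional sketch below (ours; $\alpha = \hat p^2/2$ and $\beta = A(\hat x)\hat p$ in the notation introduced in section 2), the commutator $[\alpha, \beta]$ acting on a plane wave produces a $p^2 A'$ term, the one-dimensional analogue of the $p_i p_j \partial_i A_j$ contribution discussed later:

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)
A = sp.Function('A')(x)
psi = sp.exp(sp.I * p * x / hbar)         # plane wave <x|p> (unnormalized)

def P(f):
    return -sp.I * hbar * sp.diff(f, x)   # momentum operator

alpha = lambda f: P(P(f)) / 2             # p^2/2
beta = lambda f: A * P(f)                 # A(x) p, x-operators ordered left

comm = sp.expand(alpha(beta(psi)) - beta(alpha(psi)))
# [alpha, beta]|p> = (-i*hbar*A'*p^2 - hbar^2*A''*p/2)|p>; the p^2 piece is
# the term whose Gaussian p-integration later cancels the naive div(A) term.
expected = (-sp.I * hbar * sp.diff(A, x) * p**2
            - hbar**2 * sp.diff(A, x, 2) * p / 2) * psi
```

Since the commutator carries an explicit $\hbar$, this is also a direct illustration of why the propagator becomes a series in $\hbar$.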
This indicates that the details of the path integral should follow straightforwardly from the Hamiltonian starting point. In particular it should not be necessary to fix a free constant in the overall normalization of the path integral by hand, for example by dividing by the path integral for a free particle. One begins by defining the path integral as $$<x,0|y,-T> = \lim_{N \rightarrow \infty} \int \Big[ \prod_{\alpha=1}^{N-1} dx_\alpha \Big] \Big[ \prod_{\alpha=1}^N <x_{\alpha-1},t_{\alpha-1}|x_\alpha,t_\alpha> \Big],$$ where $x_0 = x$ and $x_N = y$. This particular time-discretization follows from the Hamiltonian approach; it is due to the operator identity $$\exp ( - \frac{T}{\hbar} \hat{H} ) = \Big( \exp ( - \frac{T/N}{\hbar} \hat{H} ) \Big)^N.$$ The main result of this paper is a proof that the $N \rightarrow \infty$ limit exists, and defines a continuum action $S_{\rm config}$ and an unambiguous and simple way to evaluate the path integral perturbatively. We begin by splitting $x_\alpha$ into a background part $z_\alpha$ and a quantum part $\xi_\alpha$. We shall also decompose the time-discretized action $S$ into a part $S_0$ which yields the propagator on the world line, and the rest which yields the interaction terms $S_{\rm int}$. The $z_\alpha$ satisfy the equation of motion of $S_0$ and the boundary conditions $z_\alpha=y$ at $\alpha=N$ and $z_\alpha=x$ at $\alpha=0$, so that $\xi_\alpha=0$ both at $\alpha=0$ and at $\alpha=N$. Since $S_0$ is not equal to $S$, there are terms linear in $\xi_\alpha$ in the expansion of $S(z+\xi,\dot{z}+\dot{\xi})$. Notice that the time-discretized action $S$ has [*not*]{} been obtained by some ad-hoc rule, but rather it is determined from the Hamiltonian approach. The final result for the path integral should not depend on the choice of $S_0$. 
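The operator identity above can be illustrated numerically. The following sketch (ours, not from the paper: a finite-difference grid Hamiltonian $H = T + V$ with a harmonic potential, $\hbar = 1$) checks that a product of $N$ short-time factors converges to the exact imaginary-time evolution operator as $N$ grows:

```python
import numpy as np

# Finite-difference grid for H = p^2/2 + V(x), hbar = 1 (illustrative only).
n, L, T_total = 61, 12.0, 1.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
T_kin = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
V_pot = np.diag(0.5 * x**2)

def expm_sym(M, t):
    """exp(-t*M) for a symmetric matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.exp(-t * w)) @ U.T

exact = expm_sym(T_kin + V_pot, T_total)

def discretized(N):
    """(exp(-eps*T) exp(-eps*V))^N with eps = T_total/N: N short-time kernels."""
    eps = T_total / N
    step = expm_sym(T_kin, eps) @ expm_sym(V_pot, eps)
    return np.linalg.matrix_power(step, N)

errors = {N: np.abs(discretized(N) - exact).max() for N in (4, 16, 64)}
# The error shrinks roughly like 1/N for this first-order splitting.
```

This is only the crude splitting picture; the paper's point is that the actual short-time kernel derived from the Hamiltonian contains further commutator terms that matter at the orders in $\Delta t$ studied here.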
We choose $S_0$ as the action of a free particle because that leads to simple perturbation theory, but other choices of $S_0$ should lead to the same final result although the Feynman rules for the perturbative expansion of the path integral will be different. One now expands $\xi_\alpha$ into eigenfunctions of $S_0$, i.e., in terms of trigonometric functions $$\xi_\alpha = \sum_{k=1}^{N-1} y^k \sin \alpha k \pi / N, \; (\alpha=1,\ldots,N-1).$$ Changing integration variables from $\xi_\alpha$ to $y^k$, the Jacobian is essentially unity, while $S_0$ is quadratic and diagonal in these $y^k$. Rescaling these $y^k$ such that the kinetic term in terms of the rescaled variables $v^k$ becomes the one of the continuum theory, the Jacobian of this rescaling leads to a non-trivial factor in the measure. At this point, the path integral has the generic form $$\int d\mu \exp [-\frac{1}{\hbar} (S_0 + S_{\rm int}(N))],$$ where the measure $\mu$ and the kinetic term $S_0$ are already in the form of the continuum theory, but the interaction $S_{\rm int}$ still depends on $N$. By the term “continuum theory” we mean the path integral with (i) the classical Lagrangian $L=\frac{1}{2} \dot{x}^2 - i(\frac{e}{c}) \dot{x}^i A_i + V$, (ii) the expansion $x(t) = z(t) + \sum_{k=1}^{\infty} v^k \sin k \pi t/T$, where $z(t)$ is a solution of the equation of motion $\ddot{z}=0$ with the boundary conditions $z(0)=x$ and $z(T)=y$, and (iii) the measure which normalizes the Gaussian integration with $S_0$ over the modes $v^k$ to $(2 \pi \hbar T)^{-1/2}$ (not to unity because there is always one more intermediate set $|p><p|$ in Feynman’s approach than intermediate sets $|x><x|$. The remaining factor $(2 \pi \hbar T)^{-1/2}$ combines with the classical part $\exp \big(-(x-y)^2 /2 \hbar T\big)$ to yield a representation of $\delta(x-y)$ for small $T$). One must then show that the limit $N \rightarrow \infty$ in $S_{\rm int}$ yields the interaction of the continuum theory. 
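The change of variables from $\xi_\alpha$ to the modes $y^k$ can be checked numerically at finite $N$ (a minimal sketch, ours, with $N = 16$): the discrete sine modes are mutually orthogonal, so the Jacobian is a field-independent constant, and they diagonalize the free kinetic form with Dirichlet ends ($\xi_0 = \xi_N = 0$):

```python
import numpy as np

N = 16
idx = np.arange(1, N)
# S[alpha-1, k-1] = sin(alpha * k * pi / N)
S = np.sin(np.outer(idx, idx) * np.pi / N)

# Orthogonality: S^T S = (N/2) * identity, so the Jacobian of
# xi_alpha -> y^k is a constant, absorbed into the measure.
gram = S.T @ S

# Free kinetic form sum_alpha (xi_alpha - xi_{alpha-1})^2 with Dirichlet ends:
K = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
K_modes = S.T @ K @ S
# Diagonal, with entries N*(1 - cos(k*pi/N)) = 2*N*sin(k*pi/(2N))**2.
```

The diagonal entries are exactly the factors that the rescaling to the variables $v^k$ is designed to normalize.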
This is a well-known complicated problem, but we shall present here a totally elementary proof which uses only trigonometric relations such as $2 \sin \alpha \sin \beta = \cos (\alpha-\beta) - \cos (\alpha+\beta)$ and the fact that the infinite series we encounter are uniformly convergent as functions of $N$. This property allows us to take the limit $N \rightarrow \infty$ before the summations are performed. For the Hamiltonians of the form $T(p)+V(x)$ such a proof is quite simple, but for non-vanishing vector potential $A(x)$, we need rather laborious algebra. The result is surprisingly simple. All terms in the propagator which in the Hamiltonian approach are due to commutators are only needed to make sure that in the limit $N \rightarrow \infty$ one obtains the classical action. To be more explicit, consider the discretized action in (\[inte\]). The last three lines vanish in the naive limit $N \rightarrow \infty$ (the limit $N \rightarrow \infty$ for fixed mode index $k$) since they have extra factors $1/N$ w.r.t. the two lines above. These latter two lines naively yield the term $\int dt A_j(x) \dot{x}^j$ in the classical action because $1/N$ becomes $dt$ and $(\xi_{\alpha-1} - \xi_\alpha)$ becomes $\dot{\xi} dt$. The claim is that if one does not take the naive limit but carefully evaluates the sums, then the non-naive terms in the first two lines cancel all of the last three lines in (\[inte\]). To discuss in more detail what we mean by the naive limit $N \rightarrow \infty$, consider the interaction terms $$S_{\rm int}= \frac{1}{N} \sum_{k,l=1}^{N-1} v^k v^l \lambda(k) \lambda(l) \sum_{\alpha=1}^{N-1} \sin \alpha k \pi /N \sin \alpha l \pi /N,$$ where $\lambda(k)=k \pi [2N^2(1-\cos k \pi /N)]^{-1/2}$. For fixed $k$ and $N$ tending to infinity one finds $\lambda(k)=1$, while for $k \sim N$ tending to infinity one has $\lambda(k)=\pi/2$. 
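Both limits of $\lambda(k)$ are easy to confirm numerically (a minimal sketch, ours; the second form uses $2N^2(1-\cos k\pi/N) = 4N^2 \sin^2(k\pi/2N)$ to avoid cancellation in floating point):

```python
import numpy as np

def lam(k, N):
    """lambda(k) = k*pi * [2*N^2*(1 - cos(k*pi/N))]**(-1/2)
                 = (k*pi) / (2*N*sin(k*pi/(2*N)))."""
    return k * np.pi / (2.0 * N * np.sin(k * np.pi / (2.0 * N)))

# Naive limit: fixed k, N -> infinity gives lambda(k) -> 1,
# but for modes with k of order N one finds lambda(N) = pi/2 instead.
```

The discrepancy between the two limits is precisely why the naive mode-by-mode limit misses contributions from the high modes.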
One [*may*]{} take the limit $N \rightarrow \infty$ in $S_{\rm naive}(N)$ for given fixed $k$ and $l$, because the error thus committed cancels against the terms in $S(N) - S_{\rm naive}(N)$. Here $S_{\rm naive}(N)$ is the time-discretized action we would have obtained if we had ignored the terms coming from commutators in the Hamiltonian approach. The non-trivial measure factorizes into a factor for each mode $v^k$. One can then easily compute propagators $\langle v^k v^l \rangle$ and Feynman graphs in terms of modes. One can also use the quantum “fields” $\xi(\tau)$ and find that $\langle \xi(\tau_1) \xi(\tau_2) \rangle$ is the expected world line propagator (the inverse of $\partial^2 / \partial \tau^2$ with the correct boundary conditions). However, the mode representation is to be preferred because mode cut-off is the natural regularization scheme[@fio; @vanfio]. Actually, all one-loop diagrams we evaluate are already finite by themselves since the divergences of the tadpole graphs are put to zero by mode regularization. Although we do not consider curved space here, we mention for completeness that in curved spacetime there are extra “ghosts” obtained by exponentiating a factor $(\det g_{ij})^{1/2}$ in the measure and that with these ghosts all loop calculations become finite if one uses mode regularization[@fio; @vanfio; @bas]. From our point of view, the ambiguities often encountered in the definition of path integrals are due to starting “halfway”. Starting from the beginning, which means for us starting with the Hamiltonian approach, no ambiguities result and one [*derives*]{} the action to be used in the path integral. The result is the 1-1 correspondence $$\begin{aligned} \hat{H} = \frac{1}{2} \; (\hat{p}_i - (\frac{e}{c}) \hat{A}_i) &\delta^{ij}&(\hat{p}_j - (\frac{e}{c}) \hat{A}_j) \; + \; \hat{V} \nonumber \\ &\Updownarrow& \\ \label{cor} S_{\rm config} = \int_{-T}^0 dt \big[ \frac{1}{2} \; \delta_{ij} \dot{x}^i \dot{x}^j\! 
&\!-&\!i (\frac{e}{c}) \dot{x}^i A_i \; + \; V \big], \nonumber\end{aligned}$$ and the path integral is perturbatively evaluated by computing Feynman graphs with given propagators and vertices. In section 2 we discuss the Hamiltonian approach for a point particle coupled to electromagnetism. Although we only need the propagator to order $\Delta t$ in order to construct the corresponding path integral, we evaluate it to order $(\Delta t)^2$ in order to compare later with a similar result obtained from the path integral. A useful check is that it factorizes into a classical part and a Van Vleck determinant. In section 3, the path integral is cast into a form where the only $N$-dependence resides in the interaction terms $S_{\rm int}$. In section 4, we discuss the limit $N \rightarrow \infty$ in $S_{\rm int}$. We organize the discussion by giving six examples which cover all possible cases one encounters in a perturbative evaluation of the path integral. In section 5 we evaluate, as a check, the path integral to order $(\Delta t)^2$ at the one-loop level. Here we discuss how to evaluate the continuum path integrals in general in perturbation theory. The result agrees with the one obtained in section 2 from the Hamiltonian approach. In section 6 we note that our work straightforwardly extends to field theories with derivative coupling like Yang-Mills theory. We discuss how our work might be extended to curved spacetime, and also to phase space path integrals. Hamiltonian operator approach. ============================== We wish to evaluate the propagator in Euclidean space $$<x|\exp(- \Delta t \hat{H} / \hbar)|y>,$$ where $$\hat{H}=\frac{1}{2} \left( \hat{p}_{i}- ( \frac{e}{c} ) A_{i}(\hat{x}) \right) \left( \hat{p}^{i}- ( \frac{e}{c} )A^{i}(\hat{x}) \right), \; i=1, \ldots ,n. \label{ham}$$ We do not add a term $V(x)$ since the analysis for this term is the same as for the term $(A_i(x))^2$. 
Indices are raised and lowered by $\delta_{ij}$, so for notational simplicity we write all indices down. We shall only use the commutation relation $[ \hat{p}_i ,\hat{x}_j ] = \frac{\hbar}{i} \delta_{ij}$ and $\hat{p}_i |p> = p_i |p>$ on momentum eigenstates $|p>$. We rewrite the Hamiltonian as $$\hat{H}= \hat{\alpha} - ( \frac{e}{c} ) \hat{\beta} + ( \frac{e}{c} )^2 \hat{\gamma}, \label{ham1}$$ where $$\hat{\alpha} = \frac{1}{2}\hat{p}^{2},~\; \hat{\beta}=\hat{A} \cdot \hat{p}, ~ \; \hat{ \gamma } = \frac{1}{2} \left( i ( \frac{ \hbar c}{ e } ) \partial \cdot \hat{A} + \hat{A}^{2} \right).$$ Following Feynman we insert a complete set of $|p>$ states $$<x|\exp(- \Delta t \hat{H} / \hbar)|y>= \int d^{n}p<x| \exp(- \Delta t \hat{H}/\hbar)|p><p|y>.$$ We expand the exponential and define $$<x|(\hat{H})^{k}|p>=\sum_{l=0}^{2k}B_{l}^{k}(x)p^{l}<x|p>, \label{expa}$$ where $B_{l}^{k}(x)p^{l}$ is a polynomial of degree $l$ in the $p$'s, and $$<x|p>= (2\pi\hbar)^{-n/2} \exp(\frac{i}{\hbar}x \cdot p).$$ After rescaling the momenta as $p=\sqrt{ \hbar / \Delta t } \, q $ we have $$\begin{aligned} <x|\exp(- \Delta t \hat{H} / \hbar)|y>= (4\pi^{2}\hbar\Delta t)^{-n/2}\int d^{n}q \exp( i \frac{q \cdot (x-y)}{ \sqrt{\hbar \Delta t} }) \nonumber \\ \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!} \sum_{l=0}^{2k}B_{l}^{k}(x)q^{l}(\frac{\Delta t}{\hbar})^{k-l/2}.\end{aligned}$$ The leading term comes from summing all the terms with $ l= 2k $ and has the simple form $ \exp ( - \frac{1}{2} q^2 )$. For this reason we introduced the $q$ variable. It follows that only a finite number of $B$'s need to be calculated in order to obtain the result up to the desired order in $\Delta t$. In particular, the result up to and including $(\Delta t)^{2}$ needs the first five $B$'s ($l=2k$ through $l=2k-4$). A detailed discussion of the combinatorics is given in [@bas]. Here we merely give our result. 
$$B_{2k}^{k}(x) q^{2k} = \alpha^{k},$$ $$B_{2k-1}^{k}(x) q^{2k-1} = - ( \frac{e}{c} ) k \alpha^{k-1} \beta,$$ $$\begin{aligned} B_{2k-2}^{k}(x) q^{2k-2} & = & {(\frac{e}{c})}^{2} k \alpha^{k-1} \gamma + \nonumber \\ &\ & {(\frac{e}{c})}^{2} \left( \begin{array}{c} k \\ 2 \end{array} \right) \alpha^{k-2} \Biggl[ {\left( \frac{i \hbar c}{e} \right)}q_{i}(\partial_{i} \beta)+\beta^{2} \Biggr], \label{exam}\end{aligned}$$ $$\begin{aligned} B_{2k-3}^{k}(x) q^{2k-3} = &-& {(\frac{e}{c})}^{3} \left( \begin{array}{c} k \\ 2 \end{array} \right) \alpha^{k-2} \Biggl[ \frac{1}{2} {\left( \frac{i \hbar c}{e} \right)}^2 \partial^{2} \beta + {\left( \frac{i \hbar c}{e} \right)}q_{i} \partial_{i} \gamma \nonumber \\ & \ & \hspace{0.8cm} + 2 \beta \gamma + {\left( \frac{i \hbar c}{e} \right)}A_{i}( \partial_{i} A_{j})q_{j} \Biggr] \nonumber \\ & - & {(\frac{e}{c})}^{3} \left( \begin{array}{c} k \\ 3 \end{array} \right) \alpha^{k-3} \Biggl[ {\left( \frac{i \hbar c}{e} \right)}^2 q_{i} q_{j} \partial_{i} \partial_{j} \beta \nonumber \\ &\ & \hspace{1cm} + 3 {\left( \frac{i \hbar c}{e} \right)}\beta q_{i} \partial_{i} \beta + \beta^{3} \Biggr],\end{aligned}$$ $$\begin{aligned} B_{2k-4}^{k}(x) q^{2k-4} & = & {(\frac{e}{c})}^{4} \left( \begin{array}{c} k \\ 2 \end{array} \right) \alpha^{k-2} \Biggl[ \frac{1}{2} {\left( \frac{i \hbar c}{e} \right)}^2 \partial^{2} \gamma + {\left( \frac{i \hbar c}{e} \right)}A_{i} \partial_{i} \gamma + \gamma^{2} \Biggr] \nonumber \\ & + & {(\frac{e}{c})}^{4} \left( \begin{array}{c} k \\ 3 \end{array} \right) \alpha^{k-3} \Biggl[ {\left( \frac{i \hbar c}{e} \right)}^3 q_{i} ( \partial_{i} \partial^{2} \beta ) \nonumber \\ &\ & \hspace{0.8cm} + {\left( \frac{i \hbar c}{e} \right)}^2 [ q_{i} q_{j} ( \partial_{i} \partial_{j} \gamma ) + (\partial_{i} \beta) (\partial_{i} \beta) \nonumber \\ &\ & \hspace{0.8cm} + \frac{3}{2} \beta \partial^{2} \beta + q_{i} ( \partial_{i} A_{j} ) (\partial_{j} \beta) + 2 q_{i} A_{j} ( \partial_{i} \partial_{j} \beta ) 
]\nonumber \\ &\ & \hspace{0.8cm} + {\left( \frac{i \hbar c}{e} \right)}[ 3 \beta q_{i} ( \partial_{i} \gamma ) + 3 \gamma q_{i} ( \partial_{i} \beta )\nonumber \\ &\ & \hspace{0.8cm} + 3 A_{i} ( \partial_{i} \beta ) \beta ] + 3 \beta^{2} \gamma \Biggr] \nonumber \\ & + & {(\frac{e}{c})}^{4} \left( \begin{array}{c} k \\ 4\end{array} \right) \alpha^{k-4} \Biggl[ {\left( \frac{i \hbar c}{e} \right)}^3 q_{i} q_{j} q_{k} ( \partial_{i} \partial_{j} \partial_{k} \beta ) \nonumber \\ &\ & \hspace{0.8cm} + {\left( \frac{i \hbar c}{e} \right)}^2 [ 3 ( \partial_{i} \beta )( \partial_{j} \beta ) q_{i} q_{j} + 4 ( \partial_{i} \partial_{j} \beta ) \beta q_{i} q_{j} ] \nonumber \\ &\ & \hspace{0.8cm} + 6 {\left( \frac{i \hbar c}{e} \right)}\beta^{2}( \partial_{i} \beta ) q_{i} + \beta^{4} \Biggr].\end{aligned}$$ The combinatorial factors $$\left(\begin{array}{c} k \\ s \end{array} \right)$$ indicate that only s out of k factors of $\hat{H}$ are involved in yielding commutators. For example, the last term in (\[exam\]) is due to picking two factors $\beta$ and $k-2$ factors of $\alpha$ out of $k$ factors $\hat{H}$. Clearly this can be done in $$\left(\begin{array}{c} k \\ 2 \end{array} \right)$$ ways and there are two powers of $q$ less than in the leading term. Similarly, the one but last term in (\[exam\]) is due to one commutator of $\hat{\alpha}$ and $\hat{\beta}$. Next we perform the summations over k which is easy and the Gaussian integrals which are straightforward but tedious. 
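The Gaussian $q$-integrals reduce, after completing the square, to moments of the weight $\exp(-\frac{1}{2}q^2)$. A short symbolic check of the basic one-dimensional moments (our sketch; the $n$-dimensional moments $\langle q_i q_j \rangle = \delta_{ij}$ and $\langle q_i q_j q_k q_l \rangle = \delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}$ then follow by Wick pairing):

```python
import sympy as sp

q = sp.symbols('q', real=True)

def moment(n):
    """<q^n> with respect to the normalized weight exp(-q^2/2)/sqrt(2*pi)."""
    return (sp.integrate(q**n * sp.exp(-q**2 / 2), (q, -sp.oo, sp.oo))
            / sp.sqrt(2 * sp.pi))

# Odd moments vanish; <q^2> = 1 and <q^4> = 3, the double-factorial values
# that enter the Delta-t expansion of the propagator.
```

These are the only inputs the "straightforward but tedious" integrations need; the bookkeeping of which $B_l^k$ multiplies which moment does the rest.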
The result reads $$\begin{aligned} &\ & <x| \exp(- \Delta t \hat{H} / \hbar)|y> = (2 \pi \hbar \Delta t )^{-n/2} \exp \big( - \frac{1}{2 \Delta t \hbar } (x-y)^2 \big) \nonumber \\ &\ & \Biggl\{ 1- {\left(\frac{-ie}{\hbar c} \right)}A_i (x-y)_i \nonumber \\ &\ & \hspace{1cm} + \frac{1}{2} \Biggl[ {\left(\frac{-ie}{\hbar c} \right)}A_{i,j} + {\left(\frac{-ie}{\hbar c} \right)}^2 A_i A_j \Biggr] (x-y)_i (x-y)_j \nonumber \\ &\ & + \frac{1}{3!} \Biggl[ {\left(\frac{-ie}{\hbar c} \right)}A_{i,jk} +3 {\left(\frac{-ie}{\hbar c} \right)}^2 A_{i,j} A_k \nonumber \\ &\ & \hspace{1cm} + {\left(\frac{-ie}{\hbar c} \right)}^3 A_i A_j A_k \Biggr] (x-y)_i (x-y)_j (x-y)_k \nonumber \\ &\ & + \frac{1}{4!} \Biggl[ {\left(\frac{-ie}{\hbar c} \right)}A_{i,jkl} +3 {\left(\frac{-ie}{\hbar c} \right)}^2 A_{i,j} A_{k,l} + 4 {\left(\frac{-ie}{\hbar c} \right)}^2 A_{i,jk} A_{l} \nonumber \\ &\ & \hspace{1cm} + 6 {\left(\frac{-ie}{\hbar c} \right)}^4 A_i A_j A_k A_l \Biggr] (x-y)_i (x-y)_j (x-y)_k (x-y)_l \nonumber \\ &\ & + \frac{1}{4!} \frac{ \Delta t }{ \hbar } {(\frac{e}{c})}^2 F_{ik} F_{kj} (x-y)_i (x-y)_j \nonumber \\ &\ & + \frac{i \Delta t }{12} {(\frac{e}{c})}\Bigl[ F_{ki,k} (x-y)_i - \frac{1}{2} F_{ki,kj} (x-y)_i (x-y)_j \Bigr] \nonumber \\ &\ & - \frac{1}{12} \frac{ \Delta t }{ \hbar } {(\frac{e}{c})}^2 A_{i} F_{kj,k} (x-y)_i (x-y)_j - \frac{ (\Delta t)^2}{48} {(\frac{e}{c})}^2 F^2 \Biggr\}. \label{res}\end{aligned}$$ Before going on, we briefly compare this result with the incorrect result which one would have obtained by the linear approximation mentioned in the introduction and widely used. 
In the latter case we find, instead of the terms in the curly brackets, the following expression, $$\begin{aligned} &\ & <x| \exp(- \Delta t \hat{H} / \hbar)|y> = (2 \pi \hbar \Delta t )^{-n/2} \exp \big( - \frac{1}{2 \Delta t \hbar } (x-y)^2 \big) \nonumber \\ &\ & \Biggl\{ 1- {\left(\frac{-ie}{\hbar c} \right)}A_i (x-y)_i + \frac{1}{2} {\left(\frac{-ie}{\hbar c} \right)}^2 A_i A_j (x-y)_i (x-y)_j \nonumber \\ &\ & \hspace{1cm} - \frac{1}{2} i \Delta t {(\frac{e}{c})}\partial \cdot A \Biggr\} \mbox{ (false).} \label{incor}\end{aligned}$$ This result is obtained by replacing $\int dp <x| \exp (- \Delta t \hat{H} / \hbar) |p> <p|y>$ by $\int dp \exp (- \Delta t\ h(x,p) / \hbar) <x|p> <p|y>$, where $h(x,p)$ is defined in (\[linapp\]). The $p$-dependence in the exponent comes from the $p^2$ term in $\alpha$, a single $p$ in $\beta$ and the inner product $p \cdot (x-y)$ from the plane waves. Integration over $p$ yields (\[incor\]) to order $\Delta t$. To order $\Delta t$ we thus find the same number of terms in both cases, but the term $\partial \cdot A$ is present only in the linear approximation, whereas in the correct approach it is cancelled by a commutator $ \big[ p^2, \beta \big] $. This commutator yields a term $ p_i p_j \partial_i A_j $ whose integration over $p$ gives a $\delta_{ij}$ term which cancels the $ \partial \cdot A $ term, and a term with $ (x-y)_i (x-y)_j $ which is the term with $A_{i,j}$ in (\[res\]). The corrections due to commutators already show up at order $\Delta t$. Clearly, the linear approximation already gives incorrect results for the propagator at order $\Delta t$. We expect the result in (\[res\]) to contain a factor of $\exp[- \frac{1}{\hbar} S_{cl} ]$, where $S_{cl}$ is the classical action evaluated along a classical trajectory. 
We claim that to order $(\Delta t)^2$ it reads $$\begin{aligned} S_{cl} & = & \frac{1}{2 \Delta t} (x-y)_i (x-y)_i - i {(\frac{e}{c})}\big\{ A_i (x-y)_i - \frac{1}{2} A_{i,j} (x-y)_i (x-y)_j \nonumber \\ & + & \frac{1}{3!} A_{i,jk} (x-y)_i (x-y)_j (x-y)_k \nonumber \\ & - & \frac{1}{4!} A_{i,jkl} (x-y)_i (x-y)_j (x-y)_k (x-y)_l \big\} \nonumber \\ & - & \frac{ \Delta t}{24} {(\frac{e}{c})}^2 F_{ik} F_{kj} (x-y)_i (x-y)_j + O( (\Delta t)^{5/2} ). \label{action}\end{aligned}$$ To obtain this result, we used that the classical Lagrangian corresponding to (\[ham\]) is given by $$L= \frac{1}{2} \dot{x}_{i} \dot{x}^{i} -i {(\frac{e}{c})}\dot{x}^{i} A_{i}, \label{lag}$$ where the factor ‘$i$’ is due to our working in Euclidean space. (The simplest way to see this is to note that one has the same Hamiltonian in both the Minkowski and the Euclidean case, but one uses $ \exp ( - i \Delta t \hat{H} / \hbar ) $ in the former and $ \exp ( - \Delta t \hat{H} / \hbar ) $ in the latter case. In both cases $<x|p>$ and $<p|y>$ are plane waves). The dynamical equations of motion are $$\ddot{x}_i = -i {(\frac{e}{c})}F_{ij} \dot{x}_j. \label{eqm}$$ We then evaluated $S_{cl}$ by expanding all fields around the endpoint $x$, $$\begin{aligned} S_{cl}=\int_{- \Delta t}^{0} L dt & = & \Delta t L(x) - \frac{1}{2} (\Delta t)^{2} \frac{ dL(x) }{ dt } \nonumber \\ & + & \frac{1}{3!} (\Delta t)^{3} \frac{ d^{2} L(x) }{ dt^{2} } - \frac{1}{4!} (\Delta t)^{4} \frac{ d^{3} L(x) }{ dt^{3} } + \cdots . \label{exp}\end{aligned}$$ We only need $x(t)$ and $\dot{x} (t)$ at $t=0$ since higher time derivatives of $x(t)$ can be obtained by using the equations of motion (\[eqm\]). To obtain $\dot{x} (t=0)$ in terms of $x$ and $y$, we expand $x(-\Delta t)$ around $t=0$, and use (\[eqm\]). This yields a series in powers of $\dot{x}(0)$, $x_i$ and $y_i$ which is inverted to yield $\dot{x}_i(0)$ in terms of $x_i$ and $y_i$.
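As a side check, the equations of motion (\[eqm\]) do follow from the Lagrangian (\[lag\]) with the convention $F_{ij}=\partial_i A_j - \partial_j A_i$. Here is a minimal symbolic sketch (not part of the derivation; it assumes Python with `sympy`, abbreviates the constant $-ie/c$ by a single symbol `g`, and uses an arbitrary illustrative polynomial vector potential):

```python
import sympy as sp

t, g = sp.symbols('t g')  # g stands for the constant -ie/c in (lag)
x = [sp.Function('x1')(t), sp.Function('x2')(t)]
# an arbitrary polynomial vector potential, purely for illustration
A = [x[0]**2 * x[1], x[0] * x[1]**3 + x[0]]

# L = 1/2 xdot_i xdot_i + g xdot_i A_i  (the Euclidean Lagrangian (lag))
L = sp.Rational(1, 2) * sum(xi.diff(t)**2 for xi in x) \
    + g * sum(xi.diff(t) * Ai for xi, Ai in zip(x, A))

# Euler-Lagrange expressions d/dt(dL/dxdot_i) - dL/dx_i
el = [sp.diff(L, xi.diff(t)).diff(t) - sp.diff(L, xi) for xi in x]

# field strength, convention F_ij = partial_i A_j - partial_j A_i
F = [[sp.diff(A[j], x[i]) - sp.diff(A[i], x[j]) for j in range(2)]
     for i in range(2)]

# the equations of motion (eqm): xddot_i = g F_ij xdot_j
for i in range(2):
    rhs = g * sum(F[i][j] * x[j].diff(t) for j in range(2))
    assert sp.simplify(el[i] - (x[i].diff(t, 2) - rhs)) == 0
```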
For our purposes it is sufficient to determine $\dot{x}_i(0)$ to order $(\Delta t)^{3/2}$. We find $$\begin{aligned} \dot{x}_{i} ( t=0 ) & = & \frac{1}{ \Delta t } (x-y)_{i} -i {(\frac{e}{c})}\Bigl\{ \frac{1}{2} F_{ij} (x-y)_j - \frac{1}{6} F_{ij,l} (x-y)_j (x-y)_l \nonumber \\ & + & \frac{1}{24} F_{ij,kl} (x-y)_j (x-y)_k (x-y)_l \nonumber \\ & - & \frac{1}{12} i \Delta t {(\frac{e}{c})}F_{ij} F_{jk} (x-y)_k + \cdots \Bigr\}. \label{xexp}\end{aligned}$$ This result combined with (\[lag\]) and (\[exp\]) leads to (\[action\]). Factoring out $ \exp[- \frac{1}{\hbar} S_{cl} ]$ from (\[res\]) we are left with $$\begin{aligned} &\ &<x|\exp(- \Delta t \hat{H} / \hbar)|y> = (2 \pi \hbar \Delta t )^{-n/2} \exp[- \frac{1}{\hbar} S_{cl} ] \nonumber \\ &\ & \exp \Bigl( \frac{i \Delta t }{12} {(\frac{e}{c})}[ F_{ki,k} (x-y)_i - \frac{1}{2} F_{ki,kj} (x-y)_i (x-y)_j ] \nonumber \\ &\ & \hspace{3cm} - \frac{ (\Delta t)^2}{48} {(\frac{e}{c})}^2 F^2 \Bigr).\end{aligned}$$ We also expect a factor of $( \det D_{ij} )^{1/2}$ to be present in the propagator, where $D_{ij}$ is the Van Vleck matrix $$D_{ij}(x,y;\Delta t)=- \frac{ \partial }{ \partial x_i} \frac{ \partial }{ \partial y_j} S_{cl}(x,y; \Delta t ).$$ In fact, the remaining terms are just the Van Vleck determinant, and no other terms are present to order $ (\Delta t)^2 $. Note that the $F^2$ term, which yields the trace anomaly in two dimensions [@vanfio], is part of the Van Vleck determinant, whereas a corresponding term $\hbar R$ in curved spacetime is not contained in the corresponding Van Vleck determinant. This is not surprising since the $F^2$ is a one-loop effect whereas the $\hbar R$ is a two-loop effect. Our final result, thus, reads $$\begin{aligned} &\ &<x| \exp(- \Delta t \hat{H} / \hbar)|y>= \nonumber \\ &\ & \hspace{1cm} (2 \pi \hbar )^{-n/2} (\det D_{ij})^{1/2} \exp[- \frac{1}{\hbar} S_{cl} ] [1+ O( (\Delta t)^{5/2} )].\end{aligned}$$ Derivation of the path integral.
================================ The time-discretized path integral with $N-1$ intermediate steps is given by $$\begin{aligned} &\ & <x| \exp (-T\hat{H} / \hbar) |y> = \lim_{N \rightarrow \infty} \Big( \frac{N}{2 \pi \hbar T} \Big)^{n/2} \int \prod_{i=1}^n \prod_{\alpha=1}^{N-1} \Big[ dx_{\alpha i} \big( \frac{N}{2 \pi \hbar T} \big)^{1/2} \Big] \nonumber \\ &\ & \exp \Big\{ - \frac{1}{2 \epsilon \hbar} \sum_{\alpha=1}^N ( x_{\alpha-1} - x_\alpha )^2 + \frac{ie}{\hbar c} \sum_{\alpha=1}^N A_{i}(x_{\alpha-1}) ( x_{\alpha-1} - x_\alpha )_i \nonumber \\ &\ & \hspace{1.5cm} - \frac{ie}{2 \hbar c} \sum_{\alpha=1}^N A_{i,j} (x_{\alpha-1})( x_{\alpha-1} - x_\alpha )_i ( x_{\alpha-1} - x_\alpha )_j \Big\},\end{aligned}$$ where $\alpha$ is the discretization index and letters from the middle of the latin alphabet like i, j, k etc. are spacetime indices. $\epsilon \equiv T/N$ and $x_0 \equiv x$, $x_N \equiv y$. To obtain this result we inserted $N-1$ complete sets of states $|x_\alpha><x_\alpha|$ and used the result (\[res\]) for the matrix element $<x_{\alpha-1}| \exp (-\epsilon \hat{H} / \hbar) |x_\alpha>$ obtained from the Hamiltonian approach. We kept only the terms up to order $\epsilon$ (the first three lines in (\[res\])), because only these terms will contribute in the limit $N \rightarrow \infty$. We decompose $x_{\alpha i}$ as $$x_{\alpha i} = z_{\alpha i} + \xi_{\alpha i}. \label{decomp}$$ The $z_{\alpha i}$ yield the classical trajectory of a free particle and satisfy the equation $$z_{(\alpha+1) i} - 2 z_{\alpha i} + z_{(\alpha-1) i} = 0, \label{dieq}$$ with boundary conditions $$z_{0 i} = x_i, z_{N i} = y_i. \label{bound}$$ In the limit $N \rightarrow \infty$ (\[dieq\]) becomes the field equation of the action for a free particle. (\[dieq\]) with the boundary conditions (\[bound\]) can be solved to yield $$z_{\alpha i}= x_i + \frac{\alpha}{N} ( y - x )_i. \label{ddec}$$ The $\xi$’s are the quantum fluctuations with boundary conditions $\xi_{0 i}=\xi_{N i}=0$. 
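As a quick numerical sanity check (a sketch assuming Python with `numpy`; not part of the derivation), the trajectory (\[ddec\]) indeed satisfies the difference equation (\[dieq\]) and the boundary conditions (\[bound\]):

```python
import numpy as np

N = 16
x = np.array([1.0, -2.0, 0.3])   # arbitrary endpoint x_i
y = np.array([0.5, 3.0, -1.0])   # arbitrary endpoint y_i
alpha = np.arange(N + 1)[:, None]

z = x + (alpha / N) * (y - x)    # z_{alpha i} = x_i + (alpha/N)(y - x)_i

assert np.allclose(z[0], x) and np.allclose(z[N], y)      # (bound)
assert np.allclose(z[2:] - 2 * z[1:-1] + z[:-2], 0.0)     # (dieq)
```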
We go over to the mode variables using the transformation $$\xi_{\alpha i} = \sum_{k=1}^{N-1} y_{i}^k \sin \alpha k \pi / N. \label{tr}$$ The path integral becomes $$\begin{aligned} &\ & \exp (-\frac{(x-y)^2}{2 \hbar T} ) \lim_{N \rightarrow \infty} \Big( \frac{N}{2 \pi \hbar T} \Big)^{n/2} \int \prod_{i=1}^n \prod_{k=1}^{N-1} \Big[ d y^k_i \big( \frac{N^2}{4 \pi \hbar T} \big)^{1/2} \Big] \nonumber \\ &\ &\exp \big(-\frac{N^2}{2 \hbar T} \sum_{k=1}^{N-1} (y_{i}^k)^2 ( 1 - \cos k \pi / N ) \big)\nonumber \\ &\ & \exp \frac{ie}{\hbar c} \sum_{\alpha=1}^{N} \Big\{ \big[ A_{i}(z_{\alpha-1}) + A_{i,j}(z_{\alpha-1}) \xi_{(\alpha-1) j} \nonumber \\ &\ & \hspace{1.5cm} + \frac{1}{2} A_{i,jk}(z_{\alpha-1}) \xi_{(\alpha-1) j} \xi_{(\alpha-1) k} + \cdots \big] \big[ \frac{1}{N} (x-y)_i + ( \xi_{(\alpha-1) i} - \xi_{\alpha i} ) \big] \nonumber \\ &\ & -\frac{1}{2} \big[ A_{i,j}(z_{\alpha-1}) + A_{i,jk}(z_{\alpha-1}) \xi_{(\alpha-1) k} \nonumber \\ &\ &\hspace{1.5cm} + \frac{1}{2} A_{i,jkl}(z_{\alpha-1}) \xi_{(\alpha-1) k} \xi_{(\alpha-1) l} + \cdots \big] \times \nonumber \\ &\ & \times \big[ \frac{1}{N} (x-y)_i + ( \xi_{(\alpha-1) i} - \xi_{\alpha i}) \big] \big[ \frac{1}{N} (x-y)_j + ( \xi_{(\alpha-1) j} - \xi_{\alpha j}) \big] \Big\}, \label{inte}\end{aligned}$$ where the $\xi$’s are functions of the modes $y_i^k$ as in (\[tr\]). A summation over $i=1, \ldots, n$ is understood in all terms in the exponent. We have used that the matrix $M_{\alpha k}=\sqrt{2/N} \sin \alpha k \pi / N$ is an orthogonal matrix. This produces the extra factor of $N/2$ in the measure for $y_i^k$. We rescale the modes $$\begin{aligned} v^k_i &=& \Big( \frac{ 2 N^2 ( 1- \cos \frac{ k \pi}{ N } )}{k^2 \pi^2 } \Big)^{1/2} y^k_i \nonumber \\ &\equiv& \lambda(k)^{-1} y^k_{i}.
\label{resc}\end{aligned}$$ The kinetic term becomes $$\exp \big(-\sum_{k=1}^{N-1} \frac{(k \pi)^2}{4 \hbar T} (v^k_i)^2 \big),$$ while the measure becomes $$\Big(\frac{N}{ 2 \pi \hbar T } \Big)^{n/2} \prod_{i=1}^{n} \prod_{k=1}^{N-1} ( \frac{N^2}{ 4 \pi \hbar T} )^{1/2} \Big( \frac{k^2 \pi^2}{ 2 N^2 (1-\cos\frac{k \pi}{N})} \Big)^{1/2} dv^k_i.$$ This expression can be simplified by using the product formula $$\prod_{k=1}^{N-1} 2 ( 1- \cos k \pi / N ) = N, \label{for}$$ which is a special case ($x\rightarrow1$) of the formula $$\left[ \prod_{k=1}^{N-1}( x - \cos k \pi / N ) \right]^2 = \frac{2^{1-2N}}{x^2-1} { \rm Re} \big[ -1 + ( x+ i \sqrt{ 1- x^2} )^{2N} \big]. \label{product}$$ To derive this formula, one uses that $x^2-1$ times the left hand side is proportional to $ \prod_{k=1}^{2N-1} (x-\cos k \pi / N )$. The measure now becomes $$( 2 \pi T \hbar )^{ -n/2 } \prod_{i=1}^n \prod_{k=1}^{N-1} \big( \frac{ \pi k^2 }{ 4 T \hbar } \big)^{1/2} dv^{k}_i.$$ Thus, the $N$ dependence of the kinetic term and the measure have disappeared after the rescaling (the $N$ appears only in the upper limit of the sum) and the $N \rightarrow \infty$ limit can be easily taken. One finds $$( 2 \pi T \hbar )^{ -n/2 } \prod_{i=1}^n \prod_{k=1}^{\infty} \big( \frac{ \pi k^2 }{ 4 T \hbar } \big)^{1/2} dv^{k}_i, \label{measure}$$ for the measure and $$\exp \big(-\frac{1}{2 \hbar T} \int_{-1}^{0} d \tau \dot{\xi}^2 \; \big)= \exp \big(-\sum_{k=1}^{\infty} \frac{(k \pi)^2}{4 \hbar T} (v^k_i)^2 \big), \label{kinetic}$$ for the kinetic term, where $\xi_i$ is the continuum limit of (\[tr\]) $$\xi_i(\tau)= \sum_{k=1}^{\infty} v^k_i \sin k \pi \tau. \label{ctr}$$ The propagator for the modes obtained from the kinetic term reads $$\langle v_{i}^{m} v_{j}^{n} \rangle = \frac{ 2 T \hbar }{ \pi^2 n^2} \delta_{ij} \delta^{mn}. 
\label{propag}$$ At this point we have obtained the measure of the continuum theory, given in [@fio; @vanfio], and the kinetic term and the propagators for the modes are also those of the continuum theory. It remains to take the limit in the interaction terms.

The limit $N \rightarrow \infty$ in the interaction terms.
==========================================================

The interaction terms in (\[inte\]) can be recast as follows $$\begin{aligned} &\ & \exp \frac{ie}{\hbar c} \sum_{\alpha=1}^{N} \Bigg\{ \Big[ A_{i}(z_{\alpha-1}) + A_{i,j}(z_{\alpha-1}) \xi_{(\alpha-1) j} + \cdots \Big] \Big[ \frac{1}{N} (x-y)_i \Big] \nonumber \\ &\ & + A_i(z_{\alpha-1}) (\xi_{(\alpha-1) i} - \xi_{\alpha i}) \nonumber \\ &\ & + A_{i,j}(z_{\alpha-1}) \Big[(\xi_{(\alpha-1) i} - \xi_{\alpha i}) \xi_{(\alpha -1) j} - \frac{1}{2} (\xi_{(\alpha-1) i} - \xi_{\alpha i}) (\xi_{(\alpha-1) j} - \xi_{\alpha j}) \Big] \nonumber \\ &\ & + \cdots + \frac{1}{q!} A_{i,j_1 \cdots j_q}(z_{\alpha-1}) \Big[(\xi_{(\alpha-1) i} - \xi_{\alpha i}) \xi_{(\alpha -1) j_1} - \nonumber \\ &\ & - \frac{q}{2} (\xi_{(\alpha-1) i} - \xi_{\alpha i}) (\xi_{(\alpha-1) j_1} - \xi_{\alpha j_1}) \Big] \xi_{(\alpha -1) j_2} \cdots \xi_{(\alpha -1) j_q} + \cdots \Bigg\}, \label{inte1}\end{aligned}$$ where according to (\[tr\]) and (\[resc\]) we now have $$\xi_{\alpha i} = \sum_{k=1}^{N-1} v_i^k \lambda(k) \sin \alpha k \pi/N. \label{modes}$$ The first line in (\[inte1\]) comes solely from the $A_i (x-y)_i$ term in (\[inte\]), whereas the rest is a combination of both the $A_i (x-y)_i$ and the $A_{i,j} (x-y)_i (x-y)_j$ terms. Actually only one part of the latter contributes, namely the one which is proportional to $(\xi_{(\alpha-1) i} - \xi_{\alpha i}) (\xi_{(\alpha-1) j} - \xi_{\alpha j})$. The rest tends to zero when $N \rightarrow \infty$, as will be clear at the end of this section. These terms are not shown in (\[inte1\]).
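Two facts used above — the orthogonality of the matrix $M_{\alpha k}=\sqrt{2/N} \sin \alpha k \pi / N$ and the product formula (\[for\]) — are easy to confirm numerically (a sketch assuming Python with `numpy`; not part of the derivation):

```python
import numpy as np

N = 64
alpha = np.arange(1, N)[:, None]
k = np.arange(1, N)[None, :]

# M_{alpha k} = sqrt(2/N) sin(alpha k pi / N) is orthogonal
M = np.sqrt(2.0 / N) * np.sin(alpha * k * np.pi / N)
assert np.allclose(M.T @ M, np.eye(N - 1))

# product formula (for): prod_{k=1}^{N-1} 2 (1 - cos k pi / N) = N
prod = np.prod(2.0 * (1.0 - np.cos(np.arange(1, N) * np.pi / N)))
assert np.isclose(prod, N)
```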
We now proceed to show that the first line in (\[inte1\]) limits to $$\exp \frac{ie}{\hbar c} \int_{-1}^0 d\tau A_i(x(\tau))(x-y)_i, \label{zterm}$$ whereas the rest of (\[inte1\]) limits to $$\exp \frac{ie}{\hbar c} \int_{-1}^0 d\tau A_i(x(\tau)) \dot{\xi}_i. \label{xiterm}$$ The $x(\tau)$ is the continuum limit of (\[decomp\]) $$x_i(\tau) = z_i(\tau) + \xi_i(\tau),$$ where $z_i(\tau)$ is the continuum limit of (\[ddec\]) $$z_i(\tau) = x_i - \tau (y-x)_i, \label{cdec}$$ and $\xi_i(\tau)$ is given in (\[ctr\]). There are eight different kinds of terms which we encounter when trying to take the limit $N \rightarrow \infty$ in (\[inte1\]). We can have terms with or without $\dot{\xi}$. Each of them can contain an even or odd number of quantum fields. (In the latter case only interference terms can be studied, since the expectation value of an odd number of quantum fields is trivially zero). Finally, in each case we can also have an additional factor of $(\alpha /N)^p$ coming from the expansion of $A_i(z_{\alpha})$ around $x_i$ (see (\[ddec\])). We will illustrate with six examples how the limit $N \rightarrow \infty$ can be rigorously taken in all cases. The basic idea is that expanding (\[inte1\]) leads to [*uniformly*]{} convergent series for $N$ in the whole interval $1 \leq N < \infty$, and therefore the limit $N \rightarrow \infty$ can be taken before the summation over the modes is performed.

Examples with only $\xi$’s.
---------------------------

The case of only $\xi$’s is easier than the case where $\dot{\xi}$’s are involved. This case covers all the terms in the first line in (\[inte1\]) and also all the extra terms we would have if we had started with an additional scalar potential $V(x)$. We give two examples where two $\xi$’s are involved. In the first one the two $\xi$’s come from the same $S_{\rm int}$, whereas in the second case we deal with an interference term.
We use the latter case to illustrate how one deals with factors like $(\alpha /N)^p$.

### Example 1.

We shall show that $$\lim_{N \rightarrow \infty} \langle \frac{1}{N} \sum_{\alpha=1}^{N-1} \xi_{\alpha i} \xi_{\alpha j} \rangle =\langle \int_{-1}^0 d\tau \xi_i(\tau) \xi_j(\tau) \rangle, \label{1ex}$$ where ‘$\langle \; \rangle$’ means path integral average. We start with the left hand side $$\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{k,l=1}^{N-1} \langle v_i^k v_j^l \rangle \lambda(k) \lambda(l) \sum_{\alpha=1}^{N-1} \sin \alpha k \pi/N \sin \alpha l \pi/N.$$ The propagator is given in (\[propag\]) and the $\lambda(k)$ in (\[resc\]). Combining the product of the two sines into a sum of two cosine functions, the summation over $\alpha$ yields $N/2\; \delta_{kl}$. Hence, the left-hand side of (\[1ex\]) yields $$\frac{1}{2} (2 \hbar T) \delta_{ij} \lim_{N \rightarrow \infty} \sum_{k=1}^{N-1} \frac{1}{4N^2 \sin^2 k \pi/2 N}.$$ We remove the $N$-dependence in the summation symbol by extending the sum to infinity, rewriting the sum as $$\sum_{k=1}^{\infty} f_k(N), \label{ser}$$ where $$\begin{aligned} f_k(N)= \left\{ \begin{array}{ll} 0 & \mbox{if $k>N-1$} \nonumber \\ 1/(4N^2 \sin^2 k \pi/2 N) & \mbox{if $k \leq N-1$}. \end{array} \right.\end{aligned}$$ We view $f_k(N)$ as a function of $N$. Since $k \leq N-1$, clearly $k \pi/2N < \pi/2$. Using the inequality $2 \theta/ \pi \leq \sin \theta \leq \theta$ for $0 \leq \theta \leq \pi/2$ we get an upper bound for the summands $$|f_k(N)| \leq \frac{1}{4N^2 (k^2/N^2)} = \frac{1}{4 k^2}. \label{upper1}$$ Since the series $\sum_{k=1}^{\infty} (2k)^{-2}$ is convergent, we conclude that (\[ser\]) is uniformly convergent in $N$ for the whole interval $1 \leq N < \infty$. Thus, we can interchange the limit of $N$ tending to infinity with the summation over $k$.
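This interchange can also be checked numerically: the finite sums approach $\sum_{k=1}^{\infty} 1/(k \pi)^2 = 1/6$, and each summand obeys the bound (\[upper1\]). A minimal sketch (assuming Python with `numpy`; not part of the argument):

```python
import numpy as np

def finite_sum(N):
    k = np.arange(1, N)
    terms = 1.0 / (4.0 * N**2 * np.sin(k * np.pi / (2 * N))**2)
    assert np.all(terms <= 1.0 / (4.0 * k**2) + 1e-12)  # the bound (upper1)
    return terms.sum()

# the limit is sum_{k>=1} 1/(k pi)^2 = zeta(2)/pi^2 = 1/6
for N in (10, 100, 1000):
    assert abs(finite_sum(N) - 1.0 / 6.0) < 1.0 / N**2
```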
Using (\[propag\]), we obtain $$\begin{aligned} \frac{1}{2} (2 \hbar T) \delta_{ij} \sum_{k=1}^{\infty} \frac{1}{k^2 \pi^2} &=& \sum_{k,l=1}^{\infty} \langle v_i^k v_j^l \rangle (\frac{1}{2} \delta^{kl}) \nonumber \\ &=& \langle \int_{-1}^0 d\tau \xi_i(\tau) \xi_j(\tau) \rangle.\end{aligned}$$ This proves (\[1ex\]).

### Example 2.

In our second example we will prove that $$\lim_{N \rightarrow \infty} \frac{1}{N^2} \langle \sum_{\alpha,\beta=1}^{N} \big(\frac{\alpha-1}{N}\big) \big(\frac{\beta-1}{N}\big) \xi_{(\alpha-1) i} \xi_{(\beta-1) j} \rangle = \langle \int_{-1}^0 d\tau d\tau' \tau \tau' \xi_i(\tau) \xi_j(\tau') \rangle. \label{exa1}$$ This term is encountered when we expand the term with $A_{i,j}(z_{\alpha-1}) \xi_{(\alpha-1) j}$ around $x_i$ in the first line in (\[inte1\]), and then use two $S_{\rm int}$. We start again with the left hand side $$\lim_{N \rightarrow \infty} \sum_{k,l=1}^{N-1} \langle v_i^k v_j^l \rangle \lambda(k) \lambda(l) \frac{1}{N^2} \sum_{\alpha,\beta=1}^{N-1} \big(\frac{\alpha}{N}\big) \big(\frac{\beta}{N}\big) \sin \alpha k \pi /N \sin \beta l \pi /N. \label{ex2}$$ The summation over $\alpha$ and $\beta$ can be easily performed by observing that all the cases with $(\alpha/N)^p$ factors can be obtained from the ones with no such factors by temporarily introducing an extra parameter $r$ in the argument of one of the sines and then differentiating an appropriate number of times. So, in our case we write $$\frac{1}{N} \sum_{\alpha=1}^{N-1} \big(\frac{\alpha}{N}\big) \sin \alpha k \pi /N= -\frac{1}{k \pi N} \frac{d}{dr} \sum_{\alpha=1}^{N-1} \cos r \alpha k \pi /N \Big|_{r=1}.$$ The summation over $\alpha$ can be easily performed by writing the cosine as the real part of an exponential. The result is $$\frac{1}{N} \sum_{\alpha=1}^{N-1} \big(\frac{\alpha}{N}\big) \sin \alpha k \pi /N= - \frac{(-1)^k}{2N \tan k \pi /2N}.
\label{sumex2}$$ Using (\[sumex2\]), (\[propag\]) and (\[resc\]), (\[ex2\]) becomes $$\lim_{N \rightarrow \infty} \delta_{ij} (2 \hbar T) \sum_{k=1}^{N-1} \frac{1}{4N^2 \sin^2 k \pi/2N} \frac{ \cos^2 k \pi/2N }{4N^2 \sin^2 k \pi/2N}.$$ Using the same arguments as in the first example we conclude that the series over $k$ is uniformly convergent. Therefore the limit $N \rightarrow \infty$ can be taken keeping $k$ fixed. The result is $$\sum_{k,l=1}^{\infty} \big(\frac{2 \hbar T}{k^2 \pi^2} \delta_{ij} \delta^{kl}\big) \big[\frac{-(-1)^k}{k \pi}\big] \big[\frac{-(-1)^l}{l \pi}\big] = \langle \int_{-1}^0 d\tau d\tau' \tau \tau' \xi_i(\tau) \xi_j(\tau') \rangle,$$ which proves (\[exa1\]). The generalization of these two examples to many $\xi$’s is straightforward. In every case we first reduce the summation of products of sines to summations of a single sine or cosine by using the trigonometric formulas $$\begin{aligned} \sin a \sin b & = & \frac{1}{2} [ \cos (a-b) - \cos (a+b) ], \label{tr1} \\ \sin a \cos b & = & \frac{1}{2} [ \sin (a+b) + \sin (a-b) ]. \label{tr2}\end{aligned}$$ Then we use the results of our previous examples.

Examples with $\xi$’s and $\dot{\xi}$’s.
----------------------------------------

The case where a $\dot{\xi}$ is involved is more complicated. One naively expects that in the limit $N \rightarrow \infty$ the sum $\sum_{\alpha=0}^{N-1} (\xi_{\alpha j_1} - \xi_{(\alpha+1) j_1}) \xi_{\alpha j_2} \cdots \xi_{\alpha j_q}$ becomes $ (-1)^{q+1} \int d\tau \dot{\xi}_{j_1}(\tau) \xi_{j_2}(\tau) \cdots \xi_{j_q}(\tau)$. (The factor $(-1)^{q+1}$ is due to the fact that $\xi_{(\alpha+1) j}$ corresponds to a $\tau$-value which is smaller than that of $\xi_{\alpha j}$). Actually this would have been true if we were allowed to take the limit $N \rightarrow \infty$ inside the summation (for $k$ fixed).
To see this we insert the mode expansion for $\xi$ into the sum $$\begin{aligned} &\ & \sum_{\alpha=0}^{N-1} (\xi_{\alpha j_1} - \xi_{(\alpha+1) j_1}) \xi_{\alpha j_2} \cdots \xi_{\alpha j_q} = \sum_{k_1, \cdots, k_q=1}^{N-1} v_{j_1}^{k_1} \cdots v_{j_q}^{k_q} \lambda(k_1) \cdots \lambda(k_q) \nonumber \\ &\ & \hspace {0.5cm}\Big[ (1 - \cos k_1 \pi/N) \sum_{\alpha=0}^{N-1} \sin \alpha k_1 \pi/N \cdots \sin \alpha k_q \pi/N \nonumber \\ &\ & \hspace{0.5cm} - \sin k_1 \pi/N \sum_{\alpha=0}^{N-1} \cos \alpha k_1 \pi/N \sin \alpha k_2 \pi/N \cdots \sin \alpha k_q \pi/N \Big]. \label{naive}\end{aligned}$$ The sums over $\alpha$ are of order $N$. For fixed $k_1$ the factor $(1 - \cos k_1 \pi/N)$ is of order $1/N^2$, whereas the $\sin k_1 \pi/N$ goes as $1/N$. Hence, the first term inside the square brackets in (\[naive\]) naively tends to zero for $N$ going to infinity and the second one gives the correct continuum limit. However, a more careful analysis shows that this naive limit is not correct. Consider, for example, the expectation value for the case $q=2$. In the term which is naively zero the sum over $\alpha$ of $\sin \alpha k_1 \pi/N \sin \alpha k_2 \pi/N$ gives $(N/2) \delta^{k_1 k_2}$, while the propagator combines with the $\lambda$’s and cancels the factor $(1 - \cos k_1 \pi/N)$. The final result is that this term has a limit $(1/4)(2 T \hbar) \delta_{j_1 j_2}$. The same is true for any $q$, namely both terms have a non-vanishing finite limit. Similar results hold for the terms which were produced by commutators in the Hamiltonian approach (the last three lines in (\[inte\])). Naively all these terms tend to zero for $N$ going to infinity, but careful analysis reveals a finite result. In fact the terms coming from commutators just cancel the contribution from the first term in the square brackets in (\[naive\]), so that at the end the naive limit gives the correct result!

### Example 3.
Consider the terms in the third line in (\[inte1\]) $$\begin{aligned} &\ & \sum_{\alpha=0}^{N-1} \big[(\xi_{\alpha i} - \xi_{(\alpha+1) i}) \xi_{\alpha j} - \frac{1}{2}(\xi_{\alpha i} - \xi_{(\alpha+1) i}) (\xi_{\alpha j} - \xi_{(\alpha+1) j})\big] = \nonumber \\ &\ & \sum_{\alpha=0}^{N-1}(\xi_{\alpha i} - \xi_{(\alpha+1) i}) \frac{(\xi_{\alpha j} + \xi_{(\alpha+1) j})}{2}. \label{deriv}\end{aligned}$$ We will show that it limits to $\int_{-1}^0 d\tau \dot{\xi}_i(\tau) \xi_j(\tau)$. We insert the mode expansion for the $\xi$’s and we use the trigonometric formula for the decomposition of $\sin (\alpha+1) k \pi/N$. The terms proportional to $(1 - \cos k \pi/N)$ indeed cancel each other. One is left with $$\begin{aligned} &\ & \frac{1}{2} \sum_{k,l=1}^{N-1} v_i^k v_j^l \lambda(k) \lambda(l) \big[ \sin l \pi/N \sum_{\alpha=0}^{N-1} \cos \alpha l \pi/N \sin \alpha k \pi/N \nonumber \\ &\ & \hspace{3cm} - \sin k \pi/N \sum_{\alpha=0}^{N-1} \cos \alpha k \pi/N \sin \alpha l \pi/N \big]. \label{xi}\end{aligned}$$ The expectation value of (\[xi\]) vanishes since the expression within the square brackets is antisymmetric in $k,l$ whereas the propagator for the modes provides a $\delta^{kl}$. Thus, it is trivially equal to $\langle \int_{-1}^0 d\tau \dot{\xi}_i(\tau) \xi_j(\tau) \rangle$ which is also equal to zero.

### Example 4.

The case with one $\dot{\xi}$ and three $\xi$’s is more delicate. It corresponds to the case $q=3$ in (\[inte1\]).
We will prove that $$\begin{aligned} &\ & \lim_{N \rightarrow \infty} \frac{1}{3!} A_{i,jkl}(x) \langle \sum_{\alpha=0}^{N-1} \Big[(\xi_{\alpha i} - \xi_{(\alpha+1) i}) \xi_{\alpha j} \nonumber \\ &\ & \hspace{2cm} - \frac{3}{2}(\xi_{\alpha i} - \xi_{(\alpha+1) i}) (\xi_{\alpha j} - \xi_{(\alpha+1) j})\Big] \xi_{\alpha k} \xi_{\alpha l} \rangle \label{ex3} \\ &\ & = \frac{1}{3!} A_{i,jkl}(x) \sum_{\alpha=0}^{N-1} \Big[ \langle (\xi_{\alpha i} - \xi_{(\alpha+1) i}) \frac{(\xi_{\alpha j} + \xi_{(\alpha+1) j})}{2} \rangle \langle \xi_{\alpha k} \xi_{\alpha l} \rangle \nonumber \\ &\ & \hspace{2cm} + \mbox{cyclic in $j, k, l$} \Big] \label{exa3} \\ &\ &= 0 = \frac{1}{3!} A_{i,jkl}(x) \langle \int_{-1}^0 d\tau \dot{\xi}_i(\tau) \xi_j(\tau) \xi_k(\tau) \xi_l(\tau) \rangle \label{exam3}\end{aligned}$$ Since the $A_{i,jkl}$ is symmetric in $j, k, l$, we can symmetrize the second line in (\[ex3\]). This yields three terms, each with a factor 1/2. Applying Wick’s theorem we expect each of them to give three contractions. However, only one contraction is non-zero, namely $\langle (\xi_{\alpha } - \xi_{(\alpha+1) }) (\xi_{\alpha } - \xi_{(\alpha+1) }) \rangle \langle \xi_{\alpha } \xi_{\alpha } \rangle$. We now show that the other two possible contractions are zero. 
Consider the case $$\begin{aligned} &\ &\langle (\xi_{\alpha } - \xi_{(\alpha+1) }) \xi_{\alpha } \rangle \langle (\xi_{\alpha } - \xi_{(\alpha+1) }) \xi_{\alpha } \rangle = \nonumber \\ &\ & \sum_{k_1,k_2=1}^{N-1} \big( \frac{2 \hbar T}{2N^2 (1 - \cos k_1 \pi/N)} \big) \big( \frac{2 \hbar T}{2N^2 (1 - \cos k_2 \pi/N)} \big) \nonumber \\ &\ & \Big[(1 - \cos k_1 \pi/N)(1 - \cos k_2 \pi/N) \sum_{\alpha=0}^{N-1} \sin^2 \alpha k_1 \pi/N \sin^2 \alpha k_2 \pi/N \label{1} \\ &\ & - (1 - \cos k_1 \pi/N) \sin k_2 \pi/N \frac{1}{2} \sum_{\alpha=0}^{N-1} \sin^2 \alpha k_1 \pi/N \sin 2 \alpha k_2 \pi/N \label{2} \\ &\ & - (1 - \cos k_2 \pi/N) \sin k_1 \pi/N \frac{1}{2} \sum_{\alpha=0}^{N-1} \sin^2 \alpha k_2 \pi/N \sin 2 \alpha k_1 \pi/N \label{3} \\ &\ & + \sin k_1 \pi/N \sin k_2 \pi/N \frac{1}{4} \sum_{\alpha=0}^{N-1} \sin 2 \alpha k_1 \pi/N \sin 2 \alpha k_2 \pi/N \Big], \label{4}\end{aligned}$$ where we have suppressed the spacetime indices. The terms (\[2\]) and (\[3\]) are clearly zero due to the summation over $\alpha$. Furthermore, (\[1\]) and (\[4\]) each vanish in the limit $N \rightarrow \infty$. Using this result, Wick’s theorem, and the symmetrization in $j, k, l$, (\[ex3\]) becomes $$\begin{aligned} &\ & \frac{1}{3!} A_{i,jkl}(x) \sum_{\alpha=0}^{N-1} \Big[ \langle (\xi_{\alpha i} - \xi_{(\alpha+1) i}) \xi_{\alpha j} \rangle \langle \xi_{\alpha k} \xi_{\alpha l} \rangle - \nonumber \\ &\ & - \frac{1}{2} \langle (\xi_{\alpha i} - \xi_{(\alpha+1) i}) (\xi_{\alpha j} - \xi_{(\alpha+1) j})\rangle \langle \xi_{\alpha k} \xi_{\alpha l} \rangle + \mbox{(cyclic in $j, k, l$)} \Big].\end{aligned}$$ This indeed agrees with (\[exa3\]). We will show that (\[exa3\]) is equal to zero. We substitute the mode expansion for the $\xi$’s into (\[exa3\]).
After some trigonometry we get $$\begin{aligned} &\ &\langle (\xi_{\alpha} - \xi_{\alpha+1}) \frac{(\xi_{\alpha} + \xi_{\alpha+1})}{2} \rangle \langle \xi_{\alpha} \xi_{\alpha} \rangle= \nonumber \\ &\ & \sum_{k_1,k_2=1}^{N-1} \frac{1}{2} \big( \frac{2 \hbar T}{2N^2 (1 - \cos k_1 \pi/N)} \big) \big( \frac{2 \hbar T}{2N^2 (1 - \cos k_2 \pi/N)} \big) \sin^2 \alpha k_2 \pi/N \nonumber \\ &\ & \Big[- \sin^2k_1 \pi /N \cos 2 \alpha k_1 \pi /N - \frac{1}{2} \sin 2 k_1 \pi/N \sin 2 \alpha k_1 \pi/N \Big] , \label{error}\end{aligned}$$ where we have again suppressed the spacetime indices. The second term in (\[error\]) vanishes due to the summation over $\alpha$. The first one tends to zero in the limit $N \rightarrow \infty$. Here we use the summation formula $$\sum_{\alpha=0}^{N-1} \cos \alpha k_1 \pi/N \sin^2 \alpha k_2 \pi/N = - \frac{N}{4} \delta_{k_1,2k_2}. \label{deltasum}$$ To prove this formula we first use trigonometric formulas to reduce the summation in (\[deltasum\]) to summations over a single cosine function and then we perform these summations. Thus indeed (\[exa3\]) is equal to zero. In (\[exam3\]), combining the cosine with a sine, and combining the two remaining sine functions, leads to double-angle sine functions whose integral vanishes. This proves the continuum limit for the term $q=3$ in (\[inte1\]). It is straightforward to generalize to the case of one $\dot{\xi}$ and arbitrary number of $\xi$’s. In every case we first use the symmetry of $A_{i,j_1 \cdots j_q}$ in $j_1, \ldots, j_q$ to symmetrize the $(\xi_{\alpha} - \xi_{\alpha+1})^2 \xi_{\alpha} \ldots \xi_{\alpha} $ term, so that $q$ terms are obtained. Then Wick’s theorem gives $q$ contractions for the $(\xi_{\alpha} - \xi_{\alpha+1})\xi_{\alpha} \xi_{\alpha} \ldots \xi_{\alpha}$ term, but just one contraction for each of the $q$ terms since all but one contraction vanish. 
The $q$ terms from the $(\xi_{\alpha} - \xi_{\alpha+1})\xi_{\alpha} \xi_{\alpha} \ldots \xi_{\alpha}$ term combine with the $q$ terms from the symmetrization of $(\xi_{\alpha} - \xi_{\alpha+1})^2 \xi_{\alpha} \ldots \xi_{\alpha}$ to yield $q$ terms of the form $(\xi_{\alpha} - \xi_{\alpha+1}) \frac{(\xi_{\alpha} + \xi_{\alpha+1})}{2} \xi_{\alpha} \ldots \xi_{\alpha}$. Using the same arguments as in the case of (\[exa3\]) one can show that the generalization of (\[exa3\]) also vanishes and, therefore, is trivially equal to the continuum case.

### Example 5.

We now consider examples of interference. The first example concerns the interference of two terms, each with an even number of $\xi$ fields. We take twice the third line in (\[inte1\]). We will show that $$\begin{aligned} &\ &\lim_{N \rightarrow \infty} \sum_{\alpha,\beta=0}^{N-1} \frac{1}{2} \langle (\xi_{\alpha i_1} - \xi_{(\alpha+1) i_1}) (\xi_{\alpha j_1} + \xi_{(\alpha+1) j_1}) \nonumber \\ &\ & \hspace{2cm} \frac{1}{2} (\xi_{\beta i_2} - \xi_{(\beta+1) i_2}) (\xi_{\beta j_2} + \xi_{(\beta+1) j_2}) \rangle, \label{ex4}\end{aligned}$$ is equal to $$\langle \int_{-1}^0 d\tau d\tau' \dot{\xi}_{i_1}(\tau) \xi_{j_1}(\tau) \dot{\xi}_{i_2}(\tau') \xi_{j_2}(\tau') \rangle. \label{4cxi}$$ As we have shown in (\[deriv\]), the contraction of the first two (or last two) factors in (\[ex4\]) vanishes, so that only two contractions remain.
The summations over $\alpha$ and $\beta$ can be performed (use (\[xi\]) twice and that (\[xi\]) vanishes for $k=l$) to yield $$\begin{aligned} &\ &\lim_{N \rightarrow \infty} \sum_{k_1 \neq l_1; k_2 \neq l_2} \big[ \langle v_{i_1}^{k_1} v_{i_2}^{k_2} \rangle \langle v_{j_1}^{l_1} v_{j_2}^{l_2} \rangle + \langle v_{i_1}^{k_1} v_{j_2}^{l_2} \rangle \langle v_{j_1}^{l_1} v_{i_2}^{k_2} \rangle \big] \nonumber \\ &\ &\lambda(k_1) \lambda(l_1) \lambda(k_2) \lambda(l_2) \frac{1}{4}[1-(-1)^{k_1+l_1}][1-(-1)^{k_2+l_2}] \nonumber \\ &\ &\Big[ \frac{\sin k_1 \pi/N \sin l_1 \pi/N}{\cos k_1 \pi/N - \cos l_1 \pi/N} \Big] \Big[ \frac{\sin k_2 \pi/N \sin l_2 \pi/N}{\cos k_2 \pi/N - \cos l_2 \pi/N} \Big]. \label{4xi}\end{aligned}$$ Each propagator gives a $\delta$-function, so we are left with a double sum. Combining each propagator with the corresponding two factors of $\lambda$, we get $$\begin{aligned} &\ &(2 \hbar T)^2 (\delta_{i_1 i_2} \delta_{j_1 j_2} - \delta_{i_1 j_2} \delta_{j_1 i_2}) \nonumber \\ &\ &\lim_{N \rightarrow \infty} \sum_{k,l=1}^{N-1} \big(\frac{1}{4N^2 \sin^2 k \pi/2N}\big) \big(\frac{1}{4N^2 \sin^2 l \pi/2N}\big) \frac{1}{2}[1-(-1)^{k+l}] \nonumber \\ &\ & \hspace{1.5cm} \Big[\frac{\sin k \pi/N \sin l \pi/N}{\cos k \pi/N - \cos l \pi/N} \Big]^2 \nonumber \\ &\ & \equiv (2 \hbar T)^2 (\delta_{i_1 i_2} \delta_{j_1 j_2} - \delta_{i_1 j_2} \delta_{j_1 i_2}) \lim_{N \rightarrow \infty} I(N).
\label{4xil}\end{aligned}$$ Using the trigonometric formula for the sine of double angle and the one which expresses the difference of cosines as a product of sines we get $$I(N)= \sum_{k,l=1}^{N-1} \frac{1}{2}[1-(-1)^{k+l}] \frac{1}{4N^4} \frac{\cos^2 k \pi/2N \cos^2 l \pi/2N}{\sin^2 (l-k) \pi/2N \sin^2 (k+l) \pi/2N}.$$ We split this sum into two sums according to whether $k+l$ is smaller or larger than $N$ $$I(N)=I_1(k+l \leq N)+I_2(k+l>N),$$ where $$I_1(N)= \sum_{k,l=1}^{\infty}g_{kl}^{(1)}(N),$$ and $$\begin{aligned} g_{kl}^{(1)}= \left\{ \begin{array}{ll} 0 & \mbox{if $k+l>N$} \nonumber \\ \frac{1}{2}[1-(-1)^{k+l}] \frac{1}{4N^4} \frac{\cos^2 k \pi/2N \cos^2 l \pi/2N}{\sin^2 (l-k) \pi/2N \sin^2 (k+l) \pi/2N} & \mbox{if $k+l \leq N$}. \end{array} \right.\end{aligned}$$ In $I_2$ we make the transformation $$k'=N-k,\; l'=N-l,$$ so $k'+l' < N$ and $I_2$ becomes $$I_2(N)= \sum_{k',l'=1}^{\infty}g_{k'l'}^{(2)}(N),$$ where $$\begin{aligned} g_{k'l'}^{(2)}= \left\{ \begin{array}{ll} 0 & \mbox{if $k'+l' \geq N$} \nonumber \\ \frac{1}{2}[1-(-1)^{k'+l'}] \frac{1}{4N^4} \frac{\sin^2 k' \pi/2N \sin^2 l' \pi/2N}{\sin^2 (l'-k') \pi/2N \sin^2 (k'+l') \pi/2N} & \mbox{if $k'+l' < N$}. \end{array} \right.\end{aligned}$$ We shall now again prove that these series converge uniformly in $N$. For $0 < k+l \leq N$, we have the lower bound $$\sin (k+l) \pi/2N \geq (k+l)/N,$$ using the inequality $\sin \theta \geq 2 \theta / \pi$ valid for $0 \leq \theta \leq \pi /2$. From the same inequality we also get $$|\sin (l-k) \pi/2N| \geq |l-k|/N,$$ since $-N \leq (l-k) \leq N$. Hence, an upper limit for the summands in $I_1$ can be found which is independent of $N$ $$|g_{kl}^{(1)}(N)| \leq \frac{1}{4(k^2 - l^2)^2} \frac{1}{2}[1-(-1)^{k+l}].$$ The same upper limit holds for $g_{k'l'}^{(2)}(N)$. (From (\[xi\]) it follows that $g_{kl}^{(1)}$ and $g_{kl}^{(2)}$ vanish at $k=l$). The double series $\sum_{k,l=1}^{\infty} [1-(-1)^{k+l}]/[8(k^2 - l^2)^2]$ is convergent.
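A direct numerical look confirms this convergence (a sketch in plain Python, not part of the proof; only terms with $k+l$ odd contribute, so $k \neq l$ automatically):

```python
def partial_sum(K):
    # sum_{k,l=1}^{K} [1 - (-1)^{k+l}] / [8 (k^2 - l^2)^2]
    s = 0.0
    for k in range(1, K + 1):
        for l in range(1, K + 1):
            if (k + l) % 2 == 1:          # then [1 - (-1)^{k+l}] = 2
                s += 2.0 / (8.0 * (k**2 - l**2)**2)
    return s

s100, s200, s400 = partial_sum(100), partial_sum(200), partial_sum(400)
assert s100 < s200 < s400     # all terms are positive
assert s400 - s200 < 1e-3     # the tail is already tiny
```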
Actually, apart from the factor 1/8, this is exactly the series we analytically evaluate in section 5. Thus, the limit $N \rightarrow \infty$ can be taken keeping $k$ and $l$ fixed. The result is that $I_2(N)$ tends to zero whereas $I_1(N)$ tends to $$\sum_{k,l=1}^{\infty} [1-(-1)^{k+l}] \frac{2}{\pi^4 (l^2-k^2)^2}.$$ Going back to (\[4xil\]) we get $$(\delta_{i_1 i_2} \delta_{j_1 j_2} - \delta_{i_1 j_2} \delta_{j_1 i_2}) \sum_{k,l=1}^{\infty} \big(\frac{2 \hbar T}{k^2 \pi^2}\big) \big(\frac{2 \hbar T}{l^2 \pi^2}\big) \Big(-[1-(-1)^{k+l}]\frac{kl}{l^2-k^2}\Big)^2. \label{befcon}$$ Using $$\begin{aligned} \int_{-1}^0 d\tau (k\pi) \cos k \pi \tau \sin l \pi \tau= \left\{ \begin{array}{ll} 0 & \mbox{if $k=l$} \nonumber \\ -[1-(-1)^{k+l}]\frac{kl}{l^2-k^2} & \mbox{if $k \neq l$}, \end{array} \right.\end{aligned}$$ we find that (\[befcon\]) indeed reproduces (\[4cxi\]).

### Example 6.

We now give an example of interference with two terms, each with an odd number of quantum fields. The basic features are the same, though the algebra is much more laborious. We take the second term in the first line of (\[inte1\]) and the term with $q=2$. We expand $A_{l,n}(z_{\alpha-1})$ around $x_i$. We will prove that $$\begin{aligned} &\ & \lim_{N \rightarrow \infty} \frac{1}{2!} A_{i,jk}(x) A_{l,mn}(x) (x-y)_l (y-x)_m \langle \frac{1}{N} \sum_{\alpha, \beta=0}^{N-1} \Big[(\xi_{\alpha i} - \xi_{(\alpha+1) i}) \xi_{\alpha j} \nonumber \\ &\ & \hspace{2cm} - \frac{2}{2}(\xi_{\alpha i} - \xi_{(\alpha+1) i}) (\xi_{\alpha j} - \xi_{(\alpha+1) j})\Big] \xi_{\alpha k} \big(\frac{\beta}{N}\big) \xi_{\beta n} \rangle \label{ex5} \\ &\ & = - \frac{1}{2!} A_{i,jk}(x) A_{l,mn}(x) (x-y)_l (y-x)_m \nonumber \\ &\ & \hspace{2cm} \langle \int_{-1}^0 d\tau d\tau' \dot{\xi}_i(\tau) \xi_j(\tau) \xi_k(\tau) \tau' \xi_n(\tau') \rangle,\end{aligned}$$ where the relative minus sign is due to the difference in sign between (\[ddec\]) and (\[cdec\]).
Following the same procedure as in the case with one $\dot{\xi}$ and an odd number of $\xi$’s we first symmetrize w.r.t. $j, k$. We will study only the contraction $\langle i\ j \rangle \langle k\ n \rangle$ since the contraction $\langle i\ k \rangle \langle j\ n \rangle$ is equal to this one and the last one, $\langle i\ n \rangle \langle j\ k \rangle$, can be studied in a similar way; here we abbreviate the $\xi$’s by their spacetime indices. From now on the factor $\frac{1}{2!} A_{i,jk}(x) A_{l,mn}(x) (x-y)_l (y-x)_m$ is implied and the spacetime indices are suppressed. We substitute the mode expansion for the $\xi$’s in (\[ex5\]). The summation over $\beta$ is given in (\[sumex2\]). After some trigonometry (\[ex5\]) becomes $$\begin{aligned} &\ & \lim_{N \rightarrow \infty} (2 \hbar T)^2 \sum_{k,l=1}^{N-1} \frac{1}{2N^2 (1 - \cos k \pi/N)} \frac{1}{2N^2 (1 - \cos l \pi/N)} \nonumber \\ &\ & \big[ - \frac{(-1)^l}{2N \tan l \pi/2N} \big] \nonumber \\ &\ & \Big\{ (1 - \cos k \pi/N) \frac{1}{2} \sum_{\alpha=0}^{N-1} (1 - \cos 2 \alpha k \pi/N) \sin \alpha l \pi/N \label{10} \\ &\ & - \sin k \pi/N \big( \frac{N}{4} \delta^{2k,l} \big) \label{11} \\ &\ & - \frac{1}{2} (1 - \cos k \pi/N)^2 \frac{1}{2} \sum_{\alpha=0}^{N-1} (1 - \cos 2 \alpha k \pi/N) \sin \alpha l \pi/N \label{12} \\ &\ & - \frac{1}{2} (1 - \cos k \pi/N)(1 - \cos l \pi/N) \nonumber \\ &\ & \hspace{3cm} \frac{1}{2} \sum_{\alpha=0}^{N-1} (1 - \cos 2 \alpha k \pi/N) \sin \alpha l \pi/N \label{13} \\ &\ & + (1 - \cos k \pi/N) \sin k \pi/N \big( \frac{N}{4} \delta^{2k,l} \big) \label{14} \\ &\ & + \frac{1}{2} (1 - \cos k \pi/N) \sin l \pi/N \big( -\frac{N}{4} \delta^{2k,l} \big) \label{15} \\ &\ & + \frac{1}{2} (1 - \cos l \pi/N) \sin k \pi/N \big( \frac{N}{4} \delta^{2k,l} \big) \label{16} \\ &\ & - \frac{1}{2} \sin^2 k \pi/N \frac{1}{2} \sum_{\alpha=0}^{N-1} (1 + \cos 2 \alpha k \pi/N) \sin \alpha l \pi/N \label{17} \\ &\ & - \frac{1}{2} \sin k \pi/N \sin l \pi/N \frac{1}{2} \sum_{\alpha=0}^{N-1} \sin 2
\alpha k \pi/N \cos \alpha l \pi/N \Big\}. \label{18}\end{aligned}$$ The terms (\[10\]) and (\[11\]) come from the $(\xi_{\alpha} - \xi_{\alpha+1}) \xi_{\alpha}$ term. The former is cancelled by the $(\xi_{\alpha} - \xi_{\alpha+1})^2$ term and the latter gives the continuum limit. Indeed, the term (\[10\]) is cancelled exactly by the terms (\[12\]), (\[13\]), (\[17\]) and (\[18\]). The terms (\[14\]), (\[15\]) and (\[16\]) vanish in the limit $N \rightarrow \infty$. It remains to take the limit $N \rightarrow \infty$ in (\[11\]). The term (\[11\]) can be rewritten as $$- (2 \hbar T)^2 \sum_{k=1}^{N-1} \frac{1}{128 N^4} \frac{ \cos k \pi/N}{ \sin^2 k \pi/2N \sin^2 k \pi/N}. \label{lastt}$$ We split the sum into two sums, the first running from 1 to $N/2 - 1$ and the second from $N/2$ to $N-1$. In the first one an upper bound for the summands can be found by using the same inequalities as in the first example, $$\Big|\frac{1}{128 N^4} \frac{ \cos k \pi/N}{ \sin^2 k \pi/2N \sin^2 k \pi/N} \Big| \leq \frac{1}{512 k^4}.$$ In the second sum we make the transformation $k' = N - k$. The series becomes $$\sum_{k'=1}^{N/2} \frac{1}{128 N^4} \frac{ \cos k' \pi/N}{ \cos^2 k' \pi/2N \sin^2 k' \pi/N} = \sum_{k'=1}^{N/2} \frac{1}{32 N^4} \frac{ \sin^2 k' \pi/2N \cos k' \pi/N}{\sin^4 k' \pi/N}.$$ An upper bound for the summands of this series is given by $$\Big| \frac{1}{32 N^4} \frac{ \sin^2 k' \pi/2N \cos k' \pi/N}{ \sin^4 k' \pi/N} \Big| \leq \frac{1}{512 k'^4}.$$ Therefore the series (\[lastt\]) is uniformly convergent. So the limit $N \rightarrow \infty$ in (\[11\]) can be performed before the summation.
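Both majorants can be verified numerically at once. In the sketch below (ours) the two halves of the sum are combined by bounding the $k$-th summand with $1/[512\,\min(k, N-k)^4]$, which reduces to the two bounds above on the respective halves:

```python
import math

def summand(k, N):
    """Absolute value of the k-th summand of the series (lastt),
    without the overall -(2ħT)^2 prefactor."""
    return abs(math.cos(k * math.pi / N)
               / (128.0 * N ** 4
                  * math.sin(k * math.pi / (2 * N)) ** 2
                  * math.sin(k * math.pi / N) ** 2))

# Check the uniform bound for both even and odd N
ok = all(summand(k, N) <= 1.0 / (512.0 * min(k, N - k) ** 4) + 1e-18
         for N in (10, 51, 200) for k in range(1, N))
```

The bound holds for every sampled $k$ and $N$, consistent with uniform convergence.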
The result reads $$\begin{aligned} &\ & - (2 \hbar T)^2 \sum_{k,l=1}^{\infty} \frac{1}{k^2 \pi^2} \frac{1}{l^2 \pi^2} \Big[- \frac{(-1)^l}{l \pi} \Big] (k \pi) \big( \frac{1}{4} \delta^{2k,l} \big) \nonumber \\ &\ &= - \int_{-1}^0 d\tau d\tau' \langle \dot{\xi}_i(\tau) \xi_j(\tau) \rangle \tau' \langle \xi_k(\tau) \xi_n(\tau') \rangle\end{aligned}$$, which is what we wanted to prove. One can easily check now that the terms of (\[inte\]) which were omitted in (\[inte1\]) indeed tend to zero. These terms are those in the last three lines in (\[inte\]) except for the terms proportional to $(\xi_{(\alpha-1) i} - \xi_{\alpha i}) (\xi_{(\alpha-1) j} - \xi_{\alpha j})$. All of them are equal to $1/N$ times terms which were proven finite in the limit $N \rightarrow \infty$. Combining (\[kinetic\]), (\[zterm\]) and (\[xiterm\]) we get the continuum action $S_{\rm config}$ $$S_{\rm config} = \frac{1}{T} \int_{-1}^{0} d \tau [ \frac{1}{2} \dot{x}_{i} \dot{x}_{i} -i {(\frac{e}{c})}T \dot{x}_{i} A_{i} ] =\int_{-T}^{0} dt [ \frac{1}{2} \dot{x}_{i} \dot{x}_{i} -i {(\frac{e}{c})}\dot{x}_{i} A_{i} ]. \label{coaction}$$, where in the last step we have rescaled the time $\tau=t/T$. Evaluation of the path integral. ================================ In the continuum path integral with action (\[coaction\]) we set $T=\Delta t$ and then we evaluate it to order $(\Delta t)^2$. The derivation of the path integral indicates how to evaluate it. First we decompose $x_i(\tau)$ into a function $z_i( \tau )$ and a quantum part $\xi_i( \tau )$ $$x_i ( \tau ) = z_{i} ( \tau ) + \xi_{i} ( \tau ).$$ The function $z_i( \tau )$ is not a solution of the classical field equations, but rather of the field equations corresponding to $L_0 = \dot{x}^2 /2 $.
It satisfies the same boundary conditions as $ x_i ( \tau ) $ and hence is given by $$z_{i} = x_i - \tau ( y-x )_i.$$ It follows that the quantum field vanishes at the boundary $$\xi_{i} ( \tau = 0 ) = \xi_{i} ( \tau = -1 ) = 0.$$ Since the eigenfunctions of $S_0$ with these boundary conditions are the functions $ \sin (n \pi \tau) $, we expand the quantum field on a trigonometric basis [@dekker; @fio; @vanfio] $$\xi_{i} = \sum_{n=1}^{ \infty } v_{i}^{n} \sin (n \pi \tau ).$$ The propagator for the modes is obtained by using only the part quadratic in velocities and reads $$\langle v_{i}^{m} v_{j}^{n} \rangle = \frac{ 2 \Delta t \hbar }{ \pi^2 n^2} \delta_{ij} \delta^{mn},$$ as follows from the measure in (\[measure\]). If we multiply this result by two sine functions and sum over $m$ and $n$, we recover the result of [@vanfio; @bas] for $\langle \xi_i(\tau_1) \xi_j(\tau_2) \rangle$. However, we shall work here entirely in terms of modes. The interaction part $S_{\rm int}$, up to the order we are interested in, is given by $$\begin{aligned} S_{\rm int} & = & \frac{i}{ \hbar } {(\frac{e}{c})}\int_{-1}^{0} \!d \tau \, \big[ (x-y)_i + \dot{ \xi }_i \big] \big\{ A_{i}( z( \tau ) ) + A_{i,j}( z( \tau ) ) \xi_j \nonumber \\ &\ & + \frac{1}{2} A_{i,jk}( z( \tau ) ) \xi_j \xi_k + \frac{1}{3!} A_{i,jkl}( z( \tau ) ) \xi_j \xi_k \xi_l + \cdots \big\}. \label{int}\end{aligned}$$ We factor out all the terms which do not depend on quantum fields $$\begin{aligned} \exp \Big[&-& \frac{1}{2 \hbar \Delta t} (x-y)^2 + \frac{i}{\hbar} {(\frac{e}{c})}\Big\{ A_i (x-y)_i - \frac{1}{2} A_{i,j} (x-y)_i (x-y)_j \nonumber \\ & + & \frac{1}{3!} A_{i,jk} (x-y)_i (x-y)_j (x-y)_k \nonumber \\ & - & \frac{1}{4!} A_{i,jkl} (x-y)_i (x-y)_j (x-y)_k (x-y)_l \Big\} \Big]. \label{noq}\end{aligned}$$ Observe that (\[noq\]) differs from $ \exp(- S_{\rm cl}/ \hbar )$ by just one term (namely the $F^2$ term in (\[action\]) ). We will recover this missing term from a tree graph (see (\[last\])).
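As a numerical aside (ours, with the prefactor $\hbar \Delta t$ set to one), the mode sum can be resummed directly: it reproduces the Dirichlet Green's function of $-d^2/d\tau^2$ on $[-1,0]$, and the equal-time $\langle \dot{\xi}(\tau) \xi(\tau) \rangle$, which enters the tadpole below, comes out finite and equal to $-(\tau + \frac{1}{2})$ (our evaluation of the conditionally convergent mode sum):

```python
import math

def green_mode_sum(tau, sigma, nmax=20000):
    """Mode sum Σ (2/(π² n²)) sin(nπτ) sin(nπσ), i.e. <ξ(τ)ξ(σ)> with ħΔt = 1."""
    return sum(2.0 / (math.pi * n) ** 2
               * math.sin(n * math.pi * tau) * math.sin(n * math.pi * sigma)
               for n in range(1, nmax))

def green_closed(tau, sigma):
    """Dirichlet Green's function of -d²/dτ² on [-1, 0], vanishing at both ends."""
    return -max(tau, sigma) * (1.0 + min(tau, sigma))

def xi_dot_xi(tau, nmax=200000):
    """Equal-time <ξ̇(τ)ξ(τ)> as a truncated mode sum: (1/π) Σ sin(2nπτ)/n."""
    return sum(math.sin(2.0 * n * math.pi * tau) / n
               for n in range(1, nmax)) / math.pi

err_green = max(abs(green_mode_sum(t, s) - green_closed(t, s))
                for t, s in [(-0.3, -0.7), (-0.5, -0.5), (-0.9, -0.2)])
err_tadpole = max(abs(xi_dot_xi(t) + (t + 0.5)) for t in (-0.25, -0.5, -0.75))
```

Truncating the mode sum at a finite number of modes and letting that number grow is precisely the regularization used below for the tadpole.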
The reason for the absence of the $F^2$ term from (\[noq\]) is that $z_i( \tau )$ does not satisfy the full field equations but rather the field equations of $L_0 = \dot{x}^2 /2 $. Using only one factor of $S_{\rm int}$ we get the following contribution $$\begin{aligned} (\frac{ie}{ \hbar c}) \! \int_{-1}^{0} \!d \tau \, & \big[ & A_{i,j} ( z( \tau) ) \langle \dot{ \xi}_i( \tau ) \xi_j ( \tau ) \rangle \nonumber \\ &+& \frac{1}{2} A_{i,jk}( z( \tau) ) \langle \xi_j( \tau ) \xi_k( \tau ) \rangle (x-y)_i \nonumber \\ &+& \frac{1}{3!} A_{i,jkl}( z( \tau) ) \langle \dot{ \xi}_i( \tau ) \xi_j( \tau ) \xi_k( \tau ) \xi_l( \tau ) \rangle \big]. \label{one}\end{aligned}$$ The first two terms are one loop contributions and are of order $ \Delta t $ and higher (since $(x-y)_i$ is of order $(\Delta t)^{1/2}$), while the last one is a 2-loop contribution of order $(\Delta t)^2$ and higher. However, performing the $ \tau $-integration, the 2-loop contribution of order $(\Delta t)^2$ vanishes because combining the sine and cosine functions one always ends up with a sine of a double angle whose integral vanishes. The first term is a superficially divergent tadpole, but using mode regularization (i.e., first evaluate the integrals for a finite number of modes, and then let the number of modes tend to infinity) one finds that it is, in fact, finite. This is thus a property of our regularization scheme, similar to the property of dimensional regularization which puts equal to zero all divergences which are not logarithmic divergences. The first two terms in (\[one\]) yield $$i \frac{ \Delta t}{12} {(\frac{e}{c})}\big[ F_{ki,k} (x-y)_i - \frac{1}{2} F_{ki,kj} (x-y)_i (x-y)_j \big]. \label{v1}$$ To get this result we used the known sum $ \zeta (2) = \sum_{n=1}^{ \infty } n^{-2} = \pi^2 /6 $. Two factors of $S_{\rm int}$ yield $$\begin{aligned} - \frac{1}{2} ( \frac{e}{ \hbar c} )^2 \int_{-1}^0 \! d \tau d \tau' \!
&\big\{&\![ A_{i,j}(z(\tau)) A_{k,l}(z(\tau')) \langle \xi_j(\tau) \xi_l(\tau') \rangle (x-y)_i (x-y)_k \nonumber \\ &+& A_i(z(\tau)) A_{k,l}(z(\tau')) \langle \dot{\xi}_i(\tau) \xi_l(\tau') \rangle (x-y)_k \nonumber \\ &+& A_{i,j}(z(\tau)) A_k(z(\tau')) \langle \xi_j(\tau) \dot{\xi}_k(\tau') \rangle (x-y)_i \nonumber \\ &+& A_i(z(\tau)) A_k(z(\tau')) \langle \dot{\xi}_i(\tau) \dot{\xi}_k(\tau') \rangle ] \nonumber \\ &+& A_{i,j}(z(\tau))A_{k,l}(z(\tau')) \langle\dot{\xi}_i(\tau)\xi_j(\tau) \dot{\xi}_k(\tau')\xi_l(\tau')\rangle\big\}, \label{2s}\end{aligned}$$ where we have omitted terms which yield zero after the $\tau$-integration or are of higher order in $\Delta t$. The first four terms inside the square brackets are tree graphs and combine to give $$\frac{4}{ \pi^4} ( \frac{ \Delta t}{ \hbar} ) {(\frac{e}{c})}^2 \sum_{k=0}^{ \infty } \frac{1}{ (2k+1)^4 } F_{ik} F_{kj} (x-y)_i (x-y)_j. \label{last}$$ The sum which appears in (\[last\]) is known and is equal to $\lambda(4)=( 1- 2^{-4} )\zeta (4)= \pi^4 /96$. Using this result we identify (\[last\]) as the missing term of the classical action. The last term in (\[2s\]) is a one-loop graph and gives $$- \frac{2}{ \pi^4} {(\frac{e}{c})}^2 F^2 (\Delta t)^2 \sum_{m,n=1; m \neq n}^{ \infty } \frac{ 1- (-1)^{m+n} }{ (m^2-n^2)^2 }. \label{v2}$$ The double sum which appears in (\[v2\]) seems not tabulated. Here we give an analytic evaluation of it. The idea is to extend the limits of summation to $ \pm \infty $ so that linear shifts of the summation variable are allowed. This can be done by observing that the summand is symmetric under $ n \rightarrow -n, m \rightarrow -m, m \leftrightarrow n$. 
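Before turning to the analytic evaluation, both sums can be estimated by brute force. In the sketch below (ours) the partial sums are numerically consistent with the common value $\pi^4/96 \approx 1.0147$:

```python
import math

# λ(4) = Σ_{k≥0} 1/(2k+1)^4, quoted above as (1 - 2^{-4}) ζ(4) = π⁴/96
lam4 = sum(1.0 / (2 * k + 1) ** 4 for k in range(100000))

# Partial double sum Σ_{m,n≥1, m≠n} [1 - (-1)^{m+n}] / (m² - n²)²;
# only odd m + n contributes, where the numerator equals 2
M = 1500
double_sum = sum(2.0 / (m * m - n * n) ** 2
                 for m in range(1, M) for n in range(1, M)
                 if (m + n) % 2 == 1)

target = math.pi ** 4 / 96.0
```

The double sum converges slowly (the near-diagonal tail is of order $1/M$), but the agreement is already at the per-mille level for the cutoff used here.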
Notice also that only $m+n={\rm odd}$ contributes. The sum becomes $$\begin{aligned} &\ & \sum_{k,l= - \infty , l \neq 0 }^{\infty} \frac{1}{ [(2k+1)^2 - (2l)^2 ]^2 } = \nonumber \\ &\ & \sum_{k,l= - \infty }^{\infty} \frac{1}{ [(2k+1)^2 - (2l)^2 ]^2 } - \pi^4 / 48.\end{aligned}$$ The last double sum can be rewritten by substituting $2k = 2l+p-1$ for $p$ odd $$\begin{aligned} &\ & \sum_{l=- \infty}^{ \infty } \sum_{p=1,odd}^{ \infty } \frac{1}{p^2( 4l \pm p )^2} = \nonumber \\ &\ & 2 ( \sum_{p=1,odd}^{ \infty } p^{-2} )^2 = 2 ( \lambda(2) )^2 = \pi^4 /32.\end{aligned}$$ Hence, the sum is equal to $ \pi^4 / 96 $. (\[v2\]) together with (\[v1\]) give the Van Vleck determinant. The final result is that the path integral correctly reproduces the propagator found from the Hamiltonian operator approach. There are further one-loop diagrams which give contribution of higher order than $ (\Delta t)^2 $. For example, taking twice the term $ (x-y)_i A_{i,jk} \xi_j \xi_k $ we get a one-loop result proportional to $ (\Delta t)^3 $. Conclusions. ============ We have proven the 1-1 correspondence between the Hamiltonian approach and path integration for Hamiltonians of the form $ \hat{H}=\hat{p}^2 + a^i(\hat{x}) \hat{p}_i + b(\hat{x})$. The correspondence we found is this: casting the Hamiltonian into the form $$\hat{H} = \frac{1}{2} \Big( \hat{p}_i - (\frac{e}{c}) A_i(\hat{x}) \Big) \Big( \hat{p}^i - (\frac{e}{c}) A^i(\hat{x}) \Big) + V(\hat{x}),$$ the action to be used in the path integral is $$S_{\rm config} = \int_{-T}^0 dt \big[ \frac{1}{2} \delta_{ij} \dot{x}^i \dot{x}^j - i(\frac{e}{c}) \dot{x}^i A_i(x) + V(x) \big]. \label{langr} \label{coraction}$$ This result holds for any Hamiltonian, whether it is covariant or not. The path integral is perturbatively evaluated by treating the term $\dot{x}^2 /2$ as the free part $S_0$, decomposing $x(\tau)$ into a background part $z(\tau)$ and a quantum part $\xi(\tau)$, and expanding $\xi(\tau)$ in terms of eigenfunctions of $S_0$.
The measure in the path integral as well as the action $S_0$ determines the world line propagator (see (\[propag\])). (This is, in fact, the only place where the measure plays a role for us). The other two terms in (\[coraction\]) yield the vertices, and one can now evaluate (as we did) the path integral in a perfectly straightforward and standard manner (“Feynman graphs”). Of course the expansion of $\xi_i(\tau)$ into modes is well-known $$\xi_i(\tau)= \sum_{k=1}^{\infty} v^k_i \sin k \pi \tau.$$ What we have shown is that all arbitrariness (such as the overall normalization of the path integral) can be fixed by starting with the Hamiltonian approach. Moreover, we have given an elementary (though at times somewhat tedious) proof that the $N \rightarrow \infty$ limit of the time-discretized path integral exists as far as perturbation theory is concerned, and indeed yields the continuum path integral with its measure. The actual proof that the limit $N \rightarrow \infty$ exists was given by carefully analyzing six examples which cover all cases one encounters in the perturbative evaluation of the path integral. In each example we found upper bounds for the infinite series which showed that these series are uniformly convergent as a function of $N$. This allowed us to take the limit $N \rightarrow \infty$ inside the summation symbols (i.e., at fixed mode index $k$). Our results confirm the lore about path integrals that the naive $N \rightarrow \infty$ limit in the discretized path integral yields the correct continuum path integral. However, this came about by an interesting “conspiracy”: the higher order terms in the Hamiltonian evaluation of $<x| \exp (- \Delta t \hat{H} / \hbar) |y>$ which are due to expanding the exponent and taking into account the commutators between $\hat{x}$ and $\hat{p}$ operators, [*cancel*]{} against the terms in the time-discretized action which seem to (but do not) vanish in the $N \rightarrow \infty$ limit. 
Due to this conspiracy it is, after all, correct to use (\[la\]) and (\[linapp\]), omitting all commutators, to obtain the action to be used in the path integral. Namely, this yields $h(x,p)=\frac{1}{2} p^2 - \frac{e}{c} A^i(x) p_i + \frac{1}{2} (\frac{e}{c})^2 A^2(x) + V(x)$, and after integrating out the momenta, the naive $N \rightarrow \infty$ limit yields the correct action $S_{\rm config}$ in (\[coraction\]). However, (\[la\]) and (\[linapp\]) by themselves do not yield the correct propagator. Our results immediately generalize to quantum field theories with derivative interactions, such as Yang-Mills theory with a gauge-fixing term and ghost action. The Hamiltonians for the gauge and ghost fields again contain terms linear in the momenta. For example, in the Lorentz gauge, the Hamiltonian reads $$\begin{aligned} {\cal H} (gauge) & = & \frac{1}{2} p(A^a_k)^2 - \frac{1}{2} p(A^a_0)^2 + p(A^a_k) \partial_k A_0^a \nonumber \\ & + & p(A^a_0) \partial^k A_k^a + \frac{1}{4} (G^a_{kl})^2 - g p(A^a_k) f^a_{\ bc} A_0^b A_k^c,\end{aligned}$$ and $${\cal H} (ghost) = p(b_a) p(c^a) + p(c^a) g f^a_{\ bc} A_0^b c^c + (\partial_k b_a) (D_k c^a),$$ where $b_a (c^a)$ are the antighost (ghost) fields. Following the results of this paper one can find the 1-1 correspondence between operator Hamiltonians and path integral actions. In particular, one may determine the operator ordering of ${\cal H} (gauge)$ and ${\cal H} (ghost)$ which corresponds to the usual BRST invariant quantum action in the configuration space path integral. However, again, the linear approximation in the Hamiltonian approach yields incorrect results if one uses it to compute the propagator. Another extension of our results would be to consider phase space path integrals. In the discretized action we first integrated at some point over the momenta, and then studied the limit $N \rightarrow \infty$.
One might leave the discretized momenta in the action, and consider the limit $N \rightarrow \infty$ with momenta present. The continuum action is expected to be $p \dot{x} - H(p,x)$, i.e., the Legendre transform of the classical Lagrangian in (\[langr\]). Again one could introduce classical trajectories for $p$ and $x$ satisfying $x(0)=x$, $x(T)=y$ and satisfying the Hamilton equations of motion of a suitable Hamiltonian $H_0$ contained in $H$. (This will, of course, fix $p(0)$ and $p(T)$ as well). The quantum deviations $\xi(\tau)$ and $\pi(\tau)$ vanish then at the boundaries and can be expanded into a complete set, for example again $\sin k \pi t/T$. The measure is expected to come out unity (except for the factor $(2 \pi \hbar T)^{-1/2}$ mentioned in the introduction) and propagators and vertices would then be defined. The problem would be to prove that the limit $N \rightarrow \infty$ of the discretized theory indeed produces this continuum theory. We are interested in extending our results to models in curved spacetime (nonlinear sigma models). This is a well-known problem to which partial answers have been given in [@dewitt] and [@bas]. In the propagator to order $\Delta t$ one finds a term proportional to the Ricci tensor $R_{ij}$ contracted with $(x^i - y^i)(x^j - y^j)$, which cannot be written as the action of a [*local*]{} functional. Thus, it is not immediately clear what the continuum action is, and which terms in the limit $N \rightarrow \infty$ will cancel. However, one can still exponentiate this term and obtain the discretized action. The $R_{ij}$ term should become an $R$ term in the continuum theory since at the perturbative level $(x^i - y^i)(x^j - y^j)$ should be equivalent to $g^{ij} \Delta t$. However, this would only introduce a factor $R/6$ into the action whereas one needs a factor $R/8$ [@dekker; @fio; @dewitt]. Further cancellations of the type studied in section 4 should then indeed reduce $R/6$ to $R/8$.
Note that this analysis might be done without the need to use Einstein invariance to go to Riemann normal coordinates, and hence problems with time-ordering in arbitrary coordinates [@dewitt] would be avoided.\ [**Acknowledgments:**]{} We thank Bas Peeters and Jan de Boer for numerous discussions. Eduard Brézin and Jan Pierre Zuber told us that they had also discovered that it is incorrect to use the linear approximation to obtain the propagator (see (\[la\]), (\[linapp\])). We hope that our solution, based on the “conspiracy” described in the conclusions, will satisfy them and others.\ This work was supported by NSF grant 92-11367. [99]{} See, for example, L. Schulman, ‘Techniques and Applications of Path Integration’, John Wiley and Sons, New York, 1981. L. Alvarez-Gaumé and E. Witten, Nucl. Phys. [**B234**]{} (1984) 269. A. Diaz, W. Troost, P. van Nieuwenhuizen and A. van Proeyen, Int. J. Mod. Phys. [**A4**]{} (1989) 3959; M. Hatsuda, P. van Nieuwenhuizen, W. Troost and A. van Proeyen, Nucl. Phys. [**B335**]{} (1990) 166. H. Dekker, Physica [**103A**]{} (1980) 586. F. Bastianelli, Nucl. Phys. [**B376**]{} (1992) 113. F. Bastianelli and P. van Nieuwenhuizen, Nucl. Phys. [**B389**]{} (1993) 53. R. Graham, Z. Phys. [**B26**]{} (1977) 281. F. Langouche, D. Roekaerts, and E. Tirapegui, ‘Functional Integration and Semiclassical Expansions’, D. Reidel Publishing Company, Dordrecht, Holland, 1982. B. Peeters, P. van Nieuwenhuizen, ITP-SB-93-51. J. Van Vleck, Proc. Natl. Acad. Sci. [**14**]{} (1928) 178; C. Morette, Phys. Rev. [**81**]{} (1951) 848. B. DeWitt, Rev. Mod. Phys. [**29**]{} (1957) 377. B. DeWitt, ‘Supermanifolds’, $2^{\rm nd}$ ed., Cambridge University Press, 1992. [^1]: e-mail: [email protected] [^2]: e-mail: [email protected]
--- abstract: 'In the in-out formalism we advance a method of the inverse scattering matrix for calculating effective actions in pure magnetic field backgrounds. The one-loop effective actions are found in a localized magnetic field of Sauter type and approximately in a general magnetic field by applying the uniform semiclassical approximation. The effective actions exhibit the electromagnetic duality between a constant electric field and a constant magnetic field and between $E(x) = E \, {\rm sech}^2 (x/L)$ and $B(x) = B \, {\rm sech}^2 (x/L)$.' author: - Sang Pyo Kim title: QED Effective Action in Magnetic Field Backgrounds and Electromagnetic Duality --- Introduction ============ The effective action in a background field probes the vacuum structure of the underlying theory. In quantum electrodynamics (QED) the effective action in a constant electromagnetic field was first found by Heisenberg, Euler, and Weisskopf [@heisenbergeuler] and later by Schwinger in the proper-time integral [@schwinger]. The vacuum polarization by an electromagnetic field leads to prominent phenomena such as photon splitting, direct photon-photon scattering, birefringence, and Schwinger pair production. In spite of constant interest and continuous investigations, however, computing the nonperturbative effective action beyond a constant field has been regarded as a nontrivial, challenging task, and the exact effective actions have been known only for a few configurations of electromagnetic fields [@dittrichreuter]. The zeta-function regularization can be used for a constant magnetic field [@blviwi], and the worldline integral [@rescsc], the Green’s function (resolvent technique) [@chd; @dunne-hall98-2; @dunne-hall98], and the lightcone coordinate [@fried] have also been introduced to compute the effective actions in some electromagnetic fields.
In the in-out formalism based on the Schwinger variational principle [@schwinger51], the vacuum persistence amplitude $ \langle {\rm out} \vert {\rm in} \rangle = e^{i \int d^3 x d t {\cal L}^{(1)} }$ leads to the effective action [@dewitt] $$\begin{aligned} \int d^3 x d t {\cal L}^{(1)} = \mp i \sum_{\bf K} \ln (\alpha_{\bf K}^*). \label{alp-act}\end{aligned}$$ Here and throughout the paper the upper (lower) sign is for spinor (scalar) QED, $\alpha_{\bf K}$ is the Bogoliubov coefficient between the in and out vacua, and ${\bf K}$ stands for all quantum numbers, such as momenta and/or energy and spin of the field. The in-out formalism manifests the vacuum persistence relation $$\begin{aligned} 2 {\rm Im} (\int d^3 x d t {\cal L}^{(1)}) &=& \mp \sum_{\bf K} \ln (|\alpha_{\bf K}|^2) \nonumber\\ &=& \mp \sum_{\bf K} \ln (1 \mp {\cal N}_{\bf K}), \label{vac per rel}\end{aligned}$$ where ${\cal N}_{\bf K} = |\beta_{\bf K}|^2$ is the mean number of produced pairs and the Bogoliubov relation has been used: $$\begin{aligned} |\alpha_{\bf K}|^2 \pm |\beta_{\bf K}|^2 =1.\end{aligned}$$ Thus the vacuum persistence relation relates the mean number of pairs to the imaginary part of the effective action. It was shown in Ref. [@ahn] that the vacuum persistence amplitude could provide the QED effective action. Using the gamma-function regularization, Kim, Lee, and Yoon have further developed the effective actions in the in-out formalism in a temporally or a spatially localized electric field of Sauter-type [@kly08; @kim09; @kly10] and at finite temperature [@kly10-2]. In the space-dependent gauge for electric fields the Bogoliubov coefficients from the second quantized field theory for barrier tunneling [@nikishov70] are used to compute the effective actions [@kim09; @kly10]. The purpose of this paper is two-fold. First, we advance a method to find QED effective actions in static magnetic fields in the in-out formalism.
Second, we show the electromagnetic duality of QED actions between a constant electric field and a constant magnetic field and also between a Sauter-type electric field and a magnetic field. Contrary to a common belief that the in-in formalism should be used for pure magnetic fields, we argue that the inverse scattering matrix for charged particles may give the coefficient playing the same role as the Bogoliubov coefficient in the in-out formalism. We use the inverse scattering matrix to find the effective actions in the proper-time integral in a constant magnetic field and a Sauter-type magnetic field $B(x) = B \, {\rm sech}^2 (x/L)$. Our effective action in the Sauter-type magnetic field is a multiple integral of proper time, transverse momenta, and Euclidean energy, while the effective action by the resolvent Green function [@chd; @dunne-hall98-2] involves a single integral of proper time. The underlying idea is that the exponentially decreasing and increasing solutions in each asymptotic region for charged particles in pure magnetic fields can be written in the form of Jost functions, as in scattering theory. The normalizability condition on the Jost functions is the on-shell condition for physically bound states. In general, one set of solutions in one asymptotic region can always be expressed through Jost functions by another set in the other asymptotic region, which may be interpreted as the off-shell condition that extends scattering theory to bound states [@taylor]. Remarkably the inverse scattering matrix, which is the ratio of the amplitude for the exponentially increasing branch to the amplitude for the exponentially decreasing branch, plays a role analogous to that of the Bogoliubov coefficient and leads to the effective actions in pure magnetic fields. We illustrate this method for a constant magnetic field and a spatially localized magnetic field of Sauter-type.
We further show through one-loop effective actions in the in-out formalism that the electromagnetic duality holds between a constant electric field and a constant magnetic field and also between $E(x) = E \, {\rm sech}^2 (x/L)$ and $B(x) = B \, {\rm sech}^2 (x/L)$. A common stratagem has been to show the duality of the Heisenberg-Euler and Schwinger effective action in the constant electromagnetic field under the dual transformation of electric field and magnetic field [@chopak]. The organization of this paper is as follows. In Sec. \[sec 4\] we advance a method to find effective action from the inverse scattering matrix in a constant magnetic field and compare it with the resolvent Green function method, and in Sec. \[sec 5\] we apply the method to the Sauter-type magnetic field. In Sec. \[sec 6\] we show the duality of the one-loop effective actions in the constant and the Sauter-type electric and magnetic fields. The Jost functions are derived for QED in magnetic fields in the Appendix. Constant Magnetic Field {#sec 4} ======================= The transverse motion of a charged particle in a constant magnetic field is confined to Landau levels and in a general configuration has both a discrete and a continuous spectrum of energy. The vacuum defined by the lowest Landau level is stable and no pair is produced from pure magnetic fields due to infinite instanton action and thereby the zero tunneling probability for the Dirac sea to decay [@kimpage02]. The spin-diagonal Fourier component of the squared Dirac or Klein-Gordon equation [@schweber] $$\begin{aligned} \Bigl [\partial_x^2 - (k_y - q Bx)^2 + \omega^2 - m^2 - k_z^2 + 2\sigma (qB) \Bigr] \varphi_{(\sigma)} (x) = 0, \label{mag eq}\end{aligned}$$ has the harmonic wave functions with the energy $\epsilon = \omega^2 - m^2 - k_z^2 + 2\sigma (qB)$ as bound states, corresponding to the Landau levels $\epsilon = qB (2n+1)$. The general solutions for Eq. 
(\[mag eq\]) are given by the parabolic cylinder functions $$\begin{aligned} D_p (\xi), \quad D_p(- \xi), \quad D_{-p-1} (i \xi), \quad D_{-p-1} (-i \xi), \label{con mag sol}\end{aligned}$$ where $$\begin{aligned} \xi = \sqrt{\frac{2}{qB}} (k_y - qBx), \quad p = - \frac{1- 2\sigma}{2} + \frac{\omega^2 - m^2 - k_z^2}{2qB}.\end{aligned}$$ In the Riemann sheet [@riemann], the exponentially decreasing solutions are $D_p (-\xi)$ at $x = \infty$ and $D_{p} (\xi)$ at $x = - \infty$ while the exponentially increasing solutions are $D_{-p-1} (-i\xi)$ at $x = \infty$ and $D_{-p-1} (i\xi)$ at $x = - \infty$ (see the asymptotic formulas 9.246 of Ref. [@gr-table]). As shown in the Appendix, these functions can be used as the Jost functions for the bounded system as a generalization of scattering theory. In fact, the connection formula 9.248 of Ref. [@gr-table] connects the bounded solution at $x = \infty$ with the solutions at $x = - \infty$ in terms of the Jost functions through Eq. (\[jos rel3\]): $$\begin{aligned} D_{p} (- \xi) = \sqrt{2 \pi} \frac{e^{i(p+1) \frac{\pi}{2}}}{\Gamma (-p)} D_{-p-1} (i \xi) + e^{ip \pi} D_{p} (\xi).\end{aligned}$$ We may introduce the inverse scattering matrix (\[in sc mat\]), which is the ratio of the amplitude for the exponentially increasing part to the amplitude for the exponentially decreasing part $$\begin{aligned} {\cal M}_p = \sqrt{2 \pi} \frac{e^{-i(p-1) \frac{\pi}{2}}}{\Gamma (-p)}. \label{sc mat}\end{aligned}$$ Note that the scattering matrix in scattering theory [@taylor] is $1/{\cal M}_p$. The inverse scattering matrix now carries the information about the potential and quantum states. For instance, the condition for bound states is $$\begin{aligned} {\cal M}_p = 0, \quad p = n, \quad (n = 0, 1, \cdots). 
\label{quan1}\end{aligned}$$ The simple poles for the scattering matrix at physically bound states [@taylor] now become the simple zeros of $1/\Gamma (-p)$ for the inverse scattering matrix, for which $D_n(\xi)$ is the harmonic wave function up to a normalization constant with parity $e^{ip \pi}$. That is, the nonnegative integer $p=n$ is the on-shell condition for Landau levels. Wick-rotating the time as $t = -i\tilde{t}$ and the frequency as $\omega = i \tilde{\omega}$, we observe that the inverse scattering matrix provides the effective action in analogy with the in-out formalism for electric fields $$\begin{aligned} {\cal L}^{(1)} = \pm \frac{qB}{(2 \pi)} \sum_{\sigma} \int \frac{d \tilde{\omega}}{(2 \pi)} \frac{dk_z}{(2 \pi)} \ln ({\cal M}_p^*),\end{aligned}$$ where the upper (lower) sign is for spinor (scalar) QED and $(qB)/(2 \pi)$ accounts for the wave packet centered around $k_y = qBx$. Using the gamma-function regularization [@kly08; @kim09; @kly10; @kim10], summing over the spin states and carrying out the integration, we obtain the standard effective action $$\begin{aligned} {\cal L}^{(1)} = \mp \frac{1+ 2 |\sigma|}{2} \frac{(qB)^2}{(2\pi)^2}\int_{0}^{\infty} \frac{ds}{s^2} e^{- \frac{m^2s}{2qB}} F_{\sigma} (s), \label{conB-eff}\end{aligned}$$ where the spectral function is $$\begin{aligned} F_{\sigma} (s) = \frac{[\cosh(\frac{s}{2})]^{2 |\sigma|}}{\sinh(\frac{s}{2})} - \frac{2}{s} + (-1)^{2 |\sigma|} \frac{(1+ 2 |\sigma|)s}{12}. \label{sp fun}\end{aligned}$$ Here the Schwinger prescription has been employed for renormalization of the vacuum energy and the charge, which subtracts the divergent terms, the last two terms in Eq. (\[sp fun\]), in the proper-time integral [@schwinger]. A passing remark is that the inverse scattering matrix is real and, therefore, the effective action neither has an imaginary part nor leads to vacuum decay due to pair production.
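The bound-state condition can be made concrete with a few lines of code. The sketch below (ours) uses the reflection formula $1/\Gamma(-p) = -\Gamma(1+p)\sin(\pi p)/\pi$, so that only the gamma function at positive argument is needed; the phase factor of ${\cal M}_p$ has unit modulus and drops out:

```python
import math

def inv_gamma_neg(p):
    """1/Γ(-p) via the reflection formula Γ(-p) Γ(1+p) = -π / sin(πp)."""
    return -math.gamma(1.0 + p) * math.sin(math.pi * p) / math.pi

def M_abs(p):
    """|M_p| = √(2π) / |Γ(-p)|; the exponential phase has unit modulus."""
    return math.sqrt(2.0 * math.pi) * abs(inv_gamma_neg(p))

# |M_p| vanishes at the Landau levels p = n and is nonzero in between
at_levels = [M_abs(float(n)) for n in range(6)]
between = [M_abs(n + 0.5) for n in range(6)]
```

The zeros at nonnegative integer $p$ are exactly the on-shell Landau levels discussed above.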
We now compare the inverse scattering matrix method with the resolvent Green function method applied to magnetic fields [@chd; @dunne-hall98-2; @dunne-hall98], in which the effective action is the sum and the trace of the resolvent Green function for Eq. (\[mag eq\]). Following Refs. [@chd; @dunne-hall98-2] and choosing two independent solutions $D_p (\xi)$ and $D_p (- \xi)$, the effective action is given by $$\begin{aligned} {\cal L}^{(1)} &=& \pm \frac{qB}{(2 \pi)} \sum_{\sigma} \int_{- \infty}^{\infty} \frac{d \tilde{\omega}}{(2 \pi)} \frac{dk_z}{(2 \pi)} \frac{2\tilde{\omega}^2}{{\rm Wr}_x [D_p (\xi),D_p (-\xi)]} \int_{- \infty}^{\infty} dx D_p (\xi)D_p (-\xi) \nonumber\\ &=& \pm \frac{1}{2 (2 \pi)^3} \sum_{\sigma} \int_{- \infty}^{\infty} \tilde{\omega}^2 d \tilde{\omega} dk_z \Bigl[\psi( \frac{1}{2} - \frac{p}{2}) + \psi (- \frac{p}{2})\Bigr]. \label{res-act}\end{aligned}$$ Here we have used the formulas 7.711-2 and 8.335-1 of Ref. [@gr-table] in the second line and $\psi$ is the psi function $\psi(z) = d (\ln \Gamma(z))/dz$. Using the formula 8.361-1 of Ref. [@gr-table] $$\begin{aligned} \psi (z) = - \int_{0}^{\infty} ds \Bigl(\frac{e^{-zs}}{1- e^{-s}} - \frac{e^{-s}}{s} \Bigr),\end{aligned}$$ summing over the spin states and performing the double Gaussian integral, we recover the standard effective action (\[conB-eff\]) after the Schwinger prescription is done for renormalization. In the in-out formalism DeWitt has shown the equivalence of the effective action (\[alp-act\]) from the Bogoliubov coefficient and the effective action from the Green function [@dewitt]. 
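The integral representation of the psi function quoted above is easy to verify numerically; a midpoint-rule sketch (ours), checked against the known value $\psi(2) = 1 - \gamma$:

```python
import math

def psi_integral(z, smax=60.0, n=200000):
    """ψ(z) from formula 8.361-1: -∫₀^∞ [e^{-zs}/(1-e^{-s}) - e^{-s}/s] ds.
    The integrand is finite as s → 0, so a midpoint rule suffices."""
    h = smax / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += math.exp(-z * s) / (1.0 - math.exp(-s)) - math.exp(-s) / s
    return -total * h

euler_gamma = 0.57721566490153286
psi2 = psi_integral(2.0)   # ψ(2) = 1 - γ
```

The recurrence $\psi(z+1) = \psi(z) + 1/z$ with $\psi(1) = -\gamma$ supplies the reference value.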
Spatially Localized Magnetic Field {#sec 5} ================================== We now consider a spatially localized field $B(x) = B \, {\rm sech}^2(x/L)$ of Sauter-type along the $z$ direction with the space-dependent gauge field $$\begin{aligned} A_{\mu} = (0, 0, - BL \tanh (\frac{x}{L}), 0).\end{aligned}$$ Then the spin-diagonal Fourier component of the squared Dirac or Klein-Gordon equation becomes [@schweber] $$\begin{aligned} \Bigl [\partial_x^2 - (k_y - q BL \tanh (\frac{x}{L}))^2 + \omega^2 - m^2 - k_z^2 +2 \sigma qB {\rm sech}^2 (x/L) \Bigr] \varphi_{(\sigma)} (x) = 0. \label{mag eq2}\end{aligned}$$ The motion (\[mag eq2\]) is bounded at $x = \pm \infty$, so the momentum $P_1$ along the longitudinal direction takes an imaginary value, $P_{1(\pm)} = i\Pi_{1(\pm)}$, $$\begin{aligned} \Pi_{1(\pm)} (B) = \sqrt{(k_y \mp q BL)^2 - (\omega^2 - m^2 - k_z^2)}.\end{aligned}$$ The solution may be found in terms of the hypergeometric function as $$\begin{aligned} \varphi_{(\sigma)} (x) = \xi^{\frac{L}{2} \Pi_{1(+)}} (1 - \xi)^{\frac{1-2\sigma}{2} + \lambda_{\sigma}} F(a, b; c; \xi), \label{saut sol}\end{aligned}$$ where $$\begin{aligned} \xi = - e^{- 2 \frac{x}{L}}, \quad \lambda_{\sigma} = (qBL^2) \sqrt{1 + \Bigl(\frac{1-2|\sigma|}{2 qBL^2} \Bigr)^2},\end{aligned}$$ and $$\begin{aligned} a &=& \frac{1-2\sigma}{2} + \frac{1}{2} (L\Pi_{1(+)}+ L\Pi_{1(-)} +2\lambda_{\sigma}) := \frac{1-2\sigma}{2} + \frac{\Omega_{(+)}}{2}, \nonumber\\ b &=& \frac{1-2\sigma}{2} + \frac{1}{2} (L\Pi_{1(+)} - L\Pi_{1(-)} +2\lambda_{\sigma}) := \frac{1-2\sigma}{2} + \frac{\Delta_{(+)}}{2}, \nonumber\\ c &=& 1 + L \Pi_{1(+)}.\end{aligned}$$ The solution is bounded at $x = \infty \, (\xi = 0)$ since $\Pi_{1(+)}$ is positive. In the opposite limit $x = - \infty \, (\xi = - \infty)$, using the connection formula 9.132 of Ref. 
[@gr-table], we find the asymptotic form of the solution $$\begin{aligned} \varphi_{(\sigma)} = (-1)^{\frac{L}{2} \Pi_{1(+)}} \Bigl[ (- \xi)^{-\frac{L}{2} \Pi_{1(-)}} \frac{\Gamma (c) \Gamma(b -a)}{\Gamma(b) \Gamma(c-a)} + (- \xi)^{\frac{L}{2} \Pi_{1(+)}} \frac{\Gamma (c) \Gamma(a -b)}{\Gamma(a) \Gamma(c-b)} \Bigr]. \label{mag sa con}\end{aligned}$$ The first term exponentially decreases while the second term increases. As $a > b > 0$ and $c > b$, the condition for bound states is that $\Gamma (c-b)$ should be singular, which leads to a finite discrete spectrum $$\begin{aligned} c - b = \frac{L}{2} (\Pi_{1(+)} + \Pi_{1(-)}) - \lambda_{\sigma} + \frac{1+ 2 \sigma}{2} = - n, \quad (n =0, 1, \cdots), \label{quan2}\end{aligned}$$ with $n + (1+ 2\sigma)/2 <\lambda_{\sigma}$. Now we apply the method of the inverse scattering matrix in Sec. \[sec 4\] and the Appendix. The asymptotic solutions (\[jos fn3\]) are $\xi^{L\Pi_{1(+)}/2}$ at $x = \infty$ and $(-\xi)^{-L\Pi_{1(-)}/2}$ at $x = - \infty$, so Eq. (\[mag sa con\]) connects the solutions in terms of the Jost functions (\[jos fn3\]). Then the inverse scattering matrix is $$\begin{aligned} {\cal M} = \frac{\Gamma (b) \Gamma(c-a)}{\Gamma(a) \Gamma (c-b)}, \label{sc-Bsa}\end{aligned}$$ where the factor $\Gamma (a-b)/\Gamma(b-a)$, which depends only on $\Pi_{1(-)}$, can be gauged away by choosing $A_2 = BL (\tanh (x/L) + 1)$ and will not be included hereafter. In the limit of $qBL \gg |\omega|$, the inverse scattering matrix (\[sc-Bsa\]) reduces to $1/\Gamma(-p)$ with $p$ from Eq. (\[quan1\]), modulo a term that is independent of the number of states and is to be regulated away through renormalization of the effective action. Applying the identity 8.334 of Ref. 
[@gr-table], $ \Gamma (1-x) \Gamma (x) = \pi / \sin(\pi x)$, to negative values of $$\begin{aligned} c-a &=& \frac{1+ 2 \sigma}{2} + \frac{1}{2} (L\Pi_{1(+)} - L\Pi_{1(-)} - 2\lambda_{\sigma}) := \frac{1+2\sigma}{2} + \frac{\Delta_{(-)}}{2}, \nonumber\\ c-b &=& \frac{1+2\sigma}{2} + \frac{1}{2} (L\Pi_{1(+)} + L\Pi_{1(-)} - 2\lambda_{\sigma}) := \frac{1+ 2\sigma}{2} + \frac{\Omega_{(-)}}{2},\end{aligned}$$ we may write the inverse scattering matrix as $$\begin{aligned} {\cal M} = \frac{\Gamma (\frac{1-2\sigma}{2} + \frac{\Delta_{(+)}}{2}) \Gamma(\frac{1-2\sigma}{2} - \frac{\Omega_{(-)}}{2})}{\Gamma(\frac{1-2\sigma}{2} - \frac{\Delta_{(-)}}{2}) \Gamma (\frac{1-2\sigma}{2} + \frac{\Omega_{(+)}}{2})}. \label{sc-Bsa2}\end{aligned}$$ Here we have deleted again the overall factor that is to be regulated away through the renormalization procedure. Finally, the gamma-function regularization leads to the renormalized effective actions in the proper-time integral $$\begin{aligned} {\cal L}^{(1)} = \mp \frac{1+ 2 |\sigma|}{2}\int \frac{d \tilde{\omega}}{(2\pi)} \frac{d^2 {\bf k}_{\perp}}{(2 \pi)^2} \int_{0}^{\infty} \frac{ds}{s} \Bigl(e^{- \frac{\Omega_{(+)}}{2}s} + e^{\frac{\Delta_{(-)}}{2}s} - e^{\frac{\Omega_{(-)}}{2}s} - e^{- \frac{\Delta_{(+)}}{2}s} \Bigr) F_{\sigma} (s), \label{sa ma act}\end{aligned}$$ with the same spectral function (\[sp fun\]). Here $\Omega_{(-)} <0 $ and $\Delta_{(-)} < 0$ and we have taken the Wick-rotation $t = - i \tilde{t}$ and $\omega = i \tilde{\omega}$ and used the Schwinger prescription for renormalization. It would be interesting to compare and to show the equivalence of Eq. (\[sa ma act\]) with Eq. (18) of Ref. [@dunne-hall98-2] from the resolvent Green function that has a single integral of proper time without the multiple integral of momenta and energy. Following Ref. 
[@dunne-hall98], the resolvent Green function from the solution (\[saut sol\]) and another independent solution would result in the effective action (\[sa ma act\]) without the second and the last exponentials (those containing $\Delta_{(\pm)}$). We briefly explain a scheme for finding an approximate effective action for a general magnetic field configuration when exact solutions are not known. The effective action may be used for measuring the inhomogeneity effect on birefringence in a strong magnetic field. We assume a spatially localized field $B(x)$ along the $z$ direction and the gauge field $A_2$ such that $B(x) = \partial_x A_2(x)$. Then the spin-diagonal Fourier component of the squared Dirac or Klein-Gordon equation will become $$\begin{aligned} \Bigl[ \partial_x^2 - \Pi_{1}^2 (x) \Bigr] \varphi_{(\sigma)} (x) = 0, \label{mag eq3}\end{aligned}$$ where $$\begin{aligned} \Pi_{1}^2 (x) = (k_y - q A_2(x))^2 - \omega^2 + m^2 + k_z^2 - 2 \sigma q B(x).\end{aligned}$$ The uniform semiclassical approximation for electric fields [@dunne-hall98; @kly10] suggests transforming (\[mag eq3\]) into the form $$\begin{aligned} \Bigl[\partial_{\xi}^2 - \xi^2 + \frac{{\cal S}_{(\sigma)}}{\pi} + \frac{1}{(\partial_{x} \xi)^{3/2}} \partial_x^2 (\frac{1}{\sqrt{\partial_x \xi}}) \Bigr] w_{(\sigma)} (\xi) = 0, \label{b app eq}\end{aligned}$$ where $$\begin{aligned} w_{(\sigma)} (\xi) = \sqrt{\partial_x \xi} \varphi_{(\sigma)} (x), \quad \Bigl(\xi^2 - \frac{{\cal S}_{(\sigma)}}{\pi} \Bigr) (\partial_x \xi)^2 = \Pi_{1}^2. \label{b ins}\end{aligned}$$ The charged particle undergoes a periodic motion in the region $\Pi_{1}^2 \leq 0$, so the integration of Eq. (\[b ins\]) over one period determines the action $$\begin{aligned} {\cal S}_{(\sigma)} = \oint \sqrt{- \Pi_{1}^2 (x)}dx.\end{aligned}$$ Thus, in the approximation of neglecting the last term, Eq. (\[b app eq\]) has the same form as Eq. 
(\[mag eq\]) for the constant magnetic field and approximately has the solutions $D_p (\sqrt{2} \xi)$, $D_p(- \sqrt{2}\xi)$, $D_{-p-1} (i \sqrt{2} \xi)$, and $D_{-p-1} (-i \sqrt{2}\xi)$ with $$\begin{aligned} p = - \frac{1}{2} + \frac{{\cal S}_{(\sigma)}}{\pi}. \label{b app p}\end{aligned}$$ Then the inverse scattering matrix is given by Eq. (\[sc mat\]) with $p$ in Eq. (\[b app p\]). As $B(x) \leftrightarrow - B(x)$ under $k_y \leftrightarrow - k_y$ and $\sigma \leftrightarrow - \sigma$, we find the unrenormalized effective action in a symmetric form $$\begin{aligned} {\cal L}^{(1)} &=& \mp \frac{1}{2} \sum_{\sigma} \int \frac{d \tilde{\omega}}{(2\pi)} \frac{d^2 {\bf k}_{\perp}}{(2 \pi)^2} \Bigl[ \ln \Gamma (- p (B) ) + \ln \Gamma (- p (-B) ) \Bigr]. \label{app B-eff}\end{aligned}$$ We comment on how to estimate or improve the error in the approximate effective action (\[app B-eff\]). Treating the last term of Eq. (\[b app eq\]) as a perturbation, the exact solution is the sum of the homogeneous part, an approximate solution, and the inhomogeneous part, which is an integral equation of two independent approximate solutions and the perturbation. In the in-out formalism the effective action, which is the sum of logarithm of the Bogoliubov coefficient or the inverse scattering matrix for each momentum and spin, becomes a functional of the exact solution to Eq. (\[mag eq3\]). Thus this systematic improvement combined with an appropriate renormalization scheme allows us to estimate the error in Eq. (\[app B-eff\]) and also holds true for the temporally or spatially localized electric fields in Refs. [@kly08; @kly10], which is beyond the scope of this paper. Electromagnetic Duality of QED Actions {#sec 6} ====================================== To show the electromagnetic duality, we recapitulate the main results of Refs. 
[@kim09; @kly10] for the constant electric field and the Sauter-type electric field $E(z) = E \, {\rm sech}^2(z/L)$ along the $z$ direction in the space-dependent gauge $$\begin{aligned} A_{\mu} = (A_0(z), 0, 0, 0).\end{aligned}$$ First, in the constant electric field with $A_0 (z) = - Ez$, the Bogoliubov coefficient in the in-out formalism is given by $$\begin{aligned} \alpha_{(r)} = \sqrt{2 \pi} \frac{e^{-i(2p^*+p+1) \frac{\pi}{2}}}{\Gamma (-p)}, \quad p = - \frac{1 + r}{2} + i \frac{m^2 + {\bf k}_{\perp}^2}{2qE}.\end{aligned}$$ Here $r$ denotes the eigenvalues of $\sigma^{03}= (i/4) [\gamma^{0}, \gamma^{3}]$. Thus, we find the effective action $$\begin{aligned} {\cal L}^{(1)} &=& \pm i \frac{qE}{2(2 \pi)} \sum_{\sigma r} \int \frac{d^2 {\bf k}_{\perp}}{(2\pi)^2} \int_{0}^{\infty} \frac{ds}{s} \frac{e^{-p^*s}}{1 - e^{-s}} \nonumber\\ &=& \mp \frac{1+ 2|\sigma|}{2} \frac{(qE)^2}{(2 \pi)^2} \int_{0}^{\infty} \frac{ds}{s^2} e^{- \frac{m^2 s}{2qE}} (i F_{\sigma} (is)).\end{aligned}$$ In the second line the Schwinger prescription has been used to obtain the renormalized effective action. The $\Gamma (-p^*)$ in $\alpha_{(r)}^*$ for the effective action is the same as that from the inverse scattering matrix (\[sc mat\]), provided that $E = i B$ and $\omega = i \tilde{\omega}$. This implies that the unrenormalized and renormalized effective actions for the electric field are dual to those for the magnetic field. Note that $E = i B$ is consistent with the duality of the convergent series of the Heisenberg-Euler and Schwinger effective action [@chopak] and that the electromagnetic duality can also be shown in the resolvent Green function method by comparing Eq. (\[res-act\]) with Eq. (22) of Ref. [@dunne-hall98]. Second, for the Sauter-type electric field the space-dependent gauge field is $A_{0} = - EL \, \tanh (z/L)$. The effective action in the electric field $E(t) = E \, {\rm sech}^2(t/T)$ was studied in Refs. 
[@kly08; @dunne-hall98] and in the electric field $E(z) = E \, {\rm sech}^2(z/L)$ in Ref. [@kly10]. The spin-diagonal Fourier component of the squared Dirac or Klein-Gordon equation has the asymptotic longitudinal momentum along the $z$ direction $$\begin{aligned} P_{3(\pm)} (E) = \sqrt{(\omega \mp qEL)^2 - (m^2+ k_x^2 + k_y^2)}.\end{aligned}$$ Then, the Bogoliubov coefficient $\alpha_{(r)}^*$ in Eq. (20) of Ref. [@kly10], under $E = iB$ and $\omega=i \tilde{\omega}$ and under the interchange of $\tilde{\omega} \leftrightarrow -k_y$ and $k_x \leftrightarrow k_z$ and in the Riemann sheet [@riemann], has the same arguments for the gamma functions in Eq. (\[sc-Bsa\]) $$\begin{aligned} P_{3 (\pm)} (E) = - i \Pi_{1(\pm)} (B), \quad \lambda_{r} (E) = -i \lambda_{\sigma} (B)\end{aligned}$$ and thus $$\begin{aligned} \Omega_{(\pm)} (E) = - i \Omega_{(\pm)} (B), \quad \Delta_{(\pm)} (E) = - i \Delta_{(\pm)} (B).\end{aligned}$$ The fact that the Bogoliubov coefficient in the Sauter-type electric field has the same form as the inverse scattering matrix (\[sc-Bsa\]) in the Sauter-type magnetic field shows that the unrenormalized and renormalized effective actions can be analytically continued from one field to the other and are dual to each other under $E = iB$. Conclusion ========== In this paper we have proposed a method for finding the one-loop effective action in magnetic field backgrounds in the in-out formalism, in which the effective action is the logarithm of the Bogoliubov coefficient in the second quantized field theory. The in-out formalism may have a further extension to magnetic fields when the inverse scattering matrix is used for the Bogoliubov coefficient. As shown in Secs. 
\[sec 4\], \[sec 5\] and the Appendix, the Jost functions for off-shell solutions give the inverse scattering matrix, which is the ratio of the amplitude of the exponentially increasing part to the amplitude of the exponentially decreasing part and plays a role analogous to that of the Bogoliubov coefficient for electric fields in the space-dependent gauge. In fact, in the space-dependent gauge the Bogoliubov coefficient is determined by the Jost functions for tunneling solutions through barriers in electric fields while the inverse scattering matrix is determined by the Jost functions for off-shell solutions in magnetic fields, both of which are the Wronskian for the solutions with the required behaviors in the two asymptotic regions. We have illustrated the method by computing the effective actions in a constant magnetic field and a localized magnetic field of Sauter type. As the Bogoliubov coefficient for the electric field and the inverse scattering matrix for the magnetic field are determined by Jost functions, they can be analytically continued into each other under the electromagnetic duality, and the QED effective actions, renormalized or unrenormalized, exhibit duality between electric and magnetic fields of the same profile. We have explicitly shown the electromagnetic duality of QED effective actions in constant and Sauter-type electric and magnetic fields. As QED effective actions in time-varying or spatially localized fields are nontrivial, the new method in the in-out formalism may provide an alternative scheme for understanding the vacuum structure. The author would like to thank Don N. Page, Hyun Kyu Lee, and Yongsung Yoon for early collaborations, and W-Y. Pauchy Hwang and Wei-Tou Ni for helpful discussions. He also thanks Misao Sasaki for the warm hospitality at Yukawa Institute for Theoretical Physics, Kyoto University and Christian Schubert at Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo. 
This work was supported in part by Basic Science Research Program through the National Research Foundation (NRF) funded by the Korea Ministry of Education, Science and Technology (2011-0002-520) and in part by National Science Council Grant (NSC 100-2811-M-002-012) by the Taiwan government. Jost Functions for QED in Magnetic Field {#sec mag-ga} ======================================== The spin-diagonal Fourier component of the squared Dirac or Klein-Gordon equation in a magnetic field becomes the one-dimensional Schrödinger equation for a bounded system $$\begin{aligned} [\partial_{z}^2 - {\kappa}^2 + V (z) ] \varphi_{\kappa} (z) = 0, \label{B-jos3}\end{aligned}$$ where $V \geq 0$ and $- {\kappa}^2 + V(\pm \infty) = - {\kappa}^2_{(\pm)}$. We may introduce the exponentially decreasing solutions in each asymptotic region $$\begin{aligned} f(z, \kappa) \stackrel{z = \infty}{\longrightarrow} \frac{e^{-\kappa_{(+)} z}}{\sqrt{2 \kappa_{(+)}}}, \quad g(z, \kappa) \stackrel{z = - \infty}{\longrightarrow} \frac{e^{\kappa_{(-)} z}}{\sqrt{2 \kappa_{(-)}}}, \label{jos fn3}\end{aligned}$$ and the exponentially increasing solutions $f(z,-\kappa)$ and $g(z,-\kappa)$ in their region as independent solutions. Each set of solutions satisfies the Wronskian $$\begin{aligned} [f(z,\kappa), f(z, -\kappa)] = 1, \quad [g(z,\kappa), g(z, -\kappa)] = -1. \label{jos wr3}\end{aligned}$$ We may then express each set of solutions in terms of the other set as $$\begin{aligned} f(z, \kappa) = C_1 (\kappa) g(z, \kappa) + C_2 (\kappa) g(z, - \kappa), \nonumber\\ g(z, \kappa) = \tilde{C}_1 (\kappa) f(z, \kappa) + \tilde{C}_2 (\kappa) f(z, -\kappa), \label{jos rel3}\end{aligned}$$ where the Jost functions are determined from (\[jos wr3\]) $$\begin{aligned} C_1 (\kappa) &=& - [f(z,\kappa), g(z,-\kappa)] = - \tilde{C}_1 (-\kappa), \nonumber\\ C_2 (\kappa) &=& [f(z,\kappa), g(z,\kappa)] = \tilde{C}_2 (-\kappa). 
\label{bog coef3}\end{aligned}$$ It follows that $$\begin{aligned} C_2 (\kappa) C_2 (- \kappa) - C_1 (\kappa) C_1 (- \kappa) = 1. \label{jos bog3}\end{aligned}$$ Finally, we define the inverse scattering matrix as the ratio of the amplitude for the exponentially increasing part to the amplitude for the exponentially decreasing part, which is the inverse of the scattering matrix [@taylor] and now is given by $$\begin{aligned} {\cal M}_{\kappa} = \frac{C_2 (\kappa)}{C_1(\kappa)}. \label{in sc mat}\end{aligned}$$ [99]{} W. Heisenberg and H. Euler, Z. Phys. [**98**]{}, 714 (1936); V. Weisskopf, K. Dan. Vidensk. Selsk. Mat. Fy. Medd. [**14**]{}, 1 (1936). J. Schwinger, Phys. Rev. [**82**]{}, 664 (1951). W. Dittrich and M. Reuter, [*Effective Lagrangians in Quantum Electrodynamics*]{}, Lect. Notes Phys. [**220**]{}, 1 (Springer, Berlin, 1985); E. S. Fradkin, D. M. Gitman, and S. M. Shvartsman, [*Quantum Electrodynamics with Unstable Vacuum*]{} (Springer, Berlin, 1991); W. Dittrich and H. Gies, [*Probing the Quantum Vacuum*]{}, Springer Tracts Mod. Phys. [**166**]{}, 1 (Springer, Berlin, 2000); C. Schubert, Phys. Rept. [**355**]{}, 73 (2001); G. V. Dunne, in [*From Fields to Strings: Circumnavigating Theoretical Physics*]{}, edited by M. A. Shifman, A. Vainshtein, and J Wheater (World Scientific, Singapore, 2004) Vol. I, pp. 445-522 \[hep-th/0406216\]. S. Blau, M. Visser, and A. Wipf, Int. J. Mod. Phys. [**A 6**]{}, 5409 (1991); for review, see E. Elizalde, S. D. Odintsov, A. Romeo, A. A. Bytsenk, and S. Zerbini, [*Zeta Regularization Techniques with Applications*]{} (World Scientific, Singapore, 1994). M. Reuter, M. G. Schmidt, and C. Schubert, Ann. Phys. (N.Y.) [**259**]{}, 313 (1997). D. Cangemi, E. D’Hoker, and G. V. Dunne, Phys. Rev. D [**52**]{}, R3163 (1995). G. V. Dunne and T. M. Hall, Phys. Lett. B [**419**]{}, 322 (1998). G. V. Dunne and T. M. Hall, Phys. Rev. D [**58**]{}, 105022 (1998). H. M. Fried and R. P. Woodard, Phys. Lett. B [**524**]{}, 233 (2002). J. 
Schwinger, Proc. Natl Acad. Sci. U.S.A. [**37**]{}, 452 (1951) . B. S. DeWitt, Phys. Rept. [**19**]{}, 295 (1975); [*The Global Approach to Quantum Field Theory*]{} (Oxford University Press, New York, 2003) Vol. 1 and Vol. 2. J. Ambj[o]{}rn, R. J. Hughes, and N. K. Nielsen, Ann. Phys. [**150**]{}, 92 (1983); A. I. Nikishov, Zh. Eksp. Teor. Fiz. [**123**]{}, 211 (2003) \[Sov. Phys. JETP [**96**]{}, 180 (2003)\]. S. P. Kim, H. K. Lee, and Y. Yoon, Phys. Rev. D [**78**]{}, 105013 (2008). S. P. Kim, AIP. Conf. Proc. [**1150**]{}, 95 (2009). S. P. Kim, H. K. Lee, and Y. Yoon, Phys. Rev. D [**82**]{}, 025015 (2010). S. P. Kim, H. K. Lee, and Y. Yoon, Phys. Rev. D [**82**]{}, 025016 (2010). A. I. Nikishov, Nucl. Phys. B [**21**]{}, 346 (1970); T. Damour, in [*Proceedings of the First Marcel Grossmann Meeting on General Relativity*]{}, edited by R. Ruffini (North-Holland, Amsterdam, 1977) pp. 459-482; A. I. Nikishov, Tr. Fiz. Inst. im P. N. Lebedeva, Akad. Nauk SSSR [**111**]{}, 152 (1979) \[J. Sov. Laser Res. [**6**]{}, 619 (1985)\]; A. Hansen and F. Ravndal, Phys. Scr. [**23**]{}, 1036 (1981). J. R. Taylor, [*Scattering Theory*]{} (Dover Publications, New York, 2000). Y. M. Cho and D. G. Pak, Phys. Rev. Lett. [**86**]{}, 1947 (2001); W. S. Bae, Y. M. Cho, and D.G. Pak, Phys. Rev. D [**64**]{}, 017303 (2001). S. P. Kim and D. N. Page, Phys. Rev. D [**65**]{}, 105002 (2002); [*ibid.*]{} [**73**]{}, 065020 (2006). S. S. Schweber, [*An Introduction to Relativisitic Quantum Field Theory*]{} (Dover Publications, New York, 2005) pp. 99-107. In the Riemann sheet $- \pi \leq {\rm arg z} < \pi$, $i = e^{i \pi/2}$, $1 = e^{i 0 \pi}$, $- i = e^{- i \pi/2}$, and $-1 = e^{-i \pi}$, so $\sqrt{-1} = e^{- i \pi/2} = -i$. I. S. Gradshteyn and I. M. Ryzhik, [*Table of Integrals, Series, and Products*]{} (Academic Press, San Diego, 1994). S. P. Kim, “Probing the Vacuum Structure of Spacetime,” \[arXiv:1102.4154\[hep-th\]\].
--- abstract: 'The quenching of star formation in satellite galaxies is observed over a wide range of dark matter halo masses and galaxy environments. In the recent Guo et al. (2011) and Fu et al. (2013) semi-analytic + N-body models, the gaseous environment of the satellite galaxy is governed by the properties of the dark matter subhalo in which it resides. These properties depend on the resolution of the N-body simulation, leading to a divergent fraction of quenched satellites in high- and low-resolution simulations. Here, we incorporate an analytic model to trace the subhaloes below the resolution limit. We demonstrate that we then obtain better-converged results between the Millennium I and II simulations, especially for the satellites in the massive haloes ($\rm log M_{halo}=[14,15]$). We also include a new physical model for the ram-pressure stripping of cold gas in satellite galaxies. However, we find very clear discrepancies with observed trends in quenched satellite galaxy fractions as a function of stellar mass at fixed halo mass. At fixed halo mass, the quenched fraction of satellites does not depend on stellar mass in the models, but increases strongly with mass in the data. In addition to the over-prediction of low-mass passive satellites, the models also predict too few quenched [*central*]{} galaxies with low stellar masses, so the problems in reproducing quenched fractions are not purely of environmental origin. Further improvements to the treatment of the gas-physical processes regulating the star formation histories of galaxies are clearly necessary to resolve these problems.' 
author: - | Yu Luo$^{1,2,4}$ [^1], Xi Kang$^{1}$, Guinevere Kauffmann$^{2}$, Jian Fu$^{3}$\ $^1$Purple Mountain Observatory, the Partner Group of MPI für Astronomie, 2 West Beijing Road, Nanjing 210008, China\ $^2$Max-Planck-Institut für Astrophysik, 85741 Garching, Germany\ $^3$Key Laboratory for Research in Galaxy and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences,\ 80 Nandan Road, Shanghai 200030, China\ $^4$Graduate School, University of the Chinese Academy of Sciences, 19A, Yuquan Road, Beijing 100049, China\ title: 'Resolution-independent modeling of environmental effects in semi-analytic models of galaxy formation that include ram-pressure stripping of both hot and cold gas' --- galaxies: evolution - galaxies: formation - stars: formation - galaxies: ISM Introduction ============ Many of the observed properties of galaxies, such as their colors, morphologies, star formation rates (SFR) and gas-to-star fractions, show a strong dependence on their environment (e.g. Kauffmann et al. 2004; Bamford et al. 2009; Boselli et al. 2014). Galaxies in clusters or groups tend to have redder colors, bulge-dominated morphologies, lower gas-to-star ratios and less star formation than isolated galaxies of similar mass (Butcher & Oemler 1978; Dressler 1980; Balogh et al. 2004; Baldry et al. 2006). This dependence is believed to arise from the interplay between the gas component of galaxies and their environment; unlike the stellar component, the gas can be easily affected by the ambient pressure in galaxy groups or clusters. When a galaxy moves through a cluster, the ram pressure (hereafter RP, Gunn & Gott 1972) of the intra-cluster medium (ICM) acts to strip both the hot gas reservoir and the cold interstellar gas, and this process plays an important role in the star formation history of the galaxy. Many studies have concluded that this stripping process is the main cause of the increased fraction of S0 galaxies in rich clusters (e.g. 
Biermann & Tinsley 1975; Dressler 1980; Whitmore, Gilmore & Jones 1993). In the last decade, observations have revealed direct evidence for ram-pressure stripping of gas in cluster galaxies in the form of long gaseous tails trailing behind these systems (e.g. Kenney et al. 2004; Crowl et al. 2005; Sakelliou et al. 2005; Machacek et al. 2006). The same process is also invoked as an explanation of the depletion of the cold gas in galaxies in clusters (Boselli & Gavazzi 2006), often referred to as “HI deficiency” (e.g. Haynes & Giovanelli 1984; Solanes et al. 2001; Hughes & Cortese 2009). To understand the effects of RP stripping on the galaxy gas component in detail, numerical hydrodynamical simulations are the ideal tools. Abadi et al. (1999) presented the first study of RP stripping using an idealized SPH simulation, followed by more realistic hydrodynamical simulations (e.g., Roediger & Hensler 2005; Roediger & Br[ü]{}ggen 2007; McCarthy et al. 2008; Tonnesen & Bryan 2009; Tecce et al. 2010). These studies showed that RP can strip a significant amount of hot gas and cold gas from galaxies and can quickly reduce the total SFR (e.g., Tonnesen & Bryan 2012). However, it has also been argued (Bekki 2014) that star formation can be enhanced by RP, and that the reduction/enhancement will depend on model parameters such as halo mass, peri-centric distance with respect to the centre of the cluster, etc. In semi-analytic models (SAMs) of galaxy formation, the descriptions of the effect of RP on the gas component are very simplistic. In early versions, satellite galaxies were assumed to lose their hot gaseous haloes immediately after falling into a bigger halo (e.g., Kauffmann et al. 1993; Somerville et al. 1999; Cole et al. 2000; Kang et al. 2005; Bower et al. 2006; De Lucia & Blaizot 2007, hereafter DLB07). It was then found (e.g., Kang & van den Bosch 2008; Kimm et al. 
2009) that the instantaneous stripping of the hot gas causes the over-prediction of red satellite galaxies in the clusters. In later models (e.g., Kang & van den Bosch 2008; Font et al. 2008; Weinmann et al. 2010; Guo et al. 2011, hereafter Guo11), the stripping of hot gas is treated in a more continuous way, i.e. the mass loss rate of the hot gas halo is assumed to be the same as that of the dark matter subhalo in which the satellite resides. Including only the stripping of hot halo gas is not physically plausible; the stripping of cold gas in galaxies is also needed to account for the observed cold gas depletion in satellites (Fabello et al. 2012; Li et al. 2012b; Zhang et al. 2013). Stripping of cold gas has been included in some models (Okamoto & Nagashima 2003; Lanzoni et al. 2005), and has been found to have a negligible effect on the colors and SFRs of satellite galaxies. Because the stripping of gas in the satellite depends on the competition between the RP from the hot gas and the self-gravity of the satellite, it is important to model the local environment of satellite galaxies accurately. In N-body simulations, the local environment at the subhalo level will be dependent on the resolution of the simulation. In a low-resolution simulation, the evolution of a subhalo cannot be traced to very low masses because of the particle-number threshold used to identify subhaloes, and at a given mass a subhalo in a low-resolution simulation contains fewer particles, making its identification more difficult. In the central region of a halo, the identification of subhaloes is even more challenging because the background density is high (Onions et al. 2012). We also note that the physical descriptions of processes such as gas cooling, feedback, stripping, etc., are very often tied to the properties of the subhalo (Springel et al. 2001; Kang et al. 2005; Bower et al. 2006; Guo et al. 2011), and these prescriptions will then also be dependent on resolution. 
This will lead to non-convergent results between simulations of different resolution, as found in recent studies (Fu et al. 2013; Lagos et al. 2014; Guo & White 2014), and it also raises the concern that the high fraction of passive galaxies in low-resolution simulations is a consequence of resolution effects. In this paper, we study the quenched fraction of galaxies in different environments with and without the effects of RP stripping. We adopt the version of the L-Galaxies model described in Fu et al. (2013, hereafter Fu13), which is a recent version of the Munich semi-analytic model that includes the radial distribution of molecular and atomic gas in galaxy disks. This model allows us to treat the cold gas stripping as a function of radius in the galaxy. We improve the Fu13 model by 1) using a consistent description of physics for satellites whose subhaloes cannot be resolved. Our resolution-independent prescriptions can also be applied to other SAMs based on merger trees from N-body simulations. 2) We add a model for the effect of RP stripping on the cold gas in galactic disks. These improvements allow us to model ram pressure stripping of cold gas at different radii in disks, and to study how the environment can affect the HI, $\h2$, and star formation. This paper is organized as follows. In Section 2, we first briefly summarize the L-Galaxies model and then describe the changes we make to the Fu13 model. In Section 3, we analyze stellar/gas mass functions and galaxy clustering, and we compare our model results with recent observations of the properties of satellite galaxies, such as their specific star formation rates and gas fractions. We test the degree to which our new models give convergent results between two N-body simulations of different resolution, and we analyze the effect of RP stripping in cluster environments. In Section 4, we summarize our results and discuss possibilities for future work. 
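For orientation, the Gunn & Gott (1972) criterion that underlies such cold-gas stripping prescriptions can be sketched for exponential stellar and gas disks: gas is removed at radii where the ram pressure $\rho_{\rm ICM} v^2$ exceeds the gravitational restoring force per unit area $2\pi G \Sigma_{\rm star}(R)\,\Sigma_{\rm gas}(R)$. All numerical values below are illustrative assumptions, not parameters of the model:

```python
import math

MSUN_G = 1.989e33    # solar mass [g]
PC_CM = 3.086e18     # parsec [cm]
G_CGS = 6.674e-8     # Newton's constant [cm^3 g^-1 s^-2]

def stripping_radius_kpc(rho_icm, v, sigma_star0, sigma_gas0, r_d_kpc):
    """Gunn & Gott (1972) stripping radius for exponential stellar and gas
    disks with a common scale length r_d_kpc: gas is removed wherever
    rho_icm * v^2 > 2 pi G Sigma_star(R) Sigma_gas(R).
    rho_icm [g/cm^3], v [cm/s], central surface densities [Msun/pc^2]."""
    to_cgs = MSUN_G / PC_CM**2
    restore0 = 2 * math.pi * G_CGS * (sigma_star0 * to_cgs) * (sigma_gas0 * to_cgs)
    ram = rho_icm * v * v
    if ram >= restore0:
        return 0.0  # ram pressure wins everywhere: disk fully stripped
    # Sigma_star * Sigma_gas ~ exp(-2R/r_d), so solve for the crossing radius
    return 0.5 * r_d_kpc * math.log(restore0 / ram)

# illustrative satellite: Sigma_star,0 = 100, Sigma_gas,0 = 10 Msun/pc^2, r_d = 3 kpc
outskirts = stripping_radius_kpc(1e-28, 5.0e7, 100.0, 10.0, 3.0)  # ~500 km/s
core = stripping_radius_kpc(1e-27, 1.0e8, 100.0, 10.0, 3.0)       # ~1000 km/s
print(f"stripping radius: {outskirts:.1f} kpc (outskirts), {core:.1f} kpc (core)")
```

Only the gas outside the stripping radius is removed, which is why a disk model divided into radial rings, such as Fu13, is well suited to treating cold-gas stripping ring by ring.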
The Model ========= In this section, we briefly introduce the N-body simulations used in this work as well as the L-Galaxies models, and then describe in detail the main changes to the physics made with respect to the previous models. N-body simulations and L-Galaxies model --------------------------------------- Our work in this paper is based on the Munich semi-analytic galaxy formation model, L-Galaxies, which has been developed over more than two decades (e.g., White & Frenk 1991; Kauffmann et al. 1993; Kauffmann et al. 1999; Springel et al. 2001; Croton et al. 2006; De Lucia & Blaizot 2007; Guo et al. 2011 & 2013; Fu et al. 2010 & 2013; Henriques et al. 2015). The L-Galaxies model has been implemented on two main N-body cosmological simulations: the Millennium Simulation (hereafter MS, Springel et al. 2005) and the Millennium-II simulation (hereafter MS-II, Boylan-Kolchin et al. 2009). The two simulations have the same number of particles and the same cosmological parameters, but the MS-II has 1/125 the volume of the MS and 125 times higher mass resolution. Angulo & White (2010) developed a method to rescale simulations from the WMAP1 to the WMAP7 cosmology. For the MS, the box size is rescaled from $500~\rm{Mpc}~h^{-1}$ to $521.555~\rm{Mpc}~h^{-1}$, and the particle mass is changed from $8.6\times10^8\rm{M_\odot}~h^{-1}$ to $1.06\times10^9\rm{M_\odot}~h^{-1}$; for the MS-II, the box size is rescaled to $104.311~\rm{Mpc}~h^{-1}$ and the particle mass to $8.50\times10^6\rm{M_\odot}~h^{-1}$. In this paper, we follow Angulo & White (2010) and use the runs appropriate for the WMAP7 cosmology, with parameters as follows: $\Omega_\Lambda=0.728, ~\Omega_{\rm m}=0.272,~\Omega_{\rm{baryon}}=0.045,~\sigma_8=0.807$ and $h=0.704$. In SAMs, galaxies are assumed to form at the centres of the dark matter haloes. 
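As a quick consistency check, the quoted particle masses follow directly from the box sizes, the common particle number ($2160^3$ in both runs) and $\Omega_{\rm m}$; the sketch below assumes $\rho_{\rm crit}=2.775\times10^{11}\,h^2\,{\rm M_\odot\,Mpc^{-3}}$:

```python
RHO_CRIT = 2.775e11   # critical density [h^2 Msun Mpc^-3]
OMEGA_M = 0.272       # WMAP7 matter density parameter
N_PART = 2160**3      # particle number of both the MS and the MS-II

def particle_mass(box_mpc_h):
    """Particle mass [Msun/h] for a periodic box of side box_mpc_h [Mpc/h]."""
    return OMEGA_M * RHO_CRIT * box_mpc_h**3 / N_PART

mp_ms = particle_mass(521.555)    # ~1.06e9 Msun/h (rescaled MS)
mp_ms2 = particle_mass(104.311)   # ~8.50e6 Msun/h (rescaled MS-II)
# 20-particle detection threshold -> minimum resolvable subhalo mass
print(f"MS:    m_p = {mp_ms:.3e} Msun/h, min subhalo ~ {20 * mp_ms:.2e} Msun/h")
print(f"MS-II: m_p = {mp_ms2:.3e} Msun/h, min subhalo ~ {20 * mp_ms2:.2e} Msun/h")
```

The factor of 125 in mass resolution follows from the factor of 5 in box size at fixed particle number, and the corresponding 20-particle subhalo limits (roughly $2\times10^{10}$ versus $2\times10^{8}\,{\rm M_\odot}\,h^{-1}$) are what drive the resolution dependence discussed in Section 1.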
The evolution of the haloes is followed using merger trees from the N-body cosmological simulations, and the models describe the physical processes relevant to the baryonic matter, e.g. re-ionization, hot gas cooling and cold gas infall, star formation and metal production, SN feedback, hot gas stripping and tidal disruption in satellites, galaxy mergers, bulge formation, black hole growth, and AGN feedback. The detailed descriptions of these physical processes can be found in Section 3 of Guo11. In the L-Galaxies model, galaxies are classified into three types. Type 0 galaxies are those located at the centre of the main haloes found with a Friends-of-Friends (FoF) algorithm in the simulation outputs. A Type 0 galaxy is a “central” galaxy with its own hot gaseous halo, and the hot gas is distributed isothermally within the dark matter halo. The hot gas can cool onto the central galaxy disk through a “cooling flow” or a “cold flow”, and the cold gas is the source of star formation. An instantaneous recycling approximation is adopted for mass return from evolved stars and for the injection of metals into the ISM; this implies that the massive stars explode as SN at the time when they form. [^2] The SN feedback energy reheats part of the disk cold gas into the hot gaseous halo of the central galaxy. If the SN energy is large enough, part of the hot gaseous halo can be ejected out of the dark matter potential and become ejected gas. With the growth of the dark matter halo, the ejected gas will be reincorporated into the central halo. Both Type 1 and Type 2 galaxies are regarded as satellite galaxies in the model. A Type 1 galaxy is located at the centre of a subhalo, which is an overdensity within the FoF halo (Springel et al. 2001). The haloes/subhaloes contain at least 20 bound particles for both MS and MS-II (Springel et al. 2005; Boylan-Kolchin et al. 2009). Boylan-Kolchin et al. 
(2009) pointed out that above this particle number (20) the abundance of subhaloes in the two simulations differs by only about 30%, and with more than 50-100 particles the results agree much better. Type 1 galaxies have their own hot gaseous haloes, so gas can cool onto these galaxies. Cold gas reheated by SN explosions in a Type 1 will be added to the hot gaseous halo of its own subhalo or to the halo of the central Type 0 galaxy, depending on the distance between the Type 1 and the central object. A Type 2 galaxy is an “orphan galaxy”, which no longer has an associated dark matter subhalo. A Type 2 galaxy does not have a hot gaseous halo and thus has no gas cooling and infall. The supernova-reheated cold gas from a Type 2 is added to its central galaxy, i.e., the associated Type 0 or Type 1 object. Both Type 1 and Type 2 galaxies are initially born as Type 0 objects. They become Type 1 when they fall into a group or cluster, and may later become Type 2 after their dark matter subhaloes are disrupted by tidal effects or are no longer resolved in the low-resolution simulation. Type 1 and 2 galaxies may later merge into the central galaxy of their host (sub)halo. In Guo11 & Fu13, the hot gaseous halo and “ejected reservoir” of a Type 1 galaxy can be stripped by ram pressure (RP) when the RP force dominates over its self-gravity. A Type 2 galaxy can be disrupted entirely by tidal forces exerted by the central object, if the density of the main halo through which the satellite travels at peri-centre is larger than the average baryonic mass density of the Type 2. In this paper, we update the semi-analytic model of Fu et al. (2013), which is a branch of the recent L-Galaxies model. Compared with the previous L-Galaxies models, Fu13 contains the following two main improvements: \(i) Each galaxy disk is divided into multiple rings, so that the evolution of the radial distribution of cold gas and stars can be traced.
\(ii) A prescription for the conversion of atomic gas into molecular gas is included, and the star formation rate is assumed to be directly proportional to the local surface density of molecular gas, $\Sigma_{\rm SFR}\propto\Sigma_{\h2}$. These improvements enable us to calculate whether the RP at a certain radius in the galaxy disk is sufficient to remove the gas, and thus to trace the depletion of atomic and molecular gas in cluster galaxies. In this paper, we make two further main changes to the Fu13 model: \(1) we introduce an analytic method to trace the mass evolution of subhaloes once they are no longer resolved by the simulation. This enables us to model the evolution of low-mass satellite galaxies in a resolution-independent way. \(2) we include new prescriptions for ram pressure stripping of the cold gas. In the following sections, we describe our modifications in detail. Tracing galaxies in unresolved subhaloes ----------------------------------------- As discussed in Section 1, the properties of low-mass satellite galaxies predicted by L-Galaxies will be resolution dependent, because the treatment of the physics depends on whether or not the subhalo of the satellite is resolved by the simulation (i.e. whether the galaxy is Type 1 or Type 2). The detailed issues are the following: 1. For satellite galaxies, $M_{\rm vir}$, $V_{\rm vir}$, $R_{\rm vir}$ and $V_{\rm max}$ are fixed at the moment they first fall into a larger halo, whereas $M_{\rm sub}$ is measured in each simulation output until the subhaloes are disrupted. $M_{\rm sub}$ for a Type 2 is fixed at the last time it was a Type 1, which is strongly dependent on the resolution of the simulation. 2. The hot gas in a subhalo is assumed to be distributed in the same way as the dark matter. When a subhalo is disrupted and the galaxy becomes a Type 2, it is assumed to lose its hot gas and ejected gas reservoirs immediately.
Thus, gas cooling, infall and reincorporation no longer take place if the galaxy is a Type 2. 3. Only Type 2 galaxies can be disrupted by the tidal force of central galaxies. These three assumptions cause inconsistencies between models based on dark matter simulations with different resolutions, because many Type 1 galaxies in a high-resolution simulation will be Type 2 galaxies in a low-resolution simulation (e.g. Fig. A1 in Guo11). In the following section, we incorporate the model of Jiang & van den Bosch (2014, hereafter JB14) to trace the evolution of a subhalo after it is no longer resolved by the N-body simulation. In this way we can estimate the key properties of unresolved subhaloes, such as $M_{\rm sub}$ and $V_{\rm max}$, and treat Type 2s in the same way as Type 1s. We will then show that this procedure helps to alleviate the resolution dependence of satellite galaxy properties in the models. ### An analytic model for subhalo evolution {#chap:esub} According to JB14, the average mass loss rate of a subhalo depends only on the instantaneous ratio of the subhalo mass $m$ to the parent halo mass $M$, $$\label{eq:massratio} \dot m = - \varphi \frac{m}{\tau_{\rm dyn}}\left( \frac{m}{M} \right)^{\zeta}$$ where $\varphi$ and $\zeta$ are free parameters, and $\tau_{\rm dyn}$ is the halo’s dynamical time $$\label{eq:tdy} \tau_{\rm dyn}(z)=\sqrt{\frac{3\pi}{16G\rho_{\rm crit}(z)}},$$ where $\rho_{\rm crit}(z)$ is the critical density at redshift $z$. Integrating Eq. (\[eq:massratio\]) in a static parent halo, the subhalo mass at $t+\Delta t$ is: $$\label{eq:esubmass} m(t+\Delta t)=m(t)\left[1+\zeta\left(\frac{m}{M}\right)^\zeta\left(\frac{\Delta t}{\tau}\right)\right]^{-1/\zeta}$$ where $\tau=\tau_{\rm dyn}(z)/\varphi$ is the characteristic mass loss time scale at redshift $z$.
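As a concrete illustration, the mass-loss law of Eqs. (\[eq:massratio\])-(\[eq:esubmass\]) can be sketched in a few lines of code. This is a minimal sketch, not the model code: the function names and the unit convention for $G$ are our own choices, and consistent time units for the step and the dynamical time are assumed. The default $\zeta$ and $\varphi$ are the best-fit values quoted in the text.

```python
import math

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun; illustrative unit choice


def tau_dyn(rho_crit):
    """Halo dynamical time, Eq. (tdy): tau_dyn = sqrt(3*pi / (16*G*rho_crit))."""
    return math.sqrt(3.0 * math.pi / (16.0 * G * rho_crit))


def evolve_subhalo_mass(m, M, dt, tau_d, zeta=0.07, phi=9.5):
    """Integrated mass-loss law, Eq. (esubmass):
    m(t+dt) = m(t) * [1 + zeta*(m/M)^zeta * dt/tau]^(-1/zeta),
    with tau = tau_dyn/phi the characteristic mass-loss time scale.
    dt and tau_d must be given in the same time units."""
    tau = tau_d / phi
    return m * (1.0 + zeta * (m / M) ** zeta * (dt / tau)) ** (-1.0 / zeta)
```

Because the exponent $\zeta$ is small, the instantaneous mass-loss rate depends only weakly on the mass ratio $m/M$, so the decay is close to exponential with time scale $\tau$.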
We adjust the parameters $\zeta$ and $\varphi$ so that the predicted distribution of $M_{\rm sub}$ for all Type 1 galaxies at $z=0$ matches the distribution of $M_{\rm sub}$ for Type 1 galaxies measured directly from the $z=0$ simulation output. The best-fit values we find are $\zeta=0.07$, $\varphi=9.5$. Note that our $\varphi$ is much larger than the value of JB14 (their best value is 1.34); the difference is mainly due to the definition of $\rho$. If we use the same definition as JB14, our best-fit $\varphi$ is about 40% lower than theirs. JB14 also provide a formula to estimate the $V_{\rm max}$ of a subhalo during its evolution. They find that $V_{\rm max}$ evolves more slowly than the mass, because $V_{\rm max}$ is mainly determined by the inner region of the subhalo, which is not strongly affected by the tidal force of the host halo. For simplicity and consistency with Guo11, we keep $V_{\rm max}$ for a Type 2 satellite fixed at its value when the galaxy was last a Type 0 object. This means that the main change with respect to the Guo11 model occurs after the subhaloes have been fully tidally disrupted, so we only apply the above model for subhalo mass evolution after the subhalo is no longer resolved in the simulation. ### A consistent treatment of the physics of satellite galaxies In the Guo11 and Fu13 models, Type 2s lose all their hot gas after their subhaloes are disrupted. Using the methodology outlined in Sec.\[chap:esub\], we can now estimate the evolution of subhaloes to arbitrarily low masses, and thus Type 2 galaxies will retain their own hot gas haloes and lose hot gas continuously through stripping processes in the same way as Type 1s. The hot gas is assumed to have an isothermal distribution: $$\label{eq:rho} \rho_{\rm hot}(r)=\frac{M_{\rm hot}}{4\pi R_{\rm vir}r^2}$$ When Type 2s fall within the virial radius of the central galaxy, we calculate the stripping radius as $R_{\rm strip}=\min(R_{\rm tidal},R_{\rm r.p.})$. We use Eq.
25 and 26 in Guo et al. (2011) to calculate $R_{\rm tidal}$ and $R_{\rm r.p.}$: $$\label{eq:tidal} R_{\rm tidal}=\left(\frac{M_{\rm sub}}{M_{\rm sub,infall}}\right)R_{\rm vir,infall}$$ where $M_{\rm sub}$ is the subhalo mass given by Eq. (\[eq:massratio\]), and $M_{\rm sub,infall}$ and $R_{\rm vir,infall}$ are the dark matter mass and virial radius of the subhalo at the last time it was a Type 0; $$\label{eq:ramp} \rho_{\rm sat}(R_{\rm r.p})V^2_{\rm sat}=\rho_{\rm par}(R)V^2_{\rm orbit}$$ where $\rho_{\rm sat}(R_{\rm r.p})$ is the hot gas density of the satellite at $R_{\rm r.p}$; $V_{\rm sat}$ is the virial velocity of the subhalo; $\rho_{\rm par}(R)$ is the hot gas density of the main halo at the distance $R$; and $V_{\rm orbit}$ is the orbital velocity of the satellite (we simply use the virial velocity of the main halo). All the hot gas beyond $R_{\rm strip}$ is removed and added to the hot gas component of the parent central galaxy. We then set the hot gas radius to $r_{\rm hot}=R_{\rm strip}$. The hot gas in Type 2s will cool and fall onto the galaxy disk with an exponential surface density distribution $$\label{eq:gasprofile} \Sigma_{\rm gas}(r)=\Sigma^{(0)}_{\rm gas}\exp (-r/r_{\rm infall})$$ where the infall scale length is $r_{\rm infall}=(\lambda/\sqrt{2})r_{\rm vir}$. We keep the spin parameter $\lambda$ and $r_{\rm vir}$ fixed at their values when the galaxy was last a Type 0. In the models of Guo11 and Fu13, SN feedback reheats the cold gas in the disk, and if there is remaining energy, hot gas is ejected out of the halo.
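The computation of $R_{\rm strip}$ from Eqs. (\[eq:tidal\]) and (\[eq:ramp\]) described above can be sketched as follows; with the isothermal profile of Eq. (\[eq:rho\]), the RP condition inverts analytically for $R_{\rm r.p.}$. This is an illustrative sketch only: the function and argument names are ours, and all quantities are assumed to be in mutually consistent units.

```python
import math


def rho_hot(r, m_hot, r_vir):
    """Isothermal hot-gas profile, Eq. (rho): rho(r) = M_hot / (4 pi R_vir r^2)."""
    return m_hot / (4.0 * math.pi * r_vir * r * r)


def stripping_radius(m_sub, m_sub_infall, r_vir_infall,
                     m_hot_sat, r_vir_sat, v_sat,
                     m_hot_par, r_vir_par, v_orbit, R):
    """R_strip = min(R_tidal, R_rp).
    R_tidal = (M_sub / M_sub,infall) * R_vir,infall          (Eq. tidal)
    R_rp solves rho_sat(R_rp) V_sat^2 = rho_par(R) V_orbit^2 (Eq. ramp)."""
    r_tidal = (m_sub / m_sub_infall) * r_vir_infall
    # Inserting the isothermal profile on both sides of Eq. (ramp) and
    # solving for R_rp gives a closed form:
    r_rp = R * math.sqrt((m_hot_sat * r_vir_par * v_sat ** 2) /
                         (m_hot_par * r_vir_sat * v_orbit ** 2))
    return min(r_tidal, r_rp)
```

Note that for a satellite identical to its parent halo, the closed form correctly returns $R_{\rm r.p.}=R$, since both sides of Eq. (\[eq:ramp\]) are then evaluated on the same profile.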
In the Guo11 code, the supernova reheating efficiency $\epsilon_{\rm disk}$ is written as $$\label{eq:guo19} \epsilon_{\rm disk}=\epsilon \times \left[0.5+\left(\frac{V_{\rm max}}{V_{\rm reheat}}\right)^{-\beta_1}\right]$$ and the supernova ejection efficiency is written as $$\label{eq:guo21} \epsilon_{\rm halo}=\eta \times \left[0.5+\left(\frac{V_{\rm max}}{V_{\rm eject}}\right)^{-\beta_2}\right].$$ In Guo11, the efficiency of the SN feedback in Type 2s is assumed to scale with the maximum circular velocity $V_{\rm max}$ of the central galaxy, and reheated gas from Type 2s is added to the hot gas component of the central galaxy; we note that in Guo11, Type 2s have no hot gas component. Here, we assume SN in Type 2s reheat cold gas into their own hot gas component, and we allow for an ejected gas reservoir in Type 2s in the same way as for Type 0s and 1s. Because hot gas is stripped from satellites, only a fraction $r_{\rm hot}/R_{\rm vir}$ of the reheated gas remains in the subhalo, and the rest is returned to the main halo. We replace the $V_{\rm max}$ of the Type 2’s central galaxy with the Type 2’s own $V_{\rm max}$, taken as its value when the galaxy was last a Type 0. Note that in our new model, the SN heating efficiency in a Type 2 is therefore determined by its own $V_{\rm max}$, which is lower than the $V_{\rm max}$ of the central galaxy that is used to scale the SN reheating efficiency in a satellite galaxy in the Guo11 and Fu13 models. This choice is also motivated by recent work (e.g., Lagos et al. 2013; Kang 2014) which shows that SN feedback is more likely determined by the local galaxy potential. Since the $V_{\rm max}$ of a satellite is usually lower than the $V_{\rm max}$ of its central galaxy, the SN heating efficiency is higher in satellite galaxies in our model; we will show in Section 3.2 that this leads to slightly better agreement with the observed galaxy two-point correlation function on small scales in the low stellar mass bins.
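The feedback efficiencies of Eqs. (\[eq:guo19\]) and (\[eq:guo21\]) can be sketched as below. The default parameter values here are placeholders chosen for illustration, not the fitted values of the model; the point is the scaling, namely that a lower $V_{\rm max}$ yields a higher feedback efficiency, which is why using the satellite's own $V_{\rm max}$ strengthens feedback in satellites.

```python
def reheating_efficiency(v_max, eps=1.0, v_reheat=70.0, beta1=3.5):
    """Eq. (guo19): eps_disk = eps * [0.5 + (V_max / V_reheat)^(-beta1)].
    eps, v_reheat (km/s) and beta1 defaults are illustrative placeholders."""
    return eps * (0.5 + (v_max / v_reheat) ** (-beta1))


def ejection_efficiency(v_max, eta=1.0, v_eject=70.0, beta2=3.5):
    """Eq. (guo21): eps_halo = eta * [0.5 + (V_max / V_eject)^(-beta2)].
    eta, v_eject (km/s) and beta2 defaults are illustrative placeholders."""
    return eta * (0.5 + (v_max / v_eject) ** (-beta2))
```

Both efficiencies fall monotonically with $V_{\rm max}$ and saturate at $0.5\epsilon$ (or $0.5\eta$) for very massive haloes, where the negative power-law term becomes negligible.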
The hot gas in the subhaloes of Type 2s will later cool and fall onto the gas disks of the Type 2s. With the growth of the dark matter halo, the ejected gas in the subhalo will be reincorporated into the hot gas again, with the reincorporation rate $$\label{eq:guo23} \dot{M}_{\rm ejec}=-\gamma\left(\frac{V_{\rm vir}}{220\,{\rm km\,s^{-1}}}\right)\left(\frac{M_{\rm ejec}}{t_{\rm dyn,h}}\right),$$ where $\gamma$ is a free parameter and $t_{\rm dyn,h}=R_{\rm vir}/V_{\rm vir}$ is the halo dynamical time. Finally, we note that in the Guo11 model, only Type 2 galaxies are disrupted by tidal forces (see Sec. 3.6.2 in Guo11). In our new model, we follow the evolution of subhaloes analytically, so we treat the tidal disruption of Type 1s and 2s in the same way. A satellite galaxy (Type 1 or Type 2) is disrupted by the tidal force of the main halo when (1) it has more baryonic matter than dark matter ($M_{\rm bar}>M_{\rm sub}$), and (2) its average baryonic mass density is lower than the main halo density at the peri-centre of its orbit. ![image](./mfs.ps) The physical prescriptions for ram pressure stripping of cold gas ----------------------------------------------------------------- In the L-Galaxies model, ram pressure strips only the hot gas component of satellite galaxies, while the cold gas component in the ISM is not affected. Following the prescription of Gunn & Gott (1972), we consider a satellite galaxy moving through its host halo. The RP force can be written as $$\label{eq:Prp} P_{\rm r.p}(R)=\rho_{\rm ICM}(R)v^2$$ where $v$ is the orbital velocity of the satellite, which we take to be the virial velocity of its parent halo, and $\rho_{\rm ICM}$ is the density of the hot gas of the parent halo, as in Eq. (\[eq:rho\]). When the ram pressure exceeds the interstellar pressure $P_{\rm ISM}$, cold gas is stripped. We adopt the prescription of Tecce et al. (2010): $$\label{eq:Pism} P_{\rm ISM}=2\pi G\Sigma_{\rm disc}(r)\Sigma_{\rm gas}(r)$$ where $r$ is the radius from the centre of the satellite.
$\Sigma_{\rm disc}$ is the surface density of the galactic disc, which equals the sum of the cold gas and stellar surface densities: $$\label{eq:sigmas} \Sigma_{\rm disc}=\Sigma_{*}(r)+\Sigma_{\rm gas}(r)$$ We calculate $P_{\rm ISM}$ in each concentric radial ring of a satellite galaxy, based on the division of the disk into multiple rings introduced in the Fu13 model. In our model, the cold gas in a satellite galaxy can be stripped by RP only when the satellite falls within the virial radius $R_{\rm vir}$ of the central galaxy and $P_{\rm r.p}(R)\geq P_{\rm ISM}(r)$. The stripped cold gas of the satellite is added to the hot gas component of its central galaxy. According to Eqs. (\[eq:Prp\]) & (\[eq:Pism\]), the criterion for RP stripping for a satellite galaxy at distance $R$ from the centre of its parent halo is: $$\label{eq:Ri} 2\pi G\left[\Sigma_*(r) + \Sigma_{\rm gas}(r)\right]\Sigma_{\rm gas}(r) \le \rho_{\rm ICM}(R)\,v^2$$ From the radial distributions of the cold gas surface density $\Sigma_{\rm gas}(r)$ and the stellar surface density $\Sigma_{*}(r)$, we evaluate the stripping radius $r$ in Eq. (\[eq:Ri\]) and assume that cold gas exterior to this radius will be stripped. This description is very simplistic: it is not clear whether all the cold gas is stripped immediately once the RP force exceeds the self-gravity of the satellite. To account for this uncertainty, we define a stripping fraction $f_{\rm rps}$ as the fraction of the cold gas stripped by ram pressure in the region of a satellite galaxy where $P_{\rm r.p}(R)\geq P_{\rm ISM}(r)$. In the simplest case, $f_{\rm rps}$ is $100\%$, which means all the cold gas in the region where $P_{\rm r.p}(R)\geq P_{\rm ISM}(r)$ is stripped by RP. This simple assumption causes a sudden cut-off of the cold gas radial profile, i.e., the cold gas radial profile becomes a step function at the radius $r$.
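A minimal sketch of the ring-by-ring criterion of Eq. (\[eq:Ri\]) with the simple 100% stripping assumption. The function names are ours, and the value of $G$ as well as mutually consistent units for the surface densities, gas density and velocity are assumed for illustration.

```python
import math

G = 4.301e-9  # gravitational constant; units assumed consistent with the inputs


def ism_pressure(sigma_star, sigma_gas):
    """Interstellar pressure, Eq. (Pism): P_ISM = 2 pi G (Sigma_* + Sigma_gas) Sigma_gas."""
    return 2.0 * math.pi * G * (sigma_star + sigma_gas) * sigma_gas


def stripped_rings(rings, rho_icm, v):
    """Apply the criterion of Eq. (Ri) ring by ring (simple 100% stripping case).
    `rings` is a list of (sigma_star, sigma_gas) tuples, ordered in radius;
    returns a boolean per ring: True where the ring is stripped."""
    p_rp = rho_icm * v * v  # ram pressure, Eq. (Prp)
    return [ism_pressure(ss, sg) <= p_rp for ss, sg in rings]
```

Since $\Sigma_*$ and $\Sigma_{\rm gas}$ typically decrease outwards, the outermost rings are stripped first, consistent with stripping all gas exterior to a single stripping radius.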
To ensure a continuously decreasing cold gas radial profile, we modify the stripping law as follows: if $P_{\rm r.p}\geq P_{\rm ISM}$, the stripping fraction $f_{\rm rps}$ is related to the difference between $P_{\rm r.p}$ and $P_{\rm ISM}$ as: $$\label{eq:caseB} f_{\rm rps}=\begin{cases} 0,& P_{\rm r.p}<P_{\rm ISM}\\ \frac{P_{\rm r.p}-P_{\rm ISM}}{P_{\rm r.p}},& P_{\rm r.p}\geq P_{\rm ISM} \end{cases}$$ This guarantees that only part of the cold gas, $M_{\rm stripped}=f_{\rm rps}M_{\rm cold\,gas}(r)$, in the region where $P_{\rm r.p}(R)\geq P_{\rm ISM}(r)$ is stripped. ![image](./cmfv1errt.ps) Performance of the new model ============================ In this section, we compare our new model with the Fu13 model and explore whether we obtain more convergent and/or improved results for the following galaxy properties: the stellar/gas mass functions and the two-point correlation function. These two quantities are often used as basic tests of the overall model. We then study the fraction of quenched galaxies as a function of halo mass and compare with observational data. In the following subsections, we abbreviate our model as “new” when the option for cold gas stripping is switched off, and label it “new+rps” when the cold gas stripping is turned on. The stellar and cold gas mass functions --------------------------------------- In Fig. \[fig:mfv1t\], we plot the model mass functions of stars, HI and $\h2$ at $z=0$, compared with the observations. The “new” model results (with RP stripping of cold gas off) are shown as black and blue lines for the MS and MS-II simulations, respectively. All the free parameters of the models are fixed as in Fu13 (see Table 1 in that paper), with the exception of the hot gas accretion efficiency onto black holes $\kappa_{\rm AGN}$, which is changed from $1.5 \times 10^{-5}\rm{M_\odot} \rm yr^{-1}$ to $1.5 \times 10^{-6}\rm{M_\odot} \rm yr^{-1}$.
Tuning this parameter is necessary in order to better fit the stellar mass function at the high-mass end. As can be seen, with this minor change, our new models are in very good agreement with the observational data at $z=0$ for both simulations. Fig. \[fig:mfv1t\] also shows that there is a difference in the predicted mass functions at the low-mass end between the two simulations. In Fig. \[fig:mfserr\], we check the divergence between the two simulations in detail, where the difference is defined as $\Delta \rm MF=|\rm \log_{10}MF_{MS}-\log_{10}MF_{MS-II}|$. The left panel shows that for low-mass galaxies ($\log_{10} M_* < 8.5$) there is an obvious difference in the mass functions, and all the models show similar behaviour (see also Fig. 7 in Guo11). Such a difference is expected, and it is basically due to low-mass central galaxies not being resolved in the low-resolution simulation. In fact, the simulation resolution can have more complicated effects on the galaxy properties. For example, if the merger trees of some massive haloes in the MS simulation are not well resolved at higher redshifts, this will lead to divergence in the properties of both the central and the satellite galaxies in those haloes. Overall, we find that the stellar mass function is more convergent for galaxies with $\log_{10}M_{*} > 8.5$, and in our later analysis we only focus on these galaxies. ![image](./h2mferrs.ps) The right panels of Fig. \[fig:mfserr\] show the convergence test on the cold gas mass functions, also divided into HI and $\h2$ components. The sample is selected with $\log_{10}M_*>9$. We find that, compared to the Guo11 model, our new model produces more convergent results for the cold gas mass. In the Guo11 model, there is a threshold in cold gas mass for star formation, which implies that all satellite galaxies contain cold gas even when they stop forming stars.
In a low-resolution simulation, a satellite soon becomes Type 2 and gas cooling stops, as the hot halo gas is immediately stripped; there is therefore more cold gas in the low-resolution run of the Guo11 model. Compared to the Fu13 model on which our model is based, our new model does not produce more convergent results for the total cold gas and HI mass, but we obtain slightly better convergence for the $\h2$ mass (right panel), and the convergence is even more obvious in our new+rps model. In Fig. \[fig:h2err\] we further check the convergence in different halo mass bins. It is clearly seen that the convergence of the new model is better than that of the Fu13 model for $10^{7}M_{\odot} < M_{\rm H2} < 10^{10}M_{\odot}$. The new+rps model mostly affects the low-mass end ($<10^{7}M_{\odot}$), since RP stripping removes the cold gas efficiently when little of it remains. In the Fu13+rps model, the $\h2$ mass function converges only at the low-mass end, not at the high-mass end. This plot clearly shows that without a sub-resolution treatment of the star formation physics (as in the Fu13 model) it is difficult to achieve convergence for high $\h2$-mass galaxies, while RP stripping helps convergence for low gas-mass galaxies. Since star formation in both the Fu13 model and ours is determined by the local $\h2$ gas density, we expect our model to produce more convergent results for the fraction of passive/star-forming galaxies; this is shown in detail in Sec.\[chap:quech\]. The projected two-point correlation functions (2PCF) ---------------------------------------------------- ![image](./2pcfss.ps) The two-point correlation function (2PCF) is another important galaxy statistic, because it describes how galaxies are distributed in and between dark matter haloes. On small scales, the clustering depends strongly on how satellites are distributed in massive haloes, so it is interesting to ask whether the model results depend on simulation resolution.
Another reason to check the 2PCF is that, as shown by Kang (2014), strong SN feedback in satellites can decrease the 2PCF on small scales, and this results in better agreement with the observations. Our new model also includes a feedback model similar to that of Kang (2014) for satellite galaxies, in which the feedback efficiency depends on the local gravitational potential of the satellite. It is thus worthwhile to check whether our new model gives better agreement with the observed 2PCFs. In Fig. \[fig:2pcf\], we plot the projected galaxy 2PCFs for the Fu13 model in green and for our new model in black. Results are plotted for both the MS and MS-II simulations, and the difference between the two is shown as an inset at the bottom of each panel. The different panels show results for galaxies in different bins of stellar mass. The data points are from Li et al. (2006) and are computed using the large-scale structure sample from the SDSS DR7. Here we do not show the results from the “new+rps” model, as they are very similar to those of the model without cold gas stripping. ![image](./2pcfrb.ps) Fig. \[fig:2pcf\] shows that our new model produces better agreement with the observed 2PCFs than the Fu13 model, especially for low-mass galaxies. As discussed in the previous section, our new model employs a feedback prescription in which the SN ejection efficiency depends on the $V_{\rm max}$ of the subhalo, not that of the central halo as in previous models. The feedback efficiency in satellites is therefore larger in our new model, and this decreases the mass growth of satellites after infall with respect to the results of Fu13. As shown by Kang (2014), strong feedback in satellites flattens the satellite galaxy mass function in massive haloes, thus reducing the clustering amplitude on small scales. The panel insets show that the disparity in clustering amplitude between the MS and MS-II simulations is slightly lower in the new model than in the Fu13 model.
The new model shows slightly better convergence at small scales, though in some mass bins the convergence becomes slightly worse at $\rm \sim Mpc$ scales. However, these differences in the 2PCFs between the two simulations are always small (often below 0.2 dex), indicating that resolution is not of primary importance for the prediction of galaxy clustering. In Fig. \[fig:2pcfr\] we show 2PCFs for galaxies classified into red and blue according to their $g-r$ colors, as described in Guo11. The red/blue lines are model predictions for red/blue galaxies, and the red/blue points are the data of Li et al. (2006). As shown in the previous figure, the differences between the MS and MS-II simulations are small for the 2PCFs, so we only show results from the MS. The new model fits the color-dependent clustering better in the low-mass bins, particularly for blue galaxies. In the intermediate mass bins ($\log M_*=[9.77,10.77]$), the predicted clustering is too strong for red galaxies and too weak for blue galaxies in both models. This is because both the Fu13 model and ours over-predict the fraction of red satellite galaxies and under-predict the fraction of red centrals, as seen in Fig.\[fig:ssfr1\]. Both models also fit well at the highest stellar masses. Satellite quenching and cold gas depletion ------------------------------------------ In this subsection, we investigate the effects of RP stripping of cold gas in galaxies. We study how ram-pressure stripping changes the quenched fraction of galaxies as a function of galaxy mass, halo mass and cluster-centric radius, and we also show comparisons with recent observations. As in the previous subsections, we address the issue of convergence by showing results from both the MS and the MS-II simulations. ### Where is ram-pressure stripping most effective?
{#chap:ram} To understand which galaxies have been affected most by ram-pressure stripping, we define the cumulative stripped cold gas fraction as $$\label{eq:fsp} f_{\rm sp}=\frac{M_{\rm asp}}{M_{\rm asp}+M_{\rm cold\,gas}}$$ where $M_{\rm asp}$ is the cumulative mass of stripped cold gas throughout the formation history of a galaxy (evaluated by summing up the stripped cold gas mass along the main progenitor branch), and $M_{\rm cold\,gas}$ is its current cold gas mass. We focus on satellite galaxies in rich groups and clusters, and select those with $M_*>10^{9}\rm{M_\odot}$ in haloes with mass $M_{\rm halo}>10^{13}\rm{M_\odot}$. ![The fraction of galaxies with stripping fraction $f_{\rm sp}\geq x$ as a function of $x$. Solid and dashed lines are based on MS and MS-II haloes, respectively. Here galaxies are selected with stellar mass $M_*>10^{9}\rm{M_\odot}$ in haloes with $M_{\rm halo}>10^{13}\rm{M_\odot}$.[]{data-label="fig:sphist"}](./modelsphist.ps) ![The number fraction of galaxies with $f_{\rm sp}>0.1$ (black curves) and $f_{\rm sp}>0.5$ (red curves) as a function of stellar mass and halo mass. Solid and dashed lines are based on MS and MS-II haloes, respectively.[]{data-label="fig:sp"}](./modelsp.ps) Fig.\[fig:sphist\] shows the fraction of galaxies with $f_{\rm sp}$ larger than a given value $x$ ($0.1<x<1.0$). Results are shown for both MS and MS-II haloes and agree to within $\sim 10 \%$. We find that about 50% of the galaxies in massive haloes have $f_{\rm sp}\geq 0.1$. We then define a galaxy with $f_{\rm sp}\geq 0.1$ as having experienced significant cold gas stripping, and we plot the fraction of such galaxies, $N_{\rm stripping}/N_{\rm total}$, as a function of stellar mass and halo mass in Fig.\[fig:sp\]. As can be seen, the fraction of galaxies with significant stripping increases steeply with halo mass, but decreases with stellar mass.
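The statistic of Eq. (\[eq:fsp\]) and the cumulative fraction plotted in Fig. \[fig:sphist\] reduce to a few lines of code; this is an illustrative sketch with hypothetical function names, assuming masses in a common unit.

```python
def stripped_gas_fraction(m_asp, m_cold):
    """Cumulative stripped cold-gas fraction, Eq. (fsp):
    f_sp = M_asp / (M_asp + M_coldgas)."""
    return m_asp / (m_asp + m_cold)


def fraction_above(fsp_values, x):
    """Fraction of galaxies with f_sp >= x, the statistic of Fig. (sphist)."""
    return sum(1 for f in fsp_values if f >= x) / len(fsp_values)
```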
If we increase the significant-stripping threshold from 0.1 to 0.5, we find that the stripped fraction in haloes of $10^{13} M_{\odot}$ decreases from 0.6 to 0.2. Brüggen & De Lucia (2008) found that about one quarter of the galaxies in massive clusters ($M_{\rm halo}>10^{14}\rm{M_\odot}$) are subjected to strong ram pressure that causes the loss of all their gas, and that more than 64% of the galaxies that reside in a cluster today have lost substantial gas. Our new model agrees well with these results. We also conclude that most low-mass satellite galaxies in massive clusters will lose a significant fraction of their cold gas through RP stripping, and some will lose all their interstellar cold gas. ### Effect of ram-pressure stripping on the quenched fraction of satellite galaxies {#chap:quech} ![image](./ssfr-1sv1s.ps) The fraction of quenched galaxies is known to depend on environment. It has also been shown (e.g., Weinmann et al. 2006) that early versions of SAMs (e.g., Croton et al. 2006) over-predicted the fraction of red satellites in all environments. It was suggested that instantaneous stripping of the hot halo gas of satellites was to blame. By introducing a non-instantaneous stripping model, later SAMs (e.g., Kang & van den Bosch 2008; Font et al. 2008) were able to produce more blue galaxies, but the agreement was still not satisfactory. In the Guo11 and Fu13 models, the stripping of hot halo gas is modeled as a gradual process. Recently, Henriques et al. (2015) showed that the latest version of L-Galaxies is able to reproduce the overall red galaxy fraction, but they did not analyze central/satellite galaxies separately, nor did they examine the dependence of the red fraction on halo mass and radial distance from the centres of groups and clusters. Wetzel et al. (2012) re-analyzed the fraction of quenched galaxies in groups and clusters using SDSS DR7 data. They classified galaxies as quenched based on their specific star formation rates rather than their colors.
This permits a more direct comparison with models, because galaxy color is a complicated function of star formation history, metallicity and dust extinction. The analysis of Wetzel et al. (2012) made use of global star formation rates with the fibre aperture corrections of Salim et al. (2007). These corrections may be prone to systematic effects, because there can be a large spread in broad-band colors at a given specific star formation rate and because the corrections are calibrated using average relations between these two quantities. The corrections also account for two thirds of the total star formation rate on average. In contrast, the star formation rate measured inside the fibre aperture is derived directly from the dust-corrected H$\alpha$ luminosity and should be more reliable. In the following comparison, we present the results on quenched fractions using both total and fibre-aperture specific star formation rates. We also examine the radial dependence of the quenched fraction in different halo mass bins. Following the same procedure described in Wetzel et al. (2012), we extract galaxies from the MPA-JHU SDSS DR7 catalogue with $M_{*}>10^{9.5}\rm{M_\odot}$ at $z<0.04$ and $M_{*}>10^{9.5}\rm{M_\odot}$ at $z=0.04\sim0.06$ that are included in the group catalogue of Yang et al. (2007). We use the stellar masses, the fibre sSFRs (specific star formation rates) and the total sSFRs from the MPA-JHU SDSS DR7 database. Following Wetzel et al., we define galaxies with ${\rm sSFR}<10^{-11}\,{\rm yr^{-1}}$ as quenched, and calculate the quenched fraction of satellite galaxies in different bins of halo mass, stellar mass and cluster-centric radius, scaled to the virial radius of the halo. We find that the quenched satellite fraction $f^{\rm sat}_Q$ increases with the stellar mass of the satellite and the host halo mass, and decreases with distance to the centre of the halo; our results are largely consistent with those presented in Wetzel et al. (2012) (see Figs. \[fig:ssfr1\] to \[fig:dssfr1\]).
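The quenched classification applied to both the data and the models can be sketched as below, assuming sSFR values given in ${\rm yr^{-1}}$; the function name is our own.

```python
def quenched_fraction(ssfrs, threshold=1.0e-11):
    """Fraction of galaxies classified as quenched, following the
    sSFR < 10^-11 yr^-1 cut of Wetzel et al. (2012).
    `ssfrs` is a sequence of specific star formation rates in yr^-1."""
    return sum(1 for s in ssfrs if s < threshold) / len(ssfrs)
```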
In the models, we select galaxies with $M_{*}>10^{9.5}\rm{M_\odot}$ at $z=0$ from the MS and MS-II simulations. We use the FOF halo properties given by the N-body simulation to define the halo mass. However, in order to account for projection effects, we convert 3D distances to 2D distances by projecting the galaxies onto the X-Y plane of the simulations and selecting galaxies within $\Delta V_z\leq 500\,{\rm km\,s^{-1}}$ of the halo centre as group members. This is a reasonably close representation of the Yang et al. (2007) procedure for selecting galaxy groups in redshift space. In Fig. \[fig:ssfr1\] and Fig. \[fig:ssfr2\], we show the quenched fraction of satellite galaxies as a function of their stellar mass in a set of halo mass bins, and as a function of host halo mass in a set of stellar mass bins. In Fig. \[fig:ssfr1\], we also show the quenched fraction of central galaxies in all haloes with $\log M_{\rm halo}>11.4$ in the right panel. In both figures, results from our new model without ram-pressure stripping are shown in black, from our new model with cold gas stripping in red, and from Fu13 in green. The solid lines are results from the MS, and the dashed lines are for the MS-II simulation. The yellow triangles show the SDSS results. In each figure, we plot the results for fibre sSFRs in the upper panels, and for total sSFRs in the lower panels. At our median redshift, the 3 arcsec aperture of the fibre corresponds to a physical aperture size of $1.5\,{\rm kpc}/h$. One important conclusion from these two figures is that our new models (with and without RP stripping) produce much more convergent results between the two simulations than the Fu13 model, particularly for low-mass satellite galaxies. For example, in the upper middle two panels of Fig. \[fig:ssfr1\], the predicted quenched fractions at low stellar masses differ by factors of 2-3 between the MS and MS-II for the Fu13 model, while in our new models they agree to within a factor of 1.5.
In the $\rm log M_{halo}=[14,15]$ panel, as we discussed in Sec. 3.1, the convergence is more obvious in the new model. The same is seen in the leftmost panel of Fig. \[fig:ssfr2\] and for the highest halo mass bins in Fig. \[fig:dssfr1\]. We also conclude that the predictions using the fiber and total sSFR are qualitatively very similar. There are significant discrepancies between all the models and the observations which are much larger than any of the systematics in the way we choose to define the boundary between quenched and actively star-forming galaxies. The clear [*qualitative*]{} discrepancies are the following: 1. The rightmost panel of Fig. \[fig:ssfr1\] shows that the quenched fraction of central galaxies is lower than in the data, across all stellar masses. This under-prediction of red centrals agrees with previous results based on galaxy color (e.g., Weinmann et al. 2006; Kang et al. 2006). 2. Comparing Fig. \[fig:ssfr1\] and \[fig:ssfr2\], we see that in the data, the quenched fraction depends on both stellar mass and on halo mass. At fixed halo mass, there is still a strong dependence of the quenched fraction on the stellar mass of the galaxy. At fixed stellar mass, there is a significantly weaker dependence of the quenched fraction on halo mass. This indicates that halo mass has a [*secondary*]{} influence on the star formation histories of galaxies compared to stellar mass. ![image](./ssfr-2sv1s.ps) In the models, the [*opposite*]{} is true. At fixed halo mass, there is generally only a weak dependence of the quenched fraction on stellar mass. At fixed stellar mass, the dependence of the quenched fraction on halo mass is very strong. This indicates that in the models, the influence of halo mass on the star formation history is [*primary*]{} and the influence of stellar mass is secondary. As we will discuss in the final section, our results indicate that the physical model for determining the star formation histories of galaxies in the SAMs must be wrong. 
In Fig. \[fig:dssfr1\], we show the relation between the quenched satellite fraction and the projected distance to the halo center, scaled by $R_{\rm 200}$. Results are shown in 3 halo mass bins. Once again we see that the new models produce results that are much less sensitive to resolution than the Fu13 models. As can be expected from the results in the previous two figures, there are clear offsets between the quenched fractions in the models and the data. The slope of the decrease in quenched fraction as a function of scaled radius agrees quite well with the observations in high mass haloes. In low mass haloes, the quenched fraction rises too steeply near the center. The steep rise towards the center is stronger for the total specific star formation rates compared to the fibre specific star formation rates, and it is also stronger for the model that includes ram-pressure stripping of the cold gas. This suggests that the central density of hot gas in lower mass haloes is too high, or that our ram-pressure stripping model is too simple. Future constraints can be obtained from hydrodynamical simulations and from observed HI maps of galaxies. ![image](./ssfr-dis-1sv1s.ps) In summary, we conclude that our models greatly alleviate the resolution problems seen in the Fu13 models, but significant discrepancies with the observational data remain. Comparison of the dependence of HI fraction and sSFR on environmental density {#chap:ssfr} ----------------------------------------------------------------------------- ![image](./nssfr-az.ps) In this section, we compare model predictions of the HI mass fractions of galaxies as a function of environmental density to observations in order to test our models of ram-pressure stripping. Using the observational data from a few combined surveys (ALFALFA, GALEX, SDSS), Fabello et al. 
(2012, hereafter Fabello12) found that both the HI fractions and sSFRs of low-mass galaxies decline with increasing environmental density, but the HI fraction exhibits a steeper decline. They compared the data with the results of the Guo11 model, and found that this model predicts the opposite effect: the sSFRs decline more steeply with density than the HI mass fractions. Following Fabello12, we select galaxies with $9.5<\log (M_*/\rm{M_\odot})<11$, and compute the environmental density parameter N, defined as the number of neighbours with $\log (M_*/\rm{M_\odot})\geq9.5$ located inside a cylinder of $1\,\rm{Mpc}$ in projected radius and with velocity difference less than $500\,{\rm km/s}$. The comparison between models and data is shown in Fig. \[fig:nssfr\]. All curves have been normalized to 1 at N=0, and the [*relative*]{} decrease in $\rm SFR/M_*$ and $\rm M_{HI}/M_*$ is plotted on the y-axis. The mass bins have been chosen to match the stellar mass bins shown in Fabello12. Unlike in the Guo11 model, the HI mass fraction declines with density more rapidly than the specific star formation rate in both the Fu13 and the new models. The reason is that in the Guo11 model, star formation ceases in the whole galaxy when the cold gas mass is lower than a threshold value. This has the consequence that significant gas always remains in passive galaxies. In the Fu13 and our new models, the local star formation rate is determined purely by the local gas surface density. As can be seen by comparing the red and the black curves, the inclusion of ram-pressure stripping enhances the decline in HI mass fraction at large N, but only by a small factor, indicating that the star formation prescription, rather than the treatment of stripping, is the more important ingredient in understanding the observational results. A more comprehensive test of our ram-pressure stripping prescriptions requires HI observations of galaxies near the cores of galaxy groups and clusters. 
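The density estimator N defined above amounts to counting neighbours in a cylinder. A brute-force sketch (illustrative names; it assumes the sample itself obeys the $\log (M_*/\rm{M_\odot})\geq 9.5$ cut, so self-counting can simply be subtracted):

```python
import numpy as np

def density_N(xy, vz, log_mstar, r_max=1.0, dv_max=500.0, logm_min=9.5):
    """Number of neighbours with log M* >= logm_min inside a cylinder of
    projected radius r_max [Mpc] and |dv| <= dv_max [km/s], per galaxy."""
    xy, vz, log_mstar = map(np.asarray, (xy, vz, log_mstar))
    N = np.zeros(len(vz), dtype=int)
    for i in range(len(vz)):
        r2d = np.hypot(xy[:, 0] - xy[i, 0], xy[:, 1] - xy[i, 1])
        in_cyl = (r2d <= r_max) & (np.abs(vz - vz[i]) <= dv_max) \
                 & (log_mstar >= logm_min)
        N[i] = in_cyl.sum() - 1   # subtract the galaxy itself
    return N
```

For survey-sized samples one would replace the $O(N^2)$ loop with a spatial tree, but the counting logic is the same.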
Conclusion and discussion ========================= The main results and findings of this paper can be summarized as follows: \(I) We introduce an analytic method to trace the properties of the subhaloes of satellite galaxies that is independent of the resolution of the simulation. This allows us to write down equations describing physical processes such as SN feedback, gas cooling, gas reincorporation and tidal stripping that are not sensitive to whether the galaxy is classified as a Type 1 or Type 2 galaxy in the simulation. The predicted gas mass functions, quenched satellite fractions and galaxy two-point correlation functions evaluated for the two simulations used in this work, MS and MS-II, agree better than in previous models. \(II) We include a new prescription to describe the ram-pressure stripping of cold gas in the galaxy. We then compare a variety of results with recent observations. Our main conclusions are the following: \(i) Our new models allow for continued gas accretion and SN feedback in all satellite galaxies. We improve the convergence of $\h2$ for satellites in massive haloes, which leads to more convergent quenched fractions of satellites in these haloes. This has the effect of decreasing the clustering amplitude of low mass galaxies on small scales, resulting in a slightly better agreement with the observed 2pcf compared to previous models. \(ii) We show that ram pressure stripping is most efficient at removing the cold gas from low mass satellite galaxies ($<10^{10.5}\rm{M_\odot}$) in massive haloes ($>10^{13}\rm{M_\odot}$). More than 60% of the galaxies in these massive haloes have experienced significant ram-pressure stripping. \(iii) We study the quenched fraction of satellite galaxies as a function of stellar mass at fixed halo mass, and as a function of halo mass at fixed stellar mass. We find significant discrepancies between our model predictions and observations. 
At fixed halo mass, the quenched fraction of satellites does not depend on stellar mass in the models. This is in contradiction with observations, where the quenched fraction always increases with stellar mass. The net effect of this discrepancy is that there are too many low mass quenched satellites and too few high mass quenched satellites in the models compared to the data. \(iv) We study the decrease in the quenched fraction of satellite galaxies as a function of projected radial distance from the center of the halo. The slope of the decrease agrees well with observations in high mass haloes, but is much steeper than observed in low mass haloes. This problem is worse for models that include the ram-pressure stripping of cold gas, indicating that the predicted hot gas densities in the centers of lower mass haloes may be too high. \(v) Our new models are able to reproduce the relatively stronger decrease in HI gas mass fraction as a function of local environmental density compared to specific star formation rate, first pointed out by Fabello12. Our study in this paper shows that the long-standing problem of the over-prediction of red satellite galaxies is still not solved in the current version of the L-Galaxies model. Recently Sales et al. (2015) studied the color of satellite galaxies in the Illustris cosmological simulations. They found that their simulation produces more blue satellites and proposed that the main reason SAMs fail to reproduce satellite colors is that there is too little cold gas in satellites before they are accreted. We regard this scenario as unlikely because, as we have shown, our SAMs produce gas mass functions in excellent agreement with observations at the present day. Fu13 demonstrated that the $\h2$ mass functions of galaxies evolve very strongly with redshift. 
Although keeping more cold gas in satellites before accretion could in principle solve the problem of too many red satellites, it would likely violate other observational constraints. Kang (2014) has also shown that the stellar mass growth in satellite galaxies after infall should not be significant, otherwise there would be too many low-mass galaxies in massive clusters, and the clustering amplitude on small scales would be too high. Wetzel et al. (2013) also found from SDSS that the stellar mass growth of satellite galaxies is constrained to be less than 60% on average. Recently, it has been shown that a large fraction of low mass galaxies do not experience continuous star formation histories, but have undergone significant bursts of star formation (Kauffmann 2014). What exactly causes these bursts is not yet understood, but in all likelihood these results indicate that gas accretion processes onto low mass field galaxies are more complex than assumed in current semi-analytic models. We note that the net effect of a bursty star formation history is to produce a sSFR distribution with a tail that is skewed towards [*low specific star formation rates*]{}, because the duty cycle of the burst phase, when the galaxy is forming stars very rapidly, is short. An extra tail of low sSFR galaxies at low stellar masses would bring the model central galaxy quenched fractions into better agreement with observations, as seen in the rightmost panel of Figure 8. It would also lower the specific star formation rates of a significant fraction of galaxies being accreted as satellites, perhaps causing them to exhaust their existing gas reservoirs more slowly. 
Another issue, pointed out by Kauffmann (2015), is that the quenching of low mass galaxies is correlated with the quenched fraction of their neighbours over very large scales, a correlation that is not currently reproduced by semi-analytic models or by the Illustris simulation. Until we have a model that reproduces the basic trends in quenched fractions as a function of both stellar mass and halo mass, it will be difficult to come to robust conclusions as to whether our treatment of ram-pressure stripping of cold gas is a significant improvement to the models. We have presented some tentative evidence that hot gas densities at the centers of lower mass haloes ($\sim 10^{12} M_{\odot}$) are smaller than predicted by our models, because the quenched fractions at the very centers of these haloes are too high in comparison with observations. One way to reduce the central hot gas densities is through radio AGN feedback processes. In the current implementation of radio AGN feedback, the cooling rate onto the central galaxy is reduced, but the gas distribution remains unaffected, which is not physically reasonable. At this level of detail, however, hydrodynamical simulations may provide a better way forward. Acknowledgements {#acknowledgements .unnumbered} ================ We thank the anonymous referee for useful comments. We also thank Bruno Henriques, Yannick Bahè, Qi Guo and Simon White for helpful discussions. This work is supported by the 973 program (No. 2015CB857003, 2013CB834900), NSF of Jiangsu (No. BK20140050), the NSFC (No. 11333008, 111303072, U1531123) and the Strategic Priority Research Program “The Emergence of Cosmological Structures” of the CAS (XDB09000000). YL acknowledges the support and hospitality of the Max-Planck Institute for Astrophysics. JF acknowledges the support of the Opening Project of the Key Laboratory of Computational Astrophysics, National Astronomical Observatories, CAS. Abadi M. G., Moore B., Bower R. 
G., 1999, MNRAS, 308, 947 Angulo R. E., White S. D. M., 2010, MNRAS, 405, 143 Baldry I. K., Balogh M. L., Bower R. G., Glazebrook K., Nichol R. C., Bamford S. P., Budavari T., 2006, MNRAS, 373, 469 Baldry I. K., Glazebrook K., Driver S. P., 2008, MNRAS, 388, 945 Balogh M. L., Baldry I. K., Nichol R., Miller C., Bower R., Glazebrook K., 2004, ApJ, 615, L101 Bamford S. P., et al., 2009, MNRAS, 393, 1324 Bekki K., 2014, MNRAS, 438, 444 Biermann P., Tinsley B. M., 1975, A&A, 41, 441 Boselli A., Cortese L., Boquien M., Boissier S., Catinella B., Gavazzi G., Lagos C., Saintonge A., 2014, A&A, 564, A67 Boselli A., Gavazzi G., 2006, PASP, 118, 517 Bower R. G., Benson A. J., Malbon R., Helly J. C., Frenk C. S., Baugh C. M., Cole S., Lacey C. G., 2006, MNRAS, 370, 645 Boylan-Kolchin M., Springel V., White S. D. M., Jenkins A., Lemson G., 2009, MNRAS, 398, 1150 Br[ü]{}ggen M., De Lucia G., 2008, MNRAS, 383, 1336 Butcher H., Oemler A., Jr., 1978, ApJ, 219, 18 Cole S., Lacey C. G., Baugh C. M., Frenk C. S., 2000, MNRAS, 319, 168 Croton D. J., et al., 2006, MNRAS, 365, 11 Crowl H. H., Kenney J. D. P., van Gorkom J. H., Vollmer B., 2005, AJ, 130, 65 De Lucia G., Blaizot J., 2007, MNRAS, 375, 2 Dressler A., 1980, ApJ, 236, 351 Fabello S., Kauffmann G., Catinella B., Li C., Giovanelli R., Haynes M. P., 2012, MNRAS, 427, 2841 Font A. S., et al., 2008, MNRAS, 389, 1619 Fu J., Guo Q., Kauffmann G., Krumholz M. R., 2010, MNRAS, 409, 515 Fu J., et al., 2013, MNRAS, 434, 1531 Gunn J. E., Gott J. R., III, 1972, ApJ, 176, 1 Guo Q., White S., 2014, MNRAS, 437, 3228 Guo Q., White S., Angulo R. E., Henriques B., Lemson G., Boylan-Kolchin M., Thomas P., Short C., 2013, MNRAS, 428, 1351 Guo Q., et al., 2011, MNRAS, 413, 101 Haynes M. P., Giovanelli R., 1984, AJ, 89, 758 Henriques B. M. B., White S. D. M., Thomas P. A., Angulo R., Guo Q., Lemson G., Springel V., Overzier R., 2015, MNRAS, 451, 2663 Hughes T. M., Cortese L., 2009, MNRAS, 396, L41 Jiang F., van den Bosch F. 
C., 2014, arXiv, arXiv:1403.6827 Kang X., 2014, MNRAS, 437, 3385 Kang X., Jing Y. P., Mo H. J., B[ö]{}rner G., 2005, ApJ, 631, 21 Kang X., Jing Y. P., Silk J., 2006, ApJ, 648, 820 Kang X., van den Bosch F. C., 2008, ApJ, 676, L101 Kauffmann G., White S. D. M., Guiderdoni B., 1993, MNRAS, 264, 201 Kauffmann G., 2015, MNRAS, 450, 618 Kauffmann G., 2014, MNRAS, 441, 2717 Kauffmann G., Colberg J. M., Diaferio A., White S. D. M., 1999, MNRAS, 303, 188 Kauffmann G., White S. D. M., Heckman T. M., M[é]{}nard B., Brinchmann J., Charlot S., Tremonti C., Brinkmann J., 2004, MNRAS, 353, 713 Kenney J. D. P., van Gorkom J. H., Vollmer B., 2004, AJ, 127, 3361 Kimm T., et al., 2009, MNRAS, 394, 1131 Lanzoni B., Guiderdoni B., Mamon G. A., Devriendt J., Hatton S., 2005, MNRAS, 361, 369 Li C., Kauffmann G., Fu J., Wang J., Catinella B., Fabello S., Schiminovich D., Zhang W., 2012, MNRAS, 424, 1471 Li C., Kauffmann G., Jing Y. P., White S. D. M., B[ö]{}rner G., Cheng F. Z., 2006, MNRAS, 368, 21 Li C., White S. D. M., 2009, MNRAS, 398, 2177 Machacek M., Jones C., Forman W. R., Nulsen P., 2006, ApJ, 644, 155 McCarthy I. G., Frenk C. S., Font A. S., Lacey C. G., Bower R. G., Mitchell N. L., Balogh M. L., Theuns T., 2008, MNRAS, 383, 593 Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563 Okamoto T., Nagashima M., 2003, ApJ, 587, 500 Onions J., et al., 2012, MNRAS, 423, 1200 Roediger E., Hensler G., 2005, A&A, 433, 875 Roediger E., Br[ü]{}ggen M., 2007, MNRAS, 380, 1399 Sakelliou I., Acreman D. M., Hardcastle M. J., Merrifield M. R., Ponman T. J., Stevens I. R., 2005, MNRAS, 360, 1069 Sales L. V., et al., 2015, MNRAS, 447, L6 Salim S., et al., 2007, ApJS, 173, 267 Solanes J. M., Manrique A., Garc[í]{}a-G[ó]{}mez C., Gonz[á]{}lez-Casado G., Giovanelli R., Haynes M. P., 2001, ApJ, 548, 97 Somerville R. S., Primack J. R., 1999, MNRAS, 310, 1087 Springel V., et al., 2005, Natur, 435, 629 Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726 Tecce T. 
E., Cora S. A., Tissera P. B., Abadi M. G., Lagos C. D. P., 2010, MNRAS, 408, 2008 Tonnesen S., Bryan G. L., 2012, MNRAS, 422, 1609 Tonnesen S., Bryan G. L., 2009, ApJ, 694, 789 Tonnesen S., Bryan G. L., 2008, ApJ, 684, L9 Weinmann S. M., Kauffmann G., von der Linden A., De Lucia G., 2010, MNRAS, 406, 2249 Weinmann S. M., van den Bosch F. C., Yang X., Mo H. J., 2006, MNRAS, 366, 2 Wetzel A. R., Tinker J. L., Conroy C., 2012, MNRAS, 424, 232 White S. D. M., Frenk C. S., 1991, ApJ, 379, 52 Whitmore B. C., Gilmore D. M., Jones C., 1993, ApJ, 407, 489 Yang X., Mo H. J., van den Bosch F. C., Pasquali A., Li C., Barden M., 2007, ApJ, 671, 153 Yates R. M., Kauffmann G., 2014, MNRAS, 439, 3817 Zhang W., Li C., Kauffmann G., Xiao T., 2013, MNRAS, 429, 2191 Zwaan M. A., Meyer M. J., Staveley-Smith L., Webster R. L., 2005, MNRAS, 359, L30 [^1]: E-mail:luoyu,[email protected] [^2]: Yates et al. 2014 considered more realistic model for chemical enrichment of different elements.
--- abstract: 'In this paper, a high order free-stream preserving finite difference weighted essentially non-oscillatory (WENO) scheme is developed for the ideal magnetohydrodynamic (MHD) equations on curvilinear meshes. Under the constrained transport framework, magnetic potential evolved by a Hamilton-Jacobi (H-J) equation is introduced to control the divergence error. In this work, we use the alternative formulation of WENO scheme [@christlieb2018high] for the nonlinear hyperbolic conservation law, and design a novel method to solve the magnetic potential. Theoretical derivation and numerical results show that the scheme can preserve free-stream solutions of MHD equations, and reduce error more effectively than the standard finite difference WENO schemes for such problems.' author: - 'Yize Yu [^1]' - 'Yan Jiang [^2]' - 'Mengping Zhang [^3]' bibliography: - 'ref.bib' title: 'Free-stream preserving finite difference schemes for ideal magnetohydrodynamics on curvilinear meshes' --- **Keywords**: High order finite difference scheme, weighted essentially non-oscillatory scheme, curvilinear meshes, free-stream preserving, magnetohydrodynamics, constrained transport. [^1]: School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China [[email protected]]{}. [^2]: School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China [[email protected]]{}. Research supported by NSFC grant 11901555. [^3]: School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China [[email protected]]{}. Research supported by NSFC grant 11871448.
--- abstract: 'We summarize the results presented in Krause et al. (2012a, K12) on the impact of supernova-driven shells and dark-remnant accretion on gas expulsion in globular cluster infancy.' date: '?? and in revised form ??' --- Introduction ============ Galactic globular clusters (GCs) today appear as large aggregates of long-lived low-mass stars, with little or even no gas. Yet, they must have formed as gas-rich objects hosting also numerous massive and intermediate-mass stars. Clues on this early epoch were revealed recently with the discovery of multiple stellar generations thanks to detailed spectroscopic and deep photometric investigations (see e.g. reviews by S. Lucatello and J. Anderson, this volume). In particular, abundance data for light elements from C to Al call for early self-enrichment of GCs by a first generation of rapidly evolving stars. Current scenarios involving either fast rotating massive stars or massive AGBs as potential polluters imply that GCs were initially much more massive and have lost most of their first generation low-mass stars (e.g. Decressin et al. 2007, 2010; D’Ercole et al. 2008; Vesperini et al. 2010; Schaerer & Charbonnel 2011). Driving mechanisms for gas expulsion ==================================== Based on crude energetic arguments, gas expulsion by SNe was long thought to be the timely mechanism that could have unbound first generation stars and removed the bulk of gas together with the metal-enriched SNe ejecta that are not found in second generation stars. In K12 we actually show that this mechanism does not generally work for typical GCs. Indeed, while the energy released by SNe usually exceeds the binding energy, SNe-driven shells turn out to be destroyed by the Rayleigh-Taylor instability before they reach escape speed, as shown in Figure 1 (left panels). Consequently the shell fragments that contain the gas remain bound to the cluster. 
This result, which is presented here for a typical protocluster of initial mass equal to $9 \times 10^6$ M$_{\odot}$ and initial half-mass radius of 3 pc, holds for all but perhaps the initially least massive and most extended GCs (see K12 for details). Instead, K12 propose that gas expulsion is launched thanks to the energy released by coherent and extremely fast accretion of interstellar gas onto dark remnants. Due to the sudden power increase in that case, the shell reaches escape speed before being affected by the Rayleigh-Taylor instability, as depicted in Figure 1 (right panels). Consequently, the gas can be expelled from the cluster, and the sudden change of gravitational potential is expected to unbind a large fraction of first generation low-mass stars sitting initially in the GC outskirts (see e.g. Decressin et al. 2010). Consequences of these results for the self-enrichment scenario will be presented in a forthcoming paper (Krause et al. 2012b, in preparation). The impact of SNe and stellar winds on their surroundings is among the current issues in understanding the chemical evolution of galaxies, e.g. how interstellar gas is energized near those sources, and how ejecta are mixed into the remaining gas. GCs serve as a laboratory to study this in a special, possibly extreme, environment: a smaller, and hence less complex, system than a galaxy as a whole. ![image](Charbonnel_sb_exp_9e6Msun_std_IAU12.eps){width="50.00000%"} ![image](Charbonnel_sb_exp_9e6Msun_lateBH_IAU12.eps){width="50.00000%"} We acknowledge support from the Swiss National Science Foundation, the French Society of Astronomy and Astrophysics, the cluster of excellence “Origin and Structure of the Universe”, and the ESF EUROCORES Programme “Origin of the Elements and Nuclear History of the Universe”. , 2007, *A&A* 475, 859 , 2010, *A&A* 516, A73 , 2008, *MNRAS* 391, 825 , 2012a, *A&A* 546, L5 (K12) , 2011, *MNRAS* 413, 2297 , 2010, *ApJ* 718, L112
--- abstract: 'Dolgachev surfaces are simply connected minimal elliptic surfaces with $p_g=q=0$ and of Kodaira dimension 1. These surfaces were constructed by logarithmic transformations of rational elliptic surfaces. In this paper, we explain the construction of Dolgachev surfaces via ${\mathbb{Q}}$-Gorenstein smoothing of singular rational surfaces with two cyclic quotient singularities. This construction is based on the paper [@LeePark:SimplyConnected]. Also, some exceptional bundles on Dolgachev surfaces associated with the ${\mathbb{Q}}$-Gorenstein smoothing are constructed based on the idea of Hacking [@Hacking:ExceptionalVectorBundle]. In the case of Dolgachev surfaces of type $(2,3)$, we describe the Picard group and present an exceptional collection of maximal length. Finally, we prove that the presented exceptional collection is not full, hence there exists a nontrivial phantom category in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$.' address: - 'Department of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejon 305-701, Korea' - 'Department of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejon 305-701, Korea' author: - Yonghwa Cho and Yongnam Lee bibliography: - 'Dolgachev\_Bib.bib' title: Exceptional collections on Dolgachev surfaces associated with degenerations --- Introduction ============ In the last few decades, the derived category ${\operatorname{D}}^{\rm b}(S)$ of a nonsingular projective variety $S$ has been extensively studied by algebraic geometers. One of the possible attempts is to find an exceptional collection, that is, a sequence of objects (mostly line bundles) $E_1,\ldots,E_n$ such that $${\operatorname{Ext}}^k(E_i,E_j) = \left\{ \begin{array}{cl} 0 & \text{if}\ i > j\\ 0 & \text{if}\ i=j\ \text{and}\ k\neq 0 \\ {\mathbb{C}}& \text{if}\ i=j\ \text{and}\ k=0. 
\end{array} \right.$$ There have been many approaches to finding an exceptional collection of maximal length when $S$ is a nonsingular projective surface with $p_g = q=0$. Gorodentsev and Rudakov [@GorodenstevRudakov:ExceptionalBundleOnPlane] classified all possible exceptional collections in the case $S = {\mathbb P}^2$, and exceptional collections on del Pezzo surfaces were studied by Kuleshov and Orlov [@KuleshovOrlov:ExceptionalSheavesonDelPezzo]. For Enriques surfaces, Zube [@Zube:ExceptionalOnEnriques] found an exceptional collection of length $10$, and the orthogonal part was studied by Ingalls and Kuznetsov [@IngallsKuznetsov:EnriquesQuarticDblSolid] for nodal Enriques surfaces. After the work of Böhning, Graf von Bothmer, and Sosna [@BGvBS:ExeceptCollec_Godeaux], numerous results have also appeared on surfaces of general type (e.g. [@GalkinShinder:Beauville; @BGvBKS:DeterminantalBarlowAndPhantom; @AlexeevOrlov:DerivedOfBurniat; @Coughlan:ExceptionalCollectionOfGeneralType; @KSLee:Isogenus_1; @GalkinKatzarkovMellitShinder:KeumFakeProjective; @Keum:FakeProjectivePlanes]). For surfaces whose Kodaira dimension is one, such exceptional collections are not known, so it is natural to attempt to find an exceptional collection in ${\operatorname{D}}^{\rm b}(S)$. In this paper, we use the technique of ${\mathbb{Q}}$-Gorenstein smoothing to study the case $\kappa(S) = 1$. As far as the authors know, this is the first time an exceptional collection of maximal length has been established on a surface of Kodaira dimension one. The key ingredient is the method of Hacking [@Hacking:ExceptionalVectorBundle], which associates a $T_1$-singularity $(P \in X)$ with an exceptional vector bundle on the general fiber of a ${\mathbb{Q}}$-Gorenstein smoothing of $X$. 
A $T_1$-singularity is a cyclic quotient singularity $$(0 \in {\mathbb A}^2 \big/ \langle \xi \rangle),\quad \xi \cdot(x,y) = (\xi x, \xi^{na-1}y),$$ where $n > a > 0$ are coprime integers and $\xi$ is a primitive $n^2$-th root of unity (see the works of Kollár and Shepherd-Barron [@KSB:CompactModuliOfSurfaces], Manetti [@Manetti:NormalDegenerationOfPlane], and Wahl [@Wahl:EllipticDeform; @Wahl:SmoothingsOfNormalSurfaceSings] for the classification of $T_1$-singularities and their smoothings). In the paper [@LeePark:SimplyConnected], Lee and Park constructed new surfaces of general type via ${\mathbb{Q}}$-Gorenstein smoothings of projective normal surfaces with $T_1$-singularities. Motivated by [@LeePark:SimplyConnected], a substantial amount of work has been carried out, especially on (1) the construction of new surfaces of general type (e.g. [@KeumLeePark:GeneralTypeFromElliptic; @LeeNakayama:SimplyGenType_PositiveChar; @ParkParkShin:SimplyConnectedGenType_K3; @ParkParkShin:SimplyConnectedGenType_K4]); (2) the investigation of the KSBA boundary of moduli of surfaces of general type (e.g. [@HackingTevelevUrzua:FlipSurfaces; @Urzua:IdentifyingNeighbors]). Our approach is based on a rather different perspective: construct a smoothing $X \rightsquigarrow S$ using [@LeePark:SimplyConnected], and apply [@Hacking:ExceptionalVectorBundle] to investigate ${\operatorname{Pic}}S$. We study the case where $S$ is a Dolgachev surface with two multiple fibers of multiplicities $2$ and $3$, and give an explicit ${\mathbb{Z}}$-basis for the Néron-Severi lattice of $S$ (Theorem \[thm:Synop\_NSLattice\]). Afterwards, we find an exceptional collection of line bundles of maximal length in ${\operatorname{D}}^{\rm b}(S)$ (Theorem \[thm:Synop\_ExceptCollection\_MaxLength\]). Notations and Conventions {#notations-and-conventions .unnumbered} ------------------------- Throughout this paper, everything will be defined over the field of complex numbers. A surface is an irreducible projective variety of dimension two. 
If $T$ is a scheme of finite type over ${\mathbb{C}}$ and $t \in T$ a closed point, then we use $(t \in T)$ to indicate the analytic germ. This means that $T$ is a small analytic neighborhood of $t$ which can be shrunk if necessary. Let $n > a > 0$ be coprime integers, and let $\xi$ be a primitive $n^2$-th root of unity. The $T_1$-singularity $$( 0 \in {\mathbb A}^2 \big/ \langle \xi \rangle ),\quad \xi\cdot(x,y) = (\xi x , \xi^{na-1}y)$$ will be denoted by $\bigl( 0 \in {\mathbb A}^2 \big/ \frac{1}{n^2}(1,na-1) \bigr)$. By a divisor, we shall always mean a Cartier divisor unless stated otherwise. A divisor $D$ is effective if $H^0(D) \neq 0$, namely, $D$ is linearly equivalent to a nonnegative sum of integral divisors. If two divisors $D_1$ and $D_2$ are linearly equivalent, we write $D_1 = D_2$ if there is no ambiguity. Two ${\mathbb{Q}}$-Cartier Weil divisors $D_1,D_2$ are ${\mathbb{Q}}$-linearly equivalent, denoted by $D_1 \equiv D_2$, if there exists $n \in {\mathbb{Z}}_{>0}$ such that $nD_1 = nD_2$. We do not need an extra notion of numerical equivalence in this paper. Let $S$ be a nonsingular projective variety. The following invariants are associated with $S$. - The geometric genus $p_g(S) = h^2(\mathcal O_S)$. - The irregularity $q(S) = h^1(\mathcal O_S)$. - The holomorphic Euler characteristic $\chi(S)$. - The Néron-Severi group $\op{NS}(S) = {\operatorname{Pic}}S / {\operatorname{Pic}}^0 S$, where ${\operatorname{Pic}}^0 S$ is the group of divisors algebraically equivalent to zero. Since the definition of Dolgachev surfaces varies in the literature, we fix our definition. Let $q > p > 0$ be coprime integers. A *Dolgachev surface $S$ of type $(p,q)$* is a minimal, simply connected, nonsingular, projective surface with $p_g(S) = q(S) = 0$ and of Kodaira dimension one such that there are exactly two multiple fibers of multiplicities $p$ and $q$. 
In the sequel, we will be given a degeneration $S \rightsquigarrow X$ from a nonsingular projective surface $S$ to a projective normal surface $X$, and compare information between them. We use the superscript “${{\mathrm{g}}}$” to emphasize this correlation. For example, we use $X^{{\mathrm{g}}}$ instead of $S$. If $D \in {\operatorname{Pic}}X$ is a divisor that “deforms” to $X^{{\mathrm{g}}}$, then the resulting divisor is denoted by $D^{{\mathrm{g}}}$. However, usage of this convention will always be explicit; we explain the definition in each circumstance. Synopsis of the paper {#synopsis-of-the-paper .unnumbered} --------------------- In Section \[sec:Construction\], we construct a Dolgachev surface $X^{{\mathrm{g}}}$ of type $(2,n)$ following the technique of Lee and Park [@LeePark:SimplyConnected]. We begin with a pencil of plane cubics generated by two general nodal cubics, which meet in nine distinct points. The pencil defines a rational map ${\mathbb P}^2 \dashrightarrow {\mathbb P}^1$, undefined at the nine points of intersection. Blowing up the nine intersection points resolves the indeterminacy of ${\mathbb P}^2 \dashrightarrow {\mathbb P}^1$, hence yields a rational elliptic surface. After additional blow ups, we get two special fibers $$F_1 := C_1 \cup E_1,\quad\text{and}\quad F_2:= C_2\cup E_2\cup \ldots \cup E_{r+1}.$$ Let $Y$ denote the resulting rational elliptic surface with the general fiber $C_0$, and let $p \colon Y \to {\mathbb P}^2$ denote the blow down morphism. Contracting the curves in the $F_1$ fiber (resp. $F_2$ fiber) except $E_1$ (resp. $E_{r+1}$), we get a morphism $\pi \colon Y \to X$ to a projective normal surface $X$ with two $T_1$-singularities of types $$(P_1 \in X) \simeq \Bigl( 0 \in {\mathbb A}^2 \Big/ \frac{1}{4}(1,1) \Bigr) \quad \text{and}\quad (P_2 \in X) \simeq \Bigl( 0 \in {\mathbb A}^2 \Big/ \frac{1}{n^2}(1,na-1) \Bigr)$$ for coprime integers $n > a > 0$. 
Note that the numbers $n,a$ are determined by the formula $$\frac{n^2}{na-1} = (-b_1) - \frac{1}{ (-b_2) - \frac{1}{\ldots -\frac{1}{-b_r}} },$$ where $b_1,\ldots,b_r$ are the self-intersection numbers of the curves in the chain $\{C_2,\ldots,E_r\}$ (in a suitable order). We prove the formula (Proposition \[prop:SingularSurfaceX\]) $$\pi^* K_X \equiv - C_0 + \frac{1}{2}C_0 + \frac{n-1}{n}C_0, \label{eq:Synop_QuasiCanoncialBdlFormula}$$ which resembles the canonical bundle formula for minimal elliptic surfaces [@BHPVdV:Surfaces p. 213]. We then obtain $X^{{\mathrm{g}}}$ by taking a general fiber of a ${\mathbb{Q}}$-Gorenstein smoothing of $X$. Since the divisor $\pi_* C_0$ lies away from the singularities of $X$, it deforms to a nonsingular elliptic curve $C_0^{{\mathrm{g}}}$ along the deformation $X \rightsquigarrow X^{{\mathrm{g}}}$. We prove that the linear system $\lvert C_0^{{\mathrm{g}}}\rvert$ defines an elliptic fibration $f^{{\mathrm{g}}}\colon X^{{\mathrm{g}}}\to {\mathbb P}^1$. Comparing (\[eq:Synop\_QuasiCanoncialBdlFormula\]) with the canonical bundle formula on $X^{{\mathrm{g}}}$, we obtain the following theorem. \[thm:Synop\_NSLattice\] Let $\varphi \colon \mathcal X \to (0 \in T)$ be a one parameter ${\mathbb{Q}}$-Gorenstein smoothing of $X$ over a smooth curve germ. Then for general $0 \neq t_0 \in T$, the fiber $X^{{\mathrm{g}}}:= \mathcal X_{t_0}$ is a Dolgachev surface of type $(2,n)$. We specialize to the case $a=1$ in Section \[sec:ExcepBundleOnX\^g\], and explain the constructions of exceptional bundles on $X^{{\mathrm{g}}}$ associated with the degeneration $X^{{\mathrm{g}}}\rightsquigarrow X$. For the construction of line bundles, we consider the short exact sequence (Proposition \[prop:Hacking\_Specialization\]) $$0 \to H_2(X^{{\mathrm{g}}},{\mathbb{Z}}) \to H_2(X,{\mathbb{Z}}) \to H_1(M_1,{\mathbb{Z}}) \oplus H_1(M_2,{\mathbb{Z}}) \to 0$$ where $M_i$ is the Milnor fiber of the smoothing of $(P_i \in X)$. 
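The continued-fraction expression above is the negative (Hirzebruch–Jung) expansion of $n^2/(na-1)$, so the chain $[b_1,\ldots,b_r]$ can be computed mechanically from $(n,a)$. A minimal sketch with exact arithmetic (the helper name `hj_expansion` is ours, not from the paper):

```python
from fractions import Fraction

def hj_expansion(num, den):
    """Negative (Hirzebruch-Jung) continued fraction of num/den:
    num/den = b_1 - 1/(b_2 - 1/(... - 1/b_r)); returns [b_1, ..., b_r]."""
    x, bs = Fraction(num, den), []
    while True:
        b = -(-x.numerator // x.denominator)  # ceiling of x
        bs.append(b)
        if x == b:
            return bs
        x = 1 / (b - x)

# chains resolving the T_1-singularity 1/n^2(1, na-1):
assert hj_expansion(4, 1) == [4]         # (n,a) = (2,1): the 1/4(1,1) point
assert hj_expansion(9, 2) == [5, 2]      # (n,a) = (3,1)
assert hj_expansion(25, 9) == [3, 5, 2]  # (n,a) = (5,2)
```

The self-intersection numbers of the curves in the chain $\{C_2,\ldots,E_r\}$ are then $-b_1,\ldots,-b_r$.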
Since $H_1(M_1,{\mathbb{Z}}) \simeq {\mathbb{Z}}/2{\mathbb{Z}}$ and $H_1(M_2,{\mathbb{Z}}) \simeq {\mathbb{Z}}/n{\mathbb{Z}}$, if $D \in {\operatorname{Pic}}Y$ is a divisor such that $$(D.C_1)=2d_1 \in 2{\mathbb{Z}},\ (D.C_2)=nd_2 \in n{\mathbb{Z}}, \text{ and } (D.E_2) = \ldots = (D.E_r) = 0, \label{eq:Synop_GoodDivisorOnY}$$ then $[\pi_*D] \in H_2(X,{\mathbb{Z}})$ maps to the zero element in $H_1(M_1, {\mathbb{Z}}) \oplus H_1(M_2,{\mathbb{Z}})$. Thus, there exists a preimage $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ of $[\pi_*D] \in H_2(X,{\mathbb{Z}})$ along ${\operatorname{Pic}}X^{{\mathrm{g}}}\simeq H_2(X^{{\mathrm{g}}}, {\mathbb{Z}}) \to H_2(X,{\mathbb{Z}})$. The next step is to investigate the relation between $D$ and $D^{{\mathrm{g}}}$. Let $\iota \colon Y \to \tilde X_0$ be the contraction of $E_2,\ldots,E_r$. Then, $Z_1 := \iota(C_1)$ and $Z_2 := \iota(C_2)$ are smooth rational curves. There exists a proper birational morphism $\Phi \colon \tilde{\mathcal X} \to \mathcal X$ (a weighted blow up at the singularities of $X = \mathcal X_0$) such that the central fiber $\tilde{\mathcal X}_0 := \Phi^{-1}(\varphi^{-1}(0))$ is described as follows: it is the union of $\tilde X_0$, the projective plane $W_1 = {\mathbb P}^2_{x_1,y_1,z_1}$, and the weighted projective plane $W_2 = {\mathbb P}_{x_2,y_2,z_2}(1, n-1, 1)$ attached along $$Z_1 \simeq (x_1y_1=z_1^2) \subset W_1,\quad\text{and}\quad Z_2 \simeq (x_2y_2=z_2^n) \subset W_2.$$ Intersection theory on $W_1$ and $W_2$ shows that $\mathcal O_{W_1}(1)\big\vert_{Z_1} = \mathcal O_{Z_1}(2)$ and $\mathcal O_{W_2}(n-1)\big\vert_{Z_2} = \mathcal O_{Z_2}(n)$. The central fiber $\tilde{\mathcal X}_0$ has three irreducible components (a disadvantage), but each component is more manageable than $X$ (an advantage). We work with the smoothing $\tilde{\mathcal X}/(0 \in T)$ instead of $\mathcal X / (0\in T)$. 
The general fiber of $\tilde{\mathcal X}/(0\in T)$ coincides with that of $\mathcal X/(0\in T)$, hence it is the Dolgachev surface $X^{{\mathrm{g}}}$. If $D$ is a divisor on $Y$ satisfying (\[eq:Synop\_GoodDivisorOnY\]), then there exists a line bundle $\tilde{\mathcal D}$ on $\tilde{\mathcal X}_0$ such that $$\tilde{\mathcal D}\big\vert_{\tilde X_0} \simeq \mathcal O_{\tilde X_0}(\iota_*D),\quad \tilde{\mathcal D}\big\vert_{W_1} \simeq \mathcal O_{W_1}(d_1),\quad \text{and}\quad \tilde{\mathcal D}\big\vert_{W_2} \simeq \mathcal O_{W_2}((n-1)d_2).$$ Since the line bundle $\tilde{\mathcal D}$ is exceptional, it deforms uniquely to give a bundle $\mathscr D$ on the family $\tilde{\mathcal X}$. We define $D^{{\mathrm{g}}}\in{\operatorname{Pic}}X^{{\mathrm{g}}}$ to be the divisor associated with the line bundle $\mathscr D\big\vert_{X^{{\mathrm{g}}}}$. Section \[sec:NeronSeveri\] concerns the case $n=3$ and $a=1$. Let $D$, $\tilde{\mathcal D}$ and $D^{{\mathrm{g}}}$ be chosen as above. There exists a short exact sequence $$0 \to \tilde{\mathcal D} \to \mathcal O_{\tilde X_0}(\iota_* D) \oplus \mathcal O_{W_1}(d_1) \oplus \mathcal O_{W_2}(2d_2) \to \mathcal O_{Z_1}(2d_1) \oplus \mathcal O_{Z_2}(3d_2) \to 0. \label{eq:Synop_CohomologySequence}$$ This expresses $\chi(\tilde{\mathcal D})$ in terms of $\chi(\iota_*D)$. Since the Euler characteristic is a deformation invariant, we get $\chi(D^{{\mathrm{g}}}) = \chi(\tilde{\mathcal D})$. Furthermore, it can be proven that $(C_0. D) = (C_0^{{\mathrm{g}}}. D^{{\mathrm{g}}})$. This implies that $(C_0 . D) = (6 K_{X^{{\mathrm{g}}}} . D^{{\mathrm{g}}})$. The Riemann-Roch formula reads $$(D^{{\mathrm{g}}})^2 = \frac{1}{6}(C_0. D) + 2 \chi(\tilde{\mathcal D}) - 2,$$ which is the key to determining the Néron-Severi lattice $\op{NS}(X^{{\mathrm{g}}})$. This leads to the first main theorem of this paper: Let $H \in {\operatorname{Pic}}{\mathbb P}^2$ be the hyperplane divisor, and let $L_0 = p^*(2H)$. 
Consider the following correspondences of divisors (see Figure \[fig:Configuration\_Basic\]). $$\begin{array}{c|c|c|c} {\operatorname{Pic}}Y & F_i - F_j & p^*H - 3F_9 & L_0 \\ \hline {\operatorname{Pic}}X^{{\mathrm{g}}}& F_{ij}^{{\mathrm{g}}}& (p^*H - 3F_9)^{{\mathrm{g}}}& L_0^{{\mathrm{g}}}\\[1pt] \end{array}\raisebox{-0.9\baselineskip}[0pt][0pt]{\,.}$$ Define the divisors $\{G_i^{{\mathrm{g}}}\}_{i=1}^{10} \subset {\operatorname{Pic}}X^{{\mathrm{g}}}$ as follows: $$\begin{aligned} G_i^{{\mathrm{g}}}&= -L_0^{{\mathrm{g}}}+ 10K_{X^{{\mathrm{g}}}} + F_{i9}^{{\mathrm{g}}},\quad i=1,\ldots,8;\\ G_9^{{\mathrm{g}}}&= -L_0^{{\mathrm{g}}}+ 11K_{X^{{\mathrm{g}}}};\\ G_{10}^{{\mathrm{g}}}&= -3L_0^{{\mathrm{g}}}+ (p^*H - 3F_9)^{{\mathrm{g}}}+ 28K_{X^{{\mathrm{g}}}}. \end{aligned}$$ Then the intersection matrix $\bigl( ( G_i^{{\mathrm{g}}}. G_j^{{\mathrm{g}}}) \bigr)$ is $$\left[ \begin{array}{cccc} -1 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & -1 & 0 \\ 0 & \cdots & 0 & 1 \end{array} \right]\raisebox{-2\baselineskip}[0pt][0pt]{.}$$ In particular, $\{G_i^{{\mathrm{g}}}\}_{i=1}^{10}$ is a ${\mathbb{Z}}$-basis for the Néron-Severi lattice $\op{NS}(X^{{\mathrm{g}}})$. We point out that the assumption $n=3$ is crucial for the definition of $G_{10}^{{\mathrm{g}}}$. Indeed, its definition is motivated by the proof of [@Vial:Exceptional_NeronSeveriLattice Theorem 3.1]. The divisor $G_{10}^{{\mathrm{g}}}$ was chosen to satisfy $$K_{X^{{\mathrm{g}}}} = G_1^{{\mathrm{g}}}+ \ldots + G_9^{{\mathrm{g}}}- 3G_{10}^{{\mathrm{g}}},$$ which fails for $n>3$, as $K_{X^{{\mathrm{g}}}}$ is then not primitive. In Section \[sec:ExcepCollectMaxLength\] we continue to assume $n=3$, $a=1$. We give the proof of the second main theorem of the paper: \[thm:Synop\_ExceptCollection\_MaxLength\] Assume that $X^{{\mathrm{g}}}$ originates from a cubic pencil $\lvert \lambda p_*C_1 + \mu p_*C_2\rvert$ generated by two general nodal cubics. 
Then, there exists a semiorthogonal decomposition $$\bigl\langle \mathcal A,\ \mathcal O_{X^{{\mathrm{g}}}},\ \mathcal O_{X^{{\mathrm{g}}}}(G_1^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_{10}^{{\mathrm{g}}}),\ \mathcal O_{X^{{\mathrm{g}}}}(2G_{10}^{{\mathrm{g}}}) \bigr\rangle$$ of ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$, where $\mathcal A$ is a nontrivial phantom category (i.e. $K_0(\mathcal A) = 0$, $\op{HH}_\bullet(\mathcal A) = 0$, and $\mathcal A\not\simeq 0$). The proof contains numerous cohomology computations. As usual, the main tools relating the cohomologies of $X$ and $X^{{\mathrm{g}}}$ are upper-semicontinuity and the invariance of the Euler characteristic. The cohomology long exact sequence of (\[eq:Synop\_CohomologySequence\]) begins with $$0 \to H^0(\tilde{\mathcal D}) \to H^0(\iota_*D) \oplus H^0(\mathcal O_{W_1}(d_1)) \oplus H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_1}(2d_1)) \oplus H^0(\mathcal O_{Z_2}(3d_2)).$$ We prove that if $(D.C_1) = 2d_1 \leq 2$, $(D.C_2) = 3d_2 \leq 3$, and $(D.E_2)=0$, then $h^0(\tilde{\mathcal D}) \leq h^0(D)$. This gives an upper bound of $h^0(D^{{\mathrm{g}}})$. By Serre duality, an upper bound of $h^2(D^{{\mathrm{g}}})$ can be obtained by showing $h^0(K_{X^{{\mathrm{g}}}} - D^{{\mathrm{g}}})=0$. Once upper bounds of $h^0(D^{{\mathrm{g}}})$ and $h^2(D^{{\mathrm{g}}})$ are available, an upper bound of $h^1(D^{{\mathrm{g}}})$ follows from $\chi(D^{{\mathrm{g}}})$. For any divisor $D^{{\mathrm{g}}}$ which appears in the proof of Theorem \[thm:Synop\_ExceptCollection\_MaxLength\], at least one of $\{h^0(D^{{\mathrm{g}}}), h^2(D^{{\mathrm{g}}})\}$ is zero, and the other one is bounded by $\chi(D^{{\mathrm{g}}})$. Then $h^1(D^{{\mathrm{g}}})=0$, and all three numbers $(h^p(D^{{\mathrm{g}}}) : p=0,1,2)$ are evaluated exactly. 
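For $n=3$, the Euler characteristics of the four outer terms in (\[eq:Synop\_CohomologySequence\]) are elementary: $W_1 = {\mathbb P}^2$, $W_2 = {\mathbb P}(1,2,1)$, and $Z_1, Z_2 \simeq {\mathbb P}^1$, so each $\chi$ is an $h^0$ given by a monomial count (these nef line bundles have no higher cohomology for $d_1, d_2 \geq 0$). A small sketch of this bookkeeping (function names are ours, not from the paper):

```python
def chi_P2(d):
    # chi(O_{P^2}(d)) = (d + 1)(d + 2)/2 for d >= 0
    return (d + 1) * (d + 2) // 2

def h0_weighted_plane(weights, d):
    # h^0(O_{P(w0,w1,w2)}(d)): number of monomials of weighted degree d
    w0, w1, w2 = weights
    return sum(1 for i in range(d // w0 + 1)
                 for j in range((d - i * w0) // w1 + 1)
                 if (d - i * w0 - j * w1) % w2 == 0)

def chi_P1(k):
    # chi(O_{P^1}(k)) = k + 1 for k >= -1
    return k + 1

def chi_correction(d1, d2):
    # chi(tilde D) - chi(iota_* D), read off from the four outer terms
    return (chi_P2(d1) + h0_weighted_plane((1, 2, 1), 2 * d2)
            - chi_P1(2 * d1) - chi_P1(3 * d2))

assert h0_weighted_plane((1, 2, 1), 2) == 4  # equals (d2 + 1)^2 with d2 = 1
assert chi_correction(0, 0) == 0             # trivial data: chi unchanged
```

This isolates the combinatorial part of $\chi(\tilde{\mathcal D})$; the remaining term $\chi(\iota_*D)$ is computed on $\tilde X_0$ as in the text.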
One obstruction to this argument is the condition $d_1, d_2 \leq 1$, but it can be dealt with via the following observation: if a line bundle on $X^{{\mathrm{g}}}$ is obtained from $C_1$ or $2C_2+E_2$, then it is trivial. Perturbing $D$ by $C_1$ and $2C_2+E_2$, we can adjust the numbers $d_1$, $d_2$. The proof then reduces to finding a suitable upper bound of $h^0(D)$. A first attempt is to find a smooth rational curve $C \subset Y$ such that $(D.C)$ is small. Then, by the short exact sequence $0 \to \mathcal O_Y(D-C) \to \mathcal O_Y(D) \to \mathcal O_C(D) \to 0$, we get $h^0(D) \leq h^0(D-C) + (C.D)+1$. Replace $D$ by $D-C$ and find another integral curve with small intersection. We repeat this procedure and stop when the value of $h^0(D-C)$ is understood immediately (e.g. when $D-C$ is linearly equivalent to a negative sum of effective curves). This gives an upper bound of $h^0(D)$ for the original $D$. This method sometimes gives a “sharp” upper bound of $h^0(D)$, but sometimes not. Indeed, some cohomologies depend on the configuration of the generating cubics $p_*C_1$, $p_*C_2$ of the cubic pencil, while the previous numerical argument cannot capture the configuration of $p_*C_1$ and $p_*C_2$. For those cases, we find an upper bound of $h^0(D)$ as follows. Assume that $D$ is an effective divisor. Then, $p_*D \subset {\mathbb P}^2$ is a plane curve. The divisor form of $D$ determines the degree of $p_*D$. Also, from the divisor form of $p_*D$, one can read off the conditions that $p_*D$ must satisfy. For example, consider $D = p^*H - E_1$. The exceptional curve $E_1$ is obtained by blowing up the node of $p_*C_1$. Hence, $p_*D$ must be a line passing through the node of $p_*C_1$. In this way, the imposed conditions can be represented by an ideal $\mathcal I \subset \mathcal O_{{\mathbb P}^2}$. Hence, proving $h^0(D) \leq r$ reduces to proving $h^0\bigl(\mathcal O_{{\mathbb P}^2}(\deg p_*D) \otimes \mathcal I\bigr) \leq r$. 
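The example $D = p^*H - E_1$ amounts to a naive parameter count: plane curves of degree $\deg p_*D$ form a projective space, and each simple point condition (such as passing through the node of $p_*C_1$) cuts the dimension down by at most one. A toy version of this count (the helper name is ours; it is a genuine upper bound only when the imposed conditions are independent, which is what the computer-algebra computations verify in the delicate cases):

```python
def h0_upper_bound(degree, point_conditions):
    """Naive bound for h^0 of plane curves of the given degree subject to
    simple point conditions: (d+1)(d+2)/2 minus the number of conditions."""
    return max((degree + 1) * (degree + 2) // 2 - point_conditions, 0)

# D = p^*H - E_1: p_*D is a line (degree 1) through the node of p_*C_1,
# i.e. one point condition on the 3-dimensional space of linear forms.
assert h0_upper_bound(1, 1) == 2
```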
The latter one can be proved via a computer-based approach (Macaulay2). Finally, $\mathcal A\not\simeq 0$ is guaranteed by the argument involving the anticanonical pseudoheight due to Kuznetsov [@Kuznetsov:Height]. We remark that a (simply connected) Dolgachev surface of type $(2,n)$ cannot have an exceptional collection of maximal length for any $n > 3$ as explained in [@Vial:Exceptional_NeronSeveriLattice Theorem 3.13]. Also, Theorem \[thm:Synop\_ExceptCollection\_MaxLength\] gives an answer to the question posed in [@Vial:Exceptional_NeronSeveriLattice Remark 3.15]. Acknowledgements {#acknowledgements .unnumbered} ---------------- The first author thanks Kyoung-Seog Lee for helpful comments on derived categories. He also thanks Alexander Kuznetsov for introducing the technique of height used in Section \[subsec:Incompleteness\]. The second author thanks Fabrizio Catanese and Ilya Karzhemanov for useful remarks. This work was supported by the Global Ph.D. Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2013H1A2A1033339) (to Y.C.), and was partially supported by the NRF of Korea funded by the Korean government (MSIP) (No. 2013006431) (to Y.L.). Construction of Dolgachev Surfaces {#sec:Construction} ================================== Let $n$ be an odd integer. This section presents a construction of Dolgachev surfaces of type $(2,n)$. The construction follows the technique introduced in [@LeePark:SimplyConnected]. Let $C_1,C_2 \subseteq {\mathbb P}^2$ be general nodal cubic curves meeting at nine distinct points, and let $Y' = \op{Bl}_9{\mathbb P}^2 \to {\mathbb P}^2$ be the blow up at the intersection points. Then the cubic pencil $\lvert \lambda C_1 + \mu C_2\rvert$ defines an elliptic fibration $Y' \to {\mathbb P}^1$, with two special fibers $C_1'$ and $C_2'$ (which correspond to the proper transforms of $C_1$ and $C_2$, respectively). 
Blowing up the nodes of $C_1'$ and $C_2'$, we obtain $(-1)$-curves $E_1, E_2$. Also, blowing up one of the intersection points of $C_2''$ (the proper transform of $C_2'$) and $E_2$, we obtain the configuration described in Figure \[fig:Configuration\_Basic\]. The divisors $F_1,\ldots,F_9$ are proper transforms of the exceptional fibers of the blow up $Y' = \op{Bl}_9{\mathbb P}^2 \to {\mathbb P}^2$. The numbers in the parentheses are self-intersection numbers of the corresponding divisors. On the fiber $C_2'' \cup E_2' \cup E_3$, we can think of two different blow ups, as the following dual intersection graphs illustrate. $$\begin{tikzpicture}[scale=1] \draw(0,0) node[anchor=center] (C2) {}; \draw(40pt,0pt) node[anchor=center] (E2) {}; \draw(20pt,15pt) node[anchor=center] (E3) {}; \node[below,shift=(90:1pt)] at (C2.south) {$\scriptstyle -5$}; \node[below,shift=(90:1pt)] at (E2.south) {$\scriptstyle -2$}; \node[above] at (E3.north) {$\scriptstyle -1$}; \fill[red] (C2) circle (1.5pt); \fill[blue] (E2) circle (1.5pt); \fill[black] (E3) circle (1.5pt); \draw[red,-] (C2.north east) -- (E3.south west) node[above left, align=center, midway]{\tiny L}; \draw[blue,-] (E2.north west) -- (E3.south east) node[above right, align=center, midway]{\tiny R}; \draw[-] (C2.east) -- (E2.west); \begin{scope}[shift={(-170pt,0pt)}] \draw(0,0) node[anchor=center] (L C2) {}; \draw(40pt,0pt) node[anchor=center] (L E2) {}; \draw(40pt,15pt) node[anchor=center] (L E3) {}; \draw(80pt,0pt) node[anchor=center] (L E4) {}; \node[below] at (L C2.south) {$\scriptstyle -6$}; \node[below] at (L E2.south) {$\scriptstyle -2$}; \node[below] at (L E4.south) {$\scriptstyle -2$}; \node[above] at (L E3.north) {$\scriptstyle -1$}; \fill[red] (L C2) circle (1.5pt); \fill[blue] (L E2) circle (1.5pt); \fill[red] (L E3) circle (1.5pt); \fill[black] (L E4) circle (1.5pt); \draw[-] (L C2.north east) -- (L E3.south west) node[above, align=center, midway]{\tiny L'}; \draw[-] (L E2.east) -- (L E4.west); \draw[-] (L 
C2.east) -- (L E2.west); \draw[-] (L E4.north west) -- (L E3.south east) node[above, align=center, midway]{\tiny R'}; \end{scope} \begin{scope}[shift={(130pt,0pt)}] \draw(0,0) node[anchor=center] (R C2) {}; \draw(40pt,0pt) node[anchor=center] (R E2) {}; \draw(40pt,15pt) node[anchor=center] (R E3) {}; \draw(80pt,0pt) node[anchor=center] (R E4) {}; \node[below] at (R C2.south) {$\scriptstyle -2$}; \node[below] at (R E2.south) {$\scriptstyle -5$}; \node[below] at (R E4.south) {$\scriptstyle -3$}; \node[above] at (R E3.north) {$\scriptstyle -1$}; \fill[black] (R C2) circle (1.5pt); \fill[red] (R E2) circle (1.5pt); \fill[blue] (R E3) circle (1.5pt); \fill[blue] (R E4) circle (1.5pt); \draw[-] (R C2.north east) -- (R E3.south west) node[above, align=center, midway]{\tiny L'}; \draw[-] (R E2.east) -- (R E4.west); \draw[-] (R C2.east) -- (R E2.west); \draw[-] (R E4.north west) -- (R E3.south east) node[above, align=center, midway]{\tiny R'}; \end{scope} \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, post length=2pt}] (-15pt,5pt) -- (-75pt, 5pt) node[below, align=center, midway]{$\scriptstyle \op{Bl}_{\rm L}$}; \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, post length=2pt}] (55pt,5pt) -- (115pt, 5pt) node[below, align=center, midway]{$\scriptstyle \op{Bl}_{\rm R}$}; \end{tikzpicture}$$ In general, if one has a fiber whose dual graph is the chain $[-k_1,\,-k_2,\,\ldots,\,-k_r]$, then after the blow up at L the chain becomes $[-(k_1+1),\,-k_2,\,\ldots,\,-k_r,\,-2]$. Similarly, the blow up at R yields the chain $[-2,\,-k_1,\,\ldots,\,-k_{r-1},\,-(k_r+1)]$. This presents all possible resolution graphs of $T_1$-singularities [@Manetti:NormalDegenerationOfPlane Thm. 17]. Let $Y$ be the surface after successive blow ups on the second special fiber $C_2'' \cup E_2' \cup E_3$, so that the resulting fiber contains the resolution graph of a $T_1$-singularity of type $\bigl(0 \in {\mathbb A}^2 / \frac{1}{n^2}(1, na-1)\bigr)$ for some odd integer $n$ and an integer $a$ with $\op{gcd}(n,a)=1$. To simplify notation, we will not distinguish divisors from their proper transforms unless ambiguity arises. For instance, the proper transform of $C_1 \in {\operatorname{Pic}}{\mathbb P}^2$ in $Y$ will be denoted by $C_1$, and so on. 
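The two blow-up moves act on the chain of self-intersection numbers in a purely combinatorial way, and one can check directly that they preserve the class of $T_1$-chains, using that a chain $[k_1,\ldots,k_r]$ resolves $\bigl(0 \in {\mathbb A}^2/\frac{1}{n^2}(1,na-1)\bigr)$ exactly when its negative continued-fraction value is $n^2/(na-1)$. A sketch of this check (all function names are ours, not from the paper):

```python
from fractions import Fraction
from math import isqrt

def chain_value(chain):
    # evaluate [k_1, ..., k_r] = k_1 - 1/(k_2 - 1/(... - 1/k_r))
    x = Fraction(chain[-1])
    for k in reversed(chain[:-1]):
        x = k - 1 / x
    return x

def t1_data(chain):
    # recover (n, a) with chain value n^2/(na - 1); fails if not a T_1-chain
    q = chain_value(chain)
    n = isqrt(q.numerator)
    assert n * n == q.numerator and (q.denominator + 1) % n == 0
    return n, (q.denominator + 1) // n

def bl_L(chain):
    # blow up at L: [k_1, ..., k_r] -> [k_1 + 1, k_2, ..., k_r, 2]
    return [chain[0] + 1] + chain[1:] + [2]

def bl_R(chain):
    # blow up at R: [k_1, ..., k_r] -> [2, k_1, ..., k_{r-1}, k_r + 1]
    return [2] + chain[:-1] + [chain[-1] + 1]

assert t1_data([4]) == (2, 1)           # the 1/4(1,1) point
assert t1_data(bl_L([4])) == (3, 1)     # chain [5, 2]
assert t1_data(bl_R([4])) == (3, 2)     # chain [2, 5]
assert t1_data(bl_L([5, 2])) == (4, 1)  # chain [6, 2, 2]
```

Reversing a chain corresponds to interchanging the two coordinates of the quotient, so the pairs $(n,a)$ and $(n,n-a)$ obtained from a chain and its reverse describe the same singularity.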
We fix this configuration of $Y$ throughout this paper, so it is appropriate to give a summary here: 1. the $(-1)$-curves $F_1,\ldots,F_9$ that are proper transforms of the exceptional fibers of $\op{Bl}_9 {\mathbb P}^2 \to {\mathbb P}^2$; 2. the $(-4)$-curve $C_1$ and the $(-1)$-curve $E_1$ arising from the blowing up of the first nodal curve; 3. the negative curves $C_2,\,E_2,\,\ldots,\,E_r,\,E_{r+1}$, where $E_{r+1}^2 = -1$ and $C_2,\,E_2,\,\ldots,\,E_r$ form a resolution graph of a $T_1$-singularity of type $\bigl(0 \in {\mathbb A}^2 \big/\frac{1}{n^2}(1,na-1)\bigr)$. Let $C_0$ be a general fiber of the elliptic fibration $Y \to {\mathbb P}^1$. The fibers are linearly equivalent, thus $$\begin{aligned} C_0 &= C_1 + 2E_1 \nonumber \\ &= C_2 + a_2 E_2 + a_3 E_3 + \ldots + a_{r+1} E_{r+1}, \label{eq:SpecialFiber} \end{aligned}$$ where $a_2,\ldots,a_{r+1}$ are the integers determined by the system of linear equations $$\label{eq:EquationOnFiber} (C_2.E_i) + \sum_{j=2}^{r+1} a_j (E_j.E_i) = 0,\quad i=2,\ldots, r+1.$$ Note that the values $(C_2.E_i)$, $(E_j.E_i)$ are explicitly determined by the configuration (Figure \[fig:Configuration\_General\]). The matrix $\bigl( (E_j.E_i) \bigr)_{2\leq i,j \leq r}$ is negative definite [@Mumford:TopologyOfNormalSurfaceSingularity], and the number $a_{r+1}$ is determined by Proposition \[prop:SingIndexAndFiberCoefficients\], hence the system (\[eq:EquationOnFiber\]) has a unique solution. \[lem:CanonicalofY\] In the above situation, the following formula holds: $$K_Y = E_1 - C_2 - E_2 - \ldots - E_{r+1}.$$ The proof proceeds by induction on $r$. The minimum value of $r$ is two, in which case $C_2\cup E_2$ forms the chain $[-5,\,-2]$. 
Let $H \in {\operatorname{Pic}}{\mathbb P}^2$ be a hyperplane divisor, and let $p \colon Y \to {\mathbb P}^2$ be the blowing down morphism. Then $$K_Y = p^* K_{{\mathbb P}^2} + F_1 + \ldots + F_9 + E_1 + d_2 E_2 + d_3E_3$$ for some $d_2,d_3 \in {\mathbb{Z}}$. Since any cubic curve in ${\mathbb P}^2$ is linearly equivalent to $3 H$, $$\begin{aligned} p^* ( 3H ) &= C_0 + F_1 + \ldots + F_9 \\ &= (C_2 + a_2 E_2 +a_3 E_3) + F_1 + \ldots + F_9 \end{aligned}$$ where $a_2,a_3$ are integers introduced in (\[eq:SpecialFiber\]). Hence, $$\begin{aligned} K_Y &= p^* (-3H) + F_1 + \ldots + F_9 + E_1 + d_2 E_2 + d_3 E_3 \\ &= E_1 - C_2 + (d_2-a_2)E_2 + (d_3-a_3)E_3. \end{aligned}$$ Here, the genus formula shows that $K_Y = E_1 - C_2 - E_2 - E_3$. Assume the induction hypothesis that $K_Y = E_1 - C_2 - E_2 - \ldots - E_{r+1}$. Let $D \in \{C_2,E_2,\ldots,E_r\}$ be a divisor that intersects $E_{r+1}$, and let $\varphi \colon \widetilde Y \to Y$ be the blowing up at the point $D \cap E_{r+1}$. Then, $$K_{\widetilde Y} = \varphi^* K_Y + \widetilde E_{r+2},$$ where $\widetilde E_{r+2}$ is the exceptional divisor of the blowing up $\varphi$. Let $\widetilde C_2, \widetilde E_1, \ldots, \widetilde E_{r+1}$ denote the proper transforms of the corresponding divisors. Then, $\varphi^*$ maps $D$ to $(\widetilde D + \widetilde E_{r+2})$, maps $E_{r+1}$ to $(\widetilde E_{r+1} + \widetilde E_{r+2})$, and maps the other divisors to their proper transforms. It follows that $$\begin{aligned} \varphi^*K_Y &= \varphi^*(E_1 - C_2 - \ldots - E_{r+1}) \\ &= \widetilde E_1 - \widetilde C_2 - \ldots - \widetilde E_{r+1} - 2 \widetilde E_{r+2}. \end{aligned}$$ Hence, $K_{\widetilde Y} = \varphi^* K_Y + \widetilde E_{r+2} = \widetilde E_1 - \widetilde C_2 - \widetilde E_2 - \ldots - \widetilde E_{r+2}$. \[prop:SingularSurfaceX\] Let $\pi \colon Y \to X$ be the contraction of the curves $C_1,\,C_2,\, E_2,\,\ldots,\, E_r$. 
Let $P_1 = \pi(C_1)$ and $P_2 = \pi(C_2 \cup E_2 \cup \ldots \cup E_r)$ be the singularities of types $\bigl( 0 \in {\mathbb A}^2 \big/ \frac{1}{4}(1,1)\bigr)$ and $\bigl( 0 \in {\mathbb A}^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$, respectively. Then the following properties of $X$ hold: 1. \[item:SingularSurfaceX\_Cohomologies\]$X$ is a projective normal surface with $H^1(\mathcal O_X) = H^2(\mathcal O_X)=0$; 2. $\pi^*K_X \equiv (\frac 12 - \frac 1n)C_0 \equiv C_0 - \frac{1}{2} C_0 - \frac{1}{n} C_0$ as ${\mathbb{Q}}$-divisors. In particular, $K_X^2 = 0$, $K_X$ is nef, but $K_X$ is not numerically trivial.   1. Since the singularities of $X$ are rational, $R^q \pi_* \mathcal O_Y = 0$ for $q > 0$. The Leray spectral sequence $$E_2^{p,q} = H^p( X, R^q\pi_* \mathcal O_Y ) \Rightarrow H^{p+q}(Y,\mathcal O_Y)$$ says that $H^p(Y,\mathcal O_Y) \simeq H^p (X, \pi_* \mathcal O_Y) = H^p(X,\mathcal O_X)$ for $p > 0$. The surface $Y$ is obtained from ${\mathbb P}^2$ by a finite sequence of blow ups, hence $H^1(Y,\mathcal O_Y) = H^2(Y,\mathcal O_Y) =0$. 2. Since the morphism $\pi$ contracts $C_1,\,C_2,\,E_2,\,\ldots,\,E_r$, we may write $$\pi^* K_X \equiv K_Y + c_1 C_1 + c_2 C_2 + b_2 E_2 + \ldots + b_r E_r,$$ for $c_1,c_2,b_2,\ldots,b_r \in {\mathbb{Q}}$ (the coefficients need not be integral since $X$ is singular). It is easy to see that $c_1 = \frac 12$. By Lemma \[lem:CanonicalofY\], $$\pi^* K_X \equiv \frac{1}{2}C_0 + (c_2- 1)C_2 + (b_2 -1)E_2+ \ldots + (b_r-1) E_r - E_{r+1}.$$ Neither $\pi^*K_X$ nor $C_0$ intersects $C_2,E_2,\ldots,E_r$. Thus, we get $$\label{eq:Aux1} \left\{ \begin{array}{l@{}l} 0 &{}= (1-c_2)(C_2^2) + \sum_{j =2}^r (1-b_j)(E_j.C_2) + (E_{r+1}.C_2) \\ 0 &{}= (1-c_2)(C_2.E_i) + \sum_{j=2}^r (1-b_j)(E_j.E_i) + (E_{r+1}.E_i),\quad \text{for\ }i=2,\ldots,r. \end{array} \right.$$ After dividing by $a_{r+1}$, (\[eq:EquationOnFiber\]) becomes $$0=\frac{1}{a_{r+1}} (C_2. 
E_i) + \sum_{j=2}^r \frac{a_j}{a_{r+1}} (E_j.E_i) + (E_{r+1}.E_i),\quad \text{for\ }i=2,\ldots,r.$$ In addition, the equation $( C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1} \mathbin. C_2 ) = (C_0 . C_2) = 0 $ gives rise to $$0=\frac{1}{a_{r+1}} (C_2^2) + \sum_{j=2}^r \frac{a_j}{a_{r+1}} (E_j.C_2) + (E_{r+1}.C_2).$$ Comparing these equations with (\[eq:Aux1\]), it is easy to see that the ordered tuples $$(1-c_2,\ 1-b_2,\ \ldots,\ 1-b_r)\quad\text{and}\quad (1/a_{r+1},\ a_2/a_{r+1},\ \ldots,\ a_r / a_{r+1})$$ fit into the same system of linear equations. Since the intersection matrix of the divisors $(C_2,E_2,\ldots,E_r)$ is negative definite, $$(1-c_2,\, 1-b_2,\, \ldots,\, 1-b_r) = (1/a_{r+1},\ a_2/a_{r+1},\ \ldots,\ a_r / a_{r+1}).$$ It follows that $$\begin{aligned} \pi^* K_X &\equiv \frac{1}{2}C_0 + (c_2 -1 )C_2 + (b_2 - 1)E_2 + \ldots + (b_r -1) E_r - E_{r+1} \\ &\equiv \frac{1}{2}C_0 - \frac{1}{a_{r+1}} \bigl( C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1} \bigr) \\ &\equiv \Bigl( \frac{1}{2} - \frac{1}{a_{r+1}} \Bigr) C_0. \end{aligned}$$ It remains to prove $a_{r+1} = n$. This directly follows from Proposition \[prop:SingIndexAndFiberCoefficients\]. It is immediate to see that $C_0^2 = 0$, $C_0$ is nef, and $C_0$ is not numerically trivial. The same properties are true for $\pi^*K_X$. 
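For the minimal configuration $r=2$ (the chain $[-5,-2]$, with the $(-1)$-curve $E_3$ meeting both $C_2$ and $E_2$ transversally), the system (\[eq:EquationOnFiber\]) can be solved exactly. A short sketch with exact arithmetic (the intersection numbers are read off from the dual graph; all names are ours):

```python
from fractions import Fraction

# r = 2: second special fiber F = C_2 + a2*E_2 + a3*E_3 with
# C_2^2 = -5, E_2^2 = -2, E_3^2 = -1, and pairwise intersection
# numbers all equal to 1.
inter = {('C2', 'C2'): -5, ('E2', 'E2'): -2, ('E3', 'E3'): -1,
         ('C2', 'E2'): 1, ('C2', 'E3'): 1, ('E2', 'E3'): 1}

def dot(u, v):
    return inter[(u, v)] if (u, v) in inter else inter[(v, u)]

# (F.E_i) = 0 for i = 2, 3 is a 2x2 linear system in (a2, a3);
# solve it by Cramer's rule.
M = [[dot('E2', 'E2'), dot('E3', 'E2')],
     [dot('E2', 'E3'), dot('E3', 'E3')]]
rhs = [-dot('C2', 'E2'), -dot('C2', 'E3')]
det = Fraction(M[0][0] * M[1][1] - M[0][1] * M[1][0])
a2 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
a3 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det

assert (a2, a3) == (2, 3)  # F = C_2 + 2E_2 + 3E_3, so a_{r+1} = n = 3
assert dot('C2', 'C2') + a2 * dot('E2', 'C2') + a3 * dot('E3', 'C2') == 0
```

This is the base case of the induction in Proposition \[prop:SingIndexAndFiberCoefficients\]: the coefficients $a = 1$, $n - a = 2$, and $a_{r+1} = 3 = n$.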
\[prop:SingIndexAndFiberCoefficients\] Suppose that $C_2 \cup E_2 \cup \ldots \cup E_r$ has the chain configuration $[-k_1,\,-k_2,\,\ldots,\,-k_r]$, so that it contracts to give a $T_1$-singularity of type $\bigl( 0 \in {\mathbb A}^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$. Then, in the expression $$C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1}$$ of the fiber (\[eq:SpecialFiber\]), the coefficient of the $(-k_1)$-curve is $a$, and the coefficient of the $(-k_r)$-curve is $(n-a)$. Furthermore, $a_{r+1}$ equals the sum of these two coefficients, hence $a_{r+1} = n$. The proof proceeds by induction on $r$. The case $r = 2$ is trivial. Indeed, a simple computation shows that $n = 3$, $a = 1$, and $a_2 = 2$, $a_3= 3$. To simplify notation, we reindex $\{C_2,\, E_2,\,\ldots,\, E_{r+1}\}$ as follows: $$(G_1,\,G_2,\,\ldots,\,G_{r+1}) = (E_{i_k},\,E_{i_{k-1}},\,\ldots,\,E_{i_1},\,C_2,\,E_{j_1},\,\ldots,\,E_{j_\ell},\,E_{r+1}).\hskip-35pt\tag{Figure~\ref{fig:Configuration_General}}$$ By the induction hypothesis, we may assume $$C_2 + a_2E_2 + \ldots + a_{r+1} E_{r+1} = a G_1 + \ldots + (n-a) G_r + n G_{r+1}.$$ Let $\varphi_1 \colon \widetilde Y \to Y$ be the blow up at the point $G_{r+1} \cap G_1$, let $\widetilde G_i$ ($i=1,\ldots,r+1$) be the proper transform of $G_i$, and let $\widetilde G_{r+2}$ be the exceptional divisor. 
The $(-1)$-curve $\widetilde G_{r+2}$ meets $\widetilde G_1$ and $\widetilde G_{r+1}$ transversally, so $$\begin{aligned} \varphi_1^*( aG_1 + \ldots + nG_{r+1}) &= a ( \widetilde G_1 + \widetilde G_{r+2}) + g_2 \widetilde G_2 + \ldots + (n-a) \widetilde G_r + n( \widetilde G_{r+1} + \widetilde G_{r+2}) \\ &= a \widetilde G_1 + g_2\widetilde G_2 + \ldots + (n-a) \widetilde G_r + n\widetilde G_{r+1} + (n+a) \widetilde G_{r+2}. \end{aligned}$$ It is well-known that the contraction of $\widetilde G_1, \ldots, \widetilde G_{r+1} \subset \widetilde Y$ produces a cyclic quotient singularity of type $$\Bigl( 0 \in {\mathbb A}^2 \Big/ \frac{1}{(n+a)^2}(1,n(n+a)-1) \Bigr).$$ This proves the statement for the chain $\widetilde G_1 \cup \ldots \cup \widetilde G_{r+2}$, so we are done by induction. The same argument also works if one performs the blow up $\varphi_2 \colon \widetilde Y' \to Y$ at the point $G_{r+1} \cap G_r$. Now we want to smooth out the singularities of $X$ via ${\mathbb{Q}}$-Gorenstein smoothings. It is well-known that $T_1$-singularities admit local ${\mathbb{Q}}$-Gorenstein smoothings, thus it remains to verify: 1. every formal deformation of $X$ is algebraizable; 2. every local deformation of $X$ can be globalized. The first condition is an immediate consequence of Grothendieck’s existence theorem [@Hartshorne:DeformationTheory Example 21.2.5] since $H^2(\mathcal O_X)=0$. The next lemma verifies the second. \[lem:NoObstruction\] Let $Y$ be the nonsingular rational elliptic surface introduced above, and let $\mathcal T_Y$ be the tangent sheaf of $Y$. Then, $$H^2(Y, \mathcal T_Y( - C_1 - C_2 - E_2 - \ldots - E_r ) ) = 0.$$ In particular, $H^2(X,\mathcal T_X) = 0$ (see [@LeePark:SimplyConnected Thm. 2]). The proof is not very different from [@LeePark:SimplyConnected 4, Example 2]. 
The main claim is $$H^0(Y, \Omega_Y^1(K_Y + C_1 + C_2 + E_2 + \ldots + E_r)) =0.$$ By Lemma \[lem:CanonicalofY\] and equation (\[eq:SpecialFiber\]), $$K_Y + C_1 + C_2 + E_2 + \ldots + E_r = C_0 - E_1 - E_{r+1}.$$ Then, $h^0(Y,\Omega_Y^1(C_0 - E_1 - E_{r+1})) \leq h^0(Y,\Omega_Y^1(C_0)) = h^0(Y',\Omega_{Y'}^1(C_0'))$ where $Y'= \op{Bl}_9{\mathbb P}^2$, and $h^0(Y',\Omega_{Y'}^1(C_0')) =0$ by [@LeePark:SimplyConnected 4, Lemma 2]. The result directly follows from Serre duality. We have shown that the surface $X$ admits a ${\mathbb{Q}}$-Gorenstein smoothing $\mathcal X \to T$. The next aim is to show that the general fiber $X^{{\mathrm{g}}}:= \mathcal X_t$ is a Dolgachev surface of type $(2,n)$. \[prop:CohomologyComparison\_YtoX\] Let $X$ be a projective normal surface with only rational singularities, let $\pi \colon Y \to X$ be a resolution of singularities, and let $E_1,\ldots,E_r$ be the exceptional divisors. If $D$ is a divisor on $Y$ such that $(D.E_i)=0$ for all $i=1,\ldots,r$, then $$H^p(Y,D) \simeq H^p(X,\pi_*D)$$ for all $p \geq 0$. Since the singularities of $X$ are rational, each $E_i$ is a smooth rational curve. The assumption on $D$ in the statement implies that $\pi_*D$ is Cartier, and $\pi^*\mathcal O_X(\pi_*D) = \mathcal O_Y(D)$. By the projection formula, $R^p\pi_*\mathcal O_Y(D) \simeq R^p \pi_*( \mathcal O_Y \otimes \pi^* \mathcal O_X(\pi_*D) ) \simeq (R^p \pi_* \mathcal O_Y) \otimes \mathcal O_X(\pi_*D)$. Since $X$ is normal and has only rational singularities, $$R^p\pi_* \mathcal O_Y = \left\{ \begin{array}{ll} \mathcal O_X & \text{if } p=0\\ 0 & \text{if } p > 0. \end{array} \right.$$ Now, the claim is an immediate consequence of the Leray spectral sequence $$E_2^{p,q} = H^p(X,\, R^q\pi_*\mathcal O_Y \otimes \mathcal O_X(\pi_*D) ) \Rightarrow H^{p+q}(Y,\, \mathcal O_Y(D)). \qedhere$$ \[lem:Cohomologies\_ofGeneralFiber\_inY\] Let $\pi \colon Y \to X$ be the contraction defined in Proposition \[prop:SingularSurfaceX\]. 
Then, $$h^0(X,\pi_*C_0) = 2,\quad h^1(X,\pi_*C_0)=1,\ \text{and}\quad h^2(X,\pi_*C_0)=0.$$ It is easy to see that $(C_0.C_1) = (C_0.C_2) = (C_0.E_2) =\ldots = (C_0.E_r) = 0$. Hence by Proposition \[prop:CohomologyComparison\_YtoX\], it suffices to compute $h^p(Y,C_0)$. Since $C_0^2 = (K_Y . C_0)=0$, the Riemann-Roch formula shows $\chi(C_0)=1$. By Serre duality, $h^2(C_0) = h^0(K_Y - C_0)$. In the short exact sequence $$0 \to \mathcal O_Y(K_Y - C_0 - E_1) \to \mathcal O_Y(K_Y -C_0) \to \mathcal O_{E_1} \otimes \mathcal O_Y(K_Y - C_0)\to 0,$$ we find that $H^0(\mathcal O_{E_1} \otimes \mathcal O_Y(K_Y - C_0)) = 0$ since $(K_Y - C_0 \mathbin . E_1) = -1$. It follows that $$h^0(K_Y-C_0) = h^0(K_Y-C_0-E_1),$$ but $K_Y - C_0 - E_1 = -2 C_2 - (a_2 +1) E_2 - \ldots - (a_{r+1} + 1 ) E_{r+1}$ by Lemma \[lem:CanonicalofY\]. Hence $h^2(C_0)=0$. Since the complete linear system $\lvert C_0 \rvert$ defines the elliptic fibration $Y \to {\mathbb P}^1$, we have $h^0(C_0) = 2$. Furthermore, $h^1(C_0)=1$ follows from $h^0(C_0)=2$, $h^2(C_0)=0$, and $\chi(C_0)=1$. The following proposition, due to Manetti [@Manetti:NormalDegenerationOfPlane], is a key ingredient of the proof of Theorem \[thm:SmoothingX\]. \[prop:Manetti\_PicLemma\] Let $\mathcal X \to ( 0 \in T)$ be a smoothing of a normal surface $X$ with $H^1(\mathcal O_X) = H^2(\mathcal O_X)=0$. Then for every $t \in T$, the natural restriction map of second cohomology groups $H^2(\mathcal X,{\mathbb{Z}}) \to H^2(\mathcal X_t,{\mathbb{Z}})$ induces an injection ${\operatorname{Pic}}\mathcal X \to {\operatorname{Pic}}\mathcal X_t$. Furthermore, the restriction to the central fiber ${\operatorname{Pic}}\mathcal X \to {\operatorname{Pic}}X$ is an isomorphism. \[thm:SmoothingX\] Let $X$ be the projective normal surface defined in Proposition \[prop:SingularSurfaceX\], and let $\varphi \colon \mathcal X \to (0 \in T)$ be a one parameter ${\mathbb{Q}}$-Gorenstein smoothing of $X$ over a smooth curve germ $(0 \in T)$.
For general $0 \neq t_0 \in T$, the fiber $X^{{\mathrm{g}}}:= \mathcal X_{t_0}$ satisfies the following: 1. $p_g(X^{{\mathrm{g}}}) = q(X^{{\mathrm{g}}}) = 0$; 2. $X^{{\mathrm{g}}}$ is a simply connected, minimal, nonsingular surface with Kodaira dimension $1$; 3. there exists an elliptic fibration $f^{{\mathrm{g}}}\colon X^{{\mathrm{g}}}\to {\mathbb P}^1$ such that $K_{X^{{\mathrm{g}}}} \equiv C_0^{{\mathrm{g}}}- \frac{1}{2} C_0^{{\mathrm{g}}}- \frac{1}{n} C_0^{{\mathrm{g}}}$, where $C_0^{{\mathrm{g}}}$ is a general nonsingular elliptic fiber of $f^{{\mathrm{g}}}$; 4. $X^{{\mathrm{g}}}$ is isomorphic to the Dolgachev surface of type $(2,n)$.   1. This follows from Proposition \[prop:SingularSurfaceX\]\[item:SingularSurfaceX\_Cohomologies\] and the upper-semicontinuity of $h^p$. 2. Shrinking $(0 \in T)$ if necessary, we may assume that $X^{{\mathrm{g}}}$ is simply connected [@LeePark:SimplyConnected p. 499], and that $K_{X^{{\mathrm{g}}}}$ is nef [@Nakayama:ZariskiDecomposition 5.d]. If $K_{X^{{\mathrm{g}}}}$ were numerically trivial, then $X^{{\mathrm{g}}}$ would be an Enriques surface by the classification of surfaces, contradicting the simple connectedness of $X^{{\mathrm{g}}}$. It follows that $K_{X^{{\mathrm{g}}}}$ is not numerically trivial, and the Kodaira dimension of $X^{{\mathrm{g}}}$ is $1$. 3. Since the divisor $\pi_*C_0$ is not supported on the singular points of $X$, $\pi_* C_0 \in {\operatorname{Pic}}X$. By Proposition \[prop:Manetti\_PicLemma\], ${\operatorname{Pic}}X \simeq {\operatorname{Pic}}\mathcal X \hookrightarrow {\operatorname{Pic}}X^{{\mathrm{g}}}$. Let $C_0^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ be the image of $\pi_*C_0$ under this correspondence. In Section \[subsec:TopologyofX\], we will see that there exist divisors $E_1^{{\mathrm{g}}}$ (resp. $E_{r+1}^{{\mathrm{g}}}$) which map to $\pi_*E_1$ (resp. $\pi_*E_{r+1}$) along the specialization map ${\operatorname{Pic}}X^{{\mathrm{g}}}\hookrightarrow {\operatorname{Cl}}X$.
Clearly, $2E_1^{{\mathrm{g}}}$ and $nE_{r+1}^{{\mathrm{g}}}$ are different as closed subschemes; however, both $2E_1^{{\mathrm{g}}}$ and $nE_{r+1}^{{\mathrm{g}}}$ are linearly equivalent to $C_0^{{\mathrm{g}}}$ since $\pi_*(2E_1) = \pi_*C_0 = \pi_*(nE_{r+1})$. It follows that $h^0(C_0^{{\mathrm{g}}}) \geq 2$. By upper-semicontinuity, Proposition \[prop:CohomologyComparison\_YtoX\], and Lemma \[lem:Cohomologies\_ofGeneralFiber\_inY\], $h^0(C_0^{{\mathrm{g}}})=2$. By the ${\mathbb{Q}}$-Gorenstein condition on $\mathcal X/(0\in T)$, $K_{\mathcal X}$ is ${\mathbb{Q}}$-Cartier. Since $2nK_X = (n-2)\pi_*C_0$ is Cartier, the isomorphism ${\operatorname{Pic}}X \simeq {\operatorname{Pic}}\mathcal X$ maps $2nK_X$ to the Cartier divisor $2n K_{\mathcal X}$. This shows that the map ${\operatorname{Pic}}X \hookrightarrow {\operatorname{Pic}}X^{{\mathrm{g}}}$ sends $2nK_X$ to $2nK_{X^{{\mathrm{g}}}}$. Furthermore, $2nK_X - (n-2)\pi_*C_0 \in {\operatorname{Pic}}X$ is trivial and it maps to $2nK_{X^{{\mathrm{g}}}} - (n-2)C_0^{{\mathrm{g}}}$, hence $$K_{X^{{\mathrm{g}}}} \equiv C_0^{{\mathrm{g}}}- \frac{1}{2}C_0^{{\mathrm{g}}}- \frac{1}{n}C_0^{{\mathrm{g}}}.$$ This shows that $(K_{X^{{\mathrm{g}}}} \mathbin. C_0^{{\mathrm{g}}}) = \frac{2n}{n-2} K_{X^{{\mathrm{g}}}}^2 = \frac{2n}{n-2} K_X^2 = 0$. Furthermore, since $\chi(C_0^{{\mathrm{g}}}) = \chi(\pi_*C_0) = \chi(C_0) = 1$, we get $(C_0^{{\mathrm{g}}})^2 = 0$. Now, we claim that the complete linear system $\lvert C_0^{{\mathrm{g}}}\rvert$ is base point free; indeed, if $p \in X^{{\mathrm{g}}}$ were a base point of $\lvert C_0^{{\mathrm{g}}}\rvert$, then the two distinct closed subschemes $2E_1^{{\mathrm{g}}}, nE_{r+1}^{{\mathrm{g}}}\in \lvert C_0^{{\mathrm{g}}}\rvert$ would meet at $p$, so $(2E_1^{{\mathrm{g}}}\mathbin. nE_{r+1}^{{\mathrm{g}}}) = (C_0^{{\mathrm{g}}})^2 > 0$, a contradiction.
It follows that the linear system $\lvert C_0^{{\mathrm{g}}}\rvert$ defines an elliptic fibration $f^{{\mathrm{g}}}\colon X^{{\mathrm{g}}}\to {\mathbb P}^1$ with general fiber $C_0^{{\mathrm{g}}}$. 4. \[item:thm:SmoothingX\_LastPart\] By [@Dolgachev:AlgebraicSurfaces Chapter 2], every minimal simply connected nonsingular surface with $p_g=q=0$ and of Kodaira dimension $1$ has exactly two multiple fibers with coprime multiplicities. Thus, there exist coprime integers $q > p > 0$ such that $X^{{\mathrm{g}}}\simeq X_{p,q}$, where $X_{p,q}$ is a Dolgachev surface of type $(p,q)$. The canonical bundle formula says that $K_{X_{p,q}} \equiv C_0^{{\mathrm{g}}}- \frac 1p C_0^{{\mathrm{g}}}- \frac 1q C_0^{{\mathrm{g}}}$. Since $X^{{\mathrm{g}}}\simeq X_{p,q}$, this leads to the equality $$\frac 12 + \frac 1n = \frac 1p + \frac 1q.$$ Assume $2 < p < q$. Then, $\frac 12 < \frac 12 + \frac 1n = \frac 1p + \frac 1q \leq \frac 13 + \frac 1q$. Hence, $q < 6$. The only possible candidates are $(p,q,n) = (3,4,12)$ and $(3,5,30)$, but both cases violate $\op{gcd}(2,n) = 1$. It follows that $p=2$ and $q = n$. Theorem \[thm:SmoothingX\] generalizes to constructions of Dolgachev surfaces of type $(m,n)$ for any coprime integers $n>m>0$. Indeed, as mentioned in the proof, we shall see that the Weil divisor $\pi_* E_{r+1}$ deforms to the multiple fiber of multiplicity $n$ (see Example \[eg:DivisorVaries\_onSingular\]). If we perform more blow ups on the $C_1\cup E_1$ fiber so that $X$ has a $T_1$-singularity of type $\bigl( 0 \in {\mathbb A}^2 \big / \frac{1}{m^2}(1,mb-1)\bigr)$, then the surface $X^{{\mathrm{g}}}$ has two multiple fibers of multiplicities $m$ and $n$. Thus, $X^{{\mathrm{g}}}$ is a Dolgachev surface of type $(m,n)$.
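The case analysis for the equality $\frac 12 + \frac 1n = \frac 1p + \frac 1q$ above is small enough to verify by brute force; a minimal sketch (the search bound encodes the estimate $q < 6$ from the proof):

```python
from fractions import Fraction
from math import gcd

# Find coprime pairs q > p > 2 with 1/2 + 1/n = 1/p + 1/q for an integer n > 0.
# Since 1/p + 1/q > 1/2 forces 1/q > 1/6, the range q < 7 already covers all cases.
solutions = []
for p in range(3, 7):
    for q in range(p + 1, 7):
        if gcd(p, q) != 1:
            continue
        s = Fraction(1, p) + Fraction(1, q) - Fraction(1, 2)
        if s > 0 and s.numerator == 1:   # s = 1/n for an integer n
            solutions.append((p, q, s.denominator))
print(solutions)   # [(3, 4, 12), (3, 5, 30)]
```

Both candidate values of $n$ are even, so they are excluded by the standing assumption $\op{gcd}(2,n)=1$, in agreement with the conclusion $p=2$, $q=n$.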
Exceptional vector bundles on Dolgachev surfaces {#sec:ExcepBundleOnX^g} ================================================ In general, it is hard to understand how information on the central fiber is carried over to the general fiber along a ${\mathbb{Q}}$-Gorenstein smoothing. Looking at the topology near the singularities of $X$, one gets a clue for relating information between $X$ and $X^{{\mathrm{g}}}$. This section essentially follows the idea of Hacking. Some ingredients of Hacking’s method, which are necessary for our application, are included in the appendix (Section \[sec:Appendix\]). Readers interested in the details are referred to Hacking’s original paper [@Hacking:ExceptionalVectorBundle]. Topology of the singularities of $X$ {#subsec:TopologyofX} ------------------------------------ Let $L_i \subseteq X$ ($i=1,2$) be the link of the singularity $P_i$. Then, $H_1(L_1,{\mathbb{Z}}) \simeq {\mathbb{Z}}/4{\mathbb{Z}}$ and $H_1(L_2,{\mathbb{Z}}) \simeq {\mathbb{Z}}/n^2{\mathbb{Z}}$ (cf. [@Manetti:NormalDegenerationOfPlane Proposition 13]). Since $\op{gcd}(2,n)=1$, $H_1(L_1,{\mathbb{Z}}) \oplus H_1(L_2,{\mathbb{Z}}) \simeq {\mathbb{Z}}/ 4n^2 {\mathbb{Z}}$ is a finite cyclic group. By [@Hacking:ExceptionalVectorBundle p. 1191], $H_2(X,{\mathbb{Z}}) \to H_1(L_i,{\mathbb{Z}})$ is surjective for each $i=1,2$, thus the natural map $$H_2(X,{\mathbb{Z}}) \to H_1(L_1,{\mathbb{Z}}) \oplus H_1(L_2,{\mathbb{Z}}),\quad \alpha \mapsto ( \alpha \cap L_1 ,\, \alpha \cap L_2)$$ is surjective. We have further information on the groups $H_1(L_i,{\mathbb{Z}})$. \[thm:MumfordTopologyOfLink\] Let $X$ be a projective normal surface containing a $T_1$-singularity $P \in X$. Let $f \colon \widetilde X \to X$ be a good resolution (i.e. the exceptional divisor is simple normal crossing) of the singularity $P$, and let $E_1,\ldots,E_r$ be a chain of exceptional divisors such that $(E_i . E_{i+1}) = 1$ for each $i=1,\ldots,r-1$.
Let $\widetilde L\subseteq \widetilde X$ be the plumbing fixture (see Figure \[fig:PlumbingFixture\]) around $\bigcup E_i$, and let $\alpha_i \subset \widetilde L$ be the loop around $E_i$, oriented suitably. Then the following statements are true. 1. The group $H_1(\widetilde L,{\mathbb{Z}})$ is generated by the loops $\alpha_i$. The relations are $$\sum_j (E_i . E_j) \alpha_j = 0,\quad i=1,\,\ldots,\,r.$$ 2. Let $L \subset X$ be the link of the singularity $P \in X$. Then, $\widetilde L$ is homeomorphic to $L$. (Figure \[fig:PlumbingFixture\]: the plumbing fixture around the curves $E_i$.) Proposition \[prop:Manetti\_PicLemma\] provides a way to associate a Cartier divisor on $X$ with a Cartier divisor on $X^{{\mathrm{g}}}$. This association can be extended, as the following proposition illustrates. \[prop:Hacking\_Specialization\] Let $X$ be a projective normal surface, and let $(P \in X)$ be a $T_1$-singularity of type $\bigl( 0 \in {\mathbb A}^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$. Suppose $X$ admits a ${\mathbb{Q}}$-Gorenstein deformation $\mathcal X/(0 \in T)$ over a smooth curve germ $(0 \in T)$ such that $\mathcal X / (0 \in T)$ is a smoothing of $(P \in X)$, and is locally trivial outside $(P \in X)$. Let $X^{{\mathrm{g}}}$ be a general fiber of $\mathcal X \to (0 \in T)$, and let $\mathcal B \subset \mathcal X$ be a sufficiently small open ball around $P \in \mathcal X$. Then the link $L$ and the Milnor fiber $M$ of $(P \in X)$ are given as follows: $$L = \partial \mathcal B \cap X^{{\mathrm{g}}},\qquad M = \mathcal B \cap X^{{\mathrm{g}}}.$$ In addition, let $B := \mathcal B \cap X$ be the contractible space [@Hacking:ExceptionalVectorBundle 7.1].
Assume that $X^{{\mathrm{g}}}$ is simply connected, $H^2(\mathcal O_{X^{{\mathrm{g}}}}) = 0$, and the natural map $H_2(X,{\mathbb{Z}}) \to H_1(L,{\mathbb{Z}})$ (the connecting homomorphism of the Mayer-Vietoris sequence associated to the decomposition $X = (X\setminus B) \cup B$) is surjective. Then, there exists a short exact sequence $$0 \to H_2(X^{{\mathrm{g}}},{\mathbb{Z}}) \to H_2(X,{\mathbb{Z}}) \to H_1(M,{\mathbb{Z}}) \to 0.$$ Here, the specialization map $H_2(X^{{\mathrm{g}}}, {\mathbb{Z}}) \to H_2(X,{\mathbb{Z}})$ is defined by the composition $$H_2(X^{{\mathrm{g}}}) \simeq H^2(X^{{\mathrm{g}}}) \to H^2(X^{{\mathrm{g}}}\setminus M) \simeq H^2(X \setminus B) \simeq H_2(X \setminus B, L) \simeq H_2(X, B) \simeq H_2(X),$$ and $H_2(X,{\mathbb{Z}}) \to H_1(M,{\mathbb{Z}})$ is the composition of $H_2(X,{\mathbb{Z}}) \to H_1(L,{\mathbb{Z}})$ with the natural map $H_1(L,{\mathbb{Z}}) \to H_1(M,{\mathbb{Z}})$[^1]. Recall that $Y$ is the rational elliptic surface constructed in Section \[sec:Construction\], and $\pi \colon Y \to X$ is the contraction of $C_1,\,C_2,\,E_2,\,\ldots,\,E_r$. Proposition \[prop:Hacking\_Specialization\] gives the short exact sequence $$0 \to H_2(X^{{\mathrm{g}}},{\mathbb{Z}}) \to H_2(X,{\mathbb{Z}}) \to H_1(M_1,{\mathbb{Z}}) \oplus H_1(M_2,{\mathbb{Z}}) \to 0 \label{eq:CokernelSpecialization}$$ where $M_1$ (resp. $M_2$) is the Milnor fiber of the smoothing of $(P_1 \in X)$ (resp. $(P_2 \in X)$). It is well-known that $H_1(M_1,{\mathbb{Z}}) \simeq {\mathbb{Z}}/2{\mathbb{Z}}$ and $H_1(M_2,{\mathbb{Z}}) \simeq {\mathbb{Z}}/n{\mathbb{Z}}$ (cf. [@Manetti:NormalDegenerationOfPlane Proposition 13]). Suppose $D \in {\operatorname{Pic}}Y$ is a divisor such that $(D. C_1) \in 2{\mathbb{Z}}$, $(D. C_2) \in n{\mathbb{Z}}$, and $(D.E_2) = \ldots = (D.E_r) = 0$.
Then, Theorem \[thm:MumfordTopologyOfLink\] and (\[eq:CokernelSpecialization\]) imply that the cycle $[\pi_* D] \in H_2(X,{\mathbb{Z}})$ maps to the trivial element of the cokernel $H_1(M_1,{\mathbb{Z}}) \oplus H_1(M_2,{\mathbb{Z}})$. In particular, there is a cycle in $H_2(X^{{\mathrm{g}}})$ which maps to $[\pi_* D]$. Since $X^{{\mathrm{g}}}$ is a nonsingular surface with $p_g = q = 0$, the first Chern class map and Poincaré duality induce an isomorphism ${\operatorname{Pic}}X^{{\mathrm{g}}}\simeq H_2(X^{{\mathrm{g}}},{\mathbb{Z}})$ (see e.g. [@Hacking:ExceptionalVectorBundle 7.1]). We take the line bundle $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ corresponding to $[\pi_* D] \in H_2(X,{\mathbb{Z}})$. A more detailed description of $D^{{\mathrm{g}}}$ will be presented in Proposition \[prop:LineBundleOnReducibleSurface\]. The next proposition explains how to find a preimage under the surjective map $H_2(X,{\mathbb{Z}}) \to H_1(L_1,{\mathbb{Z}})\oplus H_1(L_2,{\mathbb{Z}})$. The key observation is that if $D \in {\operatorname{Pic}}Y$, then $[\pi_*D] \in H_2(X,{\mathbb{Z}})$ maps to $$\bigl( (D.C_1) \alpha_{C_1},\ (D.C_2) \alpha_{C_2} + (D.E_2) \alpha_{E_2} + \ldots + (D.E_r) \alpha_{E_r}\bigr).$$ \[prop:DesiredDivisorOnY\] As in the proof of \[prop:SingIndexAndFiberCoefficients\], rearrange the chain $C_2,E_2,\ldots,E_r$ as follows: $$\label{eq:Rearrangement} (G_1,\,G_2,\,\ldots,\,G_r) = (E_{i_k},\,E_{i_{k-1}},\,\ldots,\,E_{i_1},\,C_2,\,E_{j_1},\,\ldots,\,E_{j_\ell}).$$ Let $\alpha_{G_1},\, \alpha_{G_2},\, \ldots,\, \alpha_{G_r}$ be the loops in the plumbing fixture around $G_1 \cup G_2 \cup \ldots \cup G_r$. Assume that $\alpha_{C_2}$ is a generator of the cyclic group $H_1( L_2,{\mathbb{Z}})$.[^2] Then there exists a number $N'$ such that $N' \alpha_{C_2} = \alpha_{G_1}$. Now, let $N$ be a solution of the system of congruence equations: $$N \equiv \left \{ \begin{array}{c@{}l} 0 &{\ \operatorname{mod\,}}4 \\ N'&{\ \operatorname{mod\,}}n^2.
\end{array}\right.$$ Let $N_1,\ldots,N_9$ be nonnegative integers with $\sum N_i = N$. Then the divisor $D = \lfloor \sum_i N_i \pi^*\pi_* F_i \rfloor$[^3] on $Y$ has the following properties: 1. $(D. G_1) = 1$; 2. $(D. G_i) = 0$ for all $i \geq 2$; 3. $(D.C_1) =0$. Before giving a proof, we need the following lemma. \[lem:Divisor\_RationalPart\] Let $k_1,\ldots,k_r \geq 2$ be integers. Then, the system of equations $$\left[ \begin{array}{ccccccc} k_1 & -1 & 0 & \ldots & 0 & 0 & 0 \\ -1 & k_2 & -1 & \ldots & 0 & 0 & 0 \\ 0 & -1 & k_3 & \ldots & 0 & 0 & 0 \\ \multicolumn{3}{c}{\vdots} & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & k_{r-2} & -1 & 0 \\ 0 & 0 & 0 & \ldots & -1 & k_{r-1} & -1 \\ 0 & 0 & 0 & \ldots & 0 & -1 & k_r \end{array} \right]\,\left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{r-2} \\ x_{r-1} \\ x_{r} \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 0 \end{array} \right]$$ has a unique solution in $\{ (x_1,\ldots,x_r) \in {\mathbb{Q}}^r : 0 < x_i < 1\ \text{for each }i\}$. Let $D(k_1,\ldots,k_r)$ be the determinant of the $r \times r$ matrix in the statement. For notational convenience, put $D({\varnothing}) = 1$. The solution of the system is given by $$x_i = \frac{D(k_{i+1},\ldots,k_r)}{D(k_1,\ldots,k_r)},\quad i=1,\ldots,r.$$ For $r \geq 2$, the following identity holds: $$D(k_1,\ldots,k_r) = k_1 D(k_2,\ldots,k_r) - D(k_3,\ldots,k_r).$$ An inductive argument then shows that $$0 < D(k_r) < D(k_{r-1},k_r) < \ldots < D(k_2,\ldots,k_r) < D(k_1,\ldots,k_r).$$ In particular, $0 < x_i < 1$ for each $i=1,\ldots,r$. The divisors $F_1,\ldots,F_9$ are not numerically equivalent, but their intersections with each of the divisors $C_1,\,G_1,\,G_2,\,\ldots,\,G_r,\,E_{r+1}$ coincide. Thus we may assume $D = \lfloor N \pi^*\pi_* F_1\rfloor$.
Factor the map $\pi$ into the composition $\eta \circ \iota$ where $\iota$ is the contraction of $G_2,\,\ldots,\,G_r$ and $\eta$ is the contraction of $(\iota_* C_1),\, (\iota_* G_1)$. Let $X_0$ be the target space of the contraction $\iota$. The image of the divisor $N\pi_* F_1$ along $H_2(X,{\mathbb{Z}}) \to H_1(L_1,{\mathbb{Z}}) \oplus H_1(L_2,{\mathbb{Z}})$ is $(0,\alpha_{G_1})$, hence the proper transform $D'$ does not pass through the singular point of $X_0$. Furthermore, $(D'. \iota_*G_1) = 1$ and $(D' . \iota_* C_1) = 0$. It follows that $$N \eta^*\pi_* F_1 = D' + \frac{1}{-(\iota_* G_1)^2} (\iota_* G_1).$$ Since $D'$ lies on the smooth locus of $X_0$, $\iota^* D'$ is a Cartier divisor on $Y$. Now, consider the divisor $\frac{1}{-(\iota_* G_1)^2} \iota^* \iota_* G_1$. There are rational numbers $a_1,\ldots,a_r$ satisfying $$\label{eq:Aux2} \frac{1}{-(\iota_* G_1)^2} \iota^* \iota_* G_1 = a_1 G_1 + \ldots + a_r G_r.$$ Let $k_i := -G_i^2$. Since $(\iota_*G_1)^2 = (\iota^*\iota_* G_1 . G_1)$, taking the intersection of (\[eq:Aux2\]) with $G_1$ yields the equation $1 = k_1a_1 -a_2$. Intersections of the equation (\[eq:Aux2\]) with $G_2,\,\ldots,\,G_r$ give rise to the system of linear equations $$\left\{ \begin{array}{r@{}l} k_1 a_1 - a_2 ={}& 1 \\ -a_1 + k_2a_2 - a_3 ={}& 0 \\ \vdots \\ -a_{r-1} + k_r a_r ={}& 0. \end{array} \right.$$ By Lemma \[lem:Divisor\_RationalPart\], $0 < a_1,\ldots,a_r < 1$. Consequently, from the equations $$\begin{aligned} N \pi^*\pi_* F_1 &= \iota^* D' + \frac{1}{-(\iota_* G_1)^2} \iota^* \iota_* G_1 \\ &= \iota^* D' + a_1 G_1 + \ldots + a_r G_r, \end{aligned}$$ we conclude that $D = \lfloor N\pi^*\pi_* F_1 \rfloor = \iota^* D'$. The intersection numbers (1), (2), (3) are easily verified from the above equation. \[rmk:DivisorOnY\_Simpler\] Proposition \[prop:DesiredDivisorOnY\] produces a divisor associated to the singular point $P_2 \in X$. Similarly, one can produce a divisor associated to $P_1$.
It suffices to take an integer $N$ such that $$N \equiv \left \{ \begin{array}{c@{}l} 1 &{\ \operatorname{mod\,}}4 \\ 0 &{\ \operatorname{mod\,}}n^2. \end{array}\right.$$ Exceptional vector bundles on $X^{{\mathrm{g}}}$ ------------------------------------------------ We keep the notation of Section \[sec:Construction\]: $Y$ is the rational elliptic surface (Figure \[fig:Configuration\_General\]), and $\pi \colon Y \to X$ is the contraction in Proposition \[prop:SingularSurfaceX\]. Let $(0 \in T)$ be the base space of the formal versal deformation ${\mathcal X^{\rm ver} / (0 \in T)}$ of $X$, and let $(0 \in T_i)$ be the base space of the formal versal deformation $(P_i \in \mathcal X^{\rm ver}) / (0 \in T_i)$ of the singularity $(P_i \in X)$. By Lemma \[lem:NoObstruction\] and [@Hacking:ExceptionalVectorBundle Lemma 7.2], there exists a formally smooth morphism of formal schemes $$\mathfrak T \colon (0 \in T) \to \textstyle\prod_i (0 \in T_i).$$ For each $i=1,2$, take a base extension $( 0 \in T_i') \to (0 \in T_i)$ to which Proposition \[prop:HackingWtdBlup\] can be applied. Then, there exists a fiber product diagram $$\begin{xy} (0,0)*+{(0 \in T)}="00"; (30,0)*+{\textstyle \prod_i ( 0 \in T_i)}="10"; (0,15)*+{(0 \in T')}="01"; "10"+"01"-"00"*+{\textstyle \prod_i ( 0 \in T_i')}="11"; {\ar^(0.44){\mathfrak T} "00";"10"}; {\ar^(0.44){\mathfrak T'} "01";"11"}; {\ar "01";"00"}; {\ar "11";"10"}; \end{xy}.$$ Let $\mathcal X' / (0 \in T')$ be the deformation obtained by pulling back $\mathcal X^{\rm ver} / ( 0 \in T)$ along $(0 \in T') \to (0 \in T)$.
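The systems of congruences appearing in Proposition \[prop:DesiredDivisorOnY\] and in the remark above are always solvable because $\op{gcd}(4, n^2) = 1$ for odd $n$, so the Chinese remainder theorem applies. A minimal sketch (the values of $n$ and $N'$ below are illustrative placeholders, not data from the construction):

```python
from math import gcd

def crt2(r1, m1, r2, m2):
    """Return the unique N mod m1*m2 with N ≡ r1 (mod m1) and N ≡ r2 (mod m2),
    assuming gcd(m1, m2) == 1."""
    assert gcd(m1, m2) == 1
    inv = pow(m1, -1, m2)               # modular inverse (Python 3.8+)
    return (r1 + m1 * (((r2 - r1) * inv) % m2)) % (m1 * m2)

n, N_prime = 5, 7                       # illustrative: any odd n >= 3, any N'
N = crt2(0, 4, N_prime, n**2)           # N ≡ 0 (mod 4), N ≡ N' (mod n^2)
assert N % 4 == 0 and N % n**2 == N_prime
```

The same helper with `crt2(1, 4, 0, n**2)` produces the integer required in the remark.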
The deformation $\mathcal X' / (0 \in T')$ is eligible for Proposition \[prop:HackingWtdBlup\], hence there exists a proper birational map $\Phi \colon \tilde{\mathcal X} \to \mathcal X'$ such that the central fiber $\tilde {\mathcal X}_0 = \Phi^{-1}(\mathcal X_0')$ is the union of three irreducible components $\tilde X_0$, $W_1$, $W_2$, where $\tilde X_0$ is the proper transform of $X = \mathcal X_0'$, and $W_1$ (resp. $W_2$) is the exceptional locus over $P_1$ (resp. $P_2$). The intersection $Z_i := \tilde X_0 \cap W_i$ ($i=1,2$) is a smooth rational curve. From now on, assume $a=1$. This is the case in which the resolution graph of the singular point $P_2 \in X$ forms the chain $C_2,\,E_2,\,\ldots,\,E_r$ in this order. Indeed, the resolution graph of a cyclic quotient singularity $\bigl(0 \in {\mathbb A}^2 / \frac{1}{n^2}(1,n-1)\bigr)$ is the chain $[-(n+2),\,-2,\,\ldots,\,-2]$, that is, a $-(n+2)$-curve followed by a string of $(-2)$-curves. Let $\iota \colon Y \to \tilde X_0$ be the contraction of $E_2,\ldots,E_r$ (see Proposition \[prop:HackingWtdBlup\]\[item:prop:HackingWtdBlup\]). As noted in Remark \[rmk:SimplestSingularCase\], $W_1$ is isomorphic to ${\mathbb P}^2$ and $Z_1$ is a smooth conic in $W_1$, hence $\mathcal O_{W_1}(1)\big\vert_{Z_1} = \mathcal O_{Z_1}(2)$. Also, $$W_2 \simeq {\mathbb P}_{x,y,z}(1,n-1,1),\quad Z_2 = (xy=z^n) \subset W_2,\ \text{and}\quad \mathcal O_{W_2}(n-1)\big\vert_{Z_2} = \mathcal O_{Z_2}(n).
\label{eq:SecondWtdBlowupExceptional}$$ The last statement can be verified as follows: let $h_{W_2} = c_1(\mathcal O_{W_2}(1))$; then $(n-1)h_{W_2}^2 = 1$, so $\bigl( c_1(\mathcal O_{W_2}(n-1)) \mathbin . Z_2\bigr) = \bigl( (n-1)h_{W_2} \mathbin. nh_{W_2} \bigr) = n$. In what follows, we construct exceptional vector bundles on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$. The following table exhibits the suitable bundles on the irreducible components $W_1,\,W_2$ with respect to the values $(D . C_1)$, $(D. C_2)$.

----------- ------------------------ ------ ----------- ------------------------- ------
$(D.C_1)$   $W_1$                    rank   $(D.C_2)$   $W_2$                     rank
$0$         $\mathcal O_{W_1}$       $1$    $0$         $\mathcal O_{W_2}$        $1$
$1$         $\mathcal T_{W_1}(-1)$   $2$    $1$         ?                         $n$
$2$         $\mathcal O_{W_1}(2)$    $1$    $n$         $\mathcal O_{W_2}(n-1)$   $1$
----------- ------------------------ ------ ----------- ------------------------- ------

\[table:IntersectionNumbers\_andRanks\] The symbol $\mathcal T_{W_1}$ denotes the tangent sheaf of $W_1$. The bundle marked with a question mark exists by Proposition \[prop:Hacking\_BundleG\], but we do not use it later. We summarize this observation in the line bundle case (see Proposition \[prop:HackingDeformingBundles\] for the vector bundle case): \[prop:LineBundleOnReducibleSurface\] Let $D \in {\operatorname{Pic}}Y$ be a divisor such that $(D.C_1) =2d_1 \in 2{\mathbb{Z}}$, $(D.C_2) = nd_2\in n{\mathbb{Z}}$, and $(D.E_i) = 0$ for $i=2,\ldots,r$.
Then, there exists a line bundle $\tilde {\mathcal D}$ on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ such that $$\tilde{\mathcal D}\big\vert_{\tilde X_0} = \mathcal O_{\tilde X_0}(\iota_* D),\quad \tilde{\mathcal D}\big\vert_{W_1} = \mathcal O_{W_1}(d_1),\quad\text{and}\quad \tilde{\mathcal D}\big\vert_{W_2} = \mathcal O_{W_2}((n-1)d_2).$$ Using Table \[table:IntersectionNumbers\_andRanks\] and Proposition \[prop:LineBundleOnReducibleSurface\], we can assemble some exceptional vector bundles on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ (Table \[table:ExcBundles\_OnSingular\]). Due to the exact sequence (\[eq:ExactSeq\_onReducibleSurface\]), it is not hard to prove that the bundles listed below are exceptional.

------------------------------------------------------------------- ------------------------------------------------ ------------------------ -------------------------------
$\tilde{\mathcal X}_0$                                              $\tilde X_0$                                     $W_1$                    $W_2$
$\mathcal O_{\tilde{\mathcal X}_0}$                                 $\mathcal O_{\tilde X_0}$                        $\mathcal O_{W_1}$       $\mathcal O_{W_2}$
$\tilde{\mathcal F}_{ij}\,{\scriptstyle (1 \leq i\neq j \leq 9)}$   $\mathcal O_{\tilde X_0}(\iota_*(F_i - F_j))$    $\mathcal O_{W_1}$       $\mathcal O_{W_2}$
$\tilde{\mathcal C}_0$                                              $\mathcal O_{\tilde X_0}(\iota_*C_0)$            $\mathcal O_{W_1}$       $\mathcal O_{W_2}$
$\tilde{\mathcal K}$                                                $\mathcal O_{\tilde X_0}(K_{\tilde X_0})$        $\mathcal O_{W_1}(1)$    $\mathcal O_{W_2}(n-1)$
$\tilde{\mathcal R}$                                                $\mathcal O_{\tilde X_0}(\iota_*R)^{\oplus 2}$   $\mathcal T_{W_1}(-1)$   $\mathcal O_{W_2}^{\oplus 2}$
------------------------------------------------------------------- ------------------------------------------------ ------------------------ -------------------------------

\[table:ExcBundles\_OnSingular\] In the last row, $R = \lfloor N \pi^* \pi_* F_1 \rfloor$ where $N$ is an integer such that $$N \equiv \left \{ \begin{array}{c@{}l} 1 &{\ \operatorname{mod\,}}4 \\ 0 &{\ \operatorname{mod\,}}n^2.
\end{array}\right.$$ See Proposition \[prop:DesiredDivisorOnY\] and Remark \[rmk:DivisorOnY\_Simpler\]. One of the benefits of having an exceptional vector bundle is that it deforms uniquely in a family. Let $\tilde {\mathcal D}$ be an exceptional line bundle on the reducible surface $\tilde{\mathcal X}_0$ as in \[prop:LineBundleOnReducibleSurface\]. Then, $\tilde {\mathcal D}$ deforms uniquely to a line bundle $\mathscr D$ on $\tilde{\mathcal X}$. We define $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ by $\mathcal O_{X^{{\mathrm{g}}}}(D^{{\mathrm{g}}}) = \mathscr D\big\vert_{X^{{\mathrm{g}}}}$. We finish this section by presenting an exceptional collection of length $9$ on the Dolgachev surface $X^{{\mathrm{g}}}$. Note that this collection cannot generate the whole category ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. \[prop:ExceptCollection\_ofLengthNine\] Let $F_{1j}^{{\mathrm{g}}}\, (j>1)$ be the exceptional vector bundle on $X^{{\mathrm{g}}}$ which arises from the deformation of $\tilde {\mathcal F}_{1j}$ along $\tilde{\mathcal X} / (0 \in T')$. Then the ordered tuple $$\bigl\langle \mathcal O_{X^{{\mathrm{g}}}},\, \mathcal O_{X^{{\mathrm{g}}}}(F_{12}^{{\mathrm{g}}}),\,\ldots,\, \mathcal O_{X^{{\mathrm{g}}}}(F_{19}^{{\mathrm{g}}}) \bigr\rangle$$ forms an exceptional collection in the derived category ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. By upper-semicontinuity, it suffices to prove that $H^p(\tilde{\mathcal X}_0, \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee) = 0$ for $1 \leq i < j \leq 9$ and $p \geq 0$.
For any vector bundle $\tilde{\mathscr E}$ on $\tilde{\mathcal X}_0$, there is a short exact sequence $$0 \to \tilde{\mathscr E} \to \tilde{\mathscr E}\big\vert_{\tilde X_0} \oplus \tilde{\mathscr E}\big\vert_{W_1} \oplus \tilde{\mathscr E}\big\vert_{W_2} \to \tilde{\mathscr E}\big\vert_{Z_1} \oplus \tilde{\mathscr E} \big\vert_{Z_2} \to 0 \label{eq:ExactSeq_onReducibleSurface}$$ where the morphism on the left is the sum of the natural restrictions, and the morphism on the right maps $(s_0,\,s_1,\,s_2)$ to $(s_0-s_1,s_0-s_2)$. The above sequence for $\tilde{\mathscr E} = \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee$ becomes $$0 \to \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee \to \mathcal O_{\tilde X_0}(\iota_* ( F_j - F_i)) \oplus \mathcal O_{W_1} \oplus \mathcal O_{W_2} \to \mathcal O_{Z_1} \oplus \mathcal O_{Z_2} \to 0.$$ Since $H^0(\mathcal O_{W_k}) \simeq H^0(\mathcal O_{Z_k})$ and $H^p(\mathcal O_{W_k}) = H^p (\mathcal O_{Z_k}) = 0$ for $k=1,2$ and $p > 0$, it suffices to prove that $H^p(\mathcal O_{\tilde X_0} ( \iota_*(F_j - F_i) ) ) = 0$ for all $p\geq 0$ and $i< j$. The surface $\tilde X_0$ is normal (cf. [@Hacking:ExceptionalVectorBundle p. 1178]) and the divisor $F_j - F_i$ does not intersect the exceptional locus of $\iota \colon Y \to \tilde X_0$. By Proposition \[prop:CohomologyComparison\_YtoX\], $H^p( \tilde X_0, \iota_*(F_j-F_i)) \simeq H^p(Y, F_j-F_i)$ for all $p \geq 0$. It remains to prove that $H^p(Y, F_j-F_i) = 0$ for $p \geq 0$. By Riemann-Roch, $$\chi(F_j-F_i) = \frac{1}{2} ( F_j-F_i \mathbin. F_j - F_i - K_Y) + 1,$$ and this is zero by Lemma \[lem:CanonicalofY\]. Since $(F_j\mathbin.F_j - F_i) = -1$ and $F_j \simeq {\mathbb P}^1$, in the short exact sequence $$0 \to \mathcal O_Y(-F_i) \to \mathcal O_Y(F_j-F_i) \to \mathcal O_{F_j}(F_j-F_i) \to 0$$ we obtain $H^0(-F_i) \simeq H^0(F_j-F_i)$. In particular, $H^0(F_j-F_i) = 0$.
By Serre duality and Lemma \[lem:CanonicalofY\], $H^2(F_j-F_i) = H^0(E_1 + F_i - F_j - C_2 - \ldots - E_{r+1})^*$. Similarly, since $(E_1\mathbin.E_1 + F_i - F_j - C_2 - \ldots - E_{r+1}) < 0$, $(F_i\mathbin.F_i - F_j - C_2 - \ldots - E_{r+1}) < 0$, and $E_1$, $F_i$ are rational curves, $H^0( E_1 + F_i - F_j - C_2 - \ldots - E_{r+1}) \simeq H^0(-F_j - C_2 - \ldots - E_{r+1}) = 0$. This proves that $H^2(F_j-F_i) = 0$. Finally, $\chi (F_j - F_i) = 0$ implies $H^1(F_j - F_i) =0$. \[rmk:ExceptCollection\_SerreDuality\] In Proposition \[prop:ExceptCollection\_ofLengthNine\], the trivial bundle $\mathcal O_{X^{{\mathrm{g}}}}$ can be replaced by the deformation of the line bundle $\tilde{\mathcal K}^\vee$ (Table \[table:ExcBundles\_OnSingular\]); the strategy of the proof is exactly the same. Since $\tilde{\mathcal K}^\vee$ deforms to $\mathcal O_{X^{{\mathrm{g}}}}(-K_{X^{{\mathrm{g}}}})$, taking duals shows that $$\bigl\langle \mathcal O_{X^{{\mathrm{g}}}}(F_{21}^{{\mathrm{g}}}),\,\ldots,\, \mathcal O_{X^{{\mathrm{g}}}}(F_{91}^{{\mathrm{g}}}) ,\, \mathcal O_{X^{{\mathrm{g}}}}(K_{X^{{\mathrm{g}}}}) \bigr\rangle$$ is also an exceptional collection in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. This will be used later (see Step \[item:ProofFreePart\_thm:ExceptCollection\_MaxLength\] in the proof of Theorem \[thm:ExceptCollection\_MaxLength\]). The Néron-Severi lattices of Dolgachev surfaces of type $(2,3)$ {#sec:NeronSeveri} =============================================================== This section is devoted to the study of the simplest case, namely the case $n=3$ and $a=1$. The surface $Y$ has a simpler configuration (Figure \[fig:Configuration\_Basic\]). We cook up several divisors on $X^{{\mathrm{g}}}$ following the recipe below. \[recipe:PicardLatticeOfDolgachev\] Recall that $\pi \colon Y \to X$ is the contraction of $C_1, C_2, E_2$ and $\iota \colon Y \to \tilde X_0$ is the contraction of $E_2$. 1.
Pick a divisor $D \in {\operatorname{Pic}}Y$ satisfying $(D.C_1) \in 2 {\mathbb{Z}}$, $(D. C_2) \in 3{\mathbb{Z}}$, and $(D. E_2) = 0$. 2. Attach suitable line bundles (Proposition \[prop:LineBundleOnReducibleSurface\]) on $W_i$ ($i=1,2$) to $\mathcal O_{\tilde X_0}(\iota_* D)$ to produce a line bundle, say $\tilde{\mathcal D}$, on $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$. It deforms to a line bundle $\mathcal O_{X^{{\mathrm{g}}}}(D^{{\mathrm{g}}})$ on the Dolgachev surface $X^{{\mathrm{g}}}$. 3. Use the short exact sequence (\[eq:ExactSeq\_onReducibleSurface\]) to compute $\chi(\tilde {\mathcal D})$. Then, by the deformation invariance of the Euler characteristic, $\chi(D^{{\mathrm{g}}}) = \chi(\tilde {\mathcal D})$. 4. \[item:recipe:CanonicalIntersection\] Since the divisor $\pi_* C_0$ is supported away from the singularities of $X$, it is Cartier. By Lemma \[lem:Intersection\_withFibers\], $(C_0^{{\mathrm{g}}}.D^{{\mathrm{g}}}) = (C_0.D)$. Furthermore, $C_0^{{\mathrm{g}}}= 6K_{X^{{\mathrm{g}}}}$, thus the Riemann-Roch formula on the surface $X^{{\mathrm{g}}}$ reads $$(D^{{\mathrm{g}}})^2 = \frac{1}{6}( D . C_0) + 2 \chi(\tilde {\mathcal D}) - 2.$$ This computes the intersections of divisors in $X^{{\mathrm{g}}}$. The following lemmas are included for computational purposes. \[lem:EulerChar\_WtdProj\] Let $h = c_1(\mathcal O_{W_2}(1)) \in H_2(W_2,{\mathbb{Z}})$ be the hyperplane class of the weighted projective space $W_2 = {\mathbb P}(1,2,1)$. For any even integer $n \in {\mathbb{Z}}$, $$\chi( \mathcal O_{W_2}(n) ) = \frac{1}{4}n(n+4) + 1.$$ By well-known properties of weighted projective spaces, $(1 \cdot 2 \cdot 1)h^2 = 1$, $c_1(K_{W_2}) = -(1+2+1)h = -4h$, and $\mathcal O_{W_2}(2)$ is invertible. The Riemann-Roch formula for invertible sheaves (cf. [@Hacking:ExceptionalVectorBundle Lemma 7.1]) says that $\chi(\mathcal O_{W_2}(n)) = \frac{1}{2}( nh \mathbin. (n+4)h) + 1 = \frac{1}{4}n(n+4) + 1$.
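Since $W_2 = {\mathbb P}(1,2,1)$ is a toric surface and the higher cohomology of $\mathcal O_{W_2}(n)$ vanishes for $n \geq 0$, the formula of Lemma \[lem:EulerChar\_WtdProj\] can also be sanity-checked by counting monomials $x^a y^b z^c$ of weighted degree $n$; a quick numerical sketch:

```python
def h0_weighted(n, weights=(1, 2, 1)):
    """Count monomials x^a y^b z^c with a*w0 + b*w1 + c*w2 = n, i.e. the
    dimension of H^0(O(n)) on the weighted projective plane P(w0, w1, w2)."""
    w0, w1, w2 = weights
    return sum(1 for a in range(n + 1) for b in range(n + 1) for c in range(n + 1)
               if a * w0 + b * w1 + c * w2 == n)

# For even n >= 0 (where O_{W_2}(n) is invertible), the count matches
# chi(O_{W_2}(n)) = n(n+4)/4 + 1 because the higher cohomology vanishes.
for n in range(0, 21, 2):
    assert h0_weighted(n) == n * (n + 4) // 4 + 1
```

The agreement for all small even degrees is a direct reflection of the Riemann-Roch computation in the proof.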
\[lem:EulerCharacteristics\] Let $S$ be a projective normal surface with $\chi(\mathcal O_S) = 1$. Assume that all the divisors below are supported on the smooth locus of $S$. Then, 1. $\chi(D_1 + D_2) = \chi(D_1) + \chi(D_2) + (D_1 . D_2) - 1$; 2. $\chi(-D) = -\chi(D) + D^2 + 2$; 3. $\chi(-D) = p_a(D)$ where $p_a(D)$ is the arithmetic genus of $D$; 4. $\chi(nD) = n\chi(D) + \frac{1}{2} n(n-1)D^2 - n + 1$ for all $n \in {\mathbb{Z}}$. 5. $\chi(nD) = n^2\chi(D) + \frac{1}{2}n(n-1) (K_S .D) - n^2 + 1$ for all $n \in {\mathbb{Z}}$. Assume in addition that $D$ is an integral curve with $p_a(D) = 0$. Then 1. $\chi(D) = D^2 + 2$, $\chi(-D) = 0$; 2. $\chi(nD) = \frac{1}{2}n(n+1)D^2 + (n+1)$ for all $n \in {\mathbb{Z}}$. All the formulas in the statement are simple variants of the Riemann-Roch formula. \[lem:Intersection\_withFibers\] Let $D$, $\tilde{\mathcal D}$, $D^{{\mathrm{g}}}$ be as in Recipe \[recipe:PicardLatticeOfDolgachev\]. Then, $(C_0.D) = (C_0^{{\mathrm{g}}}. D^{{\mathrm{g}}})$. Since $C_0$ does not intersect $C_1,C_2,E_2$, the corresponding line bundle $\tilde{\mathcal C}_0$ on $\tilde{\mathcal X}_0$ is the gluing of $\mathcal O_{\tilde X_0}(\iota_*C_0)$, $\mathcal O_{W_1}$, and $\mathcal O_{W_2}$. Thus, $(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0) \big\vert_{W_i} = \tilde{\mathcal D}\big\vert_{W_i}$ for $i=1,2$. From this and (\[eq:ExactSeq\_onReducibleSurface\]), it can be immediately shown that $\chi(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0 ) - \chi(\tilde{\mathcal D}) = \chi(D + C_0 ) - \chi (D)$. If $D$ is the trivial divisor on $Y$, the previous equation gives $\chi(C_0^{{\mathrm{g}}}) = \chi(\tilde{\mathcal C}_0) = \chi(C_0) = 1$. Now, using Lemma \[lem:EulerCharacteristics\](1), we deduce $(C_0^{{\mathrm{g}}}. D^{{\mathrm{g}}}) = \chi(D^{{\mathrm{g}}}+ C_0^{{\mathrm{g}}}) - \chi(D^{{\mathrm{g}}}) = \chi(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0 ) - \chi(\tilde{\mathcal D}) = \chi(D + C_0 ) - \chi (D) = (C_0.D)$.
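Each item of Lemma \[lem:EulerCharacteristics\] is a formal consequence of the Riemann-Roch formula $\chi(D) = \frac{1}{2}(D^2 - K_S.D) + \chi(\mathcal O_S)$ with $\chi(\mathcal O_S) = 1$. A mechanical way to double-check the algebra is to treat $D^2$, $(K_S.D)$, etc. as free integer parameters (this verifies only the polynomial identities, not the geometry; parity constraints are ignored, so exact rational arithmetic is used):

```python
# Treat the intersection numbers as free integer parameters and verify that
# the identities of the lemma all reduce to Riemann-Roch with chi(O_S) = 1.
from fractions import Fraction
from random import randint, seed

seed(0)

def chi(sq, k):
    """Riemann-Roch: chi(D) from D^2 = sq and (K_S.D) = k, with chi(O_S) = 1."""
    return Fraction(sq - k, 2) + 1

for _ in range(200):
    a2, ka = randint(-9, 9), randint(-9, 9)  # D_1^2, (K_S.D_1)
    b2, kb = randint(-9, 9), randint(-9, 9)  # D_2^2, (K_S.D_2)
    ab = randint(-9, 9)                      # (D_1.D_2)
    n = randint(-5, 5)
    # (1) chi(D_1 + D_2) = chi(D_1) + chi(D_2) + (D_1.D_2) - 1
    assert chi(a2 + 2 * ab + b2, ka + kb) == chi(a2, ka) + chi(b2, kb) + ab - 1
    # (2) chi(-D) = -chi(D) + D^2 + 2
    assert chi(a2, -ka) == -chi(a2, ka) + a2 + 2
    # (3) chi(-D) = p_a(D) = (D^2 + K_S.D)/2 + 1
    assert chi(a2, -ka) == Fraction(a2 + ka, 2) + 1
    # (4), (5): the two expressions for chi(nD)
    lhs = chi(n * n * a2, n * ka)
    assert lhs == n * chi(a2, ka) + Fraction(n * (n - 1), 2) * a2 - n + 1
    assert lhs == n * n * chi(a2, ka) + Fraction(n * (n - 1), 2) * ka - n * n + 1
print("all identities check out")
```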
Let $L_0 = p^*(2H)$ be the proper transform of a general plane conic. Then, $( L_0 . C_1) = 6$, $(L_0 . C_2) = 6$ and $(L_0. E_2) = 0$. Let $\tilde{\mathcal L}_0$ be the line bundle on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ such that $$\tilde{\mathcal L}_0\big\vert_{\tilde X_0} = \mathcal O_{\tilde X_0}(\iota_* L_0),\quad \tilde{\mathcal L}_0\big\vert_{W_1} = \mathcal O_{W_1}(3),\ \text{and}\quad \tilde{\mathcal L}_0\big\vert_{W_2} = \mathcal O_{W_2}(4).$$ This bundle deforms to a line bundle on $X^{{\mathrm{g}}}$. We denote by $L_0^{{\mathrm{g}}}$ its associated Cartier divisor. Let $F_{ij}^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ be the divisor associated with $\tilde{\mathcal F}_{ij}$ (Table \[table:ExcBundles\_OnSingular\]). We define $$\begin{aligned} G_i^{{\mathrm{g}}}&:= - L_0^{{\mathrm{g}}}+ 10K_{X^{{\mathrm{g}}}} + F_{i9}^{{\mathrm{g}}}\quad \text{for }i=1,\ldots,8; \\ G_9^{{\mathrm{g}}}&:= -L_0^{{\mathrm{g}}}+ 11K_{X^{{\mathrm{g}}}}. \end{aligned}$$ \[prop:G\_1to9\] The following are numerical invariants related to the divisors $\{G_i^{{\mathrm{g}}}\}_{1 \leq i \leq 9}$: 1. $\chi(G_i^{{\mathrm{g}}}) = 1$ and $(G_i^{{\mathrm{g}}}. K_{X^{{\mathrm{g}}}}) = -1$; 2. for $i < j$, $\chi(G_i^{{\mathrm{g}}}- G_j^{{\mathrm{g}}})=0$. Furthermore, $(G_i^{{\mathrm{g}}})^2 = -1$ and $(G_i^{{\mathrm{g}}}. G_j^{{\mathrm{g}}}) = 0$ for $1 \leq i < j \leq 9$. First, consider the case $i \leq 8$. By Recipe \[recipe:PicardLatticeOfDolgachev\]\[item:recipe:CanonicalIntersection\] and $K_{X^{{\mathrm{g}}}}^2 = 0$, $(K_{X^{{\mathrm{g}}}} . G_i^{{\mathrm{g}}}) = \frac{1}{6}(C_0 \mathbin. -L_0 + F_i - F_9) = -1$.
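For later use, the intersection numbers entering this computation can be made explicit. Granting the identity $F_1 + \ldots + F_9 = 3p^*H - C_0$ (established in the proof of Proposition \[prop:G\_10\]) and the standard facts that the $F_i$ are pairwise disjoint $(-1)$-curves with $(p^*H . F_i) = 0$, one finds $$(C_0 . L_0) = (p^*(3H) . p^*(2H)) = 6 \quad\text{and}\quad (C_0 . F_i) = -F_i^2 = 1,$$ whence $$(K_{X^{{\mathrm{g}}}} . G_i^{{\mathrm{g}}}) = \frac{1}{6}\bigl( -(C_0 . L_0) + (C_0 . F_i) - (C_0 . F_9) \bigr) = \frac{1}{6}(-6+1-1) = -1.$$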
Since the alternating sum of Euler characteristics in the sequence (\[eq:ExactSeq\_onReducibleSurface\]) is zero, we get the formula $$\begin{aligned} \chi( \tilde{\mathcal L}_0^\vee \otimes \tilde{\mathcal F}_{i9}) ={}& \chi(-L_0 + F_i - F_9) + \chi(\mathcal O_{W_1}(-3)) + \chi(\mathcal O_{W_2}(-4)) \\ &{}- \chi(\mathcal O_{Z_1}(-6)) - \chi(\mathcal O_{Z_2}(-6)), \end{aligned}$$ which computes $\chi( \tilde{\mathcal L}_0^\vee \otimes \tilde{\mathcal F}_{i9}) = 11$. The Riemann-Roch formula for $-L_0^{{\mathrm{g}}}+ F_{i9}^{{\mathrm{g}}}= G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}}$ says $(G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}})^2 - (K_{X^{{\mathrm{g}}}} \mathbin. G_i^{{\mathrm{g}}}- K_{X^{{\mathrm{g}}}} ) = 20$, hence $(G_i^{{\mathrm{g}}})^2 = -1$. Using Riemann-Roch again, we derive $\chi(G_i^{{\mathrm{g}}}) = 1$. For $1 \leq i < j \leq 8$, $G_i - G_j = F_i - F_j$. Since $(F_i - F_j \mathbin. C_1 ) = (F_i - F_j \mathbin. C_2 ) = (F_i - F_j \mathbin. E_2 ) = 0$, the line bundle $\mathcal O_X(\pi_* F_i - \pi_* F_j)$ deforms to the Cartier divisor $F_{ij}^{{\mathrm{g}}}$. Hence, $\chi(G_i^{{\mathrm{g}}}- G_j^{{\mathrm{g}}}) = \chi(F_i - F_j) = 0$. This proves the statement for $i,j \leq 8$. The proof of the statement involving $G_9^{{\mathrm{g}}}$ follows the same lines. Since $\chi(\tilde{\mathcal L}_0^\vee ) = 12$, $( G_9^{{\mathrm{g}}}- 11K_{X^{{\mathrm{g}}}} ) ^2 - (K_{X^{{\mathrm{g}}}}\mathbin . G_9^{{\mathrm{g}}}- 11K_{X^{{\mathrm{g}}}}) = 22$. This leads to $(G_9^{{\mathrm{g}}})^2 = -1$. For $i \leq 8$, $$\begin{aligned} \chi(G_i^{{\mathrm{g}}}- G_9^{{\mathrm{g}}}) ={}& \chi(F_i - F_9 - K_Y) + \chi(\mathcal O_{W_1}(-1)) + \chi(\mathcal O_{W_2}(-2)) \\ &{} - \chi(\mathcal O_{Z_1}(-2)) - \chi(\mathcal O_{Z_2}(-3)), \end{aligned}$$ and the right hand side is zero. We complete our list of divisors in ${\operatorname{Pic}}X^{{\mathrm{g}}}$ by introducing $G_{10}^{{\mathrm{g}}}$. 
The choice of $G_{10}^{{\mathrm{g}}}$ is motivated by the proof of the step (iii)${}\Rightarrow{}$(i) in [@Vial:Exceptional_NeronSeveriLattice Theorem 2.1]. \[prop:G\_10\] Let $G_{10}^{{\mathrm{g}}}$ be the ${\mathbb{Q}}$-divisor $\frac{1}{3}( G_1^{{\mathrm{g}}}+ G_2^{{\mathrm{g}}}+ \ldots + G_9^{{\mathrm{g}}}- K_{X^{{\mathrm{g}}}})$. Then, $G_{10}^{{\mathrm{g}}}$ is a Cartier divisor. Since $$\sum_{i=1}^9 G_i^{{\mathrm{g}}}- K_{X^{{\mathrm{g}}}} = - 9L_0^{{\mathrm{g}}}+ 90 K_{X^{{\mathrm{g}}}} + \sum_{i=1}^8 F_{i9}^{{\mathrm{g}}},$$ it suffices to prove that $\sum\limits_{i=1}^8 F_{i9}^{{\mathrm{g}}}= 3D^{{\mathrm{g}}}$ for some $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$. Let $p \colon Y \to {\mathbb P}^2$ be the blowing up morphism and let $H$ be a line in ${\mathbb P}^2$. Since $K_Y = p^* (-3H) + F_1 + F_2 + \ldots + F_9 + E_1 + E_2 + 2E_3$, $K_Y - E_1 - E_2 - 2E_3 = p^*(-3H) + F_1 + \ldots + F_9 = -C_0$, so $F_1 + \ldots + F_9 = 3p^*H - C_0$. Consider the divisor $p^*H - 3F_9$ in $Y$. Clearly, the intersections of $(p^*H - 3F_9)$ with $C_1$, $C_2$, $E_2$ are all zero, hence $\pi_*(p^*H - 3F_9)$ deforms to a Cartier divisor $(p^*H-3F_9)^{{\mathrm{g}}}$ in $X^{{\mathrm{g}}}$. Since $$\begin{aligned} \sum_{i=1}^8 (F_i - F_9) &= \sum_{i=1}^9 F_i - 9F_9 \\ &= 3(p^*H - 3F_9) - C_0 \end{aligned}$$ and $C_0^{{\mathrm{g}}}= 6K_{X^{{\mathrm{g}}}}$, $D^{{\mathrm{g}}}:= (p^*H - 3F_9)^{{\mathrm{g}}}- 2K_{X^{{\mathrm{g}}}}$ satisfies $\sum_{i=1}^8F_{i9}^{{\mathrm{g}}}= 3D^{{\mathrm{g}}}$. Combining Propositions \[prop:G\_1to9\] and \[prop:G\_10\], we obtain: \[thm:Picard\_ofGeneralFiber\] The intersection matrix of divisors $\{G_i^{{\mathrm{g}}}\}_{i=1}^{10}$ is $$\Bigl( (G_i^{{\mathrm{g}}}.
G_j^{{\mathrm{g}}}) \Bigr)_{1 \leq i,j \leq 10} = \left[ \begin{array}{cccc} -1 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & -1 & 0 \\ 0 & \cdots & 0 & 1 \end{array} \right]\raisebox{-2\baselineskip}[0pt][0pt]{.} \label{eq:IntersectionMatrix}$$ In particular, the set $G:=\{G_i^{{\mathrm{g}}}\}_{i=1}^{10}$ forms a ${\mathbb{Z}}$-basis of the Néron-Severi lattice $\op{NS}(X^{{\mathrm{g}}})$. By [@Dolgachev:AlgebraicSurfaces p. 137], ${\operatorname{Pic}}X^{{\mathrm{g}}}$ is torsion-free, thus the basis $G$ describes ${\operatorname{Pic}}X^{{\mathrm{g}}}$ completely. We claim that the divisors $\{G_i^{{\mathrm{g}}}\}_{i=1}^{10}$ generate the Néron-Severi lattice. By Hodge index theorem, there is a ${\mathbb{Z}}$-basis for $\op{NS}(X^{{\mathrm{g}}})$, say $\alpha = \{\alpha_i\}_{i=1}^{10}$, such that the intersection matrix with respect to $\{\alpha_i\}_{i=1}^{10}$ is the same as (\[eq:IntersectionMatrix\]). Let $A = ( a_{ij} )_{1 \leq i,j \leq 10}$ be the integral matrix defined by $$G_i^{{\mathrm{g}}}= \sum_{j=1}^{10} a_{ij} \alpha_j.$$ Given $v \in \op{NS}(X^{{\mathrm{g}}})$, let $[v]_G$ (resp. $[v]_\alpha$) be the column matrix of coordinates with respect to the basis $G$ (resp. $\alpha$). Then, $[v]_\alpha = A^{\rm t}[v]_G$. For $v_1, v_2 \in \op{NS}(X^{{\mathrm{g}}})$, $$\begin{aligned} (v_1.v_2) &= [v_1]_\alpha^{\rm t} E [v_2]_\alpha \\ &= [v_1]_G^{\rm t} A E A^{\rm t} [v_2]_G, \end{aligned}$$ where $E$ is the intersection matrix with respect to the basis $\alpha$. The above equation implies that the intersection matrix with respect to $G$ is $A E A^{\rm t}$. Since the intersection matrices with respect to both bases are the same, $E = A E A^{\rm t}$. Taking determinants and using $\det E = -1 \neq 0$, we get $1 = \det(A)\det(A^{\rm t}) = (\det A)^2$, hence $A$ is invertible over ${\mathbb{Z}}$. This proves that $G$ is a ${\mathbb{Z}}$-basis of $\op{NS}(X^{{\mathrm{g}}})$. The last statement on the Picard group follows immediately. We close this section with the summary of divisors on $X^{{\mathrm{g}}}$.
\[summary:Divisors\_onX\^g\] Recall that $Y$ is the rational elliptic surface in Section \[sec:Construction\], $p \colon Y \to {\mathbb P}^2$ is the morphism of blowing up, $H \in {\operatorname{Pic}}{\mathbb P}^2$ is a hyperplane divisor, and $\pi \colon Y \to X$ is the contraction of $C_1,\,C_2,\,E_2$. Then, 1. $F_{ij}^{{\mathrm{g}}}$ ($1\leq i,j\leq 9$) is the divisor associated with $F_i - F_j$; 2. $(p^*H-3F_9)^{{\mathrm{g}}}$ is the divisor obtained from $p^*H - 3F_9$; 3. $L_0^{{\mathrm{g}}}$ is the divisor induced by the proper transform of a general conic $p^*(2H)$; 4. $G_i^{{\mathrm{g}}}= - L_0^{{\mathrm{g}}}+ 10 K_{X^{{\mathrm{g}}}} + F_{i9}^{{\mathrm{g}}}$ for $i=1,\ldots,8$; 5. $G_9^{{\mathrm{g}}}= -L_0^{{\mathrm{g}}}+ 11K_{X^{{\mathrm{g}}}}$; 6. $G_{10}^{{\mathrm{g}}}= -3L_0^{{\mathrm{g}}}+ (p^*H - 3F_9)^{{\mathrm{g}}}+ 28K_{X^{{\mathrm{g}}}}$. Exceptional collections of maximal length on Dolgachev surfaces of type $(2,3)$ {#sec:ExcepCollectMaxLength} =============================================================================== Exceptional collection of maximal length ---------------------------------------- We continue to study the case ${n=3}$ and $a=1$. Throughout this section, we will prove that there exists an exceptional collection of maximal length in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$ for a cubic pencil $\lvert \lambda p_*C_1 + \mu p_*C_2 \rvert$ generated by two general plane nodal cubics. Proving exceptionality of a given collection usually consists of numerous cohomology computations, so we begin with some computational machinery. \[lem:DummyBundle\] The line bundle which is the glueing of $\mathcal O_{\tilde X_0}(\iota_*C_1)$, $\mathcal O_{W_1}(-2)$ and $\mathcal O_{W_2}$ deforms to the trivial line bundle on $X^{{\mathrm{g}}}$. Similarly, the glueing of $\mathcal O_{\tilde X_0}(\iota_*( 2C_2 + E_2) )$, $\mathcal O_{W_1}$, and $\mathcal O_{W_2}(-6)$ deforms to the trivial line bundle on $X^{{\mathrm{g}}}$.
Let $\tilde{\mathcal C}_1$ be the glueing of line bundles $\mathcal O_{\tilde X_0}(\iota_* C_1)$, $\mathcal O_{W_1}(-2)$, and $\mathcal O_{W_2}$, and let $\mathcal O_{X^{{\mathrm{g}}}}(C_1^{{\mathrm{g}}})$ be the deformed line bundle. It is immediate to see that $\chi(C_1^{{\mathrm{g}}}) = 1$ and $\chi(-C_1^{{\mathrm{g}}})=1$. By Riemann-Roch formula, $(C_1^{{\mathrm{g}}})^2 = (C_1^{{\mathrm{g}}}. K_{X^{{\mathrm{g}}}}) = 0$. For $i \leq 8$, $$\begin{aligned} \chi(G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}} - C_1^{{\mathrm{g}}}) ={}& \chi( \tilde{\mathcal L}_0^\vee \otimes \tilde{\mathcal F}_{i9} \otimes \tilde{\mathcal C}_1^\vee ) \\ ={}& \chi(-L_0 + F_i - F_9 - C_1) + \chi(\mathcal O_{W_1}(-1)) + \chi(\mathcal O_{W_2}(-4)) \\ &{} - \chi(\mathcal O_{Z_1}(-2)) - \chi(\mathcal O_{Z_2}(-6)). \end{aligned}$$ This computes $\chi(G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}} - C_1^{{\mathrm{g}}})=11$. By Riemann-Roch, $(G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}}- C_1^{{\mathrm{g}}})^2 - (K_{X^{{\mathrm{g}}}} \mathbin . G_i^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}} - C_1^{{\mathrm{g}}}) = 2\chi(G_i^{{\mathrm{g}}}-10K_{X^{{\mathrm{g}}}} - C_1^{{\mathrm{g}}})-2 = 20$. The left hand side is $-2(G_i^{{\mathrm{g}}}. C_1^{{\mathrm{g}}}) + 20$, thus $(G_i^{{\mathrm{g}}}. C_1^{{\mathrm{g}}}) =0$. Since $(C_1^{{\mathrm{g}}}. K_{X^{{\mathrm{g}}}}) = 0$ and $3G_{10}^{{\mathrm{g}}}= G_1^{{\mathrm{g}}}+ \ldots + G_9^{{\mathrm{g}}}- K_{X^{{\mathrm{g}}}}$, $(G_{10}^{{\mathrm{g}}}. C_1^{{\mathrm{g}}}) = 0$. Hence, $C_1^{{\mathrm{g}}}$ is numerically trivial by Theorem \[thm:Picard\_ofGeneralFiber\]. This shows that $C_1^{{\mathrm{g}}}$ is trivial since there is no torsion in ${\operatorname{Pic}}X^{{\mathrm{g}}}$. Exactly the same proof is valid for the line bundle coming from $2C_2 + E_2$. Let $D \in {\operatorname{Pic}}Y$ be a divisor such that $(D.C_1) \in 2{\mathbb{Z}}$, $(D.C_2) \in 3{\mathbb{Z}}$, and $(D.E_2) = 0$. 
Then, Recipe \[recipe:PicardLatticeOfDolgachev\] produces a Cartier divisor $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ from $D$. In this case, we say $D$ *deforms to* $D^{{\mathrm{g}}}$. This is a slight abuse of terminology; it is not $D$, but $\iota_*D$ that deforms to $D^{{\mathrm{g}}}$. \[eg:DivisorVaries\_onSingular\] Since $C_0$ deforms to $6K_{X^{{\mathrm{g}}}}$, $2E_1 = C_0 - C_1$ deforms to $6K_{X^{{\mathrm{g}}}}$. Thus $E_1$ deforms to $3K_{X^{{\mathrm{g}}}}$. Similarly, $C_2 + E_2 + E_3$ deforms to $2K_{X^{{\mathrm{g}}}}$. Hence, $K_Y = E_1 - C_2 - E_2 - E_3$ deforms to $3K_{X^{{\mathrm{g}}}} - 2K_{X^{{\mathrm{g}}}} = K_{X^{{\mathrm{g}}}}$. Also, $(E_2 + 2E_3) - E_1$ deforms to $K_{X^{{\mathrm{g}}}}$, whereas $K_Y$ and $(E_2+2E_3)-E_1$ are different in ${\operatorname{Pic}}Y$. These coincidences are essentially consequences of Lemma \[lem:DummyBundle\]. For instance, we have $$\begin{aligned} (E_2 + 2E_3) - E_1 - K_Y &= - 2 E_1 + C_2 + 2E_2 + 3E_3 \\ &= -C_1, \end{aligned}$$ thus $(E_2 + 2E_3)^{{\mathrm{g}}}- E_1^{{\mathrm{g}}}- K_{X^{{\mathrm{g}}}} = -C_1^{{\mathrm{g}}}= 0$. As Example \[eg:DivisorVaries\_onSingular\] shows, various divisors $D \in {\operatorname{Pic}}Y$ may deform to the same fixed divisor $D^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$. The following lemma gives guidance on the choice of $D$. Note that the lemma requires some conditions on $(D.C_1)$ and $(D.C_2)$, but Lemma \[lem:DummyBundle\] provides a way to adjust them. \[lem:H0Computation\] Let $D$ be a divisor in $Y$ such that $(D.C_1) = 2d_1 \in 2{\mathbb{Z}}$, $(D.C_2) = 3d_2 \in 3{\mathbb{Z}}$, and $(D . E_2) = 0$. Let $D^{{\mathrm{g}}}$ be the deformation of $D$. Then, $$h^0(X^{{\mathrm{g}}}, D^{{\mathrm{g}}}) \leq h^0(Y,D) + h^0(\mathcal O_{W_1}(d_1)) + h^0(\mathcal O_{W_2}(2d_2)) - h^0(\mathcal O_{Z_1}(2d_1)) - h^0(\mathcal O_{Z_2}(3d_2)).$$ In particular, if $d_1,d_2 \leq 1$, then $h^0(X^{{\mathrm{g}}}, D^{{\mathrm{g}}}) \leq h^0(Y,D)$. Since $(D.
E_2) = 0$, we have $H^p(\tilde X_0,\iota_*D) \simeq H^p(Y,D)$ for all $p \geq 0$ (Proposition \[prop:CohomologyComparison\_YtoX\]). Recall that there exists a short exact sequence (introduced in (\[eq:ExactSeq\_onReducibleSurface\])) $$0 \to \tilde{\mathcal D} \to \mathcal O_{\tilde X_0}(\iota_*D) \oplus \mathcal O_{W_1}(d_1) \oplus \mathcal O_{W_2}(2d_2) \to \mathcal O_{Z_1}(2d_1) \oplus \mathcal O_{Z_2}(3d_2) \to 0, \label{eq:ExactSeq_onReducibleSurface_SimplerVer}$$ where $\tilde{\mathcal D}$ is the line bundle constructed as in Proposition \[prop:LineBundleOnReducibleSurface\], and the notations $W_i$, $Z_i$ are explained in (\[eq:SecondWtdBlowupExceptional\]). We first claim the following: if $d_1,d_2 \leq 1$, then the maps $H^0(\mathcal O_{W_1}(d_1)) \to H^0(\mathcal O_{Z_1}(2d_1))$ and $H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_2}(3d_2))$ are isomorphisms. The only nontrivial cases are $d_1 = 1$ and $d_2 = 1$. Since $Z_1$ is a smooth conic in $W_1 = {\mathbb P}^2$, there is a short exact sequence $$0 \to \mathcal O_{W_1}(-1) \to \mathcal O_{W_1}(1) \to \mathcal O_{Z_1}(2) \to 0.$$ All the cohomology groups of $\mathcal O_{W_1}(-1)$ vanish, so $H^p( \mathcal O_{W_1}(1)) \simeq H^p(\mathcal O_{Z_1}(2))$ for all $p \geq 0$. In the case $d_2 = 1$, we consider $$0 \to \mathcal I_{Z_2}(2) \to \mathcal O_{W_2}(2) \to \mathcal O_{Z_2}(3) \to 0,$$ where $\mathcal I_{Z_2} \subset \mathcal O_{W_2}$ is the ideal sheaf of the closed subscheme $Z_2 = (xy = z ^3 ) \subset {\mathbb P}_{x,y,z}(1,2,1)$. The ideal $(xy - z^3)$ does not contain any nonzero homogeneous element of degree $2$, so $H^0(\mathcal I_{Z_2}(2)) = 0$. This shows that $H^0( \mathcal O_{W_2}(2)) \to H^0(\mathcal O_{Z_2}(3))$ is injective. Furthermore, $H^0(\mathcal O_{W_2}(2))$ is generated by $x^2, xz, z^2, y$, hence $h^0(\mathcal O_{W_2}(2)) = h^0(\mathcal O_{Z_2}(3)) = 4$. This proves that $H^0(\mathcal O_{W_2}(2)) \simeq H^0(\mathcal O_{Z_2}(3))$, as desired.
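The two counting claims used above can be checked mechanically (a toy verification of the monomial bookkeeping, independent of the proof):

```python
# In P_{x,y,z}(1,2,1), list the monomials x^a y^b z^c of weighted degree 2,
# and observe that the generator xy - z^3 is homogeneous of weighted degree 3,
# so the ideal (xy - z^3) has no nonzero element of degree 2.

WEIGHTS = (1, 2, 1)  # weights of x, y, z

def monomials_of_degree(n):
    """Exponent triples (a, b, c) with 1*a + 2*b + 1*c = n."""
    return [
        (a, b, n - a - 2 * b)
        for a in range(n + 1)
        for b in range((n - a) // 2 + 1)
    ]

deg2 = monomials_of_degree(2)
# Exactly x^2, xz, z^2, y -- so h^0(O_{W_2}(2)) = 4, as claimed.
assert sorted(deg2) == sorted([(2, 0, 0), (1, 0, 1), (0, 0, 2), (0, 1, 0)])

# xy and z^3 both have weighted degree 3, so every element of (xy - z^3)
# has degree >= 3; in particular none has degree 2.
deg = lambda m: sum(w * e for w, e in zip(WEIGHTS, m))
assert deg((1, 1, 0)) == deg((0, 0, 3)) == 3
print(len(deg2))  # 4
```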
If $d_1,d_2>1$, it is clear that $H^0(\mathcal O_{W_1}(d_1)) \to H^0(\mathcal O_{Z_1}(2d_1))$ and $H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_2}(3d_2))$ are surjective. The cohomology long exact sequence of (\[eq:ExactSeq\_onReducibleSurface\_SimplerVer\]) begins with $$\begin{aligned} 0 \to H^0(\tilde{\mathcal D}) \to H^0(\iota_*D) \oplus H^0( \mathcal O_{W_1}(d_1) ) \oplus H^0(\mathcal O_{W_2}(2d_2)) \\ \to H^0(\mathcal O_{Z_1}(2d_1)) \oplus H^0( \mathcal O_{Z_2}(3d_2)).\qquad\qquad \end{aligned}$$ By the previous arguments, the last map is surjective. Indeed, the image of $(0, s_1, s_2) \in H^0(\iota_*D) \oplus H^0( \mathcal O_{W_1}(d_1) ) \oplus H^0(\mathcal O_{W_2}(2d_2))$ is $(-s_1\big\vert_{Z_1}, -s_2\big\vert_{Z_2})$. The upper-semicontinuity of cohomologies establishes the inequality in the statement. The next lemma is useful to remove redundant parts of $D$ in $H^0$ computations. \[lem:RuleOut\] Let $S$ be a nonsingular projective surface, and let $D$ be a divisor on $S$. For a nonsingular projective curve $C$ in $S$, suppose $( D . C) < 0$. 1. If $C^2 \geq 0$, then $H^0(D) =0$. \[item:lem:Ruleout\_PositiveCurve\] 2. If $C^2 < 0$, then $H^0(D) \simeq H^0 (D - mC)$ for all $0 < m \leq \big\lceil \frac{(D\mathbin.C)}{(C\mathbin . C)} \big\rceil$.\[item:lem:Ruleout\_NegativeCurve\] In the short exact sequence $$0 \to \mathcal O_S(D-C) \to \mathcal O_S(D) \to \mathcal O_C(D) \to 0,$$ $H^0(\mathcal O_C(D)) = 0$, thus $H^0(D) \simeq H^0(D-C)$. If $C^2 \geq 0$ and $m > 0$, then $(D-mC \mathbin. C) = (D . C) - mC^2 < 0$, so $H^0(D- (m+1)C) \simeq H^0(D-mC)$. For an ample divisor $A$, $(D - mC \mathbin. A) < 0$ for $m \gg 0$, hence $D - mC$ cannot be effective. This proves (a). If $C^2 < 0$, let $m_0$ be the largest number satisfying $(D - (m_0-1)C \mathbin. C) < 0$. Then, $H^0(D - m_0C) \simeq H^0(D)$ by the previous argument. Since $$\begin{aligned} (D - mC \mathbin . C) \geq 0 &\Leftrightarrow m \geq \frac{(D. C)}{(C . 
C)}, \end{aligned}$$ $m_0$ is the smallest integer greater than or equal to $\frac{(D.C)}{(C. C)}$, thus $m_0 = \bigl\lceil \frac{(D.C)}{(C.C)}\bigr\rceil$. By [@Vial:Exceptional_NeronSeveriLattice Theorem 3.1], it can be shown that the collection (\[eq:ExcColl\_MaxLength\]) in the theorem below is a numerically exceptional collection. Our aim is to prove that (\[eq:ExcColl\_MaxLength\]) is indeed an exceptional collection in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. Before proceeding to the theorem, we introduce some terminology. During the construction of $Y$, the node of $p_*C_2$ is blown up twice. The second blow up at $p_*C_2$ corresponds to one of the two tangent directions at the node of $p_*C_2$. We refer to the tangent direction corresponding to the second blow up as the *distinguished tangent* direction at the node of $p_*C_2$. \[thm:ExceptCollection\_MaxLength\] Suppose $X^{{\mathrm{g}}}$ originates from a cubic pencil $\lvert \lambda p_* C_1 + \mu p_*C_2 \rvert$ which is generated by two general plane nodal cubics. Let $G_1^{{\mathrm{g}}},\ldots,G_{10}^{{\mathrm{g}}}$ be as in \[summary:Divisors\_onX\^g\], let $G_0^{{\mathrm{g}}}$ be the trivial divisor, and let $G_{11}^{{\mathrm{g}}}= 2G_{10}^{{\mathrm{g}}}$. For notational simplicity, we denote the rank of ${\operatorname{Ext}}^p(G_i^{{\mathrm{g}}}, G_j^{{\mathrm{g}}})(=H^p(-G_i^{{\mathrm{g}}}+G_j^{{\mathrm{g}}}))$ by $h^p_{ij}$. The following table describes $\mathbb{R}\!{\operatorname{Hom}}(G_i^{{\mathrm{g}}},G_j^{{\mathrm{g}}})$. For example, the triple in the ($G_9^{{\mathrm{g}}}$-row, $G_{10}^{{\mathrm{g}}}$-column), which is $(0\ 0\ 2)$, means that $(h^0_{9,10},\, h^1_{9,10},\, h^2_{9,10}) = (0,0,2)$.
$$\scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^{{\mathrm{g}}}& G_{1 \leq i \leq 8}^{{\mathrm{g}}}& G_9^{{\mathrm{g}}}& G_{10}^{{\mathrm{g}}}& G_{11}^{{\mathrm{g}}}\\[2pt] \hline G_0^{{\mathrm{g}}}& 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & 0\,0\,3 & 0\,0\,6 \\ G_{1 \leq i \leq 8}^{{\mathrm{g}}}& & 1\,0\,0 & & 0\,0\,2 & 0\,0\,5\\ G_9^{{\mathrm{g}}}& & & 1\,0\,0 & 0\,0\,2 & 0\,0\,5 \\ G_{10}^{{\mathrm{g}}}& & & & 1\,0\,0 & 0\,0\,3 \\ G_{11}^{{\mathrm{g}}}& & & & & 1\,0\,0 \end{array} $}$$ \[table:thm:ExceptCollection\_MaxLength\] The blanks stand for $0\,0\,0$, and $h^p_{ij} = 0$ for all $p$ and $1 \leq i\neq j \leq 8$. In particular, the collection $$\big\langle \mathcal O_{X^{{\mathrm{g}}}}(G_0^{{\mathrm{g}}}),\ \mathcal O_{X^{{\mathrm{g}}}}(G_1^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_{10}^{{\mathrm{g}}}),\ \mathcal O_{X^{{\mathrm{g}}}}(G_{11}^{{\mathrm{g}}}) \big\rangle \label{eq:ExcColl_MaxLength}$$ is an exceptional collection of length $12$ in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. Recall that (see Summary \[summary:Divisors\_onX\^g\]) $$\begin{aligned} G_i^{{\mathrm{g}}}&= -L_0^{{\mathrm{g}}}+ F_{i9}^{{\mathrm{g}}}+ 10K_{X^{{\mathrm{g}}}},\ i=1,\ldots,8; \\ G_9^{{\mathrm{g}}}&= -L_0^{{\mathrm{g}}}+ 11K_{X^{{\mathrm{g}}}}; \\ G_{10}^{{\mathrm{g}}}&= -3L_0^{{\mathrm{g}}}+ (p^*H - 3F_9)^{{\mathrm{g}}}+ 28K_{X^{{\mathrm{g}}}}; \\ G_{11}^{{\mathrm{g}}}&= -6L_0^{{\mathrm{g}}}+ 2(p^*H - 3F_9)^{{\mathrm{g}}}+ 56K_{X^{{\mathrm{g}}}}. \end{aligned}$$ The proof consists of numerous cohomology vanishings, which we divide into several steps. The numerical computations are collected in Dictionary \[dictionary:H0Computations\]. Note that we can always evaluate $\chi(-G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}}) = \sum_p (-1)^p h^p_{ij}$, thus it suffices to compute only two (mostly $h^0$ and $h^2$) of $\{h^p_{ij} : p=0,1,2\}$. In the first part of the proof, we deduce the following table using numerical methods.
$$\scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^{{\mathrm{g}}}& G_{1 \leq i \leq 8}^{{\mathrm{g}}}& G_9^{{\mathrm{g}}}& G_{10}^{{\mathrm{g}}}& G_{11}^{{\mathrm{g}}}\\[2pt] \hline G_0^{{\mathrm{g}}}& 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & \scriptstyle\chi=3 & \scriptstyle\chi=6 \\ G_{1 \leq i \leq 8}^{{\mathrm{g}}}& 0\,0\,0 & 1\,0\,0 & 0\,0\,0 & 0\,0\,2 & \scriptstyle\chi=5 \\ G_9^{{\mathrm{g}}}&\scriptstyle\chi=0 & 0\,0\,0 & 1\,0\,0 & 0\,0\,2 & \scriptstyle\chi=5 \\ G_{10}^{{\mathrm{g}}}&\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 & 1\,0\,0 & \scriptstyle\chi=3 \\ G_{11}^{{\mathrm{g}}}&\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 & 1\,0\,0 \end{array} $}$$ \[table:HumanPart\_thm:ExceptCollection\_MaxLength\] The slots marked $\chi=d$ mean $\chi(-G_i^{{\mathrm{g}}}+G_j^{{\mathrm{g}}})=\sum_{p}(-1)^ph^p_{ij} =d$. For those slots, we do not compute each $h^p_{ij}$ for the moment. In the end, they will be completed through a different approach. 1. \[item:NumericalStep\_thm:ExceptCollection\_MaxLength\] As explained above, the collection (\[eq:ExcColl\_MaxLength\]) is numerically exceptional, hence $\chi(-G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}}) = \sum_p (-1)^p h^p_{ij}=0$ for all $0 \leq j < i \leq 11$. Furthermore, the surface $X^{{\mathrm{g}}}$ is minimal, thus $K_{X^{{\mathrm{g}}}}$ is nef. It follows that $h^0(D^{{\mathrm{g}}}) = 0$ if $D^{{\mathrm{g}}}$ is $K_{X^{{\mathrm{g}}}}$-negative, and $h^2(D^{{\mathrm{g}}})=0$ if $D^{{\mathrm{g}}}$ is $K_{X^{{\mathrm{g}}}}$-positive. Since $$(K_{X^{{\mathrm{g}}}} . G_i^{{\mathrm{g}}}) = \left\{ \begin{array}{ll} -1 & i \leq 9 \\ -3 & i = 10 \\ -6 & i = 11, \end{array} \right.$$ this already forces a number of cohomology groups to vanish. Indeed, all the numbers in the following list are zero: $$\{ h^0_{0i} \}_{i \leq 11},\ \{ h^0_{i,10}, h^0_{i,11} \}_{i \leq 9},\ \{ h^2_{i0} \}_{i \leq 11},\ \{ h^2_{10,i}, h^2_{11,i} \}_{i \leq 9}.$$ 2.
\[item:ProofFreePart\_thm:ExceptCollection\_MaxLength\] If $1 \leq j \neq i \leq 8$, then $-G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}}$ can be realized as $-F_i + F_j$ in the rational elliptic surface $Y$. Hence, $$\bigl\langle \mathcal O_{X^{{\mathrm{g}}}}(G_1^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_8^{{\mathrm{g}}}) \bigr\rangle$$ is an exceptional collection by Proposition \[prop:ExceptCollection\_ofLengthNine\]. This proves that $h^p_{ij}=0$ for all $p \geq 0$ and $1 \leq i \neq j \leq 8$. Also, $-G_9^{{\mathrm{g}}}+ G_i^{{\mathrm{g}}}= -K_{X^{{\mathrm{g}}}} + F_{i9}^{{\mathrm{g}}}$ for $1 \leq i \leq 8$. Remark \[rmk:ExceptCollection\_SerreDuality\] shows that $h^p_{9i} = h^p(-K_{X^{{\mathrm{g}}}} + F_{i9}^{{\mathrm{g}}})=0$ for $p \geq 0$ and $1 \leq i \leq 8$. Furthermore, by Serre duality, $h^p_{i9} = h^{2-p}(F_{i9}^{{\mathrm{g}}})=0$ for all $p \geq 0$ and $1 \leq i \leq 8$. 3. \[item:Strategy\_HumanPart\_thm:ExceptCollection\_MaxLength\] We verify Table \[table:HumanPart\_thm:ExceptCollection\_MaxLength\] using the following strategy: 1. If we want to compute $h^0_{ij}$, then pick $D_{ij}^{{\mathrm{g}}}:= -G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}}$. If the aim is to evaluate $h^2_{ij}$, then take $D_{ij}^{{\mathrm{g}}}:= K_{X^{{\mathrm{g}}}} + G_i^{{\mathrm{g}}}- G_j^{{\mathrm{g}}}$, so that $h^2_{ij} = h^0(D_{ij}^{{\mathrm{g}}})$ by Serre duality. 2. Express $D^{{\mathrm{g}}}_{ij}$ in terms of $L_0^{{\mathrm{g}}}$, $(p^*H - 3F_9)^{{\mathrm{g}}}$, $F_{i9}^{{\mathrm{g}}}$, and $K_{X^{{\mathrm{g}}}}$. Via Summary \[summary:Divisors\_onX\^g\], we can translate $L_0^{{\mathrm{g}}}$, $(p^*H - 3F_9)^{{\mathrm{g}}}$, $F_{i9}^{{\mathrm{g}}}$ into the divisors on $Y$. Further, we have $6K_{X^{{\mathrm{g}}}} = C_0^{{\mathrm{g}}}$, $3K_{X^{{\mathrm{g}}}} = E_1^{{\mathrm{g}}}$, and $2K_{X^{{\mathrm{g}}}} = (C_2+E_2+E_3)^{{\mathrm{g}}}$, thus an arbitrary integer multiple of $K_{X^{{\mathrm{g}}}}$ also can be translated into divisors on $Y$. 
Together with these translations, use Lemma \[lem:DummyBundle\] to find a Cartier divisor $D_{ij}$ on $Y$, which deforms to $D_{ij}^{{\mathrm{g}}}$, and satisfies $(D_{ij}.C_1) \leq 2$, $(D_{ij}.C_2) \leq 3$, $(D_{ij}.E_2)=0$. 3. Compute an upper bound of $h^0(D_{ij})$. Then by Lemma \[lem:H0Computation\], $h^0(D_{ij}^{{\mathrm{g}}}) \leq h^0(D_{ij})$. 4. In all cases, we will find that the upper bound obtained in (3) coincides with $\chi(-G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}})$. Also, at least one of $\{ h^0_{ij}, h^2_{ij}\}$ is zero by Step \[item:NumericalStep\_thm:ExceptCollection\_MaxLength\]. From this we deduce $h^0(D_{ij}^{{\mathrm{g}}})\geq{}$(the upper bound obtained in (3)), hence the equality holds. Consequently, the numbers $\{h^p_{ij} : p=0,1,2\}$ are evaluated. 4. We follow the strategy in Step \[item:Strategy\_HumanPart\_thm:ExceptCollection\_MaxLength\] to complete Table \[table:HumanPart\_thm:ExceptCollection\_MaxLength\]. Let $i \in \{1,\ldots,8\}$. To verify $h^0_{i0}=0$, we take $D_{i0}^{{\mathrm{g}}}= -G_i^{{\mathrm{g}}}= L_0^{{\mathrm{g}}}- F_{i9}^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}}$. Translation into the divisors on $Y$ gives: $$D_{i0}' = p^*(2H) + F_9-F_i - 2C_0 + (C_2+E_2+E_3).$$ Since $(D_{i0}'.C_1) = 6$ and $(D_{i0}' . C_2) = 3$, we replace the divisor $D_{i0}'$ by $D_{i0} := D_{i0}' + C_1$ so that the condition $(D_{i0}.C_1) \leq 2$ is fulfilled. Now, $h^0(D_{i0})=0$ by Dictionary \[dictionary:H0Computations\]\[dictionary:-G\_i\], thus $h^0_{i0} \leq h^0(D_{i0}) = 0$ by Lemma \[lem:H0Computation\]. Finally, $\chi(-G_i^{{\mathrm{g}}})=0$ and $h^2_{i0}=0$ (Step \[item:NumericalStep\_thm:ExceptCollection\_MaxLength\]), hence $h^1_{i0}=0$.
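The translation of the $K_{X^{{\mathrm{g}}}}$-part used here deserves a remark. Writing $-10K_{X^{{\mathrm{g}}}} = -2\cdot 6K_{X^{{\mathrm{g}}}} + 2K_{X^{{\mathrm{g}}}}$ and using $C_0^{{\mathrm{g}}}= 6K_{X^{{\mathrm{g}}}}$ together with $(C_2+E_2+E_3)^{{\mathrm{g}}}= 2K_{X^{{\mathrm{g}}}}$ (Example \[eg:DivisorVaries\_onSingular\]), we have $$-10K_{X^{{\mathrm{g}}}} = -2C_0^{{\mathrm{g}}}+ (C_2+E_2+E_3)^{{\mathrm{g}}},$$ while $p^*(2H)$ and $F_9 - F_i$ translate to $L_0^{{\mathrm{g}}}$ and $-F_{i9}^{{\mathrm{g}}}$ by Summary \[summary:Divisors\_onX\^g\]. This is how the divisor $D_{i0}'$ above is produced from $D_{i0}^{{\mathrm{g}}}= L_0^{{\mathrm{g}}}- F_{i9}^{{\mathrm{g}}}- 10K_{X^{{\mathrm{g}}}}$; the same bookkeeping yields the divisors listed in the next step.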
We repeat this routine for the following divisors: $$\begin{aligned} D_{0i} &= p^*(2H) + F_9 - F_i - C_0 + C_1 - E_1 + (2C_2+E_2); \\ D_{09} &= p^*(2H) - 2C_0 + C_1 + (C_2 + E_2 + E_3); \\ D_{i,10} &= p^*(3H) + 2F_9 + F_i - 2C_0 + 2C_1 - E_1 - (C_2 + E_2 + E_3) + 2(2C_2+E_2); \\ D_{9,10} &= p^*(3H) + 3F_9 - 3C_0 + 3C_1 + (C_2+E_2+E_3) + (2C_2+E_2). \end{aligned}$$ Together with Dictionary \[dictionary:H0Computations\], all the slots of Table \[table:HumanPart\_thm:ExceptCollection\_MaxLength\] are verified. 5. \[item:M2Part\_thm:ExceptCollection\_MaxLength\] It is difficult to complete Table \[table:thm:ExceptCollection\_MaxLength\] using the numerical argument alone (see, for example, Remark \[rmk:Configuration\_andCohomology\]). We introduce another plan to overcome these difficulties. 1. Take $D_{ij}^{{\mathrm{g}}}\in {\operatorname{Pic}}X^{{\mathrm{g}}}$ and $D_{ij} \in {\operatorname{Pic}}Y$ as in Step . We may assume $(D_{ij}.C_1) \in \{0,2\}$ and $(D_{ij}.C_2) \in \{-3,0,3\}$. If $(D_{ij}.C_2) = -3$, $h^0(D_{ij}) = h^0(D_{ij} - C_2 - E_2)$, thus it suffices to prove that $H^0(D_{ij} - C_2 - E_2)=0$. Hence, we replace $D_{ij}$ by $D_{ij} - C_2 -E_2$ if $(D_{ij}.C_2) = -3$. On some occasions, we have $(D_{ij} . F_9) = -1$. We make the further replacement $D_{ij} \mapsto D_{ij}-F_9$ for those cases. 2. Rewrite $D_{ij}$ in terms of the ${\mathbb{Z}}$-basis $\{p^*H, F_1,\ldots, F_9, E_1,E_2,E_3\}$ so that $D_{ij}$ is expressed in the following form: $$D_{ij} = p^*(dH) - \bigl( \text{sum of exceptional curves}\bigr).$$ 3. \[item:PlaneCurveExistence\_thm:ExceptCollection\_MaxLength\] Assume $h^0(D_{ij}) > 0$; then there exists an effective divisor $D$ which is linearly equivalent to $D_{ij}$. Consider the plane curve $p_*D$. It is a plane curve of degree $d$ subject to several conditions corresponding to the negative part. Let $\mathcal I_{\rm C} \subset \mathcal O_{{\mathbb P}^2}$ be the ideal sheaf associated with the imposed conditions on $p_*D$.
Compute $h^0(\mathcal O_{{\mathbb P}^2}(d) \otimes \mathcal I_{\rm C})$. This number gives an upper bound for $h^0(D_{ij})$ (it is clear that if $D'$ is an effective divisor linearly equivalent to $D$, such that $p_*D$ and $p_*D'$ coincide as plane curves, then $D$ and $D'$ must be the same curve in $Y$). 4. As in Step , we will see that all the upper bounds on $h^0(D_{ij})$ agree with the numerical invariant $\chi(-G_i^{{\mathrm{g}}}+ G_j^{{\mathrm{g}}})$. This shows that the upper bound on $h^0(D_{ij})$ obtained in (3) exactly determines the three numbers $\{h^p_{ij} : p=0,1,2\}$. 6. As explained in Remark \[rmk:Configuration\_andCohomology\], the value $h^0(D_{ij})$ might depend on the configuration of $p_*C_1$ and $p_*C_2$. However, for general nodal cubics $p_*C_1 = (h_1=0)$, $p_*C_2 = (h_2=0)$, the minimum value of $h^0(D_{ij})$ is attained. This can be observed in the following way. Let $h = \sum_{\alpha} a_\alpha \mathbf{x}^\alpha$ be a homogeneous equation of degree $d$, where $\alpha = (\alpha_x, \alpha_y, \alpha_z)$ is the $3$-tuple with $\alpha_x + \alpha_y + \alpha_z = d$ and $\mathbf{x}^\alpha = x^{\alpha_x} y^{\alpha_y} z^{\alpha_z}$. Then the ideal $\mathcal I_C$ imposes linear relations on $\{a_\alpha\}_\alpha$, thus we get a linear system, namely a matrix $M$, in the variables $\{a_\alpha\}_\alpha$. After perturbing $h_1$ and $h_2$ slightly, the rank of $M$ does not decrease, since $\{\op{rank} M \geq r_0\}$ is an open condition for any fixed $r_0$. From this we get: if $h^0(D_{ij}) \leq r$ for at least one pair of $p_*C_1$ and $p_*C_2$, then $h^0(D_{ij}) \leq r$ for general $p_*C_1$ and $p_*C_2$. 7. \[item:M2Configuration\_thm:ExceptCollection\_MaxLength\] Let $h_1 = (y-z)^2z - x^3 - x^2z$, and $h_2 = x^3 - 2xy^2 + 2xyz + y^2z$. These equations define plane nodal cubics such that 1. $p_*C_1$ has the node at $[0,1,1]$, and $p_*C_2$ has the node at $[0,0,1]$; 2. $p_*C_2$ has two tangent directions ($y=0$ and $y=-2x$) at its node; 3.
$p_*C_1 \cap p_*C_2$ contains two ${\mathbb{Q}}$-rational points, namely $[0,1,0]$ and $[-1,1,1]$. We take $y=0$ as the distinguished tangent direction at the node of $p_*C_2$, and take $p_*F_9 = [0,1,0]$, $p_*F_8 = [-1,1,1]$. The following ideals are the building blocks of the ideal $\mathcal I_C$ introduced in Step . $$\begin{array}{c|c|c|c} \text{symbol} & \text{ideal form} & \text{ideal sheaf at the\,...} & \text{divisor on }Y \\ \hline \mathcal I_{E_1} & (x,y-z) & \text{node of }p_*C_1 & -E_1 \\ \mathcal I_{E_2+E_3} & (x,y) & \text{node of }p_*C_2 & -(E_2+E_3) \\ \mathcal I_{E_2+2E_3} & (x^2,y) & \begin{tabular}{c} \footnotesize distinguished tangent \\[-5pt] \footnotesize at the node of $p_*C_2$ \end{tabular} & -(E_2+2E_3) \\ \mathcal J_9 & (h_1,h_2) & \text{nine base points} & - \sum_{i\leq9} F_i \\ \mathcal J_7 & \scriptstyle \mathcal J_9 {\textstyle/} (x+z,y-z)(x,z) & \text{seven base points} & -\sum_{i\leq7} F_i \\ \mathcal J_8 & (x+z,y-z)\mathcal J_7 & \text{eight base points} & -\sum_{i\leq8} F_i \end{array}$$-7pt\[table:Ideal\_ofConditions\_thm:ExceptCollection\_MaxLength\] Note that the nine base points contain $[0,1,0]$ and $[-1,1,1]$, thus there exists an ideal $\mathcal J_7$ such that $\mathcal J_9 = (x+z,y-z)(x,z) \mathcal J_7$. 8. We sketch the proof of $h^p_{10,9}=h^p(-G_{10}^{{\mathrm{g}}}+ G_9^{{\mathrm{g}}})=0$, which illustrates several subtleties. Since $h^2_{10,9}=0$ by Step \[item:NumericalStep\_thm:ExceptCollection\_MaxLength\], we only have to prove $h^0_{10,9}=0$. Thus, we take $D_{10,9}^{{\mathrm{g}}}:= -G_{10}^{{\mathrm{g}}}+ G_9^{{\mathrm{g}}}$. As in Step , take $D_{10,9}' = p^*(3H) + 3F_9 - 2C_0 + 2C_1 - E_1 - (C_2+E_2+E_3) + 2(2C_2+E_2)$. We have $(D_{10,9}'.C_2)=0$, and $(D_{10,9}'-C_2-E_2 \mathbin. F_9) = -1$. Let $D_{10,9}:= D_{10,9}' - C_2-E_2-F_9$. Then, $h^0(D_{10,9}) = h^0(D_{10,9}') \geq h^0(D_{10,9}^{{\mathrm{g}}})$. 
As in Step , the divisor $D_{10,9}$ can be rewritten as $$D_{10,9} = p^*(9H) - 2 \sum_{i=1}^8 F_i - 5E_1 - 4E_2 - 7E_3.$$ Since $\mathcal I_{E_2+E_3}^2$ imposes more conditions than $\mathcal I_{E_2+2E_3}$, the ideal of (minimal) conditions corresponding to $-4E_2 - 7E_3$ is $\mathcal I_{E_2+E_3} \cdot \mathcal I_{E_2+2E_3}^3$. Thus, the plane curve $p_*D_{10,9}$ corresponds to a nonzero section of $$H^0(\mathcal O_{{\mathbb P}^2}(9) \otimes \mathcal J_8^2 \cdot \mathcal I_{E_1}^5 \cdot \mathcal I_{E_2+E_3} \cdot \mathcal I_{E_2+2E_3}^3 ).$$ We ask Macaulay 2 to compute the dimension of this group, and the result is zero. This can be found in `ExcColl_Dolgachev.m2`[@ChoLee:Macaulay2]. In similar ways, we obtain the following table (be aware of the difference with Table \[table:thm:ExceptCollection\_MaxLength\]). $$\scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^{{\mathrm{g}}}& G_8^{{\mathrm{g}}}& G_9^{{\mathrm{g}}}& G_{10}^{{\mathrm{g}}}& G_{11}^{{\mathrm{g}}}\\[2pt] \hline G_0^{{\mathrm{g}}}& 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & 0\,0\,3 & 0\,0\,6 \\ G_8^{{\mathrm{g}}}& & 1\,0\,0 & & 0\,0\,2 & 0\,0\,5\\ G_9^{{\mathrm{g}}}& & & 1\,0\,0 & 0\,0\,2 & 0\,0\,5 \\ G_{10}^{{\mathrm{g}}}& & & & 1\,0\,0 & 0\,0\,3 \\ G_{11}^{{\mathrm{g}}}& & & & & 1\,0\,0 \end{array} $}$$\[table:M2Computation\_thm:ExceptCollection\_MaxLength\] The table below gives a short summary of the computations done in `ExcColl_Dolgachev.m2`[@ChoLee:Macaulay2].
$$\scalebox{0.9}{$ \begin{array}{c|c|l} (i,j) & \text{result} & \multicolumn{1}{c}{\text{choice of }D_{ij}} \\ \hline (9,0) & h^0_{9,0}=0 & p^*(5H) - \sum_{i\leq9} F_i - 3E_1 - 2E_2 - 4E_3 \\ (10,0) & h^0_{10,0}=0 & p^*(14H) - 3\sum_{i\leq8} F_i - 8E_1 - 6E_2-11E_3 \\ (10,8) & h^0_{10,8}=0 & p^*(9H) - 2\sum_{i\leq7} F_i - F_8 - 6E_1 - 3E_2 - 6E_3 \\ (10,9) & h^0_{10,9}=0 & p^*(9H) - 2\sum_{i\leq8} F_i - 5E_1 - 4E_2 - 7E_3 \\ (11,0) & h^0_{11,0}=0 & p^*(31H) - 7\sum_{i\leq8} F_i - F_9 - 18E_1 - 11E_2 - 22E_3 \\ (11,8) & h^0_{11,8}=0 & p^*(26H) - 6\sum_{i\leq7} F_i - 5F_8 - F_9 - 14E_1 - 10E_2 - 20E_3 \\ (11,9) & h^0_{11,9}=0 & p^*(26H) - 6\sum_{i\leq8} F_i - 15E_1 - 9E_2 - 18E_3 \\[3pt] \hline (0,10) & h^2_{0,10}=3 & p^*(17H) - 4\sum_{i\leq8} F_i - F_9 - 9E_1 - 6E_2 - 12E_3 \\ (0,11) & h^2_{0,11}=6 & p^*(31H) - 7\sum_{i\leq8} F_i - F_9 - 17E_1 - 13E_2 - 23E_3 \\ (8,11) & h^2_{8,11}=5 & p^*(26H) - 6\sum_{i\leq7} F_i - 5F_8 - F_9 - 15E_1 - 9E_2 - 18E_3 \\ (9,11) & h^2_{9,11}=5 & p^*(26H) - 6\sum_{i\leq8} F_i - 14E_1 - 10E_2 - 19E_3 \end{array} $}$$ Note that the numbers $h_{11,10}^p$ and $h_{10,11}^p$ come for free; indeed, $-G_{11}^{{\mathrm{g}}}+ G_{10}^{{\mathrm{g}}}= -G_{10}^{{\mathrm{g}}}$, thus $h^p_{11,10} = h^p_{10,0}$ and $h^p_{10,11}=h^p_{0,10}$. Finally, perturb the cubics $p_*C_1$ and $p_*C_2$ so that Table \[table:M2Computation\_thm:ExceptCollection\_MaxLength\] remains valid and Lemma \[lem:BasePtPermutation\] is applicable. Then, Table \[table:thm:ExceptCollection\_MaxLength\] is verified immediately. \[rmk:Configuration\_andCohomology\] Assume that the nodal curves $p_*C_1$, $p_*C_2$ are in a special position, so that the node of $p_*C_1$ is located on the distinguished tangent line at the node of $p_*C_2$.
Then, the proper transform $\ell$ of the unique line through the nodes of $p_*C_1$ and $p_*C_2$ has the following divisor expression: $$\ell = p^*H - E_1 - (E_2 + 2E_3).$$ In particular, the divisor $D_{90} = p^*(5H) - \sum_{i\leq9} F_i - 3E_1 - 2(E_2+2E_3)$ is linearly equivalent to $2 \ell + C_1 + E_1$, thus $h^0(D_{90}) > 0$. Consequently, for this particular configuration of $p_*C_1$ and $p_*C_2$, we cannot prove $h^0_{90} = 0$ using upper-semicontinuity. Moreover, the numerical method (Step \[item:Strategy\_HumanPart\_thm:ExceptCollection\_MaxLength\] in the proof of the previous theorem) cannot detect such variations originating from the position of the nodal cubics; hence it cannot be applied to the proof of $h^0_{90}=0$. The following lemma, used at the end of the proof of Theorem \[thm:ExceptCollection\_MaxLength\], illustrates the symmetric nature of $F_1,\ldots,F_8$. \[lem:BasePtPermutation\] Assume that $X^{{\mathrm{g}}}$ originates from a cubic pencil generated by two general plane nodal cubics $p_*C_1$ and $p_*C_2$. Let $D \in {\operatorname{Pic}}Y$ be a divisor on the rational elliptic surface $Y$. Assume that in the expression of $D$ in terms of the ${\mathbb{Z}}$-basis $\{p^*H, F_1,\ldots, F_9, E_1, E_2, E_3\}$, the coefficients of $F_1,\ldots, F_8$ are the same. Then, $h^p(D+F_i) = h^p(D+F_j)$ for any $p \geq 0$ and $1 \leq i,j \leq 8$. Note that $\op{Aut}{\mathbb P}^2 = \op{PGL}(3,{\mathbb{C}})$ sends any $4$ points (of which no three are collinear) to any other $4$ such points. Using projective linear equivalences, we may assume the following. 1. \[item:lem:BasePtPermutation\_DistinguishedBasePt\] The base point $p_*F_9$ is ${\mathbb{Q}}$-rational. 2. The nodes of $p_*C_1$ and $p_*C_2$ are ${\mathbb{Q}}$-rational. 3. The distinguished tangent direction at the node of $p_*C_2$ is defined over ${\mathbb{Q}}$. Also, we may take nodal cubics $p_*C_1$, $p_*C_2$ which satisfy the further assumptions: 1.
\[item:lem:BasePtPermutation\_QIdeal\] The ideals of $p_*C_1$, $p_*C_2$ are defined over ${\mathbb{Q}}$.[^4] 2. The eight points $p_*F_1,\ldots,p_*F_8$ are contained in the affine space $(z \neq 0) \subset {\mathbb P}^2_{x,y,z}$. 3. \[item:lem:BasePtPermutation\_Resultant\] Let $p_*F_i = [\alpha_i,\beta_i,1] \in {\mathbb P}^2$ for $\alpha_i,\beta_i \in {\mathbb{C}}$, and let $H_\alpha \in {\mathbb{Q}}[t]$ (resp. $H_\beta \in {\mathbb{Q}}[t]$) be the irreducible polynomial having $\alpha_1$ (resp. $\beta_1$) as a root. Then, both $H_\alpha$ and $H_\beta$ are of degree $8$, and $H_\alpha \neq H_\beta$ up to multiplication by ${\mathbb{Q}}^\times$. The last condition can be interpreted in terms of resultants. Let $h_i \in {\mathbb{Q}}[x,y,z]$ be the defining equation of $p_*C_i$. Let $\op{res}(h_1,h_2;x)$ be the resultant of $h_1(x,y,1),h_2(x,y,1)$ regarded as elements of $({\mathbb{Q}}[y])[x]$. The polynomial $\op{res}(h_1,h_2;x) \in {\mathbb{Q}}[y]$ records the $y$-coordinates of the base points; thus $\op{res}(h_1,h_2;x) = (\text{linear or constant factor}) \times H_\beta$. Note that the linear or constant factor always appears, due to \[item:lem:BasePtPermutation\_DistinguishedBasePt\]. The condition \[item:lem:BasePtPermutation\_Resultant\] imposes an open condition on plane nodal cubics; after locating $p_*F_9$ at a ${\mathbb{Q}}$-rational point by the $\op{PGL}(3,{\mathbb{C}})$ action, the degree $8$ factor of $\op{res}(h_1,h_2;x)$ (resp. $\op{res}(h_1,h_2;y)$), corresponding to $p_*F_1,\ldots,p_*F_8$, is irreducible for general $h_1,h_2$. If $p_*C_1$ and $p_*C_2$ satisfy the conditions \[item:lem:BasePtPermutation\_DistinguishedBasePt\]–\[item:lem:BasePtPermutation\_Resultant\] above, the blow-up construction $p \colon Y \to {\mathbb P}^2$ is well-defined over ${\mathbb{Q}}$, thus there exists a variety $Y_{\mathbb{Q}}$ over ${\mathbb{Q}}$ such that $Y = Y_{\mathbb{Q}}\times_{\mathbb{Q}}{\mathbb{C}}$.
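This resultant condition can be checked concretely. As a quick sanity check (a Python/sympy sketch, not part of the paper's Macaulay 2 script `ExcColl_Dolgachev.m2`), one can compute $\op{res}(h_1,h_2;x)$ for the explicit cubics $h_1$, $h_2$ chosen in Step \[item:M2Configuration\_thm:ExceptCollection\_MaxLength\] and verify that it vanishes at $y=1$, the $y$-coordinate of the ${\mathbb{Q}}$-rational base point $[-1,1,1]$, and has degree $8$, accounting for the eight affine base points (the ninth, $[0,1,0]$, lies at infinity):

```python
# Sanity check of the resultant condition (a sketch; the actual
# cohomology verifications are done in Macaulay 2).
from sympy import symbols, resultant, degree, factor

x, y, z = symbols('x y z')

# The explicit nodal cubics chosen in the proof.
h1 = (y - z)**2 * z - x**3 - x**2 * z
h2 = x**3 - 2*x*y**2 + 2*x*y*z + y**2 * z

# Work in the affine chart z = 1; the base point [0,1,0] lies at infinity,
# so res(h1, h2; x) records the y-coordinates of the eight affine base points.
r = resultant(h1.subs(z, 1), h2.subs(z, 1), x)

assert r != 0             # the cubics share no common component
assert r.subs(y, 1) == 0  # the rational base point [-1,1,1] gives the root y = 1
assert degree(r, y) == 8  # eight affine base points, counted with multiplicity
print(factor(r))
```

For these particular cubics the resultant factors over ${\mathbb{Q}}$ (by design, two base points are ${\mathbb{Q}}$-rational); the irreducibility of the degree-$8$ factor is only claimed for general $h_1,h_2$.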
Let $\tau \in \op{Aut}({\mathbb{C}}/{\mathbb{Q}})$ be a field automorphism fixing ${\mathbb{Q}}$ and mapping $\alpha_i$ to $\alpha_j$. Then $\tau$ induces an automorphism of ${\mathbb P}^2$ which fixes $p_*C_1$ and $p_*C_2$ by \[item:lem:BasePtPermutation\_QIdeal\]. It follows that $[\alpha_j, \tau(\beta_i), 1]$ is one of the nine base points $\{p_*F_i\}$. Since $H_\alpha$ and $H_\beta$ are different, there is no point of the form $[\alpha_j, \beta_k, 1]$ among the nine base points except when $k=j$. It follows that $\tau(\beta_i) = \beta_j$. Let $\tau_Y := \mathrm{id}_{Y_{\mathbb{Q}}} \times \tau$ be the automorphism of $Y = Y_{\mathbb{Q}}\times_{\mathbb{Q}}{\mathbb{C}}$. According to our assumptions, it satisfies the following properties: 1. $\tau_Y$ fixes $F_9, E_1, E_2, E_3$; 2. $\tau_Y$ permutes $F_1,\ldots,F_8$; 3. $\tau_Y$ maps $F_i$ to $F_j$. Furthermore, since the coefficients of $F_1,\ldots,F_8$ are the same in the expression of $D$, $\tau_Y$ fixes $D$. It follows that $\tau_Y^* \colon {\operatorname{Pic}}Y \to {\operatorname{Pic}}Y$ maps $D+F_j$ to $D+F_i$. In particular, $H^p(D+F_j) \simeq H^p(\tau_Y^*(D+F_j)) = H^p( D+F_i)$. Incompleteness of the collection {#subsec:Incompleteness} -------------------------------- Let $\mathcal A \subset {\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$ be the orthogonal subcategory $$\bigl\langle \mathcal O_{X^{{\mathrm{g}}}}(G_0^{{\mathrm{g}}}),\ \mathcal O_{X^{{\mathrm{g}}}}(G_1^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_{11}^{{\mathrm{g}}}) \bigr\rangle^\perp,$$ so that there exists a semiorthogonal decomposition $${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}}) = \bigl\langle \mathcal A,\ \mathcal O_{X^{{\mathrm{g}}}}(G_0^{{\mathrm{g}}}),\ \mathcal O_{X^{{\mathrm{g}}}}(G_1^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_{11}^{{\mathrm{g}}}) \bigr\rangle.$$ We will prove that $K_0(\mathcal A) = 0$, $\op{HH}_\bullet(\mathcal A) = 0$, but $\mathcal A\not\simeq 0$.
Such a category is called a *phantom* category. To give a proof, we claim that the *pseudoheight* of the collection (\[eq:ExcColl\_MaxLength\]) is at least $2$. Once the claim is established, [@Kuznetsov:Height Corollary 4.6] implies that $\op{HH}^0(\mathcal A) \simeq \op{HH}^0(X^{{\mathrm{g}}}) = {\mathbb{C}}$, thus $\mathcal A\not\simeq 0$.   1. Let $E_1,E_2$ be objects in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. The *relative height* $e(E_1,E_2)$ is the minimum of the set $$\{ p : {\operatorname{Hom}}(E_1,E_2[p]) \neq 0 \} \cup \{ \infty \}.$$ 2. Let $\langle F_0,\ldots,F_m\rangle$ be an exceptional collection in ${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}})$. The *anticanonical pseudoheight* is defined by $$\op{ph}_{\rm ac}(F_0,\ldots,F_m) = \min \Bigl ( \sum_{i=1}^p e(F_{a_{i-1}}, F_{a_i}) + e(F_{a_p} , F_{a_0} \otimes \mathcal O_{X^{{\mathrm{g}}}}(-K_{X^{{\mathrm{g}}}})) - p \Bigr),$$ where the minimum is taken over all possible tuples $0 \leq a_0 < \ldots < a_p \leq m$. The pseudoheight is given by the formula $\op{ph}(F_0,\ldots,F_m) = \op{ph}_{\rm ac}(F_0,\ldots,F_m) + \dim X^{{\mathrm{g}}}$, thus it suffices to prove that $\op{ph}_{\rm ac}(G_0^{{\mathrm{g}}},\ldots,G_{11}^{{\mathrm{g}}}) \geq 0$. Computing the exact value of $\op{ph}_{\rm ac}(G_0^{{\mathrm{g}}}, \ldots, G_{11}^{{\mathrm{g}}})$ requires more work; however, its nonnegativity is an immediate consequence of Theorem \[thm:ExceptCollection\_MaxLength\]. \[cor:Phantom\] In the semiorthogonal decomposition $${\operatorname{D}}^{\rm b}(X^{{\mathrm{g}}}) = \bigl \langle \mathcal A,\ \mathcal O_{X^{{\mathrm{g}}}}(G_0^{{\mathrm{g}}}),\ \ldots,\ \mathcal O_{X^{{\mathrm{g}}}}(G_{11}^{{\mathrm{g}}})\bigr\rangle,$$ we have $K_0(\mathcal A) = 0$, $\op{HH}_\bullet(\mathcal A)=0$, and $\op{HH}^0(\mathcal A) = {\mathbb{C}}$. Since $X^{{\mathrm{g}}}$ is a surface of special type, the Bloch conjecture holds for $X^{{\mathrm{g}}}$ [@Voisin:HodgeTheory2 11.1.3].
Thus the Grothendieck group $K_0(X^{{\mathrm{g}}})$ is a free abelian group of rank $12$ (see, e.g., [@GalkinShinder:Beauville Lemma 2.7]). Furthermore, the Hochschild-Kostant-Rosenberg isomorphism for Hochschild homology says $$\op{HH}_k(X^{{\mathrm{g}}}) \simeq \bigoplus_{q-p=k} H^{p,q}(X^{{\mathrm{g}}}),$$ hence, $\op{HH}_\bullet(X^{{\mathrm{g}}}) \simeq {\mathbb{C}}^{\oplus 12}$. It is well-known that $K_0$ and $\op{HH}_\bullet$ are additive invariants with respect to semiorthogonal decompositions, thus $K_0(X^{{\mathrm{g}}}) \simeq K_0(\mathcal A) \oplus K_0({}^\perp\mathcal A)$, and $\op{HH}_\bullet(X^{{\mathrm{g}}}) = \op{HH}_\bullet(\mathcal A) \oplus \op{HH}_\bullet({}^\perp \mathcal A)$.[^5] If $E$ is an exceptional vector bundle, then ${\operatorname{D}}^{\rm b}(\langle E\rangle ) \simeq {\operatorname{D}}^{\rm b}({\operatorname{Spec}}{\mathbb{C}})$ as ${\mathbb{C}}$-linear triangulated categories, thus $K_0({}^\perp\mathcal A) \simeq {\mathbb{Z}}^{\oplus 12}$ and $\op{HH}_\bullet({}^\perp\mathcal A)\simeq {\mathbb{C}}^{\oplus12}$. It follows that $K_0(\mathcal A) = 0$ and $\op{HH}_\bullet(\mathcal A)=0$. For any $0 \leq j < i \leq 11$, $$e(G_j^{{\mathrm{g}}}, G_i^{{\mathrm{g}}}) =\left\{ \begin{array}{ll} \infty & \text{if } 1 \leq j < i \leq 9 \\ 2 & \text{otherwise} \end{array} \right.$$ by Theorem \[thm:ExceptCollection\_MaxLength\]. Thus, for any length $p$ chain $0 \leq a_0 < \ldots < a_p \leq 11$, $$e(G_{a_0}^{{\mathrm{g}}}, G_{a_1}^{{\mathrm{g}}}) + \ldots + e(G_{a_{p-1}}^{{\mathrm{g}}}, G_{a_p}^{{\mathrm{g}}}) + e(G_{a_p}^{{\mathrm{g}}}, G_{a_0}^{{\mathrm{g}}} - K_{X^{{\mathrm{g}}}}) - p \geq p,$$ which shows that $\op{ph}_{\rm ac}(G_0^{{\mathrm{g}}},\ldots,G_{11}^{{\mathrm{g}}}) \geq 0$. By [@Kuznetsov:Height Corollary 4.6], $\op{HH}^0(\mathcal A) \simeq \op{HH}^0(X^{{\mathrm{g}}}) \simeq {\mathbb{C}}$.
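All of the $h^0$ verifications behind Theorem \[thm:ExceptCollection\_MaxLength\] ultimately reduce to the rank of a matrix of linear point conditions on the coefficients of a degree-$d$ form. As a toy illustration of this linear algebra (a Python/sympy sketch with hypothetical points, not the actual `ExcColl_Dolgachev.m2` computation), five points in general position impose independent conditions on conics, so the space of conics through them is one-dimensional:

```python
# Toy version of the rank computation behind the h^0 upper bounds:
# the space of degree-d plane curves through prescribed points is the
# kernel of the matrix of point-evaluation conditions on coefficients.
from sympy import Matrix, symbols, binomial

x, y = symbols('x y')
d = 2  # conics
monomials = [x**i * y**j for i in range(d + 1) for j in range(d + 1 - i)]
assert len(monomials) == binomial(d + 2, 2)  # = h^0(O_P2(d)) = 6

# Five hypothetical affine points in general position (no four collinear).
points = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
M = Matrix([[m.subs({x: px, y: py}) for m in monomials] for px, py in points])

# The points impose independent conditions: rank 5, kernel of dimension 1,
# i.e., a unique conic through them up to scale.
assert M.rank() == 5
print(len(monomials) - M.rank())  # dimension of the linear system
```

Perturbing the points can only preserve or increase the rank of $M$, which is the semicontinuity used in the proof to pass from one configuration to a general one.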
Cohomology computations ----------------------- We finish with Dictionary \[dictionary:H0Computations\] of the cohomology computations that appeared in the proof of Theorem \[thm:ExceptCollection\_MaxLength\]. Given a divisor $D$, the main strategy is to take various nonsingular curves $A_1,\ldots,A_r$ such that the values $(D.A_1)$, $(D - A_1 \mathbin . A_2 )$, $(D - A_1 - A_2 \mathbin . A_3),\ \ldots$ are small. The algorithm by which these curves bound $h^0(D)$ is presented in Dictionary \[dictionary:H0Computations\]. Most of the curves $A_1,\ldots,A_r$ (not necessarily distinct) will be chosen among the divisors illustrated in Figure \[fig:Configuration\_Basic\], but we need one more curve, which does not appear in Figure \[fig:Configuration\_Basic\]. Let $\ell$ be the proper transform of the unique line in ${\mathbb P}^2$ passing through the nodes of $p_* C_1$ and $p_* C_2$. In the divisor form, $$\ell = p^*H - E_1 - (E_2 + E_3).$$ Due to the divisor forms $$\begin{array}{r@{}l} C_1 &{}=p^*(3H) - 2E_1 - \sum_{i=1}^9F_i, \\ C_2 &{}=p^*(3H) - (2E_2 + 3E_3) - \sum_{i=1}^9 F_i,\ \text{and} \\ C_0 &{}=p^*(3H) - \sum_{i=1}^9 F_i, \end{array}$$ it is straightforward to write down the intersections involving $\ell$.\ $p^*H$ $F_i$ $C_0$ $C_1$ $E_1$ $C_2$ $E_2$ $E_3$ $\ell$ -------- -------- ------- ------- ------- ------- ------- ------- ------- -------- $\ell$ $1$ $0$ $3$ $1$ $1$ $1$ $1$ $0$ $-1$ \[table:AuxDivIntersection\] \[dictionary:H0Computations\] For each of the following Cartier divisors on $Y$, we give upper bounds for $h^0$. We take smooth rational curves $A_1,\ldots, A_r$, and consider the exact sequence $$0 \to H^0(D - S_i) \to H^0(D - S_{i-1}) \to H^0(\mathcal O_{{\mathbb P}^1}( D - S_{i-1}) ),$$ where $S_i = \sum_{j\leq i} A_j$ and $\mathcal O_{{\mathbb P}^1}(D - S_{i-1})$ denotes the restriction of $D - S_{i-1}$ to $A_i \simeq {\mathbb P}^1$. This gives the inequality $h^0(D - S_{i-1}) \leq h^0(D - S_i) + h^0( (D - S_{i-1})\big\vert_{A_i})$.
Inductively, we get $$h^0(D) \leq h^0(D - S_r) + \sum_{i=1}^{r} h^0( ( D - S_{i-1})\big\vert_{A_i}). \label{eq:dictionary:UpperBound}$$ In what follows, we take $A_1,\ldots,A_r$ carefully so that $h^0(D- S_r)=0$, and so that the values $h^0( (D - S_{i-1})\big\vert_{A_i})$ are as small as possible. In each item of the dictionary, we first present the target divisor $D$ and the bound for $h^0(D)$. We then give a list of smooth rational curves in the following format: $$A_1,\ A_2,\ \ldots,\ A_i\textsuperscript{(\checkmark)$\times d$},\ \ldots,\ n\times A_j,\ A_{j+n}, \ldots, A_r.$$ The symbol $(\checkmark) \times d$ (with $d \geq 1$) indicates the situation in which $(D - S_{i-1} \mathbin. A_i) = d-1$. In those cases, the right-hand side of (\[eq:dictionary:UpperBound\]) increases by $d$. We omit “$\times d$” if $d=1$. Also, $n \times A_j $ means that the same curve appears $n$ times in the list; in other words, it indicates the case $A_j= A_{j+1} = \ldots = A_{j+n-1}$. We conclude by showing that $D - S_r$ is not an effective divisor. The upper bound for $h^0(D)$ is then given by the number of $(\checkmark)$'s in the list, counted with their multiplicities $d$. Since all of these calculations are routine, we omit the details. From now on, $i$ denotes a number in $\{1,2,\ldots,8\}$. 1. \[dictionary:-G\_i\] $D=p^*(2H) + F_9 - F_i - 2C_0 + C_1 + C_2 + E_2 + E_3$ $h^0(D)=0$\ The following is the list of curves $A_1,\ldots,A_r$ (the order is important): $F_9,\, \ell,\, E_2,\, \ell$. The resulting divisor is $$D - A_1 - \ldots - A_r = p^*(2H) - F_i - 2C_0 + C_1 + C_2 + E_3 - 2\ell.$$ Since $\ell = p^*H - E_1 - (E_2 + E_3)$ and $C_0 = C_1 + 2E_1 = C_2 + 2E_2 + 3E_3$, $D - A_1 - \ldots - A_r = -F_i$. It follows that $H^0(D) \simeq H^0( - F_i) = 0$. 2. \[dictionary:Nonvanish\_K-G\_i\] $D = p^*(2H) + F_9 - F_i - C_0 + C_1 -E_1 + 2C_2 + E_2.$ $h^0(D) \leq 1$\ Rule out $C_2,\,E_2,\,\ell\textsuperscript{(\checkmark)},\,C_1,\,F_9,\,C_2,\,\ell,\,E_1$.
The resulting divisor is $p^*(2H) - F_i - C_0 - 2E_1 - 2\ell = -F_i - C_2 - E_3$. Since there is only one checkmark, $h^0(D) \leq h^0(-F_i - C_2 - E_3) + 1 = 1$. 3. \[dictionary:Nonvanish\_K-G\_9\] $D = p^*(2H) - 2C_0 + C_1 + C_2 + E_2 + E_3$ $h^0(D) \leq 1$\ Rule out $\ell,\, E_2,\, \ell,\, C_2\textsuperscript{(\checkmark)}$. The remaining part is $ p^*(2H) - 2C_0 + C_1 + E_3 - 2\ell = - C_2$, thus $h^0(D) \leq 1$. 4. \[dictionary:Nonvanish\_K+G\_i-G\_10\] $D = p^*(3H) + 2F_9 + F_i - 2C_0 + 2C_1 - E_1 + 3C_2 + E_2 - E_3$ $h^0(D) \leq 2$\ The following is the list of divisors that we have to remove: $$C_2,\ E_2,\ \ell\textsuperscript{(\checkmark)},\ E_2,\ F_9\textsuperscript{(\checkmark)},\ C_2,\ E_2,\ \ell,\ C_1,\ F_9,\ F_i,\ \ell.$$ The remaining part is $p^*(3H) - 2C_0 + C_1 - E_1 + C_2 - E_2 - E_3 - 3\ell = -E_3$, thus $h^0(D) \leq 2$. 5. \[dictionary:Nonvanish\_K+G\_9-G\_10\] $D = p^*(3H) + 3F_9 - 3C_0 + 3C_1 + 3C_2 + 2E_2 + E_3$ $h^0(D) \leq 2$\ Rule out the following curves: $$F_9\textsuperscript{(\checkmark)},\ C_1,\ C_2,\ E_2,\ F_9,\ \ell,\ E_2\textsuperscript{(\checkmark)},\ \ell,\ C_2,\ \ell,\ E_2,\ E_3,\ F_9,\ C_1,\ E_1.$$ The remaining part is $p^*(3H) - 3C_0 + C_1 - E_1 + C_2 - E_2 -3\ell = -C_0$, thus $h^0(D) \leq 2$. Appendix {#sec:Appendix} ======== A brief review on Hacking’s construction. {#subsec:HackingConstruction} ----------------------------------------- Let $n>a>0$ be coprime integers, let $X$ be a projective normal surface with quotient singularities, and let $(P \in X)$ be a $T_1$-singularity of type $(0 \in {\mathbb A}^2 / \frac{1}{n^2}(1,na-1))$. Suppose there exists a one parameter deformation $\mathcal X / ( 0 \in T)$ of $X$ over a smooth curve germ $(0 \in T)$ such that $(P \in \mathcal X) / (0 \in T)$ is a ${\mathbb{Q}}$-Gorenstein smoothing of $(P \in X)$. \[prop:HackingWtdBlup\] Take a base extension $(0 \in T') \to (0 \in T)$ of ramification index $a$, and let $\mathcal X'$ be the pull back along the extension. 
Then, there exists a proper birational morphism $\Phi \colon \tilde{\mathcal X} \to \mathcal X'$ satisfying the following properties. 1. The exceptional surface $W = \Phi^{-1}(P)$ is a projective normal surface isomorphic to $$(xy = z^n + t^a) \subset {\mathbb P}_{x,y,z,t}(1,na-1,a,n).$$ 2. The morphism $\Phi$ is an isomorphism outside $W$. 3. \[item:prop:HackingWtdBlup\] The central fiber $\tilde{\mathcal X}_0 = \Phi^{-1}(\mathcal X'_0)$ is reduced and has two irreducible components: $\tilde X_0$, the proper transform of $X$, and $W$. The intersection $Z:=\tilde X_0 \cap W$ is a smooth rational curve given by $(t=0)$ in $W$. Furthermore, the surface $\tilde X_0$ can be obtained in the following way: take a minimal resolution $Y \to X$ of $P \in X$, and let $G_1,\ldots,G_r$ be the chain of exceptional curves arranged so that $(G_i . G_{i+1})=1$ and $(G_r^2) = -2$. Then the contraction of $G_2,\ldots,G_r$ defines $\tilde X_0$. Clearly, $G_1$ maps to $Z$ along the contraction $Y \to \tilde X_0$. \[prop:Hacking\_BundleG\] There exists an exceptional vector bundle $G$ of rank $n$ on $W$ such that $G \big\vert_{Z} \simeq \mathcal O_Z(1)^{\oplus n}$. \[rmk:SimplestSingularCase\] Note that in the decomposition $\tilde{\mathcal X}_0 = \tilde X_0 \cup W$, the surface $W$ is completely determined by the type of the singularity $(P \in X)$, whereas $\tilde X_0$ reflects the global geometry of $X$. In some circumstances, $W$ and $G$ have explicit descriptions. 1. Suppose $a=1$. In ${\mathbb P}_{x,y,z,t}(1,n-1,1,n)$, $W_2 = (xy = z^n + t)$ and $Z_2 = (xy=z^n, t=0)$ by Proposition \[prop:HackingWtdBlup\]. The projection map ${\mathbb P}_{x,y,z,t}(1,n-1,1,n) \dashrightarrow {\mathbb P}_{x,y,z}(1,n-1,1)$ sends $W_2$ isomorphically onto ${\mathbb P}_{x,y,z}(1,n-1,1)$, thus we get $$W_2 = {\mathbb P}_{x,y,z}(1,n-1,1),\quad\text{and}\quad Z_2 = (xy=z^n) \subset {\mathbb P}_{x,y,z}(1,n-1,1).$$ 2.
Suppose $(n,a) = (2,1)$. Then it can be shown (by following the proof of Proposition \[prop:Hacking\_BundleG\]) that $W = {\mathbb P}_{x,y,z}^2$ and $G = \mathcal T_{{\mathbb P}^2}(-1)$, where $\mathcal T_{{\mathbb P}^2} = (\Omega_{{\mathbb P}^2}^1)^\vee$ is the tangent sheaf of the plane. Moreover, the smooth rational curve $Z = \tilde X_0 \cap W$ is embedded as a smooth conic $(xy = z^2)$ in $W$. The final proposition presents how to obtain an exceptional vector bundle on a general fiber of the smoothing. \[prop:HackingDeformingBundles\] Let $X^{{\mathrm{g}}}$ be a general fiber of the deformation $\mathcal X / (0 \in T)$, and assume $H^2(\mathcal O_{X^{{\mathrm{g}}}}) = H^1(X^{{\mathrm{g}}},{\mathbb{Z}}) = 0$.[^6] Let $G$ be an exceptional vector bundle on $W$ as in Proposition \[prop:Hacking\_BundleG\]. Suppose there exists a Weil divisor $D \in {\operatorname{Cl}}X$ such that $D$ does not pass through the singular points of $X$ except $P$, and the proper transform $D' \subset \tilde X_0$ of $D$ satisfies $(D'. Z) = 1$ and $\op{Supp} D' \subset \tilde X_0 \setminus \op{Sing} \tilde X_0$. Then the vector bundles $\mathcal O_{\tilde X_0}(D')^{\oplus n}$ and $G$ glue along $\mathcal O_Z(1)^{\oplus n}$ to produce an exceptional vector bundle $\tilde E$ on $\tilde{\mathcal X}_0$. Furthermore, the vector bundle $\tilde E$ deforms uniquely to an exceptional vector bundle $\tilde{\mathcal E}$ on $\tilde{\mathcal X}$. The restriction $\tilde{\mathcal E}\big\vert_{X^{{\mathrm{g}}}}$ to the general fiber is an exceptional vector bundle on $X^{{\mathrm{g}}}$ of rank $n$. [^1]: This can be realized as the natural group homomorphism ${\mathbb{Z}}/n^2{\mathbb{Z}}\to {\mathbb{Z}}/n{\mathbb{Z}}$ (see [@Hacking:ExceptionalVectorBundle Lemma 2.1]). [^2]: This assumption holds if $a=1$.
[^3]: For a ${\mathbb{Q}}$-divisor $D = \sum r_i D_i$ with $r_i \in {\mathbb{Q}}$, $\lfloor D\rfloor$ is defined to be the integral divisor $ \sum \lfloor r_i \rfloor D_i$ where $\lfloor\text{--} \rfloor$ is the round down function. [^4]: Note that the space $({\mathbb P}^9_{\mathbb{Q}})^*$ of plane cubic curves over ${\mathbb{Q}}$ is Zariski dense in $({\mathbb P}^9_{\mathbb{C}})^*$. [^5]: By definition of $\mathcal A$, ${}^\perp \mathcal A$ is the smallest full triangulated subcategory containing the collection (\[eq:ExcColl\_MaxLength\]) in Theorem \[thm:ExceptCollection\_MaxLength\]. [^6]: Since quotient singularities are Du Bois, this enforces $H^1(\mathcal O_X) = H^2(\mathcal O_X) = 0$. (cf. [@Hacking:ExceptionalVectorBundle Lem. 4.1])
--- abstract: 'We study the problem of a buyer (aka auctioneer) who gains stochastic rewards by procuring multiple units of a service or item from a pool of heterogeneous strategic agents. The reward obtained for a single unit from an allocated agent depends on the inherent quality of the agent; the agent’s quality is fixed but unknown. Each agent can only supply a limited number of units (the capacity of the agent). The costs incurred per unit and the capacities are private information of the agents. The auctioneer is required to elicit costs as well as capacities (making the mechanism design bidimensional) and, further, learn the qualities of the agents as well, with a view to maximizing her utility. Motivated by this, we design a bidimensional multi-armed bandit procurement auction that seeks to maximize the expected utility of the auctioneer subject to incentive compatibility and individual rationality while simultaneously learning the unknown qualities of the agents. We first assume that the qualities are known and propose an optimal, truthful mechanism [2D-OPT]{} for the auctioneer to elicit costs and capacities. Next, in order to learn the qualities of the agents in addition, we provide sufficient conditions for a learning algorithm to be Bayesian incentive compatible and individually rational. We finally design a novel learning mechanism, [2D-UCB]{}, that is stochastic Bayesian incentive compatible and individually rational.' author: - Satyanath Bhat - Shweta Jain - Sujit Gujar - Y Narahari bibliography: - 'crowd.bib' title: 'An Optimal Bidimensional Multi-Armed Bandit Auction for Multi-unit Procurement' --- Introduction {#sec:intro} ============ Auction-based mechanisms are widely used to allocate goods or services in the presence of strategic agents. In different contexts, the auctioneer may have different goals, such as welfare maximization, utility maximization, revenue maximization, or cost minimization.
Auction theory generally assumes that the players are symmetric, which means they are distinguished only by privately held types such as costs, valuations, or capacities. The theory does not consider the “experience” of an auctioneer resulting from the consumption of the commodity or service. The experience can be uncertain and not known upfront. For example, consider a hospital (auctioneer) interested in procuring a large number of units of a single generic drug from various pharmaceuticals who can supply limited quantities at different production costs. The quality of the procured generic drug from a supplier can depend on several parameters, such as the methodology used in preparation and other parameters which are inherent to the supplier. In this example and several other real-world scenarios, there is an inherent heterogeneity amongst services or items procured from different agents. Therefore, we can attribute to every agent an inherent quality which is a measure of the perceived experience or reward. Thus, in order to maximize her utility, the auctioneer needs to minimize her payments while at the same time ensuring a required quality of service. If the qualities from different agents are observed repeatedly, the auctioneer can learn the qualities of the agents for future optimization. A strong motivation for this work comes from the setting of crowdsourcing. The quality of human generated data or labels is an important input for an AI process or a machine learning system. With the advent of several crowdsourcing marketplaces, such inputs are now obtained at much lower cost from a global pool of heterogeneous crowd workers. These human workers have different quality levels and can be strategic about their costs. The risk of low quality levels is mitigated via learning algorithms which can predict high-quality workers, while strategic behavior of crowd workers can be addressed via mechanism design.
Thus, the auctioneer here is a requester who seeks to procure tasks from strategic crowd workers with privately held costs, privately held capacities, and unknown qualities. Motivated by situations such as the above, we consider a procurement scenario where a buyer (or auctioneer) wishes to procure multiple units of a service or item from a pool of heterogeneous agents with unknown qualities, privately held costs, and privately held limited capacities. Our goal is to design a procurement auction that learns the qualities of the agents, elicits true costs and capacities from the agents, and maximizes the expected utility of the auctioneer. If the agents are honest in reporting their costs and capacities, classical multi-armed bandit (MAB) techniques can be used to learn the qualities. For example, Tran-Thanh et al. [@THANH14] have proposed a greedy approach to learn the qualities of the crowd workers. On the other hand, if all the agents have the same quality and this is common knowledge, but costs and capacities are strategic, the auctioneer can deploy the techniques available in the literature [@IYENGAR08; @SUJIT13] to elicit true costs and capacities. In the setting considered in this paper, in addition to strategic costs and capacities, we also address heterogeneity amongst agents, and moreover we learn their qualities. Learning in the presence of strategic agents in an MAB setting leads to *MAB mechanisms* [@BABAIOFF09]. In this paper, we take a detour from current MAB mechanism theory in two ways. (i) We propose an optimal MAB mechanism that performs nearly as well as an optimal auction with full information, whereas the current literature mainly focuses on social welfare maximization. (ii) We provide a characterization for a weaker notion of truthfulness, i.e., stochastic Bayesian incentive compatibility, that can potentially achieve better regret bounds.
More importantly, while the existing research is also limited to learning with agents having single-dimensional private information, we design an MAB mechanism when the agents’ private information is two dimensional. In particular, the following are the contributions of this paper: - We first explore the case of heterogeneous agents with known qualities and provide a characterization of any Bayesian incentive compatible (BIC) and individually rational (IR) mechanism in a bidimensional setting. Using this characterization, we provide a blueprint for a mechanism that is BIC and IR and maximizes the expected utility of the auctioneer (Theorem \[thm:offline\_payment\]). We then propose an optimal mechanism [2D-OPT]{} which is in fact dominant strategy incentive compatible (DSIC) and IR (Theorem \[thm:chopt\_dsic\]). - We next take up the case when the qualities are unknown and derive sufficient conditions for an allocation rule to be implementable in a stochastic BIC and IR manner (Theorem \[lemma:online\_payment\]).[^1] This leads to a learning mechanism [2D-UCB]{} that is stochastic BIC and IR (Theorem \[thm:chucb\_bic\]). We evaluate [2D-UCB]{} through simulations and show that the expected utility of an auctioneer adopting the [2D-UCB]{} mechanism approaches that of the omniscient [2D-OPT]{}. Positioning of our Work {#sec:related_work} ======================= An extensive study of auction theory and mechanism design can be found in [@KRISHNA09]. The notion of an optimal auction was introduced by @MYERSON81. Subsequently, there were many significant results in single-parameter domains; however, the multi-parameter domain was unexplored until recently. The readers are referred to [@mishra2012multidimensional; @HARTLINE13] for more details on optimal multi-dimensional mechanism design. The settings addressed in most of the literature assume additive valuations.
In our work, cost and capacity parameters constitute the private information, and the valuation of the agents is not additive in these two parameters. Notably, @IYENGAR08 have designed an optimal single-item multi-unit auction for capacitated bidders, and this was further developed by @SUJIT13 for multi-item multi-unit auctions. However, as pointed out in Section \[sec:intro\], the above works [@IYENGAR08; @SUJIT13] assume that all agents are of the same quality. In our setting, the agents are heterogeneous and their qualities need to be learnt. If we assume honest agents, multi-armed bandit theory [@LAI85; @AUER00] is applicable to learn the qualities of the agents. Upper-confidence-bound-based algorithms have been designed to learn unknown quantities with logarithmic regret [@BUBECK12]. In the specific context of crowdsourcing, much research has been carried out on learning the qualities of crowd workers [@HO12; @HO13; @ABRAHAM13; @LONG12; @LONG13; @HO14; @SINGLA13; @BADANADIYURU12; @SINGER13; @SHIPRA14]. In a pure learning setting devoid of strategic play, the closest setting to ours is the one in @THANH14, which studies the problem in the context of crowdsourcing to maximize the number of successful tasks under a fixed budget. Note that all the above papers assume that costs are known. A learning algorithm can potentially be manipulated by a strategic agent so as to increase his utility. This problem is addressed using MAB mechanism design theory [@BABAIOFF09; @DEVANUR09; @GATTI12; @SUJIT12; @SHWETA14; @BABAIOFF10; @DEBMALYA14; @BABAIOFF13]. Most of the literature in this space (except [@BABAIOFF13]) considers strategic agents with single-dimensional private information and seeks to maximize social welfare. Our work, on the other hand, seeks to maximize the expected utility of the auctioneer.
The work in [@BABAIOFF13] considers a multi-parameter setting and seeks to maximize welfare, but with an additive valuation model where the valuation of each agent is a linear combination of different private values. Our work is different from [@BABAIOFF13] as we aim to design an optimal auction in a capacitated setting where additive valuations do not apply. Notation and Preliminaries {#sec:not} ========================== An auctioneer wishes to procure $L$ units of an item from an agent pool $N$ = $\{1,2,\ldots,n\}$. Let $q_i \in [0,1]$ represent the quality of agent $i$, let $c_i \in [\underline{c}_i,\overline{c}_i]$ be his true cost, and let $k_i \in [\underline{k}_i,\overline{k}_i]$ represent the maximum number of units the agent can provide, i.e., his true capacity. Let $q$, $c$, $k$ denote the vectors of qualities, costs, and capacities respectively. We consider a linear reward function for the auctioneer: she obtains an expected reward of $Rq_i$ on procuring a unit from agent $i$, where $R$ is a fixed positive real number. In this work, we make an important and reasonable assumption that an agent is not allowed to over-report his capacity. This is because if the auctioneer allocates to the agent beyond his capacity, this is detected eventually when the agent fails to deliver, which could lead to the imposition of a high penalty or to blacklisting the agent from further participation. In contrast to over-reporting, under-reporting of capacity cannot be detected. In the absence of proper incentives, an agent can create a virtual scarcity of supply by under-reporting his capacity, which can benefit him. We denote the reported cost by $\hat{c}_i \in [\underline{c}_i,\overline{c}_i]$ and the reported capacity by $\hat{k}_i \in [\underline{k}_i,k_i]$. Let $b_i = (\hat{c}_i,\hat{k}_i)$ denote the bid of agent $i$; the bid vector of all the agents except $i$ is denoted by $b_{-i}$.
The objective of the auctioneer is to maximize the expected reward from $L$ units of the item and at the same time minimize the payments to the agents, ensuring that at most $\hat{k}_i$ units are procured from each agent $i$. If all the parameters are known, then one can solve the following optimization problem, which maximizes the utility of the auctioneer: $$\begin{aligned} \max \sum_{i=1}^n \bigg( x_iRq_i - t_i\bigg)\quad \text{s.t.}\quad x_i \in \{0,1, \ldots, \hat{k}_i\},\ \sum_i x_i \le L, \label{eq:optdef} \end{aligned}$$ where $x_i$ represents the number of units that are procured from agent $i$ and $t_i$ denotes the payment given to agent $i$. The numbers of units procured from the agents $x = (x_1,x_2,\ldots,x_n)$ (the allocation) and the payments made to the agents $t = (t_1,t_2,\ldots,t_n)$ form the mechanism, denoted by $\mathcal{M} = (x,t)$. Note that the allocation $x$ and payment $t$ depend on the bids reported by the agents and on the qualities. We assume an independent private value model, and that the joint probability density function $f_i(c_i,k_i)$ of each agent's type is common knowledge. Let $X$ and $T$ denote the expected allocations and expected payments, where the expectation is taken over the bids of the other agents. That is, $X_i(\hat{c}_i,\hat{k}_i;q_i)$ represents the expected number of units procured from agent $i$ when he bids cost per unit $\hat{c}_i$, bids capacity $\hat{k}_i$, and his quality is $q_i$. The $T_i$’s are defined analogously. We now define some desirable properties for a mechanism when the qualities are known. [**[(Bayesian Incentive Compatible)]{}**]{} A mechanism is called *Bayesian Incentive Compatible* (BIC) if reporting truthfully gives an agent the highest expected utility when the other agents are truthful, with the expectation taken over the type profiles of the other agents.
Formally, $\forall i \in N, \forall \hat{c}_i, c_i \in [\underline{c}_i,\overline{c}_i], \forall \hat{k}_i \in [\underline{k}_i,k_i]$, $$\begin{aligned} \nonumber U_i(c_i,k_i, c_i, k_i;q) \geq U_i(\hat{c}_i, \hat{k}_i,c_i,k_i;q), \end{aligned}$$ where $U_i(\hat{c}_i,\hat{k}_i, c_i, k_i;q) = \mathbb{E}_{b_{-i}}[t_i(\hat{c}_i,\hat{k}_i;q) - c_ix_i(\hat{c}_i,\hat{k}_i;q)]$. [**[(Dominant Strategy Incentive Compatible)]{}**]{} A mechanism is called *Dominant Strategy Incentive Compatible* (DSIC) if reporting truthfully gives every agent the highest utility irrespective of the bids of the other agents. Formally, $\forall i \in N, \forall \hat{c}_i, c_i \in [\underline{c}_i,\overline{c}_i], \forall \hat{k}_i \in [\underline{k}_i,k_i]$, $\forall \hat{c}_{-i},\ \forall \hat{k}_{-i}$, $$\begin{aligned} u_i(c_i,\hat{c}_{-i},k_i,\hat{k}_{-i}, c, k;q) \geq u_i(\hat{c}_i,\hat{c}_{-i}, \hat{k}_i,\hat{k}_{-i},c,k;q) \end{aligned}$$ where $u_i(\hat{c}_i,\hat{c}_{-i},\hat{k}_i,\hat{k}_{-i}, c, k;q) = t_i(\hat{c},\hat{k};q) - c_ix_i(\hat{c},\hat{k};q)$ is the utility when the true type profile is $(c,k)$ and agent $i$ reports $(\hat{c}_i,\hat{k}_i)$. [**[(Individually Rational)]{}**]{} A mechanism is called *Individually Rational* (IR) if no agent derives negative utility by participating in the mechanism. Formally, $\forall i \in N, \forall c_i \in [\underline{c}_i,\overline{c}_i], \forall k_i \in [\underline{k}_i,k_i]$, $$\begin{aligned} u_i(c_i,k_i, c, k;q) \geq 0\end{aligned}$$ [**[(Optimal Mechanism)]{}**]{} A mechanism $\mathcal{M} = (x,t)$ is called optimal if it maximizes \[eq:optdef\] subject to BIC and IR. Auction with Known Qualities {#sec:offline} ============================ We now derive the characterization for any mechanism to be BIC and IR when the qualities are known.
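As a concrete reference point, when all types are known and each agent is simply paid his true cost, the optimization in \[eq:optdef\] reduces to a greedy allocation by per-unit margin $Rq_i - c_i$. The sketch below is illustrative only; the agent data and function name are hypothetical, not part of the paper.

```python
# Full-information benchmark of eq. (eq:optdef): pay each agent his true
# cost and allocate greedily by the per-unit margin R*q_i - c_i.
# Agent data and the function name are hypothetical, for illustration only.

def full_info_allocation(agents, R, L):
    """agents: list of (q_i, c_i, k_i); returns (units per agent, auctioneer utility)."""
    # Sort agents by decreasing per-unit margin R*q_i - c_i.
    order = sorted(range(len(agents)),
                   key=lambda i: R * agents[i][0] - agents[i][1],
                   reverse=True)
    x = [0] * len(agents)
    utility, remaining = 0.0, L
    for i in order:
        q, c, k = agents[i]
        margin = R * q - c
        if margin <= 0 or remaining == 0:
            break  # remaining agents can only lower the auctioneer's utility
        x[i] = min(k, remaining)   # never exceed the agent's capacity k_i
        remaining -= x[i]
        utility += x[i] * margin
    return x, utility

agents = [(0.9, 2.0, 3), (0.5, 1.0, 4), (0.2, 3.0, 10)]  # (quality, cost, capacity)
x, u = full_info_allocation(agents, R=10.0, L=5)
# margins are 7, 4, -1, so x = [3, 2, 0] and utility = 3*7 + 2*4 = 29
```

The interesting part of the paper is, of course, that neither this benchmark allocation nor cost-based payments are incentive compatible once costs and capacities are private.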
Characterization ---------------- In the setting considered in this paper, as described in \[sec:not\], VCG mechanisms can be used to elicit the costs and capacities from the agents, and they satisfy DSIC and IR. However, VCG mechanisms maximize social welfare and may or may not be utility-maximizing for the auctioneer [@rothkopf2007thirteen]. Any allocation should be compensated with at least the cost incurred by the agent, irrespective of the quality of the units procured. We propose to pay a premium to each agent above his true cost so as to incentivize him to report costs and capacities truthfully. We define, $\forall i \in N,$ $$\begin{aligned} \rho_i(b_i;q)=T_i(b_i;q)-\hat{c}_iX_i(b_i;q), \mbox{ where } b_i=(\hat{c}_i,\hat{k}_i).\end{aligned}$$ The utility of agent $i$ with bid $b_i$ is given by $$\begin{aligned} U_i(b_i,c_i,k_i;q) &= T_i(b_i;q) - c_i X_i(b_i;q) \nonumber\\ &= \rho_{i}(b_i;q) -(c_i-\hat{c}_i)X_i(b_i;q) \label{eq:rho_utility}\end{aligned}$$ Thus $\rho_i$ represents the offered utility when all the agents are truthful. With the above offered incentive, we have the following theorem. \[thm:bic\_ir\] A mechanism is BIC and IR iff $\forall i \in N$, 1. [$X_i(\hat{c}_i,\hat{k}_i;q)$ is non-increasing in $\hat{c}_i,\ \forall q \mbox{ and } \forall \hat{k}_i \in [\underline{k}_i,k_i]$]{}\[thm:mon-cond2\]. 2. [$\rho_{i}(\hat{c}_i,\hat{k}_i;q)$ is non-negative, and non-decreasing in $\hat{k}_i$ $\forall\;q $ and $\forall\;\hat{c}_i\;\in\;[\underline{c}_i,\bar{c}_i]$]{}\[thm:mon-cond1\]. 3.
[$\rho_{i}(\hat{c}_i,\hat{k}_i;q) = \rho_{i}(\bar{c}_i,\hat{k}_i;q) + \int_{\hat{c}_i}^{\overline{c}_i}X_i(z,\hat{k}_i;q)dz $]{} \[thm:utl-form\] We refer to the above three statements as conditions \[thm:mon-cond2\], \[thm:mon-cond1\] and \[thm:utl-form\] respectively.\ [*Proof:*]{} To prove the necessity part, we first observe that, due to BIC, $$\begin{aligned} &U_i(\hat{c}_i,\hat{k}_i,c_i,k_i;q) \leq U_i(c_i,k_i,c_i,k_i;q) \qquad\forall(\hat{c}_i,\hat{k}_i) \mbox{ and }(c_i,k_i)\\ &\implies U_i(\hat{c}_i,k_i,c_i,k_i;q)\leq U_i(c_i,k_i,c_i,k_i;q)\end{aligned}$$ We assume $\hat{c}_i>c_i$; the proof follows along identical lines otherwise. From \[eq:rho\_utility\], $$\begin{aligned} U_i(\hat{c}_i,k_i,c_i,k_i;q) = U_i(\hat{c}_i,k_i,\hat{c}_i,k_i;q) + (\hat{c}_i-c_i)X_i(\hat{c}_i,k_i;q),\end{aligned}$$ which implies that $$\begin{aligned} \frac{U_i(\hat{c}_i,k_i,\hat{c}_i,k_i;q)-U_i(c_i,k_i,c_i,k_i;q)}{\hat{c}_i-c_i} \leq -X_i(\hat{c}_i,k_i;q).\end{aligned}$$ Similarly, using $U_i(c_i,k_i,\hat{c}_i,k_i;q) \leq U_i(\hat{c}_i,k_i,\hat{c}_i,k_i;q)$, $$\begin{aligned} -X_i(c_i,k_i;q) &\leq\frac{U_i(\hat{c}_i,k_i,\hat{c}_i,k_i;q)-U_i(c_i,k_i,c_i,k_i;q)}{\hat{c}_i-c_i}\nonumber \\ &\leq-X_i(\hat{c}_i,k_i;q).\label{eq:mono1}\end{aligned}$$ Taking the limit $\hat{c}_i\rightarrow c_i$, we get $$\begin{aligned} \frac{\partial U_i(c_i,k_i,c_i,k_i;q)}{\partial{c}_i} = -X_i(c_i,k_i;q). \label{eq:pde}\end{aligned}$$ Equation (\[eq:mono1\]) implies that $X_i(c_i,k_i;q)$ is non-increasing in $c_i$. This proves condition \[thm:mon-cond2\] of the theorem in the forward direction. When the agent bids truthfully, from Equation (\[eq:rho\_utility\]), $$\begin{aligned} \rho_{i}(c_i,k_i;q)=U_i(c_i,k_i,c_i,k_i;q).\label{eq:rho1}\end{aligned}$$ For BIC, Equation (\[eq:pde\]) must hold. So, $$\begin{aligned} \rho_{i}(c_i,k_i;q)=\rho_{i}(\bar{c}_i,k_i;q)+\int_{c_i}^{\bar{c}_i}X_i(z,k_i;q)dz\label{eq:rho2}\end{aligned}$$ This proves condition \[thm:utl-form\] of the theorem.
BIC also requires $$\begin{aligned} k_i \in \operatorname*{\arg\max}_{\hat{k}_i\in[\underline{k}_i,k_i]} U_i(c_i,\hat{k}_i,c_i,k_i;q) \;\forall\; c_i\;\in\;[\underline{c}_i,\bar{c}_i]\end{aligned}$$ This implies that, $\forall c_i,\;\rho_{i}(c_i,k_i;q)$ should be non-decreasing in $k_i$. The IR conditions (Equation (\[eq:rho1\])) imply $$\rho_{i}(c_i,k_i;q)\geq 0.$$ This proves condition \[thm:mon-cond1\] of the theorem. Thus, these three conditions are necessary for the BIC and IR properties. We now prove sufficiency. Consider $$\begin{aligned} U_i(c_i,k_i,c_i,k_i;q)=\rho_i(c_i,k_i;q) \geq 0.\end{aligned}$$ So the IR property is satisfied. We assume $\hat{c}_i>c_i$; the proof is similar for the case $\hat{c}_i<c_i$. To establish BIC, consider: $$\begin{aligned} &U_i(\hat{c}_i,\hat{k}_i,c_{i},k_i;q) \\ &=\rho_{i}(\hat{c}_i,\hat{k}_i;q)+(\hat{c}_i-c_i)X_i(\hat{c}_i,\hat{k}_i;q)\tag*{(By Defn)}\nonumber \\ &= \rho_{i}(\bar{c}_i,\hat{k}_i;q) + \int_{\hat{c}_i}^{\bar{c}_i}X_i(z,\hat{k}_i;q)dz + (\hat{c}_i-c_i)X_i(\hat{c}_i,\hat{k}_i;q) \tag*{(By hypothesis)} \nonumber \\ &= \rho_{i}(\bar{c}_i,\hat{k}_i;q) + \int_{c_i}^{\bar{c}_i}X_i(z,\hat{k}_i;q)dz \\ & \qquad \qquad - \int_{c_i}^{\hat{c}_i}X_i(z,\hat{k}_i;q)dz + (\hat{c}_i-c_i)X_i(\hat{c}_i,\hat{k}_i;q)\nonumber \\ &\leq \rho_{i}(c_i,\hat{k}_i;q) \tag*{($X_i$ is non-increasing in $c_i$)} \nonumber \\ &\leq \rho_{i}(c_i,k_i;q) \tag*{(as $\rho_{i}$ is non-decreasing in $k_i$)} \nonumber \\ &= U_i(c_{i},k_i,c_i,k_i;q) \nonumber &\hfill \blacksquare\end{aligned}$$ Sufficiency Conditions for Optimality ------------------------------------- We now present sufficiency conditions for an IR, BIC mechanism to be optimal. Let $F_i(c_i|k_i)$ and $f_i(c_i|k_i)$ denote, respectively, the conditional cumulative distribution function and probability density function of the cost of agent $i$ given his capacity.
\[thm:offline\_payment\] Suppose the allocation rule maximizes $$\begin{aligned} &\sum_{i=1}^{n} \int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \Bigg( Rq_i - \bigg(c_i + \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)} \bigg) \Bigg) \nonumber \\ &x_i(c_i,k_i,c_{-i},k_{-i}) f_1(c_1,k_1) \ldots f_n(c_n,k_n) \,dc_1\ldots dc_n \, dk_1 \ldots dk_n \label{opt_stmt}\end{aligned}$$ subject to conditions \[thm:mon-cond2\] and \[thm:mon-cond1\] of Theorem \[thm:bic\_ir\]. Also suppose that the payment is given by $$\begin{aligned} T_i(c_i,k_i;q) = c_iX_i(c_i,k_i;q) + \int_{c_i}^{\overline{c}_i} X_i(z,k_i;q)dz \label{eqn:opt_payment}\end{aligned}$$ Then such a payment scheme and allocation scheme constitute an optimal auction satisfying BIC and IR. The auctioneer’s objective is to maximize her expected utility, which is: $$\begin{aligned} &\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big[ R q_i x_i(b;q) -t_i(b;q)\big] \nonumber \\ & f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\ &=\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big[x_i(b;q) (Rq_i -c_i + c_i)-t_i(b;q)\big] \nonumber \\ &\qquad f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\ &=\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big( c_i x_i(b;q)-t_i(b;q) \big)\nonumber \\ & f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\ &+\sum_{i=1}^{n} \int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \Bigg( Rq_i - c_i
\Bigg)x_i(c_i,k_i,c_{-i},k_{-i}) \nonumber \\ &\qquad f_1(c_1,k_1) \ldots f_n(c_n,k_n) \,dc_1\ldots dc_n \, dk_1 \ldots dk_n \label{opt_stmtint}\end{aligned}$$ The second term of \[opt\_stmtint\] is already in the desired form of the auctioneer's objective given in \[opt\_stmt\]. We now use conditions \[thm:mon-cond2\] and \[thm:utl-form\] of Theorem \[thm:bic\_ir\] to arrive at the result. Consider the first term, [$$\begin{aligned} &\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big( c_i x_i(b;q)-t_i(b;q) \big) \nonumber \\ &f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\ &= - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(c_i,k_i;q) f_i(c_i,k_i) dc_i \, dk_i \tag*{(Integrating out $b_{-i}$)} \nonumber \\ &= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \bigg(\rho_i(\bar{c}_i, k_i) + \int_{c_i}^{\bar{c}_i} X_i(z,k_i;q) dz\bigg) \, f_i(c_i,k_i) dc_i \, dk_i \tag*{(As we need truthfulness)} \\ &= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\ & \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} X_i(z,k_i;q) dz \int_{\underline{c}_i}^{z} \, f_i(c_i|k_i) dc_i \; f_i(k_i) dk_i \tag*{(Changing order of integration)}\nonumber \\ &= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\ & \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} X_i(z,k_i;q) F_i(z|k_i) dz f_i(k_i) dk_i \nonumber \\ &= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\ & \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} X_i(c_i,k_i;q) \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)} f_i(c_i, k_i) dc_i \, dk_i
\label{eq-inter}\end{aligned}$$ ]{} The last step is obtained by relabeling the variable of integration and simplifying. Here, $\rho_i(\bar{c}_i, k_i)$ denotes the utility of an agent $i$ when his true type is $(\bar{c}_i, k_i)$. With this type profile, the auctioneer, by paying $\bar{c}_i$, can ensure both IR and IC; hence we can set $\rho_i(\bar{c}_i, k_i) = 0, \forall k_i \in [\underline{k}_i,\bar{k}_i]$. Applying this in the above equation, we get that the objective function of the auctioneer is similar in form to \[opt\_stmt\]. Considering condition \[thm:utl-form\] of Theorem \[thm:bic\_ir\] and setting $\rho_i(\bar{c}_i, k_i) = 0$, we get \[eqn:opt\_payment\]. By construction, the mechanism is BIC and IR. Since the auctioneer’s expected utility is maximized, the mechanism is optimal. Analogous to the literature on optimal auctions [@SUJIT13; @IYENGAR08; @MYERSON81], we assume regularity of our type distribution, as follows. We define the virtual cost function $\forall i \in N$ as $$\begin{aligned} H_i(c_i,k_i) := c_i + \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)}\end{aligned}$$ We say that a type distribution is regular if, $\forall i$, $H_i$ is non-decreasing in $c_i$ and non-increasing in $k_i$. This assumption is not restrictive in the single-dimensional setting, as standard ironing techniques are available [@MYERSON81]. The ironing techniques can also be applied in the bidimensional setting whenever the marginal cost distribution is independent of the marginal capacity distribution. [2D-OPT]{}: An Optimal Auction ------------------------------ We now present our mechanism [2D-OPT]{}, given in \[alg:offline\_mechanism\].
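As a concrete instance of the regularity condition, suppose the cost of agent $i$ is uniform on $[\underline{c}_i,\overline{c}_i]$, independently of his capacity. Then $F_i(c|k)/f_i(c|k) = c - \underline{c}_i$, so $H_i(c,k) = 2c - \underline{c}_i$, which is non-decreasing in $c$ and constant in $k$. A minimal numerical sketch (the function name is hypothetical):

```python
# Virtual cost H_i(c, k) = c + F_i(c|k)/f_i(c|k) for a cost distribution
# that is Uniform[lo, hi] independently of capacity: H(c) = 2c - lo.
# Hypothetical illustration of the regularity condition.

def virtual_cost_uniform(c, lo, hi):
    assert lo <= c <= hi
    F = (c - lo) / (hi - lo)   # CDF of Uniform[lo, hi] at c
    f = 1.0 / (hi - lo)        # density of Uniform[lo, hi]
    return c + F / f           # = 2c - lo

# H is non-decreasing in c, so this type distribution is regular.
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
H = [virtual_cost_uniform(c, 0.0, 1.0) for c in grid]
assert all(H[j] <= H[j + 1] for j in range(len(H) - 1))
# e.g. H(0.5) = 1.0: a cost-0.5 agent is ranked as if his cost were 1.0
```

The gap $H_i(c_i,k_i) - c_i$ is exactly the information rent the auctioneer concedes to elicit the cost truthfully.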
Allocation is given by $x$ = ALLOC($N,\hat{c}, \hat{k}, q, L$)

Subroutine ALLOC($N^\tau, c^\tau,k^\tau, q^\tau, L^\tau$):

- $(a_1,a_2,\ldots)$ = sorted indices of agents in $N^\tau$ in non-increasing order of $G_\kappa$

- $x=0$

- $L^{(1)}=L^\tau$

Mechanism [2D-OPT]{} is optimal, DSIC and IR. \[thm:chopt\_dsic\] [*Proof:* ]{} We will prove that [2D-OPT]{} satisfies Theorem \[thm:offline\_payment\], which proves optimality, IR, and BIC. The allocation function (ALLOC) allocates the maximum possible number of units to agents in decreasing order of the $G$’s, which in turn maximizes \[opt\_stmt\]. This is because \[opt\_stmt\] is a linear combination of the $G$’s. The monotonicity constraint \[thm:mon-cond2\] of Theorem \[thm:bic\_ir\] is satisfied due to regularity. Fix an agent $i$ with non-zero allocation. We will show that the payment given to agent $i$ by [2D-OPT]{} is the same as in \[eqn:opt\_payment\]. We fix a bid profile $b_{-i}$ that yields a non-zero allocation to agent $i$. The payment to agent $i$ for bid profile $(b_i, b_{-i})$ as per \[eqn:opt\_payment\] is as follows: $$\begin{aligned} t_i(c_i,k_i,b_{-i};q) = c_ix_i(c_i,k_i,b_{-i};q) + \int_{c_i}^{\overline{c}_i} x_i(z,k_i,b_{-i};q)dz \label{eq:condpayment}\end{aligned}$$ Taking the expectation over $b_{-i}$ in \[eq:condpayment\], we get \[eqn:opt\_payment\]. The interchange of integral and expectation required therein is valid due to Fubini’s theorem [@royden1988real], as the integrand is finite and non-negative. We will show that [2D-OPT]{} computes this payment for any $b_{-i}$. To compute the RHS of \[eq:condpayment\], we first observe that when bidder $i$ alone increases his bid, he can lose some (or all) of the units allocated to him to bidders with lower values of $G$. Hence, the allocation to agent $i$ as a function of his bid $z \in [c_i, \bar{c}_i]$ is a step function as shown in Figure \[fig:pfint\].
The payment to be given to agent $i$ as per \[eq:condpayment\] is the shaded area. ![Allocation to agent $i$ as function of his bid $z$[]{data-label="fig:pfint"}](proofAlloc.pdf){height="2.1in"} Let $g^{(1)}<g^{(2)}<\cdots<g^{(m)}$, where $g^{(1)} >c_i$ and $g^{(m)}< \overline{c}_i$, be the costs at which agent $i$ loses some more of his units. At these points, the allocation also dictates that an allocated agent $r$ either completely exhausts the units $x_i$ previously allocated to $i$ or has no capacity left himself. On the other hand, the payment scheme of [2D-OPT]{} first determines the allocation of the $x_i(c_i, k_i, c_{-i}, k_{-i})$ units in the absence of $i$, as given by line \[allocminusi\] of \[alg:offline\_mechanism\]. Let $U := \{j \in N \setminus \{i\}: y_j \neq 0\}$, where $y$ is the allocation to the agent set $N\setminus\{i\}$. We partition the set $U$ into $V := \{j \in N \setminus \{i\}: y_j \neq 0 \mbox{, } G_i(\bar{c}_i) < G_j < G_i(c_i) \}$ and $W := \{j \in N \setminus \{i\}: y_j \neq 0 \mbox{, } 0< G_j <G_i(\bar{c}_i) \}$. Without loss of generality, we assume $G_i(\bar{c}_i) \geq 0$; otherwise we relabel $G_i^{-1}(0)$ as $\bar{c}_i$. No allocations are made to agents with a negative value of $G$ (see line \[ln:zero\] of ALLOC). Also, as the allocation of the $x_i$ units considers the residual capacities $(\hat{k}_{-i}-x_{-i})$ (see line \[allocminusi\] of \[alg:offline\_mechanism\]), no agent with $G$ higher than $G_i(c_i)$ will have any capacity left. For the sake of simpler exposition, we assume $U=V\cup W$; the proof follows similar lines otherwise. Let $(a_1,a_2, \ldots , a_m)$ be the indices of agents in $V$ sorted in non-increasing order of $G$. Now, agents are allocated units from $x_i$ in the order given by $(a_k)_{k=1}^{m}$. It follows that $G_i^{-1}(Rq_{a_1} - H_{a_1}(b_{a_1}))=g^{(1)}$ and the allocation to this agent $a_1$ corresponds to $y^{(1)}$.
This forms the term $y_{a_1}G_i^{-1}(Rq_{a_1} - H_{a_1}(b_{a_1}))$ of the payment to $i$ and corresponds to the area of rectangle $ABCD$. Similarly, the payment to $i$ due to $a_2$ corresponds to the area of rectangle $DEFG$. This holds for all agents in the set $V$, and rectangle $PQRS$ denotes the payment due to $a_m$. Finally, rectangle $STUV$ corresponds to agents in $W$, or to units that remain unallocated as there is no capacity left among the remaining agents. The latter is captured by the term $(x_i- \sum_k y_k)\bar{c}_i$. Hence the proposed payment rule computes \[eqn:opt\_payment\], as we have shown it for any fixed $b_{-i}$. The offered utility $\rho_i$ when all agents are truthful is non-decreasing in the true capacity $k_i$. This is due to the greedy nature of the allocation in ALLOC. Thus, condition \[thm:mon-cond1\] of Theorem \[thm:bic\_ir\] is satisfied, and [2D-OPT]{} satisfies Theorem \[thm:offline\_payment\]. We therefore have that the proposed mechanism is BIC, IR, and optimal. Regarding DSIC, we omit a formal proof due to space constraints and provide only a sketch. We note that the allocation is deterministic and that the payment to agent $i$ does not depend on his bid directly, but only via the allocation. Furthermore, the payments are computed based on the allocations that would be made in the absence of $i$ for the $x_i$ units he is currently allocated. For every unit, the agent is paid the best possible price he could have bid and still won the unit. $\blacksquare$ Auction with Unknown Qualities {#sec:online} ============================== This section addresses the problem when the qualities are not known and have to be learnt. In order to maximize her utility, the auctioneer procures units from the agents in a sequential manner so that she can base future decisions on the learning history. We now discuss definitions relevant in this setting.
A reward realization $s$ is an $n\times L$ table where the $(i,j)$ entry represents an independent realization drawn from the true quality of the $i^{th}$ agent when the $j^{th}$ unit is procured from him. Note that the $(i,j)$ entry in the reward realization indicates the quality of the $i^{th}$ agent when the $j^{th}$ unit is procured from him, not when the $j^{th}$ unit overall is procured by the requester. We say that a mechanism $\mathcal{M}=(x,t)$ is Stochastic BIC if truth telling by any agent $i$ results in the highest expected utility, with the expectation taken over reward realizations and the type profiles of the other agents. Formally, $\forall \hat{c}_i \in [\underline{c}_i,\overline{c}_i], \hat{k}_i \in[\underline{k}_i,k_i],$ $$\begin{aligned} \mathbb{E}_{s}[U_i(c_i,k_i,c_i,k_i;s)] \ge \mathbb{E}_{s}[U_i(\hat{c}_i,\hat{k}_i,c_i,k_i;s)].\end{aligned}$$ Sufficiency Conditions for Stochastic BIC ----------------------------------------- We now provide sufficiency conditions for a mechanism to be stochastic BIC and IR. We begin by stating the modified characterization theorem for the learning setting. \[thm:online\_bic\] Any mechanism that satisfies the following conditions $\forall i \in N,\ \forall s \in [0,1]^{n\times L}$, is stochastic BIC and IR. 1. $X_i(c_i,k_i;s)$ is non-increasing in $c_i$, $\forall s \mbox{ and } \forall k_i \in [\underline{k}_i,k_i]$. 2. $\rho_{i}(\hat{c}_i,\hat{k}_i;s)$ is non-negative, and non-decreasing in $\hat{k}_i\; \forall s$ and $\forall \hat{c}_i$ $\in [\underline{c}_i,\bar{c}_i].$ 3. $\rho_{i}(\hat{c}_i,\hat{k}_i;s) = \rho_{i}(\bar{c}_i,\hat{k}_i;s) + \int_{\hat{c}_i}^{\bar{c}_i}X_i(z,\hat{k}_i;s)dz$\[online\_payment\] The proof of the above theorem is similar to that of Theorem \[thm:bic\_ir\]. Instead of fixing a quality, we now fix a reward realization. The mechanism also remains stochastic BIC and IR when it satisfies these conditions in expectation over reward realizations. We now discuss a set of natural properties which a mechanism in this space should ideally have.
It also turns out that these properties are sufficient to ensure BIC and IR. An allocation rule $x$ is called a [Well-Behaved ]{}Allocation if: 1. The allocation to any agent $i$ for the unit being allocated in round $j$, $x_i^j$, for any reward realization $s$, depends only on the agents’ bids and the reward realizations of the $j$ units procured by the auctioneer so far, and is non-increasing in the agent’s reported cost. 2. For the unit being allocated in round $j$ and any three distinct agents $\{\alpha,\beta,\gamma \}$ such that the $j^{th}$ round unit is allocated to $\beta$, a change of bid by agent $\alpha$ should not transfer the allocation of the $j^{th}$ round unit from $\beta$ to $\gamma$ if the other quantities are fixed up to $j$ units. 3. For all reward realizations $s$, $x_i(c_i,k_i;s)$ is non-decreasing in the capacity $k_i$. As mentioned earlier, these properties are natural. Property $1$ states that the allocation should not depend on any future reward realizations, which are not yet observed. Property $2$ is similar to the Independence of Irrelevant Alternatives (IIA) property in mechanism design theory, i.e., if an agent changes his bid, it should not affect the relative allocations among the other agents. Property $3$ states that the allocation rule does not penalize an agent with higher capacity when the other parameters are identical. \[lemma:online\_monotone\] If an allocation rule $x$ is [Well-Behaved ]{}then, $\forall s$ and $\forall \hat{k}_i \in [\underline{k}_i,k_i]$, $x_i(c_i,\hat{k}_i;s)$ is non-increasing in $c_i$. By a slight abuse of notation, let $x_i(c_i,j)$ denote the number of items procured from agent $i$ with bid $c_i$ until $j$ items are procured in total. We need to prove that $$\begin{aligned} x_i(c_i,j) \le x_i(c_i^-,j)\ \forall c_i^- \le c_i\end{aligned}$$ We prove this by induction. At $j=1$, the condition trivially holds by the monotonicity property of the allocation rule.
Assume, by the induction hypothesis, that $x_i(c_i,j) \le x_i(c_i^-,j)$; we need to prove that $x_i(c_i,j+1) \le x_i(c_i^-,j+1)$. Without loss of generality, we consider $x_i(c_i,j) = x_i(c_i^-,j)$; otherwise the condition is trivially satisfied. In this case, we will show that $x_m(c_i,j) = x_m(c_i^-,j)\ \forall m$. Note that $x_m$ depends on the bids of all the agents; since the costs of the other agents and the capacities of all the agents are held fixed, we drop this dependence for notational convenience. Let $x_*(c_i,j)$ denote the number of units that are not procured from agent $i$ until $j$ units, i.e. $x_*(c_i,j) = j - x_i(c_i,j)$. We will prove that for any two units $j$, $j'$: $$\begin{aligned} x_*(c_i,j) = x_*(c_i^-,j') \implies x_m(c_i,j) = x_m(c_i^-,j')\ \forall m\ne i\end{aligned}$$ We prove the above statement using induction again. If $x_*(c_i,j)$ $= x_*(c_i^-,j') = 0$, all the items are procured from agent $i$ and the statement is clearly true. Thus, by the induction hypothesis, if $x_*(c_i,j) = x_*(c_i^-,j') = x_*$, then $x_m(c_i,j) = x_m(c_i^-,j')\ \forall m\ne i$. Now, suppose $x_*(c_i,j) = x_*(c_i^-,j') = x_* + 1$. Again by the induction hypothesis, there exist latest rounds $j_1 < j$ and $j'_1 < j'$ such that, $\forall m'\ne i$, $$\begin{aligned} x_*(c_i,j_1) = x_*(c_i^-,j_1') = x_* \implies x_{m'}(c_i,j_1) = x_{m'}(c_i^-,j'_1)\end{aligned}$$ Since $j_1$ and $j'_1$ are the latest such rounds, the units from $j_1+2$ to $j$ and from $j'_1+2$ to $j'$ are procured only from agent $i$; thus we need to prove that the allocation at rounds $j_1+1$ and $j'_1+1$ is the same under bids $c_i$ and $c_i^-$, respectively. Since agent $i$ is not allocated at these rounds, by property $2$ of the allocation rule, the condition is satisfied.
Thus, we have $x_i(c_i,j) = x_i(c_i^-,j) \implies x_*(c_i,j) = x_*(c_i^-,j) \implies x_m(c_i,j) = x_m(c_i^-,j)\ \forall m$. Since the reward realization is fixed, if the number of allocations to each agent is the same until the $j^{th}$ unit is procured, then by property $1$ of the allocation rule we have $x_i(c_i,j+1) \le x_i(c_i^-,j+1)$. The following theorem guarantees a transformation of any [Well-Behaved ]{}allocation rule into a stochastic BIC and IR mechanism. \[lemma:online\_payment\] For a [Well-Behaved ]{}allocation rule, there exists a transformation that produces the transformed allocation ($\tilde{x}$) and payment ($\tilde{t}$) such that the resulting mechanism $\mathcal{M} = (\tilde{x},\tilde{t})$ is stochastic BIC and IR. If we implement the following payment rule, then we get stochastic BIC by Theorem \[thm:online\_bic\]: $$\begin{aligned} \label{eq:truthfulness} T_i(\hat{c}_i,\hat{k}_i;s) = \hat{c}_iX_i(\hat{c}_i,\hat{k}_i;s) + \int_{\hat{c}_i}^{\overline{c}_i} X_i(z,\hat{k}_i;s)dz\; .\end{aligned}$$ The challenge here is to compute the integral, as the allocation is not known for bid profiles other than $\hat{c}$. The allocation therein depends on how the qualities are learnt. In order to compute this integral, we adopt a sampling procedure and transformation, similar to [@BABAIOFF10], that uses the following lemma. \[thm:babaioff10\] Let $\mathcal{F}:I \rightarrow [0,1]$ be any strictly increasing function that is differentiable and satisfies $\inf_{z \in I}\mathcal{F}(z) = 0$ and $\sup_{z \in I}\mathcal{F}(z)=1$. If $Y$ is a random variable with cumulative distribution function $\mathcal{F}$, then $$\begin{aligned} \label{eqn:est_integral} \int_{I}g(z)dz = \mathbb{E}\bigg[\frac{g(Y)}{\mathcal{F}'(Y)}\bigg]\; .\end{aligned}$$ Our self-resampling procedure, given in Algorithm \[alg:resampling\], returns vectors $\alpha, \beta$ based on the input bids. These vectors are then used to compute the allocation and payment.
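The identity in Lemma \[thm:babaioff10\] can be checked numerically. With $\mathcal{F}$ the uniform CDF on $I=[a,b]$ (the case produced by the self-resampling step), $\mathcal{F}'(z) = 1/(b-a)$, so $g(Y)/\mathcal{F}'(Y) = (b-a)\,g(Y)$ is an unbiased estimate of $\int_I g(z)\,dz$. A minimal sketch with hypothetical names:

```python
# Monte Carlo check of the unbiased-integral identity of the lemma:
#   int_I g(z) dz = E[ g(Y) / F'(Y) ],   Y ~ F.
# Here F is the uniform CDF on I = [a, b], so F'(z) = 1/(b - a).
import random

def estimate_integral(g, a, b, n=200_000, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        y = rng.uniform(a, b)          # draw Y ~ Uniform[a, b]
        total += g(y) * (b - a)        # g(Y) / F'(Y), with F'(y) = 1/(b - a)
    return total / n

# Example: g(z) = z^2 on [0, 1]; the exact integral is 1/3.
est = estimate_integral(lambda z: z * z, 0.0, 1.0)
assert abs(est - 1.0 / 3.0) < 5e-3
```

This is exactly why the payment rule below only needs a single resampled bid per agent: averaged over the randomization, the resampling term reconstructs the integral in \[eq:truthfulness\] without evaluating the allocation on all counterfactual bids.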
**With probability** $(1-\mu)$: $\alpha_i \leftarrow \hat{c}_i$, $\beta_i \leftarrow \hat{c}_i$.

**With probability** $\mu$: pick $\hat{c}_i' \in [\hat{c}_i,\overline{c}_i]$ uniformly at random \[alg-step\]; set $\alpha_i \leftarrow$ Recursive($\hat{c}_i'$), $\beta_i \leftarrow \hat{c}_i'$.

**function** Recursive($\hat{c}_i$): **with probability** $(1-\mu)$, return $\hat{c}_i$; **with probability** $\mu$, pick $\hat{c}_i' \in [\hat{c}_i,\overline{c}_i]$ uniformly at random and return Recursive($\hat{c}_i'$).

In order to compute the integral, the procedure must satisfy certain properties, described in Lemma \[lemma:prop\_resampling\]. \[lemma:prop\_resampling\] The procedure in \[alg:resampling\] satisfies the following properties $\forall i \in N$: 1. [$\alpha_i(\hat{c}_i)$ and $\beta_i(\hat{c}_i)$ are non-decreasing functions of $\hat{c}_i$]{}\[resampling:property1\] 2. [*(A)* With probability $(1-\mu)$, $\alpha_i(\hat{c}_i) = \beta_i(\hat{c}_i) = \hat{c}_i$.\ *(B)* With probability $\mu$, $\overline{c}_i \ge \alpha_i(\hat{c}_i) \ge \beta_i(\hat{c}_i) > \hat{c}_i$]{}\[resampling:property2\] 3. [${\mathbb{P}}[\alpha_i(\hat{c}_i) > a_i|\beta_i(\hat{c}_i) = \hat{c}_i'] = {\mathbb{P}}[\alpha_i(\hat{c}_i') > a_i]\;\ \forall a_i \ge \hat{c}_i'>\hat{c}_i$.]{}\[resampling:property3\] 4. [Function $\mathcal{F}(a_i,\hat{c}_i) = {\mathbb{P}}[\beta_i(\hat{c}_i) < a_i|\beta_i(\hat{c}_i) > \hat{c}_i] = \frac{a_i-\hat{c}_i}{\overline{c}_i-\hat{c}_i}$.]{}\[resampling:property4\] Properties \[resampling:property1\] and \[resampling:property2\] are immediate from the algorithm. If $\beta_i(\hat{c}_i) = \hat{c}_i' > \hat{c}_i$, it means the algorithm followed line \[alg-step\] of \[alg:resampling\], and thus property \[resampling:property3\] follows.
Property \[resampling:property4\] follows from the fact that the distribution of $\beta_i(\hat{c}_i)$ is uniform on the interval $[\hat{c}_i,\overline{c}_i]$ conditional on the event $\beta_i(\hat{c}_i) > \hat{c}_i$. The algorithm that outputs the transformed allocation and the payment is described in Algorithm \[alg:transformation\]. Obtain modified bids as $(\alpha,\beta) = ((\alpha_1(\hat{c}_1),\beta_1(\hat{c}_1)), (\alpha_2(\hat{c}_2),\beta_2(\hat{c}_2)),\ldots,(\alpha_n(\hat{c}_n),\beta_n(\hat{c}_n)))$. Allocate according to $\tilde{x}(\hat{c},\hat{k}) = x(\alpha(\hat{c}),\hat{k})$. Make payment to each agent $i$, $\tilde{t}_i(\hat{c},\hat{k}) = \hat{c}_i\tilde{x}_i(\hat{c},\hat{k}) + P_i$, where, $$P_i = \begin{cases} \frac{1}{\mu}\frac{x_i(\alpha(\hat{c}),\hat{k})}{\mathcal{F}_i'(\beta_i(\hat{c}_i),\hat{c}_i)},\ \text{if} \displaystyle \beta_i(\hat{c}_i) > \hat{c}_i\\ 0,\ \text{otherwise.} \end{cases}$$ [Theorem]{}[\[lemma:online\_payment\]]{} We will prove that the transformed mechanism in Algorithm \[alg:transformation\] satisfies all the properties in Theorem \[thm:online\_bic\] when the input allocation rule is well-behaved, and is thus stochastic BIC and IR. The transformed allocation and payment rules are denoted by $\tilde{x}$ and $\tilde{t}$ respectively. We denote by $\tilde{X}_i(\hat{c}_i,\hat{k}_i;s)$ $= \mathbb{E}_{b_{-i},\alpha}[x_i(\alpha(\hat{c}),\hat{k};s)]$ the expected allocation, with the expectation taken over the randomization of the algorithm and the bid profile of the other agents. Similarly, we denote $\tilde{T}_i(\hat{c}_i,\hat{k}_i;s)$ $ = \mathbb{E}_{b_{-i},\alpha,\beta}[t_i(\alpha(\hat{c}),\beta,\hat{k};s)]$. For all reward realizations $s$, we will prove two properties: (1) the allocation rule $\tilde{X}$ is monotone in terms of costs, and (2) the expected payment rule $\tilde{T}$ satisfies \[eq:truthfulness\].
The monotonicity of the allocation rule $\tilde{X}$ follows from the monotonicity of $x$ (Lemma \[lemma:online\_monotone\]) and the monotonicity property \[resampling:property1\] of Algorithm \[alg:resampling\] (Property 1, Lemma \[lemma:prop\_resampling\]). We now prove that $\mathbb{E}_{b_{-i},\alpha,\beta}[P_i] = \int_{\hat{c}_i}^{\overline{c}_i}\tilde{X}_i(z,\hat{k}_i;s)dz$, where the expectation is taken over the bids of the other players as well as over the randomization of the resampling procedure (Algorithm \[alg:resampling\]). [$$\begin{aligned} &\mathbb{E}_{b_{-i},\alpha,\beta}[P_i] \\ &= \mathbb{E}_{\beta_i}\mathbb{E}_{b_{-i},\alpha|\beta_i}[P_i] \tag*{($P_i$ does not depend on $\beta_{-i}$)}\\ &= \mathbb{P}(\beta_i > \hat{c}_i)\mathbb{E}_{\beta_i|\beta_i > \hat{c}_i}\mathbb{E}_{b_{-i},\alpha|\beta_i}[P_i] \tag*{($P_i=0$ if $\beta_i = \hat{c}_i$)}\\ &= \mu\mathbb{E}_{\beta_i|\beta_i > \hat{c}_i}\mathbb{E}_{b_{-i},\alpha|\beta_i}\bigg[\frac{x_i(\alpha(\hat{c}),\hat{k};s)}{\mu \mathcal{F}_i'(\beta_i(\hat{c}_i),\hat{c}_i)}\bigg] \tag*{(Property 2 of Lemma \ref{lemma:prop_resampling})}\\ &= \mathbb{E}_{\beta_i|\beta_i > \hat{c}_i}\frac{1}{\mathcal{F}_i'(\beta_i,\hat{c}_i)}\mathbb{E}_{b_{-i},\alpha}[x_i(\alpha_i(\beta_i),\alpha_{-i}(\hat{c}_{-i}),\hat{k};s)]\tag*{(Property 3 of Lemma \ref{lemma:prop_resampling})}\\ &=\mathbb{E}_{\beta_i|\beta_i > \hat{c}_i}\frac{\tilde{X}_i(\beta_i,\hat{k}_i;s)}{\mathcal{F}_i'(\beta_i,\hat{c}_i)}\\ &= \int_{\hat{c}_i}^{\overline{c}_i}\tilde{X}_i(z,\hat{k}_i;s)dz \tag*{(Lemma \ref{thm:babaioff10})}\end{aligned}$$ ]{} We also have, $$\begin{aligned} \rho_i(\overline{c}_i,\hat{k}_i;s) &= \tilde{T}_i(\overline{c}_i,\hat{k}_i;s) - \overline{c}_i\tilde{X}_i(\overline{c}_i,\hat{k}_i;s) \tag*{(\cref{eq:rho_utility})}\\ &=\overline{c}_i\tilde{X}_i(\overline{c}_i,\hat{k}_i;s) +\int_{\overline{c}_i}^{\overline{c}_i}\tilde{X}_i(z,\hat{k}_i;s)dz- \overline{c}_i\tilde{X}_i(\overline{c}_i,\hat{k}_i;s)\\ &=0\end{aligned}$$ Thus, $\rho_{i}(\hat{c}_i,\hat{k}_i;s) = 
\rho_{i}(\bar{c}_i,\hat{k}_i;s) + \int_{\hat{c}_i}^{\bar{c}_i}\tilde{X}_i(z,\hat{k}_i;s)dz$. Since the allocation rule is monotone in capacity, $\rho_{i}(b_i;s)$ is non-negative and non-decreasing in $\hat{k}_i$, $\forall s$ and $\forall\hat{c}_i \in [\underline{c}_i,\bar{c}_i]$. [2D-UCB]{}: A Learning Mechanism -------------------------------- With the necessary machinery established, we now present the [2D-UCB]{} learning mechanism. Mechanism [2D-UCB]{} procures one unit at a time, learns the quality, and makes the allocation similarly to [2D-OPT]{} on the basis of the qualities learnt so far. The payment is computed with the help of the transformed mechanism given in Algorithm \[alg:transformation\]. $\forall i \in N$, $\hat{q}_i^+ = 1$, $\hat{q}_i^- = 0$, $n_i = 1$ Obtain modified bids as $(\alpha,\beta)$\ $= ((\alpha_1(\hat{c}_1),\beta_1(\hat{c}_1)), \ldots,(\alpha_n(\hat{c}_n),\beta_n(\hat{c}_n)))$ using \[alg:resampling\] Allocate one unit to all agents and estimate the empirical quality $\hat{q}$ $\hat{q}_i = \tilde{q}_i(i)/n_{i}$, $\hat{q}_i^+ = \hat{q}_i + \sqrt{\frac{1}{2n_{i}} \ln(t)}$ Make payment to each agent $i$, $\tilde{T}_i = \hat{c}_in_i + P_i$, where, $$\begin{aligned} P_i = \begin{cases} \frac{1}{\mu}n_i(\overline{c}_i-\hat{c}_i),\ \text{if} \displaystyle \beta_i > \hat{c}_i\\ 0,\ \text{otherwise.} \end{cases}\end{aligned}$$ \[thm:chucb\_bic\] [2D-UCB]{} is stochastic BIC and IR. We first prove that the allocation rule produced by the [2D-UCB]{} mechanism is well-behaved. At each time, the mechanism allocates the unit to an agent with the highest value of $\hat{G}_i$. The value of $\hat{G}_i$ depends only on the qualities learnt so far. It is monotone in terms of cost due to the regularity assumption and the monotonicity property discussed earlier. Thus property $1$ of a well-behaved allocation rule is satisfied. If an agent reduces his capacity then he might lose an allocation, since no agent is allocated more than his bid capacity, thus satisfying property $3$.
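The allocation step of [2D-UCB]{} can be sketched as follows. The UCB term $\hat{q}_i^+$ follows the display above; the concrete index $\hat{G}_i = R\hat{q}_i^+ - H_i(\hat{c}_i)$ with virtual cost $H_i = 2\hat{c}_i$ is our assumption for illustration (the precise definition of $\hat{G}_i$ lies outside this excerpt), as is the dictionary encoding of agents:

```python
import math

def ucb_quality(successes, n_i, t):
    """UCB estimate q_i^+ = q_i + sqrt(ln(t) / (2 n_i))."""
    return successes / n_i + math.sqrt(math.log(t) / (2.0 * n_i))

def next_allocation(agents, t, reward=30.0):
    """Allocate the next unit to the agent with the highest index G_i
    among those with remaining capacity.

    The index form R*q_i^+ - 2*c_i is our hypothetical stand-in for the
    paper's G_i (with H_i = 2 c_i, as for uniform costs on [0, 1]).
    Returns the chosen agent's position, or None if all are at capacity.
    """
    best, best_idx = None, -math.inf
    for i, a in enumerate(agents):
        if a["allocated"] >= a["capacity"]:
            continue                       # property 3: never exceed bid capacity
        g = reward * ucb_quality(a["successes"], a["n"], t) - 2.0 * a["cost"]
        if g > best_idx:
            best, best_idx = i, g
    return best
```

Because each index depends only on agent $i$'s own bid and observations, changing one agent's bid leaves the other indices unchanged, which is the IIA property used in the proof below.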
The allocation rule also satisfies property $2$ (IIA), since the allocation is made to the agent with the highest $\hat{G}_i$, and if agent $i$ changes his bid then it will not affect the $\hat{G}_j$ values of the other agents. Since the payment structure follows Algorithm \[alg:transformation\] and its conditions are also satisfied, the resulting mechanism is stochastic BIC and IR. Simulations {#sec:simulations} =========== In the preceding sections, we have presented a learning mechanism [2D-UCB]{}, which embeds [2D-OPT]{}. We have theoretically established the optimality of [2D-OPT]{} when the qualities of the agents are known. A detailed regret analysis of our learning mechanism [2D-UCB]{} would be quite involved and forms an interesting future direction. We instead evaluate the performance of our learning mechanism via simulations. In the simulations, we compare the expected utility per unit given by [2D-UCB]{} against the optimal benchmark [2D-OPT]{}, which is fully aware of the underlying quality. Another good benchmark to compare against is an $\varepsilon$-separated mechanism. An $\varepsilon$-separated mechanism allocates $\varepsilon L$ units to all the agents irrespective of their bids. Based on the observed realizations, the qualities learnt in these rounds are used to find the allocation and payments in the $(1-\varepsilon)L$ future rounds using [2D-OPT]{}, and the qualities are not updated further. It is easy to verify that an $\varepsilon$-separated mechanism is BIC and IR. For the simulations, the number of units of the item ($L$), which the auctioneer wishes to procure, is chosen at first as $10^3$ and subsequently at nine other linearly spaced steps from $10^3$ to $10^5$. We choose a pool of five agents ($N$). A unit procured from an agent $i$ yields a Bernoulli reward with mean $q_i$ drawn uniformly from the interval $[0.5,1]$. The private types of the agents are independently distributed and the costs are drawn uniformly from $[0,1]$.
The cost and capacity are chosen to be independently distributed, and therefore the setup meets regularity. The capacity is a positive integer drawn with equal probability from a range whose upper limit is $L$ and whose lower limit is large enough to meet the uniform exploration requirement. For this type distribution, a simple computation shows that the virtual cost function for an agent $i$ is $H_i = 2c_i$. For the $\varepsilon$-separated mechanisms, we choose the number of exploration rounds as $\{L^{1/6},L^{1/3},L^{1/2},L^{2/3}\}$. A Bernoulli reward of 1 for a procured instance yields a reward of $R=30$ to the auctioneer. The performance measure used is the expected average utility per unit obtained by the auctioneer, plotted as a function of the number of units. To estimate the expected average utility, 200 independent samples are drawn from the type distribution; for each such sample the number of units required to be procured is varied; at each value of $L$, multiple instances ($100$) of the reward realization are drawn from the true underlying quality. As $L$ is varied, the capacity is suitably scaled, yielding a constant average utility for the benchmark as shown in Figure \[fig:avgutility\]. We choose $\mu=0.1$ for [2D-UCB]{}. ![Comparative study of average utility per unit[]{data-label="fig:avgutility"}](fig-crop.pdf){width="2.8in"} The simulations indicate that all the mechanisms yield average utilities per unit which asymptotically converge to [2D-OPT]{}. The performance of [2D-UCB]{}, however, is superior in the sense that it approaches [2D-OPT]{} faster. Conclusion {#sec:conclusion} ========== We have studied a class of mechanisms which yield a stochastic reward to the auctioneer following an allocation to an agent. We have presented optimal learning mechanisms which truthfully elicit multiple private types. A corresponding welfare-maximizing version follows directly from the ideas presented in this paper.
It would be interesting to study a setting where the allocation is over a subset of agents rather than a single agent. A complete characterization of learning algorithms in this space is still open, as we have provided only sufficient conditions. A theoretical lower bound on the regret would also be of interest. [^1]: Note that this is a sufficient condition; the complete characterization is still open.
--- abstract: 'The mutually enriching relationship between graphs and matroids has motivated discoveries in both fields. In this paper, we exploit the similar relationship between embedded graphs and delta-matroids. There are well-known connections between geometric duals of plane graphs and duals of matroids. We obtain analogous connections for various types of duality in the literature for graphs in surfaces of higher genus and delta-matroids. Using this interplay, we establish a rough structure theorem for delta-matroids that are twists of matroids, we translate Petrie duality on ribbon graphs to loop complementation on delta-matroids, and we prove that ribbon graph polynomials, such as the Penrose polynomial, the characteristic polynomial, and the transition polynomial, are in fact delta-matroidal. We also express the Penrose polynomial as a sum of characteristic polynomials.' author: - Carolyn Chun - Iain Moffatt - 'Steven D. Noble' - 'Ralf Rueckriemen[^1]' title: 'On the interplay between embedded graphs and delta-matroids' --- [Mathematics Subject Classification: 05B35 (primary), 05C10, 05C31, 05C83 (secondary)]{} Overview ======== Graph theory and matroid theory are mutually enriching. As reported in [@Oxley01], W. Tutte famously observed that “If a theorem about graphs can be expressed in terms of edges and circuits alone it probably exemplifies a more general theorem about matroids”. In [@CMNR] we proposed that a similar claim holds true for topological graph theory and delta-matroid theory, namely that “If a theorem about embedded graphs can be expressed in terms of its spanning quasi-trees then it probably exemplifies a more general theorem about delta-matroids”.
In that paper we provided evidence for this claim by showing that, just as with graph and matroid theory, many fundamental definitions and results in topological graph theory and delta-matroid theory are compatible with each other (in the sense that they canonically translate from one setting to the other). A significant consequence of this connection is that the geometric ideas of topological graph theory provide insight and intuition into the structure of delta-matroids, thus pushing forward the development of both areas. Here we provide further support for our claim above by presenting results on delta-matroids that are inspired by recent research on ribbon graphs. We are principally concerned with duality, which for delta-matroids is a much richer and more varied notion than for matroids. The concepts of duality for plane and planar graphs, and for graphic matroids, are intimately connected: the dual of the matroid of an embedded graph corresponds to the matroid of the dual graph (i.e., $M(G)^*=M(G^*)$) if and only if the graph is plane. Moreover, the dual of the matroid of a graph is graphic if and only if the graph is planar. The purpose of this paper is to extend these fundamental graph duality–matroid duality relationships from graphs in the plane to graphs embedded on higher genus surfaces. To achieve this requires us to move from matroids to the more general setting of delta-matroids. Moving beyond plane and planar graphs opens the door to the various notions of the “dual” of an embedded graph that appear in the topological graph theory literature. Here we consider the examples of Petrie duals and direct derivatives [@Wil79], partial duals [@Ch1] and twisted duals [@EMM]. We will see that these duals are compatible with existing constructions in delta-matroid theory, including twists [@ab1], and loop complementation [@BH11].
We take advantage of the geometrical insights provided by topological graph theory to deduce and prove new structural results on delta-matroids and on their polynomial invariants. Throughout the paper we emphasise the interaction and compatibility between delta-matroid theory and topological graph theory. Much of the very recent work on delta-matroids appears in a series of papers by R. Brijder and H. Hoogeboom [@BH11; @BH13; @BHpre2; @BH12], who were originally motivated by an application to gene assembly in single-celled organisms known as ciliates. Their study of the effect of the principal pivot transform on symmetric binary matrices led them to the study of binary delta-matroids. As we will see, the fundamental connections made possible by the abstraction to delta-matroids allow us to view notions of duality in the setting of symmetric binary matrices and the apparently unconnected setting of ribbon graphs as exactly the same thing. The structure of this paper is as follows. We begin by reviewing delta-matroids, embedded graphs and their various types of duality, and the connection between delta-matroids and embedded graphs. In Section \[3rd\] we use the geometric perspectives offered by topological graph theory to present a rough structure theorem for the class of delta-matroids that are twists of matroids. We give some applications to Eulerian matroids, extending a result of D. Welsh [@We69]. In Section \[4th\], we show that Petrie duality is the analogue of a more general delta-matroid operation, namely loop complementation. We show that a group action on delta-matroids due to R. Brijder and H. Hoogeboom [@BH11] is the analogue of twisted duality from J. Ellis-Monaghan and I. Moffatt [@EMM]. We apply the insights provided by this connection to give a number of structural results about delta-matroids. In Section \[5th\] we apply our results to graph and matroid polynomials.
We show that the Penrose polynomial and the transition polynomial [@Ai97; @EMM11a; @Ja90; @Pen71] are delta-matroidal, in the sense that they are determined (up to a simple pre-factor) by the delta-matroid of the underlying ribbon graph, and are compatible with R. Brijder and H. Hoogeboom’s Penrose and transition polynomials of [@BH13]. We relate the Bollobás–Riordan and Penrose polynomials to the transition polynomial and find recursive definitions of these polynomials. Finally, we give a surprising expression for the Penrose polynomial of a vf-safe delta-matroid in terms of the characteristic polynomial: $P(D;\lambda) = \sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)} ;\lambda)$. Throughout the paper we emphasise the interaction and compatibility between delta-matroids and ribbon graphs. We provide evidence that this new perspective offered by topological graph theory enables significant advances to be made in the theory of delta-matroids. Background on delta-matroids ============================ Delta-matroids -------------- A *set system* is a pair $D=(E,{\mathcal{F}})$ where $E$ is a non-empty finite set, which we call the *ground set*, and $\mathcal{F}$ is a collection of subsets of $E$, called *feasible sets*. We define $E(D)$ to be $E$ and $\mathcal{F}(D)$ to be $\mathcal{F}$. A set system is *proper* if $\mathcal{F}$ is not empty. For sets $X$ and $Y$, we denote the operation of *symmetric difference* by $X\bigtriangleup Y$, which is equal to $(X\cup Y)-(X\cap Y)$. \[sea\] Given a set system $D=(E,{\mathcal{F}})$, for all $X$ and $Y$ in $\mathcal{F}$, if there is an element $u\in X\bigtriangleup Y$, then there is an element $v\in X\bigtriangleup Y$ such that $X\bigtriangleup \{u,v\}$ is in $\mathcal{F}$. Note that we allow $v=u$ in Axiom \[sea\]. A *delta-matroid* is a proper set system $(E,\mathcal{F})$ that satisfies Axiom \[sea\]. These structures were first studied by Bouchet in [@ab1].
If all of the feasible sets of a delta-matroid are equicardinal, then the delta-matroid is a *matroid* and we refer to its feasible sets as its *bases*. If a set system forms a matroid $M$, then we usually denote $M$ by $(E,\mathcal{B})$, where we define $E(M)$ to be $E$ and $\mathcal{B}(M)$ to be $\mathcal{B}$, the collection of bases of $M$. Every subset of every basis is an *independent set*. For a set $A\subseteq E(M)$, the *rank of $A$*, written $r_M(A)$, or simply $r(A)$ when the matroid is clear, is the size of the largest intersection of $A$ with a basis of $M$. For a delta-matroid $D=(E,\mathcal{F})$, let $\mathcal{F}_{\max}(D)$ and $\mathcal{F}_{\min}(D)$ be the set of feasible sets with maximum and minimum cardinality, respectively. We will usually omit $D$ when the context is clear. Let $D_{\max}:=(E,\mathcal{F}_{\max})$ and let $D_{\min}:=(E,\mathcal{F}_{\min})$. Then $D_{\max}$ is the *upper matroid* and $D_{\min}$ is the *lower matroid* for $D$. These were defined by A. Bouchet in [@ab2]. It is straightforward to show that the upper matroid and the lower matroid are indeed matroids. If the sizes of the feasible sets of a delta-matroid all have the same parity, then we say that it is *even*, otherwise it is *odd*. For a delta-matroid $D=(E,\mathcal{F})$, and $e\in E$, if $e$ is in every feasible set of $D$, then we say that $e$ is a *coloop of $D$*. If $e$ is in no feasible set of $D$, then we say that $e$ is a *loop of $D$*. If $e$ is not a coloop, then we define $D$ *delete* $e$, written $D{\backslash}e$, to be $(E-e, \{F : F\in \mathcal{F}\text{ and } F\subseteq E-e\})$. If $e$ is not a loop, then we define $D$ *contract* $e$, written $D/e$, to be $(E-e, \{F-e : F\in \mathcal{F}\text{ and } e\in F\})$. If $e$ is a loop or coloop, then we set $D/e=D{\backslash}e$. Both $D{\backslash}e$ and $D/e$ are delta-matroids (see [@BD91]). 
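For small ground sets, the symmetric exchange axiom (Axiom \[sea\]) can be verified by brute force. A minimal sketch, with the function name being ours:

```python
def is_delta_matroid(feasible):
    """Check Axiom [sea]: for all feasible X, Y and every u in the
    symmetric difference X ^ Y, there must exist some v in X ^ Y
    (v = u is allowed) such that X ^ {u, v} is feasible."""
    F = {frozenset(f) for f in feasible}
    if not F:
        return False         # a delta-matroid is a proper set system
    for X in F:
        for Y in F:
            for u in X ^ Y:  # ^ is symmetric difference on Python sets
                if not any(X ^ {u, v} in F for v in X ^ Y):
                    return False
    return True
```

For instance, $(\{a,b,c\},\{\emptyset,\{a,b,c\}\})$ fails the axiom: for $X=\emptyset$, $Y=\{a,b,c\}$ and $u=a$, no choice of $v$ yields a feasible set.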
If $D'$ is a delta-matroid obtained from $D$ by a sequence of deletions and contractions, then $D'$ is independent of the order of the deletions and contractions used in its construction (see [@BD91]). Any delta-matroid obtained from $D$ in such a way is called a *minor* of $D$. If $D'$ is a minor of $D$ formed by deleting the elements of $X$ and contracting the elements of $Y$ then we write $D'=D\setminus X/Y$. The *restriction* of $D$ to a subset $A$ of $E$, written $D|A$, is equal to $D{\backslash}(E-A)$. Twists are one of the fundamental operations of delta-matroid theory. Let $D=(E,{\mathcal{F}})$ be a set system. For $A\subseteq E$, the *twist* of $D$ with respect to $A$, denoted by $D* A$, is given by $(E,\{A\bigtriangleup X: X\in \mathcal{F}\})$. The *dual* of $D$, written $D^*$, is equal to $D*E$. It follows easily from the identity $(F'_1{\bigtriangleup}A){\bigtriangleup}(F'_2{\bigtriangleup}A)=F'_1{\bigtriangleup}F'_2$ that the twist of a delta-matroid is also a delta-matroid, as Bouchet showed in [@ab1]. However, if $D$ is a matroid, then $D*A$ need not be a matroid. Note that a coloop or loop of $D$ is a loop or coloop, respectively, of $D^*$. For delta-matroids (or matroids) $D_1=(E_1,\mathcal{F}_1)$ and $D_2=(E_2,\mathcal{F}_2)$, where $E_1$ is disjoint from $E_2$, the *direct sum of $D_1$ and $D_2$*, defined in [@geelen] and written $D_1\oplus D_2$, is constructed by $$D_1\oplus D_2:=(E_1\cup E_2,\{F_1\cup F_2: F_1\in \mathcal{F}_1\text{ and } F_2\in\mathcal{F}_2\}).$$ If $D=D_1\oplus D_2$, for some $D_1,D_2$, we say that $D$ is *disconnected* and that $E_1$ and $E_2$ are *separating*. $D$ is *connected* if it is not disconnected. Embedded graphs --------------- We will describe embedded graphs as ribbon graphs. It is well-known that ribbon graphs are alternative descriptions of cellularly embedded graphs (see for example the books [@GT87; @EMMbook] for details), and so they are the main objects of study in topological graph theory.
A *ribbon graph* $G =\left( V(G),E(G) \right)$ is a surface with boundary, represented as the union of two sets of discs: a set $V(G)$ of *vertices* and a set $E(G)$ of *edges* with the following properties. (1) The vertices and edges intersect in disjoint line segments. (2) Each such line segment lies on the boundary of precisely one vertex and precisely one edge. (3) Every edge contains exactly two such line segments. See Figure \[f1\] for an example. Two ribbon graphs are *equivalent* if there is a homeomorphism (which should be orientation preserving if the ribbon graphs are orientable) from one to the other that preserves the vertex-edge structure, adjacency, and cyclic ordering of the half-edges at each vertex. (Ribbon graphs are equivalent if and only if they describe equivalent cellularly embedded graphs.) A ribbon graph is *orientable* if it is an orientable surface, and is *non-orientable* otherwise. Its *genus* is its genus as a surface, and we say it is *plane* if it is of genus zero (thus here we allow disconnected plane ribbon graphs). If $A\subseteq E$, then $G\backslash A$ is the *ribbon subgraph* of $G=(V,E)$ obtained by *deleting* the edges in $A$. We use $G{\backslash}e$ to denote $G{\backslash}\{e\}$. The *spanning subgraph* of $G$ on $A$ is $(V,A)= G\backslash A^c$. (We will frequently use the notational shorthand $A^c:= E-A$ in the context of graphs, ribbon graphs, matroids and delta-matroids.) For edge contraction, let $e$ be an edge of $G$ and $u$ and $v$ be its incident vertices, which are not necessarily distinct. Then $G/e$ denotes the ribbon graph obtained as follows: consider the boundary component(s) of $e\cup u \cup v$ as curves on $G$. For each resulting curve, attach a disc, which will form a vertex of $G/e$, by identifying its boundary component with the curve. Delete the interiors of $e$, $u$ and $v$ from the resulting complex. We say that $G/e$ is obtained from $G$ by *contracting* $e$.
If $A\subseteq E$, $G/A$ denotes the result of contracting all of the edges in $A$ (the order in which they are contracted does not matter). A discussion about why this is the natural definition of contraction for ribbon graphs can be found in [@EMMbook]. Note that contracting an edge in $G$ may change the number of vertices or orientability. A ribbon graph $H$ is a *minor* of a ribbon graph $G$ if $H$ is obtained from $G$ by a sequence of edge deletions, vertex deletions, and edge contractions. See Figure \[f1\] for an example. An edge in a ribbon graph is a *bridge* if its deletion increases the number of components of the ribbon graph. It is a *loop* if it is incident with only one vertex. A loop is a *non-orientable loop* if, together with its incident vertex, it is homeomorphic to a Möbius band, otherwise it is an *orientable loop*. Two cycles $C_1$ and $C_2$ in $G$ are said to be *interlaced* if there is a vertex $v$ such that $V(C_1)\cap V(C_2)=\{v\}$, and $C_1$ and $C_2$ are met in the cyclic order $C_1\,C_2\,C_1\,C_2$ when travelling round the boundary of the vertex $v$. A loop is *non-trivial* if it is interlaced with some cycle in $G$. Otherwise the loop is *trivial*. Our interest here is in various notions of duality from topological graph theory. A slower exposition of the constructions here can be found in, for example, [@CMNR; @EMMbook]. We start with S. Chmutov’s partial duals of [@Ch1]. Let $G$ be a ribbon graph and $A\subseteq E(G)$. The *partial dual* $G^{A}$ of $G$ is obtained as follows. Regard the boundary components of the spanning ribbon subgraph $(V(G),A)$ of $G$ as curves on the surface of $G$. Glue a disc to $G$ along each of these curves and remove the interior of all vertices of $G$. The resulting ribbon graph is the *partial dual* $G^{A}$. See Figure \[f1\] for an example. It is immediate from the definition (or see [@Ch1]) that $G^{\emptyset}=G$ and $G/e= G^{e}{\backslash}e$.
The *geometric dual* of $G$ can be defined by $G^*:=G^{E(G)}$. Next we consider the Petrie dual (also known as the Petrial), $G^{\times}$, of $G$ (see Wilson [@Wil79]). Let $G$ be a ribbon graph and $A\subseteq E(G)$. The *partial Petrial*, $G^{\tau(A)}$, of $G$ is the ribbon graph obtained from $G$ as follows: for each edge $e\in A$, choose one of the two arcs $(a,b)$ where $e$ meets a vertex, detach $e$ from the vertex along that arc, giving two copies of the arc $(a,b)$, then reattach it by gluing $(a,b)$ to the arc $(b,a)$, with the directions reversed. (Informally, this can be thought of as adding a half-twist to the edge $e$.) The *Petrie dual* of $G$ is $G^{\times}:=G^{\tau(E(G))}$. We often write $G^{\tau(e)}$ for $G^{\tau(\{e\})}$. See Figure \[f1\] for an example. The partial dual and partial Petrial operations together give rise to a group action on ribbon graphs and the concept of twisted duality from [@EMM]. Let $G$ be a ribbon graph and $A\subseteq E(G)$. In the context of twisted duality we will use $G^{\delta(A)}$ to denote the partial dual $G^{A}$ of $G$. Let $w=w_1w_2\cdots w_n$ be a word in the alphabet $\{\delta, \tau\}$. Then we define $G^{w(A)}:=(\cdots( G^{w_n(A)} )^{w_{n-1}(A)} \cdots )^{w_1(A)}$. Let $\mathfrak{G} := \langle \delta, \tau \mid \delta^2, \tau^2, (\delta\tau)^3\rangle$, which is just a presentation of the symmetric group of degree three. It was shown in [@EMM] that $\mathfrak{G}$ acts on the set $\mathcal{X} = \{ (G,A) : G\text{ a ribbon graph}, A\subseteq E(G) \}$ by $ g(G,A) := (G^{g(A)},A)$ for $g \in \mathfrak{G} $. Now suppose $G$ is a ribbon graph, $A,B\subseteq E(G)$, and $g,h\in \mathfrak{G}$. Define $G^{g(A)h(B)}:=\left(G^{g(A)}\right)^{h(B)}$. We say that two ribbon graphs $G$ and $H$ are [*twisted duals*]{} if there exist $A_1, \ldots ,A_n \subseteq E(G)$ and $g_1,\ldots, g_n \in \mathfrak{G}$ such that $ H=G^{ g_1(A_1) g_2(A_2)\cdots g_n(A_n) }$.
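Since $\mathfrak{G}=\langle \delta, \tau \mid \delta^2, \tau^2, (\delta\tau)^3\rangle$ is a presentation of $S_3$, equality of words in $\{\delta,\tau\}$ modulo the relations can be decided by mapping each word to a permutation. A minimal sketch; the labels `d` and `t` for $\delta$ and $\tau$ are ours:

```python
# delta and tau act as the two transpositions generating S_3
# (delta^2 = tau^2 = (delta tau)^3 = identity).
PERMS = {'d': (1, 0, 2), 't': (0, 2, 1)}
ID = (0, 1, 2)

def compose(p, q):
    """Composition p after q of permutations of {0, 1, 2}."""
    return tuple(p[q[i]] for i in range(3))

def word_to_perm(word):
    """Map a word in the alphabet {d, t} to its element of S_3.

    Two words in delta and tau represent the same element of the group
    iff they map to the same permutation."""
    perm = ID
    for ch in word:
        perm = compose(perm, PERMS[ch])
    return perm
```

For example, the relation $(\delta\tau)^3=1$ corresponds to `word_to_perm('dtdtdt')` being the identity, and $\delta\tau\delta=\tau\delta\tau$ gives the sixth group element (the third transposition).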
Observe that (1) If $A\cap B=\emptyset$, then $G^{g(A)h(B)}=G^{h(B)g(A)}$; (2) $ G^{g(A)} = (G^{g(e)})^{g(A\backslash e)}$; (3) $G^{g_1(A)}=G^{g_2(A)}$ if $g_1=g_2$ in the group $ \langle \delta, \tau \mid \delta^2, \tau^2, (\delta\tau)^3\rangle$. We note that Wilson’s direct derivatives and opposite operators from [@Wil79] result from restricting twisted duality to the whole edge set $E(G)$. Delta-matroids from ribbon graphs --------------------------------- We briefly review the interactions between delta-matroids and ribbon graphs discussed in [@CMNR] (proofs of all the results mentioned here can be found in this reference). Let $G=(V,E)$ be a graph or ribbon graph. The *graphic matroid* of $G$ is $M(G):=(E, \mathcal{B})$ where $\mathcal{B}$ consists of the edge sets of the spanning subgraphs of $G$ that form a spanning tree when restricted to each connected component of $G$. In terms of ribbon graphs, a tree can be characterised as a genus 0 ribbon graph with exactly one boundary component. Dropping the genus 0 requirement gives a quasi-tree: a *quasi-tree* is a ribbon graph with exactly one boundary component. Quasi-trees play the role of trees in ribbon graph theory, and replacing “tree” with “quasi-tree” in the definition of a graphic matroid results in a delta-matroid. Let $G$ be a ribbon graph, then the *delta-matroid of $G$* is $D(G):=(E,\mathcal{F})$ where $\mathcal{F}$ consists of the edge sets of the spanning ribbon subgraphs of $G$ that form a quasi-tree when restricted to each connected component of $G$. Let $G$ be the ribbon graph of Figure \[f1a\]. Then $D(G)$ has 20 feasible sets, 10 of which are $\{3,4,6 \}$, $\{3,4,7\}$, $\{3,5,6\}$, $\{3,5,7\}$, $\{4,5,6\}$, $\{4,5,7\}$, $\{3,4,5,6,7\}$, $\{3,4,6,7,8 \}$, $\{3,5,6,7,8\}$, $\{4,5,6,7,8\}$. The remaining 10 are obtained by taking the union of each of these with $\{1,2\}$. 
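The quasi-tree condition in the definition of $D(G)$ can be checked mechanically for small orientable ribbon graphs by counting boundary components: boundary walks are the cycles of the permutation $h \mapsto \sigma(\alpha(h))$, where $\alpha$ swaps the two half-edges of each edge and $\sigma$ is the rotation at each vertex restricted to the chosen edge subset. A sketch under the assumption of orientability (the encoding via half-edge labels is ours):

```python
from itertools import combinations

def boundary_components(rotation, edges, subset):
    """Count boundary components of the spanning ribbon subgraph (V, A).

    `rotation` maps each vertex to the cyclic order of half-edges around
    it; `edges` maps each edge label to its pair of half-edges.  Only
    orientable ribbon graphs are handled.  Isolated vertices contribute
    one boundary component each.
    """
    alpha = {}
    for e in subset:
        h1, h2 = edges[e]
        alpha[h1], alpha[h2] = h2, h1
    sigma = {}
    count = 0
    for v, order in rotation.items():
        kept = [h for h in order if h in alpha]
        if not kept:
            count += 1                  # isolated vertex: one boundary
            continue
        for i, h in enumerate(kept):
            sigma[h] = kept[(i + 1) % len(kept)]
    seen = set()
    for h in sigma:
        if h in seen:
            continue
        count += 1
        while h not in seen:            # trace one boundary walk
            seen.add(h)
            h = sigma[alpha[h]]
    return count

# One vertex with two interlaced orientable loops a, b (cyclic order a b a b):
rotation = {'v': ['a1', 'b1', 'a2', 'b2']}
edges = {'a': ('a1', 'a2'), 'b': ('b1', 'b2')}
feasible = [set(A) for r in range(3) for A in combinations('ab', r)
            if boundary_components(rotation, edges, set(A)) == 1]
```

For this connected one-vertex example the subsets with exactly one boundary component, and hence the feasible sets of $D(G)$, are $\emptyset$ and $\{a,b\}$: a single orientable loop on its own has two boundary components, while the two interlaced loops together form a quasi-tree of genus one.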
It can be checked that $ D(G/{\{3,8\}} {\backslash}\{1\}) = D(G)/{\{3,8\}} {\backslash}\{1\}$ and $D(G^{\{1,6,7\}}) = D(G) \ast \{1,6,7\}$. Fundamental delta-matroid operations and ribbon graph operations are compatible with each other, as in the following. \[t.compat\] Let $G=(V,E)$ be a ribbon graph. Then 1. \[t.compat.1\] $D(G)_{\min}=M(G)$ and $D(G)_{\max}=(M(G^*))^*$; 2. \[t.compat.2\] $D(G)=M(G)$ if and only if $G$ is a plane ribbon graph; 3. \[t.compat.3\] $D(G^A) = D(G) * A $, in particular $D(G^*)=D(G)^*$; 4. \[t.compat.4\] $D (G{\backslash}e)= D(G){\backslash}e$ and $D(G/e)=D(G)/e$, for each $e\in E$. The significance of Theorem \[t.compat\], as we will see, is that it provides the means to move between ribbon graphs and delta-matroids, giving new insights into the structure of delta-matroids. For notational simplicity in this paper we will take advantage of the following abuse of notation. For disconnected graphs, a standard abuse of notation is to say that $T$ is a spanning tree of $G$ if the components of $T$ are spanning trees of the components of $G$. We will say that $Q$ is a *spanning quasi-tree* of $G$ if the components of $Q$ are spanning quasi-trees of the components of $G$. Thus we can say that the feasible sets of $D(G)$ are the edge sets of the spanning quasi-trees of $G$. This abuse should cause no confusion. Twists of matroids {#3rd} ================== Twists provide a way to construct one delta-matroid from another. As the class of matroids is not closed under twists, it provides a way to construct delta-matroids from matroids. Twisting therefore provides a way to uncover the structure of delta-matroids by translating results from the much better developed field of matroid theory. For this reason, the class of delta-matroids that arise as twists of matroids is an important one. In this section we examine the structure of this class of delta-matroids.
In particular, we provide both an excluded minor characterisation and a rough structure theorem for this class. Of particular interest here is the way that we are led to the results: we use ribbon graph theory to guide us. Our results provide support for the claim in this paper and in [@CMNR] that ribbon graphs are to delta-matroids what graphs are to matroids. In order to understand the class of delta-matroids that are twists of matroids, we start by looking for the ribbon graph analogue of the class of delta-matroids that are twists of matroids. For this suppose that $G=(V,E)$ is a ribbon graph with delta-matroid $D=D(G)$. We wish to understand when $D$ is the twist of a matroid, that is, we want to determine if $D=M\ast A$ for some matroid $M=(E,\mathcal{B})$ and for some $A\subseteq E$. As twists are involutory, we can reformulate this problem as one of determining if $D\ast B =M$ for some matroid $M$ and some $B\subseteq E$. By Theorem \[t.compat\], $D\ast B = D(G)\ast B = D(G^B)$, but, by Theorem \[t.compat\], $D(G^B)$ is a matroid if and only if $G^B$ is a plane graph. Thus $D$ is a twist of a matroid if and only if $G$ is the partial dual of a plane graph. Thus to understand the class of delta-matroids that are twists of matroids we look towards the class of ribbon graphs that are partial duals of plane graphs. Fortunately, due to connections with knot theory (see [@Mo5]), this class of ribbon graphs has been fairly well studied with both a rough structure theorem and an excluded minor characterisation. The following tells us that it makes sense to look for an excluded minor characterisation for twists of matroids. \[pdminors\] The class of delta-matroids that are twists of matroids is minor-closed. We will show that, given a matroid $M$ and a subset $A$ of $E(M)$, if $D=(E,\mathcal{F})=M*A$ and $D'$ is a minor of $D$, then $D'=M'*A'$ for some minor $M'$ of $M$ and some subset $A'$ of $E(M')$.
If $e\notin A$ then $D\setminus e = (M*A)\setminus e = (M\setminus e) *A$ and $D/e = (M*A)/e = (M/e) *A$. On the other hand, if $e\in A$ then $D\setminus e = (M*A)\setminus e= (M/e)*(A-e)$ and $D/ e = (M*A)/ e= (M\setminus e)*(A-e)$. An excluded minor characterisation of partial duals of plane graphs appeared in [@Moprep]. It was shown there that a ribbon graph $G$ is a partial dual of a plane graph if and only if it contains no $G_0$-, $G_1$- or $G_2$-minor, where $G_0$ is the non-orientable ribbon graph on one vertex and one edge; $G_1$ is the orientable ribbon graph given by vertex set $\{1,2\}$, edge set $\{a,b,c\}$ with the incident edges at each vertex having the cyclic order $abc$, with respect to some orientation of $G_1$; and $G_2$ is the orientable ribbon graph given by vertex set $\{1\}$, edge set $\{a,b,c\}$ with the cyclic order $abcabc$ at the vertex. (The result in [@Moprep] was stated for the class of ribbon graphs that present link diagrams, but this coincides with the class of partial duals of plane graphs.) For the delta-matroid analogue of the ribbon graph result set: - $X_0:=D(G_0)=(\{a\} ,\{ \emptyset, \{a\}\} )$; - $X_1:=D(G_1)=(\{a,b,c\},\{\{a\},\{b\},\{c\},\{a,b,c\}\})$; - $X_2:=D(G_2)=(\{a,b,c\},\{\{a,b\},\{a,c\},\{b,c\},\emptyset\})$. Note that every twist of $X_0$ is isomorphic to $X_0$ and that every twist of $X_1$ or $X_2$ is isomorphic to either $X_1$ or $X_2$. In particular, $X_1=X_2^*$. Then translating the ribbon graph result into delta-matroids suggests that $X_0$, $X_1$ and $X_2$ should form the set of excluded minors for the class of delta-matroids that are twists of matroids. Previously A. Duchamp [@adfund] had shown, but not explicitly stated, that $X_1$ and $X_2$ are the excluded minors for the class of even delta-matroids that are twists of matroids. \[expdm\] A delta-matroid $D=(E,\mathcal{F})$ is the twist of a matroid if and only if it does not have a minor isomorphic to $X_0$, $X_1$, or $X_2$.
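The excluded minors above are small enough to experiment with directly. The following sketch (ours, not from the paper) encodes a set system as a collection of frozensets and implements the twist $D\ast A$ as a symmetric difference of each feasible set with $A$; it verifies the stated facts that $X_1=X_2^*$, that single-element twists exchange $X_1$ and $X_2$, and that every twist of $X_0$ is isomorphic to $X_0$.

```python
def twist(feasible, A):
    """Feasible sets of the twist D*A: {F symmetric-difference A : F feasible in D}."""
    A = frozenset(A)
    return frozenset(frozenset(F) ^ A for F in feasible)

fs = lambda *xs: frozenset(frozenset(x) for x in xs)

X0 = fs((), ("a",))
X1 = fs(("a",), ("b",), ("c",), ("a", "b", "c"))
X2 = fs(("a", "b"), ("a", "c"), ("b", "c"), ())
E = frozenset({"a", "b", "c"})

assert twist(X2, E) == X1        # X1 = X2^*: the twist by the whole ground set
assert twist(X1, {"a"}) == X2    # a single-element twist of X1 gives X2
assert twist(X0, {"a"}) == X0    # every twist of X0 is isomorphic to X0
```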
Take a matroid $M$ and a subset $A$ of $E(M)$. As $|B|=r(M)$ for each $B\in\mathcal{B}(M)$, we know that the sizes of the feasible sets of $M*A$ will all be even if $r(M)$ and $|A|$ have the same parity, otherwise they will all be odd. Thus $M*A$ is an even delta-matroid, and $X_0$ is obviously the unique excluded minor for the class of even delta-matroids. An application of [@adfund Propositions 1.1 and 1.5] then gives that $X_0$, $X_1$, $X_2$ is the complete list of the excluded minors of twists of matroids. We now look for a rough structure theorem for delta-matroids that are twists of matroids. Again we proceed via ribbon graph theory, starting with a rough structure theorem for the class of ribbon graphs that are partial duals of plane graphs, translating it into the language of delta-matroids, then giving a proof of the delta-matroid result. A vertex $v$ of a ribbon graph $G$ is a *separating vertex* if there are non-trivial ribbon subgraphs $P$ and $Q$ of $G$ such that $(V(G),E(G))=(V(P)\cup V(Q),E(P)\cup E(Q))$, $E(P)\cap E(Q)=\emptyset$ and $V(P)\cap V(Q)=\{v\}$. In this case we write $G=P\oplus Q$. Let $A\subseteq E(G)$. Then we say that $A$ defines a *plane-biseparation* of $G$ if all of the components of $G{\backslash}A$ and $G{\backslash}(E(G)-A)$ are plane and every vertex of $G$ that is incident with edges in $A$ and edges in $E(G)-A$ is a separating vertex of $G$. For the ribbon graph $G$ of Figure \[f1a\], $v$ is a separating vertex, with $P$ and $Q$ the subgraphs induced by the edges $1, \ldots, 5$ and by $6,7,8$. $G$ admits plane-biseparations. The edge sets $\{1,6,7\}$, $\{2,6,7\}$, $\{2,3,4,5,8\}$, and $\{1,3,4,5,8\}$ are exactly those that define plane-biseparations. $G^A$ is plane if and only if $A$ is one of these four sets. In [@Mo5], the following rough structure theorem was given: \[rgppd\] Let $G$ be a ribbon graph.
Then the partial dual $G^A$ is a plane graph if and only if $A$ defines a plane-biseparation of $G$. Thus we need to translate a plane-biseparation into the language of delta-matroids. Since, by Theorem \[t.compat\], a ribbon graph $G$ is plane if and only if $D=D(G)$ is a matroid, the requirement that $G{\backslash}A$ and $G{\backslash}(E(G)-A)$ are plane translates to $D{\backslash}A$ and $D{\backslash}(E-A)$ being matroids. For the analogue of separability we make the following definition. Let $D=(E,\mathcal{F})$ be a delta-matroid. Then $D$ is *separable* if $D_{\min}$ is disconnected. It was shown in [@CMNR] that if $G$ is a ribbon graph, then $D(G)$ is separable if and only if there exist ribbon graphs $G_1$ and $G_2$ such that $G=G_1 \sqcup G_2$ or $G = G_1 \oplus G_2$. Thus the condition that every vertex of $G$ that is incident with edges in $A$ and edges in $E(G)-A$ is a separating vertex of $G$ becomes the condition that $A$ is separating in $D_{\min}$. So Theorem \[rgppd\] may be translated to delta-matroids as follows. \[charaterise\_delta\_matroids\_twists\_matroids\] Let $D$ be a delta-matroid and $A$ a non-empty proper subset of $E(D)$. Then $D*A$ is a matroid if and only if the following two conditions hold: 1. $A$ is separating in $D_{\min}$, and 2. $D\setminus A$ and $D\setminus A^c$ are both matroids. We need some preliminary results before we can prove this theorem. A. Bouchet showed in [@ab2] that $D_{\min}$ and $D_{\max}$ are matroids for any delta-matroid $D$. In the case that $D$ is the twist of a matroid, however, we can identify these two matroids precisely. \[l.mat1\] Let $M=(E,\mathcal{B})$ be a matroid and $A$ be a subset of $E$. Let $D=M*A$. Then $D_{\min}=M/A\oplus (M{\backslash}A^c)^*$. Since we restrict our attention to the smallest sets of the form $B\bigtriangleup A$, where $B\in \mathcal{B}$, we need only consider those bases of $M$ that share the largest intersection with $A$.
That is, we think of building a basis for $M$ by first finding a basis of $M{\backslash}A^c$ and then extending that independent set to a basis of $M$. Let $I_A$ be a basis of $M{\backslash}A^c$ and let $B_A$ be a basis of $M$ such that $I_A\subseteq B_A$. Then $A\bigtriangleup B_A=(B_A-I_A)\cup (A-I_A)$. Now, $B_A-I_A$ is a basis in $M/A$ and $A-I_A$ is the complement of a basis in $M{\backslash}A^c$, so $B_A\bigtriangleup A$ is a basis of $M/A\oplus (M{\backslash}A^c)^*$. Since every basis that shares a maximum-sized intersection with $A$ can be constructed in this way, the lemma holds. \[l.mat2\] Let $M=(E,\mathcal{B})$ be a matroid and let $A$ be a subset of $E$. Let $D=M*A$. Then $D_{\max}=M \setminus A \oplus (M/A^c)^*$. As the feasible sets of $(D_{\max})^*$ are just the feasible sets of $(D^*)_{\min}$, we deduce that $D_{\max}=((D^*)_{\min})^*$. Now $(D^*)_{\min}=((M*A)*E)_{\min}=((M*E)*A)_{\min}=(M^**A)_{\min}$. Lemma \[l.mat1\] implies that $(M^**A)_{\min}=(M^*/A)\oplus (M^*{\backslash}A^c)^*$. Then $$(M^*/A)\oplus (M^*{\backslash}A^c)^*=(M{\backslash}A)^*\oplus ((M/A^c)^*)^*=(M{\backslash}A)^*\oplus (M/A^c) .$$ We deduce that $(D^*)_{\min}=(M{\backslash}A)^*\oplus (M/A^c)$, thus $((D^*)_{\min})^*=((M{\backslash}A)^*\oplus (M/A^c))^*$. This last direct sum, by [@Oxley11 Proposition 4.2.21], is equal to $(M{\backslash}A)\oplus (M/A^c)^*$. The next two corollaries follow immediately from Lemma \[l.mat1\] and Lemma \[l.mat2\] and the fact that, for a given field $F$, the class of matroids representable over $F$ is closed under taking minors, duals, and direct sums. \[representability\] Let $\mathcal{C}$ be a class of matroids that is closed under taking minors, duals, and direct sums. If $M\in \mathcal{C}$ and $A\subseteq E(M)$, then $(M*A)_{\min}\in \mathcal{C}$ and $(M*A)_{\max}\in \mathcal{C}$. \[representability2\] Given a matroid $M$ and subset $A$ of $E(M)$, both $(M*A)_{\min}$ and $(M*A)_{\max}$ are representable over any field that $M$ is. 
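Lemmas \[l.mat1\] and \[l.mat2\] are easy to sanity-check on small examples. In the sketch below (our own; the choice $M=U_{2,3}$, $A=\{a\}$ is illustrative only), $M/A$ has bases $\{b\}$ and $\{c\}$ and $(M{\backslash}A^c)^*$ has the single basis $\emptyset$, so Lemma \[l.mat1\] predicts minimum-sized feasible sets $\{b\}$ and $\{c\}$; dually, Lemma \[l.mat2\] predicts the single maximum-sized feasible set $\{a,b,c\}$.

```python
def twist(feasible, A):
    """Feasible sets of the twist D*A."""
    A = frozenset(A)
    return frozenset(frozenset(F) ^ A for F in feasible)

def extreme(feasible, pick):
    """Feasible sets of extreme size: pick=min gives D_min, pick=max gives D_max."""
    k = pick(len(F) for F in feasible)
    return frozenset(F for F in feasible if len(F) == k)

fs = lambda *xs: frozenset(frozenset(x) for x in xs)

# Bases of the uniform matroid U_{2,3} on {a, b, c}, twisted by A = {a}.
D = twist(fs(("a", "b"), ("a", "c"), ("b", "c")), {"a"})
assert extreme(D, min) == fs(("b",), ("c",))     # D_min = M/A ⊕ (M\A^c)^*
assert extreme(D, max) == fs(("a", "b", "c"))    # D_max = M\A ⊕ (M/A^c)^*
```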
The proof of the following lemma can be found in [@CMNR]. \[lem:useful\] Let $D=(E,\mathcal{F})$ be a delta-matroid, let $A$ be a subset of $E$ and let $s_0 = \min\{|B \cap A| : B\in\mathcal {B}(D_{\min})\}$. Then for any $F\in\mathcal{F}$ we have $|F\cap A|\geq s_0$. We can now prove Theorem \[charaterise\_delta\_matroids\_twists\_matroids\]. Suppose first that $M:=D*A$ is a matroid. Then $D=M*A$ and Lemma \[l.mat1\] shows that $A$ is separating in $D_{\min}$. The feasible sets of $D\setminus A$ are given by $$\mathcal {F}(D\setminus A) = \{ F- A: F\in\mathcal{F}(D),\ |F\cap A|\leq |F'\cap A| \text{ for all $F'\in \mathcal{F}(D)$}\}.$$ So the feasible sets of $D\setminus A$ are obtained by deleting the elements in $A$ from those feasible sets of $D$ having smallest possible intersection with $A$. As $D=M*A$, we obtain $$\mathcal {F}(D\setminus A) = \{ B \cap A^c: B\in\mathcal{B}(M),\ |B\cap A|\geq |B'\cap A| \text{ for all $B'\in \mathcal{B}(M)$}\}.$$ Because all bases of $M$ have the same number of elements, we see that all feasible sets of $D\setminus A$ also have the same number of elements, and consequently $D\setminus A$ is a matroid. Similarly, $D\setminus A^c$ is a matroid. We now prove the converse. Let $r= r_{D_{\min}}(A)$ and $r'= r_{D_{\min}}(A^c)$. We will show that any feasible set $F$ of $D$ satisfies $|F\cap A| - |F\cap A^c| = r-r'$. This condition implies that all feasible sets of $D*A$ have the same size, which is enough to deduce that $D*A$ is a matroid, as required. Because $A$ is separating in $D_{\min}$, any $F_0$ in $\mathcal {F}_{\min}$ must satisfy $|F_0\cap A|=r$ and $|F_0\cap A^c|=r'$. Now Lemma \[lem:useful\] implies that any feasible set $F$ of $D$ satisfies $|F\cap A| \geq r$ and $|F\cap A^c| \geq r'$. We claim that a feasible set $F$ satisfies $|F\cap A| = r$ if and only if $|F\cap A^c|=r'$.
The feasible sets of $D\setminus A$ are given by $$\mathcal {F}(D\setminus A) = \{ F- A: F\in\mathcal{F}(D),\ |F\cap A|=r\}.$$ By condition (2), these sets form the bases of a matroid and consequently all have the same size, which must be $r'$. Therefore if $F$ is in $\mathcal F(D)$ and satisfies $|F\cap A|=r$, then $|F\cap A^c|=r'$. The converse is similar and so our claim is established. We will now prove by induction on $k$ that if $F$ is a feasible set then $|F\cap A|=r+k$ if and only if $|F\cap A^c|=r'+k$. We have already established the base case when $k=0$. Suppose the claim is true for all $k<l$. If $F$ is a feasible set satisfying $|F\cap A|=r+l$, then using induction, we see that $|F\cap A^c|\geq r'+l$. Suppose then there is a feasible set $F$ satisfying $|F\cap A|=r+l$ and $|F\cap A^c|>r'+l$. Let $F_1$ be a member of $\mathcal{F}_{\min}$. So $|F_1 \cap A|=r$ and $|F_1\cap A^c|=r'$. Now choose $F_2$ to be a feasible set with $|F_2\cap A|=r+l$, $|F_2\cap A^c|>r'+l$ and $|F_2 \cap F_1 \cap A|$ as large as possible amongst such sets. There exists $x\in (F_2- F_1)\cap A$ and clearly $x\in F_1\bigtriangleup F_2$. Hence there exists $y$ belonging to $F_1\bigtriangleup F_2$ such that $F_3 = F_2 \bigtriangleup \{x,y\}$ is feasible. However $y$ is chosen, we must have $|F_3\cap A| < |F_3\cap A^c|$. Therefore the inductive hypothesis ensures that $|F_3 \cap A| \geq |F_2 \cap A|$ and so $y\in F_1\cap A$. But then $F_3$ is a feasible set of $D$ with $|F_3\cap A|=r+l$, $|F_3\cap A^c|>r'+l$ and $|F_3 \cap F_1 \cap A| > |F_2 \cap F_1 \cap A|$, contradicting the choice of $F_2$. Let $G$ be a ribbon graph with non-trivial ribbon subgraphs $P$ and $Q$. We say that $G$ is the *join* of $P$ and $Q$, written $G=P\vee Q$, if $G=P \oplus Q$ and no cycle in $P$ is interlaced with a cycle in $Q$. In [@CMNR] it was shown that $D=D(G)$ is disconnected if and only if there exist ribbon graphs $G_1$ and $G_2$ such that $G=G_1 \sqcup G_2$ or $G = G_1 \vee G_2$.
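Theorem \[charaterise\_delta\_matroids\_twists\_matroids\] lends itself to brute-force experiment: since a twist of a delta-matroid is again a delta-matroid, $D\ast A$ is a matroid precisely when its feasible sets are equicardinal. The sketch below (our own toy example, $M = U_{1,1}\oplus U_{1,2}$) checks this for a separating and a non-separating choice of $A$.

```python
def twist(feasible, A):
    A = frozenset(A)
    return frozenset(frozenset(F) ^ A for F in feasible)

def is_matroid_twist(feasible, A):
    """M*A is a matroid iff all of its feasible sets have the same size
    (equicardinality suffices because a twist of a delta-matroid is a delta-matroid)."""
    sizes = {len(F) for F in twist(feasible, A)}
    return len(sizes) == 1

fs = lambda *xs: frozenset(frozenset(x) for x in xs)

# Bases of M = U_{1,1} on {a} direct-summed with U_{1,2} on {b, c}.
bases = fs(("a", "b"), ("a", "c"))
assert is_matroid_twist(bases, {"a"})       # {a} is separating: M*{a} is a matroid
assert not is_matroid_twist(bases, {"b"})   # {b} is not separating: M*{b} is not
```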
In [@Mo5], it was shown that $G$ and $G^A$ are both plane graphs if and only if we can write $G=H_1\vee \cdots \vee H_l$, where each $H_i$ is plane and $A= \bigcup_{i\in I} E(H_i)$, for some $I\subseteq \{1, \ldots , l\}$. This result extends to matroids as follows. \[pdism\] Let $M=(E,\mathcal{B})$ be a matroid and $A$ be a subset of $E$. Then $M*A$ is a matroid if and only if $A$ is separating or $A\in\{\emptyset,E\}$. The minimum-sized sets and maximum-sized sets in $\mathcal{B}\bigtriangleup A$ have size $r(M)-r(A)+|A|-r(A)$ and $r(E-A)+|A|-(r(M)-r(E-A))$, respectively. The collection $\mathcal{B}\bigtriangleup A$ is the set of bases of a matroid if and only if they all have equal cardinality, or equivalently, $$r(M)-r(A)+|A|-r(A) = r(E-A)+|A|-(r(M)-r(E-A)).$$ Simplifying yields $r(M)=r(A)+r(E-A)$, which occurs if and only if $A$ is separating or $A \in \{\emptyset, E\}$. We complete this section by generalizing a result by D. Welsh [@We69] regarding the connection between Eulerian and bipartite binary matroids. A graph is said to be *Eulerian* if there is a closed walk that contains each of the edges of the graph exactly once, and a graph is *bipartite* if there is a partition $(A,B)$ of the vertices such that no edge of the graph has both endpoints in $A$ or both in $B$. A *circuit* in a matroid is a minimal dependent set and a *cocircuit* in a matroid is a minimal dependent set in its dual. A matroid $M$ is said to be *Eulerian* if there are disjoint circuits $C_1, \ldots , C_p$ in $M$ such that $E(M)=C_1\cup \cdots \cup C_p$. A matroid is said to be *bipartite* if every circuit has even cardinality. A standard result in graph theory is that a plane graph $G$ is Eulerian if and only if $G^*$ is bipartite. This result also holds for binary matroids. \[dominic\] A binary matroid is Eulerian if and only if its dual is bipartite. 
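Theorem \[dominic\] can be illustrated on the smallest interesting binary matroid. In the sketch below (our own example), circuits are computed by brute force from the bases: $U_{2,3}$, the cycle matroid of a triangle, is Eulerian since its ground set is the single circuit $\{a,b,c\}$, while its dual $U_{1,3}$ is bipartite, as all of its circuits are the two-element sets.

```python
from itertools import chain, combinations

def subsets(E):
    E = sorted(E)
    return [frozenset(s) for s in chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def circuits(E, bases):
    """Minimal dependent sets, where a set is independent iff it lies in some basis."""
    dep = [S for S in subsets(E) if not any(S <= B for B in bases)]
    return [C for C in dep if not any(X < C for X in dep)]

def dual_bases(E, bases):
    """Bases of the dual matroid: complements of the bases."""
    return {frozenset(E - B) for B in bases}

E = frozenset({"a", "b", "c"})
tri = {frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"b", "c"})}  # U_{2,3}

assert circuits(E, tri) == [E]   # Eulerian: E is a disjoint union of circuits
assert all(len(C) % 2 == 0 for C in circuits(E, dual_bases(E, tri)))  # dual is bipartite
```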
Once again, by considering delta-matroids as a generalization of ribbon graphs, we can determine when the twist of a binary bipartite or Eulerian matroid is either bipartite or Eulerian. In [@HM11] it was shown that, if $G$ is a plane graph with edge set $E$ and $A\subseteq E$, then 1. $G^A$ is bipartite if and only if the components of $G{\backslash}A^c$ and $G^*{\backslash}A$ are Eulerian; 2. $G^A$ is Eulerian if and only if $G{\backslash}A^c$ and $G^*{\backslash}A$ are bipartite. Recalling that, when $G$ is plane, $D(G)$ is a matroid, and that $D(G^A)=D(G)\ast A$ suggests an extension of this result to twists of binary matroids, but first we need to introduce some terminology. The circuit space $\mathcal{C}(M)$ (respectively cocircuit space $\mathcal {C^*}(M)$) of a matroid $M=(E,\mathcal{B})$ comprises all subsets of $E$ which can be expressed as a disjoint union of circuits (respectively cocircuits). Both spaces include the empty set. It is not difficult to see that a subset of $E$ belongs to the circuit (respectively cocircuit) space if and only if it has even intersection with every cocircuit (respectively circuit) [@Oxley11]. The bicycle space $\mathcal{BI}(M)$ of $M$ is the intersection of the circuit and cocircuit spaces. It is not difficult to show that for any matroid $M$, we have $\mathcal{C}(M/A)=\{C-A : C\in \mathcal{C}(M)\}$. Furthermore $\mathcal{C}(M^*) = \mathcal{C^*}(M)$, which implies that $\mathcal{BI}(M)=\mathcal{BI}(M^*)$. \[bipartite\] Let $M=(E,\mathcal{B})$ be a binary matroid, $A$ be a subset of $E$ and $D=M*A$. 1. If $M$ is bipartite, then $D_{\min}$ is bipartite if and only if $A\in \mathcal{BI}(M)$. 2. If $M$ is Eulerian, then $D_{\min}$ is Eulerian if and only if $A\in \mathcal{BI}(M)$. We prove the first part. Lemma \[l.mat1\] implies that $D_{\min}=M/A\oplus (M|A)^*$. So $D_{\min}$ is bipartite exactly when both $M/A$ and $(M|A)^*$ are bipartite.
Every circuit of a matroid has even cardinality if and only if every element of its circuit space has even cardinality. Consequently every circuit of $M/A$ has even cardinality if and only if $C-A$ has even cardinality for every $C\in \mathcal{C}(M)$. Because every circuit of $M$ has even cardinality, this occurs if and only if $C \cap A$ has even cardinality for every $C\in\mathcal{C}(M)$, which corresponds to $A\in \mathcal{C^*}(M)$. On the other hand $(M|A)^*=M^*/(E-A)$. This matroid is bipartite exactly when every element of $\mathcal C(M^*)$ has an even number of elements not belonging to $E-A$. Equivalently, the intersection of any circuit of $M^*$ with $A$ has even cardinality, which occurs if and only if $A \in \mathcal {C}^*(M^*)=\mathcal C(M)$. So $D_{\min}$ is bipartite if and only if $A\in \mathcal{BI}(M)$. The proof of the second part is very similar and so is omitted. Let $M=(E,\mathcal{B})$ be a binary matroid, $A$ be a subset of $E$ and $D=M*A$. 1. If $M$ is Eulerian, then $D_{\min}$ is bipartite if and only if $E-A\in \mathcal{BI}(M)$; 2. If $M$ is bipartite, then $D_{\min}$ is Eulerian if and only if $E-A\in \mathcal{BI}(M)$. The first part follows from applying Theorem \[bipartite\] to $M^*$ and by using Theorem \[dominic\], because $M^*$ is bipartite and $D=(M^*)*(E-A)$. The second part is similar. Loop complementation and vf-safe delta-matroids {#petriesection} =============================================== \[4th\] So far we have seen how the concepts of geometric and partial duality for ribbon graphs can serve as a guide for delta-matroid results on twists. In this section we continue applying concepts of duality from topological graph theory to delta-matroid theory by examining the delta-matroid analogue of Petrie duality and partial Petriality. Following R. Brijder and H. Hoogeboom [@BH11], let $D=(E,\mathcal{F})$ be a set system and $e\in E$.
Then $D+e$ is defined to be the set system $(E,\mathcal{F}')$ where $ \mathcal{F}'= \mathcal{F} \triangle \{ F\cup e : F\in \mathcal{F} \text{ and } e\notin F \} $. If $e_1, e_2 \in E$ then $(D+e_1)+e_2 = (D+e_2)+e_1 $, and so for $A=\{a_1, \ldots , a_n\}\subseteq E$ we can define the [*loop complementation*]{} of $D$ on $A$, by $D+A:= D+a_1+\cdots + a_n$. This operation is particularly natural in the context of binary delta-matroids because forming $D+e$ from $D$ coincides with changing the diagonal entry corresponding to $e$ of the matrix representing $D$ from zero to one or vice versa. It is important to note that the set of delta-matroids is not closed under loop complementation. For example, let $D=(E,\mathcal{F})$ with $E=\{a,b,c\}$ and $\mathcal{F}=2^{\{a,b,c\}} \setminus \{\{a\}\}$. Then $D$ is a delta-matroid, but $D+a =(E,\mathcal{F}') $, where $\mathcal{F}'= \{\emptyset, \{a\}, \{b\}, \{c\}, \{b,c\}\}$, is not a delta-matroid, since if $F_1 = \{b,c\}$ and $F_2=\{a\}$, then there is no choice of $x$ such that $F_1 {\bigtriangleup}\{a,x\} \in \mathcal{F}'$. To get around this issue, we often restrict our attention to a class of delta-matroids that is closed under loop complementation. A delta-matroid $D=(E,\mathcal{F})$ is said to be [*vf-safe*]{} if the application of any sequence of twists and loop complementations over $E$ results in a delta-matroid. The class of vf-safe delta-matroids is known to be minor-closed and strictly contains the class of binary delta-matroids (see for example [@BHpre2]). In particular, it follows that ribbon-graphic delta-matroids are also vf-safe. The following result establishes a surprising connection, by showing that loop complementation is the delta-matroid analogue of partial Petriality. \[l.plus\] Let $G$ be a ribbon graph and $A\subseteq E(G)$. Then $D(G)+A = D(G^{\tau(A)})$. Without loss of generality we assume that $G$ is connected. To prove the proposition it is enough to show that $D(G)+e = D(G^{\tau(e)})$.
To do this we describe how the spanning quasi-trees of $G^{\tau(e)}$ are obtained from those of $G$. Suppose that the boundary of the edge $e$, when viewed as a disc, consists of the four arcs $[a_1, b_1]$, $[b_1,b_2]=:\beta $, $[b_2, a_2]$, and $[a_2,a_1]=:\alpha $, where $\alpha$ and $\beta$ are the arcs which attach $e$ to its incident vertices (or vertex). Let $H$ be a spanning ribbon subgraph of $G$. Consider the boundary cycle or cycles of $H$ containing the points $a_1,a_2,b_1,b_2$. If $e\in E(H)$ then the boundary components of $H{\backslash}e$ can be obtained from those of $H$ by deleting the arcs $[a_1,b_1]$ and $[b_2,a_2]$, then adding arcs $[a_1, a_2]$ and $[b_1,b_2]$. Similarly, the boundary components of $H^{\tau(e)}$ can be obtained from those of $H$ by deleting the arcs $[a_1,b_1]$ and $[b_2,a_2]$, then adding arcs $[a_1, b_2]$ and $[a_2,b_1]$. Each time the number of boundary components of $H{\backslash}e$ or $H^{\tau(e)}$ is computed in the argument below, this procedure is used. To relate the spanning quasi-trees of $G$ and $G^{\tau(e)}$, let $H$ be a spanning ribbon subgraph of $G$ that contains $e$. Consider the boundary components (or component) containing the points $a_1,a_2,b_1,b_2$. If there is one component then they are met in the order $a_1,b_1, b_2,a_2 $ or $ a_1,b_2,a_2,b_1$ when travelling around the unique boundary component of $H$ starting from $a_1$ and in a suitable direction. If there are two components, then one contains $a_1$ and $b_1$, and the other contains $a_2$ and $b_2$. Counting the number of boundary components of $H{\backslash}e$ and $H^{\tau(e)}$ as above, we see that if the points are met in the order $a_1,b_1, b_2,a_2 $ then $f(H-e)=f(H)+1$ and $f(H^{\tau(e)})=f(H)$. If the points are met in the order $ a_1,b_2,a_2,b_1$ then $f(H-e)=f(H)$ and $f(H^{\tau(e)})=f(H)+1$. If the points are on two boundary components then $f(H-e)=f(H)-1$ and $f(H^{\tau(e)})=f(H)-1$.
This means that for some integer $k$, two of $f(H)$, $f(H-e)$ and $f(H^{\tau(e)})$ are equal to $k$ and the other is equal to $k+1$. Note that $f(H-e)=f(H^{\tau(e)}-e)$. From the discussion above we can derive the following. Suppose that $H-e$ is not a spanning quasi-tree of $G$. Then $H$ is a spanning quasi-tree of $G$ if and only if $H^{\tau(e)}$ is a spanning quasi-tree of $G^{\tau(e)}$. Now suppose that $H-e$ is a spanning quasi-tree of $G$. Then either $H$ is a spanning quasi-tree of $G$ or $H^{\tau(e)}$ is a spanning quasi-tree of $G^{\tau(e)}$, but not both. Finally, let $D(G)=(E,\mathcal{F})$, and recall that the feasible sets of $D(G)$ (respectively $D(G^{\tau(e)})$) are the edge sets of all of the spanning quasi-trees of $G$ (respectively $G^{\tau(e)}$). From the above, we see that the feasible sets of $D(G^{\tau(e)})$ are given by $ \mathcal{F} \triangle \{ F\cup e : F\in \mathcal{F} \text{ such that } e\notin F \} $. It follows that $D(G)+A = D(G^{\tau(A)})$, as required. For use later, we record the following lemma. Its straightforward proof is omitted. \[lem:opsswitch\] Let $e$ be an element of a vf-safe delta-matroid $D=(E,\mathcal{F})$ and let $A\subseteq E-e$. Then $(D+A)/e = (D/e)+A$ and $(D+A){\backslash}e = (D {\backslash}e)+A$. In [@BH11] it was shown that twists, $\ast$, and loop complementation, $+$, give rise to an action of the symmetric group of degree three on set systems. If $S=(E,\mathcal{F})$ is a set system, and $w=w_1w_2\cdots w_n$ is a word in the alphabet $\{\ast, +\}$ (note that $\ast$ and $+$ are being treated as formal symbols here), then $$\label{c2.eq1b} (S)w :=(\cdots (( S) w_n(E) )w_{n-1}(E) \cdots )w_1(E).$$ With this, it was shown in [@BH11] that the group $\mathfrak{S}=\langle * , + \mid *^2, +^2, (*+)^3 \rangle$ acts on the set of ordered pairs $\mathcal{X} = \{ (S,A) : S\text{ a set system}, A\subseteq E(S) \}$. Let $S=(E,\mathcal{F})$ be a set system, $A,B\subseteq E$, and $g,h\in \mathfrak{S}$.
Let $S{g(A)h(B)}:=\left(S{g(A)}\right){h(B)}$. Let $D_1=(E,\mathcal{F})$ and $D_2$ be delta-matroids. We say that $D_2$ is a [*twisted dual*]{} of $D_1$ if there exist $A_1, \ldots A_n \subseteq E$ and $g_1,\ldots, g_n \in \mathfrak{S}$ such that $$D_2=D_1{g_1(A_1) g_2(A_2)\cdots g_n(A_n)} .$$ It was shown in [@BH11] that the following hold. 1. If $A\cap B=\emptyset$, then $Dg(A)h(B)=Dh(B)g(A)$. 2. $ Dg(A) = (Dg(e))g(A\backslash e)$, for $e\in A$. 3. $Dg_1(A)=Dg_2(A)$ if and only if $g_1=g_2$ in the group $\langle * , + \mid *^2, +^2, (*+)^3 \rangle$. We have already shown that geometric partial duality and twists, as well as Petrie duals and loop complementations, are compatible. So it should come as no surprise that twisted duality for ribbon graphs and for delta-matroids are compatible as well. \[t.td\] Let $\mathfrak{G}=\langle \delta ,\tau \mid \delta^2, \tau^2, (\delta\tau)^3 \rangle$ and $\mathfrak{S}=\langle * , + \mid *^2, +^2, (*+)^3 \rangle$ be two presentations of the symmetric group $\mathfrak{S}_3$; and let $\eta: \mathfrak{G} \rightarrow \mathfrak{S}$ be the homomorphism induced by $\eta(\delta)=*$, and $ \eta(\tau)=+$. Then if $G$ is a ribbon graph, $A_1, \ldots A_n \subseteq E(G)$, and $g_1,\ldots, g_n \in \mathfrak{G}$ then $$D(G^{ g_1(A_1) g_2(A_2)\cdots g_n(A_n) }) = D(G) \eta (g_1)(A_1) \eta (g_2)(A_2)\cdots \eta (g_n)(A_n) .$$ The result follows immediately from Theorem \[t.compat\] and Theorem \[l.plus\]. We now return to the topic of binary and Eulerian matroids as discussed at the end of Section \[3rd\]. R. Brijder and H. Hoogeboom [@BH12] obtained results of a different flavour on delta-matroids obtained from Eulerian or bipartite binary matroids. Let $M=(E,\mathcal{B})$ be a binary matroid and denote $M\bar{\ast}E:=M+E*E+E$. They showed that $M$ is bipartite if and only if $M+E$ is an even delta-matroid, and that $M$ is Eulerian if and only if $M\bar{\ast}E$ is an even delta-matroid. These results are interesting in the context of ribbon graphs.
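The first of R. Brijder and H. Hoogeboom's results is easy to check computationally on small binary matroids. The sketch below (our own choice of examples) applies loop complementation over the whole ground set: for the bipartite matroid $U_{1,2}$ (its only circuit, $\{a,b\}$, is even) the result is an even delta-matroid, while for the non-bipartite $U_{2,3}$ (odd circuit $\{a,b,c\}$) it is not.

```python
def loop_complement(feasible, e):
    """D+e: symmetric difference with the e-extensions of the e-free feasible sets."""
    added = {frozenset(F | {e}) for F in feasible if e not in F}
    return frozenset(feasible) ^ added

def loop_complement_all(feasible, E):
    for e in E:                      # single-element complementations commute
        feasible = loop_complement(feasible, e)
    return feasible

def is_even(feasible):
    """Even delta-matroid: all feasible sets have sizes of the same parity."""
    return len({len(F) % 2 for F in feasible}) == 1

fs = lambda *xs: frozenset(frozenset(x) for x in xs)

# U_{1,2} on {a, b} is binary and bipartite; U_{2,3} on {a, b, c} is binary but not.
assert is_even(loop_complement_all(fs(("a",), ("b",)), {"a", "b"}))
assert not is_even(loop_complement_all(fs(("a", "b"), ("a", "c"), ("b", "c")), {"a", "b", "c"}))
```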
In [@EMM] it was shown that an orientable ribbon graph $G$ is bipartite if and only if its Petrie dual $G^{\times}$ is orientable, although, unfortunately, the result was misstated. In particular, if $G$ is plane, then $M(G)$ is bipartite if and only if $M(G)+E(G)$ is even, which is the graphic restriction of the first part of R. Brijder and H. Hoogeboom’s result. The ribbon graph analogue of the second part is that a plane graph $G$ is Eulerian if and only if $G^{*\times *}$ is orientable. This is indeed the case: $G$ is Eulerian if and only if $G^*$ is bipartite if and only if $(G^*)^{\times}$ is orientable if and only if $((G^*)^{\times})^*$ is orientable. However, the result does not extend to all Eulerian ribbon graphs, for example, consider the ribbon graph consisting of one vertex and two orientable non-trivial loops. The authors proved in [@CMNR] that a loop in a ribbon-graphic delta-matroid $D(G)$ corresponds to a trivial orientable loop in the ribbon graph $G$. Moreover, as ribbon graphs have several different types of loops (given by orientability and triviality), we should expect delta-matroids to have several different types of loops. We shall see that the different types of loops in delta-matroids and in ribbon graphs correspond. More precisely, we shall show that the behaviour of each type of loop in delta-matroids under the various notions of duality is exactly as predicted by the behaviour of the corresponding type of loop in ribbon graphs. We need some additional notation. Let $D=(E,{\mathcal{F}})$ be a vf-safe delta-matroid. If $X\subseteq E$, then the [*dual pivot*]{} on $X$, denoted by $D \bar{\ast } X$, is defined by $$D \bar{\ast} X:= ((D\ast X)+X)\ast X.$$ From the discussion on twisted duality above, it follows that $D \bar{\ast } X = ((D +X) \ast X)+ X$. We shall use this observation and other similar consequences of twisted duality several times in this section. 
The following result is a slight reformulation of Theorem 5.5 from [@BH13] and is the key to understanding the different types of loops in a delta-matroid. Following the notation of [@BH12], we set $d_D:=r(D_{\min})$. \[thm:minstuff\] Let $D$ be a vf-safe delta-matroid and let $e\in E$. Then two of $D_{\min}$, $(D\ast e)_{\min}$ and $(D\bar\ast e)_{\min}$ are isomorphic. If $M_1$ is isomorphic to two matroids in $\{D_{\min},(D\ast e)_{\min},(D\bar\ast e)_{\min}\}$ and $M_2$ is isomorphic to the third, then $M_1$ is formed by taking the direct sum of $M_2/e$ and the one-element matroid comprising $e$ as a loop. In particular, two of $d_D$, $d_{D\ast e}$ and $d_{D\bar\ast e}$ are equal to $d$, for some integer $d$, and the third is equal to $d+1$. Finally, $e$ is a loop in precisely two of $D_{\min}$, $(D\ast e)_{\min}$ and $(D\bar\ast e)_{\min}$. There is an unfortunate clash in notation: only trivial orientable loops in ribbon graphs are loops in the underlying delta-matroid. Where ambiguity may arise, we prefix the word “ribbon” to the delta-matroid analogues. Let $D=(E,\mathcal{F})$ be a delta-matroid, and let $e\in E$. Then 1. $e$ is a [*ribbon loop*]{} if $e$ is a loop in $D_{\min}$; 2. a ribbon loop $e$ is *non-orientable* if $e$ is a ribbon loop in $D\ast e$ and is *orientable* if $e$ is a ribbon loop in $D\bar\ast e$; 3. an orientable ribbon loop is *trivial* if $e$ is a (delta-matroid) loop of $D$ and is *non-trivial* otherwise; 4. a non-orientable ribbon loop is *trivial* if $e$ is a (delta-matroid) loop of $D+e$ and is *non-trivial* otherwise. Note that Theorem \[thm:minstuff\] implies that every ribbon loop is either orientable or non-orientable but not both. Furthermore, a ribbon loop is a trivial non-orientable ribbon loop if and only if, for every $F\subseteq E-e$, we have $F\in \mathcal{F}$ if and only if $F\cup e \in \mathcal{F}$. \[l.rloopsdual\] Let $D=(E,\mathcal{F})$ be a delta-matroid and let $e\in E$. Then 1.
$e$ is a coloop in $D$ if and only if $e$ is a loop in $D\ast e$; 2. $e$ is neither a coloop nor a ribbon loop in $D$ if and only if $e$ is a non-trivial orientable ribbon loop in $D\ast e$. We first show that (1) holds. By definition, $e$ is a coloop in $D$ if it appears in every feasible set of $D$. Similarly $e$ is a loop in $D\ast e$ if it appears in no feasible set of $D\ast e$. These two conditions are equivalent. We now show that (2) holds. Suppose that $e$ is not a ribbon loop in $D$. By Theorem \[thm:minstuff\], $e$ is a ribbon loop in $D\ast e$ and, by definition, must be orientable. On the other hand, suppose that $e$ is an orientable ribbon loop in $D\ast e$. Then, by Theorem \[thm:minstuff\], $e$ is not non-orientable and consequently is not a ribbon loop of $(D\ast e)\ast e=D$. Applying Part (1) completes the proof. A more thorough discussion of how partial duality transforms the various types of ribbon loops can be found in [@CMNR]. In a ribbon graph partial Petriality will change the orientability of a loop. The following results describe the corresponding changes in delta-matroids. \[l.rloopstwist\] Let $D=(E,\mathcal{F})$ be a vf-safe delta-matroid and let $e\in E$. Then $e$ is a ribbon loop in $D$ if and only if $e$ is a ribbon loop in $D+e$. Moreover a ribbon loop, $e$, is non-orientable in $D$ if and only if it is orientable in $D+e$, and $e$ is trivial in $D$ if and only if it is trivial in $D+e$. Finally, $e$ is a coloop in $D$ if and only if $e$ is a coloop of $D+e$. If $e$ is a ribbon loop in $D$ then no minimum-sized feasible set of $D$ contains $e$. But in this case the minimum-sized feasible sets of $D+e$ are the same as those of $D$ and so $e$ is a ribbon loop in $D+e$. The converse follows because $(D+e)+e=D$. Suppose that $e$ is a non-orientable ribbon loop in $D$. Then $e$ is a ribbon loop in $D*e$. So by the first part of this lemma, $e$ is a ribbon loop in both $D+e$ and $(D*e)+e$. 
By twisted duality the latter is equal to $(D+e)\bar\ast e$. Consequently $e$ is an orientable ribbon loop in $D+e$. Conversely if $e$ is an orientable ribbon loop in $D+e$, then it is a ribbon loop in $(D+e)\bar\ast e = (D*e)+e$. Thus $e$ is a ribbon loop in both $D$ and $D*e$ and so it must be a non-orientable ribbon loop in $D$. Next, the statement concerning triviality follows from the definition of trivial loops and the fact that $(D+e)+e=D$. Finally, if $e$ is a coloop of $D$, then $e$ is in every feasible set of $D$, so $D=D+e$ and it follows that $e$ is a coloop in $D+e$. The converse follows since $(D+e)+e=D$. The next three lemmas are needed in Section \[5th\]. Each is stated, perhaps a little unnaturally, in terms of the dual of a vf-safe delta-matroid $D$, but this is exactly what is required later. \[lem:dualtwist\] Let $e$ be an element of a vf-safe delta-matroid, $D$. Then 1. $e$ is a non-trivial orientable ribbon loop in $D^*$ if and only if $e$ is a non-trivial orientable ribbon loop in $(D+e)^*$; 2. $e$ is a trivial orientable ribbon loop in $D^*$ if and only if $e$ is a trivial orientable ribbon loop in $(D+e)^*$; 3. $e$ is a coloop in $D^*$ if and only if $e$ is a trivial non-orientable ribbon loop in $(D+e)^*$; 4. $e$ is neither a ribbon loop nor a coloop in $D^*$ if and only if $e$ is a non-orientable non-trivial ribbon loop in $(D+e)^*$. Notice that $D^*=(D\ast(E(D)-e))\ast e$ and $(D+e)^*=((D\ast(E(D)-e))\ast e)\bar \ast e$. Consequently $(D+e)^* = (D^*)\bar\ast e$. It follows from the definition of an orientable ribbon loop that $e$ is an orientable ribbon loop in $D^*$ if and only if it is an orientable ribbon loop in $(D^*)\bar\ast e=(D+e)^*$. Furthermore it follows from Part (1) of Lemma \[l.rloopsdual\] and the last part of Lemma \[l.rloopstwist\] that $e$ is delta-matroid loop in $D^*$ if and only if $e$ is a delta-matroid loop in $(D+e)^*$. This proves the first two parts. 
If $e$ is not a ribbon loop of $D^*$, then it follows from Theorem \[thm:minstuff\] that $e$ must be a ribbon loop of $D^*\bar\ast e=(D+e)^*$. Moreover, by definition, $e$ must be a non-orientable ribbon loop. On the other hand, if $e$ is a non-orientable ribbon loop of $(D+e)^*$ then, by definition and Theorem \[thm:minstuff\], it is not a ribbon loop of $(D+e)^* \bar \ast e=D^*$. Finally it is easily seen that $e$ is a coloop in $D^*$ if and only if $e$ is a trivial non-orientable ribbon loop of $D^*\bar\ast e=(D+e)^*$. This proves the last two parts. The next lemma is straightforward and its proof is omitted. \[lem:penroseloops\] Let $e$ be an element of a vf-safe delta-matroid, $D=(E,\mathcal{F})$ and let $A\subseteq E-e$. Then 1. $e$ is a trivial orientable ribbon loop in $D$ if and only if $e$ is a coloop in $(D+A)^*$; 2. $e$ is a trivial non-orientable ribbon loop in $D$ if and only if $e$ is a coloop in $(D+A+e)^*$. \[lem:penrosemins\] Let $e$ be an element of a vf-safe delta-matroid, $D$. 1. If $e$ is a non-trivial orientable ribbon loop in $D^*$ then $(D^*{\backslash}e)_{\min} = ((D+e)^*{\backslash}e)_{\min}$. 2. If $e$ is neither a ribbon loop nor a coloop in $D^*$ then $(D^*/ e)_{\min} = ((D+e)^*{\backslash}e)_{\min}$. 3. If $e$ is a non-trivial non-orientable ribbon loop in $D^*$ then $(D^*{\backslash}e)_{\min} = ((D+e)^*/ e)_{\min}$. We show first that (1) holds. If $e$ is a non-trivial orientable ribbon loop in $D^*$ then $(D^*{\backslash}e)_{\min}=(D^*)_{\min}$. Because $e$ is orientable, $$(D^*)_{\min} = (D^* \bar \ast e)_{\min} = (((((D*(E(D)-e))*e)*e)+e)*e)_{\min} = ((D+e)^*)_{\min}.$$ By Part (1) of Lemma \[lem:dualtwist\], $e$ is a non-trivial orientable ribbon loop in $(D+e)^*$ and so $((D+e)^*)_{\min}=((D+e)^*{\backslash}e)_{\min}$. Next, we show that (2) holds. If $e$ is neither a ribbon loop nor a coloop in $D^*$, then by Part (4) of Lemma \[lem:dualtwist\], $e$ is a non-trivial non-orientable ribbon loop in $(D+e)^*$. 
Consequently $((D+e)^*{\backslash}e)_{\min} = ((D+e)^*)_{\min}$. Because $e$ is non-orientable, $((D+e)^*)_{\min} = (((D+e)^*)*e)_{\min}$ and this can be shown to be equal to $((D^*\ast e)+e)_{\min}$. Because $e$ is not a ribbon loop in $D^*$, the collections of minimum-sized feasible sets of $D^*/e$ and $(D^*\ast e)+e$ coincide. We conclude our proof by showing that (3) holds. By Part (4) of Lemma \[lem:dualtwist\] with $D$ and $D+e$ interchanged, $e$ is neither a ribbon loop nor a coloop of $(D+e)^*$. Applying (2) with $D$ replaced by $D+e$, we get $((D+e)^*/ e)_{\min} = (((D+e)+e)^*{\backslash}e)_{\min}$ or equivalently $(((D+e)^*)/ e)_{\min} = ((D^*){\backslash}e)_{\min}$ as required. The Penrose and characteristic polynomials {#5th} ========================================== The Penrose polynomial was defined implicitly by R. Penrose in [@Pen71] for plane graphs, and was extended to all embedded graphs in [@EMM11a]. The advantage of considering the Penrose polynomial of embedded graphs, rather than just plane graphs, is that it reveals new properties of the Penrose polynomial (of both plane and non-plane graphs) that cannot be realised exclusively in terms of plane graphs. The (plane) Penrose polynomial has been defined in terms of bicycle spaces, left-right facial walks, or states of a medial graph. Here, as in [@EMM11b], we define it in terms of partial Petrials. Let $G$ be an embedded graph. Then the [*Penrose polynomial*]{}, $P(G;\lambda)\in \mathbb{Z}[\lambda]$, is defined by $$P(G;\lambda) := \sum_{A\subseteq E(G)} (-1)^{|A|} \lambda^{f(G^{\tau(A)})},$$ where $f(G)$ denotes the number of boundary components of a ribbon graph $G$. The Penrose polynomial has been extended to both matroids and delta-matroids. In [@AM00] M. Aigner and H. 
Mielke defined the Penrose polynomial of a binary matroid $M=(E,{\mathcal{F}})$ as $$\label{pdefmq} P(M;\lambda) = \sum_{X\subseteq E} (-1)^{|X|} \lambda^{\dim(B_M(X))},$$ where $B_M(X)$ is the binary vector space formed of the incidence vectors of the sets in the collection $$\{ A \in \mathcal{C}(M) : A\cap X \in \mathcal{C}^*(M)\}.$$ R. Brijder and H. Hoogeboom defined the Penrose polynomial in greater generality for vf-safe delta-matroids in [@BH12]. Recall that if $D=(E,{\mathcal{F}})$ is a vf-safe delta-matroid, and $X\subseteq E$, then the [*dual pivot*]{} on $X$ is $D \bar{\ast} X:= ((D\ast X)+X)\ast X$, and that $d_D:=r(D_{\min})$. The [*Penrose polynomial*]{} of a vf-safe delta-matroid $D$ is then $$\label{pdefde} P(D;\lambda) := \sum_{X\subseteq E} (-1)^{|X|} \lambda^{d_{D\ast E \bar{\ast} X}}.$$ It was shown in [@BH12] that when the delta-matroid $D$ is a binary matroid, Equations  and  agree. Furthermore, our next result shows that Penrose polynomials of matroids and delta-matroids are compatible with their ribbon graph counterparts. \[t.pencom\] Let $G$ be a ribbon graph and $D(G)$ be its ribbon-graphic delta-matroid. Then $$P(G;\lambda) = \lambda^{k(G)} P(D(G);\lambda).$$ We have $$\begin{aligned} D(G)\ast E\bar{\ast}X &= D(G)\ast E\ast X +X\ast X = D(G^{\delta(E) \delta(X) \tau (X) \delta(X)}) \\&= D(G^{\delta(X^c) \delta(X) \delta(X) \tau (X) \delta(X)}) = D(G^{\tau (X) \delta(E)}) .\end{aligned}$$ Then as $D(G^{\tau (X) \delta(E)})_{\min} = M(G^{\tau (X) \delta(E)})$, we have $$r(D(G^{\tau (X) \delta(E)})_{\min} ) = v(G^{\tau (X) \delta(E)}) - k(G^{\tau (X) \delta(E)}) = f(G^{\tau (X)}) - k(G),$$ using that $D(G^{\tau (X) \delta(E)}) = D((G^{\tau (X)})^*)$. The equality of the two polynomials follows. A very desirable property of a graph polynomial is that it satisfies a recursion relation that reduces a graph to a linear combination of “elementary" graphs, such as isolated vertices. 
The well-known deletion-contraction reduction meets this requirement in the case of the Tutte polynomial. In [@EMM11a] it was shown that the Penrose polynomial of a ribbon graph admits such a relation. If $G$ is a ribbon graph, and $e\in E(G)$, then $$\label{e.pdc} P(G; \lambda)= P(G/e; \lambda) -P (G^{\tau(e)}/e; \lambda).$$ For a ribbon graph $G$ and non-trivial ribbon subgraphs $P$ and $Q$, we write $G=P\sqcup Q$ when $G$ is the disjoint union of $P$ and $Q$, that is, when $G=P\cup Q$ and $P\cap Q=\emptyset$. The preceding identity together with the multiplicativity of the Penrose polynomial, $$P(G_1\sqcup G_2)= P(G_1) \cdot P(G_2),$$ and its value $\lambda$ on an isolated vertex provides a recursive definition of the Penrose polynomial. The Penrose polynomial of a vf-safe delta-matroid also admits a recursive definition. \[p.pdc\] Let $D=(E,{\mathcal{F}})$ be a vf-safe delta-matroid and $e\in E$. 1. If $e$ is a trivial orientable ribbon loop, then $P(D;\lambda) = (\lambda-1)P(D/e;\lambda)$. 2. If $e$ is a trivial non-orientable ribbon loop, then $P(D;\lambda) = -(\lambda-1)P((D+e)/e;\lambda)$. 3. If $e$ is not a trivial ribbon loop, then $P(D;\lambda)= P(D/e;\lambda) - P((D+e)/e;\lambda)$. 4. If $E=\emptyset$, then $P(D;\lambda)=1$. The recursion relation above for $P(D)$ replaces the term $(D\bar\ast e){\backslash}e$ appearing in its statement in [@BH12] with $(D+e)/e$, but it is easy to see that $(D+e)/e$ and $(D\bar\ast e){\backslash}e$ have exactly the same feasible sets. We have used $(D+e)/e$ rather than $(D\bar\ast e){\backslash}e$ to highlight the compatibility with Equation . Observe that Equation  and a recursive definition for the ribbon graph version of the Penrose polynomial can be recovered as a special case of Proposition \[p.pdc\] via Theorem \[t.pencom\]. It is worth noting that Equation  cannot be restricted to the class of plane graphs, and analogously that Proposition \[p.pdc\] cannot be restricted to binary matroids. 
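The subset expansion defining $P(D;\lambda)$ above and the recursion of Proposition \[p.pdc\] are easy to cross-check on one-element delta-matroids. The following Python sketch is purely illustrative (the representation of $\mathcal{F}$ as a set of frozensets and all function names are ours): it implements the twist, loop complementation and the dual pivot $\bar\ast$ directly from their definitions and evaluates $P(D;\lambda)$ as a map from exponents of $\lambda$ to coefficients.

```python
from itertools import combinations

def subsets(E):
    """All subsets of the ground set E, as frozensets."""
    E = list(E)
    return [frozenset(c) for r in range(len(E) + 1)
            for c in combinations(E, r)]

def twist(feas, X):
    """D * X: replace each feasible set F by its symmetric difference with X."""
    X = frozenset(X)
    return frozenset(F ^ X for F in feas)

def plus(feas, X):
    """D + X: loop complementation, one element of X at a time
    (loop complementations on distinct elements commute)."""
    for e in X:
        toggled = frozenset(F | {e} for F in feas if e not in F)
        feas = frozenset(feas) ^ toggled
    return feas

def dual_pivot(feas, X):
    """The dual pivot D barstar X = ((D * X) + X) * X."""
    return twist(plus(twist(feas, X), X), X)

def d(feas):
    """d_D = r(D_min): the size of a minimum-sized feasible set."""
    return min(len(F) for F in feas)

def penrose(E, feas):
    """P(D; lambda) = sum over X of (-1)^{|X|} lambda^{d_{D * E barstar X}},
    returned as {exponent: coefficient} with zero terms dropped."""
    dual = twist(feas, E)  # D * E
    poly = {}
    for X in subsets(E):
        exp = d(dual_pivot(dual, X))
        poly[exp] = poly.get(exp, 0) + (-1) ** len(X)
    return {k: v for k, v in poly.items() if v != 0}
```

For $E=\{e\}$ and $\mathcal{F}=\{\emptyset\}$ (a trivial orientable ribbon loop) this returns $\lambda-1$, and for $\mathcal{F}=\{\emptyset,\{e\}\}$ (a trivial non-orientable ribbon loop) it returns $1-\lambda=-(\lambda-1)$, in agreement with cases (1) and (2) of the recursion. For a coloop, $\mathcal{F}=\{\{e\}\}$, the two terms of case (3) cancel and the polynomial is identically zero.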
Thus restricting the polynomial to either of these classes, as was historically done, limits the possibility of inductive proofs of many results. This further illustrates the advantages of the more general settings of ribbon graphs or delta-matroids. Next, we show that the Penrose polynomial of a delta-matroid can be expressed in terms of the characteristic polynomials of associated matroids. The *characteristic polynomial*, $\chi(M;\lambda)$, of a matroid $M=(E,\mathcal{B})$ is defined by $$\chi(M;\lambda) := \sum_{A\subseteq E} (-1)^{|A|} \lambda^{r(M)-r(A)}.$$ The characteristic polynomial is known to satisfy deletion-contraction relations (see, for example, [@We76]). \[lem:char\] Let $e$ be an element of a matroid $M$. 1. If $e$ is a loop, then $\chi(M;\lambda)=0$. 2. If $e$ is a coloop, then $\chi(M;\lambda) = (\lambda-1)\chi(M/e;\lambda) = (\lambda-1)\chi(M\setminus e;\lambda)$. 3. If $e$ is neither a loop nor a coloop, then $\chi(M;\lambda) = \chi(M\setminus e;\lambda) - \chi(M/e;\lambda)$. We define the *characteristic polynomial*, $\chi(D;\lambda)$, of a delta-matroid $D$ to be $\chi(D_{\min};\lambda)$. Notice that this definition is consistent with the earlier definition in the case when $D$ is a matroid. To keep the notation manageable, we define $D^{\pi(A)}$ to be $(D+A)^*$. \[thm.pchi\] Let $D=(E,\mathcal{F})$ be a vf-safe delta-matroid. Then $$\label{eqn:penrose} P(D;\lambda) = \sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)} ;\lambda).$$ The proof proceeds by induction on the number of elements of $E$. If $E=\emptyset$ then both sides of Equation  are equal to 1. So assume that $E\ne \emptyset$ and let $e\in E$. Suppose that $e$ is a trivial ribbon loop of $D$. We have $$\sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)};\lambda) = \sum_{A\subseteq E-e} (-1)^{|A|} \chi(D^{\pi(A)};\lambda) - \sum_{A\subseteq E-e} (-1)^{|A|} \chi(D^{\pi(A \cup e)};\lambda).$$ Now suppose that in addition $e$ is orientable. 
Then by Lemmas \[lem:dualtwist\] and \[lem:penroseloops\], $e$ is a coloop in $D^{\pi(A)}$ and a trivial non-orientable ribbon loop in $D^{\pi(A \cup e)}$. So for each $A\subseteq E-e$, $\chi(D^{\pi(A \cup e)};\lambda)=0$ and by Lemma \[lem:char\] $$\chi(D^{\pi(A)};\lambda) = (\lambda-1) \chi(((D^{\pi(A)})_{\min})/e;\lambda).$$ Since $e$ is a coloop in $D^{\pi(A)}$, $$\begin{aligned} (D^{\pi(A)})_{\min}/e &= (D^{\pi(A)}/e)_{\min} = ((D+A)^*/e)_{\min}\\ &= ((D{\backslash}e +A)^*)_{\min} = ((D/e +A)^*)_{\min} = ((D/e)^{\pi(A)})_{\min},\end{aligned}$$ where the penultimate equality holds because $e$ is a loop in $D$ and hence $D{\backslash}e= D/e$. Therefore $\chi(((D^{\pi(A)})_{\min})/e;\lambda) = \chi((D/e)^{\pi(A)};\lambda)$. Hence $$\sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)},\lambda) = (\lambda-1)\sum_{A\subseteq E-e} (-1)^{|A|}\chi((D/e)^{\pi(A)};\lambda).$$ Using induction and Proposition \[p.pdc\], this equals $(\lambda-1)P(D/e;\lambda) = P(D;\lambda)$. Next suppose that $e$ is non-orientable. Then by Lemmas \[lem:dualtwist\] and \[lem:penroseloops\], $e$ is a trivial non-orientable ribbon loop in $D^{\pi(A)}$ and a coloop in $D^{\pi(A \cup e)}$. So for each $A\subseteq E-e$, $$\chi(D^{\pi(A \cup e)};\lambda) = (\lambda-1) \chi(((D^{\pi(A \cup e)})_{\min})/e;\lambda) = (\lambda-1)\chi(((D+e)/e)^{\pi(A)};\lambda)$$ and $\chi(D^{\pi(A)};\lambda)=0$. Hence $$\sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)},\lambda) = -(\lambda-1)\sum_{A\subseteq E-e} (-1)^{|A|} \chi(((D+e)/e)^{\pi(A)};\lambda).$$ Using induction, this equals $-(\lambda-1)P((D+e)/e;\lambda) = P(D;\lambda)$. We have covered the cases where $e$ is a trivial ribbon loop in $D$. So now we assume that this is not the case. 
Using induction, Proposition \[p.pdc\], and Lemma \[lem:opsswitch\] we have $$\begin{aligned} P(D;\lambda) &= P(D/e;\lambda) - P((D+e)/e;\lambda) \\ &=\sum_{A\subseteq E-e} (-1)^{|A|} \chi((D/e)^{\pi(A)},\lambda) - \sum_{A\subseteq E-e} (-1)^{|A|} \chi(((D+e)/e)^{\pi(A)},\lambda).\end{aligned}$$ On the other hand, $$\sum_{A\subseteq E} (-1)^{|A|} \chi(D^{\pi(A)},\lambda) = \sum_{A\subseteq E-e} (-1)^{|A|} \chi(D^{\pi(A)},\lambda) - \sum_{A\subseteq E-e} (-1)^{|A|} \chi(D^{\pi(A\cup e)},\lambda).$$ We will show that for each $A\subseteq E-e$ $$\label{eqn:penrosekey} \chi((D/e)^{\pi(A)},\lambda) - \chi(((D+e)/e)^{\pi(A)},\lambda) = \chi(D^{\pi(A)},\lambda) - \chi(D^{\pi(A\cup e)},\lambda),$$ which will be enough to complete the proof of the theorem. There are four cases depending on the role of $e$ in $D^{\pi(A)}$. First, suppose that $e$ is a trivial orientable ribbon loop in $D^{\pi(A)}$. Now by Lemma \[lem:opsswitch\], $(D/e)+A = (D+A)/e$ and $((D+e)/e)+A = (D+A+e)/e$. Moreover $e$ is a coloop in $D+A$ and so $D+A+e=D+A$. Hence $$\chi((D/e)^{\pi(A)},\lambda) = \chi(((D+e)/e)^{\pi(A)},\lambda) \qquad \text{and} \qquad \chi(D^{\pi(A)},\lambda) = \chi(D^{\pi(A\cup e)},\lambda).$$ Therefore Equation  holds. Second, suppose that $e$ is a non-trivial orientable ribbon loop in $D^{\pi(A)}$. Then by Lemma \[lem:dualtwist\], $e$ is also a non-trivial orientable ribbon loop in $D^{\pi(A\cup e)}$. So $e$ is a ribbon loop of both $D^{\pi(A)}$ and $D^{\pi(A\cup e)}$. Consequently $\chi(D^{\pi(A)},\lambda) = \chi(D^{\pi(A\cup e)},\lambda)=0$. On the other hand, by Lemma \[lem:opsswitch\] and standard properties of duality, $(D/e)^{\pi(A)} = (D^{\pi(A)}){\backslash}e$ and $((D+e)/e)^{\pi(A)} = (D^{\pi(A\cup e)}){\backslash}e$. Applying Lemma \[lem:penrosemins\] with $D$ replaced by $D+A$ shows that $\chi((D/e)^{\pi(A)},\lambda) = \chi(((D+e)/e)^{\pi(A)},\lambda)$. Therefore Equation  holds. Third, suppose that $e$ is neither a coloop nor a ribbon loop in $D^{\pi(A)}$. 
Then by Lemma \[lem:dualtwist\], $e$ is a non-trivial non-orientable ribbon loop in $D^{\pi(A\cup e)}$. Therefore $\chi(D^{\pi(A\cup e)},\lambda)=0$. By Lemma \[lem:char\], we have $$\chi(D^{\pi(A)},\lambda) = \chi(((D^{\pi(A)})_{\min}){\backslash}e,\lambda) - \chi(((D^{\pi(A)})_{\min})/ e,\lambda).$$ Because $e$ is not a ribbon loop in $D^{\pi(A)}$, we have $((D^{\pi(A)})_{\min}){\backslash}e = ((D^{\pi(A)}){\backslash}e)_{\min}$. Using Lemma \[lem:opsswitch\] and standard properties of duality this is in turn equal to $((D/e)^{\pi(A)})_{\min}$. Consequently $\chi(((D^{\pi(A)})_{\min}){\backslash}e,\lambda) = \chi((D/e)^{\pi(A)},\lambda)$. Because $e$ is not a ribbon loop in $D^{\pi(A)}$, $((D^{\pi(A)})_{\min})/e = ((D^{\pi(A)})/e)_{\min}$. By Lemma \[lem:penrosemins\] this equals $((D^{\pi(A\cup e)}){\backslash}e)_{\min}$. Using duality and Lemma \[lem:opsswitch\], this equals $(((D+e)/e)^{\pi(A)})_{\min}$. Hence $\chi(((D^{\pi(A)})_{\min})/e,\lambda) = \chi(((D+e)/e)^{\pi(A)},\lambda)$ and Equation  follows. The final case is similar. Suppose that $e$ is a non-trivial non-orientable ribbon loop in $D^{\pi(A)}$. Then by Lemma \[lem:dualtwist\] applied to $D+A+e$, $e$ is neither a coloop nor a ribbon loop in $D^{\pi(A\cup e)}$. We have $\chi(D^{\pi(A)},\lambda)=0$. By Lemma \[lem:char\], we have $$\chi(D^{\pi(A\cup e)},\lambda) = \chi(((D^{\pi(A\cup e)})_{\min}){\backslash}e,\lambda) - \chi(((D^{\pi(A\cup e)})_{\min})/ e,\lambda).$$ Because $e$ is not a ribbon loop in $D^{\pi(A\cup e)}$, we have $((D^{\pi(A\cup e)})_{\min}){\backslash}e = ((D^{\pi(A\cup e)}){\backslash}e)_{\min}$. Using Lemma \[lem:opsswitch\] and standard properties of duality this is in turn equal to $(((D+e)/e)^{\pi(A)})_{\min}$. Consequently $\chi(((D^{\pi(A\cup e)})_{\min}){\backslash}e,\lambda) = \chi(((D+e)/e)^{\pi(A)},\lambda)$. Because $e$ is not a ribbon loop in $D^{\pi(A\cup e)}$, $((D^{\pi(A\cup e)})_{\min})/e = ((D^{\pi(A\cup e)})/e)_{\min}$. 
By Lemma \[lem:penrosemins\] this equals $(D^{\pi(A)}{\backslash}e)_{\min}$. Using duality and Lemma \[lem:opsswitch\], this equals $((D/e)^{\pi(A)})_{\min}$. Hence $\chi(((D^{\pi(A\cup e)})_{\min})/e,\lambda) = \chi((D/e)^{\pi(A)},\lambda)$ and Equation  follows. Therefore the result follows by induction. If $G$ is a graph and $M(G)$ its graphic matroid, then $\chi(M(G);\lambda)=\lambda^{-k(G)} \chi(G;\lambda)$, where the $\chi$ on the right-hand side refers to the chromatic polynomial of $G$. Combining this fact with Theorem \[t.pencom\] allows us to recover the following result as a special case of Theorem \[thm.pchi\]. Let $G$ be a ribbon graph. Then $$P(G;\lambda) = \sum_{A\subseteq E(G)} (-1)^{ |A|} \chi (( G^{\tau(A)} )^* ;\lambda) ,$$ where $\chi (H;\lambda)$ denotes the chromatic polynomial of $H$. In fact, in keeping with the spirit of this paper, it was the existence of this ribbon graph polynomial identity that led us to formulate Theorem \[thm.pchi\]. We end this paper with some results regarding the transition polynomial. As observed by F. Jaeger in [@Ja90], the Penrose polynomial of a plane graph arises as a specialization of the transition polynomial, $q(G; W,t)$. J. Ellis-Monaghan and I. Moffatt introduced a version of the transition polynomial for embedded graphs, called the topological transition polynomial, in [@EMM]. This polynomial provides a general framework for understanding the Penrose polynomial $P(G)$ and the ribbon graph polynomial $R(G)$ as well as some knot and virtual knot polynomials. Let $E$ be a set. We define $\mathcal{P}_3(E) := \{ (E_1,E_2,E_3) : E=E_1\cup E_2 \cup E_3, E_i \cap E_j=\emptyset \text{ for each } i\neq j \}$. That is, $\mathcal{P}_3(E)$ is the set of ordered partitions of $E$ into three, possibly empty, blocks. Let $G$ be a ribbon graph. A [*weight system*]{} for $G$, denoted $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)$, is a set of ordered triples of elements in $\mathbb{Z}$ indexed by $E=E(G)$. 
That is, $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) := \{ (\alpha_e, \beta_e, \gamma_e) : e\in E, \text{ and }\alpha_e, \beta_e, \gamma_e\in \mathbb{Z}\}$. The *topological transition polynomial*, $Q(G, (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) , t) \in \mathbb{Z}[t] $, is defined by $$\label{topotransdef} Q(G, (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) , t) :=\sum_{(A,B,C) \in \mathcal{P}_3( E(G))} \Big( \prod_{e\in A}\alpha_e\Big) \Big( \prod_{e\in B} \beta_e\Big) \Big( \prod_{e\in C} \gamma_e\Big) t^{f( G^{\tau(C)}\setminus B)}.$$ If, in the set of ordered triples $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)$, we have $(\alpha_e, \beta_e, \gamma_e)=(\alpha, \beta, \gamma)$ for all $e\in E(G)$, then we write $(\alpha, \beta, \gamma)$ in place of $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)$. On the delta-matroid side, in [@BHpre2] the transition polynomial of a vf-safe delta-matroid was introduced. Suppose $D=(E,{\mathcal{F}})$ is a delta-matroid. A weight system $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)$ for $D$ is defined just as it was for ribbon graphs above. Then the [*transition polynomial*]{} of a vf-safe delta-matroid is defined by $$\label{deltatransdef} Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = \sum_{(A,B,C) \in \mathcal{P}_3( E)} \Big( \prod_{e\in A}\alpha_e\Big) \Big( \prod_{e\in B} \beta_e\Big) \Big( \prod_{e\in C} \gamma_e\Big) t^{d_{D\ast B \bar{\ast} C}}.$$ Again we use the notation $(\alpha, \beta, \gamma)$ to denote the weight system in which each $e\in E$ has weight $(\alpha, \beta, \gamma)$. We will need a twisted duality relation for the transition polynomial. In order to state this relation, we introduce a little notation. Let $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) = \{(\alpha_e, \beta_e, \gamma_e)\}_{e\in E} $ be a weight system for a delta-matroid $D=(E,{\mathcal{F}})$, and let $A\subseteq E$. 
Define $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)\ast A$ to be the weight system $\{(\alpha_e, \beta_e, \gamma_e)\}_{e\in E{\backslash}A} \cup \{( \beta_e,\alpha_e, \gamma_e)\}_{e\in A}$. Also define $(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)+ A$ to be the weight system $\{(\alpha_e, \beta_e, \gamma_e)\}_{e\in E{\backslash}A} \cup \{( \alpha_e, \gamma_e,\beta_e)\}_{e\in A}$. For each word $w=w_1\ldots w_n \in \langle * , + \mid *^2, +^2, (*+)^3 \rangle$ define $$(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) w (A):= (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma) w_n (A) w_{n-1}(A)\cdots w_1(A) .$$ \[t.tdrandual\] Let $D=(E,{\mathcal{F}})$ be a vf-safe delta-matroid. If $A_1, \ldots ,A_n \subseteq E$, and $g_1,\ldots, g_n \in \langle \ast ,+ \mid \ast^2, +^2, (\ast+)^3 \rangle$, and $\Gamma= g_1(A_1) g_2(A_2)\cdots g_n(A_n)$, then $$Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D; t)=Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)\Gamma}(D\Gamma; t).$$ Once again we see that a delta-matroid polynomial is compatible with a ribbon graph polynomial. \[prop.qd\] Let $G$ be a ribbon graph and $D(G)$ be its ribbon-graphic delta-matroid. Then $$Q(G; (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma),t)= t^{k(G)} Q_{(\boldsymbol\beta, \boldsymbol\alpha, \boldsymbol\gamma)}(D(G) ; t).$$ By Theorem \[t.tdrandual\], $t^{k(G)} Q_{(\boldsymbol\beta, \boldsymbol\alpha, \boldsymbol\gamma)}(D(G) ; t) = t^{k(G)} Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D(G)\ast E ; t) $. Then, by comparing Equation  and Equation  we see that $ d_{D(G)\ast E\ast B\bar{\ast}C} + k(G) = f(G^{\tau(C)} {\backslash}B) $. 
Theorem \[t.td\] and the properties of twisted duals from Section \[4th\] give $$D(G)\ast E\ast B\bar{\ast}C = D(G^{\delta(E) \delta(B) \delta \tau\delta(C)}) = D(G^{\delta(A)\tau\delta(C)}).$$ Then using that $r(D(G))= v(G)-k(G)$ and the properties of twisted duals once again, $$\begin{aligned} d_{D(G)\ast E\ast B\bar{\ast}C} &= r((D(G)\ast E\ast B\bar{\ast}C)_{\min}) = r(D( G^{\delta(A)\tau\delta(C)} )_{\min}) \\ &= v( G^{\delta(A)\tau\delta(C)} ) - k( G^{\delta(A)\tau\delta(C)} ) =f( G^{\delta(A)\tau\delta(C) \delta(E) } ) - k( G) \\ &= f(G^{\tau(C) \delta(B) })- k( G)= f(G^{\tau(C)\delta(B)}/B)- k(G)= f(G^{\tau(C)} {\backslash}B )- k( G), \end{aligned}$$ as required. We note that the twisted duality relation for the ribbon graph version of the transition polynomial from [@EMM] can be recovered from Proposition \[prop.qd\] and Theorem \[t.tdrandual\]. In [@BHpre2], R. Brijder and H. Hoogeboom give a recursion relation (and a recursive definition) for the transition polynomial of a vf-safe delta-matroid, which we now reformulate. \[t.dqdct\] Let $D=(E,{\mathcal{F}})$ be a vf-safe delta-matroid and let $e\in E$. 1. If $e$ is a trivial orientable ribbon loop, then $$Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = \alpha_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D\backslash e;t) + t \beta_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D/e;t) +\gamma_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D+e)/e ;t) .$$ 2. If $e$ is a trivial non-orientable ribbon loop, then $$Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = \alpha_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D\backslash e;t) + \beta_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D/e;t) +t\gamma_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D+e)/e ;t) .$$ 3. 
If $e$ is a coloop, then $$Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = t \alpha_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D\backslash e;t) + \beta_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D/e;t) +\gamma_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D+e)/e ;t) .$$ 4. If $e$ meets none of the above conditions, then $$Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = \alpha_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D\backslash e;t) + \beta_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D/e;t) +\gamma_e Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D+e)/e ;t) .$$ 5. $Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t) = 1$, when $E=\emptyset$. The theorem is Theorem 3 of [@BHpre2] except that $Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D\bar{\ast}e){\backslash}e ;t)$ appears in the relations in [@BHpre2] rather than $Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}((D+e)/e ;t)$. As we noted after Proposition \[p.pdc\], these two delta-matroids are the same. Using Proposition \[prop.qd\], we can see that Theorem \[t.dqdct\] is the direct delta-matroid analogue of the recursion relation for the ribbon graph version of the transition polynomial from [@EMM]: $$\label{e.dctrans} Q(G; (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma), t)= \alpha_e Q(G/e; (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma), t) + \beta_e Q(G\backslash e; (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma), t)+\gamma_e Q(G^{\tau(e)}/e; (\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma), t).$$ It is known that $R(G)$ and $P(G)$ are encapsulated by the transition polynomial (see [@ES11] and [@EMM11b] respectively). These relations between the polynomials hold more generally for delta-matroids. 
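The one-element cases of Theorem \[t.dqdct\] can likewise be checked against the tripartition expansion defining $Q_{(\boldsymbol\alpha, \boldsymbol\beta, \boldsymbol\gamma)}(D;t)$. The Python sketch below is again only an illustration (numeric values stand in for the weights $(\alpha_e,\beta_e,\gamma_e)$, and the helper functions are our own): it assigns each element of $E$ to one of the blocks $A$, $B$, $C$ and accumulates $t^{d_{D\ast B \bar{\ast} C}}$.

```python
from itertools import product

def twist(feas, X):
    """D * X: symmetric difference of each feasible set with X."""
    X = frozenset(X)
    return frozenset(F ^ X for F in feas)

def plus(feas, X):
    """D + X: loop complementation on each element of X in turn."""
    for e in X:
        toggled = frozenset(F | {e} for F in feas if e not in F)
        feas = frozenset(feas) ^ toggled
    return feas

def dual_pivot(feas, X):
    """D barstar X = ((D * X) + X) * X."""
    return twist(plus(twist(feas, X), X), X)

def d(feas):
    """d_D: size of a minimum-sized feasible set."""
    return min(len(F) for F in feas)

def transition(E, feas, weights):
    """Q_{(a,b,c)}(D; t): sum over ordered tripartitions (A, B, C) of E of
    the product of the chosen weights times t^{d_{D * B barstar C}}.
    weights maps each element e to its triple (alpha_e, beta_e, gamma_e).
    Returns the polynomial in t as {exponent: coefficient}."""
    E = sorted(E)
    poly = {}
    for blocks in product(range(3), repeat=len(E)):  # 0 -> A, 1 -> B, 2 -> C
        B = frozenset(e for e, b in zip(E, blocks) if b == 1)
        C = frozenset(e for e, b in zip(E, blocks) if b == 2)
        w = 1
        for e, b in zip(E, blocks):
            w *= weights[e][b]
        exp = d(dual_pivot(twist(feas, B), C))
        poly[exp] = poly.get(exp, 0) + w
    return {k: v for k, v in poly.items() if v != 0}
```

With $E=\{e\}$ and weights $(\alpha_e,\beta_e,\gamma_e)=(2,3,5)$, the trivial orientable ribbon loop $\mathcal{F}=\{\emptyset\}$ gives $\alpha_e+t\beta_e+\gamma_e = 7+3t$, and the trivial non-orientable ribbon loop $\mathcal{F}=\{\emptyset,\{e\}\}$ gives $\alpha_e+\beta_e+t\gamma_e = 5+5t$, exactly as cases (1) and (2) of the theorem predict, since the three one-element-smaller delta-matroids all have $Q=1$.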
The *Bollobás–Riordan polynomial* of a ribbon graph $G$, from [@BR1; @BR2], is given by $$R(G;x,y,z) = \sum_{A \subseteq E( G)} (x - 1)^{r( E ) - r( A )} y^{|A|-r(A)} z^{\gamma(A)} ,$$ where $r(A)$ is the rank of the (cycle matroid of the) spanning ribbon subgraph $(V,A)$, and $\gamma(A)$ is its Euler genus. Chun et al. established in [@CMNR] that $R(G)$ is delta-matroidal in the sense that it is determined by $D(G)$. In that paper the Bollobás–Riordan polynomial was extended to delta-matroids by $$R(D;x,y,z) := \sum_{A \subseteq E} (x - 1)^{r_{D_{\min}}( E ) - r_{D_{\min}}( A )} y^{|A|-r_{D_{\min}}(A)} z^{r((D|A)_{\max})-r((D|A)_{\min})},$$ and it was shown that $R(G;x,y,z)=R(D(G);x,y,z)$, for each ribbon graph $G$. R. Brijder and H. Hoogeboom [@BH12] showed that for a vf-safe delta-matroid, $Q_{(0,1,-1)}(D; \lambda) = P(D; \lambda)$. We also obtain the Bollobás–Riordan polynomial as a specialization of the transition polynomial. \[p.dqpr\] Let $D$ be a delta-matroid. Then $$Q_{(1,\sqrt{y/x}, 0)} \big(D ; \sqrt{xy}\big) = (\sqrt{y/x})^{r(E)} R(D; x+1, y, 1/\sqrt{xy}).$$ Let $D=(E,\mathcal{F})$ be a delta-matroid. For simplicity we write $ n(A)$ for $n_{D_{\min}}(A)$, and $r(A)$ for $r_{D_{\min}}(A)$. In [@CMNR], it is shown that $r((D|A)_{\min})=r(A)$ and $$r((D|A)_{\max})=\rho_D(A)-n(E) + n(A),$$ where $$\begin{aligned} \rho_D (A) &= |E|-\min\{|A\bigtriangleup F| : F\in \mathcal{F}(D)\} = |E|-\min\{ |F'| : F'=A\bigtriangleup F, F\in \mathcal{F}(D)\}\\ &= |E|-\min\{|F'| : F'\in \mathcal{F}(D\ast A)\} = |E| - d_{D\ast A}. \end{aligned}$$ Thus $$\begin{aligned} R(D;x+1,y,1/\sqrt{xy}) &= \sum_{A \subseteq E} x^{r(E) - r(A)} y^{|A|-r(A)} (1/\sqrt{xy})^{|E|-d_{D\ast A}-n(E)+n(A)-r(A)}\\ & = (\sqrt{x/y})^{r(E)} \sum_{(A,B,C) \in \mathcal{P}_3 (E)} 1^{|A|} (\sqrt{y/x})^{|B|} 0^{|C|} (\sqrt{xy})^{d_{D\ast B}} \\ & = (\sqrt{x/y})^{r(E)} Q_{(1, \sqrt{y/x}, 0)}(D; \sqrt{xy}). \end{aligned}$$ The final result of our paper is the following corollary. 
Let $D=(E,{\mathcal{F}})$ be a delta-matroid, and $e\in E$. 1. If $e$ is a trivial orientable ribbon loop, then $$R(D; x+1, y, 1/\sqrt{xy}) = R(D {\backslash}e; x+1, y, 1/\sqrt{xy}) + y R(D / e; x+1, y, 1/\sqrt{xy}).$$ 2. If $e$ is a non-trivial orientable ribbon loop, then $$R(D; x+1, y, 1/\sqrt{xy}) = R(D {\backslash}e; x+1, y, 1/\sqrt{xy}) + (y/x) R(D / e; x+1, y, 1/\sqrt{xy}).$$ 3. If $e$ is a non-orientable ribbon loop, then $$R(D; x+1, y, 1/\sqrt{xy}) = R(D {\backslash}e; x+1, y, 1/\sqrt{xy}) + (\sqrt{y/x}) R(D / e; x+1, y, 1/\sqrt{xy}).$$ 4. If $e$ is a coloop, then $$R(D; x+1, y, 1/\sqrt{xy}) = x R(D {\backslash}e; x+1, y, 1/\sqrt{xy}) + R(D / e; x+1, y, 1/\sqrt{xy}).$$ 5. If $e$ is neither a ribbon loop nor a coloop, then $$R(D; x+1, y, 1/\sqrt{xy}) = R(D {\backslash}e; x+1, y, 1/\sqrt{xy}) + R(D / e; x+1, y, 1/\sqrt{xy}).$$ 6. $R( D ; x+1, y, 1/\sqrt{xy} ) = 1$, when $E=\emptyset$. The result follows from Theorem \[t.dqdct\] and Proposition \[p.dqpr\] after simplifying the terms $d_{D{\backslash}e}-d_D$ and $d_{D/e}-d_D$. The simplification of these terms is straightforward except for the computation of $d_{D/e}-d_D$ when $e$ is a non-trivial orientable ribbon loop. Since $e$ is an orientable ribbon loop, it is not a ribbon loop in $D*e$, so $d_{D*e}=d_D+1$. Moreover, $e$ is non-trivial, so it is not a coloop of $D*e$, therefore by Lemma \[lem:useful\], there is a minimal feasible set of $D*e$ that does not contain $e$. Hence $d_{D/e}=d_{(D*e){\backslash}e}=d_{D*e}=d_D+1$. By restricting to the delta-matroids of ribbon graphs we can recover the deletion-contraction relations for the ribbon graph version of the polynomial that appear in Corollary 4.42 of [@EMMbook]. [99]{} M. Aigner, The Penrose polynomial of a plane graph, *Math. Ann.* **307** (1997) 173–189. M. Aigner, and H. Mielke, The Penrose polynomial of binary matroids, *Monatsh. Math.* **131** (2000) 1–13. B. Bollobás, and O. Riordan, A polynomial for graphs on orientable surfaces, *Proc. Lond. Math. 
Soc.* **83** (2001) 513–531. B. Bollobás, and O. Riordan, A polynomial of graphs on surfaces, *Math. Ann.* **323** (2002) 81–96. A. Bouchet, Greedy algorithm and symmetric matroids, *Math. Program.* **38** (1987) 147–159. A. Bouchet, Maps and delta-matroids, *Discrete Math.* **78** (1989) 59–71. A. Bouchet, and A. Duchamp, Representability of delta-matroids over $GF(2)$, *Linear Algebra Appl.* **146** (1991) 67–78. R. Brijder, H. Hoogeboom, The group structure of pivot and loop complementation on graphs and set systems, *European J. Combin.* **32** (2011) 1353–1367. R. Brijder, and H. Hoogeboom, Nullity and loop complementation for delta-matroids, *SIAM J. Discrete Math.* **27** (2013), 492–506. R. Brijder, and H. Hoogeboom, Interlace Polynomials for delta-Matroids, preprint, arXiv:1010.4678. R. Brijder, and H. Hoogeboom, Bicycle matroids and the Penrose Polynomial for delta-matroids, preprint, arXiv:1210.7718. S. Chmutov, Generalized duality for graphs on surfaces and the signed Bollobás–Riordan polynomial, *J. Combin. Theory Ser. B* **99** (2009) 617–638. C. Chun, I. Moffatt, S. D. Noble, and R. Rueckriemen, Matroids, delta-matroids, and embedded graphs, preprint. A. Duchamp, Delta-matroids whose fundamental graphs are bipartite. *Linear Algebra Appl.* **160** (1992) 99–112. J. Ellis-Monaghan and I. Moffatt, Twisted duality for embedded graphs, *Trans. Amer. Math. Soc.* **364** (2012) 1529–1569. J. Ellis-Monaghan and I. Moffatt, A Penrose polynomial for embedded graphs, *European. J. Combin.* **34** (2013) 424–445. J. Ellis-Monaghan and I. Moffatt, *Graphs on surfaces: Dualities, Polynomials, and Knots*, Springer, (2013). J. Ellis-Monaghan and I. Moffatt, Evaluations of topological Tutte polynomials, *Combin. Probab. Comput.*, **24** (2015) 556–583. J. Ellis-Monaghan, and I. Sarmiento, A recipe theorem for the topological Tutte polynomial of Bollobas and Riordan, *European J. Combin.* **32** (2011) 782–794. J. Geelen, S. Iwata, and K. 
Murota, The linear delta-matroid parity problem, *J. Combin. Theory Ser. B* **88** (2003) 377–398. J. Gross and T. Tucker, *Topological graph theory*, Wiley-interscience publication, (1987). S. Huggett and I. Moffatt, Bipartite partial duals and circuits in medial graphs, *Combinatorica* **33** (2013) 231–252. F. Jaeger, On transition polynomials of $4$-regular graphs. Cycles and rays (Montreal, PQ, 1987), *NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.* **301**, Kluwer Acad. Publ., Dordrecht, (1990) 123–150. I. Moffatt, A characterization of partially dual graphs, *J. Graph Theory* **67** (2011) 198–217. I. Moffatt, Partial duals of plane graphs, separability and the graphs of knots, *Algebr. Geom. Topol.* **12** (2012) 1099–1136. I. Moffatt, Separability and the genus of a partial dual, *European J. Combin.* **34** (2013) 355–378. I. Moffatt, Excluded minors and the ribbon graphs of knots, *J. Graph Theory*, to appear. J. Oxley, On the interplay between graphs and matroids, in J. W. P. Hirschfeld (ed.), *Surveys in Combinatorics, 2001*, London Math. Soc. Lecture Notes **288**, Cambridge University Press, Cambridge, (2001) 199–239. J. Oxley, *Matroid theory*, Second edition, Oxford University Press, New York, (2011). R. Penrose, Applications of negative dimensional tensors, in *Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969)*, Academic Press, London, (1971) 221–244. D. Welsh, Euler and bipartite matroids, *J. Combin. Theory* **6** (1969) 375–377. D. Welsh, *Matroid theory*, London Math. Soc. Monographs **8** Academic Press, London–New York, (1976). S. E. Wilson, Operators over regular maps, *Pacific J. Math.* **81** (1979) 559–568. [^1]: Ralf Rueckriemen was financed by the DFG through grant RU 1731/1-1.
--- abstract: 'We calculate the spin-Hall conductivity for a two-dimensional electron gas within the self-consistent Born approximation, varying the strength and type of disorder. In the weak disorder limit we find both analytically and numerically a vanishing spin-Hall conductivity even when we allow a momentum dependent scattering. Separating the reactive from the dissipative current response, we find the universal value $\sigma^R_{sH} = e/8 \pi$ for the reactive response, which cancels however with the dissipative part $\sigma^D_{sH} = -e/8 \pi$.' author: - Roberto Raimondi - Peter Schwab title: 'Spin-Hall effect in a disordered 2D electron-system ' --- Spin-orbit coupling in two dimensional electron systems allows a number of unconventional transport phenomena, since charge current and the spin degrees of freedom are coupled [@silsbee04]. In particular the spin-Hall effect in two-dimensional electron systems, i.e., a spin-current which flows in the plane but perpendicular to the electrical current, and which is polarized perpendicular to the plane, has been discussed intensively [@murakami03; @sinova04; @culcer03; @sinitsyn03; @shen03; @schliemann04; @burkov03; @inoue04; @murakami04; @xiong04; @nomura04; @dimitrova04; @chalaev04; @mishchenko04; @rashba04; @khaetskii04] over the last year. The spin-Hall conductivity connects the spin-current with an electric field, $j_y^z = \sigma_{sH}E_x$, where $j_y^z$ denotes a current in $y$-direction with spin-polarization in $z$-direction. In a clean two dimensional electron gas, the spin-Hall conductivity was predicted to have a universal value $\sigma_{sH} = e/8 \pi$, independent of the strength of the spin-orbit scattering [@sinova04]. Several publications have addressed the issue of whether this result is modified in the presence of impurity scattering.
Murakami [@murakami04] analyzed the Luttinger-Hamiltonian[@luttinger1956], which applies to two-dimensional hole gases, and concluded that the spin-Hall conductivity in the limit of weak impurity scattering reproduces the intrinsic value (at least when restricting to s-wave impurity scattering). For the Rashba model[@rashba1984], which applies to two-dimensional electron gases, conflicting results exist in the literature: By applying the standard Green’s function techniques Inoue et al. [@inoue04] and Mishchenko et al. [@mishchenko04] concluded that $s$-wave impurities suppress the spin-Hall effect in bulk samples even when the disorder broadening of the energy levels is small compared to the spin-orbit splitting [@khaetskii04]. On the other hand, Dimitrova [@dimitrova04] and Chalaev et al. [@chalaev04], starting from the same model Hamiltonian and applying similar methods, found a non-zero spin-Hall conductivity. Even direct numerical evaluations of the effect do not fully agree with each other: Xiong and Xie [@xiong04] found within a scattering matrix approach the universal value of the spin-Hall conductance $G_{sH}= e/8 \pi$ over a large parameter range. Nomura et al. [@nomura04] on the other hand found a spin-Hall conductivity of the order of but not identical to the universal value. In this paper we calculate the spin-Hall conductivity for a bulk sample within the self-consistent Born approximation. We confirm Refs. [@inoue04; @burkov03; @mishchenko04], i.e., we find that even a weak disorder suppresses the spin-Hall conductivity. For $s$-wave scatterers we calculate the impurity self-energy and the dressed current vertex numerically. This allows us to obtain results beyond the limit $\epsilon_F \tau \to \infty$, which is accessible analytically. We find that a non-zero spin-Hall conductivity is, in principle, possible although it remains much smaller than $e/8\pi$. 
Our calculations are based on our previous work [@raimondi2001; @schwab2002], where a number of technical details can be found. In the following we sketch the derivation of the spin-Hall conductivity. The starting point is the Hamiltonian $$\label{eq1} H = \frac{p^2}{2m} + \alpha \mathbf{ \sigma } \cdot {\bf p} \times {\bf e}_z ,$$ where the parameter $\alpha$ describes the strength of the spin-orbit coupling, $\sigma$ is a vector of Pauli matrices, and ${\bf e}_z$ is a unit vector perpendicular to the two dimensional system. The spin-Hall conductivity is obtained by the standard linear response theory as $$\begin{aligned} \label{eq2} \sigma_{sH} & = &\lim_{\omega\rightarrow 0} \frac{e}{ \omega} \int \frac{ {{\rm d}}\epsilon }{2 \pi} {{\rm Tr}}\left[ j_s^y \overline{ G^<(\epsilon) j_c^x G^A(\epsilon - \omega) } \right. \cr && \left. + j_s^y \overline{ G^R(\epsilon) j_c^x G^<(\epsilon - \omega) } \right] ,\end{aligned}$$ with $G^<(\epsilon) = f(\epsilon) (G^R- G^A)$, $f(\epsilon)$ being the Fermi function. In Eq. (\[eq2\]), the spin- and charge-current operators are given by ${\bf j}_{s} = (1/4)\lbrace \sigma_z {\bf v} + {\bf v} \sigma_z \rbrace$ and ${\bf j}_{c} = {\bf v}$, respectively. The velocity operator, ${\bf v}$, is obtained from the Hamiltonian (\[eq1\]) and reads $v^{x,y}=p^{x,y}/m\mp \alpha\sigma_{y,x}$. We choose the electron charge as $-e$ $(e>0)$. The trace in Eq. (\[eq2\]) is over the eigenstates of the Hamiltonian, and the bar indicates that the expression must be averaged over the disorder configurations. When performing the disorder average, we rely on the self-consistent Born approximation. To begin with, we consider point-like, i.e., pure $s$-wave scatterers. The retarded/advanced impurity self-energy is then given by $$\label{eq3} \Sigma^{R,A} = \frac{1}{2 \pi N_0 \tau}\sum_{\bf p} G^{R,A}({\bf p}) .$$ Due to the spin-orbit coupling, the Green’s functions have a non-trivial structure in the spin-space, although the self-energy remains diagonal. 
Explicitly, one finds that $\Sigma_{ss'}= \Sigma_0 \delta_{ss'}$, $G_{ss'} = G_0 \delta_{ss'} + G_1 \sigma^x_{ss'} + G_2 \sigma^y_{ss'}$ with $$\begin{aligned} \label{eq4} G_0({\bf p}) & = & \frac{1}{2} \left(G_+ + G_- \right) \\ \label{eq5} G_1( {\bf p})& = & \frac{1}{2} \frac{p_y}{p} \left(G_+ - G_-\right) \\ \label{eq6} G_2( {\bf p})& = & - \frac{1}{2} \frac{p_x}{p}\left(G_+ - G_-\right) \\ \label{eq7} G_{\pm}& = & \left( \epsilon + \mu - \frac{p^2}{2 m} \mp \alpha p - \Sigma_0 \right)^{-1}.\end{aligned}$$ By taking the zero frequency limit of Eq. (\[eq2\]), the spin-Hall conductivity reads $$\label{eq8} \sigma_{sH} = - \frac{e}{4\pi} \sum_{\bf p}{\rm Tr}_\sigma \left[ 2 j^y_s G^R({\bf p})J^x_c G^A({\bf p})\right] ,$$ since terms of the type $G^R G^R$ and $G^A G^A$ contribute only in the order $(1/\epsilon_F \tau )(\alpha /v_F)^2 $ and can be safely neglected in the limit $\alpha p_F \ll \epsilon_F$ and/or for weak disorder $\epsilon_F \tau \gg 1$. The charge current $J^x_c$ has to be calculated including the vertex corrections, $J^{x}_c=p^{x}/m+\Gamma^{x}$, compare Eq. (33) of Ref. [@schwab2002]. In the case of $s$-wave impurity scattering, the momentum dependent part of the current vertex is not renormalized, while the momentum independent, but spin-dependent part, $\Gamma^{x}$, is obtained by solving the set of equations $$\label{eq9} \Gamma_{ss'}^{x}=\gamma^{x}_{ss'}+\frac{1}{2\pi N_0\tau}\sum_{\bf p}\sum_{ab} G^R_{sa}\Gamma_{ab}^{x}G^A_{bs'},$$ with the [*effective*]{} bare vertex given by $$\label{eq10} \gamma^{x}_{ss'}= - \alpha\sigma^{y}_{ss'}+\frac{1}{2 \pi N_0 \tau} \sum_{{\bf p},a} G^R_{sa}({\bf p})\frac{p_x}{m} G^A_{as'}({\bf p}) .$$ By expanding $\Gamma^x_{ss'}= \sum_\mu \Gamma^x_\mu \sigma^\mu_{ss'} $ in Pauli matrices we obtain the spin-Hall conductivity as $$\label{eq11} \sigma_{sH}=- \frac{e}{\pi}\, \Gamma^x_2 \, {{\rm Im}}\sum_{\bf p }\frac{p_y}{m}G^R_0({\bf p}) G^A_1({\bf p}) \\ .$$ Performing the momentum integration Eq. 
(\[eq11\]) under the restriction that $\alpha p_F \ll \epsilon_F$ and $\epsilon_F \tau \gg 1$ leads to $$\label{eq11a} \sigma_{sH}= - \frac{e}{\pi}\, \Gamma^x_2 \, \pi N_0 \tau \frac{\alpha p_F v_F \tau}{1+ 4 \alpha^2 p_F^2 \tau^2} .$$ Apparently $\sigma_{sH}$ goes to zero when the spin-splitting, $\alpha p_F$, is small compared to the disorder broadening of the levels, $1/\tau$. If we neglect vertex corrections, i.e., if we insert in Eq. (\[eq11\]) the bare vertex $\Gamma^x_2=-\alpha$, we find $$\label{eqBare} \sigma_{sH}\big|_{\rm bare \, vertex}= \frac{e}{8\pi} \frac{4 \alpha^2 p_F^2 \tau^2}{1+4\alpha^2p_F^2\tau^2}$$ i.e., the universal value $\sigma_{sH} =e/8\pi$ is recovered in the weak disorder limit. On the other hand, by inserting the dressed vertex $\Gamma_2^x \approx 0$ as calculated in Refs. [@raimondi2001; @schwab2002] one finds that $\sigma_{sH} \approx 0$. Our result then agrees with that found in [@inoue04; @burkov03; @mishchenko04]. As explained in Ref. [@schwab2002], the vanishing of the dressed vertex $\Gamma^x_2$ is due to the fact that the integral on the right-hand side of Eq. (\[eq10\]) gives $\approx\alpha \sigma^y$, making the [*effective*]{} bare vertex $\gamma$ itself vanish. A more careful numerical evaluation of the integral for arbitrary disorder strength actually shows that the compensation of the two terms in Eq. (\[eq10\]) is exact only in the weak disorder limit, i.e., $\epsilon_F \tau \gg 1$. In Fig. \[fig1\] we show the dressed vertex as a function of disorder. $\Gamma_2^x$ goes to zero as $1/\epsilon_F \tau \to 0 $, nothing special is observed as $\alpha p_F \sim 1/\tau$, and even in the strong disorder limit $\Gamma_2^x$ remains much smaller than its bare value ($-\alpha$). We conclude that although, in principle, a non-zero spin-Hall conductivity may be obtained, one expects a much smaller value than the universal one.
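The crossover contained in Eq. (\[eqBare\]) is easy to evaluate numerically. The following sketch (ours, in units $e = 1$) simply plots out the bare-vertex formula; the function name and parameter values are illustrative.

```python
import numpy as np

def sigma_sh_bare_vertex(alpha_pf_tau):
    """Bare-vertex spin-Hall conductivity of Eq. (eqBare), in units e = 1.
    alpha_pf_tau is the dimensionless spin splitting alpha * p_F * tau."""
    x2 = (2.0 * alpha_pf_tau) ** 2
    return (1.0 / (8.0 * np.pi)) * x2 / (1.0 + x2)

# The universal value e/(8*pi) is approached only when the spin splitting
# exceeds the disorder broadening, alpha * p_F * tau >> 1.
clean = sigma_sh_bare_vertex(100.0)   # clean limit, ~ e/(8*pi)
dirty = sigma_sh_bare_vertex(0.01)    # strong disorder, suppressed
```

The same one-liner with the dressed vertex $\Gamma_2^x \approx 0$ in place of $-\alpha$ would of course return zero, which is the main point of the text.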
![\[fig1\] The dressed vertex $\Gamma^x_2$ in units of $\alpha$ as function of disorder strength $1/(\epsilon_F\tau )$. $\Gamma_2^x$ enters the dressed charge current, $J_c^x = p_x/m + \Gamma_2^x \sigma_y $ and thus the spin-Hall conductivity, Eq. (\[eq11a\]). In comparison with the bare charge current, $j_c^x = p_x/m -\alpha \sigma_y $, the spin-dependence is strongly reduced. ](fig1.eps){height="5.0cm"} Next we address the question whether the spin-Hall effect is sensitive to the type of disorder potential. Inoue et al. [@inoue04] argued that $\sigma_{sH}$ may be non-zero for long-range defect potentials, although an explicit result has not been given. In our calculation we follow again closely [@schwab2002] where all the details can be found. Here we assume weak disorder, so that the inequalities $\epsilon_F \gg \alpha p_F \gg 1/\tau$ hold. In the following, we work in the eigenstate basis of the Hamiltonian (\[eq1\]) $$| {\bf p} \pm \rangle = \frac{1}{\sqrt{2}} \left\{ \pm {\rm i} \exp(-{\rm i }\varphi ) | {\bf p } \uparrow \rangle + |{\bf p} \downarrow \rangle \right\}$$ where $\tan(\varphi)= p_y/p_x $ and the corresponding eigenvalues are $E_{\pm} = p^2/2m \pm \alpha p$. In this basis, the matrix elements of the current operators read $$\begin{aligned} \langle {\bf p } \pm | j^s_y |{\bf p} \pm \rangle &=& 0 \\ \langle {\bf p } \pm | j^s_y |{\bf p} \mp \rangle &=& -\frac{1}{2}\frac{p}{m} \sin( \varphi ) \\ \langle {\bf p } \pm | j^x_c |{\bf p} \pm \rangle &=& \left( \frac{p}{m} \pm \alpha \right) \cos(\varphi ) \\ \langle {\bf p } \mp | j^x_c |{\bf p} \pm \rangle &=& \mp {{\rm i}}\alpha \sin(\varphi) .\end{aligned}$$ To use Eq. (\[eq8\]) we need the dressed charge operator, $J^x_c$.
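As a consistency check on the eigenstate basis, one can diagonalize the Hamiltonian (\[eq1\]) numerically and compare with $E_{\pm} = p^2/2m \pm \alpha p$; the short sketch below (ours, with arbitrary parameter values) does exactly this.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_hamiltonian(p, phi, m=1.0, alpha=0.3):
    """H = p^2/2m + alpha * sigma . (p x e_z) for p = p (cos phi, sin phi).
    Since p x e_z = (p_y, -p_x, 0), the spin part is p_y*sigma_x - p_x*sigma_y."""
    px, py = p * np.cos(phi), p * np.sin(phi)
    return (p**2 / (2.0 * m)) * np.eye(2) + alpha * (py * sigma_x - px * sigma_y)

p, phi, m, alpha = 1.2, 0.7, 1.0, 0.3
E = np.linalg.eigvalsh(rashba_hamiltonian(p, phi, m, alpha))  # ascending order
```

`E` reproduces $[p^2/2m - \alpha p,\; p^2/2m + \alpha p]$, confirming that the splitting is isotropic in $\varphi$, as used throughout the momentum integrations.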
Since, as seen from the above equations, the spin-current operator is off-diagonal in the eigenstate basis we get the spin-Hall conductivity in the form $$\sigma_{sH} = - \frac{e}{\pi} \sum_{\bf p} {{\rm Re}}\left[ \langle {\bf p } + | j^y_s |{\bf p} - \rangle \langle {\bf p } - | J^x_c |{\bf p} + \rangle G^R_-({\bf p}) G^A_+({\bf p}) \right] .$$ To calculate the dressed current operator we make use of the assumption that the spin-orbit splitting is large compared to the impurity broadening of the levels, $\alpha p_F \gg 1/\tau$. The off-diagonal matrix elements of the current operator are then obtained in terms of the diagonal ones, $$\begin{aligned} \lefteqn{ \langle {\bf p } \mp | J^x_c |{\bf p} \pm \rangle = \langle {\bf p } \mp | j^x_c |{\bf p} \pm \rangle} \nonumber\\[0.5mm] && + \sum_{{\bf p'}, m} \Big[ \langle {\bf p } \mp | V |{\bf p}' m \rangle \, \langle {\bf p}' m | V |{\bf p} \pm \rangle \nonumber\\[0.5mm] && \times G^R_{m}({\bf p}')G^A_m({\bf p}') \langle {\bf p' } m | J^x_c |{\bf p}' m \rangle \Big] .\end{aligned}$$ The diagonal matrix elements on the other hand were already considered in Ref. [@schwab2002], and are obtained from the equation $$\begin{aligned} \lefteqn{ \langle {\bf p } \pm | J^x_c |{\bf p} \pm \rangle = \langle {\bf p } \pm | j^x_c |{\bf p} \pm \rangle } \\ && + \sum_{{\bf p'}, m} |\langle {\bf p } \pm | V |{\bf p}' m \rangle |^2 \, G^R_{m}({\bf p}')G^A_m ({\bf p}') \langle {\bf p' } m | J^x_c |{\bf p}' m \rangle \nonumber .\end{aligned}$$ We consider impurity scattering which conserves spin, but allow the scattering amplitude to be momentum-transfer dependent. Such a dependence appears as a product of two contributions. The first is due to the type of disorder potential one considers, $ V_{{\bf p}, {\bf p}'}$, while the second is induced by the transformation to the eigenstate basis. 
The latter gives rise to the following matrix elements $$\begin{aligned} \langle {\bf p } - | V |{\bf p}' \pm \rangle \, \langle {\bf p}' \pm | V |{\bf p} + \rangle = \mp \frac{{{\rm i}}}{2} \sin(\varphi-\varphi' ) | V_{{\bf p}, {\bf p}'} |^2 \end{aligned}$$ and $$\begin{aligned} | \langle {\bf p } \pm | V |{\bf p}' \pm \rangle |^2 &= & \frac{1}{2} | V_{{\bf p}, {\bf p}'} |^2 (1+\cos( \varphi-\varphi' ) ) \\ | \langle {\bf p } \pm | V |{\bf p}' \mp \rangle |^2 & = & \frac{1}{2} | V_{{\bf p}, {\bf p}'} |^2 (1-\cos( \varphi-\varphi' ) ) \end{aligned}$$ We assume that the scattering amplitude $ V_{{\bf p}, {\bf p}'}$ depends weakly on the momentum transfer, so that the scattering probability depends only on the angle between the incoming and scattered particle. Under this condition we can expand the scattering probability as $$| V_{ {\bf p}, {\bf p}'} |^2 = V_0 + 2 V_1 \cos(\varphi - \varphi') + 2 V_2 \cos( 2 \varphi - 2 \varphi' ) + \dots .$$ To obtain the spin-Hall conductivity the two momentum integrations over ${\bf p}$ and ${\bf p}'$ have to be performed. We split the momentum integration in an integral over the energy $\xi = p^2/2m -\mu$ and the angular variable $\varphi$, $$\sum_{\bf p} \to N_0 \int {{\rm d}}\xi \int \frac{{{\rm d}}\varphi}{ 2 \pi} ,$$ and find $$\begin{aligned} N_0 \int {{\rm d}}\xi G^R_- G^A_+ &= & \frac{2 \pi {{\rm i}}N_0 }{2 \alpha p_F +{{\rm i}}/\tau} \approx \frac{{{\rm i}}\pi N_0 }{ \alpha p_F} \\ N_0 \int {{\rm d}}\xi G^R_{\pm} G^A_{\pm} &= &2 \pi N_\pm \tau_\pm ,\end{aligned}$$ where $N_\pm$ and $\tau_\pm$ are the density of states and the lifetime in the two subbands. Notice that the first of the two integrals is correct only to the lowest order in the (small) parameter $\alpha p_F /\epsilon_F$, whereas the second integration is valid beyond that limit. 
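The harmonic content $V_0$, $V_1$, $V_2$ of a given scattering probability can be projected out numerically; the helper below is an illustrative sketch (the function name and the test profile are ours).

```python
import numpy as np

def scattering_harmonics(w, n_max=2, n_grid=4096):
    """Project |V(dphi)|^2 = V_0 + 2 V_1 cos(dphi) + 2 V_2 cos(2 dphi) + ...
    onto its angular harmonics. w is a callable of the scattering angle.
    By orthogonality, mean(w * cos(n*dphi)) over a period picks out V_n."""
    dphi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    vals = w(dphi)
    V = [np.mean(vals)]  # V_0 is the angular average
    for n in range(1, n_max + 1):
        V.append(np.mean(vals * np.cos(n * dphi)))
    return V

# Example: a weakly anisotropic scattering probability (profile is ours).
V0, V1, V2 = scattering_harmonics(lambda x: 1.0 + 0.6*np.cos(x) + 0.2*np.cos(2*x))
```

For this profile the projection returns $V_0 = 1$, $V_1 = 0.3$, $V_2 = 0.1$; note that the factor of $2$ in the expansion means $V_{n\ge 1}$ is half the amplitude of the corresponding cosine.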
Finally the spin-Hall conductivity is determined as $$\begin{aligned} \label{eq29} \sigma_{sH} = \frac{e}{8\pi}\left[ 1 - \frac{1 }{4 \alpha} \frac{N_+ \tau_+ J^x_+ - N_- \tau_- J^x_-}{N_0 \tau}\frac{ V_2 - V_0 }{V_0} \right] ,\end{aligned}$$ where the second term is due to vertex corrections. When calculating the product $N_\pm \tau_\pm J^x_\pm $ to zero order in $\alpha p_F/\epsilon_F$ the vertex corrections disappear. Expanding the density of states, the scattering time, and the dressed current operator to first order yields $$\begin{aligned} N_\pm & \approx & N_0 \left( 1 \mp \frac{\alpha p_F }{ 2 \epsilon_F} \right) \\ \tau_\pm & \approx &\tau \left( 1 \pm \frac{V_1}{V_0}\frac{\alpha p_F}{2 \epsilon_F } \right) \\ J^x_\pm &\approx & \frac{V_0}{V_0 -V_1} \frac{p_F}{m} \mp \alpha \frac{V_0 + V_2}{V_0 -V_2}. \end{aligned}$$ Notice that the dressed current operator, to the leading order in $\alpha$, is of the familiar form $J = j \tau_{\rm tr}/\tau$, where $\tau_{\rm tr}$ is the transport scattering time. By combining all the terms, one then finds that the spin-Hall conductivity (\[eq29\]) vanishes as in the case of pure $s$-wave scattering, $\sigma_{sH} = 0$. As a last useful observation, we separate the reactive and dissipative contributions to the current response, $\sigma_{sH}= \sigma_{sH}^R + \sigma_{sH}^D$ where $$\begin{aligned} \sigma_{sH}^R & = & \lim_{\omega\rightarrow 0}\frac{e}{ \omega} \int \frac{ {{\rm d}}\epsilon }{2 \pi} {{\rm Tr}}\left[ j_s^y \overline{ G^<(\epsilon) j_c^x {{\rm Re}}G^A(\epsilon - \omega) } \right. \nonumber \\[1mm] && \left. + j_s^y \overline{{{\rm Re}}G^R(\epsilon) j_c^x G^<(\epsilon - \omega) } \right] \\[1mm] \sigma_{sH}^D & = & - \frac{e}{\pi} {\rm Tr}\left[ \overline{ j^y_s {{\rm Im}}G^R j^x_c {{\rm Im}}G^R } \right] .\end{aligned}$$ Since the zero frequency spin-Hall conductivity is real, the terms with imaginary (real) current matrix elements contribute to $\sigma^R_{sH}$ ($\sigma^D_{sH}$), respectively.
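The stated cancellation can be verified numerically by inserting the first-order expansions into Eq. (\[eq29\]); the sketch below (ours) works in units $e = N_0 = \tau = 1$ with $m = p_F^2/2\epsilon_F$, and is meant only as a check of the algebra.

```python
import numpy as np

def sigma_sh_expanded(alpha, pF, eF, V0, V1, V2):
    """Eq. (eq29) evaluated with the first-order expansions of N_pm,
    tau_pm and J^x_pm, in units e = N_0 = tau = 1 (so m = pF**2/(2*eF))."""
    x = alpha * pF / (2.0 * eF)
    Np, Nm = 1.0 - x, 1.0 + x
    tp, tm = 1.0 + (V1 / V0) * x, 1.0 - (V1 / V0) * x
    m = pF**2 / (2.0 * eF)
    drift = V0 / (V0 - V1) * pF / m      # J = j * tau_tr / tau to leading order
    Jp = drift - alpha * (V0 + V2) / (V0 - V2)
    Jm = drift + alpha * (V0 + V2) / (V0 - V2)
    vertex = (Np * tp * Jp - Nm * tm * Jm) / (4.0 * alpha) * (V2 - V0) / V0
    return (1.0 / (8.0 * np.pi)) * (1.0 - vertex)

# For small alpha the vertex term approaches 1 and sigma_sH vanishes
# to leading order, for generic V0, V1, V2 (values here are arbitrary).
residual = sigma_sh_expanded(1e-4, 1.0, 0.5, 1.0, 0.3, 0.2)
```

The residual scales as $(\alpha p_F/\epsilon_F)^2$, consistent with the cancellation being exact only at leading order.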
It then follows that the first term on the right hand side of Eq. (\[eq29\]) corresponds to $\sigma^R_{sH}= e/ 8 \pi$, whereas the second term (the vertex corrections) is the dissipative response with $$\sigma_{sH}^D = -\frac{e}{8\pi}\frac{1 }{4 \alpha} \frac{N_+ \tau_+ J^x_+ - N_- \tau_- J^x_-}{N_0 \tau}\frac{ V_2 - V_0 }{V_0} = -\frac{e}{8\pi}$$ and only the sum of the reactive and dissipative response is zero. In summary, we calculated the spin-Hall conductivity in a two dimensional electron gas within the self-consistent Born approximation, including the vertex corrections in the ladder approximation. We remark that, although a number of similar studies exist in the recent literature, the final conclusions are often contradictory. This may be due to the fact that the relevant integrals depend in a very subtle way on the type of the physical limit considered. For this reason in this work we evaluated all the relevant integrals both analytically and numerically. This allowed us to confirm the conclusions of Refs. [@inoue04; @mishchenko04; @burkov03]. In particular, we find that the spin-Hall conductivity is strongly suppressed below the universal value of $e/8\pi$. Furthermore we have demonstrated that the result is not only valid for pure $s$-wave scattering, but is robust upon the inclusion of a weak momentum dependence of the scattering probability. [99]{} R. H. Silsbee, J. Phys.: Condens. Matter [**16**]{} R179 (2004). S. Murakami, N. Nagaosa, and S.-C. Zhang, Science [**301**]{}, 1348 (2003); Phys. Rev. B [**69**]{} 235206 (2004). J. Sinova, D. Culcer, Q. Niu, N.  A.  Sinitsyn, T.  Jungwirth, and A. H.  MacDonald, Phys. Rev. Lett. [**92**]{}, 126603 (2004). D. Culcer, J. Sinova, N. A. Sinitsyn, T. Jungwirth, A. H. MacDonald, Q. Niu, Phys. Rev. Lett. [**93**]{}, 046602 (2004). N. A. Sinitsyn, E. M. Hankiewicz, W. Teizer, J. Sinova, cond-mat/0310315 (unpublished). S. Shen, cond-mat/0310368 (unpublished); L. Hu, J. Gao, and S. 
Shen, cond-mat/0401231 (unpublished). J. Schliemann and D. Loss, Phys. Rev. B [**69**]{}, 165315 (2004); cond-mat/0405436 (unpublished). A. A. Burkov, A. S. Nu[n]{}ez and A. H. MacDonald, cond-mat/0311328 (unpublished). Ye Xiong and X. C. Xie, cond-mat/0403083 (unpublished). K. Nomura et al., cond-mat/0407279 (unpublished). J. Inoue, G. E. W. Bauer, and L. W. Molenkamp, Phys. Rev. B [**70**]{}, 041303(R) (2004). S. Murakami, Phys. Rev. B [**69**]{}, 241202(R) (2004). O. V. Dimitrova, cond-mat/0405339 (unpublished). O. Chalaev and D. Loss, cond-mat/0407342 (unpublished). R. G. Mishchenko, A. V. Shytov, and B. I. Halperin, cond-mat/0406730 (unpublished). E. I. Rashba, cond-mat/0404723 (unpublished). Most recently, also A. Khaetskii, cond-mat/0408136 (unpublished), confirmed this result. J. M. Luttinger, Phys. Rev.  [**102**]{}, 1030 (1956). Y. A. Bychkov and E. I. Rashba, J. Phys. C [**17**]{}, 6039 (1984). R. Raimondi, M. Leadbeater, P. Schwab, E. Caroti, and C. Castellani, Phys. Rev. B [**64**]{}, 235110 (2001). P. Schwab and R. Raimondi, Eur. Phys. J. B [**25**]{}, 483 (2002).
--- abstract: 'Recent experiments using fluorescence spectroscopy have been able to probe the dynamics of conformational fluctuations in proteins. The fluctuations are Gaussian but do not decay exponentially, and are, therefore, non-Markovian. We present a theory where non-Markovian fluctuation dynamics emerges naturally from the superposition of the Markovian fluctuations of the normal modes of the protein. A Rouse-like dynamics of the normal modes provides very good agreement with the experimentally measured correlation functions. We provide simple scaling arguments rationalising our results.' author: - 'Arti Dua$^1$ and R. Adhikari$^2$' bibliography: - 'proteinfluct.bib' title: 'Non-Markovian fluctuations in Markovian models of protein dynamics' --- Protein molecules are the building blocks of life. The three-dimensional physical structure of a protein is intimately related to its biological function. Proteins function in an environment where noise is ubiquitous. The three-dimensional conformation of a protein, therefore, is not static but itself undergoes fluctuations. Since protein function is so strongly determined by protein structure, the fluctuating nature of a protein molecule has important implications for its biological function [@lu:1998; @oijen:2003]. A precise determination of the static and dynamic properties of conformational fluctuations in proteins is, therefore, of great importance. In a recent experiment [@yang:2003], conformational fluctuations of the protein flavin reductase have been observed and characterised using single-molecule fluorescence spectroscopy. The fluorescence lifetime is directly correlated to the distance between the flavin (fluorophore) and tyrosine (quencher) groups within the protein. The distance fluctuations between the flavin and tyrosine groups, then, give an indirect measure of the fluctuations of the entire protein.
Remarkably, the experiments find that the fluctuations remain correlated over five decades in time, spanning the range of $10^{-4}s$ - $1s$. This is indicative of the presence of multiple relaxation mechanisms operating at different time scales. Further, there is convincing evidence that the fluctuations are Gaussian, which, taken together with the absence of single-exponential decay of the correlations, implies that they are also non-Markovian. In this Letter, we show how a Markovian dynamics for the protein normal modes generically produces non-Markovian fluctuations in the distance between two residues on the protein backbone. Imposing the simplest Rouse-like dynamics for the normal modes we obtain all correlation functions for the distance fluctuations and find them to be in very good agreement with experiments [@yang:2003; @kou:2004; @min:2005]. Simple scaling arguments are provided to rationalise our analytical calculations. Before presenting our detailed calculation, we illustrate the basic mechanism by which non-Markovian behaviour arises in this problem. Consider the Ornstein-Uhlenbeck process (OUP) which describes the velocity $v(t)$ of Brownian motion. This is a stationary, Gaussian, Markovian process with a correlation function $\rho_0(\tau) = \langle v(t)v(t+\tau)\rangle = k_BT\exp(-\Gamma \tau)$, where $\Gamma^{-1}$ is the relaxation time. Take two such uncorrelated processes $v_1(t)$ and $v_2(t)$, each with distinct relaxation times $\Gamma_1$ and $\Gamma_2$, and ask for properties of the stochastic process described by their sum $u(t) = v_1(t) + v_2(t)$. Since each $v_i$ is Gaussian and stationary, so is their sum $u$. The correlation function of the sum is $\rho(\tau) = \langle u(t)u(t+\tau)\rangle = \langle (v_1(t) + v_2(t))(v_1(t+\tau)+v_2(t+\tau))\rangle$, and since the processes are uncorrelated, is the sum of the correlation functions of the individual processes, $\rho(\tau) = k_BT[\exp(-\Gamma_1\tau) + \exp(-\Gamma_2\tau)]$.
Then, from Doob’s theorem [@vankampen:1981], which says that a Gaussian, stationary process is Markovian if and only if its correlation function is a single exponential, we conclude that $u(t)$ is Gaussian, stationary, but [*non-Markovian*]{}. Generalising, an arbitrary superposition of $N$ distinct but uncorrelated OUPs, $u(t) = \sum_{i=1}^N\alpha_i v_i(t)$, is also Gaussian, stationary, and non-Markovian. Thus, non-Markovian behaviour can arise very generally from a superposition of Markovian processes. It is precisely this mechanism which, as we show below, generates the non-Markovian fluctuations seen in protein dynamics. A minimal model of a protein replaces the complicated stereochemistry of the amino acids and its associated secondary and tertiary structures by a simple connected chain of beads and springs [@banavar:2005]. The relaxations in such a model derive from interactions between the beads and the combined effect of fluctuations and dissipation due to the solvent. In the energetic ground state, the conformation is labelled by the positions ${\bf R}^0_n$ of the beads, where the subscript $n$ is a position label along the chain. Independent of the specific nature of the interactions, conformational fluctuations about the ground state can be described by a parametrisation ${\bf R}_n = {\bf R}^0_n + {\bf u}_n$. Here, ${\bf u}_n$ is the deviation of the $n-$th bead from its ground state conformation. Then, the instantaneous distance ${\bf d}_{mn}(t)$ between two monomers located at $m$ and $n$ is given by ${\bf d}_{mn}(t) = {\bf R}_m(t) - {\bf R}_n(t) = {\bf d}_{mn}^0 + {\bf u}_m(t) - {\bf u}_n(t)$, where ${\bf d}_{mn}^0 = {\bf R}_m^0 - {\bf R}_n^0$ is the equilibrium distance. For a chain whose ends are not tethered and therefore free of external forces, the displacements must satisfy $\partial {\bf u}_n/ \partial n = 0$ at $n=0$ and $n=N$.
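A two-line numerical check makes the argument concrete: by Doob’s theorem, the normalized correlation of a Gaussian, stationary, Markovian process must factorize, $r(\tau_1+\tau_2) = r(\tau_1)r(\tau_2)$, and the two-OUP superposition visibly violates this (sketch ours, with arbitrary rates).

```python
import numpy as np

def rho_sum(tau, g1=1.0, g2=10.0):
    """Correlation of u = v1 + v2 for two uncorrelated OUPs (k_B T = 1):
    rho(tau) = exp(-g1*tau) + exp(-g2*tau)."""
    return np.exp(-g1 * tau) + np.exp(-g2 * tau)

r = lambda tau: rho_sum(tau) / rho_sum(0.0)   # normalized correlation
# A single exponential (Markovian case) would satisfy r(1.0) == r(0.5)**2;
# the superposition misses this by an O(0.1) amount.
violation = abs(r(1.0) - r(0.5) * r(0.5))
```

For a single OUP, $r(\tau) = e^{-\Gamma\tau}$, the same check gives exactly zero, which is the Markovian benchmark the text refers to.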
This motivates the introduction of normal modes of the form ${\bf u}_{n}(t) = 2 \sum_{p=1}^{\infty} {\bf Q}_p(t)\cos(p\pi n/N)$, in terms of which the distance is $${\bf d}_{mn}(t) = {\bf d}_{mn}^0 + 2 \sum_{p=1}^{\infty} {\bf Q}_p(t) [ \cos(p\pi m/N) - \cos(p \pi n/N)].$$ This key equation shows that the distance fluctuations are linearly related to the fluctuations of the normal modes. If the normal mode fluctuations are Gaussian and Markovian, the distance fluctuations, by our previous argument, are generically non-Markovian. The simplest model of polymer dynamics which yields a Gaussian and Markovian fluctuation for the normal modes is the Rouse model [@doi:1986]. Here, we impose Rouse-like dynamics on the harmonic deviations ${\bf u}_n$, $$\zeta\frac{\partial{\bf u}_n(t)}{\partial t} = \frac{3 k_B T}{b^2} \frac{\partial^2{\bf u}_n(t)}{\partial n^2} + {\bf f}_n(t),$$ so that the overdamped dynamics of the fluctuations is a balance between the frictional force proportional to $\zeta$ times the velocity of the $n-$th monomer, an entropic restoring force proportional to $3 k_B T/b^2$ which arises due to the connectivity of the chain, and the injection of thermal fluctuations from the solvent. The constant term in the Rouse mode expansion describes the motion of the center of mass of the chain and has been ignored here since it does not enter the expression for the distance. The dynamics of the Rouse modes follows immediately as $$\zeta_p\frac{\partial{\bf Q}_{p}(t)}{\partial t} = - k_p{\bf Q}_p(t)+ {\bf F}_{p}(t),$$ where $\zeta_p = 2 N \zeta$, $k_p = 6 p^2 \pi^2 k_B T/N b^2$ and ${\bf f}_{n}(t) = 2 \sum_{p=1}^{\infty} {\bf F}_p(t)\cos(p\pi n/N)$. The fluctuations of the Rouse modes are, therefore, identical to the fluctuations of the velocity of a Brownian particle, both being governed by the OUP.
The correlations between the modes are given by [@doi:1986] $$\left<{\bf Q}_p(t)\cdot {\bf Q}_q(t + \tau)\right> = \delta_{pq}\frac{N b^2}{(p^2+q^2)\pi^2} \exp(-p^2 \tau/\tau_1),$$ showing that each Rouse mode has a distinct relaxation time and is uncorrelated with every other Rouse mode. Here, $\tau_1 = Nb^2\zeta_p/6\pi^2k_BT$ is the relaxation time of the first Rouse mode. Combining the results of Eq. 1 and Eq. 4, we see that ${\bf d}_{mn}(t)$ is a stochastic process which is an infinite superposition of OUPs. Explicitly, the correlation function $\rho_{mn}(\tau) = \langle {\bf d}_{mn}(t)\cdot {\bf d}_{mn}(t+\tau)\rangle$ of this process is $$\label{eq:corr} \rho_{mn}(\tau) = 2 \sum_{p=1}^{\infty}\frac{N b^2}{p^2\pi^2} [ \cos(p\pi n/N) - \cos(p \pi m/N)]^2 e^{-p^2 \tau/\tau_1}.$$ By Doob’s theorem, it is immediately clear that the dynamics of ${\bf d}_{mn}(t)$ is non-Markovian. Since ${\bf d}_{mn}(t)$ is a Gaussian process, all higher order time correlation functions can be expressed in terms of the $\rho_{mn}(\tau)$ using Wick’s theorem. We call the stochastic process defined by Eq. 1 and Eq. 3 the superposed Ornstein-Uhlenbeck process. The two-point correlation $\rho_{mn}(\tau)$ completely specifies the process. Our work thus far is, in a formal sense, a Markovian embedding (in terms of the normal modes) of a non-Markovian process (the distance fluctuations). Such Markovian embeddings are also used in describing the underdamped dynamics of a Brownian particle in a potential. The stochastic process describing the position alone is non-Markovian, but the joint process in the enlarged set of position and velocity variables is Markovian [@vankampen:1981]. In the present case, the Markovian embedding is also Gaussian, and it is this simplification that allows us to calculate all correlation functions in terms of the two-point correlation $\rho_{mn}(\tau)$.
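Eq. (\[eq:corr\]) is readily evaluated by truncating the mode sum; for the end-to-end distance ($m=0$, $n=N$) the equal-time value must approach $\rho_{mn}(0) = Nb^2$, which serves as a check on the truncation (sketch ours, arbitrary units).

```python
import numpy as np

def rho_mn(tau, m, n, N, b=1.0, tau1=1.0, p_max=5000):
    """Two-point correlation of d_mn(t), Eq. (eq:corr), truncated at p_max modes."""
    p = np.arange(1, p_max + 1)
    geom = (np.cos(p * np.pi * n / N) - np.cos(p * np.pi * m / N)) ** 2
    return np.sum(2.0 * N * b**2 / (np.pi**2 * p**2) * geom
                  * np.exp(-p**2 * tau / tau1))

N = 30
r0 = rho_mn(0.0, 0, N, N)   # end-to-end: approaches N * b**2 as p_max grows
r1 = rho_mn(1.0, 0, N, N)   # decays with tau, but not as a single exponential
```

The $1/p^2$ amplitudes mean the equal-time sum converges only algebraically, while at finite $\tau$ the Gaussian factor $e^{-p^2\tau/\tau_1}$ cuts the sum off after a few modes.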
We now turn to comparing our analytical results with data from the experiments [@yang:2003; @kou:2004; @min:2005]. The experiments measure the fluorescence lifetime $\gamma^{-1}(t)$ which is related to the distance $d_{mn}(t) =\sqrt{{\bf d}_{mn}(t)\cdot{\bf d}_{mn}(t)}$ between the fluorophore and the quencher as $$\gamma(t) = k_0e^{-\beta d_{mn}(t)}$$ where $k_0$, $\beta$ are parameters determined by the protein, ${\bf d}_{mn}(t) = {\bf d}_{mn}^0 + {\bf u}_m(t) - {\bf u}_n(t)$ and $d_{mn}^0$ is the mean value of the distance. Then correlation functions of the lifetimes $\delta\gamma^{-1}(t) = \gamma^{-1}(t) -\langle\gamma^{-1}\rangle$ are related to correlations in the distance fluctuations. This relation is simple for the two-point correlation function [@kou:2004], $$\langle \delta\gamma^{-1}(t)\delta\gamma^{-1}(t + \tau)\rangle = k_0^{-2} e^{2\beta d^0_{mn} + \beta^2\rho_{mn}(0)}(e^{\beta^2\rho_{mn}(\tau)} -1 )$$ but becomes complicated for three- and higher-point correlations. Explicit forms for three and four point correlations are given in [@kou:2004]. We compare the results for two- and four-point fluorescence lifetime correlations using the correlation functions of the superposed OUP. The experimentally known values of the parameters are $d_{mn}^0 = 4.5\AA$, $\beta = 1.4 \AA^{-1}$, and $\gamma/k_BT = 0.48 \AA^{-2}s$ [@kou:2004]. With fitting parameters which are very close to these estimates, the agreement between the theoretical prediction and the experimental data for the two-point correlation is good over the entire $5$ decades in time, as shown in Fig. 1. In Fig. 2 we compare theory and experiment for the four-point function using the same parameters as in Fig. 1. Again, the agreement is good over the full $5$ decades in time. ![\[fig:twpoint\] The normalised two-point (above) and four-point (below) autocorrelation functions of the fluorescence lifetimes plotted against time $t$ in seconds.
The solid line is the theoretical curve obtained by using the normalized correlation function in Eq. \[eq:corr\] while the data is from Ref. [@kou:2004]. The theoretical curve has been multiplied by the first data point. The parameters used are $\beta = 1.3 \AA^{-1}$, $d^0_{mn} = 2.8\AA$, $\gamma/k_BT = 0.4\AA^{-2}s$, $m=6$, $n=30$ and $N=30$.](fig1-xie1 "fig:"){width="8cm"} ![\[fig:twpoint\] The normalised two-point (above) and four-point (below) autocorrelation functions of the fluorescence lifetimes plotted against time $t$ in seconds. The solid line is the theoretical curve obtained by using the normalized correlation function in Eq. \[eq:corr\] while the data is from Ref. [@kou:2004]. The theoretical curve has been multiplied by the first data point. The parameters used are $\beta = 1.3 \AA^{-1}$, $d^0_{mn} = 2.8\AA$, $\gamma/k_BT = 0.4\AA^{-2}s$, $m=6$, $n=30$ and $N=30$.](fig2-xie3 "fig:"){width="8cm"} The interesting symmetry in time of the three-point correlation function $\langle \delta\gamma^{-1}(0)\delta\gamma^{-1}(t_1)\delta\gamma^{-1}(t_1 + t_2)\rangle = \langle\delta\gamma^{-1}(0)\delta\gamma^{-1}(t_2)\delta\gamma^{-1}(t_1 + t_2)\rangle$ observed in experiment [@kou:2004] holds [*by construction*]{} for the superposed OUP. This is so because, by stationarity, all time indices in the first term of the identity can be shifted by $t_1 + t_2$, which reproduces the second term of the identity. Similar relations also hold for the higher-point correlation functions. The Rouse-like dynamics of the normal modes, then, can capture all the essential features of the experimental fluctuation spectrum both qualitatively and quantitatively. This provides evidence that the large-scale long-time behaviour of proteins is identical to that of structureless Rousian polymer chains, not only for statics [@banavar:2005], but also for aspects of dynamics.
To fully specify the superposed OUP we must obtain all the joint probabilities $P({\bf d}, t; {\bf d^{\prime}}, t^{\prime}; {\bf d^{\prime\prime}}, t^{\prime\prime};\ldots)$ on the sample paths ${\bf d}_{mn}(t) = {\bf d}$ [@vankampen:1981]. As the process is Gaussian, all such probabilities are multivariate Gaussian distributions determined by the single function $\rho_{mn}(\tau)$ [@fox:1978]. The one-point distribution $P({\bf d}, t)$ is independent of time by stationarity and is a Gaussian in ${\bf d}$ with mean ${\bf d}_{mn}^0$ and variance $\rho_{mn}(0) = Nb^2$. The two-point distribution is $$P({\bf d}, t; {\bf d}^{\prime}, t^{\prime}) = {1\over (2\pi)^3 \det[{\bf C({\tau})}]^{{1\over 2}}}\exp[-{1\over 2}\Delta_{i}C^{-1}_{ij}({\tau})\Delta_{j}]$$ where $t^{\prime} = t + \tau$, ${\bf \Delta} = (d_x, d_y, d_z, d_x^{\prime}, d_y^{\prime},d_z^{\prime})$, and $C_{ij}(\tau) = \langle \Delta_i(t)\Delta_j(t + \tau)\rangle = \int_{\Delta_i, \Delta_j} \Delta_i \Delta_j P({\bf d}, t; {\bf d}^{\prime}, t^{\prime})$ is the $6\times6$ matrix of correlations. Since the only non-zero correlations are of the type $\langle d_x(t) d_x(t)\rangle$ or $\langle d_x(t)d_x(t+\tau)\rangle$, this matrix is band-diagonal, with ${1\over3}\rho_{mn}(0)$ on the main diagonal and ${1\over 3}\rho_{mn}(\tau)$ on the upper and lower diagonals three removed from the main diagonal. The higher joint probabilities have similar forms, but with enlarged vector ${\bf \Delta}$ and enlarged matrix ${\bf C}$ [@fox:1978]. From the Gaussian nature of the process, and by Wick’s theorem, the four-point correlation function $\rho^{(4)}_{mn}(\tau, \tau^{\prime}, \tau^{\prime\prime}) = \langle {\bf d}_{mn}(t)\cdot{\bf d}_{mn}(t + \tau)\,{\bf d}_{mn}(t + \tau^{\prime})\cdot{\bf d}_{mn}(t + \tau^{\prime\prime})\rangle$ may be calculated as sums of products of the two-point correlation function $\rho_{mn}(\tau)$.
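The Wick factorization can be illustrated directly on the covariance structure described above. The sketch below (with hypothetical values of $\rho_{mn}(0)$ and $\rho_{mn}(\tau)$) builds the $6\times6$ matrix ${\bf C}$, draws seeded Gaussian samples, and checks the four-point average $\langle({\bf d}\cdot{\bf d}^{\prime})^{2}\rangle$ against its sum over pairings of two-point functions:

```python
import numpy as np

rho0, rho_tau = 3.0, 1.2                 # hypothetical rho_mn(0), rho_mn(tau)

# 6x6 covariance of (d_x, d_y, d_z, d_x', d_y', d_z'), means subtracted:
# rho0/3 on the diagonal, rho_tau/3 on the diagonals three removed
C = np.diag(np.full(6, rho0 / 3))
for i in range(3):
    C[i, i + 3] = C[i + 3, i] = rho_tau / 3

rng = np.random.default_rng(1)
s = rng.multivariate_normal(np.zeros(6), C, size=1_000_000)
d, dp = s[:, :3], s[:, 3:]               # d(t) and d(t + tau)

four = np.mean(np.sum(d * dp, axis=1) ** 2)          # <(d.d')^2>
# Wick: sum over the three pairings of the component two-point functions
wick = rho_tau**2 + rho0**2 / 3 + rho_tau**2 / 3
print(four / wick)                                   # close to 1
```

The three terms in `wick` are the pairings $\langle d_i d_i^{\prime}\rangle\langle d_j d_j^{\prime}\rangle$, $\langle d_i d_j\rangle\langle d_i^{\prime} d_j^{\prime}\rangle$ and $\langle d_i d_j^{\prime}\rangle\langle d_j d_i^{\prime}\rangle$ summed over components.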
It is important to emphasize that Xie [*et al.*]{} have explicitly checked that the higher order correlations obey Wick’s theorem and have thereby confirmed that the distance fluctuations are Gaussian. We have been able to characterise the stochastic process governing the distance fluctuations in terms of its correlation function and the joint probabilities. It is of interest to ask if there is an effective description, in terms of a suitable Langevin equation, for the distance fluctuation ${\bf d}_{mn}(t)$ itself. In other words, is it possible to construct a Langevin equation for ${\bf d}_{mn}(t)$, given its correlation function $\rho_{mn}(\tau)$? Fox [@fox:1978] has provided a solution to this inverse problem in terms of a [*linear*]{} Langevin equation with a memory kernel and Gaussian, coloured noise. The effective Langevin equation for the superposed OUP, then, is $${d\over dt} {\bf d}_{mn}(t) = -\int_0^t D_{mn}(t-t^{\prime}){\bf d}_{mn}(t^{\prime})\,dt^{\prime} + {\bf f}_{mn}(t)$$ where the noise is Gaussian with mean $\langle {\bf f}_{mn}(t)\rangle = 0$ and variance $\langle f_{mni}(t)f_{mnj}(t+\tau)\rangle = 2k_BTD_{mn}(\tau)\delta_{ij}$. It can be shown [@fox:1978] that this Langevin equation yields the correlation function $\rho_{mn}(\tau)$ and the full hierarchy of joint probabilities described above, provided the Laplace transform of the diffusion kernel $D_{mn}(s) = \int_0^{\infty}d\tau \exp(-s\tau)D_{mn}(\tau)$ satisfies $$\rho_{mn}(s) = {\rho_{mn}(0)\over s + D_{mn}(s)},$$ where $\rho_{mn}(s)$ is the Laplace transform of $\rho_{mn}(\tau)$. An explicit expression for $D_{mn}(s)$ can be obtained by Laplace transforming Eq. \[eq:corr\], replacing the summation by an integration, and inserting the resulting expression for $\rho_{mn}(s)$ in the equation above.
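Read backwards, the relation above gives the inversion $D_{mn}(s)=\rho_{mn}(0)/\rho_{mn}(s)-s$. As a minimal sketch (made-up numbers), applying it to a single Ornstein-Uhlenbeck mode, $\rho(\tau)=\rho(0)e^{-\tau/\tau_1}$, returns a frequency-independent kernel, i.e. a memoryless Markovian Langevin equation, as it should:

```python
import numpy as np

tau1, rho0 = 2.0, 1.5               # hypothetical single-mode parameters

def rho_laplace(s):
    # Laplace transform of rho(tau) = rho0 * exp(-tau/tau1)
    return rho0 / (s + 1.0 / tau1)

def D_laplace(s):
    # Invert rho(s) = rho0 / (s + D(s)) for the diffusion kernel
    return rho0 / rho_laplace(s) - s

# A single mode yields a constant kernel: delta-correlated (Markovian) memory
print([D_laplace(s) for s in (0.1, 1.0, 10.0)])      # all equal to 1/tau1
```

Non-Markovian behaviour can therefore only come from superposing many modes, which is what produces the nontrivial $D(s)$ below.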
For $m=0, n=N$, we get for $D(s) = D_{0N}(s)$, $$D(s) = \sqrt{s \over \tau_1}{{\pi\over 2} - \tan^{-1}({1\over\sqrt{s\tau_1}}) \over 1 - {1\over\sqrt{s\tau_1}}[{\pi\over 2} -\tan^{-1}({1\over\sqrt{s\tau_1}})]}$$ From this, we may obtain an effective friction kernel $\zeta(s)$ using the generalised Stokes-Einstein relation $D(s) = k_BT/\zeta(s)$. This effective friction kernel receives contributions from the individual dissipative effects of all the Rouse modes and is the source of memory. For $s\tau_1 \ll 1$, which corresponds to times much greater than the longest relaxation time, $\zeta(s) \rightarrow k_BT\tau_1/3$, a constant, which implies that the memory kernel is proportional to $\delta(t)$. Reassuringly, the Markovian limit is reproduced for times much longer than the longest time scale in the problem. On the other hand, for $s\tau_1 \gg 1$, $\zeta(s)\rightarrow {2k_BT\over\pi}\sqrt{\tau_1/s}$, implying that $\zeta(t)$ is proportional to $(\tau_1/t)^{1/2}$. Thus, for times smaller than the longest relaxation time, the memory kernel shows a power law decay in time with an exponent of $-{1\over 2}$. Correspondingly, the normalized correlation function $C(s) = \rho(s)/\rho(0)$ is $$C(s) = \frac{1}{s}\left[ {1 - {1\over\sqrt{s\tau_1}}[{\pi\over 2} -\tan^{-1}({1\over\sqrt{s\tau_1}})]}\right],$$ where $\rho(s) =\rho_{0N}(s)$. For $s\tau_1 \gg 1$, $C(s) \rightarrow 1/s$, which implies that at short times $C(t) = 1$. In the opposite limit of $s\tau_1 \ll 1$, $C(s)$ approaches a constant of order $\tau_1$, which implies that at long times $C(t)$ decays on the single time scale $\tau_1$, essentially as $\exp(-t/\tau_1)$, again suggesting that the fluctuations become Markovian at long times. These asymptotes can be understood by a simple scaling argument. The normalized correlation function is given by $C(t) = {8\over\pi^2}\sum_{p,odd} e^{-p^2 t/\tau_1}/p^2$. It is evident from the expression for $C(t)$ that for times $t\gtrsim \tau_1$, when all modes but the slowest have relaxed, $C(t)$ decays as a single exponential.
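The scaling argument can be checked by truncating the mode sum numerically. The sketch below includes the $8/\pi^2$ prefactor required for $C(0)=1$ (since $\sum_{p\,{\rm odd}}1/p^2=\pi^2/8$) and shows that for $t\gtrsim\tau_1$ only the $p=1$ term survives:

```python
import numpy as np

tau1 = 1.0
p = np.arange(1, 2001, 2)            # odd Rouse modes, truncated

def C(t):
    # Normalized mode sum; underflowing exponentials contribute zero
    return (8 / np.pi**2) * np.sum(np.exp(-p**2 * t / tau1) / p**2)

print(C(0.0))                        # 1 by normalization (up to truncation)
# Long times: a single exponential carried by the p = 1 mode
for t in (2.0, 3.0):
    print(C(t) / ((8 / np.pi**2) * np.exp(-t / tau1)))    # -> 1
```

At $t=2\tau_1$ the $p=3$ contribution is already suppressed by $e^{-16}$ relative to $p=1$, which is the single-exponential regime invoked in the text.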
For times $t \gg \tau_1/p^2$, all modes above the $p$-th mode have already relaxed and do not contribute to the summation. This implies that at any time $t$, modes $p = 1, 2 \ldots p^{\star}(t)$ contribute to the summation, where $p^{\star}(t) \approx (\tau_1/t)^{1/2}$. The slow decay of this maximum mode number shows up as a non-Markovian effect and is ultimately responsible for the power-law dependence seen in the frictional memory kernel. Our work has precursors in the contribution of Kou and Xie [@kou:2004] who proposed a one-dimensional phenomenological Langevin equation incorporating fractional Gaussian noise. However, no microscopic basis was provided for the memory kernel or for the form of the noise. Starting from a microscopic description, we obtain a vector Langevin equation with memory, obtain an explicit expression for the memory kernel, and show that this has a power law decay. The noise in our Langevin equation is not a phenomenological fractional Gaussian noise, but is fully specified from the microscopics. Debnath [*et al.*]{} [@debnath:2005] use a semi-flexible model of polymer dynamics to explain the experimental data. Our work shows that semiflexibility is not necessary to understand those aspects of protein dynamics probed by the fluorescence experiments. A similar approach is that of Tang and Marcus [@tang:2006] for a flexible polymer, but the experimental results can be obtained only by assuming a certain disorder along the chain and averaging over the disorder. Our work shows that the heterogeneity of the protein residues is not necessary to explain the experimental results. None of the above have elucidated the appearance of non-Markovian behaviour from a simple superposition of Markovian fluctuations as is done here. In summary, we have presented a mechanism where a superposition of the Markovian dynamics of the normal modes of the protein conformation gives rise to non-Markovian fluctuations of the distance between two points in the protein.
The model provides an accurate fit to the experimental data for both two-time and four-time correlation functions, exhibits a symmetry of these correlation functions found experimentally, reproduces the power-law decay of the frictional memory kernel, and clarifies how non-Markovian behaviour arises in protein dynamics. Given the non-specific nature of our model, we believe that non-Markovian fluctuations should be seen universally in all biopolymers and not only in proteins. Our model can be extended to include more detailed descriptions of the protein normal modes and their relaxation mechanism. Using our model, several quantities of interest in fluorescence microscopy, such as the survival and first passage times of the distance, may be calculated. Work is underway to explore these possibilities. This work was first presented at the 2008 Biophysics Summer School, at the Harishchandra Research Institute, Allahabad. RA wishes to thank the organisers and participants for useful comments and suggestions.
--- abstract: 'We present a study of the dynamical spin susceptibility in the pseudogap region of the high-T$_c$ cuprate superconductors. We analyze and compare the formation of the so-called resonance peak, in three different ordered states: the $d_{x^2-y^2}$-wave superconducting (DSC) phase, the $d$-density wave (DDW) state, and a phase with coexisting DDW and DSC order. An analysis of the resonance’s frequency and momentum dependence in all three states reveals significant differences between them. In particular, in the DDW state, we find that a nearly dispersionless resonance excitation exists only in a narrow region around ${\bf Q}=(\pi,\pi)$. At the same time, in the coexisting DDW and DSC state, the dispersion of the resonance peak near ${\bf Q}$ is significantly changed from that in the pure DSC state. Away from $(\pi,\pi)$, however, we find that the form and dispersion of the resonance excitation in the coexisting DDW and DSC state and pure DSC state are quite similar. Our results demonstrate that a detailed experimental measurement of the resonance’s dispersion allows one to distinguish between the underlying phases - a DDW state, a DSC state, or a coexisting DDW and DSC state - in which the resonance peak emerges.' author: - 'J.-P. Ismer$^{1,2}$, I. Eremin$^{2,3}$, Dirk K. Morr$^{1,4}$' title: Dynamical spin susceptibility and the resonance peak in the pseudogap region of the underdoped cuprate superconductors --- Introduction ============ One of the most controversial topics in the field of high-temperature superconductivity is the origin of the so-called ’pseudogap’ phenomenon observed by various experimental techniques in the underdoped cuprates (for a review see Ref. [@timusk] and references therein). A large number of theoretical scenarios have been proposed to explain the origin of the pseudogap [@theory; @laughlin]. 
Among these is the $d$-density wave (DDW) scenario [@laughlin] which was suggested to explain some of the salient features of the underdoped cuprates such as the $d_{x^2-y^2}$-wave symmetry of the pseudogap above T$_c$, the anomalous behavior of the superfluid density[@sudip1] and of the Hall number[@sudip2], as well as the presence of weak (orbital) antiferromagnetism [@eremin]. The DDW-phase is characterized by circulating bond currents which alternate in space, break time-reversal symmetry and result in an orbital (antiferromagnetically ordered) magnetic moment. In this article, we investigate the momentum and frequency dependence of the dynamical spin susceptibility, $\chi ({\bf q},\omega )$, in the underdoped region of the cuprate superconductors. In particular, we compare the formation of a resonant spin excitation (the “resonance peak") in three different ordered states: the DDW phase, the $d_{x^2-y^2}$-wave superconducting (DSC) phase, and a phase with coexisting DDW and DSC order. The observation of the resonance peak in inelastic neutron scattering (INS) experiments [@rossat; @mook0; @keimer0; @bourges0; @bourges; @dogan; @Fong97; @Hay04; @stock1] is one of the key experimental facts in the phenomenology of the high-T$_{c}$ cuprates. In the optimally and overdoped cuprates, the resonance peak appears below T$_{c}$ in the dynamical spin susceptibility at the antiferromagnetic wave vector ${\bf Q}=(\pi ,\pi )$. In the optimally doped cuprates, the resonance’s frequency is $\omega_{res} \approx 41$ meV [@rossat; @bourges], a frequency which decreases with increasing underdoping [@Fong97; @dogan]. A number of theoretical scenarios have been suggested for the appearance of the resonance peak in a superconducting state with $d_{x^2-y^2}$-wave symmetry [@liu; @eremin1; @other]. 
In one of them, the so-called ’spin exciton’ scenario [@liu; @eremin1], the resonance peak is attributed to the formation of a particle-hole bound state below the spin gap (a spin exciton), which is made possible by the specific momentum dependence of the $d_{x^2-y^2}$-wave gap. Within this scenario, the structure of spin excitations in the superconducting state as a function of momentum and frequency can be directly related to the topology of the Fermi surface and the phase of the superconducting order parameter. This scenario agrees well with the experimental data in the superconducting state of the optimally and overdoped cuprates. In the underdoped cuprates, the resonance-like peak has also been observed in the pseudogap region above T$_c$ as well as in the superconducting state [@dogan; @Fong97; @Hay04; @stock1]. In this article, we address the question of whether, in the underdoped cuprates, the resonance peak above T$_c$ emerges from the presence of a DDW state, as first suggested by Tewari [*et al.*]{} [@sudip1], and below T$_c$ from the coexistence of a DSC and DDW phase. To answer this question, we develop a spin exciton scenario for the pure DDW phase as well as the coexisting DSC and DDW phases. By studying the detailed momentum and frequency dependence of the resonance peak in both phases, and by comparing it with that in the pure DSC state, we identify several characteristic features of the resonance peak that allow one to distinguish between the underlying phases in which the resonance peak emerges. The remainder of the paper is organized as follows: in Secs. \[secDDW\] and \[secDDWDSC\] we discuss the form of the resonance peak in the pure DDW phase and the coexisting DDW and DSC phase, respectively, and compare it with that in the pure DSC state. In Sec. \[secconcl\] we summarize our results and conclusions.
Pure DDW state {#secDDW} ============== The starting point for our calculations in the pure DDW state is the effective mean-field Hamiltonian $$\begin{aligned} H_{DDW} = \sum_{{\bf k},\sigma} \varepsilon_{\bf k} c^{\dagger}_{{\bf k}, \sigma}c_{{\bf k}, \sigma} + \sum_{{\bf k},\sigma} i W_{\bf k} c^{\dagger}_{{\bf k}, \sigma}c_{{\bf k+Q}, \sigma} \label{hamDDW}\end{aligned}$$ where ${\bf Q}=(\pi,\pi)$ is the ordering wavevector of the DDW state, $W_{\bf k} = \frac{W_0}{2}(\cos k_x - \cos k_y)$ is the DDW order parameter, $$\varepsilon_{\bf k}= -2t\left( \cos k_x +\cos k_y\right) - 4t'\cos k_x \cos k_y - \mu \label{NSdisp}$$ is the normal state tight-binding energy dispersion with $t,t^\prime$ being the hopping elements between nearest and next-nearest neighbors, respectively, and $\mu$ is the chemical potential. In the following we use $t=250$ meV, $t'/t=-0.4$ and $\mu = - 1.083t$. The Fermi surface (FS) obtained from Eq. (\[NSdisp\]) describes well the FS measured by photoemission experiments on Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ [@ARPES]. In order to directly compare the dynamical spin susceptibility in the DDW state with that in the DSC state, we take the DDW order parameter, $W_0=42$ meV, to be equal to that in the DSC state [@eremin1]. We note here that the above Hamiltonian can be obtained from a microscopic Hamiltonian with short-range repulsion or superexchange interactions [@zeyher; @dora]. After diagonalizing the Hamiltonian, Eq. (\[hamDDW\]), one finds that the excitation spectrum possesses two bands with energy dispersion $$E^{\pm}_{\bf k} = \varepsilon^+_{\bf k} \pm \sqrt{\left(\varepsilon^-_{\bf k} \right)^2+W_{\bf k}^2}\ ,$$ where $\varepsilon^{\pm}_{\bf k}=(\varepsilon_{\bf k} \pm \varepsilon_{\bf k+Q})/2$. In Fig. \[fig1\](a) we present the resulting Fermi surface in the DDW phase.
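As a quick numeric sketch (energies in meV, band parameters as quoted above), one can tabulate $E^{\pm}_{\bf k}$ and verify the reduced-zone periodicity $E^{\pm}_{\bf k+Q}=E^{\pm}_{\bf k}$, which follows from $\varepsilon^{+}_{\bf k+Q}=\varepsilon^{+}_{\bf k}$, $\varepsilon^{-}_{\bf k+Q}=-\varepsilon^{-}_{\bf k}$ and $W_{\bf k+Q}=-W_{\bf k}$:

```python
import numpy as np

t, tp, mu, W0 = 250.0, -100.0, -270.75, 42.0     # meV: t, t', mu, W_0

def bands(kx, ky):
    # Normal-state dispersion at k and its k+Q partner
    eps  = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu
    epsQ = -2*t*(np.cos(kx + np.pi) + np.cos(ky + np.pi)) \
           - 4*tp*np.cos(kx + np.pi)*np.cos(ky + np.pi) - mu
    ep, em = (eps + epsQ) / 2, (eps - epsQ) / 2
    W = 0.5 * W0 * (np.cos(kx) - np.cos(ky))     # DDW order parameter
    root = np.sqrt(em**2 + W**2)
    return ep + root, ep - root                  # E+_k, E-_k

kx, ky = 0.3, 1.1                                # an arbitrary test momentum
print(bands(kx, ky))
print(bands(kx + np.pi, ky + np.pi))             # identical: E±_{k+Q} = E±_k
```

On the magnetic zone boundary ($k_x+k_y=\pi$) one has $\varepsilon^{-}_{\bf k}=0$, so there the splitting reduces to $E^{+}_{\bf k}-E^{-}_{\bf k}=2|W_{\bf k}|$, a fact used below.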
Due to the doubling of the unit cell in the DDW state, the Fermi surface consists of hole pockets centered around $(\pm \pi/2, \pm \pi/2)$ and electron pockets around $(\pm \pi, 0)$ and $(0, \pm \pi)$. This type of Fermi surface has not yet been observed experimentally in the underdoped cuprates, possibly, as has recently been argued, due to additional interactions between quasiparticles [@sudiparp]. For the above band parameters, the chemical potential lies within both branches of the excitation spectrum, thus preventing the formation of a gap at the Fermi level. This is clearly visible from Fig. \[fig1\](b) where we plot the density of states (DOS) for various values of the DDW gap. In particular, one finds that for $|t'|>W_0/4$ a suppression of (i.e., dip in) the DOS is formed away from the Fermi level which increases with increasing $W_0$. In contrast, for $|t'|<W_0/4$, this suppression, which we identify with the pseudogap, opens at the Fermi level as was noted previously [@carbotte]. Note that for $t'=0$, the DOS vanishes at the Fermi level, and the DOS resembles that of a $d_{x^2-y^2}$-wave superconductor. In order to compute the dynamical spin susceptibility in the DDW state, we first introduce the spinor $$\Psi^\dagger_{{\bf k},\sigma}=\left(c^\dagger_{{\bf k},\sigma},c^\dagger_{{\bf k+Q},\sigma} \right) \ ,$$ where $\sigma$ is the spin index, and the electronic Green’s function in the DDW state is defined as $\hat{G}_{\sigma}({\bf k}, \tau-\tau^\prime) =- \langle {\cal T} \Psi_{{\bf k},\sigma}(\tau) \Psi^\dagger_{{\bf k},\sigma}(\tau^\prime) \rangle$. The bare (non-interacting) part of the dynamical spin susceptibility per spin degree of freedom is then given by $$\begin{aligned} \chi_0({\bf q}, i \Omega_m) &=& -\frac{T}{8} {\sum_{{\bf k}, n}}^\prime \mbox{Tr} \left[ {\hat G}({\bf k},i\omega_{n}) \right. \nonumber \\ & & \left. 
\times {\hat G}({\bf k+q},i\omega_{n}-i\Omega_{m}) \right] \label{bare}\end{aligned}$$ where ${\hat G}({\bf k},i\omega_{n})=\hat{G}_{\sigma}({\bf k},i\omega_{n}) {\hat \sigma}_0$ is the Green’s function matrix in momentum and Matsubara space[@mahan], and the primed sum runs over the reduced fermionic Brillouin zone of the DDW state. Note that with the above definition, one has $\chi_0^{zz}=2 \chi_0$. After performing the summation over the internal Matsubara frequencies and analytic continuation to the real frequency axis, one obtains for the retarded spin susceptibility in the DDW phase $$\begin{aligned} \chi_0({\bf q},\omega) &=& \frac{1}{8} {\sum_{\bf k}}^\prime \left( 1+\frac{\varepsilon^-_{\bf k} \varepsilon^-_{\bf k+q}+ W_{\bf k} W_{\bf k+q}} {\sqrt{ \left( \varepsilon^-_{\bf k} \right)^{2}+ W_{\bf k}^{2}} \sqrt{\left(\varepsilon^-_{\bf k+q} \right)^{2}+ W_{\bf k+q}^2}} \right) \left( \frac{f(E^+_{\bf k+q})-f(E^+_{\bf k})}{\omega +i0^+ -E^+_{\bf k+q}+E^+_{\bf k}}+ \frac{f(E^-_{\bf k+q})-f(E^-_{\bf k})}{\omega+i0^+ -E^-_{\bf k+q}+ E^-_{\bf k}}\right) \nonumber \\ && \hspace{-1.5cm} + \left(1-\frac{\varepsilon^-_{\bf k} \varepsilon^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left(\varepsilon^-_{\bf k}\right)^{2}+ W_{\bf k}^{2}} \sqrt{\left(\varepsilon^-_{\bf k+q}\right)^{2}+ W_{\bf k+q}^{2}}}\right) \left(\frac{f(E^-_{\bf k+q})-f(E^+_{\bf k})}{\omega+i0^+-E^-_{\bf k+q} +E^+_{\bf k}}+ \frac{f(E^+_{\bf k+q})-f(E^-_{\bf k})}{\omega +i0^+ -E^+_{\bf k+q}+ E^-_{\bf k}}\right) \label{chi_0}\end{aligned}$$ where $f(\epsilon)$ is the Fermi function. We first analyze the behavior of the imaginary part of $\chi_0$ at ${\bf Q}=(\pi,\pi)$, and present in Fig. \[fig2\] Im$\chi_0({\bf Q},\omega)$ as a function of frequency in the normal state, the DSC state, and the DDW state [@comment2] (for the form of $\chi_0$ in the DSC state, see Ref. [@eremin1]). 
The behavior of Im$\chi_0({\bf Q},\omega)$ in the normal and DSC states has been extensively discussed in the literature (see, for example, Refs. [@liu; @eremin1; @other]). In the normal state Im$\chi_0$ increases linearly at low frequencies with a slope determined by the Landau damping rate, while at higher energies its behavior is determined by the presence of the van Hove singularity. In contrast, in the superconducting state the susceptibility is gapped up to an energy $\Omega^{DSC}_{cr}={\rm min}_{\bf k} \left(|\Delta_{\bf k}| + |\Delta_{\bf k+Q}|\right)$ where $\Delta_{\bf k}$ is the superconducting gap and both ${\bf k}$ and ${\bf k+Q}$ lie on the Fermi surface. Due to the symmetry of the superconducting gap, one finds $\Delta_{\bf k}=- \Delta_{\bf k+Q}$, resulting in a discontinuous jump of Im$\chi_0$ at $\Omega^{DSC}_{cr}$ [@liu; @eremin1]. In order to discuss the behavior of Im$\chi_0$ in the DDW state, we first note that the expression for $\chi_0$ in Eq. (\[chi\_0\]) contains two terms that describe intraband scattering within the $E^{\pm}_{\bf k}$ bands, and two terms that represent interband scattering between the two bands. Since $E^{\pm}_{\bf k+Q}=E^\pm_{\bf k}$ the intraband scattering terms do not contribute to Im$\chi_0$ at ${\bf Q}$. Moreover, since $E^-_{\bf k} \leq E^+_{\bf k}$ the first interband scattering term yields a non-zero contribution to Im$\chi_0$ only for negative frequencies. Thus, only the second interband scattering term in Eq. (\[chi\_0\]) contributes to Im$\chi_0$ at ${\bf Q}$. Since the Fermi surfaces of the two energy bands, $E^{\pm}_{\bf k}$, cannot be connected by the wave vector ${\bf Q}$, as is evident from Fig. \[fig1\](a), Im$\chi_0$ is gapped at low frequencies up to an energy $\Omega^{DDW}_{cr} \approx 64.8$ meV.
The latter is determined by the minimum value of $E^+_{\bf k}-E^-_{\bf k}=2\sqrt{\left(\varepsilon^-_{\bf k} \right)^2+W_{\bf k}^2}$ in the DDW Brillouin zone, a condition that is set by the $\delta$-function arising from the last term in Eq.(\[chi\_0\]) for Im$\chi_0$. Note that $\varepsilon^-_{\bf k} \equiv 0$ along the boundary of the DDW Brillouin zone. Due to the requirement $sgn(E^+_{\bf k}) \neq sgn(E^-_{\bf k})$, we find that $\Omega^{DDW}_{cr}=2|W_{\bf k_0}|$ where ${\bf k_0}$ is the momentum at which the hole pocket around $(\pi/2,\pi/2)$ is intersected by the DDW Brillouin zone boundary (see Fig. \[fig1\]). Moreover, since $\varepsilon^-_{\bf k+Q}=-\varepsilon^-_{\bf k}$ and $W_{\bf k+Q}=-W_{\bf k}$, the second coherence factor in Eq.(\[chi\_0\]) is identical to $2$ for all momenta. Note that there exist two important differences in Im$\chi_0$ between the DDW and DSC state. First, in the DSC state, Im$\chi_0 \not = 0$ requires that the frequency exceeds $\Omega^{DSC}_{cr}= {\rm min}_{\bf k} \left(|\Delta_{\bf k}| +|\Delta_{\bf k+Q}|\right)$, a condition which is set by the $\delta$-function in Im$\chi_0$ (see Eq.(\[chi\_0\])) and simply reflects energy conservation. In contrast, in the DDW-state, Im$\chi_0 \not = 0$ requires (a) that $\omega -E^+_{\bf k+Q}+ E^-_{\bf k}=0$ for certain momenta ${\bf k}$, and (b) that for the same momenta $f(E^+_{\bf k+q})-f(E^-_{\bf k}) \not =0$. We find that there exist momenta for which (a) is satisfied at frequencies $\omega<\Omega^{DDW}_{cr}$, but that for the same momenta $f(E^+_{\bf k+q})-f(E^-_{\bf k})=0$ (at $T=0$). In other words, the critical frequency $\Omega^{DDW}_{cr}$ for the onset of a non-zero Im$\chi_0$ is determined by the difference in the population of the states that are involved in the scattering process, and not by energy conservation as in the superconducting state. 
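This occupation-limited threshold lends itself to a brute-force check. The sketch below (an illustration, not the numerics used for the figures) minimizes $E^{+}_{\bf k}-E^{-}_{\bf k}$ on a grid subject to the population constraint $E^{+}_{\bf k}>0>E^{-}_{\bf k}$, with the band parameters given above, and recovers a value close to the quoted $64.8$ meV:

```python
import numpy as np

t, tp, mu, W0 = 250.0, -100.0, -270.75, 42.0     # meV, as in the text

n = 1201
k = np.linspace(-np.pi, np.pi, n)
kx, ky = np.meshgrid(k, k)
epsp = -4*tp*np.cos(kx)*np.cos(ky) - mu          # eps^+_k = (eps_k + eps_{k+Q})/2
epsm = -2*t*(np.cos(kx) + np.cos(ky))            # eps^-_k = (eps_k - eps_{k+Q})/2
W = 0.5*W0*(np.cos(kx) - np.cos(ky))
root = np.sqrt(epsm**2 + W**2)
Ep, Em = epsp + root, epsp - root

# The interband channel at Q needs an occupied E- state and an empty E+ state
allowed = (Ep > 0) & (Em < 0)
omega_cr = np.min((Ep - Em)[allowed])
print(omega_cr)        # close to 64.8: set by occupation, not energy conservation
```

Dropping the occupation mask and minimizing $E^{+}_{\bf k}-E^{-}_{\bf k}=2\sqrt{(\varepsilon^-_{\bf k})^2+W_{\bf k}^2}$ alone would give zero (at the nodes of $W_{\bf k}$ on the zone boundary), which is precisely the qualitative point made in the text.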
This qualitative difference between the DDW and the DSC state bears important consequences: for $\omega> \Omega^{DDW}_{cr}$, one has Im$\chi_0 \sim \sqrt{\omega- \Omega^{DDW}_{cr}}$, in contrast to the discontinuous jump of Im$\chi_0$ at $\Omega^{DSC}_{cr}$ in the DSC state (this result for the DDW state differs from that in Ref. [@sudip1] due to the different Fermi surface topology considered here). This behavior becomes immediately apparent when one plots Im$\chi_0$ in the DDW-state around $\omega=\Omega^{DDW}_{cr}$, as shown in the inset of Fig. \[fig2\]. Note that the number of momenta which are involved in scattering processes and thus contribute to Im$\chi_0$ rapidly increases for $\omega>\Omega^{DDW}_{cr}$ due to a steeply increasing density of states of the function $E^+_{\bf k}-E^-_{\bf k}=2\sqrt{\left(\varepsilon^-_{\bf k} \right)^2+W_{\bf k}^2}$, which gives rise to the peak in Im$\chi_0$ at $\omega_p \approx 73$ meV. For momenta ${\bf q} \not = {\bf Q}$, the behavior of Im$\chi_0$ in the DDW state is more complex, since in addition to interband scattering, intraband scattering is now possible. In Fig. \[fig3\], we plot the frequency dependence of Im$\chi_0$ for several momenta in the DDW state (for comparison, Im$\chi_0$ in the DSC state is shown in the inset). We find that as one moves away from ${\bf Q}=(\pi,\pi)$, several square-root-like increases in Im$\chi_0$ appear, with the one lowest in energy rapidly decreasing in frequency. In order to better understand the combined frequency and momentum dependence of Im$\chi_0$ in the DDW state, we present in Fig. \[fig\_sep\] the contributions to Im$\chi_0$ at ${\bf Q}_i=0.98 {\bf Q}$ from interband scattering \[Fig. \[fig\_sep\](a)\] and intraband scattering within the $E_k^{+}$-band \[Fig. \[fig\_sep\](b)\] and $E_k^{-}$-band \[Fig. \[fig\_sep\](c)\] separately. 
Note that while the contribution from intraband scattering is approximately two orders of magnitude smaller than that from interband scattering, the former continuously increases from zero energy, such that Im$\chi_0$ no longer exhibits a gap. This result is valid for all momenta ${\bf q} \not = {\bf Q}$ in the vicinity of ${\bf Q}$. In contrast, the interband scattering term possesses three critical frequencies, $\Omega_{cr}^{(i)} (i=1,2,3)$, which arise from the opening of three non-degenerate scattering channels that are described by the scattering momenta shown in Fig. \[fig1\](a). These three critical energies are indicated by arrows in Fig. \[fig\_sep\](a). For all three scattering channels, the coherence factor is approximately 2. Note that the first and third scattering channels, which open at $\Omega_{cr}^{(1)} \approx 43$ meV and $\Omega_{cr}^{(3)} \approx 88$ meV and are described by arrows (1) and (3) in Fig. \[fig1\](a), respectively, connect momenta ${\bf k}$ and ${\bf k}^\prime$ with ${\bf k}-{\bf k}^\prime={\bf Q}_i -(\pi,\pi)$ and thus represent umklapp scattering. In contrast, channel (2), which opens at $\Omega_{cr}^{(2)} \approx 76$ meV \[see arrow (2) in Fig. \[fig1\](a)\], describes direct scattering with ${\bf k}-{\bf k}^\prime={\bf Q}_i$. The opening of each of these three scattering channels is accompanied by a square-root like increase of Im$\chi_0$. Note that the lowest threshold frequency for interband transitions vanishes at [**Q**]{}$_i = 0.94 {\bf Q}$, since this wave vector connects momenta on the Fermi surfaces of the $E_{\bf k}^{+}$ and $E_{\bf k}^{-}$ bands. The emergence of a resonance peak in the DDW state can be understood by considering the dynamical spin susceptibility within the random phase approximation (RPA).
Within this approximation [@umklsusc], the susceptibility (per spin degree of freedom) is given by $$\chi_{RPA}({\bf q}, \omega) = \frac{ \chi_0({\bf q}, \omega)}{1-U\chi_0({\bf q}, \omega)} \ , \label{susRPA}$$ where $U$ is the fermionic four-point vertex. We first consider ${\bf q}={\bf Q}$ and note that in the superconducting state, the discontinuous jump in Im$\chi_0$ leads to a logarithmic singularity in Re$\chi_0$. As a result, the resonance conditions, $U \mbox{Re}\chi_0({\bf Q},\omega=\omega_{res})=1$ and $\mbox{Im}\chi_0({\bf Q},\omega=\omega_{res})=0$, can be fulfilled simultaneously below the particle-hole continuum for an arbitrarily small value of $U>0$, leading to the emergence of a resonance peak as a spin exciton. In contrast, in the DDW state, Im$\chi_0$ exhibits a square-root like frequency dependence above $\Omega_{cr}^{DDW}$, leading to an increase of Re$\chi_0$ at the critical frequency, but not to a singularity in Re$\chi_0$. Specifically, we find $${\rm Re} \, \chi_0 = \frac{2 \sqrt{W}}{\pi} \, \alpha \, {\rm Re} \left[ 2-\sqrt{\frac{\Delta_-}{W}} \arctan{ \left( \sqrt{ \frac{W}{\Delta_-} } \right)} -\sqrt{ \frac{\Delta_+}{W} } \arctan{ \left( \sqrt{\frac{W}{\Delta_+}} \right)} \right] + ...$$ where $W=E_c-\Omega^{DDW}_{cr}$, $E_c$ is the high energy cut-off for the square-root like frequency dependence of Im$\chi_0=\alpha \sqrt{\omega-\Omega^{DDW}_{cr}}$, $\Delta_{\pm}=\Omega^{DDW}_{cr}\pm \omega$ and the ellipsis denote background contributions to Re$\chi_0$ that are independent of the opening of a new scattering channel. The above form of Re$\chi_0$ implies that $U$ now has to exceed a critical value, $U_c$, before a resonance peak (in the form of a spin exciton) can emerge in the DDW state. We note, however, that the values of $U$ typically taken to describe the emergence of a resonance peak in the DSC state of optimally doped cuprate superconductors, exceed $U_c$, such that a resonance peak also emerges in the DDW state.
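The quoted form of Re$\chi_0$ is simply the Kramers-Kronig transform of a square-root onset Im$\chi_0=\alpha\sqrt{\omega-\Omega^{DDW}_{cr}}$ with upper cutoff $E_c$. A short numeric sketch (hypothetical $\alpha$, threshold and cutoff values) confirms the agreement below the threshold:

```python
import numpy as np

alpha, Omega, Ec = 1.0, 64.8, 200.0      # hypothetical onset/cutoff (meV-like)
W = Ec - Omega

def re_chi0(w):
    # Closed form quoted in the text (background terms dropped), for w < Omega
    dm, dp = Omega - w, Omega + w
    term = lambda d: np.sqrt(d / W) * np.arctan(np.sqrt(W / d))
    return (2 * np.sqrt(W) / np.pi) * alpha * (2 - term(dm) - term(dp))

def re_chi0_kk(w):
    # Kramers-Kronig: (2/pi) int w' Im chi0(w') / (w'^2 - w^2) dw' over
    # [Omega, Ec], written in u = sqrt(w' - Omega) so the integrand is smooth
    u = np.linspace(0.0, np.sqrt(W), 200_001)
    f = 2 * u**2 * (1 / (u**2 + Omega - w) + 1 / (u**2 + Omega + w))
    h = u[1] - u[0]
    return (alpha / np.pi) * (np.sum(f) - 0.5 * (f[0] + f[-1])) * h

for w in (0.0, 30.0, 60.0):
    print(re_chi0(w), re_chi0_kk(w))     # matching pairs of values
```

Because Re$\chi_0$ stays finite as $\omega\to\Omega^{DDW}_{cr}$ (unlike the logarithm generated by a discontinuous jump), the resonance condition $U\,$Re$\chi_0=1$ can only be met for $U$ above a finite $U_c$, as stated in the text.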
In other words, for $U>U_c$, the resonance conditions $U \mbox{Re}\chi_0({\bf Q},\omega=\omega_{res})=1$ and $\mbox{Im}\chi_0({\bf Q},\omega=\omega_{res})=0$ are satisfied in the DDW state at a frequency $\omega_{res}<\Omega^{DDW}_{cr}$. As a result, a resonance peak emerges in the RPA spin susceptibility around ${\bf Q}=(\pi,\pi)$, as shown in Fig. \[fig5\] (the value of $U$ is chosen such that in the DSC state, $\omega_{res}=41$ meV). Away from ${\bf Q}$, the mode becomes rapidly damped due to the opening of a scattering channel for intraband transitions, as discussed above. In addition, the lowest critical frequency for interband transitions rapidly decreases to zero. As a result, the resonance peak in the DDW state is confined to the immediate vicinity of ${\bf Q}=(\pi,\pi)$, and shows no significant dispersion. Note that the upward and downward structures in Im$\chi_{RPA}$ visible in Fig. \[fig5\] do not represent real poles in the susceptibility but arise from the frequency structure of Im$\chi_0$ away from $(\pi,\pi)$. This momentum dependence of the ’spin exciton’ in the DDW state stands in stark contrast to the dispersion of the resonance peak in the DSC state [@eremin1] (see Fig. \[dispcomp\]). Coexisting DDW and DSC phases {#secDDWDSC} ============================= We next consider a state with coexisting DDW and DSC order whose mean-field Hamiltonian is given by $$\begin{aligned} \mathcal{H}^{DSC+DDW} = \sum_{\bf k}\psi_{\bf k}^{\dag}H({\bf k})\psi_{\bf k} \quad, \label{hddwdsc}\end{aligned}$$ where $\psi_{\bf k}^{\dag}=\left(c_{{\bf k}\uparrow}^{\dag},c_{{\bf k+Q}\uparrow}^{\dag},c_{-{\bf k}\downarrow},c_{{\bf -k-Q}\downarrow}\right)$ and [@sudip1] $$\begin{aligned} H_{\bf k} &=&\left(\begin{array}{cccc} \varepsilon_{\bf k} & i W_{\bf k} & \Delta_{\bf k} & 0 \\ -i W_{\bf k} & \varepsilon_{\bf k+Q} & 0 & -\Delta_{\bf k} \\ \Delta_{\bf k} & 0 & -\varepsilon_{\bf k} & i W_{\bf k} \\ 0 & -\Delta_{\bf k} & -iW_{\bf k} & -\varepsilon_{\bf k+Q} \end{array}\right) \quad. 
\label{hk}\end{aligned}$$ The energy bands arising from diagonalizing the Hamiltonian in Eq. (\[hk\]) are given by $$\Omega^{\pm}_{{\bf k}} = \sqrt{\left(E^{\pm}_{\bf k}\right)^2 + \Delta_{\bf k}^{2}} \ ,$$ with $E^{\pm}_{\bf k}$ being the energy bands of the pure DDW state given above. The bare susceptibility, $\chi_0$, in the coexisting phase can again be calculated using Eq. (\[bare\]), with the only difference being that the Green’s function $\hat{G}_{\sigma}({\bf k},i\omega_{n})$ is now a $(4 \times 4)$ matrix. The full expression for $\chi_0$ in the coexistence phase is somewhat lengthy and is therefore given in Appendix \[appendix\]. In Fig. \[Chi0\_DDWDSC\] we present Im$\chi_0$ as a function of frequency for several momenta ${\bf q}=\eta(\pi,\pi)$ along the diagonal of the magnetic BZ. At ${\bf Q}=(\pi,\pi)$ ($\eta=1.0$), Im$\chi_0$ exhibits a single discontinuous jump at the critical frequency, $\Omega^{coex}_{cr}=97$ meV. The magnetic scattering associated with the opening of this scattering channel connects the “hot spots" in the fermionic BZ, i.e., those momenta ${\bf k}$ and ${\bf k+Q}$ for which $\varepsilon_{\bf k}=\varepsilon_{\bf k+Q}=0$. Correspondingly, the critical frequency is given by $\Omega^{coex}_{cr}=2 \sqrt{\Delta^2({\bf k}_{hs})+W^2({\bf k}_{hs})}$ where ${\bf k}_{hs}$ is the momentum of the hot spots. In contrast, away from ${\bf Q}=(\pi,\pi)$, we find that Im$\chi_0$ exhibits 5 discontinuous jumps at critical frequencies, $\Omega_{cr}^{(i)}$ with $i=1,\ldots,5$, indicating the opening of new scattering channels (for $\eta=0.95$ these five discontinuous jumps are labeled in Fig. \[Chi0\_DDWDSC\]). Note that in the coexistence phase, the opening of a new scattering channel is accompanied by a discontinuous jump, similar to the pure DSC state, but in contrast to the DDW state, as discussed above. The momentum dependence of these critical frequencies is shown in Fig. \[jumps\](a).
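As a consistency sketch (band parameters as in the pure DDW section, and, for illustration, a $d$-wave $\Delta_{\bf k}$ with the same $42$ meV amplitude), one can diagonalize the $4\times4$ matrix $H_{\bf k}$ numerically and confirm that its eigenvalues are $\pm\Omega^{\pm}_{\bf k}$:

```python
import numpy as np

t, tp, mu = 250.0, -100.0, -270.75               # meV, as in the DDW section
W0, D0 = 42.0, 42.0                              # DDW and DSC amplitudes

def eps(kx, ky):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu

kx, ky = 0.4, 2.0                                # an arbitrary test momentum
e, eQ = eps(kx, ky), eps(kx + np.pi, ky + np.pi)
W = 0.5 * W0 * (np.cos(kx) - np.cos(ky))
D = 0.5 * D0 * (np.cos(kx) - np.cos(ky))         # d-wave gap Delta_k

# The 4x4 mean-field Hamiltonian H_k of the coexisting DDW + DSC state
H = np.array([[ e,     1j*W,  D,     0    ],
              [-1j*W,  eQ,    0,    -D    ],
              [ D,     0,    -e,     1j*W ],
              [ 0,    -D,    -1j*W, -eQ   ]])

ep, em = (e + eQ) / 2, (e - eQ) / 2
Epm = ep + np.array([1.0, -1.0]) * np.hypot(em, W)   # DDW bands E±_k
Omega = np.sqrt(Epm**2 + D**2)                       # quoted Omega±_k

print(np.sort(np.linalg.eigvalsh(H)))
print(np.sort(np.concatenate([Omega, -Omega])))      # the same four values
```

The agreement reflects the fact that $H_{\bf k}^2$ is block-diagonal, with blocks whose eigenvalues are $(E^{\pm}_{\bf k})^2+\Delta_{\bf k}^2$.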
At $\Omega_{cr}^{(1)}$ \[$\Omega_{cr}^{(2)}$\], a scattering channel for intraband scattering within the $\Omega^{+}_{{\bf k}}$ ($\Omega^{-}_{{\bf k}}$) band opens, and Im$ \chi_0$ acquires a non-zero contribution from $\chi^{(2)}$ ($\chi^{(5)}$) given in Eq.(\[chi2\]) \[Eq.(\[chi5\])\] of Appendix \[appendix\]. As follows directly from Eqs.(\[chi2\]) and (\[chi5\]), the coherence factors associated with these two scattering processes vanish identically at ${\bf Q}=(\pi,\pi)$, and hence, no value for $\Omega_{cr}^{(1,2)}$ can be defined at this momentum. However, away from ${\bf Q}=(\pi,\pi)$, the coherence factors are no longer zero, and two discontinuous jumps appear in Im$\chi_0$ that are associated with the opening of two new scattering channels at $\Omega_{cr}^{(1,2)}$. Note that the magnitude of the jumps at $\Omega_{cr}^{(1,2)}$ increases as one moves away from ${\bf Q}=(\pi,\pi)$, which is a direct consequence of the increasing coherence factors. Similar to the pure DSC state, the lowest critical frequency, $\Omega_{cr}^{(1)}$, reaches zero at ${\bf q}=0.8(\pi, \pi)$, i.e., $\eta=0.8$. In contrast, at $\Omega_{cr}^{(3,4,5)}$ new scattering channels for interband scattering between the $\Omega^{+}_{{\bf k}}$ and $\Omega^{-}_{{\bf k}}$ bands are opened. These three critical frequencies are degenerate at ${\bf Q}=(\pi,\pi)$, but this degeneracy is lifted for ${\bf q} \not = {\bf Q}$, as follows immediately from Fig. \[jumps\](a). For $\eta=0.98$, we present in Fig. \[jumps\](b) the scattering momenta that are associated with the opening of the above discussed five scattering channels. For completeness, we also present the Fermi surfaces for the $E^{\pm}_{\bf k}$ bands. Note that the scattering vectors (1) and (2) describe intraband scattering within the $\Omega^{+}_{{\bf k}}$ and $\Omega^{-}_{{\bf k}}$ bands, while the scattering vectors (3), (4) and (5) represent interband scattering.
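The vanishing of these coherence factors at ${\bf Q}$ follows from $\varepsilon^-_{\bf k+Q}=-\varepsilon^-_{\bf k}$ and $W_{\bf k+Q}=-W_{\bf k}$, which forces the DDW factor $1+(\varepsilon^-_{\bf k}\varepsilon^-_{\bf k+q}+W_{\bf k}W_{\bf k+q})/(\cdots)$ entering $\chi^{(2)}$ and $\chi^{(5)}$ to zero for ${\bf q}={\bf Q}$. A minimal numerical spot-check (band parameters are again illustrative assumptions, not the values used in the text):

```python
import math, random

t, tp, mu, W0 = 1.0, -0.3, -1.0, 0.14   # illustrative values only
QX = QY = math.pi

def eps(kx, ky):
    return -2*t*(math.cos(kx) + math.cos(ky)) - 4*tp*math.cos(kx)*math.cos(ky) - mu

def eps_minus(kx, ky):                   # eps^-_k = (eps_k - eps_{k+Q})/2
    return 0.5*(eps(kx, ky) - eps(kx + QX, ky + QY))

def W(kx, ky):                           # d-wave DDW form factor
    return 0.5*W0*(math.cos(kx) - math.cos(ky))

def ddw_factor(kx, ky, qx, qy):
    """DDW part of the coherence factor in chi^(2)/chi^(5)."""
    e1, e2 = eps_minus(kx, ky), eps_minus(kx + qx, ky + qy)
    w1, w2 = W(kx, ky), W(kx + qx, ky + qy)
    return 1.0 + (e1*e2 + w1*w2) / (math.hypot(e1, w1) * math.hypot(e2, w2))

random.seed(1)
ks = [(random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
      for _ in range(5)]
at_Q = [ddw_factor(kx, ky, QX, QY) for kx, ky in ks]             # vanishes identically
off_Q = [ddw_factor(kx, ky, 0.95*QX, 0.95*QY) for kx, ky in ks]  # generically finite
print(max(abs(v) for v in at_Q), max(abs(v) for v in off_Q))
```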
The scattering vectors (3), (4), and (5) are identical to those present in the pure DDW state \[for comparison, see Fig. 1(a)\]. In Fig. \[RPAchi\] we present the RPA susceptibility in the coexisting DDW and DSC phase. As in the pure DSC state, the discontinuous jump in Im$\chi_0$ in the coexistence phase is accompanied by a logarithmic divergence in Re$\chi_0$, which in turn gives rise to a resonance peak below the particle-hole continuum for an arbitrarily small fermionic interaction. A comparison with the RPA susceptibility in the pure DSC state shown in Fig. \[RPAchi\](b) reveals that the frequency position of the resonance peak varies more quickly with momentum (near ${\bf Q}=(\pi,\pi)$) in the coexistence phase than in the pure DSC state. This difference becomes particularly evident when one plots the dispersion of the resonance peak both in the coexistence phase and the pure DSC state, as shown in Fig. \[dispcomp\]. We find that the difference in the dispersion of the resonance peaks is particularly pronounced around ${\bf Q}$ ($0.95 \lesssim \eta \lesssim 1.05$), with a more cusp-like dispersion in the coexistence phase. This cusp follows the form of the particle-hole continuum in the vicinity of ${\bf Q}$ in the coexistence phase, as is evident from Fig. \[jumps\](a). Thus the dispersion of the resonance peak directly reflects the different momentum dependence of the particle-hole continuum in the vicinity of ${\bf Q}$ in the coexisting DDW and DSC state and the pure DSC state. However, away from ${\bf Q}$, the particle-hole continuum, as well as the dispersion of the resonance peak, are quite similar in both phases.

Conclusions {#secconcl}
===========

In conclusion, we have analyzed the momentum and frequency dependence of the dynamical spin susceptibility in the pure DDW state and the phase with coexisting DDW and DSC order.
We find that due to the opening of a spin gap in Im$\chi_0$ in both phases, a resonance peak emerges below the particle-hole continuum. However, in the DDW state, Im$\chi_0$ exhibits a square-root-like increase at the critical frequencies, in contrast to the coexisting DDW and DSC phase (or the pure DSC phase), where the onset of Im$\chi_0\not = 0$ is accompanied by a discontinuous jump. As a result, Re$\chi_0$ in the DDW state does not exhibit a divergence, but simply an enhancement at the critical frequency, and hence, a finite fermionic interaction strength is necessary for the emergence of a resonance peak in the DDW state. This result is qualitatively different from the coexisting DDW and DSC phase and the pure DSC state, where a resonance peak emerges for an infinitesimally small interaction strength. We note, however, that for the strength of the fermionic interaction usually taken to describe the resonance peak in the DSC state, a resonance peak also emerges in the DDW state. Moreover, we find that the resonance peak in the DDW state is basically dispersionless and confined to the vicinity of ${\bf Q}=(\pi,\pi)$ due to the form of the particle-hole continuum in the DDW state. In contrast, the dispersion of the resonance peak in the coexisting DDW and DSC state is similar to that in the pure DSC state, with the exception that in the vicinity of ${\bf Q}$, the former exhibits a cusp. These results show that the detailed momentum and frequency dependence of the resonance peak is different in all three phases, the pure DDW, pure DSC and coexisting DDW and DSC phases. Thus, a detailed experimental study of the resonance peak in the underdoped cuprates permits one to identify the nature of the underlying phase in which the resonance peak emerges. We believe, however, that the currently available experimental INS data do not yet allow an unambiguous conclusion with regard to the nature of the phases present in the underdoped cuprate superconductors.\ D.K.M.
acknowledges financial support by the Alexander von Humboldt Foundation, the National Science Foundation under Grant No. DMR-0513415 and the U.S. Department of Energy under Award No. DE-FG02-05ER46225.

Dynamical spin susceptibility in the regime of coexisting DDW+DSC phases {#appendix}
========================================================================

In the coexisting DDW and $d$-wave superconducting phase, the susceptibility is given by $$\chi({\bf q},\omega)=\sum_i \chi^{(i)}({\bf q},\omega)$$ where $$\begin{aligned} \chi^{(1)}({\bf q},\omega)&=&\frac{1}{16} \sum_{\bf k} \left(1+\frac{E_{\bf k}^{+}E_{\bf k+q}^{+}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^{+}_{{\bf k}}\Omega^{+}_{{\bf k+q}}}\right) \left(1+\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \frac{ f(\Omega^{+}_{\bf k+q})-f(\Omega^{+}_{\bf k})}{\omega-\Omega^{+}_{\bf k+q}+\Omega^{+}_{\bf k}+i\delta} \nonumber\\ \label{chi1}\end{aligned}$$ $$\begin{aligned} \chi^{(2)}({\bf q},\omega) & = & \frac{1}{32} \sum_{\bf k} \left(1-\frac{E_{\bf k}^{+}E_{\bf k+q}^{+}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^+_{\bf k}\Omega^+_{\bf k+q}}\right) \left(1+\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \nonumber \\&& \times \left(\frac{1-f(\Omega^+_{\bf k+q})-f(\Omega^+_{\bf k})}{\omega+\Omega^+_{\bf k+q}+ \Omega^+_{\bf k}+i\delta} +\frac{f(\Omega^+_{\bf k+q})+f(\Omega^+_{\bf k})-1}{\omega-\Omega^+_{\bf k+q}-\Omega^+_{\bf k}+i\delta}\right) \label{chi2}\end{aligned}$$ $$\begin{aligned} \chi^{(3)}({\bf q},\omega)&=&\frac{1}{16} \sum_{\bf k} \left(1+\frac{E_{\bf k}^{+}E_{\bf k+q}^{-}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^+_{\bf k}\Omega^-_{\bf k+q}}\right) \left(1-\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \nonumber \\&& \times \left(\frac{f(\Omega^-_{\bf k+q})-f(\Omega^+_{\bf k})} {\omega-\Omega^-_{\bf k+q}+\Omega^+_{\bf k} +i\delta} +\frac{f(\Omega^+_{\bf k})-f(\Omega^-_{\bf k+q})}{\omega-\Omega^+_{\bf k}+\Omega^-_{\bf k+q} +i\delta}\right) \label{chi3}\end{aligned}$$ $$\begin{aligned} \chi^{(4)}({\bf q},\omega)&=&\frac{1}{16} \sum_{\bf k} \left(1-\frac{E_{\bf k}^{+}E_{\bf k+q}^{-}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^+_{\bf k}\Omega^-_{\bf k+q}}\right) \left(1-\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \nonumber \\&& \times \left(\frac{1-f(\Omega^-_{\bf k+q})-f(\Omega^+_{\bf k})} {\omega+\Omega^+_{\bf k}+\Omega^-_{\bf k+q}+i\delta} +\frac{f(\Omega^+_{\bf k})+f(\Omega^-_{\bf k+q})-1}{\omega-\Omega^-_{\bf k+q}-\Omega^+_{\bf k} +i\delta}\right) \label{chi4}\end{aligned}$$ $$\begin{aligned} \chi^{(5)}({\bf q},\omega)&=&\frac{1}{32} \sum_{\bf k} \left(1-\frac{E_{\bf k}^{-}E_{\bf k+q}^{-}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^-_{\bf k}\Omega^-_{\bf k+q}}\right) \left(1+\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \nonumber \\&& \times \left(\frac{1-f(\Omega^-_{\bf k+q})-f(\Omega^-_{\bf k})}{\omega+\Omega^-_{\bf k}+ \Omega^-_{\bf k+q}+i\delta} +\frac{f(\Omega^-_{\bf k})+f(\Omega^-_{\bf k+q})-1}{\omega-\Omega^-_{\bf k+q}-\Omega^-_{\bf k}+i\delta}\right) \label{chi5}\end{aligned}$$ $$\begin{aligned} \chi^{(6)}({\bf q},\omega)&=&\frac{1}{16} \sum_{\bf k} \left(1+\frac{E_{\bf
k}^{-}E_{\bf k+q}^{-}+\Delta_{\bf k}\Delta_{\bf k+q}}{\Omega^-_{\bf k}\Omega^-_{\bf k+q}}\right) \left(1+\frac{{\varepsilon}^-_{\bf k}{\varepsilon}^-_{\bf k+q}+W_{\bf k}W_{\bf k+q}} {\sqrt{\left({\varepsilon}^-_{\bf k}\right)^{2}+W_{\bf k}^{2}}\sqrt{\left({\varepsilon}^-_{\bf k+q}\right)^{2}+W_{\bf k+q}^{2}}}\right) \frac{f(\Omega^-_{\bf k+q})-f(\Omega^-_{\bf k})}{\omega-\Omega^-_{\bf k+q}+\Omega^-_{\bf k}+i\delta} \nonumber \\ \label{chi6}\end{aligned}$$ [99]{} T. Timusk and B. Statt, Rep. Prog. Phys. [**62**]{}, 61 (1999); M. R. Norman, D. Pines, and C. Kallin, cond-mat/0507031 (unpublished). P.W. Anderson [*et al.*]{}, J. Phys. Condens. Mat. [**16**]{}, R755 (2004); D. A. Ivanov, P. A. Lee, and X.-G. Wen, Phys. Rev. Lett. [**84**]{}, 3958 (2000); C.M. Varma, Phys. Rev. Lett. [**83**]{}, 3538 (1999); V. J. Emery, S. A. Kivelson, and O. Zachar, Phys. Rev. B [**56**]{}, 6120 (1997); L. Benfatto, S. Caprara, and C. Di Castro, Eur. Phys. Jour. B [**17**]{}, 95 (2000); J. Schmalian, D. Pines, and B. Stojkovic, Phys. Rev. Lett. [**80**]{}, 3839 (1998); J.R. Engelbrecht, A. Nazarenko, M. Randeria, and E. Dagotto, Phys. Rev. B [**57**]{}, 13406 (1998); Q. Chen, I. Kosztin, B. Janko, and K. Levin, Phys. Rev. B [**59**]{}, 7083 (1999); S.C. Zhang, Science [**275**]{}, 1089 (1997). S. Chakravarty, R.B. Laughlin, D.K. Morr, and C. Nayak, Phys. Rev. B [**63**]{}, 094503 (2001). S. Tewari, H.-Y. Kee, C. Nayak, and S. Chakravarty, Phys. Rev. B [**64**]{}, 224516 (2001). S. Chakravarty, C. Nayak, S. Tewari, and X. Yang, Phys. Rev. Lett. [**89**]{}, 277003 (2002). M.V. Eremin, I. Eremin, and A. Terzi, Phys. Rev. B [**66**]{}, 104524 (2002). J. Rossat-Mignod, L.P. Regnault, C. Vettier, P. Bourges, P. Burlet, J. Bossy, Physica C [**185-189**]{}, 86 (1991). H.A. Mook, M. Yethiraj, G. Aeppli, T.E. Mason, and T. Armstrong, Phys. Rev. Lett. [**70**]{}, 3490 (1993). H. F. Fong, B. Keimer, P.W. Anderson, D. Reznik, F. Dogan, and I.A. Aksay, Phys. Rev. Lett.
[**75**]{}, 316 (1995); [*ibid.*]{}, Phys. Rev. B [**54**]{}, 6708 (1996); H.F. Fong [*et al.*]{}, Nature (London) [**398**]{}, 588 (1999); S. Pailhes, Y. Sidis, P. Bourges, V. Hinkov, A. Ivanov, C. Ulrich, L.P. Regnault, and B. Keimer, Phys. Rev. Lett. [**93**]{}, 167001 (2004). P. Bourges, L.P. Regnault, Y. Sidis, and C. Vettier, Phys. Rev. B [**53**]{}, 876 (1996); H. He [*et al.*]{}, Science [**295**]{}, 1045 (2002). See for review P. Bourges, in “The gap Symmetry and Fluctuations in High Temperature Superconductors” edited by J. Bok, G. Deutscher, D. Pavuna and S.A. Wolf (Plenum Press, 1998). P. Dai, H.A. Mook, S.M. Hayden, G. Aeppli, T.G. Perring, R.D. Hunt, and F. Dogan, Science [**284**]{}, 1344 (1999); P. Dai, M. Yethiraj, H.A. Mook, T.B. Lindemer, and F. Dogan, Phys. Rev. Lett. [**77**]{}, 5425 (1996). H.F. Fong, B. Keimer, D.L. Milius, and I.A. Aksay, Phys. Rev. Lett. [**78**]{}, 713 (1997); H.F. Fong [*et al.*]{}, Phys. Rev. B [**61**]{}, 14773 (2000). S.M. Hayden, H.A. Mook, P. Dai, T.G. Perring, and F. Dogan, Nature (London) [**429**]{}, 531 (2004). C. Stock, W.J.L. Buyers, R.A. Cowley, P.S. Clegg, R. Coldea, C.D. Frost, R. Liang, D. Peets, D. Bonn, W.N. Hardy, and R. J. Birgeneau Phys. Rev. B [**71**]{}, 024522 (2005); C. Stock, W.J.L. Buyers, R. Liang, D. Peets, Z. Tun, D. Bonn, W.N. Hardy, and R.J. Birgeneau, Phys. Rev. B [**69**]{}, 014502 (2004). H.F. Fong [*et al.*]{} Phys. Rev. Lett. [**75**]{}, 316 (1995); D. Z. Liu, Y. Zha, and K. Levin Phys. Rev. Lett. [**75**]{}, 4130 (1995); A. J. Millis and H. Monien Phys. Rev. B [**54**]{}, 16172 (1996); A. Abanov, and A.V. Chubukov, Phys. Rev. Lett. [**83**]{}, 1652 (1999); T. Dahm, D. Manske, and L. Tewordt, Phys. Rev. B [**58**]{}, 12454 (1998); J. Brinckmann and P. A. Lee, Phys. Rev. Lett. [**82**]{}, 2915 (1999); Y.-J. Kao, Q. Si, and K. Levin, Phys. Rev. B [**61**]{}, R11898 (2000); F. Onufrieva and P. Pfeuty, Phys. Rev. B [**65**]{}, 054515 (2002); M. Eschrig and M.R. Norman, Phys. Rev. Lett. 
[**89**]{}, 277005 (2002); D. Manske, I. Eremin, and K. H. Bennemann, Phys. Rev. B [**63**]{}, 054517 (2001); M.R. Norman, Phys. Rev. B [**61**]{}, 14751 (2000); [*ibid*]{} [**63**]{}, 092509 (2001); A.V. Chubukov, B. Janko and O. Tchernyshov, Phys. Rev. B [**63**]{}, 180507(R) (2001); I. Sega, P. Prelovsek, and J. Bonca, Phys. Rev. B [**68**]{}, 054524 (2003). I.Eremin, D.K. Morr, A.V. Chubukov, K. Bennemann, and M.R. Norman, Phys. Rev. Lett. [**94**]{}, 147001 (2005). L. Yin, S. Chakravarty, and P.W. Anderson, Phys. Rev. Lett. [**78**]{}, 3559 (1997); E. Demler and S.C. Zhang, Phys. Rev. Lett. [**75**]{}, 4126 (1995); D.K. Morr and D. Pines, Phys. Rev. Lett. [**81**]{}, 1086 (1998); M. Vojta and T. Ulbricht Phys. Rev. Lett. [**93**]{}, 127002 (2004); G. S. Uhrig, K. P. Schmidt, and M. Grüninger Phys. Rev. Lett. [**93**]{}, 267003 (2004); G. Seibold and J. Lorenzana Phys. Rev. Lett. [**94**]{}, 107006 (2005). A. Damascelli, Z. Hussain, and Z.-X. Shen, Rev. Mod. Phys. [**75**]{}, 473 (2003). E. Cappelluti and R. Zeyher, Phys. Rev. B [**59**]{}, 6475 (1999); I. Eremin and M. Eremin, J. Supercond. [**10**]{}, 459 (1997). B. Dora, K. Maki, and A. Virosztek, Mod. Phys. Lett. [**18**]{}, 327 (2004). S. Chakravarty, Ch. Nayak, and S. Tewari, Phys. Rev. B [**68**]{}, 100504(R) (2003). B. Valenzuela, E.J. Nicol, and J.P. Carbotte, Phys. Rev. B [**71**]{}, 134503 (2005). See, for example, G.D. Mahan, [*Many-Particle Physics*]{}, (Plenum Press, New York and London, 1990). The results for Im$\chi_0$ were obtained by using the tetrahedron method [@taut] to evaluate Eq.(\[chi\_0\]). G. Lehmann, and M. Taut, Phys. Stat. Solidi B [**54**]{}, 469 (1972). Despite the doubling of the unit cell one finds for the umklapp susceptibility $\chi({\bf q, Q},\omega)=0$ in the DDW state.
---
abstract: 'We present numerical results for finite-temperature $T>0$ thermodynamic quantities, the entropy $s(T)$, the uniform susceptibility $\chi_0(T)$ and the Wilson ratio $R(T)$, for several isotropic $S=1/2$ extended Heisenberg models which are prototype models for planar quantum spin liquids. We consider in this context the frustrated $J_1$-$J_2$ model on the kagome, triangular, and square lattices, as well as the Heisenberg model on the triangular lattice with ring exchange. Our analysis reveals that in the spin-liquid parameter regimes the low-temperature $s(T)$ typically remains considerable, while $\chi_0(T)$ is reduced, mostly consistent with a triplet gap. This leads to a vanishing $R(T \to 0)$, indicating a macroscopic number of singlets lying below the triplet excitations. This is in contrast to the $J_1$-$J_2$ Heisenberg chain, where $R(T \to 0)$ either remains finite in the gapless regime, or the singlet and triplet gaps are equal in the dimerized regime.'
author:
- 'P. Prelovšek'
- 'K. Morita'
- 'T. Tohyama'
- 'J. Herbrych'
bibliography:
- 'manuwilson.bib'
title: 'Vanishing Wilson ratio as the hallmark of quantum spin-liquid models'
---

Introduction
============

Various frustrated $S=1/2$ Heisenberg models (HM) have been the subject of intensive theoretical studies in recent decades in connection with the possibility of a spin-liquid (SL) ground state (g.s.). These efforts have recently been strengthened by the discovery of several classes of insulating materials revealing low-energy spin excitations behaving as a quantum SL without any magnetic order down to low temperatures (for reviews see [@lee08; @balents10; @savary17]). Among isotropic $S=1/2$ two-dimensional (2D) models, most numerical evidence for the SL g.s.
has accumulated for the antiferromagnetic (AFM) HM on the kagome lattice (KL) [@mila98; @budnik04; @singh07; @yan08; @lauchli11; @iqbal11; @depenbrock12], but also for the $J_1$-$J_2$ HM on the square lattice (SQL) [@capriotti00; @mambrini06; @jiang12; @gong14; @morita15; @morita16; @wang18; @liu18], the $J_1$-$J_2$ HM on the triangular lattice (TL) [@kaneko14; @zhu15; @hu15; @iqbal16; @wietek17; @prelovsek18] and the HM on the TL with ring exchange [@misguich99; @motrunich05]. While the character of the g.s. and its properties still offer several controversies and challenges, much less is known about the finite-temperature $T>0$ behavior of several basic quantities. At least some of them have already been measured in experiments on SL materials and can thus serve as a test of whether, and to what extent, actual materials can be accounted for by theoretical models. Among measurable spin properties are thermodynamic quantities such as the uniform magnetic susceptibility $\chi_0(T)$, the magnetic (contribution to the) specific heat $C_V(T)$ and the related spin entropy density $s(T)$. They are crucial to pinpoint the different characters and scenarios of SL behavior, in particular whether materials follow a gapped or a gapless SL. These quantities are mostly extracted from experiments on KL systems, the prominent example being herbertsmithite [@mendels07; @olariu08; @han12; @fu15; @norman16], but also related compounds in the same class [@hiroi01; @fak12; @li14; @gomilsek16; @feng17; @zorko19]. Other examples are organic compounds where the relevant lattice is triangular [@shimizu03; @shimizu06; @itou10; @zhou17], as well as the charge-density-wave system 1T-TaS$_2$, recently established as a SL with composite $S=1/2$ spins on the TL [@klanjsek17; @kratochvilova17; @law17; @he18]. The basic spin-exchange scale in most of these systems is modest and, as a consequence, the whole $T$ range is experimentally accessible, which allows for a test of the whole range of spin excitations.
Nevertheless, it should be noted that the lowest $T$ might be influenced by additional mechanisms such as the Dzyaloshinski-Moriya interaction [@rigol07; @cepas08; @zorko08], interlayer couplings and random effects [@kawamura19]. It has been rather well established by elaborate exact-diagonalization (ED) and series-expansion studies of the HM with nearest-neighbor (n.n.) exchange on the KL [@mila98; @budnik04; @singh07; @singh08; @lauchli11; @lauchli19] that the lowest excitations are singlets dominating over the triplet excitations, for which most ED studies reveal a finite spin (triplet) gap $\Delta_t >0$, although there are numerical indications also for a gapless scenario [@iqbal13; @he17]. It has recently been shown [@prelovsek19] that the same scenario can be traced via the temperature-dependent Wilson ratio $R(T)$ in a $J_1$-$J_2$ HM on the TL including the next-nearest-neighbor (n.n.n.) exchange $J_2>0$ in the regime where the SL g.s. is expected [@kaneko14; @zhu15; @hu15; @iqbal16]. This is in contrast with the triplet ($S=1$) magnon excitations being the lowest excitations in an ordered AFM. It is also qualitatively different from the scenario for the basic one-dimensional (1D) HM with gapless spinon excitations. In the following we will present numerical results for $s(T)$, $\chi_0(T)$ and $R(T)$, which reveal that the vanishing of $R(T \to 0)$ is a quite generic property of a wide class of isotropic 2D Heisenberg models in their range of (presumable) SL parameter regimes. In this context we generalise previous numerical $T>0$ studies of the HM on the KL [@misguich07; @schnack18] to include also the n.n.n. exchange $J_2 \neq 0$ and upgrade results for the $J_1$-$J_2$ HM on the TL [@prelovsek18], now studying also the HM on the TL with ring exchange, as well as another standard model of SL, i.e., the frustrated $J_1$-$J_2$ HM on the SQL. Results in the SL regimes confirm the singlets as the dominating low-energy excitations.
For comparison we also present results for the 1D $J_1$-$J_2$ Heisenberg chain, which serves as the reference, depending on $J_2/J_1$, either for the gapless spinon-Fermi-surface (SFS) or the valence-bond (VB) solid scenario. Still, we show that the results appear (as expected) qualitatively different from those for the considered 2D models. The considered models have their particular features and challenges; nevertheless, our results on thermodynamic properties reveal quite universal properties in their (presumable) SL regimes, which also put restrictions on the SL scenarios explaining their low-$T$ behavior. In particular, the very attractive scenario of a gapless SL with SFS excitations requires a finite g.s. Wilson ratio $R_0= R(T \to 0) >0$. The latter is realized in the 1D HM, but does not seem to be the case in planar models. The observed enhanced low-$T$ entropy $s(T)$ and the related vanishing $R_0=0$ demonstrate the dominant role of singlet excitations over the triplet ones [@waldtmann98; @singh08], but still offer several possibilities. While it is hard to exclude the scenario of a VB solid (crystal) with broken translational symmetry [@singh07; @singh08], it is more likely that the g.s. in the SL regime does not break the translational symmetry and all correlations are short-ranged, i.e., revealing a scenario of a VB (or dimer) liquid. On the other hand, it is well possible that the considered models might not be sufficient to represent the real SL materials, in particular not in their low-$T$ regime. The paper is organised as follows: In Sec. II we introduce the $T$-dependent Wilson ratio $R(T)$ and comment on different scenarios for its low-$T$ behavior. In Sec. III we present the numerical methods used to evaluate the thermodynamic quantities, but also the lowest spin excitations in 1D and 2D models. As a test of the methods as well as of the concepts, we present in Sec. IV results for the 1D $J_1$-$J_2$ Heisenberg chain. The central results for various 2D frustrated HM models are presented and analysed in Sec. V, and summarized in Sec. VI.
Temperature-dependent Wilson ratio
==================================

Besides the thermodynamic quantities, the uniform magnetic susceptibility $\chi_0(T)$ and the entropy density $s(T)$, together with the related specific heat $C_V(T)=T ds/dT$, it is informative to extract also their quotient in the form of the temperature-dependent Wilson ratio $R(T)$, defined as [@prelovsek19], $$R(T)= \frac{4 \pi^2 T \chi_0(T) }{3 s(T)}\,, \label{rw}$$ which is dimensionless assuming theoretical units $k_B= g \mu_B =1$. It should be recalled that the standard quantity is the (zero-temperature) Wilson ratio $R_W = 4 \pi^2 \chi_0^0 / (3 \gamma) $, where $\chi_0^0=\chi_0(T=0)$ and $\gamma = \mathrm{lim}_{T \to 0} [C_V/T]$. $R_W$ has its usual application and meaning in the theory of Fermi liquids and metals, as well as in gapless spin systems [@ninios12]. We note that in normal Fermi-liquid-like systems, where $s= C_V=\gamma T$, the definition, Eq. (\[rw\]), coincides at $T \to 0$ with the standard $R_W$. Although at low $T$ (in most interesting cases) both $s(T)$ and $C_V(T)$ have the same functional $T$ dependence, it is more convenient to employ in Eq. (\[rw\]) the entropy density $s(T)$, being a monotonically increasing function. It should also be pointed out that $R(T)$ is a direct measure of the ratio of the density of excitations with a finite $z$ component of total spin, $S_{tot}^z \neq 0$, relative to the density of all (spin) excitations, including $S_{tot}^z =0$, as measured by $s(T)$. To make this point evident we note that $\chi_0(T) = \langle (S_{tot}^z)^2 \rangle /(NT)$, where $N$ is the number of lattice sites, so that $$R= \frac{4 \pi^2 \langle (S_{tot}^z)^2\rangle }{3 N s}\,. \label{r1}$$ From the above expression it also follows that $R(T)$ has a well defined high-$T$ limit, which for the isotropic $S=1/2$ HM is $R(T \to \infty) = \pi^2/(3 \ln2) = 4.746$.
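The quoted high-$T$ value follows in one line from Eq. (\[r1\]): at $T \to \infty$ all $2^N$ states are equally probable, so the spins are uncorrelated,

```latex
\langle (S_{tot}^z)^2 \rangle \;\to\; \sum_{i=1}^{N} \langle (S_i^z)^2 \rangle = \frac{N}{4}\,,
\qquad s \;\to\; \ln 2\,,
\qquad \Longrightarrow \qquad
R(T \to \infty) = \frac{4\pi^2}{3 N \ln 2}\,\frac{N}{4} = \frac{\pi^2}{3\ln 2} \simeq 4.746\,.
```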
Moreover, $R_0 \equiv R(T \to 0)$ can differentiate between distinct scenarios:

a\) In the case of magnetic long-range order (LRO), e.g., for the AFM in the HM on the SQL, at $T \to 0$ one expects in the 2D isotropic HM $\chi_0(T\to 0) = \chi_0^0 >0$ (where the finite value can be interpreted as the contribution of spin fluctuations transverse to the g.s. magnetic order), whereas the effective magnon excitations lead to $s \propto T^2$ [@manousakis91], so that $R_0 \propto 1/T \to \infty$.

b\) In a gapless SL with a large SFS one would expect a Fermi-liquid-like finite $R_0 \sim 1$ [@balents10; @zhou17; @law17]. The evident case for such a scenario, considered later on as a reference, is the simple Heisenberg chain, where $R_0 = 2$ [@johnston00], in contrast to the value $R_0=1$ for noninteracting Fermi systems.

c\) A vanishing $R_0 \to 0$, or more restrictively from Eq. (\[rw\]) $R_0 \propto T^\eta$ with $\eta \geq 1$, would indicate that low-energy singlet excitations dominate over the triplet ones [@singh08; @balents10; @lauchli19]. In the following we find numerical evidence that this appears to be the case in the SL parameter regime of the considered 2D frustrated isotropic HM.

Within the last scenario one should still differentiate several possibilities with respect to gapless spin systems or systems with a gap. One option for a SL is that both singlet and triplet excitations are gapped, but the effective triplet gap is larger, $\Delta_t>\Delta_s$ (in the limit of large systems $N \to \infty$), which would lead (in the simplest approximation) to $R_0 \propto T^\eta \exp[-(\Delta_t- \Delta_s)/T] \to 0$. A more delicate case could be when $\Delta_t = \Delta_s = \Delta$. Then Eq. (\[rw\]) offers several scenarios with, e.g., $R(T<\Delta) \propto T^\eta$. Such a situation appears, e.g., for the 1D chain $J_1$-$J_2$ model around the Majumdar-Ghosh point $J_2/J_1=0.5$.
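Schematically, the three low-$T$ limits follow directly from Eq. (\[rw\]); here $\gamma_2$ and $\gamma$ denote the respective low-$T$ entropy coefficients, and power-law prefactors in case (c) are omitted:

```latex
R(T\to 0)\;\sim\;
\begin{cases}
\dfrac{4\pi^2\chi_0^0}{3\gamma_2 T}\;\to\;\infty\,, &
  \text{(a) LRO:}\quad \chi_0\to\chi_0^0,\;\; s\simeq \gamma_2 T^2\,,\\[6pt]
\dfrac{4\pi^2\chi_0^0}{3\gamma}\;\sim\;1\,, &
  \text{(b) gapless SFS:}\quad \chi_0\to\chi_0^0,\;\; s\simeq \gamma T\,,\\[6pt]
e^{-(\Delta_t-\Delta_s)/T}\;\to\;0\,, &
  \text{(c) singlets below triplets:}\quad \chi_0\sim e^{-\Delta_t/T},\;\; s\sim e^{-\Delta_s/T}\,.
\end{cases}
```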
Since $s(T)$ measures both singlet and triplet excitations (as well as higher $S_{tot} >1$ ones), the possible case $\Delta_s > \Delta_t$ should be similar to the previous scenario. When classifying the options for $T \to 0$ we should also consider the possibility of a VB solid (crystal), i.e., a g.s. with broken translational symmetry. In finite systems (with short-range spin correlations) the signature of a VB solid should be a degenerate or (due to finite-size effects) nearly degenerate g.s. with degeneracy $N_d > 1$. This should be reflected in a finite g.s. entropy for a finite system with $N$ sites, $$s_0 \equiv s(T \to 0) = \frac{1}{N} \ln N_d. \label{s0}$$ Such a remnant $s_0 >0$ does not contribute to $C_V(T)$ and, moreover, vanishes in the limit $N \to \infty$. A clear VB solid case is the 1D $J_1$-$J_2$ HM in the dimerized regime, where $N_d=2$. It then makes sense to consider in the evaluation of $R(T)$, Eq. (\[rw\]), besides the full $s(T)$ also the reduced one, $\tilde s=s-s_0$. Still, it is not always straightforward to fix the proper $N_d$ in finite-size systems.

Methods
=======

We calculate the entropy density $s(T)$, the uniform susceptibility $\chi_0(T)$ and, via Eq. (\[rw\]), the Wilson ratio $R(T)$, using the finite-temperature Lanczos method (FTLM) [@jaklic94; @jaklic00], previously used in numerous studies of $T>0$ static and dynamical properties in various models of correlated electrons [@prelovsek13]. Since in the case of the considered thermodynamic quantities only conserved quantities are involved, in particular the Hamiltonian $H$ and $S_{tot}^z$, the memory and CPU-time requirements for a given system size $N$ are essentially those of the Lanczos procedure for the g.s., provided that we scan over all (different) symmetry sectors of $S_{tot}^z$ and wave-vector ${\bf q}$ due to translational symmetry and periodic-boundary conditions (p.b.c.), in the case of the code with translational symmetry. A modest additional sampling $N_s$ over initial wave-functions is then used.
Limitations of the present method are given by the size of the many-body Hilbert space with $N_{st}$ basis states which can be handled efficiently within the FTLM, restricting the lattice sizes in our study to $N \leq 36$. In the following we use two FTLM codes for the considered models:

a\) To calculate the largest systems with $N = 36$ sites for the 2D TL, KL as well as the SQL $J_1$-$J_2$ HM with $N_{st} \sim 10^{10}$, we developed a code that implements a technique to save memory for the Hamiltonian by dividing $H$ into two subsystems. In addition, to improve the accuracy, we use the replaced FTLM technique [@morita19].

b\) A code for more modest computers, which takes into account translational symmetry and is thus able to reach $N_{st} < 10^7$ and sizes $N \leq 30$, was used for the 1D HM chain and the TL with ring exchange.

When discussing the accuracy of the FTLM results we have to distinguish results for a given system from finite-size effects due to the restricted $N$. The central quantity evaluated is the grand-canonical sum [@jaklic94; @jaklic00], $$Z(T)=\mathrm{Tr }~\mathrm{exp}[-(H-E_0)/T],$$ where $E_0$ is the g.s. energy. For reachable systems the FTLM provides accurate results provided that we use a modest random sampling $N_s \leq 30$ over (random) initial wave-functions. This is in particular important to get the correct low-$T$ limit, i.e., $Z(T=0) =1$ in the case of a non-degenerate g.s. The main restrictions of the FTLM results are, however, the reachable $N$ and the related finite-size effects, most pronounced at $T \to 0$:

a\) In the isotropic HM with $T \to 0$ LRO (in dimension $D\geq 2$), or long-range spin correlations in 1D, spin excitations are gapless in the thermodynamic limit. Such a case is correlated with finite-size effects in the evaluated quantities. One can expect that the results reach the $N \to \infty$ validity only for $Z>Z^*=Z(T_{fs}) \gg 1$.
Since $Z$ is intimately related to the entropy, $$s = \frac{1}{N} \left( \ln Z + \frac{\langle H \rangle - E_0 }{T} \right)\,, \label{s}$$ the criterion for $T_{fs}$ can be the smallest value of $s$. Actually, in the reachable systems $N \sim 36$ we get the estimate $s(T_{fs}) \sim 0.07 - 0.1$ (see, e.g., the finite-size analysis in [@schnack18]). In such systems the $s(T)$ and $\chi_0(T)$ results at $T<T_{fs}$ are dominated by finite-size effects and are not representative of $N \to \infty$. In any case, due to frustration and the consequently enhanced $s(T \ll J_1)$ in SL models, the FTLM generally allows one to reach a lower effective $T_{fs}$. E.g., while for the HM on an (unfrustrated) SQL (even at the largest $N=36$) $T_{fs} \sim 0.4 J_1 $ [@jaklic00; @prelovsek13], SL models allow for a considerably lower $T_{fs} \leq 0.1 $ [@schnack18; @prelovsek18].

b\) For systems with only short-range spin correlations one can reach the situation where the spin correlation length (even at $T \to 0$) is shorter than the system length, $\xi \le L$. In such a case, the FTLM has no obvious restrictions even at $T \to 0$, so $T_{fs} \sim 0$. This can be the situation for gapped SL, including some examples discussed further on.

Besides the thermodynamic quantities, it is also instructive to directly monitor the lowest excited states and their character. For the largest 2D $N=36$ systems the excited states are obtained within ED (without translational symmetry) by eliminating Lanczos-ghost states while comparing results for different numbers of Lanczos steps. For the TL with ring exchange we employ ED results of systems with $N =28$ and evaluate the lowest (singlet and triplet) energies in different ${\bf q}$ sectors. In 1D models we also use the density-matrix renormalization group (DMRG) method to investigate the $J_1$-$J_2$ HM with open boundary conditions (o.b.c.). The method allows for an accurate computation of the $S^z_{tot}=0$ g.s., and in the same way also of the first excited triplet state with $S^z_{tot}=1$.
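As a minimal cross-check of the definitions in Eqs. (\[rw\]) and (\[s\]), the trace in $Z(T)$ can be evaluated by brute-force full diagonalization for a toy system, bypassing the Lanczos machinery entirely. The sketch below does this for an $N=8$ Heisenberg ring (a purely illustrative size, far below the $N \leq 36$ of the actual study), reproducing $Z(T\to 0)=1$ for the non-degenerate g.s. and the high-$T$ limit $R \to \pi^2/(3\ln 2)$:

```python
import numpy as np

# Full ED of the S=1/2 Heisenberg ring: a toy stand-in for the FTLM trace.
N, J1 = 8, 1.0
dim = 2**N
bonds = [(i, (i + 1) % N) for i in range(N)]

H = np.zeros((dim, dim))
for n in range(dim):
    for i, j in bonds:
        if ((n >> i) & 1) == ((n >> j) & 1):
            H[n, n] += 0.25 * J1          # Sz.Sz, parallel spins
        else:
            H[n, n] -= 0.25 * J1          # Sz.Sz, antiparallel spins
            m = n ^ (1 << i) ^ (1 << j)   # (S+S- + S-S+)/2 flip-flop
            H[n, m] += 0.5 * J1

E, V = np.linalg.eigh(H)
E0 = E[0]
Sz = np.array([bin(n).count("1") - N / 2 for n in range(dim)])
sz2 = (V**2 * (Sz**2)[:, None]).sum(axis=0)   # <alpha|(S^z_tot)^2|alpha>

def thermo(T):
    w = np.exp(-(E - E0) / T)                 # Z(T) = Tr exp[-(H-E0)/T]
    Z = w.sum()
    s = (np.log(Z) + ((E * w).sum() / Z - E0) / T) / N   # Eq. (s)
    chi0 = (sz2 * w).sum() / (Z * N * T)                 # <(S^z_tot)^2>/(N T)
    R = 4 * np.pi**2 * T * chi0 / (3 * s)                # Eq. (rw)
    return Z, s, chi0, R

for T in (0.2, 0.5, 2.0, 20.0):
    print(T, thermo(T))
```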
In order to obtain also excited (singlet) states within the $S^z_{tot}=0$ sector, we evaluate the g.s. wave function $|\psi_0\rangle$, construct the effective Hamiltonian for the excited states $H_1=H-E_0|\psi_0\rangle\langle\psi_0|$ [@mcculloch07; @wang18], and then repeat the standard DMRG algorithm for $H_1$. The requirement of orthogonality is, however, difficult to meet for excited states which are (due to o.b.c.) edge states, e.g., within the 1D dimerized regime.

One-dimensional Heisenberg model
================================

![ Results in the $J_1$-$J_2$ Heisenberg chain for: (a) entropy $s(T)$, (b) susceptibility $\chi_0(T)$, and (c) Wilson ratio $R(T)$, as obtained via FTLM on $N=30$ sites for different $J_2 = 0.0 - 1.0$. The dashed lines at $J_2=0$ represent the extension to $N \to \infty$, while for $J_2=0.2, 0.3$ they denote the modified $R(T)$ evaluated with the reduced $\tilde s(T)$. The inset in (c) represents a sketch of $J_1$ (solid line) and $J_2$ (dashed line) in the 1D Heisenberg chain.[]{data-label="llj12"}](LLJ12N30.pdf){width="0.9\columnwidth"}

We consider first the 1D $J_1$-$J_2$ HM, which can serve as the reference for the further discussion of 2D HM results. The AFM isotropic $S=1/2$ $J_1$-$J_2$ HM is given by $$H= \sum_{i} \left[ J_1 {\bf S}_i \cdot {\bf S}_{i+1} + J_2 {\bf S}_i \cdot {\bf S}_{i+2} \right]\,, \label{eqllj12}$$ where we further on put $J_1=J=1$ as the unit of energy. We investigate with FTLM only the $J_2 \geq 0$ case on systems of finite length $N$ with p.b.c. Thermodynamic properties are well known and understood for the simple $J_2=0$ Heisenberg chain [@johnston00], as are the g.s. and the triplet excited state for the frustrated chain with $J_2>0$ [@white96]. Beyond the critical $J_2 > J_2^* \sim 0.241$ the g.s. is dimerized ($N_d =2$) in the thermodynamic limit [@white96].
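Both the model of Eq. (\[eqllj12\]) and the excited-state construction $H_1=H-E_0|\psi_0\rangle\langle\psi_0|$ can be checked directly with dense matrices on a small chain, where ED stands in for DMRG (an illustration of ours; the shift works because the relevant excited energies are negative, so after $|\psi_0\rangle$ is moved to zero energy the lowest eigenstate of $H_1$ is the first excited state of $H$):

```python
import numpy as np

def j1j2_chain(N, J1=1.0, J2=0.0, pbc=True):
    """Dense S=1/2 J1-J2 chain, Eq. (4); illustrative, small N only."""
    S = [np.array([[0, 1], [1, 0]], complex) / 2,
         np.array([[0, -1j], [1j, 0]], complex) / 2,
         np.array([[1, 0], [0, -1]], complex) / 2]
    H = np.zeros((2**N, 2**N), complex)
    for i in range(N):
        for d, J in ((1, J1), (2, J2)):
            if J == 0 or (not pbc and i + d >= N):
                continue
            j = (i + d) % N
            for op in S:
                term = np.eye(1)
                for k in range(N):
                    term = np.kron(term, op if k in (i, j) else np.eye(2))
                H += J * term
    return H.real

H = j1j2_chain(8, J1=1.0, J2=0.3)
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]

# Project out the g.s.: H1 |psi0> = 0, all other eigenpairs are unchanged,
# so (for E_1 < 0) minimizing H1 yields the first excited state of H.
H1 = H - E[0] * np.outer(psi0, psi0)
E1_from_H1 = np.linalg.eigvalsh(H1)[0]
```

The same projection can be iterated with further states to climb the low-energy spectrum, which is how it is used on top of DMRG above.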
At the same time, the lowest excited states are degenerate triplets and singlets with the gap $\Delta_t = \Delta_s$, consistent with unbound spinons as elementary excitations. Numerical results for $s(T)$, $\chi_0(T)$ and finally $R(T)$, as obtained on a system with $N=30$ sites, are presented in Fig. \[llj12\] for different $0 \leq J_2 \leq 1$: a\) For the simple $J_2=0$ chain we get $s(T) \sim \gamma T$ in the very broad range $T < 0.6$. Finite-size effects are most pronounced in this case, so that below $T_{fs} \sim 0.2$ we get $s < 0.1$ and finite-size effects prevent any further firm conclusions. Still, for $T>T_{fs}$ the numerical results are consistent with analytical and previous numerical results, in particular with the known limit $R_0 = 2$ [@johnston00]. Moreover, it is remarkable that $R(T)$ is nearly constant in the wide range $T < 0.6$. b\) The gap becomes pronounced at the Mazumdar-Ghosh point $J_2=0.5$ and even more so for $J_2 = 1.0$ (where $\Delta_t \sim 0.25$ [@white96]). In the gapped case FTLM finite-size effects are less pronounced, and one can expect $T_{fs} \to 0$. In fact, for $J_2=0.5$ and $J_2=1.0$ the results appear size-independent for the reached $N=30$, apart from the dimerization degeneracy $N_d=2$ leading via Eq. (\[s0\]) to $s_0>0$. The latter has an influence on $R(T \sim 0)$, so we present in Fig. \[llj12\] also the result taking into account the subtracted $\tilde s(T)$. In both analyses the behavior is consistent with $R_0=0$. For $J_2=0.5$ and $J_2=1.0$ the modified results are still consistent with a vanishing $R(T< \Delta) \propto T^\eta$ with $\eta\geq 1$, but this behavior remains to be clarified. For the marginal case $J_2 = 0.3 \sim J_2^*$, the behavior of all quantities is similar to $J_2=0$, except that we find a larger $\gamma$ and consequently also a smaller $T_{fs}$. It is instructive to investigate, in connection with finite-size effects, also the lowest triplet and singlet excitations in the model.
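The special role of the Mazumdar-Ghosh point can be verified directly: at $J_2=0.5$ with p.b.c. the two nearest-neighbor dimer coverings are exact, degenerate ground states ($N_d=2$) with the known energy $-3N/8$. A quick ED check of ours on a small periodic chain:

```python
import numpy as np

def j1j2_chain(N, J1=1.0, J2=0.0):
    """Dense S=1/2 J1-J2 chain with p.b.c.; illustrative, small N only."""
    S = [np.array([[0, 1], [1, 0]], complex) / 2,
         np.array([[0, -1j], [1j, 0]], complex) / 2,
         np.array([[1, 0], [0, -1]], complex) / 2]
    H = np.zeros((2**N, 2**N), complex)
    for i in range(N):
        for d, J in ((1, J1), (2, J2)):
            if J == 0:
                continue
            j = (i + d) % N
            for op in S:
                term = np.eye(1)
                for k in range(N):
                    term = np.kron(term, op if k in (i, j) else np.eye(2))
                H += J * term
    return H.real

# Mazumdar-Ghosh point J2 = J1/2: doubly degenerate g.s. at E0 = -3N/8,
# separated by a finite gap from the rest of the spectrum.
E = np.linalg.eigvalsh(j1j2_chain(8, 1.0, 0.5))
```

For $N=8$ this gives $E_0 = E_1 = -3.0$ exactly, with $E_2$ strictly above, which is the finite-size fingerprint of the $N_d=2$ degeneracy entering Eq. (\[s0\]).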
While the triplet excitations have been studied in detail using DMRG already in Ref. [@white96], establishing the singlet excitations requires more care, see Sec. III. In Fig. \[chj\](a) we present the DMRG (with o.b.c.) $N=60$ results for excitations: the lowest triplet $\epsilon_t$ and the lowest singlet $\epsilon_s$ vs. $J_2$, together (as the inset) with their $1/N$ scaling in the gapless regime $J_2 = 0.2 <J_2^*$. Due to o.b.c., DMRG is unable to properly resolve the dimerized partner of the g.s., since in an open chain it corresponds to excited edge states. Hence, we present in Fig. \[chj\](a) the first singlet excited state only for $J_2 \leq 0.4$. Still, the DMRG results confirm that no other singlet is stable below the triplet for $J_2 > J_2^*$, in contrast to the 2D SL models discussed later on.

![(a) Lowest triplet $\epsilon_t$ and singlet $\epsilon_s$ excitations vs. $J_2$, as obtained via DMRG in the chain of $N=60$ sites, with the inset showing the scaling of $\epsilon_{t/s}$ vs. $1/N$ for $J_2=0.2$. (b) Corresponding g.s. spin correlations $\langle S^z_i S^z_j \rangle$ on particular bonds.[]{data-label="chj"}](CHJ12N60.pdf){width="0.9\columnwidth"}

In Fig. \[chj\](b) we also display DMRG results for the g.s. bond spin correlations $\langle S^z_i S^z_j \rangle$. It is apparent that for $J_2>J_2^*$ the g.s. is dimerized (in n.n. bond correlations), a particular case being $J_2 =0.5$ with alternating n.n. correlations $\langle S^z_i S^z_j \rangle =-1/4$ and $0$. The stronger correlations remain AFM in the whole range $J_2>J_2^*$, while it is easy to recognize the change of character of the weaker bonds from AFM for $J_2^* <J_2 < 0.5$ to ferromagnetic for $J_2>0.5$.

Planar frustrated Heisenberg models
===================================

$J_1$-$J_2$ Heisenberg model on kagome lattice
----------------------------------------------

The HM on KL is the prototype model for the existence of SL in planar models. It has been the subject of numerous studies, devoted mostly to the g.s.
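The alternating bond pattern $\langle S^z_i S^z_j \rangle = -1/4, 0$ at $J_2=0.5$ follows from the g.s. being the single dimer covering of the open chain (singlet bonds give $-1/4$, inter-dimer bonds factorize to $0$), and can be reproduced with small-$N$ ED in place of DMRG (an illustrative check of ours, assuming the open-chain MG g.s. is unique):

```python
import numpy as np

def j1j2_chain_obc(N, J1=1.0, J2=0.0):
    """Dense S=1/2 J1-J2 chain with o.b.c.; illustrative, small N only."""
    S = [np.array([[0, 1], [1, 0]], complex) / 2,
         np.array([[0, -1j], [1j, 0]], complex) / 2,
         np.array([[1, 0], [0, -1]], complex) / 2]
    H = np.zeros((2**N, 2**N), complex)
    for i in range(N):
        for d, J in ((1, J1), (2, J2)):
            if J == 0 or i + d >= N:
                continue
            for op in S:
                term = np.eye(1)
                for k in range(N):
                    term = np.kron(term, op if k in (i, i + d) else np.eye(2))
                H += J * term
    return H.real

def site_sz(N, i):
    m = np.eye(1)
    for k in range(N):
        m = np.kron(m, np.diag([0.5, -0.5]) if k == i else np.eye(2))
    return m

N = 8
E, V = np.linalg.eigh(j1j2_chain_obc(N, 1.0, 0.5))
psi0 = V[:, 0]
# n.n. bond correlations <S^z_i S^z_{i+1}> along the open chain
bonds = [psi0 @ site_sz(N, i) @ site_sz(N, i + 1) @ psi0 for i in range(N - 1)]
```

The resulting pattern alternates between $-1/4$ (dimer bonds) and $0$ (inter-dimer bonds), matching Fig. \[chj\](b) at $J_2=0.5$.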
using ED [@mila98; @budnik04; @lauchli11; @lauchli19], series expansion [@singh07], DMRG [@yan08; @depenbrock12; @liao17] and variational methods [@iqbal11; @iqbal13]. We consider here the extended model with p.b.c., involving also the n.n.n. exchange $J_2$ as shown in the inset of Fig. \[klj12\](c), $$H= J_1 \sum_{\langle ij\rangle } {\bf S}_i \cdot {\bf S}_j + J_2 \sum_{\langle \langle il \rangle \rangle} {\bf S}_i \cdot {\bf S}_l\,, \label{2dj12}$$ whereby the role of $J_2>0$, as well as of $J_2<0$, is to reestablish the magnetic LRO [@kolley15]. The basic HM on KL has been the clearest case for a dominant role of low-lying singlet excitations over the triplet ones [@singh07; @singh08; @lauchli19]. The latter fact, and the related large entropy persisting at low $T \ll 1$, has been well captured within the block-spin approach [@subrahmanyam95; @mila98; @budnik04] and recently within the related reduced-basis approach [@prelovsek19], whereby the singlet excitations can be attributed to chiral fluctuations, distinct from the (higher-energy) triplet excitations.

![$s(T)$, $\chi_0(T)$ and $R(T)$ within the $J_1$-$J_2$ HM on KL, obtained via FTLM on $N=36$ sites, for different $|J_2| \leq 0.2$. The inset in (c) represents a sketch of the $J_1$ (solid line) and $J_2$ (broken line) connections in KL.[]{data-label="klj12"}](KLJ12N36.pdf){width="0.9\columnwidth"}

Thermodynamic quantities for the basic $J_2=0$ HM on KL have been calculated via FTLM previously [@schnack18] up to the size $N=42$. Here we extend the study by evaluating via FTLM also $J_2 \neq 0$ for $N = 36$. Results in Fig. \[klj12\] reveal that increasing $|J_2| > 0$ strongly suppresses $s(T \ll J_1)$ while leaving $\chi_0(T)$ less affected (at least for $T>T_{fs}$). The result for $J_2 =\pm 0.2$ indicates a divergent $R_0 \to \infty$, consistent with the emergent magnetic LRO [@kolley15; @prelovsek19].
On the other hand, at $J_2 =0, 0.1$ the behavior of $\chi_0(T)$ and $s(T)$ is consistent with a finite triplet gap $\Delta_t \sim 0.15$ and a smaller or even vanishing singlet gap $\Delta_s < \Delta_t$.

![Lowest triplet excitation $\epsilon_t$ and (nondegenerate) singlet excitations $\epsilon_{s,i}$ ($i=1,2,\cdots,6$) vs. $J_2$ for different planar $J_1$-$J_2$ HM models on: (a) KL, (b) TL, and (c) SQL, as obtained with ED on $N=36$ sites, and (d) $\epsilon_t$ and $\epsilon_{s,i=1,2,3}$ vs. $J_r$ on TL with ring exchange, obtained on $N=28$ sites.[]{data-label="levels"}](encross.pdf){width="0.8\columnwidth"}

The transition from the singlet-dominated SL regime to the phases with magnetic LRO can also be monitored via the low-lying levels in the considered systems. In Fig. \[levels\] we present the evolution of the excitation energies of the lowest-lying triplet $\epsilon_t$ as well as of several low-lying excited singlets $\epsilon_{s,i}$ ($i=1,2,\cdots, 6$), as obtained via ED on $N=36$ sites, and on $N=28$ sites for the HM on TL with ring exchange. It should be pointed out that we monitor only nondegenerate excited states, whereby in general degeneracies are present and depend on the particular lattice and the related p.b.c. The level evolution, plotted vs. $J_2$ (or $J_r$, discussed later on), serves primarily as another test of where to expect an SL with a macroscopic number (in the limit $N \to \infty$) of singlet excitations below the triplet ones, but also to locate some possible first-order g.s. transitions. In Fig. \[levels\](a) the level scheme for KL is consistent with the previous ED studies of the ($J_2=0$) KL model [@singh07; @singh08; @lauchli19], which reveal a massive density of singlet levels with $\epsilon_s \sim 0$ below the lowest triplet one $\epsilon_t$. Introducing $|J_2| > 0$ reduces the degeneracy and might lead to $\Delta_s > 0$ even in the $N \to \infty$ limit.
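The bookkeeping used to separate singlet from triplet levels can be sketched with sector-resolved ED: a level present in the $S^z_{tot}=0$ spectrum but absent from the $S^z_{tot}=1$ spectrum must belong to a total-spin singlet. The sketch below (ours) applies the logic to the 1D chain, where the answer is known, but the same criterion carries over to the 2D clusters:

```python
import numpy as np
from itertools import combinations

def sector_H(N, J1, J2, n_up):
    """J1-J2 chain (p.b.c.) restricted to the S^z_tot sector with n_up up-spins."""
    states = [sum(1 << p for p in c) for c in combinations(range(N), n_up)]
    idx = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        for i in range(N):
            for d, J in ((1, J1), (2, J2)):
                if J == 0:
                    continue
                j = (i + d) % N
                bi, bj = (s >> i) & 1, (s >> j) & 1
                if bi == bj:
                    H[k, k] += 0.25 * J                 # parallel S^z S^z
                else:
                    H[k, k] -= 0.25 * J
                    H[idx[s ^ (1 << i) ^ (1 << j)], k] += 0.5 * J  # flip-flop
    return H

def gaps(N, J1, J2):
    E0 = np.linalg.eigvalsh(sector_H(N, J1, J2, N // 2))
    E1 = np.linalg.eigvalsh(sector_H(N, J1, J2, N // 2 + 1))
    eps_t = E1[0] - E0[0]
    # S^z=0 levels without a partner in the S^z=1 spectrum are singlets
    singlets = [e for e in E0[1:] if np.abs(E1 - e).min() > 1e-6]
    return eps_t, singlets[0] - E0[0]

t0, s0 = gaps(12, 1.0, 0.0)   # gapless side: lowest excitation is a triplet
t5, s5 = gaps(12, 1.0, 0.5)   # MG point: degenerate singlet partner below it
```

At $J_2=0$ this yields $\epsilon_s > \epsilon_t$, while at the MG point the dimerized singlet partner sits at $\epsilon_s = 0$ below a finite $\epsilon_t$, i.e., exactly the level ordering that distinguishes the SL candidates in Fig. \[levels\].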
Still, a large density of singlet levels appears below the triplet in a wide (SL) range $J_2^{c1} < J_2 < J_2^{c2}$, where $J_2^{c1} \sim -0.1$ and $J_2^{c2} \sim 0.1$ from Fig. \[levels\](a), and we define $J_2^{c1,c2}$ by the crossing of (all) the lowest $\epsilon_{s,1-6} < \epsilon_t$. We note that the marginal $J_2^{c1,c2}$ are consistent with Fig. \[klj12\], where $J_2 = \pm 0.2$ already reveals magnetic LRO with $R_0 \to \infty$.

$J_1$-$J_2$ Heisenberg model on triangular lattice
--------------------------------------------------

![$s(T)$, $\chi_0(T)$ and $R(T)$ within the $J_1$-$J_2$ HM on TL, obtained via FTLM on $N=36$ sites for different $J_2 \leq 0.3$. The dashed line for $J_2=0.3$ represents the result using the reduced $\tilde s(T)$. The inset in (c) represents a sketch of the $J_1$ (solid line) and $J_2$ (broken line) connections in TL.[]{data-label="tlj12"}](TLJ12N36.pdf){width="0.9\columnwidth"}

While numerical studies of the basic ($J_2=0$) HM on TL [@bernu94; @capriotti99; @white07] confirm magnetic LRO with moments pointing into $120^\circ$-angle directions, a modest additional frustration $J_2 > 0$ allows for the possibility of an SL g.s., with evidence for either a gapless [@kaneko14] or a gapped SL [@zhu15; @hu15; @iqbal16; @wietek17] in the intermediate regime $J_2 \sim 0.15$. Beyond that, for $J_2 > 0.2$, a stripe AFM is again expected. Thermodynamic (and some dynamic) quantities for the $J_1$-$J_2$ HM, Eq. (\[2dj12\]), on TL have been recently calculated using FTLM [@prelovsek18] up to $N=30$ sites and employing the reduced-basis approach [@prelovsek19], whereby the similarity of $s(T)$, $\chi_0(T)$ and $R(T)$ with the basic HM on KL in the SL regime of both models has been traced to the chiral fluctuations dominating the low-$T$ excitations. Here we upgrade the previous FTLM studies with the calculation of the $J_1$-$J_2$ HM on TL up to $N=36$ sites. Results, presented in Fig.
\[tlj12\], are qualitatively consistent with the previous ones for $N=30$ [@prelovsek18], but due to the larger size and consequently smaller $T_{fs}$ they more clearly reveal the small entropy $s(T)$ and the related diverging $R(T)$ below $T\sim 0.2$ for $J_2 \sim 0$, where the g.s. possesses magnetic LRO. A similar behavior is expected for $J_2 > 0.2$, where the stripe AFM g.s. has been established [@kaneko14]. In the reachable system $N=36$ the upturn of $R(T)$ is partly masked by the finite-size $s_0 > 0$, Eq. (\[s0\]), due to the degeneracy $N_d>1$ of the striped magnetic LRO, evident in Fig. \[tlj12\](a) at $J_2=0.2$ and $0.3$. Taking into account in Eq. (\[rw\]) the reduced $\tilde s = s-s_0$, we obtain for $J_2=0.3$ again an indication of the upturn of $R(T)$ consistent with g.s. magnetic LRO. Still, in the most important intermediate regime $0.1 < J_2 < 0.2$, the increase of $s_0$ and at the same time the fast decrease of $\chi_0(T \to 0)$ (indicating a finite triplet gap $\Delta_t>0$) lead to a vanishing $R_0 =0$. In Fig. \[levels\](b) we plot the corresponding evolution of excitations vs. $J_2$, as obtained with ED on the $N=36$ lattice. The triplet gap apparently remains substantial, i.e., $\epsilon_t >0.38$ for the considered $N$ in the whole range $J_2< 0.3$. Still, the singlet excitations $\epsilon_{s,1-6}$ all cross $\epsilon_{t}$ for small $J_2 \sim 0.1$. This leads effectively to a g.s. level crossing $\epsilon_{s,1}=0$ at $J_2 \sim 0.17$, changing the character of the g.s. into a striped AFM. Most importantly, in the intermediate range $0.1 < J_2 < 0.17$, which should be the relevant SL regime, the collapse of singlet excitations is consistent with the conclusions from the thermodynamics in Fig. \[tlj12\] and with $R_0=0$. It should, however, be acknowledged that the singlet collapse is not as pronounced as for the $J_2 \sim 0$ KL in Fig. \[levels\](a).
Heisenberg model with ring exchange on triangular lattice
---------------------------------------------------------

![ $s(T)$, $\chi_0(T)$ and $R(T)$ within the HM on TL, including ring exchange, obtained via FTLM on $N=28$ sites for different $0 \leq J_r \leq 0.2$. The inset in (c) represents a sketch of the $J$ (solid line) connections and the ring exchange $J_r$ (circle) in TL.[]{data-label="tljr"}](TLJrN28.pdf){width="0.9\columnwidth"}

While the $J_1$-$J_2$ HM on TL is conceptually simple, it is less obvious to justify in connection with experiments and with more basic models. The organic SL materials [@shimizu03; @shimizu06; @itou10; @zhou17] and 1T-TaS$_2$ [@klanjsek17; @kratochvilova17; @law17; @he18] are closer to the metal-insulator transition, where the simple $S=1/2$ n.n. HM is presumably not sufficient. Assuming as the starting point the single-band Hubbard model on the insulating side of the Mott transition, $U > U_c$, the lowest correction to the n.n. HM comes in the form of the ring exchange term [@misguich99; @motrunich05; @yang10; @nakamura14], $$\begin{aligned} H &=& J \sum_{\langle ij\rangle } {\bf S}_i \cdot {\bf S}_j + H_r\,,\end{aligned}$$ with $$\begin{aligned} H_r &=& \frac{J_r}{2} \sum_{\langle ijkl \rangle} (P_{ijkl} + P_{lkji} ) \sim J_r \sum_{\langle ijkl \rangle} \bigl [ ({\bf S}_i \cdot {\bf S}_j) ({\bf S}_k \cdot {\bf S}_l) \nonumber \\ &+& ({\bf S}_i \cdot {\bf S}_l) ({\bf S}_j \cdot {\bf S}_k) - ({\bf S}_i \cdot {\bf S}_k) ({\bf S}_j \cdot {\bf S}_l) \bigr]\,, \label{hring}\end{aligned}$$ where $\langle ijkl \rangle$ are taken over different four-cycles on TL, as shown in the inset of Fig. \[tljr\](c). $H_r$, Eq. (\[hring\]), has been confirmed as the leading correction in the numerical study of the half-filled Hubbard model [@yang10] in the insulating regime, where $J_r \sim 80 t^4/U^3 \sim 20 t^2/U^2 J < 0.2$ [@nakamura14], taking into account that the Mott insulator on TL requires $U > U_c \sim 8t-10t$ and $J \sim 4t^2/U$.
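The four-site cyclic permutation $P_{ijkl}$ in Eq. (\[hring\]) is easy to realize numerically as a basis permutation, and one can then verify $P^4 = 1$ and the standard spin-1/2 rewriting of $P + P^{-1}$ in terms of two- and four-spin couplings, whose quartic part is the one kept (up to normalization absorbed in the "$\sim$") in Eq. (\[hring\]). A single-plaquette check of ours, with the coefficients of the rewriting quoted from the ring-exchange literature:

```python
import numpy as np

def ring_P(N, cyc):
    """Permutation operator moving the spin at cyc[m] to cyc[m+1] (cyclic)."""
    dim = 2 ** N
    P = np.zeros((dim, dim))
    for b in range(dim):
        bp = b
        for s in cyc:                      # clear the cycle sites...
            bp &= ~(1 << s)
        for m, s in enumerate(cyc):        # ...and refill with shifted bits
            if (b >> s) & 1:
                bp |= 1 << cyc[(m + 1) % len(cyc)]
        P[bp, b] = 1.0
    return P

def spin_op(N, which, i):
    ops = {'x': np.array([[0, 1], [1, 0]], complex) / 2,
           'y': np.array([[0, -1j], [1j, 0]], complex) / 2,
           'z': np.array([[1, 0], [0, -1]], complex) / 2}
    m = np.eye(1)
    for k in range(N):
        m = np.kron(m, ops[which] if k == i else np.eye(2))
    return m

N = 4
i, j, k, l = 0, 1, 2, 3
P = ring_P(N, (i, j, k, l))

def SdotS(a, b):
    return sum(spin_op(N, w, a) @ spin_op(N, w, b) for w in 'xyz').real

lhs = P + P.T                              # P^{-1} = P^T for a permutation
rhs = (0.25 * np.eye(2 ** N)
       + sum(SdotS(a, b) for a, b in
             [(i, j), (j, k), (k, l), (l, i), (i, k), (j, l)])
       + 4 * (SdotS(i, j) @ SdotS(k, l)
              + SdotS(i, l) @ SdotS(j, k)
              - SdotS(i, k) @ SdotS(j, l)))
```

The two-spin part of the rewriting only renormalizes the n.n. and diagonal Heisenberg couplings on the plaquette, which is why Eq. (\[hring\]) retains just the quartic terms.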
It should also be mentioned that in Eq. (\[hring\]) we do not consider higher-order $t/U$ corrections to $J$. It has already been proposed that a modest ring exchange $J_r > 0$ on TL destroys the magnetic LRO and induces an SL g.s. [@misguich99; @motrunich05], including the observation of several possible singlet excitations below the lowest triplet one. In Fig. \[tljr\] we present results for the HM on TL with $J_r > 0$, Eq. (\[hring\]), as obtained via FTLM on $N=28$ sites (a smaller size due to the more complex $H$). It is evident that $J_r > 0$ steadily increases the low-$T$ entropy $s(T)$, while also increasing $\chi_0(T)$. The resulting $R(T)$ loses its magnetic-LRO character already for $J_r \geq 0.05$, followed by an SL-like regime with a vanishing $R_0 \to 0$. The same message follows from the consideration of the lowest levels on the $N=28$ lattice, presented in Fig. \[levels\](d). Analogously to Figs. \[levels\](a) and \[levels\](b), there is a clear collapse of the singlet levels $\epsilon_{s,1-3}$ (here we employ a ${\bf q}$-resolved code and cannot monitor all singlet excitations) below the triplet one $\epsilon_t$ for $J_r > 0.1$. In the latter regime $\epsilon_t$ already represents a reasonable estimate of the limiting $N \to \infty$ triplet gap $\Delta_t >0$ [@misguich99], whereas establishing a proper singlet gap (the lowest singlet in the $N \to \infty$ limit) $\Delta_s < \Delta_t$ requires a more detailed finite-size analysis.

$J_1$-$J_2$ Heisenberg model on square lattice
----------------------------------------------

![ $s(T)$, $\chi_0(T)$ and $R(T)$ within the $J_1$-$J_2$ HM on SQL, obtained via FTLM on $N=36$ sites for different $0 \leq J_2 \leq 1.0$.[]{data-label="sqj12"}](SQJ12N36.pdf){width="0.9\columnwidth"}

Finally, we turn to the $J_1$-$J_2$ HM, Eq. (\[2dj12\]), on SQL. The latter has been one of the first models considered for a possible (plaquette) VB solid g.s. at intermediate $J_2 \sim 0.5$ [@capriotti00; @mambrini06; @morita16; @zhao19], but also for an SL g.s.
[@jiang12; @gong14; @morita15; @wang18; @liu18]. The results for the corresponding thermodynamic quantities presented in Fig. \[sqj12\] are consistent with a diverging $R_0 \to \infty$, indicating magnetic LRO outside a quite narrow parameter regime, i.e., outside $0.5 \leq J_2 \leq 0.6$. In the latter regime we again find substantial entropy $s(T \ll 1)$ and consequently $R_0 \to 0$, whereby for $J_2 \sim 0.6$ there are already some indications of a possible degeneracy $s_0>0$, which could be in favor of broken translational symmetry, e.g., a plaquette VB solid [@capriotti00; @mambrini06; @morita16; @zhao19]. Caveats for the SL interpretation emerge also when considering the excitation evolution vs. $J_2$ \[see Fig. \[levels\](c)\], as obtained from ED results on the $N=36$ cluster. For the given system size, the singlet levels reveal $\epsilon_{s,1-6} < \epsilon_t$ only in a very narrow regime $0.55< J_2 < 0.62$. Even then, the higher singlets (apart from $\epsilon_{s,1}$) are not well below $\epsilon_t$. Consistent with previous works [@capriotti00; @mambrini06; @jiang12; @gong14; @morita15; @morita16; @wang18; @liu18], the level scheme indicates a change of the g.s. character for $J_2 > 0.6$. As a consequence, the SL in the intermediate regime, and even more so in the singlet-dominated regime, is less conclusive, and other options [@wang18; @zhao19] also have to be considered.

Conclusions
===========

Thermodynamic quantities: the entropy density $s(T)$ (together with the directly related specific heat $C_V(T) = T ds/dT$, not presented in this paper), the uniform susceptibility $\chi_0(T)$, and consequently the $T$-dependent Wilson ratio $R(T)$, offer another view on the properties of frustrated spin models. We considered here prototype 2D isotropic $S=1/2$ HM, which are, at least in some parameter regimes, the best candidates for an SL g.s. For comparison, we investigated in the same manner also the simplest 1D HM, which can serve as a reference for some concepts and scenarios.
$R(T)$, in particular its low-$T$ variation, is the quantity which differentiates between different scenarios. Whereas 2D systems with magnetic LRO can be monitored via $R_0 \to \infty$, we are more interested in the SL regimes with a g.s. without magnetic LRO and even without any broken translational symmetry that could be classified as a VB solid (crystal). As a prototype case we present results for the 1D $J_1$-$J_2$ HM, which does not have magnetic LRO but already offers two firm scenarios: a) the gapless regime for $J_2< J_2^*$ with spinons (or a 1D SFS) as elementary excitations, and consequently a finite $R_0 = R(T \to 0) \sim 2$ (for $J_2 \sim 0$); b) a gapped regime for $J_2>J_2^*$ with a dimerized g.s. (the simplest 1D form of a VB solid), apparently also with $R_0 =0$, although with a not yet fully resolved variation of $R(T \to 0)$. The SL regimes in the considered 2D frustrated isotropic $S=1/2$ HM are in our study located via the enhanced low-$T$ entropy $s(T)$ and the gapped (or at least reduced) $\chi_0(T)$, resulting in a vanishing $R_0=0$. Similar information and criteria (although less well defined) emerge from the excitation spectra, when differentiating singlet and triplet (or even higher $S_{tot}>1$) excitations over the $S_{tot}=0$ g.s. The most evident case for such a VB (dimer) liquid scenario appears within the KL around $J_2 \sim 0$. Analogous, although somewhat less pronounced, cases are obtained within the HM on TL with ring exchange $J_r > 0.1$ and for the $J_1$-$J_2$ HM on TL in the intermediate regime $0.1<J_2 < 0.17$. For such systems the level evolution as well as $R(T)$ reveal a massive density of singlet states below the lowest triplet excitation. On the other hand, the situation in the HM on SQL in the narrow regime $J_2 \sim 0.6$ is less clear-cut in this respect, since the singlets are not well below the lowest triplet.
A vanishing $R_0$ does not support the scenario of an SL with a large (or even Dirac-cone) spinon Fermi surface, which would require a finite $R_0 >0$ (as in the 1D HM), although our finite-size studies should be interpreted with care and cannot give a final answer to this problem. Still, the emergent scenario of a VB (dimer) liquid should be critically confronted with the possibility of a VB solid. The difference should be that in the case of a VB solid the g.s. should be (due to broken translational symmetry) degenerate, with a finite $N_d > 1$ (in the thermodynamic limit $N \to \infty$). Except for the SQL at $J_2 \sim 0.6$, we do not find much evidence for that in the presumable SL regimes, since the results mostly indicate (besides a finite triplet gap $\Delta_t >0$) either a finite singlet gap $\Delta_s > 0$, or a vanishing $\Delta_s \sim 0$ for the pure KL, but evidently $\Delta_s < \Delta_t$. To establish (or exclude) a possible $N_d>1$ and to determine $\Delta_s>0$ beyond doubt still requires further studies. Finally, it should be stressed that the evaluated thermodynamic quantities are (at least in principle) measurable in experimental realizations of SL materials. $s(T)$ is accessible via the measured magnetic specific heat $C_V(T)$, and the uniform susceptibility $\chi_0(T)$ via macroscopic d.c. and/or Knight-shift measurements. Since the known SL materials are characterized by a modest exchange $J$, the properties can be measured in the wide range $T \lesssim J$. This offers the possibility of a critical comparison with model results, whereby the considered isotropic HM might still miss some ingredients relevant for the low-$T$ behavior, in particular the Dzyaloshinskii-Moriya interaction, the influence of disorder, and the inter-layer coupling. P.P. is supported by the program P1-0044 and project N1-0088 of the Slovenian Research Agency. K. M. and T. T.
are supported by MEXT, Japan, as a social and scientific priority issue (creation of new functional devices and high-performance materials to support next-generation industries) to be tackled by using a post-K computer. T.T. is also supported by the JSPS KAKENHI (No. JP19H05825). The numerical calculation was partly carried out at the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo and at the Yukawa Institute Computer Facility, Kyoto University. J. H. acknowledges grant support by the Polish National Agency of Academic Exchange (NAWA) under contract PPN/PPO/2018/1/00035.
The AdS/CFT correspondence and Spectrum Generating Algebras

P. Berglund$^1$, E. G. Gimon$^2$ and D.
Minic$^{3}$[^1] [Email: [email protected], [email protected], [email protected]]{}

*$^1$Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA*

*$^2$California Institute of Technology, Pasadena, CA 91125, USA*

*$^3$Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-0484, USA*

There exists convincing evidence for a duality between string theory or M-theory on $AdS_{d+1} \times S^n$, with $N$ units of $n$-form flux through $S^n$, and a $d$-dimensional $SU(N)$ superconformal field theory on the boundary of $AdS_{d+1}$. This conjecture exists both in a weak form and in a strong form. In the weak form, the space $AdS_{d+1} \times S^n$, with size proportional to $N$, is taken to be quite large. In this limit supergravity dominates and captures all the physics of the dual large-$N$ superconformal field theory. In the strong form of the conjecture, string theory or M-theory effects need to be taken into account to properly describe the finite-$N$ superconformal field theory. Available evidence for the AdS/CFT conjecture focuses mainly on the weak form, although some progress has been made towards understanding the full stringy spectrum. In order to understand string theory (M-theory is more problematic) on $AdS_{d+1} \times S^n$, the classical string action needs to be quantized in this background. This procedure should produce the discrete spectrum of string states and their masses, along with rules for calculating their interactions. In this paper, we use an alternative approach to provide information on the string spectrum. We consider the eleven-, ten- and six-dimensional supergravity limits of M/string theory, as well as massive ten-dimensional stringy fields expanded in Kaluza-Klein (KK) modes on $S^n$.
Even though identifying the proper independent string degrees of freedom using this method is extremely difficult, we argue that one important qualitative feature of the Kaluza-Klein reduction survives, namely the presence of a so-called [*spectrum generating algebra*]{}. A spectrum generating algebra (SGA) typically does not commute with the Hamiltonian and is non-linearly realized at the level of the action, but it describes the entire spectrum of a particular physical system. SGAs have been used very successfully in nuclear, atomic and molecular physics, not only in the study of spectra but also in the computation of various transition amplitudes. In Kaluza-Klein reductions SGAs usually appear because the towers of harmonics used in these reductions can be fitted into unitary irreducible representations (UIRs) of the conformal groups of the corresponding compactified spaces (spheres, products of spheres, or any Einstein spaces which have a natural action of the conformal group). Since the eigenvalues of harmonics are related to the masses of the corresponding Kaluza-Klein states, the algebra of the conformal group does not commute with the Hamiltonian. For the case of compactifications on $S^n$ the corresponding conformal group $SO(n+1,1)$ acts as a spectrum generating algebra. The conformal generators of the SGA are not isometries of the compactification manifold. Rather, the operation of rescaling the manifold corresponds to the “scanning” of the spectrum of the associated operator (Dirac, Laplace, etc.) on the manifold in question. In particular, the spherical harmonics on $S^{n}$ provide a natural UIR of the conformal group of $S^n$, which generates the spectrum of KK modes, see section 2. We will explicitly demonstrate this construction for supergravity fields.
To do this, we extend the results known in the supergravity literature by demonstrating how the corresponding spectra fit into UIRs of the relevant conformal group for some of the maximally symmetric examples of the AdS/CFT correspondence: IIB string theory on $AdS_5 \times S^5$, M-theory on $AdS_4 \times S^7$ and $AdS_7 \times S^4$, and IIA or IIB string theory on $AdS_3 \times S^3 \times X^4$ (see section 3 and Tables 1.-4.). Our results can also be extended to the case of IIA or IIB string theory on $AdS_2 \times S^{2} \times X^6$. In view of the AdS/CFT correspondence we consider the map between the action of an SGA on the supergravity spectrum and the corresponding action of what we call an [*operator generating algebra*]{} (OGA) on the chiral primaries on the CFT side. Demonstrating the presence of SGAs in the supergravity spectrum allows us to argue for, and better understand, the extension of the SGAs to the full string/M-theory. More explicitly, we discuss the KK towers of the level-one massive string states of the flat ten-dimensional (IIA or IIB) string on $S^{5}$, and show how they provide UIRs of the corresponding SGA. We expect that the action of the OGA generalizes to include operators in the CFT dual to stringy fields. In particular, we discuss how the relevant OGA could act on the so-called Konishi supermultiplet, which is expected to correspond to massive string states of IIB string theory on $S^{5}$, see section 4. Finally, in section 5, we discuss the relevance of the SGA in the case of the recent proposal on the finite-$N$ case and quantum deformed isometries. Let us consider a generic supergravity theory compactified on $AdS_{m} \times S^{n}$. All supergravity fields can be expanded into harmonic functions on $S^{n}$ (this is just the physical statement of the Peter-Weyl theorem). It can be shown that these harmonic functions provide a UIR of the conformal group on $S^{n}$, which is $SO(n+1,1)$.
This can be seen as follows: Let $S^n$ denote the unit sphere $\sum x^{i} x^{i} =1$ and let $g \in SO(n+1,1)$. The group element $g \in SO(n+1,1)$ acts on $S^{n}$ by a conformal transformation, while $g^{-1} \in SO(n+1,1)$ acts on complex functions $f: S^{n} \rightarrow \mathbb{C}$ with a multiplier carrying a weight $\sigma \in \mathbb{C}$. Furthermore, let $L^{2}(S^{n})$ be the Hilbert space of square-integrable complex functions over $S^{n}$ with the natural inner product, where $\sqrt{G} d^{n}x$ is the $SO(n+1,1)$-invariant measure on $S^{n}$. Then it is easy to show that this inner product is preserved under the action of $g^{-1} \in SO(n+1,1)$ on $f_{1}$, $f_{2}$ defined above, provided that the weight $\sigma = -n/2 + i \rho$, where $\rho$ is an arbitrary real number. Thus the space of harmonic functions over $S^{n}$ provides a unitary irreducible representation of $SO(n+1,1)$. Since the KK modes of supergravity fields on $S^{n}$ are expected to fit into UIRs of the $SO(n+1,1)$ SGA, we obviously need to use the representation theory of the non-compact groups $SO(n+1,1)$ to understand the physical spectrum of KK modes. There exists a construction for the UIRs of the group $SO(n+1,1)$ completely analogous to the one for UIRs of its maximal compact subgroup $SO(n+1)$. In the case of $SO(n+1)$ a unitary irreducible representation is determined by a set of numbers $m_{ij},\, (1 \leq i < j \leq n+1)$, all of which are integer or half-integer simultaneously (there are important differences between $n+1=2p$ and $n+1=2p+1$). A vector in the representation space is denoted by $|m_{ij}\rangle$, where the $m_{ij}$ provide a complete set of highest-weight labels (named Gel’fand-Zetlin (GZ) labels) which uniquely determine an irreducible representation. The labels $m_{ij}$ obey betweenness conditions indexed by $k=1,\ldots,p-1$ if $n+1\in 2\mathbb{Z}$ and by $k=1,\ldots,p$ if $n+1\in 2\mathbb{Z}+1$.
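The unitarity condition on the weight can be made explicit. In our notation (not reproduced from the original equations), let $\Omega_g$ be the conformal factor of $g$ on $S^n$, $|d(gx)| = \Omega_g(x)\,|dx|$, with the principal-series action $(\pi_\sigma(g)f)(x) = \Omega_{g^{-1}}(x)^{-\sigma} f(g^{-1}x)$. Then $$\|\pi_\sigma(g)f\|^2 = \int_{S^n} \Omega_{g^{-1}}(x)^{-2\,\mathrm{Re}\,\sigma}\, |f(g^{-1}x)|^2 \sqrt{G}\, d^{n}x = \int_{S^n} \Omega_{g}(y)^{\,2\,\mathrm{Re}\,\sigma + n}\, |f(y)|^2 \sqrt{G}\, d^{n}y\,,$$ after the change of variables $x=gy$, under which the measure picks up a factor $\Omega_g(y)^n$ and $\Omega_{g^{-1}}(gy) = \Omega_g(y)^{-1}$. The norm is preserved for all $g$ precisely when $2\,\mathrm{Re}\,\sigma + n = 0$, i.e., $\sigma = -n/2 + i\rho$ with $\rho$ real, in agreement with the statement above.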
Based on this result it can be shown that the UIRs of $SO(n+1,1)$ (with important differences between $n+1=2p$ and $n+1=2p+1$) are described by a set of $SO(n+1)$ GZ labels $m_{ij}$, satisfying certain inequalities, along with a weight $\sigma = -n/2 + i \rho$. One important property of the UIRs of $SO(n+1,1)$ is that irreducible representations of $SO(n+1)$ occur within them with multiplicity one or not at all. For example, in the case of $SO(2p, 1)$ there exist UIRs whose $SO(2p)$ content is described by a requirement on the GZ labels with $m_{2p+1, j} =0, 1/2,1,\ldots$ for $1 \leq j \leq p-1$ and the weight $\sigma = -p + i \rho$, with $\rho > 0$. These UIRs are labelled $D(m_{2p+1,1} \ldots m_{2p+1,p-1}, i\rho)$. The complete list of UIRs of $SO(n+1,1)$ in this notation is given in the literature, and we follow this notation and the corresponding results in the main body of the paper. Note that in the GZ notation these representations typically consist of a finite number of infinite towers. GZ labels form a particularly convenient basis for understanding the harmonic analysis of Kaluza-Klein (KK) supergravity on any coset space $G/H$. In particular, the well known $AdS_m \times S^n$ backgrounds of KK supergravity can be understood as coset spaces, upon the Euclideanization of the relevant $AdS_m$ spaces, $AdS_m \rightarrow S^{m}$. Then the spectrum of KK supergravity on $AdS_m \times S^n$ can be obtained from the harmonic analysis on $G/H = SO(m+1)/SO(m) \times SO(n+1)/SO(n)$. In this analysis, one fixes the $H$ representations which describe the content of all supergravity fields, and then one expands these fields in terms of only those representations of $G$ which contain the fixed $H$ representations. The GZ, or highest weight, labels provide a natural basis for the implementation of this procedure.
More precisely, let the GZ labels of a fixed $H$ representation be denoted by $(\alpha_{1}, \alpha_{2},\ldots , \alpha_{r})( \beta_{1}, \beta_{2},\ldots, \beta_{q-1})$, and analogously denote the GZ labels of a $G$ representation by $(\gamma_{1}, \gamma_{2},\ldots, \gamma_{r})( \delta_{1}, \delta_{2},\ldots, \delta_{q})$. Then, according to a theorem by Gel’fand and Zetlin (vol. 3), the above $H$ representation is contained in the decomposition of the above $G$ representation provided the two sets of labels satisfy the standard GZ interlacing conditions. This theorem, combined with the representation theory of $SO(n+1,1)$, can be used to easily read off the corresponding UIRs of the relevant $SO(n+1,1)$ spectrum generating algebra, given the field content of a particular supergravity theory. Given these technical tools, we now turn to actual physical applications. We consider the case of IIB supergravity on $AdS_5\times S^5$, since for this case the actual boundary CFT of the proposed duality is precisely defined; it is ${\cal{N}}=4$ $SU(N)$ super Yang-Mills theory (SYM) in four dimensions. Although we discuss in detail other supergravity $AdS_{d+1} \times S^{n}$ examples (see Tables 2-4), we study the $AdS_5 \times S^5$ case (Table 1) when we discuss the action of the SGA on the full string theory. The bosonic sector of the ten-dimensional IIB supergravity consists of the dilaton and the axion, the RR and NS two-forms, the graviton, and the self-dual RR four-form, in the corresponding representations of the little group $SO(8)$. The fermionic sector consists of the spin-1/2 and spin-3/2 fields. To understand the reduction of this spectrum on $AdS_5\times S^5$, we first look at how the $SO(8)$ little group representations break up into representations of $SO(5) \times SO(3)$ on the [*tangent bundle*]{} of $AdS_5 \times S^5$.
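The interlacing form of the GZ branching conditions for $SO(N)\supset SO(N-1)$ is simple enough to implement directly. The sketch below (the function name and the scan window are our own; the standard interlacing conventions for both parities of $N$ are assumed) reproduces the example used later in the text, namely that the $(2,0)$ of $SO(5)$ sits inside exactly the $SO(6)$ representations $(l+2,j,0)$ with $j=0,1,2$:

```python
def interlaces(upper, lower):
    """Gel'fand-Zetlin branching check for SO(N) > SO(N-1).

    upper: GZ (highest weight) labels of an SO(N) irrep,
    lower: GZ labels of an SO(N-1) irrep.
    SO(2k+1) > SO(2k):   m1 >= l1 >= m2 >= ... >= mk >= |lk|
    SO(2k)   > SO(2k-1): m1 >= l1 >= m2 >= ... >= l_{k-1} >= |mk|
    """
    u, l = list(upper), list(lower)
    seq = []
    if len(u) == len(l):                 # SO(2k+1) > SO(2k)
        for i, (a, b) in enumerate(zip(u, l)):
            seq += [a, abs(b) if i == len(l) - 1 else b]
    elif len(u) == len(l) + 1:           # SO(2k) > SO(2k-1)
        for a, b in zip(u, l):
            seq += [a, b]
        seq.append(abs(u[-1]))
    else:
        raise ValueError("ranks must differ by at most one")
    return all(a >= b for a, b in zip(seq, seq[1:]))

# Example from the text: which SO(6) irreps (g1, g2, g3) contain the
# (2, 0) of SO(5)?  Scan a small window of valid GZ labels.
containing = [(g1, g2, g3)
              for g1 in range(4)
              for g2 in range(4)
              for g3 in range(-3, 4)
              if g1 >= g2 >= abs(g3) and interlaces((g1, g2, g3), (2, 0))]
print(containing)  # the (l+2, j, 0), j = 0, 1, 2 pattern quoted in the text
```

Within the scanned window the survivors are $(2,j,0)$ and $(3,j,0)$ with $j=0,1,2$ only, matching the $(l+2,j,0)_{GZ}$ towers discussed below for the ten-dimensional graviton.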
In particular, we want to discuss the appearance of physical modes (i.e., those modes that appear as poles in the $AdS_{5}$ bulk propagators) and illustrate the general procedure by considering only the bosonic fields. On the tangent bundle of $AdS_5 \times S^5$ the ten-dimensional little group $SO(8)$ splits into $SO(5) \times SO(3)$. We start our discussion by decomposing the $SO(8)$ representations for the graviton, $h_{ab}$, and the self-dual four-form, $a_{abcd}$, in terms of $SO(5)\times SO(3)$. We get $$h_{ab}:\quad{\bf 35_v \to 1_1 + 1_5 + 5_3 + 14_1},$$ and $$a_{abcd}:\quad{\bf 35_c \to 5_1 + 10_3},$$ respectively. We are interested in those representations of $SO(6)\times SO(3)$ which contain the above representations of $h_{ab}$ and $a_{abcd}$, since $SO(6)$ is the isometry group of $S^5$. It is convenient to list these representations in terms of their highest weight labels under $SO(6)$. The resulting $SO(6)$ labels, with their $SO(3)$ dimensions, are $$\eqalign{ &(l,0,0)_1,\quad (l,0,0)_5,\quad (l,0,0)_3,\,(l,1,0)_3,\quad\cr &(l,0,0)_1,\,(l,1,0)_1,\,(l,2,0)_1}$$ for the graviton and $$(l,0,0)_1,\,(l,1,0)_1,\quad (l,1,0)_3,\,(l,1,\pm 1)_3$$ for the self-dual four-form, respectively. In order to understand which modes appear as physical from the point of view of the bulk $AdS_5$ space, we need to consider the action of the $AdS_5$ little group $SO(4)$ on the above representations of $SO(6) \times SO(3)$. These are uniquely lifted to representations of $SO(6)\times SO(4)$, from which we directly read off the physical modes propagating in the bulk $AdS_5$ space; e.g., the ${\bf (3,3)}$ of $SO(4)$ is given in terms of the ${\bf 1 + 3 + 5}$ of $SO(3)$, and so on. Note also that there will be a mixing between modes with the same quantum numbers, such as $h^\alpha_\alpha$ and $a_{\alpha\beta\gamma\delta}$. In Tables 1-4 we suppress this mixing and list the modes as above.
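As a quick sanity check on the decompositions above (the subscripts are $SO(3)$ dimensions, so each decomposition must reproduce the dimension of the parent $SO(8)$ representation), the counting can be verified directly; the script below is pure bookkeeping, not part of the original derivation:

```python
# Each pair is (SO(5) dimension, SO(3) dimension) of one piece of the
# SO(8) -> SO(5) x SO(3) decompositions quoted in the text.
graviton_35v = [(1, 1), (1, 5), (5, 3), (14, 1)]   # 35_v of SO(8)
four_form_35c = [(5, 1), (10, 3)]                  # 35_c of SO(8)

def total_dim(pieces):
    """Sum of dimensions of the product representations."""
    return sum(d5 * d3 for d5, d3 in pieces)

print(total_dim(graviton_35v), total_dim(four_form_35c))  # 35 35
```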
Comparing with the known supergravity results, we see that group theory indeed accounts for all the physical modes. One can also easily extend this analysis to the fermionic part of the spectrum. The KK towers of physical modes cannot in general fit alone into UIRs of the conformal group of $S^{5}$, $SO(6,1)$. In order to get full UIRs of $SO(6,1)$ we also need to consider gauge modes (modes that do not appear as poles in the $AdS_5$ bulk propagators). The most convenient procedure for the identification of KK towers of both physical and gauge modes, and the corresponding UIRs of $SO(6,1)$, is to look at the Euclidean $AdS_5 \times S^5$ space as a coset space, $G/H \equiv SO(6)/SO(5) \times SO(6)/ SO(5)$. We list the various KK modes in terms of $SO(5) \times SO(5)$ highest weight labels, and then determine which $SO(6) \times SO(6)$ representations contain these fixed $SO(5) \times SO(5)$ representations, using the theorem of Gel’fand and Zetlin reviewed in section 2. Here it is important that we started with the full ten-dimensional tangent space group $SO(10)$ and not just the little group $SO(8)$, as we would otherwise not see the gauge modes. From the $SO(6)$ highest weight (GZ) labels, we directly read off the corresponding UIRs of $SO(6,1)$. These UIRs must occur; the theorem above implies that a complete set of orthonormal harmonic functions on $S^5$ forms a UIR of the conformal group of $S^5$, that is, $SO(6,1)$. To make our procedure clearer, we choose as an example the fields which come from the reduction of the ten-dimensional graviton. We write the $SO(5)$ representations of these fields in terms of GZ labels; they are the $(0,0)_{GZ}$, $(1,0)_{GZ}$ and $(2,0)_{GZ}$ representations. By the branching theorem, the $(2,0)_{GZ}$ representation of $SO(5)$, a scalar on $AdS_5$, is contained in the $SO(6)$ representations with labels $(l+2,2,0)_{GZ}$, $(l+2,1,0)_{GZ}$, and $(l+2,0,0)_{GZ}$ ($l \geq 0$), which together form the $D^1(2; -5/2)$ UIR of $SO(6,1)$.
Only the symmetric tensors with $SO(6)$ labels $(l+2,2,0)_{GZ}$ are physical, matching the $h_{\alpha\beta}$ modes, while the others correspond to gauge modes. The $(1,0)_{GZ}$ representation of $SO(5)$ is contained in the $SO(6)$ representations $(l+1,1,0)_{GZ}$ (physical modes matching $h_{\alpha\mu}$) and $(l+1,0,0)_{GZ}$ (gauge modes), which form the $D^1(1;-5/2)$ representation of $SO(6,1)$. Finally, the $(0,0)_{GZ}$ representation is contained in the $SO(6)$ representations $(l,0,0)_{GZ}$, which form the $D^2(-5/2)$ representation of $SO(6,1)$ (physical modes matching $h_{\mu\nu}$). The fields in this tower couple to the symmetric trace operators on the CFT side. In the discussion above, modes which are usually ignored because they can be gauged away are [*crucial*]{} to the faithful action of the conformal group of $S^5$ on the Kaluza-Klein spectrum. Other gauge modes also appear in the spectrum in complete representations of the SGA. For example, an analysis of the mode expansion on $AdS_5$ is enough to show that the ten-dimensional graviton also yields a complete tower of vector gauge modes. We will ignore these complete towers of gauge modes, and only mention gauge modes which combine with physical modes to give UIRs of the conformal group. Generally, gauge modes are probably associated with the diagonal $U(1)$ group on the boundary, whose role in the AdS/CFT duality is still not completely understood. We now complete our analysis of the SGA representations which appear in the supergravity spectrum of $AdS_5\times S^5$. The antisymmetric tensor, the two-form $A_{\mu\nu}$, gives rise to a tower $D^2(-5/2)$ of antisymmetric chiral and anti-chiral tensor fields, all describing physical modes. The vector $A_{\alpha\mu}$ gives rise to the towers of vectors that make up the $D^1(1;-5/2)$ representation, with only the tower with modes of the form $(l+1,1,0)_{GZ}$ in $D^1(1;-5/2)$ being physical.
The scalar $A_{\alpha\beta}$ gives rise to two physical KK towers, with modes of the form $(l+1,1,\pm 1)_{GZ}$, which make up the $D^0(1,1;-5/2)$ representation. The rank-four antisymmetric self-dual tensor gives rise to chiral and anti-chiral two-forms $A_{\alpha\beta\mu\nu}$, with towers making up two $D^0(1,1;-5/2)$ representations. Each $D^0(1,1;-5/2)$ has two physical towers with $(l+1,1,\pm 1)_{GZ}$ of $SO(6)$, adding up to four towers of physical two-forms. The vector $A_{\alpha\beta\gamma\mu}$ gives a tower $D^1(1;-5/2)$ of vectors, but only the $(l+1,1,0)_{GZ}$ tower of $D^1(1;-5/2)$ describes physical modes. The scalar mode, $A_{\alpha\beta\gamma\delta}$, mixes with the $h^{\alpha}_{\alpha}$ scalar, as can be seen from our earlier discussion, with each of the mass eigenmodes giving rise to $D^2(-5/2)$. Finally, the complex scalar, in terms of the axion and dilaton fields, gives rise to yet one more physical KK tower of complex scalars, $D^2(-5/2)$. The spin-1/2 field $\lambda$ gives twin towers of chiral and anti-chiral spinors in the $D(1/2,1/2;-5/2+i\rho)$ representation. Each of these contains physical modes $(l+1/2,1/2,\pm 1/2)_{GZ}$, so each $D(1/2,1/2;-5/2+i\rho)$ yields two towers. The chiral and anti-chiral gravitini $\psi_\mu$ also come in the representation $D(1/2,1/2;-5/2+i\rho)$, each with a total of two physical towers with modes of the form $(l+1/2,1/2,\pm 1/2)_{GZ}$. Finally, we get KK towers of chiral and anti-chiral spin-$1/2$ fields from $\psi_\alpha$, each in the $D(3/2,1/2;-5/2+i\rho)$ representation, and each of these yielding physical modes of the form $(l+3/2,1/2,\pm 1/2)_{GZ}$. We summarize these results in Table 1. Tables 2-4, which contain the fields and UIRs for the cases of $AdS_4\times S^7$, $AdS_7\times S^4$ and $AdS_3\times S^3$ respectively, are obtained following the same procedure. We now want to discuss what the SGA for the Kaluza-Klein states of $AdS_{d+1}\times S^n$ means on the dual CFT side.
We concentrate on the $AdS_5/CFT_4$ correspondence, the ${\cal{N}}=4$ $SU(N)$ super Yang-Mills theory in four dimensions. Other cases are more difficult because the dual CFT is not easily described, though we believe that arguments similar to those below can be applied there as well. We start from the fact that each supergravity KK tower corresponds to a set of chiral primaries on the CFT side with appropriate $SO(6)$ R-charges. Chiral primaries appear in the trace of a symmetric product of ${\cal{N}}=4$ chiral superfields: for example, the traceless part of the symmetrized trace of chiral superfields corresponds to the KK states of IIB supergravity on $AdS_5 \times S^5$, where $W$ is the ${\cal N}=4$ chiral superfield. We have shown that these KK states belong to UIRs of the $SO(6,1)$ SGA. Given the map between KK modes and CFT chiral primaries, we naturally expect that the complete set of UIRs of the $SO(6,1)$ SGA listed in Table 1 corresponds to the full family of such trace operators. Note that there exists an ambiguity as to whether $W^i$ transforms in $SU(N)$ or $U(N)$. This ambiguity is most likely related to the inclusion of gauge modes in the complete $SO(6,1)$ UIRs. Taking the lowest component, the operators made up of the traceless part of the symmetrized traces of $\phi$ (where $\phi$ is the $\theta^0\bar{\theta}^0$ component of the ${\cal{N}}=4$ chiral superfield $W$) fit into the $D^2(-5/2)$ representation of $SO(6,1)$. Modulo subtleties involving gauge modes and the extra $U(1)$, the other superfield components fill out the remaining UIRs listed in Table 1. We call this CFT counterpart of the spectrum generating algebra of KK supergravity an operator generating algebra (OGA). It is natural to ask whether this operator generating algebra extends to all the operators in ${\cal{N}}=4$ SYM, including the non-chiral ones which correspond to massive string modes. In order to check this, we would have to classify and organize all the non-chiral operators on the CFT side. We do not know of any such classification.
What we do know is that part of the $SO(6,1)$ OGA acts on the chiral primaries by tensoring with a superfield in the ${\bf 6}$ of $SO(6)$ and symmetrizing. How does this procedure generalize to non-chiral primaries? We sketch a natural proposal as follows. Given an operator $Tr(O(W))$, a set of UIRs of the $SO(6,1)$ OGA is generated by operators of the form $Tr(O(W)\, W^{(i_{1}} \cdots W^{i_{p})})$, $p=1,2,\ldots$. The fact that operators such as $Tr(O(W)W^{i_{1}})$ are not necessarily irreducible and give direct sums of $SO(6)$ representations is useful for generating operators dual to both physical modes and gauge modes. Unfortunately, since we have not explicitly determined the generators of the proposed $SO(6,1)$ OGA, we cannot actually prove that various operators belong to UIRs of this OGA. The issue also arises as to how to deal with the possible mixing of different operators within the same UIR, a problem which already exists for the chiral primary operators. Let us illustrate how our proposal for an OGA might work by considering the so-called Konishi multiplet on the CFT side. In terms of the SYM superfields, this multiplet is written as $Tr(W_iW^i)$. It has been suggested that the Konishi multiplet corresponds to massive string states propagating in $AdS_5$. Consider the scalar operator in the Konishi multiplet which is the $\theta^4\bar{\theta}^4$ component of $Tr(W_iW^i)$ and transforms in the ${\bf 105}=(4,0,0)_{GZ}$ of $SO(6)$. Assume that it sits naturally at the bottom of a $(l+4,0,0)_{GZ}$ KK tower of $SO(6)$. On the $AdS$ side this is what we would expect from scalars coming from a ten-dimensional four-tensor reduced on $S^5$. A good candidate for the appropriate $SO(6,1)$ UIR is then $D^{1}(4;-5/2)$.
It is made up of the towers $(l+4,4,0)_{GZ}$, $(l+4,3,0)_{GZ}$, $(l+4,2,0)_{GZ}$, $(l+4,1,0)_{GZ}$ and $(l+4,0,0)_{GZ}$. None of the extra towers have operators which can appear in the Konishi multiplet, but if we take into account the whole set of operators given by $Tr(W_iW^i W^{(i_1} \cdot \ldots \cdot W^{i_p)})$, then at $p=1,2,3,4$ we find operators (dual to bulk scalars) with $SO(6)$ weights which could sit at the bottoms of the extra towers. It is important to note that we do not know whether the operators above are dual to gauge modes or physical modes, since we lack a precise rule for making this distinction. Still, our primitive fit for the Konishi multiplet is an indication that there might exist an OGA, $SO(6,1)$, on the CFT side which organizes even the non-chiral operators. Let us now address these issues from the $AdS$ side. What happens with stringy, massive modes on the $AdS_5$ side? These modes do not have protected anomalous dimensions on the CFT side. This is clear, since if we expand the stringy fields in $S^5$ spherical harmonics, their ten-dimensional masses will contribute $\alpha '$ terms to their KK reduced AdS masses. The non-linear nature of the equations relevant to stringy modes will contribute further corrections and will also mix modes with the same $SO(6)$ quantum numbers. Nevertheless, by the theorem explained above, the orthonormal basis of harmonic functions on $S^5$ provides UIRs of the SGA $SO(6,1)$. Thus, we expect that even the massive stringy modes can be fit into UIRs of this SGA. There is, however, a subtlety here: to [*prove*]{} that $SO(6,1)$ is the SGA of the full IIB string theory on $AdS_5 \times S^5$ we need to identify [*all*]{} the KK modes generated by the massive stringy modes, and then fit them explicitly (as we have done with the massless KK modes) into the relevant UIRs of $SO(6,1)$.
One way of getting to the string theory on $AdS_5 \times S^5$ is to start with the flat ten-dimensional string and then perturb it with an RR operator such that the theory flows to the $AdS_5 \times S^5$ background. One can contemplate a connection between the large radius limit of $AdS_5 \times S^5$ and the flat ten-dimensional space, by taking $N \rightarrow \infty$ and keeping $g_{YM}$ finite on the CFT side. In this limit the states from the $AdS_5$ side should presumably map into states propagating in the flat ten-dimensional space, the corresponding vertex operators should match, etc. The KK reduction of the massive string modes on $S^5$ from flat ten dimensions should get rearranged into the massive spectrum of the string on $AdS_5 \times S^5$. Also, on both sides there should exist a natural action of the conformal group of $S^5$. If we can show that this group acts as an SGA on the quantum ten-dimensional spectrum reduced on $S^5$, we expect the conformal group of $S^5$ to appear as an SGA for the string theory quantized directly about the $AdS_5\times S^5$ background. Let us examine the KK reduction of the first massive level of the flat IIB string on $S^5$. The multiplet transforms in the $({\bf 44 + 84 +128})^2$ of $SO(9)$ and has $256^2$ states. We consider perturbations around the classical solution caused by the presence of massive string modes in this multiplet. We apply the same harmonic analysis used on the supergravity modes, and decompose the $SO(9)$ representations coming from $({\bf 44 + 84 +128})^2$ in terms of $SO(5)\times SO(4)$ representations. As before, we work in the GZ basis, which enables us to read off the corresponding UIRs of $SO(6,1)$. We briefly consider one example of this particular procedure. If we look at ${\bf 44 \times 44}$ we find a ${\bf 450}$ of $SO(9)$, in addition to other representations of $SO(9)$ which we will ignore for now. The ${\bf 450}$ is a four-tensor field in ten dimensions.
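The counting of the first massive level is easy to verify: the ${\bf 44}$ is the symmetric traceless two-tensor of $SO(9)$, the ${\bf 84}$ the antisymmetric three-form, and the ${\bf 128}$ the $\gamma$-traceless vector-spinor. A short sanity check (bookkeeping only, not part of the original analysis):

```python
from math import comb

n = 9                                    # massive little group SO(9)
sym_traceless = n * (n + 1) // 2 - 1     # 44: symmetric traceless 2-tensor
three_form = comb(n, 3)                  # 84: antisymmetric three-form
spinor = 2 ** ((n - 1) // 2)             # 16: SO(9) spinor
vector_spinor = n * spinor - spinor      # 128: gamma-traceless vector-spinor

level_one = sym_traceless + three_form + vector_spinor
print(level_one, level_one ** 2)         # 256 states, squared in the multiplet
```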
It decomposes into, among others, a $(4,0)_{GZ}$ of $SO(5)$ which is contained in the $(l+4,4,0)_{GZ}$, $(l+4,3,0)_{GZ}$, $(l+4,2,0)_{GZ}$, $(l+4,1,0)_{GZ}$, $(l+4,0,0)_{GZ}$ of $SO(6)$ and generates the $D^{1}(4;-5/2)$ UIR of $SO(6,1)$. This particular UIR appeared in our discussion of the Konishi multiplet. The same procedure can be extended to all fields at this massive level, and to all massive levels. In our analysis group theory has supplied us with details about the spectrum. However, there are subtleties which can only be addressed by examining the corrected classical equations of motion: the proper identification of physical and gauge modes, as well as the mixing of various KK modes. These phenomena happen already at the massless level, so they are not surprising. These subtleties do not change the fact that physical and/or gauge modes form UIRs of $SO(6,1)$! So, the conclusion seems to be that the $SO(6,1)$ SGA from supergravity extends to the full string theory. Of course, in order to prove this statement one would have to examine all massive modes explicitly, and address the question of mixing and identification of physical and gauge modes. To conclude, in this paper we have listed the spectrum generating algebras for string theory and M-theory compactified on various backgrounds of the form $AdS_{d+1} \times S^n$. We have identified the representations of these algebras which make up the classical supergravity spectra, and we have argued for the existence of these spectrum generating algebras in the classical string/M-theory. We have also discussed the role of the spectrum generating algebras on the conformal field theory side in the framework of the AdS/CFT correspondence. One case we have not explicitly considered, but which can be analyzed in the same way, is the $AdS_2 \times S^2$ background. The corresponding boundary theory is some sort of conformal quantum mechanics, which is not well understood.
Whatever that boundary theory might be, there should exist an $SO(3,1)$ SGA on the supergravity/string side and a corresponding OGA on the conformal quantum mechanics side. Our methods should also apply to the case of string theory on $AdS \times S^{n}/G$, where $G$ is a discrete subgroup of the isometry group of the sphere. It would be interesting to understand the action of SGAs in this case. One problem where we expect the concept of SGAs to have a dynamical meaning is in the computation of correlation functions within the framework of AdS/CFT duality. Finally, an interesting question regarding SGAs concerns their interpretation in the finite $N$ case of AdS/CFT duality (strong conjecture). Jevicki and Ramgoolam have proposed that quantum deformed isometries should be relevant in this case. We note that there exists an analog of the Peter-Weyl theorem for the case of $SU(2)_q$; see (vol. 3). The harmonic functions for a $q$-deformed sphere can also be found in (vol. 3). It seems natural to expect that the harmonic functions over $SO(n)_q$ fit into UIRs of $SO(n+1,1)_q$, thus generalizing our previous results. In view of this proposal, we expect the full string theory on $AdS_{d+1}\times S^m$ to exhibit $q$-deformed SGAs.

[**Acknowledgements**]{}: We would like to thank P. Aschieri, I. Bars, J. deBoer, G. Chalmers, D. Gross, M. Günaydin, T. Hübsch, K. Pilch, J. Polchinski and S. Ramgoolam for interesting discussions. One of us (D.M.) would specially like to thank M. Günaydin for illuminating discussions in the very early stages of this work. The work of P. Berglund is supported in part by the National Science Foundation under Grant No. PHY94-07194. The work of E. Gimon is supported in part by the U.S. Department of Energy under Grant No. DE-FG03-92ER40701. The work of D. Minic is supported in part by the U.S. Department of Energy under Grant No. DE-FG03-84ER40168. P.B.
would like to thank Argonne, Caltech and LBL, Berkeley for their hospitality while some of this work was carried out. E.G. would also like to thank the Harvard theory group for their hospitality while this work was in progress. D.M. would like to thank ITP, Santa Barbara and Caltech for providing stimulating environments for research.

Table 1: Fields and $SO(6,1)$ UIRs for IIB string theory on $AdS_5 \times S^5$.

$$\begin{array}{|c|c|c|c|}
\hline
h_{\mu\nu} & h_{\alpha\nu} & h_{\alpha\beta} & h^\alpha_{\,\,\,\alpha} \\
D^2(-5/2) & D^1(1;-5/2) & D^1(2;-5/2) & D^2(-5/2) \\
\hline
A_{\mu\nu} & A_{\alpha\mu} & A_{\alpha\beta} & \\
D^2(-5/2) & D^1(1;-5/2) & D^0(1,1;-5/2) & \\
\hline
a_{\alpha\beta\mu\nu} & a_{\alpha\beta\gamma\mu} & a_{\alpha\beta\gamma\delta} & a+i\phi \\
D^0(1,1;-5/2) & D^1(1;-5/2) & D^2(-5/2) & D^2(-5/2) \\
\hline
\psi_\mu & \psi_\alpha & \lambda & \\
D(1/2,1/2;-5/2+i\rho) & D(1/2,1/2;-5/2+i\rho) & D(3/2,1/2;-5/2+i\rho) & \\
\hline
\end{array}$$

Table 2: Fields and $SO(8,1)$ UIRs for M-theory on $AdS_4 \times S^7$.

$$\begin{array}{|c|c|c|c|}
\hline
h_{\mu\nu} & h_{\alpha\nu} & h_{\alpha\beta} & h^\alpha_{\,\,\,\alpha},\,h^\mu_{\,\,\,\mu} \\
D^3(-7/2) & D^2(1;-7/2) & D^2(2;-7/2) & D^3(-7/2) \\
\hline
C_{\alpha\mu\nu} & C_{\alpha\beta\mu} & C_{\alpha\beta\gamma} & \\
D^2(1;-7/2) & D^1(1,1;-7/2) & D^0(1,1,1;-7/2) & \\
\hline
\psi_\mu & \psi_\alpha & \lambda & \\
D(1/2,1/2,1/2;-7/2+i\rho) & D(1/2,1/2,1/2;-7/2+i\rho) & D(3/2,1/2,1/2;-7/2+i\rho) & \\
\hline
\end{array}$$

Table 3: Fields and $SO(5,1)$ UIRs for M-theory on $AdS_7 \times S^4$.

$$\begin{array}{|c|c|c|c|}
\hline
h_{\mu\nu} & h_{\alpha\nu} & h_{\alpha\beta} & h^\alpha_{\,\,\,\alpha} \\
D^2(-2) & D^1(1;-2) & D^1(2;-2) & D^2(-2) \\
\hline
C_{\mu\nu\rho} & C_{\alpha\mu\nu} & C_{\alpha\beta\mu} & C_{\alpha\beta\gamma} \\
D^2(-2) & D^1(1;-2) & D^0(1,1;-2) & D^2(-2) \\
\hline
\psi_\mu & \psi_\alpha & & \\
D(3/2,1/2;-2+i\rho) & D(1/2,1/2;-2+i\rho) & & \\
\hline
\end{array}$$

Table 4: Fields and $SO(4,1)$ UIRs for string theory on $AdS_3 \times S^3$.

$$\begin{array}{|c|c|c|c|}
\hline
h_{\mu\nu} & h_{\alpha\nu} & h_{\alpha\beta} & \\
D^1(-3/2) & D^0(1;-3/2) & D^0(2;-3/2) & \\
\hline
A_{\mu\nu} & A_\mu & A_\alpha & \phi \\
D^0(1;-3/2) & D^1(-3/2) & D^0(1;-3/2) & D^1(-3/2) \\
\hline
\psi_\mu & \psi_\alpha & \lambda & \\
D(1/2;-3/2+i\rho) & D(3/2;-3/2+i\rho) & D(1/2;-3/2+i\rho) & \\
\hline
\end{array}$$
--- abstract: 'We give a short proof of the following theorem of Hara and Nakai: for a finitely bordered Riemann surface $R$, one can find an upper bound of the corona constant of $R$ that depends only on the genus and the number of boundary components of $R$.' address: 'Department of Mathematics Education, Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul 133-791, Korea' author: - 'Byung-Geun Oh' title: 'A short proof of Hara and Nakai’s theorem' --- [^1]

The corona problem and Hara-Nakai’s theorem {#the-corona-problem-and-hara-nakais-theorem .unnumbered}
===========================================

For a given Riemann surface $R$, let $H^\infty (R)$ denote the uniform algebra of bounded analytic functions on $R$. To avoid pathological cases, we also assume that $H^\infty (R)$ separates the points in $R$; i.e., for any $x_1, x_2 \in R$, $x_1 \ne x_2$, there exists a function $f \in H^\infty (R)$ with $f(x_1) \ne f(x_2)$. We next consider the maximal ideal space $\mathcal{M}(R)$ of $H^\infty(R)$, and observe that each $M \in \mathcal{M} (R)$ can be identified with $\varphi_M: H^\infty(R) \to \mathbb{C}$, where $\varphi_M$ is the complex homomorphism that has $M$ as its kernel. This means that the maximal ideal space $\mathcal{M} (R)$ can be regarded as a subspace of the dual space $(H^\infty(R))^*$ of $H^\infty(R)$. Moreover, it also implies that we can equip $\mathcal{M} (R)$ with the Gelfand topology, thus $\mathcal{M} (R)$ can be thought of as a closed subspace of $(H^\infty(R))^*$ that is contained in the unit sphere. For the details, see for example Chap. V-1 of [@Gar]. We know that each $\xi \in R$ corresponds to the maximal ideal $$M_\xi = \{ f \in H^\infty (R) : f(\xi) = 0 \},$$ hence $R$ can be naturally embedded into $\mathcal{M}(R)$ by the inclusion map $\iota : \xi \hookrightarrow M_\xi$.
Since we already provided $\mathcal{M}(R)$ with the Gelfand topology, one may ask the following: “is $\iota (R)$ dense in $\mathcal{M} (R)$ with respect to the Gelfand topology?” This is a famous question known as the *corona problem*, and we will say that the *corona theorem holds* for $R$ if $\iota (R)$ is dense in $\mathcal{M} (R)$. Otherwise $R$ is said to have *corona* ($= \mathcal{M}(R) \setminus \overline{\iota (R)}$). Note that the complex homomorphism $\varphi_{M_\xi}$ associated with $M_\xi$ is nothing but the point evaluation map $\lambda_\xi : f \mapsto f(\xi)$. It is known that the corona theorem holds for $R$ if and only if the following function theoretic statement holds (cf. Chap. 4 of [@Gam2], Chap. VIII of [@Gar], or Chap. 12 of [@Du]): for given $F_1, \ldots, F_n \in H^\infty (R)$ and $\delta \in (0,1)$ such that $$\label{bound} \delta \leq \max_{1 \leq j \leq n} |F_j(\zeta)| \leq 1 \quad \mbox{for all } \zeta \in R,$$ there exist $G_1, \ldots, G_n \in H^\infty (R)$ that satisfy the equation $$F_1 G_1 + F_2 G_2 + \cdots + F_n G_n =1.$$ We refer to $F_1, \ldots, F_n$ as *corona data* of index $(n, \delta)$ and $G_1, \ldots, G_n$ as *corona solutions* associated with the given corona data. The constant $$C(n,\delta,R) := \sup \inf \max \{ \|G_1\|_\infty, \ldots, \| G_n \|_\infty \}$$ is called the “corona constant” of $R$, where the supremum is over all corona data satisfying the bound above and the infimum is over all possible corona solutions associated with each corona data. As usual, we interpret the infimum of an empty set as infinity; thus if a Riemann surface $R$ has corona, the corona constant $C(n, \delta, R)$ must be infinite for some index $(n, \delta)$. But what about the converse? If the corona theorem holds for $R$, is $C(n, \delta, R)$ finite for all indices $(n, \delta)$?
The answer to this question is still unknown for general Riemann surfaces, but it is positive at least for finitely bordered Riemann surfaces, as the following theorem shows. \[T\] For a given finitely bordered Riemann surface $R$, let $g(R)$ denote the genus of $R$ and $b(R)$ the number of boundary components of $R$. Then for each given index $(n, \delta)$ and numbers $g \in \mathbb{N}\cup\{0\}$ and $b \in \mathbb{N}$, we have $$\sup_{R \in \mathfrak{R}(g,b) } C(n, \delta, R) < \infty,$$ where $\mathfrak{R}(g,b)$ is the collection of Riemann surfaces with $g(R) =g$ and $b(R) =b$. The purpose of our paper is to give a short proof of this theorem. Note that Theorem \[T\] implies that one can find an upper bound for the corona constant of a finitely bordered Riemann surface depending only on the index, the genus of $R$, and the number of boundary components of $R$. The case $g=0$ was proved by Gamelin in [@Gam], and the case $g=0$ and $b=1$ is nothing but Carleson’s famous corona theorem for the unit disc [@Ca]. There are various planar domains and Riemann surfaces for which the corona theorem holds ([@Ca], [@Gam], [@GJ], [@JM], [@St], [@Al], [@Be1], [@Be2], and more). On the other hand, relatively few Riemann surfaces are known to have corona. The first such example was constructed by Cole (Chap. 4 of [@Gam2]), which was recently reconstructed in a simpler way in [@Oh]; other Riemann surfaces that have corona can be found in [@BD] and [@Ha]. The corona problem for general planar domains is still open, and the answer is also unknown for the polydisc or the unit ball in $\mathbb{C}^n$, $n \geq 2$.

Proof of Theorem \[T\] {#proof-of-theoremt .unnumbered}
======================

Our proof is based on the following three theorems and Carleson’s corona theorem for the unit disc. \[TT\] Let $R$ and $R'$ be Riemann surfaces and $f: R' \to R$ an $m$-sheeted branched covering map for some $m < \infty$.
Then the corona theorem holds for $R'$ if and only if it holds for $R$. In Theorem \[TT\] Nakai considered only Riemann surfaces, that is, only *connected* surfaces $R$ and $R'$. However, one may check that the argument is still valid even when they are not connected, i.e., in Theorem \[TT\] one can replace $R$ and $R'$ by disjoint unions of Riemann surfaces. Let $\mathbb{D}$ denote the unit disc. The next result we will need for the proof of Theorem \[T\] is the following. \[Ahl\] Suppose $R$ is a finitely bordered Riemann surface with $g(R) =g$ and $b(R) =b$. Then there exists an $m$-sheeted branched covering map $f:R \to \mathbb{D}$, called the *Ahlfors map*, such that $b \leq m \leq 2g + b$. The last ingredient of our recipe is the following statement: \[TTT\] Let $\{ R_j \}$ be a sequence of Riemann surfaces. Then $$\sup_{j} C(n, \delta, R_j) < \infty$$ for every index $(n, \delta)$ if and only if the corona theorem holds for $\bigsqcup_{j} R_j$, the disjoint union of the $R_j$. This theorem is essentially Lemma 3.1 of [@Gam]. In fact, in [@Gam] the theorem was stated only for planar domains, but one can easily check that the proof is valid in our case as well. Now we are ready to prove Theorem \[T\]. Suppose Theorem \[T\] is not true. Then there exist an index $(n_0, \delta_0)$ and a sequence of finitely bordered Riemann surfaces $\{ R_j \}$ with $g(R_j) = g $ and $b(R_j) = b$, $j=1,2,\ldots$, such that $$\label{infty} C(n_0, \delta_0, R_j) \to \infty$$ as $j \to \infty$. Furthermore, by Theorem \[Ahl\] we can find $m_j$-sheeted branched covering maps $h_j : R_j \to D_j := \{ z: |z - 3j| < 1 \}$ with $b \leq m_j \leq 2g + b$ for all $j$. By passing to a subsequence if necessary, we may assume that all the $m_j$’s are the same, that is, there exists a constant $m$ such that $m = m_j$ for all $j$. Now let $D = \bigcup_j D_{j}$ and $\mathcal{R} = \bigsqcup_j R_{j}$.
Carleson’s corona theorem for the unit disc [@Ca] (Theorem \[T\] for the case $g=0$ and $b=1$) implies that $\sup_j C(n, \delta, D_{j} ) < \infty$ for any index $(n, \delta)$, thus the corona theorem for $D$ follows from Theorem \[TTT\]. Then by Nakai’s theorem (Theorem \[TT\]), we see that the corona theorem also holds for $\mathcal{R}$, because the map $h: \mathcal{R} \to D$ defined by $h|_{R_{j}} = h_{j}$ is an $m$-sheeted branched covering. According to Theorem \[TTT\], however, $\mathcal{R} = \bigsqcup_j R_{j}$ must have corona, because $\sup_{j} C(n_0, \delta_0, R_j) = \infty$ by (\[infty\]). This contradiction completes the proof of Theorem \[T\]. We believe that what makes our proof significantly shorter than the proof of Hara and Nakai is the idea of applying Nakai’s theorem (Theorem \[TT\]) to *disjoint* sets, which is in fact due to Gamelin as Theorem \[TTT\] indicates. The other parts of the proof are not very far from the original one in the sense that both proofs use the Ahlfors maps and Nakai’s theorem [@Na] (Theorems \[Ahl\] and \[TT\] above). [99]{} Lars L. Ahlfors, *Open Riemann surfaces and extremal problems on compact subregions*, Comment. Math. Helv. **24** (1950), 100–134. N. Alling, *A proof of the corona conjecture for finite open Riemann surfaces*, Bull. Amer. Math. Soc. **70** (1964), 110–112. D. E. Barrett and J. Diller, *A new construction of Riemann surfaces with corona*, J. Geom. Anal. **8** (1998), 341–347. M. Behrens, *The corona conjecture for a class of infinitely connected domains*, Bull. Amer. Math. Soc. **76** (1970), 387–391. M. Behrens, *The maximal ideal space of algebras of bounded analytic functions on infinitely connected domains*, Trans. Amer. Math. Soc. **161** (1971), 359–379. L. Carleson, *Interpolations by bounded analytic functions and the corona problem*, Ann. of Math. (2) **76** (1962), 547–559. Peter L. Duren, *Theory of $H^p$ spaces*, Pure and Applied Mathematics, Vol. **38**, Academic Press, New York-London, 1970. T. W.
Gamelin, *Localization of the corona problem*, Pacific J. Math. **34** (1970), 73–81. T. W. Gamelin, *Uniform algebras and Jensen measures*, London Mathematical Society Lecture Note Series **32**, Cambridge University Press, Cambridge-New York, 1978. J. B. Garnett, *Bounded analytic functions*, Pure and Applied Mathematics **96**, Academic Press, Inc., New York-London, 1981. J. B. Garnett and P. W. Jones, *The Corona theorem for Denjoy domains*, Acta Math. **155** (1985), 27–40. M. Hayashi, *Bounded analytic functions on Riemann surfaces*, in the book: Aspects of complex analysis, differential geometry, mathematical physics and applications (St. Konstantin, 1998), World Sci. Publishing, River Edge, NJ, 1999, 45–59. Masaru Hara and Mitsuru Nakai, *Corona theorem with bounds for finitely sheeted disks*, Tohoku Math. J. (2) **37** (1985), no. 2, 225–240. P. Jones and D. Marshall, *Critical points of Green’s functions, harmonic measure and the corona problem*, Ark. Mat. **23** (1985), 281–314. Byung-Geun Oh, *An explicit example of Riemann surfaces with large bounds on the corona solutions*, Pacific J. Math. **228** (2006), no. 2, 297–304. Mitsuru Nakai, *The corona problem on finitely sheeted covering surfaces*, Nagoya Math. J. **92** (1983), 163–173. E. L. Stout, *Bounded holomorphic functions on finite Riemann surfaces*, Trans. Amer. Math. Soc. **120** (1965), 255–285. [^1]: This work was supported by the research fund of Hanyang University (HY-2007-000-0000-4844).
--- author: - | Ruixin Wang\ School of Industrial Engineering\ Purdue University\ West Lafayette, IN 47906, USA\ `[email protected].` Prateek Jaiswal\ School of Industrial Engineering\ Purdue University\ West Lafayette, IN 47906, USA\ `[email protected].` Harsha Honnappa\ School of Industrial Engineering\ Purdue University\ West Lafayette, IN 47906, USA\ `[email protected].`\ bibliography: - 'demobib.bib' title: ESTIMATING STOCHASTIC POISSON INTENSITIES USING DEEP LATENT MODELS --- ABSTRACT {#abstract .unnumbered} ======== We present a new method for estimating the stochastic intensity of a doubly stochastic Poisson process. Statistical and theoretical analyses of traffic traces show that these processes are appropriate models of high intensity traffic arriving at an array of service systems. The statistical estimation of the underlying latent stochastic intensity process driving the traffic model involves a rather complicated nonlinear filtering problem. We develop a novel simulation method, using deep neural networks to approximate the path measures induced by the stochastic intensity process, for solving this nonlinear filtering problem. Our simulation studies demonstrate that the method is quite accurate on both in-sample estimation and on an out-of-sample performance prediction task for an infinite server queue. INTRODUCTION {#sec:intro} ============ This paper introduces a simulation-based method for estimating the stochastic intensity process of a doubly stochastic Poisson process (DSPP), using sample path observations of the DSPP over a fixed time horizon and under the assumption of a stochastic differential equation (SDE) model of the intensity. DSPPs are widely acknowledged as an appropriate model of traffic arriving at a variety of service systems, including hospitals and call centers. 
Specifically, multiple statistical analyses [@jongbloed2001managing; @avramidis2004modeling; @avramidis2005modeling; @maman2007uncertainty; @kim2014call] show that the (estimated) index of dispersion (i.e., the ratio of the variance to the mean) of the arrival counts typically exceeds 1 at reasonable operational time-scales; for Poisson processes the index equals 1. Furthermore, the arrival intensity appears time-varying and there are temporal correlations between traffic counts across non-overlapping time intervals. These conditions strongly indicate that the traffic process is not a Poisson process with deterministic intensity. However, at smaller time-scales (on the order of inter-arrival times) it is not possible to reject the null hypothesis that the arrival counts over a fixed time interval are Poisson distributed [@kim2014call]. DSPPs can model the overdispersion, temporal correlations and time-varying nature of the intensity while remaining reasonably tractable to use for performance prediction and control/optimization tasks. A rigorous definition of DSPPs is provided in the next section. The expansive definition of DSPPs allows for many models of the stochastic intensity process. A simple model advocated for modeling call center traffic in [@whitt1999dynamic] assumes that the uncertainty in the arrival rates is determined by a single random variable that determines the daily ‘busyness’ level. However, as noted in [@zhang2014scaling], the static nature of the intensity model implies it cannot account for the temporal correlation structure observed in many traffic traces. [@zhang2014scaling], in turn, suggest the use of a ‘dynamic’ (sic) intensity model. In the context of high intensity call center traffic they show, through a combination of theoretical and empirical analysis, that a Cox-Ingersoll-Ross (CIR) diffusion is appropriate. 
Recall that the CIR process is defined as the solution to the SDE $$\begin{aligned} ~\label{eq:cir} dZ(t) = (\beta - Z(t)) dt + \eta \beta^\alpha \sqrt{Z(t)} dW(t),~\forall t \geq 0\end{aligned}$$ where $(W(t) : t \geq 0)$ is a standard Brownian motion process, and $(\alpha,\beta,\eta)$ are positive constants that constitute the parameters of the model. Specifically, [@zhang2014scaling] present empirical evidence that the empirical distribution of the standardized arrival counts roughly follows a standard normal distribution in time intervals where the mean arrival counts are ‘large’. This empirical observation is supported by a rigorous central limit theorem (CLT) that holds for all $\alpha \in \left(0,\frac{1}{2}\right)$. Following [@zhang2014scaling], we assume that the stochastic intensity process is well-modeled by an SDE (though not necessarily (\[eq:cir\])). In practice, the stochastic intensity is [*latent*]{} (i.e., unobserved) and must be estimated from traffic traces. As noted in [@cheng2017history], this estimation problem is challenging. In fact, it entails the solution of a nonlinear filtering problem where the underlying stochastic intensity process can be viewed as the ‘signal,’ and the arrival process is a noisy ‘observation’ of the intensity. The solution of the nonlinear filtering problem depends crucially on the computation of the [*pathwise Kallianpur-Striebel formula*]{} (see [@van2007filtering Ch.1]), which is remarkably complicated. More importantly, the computation of the filter assumes complete knowledge of the latent intensity model. In our setting, while a [*structure*]{} of the model might be assumed, model parameters are unknown and must be estimated themselves. We present a computational method that simultaneously estimates the intensity model and solves the nonlinear filtering problem.
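For intuition, the CIR dynamics above are easy to simulate with a simple Euler–Maruyama scheme. The following Python sketch is illustrative only (the paper's experiments are in Matlab, and the parameter values here are arbitrary choices, not taken from [@zhang2014scaling]):

```python
import numpy as np

def simulate_cir(alpha, beta, eta, z0, T, n_steps, rng):
    """Euler-Maruyama discretization of
    dZ = (beta - Z) dt + eta * beta**alpha * sqrt(Z) dW."""
    dt = T / n_steps
    z = np.empty(n_steps + 1)
    z[0] = z0
    for m in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = (beta - z[m]) * dt
        diffusion = eta * beta**alpha * np.sqrt(max(z[m], 0.0)) * dw
        # clip at zero so the square root stays real under discretization error
        z[m + 1] = max(z[m] + drift + diffusion, 0.0)
    return z

rng = np.random.default_rng(0)
path = simulate_cir(alpha=0.25, beta=80.0, eta=1.0, z0=5.0, T=4.0,
                    n_steps=600, rng=rng)
print(path.shape)  # (601,)
```

The clipping at zero is a standard practical fix for discretized square-root diffusions; the exact CIR solution itself stays non-negative.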
We model the unknown drift and diffusion functions of this SDE using deep neural networks (DNNs), which are trained by maximizing a tight lower bound on the marginal log-likelihood of the traffic process. This is an instance of a so-called [*deep latent model*]{} (DLM); examples of such models include variational autoencoders (VAEs) and generative adversarial networks (GANs) used to synthesize video and image samples (so-called ‘deep fakes’) in the artificial intelligence (AI) literature [@goodfellow Ch.20]. To the best of our knowledge, this method has not been developed in the context of continuously observed stochastic processes, where DNN training can be rather complicated. Recent work in [@tzen2019:neuralSDE; @tzen2019theoretical] considers a more restrictive class of problems where the latent signal process is a diffusion over the interval $[0,1]$ and the objective is to estimate the terminal marginal distribution using observations of a random variable dependent on the terminal marginal (latent) random variable. In the subsequent sections we first present an overview of DSPPs in Section 2, followed by an extensive description of the statistical estimation problem and variational autoencoders in Sections 3 and 4. We present our method in Section 5, where we derive the lower bound referenced above and the DNN training procedure we have developed, based on the theory of stochastic flows by [@kunita1984stochastic]. Finally, in Section 6 we present simulation results that demonstrate the efficacy of our method. Specifically, we present results on a) in-sample estimation of the stochastic intensity process itself, and on b) out-of-sample ‘run-through’ experiments for predicting performance metrics in an infinite server queue. Section 7 concludes with a summary and some commentary on future work.
DOUBLY STOCHASTIC POISSON PROCESSES =================================== Let $(X(t) : t \geq 0)$ be a non-decreasing $\mathbb Z_+$-valued point process, $(X|Z)$ represent the process conditioned on the stochastic process $(Z(t) : t \geq 0)$, and Poi$(\Lambda)$ represent a Poisson process with integrated intensity function $(\Lambda(t) : t \geq 0)$. Formally, a DSPP is defined as: Let $(Z(t) : t \geq 0)$ be a non-negative stochastic process such that with probability one $t \mapsto Z(t)$ is locally integrable. Then, $(X(t) : t \geq 0)$ is a DSPP driven by $(Z(t) : t \geq 0)$ if $(X|Z) \sim \text{Poi}({\mathbf{Z}})$, where ${\mathbf{Z}}$ is the integrated process defined as ${\mathbf{Z}}(s,t) := \int_s^t Z(r) dr$ for any $s < t$. That is, for any set of points $\{t_0,t_1,\ldots, t_d\} \subset (0,\infty)$, where $0 < t_0 \leq t_1 \leq \cdots \leq t_d < \infty$, the finite dimensional distributions of $(X|Z)$ satisfy $$\begin{aligned} \mathbb P(X(t_0) = k_0, X(t_1) = k_1, &\ldots, X(t_d) = k_d | Z_{0:t_d})\\ \nonumber &= \frac{\exp(-{\mathbf{Z}}(0,t_{0}))({\mathbf{Z}}(0,t_{0}))^{k_0}}{k_0!}\prod_{i=0}^{d-1} \frac{\exp(-{\mathbf{Z}}(t_i,t_{i+1}))({\mathbf{Z}}(t_i,t_{i+1}))^{k_{i+1}-k_{i}}}{(k_{i+1}-k_i)!},\end{aligned}$$ where $Z_{0:t} \equiv (Z(s) : 0 \leq s \leq t)$. Formally, the path measure induced by $(X(t) : t \geq 0)$ is defined as $\int \text{Poi}({\mathbf{Z}})\, dP(Z)$, where $P(\cdot)$ is the path measure induced by the stochastic process $(Z(t) : t \geq 0)$, so that the finite dimensional distribution of $X(t)$ (at any fixed $t \geq 0$) satisfies $$\begin{aligned} \mathbb P\left( X(t_0) = k_0,\ldots,X(t_d) = k_d \right) = \int \mathbb P(X(t_0) = k_0, X(t_1) = k_1, &\ldots, X(t_d) = k_d | Z_{0:t_{d}}) dP(Z_{0:t_{d}}).\end{aligned}$$ Note that we are deliberately being less than rigorous in our description of this path measure so as to avoid a heavier notational burden that distracts from the primary message of this paper.
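The definition can be made concrete with a short simulation: discretize an intensity path, integrate it over each sub-interval, and draw conditionally independent Poisson increments. A Python sketch (the sinusoidal intensity path is a placeholder for illustration, not a model advocated in the paper):

```python
import numpy as np

def sample_dspp_counts(z_path, t_grid, rng):
    """Draw one realization of the counting process X on t_grid, given a
    discretized intensity path Z: (X|Z) has independent Poisson increments
    with mean equal to the integrated intensity over each cell."""
    # trapezoid-rule approximation of Z(s,t) on each sub-interval
    integrated = 0.5 * (z_path[:-1] + z_path[1:]) * np.diff(t_grid)
    increments = rng.poisson(integrated)   # (X|Z) ~ Poi(Z)
    return np.cumsum(increments)           # counting process at grid points

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 61)
z = 80.0 + 10.0 * np.sin(t)   # placeholder intensity path
x = sample_dspp_counts(z, t, rng)
```

In the paper's setting the placeholder path `z` would instead be a simulated sample path of the latent SDE.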
THE STATISTICAL ESTIMATION PROBLEM ================================== In the setting of a stochastic differential equation (SDE) model of the intensity, the estimation problem amounts to estimating the drift and diffusion coefficients. Suppose the drift and diffusion coefficients are parameterized by $\theta$. To understand the complexity of the problem, consider the following formal argument for deriving the maximum (log-)likelihood estimator (MLE) of the marginal distribution of $X(t)$, $$\begin{aligned} ~\label{eq:lln} \log \mathbb P\left( X(t) = k \right) &= \log \int \mathbb P(X(t) = k | Z_{0:t}) dP_\theta(Z_{0:t}),\end{aligned}$$ where $P_\theta$ is the path measure corresponding to the parameters $\theta$. Observe that computing the MLE requires differentiating with respect to $\theta$ under this path measure. There are potentially two ways of doing this. First, suppose we are able to compute the distribution of ${\mathbf{Z}}(0,t)$ as a function of the parameters $\theta$. Then, the gradient of the log-likelihood can be computed using the score function. However, while the distribution of ${\mathbf{Z}}$ could be computed with some effort for some instances (such as the CIR model (\[eq:cir\])), this is unlikely to be true for arbitrary stochastic processes. On the other hand, suppose the path measure $P_\theta$ has a Radon-Nikodym density with respect to a reference path measure $\pi_0$; that is, there exists a real-valued potential function $\Phi(Z_{0:T};\theta)$ such that $dP_\theta/d\pi_0(Z_{0:T}) \propto \exp\left(\Phi(Z_{0:T};\theta)\right)$; then the gradient can be computed by differentiating the potential function. In general, however, we are confronted by the question of the choice of an appropriate reference measure. Note that measures in infinite dimensional spaces have a strong tendency towards either singularity or equivalence, complicating this choice; for instance, standard Brownian motion is not a feasible reference measure for the CIR process.
While this issue can be resolved in specific cases, we would like a method that works for arbitrary choices of the stochastic intensity process. While the reference measure might not be known, we can introduce another measure into (\[eq:lln\]) that also has a density with respect to the reference measure, making it equivalent to $P_\theta$. Observe that the conditional measure $P(Z_{0:t}|X(t)=k)$ is the “optimal” choice in the sense that we have $$\begin{aligned} \label{eq:multiply} \log {\mathbb P}(X(t) = k) &=\log \int {\mathbb P}(X(t) = k | Z_{0:t}) \frac{dP_\theta(Z_{0:t})}{dP(Z_{0:t}|X(t)=k)} dP(Z_{0:t}|X(t) = k)\\ &=\int \log\left( {\mathbb P}(X(t) = k | Z_{0:t}) \frac{dP_\theta(Z_{0:t})}{dP(Z_{0:t}|X(t)=k)} \right)dP(Z_{0:t}|X(t) = k),\end{aligned}$$ where the second equality follows from the fact that the term inside the $\log$ is precisely ${\mathbb P}(X(t) = k)$ (and therefore a constant with respect to the conditional measure). This formal calculation shows that computing the MLE of the count process amounts to solving a complex nonlinear filtering problem to compute the conditional measure, where the unobserved stochastic intensity function should be viewed as a ‘signal’ and the (conditionally) Poisson counts are noisy ‘observations’ of the signal. More precisely, the Doob-Meyer decomposition of the DSPP $(X(t):t\geq 0)$ implies $X(t) = \int_0^t Z(s) ds + \eta(t)$, where $(Z(t) : t\geq 0)$ is the stochastic intensity process and $(\eta(t) : t \geq 0)$ is a martingale (see [@segall1975modeling] as well). However, solving this filtering problem is remarkably hard. Observe that the density $dP(\cdot|X(t)=k)/dP_\theta(\cdot)$ is the pathwise Kallianpur-Striebel formula [@van2007filtering Ch.1]. Solving this nonlinear filtering problem, however, is no easier than the ‘direct differentiation’ methods for computing the MLE noted in the previous paragraph.
Revisiting the computation in (\[eq:multiply\]), suppose we now introduce an arbitrary (but equivalent) measure $P_{\phi,k}$ (parameterized by $\phi$ and $k$). Then, Jensen’s inequality implies that $$\begin{aligned} ~\label{eq:vae} \log {\mathbb P}(X(t) = k) &\geq \int \log \left( {\mathbb P}(X(t) = k | Z_{0:t}) \frac{dP_\theta(Z_{0:t})}{dP_{\phi,k}(Z_{0:t})} \right) dP_{\phi,k}(Z_{0:t}).\end{aligned}$$ While this is a lower bound, observe that the inequality can be tightened by maximizing it over both $\theta$ and $\phi$. The objective, however, is highly non-concave in these parameters and consequently we can only guarantee the computation of a local optimum. Furthermore, the choice of parameterization will, in general, imply that the class of measures being optimized over may not include the ‘true’ measures, resulting in an approximation to the filtering distribution. Therefore, this procedure of optimizing over path measures is an example of [*approximate inference*]{}, used extensively in the machine learning literature for approximately solving high dimensional and large sample statistical inference problems, particularly with Bayesian models. In the next section, we briefly review approximate inference in a general setting. APPROXIMATE INFERENCE {#sec:VAE} ===================== Consider an ensemble of $n$ observations ${\mathbf Y_n}:=\{Y_1,Y_2,\ldots,Y_n\}$, where each $Y_i$ takes values in an arbitrary topological space $\mathcal A$; ${\mathbf Y_n}$ represents the available dataset. Each $Y_i$ induces a distribution $P_i$ that lies in some space of measures $\mathcal{P}$. The inference problem is to estimate a distribution over the sequence of unknown data generating distributions $\{P_1,P_2,\ldots,P_n\}$ given the observations ${\mathbf Y_n}$. In the Bayesian inference setting, we assume the existence of a sequence of ‘prior’ distributions $\{\Pi_1,\Pi_2,\ldots,\Pi_n\} \in \mathcal{P}\times \cdots \times \mathcal P =: \bigotimes_{n}\mathcal{P}$.
Subsequently, $Y_i\in{\mathbf Y_n}$ is assumed to follow a [*generative model*]{} defined in the following hierarchical manner: (i) generate $P_i \sim \Pi_i$ for each $i\in \{1,2,\ldots, n\}$; then (ii) sample $Y_i\sim P_i$. Now, using Bayes’ rule, observe that for any subset $B\subseteq \bigotimes_{n}\mathcal{P}$ the ‘posterior’ distribution satisfies $$\begin{aligned} \Pi_n(B|{\mathbf Y_n}) = \frac{\int_{B}\prod_{i=1}^{n}d\Pi_i(P_i)P_i(Y_i)}{\int \prod_{i=1}^{n}d\Pi_i(P_i)P_i(Y_i)}. \label{eq:Post}\end{aligned}$$ Observe that we have assumed ${\mathbf Y_n}$ forms an independent ensemble; we will continue with this assumption in the remainder of the paper. In most high (or possibly infinite) dimensional settings, computing this posterior distribution is intractable, and consequently performing statistical inference with the posterior is challenging. To address the intractability of the posterior, various sampling and optimization based methods have been proposed. We now describe a variational approach to approximate inference that belongs to the latter category. In this framework, we first fix a class of measures $\mathcal{Q}_n \subseteq \bigotimes_n\mathcal{P}$ and then compute an approximation to the posterior (\[eq:Post\]) in the family $\mathcal{Q}_n$ by optimizing a lower bound to the ‘model evidence’ $\mathbb{P}({\mathbf Y_n}):=\int \prod_{i=1}^{n}d\Pi_i(P_i)P_i(Y_i)$.
Observe that for any sequence of measures $\{Q_i\}_{1\leq i \leq n} \in \mathcal{Q}_n$, Jensen’s inequality implies that $$\begin{aligned} \nonumber \log \mathbb{P}({\mathbf Y_n}) &\geq \int \prod_{i=1}^{n}dQ_i(P_i) \log \prod_{i=1}^{n} P_i(Y_i) - \int \prod_{i=1}^{n}dQ_i(P_i) \log \frac{\prod_{i=1}^{n}dQ_i(P_i)}{\prod_{i=1}^{n}d\Pi_i(P_i)} \\ &= \sum_{i=1}^{n} \left[ \mathbb{E}_{Q_i}\left[\log P_i(Y_i) \right] - \mathbb{E}_{Q_i}\left[\log \frac{d Q_i}{d\Pi_i} (P_i)\right] \right]; \label{eq:ELBO}\end{aligned}$$ in the machine learning literature the right hand side (RHS) in (\[eq:ELBO\]) is popularly known as the *evidence lower bound* (ELBO). Observe that (\[eq:vae\]) precisely corresponds to a single term in the sum on the RHS of (\[eq:ELBO\]), where the measure $Q_i$ corresponds to $P_{\phi,k}$ and $\Pi_i$ corresponds to the measure $P_\theta$. In the variational framework, for a given sequence of prior distributions $\{\Pi_1,\Pi_2,\ldots,\Pi_n\}$ the ELBO is maximized over the distributions in $\mathcal{Q}_n$ using stochastic gradient descent methods to find the best $\{Q_i\}_{{1\leq i \leq n}}$ in $\mathcal{Q}_n$. In particular, observe that the ELBO can be rewritten as $$\textsc{ELBO} = -\textsc{KL} \left(\prod_{i=1}^{n}dQ_i(P_i) \Big\|d\Pi_n(P_1,P_2 \ldots P_n|{\mathbf Y_n}) \right) + \log {\mathbb P}({\mathbf Y_n}),$$ where KL represents the Kullback-Leibler divergence. Therefore, the optimizer of the ELBO is an approximation to the posterior distribution as defined in (\[eq:Post\]). Deep latent models (DLMs) [@goodfellow Ch.19,20] specialize this general presentation to the setting where the probability measures are parameterized by deep neural networks (DNNs). Variational autoencoders (VAEs) [@Kingma2019] are an example of DLMs in the multivariate setting where the sequence of prior distributions is known only up to the parameters of an appropriately chosen DNN modeling these parameters. In the VAE literature this sequence of prior distributions is also known as the *decoders*.
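The decomposition of the ELBO into log-evidence minus a KL term is easy to verify numerically on a toy conjugate model. The following Python sketch uses a hypothetical one-dimensional Gaussian example, chosen only because its evidence and posterior are available in closed form (it is not a model from this paper):

```python
import numpy as np

rng = np.random.default_rng(2)
y = 1.3   # single observation; model: mu ~ N(0, 1), Y | mu ~ N(mu, 1)
# marginally Y ~ N(0, 2), so the log evidence is known exactly
log_evidence = -0.5 * np.log(2 * np.pi * 2.0) - y**2 / (2 * 2.0)

def elbo(m, s, n_samples=200_000):
    """Monte Carlo ELBO for the variational family Q = N(m, s^2)."""
    mu = rng.normal(m, s, size=n_samples)
    log_lik = -0.5 * np.log(2 * np.pi) - (y - mu) ** 2 / 2.0
    log_prior = -0.5 * np.log(2 * np.pi) - mu**2 / 2.0
    log_q = -0.5 * np.log(2 * np.pi * s**2) - (mu - m) ** 2 / (2 * s**2)
    return np.mean(log_lik + log_prior - log_q)

# ELBO is strictly below the evidence for a mismatched Q, and tight at the
# exact posterior N(y/2, 1/2), where the gap KL(Q || posterior) vanishes.
assert elbo(0.0, 1.0) < log_evidence
assert abs(elbo(y / 2, np.sqrt(0.5)) - log_evidence) < 1e-2
```

At the exact posterior the integrand is constant, so the Monte Carlo estimate is exact; this mirrors the fact that the "optimal" change of measure in the previous section makes the bound an identity.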
The approximating measures $\mathcal{Q}_n$, entitled *encoders* in the VAE literature, are also parameterized using DNNs. Given the ensemble $\mathbf Y_n$, the DNN parameters of both the encoder and decoder are estimated using stochastic gradient descent (SGD). Our current setting, of course, is far more complicated than the standard VAE setting since the DNNs model the drift and diffusion coefficients of SDEs, leading to a complicated training procedure, as we will see. DLMs FOR DSPPs {#sec:ps} ============== We assume access to $n$ independent and identically distributed ([i.i.d.]{}) observations of a stochastic process $\{X(t), t\leq T\}$. In many service systems, such as hospitals and call centers, traffic counts are collected at fixed, regular intervals; for instance, in many large call centers, this is typically at intervals of length 30 seconds to 1 minute. As noted before, it has been observed [@zhang2014scaling] that a DSPP with a CIR-type ergodic diffusion process driving the intensity is an appropriate model of the traffic counts at operational time-scales (typically of the order of 10 minutes). The time interval $[0,T]$ in our model represents this operational time-scale. For clarity of exposition, we will describe our method under two specific conditions: (i) the traffic counts are observed at the time epochs $T/2$ and $T$; and (ii) a single sample $n=1$ is observed. These can be extended to more observation instants and samples at the expense of a more burdensome notation, but our method will not change. We model the unknown stochastic intensity process by the SDE $$\begin{aligned} ~\label{eq:prior} dZ(t)=b(Z(t),t;\theta)dt+ \eta \sqrt{Z(t)}dW(t), \quad t\leq T\end{aligned}$$ where $\{W(t),t\geq 0\}$ is a standard Brownian motion, $b(\cdot,t;\theta): C_b[0,T]\times [0,T]\mapsto {\mathbb R}$ is the drift and $\eta \sqrt{(\cdot)}$ with $\eta > 0$ is the diffusion coefficient. $C_b[0,T]$ denotes the space of all continuous and bounded functions on the interval $[0,T]$.
Here, the unknown drift function is modeled using a DNN parameterized by $\theta$, and to avoid getting bogged down in technical detail, we assume the existence of a strong solution to (\[eq:prior\]). For technical reasons we will, for now, assume that the diffusion coefficient is known. We denote the measure induced by the solution of this SDE as $P_{\theta}(\cdot)$; this corresponds to the ‘prior’ measure in the previous section. The independent increments property implies that the joint distribution of the arrival count random vector $Y_1 := (X\left(\frac{T}{2}\right), X(T))$, conditional on the intensity process $Z_{0:T}$, can be expressed as: $$\label{eqn:likelihood1} \begin{split} {\mathbb P}(Y_1 = (k_1,k_2) | Z_{0:T}) &=\frac{e^{-\int_{0}^{\frac{T}{2}} Z(t) dt}\left(\int_{0}^{\frac{T}{2}} Z(t) dt\right)^{k_1}}{k_1!} \frac{e^{-\int_{\frac{T}{2}}^{T} Z(t) dt}\left(\int_{\frac{T}{2}}^{T} Z(t) dt\right)^{k_2-k_1}}{(k_2-k_1)!}. \end{split}$$ A DLM for the Stochastic Intensity Process ------------------------------------------ By definition, the variational family $\mathcal Q$ must consist of measures that are absolutely continuous with respect to the ‘prior’ measure $P_\theta$. In our current setting, $\mathcal Q$ is the class of equivalent measures induced by the solutions of SDEs that have the same diffusion coefficient as (\[eq:prior\]). To be precise, consider the SDE $$\begin{aligned} d Z(t)&=\bar b_k( Z(t),t;\phi)dt+ \eta\sqrt{ Z(t)} dW(t), \text{ for $t\leq T$}, \label{eq:eq1}\end{aligned}$$ where for each $k \in \{0,1,\ldots\}$ the drift function $\bar b_k(\cdot,\cdot;\phi)$ is modeled using a DNN with parameter $\phi$. We denote the measure induced by the solution of this SDE as $Q_{\phi}$. Figure \[fig:Summary\] illustrates the use of deep latent models in defining the measures $P_{\theta}$ and $Q_{\phi}$, and consequently the ELBO. Next, we derive the ELBO for the observation random vector $Y_1$. The proof, omitted for space reasons, follows from Girsanov’s theorem.
\[thm:ELBO\] Define $u_k(Z(t),t;\theta,\phi) :=(\eta \sqrt{Z(t)})^{-1}\left( \bar{b}_k(Z(t),t;\phi)-b(Z(t),t;\theta)\right)$ and suppose that $u_k$ satisfies a [*strong*]{} Novikov’s condition, $ {\mathbb{E}}\left[ \exp\left(\frac{1}{2}\int_0^T |u_k(Z(t), t; \theta, \phi)|^2 dt \right) \right] < +\infty ~\forall \theta,\phi. $  Then, $$\label{eqn:hatW} \hat{W}_t:=\int_0^t u_k(Z(s),s;\theta,\phi)ds+W(t)$$ is a Brownian motion w.r.t. $Q_{\phi}$, $dZ(t)={b}(Z(t),t;\theta)dt+\eta \sqrt{Z(t)} d\hat{W}_t$, and $$\begin{aligned} \log{\mathbb P}(Y_1 = (k_1,k_2)) &\geq {\mathbb{E}}_{Q_\phi}\left[ \log{\mathbb P}(Y_1 = (k_1,k_2)|Z_{0:T} ) - \frac{1}{2}\int_0^Tu_k^2(Z(s),s;\theta,\phi)ds \right]:=\text{ELBO}. \label{eqn:ELBO}\end{aligned}$$ Notice that we must assume that Novikov’s condition holds for all possible parameterizations of the functions $\bar b$ and $b$. This is a strong condition that is satisfied by the class of DNNs that we work with in this paper, since the output of the DNN is bounded by definition. However, more analysis is required on sufficient conditions for DNNs to satisfy Novikov’s condition. TRAINING THE DLM ---------------- Our objective is to train the neural networks $b(Z(t),t;\theta)$ and $u_k(Z(t),t;\theta,\phi)$ by maximizing the ELBO. We fix $u_k(Z(t),t;\theta,\phi)$ to be a deterministic neural network defined as $\tilde u(k,t;\beta)$ with parameters $\beta$. Combined with (\[eqn:hatW\]), this additional restriction imposed on $u_k(Z(t),t;\theta,\phi)$ ensures that the process $\hat W_t$ has independent increments. In the variational inference literature [@blei2017variational], this assumption is also known as the mean-field approximation, that is, each partition of the unknown latent variables is independent of the others. A similar assumption on the latent process was used in [@tzen2019:neuralSDE], where the authors call it a path-space analog of the mean-field approximation.
Now substituting $u_k(Z(t),t;\theta,\phi)=\tilde u(k,t;\beta)$ in (\[eqn:hatW\]) and using the observation $dZ(t)={b}(Z(t),t;\theta)dt+\eta \sqrt{Z(t)} d\hat{W}_t$, it follows that we can simulate the SDE using $W(t)$ instead of $\hat W(t)$; that is, fixing $\eta=1$ for simplicity, $$dZ(t)=b(Z(t),t;\theta)dt+ \sqrt{Z(t)} \tilde u(k,t;\beta)dt+\sqrt {Z(t)} dW(t) \text{ and } Z(0)=0. \label{eq:Var}$$ We denote the measure induced by the above SDE as $Q_{\beta,\theta}$. We use stochastic gradient descent (SGD) to maximize the objective in (\[eqn:ELBO\]) to learn the unknown neural network parameters $\theta$ and $\beta$. In order to use SGD, we first need to generate sample paths of the latent process $Z(t)$ in (\[eq:Var\]), which we do using the Euler-Maruyama discretization method. We partition the time interval $[0,T]$ into $N$ equal sub-intervals, denoted as $\{t_0,t_1,\ldots,t_N\}$, with $t_0=0$ and $t_N=T$, set $Z(t_0)=Z(0)$, and simulate $\{Z(t_m)\}_{0\leq m \leq N}$ using the recursive equation $$\begin{aligned} Z(t_{m+1})=Z(t_m)+b( Z(t_m),t_m;\theta)(t_{m+1}-t_m)&+\sqrt{Z(t_m)}\tilde u(k,t_m;\beta)(t_{m+1}-t_m) +\sqrt{Z(t_m)} \Delta W_m,\end{aligned}$$ where $\{ \Delta W_m := W(t_{m+1})-W(t_{m})\}_{_{0\leq m < N}}$ are $N$ [i.i.d.]{} Gaussian random variables with mean zero and variance $t_{m+1}-t_m$. In order to use SGD we also need to compute the gradient of the objective function (\[eqn:ELBO\]) with respect to the parameters $\theta$ and $\beta$. Notice that the expectation in the ELBO is with respect to the measure induced by the SDE in (\[eq:Var\]), denoted as $Q_{\beta,\theta}$. Observe that the only source of randomness in generating $Z(t)$ is from the Brownian motion $W(t)$, which does not depend on either $\beta$ or $\theta$. Therefore we may interchange the differential operator with respect to the parameters and the expectation in (\[eqn:ELBO\]). To make the dependence of $Z(t)$ on $\beta$ and $\theta$ explicit, we write $Z(t)$ as $Z^{\beta,\theta}(t)$.
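The recursion above translates directly into code. A Python sketch follows (the paper's implementation is in Matlab; the drift and control callables here are toy stand-ins for the trained DNNs, not the paper's networks):

```python
import numpy as np

def simulate_controlled_sde(b, u_tilde, k, T, n_steps, rng):
    """Euler-Maruyama for dZ = b(Z,t) dt + sqrt(Z) u~(k,t) dt + sqrt(Z) dW,
    with Z(0) = 0; `b` and `u_tilde` stand in for the two neural networks."""
    dt = T / n_steps
    z = np.zeros(n_steps + 1)
    for m in range(n_steps):
        t = m * dt
        dw = rng.normal(0.0, np.sqrt(dt))   # Gaussian increment, variance dt
        sqz = np.sqrt(max(z[m], 0.0))
        z[m + 1] = max(z[m] + b(z[m], t) * dt
                       + sqz * u_tilde(k, t) * dt + sqz * dw, 0.0)
    return z

# illustrative placeholders for the networks
b = lambda z, t: 0.3 * (80.0 - z)
u_tilde = lambda k, t: 0.1
rng = np.random.default_rng(3)
z_path = simulate_controlled_sde(b, u_tilde, k=100, T=4.0, n_steps=60, rng=rng)
```

Clipping the state at zero is a pragmatic guard against the square root going negative under discretization; it plays no role in the continuous-time argument.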
In particular, for given values of the parameters $\theta$ and $\beta_{-j}$ (all components of the parameter $\beta$ except $\beta^j$) observe that ${\frac{\partial}{\partial \beta^j}}{\mathbb{E}}\left[ \log{\mathbb P}(Y_1 = (k_1,k_2)|Z^{\beta,\theta }_{0:T} ) - \frac{1}{2}\int_0^T \tilde u^2(k,s;\beta)ds \right] =$ $$\begin{aligned} \nonumber {\mathbb{E}}\bigg[ {\frac{\partial}{\partial \beta^j}}\log \bigg( \frac{e^{-\int_{0}^{\frac{T}{2}} Z^{\beta,\theta }(t) dt} \bigg(\int_{0}^{\frac{T}{2}} Z^{\beta,\theta }(t) dt\bigg)^{k_1}}{k_1!} & \frac{e^{-\int_{\frac{T}{2}}^{T} Z^{\beta,\theta }(t) dt}\bigg(\int_{\frac{T}{2}}^{T} Z^{\beta,\theta }(t) dt\bigg)^{k_2-k_1}}{(k_2-k_1)!}\bigg)\\ & -\frac{1}{2}{\frac{\partial}{\partial \beta^j}}\int_0^T \tilde u^2(k,s;\beta)ds \bigg],\end{aligned}$$ where we use the likelihood expression from (\[eqn:likelihood1\]). Also note that, to avoid any confusion, we have omitted the subscript $Q_{\beta,\theta}$ from ${\mathbb{E}}_{}[\cdot]$ above. Now applying the straightforward product differentiation rule and subsequently interchanging the integral and ${\frac{\partial}{\partial \beta^j}}$ results in an expression that requires us to compute the derivative of the process $Z^{\beta,\theta }(t)$ with respect to $\beta^j$.
To compute the derivative process, it follows from [@kunita1984stochastic Theorem 3.1] that, under certain regularity conditions on the drift and diffusion coefficients of the process $Z^{\beta,\theta }(t)$ in (\[eq:Var\]), the derivative process ${\frac{\partial}{\partial \beta^j}}Z^{\beta,\theta }(t)$ is the solution of the following SDE $$\begin{aligned} \frac{\partial Z^{\beta,\theta }(t)}{\partial \beta^j}=\int_0^t &\left( \frac{\partial b(Z^{\beta,\theta}(s),s;\theta)}{\partial Z^{\beta,\theta }(s)}\frac{\partial Z^{\beta,\theta }(s)}{\partial\beta^j} + \frac{\tilde u(k,s;\beta)}{2\sqrt{Z^{\beta,\theta }(s)}}\frac{\partial Z^{\beta,\theta }(s)}{\partial \beta^j} + \sqrt{Z^{\beta,\theta}(s)}\frac{\partial \tilde u(k,s;\beta)}{\partial\beta^j} \right)ds \\& +\int_0^t \left(\frac{1}{2\sqrt{Z^{\beta,\theta }(s)}}\frac{\partial Z^{\beta,\theta }(s)}{\partial\beta^j}\right)dW_s \text{ and } \frac{\partial Z^{\beta,\theta }(0)}{\partial \beta^j} =0.\end{aligned}$$ We simulate the derivative process above using the Euler-Maruyama method in a similar manner as we did for $Z^{\beta,\theta}(t)$. Lastly, we use a similar procedure to generate the derivative of the latent process $Z^{\beta,\theta}(t)$ with respect to a component of $\theta$ for given values of the other parameters. We omit this for space reasons. NUMERICAL EXPERIMENTS ===================== We conducted a number of simple experiments to demonstrate both the in- and out-of-sample performance of the DLM. We start by describing the setting for the experiments. The code is written in Matlab with the Deep Learning Toolbox. The computation and space complexity of this method can be found in [@tzen2019:neuralSDE]. In our specific case, the time complexity for each iteration of the gradient update is $\mathcal{O}\left(N((k+n)T(b)+T(\tilde u))\right)$, where $k$ and $n$ are the number of parameters in $\beta$ and $\theta$, respectively, $T(f)$ is the time complexity of computing $f$, and $N$ is the number of time steps in the time discretization.
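Before turning to the experimental setting, note that a pathwise derivative recursion of the kind described above can be sanity-checked numerically: simulate the sensitivity alongside the state, then compare against a finite difference computed with common random numbers. A Python sketch with a hypothetical scalar control $\tilde u \equiv \beta$ and a fixed drift (stand-ins for the DNNs, not the paper's networks):

```python
import numpy as np

def paths_and_sensitivity(beta, T=1.0, n=200, seed=4):
    """Jointly simulate Z and its pathwise derivative s = dZ/dbeta for
    dZ = 0.3*(80 - Z) dt + sqrt(Z)*beta dt + sqrt(Z) dW, Z(0) = 5."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # common random numbers
    db_dz = -0.3                                # d/dz of 0.3*(80 - z)
    z, s = 5.0, 0.0
    for m in range(n):
        sq = np.sqrt(z)
        z_new = z + 0.3 * (80.0 - z) * dt + sq * beta * dt + sq * dW[m]
        # sensitivity recursion: differentiate the Euler step w.r.t. beta
        s = s + (db_dz * s + beta / (2 * sq) * s + sq) * dt + s / (2 * sq) * dW[m]
        z = z_new
    return z, s, dW

def terminal_state(beta, dW, T=1.0, n=200):
    """Replay the Euler scheme on the same noise for a perturbed beta."""
    dt = T / n
    z = 5.0
    for m in range(n):
        sq = np.sqrt(z)
        z = z + 0.3 * (80.0 - z) * dt + sq * beta * dt + sq * dW[m]
    return z

beta = 0.2
zT, sT, dW = paths_and_sensitivity(beta)
eps = 1e-4
fd = (terminal_state(beta + eps, dW) - terminal_state(beta - eps, dW)) / (2 * eps)
```

The pathwise sensitivity and the common-random-numbers finite difference agree closely, which is the discrete analogue of the derivative-process SDE above.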
Setting ------- Observe that training the neural network by maximizing the ELBO entails solving a stochastic optimization problem in . We use a sample average approximation (SAA) of  for which we simulate $m$ independent sample paths of $(Z(t) : t\in [0,T])$: $$\label{eqn:empirical_ELBO} \frac{1}{mn} \sum_{i=1}^n\sum_{j=1}^m \left[ \log\left( P(X^i(T/2)=k^i_1,X^i(T)=k^i_2|Z^j_{0:T}) \right) - \frac{1}{2}\int_0^T\tilde u^2(k,s;\beta)ds \right].$$ We integrate the SDEs using the Euler-Maruyama discretization noted in the previous section. The architecture of the neural networks is as follows: - $b(Z(t),t;\theta):R^2\rightarrow R$ is a feedforward neural network with 20 fully connected layers of size 10. The activation function is chosen as $\tanh$. The inputs are the time epoch and the current intensity. - $\tilde u(k,t;\beta):R^2\rightarrow R$ is also a feedforward neural network with 20 fully connected layers of size 10. The activation function is chosen as $\tanh$. The inputs are the time epoch and the state at time $T$. Notice that, unlike [@li;ScalableNeuralSDE], we do not require any specific architecture for the neural networks. One can always tune the hyperparameters to find an equally good or better architecture. We assume that the true latent intensity process is a standard CIR process: $$\label{eqn:theoretical_model} dZ(t)=0.3 (80-Z(t))dt+\eta\sqrt{Z(t)}dW(t),$$ where $Z(0)=5$ and $\eta\in [0,1]$ is the ‘noise magnitude’ of the model. Observe that if $\eta = 0$, the intensity process is the solution of an ordinary differential equation, and the arrival process is an NHPP. We set the simulation horizon to be $T=4$, and uniformly partition the interval $[0,T]$ into the grid $\mathcal{P}=\lbrace t_1,t_2,\ldots,t_M \rbrace$ with $t_{k+1}-t_k=1/15$, $t_1=0$ and $t_M=4$. The training data consists of $n=200$ sample paths of the DSPP generated using the theoretical model . This data is further divided into ‘mini-batches’ of size 10 and then fed into the Adam solver [@kingma2014adam].
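For example, the Poisson term inside the SAA objective can be evaluated from a discretized path by approximating the two integrated intensities; a small Python sketch (the trapezoidal rule and helper names are ours, not from the paper):

```python
import numpy as np
from math import lgamma, log

def _trapz(y, x):
    # trapezoidal rule for the integrated intensity
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def count_loglik(path_t, path_z, k1, k2):
    """log P(X(T/2) = k1, X(T) = k2 | Z_{0:T}): conditional on the path, the
    two increments are independent Poisson with means m1 and m2 below."""
    mid = len(path_t) // 2                      # assumes path_t[mid] == T/2
    m1 = _trapz(path_z[:mid + 1], path_t[:mid + 1])
    m2 = _trapz(path_z[mid:], path_t[mid:])
    logpmf = lambda k, m: k * log(m) - m - lgamma(k + 1)
    return logpmf(k1, m1) + logpmf(k2 - k1, m2)
```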
We run the code for 35 epochs (350 gradient updates in total). The learning rates for $b(Z(t),t;\theta)$ and $\tilde u(k,t;\beta)$ are both set to 0.01. We compare our method against the piece-wise linear maximum likelihood estimate (MLE) of the intensity assuming the traffic model is an NHPP, developed in [@Zeyu:pl]. This estimator is quite robust when the objective is to predict a mean performance metric. While it can be very inaccurate in predicting higher moments (a consequence of computing the MLE under the assumption of no correlation structure in the count process), its relative simplicity, together with the fact that mean performance metrics are frequently the focus of performance analysis, makes it a useful reference. As noted before, our experiment uses arrival counts at times $T/2$ and $T$, and therefore it suffices to consider a two-piece linear estimator obtained by maximizing the likelihood function $$\mathcal{L}_n (\hat Z(t))=\frac{1}{n} \sum_{i=1}^n \left( X^i(T/2)\log\int_0^\frac{T}{2} \hat Z(t) dt+(X^i(T)-X^i(T/2))\log\int_\frac{T}{2}^T \hat Z(t) dt \right)-\int_0^T \hat Z(t) dt,$$ which follows from display (2) in [@Zeyu:pl]. Estimating the Intensity Process {#subsec:DSPP_arrival} -------------------------------- Our first experiment focuses on the estimation of the ‘true’ latent intensity model  when $\eta=1$. Figure \[fig:DSPP\](a) shows the results of an in-sample estimation of the average intensity (computed using 200 training samples of the ‘true’ model). Observe that both our method (‘predicted’) and the piece-wise linear model estimate the mean intensity process quite accurately. Figure \[fig:DSPP\](b) shows that the estimated mean integrated intensity, too, is almost identical to that of the ‘true’ model under both models. This is unsurprising: recall that the Poisson count distribution in the ELBO  is a function of the integrated intensity, and this plays a crucial role in constraining the estimation problem.
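As a simplified illustration of this baseline, the piecewise-constant analogue has a closed-form MLE (the paper's estimator is piecewise linear; this sketch only conveys the idea):

```python
import numpy as np

def two_piece_constant_mle(counts_half, counts_full, T=4.0):
    """Closed-form MLE for a piecewise-constant intensity on [0, T/2] and
    (T/2, T]: maximizing sum_i [k1_i log m1 - m1 + (k2_i - k1_i) log m2 - m2]
    gives m_hat = average count per piece; dividing by the piece length
    yields the constant rate on each piece."""
    k1 = np.asarray(counts_half, dtype=float)   # counts at T/2
    k2 = np.asarray(counts_full, dtype=float)   # counts at T
    m1, m2 = k1.mean(), (k2 - k1).mean()        # integrated intensities
    half = T / 2.0
    return m1 / half, m2 / half
```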
Performance Prediction in an Infinite Server Queue {#subsc:InfiniteServerQueue} --------------------------------------------------- In the second experiment, we focus on an out-of-sample performance prediction task for an infinite server queue. Specifically, we conduct ‘run-through’ experiments where traffic generated from a DSPP using the estimated intensity processes is used as an input to a simulation of an infinite server queue. We start with a $G_t/M/\infty$ queue, where traffic is generated using the theoretical model , and the deep latent and piece-wise linear models estimated in the previous section. We generate 500 sample paths of the number of occupied servers over $[0,T]$ with service rate $\mu=2$. Observe from Table \[table:GMqueue\] that the estimated DLM gives a reasonable inference of both the mean and variance of the number of occupied servers at $\frac{T}{2}$ and $T$. Note that its variance estimate is roughly in line with the variance of the ‘true’ model as estimated from the test dataset. On the other hand, the piece-wise linear model underestimates the variance quite significantly. Next, we repeat the previous experiment on a $G_t/G/\infty$ system, with Erlang distributed service times, parameterized by $\lambda=6$ and $k=3$ (implying the mean service time remains $\frac{1}{2}$). The simulation is summarized in Table \[table:GEkqueue\]. Again, the DLM makes acceptable predictions. While the DLM tends to predict a higher variance, and its estimates tend to have wider confidence intervals, we conjecture that the accuracy of the predictions can be improved with a more appropriate choice of neural network size and more Monte Carlo samples in the SAA approximation to the ELBO (recall we have used $m=5$ throughout). Impact of the Noise Factor {#subsec:smalleta} -------------------------- The previous experiments demonstrate that the DLM is robust on both mean and variance prediction tasks.
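The run-through logic can be sketched as follows (illustrative Python, with an assumed intensity bound `lam_max` used for Poisson thinning; the helper names are ours):

```python
import numpy as np

def occupied_at(arrivals, services, t):
    """Busy servers at time t in an infinite-server queue: customers who
    have arrived by t and whose service has not yet finished."""
    a, s = np.asarray(arrivals), np.asarray(services)
    return int(np.sum((a <= t) & (a + s > t)))

def run_through(intensity, T=4.0, mu=2.0, n_paths=500, lam_max=100.0, seed=0):
    """Feed the traffic into a G_t/M/infinity simulation and record the mean
    occupancy at T/2 and T. Arrivals are drawn by thinning a rate-lam_max
    homogeneous Poisson process; `intensity` maps an array of times to
    intensity values (for a DSPP, it would be resampled per path)."""
    rng = np.random.default_rng(seed)
    half, full = [], []
    for _ in range(n_paths):
        n = rng.poisson(lam_max * T)
        cand = np.sort(rng.uniform(0.0, T, n))
        keep = rng.uniform(0.0, 1.0, n) < intensity(cand) / lam_max
        arr = cand[keep]
        svc = rng.exponential(1.0 / mu, arr.size)
        half.append(occupied_at(arr, svc, T / 2.0))
        full.append(occupied_at(arr, svc, T))
    return float(np.mean(half)), float(np.mean(full))
```

For a constant intensity 80 and $\mu=2$, starting empty, theory gives $E[N(t)] = 40(1-e^{-2t})$, which such a simulation reproduces to within Monte Carlo error.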
To further explore this, in this experiment we demonstrate how the DLM predictions change when the ‘noise factor’ $\eta$ in  increases from 0 to 1; here, $\eta = 0$ (formally) corresponds to a deterministic intensity and $\eta > 0$ to increasing levels of stochasticity in the intensity model. We conducted the same ‘run-through’ experiment from the previous section on a $G_t/M/\infty$ queue, albeit with different estimated traffic models under the different $\eta$ factors. Even the PL model continues to predict the mean number of occupied servers accurately as $\eta$ increases. While this might appear surprising, recall that the mean number of occupied servers under the ‘annealed’ measure (i.e., averaged over the stochastic intensity) of an infinite server queue with DSPP traffic depends only on the mean intensity function. The PL estimate, even though it is based on the ‘quenched’ (i.e., conditioned on the intensity) measure, accurately estimates the mean intensity when averaged over the individual sample paths. For larger $\eta$, we observe that the DLM makes reasonable predictions on the mean number of occupied servers. On the other hand, Figure \[fig:Variance\] shows that as $\eta$ increases the DLM significantly outperforms the piece-wise linear model in predicting the variance of the number of occupied servers. This is due to the fact that the DLM estimates the annealed measure of the traffic model, while the PL model only estimates the quenched measure. Estimating a Nonhomogeneous Poisson Intensity --------------------------------------------- We demonstrate the robustness of our method on estimating the intensity of an NHPP, with deterministic intensity. Consider an intensity function that is the solution of the ordinary differential equation (ODE) $\dot{Z}(t) = a(b-Z(t))$ with $a = 0.3$ and $b=80$. Let $d$ be the number of time intervals (or ‘pieces’) in the regressors, representing the number of degrees of freedom.
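For reference, this mean-reverting ODE (the $\eta=0$ skeleton of the CIR model above) has the closed form $Z(t) = b + (Z(0)-b)e^{-at}$; a quick numerical sketch confirming it:

```python
import numpy as np

def ode_intensity(t, a=0.3, b=80.0, z0=5.0):
    """Closed-form solution of dZ/dt = a*(b - Z)."""
    return b + (z0 - b) * np.exp(-a * t)

def euler_ode(a=0.3, b=80.0, z0=5.0, T=4.0, n=10000):
    """Forward-Euler integration of the same ODE; converges to the
    closed form as the step size shrinks."""
    z, dt = z0, T / n
    for _ in range(n):
        z += a * (b - z) * dt
    return z
```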
We compare our method, using intensity process $dZ(t) = a(b-Z(t)) dt + d^{-1/2}\sqrt{Z(t)}dW(t)$, with the piecewise linear estimator [@Zeyu:pl] and the nonparametric ‘Gaussianization machine’ method from [@cai2019gaussianization] (‘GRP’ in the table below). GRP uses a variance stabilizing transformation of the Poisson counts and Gaussian process regression on the transformed variables. Table \[table:discretization\] shows that our method is significantly better than GRP, even with 50 degrees of freedom. Estimating the Diffusion Coefficient ------------------------------------ We presented our method under the assumption that the diffusion coefficient is known. However, [@zhang2014scaling] argue that the model in  is appropriate for modeling the stochastic arrival intensity in a range of service systems. Estimating this model necessitates consideration of the situation where both the diffusion and drift functions are unknown. In this section we present numerical results showing that our method can work even in this situation. We assume that $\theta=80$, $\eta=1$ and $\alpha=\frac{1}{4}$ in the theoretical/true model. Table \[table:Diffusion\_Drift\] below summarizes the results of the experiment. We model the diffusion function by another neural network $\sigma(Z(t),t;\hat\theta)$ with the same structure as $b(Z(t),t;\theta)$. Notice that the only difference in the training framework is that, following the definitions in Section \[sec:VAE\], we must use $\sigma(Z(t),t;\hat\theta)u_k(Z(t),t;\theta,\phi)=\bar{b}(Z(t),t;\phi)-b(Z(t),t;\theta)$ to define $u_k(Z(t),t;\theta,\phi)$. Conclusions and Commentary ========================== This paper presents a versatile computational method for estimating the stationary, ergodic stochastic intensity of a DSPP. We demonstrate our method by in-sample estimation of the intensity and out-of-sample run-through simulation experiments, both of which demonstrate the accuracy of our method.
We believe that the method presented in this paper demonstrates how machine learning can help enhance simulation and modeling, in the spirit of the observation made by Peter W. Glynn in his [*Titans of Simulation*]{} keynote lecture at the Winter Simulation Conference in 2019 [@PGTalk]. In future work we intend to extend our method to jump Markov intensities and self-exciting traffic models (such as the Hawkes process). In on-going work we are developing large sample statistical analyses of DLMs on general measure spaces (including asymptotic consistency and central limit theorems), which will be presented in future papers. AUTHOR BIOGRAPHIES {#author-biographies .unnumbered} ================== [**RUIXIN WANG**]{} is a Ph.D. student in the School of Industrial Engineering at Purdue University, specializing in Operations Research. His research interests lie in approximate dynamic programming, electricity market modeling and stochastic simulation. His email address is [[email protected]]([email protected]).\ [**PRATEEK JAISWAL**]{} is a Ph.D. candidate in the School of Industrial Engineering at Purdue University. His research interests are in machine learning and stochastic optimization. His e-mail address is [[email protected]]([email protected]).\ [**HARSHA HONNAPPA**]{} is an assistant professor in the School of Industrial Engineering at Purdue University. His research interests are in applied probability, game theory, and machine learning. He is a member of INFORMS, IEEE, and SIAM, and serves as an associate editor for Operations Research and Operations Research Letters. His email address is [[email protected]]([email protected]).\
--- abstract: 'Algorithmic computation in polynomial rings is a classical topic in mathematics. However, little attention has been given to the case of rings with an infinite number of variables until recently, when theoretical efforts have made possible the development of effective routines. The ability to compute relies on finite generation up to symmetry for ideals invariant under a large group or monoid action, such as the permutations of the natural numbers. We summarize the current state of theory and applications for equivariant Gröbner bases, develop several algorithms to compute them, showcase our software implementation, and close with several open problems and computational challenges.' author: - 'Christopher J. Hillar' - 'Robert Krone' - 'Anton Leykin[^1]' bibliography: - 'egb.bib' title: 'Equivariant Gröbner bases[^2]' --- Introduction ============ History ------- Applications ------------ Goals and structure ------------------- Since their introduction, Gröbner basis techniques have improved immensely. We believe that in this new setting of infinite-dimensional polynomial algebras, which demands far more computational power, algorithmic development is at the beginning of a similar road, with similar advances ahead. Our aims here are to outline the current state of effective computation in this setting and to provide a background for researchers to start tackling problems in this exciting domain. After some preliminaries in Section \[prelim\], we quickly move on to describing equivariant Gröbner bases algorithms in Section \[EGB\]. Section \[sec:signature\] goes on to explain a modern signature-based approach and a strategy inspired by it for an equivariant Buchberger’s algorithm. The final Section \[sec:challenges\] outlines computational and theoretical challenges for future exploration.
Preliminaries {#prelim} ============= Equivariant Buchberger algorithm {#EGB} ================================ A signature-based approach {#sec:signature} ========================== Open questions and challenges {#sec:challenges} ============================= [^1]: Research of AL is supported in part by NSF grant DMS-1151297. [^2]: Part of this work took place during the “Free Resolutions, Representations, and Asymptotic Algebra” workshop at the Banff International Research Station, April 2016.
--- abstract: 'Avionics networks rely on a set of stringent reliability and safety requirements. In existing deployments, most of these networks are based on a wired technology, which supports these requirements. Furthermore, this technology simplifies the security management of the network since certain assumptions can be safely made, including the inability of an attacker to access the network, and the fact that it is almost impossible for an attacker to introduce a node into the network. The proposal for Avionics Wireless Networks (AWNs), currently under consideration by multiple aerospace working groups, promises a reduction in the complexity of electrical wiring harness design and fabrication, a reduction in the total weight of wires, increased customization possibilities, and the capacity to monitor otherwise inaccessible moving or rotating aircraft parts such as landing gear and some sections of the aircraft engines. While providing these benefits, the AWN must ensure that it provides levels of safety that are at minimum equivalent to those offered by the wired equivalent. In this paper, we propose a secure and trusted channel protocol that satisfies the stated security and operational requirements for an AWN protocol. There are three main objectives for this protocol. First, the protocol has to provide the assurance that all communicating entities can trust each other, and can trust their internal (secure) software and hardware states. Second, the protocol has to establish a fair key exchange between all communicating entities so as to provide a secure channel. Finally, the third objective is efficiency, both at the initial start-up of the network and when resuming a session after a cold and/or warm restart of a node. The proposed protocol is implemented within a demo AWN, and performance measurements are presented based on this implementation. In addition, we formally verify our proposed protocol using CasperFDR.'
author: - title: 'An Efficient, Secure and Trusted Channel Protocol for Avionics Wireless Networks' --- Introduction ============ A modern aircraft can be considered as a highly reliable and mission-critical digital network in the air. The Aircraft Data Network (ADN) interconnects different aircraft sub-systems, including flight control, the crew network and the passenger entertainment network. In recent years investigations into the feasibility of moving some non-critical networks from wired technology to wireless-based technology have been carried out. Such a network is referred to as an Avionics Wireless Network (AWN), which is the main focus of this paper. Whatever the network deployment topology and the communication technology that are used, one element is common: the physical wire that connects two or more avionics sub-systems. Wiring an aircraft can be costly in that it includes wiring harness designs, cable fabrication and the associated cost of additional weight. Furthermore, to provide dual redundancy, these wires have to connect any two devices by means of two physically separate paths in the aircraft. Wires and related connectors potentially represent 2-5 percent of an aircraft’s weight [@ITU2010]. As the wiring of an aircraft is a time- and labor-intensive activity, post-deployment upgrades or installation of new wire routes or new avionics sub-systems may be costly [@Dang2012]. As reported by [@ITU2010], roughly 30 percent of wires are potential candidates for wireless substitutes. Therefore, as highlighted in [@RNAkram2015], wireless solutions have more than reasonable prospects as long as security, safety and high reliability can be maintained. Whether an ADN or an AWN is used, the main objective is to communicate data between aircraft sub-systems in a secure, reliable and efficient manner. 
Going wireless brings its own set of unique challenges, among which a major one is to ensure the confidentiality and integrity of communications; any attacker within wireless range of the AWN can easily eavesdrop and/or (potentially) modify the exchanged information. To protect against such an attack, we require a strong, efficient and trustworthy mechanism to establish secure links between the communicating nodes in an AWN. Secure channel protocols can be used for this purpose, and in this paper we propose such a protocol for AWN environments. In this paper, we do not discuss wireless jamming attacks. Although they are a valid threat, they do not directly target the confidentiality and integrity of the communication channel; a wireless jamming attack is a threat to channel availability. For this reason, such attacks are beyond the scope of this paper. Contribution ------------ In this paper, our main goals are to propose a secure and trusted channel protocol for AWNs, and to compare its security and performance with several other existing protocols. The salient contributions of this paper are as follows: 1. proposing a Secure and Trusted Channel Protocol (STCP) that, along with establishing a secure channel between the communicating entities (end-points), also provides security assurance that each end-point is secure and trusted; 2. defining comparison criteria for secure channel protocols, along with the related security and performance analysis; 3. validating the proposed protocol with a formal tool, CasperFDR, and producing an implementation in a real AWN to enable measurements to be obtained. Structure of the Paper ---------------------- Section \[sec:Related\_Work\] briefly presents the rationale behind this paper and the existing work on AWNs in the avionics industry and on secure channel protocols from a traditional computer security perspective.
In section \[sec:Trusting\_a\_Device\], we look into how a Trusted Platform Module (TPM) can provide a trusted boot that is then used to assure communication partners that the device is secure and trustworthy. Section \[sec:Secure\_and\_Trusted\_Channel\_Protocol\] discusses the security comparison criteria and then the proposed protocol. In section \[sec:Protocol\_Evaluation\], we first analyze the proposed protocol informally, then formally using CasperFDR, and we compare it with different protocols based on the security comparison criteria previously defined. Finally, in section \[sec:Conclusion\] we present future research directions and conclude the paper. Rationale and Related Work {#sec:Related_Work} ========================== In this section, we discuss the rationale behind the proposed protocol and review the existing work in two different areas: AWNs and Secure Channel Protocols (SCPs). Rationale {#sec:Rationale} --------- A Secure Channel Protocol (SCP) by definition provides either or both of entity authentication and key exchange between communicating parties (end points). An SCP preserves the confidentiality and integrity of the messages on the considered channel but not at the end points. Nevertheless, there can be implicit assurance in the integrity and security of the end points, as described by ETSI TS 102 412 [@ETSITS102412] in the domain of the smart card industry. This document states that the smart card is a secure end point under the assumption that it is a tamper-resistant device. This type of assurance can be extrapolated to other devices that are implicitly trusted because of offline business relationships or because of a property of the device itself. However, for a critical system like avionics, it is not just implicit trust that should be required but also explicit trust validation, to counter any potential threat. The explicit trust assurance should be provided by the (aircraft) device that is participating in the AWN communication.
This would build in an assurance that only secure and trusted devices (explicitly trusted devices with per-protocol-run assurance) will participate in the AWN, potentially countering physically altered devices and/or re-introduction of a decommissioned device, as discussed in [@RNAkram2015; @RNAkram2016a]. In contrast, in the ADN, the assumption of implicit assurance might be valid. However, for a robust security and reliability mechanism, an explicit security assurance mechanism should be considered. A trusted channel is a secure channel that is cryptographically bound to the current state of the communicating parties [@Gasmi2007]. This state can be a hardware and/or a software configuration, and ideally it requires a trustworthy component to validate that it is as claimed. Such a component, in most instances, is a TPM [@TPMSpec2011], as demonstrated in [@10.1109/CMC.2010.232; @Armknecht2008; @Akram2012b]. In an AWN, individual devices will have prior relationships with each other: in the avionics industry, any system deployment is stringently controlled, regulated and protected. Therefore, assuming that a single trusted entity deploys the AWN environment is in line with the avionics industry’s practice. However, when establishing a secure channel, individual devices should still ensure that they are not only communicating with an authenticated device but also that the current state of this device is secure. Related Work on AWN Security Concerns {#sec:Related Work on Security Concerns} ------------------------------------- Security and trust have been subject to some analysis by both the academic community and the industry. A brief overview of aircraft information security and some improvements were proposed in [@Olive2006]. Security assurance research from airplane production to airplane operation was presented in [@Lintelman2006; @ladstaetter2011security].
A general discussion of the security issues related to the aircraft network and aircraft connectivity with the Internet is provided in [@thanthry2006security], while [@Thanthry2004; @Robinson2007a] discuss the impact of WSNs (Wireless Sensor Networks) and related security concerns in aircraft. Security and safety are intrinsically linked to each other, in general and specifically in the context of the aviation industry [@brostoff2001safe; @Pfitzmann2004; @Paulitsch2012]. The application and impact of cryptography, especially public key cryptography for avionics networks, was evaluated in [@Robinson2007]. The management of security and the general deployment of AWNs based on wireless-as-a-comm-link have been analyzed in [@RNAkram2015], which discusses the security and trust challenges faced by AWNs. In addition, a crucial component that supports aircraft devices’ security is the trusted boot process discussed in [@RNAkram2016a]. The security, trust and assurance issues related to bringing a user device into an aircraft network are evaluated in [@RNAkram2016b]. Related Work on Secure Channel Protocols {#sec:RelatedWorkonSecureChannelProtocols} ---------------------------------------- In this section, we restrict the discussion to protocols that are proposed for general-purpose computing environments or that are used as points of comparison in the discussions to come. The concept of trusted channel protocols was proposed by Gasmi et al. [@Gasmi2007], along with an adaptation of the TLS protocol [@SSLTSLRFC2008]. Later, Armknecht et al. [@Armknecht2008] proposed another adaptation of OpenSSL to accommodate the concept of trusted channels; similarly, Zhou and Zhang [@10.1109/CMC.2010.232] also proposed an SSL-based trusted channel protocol. In section \[sec:Revisiting\_the\_Requirements\_and\_Goals\], we will compare the proposed STCP with the existing protocols.
These protocols include the Station-to-Station (STS) protocol [@DiffiAAAKE92_209], the Aziz-Diffie (AD) protocol [@Whitfield94privacyand], the ASPeCT protocol [@Horn1998], Just-Fast-Keying (JFK) [@Aiello:2004:JFK:996943.996946], trusted TLS (T2LS) [@Gasmi2007], GlobalPlatform SCP81 [@GPCSPE011], the Markantonakis-Mayes (MM) protocol [@kostas2004], and the Sirett-Mayes (SM) protocol [@Sirett2006]. This selection of protocols is intentionally broad so as to include well-established protocols like STS, AD and JFK. We also include the ASPeCT protocol, which is designed specifically for mobile networks’ value-added services. Like our proposal, which requires trust assurance during the protocol run, T2LS provides trust assurance, whereas protocols like SCP81, SM and MM are specific to smart cards and representative of embedded low-power devices. In addition, we have included the secure and trusted channel protocol P-STCP [@Akram2012b], which is designed for resource-restricted and security-sensitive environments and has design requirements similar to those of the proposed protocol. Trusting a Device (Trusted Boot) {#sec:Trusting_a_Device} ================================ In this section, we discuss how a TPM provides a secure boot process and how it provides assurance to external entities that the device is secure and trustworthy. Trusted Platform Module {#sec:Trusted_Platform_Module} ----------------------- The TPM is a trusted, reliable and tamper-resistant component that can provide trustworthy evidence of the state of a given system on which it is present. The interpretation of this evidence is neither controlled nor dictated by the TPM but by the entity receiving and thus assessing it. Trust in this context can be defined as an expectation that the state of a system is as it is supposed to be, i.e. secure.
Therefore, in a very simplistic sense, a TPM is a trustworthy reporting agent (witness), not an evaluator or an enforcer of security policies. In the field of trusted computing, this is referred to as providing a root of trust on which an inquisitor relies to validate the current state of a system. For an in-depth discussion of the architecture of TPMs and their functionality, please refer to [@TPMSpec2011]. In this paper, we focus on the secure boot process as it is carried out by the TPM, as discussed in the subsequent section. Secure Boot (TPM Integrity Measurement Operation) ------------------------------------------------- When a device with a TPM boots up, the first component to power up is the system BIOS (Basic Input/Output System). On a trusted platform (a platform that contains a TPM), the boot sequence is initiated by the Core BIOS (*i.e.* the CRTM: Core Root of Trust for Measurement), which first measures its own integrity. This measurement is stored in PCR$_0$[^1] and it is later extended to include the integrity measurement of the rest of the BIOS. The Core BIOS then measures the circuit-board’s (motherboard) configuration setting[^2], and this value is stored in PCR$_1$. After these measurements, the Core BIOS loads the rest of the code of the BIOS. ![Trusted Platform Boot Sequence (figure from [@akram2014introduction])[]{data-label="fig:BootLoading"}](BootLoading.pdf){width="0.75\columnwidth"} The BIOS will subsequently measure the integrity of the ROM firmware and of the ROM firmware configuration, storing them in PCR$_2$ and PCR$_3$, respectively. At this stage, the base configuration of a device is established and the CRTM will proceed with integrity measurement and loading of the Operating System (OS). The CRTM measures the integrity of the “OS Loader Code”, also termed the Initial Program Loader (IPL), and stores the measurement in the relevant PCR. The designated PCR index is left to the discretion of the OS developers.
Subsequently, the device will execute the “OS Loader Code” and, if successful, the TPM will measure the integrity of the “OS Code”. After this measurement is made and stored, the “OS Code” executes. Finally, any relevant software that initiates its execution will first be subjected to an integrity measurement, the resulting value will be stored in a PCR, and then the software will be allowed to execute. This process is shown in Figure \[fig:BootLoading\], which illustrates the execution flow and the storage of the integrity measurements. By creating a chain of integrity measurements, a TPM provides a trusted and reliable view of the current state of the system. Any piece of software, whether part of the OS or an application, has an integrity measurement stored in a PCR at a particular index. As discussed above, a TPM does not make any decisions: it only measures, stores, and reports integrity measurements in a secure and reliable manner. When a TPM reports an integrity measurement, it is recommended that it generates a signature on the value, thus avoiding replay and man-in-the-middle attacks [@TPMSpec2011]. The process by which an inquisitor can request a device attestation and how a TPM provides this evidence is discussed in the next section. ### Reporting and Attestation Operations {#sec:Attestation .unnumbered} The attestation process, whether initiated locally or remotely by the relevant external entity (including human users or other devices), involves the generation by the TPM of a signature, using an Attestation Identity Key (AIK), over the (associated/requested) PCR values [@akram2014introduction]. The signature assures the requesting entity of the validity of the integrity measurements stored in the PCRs. The choice of the AIK and PCR index is dependent on the device, OS or application developer. The signature key and PCR values are stored in tamper-resistant memory inside the TPM.
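The extend operation underlying this measurement chain can be sketched in a few lines (illustrative Python; a real TPM 1.2 performs the SHA-1 extend inside tamper-resistant hardware):

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM 1.2-style extend: PCR_new = SHA-1(PCR_old || measurement).
    A PCR can only be extended, never overwritten, so its final value
    commits to the entire ordered chain of measurements."""
    return hashlib.sha1(pcr + measurement).digest()

def measure_chain(components):
    """Fold a boot sequence (list of byte strings) into one PCR value:
    hash each component, then extend the PCR with the digest."""
    pcr = b"\x00" * 20  # PCRs reset to zero at platform reset
    for blob in components:
        pcr = pcr_extend(pcr, hashlib.sha1(blob).digest())
    return pcr
```

Reordering the components changes the final PCR value, which is why the boot order itself, and not just the set of loaded components, is attested.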
Therefore, an attacker would have to circumvent the tamper-resistant property of the TPM to impact the outcome of this attestation process. Secure and Trusted Channel Protocol {#sec:Secure_and_Trusted_Channel_Protocol} =================================== In this section, we begin the discussion with the security comparison criteria, followed by the protocol notation, pre-setup and then the actual protocol proposal. This section concludes with a discussion of how the secure channel is re-established if one of the devices is restarted or resets the protocol. Security Comparison Criteria {#sec:Comparison_Criteria} ---------------------------- For a protocol to support the AWN framework, it should meet, at minimum, the security and operational requirements listed below: 1. [**[*Mutual Entity Authentication*]{}:**]{} All nodes in the network should be able to authenticate to each other to avoid masquerading by a malicious entity. 2. [**Asymmetric Architecture:**]{} Exchange of certified public keys between the entities to facilitate the key generation and entity authentication process. 3. [**Mutual Key Agreement:**]{} Communicating parties will agree on the generation of a key during the protocol run. 4. [**Joint [*Key Control*]{}:**]{} Communicating parties will mutually control the generation of new keys to avoid one party choosing weak keys or predetermining any portion of the session key. 5. [**[*Key Freshness*]{}:**]{} The generated key will be fresh to the protocol session to protect against replay attacks. 6. [**Mutual [*Key Confirmation*]{}:**]{} Communicating parties will provide implicit or explicit confirmation that they have generated the same keys during a protocol run. 7. [**Known-Key Security:**]{} If a malicious user is able to obtain the session key of a particular protocol run, it should not enable him to retrieve long-term secrets ([*private keys*]{}) or [*session keys*]{} (future and past). 8. 
[**Unknown [*Key*]{} Share Resilience:**]{} In the event of an unknown key share attack, an entity $\mathcal{X}$ believes that it has shared a key with $\mathcal{Y}$, where the entity $\mathcal{Y}$ mistakenly believes that it has shared the key with entity $\mathcal{Z} \neq \mathcal{X}$. Proposed protocols should adequately protect against this attack. 9. [**[*Key*]{} Compromise Impersonation (KCI) Resilience:**]{} If a malicious user retrieves the long-term key of an entity $\mathcal{Y}$, it will enable him to impersonate $\mathcal{Y}$. Nevertheless, key compromise should not enable him to impersonate other entities to $\mathcal{Y}$ [@Blake-Wilson:1997:KAP:647993.742138]. 10. [**[*Perfect Forward Secrecy*]{}:**]{} If the long-term keys of communicating entities are compromised, this will not enable a malicious user to compromise previously generated session keys. 11. [**Mutual [*Non-Repudiation*]{}:**]{} Communicating entities will not be able to deny that they have executed a protocol run with each other. 12. [**Partial Chosen Key (PCK) Attack Resilience:**]{} Protocols that claim to provide joint key control are susceptible to this type of attack [@Mitchell1998]. In this type of attack, if two entities provide separate values to the key generation function then one entity has to communicate its contribution value to the other. The second entity can then compute the value of its contribution in such a way that it can dictate its strength (i.e. it is able to generate a partially weak key). However, this attack depends upon the computational capabilities of the second entity. Therefore, proposed protocols should adequately prevent PCK attack. 13. [**Trust Assurance (Trustworthiness):**]{} The communicating parties not only provide security and operation assurance but also validation proofs that are dynamically generated during the protocol execution. 14. 
[**Denial-of-Service (DoS) Prevention:**]{} The protocol should not require the individual nodes to allocate a large set of resources to the extent that it might contribute to a DoS attack. 15. [**Privacy:**]{} A third party should not be able to know the identities of the AWN nodes. For a formal definition of the terms (italicized) used in the above list, the reader is referred to [@Menezes1996]. The requirements listed above are later used as a point of reference to compare the selected protocols in Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\]. For the performance evaluation that we have conducted, the main measurements are related to the time required to establish a secure channel once the wireless link is established; they are discussed in section \[sec:Practical\_Implementation\].

Protocol Notation {#sec:Protocol_Notation}
-----------------

The notations used in the protocol description are listed in Table \[tab:NotationTable\]:

  -------------------------------------  ---------------------------------------------------------------
  $AD1$                                : Denotes an aircraft device ’1’.
  $AD2$                                : Denotes an aircraft device ’2’.
  $A\rightarrow B$                     : Message sent by an entity A to an entity B.
  $TPM_X$                              : Denotes the TPM of an entity $X$.
  $X_{i}$                              : Represents the identity of an entity $X$.
  $g^{r_X}$                            : Diffie-Hellman exponential generated by an entity $X$.
  $N_{X}$                              : Random number generated by an entity $X$.
  $X\|Y$                               : Represents the concatenation of the data items X, Y in the given order.
  $\left[ M \right] ^{K_{e}}_{K_{a}}$  : Message $M$ is encrypted by the session encryption key $K_{e}$, after which a MAC is computed using the session MAC key $K_{a}$. Both keys $K_{e}$ and $K_{a}$ are generated during the protocol run.
  $Sign_{X}(Z)$                        : Signature generated on data Z by the entity $X$ using a signature algorithm [@Furlani2009].
  $H(Z)$                               : The result of generating a hash of data Z.
  $H_{k}(Z)$                           : The result of generating a keyed hash of data Z using key $k$.
  $S_{Cookie}$                         : Session cookie generated by one of the communicating entities. It indicates the session information and facilitates protection against DoS attacks, along with (possibly) providing the protocol session resumption facility.
  $VR_{A-B}$                           : Validation request sent by entity A to entity B. In response, entity B provides a security and reliability assurance to entity A.
  $SAS_{A-B}$                          : Security assurance (PCR values) generated by entity A that provides trust validation to the requesting entity B.
  -------------------------------------  ---------------------------------------------------------------

  : Notation used in protocol description.[]{data-label="tab:NotationTable"}

Pre-Protocol Setup {#sec:Protocol_Requirements}
------------------

The proposed protocol requires certain pre-protocol setup operations, as listed below:

1. Each aircraft device that is part of the AWN has a TPM.

2. Each device in the AWN is pre-configured with the signature verification keys of its communication partners (*i.e.* the public keys of the other aircraft devices).

3. Each device is also pre-configured with the signature verification keys of the TPMs of its communication partners (*i.e.* the public key corresponding to the AIK key used to sign the PCR values stored in the TPM), along with their own trusted and secure PCR values (*i.e.* the values for their trusted and secure state).

Proposed Protocol {#sec:Proposed_Protocl}
-----------------

The messages of the protocol are listed in Table \[tab:STCP\] and described below.
#### **Message 1** {#sec:Message1}

The AD1 generates a random number $N_{AD1}$ and computes the Diffie-Hellman exponential $g^{r_{AD1}}$. The “$ H(g^{r_{AD1}}\| N_{AD1}\| AD1_{i} \| AD2_{i})$” serves as a session cookie “$S_{Cookie}$”, and it is appended to each subsequent message sent by both devices. It indicates the session information, facilitates protection against DoS attacks and (possibly) provides the protocol session resumption facility, which is required if a protocol run is interrupted before it successfully concludes. Finally, AD1 will request AD2 to provide assurance of its current state.

#### **Message 2** {#sec:Message2}

In response, AD2 generates a random number and a Diffie-Hellman exponential $g^{r_{AD2}}$. It can then calculate $k_{DH}=(g^{r_{AD1}})^{r_{AD2}} \ (mod\ n)$, which will be the shared secret from which the rest of the keys will be generated. The encryption key is generated as $K_{e} = H_{k_{DH}}(N_{AD1}\|N_{AD2}\|''1'')$ and a MAC key as $K_{a}= H_{k_{DH}}(N_{AD1} \|N_{AD2}\|''2'')$. We can further generate (session) keys in a similar manner for data stream-specific virtual links[^3] (VLs) for managing the communication between different aircraft sub-systems. Subsequently, the TPM generates a state validation message signed by the TPM AIK key, represented in the protocol as “$Sign_{TPM_{AD2}}(AD2-Validation)$”. AD2 will also request AD1 to provide assurance of its current state. On receipt of this message, AD1 will first generate the session keys. AD1 will then verify AD2’s signature and the validation proof generated by the TPM of AD2. As the signature key belongs to the TPM of AD2, an attacker cannot forge this signature. By verifying the signature, AD1 can ascertain that the current state (PCR value) was measured by the TPM of AD2. AD1 can then verify whether or not the PCR value represents a trusted and secure state: as part of the protocol pre-setup, AD1 already holds the PCR value of a trusted and secure state of AD2.
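The computations of messages 1 and 2 can be sketched in Python as follows. The group parameters are deliberately toy-sized and the helper names (`hkey`, `s_cookie`) are ours; the actual implementation uses a 2048-bit MODP group, as noted in the practical implementation section.

```python
import hashlib, hmac, secrets

# Toy Diffie-Hellman parameters for illustration only; the real
# implementation would use a 2048-bit MODP group.
g, n = 5, 2**127 - 1  # hypothetical generator and prime modulus

def hkey(k: bytes, *parts: bytes) -> bytes:
    # H_k(...): keyed hash over the concatenated parts.
    return hmac.new(k, b"".join(parts), hashlib.sha256).digest()

# Message 1: AD1 picks r_AD1, N_AD1 and forms the session cookie
# S_Cookie = H(g^r_AD1 || N_AD1 || AD1_i || AD2_i).
r_ad1, n_ad1 = secrets.randbelow(n), secrets.token_bytes(16)
g_r_ad1 = pow(g, r_ad1, n)
s_cookie = hashlib.sha256(
    str(g_r_ad1).encode() + n_ad1 + b"AD1" + b"AD2").digest()

# Message 2: AD2 picks r_AD2, N_AD2 and derives the shared keys.
r_ad2, n_ad2 = secrets.randbelow(n), secrets.token_bytes(16)
g_r_ad2 = pow(g, r_ad2, n)
k_dh = str(pow(g_r_ad1, r_ad2, n)).encode()  # (g^r_AD1)^r_AD2 mod n
k_e = hkey(k_dh, n_ad1, n_ad2, b"1")         # session encryption key K_e
k_a = hkey(k_dh, n_ad1, n_ad2, b"2")         # session MAC key K_a

# AD1 reaches the same keys from its own exponent:
k_dh_ad1 = str(pow(g_r_ad2, r_ad1, n)).encode()
assert hkey(k_dh_ad1, n_ad1, n_ad2, b"1") == k_e
```

Because both nonces and both exponentials enter the derivation, each party contributes equally to the session keys (joint key control), and fresh values per run give key freshness.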
Furthermore, AD1 will check the values of the Diffie-Hellman exponentials (*i.e.* $g^{{r}_{AD1}}$ and $g^{{r}_{AD2}}$) and of the generated random numbers to avoid man-in-the-middle and replay attacks.

#### **Message 3** {#sec:Message3}

AD1 will then generate a message similar to message 2, containing a signature by AD1 and a trust validation proof generated by its TPM. On receipt of the message, AD2 will verify the trust validation proof and generate the keys. Furthermore, AD2 will also check the values of the Diffie-Hellman exponentials and of the generated random numbers to avoid man-in-the-middle and replay attacks.

Post-Protocol Process {#sec:Post_Protocl_Process}
---------------------

The shared material generated from the Diffie-Hellman exponential can be used to generate more keys than just the session encryption and MAC keys of the protocol. If this is not desirable, then the session encryption and MAC keys can be saved as master session keys. Individual VL keys can then be generated from these session keys. Based on the security policies related to the VLs (whether they require only confidentiality, only integrity, or both), these two master session keys can be used to generate VL-specific encryption and MAC keys.

Protocol Resumption {#sec:Protocol_Resumption}
-------------------

As discussed in [@RNAkram2015], secure channel protocols only run when an aircraft is stationary on the ground, with proofs that the aircraft is not in flight based on geo-location, proximity to an airport, weight on wheels, etc. The proposed protocol would run before each flight, and the master session keys are only valid for a single flight. The protocol should not be executed during the flight. Therefore, if a device has to reset due to some unforeseeable situation, a safety procedure to resume the secure channel and all of the associated VL keys - without running the protocol - must exist.
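A deterministic derivation of VL-specific keys from the master session keys might look as follows; the labelling scheme is our illustrative assumption, since the choice of algorithm is left to the implementer.

```python
import hashlib, hmac

def derive_vl_keys(master_ke: bytes, master_ka: bytes, vl_id: str):
    """Derive VL-specific encryption/MAC keys from the master session
    keys. The labels below are illustrative; the proposal only requires
    a standard, deterministic algorithm shared by both devices."""
    vl = vl_id.encode()
    ke = hmac.new(master_ke, b"VL-enc|" + vl, hashlib.sha256).digest()
    ka = hmac.new(master_ka, b"VL-mac|" + vl, hashlib.sha256).digest()
    return ke, ka

# Example with placeholder master keys and a hypothetical VL identifier:
ke1, ka1 = derive_vl_keys(b"\x01" * 32, b"\x02" * 32, "VL-7")
ke2, ka2 = derive_vl_keys(b"\x01" * 32, b"\x02" * 32, "VL-7")
```

Because the derivation is deterministic, both endpoints (and a device that reloads its master keys from persistent storage after a reset) regenerate identical VL keys without re-running the protocol.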
For this purpose, each individual device will save the master session keys in its persistent storage and will have a standard algorithm to generate the keys for each of the VLs. If the master session keys are lost, then, during that particular flight, the device would be out of communication. To avoid this, the master session keys should be stored in two different memories (each aircraft device has at least two separate storage media, so as to provide this dual storage redundancy).

Protocol Evaluation {#sec:Protocol_Evaluation}
===================

In this section, we first discuss the information analysis of the protocols, and then compare different protocols with our proposal based on the comparison criteria defined above. Finally, we provide some implementation results and a formal analysis using CasperFDR.

Brief Information Analysis {#sec:Brief_Information_Analysis}
--------------------------

Throughout this section, we refer to the protocol comparison criteria of section \[sec:Comparison\_Criteria\] by their respective numbers, as listed in that section. During the proposed protocol, in messages 2 and 3, the communicating entities authenticate each other, which satisfies G1. Similarly, for G2, all communicating entities have exchanged cryptographic certificates to facilitate an authentication and trust validation proof (generated and signed by the TPM) before the aircraft devices are deployed (pre-deployment configuration). The proposed protocol satisfies requirements G3, G4, G5 and G12 by first requiring AD1 and AD2 to generate the Diffie-Hellman exponential; thus the computational cost is equal on both sides. Similarly, exponential generation also assures that both devices have equal input to the key generation. Messages 2 and 3 are encrypted using the keys generated during the protocol, thus providing mutual key confirmation (satisfying G6).
In the proposed protocol, session keys generated in one session have no link with the session keys generated in other sessions, even when the session is established between the same devices. This gives the protocol known-key security (G7). The unlinkability of session keys is based on the fact that each entity generates not only a new Diffie-Hellman exponential but also a random number, both of which are used during the protocol for key generation. Therefore, even if an adversary “$\mathcal{A}$” finds out the exponential and random numbers of a particular session, this will not enable him to generate past or future session keys. Furthermore, to provide unknown key share resilience (G8), the proposed protocol includes the Diffie-Hellman exponentials along with the generated random numbers, and each communicating entity then signs them. The receiving entity can therefore ascertain the identity of the entity with which it has shared the key. The protocol can be considered a KCI-resilient (G9) protocol, as protection against KCI is based on the digital signatures. In addition, the cryptographic certificate of each signature key also includes its association with a particular device. Therefore, if $\mathcal{A}$ has knowledge of the signature key of a device, he can only masquerade as this particular device to other devices, but not as other devices to it.
  ----------------------------------------------------------------------------------------------
  Goal   STS   AD    ASPeCT   JFK   T2LS   SCP81   MM    SM    TKDF   P-STCP   SSH   SSL   Proposed
  G1.    *     *     *        *     *      *       -*    -*    *      *        (*)   *     *
  G2.    *     *     *        *     *      *       *     -*    *      *        *     *     *
  G3.    *     *     *        *     *      *       *     -*    -*     *        *     *     *
  G4.    *     *     *        *     (*)    *                   -*     *        (*)   (*)   *
  G5.    *     *     *        *     *      *       *     -*    *      *        *     *     *
  G6.    *           *        *                    *     -*    *      *        *     *     *
  G7.    *     *     *        *     *      *       *           *      *        *     *     *
  G8.    *     *     *        *     *      *       *     -*    -*     *        *     *     *
  G9.    *     *     *        *     *      *       *     *     *      *        *     *     *
  G10.   *           *        *     *      *                   *      *        *     *     *
  G11.   *                    *     *      *       *     *     *      *        *     *     *
  G12.   (*)   (*)   (*)      (*)   (*)    (*)                 *      *        *     *     *
  G13.               (*)      (*)   *      -*                         *        (*)   (*)   *
  G14.                        *     (*)                               *        (*)   (*)   *
  G15.   (*)         *        *     (*)                        (*)    *        (*)   (*)   *
  ----------------------------------------------------------------------------------------------

  : Comparison of the selected protocols against the stated goals (TKDF: Asymmetric TKDF).[]{data-label="tab:ProtocolComparisonOnTheBasiesOfStatedGoals"}

[**Note:** ]{}[$*$ means that the protocol meets the stated goal, $(*)$ shows that the protocol can be modified to satisfy the requirement, and $-*$ means that the protocol (implicitly) meets the requirement not because of the protocol messages but because of the prior relationship between the communicating entities.]{}

The proposed protocol also meets the requirement for perfect forward secrecy (G10) by making the key generation process independent of any long-term keys.
The session keys are generated using fresh values of the Diffie-Hellman exponentials and random numbers, independently of the long-term keys (which are signature keys). Therefore, even if $\mathcal{A}$ eventually finds out the signature key of an entity, this will not enable him to determine past session keys. This independence of the long-term secrets from the session key generation process also enables the protocol to satisfy G7. Communicating entities in the STCP share signed messages with each other that include the session information, thus providing mutual non-repudiation (G11). G14 is ensured by the inclusion in the protocol of the session cookie, which provides a limited protection against DoS, and by the fact that individual devices have pre-configurations of their communication partners, which enables them to drop a connection if an entity trying to connect with them is not able to authenticate. To satisfy G15, the device identities are random strings that have no link with the function of the device. This hinders an attacker who eavesdrops on a protocol run from determining which aircraft device is communicating on the wireless channel. Finally, the TPMs on all communicating devices provide trust validation proofs in the form of PCR values signed by the TPM AIK. This provides mutual validation of the trust between communicating devices, confirming that the other device is functioning in a secure and reliable state (G13).

Revisiting the Requirements and Goals {#sec:Revisiting_the_Requirements_and_Goals}
-------------------------------------

Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\] provides a comparison of the protocols listed in section \[sec:RelatedWorkonSecureChannelProtocols\] with the proposed protocol, in terms of the required goals (see section \[sec:Comparison\_Criteria\]). As shown in Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\], the STS protocol meets the first eleven goals.
The main issue with the STS protocol is that it does not provide adequate protection against partial chosen key attacks (G12) or privacy protection (G15). The remaining goals are not met by the STS because of its design architecture and deployment environment, which did not require these goals. Similarly, the AD protocol does not meet G6, G10 and G13-G15. The ASPeCT and JFK protocols meet a large set of goals. Both of these protocols can easily be modified to provide trust assurance (requiring additional signatures). Both of these protocols are vulnerable to partial chosen key attacks. However, in Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\] we allow for the possibility that the ASPeCT and JFK protocols can be modified to meet this goal, because in an AWN all communicating devices may be of the same computational power and have a strong offline pre-deployment relationship. The T2LS protocol meets the trust assurance goal by default. However, for the remaining goals it has the same results as the SSL protocol. A point in favour of the SCP81, MM, and SM protocols is that they were designed for the smart card industry, where there is a strong and centralised organisational model. Most of these protocols, to some extent, have a similar architecture, in which a server generates the key and then communicates that key to the client. There is no non-repudiation, as they do not use signatures in the protocol run. Both SSH and SSL meet a large set of requirements and also have the potential to be extended to the additional requirements. However, to provide a flexible, backward-compatible and universally acceptable architecture, these protocols have too many optional parameters. Such flexibility is one of the main causes of most of the issues that these protocols have been plagued with in the last couple of years, Heartbleed being the most infamous vulnerability. Asymmetric TKDF (Trusted Key Distribution Framework) does not satisfy a number of requirements.
In contrast, P-STCP satisfies most of the requirements listed in the table. The only difference between P-STCP and the proposed protocol (apart from the message structure) is the number of rounds needed to successfully complete a protocol run: P-STCP has four messages (a 2-round protocol), while the proposed protocol uses 3 messages (a 1.5-round protocol). As is apparent from Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\], the proposed protocol satisfies all the goals that were described in section \[sec:Comparison\_Criteria\].

Practical Implementation {#sec:Practical_Implementation}
------------------------

In our AWN test-bed, each node is a Raspberry Pi model B supplied with a Wi-Fi USB dongle TL-WN722N by TP-LINK. In all the measurements we made, the nodes were configured in ad-hoc mode. For all the selected protocols, in our evaluation implementations, we set up two neighboring nodes to establish a secure channel. This provides a performance measurement of the protocols between individual communicating pairs. However, for the TKDF, a key distribution server is also required, and a third node in the ad-hoc network plays this role.

![AWN test-bed[]{data-label="fig:AWN test-bed"}](AWN-BenchmarkPlatform.pdf){width="0.90\columnwidth"}

In our AWN test-bed, each node is connected to a backend server by means of an Ethernet connection. This server controls the nodes so as to prepare them for the target scenario and is also in charge of collecting the measurements. Effective measurement can be done internally on the node initiating the secure channel, called a client, and/or it can be done at the level of the network data exchanged between the nodes of the AWN, captured with a Wi-Fi card on the backend server set in monitor mode. The performance comparison is provided in Table \[tab:Performance Measures\], which compares a subset of the protocols from Table \[tab:ProtocolComparisonOnTheBasiesOfStatedGoals\] with the proposed protocol's performance in the developed test-bed environment.
In our Python implementation of the proposed protocol, the TPM was emulated by the Raspberry Pi. The key sizes used for our proposed protocol were a 2048-bit MODP group for the Diffie-Hellman key generation, 2048 bits for RSA, and 256 bits for symmetric encryption and MAC computation (AES). The P-STCP protocol was implemented with smaller key sizes in [@Akram2012b], resulting in a performance measurement of 2998.71ms. Using the key sizes from [@Akram2012b] in our implementation results in a performance of 1201.50ms for the proposed protocol.

Protocol Verification by CasperFDR {#sec:CasperFDRProVerif}
----------------------------------

We selected the CasperFDR approach for the formal analysis of the proposed protocol. The Casper compiler [@CasperFDR1998] takes as input a high-level description of the protocol, together with its security requirements and the definition of an attacker and its capabilities. The compiler then translates the description into the process algebra of Communicating Sequential Processes (CSP) [@CSPBook1978]. The CSP description of the protocol can be machine-verified using the Failures-Divergence Refinement (FDR) model checker [@CSP_Approach2000]. The intruder capabilities modelled in the Casper script (appendix \[app:CasperFDR Script\]) for the proposed protocol are:

- an intruder can masquerade as any entity in the network,

- an intruder can read the messages transmitted in the network, and

- an intruder cannot influence the internal process of an entity in the network.

The security specification for which CasperFDR evaluates the network is shown below.
The listed specifications are defined in the \#Specification section of appendix \[app:CasperFDR Script\]:

- the protocol run is fresh and both applications are alive,

- the key generated by the entity A is known only to the entity B (A and B are communication partners/devices),

- entities mutually authenticate each other and have mutual key assurance at the conclusion of the protocol,

- long-term keys of communicating entities are not compromised, and

- an intruder is unable to deduce the identities from observing the protocol messages.

The CasperFDR tool evaluated the protocol and did not find any feasible attack(s). The script is provided in appendix \[app:CasperFDR Script\].

Conclusion and Future Research Directions {#sec:Conclusion}
=========================================

In this paper, we outlined the concept of the AWN and discussed why such a proposal requires a secure channel for communication. The data communicated over an AWN has a strong requirement for confidentiality and integrity. To satisfy this requirement, the communicating devices should share cryptographic secrets that provide confidentiality and integrity. To generate these cryptographic secrets, the devices run a secure channel protocol. In this paper, we proposed a secure channel protocol that not only provides mutual authentication and key sharing between the communicating entities, but also provides assurance that each of the devices is in a secure and trusted state. We compared our proposed protocol with a list of selected protocols, and experimental performance results were provided. Finally, we evaluated the protocol using CasperFDR, showing that it is secure against a number of attacks. In future work, we will explore the major issue of detecting and neutralising wireless jamming and DoS attackers, along with building a strong mitigation framework.
In addition to the trusted boot, for robust and reliable security we need to look into secure execution on AWN nodes, especially investigating the inclusion of ARM TrustZone and Intel SGX technologies.

Acknowledgments
===============

The authors from Royal Holloway University of London acknowledge the support of the UK’s innovation agency, InnovateUK, and the contributions of the SHAWN project partners. The authors from XLIM acknowledge the support of:

- the SFD (Security of Fleets of Drones) project funded by Région Limousin;

- the TRUSTED (TRUSted TEstbed for Drones) project funded by the CNRS INS2I institute through the call 2016 PEPS (“Projet Exploratoire Premier Soutien”) SISC (“Sécurité Informatique et des Systèmes Cyberphysiques”);

- the SUITED (Suited secUrIty TEstbed for Drones) and UNITED (United NetworkIng TEstbed for Drones) projects funded by the MIRES (Mathématiques et leurs Interactions, Images et information numérique, Réseaux et Sécurité) CNRS research federation.

The authors from LaBRI acknowledge the support of:

- the TRUSTED (TRUSted TEstbed for Drones) project funded by the CNRS INS2I institute through the call 2016 PEPS (“Projet Exploratoire Premier Soutien”) SISC (“Sécurité Informatique et des Systèmes Cyberphysiques”);

- the SUITED-BX and UNITED-BX projects funded by LaBRI and its MUSe team.

Disclaimer {#disclaimer .unnumbered}
==========

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the position of the SHAWN project or any of the organisations associated with this project.

“Technical characteristics and operational objectives for wireless avionics intra-communications (WAIC),” ITU-R: Radiocommunication Sector of ITU, Tech. Rep. ITU-R M.2197, November 2010. \[Online\]. Available: <http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2197-2010-PDF-E.pdf> D.-K. Dang, A. Mifdaoui, and T.
Gayraud, “Fly-by-wireless for next generation aircraft: Challenges and potential solutions,” in *Wireless Days (WD), 2012 IFIP*, Nov 2012, pp. 1–8. R. N. Akram, K. Markantonakis, S. Kariyawasam, S. Ayub, A. Seeam, and R. Atkinson, “Challenges of security and trust in avionics wireless networks,” in *2015 IEEE/AIAA 34th Digital Avionics Systems Conference (DASC)*, Sept 2015, pp. 4B1–1–4B1–12. “[Smart Cards; Smart Card Platform Requirements Stage 1 (Release 9)]{},” ETSI, France, Tech. Rep. ETSI TS 102 412 (V9.1.0), June 2009. K. Markantonakis and R. N. Akram, “A secure and trusted boot process for avionics wireless networks,” in *2016 Integrated Communications Navigation and Surveillance (ICNS)*, April 2016, pp. 1C3–1–1C3–9. Y. Gasmi, A.-R. Sadeghi, P. Stewin, M. Unger, and N. Asokan, “[Beyond Secure Channels]{},” in *STC ’07: Proceedings of the 2007 ACM workshop on Scalable trusted computing*. New York, NY, USA: ACM, 2007, pp. 30–40. “Trusted platform module main specification,” Trusted Computing Group, Tech. Rep., 2011. L. Zhou and Z. Zhang, “[Trusted Channels with Password-Based Authentication and TPM-Based Attestation]{},” *International Conference on Communications and Mobile Computing*, pp. 223–227, 2010. F. Armknecht, Y. Gasmi, A.-R. Sadeghi, P. Stewin, M. Unger, G. Ramunno, and D. Vernizzi, “[An efficient implementation of trusted channels based on openssl]{},” in *Proceedings of the 3rd ACM workshop on Scalable trusted computing*, ser. STC ’08. New York, NY, USA: ACM, 2008, pp. 41–50. R. N. Akram, K. Markantonakis, and K. Mayes, “[A Privacy Preserving Application Acquisition Protocol]{},” in *[11th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom-12)]{}*, F. G. M. [Geyong Min]{}, Ed. Liverpool, United Kingdom: IEEE Computer Society, June 2012. M. Olive, R. Oishi, and S.
Arentz, “Commercial aircraft information security: an overview of ARINC report 811,” in *25th Digital Avionics Systems Conference, 2006 IEEE/AIAA*, Oct 2006, pp. 1–12. S. Lintelman, R. Robinson, M. Li, D. v. Oheimb, R. Poovendran, and K. Sampigethaya, “Security assurance for IT infrastructure supporting airplane production, maintenance, and operation,” in *Proc. U.S. National Workshop on Aviation Software Systems: Design for Certifiably Dependable Systems (NITRD HCSS-AS), 4-5 Oct 2006, Alexandria, VA*, J. Sprinkle, Ed., 2006, available from <http://ddvo.net/papers/HCSS-AS.html>. G. Ladstaetter, N. Reichert, and T. Obert, “IT security management of aircraft in operation: A manufacturer’s view,” SAE Technical Paper, Tech. Rep., 2011. N. Thanthry, M. S. Ali, and R. Pendse, “Security, internet connectivity and aircraft data networks,” *Aerospace and Electronic Systems Magazine, IEEE*, vol. 21, no. 11, pp. 3–7, 2006. N. Thanthry and R. Pendse, “Aviation data networks: security issues and network architecture,” in *Security Technology, 2004. 38th Annual 2004 International Carnahan Conference on*, Oct 2004, pp. 77–81. R. V. Robinson, K. Sampigethaya, M. Li, S. Lintelman, R. Poovendran, and D. von Oheimb, “Secure network-enabled commercial airplane operations: IT support infrastructure challenges,” in *Proceedings of the First CEAS European Air and Space Conference Century Perspectives (CEAS)*, 2007. S. Brostoff and M. A. Sasse, “Safe and sound: a safety-critical approach to security,” in *Proceedings of the 2001 workshop on New security paradigms*. ACM, 2001, pp. 41–50. A. Pfitzmann, “Why safety and security should and will merge,” in *SAFECOMP*, ser. Lecture Notes in Computer Science, M. Heisel, P. Liggesmeyer, and S. Wittmann, Eds., vol. 3219. Springer, 2004, pp. 1–2. \[Online\]. Available: <http://dblp.uni-trier.de/db/conf/safecomp/safecomp2004.html#Pfitzmann04> M. Paulitsch, R. Reiger, L. Strigini, and R. E.
Bloomfield, “Evidence-based security in aerospace: From safety to security and back again,” in *ISSRE Workshops*. IEEE, 2012, pp. 21–22. \[Online\]. Available: <http://dblp.uni-trier.de/db/conf/issre/issre2012w.html#PaulitschRSB12> R. Robinson, M. Li, S. Lintelman, K. Sampigethaya, R. Poovendran, D. von Oheimb, and J.-U. Bußer, “Impact of public key enabled applications on the operation and maintenance of commercial airplanes,” in *Proc. of the 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO)*. AIAA, 2007, <http://ddvo.net/papers/AIAA_ATIO.html>. R. N. Akram and K. Markantonakis, “Challenges of security and trust of mobile devices as digital avionics component,” in *2016 Integrated Communications Navigation and Surveillance (ICNS)*, April 2016, pp. 1C4–1–1C4–11. T. Dierks and E. Rescorla, “[RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2]{},” Tech. Rep., August 2008. W. Diffie, P. C. van Oorschot, and M. J. Wiener, “[Authentication and Authenticated Key Exchanges]{},” *Designs, Codes and Cryptography*, vol. 2, no. 2, pp. 107–125, 1992. A. Aziz and W. Diffie, “[Privacy And Authentication For Wireless Local Area Networks]{},” *IEEE Personal Communications*, vol. 1, pp. 25–31, First Quarter 1994. G. Horn and B. Preneel, “[Authentication and payment in future mobile systems]{},” in *Computer Security - ESORICS 98*, ser. Lecture Notes in Computer Science, J.-J. Quisquater, Y. Deswarte, C. Meadows, and D. Gollmann, Eds. Springer Berlin / Heidelberg, 1998, vol. 1485, pp. 277–293, 10.1007/BFb0055870. W. Aiello, S. M. Bellovin, M. Blaze, R. Canetti, J. Ioannidis, A. D. Keromytis, and O. Reingold, “[Just fast keying: Key agreement in a hostile internet]{},” *ACM Trans. Inf. Syst. Secur.*, vol. 7, pp. 242–273, May 2004. **, Online, GlobalPlatform Specification, September 2006. K. Markantonakis and K.
Mayes, “[A Secure Channel Protocol for Multi-application Smart Cards based on Public Key Cryptography]{},” in *[CMS 2004 - Eight IFIP TC-6-11 Conference on Communications and Multimedia Security]{}*, D. Chadwick and B. Prennel, Eds. Springer, September 2004, pp. 79–96. W. G. Sirett, J. A. MacDonald, K. Mayes, and C. Markantonakis, “[Design, Installation and Execution of a Security Agent for Mobile Stations]{},” in *Smart Card Research and Advanced Applications, 7th IFIP WG 8.8/11.2 International Conference, CARDIS*, ser. LNCS, J. Domingo-Ferrer, J. Posegga, and D. Schreckling, Eds., vol. 3928. Tarragona, Spain: Springer, April 2006, pp. 1–15. R. N. Akram, K. Markantonakis, and K. Mayes, “[An Introduction to the Trusted Platform Module and Mobile Trusted Module]{},” in *Secure Smart Embedded Devices, Platforms and Applications*. Springer New York, 2014, pp. 71–93. S. Blake-Wilson, D. Johnson, and A. Menezes, “[Key Agreement Protocols and Their Security Analysis]{},” in *Proceedings of the 6th IMA International Conference on Cryptography and Coding*. London, UK: Springer-Verlag, 1997, pp. 30–45. \[Online\]. Available: <http://portal.acm.org/citation.cfm?id=647993.742138> C. Mitchell, M. Ward, and P. Wilson, “[Key Control in Key Agreement Protocols]{},” *Electronics Letters*, vol. 34, no. 10, pp. 980–981, May 1998. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, *[Handbook of Applied Cryptography]{}*. CRC, October 1996. **, Online, [National Institute of Standards and Technology (NIST)]{} Std., June 2009. G. Lowe, “[Casper: a compiler for the analysis of security protocols]{},” *J. Comput. Secur.*, vol. 6, pp. 53–84, January 1998. \[Online\]. Available: <http://dl.acm.org/citation.cfm?id=353677.353680> C. A. R. Hoare, *[Communicating sequential processes]{}*. New York, NY, USA: ACM, 1978, vol. 21, no. 8. P.
CasperFDR Script {#app:CasperFDR Script}
================

    #Free variables
    datatype Field = Gen | Exp(Field, Num) unwinding 2
    hkAD2, hkAD1, iMsg, rMsg, EnMaKey : Field
    AD1, AD2, U : Agent
    gAD1, gAD2 : Num
    nAD1, nAD2, AD1Val, AD2Val : Nonce
    VKey : Agent -> PublicKey
    SKey : Agent -> SecretKey
    InverseKeys = (VKey, SKey), (EnMaKey, EnMaKey), (Gen, Gen), (Exp, Exp)

    #Protocol description
    0.    -> AD2 : AD1
          [AD1 != AD2]
          <iMsg := Exp(Gen, gAD2)>
    1. AD2 -> AD1 : AD2, nAD2, iMsg
    2. AD1 -> AD2 : nAD1, rMsg
    3. AD2 -> AD1 : nAD2, nAD1
    4. AD1 -> AD2 : {{rMsg, U, nAD2}{SKey(U)}}{EnMaKey}
          [rMsg == hkAD2]
    5. AD2 -> AD1 : {{iMsg, AD2, nAD1}{SKey(AD2)}}{EnMaKey}
          [iMsg == hkAD1]
    6. AD1 -> AD2 : {{AD1OSHash, AD1, nAD2}{SKey(AD1)}}{EnMaKey}

    #Actual variables
    ADev1, ADev2, ME : Agent
    GAD1, GAD2, GMalicious : Num
    NAD1, NAD2, AD1VAL, AD2VAL, NMalicious : Nonce

    #Processes
    INITIATOR(AD2, AD1, U, AD2Val, gAD2, nAD2) knows SKey(AD2), VKey
    RESPONDER(AD1, AD2, U, AD1Val, gAD1, nAD1) knows SKey(U), SKey(AD1), VKey

    #System
    INITIATOR(ADev2, ADev1, ADev2Val, GAD2, NAD2)
    RESPONDER(ADev1, ADev2, ADev1Val, GAD1, NAD1)

    #Functions
    symbolic VKey, SKey

    #Intruder Information
    Intruder = ME
    IntruderKnowledge = {ADev1, ADev2, ME, GMalicious, NMalicious, SKey(ME), VKey}

    #Specification
    Aliveness(AD2, AD1)
    Aliveness(AD1, AD2)
    Agreement(AD2, AD1, [EnMaKey])
    Secret(AD2, EnMaKey, [AD1])
    Secret(AD1, U, [AD2])

    #Equivalences
    forall x, y : Num .
    Exp(Exp(Gen, x), y) = Exp(Exp(Gen, y), x)

[^1]: A Platform Configuration Register (PCR) is a 160-bit (20-byte) data element that stores the result of an integrity measurement, i.e. a generated hash of a given component (*e.g.* the BIOS, the operating system, or an application). A group of PCRs forms the integrity matrix. A PCR value is extended as $PCR_{i} = Hash({PCR^{'}}_{i} || X)$, where $i$ is the PCR index, ${PCR^{'}}_{i}$ represents the old value stored at index $i$, and $X$ is the sequence to be included in the PCR value. "$||$" indicates the concatenation of two data elements in the given order. The starting value of all PCRs is zero.

[^2]: To verify that the correct hardware configuration was present at boot time.

[^3]: Virtual Links (VLs): each communication relationship in an aircraft network is represented as a VL. In our proposal we assume that a pair of communicating parties would have two uni-directional VLs, and each VL would have its own session key.
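The PCR extend operation described in the first footnote can be sketched in a few lines (Python assumed; SHA-1 matches the 160-bit hash size the footnote mentions):

```python
# Minimal sketch of the TPM PCR extend operation: PCR_i = Hash(PCR'_i || X),
# assuming SHA-1 as the 160-bit hash.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # hash the old PCR value concatenated with the new measurement
    return hashlib.sha1(pcr + measurement).digest()

pcr = b'\x00' * 20                    # all PCRs start at zero
pcr = pcr_extend(pcr, b'bios-image')  # extend with a measured component
assert len(pcr) == 20                 # still a 160-bit value

# extending is order-sensitive: a different measurement order yields a
# different final PCR value, which is what makes the integrity chain useful
a = pcr_extend(pcr_extend(b'\x00' * 20, b'A'), b'B')
b = pcr_extend(pcr_extend(b'\x00' * 20, b'B'), b'A')
assert a != b
```

The order sensitivity is the point of the construction: a verifier replaying the expected measurement log must reproduce the exact final PCR value.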
--- abstract: 'We analyse new contributions to the theoretical input in heavy quark sum rules and we show that the general theory of singularities of perturbation theory amplitudes provides the method to handle their specific features. In particular, we study the inclusion of heavy quark radiation by light quarks at ${\cal O}(\alpha_s^2)$ and of non–symmetric correlators at ${{\cal O} (\alpha_{s}^3)}$. Closely related, we also propose a solution to the construction of moments of the spectral densities at ${{\cal O} (\alpha_{s}^3)}$, where the presence of massless contributions invalidates the standard approach. We circumvent this problem through a new definition of the moments, providing an infrared-safe and consistent procedure.' --- IFIC/02$-$07\ FTUV/02$-$0211\ [**New contributions to heavy quark sum rules** ]{}\ [ J. Portolés]{}  and [ P. D. Ruiz-Femenía]{}\ [*Departament de Física Teòrica, IFIC, Universitat de València - CSIC\ Apt. Correus 22085, E-46071 València, Spain* ]{}\ PACS : 11.55.Hx, 11.55.Fv, 13.65.+i, 12.38.Bx\ Keywords : Heavy quark sum rules, singularities of perturbative amplitudes. Introduction ============ Sum rule analyses have extensively exploited the relation between the correlator of the quark electromagnetic currents and the cross section of $e^+e^- \to hadrons$, under the assumption of quark-hadron duality, to extract fundamental information on hadron systems.
The two-point function containing the QCD dynamics of the produced quarks is built from the sum of the electromagnetic vector currents associated with each flavour: $$\begin{aligned} \Pi^{\mu \nu}_{had}(p) \; & = & \; i \, \int \, d^4x \, e^{ipx} \, \sum_{q,q'} \, e_q \, e_{q'} \, \langle \, 0 \, | \, T \, \left( \, \overline{q}(x) \, \gamma^{\mu} \, q(x) \, \right) \, ( \, \overline{q'}(0) \, \gamma^{\nu} \, q'(0) \,) \, | \, 0 \, \rangle \; \; \, \nonumber \\ & = & \; ( \, - \, g^{\mu \nu} \, p^2 \, + \, p^{\mu} \, p^{\nu} \, ) \, \Pi_{had}(p^2) \, \; , \label{eq:HadCorre}\end{aligned}$$ where $q$ and $q'$ stand for heavy or light quarks, indistinctly, with electric charges $e_q$ and $e_{q'}$. Here we find two types of correlators: the symmetric ones, with both electromagnetic currents corresponding to the same flavour, and the non-symmetric ones, where $q \neq q'$. Strictly, the latter are needed to fully describe the electromagnetic production of hadrons, even in the case where a definite flavour type of hadrons is isolated in the final state. Sum rule analyses applied to heavy quark production are written down in terms of the symmetric correlator built from the vector current $j^\mu_{Q} (x) = e_Q \, \overline{Q}(x)\, \gamma^\mu \, Q(x)$ of the heavy quark $Q$, and the effects of the non-symmetric correlators are never considered. The reason is that they begin to contribute beyond ${\cal{O}}(\alpha_s^2)$ in QCD perturbation theory (see Fig. \[fig:5gluon\](a)), which means one order beyond the current knowledge of the (symmetric) heavy quark correlator $\Pi_{Q \overline{Q}}$. The study of such new effects in $Q\overline{Q}$ production will become mandatory if ${\cal{O}}(\alpha_s^3)$ accuracy is reached in the future. However, already at ${\cal{O}}(\alpha_s^2)$ the production of heavy quarks $Q \overline{Q}$ receives contributions which have not been accounted for in the theoretical input of heavy quark sum rules.
These arise from heavy quark discontinuities of symmetric correlators built from quarks such that $m_q < m_Q$, as in the cut shown in Fig. \[fig:5gluon\](b), representing the production of heavy hadrons radiated off a pair of lighter quarks. Finally, Groote and Pivovarov have recently pointed out [@pg1; @pg2] that, at ${{\cal O} (\alpha_{s}^3)}$, a three–gluon intermediate state (see Fig. \[fig:3gluon\]) contributes to the $\Pi_{ Q \overline{Q}}$ correlator. As these authors have shown, this massless intermediate state invalidates the usual definition of the moments ${\cal M}_{n}$, $${\cal M}_n \; = \; {\frac{\displaystyle 1}{\displaystyle n!}} \, \left( \, {\frac{\displaystyle d}{\displaystyle d p^2}} \, \right)^n \, \Pi_{Q \overline{Q}}(p^2) \, \bigg|_{p^2=0} \; \; ,$$ for $n \ge 4$, since these become singular. Consequently, the use of heavy quark sum rules at ${{\cal O} (\alpha_{s}^3)}$ is debatable. All the features we have just described arise as a consequence of the interplay between the implementation of quark–hadron duality and the proper definition of the observables in the case of heavy quark QCD sum rules. The correlation between the perturbative input and the observable information on the experimental side requires a careful matching that cannot be fully achieved. Accordingly, the uncertainties introduced should be estimated and included in the errors of the parameters determined through this method. Here we discuss the aspects pointed out above and their consequences for the methodology of extracting information from QCD sum rules. The aim of this work is to provide a consistent procedure to implement the perturbative input in the theoretical side of heavy quark sum rules. Our proposal relies on a careful application of the general theory of singularities of perturbation theory.
The crucial point will be to isolate all the cuts related to $Q \overline{Q}$ production from the general vector two–point function (\[eq:HadCorre\]) in order to construct a modified correlator containing only contributions to heavy quark production. In Section 2 we recall the theory of singularities of perturbative amplitudes. The relation between the phenomenological and the theoretical input in the QCD sum rules is discussed in Section 3. Sections 4 and 5 then present the implementation of our proposal to include heavy quark radiation off light quarks and to exclude massless singularities, respectively. We will also comment on the uncertainties associated with our method. In Section 6 we present our conclusions. ![\[fig:5gluon\] *Examples of perturbative non–heavy quark current correlators at ${\cal O} (\alpha_s^3)$ (a) and ${\cal O} (\alpha_s^2)$ (b) that contribute to the production of $Q \overline{Q}$ states.*](figure1.ps){width="85.00000%"} Analyticity of $\Pi_{had}(s)$ ============================= As is well known, two–point functions are analytic except for singularities at simple poles or branch cuts, the latter originating from normal thresholds for the production of internal on–shell states. Assuming that the absorptive part of $\Pi_{had}(p^2)$ starts at some point $s_0$, vanishing below this point, the correlator satisfies the dispersion relation [@dera] [^1]: $$\widehat{\Pi}_{had}(p^2) \, \doteq \, \Pi_{had}(p^2) \, - \, \Pi_{had}(0) \, = \, \frac{p^2}{\pi}\int^{\infty}_{s_0} {\frac{\displaystyle ds}{\displaystyle s}}\,\, \frac{\mbox{Im}\,\Pi_{had}(s)}{s-p^2-i\epsilon} \; \; . \label{eq:disp-rel}$$ The absorptive part $\mbox{Im}\,\Pi_{had}(s)$ is a physical observable, as it is proportional to the total hadron production cross section by a vector current $J^{\mu}=\sum_q j_q^{\mu}$.
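The subtracted dispersion relation (\[eq:disp-rel\]) can be checked on a toy spectral density for which the correlator is known in closed form; a minimal sketch (Python with scipy assumed; the constant density $\mbox{Im}\,\Pi(s) = c\,\theta(s-s_0)$ is purely illustrative):

```python
# Toy check of the subtracted dispersion relation: assume the illustrative
# spectral density Im Pi(s) = c * theta(s - s0), whose subtracted correlator
# is known in closed form: Pi_hat(p2) = -(c/pi) * log(1 - p2/s0).
import numpy as np
from scipy.integrate import quad

c, s0 = 1.0, 1.0
p2 = -2.0  # spacelike point, safely away from the cut

# numerical dispersive integral vs. the closed form
num, _ = quad(lambda s: (p2 / np.pi) * c / (s * (s - p2)), s0, np.inf)
ana = -(c / np.pi) * np.log(1.0 - p2 / s0)
assert abs(num - ana) < 1e-8

# the moments then follow from the spectral density alone:
# M_n = (1/pi) int_{s0}^inf ds Im Pi(s) / s^(n+1) = c / (pi n s0^n)
n = 2
Mn, _ = quad(lambda s: c / (np.pi * s ** (n + 1)), s0, np.inf)
assert abs(Mn - c / (np.pi * n * s0 ** n)) < 1e-10
```

The same identification of moments with dispersive integrals over $\mbox{Im}\,\Pi(s)$ is what breaks down once a massless cut reaches $s=0$, as discussed in Section 5.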
Since QCD is the underlying theory of the strong interactions, the quark–hadron duality hypothesis allows us to identify, inclusively, the observable hadron states with the partonic intermediate states. Hence the optical theorem tells us that the total absorptive part is the sum of the absorptive parts corresponding to different intermediate partonic states: $$\mbox{Im} \, \Pi_{had} (s) \; = \; - \, {\frac{\displaystyle 1}{\displaystyle 6s}}\int \, \sum_n \,d R_{n} \, \langle \, 0 \, | \, J^{\mu} \, | \, n \, \rangle \, \langle \, n \, | \, J_{\mu}^{ \dagger} \, | \, 0 \, \rangle \; = \; \sum_n \, \mbox{Im} \, \Pi_n (s)\; \; , \label{eq:unitazi}$$ where the phase space integration has been explicitly stated [^2]. A similar separation between contributions of different final hadron states in the perturbative evaluation of the two-point correlator, Eq. (\[eq:HadCorre\]), would allow us to keep only the desired heavy quark cuts in the symmetric and non-symmetric correlators. Although Cutkosky rules provide a method to isolate cuts corresponding to different intermediate states at the perturbative level, some care is needed in their application. The study of analytic properties of perturbation theory amplitudes shows that their singularities are isolated and, therefore, we can discuss each singularity of a perturbative amplitude by itself [@oldies].
As a consequence, any one–variable dependent amplitude $F(z)$ satisfies a dispersion relation from Cauchy’s theorem given by [^3]: $$F(z) \, = \, {\frac{\displaystyle 1}{\displaystyle 2\pi i}} \oint dz^{\prime}\frac{F(z^{\prime})} {z^{\prime}-z} \,= \, \sum_n \int_{z_n}^{\infty} \, {\frac{\displaystyle dz^{\prime}}{\displaystyle 2\pi i}} \, {\frac{\displaystyle \left[ \, F(z^{\prime}) \, \right]_n}{\displaystyle z^{\prime} \, - z}} \; \; \; , \label{eq:Fz}$$ where $\left[ \, F(z) \, \right]_n$ is the discontinuity across a branch cut which starts at the point $z_n$ and is associated with a definite intermediate state. For the general two-point function in Eq. (\[eq:HadCorre\]), which depends on the total momentum squared $p^2$, we would have $$\widehat{\Pi}_{had} (p^2) \, = \, \sum_n \,{\frac{\displaystyle p^2}{\displaystyle 2 \pi i}} \, \int_{s_n}^{\infty} \, {\frac{\displaystyle ds}{\displaystyle s}} \, {\frac{\displaystyle \left[ \, \Pi(s) \, \right]_n}{\displaystyle s \, - \, p^2 \, - \, i\epsilon}} \; \; \; , \label{eq:piq2}$$ where now $\left[ \Pi(s) \right]_n$ provides the sum of all the cut diagrams associated with a definite intermediate state labeled $n$, ($n=q\bar{q},q^{\prime}\bar{q}^{\prime}, ggg,q\bar{q}q^{\prime}\bar{q}^{\prime},\dots$). In the perturbative calculation, every discontinuity contributing to $\left[ \Pi(s) \right]_n$ can be associated with a “reduced” Feynman diagram obtained by contracting internal off–shell propagators to a point and leaving internal on–shell lines untouched. Its contribution is written down following the Cutkosky rules for the graph. However, the discontinuity across a specified cut in a single diagram need not be a purely real function in the physical region. Hence the separation between the imaginary parts coming from different final states, as stated in Eq. (\[eq:unitazi\]), does not seem to apply to individual diagrams. But from Eqs.
(\[eq:unitazi\]) and (\[eq:piq2\]) we can conclude that $\left[ \, \Pi(s) \, \right]_n = \, 2 i \, \mbox{Im} \, \Pi_n (s)$, meaning that only the sum of all cuts corresponding to a definite intermediate state provides the physical observable, i.e. $\mbox{Im} \, \Pi_n (s)$. Evidently, this holds at any perturbative order in $\alpha_s$, and gives a prescription to isolate contributions to different quark intermediate states in the hadron two–point function. This assertion might seem obvious, but it is not: a $Q \overline{Q}$ cut on the right–hand fermion loop in Fig. \[fig:3gluon\](a) does not provide, by itself, a purely real contribution. Only when both $Q \overline{Q}$ cuts, on the left–hand and right–hand fermion loops of Fig. \[fig:3gluon\](a), are added do we get a term contributing to the physical observable $\mbox{Im} \, \Pi_{n= Q \overline{Q}}$. This last example also shows that some subsets of discontinuities of the same intermediate state already give real functions prior to the summation of all contributions at a fixed perturbative order. This is the case for the set of cuts coming from a symmetric correlator, and for the set arising from a non–symmetric correlator with currents $j_q^{\mu}, \, j_{q^{\prime}}^{\mu}$ together with its conjugate. This is easily seen if we rewrite the absorptive part corresponding to the state $n$, $\mbox{Im} \, \Pi_n$, as a sum of terms arising from symmetric and from non–symmetric correlators: $$\begin{aligned} \mbox{Im} \, \Pi_n (s) = - \, {\frac{\displaystyle 1}{\displaystyle 6s}}\int \, d R_{n}\!\!\!\!\! &\Bigg[ & \!\!\!\!\!\sum_q \langle \, 0 \, | \, j^{\mu}_q \, | \, n \, \rangle \, \langle \, n \, | \, j_{q,\mu}^{ \dagger} \, | \, 0 \, \rangle \nonumber\\[3mm] \!\!\!\!\!
& + & \!\!\!\!\!\sum_{m_q < m_{q^{\prime}}} \, \left( \, \langle \, 0 \, | \, j^{\mu}_q \, | \, n \, \rangle \, \langle \, n \, | \, j_{q^{\prime},\mu}^{ \dagger} \, | \, 0 \, \rangle \, + \, \langle \, 0 \, | \, j^{\mu}_{q^{\prime}} \, | \, n \, \rangle \, \langle \, n \, | \, j_{q,\mu}^{ \dagger} \, | \, 0 \, \rangle \, \right) \, \Bigg] . \label{eq:Pin}\end{aligned}$$ The first term on the r.h.s. of Eq. (\[eq:Pin\]) represents the absorptive contribution from symmetric correlators, and the perturbative expansion of each one, following Cutkosky rules, is clearly real. In the case of interest, $n\equiv [Q\overline{Q}]$ [^4], this term contains the usual heavy quark spectral density built from heavy quark currents, $\Pi_{Q \overline{Q}}$, and $[Q \overline{Q}]$ production through light quark current correlators. The second and third terms in Eq. (\[eq:Pin\]) are conjugate to each other, so their sum also gives a purely real result. In terms of diagrams, this means that to extract the desired absorptive part from non-symmetric correlators we need to add to the cut of a diagram the corresponding one in the conjugate diagram (see Fig. \[fig:5gluon\](a); the discontinuity obtained from the same diagram with quark $q$ and quark $Q$ lines interchanged should be added to get a real contribution). Phenomenology vs theoretical input in heavy quark sum rules =========================================================== The analysis above shows that a clear control can be enforced on the perturbative side of the sum rules in order to include or exclude specific contributions. However, while there is no doubt about the observable that provides $\mbox{Im} \, \Pi_{had} \, \propto \, \sigma ( e^+ e^- \rightarrow hadrons)$ when an exclusive hadron sector (like, for example, heavy quark production) is specified, it is clear that the matching between the perturbative and the phenomenological side involves uncertainties related to the content and definition of the final state.
Heavy quark sum rules [@rev] have been successful in providing information on the heavy quark parameters. In short, they make use of global quark–hadron duality, which translates into the ansatz on the vector correlator $\Pi_{[Q \overline{Q}]}(s)$: $$\label{eq:gqhd} \int_{s_0}^{\infty} \, ds \, {\frac{\displaystyle \mbox{Im} \, \Pi_{[Q \overline{Q}]}^{phen}(s)}{\displaystyle s^n}} \; \simeq \; \int_{4 M^2}^{\infty} \, ds \, {\frac{\displaystyle \mbox{Im} \, \Pi_{[Q \overline{Q}]}^{pert}(s)}{\displaystyle s^n}} \; + \; ... \, ,$$ where $\mbox{Im} \, \Pi_{[Q \overline{Q}]}^{phen}(s)$ on the l.h.s. gives the phenomenological information on heavy quark production and is related to the cross section of vector current production of hadrons containing Q–flavoured states. On the r.h.s. $\mbox{Im} \, \Pi_{[Q \overline{Q}]}^{pert}(s)$ is the QCD perturbative contribution to the correlator, and in the lower limit of integration $M$ is usually taken as the pole mass of the heavy quark. Finally, the dots on the r.h.s. are short for non–perturbative contributions (essentially the gluon condensate) and possible Coulomb-like bound states coming from non–relativistic resummations in $\Pi_{[Q \overline{Q}]}^{pert}$ below threshold. These last two features are not relevant for the discussion in this paper and can be implemented on top of our results without modification. To a definite perturbative order in $\alpha_s$, $\mbox{Im} \, \Pi_{[Q \overline{Q}]}^{pert}(s)$ includes all the absorptive contributions to the correlator that provide $[Q \overline{Q}]$ production. Notice that this is not the same as the absorptive $Q \overline{Q}$ contribution of the heavy–quark current correlator $\Pi_{Q \overline{Q}}$, as is usually assumed. The total experimental cross section $\sigma (e^+ e^- \rightarrow hadrons)$ can be split into two disjoint quantities: the cross section for producing hadrons with Q–flavoured states, and the production of hadrons with no Q–flavoured components.
If the experimental setup were accurate enough to classify events into one of these two clusters, the first class would be the required ingredient for the phenomenological part of the heavy quark sum rule. However, this separation, implemented on the theoretical side within perturbative QCD, is rather involved. Up to ${{\cal O} (\alpha_{s}^2)}$ there has not been any doubt, in the literature, that contributions to this side arise wholly from $Q \overline{Q}$ cuts in the heavy quark correlator $\Pi_{Q \overline{Q}}$. The physical picture behind this assertion relies on the assumption of factorization between hard and soft regions in the quark production process and subsequent hadronization. The hard region, described with perturbative QCD, entails the production of the pair of heavy quarks, and the soft part of the interaction is responsible for the observed final hadron content. Although possible, annihilation of the partonic state $Q \overline{Q}$ due to the latter interaction is very unlikely, as jets arising from the short distance interaction fly apart before long–distance effects become essential. Consequently, each jet hadronizes to a content of Q–flavoured states with unit probability. As local duality is implicitly invoked, this picture is assumed to hold at sufficiently high energies; hence perturbative corrections to the hard part are successively included through the heavy quark currents correlator. We claim, though, that similar $Q \overline{Q}$ cuts are present in non–symmetric correlators, starting at ${{\cal O} (\alpha_{s}^3)}$, as the one shown in Fig. \[fig:5gluon\](a), where the left hand part of the cut diagram is a genuine production of $Q \overline{Q}$ states triggered by virtual light quarks. If the use of heavy quark sum rules up to this order is considered, these terms, coming from the correlator of a heavy and a light quark current, should be taken into account.
According to our conclusion in the last Section, once the discontinuity provided by Fig. \[fig:5gluon\](a) is known, it has to be added to $\mbox{Im} \, \Pi_{[Q \overline{Q}]}^{pert}(s)$. Other extra $Q \overline{Q}$ cuts, i.e. cuts not contained in $\Pi_{Q \overline{Q}}$, arise even at ${{\cal O} (\alpha_{s}^2)}$, such as the diagram of Fig. \[fig:5gluon\](b). In this case the $Q \overline{Q}$ pair is produced through the splitting of a hard gluon radiated off a pair of light quarks. Whether this cut should be accounted for or not on the theoretical side depends crucially on the content and configuration of the reconstructed final state in the experimental data, as the physical picture outlined above for pure $Q \overline{Q}$ cuts does not apply so clearly to $Q \overline{Q} q \overline{q}$ discontinuities. We will come back to this point at the end of Section 4. In addition, other possible contributing cuts deserve discussion. The case of the three–gluon discontinuity is postponed to Section 5. In the following we will discuss, in turn, the inclusion of heavy quark radiation by light quarks and the infrared massless discontinuities noticed by Groote and Pivovarov. We will provide specific solutions along the lines put forward in Sections 2 and 3. Heavy quark radiation ===================== Starting at ${\cal O}(\alpha_s^2)$, symmetric correlators built from light quark currents include four-fermion cuts with a heavy quark pair radiated off the light quarks, as shown in Fig. \[fig:5gluon\](b) (two additional diagrams, one with the two gluons attached to the lower light fermion line, and the other with one gluon attached to each light fermion line, should be considered too). The sum of all these four-fermion absorptive parts in the three-loop diagrams with massless light quark currents has been calculated in Ref.
[@4fermion], and can be cast into the following form [^5] : $$12\pi \, \mbox{Im} \, \Pi_{q\overline{q}Q\overline{Q}} (s) \, = \, R_{q\overline{q}Q\overline{Q}} \, \equiv \, N_c \, \big( \!\! \sum_{i=u,d,s} Q_i^2 \big) \, C_{8} \, \bigg( \frac{\alpha_s}{\pi} \bigg)^2 \int_{4M^2}^s \frac{ds^{\prime}}{s^{\prime}} \, R(s^{\prime}) \, F(s^{\prime}/s) \; \; \; , \label{eq:qqQQ}$$ with $$\begin{aligned} F(x) &=& \frac{1}{6} \, \bigg\{ (1+x)^2\ln^2 x + (3+4x+3x^2)\ln x +5(1-x^2) \nonumber\\[3mm] &&\; \; \, \; \; \; - 4(1+x)^2 \, \Big[ \mbox{Li}_2(-x) +\ln(1+x)\ln x + {\frac{\displaystyle \pi^2}{\displaystyle 12}} \,\Big] \bigg\} \; \; \; . \label{eq:F} \end{aligned}$$ The function $F(s^{\prime}/s)$ gives the rate for the decay of a vector boson of mass $\sqrt{s}$ into a vector boson of mass $\sqrt{s^{\prime}}$ plus a pair of massless fermions ($q\overline{q}$). The spectral density $R(s) = \beta (3 - \beta^2)/2$ (at lowest order) is the normalized cross section for the production of a pair of fermions with unit charge through a vector boson; here $\beta = \sqrt{1-4M^2/s}$ is the velocity of the produced heavy quarks. The integral can be solved analytically in this case and the result is found in Ref. [@4fermion]. Note that the heavy quark pair is created in a colour octet state, and the factor $$C_{8}= \frac{1}{N_c} \, \mbox{Tr}\bigg(\frac{\lambda^a}{2} \frac{\lambda^b}{2}\bigg) \, \mbox{Tr}\bigg(\frac{\lambda^a}{2} \frac{\lambda^b}{2}\bigg) \, = \, \frac{2}{3}$$ retains this colour structure. It is interesting to compare the contribution from $R_{q\overline{q}Q\overline{Q}}$ with the ${\cal O}(\alpha_s^2)$ contributions to $R_{Q\overline{Q}}$ (i.e. to the spectral density of the heavy quark correlator). Note that in the high energy limit there is no difference between the diagram shown in Fig. \[fig:5gluon\](b) and the same one with $Q$ and $q$ lines interchanged or with $q=Q$, both of them being included in $\Pi_{Q\bar{Q}}$. 
Differences arise because the heavy quark currents correlator, $\Pi_{Q\overline{Q}}$, also accounts for two heavy quark cuts where the internal (light or heavy) quark loop represents a virtual correction to the electromagnetic current. We have written Eq. (\[eq:qqQQ\]) in terms of a general $R(s)$ function in the integrand because it allows us to introduce final state interactions between the heavy quark pair in a straightforward way. In particular, we know that close to threshold the Coulomb interaction between the heavy quark pair dominates the dynamics. Resummation of the leading terms $\sim (\alpha_s/\beta)^n$ becomes mandatory, and gives rise to the well-known Sommerfeld factor multiplying the cross section: $$R^{thr}(s)=R(s)\times\frac{C\pi \alpha_s / \beta}{1-\exp (-C\pi\alpha_s/\beta)}\; \; \; . \label{eq:Rthr}$$ The colour factor $C$ appears in the Coulomb QCD potential and its value depends on the relative colour state of the quark pair. For singlet states $C = C_F$, and the potential is attractive, increasing the cross section at threshold. This is the case of heavy quark production in $e^+e^-$ collisions. However, in our case the heavy quark pair is produced through the splitting of a gluon. The Coulomb potential becomes repulsive between quarks in a colour octet state, $C = C_F - C_A/2 = -1/(2N_c)$, and the Sommerfeld factor at low velocities then reads $$\frac{-\pi \alpha_s /6 \beta}{1-\exp (\pi\alpha_s/6\beta)}\; \; \stackrel{\beta\to 0}{\Longrightarrow}\; \; \frac{\pi \alpha_s}{6 \beta}\, e^{-\frac{\pi\alpha_s}{6\beta}}\; \; \; ,$$ causing the cross section to decrease near threshold even faster than $\beta$, the phase-space velocity in $R(s)$. The production of heavy quarks radiated off massless quarks through a virtual gluon is then very much suppressed in the threshold region.
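The opposite threshold behaviour of the attractive (singlet) and repulsive (octet) channels can be illustrated directly with the Sommerfeld factor of Eq. (\[eq:Rthr\]); a sketch (Python assumed; the values of $\alpha_s$ and $\beta$ are illustrative only):

```python
# Sketch of the Sommerfeld factor X / (1 - exp(-X)), X = C*pi*alpha_s/beta,
# comparing the colour-singlet (attractive) and colour-octet (repulsive)
# channels; alpha_s and beta values below are illustrative assumptions.
import numpy as np

def sommerfeld(C, alpha_s, beta):
    X = C * np.pi * alpha_s / beta
    return X / (1.0 - np.exp(-X))

alpha_s, beta = 0.3, 0.1   # illustrative near-threshold values
CF = 4.0 / 3.0             # singlet: C = C_F, attractive potential
C8 = CF - 3.0 / 2.0        # octet: C = C_F - C_A/2 = -1/6, repulsive

assert sommerfeld(CF, alpha_s, beta) > 1.0        # threshold enhancement
assert 0.0 < sommerfeld(C8, alpha_s, beta) < 1.0  # threshold suppression
```

The octet factor falls below unity and vanishes exponentially as $\beta \to 0$, which is the suppression quoted in the text.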
However, as mentioned above, high energy quark lines can be considered massless and the contribution from this diagram is numerically equal to the same one with $Q$ and $q$ lines interchanged. The inclusion in $\mbox{Im} \Pi_{[Q \overline{Q}]}^{pert}(s)$ of four–fermion cuts coming from light–quark correlators is possible because we have shown in Section 2 how to discern and extract these pieces. As discussed before, the procedure depends crucially on the definition of the observable information input in the sum rule, and consistency between the theoretical and phenomenological parts is required. Let us come back to the discussion of Section 3. There it was argued why perturbative $Q \overline{Q}$ cuts are thought to reproduce the phenomenology of two-jet events. Notice that, in heavy quark radiation from light quarks, the signature of the event is likely to be a 3–jet configuration where one of the jets is generated from a gluon. If heavy flavour components are to be found in this jet, the diagram of Fig. \[fig:5gluon\](b) would certainly be needed to account for these events on the theoretical side. However, the heavy partons in this jet are not as energetic as in pure $Q \overline{Q}$ production and, consequently, the proposed factorization between long and short distance effects may no longer apply, allowing for an interference between both regimes. In this case we cannot argue that this kind of cut diagram would result in a final state with Q–flavoured hadrons with unit probability, although we may impose kinematical constraints to reduce uncertainties in both the experimental reconstruction of data and the theoretical cross section of these $Q \overline{Q} q \overline{q}$ states. This issue is the source of a recent discussion in the literature related to the secondary production of $b \overline{b}$ through gluon splitting [@4fermion; @gls1].
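Closing this Section, the radiation function $F(x)$ of Eq. (\[eq:F\]) is simple to evaluate numerically; a minimal sketch (Python with scipy assumed; SciPy's `spence` gives the dilogarithm through $\mathrm{Li}_2(-x) = \mathtt{spence}(1+x)$):

```python
# Numerical sketch of the radiation function F(x) of Eq. (F), using
# scipy.special.spence for the dilogarithm: Li2(-x) = spence(1 + x).
import numpy as np
from scipy.special import spence

def F(x):
    li2 = spence(1.0 + x)  # Li2(-x)
    return (1.0 / 6.0) * (
        (1 + x) ** 2 * np.log(x) ** 2
        + (3 + 4 * x + 3 * x ** 2) * np.log(x)
        + 5 * (1 - x ** 2)
        - 4 * (1 + x) ** 2 * (li2 + np.log(1 + x) * np.log(x) + np.pi ** 2 / 12)
    )

# F vanishes at the phase-space boundary s' = s (x = 1) ...
assert abs(F(1.0)) < 1e-9
# ... and is positive inside the physical region 0 < x < 1
assert F(0.1) > 0 and F(0.5) > 0
```

The vanishing at $x=1$ reflects the closing of phase space for radiating the massless $q\overline{q}$ pair when $s^{\prime} = s$.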
Massless contribution to heavy quark sum rules ============================================== Up to now the evaluation of the perturbative two–point correlation function $\Pi^{pert}(q^2)$ (in this Section we will denote the heavy quark currents correlator by $\Pi(q^2)$) has only been carried out completely, with massive quarks, up to ${\cal O}(\alpha_s^2)$ [@cher], and the sum rules procedure, given by Eq. (\[eq:gqhd\]), has been termed consistent and effective in its task because the first branch point is set at the massive two–particle threshold. However, Groote and Pivovarov have pointed out [@pg1] that at ${{\cal O} (\alpha_{s}^3)}$ there is a contribution to the correlator which contains a three–gluon massless intermediate state (see Fig. \[fig:3gluon\](a)). Its absorptive part starts at zero energy and, therefore, Eq. (\[eq:gqhd\]) is no longer correct, because on the r.h.s. there is a discontinuity starting at $s=0$. Moreover, those authors have also warned about the fact that, at this perturbative order, the massless intermediate state invalidates the definition of the moments ${\cal M}_n$ for $n \ge 4$ because they become singular. Let us recall their reasoning. The perturbative contribution given by the diagram in Fig. \[fig:3gluon\](a) has been calculated at small $q^2$ ($q^2 \ll M^2$) in Ref. [@pg1]. In this limit the quark triangle loop can be integrated out, and one ends up with the diagram in Fig.
\[fig:3gluon\](b) generated by an induced effective current describing the interaction of the vector current with three gluons, $$J^{\mu}\,=\, - {\frac{\displaystyle \pi}{\displaystyle 180 M^4}} \, \left( {\frac{\displaystyle \alpha_s}{\displaystyle \pi}} \right)^{\frac{3}{2}} \, \left(5\, \partial_{\nu}{\cal O}_{1}^{\mu\nu}\,+\,14\,\partial_{\nu}{\cal O}_{2}^{\mu\nu}\right) \,, \label{eq:effec_current}$$ ![\[fig:3gluon\] *(a) ${{\cal O} (\alpha_{s}^3)}$ diagram contributing to the vacuum polarization function of the heavy quark current (the vertical dashed line indicates the massless cut). (b) “Effective” diagram obtained by integrating out the fermion loops. It also has the topological structure of the “reduced” diagram that determines the massless cut singularity.*](figure2.ps){width="70.00000%"} with $$\begin{aligned} {\cal O}_{1}^{\mu\nu} \, & = & \, d_{abc}\,G^{\mu\nu}_a G^{\alpha\beta}_b G_{\alpha\beta}^c \; \; , \\ \nonumber {\cal O}_{2}^{\mu\nu} \, & = & \, d_{abc}\,G^{\mu\alpha}_a G_{\alpha\beta}^b G^{\beta\nu}_c \; \; , \label{eq:operators} \end{aligned}$$ where $G^{\mu\nu}_a$ is the gluon strength field tensor. The effective current in the QED case ($G^{\mu\nu}_a\to F^{\mu\nu}, \alpha_s \to \alpha_{em}, d_{abc}\to 1$) can be easily identified from the lowest order Euler-Heisenberg Lagrangian (see Ref. [@pg2]). The correlator of the induced current (\[eq:effec\_current\]) is then evaluated in configuration space, giving: $$\langle 0| T \,J_{\mu}(x) \ J_{\nu}^{\dagger}(0)\, |0\rangle\,=\,-\frac{34}{2025\pi^4 M^8} \left(\frac{\alpha_s}{\pi}\right)^3 d_{abc}d_{abc}\, \left(\partial_{\mu}\partial_{\nu}-g_{\mu\nu}\partial^2\right) \frac{1}{x^{12}} \, . \label{eq:correlator}$$ In momentum space we need to perform the Fourier transform of Eq. (\[eq:correlator\]). Following the differential regularization procedure [@differ], which works directly in configuration space, the result for the vacuum polarization contribution of the diagram in Fig.
\[fig:3gluon\](b) at small $q^2$ reads $$\Pi_{\mu\nu}(q)\; = \; \frac{17}{2916000 \pi^2}\,d_{abc}d_{abc} \left(\frac{\alpha_s}{\pi}\right)^3 (q_{\mu}q_{\nu}-q^2g_{\mu\nu})\left(\frac{q^2}{4M^2}\right)^4 \ln \left(\frac{\mu^2}{-q^2}\right)\, +{\cal O}\Big[ \Big(\frac{q^2}{M^2}\Big)^5 \Big]\,, \label{eq:3gluon_polarization}$$ with $\mu$ the renormalization point in this scheme, and $d_{abc}d_{abc}=40/3$. As noticed by Groote and Pivovarov [@pg1], moments associated with the diagram in Fig. \[fig:3gluon\](b) are not defined if $n\ge 4$. Indeed, differentiating Eq. (\[eq:3gluon\_polarization\]) four times, at $q^2\approx 0$, we get: $$\frac{1}{4!}\left(\frac{d}{dq^2}\right)^4\Pi(q^2)\arrowvert_{q^2\approx 0}= \frac{17}{218700\pi^2}\left(\frac{\alpha_s}{\pi}\right)^3 \left(\frac{1}{4M^2}\right)^4 \left[\ln \left(\frac{\mu^2}{-q^2}\right)-\frac{25}{12}\right] +{\cal O}\Big[ \frac{q^2}{M^{10}} \Big] \; \; , \label{eq:moment4}$$ whose real part clearly diverges if we set $q^2=0$. Larger $n$ moments are also infrared divergent, and so the authors of Ref. [@pg1] conclude that the standard sum rule analysis must limit the accuracy of theoretical calculations for the $n\ge 4$ moments to the ${\cal O}(\alpha_s^2)$ order of perturbation theory. This is, essentially, the conclusion of Ref. [@pg1]. An infrared-safe redefinition of the moments, to cure the latter problem, has been provided in Ref. [@pg2]; it consists of evaluating the moments at a Euclidean point $q^2 = - s_E$, thus avoiding the singular behaviour. This solution, as explained by the authors of that reference, is rather ill–conditioned from the phenomenological side, though. Nevertheless the flaw in Eq.
(\[eq:gqhd\]) due to the massless threshold still represents a problem because even if, up to ${{\cal O} (\alpha_{s}^3)}$, we substitute the dispersion relation by $$\widehat{\Pi}^{pert}(q^2) \; = \; {\frac{\displaystyle q^2}{\displaystyle \pi}}\int^{\infty}_{4M^2} \, {\frac{\displaystyle ds}{\displaystyle s}} \, \, \frac{\mbox{Im}\, {\Pi}_{ Q \overline{Q}}^{pert}(s)}{s-q^2-i\epsilon}\; + \; {\frac{\displaystyle q^2}{\displaystyle \pi}}\int^{\infty}_{0} \, {\frac{\displaystyle ds}{\displaystyle s}} \,\, \frac{\mbox{Im}\,\Pi_{3g}(s)}{s-q^2-i\epsilon} \; \; \; , \label{eq:fulldisp-rel}$$ (where $\mbox{Im}\,{\Pi}_{ Q \overline{Q}}^{pert}(s)$ includes discontinuities starting at $s=4M^2$), the spectral function $\mbox{Im} \, \Pi_{3g}(s)$ associated with the cut in Fig. \[fig:3gluon\](a) could hardly be implemented phenomenologically, as gluons hadronize into both heavy and light quark pairs. We wish to provide a bypass to recover the balance between the right-hand and left-hand sides of Eq. (\[eq:fulldisp-rel\]). We will now see that if one does not insist on using the full vacuum polarization for the sum rule analysis there is a way to overcome this infrared problem. In the heavy quark correlator the discontinuity across the three–gluon cut gives a contribution to the spectral function that is unequivocally real: $${\frac{\displaystyle 1}{\displaystyle 2i}} \, \left[ \, \Pi(s) \, \right]_{3g} \, = \, \mbox{Im} \, \Pi_{3g} (s) \; = \; - \, {\frac{\displaystyle 1}{\displaystyle 6s}}\int \, d R_{3g} \, \langle \, 0 \, | \, j^{\mu} \, | \, 3 \, g \, \rangle \, \langle \, 3 \, g \, | \, j_{\mu}^{ \dagger} \, | \, 0 \, \rangle \; \; , \label{eq:unita}$$ from which the dispersive part can be evaluated independently of the $Q \overline{Q}$ cuts. Accordingly, we conclude that we can identify and isolate the troublesome massless cut contribution to the two–point function. Indeed, Eqs. (\[eq:piq2\]) and (\[eq:unita\]) justify our previous Eq. (\[eq:fulldisp-rel\]). 
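Before moving on, the logarithmic structure of Eq. (\[eq:moment4\]), including the $-25/12$ constant, can be verified by differentiating the log term of Eq. (\[eq:3gluon\_polarization\]) with the overall prefactor stripped off. A minimal symbolic sketch:

```python
import sympy as sp

# Euclidean region: q2 < 0, mu2 > 0, so the logarithm is real
q2 = sp.symbols('q2', negative=True)
mu2 = sp.symbols('mu2', positive=True)

# Log term of Eq. (3gluon_polarization) with the constant prefactor
# stripped off: Pi(q2) ~ (q2)^4 * ln(mu^2 / (-q2))
Pi = q2**4 * sp.log(mu2 / (-q2))

# Fourth moment: (1/4!) d^4 Pi / d(q2)^4
moment4 = sp.diff(Pi, q2, 4) / sp.factorial(4)

# Reproduces the ln(mu^2/(-q^2)) - 25/12 structure of Eq. (moment4),
# which diverges logarithmically as q2 -> 0
expected = sp.log(mu2 / (-q2)) - sp.Rational(25, 12)
assert sp.simplify(moment4 - expected) == 0
```

The $\ln(\mu^2/-q^2)$ term survives the four derivatives of $(q^2)^4$, which is precisely why the real part of the $n=4$ moment has no finite $q^2\to 0$ limit.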
Let us go back then to Eq. (\[eq:fulldisp-rel\]). The whole difficulty with the phenomenological application of the sum rules is now the fact that the contribution from the three–gluon cut is contained on both sides of the equality. This intermediate state hadronizes into hadrons containing light and/or heavy quarks indistinctly. It is clear that if we could disentangle the heavy quark hadronization, $3g \rightarrow Q \overline{Q}$, we would include only this piece in the sum rule. Then the singularity at $q^2 = 0$ would disappear because heavy quarks are produced starting at $q^2 = 4M^2$. However, there is no way to sort out light from heavy quark production off three gluons; therefore, if we remove this contribution from the heavy quark sum rules we introduce an uncertainty in the procedure, since we make sure that there is no light quark hadronization but we also miss the heavy quark production. It is easy to see that the induced error is small, because three gluons hadronize mostly into light hadrons. In the very high energy region, following perturbative QCD with $N_F = 4$, we have only a $1/4 \, = \, 25 \, \%$ probability of finding a specified pair of heavy quarks produced. This is a generous upper limit, because when we go down in energy, phase space restrictions severely reduce the production of heavy quarks. Hence we estimate that excluding the three–gluon cut introduces only a tiny error, of at most a few percent, in the sum rules procedure. 
Thus we propose an [*infrared safe*]{} definition of the moments by the trivial subtraction : $$\begin{aligned} \widetilde{{\Pi}}^{pert} (q^2)& \doteq & \, \widehat{\Pi}^{pert} (q^2) \, - {\frac{\displaystyle q^2}{\displaystyle \pi}}\int^{\infty}_{0} \, {\frac{\displaystyle ds}{\displaystyle s}} \, \frac{\mbox{Im}\,\Pi_{3g}(s)}{s-q^2-i\epsilon} \; = \; {\frac{\displaystyle q^2}{\displaystyle \pi}}\int^{\infty}_{4M^2} \, {\frac{\displaystyle ds}{\displaystyle s}} \, \frac{\mbox{Im}\, {\Pi}_{ Q \overline{Q}}^{pert} (s)}{s-q^2-i\epsilon} \; \; , \label{eq:safe_def_pol}\\[5mm] \widetilde{\cal M}_n & \doteq & {\cal M}_n- \frac{1}{\pi}\int^{\infty}_{0}ds\, \frac{\mbox{Im}\,\Pi_{3g}(s)}{s^{n+1}}\, \; \; . \label{eq:safe_def_moments}\end{aligned}$$ Of course Eqs. (\[eq:safe\_def\_pol\]) and  (\[eq:safe\_def\_moments\]) are meaningless unless we give a precise prescription about how to subtract the contribution of the massless cuts represented by $\mbox{Im}\,\Pi_{3g}$. Our previous discussion gives us the tool to proceed. Once the full ${{\cal O} (\alpha_{s}^3)}$ $\Pi^{pert}(s)$ is calculated we can extract the imaginary part starting at $s=0$ (which should go with a $\theta(s)$ function) for any value of $s$. It is clear that the $\theta(s)$ and $\theta(s-4M^2)$ terms in the imaginary part of the vacuum polarization function correspond to three–gluon massless and to $Q\overline{Q}$ cut graphs, respectively, and $\mbox{Im}\,\Pi_{3g}$ and $\mbox{Im}\,\Pi_{ Q \overline{Q}}^{pert}$ are easy to distinguish, as Eq. (\[eq:unita\]) prevents the appearance of mixed terms. Therefore we identify $\mbox{Im}\,\Pi_{3g}$ and we now plug it in the dispersion integral of the right–hand side of Eq. (\[eq:safe\_def\_moments\]) and perform such integration. Divergences contained in both this integral and ${\cal M}_n$ as $q^2\to 0$ will cancel with each other if the same infrared regularization is employed in the two quantities. 
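This cancellation can be made concrete in a toy model. Take $\Pi(q^2)=(q^2)^4\ln(\mu^2/-q^2)$, which mimics the small-$q^2$ structure of Eq. (\[eq:3gluon\_polarization\]) with unit prefactor; its discontinuity gives $\mbox{Im}\,\Pi(s)=\pi\,s^4\,\theta(s)$. Regularizing the $n=4$ moment and the subtraction integral with the same infrared cutoff $s_0$ (and cutting the dispersion integral off at a scale $\Lambda$, since only the small-$s$ form is used), one can watch the $\ln s_0$ pieces cancel numerically. A sketch, with placeholder values for $\mu^2$ and $\Lambda$:

```python
import math
from scipy.integrate import quad

# Toy model mimicking Eq. (3gluon_polarization) with unit prefactor:
#   Pi(q2) = (q2)^4 ln(mu2/-q2)  =>  Im Pi(s) = pi s^4 theta(s)
mu2 = 1.0  # renormalization scale squared (placeholder)
lam = 1.0  # UV cutoff of the dispersion integral (placeholder)

def moment4(s0):
    # (1/4!) d^4 Pi/d(q2)^4 at q2 = -s0, i.e. ln(mu2/s0) - 25/12
    return math.log(mu2 / s0) - 25.0 / 12.0

def subtraction(s0):
    # (1/pi) int_0^lam ds/s * Im Pi(s) / (s + s0)^4
    val, _ = quad(lambda s: s**3 / (s + s0)**4, 0.0, lam, limit=200)
    return val

# Both pieces diverge like ln(1/s0); their difference stays finite
# and tends to ln(mu2/lam) - 1/4 as s0 -> 0+
for s0 in (1e-2, 1e-3, 1e-4, 1e-5):
    print(s0, moment4(s0), subtraction(s0), moment4(s0) - subtraction(s0))
```

With the placeholder values above the difference converges to $-1/4$ while each term separately blows up logarithmically; this is the sense in which the same infrared regularization must be employed in the two quantities.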
The intuitive choice would be a low-energy cutoff $s_0 > 0$, and Eq. (\[eq:safe\_def\_moments\]) would be more precisely written as: $$\widetilde{\cal M}_n \; \equiv \; \lim_{s_0\to 0^+}\left[ \frac{1}{n!}\left(\frac{d}{dq^2}\right)^n\Pi^{pert}(q^2) \arrowvert_{q^2 = -s_0} \, - \, \frac{1}{\pi}\int^{\infty}_{0} \, {\frac{\displaystyle ds}{\displaystyle s}} \, \frac{\mbox{Im}\,\Pi_{3g}(s)}{(s+s_0)^{n}}\right]\; \; , \label{eq:safe_moments_reg}$$ where a vanishing term in the $s_0 \rightarrow 0^+$ limit has been omitted. The evaluation of the ${\cal M}_n$ moments at $q^2=0 < 4 M^2$ made sense because, up to ${\cal O}(\alpha_s^2)$, this point, being far away from the heavy quark production threshold, is unphysical and the moments are well defined through an analytic continuation from the high–energy region. However, note that the absorptive three–gluon contribution starts at $q^2=0$, where perturbative QCD becomes unreliable. This introduces a further difficulty in evaluating ${\cal M}_n$ moments at $q^2 = 0$, as we reach the physical non–perturbative region. Our definition of the moments, $\widetilde{\cal M}_n$ in Eq. (\[eq:safe\_def\_moments\]), sidesteps this problem by fully eliminating the massless terms and, therefore, the final heavy quark sum rule will only involve physics at $q^2 > 4 M^2$, apart from possible bound states. The general rule given above is valid for all orders of perturbation theory, but it strongly relies on our ability to extract the massless absorptive part from the full result of $\Pi(q^2)$ calculated at a definite order. Beyond ${\cal O}(\alpha_s^2)$, complete analytical results for the heavy quark correlator would be cumbersome and only numerical approaches may be at hand. In this sense, it would be convenient to have a method to calculate $\mbox{Im}\,{\Pi}_{ Q \overline{Q}}^{pert}$ based only on Feynman graphs. We have already sketched such a method in the discussion following Eq. 
(\[eq:piq2\]): we just need to sum up all the massless cut graphs to get $\mbox{Im}\,\Pi_{3g}$, and then proceed with the dispersion integration that gives the associated dispersive part [@perhaps]. For example, at ${{\cal O} (\alpha_{s}^3)}$, the only massless absorptive part comes from the three–gluon cut in the diagram of Fig. \[fig:3gluon\](a); let us call ${\cal M}_{3g}^{\mu}$ the amplitude producing three gluons from the heavy quark current at lowest order (i.e. through the quark triangle loop in Fig. \[fig:4gluon\]). The massless contribution to the absorptive part of the correlator is then: $$\mbox{Im}\,\Pi_{3g} (s) = -\frac{1}{6s} \int dR_{3g} \,\,{\cal M}_{3g}^{\mu} \cdot {\cal M}_{3g\,\mu}^* \; \; , \label{eq:alpha3_Img}$$ with the three–gluon phase space integral defined as $$\int dR_{3g} \equiv \frac{1}{3!}\frac{1}{(2\pi)^5}\frac{\pi^2}{4s} \int_0^s ds_1 \int_0^{s-s_1} ds_2 \, \; \, , \label{eq:3gluon_space}$$ in terms of the invariants $s_1\equiv (k_1+k_2)^2=(q-k_3)^2$ and $s_2\equiv (k_2+k_3)^2=(q-k_1)^2$, with $k_i$ the momenta of the gluons. The real part would be obtained by integrating Eq. (\[eq:alpha3\_Img\]): $${\frac{\displaystyle s_0}{\displaystyle \pi}}\int^{\infty}_{0} \, {\frac{\displaystyle ds}{\displaystyle s}} \, \frac{\mbox{Im}\,\Pi_{3g}(s)}{s+s_0} = {\frac{\displaystyle -s_0}{\displaystyle 288(2\pi)^4}}\int^{\infty}_{0} \frac{ds}{s^3(s+s_0)}\int_0^s ds_1 \int_0^{s-s_1}ds_2 \,\,{\cal M}_{3g}^{\mu} \cdot {\cal M}_{3g\,\mu}^* \,, \label{eq:alpha3_Reg}$$ which, in principle, could also be performed numerically. The $n$th derivative of relation (\[eq:alpha3\_Reg\]) with respect to $s_0$, in the limit $s_0\to 0^+$, would give the infrared divergent contribution that should be subtracted from the full moments, as dictated by Eq. (\[eq:safe\_moments\_reg\]). Finally, we would like to mention that using the non–relativistic expansion of the heavy quark correlator in sum rules analyses does not avoid this infrared problem, at least formally. 
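As a sanity check of the normalization in Eq. (\[eq:3gluon\_space\]): integrating $1$ over the Dalitz region must reproduce the familiar massless three-body phase-space volume $s/(256\pi^3)$, divided by $3!$ for the three identical gluons. A quick numerical sketch:

```python
import math
from scipy.integrate import dblquad

def three_gluon_volume(s):
    """Integral of 1 over dR_3g of Eq. (3gluon_space):
    0 < s1 < s, 0 < s2 < s - s1, with the stated prefactors."""
    area, _ = dblquad(lambda s2, s1: 1.0,
                      0.0, s,             # s1 range
                      lambda s1: 0.0,     # s2 lower limit
                      lambda s1: s - s1)  # s2 upper limit
    prefactor = (1.0 / math.factorial(3)) / (2.0 * math.pi)**5 \
                * math.pi**2 / (4.0 * s)
    return prefactor * area

s = 4.0  # any s > 0; the volume is linear in s
expected = s / (6.0 * 256.0 * math.pi**3)  # s/(256 pi^3) divided by 3!
print(three_gluon_volume(s), expected)
```

The Dalitz area is $s^2/2$, so the prefactors combine to $s/(1536\pi^3)$, confirming the measure.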
The ${{\cal O} (\alpha_{s}^3)}$ diagram of Fig. \[fig:3gluon\] will be highly suppressed in the velocity expansion, following the non–relativistic effective field theory approach, and therefore it is not relevant in the corresponding heavy quark current correlator. However, such a two–point function cannot describe the $Q \overline{Q}$ spectrum for energies far from threshold; even when higher $n$-moments, which strongly enhance the threshold region, are used, perturbative QCD is needed in order to implement the remaining high–energy region. The diagram of Fig. \[fig:3gluon\] has to be accounted for to properly include this input, and its discontinuity at $s=0$ cannot be ignored. This point is more clearly seen by noticing that, besides the resummations in $(\alpha_s/\beta)$ performed in the non–relativistic correlator, one could improve such an expansion by adding the terms needed to reproduce the exact ${{\cal O} (\alpha_{s}^3)}$ result $\Pi(q^2)$. ![\[fig:4gluon\] *Feynman diagram for the production of three gluons at ${\cal O} (\alpha_s^{3})$.*](figure3.ps){width="30.00000%"} Conclusions =========== Heavy quark sum rules, relying on global quark–hadron duality, are a compelling procedure to extract information on the theory from phenomenology. However, as higher perturbative order analyses are performed, the consistency of the method demands the inclusion of novel features. While at ${\cal O}(\alpha_s)$ the correlator of two heavy quark currents gives the full perturbative information, at ${\cal O}(\alpha_s^2)$ we have noticed that a heavy quark $Q \overline{Q}$ pair radiated from light quarks in a correlator of light quark currents should be considered. At ${{\cal O} (\alpha_{s}^3)}$ the complexities grow with the essential role of non–symmetric correlators. 
Closely related to this situation is the issue recently pointed out by Groote and Pivovarov, namely the problem arising from a massless three–gluon discontinuity in the heavy quark current correlator at ${{\cal O} (\alpha_{s}^3)}$. We have shown that rigorous results of the general theory of singularities of perturbation theory provide all–important tools to analyse the new contributions. The inclusion or exclusion of specific discontinuities in the perturbative side is shown to be feasible, and the decision involves a clear definition of the observable input on the phenomenological side of the sum rules. A solution for the problem pointed out by Groote and Pivovarov at ${{\cal O} (\alpha_{s}^3)}$ has been given. We conclude that the appropriate procedure to obtain information about the heavy quark parameters should make use of the infrared safe corrected moments, defined in Eq. (\[eq:safe\_moments\_reg\]), that now indeed satisfy the modified sum rule: $$\widetilde{\cal M}_n \; = \; \frac{1}{\pi}\int^{\infty}_{4 M^2}ds\, \frac{\mbox{Im}\,\Pi_{[Q \overline{Q}]}^{phen}(s)}{s^{n+1}} \; \; , \label{eq:finali}$$ where the right–hand side can be extracted from the heavy quark production cross section $\sigma(e^+e^- \rightarrow [Q \overline{Q}])$. The uncertainty associated with the heavy quark hadronization of the three–gluon state should be taken into account, but it is shown to be tiny. The analysis we have carried out is completely general, relying only on the theory of singularities of perturbation theory amplitudes, and provides a sharp tool for the future analysis of heavy quark sum rules.\ [**Acknowledgements**]{} We wish to thank A. Pich for calling our attention to this problem. We also thank G. Amorós, M. Eidemüller and A. Pich for relevant discussions on the topic of this paper and for reading the manuscript. The work of P. D. Ruiz-Femenía has been partially supported by an FPU scholarship of the Spanish [*Ministerio de Educación y Cultura*]{}. 
This work has been supported in part by TMR, EC Contract No. ERB FMRX-CT98-0169 and by MCYT (Spain) under grant FPA2001-3031. [99]{} S. Groote and A. A. Pivovarov, “Low-energy gluon contributions to the vacuum polarization of heavy quarks", \[hep-ph/0103047\]. S. Groote and A. A. Pivovarov, Eur. Phys. J. C [**21**]{} (2001) 133 \[arXiv:hep-ph/0103313\]. T. Appelquist and H. Georgi, Phys.  Rev.  [**D8**]{} (1973) 4000;\ A. Zee, Phys.  Rev.  [**D8**]{} (1973) 4038. L. D. Landau, Nucl.  Phys.  [**13**]{} (1959) 181;\ J. C. Taylor, Phys.  Rev.  [**117**]{} (1960) 261;\ R. E. Cutkosky, J.  Math.   Phys.  [**1**]{} (1960) 429;\ R. E. Cutkosky, Rev.  Mod.  Phys.  [**33**]{} (1961) 448. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B [**147**]{} (1979) 385;\ M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B [**147**]{} (1979) 448;\ L. J. Reinders, H. Rubinstein and S. Yazaki, Phys. Rept.  [**127**]{} (1985) 1;\ M. Jamin and A. Pich, Nucl. Phys. B [**507**]{} (1997) 334 \[hep-ph/9702276\];\ A. H. Hoang, Phys. Rev. D [**59**]{} (1999) 014039 \[hep-ph/9803454\];\ M. Beneke and A. Signer, Phys. Lett. B [**471**]{} (1999) 233 \[hep-ph/9906475\];\ M. Eidemuller and M. Jamin, Phys. Lett. B [**498**]{} (2001) 203 \[hep-ph/0010334\]. A. H. Hoang, M. Jezabek, J. H. Kuhn and T. Teubner, Phys. Lett. B [**338**]{} (1994) 330 \[arXiv:hep-ph/9407338\]. D. J. Miller and M. H. Seymour, Phys. Lett. B [**435**]{} (1998) 213 \[arXiv:hep-ph/9805414\];\ R. Barate [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**434**]{} (1998) 437;\ P. Abreu [*et al.*]{} \[DELPHI Collaboration\], Phys. Lett. B [**462**]{} (1999) 425. K. G. Chetyrkin, J. H. Kuhn and M. Steinhauser, Nucl. Phys. B [**482**]{} (1996) 213 \[hep-ph/9606230\];\ K. G. Chetyrkin, R. Harlander, J. H. Kuhn and M. Steinhauser, Nucl. Phys. B [**503**]{} (1997) 339 \[hep-ph/9704222\]. D. Z. Freedman, K. Johnson and J. I. Latorre, Nucl. Phys. B [**371**]{} (1992) 353;\ D. Z. Freedman, G. Grignani, K. 
Johnson and N. Rius, Annals Phys.  [**218**]{} (1992) 75 \[hep-th/9204004\]. J. Portolés and P. D. Ruiz–Femenía, work in progress. [^1]: Sometimes the Adler function, defined as $\partial \Pi(p^2) / \partial \ln p^2$, is used to get rid of the subtraction constant. The choice of the regularization prescription is not relevant for our discussion here. [^2]: We use $d R_{n} \, = \, (2 \pi)^4 \, \delta^4 (q - \sum_{i=1}^n p_i) \, \prod_{i=1}^n d p_i$, where $q$ is the current four–momentum and $d p_i = \frac{d^3 p_i}{(2 \pi)^3 2 E_i}$. The $-1/(6s)$ factor in Eq. (\[eq:unitazi\]) originates from $\Pi_{had} = - g_{\mu \nu} \, \Pi^{\mu \nu}_{had} /( 3 s )$ and the (1/2) factor from the unitarity relation. [^3]: This expression also gives the residue $R_i$ of a pole at $z = z_i$ if we interpret the discontinuity as $ \left[ \, F(z) \, \right]_n \, = \, 2\pi i R_i \delta(z-z_i)$. [^4]: Brackets $[Q \overline{Q}]$ are short for any hadron state containing at least a $Q \overline{Q}$ pair and, possibly, light quarks and gluons too. [^5]: Notice that our definition of $R_{q\overline{q}Q\overline{Q}}$ differs from the one in Ref. [@4fermion].
--- abstract: 'Hořava gravity has been constructed so as to exhibit anisotropic scaling in the ultraviolet, as this renders the theory power-counting renormalizable. However, when coupled to matter, the theory has been shown to suffer from quadratic divergences. A way to cure these divergences is to add terms with both time and space derivatives. We consider this extended version of the theory in detail. We perform a perturbative analysis that includes all modes, determine the propagators and discuss how including mixed-derivative terms affects them. We also consider the Lifshitz scalar with mixed-derivative terms as a toy model for power counting arguments and discuss the influence of such terms on renormalizability.' author: - 'Mattia Colombo,$^1$ A. Emir Gümrükçüoğlu,$^1$ and Thomas P. Sotiriou$^{1,2}$' title: Hořava gravity with mixed derivative terms --- Introduction ============ Einstein’s General Relativity (GR) is currently in agreement with all available observational and experimental data (see e.g. [@Will:2005va]). However, the fact that GR is not renormalizable suggests that it is no more than a low energy effective theory. When quantum corrections are taken into account, higher derivative operators are inevitably excited [@Donoghue:1994dn]. The leap from effective field theory to an ultraviolet (UV) complete gravity theory is highly non-trivial. The presence of higher derivative terms in the Lagrangian does indeed improve the UV behavior of the theory through the modification that the additional spatial derivatives introduce to the propagator. However, so long as Lorentz invariance remains intact, such terms also introduce higher time derivatives, which lead to a breaking of unitarity [@Stelle:1976gc]. Based on an analogy with the Lifshitz scalars in condensed matter physics [@Lifshitz], a theory of gravity which takes time and space on a different footing was introduced by Hořava [@Horava:2009uw]. 
The novelty of this approach is to allow for higher spatial derivatives while restricting the kinetic part to contain no more than two time derivatives. This is achieved by breaking the isotropy in the scaling of the spatial and temporal coordinates in the UV $$t \to b^{-z} t\,,\qquad x^i \to b^{-1} x^i\,, \label{eq:aniscal}$$ where the critical exponent $z$ encodes the amount of scaling anisotropy. With this scaling property, the action is allowed to contain higher dimensional operators constructed only with spatial derivatives. The full 4D diffeomorphisms of GR now have to be relaxed such that the anisotropic scaling (\[eq:aniscal\]) can be accommodated. Hořava’s theory is defined by the “foliation preserving diffeomorphisms” (FDiff) $$t\to \bar{t}(t)\,, \qquad x^i \to \bar{x}^i(t,x^i)\,. \label{eq:fdiff}$$ It is then constructed out of terms which are invariant under the above symmetry. Since the time coordinate is fundamentally different from the spatial ones, the Arnowitt–Deser–Misner decomposition of the 4D metric into 3D hypersurfaces of constant $t$ [@Arnowitt:1962hi] provides a natural description of the fundamental ingredients of the theory, in terms of the lapse function $N(t,x^i)$, the shift vector $N^i(t,x^j)$ and the spatial metric $g_{ij}(t,x^k)$. As a result of the symmetry (\[eq:fdiff\]), the time-kinetic part contains only quadratic terms in the extrinsic curvature $K_{ij}$, while the higher spatial derivative terms are constructed out of the 3D curvature invariants, the lapse function and their 3D covariant derivatives. For critical exponent $z=3$, the latter terms contain up to 6 spatial derivatives and constitute the minimal theory which is renormalizable at the power-counting level. The anisotropic scaling at the level of the action is supposed to reflect the scaling of the propagator(s) or the dispersion relation(s). 
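The counting behind this statement is standard and worth recalling; it is the Lifshitz-scalar argument revisited in Sec. \[sec:powercounting\]. Assigning $[x^i]=-1$ and hence $[t]=-z$ under the scaling (\[eq:aniscal\]), invariance of the time-kinetic term fixes the dimension of a Lifshitz scalar, $$\int \mathrm{d}t\, \mathrm{d}^d x \,\dot\phi^2\;:\qquad 2[\phi]+2z-(z+d)=0 \quad\Rightarrow\quad [\phi]=\frac{d-z}{2}\,,$$ so for $d=z=3$ the field is dimensionless and the operator $\phi\,\partial^{2z}\phi$, carrying $2z=6$ spatial derivatives, is exactly marginal, $[\phi\,\partial^{2z}\phi]=2[\phi]+2z=z+d$, while operators with fewer spatial derivatives are relevant. This is the counting that renders the $z=3$ action power-counting renormalizable. 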
However, there are several reasons why this might not be the case and hence naïve power counting based on this anisotropic scaling might be misleading. The power-counting arguments for Hořava gravity are based on the analogy with the Lifshitz scalar (see Refs. [@Visser:2009fg; @Visser:2009ys] for a detailed discussion). The latter is a field theory of a single degree of freedom and one could straightforwardly guess the propagator by inspection of the action. Hořava gravity instead propagates a spin-2 and a spin-0 mode. In addition to those, there are also the gauge modes. It is, therefore, much more subtle to infer the behavior of the propagator from the scaling properties of the operators appearing in the action. Indeed, there exist restricted versions of the theory, such as those with [*detailed balance*]{} [@Horava:2009uw; @Vernieri:2011aa; @Vernieri:2012ms], where the sixth order operators in the action do not contribute at all to the propagator of the spin-0 mode, thus compromising renormalizability. The problem can be solved by adding eighth order operators, but the main lesson from these examples is that the naïve anisotropic scaling that one infers from the action is not always respected by the propagators. A further limitation of the power counting arises in determining the influence of the gauge modes on the loops. This has been demonstrated clearly in the analysis of Ref. [@Pospelov:2010mp]. The biggest challenge for Hořava’s theory, or any theory which violates Lorentz invariance in the gravity sector, is to suppress the Lorentz violation effects at low energy in the matter sector, where constraints are very stringent [@Liberati:2012jf; @Liberati:2013xla]. In Ref. [@Pospelov:2010mp] such a mechanism has been proposed. Lorentz violations are restricted to the gravity sector at tree level and they percolate the matter sector only through graviton loops. 
It is shown that Lorentz-violating terms in the matter sector end up being suppressed by powers of $M_\star/M_p$, where $M_\star$ is the UV scale above which the dispersion relations in the gravity sector cease to be relativistic. Hence, if $M_\star\ll M_p$, Lorentz violations in the matter sector can remain below experimental constraints. [^1] On the other hand, the analysis of Ref. [@Pospelov:2010mp] also uncovered a technical naturalness problem. Gauge mode loops actually lead to quadratic divergences.[^2] It was shown that the problem can be solved by introducing the specific counter-term $\nabla_i K_{jk} \nabla^i K^{jk}$ that can improve the behavior of the gauge mode. This term was chosen because it does not contribute to the propagator of the spin-2 graviton. However, a thorough analysis of the effect that this term, or other similar mixed-derivative terms, can have on the dynamics of the propagating modes is still pending. In addition, there is a strong ambiguity about how such terms fit into the power-counting scheme. If one naïvely tries to assign an order to them based on the scaling (\[eq:aniscal\]), then they should be counted as eighth order operators. However, there is no reason to trust such an order assignment. Generically such terms will modify the dispersion relations of (some of) the propagating modes and could even be the leading operators with time derivatives in the UV, thus compromising anisotropic scaling altogether. The implications of having such terms in the action for renormalizability are far from obvious. Our goal here is to shed some light on this matter. The rest of the paper is organized as follows. In the next Section we briefly review the basic ingredients of Hořava gravity and we construct the action with mixed derivatives. In Sec. \[sec:perturbations\] we present a full perturbation analysis of the theory and determine the propagators for all modes. 
This allows us to clarify the influence of the mixed-derivative terms on the propagators. Remarkably, this is the first complete perturbative analysis of (non-projectable) Hořava gravity, even without the mixed-derivative terms. In Sec. \[sec:powercounting\], we reconsider the Lifshitz scalar as a toy model and we examine how adding mixed-derivative term would affect power-counting renormalizability. We conclude with Sec. \[sec:discussion\] where we discuss our results. The action for Hořava gravity {#sec:setup} ============================= Since its introduction, Hořava’s theory has been subject to serious scrutiny, covering a range of issues [@Charmousis:2009tc; @Li:2009bg; @Blas:2009yd; @Koyama:2009hc; @Papazoglou:2009fj; @Henneaux:2009zb; @Blas:2009ck; @Kimpton:2010xi; @Padilla:2010ge], which led to the introduction of several extensions [@Blas:2009qj; @Blas:2010hb; @Horava:2010zj; @Zhu:2011yu; @Vernieri:2011aa]. A brief presentation of the various versions of the theory can be found in Refs. [@Kiritsis:2009sh; @Mukohyama:2010xz; @Sotiriou:2010wn]. In the rest of the paper, we will focus on the FDiff (\[eq:fdiff\]) invariant non-projectable Hořava gravity in 3+1 dimensions, with critical exponent $z=3$ [@Horava:2009uw; @Blas:2009qj]. We start by determining the most general action that is suitable for our purposes. Formally, the action we consider is $$S = \frac{M_p^2}{2}\int N dt\,\sqrt{g}d^3x\left(K_{ij}K^{ij}-\lambda K^2\right)+S_V+S_{\nabla K}\,, \label{eq:actformal}$$ where the extrinsic curvature is defined as $$K_{ij} \equiv \frac{1}{2\, N}\,\left(\dot{g}_{ij} - \nabla_i N_j - \nabla_j N_i\right)\,.$$ The action (\[eq:actformal\]) contains the time-derivative kinetic terms for the 3-metric $g_{ij}$, while the potential part $$S_V = \int Ndt\,\sqrt{g}d^3x\left(\frac{M_p^2}{2}{\cal L}_{z=1}+{\cal L}_{z=2}+\frac{1}{M_p^2}{\cal L}_{z=3}\right)\,, \label{eq:SV}$$ contains up to 6 spatial derivatives and exhausts all marginal and relevant operators. 
$S_{\nabla K}$ denotes all terms that are compatible with the symmetry and contain up to two time derivatives and two spatial derivatives, including the mixed-derivative term considered in Ref. [@Pospelov:2010mp]. One could also add the relevant deformation ${\cal L}_{z=0} = \Lambda$ that is allowed by the FDiff symmetry and the power counting. However, since we will later focus on a Minkowski background, we will be neglecting this cosmological constant term. The number of all possible terms in $S_V$ and $S_{\nabla K}$ is of the order $10^2$. However, we are interested in linear perturbations around flat spacetime. So, without loss of generality, we can consider only the terms that give non-trivial contributions to the propagation of linear perturbations around the Minkowski background. We expand the basic quantities as $$N = 1+\delta N\,, \qquad N_i = \delta N_i\,, \qquad g_{ij} = \delta_{ij}+\delta g_{ij}\,, \label{eq:decomp}$$ and impose a truncation of the action at quadratic order in perturbations. The building blocks for constructing the FDiff invariant potential terms are the acceleration 3-vector (1 spatial derivative) $$a_i \equiv \partial_i\log N = \partial_i \delta N +{\cal O}({\rm perturbation})^2\,,$$ and the 3 dimensional Ricci curvature tensor (2 spatial derivatives) $$\begin{aligned} R_{ij} &=& - \frac{\delta^{lm}}{2}\left[\partial_l\partial_m \delta g_{ij} +\partial_i \partial_j \delta g_{lm}-2\partial_l\partial_{(i}\delta g_{j)m}\right] \nonumber\\ &&+{\cal O}({\rm perturbation})^2\,.\end{aligned}$$ In 3 dimensions the Weyl tensor is identically zero, so the Riemann tensor can be expressed solely in terms of the Ricci tensor and the metric. Both $a_i$, $R_{ij}$ and their derivatives are of the order of perturbations, so any potential term which is cubic in these will be of higher order in the quadratic truncation. This observation reduces the number of possible terms considerably. 
Even after restricting the terms to be quadratic in the acceleration, curvature and their derivatives, there are still several terms which are redundant at the level of the quadratic action around Minkowski. For instance, since the curvature is of the order of perturbations, we can further identify redundant terms by commuting the covariant derivatives, i.e. $\nabla_{[i} \nabla_{j]} ({\rm perturbation}) ={\cal O}({\rm perturbation})^2$. Moreover, performing integration by parts, some terms turn out to give the same contribution up to higher order terms in perturbative expansion, e.g. the term $N \nabla_i R a^i$ can be written as $-N \,R \nabla_i a^i$ up to a boundary term and $ R\,a_ia^i$ (which does not contribute at the level of our quadratic truncation). Finally, making use of the contracted Bianchi identities $\nabla^j R_{ij} = \nabla_iR/2$, we find that the potential terms which contribute to the quadratic action are $$\begin{aligned} {\cal L}_{z=1} &=& 2\alpha\,a_ia^i + \beta\,R\,,\nonumber\\ {\cal L}_{z=2} &=& \alpha_1\,R\,\nabla_i a^i+\alpha_2 \nabla_ia_j\nabla^ia^j+\beta_1 R_{ij}R^{ij}+\beta_2R^2\,,\nonumber\\ {\cal L}_{z=3} &=& \alpha_3 \nabla_i\nabla^iR\,\nabla_ja^j+\alpha_4 \nabla^2 a_i\nabla^2 a^i+\beta_3 \nabla_i R_{jk}\nabla^i R^{jk} \nonumber\\ &&+\beta_4 \nabla_iR\nabla^iR\,, \label{eq:act-pot}\end{aligned}$$ where we defined $\nabla^2\equiv \nabla_i\nabla^i$. This is the most general version of Hořava’s theory including all terms that contribute to linear perturbations around Minkowski background. We remark that the projectable version of the theory with $N=N(t)$ can be obtained by simply taking the limit $\alpha\to\infty$ [@Blas:2009qj]. We now introduce the terms we wish to focus on, which are the mixed 2-time and 2-space derivative terms. Apart from the form $(\nabla_i K_{jk})^2$ chosen in Ref. [@Pospelov:2010mp], one can also write terms of the form $(K_{ij}a_k)^2$ and $K_{ij} K_l^jR^{il}$, by appropriate contractions with the metric $g_{ij}$. 
However, considering the perturbed quantities (\[eq:decomp\]), we find that $$K_{ij} =\frac{1}{2}\left[\delta\dot{g}_{ij}-\partial_i\delta N_j-\partial_j \delta N_i\right] + {\cal O}({\rm perturbation})^2\,.$$ In other words, the extrinsic curvature is also of the order of perturbations; hence only the terms of the form $(\nabla_i K_{jk})^2$ will contribute to the quadratic action. The mixed derivative part can thus be written as $$S_{\nabla K} = \int Ndt\,\sqrt{g}d^3x \nabla_iK_{jk}\nabla_lK_{mn} M^{ijklmn}\,, \label{eq:act-dk}$$ which consists of four independent contractions: $$\begin{aligned} M^{ijklmn} &\equiv &\gamma_1 g^{ij}g^{lm}g^{kn}+\gamma_2 g^{il}g^{jm}g^{kn} +\gamma_3 g^{il}g^{jk}g^{mn} \nonumber\\ &&+\gamma_4 g^{ij}g^{kl}g^{mn}\,. \label{eq:dkterms}\end{aligned}$$ The term with coefficient $\gamma_1$ corresponds to the one introduced in Ref. [@Pospelov:2010mp], used to remove the quadratic divergences in the vector loops. Perturbations around Minkowski {#sec:perturbations} ============================== We now consider perturbations around flat spacetime in the non-projectable theory with mixed derivative terms, introduced in the previous Section. For a perturbative analysis of the [*projectable*]{} version [@Horava:2009uw; @Sotiriou:2009gy] where the lapse function is forced to be space-independent, we refer the reader to Ref. [@Sotiriou:2009bx], and for an analysis of scalar perturbations in the non-projectable case to Refs. [@Blas:2009qj; @Blas:2010hb]. Decomposing the perturbations with respect to their transformation properties under spatial rotations, the background and perturbations are introduced as $$\begin{aligned} N = 1+ A\,,\qquad N^i = (B^i+\partial^i B)\,,\qquad\qquad\qquad\nonumber\\ g_{ij}=\delta_{ij}(1+2\psi) + (\partial_i\partial_j -\frac{\delta_{ij}}{3}\partial^2)E+\partial_{(i}E_{j)}+\gamma_{ij}\,,\nonumber\\\end{aligned}$$ where $\partial_iB^i = \partial_iE^i=\delta^{ij}\gamma_{ij} = \partial_i\gamma^{ij}=0$. 
We remark that since we are not working in the projectable theory, we have $A=A(t,\vec{x})$. In the gravity sector, there are 2 tensor degrees ($\gamma_{ij}$), 4 vector degrees ($B_i$, $E_i$) and 4 scalar degrees ($A$, $B$, $E$, $\psi$), giving a total of 10 perturbations. Out of these, four will be removed by integrating out $A$, $B$ and $B_i$ (which are non-dynamical, thus entering the action without time derivatives). Furthermore, 3 degrees will be removed by exploiting the spatial transformations $x^i\to x^i+\xi^i$ (2 vectors, 1 scalar).[^3] In the end, we expect 3 physical degrees of freedom: 2 tensors (1 transverse traceless tensor) and 1 scalar. In the following, we expand perturbations into plane waves through $$Q(t,\vec{x}) = \frac{1}{(2\pi)^{3/2}}\int d^3k \,Q_{\vec{k}}(t)\,e^{i\,\vec{k}\cdot\vec{x}}\,,$$ where $Q(t,x^i)$ represents any perturbation and $Q_{\vec{k}}(t)$ is the corresponding mode function, satisfying the reality condition $Q_{-\vec{k}}=Q^\star_{\vec{k}}$. Thanks to the invariance of the Minkowski background under spatial rotations, the resulting quadratic action will depend only on the magnitude of the momentum $k\equiv|\vec{k}|$ and all sectors will decouple from each other. In the remainder of the text, we omit the subscript $\vec{k}$ in the mode functions $Q_{\vec k}$. Tensor sector ------------- The action quadratic in tensor perturbations is obtained as $$\begin{aligned} S_{\rm tensor}^{(2)} &=& \frac{M_p^2}{8}\int dt \,d^3k \,a^3 \left(1+2\gamma_2\kappa^2\right) \nonumber\\&&\times\left(\vert\dot{\gamma}_{ij}\vert^2-k^2\,\frac{\beta -2\,\beta_1\kappa^2-2\,\beta_3\kappa^4}{1+2\,\gamma_2\kappa^2}\vert\gamma_{ij}\vert^2\right)\,,\nonumber\\\end{aligned}$$ where we have defined $\kappa\equiv k/M_p$ for convenience. Firstly, we see that only the second term of Eq. (\[eq:dkterms\]) contributes to the tensorial action. This is the term specifically and deliberately omitted in the analysis of Ref. [@Pospelov:2010mp]. 
The rest of the terms involve only divergences and traces of $K_{ij}$ and hence, they do not contribute to the tensor sector. Secondly, the dispersion relation in the UV behaves as $$\omega_{\rm tensor}^2= -\frac{\beta_3}{\gamma_2M_p^2}k^4 + {\cal O}(k^{2})\,,$$ in contrast with the standard Hořava result with $\omega^2 \sim -\beta_3 k^6/M_p^4$. On the other hand, tuning $\gamma_2$ to be zero reinstates the sixth order dispersion relations. Vector sector ------------- We now consider the vector sector. The quadratic action for these modes is $$S_{\rm vector}^{(2)} = \frac{M_p^2}{4}\int dt\,d^3k\, k^2[1+\kappa^2(\gamma_1+2\gamma_2)] \left\vert B^i - \frac{\dot{E}^i}{2}\right\vert^2\,. \label{eq:vecact}$$ In coordinate space, the equation of motion for the non-dynamical mode $B_i$ is given by $$\left(1-\frac{(\gamma_1+2\gamma_2)}{M_p^2}\triangle\right)\triangle \left(B^i-\frac{\dot{E}^i}{2}\right) =0\,,$$ where $\triangle\equiv \delta^{ij}\partial_i\partial_j$ is the flat-space Laplace operator. If we impose, as a boundary condition, that all perturbations and all their derivatives asymptotically vanish, then the unique solution is $$B^i = \frac{1}{2}\,\dot{E}^i\,. \label{eq:vecB}$$ Replacing this solution back in the action, we find that the action vanishes up to boundary terms. Hence, there are no propagating vector modes. It is clear, however, that the $\gamma_1$ and $\gamma_2$ terms modify the behavior of the vector modes by introducing extra spatial derivatives. This is exactly the feature that removed the divergences related to the vector modes in Ref. [@Pospelov:2010mp].
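The two tensor-sector statements above (only the $\gamma_2$ contraction survives for a transverse-traceless mode, and the UV dispersion tends to $\omega^2\simeq -\beta_3 k^4/(\gamma_2 M_p^2)$) can be cross-checked with a short SymPy sketch. The four contractions are written up to overall signs and factors, which does not affect which of them vanish:

```python
import sympy as sp

# -- Claim 1: for a transverse-traceless (TT) tensor mode, only the
#    gamma_2 contraction of M^{ijklmn} survives.  In Fourier space
#    (partial_i -> i k_i) the four contractions reduce, up to signs and
#    factors, to divergences and traces of K_ij, computed below.
kvec = sp.Matrix([0, 0, 1])                        # momentum along z
K = sp.Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 0]])  # traceless, k.K = 0

divK = kvec.T * K                  # k^j K_jk  (divergence, 1x3)
trK = K.trace()                    # K^j_j     (trace)
c1 = (divK * divK.T)[0]            # gamma_1 term ~ (k.K)_k (k.K)^k
c2 = kvec.dot(kvec) * sum(K[i, j]**2 for i in range(3) for j in range(3))
c3 = kvec.dot(kvec) * trK**2       # gamma_3 term ~ k^2 (tr K)^2
c4 = (divK * kvec)[0] * trK        # gamma_4 term ~ (k.K.k) tr K
print(c1, c2, c3, c4)              # only the gamma_2 contraction c2 is non-zero

# -- Claim 2: UV limit of the dispersion relation read off from the
#    quadratic tensor action quoted above.
k, Mp, beta, b1, b3, g2 = sp.symbols('k M_p beta beta_1 beta_3 gamma_2',
                                     positive=True)
kap = k / Mp
omega2 = k**2 * (beta - 2*b1*kap**2 - 2*b3*kap**4) / (1 + 2*g2*kap**2)
uv = sp.limit(omega2 / k**4, k, sp.oo)   # leading UV coefficient of k^4
print(uv)                                # equals -beta_3/(gamma_2*M_p^2)
```

The same computation with $\gamma_2\to 0$ removes the $\kappa^2$ term in the denominator, and the sixth order behavior $\omega^2\sim -2\beta_3 k^6/M_p^4$ is recovered.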
Scalar sector ------------- The scalar action is found to be $$\begin{aligned} S_{\rm scalar}^{(2)}&=&\frac{M_p^2}{2}\int dt\,d^3k\,\Bigg\{ \left[3(1-3\lambda)+2(\gamma_1+3\gamma_2+9\gamma_3+3\gamma_4)\kappa^2\right]\left\vert \dot{\psi}+\frac{k^2}{6}\dot{E}\right\vert^2 +2\,k^2(\alpha+\alpha_2\kappa^2+\alpha_4\kappa^4) \left\vert A\right\vert^2 \nonumber\\ &&\qquad\qquad\qquad\quad +2k^2 \left[\beta+2 (3\beta_1+8\beta_2)\kappa^2+2(3\beta_3+8\beta_4)\kappa^4\right]\left\vert \psi+\frac{k^2}{6}E\right\vert^2 \nonumber\\ &&\qquad\qquad\qquad\quad +k^4\left[1-\lambda+2(\gamma_1+\gamma_2+\gamma_3+\gamma_4)\kappa^2\right]\left\vert B -\frac{\dot{E}}{2}\right\vert^2 \nonumber\\ &&\qquad\qquad\qquad\quad +2\,k^2(\beta-2\alpha_1\kappa^2+2\alpha_3\kappa^4)\left[A^\star \left(\psi+\frac{k^2}{6}E\right)+{\rm c.c.}\right] \nonumber\\ &&\qquad\qquad\qquad\quad +k^2\left[1-3\lambda+2(\gamma_1+\gamma_2+3\gamma_3+2\gamma_4)\kappa^2\right]\left[ \left(B -\frac{\dot{E}}{2}\right)^\star\left(\dot{\psi}+\frac{k^2}{6}\dot{E}\right)+{\rm c.c}\right]\Bigg\}\,, \label{eq:actscalar}\end{aligned}$$ where “c.c.” denotes the complex conjugate of the preceding expression. Observing that the combinations $\psi+k^2\,E/6$ and $B-\dot{E}/2$ are 3D diffeomorphism invariant, the invariance of the above action is manifest. 
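The claimed 3D diffeomorphism invariance of the combinations $\psi + k^2 E/6$ and $B - \dot{E}/2$ can be verified in one line each, using the standard scalar gauge-transformation rules in Fourier space under $x^i \to x^i + \partial^i\xi$; these rules are quoted here as inputs, not derived:

```python
import sympy as sp

k, xi, xid = sp.symbols('k xi xidot')
psi, E, B, Ed = sp.symbols('psi E B Edot')

# Under x^i -> x^i + partial^i xi (Fourier mode xi), the standard scalar
# transformation rules are:
#   E -> E + 2 xi,     psi -> psi - k^2 xi / 3,
#   B -> B + xidot,    Edot -> Edot + 2 xidot.
Psi = psi + k**2 * E / 6
Psi_shifted = (psi - k**2*xi/3) + k**2*(E + 2*xi)/6
print(sp.simplify(Psi_shifted - Psi))        # 0: gauge invariant

Bcomb = B - Ed/2
Bcomb_shifted = (B + xid) - (Ed + 2*xid)/2
print(sp.simplify(Bcomb_shifted - Bcomb))    # 0: gauge invariant
```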
The action now contains two non-dynamical modes that are solved by $$\begin{aligned} B &=& -\frac{1-3\lambda+2(\gamma_1+\gamma_2+3\gamma_3+2\gamma_4)\kappa^2}{1-\lambda+2(\gamma_1+\gamma_2+\gamma_3+\gamma_4)\kappa^2}\,\left(\frac{\dot{\psi}}{k^2}+\frac{\dot{E}}{6}\right)\nonumber\\ &&+\frac{\dot{E}}{2} \,,\nonumber\\ A &=& -\frac{\beta-2\alpha_1\kappa^2+2\alpha_3\kappa^4}{\alpha+\alpha_2\kappa^2+\alpha_4\kappa^4}\left(\psi+\frac{k^2}{6}E\right)\,.\end{aligned}$$ Once these solutions are inserted back into the action, the remaining terms depend on $E$ and $\psi$; more specifically, only on the gauge invariant quantity $$\Psi \equiv \psi+ \frac{k^2}{6} E\,,$$ while the remaining (pure gauge) combination drops out of the action. Thus, we arrive at $$S_{\rm scalar}^{(2)} = M_p^2 \int dt \,d^3k \left( \frac{1-3\lambda+p_2\kappa^2+p_4\kappa^4}{1-\lambda+r_2\kappa^2}\vert \dot{\Psi}\vert^2 - M_p^2 \frac{q_2\kappa^2+q_4\kappa^4+q_6\kappa^6+q_8 \kappa^8+q_{10}\kappa^{10}}{\alpha+\alpha_2\kappa^2+\alpha_4\kappa^4}\left\vert \Psi\right\vert^2\right)\,, \label{eq:scalaraction}$$ where $$\begin{aligned} p_2&\equiv& 2\gamma_1(1-2\lambda)+2\gamma_2(2-3\lambda)+2(3\gamma_3+\gamma_4)\,,\nonumber\\ p_4&\equiv& 4\gamma_2(\gamma_1+\gamma_2+3\gamma_3+\gamma_4)+8\gamma_1\gamma_3-2\gamma_4^2\,,\nonumber\\ r_2&\equiv& 2(\gamma_1+\gamma_2+\gamma_3+\gamma_4)\,,\nonumber\\ q_2 &\equiv&\beta(\beta-\alpha)\,,\nonumber\\ q_4 &\equiv& -\beta(4\,\alpha_1+\alpha_2)-2\alpha(3\beta_1+8\beta_2)\,,\nonumber\\ q_6&\equiv& 4\alpha_1^2 + \beta(4\alpha_3-\alpha_4)-2\alpha(3\beta_3+8\beta_4) \nonumber\\ &&-2\,\alpha_2(3\beta_1+8 \beta_2)\,,\nonumber\\ q_8&\equiv& -8 \alpha_1\alpha_3-2\alpha_4(3\beta_1+8\beta_2)-2\alpha_2(3\beta_3+8\beta_4)\,,\nonumber\\ q_{10} &\equiv& 4\alpha_3^2-2\,\alpha_4(3\beta_3+8\beta_4)\,.\end{aligned}$$ Let us first recall that in the absence of the terms (\[eq:act-dk\]), i.e. in the standard Hořava theory, the dispersion is $\omega^2 \propto k^6$ in the UV.
In the presence of the $(\nabla K)^2$ terms (\[eq:act-dk\]) and for generic $\gamma_i$, the coefficient of $\vert\dot{\Psi}\vert^2$ goes as $k^2$ in the UV. As a result, the dispersion relation becomes $\omega^2\propto k^4$. In the case of the tensor modes, a sixth order dispersion relation can be obtained by tuning only $\gamma_2$ to zero. This is still not sufficient for having $z=3$ anisotropic scaling for the scalar mode. One needs to further impose the relation $\gamma_4^2=4\gamma_1\gamma_3$ so that the $p_4$ coefficient in the kinetic term will vanish. With this tuning, the coefficient of the kinetic term is constant in the UV, giving a dispersion relation $\omega^2 \propto k^6$ despite the existence of the higher order terms. Finally, the vector action (\[eq:vecact\]) is only sensitive to the $\gamma_1$ and $\gamma_2$ terms. Therefore, in order to simultaneously regulate the quadratic UV divergences in the gauge modes [*and*]{} to recover sixth order dispersion relations for the propagating modes, the necessary tuning is $$\gamma_2 = \gamma_4^2-4\,\gamma_1\gamma_3=0 \,,\qquad \gamma_1 \neq 0\,. \label{eq:tuning}$$ For the case considered in Ref. [@Pospelov:2010mp], only the $\gamma_1$ term is non-zero and the above conditions are trivially satisfied. We end this Section by noting that in the projectable limit $\alpha\to \infty$, the second term of Eq.(\[eq:actscalar\]) is dominated by the $k^6$ term in the UV, while the kinetic term remains unaffected. Therefore, we conclude that the tuning (\[eq:tuning\]) also results in a sixth order scalar dispersion relation in the projectable version. Power counting in the presence of mixed derivative terms {#sec:powercounting} ======================================================== In the previous Section, we have found that in the presence of the mixed derivative term $\nabla_iK_{jk} \nabla_l K_{mn}$ the dispersion relations of the propagating degrees reduce to fourth order ones, as opposed to the sixth order in standard Hořava gravity.
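The vanishing of $p_4$ under the tuning (\[eq:tuning\]) is a one-line algebra check; the following SymPy sketch confirms it and also shows that $p_2$ survives, so the tuned kinetic term stays non-degenerate:

```python
import sympy as sp

g1, g2, g3, g4, lam = sp.symbols('gamma_1 gamma_2 gamma_3 gamma_4 lambda')

# Kinetic-term coefficients as defined below Eq. (eq:scalaraction)
p2 = 2*g1*(1 - 2*lam) + 2*g2*(2 - 3*lam) + 2*(3*g3 + g4)
p4 = 4*g2*(g1 + g2 + 3*g3 + g4) + 8*g1*g3 - 2*g4**2

# Generic couplings: p4 is non-zero (e.g. all gammas equal to 1), so the
# kinetic coefficient grows like kappa^4 in the UV.
print(p4.subs({g1: 1, g2: 1, g3: 1, g4: 1}))   # 30, non-zero

# The tuning gamma_2 = 0, gamma_4^2 = 4 gamma_1 gamma_3 kills p4:
tuned_p4 = p4.subs(g2, 0).subs(g4**2, 4*g1*g3)
print(sp.simplify(tuned_p4))                   # 0

# while p2 (and hence the kinetic term) survives the tuning:
print(sp.simplify(p2.subs(g2, 0)))
```

For generic $\gamma_i$, then, the $\kappa^4$ piece of the kinetic coefficient survives and the scalar dispersion relation is only fourth order in the UV; the tuned case restores a constant kinetic coefficient and hence $\omega^2\propto k^6$.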
This appears to compromise power-counting renormalizability, given the fact that the latter is argued based on $z=3$ anisotropic scaling of the propagators. Our result indicates that it is actually possible to choose the coefficients of the mixed-derivative terms in such a way as to retain sixth order dispersion relations for all modes, and still modify the UV behavior of the vector modes. So, one could potentially avoid the divergences uncovered in Ref. [@Pospelov:2010mp] and still maintain $z=3$ anisotropic scaling in the UV for all modes, but this would require tuning of the coefficients of the mixed-derivative terms. Our next step is to explore whether such tuning is indeed still necessary for power-counting renormalizability once mixed-derivative terms have been added. Recall that the main motivation in introducing this tuning is based on the bias that a fourth order dispersion relation is not power-counting renormalizable. However, this expectation arises from the power-counting performed in the presence of canonical kinetic terms. When mixed-derivative terms are included, the canonical kinetic term does not have to be dominant in the UV. It is therefore not at all obvious that the usual power counting argument continues to hold. In order to concretely discuss this issue in a simplified setting, we focus on the Lifshitz scalar in D+1 dimensions. This is anyway the basis of all power-counting arguments in Hořava gravity. We consider the Lagrangian $${\cal L} =\alpha \,\dot{\phi}^2 -\beta\,\dot{\phi}\triangle\dot{\phi}-\gamma \,\phi (-\triangle)^z \phi\,. \label{eq:mixedlifshitz}$$ Let us allow for an arbitrary anisotropic scaling $$t\to b^{-m} t\,,\qquad x^i\to b^{-1}\,x^i\,. \label{eq:aniscalgeneral}$$ In the standard case where $\beta=0$, renormalizability requires that $z=m=D$ [@Visser:2009fg; @Visser:2009ys]. With these choices one can set $\alpha=\gamma=1$ without loss of generality, and the scalar field turns out to be dimensionless.
It is then straightforward to argue that, if interactions of the type $g_n\phi^n$ are added, $g_n$ will have positive momentum dimensions for any $n$, a standard sign of renormalizability. Let us now suppose $\beta\neq 0$ and try to treat the corresponding term as a deformation of the standard case while retaining the same scaling dimensions. Being quadratic in both temporal and spatial derivatives, this term would (naïvely) be an 8th order operator when $D=3$, so one arrives at a contradiction: it can hardly be considered as a simple deformation. In fact, one expects this term to be the dominant operator with time derivatives in the UV. As we will see below, even if one considers the mixed-derivative term as a leading operator in the UV and attempts to change the scaling dimensions accordingly, ambiguities still remain. Although we find that the dimensional argument is inadequate, it demonstrates how the interpretation of the mixed term as a deformation can lead us to misleading results. Dimensional counting {#subsec:naivecounting} -------------------- Let us repeat the power-counting arguments in a bit more detail, this time allowing for different choices of normalization and scaling. This will highlight the potential pitfalls of power-counting arguments. As a first example, we consider canonical normalization for the usual kinetic term by choosing $\alpha=1$ in Eq.(\[eq:mixedlifshitz\]). In this normalization, we have $[\beta] = [k]^{-2}$ and $[\gamma] = [k]^{-2(z-m)}$, where $[k]$ denotes the dimension of the momentum, which scales as $k \to b\,k$. Moreover, we fix the units such that the operators that we expect to be dominant in the UV have the same scaling rule, imposing $[\beta] = [\gamma]$, or $m = z-1$.
This allows us to rewrite the Lagrangian in the following form $${\cal L}_{1} =\dot{\phi}^2 -\frac{1}{M^2}\,\dot{\phi}\triangle\dot{\phi}-\frac{\lambda}{M^2}\,\phi (-\triangle)^z \phi\,, \label{eq:naiveexample1}$$ where $\lambda$ is a dimensionless constant and $M$ is some scale with dimensions of momentum. Imposing that the action be dimensionless, we find that the momentum dimension of the scalar field is $$[\phi] = [k]^{(D-m)/2}\,.$$ This result is the same as in the canonical Lifshitz scalar case, due to the choice of normalization for the first term in (\[eq:naiveexample1\]). The scalar field is dimensionless for $m=D$, in which case the coefficients of non-derivative self interactions $g_n \phi^n$ have $[g_n] = [k]^{2\,D}$. However, for $m=D$ one has $z= D+1$, unlike the standard Lifshitz scalar where $z=D$. In 3+1 dimensions, this corresponds to having the usual anisotropic scaling law for the time and spatial coordinates, while the spatial derivative part of the action \[the last term in eq. (\[eq:naiveexample1\])\] is 8th order in derivatives. The mixed derivative operators would then scale as the eighth power of the momentum. However, the result is a by-product of the specific normalization adopted in eq. (\[eq:naiveexample1\]). In this normalization, the standard kinetic term is rendered canonical, even though the mixed-derivative term is expected to be the dominant operator that carries time derivatives in the UV. This does not seem to be a sensible choice of normalization. The result indeed changes if we choose the normalization in (\[eq:mixedlifshitz\]) such that $\beta=1$, while still requiring the UV dominant operators to have the same scaling rule. Since the latter condition again imposes $m=z-1$, we now have $[\alpha]=[k]^2$ and $[\gamma]=[k]^0$, leading to the Lagrangian $${\cal L}_{2} =M^2 \dot{\phi}^2 -\dot{\phi}\triangle\dot{\phi}-\lambda\,\phi (-\triangle)^z \phi\,.
\label{eq:naiveexample2}$$ For this example, the momentum dimension of the scalar field is $$[\phi] = [k]^{(D-m-2)/2} \,,$$ i.e. it is dimensionless for $z=m+1= D-1$, so that the coefficients of the self-interaction terms have $[g_n]=[k]^{2(D-1)}$. In $3+1$ dimensions, this corresponds to [*relativistic*]{} scaling and 4th order gradient terms. This second example seems to suggest that the mixed derivative term actually improves the UV behavior of the theory. However, the relativistic scaling implies that operators with 4 time derivatives come at the same order as the mixed derivative operator or operators with 4 spatial gradients. With this scaling there is no justification for not including 4th order time derivatives in the action. As is well known, though, including such operators would lead to extra degrees of freedom and potential loss of unitarity. Superficial degree of divergence {#subsec:superficial} -------------------------------- The existence of two drastically different results for the same theory illustrates that the naïve counting method is highly dependent on the choice of scaling and normalization, and can therefore be confusing. Though it does seem straightforward that canonically normalizing the usual kinetic term is not the way to go, in order to remove any ambiguity we calculate the superficial degree of divergence, in the fashion of Refs. [@Visser:2009fg; @Visser:2009ys]. This method allows us to identify the cut-off dependence of the diagrams without relying on the dimensional arguments.
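As a quick check of the dimensional bookkeeping in the two examples above, the quoted field dimensions follow from demanding scale invariance of the dominant kinetic term under $t\to b^{-m}t$, $x^i\to b^{-1}x^i$, $\phi\to b^{s}\phi$; a minimal SymPy sketch:

```python
import sympy as sp

D, m, s = sp.symbols('D m s')

# Under the scaling, each time derivative contributes +m to the exponent
# of b, each spatial derivative +1, and the measure dt d^D x contributes
# -(m + D).

# Example 1: canonical kinetic term, int dt d^D x phi_dot^2
exp1 = -(m + D) + 2*(m + s)
s1 = sp.solve(sp.Eq(exp1, 0), s)[0]
print(s1)      # equals (D - m)/2, as quoted for L_1

# Example 2: mixed-derivative kinetic term, int dt d^D x phi_dot Lap phi_dot
exp2 = -(m + D) + 2*(m + s) + 2
s2 = sp.solve(sp.Eq(exp2, 0), s)[0]
print(s2)      # equals (D - m - 2)/2, as quoted for L_2

# A dimensionless field (s = 0) then picks out m = D for L_1,
# but m = D - 2 (i.e. z = m + 1 = D - 1) for L_2:
print(sp.solve(sp.Eq(s1, 0), m)[0], sp.solve(sp.Eq(s2, 0), m)[0])
```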
For the Lagrangian in Eq.(\[eq:mixedlifshitz\]), the dimensions of the coupling constant are related through $$[\alpha]\,[k]^{2m} = [\beta]\,[k]^{2m+2}=[\gamma] \,[k]^{2\,z}\,,$$ which allows us to rewrite (\[eq:mixedlifshitz\]) as $${\cal L} = \beta\left[\lambda\,M^2\dot{\phi}^2-\dot{\phi}\triangle\dot{\phi} - M^{2(m-z+1)}\,\phi (-\triangle)^z\phi\right]\,.$$ Using the equation of motion for the Lifshitz scalar, $$\beta\,\left[-\lambda\,M^2\,\ddot{\phi}+\triangle\ddot{\phi}-M^{2(m-z+1)}(-\triangle)^z\phi\right]=0\,,$$ the Green’s function in the UV, i.e. $k \gg \sqrt{\lambda}M$, can be immediately calculated as $$G_{\omega, k}= \frac{1}{k^2\beta\,[\omega^2-M^{2(m-z+1)}k^{2(z-1)}]}\,. \label{eq:lifshitzgreen}$$ Thus, the dependence of each internal line on the momentum cut-off $\Lambda_k$ is $$G_{\omega,k} \to \beta^{-1}M^{-2(m-z+1)}\Lambda_k^{-2z}\,.$$ For the loop integrals, we need to impose a different cut-off $\Lambda_\omega$ for the energy. The dependence of the latter on the momentum cut-off can be inferred from the poles of the propagator, giving $\Lambda_\omega= M^{m-z+1}\Lambda_k^{z-1}$. Thus the contribution from each loop in a diagram is $$\int d\omega d^Dk \to \Lambda_\omega\,\Lambda_k^D = M^{m-z+1}\Lambda_k^{z+D-1}\,.$$ We first consider non-derivative interactions, where the vertices do not contribute to the cut-off dependence. Thus, for a diagram with $I$ internal lines and $L$ loops, the dependence on the momentum cut-off is $$\beta^{-I} \,M^{(m-z+1)(L-2\,I)}\,\Lambda_k^{L(D+z-1)-2\,I\,z}\,, \label{eq:mixed-feynmann}$$ giving the superficial degree of divergence $$\delta = (D+z-1)L-2\,I\,z = (D-z-1)L-2\,(I-L)z\,.$$ Since $L$ loops require at least $L$ internal lines, we obtain $$\delta \leq (D-z-1) L\,.$$ This implies that if $z\geq D-1$, the diagrams are, at most, logarithmically divergent. 
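The bound $\delta\leq(D-z-1)L$ and the logarithmic-divergence statement for $z\geq D-1$ can be exercised numerically; a small Python sweep over diagrams (with $I\geq L$, as noted above):

```python
from itertools import product

def delta(D, z, L, I):
    """Superficial degree of divergence for a diagram with L loops and
    I internal lines and non-derivative vertices: (D + z - 1) L - 2 I z."""
    return (D + z - 1) * L - 2 * I * z

# The rewriting (D+z-1)L - 2Iz = (D-z-1)L - 2(I-L)z, together with
# I >= L, gives the bound delta <= (D-z-1)L.  Sweep some diagrams:
D = 3
for z, L in product(range(1, 6), range(1, 6)):
    for I in range(L, L + 6):          # L loops need at least L internal lines
        assert delta(D, z, L, I) <= (D - z - 1) * L

# For z >= D - 1 the bound is non-positive: at most logarithmic divergences.
for z in range(D - 1, 6):
    for L in range(1, 6):
        assert (D - z - 1) * L <= 0
print("bounds verified for D = 3")
```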
For $D=3$, the mixed-derivative theory with relativistic scaling and relativistic dispersion relations is power-counting renormalizable with gradient terms $z\geq 2$. The propagator (\[eq:lifshitzgreen\]) now contains an overall factor of $k^{-2}$ which ameliorates the UV behavior, alleviating the need for more than $4$ gradients in the action. As already mentioned in the previous section, the relativistic scaling is worrisome, as it implies that 4th order time derivative operators are not higher order and should be taken into consideration. Their presence would compromise unitarity without changing the renormalizability properties. This situation is reminiscent of the renormalization of higher derivative gravity [@Stelle:1976gc]. There the dispersion relation is also relativistic and the presence of the higher order derivatives (and the extra degrees of freedom) improves the UV behavior but breaks the unitarity [@Stelle:1976gc]. The superficial degree of divergence also exposes the limitations of the dimensional counting. In the latter, each momentum dimension is implicitly assumed to contribute one power of the momentum cut-off. However, this assumption is not correct if coefficients of the relevant terms are dimensionful. The dimensional counting can be trusted only in a setup in which $\beta$ and $M$ drop out of the amplitudes; this corresponds to the normalization $\beta=1$ and choice of units with $m=z-1$, which is the second example studied in Sec.\[subsec:naivecounting\]. This result further demonstrates that the mixed derivative terms cannot be interpreted as deformations of the canonical Lifshitz scalar. We can further extend the analogy with the Lifshitz scalar to mimic derivative self-interactions of the graviton. Following Ref. [@Visser:2009ys], we consider the action $${\cal L} = - \dot{\phi}\triangle \dot{\phi} +P(\nabla^{2\,z},\phi)\,,$$ where $P(\nabla^{2\,z},\phi)$ is an infinite order polynomial for the field, with up to $2z$ derivatives. 
For the free field, i.e. at the quadratic level, the action contains spatial derivative terms up to $\phi\triangle^{z}\phi$, so the propagator in the UV is still given by Eq.(\[eq:lifshitzgreen\]) with $\beta=1$ and $m=z-1$. The major difference from the previous case comes from the vertices, which can bring at most $2z$ powers of momentum. Thus, the superficial degree of divergence for a diagram with $V$ vertices satisfies $$\delta \leq (D+z-1)L-2\,I\,z +2\,z\,V= (D-z-1)L-2\,(I-L-V)z\,,$$ which can be simplified using the topological identity $V + L-I = 1$ to give $$\delta \leq (D-z-1)L+2\,z\,.$$ As long as $z\geq D-1$, we have $\delta \leq 2\,z$, where the superficial degree of divergence is bounded from above by the canonical dimension of the operators explicitly included in the bare action. This is an indication of power-counting renormalizability. Discussion {#sec:discussion} ========== Hořava gravity has an extra scalar propagating degree of freedom with respect to general relativity. Additionally, the usual spin-2 graviton, which both theories propagate, behaves differently in Hořava gravity due to the presence of terms with higher-order spatial derivatives in the action. In contrast, the gauge vector modes do not get any contribution from these higher-derivative terms, thus their propagators are identical to the ones in GR. As a result, as has been shown in Ref. [@Pospelov:2010mp], Lorentz violations in the Standard Model sector have quadratic sensitivity to the cut-off stemming from the gauge loops. Supplementing the action with mixed-derivative terms — terms that contain both temporal and spatial derivatives — has been suggested as a potential way to regulate these divergences. We have considered here the most general action of non-projectable Hořava gravity, extended with terms containing two time derivatives and two spatial ones. We have carried out a full perturbative analysis.
This analysis revealed that the mixed derivative terms can drastically change the behavior of the propagators. The dispersion relations generically become fourth order in the UV, [*i.e.*]{} $\omega^2 \sim k^4$. This could compromise power-counting renormalizability, which required 6th order dispersion relations in the standard theory. However, we also find that a tuning of the coefficients of the mixed-derivative terms that reinstates the sixth order dispersion relations does exist. A difficulty one encounters is that renormalizability arguments in standard Hořava gravity are based on anisotropic scaling and on the analogy with the Lifshitz scalar. The mixed-derivative terms do not seem to straightforwardly fit in this logic, and one might rightfully question whether 6th order dispersion relations are really necessary. In order to explore this issue further and avoid the complications that one has to face when dealing with a theory with multiple degrees of freedom, we have considered the Lifshitz scalar itself, extended by adding mixed-derivative terms. We have shown that the mixed-derivative terms actually appear to improve the UV behavior and the theory can be renormalizable even with 4th order dispersion relations. However, this comes at a high price: the scaling between space and time is actually relativistic and terms with 4th order time derivatives appear to come at the same order as those included in the action. Hence, one expects that this theory will cease to be unitary once quantum corrections are taken into account. Therefore, to the extent that one can transfer the intuition coming from the Lifshitz scalar to Hořava gravity, tuning the coefficients of the mixed-derivative terms so as to have 6th order dispersion relations and anisotropic scaling seems preferable. Note that such a tuning does not obstruct the effect of the mixed-derivative terms on the gauge modes.
This is particularly important in order to suppress the Lorentz violations in the matter sector (it is the motivation for adding mixed-derivative terms in the first place). However, the pertinent question is whether such a tuning could be technically natural. Our whole analysis is based on linearized theory (as is power-counting renormalizability in the first place). The tuning appears technically natural in linearized theory, but our approach cannot address radiative stability beyond the linear level. More work in this direction is needed in order to conclude whether adding mixed-derivative terms in Hořava gravity is a viable way to cure the quadratic divergences related to the vector mode found in Ref. [@Pospelov:2010mp]. We are grateful to Jorma Louko, Maxim Pospelov and Matt Visser for a critical reading of the manuscript and helpful comments. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement n. 306425 “Challenging General Relativity”. [99]{} C. M. Will, Living Rev. Rel.  [**9**]{}, 3 (2006) \[gr-qc/0510072\]. J. F. Donoghue, Phys. Rev. D [**50**]{}, 3874 (1994) \[gr-qc/9405057\]. K. S. Stelle, Phys. Rev. D [**16**]{}, 953 (1977). E.M. Lifshitz, Zh. Eksp. Teor. Fiz. [**11**]{}, 255; 269 (1941). P. Horava, Phys. Rev. D [**79**]{}, 084008 (2009) \[arXiv:0901.3775 \[hep-th\]\]. R. L. Arnowitt, S. Deser and C. W. Misner, Gen. Rel. Grav.  [**40**]{}, 1997 (2008) \[gr-qc/0405109\]. M. Visser, Phys. Rev. D [**80**]{}, 025011 (2009) \[arXiv:0902.0590 \[hep-th\]\]. M. Visser, arXiv:0912.4757 \[hep-th\]. D. Vernieri and T. P. Sotiriou, Phys. Rev. D [**85**]{}, 064003 (2012) \[arXiv:1112.3385 \[hep-th\]\]. D. Vernieri and T. P. Sotiriou, J. Phys. Conf. Ser.  [**453**]{}, 012022 (2013) \[arXiv:1212.4402 \[hep-th\]\]. M. Pospelov and Y. Shang, Phys. Rev. D [**85**]{}, 105001 (2012) \[arXiv:1010.5249 \[hep-th\]\]. S. Liberati, L. Maccione and T. P.
Sotiriou, Phys. Rev. Lett.  [**109**]{}, 151602 (2012) \[arXiv:1207.0670 \[gr-qc\]\]. S. Liberati, Class. Quant. Grav.  [**30**]{}, 133001 (2013) \[arXiv:1304.5795 \[gr-qc\]\]. S. Groot Nibbelink and M. Pospelov, Phys. Rev. Lett.  [**94**]{}, 081601 (2005) \[hep-ph/0404271\]. P. Jain and J. P. Ralston, Phys. Lett. B [**621**]{}, 213 (2005) \[hep-ph/0502106\]. W. Xue, arXiv:1008.5102 \[hep-th\]. D. Redigolo, Phys. Rev. D [**85**]{}, 085009 (2012) \[arXiv:1106.2035 \[hep-th\]\]. O. Pujolas and S. Sibiryakov, JHEP [**1201**]{}, 062 (2012) \[arXiv:1109.4495 \[hep-th\]\]. I. Kimpton and A. Padilla, JHEP [**1304**]{}, 133 (2013) \[arXiv:1301.6950 \[hep-th\]\]. C. Charmousis, G. Niz, A. Padilla and P. M. Saffin, JHEP [**0908**]{}, 070 (2009) \[arXiv:0905.2579 \[hep-th\]\]. M. Li and Y. Pang, JHEP [**0908**]{}, 015 (2009) \[arXiv:0905.2751 \[hep-th\]\]. D. Blas, O. Pujolas and S. Sibiryakov, JHEP [**0910**]{}, 029 (2009) \[arXiv:0906.3046 \[hep-th\]\]. K. Koyama and F. Arroja, JHEP [**1003**]{}, 061 (2010) \[arXiv:0910.1998 \[hep-th\]\]. A. Papazoglou and T. P. Sotiriou, Phys. Lett. B [**685**]{}, 197 (2010) \[arXiv:0911.1299 \[hep-th\]\]. M. Henneaux, A. Kleinschmidt and G. Lucena Gómez, Phys. Rev. D [**81**]{}, 064002 (2010) \[arXiv:0912.0399 \[hep-th\]\]. D. Blas, O. Pujolas and S. Sibiryakov, Phys. Lett. B [**688**]{}, 350 (2010) \[arXiv:0912.0550 \[hep-th\]\]. I. Kimpton and A. Padilla, JHEP [**1007**]{}, 014 (2010) \[arXiv:1003.5666 \[hep-th\]\]. A. Padilla, J. Phys. Conf. Ser.  [**259**]{}, 012033 (2010) \[arXiv:1009.4074 \[hep-th\]\]. P. Horava and C. M. Melby-Thompson, Phys. Rev. D [**82**]{}, 064027 (2010) \[arXiv:1007.2410 \[hep-th\]\]. T. Zhu, F. W. Shu, Q. Wu and A. Wang, Phys. Rev. D [**85**]{}, 044053 (2012) \[arXiv:1110.5106 \[hep-th\]\]. D. Blas, O. Pujolas and S. Sibiryakov, Phys. Rev. Lett.  [**104**]{}, 181302 (2010) \[arXiv:0909.3525 \[hep-th\]\]. D. Blas, O. Pujolas and S. Sibiryakov, JHEP [**1104**]{}, 018 (2011) \[arXiv:1007.3503 \[hep-th\]\]. E. 
Kiritsis and G. Kofinas, Nucl. Phys. B [**821**]{}, 467 (2009) \[arXiv:0904.1334 \[hep-th\]\]. S. Mukohyama, Class. Quant. Grav.  [**27**]{}, 223101 (2010) \[arXiv:1007.5199 \[hep-th\]\]. T. P. Sotiriou, J. Phys. Conf. Ser.  [**283**]{}, 012034 (2011) \[arXiv:1010.3218 \[hep-th\]\]. T. P. Sotiriou, M. Visser and S. Weinfurtner, Phys. Rev. Lett.  [**102**]{}, 251601 (2009) \[arXiv:0904.4464 \[hep-th\]\]. T. P. Sotiriou, M. Visser and S. Weinfurtner, JHEP [**0910**]{}, 033 (2009) \[arXiv:0905.2798 \[hep-th\]\]. [^1]: Alternatively, one can introduce supersymmetry to suppress Lorentz violating operators at low energies [@GrootNibbelink:2004za; @Jain:2005as; @Xue:2010ih], although such constructions are highly non-trivial beyond free theories [@Redigolo:2011bv; @Pujolas:2011sk]. [^2]: Other types of divergences, as well as a loss of unitarity were uncovered in Ref. [@Kimpton:2013zb], once matter fields are introduced. [^3]: In the non-projectable theory, the time reparametrization invariance $t\to t+f(t)$ is not sufficient to fix any of the coordinate dependent perturbations.
--- abstract: 'We present a measurement of the inclusive production of [$\Upsilon$]{}mesons in U+U collisions at $\sqrt{s_{NN}}=193$ GeV at mid-rapidity ($|y|<1$). Previous studies in central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV show a suppression of [${\Upsilon(\text{1S+2S+3S})}$]{}production relative to expectations from the [$\Upsilon$]{}yield in [*[p+p]{}*]{} collisions scaled by the number of binary nucleon-nucleon collisions ([$N_\mathrm{coll}$]{}), with an indication that the [${\Upsilon(\text{1S})}$]{}state is also suppressed. The present measurement extends the number of participant nucleons in the collision ([$N_\mathrm{part}$]{}) by 20% compared to Au+Au collisions, and allows us to study a system with higher energy density. We observe a suppression in both the [${\Upsilon(\text{1S+2S+3S})}$]{} and [${\Upsilon(\text{1S})}$]{} yields in central U+U data, which consolidates and extends the previously observed suppression trend in Au+Au collisions.' author: - 'L. Adamczyk' - 'J. K. Adkins' - 'G. Agakishiev' - 'M. M. Aggarwal' - 'Z. Ahammed' - 'I. Alekseev' - 'D. M. Anderson' - 'R. Aoyama' - 'A. Aparin' - 'D. Arkhipkin' - 'E. C. Aschenauer' - 'M. U. Ashraf' - 'A. Attri' - 'G. S. Averichev' - 'X. Bai' - 'V. Bairathi' - 'R. Bellwied' - 'A. Bhasin' - 'A. K. Bhati' - 'P. Bhattarai' - 'J. Bielcik' - 'J. Bielcikova' - 'L. C. Bland' - 'I. G. Bordyuzhin' - 'J. Bouchet' - 'J. D. Brandenburg' - 'A. V. Brandin' - 'I. Bunzarov' - 'J. Butterworth' - 'H. Caines' - 'M. Calder[ó]{}n de la Barca S[á]{}nchez' - 'J. M. Campbell' - 'D. Cebra' - 'I. Chakaberia' - 'P. Chaloupka' - 'Z. Chang' - 'A. Chatterjee' - 'S. Chattopadhyay' - 'J. H. Chen' - 'X. Chen' - 'J. Cheng' - 'M. Cherney' - 'W. Christie' - 'G. Contin' - 'H. J. Crawford' - 'S. Das' - 'L. C. De Silva' - 'R. R. Debbe' - 'T. G. Dedovich' - 'J. Deng' - 'A. A. Derevschikov' - 'L. Didenko' - 'C. Dilks' - 'X. Dong' - 'J. L. Drachenberg' - 'J. E. Draper' - 'C. M. Du' - 'L. E. Dunkelberger' - 'J. C. Dunlop' - 'L. G. 
Efimov' - 'J. Engelage' - 'G. Eppley' - 'R. Esha' - 'S. Esumi' - 'O. Evdokimov' - 'O. Eyser' - 'R. Fatemi' - 'S. Fazio' - 'P. Federic' - 'J. Fedorisin' - 'Z. Feng' - 'P. Filip' - 'E. Finch' - 'Y. Fisyak' - 'C. E. Flores' - 'L. Fulek' - 'C. A. Gagliardi' - 'D.  Garand' - 'F. Geurts' - 'A. Gibson' - 'M. Girard' - 'L. Greiner' - 'D. Grosnick' - 'D. S. Gunarathne' - 'Y. Guo' - 'A. Gupta' - 'S. Gupta' - 'W. Guryn' - 'A. I. Hamad' - 'A. Hamed' - 'R. Haque' - 'J. W. Harris' - 'L. He' - 'S. Heppelmann' - 'S. Heppelmann' - 'A. Hirsch' - 'G. W. Hoffmann' - 'S. Horvat' - 'H. Z. Huang' - 'B. Huang' - 'T. Huang' - 'X.  Huang' - 'P. Huck' - 'T. J. Humanic' - 'G. Igo' - 'W. W. Jacobs' - 'A. Jentsch' - 'J. Jia' - 'K. Jiang' - 'S. Jowzaee' - 'E. G. Judd' - 'S. Kabana' - 'D. Kalinkin' - 'K. Kang' - 'K. Kauder' - 'H. W. Ke' - 'D. Keane' - 'A. Kechechyan' - 'Z. Khan' - 'D. P. Kikoła ' - 'I. Kisel' - 'A. Kisiel' - 'L. Kochenda' - 'D. D. Koetke' - 'L. K. Kosarzewski' - 'A. F. Kraishan' - 'P. Kravtsov' - 'K. Krueger' - 'L. Kumar' - 'M. A. C. Lamont' - 'J. M. Landgraf' - 'K. D.  Landry' - 'J. Lauret' - 'A. Lebedev' - 'R. Lednicky' - 'J. H. Lee' - 'Y. Li' - 'C. Li' - 'X. Li' - 'W. Li' - 'X. Li' - 'T. Lin' - 'M. A. Lisa' - 'F. Liu' - 'Y. Liu' - 'T. Ljubicic' - 'W. J. Llope' - 'M. Lomnitz' - 'R. S. Longacre' - 'X. Luo' - 'S. Luo' - 'G. L. Ma' - 'R. Ma' - 'L. Ma' - 'Y. G. Ma' - 'N. Magdy' - 'R. Majka' - 'A. Manion' - 'S. Margetis' - 'C. Markert' - 'H. S. Matis' - 'D. McDonald' - 'S. McKinzie' - 'K. Meehan' - 'J. C. Mei' - 'Z.  W. Miller' - 'N. G. Minaev' - 'S. Mioduszewski' - 'D. Mishra' - 'B. Mohanty' - 'M. M. Mondal' - 'D. A. Morozov' - 'M. K. Mustafa' - 'B. K. Nandi' - 'Md. Nasim' - 'T. K. Nayak' - 'G. Nigmatkulov' - 'T. Niida' - 'L. V. Nogach' - 'T. Nonaka' - 'J. Novak' - 'S. B. Nurushev' - 'G. Odyniec' - 'A. Ogawa' - 'K. Oh' - 'V. A. Okorokov' - 'D. Olvitt Jr.' - 'B. S. Page' - 'R. Pak' - 'Y. X. Pan' - 'Y. Pandit' - 'Y. Panebratsev' - 'B. Pawlik' - 'H. Pei' - 'C. Perkins' - 'P.  
Pile' - 'J. Pluta' - 'K. Poniatowska' - 'J. Porter' - 'M. Posik' - 'A. M. Poskanzer' - 'N. K. Pruthi' - 'M. Przybycien' - 'J. Putschke' - 'H. Qiu' - 'A. Quintero' - 'S. Ramachandran' - 'R. L. Ray' - 'R. Reed' - 'M. J. Rehbein' - 'H. G. Ritter' - 'J. B. Roberts' - 'O. V. Rogachevskiy' - 'J. L. Romero' - 'J. D. Roth' - 'L. Ruan' - 'J. Rusnak' - 'O. Rusnakova' - 'N. R. Sahoo' - 'P. K. Sahu' - 'I. Sakrejda' - 'S. Salur' - 'J. Sandweiss' - 'A.  Sarkar' - 'J. Schambach' - 'R. P. Scharenberg' - 'A. M. Schmah' - 'W. B. Schmidke' - 'N. Schmitz' - 'J. Seger' - 'P. Seyboth' - 'N. Shah' - 'E. Shahaliev' - 'P. V. Shanmuganathan' - 'M. Shao' - 'M. K. Sharma' - 'A. Sharma' - 'B. Sharma' - 'W. Q. Shen' - 'Z. Shi' - 'S. S. Shi' - 'Q. Y. Shou' - 'E. P. Sichtermann' - 'R. Sikora' - 'M. Simko' - 'S. Singha' - 'M. J. Skoby' - 'D. Smirnov' - 'N. Smirnov' - 'W. Solyst' - 'L. Song' - 'P. Sorensen' - 'H. M. Spinka' - 'B. Srivastava' - 'T. D. S. Stanislaus' - 'M.  Stepanov' - 'R. Stock' - 'M. Strikhanov' - 'B. Stringfellow' - 'T. Sugiura' - 'M. Sumbera' - 'B. Summa' - 'Y. Sun' - 'Z. Sun' - 'X. M. Sun' - 'B. Surrow' - 'D. N. Svirida' - 'Z. Tang' - 'A. H. Tang' - 'T. Tarnowsky' - 'A. Tawfik' - 'J. Th[ä]{}der' - 'J. H. Thomas' - 'A. R. Timmins' - 'D. Tlusty' - 'T. Todoroki' - 'M. Tokarev' - 'S. Trentalange' - 'R. E. Tribble' - 'P. Tribedy' - 'S. K. Tripathy' - 'O. D. Tsai' - 'T. Ullrich' - 'D. G. Underwood' - 'I. Upsal' - 'G. Van Buren' - 'G. van Nieuwenhuizen' - 'R. Varma' - 'A. N. Vasiliev' - 'R. Vertesi' - 'F. Videb[æ]{}k' - 'S. Vokal' - 'S. A. Voloshin' - 'A. Vossen' - 'G. Wang' - 'J. S. Wang' - 'F. Wang' - 'Y. Wang' - 'Y. Wang' - 'J. C. Webb' - 'G. Webb' - 'L. Wen' - 'G. D. Westfall' - 'H. Wieman' - 'S. W. Wissink' - 'R. Witt' - 'Y. Wu' - 'Z. G. Xiao' - 'G. Xie' - 'W. Xie' - 'K. Xin' - 'Z. Xu' - 'H. Xu' - 'N. Xu' - 'J. Xu' - 'Y. F. Xu' - 'Q. H. Xu' - 'Y. Yang' - 'Y. Yang' - 'S. Yang' - 'Q. Yang' - 'Y. Yang' - 'C. Yang' - 'Z. Ye' - 'Z. Ye' - 'L. Yi' - 'K. Yip' - 'I. -K. Yoo' - 'N. 
Yu' - 'H. Zbroszczyk' - 'W. Zha' - 'J. Zhang' - 'Z. Zhang' - 'J. Zhang' - 'S. Zhang' - 'X. P. Zhang' - 'J. B. Zhang' - 'Y. Zhang' - 'S. Zhang' - 'J. Zhao' - 'C. Zhong' - 'L. Zhou' - 'X. Zhu' - 'Y. Zoulkarneeva' - 'M. Zyzak'
title: '[$\Upsilon$]{}production in U+U collisions at $\sqrt{s_{NN}}=193$ GeV with the STAR experiment'
---

Introduction
============

Quarkonium production in high energy heavy-ion collisions is expected to be sensitive to the energy density and temperature of the medium created in these collisions. Dissociation of different quarkonium states due to color screening is predicted to depend on their binding energies [@Digal:2001iu; @Wong:2004zr; @Cabrera:2006nt]. Measuring the yields of different quarkonium states therefore may serve as a model-dependent measure of the temperature in the medium [@Mocsy:2007jz]. Although charmonium suppression was anticipated as a key signature of the formation of a quark-gluon plasma (QGP) [@Matsui:1986dk], the suppression of $J/\psi$ mesons has been found to be relatively independent of beam energy from Super Proton Synchrotron (SPS) to Relativistic Heavy Ion Collider (RHIC) energies [@Adare:2006ns]. This phenomenon can be attributed to $J/\psi$ regeneration by the recombination of uncorrelated $c$-$\bar{c}$ pairs in the deconfined medium [@Grandchamp:2004tn] that counterbalances the dissociation process. In addition, cold nuclear matter (CNM) effects, dissociation in the hadronic phase, and feed-down contributions from excited charmonium states and $B$ hadrons can alter the suppression pattern from what would be expected from Debye screening. In contrast to the case of the more abundantly produced charm quarks, bottom-quark pair recombination and co-mover absorption effects are predicted to be negligible at RHIC energies [@Rapp:2008tf].
Bottomonium states in heavy-ion collisions therefore can serve as a cleaner probe of the medium, although initial state effects may still play an important role [@Grandchamp:2005yw; @Adamczyk:2013poh; @afraw; @Vogt:2012fba; @Arleo:2012rs]. Feed-down from $\chi_b$ mesons, the yield of which is largely unknown at RHIC energies, may also give a non-negligible contribution to the bottomonium yields. Monte Carlo Glauber simulations show that collisions of large, deformed uranium nuclei reach on average a higher number of participant nucleons ([$N_\mathrm{part}$]{}) and higher number of binary nucleon-nucleon collisions ([$N_\mathrm{coll}$]{}) than gold-gold collisions of the same centrality class. It was estimated that central U+U collisions at $\sqrt{s_{NN}}$=193 GeV have an approximately 20% higher energy density, thus higher temperature, than that in central Au+Au collisions at $\sqrt{s_{NN}}$=200 GeV [@Kikola:2011zz; @Mitchell:2013mza]. Lattice quantum-chromodynamics (QCD) calculations at finite temperature suggest that the color screening radius decreases with increasing temperature as $r_D(T) \sim 1/T$, which implies that a given quarkonium state cannot form above a certain temperature threshold [@Petrov:2007zza]. Free-energy-based spectral function calculations predict that the excited [${\Upsilon(\text{2S+3S})}$]{}states cannot exist above $1.2 T_c$ and that the ground state [${\Upsilon(\text{1S})}$]{} cannot exist above approximately $2 T_c$, where $T_c$ is the critical temperature of the phase transition [@Mocsy:2007jz]. Around the onset of deconfinement, one may see a sudden drop in the production of a given [$\Upsilon$]{}state when the threshold temperature of that state (or of higher mass states that decay into it) is reached. According to Ref. [@Kikola:2011zz], in the 5% most central U+U collisions at $\sqrt{s_{NN}}=193$ GeV, $T/T_c$ is between 2 and 2.7, depending on the [$\Upsilon$]{} formation time chosen in calculations. 
For a given formation time, the value of $T/T_c$ is approximately 5% higher than in the 5% most central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. In such a scenario the temperature present in central U+U collisions is high enough that even the [${\Upsilon(\text{1S})}$]{}state might dissociate. However, the finite size, lifetime and inhomogeneity of the plasma may complicate this picture and smear the turn-on of the melting of particular quarkonium states over a wide range of [$N_\mathrm{part}$]{}. The suppression of bottomonium states in U+U collisions, together with existing measurements in other collision systems as well as measurements of CNM effects, may provide the means to explore the turn-on characteristics of suppression and test the sequential melting hypothesis.

Experiment and analysis
=======================

This analysis uses data recorded in 2012 by the STAR experiment at RHIC in U+U collisions at $\sqrt{s_{NN}}=193$ GeV. We reconstruct the [$\Upsilon$]{} states via their dielectron decay channels, ${\ensuremath{\Upsilon}\xspace}\rightarrow{\ensuremath{e^{+}e^{-}}\xspace}$, based on the method described in Ref. [@Adamczyk:2013poh]. As a trigger we require at least one tower from the Barrel Electromagnetic Calorimeter (BEMC) [@Beddo:2002zx] within the pseudorapidity range $|\eta|<1$, containing a signal corresponding to an energy deposit that is higher than approximately 4.2 GeV. A total of 17.2 million BEMC-triggered events are analyzed, corresponding to an integrated luminosity of 263.4 $\mu$b$^{-1}$. The electron (or positron) candidate that caused the trigger signal is paired with other electron candidates within the same event. Tracks are reconstructed in the Time Projection Chamber (TPC) [@Ackermann:2002ad]. Electrons with a momentum $p>1.5$ GeV/$c$ are selected based on their specific energy loss ($dE/dx$) in the TPC.
Candidates are required to lie within an asymmetric window of $-1.2<{\ensuremath{{n\sigma_{e}}}\xspace}<3$, where [${n\sigma_{e}}$]{}is the deviation of the measured $dE/dx$ with respect to the nominal $dE/dx$ value for electrons at a given momentum, calculated using the Bichsel parametrization [@Bichsel:2006cs], normalized with the TPC resolution. Figure \[fig:nSigE\] shows the efficiency of the [${n\sigma_{e}}$]{}cut ($\epsilon_{\ensuremath{{n\sigma_{e}}}\xspace}$) for single electrons versus transverse momentum ([$p_\mathrm{T}$]{}), determined using a high purity electron sample obtained from gamma conversions. Since most of these so-called photonic electron pairs are contained in the very low invariant mass ($m_{ee}$) regime, we select ${\ensuremath{e^{+}e^{-}}\xspace}$ pairs with $m_{ee}<150$ MeV/$c^2$ ($m_{ee}<50$ MeV/$c^2$ in systematics checks) in a similar manner to the analysis described in Ref. [@Agakishiev:2011mr]. To further enhance the purity of the electron sample we use the particle discrimination power of the BEMC. Electromagnetic showers tend to be more compact than hadron showers, and deposit their energy in fewer towers. The total energy deposit of an electron candidate ([$E_\mathrm{cluster}$]{}) is determined by finding a [*seed*]{} tower with a high energy deposit ([$E_\mathrm{tower}$]{}), and forming a [*cluster*]{} by joining the two highest-energy neighbours to this seed. An $R=\sqrt{\Delta\varphi^2+\Delta\eta^2}<0.04$ matching cut is applied on the distance of the seed tower position in the BEMC and the TPC track projected to the BEMC plane, expressed in azimuthal angle and pseudorapidity units. We reconstruct the quantity ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ for each electron candidate, where $p$ is the momentum of the electron candidate measured in the TPC. 
Electrons travelling close to the speed of light are expected to follow an ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ distribution centered at $c$, smeared by the TPC and BEMC detector resolutions. Therefore a $0.75 c <{\ensuremath{E_\mathrm{cluster}}\xspace}/p<1.4 c$ cut is applied to reject hadron background. The efficiency of this cut for single electrons ($\epsilon_{E/p}$), obtained from detector simulation studies, is shown in Fig. \[fig:EoP\]. Since the trigger is already biased towards more compact clusters, an [$\Upsilon$]{}candidate requires that the daughter electron candidate that fired the trigger fulfills a strict condition of ${\ensuremath{E_\mathrm{tower}}\xspace}/{\ensuremath{E_\mathrm{cluster}}\xspace}>0.7$, while the daughter paired to it is required to fulfill a looser ${\ensuremath{E_\mathrm{tower}}\xspace}/{\ensuremath{E_\mathrm{cluster}}\xspace}>0.5$ cut. ![\[fig:nSigE\] [*(Color online)*]{} Single electron efficiency of the $dE/dx$ cut versus transverse momentum, as determined by fits to [${n\sigma_{e}}$]{}distributions of photonic electrons. The fit errors using the sample with the $m_{ee}<150$ MeV/$c^2$ photonic electron cut in 1 GeV/$c$ wide bins are used as systematic uncertainties. The results using the $m_{ee}<50$ MeV/$c^2$ photonic electron cut are consistent with the former one.](nSigEff_pT.eps){width="\linewidth"} ![\[fig:EoP\] [*(Color online)*]{} Single electron efficiency of the ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ cut versus transverse momentum. The efficiency corrections are obtained from embedded simulations. 
The difference between the default result from simulations and that extracted using a pure electron sample from data is taken as the systematic uncertainty.](singleEoPeff_pT.eps){width="\linewidth"} The acceptance, as well as the tracking, the triggering and the BEMC cut efficiency correction factors are determined using simulations, where the ${\ensuremath{{\Upsilon(n\text{S})}}\xspace}\rightarrow{\ensuremath{e^{+}e^{-}}\xspace}$ processes ($n$=1,2,3) are embedded into U+U collision events, and then reconstructed in the same way as real data. The efficiency of the $dE/dx$ cut is determined by using the single electron efficiency from photonic electrons, as shown in Fig. \[fig:nSigE\]. The BEMC-related reconstruction efficiencies are also verified with a sample of electrons identified in the TPC. Figure \[fig:effs\] shows the reconstruction efficiencies for [${\Upsilon(\text{1S})}$]{}, [${\Upsilon(\text{2S})}$]{}and [${\Upsilon(\text{3S})}$]{}states separately, for 0–60% centrality, as well as for centrality bins 0–10%, 10–30%, 30–60%, and transverse momentum bins of ${\ensuremath{p_\mathrm{T}}\xspace}<2$ GeV/$c$, $2<{\ensuremath{p_\mathrm{T}}\xspace}<4$ GeV/$c$ and $4<{\ensuremath{p_\mathrm{T}}\xspace}<10$ GeV/$c$. ![\[fig:effs\] [*(Color online)*]{} Reconstruction efficiencies for [${\Upsilon(\text{1S})}$]{}, [${\Upsilon(\text{2S})}$]{} and [${\Upsilon(\text{3S})}$]{}, as determined from embedded simulations and identified electron samples. Cuts for [*i)*]{} acceptance, triggering and tracking, [*ii)*]{} specific energy loss, [*iii)*]{} track–cluster matching, [*iv)*]{} ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ and [*v)*]{} cluster compactness ([$E_\mathrm{tower}$]{}/[$E_\mathrm{cluster}$]{}) are applied consecutively to build up the total reconstruction efficiency. The efficiencies corresponding to each cut are shown stacked in a top-to-bottom order. Black ticks at the end of each bar represent the total uncertainties on the given efficiency. 
The [$p_\mathrm{T}$]{}-binned values correspond to 0-60% centrality. ](effs.eps){width="\linewidth"} The invariant mass spectrum of the [$\Upsilon$]{}candidates is reconstructed within the rapidity window $|y|<1$ using dielectron momenta measured in the TPC. Figure \[fig:invmass\] shows the $m_{ee}$ distribution of unlike-sign pairs as solid circles, along with the sum of the positive and negative like-sign distributions as open circles. The data are divided into three centrality bins, shown in Fig. \[fig:invmasspanels\], and three [$p_\mathrm{T}$]{}bins. ![\[fig:invmass\] [*(Color online)*]{} Reconstructed invariant mass distribution of [$\Upsilon$]{} candidates (unlike-sign pairs, denoted as solid circles) and like-sign combinatorial background (open circles) in U+U collisions at $\sqrt{s_{NN}}=193$ GeV for 0-60% centrality at mid-rapidity ($|y|<1$). Fits to the combinatorial background, [$b\bar{b}$]{}and Drell-Yan contributions and to the [$\Upsilon$]{}peaks are plotted as dash-dotted, dashed and solid lines respectively. The fitted contributions of the individual [${\Upsilon(\text{1S})}$]{}, [${\Upsilon(\text{2S})}$]{}and [${\Upsilon(\text{3S})}$]{}states are shown as dotted lines.](UpsInvMass_0.eps){width="\linewidth"} ![\[fig:invmasspanels\] [*(Color online)*]{} Reconstructed invariant mass distribution of [$\Upsilon$]{} candidates (solid circles) and like-sign combinatorial background (open circles) in U+U collisions at $\sqrt{s_{NN}}=193$ GeV for [$p_\mathrm{T}$]{}-integrated 0-10% (a) 10-30% (b) 30-60% (c) centralities at mid-rapidity ($|y|<1$). 
Fits to the combinatorial background, [$b\bar{b}$]{}and Drell-Yan contributions and the peak fits are plotted as dash-dotted, dashed and solid lines respectively.](UpsInvMassCent.eps){width="\linewidth"} The measured signal from each of the $\Upsilon(nS)\rightarrow{\ensuremath{e^{+}e^{-}}\xspace}$ processes ($n=1,2,3$) is parametrized with a Crystal Ball function [@Gaiser:1982yw], with parameters obtained from fits to the [${\Upsilon(n\text{S})}$]{}mass peaks from simulations. Such a shape was justified by preceding studies [@Abelev:2010am] and accounts for the effects of Bremsstrahlung and the momentum resolution of the TPC. The combinatorial background is modelled with a double exponential function. In addition, there is a sizeable correlated background from [$b\bar{b}$]{} decays and Drell-Yan processes. Based on previous studies [@Adamczyk:2013poh; @Abelev:2010am] we use a ratio of two power law functions that were found to adequately describe these contributions. In order to determine the $\Upsilon$ yield, a simultaneous log-likelihood fit is performed on the like-sign and the unlike-sign data. The unlike-sign data are fitted with a function that includes the combinatorial and correlated background shapes plus the three [$\Upsilon$]{}mass peaks, while the like-sign data is fitted with the combinatorial background shape only. The parameters of the mass peaks and those of the correlated background are fixed in the fit according to the simulations and previous studies [@Adamczyk:2013poh; @Abelev:2010am], respectively, except for normalization parameters. The contribution of each [${\Upsilon(n\text{S})}$]{}state to the total [${\Upsilon(\text{1S+2S+3S})}$]{} yield is determined based on the integral of the individual Crystal Ball functions that are fit to the measured peaks. The uncertainties quoted as statistical are the uncertainties from the fit. 
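As an aside, the Crystal Ball line shape used for the [$\Upsilon$]{}peaks can be sketched in a few lines: it is a Gaussian core matched in value and slope to a power-law tail on the low-mass side, modeling bremsstrahlung energy loss. The parameter values below (mass, resolution, tail parameters) are purely illustrative choices, not the fitted STAR values:

```python
import math

def crystal_ball(x, mean, sigma, alpha, n):
    """Unnormalized Crystal Ball shape: Gaussian core for
    (x - mean)/sigma > -alpha, power-law tail below, with the two
    branches matched in value and slope at the transition point."""
    t = (x - mean) / sigma
    if t > -alpha:
        return math.exp(-0.5 * t * t)          # Gaussian core
    # tail coefficients enforcing continuity at t = -alpha
    a = (n / abs(alpha)) ** n * math.exp(-0.5 * alpha * alpha)
    b = n / abs(alpha) - abs(alpha)
    return a * (b - t) ** (-n)                 # low-side power-law tail

# hypothetical parameters: peak near the Upsilon(1S) mass of 9.46 GeV/c^2,
# 150 MeV/c^2 resolution, alpha = 1, n = 2
shape = [crystal_ball(m, 9.46, 0.15, 1.0, 2.0) for m in (9.0, 9.46, 10.0)]
```

The asymmetric tail is what lets the fit absorb the radiative smearing of the dielectron invariant mass without biasing the peak position.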
Systematic uncertainties
========================

We consider several sources of systematic uncertainties in the present study. Geometrical acceptance is affected by [$\Upsilon$]{} polarization as well as by noisy towers that are not used in the reconstruction. The systematics stemming from these factors, estimated in Ref. [@Adamczyk:2013poh], are taken as fully correlated between collision systems. The geometrical acceptance correction factor is dependent on the [$p_\mathrm{T}$]{}and rapidity distributions of the [$\Upsilon$]{}mesons. We assume a Boltzmann-like [$p_\mathrm{T}$]{}-distribution, $\frac{dN}{d{\ensuremath{p_\mathrm{T}}\xspace}} \propto \frac{{\ensuremath{p_\mathrm{T}}\xspace}}{\exp({{\ensuremath{p_\mathrm{T}}\xspace}}/{p_0})+1}$, in our embedded simulations. We obtain a slope parameter of $p_0=1.11$ GeV/$c$ from a parametrized interpolation of [*[p+p]{}*]{} data from ISR, CDF and CMS measurements [@Acosta:2001gv; @Kourkoumelis:1980hg; @Khachatryan:2010zg], similar to Ref. [@Adamczyk:2013poh]. This value is consistent within statistical errors with the fit to the [$p_\mathrm{T}$]{}spectrum of the current analysis, detailed in Sec. \[sec:results\], although the two differ slightly. The uncertainty from the slope is determined by adjusting the slope to match the fitted value, $p_0=1.37$ GeV/$c$. The rapidity distribution is determined using PYTHIA [@Sjostrand:1993yb] version 8.1 to follow an approximately Gaussian shape with $\sigma=1.15$. We vary the width between 1.0 and 1.16 to cover the range of the uncertainties of the Gaussian fit, as well as estimations of earlier studies [@Adamczyk:2013poh]. The uncertainty of the TPC track reconstruction efficiency caused by the variation in operational conditions was studied in Refs. [@Adamczyk:2013poh; @Adler:2001yq]. The errors of the Gaussian fits to the [${n\sigma_{e}}$]{} distribution of photonic electrons are taken as the uncertainties on the electron identification using the TPC ($dE/dx$).
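The Boltzmann-like [$p_\mathrm{T}$]{}-parametrization used as embedding input is simple enough to evaluate numerically. A minimal sketch (the integration grid and upper cutoff are arbitrary choices for illustration, not part of the analysis) confirming that moving the slope parameter from 1.11 GeV/$c$ to the fitted 1.37 GeV/$c$ hardens the spectrum:

```python
import numpy as np

def boltzmann_shape(pt, p0):
    """dN/dpT proportional to pT / (exp(pT/p0) + 1), the embedding-input shape."""
    return pt / (np.exp(pt / p0) + 1.0)

def mean_pt(p0, pt_max=30.0, n=30001):
    """Mean pT of the shape, via a simple sum on a uniform grid (GeV/c)."""
    pt = np.linspace(0.0, pt_max, n)
    w = boltzmann_shape(pt, p0)
    return float((pt * w).sum() / w.sum())

# slope parameters quoted in the text, in GeV/c
soft, hard = mean_pt(1.11), mean_pt(1.37)
```

The mean [$p_\mathrm{T}$]{}of this shape scales linearly with $p_0$, which is why the acceptance correction is sensitive to the assumed slope.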
Changing the photonic electron selection from the default $m_\mathrm{ee}<150$ MeV/$c^2$ to $m_{ee}<50$ MeV/$c^2$, or using TPC-identified electrons instead of photonic ones yield a result that is consistent with the default choice within systematic uncertainties. Figure \[fig:nSigE\] shows the systematic uncertainty corresponding to the $dE/dx$ single electron efficiency as a band around the data points. The uncertainty stemming from the trigger turn-on characteristics, from the criteria of electron selection with the BEMC (matching, ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$, as well as the cluster compactness ${\ensuremath{E_\mathrm{tower}}\xspace}/{\ensuremath{E_\mathrm{cluster}}\xspace}$) are determined from the comparison of efficiencies calculated from embedded simulations and from electron samples obtained from data using TPC ($dE/dx$) identification and reconstructed photonic conversion electrons. The dominant source of systematic uncertainty among those listed above is the uncertainty of the ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ cut efficiency. In Fig. \[fig:EoP\] we indicate the systematic uncertainty corresponding to the single electron ${\ensuremath{E_\mathrm{cluster}}\xspace}/p$ efficiency with a band around the data points. Another major source of uncertainty arises from the assumptions of the signal and background shapes made in extracting the signal yield. 
The extraction method was systematically modified to estimate the uncertainties from momentum resolution and calibration, functional shapes of the correlated and combinatorial backgrounds as well as the signal, and those from the fit range in the following ways: [*i)*]{} An additional 50 MeV/$c^2$ smearing was added to the peaks to model a worst-case scenario in the momentum resolution [@Abelev:2010am]; [*ii)*]{} The double exponential fit function used for the combinatorial background was replaced with a single exponential function; [*iii)*]{} Instead of modelling the correlated background with a ratio of two power law functions, we used a single power law function to commonly represent the Drell-Yan and [$b\bar{b}$]{}contributions, and we also tested the sum of these two functions to represent the Drell-Yan and [$b\bar{b}$]{}contributions individually in the fitting; [*iv)*]{} Finally, we moved the lower and upper limits of the simultaneous fit range in several steps from 6.6 to 8.0 GeV/$c^2$ and from 15.4 to 12.4 GeV/$c^2$ respectively. The [$\Upsilon$]{}yields were determined in each case, and the maximum deviation from the default case in positive or negative direction was taken as the signal extraction uncertainty. We construct the nuclear modification factor, [$R_\mathrm{AA}$]{}, to quantify the medium effects on the production of the [$\Upsilon$]{}states. 
The [$R_\mathrm{AA}$]{}is computed by comparing the corrected number of [$\Upsilon$]{}mesons measured in A+A collisions to the yield in [*[p+p]{}*]{} collisions scaled by the average number of binary nucleon-nucleon collisions, as ${\ensuremath{R_\mathrm{AA}}\xspace}^\Upsilon=\frac{\sigma^{inel}_{pp}}{\sigma^{inel}_{AA}}\frac{1}{\langle{\ensuremath{N_\mathrm{coll}}\xspace}\rangle}\frac{{\ensuremath{B_{ee}}\xspace}\times(d\sigma^{AA}_{\ensuremath{\Upsilon}\xspace}/ dy)}{{\ensuremath{B_{ee}}\xspace}\times(d\sigma^{pp}_{\ensuremath{\Upsilon}\xspace}/ dy)}$ , where $\sigma^{inel}_{AA(pp)}$ is the total inelastic cross-section of the U+U ([*[p+p]{}*]{}) collisions, $d\sigma^{AA(pp)}_{\ensuremath{\Upsilon}\xspace}/dy$ denotes the [$\Upsilon$]{}production cross-section in U+U ([*[p+p]{}*]{}) collisions, and ${\ensuremath{B_{ee}}\xspace}$ is the branching ratio of the $\Upsilon\rightarrow{\ensuremath{e^{+}e^{-}}\xspace}$ process. Our reference was measured in [*[p+p]{}*]{} collisions at $\sqrt{s}=200$ GeV [@Abelev:2010am], and has to be scaled to $\sqrt{s}=193$ GeV. Calculations for the [*[p+p]{}*]{} inelastic cross-section [@Schuler:1993wr] yield a 0.5% smaller value at $\sqrt{s}=193$ GeV than at $\sqrt{s}=200$ GeV. The [$\Upsilon$]{}production cross-section, however, shows a stronger dependence on the collision energy. Both the NLO color-evaporation model calculations, which describe the world [*[p+p]{}*]{} data [@Bedjidian:2004gd], and a linear interpolation of the same data points within the RHIC-LHC energy regime yield an approximately 4.6% decrease in the cross section when $\sqrt{s}$ is changed from 200 to 193 GeV. The uncertainties do not exceed 0.5% (absolute) in any of these corrections, and are thus neglected. The values used to compute [$R_\mathrm{AA}$]{}are $\left. {\ensuremath{B_{ee}}\xspace}\times(d\sigma^{pp}_{\ensuremath{\Upsilon}\xspace}/dy)\right|_{|y|<1}=60.64$ pb, $\sigma^{inel}_{pp} = 42.5$ mb and $\sigma^{inel}_{UU} = 8.14$ b. 
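As a consistency check, the $R_\mathrm{AA}$ definition can be evaluated directly with the quoted inputs; taking $\langle N_\mathrm{coll}\rangle = 459$ for 0-60% centrality (Table \[tab:glauber\]) and the 0-60% [${\Upsilon(\text{1S+2S+3S})}$]{}cross-section of 4.27 $\mu$b (Table \[tab:result\]) reproduces the quoted central value up to rounding:

```python
# all cross-sections converted to barns; values as quoted in the text and tables
sigma_inel_pp = 42.5e-3        # p+p inelastic cross-section, 42.5 mb
sigma_inel_uu = 8.14           # U+U inelastic cross-section, 8.14 b
bee_dsigma_pp = 60.64e-12      # B_ee x dsigma/dy in p+p at |y|<1, 60.64 pb
bee_dsigma_uu = 4.27e-6        # B_ee x dsigma/dy, 0-60% U+U, 4.27 microbarn
n_coll = 459.0                 # <N_coll> for 0-60% centrality

raa = (sigma_inel_pp / sigma_inel_uu) / n_coll * (bee_dsigma_uu / bee_dsigma_pp)
```

This yields $R_\mathrm{AA} \approx 0.80$, compatible with the quoted $0.82 \pm 0.17$ given the rounding of the published inputs.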
The [$N_\mathrm{part}$]{}and [$N_\mathrm{coll}$]{}values used in this analysis, computed using the Monte Carlo Glauber model [@Alver:2008aq] following the method of Ref. [@Masui:2009qk], are listed in Table \[tab:glauber\].

  [centrality]{}   [$N_\mathrm{part}$]{}   [$N_\mathrm{coll}$]{}
  ---------------- ----------------------- -----------------------
  0-60%            188.3$\pm$5.5           459$\pm$10
  0-10%            385.1$\pm$9             1146$\pm$49
  10-30%           236.2$\pm$14            574$\pm$41
  30-60%           91.0$\pm$32             154$\pm$37

  : \[tab:glauber\]The [$N_\mathrm{coll}$]{}and [$N_\mathrm{part}$]{}values corresponding to different centrality ranges, obtained using the Monte Carlo Glauber model.

The systematic uncertainties for U+U collisions at 0-60% centrality are summarized in Table \[tab:syst\]. The total relative systematic uncertainty on ${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{\Upsilon}\xspace}$, calculated as a quadratic sum of the uncertainties listed in the table excluding common normalization uncertainties from the [*p+p*]{} reference measurements, ranges from 15% to 27% depending on centrality and [$p_\mathrm{T}$]{}.

                                        value (%)
  ----------------------------------- -------------------
                                      2.2
                                      $^{+1.7}_{-3.0}$
                                      2.1
                                      $^{+1.1}_{-3.6}$
                                      11.8
                                      $^{+4.0}_{-6.4}$
                                      5.4
                                      $^{+8.8}_{-13.2}$
                                      2.0
  [${\Upsilon(\text{1S+2S+3S})}$]{}   $^{+8.4}_{-7.0}$
  [${\Upsilon(\text{1S})}$]{}         $^{+11.9}_{-5.7}$
  [${\Upsilon(\text{2S+3S})}$]{}      $^{+5.3}_{-19.7}$

  : \[tab:syst\]Major systematic uncertainties excluding common normalization uncertainties from the [*[p+p]{}*]{} reference, for 0-60% centrality data.

Results {#sec:results}
=======

The production cross-sections are summarized in Table \[tab:result\] for the sum of all three [$\Upsilon$]{} states, the separated [${\Upsilon(\text{1S})}$]{}state, and for the excited [${\Upsilon(\text{2S+3S})}$]{}states together.
  states                              centrality   ${\ensuremath{B_{ee}}\xspace}\times (d\sigma^\Upsilon_{AA} /dy)$ ($\mu$b)   ${\ensuremath{R_\mathrm{AA}}\xspace}^\Upsilon$
  ----------------------------------- ------------ --------------------------------- ---------------------------------
  [${\Upsilon(\text{1S+2S+3S})}$]{}   0–60%        $4.27 \pm 0.90^{+0.90}_{-0.82}$   $0.82 \pm 0.17^{+0.14}_{-0.11}$
                                      0–10%        $6.64 \pm 4.22^{+1.95}_{-1.66}$   $0.51 \pm 0.32^{+0.13}_{-0.11}$
                                      10–30%       $3.67 \pm 1.62^{+1.04}_{-0.78}$   $0.56 \pm 0.25^{+0.14}_{-0.10}$
                                      30–60%       $3.42 \pm 1.04^{+0.57}_{-0.97}$   $1.96 \pm 0.59^{+0.51}_{-0.68}$
  [${\Upsilon(\text{1S})}$]{}         0–60%        $3.55 \pm 0.77^{+0.80}_{-0.66}$   $0.96 \pm 0.21^{+0.18}_{-0.13}$
                                      0–10%        $4.52 \pm 2.08^{+1.31}_{-1.13}$   $0.49 \pm 0.23^{+0.12}_{-0.10}$
                                      10–30%       $2.91 \pm 1.10^{+0.85}_{-0.61}$   $0.63 \pm 0.24^{+0.17}_{-0.11}$
                                      30–60%       $3.42 \pm 0.95^{+0.57}_{-0.97}$   $2.76 \pm 0.76^{+0.71}_{-0.95}$
  [${\Upsilon(\text{2S+3S})}$]{}      0–60%        $0.72 \pm 0.49^{+0.15}_{-0.19}$   $0.48 \pm 0.32^{+0.07}_{-0.11}$
                                      0–10%        $2.11 \pm 3.33^{+0.64}_{-0.54}$   $0.56 \pm 0.89^{+0.15}_{-0.12}$
                                      10–30%       $0.76 \pm 1.03^{+0.29}_{-0.16}$   $0.41 \pm 0.55^{+0.15}_{-0.07}$

  : \[tab:result\]Cross-sections multiplied by the branching ratio of the leptonic channel, and nuclear modification of [${\Upsilon(\text{1S+2S+3S})}$]{}mesons, the ground states and the excited states separately, in 0–60% U+U collisions as well as in each centrality bin. The uncertainties are listed statistical first and systematic second.

The statistical uncertainties from the [*[p+p]{}*]{} reference, not included in the table, are 12.7%, 13.0% and 30% for the [${\Upsilon(\text{1S+2S+3S})}$]{}, [${\Upsilon(\text{1S})}$]{}and [${\Upsilon(\text{2S+3S})}$]{}respectively. There is an additional 11% common normalization uncertainty on [$R_\mathrm{AA}$]{}from the [*[p+p]{}*]{} luminosity estimation [@Adamczyk:2013poh]. Table \[tab:spectra\] lists the cross-sections in the given [$p_\mathrm{T}$]{}ranges for [${\Upsilon(\text{1S+2S+3S})}$]{}and [${\Upsilon(\text{1S})}$]{}.
  states                              [$p_\mathrm{T}$]{} (GeV/$c$)   ${\ensuremath{B_{ee}}\xspace}\times\frac{d^2 \sigma^\Upsilon_{AA}}{d{\ensuremath{p_\mathrm{T}}\xspace}{}dy}$ $\left(\frac{{\ensuremath{\mu\mathrm{b}}\xspace}}{\mathrm{GeV}/c}\right)$
  ----------------------------------- ------------------------------ ---------------------------------
  [${\Upsilon(\text{1S+2S+3S})}$]{}   0–2                            $1.40 \pm 0.49^{+0.36}_{-0.23}$
                                      2–4                            $1.96 \pm 0.51^{+0.42}_{-0.43}$
                                      4–10                           $0.53 \pm 0.77^{+0.20}_{-0.11}$
  [${\Upsilon(\text{1S})}$]{}         0–2                            $1.30 \pm 0.39^{+0.28}_{-0.22}$
                                      2–4                            $1.61 \pm 0.43^{+0.35}_{-0.35}$
                                      4–10                           $0.30 \pm 0.38^{+0.17}_{-0.05}$

  : \[tab:spectra\]Cross-sections multiplied by the branching ratio of the leptonic channel, in given [$p_\mathrm{T}$]{} ranges for the [${\Upsilon(\text{1S+2S+3S})}$]{}and [${\Upsilon(\text{1S})}$]{}states in 0–60% U+U collisions.

The [$p_\mathrm{T}$]{} spectrum is well described by a Boltzmann distribution with slope parameters of $p_0^{\ensuremath{{\Upsilon(\text{1S+2S+3S})}}\xspace}=(1.37\pm0.20^{+0.03}_{-0.07})$ GeV/$c$ and $p_0^{\ensuremath{{\Upsilon(\text{1S})}}\xspace}=(1.22\pm0.15^{+0.04}_{-0.05})$ GeV/$c$. These values are consistent with the interpolation from [*[p+p]{}*]{} data within uncertainties. The [${\Upsilon(\text{1S+2S+3S})}$]{}and [${\Upsilon(\text{1S})}$]{}nuclear modification factors as a function of [$N_\mathrm{part}$]{}are shown in Fig. \[fig:raa-data\], and compared to the nuclear modification factor in Au+Au data at $\sqrt{s_{NN}}=200$ GeV from STAR [@Adamczyk:2013poh] at $|\eta|<1$, PHENIX [@Adare:2014hje] at $|\eta|<0.35$, and in Pb+Pb data measured by CMS at $\sqrt{s_{NN}}=2.76$ TeV via the $\Upsilon\rightarrow\mu^+\mu^-$ channel within $|\eta|<2.4$ [@Chatrchyan:2012lxa]. ![\[fig:raa-data\] [*(Color online)*]{} [${\Upsilon(\text{1S+2S+3S})}$]{}(a) and [${\Upsilon(\text{1S})}$]{}(b) [$R_\mathrm{AA}$]{}vs.
[$N_\mathrm{part}$]{}in $\sqrt{s_{NN}}=193$ GeV U+U collisions (solid circles), compared to 200 GeV RHIC Au+Au (solid squares [@Adamczyk:2013poh] and hollow crosses [@Adare:2014hje]), and 2.76 TeV LHC Pb+Pb data (solid diamonds [@Chatrchyan:2012lxa]). A 95% lower confidence bound is indicated for the 30-60% centrality U+U data (see text). Each point is plotted at the center of its bin. Centrality integrated (0-60%) U+U and Au+Au data are also shown as open circles and squares, respectively.](raa_npart-data.eps){width="\linewidth"} ![\[fig:raa-model\] [*(Color online)*]{} [${\Upsilon(\text{1S+2S+3S})}$]{}(a) and [${\Upsilon(\text{1S})}$]{}(b) [$R_\mathrm{AA}$]{}vs. [$N_\mathrm{part}$]{}in $\sqrt{s_{NN}}=193$ GeV U+U collisions (solid circles), compared to different models [@Emerick:2011xu; @Strickland:2011aa; @Liu:2010ej], described in the text. The 95% lower confidence bound is indicated for the 30-60% centrality U+U data (see text). Each point is plotted at the center of its bin. Centrality integrated (0-60%) U+U and Au+Au data are also shown as open circles and squares, respectively.](raa_npart-model.eps){width="\linewidth"} The data points in the 30-60% centrality bin have large statistical and systematical uncertainties, providing little constraint on [$R_\mathrm{AA}$]{}. In Figs. \[fig:raa-data\] and \[fig:raa-model\] we therefore only show the 95% lower confidence bound for these points, derived by quadratically adding statistical and point-to-point systematic uncertainties. The [$R_\mathrm{AA}$]{}values measured in all [$N_\mathrm{part}$]{}bins for the [${\Upsilon(\text{1S+2S+3S})}$]{}, [${\Upsilon(\text{1S})}$]{}and [${\Upsilon(\text{2S+3S})}$]{}states are summarized in Table \[tab:result\]. Note that the [${\Upsilon(\text{1S})}$]{}results are not corrected for feed-down from the excited states. ![\[fig:binding\] [*(Color online)*]{} Quarkonium [$R_\mathrm{AA}$]{}versus binding energy in Au+Au and U+U collisions. 
Open symbols represent 0-60% centrality data, filled symbols are for 0-10% centrality. The [$\Upsilon$]{}measurements in U+U collisions are denoted by red points. In the case of Au+Au collisions, the [${\Upsilon(\text{1S})}$]{}measurement is denoted by a blue square, while for the [${\Upsilon(\text{2S+3S})}$]{}states, a blue horizontal line indicates a 95% upper confidence bound. The black diamonds mark the high-[$p_\mathrm{T}$]{} [$J/\psi$]{}measurement. The vertical lines represent nominal binding energies for the [$J/\psi$]{}and [${\Upsilon(\text{1S})}$]{}, calculated based on the mass defect, as $2m_D-m_{{\ensuremath{J/\psi}\xspace}}$ and $2m_B-m_{\ensuremath{\Upsilon}\xspace}$, respectively (where $m_X$ is the mass of the given meson $X$) [@Satz:2006kba]. The shaded area spans between the binding energies of [${\Upsilon(\text{2S})}$]{}and [${\Upsilon(\text{3S})}$]{}. The data points are slightly shifted to the left and right from the nominal binding energy values to improve their visibility. ](raa_binding.eps){width="\linewidth"} The trend marked by the Au+Au ${\ensuremath{R_\mathrm{AA}}\xspace}({\ensuremath{N_\mathrm{part}}\xspace})$ points is augmented by the U+U data. We observe no significant difference between the Au+Au and U+U results in any of the centrality classes, nor do we find any evidence of a sudden increase in suppression in central U+U compared to the central Au+Au data, although the precision of the current measurement does not exclude a moderate drop in [$R_\mathrm{AA}$]{}. Assuming that the difference in suppression between the Au+Au and U+U collisions is small, the two data sets can be combined. We carry out the combination using the BLUE method [@Valassi:2003mu; @Nisius:2014wua] with the conservative assumption that all common systematic uncertainties are fully correlated.
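The structure of such a BLUE combination can be sketched for two measurements whose statistical errors are uncorrelated but whose systematics are taken as fully correlated; the input numbers below are illustrative placeholders inspired by the quoted central values, not the actual combination inputs:

```python
import numpy as np

def blue_combine(x, stat, syst):
    """Best Linear Unbiased Estimate for measurements x, with uncorrelated
    statistical errors and fully correlated systematic errors."""
    x = np.asarray(x, float)
    stat = np.asarray(stat, float)
    syst = np.asarray(syst, float)
    # covariance: diagonal statistical part + rank-1 fully-correlated part
    cov = np.diag(stat**2) + np.outer(syst, syst)
    cinv = np.linalg.inv(cov)
    ones = np.ones_like(x)
    w = cinv @ ones / (ones @ cinv @ ones)   # BLUE weights, sum to 1
    xhat = float(w @ x)                      # combined central value
    err = float(np.sqrt(w @ cov @ w))        # combined uncertainty
    return xhat, err, w

# hypothetical inputs loosely modeled on two central-collision R_AA values
xhat, err, w = blue_combine(x=[0.63, 0.49], stat=[0.21, 0.23], syst=[0.15, 0.11])
```

Because each single measurement is itself a linear unbiased estimator, the BLUE result can never have a larger uncertainty than the most precise input.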
We find that [${\Upsilon(\text{1S})}$]{} production is significantly suppressed in central heavy-ion collisions at top RHIC energies, but this suppression is not complete: ${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{{\Upsilon(\text{1S})}}\xspace}=0.63 \pm 0.16 \pm 0.09$ where the first uncertainty includes both the unified statistical and systematic errors and the second one is the global scaling uncertainty from the [*p+p*]{} reference. While both the RHIC and LHC data show suppression in the most central bins, ${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{{\Upsilon(\text{1S})}}\xspace}$ is slightly, although not significantly, higher in RHIC semi-central collisions than in the LHC. In the Au+Au data, the [${\Upsilon(\text{2S+3S})}$]{}excited states have been found to be strongly suppressed, and an upper limit ${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{{\Upsilon(\text{2S+3S})}}\xspace}< 0.32$ was established. The [${\Upsilon(\text{2S+3S})}$]{}suppression observed in U+U data is consistent with this upper limit. In Fig. \[fig:raa-model\] we compare STAR measurements to different theoretical models [@Emerick:2011xu; @Strickland:2011aa; @Liu:2010ej]. An important source of uncertainty in model calculations for quarkonium dissociation stems from the unknown nature of the in-medium potential between the quark-antiquark pairs. Two limiting cases that are often used are the internal-energy-based heavy quark potential corresponding to a strongly bound scenario (SBS), and the free-energy-based potential corresponding to a more weakly bound scenario (WBS) [@Grandchamp:2005yw]. The model of Emerick, Zhao and Rapp [@Emerick:2011xu] includes CNM effects, dissociation of bottomonia in the hot medium (assuming a temperature $T=330$ MeV) and regeneration for both the SBS and WBS scenarios. 
The Strickland-Bazow model [@Strickland:2011aa] calculates dissociation in the medium in both a free-energy-based “model A” and an internal-energy-based “model B”, with an initial central temperature $428<T<442$ MeV. The model of Liu [*et al.*]{} [@Liu:2010ej] uses an internal-energy-based potential and an input temperature $T=340$ MeV. In Fig. \[fig:raa-model\] we show all three internal-energy-based models together with the “model A” of Ref. [@Strickland:2011aa] as an example for the free-energy-based models. The internal-energy-based models generally describe RHIC data well within the current uncertainties, while the free-energy-based models tend to underpredict the [$R_\mathrm{AA}$]{}, especially for the [${\Upsilon(\text{1S})}$]{}. Figure \[fig:binding\] shows the [$R_\mathrm{AA}$]{}versus binding energy of the [${\Upsilon(\text{1S})}$]{}and [${\Upsilon(\text{2S+3S})}$]{}states [@Satz:2006kba] in U+U and Au+Au collisions. The results are also compared to high-[$p_\mathrm{T}$]{}$J/\psi$ in Au+Au collisions [@Adamczyk:2012ey]. This comparison is motivated by the expectation from model calculations, e.g. that in Ref. [@Liu:2009nb], that charm recombination is moderate at higher momenta. Recent measurements at the LHC [@Khachatryan:2016xxp; @Khachatryan:2016ypw] indicate that the suppression of the [$\Upsilon$]{}production, as well as that of the prompt [$J/\psi$]{}in the ${\ensuremath{p_\mathrm{T}}\xspace}>5$ GeV/$c$ range, is rather independent of the momentum of the particle. Contrary to earlier assumptions [@Xu:1995eb; @Liu:2012zw], no noticeable [$p_\mathrm{T}$]{}or rapidity dependence was observed. However, the non-prompt [$J/\psi$]{}production [@Khachatryan:2016ypw], originating dominantly from $B$ meson decays, does show a clear [$p_\mathrm{T}$]{}dependence [@Adamczyk:2012ey]. This affects the [$p_\mathrm{T}$]{} dependence of inclusive [$J/\psi$]{}production, especially at high [$p_\mathrm{T}$]{}. 
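The mass-defect binding energies plotted in Fig. \[fig:binding\] can be reproduced directly from the meson masses; the rounded PDG values below are assumptions for illustration, not numbers quoted in the text:

```python
# Approximate meson masses in GeV (rounded PDG values; illustrative assumptions).
m_B, m_Upsilon_1S = 5.279, 9.460
m_D, m_Jpsi = 1.865, 3.097

# Mass-defect estimate of the binding energy: E_b = 2*m(open flavor) - m(quarkonium).
E_b_Upsilon_1S = 2 * m_B - m_Upsilon_1S  # about 1.1 GeV
E_b_Jpsi = 2 * m_D - m_Jpsi              # about 0.6 GeV
```

The much larger mass defect of the [${\Upsilon(\text{1S})}$]{} is what places it far to the right of the [$J/\psi$]{} on the binding-energy axis.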
Our current data do not have sufficient statistics to study the [$p_\mathrm{T}$]{}dependence of the [$\Upsilon$]{}in detail and to verify whether the observations at the LHC also hold at RHIC energies. The results in U+U collisions are consistent with the Au+Au measurements as well as with the expectations from the sequential melting hypothesis. Summary ======= We presented mid-rapidity measurements of inclusive bottomonium production in U+U collisions at $\sqrt{s_{NN}}=193$ GeV. The cross section is ${\ensuremath{B_{ee}}\xspace}\times(d\sigma^{\ensuremath{\Upsilon}\xspace}_{AA}/dy)= 4.27 \pm 0.90^{+0.90}_{-0.82} $ [$\mu\mathrm{b}$]{} for the [${\Upsilon(\text{1S+2S+3S})}$]{}, and ${\ensuremath{B_{ee}}\xspace}\times(d\sigma^{\ensuremath{{\Upsilon(\text{1S})}}\xspace}_{AA}/dy)= 3.55 \pm 0.77^{+0.80}_{-0.66} $ [$\mu\mathrm{b}$]{} for the separated [${\Upsilon(\text{1S})}$]{}state. The present measurements increased the range of the number of participants in the collision compared to the previous Au+Au measurements by approximately 20%. A significant suppression is observed in central U+U data for both the [${\Upsilon(\text{1S+2S+3S})}$]{} (${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{\Upsilon}\xspace}=0.51 \pm 0.32^{+0.13}_{-0.11} \pm0.08$, where the first uncertainty reflects the statistical error, the second the overall systematic uncertainty, and the third the uncertainty from the [*p+p*]{} reference) and the [${\Upsilon(\text{1S})}$]{} (${\ensuremath{R_\mathrm{AA}}\xspace}^{\ensuremath{{\Upsilon(\text{1S})}}\xspace}=0.49 \pm 0.23^{+0.12}_{-0.10} \pm0.09$), which consolidates and extends the previously observed [$R_\mathrm{AA}$]{}([$N_\mathrm{part}$]{}) trend in Au+Au collisions. The data from 0-60% central U+U collisions are consistent with a strong suppression of the [${\Upsilon(\text{2S+3S})}$]{}states, which has also been observed in Au+Au collisions. 
Comparison of the suppression patterns [from Au+Au and U+U data to different models]{} favors an internal-energy-based quark potential scenario. Acknowledgement {#acknowledgement .unnumbered} =============== We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S. NSF, the Ministry of Education and Science of the Russian Federation, NSFC, CAS, MoST and MoE of China, the National Research Foundation of Korea, NCKU (Taiwan), GA and MSMT of the Czech Republic, FIAS of Germany, DAE, DST, and UGC of India, the National Science Centre of Poland, National Research Foundation, the Ministry of Science, Education and Sports of the Republic of Croatia, and RosAtom of Russia. [10]{} S. Digal, P. Petreczky and H. Satz, Phys. Lett. B [**514**]{}, 57 (2001). C. Y. Wong, Phys. Rev. C [**72**]{}, 034906 (2005). D. Cabrera and R. Rapp, Eur. Phys. J. A [**31**]{}, 858 (2007). Á. Mócsy and P. Petreczky, Phys. Rev. Lett.  [**99**]{}, 211602 (2007). T. Matsui and H. Satz, Phys. Lett. B [**178**]{}, 416 (1986). A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [**98**]{}, 232301 (2007). L. Grandchamp, R. Rapp and G. E. Brown, J. Phys. G [**30**]{}, S1355 (2004). R. Rapp, D. Blaschke and P. Crochet, Prog. Part. Nucl. Phys.  [**65**]{}, 209 (2010). L. Grandchamp, S. Lumpkins, D. Sun, H. van Hees and R. Rapp, Phys. Rev. C [**73**]{}, 064906 (2006). R. Vogt, R. E. Nelson and A. D. Frawley, PoS ConfinementX, [**203**]{} (2012). F. Arleo and S. Peigne, JHEP [**1303**]{}, 122 (2013). A. Frawley \[PHENIX Collaboration\], Nucl. Phys. A [**932**]{}, 105 (2014). L. Adamczyk [*et al.*]{} \[STAR Collaboration\], Phys. Lett. B [**735**]{}, 127 (2014); L. Adamczyk [*et al.*]{} \[STAR Collaboration\], Phys. Lett. B [**743**]{}, 537 (2015). D. Kikola, G. Odyniec and R. Vogt, Phys. Rev. 
C [**84**]{}, 054907 (2011). J. T. Mitchell \[PHENIX Collaboration\], PoS CPOD [**2013**]{}, 003 (2013), and the corresponding talk at [*Quark Matter, Int. Conf. Ultra-Rel. Nucl.-Nucl. Coll. 2012*]{}. K. Petrov \[RBC-Bielefeld Collaboration\], J. Phys. G [**34**]{}, S639 (2007). M. Beddo [*et al.*]{} \[STAR Collaboration\], Nucl. Instrum. Meth. A [**499**]{}, 725 (2003). K. H. Ackermann [*et al.*]{} \[STAR Collaboration\], Nucl. Instrum. Meth. A [**499**]{}, 624 (2003). H. Bichsel, Nucl. Instrum. Meth. A [**562**]{}, 154 (2006). H. Agakishiev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. D [**83**]{}, 052006 (2011). J. Gaiser, SLAC Stanford - SLAC-255 (82,REC.JUN.83) 194p. B. I. Abelev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. D [**82**]{}, 012004 (2010). D. Acosta [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett.  [**88**]{}, 161802 (2002). C. Kourkoumelis [*et al.*]{}, Phys. Lett. B [**91**]{}, 481 (1980). V. Khachatryan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. D [**83**]{}, 112004 (2011). T. Sjöstrand, Comput. Phys. Commun.  [**82**]{}, 74 (1994). C. Adler [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett.  [**87**]{}, 112303 (2001). G. A. Schuler and T. Sjöstrand, Phys. Rev. D [**49**]{}, 2257 (1994). M. Bedjidian [*et al.*]{}, hep-ph/0311048. B. Alver, M. Baker, C. Loizides and P. Steinberg, arXiv:0805.4411 \[nucl-ex\]. H. Masui, B. Mohanty and N. Xu, Phys. Lett. B [**679**]{}, 440 (2009). A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**91**]{}, 024913 (2015). S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. Lett.  [**109**]{}, 222301 (2012). A. Valassi, Nucl. Instrum. Meth. A [**500**]{}, 391 (2003). R. Nisius, Eur. Phys. J. C [**74**]{}, 3004 (2014). A. Emerick, X. Zhao and R. Rapp, Eur. Phys. J. A [**48**]{}, 72 (2012). M. Strickland and D. Bazow, Nucl. Phys. A [**879**]{}, 25 (2012). Y. Liu, B. Chen, N. Xu and P. Zhuang, Phys. Lett. B [**697**]{}, 32 (2011). H. Satz, Nucl. Phys. A [**783**]{}, 249 (2007). L. 
Adamczyk [*et al.*]{} \[STAR Collaboration\], Phys. Lett. B [**722**]{}, 55 (2013). Y. Liu, Z. Qu, N. Xu and P. Zhuang, Phys. Lett. B [**678**]{}, 72 (2009). V. Khachatryan [*et al.*]{} \[CMS Collaboration\], arXiv:1611.01510 \[nucl-ex\]. V. Khachatryan [*et al.*]{} \[CMS Collaboration\], arXiv:1610.00613 \[nucl-ex\]. X. M. Xu, D. Kharzeev, H. Satz and X. N. Wang, Phys. Rev. C [**53**]{}, 3051 (1996). Y. Liu, N. Xu and P. Zhuang, Phys. Lett. B [**724**]{}, 73 (2013).
--- address: - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'FS-PETRA-S, Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, D-22607 Hamburg, Germany' - 'Molecular Physics, Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, D-14195 Berlin, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstr. 2, D-30167 Hannover, Germany' - 'Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstr. 2, D-30167 Hannover, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'LPQSD, Department of Physics, Faculty of Science, University Sétif-1, 19000, Setif, Algeria' - 'Joint Institute for Nuclear Research, Dubna, Moscow region 141980, Russia' - 'Institute of Mathematics, National University of Mongolia, Ulan-Bator, Mongolia' - 'Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991, Russia' - 'Joint Institute for Nuclear Research, Dubna, Moscow region 141980, Russia' - 'Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991, Russia' - 'FS-PETRA-S, Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, D-22607 Hamburg, Germany' - 'Sorbonne Universités, CNRS, UMR 7614, Laboratoire de Chimie Physique Matière et Rayonnement, F-75005 Paris, France' - 'Department of Physics and Astronomy, Uppsala University, P.O. Box 516, SE-751 20 Uppsala, Sweden' - 'Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstr. 2, D-30167 Hannover, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 
1, D-60438 Frankfurt, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' - 'Institut für Kernphysik, J. W. Goethe Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt, Germany' author: - Max Kircher - Florian Trinter - Sven Grundmann - 'Isabel Vela-Perez' - Simon Brennecke - Nicolas Eicke - Jonas Rist - Sebastian Eckart - Salim Houamer - Ochbadrakh Chuluunbaatar - 'Yuri V. Popov' - 'Igor P. Volobuev' - Kai Bagschick - 'M. Novella Piancastelli' - Manfred Lein - Till Jahnke - 'Markus S. Schöffler' - Reinhard Dörner title: Compton scattering near threshold --- [**Compton scattering is one of the fundamental interaction processes of light with matter. Already upon its discovery [@1] it was described as a billiard-type collision of a photon kicking a quasi-free electron. With decreasing photon energy, the maximum possible momentum transfer becomes so small that the corresponding energy falls below the binding energy of the electron. Then ionization by Compton scattering becomes an intriguing quantum phenomenon. Here we report a kinematically complete experiment on Compton scattering at helium atoms below that threshold. We determine the momentum correlations of the electron, the recoiling ion, and the scattered photon in a coincidence experiment, finding that electrons are not only emitted in the direction of the momentum transfer, but that there is also a second peak of ejection in the backward direction. This finding links Compton scattering to processes such as ionization by ultrashort optical pulses [@2], electron impact ionization [@3; @4], ion impact ionization [@5; @6], and neutron scattering [@7], where similar momentum patterns occur.**]{} Doubts about energy conservation in Compton scattering at the single-event level triggered the invention of coincidence measurement techniques by Bothe and Geiger [@8]. 
This historic experiment settled the dispute on the validity of conservation laws in quantum physics by showing that for each scattered photon there is an electron ejected in coincidence. Surprisingly, however, even 95 years after this pioneering work, coincidence experiments on the Compton effect are extremely scarce, and they are restricted to solid-state systems [@9; @10]. This lack of detailed experiments left further progress in the field of Compton scattering largely to theory. In the absence of suitable experimental techniques, much of the potential of Compton scattering as a tool in molecular physics has remained untapped [@11]. The small cross section of $10^{-24}$ cm$^2$ (six orders of magnitude below typical photoabsorption cross sections at the respective thresholds), together with the small collection solid angle of typical photon detectors, has prohibited coincidence experiments on free atoms and molecules up to now. In the present work, we have solved that problem by using the highly efficient COLd Target Recoil Ion Momentum Spectroscopy (COLTRIMS) technique [@12] to detect the electron and the ion momentum in coincidence. The He$^+$ ion and electrons with an energy smaller than 25 eV are detected with $4\pi$ collection solid angle. The momentum vector of the scattered photon can be obtained using momentum conservation, thereby circumventing the need for a photon detector. This allows us, for the first time, to obtain a kinematically complete data set of ionization by Compton scattering of atoms addressing the intriguing low-energy, near-threshold regime. It has been frequently pointed out in the theoretical literature that such complete measurements of the process – as opposed to detection of the emitted electron or scattered photon only – are the essential key for sensitive tests of theories [@13] as well as for a clean physical interpretation of the results [@14]. 
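The momentum-conservation trick that replaces a photon detector is simple vector bookkeeping. In the sketch below the momenta are hypothetical illustrative values in atomic units; only the 2.1 keV photon energy is taken from the experiment:

```python
import numpy as np

hartree_eV, c = 27.211386, 137.035999  # atomic-unit conversion constants

# Incoming 2.1 keV photon along z: |k1| = E1/c, about 0.56 a.u.
k1 = np.array([0.0, 0.0, 2100.0 / hartree_eV / c])

# Hypothetical COLTRIMS-measured momenta for one event (illustrative values).
p_e = np.array([0.20, 0.00, 0.30])      # electron
p_ion = np.array([-0.10, 0.10, -0.40])  # He+ ion

# Momentum conservation, k1 = k2 + p_e + p_ion, yields the scattered photon
k2 = k1 - p_e - p_ion
# and the momentum transfer Q = k1 - k2 = p_e + p_ion.
Q = k1 - k2
```

Because the photon momentum is of order 0.5 a.u. while typical detector resolutions are much smaller, the reconstructed $\boldsymbol{k}_2$ is well determined event by event.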
![**Scheme of ionization by Compton scattering at $h\nu=2.1$ keV.** **a**, The wavy lines indicate the incoming and outgoing photon, the green arrow depicts the momentum vector of the emitted electron. Dashed line: Thomson cross section, i.e. angular distribution of a photon scattering at a free electron. Black dots: experimental photon angular distribution for ionization of He by Compton scattering, integrated over all angles and energies of the emitted electrons below 25 eV. The statistical error is smaller than the dot size. Solid black line: $A^2$ approximation for all electron energies. Solid red line: $A^2$ approximation for electron energies below 25 eV. The calculations were done using approach A (see Methods). The solid and dotted lines are multiplied by a factor of 1.9. **b**, Momentum distribution of electrons emitted by Compton scattering of 2.1 keV photons at He. The coordinate frame is the same as in **a**, i.e. the plane is defined by the incoming (horizontal) and scattered photon (upper half plane). The momentum transfer points to the forward lower half plane. The data are integrated over the out-of-plane electron momentum components. **c**, He$^{+}$ ion momentum distribution for the same conditions as in **b**. Refer to the main text for an explanation of the feature marked **Recoil**.[]{data-label="fig1"}](fig1.pdf){width="\columnwidth"} For the case of Compton scattering at a [*quasi-free*]{} electron, the angular distribution of the scattered photon is given by the Thomson cross section (see Fig. \[fig1\]a). A binding of the electron modifies the binary scattering scenario by adding the ion as a third particle. The often-invoked impulse approximation accounts for one of the effects of that binding, namely the electron’s initial momentum distribution. According to this approximation, the initial electron momentum is added to the momentum balance, while the binding energy is neglected. 
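The Thomson angular shape used for the dashed free-electron reference curve in Fig. \[fig1\]a is, up to the $r_e^2$ prefactor, just $(1+\cos^2\theta)/2$; a minimal sketch:

```python
import math

def thomson_relative(theta_deg):
    """Relative Thomson probability (1 + cos^2(theta)) / 2 for a free electron."""
    theta = math.radians(theta_deg)
    return 0.5 * (1.0 + math.cos(theta) ** 2)

# Forward and backward scattering are equally likely off a free electron,
# whereas the below-threshold data favor back-scattering, since only large
# angles provide enough momentum transfer to ionize the bound electron.
forward, side, backward = (thomson_relative(a) for a in (0.0, 90.0, 180.0))
```

The symmetry of this curve makes the measured suppression of forward angles in Fig. \[fig1\]a immediately visible.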
In this model the ion momentum is defined such that it compensates only for the electron’s initial momentum. The impulse approximation works well when the binding energy is negligible compared to the energy of the electron carrying the momentum $\boldsymbol{Q}$ transferred by the photon. The maximum value of $Q$ is reached for photon back-scattering, and is twice the photon momentum $E_1/c$, with $E_1$ being the incoming photon energy. For helium with a binding energy of 24.6 eV this gives a threshold of $E_1\approx2.5$ keV, below which photon back-scattering at an electron at rest does not provide enough energy to overcome the ionization threshold. In the present experiment we use a photon energy $E_1=2.1$ keV, well below that threshold. There, the cross section for ionization by Compton scattering has dropped to about 20% of its maximum value of about $10^{-24}$ cm$^2$ [@15]. As expected, we observe that the photon scattering angular distribution differs significantly from the Thomson cross section (Fig. \[fig1\]a). The most striking difference is that all forward angles of photon emission are suppressed and almost only back-scattered photons lead to ionization. This measured cross section shows excellent agreement with our theoretical model which is described in detail in the Methods section. What is the mechanism facilitating ionization at these low photon energies and small momentum transfers? Our coincidence experiment can answer this question by providing the momentum vectors of all particles, i.e. the incoming ($\boldsymbol{k}_1$) and outgoing photon ($\boldsymbol{k}_2$), the electron ($\boldsymbol{p}_e$), and the ion ($\boldsymbol{p}_{\mathrm{ion}}$) for each individual Compton ionization event. This event-by-event momentum correlation gives access to the various particles’ momentum distributions in the intrinsic coordinate frame of the process, which is a plane spanned by the wave vectors of the incoming and the scattered photon (Fig. \[fig1\]). 
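The quoted threshold $E_1\approx2.5$ keV follows from equating the maximum free-electron energy transfer, $Q_{\max}^2/2=2E_1^2/c^2$ in atomic units (with $Q_{\max}=2E_1/c$), to the helium binding energy; a quick numerical check:

```python
c = 137.035999          # speed of light in atomic units (1/alpha)
hartree_eV = 27.211386  # one Hartree in eV

I_p = 24.6 / hartree_eV  # helium ionization potential in Hartree

# Back-scattering off an electron at rest transfers Q_max = 2*E1/c, i.e. a
# kinetic energy Q_max**2 / 2 = 2 * E1**2 / c**2. Equating this to I_p:
E1_threshold = c * (I_p / 2.0) ** 0.5                  # Hartree
E1_threshold_keV = E1_threshold * hartree_eV / 1000.0  # about 2.5 keV
```

The 2.1 keV photon energy used in the experiment therefore sits safely below this back-scattering threshold.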
This plane also contains the momentum transfer vector $\boldsymbol{Q}=\boldsymbol{k}_1-\boldsymbol{k}_2$. In Fig. \[fig1\]b,c, by definition, the photon is scattered to the upper half plane and the momentum transfer $\boldsymbol{Q}$, i.e. the kick by the photon, points forward and into the lower half plane. The electron momentum distribution visualized in this intrinsic coordinate frame shows two distinct islands, one in the direction of the momentum transfer and a second, smaller one, in the backward direction, i.e. opposite to the momentum transfer direction. These two maxima are separated by a minimum. The He$^+$ ions (Fig. \[fig1\]c) are also emitted in the forward direction. In addition to a main island close to the origin, there are also ions which are emitted strongly in the forward direction, towards the region marked **Recoil** in Fig. \[fig1\]c. This ion momentum distribution shows strikingly that in the below-threshold regime, the situation is very different from the quasi-free electron scattering considered in the standard high-energy Compton process. In the latter case, the ion is only a passive spectator to the photon-electron interaction and, consequently, the ion momenta are centered at the origin of the coordinate frame employed in Figs. \[fig1\]b,c [@15; @16; @17; @18]. ![**Electron and ion momentum distributions for different momentum transfer gates.** All panels use the same coordinate frame as Figs. \[fig1\]b,c. **a-c**, Electron momentum distributions obtained from modelling within the $A^2$ approximation using approach B (see Methods). **d-f**, Electron momentum distributions measured by our experiment. **g-i**, Measured momentum distributions of the ions. From top to bottom, the rows correspond to different momentum transfers $Q=1.0$, 0.8, and 0.6 a.u., respectively. The arrows in the third column indicate the photon momentum configuration for each row. 
Here, the magenta arrows represent the momentum of the incoming photon, the blue arrows the momentum of the scattered photon, and the green arrows the momentum transfer. A movie of the electron and ion momentum distributions for different photon scattering directions is available in the supplementary materials.[]{data-label="fig2"}](fig2.pdf){width="\columnwidth"} The observed bimodal electron momentum distribution becomes even clearer when we examine a subset of the data for which the photon is scattered into a certain direction (Fig. \[fig2\]). This shows that the momentum distribution follows the direction of momentum transfer and the nodal plane is perpendicular to $\boldsymbol{Q}$. Such bimodal distributions are known from different contexts. For example, for ionization by electron impact ($e$,$2e$) [@4] and ion impact [@5], the forward lobe has been termed the binary lobe, for obvious reasons, while the backward peak is referred to as the recoil peak. The name alludes to the fact that in order for the electron to be emitted opposite to the momentum transfer, momentum conservation dictates that the ion recoils in the opposite direction. Mechanistically, this would occur if the electron were initially kicked in the forward direction but then back-reflected by its own parent ion. Such a classical picture would suggest that the ion receives the momentum originally imparted to the electron (i.e. $\boldsymbol Q$) minus the final momentum $\boldsymbol p_e$ of the electron. This expectation is verified by our measured ion momentum distributions shown in Figs. \[fig2\]g-i. The ions also show a bimodal momentum distribution, with the main island slightly shifted forward and a minor island shifted significantly forward in the momentum-transfer direction, in good agreement with the back-reflection scheme. The observations suggest a two-step model for below-threshold Compton scattering referred to as the $A^2$ approximation (see Methods). 
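The classical bookkeeping described above, with the ion ending up with $\boldsymbol Q-\boldsymbol p_e$, can be illustrated with a kick along $z$; all numbers are assumptions for illustration, in atomic units:

```python
import numpy as np

Q = np.array([0.0, 0.0, 0.8])  # illustrative momentum transfer along z (a.u.)

# Binary emission: the electron carries most of the kick, the ion stays slow.
p_e_binary = np.array([0.0, 0.0, 0.6])
p_ion_binary = Q - p_e_binary        # small forward ion momentum

# Recoil emission: the electron is back-reflected by its parent ion, which
# then ends up strongly forward-shifted in the momentum-transfer direction.
p_e_recoil = np.array([0.0, 0.0, -0.3])
p_ion_recoil = Q - p_e_recoil        # large forward ion momentum
```

The two cases reproduce the qualitative pattern of the main and minor ion islands: back-reflected electrons force the ion to carry both the kick and the reversed electron momentum.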
The first step is the scattering of the photon at an electron, described by the Thomson cross section. This step sets the direction and magnitude of the approximate momentum transfer. The second step is the response of the electron wave function to this sudden kick, which displaces the bound wave function in momentum space. This momentum-shifted electron wave function then relaxes to the electronic eigenstates of the ion, where it has some overlap with its initial state and with the bound excited states. However, the fraction which overlaps with the Coulomb continuum leads to ionization and is observed experimentally. The bimodal electron momentum distribution for small momentum transfer follows naturally from such a scenario. The leading ionizing term in the Taylor expansion of the momentum transfer operator $e^{i\boldsymbol{Q}\cdot\boldsymbol{r}_e}$ is the dipole operator, with the momentum transfer replacing the direction of polarization. This dipolar contribution, resembling the shape of a $p$ orbital, is the origin of the bimodal electron momentum distribution. The observed electron momentum distributions are in excellent agreement with the prediction of the $A^2$ approximation shown in Fig. \[fig2\]a-c. Note that these theoretical distributions are calculated without any reference to Compton scattering. What is shown is the overlap of the ground state, boosted by the momentum transfer, with the continuum. Exactly the same distributions are predicted for an attosecond half-cycle pulse (see Fig. 2 in [@2]) and identical results are expected for a momentum transfer to the nucleus by neutron scattering [@7]. ![**Electron energy distribution.** The scattering angle for the outgoing photon is restricted to $140<\theta<180$ deg in all panels. **a**, The electron energy spectrum is shown independent of the electron emission direction. **b**, The electron emission angle is restricted to forward scattering ($0<\theta_e<40$ deg). 
**c**, The electron emission angle is restricted to backward scattering ($140<\theta_e<180$ deg). The black dots are the experimental data. The error bars are the standard statistical error. The solid lines are the theoretical results of approach A, the dashed lines the results of approach B (see Methods). The experimental data in panels **a** and **b** are normalized such that the maximum intensity is 1; the theory is normalized such that the integrals of the experimental data and the theoretical curves are equal. The normalization factors in panel **c** are identical to the ones in panel **b**, since panels **b** and **c** depict the forward and backward directions of the same distribution.[]{data-label="fig3"}](fig3.pdf){width="\columnwidth"} Within the $A^2$ approximation, the magnitude of the energy transfer is determined by energy conservation. It is worth mentioning that, under the present conditions, the photon loses only a few percent of its primary energy. Thus the momentum transfer is largely a consequence of the angular deflection of the photon and not a consequence of its change in energy. This can be seen by inspecting the energy distribution of the ejected electron in Fig. \[fig3\]a. The electron energy distribution peaks at zero and falls off exponentially. For electron forward emission (Fig. \[fig3\]b) it peaks at 11 eV for photon back-scattering, while the backward-emitted electrons for the same conditions are much lower in energy (Fig. \[fig3\]c). This also manifests itself in the fully differential cross section showing the electron angular distribution for fixed electron energy and fixed photon scattering angle of $150\pm20$ deg. These angular distributions (Fig. \[fig4\]) show that the intensity in the backward-directed recoil lobe, compared to the intensity in the forward-directed binary lobe, drops strongly with increasing electron energy. The physics governing the relative strength of the binary and recoil lobe is unveiled by two sets of calculations, i.e. 
by comparing theoretical calculations for different initial electron wave functions and different final states. Firstly, we use a correlated two-electron wave function in the initial state with outgoing Coulomb waves with charge 1 as the final state. Secondly, we use a single-active-electron model for the initial state with a final scattering state in an effective potential (Figs. \[fig3\] and \[fig4\]). We find that the binary peak is similar in all cases; the recoil peak, however, is enhanced by more than a factor of two when scattering states in an effective He$^{+}$ potential are used instead of Coulomb states. This directly supports the mechanistic argument that the recoil peak originates from back-scattering of forward-kicked electrons at the parent ion. This back-scattering is enhanced due to the increased depth of the effective potential compared to the Coulomb potential close to the origin. ![**Fully differential electron angular distributions.** The photon scattering angle is $130<\theta<170$ deg. Displayed is the cosine of the angle $\chi$ between the outgoing electron and the momentum transfer $\boldsymbol{Q}$. **a**, The electron energy is $1.0<E_e<3.5$ eV. **b**, $3.5<E_e<8.5$ eV. The inset in the upper left shows the same data in polar representation, where the arrow indicates the direction of momentum transfer. The lines and normalization are the same as in Fig. \[fig3\].[]{data-label="fig4"}](fig4.pdf){width="\columnwidth"} In conclusion, we have shown the first fully differential cross sections for Compton scattering at a gas-phase atom, unveiling the mechanism of near-threshold Compton scattering. Coincidence detection of ions and electrons, as demonstrated here, paves the way to exploiting Compton scattering for imaging of molecular wave functions not only averaged over the molecular axis but also in the body-fixed frame of the molecule. 
As has been pointed out recently, measuring the momentum transfer to the nucleus in this case will give access to the Dyson orbitals [@11]. Methods {#methods .unnumbered} ======= **Experimental methods.** The experiment was performed at the beamline P04 of the synchrotron PETRA III, DESY in Hamburg with 40-bunch timing mode, i.e. the photon bunches were spaced 192 ns apart. A circularly polarized pink beam was used, i.e. the monochromator was set to zero order. To effectively remove low-energy photons from the beam, we put foil filters in the photon beam, namely 980 nm of aluminum, 144 nm of copper, and 153 nm of iron. With this setup, we suppressed photons $<100$ eV by at least a factor of $10^{-9}$ and photons $<15$ eV by at least a factor of $10^{-25}$ [@19]. The beam was crossed at a 90 deg angle with a supersonic gas jet, expanding through a 30 $\mu$m nozzle at 30 bar driving pressure and room temperature within a COLTRIMS spectrometer. The supersonic gas jet passed two skimmers (0.3 mm diameter), hence the reaction region roughly had the size of 0.2$\times$1.0$\times$0.1 mm$^3$. The electron side of the spectrometer had 5.8 cm of acceleration. To increase resolution, an electrostatic lens and a time-of-flight-focusing geometry were used for the ion side to effectively compensate for the finite size of the reaction region. The total length of the ion side was 97.4 cm. The electric field in the spectrometer was 18.3 V/cm, the magnetic field was 9.1 G. The charged particles were detected using two position-sensitive microchannel plate detectors with delay-line anodes [@19].\ \ **Theoretical methods.** In general, Compton scattering is a relativistic process. In the special case of an initially bound electron, this process may be described by the second-order quantum electrodynamics perturbation terms with exchange in the presence of an external classical electromagnetic field due to the residual ion (see for example [@21]). 
In the low-energy limit of small incoming photon energy $E_1$ compared to the rest energy of an electron $m_ec^2$, we can apply a non-relativistic quantum-mechanical description. (In the following, we use atomic units unless stated otherwise, i.e. $e=m_e=\hbar=1$.) The energy and momentum conservation laws are of the form $$\begin{aligned} E_1=E_2 + I_p + E_e + E_{\mathrm{ion}}, \quad \boldsymbol k_1=\boldsymbol k_2+\boldsymbol p_e+\boldsymbol p_{\mathrm{ion}},\end{aligned}$$ where $I_p$ is the ionization potential, $E_e$ ($\boldsymbol{p}_e$) is the energy (momentum) of the escaped electron, $E_{\mathrm{ion}}$ ($\boldsymbol p_{\mathrm{ion}}$) is the energy (momentum) of the residual ion and $E_{1/2}$ ($\boldsymbol k_{1/2}$) are the energies (momenta) of the incoming and outgoing photons, respectively. For the given keV photon energy range, the momenta are of the order $k_i = E_i/c \sim 1$ a.u., with the speed of light $c = \alpha^{-1}$, so that the energy of the escaped electron is only a few eV. Since $M_{\mathrm{ion}} \gg 1$, the ionic kinetic energy $E_{\mathrm{ion}}=\boldsymbol p_{\mathrm{ion}}^2/(2M_{\mathrm{ion}})$ can be neglected. Hence, the photon energy is nearly unchanged and the ratio of photon energy after and before the collision is $$\begin{aligned} t = \frac{E_2}{E_1} = 1 - \frac{I_p + E_e + E_{\mathrm{ion}}}{E_1} \approx 1.\end{aligned}$$ The transferred momentum from the photon to the atomic system is given by $\boldsymbol Q = \boldsymbol k_1 - \boldsymbol k_2 = \boldsymbol p_e + \boldsymbol p_{\mathrm{ion}}$. The magnitude and direction of the transferred momentum $\boldsymbol Q$ may be expressed as a function of the scattering angle $\theta$ between the incoming and outgoing photon. 
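With $k_2=E_2/c$ fixed by energy conservation, the law of cosines gives $Q(\theta)=\sqrt{k_1^2+k_2^2-2k_1k_2\cos\theta}$. A sketch in atomic units; the functional form is standard, and only the 2.1 keV photon energy and 24.6 eV ionization potential come from the text:

```python
import math

c = 137.035999
hartree_eV = 27.211386

def momentum_transfer(E1_eV, E_e_eV, theta_deg, I_p_eV=24.6):
    """|Q| in a.u. for photon scattering angle theta, neglecting the ion
    kinetic energy (as in the text)."""
    k1 = E1_eV / hartree_eV / c
    k2 = (E1_eV - I_p_eV - E_e_eV) / hartree_eV / c
    theta = math.radians(theta_deg)
    return math.sqrt(k1**2 + k2**2 - 2.0 * k1 * k2 * math.cos(theta))

# At E1 = 2.1 keV the photon keeps ~98% of its energy, so Q is set almost
# entirely by the deflection angle: near zero at 0 deg, close to 2*k1 at 180 deg.
Q_back = momentum_transfer(2100.0, 10.0, 180.0)
Q_forward = momentum_transfer(2100.0, 10.0, 0.0)
```

This makes explicit why gating on the photon scattering angle, as in Fig. \[fig2\], is effectively a gate on the magnitude of the momentum transfer.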
Under the above kinematic conditions, the fully differential cross section (FDCS) may be written as $$\begin{aligned} \frac{\mathrm d\sigma}{\mathrm dE_e \mathrm d\Omega_e \mathrm d\Omega_2} = r^2_e p_e t |M|^2, \label{eq:cs}\end{aligned}$$ with the classical electron radius $r_e$ and the well-known Kramers-Heisenberg-Waller matrix element (compare, for example, Refs. [@21; @22]) based on the $A^2$ (seagull) term $$\begin{aligned} M(\boldsymbol Q, \boldsymbol p_e) = (\boldsymbol e_1 \cdot \boldsymbol e_2) \langle \Psi^{(-)}_{\boldsymbol p_e} | \sum^{N}_{j=1}{e^{i\boldsymbol Q \cdot \boldsymbol r_j}} | \Psi_0 \rangle .\end{aligned}$$ Here, $\boldsymbol e_{1/2}$ are the polarization vectors of the incoming and outgoing photons. Initially, the $N$ electrons of the system with positions $\boldsymbol r_j$ are in the bound state $\Psi_0$. Since the detection scheme selects singly-ionized helium ions, the final state of the electronic system is a scattering state $\Psi^{(-)}_{\boldsymbol p_e}$ with one electron in the continuum (corresponding to an asymptotic electron momentum $\boldsymbol p_e$) and the other electron remaining bound. Since the incoming photon beam is unpolarized and we do not detect the final polarization state of the outgoing photon, we additionally average over the initial polarization and sum the probabilities for the two possible orthogonal final polarization states. Under these assumptions, the FDCS can be written as $$\begin{aligned} \frac{\mathrm d \sigma}{\mathrm d E_e \mathrm d \Omega_e \mathrm d\Omega_2} = \left( \frac{\mathrm d\sigma}{\mathrm d\Omega_2} \right)_{\mathrm{Th}} p_e t |M_e|^2 \ .\end{aligned}$$ The whole Compton scattering process may be divided into two steps: In the first step, the incoming photon scatters off the electronic bound-state distribution. 
The corresponding scattering probability is described by the Thomson formula for photons scattered off a single free electron $$\begin{aligned} \left( \frac{\mathrm d\sigma}{\mathrm d\Omega_2} \right)_{\mathrm{Th}} = \frac12 r_e^2 (1 + \cos^2 \theta) .\end{aligned}$$ During the short interaction with the photon, the electrons are simply “kicked” by the transferred momentum $\boldsymbol Q$. In the second step, the “kicked”, field-free atomic system evolves in time. One part of the boosted wave function remains bound, while the other part is set free in the continuum. These escaping electrons are strongly influenced by the asymptotically Coulomb-like ionic potential so that the electronic matrix elements are given by $$\begin{aligned} M_e(\boldsymbol Q, \boldsymbol p_e) = \langle \Psi^{(-)}_{\boldsymbol p_e} | \sum^{N}_{j=1}{e^{i \boldsymbol Q \cdot \boldsymbol r_j}} | \Psi_0 \rangle . \label{eq:cs2}\end{aligned}$$ From the FDCS, the different observables shown in the main text can be calculated. In order to calculate the electronic matrix elements, two complementary approaches have been used: The first model (approach A) describes both electrons and takes into account correlation in the ground state, but uses Coulomb waves as scattering states. In contrast, the second model (approach B) uses a single-active-electron description, but includes accurate one-electron scattering states.\ \ **Approach A: Model with correlated ground state.** In the first approach, both electrons of the helium atom are explicitly treated such that the “direct” ionization of the “kicked” electron as well as the “shake-off” (i.e. ejection of the unkicked electron) are considered. In equation (\[eq:cs2\]), the initial state is given by a correlated symmetric two-electron ground state $\Psi_0(\boldsymbol r_1, \boldsymbol r_2)$, obtained from [@23].
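As a quick sanity check of the Thomson prefactor used above, integrating the polarization-averaged angular factor over the full solid angle must recover the total Thomson cross section $\sigma_T = (8\pi/3)\,r_e^2$ (in atomic units $r_e = \alpha^2$). A minimal verification, independent of the atomic matrix elements:

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1 / 137.035999
R_E = ALPHA**2  # classical electron radius in atomic units

def dsigma_thomson(theta):
    """Polarization-averaged Thomson cross section d(sigma)/d(Omega)."""
    return 0.5 * R_E**2 * (1.0 + np.cos(theta)**2)

# Integrate over the full solid angle: d(Omega) = 2*pi*sin(theta)*d(theta).
total, _ = quad(lambda th: dsigma_thomson(th) * 2 * np.pi * np.sin(th), 0.0, np.pi)
# total should equal (8*pi/3) * r_e^2
```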
To approximate the final state, the main idea is that one electron remains bound in the ionic ground state given by $$\begin{aligned} \psi_0^{\text{He}^+}(\boldsymbol r) = \sqrt{\frac{8}{\pi}} \, e^{-2r}\end{aligned}$$ and the free electron may be approximated by Coulomb wave functions $$\begin{aligned} \psi_{\boldsymbol p_e}^{C}(\boldsymbol r) = \sqrt{\frac{e^{-\pi \zeta}}{(2\pi)^{3}}} \, \Gamma(1-i\zeta) e^{i\boldsymbol p_e \cdot \boldsymbol r} \, _1F_1 (i\zeta, 1, -ip_e r - i\boldsymbol p_e \cdot \boldsymbol r)\end{aligned}$$ with $\zeta = -1/p_e$ and $_1F_1$ being the confluent hypergeometric function. Since the correct scattering states $\Psi^{(-)}_{\boldsymbol p_e}(\boldsymbol r_1, \boldsymbol r_2)$ have to be orthogonal to the initial bound states, the resulting symmetrized final state $$\begin{aligned} \tilde{\Psi}_{\boldsymbol p_e}^{(-)}(\boldsymbol r_1, \boldsymbol r_2) = \frac{1}{\sqrt{2}} \left[ \psi_{\boldsymbol p_e}^{C} (\boldsymbol r_1) \psi_0^{\text{He}^+}(\boldsymbol r_2) + \psi_{\boldsymbol p_e}^{C} (\boldsymbol r_2) \psi_0^{\text{He}^+}(\boldsymbol r_1) \right]\end{aligned}$$ is afterwards explicitly orthogonalized with respect to the initial state $\Psi_0$ such that the electronic matrix elements of equation ($\ref{eq:cs2}$) read $$\begin{aligned} M_e(\boldsymbol Q, \boldsymbol p_e) =& \langle \Psi_{\boldsymbol p_e}^{(-)} | e^{i\boldsymbol Q \cdot \boldsymbol r_1} + e^{i\boldsymbol Q \cdot \boldsymbol r_2} | \Psi_0 \rangle \nonumber \\ =& \langle \tilde{\Psi}_{\boldsymbol p_e}^{(-)} | e^{i\boldsymbol Q \cdot \boldsymbol r_1} + e^{i\boldsymbol Q \cdot \boldsymbol r_2} | \Psi_0 \rangle -\nonumber \\ & \langle \tilde\Psi_{\boldsymbol p_e}^{(-)} | \Psi_0 \rangle \langle \Psi_0 | e^{i\boldsymbol Q \cdot \boldsymbol r_1} + e^{i\boldsymbol Q \cdot \boldsymbol r_2} | \Psi_0 \rangle\end{aligned}$$ \ **Approach B: Single-active-electron model.** In the second approach only the “kicked" electron may escape, while the other electron stays frozen at the 
core. In order to model the influence of the remaining electron on the escaping electron, we use a single-active-electron effective potential [@24]. This potential corresponds to an effective charge of $Z=2$ for $r\rightarrow0$, which is screened by the second electron such that asymptotically, for large $r$, it reaches $Z=1$. The one-electron ground state $\psi_0$ and the one-electron continuum state $\psi_{\boldsymbol p_e}^{(-)}$ with incoming boundary conditions are calculated numerically by solving the radial Schrödinger equation. Hence, the electronic matrix element in equation (\[eq:cs2\]) is approximated as $$\begin{aligned} M_e(\boldsymbol Q, \boldsymbol p_e) = \sqrt{2} \, \langle \psi_{\boldsymbol p_e}^{(-)} | e^{i\boldsymbol Q \cdot \boldsymbol r} | \psi_0 \rangle.\end{aligned}$$ This expression is calculated using a plane wave expansion of $e^{i\boldsymbol Q\cdot \boldsymbol r}$ and an expansion of the scattering states $\psi^{(-)}_{\boldsymbol p_e}$ in terms of spherical harmonics. Acknowledgments {#acknowledgments-1 .unnumbered} =================== This work was supported by DFG and BMBF. O. Ch. acknowledges support from the Hulubei-Meshcheryakov program JINR-Romania. Calculations were performed on the Central Information and Computer Complex and the heterogeneous computing platform HybriLIT through the supercomputer “Govorun” of JINR. Yu. P. is grateful to the Russian Foundation for Basic Research (RFBR) for financial support under grant No. 19-02-00014a. We are grateful to the staff of PETRA III for excellent support during the beam time. Author contribution {#author-contribution .unnumbered} =================== M.K., F.T., S.G., I.V.-P., J.R., S.E., K.B., M.N.P., T.J., M.S.S., and R.D. contributed to the experimental work. S.B., N.E., S.H., O.Ch., Y.V.P., I.P.V., and M.L. contributed to theory and numerical simulations. All authors contributed to the manuscript. \ Compton, A. H. *Secondary radiations produced by x-rays*, in Bulletin of the National Research Council, no. 20 (v. 4, pt.
2), (Published by the National Research Council of the National Academy of Sciences, Washington D.C., 1922) Arbó, D. G., Tőkési, K. & Miraglia, J. E. Atomic ionization by a sudden momentum transfer, *Nucl. Instr. Methods Phys. Res. B* **267**, 382-385 (2009) Dürr, M. et al. Single ionization of helium by 102 eV electron impact: three dimensional images for electron emission, *J. Phys. B: At. Mol. Opt. Phys.* **39**, 4097-4111 (2006) Ehrhardt, H., Jung, K., Knoth, G. & Schlemmer, P. Differential cross section of direct single electron impact ionization, *Z. Phys. D – Atoms, Molecules and Clusters* **1**, 3-32 (1986) Fischer, D., Moshammer, R., Schulz, M., Voitkiv, A. & Ullrich, J. Fully differential cross sections for the single ionization of helium by ion impact, *J. Phys. B: At. Mol. Opt. Phys.* **36**, 3555-3567 (2003) Schulz, M. et al. Three-dimensional imaging of atomic four-body processes, *Nature* **422**, 48-50 (2003) Pindzola, M. S. et al. Neutron-impact ionization of He, *J. Phys. B: At. Mol. Opt. Phys.* **47**, 195202 (2014) Bothe, W. & Geiger, H. Über das Wesen des Comptoneffekts; ein experimenteller Beitrag zur Theorie der Strahlung, *Z. Phys.*, **32**, 639-663 (1925) Bell, F., Tschentscher, Th., Schneider, J. R. & Rollason, A. J. The triple differential cross section for deep inelastic photon scattering: a ($\gamma$,$e\gamma'$) experiment, *J. Phys. B: At. Mol. Opt. Phys.*, **24**, L533-L538 (1991) Metz, C. et al. Three-dimensional electron momentum density of aluminum by ($\gamma$,$e\gamma$) spectroscopy, *Phys. Rev. B*, **59**, 10512-10520 (1999) Hopersky, A. N., Nadolinsky, A. M., Novikov, S. A., Yavna, V. A. & Ikoeva, K. Kh. X-ray-photon Compton scattering by a linear molecule, *J. Phys. B: At. Mol. Opt. Phys.*, **48**, 175203 (2015) Ullrich, J. et al. Recoil-ion and electron momentum spectroscopy: reaction-microscopes, *Rep. Prog. Phys.*, **66**, 1463-1545 (2003) Roy, S. C. & Pratt, R. H.
Need for further inelastic scattering measurements at X-ray energies, *Radiat. Phys. Chem.*, **69**, 193-197 (2004) Kaliman, Z., Surić, T., Pisk, K. & Pratt, R. H. Triply differential cross section for Compton scattering, *Phys. Rev. A*, **57**, 2683-2691 (1998) Samson, J. A. R., He, Z. X., Bartlett, R. J. & Sagurton, M. Direct measurement of He$^+$ ions produced by Compton scattering between 2.5 and 5.5 keV, *Phys. Rev. Lett.*, **72**, 3329-3331 (1994) Spielberger, L. et al. Separation of Photoabsorption and Compton Scattering Contributions to He Single and Double Ionization, *Phys. Rev. Lett.*, **74**, 4615-4618 (1995) Dunford, R. W., Kanter, E. P., Krässig, B., Southworth, S. H. & Young, L. Higher-order processes in X-ray photoionization and decay, *Radiat. Phys. Chem.*, **70**, 149-172 (2004) Kaliman, Z. & Pisk, K. Compton cross-section calculations in terms of recoil-ion momentum observables, *Radiat. Phys. Chem.*, **71**, 633-635 (2004) Obtained from <http://henke.lbl.gov/optical_constants/filter2.html>, 10/09/2019. Data based on: Henke, B. L., Gullikson, E. M. & Davis, J. C. X-ray interactions: photoabsorption, scattering, transmission, and reflection at E=50-30000 eV, Z=1-92, *At. Data Nucl. Data Tables*, **54**, 181-342 (1993) Jagutzki, O. et al. Multiple hit readout of a microchannel plate detector with a three-layer delay-line anode, *IEEE Trans. Nucl. Sci.*, **49**, 2477-2483 (2002) Akhiezer, A. I. & Berestetskii, V. B. *Quantum electrodynamics* (John Wiley & Sons, 1965) Bergstrom Jr., P. M., Surić, T., Pisk, K. & Pratt, R. H. Compton scattering of photons from bound electrons: Full relativistic independent-particle-approximation calculations, *Phys. Rev. A*, **48**, 1134-1162 (1993) Chuluunbaatar, O. et al. Role of the cusp conditions in electron-helium double ionization, *Phys. Rev. A*, **74**, 014703 (2006) Tong, X. M. & Lin, C. D. Empirical formula for static field ionization rates of atoms and molecules by lasers in the barrier-suppression regime. *J.
Phys. B: At. Mol. Opt. Phys.*, **38**, 2593-2600 (2005)
--- abstract: 'Random surface texturing of an optically-thick film to increase the path length of scattered light rays, first proposed nearly thirty years ago, has thus far remained the most effective approach for photon absorption over the widest set of conditions. Here, using recent advances in computational electrodynamics, we describe a general strategy for the design of a silicon thin film applicable to photovoltaic cells. The design is based on a quasi-resonant approach to light trapping in which two partially-disordered photonic-crystal slabs, stacked vertically on top of each other, achieve large absorption that surpasses the Lambertian limit over a broad bandwidth and angular range.' author: - Ardavan Oskooi - Yoshinori Tanaka - Susumu Noda title: | Tandem photonic-crystal thin films surpassing Lambertian\ light-trapping limit over broad bandwidth and angular range --- One of the fundamental issues underlying the design of silicon photovoltaic (PV) cells for use in realistic settings is the maximum absorption of incident sunlight over the widest possible range of wavelengths, polarizations and angles using the thinnest material possible. While random or so-called Lambertian texturing of the surface to isotropically scatter incident light rays into a weakly-absorbing thick film so as to increase the optical path length, as shown in  [Fig. \[fig:design\_approach\]]{}(a), has thus far remained the most effective approach for light trapping over a wideband spectrum [@Yablonovitch82b; @Yablonovitch82], recent thin-film nanostructured designs including photonic crystals [@Joannopoulos08] shown in  [Fig.
\[fig:design\_approach\]]{}(b) exploiting the more-complicated wave effects of photons have explored the possibility of superior performance [@Zhou08; @Chutinan09; @Park09; @Garnett10; @Mallick10; @Zhu10; @Han10; @Yu10; @Fahr11; @Sheng11; @Bozzola12; @Wang12; @Biswas12; @Munday12; @Martins12] but have been mainly limited to narrow bandwidths, select polarizations or a restricted angular cone typical of delicate resonance-based phenomena. The introduction of strong disorder, while improving robustness, nevertheless comes at the expense of light trapping relative to the unperturbed case [@Oskooi12; @Vynck12]. As a result, no proposal for a nanostructured silicon thin film capable of robust, super-Lambertian absorption over a large fraction of the solar spectrum has yet been made. In this work, we present a new approach to light trapping made possible by recent advances in computational electrodynamics [@Taflove13]. Based on the quasi-resonant absorption of photons, it combines the large absorption of optical resonances with the broadband and robust characteristics of disordered systems via a stacked arrangement of ordered PC slabs augmented with partial disorder, and it surpasses by a wide margin the performance of an idealized Lambertian scatterer over a broad spectrum and angular cone. Our tandem design, consisting of the same silicon film structured in two different ways and stacked vertically as shown in  [Fig. \[fig:design\_approach\]]{}(c), is the photonic analogue of the multi-junction cell that employs three or more *different* semiconductor films where the electronic bandgaps add complementarily to obtain wideband absorption.
Here we demonstrate the utility of a photonic approach, employing geometric structure alone, to enhance light trapping; it offers improved performance without incurring the constraints and limitations of optimally combining multiple material-specific electronic bandgaps, or the significant fabrication challenges and costs of epitaxially growing films with mismatched atomic lattices on top of one another. We outline a two-part design strategy: we first maximize, with a few-parameter gradient-free topology optimization, the number of resonant-absorption modes by using two crystalline-silicon PC slabs with a fixed total thickness of 1$\mu$m stacked on top of each other such that their individual resonances add complementarily over the wideband spectrum; in the final step, we introduce a partial amount of disorder to both lattices to maximize the overall light trapping and boost robustness, going well beyond the Lambertian limit. In our earlier work, we showed how individual resonant-absorption peaks of a thin-film PC slab can be broadened using partial disorder, leading to an overall enhancement of the wideband absorption spectra  [@Oskooi12]. To understand quantitatively why disorder increases broadband light trapping in a PC, we use coupled-mode theory to derive an analytical expression for a single absorption resonance (at a frequency of $\omega_0$) which has only one coupling channel for external light (a slight simplification which helps to make clear the role of disorder) in terms of the decay lifetimes (proportional to the quality factor) for radiation ($\tau_{rad}$) and absorption ($\tau_{abs}$) by the material [@Joannopoulos08]: $$A(\omega)=\frac{\frac{4}{\tau_{rad}\tau_{abs}}}{(\omega-\omega_0)^2+\left(\frac{1}{\tau_{rad}}+\frac{1}{\tau_{abs}}\right)^2}.
\label{eq:single_peak}$$ Broadband absorption for a thin-film PC design, consisting of a collection of such individual Lorentzian peaks, necessitates that we consider the *total* area spanned by  [eq. [(\[eq:single\_peak\])]{}]{}, which is equivalent to its absorption cross section: $$\int_{-\infty}^{\infty}A(\omega)d\omega=\frac{4\pi}{\tau_{rad}+\tau_{abs}}. \label{eq:peak_area}$$ The effect of disorder is to reduce the peak height but, more importantly, to broaden the peak width (proportional to 1/$\tau_{rad}$+1/$\tau_{abs}$), primarily via a *decrease* in $\tau_{rad}$, which therefore leads to an overall *increase* in broadband absorption from  [eq. [(\[eq:peak\_area\])]{}]{} (though $\tau_{abs}$ also changes with disorder due to variations in the nature of the guided mode, the change is much less pronounced than for $\tau_{rad}$, mainly because the material absorption coefficient is fixed). Note that this analysis is only valid when coupling to a *resonant* Bloch mode, which is why introducing too much disorder and eliminating the peaks altogether, thus transitioning to *non-resonant* Anderson-localized modes, results in sub-optimal light trapping [@Oskooi12]. We consider the absorption of solar radiation in the wavelength regime spanning 600nm to 1100nm, in which silicon is poorly absorbing and thus weak coupling to resonant Bloch modes of the PC is most apparent. The overall light-trapping efficiency of each design can be quantified relative to an ideal perfect absorber, which has unity absorptivity over the wavelength interval of interest, by assuming that each absorbed photon with energy greater than the bandgap of silicon generates an exciton which contributes directly to the short-circuit current (this is equivalent to an internal quantum efficiency of 100%).
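The area relation between eqs. (\[eq:single\_peak\]) and (\[eq:peak\_area\]) is easy to cross-check numerically; a minimal sketch (the lifetime values are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

def absorption(w, w0, tau_rad, tau_abs):
    """Single-resonance coupled-mode-theory line shape of eq. (1)."""
    num = 4.0 / (tau_rad * tau_abs)
    gamma = 1.0 / tau_rad + 1.0 / tau_abs  # half-width of the Lorentzian
    return num / ((w - w0)**2 + gamma**2)

w0, tau_rad, tau_abs = 0.0, 2.0, 3.0     # arbitrary units
area, _ = quad(absorption, -np.inf, np.inf, args=(w0, tau_rad, tau_abs))
# area should reproduce eq. (2): 4*pi/(tau_rad + tau_abs)
```

On resonance the line shape also satisfies $A(\omega_0)\le1$, with equality at the rate-matching condition $\tau_{rad}=\tau_{abs}$, consistent with the rate-matching discussion later in the text.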
The corresponding definition of light-trapping efficiency is: $$\frac{\int_{600nm}^{1100nm}\lambda \mathcal{I}(\lambda)\mathcal{A}(\lambda)d\lambda}{\int_{600nm}^{1100nm}\lambda \mathcal{I}(\lambda)d\lambda}, \label{eq:efficiency}$$ where $\mathcal{I}(\lambda)$ is the terrestrial power density per unit wavelength from the sun at AM1.5 [@ASTM05] and $\mathcal{A}(\lambda)$ is the absorptivity of the film. The design strategy of maximizing the light-trapping efficiency by controllably introducing a partial amount of disorder to obtain just the right dose of peak broadening ultimately results in a more-uniform absorption profile where the absorptivity in the inter-peak regions is increased at the expense of the height of all peaks. This therefore suggests that, in order to most effectively make use of partial disorder for light-trapping enhancement in a thin film, the number of resonant modes must first be made *as large as possible* so that there is little bandwidth separation between peaks. By extending our previous slab design to include not one but *two* PC lattices stacked on top of one another and separated by a non-absorbing nanoscale gap layer, such that the absorption spectra of the two lattices add complementarily (i.e., regions of low absorption in one lattice are compensated for by the high absorption of the other), the resonant-absorption characteristics of the PC augmented by partial disorder can potentially be exploited to the fullest extent possible for broadband absorption while remaining feasible for large-scale industrial manufacturing. The low-index gap separation layer itself also provides additional mechanisms for light localization in nanostructured media that further contribute to enhancing absorption in the adjacent high-index layers, while its effect on scattering-based Lambertian-textured films is marginal. A schematic of this design approach, somewhat exaggerated for illustrative purposes, is shown in [Fig.
\[fig:design\_approach\]]{}(d) and (e), in which the tandem structure is first optimized for *peak density* in the spectrum (which amounts to maximizing the number of non-overlapping resonant absorption modes of the two constituent PC lattices) and subsequently these narrow closely-spaced resonances are slightly broadened with the addition of disorder to create a more-uniform absorption profile that is large in amplitude, broadband and robust to incident radiation conditions. We consider here, for simplicity, the case of two PC slabs in vacuum separated by an air gap, which both incorporates all essential physical phenomena and has direct applications to conventional thin-film PV cells, in which individual layers, including both high-index semiconductors and low-index transparent conductive oxides, are grown by thin-film deposition tools [@Brendel03; @Poortmans06]. Within the scope of the present work, where the focus is solely on photon absorption in silicon, there is no need to include an anti-reflection (AR) coating in the front or a perfect reflector in the back, as would be customary in an actual PV cell, since the role of both components is mainly to enhance, oftentimes significantly [@Bermel07; @Mallick10], the absorption of *existing* resonances in the nanostructured films but not to give rise to new ones. Due to the complementary way that the individual peaks of the two PCs combine in the tandem structure, a simple square lattice arrangement is adequate for good performance, rendering intricate superlattice structures unnecessary [@Yu10; @Martins12]. To perform the topology optimization, we combine the capabilities of Meep, a freely-available open-source finite-difference time-domain (FDTD) tool [@Oskooi10], to compute the absorption spectra at normal incidence with the nonlinear-optimization routines of NLopt [@NLopt] (details in Supplementary Information).
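The efficiency metric of eq. (\[eq:efficiency\]) is straightforward to evaluate for any computed spectrum; a minimal sketch, using a flat stand-in for the tabulated AM1.5 irradiance (the real ASTM data is not reproduced here, and the function name is illustrative):

```python
import numpy as np
from scipy.integrate import trapezoid

def efficiency(lam_nm, absorptivity, irradiance):
    """Light-trapping efficiency of eq. (3): the lambda*I(lambda)-weighted
    average of the film absorptivity A(lambda) over the interval."""
    w = lam_nm * irradiance            # lambda * I(lambda) weight
    return trapezoid(w * absorptivity, lam_nm) / trapezoid(w, lam_nm)

lam = np.linspace(600.0, 1100.0, 501)  # wavelength grid (nm)
flat_sun = np.ones_like(lam)           # stand-in for tabulated AM1.5 data
```

An ideal perfect absorber ($\mathcal{A}=1$ everywhere) gives an efficiency of exactly 1 under this definition, which is the normalization used in the text.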
Here we need not consider absorption at off-normal incidence since the addition of disorder in the final step will automatically reduce the sensitivity to incident radiation conditions [@Oskooi12]. Accurate topology optimization is made possible using Meep’s subpixel averaging [@Farjadpour06; @Oskooi10; @Taflove13], which also significantly reduces the size of the computation by lowering the minimum spatial resolution required for reliable results. This is important since the objective function is evaluated a large number of times to explore small changes to geometrical parameters in order to engineer as many absorption resonances over the wideband spectrum as possible; such resonances tend to be highly susceptible to numerical artifacts introduced by “staircased” features [@Farjadpour06; @Oskooi10; @Taflove13], and subpixel averaging thus enables FDTD to be used as a versatile 3D design-optimization tool. We use intrinsic crystalline silicon as our absorbing material and incorporate its full broadband complex refractive index profile [@Green08] into the FDTD simulations, with accuracy even near its indirect bandgap of 1108nm where absorption is almost negligible, to obtain experimentally-realistic results (see Supplementary Information for more details). Although, for generality, modeling a tandem structure consisting of two completely-independent PC slabs with arbitrary lattice constants is preferred, incorporating two distinct unit cells into a single 3D simulation is computationally impractical; this is, however, a minor design limitation, as the other six structural parameters – as shown in [Fig. \[fig:design\_approach\]]{}(c): the thickness of the top Si layer $v_t$ (the bottom thickness $v_b$ is known since the total thickness is fixed at 1$\mu$m), the gap thickness $g$, the radius and height of the holes in the top and bottom lattices $r_t$, $h_t$, $r_b$, $h_b$ – provide sufficient flexibility for creating out-coupled Bloch-mode resonances over the entire range of the broadband solar spectrum.
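The structure of the optimization loop can be illustrated with a toy stand-in: this is *not* the actual Meep/NLopt workflow, but a self-contained sketch in which a hypothetical `spectrum` of Lorentzian resonances replaces the FDTD-computed absorptivity, `n_peaks_above` plays the role of the peak-counting objective, and a simple random search stands in for NLopt's derivative-free routines.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(600.0, 1100.0, 2001)   # wavelength grid (nm)

def spectrum(params):
    """Toy absorption spectrum: one 5nm-wide Lorentzian resonance per
    structural parameter, with centers mapped linearly into 600-1100nm."""
    centers = 600.0 + 500.0 * params
    A = sum(0.9 / (1.0 + ((lam - c) / 5.0)**2) for c in centers)
    return np.clip(A, 0.0, 1.0)

def n_peaks_above(params, threshold=0.55):
    """Objective: number of distinct spectral regions above the threshold
    (counted as rising edges of the thresholded spectrum)."""
    above = spectrum(params) > threshold
    return int(np.count_nonzero(np.diff(above.astype(int)) == 1))

# Gradient-free random search over six parameters in [0, 1], accepting any
# trial that does not reduce the peak count (so the count never decreases).
best = rng.random(6)
for _ in range(300):
    trial = np.clip(best + rng.normal(0.0, 0.05, 6), 0.0, 1.0)
    if n_peaks_above(trial) >= n_peaks_above(best):
        best = trial
```

In the paper's actual workflow the objective is evaluated by a full FDTD simulation per candidate geometry, which is why the low evaluation count afforded by subpixel averaging matters.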
We also investigated the computationally tractable case of two PC slabs with lattice constants differing by a factor of two, though the results were not found to be an improvement over the single lattice-constant design. A planewave source is incident from above onto the tandem structure and the absorption spectrum $\mathcal{A}(\lambda)$, equivalent to one minus the sum of the reflection and transmission, is calculated by Fourier-transforming the response to a short pulse. The absorptivity threshold used by the objective function to count the total number of peaks in the spectrum is taken to be that of our baseline performance metric: an equivalent 1$\mu$m-thick Si film with Lambertian-textured surfaces [@Yablonovitch82; @Deckman83], which has an efficiency of 43.0% in our wavelength interval (computed using the same fitted material parameters as used in the simulations). Since resonances with especially-large peak amplitudes contribute most to increasing the overall efficiency when broadened, we add an extra 30% to our absorptivity peak threshold at each wavelength, which, while making the problem more challenging, gives rise to better results. We impose no restrictions on the peak width or spacing relative to other peaks, although these could potentially be used for further refinement. Once an optimal set of parameters for the two-lattice structure is determined by running the optimization multiple times with different randomly-chosen initial values to explore various local optima, we then form a supercell consisting of 10$\times$10 unit cells of the optimal structure and add positional disorder to each unperturbed hole in both lattices (while ensuring no overlap between holes to conserve the filling fraction) by an amount $\Delta p_1$ ($\Delta p_2$) chosen randomly from a uniform distribution of values between 0 and $\overline{\Delta p_1}$ ($\overline{\Delta p_2}$) for both orthogonal in-plane directions for the top (bottom) slab.
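The disorder-generation step above can be sketched as follows (a minimal illustration: the function name is hypothetical, displacements are drawn symmetrically in $[-\overline{\Delta p},\overline{\Delta p}]$ per axis, which is an assumption about the sign convention, and any move that would overlap two holes is simply re-drawn):

```python
import numpy as np

def disordered_lattice(a, r, dp_max, n=10, seed=0, max_tries=200):
    """Hole centers of an n-by-n square-lattice supercell (constant a),
    each displaced uniformly by up to dp_max per in-plane axis, with
    rejection of displacements that would overlap two holes of radius r
    (so the filling fraction is conserved)."""
    rng = np.random.default_rng(seed)
    base = np.array([(i * a, j * a) for i in range(n) for j in range(n)], float)
    pos = base.copy()
    for k in range(len(pos)):
        for _ in range(max_tries):
            trial = base[k] + rng.uniform(-dp_max, dp_max, 2)
            others = np.delete(pos, k, axis=0)
            if np.min(np.linalg.norm(others - trial, axis=1)) >= 2.0 * r:
                pos[k] = trial   # accept: no overlap with any other hole
                break            # otherwise keep re-drawing (or stay put)
    return pos
```

Because every accepted move is validated against the current positions of all other holes, the final configuration is guaranteed overlap-free whenever the unperturbed lattice is (i.e. $a \ge 2r$), which holds for the optimized geometries quoted below.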
Three separate simulations are made for each structure and the results averaged due to the random nature of the design. [Figure \[fig:tandem\_a641\]]{}(a) shows the absorptivity spectra for three thin-film designs, each with a total crystalline-silicon thickness of 1$\mu$m: an unpatterned film, a Lambertian-textured film \[obtained from eq. (1) of  [Ref. ]{}\] and finally the topology-optimized tandem design consisting of two PC slabs (top: thickness 708nm, hole radius 236nm, hole height 260nm; bottom: thickness 292nm, hole radius 199nm, hole height 244nm) with a lattice parameter of 641nm separated by a 228nm gap. The tandem design has numerous narrow, high-amplitude peaks – signatures of the coherent-resonant Bloch modes – that span the entire broadband spectrum, whereas the unpatterned slab has broad Fabry-Pérot resonances with low amplitude. The complementary way that the resonances of the individual slabs combine to form the tandem structure can be seen in [Figure \[fig:tandem\_a641\]]{}(b) and (c), where the top slab accounts for most of the total number of peaks while the bottom slab contributes a few key resonances, particularly at longer wavelengths. Note that while the Lambertian-textured slab has little and diminishing absorption at long wavelengths, where the absorption coefficient of crystalline silicon is vanishingly small, the PC design, due to its resonant nature, has large absorption, albeit appearing only as very-narrow peaks (due to the rate-matching phenomenon discussed previously that underlies the resonant coupling between radiation and absorption by the material: as silicon’s absorption coefficient becomes smaller at larger wavelengths, leading to a corresponding increase in $\tau_{abs}$ [@Joannopoulos08], the total lifetime of the resonant mode $\tau_{tot}$, defined by 1/$\tau_{tot}$=1/$\tau_{abs}$+1/$\tau_{rad}$, also increases, resulting in the inversely-proportional peak width generally becoming narrower, which can be seen in  [Fig.
\[fig:tandem\_a641\]]{}). Nevertheless, the tandem design, with its maximized peak density, outperforms the Lambertian texture in light-trapping efficiency (48.8% versus 43.0%), although at off-normal angles and different polarizations of incident light the unperturbed lattices’ resonance-based performance degrades significantly to below the Lambertian limit. Within the stacked arrangement, the efficiencies of the top and bottom slabs are 42.6% and 6.2%, respectively, while in isolation they are 37.9% and 22.3%, highlighting in part the importance of inter-slab interactions to the overall absorption of the tandem design. For comparison, the optimized 1$\mu$m-thick single-slab design (lattice parameter, hole radius and height of 640nm, 256nm and 400nm, respectively) produces seven fewer resonances than the tandem design over the same wideband spectrum and therefore has a lower efficiency: nearly 5% below in absolute terms, yet still above the Lambertian limit. By proceeding to controllably introduce a partial amount of disorder into the topology-optimized tandem design, we can simultaneously boost efficiency and improve robustness to exceed the Lambertian limit by an even wider margin over a large angular cone.  [Fig. \[fig:tandem\_random\]]{}(a) is a plot of the efficiency from  [eq. [(\[eq:efficiency\])]{}]{} versus disorder for the optimized tandem-slabs and single-slab designs at normal incidence and shows that a positional disorder of approximately $\overline{\Delta p_1}$=$\overline{\Delta p_2}$=0.1$a$ for the tandem slabs and $\overline{\Delta p}$=0.15$a$ for the single slab results in maximal light trapping of 9.8% and 6.6% above the Lambertian limit, respectively, while additional disorder beyond these partial amounts leads to a steady decrease of the efficiency, in line with the analysis presented earlier.
The tandem design is therefore roughly twice as effective as the single slab in overcoming the Lambertian limit, due mostly to facilitating a larger number of absorption resonances. We quantify the performance robustness of each design as the standard deviation of the efficiency averaged over normal (0$^{\circ}$) incidence and five off-normal (10$^{\circ}$, 20$^{\circ}$, 30$^{\circ}$, 40$^{\circ}$, 50$^{\circ}$) angles of incidence for both $\mathcal{S}$ and $\mathcal{P}$ polarizations. A demonstration involving a larger angular range is possible, but the necessary simulation times to ensure that the Fourier transforms used to compute the flux spectra have properly converged become prohibitively long. Since more disorder results in better robustness [@Oskooi12], which is a key requirement of a practical solar cell, we make a slight trade-off and apply not the quantities which maximize efficiency at normal incidence in the tandem design but slightly greater values ($\overline{\Delta p_1}$=0.2$a$ and $\overline{\Delta p_2}$=0.25$a$), for which the robustness is substantially better: 49.5%$\pm$2.3% for the former versus 49.4%$\pm$1.7% for the latter.  [Fig. \[fig:tandem\_random\]]{}(b) demonstrates that the average performance of this tandem design has greater absorption than the Lambertian texture at every wavelength, resulting in a light-trapping improvement of almost 10% above the Lambertian limit. In summary, we have described a general design strategy derived from a new conceptual framework of photon capture for a nanostructured silicon thin film based on the quasi-resonant absorption of photons in a tandem arrangement of partially-disordered photonic-crystal slabs separated by a nanoscale gap, where the overall light trapping surpasses a Lambertian-textured film by a wide margin over a large fraction of the solar spectrum and a broad angular cone.
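The robustness metric described above (mean and spread of the efficiency over incidence angles and the two polarizations) reduces to a simple aggregation; a minimal sketch with hypothetical efficiency values, not the paper's data:

```python
import numpy as np

def robustness(eff_by_angle_pol):
    """Mean and standard deviation of the light-trapping efficiency over
    incidence angles (rows) and S/P polarizations (columns)."""
    e = np.asarray(eff_by_angle_pol, float).ravel()
    return e.mean(), e.std()

# Hypothetical efficiencies (%) at 0, 10, ..., 50 degrees for S and P.
eff = [[51.2, 51.2], [50.8, 50.1], [49.9, 48.7],
       [48.6, 47.9], [47.3, 46.5], [46.0, 45.1]]
mean, std = robustness(eff)  # reported in the text as mean% +/- std%
```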
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by Core Research for Evolutional Science and Technology (CREST) from the Japan Science and Technology Agency. A.O. was supported by a postdoctoral fellowship from the Japan Society for the Promotion of Science (JSPS). Author Contributions {#author-contributions .unnumbered} ==================== A.O. conceived of the entire idea and performed all the simulations and analysis. A.O. discussed the results and wrote the manuscript with S.N. and Y.T. S.N. organized the project and nurtured the environment for inquiry into how to increase absorption in thin films. [10]{} E. Yablonovitch. Statistical ray optics. , 72(7):899–907, 1982. E. Yablonovitch and G.D. Cody. Intensity enhancement in textured optical sheets for solar cells. , 29(2):300–305, 1982. J. D. Joannopoulos, S. G. Johnson, R. D. Meade, and J. N. Winn. . Princeton Univ. Press, Princeton, NJ, second edition, 2008. D. Zhou and R. Biswas. Photonic crystal enhanced light-trapping in thin film solar cells. , 103(093102), 2008. A. Chutinan, N.P. Kherani, and S. Zukotynski. High-efficiency photonic crystal solar cell architecture. , 17(11):8871–8878, 2009. Y. Park, E. Drouard, O. El Daif, X. Letarte, P. Viktorovitch, A. Fave, A. Kaminski, M. Lemiti, and C. Seassal. Absorption enhancement using photonic crystals for thin film solar cells. , 17(16):14312–14321, 2009. E. Garnett and P. Yang. Light trapping in silicon nanowire solar cells. , 10:1082–1087, 2010. S.B. Mallick, M. Agrawal, and P. Peumans. Optimal light trapping in ultra-thin photonic crystal crystalline silicon solar cells. , 18(6):5691–5706, 2010. J. Zhu, Z. Yu, S. Fan, and Y. Cui. Nanostructured photon management for high performance solar cells. , 70:330–340, 2010. S.E. Han and G. Chen. Toward the lambertian limit of light trapping in thin nanostructured silicon solar cells. , 10:4692–4696, 2010. Z. Yu, A. Raman, and S. Fan. 
Fundamental limit of nanophotonic light trapping in solar cells. , 107(41):17491–17496, 2010. S. Fahr, T. Kirchartz, C. Rockstuhl, and F. Lederer. Approaching the lambertian limit in randomly textured thin-film solar cells. , 19:A865–A874, 2011. X. Sheng, S.G. Johnson, J. Michel, and L.C. Kimerling. Optimization-based design of surface textures for thin-film si solar cells. , 19:A841–A850, 2011. A. Bozzola, M. Liscidini, and L.C. Andreani. Photonic light-trapping versus lambertian limits in thin film silicon solar cells with 1d and 2d periodic patterns. , 20:A224–244, 2012. K.X. Wang, Z. Yu, V. Liu, Y. Cui, and S. Fan. Absorption enhancement in ultrathin crystalline silicon solar cells with antireflection and light-trapping nanocone gratings. , 12(3):1616–1619, 2012. R. Biswas and C. Xu. Photonic and plasmonic crystal based enhancement of solar cells – theory of overcoming the lambertian limit. , 358:2289–2294, 2012. J.N. Munday, D.M. Callahan, and H.A. Atwater. Light trapping beyond the 4n$^2$ limit in thin waveguides. , 100(121121), 2012. E.R. Martins, J. Li, Y. Liu, J. Zhou, and T.F. Krauss. Engineering gratings for light trapping in photovoltaics: the supercell concept. , 86(041404), 2012. A. Oskooi, P.A. Favuzzi, Y. Tanaka, H. Shigeta, Y. Kawakami, and S. Noda. Partially-disordered photonic-crystal thin films for enhanced and robust photovoltaics. , 100(181110), 2012. K. Vynck, M. Burresi, F. Riboli, and D.S. Wiersma. Photon management in two-dimensional disordered media. , 11:1017–1022, 2012. A. Taflove, A. Oskooi, and S.G. Johnson, editors. . Artech House, 2013. ASTMG173-03. . ASTM International, West Conshohocken, PA, 2005. R. Brendel. . Wiley-VCH, Weinheim, Germany, 2003. J. Poortmans and V. Arkhipov. . John Wiley & Sons, Ltd, 2006. P. Bermel, C. Luo, L. Zeng, L.C. Kimerling, and J.D. Joannopoulos. Improving thin-film crystalline silicon solar cell efficiencies with photonic crystals. , 15(25):16986–17000, 2007. A. F. Oskooi, D. Roundy, M. Ibanescu, P. 
Bermel, J. D. Joannopoulos, and S. G. Johnson. : A flexible free-software package for electromagnetic simulations by the [FDTD]{} method. , 181:687–702, 2010. S. G. Johnson, The NLopt Nonlinear Optimization Package, http://ab-initio.mit.edu/nlopt. A. Farjadpour, D. Roundy, A. Rodriguez, M. Ibanescu, P. Bermel, J. D. Joannopoulos, S. G. Johnson, and G. W. Burr. Improving accuracy by subpixel smoothing in the finite-difference time domain. , 31:2972–2974, 2006. M.A. Green. Self-consistent optical parameters of intrinsic silicon at 300 k including temperature coefficients. , 92:1305–1310, 2008. H.W. Deckman, C.B. Roxlo, and E. Yablonovitch. Maximum statistical increase of optical absorption in textured semiconductor films. , 8(9), 1983. ![(a) Random or so-called Lambertian texturing of the surface to isotropically scatter incident light rays into the plane of a weakly-absorbing film so as to increase the optical path length. First proposed nearly thirty years ago, this has thus far remained the most effective approach for light trapping over the widest set of conditions. (b) Photonic-crystal slab and other nanostructured designs in which light trapping occurs by resonant absorption into a guided mode depend on delicate wave-interference effects and are thus intrinsically narrowband and restricted to a small angular cone. (c) Tandem arrangement of two PC slabs, both consisting of a square lattice of holes in silicon, stacked vertically on top of each other. Shown are the degrees of freedom – slab thicknesses v$_1$ and v$_2$, lattice parameters $a_1$ and $a_2$, hole radii r$_1$ and r$_2$, hole heights h$_1$ and h$_2$ and gap separation $g$ – used in the topology optimization to (d) engineer as many non-overlapping resonances over the wideband solar spectrum as possible. 
Following this, (e) each hole is perturbed from its position in the unperturbed lattice by amounts $\Delta p_1$ ($\Delta p_2$) chosen randomly from a uniform distribution of values between 0 and $\overline{\Delta p_1}$ ($\overline{\Delta p_2}$) for both orthogonal in-plane directions of the top (bottom) slab to boost light trapping and robustness by creating a more-uniform absorption profile.[]{data-label="fig:design_approach"}](design_strategy){width="1.0\columnwidth"} ![(a) Absorption versus wavelength profile at normal incidence for three thin-film PV designs each with a total crystalline-silicon thickness of 1$\mu$m: an unpatterned slab (green), a Lambertian-textured slab (red) \[obtained from eq. (1) of  [Ref. ]{}\] and the topology-optimized tandem PC slabs (blue). The tandem PC slabs both consist of a square lattice (periodicity, $a$=641nm) of holes in silicon separated by a 228nm gap. Shown for each design is the photon-absorption efficiency defined in  [eq. [(\[eq:efficiency\])]{}]{} which is a measure of light trapping relative to a perfect absorber. (b) and (c) Individual absorption spectra for the top (slab \#1: $v_t$=708nm, $r_t$=236nm, $h_t$=260nm) and bottom (slab \#2: $v_b$=292nm, $r_b$=199nm, $h_b$=244nm) PC slabs of the optimized tandem design demonstrating how the resonances add complementarily over the broadband spectrum.[]{data-label="fig:tandem_a641"}](tandem_a641){width="1.0\columnwidth"} ![(a) Light-trapping efficiency from  [eq. [(\[eq:efficiency\])]{}]{} as computed from the absorption profile at normal incidence versus hole positional disorder for the topology-optimized tandem slabs (blue) and single slab (red) showing that partial disorder (tandem: $\overline{\Delta p_1}$=$\overline{\Delta p_2}$=0.1$a$, single: $\overline{\Delta p}$=0.15$a$) maximizes the light trapping (tandem: 52.8%, single: 49.6%) while additional disorder is suboptimal and leads to a steady decline. 
Note that the tandem design is nearly twice as effective as the single-slab design in surpassing the Lambertian limit. (b) Absorption versus wavelength profile at normal (0$^{\circ}$) incidence and five off-normal (10$^{\circ}$, 20$^{\circ}$, 30$^{\circ}$, 40$^{\circ}$, 50$^{\circ}$) angles of incidence averaged over both $\mathcal{S}$ and $\mathcal{P}$ polarizations of the optimized tandem design with the addition of partial disorder of $\overline{\Delta p_1}$=0.2$a$ and $\overline{\Delta p_2}$=0.25$a$ as shown in the inset schematics. The absorption profiles for the individual angles are colored while the average of all the data, shown in black, exceeds the Lambertian texture over the entire interval, resulting in an overall light-trapping efficiency that is approximately 10% greater.[]{data-label="fig:tandem_random"}](tandem_random_new){width="1.0\columnwidth"}
--- abstract: 'The interplay of tunneling transport and carrier–mediated ferromagnetism in narrow semiconductor multi–quantum well structures containing layers of GaMnAs is investigated within a self-consistent Green’s function approach, accounting for disorder in the Mn–doped regions and unwanted spin–flips at heterointerfaces on phenomenological grounds. We find that the magnetization in GaMnAs layers can be controlled by an external electric bias. The underlying mechanism is identified as spin–selective hole tunneling in and out of the Mn-doped quantum wells, whereby the applied bias determines both hole population and spin polarization in these layers. In particular we predict that, near resonance, ferromagnetic order in the Mn doped quantum wells is destroyed. The interplay of both magnetic and transport properties combined with structural design potentially leads to several interrelated physical phenomena, such as dynamic spin filtering, electrical control of magnetization in individual magnetic layers, and, under specific bias conditions, self–sustained current and magnetization oscillations (magnetic multi-stability). Relevance to recent experimental results is discussed.' author: - 'Christian Ertler[^1]' - Walter Pötz title: 'Electric control of ferromagnetism in Mn-doped semiconductor heterostructures' --- Introduction ============ Electric control of magnetism in nanostructures must be viewed as an important milestone on the road map for successful realization of spintronic devices. Although most of the operations in such devices ultimately should be based on spin–only processes, i.e., processes not associated with (highly dissipative) electric charge transport, to gain the full benefit of such designs spin must be manipulated during the input, control, and read–out stages and eventually be coupled to charge. 
Several schemes achieving this goal have been explored both at the quantum and semi-classical level, such as the electric distortion of the orbital wave function of spin carriers in inhomogeneous (effective) magnetic fields [@Shin2010:PRL], electric g-tensor control [@Roloff2010:NJP; @Kroutvar2004:N], or spin torque transfer.[@Myers1999:S; @Ralph2008:JMMM; @Wenin2010:JAP] Here we explore, on theoretical grounds, the influence of an electric bias on the ferromagnetic state and feasibility of electric control of ferromagnetism in Ga$_{1-x-y}$Al$_y$Mn$_x$As multiple quantum wells. Structural design, including effective potential profiling and doping to position emitter and collector quasi–Fermi levels, as well as tunneling is used to control hole density and spin polarization within the Mn doped layers. Dilute magnetic semiconductors (DMS) have been realized by doping of conventional ZnS–structured semiconductors with elements providing open electronic $d$ or $f$ shells. This has added yet another degree of freedom to the rich spectrum of physical phenomena in semiconductors available for material design with potential for technological applications.[@Jungwirth2006:RMP] A prominent example is bulk Ga$_{1-x}$Mn$_x$As where Mn on the Ga sites provides both an open d-shell with a local magnetic moment and a hole which may establish ferromagnetic ordering among the Mn d–electrons, a mechanism known as carrier–mediated ferromagnetism.[@Ohno1996:APL; @VanEsch1997:PRB; @Dietl2000:Science] The preferentially anti–parallel alignment of the 3/2 spin of the mobile holes with the 5/2 spin of the localized Mn d–electrons promotes ferromagnetic ordering of the latter below a critical temperature of up to $\sim 150$ K. 
Theoretical work has confirmed strong hybridization between the Mn 3d and As 4p electrons in the ground state.[@Jain2001:PRB] The effective hole–concentration–dependent exchange field lifts the spin degeneracy of the holes’ energy bands and thus goes hand in hand with hole spin polarization. The Mn ions sitting on Ga sites act as acceptors and are believed to give rise to acceptor levels which lie about 100 meV above the valence band edge.[@Jungwirth2006:RMP; @VanEsch1997:PRB; @Schneider1987:PRL] Photoluminescence experiments indicate the co-existence of holes bound to Mn sites and itinerant holes which participate in establishing magnetic order amongst the Mn ions below $T_c$.[@Sapega2009:PRB] Since structural defects of bulk and confined layers of Ga$_{1-x}$Mn$_x$As depend on growth conditions, Mn concentration $x$, and annealing procedures, it is not too surprising that experiments have come up with somewhat different conclusions regarding the “electronic structure of bulk Ga$_{1-x}$Mn$_x$As". 
More recent work seems to hint at the existence of an impurity band which forms at Mn concentrations above 1.5%, leading to a metal–insulator transition in high–quality GaMnAs.[@Burch2006:PRL; @Richardella2010:S; @Ohya2011:NP] The Fermi level in these samples is reported to lie in the impurity band and the valence–band properties remain largely GaAs–like.[@Ohya2011:NP] The radius of the Mn acceptor wave function has been measured to be about 2 nm, indicating that Mn$_{Ga}$ is not a shallow acceptor.[@Richardella2010:S] In contrast, other studies rather hint at a disordered top valence band edge containing the Fermi energy, with no isolated impurity band present.[@Jungwirth2006:RMP] Recent theoretical work has led to the conclusion that a tight–binding approach (within the coherent-potential approximation for disorder) and local–density functional theory + Hubbard U correction cannot account for an isolated impurity band.[@Masek2010:PRL] Other theoretical work has led to the conclusion that disorder may enhance ferromagnetic stability.[@Lee2007:PRB; @Berciu2002:PhysicaB] Ionized impurity scattering seems to play the dominant role in explaining Hall resistivity data.[@Yoon2004:JAP] Controlled growth of heterostructures containing crystalline layers of GaMnAs of high structural quality has remained a challenge to date. Nevertheless, tunneling spectroscopy has confirmed size quantization effects in GaMnAs quantum well layers.[@Ohya2007:PRB; @Ohya2010:PRL; @Ohya2011:NP] However, compared to crystalline GaAs well layers in an otherwise identical structure, the signature appears rather weak and, apparently, in no sample yet has negative differential conductivity due to resonances associated with GaMnAs well layers been observed. 
This hints at a significant concentration of defects, reminiscent of thin layers of amorphous Si, where similar transport studies have revealed size quantization effects but, to our knowledge, not negative differential conductivity.[@Miyazaki1987:PRL; @Li1993:PRB] Experimental evidence indicating a coexistence of localized and extended Bloch–like states in bulk GaMnAs generally allows the prediction that, in thin layers of GaMnAs, certainly for thicknesses $\leq 3$ nm, extended states will be subject to confinement effects (quantization and energy shifts) while localized states will remain largely unaffected. This is similar to external magnetic–field effects on point defects or quantization effects in amorphous Si.[@Poetz1983:SSC; @Li1993:PRB] Assuming that no significant additional defects arise in GaMnAs heterostructures, this makes plausible the experimental reports on quantum confinement effects arising from (ferromagnetic) GaMnAs layers in thin heterostructures.[@Ohya2007:PRB; @Ohya2010:PRL; @Ohya2011:NP] Indeed, when one succeeds in incorporating high–quality magnetic layers into semiconductor heterostructures, strongly spin-dependent carrier transmission is predicted due to spin-selective tunneling.[@Sankowski2007:PRB] In magnetic resonant tunneling structures of high structural quality this spin splitting may be used for a realization of spin valves, spin filtering, and spin switching devices [@Likovich2009:PRB; @Slobodskyy2003:PRL; @Slobodskyy2007:APL; @Petukhov2002:PRL; @Ohya2007:PRB; @Ohya2010:APL; @Ertler2006a:APL; @Ertler2007a:PRB], all representing important ingredients for spintronic-based device technology. 
In several experiments ferromagnetism has been generated in bulk GaMnAs by electrically or optically tailoring the hole density.[@Ohno2000:N; @Boukari2002:PRL] In 2d-confined systems containing layers of Ga$_{1-x}$Mn$_x$As the magnetic order depends strongly on the local spin density, which can be influenced by the tunneling current, resulting in a bias-dependent exchange splitting.[@Dietl1997:PRB; @Jungwirth1999:PRB] A spin-density-dependent exchange splitting in ferromagnetic structures enriches the dynamic complexity by offering a mechanism for external electrical control of the ferromagnetic state. This is in contrast to structures comprising paramagnetic DMS, such as ZnMnSe, in which a giant Zeeman splitting of the bands is induced by applying an external magnetic field of the order of a few Tesla. Nonmagnetic multi–well heterostructures already exhibit interesting dynamic nonlinear effects which are based, however, on different physical mechanisms, such as the formation of electric field domains and the motion of charge dipoles through the structure.[@Eaves1989:SSE; @Poetz1990:PRB; @Stegemann2007:NJP; @Bonilla2005:RPP] Recently it has been predicted that, in heterostructures containing paramagnetic DMS wells, this kind of phenomenon can be controlled by an external magnetic field.[@Sanchez2001:PRB; @Bonilla2007:APL; @Escobedo2009:PRB] Using an incoherent, sequential tunneling model we have proposed that [*ferromagnetic*]{} multi–well structures can generate ac spin currents, a phenomenon which originates from time–dependent inversion of the spin population in adjacent wells.[@Ertler2010:APL] In this article we investigate spin–selective hole transport in GaAs/AlGaAs/GaMnAs heterostructures within the limit of moderately thin samples with predominantly [*coherent*]{} transport characteristics. 
We apply a non-equilibrium Green’s function formalism based on a tight–binding Hamiltonian for the electronic structure, including self-consistency regarding the charge density and the exchange splitting of the effective potential, as well as charge transfer to the contacts. Both the carriers’ Coulomb interaction and the exchange coupling with the magnetic ions are described within a mean-field picture. Details of our model are presented in Sect. \[sec:model\]. The mechanism of electric control of magnetization switching is explored for two generic structures containing, respectively, one and two layers of Ga$_{1-x}$Mn$_x$As. Results are given in Sect. \[sec:results\]. We also provide a qualitative explanation for the occurrence of spin-polarized current oscillations, predicted in an earlier paper [@Ertler2010:APL], and investigate the influence of spin-flip processes at the interfaces on the total current spin polarization. Since disorder seems to play a major role in actual samples, we study the effect of substitutional disorder on a qualitative level and discuss the robustness of the effects predicted here. Relevance to experiment is discussed. In particular, we can give an explanation for the absence of exchange splitting (magnetization) under resonance bias conditions reported in a recent experiment and identify characteristic features which may be explored in future experiments. Summary and conclusions are given in Sect. \[sec:sum\]. Physical Model {#sec:model} ============== The magnetic semiconductor heterostructure is described by a two–band tight-binding Hamiltonian for the heavy holes $(J_3=\pm 3/2)$. 
It is given in the form $$\begin{aligned} H_s &=& \sum_{i,\sigma} \varepsilon_{i,\sigma} |i,\sigma\rangle\langle i,\sigma|\nonumber\\ &&+\sum_{i,\sigma\sigma'}t_{i,\sigma\sigma'}|i,\sigma\rangle\langle i+1,\sigma'|+ \mathrm{h.c.},\end{aligned}$$ where $\varepsilon_{i,\sigma}$ is the spin-dependent ($\sigma =\uparrow,\downarrow \equiv \pm 1$) onsite energy at lattice site $i$, $t_{i,\sigma\sigma'}$ denotes the hopping matrix between neighboring lattice sites, and $\mathrm{h.c.}$ abbreviates the Hermitian conjugate term. Spin-conserving hopping gives a diagonal matrix $t_{i,\sigma,\sigma'} = t\delta_{\sigma\sigma'}$, whereas spin-flip processes can be taken into account by introducing off-diagonal elements. The hopping parameter $t = -\hbar^2/(2 m^* a^2)$ depends on the effective mass $m^*$ and the lattice spacing $a$ between two neighboring lattice sites. The onsite energy $$\varepsilon_{i,\sigma} = U_i - e \phi-\frac{\sigma}{2}\Delta_i$$ includes the intrinsic hole band profile $U_i$ due to the band offset between different materials, the electrostatic potential $\phi$ with $e$ denoting the elementary charge, and the local exchange splitting $\Delta_i$. Near the band edges this model is equivalent to an effective–mass model; however, it has the advantage that structural imperfections and spin–flip processes can readily be incorporated. 
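For concreteness, the Hamiltonian $H_s$ above can be assembled as a dense $2N\times 2N$ matrix with one pair of spin states per site. This is an illustrative sketch (the function name and the optional spin-flip hopping `t_sf` are ours, not the authors' code; we set $e=1$ so `phi` is already an energy):

```python
import numpy as np

def build_hamiltonian(U, phi, Delta, t, t_sf=0.0):
    """Assemble the two-band tight-binding Hamiltonian H_s as a 2N x 2N
    matrix, basis ordering |i,up>, |i,down> per site.  t_sf is an optional
    spin-flip hopping element (off-diagonal in spin)."""
    N = len(U)
    H = np.zeros((2 * N, 2 * N))
    for i in range(N):
        for s, sigma in enumerate((+1, -1)):            # s=0: up, s=1: down
            # onsite energy: eps_{i,sigma} = U_i - phi_i - sigma*Delta_i/2
            H[2*i + s, 2*i + s] = U[i] - phi[i] - sigma * Delta[i] / 2
        if i < N - 1:
            for s in range(2):                          # spin-conserving hopping
                H[2*i + s, 2*(i+1) + s] = t
                H[2*(i+1) + s, 2*i + s] = t
            # spin-flip hopping between neighboring sites (h.c. included)
            H[2*i, 2*(i+1) + 1] = t_sf
            H[2*(i+1) + 1, 2*i] = t_sf
            H[2*i + 1, 2*(i+1)] = t_sf
            H[2*(i+1), 2*i + 1] = t_sf
    return H
```

With `t_sf=0` the matrix is block-diagonal in spin, recovering two decoupled spin channels.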
Moreover, it can be extended to arbitrary sophistication by introducing a larger set of basis functions.[@Sankowski2007:PRB; @Schulman1983:PRB; @Poetz1989:SM; @DiCarlo1994:PRB] Within a mean-field approach the exchange coupling between holes and magnetic impurities can be described by two interrelated effective magnetic fields, respectively, originating from a nonvanishing mean spin polarization of the ions’ d–electrons $\langle S_z\rangle$ and from the hole spin density $\langle s_z\rangle = (n_\uparrow-n_\downarrow)/2$.[@Dietl1997:PRB; @Jungwirth1999:PRB; @Fabian2007:APS] The exchange splitting of the hole bands is then given by $$\label{eq:delta} \Delta(z) = -J_\mathrm{pd} n_\mathrm{imp}(z) \langle S_z\rangle(z)~,$$ with $z$ being the longitudinal (growth) direction of the structure, $J_\mathrm{pd} > 0$ the coupling strength between the impurity spin and the carrier spin density (in the case of GaMnAs, p-like holes couple to the d-like impurity electrons), and $n_\mathrm{imp}(z)$ the impurity density profile of magnetically active ions. Since the magnetic order between the impurities is mediated by the holes, the effective impurity spin polarization depends on the mean hole spin polarization via $$\label{eq:Szgen} \langle S_z\rangle= - S B_S\left( \frac{S J_\mathrm{pd} \langle s_z \rangle}{k_B T}\right),$$ where $k_B$ denotes Boltzmann’s constant, $T$ is the lattice temperature, and $B_S$ is the Brillouin function of order $S$, here with $S = 5/2$ for the Mn impurity spin. Combining Eq. (\[eq:delta\]) and Eq. 
(\[eq:Szgen\]) leads to a self-consistent effective Hamiltonian for the holes $H_\mathrm{eff} = -\sigma \Delta(z)/2$ with $$\label{eq:delta1} \Delta(z) = J_\mathrm{pd} n_\mathrm{imp}(z) S B_S\left\{\frac{S J_\mathrm{pd} [n_\uparrow(z)- n_\downarrow(z)]}{2 k_B T}\right\}.$$ Note that in thermodynamic equilibrium of quasi-2D systems, such as a quantum well, the hole spin density polarization $\langle s_z\rangle$ is the key figure of merit for the appearance of ferromagnetism. Within a Hartree mean-field picture space-charge effects are taken into account self-consistently by calculating the electric potential from the Poisson equation, $$\label{eq:poisson} \frac{\mathrm{d}}{\mathrm{d}z} \epsilon \frac{\mathrm{d}}{\mathrm{d}z}\phi = e\left[ N_a(z) - n(z)\right],$$ where $\epsilon$ denotes the dielectric constant and $N_a$ is the acceptor density. The local hole density at site $|i\rangle$ is given by $$\label{eq:n} n(i) = \frac{-i}{A a}\sum_{k_{||},\sigma}\int\frac{\mathrm{d}E}{2\pi} G^<(E;i\sigma,i\sigma)~,$$ with $A$ being the in-plane cross-sectional area of the structure and $k_{||}$ the in-plane momentum. The non-equilibrium “lesser” Green’s function $G^<$ is calculated from the equation of motion $$\label{eq:gless} G^< = G^R\Sigma^<G^A$$ where $G^R$ and $G^A = [G^R]^+$ denote the retarded and advanced Green’s functions, respectively. The scattering function $\Sigma^<=\Sigma^<_l+\Sigma^<_r$ describes the inflow of particles from the left $(l)$ and right $(r)$ reservoirs [@Datta:1995] $$\Sigma^<_{l,r} = f_0(E-\mu_{l,r})(\Sigma^A_{l,r}-\Sigma^R_{l,r})~,$$ where $f_0(x) = [1+\exp(x/k_B T)]^{-1}$ is the Fermi distribution function and $\mu_l$ and $\mu_r$, respectively, denote the quasi–Fermi energies in the contacts. 
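Equation (\[eq:delta1\]) is a fixed-point problem for $\Delta$ that can be solved by direct iteration. In the sketch below the hole spin response is closed with a simple two-level ansatz, $n_\uparrow-n_\downarrow = n\tanh[\Delta/(2k_BT)]$, which stands in for the Green's-function densities of the full calculation; parameter values and helper names are illustrative (energies in eV, densities in nm$^{-3}$):

```python
import numpy as np

def brillouin(S, x):
    """Brillouin function B_S(x); the small-x branch (S+1)x/(3S)
    avoids the coth singularity at x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)        # placeholder to avoid 0/0
    a = (2 * S + 1) / (2 * S)
    b = 1.0 / (2 * S)
    full = a / np.tanh(a * xs) - b / np.tanh(b * xs)
    return np.where(small, (S + 1) * x / (3 * S), full)

def exchange_splitting(J_pd, n_imp, n_holes, kT, S=2.5, tol=1e-10):
    """Iterate Eq. (delta1): Delta = J*n_imp*S*B_S(S*J*(n_up-n_dn)/(2kT)),
    closing the loop with the two-level ansatz for n_up - n_dn."""
    Delta = 1e-3                # small seed breaks the Delta = 0 symmetry
    for _ in range(1000):
        sz = n_holes * np.tanh(Delta / (2 * kT))        # n_up - n_dn
        new = J_pd * n_imp * S * brillouin(S, S * J_pd * sz / (2 * kT))
        if abs(new - Delta) < tol:
            break
        Delta = new
    return Delta
```

At 4.2 K with $J_\mathrm{pd}=0.15$ eV nm$^3$ and $n_\mathrm{imp}=0.1$ nm$^{-3}$ the iteration saturates near $J_\mathrm{pd}n_\mathrm{imp}S\approx 37$ meV, the same order as the splittings discussed later in the text.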
The retarded and advanced self-energy terms $\Sigma^R = \Sigma_l^R+\Sigma_r^R$ and $\Sigma^A = [\Sigma^R]^+$ account for the coupling of the system region to the left and right semi–infinite chains, for which an analytic expression can be derived.[@Datta:1995; @Economou:1983] The retarded Green’s function is then given by $$\label{eq:gr} G^R = \left[E+i\eta-H_s-\Sigma^R\right]^{-1}~,$$ with $i\eta$ being a positive infinitesimal imaginary part of the energy. Together with adjusting the Fermi energies relative to the band edges in the leads to ensure asymptotic charge neutrality [@Poetz1989:JAP], the band splitting given by Eq. (\[eq:delta1\]), the Poisson equation, Eqs. (\[eq:poisson\]) and (\[eq:n\]), and the kinetic equations, Eqs. (\[eq:gless\]) and (\[eq:gr\]), have to be solved self-consistently until convergence to a steady–state solution is reached. Nonlinearities in both the Hartree and exchange terms can give rise to multi–stable behavior, as will be discussed below. If this self-consistency loop terminates with ferromagnetic ordering in the system, the effective one–particle potential is different for spin–up and spin–down holes, thus leading to spin filtering in transmission. After obtaining the self-consistent potential profile, the spin-dependent transmission probability $T_{\sigma'\sigma}(E)$ from the left to the right reservoir, as a matrix element of the structure’s S-matrix, can be calculated from special matrix elements of the retarded Green’s function [@DiCarlo1994:PRB] $$T_{\sigma'\sigma}= T_{\sigma'\leftarrow\sigma}(E) = \frac{v_{r,\sigma'}|G^R(E;r\sigma',l\sigma)|^2}{v_{l,\sigma} |G^0(E;l\sigma,l\sigma)|^2}$$ with $G^0$ denoting the free Green’s function of the asymptotic region, and $v_{l,\sigma}$ and $v_{r,\sigma}$, respectively, the spin-dependent group velocities in the leads. 
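For a spinless single-band 1D chain the lead self-energy has a well-known closed form, and the transmission can equivalently be obtained from the Caroli trace formula $T=\mathrm{Tr}[\Gamma_l G^R \Gamma_r G^A]$, which for a single channel agrees with the Green's-function expression above. A minimal sketch of this standard construction, not the paper's two-band implementation:

```python
import numpy as np

def lead_self_energy(E, eps0, t, eta=1e-9):
    """Retarded surface self-energy of a semi-infinite 1D chain with
    onsite energy eps0 and hopping t: Sigma = t^2 * g_surface, choosing
    the branch with Im g <= 0 (retarded)."""
    z = E + 1j * eta - eps0
    sq = np.sqrt(z**2 - 4 * t**2 + 0j)
    g = (z - sq) / (2 * t**2)
    if g.imag > 0:
        g = (z + sq) / (2 * t**2)
    return t**2 * g

def transmission(E, H_device, t_lead, eps_lead=0.0):
    """Caroli transmission T(E) = Tr[Gamma_l G^R Gamma_r G^A] for a
    device coupled at its first/last site to identical 1D leads."""
    N = H_device.shape[0]
    Sigma = np.zeros((N, N), dtype=complex)
    sL = lead_self_energy(E, eps_lead, t_lead)
    Sigma[0, 0] = sL
    Sigma[-1, -1] = sL
    GR = np.linalg.inv(E * np.eye(N) - H_device - Sigma)
    GammaL = np.zeros((N, N)); GammaL[0, 0] = -2 * sL.imag
    GammaR = np.zeros((N, N)); GammaR[-1, -1] = -2 * sL.imag
    GA = GR.conj().T
    return np.trace(GammaL @ GR @ GammaR @ GA).real
```

A useful sanity check is a perfect chain (device identical to the leads), for which $T(E)=1$ inside the band $|E-\varepsilon_0|<2|t|$ and $T(E)\approx 0$ outside it.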
$G^R(E;r\sigma',l\sigma)$ is computed most conveniently by adding one layer after another which requires merely 2x2 matrix inversions for the present two–band model.[@Economou:1983] Finally, the steady–state current is obtained from scattering theory (generalized Tsu-Esaki formula), $$\begin{aligned} j_{\sigma'\sigma} & = & \frac{e m^* k_B T}{(2\pi)^2\hbar^3} \int_0^\infty \mathrm{d} E\: T_{\sigma'\sigma } g(E)\nonumber\\ g(E) & = & \ln\left\{\frac{ 1 + \exp\left[(\mu_l-E)/k_B T\right]}{ 1 + \exp\left[(\mu_r-E)/k_B T\right]}\right\}.\end{aligned}$$ The applied bias $V=(\mu_l-\mu_r)/e$ is determined by the difference in quasi-Fermi levels of the contacts. We would like to point out that we conduct a genuine non–equilibrium study whereby the quasi–Fermi level positions are associated with the contacts. Self–consistency then leads to an effective, in general, spin–dependent one–particle potential. Thus one is not confronted with the question where to place the Fermi level in the GaMnAs layers. Essential to confinement effects is the existence of states near the top of the valence band edge of GaMnAs which have a coherence length of at least the layer thickness. Highly localized states, whether separated from or attached to the top valence band edge, will not be very sensitive to finite layer width. While in the bulk and thermal equilibrium the itinerant hole exchange model firmly relates hole density to T$_c$ and the Fermi energy, in a non-equilibrium tunneling situation this is different. The key question is whether tunneling can induce a net hole spin polarization or not. As is shown below, we find that this depends on structural properties as well as on the applied bias. \[sec:results\] Results ======================= We start with a symmetric double–barrier structure containing a single GaMnAs quantum well and investigate the role of resonant hole tunneling on the magnetic state of the device. 
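Before turning to the numerical results, note that the generalized Tsu-Esaki integral above reduces to a quadrature of $T(E)\,g(E)$. The sketch below leaves the physical prefactor $e m^* k_B T/[(2\pi)^2\hbar^3]$ as a free parameter and uses `logaddexp` to evaluate the logarithm of Fermi factors stably at low temperature; helper names are ours:

```python
import numpy as np

def supply_function(E, mu_l, mu_r, kT):
    """g(E) of the generalized Tsu-Esaki formula; logaddexp(0, x)
    computes ln(1 + exp(x)) without overflow for large x."""
    return (np.logaddexp(0.0, (mu_l - E) / kT)
            - np.logaddexp(0.0, (mu_r - E) / kT))

def tsu_esaki_current(transmission, mu_l, mu_r, kT, E_max, n=2000,
                      prefactor=1.0):
    """j = prefactor * integral of T(E) g(E) dE (trapezoid rule).
    `prefactor` stands in for e m* kB T / (2 pi)^2 hbar^3; leave it at 1
    for an I-V curve in arbitrary units."""
    E = np.linspace(0.0, E_max, n)
    y = np.array([transmission(e) for e in E]) * supply_function(E, mu_l, mu_r, kT)
    return prefactor * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))
```

At zero bias ($\mu_l=\mu_r$) the supply function vanishes identically, so the current is zero regardless of $T(E)$, a quick consistency check on any implementation.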
For the simulation we use generic parameters for GaMnAs and GaAs: $m^* = 0.4\:m_0$, $\epsilon_r = 12.9$, $V_\mathrm{bar} = 400$ meV, $\mu_l =\mu_r= 80$ meV, $d = 20 \AA$, $w = 25 \AA$, $n_\mathrm{imp} = 1\times10^{20}$ cm$^{-3}$, $J_{\mathrm{pd}} = 0.15$ eV nm$^3$ [@Lee2000:PRB], $T = 4.2$ K, where $m_0$ denotes the free electron mass, $\epsilon_r$ is the relative permittivity, $V_\mathrm{bar}$ is the bare barrier height of AlGaAs relative to GaAs, and $d$ and $w$, respectively, are the barrier and quantum well widths. The thermal equilibrium position of the Fermi energies $\mu_l =\mu_r$ was deliberately chosen close to the first resonance to promote ferromagnetic ordering in the well region at zero bias. The background charge $N_a$ is assumed to be only about 10% of the Mn doping $n_\mathrm{imp}$ since GaMnAs is a heavily compensated system, most likely due to Mn interstitial or antisite defects.[@VanEsch1997:PRB; @DasSarma2003:PRB] The hole densities in the quantum well can be adjusted by the Mn doping level and the quasi-Fermi levels in the contacts. As can be seen from Eq. (\[eq:delta1\]), the exchange splitting increases with the hole density in the case of a steady particle spin polarization. The value of the exchange coupling constant varies in the literature to some extent, $J_{\mathrm{pd}}\approx 0.04 - 0.16$ eV nm$^3$. Since we use an optimistic value for $J_{\mathrm{pd}}$, we assume only moderate Mn$_\mathrm{Ga}$ doping in the well. Higher Mn$_\mathrm{Ga}$ densities and smaller values of $J_\mathrm{pd}$ will give very similar results. Disorder effects in the GaMnAs layers are modeled by performing a configurational average over structures with randomly selected onsite and hopping matrix elements of the tight-binding Hamiltonian in the Mn-doped region. For each specific Hamiltonian the transport problem (I-V curve) is solved self–consistently. The final result is obtained by averaging over all configurations. Typically 300 configurations are used for one I-V curve. 
For the numerical simulation we assume a fixed 5% Mn concentration in the well and model substitutional disorder. If a Mn ion is present at a given lattice site in the well, the onsite energy is shifted according to a Gaussian distribution with a mean onsite energy shift of 40 meV and a standard deviation of 20 meV, reasonable values according to experimental results, which indicate either an impurity band slightly above the valence band edge or a defect–induced valence band tail.[@Ohya2011:NP; @Richardella2010:S; @Masek2010:PRL] The nearest–neighbor hopping matrix elements for such a site are sampled from a Gaussian with a standard deviation ($\sigma_t$) of between 5% and 25% of the bulk value $t$. This model for substitutional disorder leads to a hybridization of quantum-confined hole states, associated with bulk-like valence band states, and localized defect states arising from Mn$_{Ga}$ sites. The degree of hybridization depends on layer thickness since it controls the position of the quantized heavy-hole band relative to the energy of the localized Mn acceptor levels. This hybridization and the experimentally found Mn acceptor radius of about 2 nm call for rather thin GaMnAs layers to ensure quantization effects in transport.[@Ohya2011:NP] ![(Color online) Spin-dependent transmission probability of the double barrier structure at zero bias with and without disorder ($\sigma_t = 5 \%$).[]{data-label="fig:T"}](Tdis.eps){width="0.95\linewidth"} The calculated spin-filtering effect via distinct tunneling probabilities for spin-up and spin-down holes arising from the exchange term is displayed in Fig. \[fig:T\], in which the transmission probability at zero bias is plotted versus energy of incidence $E$. This figure also gives a qualitative account of the density of states in the GaMnAs well region discussed above. 
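The configurational averaging over substitutional disorder described above can be sketched as follows. The 5% occupancy, the 40/20 meV onsite distribution, and the relative hopping width $\sigma_t$ follow the text, while the helper names and the rule that a bond is perturbed when either of its end sites hosts a Mn ion are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def disorder_realization(N_sites, t, x_mn=0.05, de_mean=0.040, de_std=0.020,
                         sigma_t=0.05):
    """One disorder configuration of the Mn-doped well: each site is Mn
    with probability x_mn; Mn sites get a Gaussian onsite shift
    (mean 40 meV, std 20 meV, in eV here) and Gaussian hopping noise
    of relative width sigma_t on their bonds."""
    is_mn = rng.random(N_sites) < x_mn
    onsite = np.where(is_mn, rng.normal(de_mean, de_std, N_sites), 0.0)
    hops = np.full(N_sites - 1, t)
    noisy = rng.normal(t, abs(t) * sigma_t, N_sites - 1)
    mn_bond = is_mn[:-1] | is_mn[1:]        # bonds touching a Mn site
    hops = np.where(mn_bond, noisy, hops)
    return onsite, hops

def configurational_average(observable, n_cfg=300, **kw):
    """Average an observable (a function of one realization) over n_cfg
    disorder configurations, as done for each I-V curve in the text."""
    vals = [observable(*disorder_realization(**kw)) for _ in range(n_cfg)]
    return np.mean(vals, axis=0)
```

In the full calculation the observable would be the self-consistently computed I-V curve for each realization; here any function of the sampled onsite and hopping arrays can be plugged in.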
For an idealized GaMnAs layer which, at the valence band edge, is modeled as a GaAs layer plus exchange term, one obtains sharp spin doublets which are exchange–split by about 25-30 meV (see dashed versus solid lines in Fig. \[fig:T\]). The state of zero spin polarization of holes represents an unstable equilibrium since, below T$_c$, the slightest perturbation in spin polarization drives the system into a partially ordered lower-energy state (spontaneous symmetry breaking) due to the exchange interaction. The latter, in turn, accounts for different effective barrier profiles for spin–up and spin–down holes. It is this nonlinear effect that can be utilized to control the hole spin polarization and thus the favorable Mn spin orientation by structural design and applied bias. Placing the Fermi level near the first heavy–hole resonance promotes this effect, similar to the formation of Cooper pairs near the Fermi edge of an interacting electron gas. Spin–selective tunneling into and out of the Mn-doped wells, regardless of whether sequential or resonant, promotes hole spin polarization and, thus, alignment of the Mn spins as long as spin-depolarizing processes in the heterostructure are slow compared to the effective tunneling rates. Furthermore, disorder which leads to spectral broadening of the resonances may suppress spin–selective tunneling. Inspection of Fig. \[fig:T\] shows an asymmetric broadening and significant overlap of the transmission peaks under substitutional disorder, modeled as discussed above, which is particularly pronounced for the first heavy–hole resonance since it is most sensitive to potential fluctuations. The asymmetric (“anti–bonding") shift towards higher energies is due to hybridization with Mn acceptor levels above the valence band edge. The latter do not contribute to resonant transport. 
Even at the moderate hopping disorder of 5%, a significant overlap of the spin–up and spin–down resonances is obtained. Increased disorder and/or spin–flip scattering will eventually wash out spin–selectivity in transmission, and a destruction of ferromagnetic ordering under bias must be expected, since [*unpolarized*]{} holes are steadily fed into the GaMnAs regions. The exchange splitting at zero bias is reduced to 33 meV and 23 meV for 5% and 25% hopping disorder, respectively. Clearly, our effective one–dimensional modeling of (substitutional) disorder must be viewed as a limited estimate, since it corresponds to a cross–sectional average of transport through uncorrelated effective linear chains. Correlations from disorder parallel to the heterointerface will play a role in establishing coherence and ferromagnetic order in real structures relative to the idealized homogeneous mean–field model adopted here, since both ferromagnetic order and disorder effects are highly dependent upon spatial dimensionality.[@Kaxiras:2003; @Ashcroft:1976] Additional types of disorder from Mn clustering, Mn interstitials, etc. may be present in real structures. The role of disorder in the formation of ferromagnetic order in diluted magnetic semiconductors has been explored theoretically and, remarkably, certain forms of disorder have been predicted to promote ferromagnetic ordering.[@Lee2007:PRB; @Berciu2002:PhysicaB] Experimentally, STM studies have given information on the nature of defects near the surface of GaMnAs samples.[@Burch2006:PRL; @Richardella2010:S] ![(Color online) Averaged current spin polarization $|P_j|$ versus applied bias $V$ with and without disorder.[]{data-label="fig:Pjdis"}](Pjdis.eps){width="0.9\linewidth"} The current–voltage (I–V) characteristics, plotted in Fig. \[fig:IVgreen\], reveal the typical hysteretic behavior of resonant tunneling diodes for an up- and down-sweep of the applied bias. 
This well-known intrinsic bistability arises from the different charging of the well depending on the bias–sweep direction. Since our model ignores contributions from the light–hole band, the associated resonances are missing in the plot. The latter are important due to in–plane non–parabolicity effects in narrow layers; however, low–lying resonances associated with heavy and light holes are generally clearly separated in energy. For the present structure a light–hole–band resonance would be expected between the first two heavy–hole–associated resonances, thus strongly reducing the peak–to–valley ratio and contributing spin $\pm 1/2$ holes to the Mn–doped layers. Note that only single resonance peaks appear in the I–V characteristics, in spite of the spin doublets in the (zero–bias) transmission spectra. Furthermore, the drop in current beyond the first heavy–hole peak value (see inset in Fig. \[fig:IVgreen\]), unlike in ballistic models for nonmagnetic tunneling structures (see second peak), is gradual even in the absence of disorder. This broadening of the resonance can be attributed to ferromagnetic ordering away from the first resonance peak, which tends to widen the bias window for meeting the resonance condition. Disorder effects diminish the peak–to–valley ratio, but regions of negative differential resistance are maintained for weak disorder. Since experiments have not shown negative differential conductivity in such structures, we have increased the disorder and find its disappearance at a relatively high hopping disorder of about $\sigma_t = 25$ % (see Fig. \[fig:IVgreen\]). This indicates that defects other than Mn acceptors are present in real samples. Further numerical studies regarding this issue will be published elsewhere.[@Ertler2011:JCE] In Fig. \[fig:Delta\] the average exchange band spin splitting $|\Delta|$ in the quantum well, characterizing its magnetic state, is plotted versus applied bias. 
It shows that ferromagnetism can be controlled by the applied bias in this structure near the first current peak, remarkably, even when disorder is sufficiently strong to suppress negative differential conductivity. At zero bias, ferromagnetic ordering is energetically preferred since the Fermi level $\mu_l=\mu_r$ is located close to the edge of the first heavy–hole subband. As the bias is increased, tunneling into the upper doublet state becomes allowed from the emitter side, reducing the net hole–spin polarization (in spite of an increasing hole density); the effective exchange field decreases to zero and both spin–up and spin–down subbands go into resonance. Note that under moderate bias both emitter and collector contribute to the population of the well region. As the bias increases further, resonant population from the emitter gets shut off and the hole polarization is determined by the collector, leading once more to a build-up of the exchange field in a bias regime between 0.08 V and 0.2 V, until finally the collector quasi-Fermi level drops below the hole subbands and the well region becomes almost depleted of holes. For higher bias no further spontaneous magnetization has been obtained within our self–consistency loop. The overall shape of the bias dependence of the exchange splitting thus somewhat resembles its behavior versus temperature, with “$T=T_c$" corresponding to a bias of about 0.18 V. It arises from the fact that it is the number of [*spin–polarized*]{} holes which determines the maximum spontaneous magnetization for a given Mn$_{Ga}$ concentration. 
A simple model for the dependence of the Curie temperature in resonant tunneling systems has been given by one of us before.[@Ertler2008:APL] The voltage dependence of the Curie temperature under resonant tunneling has also been studied before.[@Ganguly2005:PRB] The displayed build–up and destruction of ferromagnetic order as a function of applied bias can be further understood in terms of the exchange interaction mediated by spin–polarized holes. In an ideal 2D particle system with parabolic dispersion there is no energy gain by magnetic ordering, due to the constant density of states associated with each spin subband: the energy gained by lowering one subband is exactly cancelled by raising the other. However, here we deal with a 3D heterostructure, which favors a spin-ordered state when the quasi–Fermi level lies near (within about half of the maximal exchange splitting) the bottom of a well subband resonance. If the temperature in the contacts is sufficiently low, one subband after the other will go through resonance. Thus, when only the lower spin–subband is in resonance, holes in the magnetic well will tend to be spin–polarized. However, as the bias is increased, eventually the subband with opposite spin orientation will also go into resonance, thus reducing spin polarization and magnetic ordering in the GaMnAs layer. When, for a given bias, the well region cannot be populated (lack of hole density of states) or no energy gain can be drawn from ferromagnetic ordering, loss of the latter will result. ![(Color online) Logarithmic local density of states (LDOS) as a function of energy at the bias $V = 0.085$ V. The self-consistent band profile is indicated by the solid line. The spin-splitting of the quasi–bound states is clearly visible.[]{data-label="fig:ldos"}](ldos1.eps){width="0.9\linewidth"} Interestingly, in the voltage range of $V = 0.07 - 0.09$ V no steady-state solution can be found for the low-disorder sample case. 
Instead, the solution for the magnetization oscillates, as shown in Fig. \[fig:Delta\], suggesting the occurrence of dynamic effects. This behavior can be understood qualitatively as follows: Figure \[fig:ldos\] shows a contour plot of the local density of states for an applied bias $V=0.085$ V lying in the critical voltage range. The self–consistent band profile is indicated by the solid line. For the emitter Fermi energy of $\mu_l = 0.08$ eV only the two ground-state (potentially spin–split) subbands in the quantum well participate in the tunneling transport. Under this bias condition and hole spin polarization, the lowest (spin–up) subband may be populated by holes from the collector side, whereas the spin–down level is almost empty since it cannot be reached elastically from either emitter or collector. Since the (steady–state) band splitting $\Delta$ is proportional to the spin polarization $(n_\uparrow - n_\downarrow)$, the well magnetization increases with spin polarization, pushing the spin–down level upwards in energy. At some point holes can start to tunnel from the emitter side into the spin–down level. This in turn decreases the total spin polarization and, hence, effectively pushes the spin–down level back below the emitter’s band edge. From there, the process starts anew, leading to an oscillatory behavior in well magnetization, tunneling current, and spin polarization.[@Ertler2010:APL] ![(Color online) Current spin polarization $P_j$ versus applied bias $V$ taking into account spin flips at the hetero-interfaces. The polarization is diminished for an increasing spin flip probability $p$, becoming unpolarized for $p = 1/2$.[]{data-label="fig:pflip"}](Pj.eps){width="0.9\linewidth"} Although the I-V curves in Fig. \[fig:IVgreen\] display no spin-split resonance peaks, but merely a broadening of the resonance, the steady–state current at low bias is spin-polarized, as shown in Fig. \[fig:Pjdis\]. 
As the bias is increased from zero, the current spin polarization is reduced and reversed before it drops to zero through resonance. Above resonance, current spin polarization reemerges (due to the action of the collector) and once more changes sign before dropping to, and remaining at, zero, in one-to-one agreement with the behavior of the exchange field. Although resonance peaks in the I-V curve may be suppressed by disorder, see Fig. \[fig:IVgreen\], the bias dependence of the spin polarization in the current may persist and may be observed in experiment as a bias–dependent spin valve. In order to study qualitatively the influence of spin flip processes at the hetero–interfaces on the total current spin polarization at the collector side, $P_j = (j_{\uparrow\uparrow}+j_{\uparrow\downarrow}-j_{\downarrow\uparrow}-j_{\downarrow\downarrow})/j$ with $j = \sum_{\sigma\sigma'} j_{\sigma\sigma'}$, we introduce off-diagonal hopping matrices $V_{i,\sigma\sigma'}$ in the tight-binding Hamiltonian. In general, for $N$ interfaces there are $2^N$ different flip configurations. For each of them a simulation is performed, and the results are finally averaged by weighting with the probability of occurrence of the configuration. In the case of a double-barrier structure we have four hetero-interfaces, giving $16$ configurations. However, flipping at the interfaces of the first barrier is inconsequential, since it changes neither the total current nor the spin polarization. A single flip at the third or fourth interface likewise does not modify the total current density, but inverts the spin polarization to $-P_j$. By introducing single spin flip probabilities $p_i, (i=1,\ldots,N)$ at interface $i$, the probability of a flipping process at the second barrier is given by $p_{\mathrm{flip}} = p_3(1-p_4)+(1-p_3)p_4$. 
Hence, the mean spin polarization results in $$\langle P_j\rangle = P_j(1-2p_\mathrm{flip}).$$ The bias-dependent current spin polarization for different spin flip probabilities (assuming $p_3 = p_4 = p$) is plotted in Fig. \[fig:pflip\]. The spin polarization decreases for increasing $p$, with $\langle P_j\rangle [p] = \langle P_j\rangle[1-p]$, reaching its minimum $\langle P_j\rangle = 0$ for $p = 1/2$. From this analysis we conclude that our results will not be altered significantly when the spin–orbit interaction is taken into account. ![IV-characteristics of a three-barrier structure with two coupled quantum wells made of GaMnAs. At the current maxima resonance conditions are fulfilled, i.e., the quasi–bound states of the adjacent wells become energetically aligned. The inset shows the local density of states at the applied bias $V = 0.03$ V corresponding to the first current maximum. []{data-label="fig:IVqws"}](IVqws1.eps){width="0.95\linewidth"} ![(Color online) Maximum exchange splitting $\Delta_{\mathrm{max}}$ in the first (solid) and second well (dashed line) as a function of the applied bias.[]{data-label="fig:Dqws"}](Dqws.eps){width="0.9\linewidth"} While spin–selective hole tunneling may allow electric control of ferromagnetic order, tunneling spectroscopy, in turn, provides a sensitive experimental tool for exploring the electronic structure of mesoscopic semiconductor systems.[@Smoliner1996:SST] Recently, tunneling spectroscopy experiments have been performed on thin layers of GaMnAs.[@Ohya2011:NP] The authors have verified ferromagnetic ordering in their samples (with Mn concentrations of typically $x\approx 5$ to $15\,\%$ and layer thicknesses ranging from 4 to 20 nm) and have measured their respective Curie temperatures. Their measurements indicate that Mn-induced defect states remain separated from the GaAs–like valence band edge, as evidenced by a pinning of the Fermi level. 
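The spin–flip averaging above amounts to a single expression. The following sketch (function name hypothetical) evaluates $\langle P_j\rangle$ and makes the symmetry $\langle P_j\rangle[p] = \langle P_j\rangle[1-p]$ and the zero at $p = 1/2$ explicit:

```python
def mean_current_spin_polarization(P_j, p3, p4):
    """Average current spin polarization at the collector.

    A single flip at interface 3 or 4 inverts P_j -> -P_j, so only the
    exclusive-or of the two flip events matters:
    p_flip = p3(1 - p4) + (1 - p3)p4, and <P_j> = P_j(1 - 2 p_flip).
    """
    p_flip = p3 * (1 - p4) + (1 - p3) * p4
    return P_j * (1 - 2 * p_flip)
```

For $p_3 = p_4 = p$ one gets $p_{\mathrm{flip}} = 2p(1-p)$, which is invariant under $p \to 1-p$ and equals $1/2$ at $p = 1/2$, reproducing the behavior seen in Fig. \[fig:pflip\].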
Furthermore, they find clear signatures of quantization effects in the transmission spectra of their samples and report an absence of spin splitting in the resonances, which they can fit to a GaAs-like $k\cdot p$ model including light-hole states. We believe that these experimental findings compare favorably with the general features of our results. Moreover, we have provided an explanation for the observed absence of ferromagnetic ordering near resonance in spite of the ferromagnetic behavior of the sample at zero bias. It would be interesting to perform spin–sensitive tunneling spectroscopy on these samples since, according to Fig. \[fig:pflip\], such a measurement gives more detailed information about the bias dependence of ferromagnetic ordering than the I-V curve and its derivatives. It could verify the prediction that ferromagnetic order achieved at zero bias is destroyed near resonance, and that electric switching back and forth between the ferromagnetic and paramagnetic state can be achieved. We now explore the feasibility of selective magnetization switching among several magnetic layers of high structural quality. We investigate a three-barrier structure with two adjacent GaMnAs quantum wells, choosing an asymmetric structure with the second well being thinner than the first one ($w_1 = 25\,\AA$, $w_2 = 20\,\AA$). All other parameters are as in the previous structure. Quantum confinement gives rise to a higher ground state energy in the second quantum well at zero bias. The resonant alignment of the ground state subbands of the two wells is therefore achieved at a finite voltage, as shown in the inset of Fig. \[fig:IVqws\], corresponding to the first maximum in the current-voltage characteristics at about $V=0.02$ V, which is plotted in Fig. \[fig:IVqws\]. The second current maximum results from the resonance of the first excited state subbands of both wells. 
In addition to a possible exchange splitting, the finite separating barriers cause the energy levels in the two quantum wells to split further into bonding and antibonding subband states. However, for our structure the middle barrier is too thick, and the natural energy broadening of the quasi–bound states is too large, for this additional splitting to be resolved in the local density of states. Having two coupled quantum wells allows one to realize several magnetic configurations. The maximum (steady–state) exchange splitting of the two wells as a function of the applied bias is plotted in Fig. \[fig:Dqws\], revealing three different regions. For low voltages, both wells are magnetized owing to the build–up of spin polarization caused by the resonance of the ground-state subband levels with populated reservoir states. Exchange causes a relative shift in the density of states for spin–up and spin–down holes which, in turn, stabilizes ferromagnetism in both layers. When the second well goes off resonance, for $V > 0.03$ V, the accumulated spin polarization in the second well is preserved, since for voltages up to $V \approx 0.1$ V the collector Fermi energy $\mu_r$ is still higher than the bottom of its ground-state subbands, thus maintaining the spin polarization. For voltages in the interval $0.12$ V$ < V < 0.33$ V, the first well remains magnetic, whereas the second well becomes nonmagnetic, since its ground-state subbands are no longer filled from the collector side. At sufficiently high bias, $V > 0.33$ V, the first well also becomes demagnetized, since holes can no longer resonantly populate its two lowest (now degenerate) subbands from either emitter or collector, resulting in a completely nonmagnetic structure. Several simplifying assumptions have been made in the present analysis; these, along with experimental aspects, deserve discussion. The present model is based on an effective-mass-like two–band approach for the heavy holes in the structure. 
This approximation should be at least qualitatively correct, since the applied bias is kept below typically 0.2 V and most of the phenomena discussed here occur at even lower bias. We are currently working on more realistic tight–binding formulations using a significantly increased number of basis states, in conjunction with density functional plus dynamic mean-field models, to arrive at a more detailed and realistic electronic structure.[@Chioncel2011:PRB] Impurity scattering effects have been accounted for on a phenomenological level within the TB model. Our ballistic model neglects electron–phonon scattering within the heterostructure altogether, and the electron–electron interaction is described within mean–field theory. In thin structures such as the ones studied here, where effective tunneling rates are higher than carrier–phonon scattering rates and optical–phonon transitions are suppressed energetically, the former assumption should be rather well fulfilled and should not significantly alter the subband population within the heterostructure. Electron–electron scattering may play a role; however, as long as it does not involve spin–flip processes, it should not influence our basic conclusions much. Clearly, the effects studied here require low temperatures, first, to favor ferromagnetic ordering and, second, to preserve strong hole–spin polarization in the carrier injection process. It is well known that, at least at low temperatures, structural imperfections are the main source of reduction of nonlinear effects, such as the peak–to–valley ratio in the I–V curve.[@Poetz1989:SM; @Poetz1989:SSE; @Chevoir1990:SS; @Mizuta:1995] It is most likely the difficulty of clean sample preparation that has slowed experimental progress on thin–layer semimagnetic semiconductor heterostructures. 
High quality doping profiles and high quality interfaces must be achieved within one growth process.[@Ohya2007:PRB; @Likovich2009:PRB; @Ohya2010:APL] Growth of good-quality DMS layers requires low-temperature molecular beam epitaxy which, however, adversely affects interface quality. Usually, thin GaAs spacer layers are inserted to smooth the surfaces.[@Ohya2007:PRB; @Ohya2010:APL] Furthermore, GaMnAs layers must be thick enough to support ferromagnetism. Qualitatively, all structural imperfections lead to a broadening of resonances. Once the latter becomes comparable to the (theoretical) maximum of the exchange-induced spin splitting, spin–selective tunneling and, hence, tunneling–induced control of magnetic ordering may be suppressed. Even in the presence of disorder, as long as it does not go hand in hand with strong spin–flip processes, achieving bias control of the hole–spin polarization in the GaMnAs layers should allow one to manipulate the magnetization. Conclusions and Outlook {#sec:sum} ======================= In summary, we have used a ballistic steady–state transport model to investigate bias–induced magnetic multi–stability in AlGaAs/MnGaAs quantum well structures. Ferromagnetic exchange, as well as the hole Coulomb interaction, is treated within a self–consistent mean–field approximation. Substitutional disorder is treated phenomenologically within a tight–binding model. Our studies indicate that in these systems ferromagnetic ordering can be controlled selectively by an externally applied bias. The underlying mechanism is found in spin–selective tunneling due to the anti–ferromagnetic exchange interaction between itinerant heavy holes and localized Mn d–electrons. In suitably designed heterostructures, the applied electric bias allows control of the ferromagnetic state, as well as of the electric and spin current density. 
For the simplest case, a double-barrier structure containing a GaMnAs well, we predict that ferromagnetic ordering in the well, when present at zero bias, is lost under bias near the first heavy-hole resonance, allowing switching back and forth between the magnetic and nonmagnetic state of the well. For GaMnAs multi-well structures we predict that the loss of ferromagnetic order can be engineered structurally to occur at a different applied bias for each individual layer. Within our model we are able to provide a possible explanation for the absence of exchange splitting near resonances, as observed in recent tunneling spectroscopy measurements on thin GaMnAs layers.[@Ohya2011:NP] We generally predict that ferromagnetic order which may be achieved in GaMnAs quantum well layers at zero bias tends to be destroyed under resonance conditions, since the well region is then swept by unpolarized holes. Under favorable conditions, detailed in the main text, ferromagnetic order may be reestablished above resonance. Such behavior should be revealed experimentally by spin–sensitive tunneling spectroscopy.[@Ando2005:APL] In previous work, based on a complementary time–dependent sequential tunneling model including intra–well scattering, we have predicted that, under specific bias conditions, the interplay of transport and magnetic properties can result in robust self-sustained charge and magnetization oscillations.[@Ertler2010:APL] The present model, albeit based on the resonant–tunneling picture, backs the possibility of such phenomena by predicting bias regions in which no steady–state solution for the current exists. Disorder and spin–flip effects have been modeled on a phenomenological level. 
We find that disorder due to Mn taking a Ga site alone should not suffice to destroy spin–selective tunneling, nor should spin flips at a rate expected in these structures, for example from the spin–orbit interaction, significantly suppress the spin polarization of the steady–state current. As expected, our analysis does show that disorder and spin flip processes reduce the total average current spin polarization, however, less efficiently than they suppress the resonance peaks in the I–V curve. We conclude that multi–well structures containing GaMnAs layers may allow one to realize various bias-dependent magnetic configurations. While the current investigation of bias-induced effects considers only a bias in the longitudinal direction, i.e., a 2–terminal configuration, applying additional gates in the transverse direction (a multi–terminal configuration) should provide an additional control knob to move spin–split subbands in and out of resonance with the contact states and/or to inject spin–polarized holes into the Mn–doped regions. Such a structure has been studied in a recent experiment.[@Ohya2010:APL] Acknowledgment ============== This work has been supported by the FWF project P21289-N16. 
[^1]: email:[email protected]
--- author: - John Fearnley and Rahul Savani bibliography: - 'references.bib' title: 'The Complexity of All-switches Strategy Improvement' --- #### **Roadmap.** In Section \[sec:prelim\], we give a formal definition of parity games, and more specifically the *one-sink* games used by Friedmann that we also use for our construction. We then give a high-level overview of how [[all-switches strategy improvement]{}]{} works for one-sink games, since it operates more simply for one-sink games than for general games; a more detailed technical exposition of [[all-switches strategy improvement]{}]{} for one-sink games can be found in the appendix. Our main reduction starts with an iterated circuit evaluation problem. In Section \[sec:construct\], we describe our main construction of a parity game that implements iterated circuit evaluation when strategy improvement is run on it. In Section \[sec:strategies\], we describe the sequence of strategies that [[all-switches strategy improvement]{}]{} will go through as it implements the iterated circuit evaluation. In Section \[sec:proof\] we show that the construction works as claimed, and thus prove that [<span style="font-variant:small-caps;">EdgeSwitch</span>]{} is [$\mathtt{PSPACE}$]{}-hard for parity games. In Section \[sec:other\], we show how this result for [<span style="font-variant:small-caps;">EdgeSwitch</span>]{} extends to strategy improvement algorithms for other games. In Section \[sec:optstrat\], we show how to augment our construction with an extra gadget to give [$\mathtt{PSPACE}$]{}-hardness results for [<span style="font-variant:small-caps;">OptimalStrategy</span>]{}. In Section \[sec:conc\], we state some open problems. Preliminaries {#sec:prelim} ============= Parity games ------------ A parity game is defined by a tuple $G = (V, {V_{\text{Even}}}, {V_{\text{Odd}}}, E, \operatorname{pri})$, where $(V, E)$ is a directed graph. 
The sets ${V_{\text{Even}}}$ and ${V_{\text{Odd}}}$ partition $V$ into the vertices belonging to player Even, and the vertices belonging to player Odd, respectively. The *priority* function $\operatorname{pri}: V \rightarrow \{1, 2, \dots\}$ assigns a positive natural number to each vertex. We assume that there are no terminal vertices, which means that every vertex is required to have at least one outgoing edge. The strategy improvement algorithm of Vöge and Jurdziński also requires that we assume, without loss of generality, that every priority is assigned to at most one vertex. A strategy for player Even is a function that picks one outgoing edge for each Even vertex. More formally, a *deterministic positional strategy* for Even is a function $\sigma : {V_{\text{Even}}}\rightarrow V$ such that, for each $v \in {V_{\text{Even}}}$ we have that $(v, \sigma(v)) \in E$. Deterministic positional strategies for player Odd are defined analogously. Throughout this paper, we will only consider deterministic positional strategies, and from this point onwards, we will refer to them simply as *strategies*. We use ${\Sigma_{\text{Even}}}$ and ${\Sigma_{\text{Odd}}}$ to denote the set of strategies for players Even and Odd, respectively. A *play* of the game is an infinite path through the game. More precisely, a play is a sequence $v_0, v_1, \dots $ such that for all $i\in {\mathbb N}$ we have $v_i \in V$ and $(v_i, v_{i+1}) \in E$. Given a pair of strategies $\sigma \in {\Sigma_{\text{Even}}}$ and $\tau \in {\Sigma_{\text{Odd}}}$, and a starting vertex $v_0$, there is a unique play that occurs when the game starts at $v_0$ and both players follow their respective strategies. So, we define $\operatorname{Play}(v_0, \sigma, \tau) = v_0, v_1, \dots$, where for each $i \in {\mathbb N}$ we have $v_{i+1} = \sigma(v_i)$ if $v_i \in {V_{\text{Even}}}$, and $v_{i+1} = \tau(v_i)$ if $v_i \in {V_{\text{Odd}}}$. 
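As a concrete illustration, the unique play $\operatorname{Play}(v_0, \sigma, \tau)$ can be computed by following the two strategies until a vertex repeats; since both strategies are positional, the play is eventually periodic. The dictionary-based encoding of the game and the function name are assumptions for illustration only.

```python
def play(v0, sigma, tau, v_even):
    """Compute Play(v0, sigma, tau) for positional strategies.

    sigma maps each Even vertex to its chosen successor, tau does the same
    for Odd vertices, and v_even is the set of Even's vertices.  We stop at
    the first repeated vertex and return (finite prefix, repeated cycle),
    which together describe the infinite play prefix + cycle + cycle + ...
    """
    seq, seen = [], {}
    v = v0
    while v not in seen:
        seen[v] = len(seq)
        seq.append(v)
        v = sigma[v] if v in v_even else tau[v]
    return seq[:seen[v]], seq[seen[v]:]
```

For example, with Even vertices {0, 2}, $\sigma = \{0 \mapsto 1, 2 \mapsto 3\}$ and $\tau = \{1 \mapsto 2, 3 \mapsto 2\}$, the play from vertex 0 has prefix 0, 1 and then repeats the cycle 2, 3 forever.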
Given a play $\pi = v_0, v_1, \dots$ we define $$\operatorname{MaxIo}(\pi) = \max \{ p \; : \; \exists \text{ infinitely many } i \in {\mathbb N}\text{ s.t. } \operatorname{pri}(v_i) = p\},$$ to be the maximum priority that occurs *infinitely often* along $\pi$. We say that a play $\pi$ is *winning* for player Even if $\operatorname{MaxIo}(\pi)$ is even, and we say that $\pi$ is winning for Odd if $\operatorname{MaxIo}(\pi)$ is odd. A strategy $\sigma \in {\Sigma_{\text{Even}}}$ is a *winning strategy* for a vertex $v \in V$ if, for every strategy $\tau \in {\Sigma_{\text{Odd}}}$, we have that $\operatorname{Play}(v, \sigma, \tau)$ is winning for player Even. Likewise, a strategy $\tau \in {\Sigma_{\text{Odd}}}$ is a winning strategy for $v$ if, for every strategy $\sigma \in {\Sigma_{\text{Even}}}$, we have that $\operatorname{Play}(v, \sigma, \tau)$ is winning for player Odd. The following fundamental theorem states that parity games are *positionally determined*. In every parity game, the set of vertices $V$ can be partitioned into *winning sets* $(W_\text{0}, W_\text{1})$, where Even has a positional winning strategy for all $v \in W_\text{0}$, and Odd has a positional winning strategy for all $v \in W_\text{1}$. The computational problem that we are interested in is, given a parity game, to determine the partition $(W_\text{0}, W_\text{1})$. #### **Priorities.** As we have mentioned, the strategy improvement algorithm that we consider requires that every priority is assigned to at most one vertex. This is unfortunately a rather cumbersome requirement when designing more complex constructions. To help with this, we define a shorthand for specifying priorities. Let $c \in {\mathbb N}$, let $i,l \in \{1, \dots, |V|\}$, let $j \in \{0, 1, 2\}$, and let $e \in \{0,1\}$. We define $\operatorname{P}(c, i, l, j, e) = 6 \cdot |V|^2 \cdot c + 6 \cdot |V| \cdot i + 6 \cdot l + 2 \cdot j + e$. 
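The shorthand $\operatorname{P}$ is straightforward to implement and sanity-check. In the sketch below (names assumed), the parity of the result equals $e$, because every other term is even, and distinct argument tuples in the stated ranges yield distinct priorities.

```python
def priority(c, i, l, j, e, n):
    """P(c, i, l, j, e) = 6n^2·c + 6n·i + 6l + 2j + e for a game with n = |V|.

    (c, i, l, j) act as a lexicographic key fixing the magnitude of the
    priority, while the final bit e fixes its parity (0 even, 1 odd).
    Injectivity holds since 6l + 2j + e <= 6n + 5 < 6n + 6, the offset of
    the next i-block (and similarly for the c-blocks of size 6n^2).
    """
    assert 1 <= i <= n and 1 <= l <= n and j in (0, 1, 2) and e in (0, 1)
    return 6 * n * n * c + 6 * n * i + 6 * l + 2 * j + e
```

Enumerating all argument tuples for a small $n$ confirms both the parity property and injectivity exhaustively.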
The first four parameters should be thought of as a lexicographic ordering, which determines how large the priority is. The final number $e$ determines whether the priority is odd or even. Note that $\operatorname{P}(c, i, l, j, e)$ is an injective function, so if we ensure that the same set of arguments are never used twice, then we will never assign the same priority to two different vertices. One thing to note is that, since this priority notation is rather cumbersome, it is not possible to use it in our diagrams. Instead, when we draw parts of the construction, we will use *representative* priorities, which preserve the order and parity of the priorities used in the gadgets, but not their actual values. Strategy improvement -------------------- #### **Valuations.** We now describe the strategy improvement algorithm of Vöge and Jurdziński [@jurdzinski00b] for solving parity games, which will be the primary focus of this paper. The algorithm assigns a *valuation* to each vertex $v$ under every pair of strategies $\sigma \in {\Sigma_{\text{Even}}}$ and $\tau \in {\Sigma_{\text{Odd}}}$. Let $p$ be the largest priority that is seen infinitely often along $\operatorname{Play}(v, \sigma, \tau)$. Since every priority is assigned to at most one vertex, there is a unique vertex $u$ with $\operatorname{pri}(u) = p$. We use this vertex to decompose the play: let $P(v, \sigma, \tau)$ be the finite simple path that starts at $v$ and ends at $u$, and let $C(v, \sigma, \tau)$ be the infinitely-repeated cycle that starts at $u$ and ends at $u$. We can now define the valuation function $\operatorname{Val}_{\text{VJ}}^{\sigma, \tau}(v) = (p, S, d)$ where $p$ is as above and: - $S$ is the set of priorities on the finite path that are strictly greater than $p$: $$S = \{\operatorname{pri}(u) \; : \; u \in P(v, \sigma, \tau) \text{ and } \operatorname{pri}(u) > p\}.$$ - $d$ is the length of the finite path: $d = |P(v, \sigma, \tau)|$. We now define an order over valuations. 
First we define an order $\preceq$ over priorities: we have that $p \prec q$ if one of the following holds: - $p$ is odd and $q$ is even. - $p$ and $q$ are both even and $p < q$. - $p$ and $q$ are both odd and $p > q$. Furthermore, we have that $p \preceq q$ if either $p \prec q$ or $p = q$. Next we define an order on the sets of priorities that are used in the second component of the valuation. Let $P, Q \subset {\mathbb N}$. We first define: $$\operatorname{MaxDiff}(P, Q) = \max\bigl((P \setminus Q) \cup (Q \setminus P)\bigr).$$ Let $d = \operatorname{MaxDiff}(P, Q)$; then we define $P \sqsubset Q$ to hold if one of the following conditions holds: - $d$ is even and $d \in Q$. - $d$ is odd and $d \in P$. Furthermore, we have that $P \sqsubseteq Q$ if either $P = Q$ or $P \sqsubset Q$. Finally, we can provide an order over valuations. We have that $(p, S, d) \prec (p', S', d')$ if one of the following conditions holds: - $p \prec p'$. - $p = p'$ and $S \sqsubset S'$. - $p = p'$ and $S = S'$ and $p$ is odd and $d < d'$. - $p = p'$ and $S = S'$ and $p$ is even and $d > d'$. Furthermore, we have that $(p, S, d) \preceq (p', S', d')$ if either $(p, S, d) \prec (p', S', d')$ or $(p, S, d) = (p', S', d')$. #### **Best responses.** Given a strategy $\sigma \in {\Sigma_{\text{Even}}}$, a *best response* against $\sigma$ is a strategy $\tau^{*} \in {\Sigma_{\text{Odd}}}$ such that, for every $\tau \in {\Sigma_{\text{Odd}}}$ and every vertex $v$ we have: $\operatorname{Val}_{\text{VJ}}^{\sigma, \tau}(v) \preceq \operatorname{Val}_{\text{VJ}}^{\sigma, \tau^{*}}(v)$. Vöge and Jurdziński proved the following properties. For every $\sigma \in {\Sigma_{\text{Even}}}$ a best response $\tau^*$ can be computed in polynomial time. We define $\operatorname{Br}(\sigma)$ to be an arbitrarily chosen best response strategy against $\sigma$. Furthermore, we define $\operatorname{Val}_{\text{VJ}}^{\sigma}(v) = \operatorname{Val}_{\text{VJ}}^{\sigma, \operatorname{Br}(\sigma)}(v)$.
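The three orders defined above (on priorities, on priority sets, and on full valuations) translate directly into code; a Python sketch with function names of our own choosing:

```python
def pri_less(p, q):
    # p ≺ q: any odd priority is worse than any even one;
    # among evens larger is better, among odds larger is worse.
    if p % 2 != q % 2:
        return p % 2 == 1            # odd ≺ even
    if p % 2 == 0:
        return p < q                 # both even
    return p > q                     # both odd

def max_diff(P, Q):
    # MaxDiff(P, Q): largest priority in the symmetric difference
    return max(P ^ Q)

def set_less(P, Q):
    # P ⊏ Q: decided by the largest priority on which P and Q differ
    if P == Q:
        return False
    d = max_diff(P, Q)
    return d in Q if d % 2 == 0 else d in P

def val_less(a, b):
    # (p, S, d) ≺ (p', S', d'), compared lexicographically
    (p, S, d), (q, T, e) = a, b
    if p != q:
        return pri_less(p, q)
    if S != T:
        return set_less(S, T)
    return d < e if p % 2 == 1 else d > e
```

Note how the path-length tiebreak flips with the parity of $p$: under a bad (odd) cycle priority a shorter path is better, under a good (even) one a longer path is better.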
#### **Switchable edges.** Let $\sigma$ be a strategy and $(v, u) \in E$ be an edge such that $\sigma(v) \ne u$. We say that $(v, u)$ is *switchable* in $\sigma$ if $\operatorname{Val}_{\text{VJ}}^{\sigma}(\sigma(v)) \prec \operatorname{Val}_{\text{VJ}}^{\sigma}(u)$. Furthermore, we define a *most appealing* outgoing edge at a vertex $v$ to be an edge $(v, u)$ such that, for all edges $(v, u')$ we have $\operatorname{Val}_{\text{VJ}}^{\sigma}(u') \preceq \operatorname{Val}_{\text{VJ}}^{\sigma}(u)$. There are two fundamental properties of switchable edges that underlie the strategy improvement technique. The first property is that switching any subset of the switchable edges will produce an improved strategy. Let $\sigma$ be a strategy, and let $W \subseteq E$ be a set of switchable edges in $\sigma$ such that, for each vertex $v$, there is at most one edge of the form $(v, u) \in W$. *Switching* $W$ in $\sigma$ creates a new strategy $\sigma[W]$ where for all $v$ we have: $$\sigma[W](v) = \begin{cases} u & \text{ if $(v, u) \in W$,} \\ \sigma(v) & \text{otherwise.} \end{cases}$$ We can now formally state the first property. Let $\sigma$ be a strategy and let $W \subseteq E$ be a set of switchable edges in $\sigma$ such that, for each vertex $v$, there is at most one edge of the form $(v, u) \in W$. We have: - For every vertex $v$ we have $\operatorname{Val}_{\text{VJ}}^{\sigma}(v) \preceq \operatorname{Val}_{\text{VJ}}^{\sigma[W]}(v)$. - There exists a vertex $v$ for which $\operatorname{Val}_{\text{VJ}}^{\sigma}(v) \prec \operatorname{Val}_{\text{VJ}}^{\sigma[W]}(v)$. The second property concerns strategies with no switchable edges. A strategy $\sigma \in {\Sigma_{\text{Even}}}$ is *optimal* if for every vertex $v$ and every strategy $\sigma' \in {\Sigma_{\text{Even}}}$ we have $\operatorname{Val}_{\text{VJ}}^{\sigma'}(v) \preceq \operatorname{Val}_{\text{VJ}}^{\sigma}(v)$. A strategy with no switchable edges is optimal. 
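The switching operation $\sigma[W]$ is a one-line update; a sketch with strategies encoded as dicts from vertices to chosen successors (an encoding of ours):

```python
def switch(sigma, W):
    """sigma[W]: redirect each vertex v with an edge (v, u) in W to u.
    Assumes W contains at most one outgoing edge per vertex."""
    new = dict(sigma)            # leave the original strategy untouched
    for v, u in W:
        new[v] = u
    return new
```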
Vöge and Jurdziński also showed that winning sets for both players can be extracted from an optimal strategy. If $\sigma$ is an optimal strategy, then $W_\text{0}$ contains every vertex $v$ for which the first component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(v)$ is even, and $W_\text{1}$ contains every vertex $v$ for which the first component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(v)$ is odd. Hence, to solve the parity game problem, it is sufficient to find an optimal strategy. #### **The algorithm.** The two properties that we have just described give rise to an obvious *strategy improvement* algorithm that finds an optimal strategy. The algorithm begins by selecting an arbitrary strategy $\sigma \in {\Sigma_{\text{Even}}}$. In each iteration, the algorithm performs the following steps: 1. If there are no switchable edges, then terminate. 2. Otherwise, select a set $W \subseteq E$ of switchable edges in $\sigma$ such that, for each vertex $v$, there is at most one edge of the form $(v, u) \in W$. 3. Set $\sigma := \sigma[W]$ and go to step 1. By the first property, each iteration of this algorithm produces a strictly better strategy according to the $\prec$ ordering, and therefore the algorithm must eventually terminate. However, the algorithm can only terminate when there are no switchable edges, and therefore the second property implies that the algorithm will always find an optimal strategy. The algorithm given above does not specify a complete algorithm, because it does not specify *which* subset of switchable edges should be chosen in each iteration. Indeed, there are many variants of the algorithm that use a variety of different *switching rules*. In this paper, we focus on the *greedy all-switches* switching rule. This rule switches every vertex that has a switchable edge, and if there is more than one switchable edge, it arbitrarily picks one of the most appealing edges. 
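The greedy all-switches loop itself is short; the sketch below abstracts the expensive part (computing valuations under Odd's best response) behind a callback, so the interfaces are our own simplification rather than the paper's notation:

```python
from functools import cmp_to_key

def greedy_all_switches(even_vertices, succ, sigma, valuation, less):
    """even_vertices: Even's vertices; succ[v]: successors of v;
    valuation(sigma): map v -> valuation under Br(sigma);
    less(a, b): the strict order ≺ on valuations."""
    key = cmp_to_key(lambda a, b: -1 if less(a, b) else (1 if less(b, a) else 0))
    while True:
        val = valuation(sigma)
        new = dict(sigma)
        for v in even_vertices:
            # most appealing outgoing edge under the current valuation
            best = max(succ[v], key=lambda u: key(val[u]))
            if less(val[sigma[v]], val[best]):   # switchable?
                new[v] = best
        if new == sigma:
            return sigma          # no switchable edges: sigma is optimal
        sigma = new
```

Because every iteration strictly improves at least one valuation, the loop terminates; the returned strategy has no switchable edges and is therefore optimal.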
#### **One-sink games.** Friedmann observed that, for the purposes of showing lower bounds, it is possible to simplify the Vöge-Jurdziński algorithm by restricting the input to be a *one-sink game* [@F11]. A one-sink parity game contains a sink vertex $s$ such that $\operatorname{pri}(s) = 1$. An even strategy $\sigma \in {\Sigma_{\text{Even}}}$ is called a *terminating strategy* if, for every vertex $v$, the first component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(v)$ is $1$. Formally, a parity game is a one-sink parity game if: - There is a vertex $s \in V$ such that $\operatorname{pri}(s) = 1$, and $(s, s)$ is the only outgoing edge from $s$. Furthermore, there is no vertex $v$ with $\operatorname{pri}(v) = 0$. - All optimal strategies are terminating. Now, suppose that we apply the Vöge-Jurdziński algorithm, and furthermore suppose that the initial strategy is terminating. Since the initial and optimal strategies are both terminating, we have that, for every strategy $\sigma$ visited by the algorithm and every vertex $v$, the first component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(v)$ is $1$, and so it can be ignored. Furthermore, since there is no vertex with priority $0$, the second component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(v)$ must be different from the second component of $\operatorname{Val}_{\text{VJ}}^{\sigma}(u)$, for every pair of distinct vertices $v, u \in V$. Therefore, the third component of the valuation can be ignored. Thus, for a one-sink game, we can define a simplified version of the Vöge-Jurdziński algorithm that only uses the second component. So, we define $\operatorname{Val}^{\sigma}(v)$ to be equal to the second component of $\operatorname{Val}_\text{VJ}^\sigma(v)$, and we carry out strategy improvement using the definitions given above, but with $\operatorname{Val}^{\sigma}(v)$ substituted for $\operatorname{Val}_\text{VJ}^\sigma(v)$.
Note, in particular, that in this strategy improvement algorithm, an edge $(v, u)$ is switchable in $\sigma$ if $\operatorname{Val}^{\sigma}(\sigma(v)) \sqsubset \operatorname{Val}^{\sigma}(u)$. In our proofs, we will frequently want to determine the maximum difference between two valuations. For this reason, we introduce the following notation. For every strategy $\sigma$, and every pair of vertices $v, u \in V$, we define $\operatorname{MaxDiff}^\sigma(v, u) = \operatorname{MaxDiff}(\operatorname{Val}^{\sigma}(v), \operatorname{Val}^{\sigma}(u))$. Circuit iteration problems -------------------------- #### **The problems.** To prove our [$\mathtt{PSPACE}$]{}-completeness results, we will reduce from *circuit iteration problems*, which we now define. A *circuit iteration* instance is a triple $(F, B, z)$, where: - $F : \{0, 1\}^n \rightarrow \{0, 1\}^n$ is a function represented as a boolean circuit $C$, - $B \in \{0, 1\}^n$ is an initial bit-string, and - $z$ is an integer such that $1 \le z \le n$. We use standard notation for function iteration: given a bit-string $B \in \{0,1\}^n$, we recursively define $F^{1}(B) = F(B)$, and $F^{i}(B) = F(F^{i-1}(B))$ for all $i > 1$. We now define two problems that will be used as the starting point for our reduction. Both are decision problems that take as input a circuit iteration instance $(F, B, z)$. - ${\textsc{BitSwitch}\xspace}(F, B, z)$: decide whether there exists an even $i \le 2^n$ such that the $z$-th bit of $F^{i}(B)$ is $1$. - ${\textsc{CircuitValue}\xspace}(F, B, z)$: decide whether the $z$-th bit of $F^{2^n}(B)$ is $1$. The requirement for $i$ to be even in ${\textsc{BitSwitch}\xspace}$ is a technical requirement that is necessary in order to make our reduction to strategy improvement work.
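Both decision problems have an obvious operational reading; a brute-force sketch (exponential in $n$, of course — the point of the reduction is that strategy improvement is forced to carry out exactly this computation), with bit-strings as tuples and $z$ indexed from $1$:

```python
def iterate(F, B, i):
    # F^i(B)
    for _ in range(i):
        B = F(B)
    return B

def bit_switch(F, B, z, n):
    # BitSwitch: is the z-th bit of F^i(B) equal to 1 for some even i <= 2^n?
    cur = B
    for i in range(1, 2**n + 1):
        cur = F(cur)
        if i % 2 == 0 and cur[z - 1] == 1:
            return True
    return False

def circuit_value(F, B, z, n):
    # CircuitValue: is the z-th bit of F^{2^n}(B) equal to 1?
    return iterate(F, B, 2**n)[z - 1] == 1
```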
The fact that these problems are [$\mathtt{PSPACE}$]{}-complete should not be too surprising, because $F$ can simulate a single step of a space-bounded Turing machine, so when $F$ is iterated, it simulates a run of the space-bounded Turing machine. The following lemma was shown in [@FS14]. [[@FS14 Lemma 7]]{} \[lem:pspace\] ${\textsc{BitSwitch}\xspace}$ and ${\textsc{CircuitValue}\xspace}$ are [$\mathtt{PSPACE}$]{}-complete. #### **Circuits.** For the purposes of our reduction, we must make some assumptions about the format of the circuits that represent $F$. Let $C$ be a boolean circuit with $n$ input bits, $n$ output bits, and $k$ gates. We assume, w.l.o.g., that all gates are or-gates or not-gates. The circuit will be represented as a list of gates indexed $1$ through $n+k$. The indices $1$ through $n$ represent the $n$ *inputs* to the circuit. Then, for each $i > n$, we have: - If gate $i$ is an or-gate, then we define ${I}_1(i)$ and ${I}_2(i)$ to give the indices of its two inputs. - If gate $i$ is a not-gate, then we define ${I}(i)$ to give the index of its input. The gates $k+1$ through $k + n$ correspond to the $n$ *output bits* of the circuit, respectively. For the sake of convenience, for each input bit $i$, we define ${I}(i) = k+i$, which indicates that, if the circuit is applied to its own output, input bit $i$ should copy from output bit ${I}(i)$. Moreover, we assume that the gate ordering is topological. That is, for each or-gate $i$ we assume that $i > {I}_1(i)$ and $i > {I}_2(i)$, and we assume that for each not-gate $i$ we have $i > {I}(i)$. For each gate $i$, let $d(i)$ denote the *depth* of gate $i$, which is the length of the longest path from $i$ to an input bit. So, in particular, the input bits are at depth $0$. Observe that we can increase the depth of a gate by inserting dummy or-gates: given a gate $i$, we can add an or-gate $j$ with ${I}_1(j) = i$ and ${I}_2(j) = i$, so that $d(j) = d(i)+1$.
We use this fact in order to make the following assumptions about our circuits: - For each or-gate $i$, we have $d({I}_1(i)) = d({I}_2(i))$. - There is a constant $c$ such that, for every output bit $i \in \{k+1, \dots, k+n\}$, we have $d(i) = c$. From now on, we assume that all circuits that we consider satisfy these properties. Note that, since all output gates have the same depth, we can define $d(C) = d(k+1)$, which is the depth of all the output bits of the circuit. Given an input bit-string $B \in \{0, 1\}^n$, the truth values of each of the gates in $C$ are fixed. We define $\operatorname{Eval}(B, i) = 1$ if gate $i$ is true for input $B$, and $\operatorname{Eval}(B, i) = 0$ if gate $i$ is false for input $B$. Given a circuit $C'$, we define the *negated form* of $C'$ to be a transformation of $C'$ in which each output bit is negated. More formally, we transform $C'$ into a circuit $C$ using the following operation: for each output bit $n+i$ in $C'$, we add a [$\textsc{Not}$]{} gate $n+k+i$ with ${I}(n+k+i) = n+i$. The Construction {#sec:construct} ================ Our goal is to show that ${\textsc{EdgeSwitch}\xspace}$ is [$\mathtt{PSPACE}$]{}-complete by reducing from the circuit iteration problem ${\textsc{BitSwitch}\xspace}$. Let $(F, B, z)$ be the input to the circuit iteration problem, and let $C$ be the negated form of the circuit that computes $F$. Throughout this section, we will use $n$ as the bit-length of $B$, and $k = |C|$ as the number of gates used in $C$. We will use ${\ensuremath{\textsc{Or}}\xspace}$, ${\ensuremath{\textsc{Not}}\xspace}$, and ${\ensuremath{\textsc{Input/Output}}\xspace}$ to denote the set of or-gates, not-gates, and input/output-gates, respectively. We will force greedy all-switches strategy improvement to compute $F^i(B)$ for each value of $i$. To do this, we will represent each gate in $C$ by a gadget, and applying strategy improvement to these gadgets will cause the correct output values for each gate to be computed.
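As a reference point for what the gadgets must reproduce, evaluating a circuit in the list format above is a single pass in index order (the topological assumption makes this sound); a sketch with an illustrative encoding of our own:

```python
def evaluate(n, gates, B):
    """Compute Eval(B, i) for every gate.  `gates` maps each index i > n to
    ('or', i1, i2) or ('not', i1); indices 1..n are the input bits."""
    val = {i: B[i - 1] for i in range(1, n + 1)}
    for i in sorted(gates):          # i > I_1(i), I_2(i), I(i) by assumption
        g = gates[i]
        if g[0] == 'or':
            val[i] = val[g[1]] | val[g[2]]
        else:                        # not-gate
            val[i] = 1 - val[g[1]]
    return val
```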
One complication is that, after computing $F(B)$, the circuit must then copy the output values back to the input gates. To resolve this issue, the construction contains two entire copies of the circuit, numbered $0$ and $1$, which take turns in computing $F$. First circuit $0$ computes $F(B)$, then circuit $1$ computes $F(F(B))$, then circuit $0$ computes $F(F(F(B)))$, and so on. There will be a specific edge $e$ in the construction that is switched if and only if the $z$-th bit of $F^i(B)$ is $1$ for some even $i$. The technical requirement for $i$ to be even is explained by the fact that there are two copies of the circuit, and each copy only computes $F^i(B)$ for either the case where $i$ is even or the case where $i$ is odd. The construction also uses Friedmann’s exponential-time examples as a *clock*. Friedmann’s examples are designed to force greedy all-switches strategy improvement to mimic a binary counter. The construction contains two entire copies of Friedmann’s example, so that each copy of the circuit is equipped with its own clock. The fundamental idea is that, each time the clock ticks, i.e., each time the binary counter advances to the next bit-string, the circuit will start computing $F$. Thus the two clocks must be out of phase relative to one another, so that the two circuits correctly alternate. In the rest of this section, we describe the construction. We begin by giving an overview of Friedmann’s example, both because it plays a key role in our construction, and because the [$\textsc{Not}$]{} gate gadgets in our circuits are a modification of the bit-gadget used by Friedmann. We then move on to describe the gate gadgets, and how they compute the function $F$. Friedmann’s exponential-time example ------------------------------------ In this section we give an overview of some important properties of Friedmann’s exponential-time examples. In particular, we focus on the properties that will be important for our construction.
A more detailed description of the example can be found in Friedmann’s original paper [@F11]. The example works by forcing greedy all-switches strategy improvement to simulate an $n$ bit binary counter. It consists of two components: a *bit gadget* that is used to store one of the bits of the counter, and a *deceleration lane* that is used to ensure that the counter correctly moves from one bit-string to the next. #### **The deceleration lane.** The deceleration lane has a specified length $m$; Friedmann’s construction contains one copy of the deceleration lane, of length $2n$. Figure \[fig:decel\] shows a deceleration lane of length $4$. Remember that our diagrams use representative priorities, which preserve the order and parity of the priorities used, but not their values. [Figure \[fig:decel\]: a deceleration lane of length $4$. The vertices $t_4, t_3, t_2, t_1$ (priorities $13$, $11$, $9$, $7$) are chained by edges $t_l \to t_{l-1}$ down to $t_0$ (priority $16$); each vertex $a_l$ (priorities $14$, $12$, $10$, $8$) has an edge to $t_l$; and every $t_l$, including $t_0$, has edges to $r$ and $s$.] A key property of the deceleration lane is that greedy all-switches strategy improvement requires $m$ iterations to find the optimal strategy. Consider an initial strategy in which each vertex $t_i$ uses the edge to $r$, and suppose that the valuation of $r$ is always larger than the valuation of $s$. First note that, since there is a large even priority on $t_0$, the optimal strategy is for every vertex $t_i$, with $i \ge 1$, to use the edge to $t_{i-1}$. However, since the vertices $t_i$ with $i \ge 1$ are all assigned odd priorities, in the initial strategy only the edge from $t_1$ to $t_0$ is switchable.
Furthermore, once this edge has been switched, only the edge from $t_2$ to $t_1$ is switchable. In this way, the gadget ensures that $m$ iterations are required to move from the initial strategy to the optimal strategy for this gadget. Another important property is that the gadget can be *reset*. This is achieved by having a single iteration in which the valuation of $s$ is much larger than the valuation of $r$, followed by another iteration in which the valuation of $r$ is much larger than the valuation of $s$. In the first iteration all vertices $t_i$ switch to $s$, and in the second iteration all vertices switch back to $r$. Note that after the second iteration, we have arrived back at the initial strategy described above. #### **The bit gadget.** The bit gadget is designed to store one bit of a binary counter. The construction will contain $n$ copies of this gadget, which will be indexed $1$ through $n$. Figure \[fig:bit\] gives a depiction of a bit gadget with index $i$. [Figure \[fig:bit\]: the bit gadget with index $i$, containing the vertices $d_i$ (priority $3$), $e_i$ (priority $4$), $f_i$ (priority $15$), $h_i$ (priority $16$), $g_i$ (priority $5$), and $k_i$ (priority $13$). There are edges from $g_i$ to $f_i$ and $k_i$, from $f_i$ to $e_i$, from $e_i$ to $h_i$, and from $h_i$ to $k_i$, as well as a two-cycle between $d_i$ and $e_i$. The vertex $d_i$ also has edges to $a_1, a_2, \dots, a_{2i}$ in the deceleration lane and to $r$ and $s$, while $k_i$ has edges to $g_{i+1}, \dots, g_n$ and to $x$.] The current value of the bit for index $i$ is represented by the choice that the current strategy makes at $d_i$. More precisely, for every strategy $\sigma$ we have: - If $\sigma(d_i) = e_i$, then bit $i$ is $1$ in $\sigma$. - If $\sigma(d_i) \ne e_i$, then bit $i$ is $0$ in $\sigma$. The Odd vertex $e_i$ plays a crucial role in this gadget.
If $\sigma(d_i) = e_i$, then Odd’s best response is to use edge $(e_i,h_i)$, to avoid creating the even cycle between $d_i$ and $e_i$. On the other hand, if $\sigma(d_i) \ne e_i$, then Odd’s best response is to use $(e_i,d_i)$, to avoid seeing the large even priority at $h_i$. One thing to note is that, in the case where $\sigma(d_i) \ne e_i$, the edge to $e_i$ is always switchable. To prevent $d_i$ from immediately switching to $e_i$, we must ensure that there is always a more appealing outgoing edge from $e_i$, so that the greedy all-switches rule will switch that edge instead. The edges from $d_i$ to the deceleration lane provide this. Once $t_1$ has switched to $t_0$, the edge from $d_i$ to $a_{1}$ becomes more appealing than the edge to $e_i$, once $t_2$ has switched to $t_1$, the edge from $d_i$ to $a_{2}$ becomes more appealing than the edge to $e_i$, and so on. In this way, we are able to prevent $d_i$ from switching to $e_i$ for $2i$ iterations by providing outgoing edges to the first $2i$ vertices of the deceleration lane. #### **The vertices $s$ and $r$.** The vertex $s$ has outgoing edges to every vertex $f_i$ in the bit gadgets, and the vertex $r$ has outgoing edges to every vertex $g_i$ in the bit gadgets. If $i$ is the index of the least significant $1$ bit, then $s$ chooses the edge to $f_i$ and $r$ chooses the edge to $g_i$. The priority assigned to $r$ is larger than the priority assigned to $s$, which ensures that the valuation of $r$ is usually larger than the valuation of $s$, as required to make the deceleration lane work. When the counter moves from one bit-string to the next, the index of the least significant $1$ changes to some $i' \ne i$. The vertex $s$ switches to $f_{i'}$ one iteration before the vertex $r$ switches to $g_{i'}$. This creates the single iteration in which the valuation of $s$ is larger than the valuation of $r$, which resets the deceleration lane.
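Taken together, the bit gadgets encode an $n$-bit counter, least significant bit first. Reading the counter off a strategy, and the increment step the construction forces (flip the least significant zero, clear all lower bits), can be sketched as follows; the dict-of-choices encoding is our own:

```python
def read_counter(sigma, n):
    # bit i is 1 iff the strategy at d_i chooses the edge to e_i
    return tuple(1 if sigma[('d', i)] == ('e', i) else 0
                 for i in range(1, n + 1))

def advance(K):
    """One counter step on an LSB-first bit-string: flip the least
    significant zero and set every lower bit to 0 (K must contain a zero)."""
    i = next(j for j, b in enumerate(K, start=1) if b == 0)
    return tuple(1 if j == i else (b if j > i else 0)
                 for j, b in enumerate(K, start=1))
```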
#### **Simulating a binary counter.** To simulate a binary counter, we must do two things. Firstly, we must ensure that if the counter is currently at some bit-string ${K}\in \{0, 1\}^n$, then the *least significant zero* in ${K}$ is flipped to a one. Secondly, once this has been done, all bits whose index is smaller than the least significant zero must be set to $0$. If these two operations are always performed, then strategy improvement will indeed count through all binary strings. The least significant zero is always flipped because each bit $i$ has $2i$ edges to the deceleration lane. Since the purpose of the deceleration lane is to prevent the vertex $d_i$ switching to $e_i$, the vertex $d_{i'}$, where $i'$ is the index of the least significant zero, is the first to run out of edges, and it subsequently switches to $e_{i'}$. Once this has occurred, all bits with index smaller than the least significant zero are set to $0$ due to the following chain of events. The vertex $s$ switches to $f_{i'}$, and then the vertex $d_{i''}$ in all bits with index $i'' < i'$ will be switched to $s$. Since $d_{i''}$ no longer uses the edge to $e_{i''}$, the bit has now been set to $0$. #### **Our modifications to Friedmann’s example.** In order to use Friedmann’s example as a clock, we make a few minor adjustments to it. Firstly, we make the deceleration lane longer. Friedmann’s example uses a deceleration lane of length $2n$, but we use a deceleration lane of length ${\ensuremath{2k + 4n + 6}\xspace}$. Furthermore, while the vertex $d_i$ has outgoing edges to each $a_j$ with $j \le 2i$ in Friedmann’s version, in our modified version the vertex $d_i$ has outgoing edges to each $a_j$ with $j \le 2i + 2k + 2n + 6$. The reason for this is that Friedmann’s example can move from one bit-string to the next in as little as four iterations, but we need more time in order to compute the circuit $F$.
By making the deceleration lane longer, we slow down the construction, and ensure that there are at least $2k + 2n + 6$ iterations before the clock moves from one bit-string to the next. The second change that we make is to change the priorities, because we need to make room for the gadgets that we add later. However, we have not made any fundamental changes to the priorities: the ordering of priorities between the vertices and their parity is maintained. We have simply added larger gaps between them. The following table specifies the version of the construction that we use. Observe that two copies are specified: one for $j = 0$ and the other for $j = 1$. Furthermore, observe that the vertex $x$ will be the sink in our one-sink game.

| Vertex | Conditions | Edges | Priority | Player |
|---|---|---|---|---|
| $t_0^j$ | $j \in \{0, 1\}$ | $r^j$, $s^j$ | $\operatorname{P}(2, 0, 2k+4n+4, j, 0)$ | Even |
| $t_l^j$ | $j \in \{0, 1\}$, $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ | $r^j$, $s^j$, $t^j_{l-1}$ | $\operatorname{P}(2, 0, l, j, 1)$ | Even |
| $a_l^j$ | $j \in \{0, 1\}$, $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ | $t^j_l$ | $\operatorname{P}(2, 0, l+1, j, 0)$ | Even |
| $d^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $e^j_i$, $s^j$, $r^j$, $a^j_l$ for $1 \le l \le {\ensuremath{2k + 2n + 6}\xspace}+ 2i$ | $\operatorname{P}(1, i, 0, j, 1)$ | Even |
| $e^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $h^j_i$, $d^j_i$ | $\operatorname{P}(1, i, 1, j, 0)$ | Odd |
| $g^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $f^j_i$ | $\operatorname{P}(1, i, 2, j, 1)$ | Even |
| $k^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $x$, $g^j_l$ for $i < l \le n$ | $\operatorname{P}(8, i, 0, j, 1)$ | Even |
| $f^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $e^j_i$ | $\operatorname{P}(8, i, 1, j, 1)$ | Even |
| $h^j_i$ | $j \in \{0, 1\}$, $1 \le i \le n$ | $k^j_i$ | $\operatorname{P}(8, i, 2, j, 0)$ | Even |
| $s^j$ | $j \in \{0, 1\}$ | $x$, $f^j_l$ for $1 \le l \le n$ | $\operatorname{P}(7, 0, 0, j, 0)$ | Even |
| $r^j$ | $j \in \{0, 1\}$ | $x$, $g^j_l$ for $1 \le l \le n$ | $\operatorname{P}(7, 0, 1, j, 0)$ | Even |
| $x$ | | $x$ | $\operatorname{P}(0, 0, 0, 0, 1)$ | Even |

Our construction ---------------- #### **Circuits.** For each gate in the construction, we design a gadget that computes the output of that gate. The idea is that greedy all-switches strategy improvement will compute these gates in depth order. Starting from an initial strategy, the first iteration will compute the outputs for all gates of depth $1$, the next iteration will use these outputs to compute the outputs for all gates of depth $2$, and so on. In this way, after $k$ iterations of strategy improvement, the outputs of the circuit will have been computed. We then use one additional iteration to store these outputs in an input/output gadget. Strategy improvement valuations will be used to represent the output of each gate. Each gate $i$ has a state $o^j_i$, and the valuation of this state will indicate whether the gate evaluates to true or false. In particular, the following rules will be followed. \[prop:rules\] In every strategy $\sigma$ we have the following properties. 1. Before the gate has been evaluated, we will have $\operatorname{Val}^\sigma(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 2. If the gate has been evaluated to false, we will continue to have $\operatorname{Val}^\sigma(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 3.
If the gate has been evaluated to true, then we will instead have $\operatorname{Val}^\sigma(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^j_i)$, and $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i)$ will be a large even priority. The input/output gadgets are connected to both circuits, and these gadgets have two modes. 1. When circuit $j$ is computing, the gadget is in *output mode*, where it reads the output of circuit $j$ and stores it. 2. When circuit $1-j$ is computing, the gadget is in *input mode*, where it outputs the value that was stored from the previous computation into circuit $1-j$. Therefore, the gates of depth $1$ in circuit $j$ read their input from the input/output gadgets in circuit $1-j$, while the input/output gadgets in circuit $j$ read their input from the outputs of circuit $j$. To formalise this, we introduce the following notation. For every ${\ensuremath{\textsc{Not}}\xspace}$-gate, we define $\operatorname{InputState}(i,j)$ as follows: $$\operatorname{InputState}(i,j) = \begin{cases} o^{1-j}_{{I}(i)} & \text{if $d(i) = 1$,} \\ o^j_{{I}(i)} & \text{if $d(i) > 1$.} \end{cases}$$ For every ${\ensuremath{\textsc{Or}}\xspace}$-gate, and every $l \in \{1, 2\}$, we define $\operatorname{InputState}(i,j,l)$ as follows: $$\operatorname{InputState}(i,j,l) = \begin{cases} o^{1-j}_{{I}_l(i)} & \text{if $d(i) = 1$,} \\ o^j_{{I}_l(i)} & \text{if $d(i) > 1$.} \end{cases}$$ #### **The clocks.** As we have mentioned, we use two copies of Friedmann’s example to act as clocks in our construction. These clocks will be used to drive the computation. In particular, the vertices $r^j$ and $s^j$ will play a crucial role in synchronising the two circuits. As described in the previous section, when the clock *advances*, i.e., when it moves from one bit-string to the next, there is a single iteration in which the valuation of $s^j$ is much larger than the valuation of $r^j$. This event will trigger the computation. 
- The iteration in which the valuation of $s^0$ is much larger than the valuation of $r^0$ will trigger the start of computation in circuit $0$. - The iteration in which the valuation of $s^1$ is much larger than the valuation of $r^1$ will trigger the start of computation in circuit $1$. In order for this approach to work, we must ensure that the two clocks are properly synchronised. In particular, the gap between computation starting in circuit $j$ and computation starting in circuit $1-j$ must be at least $k+3$, to give enough time for circuit $j$ to compute the output values, and for these values to be stored. We now define notation for this purpose. First we define the number of iterations that it takes for a clock to move from bit-string ${K}$ to ${K}+ 1$. For every bit-string ${K}\in \{0, 1\}^n$, we define $\operatorname{Lsz}({K})$ to be the index of the *least significant zero* in ${K}$: that is, the smallest index $i$ such that ${K}_i = 0$. For each ${K}\in \{0, 1\}^n$, we define: $$\operatorname{Length}({K}) = \Bigl( {\ensuremath{2k + 2n + 6}\xspace}\Bigr) + 2\operatorname{Lsz}({K}) + 5.$$ This term can be understood as the length of the deceleration lane to which all bits in the clock have edges, plus the number of extra iterations it takes to flip the least-significant zero, plus five extra iterations needed to transition between the two bit-strings. Next we introduce the following *delay* function, which gives the amount of time each circuit spends computing. For each $j \in \{0, 1\}$ and each ${K}\in \{0, 1\}^n$, we define: $$\operatorname{Delay}(j, {K}) = \begin{cases} \Bigl( d(C) + 3 \Bigr) + 2n & \text{if $j = 0$,} \\ \Bigl( d(C) + 3 \Bigr) + 2 \cdot \operatorname{Lsz}({K}) + 5 & \text{if $j = 1$.} \end{cases}$$ Circuit $1$ starts computing $\operatorname{Delay}(0, {K})$ iterations after Circuit $0$ started computing, and Circuit $0$ starts computing $\operatorname{Delay}(1, {K})$ iterations after circuit $1$ started computing. 
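The timing functions just defined are straightforward to compute; a sketch with bit-strings as tuples, LSB first:

```python
def lsz(K):
    # Lsz(K): 1-based index of the least significant zero (K must contain one)
    return next(i for i, b in enumerate(K, start=1) if b == 0)

def length(K, n, k):
    # Length(K): iterations the clock takes to move from K to K + 1
    return (2 * k + 2 * n + 6) + 2 * lsz(K) + 5

def delay(j, K, n, k, dC):
    # Delay(j, K): iterations until the other circuit starts computing;
    # dC stands for the circuit depth d(C)
    if j == 0:
        return (dC + 3) + 2 * n
    return (dC + 3) + 2 * lsz(K) + 5
```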
Observe that $\operatorname{Delay}(0, {K}) + \operatorname{Delay}(1, {K}) = \operatorname{Length}({K})$, which ensures that the two circuits do not drift relative to each other. The term $d(C) + 3$ in each of the delays ensures that there is always enough time to compute the circuit, before the next circuit begins the subsequent computation. #### **Or gates.** The gadget for a gate $i \in {\ensuremath{\textsc{Or}}\xspace}$ is quite simple, and is shown in Figure \[fig:or\]. It is not difficult to verify that the three rules given in Property \[prop:rules\] hold for this gate. Before both inputs have been evaluated, the best strategy at $o^j_i$ is to move directly to $r^j$, since the valuation of both inputs is lower than the valuation of $r^j$. Note that in this configuration the valuation of $o^j_i$ is smaller than the valuation of $r^j$, since $o^j_i$ has been assigned an odd priority. Since, by assumption, both inputs have the same depth, they will both be evaluated at the same time. If they both evaluate to false, then nothing changes, and the optimal strategy at $o^j_i$ will still be $r^j$. This satisfies the second rule. On the other hand, if at least one input evaluates to true, then the optimal strategy at $o^j_i$ is to switch to the corresponding input state. Since the valuation of this input state is now bigger than that of $r^j$, the valuation of $o^j_i$ will also be bigger than that of $r^j$, so the third rule is also satisfied.
[Figure \[fig:or\]: the ${\ensuremath{\textsc{Or}}\xspace}$-gate gadget. The vertex $o^j_i$ (priority $1$) has edges to $r^j$, $s^j$, $o^j_{{I}_1(i)}$, and $o^j_{{I}_2(i)}$.]

  Vertex    Conditions                                                   Edges                                                                                   Priority                            Player
  --------- ------------------------------------------------------------ --------------------------------------------------------------------------------------- ----------------------------------- --------
  $o^j_i$   $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Or}}\xspace}$   $s^j$, $r^j$, $\operatorname{InputState}(i,j,1)$, $\operatorname{InputState}(i,j,2)$    $\operatorname{P}(4, i, 0, j, 1)$   Even

#### **Not gates.** The construction for a gate $i \in {\ensuremath{\textsc{Not}}\xspace}$ is more involved. The gadget is quite similar to a bit-gadget from Friedmann’s construction. However, we use a special *modified deceleration lane*, which is shown in Figure \[fig:modified\].

[Figure \[fig:modified\]: the modified deceleration lane. Each lane state $t^j_{i,l}$ has edges to $r^j$, $s^j$, and $t^j_{i,l-1}$, and each $a^j_{i,l}$ has an edge to $t^j_{i,l}$; the state $t^j_{i,d(i)}$ instead has a single edge to the output state $o^j_{{I}(i)}$ of the input gate.]

The modified deceleration lane is almost identical to Friedmann’s deceleration lane, except that state $t^j_{i, d(i)}$ is connected to the output state of the input gate. The idea is that, for the first $d(i) - 1$ iterations, the deceleration lane behaves as normal. Then, in iteration $d(i)$, the input gate is evaluated. If it evaluates to true, then the valuation of $t^j_{i, d(i)}$ will be large, and the deceleration lane continues switching as normal. If it evaluates to false, then the valuation of $t^j_{i, d(i)}$ will be low, and the deceleration lane will stop switching.
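The stalling behaviour just described can be summarised in a small sketch (our own, with hypothetical names): after $m$ iterations, the lane has switched up to a position capped by whether the input gate evaluated to true.

```python
def lane_progress(m, d_i, input_true, lane_len):
    """Highest lane position that has switched after m iterations of
    the modified deceleration lane for a gate of depth d(i).

    Positions below d(i) switch one per iteration as usual; position
    d(i) and beyond are reached only if the input gate evaluated to
    true, otherwise the lane stalls at position d(i) - 1."""
    cap = lane_len if input_true else d_i - 1
    return min(m, cap)
```

So with $d(i) = 4$, a false input freezes the lane at position $3$, no matter how many further iterations occur.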
[Figure \[fig:not\]: the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget. The vertex $d^j_i$ (priority $3$) has edges to $s^j$, $r^j$, $e^j_i$, and every $a^j_{i,l}$; the vertex $e^j_i$ (priority $4$) has edges to $d^j_i$ and $h^j_i$; the vertex $o^j_i$ (priority $15$) has an edge to $e^j_i$; and the vertex $h^j_i$ (priority $16$) has an edge to $r^j$.]

The ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget, which is shown in Figure \[fig:not\], is a simplified bit gadget that is connected to the modified deceleration lane. As in Friedmann’s construction, the strategy chosen at $d^j_i$ represents the output of the gate. In a strategy $\sigma$, the gate outputs $1$ if $\sigma(d^j_i) = e^j_i$, and it outputs $0$ otherwise. As we know, Friedmann’s bit gadget is distracted from switching $d^j_i$ to $e^j_i$ by the deceleration lane. By using the modified deceleration lane, we instead obtain a ${\ensuremath{\textsc{Not}}\xspace}$ gate. Since the deceleration lane keeps on switching if and only if the input gate evaluates to true, the state $d^j_i$ will switch to $e^j_i$ in iteration $d(i)$ if and only if the input gate evaluates to false. This is the key property that makes the ${\ensuremath{\textsc{Not}}\xspace}$ gate work. To see that the three rules specified in Property \[prop:rules\] are respected, observe that there is a large odd priority on the state $o^j_i$, and an even larger even priority on the state $h^j_i$. This ensures that the valuation of $o^j_i$ is larger than the valuation of $r^j$ if and only if $d^j_i$ chooses the edge to $e^j_i$, which only happens when the gate evaluates to true. Finally, when the computation in circuit $j$ begins again, the ${\ensuremath{\textsc{Not}}\xspace}$-gate is reset. This is ensured by giving the vertex $d^j_i$ edges to both $s^j$ and $r^j$.
So, when the clock for circuit $j$ advances, no matter what strategy is currently chosen, the vertex $d^j_i$ first switches to $s^j$, then to $r^j$, and then begins switching to the deceleration lane. The following table formally specifies the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadgets that we use in the construction.

  Vertex             Conditions                                                                                                                   Edges                                                                                     Priority                                    Player
  ------------------ ---------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ ------------------------------------------- --------
  $t^j_{i,0}$        $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $r^j$, $s^j$                                                                              $\operatorname{P}(5, i, 2k+4n+4, j, 0)$     Even
  $t^j_{i,l}$        $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$, $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$, $l \ne d(i)$    $r^j$, $s^j$, $t^j_{i, l-1}$                                                              $\operatorname{P}(5, i, l, j, 1)$           Even
  $t^j_{i,d(i)}$     $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $\operatorname{InputState}(i,j)$                                                          $\operatorname{P}(5, i, d(i), j, 1)$        Even
  $a^j_{i,l}$        $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$, $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$, $l \ne d(i)$    $t^j_{i,l}$                                                                               $\operatorname{P}(5, i, l+1, j, 0)$         Even
  $a^j_{i,d(i)}$     $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $t^j_{i,d(i)}$                                                                            $\operatorname{P}(4, i, 0, j, 0)$           Even
  $d^j_i$            $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $s^j$, $r^j$, $e^j_i$, $a^j_{i,l}$ for $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$    $\operatorname{P}(4,i,0,j,1)$               Even
  $e^j_i$            $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $h^j_i$, $d^j_i$                                                                          $\operatorname{P}(4,i,1,j,0)$               Odd
  $o^j_i$            $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $e^j_i$                                                                                   $\operatorname{P}(6,i,0,j,1)$               Even
  $h^j_i$            $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Not}}\xspace}$                                                                   $r^j$                                                                                     $\operatorname{P}(6,i,1,j,0)$               Even

[Figure \[fig:circuitmover\]: the circuit mover. On the left, a vertex with edges to $r^j$ and $s^j$; on the right, its modified form, with edges to $y^j$ (priority $4$) and $z^j$ (priority $2$) instead, where $y^j$ has edges to $r^{1-j}$ and $r^j$, and $z^j$ has edges to $s^j$ and $r^j$.]

#### **Input/output gates.** For each input bit in each copy of the circuit, we have an input/output gate. Recall that these gadgets have two modes. When circuit $j$ is computing, the input/output gadgets in circuit $j$ are in output mode, in which they store the output of the circuit, and the input/output gadgets in circuit $1-j$ are in input mode, in which they output the value that was stored in the previous computation. At its core, the input/output gadget is simply another copy of the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget that is connected to the $i$th output bit of circuit $j$. However, we modify the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget by adding in extra vertices that allow it to be *moved* between the two circuits. The most important part of this circuit mover apparatus is shown in Figure \[fig:circuitmover\]: all of the vertices in the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget that have edges to $r^j$ and $s^j$ are modified so that they instead have edges to $y^j$ and $z^j$.
Figures \[fig:input\] and \[fig:modifiedinput\] show the ${\ensuremath{\textsc{Input/Output}}\xspace}$ gadget and its associated modified deceleration lane, respectively. There are three differences between this gadget and the ${\ensuremath{\textsc{Not}}\xspace}$ gate, which are the inclusion of the vertices $h^j_{i, \star}$, the vertices $p^j_i$ and $p^j_{i, 1}$, and the vertices $q^j_{i, \star}$. All of these vertices are involved in the operation of moving the gadget between the two circuits.

[Figure \[fig:input\]: the ${\ensuremath{\textsc{Input/Output}}\xspace}$ gadget. The vertex $o^j_i$ (priority $15$) now reaches $e^j_i$ through the vertex $q^j_{i,0}$ (priority $6$), which also has an edge to $q^j_{i,1}$ (priority $32$), whose only edge leads to $r^{1-j}$. The vertex $e^j_i$ (priority $4$) has edges to $d^j_i$ and to $h^j_{i,0}$ (priority $2$), which chooses between $h^j_{i,1}$ (priority $30$, edge to $r^j$) and $h^j_{i,2}$ (priority $12$, edge to $r^{1-j}$). The vertex $d^j_i$ (priority $3$) has edges to $e^j_i$, to the states $a^j_{i,l}$, and to $y^j$ and $z^j$ in place of $r^j$ and $s^j$.]

[Figure \[fig:modifiedinput\]: the modified deceleration lane of the ${\ensuremath{\textsc{Input/Output}}\xspace}$ gadget. The lane states have edges to $y^j$ and $z^j$, and the state $t^j_{i,d(C)}$ has a single edge to $p^j_i$ (priority $2$), which chooses between the output state $o^j_{{I}(i)}$ and the vertex $p^j_{i,1}$ (priority $14$), whose only edge leads to $r^{1-j}$.]

When the gadget is in output mode, the vertex $y^j$ chooses the edge to $r^j$, the vertex $h^j_{i, 0}$ chooses the edge to $h^j_{i, 1}$, and the vertex $p^j_i$ chooses the edge to $o^j_{{I}(i)}$.
When these edges are chosen, the gadget is essentially the same as a ${\ensuremath{\textsc{Not}}\xspace}$-gate at the top of the circuit. So, once the circuit has finished computing, the vertex $d_i^j$ chooses the edge to $e_i^j$ (i.e., the stored bit is $1$) if and only if the $i$th output from the circuit was a $0$. Since the circuit was given in negated form, the gadget has therefore correctly stored the $i$th bit of $F(B)$. Throughout the computation in circuit $j$, the valuation of $r^j$ is much larger than the valuation of $r^{1-j}$. The computation in circuit $1-j$ begins when the clock in circuit $1-j$ advances, which causes the valuation of $r^{1-j}$ to become much larger than the valuation of $r^j$. When this occurs, the input/output gate then transitions to input mode. The transition involves the vertex $y^j$ switching to $r^{1-j}$, the vertex $h^j_{i, 0}$ switching to $h^j_{i, 2}$, and the vertex $p^j_i$ switching to $p^j_{i, 1}$. Moreover, the player Odd vertex $q^j_{i, 0}$ switches to $e^j_i$. This vertex acts as a circuit breaker, which makes sure that the output of the gadget is only transmitted to circuit $1-j$ when the gadget is in input mode. The key point is that all of these switches occur *simultaneously* in the same iteration. Since strategy improvement only cares about the *relative* difference between the outgoing edges of a vertex, and since all edges leaving the gadget switch at the same time, the operation of the ${\ensuremath{\textsc{Not}}\xspace}$-gate is not interrupted. So, the strategy chosen at $d^j_i$ will continue to hold the $i$th bit of $F(B)$, and the gadget has transitioned to input mode. When the gadget is in input mode, it can be viewed as a ${\ensuremath{\textsc{Not}}\xspace}$-gate at the bottom of circuit $1-j$ that has already been computed. In particular, the switch from $h^j_{i, 1}$ to $h^j_{i, 2}$ ensures that, if the output is $1$, then the gadget has the correct output priority.
Moreover, the deceleration lane has enough states to ensure that, if the output is $0$, then output of the gadget will not flip from $0$ to $1$ while circuit $1-j$ is computing. Finally, once circuit $1-j$ has finished computing, the clock for circuit $j$ advances, and the input/output gadget moves back to output mode. This involves resetting the [$\textsc{Not}$]{}gate gadget back to its initial state. This occurs because, when the clock in circuit $j$ advances, there is a single iteration in which the valuation of $s^j$ is higher than the valuation of $r^j$. This causes $z^j$ to switch to $s^j$ which in turn causes a single iteration in which the valuation of $z^j$ is higher than the valuation of $y^j$. Then, in the next iteration the vertex $y^j$ switches to $r^j$, and so the valuation of $y^j$ is then larger than the valuation of $z^j$. So, the valuations of $y^j$ and $z^j$ give exactly the same sequence of events as $r^j$ and $s^j$, which allows the [$\textsc{Not}$]{}-gate to reset. The following table specifies the input/output gadget. 
  Vertex           Conditions                                                                                                                                 Edges                                                                                     Priority                                      Player
  ---------------- -------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ ---------------------------------------------- --------
  $y^j$            $j \in \{0, 1\}$                                                                                                                           $r^{1-j}$, $r^j$                                                                          $\operatorname{P}(3, 0, 1, j, 0)$             Even
  $z^j$            $j \in \{0, 1\}$                                                                                                                           $r^j$, $s^j$                                                                              $\operatorname{P}(3, 0, 0, j, 0)$             Even
  $t^j_{i,0}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $y^j$, $z^j$                                                                              $\operatorname{P}(5, i, 2k+4n+4, j, 0)$       Even
  $t^j_{i,l}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, $1 < l \le {\ensuremath{2k + 4n + 6}\xspace}$, $l \ne d(C)$           $y^j$, $z^j$, $t^j_{i, l-1}$                                                              $\operatorname{P}(5, i, l, j, 1)$             Even
  $t^j_{i,d(C)}$   $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $p^j_{i}$                                                                                 $\operatorname{P}(5, i, d(C), j, 1)$          Even
  $p^j_{i}$        $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $o^j_{{I}(i)}$, $p^j_{i,1}$                                                               $\operatorname{P}(3, i, 2, j, 0)$             Even
  $p^j_{i,1}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $r^{1-j}$                                                                                 $\operatorname{P}(5, i, 2k+4n+5, j, 0)$       Even
  $a^j_{i,l}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$                        $t^j_{i,l}$                                                                               $\operatorname{P}(5, i, l+1, j, 0)$           Even
  $d^j_i$          $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $y^j$, $z^j$, $e^j_i$, $a^j_{i,l}$ for $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$    $\operatorname{P}(4,i,0,j,1)$                 Even
  $e^j_i$          $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $h^j_{i,0}$, $d^j_i$                                                                      $\operatorname{P}(4,i,1,j,0)$                 Odd
  $q^j_{i,0}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $e^j_i$, $q^j_{i,1}$                                                                      $\operatorname{P}(4, i, 2, j, 0)$             Odd
  $q^j_{i,1}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $r^{1-j}$                                                                                 $\operatorname{P}(6, d(C)+2, 0, j, 0)$        Even
  $o^j_i$          $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $q^j_{i,0}$                                                                               $\operatorname{P}(6,i,0,j,1)$                 Even
  $h^j_{i,0}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $h^j_{i,1}$, $h^j_{i,2}$                                                                  $\operatorname{P}(3, i, 3, j, 0)$             Even
  $h^j_{i,1}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $r^j$                                                                                     $\operatorname{P}(6,d(C)+1,1,j,0)$            Even
  $h^j_{i,2}$      $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$                                                                        $r^{1-j}$                                                                                 $\operatorname{P}(6,0,1,j,0)$                 Even

#### **One-sink game.** If we are to use the simplified strategy improvement algorithm, we must first show that this construction is a one-sink game. We do so in the following lemma.

The construction is a one-sink game.

In order to show that the construction is a one-sink game, we must show that the two required properties hold. Firstly, we must show that there is a vertex that satisfies the required properties of a sink vertex. It is not difficult to verify that the vertex $x$ does indeed satisfy these properties: the only outgoing edge from $x$ is the edge $(x, x)$, and we have $\operatorname{pri}(x) = \operatorname{P}(0, 0, 0, 0, 0) = 1$. Furthermore, no vertex is assigned priority $0$. Secondly, we must argue that all optimal strategies are terminating. Recall that a terminating strategy has the property that the first component of the Vöge–Jurdziński valuation is $1$, which implies that all paths starting at all vertices eventually arrive at the sink $x$.
So, consider a strategy $\sigma$ that is not terminating, and let $v$ be a vertex for which the first component of $\operatorname{Val}^{\sigma}_{VJ}(v)$ is strictly greater than $1$. Let $C$ be the cycle that is eventually reached by following $\sigma$ and $\operatorname{Br}(\sigma)$ from $v$. There are two cases to consider: - If $C$ contains at least one vertex from a clock, then $C$ must be entirely contained within that clock, because there are no edges that leave either of the two clocks. In this case we have that $\sigma$ is not optimal, because Friedmann has shown that his construction is a one-sink game. - If $C$ does not contain a vertex from a clock, then it is entirely contained within the circuits. First observe that $C$ cannot be a two-vertex cycle using the vertices $d^j_i$ and $e^j_i$, because it is not a best response for Odd to allow a cycle with an even priority to be formed, since he can always move to $r^j$, and from there eventually reach a cycle with priority $p \preceq 1$ (because the clock is a one-sink game). But the only other way to form a cycle in the circuits is to pass through both of the circuits. In this case, the highest priority on the cycle will be an odd priority assigned to the state $o^j_i$ in either a ${\ensuremath{\textsc{Not}}\xspace}$-gate (if there is one on the path), or an input/output gate (otherwise). Since this odd priority is strictly greater than $1$, and since player Even can always ensure a priority of $1$ by, for example, moving to $r^j$ in every input/output state $d^j_i$, we have that $\sigma$ is not an optimal strategy. Therefore, we have shown that the construction is a one-sink game. Strategies {#sec:strategies} ========== In this section, we define an initial strategy, and describe the sequence of strategies that greedy all-switches strategy improvement switches through when it is applied to this initial strategy.
We will define strategies for each of the gadgets in turn, and then combine these into a full strategy for the entire construction. It should be noted that we will only define *partial strategies* in this section, which means that some states will have no strategy specified. This is because our construction will work no matter which strategy is chosen at these states. To deal with this, we must define what it means to apply strategy improvement to a partial strategy. If $\chi$ is a partial strategy and $\sigma \in {\Sigma_{\text{Even}}}$ is a strategy, then we say that $\sigma$ *agrees* with $\chi$ if $\sigma(v) = \chi(v)$ for every vertex $v \in V$ for which $\chi$ is defined. So, if $\chi_1$ and $\chi_2$ are partial strategies, then we say that greedy all-switches strategy improvement switches $\chi_1$ to $\chi_2$ if, for every strategy $\sigma_1 \in {\Sigma_{\text{Even}}}$ that agrees with $\chi_1$, greedy all-switches strategy improvement switches $\sigma_1$ to a strategy $\sigma_2$ that agrees with $\chi_2$. We now describe the sequence of strategies. Each part of the construction will be considered independently. #### **The clock.** We start by defining the sequence of strategies that occurs in the two clocks. For each *clock bit-string* ${K}\in \{0, 1\}^n$, we define a sequence of strategies $\kappa^{{K}}_1$, $\kappa^{{K}}_2$, …, $\kappa^{{K}}_{\operatorname{Length}({K})}$. Greedy all-switches strategy improvement switches through each of these strategies in turn, and then switches from $\kappa^{{K}}_{\operatorname{Length}({K})}$ to $\kappa^{{K}+1}_1$, where ${K}+1$ denotes the bit-string that results from adding $1$ to the integer represented by ${K}$. The sequence begins in the first iteration after the valuation of $s^j$ is larger than the valuation of $r^j$. We will first present the building blocks of this strategy, and then combine the building blocks into the full sequence. We begin by considering the vertices $t^j_l$ in the deceleration lane.
Recall that these states switch, in sequence, from $r^j$ to $t^j_{l-1}$. This is formalised in the following definition. For each $m \ge 1$, each $l$ in the range $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$, and each $j \in \{0, 1\}$, we define: $$\begin{aligned} \rho_m(t^j_0) &= \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$.} \end{cases} \\ \rho_{m}(t^j_{l}) &= \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$ and $m \le l+1$,} \\ t^j_{l-1} & \text{if $m > l+1$.} \end{cases} \end{aligned}$$ We now move on to consider the vertices $d^j_i$, which represent the bits in the counter. We begin by defining a sequence of strategies for the bits that are $0$. Recall that these vertices switch to the states $a^j_{i}$ along the deceleration lane until they run out of edges, at which point they switch to the vertex $e^j_i$. This is formalised in the following definition. For each $i$ in the range $1 \le i \le n$, each $m \ge 1$, and each $j \in \{0, 1\}$, we define: $$\rho_{m}(d^j_i) = \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m = 2$,} \\ a^j_{{\ensuremath{2k + 2n + 6}\xspace}+ 2i} & \text{if $m = 3$,} \\ a^j_{m-3} & \text{if $4 \le m \le {\ensuremath{2k + 2n + 6}\xspace}+ 2i + 3$,} \\ e^j_{i} & \text{if $m > {\ensuremath{2k + 2n + 6}\xspace}+ 2i + 3$.} \end{cases}$$ Note that the first three iterations are special, because the edge to $a^j_{1}$ only becomes switchable in the third iteration. The edge to $r^j$ and the edge to $a^j_{{\ensuremath{2k + 2n + 6}\xspace}+ 2i}$ prevent the edge to $e^j_i$ being switched before this occurs. We now give a full strategy definition for the vertices $d^j_i$. The bits that are $0$ follow the strategy that we just defined, and the bits that are $1$ always choose the edge to $e^j_i$. 
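The two strategy sequences just defined can be transcribed into a small sketch (our own illustration; strategy targets are returned as strings):

```python
def rho_t(m, l):
    """rho_m(t^j_l): vertex t^j_l first chooses s^j, then r^j, and
    from iteration l + 2 onwards chooses t^j_{l-1}."""
    if m == 1:
        return "s"
    if m <= l + 1:
        return "r"
    return "t%d" % (l - 1)

def rho_d(m, i, k, n):
    """rho_m(d^j_i) for a 0-bit: walk down the a^j vertices of the
    deceleration lane, then run out of edges and switch to e^j_i."""
    top = (2 * k + 2 * n + 6) + 2 * i
    if m == 1:
        return "s"
    if m == 2:
        return "r"
    if m == 3:
        return "a%d" % top
    if m <= top + 3:
        return "a%d" % (m - 3)
    return "e"
```

For example, with $k = n = i = 1$ the lane top is $a^j_{12}$, and $d^j_i$ only reaches $e^j_i$ at iteration $16$.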
For each bit-string ${K}\in \{0, 1\}^n$, each $i$ in the range $1 \le i \le n$, each $m \ge 1$, and each $j \in \{0, 1\}$, we define: $$\rho^{K}_m(d^j_i) = \begin{cases} \rho_m(d^j_i) & \text{if ${K}_i = 0$,} \\ e^j_i & \text{if ${K}_i = 1$.} \end{cases}$$ Finally, we consider the other vertices in the clock. To define strategies for these vertices, we must first define some notation. For each $i$ in the range $1 \le i \le n$, we define $\operatorname{NextBit}({K}, i)$ to be a partial function that gives the index of the first $1$ that appears higher than index $i$: that is, the smallest index $j > i$ such that ${K}_j = 1$. We now define the strategies. These strategies all depend on the current clock bit-string ${K}$, and have no dependence on how far the deceleration lane has switched, so the parameter $m$ is ignored. For each bit-string ${K}\in \{0, 1\}^n$, each $m \ge 1$, each $i$ in the range $1 \le i \le n$, and each $j \in \{0, 1\}$, we define: $$\begin{aligned} \rho^{K}_m(g^j_i) &= \begin{cases} k^j_i & \text{if ${K}_i = 0$,} \\ f^j_i & \text{if ${K}_i = 1$.} \end{cases} \\ \rho^{K}_m(k^j_i) &= \begin{cases} g^j_{\operatorname{NextBit}({K}, i)} & \text{if $\operatorname{NextBit}({K}, i)$ is defined,} \\ x & \text{otherwise.} \end{cases} \\ \rho^{K}_m(r^j) &= \begin{cases} g^j_{\operatorname{NextBit}({K}, 0)} & \text{if $\operatorname{NextBit}({K}, 0)$ is defined,} \\ x & \text{otherwise.} \end{cases} \\ \rho^{K}_m(s^j) &= \begin{cases} f^j_{\operatorname{NextBit}({K}, 0)} & \text{if $\operatorname{NextBit}({K}, 0)$ is defined,} \\ x & \text{otherwise.} \end{cases}\end{aligned}$$ When the clock transitions between two clock bit-strings, there is a single iteration in which the strategies defined above are not followed. This occurs one iteration after the vertex $d^j_{\operatorname{Lsz}({K})}$ switches to $e^j_{\operatorname{Lsz}({K})}$. 
In this iteration, the vertices $g^j_{\operatorname{Lsz}({K})}$ and $s^j$ switch to $f^j_{\operatorname{Lsz}({K})}$, while every other vertex continues to use the strategies that were defined above. We now define a special *reset strategy* that captures this. For each bit-string ${K}\in \{0, 1\}^n$, and every vertex $v$ in either of the two clocks, we define: $$\rho^{K}_{\text{Reset}}(v) = \begin{cases} f^j_{\operatorname{Lsz}({K})} & \text{if $v = g^j_{\operatorname{Lsz}({K})}$ or $v = s^j$,} \\ \rho^{K}_{\operatorname{Length}({K})}(v) & \text{otherwise.} \end{cases}$$ We can now combine the strategies defined above in order to define the full sequence of strategies that are used in the clocks. In the first $\operatorname{Length}({K}) - 1$ iterations, we follow the sequence defined by the strategies $\rho^{{K}}_m(v)$, and in the final iteration we use the strategy $\rho^{{K}}_{\text{Reset}}(v)$. Formally, for each bit-string ${K}\in \{0, 1\}^n$, each $m$ in the range $1 \le m \le \operatorname{Length}({K})$, and every vertex $v$ in either of the two clocks, we define: $$\kappa^{{K}}_m(v) = \begin{cases} \rho^{{K}}_m(v) & \text{if $m \le \operatorname{Length}({K}) -1$,}\\ \rho^{{K}}_{\text{Reset}}(v) & \text{if $m = \operatorname{Length}({K})$.} \end{cases}$$ Friedmann showed the following lemma. \[lem:fri\] Let ${K}\in \{0, 1\}^n$. If we start all-switches strategy improvement at $\kappa^{{K}}_1$ for clock $j$, then it will proceed by switching through the sequence $\kappa^{K}_1$, $\kappa^{K}_2$, $\dots$, $\kappa^{K}_{\operatorname{Length}({K})}, \kappa^{{K}+ 1}_1$. #### **The circuits.** For each bit-string $B \in \{0, 1\}^n$, we give a sequence of strategies $\sigma^B_1$, $\sigma^B_2$, $\dots$, which describes the sequence of strategies that occurs when $B$ is the input of the circuit. The sequence is indexed from the point at which the circuit’s clock advances to the next bit-string.
That is, $\sigma^B_1$ occurs one iteration after the valuation of $s^j$ exceeds the valuation of $r^j$. Recall that all of the gates with the same depth are evaluated in the same iteration. We can now make this more precise: each gate $i$ will be evaluated in the strategy $\sigma^B_{d(i) + 2}$. After this iteration, there will then be two cases based on whether the gate evaluates to $1$ or $0$. To deal with this, we require the following notation. For each bit-string $B$ and each gate $i$, we define $\operatorname{Eval}(B, i)$ to be $1$ if gate $i$ outputs true on input $B$, and $0$ otherwise. #### **Or gates.** Before the gate is evaluated, the state $o^j_i$ chooses the edge to $r^j$. Once the gate has been evaluated, there are four possibilities. If both input gates evaluate to false, then the state $o^j_i$ continues to use the edge to $r^j$. If one of the two inputs is true, then $o^j_i$ will switch to the corresponding input state. The case where both inputs are true is the most complicated. Obviously, $o^j_i$ will switch to one of the two input states, and in fact, it switches to the one with the highest valuation. Since the overall correctness of our construction does not care which successor is chosen in this case, we simply define $\operatorname{OrNext}(i)$ to be the successor with the highest valuation. We can now formally define the sequence of strategies used by an [$\textsc{Or}$]{}-gate. 
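Before doing so, the notation $\operatorname{Eval}(B, i)$ can be made concrete with a small sketch (our own, with a hypothetical encoding: gates are numbered, and a negative reference $-m$ denotes input bit $B_m$):

```python
def evaluate(gates, B, i):
    """Eval(B, i): the output of gate i on input bit-string B.

    gates maps a gate index to ("or", in1, in2) or ("not", in1);
    a negative reference -m denotes the input bit B[m-1]."""
    if i < 0:
        return B[-i - 1]
    kind, *inputs = gates[i]
    vals = [evaluate(gates, B, j) for j in inputs]
    if kind == "or":
        return 1 if any(vals) else 0
    return 1 - vals[0]  # "not" gate
```

For instance, for the two-gate circuit $o_1 = \lnot b_1$, $o_2 = o_1 \lor b_2$, gate $o_2$ evaluates to $0$ exactly when $B = 10$.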
For every gate $i \in {\ensuremath{\textsc{Or}}\xspace}$, every bit-string $B \in \{0, 1\}^n$, and every $m \ge 1$ we define: $$\label{eqn:odef} \sigma^{B}_m(o^j_i) = \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$ and $m \le d(i) + 2$,} \\ r^j & \text{if $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}_1(i)) = 0$ and $\operatorname{Eval}(B, {I}_2(i)) = 0$,} \\ \operatorname{InputState}(i,j,1) & \text{if $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}_1(i)) = 1$ and $\operatorname{Eval}(B, {I}_2(i)) = 0$,} \\ \operatorname{InputState}(i,j,2) & \text{if $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}_1(i)) = 0$ and $\operatorname{Eval}(B, {I}_2(i)) = 1$,} \\ \operatorname{OrNext}(i) & \text{if $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}_1(i)) = 1$ and $\operatorname{Eval}(B, {I}_2(i)) = 1$.} \end{cases}$$ #### **Not gates.** There are two components of the ${\ensuremath{\textsc{Not}}\xspace}$-gate gadget: the modified deceleration lane and the state $d^j_i$. We begin by considering the modified deceleration lane. We first define a strategy for the case where the gate evaluates to false. In this case, the input gate evaluates to true, which causes the modified deceleration lane to continue switching after iteration $d(i) + 2$. We formalise this in the following definition, which is almost identical to the definition given for the deceleration lane used in the clock.
For each $i \in {\ensuremath{\textsc{Not}}\xspace}$, each $l$ in the range $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ with $l \ne d(i)$, each $j \in \{0, 1\}$, and each $m \ge 1$, we define: $$\begin{aligned} \label{eqn:tnotdef} \sigma_m(t^j_{i, 0}) &= \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$.} \end{cases} \\ \sigma_m(t^j_{i,l}) &= \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$ and $m \le l+1$,} \\ t^j_{i, l-1} & \text{if $m > l+1$.} \end{cases} \end{aligned}$$ On the other hand, if the gate evaluates to true, then its input evaluates to false, and the deceleration lane stops switching. This is formalised in the following definition, which uses the previous definition to give the actual strategy used by the modified deceleration lane. For each $i \in {\ensuremath{\textsc{Not}}\xspace}$, each $B \in \{0, 1\}^n$, each $l$ in the range $0 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ with $l \ne d(i)$, each $j \in \{0, 1\}$, and each $m \ge 1$, we define: $$\sigma^{B}_m(t^j_{i,l}) = \begin{cases} \sigma_m(t^j_{i,l}) & \text{if $l \le d(i)+1$, or $l > d(i) + 1$ and $\operatorname{Eval}(B, {I}(i)) = 1$,} \\ s^j & \text{if $l > d(i)+1$ and $m = 1$ and $\operatorname{Eval}(B, {I}(i)) = 0$,} \\ r^j & \text{if $l > d(i)+1$ and $m > 1$ and $\operatorname{Eval}(B, {I}(i)) = 0$.} \end{cases}$$ We now turn our attention to the state $d^j_i$, where we again begin by considering the case where the gate evaluates to false. In this case, the state $d^j_i$ continues switching to the modified deceleration lane. This is formalised in the following definition, which is almost identical to the definition given for the corresponding states in the clock.
For all $i \in {\ensuremath{\textsc{Not}}\xspace}\cup {\ensuremath{\textsc{Input/Output}}\xspace}$, all $B \in \{0, 1\}^n$, all $m \ge 1$, and all $j \in \{0, 1\}$, we define: $$\sigma_m(d^j_i) = \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m = 2$,} \\ a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}} & \text{if $m = 3$,} \\ a^j_{i, m-3} & \text{if $4 \le m \le {\ensuremath{2k + 4n + 6}\xspace}+3$,} \\ e^j_i & \text{if ${\ensuremath{2k + 4n + 6}\xspace}+ 3 < m$.} \end{cases}$$ On the other hand, if the gate evaluates to true, then after iteration $d(i) + 2$, the state $d^j_i$ switches to $e^j_i$. This is formalised in the following definition, where the previous definition is used in order to give the actual sequence of strategies for the state $d^j_i$. For all $B \in \{0,1\}^n$, all $i \in {\ensuremath{\textsc{Not}}\xspace}$, all $j \in \{0, 1\}$, and all $m \ge 1$ we define: $$\label{eqn:notd} \sigma^B_m(d^j_i) = \begin{cases} \sigma_m(d^j_i) & \text{if $m \le d(i) + 2$, or $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}(i)) = 1$,} \\ e^j_i & \text{if $m > d(i)+2$ and $\operatorname{Eval}(B, {I}(i)) = 0$.} \end{cases}$$ #### **Input/output gates.** We now describe the sequence of strategies used in the input/output gates. These strategies are almost identical to the strategies that would be used in a ${\ensuremath{\textsc{Not}}\xspace}$-gate with depth $d(C)+1$, but with a few key differences. Firstly, whereas the ${\ensuremath{\textsc{Not}}\xspace}$-gates used edges to $r^j$ and $s^j$, these have instead been replaced with the edges to $y^j$ and $z^j$ from the circuit movers. Secondly, the circuit movers cause a one-iteration delay at the start of the sequence. Note, however, that despite this delay, the input/output gates are still evaluated on iteration $d(C) + 3$. We begin by giving the strategies for the modified deceleration lane used in the input/output gates.
For each $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, each $l$ in the range $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ with $l \ne d(C)$, each $j \in \{0, 1\}$, and each $m \ge 1$, we define: $$\begin{aligned} \sigma_m(t^j_{i,l}) &= \begin{cases} z^j & \text{if $m = 2$,} \\ y^j & \text{if $m = 1$, or $m > 2$ and $m \le l+2$,} \\ t^j_{i, l-1} & \text{if $m > l+2$.} \end{cases} \\ \sigma_m(t^j_{i, 0}) &= \begin{cases} z^j & \text{if $m = 2$,} \\ y^j & \text{if $m = 1$ or $m > 2$.} \end{cases}\end{aligned}$$ Then, for each $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, each $B \in \{0, 1\}^n$, each $l$ in the range $0 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ with $l \ne d(C)$, each $j \in \{0, 1\}$, and each $m \ge 1$, we define: $$\sigma^{B}_m(t^j_{i,l}) = \begin{cases} \sigma_m(t^j_{i,l}) & \text{if $l \le d(C) + 3$, or $l > d(C)+3$ and $B_i = 0$,} \\ z^j & \text{if $l > d(C)+ 3$ and $m = 2$ and $B_i = 1$,} \\ y^j & \text{if $l > d(C)+ 3$ and $B_i = 1$ and either $m = 1$ or $m > 2$.} \end{cases}$$ Finally, we give the strategy for the state $d^j_i$. We reuse the strategy $\sigma_{m-1}$ from the [$\textsc{Not}$]{}-gate definitions, but with a one iteration delay. For all $B \in \{0,1\}^n$, all $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, all $j \in \{0, 1\}$, and all $m > 1$ we define: $$\label{def:inputd} \sigma^B_m(d^j_i) = \begin{cases} \sigma_{m-1}(d^j_i) & \text{if $1 < m \le d(C) + 3$, or $m > d(C) + 3$ and $B_i = 0$,} \\ e^j_i & \text{if $m > d(C)+3$ and $B_i = 1$.} \end{cases}$$ #### **The circuit mover states.** Finally, we describe the sequence of strategies used in the states that move the input/output gates between the circuits. These strategies do not depend on the current input bit-string to the circuit. Instead, they depend on the state of both of the clocks, and are parameterized by the value of the delay function that we defined earlier. 
Formally, for every $m \ge 1$, every $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, and every clock-bit string ${K}\in \{0, 1\}^n$ we define: $$\begin{aligned} \label{eqn:zj} \sigma^{{K}}_m(z^j) &= \begin{cases} s^j & \text{if $m = 1$,} \\ r^j & \text{if $m > 1$.} \\ \end{cases} \\ \label{eqn:yj} \sigma^{{K}}_m(y^j) &= \begin{cases} r^{1-j} & \text{if $m = 1$ or $m \ge \operatorname{Delay}(j,{K}) + 1$,} \\ r^j & \text{if $m > 1$ and $m < \operatorname{Delay}(j,{K}) + 1$.} \\ \end{cases} \\ \label{eqn:pj} \sigma^{{K}}_m(p^j_i) &= \begin{cases} p^{j}_{i,1} & \text{if $m = 1$ or $m \ge \operatorname{Delay}(j,{K}) + 1$,} \\ o^j_{{I}(i)} & \text{if $1 < m \le \operatorname{Delay}(j,{K}) + 1$.} \\ \end{cases} \\ \label{eqn:hj} \sigma^{{K}}_m(h^j_{i,0}) &= \begin{cases} h^{j}_{i,2} & \text{if $m = 1$ or $m \ge \operatorname{Delay}(j,{K}) + 1$,} \\ h^j_{i,1} & \text{if $1 < m < \operatorname{Delay}(j,{K}) + 1$.} \\ \end{cases} \end{aligned}$$ #### **Putting it all together.** We can now define a combined sequence of strategies for the entire construction. We will define a sequence of strategies $\chi^{B,{K},j}_1$, $\chi^{B,{K},j}_2$, …, which describes a computation in circuit $j$ under the following conditions: - The clock for circuit $j$ currently stores ${K}$ in its binary counter. - The input to circuit $j$ is $B$. Before stating the strategies, we first define some necessary notation. For every clock bit-string ${K}\in \{0, 1\}^n$, and every $j \in \{0, 1\}$ we define: $$\operatorname{OC}({K}, j) = \begin{cases} {K}-1 & \text{if $j = 0$,} \\ {K}& \text{if $j = 1$.} \end{cases}$$ This gives the bit-string used in the *other clock,* when circuit $j$ is computing. Since clock $0$ is ahead of clock $1$, we have that $\operatorname{OC}({K}, 0)$ is the bit-string before ${K}$, while $\operatorname{OC}({K}, 1)$ is the same as ${K}$. We can now define the sequence. 
For each bit-string $B \in \{0, 1\}^n$, each bit-string ${K}\in \{0, 1\}^n$, each $m \ge 1$, and every vertex $v$ we define: $$\begin{aligned} \chi^{B,{K},j}_m(v) &= \begin{cases} \kappa^{{K}}_m(v) & \text{if $v$ is in clock~$j$,} \\ \kappa^{\operatorname{OC}({K}, j)}_{m + \operatorname{Delay}(1-j, \operatorname{OC}({K}, j))}(v) & \text{if $v$ is in clock~$1-j$,} \\ \sigma^{B}_m(v) & \text{if $v$ is in a ${\ensuremath{\textsc{Not}}\xspace}$ or ${\ensuremath{\textsc{Or}}\xspace}$ gate in circuit~$j$,} \\ \sigma^{F(B)}_m(v) & \text{if $v$ is in an input/output gate in circuit~$j$,} \\ \sigma^{B}_{m+\operatorname{Delay}(1-j, \operatorname{OC}({K}, j))}(v) & \text{if $v$ is an input/output gate in circuit~$1-j$,} \\ \sigma^{{K}}_m(v) & \text{if $v$ is a circuit mover state in circuit~$j$,} \\ \sigma^{\operatorname{OC}({K}, j)}_{m+\operatorname{Delay}(1-j, \operatorname{OC}({K}, j))}(v) & \text{if $v$ is a circuit mover state in circuit~$1-j$.} \end{cases} \\\end{aligned}$$ The first two cases of this definition deal with the clocks: the clock in circuit $j$ follows the sequence for bit-string ${K}$, while the clock in circuit $1-j$ continues to follow the sequence for bit-string $\operatorname{OC}({K}, j)$. Observe that the clock for circuit $1-j$ has already been running for $\operatorname{Delay}(1-j, \operatorname{OC}({K}, j))$ iterations, so the strategies for this clock start on iteration $1 + \operatorname{Delay}(1-j, \operatorname{OC}({K}, j))$. The next two cases deal with the gate gadgets in circuit $j$: the [$\textsc{Not}$]{}and [$\textsc{Or}$]{}gates follow the sequence for bit-string $B$, and then the input/output gates for circuit $j$, which are in output mode, store $F(B)$. The next case deals with the input/output gates in circuit $1-j$, which are in input mode and so follow the strategy for bit-string $B$. The final two cases deal with the circuit mover states, which follow the strategies for the clock bit-string used in their respective clocks. 
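As an aside, the case analysis above can be mirrored as a small dispatch function. The sketch below is illustrative only: every constituent (the clock sequence $\kappa$, the gate, input/output, and mover sequences $\sigma$, and the functions $\operatorname{Delay}$ and $\operatorname{OC}$) is passed in as a stub callable, and the `region` classifier is a hypothetical helper; only the routing structure follows the definition.

```python
# Illustrative dispatch mirroring the case analysis of chi^{B,K,j}_m.
# All constituents are stubs supplied by the caller; this captures only
# the routing between the two clocks, the two circuits, and the movers.

def chi(m, v, B, K, j, *, kappa, sigma_gate, sigma_io, sigma_mover,
        F, delay, oc, region):
    """Return the move chosen at vertex v on iteration m, circuit j active."""
    d = delay(1 - j, oc(K, j))  # how long clock 1-j has already been running
    r = region(v)               # which part of the construction v lies in
    if r == ("clock", j):
        return kappa(K, m, v)
    if r == ("clock", 1 - j):
        return kappa(oc(K, j), m + d, v)
    if r == ("gate", j):        # NOT/OR gate gadgets of the computing circuit
        return sigma_gate(B, m, v)
    if r == ("io", j):          # input/output gates of circuit j store F(B)
        return sigma_io(F(B), m, v)
    if r == ("io", 1 - j):      # input/output gates of circuit 1-j hold B
        return sigma_io(B, m + d, v)
    if r == ("mover", j):
        return sigma_mover(K, m, v)
    if r == ("mover", 1 - j):
        return sigma_mover(oc(K, j), m + d, v)
    return None                 # gate gadgets of circuit 1-j: choice irrelevant
```

The `None` branch reflects the fact that no strategy needs to be specified for the gate gadgets of the idle circuit.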
Observe that no strategy is specified for the gate gadgets in circuit $1-j$, because the strategy chosen here is irrelevant. For technical convenience, we define: $$\begin{aligned} \chi^{B, {K}, 0}_{\operatorname{Delay}(0,{K})} &= \chi^{F(B), {K}, 1}_1 \\ \chi^{B, {K}, 1}_{\operatorname{Delay}(1,{K})} &= \chi^{F(B), {K}+1, 0}_1 \end{aligned}$$ Using this definition, we can now state the main technical claim of the paper. \[lem:main\] Let $B \in \{0, 1\}^n$ be a bit-string, let ${K}\in \{0, 1\}^n$ be a bit-string such that ${K}\ne (1,1,\dots,1)$, and let $j \in \{0, 1\}$. If greedy all-switches strategy improvement is applied to $\chi^{B, {K}, j}_1$, then it will pass through the sequence: $$\chi^{B, {K}, j}_1, \chi^{B, {K}, j}_2, \dots, \chi^{B, {K}, j}_{\operatorname{Delay}(j,{K})}.$$ Unfortunately, the proof of this lemma is quite long, and the vast majority of it is presented in the appendix. In Section \[sec:proof\], we give an overview of the proof, and describe how each of the individual appendices fits into the overall proof. #### **Best responses.** Recall that, for each strategy considered, strategy improvement computes a best-response for the opponent. Now that we have defined the sequence of strategies, we can also define the best-responses to these strategies. For each strategy $\chi^{B, {K}, j}_i$, we define a strategy $\mu^{B,{K},j}_i \in {\Sigma_{\text{Odd}}}$ that is a best-response to $\chi^{B, {K}, j}_i$. We will later prove that these strategies are indeed best-responses. We begin by considering the vertices $e^j_i$ for each [$\textsc{Not}$]{}-gate $i$. Recall that these vertices only pick the edge to $h^j_{i,0}$ in the case where they are forced to by $d^j_i$ selecting the edge to $e^j_i$. As defined above, this only occurs in the case where the [$\textsc{Not}$]{}-gate evaluates to true. 
Formally, for each bit-string $B \in \{0, 1\}^n$, each bit-string ${K}\in \{0, 1\}^n$, each $m \ge 1$, and every $i \in {\ensuremath{\textsc{Not}}\xspace}$ we define: $$\mu^{B,{K},j}_m(e^j_i) = \begin{cases} h^j_{i,0} & \text{if $m > d(i)+2$ and $\operatorname{Eval}(B, {I}(i)) = 0$,} \\ d^j_i & \text{otherwise.} \end{cases}$$ For the input/output gadgets in circuit $1-j$, which will provide the input to circuit $j$, the situation is the same. The vertex $e^{1-j}_i$ chooses the edge to $h^{1-j}_{i, 0}$ if and only if $B_i$ is $1$. Formally, for each bit-string $B \in \{0, 1\}^n$, each bit-string ${K}\in \{0, 1\}^n$, each $m \ge 1$, and every vertex $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$ we define: $$\mu^{B,{K},j}_m(e^{1-j}_i) = \begin{cases} h^{1-j}_{i,0} & \text{if $B_i = 1$,} \\ d^{1-j}_i & \text{if $B_i = 0$.} \end{cases}$$ For the input/output gadgets in circuit $j$, the situation is largely the same as for a [$\textsc{Not}$]{}-gate with depth $d(C) + 1$, and the edge chosen depends on $F(B)_i$. However, one difference is that we do not define a best-response for the case where $m = 1$, because the input/output gadget does not reset until the second iteration, and our proof does not depend on the best response chosen in iteration one. Formally, for each bit-string $B \in \{0, 1\}^n$, each bit-string ${K}\in \{0, 1\}^n$, each $m > 1$, and every vertex $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$ we define: $$\mu^{B,{K},j}_m(e^j_i) = \begin{cases} h^j_{i,0} & \text{if $m > d(C)+3$ and $F(B)_i = 1$,} \\ d^j_i & \text{otherwise.} \end{cases}$$ Finally, we define the best responses for the vertices $q^j$ as follows. 
For each bit-string $B \in \{0, 1\}^n$, each bit-string ${K}\in \{0, 1\}^n$, each $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) -1$, and every vertex $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$ we define: $$\begin{aligned} \mu^{B,{K},j}_m(q^j_{i,0}) &= \begin{cases} e^j_{i} & \text{if $m = 1$, } \\ q^j_{i,1} & \text{if $m > 1$. } \end{cases} \\ \mu^{B,{K},j}_m(q^{1-j}_{i,0}) &= e^{1-j}_i. \end{aligned}$$ The Proof {#sec:proof} ========= In this section we give the proof for Lemma \[lem:main\]. Let $B, {K}\in \{0, 1\}^n$ be two bit-strings, let $j \in \{0, 1\}$, and let $m$ be in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. We must show that greedy all-switches strategy improvement switches $\chi^{B, {K}, j}_m$ to $\chi^{B, {K}, j}_{m+1}$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$. Since we are using the all-switches switching rule, we can consider each vertex $v$ independently, and must show that the most appealing outgoing edge at $v$ is the one specified by $\chi^{B, {K}, j}_{m+1}$ (in our construction there will always be exactly one most appealing edge, so we do not care how ties are broken by the switching rule). Hence, the majority of the proof boils down to calculating the valuation of each outgoing edge of $v$, and then comparing these valuations. To compare the valuation of two outgoing edges $(v, u)$ and $(v, w)$, we usually use the following technique. First we consider the two paths $\pi_1$ and $\pi_2$ that start at $u$ and $w$, respectively, and follow $\sigma$ and $\operatorname{Br}(\sigma)$. Then we find the first vertex $x$ that is contained in both paths. Since the $\sqsubseteq$ relation only cares about the maximum difference between the two paths, all priorities that are visited after $x$ are irrelevant, since they appear in both $\operatorname{Val}^{\sigma}(u)$ and $\operatorname{Val}^{\sigma}(w)$. 
On the other hand, since each priority is assigned to at most one vertex, all of the priorities visited by $\pi_1$ before reaching $x$ are contained in $\operatorname{Val}^{\sigma}(u)$ and not contained in $\operatorname{Val}^{\sigma}(w)$, and all of the priorities visited by $\pi_2$ before reaching $x$ are contained in $\operatorname{Val}^{\sigma}(w)$ and not contained in $\operatorname{Val}^{\sigma}(u)$. So it suffices to find the largest priority on the prefix of $\pi_1$ before $x$ and the prefix of $\pi_2$ before $x$. The parity of this priority then determines whether $\operatorname{Val}^{\sigma}(u) \sqsubseteq \operatorname{Val}^{\sigma}(w)$ according to the rules laid out in the definition of $\sqsubseteq$. We now give an outline of the proof. - The fact that the two clocks switch through their respective strategies follows from Lemma \[lem:fri\]. - The difference in valuation between the states $r^j$ and $s^j$ of the clock are the driving force of the construction. In Appendix \[app:clock\], we give two lemmas that formalize this difference. - Next, in Appendix \[app:br\], we prove that the best-response strategies defined in Section \[sec:strategies\] are in fact the best responses. That is, we show that $\mu^{B,{K},j}_m$ is a best response to every strategy $\sigma$ that agrees with $\chi^{B, {K}, j}_{m}$. - In Appendix \[app:outputs\], we give two key lemmas that describe the valuations of the output states $o^j_i$. These lemmas show three important properties. Firstly, if $m \le d(i) + 2$, then the valuation of $o^j_i$ is low, so there is no incentive to switch to $o^j_i$ before gate $i$ is evaluated. Secondly, if $m > d(i) + 2$ and the gate evaluates to $0$, then the valuation of $o^j_i$ remains low. Finally, if $m > d(i) + 2$ and the gate evaluates to $1$, then the valuation of $o^j_i$ is high. These final two properties allow the gates with depth strictly greater than $i$ to compute their outputs correctly. 
- The rest of the proof consists of proving that all vertices switch to the correct outgoing edge. The states $o^j_i$ in the [$\textsc{Or}$]{}gate gadgets are dealt with in Appendix \[app:org\]. The states $t^j_{i, l}$ in the [$\textsc{Not}$]{}gates are dealt with in Appendix \[app:nott\]. The states $d^j_{i}$ in the [$\textsc{Not}$]{}gates are dealt with in Appendix \[app:notd\]. The states $z^j$ and $z^{1-j}$ are dealt with in Appendix \[app:z\], and the states $y^j$ and $y^{1-j}$ are dealt with in Appendix \[app:y\]. The states $p^j$ and $p^{1-j}$ are dealt with in Appendix \[app:p\] and the states $h^j_{i, 0}$ are dealt with in Appendix \[app:h\]. Finally, the states in the [$\textsc{Input/Output}$]{}gates, which behave in a largely identical way to the [$\textsc{Not}$]{}gates, are dealt with in Appendix \[app:input\]. All of the above combines to provide a proof for Lemma \[lem:main\]. Having shown this lemma, we can now give the reduction from ${\textsc{BitSwitch}\xspace}$ to ${\textsc{EdgeSwitch}\xspace}$. Given a circuit iteration instance $(F, B, z)$, we produce the parity game $G$ corresponding to $F$, we use $\chi^{B, 1, 0}_2$ as the initial strategy, and if $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$ is the input/output gate corresponding to index $z$, then we will monitor whether the edge from $d^0_i$ to $e^0_i$ is ever switched by greedy all-switches strategy improvement. We therefore produce the instance ${\textsc{EdgeSwitch}\xspace}(G, (d^0_i, e^0_i), \chi^{B, 1, 0}_2)$. We must take care to ensure that $\chi^{B, 1, 0}_2$ is a terminating strategy, which is proved in the following lemma. We have that $\chi^{B, 1, 0}_2$ is a terminating strategy. Firstly, since we use the same strategies as Friedmann in the clock, we do not need to prove that these portions of the strategies are terminating, because this has already been shown by Friedmann. 
In particular, this implies that all paths starting at $s^j$ and $r^j$ for $j \in \{0, 1\}$ will eventually arrive at the sink $x$. Therefore, it is sufficient to show that the first component of $\operatorname{Val}_{\text{VJ}}^{\chi^{B, 1, 0}_2}(v)$ is $1$ for every vertex $v$ in the circuits. Observe that, in the strategy $\chi^{B, 1, 0}_2$, we have that the only possible cycles that player Odd can form in the best response are two-vertex cycles of the form $d^j_i$ and $e^j_i$, but these cycles have an even priority, so the best response cannot choose them. In particular, it is not possible to form a cycle that passes through the input/output gadgets in both circuits, because the path that starts at $o^0_i$ for every input/output gate $i$ must eventually arrive at $s^j$. Thus, we have that all paths starting at all vertices in the circuits that follow $\chi^{B, 1, 0}_2$ and its best response will eventually arrive at the sink $x$. Now all that remains is to argue that ${\textsc{EdgeSwitch}\xspace}(G, (d^0_i, e^0_i), \chi^{B, 1, 0}_2)$ is true if and only if ${\textsc{BitSwitch}\xspace}(F, B, z)$ is true. To do this, we simply observe that the sequence of strategies used in Lemma \[lem:main\] only ever specifies that $d^0_i$ must be switched to $e^0_i$ in the case where there is some even $j$ such that $F^j(B)_z = 1$. At all other times, the vertex $d^0_i$ chooses an edge other than $e^0_i$. Hence, the reduction is correct, and we have shown Theorem \[thm:edgeswitch\]. Other Algorithms {#sec:other} ================ #### **Other strategy improvement algorithms.** As we mentioned in the introduction, Theorem \[thm:edgeswitch\] implies several results about other algorithms. In particular, discounted games and simple-stochastic games both have natural strategy improvement algorithms given by Puri [@puri95], and Condon [@condon93], respectively. 
Friedmann showed that if you take a one-sink parity game, and apply the natural reduction from parity games to either discounted or simple stochastic games, then the greedy variants of Puri’s and Condon’s algorithms will switch exactly the same edges as the algorithm of Vöge and Jurdziński [@F11 Corollary 9.10 and Lemma 9.12]. Hence, Theorem \[thm:edgeswitch\] also implies the discounted and simple-stochastic cases of Corollary \[cor:edgeswitch\]. One case that was missed by Friedmann was mean-payoff games. There is a natural strategy improvement algorithm [@FV97] for mean-payoff games that adopts the well-known *gain-bias* formulation from average-reward MDPs [@puterman94]. In this algorithm, the valuation has two components: the *gain* of a vertex gives the long-term average-reward that can be obtained from that vertex under the current strategy, and the *bias* measures the short term deviation from the long-term average. We argue that, if we apply the standard reduction from parity games to mean-payoff games, and then set the reward of $x$ to $0$, then the gain-bias algorithm for mean-payoff games switches exactly the same edges as the algorithm of Vöge and Jurdziński. The standard reduction from parity games to mean-payoff games [@puri95; @jurdzinski98] replaces each priority $p$ with the weight $(-m)^{p}$, where $m$ denotes the number of vertices in the parity game. By setting the weight of $x$ to $0$, we ensure that the long-term average reward from each state is $0$. Previous work has observed [@FS14] that, if the gain is $0$ at every vertex, then the bias represents the *total reward* that can be obtained from each state. It is not difficult to prove that, after the standard reduction has been applied, the total reward that can be obtained from a vertex $v$ is larger than the total reward from a vertex $u$ if and only if $\operatorname{Val}^{\sigma}(u) \sqsubset \operatorname{Val}^{\sigma}(v)$ in the original parity game. 
This is because the rewards assigned by the standard reduction grow quickly enough so that only the largest priority visited matters. Hence, we also have the mean-payoff case of Corollary \[cor:edgeswitch\]. Björklund and Vorobyov have also devised a strategy improvement algorithm for mean-payoff games. Their algorithm involves adding an extra *sink* vertex, and then adding edges from every vertex of the maximizing player to the sink. Their valuations are also the total reward obtained before reaching the sink. We cannot show a similar result for their algorithm, but we can show a result for a *variant* of their algorithm that only gives additional edges to a subset of the vertices of the maximizing player. To do this, we do the same reduction as we did for the gain-bias algorithm, and then we only add an edge from $x$ to the new sink added by Björklund and Vorobyov. The same reasoning as above then implies that the Björklund-Vorobyov algorithm will make the same switches as the Vöge-Jurdziński algorithm. #### **Unique sink orientations.** As mentioned in the introduction, there is a relationship between strategy improvement algorithms and sink-finding algorithms for unique sink orientations. Our result already implies a similar lower bound for sink-finding algorithms applied to unique sink orientations. However, since the vertices in our parity game have more than two outgoing edges, these results only hold for unique sink orientations of *grids*. The more commonly studied model is unique sink orientations of *hypercubes*, which correspond to *binary* parity games, where each vertex has at most two outgoing edges. We argue that our construction can be formulated as a binary parity game. Friedmann has already shown that his construction can be formulated as a binary parity game [@F11], so we already have that the clocks can be transformed so that they are binary. 
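As a small numerical sanity check of the priority-to-weight reduction discussed earlier in this section (priority $p$ becomes weight $(-m)^{p}$, with $m$ the number of vertices), the sketch below compares two priority sequences both by the largest-differing-priority rule and by total reward. The helper names are illustrative, and the comparison assumes, as in our construction, that priorities are distinct across vertices; it is an illustration of the growth argument, not part of the construction.

```python
# Priority p -> weight (-m)^p, where m is the number of vertices.
# Total reward then orders paths the same way as the "largest priority
# visited" comparison, because the weights grow fast enough that the
# biggest differing priority dominates all smaller ones.

def weight(p, m):
    return (-m) ** p

def total_reward(path, m):
    return sum(weight(p, m) for p in path)

def better_by_priority(path_u, path_v):
    """True if path_v beats path_u: the largest priority appearing on
    exactly one of the two paths is even and on path_v, or odd and on
    path_u (priorities assumed distinct across vertices)."""
    diff = set(path_u) ^ set(path_v)
    if not diff:
        return False
    top = max(diff)
    return (top % 2 == 0) == (top in set(path_v))

m = 10                 # hypothetical number of vertices
u, v = [3, 2], [4, 1]  # v contains the larger, even priority 4
assert better_by_priority(u, v)
assert total_reward(v, m) > total_reward(u, m)
```

Here $u$ sums to $(-10)^3 + (-10)^2 = -900$ while $v$ sums to $(-10)^4 + (-10)^1 = 9990$, so the total-reward order agrees with the priority order.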
Furthermore, since our [$\textsc{Not}$]{}-gates and our [$\textsc{Input/Output}$]{}-gates are taken directly from Friedmann’s bit gadget, we can apply Friedmann’s reduction to make these binary. In particular, note that all of the extra states that we add to the input/output gate are binary, so these states do not need any modification. The only remaining part of the construction is the [$\textsc{Or}$]{}-gate, which has four outgoing edges. We replace the existing gadget with a modified gadget, shown in Figure \[fig:ormodified\].

| Vertex | Conditions | Edges | Priority | Player |
|---|---|---|---|---|
| $o^j_i$ | $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Or}}\xspace}$ | $s^j$, $o^j_{i,1}$ | $\operatorname{P}(4, i, 0, j, 1)$ | Even |
| $o^j_{i,1}$ | $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Or}}\xspace}$ | $r^j$, $o^j_{i,2}$ | $\operatorname{P}(4, i, 1, j, 1)$ | Even |
| $o^j_{i,2}$ | $j \in \{0, 1\}$, $i \in {\ensuremath{\textsc{Or}}\xspace}$ | $\operatorname{InputState}(i,j,1)$, $\operatorname{InputState}(i,j,2)$ | $\operatorname{P}(4, i, 2, j, 1)$ | Even |

(Figure \[fig:ormodified\]: the modified [$\textsc{Or}$]{}-gate gadget. The vertex $o^j_i$, with priority $1$, has edges to $s^j$ and $o^j_{i,1}$; the vertex $o^j_{i,1}$, with priority $3$, has edges to $r^j$ and $o^j_{i,2}$; and the vertex $o^j_{i,2}$, with priority $5$, has edges to the input states $o^j_{{I}_1(i)}$ and $o^j_{{I}_2(i)}$.)

This gadget replaces the single vertex of the original [$\textsc{Or}$]{}-gate with three binary vertices. The only significant difference that this gadget makes to the construction is that now it can take up to two strategy improvement iterations for the [$\textsc{Or}$]{}-gate to compute its output. This is because we may have to wait for $o^j_{i,2}$ to switch before $o^j_{i, 1}$ can switch. The vertex $o^j_{i}$ always chooses the edge to $o^j_{i, 1}$ during the computation, because the valuation of $r^j$ is larger than the valuation of $s^j$. To deal with this, we can redesign the construction so that each [$\textsc{Not}$]{}-gate $i$ is computed on iteration $2i$ rather than iteration $i$, and each [$\textsc{Or}$]{}-gate is computed before iteration $2i$. This involves making the following changes: - The length of the deceleration lane in the two clocks must be extended by $2k$, to account for the $2k$ extra iterations it takes for the circuits to compute ($k$ extra iterations for circuit $0$ and $k$ extra iterations for circuit $1$). Moreover, the delays for both of the clocks must be increased by $k$. - For the same reason, the length of the modified deceleration lanes in the [$\textsc{Not}$]{}and [$\textsc{Input/Output}$]{}gates must be increased by $2k$. - Finally, the edge to $o^j_{\operatorname{InputState}(i, j)}$ must be moved from $t_{i, d(i)}$ to $t_{i, 2 d(i)}$. Once these changes have been made, we have produced a binary parity game. One final thing we must be aware of is that we only get a unique-sink orientation if there is never a tie between the valuation of two vertices. This, however, always holds in a one-sink game because every vertex has a distinct priority, and therefore all paths necessarily contain a distinct set of priorities, which prevents ties in the $\sqsubseteq$ ordering. 
Therefore, we have the [$\mathtt{PSPACE}$]{}-completeness result for the <span style="font-variant:small-caps;">BottomAntipodal</span> algorithm claimed in Corollary \[cor:uso\]. The optimal strategy result {#sec:optstrat} =========================== We are also able to prove a result about the complexity of determining which optimal strategy is found by the Vöge-Jurdziński algorithm. However, we cannot formulate this in the context of a one-sink game, because any result of this nature must exploit *ties in valuations.* In a one-sink game, since every vertex has a different priority, no two paths can have the same set of priorities, so ties are not possible. Hence, for a one-sink game, there will be a unique optimal strategy, and so the complexity of finding it can be no harder than solving the parity game itself, and this problem is not [$\mathtt{PSPACE}$]{}-complete unless [$\mathtt{PSPACE}$]{}= [$\mathtt{UP}$]{}$\cap$ [$\mathtt{coUP}$]{}. On the other hand, ties in valuations are possible in the original Vöge-Jurdziński algorithm. This is because the first component of their valuation is not necessarily $1$, and so the second component does not necessarily contain every priority along the relevant path (recall that priorities smaller than the first component are not included in the second component). These facts mean that it is possible to construct parity games that have multiple optimal strategies under the Vöge-Jurdziński valuation. #### **Our construction.** We will use a slight modification of our construction to show that computing the optimal strategy found by the Vöge-Jurdziński algorithm is [$\mathtt{PSPACE}$]{}-complete. The key difference is the addition of a *third* clock with $n+1$ bits, which will be indexed by $2$. We remove a single edge from this clock: the edge from $e^2_{n+1}$ to $h^2_{n+1}$. 
Recall that in the clock construction, the odd vertices $e^j_{i}$ do not use the edge to $h^j_i$ unless they are forced to by the vertex $d^j_i$ selecting the edge to $e^j_i$. Hence, until the $n+1$th bit is flipped, the third clock behaves like any other clock. When the $n+1$th bit is flipped, after $2^n$ iterations have taken place, a new cycle is formed with a very large even priority. We also modify the edges that leave $d^1_z$. For each edge $e = (d^1_z, u)$ we do the following: 1. We delete $e$. 2. We introduce a new vertex $v_u$ owned by player Even. This vertex is assigned an insignificant priority that, in particular, is much smaller than the priorities assigned to $e^2_{n+1}$ and $d^2_{n+1}$. 3. We add the edges $(d^1_z, v_u)$, $(v_u, u)$, and $(v_u, f^2_{n+1})$. The following table summarises the extra clock that we add to the construction, and the new outgoing edges from $d^1_z$. For ease of notation, we define $U = \{ y^1, z^1, e^1_z\} \cup \{a^1_{z,l} \; : \; 1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}\}$ to be the original outgoing edges from $d^1_z$ that will now be replaced. Moreover, we assume that each vertex $u$ is represented by a number in the range $1 \le u \le |U|$, which will be used as part of the priority for $v_u$. 
| Vertex | Conditions | Edges | Priority | Player |
|---|---|---|---|---|
| $t_0^2$ | | $r^2$, $s^2$ | $\operatorname{P}(2, 0, 2k+4n+4, 2, 0)$ | Even |
| $t_l^2$ | $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ | $r^2$, $s^2$, $t^2_{l-1}$ | $\operatorname{P}(2, 0, l, 2, 1)$ | Even |
| $a_l^2$ | $1 \le l \le {\ensuremath{2k + 4n + 6}\xspace}$ | $t^2_l$ | $\operatorname{P}(2, 0, l+1, 2, 0)$ | Even |
| $d^2_i$ | $1 \le i \le n$ | $e^2_i$, $s^2$, $r^2$, $a^2_l$ for $1 \le l \le {\ensuremath{2k + 2n + 6}\xspace}+ 2i$ | $\operatorname{P}(1, i, 0, 2, 1)$ | Even |
| $e^2_i$ | $1 \le i \le n$ | $d^2_i$ | $\operatorname{P}(1, i, 1, 2, 0)$ | Odd |
| $g^2_i$ | $1 \le i \le n$ | $f^2_i$ | $\operatorname{P}(1, i, 2, 2, 1)$ | Even |
| $k^2_i$ | $1 \le i \le n$ | $x$, $g^2_l$ for $i < l \le n$ | $\operatorname{P}(8, i, 0, 2, 1)$ | Even |
| $f^2_i$ | $1 \le i \le n$ | $e^2_i$ | $\operatorname{P}(8, i, 1, 2, 1)$ | Even |
| $h^2_i$ | $1 \le i \le n$ | $k^2_i$ | $\operatorname{P}(8, i, 2, 2, 0)$ | Even |
| $s^2$ | | $x$, $f^2_l$ for $1 \le l \le n$ | $\operatorname{P}(7, 0, 0, 2, 0)$ | Even |
| $r^2$ | | $x$, $g^2_l$ for $1 \le l \le n$ | $\operatorname{P}(7, 0, 1, 2, 0)$ | Even |
| $v_u$ | $u \in U$ | $u$ | $\operatorname{P}(0, 0, 0, u, 0)$ | Even |
| $d^1_z$ | | $v_u$ for all $u \in U$ | $\operatorname{P}(4,i,0,j,1)$ | Even |

#### **[$\mathtt{PSPACE}$]{}-completeness.** We now argue how this modified construction provides a [$\mathtt{PSPACE}$]{}-hardness proof for the optimal strategy decision problem. 
Before the $n+1$th bit of the third clock flips, the edge $(v_u, f^2_{n+1})$ is never switchable due to the large odd priority assigned to $f^2_{n+1}$, so this modification does not affect the computation of $F^{2^n}(B)$. On the other hand, once the $n+1$th bit of the third clock flips, all edges of the form $(v_u, f^2_{n+1})$ immediately become switchable, because the first component of the valuation of $f^2_{n+1}$ is now a large even priority, and not $1$. So all of these edges will be switched simultaneously. The key thing to note is that, since the priorities assigned to the vertices $v_u$ are insignificant, they do not appear in the second component of the valuation, and so vertex $d^1_z$ is now indifferent between all of its outgoing edges. Moreover, the vertices $v_u$ never switch away from $f^2_{n+1}$ for the following reasons: - The only even cycle that can be forced by player Even is the one that uses $d^2_{n+1}$ and $e^2_{n+1}$. So, these vertices must select a strategy that reaches this cycle eventually. - All priorities used in the circuits are smaller than the priority of the cycle between $d^2_{n+1}$ and $e^2_{n+1}$. So, the second component of the valuation function is irrelevant, and the only way of improving the strategy would be to find a shorter path to the cycle. - The vertices $v_u$ are the only vertices that have edges to the third clock, so the only way a vertex $v_u$ could reach the third clock would be to travel through both circuits to reach $d^1_z$, and then use a different vertex $v_{u'}$, but this would be a much longer path, and therefore this would have a lower valuation. Hence, the vertices $v_u$ will never switch away from $f^2_{n+1}$. Observe that, after $2^n$ iterations, the input/output gadgets in circuit $1-j$ store the value of $F^{2^n}(B)$, and therefore $d^1_z$ chooses the edge to $e^1_z$ if and only if the $z$th bit of $F^{2^n}(B)$ is $1$. 
The above argument implies that $d^1_z$ does not switch again, so in the optimal strategy found by the algorithm, the vertex $d^1_z$ chooses the edge to $e^1_z$ if and only if the $z$th bit of $F^{2^n}(B)$ is $1$. Thus, we have that computing the optimal strategy found by the Vöge-Jurdziński strategy improvement algorithm is [$\mathtt{PSPACE}$]{}-complete, as claimed in Theorem \[thm:optstrat\]. #### **Other games.** We also get similar results for the gain-bias algorithm for mean-payoff games, and the standard strategy improvement algorithms for discounted and simple stochastic games. For the most part, we can still rely on the proof of Friedmann for these results. This is because, although we do not have a one-sink game, the game behaves as a one-sink game until the $n+1$th bit in the third clock is flipped. An easy way to see this is to reinstate the edge between $e^2_{n+1}$ and $h^2_{n+1}$ to create a one-sink game, and observe that, since the edge is not used in the best response until the $n+1$th bit is flipped, it cannot affect the sequence of strategies visited by strategy improvement. Once the $n+1$th bit has flipped, we only care about making $d^1_z$ indifferent between its outgoing edges, and in this section we explain how this is achieved. For mean-payoff games, we use the same reduction as we did in Section \[sec:other\] to our altered construction. After doing this, we set the weight of the vertices $v_u$ to $0$ to ensure that $d^1_z$ will be exactly indifferent between all of its outgoing edges once these vertices switch to $f^2_{n+1}$. This gives the result for the gain-bias algorithm in mean-payoff games. For discounted games, once the standard reduction from mean-payoff to discounted games has been applied, the proof of Friedmann already implies that the discounted game algorithm makes the same decisions as the Vöge-Jurdziński algorithm for the vertices other than $d^1_z$. 
The only worry is that the discount factor may make the vertex $d^1_z$ not indifferent between some of its outgoing edges. However, it is enough to note that all paths from $d^1_z$ to $f^2_{n+1}$ have length $2$, and therefore the vertex will be indifferent no matter what discount factor is chosen. This gives the result for the standard strategy improvement algorithm for discounted games. Finally, after applying the standard reduction from discounted to simple stochastic games, the proof of Friedmann can be applied to argue that the valuations in the simple stochastic game are related to the valuations in the discounted game by a linear transformation. Hence, $d^1_z$ will still be indifferent between its outgoing edges after the $n+1$th bit is flipped. This gives the result for the standard strategy improvement algorithm for simple stochastic games. Thus, we have the claimed results from Corollary \[cor:optstrat\]. Open problems {#sec:conc} ============= Strategy improvement generalizes policy iteration, which solves mean-payoff and discounted-payoff Markov decision processes [@puterman94]. The exponential lower bounds for greedy all-switches have been extended to MDPs. Fearnley showed that the second player in Friedmann’s construction [@F11] can be simulated by a probabilistic action, and used this to show an exponential lower bound for the all-switches variant of policy iteration for average-reward MDPs [@F10]. This technique cannot be applied to the construction in this paper, because we use additional Odd player states (in particular the vertices $q^j_{i,1}$) that cannot be translated in this way. Can our [$\mathtt{PSPACE}$]{}-hardness results be extended to [[all-switches strategy improvement]{}]{} for MDPs? Also, there are other pivoting algorithms for parity games that deserve attention. 
It has been shown that Lemke’s algorithm and the Cottle-Dantzig algorithm for the P-matrix linear complementarity problem (LCP) can be applied to parity, mean-payoff, and discounted games [@js08; @fjs10]. It would be interesting to come up with similar [$\mathtt{PSPACE}$]{}-completeness results for these algorithms, which would also then apply to the more general P-matrix LCP problem. Facts about the clock {#app:clock} ===================== In this section we prove two important lemmas about the clocks. The first lemma shows an important property about the difference in valuation between $r^j$ and $s^j$ for the clock used by circuit $j$. The second lemma considers the difference in valuation *across* the two clocks, by comparing the valuations of $r^j$, $s^j$, $r^{1-j}$, and $s^{1-j}$. \[lem:clockrs\] Let $\sigma$ be a strategy that agrees with $\kappa^{K}_m$ for some $m$ in the range $1 \le m \le \operatorname{Length}({K})$, some clock-value ${K}\in \{0, 1\}^n$, and some $j \in \{0, 1\}$. We have: 1. \[itm:rsone\] If $m = \operatorname{Length}({K}) -1$ then $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. 2. \[itm:rstwo\] If $m < \operatorname{Length}({K}) - 1$ then $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. In both cases, we have that $\operatorname{MaxDiff}^{\sigma}(s^j, r^j) \ge \operatorname{P}(7, 0, 0, 0, 0)$. We begin with the first case. In this case, by definition, we have that the path that starts at $s^j$ and follows $\sigma$ moves to $f^j_i$ for some $i$, whereas the path that starts at $r^j$ and follows $\sigma$ moves to $g^j_{i'}$ for some $i' \ne i$. There are two possibilities. 1. If $i > i'$, then since $i$ is the least significant zero in ${K}$, we must have that $i' = \operatorname{NextBit}({K}, 0) = 1$. 
Hence, the path that starts at $g^j_{i'}$ passes through the bit gadgets for all bits strictly smaller than $i$ before eventually arriving at $g^j_{\operatorname{NextBit}({K}, i)}$ (or $x$ if $\operatorname{NextBit}({K}, i)$ is not defined). In particular, since the path does *not* pass through $h^j_i$, the largest priority on the path is strictly smaller than $\operatorname{P}(8, i, 2, j, 0)$. On the other hand, the path that starts at $f^j_i$ eventually arrives at $g^j_{\operatorname{NextBit}({K}, i)}$ (or $x$ if $\operatorname{NextBit}({K}, i)$ is not defined) and it *does* pass through $h^j_i$. The largest priority on this path is $\operatorname{P}(8, i, 2, j, 0)$, and since the priority is even, we can conclude that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. 2. If $i < i'$, then since $i'$ is the least significant one in ${K}$, we must have that $i = 1$. Hence, the path that starts at $f^j_i$ eventually moves to $k^j_i$ and then directly to $g^j_{i'}$. The largest priority on this path is $\operatorname{P}(8, i', 2, j, 0)$, and since this is even, we can conclude that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. We now move on to the second case. In this case, by definition, we have that the path that starts at $s^j$ moves to $f^j_i$ for some $i$, whereas the path that starts at $r^j$ moves to $g^j_i$ and then to $f^j_i$. Since the priority on $g^j_i$ is strictly smaller than $\operatorname{P}(7, 0, 1, j, 0)$, we have that $\operatorname{MaxDiff}^{\sigma}(r^j, s^j) = \operatorname{P}(7, 0, 1, j, 0)$, which is the priority assigned to $r^j$. Since this priority is even, we have that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, as required. Observe that in all cases considered above, we have shown that $\operatorname{MaxDiff}^{\sigma}(r^j, s^j) \ge \operatorname{P}(7, 0, 0, 0, 0)$, as required. Hence, we have completed the proof of this lemma. 
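The case analysis above is driven entirely by the positions of the least significant zero and one of ${K}$, and by the fact that the clocks count in binary. A small sketch of this bookkeeping, under the assumption that bits are indexed from $1$ with bit $1$ least significant (the helper names below are ours, not the paper's $\operatorname{NextBit}$ machinery, and `increment` assumes at least one zero bit is present):

```python
def least_significant_zero(k):
    """1-based index of the lowest 0 in k (bit 1 is least significant)."""
    return next(i + 1 for i, b in enumerate(k) if b == 0)

def least_significant_one(k):
    """1-based index of the lowest 1 in k."""
    return next(i + 1 for i, b in enumerate(k) if b == 1)

def bits_to_int(k):
    return sum(b << i for i, b in enumerate(k))

def increment(k):
    """Binary increment: flip the least significant 0, clear bits below it."""
    i = least_significant_zero(k)
    return [0] * (i - 1) + [1] + k[i:]

k = [0, 0, 1, 1]                     # K = 12, bit 1 least significant
assert least_significant_zero(k) == 1
assert least_significant_one(k) == 3
assert bits_to_int(increment(k)) == bits_to_int(k) + 1
```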
\[lem:crossclock\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some $m \ge 1$, some bit-strings $B, {K}\in \{0, 1\}^n$, and some $j \in \{0, 1\}$. We have: 1. \[itm:ccone\] If $m = \operatorname{Delay}(j, {K}) - 1$, then $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(r^{j}) \sqsubset \operatorname{Val}^{\sigma}(s^{1-j})$ and $\operatorname{MaxDiff}^\sigma(r^{1-j}, s^j) \ge \operatorname{P}(7, 0, 0, 0, 0)$ and $\operatorname{MaxDiff}^\sigma(r^{1-j}, r^j) \ge \operatorname{P}(7, 0, 0, 0, 0)$. 2. \[itm:cctwo\] If $m < \operatorname{Delay}(j, {K}) - 1$, then $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^{j})$ and $\operatorname{MaxDiff}^\sigma(r^{1-j}, s^{j}) \ge \operatorname{P}(7, 0, 0, 0, 0)$ and $\operatorname{MaxDiff}^\sigma(s^{j}, r^{1-j}) \ge \operatorname{P}(7, 0, 0, 0, 0)$. We begin with the second claim. The fact that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$ follows from part \[itm:rstwo\] of Lemma \[lem:clockrs\], so it is sufficient to show that $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. There are two cases to consider, based on whether $j = 0$ or $j = 1$. 1. If $j = 0$, then clock $j$ uses bit-string ${K}$, and clock $1-j$ uses bit-string ${K}- 1$. Observe that the clock strategies specify that the path starting at $s^{j}$ visits $h^j_i$ if and only if ${K}_i = 1$. Similarly, the path starting at $r^{1-j}$ visits $h^{1-j}_i$ if and only if $({K}- 1)_i = 1$. If $i'$ is the index of the least significant $1$ in ${K}$, then we have that the path that starts at $s^j$ visits $h^j_{i'}$ and $k^j_{i'}$, and the path that starts at $r^{1-j}$ does not visit these vertices. Moreover, these two paths are the same after this point. 
Hence, we have that $\operatorname{MaxDiff}(s^j, r^{1-j})$ is $\operatorname{P}(8, i', 2, j, 0)$, and since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. 2. If $j = 1$, then both clocks use bit-string ${K}$. Hence, the paths starting at $s^j$ and $r^{1-j}$ use the same path through their respective clocks. So, if $i'$ is the index of the most significant $1$ in ${K}$, then we have that $\operatorname{P}(8, i', 2, 0, 0)$ is the largest priority on the path starting at $r^{1-j} = r^0$, and $\operatorname{P}(8, i', 2, 1, 0)$ is the largest priority on the path starting at $s^{j} = s^1$. Thus, $\operatorname{MaxDiff}^{\sigma}(s^j, r^{1-j}) = \operatorname{P}(8, i', 2, 1, 0)$, and since this priority is even and contained in $\operatorname{Val}^{\sigma}(s^j)$ we can conclude that $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(s^j)$. We now move on to the first claim. Here the same reasoning as we gave for the second case can be used to prove that $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(s^j)$, and therefore Lemma \[lem:clockrs\] implies that $\operatorname{Val}^{\sigma}(r^{1-j}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. What remains is to prove that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(s^{1-j})$. Again there are two cases to consider. 1. If $j = 0$, then clock $j$ uses bit-string ${K}$, and clock $1-j$ is about to transition from bit-string ${K}-1$ to bit-string ${K}$. In fact, the path from $s^{1-j}$ is already the path for bit-string ${K}$, so the proof from item $2$ above can be reused. 2. If $j = 1$, then clock $j$ uses bit-string ${K}$, and clock $1-j$ is about to transition from bit-string ${K}$ to bit-string ${K}+1$. In fact, the path from $s^{1-j}$ is already the path for bit-string ${K}+1$, so the proof from item $1$ above can be reused. 
Finally, we observe that all of the maximum difference priorities used in the proof are strictly larger than $\operatorname{P}(7, 0, 0, 0, 0)$, which completes the proof. Best responses {#app:br} ============== In this section, we prove that the best responses defined in Section \[sec:strategies\] are indeed best responses to $\chi^{B, {K}, j}_m$. There are two types of odd vertices used in the construction: the vertices $e^j_i$ used in the [$\textsc{Not}$]{} and [$\textsc{Input/Output}$]{} gates, and the vertices $q^j_{i, 0}$ used in the [$\textsc{Input/Output}$]{} gates. We begin by proving a general lemma concerning the vertices $e^j_i$ used in the [$\textsc{Not}$]{} and [$\textsc{Input/Output}$]{} gates. \[lem:nocycle\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. For every $i \in {\ensuremath{\textsc{Not}}\xspace}\cup {\ensuremath{\textsc{Input/Output}}\xspace}$, and every $l \in \{0, 1\}$, we have that if $\sigma(d^l_{i}) = e^l_i$, then $\operatorname{Br}(\sigma)(e^l_i) \ne d^l_i$. Note that if player Odd uses the edge from $e^l_i$ to $d^l_i$, then this would create a cycle with largest priority $\operatorname{P}(1, i, 1, j, 0)$, which is even. Since the game is a one-sink game, and since the initial strategy is terminating, we have that Odd can eventually reach the odd cycle at $x^j$ from the vertices $r^j$, $r^{1-j}$, $s^j$, and $s^{1-j}$. Furthermore, Odd can reach one of these four vertices by moving to $h^l_i$. Since the odd cycle at $x^j$ has priority smaller than $\operatorname{P}(1, i, 1, j, 0)$, we can conclude that $\operatorname{Br}(\sigma)(e^l_i) \ne d^l_i$. We now proceed to prove an individual lemma for each of the vertices that belong to player Odd. Each type of Odd vertex will be considered in a different subsection. 
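Every comparison $\operatorname{Val}^{\sigma}(u) \sqsubset \operatorname{Val}^{\sigma}(v)$ in the lemmas below follows one pattern: identify the maximum priority on which the two valuations differ, and let its parity decide. The following sketch captures that rule under the simplifying assumption that a valuation is just a set of priorities (the actual Vöge–Jurdziński valuations also carry a cycle component and a path length, which we omit; the function name is ours):

```python
def better_for_even(val_a, val_b):
    """True if player Even prefers val_b to val_a: the largest priority in
    the symmetric difference (the MaxDiff priority) decides, and it favours
    the valuation containing it exactly when it is even."""
    diff = val_a ^ val_b          # priorities on which the valuations differ
    if not diff:
        return False
    d = max(diff)
    return (d % 2 == 0) == (d in val_b)

# An even priority 8 seen only on the second path makes it preferable ...
assert better_for_even({3, 5}, {3, 5, 8})
# ... while an odd priority 9 seen only on the second path makes it worse.
assert not better_for_even({3, 5}, {3, 5, 9})
```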
The vertices $q^l_{i, 0}$ ------------------------- We now consider the vertices $q^l_{i, 0}$ for $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$. The first lemma considers the case where $l = j$, and the second lemma considers the case where $l = 1-j$. \[lem:brqj\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. For every $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, we have that $\operatorname{Br}(\sigma)(q^{j}_{i,0}) = \mu^{B, {K}, j}_m(q^j_{i,0})$. There are two cases to consider. - If $m = 1$, then we must show that the edge to $e^j_{i}$ is chosen by Odd in the best response. Consider a strategy $\tau$ where $\tau(e^j_i) = h^j_{i,0}$, and $\tau(q^j_{i,0}) = e^j_i$. When $\tau$ is played against $\sigma$, the path that starts at $e^j_i$ eventually arrives at $r^{1-j}$, and the largest priority on this path is strictly smaller than $\operatorname{P}(6, d(C) + 2, 0, j, 0)$. On the other hand, taking the edge to $q^j_{i, 1}$ leads directly to $r^{1-j}$ while visiting the priority $\operatorname{P}(6, d(C) + 2, 0, j, 0)$. Since this priority is even, we can conclude that Odd would prefer to play $\tau$ rather than to use the edge from $q^j_{i, 0}$ to $q^j_{i, 1}$ in his best response. Therefore, we must have that $\operatorname{Br}(\sigma)(q^j_{i, 0}) = e^j_i$, as required. - If $m > 1$, then we must show that the edge to $q^{j}_{i,1}$ is the least appealing edge at $q^{j}_{i, 0}$. Observe that the path that starts at $e^{j}_i$ and follows $\sigma$ eventually arrives at $r^j$, and every priority on this path is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. On the other hand, the path that starts at $q^j_{i, 1}$ moves directly to $r^{1-j}$. 
Hence, we can apply Lemma \[lem:crossclock\] (both parts) to argue that $\operatorname{Val}^{\sigma}(q^{j}_{i, 1}) \sqsubset \operatorname{Val}^{\sigma}(e^j_i)$, as required. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. For every $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, we have that $\operatorname{Br}(\sigma)(q^{1-j}_{i,0}) = \mu^{B, {K}, j}_m(q^{1-j}_{i,0})$. We must show that the edge to $e^{1-j}_i$ is the least appealing edge at $q^{1-j}_{i, 0}$. Observe that the path that starts at $e^{1-j}_i$ and follows $\sigma$ will eventually arrive at either $r^j$ or $r^{1-j}$. In either case, the largest priority on this path will be strictly smaller than $\operatorname{P}(6, d(C) + 2, 0, j, 0)$. On the other hand, the path that starts at $q^{1-j}_{i, 1}$ moves directly to $r^j$, and the largest priority on this path is $\operatorname{P}(6, d(C) + 2, 0, j, 0)$. Since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(e^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(q^{1-j}_{i, 1})$, as required. The vertices $e^l_i$ in [$\textsc{Not}$]{} gates ----------------------------------------------- The following lemma considers the vertices $e^l_i$ for $l = j$ and $i \in {\ensuremath{\textsc{Not}}\xspace}$. We do not need to prove a lemma for the case where $l = 1-j$ and $i \in {\ensuremath{\textsc{Not}}\xspace}$, because these vertices are in the non-computing circuit, and we do not specify strategies for the [$\textsc{Not}$]{} and [$\textsc{Or}$]{} gadgets in the non-computing circuits. \[lem:brnot\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. 
For every $i \in {\ensuremath{\textsc{Not}}\xspace}$, we have that $\operatorname{Br}(\sigma)(e^{j}_{i}) = \mu^{B, {K}, j}_m(e^j_i)$. There are three cases to consider. 1. If $m = 1$, then the path that starts at $d^j_i$ and follows $\sigma$ moves directly to $s^j$. On the other hand, the path that starts at $h^j_i$ moves directly to $r^j$. All of the priorities on these paths are strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$, so we can apply Lemma \[lem:clockrs\] part \[itm:rstwo\] to argue that $\operatorname{Val}^{\sigma}(d^j_i) \sqsubset \operatorname{Val}^{\sigma}(h^j_i)$, and therefore we have $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$. 2. If $m > 1$ and either $m \le d(i) + 2$, or $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}(i)) = 1$, then observe that the path that starts at $d^j_i$ and follows $\sigma$ eventually arrives at $r^j$, and that the largest priority on this path is strictly smaller than $\operatorname{P}(6,i,1,j,0)$. On the other hand, the path that starts at $h^j_i$ and follows $\sigma$ moves directly to $r^j$, and the largest priority on this path is $\operatorname{P}(6,i,1,j,0)$. Since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(d^j_i) \sqsubset \operatorname{Val}^{\sigma}(h^j_i)$, and therefore we have $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$. 3. If $m > d(i) + 2$ and $\operatorname{Eval}(B, {I}(i)) = 0$, then we have $\sigma(d^j_i) = e^j_i$, so we can apply Lemma \[lem:nocycle\] to prove that $\operatorname{Br}(\sigma)(e^j_i) = h^j_i$. The vertices $e^j_i$ in input/output gates ------------------------------------------ The following lemmas consider the vertices $e^l_i$ when $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$. The first lemma considers the case where $l = j$, and the second lemma considers the case where $l = 1-j$. 
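The case splits above, and the depth-indexed induction of Lemma \[lem:oval\] below, all track the ordinary evaluation $\operatorname{Eval}(B, i)$ of the circuit. As a point of reference, here is a hedged sketch of that evaluation, assuming a circuit of input, NOT and OR gates stored as a dictionary in which every gate refers only to gates of strictly smaller depth (the representation and the name `eval_gate` are ours, not the construction's):

```python
def eval_gate(circuit, b, i, memo=None):
    """Evaluate gate i of a NOT/OR circuit on the input bit-string b."""
    memo = {} if memo is None else memo
    if i in memo:
        return memo[i]
    kind, arg = circuit[i]
    if kind == "input":
        v = b[arg]                 # arg is the index of an input bit
    elif kind == "not":
        v = 1 - eval_gate(circuit, b, arg, memo)
    else:                          # "or": arg is a pair of gate ids
        v = max(eval_gate(circuit, b, l, memo) for l in arg)
    memo[i] = v
    return v

# A two-gate example computing NOT(b0) OR b1.
circuit = {0: ("input", 0), 1: ("input", 1), 2: ("not", 0), 3: ("or", (2, 1))}
assert eval_gate(circuit, (1, 0), 3) == 0
assert eval_gate(circuit, (0, 0), 3) == 1
```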
\[lem:br1\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. For every $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, we have that $\operatorname{Br}(\sigma)(e^{j}_{i}) = \mu^{B, {K}, j}_m(e^j_i)$. There are a number of cases to consider. 1. If $m > 1$ and either $m \le d(i)+3$, or $m > d(i)+3$ and $F^2(B)_i = 0$, then the path that starts at $d^j_i$ eventually arrives at $r^j$, and the largest priority on this path is strictly smaller than $\operatorname{P}(6,d(C)+1,1,j,0)$. On the other hand, the path that starts at $h^j_{i, 0}$ and follows $\sigma$ passes through $h^j_{i, 1}$ and then arrives at $r^j$. The largest priority on this path is $\operatorname{P}(6,d(C)+1,1,j,0)$, and since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(d^j_i) \sqsubset \operatorname{Val}^{\sigma}(h^j_{i, 0})$. Therefore, we have that $\operatorname{Br}(\sigma)(e^j_i) = d^j_{i}$. 2. If $m > 1$ and $m > d(i) + 3$ and $F^2(B)_i = 1$, then $\sigma(d^j_i) = e^j_i$, and so we can apply Lemma \[lem:nocycle\] to argue that $\operatorname{Br}(\sigma)(e^j_i) = h^j_{i,0}$. \[lem:br2\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some bit-strings $B, {K}\in \{0, 1\}^n$, some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, and some $j \in \{0, 1\}$. For every $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, we have that $\operatorname{Br}(\sigma)(e^{1-j}_{i}) = \mu^{B, {K}, j}_m(e^{1-j}_i)$. There are a number of cases to consider. 1. If $m = 1$ and $B_i = 0$, then the path that starts at $d^{1-j}_i$ and follows $\sigma$ will eventually reach $r^{1-j}$, and the largest priority on this path is strictly smaller than $\operatorname{P}(6,d(C)+1,1,j,0)$. 
On the other hand, the path that starts at $h^{1-j}_{i, 0}$ moves to $h^{1-j}_{i, 1}$ and then arrives at $r^{1-j}$, and the largest priority on this path is $\operatorname{P}(6,d(C)+1,1,j,0)$. Since this priority is even, we have that $\operatorname{Val}^{\sigma}(d^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(h^{1-j}_{i,0})$ and therefore $\operatorname{Br}(\sigma)(e^{1-j}_i) = d^{1-j}_{i}$. 2. If $m > 1$ and $B_i = 0$, then the path that starts at $d^{1-j}_i$ moves to $r^j$, and the largest priority on this path is strictly smaller than $\operatorname{P}(6, 0, 1, j, 0)$. On the other hand, the path that starts at $h^{1-j}_{i, 0}$ moves to $h^{1-j}_{i, 2}$ and then arrives at $r^{j}$, and the largest priority on this path is $\operatorname{P}(6, 0, 1, j, 0)$. Since this priority is even, we have that $\operatorname{Val}^{\sigma}(d^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(h^{1-j}_{i,0})$ and therefore $\operatorname{Br}(\sigma)(e^{1-j}_i) = d^{1-j}_{i}$. 3. If $B_i = 1$, then we have $\sigma(d^{1-j}_i) = e^{1-j}_i$, and we can apply Lemma \[lem:nocycle\] to argue that $\operatorname{Br}(\sigma)(e^{1-j}_i) = h^{1-j}_{i,0}$. Gate outputs {#app:outputs} ------------ In this section we give two key lemmas that describe the valuation of the output states $o^j_i$. The first lemma considers the case where $m = 1$ or $m = 2$, and the second lemma considers the case where $m \ge 3$. \[lem:oval1\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}$, and for $m = 1$ or $m = 2$. For every gate $i$, we have $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. There are four cases to consider. 1. We begin by showing the claim for an input/output gate $i$ from circuit $1-j$ in the case where $m = 1$. Note that Lemma \[lem:br2\] implies that $\operatorname{Br}(\sigma)(e^{1-j}_i) = d^{1-j}_i$. 
Observe that by definition, the path that starts at $d^{1-j}_i$ and follows $\sigma$ will trace a path through the gadgets for circuit $1-j$, and will eventually reach $r^{1-j}$. Furthermore, the largest priority that can be seen along this path is $\operatorname{P}(6, d(C)+1, 1, j, 0) < \operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply Lemma \[lem:crossclock\] to argue that $\operatorname{Val}^{\sigma}(o^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 2. Now we consider an input/output gate $i$ from circuit $1-j$ in the case where $m = 2$. Again, Lemma \[lem:br2\] implies that $\operatorname{Br}(\sigma)(e^{1-j}_i) = d^{1-j}_i$, but in this case since $m = 2$, we have that the vertices $y^{1-j}$ and $p^{1-j}_i$ have both switched to $r^j$. Hence, the path that starts at $o^{1-j}_i$ will eventually arrive at $r^j$. It can be verified that, whatever path is taken from $o^{1-j}_{i}$ to $r^j$, the largest priority along this path is $\operatorname{P}(6, i, 0, j, 1)$ on the vertex $o^{1-j}_{i}$. Since this priority is odd, we have that $\operatorname{Val}^{\sigma}(o^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 3. Next we consider the case where $i$ is an [$\textsc{Or}$]{}-gate. If $m = 1$ then we have that $\sigma(o^j_i) = s^j$, and if $m = 2$ then $\sigma(o^j_i) = r^j$. In both cases, we can use Lemma \[lem:clockrs\] and the fact that the priority assigned to $o^j_i$ is odd, to conclude that $\operatorname{Val}^{\sigma}(o^{j}_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 4. Finally, we consider the case where $i$ is a [$\textsc{Not}$]{}-gate. We can apply Lemma \[lem:brnot\] to argue that $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$. If $m = 1$ then we have that $\sigma(d^j_i) = s^j$, and if $m = 2$ then $\sigma(d^j_i) = r^j$. 
In both cases, the highest priority on the path from $o^j_i$ to either $s^j$ or $r^j$ is the odd priority from $o^j_i$, so we can use this fact, along with Lemma \[lem:clockrs\], to conclude that $\operatorname{Val}^{\sigma}(o^{j}_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. \[lem:oval\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}$, and some $m$ in the range $3 \le m \le \operatorname{Delay}(j, {K})$. For every gate $i$, we have: 1. \[itm:nineone\] If $m \le d(i) + 2$, then $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 2. \[itm:ninetwo\] If $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 0$, then $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 3. \[itm:ninethree\] If $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 1$, then $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^j_i)$ and: $$\operatorname{P}(6,0,0,0,0) \le \operatorname{MaxDiff}^{\sigma}(r^j, o^j_i) \le \operatorname{P}(6, i, 1, j, 0).$$ We will prove this claim by induction over the depth of the gates. For the base case we consider an input/output gate $i$ from circuit $1-j$, which provides the input values for circuit $j$. Since we consider these gates to have depth $0$, we always have $m > d(i) + 2$, so there are two cases to prove based on whether $B_i$ is zero or one. First, observe that since $m \ge 3$, the circuit mover gadgets attached to the input/output gadget for bit $i$ in circuit $1-j$ have $\sigma(y^{1-j}) = r^{j}$. Since $\operatorname{Delay}(1-j,{K}) \ge d(C) + 3$, the definition given in  implies that the strategy at $d^{1-j}_i$ is determined by $B_i$. So we have the following two cases. - If $B_i = 0$, then $\sigma(d^{1-j}_i) = a^{1-j}_{i,l}$ for some $l$. 
By definition, in the strategy $\sigma$, all paths from $a^{1-j}_{i,l}$ eventually arrive at $r^j$, and the maximum priority on any of these paths is smaller than $\operatorname{P}(6,0,0,j,1)$. Hence, the largest priority on the path from $o^{1-j}_i$ to $r^j$ is the priority $\operatorname{P}(6,0,0,j,1)$ on the vertex $o^{1-j}_i$, and since this is an odd priority, we have $\operatorname{Val}^{\sigma}(o^{1-j}_i) \sqsubset \operatorname{Val}^{\sigma}(r^{j})$. - If $B_i = 1$, then $\sigma(d^{1-j}_i) = e^{1-j}_i$ and Lemma \[lem:br2\] then implies $\operatorname{Br}(\sigma)(e^{1-j}_i) = h^{1-j}_{i,0}$. So, the path that starts at $o^{1-j}_i$ and follows $\sigma$ passes through $e^{1-j}_i$, $h^{1-j}_{i,0}$, $h^{1-j}_{i,2}$, and then arrives at $r^j$. The largest priority on this path is $\operatorname{P}(6,0,1,j,0) > \operatorname{P}(6, 0, 0, j, 1)$ on the state $h^{1-j}_{i,2}$, so we have $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^{1-j}_i)$. Hence, the base case of the induction has been shown. The inductive step will be split into two cases, based on whether $i$ is a [$\textsc{Not}$]{}-gate or an [$\textsc{Or}$]{}-gate. Suppose that the inductive hypothesis holds for all gates $i$ with $d(i) < k$, and let $i$ be an [$\textsc{Or}$]{}-gate with $d(i) = k$. We must prove three cases. - The first two cases use the same proof. If $m \le d(i)+2$, or if $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 0$, then by definition we have $\sigma(o^j_i) = r^j$. Since the priority assigned to $o^j_i$ is odd, we have $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, as required. - If $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 1$, then by definition we have that $\sigma(o^j_i) = \operatorname{InputState}(i, j, l)$ for some gate $l$ with $l \in \{1, 2\}$, and we know that $\operatorname{Eval}(B, {I}_l(i)) = 1$. 
Hence, we can apply the inductive hypothesis to argue that $\operatorname{MaxDiff}^{\sigma}(r^j, \operatorname{InputState}(i,j,l))$ is even, and it satisfies: $$\operatorname{P}(6,0,0,0,0) \le \operatorname{MaxDiff}^{\sigma}(r^j, \operatorname{InputState}(i,j,l)) \le \operatorname{P}(6, l, 1, j, 0).$$ Since the priority assigned to $o^j_i$ is smaller than $\operatorname{P}(6,0,0,0,0)$, we have that the same two properties apply to $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i)$. Hence, $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^j_i)$, and the required bounds on $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i)$ hold because $\operatorname{P}(6, l, 1, j, 0) < \operatorname{P}(6, i, 1, j, 0)$. Now suppose that the inductive hypothesis holds for all gates $i$ with $d(i) < k$, and let $i$ be a [$\textsc{Not}$]{}-gate with $d(i) = k$. We must prove three cases. - If $m \le d(i) + 2$, then by definition, the path that starts at $o^j_i$ and follows $\sigma$ passes through $e^j_i$, and Lemma \[lem:brnot\] implies that it then moves to $d^j_i$, then to a vertex of the form $a^j_{i,l}$, and then through a number of vertices of the form $t^j_{i,l}$, before finally arriving at $r^j$. It can easily be verified that the largest priority on this path is $\operatorname{P}(6, i, 0, j, 1)$ from the vertex $o^j_i$. So, we have $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i) = \operatorname{P}(6, i, 0, j, 1)$, and since this priority is odd, we have $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, as required. - If $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 0$, then we must have $\operatorname{Eval}(B, {I}(i)) = 1$. 
Hence, by definition, the path that starts at $o^j_i$ and follows $\sigma$ first passes through $e^j_i$, and then Lemma \[lem:brnot\] implies that it passes through $d^j_i$, and then some vertex of the form $a^j_{i,l}$, followed by a number of vertices of the form $t^j_{i,l}$, before finally arriving at $t^j_{i, d(i)}$ and then moving to $\operatorname{InputState}(i,j)$. The largest priority on this path is $\operatorname{P}(6, i, 0, j, 1)$ from the vertex $o^j_i$. By the inductive hypothesis, we have $\operatorname{MaxDiff}^{\sigma}(r^j, \operatorname{InputState}(i,j)) < \operatorname{P}(6, {I}(i), 1, j, 0)$, and since ${I}(i) < i$, we have that $\operatorname{P}(6, i, 0, j, 1) > \operatorname{P}(6, {I}(i), 1, j, 0)$. Hence, we have that $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i) = \operatorname{P}(6, i, 0, j, 1)$, and since this is odd, we have that $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, as required. - If $m > d(i) + 2$ and $\operatorname{Eval}(B, i) = 1$, then by definition we have that the path that starts at $o^j_i$ and follows $\sigma$ passes through $e^j_i$, and then Lemma \[lem:brnot\] implies that it passes through $h^j_i$, and then reaches $r^j$. The largest priority on this path is $\operatorname{P}(6,i,1,j,0)$ on the vertex $h^j_i$. Hence, we have $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_i) \le \operatorname{P}(6, i, 1, j, 0)$, and since this priority is even, we have that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^j_i)$. Hence, we have shown both of the required properties for this case. Now that we have shown the two versions of the inductive hypothesis, we have completed the proof. Or gates {#app:org} ======== The following pair of lemmas show that the states $o^j_i$ in the [$\textsc{Or}$]{} gate gadgets correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. 
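The lemmas that follow verify, case by case, a single generic rule: in each iteration, every vertex of player Even switches to the successor whose current valuation is most appealing. A hedged sketch of that rule, assuming purely for illustration that valuations have already been computed and totally ordered as numeric appeal scores (the names below are ours, and the real algorithm compares full Vöge–Jurdziński valuations rather than numbers):

```python
def switch_all(successors, appeal, strategy):
    """One greedy all-switches step: each vertex adopts its most appealing
    successor, keeping its current edge unless a strictly better one exists."""
    new_strategy = {}
    for v, succs in successors.items():
        best = max(succs, key=lambda w: appeal[w])
        new_strategy[v] = best if appeal[best] > appeal[strategy[v]] else strategy[v]
    return new_strategy

# An Or-gate-like vertex with successors r and two input states: it switches
# to the input state with the higher appeal.
succ = {"o_i": ["r", "in1", "in2"]}
appeal = {"r": 0, "in1": 5, "in2": 3}
assert switch_all(succ, appeal, {"o_i": "r"}) == {"o_i": "in1"}
```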
The first lemma considers the case where $1 \le m < \operatorname{Delay}(j, {K}) - 1$, and the second lemma considers the case where $m = \operatorname{Delay}(j, {K}) - 1$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}$, and some $m$ in the range $1 \le m < \operatorname{Delay}(j, {K}) - 1$. For each [$\textsc{Or}$]{}-gate $i$, greedy all-switches strategy improvement will switch $o^j_i$ to $\chi^{B,{K},j}_{m+1}(o^j_i)$. In this proof we will show that the most appealing edge is the one that is specified in Equation . This boils down to a case analysis. First suppose that $m < d(i)+2$. We must prove that the edge to $r^j$ is the most appealing edge at $o^j_i$. By Lemma \[lem:clockrs\], we have that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Furthermore, since $d({I}_1(i)) = d({I}_2(i)) = d(i) - 1$, part \[itm:nineone\] of Lemma \[lem:oval\] implies that $\operatorname{Val}^{\sigma}(\operatorname{InputState}(i,j,l)) \sqsubset \operatorname{Val}^{\sigma}(r^j)$ for $l \in \{1, 2\}$. Hence, we have that $r^j$ is the most appealing edge at $o^j_i$, as required. Now suppose that $d(i) + 2 \le m \le \operatorname{Delay}(j, {K}) -1$. There are three cases to consider. - If both input gates are false, then Lemma \[lem:clockrs\] and part \[itm:ninetwo\] of Lemma \[lem:oval\] imply that $r^j$ will continue to be the most appealing edge at $o^j_i$, as required. - If ${I}_l(i)$ is true and ${I}_{3-l}(i)$ is false, for some $l \in \{1, 2\}$, then part \[itm:ninethree\] of Lemma \[lem:oval\] implies that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(\operatorname{InputState}(i,j,l))$, and $\operatorname{Val}^{\sigma}(\operatorname{InputState}(i,j,3-l)) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Therefore, the most appealing edge at $o^j_i$ is the one to $\operatorname{InputState}(i,j,l)$, as required. 
- Finally, if both input gates are true, then Lemma \[lem:oval\] implies that the highest appeal edge at $o^j_i$ is either $\operatorname{InputState}(i,j,1)$ or $\operatorname{InputState}(i,j,2)$. Since $\operatorname{OrNext}(i)$ is defined to be the successor with highest appeal, we have that the highest appeal edge at $o^j_i$ is the one to $o^j_{I_{\operatorname{OrNext}(i)}(i)}$, as required. This completes the proof that greedy all-switches strategy improvement will switch $o^j_i$ to $\chi^{B,{K},j}_{m+1}(o^j_i)$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}$, and for $m = \operatorname{Delay}(j, {K})-1$. For each [$\textsc{Or}$]{}-gate $i$, greedy all-switches strategy improvement will switch $o^{1-j}_i$ to $\chi^{B,{K},j}_{m+1}(o^{1-j}_i)$. We must show that the edge to $s^{1-j}$ is the most appealing edge at $o^{1-j}_i$. It can be verified that all paths starting at $\operatorname{InputState}(i,1-j,1)$ and $\operatorname{InputState}(i,1-j,2)$ either reach $r^{1-j}$, $s^{1-j}$, or $r^j$. Furthermore, the largest possible priority on these paths is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply part \[itm:rsone\] of Lemma \[lem:clockrs\] and part \[itm:ccone\] of Lemma \[lem:crossclock\] to conclude that the edge to $s^{1-j}$ is the most appealing outgoing edge at $o^{1-j}_i$. The states $t^j_{i,l}$ in [$\textsc{Not}$]{} gates {#app:nott} ================================================= In this section we show that the states $t^j_{i,l}$ in the [$\textsc{Not}$]{} gate gadgets correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first two lemmas consider the case where $1 \le m < \operatorname{Delay}(j, {K}) - 1$, and the third lemma considers the case where $m = \operatorname{Delay}(j, {K}) - 1$. The first lemma deals with the case where $l < d(i)$, while the second lemma deals with the case where $l > d(i)$. 
Observe that there is no need to deal with the case where $l = d(i)$, because $t^j_{i, d(i)}$ only has one outgoing edge. \[lem:nottsmall\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m < \operatorname{Delay}(j, {K})-1$. For each [$\textsc{Not}$]{}-gate $i$, and each $l$ in the range $1 \le l < d(i)$, greedy all-switches strategy improvement will switch $t^j_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^j_{i,l})$. Lemma \[lem:clockrs\] implies that the state $t^j_{i,l}$ will not switch to $s^j$. In the rest of this proof we consider the two remaining outgoing edges at this state. We have two cases to consider. 1. \[itm:tone\] We first deal with the case where $m < l + 1$, where we must show that the edge to $r^j$ is the highest appeal edge at $t^j_{i,l}$. Let $v = \sigma(t^j_{i,l})$ be the successor of $t^j_{i,l}$ according to $\sigma$ (if $m = 1$ then $v = s^j$, and otherwise $v = r^j$). Since $m \le l$, by the definition in Equation  we have that $\sigma(t^j_{i,l-1}) = v$. Since the priority assigned to $t^j_{i,l-1}$ is odd, we therefore have that $\operatorname{Val}^{\sigma}(t^j_{i,l-1}) \sqsubset \operatorname{Val}^{\sigma}(v)$. Since we already know from Lemma \[lem:clockrs\] that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, we have therefore proved that the edge to $r^j$ is the most appealing edge at $t^j_{i,l}$. 2. \[itm:ttwo\] Now we deal with the case where $m \ge l + 1$, where we must show that the most appealing edge at $t^j_{i,l}$ is the one to $t^j_{i,l-1}$. Since $m > l$, the definition in Equation  implies that the path that starts at $t^j_{i,l-1}$ and follows $\sigma$ will pass through $t^j_{i,l'}$ for all $l'$ in the range $0 \le l' < l$ before arriving at $r^j$. The highest priority on this path is $\operatorname{P}(5, i, 2k + 4n + 4, j, 0)$ on the vertex $t^j_{i,0}$.
Since this priority is even we have that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(t^j_{i,l-1})$, and therefore the edge to $t^j_{i,l-1}$ is the most appealing edge at $t^j_{i,l}$. \[lem:nottlarge\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m < \operatorname{Delay}(j,{K}) - 1$. For each [$\textsc{Not}$]{}-gate $i$, and each $l > d(i)$, greedy all-switches strategy improvement will switch $t^j_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^j_{i,l})$. Lemma \[lem:clockrs\] implies that the state $t^j_{i,l}$ will not switch to $s^j$. In the rest of this proof we consider the two remaining outgoing edges at this state. 1. First we deal with the case where $m < l + 1$, where we must show that the edge to $r^j$ is the most appealing edge at $t^j_{i,l}$. When $l > d(i) + 1$, the proof of this fact is identical to Item \[itm:tone\] in the proof of Lemma \[lem:nottsmall\]. For $l = d(i) + 1$, we invoke Lemma \[lem:oval\] to argue that, since $m < l + 1 = d(i) + 2$, we have that $\operatorname{Val}^{\sigma}(o^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Since the priority assigned to $t^j_{i, d(i)}$ is odd, we therefore also have that $\operatorname{Val}^{\sigma}(t^j_{i,d(i)}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Therefore, the edge to $r^j$ is the most appealing edge at $t^j_{i, d(i) + 1}$. 2. Now we deal with the case where $m \ge l + 1$, and where $\operatorname{Eval}(B, {I}(i)) = 0$. Here we must show that the edge to $r^j$ is the most appealing edge at $t^j_{i, l}$. For each $l > d(i) + 1$, the proof is identical to the proof of the first case in the proof of this lemma. 
For $l = d(i) + 1$, we invoke Lemma \[lem:oval\] to argue that, since $\operatorname{Eval}(B, {I}(i)) = 0$ we must have $\operatorname{Val}^{\sigma}(o^j_{{I}(i)}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, and therefore the edge to $r^j$ is the most appealing edge at $t^j_{i, l}$. 3. Finally, we deal with the case where $m \ge l + 1$ and where $\operatorname{Eval}(B, {I}(i)) = 1$. Here we must show that the edge to $t^{j}_{i,l-1}$ is the most appealing edge at $t^j_{i,l}$. From the definition given in Equation , we have that the path that starts at $t^{j}_{i,l-1}$ and follows $\sigma$ will pass through $t^{j}_{i,l'}$ for each $l'$ in the range $d(i) \le l' < l$ before moving to $o^j_{{I}(i)}$. Since $m \ge l + 1 > d(i)$, we have that $m \ge d(i) + 2$, and therefore Lemma \[lem:oval\] implies that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(o^j_{{I}(i)})$ and that $\operatorname{MaxDiff}^{\sigma}(r^j, o^j_{{I}(i)}) \ge \operatorname{P}(6,0,0,0,0)$. All priorities on the path from $t^j_{i,l-1}$ to $o^j_{{I}(i)}$ are smaller than $\operatorname{P}(6,0,0,0,0)$, so we can conclude that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(t^j_{i,l-1})$, as required. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m = \operatorname{Delay}(j, {K})-1$. For each [$\textsc{Not}$]{}-gate $i$, and each $l$ in the range $1 \le l < {\ensuremath{2k + 4n + 6}\xspace}$, greedy all-switches strategy improvement will switch $t^{1-j}_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^{1-j}_{i,l})$. We must show that the edge to $s^{1-j}$ is the most appealing edge at $t^{1-j}_{i,l}$. It can be verified that all paths starting at $t^{1-j}_{i,l-1}$ reach one of $r^{1-j}$, $s^{1-j}$, or $r^j$. Furthermore, the largest possible priority on these paths is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$.
Hence, we can apply part \[itm:rsone\] of Lemma \[lem:clockrs\] and part \[itm:ccone\] of Lemma \[lem:crossclock\] to conclude that the edge to $s^{1-j}$ is the most appealing outgoing edge at $t^{1-j}_{i,l}$.

The state $d^j_i$ in [$\textsc{Not}$]{}-gates {#app:notd}
============================================

In this section we show that the states $d^j_{i}$ in the [$\textsc{Not}$]{}-gate gadgets correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first lemma considers the case where $m = 1$, the second lemma considers the case where $m = 2$, the third lemma considers the case where $3 \le m < d(i) + 2$, the fourth lemma considers the case where $d(i) + 2 \le m < \operatorname{Delay}(j, {K}) - 1$ and the gate outputs 1, the fifth lemma considers the case where $d(i) + 2 \le m < \operatorname{Delay}(j, {K}) - 1$ and the gate outputs 0, and the final lemma considers the case where $m = \operatorname{Delay}(j, {K}) - 1$. \[lem:notm1\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and where $m=1$. For each [$\textsc{Not}$]{}-gate $i$, greedy all-switches strategy improvement will switch $d^j_i$ to $\chi^{B,{K},j}_{m+1}(d^j_i)$. According to the definition given in Equation , we must show that the edge to $r^j$ is the most appealing edge at $d^j_i$. We do so by a case analysis. 1. First we consider the vertex $s^j$. Here we can apply part \[itm:rstwo\] of Lemma \[lem:clockrs\] to argue that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 2. Next we consider a vertex $a^j_{i, l}$ with $l \ne d(i)$. Here the definition given in Equation  implies that the path that starts at $a^j_{i,l}$ and follows $\sigma$ passes through $t^j_{i,l}$ and then arrives at $s^j$. The largest priority on this path is $\operatorname{P}(5, i, l+1, j, 0)$ on the vertex $a^j_{i,l}$.
However, since this priority is smaller than $\operatorname{P}(7, 0, 0, 0, 0)$, we can apply part \[itm:rstwo\] of Lemma \[lem:clockrs\] to prove that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 3. Next we consider a vertex $a^j_{i, l}$ with $l = d(i)$. Here we can apply Lemma \[lem:oval1\] to argue that $\operatorname{Val}^{\sigma}(o^j_{{I}(i)}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Furthermore, the largest priority on the path from $a^j_{i, d(i)}$ to $o^j_{{I}(i)}$ is the odd priority on $t^j_{i, d(i)}$. Hence, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i, d(i)}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. 4. Finally, we consider the vertex $e^j_i$. Lemma \[lem:brnot\] implies that $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$. Hence, the path that starts at $e^j_i$ moves to $d^j_i$ and then to $s^j$. The highest priority on this path is $\operatorname{P}(4, i, 1, j, 0) < \operatorname{P}(7, 0, 0, 0, 0)$. Therefore, we can apply Lemma \[lem:clockrs\] to show that $\operatorname{Val}^{\sigma}(e^j_i) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Hence, we have shown that the edge to $r^j$ is the most appealing outgoing edge at $d^j_i$, so greedy all-switches strategy improvement will switch $d^j_i$ to $r^j$. \[lem:notm2\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and where $m=2$. For each [$\textsc{Not}$]{}-gate $i$, greedy all-switches strategy improvement will switch $d^j_i$ to $\chi^{B,{K},j}_{m+1}(d^j_i)$. According to the definition given in Equation , we must show that the edge to $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$ is the most appealing edge at $d^j_i$. We do so by a case analysis. 1. First we consider the vertex $r^j$. Observe that the path that starts at $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$ and follows $\sigma$ visits $t^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$ and then arrives at $r^j$.
The highest priority on this path is the even priority assigned to $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$, so therefore we can conclude that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}})$. 2. Next we consider the vertex $s^j$. Here we can apply Lemma \[lem:clockrs\] to argue that $\operatorname{Val}^{\sigma}(s^j) \sqsubset \operatorname{Val}^{\sigma}(r^j)$, and we have already shown that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}})$. 3. Next we consider a vertex $a^j_{i, l}$ with $l \ne d(i)$ and $l \ne {\ensuremath{2k + 4n + 6}\xspace}$. The path that starts at $a^j_{i,l}$ and follows $\sigma$ passes through $t^j_{i,l}$ and then arrives at $r^j$. The largest priority on this path is the even priority assigned to $a^j_{i, l}$. However, since $l < {\ensuremath{2k + 4n + 6}\xspace}$, we have that this priority is smaller than the even priority assigned to $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$. Therefore, we have $\operatorname{Val}^{\sigma}(a^j_{i, l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}})$. 4. Next we consider the vertex $a^j_{i, d(i)}$. Here we can apply Lemma \[lem:oval1\] to argue that $\operatorname{Val}^{\sigma}(o^j_{{I}(i)}) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Furthermore, the largest priority on the path from $a^j_{i, d(i)}$ to $o^j_{{I}(i)}$ is the odd priority on $t^j_{i, d(i)}$. Since the highest priority on the path from $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$ to $r^j$ is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i, d(i)}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}})$. 5. Finally, we consider the vertex $e^j_i$. Lemma \[lem:brnot\] implies that $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$. Hence, the path that starts at $e^j_i$ moves to $d^j_i$ and then to $r^j$.
The highest priority on this path is $\operatorname{P}(4, i, 1, j, 0)$ on the vertex $e^j_i$. However, this is smaller than the largest priority on the path from $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$ to $r^j$, so we can conclude that $\operatorname{Val}^{\sigma}(e^j_i) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}})$. \[lem:dbase\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $3 \le m < d(i) + 2$. For each [$\textsc{Not}$]{}-gate $i$, greedy all-switches strategy improvement will switch $d^j_i$ to $\chi^{B,{K},j}_{m+1}(d^j_i)$. Lemma \[lem:clockrs\] implies that the state $d^j_i$ will not switch to $s^j$. In the rest of this proof we consider the other outgoing edges at this state. Since we are in the case where $m < d(i) + 2$, the definition from Equation  specifies that greedy all-switches strategy improvement must switch to $a^j_{i,m-2}$. Hence we must argue that the edge to $a^j_{i,m-2}$ is the most appealing edge at $d^j_i$, and we will start by considering the appeal of this edge. The definition in Equation  implies that the path that starts at $a^j_{i,m-2}$ passes through $t^j_{i,l}$ for all $l \le m-2$ before arriving at $r^j$. The highest priority on this path is $\operatorname{P}(5, i, 2k+4n+4, j, 0)$ on the vertex $t^j_{i,0}$, and the second highest priority on this path is $\operatorname{P}(5, i, m-1, j, 0)$ on the vertex $a^j_{i,m-2}$. We will now show that all other edges are less appealing. 1. First we consider the vertex $r^j$. Since $\operatorname{P}(5, i, 2k+4n+4, j, 0)$ is even, we immediately get that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 2. \[itm:two\] Next we consider a vertex $a^j_{i,l}$ with $l < m-2$. The path that starts at this vertex and follows $\sigma$ passes through $t^j_{i,l'}$ for all $l' \le l$ before arriving at $r^j$.
The highest priority on this path is $\operatorname{P}(5, i, 2k+4n+4, j, 0)$ on the vertex $t^j_{i,0}$, and the second highest priority on this path is $\operatorname{P}(5, i, l+1, j, 0)$ on the vertex $a^j_{i,l}$. Hence, we have that $\operatorname{MaxDiff}^{\sigma}(a^j_{i,m-2}, a^j_{i, l})$ is $\operatorname{P}(5, i, m-1, j, 0)$, and since this is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 3. \[itm:three\] Next we consider a vertex $a^j_{i,l}$ with $l > m-2$ and with $l \ne d(i)$. The path that starts at this vertex and follows $\sigma$ passes through $t^j_{i,l}$ and then moves immediately to $r^j$. The highest priority on this path is $\operatorname{P}(5, i, l+1, j, 0)$ on the vertex $a^j_{i,l}$. Since $l+1 < 2k + 4n + 4$ we have that $\operatorname{MaxDiff}^{\sigma}(a^j_{i,m-2}, a^j_{i, l})$ is $\operatorname{P}(5, i, 2k+4n+4, j, 0)$, and since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 4. \[itm:four\] Next we consider the vertex $a^j_{i, d(i)}$. The path that starts at this vertex passes through $t^j_{i, d(i)}$, and then moves to $\operatorname{InputState}(i, j)$. Since $m < d(i) + 2$, we have that $m \le d({I}(i)) + 2$, and therefore the first case of Lemma \[lem:oval\] tells us that $\operatorname{Val}^{\sigma}(\operatorname{InputState}(i,j)) \sqsubset \operatorname{Val}^{\sigma}(r^j)$. Hence we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i, d(i)}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 5. \[itm:five\] Finally, we consider the vertex $e^j_i$. By Lemma \[lem:brnot\], the path that starts at $e^j_i$ and follows $\sigma$ passes through $d^j_i$. If $m=3$, it then moves to $a^j_{i, {\ensuremath{2k + 4n + 6}\xspace}}$, and if $m >3$ it then moves to $a^j_{i, m-3}$.
In either case, since the priority assigned to $e^j_i$ is smaller than the priorities assigned to the vertices $a^j_{i,l}$ and $t^j_{i,l}$, we can reuse the arguments made above to conclude that $\operatorname{Val}^{\sigma}(e^j_i) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. Therefore, we have shown that the edge to $a^j_{i,m-2}$ is the most appealing outgoing edge at $d^j_i$, and so this edge will be switched by greedy all-switches strategy improvement. \[lem:notd1\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $d(i) + 2 \le m < \operatorname{Delay}(j, {K}) - 1$. For each [$\textsc{Not}$]{}-gate $i$, if $\operatorname{Eval}(B, {I}(i)) = 1$ then greedy all-switches strategy improvement will switch $d^j_i$ to $\chi^{B,{K},j}_{m+1}(d^j_i)$. Lemma \[lem:clockrs\] implies that the state $d^j_i$ will not switch to $s^j$. In the rest of this proof we consider the other outgoing edges at this state. Since we are in the case where $m \ge d(i) + 2$ and $\operatorname{Eval}(B, {I}(i)) = 1$, the definition from Equation  specifies that greedy all-switches strategy improvement must switch to $a^j_{i,m-2}$. Hence we must argue that the edge to $a^j_{i,m-2}$ is the most appealing edge at $d^j_i$, and we will start by considering the appeal of this edge. The definition in Equation  implies that the path that starts at $a^j_{i,m-2}$ passes through $t^j_{i,l}$ for all $l$ in the range $d(i) \le l \le m-2$ before arriving at $\operatorname{InputState}(i, j)$. Since $m \ge d(i) + 2$ we have $m > d({I}(i)) + 2$, and so Lemma \[lem:oval\] implies that $\operatorname{MaxDiff}^{\sigma}(r^j, \operatorname{InputState}(i, j)) \ge \operatorname{P}(6, 0, 0, 0, 0)$. We now consider the other outgoing edges from $d^j_i$. 1. First we consider $r^j$, where Lemma \[lem:oval\] immediately gives that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 2.
Next we consider a vertex $a^j_{i, l}$ with $l < d(i)$. Using the same reasoning as Item \[itm:two\] in the proof of Lemma \[lem:dbase\], we can conclude that the highest priority on the path from $a^j_{i, l}$ to $r^j$ is $\operatorname{P}(5, i, 2k+4n+4, j, 0) < \operatorname{P}(6, 0, 0, 0, 0)$ and so therefore $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 3. Next we consider a vertex $a^j_{i, l}$ with $l$ in the range $d(i) \le l < m -2$. The path that starts at $a^j_{i, l}$ and follows $\sigma$ passes through $t^j_{i, l'}$ for all $l'$ in the range $d(i) \le l' \le l$ and then arrives at $\operatorname{InputState}(i,j)$. The highest priority on this path is $\operatorname{P}(5, i, l+1, j, 0)$. On the other hand, the largest priority on the path from $a^j_{i, m-2}$ to $\operatorname{InputState}(i,j)$ is $\operatorname{P}(5, i, m-1, j, 0)$. Since this priority is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 4. Next we consider a vertex $a^j_{i, l}$ with $l > m - 2$. Using the same reasoning as Item \[itm:three\] in the proof of Lemma \[lem:dbase\] we can conclude that the highest priority on the path from $a^j_{i, l}$ to $r^j$ is $\operatorname{P}(5, i , l+1, j, 0) < \operatorname{P}(6, 0, 0, 0, 0)$ and so therefore $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. 5. Finally, we consider the vertex $e^j_i$, where we can use the same reasoning as Item \[itm:five\] in the proof of Lemma \[lem:dbase\] to conclude that $\operatorname{Val}^{\sigma}(e^j_{i}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,m-2})$. Therefore, we have shown that the edge to $a^j_{i,m-2}$ is the most appealing outgoing edge at $d^j_i$, and so this edge will be switched by greedy all-switches strategy improvement. 
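Arguments like the one above repeatedly decide which edge is most appealing by looking at $\operatorname{MaxDiff}$, the largest priority on which two valuations differ, and checking its parity. As a rough illustration only (the paper's valuations carry more structure than plain sets, so the function name and set representation here are assumptions for illustration), the comparison $\sqsubset$ can be sketched on sets of path priorities:

```python
def val_less(p, q):
    """Illustrative sketch of the ordering p ⊏ q: p is worse than q iff the
    largest priority on which the two priority sets differ (the analogue of
    MaxDiff) is an even priority belonging to q, or an odd priority
    belonging to p."""
    diff = p ^ q                       # symmetric difference of the two sets
    if not diff:
        return False                   # identical valuations: neither is worse
    d = max(diff)                      # the MaxDiff priority
    return (d in q) == (d % 2 == 0)    # an even priority favours its owner
```

For example, `val_less({5, 2}, {6, 2})` holds because the largest differing priority is the even priority 6, which belongs to the second set; this mirrors the steps above where an even $\operatorname{MaxDiff}$ decides the comparison.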
\[lem:notd0\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $d(i) + 2 \le m < \operatorname{Delay}(j, {K}) - 1$. For each [$\textsc{Not}$]{}-gate $i$, if $\operatorname{Eval}(B, {I}(i)) = 0$ then greedy all-switches strategy improvement will switch $d^j_i$ to $\chi^{B,{K},j}_{m+1}(d^j_i)$. Lemma \[lem:clockrs\] implies that the state $d^j_i$ will not switch to $s^j$. In the rest of this proof we consider the other outgoing edges at this state. Since $m \ge d(i) + 2$ and $\operatorname{Eval}(B, {I}(i)) = 0$, the definition in Equation  specifies that the edge to $e^j_i$ is the most appealing edge at $d^j_i$. To prove this, we first show that all other edges are less appealing than $a^j_{i, d(i) -1}$, and we will then later show that $e^j_i$ is more appealing than $a^j_{i, d(i) -1}$. The definition in Equation  implies that the path that starts at $a^j_{i,d(i)-1}$ passes through $t^j_{i,l}$ for all $l$ in the range $0 \le l \le d(i) - 1$ before arriving at $r^j$. The largest priority on this path is $\operatorname{P}(5, i, d(i), j, 0)$ on the state $a^j_{i, d(i)-1}$. We now consider the other outgoing edges. 1. First we consider the vertex $r^j$. Since $\operatorname{P}(5, i, 2k+4n+4, j, 0)$ is even, we immediately get that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,d(i)-1})$. 2. Next we consider a vertex $a^j_{i, l}$ with $l < d(i) -1$. Here we can use the same reasoning as we used in Item \[itm:two\] in the proof of Lemma \[lem:dbase\] to argue that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,d(i)-1})$. 3. Next we consider a vertex $a^j_{i, l}$ with $l > d(i)$. Here we can use the same reasoning as we used in Item \[itm:three\] in the proof of Lemma \[lem:dbase\] to argue that $\operatorname{Val}^{\sigma}(a^j_{i,l}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,d(i)-1})$. 4.
Finally, we consider the vertex $a^j_{i,d(i)}$. Here we can use the same reasoning as we used in Item \[itm:four\] in the proof of Lemma \[lem:dbase\] to argue that $\operatorname{Val}^{\sigma}(a^j_{i,d(i)}) \sqsubset \operatorname{Val}^{\sigma}(a^j_{i,d(i)-1})$, although this time we will use case two of Lemma \[lem:oval\]. So, we have shown that every edge other than the one to $e^j_i$ is less appealing than the edge to $a^j_{i, d(i)-1}$. Now we will show that the edge to $e^j_i$ is more appealing than the edge to $a^j_{i, d(i)-1}$. There are two cases to consider. - If $m = d(i) + 2$, then $\sigma(d^j_i) = a^j_{i, d(i)-1}$. Since Lemma \[lem:brnot\] implies that $\operatorname{Br}(\sigma)(e^j_i) = d^j_i$, we have that the path that starts at $e^j_i$ and follows $\sigma$ passes through $d^j_i$ and then arrives at $a^j_{i, d(i)-1}$. The largest priority on this path is $\operatorname{P}(4, i, 1, j, 0)$, and since this is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i, d(i) - 1}) \sqsubset \operatorname{Val}^{\sigma}(e^j_i)$. - If $m > d(i) + 2$, then $\sigma(d^j_i) = e^j_i$. In this case Lemma \[lem:brnot\] implies that $\operatorname{Br}(\sigma)(e^j_i) = h^j_i$. The path that starts at $e^j_i$ and follows $\sigma$ passes through $h^j_i$ and then moves directly to $r^j$. The largest priority on this path is $\operatorname{P}(6, i, 1, j, 0)$, which is bigger than $\operatorname{P}(5, i, d(i), j, 0)$. Therefore, $\operatorname{MaxDiff}^{\sigma}(e^j_i, a^j_{i, d(i) - 1}) = \operatorname{P}(6, i, 1, j, 0)$, and since this is even, we can conclude that $\operatorname{Val}^{\sigma}(a^j_{i, d(i) - 1}) \sqsubset \operatorname{Val}^{\sigma}(e^j_i)$. Hence, we have shown that the edge to $e^j_i$ is the most appealing edge at $d^j_i$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m = \operatorname{Delay}(j, {K})-1$.
For each [$\textsc{Not}$]{}-gate $i$, greedy all-switches strategy improvement will switch $d^{1-j}_{i}$ to $\chi^{B,{K},j}_{m+1}(d^{1-j}_{i})$. We must show that the edge to $s^{1-j}$ is the most appealing edge at $d^{1-j}_{i}$. It can be verified that all paths starting at the vertices $a^{1-j}_{i,l}$ and $e^{1-j}_i$ reach one of $r^{1-j}$, $s^{1-j}$, or $r^j$. Furthermore, the largest possible priority on all of these paths is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply part \[itm:rsone\] of Lemma \[lem:clockrs\] and part \[itm:ccone\] of Lemma \[lem:crossclock\] to conclude that the edge to $s^{1-j}$ is the most appealing outgoing edge at $d^{1-j}_{i}$.

The vertices $z^l$ {#app:z}
==================

In this section we show that the states $z^l$ correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first lemma considers the case where $l = j$, while the second lemma considers the case where $l = 1-j$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. The greedy all-switches strategy improvement algorithm will switch $z^{j}$ to $\chi^{B,{K},j}_{m+1}(z^{j})$. We must show that the edge to $r^j$ is the most appealing edge at $z^j$. This follows immediately from part \[itm:rstwo\] of Lemma \[lem:clockrs\]. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. The greedy all-switches strategy improvement algorithm will switch $z^{1-j}$ to $\chi^{B,{K},j}_{m+1}(z^{1-j})$. There are two cases to consider. 1. If $m < \operatorname{Delay}(j, {K}) - 1$, then we must show that the edge to $r^{1-j}$ is the most appealing edge at $z^{1-j}$. This follows immediately by applying part \[itm:rstwo\] of Lemma \[lem:clockrs\]. 2.
If $m = \operatorname{Delay}(j, {K}) - 1$, then since $\operatorname{Delay}(j, {K}) + \operatorname{Delay}(1-j, {K}) = \operatorname{Length}({K})$, we must show that the edge to $s^{1-j}$ is the most appealing edge at $z^{1-j}$. This follows immediately from part \[itm:rsone\] of Lemma \[lem:clockrs\].

The vertices $y^l$ {#app:y}
==================

In this section we show that the states $y^l$ correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first lemma considers the case where $l = j$, while the second lemma considers the case where $l = 1-j$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. The greedy all-switches strategy improvement algorithm will switch $y^j$ to $\chi^{B,{K},j}_{m+1}(y^j)$. The definition of $\chi^{B,{K},j}_{m+1}(y^{j})$ specifies that the edge chosen at $y^{j}$ is defined by $\sigma_{m+\operatorname{Delay}(j, {K})}(y^{j})$, which is given in Equation . According to this definition, we must show that the edge to $r^j$ is the most appealing outgoing edge at $y^j$. This follows immediately from parts \[itm:ccone\] and \[itm:cctwo\] of Lemma \[lem:crossclock\]. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. The greedy all-switches strategy improvement algorithm will switch $y^{1-j}$ to $\chi^{B,{K},j}_{m+1}(y^{1-j})$. According to Equation , we must show that the edge to $r^j$ is the most appealing edge at $y^{1-j}$. This follows immediately from Lemma \[lem:crossclock\].

The vertices $p^l_i$ {#app:p}
====================

In this section we show that the vertices $p^l_i$ correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$.
The first lemma considers the case where $l = j$, while the second lemma considers the case where $l = 1-j$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. For each $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, the greedy all-switches strategy improvement algorithm will switch $p^j_{i}$ to $\chi^{B,{K},j}_{m+1}(p^j_i)$. Since $m$ is in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$, the definition given in Equation  specifies that the edge to $o^j_{{I}(i)}$ should be the most appealing outgoing edge at $p^j_i$. There are two cases to consider. 1. If $m = 1$, then observe that the path that starts at $o^j_{{I}(i)}$ and follows $\sigma$ will eventually reach $s^j$, no matter whether ${I}(i)$ is a [$\textsc{Not}$]{}-gate or an [$\textsc{Or}$]{}-gate. Furthermore, the largest priority on this path is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. On the other hand, the path that starts at $p^j_{i,1}$ moves immediately to $r^{1-j}$, and the largest priority on this path is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply part \[itm:cctwo\] of Lemma \[lem:crossclock\] to argue that $\operatorname{Val}^{\sigma}(p^{j}_{i,1}) \sqsubset \operatorname{Val}^{\sigma}(o^j_{{I}(i)})$, as required. 2. If $m > 1$, then observe that the path that starts at $o^j_{{I}(i)}$ and follows $\sigma$ will eventually reach $r^j$, and again the largest priority on this path is strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply part \[itm:cctwo\] of Lemma \[lem:crossclock\] to argue that $\operatorname{Val}^{\sigma}(p^{j}_{i,1}) \sqsubset \operatorname{Val}^{\sigma}(o^j_{{I}(i)})$, as required.
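Several of the steps above hinge on every priority along a path being strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$, which then lets a clock lemma decide the comparison. Reading $\operatorname{P}$ as a lexicographic encoding of its five arguments makes this kind of dominance mechanical; the following encoding, including the base `N`, is an illustrative assumption rather than the paper's exact definition:

```python
def P(a, b, c, d, e, N=1024):
    """Illustrative lexicographic priority encoding: priorities compare
    first on a, then on b, c, d, e.  N is an assumed bound that must
    exceed every value the last four arguments can take."""
    assert all(0 <= x < N for x in (b, c, d, e))
    return (((a * N + b) * N + c) * N + d) * N + e
```

Under this reading, P(7, 0, 0, 0, 0) exceeds every priority of the form P(5, ...) or P(6, ...), which is exactly the dominance the proofs in these sections rely on.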
Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. For each $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, the greedy all-switches strategy improvement algorithm will switch $p^{1-j}_{i}$ to $\chi^{B,{K},j}_{m+1}(p^{1-j}_i)$. The definition given in Equation  specifies that the edge to $p^{1-j}_{i,1}$ should be the most appealing edge at $p^{1-j}_i$. The path that starts at $o^{1-j}_{{I}(i)}$ and follows $\sigma$ must eventually arrive at either $s^{1-j}$ or $r^{1-j}$. In particular, observe that $r^j$ cannot be reached due to the vertices $q^j_{i, 0}$, which by Lemma \[lem:brqj\] selects the edge towards $q^j_{i, 1}$. On the other hand, the path that starts at $p^{1-j}_{i,1}$ moves directly to $r^j$. Moreover the largest priorities on both of these paths are strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Hence, we can apply Lemmas \[lem:clockrs\] and \[lem:crossclock\] to argue that $\operatorname{Val}^{\sigma}(o^{1-j}_{{I}(i)}) \sqsubset \operatorname{Val}^{\sigma}(p^{1-j}_{i,1})$, as required.

The vertices $h^l_{i,0}$ {#app:h}
========================

In this section we show that the vertices $h^l_{i,0}$ correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first lemma considers the case where $l = j$, while the second lemma considers the case where $l = 1-j$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. For each $i \in {\ensuremath{\textsc{Input/Output}}\xspace}$, the greedy all-switches strategy improvement algorithm will switch $h^j_{i, 0}$ to $\chi^{B,{K},j}_{m+1}(h^j_{i, 0})$. We must show that the edge to $h^j_{i,1}$ is the most appealing edge at $h^{j}_{i,0}$.
Observe that both $h^j_{i,1}$ and $h^j_{i,2}$ move directly to $r^j$ and $r^{1-j}$, respectively. Moreover, the priorities assigned to these vertices are strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. Since $h^j_{i,1}$ moves to $r^{j}$, we can apply part \[itm:cctwo\] of Lemma \[lem:crossclock\] to prove that $h^j_{i,1}$ is the most appealing edge at $h^j_{i, 0}$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and some $m$ in the range $1 \le m \le \operatorname{Delay}(j, {K}) - 1$. The greedy all-switches strategy improvement algorithm will switch $h^{1-j}_{i, 0}$ to $\chi^{B,{K},j}_{m+1}(h^{1-j}_{i,0})$. We must show that the most appealing edge at $h^{1-j}_{i,0}$ is the one specified by Equation . Observe that both $h^{1-j}_{i,1}$ and $h^{1-j}_{i,2}$ move directly to $r^{1-j}$ and $r^{j}$, respectively. Moreover, the priorities assigned to these vertices are strictly smaller than $\operatorname{P}(7, 0, 0, 0, 0)$. We will use these facts in order to apply Lemma \[lem:crossclock\] in the following case analysis. According to Equation , there are two cases to consider. Since $h^{1-j}_{i,2}$ moves to $r^{j}$, we can apply part \[itm:cctwo\] of Lemma \[lem:crossclock\] to prove that $h^{1-j}_{i,2}$ is the most appealing edge at $h^{1-j}_{i, 0}$.

Input/output gates {#app:input}
==================

In this section we show that the vertices in the input/output gadgets correctly switch to the outgoing edge specified by $\chi^{B,{K},j}_{m+1}$. The first two lemmas deal with the case where the input/output gadget for circuit $j$ resets. Note that this occurs one iteration later than the rest of the vertices in circuit $j$, which is why we prove separate lemmas for this case. Otherwise, the input/output gadgets behave as if they are [$\textsc{Not}$]{}-gates, so the proofs that we have already given for the [$\textsc{Not}$]{}-gates can be applied with only minor changes.
This is formalized in the final two lemmas of this section. The first of these lemmas considers the input/output gadgets in circuit $j$ and the second considers the input/output gadgets in circuit $1-j$. \[lem:tinput1\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m = 1$. For each [$\textsc{Input/Output}$]{}-gate $i$, and each $l$ in the range $1 \le l < {\ensuremath{2k + 4n + 6}\xspace}$, greedy all-switches strategy improvement will switch $t^{j}_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^{j}_{i,l})$. We must show that the edge to $z^j$ is the most appealing edge at $t^j_{i, l}$. All paths that start at $t^j_{i, l}$ and follow $\sigma$ will eventually arrive at $r^{1-j}$, either via $y^j$, or via $p^j_i$. On the other hand, the path that starts at $z^j$ moves directly to $s^j$. Therefore, part \[itm:cctwo\] of Lemma \[lem:crossclock\] implies that the edge to $z^j$ is the most appealing edge at $t^j_{i, l}$. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m = 1$. For each [$\textsc{Input/Output}$]{}-gate $i$, greedy all-switches strategy improvement will switch $d^{j}_{i}$ to $\chi^{B,{K},j}_{m+1}(d^{j}_{i})$. This proof uses the same argument as the proof of Lemma \[lem:tinput1\], because all paths that start at $d^j_i$ will eventually arrive at $r^{1-j}$. \[lem:inpj\] Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m$ in the range $2 \le m \le \operatorname{Delay}(j, {K}) - 1$. For each [$\textsc{Input/Output}$]{}-gate $i$, and each $l$ in the range $1 \le l < {\ensuremath{2k + 4n + 6}\xspace}$, greedy all-switches strategy improvement will switch $t^{j}_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^{j}_{i,l})$ and $d^{j}_{i}$ to $\chi^{B,{K},j}_{m+1}(d^{j}_{i})$.
Since $m \ge 2$, we have that $\sigma(y^j) = r^j$ and $\sigma(z^j) = s^j$. Hence, both $t^{j}_{i,l}$ and $d^j_i$ behave in exactly the same way as the states $t^{j}_{i',l}$ and $d^j_{i'}$ for $i' \in {\ensuremath{\textsc{Not}}\xspace}$, with the exception that these states are one step behind the [$\textsc{Not}$]{}-gate vertices, due to the delay introduced by $y^j$. Note, however, that this is accounted for by placing the edge to $p^j_i$ on $t^j_{i, d(C)}$, rather than $t^j_{i, d(C)+1}$, as would be expected for a [$\textsc{Not}$]{}-gate with depth $d(C) + 1$. Therefore, to prove this lemma, we can use exactly the same reasoning as we gave for Lemmas \[lem:nottsmall\], \[lem:nottlarge\], \[lem:notm1\], \[lem:notm2\], \[lem:dbase\], \[lem:notd0\], and \[lem:notd1\]. This is because all of the reasoning used there is done relative to $r^j$ and $s^j$, and since $y^j$ and $z^j$ have insignificant priorities, none of this reasoning changes. Let $\sigma$ be a strategy that agrees with $\chi^{B, {K}, j}_m$ for some ${K}, B \in \{0, 1\}^n$, some $j \in \{0, 1\}^n$, and for $m$ in the range $2 \le m < \operatorname{Delay}(j, {K}) - 1$. For each [$\textsc{Input/Output}$]{}-gate $i$, and each $l$ in the range $1 \le l < {\ensuremath{2k + 4n + 6}\xspace}$, greedy all-switches strategy improvement will switch $t^{1-j}_{i,l}$ to $\chi^{B,{K},j}_{m+1}(t^{1-j}_{i,l})$ and $d^{1-j}_{i}$ to $\chi^{B,{K},j}_{m+1}(d^{1-j}_{i})$. Much like the proof of Lemma \[lem:inpj\], this claim can be proved using essentially the same reasoning as was given for Lemmas \[lem:nottsmall\], \[lem:nottlarge\], \[lem:notm1\], \[lem:notm2\], \[lem:dbase\], \[lem:notd0\], and \[lem:notd1\], because an [$\textsc{Input/Output}$]{}-gate behaves exactly like a [$\textsc{Not}$]{}-gate.
In particular, note that since we have defined $\chi^{B, {K}, 0}_{\operatorname{Delay}(0,{K})} = \chi^{B, {K}, 1}_1$ and $\chi^{B, {K}, 1}_{\operatorname{Delay}(1,{K})} = \chi^{B, {K}+1, 0}_1$, the gate gadgets in circuit $1-j$ continue to have well-defined strategies, so the $d^{1-j}_i$ will continue to act like a [$\textsc{Not}$]{}-gate between iterations $1$ and $2$. The one point that we must pay attention to is that, in the transition between $\chi^{B,{K},j}_{1}$ and $\chi^{B,{K},j}_{2}$, the state $y^{1-j}$ switches from $r^{1-j}$ to $r^{j}$. Note, however, that $h^j_{i, 0}$ and $p^j_i$ both switch to a vertex that eventually leads to $r^j$ at exactly the same time, so all paths that exit the gadget will switch from $r^{1-j}$ to $r^j$. Since almost all of the reasoning in the above lemmas is done relative to $r^j$, the relative orders over the edge appeals cannot change. The only arguments that must be changed are the ones that depend on Lemma \[lem:oval\]. Here, the fact that the priority $\operatorname{P}(5, i, 2k+4n+5, j, 0)$ is assigned to $p^{1-j}_{i,1}$ is sufficient to ensure that $\operatorname{Val}^{\sigma}(r^j) \sqsubset \operatorname{Val}^{\sigma}(t^{1-j}_{i, d(C)})$, so the deceleration lane will continue switching. On the other hand, since $\operatorname{P}(5, i, 2k+4n+5, j, 0) < \operatorname{P}(7, 0, 0, 0, 0)$, this priority is not large enough to cause $d^j_i$ to switch away from $e^j_i$, if $B_i = 1$.
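The switching rule invoked throughout these appendices — greedy all-switches strategy improvement, in which every controlled vertex simultaneously moves to its most appealing outgoing edge — can be sketched generically. This is an illustrative sketch only: the `appeal` callback is a hypothetical stand-in for the valuation-based ordering $\operatorname{Val}^{\sigma}$ used in the proofs, not the specific construction analysed above.

```python
# Generic sketch of one greedy all-switches improvement step.
# `appeal(v, w)` is a hypothetical stand-in for the edge-appeal order
# induced by the current valuation Val^sigma.

def all_switches_step(edges, appeal):
    """Return a new strategy choosing, at every vertex, the most
    appealing outgoing edge.

    edges:  dict mapping each vertex to its list of successors
    appeal: function (vertex, successor) -> comparable score
    """
    return {v: max(succs, key=lambda w: appeal(v, w))
            for v, succs in edges.items()}
```

Iterating this step until the strategy is a fixed point is the all-switches variant of strategy improvement; the lower-bound construction studied here controls precisely which edges are "most appealing" at each iteration.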
--- author: - 'V. Holzwarth' - 'M. Jardine' bibliography: - '6486.bib' date: 'Received ; accepted' title: 'Theoretical mass loss rates of cool main-sequence stars' --- Introduction {#intro} ============ Solar-like stars with hot coronae are expected to lose mass in the form of stellar winds [@1960ApJ...132..821P]. In contrast to the P Cygni line profiles of the massive winds of hot stars and young T Tauri stars [e.g. @2005ApJ...625L.131D], the tenuous and highly ionised outflows of cool stars yield no detectable radiative signatures and cannot be diagnosed directly. @2002ApJ...574..412W devised an indirect method to deduce mass loss rates of cool stars from specific properties of the astrospheres blown into the ambient interstellar medium by their stellar wind (see also @1998ApJ...492..788W and @2004LRSP....1....2W). Hot neutral hydrogen in the heliosphere, mainly in the form of a wall between the heliopause and the bow shock, leads to the formation of a red-shifted absorption feature in stellar Ly$\alpha$ emission lines, whereas a corresponding structure in the astrosphere around a star causes a specific blue-shifted feature. The size and absorption capabilities of the astrosphere depend on the strength of the stellar wind as well as on the properties (i.e. density, relative velocity) of the ambient interstellar medium (ISM). By fitting synthetic absorption features, obtained from hydrodynamical simulations of the wind-ISM interaction [e.g. @1996JGR...10121639Z; @2001ApJ...551..495M], to observed line profiles, @2002ApJ...574..412W [@2005ApJ...628L.143W] estimate the wind ram pressures and mass loss rates of cool stars in the solar neighbourhood. Owing to the requirement of accurate ISM properties, which are only available for the solar vicinity, the sample of observed stars is currently small and rather heterogeneous, comprising single and binary stars of different spectral types and luminosity classes.
For G and K main-sequence stars with X-ray fluxes $F_\mathrm{X}\lesssim 8\cdot10^5\,{\rm erg\cdot s^{-1}\cdot cm^{-2}}$, @2005ApJ...628L.143W find a correlation between the coronal activity and the deduced mass loss rate per stellar surface area, $\dot{M}/R^2\propto F_\mathrm{X}^{1.34\pm0.18}$. Since the more active stars of the sub-sample have mass loss rates lower than predicted by this power-law relation, @2005ApJ...628L.143W speculate that this may be due to a change of the stellar magnetic field topology. The $\dot{M}-F_\mathrm{X}$-relationships of @2002ApJ...574..412W [@2005ApJ...628L.143W] have been applied to the rotational evolution of cool stars and to their wind interaction with extra-solar planets. @2002ApJ...574..412W extrapolate the mass loss history of solar-like stars backward in time and suggest that the mass loss rate of the young Sun may have been up to three orders of magnitude higher than today. Considering the ‘faint young Sun’ paradox, that is the apparently unchanged planetary temperatures despite a Sun that was about $25\%$ less luminous $3.8\,{\rm Gyr}$ ago [e.g. @2000GeoRL..27..501G; @2003JGRA.108l.SSH3S and references therein], they find that the cumulated mass loss of about $0.03\,{\rm M_{\sun}}$ cannot account for the $\sim 10\%$ difference in solar mass, which has been suggested to solve this problem in terms of a more luminous, higher-mass young Sun. investigate the particle and thermal losses from the upper atmospheres of ‘Hot Jupiters’ and find that high stellar mass loss rates have a significant influence on the evolution of the planetary mass and radius. The strength of stellar winds also affects the detectability of extra-solar giant planets in radio wavelengths, since the emission scales with the kinetic and magnetic energy flux on the planetary magnetosphere.
Owing to the high predicted stellar wind ram pressures, the magnetospheric radio emission levels of close-by planets are expected to be sufficiently high to become detectable with future instruments. @2002ApJ...574..412W [@2005ApJ...628L.143W] use scaled versions of the solar wind to match observed and simulated line profiles. With their principal quantity being the wind ram pressure, they presume the terminal wind velocity of all stars to be solar-like, so that the wind density is the only free fitting parameter. The presumption of thermally driven winds with a unique terminal wind velocity neglects, however, the influence of the magnetic field on the acceleration and structuring of the outflow. Coronal activity signatures and the magnetic braking of cool main-sequence stars are indicative of surface magnetic flux, which is generated by dynamo processes within the outer convection zone. The magnetic field gives rise to the formation of hot X-ray emitting coronal loops as well as to the acceleration of plasma escaping along open field lines, and thus links ab initio the two quantities in the power law suggested by @2005ApJ...628L.143W. We consider the impact of magnetic fields in more detail by determining wind ram pressures and mass loss rates in the framework of a magnetised wind model. Using the mass loss rates inferred by @2002ApJ...574..412W [@2005ApJ...628L.143W] as empirical constraints for possible wind scenarios, our working hypothesis is that the magnetic and thermal wind properties of cool stars primarily depend on the stellar rotation rate. In Sect. \[moco\], we describe the magnetised wind model, the principal quantities of the investigation, and the observational constraints of the model parameters. Section \[resu\] comprises the analyses of characteristic wind properties resulting from different wind scenarios, and the comparisons of theoretical and empirical wind ram pressures and mass loss rates. In Sect.
\[disc\], we discuss our results and their consequences for the evolution of stellar rotation and mass loss and for the detectability of extra-solar planetary magnetospheres. Our conclusions are summarised in Sect. \[conc\]. Model considerations {#moco} ==================== Magnetised wind model {#mawimo} --------------------- We consider main-sequence stars with masses $0.2\,{\rm M_\odot}\le M\le 1.2\,{\rm M_\odot}$, radii $R\propto M^{0.8}$, and rotation rates $0.6\,{\rm \Omega_\odot}\le \Omega\le 11.3\,{\rm \Omega_\odot}$; with the solar rotation rate $\Omega_\odot= 2.8\cdot10^{-6}\,{\rm s^{-1}}$ the stellar rotation periods are between $2.3\,{\rm d}$ and $43\,{\rm d}$. The winds are determined in the framework of the magnetised wind model of @1967ApJ...148..217W. The properties of stationary, axisymmetric, and polytropic outflows are specified through the wind temperature, $T_0$, the wind density, $\rho_0$, and the radial magnetic field strength, $B_0$, at a reference level, $r_0= 1.1\,{\rm R}$, close to the stellar surface. The model solutions provide the flow velocity, $v_\mathrm{r,A}$, and the density, $\rho_\mathrm{A}$, of the outflowing plasma at the Alfvénic radius, $r_\mathrm{A}$, where the flow velocity equals the local Alfvén velocity. The location of and the conditions at the Alfvénic point specify the wind structure along a magnetic field line as well as the mass loss rate [see, e.g., @1999isw..book.....L; @1999stma.book.....M]. We assume that the entire stellar surface contributes to the wind. Closed loop-like magnetic field structures reduce the effective surface area from which outflows can emanate and also influence the flow structure in adjacent wind zones [e.g. @1968MNRAS.138..359M; @1974SoPh...34..231P; @1988ApJ...333..236K]. The different magnetic field topologies encountered during a solar activity cycle entail, for example, mass loss variations within a factor of 2 [@1998csss...10..131W].
Closed-field regions are however limited to a few stellar/solar radii above the surface, well below the Alfvénic surface [e.g. @2002MNRAS.333..339J; @2002MNRAS.336.1364J]. We expect that small-scale spatial and short-term temporal inhomogeneities are averaged out with increasing distance from the star and thus irrelevant for the sustainment of astrospheres. ‘Characteristic’ wind ram pressure {#cwrp} ---------------------------------- The mass loss rate along a slender open magnetic flux tube is $$d \dot{M} = F_\mathrm{m} d\sigma \ , \label{defmalr}$$ where $d\sigma= \sin \theta\, d\theta\, d\phi$ is the solid angle occupied by the flux tube. The mass flux per solid angle, $F_\mathrm{m}= \rho v_r r^2= \rho_\mathrm{A} v_{r,{\rm A}} r_\mathrm{A}^2$, is constant along an individual tube. The rate of the momentum transport associated with the mass flux, $$d \dot{M} v_r = \rho v_r^2 r^2 d\sigma = p_\mathrm{w} d S \ , \label{defmolr}$$ is equivalent to the force which results from the wind ram pressure, $p_\mathrm{w}= \rho v_r^2$, exerted on the local tube cross section, $d S= r^2 d\sigma$. At large distances from the star the outflow velocity converges to the constant terminal velocity, $v_\infty$. In contrast to the wind ram pressure, which decreases $\propto r^{-2}$ (Fig.\[pprofiles.fig\]), the ram force per solid angle, $$\frac{d\mathcal{F}_\mathrm{w}}{d\sigma}= p_\mathrm{w} r^2= F_\mathrm{m} v_\infty= \mathrm{const.} \ , \label{defrafo}$$ is independent of the radius. ![Radial profiles of the ram pressure, $p_\mathrm{w}$, the magnetic pressure, $p_\mathrm{m}$, and the thermal gas pressure, $p_\mathrm{g}$, of magnetised winds in the equatorial plane. For solar wind parameters (*thick lines*), $T_0= 2.93\cdot10^6\,{\rm K}, n_0= 2.76\cdot10^6\,{\rm cm^{-3}}$, and $B_0= 3\,{\rm G}$, the magnetic and thermal pressures at the heliospheric termination shock (*dotted line*) are over two orders of magnitude smaller than the wind ram pressure. 
*Thin lines* show the pressure profiles of a fast magnetic rotator with a rotation period of six days and wind parameters $T_0= 3.4\cdot10^6\,{\rm K}, n_0= 6.67\cdot10^6\,{\rm cm^{-3}}, B_0= 243\,{\rm G}$. At large distances from the star the wind ram pressure decreases $\propto r^{-2}$. Note that it is not the magnetic pressure which gives rise to the magneto-centrifugal acceleration of the wind, but the stellar rotation in conjunction with the plasma outflow along bent magnetic field lines.[]{data-label="pprofiles.fig"}](6486f1){width="\hsize"} This quantity comprises both the wind density and (terminal) wind velocity, and enables a characterisation of the mass loss rate of individual stars. In the case of spherically symmetric outflows, for which $d\sigma= 4\pi$, the ram force is $\mathcal{F}_\mathrm{w}= 4 \pi F_\mathrm{m} v_\infty= \dot{M} v_\infty$. To compare wind properties of stars with different stellar radii, we use the ram force per stellar surface area, $$\mathcal{P} = \frac{\mathcal{F}_\mathrm{w}}{4\pi R^2} = \frac{\dot{M} v_\infty}{4\pi R^2} = \frac{F_\mathrm{m} v_\infty}{R^2} \ , \label{defcwrp}$$ as the principal quantity of our investigation and shall refer to it as the *characteristic* wind ram pressure (CWRP). Empirical wind ram pressures {#obco} ---------------------------- In their analysis of astrospherical absorption features, @2002ApJ...574..412W [@2005ApJ...628L.143W] adopt for all stars in their sample solar-like winds with a unique terminal velocity of $400\,{\rm km\cdot s^{-1}}$. 
This assumption implies that the relative CWRP (in solar units), $$\frac{\mathcal{P}}{\mathcal{P}_\odot} = \frac{\dot{M}}{\dot{M}_\odot} \frac{v_\infty}{v_{\infty,\odot}} \left( \frac{R}{R_\odot} \right)^{-2} \ , \label{defrelcwrp}$$ is equivalent to the stellar mass loss rate per surface area: $$\left( \frac{\mathcal{P}}{\mathcal{P}_\odot} \right)_\mathrm{W} = \left( \frac{\dot{M}}{\dot{M}_\odot} \right)_\mathrm{W} \left( \frac{R}{R_\odot} \right)^{-2} \quad \textrm{for $v_\infty= v_{\infty,\odot}$} \ . \label{cwrpwood}$$ In conjunction with individual stellar radii, the empirical mass loss rates, $\dot{M}_\mathrm{W}$, given by @2002ApJ...574..412W [@2005ApJ...628L.143W], provide the observational constraints for our theoretical CWRPs. star sp.type $P\,{\rm [d]}$ $\dot{M}\,{\rm [\dot{M}_\odot]}$ $\log L_X$ $A\,{\rm [A_\odot]}$ ------ --------- ------------------ ---------------------------------- ------------ ---------------------- M5.5 41.6 $<0.2$ 27.22 0.023 G2 / K0 29 / 43 2 27.70 2.22 K2 11.7 30 28.32 0.61 K5 35.4 0.5 27.45 0.46 K4.5 22 0.5 27.39 0.56 K1 / K1 20.7 / 22.9 15 28.34 0.88 M3.5 4.38 1 28.99 0.123 K0 / K5 19.7 / 22.9 100 28.49 1.32 G8 / K4 6.31 / 11.94$^a$ 5 28.90 1.0 G5 32.7$^b$ 0.3 26.87 1.0 G2 26 1 27.30 1 : Properties of cool main-sequence stars with observed mass loss rates, taken from @2005ApJ...628L.143W [@2005ApJS..159..118W]; $^a$ from @1996ApJ...466..384D; $^b$ from @1984ApJ...279..763N. \[targets\] Rotation-dependent wind parameters {#roar} ---------------------------------- The activity levels of cool stars increase with the stellar rotation rate [e.g. @1984ApJ...279..763N], comprising enhanced thermal and magnetic field values inside the coronae. We take this aspect into account by assuming that the wind parameters follow power-law relations, which are based on the solar reference case. 
The rates of increase of the thermal wind parameters[^1] $$T_0= T_{0,\odot} \left( \frac{\Omega}{\Omega_\odot} \right)^{n_T} \quad \textrm{and} \quad n_0= n_{0,\odot} \left( \frac{\Omega}{\Omega_\odot} \right)^{n_n} \label{trhopowlaws}$$ at the reference level are specified through the power-law indices $n_T$ and $n_n$, respectively. Polarimetric observations indicate that the magnetic flux depends on the stellar rotation rate [e.g. @1991LNP...380..389S]. Taking different stellar radii (here, $R\propto M^{0.8}$) into account, the magnetic field strengths are taken to follow the power-law relation $$B_0= B_{0,\odot} \left( \frac{M}{M_\odot} \right)^{-1.6} \left( \frac{\Omega}{\Omega_\odot} \right)^{n_\Phi} \ , \label{bpowlaws}$$ which implies that for a given stellar rotation rate lower-mass stars have higher average field strengths. The assumption of polytropic magnetised winds whose wind parameters follow simple power-laws may be simplistic, but in view of the currently limited observational constraints a more sophisticated model appears inappropriate. ### Observational constraints Since the thermal wind properties of cool stars cannot be observed directly, we follow the hypothetical assumption that closed coronal magnetic field regions may serve as proxies to constrain the increase of the temperature and density with the stellar rotation rate. Analysing the dependence of stellar X-ray luminosities on the stellar rotation rate, @2003ApJ...599..516I infer a density power-law index $n_n\approx 0.6$, which implies for stars rotating ten times faster than the Sun (i.e. close to the X-ray saturation limit) about four times higher coronal densities. Differential emission measures of rapidly rotating stars indicate a large fraction of plasma with temperatures of $\sim 10^7\,{\rm K}$, in addition to solar-like coronal plasma components with temperatures $\gtrsim 10^6\,{\rm K}$ [e.g. @2003SSRv..108..577F and references therein].
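Read as code, the scaling relations above take the following form. The solar base values and the index choices ($n_T = 0.1$, $n_n = 0.6$, $n_\Phi = 1$, the reference scenario discussed below) are taken from the text; the function itself is only an illustrative sketch, not the authors' model code.

```python
# Sketch of the rotation-dependent coronal base parameters T_0, n_0
# (thermal) and B_0 (magnetic). Base values are the solar reference
# values quoted in the text; the power-law indices are free parameters.

T0_SUN = 2.93e6   # K
N0_SUN = 2.76e6   # cm^-3
B0_SUN = 3.0      # G

def wind_parameters(omega_rel, mass_rel, n_T=0.1, n_n=0.6, n_phi=1.0):
    """Coronal base values for rotation rate and mass in solar units."""
    T0 = T0_SUN * omega_rel ** n_T
    n0 = N0_SUN * omega_rel ** n_n
    # lower-mass stars (R ~ M^0.8) get higher average field strengths
    B0 = B0_SUN * mass_rel ** -1.6 * omega_rel ** n_phi
    return T0, n0, B0
```

With $n_n = 0.6$, a star rotating ten times faster than the Sun has roughly four times the solar coronal base density, consistent with the constraint quoted above.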
A coronal temperature of $10\,{\rm MK}$ for a rapidly rotating star like AB Dor ($\Omega\simeq 50\,{\rm \Omega_\odot}$) would imply a temperature power-law index of $n_T\simeq 0.5$. Following the solar paradigm, it is more likely that the temperatures of stationary stellar winds are characterised by the low-temperature plasma component, implying values $0\lesssim n_T< 0.5$. The increase of the magnetic flux with the stellar rotation rate is constrained through direct magnetic flux measurements [e.g. @1991LNP...380..389S; @2001ASPC..223..292S and references therein] and, indirectly, through empirical activity-rotation-age relations. Due to the braking effect of magnetised winds, solar-like single stars spin down during the course of their main-sequence evolution. In conjunction with the @1967ApJ...148..217W-wind model, empirical spin-down timescales [e.g. @1972ApJ...171..565S] imply a linear relationship between the stellar magnetic flux and the rotation rate, but sub-linear values cannot be ruled out either, owing to the impact of non-uniform surface magnetic field distributions. In contrast, @2001ASPC..223..292S suggests a super-linear dependence, with a power-law index of $n_\Phi= 1.2$, based on polarimetric observations of K and M dwarfs. Depending on the underlying stellar sample, higher values may be possible. Excluding rapid rotators in the saturated regime, @2003ApJ...590..493S, for example, suggest the value $n_\Phi= 2.8\pm0.3$. Solar reference case {#src} -------------------- For our model to reproduce solar wind conditions observed at Earth orbit ($v_r\simeq 400\,{\rm km\cdot s^{-1}}, n\footnote{The average proton density in the slow solar wind is about $7\,{\rm cm^{-3}}$. Owing to the charge-neutrality of the plasma, we double this value to account for the free electrons.
This is consistent with the mean molecular weight $\mu= 0.5$} \simeq 14\,{\rm cm^{-3}}, T\simeq 2\cdot10^5\,{\rm K}, B\simeq 5\cdot10^{-5}\,{\rm G}$), the coronal values have to be $T_{0,\odot}= 2.93\cdot10^6\,{\rm K}, n_{0,\odot}= 2.76\cdot10^6\,{\rm cm^{-3}}, B_{0,\odot}= 3\,{\rm G}$, the polytropic index $\Gamma= 1.22$, and the mean molecular weight $\mu= 0.5$ . The resulting mass flux is $F_\mathrm{m,\odot}= 1.04\cdot10^{11}\,{\rm g\cdot s^{-1}\cdot sr^{-1}}$ and the solar mass loss rate $\dot{M}_\odot= 4 \pi F_\mathrm{m,\odot}= 1.31\cdot10^{12}\,{\rm g\cdot s^{-1}}= 2.07\cdot10^{-14}\,{\rm M_\odot\cdot yr^{-1}}$. With increasing distance from the Sun, the wind velocity converges to the value $v_{\infty,\odot}= 443\,{\rm km\cdot s^{-1}}$, which yields the constant wind ram force per solid angle $\mathcal{F}_\mathrm{w,\odot}/(4\pi)= 4.60\cdot10^{18}\,{\rm dyn\cdot sr^{-1}}$. With the solar radius $R_\odot= 6.96\cdot10^{10}\,{\rm cm}$, the CWRP of the Sun is $\mathcal{P}_\odot= 9.51\cdot10^{-4}\,{\rm dyn\cdot cm^{-2}}$. The interaction with the local interstellar medium confines the expansion of the solar wind within a system of shock fronts, wherein velocities are braked down to sub-sonic values. At the distance of the heliospheric termination shock ($\approx 94\,{\rm AU}$, @2005Sci...309.2017S) the calculated solar wind has virtually obtained the terminal velocity ($437\,{\rm km\cdot s^{-1}}= 0.987\cdot v_{\infty,\odot}$) and exerts a ram pressure ($p_\mathrm{w,TS} \simeq 2\cdot10^{-12}\,{\rm dyn\cdot cm^{-2}}$, cf.Fig. \[pprofiles.fig\]), which is in agreement with the canonical value of the pressure of the local interstellar medium [e.g. @1986AdSpR...6...27A]. Results {#resu} ======= Comparison of different wind scenarios {#comp} -------------------------------------- An increase of thermal and/or magnetic wind parameters with the stellar rotation rate increases the CWRPs (Fig. \[mlmomwd.fig\]). 
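The solar reference quantities quoted above are mutually consistent and can be reproduced in a few lines. This is only a numerical cross-check of the CWRP definition with the quoted solar values (standard solar radius assumed), not part of the wind model itself.

```python
import math

# Cross-check of the quoted solar reference quantities.
F_M_SUN = 1.04e11       # g s^-1 sr^-1, solar mass flux per solid angle
V_INF_SUN = 443e5       # cm s^-1, solar terminal wind velocity
V_INF_SUN = 443e5
R_SUN = 6.96e10         # cm, solar radius

# Mass loss rate, Mdot = 4*pi*F_m, and CWRP, P = Mdot*v_inf/(4*pi*R^2).
Mdot_sun = 4.0 * math.pi * F_M_SUN                         # ~1.31e12 g/s
P_sun = Mdot_sun * V_INF_SUN / (4.0 * math.pi * R_SUN**2)  # ~9.51e-4 dyn/cm^2
```

Both results agree with the values quoted in the text to within rounding.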
![Characteristic wind ram pressure (in solar units) of $1\,{\rm M_{\sun}}$ main-sequence stars subject to different wind scenarios (i.e. sets of power-law indices $n_T, n_n$, and $n_\Phi$). *Crosses* along the curves mark the transition between the regimes of slow and fast magnetic rotators, and *symbols* show the values inferred by @2005ApJ...628L.143W for main-sequence (single and binary) stars of different spectral types. []{data-label="mlmomwd.fig"}](6486f2){width="\hsize"} The character of the increase depends on whether the star is a *slow* or a *fast magnetic rotator*[^2]. The winds of slow magnetic rotators are driven by thermal pressure gradients and gain energy from the enthalpy of the hot plasma. In the regime of fast magnetic rotators, stellar winds are predominantly accelerated by magneto-centrifugal driving. The outflowing plasma tries to conserve its angular momentum but is forced into faster rotation by the tension force of the bent magnetic field lines. This slingshot effect gives rise to a Poynting flux, which transfers energy from the stellar rotation into the wind. In the regime of slow magnetic rotators, the terminal wind velocities are comparable to the solar-like surface escape velocities of main-sequence stars (Fig. \[fmtv.fig\]). ![Terminal velocities (*top*) and mass fluxes (*bottom*) of winds of $1\,{\rm M_{\sun}}$ main-sequence stars subject to different wind scenarios. In the cases of constant ($n_\Phi= 0$, *dashed dotted*) and strongly increasing ($n_\Phi= 3$, *short dashed*) dynamo efficiencies the mass fluxes are identical with those of the reference case (*solid*), since they are based on the same thermal wind parameters. However, the individual transitions between the slow and fast magnetic rotator regimes (*crosses*) take place at different rotation rates. 
[]{data-label="fmtv.fig"}](6486f3){width="\hsize"} Except in the case of high wind temperatures, the CWRPs only depend on the mass flux, which scales almost linearly with the wind density. Higher wind temperatures entail higher mass fluxes and CWRPs, yet the increase is limited by the basic requirement of subsonic coronal flow velocities. The terminal wind velocities of fast magnetic rotators are larger than the solar value. The mass flux, however, is still determined by the thermal wind parameters. Owing to its intrinsic dependence on the stellar rotation, the dominant magneto-centrifugal driving causes an increase of the terminal wind velocity and CWRP with the stellar rotation rate even if the wind parameters are constant. The transition between the slow and the fast magnetic rotator regimes depends on the relative contribution of thermal and magneto-centrifugal driving to the overall acceleration of the wind. The higher the dynamo efficiency, $n_\Phi$, the stronger the magneto-centrifugal driving, and the lower the rotation rate of the transition between the two regimes. Higher wind temperatures increase the thermal driving and shift the transition to higher rotation rates. We have compared self-consistently determined CWRPs of different wind scenarios with the observational constraints (Table \[targets\]). At present, the data set for main-sequence stars consists of three groups: five slowly rotating targets with solar-like CWRPs (, , , , ), three moderately rotating targets of spectral type K with CWRPs more than ten times the solar value (, , ), and two rapidly rotating targets (, ) with 5-10 times larger CWRPs than the Sun; the value for is an upper limit. We first focus on the slowly and rapidly rotating targets and consider the group of moderately rotating K dwarfs in Sect. \[kdwarfs\]. 
Assuming a linear dynamo efficiency, $n_\Phi= 1$, different thermal wind scenarios are (within the given observational error margins) consistent with the constraints set by the slowly and rapidly rotating stars (Fig. \[mlmomwd.fig\]). Rotation-independent thermal wind properties appear less likely, since the resulting constant CWRPs at small rotation rates are inconsistent with the general trend of CWRPs increasing with stellar rotation. Good agreement results from moderately increasing thermal wind parameters, here $n_T= 0.1$ and $n_n= 0.6$ (reference case), but the empirical constraints do not conclusively exclude the scenario either, in which the wind temperature follows the high-temperature coronal plasma component (i.e. $n_T= 0.5$). In the latter scenario, the strong increase of CWRPs at low rotation rates, which then becomes flatter for higher rotation rates, appears promising to account for the high values of the group of K dwarfs. But we find that even for $n_T\gg 0.5$ the CWRPs are not sufficiently high. The only scenario consistent with these targets is based on a very strong increase of the coronal density with the stellar rotation rate. For stars close to the X-ray saturation limit, rotating about ten times faster than the Sun, values of $n_n\gtrsim 5$ imply very high wind densities and CWRPs which are inconsistent with the lower values inferred for the rapidly rotating targets. The scenario of wind ram pressures scaling exclusively with the wind density corresponds to the approach of @2002ApJ...574..412W. Retaining the thermal power-law indices $n_T= 0.1$ and $n_n= 0.6$, the theoretical CWRPs are consistent with the constraints set by the slowly and rapidly rotating targets for dynamo efficiencies $0< n_\Phi\lesssim 1.5$. For $n_\Phi< 1$ all stars considered here are slow magnetic rotators, whereas for super-linear dynamo efficiencies the two rapidly rotating targets, and , are in the fast magnetic rotator regime. 
A rather high but previously suggested dynamo efficiency of $n_\Phi\sim 3$ locates the transition between slow and fast magnetic rotators at rotation rates similar to those of the K dwarf group. Yet due to their moderate rotation, the magneto-centrifugal driving is too inefficient and cannot produce CWRPs sufficiently high to match the empirical values. None of the scenarios above is capable of accounting for all of the empirical constraints, and all but one scenario are incapable of matching the high CWRPs of the group of moderately rotating K dwarfs. With highest values occurring at intermediate rotation rates, it is unlikely that the wind ram pressures of the three groups can be described using simple power-law relations for the thermal and magnetic wind properties. The group of moderately rotating K dwarfs shows high CWRPs, which would require a sudden change of wind properties with the stellar rotation rate. But assuming a continuous dependency makes it difficult to yield lower CWRPs at higher stellar rotation rates, as suggested by and . In contrast, a weak dependence on the stellar rotation rate makes it possible to match the values of the rapidly rotating targets, but cannot account for the high CWRPs of the K dwarf group. This suggests that either the group of rapidly rotating targets or the group of moderately rotating K dwarfs is peculiar in terms of stellar wind properties. The group of moderately rotating K dwarfs {#kdwarfs} ----------------------------------------- In the framework of our wind model with rotation-dependent thermal and magnetic wind parameters, the CWRPs of the three moderately rotating targets ($\mathcal{P}/\mathcal{P}_\odot= 17$), ($\mathcal{P}/\mathcal{P}_\odot= 75$), and ($\mathcal{P}/\mathcal{P}_\odot= 49$), could only be accounted for by a strong increase of wind densities with stellar rotation rate. For otherwise solar (i.e.
reference) wind parameters, the wind densities required to produce the empirical CWRPs are about 25 times (36 Oph), 82 times (70 Oph), and 65 times the solar value. Yet, according to analyses of recent X-ray observations [@2006ApJ...643..444W], the coronae of the three targets are solar-like as far as densities are concerned. We investigate what other wind conditions may cause high CWRPs, and whether these are consistent with observations. Regarding their rather slow rotation, the three targets have unexpectedly high surface-averaged magnetic field strengths: $ (0) - 500\,{\rm G}$ for [@1984ApJ...276..286M; @1997MNRAS.284..803S], $180-550\,{\rm G}$ for [@1980ApJ...236L.155R; @1984ApJ...276..286M], and $150-600\,{\rm G}$ for . The lower values are considered to be more reliable, but owing to magnetic filling factors smaller than one, peak field strengths may locally reach $1-3\,{\rm kG}$. We determine CWRPs for the three targets as functions of both the wind temperature at the reference level and the polytropic index. The wind density at the reference level is taken to be $\rho_0= 2.76\cdot10^6\,{\rm cm^{-3}}$, whereas individual stellar rotation rates, radii, and masses are taken from Table \[targets\], @2005ApJS..159..118W, and the approximation $M\propto R^{5/4}$ (cf. Sect. \[mawimo\]), respectively. The calculations are carried out for the lower and upper limits of each magnetic field strength range given above (Fig. \[kdwarfs.fig\]). ![Theoretical characteristic wind ram pressures (in solar units) of the moderately rotating K dwarf targets as function of the wind temperature, $T_0$, at the coronal base and the polytropic index, $\Gamma$; the *cross* marks solar reference values. The magnetic field strengths, $B_0$ (*labels*), are lower and upper limits of observed field strength ranges.
The stellar model parameters are for : $M= 0.63\,{\rm M_{\sun}}, r_0= 0.76\,{\rm R_{\sun}}, \Omega= 3.5\cdot10^{-6}\,{\rm s^{-1}}$; : $M= 0.8\,{\rm M_{\sun}}, r_0= 0.94\,{\rm R_{\sun}}, \Omega= 3.7\cdot10^{-6}\,{\rm s^{-1}}$; : $M= 0.73\,{\rm M_{\sun}}, r_0= 0.86\,{\rm R_{\sun}}, \Omega= 6.2\cdot 10^{-6}\,{\rm s^{-1}}$. The wind density at the reference level is $n_0= 2.76\cdot10^6\,{\rm cm^{-3}}$ and the mean molecular weight $\mu= 0.5$.[]{data-label="kdwarfs.fig"}](6486f4){width="\hsize"} For drastic thermal wind conditions, such as base temperatures $T_0\lesssim 10^7\,{\rm K}$ and $\Gamma\gtrsim 1$ (implying almost isothermal outflows), CWRPs increase to about 5-15 times the solar value. Despite the large observational uncertainties of a factor of two [@2002ApJ...574..412W], the resulting CWRPs are insufficient to achieve conclusive agreement with the empirical values. The Chandra observations of @2006ApJ...643..444W show abundance anomalies for some of the K dwarf targets in the form of a FIP effect, that is, the relative abundances (with respect to photospheric values) of elements with low first ionisation potentials are enhanced compared to high-FIP elements. The element abundances determine the mean molecular weight of the coronal plasma, and could thus have an influence on the wind acceleration mechanisms and mass loss rate. Assuming a fully ionised plasma, solar photospheric abundances [@1998SSRv...85..161G] yield $\mu_\mathrm{\odot,ph}\approx 0.61$. In the solar wind, elements with $\rm{FIP}< 10\,{\rm eV}$ are about 4.5 times overabundant and elements with $10\,{\rm eV}< \rm{FIP} < 11.5\,{\rm eV}$, in particular C and S, about two times overabundant [@1998SSRv...85..241G; @1999SSRv...87...55R]; in the slow wind, He ($\rm{FIP}\sim 25\,{\rm eV}$) is about two times underabundant [@1998SSRv...85..241G]. These abundance differences result in $\mu_\mathrm{\odot,wind}\approx 0.57$.
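The mean molecular weight of a fully ionised plasma follows directly from the mass fractions. A minimal sketch, assuming illustrative solar mass fractions ($X\approx 0.70$, $Y\approx 0.28$, $Z\approx 0.02$, not values taken from the paper) and approximating heavy elements by $(1+Z_i)/A_i\approx 1/2$:

```python
def mean_molecular_weight(X, Y, Z):
    """Mean molecular weight of a fully ionised plasma.

    Hydrogen supplies 2 particles per unit mass (in m_H), helium 3/4,
    and heavier elements roughly 1/2 (approximating (1 + Z_i)/A_i ~ 1/2).
    """
    return 1.0 / (2.0 * X + 0.75 * Y + 0.5 * Z)

# Illustrative solar mass fractions (assumed, not from the paper):
mu = mean_molecular_weight(X=0.70, Y=0.28, Z=0.02)
print(round(mu, 2))  # ~0.62, close to the quoted mu_sun,ph ~ 0.61
```

Shifting a few per cent of the mass between the FIP-sensitive species changes $\mu$ only at the second decimal, consistent with the small effect quoted below.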
@2006ApJ...643..444W determined the relative abundances of the major constituents of the K dwarf coronae. If we assume that in stellar winds He is generally underabundant, the mean molecular weights are $\mu_\mathrm{36\,Oph (A/B)}\approx 0.53, \mu_\mathrm{70\,Oph (A)}\approx 0.58, \mu_\mathrm{70\,Oph (B)}\approx 0.54$, and $\mu_\mathrm{\epsilon\,Eri}\approx 0.54$, that is $\lesssim 10\%$ smaller than the solar value. However, the dependence of the CWRP on the mean molecular weight is small (Fig. \[muscan.fig\]), and this effect is therefore rather marginal. ![Dependence of the characteristic wind ram pressure (in solar units) on the mean molecular weight, assuming solar reference values ($M= 1\,{\rm M_{\sun}}, \Omega= 2.8\cdot10^{-6}\,{\rm s^{-1}}, r_0= 1.1\,{\rm R_{\sun}}, B_0= 3\,{\rm G}, T_0= 2.93\cdot10^6\,{\rm K}, n_0= 2.76\cdot10^6\,{\rm cm^{-3}}, \Gamma= 1.22$).[]{data-label="muscan.fig"}](6486f5){width="\hsize"} In summary, high magnetic field strengths, high wind temperatures, high heating rates and low mean molecular weights do, in principle, increase the CWRPs of cool stars. Yet, if the energy flux into open and closed magnetic field structures is similar, then we expect that the extreme coronal conditions required to yield CWRPs nearly two orders of magnitude larger than the solar value would imprint distinctive signatures on stellar X-ray properties. Since these signatures are not discernible in the case of the moderately rotating K dwarfs, their high CWRPs are peculiar.

Dependence on spectral type
---------------------------

For comparable wind parameters and rotation rates, the CWRPs of lower-mass main-sequence stars are higher than for solar-mass stars. Over the mass range $0.2-1.2\,{\rm M_{\sun}}$, the difference is typically smaller than about half an order of magnitude (Fig.\[mlmomwd\_stc.fig\]).
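To see why the surface-area term matters, a small sketch (purely illustrative, using the paper's approximation $R\propto M^{0.8}$ for the lower main sequence) of the bare geometric factor $R^{-2}$ between a $0.2$ and a $1.2\,{\rm M_{\sun}}$ star:

```python
def geometric_cwrp_boost(m_low, m_high, radius_exponent=0.8):
    """Ratio of the 1/R^2 surface-area factors for two stellar masses,
    using the approximation R ~ M**0.8 (masses in solar units)."""
    return (m_high / m_low) ** (2.0 * radius_exponent)

boost = geometric_cwrp_boost(0.2, 1.2)
print(round(boost, 1))  # ~17.6: the bare R^-2 factor alone
```

The geometric factor alone exceeds an order of magnitude; that the net CWRP contrast stays below half an order of magnitude reflects the significantly smaller mass fluxes of lower-mass stars, as discussed next.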
![Characteristic wind ram pressures of $1.2\,{\rm M_{\sun}}$ (*thick lines*) and $0.2\,{\rm M_{\sun}}$ (*thin lines*) main-sequence stars, subject to different wind scenarios. *Crosses* mark the transition between the slow and the fast magnetic rotator regimes, *symbols* show the empirical values given by @2005ApJ...628L.143W.[]{data-label="mlmomwd_stc.fig"}](6486f6){width="\hsize"}

This holds for both the slow and the fast magnetic rotator regime, so it is unrelated to the higher magnetic field strengths of lower-mass stars, adapted to retain the total magnetic flux (cf. Sect.\[roar\]). The increase is caused by the scaling of $\mathcal{P}$ with the inverse surface area (i.e. $\propto R^{-2}$). The mass fluxes (per solid angle) of lower-mass stars are, in fact, significantly smaller than the solar value. In the case of the reference wind scenario, the CWRPs and mass loss rates follow approximately broken power-laws (Table \[cwrpfit\]), whose parameters reflect the result above.

| $M$ ${\rm [M_{\sun}]}$ | $\Omega_\mathrm{s/f}$ ${\rm [\Omega_\odot]}$ | $\bar{\mathcal{P}}\,{\rm [\mathcal{P}_\odot]}$ | $n_{\mathcal{P}}$ | $\bar{F}_\mathrm{m}\,{\rm [F_\mathrm{m,\odot}]}$ | $n_F$ | $\bar{v}_\infty\,{\rm [v_{\infty,\odot}]}$ | $n_v$ | $\bar{\mathcal{P}}\,{\rm [\mathcal{P}_\odot]}$ | $n_{\mathcal{P}}$ | $\bar{F}_\mathrm{m}\,{\rm [F_\mathrm{m,\odot}]}$ | $n_F$ | $\bar{v}_\infty\,{\rm [v_{\infty,\odot}]}$ | $n_v$ |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 1.2 | 4.39 | 0.83 | 1.21 | 1.15 | 0.99 | 0.97 | 0.21 | 0.40 | 1.73 | 1.41 | 0.82 | 0.38 | 0.90 |
| 1.0 | 4.18 | 0.99 | 1.14 | 0.99 | 0.94 | 1.00 | 0.20 | 0.45 | 1.73 | 1.18 | 0.79 | 0.38 | 0.93 |
| 0.8 | 3.90 | 1.18 | 1.08 | 0.80 | 0.89 | 1.04 | 0.19 | 0.52 | 1.72 | 0.92 | 0.76 | 0.39 | 0.95 |
| 0.6 | 3.54 | 1.43 | 1.01 | 0.58 | 0.84 | 1.09 | 0.17 | 0.61 | 1.72 | 0.66 | 0.73 | 0.41 | 0.99 |
| 0.4 | 3.05 | 1.75 | 0.94 | 0.35 | 0.78 | 1.14 | 0.16 | 0.77 | 1.72 | 0.38 | 0.70 | 0.46 | 1.02 |
| 0.2 | 2.30 | 2.17 | 0.85 | 0.13 | 0.71 | 1.23 | 0.14 | 1.10 | 1.72 | 0.14 | 0.65 | 0.59 | 1.07 |

: Coefficients $\bar{X}$ and indices $n_X$ of the broken power-law fits $X= \bar{X}\,\Omega^{n_X}$ to the CWRPs, mass fluxes, and terminal wind velocities in the reference wind scenario. Columns 3-8 refer to the slow magnetic rotator regime ($\Omega< \Omega_\mathrm{s/f}$), columns 9-14 to the fast magnetic rotator regime.

\[cwrpfit\]

For K dwarfs with masses $\sim 0.7\,{\rm M_{\sun}}$, the increase of the CWRP is clearly insufficient to account for the high observed values of the peculiar group of moderately rotating targets.

Inclination effects
-------------------

The wind ram pressure of magnetised winds is intrinsically latitude-dependent, since plasma emanating along open magnetic field lines at high latitudes experiences a weaker magneto-centrifugal driving than outflows in the equatorial plane. Inferred mass loss rates based on the assumption of spherically symmetric outflows may misestimate the actual value if the inclination between the line-of-sight and the rotation axis is not taken into account. We analyse the impact of this effect by determining latitude-dependent CWRPs. The applied model is an extension of the @1967ApJ...148..217W-formalism to non-equatorial latitudes, assuming that the poloidal magnetic field component is radial, so that the spiralling field lines are located on cones with constant opening angles, whose tips are located at the centre of the star . Disregarding possible anisotropies caused by closed coronal magnetic field structures, the thermally driven winds of slow magnetic rotators are virtually spherically symmetric, and inferred CWRPs are independent of inclination effects.
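Footnote 2 characterises fast magnetic rotators via the Michel velocity. As a rough sketch of the latitude dependence just described (a simplifying assumption for illustration, not the paper's full cone model), one can let the magneto-centrifugal term enter through the effective rotation rate $\Omega\sin\theta$ at colatitude $\theta$, giving $v_M\propto(\Omega\sin\theta)^{2/3}$:

```python
import math

def michel_velocity_scaling(colatitude_deg):
    """Relative Michel velocity at colatitude theta, normalised to the
    equatorial value.

    Illustrative assumption (not the paper's equations): the magneto-
    centrifugal driving scales with the effective rotation rate
    Omega*sin(theta), so v_M ~ sin(theta)**(2/3) at fixed magnetic flux.
    """
    return math.sin(math.radians(colatitude_deg)) ** (2.0 / 3.0)

for theta in (90, 60, 30, 10):
    print(theta, round(michel_velocity_scaling(theta), 2))
```

High-latitude outflows then see a systematically reduced driving, which is why the inferred CWRP of a fast magnetic rotator depends on the viewing inclination.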
In contrast, in the regime of fast magnetic rotators, possible CWRPs span up to an order of magnitude, depending on the underlying wind scenario (Fig. \[mlmom.fig\]). ![Characteristic wind ram pressures of outflowing plasma with different inclinations to the stellar rotation axis. Based on uniform surface magnetic field distributions, higher (lower) values within a *shaded region* are associated with equatorial (high) latitudes. The *lines* show surface-averaged mean values, that is the values one would statistically expect if the inclination is unknown. *Symbols* indicate the values and spectral types of the empirical data of @2005ApJ...628L.143W.[]{data-label="mlmom.fig"}](6486f7){width="\hsize"} Small inclinations between the rotation axis and the line-of-sight typically imply smaller values. Considering the larger surface area at lower latitudes, the expectation values are closer to the higher CWRPs determined in the equatorial plane. The present analysis is based on uniform magnetic field distributions. Yet observations of rapidly rotating stars frequently show non-uniform surface brightness and magnetic field distributions in the form of spot concentrations at high latitudes [e.g. @2002AN....323..309S and references therein], which alter the latitudinal variation of the wind structure. Magnetised winds of rapidly rotating stars furthermore show a collimation of open magnetic field lines toward the rotation axis, which causes an additional latitude-dependence of the wind structure . Such anisotropies increase the range of possible CWRPs and the uncertainty of inferred values.

Rotation-activity-relationship
------------------------------

The activity level of cool stars is typically quantified through their coronal X-ray emission; @2002ApJ...574..412W [@2005ApJ...628L.143W] correlate the inferred mass loss rates with observed X-ray fluxes.
For cool stars with rotation periods longer than about two days, X-ray fluxes increase with the stellar rotation rate, following (on average) a power-law relation, $$F_X= \bar{F}_X \Omega^{n_X} \ . \label{deffxom}$$ The empirical values $\bar{F}_X$ and $n_X$ depend on the spectral type and size of the underlying stellar sample. We demonstrate the impact of these quantities on the mapping of our theoretical CWRPs onto the $F_X$-$\mathcal{P}$-plane by comparing several relationships (Table \[fxnx.tbl\]).

| $\lg \bar{F}_X\,{\rm [erg\cdot s^{-1}\cdot cm^{-2}]}$ | $n_X$ | sample |
|---------------------------------------------------------------------------------------------------|--------|--------------------|
| $15.51 + 0.623 \left( \frac{M}{M_{\sun}} \right) - 2 \lg \left( \frac{R}{R_{\sun}} \right)$ | $2$ | 259 FGKM stars$^a$ |
| $19.25 - 2 \lg \left( \frac{R}{R_{\sun}} \right)$ | $2.64$ | 9 G stars$^b$ |
| $17.88$ | $2.4$ | 19 FG stars$^c$ |

: Coefficients and power-law indices of empirical rotation-activity-relationships. The mass dependence has been determined by fitting power laws to the characteristic values of each mass bin in the data set given by .[]{data-label="fxnx.tbl"}

\[rotactrels\]

Larger $\bar{F}_X$-values shift the curves of theoretical CWRPs to higher coronal X-ray fluxes, whereas smaller power-law indices $n_X$ cause steeper curves, since $d \ln \mathcal{P} / d \ln F_X= (d \ln \mathcal{P} / d \ln \Omega) / n_X$ (Fig. \[mlmfxwd.fig\]). ![Characteristic wind ram pressures (in solar units) as function of the stellar X-ray flux, assuming the reference ($n_T= 0.1, n_n= 0.6, n_\Phi= 1$) and the high wind density ($n_T= 0.1, n_n= 5, n_\Phi= 1$, *steep curves*) scenarios. Different empirical rotation-activity-relations result in different locations and slopes of the curves for a $1\,{\rm M_{\sun}}$ star (*top*).
Using the rotation-activity relationship based on data by , the dependence on spectral type (i.e. stellar mass) shifts the values of lower-mass stars to higher X-ray fluxes (*bottom*). The *gray* line marks the X-ray flux-mass loss-relation, $\dot{M} \propto F_X^{1.34}$, suggested by @2005ApJ...628L.143W.[]{data-label="mlmfxwd.fig"}](6486f8){width="\hsize"} The latter effect would ease the need for a strong increase of the wind density with the stellar rotation. Depending on the applied rotation-activity relation, values $n_n\simeq 2.2-3.1$ yield CWRPs which are in agreement with the moderately rotating K dwarfs and the power-law relationship suggested by @2005ApJ...628L.143W. Yet the offsets caused by different $\bar{F}_X$ result in inconsistencies between theoretical and empirical CWRPs. The range of possible locations and slopes of CWRP curves resulting from different empirical rotation-activity relations makes it difficult to associate inferred mass loss rates with a systematic change of stellar wind parameters. The rotation-activity relationships are statistical relations based on a (relatively) large number of stars. For individual objects, deviations from the mean value can be significant and represent an additional source of uncertainty in the analysis of stellar wind ram pressures. Furthermore, X-ray fluxes of cool stars are typically not constant, but may vary in the course of an activity cycle, in the case of the Sun by an order of magnitude.

Stellar mass loss rates
-----------------------

Following Eqs. (\[defrelcwrp\]) and (\[cwrpwood\]), a match between theoretical and empirical CWRPs implies the relation $$\left( \frac{\dot{M}}{\dot{M}_\odot} \right)_\mathrm{W} \stackrel{!}{=} \frac{\dot{M}}{\dot{M}_\odot} \frac{v_\infty}{v_{\infty,\odot}} \ .
\label{mdotrel}$$ If the terminal wind velocity is different from the solar value, then the self-consistently determined mass loss rate, $\dot{M}$, is different from the value, $\dot{M}_\mathrm{W}$, derived by @2002ApJ...574..412W, since the latter is based on the assumption of a unique solar-like wind velocity. A comparison between theoretical and empirical stellar mass loss rates is shown in Fig. \[maslos.fig\] for a $1\,{\rm M_\odot}$ star subject to different wind scenarios. ![Self-consistent mass loss rates per surface area, $\dot{M}/R^2$ (in solar units), as a function of the relative characteristic wind ram pressure, $\mathcal{P} / \mathcal{P}_\odot$. Subject to the condition $v_\infty= v_{\infty,\odot}$, the latter quantity is equivalent to the empirical mass loss rates per surface area given by @2002ApJ...574..412W. *Crosses* mark the transition between the regimes of slow and fast magnetic rotators; the *gray line* indicates identity between theoretical and observed mass loss rates.[]{data-label="maslos.fig"}](6486f9){width="\hsize"} Note that, following Eq. (\[cwrpwood\]), the relative CWRPs on the abscissa are equivalent to the empirical mass loss rates per surface area of @2002ApJ...574..412W. For thermally driven winds of slow magnetic rotators, there is an almost one-to-one correspondence between self-consistent and observed mass loss rates, with the possible exception of very hot stellar winds. In contrast, in the regime of fast magnetic rotators, the terminal velocities of magneto-centrifugally driven winds are faster than in the solar case (see Fig. \[fmtv.fig\]), and the self-consistent mass loss rates are thus smaller than those determined following the approach of @2002ApJ...574..412W. In all but one of our wind scenarios the theoretical mass loss rates do not exceed about ten times the solar value. Only if the wind ram pressure scales exclusively with the wind density are theoretical and empirical mass loss rates similar throughout.
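Equation (\[mdotrel\]) can be inverted to convert an empirically inferred mass loss rate into the self-consistent one; a minimal sketch (all quantities in solar units, the numbers purely illustrative):

```python
def self_consistent_mdot(mdot_wood, v_inf_ratio):
    """Invert Eq. (mdotrel): Mdot = Mdot_W / (v_inf / v_inf_sun).

    mdot_wood   -- mass loss rate inferred assuming a solar wind speed
    v_inf_ratio -- actual terminal velocity in units of the solar value
    """
    return mdot_wood / v_inf_ratio

# A fast magnetic rotator whose wind leaves at, say, five times the
# solar terminal velocity (illustrative value) loses five times less
# mass than the solar-speed assumption suggests.
print(self_consistent_mdot(mdot_wood=50.0, v_inf_ratio=5.0))  # 10.0
```

The correction only matters in the fast magnetic rotator regime, where $v_\infty$ departs substantially from the solar value.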
In terms of this scenario, all stars are slow magnetic rotators with solar-like winds, whose CWRPs scale with the mass flux. The results for lower-mass stars are qualitatively the same, with stellar mass loss rates (per surface area) not exceeding about ten times the solar value either. We conjecture that disregarding the importance of the (terminal) wind velocity on the CWRP may lead to overestimation of the mass loss rates of rapidly rotating stars.

Discussion {#disc}
==========

The aim of our investigation was to develop a picture of the wind ram pressures and mass loss rates of cool main-sequence stars. Rather than analysing individual stars in detail, we have used the set of observationally determined values to constrain possible wind scenarios. In view of the large observational uncertainties and the small and heterogeneous sample of stars, we did not attempt to fit theoretical and empirical values, but used the latter as guidance only. The power-law ansatz for the dependence of the thermal and magnetic wind parameters on the stellar rotation rate is motivated and supported by previous investigations. Nevertheless, with the highest currently observed CWRPs occurring at moderate rotation rates, it is not possible to find a consistent agreement between a theoretical wind scenario and *all* empirical values. This problem raises the question of whether the increase of CWRPs is characterised by the group of moderately rotating K dwarfs or by the group of rapidly rotating targets.

The K dwarf puzzle
------------------

The high CWRPs of the K dwarfs , , and cannot be explained in the framework of solar-like magnetised winds. The temperatures, densities, and heating rates required to raise the values are inconsistent with the X-ray observations of @2006ApJ...643..444W. Strong magnetic fields could account for their CWRPs, but the observed magnetic flux densities of these K dwarfs, albeit unexpectedly high, are insufficient.
On the observational side, inaccurate assumptions about the ambient ISM or the difficult fitting procedure of simulated and observed astrospheric absorption profiles may cause overestimation of the empirical mass loss rates. On the theoretical side, our model may miss out on additional wind acceleration or energy transfer mechanisms, which could cause higher terminal velocities. However, we expect any effect capable of increasing the CWRP up to 80 times the solar value to render stellar winds non-solar and to imprint discernible signatures on the coronal X-ray properties of the star. Based on the available results, we consider the high CWRPs of the group of K dwarfs to be peculiar and uncharacteristic for the winds of cool main-sequence stars.

CWRP of cool main-sequence stars
--------------------------------

Assuming that the increase of the CWRP of main-sequence stars is characterised by the rapidly rotating targets and , we argue in favour of stellar winds whose thermal and magnetic properties increase moderately with the stellar rotation rate, such as the wind scenario with $n_T= 0.1, n_n= 0.6$, and $n_\Phi= 1$. This set of parameters yields CWRPs in good agreement with the values of the slow and rapidly rotating stars, and is based on previous, independent investigations on the increase of thermal and magnetic coronal properties [e.g. @2003ApJ...599..516I; @2003SSRv..108..577F]. However, the large uncertainties of the empirical mass loss rates also allow for wind scenarios with parameters covering some range around these reference values to be in agreement with the empirical constraints. Our theoretical results indicate that for a given rotation rate and similar wind conditions, the CWRPs of cool stars increase toward later spectral types. The increase is due to the shrinking surface area; the mass loss rates of lower-mass stars are actually predicted to decrease.
The increase of the CWRP depends to some extent on our approximation of stellar radii, $R\propto M^{0.8}$, on the lower main-sequence, and we thus expect it to be diluted by the dependence on the actual radii and mass loss rates of individual stars. Since over the mass range of cool stars the difference is smaller than the observational uncertainties, a structuring of CWRPs according to stellar spectral type will hardly be discernible. The situation worsens when the analysis is carried out in terms of the coronal X-ray flux instead of the rotation, since the statistical character of a rotation-activity relationship as well as intrinsic (e.g. cyclic) variations of the X-ray emission add to the scatter of empirical values. In particular in the regime of fast magnetic rotators, we expect large scatter of observed CWRPs around the mean values, caused by different inclinations between stellar rotation axes and lines-of-sight. The collimation of open magnetic field lines toward the stellar rotation axis and non-uniform surface distributions of magnetic flux make the wind ram pressure of rapidly rotating stars intrinsically latitude-dependent . Since astrospheres represent the impact of stellar winds averaged over longitude and long (possibly decadal) timescales, small-scale and intermittent variations will be smeared out. Yet the typical concentration of magnetic flux at polar latitudes [@2002AN....323..309S] as well as the collimation of magnetic field lines may cause a gradient in the wind ram pressure between equatorial and polar latitudes. A more detailed analysis is required to quantify the possible impact of these latitude-dependent effects. Unfortunately, the inclination of a star is seldom known, so that these effects can hardly be verified observationally. But it is advisable to take them into account as a possible source of scatter in the regime of fast magnetic rotators.
In their analysis of the X-ray properties of the moderately rotating K dwarfs, @2006ApJ...643..444W find a possible connection between the strength of their winds and coronal abundance anomalies. Our investigation confirms a dependence of the CWRP on the FIP effect. The influence on the mean molecular weight is small and the associated impact on the CWRP well below the observational accuracy, but the trend confirms the finding of @2006ApJ...643..444W, that a strong FIP effect implies a weaker stellar wind. Yet we caution that our polytropic, hydrodynamic ansatz probably does not allow for an adequate analysis of this question, and that a dedicated investigation is required to clarify this point.

Stellar mass loss rates
-----------------------

Disregarding the terminal wind velocity in the analysis of CWRPs can cause misestimations of stellar mass loss rates. For fast magnetic rotators, the self-consistently determined mass loss rates are typically smaller than the values based on the presumption of solar-like winds, and do not exceed about ten times the solar value, which is in agreement with upper limits based on observations of dMe stars . Since the terminal wind velocities of fast magnetic rotators are higher than solar-like values, mass loss rates more than an order of magnitude higher than the actual values may be deduced. Unfortunately, the wind velocities of cool stars are observationally not constrained. But due to the additional magneto-centrifugal driving mechanism, a similarity with surface escape velocities, as in the case of thermal winds of slow rotators, cannot be expected per se. Very fast terminal wind velocities, though, require magnetic fields of several kilo-Gauss, high filling factors, and high dynamo efficiencies, which are observationally not confirmed and in conflict with current theories on dynamo operation/saturation and on the rotational evolution of cool stars.
Since the angular momentum loss associated with high magnetic fluxes brakes the stellar rotation too efficiently, the observations of rapidly rotating zero-age-main-sequence stars would be difficult to explain . We consider wind scenarios based on high dynamo efficiencies to be marginal, although such values have been previously suggested [@2003ApJ...590..493S though focusing on a different aspect of stellar activity]. The predicted range of dynamo efficiencies is consistent with the observationally determined, super-linear values of @2001ASPC..223..292S.

Comparison of energy fluxes
---------------------------

Combining our power-law approximations for the stellar mass loss rates with the rotation-activity relationship, Eq. (\[deffxom\]), yields $\dot{M}\propto F_X^{n_F/n_X}$. With the values given in Tables \[cwrpfit\] and \[rotactrels\], the rate of increase, $n_F/n_X\sim 0.5$, is significantly smaller than the value $1.34\pm 0.18$, suggested by @2005ApJ...628L.143W. The reason for this difference is that we assume the increase of CWRPs to be characterised through the lower values of the rapidly rotating targets, whereas @2005ApJ...628L.143W base their fit on the high CWRP of the peculiar group of moderately rotating K dwarfs. The transport of polytropic gas from the stellar surface to infinity requires the specific thermal energy (per unit mass) $$q = \frac{ \left( \gamma - \Gamma \right) }{ \left( \gamma - 1 \right) \left( \Gamma - 1 \right) } \frac{\Re}{\mu} T_0 \ , \label{defq}$$ which has to be provided by the star.
Thus, at the stellar surface the thermal energy flux along open magnetic fields must be $$F_\mathrm{W} = \frac{q \dot{M}}{4\pi R^2} = \frac{q \mathcal{P}}{v_\infty} \ ,$$ or, in the case of $1\,{\rm M_{\sun}}$ stars with rotation rates $\lesssim 4.2\Omega_\odot$, approximately $$F_\mathrm{W} \approx 3.23\cdot10^4 \left( \frac{\Omega}{\Omega_\odot} \right)^{1.04} \left[ \mathrm{\frac{erg}{s\, cm^2}} \right] \ ,$$ assuming a mono-atomic gas with a ratio of specific heats $\gamma= 5/3$. At solar rotation rate, our approximation is in agreement with the empirical values for quiet Sun regions, $F_\mathrm{W}\lesssim 5\cdot 10^4\,{\rm erg\cdot s^{-1}\cdot cm^{-2}}$, but an order of magnitude smaller than the value for coronal holes . We note that the wind parameters we have used to gauge our model at Earth orbit (cf. Sect. \[src\]) are characteristic for the slow solar wind, which is expected to originate from quiet Sun regions, whereas coronal holes harbour the fast solar wind component. In comparison with the increase of the coronal X-ray flux, $F_X\propto \Omega^{n_X}$ with $n_X\gtrsim 2$, the thermal energy flux of stellar winds is predicted to increase at a much lower rate. The coronal X-ray flux quantifies the energy input into closed magnetic field regions, whereas the wind energy flux quantifies the (thermal) energy input into wind regions. Like @2002ApJ...574..412W, we do not expect a direct connection between the two energy fluxes. Closed magnetic field regions are mainly heated by the dissipation of magnetic energy through reconnection processes. @2003ApJ...598.1387P, for example, finds a correlation between stellar X-ray luminosities and magnetic flux, $L_X\propto \Phi^{1.15}$. A dominant magnetic heating of wind regions is unlikely, since reconnecting field lines would hamper the acceleration of coherent plasma motions to super-sonic/super-Alfvénic flow velocities.
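The quoted normalisation can be checked at solar rotation by combining Eq. (\[defq\]) with $F_\mathrm{W}= q\dot{M}/(4\pi R^2)$; the sketch below assumes a present solar mass loss rate of $2\cdot10^{-14}\,{\rm M_{\sun}\,yr^{-1}}$ (an assumed round value, not taken from the paper) together with the reference parameters $\Gamma= 1.22$, $T_0= 2.93\cdot10^6\,{\rm K}$, $\mu= 0.5$:

```python
import math

R_GAS = 8.314e7   # gas constant [erg mol^-1 K^-1]
M_SUN = 1.989e33  # solar mass [g]
R_SUN = 6.96e10   # solar radius [cm]
YEAR  = 3.156e7   # year [s]

def specific_wind_energy(T0, gamma=5.0 / 3.0, Gamma=1.22, mu=0.5):
    """Eq. (defq): q = (gamma - Gamma)/((gamma - 1)(Gamma - 1)) * (R/mu) * T0."""
    return (gamma - Gamma) / ((gamma - 1.0) * (Gamma - 1.0)) * R_GAS / mu * T0

mdot = 2e-14 * M_SUN / YEAR                   # assumed solar mass loss rate [g/s]
q = specific_wind_energy(T0=2.93e6)           # specific thermal energy [erg/g]
F_W = q * mdot / (4.0 * math.pi * R_SUN**2)   # surface energy flux [erg s^-1 cm^-2]
print(f"{F_W:.2e}")  # ~3e4, of the order of the quoted 3.23e4 erg/s/cm^2
```

The estimate lands within a few per cent of the quoted coefficient, the residual difference being consistent with the exact solar mass flux used in the model.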
Instead, the Poynting flux presents an additional energy source for the outflowing plasma, with magnetic field lines transferring rotational energy of the star into kinetic energy without being dissipated. The different rates of increase of the X-ray flux and the wind energy flux may be due to different dependencies of the heating mechanisms on the stellar rotation rate. Correlating the stellar mass loss rate or wind energy flux with the coronal X-ray flux may eliminate the explicit dependence on the rotation rate, but it is questionable whether there is a direct physical relationship between the two quantities. For the X-ray luminosity-magnetic flux relation of @2003ApJ...598.1387P to be consistent with empirical rotation-activity relationships (cf. Table \[rotactrels\]), dynamo efficiencies $n_\Phi\sim 2$ are required, which would yield CWRPs outside the error bars of the rapidly rotating targets. Such high dynamo efficiencies have severe ramifications for the rotational evolution of cool stars, since high magnetic flux densities imply a strong magnetic braking. For instance, the observed spin-down of main-sequence stars with time, $\Omega\propto t^{-1/2}$ [@1972ApJ...171..565S], is consistent with a linear dynamo efficiency. The inconsistency between rotation-activity relations, on one side, and stellar spin-down timescales, on the other, may be caused by the former being related to the closed magnetic flux and the latter to the open magnetic flux, indicating a different dependence of the open and closed magnetic field structures on the stellar rotation rate.

Impact on stars and planets
---------------------------

The increase of the mass loss rate with stellar rotation, $\dot{M} \propto \Omega^{n_F}$, is according to our results ($n_F\lesssim 1$) much weaker than predicted by @2002ApJ...574..412W: $n_F\approx 3.3$ \[cf. combination of their Eqs. (1) and (3)\].
For rapidly rotating stars, our mass loss rates are smaller than their predictions, whereas for slowly rotating stars both predictions are similar. We illustrate the consequences of the different rates of increase by comparing the effect of the reference and the dense wind scenarios on the evolution of a $1\,{\rm M_{\sun}}$ star; the rotational evolution model is described in . The latter scenario[^3] is supposed to reflect both the approach and result of @2002ApJ...574..412W [@2005ApJ...628L.143W], that stellar mass losses scale with the wind density and increase strongly with the coronal X-ray flux/stellar rotation rate. For young stars, the CWRPs and mass loss rates based on the reference wind scenario are several orders of magnitude smaller than in the case of massive winds (Fig. \[rotevol.fig\]). ![Evolution of the stellar surface rotation rate, characteristic wind ram pressure, mass loss rate, and cumulative mass loss (*top* to *bottom*) of a $1\,{\rm M_{\sun}}$ star subject to wind scenarios with a moderate increase of thermal and magnetic wind parameters ($n_T= 0.1, n_n= 0.6, n_\Phi= 1$, *solid lines*) and with a strong increase of the wind density ($n_T= 0.1, n_n= 2, n_\Phi= 1$, *dashed lines*), respectively. The initial rotation rate is $\Omega_0= 2\cdot 10^{-6}\,{\rm s^{-1}}$, and the internal coupling timescale $\tau_\mathrm{c}= 65\,{\rm Myr}$. *Crosses* mark the current state of the Sun.[]{data-label="rotevol.fig"}](6486f10){width="\hsize"} The cumulative mass loss is $\sim 10^{-4}\,{\rm M_{\sun}}$, which supports the conjecture of @2002ApJ...574..412W that the faint young Sun paradox cannot be explained through a higher-mass young Sun. The impact of stellar winds on planetary atmospheres and magnetospheres is most severe during the pre-main sequence phase (Fig.\[prpr.fig\]).
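The $\sim 10^{-4}\,{\rm M_{\sun}}$ figure admits a back-of-the-envelope check (a rough sketch, assuming a present solar mass loss rate of about $2\cdot10^{-14}\,{\rm M_{\sun}\,yr^{-1}}$ held constant over the solar age; the paper integrates the full rotational evolution instead):

```python
def cumulative_mass_loss(mdot_per_yr, age_yr):
    """Time-integrated mass loss [M_sun] for a constant rate [M_sun/yr]."""
    return mdot_per_yr * age_yr

# Assumed present-day solar rate over ~4.6 Gyr (illustrative numbers):
dm = cumulative_mass_loss(2e-14, 4.6e9)
print(f"{dm:.1e}")  # ~1e-4 M_sun
```

Even allowing for an enhanced early wind, the integral stays orders of magnitude below the few per cent of a solar mass that a substantially more massive young Sun would require.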
![Evolution of the wind ram pressure, $p_\mathrm{w}$, the magnetic pressure, $p_\mathrm{m}$, and the thermal gas pressure, $p_\mathrm{g}$, of magnetised winds at different distances in the equatorial plane.[]{data-label="prpr.fig"}](6486f11){width="\hsize"} The most promising targets for radio searches of extra-solar giant planets are thus rapidly rotating pre-main sequence stars, which not only have strong winds but also high magnetic activity levels. We note that this conjecture is based on the extrapolation of the main-sequence wind scenarios to the pre-main sequence phase, since the wind ram pressures of young stars are as yet observationally unconstrained and thus hypothetical. The magnetic energy flux at short-period orbits close to the star can be larger than the wind ram pressure (Figs. \[prpr.fig\] and \[pprofiles.fig\]), which may entail different interaction mechanisms with planetary magnetospheres. Still closer to the star, Hot Jupiters are expected to interact directly with the coronal magnetic field. Magnetic reconnection between coronal field structures and the planetary magnetosphere alters the field topology and can trigger flares and chromospheric brightenings [@2004ApJ...602L..53I; @2005ApJ...622.1075S]. So close to the stellar corona, wind ram pressure effects are negligibly small [e.g. @2006MNRAS.367L...1M].

Future issues
-------------

It is essential to clarify the origin of the high CWRPs of the three K dwarfs , , and , because they play the decisive role in the determination of a relationship between two fundamental stellar parameters. Since their wind ram pressures cannot be explained in terms of a solar-like wind model, we regard these stars as ‘non-representative’ and excluded them from the comparison of empirical and theoretical values.
This approach may formally account for the lower CWRPs of the more rapidly rotating targets, but raises questions about the inapplicability of the polytropic magnetised wind model and/or the power-law ansatz for the wind parameters. @2002ApJ...574..412W, in contrast, disregard the empirical constraints set by the rapidly rotating targets and base their relationship on the high values of the K dwarfs. Their approach may formally account for the peculiar CWRPs, but raises the question of different wind properties and coronal field topologies beyond a certain stellar activity level. The present stellar sample is insufficient to conclusively decide between these two complementary approaches. More astrospheric detections of (preferentially single) main-sequence stars are highly desirable to solve this ambiguity. If the high CWRPs of the K dwarfs are observationally confirmed, then this will raise crucial questions about our current understanding of winds of cool stars, since the implicit requirements on the coronal properties are drastically different from what we know and expect from the Sun and solar-like stars. Although previous observations indicate otherwise, the most likely mechanism to account for such high wind ram pressures is an efficient magneto-centrifugal driving of the wind due to strong magnetic fields. Yet such high field strengths would raise the question of how main-sequence stars with rotation rates not much different from the Sun can produce and sustain such high magnetic flux densities over long times without modifying the coronal X-ray signatures.

Conclusions {#conc}
===========

Characteristic wind ram pressures and mass loss rates increase with the wind temperature, wind density, and strength of open magnetic fields.
Although the observational data are as yet insufficient to conclusively discriminate between different wind scenarios, we argue in favour of a moderate increase of the thermal and magnetic wind properties with the stellar rotation rate and, synonymously, the coronal X-ray flux. Such a wind scenario does not account for the moderately rotating K dwarfs , , and , whose observed thermal and magnetic properties do not allow for a consistent explanation of their high wind ram pressures in the framework of a magnetised wind model. We regard their high apparent mass loss rates as non-representative for cool main-sequence stars and suggest excluding them from quantitative correlations between mass loss rates and rotation rates/coronal X-ray fluxes until their role is observationally and theoretically clarified. The predicted rate of increase of mass loss rates is smaller than suggested by @2002ApJ...574..412W [@2005ApJ...628L.143W] and depends on whether a star can be classified as a slow or a fast magnetic rotator. In the latter case, efficient magneto-centrifugal driving of outflows entails terminal wind velocities considerably faster than the surface escape velocity, which, if not taken properly into account, leads to an overestimation of stellar mass loss rates. The predicted mass loss rates of cool main-sequence stars do not exceed (on average) about $10\,{\rm M_{\sun}}$. Since the predicted stellar winds are weaker than previously suggested, we expect less severe erosion of planetary atmospheres and lower detectabilities of magnetospheric radio emissions originating from extra-solar giant planets. Considering the evolution of stellar mass loss rates and wind ram pressures, we suggest that rapidly rotating pre-main sequence stars with high magnetic activity levels are the most promising targets for searches of planetary radio emission. We thank the referee B. Wood for his very constructive comments which helped to improve the paper.
VH gratefully acknowledges financial support for this research through a PPARC standard grant (PPA/G/S/2001/00144) and through a fellowship of the Max-Planck-Society. [^1]: For consistency reasons, we use the particle density, $n_0= \rho_0 N_\mathrm{A} / \mu$, to specify the wind condition at the reference level, with the mean molecular weight $\mu$ and the Avogadro number $N_\mathrm{A}$. [^2]: We follow the terminology of @1976ApJ...210..498B and consider stars to be fast magnetic rotators, if the Michel velocity, which quantifies the impact of magnetic fields on the wind acceleration [@1969ApJ...158..727M], is larger than the terminal wind velocity determined in the absence of magneto-rotational effects [cf. @1980ApJ...242..723N; @1999isw..book.....L]. [^3]: Since for the dense wind scenario with $n_n= 5$, analysed in Sect. \[resu\], the star loses about $65\%$ of its initial mass within $20\,{\rm Myr}$, we consider instead the milder case $n_n= 2$ to illustrate the principal differences.
--- author: - 'Yuri Bakhtin[^1]' bibliography: - 'treelimit.bib' title: 'Thermodynamic Limit for Large Random Trees.' --- Introduction ============ Various kinds of random trees have been studied in the literature. In this note we consider simply generated random (plane rooted) trees, also known as branching processes conditioned on the total population (CBP), see [@Aldous-II:MR1166406]. Our initial motivation was a study of the secondary structure statistics for large RNA molecules, see [@Bakhtin-Heitsch-1:MR2415118] and [@Bakhtin-Heitsch-2]. The secondary RNA structures can be encoded via plane rooted trees and studied with the help of energy models. In [@Bakhtin-Heitsch-1:MR2415118] and [@Bakhtin-Heitsch-2], it is demonstrated that the naive energy minimization approach to the prediction of typical secondary structure features fails to explain the presence of high-degree branchings. However, using the language of statistical mechanics and working with Gibbs ensembles on trees, we were able to include the entropy correction and recover the typical RNA branching type. These results are concerned only with the rough information related to the branching statistics, but in this paper, we suggest a new viewpoint that helps to obtain some insights into the geometry of large random trees. The model we work with follows the classical Boltzmann–Gibbs postulate stating that the probability of a configuration $T$ is proportional to $e^{-\beta E(T)}$, where $E(T)$ is the energy of $T$, and $\beta$ is the inverse temperature in appropriate units (see the complete description of our model in Section \[sec:setting\_and\_first\_results\]). Gibbs distributions, especially their limiting behaviour as the size of the system tends to infinity (the so-called thermodynamic limit), are central to statistical mechanics, see [@Sinai:MR691854] and [@Georgii:MR956646] for a modern mathematical introduction.
The first goal of this paper is to prove that as the order of the tree grows to infinity, the distribution induced by the Gibbs measure converges to that of an infinite discrete tree that we explicitly describe in detail (Sections \[sec:setting\_and\_first\_results\] to \[sec:limiting\_tree\]). This thermodynamic limit belongs to the category of discrete limits of CBP according to the terminology introduced in [@Aldous-II:MR1166406], and our result (as well as the limiting object) appears to be new. In particular, it does not involve any rerooting procedures like the one introduced in [@Aldous-I:MR1085326]. We prove the result above for the bounded branching (or out-degree) case, although it should hold true under less restrictive assumptions. The limiting infinite discrete tree is a more sophisticated object than a classical Galton–Watson tree. In particular, it dies out with zero probability and the progenies of distinct vertices are not independent. However, it turns out that the limiting tree is Markov in a natural sense, and the Markov transition probability is explicitly computed in Section \[sec:limiting\_tree\]. In Section \[sec:growth\_of\_levels\] we notice that the number of vertices at a given distance $n$ from the root also forms a Markov chain if $n$ is understood as a time parameter. We prove that under linear scaling this Markov chain satisfies a limit theorem with the limit given by a gamma distribution. In Section \[sec:flt\] we strengthen this result and show that a functional limit theorem holds, with weak convergence to a diffusion process on the positive semi-line with constant drift and diffusion coefficient proportional to the square root of the space coordinate.
Since this process (under the name of local time for the Bessel(3) process) also serves as a scaling limit of the “height profile” for CBP itself, see [@Aldous-II:MR1166406 Conjecture 7] and [@Gittenberger:MR1662793], we can say that the infinite Markov random tree that we construct belongs to the same universality class as the original CBP. There are several natural and interesting problems arising in connection with our results. One is, obviously, strengthening them to give a proof of the scaling limit in Aldous’s Conjecture 7 alternative to that of [@Gittenberger:MR1662793]. Another one is to use our approach to study finer details of the random tree rather than the height profile. Our heuristic computation (see Section \[sec:SPDE\]) shows that the limit can be described as a solution of an SPDE with respect to a Brownian sheet. [**Acknowledgements.**]{} The author is grateful to NSF for partial support of this research via CAREER award DMS-0742424. He also thanks the referees for their useful comments. The setting and first results on thermodynamic limit {#sec:setting_and_first_results} ==================================================== Let us recall that plane trees (or, ordered trees) are rooted trees such that the subtrees at any vertex are linearly ordered. In other words, two plane trees are considered equal if there is a bijection between the vertices of the two trees that preserves the parent–child relation on the vertices and preserves the order of the child subtrees of any vertex. Figure \[fig:4-trees\] shows all plane trees on $4$ vertices. We fix $D\in{\mathbb{N}}$ and introduce ${\mathbb{T}}_N={\mathbb{T}}_N(D)$, the set of all plane trees on $N$ vertices such that the branching number (i.e. the number of children, or out-degree) of each vertex does not exceed $D$. To introduce a Gibbs distribution on ${\mathbb{T}}_N$, we have to assign an energy value to each tree.
We assume that an energy value $E_i\in{\mathbb{R}}$ is assigned to every $i\in\{0,\ldots,D\}$, and the energy of the tree $T$ is defined via $$E(T)=\sum_{v\in V(T)} E_{\deg(v)}=\sum_{i=0}^D \chi_i(T)E_i,$$ where $V(T)$ denotes the set of vertices of the tree $T$, $\deg(v)$ denotes the branching number of vertex $v$, and $\chi_i(T)$ is the number of vertices of branching $i$ in $T$. Since the energy of an individual vertex depends only on its immediate neighborhood via the branching number, one can say that this is a model with nearest-neighbor interaction. Now we fix an inverse temperature parameter $\beta\in{\mathbb{R}}$ (usually, in statistical physics $\beta>0$, but our results apply to other values of $\beta$ as well) and define a probability measure $\mu_N$ on ${\mathbb{T}}_N$ by $$\mu_N\{T\}=\frac{e^{-\beta E(T)}}{Z_N},$$ where the normalizing factor (partition function) is defined by $$Z_N=\sum_{T\in {\mathbb{T}}_N} e^{-\beta E(T)}.$$ In particular, if $\beta=0$ or, equivalently, $E_i=0$ for all $i$, then $\mu_N$ is the uniform distribution on ${\mathbb{T}}_N$. First, we are going to demonstrate that the above model admits a thermodynamic limit, i.e. the sequence of measures $(\mu_N)_{N\in{\mathbb{N}}}$ has a limit in a certain sense as $N\to\infty$. Secondly, we study several curious properties of the limiting infinite random trees. For each vertex $v$ of a tree $T\in{\mathbb{T}}_N$ its height $h(v)$ is defined as the distance to the root of $T$, i.e. the length of the shortest path connecting $v$ to the root along the edges of $T$. The height of a finite tree is the maximum height of its vertices. Let $n,N\in{\mathbb{N}}$. For any plane tree $T\in{\mathbb{T}}_N$, $\pi_{n,N} T$ denotes the neighborhood of the root of radius $n$, i.e. the subtree of $T$ spanned by all vertices with height not exceeding $n$.
For any $n$ and sufficiently large $N$, the map $\pi_{n,N}$ pushes the measure $\mu_N$ on ${\mathbb{T}}_N$ forward to the measure $\mu_N\pi_{n,N}^{-1}$ on $S_n$, the set of all trees with height $n$. \[th:main\_convergence\] For each $n\in{\mathbb{N}}$, the measures $\mu_N\pi_{n,N}^{-1}$ on $S_n$ converge in total variation, as $N\to\infty$, to a measure $P_n$. A proof of this theorem will be given in Section \[sec:first\_proofs\]. At this point we prefer to introduce more definitions that will allow us to describe the limiting measures $P_n$. We define $$\Delta=\left\{p=(p_0,\ldots,p_D)\in [0,1]^{D+1}:\ \sum_{i=0}^Dp_i=1,\ \sum_{i=0}^Dip_i=1\right\},$$ and let $$J(p)= -H(p)+\beta E(p),\quad p\in\Delta,$$ where $$H(p)=-\sum_{i=0}^{D}p_i\ln p_i$$ is the entropy of the probability vector $p\in\Delta$, and $$E(p)=\sum_{i=0}^{D}p_iE_i$$ is the associated energy. The function $J$ is used to construct the rate function in the Large Deviation Principle for large plane trees, see [@Bakhtin-Heitsch-1:MR2415118],[@Bakhtin-Heitsch-2]. It is strictly convex and its minimum value on $\Delta$ is attained at a unique point $p^*$. Using Lagrange’s method, we find that $$\ln p_i^*+1+\beta E_i+\lambda_1+i\lambda_2=0,\quad i=0,1,\ldots,D,$$ where $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers. So we see that $$p^*_i=Ce^{-\beta E_i}\rho^i,\quad i=0,1,\ldots,D, \label{eq:p*}$$ where $C=e^{-1-\lambda_1}$, and $\rho=e^{-\lambda_2}$. In particular, $$p^*_i>0,\quad i=0,1,\ldots,D. \label{eq:p*positive}$$ Notice that $\rho$ can be characterized as the unique solution of $$\sum_{i=0}^De^{-\beta E_i}\rho^i=\sum_{i=0}^Die^{-\beta E_i}\rho^i,$$ and $C$ may be defined via $$\label{eq:C}\frac{1}{C}=\sum_{i=0}^De^{-\beta E_i}\rho^i=\sum_{i=0}^Die^{-\beta E_i}\rho^i.$$ We denote $J^*=J(p^*)$ and $\sigma=e^{J^*}$. For a tree $\tau\in S_n$, we introduce $$\bar E(\tau)=\sum_{\substack{v\in V(\tau)\\h(v)<n}}E_{\deg(v)}.
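Numerically, $\rho$, $C$ and $p^*$ are easy to compute: the function $g(\rho)=\sum_{i=0}^D(i-1)e^{-\beta E_i}\rho^i$ is negative near $0$ and positive for large $\rho$, so a bisection locates the root. The sketch below is our own (the helper `optimal_p` is hypothetical, not from the paper) and confirms that the resulting $p^*$ lies in $\Delta$.

```python
import math

def optimal_p(E, beta, lo=1e-9, hi=None):
    """Solve sum_i e^{-beta E_i} rho^i = sum_i i e^{-beta E_i} rho^i
    for rho > 0 by bisection, then return (rho, C, p*) with
    p*_i = C e^{-beta E_i} rho^i."""
    D = len(E) - 1
    w = [math.exp(-beta * E[i]) for i in range(D + 1)]
    g = lambda rho: sum((i - 1) * w[i] * rho**i for i in range(D + 1))
    if hi is None:                       # g(0+) < 0; grow hi until g(hi) > 0
        hi = 1.0
        while g(hi) <= 0:
            hi *= 2
    for _ in range(200):                 # bisection keeps g(lo) < 0 < g(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    rho = 0.5 * (lo + hi)
    C = 1.0 / sum(w[i] * rho**i for i in range(D + 1))
    p = [C * w[i] * rho**i for i in range(D + 1)]
    return rho, C, p

rho, C, p = optimal_p([0.0, 0.3, 0.7, 1.2], beta=1.0)
# Both constraints defining Delta hold up to the bisection accuracy:
print(sum(p), sum(i * q for i, q in enumerate(p)))
```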
\label{eq:barE}$$ Notice that the summation above excludes the highest level of the tree. \[th:limit\_measure\_evaluation\] For any $n\in{\mathbb{N}}$, the limiting probability measure $P_n$ is given by $$P_n\{\tau\}=Q_n k \rho^{k}\sigma^m e^{-\beta \bar E(\tau)} \label{eq:P_n_up_to_Q_n}$$ where the tree $\tau\in S_n$ is assumed to have $k$ vertices of height $n$ and $m$ vertices of height less than $n$. The constant $Q_n$ is a normalizing factor. We give a proof of Theorems \[th:main\_convergence\] and \[th:limit\_measure\_evaluation\] in the next Section \[sec:first\_proofs\]. In Section \[sec:Q\] we compute the value of $Q_n$ explicitly. In Section \[sec:limiting\_tree\] we shall see that our convergence results may be interpreted as convergence to an infinite random tree. Although Theorems \[th:main\_convergence\] and \[th:limit\_measure\_evaluation\] do not hold in full generality for $D=\infty$, we expect that there is a large class of energy functions for which analogous results are true. Proof of Theorems \[th:main\_convergence\] and \[th:limit\_measure\_evaluation\] {#sec:first_proofs} ================================================================================ For both theorems it is sufficient to check that for any $n$ and any two trees $\tau_1,\tau_2\in S_n$, $$\label{eq:convergence_of_ratio} \lim_{N\to\infty}\frac{\mu_N\pi_{n,N}^{-1}\{\tau_1\}}{\mu_N\pi_{n,N}^{-1}\{\tau_2\}}=\frac{k_1e^{-\beta \bar E(\tau_1)}\rho^{k_1}\sigma^{m_1}}{ k_2e^{-\beta\bar E(\tau_2)}\rho^{k_2}\sigma^{m_2}},$$ where we assume that $\tau_1$ has $k_1$ vertices of height $n$, and $m_1$ vertices of height less than $n$; $\tau_2$ has $k_2$ vertices of height $n$, and $m_2$ vertices of height less than $n$. 
The energy of each tree $T$ with $\pi_{n,N} T=\tau_1$ is composed of contributions from the vertices of the tree $\tau_1$ of height less than $n$ (we call this contribution $\bar E(\tau_1)$, see ) and the contribution from the plane forest on $N-m_1$ vertices with $k_1$ connected components. The same applies to $\tau_2$. Let us recall (see e.g. Theorem 5.3.10 in [@Stanley:MR1676282]) that the number of plane forests on $N$ vertices with $k$ components and $r_0,r_1,\ldots,r_D$ vertices with branching numbers, respectively, $0,1,\ldots,D$ is $$\frac{k}{N}\binom{N}{r_0,\ r_1,\ \ldots,\ r_D }$$ if $r_0+\ldots+r_D=N$, $r_1+2r_2+\ldots+Dr_D=N-k$, and $0$ otherwise. Therefore, $$\begin{aligned} \frac{\mu_N\pi_{n,N}^{-1}\{\tau_1\}}{\mu_N\pi_{n,N}^{-1}\{\tau_2\}}&= \frac{e^{-\beta \bar E(\tau_1)}\sum_{r\in\Delta(N,m_1,k_1)}\frac{k_1}{N-m_1}\binom{N-m_1}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)}}{ e^{-\beta \bar E(\tau_2)}\sum_{r\in\Delta(N,m_2,k_2)}\frac{k_2}{N-m_2}\binom{N-m_2}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)}} \notag\\&=\frac{e^{-\beta\bar E(\tau_1)}I_1(N)}{e^{-\beta\bar E(\tau_2)}I_2(N)}. \label{eq:ratio1}\end{aligned}$$ Here $$\begin{gathered} \Delta(N,m,k)=\{r\in{\mathbb{Z}}_+^{D+1}:\ r_0+\ldots+r_D=N-m,\\ r_1+2r_2+\ldots+Dr_D=N-m-k\},\end{gathered}$$ and ${\mathbb{Z}}_+={\mathbb{N}}\cup\{0\}$. Fix any ${\varepsilon}>0$ and define $$\Delta(N,m,k,{\varepsilon})=\left\{r\in \Delta(N,m,k):\ \left|\frac{r}{N-m}-p^*\right|<{\varepsilon}\right\}.$$ We claim that $$\label{eq:Delta_equiv_Delta_eps} I_1(N)=I_1(N,{\varepsilon})(1+o(1)),\quad N\to\infty,$$ where $$I_1(N,{\varepsilon})=\sum_{r\in\Delta(N,m_1,k_1,{\varepsilon})}\frac{k_1}{N-m_1}\binom{N-m_1}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)}.
$$ In fact, using Stirling’s formula we see that if $r_i\ne 0$ for all $i=0,\ldots,D$, $$\begin{aligned} &\frac{k_1}{N-m_1}\binom{N-m_1}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)}\\ &=\frac{k_1(N-m_1)^{N-m_1-\frac{1}{2}}e^{-\beta E(r)}e^{\frac{\theta_{N-m_1}}{12(N-m_1)}-\frac{\theta_{r_0}}{12r_0}-\ldots-\frac{\theta_{r_D}}{12 r_D}}}{ (2\pi)^{\frac{D}{2}} r_0^{r_0+\frac{1}{2}}\ldots r_D^{r_D+\frac{1}{2}}}\\ &=\frac{k_1 e^{-(N-m_1)J(\frac{r}{N-m_1})}}{((N-m_1)r_0\ldots r_D)^{\frac{1}{2}}}\cdot\frac{e^{\frac{\theta_{N-m_1}}{12(N-m_1)}-\frac{\theta_{r_0}}{12r_0}-\ldots-\frac{\theta_{r_D}}{12 r_D}}}{(2\pi)^{\frac{D}{2}}},\end{aligned}$$ with $0<\theta_j<1$ for all $j\in{\mathbb{N}}$. If $N$ is sufficiently large, there is a vector $r^*(N)\in \Delta(N,m_1,k_1,{\varepsilon})$ such that $|\frac{r^*(N)}{N-m_1}-p^*|<{\varepsilon}/2$. Due to the strong convexity of $J$, there is a number $\delta>0$ independent of $N$ such that $$\min_{\Delta(N,m_1,k_1)\setminus\Delta(N,m_1,k_1,{\varepsilon})} J\left(\frac{r}{N-m_1}\right)>J\left(\frac{r^*(N)}{N-m_1}\right)+\delta,$$ so that the contribution from each element of  $\Delta(N,m_1,k_1)\setminus\Delta(N,m_1,k_1,{\varepsilon})$ is exponentially smaller than that of $r^*(N)$ as $N\to\infty$. The statement follows since the number of elements in $\Delta(N,m_1,k_1)\setminus\Delta(N,m_1,k_1,{\varepsilon})$ is bounded by $N^{D+1}$. This argument can be easily extended to the case where $r_i=0$ for some $i$, which completes the proof of our claim . Let us now define for $r\in \Delta(N,m_1,k_1,{\varepsilon})$, $$b(r) = (r_0+(k_2-k_1),r_1-(k_2-k_1)-(m_2-m_1),r_2,r_3,\ldots,r_D ).$$ Notice that for sufficiently small ${\varepsilon}$ and sufficiently large $N$, the image $\Delta'(N,{\varepsilon})$ of $\Delta(N,m_1,k_1,{\varepsilon})$ under $b$ is a subset of $\Delta(N,m_2,k_2)$. Moreover, $b$ is invertible and, therefore, establishes a bijection between $\Delta(N,m_1,k_1,{\varepsilon})$ and $\Delta'(N,{\varepsilon})$. 
Introducing $$I_2(N,{\varepsilon})=\sum_{r\in\Delta'(N,{\varepsilon})}\frac{k_2}{N-m_2}\binom{N-m_2}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)},$$ and using exactly the same reasoning as for $I_1$, we see that $$I_2(N)=I_2(N,{\varepsilon})(1+o(1)),\quad N\to\infty. \label{eq:I_2(eps)}$$ Equations ,, imply now that $$\begin{aligned} \frac{\mu_N\pi_{n,N}^{-1}\{\tau_1\}}{\mu_N\pi_{n,N}^{-1}\{\tau_2\}}&=\frac{e^{-\beta\bar E(\tau_1)}I_1(N,{\varepsilon})}{e^{-\beta \bar E(\tau_2)}I_2(N,{\varepsilon})}(1+o(1))\notag \\ &=\frac{k_1e^{-\beta\bar E(\tau_1)}}{k_2e^{-\beta\bar E(\tau_2)}}\cdot\frac{\sum_{r\in\Delta(N,m_1,k_1,{\varepsilon})}a_{1,r}}{\sum_{r\in\Delta(N,m_1,k_1,{\varepsilon})}a_{2,r}}(1+o(1)),\quad N\to\infty, \label{eq:ratio_I_with_eps}\end{aligned}$$ where $$a_{1,r}=\binom{N-m_1}{r_0,\ r_1,\ \ldots,\ r_D }e^{-\beta E(r)},$$ and $$a_{2,r}=\binom{N-m_2}{r_0+(k_2-k_1),r_1-(k_2-k_1+m_2-m_1),r_2,\ldots,r_D}e^{-\beta E(b(r))}.$$ Assuming that $k_1\ge k_2$ and $m_1\ge m_2$ (all the other cases can be treated in the same way), we get $$\frac{a_{1,r}}{a_{2,r}}=\frac{(r_1-(k_2-k_1)-(m_2-m_1))\ldots(r_1+1)}{((N-m_2)\ldots(N-m_1+1))\cdot (r_0\ldots (r_0+(k_2-k_1)+1))}R,$$ where $$R=R(k_1,m_1,k_2,m_2)=e^{\beta(E_0-E_1)(k_2-k_1)-\beta E_1(m_2-m_1)}.$$ Due to the definition of $\Delta(N,m,k,{\varepsilon})$, $$\frac{a_{1,r}}{a_{2,r}}\le \frac{((p_1^*+{\varepsilon})(N-m_1)-(k_2-k_1)-(m_2-m_1))^{-(k_2-k_1)-(m_2-m_1)}}{ (N-m_1)^{-(m_2-m_1)}((p_0^*-{\varepsilon})(N-m_1)+(k_2-k_1))^{-(k_2-k_1)}}R,$$ so that $$\begin{aligned} \notag \limsup_{N\to\infty}\sup_{r\in\Delta(N,m,k,{\varepsilon})}\frac{a_{1,r}}{a_{2,r}}&\le (p_1^*+{\varepsilon})^{-(m_2-m_1)} \left(\frac{p_1^*+{\varepsilon}}{p_0^*-{\varepsilon}}\right)^{-(k_2-k_1)}R\\ &\le \left(\frac{e^{-\beta E_1}}{p_1^*+{\varepsilon}}\right)^{m_2-m_1}\left(\frac{(p_0^*-{\varepsilon})e^{\beta(E_0-E_1)}}{p_1^*+{\varepsilon}}\right)^{k_2-k_1}. \label{eq:individual_terms1}\end{aligned}$$ In the same way, $$\label{eq:individual_terms2}
\liminf_{N\to\infty}\inf_{r\in\Delta(N,m,k,{\varepsilon})}\frac{a_{1,r}}{a_{2,r}}\ge \left(\frac{e^{-\beta E_1}}{p_1^*-{\varepsilon}}\right)^{m_2-m_1}\left(\frac{(p_0^*+{\varepsilon})e^{\beta(E_0-E_1)}}{p_1^*-{\varepsilon}}\right)^{k_2-k_1}.$$ Since the choice of ${\varepsilon}$ is arbitrary, relations ,, and imply that $$\lim_{N\to\infty}\frac{\mu_N\pi_{n,N}^{-1}\{\tau_1\}}{\mu_N\pi_{n,N}^{-1}\{\tau_2\}}=\frac{k_1e^{-\beta\bar E(\tau_1)}}{k_2e^{-\beta\bar E(\tau_2)}} \left(\frac{e^{-\beta E_1}}{p_1^*}\right)^{m_2-m_1}\left(\frac{p_0^*e^{\beta(E_0-E_1)}}{p_1^*}\right)^{k_2-k_1}. \label{eq:limit_fraction}$$ Using , we see that $$\frac{p_0^*e^{\beta(E_0-E_1)}}{p_1^*}=\frac{1}{\rho}. \label{eq:p_0_p_1_rho}$$ A direct computation based on and  implies $$H(p^*)=-\ln(C\rho)+\beta E(p^*).$$ Therefore, $$\label{eq:rhoC} \frac{e^{-\beta E_1}}{p_1^*}=\frac{1}{C\rho}=e^{-J(p^*)}=\frac{1}{\sigma}.$$ Now, is an immediate consequence of ,, and . [[[ $\Box$ ]{}]{}]{} Consistency and the precise value of $Q_n$ {#sec:Q} ========================================== We begin with the following consistency property: \[th:consistency\] The family of measures $(P_n)_{n\in{\mathbb{N}}}$ is consistent, i.e. for any $n$ and any $\tau\in S_n$ $$P_n\{\tau\}=\sum_{\substack{\tau'\in S_{n+1}\\\pi_n^{n+1}\tau'=\tau}}P_{n+1}\{\tau'\},$$ where $\pi_n^{n+1}$ denotes the projection map from $S_{n+1}$ to $S_n$. [[[Proof: ]{}]{}]{}This theorem is a direct consequence of the limiting procedure in Theorem \[th:main\_convergence\]. However, it is interesting to derive it from the specific form of $P_n$ provided by Theorem \[th:limit\_measure\_evaluation\]. Let us assume that $\tau\in S_n$, and $\tau$ has $k$ vertices of height $n$ and $m$ vertices of height less than $n$.
$$\begin{aligned} &\sum_{\substack{\tau'\in S_{n+1}\\ \pi_n^{n+1}\tau'=\tau}}P_{n+1}\{\tau'\}\\&= Q_{n+1}\sum_{i_1,\ldots,i_k=0}^De^{-\beta (\bar E(\tau)+E_{i_1}+\ldots+ E_{i_k} )}(i_1+\ldots+i_k)\rho^{i_1+\ldots+i_k}\sigma^{m+k}\\ &=Q_{n+1}e^{-\beta \bar E(\tau)}\sigma^{m+k}\sum_{i_1,\ldots,i_k=0}^D e^{-\beta(E_{i_1}+\ldots+E_{i_k})}(i_1+\ldots+i_k)\rho^{i_1+\ldots+i_k}\\ &=Q_{n+1}e^{-\beta \bar E(\tau)}\sigma^{m+k}k\sum_{i_1=0}^D (i_1\rho^{i_1}e^{-\beta E_{i_1}})\sum_{i_2=0}^D (\rho^{i_2}e^{-\beta E_{i_2}})\ldots \sum_{i_k=0}^D(\rho^{i_k}e^{-\beta E_{i_k}})\\ &=Q_{n+1}e^{-\beta \bar E(\tau)}\sigma^{m+k}k\frac{1}{C}\left(\frac{1}{C}\right)^{k-1}.\end{aligned}$$ In this calculation we denoted by $i_1,\ldots,i_k$ the branching numbers of the vertices of height $n$. We used the definition of $P_n$ in the first identity. The second identity is just a convenient rearrangement. The third one follows from the symmetry in the factor $(i_1+\ldots+i_k)$. In the last identity we used  and the fact that $p^*\in \Delta$. Identity   implies $$\label{eq:1_over_C} \frac{1}{C}=\frac{\rho}{\sigma},$$ so that $$\label{eq:Q=sum_of_Q} \sum_{\substack{\tau'\in S_{n+1}\\ \pi_n^{n+1}\tau'=\tau}}P_{n+1}\{\tau'\}=Q_{n+1}e^{-\beta \bar E(\tau)}\sigma^{m+k}k\sigma^{-k}\rho^k=\frac{Q_{n+1}}{Q_n}P_n\{\tau\}.$$ Since this holds true for all $\tau\in S_n$, we can conclude that $Q_n=Q_{n+1}$, which completes the proof.[[[ $\Box$ ]{}]{}]{} Identity  means that the constant $Q=Q_n$ in Theorem \[th:limit\_measure\_evaluation\] is the same for all $n$. Choosing $n=1$ we can compute it using : $$1=Q\sum_{k=1}^D k e^{-\beta E_k}\rho^k\sigma^1=\frac{Q\sigma}{C}.$$ A more precise version of Theorem \[th:limit\_measure\_evaluation\] easily follows: \[th:refined\_limit\_measure\_evaluation\] Let $C$ be defined by .
For each $n$, the limiting probability measure $P_n$ is given by $$P_n\{\tau\}= Ck e^{-\beta \bar E(\tau)}\rho^{k}\sigma^{m-1},$$ where the tree $\tau\in S_n$ is assumed to have $k$ vertices of height $n$ and $m$ vertices of height less than $n$. The limiting random tree {#sec:limiting_tree} ======================== Let $S_{\infty}$ be the set of infinite plane trees with branching number bounded by $D$. Theorem \[th:consistency\] along with the classical Daniell–Kolmogorov Consistency theorem (see [@Billingsley:MR1700749]) allows us to introduce a measure $P_{\infty}$ on $S_\infty$ consistent with the measures $P_n$ for all $n$. Intuitively this is clear, but to make it precise we need to introduce a coding of plane trees. We have chosen one of several possible coding schemes. Let $T$ be a plane tree (finite or infinite) with branching bounded by $D$. Then $T$ has a finite number $r_n\le D^n$ of vertices of any given height $n$. Let us say that all vertices of the same height $n$ form the $n$-th level of the tree. The vertices of the $n$-th level are naturally ordered and can be enumerated by numbers from $1$ to $r_n$ (except for the case when there are no vertices at the $n$-th level at all). Each of the $r_n$ vertices of the $n$-th level has a parent at the level $n-1$. Denote the number received by the parent of the $l$-th vertex of the $n$-th level under the described enumeration by $g_{n,l}$. If $r_n<l\le D^n$ we set $g_{n,l}=0$. We also agree that for the root of the tree, i.e., the first vertex in the zeroth level, $g_{0,1}=1$. Then for any $n\ge 0$ the $n$-th level can be encoded by a vector $$g_n=(g_{n,1},\ldots,g_{n,D^n})\in\{0,1,\ldots,D^{n-1}\}^{D^n},$$ and the whole tree can be identified with the sequence of levels $$(g_1,g_2,\ldots)\in {\mathbb{X}}=\prod_{n=1}^{\infty}\{0,1,\ldots,D^{n-1}\}^{D^n},$$ so that the space ${\mathbb{T}}$ of all plane trees (finite or infinite) with branching bounded by $D$ can be identified with a subset of ${\mathbb{X}}$.
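The coding just described is straightforward to implement. The sketch below is our own illustration (the helper `level_code` is hypothetical; the zero padding of each level up to length $D^n$ is omitted for readability): it recovers the vectors $g_n$ from a nested-tuple representation of a finite plane tree.

```python
def level_code(tree):
    """Encode a plane tree (nested tuples, root implicit) by its levels:
    the l-th entry of g_n is the 1-based index, in the enumeration of
    level n-1, of the parent of the l-th vertex of level n."""
    levels, current = [], [tree]        # vertices of the current level
    while current:
        g, nxt = [], []
        for parent, v in enumerate(current, start=1):
            for child in v:             # children inherit the parent index
                g.append(parent)
                nxt.append(child)
        if g:
            levels.append(g)
        current = nxt
    return levels

# Root with two children, the first of which has one child of its own:
print(level_code((((),), ())))   # [[1, 1], [1]]
```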
\[th:measure\_on\_infinite\_trees\] There is a unique measure $P_{\infty}$ on ${\mathbb{T}}$ such that it is consistent with the measures $P_n$: $$P_{\infty}\pi_n^{-1}=P_n,$$ where $\pi_n$ denotes the root’s neighbourhood of height $n$ of a tree from ${\mathbb{T}}$. This measure is concentrated on $S_\infty$. [[[Proof: ]{}]{}]{}The first statement follows from Theorem \[th:consistency\] and the Consistency theorem. The second statement is a consequence of the fact that for each $n\in{\mathbb{N}}$, $P_n$ is concentrated on trees with a positive number of vertices at the $n$-th level. [[[ $\Box$ ]{}]{}]{} The space ${\mathbb{X}}$ is compact in the product topology. Therefore, the convergence of finite-dimensional distributions established in Theorem \[th:main\_convergence\] and the classical Prokhorov theorem (see e.g. [@Billingsley:MR1700749]) imply the following result: \[th:weak\_convergence\] As $N\to\infty$, the measures $\mu_N$ viewed as measures on ${\mathbb{X}}$ converge weakly to $P_\infty$ in the product topology. This statement shows that there is a limiting object for the random trees that we consider. This object is an infinite random tree. For any $n\in{\mathbb{N}}$, the first $n$ levels of this random tree are distributed according to $P_n$. Let us now embed the space ${\mathbb{X}}$ into $\bar {\mathbb{X}}=({\mathbb{Z}}_+^{{\mathbb{N}}})^{{\mathbb{Z}}_+}$ filling up all the unused coordinates with zeros. The measure $P_\infty$ can be treated as a measure on $\bar{\mathbb{X}}$ thus generating a ${\mathbb{Z}}_+^{{\mathbb{N}}}$-valued process $(X_n)_{n=0}^\infty$ with discrete time. This process along with the associated random tree is visualized on Figure \[fig:coding\]. For any $n$, the map $X_n$ describes how the $n$-th level of the tree is built upon the $(n-1)$-th one. For a level $g:{\mathbb{N}}\to{\mathbb{Z}}_+$, we denote by $|g|$ the number of non-zero entries in $g$ (i.e. the number of vertices at the level).
For two levels $g$ and $g'$ we write $g\lhd g'$ if $\max_l g'_l\le |g|$. If $g\lhd g'$ then we define $$E(g,g')=\sum_{i=1}^{|g|} E_{\# \{j:\ g'_j=i\}},$$ the energy induced by level $g'$ at its parent level $g$. Theorem \[th:refined\_limit\_measure\_evaluation\] immediately implies the following result: \[th:markov\] The process $(X_n)$ defined above is Markov with transition probability $${\mathsf{P}}\{X_{n+1}=g'|\ X_{n}=g\}=\begin{cases}\frac{|g'|}{|g|}e^{-\beta E(g,g')}\rho^{|g'|-|g|}\sigma^{|g|},& g\lhd g',\\ 0,&\mbox{\rm otherwise.} \end{cases}$$ A limit theorem for the size of the $n$-th level {#sec:growth_of_levels} ============================================ Let us introduce $Y_n=|X_n|$, the random number of vertices at the $n$-th level. The following statement is a direct consequence of Theorem \[th:refined\_limit\_measure\_evaluation\] or Theorem \[th:markov\]: \[th:markov-counting\] The process $(Y_n)_{n=0}^{\infty}$ is Markov with transition probability $${\mathsf{P}}\{Y_{n+1}=k'|\ Y_{n}=k\}=\frac{k'}{k}\rho^{k'-k}\sigma^{k}\sum_{\substack{0\le i_1,\ldots,i_k\le D\\i_1+\ldots+i_k=k'}}e^{-\beta(E_{i_1}+\ldots+E_{i_k})}.$$ The next theorem shows that in fact $Y_n$ grows linearly in time. Let $$\mu=B_2-1, \label{eq:mu}$$ where $$B_n=\sum_{i=0}^Di^np^*_i,\quad n\in{\mathbb{N}}.$$ Then $\mu>0$, since $\mu$ is the variance of the nondegenerate distribution $p^*$ (recall that $\sum_{i=0}^Dip^*_i=1$). \[th:convergence\_to\_Gamma\] $$\frac{Y_n}{n}\cdot\frac{2}{\mu}\stackrel{Law}{\to}\Gamma,$$ where $\Gamma$ is a random variable with density $$p(t)=\begin{cases}te^{-t},&t\ge0\\ 0,&t<0\end{cases}$$ [[[Proof: ]{}]{}]{}Let us find the Laplace transform (generating function) of the distribution of $Y_n$: $$L_n(s)={\mathsf{E}}e^{sY_n},\quad s\le 0,$$ (this definition differs from the traditional one by a sign change of the argument) and prove that for any $x\le0$, $$\label{eq:convergence_of_laplace} \lim_{n\to\infty}L_n\left(\frac{x}{n}\right)=\frac{1}{\left(1-\frac{\mu x}{2}\right)^2}=:L_\infty(x),$$ the r.h.s.
being the Laplace transform of $$p(t)=\frac{4t}{\mu^2}e^{-\frac{2t}{\mu}},$$ the density of the r.v. $\frac{\mu}{2}\Gamma$. This will imply the desired result, see e.g. [@Kallenberg:MR854102 Appendix 5] for various statements on Laplace transforms. Theorem \[th:markov-counting\] and  imply $$\begin{aligned} &{\mathsf{E}}\left[e^{sY_{n+1}}|Y_n=k\right]=\sum_{k'}\frac{e^{sk'}k'}{k}\rho^{k'-k}\sigma^{k}\sum_{\substack{0\le i_1,\ldots,i_k\le D\\i_1+\ldots+i_k=k'}}e^{-\beta(E_{i_1}+\ldots+E_{i_k})}\\ &=\frac{\sigma^{k}}{k\rho^k}\sum_{0\le i_1,\ldots,i_k\le D}(i_1+\ldots+i_k)\rho^{i_1+\ldots+i_k}e^{s(i_1+\ldots+i_k)}e^{-\beta(E_{i_1}+\ldots+E_{i_k})}\\ &=w(s)v(s)^{k-1},\end{aligned}$$ where $$v(s)=\sum_{i=0}^D p^*_ie^{si}=\sum_{i=0}^D C\rho^ie^{-\beta E_i}e^{si},$$ and $$w(s)=\sum_{i=0}^D ip^*_ie^{si}=\sum_{i=0}^D Ci\rho^ie^{-\beta E_i}e^{si}=v'(s).$$ Therefore, $$L_{n+1}(s)={\mathsf{E}}w(s)v(s)^{Y_n-1} =\frac{w(s)}{v(s)}{\mathsf{E}}e^{\ln v(s) Y_n} =z(s)L_n(f(s)),\label{eq:L_iteration}$$ where $$\label{eq:taylor_for_f} f(s)=\ln v(s),$$ and $$z(s)=\frac{w(s)}{v(s)}=f'(s).$$ Both $z$ and $f$ are analytic functions. An elementary calculation shows that $$f(s)=s+\frac{\mu}{2} s^2+r(s),$$ and $$\ln z(s)=\mu s+q(s),$$ where $\mu=w'(0)-1$ was introduced in  and $$\label{eq:remainders} |r(s)|\le c|s|^3,\quad |q(s)|\le c|s|^2$$ for some $c>0$ and all $s\le 0$. From now on, $x\le 0$ is fixed.
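The recursion $L_{n+1}(s)=z(s)L_n(f(s))$ is easy to test numerically. The sketch below is our own sanity check (not part of the proof), with $p^*$ replaced by an arbitrary vector in $\Delta$: iterating from $L_0(s)=e^s$, the values $L_n(x/n)$ indeed approach $(1-\mu x/2)^{-2}$.

```python
import math

p = [0.3, 0.4, 0.3]          # any vector in Delta: sum p_i = sum i*p_i = 1
mu = sum(i * i * pi for i, pi in enumerate(p)) - 1.0   # variance of p

v = lambda s: sum(pi * math.exp(s * i) for i, pi in enumerate(p))
f = lambda s: math.log(v(s))
z = lambda s: sum(i * pi * math.exp(s * i) for i, pi in enumerate(p)) / v(s)

def L(n, s):
    """Iterate L_{k+1}(s) = z(s) L_k(f(s)) down to L_0(s) = e^s."""
    prod, t = 1.0, s
    for _ in range(n):
        prod *= z(t)
        t = f(t)
    return prod * math.exp(t)

x = -1.0
for n in (10, 100, 1000):
    print(n, L(n, x / n), (1 - mu * x / 2) ** -2)
```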
Using and the obvious identity $$L_0(s)=e^s,$$ we can write $$L_n\left(\frac{x}{n}\right)=z\left(\frac{x}{n}\right)\cdot z\left(f\left(\frac{x}{n}\right)\right)\cdot z\left(f^2\left(\frac{x}{n}\right)\right)\cdot \ldots \cdot z\left(f^{n-1}\left(\frac{x}{n}\right)\right)e^{f^{n}\left(\frac{x}{n}\right)},$$ where $$f^k(x)=\underbrace{f\circ f\ldots\circ f}_k(x),\quad k\ge 0,$$ so that we have to study the numbers $(x_{n,k})_{n\in{\mathbb{N}},k=0,\ldots,n}$ defined by $$x_{n,k}={f}^{k}\left(\frac{x}{n}\right).$$ We shall compare $(x_{n,k})_{n\in{\mathbb{N}},k=0,\ldots,n}$ to $(y_{n,k})_{n\in{\mathbb{N}},k=0,\ldots,n}$ defined by $$y_{n,k}=\frac{1}{\frac{n}{x}-\frac{\mu}{2}k}.$$ For fixed $n$, both sequences $(x_{n,k})$ and $(y_{n,k})$ are negative and increasing in $k$. Therefore $$|x_{n,k}|\le|x_{n,0}|=\frac{|x|}{n},$$ and $$|y_{n,k}|\le|y_{n,0}|=\frac{|x|}{n}.$$ Let us prove that for sufficiently large $n$ and any $k$ between $0$ and $n$, $$|y_{n,k}-x_{n,k}|\le k\left(\frac{\mu^2}{4}+c\right)\left(\frac{|x|}{n}\right)^3. \label{eq:y_close_to_x}$$ This is certainly true for $k=0$. For the induction step, we write $$|y_{n,k}-x_{n,k}|\le|y_{n,k}-f(y_{n,k-1})|+|f(y_{n,k-1})-f(x_{n,k-1})|=I_1+I_2.$$ A straightforward computation based on  shows that $$|I_1|=\left|\frac{\mu^2}{4}y_{n,k-1}^2y_{n,k}+r(y_{n,k-1})\right|\le \left(\frac{\mu^2}{4}+c\right)\left(\frac{|x|}{n}\right)^3.$$ Since $|f'(s)|\le 1$ for all sufficiently small $s$, we see that $$|I_2|\le|y_{n,k-1}-x_{n,k-1}|.$$ Combining these estimates we see that $$|y_{n,k}-x_{n,k}|\le \left(\frac{\mu^2}{4}+c\right)\left(\frac{|x|}{n}\right)^3+ |y_{n,k-1}-x_{n,k-1}|,$$ and our claim  follows. It immediately implies that $$|y_{n,k}-x_{n,k}|\le \frac{K}{n^2} \label{eq:y_close_to_x2}$$ for some $K=K(x)$, sufficiently large $n$ and all $k$.
We can now write $$\begin{aligned} \ln L_n\left(\frac{x}{n}\right)&=\sum_{k=0}^{n-1}\ln z(x_{n,k})+x_{n,n}\\ &=\sum_{k=0}^{n-1}\mu x_{n,k}+\sum_{k=0}^{n-1}q(x_{n,k})+x_{n,n}\\ &=\sum_{k=0}^{n-1}\mu y_{n,k}+\sum_{k=0}^{n-1}\mu (x_{n,k}-y_{n,k})+\sum_{k=0}^{n-1}q(x_{n,k})+x_{n,n}\\ &=I_1+I_2+I_3+I_4.\end{aligned}$$ It is straightforward to see that $\lim_{n\to\infty}I_2+I_3+I_4=0$. The first term $$\begin{aligned} I_1=\mu\sum_{k=0}^{n-1}\frac{1}{\frac{n}{x}-\frac{\mu}{2}k} =\mu x\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{1-\frac{\mu x}{2}\frac{k}{n}}\end{aligned}$$ can be viewed as a Riemann integral sum, so that $$\lim_{n\to\infty} \ln L_n\left(\frac{x}{n}\right)=\mu x \int_0^1\frac{du}{1-\frac{\mu x}{2}u}=-2\ln\left(1-\frac{\mu x}{2}\right),$$ which immediately implies . [[[ $\Box$ ]{}]{}]{} A functional limit theorem {#sec:flt} ========================== In this section we prove the following theorem on diffusion approximation for the process $Y$: \[th:FLT\] Let $$Z_n(t)=\frac{Y_{[nt]}}{n},\quad n\in{\mathbb{N}}, t\in{\mathbb{R}}_+.$$ Then, as $n\to\infty$, the distribution of $Z_n$ converges weakly in the Skorokhod topology in $D[0,\infty)$ to the unique nonnegative weak solution $Z$ of the stochastic Itô equation $$\begin{aligned} dZ(t)&=\mu dt+\sqrt{\mu Z(t)}dW(t),\\ Z(0)&=0.\end{aligned}$$ [[[Proof: ]{}]{}]{}Since the initial point $Z(0)=0$ is an “entrance and non-exit” singular point for the positive semi-axis (see the classification of singular points in [@Ito-Mckean:MR0345224] ), the existence and uniqueness of a nonnegative solution for all positive times is guaranteed. Let us define $$b(x)\equiv \mu,\quad\text{and}\ a(x)=\mu\cdot\max\{x, 0\},\quad x\in{\mathbb{R}},$$ and extend the equation above to the negative semi-axis by $$dZ(t)=b(Z(t)) dt+\sqrt{a(Z(t))}dW(t). \label{eq:extended equation}$$ An obvious argument shows that there is no solution starting at $0$ and being negative for some $t>0$. Therefore the weak existence and uniqueness in law hold for . 
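The limiting diffusion in Theorem \[th:FLT\] can also be probed numerically. The sketch below (a plain Euler-Maruyama discretization; step size and path count are arbitrary, and this is an illustration, not part of the proof) simulates $dZ=\mu\,dt+\sqrt{\mu Z}\,dW$ from $Z(0)=0$ and compares the empirical mean and variance of $Z(1)$ with $\mu$ and $\mu^2/2$, the moments of $\frac{\mu}{2}\Gamma$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, T, steps, paths = 1.0, 1.0, 400, 20000
dt = T / steps

# Euler-Maruyama for dZ = mu dt + sqrt(mu Z) dW, Z(0) = 0; Z is clipped at 0
# inside the square root to guard against discretization overshoots.
Z = np.zeros(paths)
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    Z = Z + mu * dt + np.sqrt(mu * np.clip(Z, 0.0, None)) * dW

# For this diffusion E Z(t) = mu t and Var Z(t) = mu^2 t^2 / 2,
# matching the Gamma limit (mu/2)*Gamma at t = 1.
print(Z.mean(), Z.var())
```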
According to Section 5.4B of [@Karatzas-Shreve:MR1640352], this existence and uniqueness is equivalent to the well-posedness of the martingale problem associated with $b$ and $a$. We will use Theorem 4.1 from [@Ethier-Kurtz:MR838085 Chapter 7] on diffusion approximation. The coefficients $a,b$ were defined on the whole real line so that the theorem applies directly, with no modification. We proceed to check its conditions. We must find processes $A_n$ and $B_n$ with the following properties: 1. Trajectories of $A_n$ and $B_n$ are in $D[0,\infty)$. 2. $A_n$ is nondecreasing. 3. $M_n=Z_n-B_n$ and $M_n^2-A_n$ are martingales with respect to the natural filtration generated by $Z_n,A_n,B_n$. 4. For every $T>0$ the following holds true: $$\begin{aligned} \lim_{n\to\infty}{\mathsf{E}}\sup_{t\le T}|Z_n(t)-Z_n(t-)|^2&=0, \\ \lim_{n\to\infty}{\mathsf{E}}\sup_{t\le T}|A_n(t)-A_n(t-)|&=0,\label{eq:jumps_of_A} \\ \lim_{n\to\infty}{\mathsf{E}}\sup_{t\le T}|B_n(t)-B_n(t-)|^2&=0,\label{eq:jumps_of_B} \\ \sup_{t\le T}\left|B_n(t)-\int_0^tb(Z_n(s))ds\right|=\sup_{t\le T}\left|B_n(t)-\mu t\right|&\stackrel{{\mathsf{P}}}{\to}0,\quad n\to\infty,\label{eq:consistency_of_B_with_b} \\ \sup_{t\le T}\left|A_n(t)-\int_0^ta(Z_n(s))ds\right|&\stackrel{{\mathsf{P}}}{\to}0,\quad n\to\infty.\label{eq:consistency_of_A_with_a}\end{aligned}$$ We shall need the following lemma: \[lm:conditional\_moments\] $$\begin{aligned} {\mathsf{E}}[Y_{j+1}|Y_j=k]=&\mu+k,\\ {\mathsf{E}}[Y_{j+1}^2|Y_j=k]=&B_3+3(k-1)B_2+(k-1)(k-2),\\ {\mathsf{E}}[Y_{j+1}^3|Y_j=k]=&B_4+4(k-1)B_3+6(k-1)(k-2)B_2+3(k-1)B_2^2\\&+(k-1)(k-2)(k-3),\\ {\mathsf{E}}[Y_{j+1}^4|Y_j=k]=&B_5+5(k-1)B_4+10(k-1)(k-2)B_3+10(k-1)B_3B_2 \\&+15(k-1)(k-2)B_2^2+10(k-1)(k-2)(k-3)B_2\\&+(k-1)(k-2)(k-3)(k-4).
\end{aligned}$$ [[[Proof: ]{}]{}]{}For the first of these identities, we write $$\begin{aligned} {\mathsf{E}}[Y_{j+1}|\ Y_{j}=k]\notag &=\frac{\sigma^{k}}{k\rho^k}\sum_{0\le i_1,\ldots,i_k\le D}(i_1+\ldots+i_k)^2\rho^{i_1+\ldots+i_k}e^{-\beta(E_{i_1}+\ldots+E_{i_k})} \notag \\ &=\frac{1}{k}\Biggl[k\left(\sum_{i_1=0}^D i_1^2 C\rho^{i_1}e^{-\beta E_{i_1}}\right)\left(\sum_{i_2=0}^D C\rho^{i_2}e^{-\beta E_{i_2}}\right)^{k-1} \notag \\ &+k(k-1)\left(\sum_{i_1=0}^D i_1 C\rho^{i_1}e^{-\beta E_{i_1}}\right)^2\left(\sum_{i_2=0}^D C\rho^{i_2}e^{-\beta E_{i_2}}\right)^{k-2}\Biggr] \notag \\ &=\frac{1}{k}(kB_2+k(k-1))=B_2+k-1 \\ &=\mu+k,\end{aligned}$$ where we used the symmetry of the terms $(i_1^2+\ldots+i_k^2)$, $i_1i_2+i_1i_3+\ldots+i_{k-1}i_k$ and . Next, $$\begin{aligned} {\mathsf{E}}[Y_{j+1}^2|\ Y_j=k] &=\frac{\sigma^{k}}{k\rho^k}\sum_{0\le i_1,\ldots,i_k\le D}(i_1+\ldots+i_{k})^3\rho^{i_1+\ldots+i_k}e^{-\beta(E_{i_1}+\ldots+E_{i_k})} \\ &=\frac{1}{k}\bigl(k B_3+3k(k-1)B_2+k(k-1)(k-2)\bigr) \\ &=B_3+3(k-1)B_2+(k-1)(k-2),\end{aligned}$$ and the other two identities in the statement of the lemma can be obtained in a similar way. [[[ $\Box$ ]{}]{}]{} Returning to the proof of the functional limit theorem, let us find the coefficient $B_n(t)$ first. The process $Z_n$ is constant on any interval of the form $[j/n,(j+1)/n)$. Due to Lemma \[lm:conditional\_moments\], $$\label{eq:conditional_1st_moment} {\mathsf{E}}\left.\left[Z_n\left(t+\frac{1}{n}\right)\right|Z_n(t)\right]=Z_n(t)+\mu\frac{1}{n},$$ so that we can set $B_n(t)=\mu[nt]/n$ to satisfy the martingale requirement on $M_n=Z_n-B_n$. Notice that with this choice of $B_n$, relations  and  are easily seen to be satisfied.
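The first identity of Lemma \[lm:conditional\_moments\] can also be verified by brute-force enumeration for a small alphabet. In the sketch below, a hypothetical critical three-point law $p^*$ (mean one) plays the role of $p^*_i=C\rho^ie^{-\beta E_i}$, so the weighted sums reduce to plain products of $p^*_{i_l}$:

```python
import itertools

import numpy as np

# Hypothetical critical law p* on {0,1,2} with mean 1, so mu = B2 - 1
p = np.array([0.25, 0.5, 0.25])
B2 = float(sum(i**2 * p[i] for i in range(3)))

checks = []
for k in (1, 2, 3, 4):
    tot = 0.0
    for idx in itertools.product(range(3), repeat=k):
        # E[Y_{j+1} | Y_j = k] = (1/k) * sum (i_1+...+i_k)^2 * prod p*_{i_l}
        tot += sum(idx) ** 2 * float(np.prod(p[list(idx)]))
    checks.append((tot / k, B2 + k - 1.0))
print(checks)
```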
Lemma \[lm:conditional\_moments\] also implies $$\begin{gathered} \label{eq:conditional_2nd_moment} {\mathsf{E}}\left.\left[Z^2_n\left(t+\frac{1}{n}\right)\right|Z_n(t)\right]\\ =\frac{B_3+3(nZ_n(t)-1)B_2+(nZ_n(t)-1)(nZ_n(t)-2)}{n^2},\end{gathered}$$ so that for $t\in \frac{1}{n}{\mathbb{Z}}$, $${\mathsf{E}}\left.\left[M^2_n\left(t+\frac{1}{n}\right)-M^2_n(t)\right|Z_n(t)\right]=\frac{1}{n}\mu Z_n(t)+\frac{1}{n^2}(B_3-B_2^2-B_2+1).$$ Therefore we can set $$A_n(t)=\sum_{j:\frac{j}{n}\le t}\left(\frac{\mu}{n} Z_n\left(\frac{j}{n}\right)+\frac{1}{n^2}(B_3-B_2^2-B_2+1)\right),$$ to satisfy the martingale requirement on $M_n^2-A_n$. Notice that $A_n$ is nondecreasing since $$\begin{aligned} \frac{1}{n}\mu Z_n(t)+\frac{1}{n^2}(B_3-B_2^2-B_2+1)&\ge \frac{1}{n^2}(B_2-1)+\frac{1}{n^2}(B_3-B_2^2-B_2+1)\\ &\ge\frac{1}{n^2}(B_3-B_2^2)\ge 0,\end{aligned}$$ where the last inequality follows from the Cauchy-Schwarz inequality and $B_1=1$. So properties 1–3 are satisfied, and  follows from the definitions of $a$ and $A_n$, and the convergence $$\lim_{n\to\infty} \sup_{t\le T}\sum_{j:\frac{j}{n}\le t}\frac{1}{n^2}(B_2-1)+\frac{1}{n^2}(B_3-B_2^2-B_2+1)=0.$$ To prove we use the definition of $A_n$ to write $${\mathsf{E}}\sup_{t\le T}|A_n(t)-A_n(t-)|\le \frac{\mu}{n}{\mathsf{E}}\sup_{\frac{j}{n}\le T} Z_n\left(\frac{j}{n}\right)+\frac{1}{n^2}(B_3-B_2^2-B_2+1)$$ so it suffices to prove that ${\mathsf{E}}\sup_{\frac{j}{n}\le T} Z_n\left(\frac{j}{n}\right)$ is bounded. The definition of $M_n$, Lyapunov’s inequality and Doob’s maximal inequality for submartingales imply that for some $c>0$: $$\begin{aligned} {\mathsf{E}}\sup_{\frac{j}{n}\le T} Z_n\left(\frac{j}{n}\right)&\le \mu T+ {\mathsf{E}}\sup_{\frac{j}{n}\le T} \left|M_n\left(\frac{j}{n}\right)\right| \\ &\le \mu T+ c\sqrt{{\mathsf{E}}M^2_n(T)} \\ &\le \mu T+c\sqrt{ 2({\mathsf{E}}Z^2_n(T)+\mu^2T^2)}.\end{aligned}$$ Lemma \[lm:conditional\_moments\] implies that ${\mathsf{E}}Z^2_n(T)$ has a limit, as $n\to\infty$, so that  is verified.
A lengthy but elementary calculation based on Lemma \[lm:conditional\_moments\] shows that $${\mathsf{E}}[(Y_{j+1}-Y_j)^4|Y_j]\le c(Y_j^2+1)$$ for some constant $c>0$, so that we can write $$\begin{aligned} {\mathsf{E}}\sup_{t\le T}(Z_n(t)-Z_n(t-))^2&\le \left[{\mathsf{E}}\sup_{t\le T}(Z_n(t)-Z_n(t-))^4\right]^{1/2} \\ &\le \left[\frac{1}{n^4}\sum_{j:\frac{j}{n}\le T}{\mathsf{E}}(Y_{j+1}-Y_{j})^4\right]^{1/2} \\ & \le \left[\frac{c}{n^4}\sum_{j\le nT}{\mathsf{E}}(Y_{j}^2+1)\right]^{1/2}.\end{aligned}$$ Since Lemma \[lm:conditional\_moments\] implies that for some constant $c_1>0$, $${\mathsf{E}}(Y_j^2+1)\le c_1j^2,\quad j\in{\mathbb{N}},$$ we conclude that $$\begin{aligned} {\mathsf{E}}\sup_{t\le T}(Z_n(t)-Z_n(t-))^2 &\le \sqrt{\frac{c}{n^4}\cdot n\cdot c_1 n^2T^2}\to 0,\quad n\to\infty,\end{aligned}$$ and the proof of the theorem is complete. [[[ $\Box$ ]{}]{}]{} Diffusion limit for finer structure of the random tree {#sec:SPDE} ====================================================== In this section we present a non-rigorous and sketchy description for the diffusion limit of the infinite Markov random tree itself rather than its width given by $Y_n$ at time $n$. Let us fix any time $n_0$ and divide all $Y_{n_0}$ vertices into $r$ nonempty disjoint groups. For any $n\ge n_0$ denote the progeny of the $i$-th group at time $n$ by $V_{i,n}$. We want to study the coevolution of $(V_{1,n},\ldots,V_{r,n})$. Though each $V_{i,n}$ is not a Markov process, it is elementary to see that the whole vector is a homogeneous Markov process. We would like to compute the diffusion limit for this vector under an appropriate rescaling: $$\frac{1}{n}(V_{1,[nt]},\ldots,V_{r,[nt]}).$$ We need to find the local drift and diffusion coefficients for the limiting process. Let $j_1+\ldots+j_r=k$.
Then computations similar to Lemma \[lm:conditional\_moments\] produce $${\mathsf{E}}\left[\frac{V_{1,m+1}}{n}-\frac{j_1}{n}\Bigr|\ \frac{1}{n}(V_{1,m},\ldots,V_{r,m})=\frac{1}{n}(j_1,\ldots,j_r)\right]=\mu\frac{j_1/n}{k/n}\frac{1}{n},$$ so, by symmetry, the local limit drift is $$b_i(v)=\mu \frac{v_i}{v_1+\ldots+v_r}.$$ Similarly, the diagonal terms for local diffusion: $$\begin{aligned} {\mathsf{E}}&\left[\left(\frac{V_{1,m+1}}{n}-\frac{j_1}{n}\right)^2\Bigr|\ \frac{1}{n}(V_{1,m},\ldots,V_{r,m})=\frac{1}{n}(j_1,\ldots,j_r)\right]\\&=\mu\frac{j_1}{n}\frac{1+\frac{B_3-3B_2+2}{k}}{n},\end{aligned}$$ and $$a_{ii}(v)=\mu v_i.$$ For the off-diagonal terms a computation produces $$\begin{aligned} {\mathsf{E}}[(V_{1,m+1}-j_1)(V_{2,m+1}-j_2)|\ V_{1,m}=j_1,\ldots,V_{r,m}=j_r]=0,\end{aligned}$$ so that $$a_{ij}\equiv0,\quad i\ne j.$$ So, the limiting equations are $$dV_i(t)=\mu\frac{V_i(t)}{\sum_{j}V_j(t)}dt+\sqrt{\mu V_i(t)}{{\bf 1}}_{\{V_i>0\}}dW_i(t).$$ Let us introduce cumulative counts $$U_j=V_1+\ldots+V_j.$$ Then $Z(t)=U_r(t)$, and $$\begin{gathered} \label{eq:dU} dU_j=\mu\frac{U_j}{U_r}dt+\sqrt{\mu U_1}{{\bf 1}}_{\{U_1>0\}}dW_1+\ldots\\\ldots+\sqrt{\mu (U_j-U_{j-1})}{{\bf 1}}_{\{U_j-U_{j-1}>0\}}dW_j.\end{gathered}$$ Then, for each $0\le u_1\le\ldots\le u_r$, we can solve this equation with initial data $$(U_1(t_0),\ldots,U_r(t_0))=(u_1,\ldots,u_r),$$ which gives a random nondecreasing map $$\label{eq:solution_map} \Phi=\Phi_{t_0}:u\mapsto (U(t))_{t\ge t_0}.$$ Here $u$ runs through the set $\{u_1,\ldots,u_r\}$. It is clear though that if we insert another point $u'$ between $0$ and $u^*=u_r$, then solving the stochastic equation above for the modified set of initial points, we see that the new solution map is a monotone extension of the old one. Adding points of a countable dense set one after another, we can extend the solution map onto $u\in[0,u^*]$.
So, for each $u^*\ge0$ we are able to define a random monotone map $\Phi:[0,u^*]\to {\mathbb{R}}_+^{[t_0,\infty)}$. Our last point is to represent these solution maps via stochastic integrals w.r.t. a Brownian sheet $(W(x,t))_{t,x\ge0}$, i.e. a continuous Gaussian random field with zero mean and $${\mathop{\mathsf{cov}}}(W(x_1,t_1),W(x_2,t_2))=(x_1\wedge x_2)(t_1\wedge t_2),\quad x_1,x_2,t_1,t_2\ge0.$$ Equations imply that $\Phi(u,t)$, $t\ge t_0$, $u\in[0,u^*]$ is equal in law to the monotone (in $u$) solution of the following SPDE: $$\begin{aligned} d\Phi(u,t)=&\mu\frac{\Phi(u,t)}{\Phi(u^*,t)}dt+\int_{x\in{\mathbb{R}}}{{\bf 1}}_{[0,\mu \Phi(u,t)]}W(dx\times dt),\\ \Phi(u,t_0)=&u,\quad u\in[0,u^*].\end{aligned}$$ A rigorous treatment of the limiting solution $\Phi$, and a precise convergence statement will appear elsewhere. [^1]: School of Mathematics, Georgia Tech, Atlanta GA, 30332-0160; email:[email protected], 404-894-9235 (office phone), 404-894-4409(fax)
--- abstract: 'Environmental molecular beam experiments are used to examine water interactions with liquid methanol films at temperatures from 170 K to 190 K. We find that water molecules with 0.32 eV incident kinetic energy are efficiently trapped by the liquid methanol. The scattering process is characterized by an efficient loss of energy to surface modes with a minor component of the incident beam that is inelastically scattered. Thermal desorption of water molecules has a well characterized Arrhenius form with an activation energy of 0.47$\pm 0.11$ eV and pre-exponential factor of $4.6 \times 10^{15\pm3} $ s$^{-1}$. We also observe a temperature dependent incorporation of incident water into the methanol layer. The implication for fundamental studies and environmental applications is that even an alcohol as simple as methanol can exhibit complex and temperature dependent surfactant behavior.' author: - 'Erik S. Thomson' - Xiangrui Kong - 'Patrik U. Andersson' - Nikola Marković - 'Jan B. C. Pettersson' title: Collision Dynamics and Solvation of Water Molecules in a Liquid Methanol Film --- ![image](./TOCgraph.pdf){width="1\columnwidth"} \[fig:skem\] Methanol (CH$_3$OH) differs from water only due to the interchange of a methyl group for a single hydrogen. However, the effect is strong and can be particularly interesting when the two compounds interact. Previous studies of CH$_3$OH - H$_2$O interactions have focused on liquids [@Jayne1991], ices [@Wolff2007], amorphous solid water [@Bahr2008], and a range of environments. Particular interest has focused on uptake and sticking coefficients for methanol on ice and liquid surfaces [@Jayne1991; @Morita2003] and its ensuing surfactant effects [@Hudson2002; @Bahr2008]. Molecular dynamics simulations show that methanol molecules strongly interact with H$_2$O surfaces and that the methanol-water interaction can be stronger than the methanol-methanol attraction [@Picaud2000]. 
One way to think about the methanol-water interaction is as a competition between the hydroxyl group’s affinity for hydrogen bonding and the hydrophobic nature of the methyl group [@Morita2003]. These different interactions can lead to stable organic monolayers on ice and water [@Picaud2000]. In the atmosphere, the interaction of gas-phase molecules and surfaces has wide-ranging effects for physical and chemical processes such as cloud formation and photochemistry. Alcohol-coated surfaces may be important and substantial sources or sinks for HO$_x$ radicals, especially in the dry upper troposphere where the lack of water vapor limits its production through ozone photolysis [@Winkler2002; @Hudson2002]. Atmospheric methanol is a surfactant of particular interest because of its ubiquity [@Hudson2002], and because it also serves as a simple model for longer aliphatic molecules. Alcohol coverages may limit atmospheric particle growth because they may render surfaces somewhat hydrophobic. However, this is still controversial, as mass accommodation coefficients and effects of surfactant properties on mass transfer through surface layers remain poorly constrained. Experimental measurements of size-selected water droplets interacting with CH$_3$OH vapor have found mass accommodation coefficients as low as $\approx0.06$ [@Jayne1991] while dynamical simulations of CH$_3$OH molecules impinging on pure water surfaces suggest coefficients of order unity [@Morita2003]. The importance of methanol-water interactions has motivated our fundamental experimental studies of the molecular level dynamics of such systems. Here we report findings from environmental molecular beam (EMB) experiments where supersonic D$_2$O molecules collide with a thin liquid-like methanol layer with temperatures $T_s$ between 170 K and 190 K.
The EMB apparatus, whose design allows us to probe surfaces under higher, environmentally relevant, vapor pressures than are accessible with standard MB technology, has been described in detail previously [@Andersson2000a; @Suter2006; @Kong2011]. It consists of differentially pumped vacuum chambers to achieve a central ultra-high vacuum (UHV). Methanol condenses on a graphite surface that is immediately surrounded by an inner chamber enabling a small region of finite vapor pressure and allowing for the formation of stable methanol surface films in dynamical equilibrium with their vapor, while simultaneously minimizing the molecular beam’s transmission attenuation. In contrast with traditional molecular beam experiments that are performed under strict UHV conditions, the inner chamber allows us to maintain methanol vapor pressures in the $10^{-3}$ mbar range. The D$_2$O is added to a He gas beam to increase its kinetic energy and allow us to simultaneously probe methanol surface coverage. The incident kinetic energy (0.32 eV) results in measurable inelastic scattering, allowing us to probe collision dynamics, while monitoring He elastic scattering from the graphite substrate ensures complete methanol coverage throughout experiments [@Kong2011]. Pulses of the gas beam are synchronized with a frequency chopper to select the central portion of each pulse, producing discrete 400 $\mu$s gas impulses. Within the UHV chamber a differentially pumped quadrupole mass spectrometer (QMS) ionizes particles leaving the surface by electron bombardment. Detected time versus ion intensity counts are processed and output by a multi-channel scaler with a dwell time of 10 $\mu$s. With the known experimental geometry the measured arrival intensities are easily translated into time-of-flight (TOF) measurements for particles traveling, within the plane defined by the beam and surface normal, from the surface to the detector.
Thus the TOF distributions can be analyzed to illuminate the important surface processes [@Scoles1988]. Within our experimental temperature range, previous X-ray diffraction studies have shown thin layers of methanol to be liquid [@Morishige1990]. The melting point for a single monolayer is 135 K and increases with increasing film thickness to the bulk melting temperature of 175.4 K. Monolayer methanol coverage has also been shown to have higher desorption energies than subsequent layers [@Bolina2005a; @Ulbricht2006], implying that the first layer of CH$_3$OH completely wets the graphite and adheres strongly. Here we adjust the pressure to be high enough to maintain a complete CH$_3$OH layer on the graphite surface that we monitor by measuring elastic He scattering [@Scoles1992], and maintain a low enough pressure to avoid multi-layers, observed with a light reflection technique. Film thickness can be continuously monitored by observing interference from the reflections of a 670 nm laser. In these experiments no beam attenuation and therefore no methanol film growth was observed, thus ensuring that for the experimental temperatures the CH$_3$OH remains a thin layer. Heavy water is substituted for H$_2$O to enhance the signal-to-noise ratio, and highly oriented pyrolytic graphite (HOPG, grade ZYB) is used as a substrate. The use of HOPG is beneficial due to its well characterized helium scattering properties [@Scoles1992], its well studied interactions with methanol layers [@Morishige1990; @Bolina2005a; @Wolff2007], and its utility as an analog for atmospheric particles like black carbon [@Perraudin2007]. We have systematically characterized D$_2$O interactions with liquid methanol layers and compared with the bare graphite surface. For all experiments the incident beam angle and measured angle of reflection are limited to $45^\circ$, due to the constraints of the inner-most chamber.
Methanol pressures in the inner chamber directly above the 185 K surface are estimated to be $\approx 2\times10^{-3}$ mbar from monitoring the UHV chamber pressure. For surface temperatures below 185 K the added methanol does not significantly contribute to the UHV background, making an inner chamber pressure estimate impossible. \[fig:dists\] compares the TOF distributions of D$_2$O scattered from methanol covered graphite for different surface temperatures. ![Measured TOF distributions from D$_2$O incident on a methanol covered graphite surface. Red points are a five point step-wise average of experimental data. The solid black curve represents the fitted distribution with inelastic and trapping-desorption components represented by the blue and green dashed curves, respectively. []{data-label="fig:dists"}](./fig1.pdf){width="1\columnwidth"} A fitted distribution is plotted above the recorded data, in addition to the individual fits for the inelastic and trapping-desorption components. Clearly, D$_2$O collisions with the methanol covered surface exhibit inelastic scattering and trapping-desorption behaviors. At low temperatures the inelastic component of the scattered intensity dominates the signal. Above 180 K this changes significantly and thermally activated molecules more quickly desorb from the surface. The non-linear least squares fitting and quantitative analysis of the final TOF distributions can be summarized as a convolution of the initial beam distribution with a component of inelastically scattered particles and another component of thermally desorbed particles. The initial beam distribution is measured directly by rotating the QMS into the beam path. Theoretical inelastic and trapping-desorption distributions are calculated and separately convolved with the incident beam. Finally, using a non-linear fitting algorithm, a linear combination of these distributions is fitted to the measured data.
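A minimal sketch of such a two-component fit is given below. All numbers (flight path, time base, rate constant, velocity parameters, noise level) are hypothetical, the convolution with the incident pulse is ignored, and `scipy.optimize.curve_fit` stands in for the non-linear fitting algorithm; it illustrates the idea of decomposing a synthetic TOF signal into a thermal-desorption tail plus an inelastic component:

```python
import numpy as np
from scipy.optimize import curve_fit

L = 0.2  # hypothetical surface-to-detector flight path (m)
t = np.linspace(2e-4, 5e-3, 400)  # arrival times (s)


def model(t, c1, k, c2, vbar, vis):
    # desorption tail c1*exp(-k t) plus inelastic c2*v^4*exp(-((v-vbar)/vis)^2)
    v = L / t
    return c1 * np.exp(-k * t) + c2 * v**4 * np.exp(-(((v - vbar) / vis) ** 2))


rng = np.random.default_rng(1)
true = (1.0, 900.0, 2e-11, 600.0, 150.0)  # hypothetical "true" parameters
y = model(t, *true) + rng.normal(0.0, 0.005, t.size)  # synthetic noisy signal

popt, _ = curve_fit(model, t, y, p0=(0.8, 800.0, 1.5e-11, 550.0, 120.0),
                    maxfev=20000)
print(popt[1])  # recovered desorption rate constant k (1/s)
```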
For these experiments we assume first-order thermal desorption with a residence time behavior of the form, $$\label{eq:desorp} F_{res}(t)={C}_1\exp(-kt).$$ Here ${C}_1$ is a fitted scaling factor, $t$ is the surface residence time, and $k$ is the desorption rate constant. The inelastic scattering distribution is also assumed to have the common form [@Suter2006], $$\label{eq:Iis} I_{is}(t)={C}_2 v^4 \exp\left[{-\left(\frac{v-\bar{v}}{v_{is}}\right)^2}\right],$$ where ${C}_2$ is a second fit scaling factor, $v$ is the particle velocity calculated from the measurement time and flight path length, $\bar{v}$ is the average inelastically scattered beam velocity, and $v_{is}$ is, $$\label{eq:vbeam} v_{is}=\sqrt{\frac{ 2k_B T_b}{m}}.$$ The temperature $T_b$ represents the inelastically scattered beam temperature, $k_B$ is the Boltzmann constant, and $m$ is the molecular mass in kilograms. Both $\bar{v}$ and $T_b$ are left as free fitting parameters when assuming an inelastic contribution. At the lowest experimental surface temperatures $T_s \leq 170$ K trapping-desorption occurs on long time scales merging with the background. Thus the recorded signal is primarily due to the inelastically scattered component. As the temperature increases D$_2$O more efficiently desorbs from the surface, shrinking the exponential’s tail but increasing the contribution of trapping-desorption to the measured signal (\[fig:dists\]). The temperature dependence of the desorption rate coefficient is summarized in \[fig:Arh\]. ![Arrhenius plot of the rate coefficients for desorption of D$_2$O from liquid methanol. 
The solid line is a linear least-squares fit to the points with a slope of $E_A=0.47\pm0.11$ eV resulting in a pre-exponential factor $A=4.6 \times 10^{15\pm3}$ s$^{-1}$.[]{data-label="fig:Arh"}](./fig2.pdf){width="1.0\columnwidth"} The linear response of \[fig:Arh\] demonstrates that the desorption kinetics of D$_2$O from methanol do exhibit Arrhenius-type behavior, $k=A \exp (-E_A/k_BT_s)$. The resulting activation energy $E_A=0.47\pm0.11$ eV and pre-exponential factor $A=4.6 \times 10^{15\pm3}$ s$^{-1}$, with their respective 95% confidence intervals, are in good agreement with previous measurements of kinetic parameters for thin layers of pure H$_2$O and CH$_3$OH [@Ulbricht2006]. This result is not unanticipated because the hydrogen bonds associated with both methanol and water are expected to place the dominant constraint on their surface behavior. Various transition state theory models have predicted that such adsorbate interactions result in comparable pre-exponential factors [@Ulbricht2006; @Seebauer1988]. Thus the desorption behavior of water from methanol is similar to the desorption of either compound from itself. In contrast to the liquid methanol, thermal desorption of D$_2$O from bare graphite is very fast. \[fig:baregr\] shows that for bare graphite desorption curves do not vary with temperature and $k\gg10^3$ s$^{-1}$. ![Measured TOF distributions (cf. \[fig:dists\]) for D$_2$O incident on bare graphite.[]{data-label="fig:baregr"}](./fig3.pdf){width="1.0\columnwidth"} Directly comparing the inelastic scattering components for the graphite and methanol surfaces is difficult for a single scattering angle, due to the fact that the angular distributions depend upon the type of surface. However, comparison with earlier studies of water interactions with graphite [@Markovic1999] suggests that the inelastic component is small relative to trapping-desorption.
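The Arrhenius analysis amounts to a linear least-squares fit of $\ln k$ against $1/T_s$. The sketch below generates synthetic, noise-free rate constants from the reported parameters (so it illustrates the fitting step only, not the measured data) and recovers $E_A$ from the slope and $A$ from the intercept:

```python
import numpy as np

kB = 8.617e-5            # Boltzmann constant (eV/K)
E_A, A = 0.47, 4.6e15    # reported activation energy (eV) and prefactor (1/s)

T = np.linspace(170.0, 190.0, 9)          # surface temperatures (K)
lnk = np.log(A) - E_A / (kB * T)          # Arrhenius: ln k = ln A - E_A/(kB T)

# Linear fit of ln k vs 1/T: slope = -E_A/kB, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, lnk, 1)
print(-slope * kB, np.exp(intercept))
```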
The average final kinetic energy of scattered molecules was 10% of the incident energy for the methanol-covered surface and 55% for the bare graphite, independent of surface temperature. The results for bare graphite are in good agreement with the results for the H$_2$O-graphite system [@Markovic1999]. For the methanol film the results confirm that D$_2$O collisions are highly inelastic and characterized by very efficient transfer of energy to surface modes. For comparison, 20-25% of the kinetic energy is conserved by Ar, HCl, and H$_2$O molecules with similar incident parameters, scattering from water ice [@Andersson2000a; @Andersson2000b; @Gibson2011]. With the help of detailed classical molecular dynamics simulations, the trapping-desorption distributions from bare graphite can be used to estimate the trapping efficiency of the methanol covered surface. We performed new calculations focusing on the trapping probability of D$_2$O incident on a bare graphite surface in an identical manner to previously published results for H$_2$O [@Markovic1999]. Molecules incident at $45^\circ$ on a 180 K surface with kinetic energies of 0.32 eV were simulated to have an 80% chance of being trapped. Using this result we calculated an incident beam intensity from the trapping-desorption of the bare graphite case. Normalizing for beam attenuation due to higher vapor pressures in the methanol experiments we computed the fraction of incident molecules measured in the liquid methanol trapping-desorption distributions and plotted them in \[fig:TDfrac\]. An uncertainty of $\approx \pm 20 \%$ in the absolute values plotted in \[fig:TDfrac\] results from the limitations of the simulated trapping and experimental measurements. However, such uncertainties are systematic and therefore the strong observed trend with temperature persists without regard to the absolute ratio. ![The fraction of trapping-desorbed (TD) D$_2$O molecules relative to the incident number as a function of temperature.
For clarity, error bars are omitted and an explanation of the uncertainty is restricted to the main text.[]{data-label="fig:TDfrac"}](./fig4.pdf){width="1.0\columnwidth"} \[fig:TDfrac\] shows that there is a clear trend in the trapping-desorption fraction as a function of temperature. At high temperatures almost all incident molecules are trapped and subsequently desorbed from the methanol surface. At low temperatures as few as 20% of the molecules are thermally desorbed from the surface, leaving a large unaccounted-for water reservoir and suggesting that on the time scale of the experiment (10 ms) some D$_2$O is lost within the methanol layer. In this case two possible D$_2$O sinks exist. First, there may exist some level of H/D isotopic exchange between the methanol layer and the D$_2$O beam. Previously, rapid H/D exchange has been measured for cryogenic methanol systems at temperatures above 150 K [@Souda2003]. For CH$_3$OH interacting with a D$_2$O ice layer, @Souda2003 found almost complete H/D exchange. However, for the reverse case of D$_2$O adsorbed on methanol surfaces H/D exchange was not explicitly observed. Rather, above 120 K their results suggested that D$_2$O either dissolved into the bulk methanol by forming hydrogen bonds, or formed islands that were subsequently covered by CH$_3$OH. In our experiments, formation and desorption of HDO could serve as an indicator of H/D exchange taking place within the methanol layer. However, we were unable to observe H/D exchange over the entire temperature range; within the noise of the experimental measurements, HDO formation was limited to less than 1% of the incident molecules. This observation is further supported by measurements of H/D exchange on mineral [@Hsieh1999] and liquid [@Dempsey2011] surfaces and for H$_2$O/CD$_3$OD mixtures at up to 170 K [@Ratajczak2009] that indicate time scales of minutes to hours and longer for significant isotopic exchange.
It is likely that desorption of isotopically light water in a CH$_3$OH-D$_2$O system would be thermally activated at long time scales, similar to what is observed for acids and cold salty water solutions [@Brastad2011]. We conclude that below 185 K water, which only desorbs on long timescales, is incorporated into the methanol layer. Such a process would only contribute to the background D$_2$O levels of the measurements, and thus be subtracted as background during subsequent analysis. We have studied water interactions with liquid methanol layers on a graphite substrate between 170 K and 190 K. At these temperatures, the methanol surface layer was maintained in a dynamic state with the help of a finite vapor pressure above the surface. Collisions between water molecules and the liquid methanol layer were observed to result in efficient surface trapping with only a small fraction of the hyperthermal incident molecules inelastically scattered. The escaping molecules had lost more than 80% of their incident kinetic energy, indicating a very efficient energy transfer to surface modes. The desorption kinetics have an Arrhenius-type behavior and the activation energy we have calculated is indicative of multiple hydrogen bonds between D$_2$O and CH$_3$OH molecules within the liquid [@Beta2005]. On the millisecond time scale of the experiments desorption competes with loss of D$_2$O to more strongly bound states within the layer, and high temperature is observed to favor desorption. Loss of D$_2$O due to H/D exchange is likely less important since no desorbing HDO was detected in the experiments. This study contributes to the fundamental understanding of gas accommodation and uptake in organic liquids and is of potential importance for the description of the effect of organic surfactants on heterogeneous processes in the atmosphere and in other environments.
One immediate implication is that the effect of CH$_3$OH as a common atmospheric surfactant will be temperature dependent and may even contribute to water uptake by otherwise hydrophobic particles. This provides context for continued studies of more complicated surfactants of environmental importance, such as longer-chain alcohols. Funding for this research was provided by the Swedish Research Council and the University of Gothenburg. We thank the anonymous referees whose suggestions improved this letter. Jayne, J. T.; Duan, S. X.; Davidovits, P.; Worsnop, D. R.; Zahniser, M. S.; Kolb, C. E. Uptake of Gas-Phase Alcohol and Organic-Acid Molecules by Water Surfaces. *J. Phys. Chem.* **1991**, *95*, 6329–6336 Wolff, A. J.; Carlstedt, C.; Brown, W. A. Studies of Binary Layered [CH]{}$_3$[OH]{}/[H]{}$_2$[O]{} Ices Adsorbed on a Graphite Surface. *J. Phys. Chem. [C]{}* **2007**, *111*, 5990–5999 Bahr, S.; Toubin, C.; Kempter, V. Interaction of Methanol with Amorphous Solid Water. *J. Chem. Phys.* **2008**, *128*, 134712 Morita, A. Molecular Dynamics Study of Mass Accommodation of Methanol at Liquid-Vapor Interfaces of Methanol/Water Binary Solutions of Various Concentrations. *Chem. Phys. Lett.* **2003**, *375*, 1–8 Hudson, P. K.; Zondlo, M. A.; Tolbert, M. A. The Interaction of Methanol, Acetone, and Acetaldehyde with Ice and Nitric Acid-Doped Ice: Implications for Cirrus Clouds. *J. Phys. Chem. [A]{}* **2002**, *106*, 2882–2888 Picaud, S.; Toubin, C.; Girardet, C. Monolayers of Acetone and Methanol Molecules on Ice. *Surf. Sci.* **2000**, *454*, 178–182 Winkler, A. K.; Holmes, N. S.; Crowley, J. N. Interaction of Methanol, Acetone and Formaldehyde with Ice Surfaces Between 198 and 223 K. *Phys. Chem. Chem. Phys.* **2002**, *4*, 5270–5275 Andersson, P. U.; Någård, M. B.; Pettersson, J. B. C. Molecular Beam Studies of [HCl]{} Interactions with Pure and [HCl]{}-covered Ice Surfaces. *J. Phys. Chem. [B]{}* **2000**, *104*, 1596–1601 Suter, M.
T.; Bolton, K.; Andersson, P. U.; Pettersson, J. B. Argon Collisions with Amorphous Water Ice Surfaces. *Chem. Phys.* **2006**, *326*, 281–288 Kong, X.; Andersson, P. U.; Marković, N.; Pettersson, J. B. C. Environmental Molecular Beam Studies of Ice Surface Processes. In Y. Furukawa, G. Sazaki, T. Uchida, and N. Watanabe, editors, *Physics and Chemistry of Ice 2010*. Hokkaido University Press. **2011**, 79–88 Scoles, G., Bassi, D., Buck, U., Lainé, D., Eds. *Atomic and Molecular Beam Methods*; Oxford University Press: New York, NY, 1988; Vol. 1 Morishige, K.; Kawamura, K.; Kose, A. X-Ray Diffraction Study of the Structure of a Monolayer Methanol Film Adsorbed on Graphite. *J. Chem. Phys.* **1990**, *93*, 5267–5270 Bolina, A. S.; Wolff, A. J.; Brown, W. A. Reflection Absorption Infrared Spectroscopy and Temperature Programmed Desorption Investigations of the Interaction of Methanol with a Graphite Surface. *J. Chem. Phys.* **2005**, *122*, 044713 Ulbricht, H.; Zacharia, R.; Cindir, N.; Hertel, T. Thermal Desorption of Gases and Solvents from Graphite and Carbon Nanotube Surfaces. *Carbon* **2006**, *44*, 2931–2942 Scoles, G., Lainé, D., Valbusa, U., Eds. *Atomic and Molecular Beam Methods*; Oxford University Press: USA, 1992; Vol. 2 Perraudin, E.; Budzinski, H.; Villenave, E. Kinetic Study of the Reactions of Ozone with Polycyclic Aromatic Hydrocarbons Adsorbed on Atmospheric Model Particles. *J. Atmos. Chem.* **2007**, *56*, 57–82 Seebauer, E. G.; Kong, A. C. F.; Schmidt, L. D. The Coverage Dependence of the Pre-Exponential Factor for Desorption. *Surf. Sci.* **1988**, *193*, 417–436 Marković, N.; Andersson, P. U.; Någård, M. B.; Pettersson, J. B. C. Scattering of Water from Graphite: Simulations and Experiments. *Chem. Phys.* **1999**, *247*, 413–430 Andersson, P. U.; Någård, M. B.; Bolton, K.; Svanberg, M.; Pettersson, J. B. C. Dynamics of Argon Collisions with Water Ice: Molecular Beam Experiments and Molecular Dynamics Simulations. *J. Phys. Chem.
A* **2000**, *104*, 2681–2688 Gibson, K. D.; Killelea, D. R.; Yuan, H.; Becker, J. S.; Sibener, S. J. Determination of the Sticking Coefficient and Scattering Dynamics of Water on Ice using Molecular Beam Techniques. *J. Chem. Phys.* **2011**, *134*, 034703 Souda, R.; Kawanowa, H.; Kondo, M.; Gotoh, Y. Hydrogen Bonding Between Water and Methanol Studied by Temperature-Programmed Time-of-Flight Secondary Ion Mass Spectrometry. *J. Chem. Phys.* **2003**, *119*, 6194–6200 Hsieh, J. C. C.; Yapp, C. J. Hydrogen-Isotope Exchange in Halloysite: Insight from Room-Temperature Experiments. *Clays Clay Miner.* **1999**, *47*, 811–816 Dempsey, L. P.; Brastad, S. M.; Nathanson, G. M. Interfacial Acid Dissociation and Proton Exchange Following Collisions of DCl with Salty Glycerol and Salty Water. *J. Phys. Chem. Lett.* **2011**, *2*, 622–627 Ratajczak, A.; Quirico, E.; Faure, A.; Schmitt, B.; Ceccarelli, C. Hydrogen/Deuterium Exchange in Interstellar Ice Analogs. *Astronomy & Astrophysics* **2009**, *496*, L21–L24 Brastad, S. M.; Nathanson, G. M. Molecular Beam Studies of HCl Dissolution and Dissociation in Cold Salty Water. *Phys. Chem. Chem. Phys.* **2011**, *13*, 8284–8295 Beta, I. A.; Sorensen, C. M. Quantitative Information About the Hydrogen Bond Strength in Dilute Aqueous Solutions of Methanol from the Temperature Dependence of the Raman Spectra of the Decoupled OD Stretch. *J. Phys. Chem. A* **2005**, *109*, 7850–7853
--- abstract: | We establish a relationship between an inverse optimization spectral problem for the $N$-dimensional Schrödinger equation $ -\Delta \psi+q\psi=\lambda \psi $ and a solution of the nonlinear boundary value problem $-\Delta u+q_0 u=\lambda u- u^{\gamma-1},~~u>0,~~ u|_{\partial \Omega}=0$. Using this relationship, we find an exact solution for the inverse optimization spectral problem, investigate its stability and obtain new results on the existence and uniqueness of the solution for the nonlinear boundary value problem. address: 'Institute of Mathematics of UFRC RAS, 112, Chernyshevsky str., 450008 Ufa, Russia' author: - 'Y.Sh. Ilyasov' - 'N. F. Valeev' title: | On nonlinear boundary value problem corresponding to\ $N$-dimensional inverse spectral problem --- Schrödinger operator, inverse spectral problem, nonlinear elliptic equations; 35P30, 35R30, 35J65, 35J10, 35J60 Introduction ============ This paper is concerned with the inverse spectral problem for the operator of the form $$\label{eq:S} \mathcal{L}_q\phi:=-\Delta \phi+q\phi,~~~ x\in \Omega,$$ subject to the Dirichlet boundary condition $$\label{eq:Sq} \phi \bigr{|}_{\partial \Omega}=0.$$ Here $\Omega$ is a bounded domain in $\mathbb{R}^N$, $N\geq 1$, and the boundary $\partial \Omega$ is of class $C^{1,1}$. We assume that $q \in L^p(\Omega)$, where $$\label{Pas} p \in \begin{cases} [2,+\infty)~~~\mbox{if}~~N< 4,\\ (2,+\infty)~~~\mbox{if}~~N =4,\\ [N/2,+\infty)~~~\mbox{if}~~N>4. \end{cases}$$ Under these conditions, $\mathcal{L}_q$ with domain $D(\mathcal{L}_q):=W^{2,2}(\Omega)\cap W^{1,2}_0(\Omega)$ defines a self-adjoint operator (see, e.g., [@edmund; @Reed2]) so that its spectrum consists of an infinite sequence of eigenvalues $\{\lambda_i(q) \}_{i=1}^{\infty}$, repeated according to their finite multiplicity and ordered as $\lambda_1(q)<\lambda_2(q)\leq \ldots $. Furthermore, the principal eigenvalue $\lambda_1(q)$ is simple and isolated.
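As a purely illustrative numerical aside (not part of the original paper), the objects above can be made concrete in one dimension: discretizing $\mathcal{L}_q$ on $(0,1)$ with finite differences gives a symmetric matrix whose lowest eigenvalue approximates $\lambda_1(q)$. The potential, the direction $h$, and the grid size below are arbitrary choices. The sketch also checks two facts used later in the text: adding a constant to $q$ shifts $\lambda_1$ by that constant, and the derivative formula $D\lambda_1(q)(h)=\int\phi_1^2 h\,dx$ of Lemma 1.

```python
import numpy as np

# Finite-difference discretization of L_q = -d^2/dx^2 + q on (0, 1)
# with Dirichlet boundary conditions; q below is an arbitrary example.
n = 400
x = np.linspace(0.0, 1.0, n + 2)[1:-1]       # interior grid points
dx = 1.0 / (n + 1)
q = 50.0 * np.cos(2.0 * np.pi * x)           # illustrative potential

L = (np.diag(2.0 / dx**2 + q)
     + np.diag(-np.ones(n - 1) / dx**2, 1)
     + np.diag(-np.ones(n - 1) / dx**2, -1))

evals, evecs = np.linalg.eigh(L)
lam1 = evals[0]                              # principal eigenvalue (simple)

# Shifting the potential by a constant c shifts the whole spectrum by c;
# this is the observation behind lambda = lambda_1(q_0 + lambda - lambda_1(q_0)).
c = 7.0
lam1_shifted = np.linalg.eigvalsh(L + c * np.eye(n))[0]

# Hellmann-Feynman-type check of D lambda_1(q)(h) = \int phi_1^2 h dx,
# with phi_1 normalized in L^2 on the grid.
phi1 = evecs[:, 0]
phi1 /= np.sqrt(np.sum(phi1**2) * dx)
h = np.sin(np.pi * x)                        # arbitrary perturbation direction
eps = 1e-6
lam1_pert = np.linalg.eigvalsh(L + np.diag(eps * h))[0]
fd = (lam1_pert - lam1) / eps                # finite-difference derivative
predicted = np.sum(phi1**2 * h) * dx         # formula of Lemma 1
print(lam1, lam1_shifted - lam1, fd, predicted)
```

The finite-difference derivative and the eigenfunction formula agree to a few parts in $10^{4}$ on this grid, which is the discrete counterpart of the Fréchet differentiability established in the Preliminaries.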
The recovery of the potential $q(x)$ from a knowledge of the spectral data $\{\lambda_i(q) \}_{i=1}^{\infty}$ is a classical problem and, beginning with the celebrated papers by Ambartsumyan [@ambar] in 1929, Borg in 1946 [@borg], Gel’fand & Levitan [@gelL] in 1951, it has received a lot of attention; see, e.g., the surveys [@chadan; @SavShk]. It is well known that a knowledge of the single spectrum $\{\lambda_i(q) \}_{i=1}^{\infty}$ is insufficient to determine the potential $q(x)$; see, e.g., [@borg; @gelL]. In this work we deal with an inverse problem where a finite set of eigenvalues $\{\lambda_i \}_{i=1}^{m}$, $m<+\infty$, is given. With only finite spectral data, the inverse problem possesses infinitely many solutions. Thus additional conditions have to be imposed in order to make the problem well-posed. To overcome this difficulty, we assume that an approximation $q_0$ of the potential $q$ is known. Under this assumption, it is natural to consider the following inverse optimization spectral problem: for a given $q_0$ and $\{\lambda_i \}_{i=1}^{m}$, $m<+\infty$, find a potential $\hat{q}$ closest to $q_0$ in a prescribed norm, such that $\lambda_i=\lambda_i(\hat{q})$ for all $i=1, \ldots, m$. In the present paper, we study the following simplest variant of this problem: $(P):$*For a given $\lambda \in \mathbb{R}$ and $q_0 \in L^p(\Omega)$, find a potential $\hat{q} \in L^p(\Omega)$ such that $\lambda=\lambda_1(\hat{q})$ and* $$\label{Var} \|q_0-\hat{q} \|_{L^p}=\inf\{||q_0-q||_{L^p}:~~ \lambda=\lambda_1(q), ~~q \in L^p(\Omega)\}.$$ It turns out that this problem is related to the following logistic nonlinear boundary value problem: $$\label{eq:Nonl} \begin{cases} -\Delta u+q_0 u=\lambda u- u^\frac{p+1}{p-1},~~~ x\in \Omega, \\ ~~u> 0,~~ x\in \Omega, \\ ~~u\bigr{|}_{\partial \Omega}=0. \end{cases}$$ Our first main result is as follows. \[thm1\] Assume $\Omega$ is a bounded connected domain in $\mathbb{R}^N$ with a $C^{1,1}$-boundary $\partial \Omega$.
Let $q_0 \in L^p(\Omega)$ be a given potential, where $p$ satisfies . Then, for any $\lambda>\lambda_1(q_0)$, $(1^o)$ there exists a unique potential $\hat{q} \in L^p(\Omega)$ such that $\lambda=\lambda_1(\hat{q})$ and is satisfied; $(2^o)$ there exists a weak positive solution $\hat{u} \in W^{1,2}_0(\Omega)$ of such that $$\hat{q}=q_0+\hat{u}^{2/(p-1)}~~\mbox{a.e. in}~~ \Omega.$$ Furthermore, $\hat{u} \in C^{1, \beta}(\overline{\Omega})$ for some $\beta \in (0,1)$ and $\phi_1(\hat{q})=\hat{u}/\|\hat{u}\|_{L^2}$. Using the relationship between $(P)$ and stated in Theorem \[thm1\], we are able to prove the following theorem on the uniqueness of the solution for . \[thm2\] Assume that $$\label{PasG} \begin{cases} 2<\gamma \leq 4~~~\mbox{if}~~N< 4,\\ 2<\gamma < 4~~~\mbox{if}~~N =4,\\ 2<\gamma\leq \frac{2N}{N-2}~~~\mbox{if}~~N>4. \end{cases}$$ Then, for any $q_0 \in L^p(\Omega)$ with $p\geq \frac{\gamma}{\gamma-2}$ and any $\lambda>\lambda_1(q_0)$, the boundary value problem $$%\label{eq:NonlG} \tag{\ref{eq:Nonl}$'$} \begin{cases} -\Delta u+q_0 u=\lambda u- u^{\gamma-1},~~~ x\in \Omega, \\ ~~u\geq 0,~u \not\equiv 0,~~ x\in \Omega, \\ ~~u\bigr{|}_{\partial \Omega}=0, \end{cases}$$ has at most one weak solution. The existence of a solution for follows in a standard way, cf. [@brezis]. In the case $q_0 \in L^\infty(\Omega)$, there are various proofs of the uniqueness of the solution for ; see, e.g., [@brezis; @diaz] and the references given there. However, as far as we know, the uniqueness in the case of an unbounded potential $q_0 \in L^p(\Omega)$ has not been proven before. It should be emphasized that Theorem \[thm1\] can also be seen as a new method of proving the existence and uniqueness of a solution for nonlinear boundary value problems.
Indeed, finding the minimizer $\hat{q}$ of the constrained minimization problem also implies the existence of the solution $\hat{u}=(\hat{q}-q_0)^{(p-1)/2}$ for , whereas the uniqueness of $\hat{u}$ follows from the uniqueness of the minimizer of , as will be shown below. This paper is organised as follows. Section 2 contains some preliminaries. In Section 3, we give the proofs of Theorems \[thm1\] and \[thm2\]. In Section 4, using the nonlinear problem , we investigate stability properties of the inverse optimization spectral problem $(P)$. Section 5 contains some remarks and open problems. Preliminaries ============= In what follows, we denote by $\left\langle \cdot, \cdot \right\rangle $ and $\|\cdot\|_{L^2}$ the scalar product and the norm in $L^2(\Omega)$, respectively; $W^{1,2}(\Omega), W^{2,2}(\Omega)$ are the usual Sobolev spaces; $W^{1,2}_0:=W^{1,2}_0(\Omega)$ is the closure of $C^\infty_0(\Omega)$ in the norm $$\|u\|_{1}=\left(\int_{\Omega} |\nabla u |^2 dx\right )^{1/2}.$$ By a standard criterion (see, e.g., [@edmund], Theorem 1.4. p. 306), assumption implies that $\mathcal{L}_q$ with domain $D(\mathcal{L}_q):=W^{2,2}(\Omega)\cap W^{1,2}_0(\Omega)$ is self-adjoint on $L^2(\Omega)$. Moreover, $\mathcal{L}_q$ is a semibounded operator so that the principal eigenvalue satisfies $$\label{lambda1} -\infty<\lambda_1(q)=\inf_{\phi \in W^{1,2}_0\setminus 0}\frac{\int_{\Omega} |\nabla \phi |^2 dx+\int_{\Omega} q\phi^2\,dx}{\int_{\Omega}\phi^2\,dx},$$ where the minimum is attained at an eigenfunction $\phi_1 \in W^{1,2}_0\setminus 0$. The regularity of solutions for elliptic equations (see, e.g., Lemma B 3 in [@struw]) implies that $\phi_1 \in W^{2,q}(\Omega)$ for any $q\geq 2$ and therefore, by the Sobolev theorem, $\phi_1 \in C^{1,\alpha}(\overline{\Omega})$ for any $\alpha\in (0,1)$. Furthermore, in view of , we may apply the weak Harnack inequality (see Theorem 5.2.
in [@Trud]) and obtain, in a standard fashion (see, e.g., Theorem 8.38 in [@GilTrud] ), that the principal eigenvalue $\lambda_1(q)$ is simple and $\phi_1>0$ in $\Omega$. By the Sobolev theorem, we have continuous embeddings: $W^{2,2}_0(\Omega) \subset L^{\infty}(\Omega)$ if $N<4$, $W^{2,2}_0(\Omega) \subset L^{q}(\Omega)$, $\forall q \in [2,\infty)$ if $N=4$, $W^{2,2}_0(\Omega) \subset L^{2N/(N-4)}(\Omega)$ if $N>4$. Hence, by Hölder’s inequality, we obtain $$\label{22} \int_{\Omega} q^2\psi^2\,dx\leq a \|\Delta \psi \|_{L^2}^2+b\|\psi\|_{L^2}^2~~~~\forall \psi \in D(\mathcal{L}_q),$$ for some constants $a,b \in (0,+\infty)$ which do not depend on $\psi \in D(\mathcal{L}_q)$. This implies that for $q_0,q \in L^p$ and $\varepsilon \in \mathbb{R}$, the family $\mathcal{L}_{q_0+\varepsilon q}$ is analytic of type ($A$) (see [@Reed2], p. 16) and therefore, by Theorem X.12 in [@Reed4], $\mathcal{L}_{q_0+\varepsilon q}$ is an analytic family in the sense of Kato. Hence by Theorem X.8 in [@Reed4], $\lambda_1(q_0+\varepsilon q)$ is an analytic function of $\varepsilon$ near $0$ and $\phi_1(q_0+\varepsilon q)$ depends analytically on $\varepsilon$ near $0$ as an $L^2$-valued function. \[lem1\] $\lambda_1(q)$ is a continuously differentiable map in $L^p$ with the Fréchet-derivative $$\label{eq:Val} D\lambda_1(q)(h)=\frac{1}{\|\phi_1(q)\|^2_{L^2}}\int_\Omega \phi_1^2(q) h\, dx, ~~\forall \,q,h \in L^p.$$ Set $\|\phi_1(q)\|_{L^2}=1$.
Observe that $$\label{eq:deriv} \frac{d}{d \varepsilon} \lambda_1(q+\varepsilon h)|_{\varepsilon =0}=\int_\Omega \phi_1^2(q) h\, dx, ~~\forall \,q,h \in L^p.$$ Indeed, testing the equation $\mathcal{L}_{q+\varepsilon h} \phi_1(q+\varepsilon h)=\lambda_1(q+\varepsilon h)\phi_1(q+\varepsilon h)$ by $\phi_1(q)$ and integrating by parts one obtains $$\begin{aligned} \int_{\Omega} \phi_1(q+\varepsilon h) (-\Delta \phi_1(q))\, dx + \int_{\Omega} &(q+\varepsilon h)\phi_1(q+\varepsilon h)\phi_1(q) \,dx = \\ &\lambda_1(q+\varepsilon h) \int_{\Omega} \phi_1(q+\varepsilon h)\phi_1(q) \,dx.\end{aligned}$$ By the above, all terms in this equality are differentiable with respect to $\varepsilon$. Thus we have $$\begin{aligned} -\int_{\Omega}\Delta \phi_1(q) \frac{d}{d \varepsilon}&\phi_1(q+\varepsilon h)|_{\varepsilon =0} \,dx + \int_{\Omega} h\phi_1^2(q)\, dx +\int_{\Omega} q\frac{d}{d \varepsilon}\phi_1(q+\varepsilon h)|_{\varepsilon =0}\phi_1(q) dx = \\ &\frac{d}{d \varepsilon}\lambda_1(q+\varepsilon h)|_{\varepsilon =0} \int_{\Omega} \phi_1^2(q)\,dx+\lambda_1(q) \int_{\Omega} \frac{d}{d \varepsilon}\phi_1(q+\varepsilon h)|_{\varepsilon =0}\phi_1(q)\,dx.\end{aligned}$$ Since $\phi_1(q)$ is an eigenfunction of $\mathcal{L}_{q}$, this implies . To conclude the proof of the lemma, it is sufficient to show that $$\label{eq:ContW} \phi_1(\cdot) \in C(L^p; W^{1,2}_0(\Omega)).$$ Indeed, assume is true. Since , by the Sobolev theorem, the embedding $W^{1,2}(\Omega) \subset L^\frac{2p}{(p-1)}$ is continuous. Hence the map $\phi_1(\cdot): L^p \to L^\frac{2p}{(p-1)}\cap L^2$ is continuous and therefore the Gateaux derivative $D\lambda_1(q)$ depends continuously in norm on $q \in L^p$. This implies that $\lambda_1(q)$ is continuously differentiable in $L^p$. To prove , let us first show that $\lambda_1(q)$ defines a continuous map in $L^p$.
Suppose, contrary to our claim, that there is a sequence $(q_n)$ such that $q_n \to q$ in $L^p$ as $n \to \infty$ and $|\lambda_1(q_n)-\lambda_1(q)|>\epsilon$ for some $\epsilon >0$, $n=1,2,...$. Consider Rayleigh’s quotient $$\label{eq:CFR} \lambda_1(q_n)\equiv R_{q_n}(\phi_1(q_n)):=\frac{\|\phi_1(q_n)\|_1^2+\int_\Omega q_n\phi_1^2(q_n)\,dx}{\|\phi_1(q_n)\|_{L^2}^2}~~~n=1,2,....$$ It is easily seen that $$\label{eq:CF} R_{q_n}(\phi_1(q)) \to R_{q}(\phi_1(q))=\lambda_1(q) \,\, \mbox{as}\,\, n \to \infty.$$ Hence and since $\lambda_1(q_n)=R_{q_n}(\phi_1(q_n))\leq R_{q_n}(\phi_1(q))$, $n=1,2,...$, we conclude that $\lambda_1(q_n)=R_{q_n}(\phi_1(q_n))<C_0<+\infty$, where $C_0<+\infty$ does not depend on $n=1,2,...$. Due to the homogeneity of $R_{q_n}$ we may assume that $\|\phi_1(q_n)\|_1^2=1$ for all $n$. Hence the Banach-Alaoglu and Sobolev theorems imply that there is a subsequence, which we again denote by $(\phi_1(q_n))$, such that $\phi_1(q_n) \to \bar{\phi}$ as $n \to \infty$ weakly in $W^{1,2}$ and strongly in $L^q$ for $q\in [2,2^*)$, where $2^*=2N/(N-2)$ if $N>2$ and $2^*=+\infty$ if $N\leq 2$. Observe that if $\bar{\phi}=0$, then $\|\phi_1(q_n)\|_{L^2} \to 0$ and $\int_\Omega q_n\phi_1^2(q_n)\,dx \to 0$ as $n \to \infty$, which implies by that $\lambda_1(q_n) \to +\infty$. We get a contradiction. Thus $\bar{\phi}\neq 0$ and therefore $|\lambda_1(q_n)|<C$, where $C<+\infty$ does not depend on $n=1,2,...$. Hence, in view of , we conclude that $$\lambda_1(q)=R_{q}(\phi_1(q))\leq R_{q}(\bar{\phi}) \leq \liminf_{n\to \infty}R_{q_n}(\phi_1(q_n))\leq \lim_{n\to \infty}R_{q_n}(\phi_1(q))=\lambda_1(q),$$ which contradicts our assumption. Thus, indeed, the map $\lambda_1(\cdot): L^p \to \mathbb{R}$ is continuous. Take $q,q_0 \in L^p$.
Then $$\begin{aligned} -\Delta(\phi_1(q_0)-\phi_1(q))+q_0(\phi_1(q_0)-\phi_1(q))-\lambda_1(q_0)(\phi_1(q_0)-\phi_1(q))+&\\ (q_0-q)\phi_1(q)-(\lambda_1(q_0)-\lambda_1(q))\phi_1(q)=0&.\end{aligned}$$ Testing this equation by $(\phi_1(q)-\phi_1(q_0))$ and integrating by parts, we obtain $$\begin{aligned} &\|\phi_1(q_0)-\phi_1(q)\|_1^2+ \int_\Omega q_0(\phi_1(q_0)-\phi_1(q))^2\,dx-\lambda_1(q_0)\int_\Omega (\phi_1(q_0)-\phi_1(q))^2\,dx +\\ &\int_\Omega(q_0-q)\phi_1(q)(\phi_1(q_0)-\phi_1(q))\,dx-(\lambda_1(q_0)-\lambda_1(q))\int_\Omega \phi_1(q)(\phi_1(q_0)-\phi_1(q))\,dx=0.\end{aligned}$$ Let $q_k \to q_0$ in $L^p$ as $k \to \infty$. We may assume that $\|\phi_1(q_k)\|_1=1$, $k=1,2,...$. Set $t_k:=\|\phi_1(q_k)-\phi_1(q_0)\|_1$, $\psi_k:=(\phi_1(q_k)-\phi_1(q_0))/t_k$, $k=1,2,...$. Then $0<t_k<1+\|\phi_1(q_0)\|_{1}:=C_1<+\infty$, $\|\psi_k\|_1=1$ and $$\begin{aligned} \label{eV:tk} &t_k\left(1+ \int_\Omega q_0\psi_k^2\,dx-\lambda_1(q_0)\int_\Omega \psi_k^2\,dx\right) =\nonumber\\ &-\int_\Omega(q_0-q_k)\phi_1(q_k)\psi_k\,dx+(\lambda_1(q_0)-\lambda_1(q_k))\int_\Omega \phi_1(q_k)\psi_k\,dx,\,\, k=1,2,....\end{aligned}$$ By Hölder’s inequality and the Sobolev theorem, we have $$\begin{aligned} & |\int_\Omega(q_k-q_0)\phi_1(q_k)\psi_k\,dx|\leq \|q_k-q_0\|_{L^p}\|\phi_1(q_k)\|_{L^\frac{2p}{(p-1)}}\|\psi_k\|_{L^\frac{2p}{(p-1)}}\leq C_2\|q_k-q_0\|_{L^p}, \\ &|(\lambda_1(q_k)-\lambda_1(q_0))\int_\Omega \phi_1(q_k)\psi_k\,dx|\leq C_2|\lambda_1(q_k)-\lambda_1(q_0)|,\end{aligned}$$ where $C_2<+\infty$ does not depend on $k=1,2,...$. Hence and from it follows that $t_k:=\|\phi_1(q_k)-\phi_1(q_0)\|_1 \to 0$ as $q_k \to q_0$ in $L^p$. Thus we get . \[lem2\] $\lambda_1(q)$ is a strictly concave functional on $L^p$. Let $q_1,q_2 \in L^p \setminus 0$. Denote $\phi_1^t:=\phi_1(t q_1+(1-t) q_2)$. Assume that $\|\phi_1^t\|_{L^2}=1$, $t \in [0,1]$.
Then due to we have $$\begin{aligned} \lambda_1(t q_1+(1-t) q_2)=& \int_{\Omega}|\nabla \phi_1^t|^2\, dx+\int_{\Omega}(t q_1+(1-t) q_2)|\phi_1^t|^2\, dx =\\ & t( \int_{\Omega}|\nabla \phi_1^t|^2\, dx+\int_{\Omega}q_1|\phi_1^t|^2\, dx) +(1-t)( \int_{\Omega}|\nabla \phi_1^t|^2\, dx+\int_{\Omega}q_2|\phi_1^t|^2\, dx) >\\ &t\lambda_1( q_1) +(1-t)\lambda_1(q_2),~~\forall t \in (0,1),\end{aligned}$$ which yields the proof. Proof of the main results ========================= [*Proof of Theorem \[thm1\].*]{} Let $q_0 \in L^p$ and $\lambda>\lambda_1(q_0)$. Consider the constrained minimization problem $$\label{MinP} \hat{Q}=\min\{Q(q):q \in M_{\lambda}\},$$ where $Q(q):=||q_0-q||^p_{L^p}$ for $q \in L^p(\Omega)$ and $$M_{\lambda}:=\{q \in L^p(\Omega):~~\lambda\leq \lambda_1(q)\}.$$ Notice that $M_{\lambda} \neq \emptyset$. Indeed, $\lambda=\lambda_1(q_0+\lambda-\lambda_1(q_0))$, $\forall \lambda \in \mathbb{R}$ and thus $q_0+\lambda-\lambda_1(q_0) \in M_{\lambda}$ for any $\lambda>\lambda_1(q_0)$. Moreover, by Lemma \[lem2\], $M_{\lambda}$ is convex. Hence, by the coerciveness of $Q: L^p \to \mathbb{R}$, there exists a minimizer $\hat{q} \in M_{\lambda}$ of . Since the inequality $\lambda>\lambda_1(q_0)$ is strict, it follows that $\hat{q} \neq q_0$. The convexity of $M_{\lambda}$ and the strict convexity of $Q$ entail that $\hat{q}$ is unique and that $$\hat{q} \in \partial M_{\lambda}= \{q\in L^p:~~\lambda=\lambda_1(q)\}.$$ This concludes the proof of assertion $(1^o)$ of Theorem \[thm1\]. Evidently, $Q$ is a $C^1$-functional on $L^p$. Hence, in view of Lemma \[lem1\], the Lagrange multiplier rule implies $$\mu_1 DQ(\hat{q})(h)+\mu_2D\lambda_1(\hat{q})(h) =0,~~ \forall h \in L^p,$$ where $\mu_1,\mu_2 $ are such that $|\mu_1|+|\mu_2|\neq 0$, $\mu_1 \geq 0$, $\mu_2 \leq 0$. Thus by we deduce $$\int_\Omega (-\mu_1 p (q_0-\hat{q})|q_0-\hat{q}|^{p-2}+\mu_2\phi_1^2(\hat{q})) h\, dx =0, \,\, \forall h \in L^p,$$ where $\|\phi_1(\hat{q})\|_{L^2}=1$.
Arguing by contradiction, it is easy to conclude that $\mu_1 > 0$, $\mu_2< 0$. Thus we have $$(q_0-\hat{q})|q_0-\hat{q}|^{p-2}=\mu \phi_1^2(\hat{q})~~\mbox{a.e. in}~~\Omega,$$ where $\mu=\mu_2/(p \mu_1)<0$. Notice that $\phi_1>0$ in $\Omega$. Hence $q_0<\hat{q}$ a.e. in $\Omega$ and $$\hat{q}=q_0+\nu \phi_1^{2/(p-1)}(\hat{q})~~\mbox{a.e. in}~~\Omega,$$ where $\nu:=(-\mu)^{1/(p-1)}>0$. Substituting this into yields $$\label{eq:phi} - \Delta \phi_1(\hat{q})+q_0 \phi_1(\hat{q})={\lambda} \phi_1(\hat{q})-\nu\phi_1^\frac{p+1}{p-1}(\hat{q}).$$ Thus, indeed, $\hat{u}=\nu^\frac{p-1}{2}\phi_1(\hat{q})$ satisfies . Moreover, $\hat{q}=q_0+\hat{u}^{2/(p-1)}$ a.e. in $\Omega$. This concludes the proof of $(2^o)$. [*Proof of Theorem \[thm2\].*]{} Since (\[eq:Nonl\]$'$) is obtained from by setting $\gamma=\frac{2p}{p-1}$, so that $\gamma-1=\frac{p+1}{p-1}$ and $p=\frac{\gamma}{\gamma-2}$, it is sufficient to prove the uniqueness of the solution for . First we prove \[lem:Nonl\] Let $u \in W^{1,2}_0$ be a nonnegative weak solution of . Then the function $\bar{q}:=q_0+u^{2/(p-1)}$ is a local minimum point of $Q$ in $M_{\lambda}$. Let $u \in W^{1,2}_0$ be a nonnegative weak solution of . The regularity of solutions for elliptic equations (see, e.g., Lemma B 3 in [@struw]) implies that $u \in W^{2,q}(\Omega)$ for any $q\geq 2$ and therefore $u \in C^{1,\alpha}(\overline{\Omega})$ for any $\alpha\in (0,1)$. By the weak Harnack inequality (see Theorem 5.2. in [@Trud]) it follows that $u>0$ in $\Omega$. This implies that $u$ is an eigenfunction of $\mathcal{L}_{\bar{q}}$ corresponding to the principal eigenvalue $\lambda =\lambda_1(\bar{q})$, i.e., $u=\phi_1(\bar{q})$.
In view of Lemma \[lem1\], by the Lusternik theorem [@lust] the tangent space of $\partial M_\lambda$ at $ q \in \partial M_\lambda$ can be expressed as follows $$\label{Tangent} T_{q}(\partial M_\lambda):=\{h \in L^p:~D\lambda_1(q)( h)\equiv \int_\Omega u^2\cdot h\, dx = 0\}.$$ From this $$D_qQ(q)|_{q=q_0+u^{2/(p-1)}}(h)=p\int_\Omega u^2\cdot h\, dx = 0, ~~\forall h \in T_{\bar{q}}(\partial M_\lambda).$$ Hence and since $$D_{qq}Q(q)|_{q=q_0+u^{2/(p-1)}}(h,h)=p(p-1)\int_\Omega u^\frac{2(p-2)}{p-1}\cdot h^2\, dx>0,~~\forall h \in T_{\bar{q}}(\partial M_\lambda),$$ we obtain that $$Q(\bar{q}+h)> Q(\bar{q}),$$ for any $h \in T_{\bar{q}}(\partial M_\lambda)$ with sufficiently small $\|h\|_{L^p}$. Let us conclude the proof of Theorem \[thm2\]. By Theorem \[thm1\] we know that possesses a solution $\hat{u}$ such that the functional $Q$ admits a global minimum at $\hat{q}=q_0+\hat{u}^{2/(p-1)}$ on $M_{\lambda}$. Assume that there exists a second weak solution $\bar{u}$ of . Then by Lemma \[lem:Nonl\], $\bar{q}=q_0+\bar{u}^{2/(p-1)}$ is a local minimum point of $Q$ in $M_{\lambda}$. However, due to the strict concavity of $\lambda_1(q)$ and the strict convexity of $Q$, this is possible only if $\bar{q}=\hat{q}$. Stability results ================= In this section, we prove that the solution $\hat{q}$ of the inverse optimization spectral problem $(P)$ is stable with respect to variations of $q_0$ and $\lambda$. Let $q_0 \in L^p$, $\lambda >\lambda_1(q_0)$. Denote by $\hat{q}(\lambda, q_0)$ the unique solution of $(P)$ obtained by Theorem \[thm1\] and denote by $\hat{u}(\lambda, q_0)$ the corresponding solution of . \[Prop1\] (i) : For any $\lambda \in (\lambda_1(q_0),+\infty)$, the map $\hat{u}(\lambda, \cdot): L^p \to W^{1,2}_0$ is continuous and thus $\hat{q}(\lambda, q_0)$ continuously depends on $q_0$ as a map from $L^p$ to $L^p$.
(ii) : For any $q_0 \in L^p$, the map $\hat{u}(\cdot, q_0): (\lambda_1(q_0),+\infty) \to W^{1,2}_0$ is continuous and thus $\hat{q}(\lambda,q_0)$ continuously depends on $\lambda$ as a map from $(\lambda_1(q_0),+\infty)$ to $L^p$. Moreover, $\hat{q}(\lambda,q_0) \to q_0$ in $L^p$ as $\lambda \downarrow \lambda_1(q_0)$. We give the proof of **(i)** only for the case $N\geq 3$; the other cases are left to the reader. Let $\lambda \in (\lambda_1(q_0),+\infty)$. Assume $q_n \to q_0$ in $L^p$ as $n\to \infty$. By the above, $\lambda_1(q)$ continuously depends on $q\in L^p$. Thus for sufficiently large $n$ we have $\lambda>\lambda_1(q_n)$. We claim that the sequence $\|\hat{u}(\lambda, q_n)\|_1$, $n=1,2,...$ is bounded and separated from zero. Set $t_n:=\|\hat{u}(\lambda, q_n)\|_1 $, $v_n:=\hat{u}(\lambda, q_n)/t_n$, $n=1,2,...$. Since $\|v_n\|_1=1$, $n=1,2,...$, by the Banach-Alaoglu and Sobolev theorems we may assume that $v_n \rightharpoondown v$ weakly in $W^{1,2}$, $v_n \to v$ a.e. in $\Omega$ and strongly in $L^q$, $2\leq q <2N/(N-2)$ for some $v \in W^{1,2}$. It follows from $$\label{eq:st1} \|v_n\|_1+\int_\Omega q_n v_n^2\,dx-\lambda \int_\Omega v_n^2\,dx+t_n^\frac{2}{p-1}\int_\Omega v_n^\frac{2p}{p-1}\,dx=0, ~~n=1,2,....$$ By Hölder’s inequality $$|\int_\Omega q_n v_n^2\,dx|\leq \|q_n\|_{L^p}\|v_n\|^2_{L^{2p/(p-1)}}, ~~n=1,2,...,$$ where, in view of , we have $2<2p/(p-1)<2N/(N-2)$. Suppose that $v=0$. Then $$\|v_n\|_1+\int_\Omega q_n v_n^2\,dx-\lambda \int_\Omega v_n^2\,dx+t_n^\frac{2}{p-1}\int_\Omega v_n^\frac{2p}{p-1}\,dx\geq \|v_n\|_1+\int_\Omega q_n v_n^2\,dx-\lambda \int_\Omega v_n^2\,dx \to 1,$$ as $n\to +\infty$, which contradicts . Thus $v \neq 0$ and therefore by the sequence $t_n$ is bounded. Assume, by contradiction, that $t_n \to 0$. Then passing to the limit in yields $$-\Delta v+q_0 v=\lambda v.$$ From the above, it follows that $v\geq 0$, $v\neq 0$. Thus $v$ is an eigenfunction corresponding to the principal eigenvalue of $\mathcal{L}_{q_0}$.
However, by the assumption $\lambda>\lambda_1(q_0)$, we get a contradiction. Thus the claim is proved and we may assume that $\hat{u}(\lambda, q_n) \rightharpoondown \bar{u}$ weakly in $W^{1,2}$, $\hat{u}(\lambda, q_n) \to \bar{u}$ a.e. in $\Omega$ and strongly in $L^q$, $2\leq q <2N/(N-2)$ for some $\bar{u} \in W^{1,2}\setminus 0$. Since $\hat{u}(\lambda, q_n)>0$ in $\Omega$, we conclude that $\bar{u} \geq 0$ a.e. in $\Omega$. Passing to the limit in we obtain $$-\Delta \bar{u}+q_0 \bar{u}=\lambda \bar{u}- \bar{u}^\frac{p+1}{p-1}.$$ Due to the uniqueness of the solution of , it follows that $\bar{u}=\hat{u}(\lambda, q_0)$. Furthermore, this implies that $\hat{u}(\lambda, q_n) \to \hat{u}(\lambda, q_0)$ strongly in $W^{1,2}$. Thus we have proved that the map $\hat{u}(\lambda, \cdot): L^p \to W^{1,2}_0$ is continuous. Under assumption , we have a continuous embedding $W^{1,2} \subset L^{2p/(p-1)}$. Hence $\hat{u}^{2/(p-1)}(\lambda, q_n) \to \hat{u}^{2/(p-1)}(\lambda, q_0)$ strongly in $L^p$ and thus $\hat{q}(\lambda, q_n)=q_n+\hat{u}^{2/(p-1)}(\lambda, q_n)$ strongly converges to $\hat{q}(\lambda, q_0)=q_0+\hat{u}^{2/(p-1)}(\lambda, q_0)$ in $L^p$ as $n\to +\infty$. This concludes the proof of **(i)**. The proof for the first part of **(ii)** is similar to **(i)**. To prove that $\hat{q}(\lambda, q_0) \to q_0$ in $L^p$ as $\lambda \downarrow \lambda_1(q_0)$, it remains to show that for $\lambda=\lambda_1(q_0)$ problem has only the zero solution. Suppose, contrary to our claim, that there exists a positive solution $u$ of for $\lambda=\lambda_1(q_0)$. Then testing the equation in by $\phi_1(q_0)$ and integrating by parts we obtain $$\begin{aligned} \int_{\Omega} u (-\Delta \phi_1(q_0))\, dx + \int_{\Omega} q_0 u\phi_1(q_0) \,dx = \lambda_1(q_0) \int_{\Omega} u\phi_1(q_0) \,dx- \int_{\Omega} u^\frac{p+1}{p-1}\phi_1(q_0) \,dx,\end{aligned}$$ which implies that $\int_{\Omega} u^\frac{p+1}{p-1}\phi_1(q_0) \,dx=0$.
However, since $\phi_1(q_0)>0$ in $\Omega$, this is possible only if $u\equiv 0$ in $\Omega$, which contradicts the positivity of $u$. Concluding remarks and open problems {#sec:conclusion} ==================================== Notice that if $\lambda<\lambda_1(q_0)$, then the nonlinear boundary value problem has no solution (see, e.g., [@brezis]). However, the existence of a solution of $(P)$ in the case $\lambda<\lambda_1(q_0)$ is unknown. Since equation has different solutions for different values of $p$, the answer to the inverse optimization spectral problem $(P)$ depends essentially on the prescribed norm $\|\cdot\|_{L^p}$. We are unable to offer criteria necessary to identify preferred norms. However, it should be noted that a similar problem about choosing a suitable norm has already been encountered in the literature on the theory of inverse problems (see, e.g., [@Mar; @SavShk]). It is an open problem to solve the $m$-parametric inverse optimization spectral problem $(P^m)$:*Given $\lambda_1,...,\lambda_m \in \mathbb{R}$ and $q_0 \in L^p(\Omega)$. Find a potential $\hat{q} \in L^p(\Omega)$ such that $\lambda_i=\lambda_i(\hat{q})$, $i=1,...,m$ and* $$\|q_0-\hat{q} \|_{L^p}=\inf\{||q_0-q||_{L^p}:~~ \lambda_i=\lambda_i(q), ~i=1,...,m,~q \in L^p(\Omega)\}.$$ The above derivation of the nonlinear boundary value problem can be formally generalized to apply to problem $(P^m)$. In that case, one can obtain the following system of nonlinear equations $$\label{sys} \begin{cases} -\Delta u_i+q_0 u_i=\lambda_i u_i- (\sum_{j=1}^m \mu_j u_j^{2})^\frac{1}{p-1}u_i,~~i=1,2,...,m,\\ ~~u_i|_{\partial \Omega}=0,~~~~i=1,2,...,m. \end{cases}$$ where $\mu_i\geq 0$, $i=1,2,...,m$ are some constants so that $$\label{OPEN} \hat{q}=q_0+(\sum_{j=1}^m \mu_j u_j^{2})^\frac{1}{p-1},$$ in agreement with the case $m=1$, where $\hat{q}-q_0=\hat{u}^{2/(p-1)}$ and the nonlinearity is $u^{\frac{p+1}{p-1}}=(u^2)^{\frac{1}{p-1}}u$. However, we do not know how to justify this approach. Moreover, as far as we know, the existence and uniqueness of a solution for with $q_0 \in L^p$ is also an open problem. Nevertheless, it would be useful to verify this approach numerically. V. Ambarzumian, Über eine Frage der Eigenwerttheorie.
Zeitschrift für Physik A Hadrons and Nuclei 53 (9) (1929) 690-695. H. Brezis, L. Oswald, Remarks on sublinear elliptic equations. Nonlinear Analysis: Theory, Methods & Applications 10 (1) (1986) 55-64. G. Borg, Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe. Acta Mathematica, 78 (1) (1946) 1-96. K. Chadan, D. Colton, L. Päivärinta, and W. Rundell, An introduction to inverse scattering and inverse spectral problems. Society for Industrial and Applied Mathematics. 1997. J. I. Díaz, J. E. Saá, Existence et unicité de solutions positives pour certaines équations elliptiques quasilinéaires. C. R. Acad. Sci. Paris Sér. I Math., 305 (12) (1987) 521-524. D. E. Edmunds, W. D. Evans, Spectral theory and differential operators. Vol. 15. Oxford: Clarendon Press, 1987. I. M. Gel’fand, B. M. Levitan, On the determination of a differential equation from its spectral function. Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya, 15 (4) (1951) 309-360. L. Lusternik, Sur les extrêmes relatifs des fonctionnelles. Matematicheskii Sbornik, 41 (3) (1934) 390-401. M. Marletta, R. Weikard, Weak stability for an inverse Sturm-Liouville problem with finite spectral data and complex potential, Inverse Problems, 21 (4) (2005) 1275-1290. M. Reed, B. Simon, Methods of modern mathematical physics. Vol. 2. Fourier analysis, self-adjointness. New York: Academic Press, 1975. M. Reed, B. Simon, Methods of modern mathematical physics. Vol. 4. Analysis of operators. New York: Academic Press, 1978. A.M. Savchuk, A.A. Shkalikov, Recovering of a potential of the Sturm–Liouville problem from finite sets of spectral data. Spectral Theory and Differential Equations, Amer. Math. Soc. Transl. 233 (2) (2014) 211-224. D. Gilbarg, N. S. Trudinger, Elliptic partial differential equations of second order. Springer, 2015. N. S. Trudinger, Linear elliptic operators with measurable coefficients. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze 27 (2) (1973) 265-308. M. Struwe, Variational methods. Berlin etc.: Springer, 1990.
--- abstract: 'A dark-matter-only Horizon Project simulation is used to investigate the environment- and redshift- dependence of accretion onto both halos and subhalos. These objects grow in the simulation via mergers and via accretion of diffuse non-halo material, and we measure the combined signal from these two modes of accretion. It is found that the halo accretion rate varies less strongly with redshift than predicted by the Extended Press-Schechter (EPS) formalism and is dominated by minor-merger and diffuse accretion events at $z=0$, for all halos. These latter growth mechanisms may be able to drive the radio-mode feedback hypothesised for recent galaxy-formation models, and have both the correct accretion rate and form of cosmological evolution. The low redshift subhalo accretors in the simulation form a mass-selected subsample safely above the mass resolution limit that reside in the outer regions of their host, with $\sim 70\%$ beyond their host’s virial radius, where they are probably not being significantly stripped of mass. These subhalos accrete, on average, at higher rates than halos at low redshift and we argue that this is due to their enhanced clustering at small scales. At cluster scales, the mass accretion rate onto halos and subhalos at low redshift is found to be only weakly dependent on environment and we confirm that at $z\sim2$ halos accrete independently of their environment at all scales, as reported by other authors. By comparing our results with an observational study of black hole growth, we support previous suggestions that at $z>1$, dark matter halos and their associated central black holes grew coevally, but show that by the present day, dark matter halos could be accreting at fractional rates that are up to a factor $3-4$ higher than their associated black holes.' author: - | H. Tillson$^{1}$[^1], L. Miller$^{1}$ & J. 
Devriendt$^{1,2}$\ $^1$Department of Physics, University of Oxford, The Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, UK\ $^2$Centre de Recherche Astrophysique de Lyon, UMR 5574, 9 Avenue Charles André, F69561 Saint Genis Laval, France bibliography: - 'hsub\_accn\_new.bib' date: 'Accepted 2011 June 22. Received 2011 June 22; in original form 2010 September 27' title: The environment and redshift dependence of accretion onto dark matter halos and subhalos --- \[firstpage\] galaxies:halos – galaxies:formation – cosmology:theory Introduction ============ In the $\Lambda$CDM model, structures are seeded with initial fluctuations and merge to form bound, virialized dark matter halos that become more massive as the universe ages. Luminous galaxies form as baryonic matter cools and condenses at halo centres [@White_Rees; @Fall; @Blumenthal]. Dense dark halos, however, often contain embedded subhalos and it has been demonstrated that low mass subhalos can survive in their hosts for several billion years [@Tormen_97; @Tormen_98; @Moore]. One challenge for cosmological N-body simulations is to link dark matter halos and subhalos with luminous galaxies [@Bower06; @Conroy_06; @Vale]. Understanding this relationship has proved difficult [@Diemand04; @Gao04; @Nagai] and most explanations are provided by semi-analytic models (@Croton06, hereafter C06). Nonetheless, a vital ingredient in explaining luminous galaxy growth in large groups and clusters is an understanding of how dark matter halos and subhalos accrete mass in dense environments. The standard implementation of the Extended Press-Schechter (hereafter EPS) formalism [@Bond; @Lacey] can be used to analytically compute the average mass accretion onto a halo of mass $M_{H}$.
[@Miller06] (hereafter M06) showed that: $$\label{eMill} {\left< \dot{M}_{H} \right>} \simeq M_{H}\left|\frac{d\delta_{c}}{dt}\right|f(M_{H}),$$ $$\label{eMill2} \frac{d\delta_{c}}{dt}=\frac{d\delta_{c}}{dD}\frac{dD}{dz}\frac{dz}{dt}$$ where $\delta_{c}(t)$ is the critical density contrast above which an object will collapse to form a bound structure, $D(z)$ is the linear growth factor and $f(M_{H})$ is a weak function of halo mass (for alternative analytic expressions for halo growth derived using EPS theory, see @Hiotelis06 and @Neistein08). Equation (\[eMill\]) can in principle be used for all redshifts and halo masses, but a recent simulation study by [@Cohn08] tested it against the accretion histories of massive halos at $z=10$ and found that it overestimated their accretion rate. A simplified assumption of the EPS framework inherent in equation (\[eMill\]) is that halos accrete at rates that do not depend on their environment. This restrictive assumption, however, is not a prediction of the theory and so various authors have recently relaxed it. [@Sandvik] implemented a multidimensional generalization of the EPS formalism and used an ellipsoidal collapse model where collapse depended both on the overdensity and the shape of the initial density field. They found only a weak dependence between halo formation redshift and halo clustering which was stronger for more massive halos, in disagreement with the reported halo assembly bias in numerical simulations [@Gao05; @Gao07; @Maultbetsch]. [@Zentner] modified the EPS formalism by using a Gaussian smoothing window function, and [@Desjacques] allowed the density threshold to have an environment dependence, but both authors found that dense large-scale environments preferentially contain halos that form later. We are hence lacking an EPS model that is able to account for halo assembly bias and predict a modified analytic version of equation (\[eMill\]) for the halo accretion rate. 
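The EPS rate of equation (\[eMill\]) is simplest to evaluate in the Einstein-de-Sitter limit invoked later in the text, where $d\delta_{c}/dz=1.686$ and $|dz/dt|=(1+z)H(z)$ with $H(z)=H_{0}(1+z)^{3/2}$. A minimal numerical sketch; the constant $f$ and the approximate unit-conversion factor are illustrative assumptions, not values taken from the paper:

```python
def eds_accretion_rate(M_H, z, H0=73.0, f=1.0):
    """EPS mean accretion rate, eq. (1), in the Einstein-de-Sitter limit.

    M_H : halo mass [Msun]; z : redshift; H0 : Hubble constant [km/s/Mpc];
    f   : stand-in for the weak mass dependence f(M_H) (assumed constant).
    Returns |dM/dt| in Msun/Gyr.
    """
    H0_gyr = H0 * 1.0227e-3                        # km/s/Mpc -> 1/Gyr (approx.)
    ddelta_dz = 1.686                              # EdS: d(delta_c)/dz = 1.686
    dzdt = (1.0 + z) * H0_gyr * (1.0 + z) ** 1.5   # |dz/dt| = (1+z) H(z), EdS
    return M_H * ddelta_dz * dzdt * f

# A 1e12 Msun halo accretes much faster at z = 2 than at z = 0 in this limit:
rate_z2 = eds_accretion_rate(1e12, 2.0)
rate_z0 = eds_accretion_rate(1e12, 0.0)
```

The ratio `rate_z2 / rate_z0` is $3^{5/2}\approx15.6$, set purely by the EdS scalings; the simulated trajectories discussed below vary more weakly with redshift than this.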
Deviations from the EPS accretion rate are therefore expected in the highly non-linear regime of cluster formation at $z<1$, as equation (\[eMill\]) cannot account for accretion onto subhalos embedded within larger halos. To date, several authors have defined prescriptions for computing accretion onto halos using dark-matter-only simulations: - [@Wechsler] $-$ henceforth W02 $-$ identified the mass accretion history (hereafter MAH) of $\sim14000$ halos at $z=0$ using the ART code [@Kravtsov] in a WMAP1 cosmology. Using their algorithm, W02 found that the accretion histories of their present day halos were, on average, well fitted by: $$\label{eWech} M_{H}(z)=M_{0}e^{-\alpha(z_{f})z}$$ where $M_{0}$ is the present day mass of a halo and $\alpha(z_f)$ is a parameter which describes its formation epoch. Ignoring the slight mass dependencies of $\alpha(z_{f}(M_{H}))$ and the $f(M_{H})$ term in equation (\[eMill\]), it can be seen that equation (\[eWech\]) is a sensible fit for W02 to have chosen because in the case of an Einstein-de-Sitter (EdS) universe, their ${\left< \dot{M}_{H} \right>}$ has the same $M_{H}\dot{z}$ dependence as equation (\[eMill\]), differing only in normalization ($d\delta_{c}/dz=1.686$ for an EdS universe). - [@VanDenBosch] used the N-branch merger tree algorithm of [@Somerville] and found that a two parameter fit better described the MAHs of his halos, although M06 demonstrated that this two parameter fit becomes unphysical locally as it predicts that present day halos are not accreting mass. 
[@VanDenBosch] also provided a relation for $\alpha$ and $z_{f}$ that can be used in equation (\[eWech\]): $$\label{eBosch} \alpha=\left(\frac{z_{f}}{1.43}\right)^{-1.05}$$ but it is more common to define $z_f$ as the epoch at which the present day halo of interest had half of its present day mass: $$\label{eAlpha} z_{f}=\frac{\ln{2}}{\alpha}$$ - More recently, [@McBride] investigated the MAHs of $\sim500000$ halos from the Millennium simulation with $M_{H}>10^{12}M_{\odot}$ and $0\leq z\leq 6$ and found that only $\sim 25\%$ were well described by equation (\[eWech\]). They introduced a second parameter, $\beta$, and showed that a function of the form: $$\label{eMcBride} M_{H}(z)\propto (1+z)^{\beta}e^{-\gamma z}$$ provided a better fit to the halo MAHs. - [@Fakhouri10] used a joint dataset from the Millennium I and II simulations and found that equation (\[eMcBride\]) held across five decades in mass up to $z=15$. These listed accretion fits only apply when averaged across all environments. In order to understand accretion in dense regions such as clusters, one must resolve substructure and design an accretion algorithm that can account for accretion onto halos and all levels of substructure. The difficulties in devising such an accretion algorithm are two-fold: firstly, it should define a single progenitor for each and every (sub)halo which accurately represents that object at earlier epochs, and secondly, it must conserve mass (which becomes harder to do when one introduces subhalos). In this study, outputs from a high resolution dark-matter-only N-body simulation have been used and a new robust method for defining accretion onto halos and subhalos is provided, building on previous simulation studies and moving beyond EPS theory. The primary aim is to investigate exactly how accretion onto halos and subhalos behaves as a function of redshift, mass and environment. 
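The fitting forms listed above are straightforward to compare on a given mass accretion history. A sketch assuming nothing beyond equations (\[eWech\]), (\[eAlpha\]) and (\[eMcBride\]), with the normalisation of the McBride form folded into $M_0$:

```python
import numpy as np

def w02_mah(z, M0, alpha):
    """One-parameter W02 fit, eq. (3): M(z) = M0 exp(-alpha z)."""
    return M0 * np.exp(-alpha * z)

def mcbride_mah(z, M0, beta, gamma):
    """Two-parameter McBride form, eq. (6): M(z) = M0 (1+z)^beta exp(-gamma z),
    with the proportionality constant folded into M0."""
    return M0 * (1.0 + z) ** beta * np.exp(-gamma * z)

def formation_redshift(alpha):
    """Eq. (5): the redshift at which the halo had half its present-day mass."""
    return np.log(2.0) / alpha

# For the pure exponential, z_f is exactly where M(z) drops to M0 / 2:
alpha = 0.7
half_mass = w02_mah(formation_redshift(alpha), M0=1e12, alpha=alpha)
```

Setting $\beta=0$ in the McBride form recovers the W02 exponential, which is why only the $\sim 25\%$ of halos with negligible $\beta$ are well described by equation (\[eWech\]).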
One way of measuring a (sub)halo’s environment is to compute the two-point correlation function, as this yields information on halo bias or degree of clustering. [@Percival] used four $\Lambda$CDM simulations with differing box sizes, $\sigma_{8}$ values and particle masses, with each simulation containing $256^3$ particles, to examine four different halo merger samples at $z=2$. They found no difference in clustering at this redshift between the merger samples of halos of a given mass. We examine the clustering of halos and subhalos in a higher resolution simulation and test this conclusion at $z\sim2$ and at lower redshifts. A natural corollary is then to investigate whether the dark matter distribution alone has any relevance to SFR/galaxy downsizing [@Cowie; @Brinchmann; @Bauer; @Bundy; @Faber; @Panter] and AGN downsizing [@C03; @Steffen; @Barger; @Hasinger; @Hopkins]. AGN feedback provides a plausible explanation of galaxy downsizing, and has been successfully implemented in semi-analytic models (@Bower06 [@Cattaneo06]; C06) and has been observed as the phenomenon responsible for the suppression of star formation in ellipticals in the local universe [@Schawinski07; @Schawinski09]. AGN downsizing is less well understood and is a two-fold degenerate phenomenon driven either by low mass black holes accreting at near-Eddington rates [@Heckman] or by supermassive black holes accreting at low rates [@Babic]. The structure of this paper is as follows. Section \[simulation\_section\] describes the N-body simulation that was used and Section \[describe\_algs\_section\] explains the accretion algorithm. Section \[results\_section\] examines accretion onto halos and subhalos within groups and clusters and draws comparisons with EPS and W02. Section \[discussion\_section\] discusses the implications of the results of this paper and Section \[conclusions\_section\], the final section, lists our conclusions. 
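The two-point correlation function used above as the environment measure can be estimated from simple pair counts. A minimal sketch using the natural estimator $\xi(r)=DD/RR-1$ on a mock catalogue; a real analysis would use the more robust Landy-Szalay estimator and periodic boundaries, both omitted here for brevity:

```python
import numpy as np

def xi_natural(data, randoms, r_edges):
    """Natural estimator of the two-point correlation function,
    xi(r) = DD/RR - 1, where DD and RR are pair counts in separation
    bins for the data and for an unclustered random catalogue."""
    def pair_seps(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return d[np.triu_indices(len(pts), k=1)]

    dd = np.histogram(pair_seps(data), bins=r_edges)[0].astype(float)
    rr = np.histogram(pair_seps(randoms), bins=r_edges)[0].astype(float)
    # Normalise RR to the number of data pairs before taking the ratio.
    norm = (len(data) * (len(data) - 1)) / (len(randoms) * (len(randoms) - 1))
    with np.errstate(divide="ignore", invalid="ignore"):
        return dd / (rr * norm) - 1.0

# Clustered mock: 10 tight clumps in a 10^3 box versus a uniform catalogue.
rng = np.random.default_rng(2)
centres = rng.uniform(0.0, 10.0, size=(10, 3))
data = np.vstack([c + rng.normal(0.0, 0.05, size=(20, 3)) for c in centres])
randoms = rng.uniform(0.0, 10.0, size=(1000, 3))
xi = xi_natural(data, randoms, r_edges=np.array([0.0, 0.3, 1.0, 3.0]))
```

The clumpy mock shows a strong excess of small-separation pairs over random, i.e. $\xi\gg0$ in the smallest bin, which is the sense in which enhanced small-scale clustering is diagnosed below.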
A WMAP3 cosmology has been adopted throughout with $\Omega_m=0.24, \Omega_\Lambda=0.76, \Omega_b=0.042, n=0.958, h=0.73$ and $\sigma_8=0.77$. All masses are in units of $M_{\odot}$. The simulation {#simulation_section} ============== We have analyzed outputs from one of the Horizon Project simulations[^2] which used the `GADGET-2` code [@Springel05] and tracked the evolution of $512^3$ dark matter particles within a box of comoving side length $100h^{-1}$Mpc in a $\Lambda$CDM universe. The AdaptaHOP halo-finder [@Aubert] $-$ hereafter AHOP $-$ was used to detect halos. AHOP assigns a local density estimate to each particle computed using the standard SPH kernel [@Monaghan] which weights the mass contributions from the $N$ closest neighbouring particles ($N$ is usually taken to be $20$). Halos are then resolved by imposing a density threshold criterion and by measuring local density gradients. AHOP is an alternative to the popular friends-of-friends (FOF) halo-finder [@Davis], which groups together particles that are spatially separated by a distance that is less than typically $20\%$ of the mean inter-particle separation. Recently it has been demonstrated that inappropriate definitions of halo mass can introduce large uncertainties in the halo merger rate [@Hopkins10] $-$ FOF, in particular, significantly overestimates the halo merger rate for halos that are about to merge [@Genel09], and so we avoid using it. For a critical quantitative comparison between AHOP and FOF, see [@Tweed]. In order to detect substructure we have used the Most massive Sub-node Method [@Tweed] $-$ hereafter MSM $-$ which successively raises the density thresholds on the AHOP halo until all of its node structure has been resolved. The most massive leaf is then collapsed along the node tree structure to define a main halo, and the same process is repeated for the lower mass leaves, defining substructures of the main halo.
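AHOP's neighbour-based density estimate can be illustrated with a crude stand-in: a top-hat kernel over the $N=20$ nearest neighbours rather than the SPH spline, an assumption made purely for brevity:

```python
import numpy as np

def knn_density(positions, masses, N=20):
    """Crude stand-in for AHOP's per-particle density estimate: sum the
    masses of each particle and its N nearest neighbours and divide by
    the volume of the sphere enclosing them (a top-hat kernel, not the
    SPH spline used by the real code)."""
    pos = np.asarray(positions, dtype=float)
    mass = np.asarray(masses, dtype=float)
    rho = np.empty(len(pos))
    for i, p in enumerate(pos):
        d = np.linalg.norm(pos - p, axis=1)
        idx = np.argsort(d)[: N + 1]      # the particle itself + N neighbours
        h = d[idx].max()                  # radius enclosing the N-th neighbour
        rho[i] = mass[idx].sum() / (4.0 / 3.0 * np.pi * h ** 3)
    return rho

# A tight clump in a diffuse background: clump particles get higher density,
# so a density threshold separates them, as in the halo-finding step.
rng = np.random.default_rng(0)
clump = rng.normal(0.0, 0.05, size=(50, 3))
field = rng.uniform(-5.0, 5.0, size=(200, 3))
pos = np.vstack([clump, field])
rho = knn_density(pos, np.ones(len(pos)), N=20)
```

Raising the density threshold progressively, as MSM does, would first isolate the clump and then its densest core, which is the idea behind resolving the node structure of a host halo.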
For detailed descriptions of alternative subhalo-finders, like `SUBFIND`, see [@Giocoli]. The output timesteps from the Horizon simulation were separated by $0.01$ in scale factor from $z=99$ to the present day, but we restricted our analysis to halos and subhalos in the redshift range $0\leq z\leq9$. The mass of each particle, $M_p$, was $6.8 \times 10^{8} M_{\odot}$ and halos and subhalos with a recorded accretion value contained at least $40$ particles. The mass of a (sub)halo used in this study corresponded to the total mass, $M_{T}$, detected by the halo-finder. For reference, the MSM algorithm resolved 223781 objects at $z=0$ and $\sim20\%$ of these objects were subhalos. The TreeMaker code [@Tweed] was then used to link together all the time outputs by finding the fathers and sons of every halo and subhalo. Devising a halo and subhalo accretion algorithm {#describe_algs_section} =============================================== This section comes in three main parts. We begin by defining the main branch onto a given object (“object” henceforth refers to halos and/or subhalos). We then provide an algorithm which identifies objects that take part in fake mergers. The section concludes with an explanation of the algorithm that was used to compute accretion onto bound halos and subhalos. A simple merger --------------- In Fig.\[simple\_merge-schematic\], halo $i$ and $j$ at timestep $t_{2}$ merge to form halo $k$ and $l$ at timestep $t_{1}$, where $t_{1} > t_{2}$. In order to compute the accretion rate onto $k$, one must define a ‘main father’ for $k$ and various authors have adopted different prescriptions for identifying the main father of a halo (@Springel01; W02). W02, for example, define the main father of $k$ as the halo that contributes the most mass to $k$ but require the main father’s most bound particle to be part of $k$ if the main father is not at least half $k$’s mass. These rules force each halo to have a single main son and a single main father. 
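The main-father bookkeeping reduces to a one-liner under the contributes-most-mass convention; the ids and donated masses below are illustrative:

```python
def main_father(donations):
    """Main father of a son object under the contributes-most-mass
    convention: the father donating the largest mass to the son, given a
    dict mapping father id -> mass donated [Msun]. None for an orphan."""
    return max(donations, key=donations.get) if donations else None

# Halo k receives most of its mass from j, so j is its main father,
# even if another father i is physically more massive than j.
father_k = main_father({"i": 1e11, "j": 8e11})
orphan = main_father({})
```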
There is freedom to choose the main father of $k$ as either the physically most massive father or the father that contributes the most mass. We have found little difference between results obtained from using these two definitions and so we adopt the latter definition throughout. In Fig.\[simple\_merge-schematic\], halo $k$’s main father is $j$ and the main branch is shown by the solid line. Anomalous events ---------------- Anomalous events describe halos that spatially coincide at one timestep and then separate at later timesteps. These halos might take several timesteps to form a bound merger halo or they might never coincide again. One must hence be careful that their accretion estimator accounts for accretion onto bound objects only. To illustrate this point further, one would naïvely expect the mass accretion rate of halo $k$ in Fig.\[simple\_merge-schematic\] at timestep $t_1$ to be $(M_{k}-M_{j})/(t_1-t_2)$ but when this is applied to all the halos at timestep $t_1$ there are a larger than expected number of negative accretion events (halos aren’t losing mass in the hierarchical halo growth paradigm). Physically it is perfectly possible for mergers to result in mass loss along the main branch, as during a merger process, material is stripped from bound objects. A system of objects undergoing a merger will, however, eventually form relaxed, bound objects at later times and so pinpointing the time interval during which mass is accreted is crucial (we do not measure mass loss via stripping in this work). ![Halo $i$ and $j$ at timestep $t _{2}$ merge and form two halos, $k$ and $l$ at the later timestep $t _{1}$. 
Halo $k$’s main father is $j$ and the main branch is shown by the solid line.[]{data-label="simple_merge-schematic"}](./simple_merge.eps "fig:"){width="8cm" height="4cm"} -1.5em ### Identifying anomalous events {#anomalous_section} Testing to see whether an object is bound is one definitive way of excluding such fake events and it is common practice to sum the kinetic and potential energies of each object and disregard those objects whose total energy is positive [@Maciejewski]. We combine this technique with an independent anomalous detection method to identify unbound objects at each redshift. Our prescription for identifying objects participating in anomalous events is as follows. The fathers of an object $k$ at timestep $t_2$ are found and if object $k$ has two or more halo fathers that each donate a mass $M_{D}\geq 20M_{p}$, then object $k$ is flagged as a possible fake merger candidate. ($20M_{p}$ is chosen here rather than the mass resolution limit of $40M_{p}$ used in later sections, because $20M_{p}$ is a common mass resolution limit used in other simulation studies and it also maximises the number of possible anomalous events.) The sons of $k$ are then found and if $k$ donates a mass $M_{D}\geq 20M_p$ to two or more halos, then it has fragmented and it is identified as an anomalous event candidate. In the case of AHOP halos in this study, which average over their environment and whose substructure is not resolved, this is the sole anomalous event criterion and the same criterion is then imposed on the next halo at timestep $t_2$. For subhalos an additional condition is imposed. Imagine that two halos at timestep $t_{3}$ merge to form a halo, $k$, which hosts a subhalo at the subsequent timestep $t_{2}$. Halo $k$ and its subhalo are then detected as separate halos at the following timestep $t_{1}$ ($t_{1}>t_{2}>t_{3}$).
This system has transitioned over three timesteps from two halos, to a halo and a subhalo and back to two halos again, and is hence an anomalous event as no merger has taken place. The subhalos of a given host halo are therefore also examined if the host does not fragment. If a subhalo at $t_{2}$ donates a mass $M_{D}\geq 20M_{p}$ to a halo at $t_{1}$ that is a different halo to the halo son of its host, then it is identified as part of an anomalous event, as are its subhalos (if it has any) and its host. The key ideas of this anomalous event detection method are therefore: - searching for channels that receive/donate at least $20M_{p}$ from/to two or more different halos and - ensuring that the host and all associated substructures are flagged in the case of any one of these objects being classified as participating in an anomalous event. ### Identifying unbound objects {#no_anom_section} Table \[Tanom\] assesses the relative importance of unbound MSM objects above the mass threshold in the simulation ($M\geq 40M_{p}$) for each of the redshifts shown in column $1$ (these redshifts have been chosen because the number of subhalos increases with decreasing redshift in the simulation, as clusters form). The percentages in Table \[Tanom\] express the number of objects above the threshold mass satisfying the condition in each column as a fraction of the total number of objects above the threshold mass at the redshift in question, with the exception of the bracketed values in column $3$, which show the fraction of anomalous events that are unbound. There is a positive correlation between the independently identified anomalous events and unbound objects, with a large fraction of the anomalous events being unbound (henceforth unbound refers either to an object with total energy $E_{T}\geq0$ or an object participating in an anomalous event or both). 
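The fragmentation test at the heart of the anomalous-event criterion can be sketched in a few lines; this is a simplification covering the halo case only, with the additional subhalo condition and the energy criterion omitted:

```python
M_P = 6.8e8            # simulation particle mass [Msun]
THRESH = 20 * M_P      # donation threshold used by the anomalous-event test

def is_anomalous(father_donations, son_donations, thresh=THRESH):
    """Fragmentation test: an object is flagged as anomalous if it both
    receives >= 20 M_p from two or more fathers AND donates >= 20 M_p to
    two or more sons (i.e. it later fragments).  Each dict maps object
    ids to donated masses [Msun]."""
    big_fathers = sum(m >= thresh for m in father_donations.values())
    big_sons = sum(m >= thresh for m in son_donations.values())
    return big_fathers >= 2 and big_sons >= 2

# Assembled from two fathers, then splitting into two sons: a fake merger.
fragmenting = is_anomalous({1: 30 * M_P, 2: 25 * M_P},
                           {3: 40 * M_P, 4: 21 * M_P})
# A quietly growing object with one father and one son is not flagged.
quiescent = is_anomalous({1: 100 * M_P}, {3: 100 * M_P})
```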
Not all objects in column $3$ have $E_{T}\geq0$, however, and so there is a small population of unbound objects at each redshift that would be missed if just a requirement of $E_{T}\geq0$ were imposed on every object. \[Tanom\] Only bound objects above the mass threshold can have a recorded accretion value in this study, despite $\sim 38\%$ of all the objects at each of the redshifts shown in Table \[Tanom\] having a mass below the chosen threshold limit. Bound objects below threshold, however, are not removed from the sample and so it is possible for a bound object with $M<40M_{p}$ to be a main father. We therefore avoid biasing the accretion events in the simulation, whilst ensuring that only well resolved objects have an accretion value. Column $4$ shows the fraction of objects above the mass threshold with a recorded accretion value. A very small fraction of bound objects with $M\geq 40M_{p}$ do not have a measured accretion rate because they do not satisfy some additional criteria imposed by the accretion algorithm, which we explain in the following section. The accretion algorithm {#shalo-accn-alg-section} ----------------------- In detecting substructure, [@Springel01] required that several of the most bound particles of the main father were included in the main son $-$ this was more robust than tracking the evolution of the single most bound particle, which essentially performs a random walk across time. We have defined the main son as the son which receives the most mass from the object of interest, consistent with our main father definition. ![A schematic illustrating the halosub accretion algorithm that accounts for accretion onto halos and subhalos. In this example object $k$, whose main father is object $j$ (solid line), has been identified as the main son of object $i$ (solid line). The accretion onto $k$ using the halosub method is therefore $(1-f_{j})M_{k}$, where $f_{j}$ is the fraction of $k$’s mass that comes from $j$. 
Object $m$ is not the main son of object $i$ and because it doesn’t have any other fathers it is skipped. Object $p$’s main father is $q$, hence the accretion onto $p$ is $(1-f_{q})M_{p}$. The halosub method therefore only ever records zero or positive accretion rates.[]{data-label="halosub-accn-schematic"}](./subhalo_accretion.eps "fig:"){width="8cm" height="4cm"} -1em We shall henceforth refer to the algorithm that computes accretion onto halos and associated substructures as the “halosub” method and it is illustrated in Fig.\[halosub-accn-schematic\]. For object $i$ at timestep $t_{2}$ the main son $k$ (solid line) is identified. Using our main son definition this means that most of $i$’s mass goes to $k$ and the remainder goes to $m$ and $p$. The father that contributes the most mass to $k$ is then found; in this example $j$ is the main father (solid line). The mass accretion onto $k$ is therefore $(1-f_{j})M_{k}$ where $f_{j}$ is the fraction of $k$’s mass that comes from object $j$. Object $k$ is now flagged and the accretion onto the other sons of $i$, $m$ and $p$, is considered. Since $m$ is not the main son of $i$ and $m$ doesn’t have any other fathers, an accretion value for $m$ is not recorded and it is flagged as an orphan. If however one of the sons, $p$, of the object of interest does experience mass accretion, we identify the main father, $q$, and record the mass accreted: $(1-f_{q})M_{p}$. Object $p$ would then also be flagged. 
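The halosub accretion rule of Fig.\[halosub-accn-schematic\] reduces to a few lines; this sketch omits the flagging of already-visited objects:

```python
def halosub_accretion(M_son, father_contributions):
    """Accretion onto a son object under the halosub method: find the main
    father (largest mass contribution) and record the son's mass NOT
    supplied by it, i.e. (1 - f_main) * M_son.  Never negative; an object
    with no fathers is an orphan and gets no recorded value."""
    if not father_contributions:
        return None
    main = max(father_contributions, key=father_contributions.get)
    f_main = father_contributions[main] / M_son
    return max(0.0, (1.0 - f_main) * M_son)

# Object k of Fig. 2: mass 1e12 Msun, 7e11 of which came from main father j.
acc_k = halosub_accretion(1e12, {"j": 7e11, "i": 2e11})
# A purely stripped object records zero accretion, never a negative value:
acc_stripped = halosub_accretion(1e12, {"j": 1.2e12})
```

The `max(0.0, ...)` clamp implements the convention that mass-loss events are treated as zero-accretion events.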
To summarise, we list the principal features of the halosub method: - the measured mass accretion onto an object represents the sum of diffuse accretion (material not bound to any resolved structure) and merger-driven growth - mass loss events are considered to be zero accretion events: measured accretion signals in this study are never negative - all objects with a recorded accretion value are bound and have a mass $M\geq 40M_{p}$ - no distinction is made between halos and different levels of substructure Since we do not attempt to measure the mass lost from an object during a given time interval, the accretion rate in this study can be thought of as an upper limit. Note that objects which only lose mass and have a recorded accretion rate of zero are identified as systems where the bound main son of the object of interest has only one bound father. A flagged object means that either the accretion onto that object has already been accounted for or that object has been identified as an orphan. Limitations ----------- Other than finite mass and time resolutions which are shortcomings of any simulation, we consider the growth of halos and subhalos in a $\Lambda$CDM universe without a prescription for the gas physics. The dark-matter-only simulation satisfies the objective of this study, however: to determine whether halo and subhalo accretion is dependent on environment. The accretion algorithm excludes tidal stripping from the measured accretion rate but objects are stripped of mass in the simulation as they undergo mergers and this reduces their mass. Results {#results_section} ======= Throughout this section: 1. “object” refers to halos and/or subhalos. 2. the mass of an object corresponds to the total mass, $M_{T}$, detected by the halo-finder. 3. only bound objects above the mass threshold, $M\geq 40M_{p}$, can have a recorded accretion value. 4. the measured mass accretion is the sum of diffuse- and merger-driven accretion: we have not measured mass loss. 5. 
$\mu \equiv \dot{M}/M$ denotes the specific accretion rate, with units of Gyr$^{-1}$, onto an object of mass $M$. 6. $\delta \equiv \delta M_{H}/M_{H}$, where $M_{H}$ represents the mass of a halo. Accretion onto dark matter halos {#dark_halo_accretion_section} -------------------------------- ### Comparison with EPS Fig.\[halo\_accn\] shows the average accretion rate onto the AHOP halos from the simulation as a function of redshift and halo mass. Halos with recorded accretion values are binned in mass at each redshift and the average accretion rate for each mass bin is computed. Averages of the corresponding mass bins over redshift then yield constant ${\left< M_{H} \right>}$ values (W02 adopt an alternative technique, however, by binning the $z=0$ halos in mass and then averaging over all the accretion trajectories in each bin at each redshift). The solid lines show the accretion rates onto the AHOP halos using the halosub method, and the error bars indicate the $1\sigma$ errors on the mean accretion rate. The EPS predictions for each of the ${\left< M_{H} \right>}$ bins, computed using equation (\[eMill\]), are shown as the dashed lines. ![The average halo accretion rate as a function of redshift and halo mass. The accretion values onto the AHOP halos are shown as the solid lines for each of the five ${\left< M_{H} \right>}$ bins, with the errors corresponding to the $1\sigma$ errors on the mean accretion rate. The EPS curves using equation (\[eMill\]) are shown as the dashed lines for each mass bin.[]{data-label="halo_accn"}](./halo_dmdt.eps){width="\columnwidth" height="7cm"} Fig.\[halo\_accn\] shows that the simulation mass trajectories have a lower gradient across redshift than the EPS curves, which overestimate the accretion rate onto the lowest mass halos in the simulation at high redshift by a factor of $\sim 2$, and underestimate it by a factor of $\sim1.6-1.8$ at $z=0$. 
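The binning procedure behind Fig.\[halo\_accn\] $-$ mass-binned mean accretion rates with $1\sigma$ errors on the mean $-$ can be sketched on arbitrary arrays; the mock masses and rates below are illustrative, not simulation values:

```python
import numpy as np

def binned_mean_rate(masses, rates, n_bins=5):
    """Bin accretors in log mass and return, per bin, the mean mass, the
    mean accretion rate and the 1-sigma error on the mean."""
    logm = np.log10(masses)
    edges = np.linspace(logm.min(), logm.max() + 1e-9, n_bins + 1)
    which = np.digitize(logm, edges) - 1
    out = []
    for b in range(n_bins):
        sel = which == b
        if not sel.any():
            continue
        r = rates[sel]
        out.append((masses[sel].mean(), r.mean(),
                    r.std(ddof=1) / np.sqrt(r.size)))
    return out

# Mock accretors: masses over 3.5 dex and rates roughly proportional to M.
rng = np.random.default_rng(1)
m = 10 ** rng.uniform(10.5, 14.0, 2000)
rate = 0.05 * m * (1.0 + rng.normal(0.0, 0.3, m.size))
bins = binned_mean_rate(m, rate)
```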
It is tempting to think that the enhanced accretion onto halos with respect to EPS theory at low redshift results from the exclusion of mass loss in our measured halo accretion rate. However, EPS doesn’t account for mass loss from halos either: halos only grow with time by construction. The offset with EPS should therefore be regarded as an offset in gradient and Fig.\[halo\_accn\] implies that the [@Lacey] EPS formalism may only require minor adjustment to reproduce the simulated trajectories. ### The different accretion modes ![The total mass accretion rate onto the AHOP halos per comoving cubic Mpc as a function of redshift, halo mass and $\delta$ ($\equiv\delta M_{H}/M_{H}$). The mass bins correspond to the ${\left< M_{H} \right>}$ bins in Fig.\[halo\_accn\], with the lower mass curves shifting to higher $z$. The dashed and thin solid lines show each halo mass bin decomposed into halos with $\delta\leq0.02$ (minor-merger $\&$ diffuse accretion) and $\delta\geq0.08$ (major-merger $\&$ diffuse accretion) respectively. The thick solid lines show the mass trajectories integrated over all $\delta$.[]{data-label="dm_by_m"}](./Mdot_vol_vs_z_dm_by_m_thresh.eps){width="\columnwidth" height="7cm"} ![image](./mass_func.eps) The mass accreted onto the AHOP halos in Fig.\[halo\_accn\] is the summed contribution of diffuse accretion events and minor and major-merger events, hence in Fig.\[dm\_by\_m\] we examine the relative importance of these accretion modes as a function of halo mass and redshift. At each redshift, the dimensionless quantity $\delta$ ($\equiv \delta M_{H}/M_{H}$) was computed for each accretion event: the dashed lines and the thin solid lines show halos with $\delta\leq 0.02$ (minor-mergers $\&$ diffuse accretion) and $\delta\geq 0.08$ (major-mergers $\&$ diffuse accretion) respectively. The total mass accretion rate per comoving cubic Mpc for halos in a given mass bin and of a given $\delta$ at each redshift was then computed. 
The thick solid lines show the total mass accretion rate per comoving cubic Mpc integrated over all $\delta$. For a given linestyle, the lower mass curves shift to higher redshifts. At high redshift, all halos are found to accrete mass diffusely in high fractional events with the peak in activity shifting to lower redshifts for more massive halos. As the mass accreted onto the lowest mass halos via minor-mergers and diffuse accretion starts to plateau at low redshift, minor-merger and diffuse accretion activity onto the more massive halos starts to rapidly accelerate: low mass halos and non-halo material are being accreted onto larger structures. By $z=0$, the combined minor-merger and diffuse accretion signals dominate the growth of all halos. We further remark that the dashed curves have a similar cosmological evolution to the “radio-mode” integrated black hole accretion rate density curves found by C06 and [@Bower06], but leave a more detailed discussion for Section \[Croton\_section\]. We have tested the ability of the cut-in-delta method at distinguishing between merger type by adopting the more classical progenitor mass ratio. Each progenitor $j$ of accretor $k$ was assumed to merge in turn with $k$’s main father $i$, with progenitor mass ratio $\chi\equiv M_{i}/M_{j}$, donating $f_{j}M$ to accretor $k$ at the following timestep, where $f_{j}$ denotes the fraction of $k$’s mass that comes from $j$. Events with $\chi\leq3$ ($\chi>3$) were recorded as major (minor) mergers. We found that major mergers and diffuse accretion events with $\delta\geq0.08$ had a very similar cosmological evolution to the $\delta\geq0.08$ curves in Fig.\[dm\_by\_m\]. The minor merger and diffuse accretion events with $\delta\leq0.02$ also showed a similar behaviour to the $\delta\leq0.02$ curves in Fig.\[dm\_by\_m\], except there were more minor mergers at higher redshift for all mass curves. These features do not affect our conclusions in Section \[Croton\_section\], however. 
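The two classification schemes compared above can be written down directly; the thresholds are those quoted in the text, and the pairing of each progenitor with the main father is simplified to a single mass ratio:

```python
def merger_class(M_main, M_prog, chi_cut=3.0):
    """Progenitor mass-ratio classification: chi = M_main / M_prog, with
    chi <= 3 a major merger and chi > 3 a minor one (M_main >= M_prog)."""
    return "major" if M_main / M_prog <= chi_cut else "minor"

def delta_class(dM, M, lo=0.02, hi=0.08):
    """Cut-in-delta classification used for Fig. 4: delta = dM / M.
    delta <= 0.02 -> minor-merger & diffuse; delta >= 0.08 ->
    major-merger & diffuse; intermediate events fall in neither set."""
    d = dM / M
    if d <= lo:
        return "minor+diffuse"
    if d >= hi:
        return "major+diffuse"
    return "intermediate"

cls_equal = merger_class(1e12, 5e11)    # chi = 2  -> major
cls_small = merger_class(1e12, 1e11)    # chi = 10 -> minor
cls_delta = delta_class(1e11, 1e12)     # delta = 0.1 -> major+diffuse
```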
Fig.\[mass\_func\] shows the shift from major-merger and diffuse- dominated growth at high redshift to minor-merger and diffuse- dominated growth at low redshift, more clearly. The linestyles have the same meaning as in Fig.\[dm\_by\_m\], except we also include the halos with $0.02<\delta<0.08$, shown by the dotted lines. It can be seen that minor-mergers and diffuse accretion events start to significantly contribute to growth for $z<0.5$, and by $z=0$ drive accretion onto all halo masses. Qualitatively we find very similar results to Figs. \[dm\_by\_m\] and \[mass\_func\] when halos are binned in $\mu$ ($\equiv\dot{M}/M$) instead of $\delta$, but the thin major-merger curves in Figs. \[dm\_by\_m\] and \[mass\_func\] decouple from the thick curves at later epochs, for all masses. This is probably because in transitioning from $\delta$ to $\mu$, one must divide $\delta$ by the time interval during which mass is accreted, and at higher redshifts this time interval is smaller (time is not a linear function of redshift) and $\mu$ is hence larger than it is for a given $\delta$ onto a halo of fixed mass at lower redshift. ![The mean specific accretion rate onto halos and subhalos using the halosub method, plotted as a function of object mass for three redshifts corresponding to $z=0.49$ (triple-dot-dashed lines), $z=0.23$ (dashed lines) and $z=0.01$ (solid lines). Lines of a given linestyle from bottom to top represent the accretion onto the AHOP halos and the MSM halos and subhalos respectively. The thick line shows the W02 result, obtained by using equation (\[eWech\]) and equation (\[eAlpha\]) at $z=0.01$.[]{data-label="msm-hop-plots"}](./MSM_HOP_plot.eps){width="0.95\columnwidth" height="7cm"} ![The mean specific accretion rate as a function of mass shown for $z=0.49$ (triple-dot-dashed lines), $z=0.23$ (dashed lines) and $z=0.01$ (solid lines) using the halosub accretion method. 
For a given linestyle, the bottom line shows the MSM halos, the middle line shows the MSM halos and subhalos and the top line shows the MSM subhalos.[]{data-label="accn-components"}](./compare_accs.eps){width="0.95\columnwidth" height="7cm"} Accretion onto subhalos ----------------------- In this section, the AHOP halos are resolved into constituent MSM halos and subhalos and the halosub method is applied to these resolved structures to account for accretion onto objects in groups and clusters. We begin by comparing the AHOP halo and MSM halo and subhalo specific accretion rates with the results found in the W02 simulation study. The mass of a halo or subhalo is henceforth denoted by $M$, in contrast with the previous section which only recorded accretion onto halos with mass $M_{H}$. ### Comparing the halosub accretion algorithm with W02 Fig.\[msm-hop-plots\] plots the average specific accretion rate for all bound objects from the simulation as a function of average object mass for redshifts corresponding to $z=0.49$ (triple-dot-dashed lines), $z=0.23$ (dashed lines) and $z=0.01$ (solid lines). These redshifts have been chosen because the epoch of cluster formation is $z<1$. The lines of a given linestyle from bottom to top represent the accretion onto the AHOP halos and MSM halos and subhalos respectively. The thick line shows the W02 result at $z=0.01$ using equation (\[eWech\]) (strictly, equation (\[eWech\]) holds at $z=0$ but we cannot use our anomalous detection method at this redshift). The W02 result was calculated by binning in mass each $z=0.01$ bound AHOP halo accretor and computing the corresponding average W02 $\alpha$ parameter in equation (\[eAlpha\]) for each mass bin ($\alpha$ is inversely proportional to halo formation redshift). The specific accretion rate onto the MSM objects is systematically larger than the AHOP specific accretion rates at every mass when considering a given redshift. 
The MSM method resolves the substructure that has been averaged out in the AHOP halo, so the main MSM host halo and subhalos are individually less massive than the AHOP counterpart. The offset with MSM is probably caused by dividing by the larger AHOP mass, and this offset increases with increasing mass because at larger masses subhalos occupy a larger fraction of the total AHOP mass. The mass difference between AHOP and the main host MSM halo therefore increases with increasing AHOP mass (and there are more detected halos than subhalos at a given redshift in the simulation, so the halos dominate the MSM halo and subhalo accretion signal). W02 fitted the accretion trajectories of their $z=0$ halos, averaged over environment, in a WMAP1 cosmology, so their result can be directly tested against the AHOP curve at $z=0.01$, which is also averaged over environment but drawn from a universe with a WMAP3 cosmology (W02 argue that their fitting formula does not depend on the chosen cosmology). We find that the W02 specific accretion rate has a stronger mass dependence than found for the AHOP halos in this study, and so for the large galaxy- and group-sized dark halos it overpredicts the specific accretion rate by a factor of $\sim 1.5$. Recent studies have shown that some halo-finding algorithms can lead to large uncertainties in the halo accretion rate [@Genel09; @Hopkins10]. The disagreement across mass with W02 in Fig.\[msm-hop-plots\], however, does not result from differences in halo-finder: the AHOP algorithm is very similar to the modified bound density maxima technique of [@Bullock01] used in W02. The disagreement most likely arises because W02 impose different criteria to identify the main son and main father. In some cases they track the single most bound particle, which is misleading because its trajectory essentially performs a random walk in time.
By contrast, we rigorously identify false merger candidates and adopt an accretion algorithm that tracks channels which donate/receive the most mass (and recall that by allowing a bound object below the mass threshold to be a main father, we do not bias the accretion events). Our method hence avoids using ad hoc criteria. ![image](./xi_plot.eps) ![image](./vrel.eps) ### Accretion onto MSM halos and subhalos Fig.\[accn-components\] shows the specific accretion rate from bottom to top of MSM halos, MSM halos and subhalos, and MSM subhalos with the linestyles having the same meaning as in Fig.\[msm-hop-plots\]. The average specific accretion rates onto halos ($\mu_{H}$) and subhalos ($\mu_{S}$) have weak mass dependencies for each of the redshifts shown: $\langle\mu_{H}\rangle\propto M^{0.2}$ and $\langle\mu_{S}\rangle\propto M^{0.1}$ at $z=0.01$, for example. Each of the halo, halo and subhalo, and subhalo curves shifts downwards with decreasing redshift: the average specific accretion rate onto a subhalo at $z=0.49$ is a factor of $1.3-1.4$ greater than at $z=0.01$, for example. Major-merger and diffuse accretion events at higher redshifts, when the universe was denser, are more prominent. Fig.\[accn-components\] also reveals that the subhalo accretors (and this includes the subhalos with a zero accretion rate) accrete at a larger rate, on average, than the halo accretors for $z<0.5$ at the mass scales shown. This, however, only causes a modest shift from the halo curve to the halo and subhalo curve at each redshift, because there are more halo accretors than subhalo accretors in the simulation, indicating that the subhalos are not responsible for the AHOP to MSM shift in accretion at each redshift in Fig.\[msm-hop-plots\].
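The mass-binned averages quoted above (e.g. $\langle\mu_{H}\rangle\propto M^{0.2}$) reduce to averaging $\mu=\dot{M}/M$ in logarithmic mass bins and fitting a log-log slope. A minimal pure-Python sketch with synthetic accretors (all names and values here are hypothetical and illustrative, not the analysis pipeline itself):

```python
import math

def mean_mu_in_mass_bins(masses, mdots, n_bins=5):
    """Average the specific accretion rate mu = Mdot / M in log-mass bins."""
    log_m = [math.log10(m) for m in masses]
    lo, hi = min(log_m), max(log_m)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for lm, m, md in zip(log_m, masses, mdots):
        i = min(int((lm - lo) / width), n_bins - 1)
        bins[i].append(md / m)                      # mu = Mdot / M
    centres = [lo + (i + 0.5) * width for i in range(n_bins)]
    means = [sum(b) / len(b) for b in bins if b]
    centres = [c for c, b in zip(centres, bins) if b]
    return centres, means

def loglog_slope(log_x, y):
    """Least-squares slope of log10(y) against log10(x): <mu> ~ M^slope."""
    log_y = [math.log10(v) for v in y]
    n = len(log_x)
    mx, my = sum(log_x) / n, sum(log_y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(log_x, log_y))
    den = sum((a - mx) ** 2 for a in log_x)
    return num / den

# Synthetic accretors obeying mu = 0.05 * (M / 1e11)^0.2 Gyr^-1 exactly
masses = [10 ** (10.5 + 0.01 * i) for i in range(300)]
mdots = [0.05 * (m / 1e11) ** 0.2 * m for m in masses]
centres, means = mean_mu_in_mass_bins(masses, mdots)
print(round(loglog_slope(centres, means), 2))  # → 0.2, recovering the input slope
```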
The enhanced accretion onto subhalos can be understood by examining their mutual clustering and the relative velocity of their progenitors compared to their internal velocity, and both of these processes are discussed in the following sections. ### The clustering of halos and subhalos {#cluster_section} The main aim of this study is to investigate whether there is a relationship between the rate at which objects accrete mass and their environment and so in this section the clustering properties of halos and subhalos at different redshifts are examined. In the following section we specifically target accretors in different cluster-scale environments. Fig.\[xi\_plots\] shows the two-point correlation function, $\xi$, for the MSM accretors from the simulation as a function of the physical separation distance $r$, at the same three redshifts shown in Figs. \[msm-hop-plots\] and \[accn-components\] and at a much higher redshift of $\sim2$. The [@Landy] $\hat{w}_{4}$ estimator was used to compute $\xi$, requiring random catalogues for each redshift. Our catalogues sampled $300000$ objects at each redshift and were hence larger than the corresponding total number of detected halos and subhalos ($z=2.03:156120; z=0.49:211537; z=0.23:216232; z=0:223781$). For each panel in Fig.\[xi\_plots\], the solid lines represent the halo-halo pairs, the dotted lines represent the halo-subhalo pairs and the dashed lines represent the subhalo-subhalo pairs. Only the clustering of bound accretors was measured: halo-subhalo pairs correspond to the clustering of all bound halo accretors with all bound subhalo accretors, for example. The vertical dashed lines show the average total diameter of an object at the redshift in question and represent an estimate of the resolution limit in $r$. Fig.\[xi\_plots\] demonstrates that subhalo-subhalo pairings are a factor of $\sim2$ more clustered than halo-halo pairings at large physical scales at low redshift. 
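The [@Landy] $\hat{w}_{4}$ estimator used above combines normalised data–data (DD), data–random (DR) and random–random (RR) pair counts as $\xi=(DD-2DR+RR)/RR$. A brute-force one-dimensional toy sketch (catalogue sizes and box length are hypothetical; real measurements are three-dimensional and use tree-based pair counting):

```python
import random

def pair_counts(a, b, r_lo, r_hi):
    """Count pairs with separation in [r_lo, r_hi); auto-pairs counted once."""
    auto = a is b
    n = 0
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if auto and j <= i:
                continue  # avoid double-counting within one catalogue
            if r_lo <= abs(x - y) < r_hi:
                n += 1
    return n

def xi_landy_szalay(data, rand, r_lo, r_hi):
    """Landy-Szalay estimator xi = (DD - 2*DR + RR) / RR, with each pair
    count normalised by its total number of available pairs."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, r_lo, r_hi) / (nd * (nd - 1) / 2)
    dr = pair_counts(data, rand, r_lo, r_hi) / (nd * nr)
    rr = pair_counts(rand, rand, r_lo, r_hi) / (nr * (nr - 1) / 2)
    return (dd - 2 * dr + rr) / rr

random.seed(1)
box = 100.0
rand_cat = [random.uniform(0, box) for _ in range(1000)]
data_cat = [random.uniform(0, box) for _ in range(400)]
xi = xi_landy_szalay(data_cat, rand_cat, 1.0, 5.0)
print(abs(xi) < 0.2)  # → True: xi consistent with zero for an unclustered catalogue
```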
This factor increases to $\sim10-15$ at lower separation scales: subhalos, by definition, reside within halos and so cluster more strongly at small scales. The drop-off in clustering amplitude at the lowest scales should be ignored as this occurs at scales that are below the estimated resolution limit. The subhalo-subhalo correlation function is the sum of two terms: the first describes the clustering of subhalos within the same host and the second describes the clustering of subhalos that belong to different hosts. For small separations, the subhalo-subhalo correlation function has a strong contribution from pairs of subhalos in the same host. The clustering of halo-halo pairings is lower at these scales because these scales approach the size of halos, and so it is less common to find two halos close to each other without one or both member(s) of the pair being a subhalo. At larger scales, subhalos belonging to different hosts contribute strongly to the subhalo-subhalo clustering strength. The clustering amplitudes of the three curves also evolve with redshift: the correlation length of the subhalo-subhalo curve increases by a factor $\sim3$ towards $z=0$, for example. This is probably because at lower redshift there are more dense clusters and more subhalos within a given host in the simulation, hence there is a stronger contribution to the subhalo-subhalo clustering amplitude than at higher redshift at the separation scales shown. ### Measuring the relative velocities between the accretors’ progenitors Having established that subhalos at sub-cluster scales are more clustered than halos, especially at small scales, we now examine the distributions of $\Delta v/v_{c}$, where $\Delta v$ represents the relative velocity between an accretor’s main father and one of its other progenitors, and $v_{c}$ is the accretor’s circular velocity. 
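One $\Delta v/v_{c}$ entry is computed per progenitor of each accretor other than its main father; a schematic sketch (the data layout and all numbers are hypothetical):

```python
import math

def speed(v):
    return math.sqrt(sum(c * c for c in v))

def dv_over_vc(accretor):
    """One Delta_v / v_c entry per progenitor k != main father j.
    'progenitors' may mix halo/subhalo fathers and accreted background
    particles; each counts as a separate relative-velocity event."""
    j_vel = accretor["main_father_velocity"]
    vc = accretor["circular_velocity"]
    return [speed([vk - vj for vk, vj in zip(p["velocity"], j_vel)]) / vc
            for p in accretor["progenitors"]]

# Hypothetical accretor: main father j, one father k, two background particles
accretor = {
    "main_father_velocity": (100.0, 0.0, 0.0),
    "circular_velocity": 200.0,
    "progenitors": [
        {"velocity": (100.0, 150.0, 0.0)},   # father k
        {"velocity": (300.0, 0.0, 0.0)},     # background particle m
        {"velocity": (100.0, 0.0, -100.0)},  # background particle n
    ],
}
print(dv_over_vc(accretor))  # → [0.75, 1.0, 0.5]
```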
If $\Delta v/v_{c}$ tends to be smaller, on average, for subhalo accretors than halo accretors for example, then accretion onto halos will tend to be more suppressed than accretion onto subhalos. Fig.\[vrel\_fig\] shows the distributions of this ratio for halos (thick lines) and subhalos (thin lines) at the same redshifts shown in Fig.\[accn-components\]. The $\Delta v/v_{c}$ ratio was computed for each progenitor $k$ (not equal to the main father $j$) of a given accretor: each particle accreted from the background was counted as an individual relative velocity event, as was each halo/subhalo progenitor. So if, for example, an accretor has a main father $j$, a father $k$, and also accretes two particles from the background, $m$ and $n$, then three separate relative velocities with respect to $j$ are computed for that accretor. The accretors were binned in mass, and the different halo and subhalo mass bins are shown by the ranges of $M_{H}$ and $M_{S}$ in Fig.\[vrel\_fig\], respectively. It can be seen from Fig.\[vrel\_fig\] that the distributions of $\Delta v/v_{c}$ for the halo and subhalo accretors are similar: they depend quite weakly on mass and their peaks coincide. ### Revisiting the enhanced accretion onto subhalos in Fig.\[accn-components\] It is well established that in simulations, after infall, subhalos experience mass loss via tidal stripping, tidal heating and disk shocking [@Gnedin; @Dekel03; @Taylor_Babul; @Onghia], and have a large velocity dispersion that scales with their host’s mass. Mass stripping from an object in this dark-matter-only study is recorded as zero accretion, and so one would perhaps expect subhalos to be accreting at low rates, on average. We have found, however, that most of the subhalo accretors in the simulation at $z<0.5$ reside in the outer regions of their host, with $\sim70\%$ located beyond their host’s virial radius. 
(The halo virial radius roughly corresponds to $r_{200}$, which encloses the region within which the halo density is at least $200$ times the critical density of the universe.) Most of these subhalos have therefore probably not been significantly stripped of their mass. In fact, we find the opposite trend in Fig.\[accn-components\]: subhalos of a given mass in the simulation have a larger rate of accretion, on average, than halos of the same mass. Having demonstrated that there is no significant difference between the halo and subhalo accretor distributions of $\Delta v/v_{c}$, we conclude that the enhanced subhalo accretion rates are driven by the very frequent interactions between subhalos of the same host at small scales (Fig.\[xi\_plots\]). Halos are less clustered at small scales and so accrete at lower rates, on average. ![image](./environment_plot.eps) ![image](./percival_xi.eps) Halo and Subhalo environment {#envir_section} ---------------------------- In this section we specifically target the effect an object’s environment at cluster scales has on the rate at which it accretes mass. There are two popular, independent measures of environment in the literature: the overdensity $\delta_{R}(\textbf{x})$ in a sphere of radius $R$ [@Lemson; @Wang] and halo bias [@Sheth04; @Gao07]. We adopt two similar measures of an object’s environment: the first defines an environment mass within a cluster-sized sphere and the second uses the two-point correlation function. ### Environment mass {#envirmass} We have defined the environment of a halo and a subhalo as the total mass, $M_{E}$, contained within a sphere of radius $R$ centred on the object of interest. $M_{E}$ includes the mass of all those objects whose centres lie within the sphere as well as the mass of the object the sphere is centred on.
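The environment-mass definition above amounts to a sphere query over object centres; a brute-force sketch (catalogue format and values are hypothetical; for catalogues of this simulation's size a tree-based neighbour search would be preferable):

```python
import math

def environment_mass(centre_obj, catalogue, radius):
    """M_E: total mass of all objects whose centres lie within `radius`
    of `centre_obj`, including the central object itself."""
    m_e = 0.0
    for obj in catalogue:
        if math.dist(centre_obj["pos"], obj["pos"]) <= radius:
            m_e += obj["mass"]
    return m_e

# Hypothetical catalogue: masses in M_sun, positions in h^-1 Mpc
catalogue = [
    {"pos": (0.0, 0.0, 0.0), "mass": 1e12},  # the object of interest
    {"pos": (1.0, 0.0, 0.0), "mass": 5e11},  # inside R = 1.46
    {"pos": (0.0, 2.0, 0.0), "mass": 3e11},  # outside R = 1.46
]
print(f"{environment_mass(catalogue[0], catalogue, 1.46):.2e}")  # → 1.50e+12
```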
We consider spheres of radii $R=1.46h^{-1}$Mpc and $R=3.65h^{-1}$Mpc because a) these scales represent both typical clusters and much larger clusters and b) various authors have found that the dependence of some halo properties on environment, such as halo formation redshift, are sensitive to the choice of sphere radius [@Lemson; @Harker; @Hahn]. Both of these environment mass definitions are applied to each bound accretor at the redshift under consideration, with only bound accretors having a recorded $M_{E}$ value. Unbound objects and resolved objects with $M\leq40M_{p}$ are not, however, excluded from the sample as these objects could be part of a bound object’s environment. The first row of Fig.\[environment\_plots\] plots the specific accretion rate onto halos and subhalos as a function of average object mass ($M$) and average environment mass ($M_{E}$) for $z=0.49$ (first column), $z=0.23$ (second column) and $z=0.01$ (third column) using a sphere radius of $1.46h^{-1}$Mpc. The second row of Fig.\[environment\_plots\] shows the results using a larger sphere radius of $3.65h^{-1}$Mpc at the same three redshifts. The solid lines represent the environment mass bins which from bottom to top for the first row are: $M_{E}<10^{11.5}M_{\odot}$, $10^{11.5}M_{\odot}\leq M_{E}<10^{12.5}M_{\odot}$ and $10^{12.5}M_{\odot}\leq M_{E}< 10^{13.5}M_{\odot}$. The triple-dot-dashed line shows the largest environment mass bin of $10^{13.5}M_{\odot}\leq M_{E}<10^{14.5}M_{\odot}$. For the larger scale environments in the second row (from bottom to top): $M_{E}<10^{12.5}M_{\odot}$, $10^{12.5}M_{\odot}\leq M_{E}<10^{13.5}M_{\odot}$ (solid lines) and $10^{13.5}M_{\odot}\leq M_{E}<10^{14.5}M_{\odot}$ (triple-dot-dashed lines). The vertical arrow shows the direction of increasing environment for all panels, with the exception of the largest environment mass bins in the first row, which mostly lie beneath the second largest environment bins. 
The stars in each panel represent the accretion onto MSM halos and subhalos independent of their environment and the squares joined by solid lines show the EPS results. The relationships found in the previous sections are preserved in Fig.\[environment\_plots\]: the specific accretion rate increases with object mass for objects in most environments and decreases towards $z=0$ (as was shown in Fig.\[accn-components\]), and EPS consistently underestimates the mass accreted onto all object masses (as was shown for halos at $z<1$ in Fig.\[halo\_accn\]). The most striking feature of Fig.\[environment\_plots\], however, is that objects of a given mass residing in more massive environments do not accrete at a particularly enhanced rate compared with objects of the same mass in much lower mass environments. This suggests that the specific accretion rate onto halos and subhalos does not depend strongly on environment at cluster scales. Objects in cluster mass environments shown in the first row (triple-dot-dashed lines) mostly accrete less mass than in lower mass environments, but the number of objects in cluster mass surroundings is limited by the choice of sphere radius. This effect is not seen for the larger-scale environments shown in the second row, for example, where merging between subhalos on the outskirts of the host halo is probably driving accretion (but only at a slightly higher overall rate). The second row shows that the specific accretion rate only depends weakly on environment at larger scales that probe the outermost regions of clusters. This weak environment dependence in rows $1$ and $2$ therefore suggests that the increased interaction rates of halos in group- and cluster-mass environments are not large enough to overcome the large halo relative velocities, resulting in only a modest net increase in accretion.
Halos dominate the accretion signals in Fig.\[environment\_plots\], but we find the same trends at each of the chosen redshifts when just subhalos are plotted as a function of their mass and environment mass. There are two differences, however: the subhalos a) accrete at higher rates and b) reside only in larger mass environments. The subhalo curves have been omitted in Fig.\[environment\_plots\] for clarity. Other authors have quantified environment by computing the overdensity $\delta_{R}$ in a sphere of radius $R$, rather than the mass [@Lemson; @Harker; @Hahn; @Fakhouri; @Fakhouri_diffuse]. We therefore calculated a weighted environment density for each halo and subhalo accretor by using the standard SPH cubic spline window function [@Monaghan], which weights the mass contributions from objects close to the centre of the sphere more strongly than those further away. [@Fakhouri] showed that for halos more massive than $10^{14}M_{\odot}$, the density of the object the sphere is centred on starts to dominate the contributions to $\delta_{R}$, so the central object’s contribution was both included and excluded in two separate weighted environment density measures. When binned in environment density, the same weak environment dependence as in Fig.\[environment\_plots\] was found in both cases. ### Clustering in different accretion schemes In this section we use the correlation function as an alternative to the environment-mass measure of Section \[envirmass\], except we do not restrict our analysis to just cluster scales of a few Mpc. We consider samples of objects with very similar masses at different redshifts and examine whether objects of a given mass which accrete at larger rates have a larger clustering amplitude. This also tests the work by [@Percival], who found that at $z=2$ halos of a given mass accreting at different rates do not cluster differently.
The $z=2.03$ panel in Fig.\[xi\_percival\_plot\] shows the correlation function for all those objects whose mass satisfies $10^{10.6}M_{\odot}\leq M<10^{10.9}M_{\odot}$ with $\mu<0.35$Gyr$^{-1}$ (solid), $0.35$Gyr$^{-1}\leq\mu<0.6$Gyr$^{-1}$ (dotted) and $\mu\geq0.6$Gyr$^{-1}$ (dashed). The lower redshift panels show the correlation function for objects whose mass satisfies $10^{11}M_{\odot}\leq M< 10^{11.3}M_{\odot}$ with $\mu<0.1$Gyr$^{-1}$ (solid), $0.1$Gyr$^{-1}\leq\mu<0.2$Gyr$^{-1}$ (dotted) and $\mu\geq0.2$Gyr$^{-1}$ (dashed). The mass interval for $z<0.5$ has been chosen because it lies below the break mass, $M_{\star}$, in the mass function at these redshifts and so we do not bias $\mu$. For comparison, the mass interval in the $z\sim2$ panel lies closer to $M_{\star}$. The vertical dashed lines represent an estimate of the resolution limit in the separation scale (same as the vertical dashed lines in Fig.\[xi\_plots\]). At well resolved non-linear small scales for $z<0.5$, objects with high specific accretion rates are up to a factor of $\sim3$ more clustered than the lower accreting objects, whereas at larger linear scales the difference in clustering between different accretors is much smaller. For the cluster-scale environments of the first row of Fig.\[environment\_plots\], corresponding to an $r$ value of $2.92h^{-1}$Mpc, there is a weak environment dependence, with objects of larger $\mu$ being slightly more clustered. Fig.\[xi\_percival\_plot\] therefore provides further evidence that the mass accreted onto halos and subhalos of a given mass weakly depends on their environment at cluster scales. In contrast to the $z<0.5$ behaviour, there is very little difference in clustering between different accretors with $10^{10.6}M_{\odot}\leq M<10^{10.9}M_{\odot}$ at $z\sim2$ and this holds for both the linear and non-linear scales shown. 
We therefore agree with the conclusions of [@Percival] at $z\sim2$ but show that they break down at $z<0.5$, where there is a larger difference in clustering between high accretors and low accretors of a given mass at all scales. Discussion {#discussion_section} ========== Disagreement with EPS theory {#EPS_section} ---------------------------- Despite its success at reproducing the dark halo mass function in simulations, we find that the analytic EPS calculation shows significant departures from the halo accretion rates found in our simulation at both low and high redshift (Fig.\[halo\_accn\]). This simulation study, however, is not the first to report disagreement with EPS theory at high redshift: [@Cohn08] examined the accretion onto halos of mass $M_{H}=5-8\times10^{8}h^{-1}$M$_{\odot}$ at $z=10$ and found that EPS overestimated the halo accretion rate by a factor $\sim 1.5$ (using a lookback time of $50$ Myrs). Fig.\[halo\_accn\] shows a similar behaviour, with EPS overpredicting the accretion rate onto halos of mass $M_{H}\sim10^{10.7}M_{\odot}$ by a factor of $\sim2$ at $z=8$. One might expect EPS to overestimate accretion onto halos at high redshift because it assumes that collapse is spherical and that the density barrier is fixed in height [@Lacey] whereas it has been shown that allowing for ellipsoidal collapse and treating the critical density contrast for collapse as a free parameter better reproduces the N-body halo mass function [@Sheth01; @Sheth02]. This modification reduces the critical density contrast for collapse by a factor of $\sqrt{0.7}$ (M06) which reduces $f(M_{H})$ in equation (\[eMill\]) by the same factor, causing a slight shift in the dashed curves in Fig.\[halo\_accn\] but otherwise having no effect on the redshift or mass dependence. 
The disagreement might arise because EPS theory is only approximate: (i) it assumes spherical collapse, whereas halos in dark-matter-only simulations are triaxial; (ii) it contains no dynamical information, and so is unable, for example, to account for mass being stripped from one halo and then being accreted onto another; (iii) it cannot account for accretion onto substructures; and (iv) it averages over halo environment. The last two restrictions are particularly problematic in the non-linear regime at $z<1$, when accretion onto structures embedded within clusters is of interest (Fig.\[environment\_plots\]). Recent attempts to incorporate an environment dependence into the EPS excursion set theory [@Maultbetsch; @Sandvik; @Zentner; @Desjacques] could modify equation (\[eMill\]), which might result in better agreement with our simulation results for $z<1$ in Fig.\[environment\_plots\]. [@Benson05] highlighted further weaknesses with the EPS formalism that could also account for the offset in Figs. \[halo\_accn\] and \[environment\_plots\]. They showed that the [@Lacey] EPS formula yields merger rates that are not symmetric under exchange of halo masses, and which do not predict the correct evolution of the Press-Schechter mass distribution, indicating that constructed EPS merger trees are fundamentally flawed. Despite these limitations, the gradients of the EPS curves in Fig.\[halo\_accn\] are only slightly steeper than the corresponding simulation curves. This implies that the [@Lacey] EPS formalism may only require minor adjustment to agree more closely with the simulation trajectories across mass and redshift. The weak relationship between accretion rate and environment at cluster scales ------------------------------------------------------------------------------ By quantifying accretion onto substructures embedded in groups and clusters, we have moved beyond the limited predictive power of the EPS formalism.
Fig.\[accn-components\] demonstrates that subhalos accrete at larger rates than halos of the same mass, on average, in the simulation (by a factor of $\sim3$ for the lowest mass subhalos at $z=0$). At first glance this appears to contradict recent claims: [@Angulo] and [@Hester], for example, have shown that subhalo-subhalo mergers are rare and that subhalos are severely stripped of mass, which suggests that the accretion rates onto their subhalos should be low. The subhalo accretors at low redshift in this study, however, form a subsample of subhalos that are safely above the mass resolution limit and that are mostly located at large distances from their host’s centre, with $\sim70\%$ residing beyond their host’s virial radius. These subhalos are therefore probably not being significantly stripped of mass, unlike the subhalos in recent studies. The mass-selected nature of our subhalo accretors and the different spatial distribution within their host are therefore the most likely causes of the apparent accretion rate discrepancy with the studies mentioned above. We have further shown that the subhalo accretors in this study are more clustered than the halo accretors at small scales (Fig.\[xi\_plots\]) and that there is no significant difference between the distributions of $\Delta v/v_{c}$, where $\Delta v$ is the relative velocity between an accretor’s main father and one of its other progenitors, and $v_{c}$ is the accretor’s circular velocity. The high subhalo accretion rates are therefore likely to be driven by the very frequent interactions at small scales with other subhalos of the same host.
One might expect the accretion rate onto halos and subhalos to depend strongly on environment at larger, cluster-sized scales given the increased rate of interactions in dense environments, but only a weak dependence is found (Figs. \[environment\_plots\] and \[xi\_percival\_plot\]). The subhalo accretors reside in only the most massive environments and probably accrete mostly locally from their nearby subhalo neighbours rather than their host, which may explain their weak relationship between accretion rate and environment. One likely explanation for halos is that the increased interaction rates of halos in group- and cluster-mass environments are not large enough to overcome the large halo relative velocities, resulting in only a modest net increase in accretion at cluster scales. [@Fakhouri_diffuse] examined the environment dependence of accretion onto high mass halos ($M_{H}>10^{12}M_{\odot}$) from the Millennium simulation and found a weak, negative correlation for galaxy-mass halos. We find a weak but positive dependence for all object masses in Fig.\[environment\_plots\]. Our analysis, which extends theirs by accounting for accretion onto substructures, has a different expression for the mass accretion rate but we have found little difference in the results obtained from using the two expressions for bound objects. The obvious source of the discrepancy is therefore the method used to identify anomalies. [@Genel09] have highlighted some fundamental problems with the ‘stitching’ algorithm used by [@Fakhouri_diffuse] to remove anomalous events, demonstrating that it can lead to a double counting of mergers and to a false counting of anomalous events as mergers. They show that this overestimation of the merger rate is particularly problematic for minor-mergers.
Predicting the effects that overestimating the merger-rate has on the accretion rate, and how this varies as a function of environment, is not trivial, but differences between the anomalous event detection methods could explain the difference in the sign of the trend between accretion rate and environment. The $z=2.03$ panel in Fig.\[xi\_percival\_plot\] reveals that at higher redshift, when halos far outnumber subhalos in the simulation, the rate of accretion onto halos is independent of environment, confirming the [@Percival] result. The [@Percival] study examined the difference in clustering at $z=2$ between halos of a given mass accreting at different rates. They considered several mass intervals ranging from $10^{10.3}M_{\odot}\leq M_{H}\leq10^{10.4}M_{\odot}$ to $10^{13.3}M_{\odot}\leq M_{H}\leq10^{13.6}M_{\odot}$ and concluded for each mass interval that halo accretion rates do not depend on environment at this redshift. We suggest that this apparent lack of environment dependence arises because the halos in the [@Percival] study, and to a lesser extent the halos considered in the $z\sim2$ panel in Fig.\[xi\_percival\_plot\], represent some of the most massive objects at $z\sim2$ and hence have bias factors $b>1$ [@Sheth99]. These structures are located at the highest peaks in the density field, so by computing the clustering amplitude of these objects one is essentially measuring the clustering pattern of the highest density peaks at this redshift. It is therefore unlikely that the highest mass halos experiencing different instantaneous accretion rates differ in their clustering. By contrast, the lower mass halos and subhalos in the $z<1$ panels are less biased and so more closely track the clustering of the underlying mass distribution.
Comparing dark halo growth with black hole growth ------------------------------------------------- Under the assumption that, on average, black hole growth traces dark halo growth (so-called “pure coeval evolution”), M06 tested the predictions of equation (\[eMill\]) for the evolution of the integrated AGN luminosity density for $z\leq3$. The coeval evolution model tests the hypothesis that the fractional mass accretion rate onto black holes and onto halos are equal (i.e. $\dot{M}/M$ is the same for both black holes and halos), and is consistent with the tight relation inferred between black hole mass and galaxy bulge mass [@Tremaine02 but see @Batcheldor10 for an alternative interpretation], and is easy to test. M06 found the predicted integrated AGN luminosity density to be in remarkable agreement with the bolometric AGN luminosity density measured using hard X-ray data. They also found that for $z>0.5$ average black hole growth is well approximated by pure coeval evolution, but for $z<0.5$ the black hole luminosity density tails off more quickly than dark halo growth, and by $z=0$ is lower by a factor of $\sim 2$. They suggested that this slowdown in black hole accretion could be related to cosmic downsizing [e.g. @Barger]. Their predictions for dark halo growth were, however, based on EPS theory. The simulation trajectories in Fig.\[halo\_accn\] show that EPS underestimates halo accretion for $z<1$, and at $z=0$ is a factor of $\sim1.5-2$ lower for all halo masses. This implies that present day dark halos could be accreting at fractional rates that are up to $\sim3-4$ times higher than their associated black holes. However, for $1<z<3$, the simulated dark halo accretion trajectories in Fig.\[halo\_accn\] are reasonably well approximated by EPS. We therefore suggest the following scenario: for $1<z<3$ black holes grow coevally with their dark hosts but for $z<1$, the epoch of cluster formation, their growth significantly decouples from that of their hosts. 
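The $\sim3-4$ factor quoted above simply compounds the M06 black-hole shortfall relative to pure coeval evolution with the EPS halo-accretion shortfall found here; a quick arithmetic check (the two input factors are taken from the text above):

```python
# M06: z=0 black-hole growth is a factor ~2 below the coeval (EPS-based) prediction.
# This work: EPS underestimates z=0 halo accretion by a factor ~1.5-2.
bh_shortfall = 2.0
eps_shortfall_range = (1.5, 2.0)

# Halo fractional growth relative to black-hole fractional growth at z=0
combined = tuple(bh_shortfall * f for f in eps_shortfall_range)
print(combined)  # → (3.0, 4.0), i.e. the quoted factor of ~3-4
```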
It is still plausible that this decoupling is linked to the inference that high mass black holes preferentially “turn off” at low redshifts, leaving the remaining accretion activity dominated by low mass black holes [@Heckman]. The cause of such downsizing is often assumed to be connected to the physics of the baryon component. Our study reinforces this assumption: if downsizing were a “whole halo” phenomenon it would be manifest in our dark-matter-only simulation, and its absence in our results confirms that we should seek an explanation in the baryons. Is halo accretion via minor-mergers and diffuse accretion the cause of radio-mode feedback? {#Croton_section} ------------------------------------------------------------------------------------------- A number of authors have developed semi-analytic models of galaxy-formation that are tuned to reproduce the galaxy luminosity function at low redshift (e.g. @Bower06; C06; @DeLucia06). A key ingredient of these models is a low level of feedback from black hole accretion that arises in all galaxies and which increases in importance towards low redshifts. The feedback mechanism has still not been identified: luminous, high accretion-rate AGN only form a small subset of the galaxy population at low redshift and seem unlikely to provide the required feedback in all galaxies. @Bower06 required black holes to have relatively high accretion Eddington ratios, which may be inconsistent with observations: it seems that the accretion and an associated outflow need to be hidden from view in a so-called “radio-mode”. C06 have assumed that such a mode could be fuelled by Bondi accretion from the hot gas phase of their model, but the observational evidence for such a mechanism has not been demonstrated either. 
The survey of [@Ho97] revealed that a high fraction of nearby galaxies (over $40\%$, rising to $50\%-75\%$ among bulge systems) host low luminosity AGN (LLAGN), with the majority of LLAGN accreting at highly sub-Eddington rates in the range $10^{-5}<L_{bol}/L_{Edd}<10^{-3}$. @Ho05 argued that these are systems where accretion occurs via a radiatively-inefficient advection-dominated accretion flow (ADAF). The accretion flow puffs up the inner disk and material is advected towards the black hole [@Narayan; @Ho02; @Ho08], with outflow being channelled along kinetic-energy-dominated jets [@Collin; @Ho05; @Ho08]. This finding leads us to suggest that LLAGN, fuelled by low accretion rate ADAFs, may provide the radio-mode feedback. In our dark-matter-only study, the integrated minor-merger and diffuse halo accretion rate density curves in Fig.\[dm\_by\_m\] increase in importance towards the present day for all halo masses. This qualitatively agrees with the cosmological evolution of the black hole radio-mode integrated accretion signal found for each of the different semi-analytic models (@Bower06; C06). We suggest that the periods when galaxy halo growth is dominated by low accretion rate minor-mergers and diffuse accretion events are mirrored by low accretion rates onto their associated black holes, and that those in turn produce the LLAGN that may be the radio-mode required for the feedback models. The integrated accretion rate density onto black holes residing in galaxy-mass halos that are accreting diffusely and via minor-mergers at $z=0$ is also very similar to the integrated accretion rate density onto black holes residing in similar sized halos found by C06, who argue that radio-mode feedback is more effective in more massive systems.
Our estimate for the total black hole accretion rate density tests the hypothesis that for black holes with mass $M_{BH}$ residing in halos with mass $M_{H}$, $$\label{radio-mode-estimate} \sum_{i}\dot{M}_{i,BH}(z=0)\sim\alpha\frac{M_{BH}}{M_{H}}\sum_{i}\dot{M}_{i,H}(z=0)$$ where $\alpha$ describes the non-linearity in the black hole - dark halo mass relation and the index $i$ sums over all galaxy-mass dark halos and all black holes residing in these halos. Equation (\[radio-mode-estimate\]) assumes that black hole growth positively traces dark halo growth, on average (recent claims by @Kormendy11, however, argue that for bulgeless galaxies there is no such correlation between black holes and their dark hosts, but the interpretation of this as meaning that there is no such relation for all galaxies has been clearly refuted by @Volonteri11. In what follows we do not address the reliability of the assumption in equation (\[radio-mode-estimate\]) but rather test its prediction for black hole growth). [@Ferrarese02] found that $\alpha=1.65$ and that galaxy-mass halos with $M_{H}\sim10^{12}M_{\odot}$ have a black hole - dark halo mass ratio of $\sim 10^{-5}$. According to Fig.\[dm\_by\_m\] these halos with $\delta M_{H}/M_{H}\leq0.02$ have a total accretion rate density of $\sim 7.6\times10^{7}M_{\odot}$Gyr$^{-1}$Mpc$^{-3}$ at $z=0$, which when substituted into equation (\[radio-mode-estimate\]) yields a total black hole accretion rate density of $\sim10^{-5.9}M_{\odot}$yr$^{-1}$Mpc$^{-3}$. This is very similar to the integrated accretion rate density of $\sim10^{-5.8}M_{\odot}$yr$^{-1}$Mpc$^{-3}$ onto supermassive black holes at $z=0$ reported by C06. The $\delta$ parameter ($\equiv\delta M_{H}/M_{H}$) is a free parameter in our model, but we have found that adopting the more classical progenitor mass ratio, $\chi$, to distinguish between merger type yields almost identical results to Fig.\[dm\_by\_m\]. 
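The arithmetic of this estimate is easy to reproduce. The short script below is our own sanity check (variable names are ours, chosen for readability); it substitutes the quoted values into equation (\[radio-mode-estimate\]):

```python
import math

# Quoted inputs: alpha and the mass ratio from Ferrarese (2002),
# the halo accretion rate density read off Fig. [dm_by_m] at z = 0.
alpha = 1.65                    # non-linearity of the M_BH - M_H relation
bh_to_halo_mass = 1e-5          # M_BH / M_H for ~1e12 M_sun halos
halo_rate = 7.6e7               # accretion rate density [M_sun Gyr^-1 Mpc^-3]

halo_rate_yr = halo_rate / 1e9  # convert to M_sun yr^-1 Mpc^-3

# Equation (radio-mode-estimate): black hole accretion rate density at z = 0
bh_rate = alpha * bh_to_halo_mass * halo_rate_yr
print(f"log10(BH accretion rate density) = {math.log10(bh_rate):.1f}")
# about -5.9, to be compared with the value of about -5.8 reported by C06
```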
This provides confirmation that our $\delta$ cuts are indeed capable of separating minor- and major- merger channels. The $\delta$ parameter is therefore probably no more poorly constrained than $\chi$. We conclude that the low rates of accretion onto dark halos, driven by minor-mergers and diffuse accretion, may provide an alternative explanation to that proposed by C06 for the radio-mode feedback needed to reproduce the observed galaxy luminosity function. The low redshift feedback phenomenon and its cosmological evolution may be driven by the cosmological evolution of halo minor-mergers and diffuse accretion rather than requiring accretion out of a hot gas phase. Conclusions {#conclusions_section} =========== Outputs from one of the high resolution dark-matter-only Horizon Project simulations have been used to investigate the environment and redshift dependence of accretion onto both halos and subhalos. We have developed a method that computes the combined merger- and diffuse- driven accretion onto halos and all levels of substructure, and find that: - Halo accretion rates vary less strongly with redshift than predicted by the EPS formalism. This offset in gradient for each halo mass curve implies that a minor adjustment to the EPS formula may be required. - Comparison with an observational study of black hole growth leads us to suggest that dark halos at $z=0$ could be accreting at fractional rates that are up to $3-4$ times higher than those of their black holes. - Halo growth is driven by minor-mergers and diffuse accretion at low redshift. These latter accretion modes have both the correct cosmological evolution and the inferred integrated black hole accretion rate density at $z=0$ to drive radio-mode feedback, which has been hypothesised in recent semi-analytic galaxy-formation models as the feedback required to reproduce the galaxy luminosity function at low redshift.
Radio-mode feedback may therefore be driven by dark halo minor-mergers and diffuse accretion, rather than accretion of hot gas onto black holes, as has been recently argued. - The low redshift subhalo accretors in the simulation form a mass-selected subsample safely above the mass resolution limit and mostly reside in the outer regions of their host, with $\sim 70\%$ beyond their host’s virial radius, and are therefore probably not being significantly stripped of mass. These subhalos accrete at higher rates than halos, on average, at low redshifts. We demonstrate that this is due to their enhanced mutual clustering at small scales: there is no significant difference between the halo and subhalo accretor distributions of $\Delta v/v_{c}$, where $\Delta v$ represents the relative velocity between an accretor’s main father and one of its other progenitors, and $v_c$ is the accretor’s circular velocity. The very frequent interactions with other subhalos of the same host drive the high subhalo accretion rates. - Accretion rates onto halos and subhalos depend only weakly on environment at cluster scales. For halos, it appears that the increased interaction rates in group- and cluster- mass environments are not large enough to overcome the large halo relative velocities, resulting in only a modest net increase in accretion at cluster scales. The subhalo accretors only reside in the densest environments and they are likely to be accreting mostly from their nearby subhalo neighbours, rather than from their host. We further demonstrate that halos accrete independently of their environment at $z\sim2$, as has been found by other authors, but show that this behaviour results from examining the clustering of the most massive halos, which have large bias factors. When less massive halos below $M_{\star}$ at low redshift are considered, a weak dependence of accretion rate on environment at cluster scales arises.
Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to the Horizon Project team for providing the simulation outputs and to the anonymous referee whose insightful comments have helped improve the quality of this paper. The research of JD is partly funded by Adrian Beecroft, the Oxford Martin School and the STFC. HT is grateful to the STFC for financial support. \[lastpage\] [^1]: email: [email protected] [^2]: http://www.projet-horizon.fr
--- abstract: | We consider the 1D viscous Burgers equation with a control localised in a finite interval. It is proved that, for any ${\varepsilon}>0$, one can find a time $T$ of order $\log{\varepsilon}^{-1}$ such that any initial state can be steered to the ${\varepsilon}$-neighbourhood of a given trajectory at time $T$. This property combined with an earlier result on local exact controllability shows that the Burgers equation is globally exactly controllable to trajectories in a finite time. We also prove that the approximate controllability to arbitrary targets does not hold even if we allow infinite time of control. [**AMS subject classifications:**]{} 35L65, 35Q93, 93C20 [**Keywords:**]{} Burgers equation, exponential stabilisation, localised control, Harnack inequality author: - 'Armen Shirikyan[^1]' title: Global exponential stabilisation for the Burgers equation with localised control --- Introduction {#s1} ============ Let us consider the controlled Burgers equation on the interval $I=(0,1)$ with the Dirichlet boundary condition: $$\begin{aligned} {\partial}_tu-\nu{\partial}_x^2u+u{\partial}_xu&=h(t,x)+\zeta(t,x), \label{1}\\ u(t,0)=u(t,1)&=0. \label{2}\end{aligned}$$ Here $u=u(t,x)$ is an unknown function, $\nu>0$ is a parameter, $h$ is a fixed function, and $\zeta$ is a control that is assumed to be localised in an interval $[a,b]\subset I$. As is known, the initial-boundary value problem for  is well posed. Namely, if $h\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,L^2(I))$ and $\zeta\equiv0$, then, for any $u_0\in L^2(I)$, problem , has a unique solution $u(t,x)$ that belongs to the space $${{\cal X}}=\{u\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H_0^1(I)): {\partial}_tu\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H^{-1}(I))\}$$ and satisfies the initial condition $$\label{3} u(0,x)=u_0(x);$$ see the end of this Introduction for the definition of functional spaces. 
Let us denote by ${{\cal R}}_t(u_0,h)$ the mapping that takes the pair $(u_0,h)$ to the solution $u(t)$ (with $\zeta\equiv0$). We wish to study the problem of controllability for . This question has received a great deal of attention in the last twenty years, and we now recall some achievements related to our paper. One of the first results was obtained by Fursikov and Imanuvilov [@FI-1995; @FI1996]. They established the following two properties: the local exact controllability to trajectories and the lack of global approximate controllability, in the sense that there are targets $\hat u$ and a number $R>0$ such that, for any control $\zeta$, $$\label{0.4} \|{{\cal R}}_T(u_0,h+\zeta)-\hat u\|\ge R.$$ These results were extended and developed in many works. In particular, Glass and Guerrero [@GG-2007] and [@leautaud-2012] proved global exact boundary controllability to constant states, Coron [@coron-2007] and Fernández-Cara–Guerrero [@FG-2007] established some estimates for the time and cost of control, and Chapouly [@chapouly-2009] (see also Marbach [@marbach-2014]) proved global exact controllability to trajectories with two boundary and one distributed controls, provided that $h\equiv0$. A large number of works were devoted to the investigation of similar questions for other, more complicated equations of fluid mechanics; see the references in [@fursikov2000; @coron2007]. In view of the above-mentioned properties, two natural questions arise: - does the [*exact controllability to trajectories*]{} hold for arbitrary initial conditions and nonzero right-hand sides? - does the [*approximate controllability*]{} hold if we allow a sufficiently large time of control? It turns out that the answer to the first question is positive, provided that the time of control is sufficiently large, whereas the answer to the second question is negative. Namely, the main results of this paper combined with the above-mentioned property of local exact controllability to trajectories imply the following theorem.[^2] Let us mention that the result about exact controllability to trajectories remains valid for a much larger class of scalar conservation laws in higher dimension.
This question will be addressed in a subsequent publication. The rest of the paper is organised as follows. In Section \[s2\], we formulate a result on exponential stabilisation to trajectories, outline the scheme of its proof, and derive assertion (a) of the Main Theorem. Section \[s3\] is devoted to some preliminaries about the Burgers equation. In Section \[s4\], we present the details of the proof of exponential stabilisation and establish property (b) of the Main Theorem. Finally, the Appendix gathers the proofs of some auxiliary results. This research was carried out within the MME-DII Center of Excellence (ANR-11-LABX-0023-01) and supported by the RSF grant 14-49-00079. ### Notation {#notation .unnumbered} Let $I=(0,1)$, $J_T=[0,T]$, ${{\mathbb R}}_+=[0,+\infty)$, and $D_T=(T,T+1)\times I$. We use the following function spaces. $L^p(D)$ and $H^s(D)$ are the usual Lebesgue and Sobolev spaces, endowed with natural norms $\|\cdot\|_{L^p}$ and $\|\cdot\|_s$, respectively. In the case $p=2$ (or $s=0$), we write $\|\cdot\|$ and denote by $(\cdot,\cdot)$ the corresponding scalar product. $C^\gamma(D)$ denotes the space of Hölder-continuous functions with exponent $\gamma\in(0,1)$. $H_{\mathrm{loc}}^s(D)$ is the space of functions $f\!:D\to{{\mathbb R}}$ whose restriction to any bounded open subset $D'\subset D$ belongs to $H^s(D')$. $H_{\mathrm{ul}}^s({{\mathbb R}}_+\times I)$ stands for the space of functions $u\in H_{\mathrm{loc}}^s({{\mathbb R}}_+\times I)$ satisfying the condition $$\|u\|_{H_{\mathrm{ul}}^s}:=\sup_{T\ge0}\|u\|_{H^s(D_T)}<\infty.$$ Very often, the context implies the domain on which a functional space is defined, and in this case we omit it from the notation. For instance, we write $L^2$, $H^s$, etc.
$L^p(J,X)$ is the space of Borel-measurable functions $f:J\to X$ (where $J\subset {{\mathbb R}}$ is a closed interval and $X$ is a separable Banach space) such that $$\|f\|_{L^p(J,X)}=\biggl(\int_J\|f(t)\|_X^p{{\textup d}}t\biggr)^{1/p}<\infty.$$ In the case $p=\infty$, this condition should be replaced by $$\|f\|_{L^\infty(J,X)}={\mathop{\rm ess\,sup}\limits}_{t\in J}\|f(t)\|_X<\infty.$$ $W^{k,p}(J,X)$ is the space of functions $f\in L^p(J,X)$ such that ${\partial}_t^j f\in L^p(J,X)$ for $1\le j\le k$, and if $J$ is unbounded, then $W_{\mathrm{loc}}^{k,p}(J,X)$ is the space of functions whose restriction to any bounded interval $J'\subset J$ belongs to $W^{k,p}(J',X)$. $C(J,X)$ is the space of continuous functions $f:J\to X$. $B_X(a,R)$ denotes the closed ball in $X$ of radius $R\ge0$ centred at $a\in X$. In the case $a=0$, we write $B_X(R)$. Exponential stabilisation to trajectories {#s2} ========================================= Let us consider problem , , in which $\nu>0$ is a fixed parameter, $h(t,x)$ is a given function belonging to $H_{\mathrm{ul}}^1\cap L^\infty$ on the domain $I\times{{\mathbb R}}_+$, and $\zeta$ is a control taking values in the space of functions in $L^2(I)$ with support in a given interval $[a,b]\subset I$. Recall that ${{\cal R}}_t(u_0,h+\zeta)$ stands for the value of the solution for – at time $t$. The following theorem is the main result of this paper. \[t2.1\] Under the above hypotheses, there exist positive numbers $C$ and $\gamma$ such that, given arbitrary initial data $u_0, \hat u_0\in L^2(I)$, one can find a piecewise continuous control $\zeta:{{\mathbb R}}_+\to H^1(I)$ supported in ${{\mathbb R}}_+\times[a,b]$ for which $$\label{5} \|{{\cal R}}_t(u_0,h+\zeta)-{{\cal R}}_t(\hat u_0,h)\|_1+\|\zeta(t)\|_1 \le Ce^{-\gamma t}\min\bigl(\|u_0-\hat u_0\|_{L^1}^{2/5},1\bigr), \quad t\ge1.$$ Moreover, the control $\zeta$ regarded as a function of time may have discontinuities only at positive integers. 
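The contraction mechanism underlying Theorem \[t2.1\] (the $L^1$-distance between two solutions of the uncontrolled equation never increases; see Proposition \[p3.3\] below) can be illustrated numerically. The sketch below is our own explicit finite-difference discretisation with arbitrary step sizes, right-hand side, and initial data; it is only an illustration, not part of the proof, and the fully discrete scheme is not guaranteed to contract at every single step, so we compare only the initial and final times.

```python
import numpy as np

nu = 0.1
N = 200                       # interior grid points on I = (0, 1)
dx = 1.0 / (N + 1)
dt = 0.2 * dx**2 / nu         # explicit stability margin for the diffusion
x = np.linspace(dx, 1.0 - dx, N)
h = np.sin(2 * np.pi * x)     # an arbitrary fixed right-hand side h(x)

def step(u):
    """One explicit step for u_t = nu*u_xx - (u^2/2)_x + h, Dirichlet BC."""
    up = np.pad(u, 1)                          # enforces u(0) = u(1) = 0
    lap = (up[2:] - 2 * up[1:-1] + up[:-2]) / dx**2
    flux = 0.5 * up**2
    dflux = (flux[2:] - flux[:-2]) / (2 * dx)  # central difference of u^2/2
    return u + dt * (nu * lap - dflux + h)

u = np.sin(np.pi * x)          # two different initial conditions
v = 0.3 * np.sin(3 * np.pi * x)
l1_0 = np.sum(np.abs(u - v)) * dx
for _ in range(int(0.5 / dt)):  # integrate both solutions to t = 0.5
    u, v = step(u), step(v)
l1_T = np.sum(np.abs(u - v)) * dx
assert l1_T < l1_0             # the L^1 distance has contracted
```

Refining the grid or changing $h$ and the initial data leaves the conclusion unchanged, in agreement with the contraction property.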
As was mentioned in the Introduction, this theorem combined with the Fursikov–Imanuvilov result on local exact controllability (see [@FI1996 Section I.6]) implies that the Burgers equation is exactly controllable to trajectories in a finite time independent of the initial data. Indeed, for any $\hat u_0\in L^2(I)$ the trajectory $\hat u(t)={{\cal R}}_t(\hat u_0,h)$ is bounded in $H_0^1(I)$ for $t\ge1$. In view of the local exact controllability, one can find ${\varepsilon}>0$ such that, if $v_0\in H_0^1(I)$ satisfies the inequality $\|v_0-\hat u(T)\|_1\le{\varepsilon}$ for some $T\ge1$, then there is a control $\zeta\in L^2(D_T)$ supported in $[T,T+1]\times [a,b]$ such that $v(T+1)=\hat u(T+1)$, where $v(t,x)$ stands for the solution of , issued from $v_0$ at time $t=T$. Due to , there is $T_{\varepsilon}>0$ such that, for any $u_0,\hat u_0\in L^2(I)$, one can find a piecewise continuous control $\zeta:J_{T_{\varepsilon}}\to H^1(I)$ supported in $J_{T_{\varepsilon}}\times [a,b]$ for which $$\|{{\cal R}}_{T_{\varepsilon}}(u_0,h+\zeta)-\hat u(T_{\varepsilon})\|_1\le{\varepsilon}.$$ Applying the above result on local exact controllability to $v_0={{\cal R}}_{T_{\varepsilon}}(u_0,h+\zeta)$, we arrive at assertion (a) of the Main Theorem stated in the Introduction. We now outline the main steps of the proof of Theorem \[t2.1\], which is given in Section \[s4\]. It is based on a comparison principle for nonlinear parabolic equations and the Harnack inequality. ### Step 1: Reduction to bounded regular initial data {#step1-reduction-to-bounded-regular-initial-data .unnumbered} We first prove that it suffices to consider the case of $H^2$-smooth initial conditions with norm bounded by a fixed constant. Namely, let $V:=H_0^1\cap H^2$, and given a number $T>0$, let us define the functional space $$\label{1.2} {{\cal X}}_T=L^2(J_T,H_0^1)\cap W^{1,2}(J_T,H^{-1}).$$ We have the following result providing a universal bound for solutions of , at any positive time. 
\[p1.2\] Let $h\in (H^1\cap L^\infty)(J_T\times I)$ for some $T>0$ and let $\nu>0$. Then there is $R>0$ such that any solution $u\in{{\cal X}}_T$ of  with $\zeta\equiv0$ satisfies the inclusion $u(t)\in V$ for $0<t\le T$ and the inequality $$\label{1.3} \|u(T)\|_2\le R.$$ Thus, if $h\in H_{\mathrm{ul}}^1\cap L^\infty$ is fixed, then, for any initial data $u_0,\hat u_0\in L^2(I)$, we have $$\|{{\cal R}}_1(u_0,h)\|_2\le R, \quad \|{{\cal R}}_1(\hat u_0,h)\|_2\le R,$$ where $R$ is the constant in Proposition \[p1.2\] with $T=1$. Furthermore, in view of the contraction of the $L^1$-norm for the difference of two solutions (cf. Proposition \[p3.3\] below), we have $$\|{{\cal R}}_1(u_0,h)-{{\cal R}}_1(\hat u_0,h)\|_{L^1}\le \|u_0-\hat u_0\|_{L^1}.$$ Hence, to prove Theorem \[t2.1\], it suffices to establish the inequality in  for $t\ge0$ and any initial data $u_0,\hat u_0\in B_V(R)$. ### Step 2: Interpolation {#step2-interpolation .unnumbered} Let us fix two initial conditions $u_0,\hat u_0\in B_V(R)$. Suppose we have constructed a control $\zeta(t,x)$ supported in ${{\mathbb R}}_+\times [a,b]$ such that, for all $t\ge0$, $$\begin{aligned} \|{{\cal R}}_t(u_0,h+\zeta)\|_2+\|{{\cal R}}_t(\hat u_0,h)\|_2&\le C_1,\label{1.4}\\ \|{{\cal R}}_t(u_0,h+\zeta)-{{\cal R}}_t(\hat u_0,h)\|_{L^1}& \le C_2e^{-\alpha t}\|u_0-\hat u_0\|_{L^1},\label{1.5}\end{aligned}$$ where $C_1$, $C_2$, and $\alpha$ are positive numbers not depending on $u_0$, $\hat u_0$, and $t$. In this case, using the interpolation inequality (see Section 15.1 in [@BIN1979]) $$\label{1.6} \|v\|_1\le C_3\|v\|_{L^1}^{2/5}\|v\|_2^{3/5}, \quad v\in H^2(I),$$ we can write $$\label{1.7} \|{{\cal R}}_t(u_0,h+\zeta)-{{\cal R}}_t(\hat u_0,h)\|_{1} \le C_4e^{-\gamma t}\|u_0-\hat u_0\|_{L^1}^{2/5},$$ where $\gamma=\frac{2\alpha}{5}$, and $C_4>0$ does not depend on $u_0$, $\hat u_0$, and $t$. This implies the required inequality for the first term on the left-hand side of . 
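As a numerical aside, the scaling of the interpolation inequality $\|v\|_1\le C_3\|v\|_{L^1}^{2/5}\|v\|_2^{3/5}$ can be tested on the oscillating family $v_k(x)=\sin(k\pi x)$. The snippet below is our own illustration; the discrete sums approximate the integrals defining the norms.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def ratio(k):
    """||v||_1 divided by ||v||_{L^1}^{2/5} ||v||_2^{3/5} for v = sin(k*pi*x)."""
    v = np.sin(k * np.pi * x)
    v1 = k * np.pi * np.cos(k * np.pi * x)           # v'
    v2 = -(k * np.pi) ** 2 * np.sin(k * np.pi * x)   # v''
    L1 = np.sum(np.abs(v)) * dx
    H1 = np.sqrt(np.sum(v**2 + v1**2) * dx)
    H2 = np.sqrt(np.sum(v**2 + v1**2 + v2**2) * dx)
    return H1 / (L1 ** 0.4 * H2 ** 0.6)

ratios = [ratio(k) for k in (1, 2, 4, 8, 16)]
# The ratio stays bounded as k grows (for this family it even decays
# like k^{-1/5}), consistent with a uniform constant C_3.
assert max(ratios) == ratios[0]
```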
An estimate for the second term will follow from the construction; see relations  and  below. ### Step 3: Main auxiliary result {#step3-main-auxiliary-result .unnumbered} Let us take two initial data $v_0,\hat u_0\in B_V(R)$ and consider the difference $w$ between the corresponding solutions of problem – with $\zeta\equiv0$; that is, $w=v-\hat u$, where $v(t)={{\cal R}}_t(v_0,h)$ and $\hat u(t)={{\cal R}}_t(\hat u_0,h)$. It is straightforward to check that $w$ satisfies the linear equation $$\label{A.31} {\partial}_t w-\nu{\partial}_x^2w+{\partial}_x\bigl(a(t,x)w\bigr)=0,$$ where $a=\frac12(v+\hat u)$. The following proposition is the key point of our construction. \[p1.4\] Let positive numbers $\nu$, $T$, $\rho$, and $s<1$ be fixed, and let $a(t,x)$ be a function such that $$\label{1.09} \|a\|_{C^s(J_T\times I)}+\|{\partial}_xa\|_{L^\infty(J_T\times I)}\le\rho.$$ Then, for any closed interval $I'\subset I$, there are positive numbers ${\varepsilon}$ and $q<1$, depending only on $\nu$, $T$, $\rho$, $s$, and $I'$, such that any solution $w\in{{\cal X}}_T$ of Eq.  satisfies one of the inequalities $$\label{1.10} \|w(T)\|_{L^1}\le q\,\|w(0)\|_{L^1}\quad\mbox{or} \quad \|w(T)\|_{L^1(I')}\ge {\varepsilon}\,\|w(0)\|_{L^1}.$$ In other words, for the difference of any two solutions, either the $L^1$-norm undergoes a strict contraction or a non-trivial mass is concentrated on $I'$. In both cases, we can modify the difference between the reference and uncontrolled solutions in the neighbourhood of $I'$ so that the resulting function is a solution to the controlled problem, and the $L^1$-norm of the difference decreases exponentially with time. We now describe this idea in more detail. 
### Step 4: Description of the controlled solution {#step4-description-of-the-controlled-solution .unnumbered} Let us fix a closed interval $I'\subset (a,b)$ and choose two functions $\chi_0\in C^\infty(\bar I)$ and $\beta\in C^\infty({{\mathbb R}})$ such that $$\begin{aligned} 0\le\chi_0(x)\le1&\mbox{ for $x\in I$},& \chi_0(x)&=0\mbox{ for $x\in I'$}, & \chi_0(x)&=1\mbox{ for $x\in I\setminus [a,b]$}, \label{1.11}\\ 0\le\beta(t)\le1&\mbox{ for $t\in{{\mathbb R}}$},& \beta(t)&=0\mbox{ for $t\le\tfrac12$}, & \beta(t)&=1\mbox{ for $t\ge1$}. \label{1.12}\end{aligned}$$ Let us set $\chi(t,x)=1-\beta(t)(1-\chi_0(x))$. Given $u_0,\hat u_0\in B_V(R)$, we denote by $\hat u(t,x)$ the reference trajectory and define a controlled solution $u(t,x)$ of  consecutively on intervals $[k,k+1]$ with $k\in{{\mathbb Z}}_+$ by the following rules: - *if $u(t)$ is constructed on $[0,k]$, then we denote by $v(t,x)$ the solution issued from $u(k)$ for problem , on $[k,k+1]$ with $\zeta\equiv0\,;$* - for any odd integer $k\in{{\mathbb Z}}_+$, we set $$\label{1.16} u(t,x)=v(t,x)\quad \mbox{for $(t,x)\in [k,k+1]\times I$}.$$ - for any even integer $k\in{{\mathbb Z}}_+$, we set $$\label{1.13} u(t,x)=\hat u(t,x)+\chi(t-k,x)\bigl(v(t,x)-\hat u(t,x)\bigr)\quad \mbox{for $(t,x)\in [k,k+1]\times I$}.$$ It is not difficult to check that $u(t,x)$ is a solution of problem , , in which $\zeta$ is supported by ${{\mathbb R}}_+\times[a,b]$. Moreover, it will follow from Proposition \[p1.4\] that, for any even integer $k\ge0$, we have $$\label{1.14} \|u(k+1)-\hat u(k+1)\|_{L^1}\le \theta\,\|u(k)-\hat u(k)\|_{L^1},$$ where $\theta<1$ does not depend on $\hat u_0$, $u_0$, and $k$. On the other hand, the contraction of the $L^1$-norm between solutions of  implies that $$\label{1.15} \|u(t)-\hat u(t)\|_{L^1}\le \|u([t])-\hat u([t])\|_{L^1} \quad\mbox{for any $t\ge0$},$$ where $[t]$ stands for the largest integer not exceeding $t$. These two inequalities give . 
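The requirements on $\chi_0$ and $\beta$ are easy to realise with standard mollifier-type transition functions. The sketch below is our own concrete choice (the bump profile and the intervals $[a,b]$ and $I'$ are arbitrary); it checks that $\chi(t,x)=1-\beta(t)(1-\chi_0(x))$ equals $1$ for $t\le\tfrac12$ and that, for $t\ge1$, it vanishes on $I'$ and equals $1$ outside $[a,b]$, so that the control in the gluing construction is indeed supported in ${{\mathbb R}}_+\times[a,b]$.

```python
import numpy as np

def smoothstep(s):
    """C^infinity transition: exactly 0 for s <= 0 and exactly 1 for s >= 1."""
    f = lambda t: np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)
    return f(s) / (f(s) + f(1.0 - s))

a, b = 0.3, 0.7          # control interval [a, b] (arbitrary choice)
Ia, Ib = 0.4, 0.6        # closed interval I' strictly inside (a, b)

def chi0(x):
    # equals 1 outside [a, b], 0 on I' = [Ia, Ib], and is smooth in between
    left = smoothstep((Ia - x) / (Ia - a))
    right = smoothstep((x - Ib) / (b - Ib))
    return np.clip(left + right, 0.0, 1.0)

def beta(t):
    # equals 0 for t <= 1/2 and 1 for t >= 1
    return smoothstep(2.0 * (t - 0.5))

def chi(t, x):
    return 1.0 - beta(t) * (1.0 - chi0(x))

x = np.linspace(0.0, 1.0, 1001)
# chi == 1 for t <= 1/2: the glued solution initially coincides with v ...
assert np.allclose(chi(0.25, x), 1.0)
# ... and for t >= 1 it equals 1 outside [a, b] and 0 on I'
assert np.allclose(chi(2.0, x[(x < a) | (x > b)]), 1.0)
assert np.allclose(chi(2.0, x[(x >= Ia) & (x <= Ib)]), 0.0)
```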
The uniform bounds  for the $H^2$-norm will follow from the regularity of solutions for problem , . Preliminaries on the Burgers equation {#s3} ===================================== In this section, we establish some properties of the Burgers equation. They are well known, and their proofs can be found in the literature for more complicated situations. However, for the reader’s convenience, we outline some of those proofs in the Appendix to make the presentation self-contained. In this section, when talking about Eq. , we always assume that $\zeta\equiv0$. Maximum principle and regularity of solutions --------------------------------------------- In this subsection, we discuss the well-posedness of the initial-boundary value problem for the Burgers equation. Results of this type are very well known, and we only outline their proofs in the Appendix. Recall that $V=H_0^1\cap H^2$, and the space ${{\cal X}}$ was defined in the Introduction. \[p3.1\] Let $u_0\in L^2(I)$ and $h\in L_{\mathrm{loc}}^1({{\mathbb R}}_+,L^2(I))$. Then problem – has a unique solution $u\in{{\cal X}}$. Moreover, the following two properties hold. [**$\boldsymbol{L^\infty}$ bound**]{}. If $h\in L_{\mathrm{loc}}^\infty({{\mathbb R}}_+\times I)$ and $u_0\in L^\infty(I)$, then $u\in L_{\mathrm{loc}}^\infty({{\mathbb R}}_+\times I)$. [**Regularity**]{}. If, in addition, $u_0\in V$ and $h\in H_{\mathrm{loc}}^1({{\mathbb R}}_+\times I)$, then $$\label{3.1} u\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H^3)\cap W_{\mathrm{loc}}^{1,2}({{\mathbb R}}_+,H_0^1)\cap W_{\mathrm{loc}}^{2,2}({{\mathbb R}}_+,H^{-1}).$$ Let us note that, if $u_0$ is only in the space $L^2(I)$, then the conclusions about the $L^\infty$ bound and the regularity remain valid on the half-line ${{\mathbb R}}_\tau:=[\tau,+\infty)$ for any $\tau>0$. To see this, it suffices to remark that any solution $u\in {{\cal X}}$ of , satisfies the inclusion $u(\tau)\in H_0^1\cap H^2$ for almost every $\tau>0$.
For any such $\tau>0$, one can apply Proposition \[p3.1\] to the half-line ${{\mathbb R}}_\tau$ and conclude that the inclusions mentioned there are true with ${{\mathbb R}}_+$ replaced by ${{\mathbb R}}_\tau$. Comparison principle -------------------- The Burgers equation possesses a very strong dissipation property due to the nonlinear term. To state and prove the corresponding result, we need the concept of sub- and super-solution for Eq.  with $\zeta\equiv0$. Let us fix $T>0$ and, given an interval $I'\subset I$, define[^3] $${{\cal X}}_T(I')=L^2(J_T,H^1(I'))\cap W^{1,2}(J_T,H^{-1}(I')).$$ A function $u^+\in {{\cal X}}_T(I')$ is called a [*super-solution*]{} for  if $$\int_0^T\bigl(({\partial}_tu,\varphi)+(\nu{\partial}_xu-\tfrac12u^2,{\partial}_x\varphi)\bigr){{\textup d}}t \ge\int_0^T(h,\varphi)\,{{\textup d}}t, \label{2.2}$$ where $\varphi\in L^\infty(J_T,L^2(I'))\cap L^2(J_T,H_0^1(I'))$ is an arbitrary non-negative function. The concept of a [*sub-solution*]{} is defined similarly, replacing $\ge$ by $\le$. A proof of the following result can be found in Section 2.2 of [@AL-1983] for a more general problem; for the reader’s convenience, we outline it in the Appendix. \[p3.2\] Let $h\in L^1(J_T,L^2)$, and let functions $u^+$ and $u^-$ belonging to ${{\cal X}}_T(I')$ be, respectively, super- and sub-solutions for  such that[^4] $$\label{2.1} u^+(t,x)\ge u^-(t,x)\quad \mbox{for $t=0$, $x\in I'$ and $t\in[0,T]$, $x\in{\partial}I'$},$$ where the inequality holds almost everywhere. Then, for any $t\in J_T$, we have $$\label{2.3} u^+(t,x)\ge u^-(t,x)\quad\mbox{for a.e.\ $x\in I'$}.$$ We now derive an a priori estimate for solutions of , . \[c2.3\] Let $u_0\in L^\infty$ and $h\in L^\infty(J_T\times I)$ for some $T>0$. Then the solution of problem – with $\zeta\equiv0$ satisfies the inequality $$\label{2.4} \|u(T,\cdot)\|_{L^\infty}\le C,$$ where $C>0$ is a number continuously depending only on $\|h\|_{L^\infty}$ and $T$. 
We follow the argument used in the proof of Lemma 9 in [@coron-2007 Section 2.1]. Given ${\varepsilon}>0$ and $u_0\in L^\infty(I)$, we set $$B_{\varepsilon}=1+\|h\|_{L^\infty}^{1/3}(T+{\varepsilon})^{2/3}, \quad L=\|u_0\|_{L^\infty}.$$ It is a matter of a simple calculation to check that the functions $$u_{\varepsilon}^+(t,x)=\frac{B_{\varepsilon}(B_{\varepsilon}+x)+L{\varepsilon}}{t+{\varepsilon}},\quad u_{\varepsilon}^-(t,x)=-\frac{B_{\varepsilon}(B_{\varepsilon}-x)+L{\varepsilon}}{t+{\varepsilon}}$$ are, respectively, super- and sub-solutions for  on the interval $J_T$ such that $$u_{\varepsilon}^+(t,x)\ge u(t,x)\ge u_{\varepsilon}^-(t,x)\quad \mbox{for $t=0$, $x\in I$ and $t\in[0,T]$, $x=0$ or~$1$}.$$ Applying Proposition \[p3.2\], we conclude that $$u_{\varepsilon}^+(T,x)\ge u(T,x)\ge u_{\varepsilon}^-(T,x)\quad\mbox{for a.e.~$x\in I$}.$$ Passing to the limit as ${\varepsilon}\to0^+$, we arrive at  with $C=T^{-1}B_0(B_0+1)$. Contraction of the $L^1$-norm of the difference of solutions ------------------------------------------------------------ It is a well known fact that the resolving operator for , regarded as a nonlinear mapping in the space $L^2(I)$, is locally Lipschitz. The following result shows that it is a contraction for the norm of $L^1(I)$. \[p3.3\] Let $u,v\in {{\cal X}}$ be two solutions of Eq. , in which $\zeta\equiv0$ and $h\in L_{\mathrm{loc}}^1({{\mathbb R}}_+,L^2)$. Then $$\label{21} \|u(t)-v(t)\|_{L^1}\le \|u(s)-v(s)\|_{L^1}\quad\mbox{for any $t\ge s\ge 0$}.$$ Inequality  follows from the maximum principle for linear parabolic PDEs, and more general results can be found in Sections 3.2 and 3.3 of [@hormander1997]. A simple proof of Proposition \[p3.3\] is given in Section \[A3\]. Harnack inequality {#s2.4} ------------------ Let us consider the linear homogeneous equation . The following result is a particular case of the Harnack inequality established in [@KS-1980 Theorem 1.1] (see also Section IV.2 in [@krylov1987]).
\[p2.6\] Let a closed interval $K\subset I$ and positive numbers $\nu$ and $T$ be fixed. Then, for any $\rho>0$ and $T'\in(0,T)$, one can find $C>0$ such that the following property holds: if $a(t,x)$ satisfies the inequality $$\label{1.9} \|a\|_{L^\infty(J_T\times I)}+\|{\partial}_xa\|_{L^\infty(J_T\times I)}\le\rho,$$ then for any non-negative solution $w\in L^2(J_T,H^3\cap H_0^1)\cap W^{1,2}(J_T,H_0^1)$ of  we have $$\label{2.6} \sup_{x\in K} w(T',x)\le C\inf_{x\in K}w(T,x).$$ Proof of the main results {#s4} ========================= In this section, we give the details of the proof of Theorem \[t2.1\] (Sections \[s3.1\]–\[s3.3\]) and establish assertion (b) of the Main Theorem stated in the Introduction (Section \[s3.4\]). Reduction to smooth initial data {#s3.1} -------------------------------- Let us prove Proposition \[p1.2\]. Fix arbitrary numbers $T_1<T_2$ in the interval $(0,T)$. By Proposition \[p3.1\] and the remark following it, for any $\tau>0$ we have $$\label{3.01} u\in L^\infty(J_{\tau,T}\times I)\cap L^2(J_{\tau,T},H^3)\cap W^{1,2}(J_{\tau,T},H_0^1)\cap W^{2,2}(J_{\tau,T},H^{-1}),$$ where $J_{\tau,T}=[\tau,T]$. Applying Corollary \[c2.3\], we see that $$\label{3.02} \|u(t,\cdot)\|_{L^\infty}\le C\quad\mbox{for $T_1\le t\le T$}.$$ Furthermore, it follows from  that $u(t)$ is a continuous function of $t\in (0,T]$ with range in $V$. Thus, it remains to establish inequality  with a universal constant $R$. The proof of this fact can be carried out by a standard argument based on the multipliers technique (e.g., see the proof of Theorem 2 in [@BV1992 Section I.6] dealing with the 2D Navier–Stokes system). Therefore, we confine ourselves to outlining the main steps. Until the end of this subsection, we deal with Eq.  in which $\zeta\equiv0$ and denote by $C_i$ unessential positive numbers not depending on $u$. [*Step 1.*]{}
Taking the scalar product of  with $2u$ and performing the usual transformations, we derive $${\partial}_t\|u\|^2+2\nu\|{\partial}_xu\|^2=2(h,u)\le \nu\|{\partial}_xu\|^2+\nu^{-1}\|h\|^2.$$ Integrating in time and using  with $t=T_1$, we obtain $$\label{3.03} \int_{T_1}^T\|{\partial}_xu\|^2{{\textup d}}t\le \nu^{-1}\|u(T_1)\|^2+\nu^{-2}\int_{T_1}^T\|h\|^2{{\textup d}}t\le C_1.$$ [*Step 2.*]{} Let us take the scalar product of  with $-2(t-T_1){\partial}_x^2u$: $$\begin{gathered} {\partial}_t\bigl((t-T_1)\|{\partial}_xu\|^2\bigr) -\|{\partial}_x u\|^2+2\nu(t-T_1)\|{\partial}_x^2u\|^2 = 2(t-T_1)(u{\partial}_xu-h,{\partial}_x^2u)\\ \le 2(t-T_1)\bigl(\|u\|_{L^\infty}\|{\partial}_xu\|+\|h\|\bigr)\|{\partial}_x^2u\|. \end{gathered}$$ Integrating in time and using  and , we obtain $$\label{3.04} \|u(t)\|_1+\int_{T_2}^t\|u(r)\|_2^2{{\textup d}}r\le C_2\quad\mbox{for $T_2\le t\le T$}.$$ Using , we also derive the following estimate for ${\partial}_tu$: $$\label{3.05} \int_{T_2}^T\|{\partial}_tu\|^2{{\textup d}}t\le C_3.$$ [*Step 3.*]{} Taking the time derivative of , we obtain the following equation for $v={\partial}_tu$: $${\partial}_tv-\nu{\partial}_x^2v+v{\partial}_xu+u{\partial}_xv={\partial}_th.$$ Taking the scalar product with $2(t-T_2)v$, we derive $$\begin{aligned} {\partial}_t\bigl((t-T_2)\|v\|^2\bigr) -\|v\|^2+2\nu(t-T_2)\|{\partial}_xv\|^2 &= 2(t-T_2)({\partial}_th-u{\partial}_xv-v{\partial}_xu,v)\\ &\le 2(t-T_2)\bigl(\|{\partial}_th\|+3\|u\|_{L^\infty}\|{\partial}_xv\|\bigr)\|v\|. \end{aligned}$$ Integrating in time and using  and , we obtain $$\label{3.06} \|v(T)\|\le C_4.$$ [*Step 4.*]{} We now rewrite  in the form $$\label{3.07} \nu{\partial}_x^2u=f(t):=v+u{\partial}_xu-h.$$ In view of  and , we have $\|f(T)\|\le C_5$. Combining this with , we arrive at the required inequality . \[r3.1\] The argument given above shows that, under the hypotheses of Proposition \[p1.2\], if $u_0\in B_V(\rho)$, then $\|R_t(u_0,h)\|_2\le R$ for all $t\ge0$, where $R>0$ depends only on $h$, $\nu$, and $\rho$.
Moreover, similar calculations enable one to prove that, for any $t>0$, the resolving operator ${{\cal R}}_t(u_0,h)$ regarded as a function of $u_0$ is uniformly Lipschitz continuous from any ball of $L^2$ to $H^2$, and the corresponding Lipschitz constant can be chosen to be the same for $T^{-1}\le t\le T$, where $T>1$ is an arbitrary number. Proof of the main auxiliary result {#s3.2} ---------------------------------- In this subsection, we prove Proposition \[p1.4\]. In doing so, we fix the parameter $\nu>0$ and do not follow the dependence of various quantities on it. [*Step 1.*]{} We begin with the case of non-negative solutions. Namely, we prove that, given $q\in(0,1)$, one can find $\delta=\delta(I',T,q,\rho)>0$ such that, if $w\in{{\cal X}}_T$ is a non-negative solution of , then either the first inequality in  holds, or $$\label{3.08} \inf_{x\in I'}w(T,x)\ge \delta \|w(0)\|_{L^1}.$$ To this end, we shall need the following lemma, established at the end of this subsection. \[l3.1\] For any $0<\tau<T$ and $\rho>0$, there is $M>0$ such that, if $w\in{{\cal X}}_T$ is a solution of Eq.  with a function $a(t,x)$ satisfying , then $$\label{3.09} \sup_{(t,x)\in[\tau,T]\times I}|w(t,x)|\le M\|w(0)\|_{L^1}.$$ In view of linearity, we can assume without loss of generality that $\|w(0)\|_{L^1}=1$. Let us choose a closed interval $K\subset I$ containing $I'$ such that $$\label{3.010} |I\setminus K|\le\frac{q}{2M},$$ where $|\Gamma|$ denotes the Lebesgue measure of a set $\Gamma\subset{{\mathbb R}}$, and $M>0$ is the constant in  with $\tau=2T/3$. By Proposition \[p3.1\] and the remark following it, the function $w$ satisfies the hypotheses of Proposition \[p2.6\]. Therefore, by the Harnack inequality , we have $$\label{3.011} \sup_{x\in K}w(2T/3,x)\le C\inf_{x\in K}w(T,x),$$ where $C>0$ depends only on $T$, $K$, and $\rho$. Let us set $\delta=\frac{q}{2C|K|}$ and suppose that  is not satisfied.
In this case, using – and the contraction of the $L^1$-norm of solutions for  (see Remark \[r4.2\]), we derive $$\begin{aligned} \|w(T)\|_{L^1}&\le \|w(2T/3)\|_{L^1} =\int_{I\setminus K}w(2T/3,x){{\textup d}}x+\int_{K}w(2T/3,x){{\textup d}}x\\ &\le M\,|I\setminus K|+C\delta |K|\le q. \end{aligned}$$ This is the first inequality in  with $\|w(0)\|_{L^1}=1$. Step 2. We now consider the case of arbitrary solutions $w\in{{\cal X}}_T$, assuming again that $\|w(0)\|_{L^1}=1$. Let us denote by $w_0^+$ and $w_0^-$ the positive and negative parts of $w_0:=w(0)$, and let $w^+$ and $w^-$ be the solutions of  issued from $w_0^+$ and $w_0^-$, respectively. Thus, we have $$w_0=w_0^+-w_0^-, \quad \|w_0^+\|_{L^1}+\|w_0^-\|_{L^1}=1, \quad w=w^+-w^-.$$ Let us set $r:=\|w_0^+\|_{L^1}$ and assume without loss of generality that $r\ge1/2$. In view of the maximum principle for linear parabolic equations (see Section 3.2 in [@landis1998]), the functions $w^+$ and $w^-$ are non-negative, and therefore the property established in Step 1 is true for them. If $\|w^+(T)\|_{L^1}\le r/2$, then the contraction of the $L^1$-norm of solutions of  implies that $$\|w(T)\|_{L^1}\le \|w^+(T)\|_{L^1}+\|w^-(T)\|_{L^1}\le r/2+(1-r)\le 3/4.$$ This coincides with the first inequality in  with $\|w(0)\|_{L^1}=1$. Suppose now that $\|w^+(T)\|_{L^1}> r/2$. Using the property of Step 1 with $q=\frac12$, we find $\delta_1>0$ such that $$\label{2.36} \inf_{x\in I'}w^+(T,x)\ge\delta_1 r.$$ Set ${\varepsilon}=\frac14\delta_1|I'|$ and assume that $\|w(T)\|_{L^1(I')}<{\varepsilon}$ (in the opposite case, the second inequality in  holds), so that $$\|w^+(T)\|_{L^1(I')}-\|w^-(T)\|_{L^1(I')}<{\varepsilon}.$$ It follows that $$\|w^-(T)\|_{L^1}\ge \|w^-(T)\|_{L^1(I')}\ge \|w^+(T)\|_{L^1(I')}-{\varepsilon}\ge \delta_1 r |I'|-\frac{\delta_1}{4}|I'| \ge{\varepsilon}.$$ By the $L^1$-contraction for $w^-$, we see that $\|w_0^-\|_{L^1}=1-r\ge {\varepsilon}$. 
Repeating the argument applied above to $w^+$, we can prove that if $$\label{2.37} \|w^-(T)\|_{L^1}\le \frac12(1-r),$$ then $\|w(T)\|_{L^1}\le 1-\frac{\varepsilon}2$, so that the first inequality in  holds with $q=1-\frac{\varepsilon}2$. Thus, it remains to consider the case when  does not hold. Applying the property of Step 1 to $w^-$, we find $\delta_2>0$ such that $$\label{2.38} \inf_{x\in I'}w^-(T,x)\ge\delta_2 (1-r).$$ Since $\frac12\le r\le 1-{\varepsilon}$, the right-hand sides in  and  are bounded below by $\theta=\min\{\tfrac12\delta_1,{\varepsilon}\delta_2\}$. Denoting by $\chi_{I'}$ the indicator function of $I'$, we write $$\begin{aligned} \|w(T)\|_{L^1}&=\int_I|w^+(T,x)-w^-(T,x)|\,{{\textup d}}x\\ &=\int_I\bigl|(w^+(T,x)-\theta \chi_{I'}(x))-(w^-(T,x)-\theta \chi_{I'}(x))\bigr|\,{{\textup d}}x\\ &\le\int_I\bigl(w^+(T,x)-\theta \chi_{I'}(x)\bigr)\,{{\textup d}}x +\int_I\bigl(w^-(T,x)-\theta \chi_{I'}(x)\bigr)\,{{\textup d}}x\\ &=\|w^+(T)\|_{L^1}+\|w^-(T)\|_{L^1}-2\theta |I'|.\end{aligned}$$ In view of the $L^1$-contraction for $w^+$ and $w^-$, the right-hand side of this inequality does not exceed $$\|w_0^+\|_{L^1}+\|w_0^-\|_{L^1}-2\theta |I'|=1-2\theta |I'|.$$ Setting $q=\max\{\frac34,1-\frac{{\varepsilon}}{2},1-2\theta |I'|\}$, we conclude that one of the inequalities  holds for $w$. Thus, to complete the proof of Proposition \[p1.4\], it only remains to establish Lemma \[l3.1\]. By the maximum principle and regularity of solutions for linear parabolic equations, it suffices to prove that $$\label{3.15} \|w(\tau)\|_{L^\infty(I)}\le C_1\|w(0)\|_{L^1(I)},$$ where $C_1>0$ does not depend on $w$. To this end, along with , let us consider the dual equation $$\label{A.32} {\partial}_t z+\nu{\partial}_x^2z+a(t,x){\partial}_xz=0,$$ supplemented with the terminal condition $$\label{3.17} z(T,x)=z_0(x).$$ Let us denote by $G(t,x,y)$ the Green function of the Dirichlet problem for , . 
By Theorem 16.3 in [@LSU1968 Chapter IV], one can find positive numbers $C_2$ and $C_3$ depending only on $\rho$, $s$, and $T$ such that $$|G(t,x,y)|\le C_2(T-t)^{-1/2}\exp\bigl(-C_3\tfrac{(x-y)^2}{T-t}\bigr) \quad\mbox{for $x,y\in I$, $t\in[0,T)$}.$$ It follows that, for $z_0\in L^2(I)$, the solution $z\in{{\cal X}}_T$ of problem , satisfies the inequality $$\label{3.19} \|z(0)\|_{L^\infty}\le C_4\|z_0\|_{L^1},$$ where $C_4>0$ does not depend on $z_0$. Now let $w\in{{\cal X}}_T$ be a solution of . Taking any $z_0\in L^2(I)$ and denoting by $z\in{{\cal X}}_T$ the solution of , , we write $$\label{3.18} \frac{{{\textup d}}}{{{\textup d}}t}\bigl(w(t),z(t)\bigr)=({\partial}_tw,z)+(w,{\partial}_tz)=0,$$ where $(\cdot,\cdot)$ denotes the scalar product in $L^2(I)$. Integrating in time and using , we obtain $$\int_Iw(T)z_0{{\textup d}}x=\int_Iw(0)z(0){{\textup d}}x\le \|w(0)\|_{L^1}\|z(0)\|_{L^\infty} \le C_4 \|w(0)\|_{L^1}\|z_0\|_{L^1}.$$ Taking the supremum over all $z_0\in L^2$ with $\|z_0\|_{L^1}\le 1$, we arrive at the required inequality . Completion of the proof {#s3.3} ----------------------- We need to prove inequalities  and , as well as the piecewise continuity of $\zeta:{{\mathbb R}}_+\to H^1(I)$ and the estimate $$\label{3.21} \|\zeta(t)\|_1\le C_1 e^{-\gamma t} \min\bigl(\|u_0-\hat u_0\|_{L^1}^{2/5},1\bigr), \quad t\ge0.$$ [*Proof of *]{}. The estimate for $\hat u(t)={{\cal R}}_t(\hat u_0,h)$ follows from Remark \[r3.1\]. Setting $t_k=2k$, we now use induction on $k\ge0$ to prove that $u(t)={{\cal R}}_t(u_0,h+\zeta)$ is bounded on $[t_k,t_{k+1}]$ by a universal constant and that $u(t_{k+1})\in B_V(R)$, provided that $u(t_k)\in B_V(R)$. Indeed, it follows from  that $$\sup_{t_k\le t\le s_k}\|u(t)\|_2\le C_2\sup_{t_k\le t\le s_k} \bigl(\|\hat u(t)\|_2+\|v(t)\|_2\bigr),$$ where $s_k=2k+1$. In view of Remark \[r3.1\], the right-hand side of this inequality does not exceed a constant $C_3(R)$. 
Furthermore, recalling  and using Remark \[r3.1\] and inequality  with $T=1$, we see that $$\sup_{s_k\le t\le t_{k+1}}\|u(t)\|_2\le C_3(R), \quad \|u(t_{k+1})\|_2\le R.$$ This completes the induction step. In view of , it suffices to establish  for any even integer $k\ge0$. It follows from , , and the definition of $\chi$ that $$\label{3.22} \|u(k+1)-\hat u(k+1)\|_{L^1}=\int_I\chi_0(x)|v(k+1)-\hat u(k+1)|\,{{\textup d}}x.$$ We know that the norms of the functions $v$ and $\hat u$ are bounded in $L^\infty([k,k+1],H^2)$ by a constant depending only on $R$. Since they satisfy Eq.  with $\zeta\equiv0$, we see that ${\partial}_tv$ and ${\partial}_t\hat u$ are bounded in $L^\infty([k,k+1],L^2)$ by a number depending on $R$. By interpolation and the continuous embedding $H^1(I)\subset C^{1/2}(I)$, we see that $$\|v\|_{C^{1/2}([k,k+1]\times I)}+\|\hat u\|_{C^{1/2}([k,k+1]\times I)}\le C_4(R).$$ Since the difference $w=v-\hat u$ satisfies Eq.  with $a=\frac12(v+\hat u)$, we conclude that Proposition \[p1.4\] is applicable to $w$. Thus, we have one of the inequalities . If the first of them is true, then it follows from  that  holds with $\theta=q$. If the second inequality is true, then using , the contraction of the $L^1$-norm for $w$, and relations , we derive $$\begin{aligned} \|u(k+1)-\hat u(k+1)\|_{L^1}&\le \|w(k+1)\|_{L^1}-\|w(k+1)\|_{L^1(I')} \le (1-{\varepsilon})\|w(0)\|_{L^1},\end{aligned}$$ and, hence, we obtain  with $\theta=1-{\varepsilon}$. In view of , on any interval $[k,k+1]$ with odd $k\ge0$, the function $u$ satisfies  with $\zeta\equiv0$, and the required properties of $\zeta$ are trivial. Let us consider the case of an even $k\ge0$. 
A direct calculation shows that $$\begin{aligned} \zeta(t,x)&={\partial}_tu-\nu{\partial}_x^2u+u{\partial}_xu-h\\ &=-\bigl(\chi_k(1-\chi_k)w+2\nu{\partial}_x\chi_k\bigr){\partial}_xw +\bigl({\partial}_t\chi_k-\nu{\partial}_x^2\chi_k+\hat u{\partial}_x\chi_k+\chi_k w{\partial}_x\chi_k\bigr)w,\end{aligned}$$ where $\chi_k(t,x)=\chi(t-k,x)$. Since $\chi(t,x)=1$ for $x\notin[a,b]$ and for $t\le\frac12$, we have ${\mathop{\rm supp}\nolimits}\zeta\subset[k+\frac12,k+1]\times [a,b]$. By Proposition \[p3.1\], $v$ and $\hat u$ are $V$-valued continuous functions, whence we conclude that $\zeta$ is continuous in time with range in $H_0^1$. Moreover, since the $H^2$-norms of $v$ and $\hat u$ are bounded by a number depending only on $R$, for $t\in [k,k+1]$ we have $$\label{3.23} \|\zeta(t)\|_1\le C_5(R)I_{[k+1/2,k+1]}(t)\|w(t)\|_2\le C_6(R)\|v(k)-\hat u(k)\|_1,$$ where $I_{[k+1/2,k+1]}(t)$ is the indicator function of the interval $[k+1/2,k+1]$, and we used the fact that the resolving operator for the Burgers equation is uniformly Lipschitz continuous from any ball of $H_0^1$ to $H^2$ for positive times; see Remark \[r3.1\]. Since $v(k)-\hat u(k)=u(k)-\hat u(k)$, it follows from  and  that  holds. This completes the proof of Theorem \[t2.1\]. Absence of global approximate controllability {#s3.4} --------------------------------------------- We shall prove that if $u(t,x)$ is a solution of , on the interval $[0,T]$ with some control $\zeta\in L^2(J_T\times I)$ supported by $J_T\times[a,b]$, then the restriction of $u(T,\cdot)$ to any closed interval included in $[0,a)$ satisfies an a priori estimate in the $L^\infty$ norm independent of $u_0$ and $\zeta$. Namely, we claim that, for any positive numbers $T_0$ and $\delta<a$, there is $\rho>0$ such that, if $T\ge T_0$, $u_0\in L^2(I)$, and $\zeta\in L^2(J_T\times I)$ vanishes on $J_T\times (0,a)$, then $$\label{3.51} \|R_T(u_0,h+\zeta)\|_{L^\infty(K_\delta)}\le\rho,$$ where $K_\delta=[0,\delta]$. 
If this is proved, then for any $R>0$ we can take $\hat u\in L^2(I)$ such that $$\hat u(x)\ge\rho+\delta^{1/2}R\quad\mbox{for $x\in(0,\delta)$},$$ and it is straightforward to check that $$\|R_T(u_0,h+\zeta)-\hat u\|^2\ge\int_0^{\delta}|R_T(u_0,h+\zeta)-\hat u|^2{{\textup d}}x\ge R^2.$$ We now prove . In view of the regularising property of the resolving operator (see Proposition \[p3.1\] and the remark following it), there is no loss of generality in assuming that $u_0\in V$. In this case, if $\zeta\in L^2(J_T\times I)$, then $u(t,x)$ is continuous on $J_T\times\bar I$. Given ${\varepsilon}\in(0,1)$, we fix a number $A_{\varepsilon}$ (which will be chosen below) and define the function $$u_{\varepsilon}(t,x)=\frac{A_{\varepsilon}}{(t+{\varepsilon})(a-x+{\varepsilon})}.$$ We claim that, for an appropriate choice of $A_{\varepsilon}$, the function $u_{\varepsilon}$ is a super-solution for  in the domain $J_T\times K_a$. Indeed, let $$L=\max_{x\in K_a}|u_0(x)|, \quad N=\max_{t\in J_T}|u(t,a)|, \quad A_{\varepsilon}=\Lambda+{\varepsilon}\bigl(L(a+{\varepsilon})+N(T+{\varepsilon})\bigr),$$ where $\Lambda>0$ is a large parameter that will be chosen below. 
For $x\in K_a$ and $t\in J_T$, we have $$\label{3.55} u_{\varepsilon}(0,x)\ge\frac{A_{\varepsilon}}{{\varepsilon}(a+{\varepsilon})}\ge L, \quad u_{\varepsilon}(t,0)\ge 0, \quad u_{\varepsilon}(t,a)\ge\frac{A_{\varepsilon}}{{\varepsilon}(T+{\varepsilon})}\ge N.$$ Furthermore, a simple calculation shows that $$\begin{aligned} {\partial}_tu_{\varepsilon}-\nu{\partial}_x^2u_{\varepsilon}+u_{\varepsilon}{\partial}_xu_{\varepsilon}&=\frac{A_{\varepsilon}}{(t+{\varepsilon})^2(a-x+{\varepsilon})^3}\bigl(-(a-x+{\varepsilon})^2-2\nu(t+{\varepsilon})+A_{\varepsilon}\bigr)\notag\\ &\ge \frac{A_{\varepsilon}^2}{2(T+1)^2(a+1)^3}, \label{3.52}\end{aligned}$$ provided that $$\label{3.53} \Lambda\ge 4\nu(T+1)+2(a+1)^2$$ It follows from  that if $$\label{3.54} A_{\varepsilon}^2\ge2(T+1)^2(a+1)^3\|h\|_{L^\infty},$$ then $u_{\varepsilon}$ is a super-solution for  on the domain $J_T\times K_a$. Inequalities  and  will be satisfied if we choose $\Lambda=C(T+1)(\|h\|_{L^\infty}^{1/2}+1)$, where $C>0$ is sufficiently large and depends only on $a$ and $\nu$. Recalling the definition of $u_{\varepsilon}$, we see that the function $$u_{\varepsilon}(t,x)=\frac{C(T+1)(\|h\|_{L^\infty}^{1/2}+1)+{\varepsilon}\bigl(L(a+{\varepsilon})+N(T+{\varepsilon})\bigr)}{(t+{\varepsilon})(a-x+{\varepsilon})}$$ is a super-solution for  on the domain $J_T\times K_a$. It follows from  that Proposition \[p3.2\] is applicable to the pair $(u_{\varepsilon},u)$. In particular, we can conclude that $u(T,x)\le u_{\varepsilon}(T,x)$ for $x\in K_\delta$. Passing to the limit as ${\varepsilon}\to0$, we obtain $$u(T,x)\le \frac{C(T+1)(\|h\|_{L^\infty}^{1/2}+1)}{T(a-\delta)} \quad\mbox{for $x\in K_\delta$}.$$ This implies the required inequality  in which $$\rho=C(a-\delta)^{-1}(1+T_0^{-1})(\|h\|_{L^\infty}^{1/2}+1).$$ We have thus established assertion (b) of the Main Theorem of the Introduction. 
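The pointwise super-solution inequality derived above is easy to check numerically. The following Python sketch (with illustrative values of $\nu$, $T$, $a$, ${\varepsilon}$ that are not taken from the text) evaluates the exact expression $\frac{A_{\varepsilon}}{(t+{\varepsilon})^2(a-x+{\varepsilon})^3}\bigl(-(a-x+{\varepsilon})^2-2\nu(t+{\varepsilon})+A_{\varepsilon}\bigr)$ on a grid and confirms that, when $A_{\varepsilon}$ meets the threshold of , it stays above the claimed lower bound $A_{\varepsilon}^2/\bigl(2(T+1)^2(a+1)^3\bigr)$.

```python
import numpy as np

def residual(A, eps, nu, t, x, a):
    """Exact value of d_t u - nu*d_x^2 u + u*d_x u for the
    super-solution u(t, x) = A / ((t + eps)*(a - x + eps))."""
    s = a - x + eps                     # spatial factor, positive on [0, a]
    tau = t + eps
    return A * (-s**2 - 2.0*nu*tau + A) / (tau**2 * s**3)

# Illustrative parameters (not from the text): nu, T, a, and eps <= 1.
nu, T, a, eps = 0.1, 2.0, 1.0, 0.5
A = 4*nu*(T + 1) + 2*(a + 1)**2         # the threshold value of Lambda
t = np.linspace(0.0, T, 201)
x = np.linspace(0.0, a, 201)
tt, xx = np.meshgrid(t, x)

lower_bound = A**2 / (2*(T + 1)**2 * (a + 1)**3)
res = residual(A, eps, nu, tt, xx, a)
print(res.min() >= lower_bound)         # True: the pointwise bound holds
```

The same check with a forcing term $h$ would additionally require $A_{\varepsilon}^2\ge2(T+1)^2(a+1)^3\|h\|_{L^\infty}$, as in the text.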
Appendix: proofs of some auxiliary assertions ============================================= Proof of Proposition \[p3.1\] {#A1} ----------------------------- The existence and uniqueness of a solution $u\in{{\cal X}}$ are well known in more complicated situations; see Chapter 15 in [@taylor1996]. We thus confine ourselves to outlining the proofs of the $L^\infty$ bound and regularity. The solution $u(t,x)$ of , can be regarded as the solution of the linear parabolic equation $$\label{A.1} {\partial}_tu-\nu{\partial}_x^2u+b(t,x){\partial}_xu=h(t,x),$$ where $b\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H_0^1)$ coincides with $u$. If $b$, $h$, and $u_0$ were regular functions, then the classical maximum principle would imply that (see Section 3.2 in [@landis1998]) $$\label{A.2} |u(t,x)|\le \|u_0\|_{L^\infty}+t\,\|h\|_{L^\infty(J_t\times I)}\quad \mbox{for all $(t,x)\in {{\mathbb R}}_+\times I$}.$$ To deal with the general case, it suffices to approximate $u_0$ and $h$ by smooth functions and to pass to the (weak) limit in inequality  written for approximate solutions. This argument shows that the inequality in  is valid almost everywhere for any solution $u$. We now turn to the regularity of solutions. The function $u\in{{\cal X}}$ is the solution of the linear equation $${\partial}_tu-\nu{\partial}_x^2u=f(t,x),$$ where the right-hand side $f=h-u{\partial}_xu$ belongs to $L_{\mathrm{loc}}^2({{\mathbb R}}_+,L^2)$. By standard estimates for the heat equation, we see that $$\label{A.4} u\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H^2)\cap W_{\mathrm{loc}}^{1,2}({{\mathbb R}}_+,L^2).$$ Differentiating  with respect to time and setting $v={\partial}_tu$, we see that $v$ satisfies the equations $$\label{A.3} {\partial}_tv-\nu{\partial}_x^2v+v{\partial}_xu+u{\partial}_xv={\partial}_th, \quad v(0)=v_0,$$ where $v_0=h(0)-u_0{\partial}_xu_0+\nu{\partial}_x^2u_0\in L^2$. 
Taking the scalar product of the first equation in  and carrying out some simple transformations, we conclude that $v\in{{\cal X}}$. On the other hand, it follows from  that $${\partial}_x^2u=v+u{\partial}_xu-h\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H^1),$$ whence we see that $u\in L_{\mathrm{loc}}^2({{\mathbb R}}_+,H^3)$. Combining this with the inclusion ${\partial}_tu\in{{\cal X}}$, we obtain . Proof of Proposition \[p3.2\] {#A2} ----------------------------- Without loss of generality, we can assume that $t=T$. Define $$u=u^--u^+,\quad \psi_\delta(z)=1\wedge\bigl((z/\delta)\vee 0\bigr),$$ where $\delta>0$ is a small parameter, and $a\wedge b$ ($a\vee b$) denotes the minimum (respectively, maximum) of the real numbers $a$ and $b$. In view of inequality  and its analogue for sub-solutions, the function $u$ is non-positive almost everywhere for $t=0$ and satisfies the inequality $$\label{A.5} \int_0^T({\partial}_tu,\varphi)\,{{\textup d}}t +\nu\int_0^T({\partial}_xu,{\partial}_x\varphi)\,{{\textup d}}t -\frac12\int_0^T(w,{\partial}_x\varphi){{\textup d}}t\le 0,$$ where $w=(u^-)^2-(u^+)^2$, and $\varphi\in L^\infty(J_T,L^2)\cap L^2(J_T,H_0^1)$ is an arbitrary non-negative function. Let us take $\varphi(t,x)=\psi_\delta(u(t,x))$ in . It is easy to check that $$\begin{aligned} \int_0^T({\partial}_tu,\varphi)\,{{\textup d}}t &=\int_I\Psi_\delta(u(T))\,{{\textup d}}x,\\ \int_0^T({\partial}_xu,{\partial}_x\varphi)\,{{\textup d}}t&=\int_0^T\int_I|{\partial}_xu|^2\psi_\delta'(u)\,{{\textup d}}x{{\textup d}}t \ge0,\\ \biggl|\int_0^T(w,{\partial}_x\varphi){{\textup d}}t\biggr| &\le \int_0^T\int_I|u|\,|u^++u^-|\,|{\partial}_xu|\psi_\delta'(u)\,{{\textup d}}x{{\textup d}}t\\ &\le \int_0^T\int_I\Bigl(\nu|{\partial}_xu|^2+\frac{1}{4\nu}|u|^2\,|u^++u^-|^2\Bigr) \psi_\delta'(u)\,{{\textup d}}x{{\textup d}}t,\end{aligned}$$ where $\Psi_\delta(z)=\int_0^z\psi_\delta(r){{\textup d}}r$. 
Substituting these relations into , we derive $$\begin{aligned} \int_I\Psi_\delta(u(T))\,{{\textup d}}x &\le \frac{1}{8\nu}\int_0^T\int_I|u|^2\,|u^++u^-|^2\psi_\delta'(u)\,{{\textup d}}x{{\textup d}}t\\ &\le \frac{\delta}{8\nu}\int_0^T\int_I|u^++u^-|^2\,{{\textup d}}x{{\textup d}}t \le \frac{\delta}{8\nu}\,\bigl\|u^++u^-\bigr\|_{L^2(J_T\times I)}^2,\end{aligned}$$ where we used the fact that $0\le u\le \delta$ on the support of $\psi_\delta'(u)$. Passing to the limit as $\delta\to0^+$, we derive $$\int_I\bigl(u(T)\vee0\bigr)\,{{\textup d}}x\le 0.$$ This inequality implies that $u(T,x)\le0$ for a.e. $x\in I$, which is equivalent to . Proof of Proposition \[p3.3\] {#A3} ----------------------------- We apply an argument similar to that used in the proof of Lemma \[l3.1\]; see Section \[s3.2\]. Let us note that the difference $w=u-v\in{{\cal X}}$ satisfies the linear equation , in which $a=\frac12(u+v)$. Along with , let us consider the dual equation . The following result is a particular case of the classical maximum principle. Its proof is given in Section III.2 of [@landis1998] for regular functions $a(t,x)$ and can be obtained by a simple approximation argument in the general case. \[lA.1\] Let $a\in L^2(J_T,H^1)$ for some $T>0$. Then, for any $z_0\in L^2(I)$, problem , has a unique solution $z\in{{\cal X}}_T$. Moreover, if $z_0\in L^\infty(I)$, then $z(t)$ belongs to $L^\infty(I)$ for any $t\in J_T$ and satisfies the inequality $$\label{A.33} \|z(t)\|_{L^\infty}\le \|z_0\|_{L^\infty}\quad\mbox{for $t\in J_T$}.$$ To prove , we fix $t=T$ and assume without loss of generality that $s=0$. By duality, it suffices to show that, for any $z_0\in L^\infty(I)$ with norm $\|z_0\|_{L^\infty}\le 1$, we have $$\label{A.34} \int_Iw(T)z_0\,{{\textup d}}x\le \|w(0)\|_{L^1}.$$ Let $z\in{{\cal X}}_T$ be the solution of , . Such a solution exists in view of Lemma \[lA.1\] and the inclusion $a\in L^2(J_T,H_0^1)$, which is ensured by the regularity hypothesis for $u$ and $v$. 
It follows from  and  that relation  holds. Integrating it in time, we see that $$\int_Iw(T)z_0\,{{\textup d}}x=\int_Iw(0)z(0)\,{{\textup d}}x\le \|w(0)\|_{L^1} \|z(0)\|_{L^\infty}.$$ Using  with $t=0$, we arrive at the required inequality . \[r4.2\] We have proved in fact that if $w\in {{\cal X}}_T$ is a solution of the linear equation , in which the coefficient $a$ belongs to $L^2(J_T,H^1)$, then $\|w(t)\|_{L^1}\le \|w(s)\|_{L^1}$ for $0\le s\le t\le T$. H. W. Alt and S. Luckhaus, *Quasilinear elliptic-parabolic differential equations*, Math. Z. **183** (1983), no. 3, 311–341. O. V. Besov, V. P. Ilin, and S. M. Nikolskiĭ, *Integral Representations of Functions and Imbedding Theorems*, V. H. Winston & Sons, Washington, D.C., 1979. A. V. Babin and M. I. Vishik, *Attractors of Evolution Equations*, North-Holland Publishing, Amsterdam, 1992. M. Chapouly, *Global controllability of nonviscous and viscous Burgers-type equations*, SIAM J. Control Optim. **48** (2009), no. 3, 1567–1599. J.-M. Coron, *Control and Nonlinearity*, American Mathematical Society, Providence, RI, 2007. J.-M. Coron, *Some open problems on the control of nonlinear partial differential equations*, Perspectives in nonlinear partial differential equations (H. Berestycki, ed.), Contemp. Math., vol. 446, Amer. Math. Soc., Providence, RI, 2007, pp. 215–243. E. Fernández-Cara and S. Guerrero, *Null controllability of the Burgers system with distributed controls*, Systems Control Lett. **56** (2007), no. 5, 366–372. A. V. Fursikov and O. Yu. Imanuvilov, *On controllability of certain systems simulating a fluid flow*, Flow control (Minneapolis, MN, 1992) (M. D. Gunzburger, ed.), IMA Vol. Math. Appl., vol. 68, Springer, New York, 1995, pp. 149–184. 
, *Controllability of Evolution Equations*, Seoul National University, Research Institute of Mathematics, Global Analysis Research Center, Seoul, 1996. A. V. Fursikov, *Optimal Control of Distributed Systems. Theory and Applications*, American Mathematical Society, Providence, RI, 2000. O. Glass and S. Guerrero, *On the uniform controllability of the Burgers equation*, SIAM J. Control Optim. **46** (2007), no. 4, 1211–1238. L. Hörmander, *Lectures on Nonlinear Hyperbolic Differential Equations*, Springer-Verlag, Berlin, 1997. N. V. Krylov, *Nonlinear Elliptic and Parabolic Equations of the Second Order*, D. Reidel Publishing Co., Dordrecht, 1987. N. V. Krylov and M. V. Safonov, *A property of the solutions of parabolic equations with measurable coefficients*, Izv. Akad. Nauk SSSR Ser. Mat. **44** (1980), no. 1, 161–175, 239. E. M. Landis, *Second Order Equations of Elliptic and Parabolic Type*, American Mathematical Society, Providence, RI, 1998. M. Léautaud, *Uniform controllability of scalar conservation laws in the vanishing viscosity limit*, SIAM J. Control Optim. **50** (2012), no. 3, 1661–1699. O. A. Ladyženskaja, V. A. Solonnikov, and N. N. Uralceva, *Linear and Quasilinear Equations of Parabolic Type*, American Mathematical Society, Providence, R.I., 1968. F. Marbach, *Small time global null controllability for a viscous Burgers’ equation despite the presence of a boundary layer*, J. Math. Pures Appl. (9) **102** (2014), no. 2, 364–384. M. E. Taylor, *Partial Differential Equations. I–III*, Springer-Verlag, New York, 1996-97. [^1]: Department of Mathematics, University of Cergy–Pontoise, CNRS UMR 8088, 2 avenue Adolphe Chauvin, 95302 Cergy–Pontoise, France; e-mail: <[email protected]> [^2]: See the Notation below for the definition of the spaces used in the statement. 
[^3]: Note that, in contrast to ${{\cal X}}_T$, we do not require the elements of ${{\cal X}}_T(I')$ to vanish on ${\partial}I'$. [^4]: It is not difficult to see that the restrictions of the elements of ${{\cal X}}_T(I')$ to the straight lines $t=t_0$ and $x=x_0$ are well defined.
--- abstract: 'We derive analytic solutions for the potential and field in a one-dimensional system of masses or charges with periodic boundary conditions, in other words Ewald sums for one dimension. We also provide a set of tools for exploring the system evolution and show that it is possible to construct an efficient algorithm for carrying out simulations. In the cosmological setting we show that two approaches for satisfying periodic boundary conditions, one overly specified and the other completely general, provide a nearly identical clustering evolution until the number of clusters becomes small, at which time the influence of any size-dependent boundary cannot be ignored. Finally, we compare the results with other recent work in the hope of clarifying the differences these issues have induced. We explain that modern formulations of physics require a well-defined potential, which is not available if the forces are screened directly.' author: - 'Bruce N. Miller' - 'Jean-Louis Rouet' bibliography: - 'gravbib6.bib' title: 'Ewald Sums for One Dimension' --- Introduction ============ One-dimensional models play an important role in physics. While they are of intrinsic interest, they also provide important insights into higher-dimensional systems. One-dimensional plasma and gravitational systems were the first N-body systems simulated with early computers [@lieb_matt; @mattis; @CL1; @CL2]. Plasma systems were used to investigate Debye screening and thermodynamic equilibrium. In contrast, although the force law is similar, it was found that the evolution of gravitational systems toward equilibrium is extremely slow, and still is not completely explained [@WMS; @yawn3; @tsuchiya4; @YM2ma; @Joyce_relax]. Recently, spin-offs of these models that are, in some cases, more amenable to computer simulation have been studied in great depth [@posch06; @rufforev]. Three-dimensional dynamical simulations are an important component of modern cosmology. 
Starting from the precise initial conditions provided by observations of the cosmic background radiation [@WMAP5yr], various model predictions can be compared with what we observe in “today’s universe” [@Virgo]. It was shown by Rouet and Feix that, in common with the observed positions of galaxies, a cosmological version of the one-dimensional gravitational system exhibits hierarchical clustering and fractal-like behavior [@Rouet1; @Rouet2]. Since then we, as well as others, have pursued different one-dimensional cosmological models, finding important attributes such as power-law behavior of both the density fluctuation power spectra and two-body correlation function, in addition to the influence of dark energy [@Tat; @Gouda_powsp; @Ricker1d; @Joyce_1d; @MRGexp; @MR_jstat]. In specific applications, for both plasma and gravity, it is preferable to adopt periodic boundary conditions because they avoid special treatment of the boundary and therefore best mimic a segment of the extended system [@Bert_rev; @HockneyEastwood; @hern_ewald; @Virgo]. However, since they act as a low-pass filter, no information supported on wavelengths larger than the system size is available. Therefore, in the context of plasma and gravitational systems, as well as in the Vlasov limit [@braunhepp], it is necessary to have a sufficiently large system that contains many Jeans’ (or Debye) lengths [@peebles2]. In both plasma and gravitational physics, a large class of one-dimensional models is defined by a potential energy that satisfies Poisson’s equation. In three dimensions they are represented by embedded systems of parallel sheets of mass or electric charge that are of infinite extent. From Gauss’ Law, the field from such an element is directed perpendicular to the surface and has a constant value, independent of the distance, and proportional to the mass or charge density (per unit area). 
In a linear array of sheets of equal mass or charge density, the force on a given sheet is then simply proportional to the difference between the number of sheets on the left and right. Problems with this formulation arise in specific applications where it is necessary to assume periodic boundary conditions, so that the motion takes place on a torus. A particular case arises in constructing a 1+1 dimensional model of the expanding universe. In the cosmological setting, astrophysicists consider a segment of the universe that is following the average expansion rate. They assume it is large enough to contain many clusters, but small enough that Newtonian dynamics is adequate to describe the evolution [@Newtap]. Comoving coordinates, in which the average density remains fixed, are employed, and it is assumed that the system obeys periodic boundary conditions [@HockneyEastwood; @Bert_rev]. In comoving coordinates fictitious forces appear, analogous to the Coriolis force in a rotating system. The apparent gravitational field arises from the difference between the actual matter distribution and a negative background density. To compute the force on a particle in three-dimensional systems it is possible to carry out “Ewald” sums over the positions of all the other particles in both the system and all its periodic “replicas” [@hern_ewald; @brush]. In one dimension, because the field induced by a particle (mass or charge sheet) is constant, the solution is not obvious. At first glance it appears that the force is due to the difference between infinities and, at second glance, in the cosmological setting, that the negative background should exactly cancel with the force due to the particles in each replica. Thus the problem is fraught with ambiguity. To get a different twist, consider the fact that, in a periodic system, we are describing motion on a torus, so there is no clear distinction between left and right. 
In order to provide conclusive solutions to these problems, in the following pages we will follow the approach used by Kiessling in showing that the “so-called” Jeans swindle is, in fact, legitimate and not a swindle at all [@kies_swin]. For clarity we will focus on the gravitational example. Periodic Boundary Conditions\[sec:PBC\] ======================================= Let $\rho(x)$ be a periodic mass density (mass density per unit length) of the one-dimensional system with period $2L$ so that $\rho(x+2L)=\rho(x).$ In addition write $\rho(x)=\rho_{0}+\sigma(x)$ where $\rho_{0}$ represents the average of $\rho(x)$ over one period and $\sigma(x)$ is the periodic fluctuation. If, instead, the total mass were bounded, we could compute the potential in the usual way. The potential function for a unit mass located at $x'$ is simply $2\pi G\left|x-x'\right|$ so we would normally write$$\Phi(x)=2\pi G\int\left|x-x'\right|\rho(x')dx'$$ where the integration is over the whole line. Clearly, in the present situation, this will not converge. Following Kiessling [@kies_swin] we introduce the screening function $\exp\left(-\kappa\left|x-x'\right|\right)$ and define $\Psi$ by$$\Psi(x,\kappa)=2\pi G\int\left|x-x'\right|\exp\left(-\kappa\left|x-x'\right|\right)\rho(x')dx'.\label{eq:scrpot}$$ For mass densities of physical interest, for example bounded functions as well as delta functions, $\Psi$ clearly exists in the current case so long as $\kappa>0$. The contribution from the average, or background, term is quickly determined:$$\Psi_{0}=2\pi G(-\frac{\partial}{\partial\kappa})\int\exp\left(-\kappa\left|x-x'\right|\right)\rho_{0}dx'=4\pi G\rho_{0}/\kappa^{2}$$ and is independent of position. Therefore, although it blows up in the limit $\kappa\rightarrow0,$ since $\rho_{0}$ is translation invariant, $\Psi_{0}$ makes no contribution to the gravitational field. 
On the other hand we can formulate the contribution to $\Psi$ from $\sigma(x)$, say $\Psi_{\sigma},$ and show that it is well behaved in this limit. Fourier Representation of the Potential and Field ------------------------------------------------- Since the density fluctuation $\sigma$ is a mass-neutral periodic function, we may represent $\sigma(x)$ as a Fourier series$$\sigma(x)={\textstyle \sum_{n}^{'}c_{n}\exp(i\pi nx/L)}$$ where the prime indicates that there is no contribution from $n=0.$ Inserting into Eq. (\[eq:scrpot\]) we find$$\Psi_{\sigma}(x,\kappa)=2\pi G\sum{}_{n}^{'}c_{n}b_{n}\exp(i\pi nx/L)$$ where $$b_{n}=\int\left|x-x'\right|\exp\left(-\kappa\left|x-x'\right|\right)\exp(in\pi(x'-x)/L)dx'$$ $$=\int\left|u\right|\exp\left(-\kappa\left|u\right|\right)\exp(in\pi u/L)du$$ $$=\int\left|u\right|\exp\left(-\kappa\left|u\right|\right)cos(n\pi u/L)du$$ $$=2\int_{0}^{\infty}u\exp\left(-\kappa u\right)cos(n\pi u/L)du=2\frac{\kappa^{2}-(\pi n/L)^{2}}{[\kappa^{2}+(\pi n/L)^{2}]^{2}}.$$ Now, taking the limit $\kappa\rightarrow0$, we find a general expression for the periodic potential $\phi(x),$$$\phi(x)=-4\pi G{\textstyle \sum_{n}^{'}c_{n}(L/\pi n)^{2}\exp(i\pi nx/L)}\label{eq:perpot}$$ and, for the gravitational field $E(x)$,$$E(x)=-\frac{\partial\phi}{\partial x}=4\pi Gi{\textstyle \sum_{n}^{'}c_{n}(L/\pi n)\exp(i\pi nx/L)}.\label{eq:perfield}$$ Thus, for a periodic distribution of mass, the gravitational field is well defined. We can think of it as arising from both the mass in the primitive cell $(-L<x<L)$ and the contribution from the infinite set of replicas or images. We see immediately that it is a solution of the Poisson equation, from which the result can also be obtained more directly by linear independence. For the sake of comparison, it is worth calculating the field contributed by the primitive cell alone. 
This is simply proportional to the difference in the mass on the right and left of the position $x$:$$E_{p}(x)=2\pi G\int_{-L}^{L}\sigma(x')[\Theta(x'-x)-\Theta(x-x')]dx'$$ where $\Theta$ is the usual step function. Substituting for $\sigma(x)$ we find$$E_{p}(x)=4\pi Gi\sum{}_{n}^{'}c_{n}(L/\pi n)[\exp(i\pi nx/L)-(-1)^{n}].$$ While $E_{p}(x)$ and $E(x)$ are very similar, there is an important difference: the former is forced to vanish at the endpoints, $x=\pm L$ for all allowed mass distributions within the primitive cell. This result is expected since we only take into account the field of a neutral slice. Therefore, in the general case, $E_{p}(x)$ cannot represent the field on a circle (1-torus). So far our treatment is quite general and applies to any one-dimensional periodic mass (or charge) distribution. To gain further insight and make contact with recent work, let us focus on the situation where the sources are $2N$ discrete equal-mass points (sheets) with positions $x_{j}$ that live on the torus with the coordinate boundary points at $x=L$ and $-L$ identified. Then, in the primitive cell, the density fluctuation is $$\sigma_{p}(x)=m\sum_{j=1}^{2N}\left[\delta(x-x_{j})-\frac{1}{2L}\right]\label{eq:dis}$$ from which we may easily calculate the Fourier coefficients, $$c_{n}=\frac{1}{2L}\int_{-L}^{L}\exp\left(-i\pi nx'/L\right)\sigma_{p}(x')dx'=\frac{m}{2L}\sum_{j=1}^{2N}\exp\left(-i\pi nx_{j}/L\right),\label{eq:fouco}$$ for $n\neq0$. Then the gravitational potential and field consist of the contributions from the primitive cell and all the replicas. From Eqs. (\[eq:perpot\],\[eq:perfield\]) they reduce to$$\phi(x)=-4\pi mGL\sum_{j=1}^{2N}\sum_{n=1}^{\infty}{\textstyle (1/\pi n)^{2}cos(\pi n(x-x_{j})/L),}\label{eq:dispot}$$ $$E(x)=-\frac{\partial\phi}{\partial x}=-4mG\sum_{j=1}^{2N}\sum_{n=1}^{\infty}{\textstyle (1/n)sin(\pi n(x-x_{j})/L).}\label{eq:disfield}$$ Thus the Fourier representations of the periodic potential and field are straightforward. 
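These truncated sums are easy to evaluate numerically. The following Python sketch (with an illustrative set of sheet positions that is not taken from the text) evaluates the primitive-cell field $E_{p}(x)$ from its Fourier series and compares it with the direct evaluation of the step-function integral, which for the discrete density reduces to a count of sheets on the right minus sheets on the left plus the uniform-background piece; it also verifies that $E_{p}(\pm L)$ vanishes, as every term of the series does there.

```python
import numpy as np

G = m = 1.0
L = 1.0
xj = np.array([-0.7, -0.2, 0.3, 0.6])   # sheet positions (illustrative)

def E_p(x, nmax=20000):
    """Primitive-cell field from the truncated Fourier sum; the n and -n
    terms are complex conjugates, hence twice the real part of n >= 1."""
    n = np.arange(1, nmax + 1)
    cn = (m/(2*L)) * np.exp(-1j*np.pi*np.outer(n, xj)/L).sum(axis=1)
    terms = 1j*cn*(L/(np.pi*n)) * (np.exp(1j*np.pi*n*x/L) - (-1.0)**n)
    return 4*np.pi*G * 2.0*terms.real.sum()

def E_p_direct(x):
    """Same field from the step-function integral: for point sheets it is
    2*pi*G*m*(N_right - N_left) plus the uniform-background contribution."""
    n_right = np.sum(xj > x)
    n_left = np.sum(xj < x)
    background = len(xj)*x/L            # integral of the -2N*m/(2L) piece
    return 2*np.pi*G*m*(n_right - n_left + background)

print(abs(E_p(0.45) - E_p_direct(0.45)) < 1e-2)   # True (truncation error)
print(abs(E_p(L)) < 1e-8, abs(E_p(-L)) < 1e-8)    # True True
```

The small residual at interior points is the $O(1/n_{\max})$ truncation error of the slowly converging sine-type series.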
Direct Summation over Replicas\[sub:PBCsum\] -------------------------------------------- In the case of three dimensions one cannot do much better than Eqs. (\[eq:dispot\],\[eq:disfield\]) since the Ewald sums cannot be represented as simple analytic functions [@brush]. Fortunately, in the present case, we can improve on this situation. Consider a single particle of mass $m$ located at $x_{1}$. By summing over replicas, we can compute its contribution to the screened potential $\Psi(x,\kappa)$ directly:$$\Psi(x,\kappa)=2\pi mG\sum_{r=-\infty}^{\infty}\left|x-x_{1}-2rL\right|\exp\left(-\kappa\left|x-x_{1}-2rL\right|\right)$$ $$=2\pi mG(-\frac{\partial}{\partial\kappa})\sum_{r=-\infty}^{\infty}\exp\left(-\kappa\left|y_{1}-2rL\right|\right)$$ $$=2\pi mG(-\frac{\partial}{\partial\kappa})\sum_{r=-\infty}^{\infty}\{\exp\left(-\kappa(y_{1}-2rL)\right)\Theta\left(y_{1}-2rL\right)+\exp\left(\kappa(y_{1}-2rL)\right)\Theta\left(-y_{1}+2rL\right)\}\label{eq:dbsum}$$ where $y_{1}=x-x_{1}$. Choose integers $r_{<}(y_{1})$ and $r_{>}(y_{1})$ such that $r_{<}\leq y_{1}/2L\leq r_{>}=r_{<}+1$, i.e. $y_{1}/2L$ is bounded from below and above by this pair of adjacent integers. Then$$\sum_{r=-\infty}^{\infty}\exp\left(-\kappa(y_{1}-2rL)\right)\Theta\left(y_{1}-2rL\right)=\exp\left(-\kappa(y_{1}-2r_{<}L)\right)\sum_{s=-\infty}^{0}\exp(2\kappa Ls)$$ and similarly for the second sum in Eq. (\[eq:dbsum\]). Therefore each of the sums in Eq. (\[eq:dbsum\]) can be evaluated in terms of a geometric series to obtain the screened potential $$\Psi_{\sigma}(x,\kappa)=2\pi mG(-\frac{\partial}{\partial\kappa})\left\{ \left[\exp\left(-\kappa Y_{<}\right)+\exp\left(+\kappa Y_{>}\right)\right]/\left(1-\exp\left(-2\kappa L\right)\right)-1/\kappa L\right\} $$ where $Y_{<}=y_{1}-2r_{<}L$, etc., and we have subtracted the contribution from the average or background density, $m/2L$.
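The geometric-series resummation can be verified directly. In the sketch below (Python; $\kappa$, $L$ and $y_{1}$ are arbitrary illustrative values, with $y_{1}/2L$ not an integer so that the two partial sums do not share a boundary term), a brute-force sum over replicas is compared with the closed form in braces before the derivative and background subtraction:

```python
import math

# Illustrative check of sum_r exp(-kappa*|y - 2 r L|)
#   = [exp(-kappa*Y_<) + exp(kappa*Y_>)] / (1 - exp(-2 kappa L)),
# with Y_< = y - 2 r_< L >= 0 and Y_> = y - 2 r_> L <= 0 (values arbitrary).
kappa, L, y = 0.8, 1.0, 0.3

# brute-force replica sum; terms decay like exp(-2 kappa L |r|), so
# truncating at |r| = 40 is far below double precision
direct = sum(math.exp(-kappa*abs(y - 2*r*L)) for r in range(-40, 41))

r_lt = math.floor(y/(2*L))          # r_<
Y_lt = y - 2*r_lt*L                 # Y_< >= 0
Y_gt = y - 2*(r_lt + 1)*L           # Y_> <= 0
closed = (math.exp(-kappa*Y_lt) + math.exp(kappa*Y_gt)) \
         / (1.0 - math.exp(-2*kappa*L))

assert abs(direct - closed) < 1e-12
```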
Evaluating the derivative and then taking the limit $\kappa\rightarrow0$, we obtain the gravitational potential $\phi_{1}$ due to a single particle at $x_{1}$:$$\phi_{1}\left(x\right)=-\frac{\pi mG}{2L}\left(Y_{>}^{2}+Y_{<}^{2}\right).\label{eq:sinparpot}$$ It is important to recognize that $\phi_{1}\left(x\right)$ is a periodic function of its argument and can be evaluated anywhere on the periodic extension of the torus, i.e. on the real line. As physicists we are typically interested in values of $x$ and $x_{1}$ in the primitive cell, i.e. for $-L\leq x,\: x_{1}<L$ with the points at $x=\pm L$ identified. Then we quickly find that, for $y_{1}\geq0,$ $0\leq y_{1}/2L<1$ whereas, for $y_{1}<0,$ $y_{1}/2L$ lies in $[-1,0)$. Either way, the potential $\phi_{1}$, and therefore the field $E_{1}$, can be represented as $$\phi_{1}\left(x\right)=2\pi mG\left[\left|x-x_{1}\right|-\frac{1}{2L}\left(x-x_{1}\right)^{2}\right]\label{eq:sinpartpotfin}$$ $$E_{1}(x)=-\frac{\partial\phi_{1}}{\partial x}=2\pi mG\left[\frac{1}{L}(x-x_{1})+\Theta(x_{1}-x)-\Theta(x-x_{1})\right].\label{eq:sinpartfield}$$ Thus, in addition to the direct contribution from the mass located at $x_{1}$, there is an additional quadratic term in the potential and linear term in the field. Although these contributions are simple, care must be taken in their interpretation. They are not, respectively, equal to the potential and field contributed by the component of the background located between the point of application $x$ and the location of the source $x_{1}$, but rather twice as large! A number of observations are in order. First of all, these functions reproduce exactly the Fourier series derived above for the case of a single particle (for the potential, up to an additive constant, which is in any case arbitrary). Second, in the limit $L\rightarrow\infty,$ they reduce to the familiar results on the line.
Third, they are strictly functions of the displacement $x-x_{1}.$ This is important as all points on the torus are equivalent: there are no special positions or intervals. Fourth, it is not necessary to distinguish in which direction the distance between the points $x$ and $x_{1}$ is measured. Going in either direction around the torus yields the same value of $\phi_{1}.$ Fifth, defining $\Theta(0)=\frac{1}{2}$, the field vanishes at both $x=x_{1}$ and at $x=x_{1}-L$ mod$\left(2L\right)$, i.e. half way around the torus. Finally, when $x_{1}$ traverses the point at $L$ and reappears at $-L$, or vice-versa, there is no change in the field at $x$ as we would expect from the physics. Symmetry-Based Derivation ------------------------- In the above we have employed the machinery of a screening function to obtain the desired result. While it has all the right properties, it is worth asking if we could have obtained it from a simpler route. We seek a solution of Poisson’s equation for a single mass located at $x_{1}$ , $$\frac{\partial^{2}\phi}{\partial x^{2}}=4\pi G\sigma(x),\label{eq:poiseq}$$ where, in the primitive cell, $$\sigma=\sigma_{p}(x)=m\left[\delta\left(x-x_{1}\right)-\left(\frac{1}{2L}\right)\right].\label{eq:den1}$$ The general solution is $$\phi(x)=2\pi mG\left[\left|y_{1}\right|-\frac{1}{2L}y_{1}^{2}+by_{1}\right],$$ yielding $$E(x)=-\frac{\partial\phi}{\partial x}=2\pi mG\left[\frac{1}{L}(y_{1})-\Theta(y_{1})+\Theta(-y_{1})+b\right]$$ where $b$ is an arbitrary constant, as before $y_{1}=x-x_{1},$ and we have chosen the additive constant in the expression for $\phi$ such that $\phi(x=x_{1})=0$. Symmetry requires that there is no preferred direction on the torus. Therefore, regardless of the location of the mass at $x_{1}$, the average of the field in $[-L,L)$ must vanish. Then we immediately obtain $b=0$ and the results given in subsection \[sub:PBCsum\] above. 
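As a numerical cross-check of Eqs. (\[eq:sinpartpotfin\]) and (\[eq:sinpartfield\]) against the single-particle Fourier sums of Eqs. (\[eq:dispot\]) and (\[eq:disfield\]), the sketch below (Python; illustrative units $m=G=L=1$, particle at the origin) compares the two representations. The closed-form potential is shifted by its cell average $(2\pi/3)mGL$, since the $n\neq0$ Fourier sum has zero mean over the cell while the additive constant of the potential is arbitrary:

```python
import numpy as np

# Illustrative check (assumed units m = G = L = 1, single particle at x1 = 0).
m = G = L = 1.0
n = np.arange(1, 20001)

def phi_series(y):   # Eq. (dispot) restricted to a single particle
    return -4*np.pi*m*G*L*np.sum(np.cos(np.pi*n*y/L)/(np.pi*n)**2)

def E_series(y):     # Eq. (disfield) restricted to a single particle
    return -4*m*G*np.sum(np.sin(np.pi*n*y/L)/n)

def phi_closed(y):
    # Eq. (sinpartpotfin) minus its cell average (2 pi / 3) m G L, so that
    # it has zero spatial mean like the n != 0 Fourier sum
    return 2*np.pi*m*G*(abs(y) - y**2/(2*L)) - (2*np.pi/3)*m*G*L

def E_closed(y):     # Eq. (sinpartfield)
    return 2*np.pi*m*G*(y/L + (y < 0) - (y > 0))

for y in (-0.8, -0.3, 0.25, 0.6):
    assert abs(phi_series(y) - phi_closed(y)) < 1e-3
    assert abs(E_series(y) - E_closed(y)) < 1e-2
```

The potential series converges like $1/n^{2}$ and agrees tightly; the field series is a sawtooth-type sum and converges more slowly near the particle and the cell boundaries.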
We could also have arrived at this conclusion by noting that, for the same reason, $\phi(x)$ can only depend on the distance between $x$ and $x_{1}$. Finally, the requirement that $\phi(L)=\phi(-L)$ also demands the same conclusion. The fact that our limiting procedure, which is based on an exponential screening function, leads to a unique solution of the Poisson equation increases our confidence that the choice of a different screening function, e.g. a Gaussian, would not result in a different potential or field. For the sake of comparison, and to understand the connection with other recent work, it’s worthwhile to consider $E_{1p}(x)$, the field generated solely by $\sigma_{p}$, the charge distribution in the primitive cell. This would be the correct field if the net contribution from each replica vanished. We quickly find $$E_{1p}(x)=2\pi mG\left[\frac{1}{L}x+\Theta(x_{1}-x)-\Theta(x-x_{1})\right].\label{eq:E1prim}$$ Among other problems, note that it does not satisfy the symmetry requirements discussed above. We will return to this formulation in the ensuing discussion. N-body Simulation ================= In carrying out a simulation, we need to know the field acting on each particle. Summing over all the contributions, the total field at $x$ arising from the complete system of particles is then simply $$E(x)=4\pi mG\left[\frac{N}{L}(x-x_{c})+\frac{1}{2}(N_{R}(x)-N_{L}(x))\right]\label{eq:totfield}$$ where, in Eq. (\[eq:totfield\]), $x_{c}$ is the center of mass of the $2N$ particles in $[-L,\: L)$ and $N_{R}(x),\, N_{L}(x)$ are the number of particles to the right (left) of $x$ counted on the segment $[-L,\, L).$ Since, from Eq. (\[eq:sinpartfield\]), the field from a single particle vanishes at its location, Eq. (\[eq:totfield\]) gives the correct field acting on each particle, i.e. $E_{j}=E(x=x_{j}).$ The presence of the center of mass in Eq. 
(\[eq:totfield\]) means that the instantaneous field experienced by each particle depends on the dipole moment of the system. The dependence on the center of mass is essential as it ensures that when a particle passes from $x=L$ to $x=-L$ or vice-versa, there is no change in the field experienced by each particle. This was recognized as a basic problem in simulations of the one-dimensional, single-component plasma some time ago (see [@eldfeix] reprinted in [@lieb_matt]); perhaps the correct mathematical form of the field was simply not known at the time. In order to avoid discontinuous jumps in the field, a polarization charge was artificially induced at the system boundaries, and was changed whenever a particle “switched sides”. In this way the boundaries were seen as initially neutral reservoirs of particles. When a system particle enters one reservoir, another particle escapes from the other, so the boundaries are no longer neutral. For systems of interest the equations of motion of the system of particles can frequently be cast in the form $$\frac{dx_{j}}{dt}\equiv v_{j},\;\frac{dv_{j}}{dt}+\gamma v_{j}=E(x_{j})\label{eq:eqmo}$$ where the value of the friction constant $\gamma$ depends on the particular model [@MRGexp; @MR_jstat]. In the cosmological setting the time has been rescaled to retain the simplicity of the equations of motion and increases exponentially with the comoving time coordinate [@MRexp; @MR_jstat]. By carefully summing over the index $j$ we find that the velocity of the center of mass obeys the simple equation$$\frac{dx_{c}}{dt}\equiv v_{c}\,,\quad\frac{dv_{c}}{dt}+\gamma v_{c}=0.\label{eq:com}$$ When a particle traverses the coordinate boundary at $x=L$ the center of mass changes discontinuously. However, the center of mass velocity is a smooth function of time so the first order equation for $v_{c}(t)$ can be integrated immediately: $v_{c}(t)=v_{c}(0)\exp(-\gamma t)$.
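Both the equivalence of Eq. (\[eq:totfield\]) to the sum of single-particle fields and its continuity under a boundary traversal can be checked in a few lines (Python; units $G=m=L=1$ and particle positions are illustrative choices):

```python
import numpy as np

# Illustrative check (assumed units G = m = L = 1; positions arbitrary).
G = m = L = 1.0
xj = np.array([-0.9, -0.4, 0.1, 0.7])   # 2N = 4 particles, so N = 2

def E_total(x, xj):
    """Compact form, Eq. (totfield)."""
    N = len(xj)//2
    xc = xj.mean()                      # center of mass
    NR, NL = np.sum(xj > x), np.sum(xj < x)
    return 4*np.pi*m*G*((N/L)*(x - xc) + 0.5*(NR - NL))

def E_sum(x, xj):
    """Sum of single-particle fields, Eq. (sinpartfield)."""
    y = x - xj
    return np.sum(2*np.pi*m*G*(y/L + (y < 0) - (y > 0)))

for x in (-0.6, 0.0, 0.5):
    assert np.isclose(E_total(x, xj), E_sum(x, xj))

# Shifting a particle by one period (re-entry at the opposite boundary)
# changes N_R - N_L and x_c in compensating ways: the field is unchanged.
xj2 = xj.copy(); xj2[3] -= 2*L
assert np.isclose(E_total(0.0, xj), E_total(0.0, xj2))
```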
In particular, for the special case where the center of mass is initially at rest, its velocity maintains its initial value. On the other hand, $x_{c}(t)$ will change abruptly with each boundary traversal. In carrying out a simulation it is necessary to obtain the crossing times of adjacent particles. Starting at $x=-L$, label the particles according to their order so that $x_{2N}>\cdots>x_{j+1}>x_{j}>\cdots>x_{1}$ and define $z_{j}=x_{j+1}-x_{j}$, the displacement between the adjacent particles. Then, from Eqs. (\[eq:totfield\], \[eq:eqmo\]), we find that $z_{j}$ obeys $$w_{j}\equiv\frac{dz_{j}}{dt},\,\quad\frac{dw_{j}}{dt}+\gamma w_{j}=4\pi mG\left(\frac{N}{L}z_{j}-1\right)\label{eq:zmo}$$ for $j=1,\ldots,2N-1.$ To complete the ring we continue in the same sense and determine the rate of change of the displacement between $x_{2N}$ and $x_{1}$, that is $2L+x_{1}-x_{2N}\equiv z_{2N}$, and find that it too conveniently satisfies Eq. (\[eq:zmo\]). Thus, by defining a positive direction, or orientation, on the torus, we can keep track of all the relative positions between nearest-neighbor particle pairs subject to the constraints $$\sum_{1}^{2N}z_{j}=2L,\:\sum_{1}^{2N}w_{j}=0.\label{eq:constraints}$$ Since, from Eq. (\[eq:com\]) we already know the velocity of the center of mass $v_{c}$, we can invert the set $\left\{ w_{j}\right\} $, $v_{c}$, to obtain the particle velocities $v_{j}$ at any time using a matrix inversion given by Rybicki [@Rybicki]. Conceptually, since the particles have equal mass, except for labels, they appear to experience elastic collisions with their neighbors. When the positions of a pair of particles cross, the particles exchange accelerations, but the velocities are continuous. At such an event the labels of the particle pair are exchanged to maintain the ordering in the given direction. As time progresses we see that, for this completely ordered system, when $z_{j}=0,$ i.e.
when the $j^{th}$ and $(j+1)^{st}$ particles cross, $w_{j}$ changes sign. Moreover, since the particle labels have been exchanged, the rates of change of the two neighboring gaps, $z_{j-1}$ and $z_{j+1}$, also change discontinuously, i.e., $w_{j-1}\rightarrow w_{j-1}+w_{j}$ and $w_{j+1}\rightarrow w_{j+1}+w_{j}$ at the crossing time. This is all the information required to carry out a simulation. While we have focused on the gravitational system of equal masses, it’s straightforward to extend the approach to the case of unequal masses, as well as to the single and two-component plasmas. Below, in Fig. \[aa\], we present a series of snapshots from two recent simulations of a one-dimensional model of the expanding universe in comoving coordinates in which only gravitational forces apply. The model used was the one introduced originally by Rouet and Feix [@Rouet1; @Rouet2], i.e. Eq. (\[eq:eqmo\]) with $\gamma=\frac{1}{\sqrt{2}}$. Each simulation employed $2^{12}-1$ particles, and identical initial conditions were drawn from a uniform waterbag configuration in the $\mu$-space. The dimensionless, scaled, unit of time is expressed in terms of the Jeans’ period and the dimensionless length is simply the number of particles [@MRGexp; @MR_jstat]. It’s important to keep in mind that the scaled time is an exponentially increasing function of the comoving time coordinate [@MRexp; @MR_jstat]. The left sequence shows the evolution under the symmetric version of the system as it was originally employed (see the discussion below), while the sequence on the right exhibits the evolution obtained with Eq. (\[eq:totfield\]). In each sequence, the left hand column represents a histogram of positions, while the right hand column shows the positions in the position-velocity plane, i.e. what statistical physicists call $\mu$-space and astrophysicists call phase space.
We observe an initial shrinkage in the phase plane due to the friction constant, followed by the break-up of the system into many small clusters that results from the intrinsic gravitational instability [@peebles2]. As time progresses, in each case we observe the continual, self-similar creation of larger clusters from smaller ones. Comparing the two snapshot sequences, we see that initially, and for some time, they are virtually identical. Before $T=10$ there is no discernible difference between the two runs in the location of cluster positions in the phase plane. Then, at $T\simeq10$, we do notice a minute difference between them. Even at $T=12$ they are remarkably similar! Finally, at $T\simeq14$, we observe a noticeable difference between the runs: in the first sequence the few remaining clusters are slowly gathering towards the right hand coordinate boundary whereas, in the second, they remain evenly dispersed throughout the system. This occurs because there is a natural bias in the original implementation of the model. By forcing the system to be symmetric, the motion effectively takes place in a fixed external potential proportional to $-x^{2}$ (as explained below, for the symmetric system there is a “ghost” system of particles for $x<0$ that we don’t display). Consequently as the system loses “energy”, matter naturally gathers near the right hand coordinate boundary, in the neighborhood of the potential minimum. In the second simulation the potential is translation invariant on the torus so there are no favored locations and the clusters remain uniformly dispersed. Notice that, by the time we can see any significant differences, there are only about five clusters in the system so the boundaries are starting to play a role in the evolution. Thus the simulation is no longer representative of the larger system. At earlier times, while the number of clusters was large, as we would intuitively anticipate, the boundaries had little effect.
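For concreteness, the crossing scheme of Eqs. (\[eq:zmo\]) and (\[eq:constraints\]) can be sketched as a small event-driven integration. The snippet below (Python with SciPy) is an illustrative toy with $2N=4$, $G=m=L=1$ and $\gamma=1/\sqrt{2}$, not the production code used for the simulations; the initial gaps and relative velocities are arbitrary choices satisfying the ring constraints:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = m = L = 1.0
gamma = 1/np.sqrt(2)                  # Rouet-Feix value
Ntot = 4                              # 2N particles on the ring
A = 4*np.pi*m*G*(Ntot/2)/L            # coefficient of z_j in Eq. (zmo)
B = 4*np.pi*m*G

def rhs(t, y):                        # y = (z_1..z_2N, w_1..w_2N)
    z, w = y[:Ntot], y[Ntot:]
    return np.concatenate([w, -gamma*w + A*z - B])

def crossing(j):                      # event: gap z_j shrinks to zero
    g = lambda t, y: y[j]
    g.terminal, g.direction = True, -1
    return g

events = [crossing(j) for j in range(Ntot)]
z = np.array([0.2, 0.8, 0.5, 0.5])    # gaps, sum = 2L
w = np.array([0.3, -0.3, 0.4, -0.4])  # relative velocities, sum = 0
t, t_end, n_cross = 0.0, 1.0, 0

while t < t_end:
    sol = solve_ivp(rhs, (t, t_end), np.concatenate([z, w]),
                    events=events, rtol=1e-10, atol=1e-12, max_step=0.05)
    t = sol.t[-1]
    z, w = sol.y[:Ntot, -1].copy(), sol.y[Ntot:, -1].copy()
    if sol.status == 1:               # a crossing: exchange the labels
        j = next(i for i, te in enumerate(sol.t_events) if len(te))
        wj = w[j]
        w[j] = -wj                    # z_j bounces off zero
        w[(j - 1) % Ntot] += wj       # neighboring gaps pick up w_j
        w[(j + 1) % Ntot] += wj
        z[j] = 0.0
        n_cross += 1

assert n_cross >= 1                   # crossings do occur
# the ring constraints are preserved through all crossings
assert abs(z.sum() - 2*L) < 1e-6 and abs(w.sum()) < 1e-6
```

Each terminal event locates a crossing time $z_{j}=0$; the velocity updates implement the label exchange described above, and the constraints $\sum z_{j}=2L$, $\sum w_{j}=0$ are preserved throughout.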
To illustrate the working of the algorithm, in Fig. \[xc\] we plot the position of the center of mass for a small system ($255$ particles). We can see how $x_{c}$ shifts in time. There are periods of little change, as well as periods of significant change, when a cluster or group encounters a coordinate boundary. Discussion and Conclusions ========================== There is a long history of work on one-dimensional plasma models, both theoretical and computational, dating back to the 1960’s[@lieb_matt; @mattis]. Some papers explicitly discuss the case of periodic boundary conditions. As we pointed out earlier, in simulations of the single-component plasma Eldridge and Feix employed a polarization field at the boundaries to control the discontinuities in the electric field [@eldfeix]. In a review paper Kunz gives an analytical expression for the potential, but there is no derivation [@kunz]. Periodic boundary conditions are required for cosmological simulations [@HockneyEastwood; @Bert_rev; @hern_ewald]. The first one-dimensional cosmological simulations were carried out by Rouet and Feix [@Rouet1; @Rouet2]. To avoid the problem of introducing a “polarization” field at the boundaries, they assumed that the system was perfectly symmetric at all times, i.e. for every particle at $x_{j}>0$ with velocity v$_{j}$ there is an image or “ghost” particle at position $-x_{j}$ with velocity $-v_{j}$. When a particle reaches the coordinate boundary, the image particle is there to meet it. Thus the 2N-particle system is equivalent to an N-particle system with $0<x<L$ with reflecting boundaries. Notice that this construction forces the center-of-mass position and velocity to vanish, simplifying the equations of motion (see Eq.(\[eq:totfield\])). In other one-dimensional simulations, Aurell et al. employed open boundaries [@Aurell; @Aurell2]. In their studies an initially localized fluctuation inter-penetrates a quiescent region. 
The field they employed is essentially given by Eq. (\[eq:E1prim\]). Gouda and Yano [@Gouda_powsp], as well as Tatekawa and Maeda [@Tat], employed the Zeldovich approximation [@peebles2]. Details concerning the type of boundary conditions employed were not discussed in these works but, in contrast with Eq. (\[eq:totfield\]) above, the Zeldovich approximation, as normally derived, does not depend on the system center of mass. Gabrielli et al. have studied the behavior of an infinite system of sheets perturbed from lattice positions [@Joyce_1d]. They also employed the screening function introduced by Kiessling to obtain an analytical expression for the field so, in spirit, their work is closely related to ours. However, there is a surprising difference in the expression they obtained for the gravitational field for the case of periodic boundary conditions, which is also given by Eq. (\[eq:E1prim\]). Since it lacks the explicit dependence on $x_{c}$, it is not translation invariant on the torus and there is a discontinuity in the field when a particle passes through a coordinate boundary at $x=\pm L$. Consequently it doesn’t represent true motion on a torus. While it is tempting to contemplate that the re-introduction of particles that leave from $x=L$ at $x=-L$ , and vice-versa, is adequate to guarantee periodic boundary conditions, this is not the case. An additional difficulty is that the field they present is self-referential, i.e. since $E_{1p}(x=x_{1})\neq0$ (see Eq.(\[eq:E1prim\])), the field generated by a single particle will induce an acceleration of itself. The approach taken by Gabrielli et al. and the one taken here are remarkably similar, so it’s worth trying to sort out why they produce different results. Here, following Kiessling [@kies_swin], we have taken the usual approach of screening the gravitational potential so in Eq.(\[eq:scrpot\]) we are simply starting with a one-dimensional version of the Yukawa potential. 
In contrast, in [@Joyce_1d], Gabrielli et al. effectively screen the field of a particle located at $x_{1}$ directly by $\exp(-\kappa\left|x-x_{1}\right|)$. It is straightforward to verify that the potential corresponding to this screened field is $(2\pi mG/\kappa)[1-\exp(-\kappa\left|x-x_{1}\right|)]$. Then, for an arbitrary mass distribution $\rho(x),$ the corresponding “screened” potential is $$(2\pi mG/\kappa)\intop[1-\exp\left(-\kappa\left|x-x'\right|\right)]\rho(x')dx'.\label{eq:potGab}$$ For an extended mass distribution, for example the periodic system considered here, it is apparent that this won’t converge. Contemporary foundations of physics, both classical and quantum, are based on a Lagrangian or Hamiltonian formulation in which the potential plays a more fundamental role than the force. A good example is Feynman’s dissertation where he develops the path integral [@feyn-dis]. There Newton’s laws arise from paths for which the action is an extremum. To extend the current model to, say, the quantum regime, the availability of a clearly defined potential, such as Eq. (\[eq:sinpartpotfin\]), is essential. In conclusion, we have derived analytic solutions for the potential and field in a one-dimensional system of masses or charges with periodic boundary conditions, i.e. Ewald sums in one dimension. We have seen that each particle in such a system carries with it its own neutralizing background, without which the potential energy cannot be defined. For a system of particles, we have shown that the system “polarization” or center of mass must be explicitly included in the force law. We have also provided a set of tools for exploring the system evolution and have shown that it’s possible to construct an efficient algorithm for accomplishing this.
In the cosmological setting we have shown that the difference between the choice of completely symmetric, or just periodic, boundary conditions plays an insignificant role in the evolution until the number of clusters becomes small, at which time the influence of any boundary condition will become important. Finally, we showed that directly screening the force, as in [@Joyce_1d], instead of the potential leads to a divergent potential function for an extended system and is therefore not suitable for the preferred formulations of physics based on variational principles. In subsequent work we will explore other settings where boundaries play a more prominent role. The authors benefitted from interactions with Igor Prokhorenkov and Paul Ricker, the hospitality of the Université d’Orléans, and the support of the Research Foundation and the division of Technology Resources at Texas Christian University.
--- abstract: 'The early stages of decelerating gamma-ray burst afterglow jets have been notoriously difficult to resolve numerically using two dimensional hydrodynamical simulations even at very high-resolution, due to the extreme thinness of the blast wave and high outflow Lorentz factors. However, these resolution issues can be avoided by performing the simulations in a boosted frame, which makes it possible to calculate afterglow light curves from numerically computed flows in sufficient detail to accurately quantify the shape of the jet break and the post-break steepening of the light curve. Here, we study afterglow jet breaks for jets with opening angles of 0.05, 0.1 and 0.2 radians decelerating in a surrounding medium of constant density, observed at various angles ranging from on-axis to the edge of the jet. A single set of scale-invariant functions describing the time evolution of afterglow synchrotron spectral break frequencies and peak flux, depending only on jet opening angle and observer angle, are all that is needed to reconstruct light curves for arbitrary explosion energy, circumburst density and synchrotron particle distribution power law slope $p$. These functions are presented in the paper. Their time evolutions change directly following the jet break, although an earlier reported temporary post-break steepening of the cooling break is found to have been resolution-induced. We compare synthetic light curves to fit functions using sharp power law breaks as well as smooth power law transitions. We confirm our earlier finding that the measured jet break time is very sensitive to the angle of the observer and can be postponed significantly. 
We find that the difference in temporal indices across the jet break is larger than theoretically anticipated and is about $-(0.5 + 0.5p)$ below the cooling break and about $-(0.25 + 0.5p)$ above the cooling break, both leading to post-break slopes of roughly $0.25 - 1.3 p$, although different observer angles, jet opening angles and heuristic descriptions of the break introduce a wide range of temporal indices. Nevertheless, the post-break slope from our constant density ISM simulations is sufficiently steep to be hard to reconcile with post-break slopes measured for the *Swift* sample, suggesting that *Swift* GRBs mostly do not explode in a homogeneous medium or that the jet breaks are hidden from view by additional physics such as prolonged energy injection or viewing angle effects. A comparison between different smooth power law fit functions shows that although smooth power law transitions of the type introduced by Harrison et al. 1999 often provide better fits, smooth power law transitions of the type introduced by Beuermann et al. 1999 or even sharp power law fits are easier to interpret in terms of the underlying model. Light curves and spectral break and peak flux evolution functions will be made publicly available on-line at <http://cosmo.nyu.edu/afterglowlibrary>.' author: - 'Hendrik van Eerten$^1$, Andrew MacFadyen$^1$' bibliography: - 'jetbreakshape.bib' title: 'Gamma-ray burst afterglow light curves from a Lorentz-boosted simulation frame and the shape of the jet break' --- Introduction {#introduction_section} ============ Gamma-ray burst (GRB) afterglows are produced by non-thermal radiation from collimated decelerating relativistic outflows following the collapse of a massive star or a neutron star-neutron star or neutron star-black hole merger (for reviews, see e.g. @Piran2004 [@Meszaros2006; @Nakar2007; @Granot2007]).
Because they originate from cosmological distances, their jet nature cannot be observed directly but is expected theoretically from constraints on the energy budget of the outflow (the isotropic equivalent energy of the afterglow often being comparable to the solar rest mass) and inferred observationally from the *jet break* in the light curve. This break has been observed at various wavelengths ranging from radio to X-rays, and marks the onset of a steepening of the decay of the light curve. In this paper we present the most accurate description to date of the temporal and spectral evolution of the afterglow signal during the jet break, based on detailed relativistic hydrodynamics (RHD) calculations of the afterglow blast wave decelerating in a homogeneous medium. The steeper decay following the break is the result of two changes in the outflow. On the one hand the jet is starting to become less collimated. As a result, the area of the blast wave front increases and the jet decelerates faster than before because it starts to sweep up more circumburst matter. On the other hand, the ongoing deceleration even without spreading reaches a point where the relativistic beaming cone of the synchrotron emission at and behind the shock front becomes sufficiently wide for the lack of flux beyond the edges of the jet to become visible, whereas before only a small patch along the direction of the observer could be observed and a jet was indistinguishable from spherical outflow. Both effects are expected to occur at approximately the same point in time, when the jet half-opening angle $\theta_0 \sim 1 / \gamma$, with $\gamma$ the fluid Lorentz factor behind the shock. For the sideways spreading this is because $\theta_0 \sim 1 / \gamma$ marks the point where the fast spreading in the frame comoving with the jet becomes noticeable as well in the frame of the observer, while for the edge effect it marks the point where the beaming cones have become as wide as the jet itself.
Although the widening of the jet was originally argued to be the stronger effect [@Rhoads1999], subsequent numerical studies [@Granot2001; @Kumar2003; @Meliani2007; @Zhang2009; @vanEerten2011chromaticbreak; @Wygoda2011; @DeColle2012simulations] reveal the spreading not to be the exponential process described by [@Rhoads1999], for observationally relevant jet opening angles. As a result, the edge effect plays a strong role in shaping the jet break and the angle of the observer relative to the jet axis becomes relevant even for observer angles within $\theta_0$ [@vanEerten2010offaxis; @vanEerten2012observationalimplications; @vanEerten2011hiddenswift]. By now, theoretical models incorporate more realistic descriptions of jet spreading [@Granot2012]. Observational signatures of collimated afterglow outflow for a number of GRBs first started to emerge in the late nineties, both from the overall steepness of the light curve compared to theoretical expectations for a spherical explosion [@Sari1999] and observations of the jet break [@Beuermann1999; @Fruchter1999; @Harrison1999; @Kulkarni1999]. Because of the complexity of the dynamics of decollimating jets, afterglow jet breaks have been modeled by heuristic functions for the purpose of data fitting. From the beginning connected power laws have been used by many groups (e.g. @Fruchter1999 [@Kulkarni1999]) but also power laws with a smooth transition (@Beuermann1999 [@Harrison1999], using different descriptions for the transition). Sharp and smooth power laws to describe jet breaks (or breaks in general) in afterglows have also been used in many more recent studies (e.g. @Zeh2006 [@Liang2009; @Evans2009; @Racusin2009; @Guelbenzu2011; @Oates2011; @Fong2012; @Panaitescu2012]). The advantage of general heuristic functions to describe afterglow breaks is that they do not necessarily assume an underlying model (i.e. *jet* break) but aim to describe the observed shape of the data in a concise and convenient manner.
Identifying breaks as jet breaks is a separate step, where the steepening of the break and pre- and post-break relations between spectral and temporal slope (the “closure relations”, see e.g. @Zhang2004) are compared against theoretical expectations. Using such methods a lack of afterglow breaks has been reported for the *Swift* sample [@Kocevski2008; @Racusin2008; @Racusin2009], which has been attributed to the quality of the data [@Curran2008] and the effect of the observer angle (@vanEerten2011hiddenswift, providing a mechanism by which jet breaks can be postponed beyond *Swift*’s capability to observe, as suggested by e.g. @Kocevski2008). Recent light curves from numerical simulations demonstrate [@vanEerten2012scalings] that the shape of the afterglow synchrotron spectrum changes strongly directly following the jet break, which renders the standard application of the closure relations unreliable and might serve to explain the lack of success in using them to identify jet breaks [@Racusin2009]. It has recently been pointed out [@vanEerten2012scalings] that the shape of the jet break in the light curve is determined by a scale-invariant function that depends only on initial jet opening angle $\theta_0$ and observer angle $\theta_{obs}$ and that scales in a straightforward manner between jet energies and between circumburst densities. This function is calculated from high-resolution relativistic hydrodynamics (RHD) simulations of the jet dynamics in 2D that include lateral spreading and deceleration to trans-relativistic velocities. This scale invariance has a number of useful practical implications. It makes it possible to distill a description of the jet break from numerical simulations that includes the full complexities of 2D trans-relativistic jet dynamics without the need to explicitly probe the parameter space in burst explosion energy and circumburst density with time-consuming RHD simulations.
The resulting jet break description will be general and will uniquely constrain the post-break closure relations and light curve slope. Existing smooth power law descriptions of the break can be compared against the simulation-derived shape. Simulation-derived jet break functions can even be fitted directly against the data in order to identify jet breaks. When these dimensionless jet break functions are scaled in order to fit the real time evolution of the data, the ratio $\rho_0 / E_{iso}$ is obtained, yielding important constraints on the physics of the progenitor. Although significant progress has been made recently in numerically resolving afterglow jets properly using adaptive mesh refinement techniques [@Zhang2009; @vanEerten2010offaxis; @vanEerten2011chromaticbreak; @Wygoda2011; @vanEerten2012boxfit; @DeColle2012simulations], to date the early stages of the blast wave evolution have not been fully resolved due to the extreme sharpness of the blast wave profile in the self-similar Blandford-McKee (BM, @Blandford1976) solution for an ultra-relativistic blast wave that provides the initial conditions for the simulations. As a consequence of this steepness, most matter in the blast wave is contained within a thin shell of typical width $\Delta R \sim R / 12 \gamma^2$, with $R$ the blast wave radius. When this thin shell is not completely resolved, the result is a transient startup phase characterized by a temporary artificial drop in the Lorentz factor of the outflow. Only after the blast wave has been evolved for some time does the fluid profile return to the shape predicted from analytically evolving the initial conditions.
In practice, this transient feature would typically still be present at least at the onset of the jet break, because of the trade-off between decreased resolution at earlier starting times (due to the $\gamma$-dependency of the width) and decreased validity at late times (the closer to the jet break, the less valid the assumption of purely radial flow). In addition, when computing synthetic light curves, one has to account for the fact that the observed flux at a given point in observer time is made up of emission from a wide range of emission times, with contributions from the back of the jet being emitted earlier than those from the front in order to arrive at the same time. In order to obtain a truly accurate picture of the shape of the jet break, it is therefore required to resolve the pre-break flow of the blast wave completely up to sufficiently high Lorentz factor that the effect of the start-up transient is removed. In the current study we completely resolve the afterglow blast wave at extremely high Lorentz factors and early times by performing the RHD calculation in a different frame than the usual burster frame, which is at rest with respect to the explosion engine and the observer (aside from a cosmological redshift). By changing to a frame moving at fixed relativistic velocity along the jet axis, the narrowness of the jet profile due to Lorentz contraction is reduced and all relative Lorentz factors become small [@MacFadyen2013]. The price that is paid for this frame transformation, the loss of simultaneity across the grid, can be accounted for when the radiation from the evolving blast wave is calculated. The features of the dynamics of narrow jets and ultra-high initial Lorentz factor ($\ge 100$) flows will be presented in a separate study [@MacFadyen2013]. 
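The mixing of emission times within a single observer time can be made concrete with a toy calculation (flat spacetime, distant observer along the $+x$ axis; the numbers are purely illustrative):

```python
C = 2.998e10  # speed of light, cm/s

def arrival_time(t_emit, x_los):
    """Observer arrival time (up to an overall constant) for a photon
    emitted at lab time t_emit (s) from line-of-sight position x_los (cm)."""
    return t_emit - x_los / C

# A shell front moving at beta = 0.99 emits from far along the line of
# sight; its photon emitted at t = 1000 s arrives at the same observer
# time as a photon emitted from the back (x = 0) at t = 10 s.
t_arrival_front = arrival_time(1000.0, 0.99 * C * 1000.0)  # 10 s
t_arrival_back = arrival_time(10.0, 0.0)                   # 10 s
```

Emission spanning two decades in lab time thus contributes to one observer time, which is why the startup transient must be eliminated well before the times of interest.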
In this work we limit ourselves to the radiation from afterglow jets and determine the general shape of the jet break for afterglow blast waves that start out highly relativistic (Lorentz factor of 100) and have an initial half-opening angle $\theta_0$ of 0.05, 0.1 or 0.2 rad., moving into a homogeneous environment. The observer angle is varied from observers looking straight into the jet to observers positioned on the edge of the jet. In §\[dynamics\_section\] we discuss the methods of our RHD simulations and our implementation of the BM initial conditions in a boosted frame. In §\[radiation\_section\] we discuss how light curves are calculated from simulations. In §\[criticals\_section\] we show our results for the small set of key characteristic quantities (i.e. the break frequencies of the power law synchrotron spectrum and the peak flux) that determine the afterglow spectrum. We then use the characteristic quantities to calculate afterglow light curves at optical and X-ray frequencies in §\[lightcurves\_section\] and compare the shape of the jet break to earlier parametrizations from the literature. Our results are summarized and discussed in §\[summary\_section\]. Some technical aspects concerning radiative transfer from a Lorentz-boosted simulation frame are discussed in appendix \[boosted\_frame\_appendix\]. Methods for blast wave dynamics {#dynamics_section} =============================== We assume that the radiation and the dynamics of the collimated relativistic blast wave can be separated. This assumption remains valid as long as the emitted energy is only a small fraction of the blast wave energy and as long as there is negligible feedback from the radiation on the jet dynamics. Additionally, we assume that the magnetic fields generated at the front of the blast wave also contain only a small fraction of the available energy. Under these assumptions the jet dynamics can be computed using RHD simulations.
Description of the RHD code {#numerics_subsection} --------------------------- We employ the <span style="font-variant:small-caps;">ram</span> parallel adaptive-mesh refinement (AMR) code [@Zhang2006]. The AMR technique, where the resolution of the grid can be dynamically doubled locally where necessary, is important in order to resolve the wide range of spatial scales involved, given the $\Delta R \sim R / 12 \gamma^2$ width of the blast wave in the lab frame in which the origin of the explosion is at rest, where $\gamma$ can be $> 100$ for a typical afterglow blast wave. <span style="font-variant:small-caps;">Ram</span> makes use of the PARAMESH AMR tools [@MacNeice2000] from FLASH 2.3 [@Fryxell2000]. We use the second order F-PLM scheme [@Zhang2006] for the hydrodynamical evolution. In this study, a Taub equation of state is used where the adiabatic index smoothly varies between 4/3 for relativistic fluids and 5/3 for non-relativistic fluids, as a function of the ratio between comoving density and pressure [@Mignone2005; @Zhang2009]: $$(h - 4p)(h - p) = \rho^2 c^4,$$ where $p$ is the pressure, $\rho$ the comoving density and $h = \rho c^2 + p + e$ the enthalpy density, with $e$ the energy density. Scale invariant initial conditions {#initial_conditions_subsection} ---------------------------------- Blast waves for three different jet half opening angles were simulated for this study: $\theta_0 = 0.05$, $0.1$ and $0.2$ rad. The circumburst number density $n_0$ of the interstellar medium (ISM) is kept fixed at $1$ cm$^{-3}$, or $\rho_0 = m_p$ g cm$^{-3}$, with density $\rho$ and number density $n$ related according to $\rho \equiv n \times m_p$, where $m_p$ is the proton mass. A more general expression for the circumburst density environment is given by $\rho_0 \equiv \rho_{0,ref} (r / r_{ref})^{-k} \equiv A r^{-k}$, with $r$ the radial coordinate, $\rho_{0, ref}$, $r_{ref}$ and $A$ parameters setting the density scale and $k$ setting the power law slope of the medium.
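The Taub-based equation of state above fixes the enthalpy as an explicit function of $\rho$ and $p$: solving the quadratic for the positive root and working in units with $c = 1$, one can verify the limiting adiabatic indices directly. A minimal sketch (not the code used in <span style="font-variant:small-caps;">ram</span>):

```python
import math

def enthalpy_density(rho, p):
    """Positive root of (h - 4p)(h - p) = rho^2 (units with c = 1):
    h = (5p + sqrt(9 p^2 + 4 rho^2)) / 2."""
    return 0.5 * (5.0 * p + math.sqrt(9.0 * p * p + 4.0 * rho * rho))

def effective_adiabatic_index(rho, p):
    """Gamma_eff defined through p = (Gamma_eff - 1) e,
    with internal energy density e = h - rho - p."""
    e = enthalpy_density(rho, p) - rho - p
    return 1.0 + p / e

# Gamma_eff -> 4/3 for p >> rho (relativistic) and 5/3 for p << rho.
```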
Boosted frame simulations of blast waves decelerating in a stellar wind environment where $k = 2$ will be presented in a follow-up study. All simulations start from the self-similar BM solution for impulsive energy injection with isotropic equivalent explosion energy $E_{iso}$ set at $10^{53}$ erg. The actual values for the initial energy and circumburst density are completely arbitrary and the hydrodynamics equations can be expressed in terms of dimensionless variables. Generalizing these variables from the ISM case [@vanEerten2012boxfit] to arbitrary $k$ values, we have $$\mathcal{A} = \frac{r}{ct}, \quad \mathcal{B} = \frac{E_{iso} t^2}{A r^{5-k}}, \quad \theta, \quad \theta_0,$$ that are scale invariant under the transformations $$\begin{aligned} E'_{iso} & = & \kappa E_{iso}, \nonumber \\ A' & = & \lambda A, \nonumber \\ r' & = & (\kappa / \lambda)^{1/(3-k)} r, \nonumber \\ t' & = & (\kappa / \lambda)^{1/(3-k)} t.\end{aligned}$$ All scale-invariance relations follow from straightforward dimensional analysis, and are therefore not limited to the ultra-relativistic self-similar BM solution but apply throughout the evolution of the blast wave when jet spreading and deceleration occur. Simulations in a boosted frame ------------------------------ Two challenging aspects of numerically simulating BM type outflows are the severe steepness of the radial profile of the various fluid quantities (i.e. the primitive quantities Lorentz factor $\gamma$, comoving density $\rho$, pressure $p$, and consequently the conserved quantities as well) and the ultra-relativistic nature of the outflow. Resolution issues regarding numerically calculated blast waves and light curves have been discussed by various authors [@Zhang2009; @vanEerten2010transrelativistic; @vanEerten2012boxfit; @DeColle2012simulations]. As mentioned in the introduction, the most striking feature of an underresolved BM blast wave is a temporary spurious drop in Lorentz factor near the shock front. 
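The invariance of the dimensionless variables under the $\kappa$, $\lambda$ transformations above can be verified numerically. A sketch with toy values (the environment normalization is called `A_env` here to avoid clashing with the invariant $\mathcal{A}$):

```python
C = 2.998e10  # speed of light, cm/s

def invariants(r, t, E_iso, A_env, k):
    """Dimensionless variables: calA = r/(ct), calB = E t^2/(A_env r^(5-k))."""
    return r / (C * t), E_iso * t ** 2 / (A_env * r ** (5.0 - k))

def rescale(r, t, E_iso, A_env, k, kappa, lam):
    """E' = kappa E, A' = lam A, r' = (kappa/lam)^(1/(3-k)) r, same for t."""
    f = (kappa / lam) ** (1.0 / (3.0 - k))
    return f * r, f * t, kappa * E_iso, lam * A_env
```

Evaluating the invariants before and after a rescaling with arbitrary $\kappa$ and $\lambda$ returns identical values for both the ISM ($k=0$) and wind ($k=2$) cases, as the dimensional analysis guarantees.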
Because the observed flux strongly depends on the Lorentz factor ($F_\nu \propto \gamma^2$, due to relativistic beaming), this strongly impacts the light curve. In order to understand the early time dynamics, it is important to start from a time when outflow peak Lorentz factor $\gamma \gg 1 / \theta_0$ (the point at which sideways spreading is expected to become relevant and when the edges of the jet become observable) and ideally, any transient behavior due to numerical resolution should have subsided before this point. In the current work we have used cylindrical ($R$, $z$) coordinates. The initial conditions were provided by the BM solution [@Blandford1976], but expressed in a Lorentz boosted frame [@MacFadyen2013]. For all simulations in this study, the simulation frame was boosted with Lorentz factor $\gamma_S = 5$. All jets start with peak lab frame Lorentz factor $\gamma_0$ of 100 at the on-axis tip of the jet, though some were also run with $\gamma_0 = 50$ to check for convergence (which is expected for $\gamma_0 \gg 1 / \theta_0$). Resolution ---------- ![Comparison of jet ($\theta_0 = 0.05$ rad) resolution between <span style="font-variant:small-caps;">boxfit</span> (dashed curve) and moving frame (solid curve) for a jet moving into the ISM. Top figure shows the evolution of the peak blast wave Lorentz factor (using $\beta \times \gamma$, the spatial component of the four-velocity) along the jet axis. According to the BM solution, $\gamma \propto t^{-3/2}$ and this slope is indicated by a dotted line. 
The bottom plot shows an X-ray light curve with jet break observed at $5 \times 10^{17}$ Hz, calculated using $E_{iso} = 10^{53}$ erg, $n_0 = 1$ cm$^{-3}$ (ISM), $p = 2.5$, $\epsilon_e = 0.1$, $\epsilon_B = 10^{-3}$, $\xi_N = 1$.[]{data-label="resolution_figure"}](fig1a.eps "fig:"){width="\columnwidth"} ![Comparison of jet ($\theta_0 = 0.05$ rad) resolution between <span style="font-variant:small-caps;">boxfit</span> (dashed curve) and moving frame (solid curve) for a jet moving into the ISM. Top figure shows the evolution of the peak blast wave Lorentz factor (using $\beta \times \gamma$, the spatial component of the four-velocity) along the jet axis. According to the BM solution, $\gamma \propto t^{-3/2}$ and this slope is indicated by a dotted line. The bottom plot shows an X-ray light curve with jet break observed at $5 \times 10^{17}$ Hz, calculated using $E_{iso} = 10^{53}$ erg, $n_0 = 1$ cm$^{-3}$ (ISM), $p = 2.5$, $\epsilon_e = 0.1$, $\epsilon_B = 10^{-3}$, $\xi_N = 1$.[]{data-label="resolution_figure"}](fig1b.eps "fig:"){width="\columnwidth"} The simulation frame time duration of each simulation was $2 \times 10^7$ seconds. The cylindrical grids run from $-2 \times 10^7$ ls (lightseconds) to $2 \times 10^7$ ls in the $z$ direction and out to $2 \times 10^7$ ls in the $R$ direction perpendicular to the jet axis. The initial peak refinement level is 15, with 8 base level blocks and 8 cells per block in each direction. The smallest cell size at peak refinement level is therefore $\delta z = 2 \delta R = 19.1$ ls $ = 5.72 \times 10^{11}$ cm. Note that these are expressed in the boosted frame, so that the resolution in the $z$-direction is better compared to the lab frame[^1] by a factor of $\gamma_S$. We enforce an upper limit on the total number of blocks on the grid. As the blast wave expands in size on the grid, the peak refinement level is decreased in order not to exceed this block limit. The top panel of Fig. 
\[resolution\_figure\] shows the evolution of the Lorentz factor at the shock front (in the lab frame) along the jet axis, compared to that in earlier work [@vanEerten2012boxfit], for the case where $\theta_0 = 0.05$ rad. The Lorentz factor is measured at the numerically determined momentum maximum, which is slightly behind the exact position of the shock front. The Lorentz factor of the boosted frame simulation (solid line) agrees with the BM solution at that position to within $\sim 1\%$. The dotted line shows the BM scaling $\gamma \propto t^{-3/2}$ appropriate for the behavior of the shock Lorentz factor. Radiation {#radiation_section} ========= ![Early time (observer time $t = 10^{-2}$ days) pre-break spectrum. $E_{iso} = 10^{53}$ erg, $n_0 = 1$ cm$^{-3}$ (ISM), $p = 2.5$, $\epsilon_e = 0.1$, $\epsilon_B = 10^{-3}$, $\xi_N = 1$. Dashed grey lines indicate asymptotic slopes, from left to right: $2$, $1/3$, $(1-p)/2$, $-p/2$.[]{data-label="powerlaws_figure"}](fig2.eps){width="\columnwidth"} The algorithm used to calculate the radiation for a given observer time, frequency, angle and distance is nearly identical to that used in [@vanEerten2012boxfit], which in turn was based on [@Sari1998] and [@Granot1999]. The only difference is that it is now applied to a boosted frame simulation rather than a non-moving frame simulation. The radiative transfer equations are solved for a large number of rays through the evolving blast wave. The stepsize along the rays is set by the number of data dumps from the simulation (3000, although we found in practice that the light curves were converged even for 300 data dumps). The conceptual details of the radiative transfer approach for a moving frame are provided in appendix \[boosted\_frame\_appendix\]. The emission and absorption coefficients are calculated for synchrotron emission and synchrotron self-absorption (s.s.a.).
The local synchrotron emission spectrum is given by a series of sharply connected power laws, with peak flux and spectral breaks determined by the local state of the fluid. A relativistic distribution of shock-accelerated particles is assumed, carrying a fraction $\epsilon_e$ of the local internal energy density and with power law slope $-p$ (not to be confused with pressure $p$). The fraction of available electrons that is accelerated is given by $\xi_N$. A further fraction $\epsilon_B$ of the local internal energy density resides in the shock-generated magnetic field. The effect of electron cooling is included using a global estimate for the electron cooling time $t_c$, by equating it to the lab frame time since the explosion. The spectral shape of the absorption coefficient $\alpha_\nu$ for s.s.a. is also given by sharply connected power laws and the effect of electron cooling on $\alpha_\nu$ is ignored (in any case, the error thus introduced is negligible compared to the small error from using a global rather than a local estimate for electron cooling. Global and local electron cooling are compared in @vanEerten2010offaxis). Mathematical expressions for the emission and absorption coefficients can be found in @vanEerten2012boxfit. After all rays have emerged from the blast wave and the observed flux is calculated by integrating over the rays, the observed spectrum will again consist of a set of power laws, now smoothly connected due to the different break frequencies at different contributing parts of the fluid. S.s.a. manifests itself as an additional break in the spectrum, occurring typically at radio wavelengths. An example spectrum, calculated from our Lorentz-boosted simulation for $\theta_0 = 0.05$ rad and ISM environment, is provided by Fig. \[powerlaws\_figure\], and reveals how the full synchrotron spectrum can be reproduced even at $10^{-2}$ days.
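The sharply connected power-law shape described above, with asymptotic slopes $2$, $1/3$, $(1-p)/2$ and $-p/2$ as in Fig. \[powerlaws\_figure\], can be sketched as follows (a schematic of the local spectral shape, not the code actually used):

```python
def sharp_spectrum(nu, F_peak, nu_a, nu_m, nu_c, p):
    """Sharply connected power-law synchrotron spectrum, slow cooling,
    ordering nu_a < nu_m < nu_c. Asymptotic slopes: 2, 1/3, (1-p)/2, -p/2."""
    if nu < nu_a:                              # self-absorbed segment
        return F_peak * (nu_a / nu_m) ** (1.0 / 3.0) * (nu / nu_a) ** 2
    if nu < nu_m:                              # regime D
        return F_peak * (nu / nu_m) ** (1.0 / 3.0)
    if nu < nu_c:                              # regime G
        return F_peak * (nu / nu_m) ** ((1.0 - p) / 2.0)
    return (F_peak * (nu_c / nu_m) ** ((1.0 - p) / 2.0)
            * (nu / nu_c) ** (-p / 2.0))       # regime H
```

The observed spectrum is a superposition of many such local spectra with shifted break frequencies, which is what smooths the breaks in Fig. \[powerlaws\_figure\].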
A comparison between an X-ray light curve from the boosted frame and from <span style="font-variant:small-caps;">boxfit</span> [@vanEerten2012boxfit] is given by the bottom plot of Fig. \[resolution\_figure\]. Note that the light curve is produced by the integrated emission from the entire blast wave and as a result, the resolution difference in the dynamics as shown in the top panel of Fig. \[resolution\_figure\] does not directly reflect the discrepancy in the observed emission. Whether a lower resolution for the blast wave dynamics leads to an overestimate or an underestimate of the flux depends on the spectral regime that is observed. Although Fig. \[powerlaws\_figure\] includes s.s.a., we will focus in this work on optical and X-ray frequencies (where the jet break is typically observed) and postpone a detailed treatment of the s.s.a. break to future work. Resolution ---------- All light curves were calculated for $p = 2.5$, which is sufficient to derive light curves for arbitrary values $p > 2$, as explained below in section \[criticals\_section\]. Lacking an upper cut-off to the accelerated particle energy distribution, our radiation model is invalid for $p \le 2$ because then the integral for the total accelerated particle energy diverges. The following settings were used to compute numerically converged synthetic light curves from the boosted frame simulations. 3000 simulation snapshots were probed for computing on-axis light curves, while 300 snapshots were probed for computing off-axis light curves. A matrix of rays was used consisting of 1500 rays logarithmically spaced in the radial direction and 100 evenly spaced in the angular direction (or 1 in the angular direction, for on-axis observations). These directions refer to coordinates on the plane perpendicular to the observer (in the lab frame, see appendix \[boosted\_frame\_appendix\]), and the inner and outer boundaries of this plane are $10^{12}$ and $10^{18}$ cm respectively.
Each light curve has 150 data points between observer times of $10^{-4}$ days and $10^2$ days, although only 85 (60) data points between $0.01$ (0.1) and 26 days were used for analysis in order to ensure complete coverage of the observer times by the emission times. Light curves were calculated for half-opening angles $\theta_0 = 0.05$, $0.1$ and $0.2$ rad and observer angles $\theta_{obs}$ that were a fraction 0, 0.2, 0.4, 0.6, 0.8 or 1 of $\theta_0$. Peak flux and break frequencies {#criticals_section} =============================== Theory ------ In section \[initial\_conditions\_subsection\] we demonstrated the scale-invariance of the jet dynamics between different jet energies and circumburst densities. In [@vanEerten2012scalings] we demonstrated that similar scalings apply in the asymptotic regimes of the observed spectrum. Although additional constants are introduced when calculating synchrotron radiation, such as the electron mass $m_e$, these can be identified and isolated in the flux equations in a given spectral regime and the resulting scale invariance is again a result of dimensional analysis. Invariance of the fluxes in each asymptotic spectral regime is equivalent to scale invariance of the critical quantities that determine the shape of the spectrum: peak flux $F_{peak}$, synchrotron break frequency $\nu_m$, cooling break frequency $\nu_c$ and s.s.a. break frequency $\nu_a$. The shapes of the spectral transitions are not scale-invariant, but can be modeled using a smooth connection between power laws (see @Granot2002 [@vanEerten2009BMscalingcoefficients; @Leventis2012]). 
The flux for a given observation can be calculated from the characteristic scale-invariant evolution of the critical quantities plus a description of the spectral transitions (which can also be a simple sharp power law approximation), and from three sets of parameters defining the observation and model: the observer parameters $z$ (redshift), $d_L$ (luminosity distance), $\theta_{obs}$ (observer angle), $t_{obs}$ (time) and $\nu$ (frequency); the explosion parameters $k$, $A$, $E_{iso}$, $\theta_0$; the radiation parameters $p$, $\epsilon_B$, $\epsilon_e$, $\xi_N$. The dependency of the flux on $d_L$ and $z$ is straightforward, with $F_\nu \propto d_L^{-2}$ and flux, frequency and time depending on $z$ in the standard manner. Different values for $\theta_{obs}$, $\theta_0$ and $k$ lead to different evolution of the characteristic quantities $\nu_a$, $\nu_m$, $\nu_c$ and $F_{peak}$. Scale invariance takes care of $E_{iso}$ and $A$, while the dependency of the characteristic quantities on $\epsilon_B$, $\epsilon_e$, $\xi_N$ remains unchanged throughout the evolution of the decollimating blast wave and can be determined analytically for general $p$ and $k$. The dependency of the characteristic quantities on $p$ is constant in time and known analytically, meaning that once the evolution for a given $p$ value is known, their evolution for any $p$ value can be trivially obtained. Different spectral orderings of the break frequencies $\nu_a$, $\nu_m$, $\nu_c$ lead to different time evolutions. 
Scale factors:

$\displaystyle \kappa \equiv \left( \frac{E_{iso}}{10^{53} \textrm{ erg}} \right), \qquad \lambda \equiv \left( \frac{n_{0, ref}}{1 \textrm{ cm}^{-3}} \right), \qquad \tau \equiv (\lambda / \kappa)^{1/(3-k)} t_{obs} / (1+z)$

General $k$:

$\displaystyle F_{peak} = \frac{(1+z)}{d^2_{28}} \frac{p-1}{3p-1} \epsilon_e^0 \epsilon_B^{1/2} \xi_N^1 \kappa^{\frac{3(2-k)}{2(3-k)}} \lambda^{\frac{3}{2(3-k)}} \mathfrak{f}_{peak} (\tau ; \theta_0, \theta_{obs}, k)$

$\displaystyle \nu_{m} = (1+z)^{-1} \left( \frac{p-2}{p-1} \right)^2 \epsilon_e^2 \epsilon_B^{1/2} \xi_N^{-2} \kappa^{\frac{-k}{2(3-k)}} \lambda^{\frac{3}{2(3-k)}} \mathfrak{f}_{m} (\tau ; \theta_0, \theta_{obs}, k)$

$\displaystyle \nu_{c} = (1+z)^{-1} \epsilon_e^0 \epsilon_B^{-3/2} \xi_N^0 \kappa^{\frac{3k-4}{2(3-k)}} \lambda^{\frac{-5}{2(3-k)}} \mathfrak{f}_{c} (\tau ; \theta_0, \theta_{obs}, k)$

ISM ($k = 0$):

$\displaystyle F_{peak} = \frac{(1+z)}{d^2_{28}} \frac{p-1}{3p-1} \epsilon_e^0 \epsilon_B^{1/2} \xi_N^1 \kappa \lambda^{1/2} \mathfrak{f}_{peak} (\tau ; \theta_0, \theta_{obs}, k = 0)$

$\displaystyle \nu_{m} = (1+z)^{-1} \left( \frac{p-2}{p-1} \right)^2 \epsilon_e^2 \epsilon_B^{1/2} \xi_N^{-2} \kappa^0 \lambda^{1/2} \mathfrak{f}_{m} (\tau ; \theta_0, \theta_{obs}, k = 0)$

$\displaystyle \nu_{c} = (1+z)^{-1} \epsilon_e^0 \epsilon_B^{-3/2} \xi_N^0 \kappa^{-2/3} \lambda^{-5/6} \mathfrak{f}_{c} (\tau ; \theta_0, \theta_{obs}, k = 0)$

Stellar wind ($k = 2$):

$\displaystyle F_{peak} = \frac{(1+z)}{d^2_{28}} \frac{p-1}{3p-1} \epsilon_e^0 \epsilon_B^{1/2} \xi_N^1 \kappa^0 \lambda^{3/2} \mathfrak{f}_{peak} (\tau ; \theta_0, \theta_{obs}, k = 2)$

$\displaystyle \nu_{m} = (1+z)^{-1} \left( \frac{p-2}{p-1} \right)^2 \epsilon_e^2 \epsilon_B^{1/2} \xi_N^{-2} \kappa^{-1} \lambda^{3/2} \mathfrak{f}_{m} (\tau ; \theta_0, \theta_{obs}, k = 2)$

$\displaystyle \nu_{c} = (1+z)^{-1} \epsilon_e^0 \epsilon_B^{-3/2} \xi_N^0 \kappa^{1} \lambda^{-5/2} \mathfrak{f}_{c} (\tau ; \theta_0, \theta_{obs}, k = 2)$

Table
\[characteristics\_table\] summarizes these properties of the light curves. Here the general $k$ case is shown as well as the ISM and stellar wind cases separately. $d_{28}$ is the luminosity distance $d_L$ in units of $10^{28}$ cm. The functions $\mathfrak{f}_{peak} (\tau ; \theta_0, \theta_{obs}, k)$, $\mathfrak{f}_{m} (\tau ; \theta_0, \theta_{obs}, k)$ and $\mathfrak{f}_{c} (\tau ; \theta_0, \theta_{obs}, k)$ denote the scale-invariant time evolution of the characteristic quantities as they are determined numerically from analyzing light curves computed from the boosted frame simulations for each spectral regime. These functions can be scaled from their baseline values to arbitrary explosion energy and circumburst density by plugging in the scaled values for $\kappa$, $\lambda$ and $\tau$ from the top section of the table into the equations in the lower sections of the table. Since their dependency on the radiation parameters $\epsilon_B$, $\epsilon_e$, $\xi_N$ and $p$ is known and constant in time, these terms have been made explicit in the table. Left implicit is the fact that different spectral orderings lead to different evolution curves, which will affect the characteristic scale-invariant functions $\mathfrak{f}$ but not their pre-factors. In the remainder of this paper we will discuss the *slow cooling* case (which is the one usually observed in practice), where $\nu_m < \nu_c$. The observed flux follows from the characteristic quantities according to $$\begin{aligned} F_D & = & F_{peak} \left( \frac{\nu}{\nu_m} \right)^{1/3}, \qquad \nu < \nu_m < \nu_c, \nonumber \\ F_G & = & F_{peak} \left( \frac{\nu}{\nu_m} \right)^{(1-p)/2}, \qquad \nu_m < \nu < \nu_c, \nonumber \\ F_H & = & F_{peak} \left( \frac{\nu_c}{\nu_m} \right)^{(1-p)/2} \left( \frac{\nu}{\nu_c} \right)^{-p/2}, \qquad \nu_m < \nu_c < \nu, \label{flux_equations}\end{aligned}$$ where the labels $D$, $G$, $H$ have been chosen to match the notation from [@Granot2002]. 
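To make the use of the table concrete, here is a sketch of the ISM ($k = 0$) row for $\nu_m$, with the dimensionless, simulation-derived evolution function $\mathfrak{f}_m$ passed in as a callable (a toy power law stands in for the actual numerical curve):

```python
def nu_m_ism(t_obs_days, z, kappa, lam, eps_e, eps_B, xi_N, p, f_m):
    """nu_m from the ISM (k = 0) row of the scalings table; f_m(tau)
    is the dimensionless evolution function extracted from simulations."""
    tau = (lam / kappa) ** (1.0 / 3.0) * t_obs_days / (1.0 + z)
    pref = (((p - 2.0) / (p - 1.0)) ** 2 * eps_e ** 2 * eps_B ** 0.5
            * xi_N ** -2 * lam ** 0.5 / (1.0 + z))
    return pref * f_m(tau)
```

Only $\tau$ and the explicit $\kappa$, $\lambda$ prefactors change when the explosion energy or circumburst density is rescaled; the function $\mathfrak{f}_m$ itself is reused unchanged, which is the practical content of the scale invariance.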
Note that, even though the time evolution of the characteristic quantities does not depend on $p$, equations \[flux\_equations\] imply that the time evolution of $F_G$ and $F_H$ does. Numerical results ----------------- In Fig. \[characteristics\_plot\] we plot the time evolution of the characteristic quantities for the three jet opening angles $\theta_0 = 0.05$, $0.1$, $0.2$ rad. and for observer angles $\theta_{obs} = 0$, $0.6 \times \theta_0$, $\theta_0$. Fig. \[characteristics\_slopes\_plot\] shows the evolution of the spectral slope for these same angles. The time evolutions were calculated from three light curves per $\theta_0$, $\theta_{obs}$ combination: one for each asymptotic spectral regime separated by $\nu_m$ and $\nu_c$ in the slow cooling case. For these light curves we used $\epsilon_B = 10^{-5}$, $\epsilon_e = 10^{-5}$, $\xi_N = 1$, $p = 2.5$ and frequencies $10^{-20}$, $10^{10}$, $10^{40}$ Hz. These values (especially the frequencies) were not physically motivated but rather chosen such that they ensure all light curves were calculated well into the asymptotic limits of the spectral regimes and not impacted by the smoothness of the spectral transitions between regimes. Moving the outer frequencies closer in but still in their asymptotic regions throughout the evolution of the emission for $\epsilon_B = 10^{-5}$, $\epsilon_e = 10^{-5}$, $\xi_N = 1$, $p = 2.5$, e.g. to $10^{-5}$ and $10^{25}$ Hz, was found to have no impact on the result. The characteristic quantities were subsequently obtained by inverting equations \[flux\_equations\]. Since the plots show $\mathfrak{f}_{peak}$, $\mathfrak{f}_m$ and $\mathfrak{f}_c$, the curves are independent of $\epsilon_e$, $\epsilon_B$ and $p$. Figs. \[characteristics\_plot\] and \[characteristics\_slopes\_plot\] reveal that the time evolution for the characteristic quantities strongly depends on both jet and observer angle.
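The inversion of equations \[flux\_equations\] has a closed form in log space. A sketch (the three frequencies must lie in their asymptotic regimes, as in the procedure described above; this is an illustration, not the analysis code itself):

```python
import math

def invert_characteristics(FD, FG, FH, nu1, nu2, nu3, p):
    """Recover (F_peak, nu_m, nu_c) from three asymptotic fluxes:
    FD at nu1 < nu_m, FG at nu_m < nu2 < nu_c, FH at nu3 > nu_c
    (inverts the F_D, F_G, F_H expressions, slow cooling)."""
    s = (1.0 - p) / 2.0
    # ln FD - ln FG = (1/3) ln nu1 - s ln nu2 + (s - 1/3) ln nu_m
    ln_nu_m = (math.log(FD) - math.log(FG)
               - math.log(nu1) / 3.0 + s * math.log(nu2)) / (s - 1.0 / 3.0)
    nu_m = math.exp(ln_nu_m)
    F_peak = FD / (nu1 / nu_m) ** (1.0 / 3.0)
    # In F_H the nu_c exponent collapses to s + p/2 = 1/2
    ln_nu_c = 2.0 * (math.log(FH) - math.log(F_peak)
                     + s * ln_nu_m + (p / 2.0) * math.log(nu3))
    return F_peak, nu_m, math.exp(ln_nu_c)
```

A round trip (forward flux equations followed by this inversion) recovers the input $F_{peak}$, $\nu_m$, $\nu_c$ to numerical precision.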
A difference between pre- and post-break values sets in immediately following the jet break time $t_j$, which for the current ISM simulations and on-axis observers is found to lie around $$t_j \approx (0.6\pm0.1) (1+z) ( \kappa / \lambda )^{1/3} (\theta_0 / 0.1)^{8/3} \textrm{ days,}$$ consistent with the earlier reported numerical results from [@vanEerten2010offaxis]. The jet break time will be discussed in more detail in section \[lightcurves\_section\] below. In Fig. \[characteristics\_slopes\_plot\], both pre- and post-break theoretically expected slopes are also plotted. The pre-break slopes match the theoretical predictions well, but the post-break slopes differ substantially from theoretical predictions (see @Sari1999 [@Rhoads1999]). Partly this discrepancy between theory and numerical practice is a consequence of the fact that the spreading of blast waves in simulations is not an exponential process even for $\theta_0 = 0.05$ rad. [@vanEerten2012observationalimplications; @MacFadyen2013]. The fact that $\nu_m$ is far more strongly impacted by jet spreading than theoretically expected and even more so than $F_{peak}$ can be understood as follows. Considering intensities rather than surface integrated flux (i.e. $I_{peak}$ rather than $F_{peak}$), which leaves the angular dependency explicit, we have $I_{peak} \propto (1 - \beta \mu)^{-3}$. Here $\beta$ is the outflow velocity in terms of $c$, $\mu$ the cosine of the angle between flow and observer direction. The expression includes the effect of departure time difference between emission from front and back of the blast wave as well as the Lorentz transform of the emission coefficient (see also the appendix of @vanEerten2010offaxis). On the other hand, for $\nu_{m, I}$, which we define as the contribution to $\nu_m$ along a single beam, we have $\nu_{m,I} \propto (1- \beta \mu)^{-1}$. 
While both $I_{peak}$ and $\nu_{m,I}$ are beamed, it therefore follows that the beaming effect is far stronger for $I_{peak}$, such that $F_{peak}$ is then less sensitive to the behavior of the flow near the edges than $\nu_m$. Although in theory this effect could be compensated for by a strong dependence of $I_{peak}$ and $\nu_{m,I}$ on emission time (since edge emission arriving at the same time departs earlier than emission along the axis to the observer), it turns out in practice that this only strengthens the sensitivity of $\nu_{m,I}$ to emission angle compared to the angle dependence of $I_{peak}$: for the BM solution, the scalings are $\nu_{m,I} \propto t^{-3} (1 - \beta \mu)^{-1}$ and $I_{peak} \propto t^4 (1 - \beta \mu)^{-3}$. It would be a strong indication of self-similarity between jet opening angles and of great practical significance if the evolution functions were to scale in a straightforward manner between opening angles. Any such scaling should incorporate the $\theta_0$ dependency of jet break time $t_j$. When we take that as our starting point, scale time according to $t' = t (\theta'_0 / \theta_0)^{8/3}$ and the characteristic functions according to $\mathfrak{f}'(t') = (\theta'_0 / \theta_0)^{-\alpha} \mathfrak{f}(t)$, where $\alpha$ is the power law time-dependence in the pre-break BM regime, this yields evolution curves for $\mathfrak{f}_{peak}$ that numerically match quite well initially between different jet opening angles, even for off-axis observer angles, as illustrated in Fig. \[jetbreak\_scaled\_plot\]. These $\theta_0$-scalings, however, are not exact and a similar mapping for $\mathfrak{f}_m$ or $\mathfrak{f}_c$ fails to produce much numerical overlap. This can be seen from the power law slope plots in Fig. \[characteristics\_slopes\_plot\], since the scalings represent horizontal shifts of the curves in these plots.
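The jet-break estimate and the $\theta_0$ time rescaling above can be sketched as follows (the break-time prefactor is the approximate on-axis ISM value quoted earlier in this section, not an exact result):

```python
def t_jet_break_days(E_iso, n0, theta0, z=0.0):
    """Approximate on-axis ISM jet break time in days:
    t_j ~ 0.6 (1+z) (kappa/lambda)^(1/3) (theta0/0.1)^(8/3)."""
    kappa, lam = E_iso / 1e53, n0 / 1.0
    return (0.6 * (1.0 + z) * (kappa / lam) ** (1.0 / 3.0)
            * (theta0 / 0.1) ** (8.0 / 3.0))

def rescale_time(t, theta0_from, theta0_to):
    """Map times between opening-angle curves: t' = t (theta0'/theta0)^(8/3)."""
    return t * (theta0_to / theta0_from) ** (8.0 / 3.0)
```

As noted above, this time mapping lines up the $\mathfrak{f}_{peak}$ curves reasonably well but is not exact, and fails for $\mathfrak{f}_m$ and $\mathfrak{f}_c$.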
Essentially, this lack of straightforward scalability reflects the fact that the post-break behavior is determined by more characteristic timescales than $t_j$ alone, such as the transition time to non-relativistic flow and the transition time to quasi-spherical flow, and that these timescales will impact the trans-relativistic stage of fluid flow that generates the observed post-break light curves. ![Time evolution of characteristic quantities, for different jet opening angles and observer angles. Top to bottom: $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. They describe the time evolution of $F_{peak}$, $\nu_m$, $\nu_c$ respectively according to the equations in table \[characteristics\_table\]. The legend in the top plot applies to all plots.[]{data-label="characteristics_plot"}](fig3a.eps "fig:"){width="\columnwidth"} ![Time evolution of characteristic quantities, for different jet opening angles and observer angles. Top to bottom: $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. They describe the time evolution of $F_{peak}$, $\nu_m$, $\nu_c$ respectively according to the equations in table \[characteristics\_table\]. The legend in the top plot applies to all plots.[]{data-label="characteristics_plot"}](fig3b.eps "fig:"){width="\columnwidth"} ![Time evolution of characteristic quantities, for different jet opening angles and observer angles. Top to bottom: $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. They describe the time evolution of $F_{peak}$, $\nu_m$, $\nu_c$ respectively according to the equations in table \[characteristics\_table\]. The legend in the top plot applies to all plots.[]{data-label="characteristics_plot"}](fig3c.eps "fig:"){width="\columnwidth"} ![Time evolution of power law slopes of the characteristic quantities, for different jet opening angles and observer angles. Top to bottom: slopes for $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. The legend in the top plot of Fig. 
\[characteristics\_plot\] applies to all these plots as well. The constant grey lines indicate the expected slopes for light curves for an on-axis observer from the pre-break BM solution and assuming a fast spreading jet post-break.[]{data-label="characteristics_slopes_plot"}](fig4a.eps "fig:"){width="\columnwidth"} ![Time evolution of power law slopes of the characteristic quantities, for different jet opening angles and observer angles. Top to bottom: slopes for $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. The legend in the top plot of Fig. \[characteristics\_plot\] applies to all these plots as well. The constant grey lines indicate the expected slopes for light curves for an on-axis observer from the pre-break BM solution and assuming a fast spreading jet post-break.[]{data-label="characteristics_slopes_plot"}](fig4b.eps "fig:"){width="\columnwidth"} ![Time evolution of power law slopes of the characteristic quantities, for different jet opening angles and observer angles. Top to bottom: slopes for $\mathfrak{f}_{peak}$, $\mathfrak{f}_{m}$, $\mathfrak{f}_{c}$. The legend in the top plot of Fig. \[characteristics\_plot\] applies to all these plots as well. The constant grey lines indicate the expected slopes for light curves for an on-axis observer from the pre-break BM solution and assuming a fast spreading jet post-break.[]{data-label="characteristics_slopes_plot"}](fig4c.eps "fig:"){width="\columnwidth"} ![Scaled evolution of the peak flux function $\mathfrak{f}_{peak}$, where the curves for $\theta_0 = 0.05$ rad. and $\theta_0 = 0.2$ rad. have been scaled in time towards the $\theta_0 = 0.1$ rad. result using $t' = t \, (0.1 / \theta_0)^{8/3}$. As in Figs.
\[characteristics\_plot\] and \[characteristics\_slopes\_plot\], solid lines refer to $\theta_{obs} = 0$, dotted lines to $\theta_{obs} = 0.6 \theta_0$ and dashed lines to $\theta_{obs} = \theta_0$.[]{data-label="jetbreak_scaled_plot"}](fig5.eps){width="\columnwidth"}

The cooling break
-----------------

Figs. \[characteristics\_plot\] and \[characteristics\_slopes\_plot\] illustrate that for a given characteristic function different extremal values are reached for different observer and jet angles. The consequence of this for the jet break as measured from observations will be discussed in section \[lightcurves\_section\] below; here we limit ourselves to highlighting the behavior of the cooling break. In an earlier work [@vanEerten2012scalings] we showed how simulations in a fixed frame (and thus of lower resolution) indicated a steepening (for $\theta_{obs} = 0$) of the temporal evolution of the cooling break immediately following the jet break. However, the current boosted frame simulations *reveal no post-break $\nu_c$ steepening towards stronger decay*. Specifically, the $\nu_c$ light curves for $\theta_0 = 0.2$ rad., the same angle as plotted in Fig. 3 of [@vanEerten2012scalings], show only a turnover towards positive temporal slope following the jet break, as can be seen in the bottom panel of Fig. \[characteristics\_slopes\_plot\] of the current paper. At the same time, the on-axis curve for $\theta_0 = 0.05$ rad. shown in the same figure does show a (slight) post-break steepening of the temporal power law slope of $\nu_c$. ![A comparison of the time evolution of four computations of $\nu_c$ for $\theta_0$ = 0.2 rad. and $\theta_{obs} = \theta_0$ (i.e. an on-edge observer). The vertical lines indicate the timespans used for analysis.[]{data-label="nuc_comparison_plot"}](fig6.eps){width="\columnwidth"} What this indicates is that the earlier reported steepening for $\theta_0 = 0.2$ rad. and the current smaller steepening for $\theta_0 = 0.05$ rad.
are numerical in origin, and sensitive to the initial conditions of the blast wave. Above the cooling break, the observed flux is dominated by emission from the edges of the jet (i.e. the observed image is ‘limb-brightened’), and the cooling break $\nu_c$ is therefore the most sensitive to deviations from purely radial flow at the edges of an initially conically truncated spherical BM outflow. The smaller the jet opening angle, the larger even small resolution-induced deviations become relative to $\theta_0$. The dynamics of narrow and wide jets will be discussed separately in more detail in [@MacFadyen2013]. The effect of early time flow at the jet edges on the light curve naturally becomes more severe the closer the observer angle moves towards the edge of the jet. In Fig. \[nuc\_comparison\_plot\], we show that even the pre-break behavior of $\nu_c$ from 2D simulations differs strongly from that of analytically calculated conical outflow. The black solid line and blue dashed line show $\nu_c$ results for simulations starting at $\gamma_0 = 100$ and $\gamma_0 = 50$ respectively, the red dash-dotted line shows the evolution of $\nu_c$ based on conical outflow following the BM solution, while the green dashed line shows a four times lower resolution simulation. Around the leftmost vertical line, the $\nu_c$ curves for both normal resolution simulations have merged, while both simulation curves still differ strongly from the BM solution. It follows that the difference between 2D simulated and radial analytical flow cannot be attributed to a lack of early time coverage of the observed signal by an incomplete range of emission times. Nor can this difference be attributed to the difference in starting times (and hence the extent to which $\gamma_0 \gg 1 / \theta_0$). Both effects are clearly visible in Fig. \[nuc\_comparison\_plot\] and lie well to the left of the left vertical line at $10^{-2}$ days.
In view of this resolution issue, when analysing light curves for $\theta_0 = 0.2$ rad. we will start from 0.1 days (rather than 0.01 days), and the characteristic evolution curves for this initial jet opening angle in Figs. \[characteristics\_plot\] and \[characteristics\_slopes\_plot\] have been truncated at this value of $\tau$. Note that for most observer angles this effect is less severe; the parameters of Fig. \[nuc\_comparison\_plot\] were chosen to reflect a worst-case scenario. An additional conclusion that can be drawn from the severe resolution dependence of the off-axis observed $\nu_c$ evolution for $\theta_0 = 0.2$ rad., even well into times that are easily observable by instruments such as *Swift*, is that if small numerical resolution-induced deviations from BM-type flow have a large effect on $\nu_c$, the same will hold for minor *physical* deviations. This raises the question of to what extent deviations from the expected BM-based time evolution of $\nu_c$ can be driven by the dynamics of the outflow. On the other hand, although an actual measurement of the evolution of $\nu_c$ has been performed by [@Filgas2011], the temporal slope of $-1.2$ that these authors find is steeper than the high-resolution simulation $\nu_c$ slope in Fig. \[characteristics\_slopes\_plot\] at any time, and they argue that the steep decline in GRB 091127 can be attributed to changes in the radiative process (via a time dependency of $\epsilon_B$) rather than outflow dynamics.

Light curves and jet breaks {#lightcurves_section}
===========================

Once the time evolution of the characteristic functions $\mathfrak{f}_{peak}$, $\mathfrak{f}_m$ and $\mathfrak{f}_c$ is known, they can be used to quickly calculate light curves for arbitrary $p$.
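The step from characteristic quantities to a light curve can be sketched with the standard slow-cooling synchrotron spectral shape. The minimal Python illustration below is an assumption-laden stand-in for the full prescription of Eq. \[flux\_equations\] (whose exact normalizations are not reproduced here); it only encodes the textbook spectral slopes between the breaks.

```python
# Illustrative sketch only: evaluate a slow-cooling synchrotron spectrum
# from the peak flux and break frequencies at one observer time, for
# arbitrary p. The textbook spectral slopes are assumed; the paper's
# Eq. [flux_equations] prescription contains the exact normalizations.
def flux(nu, F_peak, nu_m, nu_c, p):
    if nu < nu_m:                       # below the injection break
        return F_peak * (nu / nu_m)**(1.0 / 3.0)
    if nu < nu_c:                       # nu_m < nu < nu_c ("optical")
        return F_peak * (nu / nu_m)**(-(p - 1.0) / 2.0)
    # nu > nu_c ("X-rays"): matched to the middle branch at nu = nu_c
    return (F_peak * (nu_c / nu_m)**(-(p - 1.0) / 2.0)
            * (nu / nu_c)**(-p / 2.0))
```

A light curve then follows by evolving $F_{peak}$, $\nu_m$ and $\nu_c$ through $\mathfrak{f}_{peak}$, $\mathfrak{f}_m$ and $\mathfrak{f}_c$ and re-evaluating this spectrum at each $\tau$.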
In order to study the shape of the jet break we have done so for $p$ values of $2.01$ and $2.1$, $2.2$, $\ldots$, $3.0$ and the spectral regimes $\nu_a < \nu_m < \nu < \nu_c$ (typically applicable to optical data) and $\nu_a < \nu_m < \nu_c < \nu$ (“X-rays”). Various functions have been used in the literature to fit jet breaks in optical and X-ray light curve data, such as sharp power laws (e.g. @Racusin2009 [@Evans2009]), smoothly connected power laws (e.g. @Beuermann1999) or power law transitions where the turnover includes an exponential term (e.g. @Harrison1999). A limitation common to all these fit functions is the assumption of a single power law regime after the jet break. Simulation-based light curves show that in reality this should not be expected (as can be seen from the post-break evolution of the peak flux and break frequencies, Figs. \[characteristics\_plot\] and \[characteristics\_slopes\_plot\]); it therefore makes practical sense to explore the implications of our simulation results for the interpretation and applicability of broken power law fit functions.
$\theta_0$ (rad) $\theta_{obs}$ fit $\alpha_0$ $\alpha_1$ $< \tau_b >$ $^{10}\log \bar{C}$ $\sigma$ $<\tau_{0.9} / \tau_b>$ $\chi^2 / \chi^2_{PL}$ $\chi^2$, red.$^\dagger$ ------------------ ---------------- ----- ---------------- ----------------- -------------- --------------------- ---------------- ------------------------- ------------------------ -------------------------- 0.05 0 PL $0.76 -0.73 p$ $0.20 -1.24 p$ 0.10 $1.37 + 1.64 p$ 1 0.20 sB $0.75 -0.71 p$ $0.19 -1.24 p$ 0.10 $1.34 + 1.68 p$ $6.06 -0.91 p$ 1.4 0.68 0.14 sH $1.01 -0.74 p$ $0.18 -1.23 p$ 0.08 $1.57 + 1.66 p$ 1.00 0.20 $0.2 \theta_0$ PL $0.69 -0.74 p$ $0.19 -1.24 p$ 0.12 $1.34 + 1.60 p$ 1 0.54 sB $0.71 -0.69 p$ $0.20 -1.25 p$ 0.10 $1.33 + 1.68 p$ $3.65 -0.70 p$ 1.8 0.34 0.18 sH $1.02 -0.78 p$ $0.19 -1.24 p$ 0.09 $1.66 + 1.58 p$ 0.28 0.15 $0.4 \theta_0$ PL $0.67 -0.82 p$ $0.23 -1.28 p$ 0.17 $1.54 + 1.35 p$ 1 0.99 sB $0.72 -0.77 p$ $0.23 -1.29 p$ 0.15 $1.53 + 1.45 p$ $2.92 -0.51 p$ 2.1 0.58 0.57 sH $0.94 -0.85 p$ $0.22 -1.28 p$ 0.14 $1.86 + 1.35 p$ 0.46 0.46 $0.6 \theta_0$ PL $0.56 -0.80 p$ $0.26 -1.32 p$ 0.25 $1.52 + 1.22 p$ 1 0.74 sB $0.58 -0.79 p$ $0.26 -1.32 p$ 0.24 $1.52 + 1.24 p$ $5.63 -0.92 p$ 1.5 0.85 0.64 sH $0.78 -0.83 p$ $0.24 -1.32 p$ 0.21 $1.83 + 1.19 p$ 0.87 0.65 $0.8 \theta_0$ PL $0.56 -0.77 p$ $0.29 -1.37 p$ 0.33 $1.44 + 1.16 p$ 1 1.07 sB $0.56 -0.75 p$ $0.29 -1.37 p$ 0.32 $1.40 + 1.20 p$ $5.72 -1.09 p$ 1.5 0.86 0.93 sH $0.74 -0.79 p$ $0.27 -1.37 p$ 0.29 $1.69 + 1.14 p$ 0.84 089 $\theta_0$ PL $0.66 -0.77 p$ $0.31 -1.41 p$ 0.43 $1.34 + 1.12 p$ 1 1.48 sB $0.68 -0.76 p$ $0.33 -1.42 p$ 0.41 $1.39 + 1.13 p$ $4.26 -0.74 p$ 1.6 0.82 1.23 sH $0.82 -0.79 p$ $0.31 -1.42 p$ 0.38 $1.62 + 1.09 p$ 0.75 1.11 0.1 0 PL $0.74 -0.75 p$ $0.26 -1.29 p$ 0.59 $1.90 + 1.08 p$ 1 0.23 sB $0.75 -0.75 p$ $0.25 -1.29 p$ 0.58 $1.91 + 1.08 p$ $5.35 -0.77 p$ 1.4 0.61 0.14 sH $0.85 -0.76 p$ $0.21 -1.28 p$ 0.54 $2.06 + 1.07 p$ 1.11 0.24 $0.2 \theta_0$ PL $0.73 -0.76 p$ $0.26 -1.29 p$ 0.64 $1.92 + 1.03 p$ 1 0.66 sB 
$0.74 -0.74 p$ $0.26 -1.31 p$ 0.63 $1.93 + 1.07 p$ $2.96 -0.52 p$ 2.0 0.17 0.12 sH $0.85 -0.77 p$ $0.22 -1.29 p$ 0.59 $2.11 + 1.02 p$ 0.10 0.06 $0.4 \theta_0$ PL $0.73 -0.80 p$ $0.32 -1.34 p$ 0.91 $2.14 + 0.81 p$ 1 2.06 sB $0.75 -0.74 p$ $0.30 -1.40 p$ 0.86 $2.12 + 0.93 p$ $1.49 -0.27 p$ 3.7 0.20 0.42 sH $0.85 -0.81 p$ $0.27 -1.34 p$ 0.82 $2.32 + 0.81 p$ 0.29 0.62 $0.6 \theta_0$ PL $0.67 -0.82 p$ $0.44 -1.44 p$ 1.45 $2.16 + 0.62 p$ 1 1.90 sB $0.74 -0.81 p$ $0.23 -1.43 p$ 1.52 $2.31 + 0.62 p$ $1.22 -0.12 p$ 3.2 0.46 0.89 sH $0.75 -0.82 p$ $0.32 -1.42 p$ 1.36 $2.38 + 0.59 p$ 0.44 0.84 $0.8 \theta_0$ PL $0.59 -0.79 p$ $0.43 -1.49 p$ 2.19 $1.99 + 0.54 p$ 1 0.59 sB $0.59 -0.78 p$ $0.42 -1.54 p$ 2.29 $2.08 + 0.52 p$ $3.20 -0.59 p$ 1.9 0.28 0.17 sH $0.66 -0.80 p$ $0.28 -1.48 p$ 2.19 $2.18 + 0.50 p$ 0.18 0.10 $\theta_0$ PL $0.71 -0.79 p$ $0.51 -1.56 p$ 2.67 $2.06 + 0.46 p$ 1 1.30 sB $0.71 -0.77 p$ $0.46 -1.66 p$ 3.07 $2.09 + 0.45 p$ $1.90 -0.35 p$ 2.4 0.27 0.36 sH $0.76 -0.80 p$ $0.34 -1.55 p$ 2.74 $2.16 + 0.44 p$ 0.24 0.33 0.2 0 PL $0.79 -0.80 p$ $0.51 -1.44 p$ 3.73 $2.61 + 0.40 p$ 1 0.16 sB $0.78 -0.79 p$ $0.45 -1.44 p$ 3.77 $2.58 + 0.42 p$ $5.89 -0.96 p$ 1.4 0.36 0.06 sH $0.88 -0.80 p$ $0.25 -1.39 p$ 3.64 $2.68 + 0.41 p$ 1.57 0.23 $0.2 \theta_0$ PL $0.79 -0.81 p$ $0.53 -1.40 p$ 3.75 $2.63 + 0.37 p$ 1 0.59 sB $0.78 -0.78 p$ $0.44 -1.48 p$ 4.25 $2.62 + 0.38 p$ $2.69 -0.50 p$ 2.1 0.05 0.03 sH $0.88 -0.82 p$ $0.23 -1.36 p$ 3.87 $2.72 + 0.37 p$ 0.03 0.02 $0.4 \theta_0$ PL $0.77 -0.85 p$ $0.57 -1.36 p$ 4.54 $2.72 + 0.24 p$ 1 1.71 sB $0.83 -0.78 p$ $-0.43 -2.22 p$ 21.03 $3.22 - 0.14 p$ $0.34 -0.05 p$ 8.3 0.06 0.11 sH $0.88 -0.86 p$ $0.32 -1.36 p$ 4.86 $2.90 + 0.19 p$ 0.25 0.44 $0.6 \theta_0$ PL $0.70 -0.86 p$ $0.80 -1.45 p$ 7.38 $2.90 + 0.02 p$ 1 1.07 sB $0.74 -0.85 p$ $0.70 -2.55 p$ 24.27 $3.14 - 0.35 p$ $0.60 -0.10 p$ 4.4 0.28 0.31 sH $0.76 -0.87 p$ $0.18 -1.37 p$ 8.67 $2.86 - 0.01 p$ 0.36 0.39 $0.8 \theta_0$ PL $0.62 -0.83 p$ $0.64 -1.26 p$ 9.42 $2.62 + 0.00 p$ 1 0.26 sB 
$0.60 -0.81 p$ $0.15 -2.14 p$ 24.47 $2.88 - 0.33 p$ $1.25 -0.26 p$ 2.7 0.05 0.01 sH $0.61 -0.81 p$ $-1.38 -0.86 p$ 14.05 $2.35 + 0.03 p$ 0.05 0.01 $\theta_0$ PL $0.73 -0.81 p$ $0.63 -1.03 p$ 6.96 $2.29 + 0.20 p$ 1 0.25 sB $0.71 -0.79 p$ $0.21 -1.50 p$ 24.46 $2.79 - 0.27 p$ $1.44 -0.29 p$ 3.9 0.14 0.03 sH $0.72 -0.80 p$ $-0.80 -0.77 p$ 12.83 $2.11 + 0.15 p$ 0.13 0.03 $\theta_0$ (rad) $\theta_{obs}$ fit $\alpha_0$ $\alpha_1$ $< \tau_b >$ $^{10}\log \bar{C}$ $\sigma$ $<\tau_{0.9} / \tau_b>$ $\chi^2 / \chi^2_{PL}$ $\chi^2$, red. ------------------ ---------------- ----- ---------------- ----------------- -------------- --------------------- ----------------- ------------------------- ------------------------ ---------------- 0.05 0 PL $0.50 -0.72 p$ $0.20 -1.23 p$ 0.08 $-0.03 + 0.20 p$ 1 1.92 sB $0.50 -0.71 p$ $0.20 -1.23 p$ 0.07 $-0.04 + 0.20 p$ $23.81 -4.90 p$ 1.1 1.00 1.94 sH $0.93 -0.76 p$ $0.20 -1.23 p$ 0.06 $0.34 + 0.19 p$ 1.09 2.10 $0.2 \theta_0$ PL $0.50 -0.74 p$ $0.21 -1.23 p$ 0.08 $0.03 + 0.14 p$ 1 1.86 sB $0.49 -0.71 p$ $0.22 -1.24 p$ 0.08 $-0.01 + 0.18 p$ $10.25 -2.27 p$ 1.4 0.97 1.82 sH $0.95 -0.80 p$ $0.21 -1.24 p$ 0.06 $0.43 + 0.12 p$ 0.98 1.83 $0.4 \theta_0$ PL $0.47 -0.82 p$ $0.26 -1.27 p$ 0.12 $0.24 - 0.12 p$ 1 2.29 sB $0.52 -0.79 p$ $0.27 -1.27 p$ 0.11 $0.24 - 0.05 p$ $5.26 -0.99 p$ 1.7 0.92 2.14 sH $0.89 -0.88 p$ $0.27 -1.27 p$ 0.09 $0.72 - 0.14 p$ 0.88 2.00 $0.6 \theta_0$ PL $0.33 -0.80 p$ $0.33 -1.31 p$ 0.20 $0.15 - 0.28 p$ 1 2.17 sB $0.33 -0.80 p$ $0.33 -1.31 p$ 0.19 $0.15 - 0.26 p$ $13.33 -2.63 p$ 1.3 0.99 2.17 sH $0.66 -0.85 p$ $0.33 -1.32 p$ 0.14 $0.67 - 0.33 p$ 1.06 2.31 $0.8 \theta_0$ PL $0.32 -0.77 p$ $0.39 -1.36 p$ 0.27 $0.01 - 0.33 p$ 1 2.37 sB $0.31 -0.75 p$ $0.39 -1.36 p$ 0.26 $-0.02 - 0.30p$ $11.95 -2.67 p$ 1.3 0.98 2.34 sH $0.60 -0.81 p$ $0.39 -1.37 p$ 0.21 $0.54 - 0.41 p$ 1.03 2.44 $\theta_0$ PL $0.45 -0.78 p$ $0.46 -1.42 p$ 0.35 $-0.06 - 0.40 p$ 1 2.55 sB $0.43 -0.76 p$ $0.46 -1.42 p$ 0.33 $-0.08 - 0.36 p$ $7.98 -1.67 p$ 1.4 0.96 
2.47 sH $0.66 -0.81 p$ $0.45 -1.42 p$ 0.29 $0.34 - 0.45 p$ 0.97 2.45 0.1 0 PL $0.50 -0.75 p$ $0.27 -1.29 p$ 0.52 $0.32 - 0.41 p$ 1 0.60 sB $0.50 -0.74 p$ $0.28 -1.29 p$ 0.51 $0.31 - 0.40 p$ $12.62 -2.52 p$ 1.2 0.95 0.58 sH $0.66 -0.77 p$ $0.25 -1.28 p$ 0.44 $0.60 - 0.44 p$ 1.40 0.84 $0.2 \theta_0$ PL $0.49 -0.75 p$ $0.27 -1.29 p$ 0.55 $0.30 - 0.43 p$ 1 0.65 sB $0.49 -0.74 p$ $0.30 -1.31 p$ 0.53 $0.34 - 0.42 p$ $5.96 -1.26 p$ 1.6 0.67 0.44 sH $0.65 -0.78 p$ $0.26 -1.29 p$ 0.48 $0.64 - 0.49 p$ 0.69 0.44 $0.4 \theta_0$ PL $0.49 -0.80 p$ $0.34 -1.32 p$ 0.76 $0.51 -0.66 p$ 1 1.64 sB $0.49 -0.74 p$ $0.41 -1.39 p$ 0.68 $0.51 - 0.56 p$ $2.53 -0.53 p$ 3.0 0.38 0.63 sH $0.66 -0.82 p$ $0.33 -1.33 p$ 0.65 $0.88 -0.71 p$ 0.33 0.55 $0.6 \theta_0$ PL $0.43 -0.81 p$ $0.48 -1.42 p$ 1.26 $0.63 -0.91 p$ 1 1.50 sB $0.48 -0.81 p$ $0.44 -1.44 p$ 1.25 $0.70 -0.90 p$ $2.22 -0.32 p$ 2.6 0.63 0.96 sH $0.56 -0.84 p$ $0.43 -1.42 p$ 1.14 $0.93 -0.96 p$ 0.50 0.76 $0.8 \theta_0$ PL $0.35 -0.79 p$ $0.54 -1.49 p$ 2.00 $0.38 -0.99 p$ 1 0.40 sB $0.34 -0.78 p$ $0.59 -1.53 p$ 2.04 $0.40 -0.99 p$ $6.73 -1.53 p$ 1.6 0.56 0.22 sH $0.44 -0.81 p$ $0.42 -1.48 p$ 1.96 $0.57 -1.02 p$ 0.81 0.30 $\theta_0$ PL $0.48 -0.79 p$ $0.71 -1.58 p$ 2.52 $0.44 - 1.08 p$ 1 0.82 sB $0.46 -0.77 p$ $0.74 -1.66 p$ 2.69 $0.37 - 1.05 p$ $3.48 -0.76 p$ 2.1 0.39 0.32 sH $0.54 -0.80 p$ $0.49 -1.54 p$ 2.50 $0.49 -1.07 p$ 0.28 0.23 0.2 0 PL $0.56 -0.80 p$ $0.52 -1.45 p$ 3.66 $0.83 -1.11 p$ 1 0.14 sB $0.55 -0.79 p$ $0.49 -1.44 p$ 3.65 $0.80 -1.09 p$ $9.44 -1.85 p$ 1.3 0.61 0.09 sH $0.67 -0.80 p$ $0.28 -1.40 p$ 3.48 $0.95 -1.10 p$ 2.72 0.36 $0.2 \theta_0$ PL $0.57 -0.81 p$ $0.53 -1.41 p$ 3.65 $0.87 - 1.14 p$ 1 0.40 sB $0.54 -0.78 p$ $0.54 -1.48 p$ 3.96 $0.84 -1.13 p$ $4.07 -0.85 p$ 1.9 0.10 0.04 sH $0.67 -0.82 p$ $0.26 -1.37 p$ 3.67 $0.98 -1.14 p$ 0.15 0.05 $0.4 \theta_0$ PL $0.57 -0.85 p$ $0.66 -1.39 p$ 4.28 $1.11 - 1.33 p$ 1 1.28 sB $0.58 -0.78 p$ $1.21 -2.55 p$ 18.24 $1.47 - 1.70 p$ $0.55 -0.11 p$ 8.6 0.06 0.08 sH $0.68 -0.86 p$ 
$0.33 -1.35 p$ 4.55 $1.18 - 1.33 p$ 0.19 0.26 $0.6 \theta_0$ PL $0.46 -0.86 p$ $0.62 -1.36 p$ 7.33 $0.96 -1.48 p$ 1 0.72 sB $0.51 -0.85 p$ $1.03 -2.48 p$ 24.07 $1.10 - 1.83 p$ $0.74 -0.13 p$ 4.6 0.29 0.21 sH $0.54 -0.87 p$ $0.12 -1.37 p$ 9.25 $0.96 -1.52 p$ 0.33 0.24 $0.8 \theta_0$ PL $0.40 -0.82 p$ $0.45 -1.21 p$ 9.71 $0.64 - 1.48 p$ 1 0.15 sB $0.39 -0.81 p$ $0.42 -2.14 p$ 24.29 $0.92 -1.84 p$ $1.63 -0.36 p$ 2.5 0.09 0.01 sH $0.39 -0.81 p$ $-2.03 -0.72 p$ 16.11 $0.31 -1.47 p$ 0.09 0.01 $\theta_0$ PL $0.50 -0.81 p$ $0.42 -1.01 p$ 7.82 $0.32 -1.30 p$ 1 0.13 sB $0.48 -0.80 p$ $0.12 -1.48 p$ 24.44 $0.83 -1.79 p$ $2.03 -0.45 p$ 3.3 0.08 0.01 sH $0.49 -0.80 p$ $-1.70 -0.58 p$ 16.48 $0.07 -1.37 p$ 0.06 0.01 Tables \[powerlaw\_fit\_optical\_table\] and \[powerlaw\_X-rays\_table\] show the results of the analysis of light curves with different $p$, $\theta_{obs}$ and $\theta_0$ values using different power law descriptions. The light curves consist of 85 data points per curve between observer times $10^{-2}$ days and 26 days for $\theta_0 = 0.05$ rad. and $\theta_0 = 0.1$ rad., and of 60 data points per curve between observer times $10^{-1}$ days and 26 days for $\theta_0 = 0.2$ rad. Each data point was given an error of ten percent, and three different jet break functions were fitted using a least squares algorithm. A baseline frequency $\nu = 4.56 \times 10^{14}$ Hz (R-band) was used for table \[powerlaw\_fit\_optical\_table\] and a baseline frequency $\nu = 5 \times 10^{17}$ Hz (2.07 keV) for table \[powerlaw\_X-rays\_table\]. The fit function for a sharp power law, labeled “PL” in the table and below, is given by $$\bar{F}(\tau) = \left\{ \begin{array}{cl} \bar{C} (\tau / \tau_b)^{\alpha_0}, & \tau < \tau_b, \\ \bar{C} (\tau / \tau_b)^{\alpha_1}, & \tau > \tau_b \end{array} \right.
.$$ The fit function for a smooth power law transition, equivalent to that used by @Beuermann1999 and labeled “sB”, is given by $$\bar{F}(\tau) = \bar{C} \left[ \left( \frac{\tau}{\tau_b} \right)^{- \alpha_0 \sigma} + \left( \frac{\tau}{\tau_b} \right)^{-\alpha_1 \sigma} \right]^{-1/\sigma}. \label{Beuermann_fit_function_equation}$$ The alternative smooth power law transition fit function, labeled “sH”, is the same as the one used by [@Harrison1999] and given by $$\bar{F}(\tau) = \bar{C} \left\{ 1 - \exp[ -(\tau / \tau_b)^{\alpha_0 - \alpha_1}] \right\} (\tau / \tau_b)^{\alpha_1}.$$ In this equation the pre-break power law slope is retrieved from the Taylor series of the exponential term. The different fit variable results are represented in the tables as follows. Since the fit results confirm that the slopes $\alpha_0$ and $\alpha_1$ depend linearly on $p$ (as shown in Figs. \[alpha\_plot\] and \[alphaX\_plot\] for fits using sharp power laws), the entries contain this linear dependence as determined from the full range of $p$ fits rather than a value for each individual $p$. The logarithm of the numerical scale factor $\bar{C}$ and the sharpness $\sigma$ of the smooth power law fit also depend linearly on $p$, and are presented in the same fashion. The break time $\tau_b$ depends only weakly on $p$ and is represented by its average value $< \tau_b >$, weighting equally all individual $p$ value fits. We also give the reduced $\chi^2$ value of each fit, again averaged over the different $p$ value fits, as well as the ratio of the unreduced $\chi^2$ of each fit to that of a sharp power law fit. For the latter, these ratios were calculated before the average was taken.
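For concreteness, the three fit functions above can be written down directly. The following Python sketch (with `numpy`) implements them; the slope and sharpness values used in any example call are illustrative, not taken from the tables.

```python
# Sketch of the three jet-break fit functions defined above: sharp ("PL"),
# smoothly connected ("sB", Beuermann-type) and exponential-turnover
# ("sH", Harrison-type) broken power laws.
import numpy as np

def f_pl(tau, C, tau_b, a0, a1):
    """Sharp broken power law ("PL")."""
    return np.where(tau < tau_b,
                    C * (tau / tau_b)**a0,
                    C * (tau / tau_b)**a1)

def f_sb(tau, C, tau_b, a0, a1, sigma):
    """Smoothly connected broken power law ("sB")."""
    return C * ((tau / tau_b)**(-a0 * sigma)
                + (tau / tau_b)**(-a1 * sigma))**(-1.0 / sigma)

def f_sh(tau, C, tau_b, a0, a1):
    """Power law transition with an exponential term ("sH").

    For tau << tau_b, expanding the exponential gives slope a0;
    for tau >> tau_b the slope tends to a1.
    """
    return C * (1.0 - np.exp(-(tau / tau_b)**(a0 - a1))) * (tau / tau_b)**a1
```

All three reduce to slopes $\alpha_0$ and $\alpha_1$ far before and after $\tau_b$; they differ only in how sharply the turnover is made (for sB, controlled by $\sigma$), which is what the tabulated $\sigma$ and $\chi^2$ columns compare.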
We emphasize that by themselves, the reduced $\chi^2$ results are to some degree arbitrary, since they depend on arbitrary quantities like the number of datapoints in a synthetic light curve, the spacing of these datapoints and an artificial ten percent error on each datapoint, and that they should be interpreted only in a relative sense. Using the prescriptions from table \[characteristics\_table\] and equations \[flux\_equations\], the flux for any combination of parameter values can be reproduced from the fit results in tables \[powerlaw\_fit\_optical\_table\] and \[powerlaw\_X-rays\_table\]. For $\nu < \nu_c$, we get: $$\begin{aligned} F_G & = & \frac{(1+z)^{(3-p)/2}}{d_{28}^2} \frac{p-1}{3p-1} \left( \frac{p-2}{p-1} \right)^{p-1} \epsilon_e^{p-1} \epsilon_B^{(p+1)/4} \xi_N^{2-p} \times \nonumber \\ & & \kappa^1 \lambda^{(p+1)/4} \left( \frac{\nu_{\oplus}}{4.56 \times 10^{14} \textrm{ Hz}} \right)^{(1-p)/2} \bar{F} (\tau) \textrm{ mJy},\end{aligned}$$ for the ISM case and $\bar{F}$ referring to fit results from table \[powerlaw\_fit\_optical\_table\]. We have now added a ‘$\oplus$’ to the frequency to emphasize that this frequency is expressed in the observer frame, like the characteristic frequencies in table \[characteristics\_table\] and the frequencies in Eq. \[flux\_equations\], and related to the frequency $\nu$ in the burster frame via $\nu_{\oplus} = \nu / (1+z)$. Note that $\tau$ is still expressed in the burster frame, in order to keep the characteristic functions (and hence the power law fit results) redshift-independent. 
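The $F_G$ expression above translates directly into code. The sketch below assumes that `kappa` and `lambda_` stand for the dimensionless scale factors $\kappa$ and $\lambda$ for explosion energy and circumburst density introduced earlier in the paper, and that `fbar` is the fitted $\bar{F}(\tau)$ from table \[powerlaw\_fit\_optical\_table\]; the default parameter values are illustrative only.

```python
# Sketch of the spectral-regime flux formula for nu_m < nu < nu_c (ISM
# case). kappa and lambda_ are the scaled explosion energy and
# circumburst density defined earlier in the paper (their normalization
# is assumed here); fbar is the fitted broken power law \bar{F}(tau).
def flux_G(fbar, nu_obs, p, z=0.0, d28=1.0, eps_e=0.1, eps_B=0.01,
           xi_N=1.0, kappa=1.0, lambda_=1.0):
    """Flux F_G in mJy; nu_obs is the observer-frame frequency in Hz."""
    prefac = ((1.0 + z)**((3.0 - p) / 2.0) / d28**2
              * (p - 1.0) / (3.0 * p - 1.0)
              * ((p - 2.0) / (p - 1.0))**(p - 1.0)
              * eps_e**(p - 1.0) * eps_B**((p + 1.0) / 4.0)
              * xi_N**(2.0 - p) * kappa * lambda_**((p + 1.0) / 4.0))
    return prefac * (nu_obs / 4.56e14)**((1.0 - p) / 2.0) * fbar
```

The $(1-p)/2$ spectral slope and the baseline R-band frequency carry over directly from the equation in the text; $F_H$ for $\nu > \nu_c$ follows the same pattern with its own exponents.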
For $\nu > \nu_c$ we have: $$\begin{aligned} F_H & = & \frac{(1+z)^{(2-p)/2}}{d_{28}^2} \frac{p-1}{3p-1} \left( \frac{p-2}{p-1} \right)^{p-1} \epsilon_e^{p-1} \epsilon_B^{(p-2)/4} \xi_N^{2-p} \times \nonumber \\ & & \kappa^{2/3} \lambda^{(3p-2)/12} \left( \frac{\nu_{\oplus}}{5 \times 10^{17} \textrm{ Hz}} \right)^{-p/2} \bar{F} (\tau) \textrm{ mJy},\end{aligned}$$ for the ISM case and $\bar{F}$ referring to fit results from table \[powerlaw\_X-rays\_table\].

Implications for the light curve slope
--------------------------------------

![Pre-break temporal index $\alpha_0$ (top plot) and post-break temporal index $\alpha_1$ (bottom plot) for $\nu < \nu_c$ and for on-axis and on-edge observations of the three jet angles $\theta_0 = 0.05$, $0.1$, $0.2$ rad., according to sharp power law fits to synthetic light curves. The grey bands indicate the region within ten percent of the theoretically expected values, $3(1-p)/4$ and $-p$ for pre- and post-break respectively.[]{data-label="alpha_plot"}](fig7a.eps "fig:"){width="\columnwidth"} ![Pre-break temporal index $\alpha_0$ (top plot) and post-break temporal index $\alpha_1$ (bottom plot) for $\nu < \nu_c$ and for on-axis and on-edge observations of the three jet angles $\theta_0 = 0.05$, $0.1$, $0.2$ rad., according to sharp power law fits to synthetic light curves. The grey bands indicate the region within ten percent of the theoretically expected values, $3(1-p)/4$ and $-p$ for pre- and post-break respectively.[]{data-label="alpha_plot"}](fig7b.eps "fig:"){width="\columnwidth"} ![Same as Fig. \[alpha\_plot\], now for $\nu > \nu_c$.[]{data-label="alphaX_plot"}](fig8a.eps "fig:"){width="\columnwidth"} ![Same as Fig.
\[alpha\_plot\], now for $\nu > \nu_c$.[]{data-label="alphaX_plot"}](fig8b.eps "fig:"){width="\columnwidth"} The predicted on-axis pre-break slopes are $3(1-p)/4$ for $\nu < \nu_c$ and $(2-3p)/4$ for $\nu > \nu_c$, and the tables show that these values are well reproduced by straight power law fits for $\theta_0 = 0.05$ rad. and $\theta_0 = 0.1$ rad. and reasonably well for $\theta_0 = 0.2$ rad. This can also be seen from the top plots in Figs. \[alpha\_plot\] and \[alphaX\_plot\], which show $\alpha_0$ for each $p$ value for on-axis and on-edge observers. The post-break slopes, on the other hand, are *not* consistent with the theoretically expected temporal index $-p$ [@Sari1999], nor with very smooth gradual transitions [@Kumar2000; @Wei2000; @Wei2002], as can be seen from the tables and the bottom plots in Figs. \[alpha\_plot\] and \[alphaX\_plot\] (see also section \[transition\_duration\_subsection\]). They are steeper, to the extent that they fall well outside even a ten percent margin of the theoretical value. We find a steepening of about $-(0.5 + 0.5p)$ below the cooling break and about $-(0.25 + 0.5p)$ above the cooling break, both leading to post-break slopes of roughly $0.25 - 1.3 p$, although different observer angles, jet opening angles and heuristic descriptions of the break introduce a wide range of temporal indices. This confirms earlier numerical work: steep post-break slopes were first shown from high-resolution simulations for on-axis observers by [@Zhang2009] and for off-axis observers by [@vanEerten2010offaxis]. However, due to the vast increase in numerical resolution provided by the boosted frame approach, this is the first time the post-break slopes have been determined from simulations where the jet break is fully resolved, and the current values can be considered quantitatively accurate.
These slopes should be compared to observational data, such as the systematic study of *Swift* X-ray afterglows presented by [@Racusin2009], which shows a post-break slope, for their sample of afterglows exhibiting ‘prominent jet breaks’, that centers around $\alpha_1 \sim 2$. This difference in slopes means that it is exceedingly difficult, at least for the *Swift* sample and at least for on-axis observers, to reconcile the data with a model of an initially top-hat blast wave decelerating into a constant medium. Even for off-axis observers this becomes problematic, although a number of caveats apply: the jet break might simply be postponed beyond what *Swift* can observe [@vanEerten2010offaxis; @vanEerten2011hiddenswift], or only a fraction of an off-axis jet break is seen (see also Fig. \[observer\_angle\_plot\], discussed below). Instead, the post-break slopes are more consistent with the values normally associated with high-latitude emission (‘region I’ of the ‘canonical light curve’, see @BingZhang2006 [@Racusin2009]), without necessarily implying that they should be interpreted as such, since this interpretation would require extremely narrow jets (embedded in quasi-spherical outflow in order to get regions II-IV of the canonical light curve) and simulations of jets with $\theta_0 \ll 0.05$ rad., the smallest angle discussed in this paper. Possibly, *Swift* GRBs do not predominantly explode into a homogeneous medium but into a different environment (e.g. a stellar wind instead). Alternatively, the jet break might be hidden from view by an additional physical process, such as prolonged injection of energy into the blast wave (see e.g. @Nousek2006 [@Panaitescu2006; @ZhangBing2006; @Panaitescu2012]). ![A comparison of sharp power law fits and synthetic light curves at observer angles $\theta_{obs} = 0$, $0.12$, $0.2$ rad. (top to bottom) for $\theta_0 = 0.2$ rad. Plotted is the case $\nu < \nu_c$.
Other parameters are set as follows: $p = 2.5$, $\epsilon_e = 0.1$, $\epsilon_B = 0.01$, $\xi_N = 1.0$, $z = 0$, $d_{28} = 1$, $n_0 = 1$ cm$^{-3}$, $E_{iso} = 10^{53}$ erg. For clarity of presentation, only half the data points of the synthetic light curves are plotted. Two of the three curves have been scaled by a factor ten, again for presentation purposes.[]{data-label="observer_angle_plot"}](fig9.eps){width="\columnwidth"} In Fig. \[observer\_angle\_plot\] we show light curves and sharp power law fit results for $\theta_{obs} = 0.2$ rad. These illustrate the extent to which sharp power law fits overlap with the data. In practice, off-axis observation can quite easily push the final turnover associated with the jet break beyond the timespan typically covered by *Swift* (i.e. 10 days), especially once nonzero values for redshift $z$ are considered. This can lead either to a *missing jet break* or, when only the early part of the jet break is covered, to a steepening that is far more shallow, if it is detected at all. An example of a jet break that is not fully detected is shown by the green dashed curve in Fig. \[alphaX\_plot\], for $\theta_{obs} = \theta_0 = 0.2$ rad. However, in order to properly quantify these effects for e.g. *Swift*, an approach is required that includes not just synthetic light curves but also accurately models instrument biases and expected measurement errors, similar to the one taken by [@vanEerten2010offaxis; @vanEerten2011hiddenswift]. This will be the topic of a future study. ![Temporal index evolution for $\nu < \nu_c$ (top plot) and $\nu > \nu_c$ (bottom plot), for $p = 2.5$, $\theta_0 = 0.05$, $0.1$, $0.2$ rad and $\theta_{obs} = 0$, $0.6 \theta_0$, $\theta_0$, using the same colors and line styles as in Fig. \[characteristics\_plot\]. The top grey lines indicate the theoretical pre-break value.
The narrow bottom grey lines indicate the ranges of sharp power law values found for the post-break slope, for all opening angles and observer angles except $\theta_{obs} > 0.4 \theta_0$ with $\theta_0 = 0.2$ rad, where the temporal index did not reach a minimum before 26 days. The thick bottom grey lines denote the middle of these ranges.[]{data-label="slopes_plot"}](fig10a.eps "fig:"){width="\columnwidth"} ![Temporal index evolution for $\nu < \nu_c$ (top plot) and $\nu > \nu_c$ (bottom plot), for $p = 2.5$, $\theta_0 = 0.05$, $0.1$, $0.2$ rad and $\theta_{obs} = 0$, $0.6 \theta_0$, $\theta_0$, using the same colors and line styles as in Fig. \[characteristics\_plot\]. The top grey lines indicate the theoretical pre-break value. The narrow bottom grey lines indicate the ranges of sharp power law values found for the post-break slope, for all opening angles and observer angles except $\theta_{obs} > 0.4 \theta_0$ with $\theta_0 = 0.2$ rad, where the temporal index did not reach a minimum before 26 days. The thick bottom grey lines denote the middle of these ranges.[]{data-label="slopes_plot"}](fig10b.eps "fig:"){width="\columnwidth"} ![A comparison between light curves for $\nu > \nu_c$ and $\nu < \nu_c$ and $\theta_0 = 0.05$ rad, $\theta_{obs}$ = 0.0. Other parameters are set as follows: $p = 2.5$, $\epsilon_e = 0.1$, $\epsilon_B = 0.01$, $\xi_N = 1.0$, $z = 0$, $d_{28} = 1$, $n_0 = 1$ cm$^{-3}$, $E_{iso} = 10^{53}$ erg. For clarity of presentation, only half the data points of the synthetic light curves are plotted. The inset plot shows a zoom-in of the late time X-ray curve, without skipping data points of the synthetic light curve.[]{data-label="XvsO_plot"}](fig11.eps){width="\columnwidth"} After the onset of the jet break, the time evolution of the light curves in general does not follow a single power law evolution, as can be seen from Fig. \[slopes\_plot\].
Given that the synthetic light curves consist of 85 (60) data points and were given artificial measurement errors of ten percent, the reduced $\chi^2$ values for the various fits reported in tables \[powerlaw\_fit\_optical\_table\] and \[powerlaw\_X-rays\_table\] demonstrate that power law fit functions nevertheless fit the light curves surprisingly well. Even the $\nu > \nu_c$ fits for $\theta_0 = 0.05$ rad. have a small reduced $\chi^2$ value. The reason that these are nevertheless noticeably higher than the other fits can be inferred from the late time behavior of the temporal indices for the narrow jet in Fig. \[slopes\_plot\]. Above the cooling break, the emission is dominated by a smaller region closer to the shock front than is the case below the cooling break. As a result, the observed flux above the cooling break at any given time consists of contributions from a smaller timespan in emission times. It will therefore take less time for a change in the nature of the evolution of the blast wave to become noticeable than for flux below the cooling break, as illustrated by the comparison shown in Fig. \[XvsO\_plot\]. What is seen for the $\nu > \nu_c$ curve at late times is the onset of the transition to the non-relativistic regime, a consequence of the fact that the smaller the opening angle, the smaller the total energy in the jets (with energy in jet and counterjet $E_j \approx E_{iso} \theta_0^2 / 2$).

Implications for the break times
--------------------------------

![Jet break times averaged over a range of $p$ values for $\nu < \nu_c$ (top plot) and $\nu > \nu_c$ (bottom plot), as determined from sharp power law fits for different observer angles $\theta_{obs}$ and different jet opening angles $\theta_0$. The solid grey curves indicate $\tau \propto (\theta_0 + \theta_{obs})^{8/3}$.
The dashed grey lines indicate break times for an on-axis observer, scaled from the on-axis break time $\tau_{0.05}$ for $\theta_0 = 0.05$ rad., using $\tau = \tau_{0.05} (\theta_{0} / 0.05)^{8/3}$.[]{data-label="break_times_plot"}](fig12a.eps "fig:"){width="\columnwidth"} ![Jet break times averaged over a range of $p$ values for $\nu < \nu_c$ (top plot) and $\nu > \nu_c$ (bottom plot), as determined from sharp power law fits for different observer angles $\theta_{obs}$ and different jet opening angles $\theta_0$. The solid grey curves indicate $\tau \propto (\theta_0 + \theta_{obs})^{8/3}$. The dashed grey lines indicate break times for an on-axis observer, scaled from the on-axis break time $\tau_{0.05}$ for $\theta_0 = 0.05$ rad., using $\tau = \tau_{0.05} (\theta_{0} / 0.05)^{8/3}$.[]{data-label="break_times_plot"}](fig12b.eps "fig:"){width="\columnwidth"} The evolution of the jet break time, as determined using a sharp power law fit, is shown in Fig. \[break\_times\_plot\]. If there were no lateral spreading at all, the jet break would be determined completely by the different edges becoming visible, and as a result the onset $\tau_{b0}$ and end $\tau_{b1}$ of the jet break would be given by $\tau_{b0} \propto (\theta_0 - \theta_{obs})^{8/3}$ and $\tau_{b1} \propto (\theta_0 + \theta_{obs})^{8/3}$ respectively [@vanEerten2010offaxis]. For a jet observed on-edge, the nearest edge is visible already at $\tau = 0$, while the relative angle of the far edge is at its maximum distance of $2 \theta_0$. In reality, the jet break is influenced by jet spreading as well. Also, the intermediate light curve slope change at the onset of the break is not as steep as the final slope change at the end of the break even for pure radial flow. These facts, together with the fact that the onset of the break is usually sufficiently early to be overwhelmed in light curve data (e.g.
from *Swift*) by other early time features such as flares or plateaus, render it likely that in practice it is the end of the jet break rather than the onset of the jet break that will be captured by a broken power law fit to the data. The relation between measured break time and jet opening angle will therefore lie closer to $\tau_{b} \propto (\theta_0 + \theta_{obs})^{8/3}$ than to $\tau_{b} \propto (\theta_0)^{8/3}$, for general observer angle. Although the inferred jet breaks for the synthetic light curves do not fully reach this upper limit, Fig. \[break\_times\_plot\] shows that, when the observer moves noticeably off-axis, they do trace this expected behavior at least for $\theta_0 = 0.05$ rad. and $\theta_0 = 0.1$ rad. The jet break time as a function of observer angle is very noisy for the wide jet with $\theta_0 = 0.2$ rad., mainly because in this case the jet break for observers far off-axis is not fully covered within the timespan of 26 days. For small observer angles, when both onset and end of the break are still fairly close to each other, the two breaks have not yet fully separated and the turnover is still described by a single smooth break centered at $\tau \propto (\theta_{obs})^{8/3}$, as indicated by Fig. \[break\_times\_plot\] and the drop in $\sigma$ values for *sB* fits for increasing $\theta_{obs}$ (as shown in tables \[powerlaw\_fit\_optical\_table\] and \[powerlaw\_X-rays\_table\]). Implications for the transition duration {#transition_duration_subsection} ---------------------------------------- The parameter $\sigma$ in *sB* type fits is a measure of the sharpness of the transition. From $\sigma$ a measure for the duration of the jet break transition can be derived as follows. We define $P$ to mark the point in time where the light curve power law slope is $\alpha = \alpha_0 + P \times (\alpha_1 - \alpha_0)$, or in other words when a fraction $P$ (e.g. 0.90 or 0.50) of the steepening is obtained.
The associated time $\tau_P$ now follows from solving $$\frac{{\mathrm{d}}\log F}{{\mathrm{d}}\log \tau} = P \alpha_1 + (1 - P) \alpha_0$$ for $\tau$, where $F$ is the Beuermann fit function as defined by eq. \[Beuermann\_fit\_function\_equation\]. A direct measure of the transition duration is provided by $\tau_P / \tau_b$, which has the simple analytical form $$\tau_P / \tau_b = \left[ (1 - P) / (P) \right]^{1 / (\Delta \alpha \sigma)},$$ where $\Delta \alpha \equiv \alpha_1 - \alpha_0$. Applying this measure to the fit results tabulated in tables \[powerlaw\_fit\_optical\_table\] and \[powerlaw\_X-rays\_table\] we find that the transition duration is very short, typically no more than a factor of a few in time. For $\nu < \nu_c$, $\tau_P / \tau_b$ is essentially independent of synchrotron slope $p$, with values across the range $p = 2 \ldots 3$ differing by around one percent at most. For $\nu > \nu_c$, the differences between different $p$ values are somewhat larger, with on-axis differences up to 20 percent. We have tabulated the average values, weighted in the same manner as $\tau_b$. Given its weak dependence on $p$, $\tau_P / \tau_b$ is arguably a more insightful measure of the nature of the jet break than $\sigma$. It also allows for a direct comparison with earlier estimates by [@Kumar2000]. Based on analytical modeling, these authors estimate a transition duration of about a decade in time, contradicted by our simulation-based results (see also the discussion in @Granot2007, where it is demonstrated that different analytical transition duration predictions are very sensitive to the precise model assumptions). From an observational perspective, our numerical results are consistent with e.g. the findings of [@Zeh2006], supporting the notion that at least some of the pre-*Swift* bursts discussed by these authors contain jet breaks for explosions in a homogeneous medium.
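The transition-duration measure above can be checked numerically. The sketch below (our own illustration, not the paper's code) evaluates the analytic $\tau_P/\tau_b$ formula and compares it against the log-log slope of a Beuermann-type smoothly broken power law, $F \propto [(\tau/\tau_b)^{-\sigma\alpha_0} + (\tau/\tau_b)^{-\sigma\alpha_1}]^{-1/\sigma}$, which is the standard form of that fit function; the parameter values are arbitrary examples.

```python
import numpy as np

def beuermann(t, Fb, tb, a0, a1, s):
    # Smoothly broken power law: asymptotic slopes a0 (early) and a1 (late),
    # break time tb, sharpness parameter s (sigma in the text).
    x = t / tb
    return Fb * (x**(-s * a0) + x**(-s * a1))**(-1.0 / s)

def transition_time_ratio(P, a0, a1, s):
    # Analytic tau_P / tau_b from the text: [(1 - P)/P]^(1 / (delta_alpha * sigma)).
    return ((1.0 - P) / P)**(1.0 / ((a1 - a0) * s))

# Numeric check: locate where the log-log slope reaches the fraction P
# of the total steepening, and compare with the analytic expression.
a0, a1, s, P = -1.0, -2.5, 5.0, 0.9
target = P * a1 + (1 - P) * a0
t = np.logspace(-2, 2, 200001)                # in units of tau_b
logF = np.log(beuermann(t, 1.0, 1.0, a0, a1, s))
slope = np.gradient(logF, np.log(t))
t_P = t[np.argmin(np.abs(slope - target))]
print(t_P, transition_time_ratio(P, a0, a1, s))   # both ≈ 1.34
```

For these example values the 90 per cent point of the steepening is reached only ~34 per cent in time past the break, illustrating the "factor of a few at most" statement.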
Implications for fit functions ------------------------------ A comparison of the $\chi^2$ fit results for the different fit functions shows that the performance of the different functions is comparable. Smooth power law fits of type *sB* by definition outperform sharp power law fits, since the latter are a special case of the former, with $\sigma \to \infty$. Smooth power law fits of type *sH* perform poorly for on-axis observers, but often slightly outperform the other fit functions for off-axis observers, which is remarkable since fit function *sB* has more free parameters. Which fit function to use in practice will depend on the goal of the fit. If the goal is to obtain a fit as close to the data points and with as few parameters as possible, *sH* is a good starting point. On the other hand, if the aim is to derive model parameters from the data for the type of model discussed in this paper, *sB* or even sharp power laws (*PL*) might be preferable, since especially for observers close to the axis, their $\alpha_0$ values lie consistently closer to theoretical expectations (and model input for the synthetic light curves). Summary and discussion {#summary_section} ====================== In this paper we present light curves for gamma-ray burst afterglows decelerating into a constant density circumburst medium. These light curves have been calculated from high-resolution AMR RHD simulations on a grid that is given a Lorentz boost in the direction of the jet, relative to the origin of the explosion. The advantage of this approach is that the relative Lorentz factors in the outflow are reduced and Lorentz contraction of the shock front no longer presents a numerical resolution issue when blast wave deceleration at early times is calculated. The added complexity introduced by the loss of simultaneity across the moving grid relative to the rest frame of the burster can be dealt with by local inverse Lorentz transformations. 
A linear radiative transfer approach to synchrotron emission through the evolving fluid as represented by a large number of data dumps from the simulation [@vanEerten2009BMscalingcoefficients; @vanEerten2010transrelativistic] is still possible, as has been presented in this paper. The dynamics of narrow and ultra-relativistic jets will be discussed in [@MacFadyen2013]. In the current study we focus on the radiation and the nature of the observed jet break. In a given asymptotic spectral regime, the shape of the light curve is completely determined by the scale-invariant evolution of the spectral breaks and the peak flux. The functions describing these evolutions are characteristic functions of observer angle $\theta_{obs}$ and initial jet half-opening angle $\theta_0$ only and can be scaled between different explosion energies and circumburst densities. Since they are also independent of synchrotron accelerated particle slope $p$, they can be used to generate light curves for arbitrary values of $p$. Generalized scaling relations for arbitrary circumburst density profiles (including ISM and stellar wind) are provided. The time evolutions of the spectral breaks and peak flux change directly following the jet break, although, thanks to the vast improvement in resolution, an earlier reported temporary post-break steepening of the cooling break $\nu_c$ is found to have been resolution-induced. Nevertheless, the temporal behavior of $\nu_c$ for off-axis observers was found to be extremely sensitive to small deviations from radial flow, even at early times and this is likely to leave an imprint in observations, although any specific model (such as a structured jet, @Meszaros1998 [@Rossi2002; @Kumar2003; @Granot2005]) prediction might be hard to disentangle from the effects of changes in the synchrotron emission process (see e.g. @Filgas2011).
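The energy and density scaling mentioned above can be sketched as follows. This is a minimal illustration under an assumption we state explicitly: characteristic times of an impulsive blast wave in a homogeneous medium scale with the Sedov-like combination $(E/n)^{1/3}$, so rescaling explosion energy and circumburst density stretches all characteristic times by a common factor.

```python
def scale_time(t, E_old, n_old, E_new, n_new):
    # Characteristic-time scaling for an impulsive blast wave in a
    # homogeneous medium (assumed (E/n)^(1/3) behavior):
    # t_new = t * (E_new / E_old)^(1/3) * (n_old / n_new)^(1/3)
    return t * (E_new / E_old)**(1.0 / 3.0) * (n_old / n_new)**(1.0 / 3.0)

# example: ten times the energy at fixed density stretches all
# characteristic times (e.g. the jet break time) by 10^(1/3) ≈ 2.154
print(scale_time(1.0, 1e53, 1.0, 1e54, 1.0))
```

Because the stretch factor is common to all characteristic times, a single simulated light curve per $(\theta_0, \theta_{obs})$ pair suffices to cover the whole $(E, n)$ plane.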
The shape of the jet break is systematically surveyed for jet opening angles $\theta_0 = 0.05$, $0.1$, $0.2$ rad., observer angles ranging from on the jet axis to on-edge and $p$ values ranging from 2.01 to 3.0. Pre-break temporal indices are found to be in good agreement with theoretical expectations for purely radial flow. This is partially a consistency check on the computer code, since purely radial BM flow was used to set up the initial conditions of the simulations. On the other hand, the simulations were started from an ultra-relativistic on-axis Lorentz factor $\gamma_0 = 100$ and minor deviations from radial flow will therefore have occurred well before the jet break. For the cases considered, post-break temporal indices are generally far steeper than theoretically expected for a quickly expanding jet. This does not imply exponential jet expansion actually occurred [@vanEerten2012observationalimplications; @MacFadyen2013] as demonstrated by the dependency of the jet break shape on the observer angle, but represents the combined effect of expansion and the edges of the outflow becoming visible. The difference in slopes between the synthetic light curves and those reported for the *Swift* sample [@Racusin2009] means that it is exceedingly difficult, at least for the *Swift* sample and at least for on-axis observers, to reconcile the data with the model of an initially top-hat blast wave decelerating into a constant density medium. Even for off-axis observers this is becoming problematic, although a number of caveats apply: the jet break might be simply postponed beyond what *Swift* can observe [@vanEerten2010offaxis; @vanEerten2011hiddenswift], or only a fraction of an off-axis jet break is seen.
Sharp power law fits confirm that the jet break time is sensitive to the observer angle and increases significantly as the observer moves off-axis, which has implications for the interpretation of afterglow data and inferred energy of the explosion (which will be overestimated when an on-axis observer is assumed, as discussed in @vanEerten2010offaxis). This discrepancy between light curve slopes from ISM simulations and *Swift* (or other instrument) data can in theory be explained by assuming that afterglow blast waves decelerate instead into a stellar wind environment shaped by the progenitor star. The most likely scenario then is one where a jet in a stellar wind environment is viewed almost on-edge, given a random orientation of the jet. The jet-break is generally less steep for a stellar wind environment [@Kumar2000; @Granot2007; @DeColle2012stratified]. A further complication is added by the fact that GRB progenitor stars are not expected to exist in complete isolation, and the stellar wind environment of the star is likely to be shaped by multiple colliding stellar winds [@Mimica2011]. Full results for the stellar wind case computed from a boosted frame will be presented in a follow-up study. Alternatively, the explosion does occur in a homogeneous medium but the jet break is hidden from view by an additional physical process, such as prolonged injection of energy into the blast wave (see e.g. @Nousek2006 [@Panaitescu2006; @ZhangBing2006; @Panaitescu2012]). Different power law fit functions have been used in the literature to describe jet breaks. Comparing sharp power laws, smoothly connected power laws (‘*sB*’, @Beuermann1999) and power law transitions including an exponential term (‘*sH*’, @Harrison1999), we find that all descriptions provide good fits to synthetic light curves, although type *sH* underperforms for on-axis observers and often outperforms the other types for off-axis observers. 
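The observer-angle sensitivity of the break time quoted earlier, $\tau_{b0} \propto (\theta_0 - \theta_{obs})^{8/3}$ for the onset and $\tau_{b1} \propto (\theta_0 + \theta_{obs})^{8/3}$ for the end of the break, can be sketched as below; the function name and the normalization convention (scaling from an assumed on-axis break time for a reference opening angle) are our own illustrative choices.

```python
def break_times(theta0, theta_obs, tau_on_axis, theta_ref=0.05):
    # Onset and end of the jet break for pure radial flow, scaled from an
    # assumed on-axis break time tau_on_axis of a jet with opening angle
    # theta_ref: tau_b0 ∝ (theta0 - theta_obs)^(8/3),
    #            tau_b1 ∝ (theta0 + theta_obs)^(8/3).
    tau_b0 = tau_on_axis * ((theta0 - theta_obs) / theta_ref)**(8.0 / 3.0)
    tau_b1 = tau_on_axis * ((theta0 + theta_obs) / theta_ref)**(8.0 / 3.0)
    return tau_b0, tau_b1

# on-edge observer (theta_obs = theta0): onset at tau = 0 (near edge visible
# immediately), end scaled by (2 theta0)^(8/3)
t0, t1 = break_times(0.05, 0.05, 1.0)
print(t0, t1)   # 0.0 and 2**(8/3) ≈ 6.35
```

The widening gap between $\tau_{b0}$ and $\tau_{b1}$ as $\theta_{obs}$ grows is what stretches and delays the observed break, and hence biases energy estimates that assume an on-axis observer.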
Nevertheless, type *sB* fit functions and sharp power laws yield pre-break results that are the easiest to interpret in terms of the underlying model. The simulation data, light curves and characteristic functions (i.e. the scale-invariant time behavior of peak flux and spectral breaks) in this work will be used to improve the accuracy of simulation-based data fitting methods such as <span style="font-variant:small-caps;">boxfit</span> [@vanEerten2012boxfit]. From the characteristic functions light curves for each asymptotic spectral regime can be reproduced directly, as has been done in this paper and following the approach from [@vanEerten2012scalings]. For the full spectrum, a heuristic description of the sharpness of spectral transitions is required as well (see @Granot2002 [@Leventis2012] for an example of this approach in the spherical case). Alternatively, the simulation output for the blast wave dynamics can be processed using the methods employed for <span style="font-variant:small-caps;">boxfit</span>, albeit with the extra step of transforming to the lab frame. This has the advantage that radiative transfer equations can subsequently be performed very quickly and that no heuristic description of the spectral transitions is needed. As stated earlier, the steepness of the post-break slopes poses a challenge for the *Swift* sample. A true test of the severity of this issue is to compare observational data and synthetic light curves systematically using one of the simulation-based fit approaches described above. This will be the topic of future work. We note that the one afterglow that has already been fitted using the <span style="font-variant:small-caps;">boxfit</span> approach, GRB 990510, has a steep post-break temporal slope compared to those in the *Swift* sample ($F \propto t^{-2.40}$, according to @Stanek1999), which helps to explain how it was possible to obtain a good fit using the ISM model for that particular burst. 
All light curves and spectral break and peak flux evolution functions from this work will be made publicly available on-line at <http://cosmo.nyu.edu/afterglowlibrary>. This research was supported in part by NASA through grant NNX10AF62G issued through the Astrophysics Theory Program, by the NSF through grant AST-1009863 and by Chandra grant TM3-14005X. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. The software used in this work was in part developed by the DOE-supported ASCI/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. Light curves from a boosted frame {#boosted_frame_appendix} ================================= As in previous work [@vanEerten2010transrelativistic; @vanEerten2010offaxis], the radiative transfer equation is solved in the burster frame (the “lab” frame) simultaneously for a large number of rays through the evolving fluid. Because the simulation grid is itself moving with a fixed Lorentz factor, the simulation frame is no longer equal to the lab frame. The consequence of this is that additional Lorentz transformations are necessary when going from the simulation to the burster frame, not only to boost the fluid quantities, but also to take into account the loss of burster frame simultaneity across a single snapshot. As a consequence of the latter, the contributions from a single snapshot to the emission and absorption coefficients of the rays, for a given observer time and angle, no longer lie on a flat intersecting plane (previously labeled ‘equidistant surface’ or ‘EDS’), but on a curved surface. Denoting observer time $t_{obs}$, burster time $t$ and simulation grid time $t'$, we have for each ray on each snapshot the constraint $$t_{obs} = t(t') - R(t'),$$ where $R$ is the distance traveled by the ray in the burster frame parallel to the line of sight.
$R$ is equal to zero when the ray crosses the EDS plane centered on the burster frame origin and oriented perpendicular to the line of sight (i.e. defined such that light emitted from the origin at $t = 0$ will arrive at observer time $t_{obs} = 0$). This plane, which we label ‘EDS0’, is defined in the burster frame and therefore still flat. For any given ray, the relevant coordinate for a given snapshot is $$\vec{q} = \vec{q}_E + R \hat{u}_E = \vec{q}_E + (t - t_{obs}) \hat{u}_E,$$ where $\vec{q}_E$ are the coordinates of the point where the ray crosses EDS0 and $\hat{u}_E$ is a unit vector pointing along the ray to the observer. Writing the vector components of the previous equation explicitly, we get $$\left( \begin{array}{c} q_x \\ q_y \\ q_z \end{array} \right) = \left( \begin{array}{c} q_{Ex} \\ q_{Ey} \\ q_{Ez} \end{array} \right) + (t - t_{obs}) \left( \begin{array}{c} \sin \theta_{obs} \\ 0 \\ \cos \theta_{obs} \end{array} \right). \label{coordinates_equation}$$ Before the radiative transfer calculations are performed, we pre-process the snapshot files to store the local fluid states in terms of burster frame coordinates $(\vec{q}, t)$.
The relevant Lorentz boost equations for a boost of factor $\gamma_S$ and velocity $\beta_S$ along $z$, the direction of the jet, are $$\begin{aligned} t & = & \gamma_S (t' + \beta_S q_z' ), \nonumber \\ q_z' & = & \gamma_S (q_z - \beta_S t).\end{aligned}$$ Combining these with equation \[coordinates\_equation\] allows us to determine which fluid cell to probe for a given ray (determined by its $\vec{q}_E$ coordinates) and given snapshot (determined by its simulation frame time $t'$), leading to: $$\begin{aligned} q_z & = & \frac{q_{Ez}}{1 - \beta_S \cos \theta_{obs}} + \frac{c t' \cos \theta_{obs}}{\gamma_S (1 - \beta_S \cos \theta_{obs})} - \frac{c t_{obs} \cos \theta_{obs}}{1 - \beta_S \cos \theta_{obs}}, \nonumber \\ t & = & \frac{t'}{\gamma_S} + \frac{\beta_S q_z}{c}, \nonumber \\ q_y & = & q_{Ey} \nonumber \\ q_x & = & q_{Ex} + c (t - t_{obs} ) \sin \theta_{obs}.\end{aligned}$$ The distance ${\mathrm{d}}R$ traveled by each ray between two snapshots that are ${\mathrm{d}}t'$ apart is given by $${\mathrm{d}}R = c {\mathrm{d}}t = \frac{c {\mathrm{d}}t'}{\gamma_S (1 - \beta_S \cos \theta_{obs})}.$$ The local emission and absorption coefficients are a function of *comoving* fluid number density $n$, *comoving* fluid energy density $e$ and burster frame fluid velocity $\vec{v}$. The comoving quantities are provided directly by the fluid simulation, since they are independent of the grid velocity. The velocity and Lorentz factor in the burster frame are calculated during the pre-processing of the grid snapshots according to the standard relativistic velocity addition rules. [^1]: Throughout this paper we will use “lab frame” to refer to the frame in which the origin of the explosion and the unperturbed interstellar medium are at rest.
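The coordinate mapping above can be written out directly. The sketch below (an illustration in units with $c = 1$; the function name and inputs are our own) evaluates the boosted-frame relations and verifies the self-consistency requirement of eq. \[coordinates\_equation\], namely that $\vec{q} = \vec{q}_E + (t - t_{obs})\,\hat{u}_E$.

```python
import numpy as np

def burster_coords(qE, t_prime, t_obs, theta_obs, beta_S):
    # Map a ray (crossing EDS0 at qE = (qEx, qEy, qEz)) and a snapshot at
    # simulation-frame time t_prime to burster-frame coordinates (q, t),
    # following the boosted-frame relations in the text (units with c = 1).
    gamma_S = 1.0 / np.sqrt(1.0 - beta_S**2)
    mu = np.cos(theta_obs)
    denom = 1.0 - beta_S * mu
    qz = (qE[2] + t_prime * mu / gamma_S - t_obs * mu) / denom
    t = t_prime / gamma_S + beta_S * qz
    qx = qE[0] + (t - t_obs) * np.sin(theta_obs)
    qy = qE[1]
    return np.array([qx, qy, qz]), t

# self-consistency with the EDS0 geometry:
# q must equal qE + (t - t_obs) * (unit vector along the ray)
qE = np.array([0.3, -0.2, 1.5])
q, t = burster_coords(qE, t_prime=2.0, t_obs=0.4, theta_obs=0.3, beta_S=0.99)
u = np.array([np.sin(0.3), 0.0, np.cos(0.3)])
print(np.allclose(q, qE + (t - 0.4) * u))   # True
```

Solving for $q_z$ first and then back-substituting for $t$ is what removes the implicit dependence between the boost relations and the ray geometry, so each (ray, snapshot) pair maps to exactly one fluid cell.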
[**Monte Carlo model for nuclear collisions\ from SPS to LHC energies**]{} N. S. Amelin$^{a,\dag}$, N. Armesto$^{b,\ddag}$, C. Pajares$^{c,\S}$ and D. Sousa$^{d,\P}$ $^a$ [*Department of Physics, University of Jyväskylä,*]{}\ [*P.O.Box 35, FIN-40351 Jyväskylä, Finland*]{}\ $^b$ [*Departamento de Física, Módulo C2, Planta baja, Campus de Rabanales,*]{}\ [*Universidad de Córdoba, E-14071 Córdoba, Spain*]{}\ $^c$ [*Departamento de Física de Partículas, Universidade de Santiago de Compostela,*]{}\ [*E-15706 Santiago de Compostela, Spain*]{}\ $^d$ [*Laboratoire de Physique Théorique, Université de Paris XI,*]{}\ [*B$\hat{a}$timent 210, F-91405 Orsay Cedex, France*]{}\ Introduction {#intro} ============ With the announcement of the discovery of Quark Gluon Plasma (QGP) at the Super Proton Synchrotron (SPS) at CERN [@qgpan], the experimental heavy ion program moves now to the higher energies of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN. Whether this claim can be considered conclusive or not (see e.g. [@gyulassy]), the most compelling experimental findings at the SPS [@na50; @wa97; @na49; @reanalysis; @phi1; @phi2; @dilep] are interpreted as positive signatures of QGP only when conventional, non-QGP models fail to reproduce them. Therefore, even in the case that QGP has already been obtained, it is most important that conventional models employed at the SPS become generalized for RHIC and LHC: They can be used to describe collisions between less massive nuclei or more peripheral events than those in which QGP is expected, and to establish the background to events with QGP production. On the other hand, the situation with conventional models is not clear at all. The description of a high energy collision between heavy ions is a complex task which involves different physical aspects. Predictions from different models for RHIC and LHC are far from being compatible, see the reviews [@lastcall] and [@noso]. 
For example, the values for central rapidity densities of charged particles coming from different models lie in the ranges $600\div 1500$ for central AuAu collisions at RHIC and $2000\div 8000$ for central PbPb collisions at LHC. In this paper a non-QGP model for collisions between nucleons or nuclei in the energy range going from SPS energies ($\sim 20$ GeV per nucleon in the center of mass) to LHC energies (5.5 TeV per nucleon in the center of mass) is presented (different steps in this direction can be found in [@sfm; @asgabp]). The model is based on the ideas of the Dual Parton Model (DPM) [@dpm] or the Quark-Gluon String Model (QGSM) [@qgsm], considering both soft and semihard components on a partonic level. These elementary partonic collisions lead to the formation of color strings. Collectivity is taken into account considering the possibility of strings in color representations higher than triplet or antitriplet, by means of string fusion, as done in [@sfm; @rqmd] (see related approaches in [@urqmd; @vance]). String breaking leads to the production of secondaries. In this form, the model can be used as an initial condition for subsequent evolution using a transport model, as those of [@rqmd; @urqmd]. Nevertheless, in order to tune the parameters of the model and apply it to nucleus-nucleus collisions, rescattering between secondaries is considered on the basis of $2\longrightarrow 2$ collisions, using a very simple model which allows us just to estimate the effects of such process. The results of the code turn out to agree reasonably well with existing experimental data on total multiplicities, and longitudinal and transverse momentum distributions, and semiquantitatively with strangeness production and stopping power. The paper is organized as follows: In Section \[initial\] string formation will be discussed, both for soft and semihard components, whose separation will be established.
Also in this Section collectivity, considered as string interaction or fusion, will be presented. Hadronization of the produced strings will be formulated in Section \[hadro\]. In Section \[rescatt\] our simple approach to rescattering between secondaries will be presented. A comparison with experimental data will be done in Section \[comp\], and predictions for RHIC and LHC shown in Section \[pred\], together with some discussion on the first RHIC data [@phobos; @phenix]. In the last Section we will summarize our conclusions and briefly compare with other approaches. Initial stage {#initial} ============= Elementary partonic collisions ------------------------------ To compute the number of elementary partonic collisions we have to generate the partonic wave functions of the colliding hadrons. The steps to generate this wave function for the projectile $A$ and target $B$ are the following: First, the impact parameter $b$ of the collision is generated uniformly between 0 and $R_A+R_B$ (in the case of nucleons, the total cross section determines the corresponding radius). Second, the nuclear wave function is computed. Nucleon positions inside the nucleus are distributed in transverse space according to a Woods-Saxon distribution for $A>11$, $$\rho(r) \propto \frac{1}{1+\exp\left[(r-r_n)/a\right]}\ , \label{eq1}$$ with $r_n=1.07 A^{1/3}$ fm and $a=0.545$ fm, and according to a Gaussian distribution for $A \le 11$, with parameters chosen for each nucleus [@nucleus]. Then, Fermi motion is given to the nucleons in the nuclei uniformly in the range $0<p<p_F$, with the maximum Fermi momentum given in the local Thomas-Fermi approximation [@thomas] by $$p_F=h\,\left[3\pi^2 \rho(r)\right]^{1/3}, \label{eq2}$$ with $h=0.197$ fm GeV/c. Now partons are generated inside each nucleon.
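The nucleon-position and Fermi-motion steps above can be sketched as follows. This is a minimal illustration, not the code of the paper: the central density `rho0` and the Thomas-Fermi form $p_F = h\,(3\pi^2\rho)^{1/3}$ are our assumptions (the text fixes only $h = 0.197$ fm GeV/c and the Woods-Saxon parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

def woods_saxon_radii(A, n, rho0=0.17):
    # Rejection-sample n nucleon radii from r^2 * rho(r), with
    # rho(r) = rho0 / (1 + exp((r - r_n)/a)); rho0 (fm^-3) is an assumed
    # normalization (it cancels in the sampling but sets p_F below).
    r_n, a = 1.07 * A**(1.0 / 3.0), 0.545
    r_max = r_n + 5.0 * a
    w_max = r_max**2 * rho0                      # envelope for rejection
    out = []
    while len(out) < n:
        r = rng.uniform(0.0, r_max)
        if rng.uniform(0.0, w_max) < r**2 * rho0 / (1.0 + np.exp((r - r_n) / a)):
            out.append(r)
    return np.array(out), r_n, a

def fermi_momentum(r, r_n, a, rho0=0.17):
    # Local Thomas-Fermi maximum momentum (assumed standard form):
    # p_F = hbar*c * [3 pi^2 rho(r)]^(1/3), with hbar*c = 0.197 fm GeV/c.
    rho = rho0 / (1.0 + np.exp((r - r_n) / a))
    return 0.197 * (3.0 * np.pi**2 * rho)**(1.0 / 3.0)

r, r_n, a = woods_saxon_radii(A=208, n=2000)
pF = fermi_momentum(r, r_n, a)
print(r.mean(), pF.max())   # mean radius of a few fm; p_F largest near the centre
```

Each nucleon then receives a momentum drawn uniformly in $0 < p < p_F(r)$ at its sampled position, so nucleons in the diffuse surface carry less Fermi motion than those in the core.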
Their number is given by a Poisson distribution [@abra], $$W_N=\frac{g(s)^N}{N!}\,e^{-g(s)}\ ,\ \ \ g(s)=g_0\, s^{\Delta/2}, \label{eq3}$$ with $\Delta=0.139$ the pomeron intercept minus 1, $C=3.0$ the quasieikonal parameter which takes into account low mass nucleon dissociation, $\gamma_P=1.77$ GeV$^{-2}$ the pomeron-nucleon vertex, $\sigma_P =3.3$ mb the parton-parton cross section (these parameters determine $g_0$) and $\sqrt{s}$ the center of mass energy for each nucleon-nucleon collision. Parton positions in transverse space (inside a nucleon) are given by a Gaussian according to Regge theory, $$F(b)=\frac{1}{4\pi\lambda}\,\exp\left(-\frac{b^2}{4\lambda}\right)\ ,\ \ \lambda=R_0^2+\alpha^\prime \ln s, \label{eq4}$$ with $R_0^2=3.18$ GeV$^{-2}$ and $\alpha^\prime=0.21$ GeV$^{-2}$ the pomeron slope. Now, one parton from the projectile and one from the target produce an inelastic collision if both are within an area in impact parameter equal to $\sigma_P=2\pi r_P^2$, $r_P=0.23$ fm. In this way, events with no inelastic collisions are elastic, while those with at least one inelastic collision are inelastic. Taking the total cross section given by the quasieikonal model [@quasiei] $$\sigma_{tot}= \sigma_P\, f(z/2)\ ,\ \ \ z=\frac{2C\gamma_P}{\lambda}\,s^{\Delta}\ ,\ \ f(z)=\sum_{k=1}^{\infty}\frac{(-z)^{k-1}}{k\cdot k!}\ , \label{eq5}$$ all cross sections can be computed, see next Subsection (all formulae reduce to the usual eikonal ones with $C=2$). Semihard component ------------------ The inclusion of semihard components, in the form of a two-component model, is needed to reproduce the $p_T$ spectra in hadronic collisions, see Section \[comp\]. In the model this is performed considering that an inelastic collision is hard with probability $$W_h=\frac{C_h\ (s-s_0)^{\Delta_h}}{C_h\ (s-s_0)^{\Delta_h}+s^{\Delta}}\ , \label{eq6}$$ with $\Delta_h=0.50$, $\sqrt{s_0}=25$ GeV and $C_h=0.0035$. A hard collision proceeds through the packages PYTHIA 5.5 + ARIADNE 4.02 + JETSET 7.3 [@pyth; @ariad]. Only gluon-gluon collisions are included in PYTHIA, and the key parameter here is the cut-off in transverse momentum $$p_{T\, min}=3.03+0.11\,\ln \sqrt{s}\ \ {\rm GeV/c}.$$
\[eq7\] The minimum energy for an elementary collision to be accepted by PYTHIA is 20 GeV, and for the global collision the minimum center of mass energy per nucleon is $\sqrt{s_0}=25$ GeV. An event is considered hard if at least one of its inelastic elementary collisions successfully proceeds through PYTHIA. While the concrete choice of the parameters in $p_{T min}$ comes from a fit to experimental data, let us make some comments on its functional form. In our case, an increase of $p_{T min}$ with increasing energy makes possible a smooth transition from the soft to the semihard part of the $p_T$ spectrum. Usually $p_{T min}$ is taken either as a constant or as increasing as a polynomial in the logarithm of $s$ [@hijing; @dpmjet]. It may be argued that the $p_{T min}$ value, which marks the transition from nonperturbative to perturbative QCD (pQCD), is related to the proposed saturation scale $Q_s^2$ [@satur1; @satur2]: below this $Q_s^2$, the number of partons in the hadron wave function cannot grow, as new partons fuse with the existing ones and cannot be resolved individually. Nevertheless, apart from conceptual differences, the dependences of $p_{T min}$ and $Q_s^2$ are not the same: while the first depends only on energy, the second one also depends on the size of the colliding objects ($Q_s^2 \propto A^\alpha$, $\alpha = 1/3 \div 2/3$). Results of the model for the total, inelastic (production) and hard cross sections in pp and $\bar {\rm p}$p collisions at different energies are shown in Fig. \[fig1\] and compared with experimental data for the total cross section [@pdg]. It can be observed that both the total and the production cross sections are too small at low energies, while they reach reasonable values at higher energies. There are three reasons for the discrepancies: First, diffraction is not properly included in the model, so it is difficult to distinguish between production and inelastic cross sections.
Second, no reggeon contribution (decreasing with energy) has been included. Third, at the level of the cross sections no distinction is made between nucleons and antinucleons as projectiles and targets. Addressing the last two points should improve the agreement with data at the energies of the SPS and the Intersecting Storage Rings (ISR). This figure also shows the value of $p_{T min}$ and the mean number of total and hard inelastic collisions per event. String formation and fusion {#strfus} --------------------------- Each soft parton-parton collision gives rise to two strings [@dpm; @qgsm], stretched either between valence quarks and diquarks (for the first collision suffered by a nucleon) or sea quarks and antiquarks (for the subsequent ones). For the latter, their flavors follow the ratio $u:d:s=1:1:0.26$. Hard collisions proceed through PYTHIA as $gg \longrightarrow gg$. For the string ends and hard gluons, the longitudinal momentum fractions are distributed as $$L(x_1,x_2,\dots,x_n)=f_{qq}(x_1)\,f_q(x_2)\cdots f_q(x_n)\,\delta\!\left(1-\sum_{k=1}^n x_k\right). \label{eq8}$$ For soft string ends, the individual momentum distributions are those of the QGSM [@qgsm], $$f_{qq}(x)=x^{3/2},\ \ f_{q(\bar q)}(x)=\frac{1}{\sqrt{x}}\ , \label{eq9}$$ with a lower cut-off $x_{min}=0.3\ {\rm GeV}/\sqrt{s_{NN}}$ to ensure that the strings have enough mass to be projected onto hadrons, $\sqrt{s_{NN}}$ being the center of mass energy per nucleon. For partons involved in hard collisions, the longitudinal momentum fractions are taken by PYTHIA from PDFLIB [@pdflib], with the possibility of considering the difference of parton distributions inside nuclei given by the parametrization EKS98 [@eks] or by a parametrization as $F_{2A}$ [@eqc]. After generating the final gluons, each of them splits into a $(q\bar q)$ pair and strings are stretched between them, according to the standard procedure in PYTHIA [@pyth].
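One way to realize the soft momentum-fraction sampling of eqs. (8)-(9) is sketched below. The paper does not spell out its joint-sampling algorithm, so as an illustrative simplification we inverse-CDF sample the $n-1$ quark fractions from $f_q \propto x^{-1/2}$ on $[x_{min}, 1]$ and assign the remainder to the diquark (consistent with $f_{qq} \propto x^{3/2}$ favoring large $x$), rejecting configurations that violate the cut-off.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_fractions(n, sqrt_s_nn, max_tries=10000):
    # Sample n light-cone fractions summing to 1: n-1 quarks from
    # f_q(x) ∝ x^(-1/2) on [x_min, 1] via the inverse CDF
    #   x = (sqrt(x_min) + u * (1 - sqrt(x_min)))^2 ,
    # with the remainder given to the diquark end. The remainder/rejection
    # scheme is an illustrative choice, not the paper's algorithm.
    x_min = 0.3 / sqrt_s_nn
    for _ in range(max_tries):
        u = rng.uniform(size=n - 1)
        xq = (np.sqrt(x_min) + u * (1.0 - np.sqrt(x_min)))**2
        x_qq = 1.0 - xq.sum()          # enforces sum_k x_k = 1
        if x_qq >= x_min:
            return np.concatenate(([x_qq], xq))
    raise RuntimeError("no valid configuration found")

x = sample_fractions(n=4, sqrt_s_nn=200.0)
print(x, x.sum())   # four fractions, diquark first, summing to 1
```

The $x_{min} = 0.3\ \mathrm{GeV}/\sqrt{s_{NN}}$ cut-off shrinks with energy, so at LHC energies the sea-quark string ends can carry arbitrarily small fractions while the diquark end still tends to dominate.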
The transverse momentum of both partons at the string ends and hard partons, coming from a nucleon which has been wounded $m$ times, is given by a Gaussian: $$T(p_\perp)=\frac{1}{\pi \delta^2} \ \exp{(-p_\perp^2/\delta^2)},\ \ \delta=0.5\ \sqrt{m} \ \ {\rm GeV/c}; \label{eq10}$$ in this way, $p_T$-broadening is taken into account [@cfk]. The number of strings exchanged in one collision is quite low for nucleon-nucleon collisions, but this number increases with energy, size of projectile and target and centrality of the collisions. Strings can be viewed as objects with a certain area, given by the uncertainty relation as $\propto 1/\langle p^2_T \rangle$, in the transverse plane of the collision. When the number of strings is high enough, they begin to overlap and the usual hypothesis in QGSM or DPM of the strings being independent sources of secondary particles is expected to break down. A possible way of considering this is to compute the density of strings in the transverse plane and use two-dimensional percolation as an indicator of the onset of collectivity [@percol1; @percol2]. Percolation takes place when domains of overlapping strings acquire a size of the order of the total available size for the collision. While percolation is a second order phase transition, the option we use in this model, fusion of strings, does not lead to any phase transition [@bpr]. In the model, ordinary strings (i.e. in a triplet representation of SU(3)) fuse[^1] in pairs when their parent partons (those which determined the inelastic collision the strings come from) are within a certain area $\sigma_{fus}=2\pi r^2_{fus}$ in impact parameter space. In the code we consider only fusion of two strings but there is a probability of fusion of more than two. An effective way of taking this into account is to increase the cross section for the fusion of two strings, for which we will take $\sigma_P<\sigma_{fus} = 7.5$ mb ($r_{fus}=0.35$ fm). 
This value is crucial to reproduce the strangeness enhancement in central SS and SAg collisions at SPS [@sfm2]. The fusion can take place only when the rapidity intervals of the strings overlap. It is formally described by allowing partons to interact several times, the number of interactions being the same both for projectile and target. The quantum numbers of the fused strings are determined by the interacting partons and their energy-momentum is the sum of the energy-momenta of their ancestor strings. The color charge of the resulting string ends is obtained according to the SU(3) composition laws: $$\{3\}\otimes\{3\}=\{6\}\oplus\{\bar 3\}\ ,\ \ \{3\}\otimes\{\bar 3\}=\{1\}\oplus\{8\}. \label{eq11}$$ Thus, two triplet strings fuse into either a sextet or an antitriplet string with probabilities 2/3 and 1/3 respectively, and one triplet and one antitriplet string fuse into either a singlet or an octet string with probabilities 1/9 and 8/9 respectively. Two comments are in order: On the one hand and as written above, the fusion of strings implies nothing related to a phase transition. On the contrary, percolation of strings [@percol1] is a non-thermal second order phase transition. In this case, the key parameter is $\eta = \pi r^{2} N / (\pi R_A^{2})$, which is the density of strings $N/(\pi R_A^{2})$ (number of strings $N$ produced in the overlapping area of the collision, $\pi R_A^{2}$ for central collisions) times the transverse size of one string $\pi r^{2}$. The critical point for percolation is $\eta_{c} \simeq 1.12 \div 1.5$ depending on the profile function of the colliding nuclei [@percol2]. With $r \simeq 0.2\div 0.25$ fm, this critical value corresponds to $6 \div 12$ strings/fm$^{2}$. A density of 9 strings/fm$^{2}$ is reached in central PbPb collisions at SPS, in central AgAg collisions at RHIC and in central SS collisions at LHC. For $\eta$ around or greater than $\eta_{c}$, we expect the approximation of fusing just two strings to fail. On the other hand, only fusion of soft strings is considered.
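The quoted critical string densities follow directly from the percolation parameter defined above; the short check below translates $\eta_c = \pi r^2 N/(\pi R_A^2)$ into a critical transverse density of strings for the quoted ranges of $\eta_c$ and string radius $r$.

```python
import numpy as np

def critical_string_density(eta_c, r):
    # Percolation threshold eta_c = (pi r^2) * N / (pi R_A^2) implies a
    # critical transverse string density N / (pi R_A^2) = eta_c / (pi r^2),
    # in strings per fm^2 for r in fm.
    return eta_c / (np.pi * r**2)

# the quoted ranges: eta_c = 1.12 .. 1.5 and r = 0.2 .. 0.25 fm
lo = critical_string_density(1.12, 0.25)
hi = critical_string_density(1.50, 0.20)
print(lo, hi)   # ≈ 5.7 .. 11.9 strings/fm^2, i.e. the 6-12 range in the text
```

A density of 9 strings/fm$^2$ sits inside this window, which is why the two-string fusion approximation is expected to become marginal already in central PbPb collisions at SPS energies.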
Hard strings are not fused, their area being proportional to $1/p_{T}^{2}$. Some effect of the fusion of such strings could appear at LHC energies where, for instance, in central PbPb collisions they account for 32 % of the binary nucleon-nucleon collisions.

Hadronization {#hadro}
=============

Now we consider the breaking of a soft string with color charges $Q$ and $\bar Q$ at its ends (corresponding to a representation $\{N\}$ of SU(3)). In our model, it is due to the production of two (anti)quark complexes with the same color charges $Q$ and $\bar Q$ as those at the ends of the string [@sfm][^2]. The probability rate is given by the Schwinger formula [@schwin] $$W \propto K^{2}_{\{N\}}\ \exp{\left(-\pi M^{2}/K_{\{N\}}\right)}, \label{eq12}$$ where $M$ is the mass of the created complex and $K_{\{N\}}$ is the string tension for the $\{N\}$ representation, proportional to the corresponding quadratic Casimir operator $C^{2}_{\{N\}}$ (as found both in lattice QCD and in the Stochastic Vacuum Model [@bali; @simonov]), i.e. $$K_{\{N\}}=K_{\{3\}}\ \frac{C^{2}_{\{N\}}}{C^{2}_{\{3\}}},\ \ C^{2}_{\{3\}}= 4/3,\ \ C^{2}_{\{6\}}=10/3,\ \ C^{2}_{\{8\}}=3. \label{eq13}$$ For the longitudinal breaking of the string, an invariant area law [@artru] is employed, $$P \propto \exp{(-b_{\{N\}}\, A)},\ \ b_{\{N\}} \propto K_{\{N\}} \propto b\ C^{2}_{\{N\}},\ \ A=p_{+}p_{-} \label{eq14}$$ being the area in light-cone momentum space determined by the breaking point in the center of mass frame of the string. This law gives results quite similar to those of the Lund model [@lund] implemented in JETSET [@pyth]. We proceed as follows: Eq. (\[eq12\]) is used to decide the flavors of the quark and antiquark complexes created. We take $K_{\{3\}}=0.18$ GeV$^2$ and $m_u=m_d=0.23$ GeV/c$^2$, $m_s=0.35$ GeV/c$^2$, and the mass of a complex $(q_1\dots q_l)$ is given by $M(q_1\dots q_l)=\sum_{i=1}^l m_{q_i}$.
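With the parameters just quoted, the flavor choice implied by Eq. (\[eq12\]) reduces, within a fixed representation, to comparing mass-suppressed exponentials. A sketch (our own, with the constants from the text):

```python
import math

K3 = 0.18                                   # GeV^2, triplet string tension
MASSES = {"u": 0.23, "d": 0.23, "s": 0.35}  # GeV/c^2, (anti)quark masses

def flavor_probabilities(K=K3):
    """Relative pair-creation probabilities from the Schwinger exponential
    of Eq. (12); within a fixed representation only the mass term matters."""
    w = {q: math.exp(-math.pi * m * m / K) for q, m in MASSES.items()}
    norm = sum(w.values())
    return {q: wi / norm for q, wi in w.items()}

# for a sextet string, Eq. (13) gives K_{6} = K_{3}*(10/3)/(4/3) = 2.5*K_{3}:
# the weaker exponential suppression enhances strangeness in fused strings
p3, p6 = flavor_probabilities(), flavor_probabilities(2.5 * K3)
```

For a triplet string the suppression factor is $s/u = \exp(-\pi(m_s^2-m_u^2)/K_{\{3\}}) \approx 0.30$.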
Then $p_T$ is given to one of the created complexes and $-p_T$ to the other, according to a Gaussian law $$f(p^2_T) \propto \exp{(-\alpha_{\{N\}}\ p^2_T)}, \label{eq15}$$ with $\alpha_{\{3\}}=\alpha_{\{\bar 3\}}=4$ GeV$^{-2}$ and $$\alpha_{\{N\}}= 2\ \alpha_{\{3\}}\ \frac{C^{2}_{\{3\}}}{C^{2}_{\{N\}}},\ \ \{N\}\neq \{3\},\{\bar 3\}. \label{eq16}$$ Finally a breaking point is sampled according to Eq. (\[eq14\]) in the available phase space, with $b=1.83$ GeV$^{-2}$. Fragmentation proceeds in an iterative way: string fragments are taken as new strings which are broken again, until the mass of the created fragments is too low to allow further breaking (i.e. projection onto hadrons with the right quantum numbers). Then these final fragments (and those fused strings resulting in the singlet $\{1\}$ representation) are treated as quark clusters and decayed according to combinatorics and phase space. Spin of the produced particles is assigned according to SU(2) considerations. The main consequences of string fusion are a strong reduction of multiplicities (due both to energy-momentum conservation and to the reduction of the effective number of sources of secondaries) and a slight increase of $\langle p_T^2\rangle$ [@sfm], an increase in baryon and strangeness production [@sfm; @sfm2], a strong increase in the cumulative effect [@cumu] and a decrease in forward-backward correlations [@foba]. On the other hand, strings produced in hard collisions (only $gg\longrightarrow gg$) are managed by PYTHIA + ARIADNE + JETSET [@pyth; @ariad][^3]. For ARIADNE, PARA(6) is fixed so that the transverse momentum of the radiated gluon is less than that of the hard gluon (i.e. the one participating in the $gg$ scattering), and MSTA(9)=MSTA(14)=MSTA(31)=0. In JETSET, PARJ(41)=1.7 GeV$^{-2}$ and PARJ(42)=0.6 GeV$^{-2}$; besides, PARJ(21)=0.55 GeV/c.
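The breaking-point sampling of Eq. (\[eq14\]) can be sketched with a simple accept-reject step; the variable names and the flat proposal over the available phase space are our assumptions, not a transcription of the actual code:

```python
import math
import random

def sample_breaking_point(w_plus, w_minus, b_eff=1.83, rng=random):
    """Sample a breaking point (p_+, p_-) of a string with light-cone
    momenta (w_plus, w_minus) in its center-of-mass frame, weighted by
    the invariant area law P ~ exp(-b_eff * A), A = p_+ p_- (Eq. (14));
    b_eff is the effective coefficient in GeV^-2 (b = 1.83 for a triplet)."""
    while True:
        p_plus = rng.random() * w_plus      # flat proposal over phase space
        p_minus = rng.random() * w_minus
        if rng.random() < math.exp(-b_eff * p_plus * p_minus):
            return p_plus, p_minus          # accepted with the area-law weight
```

Larger representations (larger $b_{\{N\}}$) push the accepted breaking points toward smaller invariant areas.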
Only production of three flavors (u, d, s) is considered in the present implementation of the model, so it cannot be used to study production of heavier flavors (some steps in this direction were done in [@asgabp]). As a last point, MSTU(4), MSTU(5) and the dimensions in LUJETS have been set to 120000, which has proven to be enough for central PbPb collisions at LHC. All the other parameters and options in these programs have been set to their default values.

Rescattering of secondaries {#rescatt}
===========================

As stated in the Introduction, at this stage the model can be used as an initial condition for further evolution, using either a hydrodynamical model or a microscopic transport code such as RQMD [@rqmd], UrQMD [@urqmd], HSD [@hsd], ART [@art],$\dots$ (see [@asgabp] for a study of the evolution of particle and energy densities). Nevertheless, it is usually assumed that the enhancement of hyperons, antihyperons and $\phi$’s observed in heavy ion experiments at SPS [@wa97; @na49; @reanalysis; @phi1; @phi2] cannot be fully explained by using exclusively a mechanism which goes beyond the independent string hypothesis, such as string fusion [@sfm; @rqmd; @urqmd; @vance; @sfm2] or baryon junction migration [@vance; @dpmjet; @dpmdb; @bj1; @bj2]. In order to reproduce these experimental features, rescattering of particles in the hadron gas (produced particles among themselves and with spectator nucleons) [@rescat] has been introduced in many models. To tune the code and study nucleus-nucleus collisions, we build a very simple rescattering model with no space-time evolution, fitted to SPS data. Results of this approach will be presented, but one must keep in mind that predictions which depend critically on rescattering effects should be taken with much care.
Our implementation of rescattering is extremely naive: we do not try to solve the full Boltzmann transport equation for all particles, but only to build a model, as simple as possible, which gives an estimate of the effects such rescattering would produce. Neither the formation time nor the space-time evolution of secondaries is properly considered; instead, for rescattering to occur we require a common minimum density of particles in the rapidity bin of the considered particles. This minimum density, $dN/(dydp_T)|_{min}=17$, has been chosen so that rescattering does not affect results in nucleon-nucleon collisions up to the highest energies. Rapidity and $p_T$ distances between particles have to be lower than 1.5 units and 0.3 GeV/c respectively. Only two body reactions have been included, with inverse reactions as required by detailed balance. Spin is ignored, and rescattering takes place before resonance decay. All cross sections are taken equal for all reactions (except for $\Omega$ production and nucleon annihilation). Operationally, both products of string breaking and spectators are randomly ordered into an array $(1,\dots,N)$. We compute the possibility of rescattering of the first element with all the others in pairs: $(1,2)$, $(1,3)$, $(1,4)$,$\dots$ As soon as rescattering (either elastic or inelastic) occurs in one of the pairs $(1,j)$, $j=2,\dots,N$, or once we reach pair $(1,N)$ with no scattering having happened, we go to element 2 and examine the pairs $(2,1)$, $(2,3)$, $(2,4)$,$\dots$ This is repeated until the pair $(N,N-1)$ is examined. As particles produced in rescattering of the pair $(i,j)$, $i,j=1,\dots,N$, $i\neq j$, occupy the same places $i,j$ in the array as their ancestors, particles produced by rescattering have a chance to rescatter again. The probability for two particles to scatter in a given inelastic channel is 7 % (except for channels involving $\Omega$’s and nucleon annihilation, where it is 14 and 70 % respectively).
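The pair iteration just described can be summarized in a few lines; the `scatter` callback (returning a product pair or `None`) is our interface choice for illustration:

```python
def rescattering_sweep(particles, scatter):
    """One sweep of the pair iteration over the randomly ordered array:
    for each slot i, pairs (i, j) are tried in order; the first scattering
    found replaces the ancestors in slots i and j (so the products can
    rescatter again in later pairs), and we then move on to slot i + 1."""
    n = len(particles)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            products = scatter(particles[i], particles[j])
            if products is not None:
                particles[i], particles[j] = products
                break                      # done with slot i, go to i + 1
    return particles
```

For instance, a toy rule $\pi N \to K Y$ applied to the array `["pi", "N", "pi"]` yields `["K", "Y", "pi"]` after one sweep.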
For given initial particles, the probability for elastic scattering is taken equal to the sum of the probabilities of all the inelastic channels considered for them. Cross sections (probabilities) are considered energy independent, except for the trivial kinematical thresholds, and isotropic in the center of mass of the colliding secondaries and/or spectators. The considered reactions (together with those for the corresponding antiparticles) [@rescat] can be classified into:

- Light pair, $(q\bar q)$, annihilation to create another light pair, or light quark exchange: $\pi N \to \pi N$, $\pi \pi \to \pi \pi$, $\pi Y \to \pi Y$, $\pi \Xi \to \pi \Xi$, $K N \to K N$ and $ K Y \to K Y$, where $Y = \Sigma, \Lambda$.

- Other considered reactions are: $\pi N \to K Y$, $\pi \pi \to K \bar K$, $\pi Y \to K \Xi$, $\pi \Xi \to K \Omega$ and $\bar K N \to \phi Y$. These reactions can be classified into:

  1. Light pair, $(q\bar q)$, annihilation to create a $(s\bar s)$ pair.

  2. Reactions with baryon exchange (that is, with three lines in the t-channel).

- Reactions with strangeness exchange: $\bar K N \to \pi Y$, $\bar K Y \to \pi \Xi$, $\bar K \Xi \to \pi \Omega$, $K Y \to \phi N$, $K \Xi \to \phi Y$ and $K \Omega \to \phi Y$. This type of process can produce (anti)baryons with several strange (anti)quarks, and these reactions are exothermic.

- Nucleon-antinucleon annihilation into two pions: $N \bar N \to \pi \pi$. This type of reaction has a much larger cross section at low energies than the reactions considered before; for this reason its probability has been chosen ten times larger than the others. This is also an effective way of taking into account final states involving more than two pions.

To simplify, particles produced in rescattering are always projected onto the lowest spin state. Decay of resonances proceeds through the usual JETSET routines, with MSTJ(22)=2, and decay of $\pi^0$’s is forbidden.
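The probability bookkeeping above can be made concrete; the channel labels below are hypothetical, and only the quoted numbers (7 %, 14 %, 70 %, elastic = sum of inelastic) come from the text:

```python
P_DEFAULT, P_OMEGA, P_ANNIHILATION = 0.07, 0.14, 0.70

def channel_probabilities(inelastic_channels):
    """Return (p_elastic, per-channel inelastic probabilities) for a pair:
    7 % per inelastic channel, 14 % for Omega production, 70 % for
    nucleon-antinucleon annihilation; the elastic probability equals
    the sum of the inelastic ones, as prescribed in the text."""
    probs = []
    for name in inelastic_channels:
        if "annihilation" in name:
            probs.append(P_ANNIHILATION)
        elif "Omega" in name:
            probs.append(P_OMEGA)
        else:
            probs.append(P_DEFAULT)
    return sum(probs), probs
```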
The results of our rescattering model on strangeness and baryon/antibaryon production can be summarized in three points: hyperon and $\phi$ enhancement, antinucleon annihilation and a slight increase of the stopping power (the kinematical effects of our rescattering model are very small, due to the applied cuts in rapidity and transverse momentum). Besides, a slight decrease of multiplicities appears, as we will see in the next Section.

Comparison with experimental data {#comp}
=================================

In order to show the quality of the choice of parameters, in this Section we compare the results of the code with experimental data. We also analyze the influence of the different physical mechanisms implemented in the model. From now on and unless otherwise stated, results of the code come from its default version with string fusion, rescattering (which does not affect results in nucleon-nucleon collisions, nor in pA collisions at SPS energies), and GRV 94 LO [@grv94] parton densities with EKS98 [@eks] nuclear corrections.

Hadron-hadron collisions
------------------------

Results of the model for the mean numbers of produced particles in minimum bias pp collisions at $\sqrt{s}=19.4$ and 27.5 GeV are shown in Tables \[tab1\] and \[tab2\] respectively, compared with experimental data. An overall agreement can be observed. Two comments are in order. On the one hand, the influence of fusion in nucleon-nucleon collisions is tiny, apart from a slight increase in antibaryons. On the other hand, the number of both $\Lambda$’s and $\bar \Lambda$’s is overestimated by the model. This is due to the fact that threshold effects, important at these low energies, are treated very roughly in the model (see further comments in the next Subsection). At higher energies the situation improves.
For example, in $\bar {\rm p}$p collisions at $\sqrt{s}=200$ GeV, the mean number of $\Lambda+\bar \Lambda$ in the model is 0.56, to be compared with the experimental result $0.46\pm 0.12$ [@lambdaua5]. In Fig. \[fig2\], rapidity and transverse momentum distributions of negative particles in minimum bias pp collisions at $\sqrt{s}=19.4$ GeV are shown and compared with experimental data. The result is satisfactory. In Figs. \[fig3\] and \[fig4\], pseudorapidity and transverse momentum distributions of charged particles in $\bar {\rm p}$p collisions at $\sqrt{s}=200$ and 1800 GeV are compared with experimental data. The agreement is reasonable, although the multiplicity at 200 GeV seems to be slightly underestimated. The results of the model without the semihard component are also shown in these Figures; they do not describe the $p_T$ distributions, which justifies the inclusion of hard collisions. Besides, the results obtained with different sets of parton distributions, both old [@grv94] and new [@cteq5; @grv98], at leading order or next-to-leading order, are very similar. This fact may look surprising from a pQCD point of view. The main reason is that the cross sections and the number of inelastic collisions in our model are determined by Eqs. (\[eq3\]), (\[eq4\]), (\[eq5\]) and (\[eq6\]), which are independent of the choice of partonic distributions in PYTHIA (this is not so in other models, see e.g. [@ranftcomp]). We also think that the quite high $p_{T min}$ we use in PYTHIA, Eq. (\[eq7\]), and the gluon radiation and fragmentation performed by ARIADNE and JETSET, may explain why no difference is apparently seen in the transverse momentum distributions. In Fig. \[fig5\], the evolution of the mean transverse momentum of charged particles in $\bar {\rm p}$p collisions at Sp$\bar {\rm p}$S is studied, versus the center of mass energy and, for different particles, versus central charged multiplicity.
The trend of the data is reproduced and we find the agreement reasonable (this cannot be achieved without the hard component, as seen in this Figure). In Fig. \[fig6\] the topological cross section for charged particles in the central region is examined at different energies for $\bar {\rm p}$p collisions at Sp$\bar {\rm p}$S; the agreement is also reasonable, considering that the model slightly underestimates multiplicities at 200 GeV but correctly reproduces those at 1.8 TeV, see Figs. \[fig3\] and \[fig4\].

Proton-nucleus and nucleus-nucleus collisions
---------------------------------------------

In Table \[tab3\], results of the model for pA collisions are compared with experimental data on negative multiplicities. An overall agreement is obtained. The reduction of multiplicities due to string fusion can be observed. In Table \[tab4\], mean numbers of produced particles are compared with experimental data for central SS collisions at SPS energies. The agreement is reasonable. Only the numbers of $\Lambda$’s and $\bar \Lambda$’s in the model are significantly below the experimental data. The number of $\Lambda$’s is increased by both string fusion and rescattering, while that of $\bar \Lambda$’s is mainly determined by string fusion alone (see results for PbPb below). In any case, rescattering is seen to have little effect in SS. Let us now discuss PbPb collisions at SPS. In the last year, great excitement has arisen in the heavy ion physics community, related to the possibility that the Quark Gluon Plasma (QGP) has already been obtained at SPS energies [@qgpan]. In particular, several signals were mentioned which point to the existence of the QGP.
Putting aside the abnormal $J/\psi$ suppression and the excess of dileptons found, there are three signals related to baryon and strangeness production, namely: the large enhancement of the (anti)hyperon yields ($\Lambda$, $\Xi$, $\Omega$) in PbPb collisions compared to pPb, observed by the WA97 [@wa97] and NA49 [@na49] Collaborations[^4]; the linear increase of the inverse exponential slope of the $m_{T}$ distributions (’temperature’) in PbPb collisions with the mass of the observed particle, except for the $\Omega$ [@na49; @pt]; and the different behavior of the temperature between pp and AA collisions. These characteristics have been interpreted as the existence of an intrinsic freeze-out temperature and a collective hydrodynamical flow which is gradually developed: first in SS collisions and, more clearly, in PbPb collisions. In this Subsection we will examine some of these points using our model, together with other interesting aspects such as $\phi$ production [@phi1; @phi2], different particle ratios [@ratios] and the stopping power [@stop]. In Fig. \[fig7\] we show our results for the $\Omega$, $\Xi$ and $\Lambda$ yields in pPb and in central PbPb collisions at SPS with four different centralities, together with the experimental data. In order to disentangle the different contributing processes, Fig. \[fig8\] shows the results of the code for central ($b \le 3.2$ fm) PbPb collisions without string fusion and rescattering, with string fusion, and with string fusion and rescattering. A reasonable agreement with the data for PbPb is obtained; only the $\Omega$ yield is some 40 % below the data, and we have some excess of $\bar \Lambda$ and $\bar \Xi^+$, see the next paragraph.
Similar results have been obtained in the Relativistic Quantum Molecular Dynamics model [@rqmd; @lastcall] through a mechanism of color ropes which considers fusion of strings; also in the Ultra Relativistic Quantum Molecular Dynamics model [@urqmd] and in the HIJING model [@vance], by using an ad hoc multiplicative factor in the string tension. The Dual Parton Model [@dpm] can also reproduce the experimental data by considering the possibility of creation of diquark-antidiquark pairs in the nucleon sea, together with the inclusion of diagrams which take into account baryon junction migration [@dpmdb; @bj1; @bj2] (for $\Omega$’s some rescattering still has to be added). String fusion is the main ingredient to obtain an enhancement of $\bar \Lambda$ production and also to reproduce the $\Xi$ data. However, rescattering seems fundamental to get enough $\Omega$’s. Nevertheless, our results for pPb are higher than the data for $\bar \Xi^+$ and $\bar \Lambda$; this last feature looks quite strange, since the $\Lambda$ and $\Xi^-$ yields agree with the data, but we overestimate both $\Lambda$ and $\bar \Lambda$ production in pp collisions at this energy[^5], see Table \[tab1\]. As rescattering plays a minor role in minimum bias pPb collisions, this turns out to be a result of string fusion. Concerning the $\bar \Lambda$, our results are higher than the WA97 data also in PbPb, its production being mainly determined by string fusion and hardly affected by rescattering. This means that our results for PbPb are really an extrapolation in the model from the value of $\bar \Lambda$ production in central SS collisions measured by the NA35 Collaboration, which was used to fix the fusion cross section $\sigma_{fus}$ [@sfm2] (even so, the model underestimates $\bar \Lambda$ production in central SS, see Table \[tab4\]). So, from the point of view of our model, there exists either a large $\bar \Lambda$ annihilation or a conflict between the NA35 data for SS and the WA97 data for PbPb and pPb. In Fig.
\[fig9\] we plot the inverse exponential slopes of the $m_{T}$ distributions for different particles, together with the WA97 experimental data[^6]. A semiquantitative agreement is obtained. In particular, it can be seen that the $\Omega$ slope does not obey the linear increase with mass, both in the model and in the data, and that rescattering slightly increases the temperatures. Concerning $\phi$ enhancement, our integrated yields per event without fusion, with fusion, and with fusion and rescattering are 3.55, 4.20 and 5.35 respectively, in rough agreement with the experimental value, $7.6 \pm 1.1$ [@phi2]. In Fig. \[fig10\] the stopping power is shown, i.e. the $p-\bar p$ rapidity distributions for central PbPb collisions at SPS, compared with the experimental data [@stop], together with the predictions for RHIC and LHC energies. This quantity is essentially determined by the string fusion mechanism; rescattering only plays a minor role. As discussed for strangeness enhancement, it has been pointed out that baryon junction migration [@dpmdb; @bj1; @bj2] enhances the stopping power due to diagrams additional to the usual ones of the Dual Parton Model. The inclusion of these diagrams also explains the SPS data. We have not taken such diagrams into account to avoid double counting, because in the fusion of strings they are partially included in an effective way. In Fig. \[fig11\] the antiproton rapidity distribution in central PbPb collisions is presented and compared to the experimental data [@aprot]; a great suppression of the antiproton yield is seen, due to rescattering. In Table \[tab5\] our results for the ratios between different particles are compared with the experimental data [@ratios] for central PbPb collisions at SPS. We observe an overall, rough agreement with the SPS data, with some excess of $\bar \Lambda$ and $\bar \Xi^+$, see Fig. \[fig7\] and comments above.
Let us emphasize that we obtain a semiquantitative agreement with the experimental data in PbPb for three of the features advocated as signals of QGP production. We are only below the data in $\Omega$ production, by less than a factor of 2. So we think that our rescattering model, though very simple, can be useful as a tool to show the trend of such effects, and at least helps to tune the initial condition which can be used in transport models. Finally, let us comment on multiplicities in PbPb collisions at SPS energies. For a centrality of 5 % (corresponding in the model to $b\leq 3.4$ fm), we get, for $dN^-/dy$ at $y=0$, 265, 250 and 235 without string fusion, with string fusion, and with string fusion and rescattering respectively. Experimentally, the NA49 Collaboration gets $196\pm 10$ [@na49mult], while the WA97 Collaboration gets $178\pm 22$ [@wa97mult]. In view of these data, the code overestimates multiplicities. On the other hand, if we compare the charged multiplicity per participant (wounded) nucleon and pseudorapidity unit at midrapidity versus the number of wounded nucleons in PbPb collisions at SPS with data from the WA98 Collaboration [@wa98cent], the trend of the data seems to be reproduced, while their magnitude is underestimated [@proximo]. In Fig. \[fig12\] we show the rapidity distribution of negatives compared with NA49 data [@na49mult].

Predictions for RHIC and LHC {#pred}
============================

Predictions for pseudorapidity and transverse momentum distributions of charged particles in nucleon-nucleon and central nucleus-nucleus collisions at RHIC ($\sqrt{s}=200$ GeV per nucleon) and LHC ($\sqrt{s}=5.5$ and 14 TeV per nucleon for nucleus-nucleus and nucleon-nucleon collisions respectively) can be seen[^7] in Figs. \[fig13\], \[fig14\] and \[fig15\].
While at SPS the influence of string fusion on multiplicities at midrapidity is of the order of $10\div 15$ %, at RHIC it reaches $30 \div 35$ %. In these Figures, the large influence of the hard contribution at LHC can be observed. Again, the striking fact of the small influence of the parton densities appears, both in nucleon and in nuclear collisions. On the other hand, the scaling of nucleon-nucleon results with the number of wounded nucleons (the Wounded Nucleon Model [@wnm]) gives predictions which lie far below any of those of our model. In Fig. \[fig9\] we plot the inverse exponential slopes of the $m_{T}$ distributions for different particles at RHIC. We see that, compared to the SPS situation, temperatures get higher in all cases, as expected. We present our predictions for different particle ratios at RHIC and LHC in Table \[tab6\]. It can be observed that our results are not very different from those of statistical models [@lastcall; @bm; @stat1; @stat2]. However, strangeness enhancement in our case has nothing to do with thermal and/or chemical equilibrium. The main difference between the predictions of the String Fusion Model and those of statistical models for RHIC and LHC is the overall charged multiplicity, which is respectively 950 and 3100 for the SFM and 1500 and 7600 for statistical models [@noso] (assuming initial temperatures of 500 and 1000 MeV for RHIC and LHC respectively). Besides, predictions for the stopping power at RHIC and LHC energies are presented in Fig. \[fig10\]. Now a pronounced dip appears at midrapidity. Detailed discussions of the first RHIC results will be given elsewhere [@proximo]. Here we simply compare our results with some preliminary data of the PHOBOS [@phobos] and PHENIX [@phenix] Collaborations at RHIC.
For charged particles we obtain $dN/d\eta \mid_{\mid \eta \mid < 1} = 520$ and 585 for the 6 % most central AuAu collisions at $\sqrt{s} = 56$ and 130 GeV per nucleon respectively, to be compared with $408 \pm 12\ {\rm (stat.)} \pm 30\ {\rm (syst.)}$ and $555 \pm 12\ {\rm (stat.)} \pm 35\ {\rm (syst.)}$ ($609 \pm 1\ {\rm (stat.)} \pm 37\ {\rm (syst.)}$) in PHOBOS (PHENIX). Our prediction for $\sqrt{s} = 200$ GeV per nucleon with the same centrality cut is $dN/d\eta \mid_{\mid \eta \mid < 1} = 635$.

Conclusions {#concl}
===========

A Monte Carlo model[^8] for nucleon and nuclear collisions in the energy range going from SPS to LHC has been presented. It is based on a partonic realization of the Regge-Gribov and Glauber-Gribov models and its translation to strings following the DPM/QGSM ideas. A hard component is included to reproduce the high transverse momentum tail of the spectrum. Collectivity is included by considering the possibility of fusion of pairs of strings. Strings are decayed in a conventional way. In order to tune the parameters of the model and apply it to collisions between nuclei, a naive model of rescattering has been introduced. The results of the model turn out to agree reasonably with total multiplicities, and with longitudinal and transverse momentum spectra, in the energy range from SPS to TeVatron. The agreement with strangeness production, temperature behavior and stopping at SPS is semiquantitative. There exist other Monte Carlo models for multiparticle production in nuclear collisions at ultrarelativistic energies (see [@noso] for a review): RQMD [@rqmd], UrQMD [@urqmd], HIJING [@hijing], DPMJET [@dpmjet], HSD [@hsd], NEXUS [@nexus], VNI [@vni], AMPT [@ampt], LUCIFER [@lucifer],$\dots$ Let us examine the main similarities and differences, concerning the stage before rescattering is applied.
Both DPMJET and our model are realizations of the DPM/QGSM which include a hard component, but we introduce string fusion, while DPMJET considers diquark breaking diagrams. RQMD takes into account string fusion (and now UrQMD and HIJING [@vance] do so in a simple way), but no hard part is included in either RQMD or UrQMD. The main difference with HIJING lies in the soft component, which is considered energy independent in HIJING (so that the multiplicity increase with increasing energy is mainly due to the hard component), while in our case it increases as a unitarized supercritical pomeron. VNI is a parton cascade code, in which the initial stage is mainly generated by hard collisions, with no hadronic degrees of freedom (strings). AMPT is a hybrid code, which uses HIJING as the initial condition for a parton cascade and, after hadronization, performs hadronic transport. HSD focuses on the transport of hadronic degrees of freedom; its initial stage does not come from strings stretched between partons of projectile and target, but considers strings as excitations of nucleons in the projectile and target, as in Fritiof [@fritiof]; similar comments can be made for LUCIFER. Finally, NEXUS is based on the DPM/QGSM, trying to solve the problem of energy-momentum conservation for both cross sections and multiparticle amplitudes at the same time. In our model, energy conservation is strictly taken into account only for multiparticle amplitudes. Besides, NEXUS takes into account triple pomeron diagrams, which in our case are effectively included in string fusion. A detailed comparison of the results of the model with the first RHIC results will be presented elsewhere [@proximo]. As future developments, strangeness production should be reconsidered and production of heavier flavors included.
Also fusion of more than two strings, and the possibility of a phase transition such as percolation of strings, are needed in order to improve predictions for LHC and to study the possibility of QGP formation in the framework of string models. We thank M. A. Braun and E. G. Ferreiro, who participated in early stages of this work. We also thank G. S. Bali, A. Capella, K. J. Eskola, A. B. Kaidalov, C. A. Salgado, Yu. M. Shabelski and K. Werner for useful discussions, and J. Stachel for comments on the predictions of the statistical models for RHIC and LHC. N. A. and C. P. acknowledge financial support by CICYT of Spain under contract AEN99-0589-C02 and N. S. A. by Academy of Finland under grant number 48477. N. A. and D. S. also thank Universidad de Córdoba and Fundación Barrié de la Maza of Spain respectively, for financial support. N. A. thanks Departamento de Física de Partículas of the Universidade de Santiago de Compostela, and D. S. the ALICE Collaboration at CERN, for hospitality during stays in which part of this work was completed. Laboratoire de Physique Théorique is Unité Mixte de Recherche – CNRS – UMR n$^{\rm o}$ 8627. [99]{} CERN Press Release, February 10th 2000; U. Heinz and M. Jacob, nucl-th/0002042. M. Gyulassy, Prog. Theor. Phys. Suppl. 140 (2000) 68; D. Zschiesche [*et al.*]{}, nucl-th/0101047. NA50 Collaboration: M. C. Abreu [*et al.*]{}, Phys. Lett. B410 (1997) 327; [*ibid.*]{} 337. WA97 Collaboration: E. Andersen [*et al.*]{}, Phys. Lett. B433 (1998) 209; [*ibid.*]{} B449 (1999) 401. NA49 Collaboration: H. Appelshäuser [*et al.*]{}, Phys. Lett. B444 (1998) 523; Eur. Phys. J. C2 (1998) 661. NA49 Collaboration: R. A. Barton [*et al.*]{}, in [*Proceedings of the Strangeness 2000 Conference*]{} (Berkeley, USA, July 20th-25th 2000). NA50 Collaboration: N. Willis [*et al.*]{}, Nucl. Phys. A661 (1999) 534c. NA49 Collaboration: G. Höhne [*et al.*]{}, Nucl. Phys. A661 (1999) 485c; S. V. Afanasev [*et al.*]{}, Phys. Lett. B491 (2000) 59. CERES Collaboration: G.
Agakichiev [*et al.*]{}, Phys. Rev. Lett. 75 (1995) 1272; Phys. Lett. B422 (1998) 405. S. A. Bass [*et al.*]{}, Nucl. Phys. A661 (1999) 205c. N. Armesto and C. Pajares, Int. J. Mod. Phys. A15 (2000) 2019. N. S. Amelin, M. A. Braun and C. Pajares, Phys. Lett. B306 (1993) 312; Z. Phys. C63 (1994) 507. N. S. Amelin, H. Stöcker, W. Greiner, N. Armesto, M. A. Braun and C. Pajares, Phys. Rev. C52 (1995) 362. A. Capella, U. P. Sukhatme, C.-I. Tan and J. Tran Thanh Van, Phys. Lett. 81B (1979) 69; Phys. Rept. 236 (1994) 225. A. B. Kaidalov and K. A. Ter-Martirosyan, Phys. Lett. B117 (1982) 247. H. Sorge, H. Stöcker and W. Greiner, Ann. Phys. 192 (1989) 266; H. Sorge, M. Berenguer, H. Stöcker and W. Greiner, Phys. Lett. B289 (1992) 6; H. Sorge, Phys. Rev. C52 (1995) 3291. S. Soff, S. A. Bass, M. Bleicher, L. Bravina, E. Zabrodin, H. Stöcker and W. Greiner, Phys. Lett. B471 (1999) 89; M. Bleicher, M. Belkacem, S. A. Bass, S. Soff and H. Stöcker, Phys. Lett. B485 (2000) 213; M. Bleicher, W. Greiner, H. Stöcker and N. Xu, Phys. Rev. C62 (2000) 061901. S. E. Vance, in [*Proceedings of the Strangeness 2000 Conference*]{} (Berkeley, USA, July 20th-25th 2000), nucl-th/0012056. PHOBOS Collaboration: B. B. Back [*et al.*]{}, Phys. Rev. Lett. 85 (2000) 3100. PHENIX Collaboration: K. Adcox [*et al.*]{}, nucl-ex/0012008. L. R. B. Elton, [*Nuclear sizes*]{}, Oxford University Press, Oxford 1961. A. DeShalit and H. Feshbach, [*Theoretical Nuclear Physics, Vol. 1: Nuclear Structure*]{}, John Wiley & Sons, New York 1974. V. A. Abramovski, E. V. Gedalin. E. G. Gurvich and O. V. Kancheli, Sov. J. Nucl. Phys. 53 (1991) 172. A. B. Kaidalov, Sov. J. Nucl. Phys. 45 (1987) 902; Yu. M. Shabelski, Z. Phys. C57 (1993) 409. T. Sjöstrand, Comput. Phys. Commun. 82 (1994) 74. L. Lönnblad, Comput. Phys. Commun. 71 (1992) 15. X.-N. Wang and M. Gyulassy, Comput. Phys. Commun. 83 (1994) 307. J. Ranft, preprint SI-99-5 (hep-ph/9911213); preprint SI-99-6 (hep-ph/9911232); S. Roesler, R. Engel and J. 
Ranft, preprint SLAC-PUB-8740 (hep-ph/0012252). J.-P. Blaizot and A. H. Mueller, Nucl. Phys. B289 (1987) 847; L. V. Gribov, E. M. Levin and M. G. Ryskin, Phys. Rept. 100 (1983) 1; A. H. Mueller and J.-W. Qiu, Nucl. Phys. B268 (1986) 427; J. Jalilian-Marian, A. Kovner, L. McLerran and H. Weigert, Phys. Rev. D55 (1997) 5414; A. H. Mueller, Nucl. Phys. B558 (1999) 285. K. J. Eskola, K. Kajantie, P. V. Ruuskanen and K. Tuominen, Nucl. Phys. B570 (2000) 379; K. J. Eskola, K. Kajantie and K. Tuominen, Phys. Lett. B497 (2001) 39; D. Kharzeev and M. Nardi, nucl-th/0012025. Particle Data Group: D. E. Groom [*et al.*]{}, Eur. Phys. J. C15 (2000) 1. H. Plothow-Besch, [*PDFLIB Version 8.04: User’s Manual*]{}, CERN Program Library Entry W5051 PDFLIB (2000). K. J. Eskola, V. J. Kolhinen and P. V. Ruuskanen, Nucl. Phys. B535 (1998) 351; K. J. Eskola, V. J. Kolhinen and C. A. Salgado, Eur. Phys. J. C9 (1999) 61. K. J. Eskola, J. Qiu and J. Czyzewski, private communication; V. Emel’yanov, A. Khodinov, S. R. Klein and R. Vogt, Phys. Rev. C56 (1997) 2726. A. Capella, E. G. Ferreiro and A. B. Kaidalov, Eur. Phys. J. C11 (1999) 163. N. Armesto, M. A. Braun, E. G. Ferreiro and C. Pajares, Phys. Rev. Lett. 77 (1996) 3736. M. Nardi and H. Satz, Phys. Lett. B442 (1998) 14; H. Satz, Nucl. Phys. A642 (1998) 130; J. Dias de Deus, R. Ugoccioni and A. Rodrigues, Eur. Phys. J. C16 (2000) 537. M. A. Braun, C. Pajares and J. Ranft, Int. J. Mod. Phys. A14 (1999) 2689. N. Armesto, M. A. Braun, E. G. Ferreiro and C. Pajares, Phys. Lett. B344 (1995) 301; E. G. Ferreiro, C. Pajares and D. Sousa, Phys. Lett. B422 (1998) 314. J. Schwinger, Phys. Rev. 82 (1951) 664; E. Brezin and C. Itzykson, Phys. Rev. D2 (1970) 1191. G. S. Bali, Phys. Rev. D62 (2000) 114503; preprint HUB-EP-99-67 (hep-ph/0001312). Yu. A. Simonov, JETP Lett. 71 (2000) 127; V. I. Shevchenko and Yu. A. Simonov, Phys. Rev. Lett. 85 (2000) 1811. X. Artru and G. Mennessier, Nucl. Phys. B70 (1974) 93; X. Artru, Phys. Rept. 97 (1983) 147. B. 
Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, Phys. Rept. 97 (1983) 31. N. Armesto, M. A. Braun, E. G. Ferreiro, C. Pajares and Yu. M. Shabelski, Phys. Lett. B389 (1996) 78; Astropart. Phys. 6 (1997) 327; N. Armesto, E. G. Ferreiro, C. Pajares and Yu. M. Shabelski, Z. Phys. C73 (1997) 309. N. S. Amelin, N. Armesto, M. A. Braun, E. G. Ferreiro and C. Pajares, Phys. Rev. Lett. 73 (1994) 2813. W. Cassing and E. L. Bratkovskaya, Phys. Rept. 308 (1999) 65; Nucl. Phys. A623 (1997) 570. B.-A. Li and C. M. Ko, Phys. Rev. C52 (1995) 2037. A. Capella and C. A. Salgado, Phys. Rev. C60 (1999) 054906; preprint LPT-ORSAY-00-66 (hep-ph/0007236); A. Capella, E. G. Ferreiro and C. A. Salgado, Phys. Lett. B459 (1999) 27. S. E. Vance and M. Gyulassy, Phys. Rev. Lett. 83 (1999) 1735. A. Capella and B. Z. Kopeliovich, Phys. Lett. B381 (1996) 325; B. Z. Kopeliovich and B. Povh, Phys. Lett. B446 (1999) 321; D. Kharzeev, Phys. Lett. B378 (1996) 238; F. W. Bopp, hep-ph/0002190. P. Koch, B. Müller and J. Rafelski, Phys. Rept. 142 (1986) 167. M. Glück, E. Reya and A. Vogt, Z. Phys. C67 (1995) 433. M. Gazdzicki and H. Hansen, Nucl. Phys. A528 (1991) 754; H. Bialkowska, M. Gazdzicki, W. Retyk and E. Skrzypczak, Z. Phys C55 (1992) 491. LEBC-EHS Collaboration: M. Aguilar-Benítez [*et al.*]{}, Z. Phys. C50 (1991) 405. UA5 Collaboration: R. E. Ansorge [*et al.*]{}, Nucl. Phys. B328 (1989) 36. NA5 Collaboration: C. De Marzo [*et al.*]{}, Phys. Rev. D26 (1982) 1019. NA35 Collaboration: H. Strobele [*et al.*]{}, Z. Phys. C38 (1988) 89. UA5 Collaboration: G. J. Alner [*et al.*]{}, Z. Phys. C33 (1986) 1. UA1 Collaboration: C. Albajar [*et al.*]{}, Nucl. Phys. B335 (1990) 261. H. L. Lai [*et al.*]{}, Eur. Phys. J. C12 (2000) 375. M. Glück, E. Reya and A. Vogt, Eur. Phys. J. C5 (1998) 461. CDF Collaboration: F. Abe [*et al.*]{}, Phys. Rev. D41 (1990) 2330. CDF Collaboration: F. Abe [*et al.*]{}, Phys. Rev. Lett. 61 (1988) 1819. D. Petermann, J. Ranft and F. W. Bopp, Z. Phys. C54 (1992) 685. 
E735 Collaboration: T. Alexopoulos [*et al.*]{}, Phys. Rev. D48 (1993) 984. NA35 Collaboration: J. Bartke [*et al.*]{}, Z. Phys. C48 (1990) 191. E154 Collaboration: D. H. Brick [*et al.*]{}, Phys. Rev. D39 (1989) 2484. NA35 Collaboration: T. Alber [*et al.*]{}, Eur. Phys. J. C2 (1998) 643. WA97 Collaboration: F. Antinori [*et al.*]{}, Nucl. Phys. A661 (1999) 481c; Eur. Phys. J. C14 (2000) 633. WA97 Collaboration: I. Králik [*et al.*]{}, Nucl. Phys. A638 (1998) 115c. NA49 Collaboration: G. E. Cooper [*et al.*]{}, Nucl. Phys. A661 (1999) 362c. A. Capella, U. P. Sukhatme, C.-I. Tan and J. Tran Thanh Van, Phys. Rev. D36 (1987) 109. NA49 Collaboration: J. Bächler [*et al.*]{}, Nucl. Phys. A661 (1999) 45c. NA49 Collaboration: H. Appelshäuser [*et al.*]{}, Phys. Rev. Lett. 82 (1999) 2471; P. G. Jones [*et al.*]{}, Nucl. Phys. A610 (1996) 188c. WA97 Collaboration: F. Antinori [*et al.*]{}, Nucl. Phys. A661 (1999) 130c. WA98 Collaboration: M. M. Aggarwal [*et al.*]{}, nucl-ex/0008004. N. Armesto, C. Pajares and D. Sousa, in preparation. A. Bialas, M. Bleszyński and W. Czyz, Nucl. Phys. B111 (1976) 461; A. Bialas, in [*Proceedings of the XIIIth International Symposium on Multiparticle Dynamics*]{}, ed. W. Kittel, W. Metzger and A. Stergiou (World Scientific, Singapore, 1983). P. Braun-Munzinger, I. Heppe and J. Stachel, Phys. Lett. B465 (1999) 15. P. Braun-Munzinger, Nucl. Phys. A661 (1999) 261c. J. Stachel, in [*Proceedings of the XXIXth International Symposium on Multiparticle Dynamics*]{} (Providence, USA, August 9th-13th 1999), to be published by World Scientific. H. J. Drescher, M. Hladik, K. Werner and S. Ostapchenko, Nucl. Phys. Proc. Suppl. 75A (1999) 275; H. J. Drescher, M. Hladik, S. Ostapchenko, T. Pierog and K. Werner, preprint SUBATECH-00-07 (hep-ph/0006247); preprint SUBATECH-00-06 (hep-ph/0007198). K. Geiger and B. Müller, Nucl. Phys. B369 (1992) 600; K. Geiger, Phys. Rev. D47 (1993) 133; Comput. Phys. Commun. 104 (1997) 70. B. Zhang, C. M. Ko, B.-A. Li and Z. 
Lin, Phys. Rev. C61 (2000) 067901; Z. Lin, S. Pal, C. M. Ko, B.-A. Li and B. Zhang, nucl-th/0011059. D. E. Kahana and S. H. Kahana, Phys. Rev. C58 (1998) 3574; [*ibid.*]{} C59 (1999) 1651; nucl-th/0010043. B. Andersson, G. Gustafson and B. Nilsson-Almqvist, Nucl. Phys. B281 (1987) 289. **List of figures:** Upper plot: Results in the model for the total (solid line), production (dashed line) and hard (dotted line) cross sections versus $\sqrt{s}$, compared with experimental data for total cross sections in pp (filled circles) and $\bar {\rm p}$p (open circles) collisions taken from . Lower plot: $p_{T min}$ (solid line) used in the model, and model results for the mean number of total (dashed line) and hard (dotted line) inelastic collisions per event versus $\sqrt{s}$ computed for the same collisions as in the upper plot. The ordinary reggeon contribution, which decreases quickly with increasing energy, is not included, see text. 0.5cm Results of the code for the rapidity distribution (upper plot), and the $p_T$ distribution for particles with $2<y_{lab}<4$ (lower plot), of negative particles in minimum bias pp collisions at $p_{lab}=200$ GeV/c, compared with experimental data . 0.5cm Results of the code for the pseudorapidity distribution (upper plot), and the $p_T$ distribution for particles with $|\eta|<2.5$ (lower plot), of charged particles in minimum bias $\bar {\rm p}$p collisions at $\sqrt{s}=200$ GeV, compared with experimental data . Solid lines are the results with GRV 94 LO parton densities , dashed lines with CTEQ5L , dotted lines with GRV 98 HO and dashed-dotted lines results without semihard contribution. In the $p_T$ distribution, model results have been normalized to experimental data. 0.5cm Results of the code for the pseudorapidity distribution (upper plot), and the $p_T$ distribution for particles with $|\eta|<1$ (lower plot), of charged particles in minimum bias $\bar {\rm p}$p collisions at $\sqrt{s}=1.8$ TeV, compared with experimental data . 
Line convention is the same as in Fig. . In the $p_T$ distribution, model results have been normalized to experimental data. 0.5cm Upper plot: results of the code without hard part (dotted line), without string fusion (dashed line) and with string fusion (solid line) for $\langle p_T \rangle$ of charged particles with $|\eta|<0.5$ in $\bar {\rm p}$p collisions, versus $\sqrt{s}$, compared with UA1 data and a parametrization given in this reference (dashed-dotted line). Lower plot: results of the code for $\langle p_T \rangle$ of $\pi^\pm$ (solid line), $K^\pm$ (dashed line) and $\bar{\rm p}$ (dotted line) in $\bar {\rm p}$p collisions at $\sqrt{s}=1.8$ TeV, versus the pseudorapidity density of charged particles for $|\eta|<3.15$, compared with E735 data for $\pi^\pm$ (circles), $K^\pm$ (squares) and $\bar{\rm p}$ (triangles). 0.5cm Results in the model (solid lines, arbitrarily normalized) for topological cross sections of charged particles with $|\eta|<2.5$ in $\bar {\rm p}$p collisions, compared with experimental data from . Upper curves and data correspond to $\sqrt{s}=0.9$ TeV, those in the middle (multiplied by 0.1) to $\sqrt{s}=0.5$ TeV, and lower curves and data (multiplied by 0.01) to $\sqrt{s}=0.2$ TeV. 0.5cm Yields per unity of rapidity at central rapidity, as a function of the number of wounded nucleons, for $\Lambda$, $\Xi^{-}$ and $\Omega^- + \bar\Omega^+$ (left), and for $\bar p$, $\bar\Lambda$ and $\bar\Xi^{+}$ (right), for pPb collisions and four different centralities in PbPb collisions at SPS energies. Full lines represent our calculation with string fusion, and dashed lines with fusion and rescattering. Experimental data are from the WA97 Collaboration [@wa97]. 
0.5cm Results in the model (dotted line: without fusion, dashed line: with fusion, solid line: with fusion and rescattering) for strange baryon production in central PbPb collisions (5 % centrality) at SPS compared with experimental data from the WA97 Collaboration [@wa97] (triangles) and the NA49 Collaboration [@na49] (squares). 0.5cm Results in the model (filled circles: with fusion, open circles: with fusion and rescattering) for the inverse exponential slope of the $m_{T}$ distributions at midrapidity of different particles versus the mass of the particles in central (5 % centrality) PbPb collisions at SPS, compared with the experimental data of the WA97 Collaboration [@pt] (3.5 % centrality, open squares). We also present our predictions for the same collisions at RHIC energy with fusion, filled triangles, and with fusion and rescattering, open triangles. 0.5cm Results in the model for the $p-\bar p$ rapidity distribution in central (5 % centrality) PbPb collisions at SPS (a), solid line), and RHIC and LHC (b)), dashed and dotted lines respectively), compared with experimental data at SPS [@stop]. 0.5cm Results in the model (dotted line: without fusion, solid line: with fusion, dashed line: with fusion and rescattering) for the $\bar p$ rapidity distribution in central (5 % centrality) PbPb collisions at SPS, compared with experimental data [@aprot]. 0.5cm Results in the model for the rapidity distribution of negative particles in central (5 % centrality) PbPb collisions at $\sqrt{s}=17.3$ GeV per nucleon, without fusion (dotted line), with fusion (dashed line) and with fusion and rescattering (solid line), compared with data from NA49 . 0.5cm Results of the code for the pseudorapidity distributions (upper plots), and the $p_T$ distributions for particles with $|\eta|<2.5$ (lower plots), of charged particles in central ($b\leq 3.2$ fm) AuAu collisions at $\sqrt{s}=200$ GeV per nucleon. 
In the plots on the left, solid lines are results with EKS98 parametrization of parton densities inside nuclei, dashed lines with a parametrization as $F_{2A}$ , and dotted lines without modification of parton densities inside nuclei. In the plots on the right, solid lines are results without string fusion, dashed lines with string fusion, dotted lines are nucleon-nucleon results at the same energy, scaled by the number of wounded nucleons (344.6/2), and dashed-dotted lines are results with string fusion and rescattering. 0.5cm Results of the code for the pseudorapidity distribution (upper plot), and the $p_T$ distribution for particles with $|\eta|<2.5$ (lower plot), of charged particles in minimum bias pp collisions at $\sqrt{s}=14000$ GeV. Line convention is the same as in Fig. (the dashed-dotted line is absent). 0.5cm Results of the code for the pseudorapidity distributions (upper plots), and the $p_T$ distributions for particles with $|\eta|<2.5$ (lower plots), of charged particles in central ($b\leq 3.2$ fm) PbPb collisions at $\sqrt{s}=5500$ GeV per nucleon. In the plots on the left, line convention is the same as in Fig. left, but results have been obtained without rescattering and the dashed-dotted lines show results without semihard contribution. In the plots on the right, solid lines are results without string fusion, dashed lines with string fusion, dotted lines are nucleon-nucleon results at the same energy, scaled by the number of wounded nucleons (382.1/2), and dashed-dotted lines are results with string fusion and rescattering. **List of tables:** Results in the model for mean multiplicities of different particles in minimum bias pp collisions at $p_{lab}=200$ GeV/c, without and with string fusion, compared with experimental data . 0.5cm Results in the model for mean multiplicities of different particles in minimum bias pp collisions at $\sqrt{s}=27.5$ GeV, without and with string fusion, compared with experimental data . 
0.5cm Results in the model for mean multiplicities of negative particles in minimum bias pA collisions at $p_{lab}=200$ GeV/c, without and with string fusion, compared with experimental data for pS , pAr and pXe , and pAg and pAu . 0.5cm Results in the model for mean multiplicities of different particles in central ($b\leq 1.3$ fm) SS collisions at $\sqrt{s}=19.4$ GeV per nucleon, compared with experimental data . Results are presented without fusion (NF), with fusion (F), and with fusion and rescattering (FR). 0.5cm Results in the model for different particle ratios at midrapidity in central (30 % centrality) PbPb collisions at SPS, compared with experimental data , following the same convention as in Table . 0.5cm Results in the model for different particle ratios at midrapidity in central ($b\leq 3.2$ fm) AuAu collisions at RHIC and PbPb collisions at LHC, following the same convention as in Table . For comparison, results from other models (Quark Coalescence Model (QCM) , Rafelski and B-M ) for RHIC are included. 
**Figures:**

**Tables:**

                   No fusion   Fusion   Experiment
  ---------------- ----------- -------- ---------------------
  charged          7.89        7.81     $7.69 \pm 0.06$
  negatives        2.95        2.90     $2.85 \pm 0.03$
  p                1.18        1.19     $1.34 \pm 0.15$
  $\pi^+$          3.40        3.33     $3.22 \pm 0.12$
  $\pi^-$          2.69        2.63     $2.62 \pm 0.06$
  $\pi^0$          3.73        3.68     $3.34 \pm 0.24$
  $K^+$            0.31        0.32     $0.28\pm 0.06$
  $K^-$            0.18        0.18     $0.18 \pm 0.05$
  $\Lambda$        0.223       0.231    $0.096 \pm 0.010$
  $\bar \Lambda$   0.029       0.033    $0.0136 \pm 0.0040$
  $\bar {\rm p}$   0.059       0.070    $0.05 \pm 0.02$

  : []{data-label="tab1"}

                   No fusion   Fusion   Experiment
  ---------------- ----------- -------- -------------------
  p                1.20        1.21     $1.20 \pm 0.12$
  $\pi^+$          4.04        3.94     $4.10 \pm 0.26$
  $\pi^-$          3.32        3.23     $3.34 \pm 0.20$
  $\pi^0$          4.47        4.38     $3.87 \pm 0.28$
  $K^+$            0.38        0.38     $0.33 \pm 0.02$
  $K^-$            0.25        0.24     $0.22 \pm 0.01$
  $\Lambda$        0.245       0.251    $0.13 \pm 0.01$
  $\bar \Lambda$   0.045       0.049    $0.020 \pm 0.005$
  $\bar {\rm p}$   0.088       0.100    $0.063 \pm 0.002$

  : []{data-label="tab2"}

        No fusion   Fusion   Experiment
  ----- ----------- -------- ----------------
  pS    5.01        4.86     $5.10\pm 0.20$
  pAr   5.31        5.12     $5.39\pm 0.17$
  pAg   6.57        6.28     $6.2\pm 0.2$
  pXe   6.89        6.56     $6.84\pm 0.13$
  pAu   7.54        7.16     $7.0\pm 0.4$

  : []{data-label="tab3"}

                    NF      F       FR      Experiment
  ----------------- ------- ------- ------- ---------------
  negatives         108.2   101.3   100.7   $98\pm 3$
  $K^+$             9.7     10.4    10.8    $12.5\pm 0.4$
  $K^-$             7.1     7.2     7.4     $6.9\pm 0.4$
  $\Lambda$         5.0     5.9     6.0     $9.4 \pm 1.0$
  $\bar \Lambda$    0.4     1.1     1.2     $2.2 \pm 0.4$
  $\bar {\rm p}$    0.82    3.23    2.80
  $\Xi^-$           0.024   0.186   0.205
  $\bar \Xi^+$      0.028   0.097   0.102
  $\Omega^-$        0.001   0.007   0.010
  $\bar \Omega^+$   0.001   0.005   0.007

  : []{data-label="tab4"}

                                 NF     F      FR     Experiment
  ------------------------------ ------ ------ ------ -------------------
  $\bar \Lambda/\Lambda$         0.14   0.41   0.34   $0.128 \pm 0.012$
  $\bar \Xi^{+}/\Xi^{-}$         1.12   0.52   0.45   $0.266 \pm 0.028$
  $\bar \Omega^{+}/\Omega^{-}$   0.75   0.88   0.59   $0.46 \pm 0.15$
  $\Xi^{-}/\Lambda$              0.01   0.06   0.08   $0.093 \pm 0.007$
  $\bar \Xi^{+}/\bar \Lambda$    0.08   0.07   0.10   $0.195 \pm 0.023$
  $\Omega/\Xi$                   0.05   0.05   0.11   $0.195 \pm 0.028$

  : []{data-label="tab5"}

                                 RHIC(F)   RHIC(FR)   QCM    Rafelski                 B-M    LHC(F)   LHC(FR)
  ------------------------------ --------- ---------- ------ ------------------------ ------ -------- ---------
  $\bar \Lambda/\Lambda$         1.01      0.90       0.69   $0.49 \pm 0.15$          0.91   1.00     0.98
  $\bar \Xi^{+}/\Xi^{-}$         0.96      0.97       0.83   $0.68\pm 0.15$           1.0    0.98     0.95
  $\bar \Omega^{+}/\Omega^{-}$   1.00      1.25       1.0    1.0                      1.0    0.76     1.03
  $\Xi^{-}/\Lambda$              0.10      0.15       $-$    $0.18\pm 0.02$           0.13   0.09     0.25
  $\bar \Xi^{+}/\bar \Lambda$    0.10      0.16       $-$    $0.25 \pm 0.03$          0.14   0.09     0.24
  $\Omega/\Xi$                   0.07      0.26       $-$    $0.14\pm 0.03$           0.20   0.05     0.40
  $\bar \Lambda/\bar p$          0.40      0.71       $-$    $2.4 \pm 0.3$            0.52   0.35     0.82
  $\bar p / p$                   0.93      0.90       0.58   $0.34^{+0.37}_{-0.12}$   0.90   1.00     1.04

  : []{data-label="tab6"}

[^1]: A similar mechanism exists in RQMD , called color ropes.

[^2]: This possibility is the dominant one for strings formed by fusion of two triplet strings . For higher color representations, production of quark complexes with color charges $Q^\prime<Q$ begins to dominate. This option is taken in RQMD ; nevertheless, the close similarities in the consequences of string fusion in both approaches strongly suggest that the difference in the breaking mechanism can be compensated by a different choice of the fragmentation parameters.

[^3]: Although it is not relevant for the generation of momenta of final gluons, in PYTHIA and JETSET the value of $\Lambda_{QCD}$ has been taken from the corresponding set of parton densities for 5 flavors.

[^4]: A recent reanalysis [@reanalysis] of $\Xi$ data done by the NA49 Collaboration gives yields at midrapidity which are in much closer agreement to the WA97 [@wa97] results than the previous analysis of NA49 [@na49].
[^5]: In our opinion, the comparison of (anti)hyperon nucleus-nucleus data with those in nucleon-nucleon collisions should be taken with caution at SPS, because at this relatively low energy the nucleon-nucleon value rises sharply with increasing energy due to the $t_{min}$- and delayed threshold effects [@delthr], which usually are not properly implemented in models. [^6]: The fits have been performed in the same $m_T$ regions as WA97 did . For statistical reasons, we compare the slopes in the model for yields integrated over all rapidities, with experimental data taken in the central rapidity region. [^7]: About the reliability of predictions for RHIC and LHC, see comments in the last paragraph of Subsection and in the first paragraph of Section . [^8]: The code, called psm-1.0, has been written in Fortran 77 and can be taken as an uuencoded file, containing instructions for installation and use, by anonymous ftp from ftp://ftp.uco.es/pub/fa1arpen/, or from the web sites http://www.uco.es/$\,\tilde{ }\,$fa1arpen/ or http://fpaxp1.usc.es/phenom/, or requested from the authors.
--- abstract: | We consider the regularity question of solutions for the dynamic initial-boundary value problem for the linear relaxed micromorphic model. This generalized continuum model couples a wave-type equation for the displacement with a generalized Maxwell-type wave equation for the micro-distortion. Naturally, solutions are found in ${\rm H}^1$ for the displacement $u$ and ${\rm H}(\operatorname{Curl})$ for the micro-distortion $P$. Using energy estimates for difference quotients, we improve this regularity. We show ${\rm H}^2_{\rm loc}$–regularity for the displacement field, ${\rm H}^1_{\rm loc}$–regularity for the micro-distortion tensor $P$, and that ${\rm Curl}\,P$ is ${\rm H}^1$–regular if the data is sufficiently smooth. *Mathematics Subject Classification*: 35M33, 35Q74, 74H20, 74M25, 74B99 *Keywords*: tangential trace, extension operator, generalized continua, local regularity author: - 'Sebastian Owczarek[^1] and Ionel-Dumitrel Ghiba[^2] and Patrizio Neff[^3]' title: '[**A note on local higher regularity in the dynamic linear relaxed micromorphic model**]{}' --- Introduction ============ Generalized continuum theories like the micromorphic or Cosserat model have a long history [@Mindlin64; @Eringen99]. These models are endowed with additional degrees of freedom (as compared to standard linear elasticity) which are meant to capture effects from a microscale on a continuum level. In the micromorphic family, each macroscopic material point is displaced with the classical displacement $u:\Omega\times[0,T]\rightarrow\mathbb{R}^3$ and attached to the macroscopic point there is an affine micro-distortion field $P:\Omega\times[0,T]\rightarrow\mathbb{R}^{3\times3}$ describing the interaction with the microstructure. The equations of motion are obtained from Hamilton’s principle and the introduction of suitable kinetic and elastic energy expressions.
Typical for the classical micromorphic model is a quadratic curvature term $\frac{1}{2}\lVert \nabla P\rVert^2$, leading to a dynamic problem of the type $$\begin{aligned} \label{1} u_{,tt}&=&\operatorname{Div}\big(\nabla u-P\big)+f=\Delta u-\operatorname{Div}P+f,\\[1ex] P_{,tt}&=&(\nabla u-P)- \operatorname{sym}P+\Delta P+M.\notag\end{aligned}$$ Unique solutions are found in classical Hilbert spaces, $u\in {\rm H} ^1(\Omega, \mathbb{R}^3)$ and $P\in {\rm H} ^1(\Omega, \mathbb{R}^{3\times 3})$, under suitable initial and boundary conditions; for these models [@picard; @IesanNappa2001], higher regularity follows from the standard approach for the wave equation. Departing from this framework, Neff and his co-workers have introduced the so-called relaxed micromorphic model [@NeffGhibaMicroModel]. Here, the “curvature energy” $\frac{1}{2}\lVert \nabla P\rVert^2$ is replaced by $\frac{1}{2}\lVert \operatorname{Curl}P\rVert^2$, which represents a sort of “relaxation”, since the interaction strength of the microstructure with itself becomes much weaker. The typical set of equations turns into $$\begin{aligned} \label{2} u_{,tt}&=\operatorname{Div}\big(\nabla u-P\big)+f,\\[1ex] P_{,tt}&=(\nabla u-P)- \operatorname{sym}P-\operatorname{Curl}\operatorname{Curl}P+M,\notag\end{aligned}$$ under suitable initial and boundary conditions, where the boundary condition for $P$ has to be a tangential one. The second equation can be seen as a generalized Maxwell problem due to the $\operatorname{Curl}\operatorname{Curl}$-operator. Unique solutions are now found in the natural setting $u\in {\rm H} ^1(\Omega, \mathbb{R}^3)$ and $P\in {\rm H} (\operatorname{Curl}, \mathbb{R}^{3\times 3})$, see [@GhibaNeffExistence; @Sebastian1]. The current interest in the relaxed micromorphic model mainly stems from the fact that it is able to describe frequency-band gaps as observed in real meta-materials (see e.g.
[@blanco2000large; @liu2000locally] and [@MadeoNeffGhibaW; @madeo2016reflection; @d2017panorama; @aivaliotis2019microstructure; @barbagallo2019relaxed; @aivaliotisfrequency; @d2019effective]). When it comes to the numerical implementation, it would be natural to use the pair of function spaces ${\rm H} ^1(\Omega, \mathbb{R}^3)\times {\rm H} (\operatorname{Curl}, \mathbb{R}^{3\times 3})$. If, on the other hand, a standard ${\rm H} ^1(\Omega, \mathbb{R}^3)\times {\rm H} ^1(\Omega, \mathbb{R}^{3\times 3})$ framework is used (which is more easily available from relevant software packages), the occurring approximation error hinges on the local regularity of solutions. Indeed, in this paper we show (Theorem \[mr\]) that $u\in {\rm H}^2_{\rm loc}(\Omega)$ and $P\in {\rm H}^1_{\rm loc}(\Omega)$ can be achieved under suitable regularity assumptions on the data. This establishes convergence of standard numerical FEM-implementations in the interior and constitutes a major motivation for our work. While in the static case this kind of improved regularity (from ${\rm H}(\operatorname{Curl})$ to ${\rm H}^1_{\rm loc}(\Omega)$) for the micro-distortion field $P$ is illusory, the dynamic formulation provides much more control of the appearing fields through the kinematic terms. In a previous paper [@Sebastian1], starting from some results established by Alonso and Valli [@tantrace], we have seen that for all $G\in \widetilde{\chi}_{\partial\Omega}:= \{G\in {\rm H}^{\frac{1}{2}}(\partial\Omega)\,\,;\,\, \bigl\langle G_i\big|_{\partial\Omega}, n\bigr\rangle=0\}$ there exists an extension $\widetilde{G}\in {\rm H}(\operatorname{Curl};\Omega)$ such that $\operatorname{Curl}\operatorname{Curl}\widetilde{G}=0$ and which actually belongs to ${\rm H}^1(\Omega)$.
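The regularity gain carried by such an extension can be traced back to the vector identity $\operatorname{curl}\operatorname{curl} v = \nabla(\operatorname{div} v) - \Delta v$: a field with $\operatorname{curl}\operatorname{curl}\widetilde{G}=0$ and $\operatorname{div}\widetilde{G}=0$ is componentwise harmonic, hence smooth in the interior. A minimal pure-Python sketch (the sample field and evaluation point are our own, purely illustrative choices) checks the identity by central differences:

```python
def pd(f, i, x, h=1e-3):
    # central-difference partial derivative of scalar f with respect to x[i]
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

# sample smooth field v(x, y, z) = (x*y, y*z, z*x)
v = [lambda x: x[0] * x[1], lambda x: x[1] * x[2], lambda x: x[2] * x[0]]

def curl(w, x):
    # curl of a vector field w given as three scalar callables
    return [pd(w[2], 1, x) - pd(w[1], 2, x),
            pd(w[0], 2, x) - pd(w[2], 0, x),
            pd(w[1], 0, x) - pd(w[0], 1, x)]

def curl_curl(w, x):
    comps = [lambda y, j=j: curl(w, y)[j] for j in range(3)]
    return curl(comps, x)

def grad_div_minus_laplace(w, x):
    div = lambda y: sum(pd(w[k], k, y) for k in range(3))
    lap = [sum(pd(lambda y, k=k, j=j: pd(w[k], j, y), j, x) for j in range(3))
           for k in range(3)]
    return [pd(div, k, x) - lap[k] for k in range(3)]

p = [0.3, 0.7, 1.1]
lhs = curl_curl(v, p)
rhs = grad_div_minus_laplace(v, p)
print(lhs, rhs)  # both close to [1.0, 1.0, 1.0] for this field
```

For the chosen polynomial field the central differences are exact up to rounding, so the two sides agree to machine-level accuracy.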
This result is useful in order to prove that the initial-boundary value problem with non-homogeneous boundary condition admits a unique solution $(u, P)$ with the regularity: $ u\in {\rm C}^1([0,T);{\rm H}^1(\Omega ))\,, u_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega ))\,,$ $ P\in {\rm C}^1([0,T); {\rm H}(\operatorname{Curl};\Omega )),\ P_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega)), $ $ \operatorname{Curl}\operatorname{Curl}P\in {\rm C}((0,T);{\rm L}^2(\Omega )), $ for all times $T>0$. Moreover, we have shown that any extension $\widetilde{v}\in{\rm H}(\operatorname{curl};\Omega)$ of $v\in \tilde{{\raisebox{3pt}{$\chi$}}}_{\partial\Omega}$ is such that $\nabla\operatorname{curl}\tilde {v}\in {\rm L}^2(\Omega)$, see [@Sebastian1]. In the present paper, starting from this remark and by using some standard techniques, we prove that under suitable assumptions on the data, the solution $(u, P)$ is in fact smoother, i.e. $ u\in {\rm C}((0,T);{\rm H}^2_{\rm loc}(\Omega))\,, P\in {\rm C}((0,T);{\rm H}^1_{\rm loc}(\Omega)) \mathrm{\,\,and} \ \operatorname{Curl}P\in {\rm C}((0,T);{\rm H}^1(\Omega))\,, $ for all times $T>0$. We point out that this result may seem surprising, since we have no information on ${\rm Div} \,P$ in $\Omega$ and not even on $P_i\cdot n$ ($i=1,2,3$) on $\partial \Omega$. The relaxed micromorphic model ============================== We consider $\Omega$ to be a connected, bounded, open subset of $\mathbb{R}^3$ with a ${\rm C}^{1,1}$ boundary $\partial\Omega$, and $T > 0$ a fixed length of the time interval. The domain $\Omega$ is occupied by a micromorphic continuum whose motion is referred to a fixed system of rectangular Cartesian axes $Ox_i$ $(i=1,2,3)$. Notations --------- Throughout this paper [(if we do not specify otherwise)]{} Latin subscripts take the values $1,2,3$. We denote by $\mathbb{R}^{3\times 3}$ the set of real $3\times 3$ matrices.
For all $X\in\mathbb{R}^{3\times3}$ we set ${\rm sym}\, X=\frac{1}{2}(X^T+X)$ and ${\rm skew}\, X=\frac{1}{2}(X-X^T)$. The standard Euclidean scalar product on $\mathbb{R}^{3\times 3}$ is given by $\langle {X},{Y}\rangle_{\mathbb{R}^{3\times3}}=\operatorname{tr}({X Y^T})$, and thus the Frobenius tensor norm is $ \lVert {X}\rVert^2=\langle{X},{X}\rangle_{\mathbb{R}^{3\times3}}$. In the following we omit the index $\mathbb{R}^{3\times3}$. The identity tensor on $\mathbb{R}^{3\times3}$ will be denoted by ${\mathbbm{1}}$, so that $\operatorname{tr}({X})=\langle{X},{{\mathbbm{1}}}\rangle$. Typical conventions for differential operations are implied, such as a comma followed by a subscript to denote the partial derivative with respect to the corresponding Cartesian coordinate, while $t$ after a comma denotes the partial derivative with respect to time. A matrix having the three column vectors $A_1,A_2, A_3$ will be written as $ (A_1\,|\, A_2\,|\,A_3). $ We denote by $u:\Omega\times [0,T]\rightarrow {{\mathbb R}}^3$ the displacement vector of the material point, while $P:\Omega\times [0,T]\rightarrow {{\mathbb R}}^{3\times 3}$ describes the substructure of the material, which can rotate, stretch, shear and shrink (the micro-distortion). For vector fields $u=\left( u_1, u_2,u_3\right)$ with $u_i\in {\rm H}^{1}(\Omega)$, $i=1,2,3$, we define $ \nabla \,u:=\left( \nabla\, u_1\,|\, \nabla\, u_2\,|\, \nabla\, u_3 \right)^T, $ while for tensor fields $P$ with rows in ${\rm H}({\rm curl}\,; \Omega)$, i.e.
$ P=\begin{pmatrix} (P^T.e_1)^T\,|\, (P^T.e_2)^T\,|\, (P^T.e_3)^T \end{pmatrix}^T$, $P^T.e_i\in {\rm H}({\rm curl}\,; \Omega)$, $i=1,2,3$, we define $${\rm Curl}\,P:=\begin{pmatrix} {\rm curl}\, (P^T.e_1)^T\,|\, {\rm curl}\, (P^T.e_2)^T\,|\, {\rm curl}\, (P^T.e_3)^T \end{pmatrix}^T .$$ The corresponding Sobolev spaces for the second order tensor fields $P$, ${\rm Curl}\, P$ and $\nabla\, u$ will be denoted by $ {\rm H}^1(\Omega) \ \ \text{and}\ \ {\rm H}({\rm Curl}\,; \Omega)\, , $ and $ {\rm H}^1_0(\Omega) \ \ \text{and}\ \ {\rm H}_0({\rm Curl}\,; \Omega)\, , $ respectively. The initial-boundary value problem in the linear relaxed micromorphic theory {#sectmodel} ---------------------------------------------------------------------------- The partial differential equations associated to the dynamical relaxed micromorphic model [@NeffGhibaMicroModel] are $$\begin{aligned} \label{1.2} u_{,tt}&=&\operatorname{Div}\big(2\,\mu_{\rm e}\operatorname{sym}(\nabla u-P)+2\,\mu_{\rm c}\operatorname{skew}(\nabla u-P)+\lambda_{\rm e}\operatorname{tr}(\nabla u-P){\mathbbm{1}}\big)+f{\nonumber}\\[1ex] P_{,tt}&=&2\,\mu_{\rm e}\operatorname{sym}(\nabla u-P)+2\,\mu_{\rm c}\operatorname{skew}(\nabla u-P)+\lambda_{\rm e}\operatorname{tr}(\nabla u-P){\mathbbm{1}}{\nonumber}\\[1ex] &&-(2\,\mu_{\mathrm{micro}} \operatorname{sym}P+\lambda_{\mathrm{micro}}(\operatorname{tr}P){\mathbbm{1}})-\mu_{\rm micro}\, L_{\rm c}^2\,\operatorname{Curl}\operatorname{Curl}P+M\,,\end{aligned}$$ in $\Omega\times (0,T)$, where $f:\Omega\times (0,T)\rightarrow {{\mathbb R}}^3$ is a given body force and $M:\Omega\times (0,T)\rightarrow {{\mathbb R}}^{3\times 3}$ is a given body moment tensor. Here, the constants $\mu_{\rm e},\lambda_{\rm e},\mu_{\rm c}, \mu_{\rm micro}, \lambda_{\rm micro}$ are constitutive parameters describing the isotropic elastic response of the material, while $L_{\rm c}>0$ is the characteristic length of the relaxed micromorphic model. 
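The row-wise $\operatorname{Curl}$ introduced in the notation above annihilates gradients: the rows of $\nabla u$ are the gradients $\nabla u_i$, and $\operatorname{curl}\nabla u_i = 0$, so the compatible part $\nabla u$ of the distortion carries no curvature energy. A short pure-Python finite-difference sketch (the polynomial displacement is a hypothetical choice of ours) illustrates this:

```python
def pd(f, i, x, h=1e-3):
    # central-difference partial derivative of scalar f with respect to x[i]
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

# hypothetical smooth displacement u = (u1, u2, u3)
u = [lambda x: x[0] ** 2 * x[1],
     lambda x: x[1] * x[2] ** 2,
     lambda x: x[0] * x[1] * x[2]]

def curl(w, x):
    return [pd(w[2], 1, x) - pd(w[1], 2, x),
            pd(w[0], 2, x) - pd(w[2], 0, x),
            pd(w[1], 0, x) - pd(w[0], 1, x)]

def Curl_grad(u, x):
    # rows of the tensor field (nabla u) are the gradients (nabla u_i);
    # Curl acts row-wise on them
    rows = []
    for ui in u:
        grad_ui = [lambda y, j=j, f=ui: pd(f, j, y) for j in range(3)]
        rows.append(curl(grad_ui, x))
    return rows

res = Curl_grad(u, [0.4, -0.2, 0.9])
print(res)  # each row vanishes up to rounding error
```

In the relaxed model this is why only the incompatible part of $P$ is penalized by $\lVert\operatorname{Curl}P\rVert^2$.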
We assume that the constitutive parameters are such that $$\begin{aligned} \label{condpara} \mu_{\rm e}>0,\quad\quad 2\,\mu_{\rm e}+3\,\lambda _{\rm e}>0,\quad\quad \mu_{\rm c}{\geqslant}0,\quad\quad \mu_{\rm micro}>0, \quad\quad 2\,\mu_{\rm micro}+3\,\lambda _{\rm micro}>0.\end{aligned}$$ The system is considered with the boundary conditions $$\begin{split} u(x,t)=g(x,t),\quad \quad P_i(x,t)\times n(x)=G_i(x,t) \end{split} \label{1.3}$$ for $(x,t)\in \partial\Omega\times [0,T]$, where $n$ is the unit normal vector at the surface $\partial\Omega$, $\times$ denotes the vector product and $P_i$ ($i=1,2,3$) are the rows of $P$. The model is also driven by the following initial conditions $$\begin{split} u(x,0)=u^{(0)}(x)\,,\quad u_{,t}(x,0)=u^{(1)}(x)\,,\quad P(x,0)=P^{(0)}(x)\,,\quad P_{,t}(x,0)=P^{(1)}(x) \end{split} \label{1.4}$$ for $x\in\Omega$. We say that the initial data $(u^{(0)},u^{(1)},P^{(0)},P^{(1)})$ satisfy the compatibility condition if $$u^{(0)}(x)=g(x,0)\,,\qquad \ \, u^{(1)}(x)=g_{,t}(x,0)\,, \qquad P^{(0)}_{i}(x)=G_i(x,0)\,,\qquad P^{(1)}_{i}(x)=G_{i,t}(x,0) \label{comcon}$$ for $x\in\partial\Omega$ and $i=1,2,3$, where $G_{i,t}$ denotes the time derivative of the function $G_i$. Preliminary results ------------------- In a previous paper [@Sebastian1] we have considered the space $$\tilde{{\raisebox{3pt}{$\chi$}}}_{\partial\Omega}:=\{v\in {\rm H}^{\frac{1}{2}}(\partial\Omega)\mid\bigl\langle v\big|_{\partial\Omega}, n\bigr\rangle=0\} .$$ This space is related to the fact that, according to [@tantrace], for all $v\in {\rm H}(\operatorname{curl};\Omega)$ the tangential trace $n\times v\big|_{\partial \Omega}$ (see [@Giraultbook p. 
34]) belongs to a proper subspace of ${\rm H}^{-\frac{1}{2}}(\partial\Omega)$ defined by $${\raisebox{3pt}{$\chi$}}_{\partial\Omega}:=\{v\in {\rm H}^{-\frac{1}{2}}(\partial\Omega)\mid\bigl\langle v\big|_{\partial\Omega}, n\bigr\rangle=0\,\,\mathrm{and}\,\,\operatorname{div}_{\tau} v\in {\rm H}^{-\frac{1}{2}}(\partial\Omega)\}$$ and equipped with the norm $$\lVert v \rVert_ {{\raisebox{3pt}{$\chi$}}_{\partial\Omega}}= \lVert v \rVert_ {{\rm H}^{-\frac{1}{2}}(\partial\Omega)}+ \lVert\operatorname{div}_{\tau} v \rVert_ {{\rm H}^{-\frac{1}{2}}(\partial\Omega)}\,.$$ We observe that $\tilde{{\raisebox{3pt}{$\chi$}}}_{\partial\Omega}\subset {\raisebox{3pt}{$\chi$}}_{\partial\Omega}$. In [@Sebastian1], summarising some results presented in [@tantrace] and [@electrobook Theorem 6 of Section 2], we have concluded that the following results hold true. \[lem:2.2\] Assume that the boundary $\partial\Omega$ is of class ${\rm C}^{1,1}$ or that $\Omega$ is a convex polyhedron. Moreover, let us assume that $v\in {{\raisebox{3pt}{$\chi$}}}_{\partial\Omega}$. Then there exists an extension $\widetilde {v}\in {\rm H}(\operatorname{curl};\Omega)$ of $v$ in $\Omega$ such that 1. $\operatorname{curl}\operatorname{curl}\widetilde {v}=0\,;$ 2. $\operatorname{div}\,\widetilde {v}=0$; 3. $\widetilde {v}\in {\rm H}^1(\Omega)$   for all   $v\in \widetilde{\chi}_{\partial\Omega}$. \[collem\] Assuming the hypothesis of Theorem \[lem:2.2\] to be satisfied, then $\nabla\operatorname{curl}\widetilde {v}\in {\rm L}^2(\Omega)$. Using these results, we have proven the existence and uniqueness of the solution of the initial-boundary value problem arising in the linear relaxed theory for non-homogeneous boundary conditions, see [@Sebastian1]. 
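A concrete instance of the objects in Theorem \[lem:2.2\] (our own illustrative choice, not taken from the proof): on the unit ball, the field $v = (-y,\, x,\, 0)$ satisfies $\langle v, n\rangle = 0$ on the sphere (where $n=(x,y,z)$), while $\operatorname{div} v = 0$ and $\operatorname{curl}\operatorname{curl} v = 0$ hold in the interior, since $\operatorname{curl} v = (0,0,2)$ is constant. A short pure-Python check at sample points:

```python
def pd(f, i, x, h=1e-3):
    # central-difference partial derivative of scalar f with respect to x[i]
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

# tangential field on the unit sphere: v(x, y, z) = (-y, x, 0)
v = [lambda x: -x[1], lambda x: x[0], lambda x: 0.0]

def curl(w, x):
    return [pd(w[2], 1, x) - pd(w[1], 2, x),
            pd(w[0], 2, x) - pd(w[2], 0, x),
            pd(w[1], 0, x) - pd(w[0], 1, x)]

# points on the unit sphere, where the outward normal n equals the point itself
pts = [(1.0, 0.0, 0.0), (0.6, 0.8, 0.0), (0.0, 0.6, 0.8)]
tangential = [sum(vi(p) * ni for vi, ni in zip(v, p)) for p in pts]  # <v, n>

q = [0.2, -0.5, 0.3]  # interior point
div_v = sum(pd(v[k], k, q) for k in range(3))
curl_v = curl(v, q)  # approx (0, 0, 2) for this field
curl_curl_v = curl([lambda y, j=j: curl(v, y)[j] for j in range(3)], q)
print(tangential, div_v, curl_v, curl_curl_v)
```

Since $v$ is linear, the central differences are exact up to rounding; the check confirms the tangency condition and the two differential constraints of the extension.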
[(Existence of solution with non-homogeneous boundary conditions)]{}\[existenceresult\] Let us assume that the constitutive parameters satisfy and the initial data are such that $$(u^{(0)}, u^{(1)}, P^{(0)}, P^{(1)})\in {\rm H}^1(\Omega; {{\mathbb R}}^3 )\times {\rm H}^1(\Omega; {{\mathbb R}}^3 )\times {\rm H}(\operatorname{Curl};\Omega )\times {\rm H}(\operatorname{Curl};\Omega )\, \label{2.27}$$ and that the compatibility condition holds. Additionally, $$\begin{split} \operatorname{Div}\big(2\,\mu_{\rm e}\operatorname{sym}(\nabla u^{(0)}-P^{(0)})+2\,\mu_{\rm c}\operatorname{skew}(\nabla u^{(0)}-P^{(0)})+\lambda_{\rm e}\operatorname{tr}(\nabla u^{(0)}-P^{(0)}){\mathbbm{1}}\big)&\in {\rm L}^2(\Omega)\,, \\ \operatorname{Curl}\operatorname{Curl}P^{(0)}&\in {\rm L}^2(\Omega) \end{split}$$ and $ f\in {\rm C}^1([0,T);{\rm L}^2(\Omega))\,,\ M\in {\rm C}^1([0,T);{\rm L}^2(\Omega))$, $ g\in {\rm C}^3([0,T);{\rm H}^{\frac{3}{2}}(\partial\Omega))\,,\ G_i\in {\rm C}^3([0,T);\tilde{\chi}_{\partial\Omega})\quad i=1,2,3\,. $ Then, the system with boundary conditions and initial conditions possesses a global in time, unique solution $(u, P)$ with the regularity: for all times $T>0$ $$\begin{split} u\in {\rm C}^1([0,T);{\rm H}^1(\Omega ))\,,&\ u_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega ))\,,\ P\in {\rm C}^1([0,T); {\rm H}(\operatorname{Curl};\Omega )),\ P_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega)) \end{split} \label{2.32}$$ Moreover, $$\operatorname{Div}\big(2\,\mu_{\rm e}\operatorname{sym}(\nabla u-P)+2\,\mu_{\rm c}\operatorname{skew}(\nabla u-P)+\lambda_{\rm e}\operatorname{tr}(\nabla u-P){\mathbbm{1}}\big)\in {\rm C}((0,T);{\rm L}^2(\Omega ))\\[1ex] \label{2.33}$$ and $$\operatorname{Curl}\operatorname{Curl}P\in {\rm C}((0,T);{\rm L}^2(\Omega ))\,. 
\label{2.34}$$ \[tw:nonhomo\] The assumption on the constitutive parameters was used in the proof since we need to know that there exists a constant $C>0$ such that $$\begin{aligned} C\,( \lVert\nabla u\rVert^2_{{\rm L}^2(\Omega)}+ \lVert P\rVert^2_{{\rm H}(\operatorname{Curl};\Omega)}){\leqslant}\int_{\Omega}\Big(&\mu_{\rm e} \lVert\operatorname{sym}(\nabla u-P)\rVert^2+\mu_{\rm c} \lVert {\rm skew}(\nabla u-P)\rVert^2+\frac{\lambda_{\rm e}}{2}[\operatorname{tr}(\nabla u-P)]^2\\ &+ \mu_{\mathrm{micro}} \lVert\operatorname{sym}P\rVert^2+\frac{\lambda_{\mathrm{micro}}}{2}\,[\operatorname{tr}(P)]^2+\frac{\mu_{\rm micro} \,L_{\rm c}^2}{2} \,\lVert\operatorname{Curl}P\rVert^2\Big)\,{{\mathrm d}}x\,\notag\end{aligned}$$ for all $u\in {\rm H}^1_0(\Omega)$ and $P\in{\rm H}_0({\rm Curl}\, ; \Omega)$. This coercivity holds even when $\mu_{\rm c}=0$, since by [@NeffPaulyWitsch; @BNPS2] there exists a positive constant $C$, depending only on $\Omega$, such that for all $P\in{\rm H}_0({\rm Curl}\, ; \Omega)$ the following estimate holds $$\begin{aligned} { \lVert P \rVert_ {{\rm H}(\mathrm{Curl})}^2}:= \lVert P \rVert_ {{\rm L}^2(\Omega)}^{ {2}}+ \lVert \operatorname{Curl}P \rVert_ {{\rm L}^2(\Omega)}^{ {2}}&{\leqslant}C\,( \lVert {\rm sym} P\rVert^2_{{\rm L}^2(\Omega)}+ \lVert \operatorname{Curl}P\rVert^2_{{\rm L}^2(\Omega)}).
\end{aligned}$$ To a solution of the initial-boundary value problem we associate the total energy $$\begin{aligned} \label{1.5} {{\cal E}}(u, P)(t)=&\frac{1}{2}\int_{\Omega}( \lVert u_{,t}\rVert^2+ \lVert P_{,t}\rVert^2)\,{{\mathrm d}}x + \int_{\Omega}\Big(\mu_{\rm e} \lVert \operatorname{sym}(\nabla u-P)\rVert^2+\frac{\lambda_{\rm e}}{2}\,[\operatorname{tr}(\nabla u-P)]^2 +\mu_{\rm c}\, \lVert \operatorname{skew}(\nabla u-P)\rVert^2 \notag\\[1ex] &\qquad \quad \qquad \quad \qquad \quad\qquad \ +\mu_{\rm micro} \lVert \operatorname{sym}P\rVert^2+\frac{\lambda_{\mathrm{micro}}}{2}\,[\operatorname{tr}(P)]^2 +\frac{\mu_{\rm micro}\, L_{\rm c}^2}{2}\, \lVert \operatorname{Curl}P\rVert^2\Big)\,{{\mathrm d}}x\,.\end{aligned}$$ Higher local regularity ======================== Auxiliary results ----------------- We begin by proving the following inequality $$\begin{split} \lVert \nabla\operatorname{curl}P_i \rVert_ {{\rm C}((0,T);{\rm L}^2(\Omega))}{\leqslant}C\,, \end{split} \label{ineq}$$ where $i=1,2,3$, $ P=\left(\begin{array}{c} P_1 \,|\, P_2 \,|\, P_3 \end{array}\right)^T $ is the unique solution of the problem - and the constant $C>0$ depends only on $\Omega$ ($\operatorname{Curl}P$ is calculated with respect to the rows of the matrix $P$). Observe that it is sufficient to prove the inequality with the homogeneous tangential boundary condition $P_i\times n =0$ on $\partial\Omega$, since the estimates in the case of an inhomogeneous tangential boundary condition follow as a consequence of Corollary \[collem\]. Assume that $\phi\in {\rm C}^{\infty}(\overline{\Omega};{{\mathbb R}}^3)$ and $\phi\times n=0$ on $\partial\Omega$. Then $\bigl\langle n,\operatorname{curl}\phi\bigr\rangle=0$ on $\partial\Omega$. \[Obs\] Suppose that $\phi\in {\rm C}^{\infty}(\overline{\Omega};{{\mathbb R}}^3)$ and $\vartheta\in {\rm C}^{\infty}(\overline{\Omega};{{\mathbb R}})$.
Notice that $$\begin{split} \int_{\Omega}\bigl\langle \operatorname{curl}\phi,\nabla\vartheta\bigr\rangle\,{{\mathrm d}}x=-\int_{\Omega}\underbrace{\operatorname{div}(\operatorname{curl}\phi)}_{=0}\vartheta\,{{\mathrm d}}x+\int_{\partial\Omega}\bigl\langle n,\operatorname{curl}\phi\bigr\rangle\,\vartheta\,{{\mathrm d}}S\,. \end{split} \label{obs}$$ On the other hand $$\begin{split} \int_{\Omega}\bigl\langle\operatorname{curl}\phi,\nabla\vartheta\bigr\rangle\,{{\mathrm d}}x=\int_{\Omega}\bigl\langle\phi,\underbrace{\operatorname{curl}\nabla\vartheta}_{=0}\bigr\rangle\,{{\mathrm d}}x+\int_{\partial\Omega}\bigl\langle n\times \phi,\nabla\vartheta\bigr\rangle\,{{\mathrm d}}S\,. \end{split} \label{obs1}$$ Comparing and we obtain $$\begin{split} \int_{\partial\Omega}\bigl\langle(n\times \phi),\nabla\vartheta\bigr\rangle\,{{\mathrm d}}S=\int_{\partial\Omega}\bigl\langle n,\operatorname{curl}\phi\bigr\rangle\,\vartheta\,{{\mathrm d}}S\,. \end{split} \label{obs2}$$ Using the hypothesis $\phi\times n=0$ on $\partial\Omega$, we find $$\begin{split} \int_{\partial\Omega}\bigl\langle n,\operatorname{curl}\phi\bigr\rangle\,\vartheta\,{{\mathrm d}}S=0 \qquad \mathrm{for\, all}\quad \vartheta\in {\rm C}^{\infty}(\overline{\Omega}, \mathbb{R})\,, \end{split} \label{obs3}$$ which implies that $\bigl\langle n,\operatorname{curl}\phi\bigr\rangle=0$ on $\partial\Omega$. The previous section yields that $P_i\in {\rm H}(\operatorname{curl};\Omega)$ and $P_i\times n\in {\rm H}^{-\frac{1}{2}}(\partial\Omega)$, hence the boundary integral in is well defined with $\phi=P_i$. Moreover, notice that $\operatorname{div}\operatorname{curl}P_i=0$ and $\operatorname{curl}P_i\in {\rm H}^1(\operatorname{div};\Omega)$. Therefore, $\bigl\langle n,\operatorname{curl}P_i\bigr\rangle\in {\rm H}^{-\frac{1}{2}}(\partial\Omega)$ and the boundary integral in is also well defined with $\phi=P_i$. 
Using Lemma \[Obs\] and the fact that $P_i\times n=0$ on the boundary $\partial\Omega$, we conclude that $\bigl\langle n,\operatorname{curl}P_i\bigr\rangle=0$ on $\partial\Omega$.$\mbox{}$ The following Lemma is crucial in the proof of local higher regularity for the micro-distortion tensor $P$. Suppose that $P$ is the unique solution of the system with boundary condition $ P_i\times n=0$ ($i=1,2,3$) and with the regularities and . Then the following inequality $$\begin{split} \lVert \nabla\operatorname{curl}P_i \rVert_ {{\rm C}((0,T);{\rm L}^2(\Omega))}{\leqslant}C\,, \end{split} \label{ineq1}$$ holds, where the constant $C>0$ depends on $\Omega$ and the given data. \[lem4.1\] Theorem $3.8$ from [@Giraultbook] implies that for every $v\in {\rm H}(\operatorname{curl};\Omega)\cap {\rm H}^1(\operatorname{div};\Omega)$ such that $\bigl\langle v, n\bigr\rangle=0$ on the boundary $\partial\Omega$ the following inequality $$\begin{split} \lVert \nabla v \rVert_ {{\rm L}^2(\Omega)}{\leqslant}C\,( \lVert \operatorname{curl}v \rVert_ {{\rm L}^2(\Omega)}+ \lVert \operatorname{div}v \rVert_ {{\rm L}^2(\Omega)}+ \lVert v \rVert_ {{\rm L}^2(\Omega)})\,, \end{split} \label{ineq2}$$ is satisfied, where the constant $C>0$ does not depend on $v$. The regularity entails that $\operatorname{curl}P_i\in {\rm H}(\operatorname{curl};\Omega)\cap {\rm H}^1(\operatorname{div};\Omega)$. We also know that $\bigl\langle n,\operatorname{curl}P_i\bigr\rangle=0$ on $\partial\Omega$. Applying inequality with $v=\operatorname{curl}P_i$ we obtain $$\begin{split} \lVert \nabla\operatorname{curl}P_i \rVert_ {{\rm L}^2(\Omega)}{\leqslant}C\big(& \lVert \operatorname{curl}\operatorname{curl}P_i \rVert_ {{\rm L}^2(\Omega)}+ \lVert \operatorname{div}\operatorname{curl}P_i \rVert_ {{\rm L}^2(\Omega)}+ \lVert \operatorname{curl}P_i \rVert_ {{\rm L}^2(\Omega)}\big){\leqslant}\tilde{C}\,, \end{split} \label{ineq3}$$ where the constant $\tilde{C}>0$ does not depend on $P_i$.
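The two vector-calculus identities used throughout this subsection, $\operatorname{div}(\operatorname{curl}\phi)=0$ and $\operatorname{curl}(\nabla\vartheta)=0$, admit a quick numerical sanity check. The sketch below is our own illustration and is not part of the argument; it uses quadratic test fields, for which central finite differences are exact up to floating-point round-off.

```python
# Sanity check (illustration only) of div(curl phi) = 0 and curl(grad theta) = 0
# via central finite differences, which are exact on quadratic polynomials.
def partial(f, x, k, h=1e-3):
    # Central difference approximation of the k-th partial derivative of f at x.
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    return (f(tuple(xp)) - f(tuple(xm))) / (2 * h)

def curl(F, x):
    # curl F = (dF3/dy - dF2/dz, dF1/dz - dF3/dx, dF2/dx - dF1/dy)
    return (partial(lambda p: F(p)[2], x, 1) - partial(lambda p: F(p)[1], x, 2),
            partial(lambda p: F(p)[0], x, 2) - partial(lambda p: F(p)[2], x, 0),
            partial(lambda p: F(p)[1], x, 0) - partial(lambda p: F(p)[0], x, 1))

def div(F, x):
    return sum(partial(lambda p: F(p)[k], x, k) for k in range(3))

phi = lambda p: (p[1] ** 2, p[2] ** 2, p[0] ** 2)   # quadratic vector field
theta = lambda p: p[0] * p[1] + p[2] ** 2           # quadratic scalar field
grad_theta = lambda p: tuple(partial(theta, p, k) for k in range(3))
x0 = (0.3, -0.7, 1.1)

assert abs(div(lambda p: curl(phi, p), x0)) < 1e-6        # div(curl phi) = 0
assert max(abs(c) for c in curl(grad_theta, x0)) < 1e-6   # curl(grad theta) = 0
```

These are precisely the two cancellations highlighted by the underbraces in the proof of Lemma \[Obs\].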
The main higher regularity estimates ------------------------------------ The aim of the remaining part of this section is to show the higher local regularity of the solution of problem . We will use the difference quotient method. Let us consider bounded open subsets of $\Omega$ with smooth boundaries (so we can use Korn’s inequality) such that $V\Subset U\Subset\Omega$. We consider a cutoff function $\eta\,:{{\mathbb R}}^3\rightarrow [0,1]$ with the following properties $$\eta\,:\Omega\rightarrow [0,1]\,,\qquad \eta\,\in C_0^{\infty}({{\mathbb R}}^3)\,,\qquad \eta\,=1\quad \mathrm{on}\,\, V \quad \mathrm{and}\quad \eta\,=0\quad \mathrm{on}\,\, \Omega\setminus U\,. \label{4.1}$$ We will denote by $D^h_k$ the difference quotient in the direction $\vec{e}_k$ with the step $h$, i.e., for any function $\phi$ defined on $\Omega$ and any $h\in{{\mathbb R}}$ sufficiently small ($k=1,2,3$), set $$\begin{split} D^h_k\phi(x):=\frac{\phi(x+h\vec{e}_k)-\phi(x)}{h}\,. \end{split} \label{4.2}$$ Observe that for $0<|h|<\frac{1}{2}\mathrm{dist}(U,\partial\Omega)$ and $x\in U$ the difference quotient is well defined. Moreover, the products $\eta\, D^h_k(\cdot)$ are equal to zero for $x\notin U$. Let us recall Theorem 3 from Section 5.8.2 of [@evansbook] on the relation between the difference quotient and weak derivatives. (i) Assume that $1{\leqslant}p < \infty$ and $\phi\in W^{1,p}(\Omega)$. Then for all $k=1,2,3$ and all $V\Subset U\Subset\Omega$ it holds $$\begin{split} \lVert D^h_k\phi \rVert_ {L^{p}(V)}{\leqslant}C\, \lVert \nabla\phi \rVert_ {L^{p}(U)} \end{split} \label{difguo1}$$ for some constant $C={\rm C}(p,U)$ and all $0<|h|<\frac{1}{2}\mathrm{dist}( V,\partial U)$. (ii) In turn, if $1<p<\infty$, $\phi\in L^p(\Omega)$ and there exists a constant $C>0$ such that for all $k=1,2,3$ and all $0<|h|<\frac{1}{2}\mathrm{dist}( V,\partial U)$ the following inequality $$\begin{split} \lVert D^h_k\phi \rVert_ {L^{p}(V)}{\leqslant}C \end{split} \label{difguo2}$$ is fulfilled.
Then, $$\phi\in W^{1,p}(V)\quad\mathrm{and}\quad \lVert \nabla\phi \rVert_ {L^{p}(V)}{\leqslant}C\,. \label{difguo3}$$ \[tw:difquo\] [(Main estimate) ]{} Suppose that $(u,P)$ is the solution of the problem with $P_i(x,t)\times n(x)=0$ on $\partial \Omega$ and under the hypotheses of Theorem \[existenceresult\]. Moreover, assume that the given forces have the regularity $ f\in {\rm L}^2((0,T);{\rm H}^1_{\rm loc}(\Omega))$, $ M\in {\rm L}^2((0,T);{\rm H}^1_{\rm loc}(\Omega)) $ and the initial data admit the regularity $ \nabla u^{(0)}-P^{(0)}\in {\rm H}^1_{\rm loc}(\Omega)\,, \operatorname{sym}P^{(0)}\in {\rm H}^1_{\rm loc}(\Omega)\,,$ $ \operatorname{tr}P^{(0)}\in {\rm H}^1_{\rm loc}(\Omega)\,, $ $ \operatorname{Curl}P^{(0)}\in {\rm H}^1_{\rm loc}(\Omega)\,, $ $ u^{(1)}\in {\rm H}^1_{\rm loc}(\Omega;{{\mathbb R}}^{3})\,$ $ \mathrm{and}\quad P^{(1)}\in {\rm H}^1_{\rm loc}(\Omega)\,. $ Then, for all $k\in\{1,2,3\}$, $t\in [0,T]$ and sufficiently small $h\in{{\mathbb R}}$ the following inequality $${{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(t){\leqslant}C \label{estimate}$$ holds, where $(u,P)$ is the solution of the system and the constant $C>0$ does not depend on $h$. \[tw:regu\] First of all, remark that a solution $(u,P)$ of the problem with $P_i(x,t)\times n(x)=0$ on $\partial \Omega$ has the following regularity $$\begin{split} u\in {\rm C}^1([0,T);{\rm H}^1(\Omega;))\,,&\quad u_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega ))\,,\\[1ex] P\in {\rm C}^1([0,T); {\rm H}(\operatorname{Curl};\Omega ))\,, &\quad P_{,tt}\in {\rm C}((0,T);{\rm L}^2(\Omega)),\\[1ex] \operatorname{Curl}\operatorname{Curl}P\in {\rm C}((0,T);{\rm L}^2(\Omega ))\,\quad &\mathrm{and}\quad \nabla\operatorname{Curl}P\in {\rm C}((0,T);{\rm L}^2(\Omega )). \end{split} \label{assu}$$ Fix $k\in\{1,2,3\}$ and assume that $h$ is sufficiently small. 
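Before the computation, the behaviour of the difference quotients defined in the previous subsection can be illustrated on a smooth function. The snippet below is our own sketch with illustrative values, not part of the proof: $D^h_k\phi$ approaches the corresponding partial derivative as $h\to 0$, with first-order accuracy.

```python
# Illustration (not part of the proof): the difference quotient
# D^h_k phi(x) = (phi(x + h e_k) - phi(x)) / h on a smooth function.
def diff_quotient(phi, x, k, h):
    shifted = list(x)
    shifted[k] += h
    return (phi(tuple(shifted)) - phi(x)) / h

phi = lambda x: x[0] ** 2 + x[1] * x[2]   # example function (our choice)
x0 = (1.0, 2.0, 3.0)                      # d(phi)/dx_1 at x0 equals 2*x0[0] = 2

# First-order convergence: for this phi the error of D^h_1 phi is exactly h.
for h in (1e-2, 1e-4, 1e-6):
    assert abs(diff_quotient(phi, x0, 0, h) - 2.0) <= 1.1 * h
```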
Calculating the time derivative of the energy evaluated on localised differences provides $$\begin{aligned} \label{4.3} \frac{d}{dt}\big({{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(t)\big)=&\int_{\Omega}\Big[\langle\eta\, D^h_k u_{,t},\eta\, D^h_k u_{,tt}\rangle + \langle\eta\, D^h_k P_{,t},\eta\, D^h_k P_{,tt}\rangle \Big]\,{{\mathrm d}}x\notag\\[1ex] &+ \int_{\Omega}\Big[2\,\mu_{\rm e}\langle\eta\,\operatorname{sym}(\nabla D^h_k u-D^h_k P),\eta\,\operatorname{sym}(\nabla {D^h_k}u_{,t}-D^h_k P_{,t})\rangle\notag\\[1ex] &\qquad \quad +\lambda_{\rm e}\,\eta\,\operatorname{tr}(\nabla{D^h_k}u-D^h_k P)\,\eta\,\operatorname{tr}(\nabla{D^h_k}u_{,t}-D^h_k P_{,t})\\[1ex] &\qquad \quad +2\,\mu_{\rm c}\langle\eta\,\operatorname{skew}(\nabla{D^h_k}u-D^h_k P),\eta\,\operatorname{skew}(\nabla{D^h_k}u_{,t}-D^h_k P_{,t})\rangle\notag\\[1ex] &\qquad \quad +2\,\mu_{\mathrm{micro}} \,\eta^2\langle\operatorname{sym}{D^h_k}P,\operatorname{sym}{D^h_k}P_{,t}\rangle+\lambda_{\mathrm{micro}}\operatorname{tr}({D^h_k}P)\,\operatorname{tr}({D^h_k}P_{,t})\notag\\[1ex] &\qquad \quad +\mu_{\rm micro} L_{\rm c}^2\langle\eta\,\operatorname{curl}{D^h_k}P, \eta\,\operatorname{curl}{D^h_k}P_{,t}\rangle\Big]\,{{\mathrm d}}x\notag\end{aligned}$$ $$\begin{aligned} \qquad \qquad\qquad\qquad\quad=&\int_{\Omega}\Big[\langle\eta\, D^h_k u_{,t},\eta\, D^h_k u_{,tt})\rangle+\langle 2\,\mu_{\rm e}\,\eta\,\operatorname{sym}(\nabla{D^h_k}u-D^h_k P)+ \lambda_{\rm e}\,\eta\,\operatorname{tr}(\nabla{D^h_k}u-D^h_k P){\mathbbm{1}}\notag\\[1ex] &\qquad \qquad \qquad \qquad\qquad\qquad+ 2\,\mu_{\rm c}\,\eta\,\operatorname{skew}(\nabla{D^h_k}u-D^h_k P),\eta\,\nabla{D^h_k}u_{,t}\rangle\Big]\,{{\mathrm d}}x\notag\\[1ex] &+\int_{\Omega}\Big[\langle\eta\, D^h_k P_{,t},\eta\, D^h_k P_{,tt}\rangle-\langle 2\,\mu_{\rm e}\,\eta\,\operatorname{sym}(\nabla{D^h_k}u-D^h_k P)+ \lambda_{\rm e}\,\eta\,\operatorname{tr}(\nabla{D^h_k}u-D^h_k P){\mathbbm{1}}\notag\\[1ex] &\quad \qquad + 2\,\mu_{\rm 
c}\,\eta\,\operatorname{skew}(\nabla{D^h_k}u-D^h_k P)-2\,\mu_{\mathrm{micro}} \,\eta\,\operatorname{sym}{D^h_k}P-\lambda_{\mathrm{micro}}\operatorname{tr}({D^h_k}P){\mathbbm{1}},\eta\,{D^h_k}P_{,t}\rangle\notag\\[1ex] &\quad \quad \quad +\mu_{\rm micro} L_{\rm c}^2\langle \eta\,\operatorname{Curl}{D^h_k}P, \eta\,\operatorname{Curl}{D^h_k}P_{,t}\rangle \Big]\,{{\mathrm d}}x\,.\notag\end{aligned}$$ It is worth underlining that the regularity of the solution $(u,P)$ implies that all integrals in are well defined. Let $$\eta\,{D^h_k}\sigma=2\,\mu_{\rm e}\,\eta\,\operatorname{sym}(\nabla{D^h_k}u-D^h_k P)+ \lambda_{\rm e}\,\eta\,\operatorname{tr}(\nabla{D^h_k}u-D^h_k P)\,{\mathbbm{1}}+2\,\mu_{\rm c}\,\eta\,\operatorname{skew}(\nabla{D^h_k}u-D^h_k P)\,, \label{4.4}$$ where $$\begin{aligned} \sigma=&\,2\,\mu_{\rm e}\operatorname{sym}(\nabla u- P)+ \lambda_{\rm e}\operatorname{tr}(\nabla u- P)\,{\mathbbm{1}}+2\,\mu_{\rm c}\operatorname{skew}(\nabla u- P)\end{aligned}$$ is the (non-symmetric) Cauchy stress tensor. Then, $$\begin{aligned} \label{4.5} \frac{d}{dt}\big({{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(t)\big)=&\int_{\Omega}\Big[\langle \eta\, D^h_k u_{,t},\eta\, D^h_k u_{,tt}\rangle+\langle\eta\,{D^h_k}\sigma,\eta\,\nabla{D^h_k}u_{,t}\rangle\Big]\,{{\mathrm d}}x \\[1ex] &+\int_{\Omega}\Big[\langle \eta\, D^h_k P_{,t},\eta\, D^h_k P_{,tt}\rangle-\langle\eta\,{D^h_k}\sigma-2\,\mu_{\mathrm{micro}} \,\eta\,\operatorname{sym}{D^h_k}P-\lambda_{\mathrm{micro}}\operatorname{tr}({D^h_k}P){\mathbbm{1}},\eta\,{D^h_k}P_{,t}\rangle\notag\\[1ex] &\qquad \quad +\mu_{\rm micro} L_{\rm c}^2\langle\eta\,\operatorname{Curl}{D^h_k}P, \eta\,\operatorname{Curl}{D^h_k}P_{,t}\rangle \Big]\,{{\mathrm d}}x\,.\notag\end{aligned}$$ Denote by $(\cdot)_i$ ($i=1,2,3$) the rows of a $3\times 3$ matrix and set $u=(u_1,u_2,u_3)$.
Therefore, for $i=1,2,3$ we conclude that $$\begin{split} \operatorname{div}\big(\eta^2{D^h_k}u_{i,t}\,{D^h_k}\sigma_i\big)&= \langle{D^h_k}\sigma_i,\eta^2\nabla{D^h_k}u_{i,t}+2\,\eta\,\nabla\eta\,{D^h_k}u_{i,t}\rangle+\eta^2{D^h_k}u_{i,t}\operatorname{div}({D^h_k}\sigma_i)\, \end{split} \label{4.6}$$ and $$\begin{split} \sum_{i=1}^{3} \langle\eta\, {D^h_k}\sigma_i,\eta\,\nabla{D^h_k}u_{i,t}\rangle=&\sum_{i=1}^{3} \operatorname{div}\big(\eta^2{D^h_k}u_{i,t}\,{D^h_k}\sigma_i\big)-\sum_{i=1}^{3} \langle\eta\,{D^h_k}\sigma_i,2\nabla\eta\,{D^h_k}u_{i,t}\rangle-\sum_{i=1}^{3} \eta^2{D^h_k}u_{i,t}\operatorname{div}({D^h_k}\sigma_i)\,. \end{split} \label{4.7}$$ Moreover, we have $$\begin{aligned} \label{4.8} \operatorname{div}\big(\eta^2\,\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\times{D^h_k}P_{i,t}&\big)\!\!= \langle{D^h_k}P_{i,t},\operatorname{curl}\big(\eta^2\,\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\big)\rangle\notag\\[1ex] &\qquad \qquad -\langle\eta^2\,\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i},\operatorname{curl}{D^h_k}P_{i,t}\rangle\\[1ex] &= \langle{D^h_k}P_{i,t},2\eta\,\nabla\eta\,\times\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}+\eta^2\operatorname{curl}\big(\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\big)\rangle\notag\\[1ex] &\qquad \qquad-\langle \eta^2\,\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i},\operatorname{curl}{D^h_k}P_{i,t}\rangle\,.\notag\end{aligned}$$ This leads to $$\begin{split} \sum_{i=1}^{3}& \mu_{\rm micro} L_{\rm c}^2\langle\eta\,\operatorname{curl}{D^h_k}P_{i},\eta\,\operatorname{curl}{D^h_k}P_{i,t}\rangle=-\sum_{i=1}^{3} \operatorname{div}\big(\eta^2\,\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\times {D^h_k}P_{i,t}\big)\\[1ex] &+\sum_{i=1}^{3}\langle\eta\, {D^h_k}P_{i,t},\eta\,\operatorname{curl}\big(\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\big)\rangle-\sum_{i=1}^{3} \langle\eta\, 
{D^h_k}P_{i,t},2\,\nabla\eta\,\times\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\rangle\,. \end{split} \label{4.9}$$ Inserting and into and using equations we arrive at $$\begin{aligned} \label{4.10} \frac{d}{dt}\big({{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(t)\big)=&\int_{\Omega}\langle\eta\, D^h_k u_{,t},\eta\, {D^h_k}f\rangle\,{{\mathrm d}}x -2\int_{\Omega}\sum_{i=1}^{3} \langle\eta\,{D^h_k}\sigma_i,\nabla\eta\,{D^h_k}u_{i,t}\rangle\,{{\mathrm d}}x\\[1ex] &+\int_{\Omega}\langle\eta\, D^h_k P_{,t},\eta\, D^h_k M\rangle\,{{\mathrm d}}x-2\int_{\Omega}\sum_{i=1}^{3} \langle\eta\, {D^h_k}P_{i,t},\nabla\eta\,\times\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\rangle\,{{\mathrm d}}x\,.\notag\end{aligned}$$ Notice that the divergence theorem shows that the integrals over $\Omega$ of the first terms on the right-hand side of and are equal to zero. Now integrating with respect to time we obtain $$\begin{split} {{\cal E}}(\eta\, D^h_k u,& \eta\, D^h_k P)(t)={{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(0)+\int_0^t\int_{\Omega}\langle\eta\, D^h_k u_{,t},\eta\, {D^h_k}f\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau\\[1ex] &-2\int_0^t\int_{\Omega}\sum_{i=1}^{3} \langle\eta\,{D^h_k}\sigma_i,\nabla\eta\,{D^h_k}u_{i,t}\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau+\int_0^t\int_{\Omega}\langle\eta\, D^h_k P_{,t},\eta\, D^h_k M\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau\\[1ex] &-2\int_0^t\int_{\Omega}\sum_{i=1}^{3} \langle\eta\, {D^h_k}P_{i,t},\nabla\eta\,\times\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau\,.
\end{split} \label{4.11}$$ The first integral on the right-hand side of is estimated as follows $$\begin{split} \int_0^t\int_{\Omega}\langle\eta\, D^h_k u_{,t},\eta\, {D^h_k}f\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau{\leqslant}\int_0^t \lVert \nabla u_{,t} \rVert_ {{\rm L}^2(\Omega)} \lVert \nabla f \rVert_ {{\rm L}^2(\Omega)}\,{{\mathrm d}}\tau \end{split} \label{4.12}$$ and the assumption on $f$ and the regularity of $u_{,t}$ yield that it is finite. Young’s inequality implies that the second integral is estimated as follows $$\begin{split} \int_0^t\int_{\Omega}\sum_{i=1}^{3} \langle\eta\,{D^h_k}\sigma_i,\nabla\eta\,{D^h_k}u_{i,t}\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau&{\leqslant}\int_0^t\int_{\Omega} \eta^2 \lVert {D^h_k}\sigma\rVert^2\,{{\mathrm d}}x{{\mathrm d}}\tau +C \lVert \nabla\eta\,\rVert^2_{L^{\infty}(\Omega)}\int_0^t \lVert \nabla u_{,t} \rVert_ {{\rm L}^2(\Omega)}^2\,{{\mathrm d}}\tau\\[1ex] &{\leqslant}\int_0^t{{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(\tau)\,{{\mathrm d}}\tau+C \lVert \nabla\eta\,\rVert^2_{L^{\infty}(\Omega)}\int_0^t \lVert \nabla u_{,t} \rVert_ {{\rm L}^2(\Omega)}^2\,{{\mathrm d}}\tau\,. \end{split} \label{4.13}$$ Again, using the regularity of $u_{,t}$ we have that the second integral on the right-hand side of is finite. In turn, the third integral on the right-hand side of is evaluated as $$\begin{split} \int_0^t\int_{\Omega}\langle\eta\, D^h_k P_{,t},\eta\, D^h_k M\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau&{\leqslant}\int_0^t\int_{\Omega} \eta^2 \lVert {D^h_k}P_{,t}\rVert^2\,{{\mathrm d}}x{{\mathrm d}}\tau +C\int_0^t\int_{\Omega} \lVert \nabla M\rVert^2\,{{\mathrm d}}x{{\mathrm d}}\tau\\[1ex] &{\leqslant}\int_0^t{{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(\tau)\,{{\mathrm d}}\tau+C\,.
\end{split} \label{4.14}$$ Additionally, since $ \operatorname{curl}{D^h_k}P_{i}={D^h_k}(\operatorname{curl}P_i)\,, $ we obtain $$\begin{split} \int_0^t\int_{\Omega}&\sum_{i=1}^{3} \langle\eta\, {D^h_k}P_{i,t},\nabla\eta\,\times\mu_{\rm micro} L_{\rm c}^2\operatorname{curl}{D^h_k}P_{i}\rangle\,{{\mathrm d}}x{{\mathrm d}}\tau\\[1ex] &{\leqslant}\int_0^t\int_{\Omega} \eta^2 \lVert {D^h_k}P_{,t}\rVert^2\,{{\mathrm d}}x{{\mathrm d}}\tau+ C \lVert \nabla\eta\,\rVert^2_{L^{\infty}(\Omega)}\int_0^t \lVert \nabla\operatorname{Curl}P \rVert_ {{\rm L}^2(\Omega)}^2\,{{\mathrm d}}\tau\\[1ex] &{\leqslant}\int_0^t{{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(\tau)\,{{\mathrm d}}\tau+C \lVert \nabla\eta\,\rVert^2_{L^{\infty}(\Omega)}\int_0^t \lVert \nabla\operatorname{Curl}P \rVert_ {{\rm L}^2(\Omega)}^2\,{{\mathrm d}}\tau\,. \end{split} \label{4.16}$$ Lemma \[lem4.1\] yields that the second term on the right-hand side of is bounded. Substituting - into we get $$\begin{split} {{\cal E}}&(\eta\, D^h_k u, \eta\, D^h_k P)(t){\leqslant}{{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(0)+C\big(\int_0^t{{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(\tau)\,{{\mathrm d}}\tau+1\big)\,.\\[1ex] \end{split} \label{4.17}$$ Thus, from Gronwall’s inequality $$\begin{split} {{\cal E}}&(\eta\, D^h_k u, \eta\, D^h_k P)(t){\leqslant}C\big({{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(0)+1\big)\,.\\[1ex] \end{split} \label{4.18}$$ Applying the regularity of the initial data we obtain $ {{\cal E}}(\eta\, D^h_k u, \eta\, D^h_k P)(t){\leqslant}C\,, $ where the constant $C>0$ depends on the length of time interval $(0,T)$ ($C$ does not depend on $h$). The main result --------------- \[regularity\] \[mr\] [(Regularity of the solution)]{} Suppose that all hypotheses of Theorem \[tw:regu\] hold. Moreover, let $P^{(0)}\in {\rm H}^1_{\rm loc}(\Omega)$. 
Then, $$u\in {\rm C}((0,T);{\rm H}^2_{\rm loc}(\Omega))\,,\quad P\in {\rm C}((0,T);{\rm H}^1_{\rm loc}(\Omega))\quad \mathrm{and}\quad \operatorname{Curl}P\in {\rm C}((0,T);{\rm H}^1(\Omega))\,. \label{regu1}$$ The proof is divided into two parts. First we are going to prove that $P\in {\rm C}((0,T);{\rm H}^1_{\rm loc}(\Omega))$ and $\operatorname{Curl}P\in {\rm C}((0,T);{\rm H}^1(\Omega))$. - Theorem \[tw:regu\] implies that $ \lVert {D^h_k}P_{,t}\rVert^2_{{\rm C}((0,T);{\rm L}^2(V))}{\leqslant}C$, thus from Theorem \[tw:difquo\] we deduce $$\begin{split} \lVert \nabla P_{,t}\rVert^2_{{\rm C}((0,T);{\rm L}^2(V))}{\leqslant}C\,. \end{split} \label{4.20}$$ Moreover, using the formula $ \nabla P(t)=\nabla P^{(0)}+\int_0^t \nabla P_{,\tau}(\tau) \,{{\mathrm d}}\tau $ we get $$\begin{split} \lVert \nabla P(t) \rVert_ {{\rm L}^2(V)}{\leqslant}\lVert \nabla P^{(0)} \rVert_ {{\rm L}^2(V)}+\int_0^T \lVert \nabla P_{,\tau}(\tau) \rVert_ {{\rm L}^2(V)} \,{{\mathrm d}}\tau\,. \end{split} \label{4.22}$$ The regularity of $P^{(0)}$ and inequality yield that $ \lVert \nabla P\rVert^2_{{\rm C}((0,T);{\rm L}^2(V))}{\leqslant}C $ and $P\in {\rm C}((0,T);{\rm H}^1_{\rm loc}(V))$. Notice that Lemma \[lem4.1\] also gives that $\operatorname{Curl}P\in {\rm C}((0,T);{\rm H}^1(\Omega))$. - Observe that $$\begin{split} \lVert \eta\,\operatorname{sym}(\nabla{D^h_k}(u))\rVert^2_{{\rm L}^2(\Omega)}&{\leqslant}\lVert \eta\,\operatorname{sym}(\nabla{D^h_k}(u)-{D^h_k}P)\rVert^2_{{\rm L}^2(\Omega)}+ \lVert \eta\,\operatorname{sym}({D^h_k}P)\rVert^2_{{\rm L}^2(\Omega)} \end{split} \label{4.24}$$ and Theorem \[tw:regu\] implies that $ \lVert \eta\,\operatorname{sym}(\nabla{D^h_k}(u))\rVert^2_{{\rm L}^2(\Omega)}$ is bounded independently of $h$.
Now, the regularity of $u$ follows from the identities $$\begin{split} \eta\,(\partial_k\nabla u)&=\nabla(\eta\,\partial_k u)-\nabla\eta\,\otimes\partial_k u\,,\\[1ex] \operatorname{sym}(\nabla(\eta\,\partial_k u))&=\eta\,(\partial_k\operatorname{sym}(\nabla u))+\operatorname{sym}(\nabla\eta\,\otimes\partial_k u) \end{split} \label{4.25}$$ and from Korn’s inequality [@Neff00b], as in the following $$\begin{split} \lVert \eta\,(\partial_k\nabla u)\rVert^2_{{\rm L}^2(\Omega)}&{\leqslant}\lVert \nabla(\eta\,\partial_k u)\rVert^2_{{\rm L}^2(\Omega)} + \lVert \nabla\eta\,\otimes\partial_k u\rVert^2_{{\rm L}^2(\Omega)}\\[1ex] &{\leqslant}C \lVert \operatorname{sym}(\nabla(\eta\,\partial_k u))\rVert^2_{{\rm L}^2(\Omega)} + \lVert \nabla\eta\, \rVert_ {L^{\infty}(\Omega)} \lVert \nabla u\rVert^2_{{\rm L}^2(\Omega)}\\[1ex] &{\leqslant}C\big( \lVert \eta\,(\partial_k\operatorname{sym}(\nabla u))\rVert^2_{{\rm L}^2(\Omega)}+ \lVert \nabla u\rVert^2_{{\rm L}^2(\Omega)}\big)\,. \end{split} \label{4.26}$$ This proves that $u\in {\rm C}((0,T);{\rm H}^2_{\rm loc}(\Omega))$. Note that the regularity for the displacement vector $u$ is obtained from the isotropic elastic energy and the microstrain self-energy; compared with the general regularity theory for hyperbolic equations, this is a standard approach [@neff2008regularity]. It is clear that locally the dislocation energy $\lVert \operatorname{Curl}P\rVert^2$ does not control all weak derivatives of the tensor $P$ in ${\rm L}^2$. However, in the dynamic case the total energy also contains the kinetic energy. From the difference quotient method and the energy estimate we are able to control all weak partial derivatives of the micro-distortion tensor $P_{,t}$ locally in ${\rm L}^2$. Assuming that the initial tensor $P^{(0)}$ has better regularity, we also control all weak partial derivatives of the tensor $P$ locally in ${\rm L}^2$. [10]{} A. Aivaliotis, A. Daouadji, G. Barbagallo, D. Tallarico, P. Neff, and A. Madeo.
Microstructure-related [Stoneley]{} waves and their effect on the scattering properties of a [2D Cauchy]{}/relaxed-micromorphic interface. , 90:99–120, 2019. A. Aivaliotis, D. Tallarico, M.V. d’Agostino, A. Daouadji, P. Neff, and A. Madeo. Frequency-and angle-dependent scattering of a finite-sized meta-structure via the relaxed micromorphic model. , 90:1073–1096, 2019. A. Alonso and A. Valli. Some remarks on the characterization of the space of tangential traces of [$H({\rm rot};\Omega)$]{} and the construction of an extension operator. , 89(2):159–178, 1996. G. Barbagallo, D. Tallarico, M.V. d’Agostino, A. Aivaliotis, P. Neff, and A. Madeo. Relaxed micromorphic model of transient wave propagation in anisotropic band-gap metastructures. , 162:148–163, 2019. S. Bauer, P. Neff, D. Pauly, and G. Starke. Dev-[Div]{} and [DevSym]{}-[DevCurl]{} inequalities for incompatible square tensor fields with mixed boundary conditions. , 22(1):112–133, 2016. M. Cessenat. , volume 41 of [*Series on Advances in Mathematics for Applied Sciences*]{}. World Scientific Publishing Co., Inc., River Edge, NJ, 1996. M.V. d’Agostino, G. Barbagallo, I.D. Ghiba, B. Eidel, P. Neff, and A. Madeo. Effective description of anisotropic wave dispersion in mechanical band-gap metamaterials via the relaxed micromorphic model. , 139:299–329, 2019. M.V. d’Agostino, G. Barbagallo, I.D. Ghiba, A. Madeo, and P. Neff. A panorama of dispersion curves for the weighted isotropic relaxed micromorphic model. , 97(11):1436–1481, 2017. A.C. Eringen. Springer, Heidelberg, 1999. A. Blanco et al. Large-scale synthesis of a silicon photonic crystal with a complete three-dimensional bandgap near 1.5 micrometres. , 405(6785):437–440, 2000. Z. Liu et al. Locally resonant sonic materials. , 289(5485):1734–1736, 2000. L. C. Evans. , volume 19 of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 1998. I. D. Ghiba, S. Owczarek, and P. Neff. 
Existence results for non-homogeneous boundary conditions in the relaxed micromorphic model. , 2020. I.D. Ghiba, P. Neff, A. Madeo, L. Placidi, and G. Rosi. The relaxed linear micromorphic continuum: Existence, uniqueness and continuous dependence in dynamics. , 20:1171–1197, 2015. V. Girault and P.-A. Raviart. , volume 5 of [*Springer Series in Computational Mathematics*]{}. Springer-Verlag, Berlin, 1986. D. Ieşan. Extremum principle and existence results in micromorphic elasticity. , 39:2051–2070, 2001. A. Madeo, P. Neff, I. D. Ghiba, L. Placidi, and G. Rosi. Wave propagation in relaxed linear micromorphic continua: modelling metamaterials with frequency band-gaps. , 27:551–570, 2015. A. Madeo, P. Neff, I.D. Ghiba, and G. Rosi. Reflection and transmission of elastic waves in non-local band-gap metamaterials: a comprehensive study via the relaxed micromorphic model. , 95:441–479, 2016. R.D. Mindlin. Micro-structure in linear elasticity. , 16:51–77, 1964. P. Neff. On [K]{}orn’s first inequality with nonconstant coefficients. , 132:221–243, 2002. P. Neff, I. D. Ghiba, A. Madeo, L. Placidi, and G. Rosi. A unifying perspective: the relaxed linear micromorphic continuum. , 26:639–681, 2014. P. Neff and D. Knees. Regularity up to the boundary for nonlinear elliptic systems arising in time-incremental infinitesimal elasto-plasticity. , 40(1):21–43, 2008. P. Neff, D. Pauly, and K.J. Witsch. Poincaré meets [K]{}orn via [M]{}axwell: Extending [K]{}orn’s first inequality to incompatible tensor fields. , 258:1267–1302, 2015. R. Picard, S. Trostorff, and M. Waurick. On some models for elastic solids with micro-structure. , 95(7):664–689, 2014. [^1]: Sebastian Owczarek,    Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland; email: [email protected] [^2]: Ionel-Dumitrel Ghiba, Corresponding author:  Alexandru Ioan Cuza University of Iaşi, Department of Mathematics, Blvd. Carol I, no. 
11, 700506 Iaşi, Romania; Octav Mayer Institute of Mathematics of the Romanian Academy, Iaşi Branch, 700505 Iaşi; email: [email protected] [^3]: Patrizio Neff,    Head of Lehrstuhl für Nichtlineare Analysis und Modellierung, Fakultät für Mathematik, Universität Duisburg-Essen, Campus Essen, Thea-Leymann Str. 9, 45127 Essen, Germany, email: [email protected]
--- abstract: 'In this paper, we address the problem of controlling a mobile stereo camera under image quantization noise. Assuming that a pair of images of a set of targets is available, the camera moves through a sequence of Next-Best-Views (NBVs), i.e., a sequence of views that minimize the trace of the targets’ cumulative state covariance, constructed using a realistic model of the stereo rig that captures image quantization noise and a Kalman Filter (KF) that fuses the observation history with new information. The proposed algorithm decomposes control into two stages: first the NBV is computed in the camera relative coordinates, and then the camera moves to realize this view in the fixed global coordinate frame. This decomposition allows the camera to drive to a new pose that effectively realizes the NBV in camera coordinates while satisfying Field-of-View constraints in global coordinates, a task that is particularly challenging using complex sensing models. We provide simulations and real experiments that illustrate the ability of the proposed mobile camera system to accurately localize sets of targets. We also propose a novel data-driven technique to characterize unmodeled uncertainty, such as calibration errors, at the pixel level and show that this method ensures stability of the KF.' author: - 'Charles Freundlich,  Yan Zhang,  Alex Zihao Zhu,  Philippos Mordohai,  and Michael M. Zavlanos, [^1]' bibliography: - 'charlie-refs.bib' title: Controlling a Robotic Stereo Camera Under Image Quantization Noise --- Range Sensing, Motion Control, Mapping Introduction {#sec:intro} ============ Active robotic sensors are rapidly gaining viability in environmental, defense, and commercial applications. As a result, developing information-driven sensor strategies has been the focus of intense and growing research in artificial intelligence, control theory, and signal processing. We focus on stereoscopic camera rigs, that is, two rigidly connected cameras in a pair. 
Specifically, we address the problem of determining the trajectory of a mobile robotic sensor equipped with a stereo camera rig so that it localizes a collection of possibly mobile targets as accurately as possible under image quantization noise. The advantage of binocular vision, compared to the use of monocular camera systems, is that it provides both depth and bearing measurements of a target from a pair of simultaneous images. Assuming that noise is dominated by quantization of pixel coordinates [@blostein87; @matthies87; @chang94] we use the measurement Jacobian to propagate the error from the pixel coordinates to the target coordinates relative to the stereo rig. In particular, we approximate the pixel error as Gaussian and propagate the noise to the target locations, giving rise to fully correlated second order error statistics, or measurement error covariance matrices, which capture target location uncertainty. The resulting second order statistic is an accurate representation of not only the eigenvalues but also the eigenvectors of the measurement error covariance matrices, which play a critical role in active sensing as they determine viewing directions from where localization uncertainty can be further decreased. Assuming that a pair of images of the targets is available, in this paper, we iteratively move the stereo rig through a sequence of configurations that minimize the trace of the targets’ cumulative covariance. This cumulative covariance is constructed using a Kalman Filter (KF) that fuses the observation history with the predicted instantaneous measurement covariance obtained from the proposed stereoscopic sensor model. Differentiating this objective with respect to the new instantaneous measurement in the relative camera frame provides the Next Best View (NBV), i.e., the new relative distance and direction from where a new measurement should be obtained. 
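The first-order propagation of pixel quantization noise through stereo triangulation described above can be sketched as follows. This is a minimal illustration under a rectified pinhole model with the principal point at the origin; the focal length, baseline, pixel coordinates, and the particular Jacobian parametrization are our own assumed values for the sketch, not parameters of the actual rig.

```python
# Sketch (assumed model, not the paper's implementation): propagate pixel
# quantization noise through stereo triangulation, Sigma_p = J Sigma_pix J^T.
f, b = 500.0, 0.1                 # focal length [px] and baseline [m] (assumed)
uL, uR, v = 320.0, 300.0, 240.0   # matched pixel coordinates (assumed)
d = uL - uR                       # disparity

# Triangulated point in the left-camera frame (principal point at the origin).
x, y, z = b * uL / d, b * v / d, f * b / d

# Jacobian of (x, y, z) with respect to the pixel measurements (uL, uR, v).
J = [
    [b / d - b * uL / d**2, b * uL / d**2, 0.0],
    [-b * v / d**2,         b * v / d**2,  b / d],
    [-f * b / d**2,         f * b / d**2,  0.0],
]

sigma2 = 1.0 / 12.0               # variance of uniform quantization over 1 px
# Sigma_p = J (sigma2 * I) J^T: a fully correlated 3x3 covariance in general.
Sigma = [[sigma2 * sum(J[i][k] * J[j][k] for k in range(3))
          for j in range(3)] for i in range(3)]

# Depth variance: Sigma[2][2] = 2 sigma2 (f b / d^2)^2, i.e., it grows like
# z^4 / (f b)^2 -- distant targets are localized much less precisely in depth.
assert abs(Sigma[2][2] - 2 * sigma2 * (f * b / d**2) ** 2) < 1e-12
```

The eigenvectors of such a covariance encode the directions of greatest localization uncertainty, which is exactly the information the NBV computation exploits when selecting the next viewpoint.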
Then, the stereo rig moves to realize this NBV using a gradient descent approach in the joint space of camera rotations and translations. Once the NBV is realized in the global frame, the camera takes a new pair of images of the targets that are fused with the history using the KF to update the prior cumulative covariance of the targets, and the process repeats with the determination of a new NBV. The sequence of observations and resulting NBVs, generated by the proposed iterative scheme, constitutes a switching signal in the continuous motion control. During motion, appropriate barrier potentials prevent targets from exiting the camera’s geometric Field-of-View (FoV). As we illustrate in computer simulations and real-world experiments, the resulting sensor trajectory balances between reducing range and diversifying viewpoints, a result of the eigenvector information contained in the posterior error covariances. This behavior of our controller is notable when compared to existing sensor guidance approaches that adopt approximations to the error covariance that are not direct functions of the stereo rig calibration and the pixel observations themselves [@le1996optimization; @passerieux1998optimal; @logothetis1997information; @stroupe05; @zhou_roumeliotis11; @olfati2007distributed; @chung06]. Related Work {#sec_related} ------------ Our work is relevant to a growing body of literature that addresses control for one or several mobile sensors for the purpose of target localization or tracking [@fox00; @roumeliotis02; @spletzer03; @stroupe05; @chung06; @yang07; @morbidi2013active; @zhou_roumeliotis11]. These methods use sensor models that are based only on range and viewing angle. These models, if used for stereo triangulation, cannot accurately capture the covariance among errors in measurement coordinates, nor can they capture dependence on range and viewing angle. It is also common to ignore directional field of view constraints by assuming omnidirectional sensing.
In this paper, we derive the covariance specifically for triangulation with a calibrated stereo rig. The derived measurement covariance, when fused with a prior distribution, provides our controller with critical directional information that enables the mobile robot to find the NBV, defined as the vantage point from where new information will reduce the posterior variance of the targets’ distribution by the maximum amount. Recent work [@ponda09; @Adurthi13; @ding2012coordinated] brought about by developments in fixed-wing UAV control, addresses the autonomous visual tracking and localization problem using optimal control over information-based objectives using monocular vision. [@ponda09] define an objective function based on the trace of the covariance matrix of the target location and determine the next best view by a numerical gradient descent scheme. [@ding2012coordinated] also minimize the trace of the fused covariance by guiding multiple non-holonomic Unmanned Aerial Vehicles (UAVs) that use the Dubins car model. [@Adurthi13] use receding horizon control to maximize mutual information. Their method avoids replanning at every step by doing so only when the Kullback-Leibler divergence between the most recent target location probability density function (pdf) and the pdf that was used in the planning phase differ by a user-specified threshold. The Dynamic Programming (DP) approaches of [@ponda09; @Adurthi13; @ding2012coordinated] have complexity that grows exponentially in the horizon length. In this paper, our proposed analytical and closed-form expression for the gradient guides image collection in all position and orientation directions continuously. Although it does not plan multiple steps into the future, our controller is adaptive due to its feedback nature; each decision predicts new sensor locations from where new measurements optimize the estimated target locations based on the full, fused observation history. 
As far back as [@bajcsy88], computer vision researchers have recognized that sensing decisions should be based on an exact sensor model, and that robotic vision, like human vision, can benefit from mobility. Relevant prior work on active vision controls the image collection process for digital cameras through a discretized pose space by optimizing a scalar function of the covariance of feature-points on an object that is to be reconstructed. Specifically, [@trummer10] focus on the maximum eigenvalue of the posterior covariance, [@Wenhardt06] the entropy, and [@dunn_iros09] the expected quality of the next view. While these works do obtain uncertainty estimates that depend on factors such as viewing distance and camera resolution, which improves accuracy in 3D reconstruction, they do not operate continuously in 3D or consider dynamic environments. [@morbidi2013active] optimize an objective function that depends on the covariance matrix of the KF, rather than the measurement error covariance matrix. The authors derive upper and lower bounds on the covariance matrix at steady-state and validate their method in simulation. Stereoscopic vision sensors in continuous pose space are employed by [@Shade10], similar to the work proposed here. However, this work [@Shade10] is concerned with exploration of an indoor environment and not with refining localization estimates. When the target configurations can be collectively modeled by a coarse mesh in space, the NBV problem becomes similar to active inspection. Several researchers have addressed this problem using approximate dynamic programming, by formulating it as coverage path planning. [@galceran2013] provide an overview of coverage problems in mobile robotics, where the goal is to plan sensor paths that “see” every point on the surface mesh. Similarly, [@wang2007] propose solving the traveling view path planning problem using approximate integer programming on a network flow model. 
[@papadopoulos2013] enforce differential constraints for this problem. [@hollinger13] invoke adaptive submodularity, which argues that greedy approaches to measurement acquisition may outperform dynamic programming approaches that do not replan as measurements are acquired. While relevant to this work from a sensor planning perspective, active inspection methods do not address target localization. Moreover, dynamic and integer programming methods tend to be computationally expensive, especially for high dimensional spaces as those resulting from the presence of mobile targets. In this paper, we assume that there are no occlusions and, therefore, coverage (or detection) can be obtained if FoV constraints are met. Moreover, we assume no correspondence errors between images. These assumptions allow us to develop a control systems approach to the target localization problem that is based on computationally efficient, analytic, expressions for the camera motion and image collection process, as well as on precise sensor models that can result in more accurate localization. We note briefly that this paper is based on preliminary results contained in our prior publications [@freundlich13icra; @freundlich13cdc]. These early works used simplified versions of the noise model and global controller and lacked experimental validation. Contributions ------------- Our proposed control decomposition and resulting hybrid scheme possess a number of advantages compared to other methods that control directly the full non-linear system or resort to dynamic programming for nonmyopic planning. While these methods can have their own benefits, they also suffer from drawbacks. In particular, controlling directly the full non-linear system can be subject to multiple local minima that might be difficult to handle. 
On the other hand, dynamic programming formulations suffer from computational complexity due to the size of the resulting state-spaces and often resort to abstract sensor models to help reduce complexity [@logothetis1997information; @stroupe05; @singh2007simulation; @logothetis1998comparison; @frew2003trajectory; @Adurthi13; @ding2012coordinated]. Additionally, these approaches use discrete methods, e.g., the exhaustive search of [@frew2003trajectory] and the gradient approximations of [@singh2007simulation], to achieve the desired control task. Instead, decomposing control in the global and relative frames allows us to consider separately high-level planning, defined by the image collection/sensing process, and low-level planning, i.e., motion control of the camera. An advantage of this decomposition is that, given an NBV in the relative frame, there are infinitely many ways that the camera can realize this NBV in the global frame. This provides choices to the motion controller that otherwise could be subject to local stationary points due to the nonlinear coupling between sensing and planning. We provide a stability proof of the motion controller, while extensive computer simulations and experimental results have shown that even when FoV constraints are considered, local minima are not an issue and can be avoided by simple tuning of a gain parameter. The control decomposition also allows us to introduce Field-of-View (FoV) constraints that have not been previously used in the NBV context due to the complexity of their implementations. Most authors have used omnidirectional sensor models to circumvent these difficulties. The FoV constraints naturally enter the motion controllers when control is decomposed in the global and relative frames.
Finally, to the best of our knowledge, the approaches by [@stroupe05; @zhou_roumeliotis11; @olfati2007distributed; @Adurthi13; @ding2012coordinated] rely on having large numbers of sensors, e.g., 20 or 60, and consider a single target, while our method enables one sensor to track multiple targets as long as they satisfy the FoV constraints. In summary, the contribution of this work is that we address the multi-target, single-sensor problem employing the most realistic sensor model among continuous-space approaches in the literature that rely on the gradient of an optimality metric of the error covariance for planning. Additionally, to the best of our knowledge, this work is the first to include FoV constraints within the NBV setting. We also model image quantization noise directly. This allows us to accurately model the second order error statistics of the target location uncertainty based on the actual pixel error distribution. While other sources of error, such as association (matching) errors or occlusion, contribute to target localization error, a simultaneous and exact treatment of all error sources for the purposes of active sensing is an open problem. In this work we have addressed an essential contributor, that is, quantization. We have also proposed a novel data-driven technique to account for unmodeled uncertainty, such as system calibration errors, that is necessary to transition the proposed theoretical results to practice. In particular, for long range stereo vision, calibration errors are unique to the particular stereo rig used. They can cause severe bias and ill conditioned covariance matrices that may be completely different from one stereo rig to another. As our method heavily relies on the KF, ensuring that measurements are unbiased and that we have a reliable estimate for their covariance is crucial to both convergence of the estimator and for generating sensible closed-loop robot trajectories. 
Our proposed data-driven technique corrects measurements at the pixel level and empirically calculates their predictive error covariances. Specifically, using a sufficiently large training set of stereo image pairs, we determine the empirical error covariance, which we propagate to world coordinates and use for both path planning and state estimation. To the best of our knowledge, our data-driven approach to estimating the error statistics in the pixel coordinates is novel. Most relevant literature in stereo vision assumes such statistics are arbitrarily large, so that the estimation process is stable. In practice, we found this step to be crucial for accurate triangulation and fusion of multiple measurements, even in our controlled lab environment. We note that this paper is based on preliminary results that can be found in [@freundlich13icra; @freundlich13cdc]. The main differences between our preliminary work in [@freundlich13icra; @freundlich13cdc] and the work proposed here are the following. First, here we present thorough experimental results that validate our approach; the first of their kind for stereo vision. Second, the controller proposed in [@freundlich13icra; @freundlich13cdc] realizes an NBV that does not place the targets on the positive $z$-axis (viewing direction) of the stereo rig. Looking straight at the targets results in more accurate observations. The controller proposed here has this property. As a result, the correctness proofs in this paper are different compared to [@freundlich13icra; @freundlich13cdc]. Finally, the noise model in this paper is based on an empirical model of quantization noise in stereo vision, as opposed to constant pixel noise covariance in [@freundlich13icra; @freundlich13cdc]. The paper is organized as follows. Section \[sec\_problem\] formulates the visual target tracking problem. Section \[sec:potdes\] discusses the NBV in the camera-relative coordinate system.
Section \[sec:realize\] presents the gradient flow in the global coordinate frame. Section \[sec\_simulations\] illustrates the proposed integrated hybrid system via computer simulations for static and mobile target localization and discusses ways to integrate FoV constraints in the proposed controllers. Section \[sec:exp\] gives experimental validation of our claims and describes the data-driven noise modeling strategy. Section \[sec\_conclusions\] concludes the work. System Model {#sec_problem} ============ Consider a group of $n$ mobile targets, indexed by $i \in \ccalN = \{1 \dots n \}$, with initially unknown positions $\bbx_i$. Consider also a mobile stereo camera located at $\bbr(t) \in \reals^3$ and with orientation $R(t) \in SO(3)$, where $SO(3)$ denotes the special orthogonal group of dimension three, with respect to a fixed global reference frame at time $t\geq 0$. A coordinate frame anchored to the stereo camera, hereafter referred to as the relative coordinates, is oriented such that, without loss of generality, the $x$-axis joins the centers of two monocular cameras and the positive $z$-axis measures range. We denote the two cameras by Left (L) and Right (R). The (L) and (R) camera centers are thus located at $(-b/2,0,0)$ and $(b/2,0,0)$ in the relative coordinates, where $b$ denotes the stereo baseline (see Fig. \[fig:y\_cord\]). ![Stereo geometry in 3D. 
Two rays from the camera centers to a target located at $\bbp_i$ create a pair of image coordinates, $(x_L,y)$ and $(x_R,y)$.[]{data-label="fig:y_cord"}](epipolar.pdf){width="7cm"} The position of target $i$ with respect to the relative camera frame can be expressed as $$\label{eq:bbp} \bbp_i\triangleq \bbp(x_{Li},x_{Ri},y_i) = \frac{b}{x_{Li} - x_{Ri}} \begin{bmatrix}\frac{1}{2} (x_{Li}+x_{Ri}) \\y_i \\f \end{bmatrix},$$ where $f$ denotes the focal length of the camera lens, measured in pixels, and $x_{Li}$, $x_{Ri}$, and $y_i$ denote the pixel coordinates of target $i$ on the left and right camera images, as in Fig. \[fig:y\_cord\], where we note that $y_i$ is equal in the left and right image by the epipolar constraint. Given the orientation and position of the mobile camera, it is useful to consider the location of target $i$ in global coordinates $$\label{eq:hatx} \bbx_{i}\triangleq R(t) {\bbp}_{i}+ \bbr(t).$$ In practice, we can only observe quantized versions of the image coordinate tuples $(x_{Li},x_{Ri},y_{i})$ once they are rounded to the nearest pixel centers, which we hereafter denote by $\chkx_{Li}$, $\chkx_{Ri}$, and $\chky_{i}$. In view of , the corrupted observation $(\chkx_{Li},\chkx_{Ri},\chky_{i})$ carries its quantization error into the observed coordinates $\bbp_i$ of target $i$, causing non-Gaussian error distributions [@blostein87; @chang94]. For convenience, we follow [@matthies87; @foerstner05] and approximate the quantized error in the pixels as Gaussian to allow uncertainty propagation from image to world coordinates. The noise propagation takes place under a linearization of the measurement equation, so that the localization error of the target in the relative camera frame will also be Gaussian with mean $\cbbp_i=\bbp(\chkx_{Li},\chkx_{Ri},\chky_{i})$. It follows from that the global location estimate $\cbbx_i$ is also subject to Gaussian noise.
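As a concrete illustration, the triangulation map and the change to global coordinates above can be sketched in a few lines of numpy; the baseline and focal length values in the usage example are hypothetical, not taken from the paper:

```python
import numpy as np

def triangulate(x_l, x_r, y, b, f):
    """Relative-frame position p(x_L, x_R, y) of a target from its pixel
    coordinates, following the triangulation equation above.
    b is the stereo baseline and f the focal length in pixels."""
    disparity = x_l - x_r  # positive for a target in front of the rig
    return (b / disparity) * np.array([0.5 * (x_l + x_r), y, f])

def to_global(p, R, r):
    """Global coordinates x = R p + r of a relative-frame point p."""
    return R @ p + r

# Usage: with b = 0.1 m and f = 500 px, a 10 px disparity gives depth
# z = b f / (x_L - x_R) = 5 m.
p = triangulate(60.0, 50.0, 20.0, b=0.1, f=500.0)  # -> [0.55, 0.2, 5.0]
```

Note that depth scales inversely with disparity, which is why quantization noise dominates at long range, where disparities are small.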
Targets may be mobile, so we denote the ground truth full state of target $i$ by $\bbz_i = [\bbx_i^\top \; \dot{\bbx}_i^\top \; \ddot{\bbx}_i^\top ]^\top \!\!$. Then, $\bbx_i$ and $\bbz_i$ are related by $\bbx_i = H \bbz_i$, where $H = [1 \; 0 \; 0] \otimes I_3$, where $\otimes$ represents the Kronecker product. Thus, we can think of $\cbbx_{i}$ as a noisy copy of the zero-th order terms of $\bbz_i$, $$\label{eq:lin_obs} \cbbx_{i} = H \bbz_{i} + \bbv_{i},$$ where $\bbv_{i}$ is a white noise vector. We hereafter denote the covariance of $\bbv_{i}$ by $\Sigma_i\in \mbS^3_+$, where $\mbS^3_+$ denotes the set of $3\times3$ symmetric positive definite matrices. In Section \[sec:potdes\], we discuss an explicit form of $\Sigma_i$ that depends on the measurement itself. Kalman Filtering to Fuse the Target Observations ------------------------------------------------ Assume that the stereo camera has made a sequence of observations of the targets. Introduce an index $k\geq 0$ associated with every observation to obtain $\cbbx_{i,k}$ and associated covariances $\Sigma_{i,k}$ from . Our goal is to create accurate state information for a group of targets based on a sequence of such observations. For this, we use a Kalman filter (KF), which is an efficient filter that can incorporate a sequence of noisy measurements within a system model to create accurate state estimates. We model the continuous time evolution of target $i$’s motion with the discrete time linear equation $$\label{dis_time_eq} {\bbz}_{i,k} = \Phi {\bbz}_{i,k-1}.$$ In , $\Phi$ is the state transition matrix, which is unknown to the observer. Adaptive procedures for determining $\Phi$ are well studied in the literature on mobile target tracking [@Singer70; @Li03_part1]. 
Zero velocity and constant acceleration models of the target trajectory, which we discuss in Section \[sec\_simulations\], are modeled over a short time interval $Dt$ by \[eq:genmod\] $$\begin{aligned} \Phi_{\dot{\bbx}_i={\bf 0}} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}\otimes I_3\; \text{and} \\ \Phi_{\dddot{\bbx}_i={\bf 0}} = \begin{bmatrix} 1 & Dt & \frac{Dt^2}{2} \\ 0 & 1 & Dt \\ 0 & 0 & 1 \end{bmatrix}\otimes I_3,\end{aligned}$$ The KF recursively creates state estimates, which we denote by $\hat{\bbz}_i$, and their associated covariances, which we denote by $U_i$. In particular, given prior estimates $\hat{\bbz}_{i,k-1 | k-1}$ and $U_{i,k-1 | k-1}$, we update the state estimates and fuse the covariance matrices according to the following KF: \[eq:KFeqs\] $$\begin{aligned} \hat{\bbz}_{i,k | k-1}& = \Phi \hat{\bbz}_{i,k-1 | k-1} \label{eq:zpred}\\ U_{i,k| k-1} &= \Phi U_{i,k-1| k-1} \Phi^\top +W \\ K_k &= U_{i,k| k-1} H^\top \left[H U_{i,k| k-1}H^\top +\Sigma_{i,k}\right]^{-1}\!\!\!\! \label{eq:Kgain} \\ \hat{\bbz}_{i,k | k} &= \hat{\bbz}_{i,k | k-1} + K_k\left[\cbbx_{i,k}-H \hat{\bbz}_{i,k | k-1}\right]\\ U_{i,k| k} &= U_{i,k| k-1} - K_k HU_{i,k| k-1} \label{eq:covfinal}\end{aligned}$$ where $W$ is the process noise covariance matrix and accounts for the approximate nature of $\Phi$. [@Singer70] gives a closed form for this matrix, $$\label{eq:processnoise} W = \begin{bmatrix} Dt^5/20 & Dt^4/8 & Dt^3/6 \\ Dt^4/8 & Dt^3/3 & Dt^2/2 \\ Dt^3/6 & Dt^2/2 & Dt \end{bmatrix}\otimes I_3.$$ From equation and the results of [@welch01], a closed form expression for the fused covariance estimate follows in the form of a Lemma. \[lem:kfchange\] Let $U_{i,k|k-1}$ denote the predicted covariance of all prior observations and $\Sigma_{i,k}$ denote the covariance of the most recent measurement. 
Then, the location estimate of target $i$, $H \hat{\bbz}_{i,k | k}$, has a covariance matrix, which we hereafter denote by $\Xi_{i,k}$, given by $$\label{eq:Xidef} \Xi_{i,k} \triangleq HU_{i,k|k}H^\top = \left[\left(HU_{i,k|k-1}H^\top \right)^{-1}+\Sigma_{i,k}^{-1}\right]^{-1}\!\!\!\!.$$ From the definition of $\Xi_{i,k}$ we have that, $$\begin{aligned} U_{i,k| k} &= U_{i,k| k-1} - K_k HU_{i,k| k-1} \\ HU_{i,k| k} H^\top &=H \left(U_{i,k| k-1} - K_kHU_{i,k| k-1}\right)H^\top =\Xi_{i,k}.\end{aligned}$$ To simplify the analysis, let $U=U_{i,k| k-1}$ and $\Sigma = \Sigma_{i,k}$. Substituting the Kalman gain $K_k$ from , we have $$\begin{aligned} \Xi_{i,k} &=HUH^\top \Big[ I - \big(HUH^\top + \Sigma\big)^{-1} HUH^\top \Big] \\ &=HUH^\top \Big[ \left(HUH^\top + \Sigma\right)^{-1} \left(HUH^\top + \Sigma\right) - \\ &\hspace{3cm} \left(HUH^\top + \Sigma\right)^{-1} HUH^\top \Big]\\ &= HUH^\top \left(HUH^\top + \Sigma\right)^{-1}\left(HUH^\top \! +\! \Sigma-HUH^\top \right)\\ &= HUH^\top \left(HUH^\top + \Sigma\right)^{-1}\Sigma\\ &= \Sigma\left(HUH^\top + \Sigma\right)^{-1}HUH^\top \\ &= \Big( \left( HUH^\top \right)^{-1} \left(HUH^\top + \Sigma\right) \Sigma^{-1} \Big)^{-1}\\ \Xi_{i,k} &= \left[\Sigma^{-1}+ \left(HUH^\top \right)^{-1}\right]^{-1}.\end{aligned}$$ These manipulations are legal because covariance matrices are positive definite and therefore symmetric and invertible. The Next Best View Problem {#sec:NBVprob} -------------------------- Suppose there are $k-1$ available observations of the group of targets in $\ccalN$, and let $$\label{eq:s_objective} HU_{s,k|k-1}H^\top \; \textrm{with} \;\; s= \argmax_{j \in \ccalN} \left\{ {\bf tr}\, \left[HU_{j,k|k-1} H^\top \right] \right\}$$ denote the predicted covariance of the worst localized target and $$\label{eq:c_objective} HU_{c,k|k-1}H^\top = \frac{1}{n} \sum_{i \in \ccalN} H U_{i,k|k-1}H^\top .$$ denote the average of all predicted target covariances at iteration $k$. 
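The pieces introduced in this section, the KF recursion, the fused position covariance of the Lemma, and the supremum objective, can be exercised together in a short numpy sketch; the constant-acceleration model, the scalar process-noise intensity `q`, and the random covariances are illustrative assumptions:

```python
import numpy as np

I3 = np.eye(3)
H = np.kron(np.array([[1.0, 0.0, 0.0]]), I3)  # observe position only

def kf_step(z_hat, U, x_meas, Sigma, dt, q=1.0):
    """One predict/update cycle of the KF, constant-acceleration model.
    State z = [x, x_dot, x_ddot] stacked per axis (9-vector)."""
    Phi = np.kron(np.array([[1.0, dt, dt**2 / 2],
                            [0.0, 1.0, dt],
                            [0.0, 0.0, 1.0]]), I3)
    W = q * np.kron(np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                              [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                              [dt**3 / 6,  dt**2 / 2, dt]]), I3)
    z_pred, U_pred = Phi @ z_hat, Phi @ U @ Phi.T + W
    K = U_pred @ H.T @ np.linalg.inv(H @ U_pred @ H.T + Sigma)  # Kalman gain
    z_new = z_pred + K @ (x_meas - H @ z_pred)
    U_new = U_pred - K @ H @ U_pred
    return z_new, U_new

# Check the Lemma: the KF posterior position block equals the
# inverse-covariance (information) form of the fused covariance.
rng = np.random.default_rng(0)
spd = lambda n: (lambda A: A @ A.T + n * np.eye(n))(rng.standard_normal((n, n)))
U, Sigma = spd(9), spd(3)
K = U @ H.T @ np.linalg.inv(H @ U @ H.T + Sigma)
Xi_kf = H @ (U - K @ H @ U) @ H.T
Xi_info = np.linalg.inv(np.linalg.inv(H @ U @ H.T) + np.linalg.inv(Sigma))
assert np.allclose(Xi_kf, Xi_info)

# Supremum objective: index of the worst-localized target.
U_list = [spd(9) for _ in range(4)]
s = int(np.argmax([np.trace(H @ Uj @ H.T) for Uj in U_list]))
```

The information form of the fused covariance is what makes the gradient of the next section tractable, since the prior term is constant with respect to the next view.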
The problem that we address in this paper is as follows. \[problem\] Given the predicted covariance of the worst localized target $HU_{s,k|k-1}H^\top $ (respectively, the average of the targets’ predicted covariances $HU_{c,k|k-1}H^\top $) and the predicted next location $\hbbz_{s,k | k-1}$ of target $s$ (respectively, the average of the targets’ predicted locations $\hbbz_{c,k | k-1}$), determine the next pose of the stereo rig $(\bbr(t_k), R(t_k))$ so that ${\bf tr}[\Xi_{s,k}]$ (respectively, ${\bf tr}[\Xi_{c,k}]$) is minimized. To solve Problem \[problem\] we make the following assumptions: 1. noise is dominated by quantization of pixel coordinates; 2. correct correspondence of the targets between the images in the stereo rig exists; 3. if targets are in the field of view of the cameras, then they are not occluded by any obstacle in space. Assumption (A1) allows us to isolate, analyze, and control the effect of pixelation noise on the target localization process. While other sources of noise do exist, pixelation noise does in fact dominate for small disparities, e.g., when the camera is far away from the targets. The effect of other noise sources can be critical for the stability of the KF, and we discuss a novel data-driven approach to obtain empirical models of these uncertainties in Section \[sec:exp\]. Assumptions (A2) and (A3) allow us to simplify the problem formulation. It is well known that correspondence and occlusion are both important problems and, being such, have received significant attention in the computer vision literature, see e.g., [@scharstein2002] and the references therein. In this paper, assumptions (A2) and (A3) allow us to obtain analytic and computationally efficient solutions to the Next-Best-View and target localization problem using exact models of stereo vision sensors. 
In situations where correspondence errors and occlusions do not raise significant challenges, e.g., for sparse target configurations, our approach can have significant practical applicability. In problem \[problem\], we have chosen the trace as a measure of uncertainty among other choices, such as the determinant or the maximum eigenvalue. (A similar choice was made by [@ponda09].) [@wenhardt07] shows that all such criteria behave similarly in practice. Since minimization of ${\bf tr}[\Xi_{s,k}]$ is associated with improving localization of the worst localized target, we call it the *supremum objective*. We call minimization of ${\bf tr}[\Xi_{c,k}]$ the *centroid objective*. $\Xi_{s,k}$ will depend only on the predicted next position of the worst localized target, which we denote by $\bbp_{s,k}$, but $\Xi_{c,k}$ will depend on the [predicted next]{} positions $\bbp_{i,k}$ of all $i \in \ccalN$. Attempting to solve Problem \[problem\] by simultaneously controlling the covariances of all targets requires a nonconvex constraint to maintain consistency between images. We note that, when we employ the supremum or centroid objective, the decision process comprises two nonlinear procedures: triangulation and Kalman Filtering. Controlling the Relative Frame {#sec:potdes} ============================== Assume that $k-1$ observations are already available. Our goal in this section is to determine the next best target locations $\bbp_{s,k}$ or $\bbp_{c,k}$ on the relative camera frame so that if a new observation is made with the targets at these new relative locations, the fused localization uncertainty, which is captured by $\Xi_{s,k}$ or $\Xi_{c,k}$, is optimized. For this, we need to express the instantaneous covariance $\Sigma_{i,k}$ of target $i$ as a function of the relative position $\bbp_{i,k}$. To simplify notation, in this section we drop the subscripts $s, c,$ and $o$. We will also drop the subscript $k$ when no confusion can occur. 
From , we know that $\bbp$ depends on the noisy vector $(\chkx_L,\chkx_R,\chky)$, which we assume has some known or easily estimated covariance $Q$. [^2] In the experiments of Section \[sec:exp\], we propose a new data-driven linear model to estimate $Q$. Let $J$ be the Jacobian of $\bbp\triangleq \bbp(x_L,x_R,y)$ evaluated at the point $(\chkx_L,\chkx_R,\chky)$, given by $$\label{eq:jac} J =\frac{b}{(\chkx_L-\chkx_R)^{2}} \begin{bmatrix} -\chkx_R & \chkx_L & 0 \\ -\chky & \chky & \chkx_L-\chkx_R \\ -f & f & 0 \end{bmatrix}.$$ Then, the first order (linear) approximation of $\bbp $ about the point $(\chkx_L,\chkx_R,\chky)$ is $$\label{eq:linearization} \bbp(x_L,x_R,y) \approx \bbp(\chkx_L,\chkx_R,\chky) + J \begin{bmatrix} \chkx_L-x_L \\ \chkx_R-x_R \\ \chky-y \end{bmatrix} \!\!.$$ Since $\bbp (\chkx_L,\chkx_R,\chky)$ corresponds to the current mean estimate of target coordinates, it is constant in . Therefore, the covariance of $\bbp $ in the relative camera frame is $J QJ ^\top $. Fusing covariance matrices as in Lemma \[lem:kfchange\] requires that they are represented in the same coordinate system. To represent the covariance $J QJ ^\top $ in global coordinates, we need to rotate it by an amount corresponding to the camera’s orientation at the time this covariance is evaluated. Assuming that consecutive observations are close in space, so that the camera makes a small motion during the time interval $[t_{k-1},t_{k}]$, we may approximate the camera’s rotation $R(t)$ at time $t\in[t_{k-1},t_{k}]$ by its initial rotation $R(t_{k-1})$. We note that this approximation will be inaccurate if the robot moves long distances between consecutive observations. For the use case discussed in Section \[sec:exp\], the robot takes multiple observations per second, so this approximation is not an issue. 
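The Jacobian and the first-order propagation $J Q J^\top$ can be written down directly; in the sketch below (numpy) the analytic Jacobian is checked against finite differences of the triangulation map, with hypothetical values for $b$, $f$, and the pixel covariance $Q$:

```python
import numpy as np

def stereo_jacobian(x_l, x_r, y, b, f):
    """Jacobian J of p(x_L, x_R, y), evaluated at the quantized pixels."""
    d = x_l - x_r
    return (b / d**2) * np.array([[-x_r, x_l, 0.0],
                                  [-y,   y,   d],
                                  [-f,   f,   0.0]])

def pixel_cov_to_relative(x_l, x_r, y, b, f, Q):
    """First-order propagation of the pixel covariance Q: cov(p) ~ J Q J^T."""
    J = stereo_jacobian(x_l, x_r, y, b, f)
    return J @ Q @ J.T

# Finite-difference check of J against the triangulation map itself.
b, f = 0.1, 500.0
tri = lambda u: (b / (u[0] - u[1])) * np.array([0.5 * (u[0] + u[1]), u[2], f])
u0, eps = np.array([60.0, 50.0, 20.0]), 1e-6
J_num = np.column_stack([(tri(u0 + eps * e) - tri(u0 - eps * e)) / (2 * eps)
                         for e in np.eye(3)])
assert np.allclose(stereo_jacobian(*u0, b, f), J_num, atol=1e-4)
```

The resulting $J Q J^\top$ is fully correlated, which is precisely the directional (eigenvector) information that the controller exploits.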
Denoting $R(t_{k-1})$ by $R$, the instantaneous covariance of $\bbp $ can be approximated by $$\label{eq:instcov} \Sigma = \cov[\bbp(\chkx_L,\chkx_R,\chky)] \approx RJ QJ ^\top R^\top \!\!.$$ In view of and , the covariance in is clearly a function of the target coordinates on the relative image frame. Using this model of measurement covariance, we define the uncertainty potential $$\label{eq:h} h(\bbp) = {\bf tr}\big\{ \Xi \big\},$$ Then, the next best view vector that minimizes $h$ can be obtained using the gradient descent update $$\label{eqn_p_update} \bbp_{k} =\bbp_{k-1} - K \int_{0}^{T} \nabla_{\bbp} h\left(\bbp (\tau)\right) d\tau,$$ where $K$ is a gain matrix. The length $T>0$ of the integration interval is chosen so that the distance between $\bbp_{k}$ and $\bbp_{k+1}$ is less than the maximum distance the robot can travel before another NBV is calculated at time $t_{k}$. The following result provides an analytical expression for the gradient of the potential $h$ in . \[prop:grad\] The $j$-th coordinate of the gradient of $h$ with respect to $\bbp $ is given by $$\label{eq:gradient f_h} \frac{\partial h}{\partial [\bbp ]_j} ={\bf tr} \left\{\Sigma ^{-1} \Xi^{2}\Sigma ^{-1} \frac{\partial \Sigma }{\partial \left[\bbp \right]_j} \right\},$$ where $\frac{\partial \Sigma }{\partial \left[ \bbp \right]_j}$ is the partial derivative of $\Sigma $ with respect to the $j$-th coordinate of $\bbp $, and $j=x,y,z$, corresponds to the three dimensions of $\bbp $. It is not difficult to show that $$\label{eq:grad_mid} \frac{\partial h}{\partial [\bbp ]_j} = -{\bf tr} \left\{ \Xi ^2 \frac{\partial \left[ ( HU H^\top )^{-1} + \Sigma ^{-1} \right]}{\partial \left[\bbp \right]_j} \right\}.$$ Note that the covariance of all prior fused measurements $HU H^\top $ is a constant with respect to the next best view $\bbp $ and, therefore, its derivative with respect to $\bbp $ is zero, i.e., $\partial \left(HU H^\top \right)^{-1} / \partial\left[ \bbp \right]_j = 0$. 
The derivative of $\Sigma ^{-1}$ with respect to $\left[ \bbp \right]_j$ leads to an expression for the derivative in the right-hand side of that retrieves . In what follows, we apply the chain rule to calculate $\partial \Sigma / \partial \left[\bbp \right]_j$ in . In particular, since we hold $R$ constant during the relative update, we have that the partial derivatives of $\Sigma $ in the directions $[\bbp ]_j$ for $j=x,y,z$ are taken only with respect to the entries of $J Q J^\top $, i.e., $$\label{eq:jac_deriv} \frac{\partial \Sigma }{\partial \left[\bbp \right]_j} = R \left( \frac{\partial J} {\partial\left[ \bbp \right]_j} QJ^\top + J Q \frac{\partial J^\top} {\partial\left[ \bbp \right]_j} \right) R^\top .$$ Then, using the chain rule, $$\label{eq:partials} \frac{\partial J }{\partial\left[ \bbp \right]_j} = \frac{\partial J }{\partial x_L }\frac{ \partial x_L}{\partial \left[\bbp \right]_j}+ \frac{\partial J }{\partial x_R }\frac{ \partial x_R}{\partial \left[\bbp \right]_j}+ \frac{\partial J}{\partial y }\frac{ \partial y }{\partial \left[\bbp \right]_j}.$$ The need arises to express the pixel coordinate tuple $(x_L,x_R,y )$ as a function of the location of the target in relative coordinates $\bbp $. This is available via the inverse of , given by $$\label{eq:invbbp} \mat{x_L \\ x_R \\ y} = \frac{f}{[\bbp]_z} \mat{[\bbp ]_x + \frac{b}{2} \\ [\bbp ]_x - \frac{b}{2} \\ [\bbp ]_y }.$$ Then, can be evaluated by finding the partial derivative of $J$ with respect to $(x_L,x_R,y )$ and the partial derivatives of the entries of with respect to each coordinate of $\bbp $. Using these derivatives, all terms in are accounted for, which completes the proof of Proposition \[prop:grad\].
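Putting these pieces together, one relative-frame update can be sketched as follows; for brevity the analytic gradient of the Proposition is replaced here by a finite-difference approximation, the rotation is held at the identity, and the baseline, focal length, prior covariance, pixel covariance, and gain are all hypothetical values:

```python
import numpy as np

B, F = 0.1, 500.0  # hypothetical baseline [m] and focal length [px]

def jacobian(p):
    """Jacobian of the triangulation map at the pixels that image p."""
    x_l, x_r, y = (F / p[2]) * np.array([p[0] + B / 2, p[0] - B / 2, p[1]])
    d = x_l - x_r
    return (B / d**2) * np.array([[-x_r, x_l, 0.0],
                                  [-y,   y,   d],
                                  [-F,   F,   0.0]])

def h(p, prior, Q):
    """Fused uncertainty tr(Xi) as a function of the relative view p
    (camera rotation held fixed at the identity for this sketch)."""
    J = jacobian(p)
    Sigma = J @ Q @ J.T
    return np.trace(np.linalg.inv(np.linalg.inv(prior) + np.linalg.inv(Sigma)))

def nbv_step(p, prior, Q, gain=1.0, eps=1e-6):
    """One gradient-descent step on h, with a finite-difference gradient."""
    grad = np.array([(h(p + eps * e, prior, Q) - h(p - eps * e, prior, Q))
                     / (2 * eps) for e in np.eye(3)])
    return p - gain * grad
```

Iterating `nbv_step` from the current relative view drives the target toward closer, better-conditioned viewing geometry; the resulting $\bbp_k$ is the relative-frame NBV that the global controller of the next section then realizes.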
Controlling the Global Frame {#sec:realize} ============================ The update in provides the desired change in relative target coordinates $\bbp_{o,k} -\bbp_{o,k-1} $ of target $o$ in the camera frame, where $o$ stands for ‘objective’ and can be either $s$ or $c$, depending on the objective defined in Problem \[problem\]. Our goal in this section is to determine a new camera position $\bbr(t_k)$ and orientation $R(t_k)$ in space that realizes the change in view, effectively arriving at the Next Best View of the target located at $\hbbx_o.$ Transforming the change of view into global coordinates, the goal position $\bbr^*$ is defined as $$\bbr^* \triangleq \bbr(t_{k-1}) + R (t_{k-1})( \bbp_{o,k} -\bbp_{o,k-1}).$$ The ability to rotate the camera in addition to translating it means that there are infinitely many poses in the global frame that realize the NBV in relative coordinates. The goal orientation is defined to be any pose such that the point $\hbbx_o$ lies on the $z$-axis of the camera relative coordinate system, i.e., the camera is looking straight at the centroid (or supremum) target location. To achieve this new desired camera position and orientation, we define the following potential which we seek to minimize: $$\begin{aligned} \label{eq:psi_potential} \psi \left(\bbr,R \right) &= \overbrace{ \norm{\bbr - \bbr^* }^2}^\text{position} + \overbrace{\norm{R^\top \hbbz - \bbe_3}^2}^\text{orientation},\end{aligned}$$ where $$\begin{aligned} \label{eq:unit-vecs} \hbbz = \frac{ \hbbx_o - \bbr^* }{\norm{\hbbx_o - \bbr^*}} \text{ and } \bbe_3= \left[0 \; 0 \; 1\right]^\top.\end{aligned}$$ In , $\hbbz$ is the direction in global coordinates from the desired robot position $\bbr^*$ to the estimated target-objective location $\hbbx_o,$ and $\bbe_3$ is the unit vector in the direction of the robot’s view in relative coordinates, defined to be the $z$-axis. 
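The goal pose computation and the potential $\psi$ above admit a direct sketch (numpy; the target estimate and poses used in the check are arbitrary illustrative values):

```python
import numpy as np

def goal_position(r_prev, R_prev, dp):
    """r* = r(t_{k-1}) + R(t_{k-1}) (p_{o,k} - p_{o,k-1})."""
    return r_prev + R_prev @ dp

def psi(r, R, r_star, x_hat):
    """Pose potential: distance-to-goal term plus a term that vanishes
    exactly when the camera z-axis points at the target estimate x_hat."""
    z_hat = (x_hat - r_star) / np.linalg.norm(x_hat - r_star)
    e3 = np.array([0.0, 0.0, 1.0])
    return np.sum((r - r_star) ** 2) + np.sum((R.T @ z_hat - e3) ** 2)

# psi is zero at a goal pose: camera at r* with its z-axis along z_hat.
x_hat, r_star = np.array([0.0, 0.0, 10.0]), np.zeros(3)
assert np.isclose(psi(r_star, np.eye(3), r_star, x_hat), 0.0)
```

Because the orientation term constrains only the camera's viewing axis, any rotation about that axis also minimizes $\psi$, which is the freedom referred to above when noting that infinitely many poses realize the NBV.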
Note that the robot and target cannot be located at the same point, because this would violate field of view constraints. To minimize $\psi$, we define the following gradient flow for all time $t \in [t_{k-1},t_k]$ \[eq:rRdiff\] $$\begin{aligned} \dot{\bbr} &= -\nabla_\bbr {\psi}(\bbr,R) \label{eq:rdiff} \\ \dot{R} &= -R \nabla_R {\psi}(\bbr,R), \label{eq:Rdiff}\end{aligned}$$ in the joint space of camera positions $\reals^3$ and orientations $SO(3)$, where $\nabla_\bbr {\psi}$ and $\nabla_R{\psi}$ are the gradients of $\psi$ with respect to $\bbr$ and $R$. After initializing the gradient flow at $\big(\bbr(t_{k-1}),R(t_{k-1})\big)$, the following lemma shows that if $R(t_{k-1})\in SO(3)$ and $R(t)$ evolves as in and $\nabla_R {\psi}(\bbr,R)$ is skew-symmetric, then $R(t)\in SO(3)$ for all time $t\in[t_{k-1},t_{k}]$; see [@Zavlanos2008]. \[lem:flow\] Let $\Omega(t) $ be skew-symmetric $\forall \, t \geq 0$ and define the matrix differential equation $\dot{R}(t) = R(t)\Omega(t)$. Then, $R(t) \in SO(n) \, \forall \, t \geq 0$ if $R(0) \in SO(n)$. In other words, the gradient flow is implicitly constrained to the set of Special Euclidean transformations during the minimization of $\psi$. Closed Form Motion Controllers ------------------------------ In the remainder of this section we provide analytic expressions for the gradients in . We also use these expressions to show that the closed loop system minimizes ${\psi}$. The first proposition identifies the gradient of $\psi$ with respect to $R.$ To prove it, we use the matrix inner product $\langle A,B\rangle={\bf tr}(A^\top B)$, which has the following property. \[lem:skew\] For any square matrix $A$ and skew-symmetric matrix $\Omega$ of appropriate size, $2 \left\langle A, \Omega \right\rangle = \left\langle A - A^\top, \Omega \right\rangle. 
$ We have that $ 2 \left\langle A, \Omega \right\rangle = \left\langle A, \Omega \right\rangle + \left\langle \Omega, A \right\rangle = {\bf tr} (A^\top \Omega + \Omega^\top A)= {\bf tr} ( (A^\top - A )\Omega)= \left\langle A - A^\top, \Omega \right\rangle . $ \[lem:dpdR\] The gradient of $\psi$ with respect to $R$ is given by the skew-symmetric matrix $$\label{eq:nabla_psi_R} \nabla_R\psi= R^\top \hbbz (R^\top \hbbz - \bbe_3)^\top - (R^\top \hbbz - \bbe_3 ) \hbbz^\top R .$$ Let $\bbv = R^\top \hbbz - \bbe_3$ to simplify notation. Using the first-order approximation $R(I+\Omega) \approx R + R\Omega$ in the neighborhood of the rotation matrix $R$, where $\Omega$ is skew-symmetric, and using Lemma \[lem:skew\] along with the basic properties of inner products, we have that $$\psi\left(\bbr, R(I+\Omega)\right) \approx \psi(\bbr, R) + \left\langle R^\top \hbbz \bbv^\top - \bbv \hbbz^\top R, \Omega \right\rangle,$$ from which we identify $ R^\top \hbbz \bbv^\top - \bbv \hbbz^\top R $ as $\nabla_R\psi (\bbr, R),$ and the result follows immediately. Additionally, we have from elementary calculus that $$\begin{aligned} \label{eq:nabla_psi_r} \nabla_\bbr \psi(\bbr,R) &= 2 (\bbr - \bbr^*) \\ &= 2 \left( \bbr - \bbr(t_{k-1}) - R (t_{k-1})( \bbp_{o,k} -\bbp_{o,k-1}) \right).\nonumber\end{aligned}$$ The following result shows that the closed loop system is globally asymptotically stable about the minimizers of $\psi$. \[thm:convergence\] The trajectories of the closed loop system globally converge to the set of minimizers of the function $\psi$. By inspection of , $\psi(\bbr, R) \ge 0,$ with equality if and only if $R^\top \hbbz = \bbe_3$ and $ \bbr=\bbr^*$. In the remainder of the proof, we show that $\psi$ is a suitable Lyapunov function for the closed loop system , and the set of equilibrium points is exactly the set of minimizers of $\psi.$ To begin, let $\bbv$ be defined as above, so that $$\begin{aligned} &\dot{\psi} (\bbr, R) \! = \! 2 \left\langle \bbr \! - \!
\bbr^*, \dot{\bbr} \right\rangle +2 \left\langle R^\top \hbbz - \bbe_3, \dot{R}^\top \hbbz \right\rangle\nonumber \\ &=\! 2\left\langle \bbr \! - \! \bbr^*, \!-\!\nabla_\bbr \psi(\bbr, R) \right\rangle +2 \left\langle \bbv, (-R \nabla_R \psi(\bbr, R) )^\top \hbbz \right\rangle\nonumber \\ &= \! 2\left\langle \bbr \! - \! \bbr^*, - 2 (\bbr \! - \! \bbr^*) \right\rangle +2 \left\langle \bbv, (R^\top \! \hbbz \bbv^\top \! \! \! - \! \bbv \hbbz^\top \! R)R^\top \! \hbbz \right\rangle\nonumber \\ &= \! - 4\norm{\bbr \! - \! \bbr^*}^2 \! + \! 2 \left( \! \left\langle \bbv, R^\top \! \hbbz \bbv^\top \! R^\top \! \hbbz \right\rangle \! - \! \left\langle \bbv ,\bbv \hbbz^\top \! R R^\top \! \hbbz \right\rangle \!\right) \nonumber \\ &= \! - 4\norm{\bbr \! - \! \bbr^*}^2 +2\left( \left(\bbv^\top R^\top \hbbz \right)^2 - \norm{\bbv}^2 \right). \label{eq:dpsi-negative}\end{aligned}$$ The Cauchy–Schwarz inequality implies that $$\left(\bbv^\top R^\top \hbbz \right)^2 \le \norm{ \bbv}^2 \norm{ R^\top \hbbz}^2 = \norm{ \bbv}^2 ,$$ so that is the sum of two nonpositive terms. Thus, $\dot{\psi} \le 0$, with equality if and only if both of the nonpositive terms are zero. In particular, $\dot{\psi} (\bbr, R)= 0$ if and only if $ \bbr = \bbr^*$ and $\norm{\bbv} = 0$, which implies $\hbbz^\top R= \bbe_3^\top$ for all critical points. Invoking the Lyapunov Stability Theorem, the result follows. Note that the system evolves during the time interval $[t_{k-1},t_k]$, until a new observation of the targets is made at time $t_k$. This time interval might not be sufficient for the camera to realize exactly the NBV. Nevertheless, Theorem \[thm:convergence\] implies that at time $t_k$, the position and orientation of the camera are closer to the desired NBV than they were at time $t_{k-1}$. By appropriately choosing the length of the time interval $[t_{k-1},t_k]$, we may ensure that for practical purposes the camera almost realizes the NBV.
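The descent argument can be illustrated with a small discretized simulation. The explicit Euler step and the SVD re-projection onto $SO(3)$ below are choices of this sketch (the continuous-time flow needs no projection by Lemma \[lem:flow\]); all numerical values are illustrative:

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])

def psi(r, R, r_star, z_hat):
    # Potential from eq. (eq:psi_potential).
    return np.linalg.norm(r - r_star)**2 + np.linalg.norm(R.T @ z_hat - e3)**2

def flow_step(r, R, r_star, z_hat, dt=0.01):
    # One explicit-Euler step of the gradient flow, followed by an SVD
    # re-projection onto SO(3) to control discretization drift.
    a = R.T @ z_hat
    v = a - e3
    grad_R = np.outer(a, v) - np.outer(v, a)     # eq. (eq:nabla_psi_R)
    r = r - dt * 2.0 * (r - r_star)              # eq. (eq:nabla_psi_r)
    U, _, Vt = np.linalg.svd(R @ (np.eye(3) - dt * grad_R))
    return r, U @ Vt

r_star = np.zeros(3)
z_hat = np.array([0.0, 1.0, 0.0])
r, R = np.array([2.0, -1.0, 0.5]), np.eye(3)

vals = [psi(r, R, r_star, z_hat)]
for _ in range(500):
    r, R = flow_step(r, R, r_star, z_hat)
    vals.append(psi(r, R, r_star, z_hat))

assert np.allclose(R @ R.T, np.eye(3), atol=1e-9)   # R stays in SO(3)
assert vals[-1] < vals[100] < vals[0]               # psi decreases along the flow
assert vals[-1] < 1e-3 * vals[0]                    # near a minimizer of psi
```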
Performance of the Integrated Hybrid System {#sec_simulations} =========================================== In this section, we illustrate our approach in computer simulations. We begin by discussing a practical method for incorporating field of view constraints in the hybrid system, which is used in our simulations and experimental results. Incorporating Field of View Constraints {#sec:fov} --------------------------------------- For a 3D point to appear in a given image, that point must lie within the field of view of both cameras in the stereo pair as the robot rotates and translates in an effort to minimize . We assume that the (L) and (R) cameras have identical square sensors with a 70$^\circ$ field of view, which, when combined with the image width $w$, determines the focal length $f$. Let $$\ccalS= \set{ [x,y,z]^\top \in \reals^3 \colon \abs{x} \leq \frac{wz - bf}{2f} ,\: \abs{y} \leq \frac{zw}{2f},\: z > \frac{bf}{w} }$$ denote the set of points in relative coordinates that are visible to both cameras in the pair. This set is the intersection of two pyramids facing the positive $z$ direction with vertices located at the two camera centers. Figure \[fig:fov\_3d\] visualizes the set $\ccalS$ in two dimensions (blue shaded region). Note that the intersection of the two pyramids is located at $z = \frac{bf}{w}$, and therefore any point with $z < \frac{bf}{w}$ cannot be in view of both cameras. ![The field of view for a stereo camera in the $xz$ plane. The field of view in the $yz$ plane is similar.[]{data-label="fig:fov_3d"}](fov2dv2.pdf){width="7cm"} Maintaining all targets within the FoV $\ccalS$ requires that the camera positions and orientations evolve in the set $$\label{eq:ccalD} \ccalD = \set{ (\bbr,R) \in \reals^3 \times SO(3) \, : \, \set{\bbp_i}_{i \in \ccalN} \in \ccalS }$$ for all time.
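Membership in $\ccalS$ reduces to three inequalities. A sketch (the unit image width and baseline of $0.1$ below are illustrative, not the experimental values):

```python
import math

def in_fov(p, w, f, b):
    # Membership test for the visibility set S defined above.
    x, y, z = p
    return (z > b * f / w
            and abs(x) <= (w * z - b * f) / (2 * f)
            and abs(y) <= z * w / (2 * f))

# A 70-degree field of view combined with the image width w fixes f:
w, b = 1.0, 0.1
f = w / (2 * math.tan(math.radians(35.0)))   # half-angle of 35 degrees

assert in_fov((0.0, 0.0, 1.0), w, f, b)                   # point straight ahead
assert not in_fov((0.0, 0.0, 0.5 * b * f / w), w, f, b)   # blind zone z <= bf/w
assert not in_fov((10.0, 0.0, 1.0), w, f, b)              # far off-axis
```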
To ensure invariance of the set $\ccalD$, we define the potential functions \[phi\] $$\begin{aligned} \phi_{i1}(\bbr,R) &= \left(\frac{w[\bbp_i]_z - bf}{2f}\right)^2 - [\bbp_i]_x^2,\\ \phi_{i2}(\bbr,R) &= \left(\frac{w[\bbp_i]_z}{2f}\right)^2 - [\bbp_i]_y^2,\\ \phi_{i3}(\bbr,R) &= [\bbp_i]_z^2 - \left(\frac{bf}{w}\right)^2,\end{aligned}$$ that are positive if $(\bbr,R) \in \ccalD$, where $[\bbp_i]_x$, $[\bbp_i]_y$, and $[\bbp_i]_z$ are the $x$, $y$, and $z$ coordinates of target $i$ in the relative camera frame, that can be expressed in terms of the camera position and orientation as \[eq:coord\_transformations\] $$\begin{aligned} [\bbp_i]_x& = \left\langle \bbe_1,R^\top (\hat{\bbx}_i - \bbr)\right\rangle ,\\ [\bbp_i]_y &= \left\langle \bbe_2,R^\top (\hat{\bbx}_i - \bbr)\right\rangle ,\\ [\bbp_i]_z &= \left\langle \bbe_3,R^\top (\hat{\bbx}_i - \bbr)\right\rangle ,\end{aligned}$$ where $\bbe_1$, $\bbe_2$, and $\bbe_3$ are the unit vectors in the standard basis. Then, given an estimate of target locations $\hat{\bbx}_i$ for $i=1,\dots,n$, we augment the potential $\psi$ from by adding barrier functions $1/\phi_{ij}$ that will grow without bound anytime a target is close to the boundary of the feasible set $\ccalD$. The repulsive force supplied by $\phi_i$ is regulated by a user defined penalty parameter $\rho>0$. The artificial potential function, incorporating the desired FoV constraints, is given by $$\label{eq:objectivehat} \hat{\psi}\left(\bbr,R \right) = \psi\left(\bbr,R\right) + \frac{\rho}{n} \sum_{i \in \ccalN}\sum_{j = 1}^3 g(\phi_{ij}),$$ where $g \colon \reals \to \reals$ is a barrier potential, and multiplication by $1/n$ ensures that the number of targets does not affect the strength of the penalty. The penalty parameter $\rho$ is set sufficiently small so that $\hat{\psi}$ approximates $\psi$ when $(\bbr,R)$ is in the interior of $\ccalD$ while maintaining that $\hat{\psi}$ becomes extremely large for $(\bbr,R)$ that approach the boundary of $\ccalD$. 
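The barrier construction can be sketched directly from the definitions above (the camera constants are illustrative placeholders):

```python
def barriers(p, w, f, b):
    # phi_{i1}, phi_{i2}, phi_{i3} from eq. (phi): each is positive exactly
    # when target i is strictly inside the visibility set S.
    x, y, z = p
    return (((w * z - b * f) / (2 * f))**2 - x**2,
            (w * z / (2 * f))**2 - y**2,
            z**2 - (b * f / w)**2)

w, f, b = 1.0, 0.7, 0.1
inside = barriers((0.0, 0.0, 2.0), w, f, b)
assert all(v > 0 for v in inside)

# As a target slides toward the lateral FoV boundary, phi_{i1} tends to zero,
# so a penalty term of the form (rho/n) * g(phi_{i1}) with g(a) = 1/a blows up:
edge_x = (w * 2.0 - b * f) / (2 * f)
near = barriers((0.999 * edge_x, 0.0, 2.0), w, f, b)
assert 0 < near[0] < inside[0]
```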
Replacing $\psi$ with $\hat{\psi}$ in the gradient flow in Algorithm \[alg1\] provides the desired potential that realizes the NBV and respects FoV constraints. In the simulations, we set $g(a) = \frac{1}{a}$. In what follows we derive analytical expressions for the gradients of $\hat{\psi}$. In particular, we have that \[eq:grad\_hpsi\] $$\begin{aligned} \nabla_\bbr \hat{\psi}\left(\bbr,R \right) = \nabla_\bbr \psi + \frac{\rho}{n} \sum_{i \in \ccalN}\sum_{j = 1}^3 g' (\phi_{ij})\nabla_\bbr\phi_{ij}, \\ \nabla_R \hat{\psi}\left(\bbr,R \right) = \nabla_R \psi + \frac{\rho}{n} \sum_{i \in \ccalN}\sum_{j = 1}^3g' (\phi_{ij}) \nabla_R \phi_{ij}.\label{eq:grad_hpsi_dR}\end{aligned}$$ The derivative of the barrier function, $g'$, is available from elementary calculus. The gradients in with respect to $\bbr$ and $R$ can be obtained by application of the chain rule as $$\begin{aligned} \nabla_\bbr\phi_{ij} \! &= \! \frac{\partial \phi_{ij}}{\partial [\bbp_i]_x}\nabla_\bbr [\bbp_i]_x +\frac{\partial \phi_{ij}}{\partial [\bbp_i]_y}\nabla_\bbr [\bbp_i]_y +\frac{\partial \phi_{ij}}{\partial [\bbp_i]_z}\nabla_\bbr [\bbp_i]_z\nonumber\\ \nabla_R\phi_{ij} \! &= \! \frac{\partial \phi_{ij}}{\partial [\bbp_i]_x}\nabla_R [\bbp_i]_x \! + \!\frac{\partial \phi_{ij}}{\partial [\bbp_i]_y}\nabla_R [\bbp_i]_y \! + \! \frac{\partial \phi_{ij}}{\partial [\bbp_i]_z}\nabla_R [\bbp_i]_z.\label{chain}\end{aligned}$$ The coefficients in can be obtained by differentiating . The following propositions provide the gradients of $[\bbp_i]_x, [\bbp_i]_y$, and $[\bbp_i]_z$ with respect to $R$ and $\bbr$. 
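A finite-difference check of this chain rule for $\phi_{i1}$ (it uses the coordinate gradients $\nabla_\bbr [\bbp_i]_x = -R\bbe_1$ and $\nabla_\bbr [\bbp_i]_z = -R\bbe_3$ stated just below; the camera constants and poses are illustrative):

```python
import numpy as np

W, F, B = 1.0, 0.7, 0.1   # illustrative camera constants

def phi1(r, R, x_hat):
    # phi_{i1} composed with the coordinate transformation p = R^T (x_hat - r).
    p = R.T @ (x_hat - r)
    return ((W * p[2] - B * F) / (2 * F))**2 - p[0]**2

def grad_r_phi1(r, R, x_hat):
    # Chain rule: (dphi/dp_x) grad_r[p]_x + (dphi/dp_z) grad_r[p]_z,
    # with grad_r [p]_x = -R e_1 and grad_r [p]_z = -R e_3 (eq. d_wr_r).
    p = R.T @ (x_hat - r)
    dphi_dx = -2.0 * p[0]
    dphi_dz = (W * p[2] - B * F) * W / (2 * F**2)
    return dphi_dx * (-R[:, 0]) + dphi_dz * (-R[:, 2])

c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
r = np.array([0.2, -0.1, 0.0])
x_hat = np.array([0.5, 0.3, 3.0])

g_fd = np.zeros(3)
for k in range(3):
    dr = np.zeros(3); dr[k] = 1e-6
    g_fd[k] = (phi1(r + dr, R, x_hat) - phi1(r - dr, R, x_hat)) / 2e-6
assert np.allclose(g_fd, grad_r_phi1(r, R, x_hat), atol=1e-6)
```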
\[lem:Dxyz\_DR\] The gradients of $[\bbp_i]_x, [\bbp_i]_y$, and $[\bbp_i]_z$ with respect to $R$ are given by the skew-symmetric matrices $$\begin{aligned} \nabla_R [\bbp_i]_x &= \frac{1}{2} \left[ R^\top (\hat{\bbx}_i - \bbr) \bbe_1^\top - \bbe_1 (\hat{\bbx}_i - \bbr)^\top R \right], \label{Dx_DR}\\ \nabla_R [\bbp_i]_y &=\frac{1}{2} \left[ R^\top (\hat{\bbx}_i - \bbr) \bbe_2^\top -\bbe_2 (\hat{\bbx}_i - \bbr)^\top R\right] ,\\ \nabla_R [\bbp_i]_z &= \frac{1}{2} \left[ R^\top (\hat{\bbx}_i - \bbr) \bbe_3^\top -\bbe_3 (\hat{\bbx}_i - \bbr)^\top R \right].\end{aligned}$$ The procedure here is nearly identical to the method in Proposition \[lem:dpdR\]. Specifically, we expand $[\bbp_i]_x$ to first order in a skew-symmetric perturbation $\Omega$ and again use Lemma \[lem:skew\] to obtain an expression from which we can identify the gradient as the term linear in $\Omega,$ and the proof follows. The gradients of the other two coordinates are found analogously. Note that the gradients of the functions $[\bbp_i]_x, [\bbp_i]_y$, and $[\bbp_i]_z$ with respect to $R$ are skew-symmetric, as required for to ensure that $R\in SO(3)$ for all time; see Lemma \[lem:flow\]. From elementary calculus, the gradients of $[\bbp_i]_x, [\bbp_i]_y$, and $[\bbp_i]_z$ with respect to $\bbr$ are $$\label{d_wr_r} \nabla_\bbr [\bbp_i]_x \! = \! -R \bbe_1 ,\, \nabla_\bbr [\bbp_i]_y \! = \! -R \bbe_2 , \, \nabla_\bbr [\bbp_i]_z \! = \! -R \bbe_3 .$$ Outline of Controller --------------------- A position $\bbr(t_{k-1})$ and orientation $R(t_{k-1})$ of the camera and estimated positions $\hat{\bbx}_{i,k-1}$ of the targets.
\[step:rel\] Find the next best view associated with objective “$o$” according to equation : $$\bbp_{o,k} =\bbp_{o,k-1} - K \int_{0}^{T} \nabla h\left(\bbp_{o}(\tau)\right) d\tau.$$ Move the camera according to the system : $$\begin{aligned} \dot{\bbr} &= - \nabla_\bbr {\hat{\psi}}(\bbr,R), \\ \dot{R} &= - R \nabla_R \hat{\psi}(\bbr,R),\end{aligned}$$ for a time interval of length $t_{k}-t_{k-1}$ in order to realize the next best view $\bbp_{o,k}$ obtained from step 1. At time $t_{k}$ observe targets and incorporate new estimates and covariances into KF as in and . Increase the observation index $k$ by 1 and return to step 1. Algorithm \[alg1\] outlines the hybrid controller developed in Sections \[sec:potdes\] and \[sec:realize\]. After initialization, Step 1 determines the NBV according to either the *supremum objective* or the *centroid objective*. Given a frame rate and sensor speed, we set the integration interval $T$ so that the distance between $\bbp_{o,k-1}$ and $\bbp_{o,k}$ is the maximum distance the camera can travel before making a new observation. Each time a new observation is made, Step 1 returns a new NBV $\bbp_{o,k}$, which constitutes a discrete switch in the potential $\hat{\psi}$ in Step 2. This switch results in a new motion plan that guides the robot to a position and orientation that realizes the new NBV. The camera moves according to Step 2 until a new measurement is taken, at which point we set $k:=k+1$ and return to Step 1. Static Target Localization {#sec:subsec_stat_target_local} -------------------------- We begin this section by illustrating our approach for a simple scenario involving an array of five stationary targets in two dimensions. 
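The switching structure of Algorithm \[alg1\] can be sketched in a few lines. The three callbacks below are hypothetical stand-ins for Step 1 (the relative-frame NBV), Step 2 (the $(\bbr, R)$ gradient flow), and Step 3 (the Kalman-filter update):

```python
def hybrid_loop(r, R, x_hat, k_max, next_best_view, realize, observe_and_filter):
    # Schematic of the hybrid controller: each new observation triggers a
    # discrete switch (a new NBV and hence a new potential), followed by
    # continuous motion until the next measurement.
    for k in range(1, k_max + 1):
        p_next = next_best_view(r, R, x_hat)     # Step 1: NBV in camera frame
        r, R = realize(r, R, p_next)             # Step 2: move during [t_{k-1}, t_k]
        x_hat = observe_and_filter(r, R, x_hat)  # Step 3: new estimates at t_k
    return r, R, x_hat

# Trivial 1-D stand-ins just to exercise the control structure:
r, R, x = hybrid_loop(
    0.0, None, 10.0, 5,
    next_best_view=lambda r, R, x: x - r,        # "head toward the estimate"
    realize=lambda r, R, p: (r + 0.5 * p, R),    # cover half the gap per interval
    observe_and_filter=lambda r, R, x: x,        # estimate unchanged
)
assert abs(r - 9.6875) < 1e-12                   # geometric approach, 10*(1 - 0.5**5)
```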
In this case, the mobile stereo camera effectively has only two motion primitives available: “reduce depth” and “diversify the viewing angle.” Thus, the optimal controller will be a state-dependent combination of these two primitives, which should emerge naturally by minimizing the objective function we have described herein. For comparison, we present a “straight baseline” and a “circle baseline,” which exclusively utilize one of these two motion primitives. Specifically, the circle baseline moves the robot in a circle about the cluster of targets, and the straight baseline drives the robot closer to the targets. We require that all methods travel the same distance in each iteration, except for the straight baseline, which stops moving once FoV constraints are about to be violated. To test the validity of our assumption that pixel-noise due to quantization is uniform on the image plane, all simulated observations in this section have been quantized at the pixel level. Figure \[fig:traj\_static\] shows robot trajectories for this simple example. It can be seen that reducing range results in slightly better short-term performance for this situation, but once FoV constraints are nearly violated, repetitive observations from the same spot have correlated noise, leading to divergence of the KF; a similar behavior can be observed in Figure \[fig:errors\_and\_trace\], which refers to the 3D example below. On the other hand, the circle baseline and the supremum and centroid objectives continuously move the camera, so the i.i.d. noise assumption is not violated and the localization error continues to decrease throughout the simulation, as can be seen in the shrinking confidence ellipses in Figure \[fig:ell\]. By combining the two motion primitives automatically, the supremum and centroid objectives reduce noise by an order of magnitude compared to the straight baseline (before its KF diverges) after around 23 observations.
![An example of the trajectories generated by our algorithm and the baseline methods, shown in 2D for readability. Red denotes the trajectory of the supremum, blue denotes the centroid, magenta denotes the circle baseline method, and green the straight baseline method. The $\square$ symbols show the ground truth target locations. The triangles emanating from the trajectories represent the orientation and field of view for each objective (see fig. \[fig:fov\_3d\]). []{data-label="fig:traj_static"}](example_traj.pdf){width="7cm"} ![Ten of the $3\sigma$ confidence intervals produced by the supremum objective for one of the targets in Figure \[fig:traj\_static\].[]{data-label="fig:ell"}](example_ell.pdf){width="7cm"} The following set of simulations considers target localization in three dimensions. The goal is to evaluate our algorithm against the baseline methods. We use an image resolution of 1024$\times$1024 pixels. The unit of measure is the distance between the two cameras in the stereo rig, or the baseline, which is the characteristic length in stereo vision. It is depicted as $b$ in Figure \[fig:fov\_3d\]. The stereo rig moves 10% of its baseline between successive images, which corresponds to a simulation time step of $\Delta t = 0.1$. The matrix $Q$ was set to the identity matrix. In every simulation, the robot begins 50 baselines west of a cluster of targets, which are placed according to a uniform random distribution in the unit cube centered at the origin. The penalty parameter $\rho=100$ ensures that all targets remain within the camera’s 70$^\circ$ field of view throughout. The length of the time interval $t_{k+1}-t_k$ between two consecutive observations is chosen so that the robot either realizes the NBV, i.e., achieves $\psi=0$ in , or the robot travels the maximum allowed distance between observations.
The gain parameter, $K$ from , is set to $K = \diag (1,1,7).$ The observers that follow the circle baseline method and straight baseline method at each iteration travel a distance equal to the maximum of the distances that the supremum and centroid traveled in that iteration. All motion plans make the same total number of observations. All use identical camera parameters. All observations suffer from quantization noise after pixel coordinates are rounded to the nearest integer. ![Average localization error (top panel) and trace (bottom panel) of the position covariance of all targets versus iteration, averaged over 50 simulations. Red denotes the trajectory of the supremum, blue denotes the centroid, magenta denotes the circle baseline method, and green the straight baseline method. []{data-label="fig:errors_and_trace"}](errors_and_trace.pdf){width="7cm"} Figure \[fig:errors\_and\_trace\] shows the average total error and the trace of the target location covariance matrices for 50 simulations. In the bottom panel, evidently the straight baseline method outperforms the supremum and centroid objectives in terms of the trace of the posterior covariance matrices, up to the point when it stops being able to move. This is because the centroid and supremum objectives also obtain control inputs from a penalty function, which repels the robot from views that allow targets near the field of view boundary. The straight baseline method, on the other hand, can continue until one of the targets is on the outer edge of the image, allowing it to get closer. The centroid and supremum objectives still outperform the straight baseline method in terms of localization error. Note also that once the stereo rig following the straight baseline method stops moving, it suffers from the same quantized noise in every observation, which is biased, and causes the KF to diverge.
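The divergence of the stationary straight baseline can be traced to quantization bias. A toy calculation (with an assumed focal length of 512 px, of the same order as the simulated camera) shows the systematic depth offset that repeats identically at every observation from a fixed viewpoint:

```python
f, b = 512.0, 1.0    # illustrative focal length (pixels) and unit baseline
z_true = 50.0        # target 50 baselines away, as in the simulations

# Pixel coordinates of a centered target (p_x = p_y = 0), from the stereo
# projection: x_L = f*b/(2z), x_R = -f*b/(2z).
xL, xR = f * b / (2 * z_true), -f * b / (2 * z_true)   # +-5.12 px
xL_q, xR_q = round(xL), round(xR)                      # quantized to +-5 px
z_est = f * b / (xL_q - xR_q)                          # depth from disparity

# Rounding alone shifts the depth estimate by more than a full baseline,
# and the shift never averages out across repeated identical observations:
assert abs(z_est - z_true) > 1.0
```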
The KFs from the rigs following the circle baseline and the centroid and supremum objectives do not diverge because the individual measurement bias changes when the relative vector changes, effectively de-correlating the errors. Mobile Target Localization -------------------------- ![ An example of the trajectories generated by our algorithm when the targets are mobile in two dimensions, with time progressing in clockwise order. Red denotes the supremum and blue denotes the centroid. The $\square$ symbols show the ground truth target locations, with tails to show their motion. The triangles emanating from the trajectories represent the orientation and field of view for each objective (see Figure \[fig:fov\_3d\]). All units are in baselines. []{data-label="fig:traj"}](rings2D.pdf){width="7cm"} ![A closeup of the beginning (left panel) and end (right panel) of the left-most target trajectory in Figure \[fig:traj\]. 95% confidence ellipses associated with each objective are plotted. Red denotes the supremum’s confidence ellipses, and blue denotes the centroid. []{data-label="fig:mobile_ell"}](mobile_ellipses.pdf){width="7cm"} ![ Localization error (top panel) and trace (bottom panel) of the position covariance of all targets versus iteration for the mobile simulation shown in Figure \[fig:traj\]. []{data-label="fig:err_and_trace_mobile"}](err_and_trace_mobile.pdf){width="7cm"} In these simulations, the mobile stereo camera localizes a group of mobile targets that move in the Olympic ring pattern. The observers, two cameras implementing the supremum and centroid objectives, use the constant acceleration model from . As a simple example, Figures \[fig:traj\] and \[fig:mobile\_ell\] show an example of the mobile target simulation in two dimensions. We present the results of 50 simulations for the mobile target scenario, again subject to quantized noise from pixelation and again in three dimensions.
All constants used in the mobile simulations are the same as in the static simulations. The top panel of Figure \[fig:err\_and\_trace\_mobile\] shows the average error during the 50 simulations with mobile targets in three dimensions. Because none of the targets stray far from the rest, the *centroid objective* has a slight advantage over the supremum objective. We also performed simulations with asymmetric data sets and outliers, which favored the supremum objective. Any nondecreasing stretches in the top panel of Figure \[fig:err\_and\_trace\_mobile\] are due to quantized observations. The correlation coefficient between the time series representing the target error (top panel of Figure \[fig:err\_and\_trace\_mobile\]) and that representing the traces of the covariance matrix sequence (bottom panel of Figure \[fig:err\_and\_trace\_mobile\]) is 0.84 for the centroid objective and 0.87 for the supremum objective, showing that these are reasonable proxies for localization accuracy. We also note that the flattening out of the objective function value, plotted in the bottom panel of Figure \[fig:err\_and\_trace\_mobile\], is due to the process noise covariance preventing the KF-updated covariance from converging to the zero matrix. This term prevents the covariance from converging to zero in the mobile target case, and instead holds it near the heuristic value given in . Overall localization accuracy could be further improved by *a priori* knowledge of the motion model, the on-line adaptive modeling of [@Li03_part1], and using multiple observers, as in [@roumeliotis02]. Experiments {#sec:exp} =========== ![Overhead photograph of the experimental setup []{data-label="fig:above"}](nbv-overhead.pdf){width="0.7\columnwidth"} In this section, we present experiments using a single ground robot (iRobot Create), pictured in Figure \[fig:above\], to localize a set of stationary targets, for which we used colored ping pong balls.
The robot carries a stereo rig with 4 cm baseline mounted atop a servo that can rotate the rig $\pm 180^\circ.$ The rig uses two Point Grey Flea3 cameras with resolution $1280\times1024$. To simulate long distance localization, all images are downsampled by a factor of 24 so that the effective resolution is $54\times43$, allowing us to operate with disparities at or below ten in our laboratory environment. The robot is equipped with an on-board computer with 8GB RAM and an Intel Core i5-3450S processor. All image processing and triangulation are performed on-board using C++ and run on Robot Operating System (ROS). We used the Eigen library for mathematical operations and the OpenCV library for HSV color detection in our controller. We use the [@bouguet2004camera] toolbox to calibrate the intrinsic and extrinsic parameters of the stereo rig offline. For self-localization, our laboratory is equipped with an OptiTrack array of infrared cameras that tracks reflective markers that are rigidly attached to the robot. The robot is equipped with an 802.11n wireless network card, which it uses to retrieve its position and orientation by reading a ROS topic that is broadcast over wifi. To evaluate the localization accuracy of our algorithm, in addition to saving the robot trajectories, we fix markers to the targets, and the motion capture system records their ground truth locations as well. Finally, note that estimation takes place in three dimensions, whereas the experimental platform is a ground robot confined to the plane. All navigation and waypoint tracking relies on a PID controller using the next waypoint, defined by the differential flow in , as the set point. In the experiment, robots generally came within 2 cm of their target waypoints. The servo is capable of orienting the stereo camera with an accuracy of $\pm 1^\circ$ relative to the commanded orientation.
No collision avoidance, aside from the implicit collision avoidance from the FoV constraints presented in Section \[sec:fov\], is used in the implementation. Noise Modeling {#sec:noise} -------------- In this paper, we have assumed that pixel measurement errors are subject to a known zero mean Normal noise distribution with covariance $Q$. The goal of this subsection is to ensure that this assumption is satisfied in practice. In particular, we use training data to remove average bias in the pixel estimates and estimate $Q$ for our experimental setup. This is critical for a variety of reasons: - If the mean of the pixel measurements is biased, then the KF will not converge to the ground truth. - If $Q$ is an under-approximation to the actual covariance of random errors at the pixel level, then the KF will become inconsistent and will not converge to the ground truth, if it converges at all. - If our choice of $Q$ is too conservative or heuristic, it may not be informative enough to be useful in the decision process at the core of the controller. We also want to test the system in relatively extreme conditions, particularly at long ranges (small disparities), where [@freundlich15cvpr] shows that triangulation error distributions are heavy tailed, biased away from zero, and highly asymmetric, which can exacerbate problems caused by calibration errors. ![Scatter plots of the residual errors $\bbepsilon_\ell^\text{uc}$ (left panel) and $\bbepsilon_\ell$ (right panel) for the training data. []{data-label="fig:px_errors"}](scat-uncorrected.pdf "fig:"){height="4cm"} ![Scatter plots of the residual errors $\bbepsilon_\ell^\text{uc}$ (left panel) and $\bbepsilon_\ell$ (right panel) for the training data. []{data-label="fig:px_errors"}](scat-corrected.pdf "fig:"){height="4cm"} To address these challenges, we adopt a data-driven approach using linear regression in the pixel domain. Using a set of $n=600$ pairs of training images for the robot at various ranges and viewing angles, we obtain a regression that maps raw pixel observations $(x_L, x_R, y)$ to their best linear unbiased estimate $(x_L^c, x_R^c, y^c)$, hereafter referred to as the *corrected* measurement. To acquire training data for the regression, we project the motion capture target locations, i.e., the ground truth, onto the camera image sensors, using the mapping in . This yields $n$ individual output vectors $Y_\ell$ for $\ell = 1, \dots, n,$ which we stack into an $n \times 3$ matrix of outputs $Y$. We also use a color detector (the same detector that is used in the experiments) to obtain $n$ raw pixel observations.
We then compute, for each raw pixel tuple, five features and, because the data are not centered, one constant term, according to the model $$\label{eq:model} X_\ell = \left[ 1,\, y_\ell,\, d_\ell,\, x_{{L, \ell}}+ x_{{R, \ell}},\, y_\ell d_\ell,\, \frac{ x_{{L, \ell}}+ x_{{R, \ell}}}{d_\ell} \right],$$ where $d_\ell =x_{{L, \ell}} - x_{{R, \ell}}.$ Stacking the $X_\ell$ into an $n \times 6$ matrix, we have a linear model $Y = X \bbbeta + \bbepsilon,$ where $\bbbeta$ is a $6 \times 3$ matrix of coefficients and $\bbepsilon$ is an $n \times 3$ matrix of errors. We refer to the raw pixels as *uncorrected*. The associated error vectors (computed with respect to the uncorrected pixels and the projected ground truth) $\bbepsilon_\ell^\text{uc}$ for $\ell = 1, \dots, n$ are plotted in the left panel of Figure \[fig:px\_errors\]. In the scatter plot it can be seen that the mean error is nonzero, contributing average bias to individual measurements. Also note the apparent skew of the error distribution in the vertical ($y$) direction. Using the model with the feature vector described in and applying the ordinary least squares estimator, the maximum likelihood estimate of the coefficient matrix is $\hat{\bbbeta} = (X^\top X)^{-1} X^\top Y.$ Using $\hat{\bbbeta},$ we computed the residual covariance of the corrected pixel measurements, which we use as the measurement covariance $Q$. Note that the standard deviation of the $y$ pixel value, corresponding to the variance in the lower right entry of this matrix, corresponds to errors in the height of the ping pong ball center in vertical world-coordinates. The right panel of Fig.
\[fig:px\_errors\] shows the residual errors in the training set, $\bbepsilon_\ell$ for $\ell = 1, \dots, n$, for the corrected features $X \hat{\bbbeta}.$

![Projecting the Kalman Filtered 95% confidence ellipses onto the $X$-$Y$ (ground) plane using the raw/uncorrected (top panel) and corrected (bottom panel) pixel observations. The $\times$’s denote the true target locations. The data used to generate these plots were obtained during experimental trials on unseen data. Projections onto the $X$-$Z$ and $Y$-$Z$ planes gave similar results.[]{data-label="fig:ellipses"}](ellipses-uncorrected.pdf "fig:"){width="8cm"}

![](ellipses-corrected.pdf "fig:"){width="8cm"}

To use the learned model online, a new raw observation $(x_{L}, x_{R}, y)$ is converted to corrected pixels $(x_L^c, x_R^c, y^c)$ by building the associated feature vector and multiplying it by $\hat{\bbbeta}$. The robot then triangulates the relative location of the target using the corrected pixels, propagates $Q$ via the Jacobian, rotates the covariances, and finally translates the estimates to global coordinates. Fig. \[fig:ellipses\] compares the projections of the Kalman Filtered 95% confidence ellipses onto the $X$-$Y$ (ground) plane using the raw/uncorrected and the corrected pixel observations, on data acquired during the experimental trials. To generate the plot in the top panel, which corresponds to the result when the raw pixels are used, we computed the empirical covariance of the raw residual errors $\bbepsilon_\ell^\text{uc}$ for $\ell = 1, \dots, n$.
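The correction-and-triangulation pipeline described above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' code: the feature layout follows Eq. \[eq:model\], while `triangulate` assumes a hypothetical rectified pinhole stereo rig with focal length `f` and baseline `b` standing in for the paper's calibrated camera model.

```python
import numpy as np

def features(xL, xR, y):
    """Feature vector of the correction model:
    [1, y, d, xL + xR, y*d, (xL + xR)/d], with disparity d = xL - xR."""
    d, s = xL - xR, xL + xR
    return np.array([1.0, y, d, s, y * d, s / d])

def fit_correction(raw, truth):
    """OLS fit of the 6 x 3 coefficient matrix:
    beta_hat = (X^T X)^{-1} X^T Y, computed via a numerically stable solver."""
    X = np.array([features(*t) for t in raw])
    beta_hat, *_ = np.linalg.lstsq(X, np.asarray(truth), rcond=None)
    return beta_hat

def correct_pixels(xL, xR, y, beta_hat):
    """Online use: map a new raw observation to corrected pixels."""
    return features(xL, xR, y) @ beta_hat

def triangulate(xLc, xRc, yc, f=500.0, b=0.1):
    """Back-projection for a rectified stereo pair (f and b are hypothetical
    calibration values): Z = f*b/disparity, X = xL*Z/f, Y = y*Z/f."""
    Z = f * b / (xLc - xRc)
    return np.array([xLc * Z / f, yc * Z / f, Z])
```

The corrected pixels then replace the raw ones in the covariance pipeline described above (propagating $Q$ through the Jacobian, rotating, and translating to global coordinates).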
Results
-------

![ Plotting the trajectory of the robot along with the locations of the three static targets for one of the thirty experiments using the supremum objective. The blue line represents the position and the $\square$’s are the locations from where an image was taken. The orientation of the robot (projected onto the plane) is represented at each imaging location by a set of orthogonal axes. Scatter plots of the filtered error and the trace of the filtered error covariance in all targets for all thirty experiments are also shown. Each target is plotted using a unique color corresponding to the $\times$. In the scatter plots, colors correspond to (a), and a line is drawn to guide the eye through the means for each target in the experiment. The horizontal axes in these plots are the number of images taken. []{data-label="fig:exptraj-sup"}](traj-nbv-sup.pdf "fig:"){width="0.8\columnwidth"} (a)

![](scat-err-sup.pdf "fig:"){width="0.8\columnwidth"} (b)

![](scat-trace-sup.pdf "fig:"){width="0.8\columnwidth"} (c)

![ Identical plots to Fig. \[fig:exptraj-sup\]; however, the data reported correspond to the centroid objective experiments. []{data-label="fig:exptraj-cen"}](traj-nbv-sup.pdf "fig:"){width="0.8\columnwidth"} (a)

![](scat-err-sup.pdf "fig:"){width="0.8\columnwidth"} (b)

![](scat-trace-sup.pdf "fig:"){width="0.8\columnwidth"} (c)

We conducted sixty static localization experiments in total – thirty using the supremum objective and thirty using the centroid objective. Figures \[fig:exptraj-sup\] and \[fig:exptraj-cen\] (a) show the paths followed by the robot during the experimental trials using the setup shown in Figure \[fig:above\]. Figures \[fig:exptraj-sup\] and \[fig:exptraj-cen\] (b) show scatter plots of the errors from all thirty experiments using each control objective. Each point in these plots represents the Euclidean distance between the filtered estimates and the ground truth locations of the ping pong balls provided by the motion capture system. In each experiment we collected fifteen images, and in each iteration three targets were observed. Accordingly, the plots have fifteen bands, each with thirty points in total, representing the filtered error in a particular target for a particular experiment. The mean error for each target across all thirty experiments is drawn on the plot to guide the eye through the scatter plots. Figures \[fig:exptraj-sup\] and \[fig:exptraj-cen\] (b) reveal the presence of outliers in the localization. Note from the figures that the KF still converges to ground truth. We can also see that the overall spread of the bands in the scatter plots decreases, reflecting that the control objective is indeed minimized. On average, the error in each target was reduced by about half, which is less of a reduction than what was observed in simulation. One reason for this, aside from the presence of unmodeled noise, is that our lab has only about four square meters of usable area, so the diversity of viewpoints is not as rich as in the simulations.
Figures \[fig:exptraj-sup\] and \[fig:exptraj-cen\] (c) show the trace of the filtered error covariance for the same data that were used to plot Figures \[fig:exptraj-sup\] and \[fig:exptraj-cen\] (b). The points in the scatter plots reflect the posterior variance of each target for each experiment, and again the mean over the thirty experiments using each control objective is drawn on the plot to guide the eye.

Comparison to existing heuristic methods that use a discrete pose space
-----------------------------------------------------------------------

-------------------------------------------------------- ---------------------------------------------------------
![image](traj-square-exp.pdf){width="0.6\columnwidth"} ![image](traj-equilat-exp.pdf){width="0.6\columnwidth"}
(a) (b)
![image](traj-nbv-exp.pdf){width="0.6\columnwidth"} ![image](traj-nbv-heu-exp.pdf){width="0.6\columnwidth"}
(c) (d)
-------------------------------------------------------- ---------------------------------------------------------

We conducted static target localization experiments to compare the localization performance of our NBV method to a heuristic method that employs a discretization of the pose space, similar to the approaches discussed in [@dunn_iros09; @wenhardt07; @Wenhardt06]. Specifically, the heuristic we implement is based on discretizing the stereo rig’s pose space, calculating the objective value of Problem \[problem\] at all possible next poses, and choosing the one with the minimum objective value. This approach is in line with every stereo camera-based approach that we are aware of, in that they all (except [@ponda09], which we discussed in the introduction) select the next view from a discrete set. Note that we cannot fairly compare the localization accuracy of a single camera to a stereo rig (the rig always wins), nor a stereo rig to a LiDAR system (the LiDAR always wins, assuming that data associations can be established).
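The grid heuristic is straightforward to state in code. The sketch below is a schematic stand-in, not the paper's implementation: `posterior_trace` is a hypothetical scalar-noise Kalman update whose measurement noise grows with range, replacing the full quantization-derived covariance the paper propagates; only the outer argmin-over-grid structure mirrors the heuristic.

```python
import numpy as np

def posterior_trace(pose, target, prior_cov):
    """Toy surrogate for the filtered covariance trace after one update:
    a Kalman update P - P (P + R I)^{-1} P whose (hypothetical) measurement
    noise R grows with the squared range to the target."""
    r2 = float(np.sum((np.asarray(pose) - np.asarray(target)) ** 2))
    R = 1e-3 * (1.0 + r2)
    P = np.asarray(prior_cov)
    post = P - P @ np.linalg.inv(P + R * np.eye(P.shape[0])) @ P
    return np.trace(post)

def best_grid_pose(grid, targets, priors):
    """The discrete heuristic: evaluate the supremum objective (the worst
    target's posterior trace) at every candidate grid pose, take the argmin."""
    def objective(pose):
        return max(posterior_trace(pose, t, P) for t, P in zip(targets, priors))
    return min(grid, key=objective)
```

Because the candidate set is fixed in advance, the heuristic's behavior depends entirely on the grid geometry, which is exactly the sensitivity the comparison below exposes.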
In our experiments we focus on the supremum objective, so that the objective value calculated by the heuristic method is the trace of the filtered covariance matrix of the worst localized target. We tested two different ways of discretizing the pose space, namely a square grid and a triangular grid, as shown in Figures \[fig:exptraj-sup-comp\] (a)-(b). In all experiments the robot started from the same pose. Moreover, we required that both our NBV method and the heuristic travel approximately the same distance and take the same number of images. In this way, different trajectories can be compared in terms of their ability to localize the targets. This requirement also specifies the edge length for the square and triangular grids. In this experiment, we set the total number of images that each method can take to ten, and the edges of the square and the equilateral triangle cells were both set to 0.25m. At each node of the grid, the stereo rig is oriented towards the estimated position of the worst localized target; this is also the behavior achieved by the NBV method with the supremum objective. We ran each method twenty times and present our results below.

![ Filtered localization error (a) and the trace of the error covariance (b) of the green target from Fig. \[fig:exptraj-sup-comp\], averaged over twenty trials for each one of the three methods. Red corresponds to the proposed NBV method, magenta to the heuristic method for the square grid, and green to the triangular grid heuristic. []{data-label="fig:exptraj-sup-comp-results"}](err-nbv-exp-scatter-t1.pdf "fig:"){width="0.7\columnwidth"} (a)

![](trace-nbv-exp-scatter-t1.pdf "fig:"){width="0.7\columnwidth"} (b)

Figure \[fig:exptraj-sup-comp\] (a)-(c) shows sample paths followed by the robot using the heuristic method on the two grids and the NBV approach, respectively, for one of the twenty trials. To take ten images, the heuristic method that uses the square grid travels an average distance of 2.3058m, the heuristic method that uses the triangular grid travels 2.2198m on average, and our proposed method travels 2.1949m on average. We note that the heuristic method is highly sensitive to the grid size and to the error covariance matrices during the measuring process, for both types of grids. Specifically, during twenty trials of experiments with a grid size of 0.25m, the heuristic generated trajectories that contain small cycles (back-and-forth motion among a few cells) for both grid types; see, e.g., Figure \[fig:exptraj-sup-comp\] (a) and (d).
In fact, we were unable to find a grid size that does not generate such motion artifacts for the square grid heuristic; in every single trial and for every grid size we observed behavior similar to the one shown in Figure \[fig:exptraj-sup-comp\] (a). On the other hand, after much trial and error we found that, for the particular setup of targets in our lab, a 0.25m grid size can produce reasonable trajectories for the triangular grid heuristic, as shown in Figure \[fig:exptraj-sup-comp\] (b). Nevertheless, this behavior was not consistent, as seen in Figure \[fig:exptraj-sup-comp\] (d) for the same grid size. Our continuous-space NBV controller, shown in Figure \[fig:exptraj-sup-comp\] (c), selects the next pose in a continuous pose space and automatically balances varying the viewing angle against approaching the targets. Figures \[fig:exptraj-sup-comp-results\] (a)-(b) demonstrate the localization performance of the NBV method compared to the heuristic method. Figure \[fig:exptraj-sup-comp-results\] (a) shows the filtered localization error during each one of the ten iterations (after each image was taken), averaged over the twenty trials. The localization error of the heuristic method for the square grid (magenta line) eventually diverges, because the measurement bias that results from repeatedly observing the targets from the same position causes the KF to diverge. A similar behavior was also observed for the straight baseline in the simulations; see Section \[sec:subsec\_stat\_target\_local\]. When the grid edge length is chosen as 0.25m, the heuristic method for the triangular grid (green line) achieves localization error similar to our NBV method (red line). Figure \[fig:exptraj-sup-comp-results\] (b) shows the trace of the filtered error covariance for the heuristic and the NBV methods, averaged over the twenty trials. In this case, the NBV method outperforms the heuristic for both grid types.
The heuristic confined to the square grid performs extremely poorly because it does not approach the targets; the triangular grid heuristic, while slightly better, still does not perform as well as the proposed continuous-space method. Finally, note that the grid size of 0.25m was selected only after laborious tuning to remove such artifacts, suggesting that our continuous method will perform better than the discrete pose space alternatives in general situations.

Conclusions {#sec_conclusions}
===========

In this paper, we addressed the multi-target, single-sensor localization problem employing the most realistic sensor model in the literature. Our approach relies on a novel control decomposition into the relative camera frame and the global frame. In the relative frame, we modeled quantization noise directly and did not operate under a Gaussian noise assumption at the pixel level (as range/bearing models do). Our approach avoids hand-tuning the measurement covariance by deriving $\Sigma$ from the uniform pixelation distribution. This allows us to obtain the Next Best View from which the targets can be observed in order to minimize their localization uncertainty. We obtain this NBV using gradient descent on appropriately defined potentials, without sampling the pose space or having to select from a set of previously recorded image pairs. Compared to previous gradient-based approaches, our integrated hybrid system is more precise since it derives the Gaussian parameters from the quantization noise in the images. Furthermore, our approach does not assume omnidirectional sensors, but instead imposes field-of-view constraints.

[^1]: Charles Freundlich, Yan Zhang, and Michael M. Zavlanos are with the Dept. of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA [{charles.freundlich, yz227, michael.zavlanos}@duke.edu]{}. Philippos Mordohai is with the Dept. of Computer Science, Stevens Institute of Technology, Hoboken, NJ 07030, USA [[email protected]]{}.
Alex Zihao Zhu is with the Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA [[email protected]]{}. This work is supported in part by the National Science Foundation under awards No. CNS-1302284, IIS-1217797, and IIS-1637761. Preliminary versions of this work can be found in [@freundlich13icra; @freundlich13cdc]. [^2]: Recall that we approximate the uniform pixelation noise as Gaussian, hence the approximate nature of $Q$.
---
abstract: 'Many socio-economic phenomena are characterized by the appearance of a few “hit” products with substantially higher popularity than their often equivalent competitors, reflected in a bimodal distribution of response (success). Using the example of the box-office performance of movies, we show that the empirically observed bimodality can emerge via self-organization in a model where agents (theaters) independently decide whether to adapt a new movie. The response exhibits extreme variability even in the absence of learning or communication between agents, and suggests that properly timing the release is a key determinant of box-office success.'
author:
- 'Anindya S. Chakrabarti$^1$ and Sitabhra Sinha$^2$'
title: 'Self-organized coordination in collective response of non-interacting agents: Emergence of bimodality in box-office success'
---

Complex systems often exhibit non-trivial patterns in the collective (macro) behavior arising from the individual (micro) actions of many agents [@Castellano09]. Despite the high degree of variability in the characteristics of the individuals comprising a group, it is sometimes possible to observe robust empirical regularities in the system properties [@Neda00; @Challet00; @Watts07]. The existence of inequality in individual success, often measured by wealth or popularity, is one such universal feature [@Sinha11]. While agents differ in terms of individual attributes, these can only partly explain the degree of this inequality [@Salganik06]. The outcomes often have a heavy-tailed distribution with a much higher range of variability than that observed in the intrinsic qualities. Apart from the well-known Pareto law for income (or wealth) [@Pareto; @Sinha06], other examples include distributions of popularity for books [@Sornette04], electoral candidates [@Fortunato07], online content [@Ratkiewicz10] and scientific paradigms [@Bornholdt11].
Another form of inequality may be observed in distributions of outcomes having a strongly [*bimodal*]{} character. Here, events are clearly segregated into two distinct classes, e.g., corresponding to successes and failures respectively. While such distributions have been reported in many different contexts, e.g., gene expression [@Kaern05], species abundance [@Collins91], wealth of nations [@Paap98], electoral outcomes [@Mayhew74], etc., one of the most robust demonstrations of bimodality is seen in the distribution of movie box-office success [@Pan10]. Here, success is measured in terms of either the gross income $G_0$ at the opening weekend or the total gross $G_T$ calculated over the lifetime of a movie at theaters (i.e., the entire duration for which it is shown). Fig. \[fig1\] (a-b) shows that both of these distributions, constructed from publicly available data for movies released in the USA during the period 1997-2012 [@note], are described well by a mixture of two log-normal distributions. Although the movie industry has changed considerably during this time, the characteristic properties of the distributions appear to remain invariant over the successive intervals comprising the period. The log-normal character can be explained by the probability of movie success being a product of many independent chance factors [@Shockley57], and is indeed observed in the unimodal distribution of the opening income per theater $g_0$ \[Fig. \[fig1\] (c)\]. However, the clear distinction of movies into two classes in terms of their box-office performance (as indicated by the occurrence of two modes in the $G_0$ and $G_T$ distributions) does not appear to be simply related to their intrinsic attributes [@note; @DeVany04].
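Fitting a two-component log-normal mixture of the kind used for Fig. \[fig1\] (a-b) reduces to fitting a two-component Gaussian mixture in $\log G$. A minimal EM sketch (the quantile-based initialization and iteration count are illustrative choices, not the procedure used in the paper) is:

```python
import numpy as np

def fit_lognormal_mixture(x, iters=200):
    """Minimal EM for a two-component log-normal mixture: work with
    z = log(x), which is a two-component Gaussian mixture."""
    z = np.log(np.asarray(x))
    mu = np.quantile(z, [0.25, 0.75])          # spread-out initial means
    sig = np.array([z.std(), z.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        pdf = (w / (sig * np.sqrt(2 * np.pi)) *
               np.exp(-(z[:, None] - mu) ** 2 / (2 * sig ** 2)))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the weights, means and widths
        n = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (z[:, None] - mu) ** 2).sum(axis=0) / n)
        w = n / len(z)
    return w, mu, sig
```

On clearly bimodal data the recovered means locate the two modes of the income distribution on a logarithmic axis.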
The fact that bimodality is manifested at the very beginning of a movie’s life also suggests that the extreme divergence of outcomes cannot be attributed to social learning occurring over time as a result of diffusion of information about movie quality [@Moretti11] (e.g., by word-of-mouth [@Liu06]). Thus, although there have been theoretical attempts to explain the emergence of bimodality through interactions between agents [@Watts02], we need to look for an alternative explanatory framework. In this paper, we present a model for understanding the collective response of a system of agents to successive external shocks, where the behavior of each agent is the result of a decision process independent of other agents. Even in the absence of explicit interaction among agents, the system can exhibit remarkable coordination, characterized by the appearance of a strong bimodality in its response. For the specific example of box-office success, as the bimodal nature of the gross income distributions appears to be connected to the fact that movies usually open in either many or very few theaters, we focus on explaining the appearance of a bimodal distribution for the number of theaters $N_0$ in which movies open \[Fig. \[fig1\] (d)\]. Inspired by recent models that reproduce the observed invariant properties of financial markets by considering agents that interact only indirectly through their response to a common signal (price) [@Vikram11], our model comprises agents (theaters) that do not explicitly interact with each other but whose actions achieve coherence through the regular arrival of a global stimulus, viz., new movies being introduced in the market. By contrast, decoherence is induced by the uncertainty under which a decision is made on releasing a new movie.
We show that these competing effects can result in the appearance of bimodality in the distributions of $N_0$, and consequently of $G_0$ and $G_T$, where the success of a particular movie can be simply connected neither to its perceived quality prior to release nor to its actual performance on opening. Under a suitable approximation, we have analytically solved the model and obtained closed-form expressions for the peaks of the resulting multimodal distribution that match our numerical results. An important implication of our study is that the box-office performance of a movie is crucially dependent on whether it is released close in time to a highly successful one, which supports the popular wisdom that correctly timing the opening of a movie determines its fate at the box office.

![(color online). Empirical demonstration of bimodality in movie popularity measured in terms of (a) the opening income $G_0$ and (b) the total lifetime income $G_T$ of movies in theaters over successive intervals from 1997-2012 (indicated by different symbols). The data are fit by a superposition of two log-normal distributions (broken curve). The cumulative distribution of the opening income per theater $g_0 = G_0/N_0$ over the same period is shown in (c). A fit with a log-normal distribution is also indicated (broken curve). (d) The bimodal character of (a) and (b) can be connected to the bimodality observed in the distribution of the number of opening theaters $N_0$ (i.e., the total number of theaters in which a new movie is released). The inset shows the distribution of exponents $\beta$ characterizing the power-law decay of the weekly income per theater ($g_t \sim g_0~t^{\beta}$) for all movies. []{data-label="fig1"}](Fig1_ver2.eps){width="0.99\linewidth"}

![(color online). (a) Schematic diagram of the stochastic decision process of agents (theaters $i$, $j$ and $k$) who can either continue with “old” (the movie being shown) or switch to “new” (the movie up for release) at any time instant $t$.
The probability that an agent $i$ will adapt the new movie, $p_{i,t}$, depends on a comparison of the perceived performance of that movie, $\theta_t$, to the actual performance of the movie being shown (which is related to its opening income $g_{0,i}$). (b) Time-evolution of a system comprising $N = 50$ agents (theaters), the state of each agent at any time being the movie (colored according to the time of release) that it is showing. At every time instant, a new movie is available for release. The variable performance of these movies is indicated in terms of the number of theaters where they open ($N_0$) and their opening income ($G_0$). []{data-label="fig2"}](Fig2.eps){width="0.99\linewidth"}

We consider a system comprising $N$ agents (theaters) subjected to external stimuli (the entry of new movies into the market), which have to choose a response, i.e., whether or not to adapt a new movie, displacing the one being shown. At any time instant $t$, this decision depends on a comparison between the perceived performance of the new movie and the actual performance of the movie being shown at the theater \[Fig. \[fig2\] (a)\]. For simplicity, we assume that a single new movie is up for release at each time instant $t$, thus allowing each movie to be identified by the corresponding value of $t$. The state of a theater at any time is indicated by the identity of the movie it screens at that time \[Fig. \[fig2\] (b)\]. The performance of a movie $t^{\prime}$ at time $t$ can be measured by the income per theater, $g^t$, which is related to its opening value $g_0^{t^{\prime}}$ by a scaling relation $g^t = g_0^{t^{\prime}} (t - t^{\prime})^{\beta_s}$. This relation is partly inspired by the empirical observation \[Fig. \[fig1\] (d), inset\] that the weekly income per theater for a movie decays as a power-law function of the number of weeks after its release, with exponent $\beta$ [@Pan10].
One can also interpret $\beta_s$ as a subjective discount factor employed by the agents to estimate the future income of a movie based on its present income. Although we set $\beta_s = 0$ for simplicity in most results reported here, we have explicitly verified that qualitatively similar results are observed for other values of $\beta_s$, including $\beta_s=-1$. As agents are exposed to similar information about a movie that is up for release, they can have a common perception about its performance, measured as its predicted opening income per theater, $\theta_t$. If agents had perfect foresight, this would be identical to the actual opening income of the movie $g_0^t$. However, in general, the prediction need not be accurate. In fact, the qualitative behavior of the model is unchanged if $\theta_t$ is chosen randomly, uncorrelated with $g_0^t$. At any time $t$, an agent $i$ switches to the new movie if it decides that this move will result in a sufficient net gain $z_{t} (i) = \theta_t - g^{t} (i)$. This decision is implemented here by representing the probability of adapting the new movie as a hyperbolic response function [@Real77]: $$p_{i,t} = \frac{z_{t}(i)}{C+z_{t}(i)},~{\rm for}~z_{t} (i) \geq 0,~{\rm else}~ p_{i,t} = 0, \label{eq:func1}$$ where the parameter $C$ is an adaption cost incurred for switching to a new movie. We have verified that introducing more complicated functional forms for the adaption rule, e.g., ones having a sigmoidal character, does not qualitatively change the results reported here. Eq. \[eq:func1\] allows us to calculate the number of opening theaters $N_0$ for every new movie \[Fig. \[fig2\] (b)\]. To obtain the opening income $G_0$ of the movie over all theaters that release it, $N_0$ is multiplied by the opening income per theater, which is chosen from the log-normal distribution of $g_0$ referred to earlier \[Fig. \[fig1\] (c)\]. The subsequent decay of income per theater follows the empirical scaling relation with exponent $\beta$ [@Pan10].
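A minimal simulation of these dynamics (for $\beta_s = 0$, so that a theater's current income per theater equals the opening value of the movie it shows) can be sketched as follows; the log-normal parameters for $\theta_t$ and $g_0$ are illustrative assumptions, not the empirical fits:

```python
import numpy as np

def simulate(N=3000, T=500, C=1e-4, seed=0):
    """Sketch of the adaption dynamics with beta_s = 0: at each time step a
    new movie with common perceived performance theta arrives, and theater i
    switches with probability z/(C + z), z = theta - g(i) (the hyperbolic
    rule above).  Returns the number of opening theaters N_0 per movie."""
    rng = np.random.default_rng(seed)
    g = np.zeros(N)                          # income/theater of movie currently shown
    N0 = []
    for _ in range(T):
        theta = rng.lognormal(0.0, 1.0)      # perceived opening income/theater
        z = theta - g
        p = np.where(z >= 0.0, z / (C + z), 0.0)
        switch = rng.random(N) < p
        g[switch] = rng.lognormal(0.0, 1.0)  # actual opening income/theater
        N0.append(int(switch.sum()))
    return np.array(N0)
```

For small $C$ the histogram of the returned $N_0$ values is strongly bimodal: almost every movie opens in either nearly all or nearly no theaters, in line with Fig. \[fig3\] (a).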
The total lifetime income of a movie $G_T$ is obtained by aggregating this income for all theaters it is shown in, over the entire lifespan (i.e., from the time it is released until it is displaced from all theaters). While $\beta = -1$ for the results reported here, we have verified that our observations do not vary considerably when $\beta$ is distributed over a range. For most simulations, we have chosen $N = 3000$, which accords with the maximum number of theaters in the empirical data [@note]. However, to verify that our results are not sensitive to the system size, we have checked that qualitatively similar behavior is observed for $N$ up to $10^6$.

![(color online). A bimodal distribution emerges from the independent decisions of $N$ agents (theaters). The transition between bimodality and unimodality with parametric variation of the adaption cost $C$ is shown for the distributions of (a) the number of opening theaters $N_0$, (b) the opening income $G_0$ and (c) the total lifetime income $G_T$ of movies. The results are obtained by averaging over many realizations with $N = 3000$ agents. (d) The total income $G_T$ earned by a movie as a function of its lifetime $T$, i.e., the duration of its run at theaters, shows that for higher values of $T$ the movies separate into two classes ($C=10^{-4}$). []{data-label="fig3"}](Fig3.eps){width="0.99\linewidth"}

As seen from Fig. \[fig3\], the system of $N$ independent agents self-organizes in the limit of low $C$ to generate a bimodal distribution in their collective response. A new movie is adapted either by a majority \[corresponding to the upper mode of the $N_0$ distribution shown in Fig. \[fig3\] (a)\] or by a small fraction \[lower mode\] of the total number of theaters. This translates into bimodal distributions in the opening income $G_0$ and the total lifetime income $G_T$ \[Fig. \[fig3\] (b-c)\], which qualitatively resemble the corresponding empirically obtained distributions (Fig. \[fig1\]).
Thus, our results suggest that the nature of box-office income distributions for movies can be understood as an outcome of the bimodal character of the distribution for the number of theaters that release a movie coupled with the unimodal log-normal distribution for the income per theater. As the adaption cost $C$ is increased, the two modes approach each other until, at a large enough value of $C$, a transition to unimodal distribution for the quantities is observed. With increasing $C$, theaters are less likely to switch to a new movie, so that the time-interval between two consecutive movie releases at a theater becomes extremely long. This weakens temporal correlations between the performance of movies being shown and that expected from new movies up for release. Thus, the decision to release each new movie eventually becomes an independent stochastic event described by an unimodal distribution. To emphasize that bimodality in total income $G_T$ is a consequence of the bimodal nature of the opening income, we show $G_T$ as a function of the lifetime $T$ in Fig. \[fig3\] (d). We observe a bifurcation in $G_T$ at higher values of $T$ indicating that movies having the same lifetime can have very different total income, a feature previously seen in empirical data [@Sinha04]. ![(color online). Explaining the emergence of bimodal distribution in the limit of small adaption cost ($C \rightarrow 0$). The appearance of bimodality with parametric variation of the probability of adaptation $p$ is shown for the distributions of (a) the number of opening theaters $N_0$ and (b) opening income $G_0$. As $p \rightarrow 1$, the approximation to the $C \rightarrow 0$ limit becomes more accurate. The results are obtained by averaging over many realizations with $N = 3000$ agents. The pair of thick lines in each figure indicate the theoretically predicted modes of the distributions (see text). 
(c-e) The variations of opening income $G_0$ and total lifetime income $G_T$ of a movie as functions of the perceived performance $\theta$ and the actual performance (i.e., income per theater) $g_0$ show that neither $\theta$ nor $g_0$ completely determines $G_0$ or $G_T$ ($p = 0.9995$). []{data-label="fig4"}](Fig4.eps){width="0.99\linewidth"} To understand the appearance of multiple peaks in the distribution of collective response in the limit of low adaption cost, we observe that the system dynamics is characterized by two competing effects: (a) the stochastic decision process of the individual theaters tends to increasingly decorrelate their states, while (b) the occasional appearance of movies having high $\theta$, which are perceived by the agents to be potential box-office successes, induces a high level of coordination in response, as a majority of agents switches to a common state. This phenomenon of gradual divergence in agent states, interrupted by sporadic “reset” events that largely synchronize the system, allows us to use the following simplification of the model for an analytical explanation. As $C \rightarrow 0$, we can approximate Eq. (\[eq:func1\]) by $p_{i,t} = p$ for $z_t (i) \geq 0$, else $p_{i,t} = 0$, which becomes accurate in the limit $p\rightarrow 1$. Thus, when a reset event occurs, the decision of each agent is a Bernoulli trial with probability $p$, so that the number of theaters that adapt the new movie follows a binomial distribution with mean $Np$ and variance $Np(1-p)$. In the limit $p \rightarrow 1$ the variance becomes negligibly small and the distribution can be effectively replaced by its mean. This corresponds to a peak at $N_0^u = Np$, i.e., the higher mode. A movie that immediately follows a reset event can elicit different responses from the agents depending on the value of $\theta$ associated with it. If this is larger than $g^t$ of all theaters, it is yet another reset event, the response to which is the same as above. 
However, if $\theta$ has a lower value that is nevertheless large enough to cause those theaters ($\simeq N(1-p)$) which had not switched in the previous reset event to adapt the new movie with probability $p$, we obtain another peak at $N_0^l = Np(1-p)$. This corresponds to the lower mode of the distribution. As seen from Fig. \[fig4\] (a), the two peaks of the $N_0$ distribution are accurately reproduced by $N_0^u$ and $N_0^l$. In principle, the above argument can be extended to show that a series of peaks at successively smaller values of $N_0$ can exist at $Np(1-p)^2$, $Np(1-p)^3$, etc., but these will not be observed for the system size we consider here. The bimodal log-normal distribution of opening income $G_0$ results from a convolution of the multi-peaked distribution for $N_0$ with the log-normal distribution for $g_0$ (having parameters $\mu, \sigma$). The two modes of this distribution are calculated as $G_0^{u,l} = \exp({\mu + \log N_0^{u,l}})$, which agree remarkably well with the numerical simulations of the model \[Fig. \[fig4\] (b)\]. While the individual behavior of agents is obviously dependent on the intrinsic properties (such as $\theta$) associated with specific stimuli, the collective behavior of the system cannot be reduced to a simple threshold-like response to external signals. Fig. \[fig4\] (c) shows that the opening incomes of different movies, which are segregated into two distinct clusters, are not simply determined by their perceived performance $\theta$, as one can find movies belonging to either cluster for any value of this quantity. Given that $\theta$ is only a prediction of the opening performance of a movie by the agents, and it need not coincide with reality, one may argue that the actual performance, i.e., the opening income per theater $g_0$, will be the key factor determining the aggregate income of the movie. However, Fig. 
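The two-peak argument can be checked numerically under the simplified constant-$p$ rule. The sketch below is our own illustration; it deliberately uses a small $p$ (rather than the $p \rightarrow 1$ limit of the text) so that the two modes $N_0^u = Np$ and $N_0^l = Np(1-p)$ are clearly separated.

```python
import random

rng = random.Random(42)
N, p = 3000, 0.9        # small p exaggerates the gap between the two modes

def bernoulli_count(candidates, p, rng):
    """Number of successes among `candidates` independent Bernoulli(p) trials."""
    return sum(rng.random() < p for _ in range(candidates))

# reset event: every theater is an independent Bernoulli(p) trial
n_reset = bernoulli_count(N, p, rng)             # concentrates near N*p
# the next movie is adapted only by the ~N*(1-p) holdout theaters
n_follow = bernoulli_count(N - n_reset, p, rng)  # near N*p*(1-p)
```

Repeating this experiment over many realizations reproduces the upper and lower modes of the $N_0$ distribution.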
\[fig4\] (d-e) show that neither the opening income nor the total lifetime income (both of which show clear separation into two clusters) can be explained as a simple function of the actual opening performance of the movie at a theater. Our results explain box-office success as an outcome of competition between movies, where a new movie seeks to open at as many theaters as possible by displacing the older ones. Using an ecological analogy, a movie with high perceived performance invades and occupies a large number of niches until it is displaced later by a strong competitor. Thus, highly successful movies rarely coexist. This also implies that the response to a movie can be very different depending on whether or not it is released close to a reset event, i.e., the appearance of a highly successful movie (“blockbuster”). Therefore, our model provides explicit theoretical support to popular wisdom that timing the release of a movie correctly is a key determinant of its success at the box-office [@Krider98]. The critical importance of the launch time holds not only for movies, but also for many other short life-cycle products such as music, videogames, etc., whose opening revenues very often decide their eventual sales [@Friedman04]. In fact, empirical data on movies show that for the dominant majority, the highest gross earning over all theaters they are shown in occurs on the opening weekend, followed by an exponential decay in income [@Pan10; @note2]. To conclude, we have shown that extreme variability in response, characterized by a bimodal distribution, can arise in a system even in the absence of explicit interactions between its components. The observed inequality of outcomes cannot be explained solely on the basis of intrinsic variations in the signals driving the system. For a quantitative validation of the model we have used the explicit example of movie box-office performance whose bimodal distribution has been established empirically. 
Our analysis reveals that stochastic decisions made on the basis of comparing the effects of the preceding choice and the estimated impact of the upcoming movie give rise to a surprising degree of coordination. The presence of bimodality in the absence of explicit interactions in several social and biological systems suggests other possible applications of the theoretical approach presented here. Apart from bimodality, our model shows that more general multimodal distributions are possible in principle, and empirical verification of this in natural and social systems will be an exciting development. We thank Alex Hansen, Gautam I. Menon, Shakti N. Menon and Rajeev Singh for helpful discussions. This work was supported in part by the IMSc Econophysics Project. C. Castellano, S. Fortunato and V. Loreto, Rev. Mod. Phys. [**81**]{}, 591 (2009). Z. Néda, E. Ravasz, Y. Brechet, T. Vicsek and A.-L. Barabási, Nature (Lond.) [**403**]{}, 849 (2000). D. Challet, M. Marsili, and R. Zecchina, Phys. Rev. Lett. [**84**]{}, 1824 (2000). D. J. Watts, Nature (Lond.) [**445**]{}, 489 (2007). S. Sinha, A. Chatterjee, A. Chakraborti and B. K. Chakrabarti, [*Econophysics: An Introduction*]{} (Wiley-VCH, Weinheim, 2011). M. J. Salganik, P. S. Dodds and D. J. Watts, Science [**311**]{}, 854 (2006). V. M. Yakovenko and J. B. Rosser, Rev. Mod. Phys. [**81**]{}, 1703 (2009). A. Chatterjee, S. Sinha and B. K. Chakrabarti, Curr. Sci. [**92**]{}, 1383 (2007). D. Sornette, F. Deschâtres, T. Gilbert and Y. Ageon, Phys. Rev. Lett. [**93**]{}, 228701 (2004). S. Fortunato and C. Castellano, Phys. Rev. Lett. [**99**]{}, 138701 (2007). J. Ratkiewicz, S. Fortunato, A. Flammini, F. Menczer and A. Vespignani, Phys. Rev. Lett. [**105**]{}, 158701 (2010). S. Bornholdt, M. H. Jensen and K. Sneppen, Phys. Rev. Lett. [**106**]{}, 058701 (2011). M. Kaern, T. C. Elston, W. J. Blake and J. J. Collins, Nature Rev. Genet. [**6**]{}, 451 (2005). S. L. Collins and S. M. Glenn, Ecology [**72**]{}, 654 (1991); C. 
Hui, Community Ecology [**13**]{}, 30 (2012). R. Paap and H. K. van Dijk, Eur. Econ. Rev. [**42**]{}, 1269 (1998). D. R. Mayhew, Polity [**6**]{}, 295 (1974); S. Sinha and R. K. Pan in [*Econophysics and Sociophysics*]{}, eds. B. K. Chakrabarti [*et al.*]{} (Wiley-VCH, Weinheim, 2006), p. 417. R. K. Pan and S. Sinha, New J. Phys. [**12**]{}, 115004 (2010). See supplementary information. W. Shockley, Proc. IRE [**45**]{}, 279 (1957). A. De Vany, [*Hollywood Economics*]{} (Routledge, London, 2004). E. Moretti, Rev. Econ. Stud. [**78**]{}, 356 (2011). Y. Liu, J. Marketing [**70**]{}, 74 (2006). D. J. Watts, Proc. Natl. Acad. Sci. USA [**99**]{}, 5766 (2002). S. V. Vikram and S. Sinha, Phys. Rev. E [**83**]{}, 016101 (2011). L. A. Real, Am. Nat. [**111**]{}, 289 (1977). S. Sinha and S. Raghavendra, Eur. Phys. J. B [**42**]{}, 293 (2004). R. E. Krider and C. B. Weinberg. J. Marketing Res. [**35**]{}, 1 (1998); R. Song and V. Shankar, Working Paper (2012). R. G. Friedman, in [*The Movie Business Book*]{}, ed. J. E. Squire (3rd ed., Fireside, New York, 2004), p. 283. In extremely few cases does a movie become more successful over time with its income exhibiting an increasing trend, eventually reaching a peak before again declining exponentially. To explain such rare “sleeper hits” \[e.g., the movie [*My Big Fat Greek Wedding*]{} (2002) that achieved its highest gross around 20 weeks after its release\], one may need to consider explicit interactions between agents. 
[**SUPPLEMENTARY MATERIAL**]{}

  Variable    Distribution Type   $\alpha$   $\mu_1$   $\mu_2$   $\sigma_1$   $\sigma_2$
  ----------- ------------------- ---------- --------- --------- ------------ ------------
  $N_0$       Bimodal             0.61       2.91      7.84      1.72         0.29
  $G_0$       Bimodal             0.57       11.36     16.49     1.24         0.94
  $G_T$       Bimodal             0.54       13.16     17.55     1.80         1.05
  $g_0$       Unimodal                       8.72                1.02
  $N_{max}$   Bimodal             0.61       4.01      7.83      1.71         0.27

  : Values of log-normal distribution parameters for different aggregate variables in the empirical data estimated by the maximum likelihood procedure.[]{data-label="table2"}

[**Data description.**]{} Income distributions are computed from publicly available data (obtained from http://www.the-movie-times.com) on the box-office performance of movies released in the United States of America over a span of 16 years (1997-2012). Gross income over all theaters within the USA is considered and the data are inflation-adjusted with respect to 2010 as the base year. To determine the time-invariance of the nature of the income distribution, the total time period has been divided into four intervals, viz., 1997-2000, 2001-2004, 2005-2008 and 2009-2012. The total number of movies for which opening weekend gross income $G_0$ data is available in each of these intervals is 673, 1240, 1444 and 1226, respectively, while total income $G_T$ (i.e., box-office receipts over the entire period that a movie was shown in theaters) is available for 1160, 1240, 1444 and 1226 movies in each of these intervals, respectively. Note that a movie is associated with the calendar year in which it was released in theaters within the USA. Time series of box-office income have been obtained from www.the-movie-times.com for a total of 4568 movies over the period July 1998 to July 2012. To obtain the opening weekend income per theater $g_0$, the gross opening income $G_0$ is divided by the number of theaters $N_0$ in which the movie is released in its opening week. 
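The bimodal fits reported in the table above are mixtures of two log-normal components. The density being fit can be written down directly; the sketch below (our own, with the $G_0$ parameters from the table) makes the mixture form explicit.

```python
import math

def lognorm_pdf(x, mu, sigma):
    """Density of a log-normal with parameters mu, sigma (of log x)."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def bimodal_lognorm_pdf(x, alpha, mu1, sigma1, mu2, sigma2):
    """Mixture of two log-normals with weights alpha and 1 - alpha."""
    return (alpha * lognorm_pdf(x, mu1, sigma1)
            + (1 - alpha) * lognorm_pdf(x, mu2, sigma2))

# maximum-likelihood parameters for the opening income G_0 (table above)
alpha, mu1, mu2, s1, s2 = 0.57, 11.36, 16.49, 1.24, 0.94
```

Since the component parameters are well separated ($\mu_2 - \mu_1$ is several $\sigma$), the mixture density has two distinct peaks, matching the empirical histograms.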
[**Estimation of parameters.**]{} The aggregate variables $N_0$, $G_0$ and $G_T$ are fit with bimodal lognormal distributions, i.e., a mixture of two lognormal distributions with parameters $\mu_1$, $\sigma_1$ and $\mu_2$, $\sigma_2$, that are weighted by factors $\alpha$ and $1-\alpha$ respectively. The unimodal distribution of opening income per theater, $g_0$, has been fit with a lognormal distribution having parameters $\mu$ and $\sigma$. The maximum likelihood estimates (MLE) of the parameters for the empirical distributions of $N_0$, $G_0$, $G_T$ and $g_0$ are shown in Table \[table2\]. Hartigan’s dip test for multimodality has been performed on the data for $N_0$, $G_0$ and $G_T$ and unimodality is rejected at the usual levels of significance. The time-series of movie income, $g_t$ has been fit to the general form $g_t \sim g_0 t^\beta$ by a regression procedure carried out over all movies that were shown in theaters for at least 5 weeks. ![(a) Distribution of the largest number of theaters $N_{max}$ that a movie is shown simultaneously in its entire lifetime, for all movies released over successive intervals in the period 1997-2012 (indicated by different symbols). The distribution shows a bimodal character similar to the distribution of the number of opening theaters $N_0$ \[Fig. \[fig1\] (d)\] and has been fit by a superposition of two log-normal distributions (broken curve). (b) The probability distribution of movie lifetime (i.e., the duration for which a movie is shown at theaters) with the cumulative distribution fit by a Weibull distribution (inset). (c) The distribution of production budget $B$ for movies released during 1995-2012 which shows an unimodal nature. The inset shows correlation of $B$ with total gross earned by a movie $G_T$. (d) Distribution of the box-office income per theater $g_t$ for movies at any week during their run at the theaters. 
It is unimodal and well described by a lognormal distribution, similar to the distribution of opening income per theater, $g_0$ \[Fig. \[fig1\] (c)\]. []{data-label="figs1"}](FigS1.eps){width="0.9\linewidth"} [**Robustness of empirical features**]{}. To see whether the qualitative features of the results of the empirical analysis are robust, we have also looked at variables other than $N_0$, $G_0$, $G_T$ and $g_0$. For example, if we consider, instead of the opening number of theaters $N_0$, the largest number of theaters $N_{max}$ in which a movie is shown simultaneously at any time following its release, its distribution also shows a bimodal nature and can be fit by a superposition of two log-normal distributions \[Fig. \[figs1\] (a)\]. Also, instead of considering only the opening income per theater $g_0$, we have looked at the distribution of income per theater of a movie at any given week following its release. Fig. \[figs1\] (d) shows that its distribution is qualitatively similar to $g_0$ and can be fit by a unimodal log-normal distribution. Note that the empirical features reported here have remained relatively invariant over a long time-horizon (1997-2012) during which the movie industry underwent many changes on the demand side \[viz., changes in taste and preference of movie viewers, emergence of online movie forums such as the Internet Movie DataBase (IMDB), and access to movies online via sites such as Youtube, Netflix, etc.\], as well as on the supply side (e.g., technological breakthroughs in special effects, using the internet for marketing movies, etc.). [**Possible alternative sources of the bimodal character of empirical distributions.**]{} To see whether the bimodality in $N_0$, $G_0$ and $G_T$ can arise from a similar disparity in movie lifetimes, we have obtained data on the duration that movies have been shown in theaters following their release for the period 2002-2012 from www.box-office-mojo.com. 
This distribution does not show bimodality \[Fig. \[figs1\] (b)\], indicating that the bimodal nature of the $G_T$ distribution is not a property arising from certain movies running for a significantly longer period than others. The cumulative distribution of lifetime fits a Weibull distribution \[shown in Fig. \[figs1\] (b), inset\]. The distribution of the production budget of movies has also been computed in order to consider the possibility that bimodality arises from the vastly different amounts of money spent in making the movies. Production budget data for 3453 movies released during 1995-2012 have been obtained from http://www.the-numbers.com. The corresponding distribution does not appear to show bimodality \[Fig. \[figs1\] (c)\], implying that budget may not be the relevant factor for explaining the origin of bimodality in the $N_0$, $G_0$ and $G_T$ distributions. The inset shows the correlation between the budget and the lifetime income of the movie at theaters (correlation coefficient $r = 0.629$). [**Details of model simulations.**]{} For most of our simulations, the total number of theaters (agents) has been assumed to be 3000, as it is close to the maximum number of theaters in which a movie is released, as per the empirical data \[Fig. \[figs1\] (a)\]. However, we have also carried out simulations with $N$ as large as $10^6$ to ensure that the results are not system-size dependent. The first few hundred time steps of each simulation realization were considered to be transients and removed to avoid initial-state-dependent effects. The results are averaged over many realizations in order to obtain the steady state distributions. As mentioned in the main text, the probability of release of each movie $t$ in theater $i$ depends crucially on the factor $z_t(i)=\theta_t-g^t(i)$. As $\theta_t$ is lognormally distributed with the $\mu$ and $\sigma$ of $g_0$ estimated from empirical data (see Table \[table2\]), we normalize $z_t(i)$ by the mean of the distribution, viz. 
$\exp(\mu+\sigma^2/2)$. In the simplest version of the model, $\theta_t$ can be considered to be uncorrelated with $g_0$, which implies that the agents are not able to correctly anticipate the future performance of a movie up for release. We have also considered $\beta_s = 0$ in the simplest version of the model, which means that the agents compare the expected opening income of the movie up for release to the known opening income of the movie currently being shown. [**Robustness of model results.**]{} To check the robustness of the model results, we have considered several variants of the basic model. For example, instead of comparing only the opening income of the movie currently being shown to the expected opening income of the movie up for release, in one of the variant models we consider, the agents compare the income over the next $m$ successive time instants in making their decisions ($m=1,2,3, \ldots$). This shows results qualitatively similar to the basic model. In the basic model, the predicted opening income and the actual opening income are assumed to be uncorrelated. We have also considered a variant model where the agents can make perfect predictions about the performance of a movie up for release, so that $\theta_t = g_0^t$. Results are qualitatively similar to the basic model and bimodality is seen over a range of values of the cost parameter $C$. Results are also qualitatively unchanged if the opening income in each theater is chosen from a distribution having mean $g_0$ and a small variance. Also, choosing different values of the exponent $\beta$ (that governs how the income per theater changes over time) from a distribution having a specific mean (e.g., $=-1$, as in the empirical data) yields qualitatively similar results. The model shows similar behavior with other types of probabilistic choice functions, e.g., $$p_{i,t}= \frac{1}{2} + \frac{z_t(i)}{2 \sqrt{C+z_t(i)^2}},$$ which has a sigmoid profile.
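The hyperbolic and sigmoid choice functions can be compared directly. A short sketch (function names are ours):

```python
import math

def p_hyperbolic(z, C):
    """Hyperbolic response of Eq. (1): zero for negative net gain z."""
    return z / (C + z) if z >= 0 else 0.0

def p_sigmoid(z, C):
    """Alternative sigmoid choice function given above."""
    return 0.5 + z / (2.0 * math.sqrt(C + z * z))
```

Both saturate at 1 for large positive net gain; the sigmoid variant differs in assigning probability $1/2$ at zero net gain and a small nonzero switching probability for slightly negative gains, yet it yields qualitatively similar model behavior.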
--- abstract: 'We propose a scheme for generating a weakly chordal graph from a randomly generated input graph, $G = (V, E)$. We reduce $G$ to a chordal graph $H$ by adding fill-edges, using the minimum vertex degree heuristic. Since $H$ is necessarily a weakly chordal graph, we use an algorithm for deleting edges from a weakly chordal graph that preserves the weak chordality property of $H$. The edges that are candidates for deletion are the fill-edges that were inserted into $G$. In order to delete a maximal number of fill-edges, we maintain these in a queue. A fill-edge is removed from the front of the queue, which we then try to delete from $H$. If this violates the weak chordality property of $H$, we reinsert this edge at the back of the queue. This loop continues until no more fill-edges can be removed from $H$. Operationally, we implement this by defining a deletion round as one in which the edge at the back of the queue is at the front. We stop when the size of the queue does not change over two successive deletion rounds and output $H$.' author: - | Sudiksha Khanduja\ School of Computer Science\ University of Windsor\ Windsor, Canada\ - | Aayushi Srivastava\ School of Computer Science\ University of Windsor\ Windsor, Canada\ - | Md. Zamilur Rahman\ School of Computer Science\ University of Windsor\ Windsor, Canada\ - | Asish Mukhopadhyay\ School of Computer Science\ University of Windsor\ Windsor, Canada\ title: Generating Weakly Chordal Graphs from Arbitrary Graphs --- Introduction ============ A graph $G = (V, E)$ is said to be weakly chordal if neither $G$ nor its complement, $\overline{G}$, has an induced chordless cycle on five or more vertices (a hole). Figure \[Fig-WCGExamples\] shows an example of a weakly chordal graph, $G$, and its complement, $\overline{G}$.\ Weakly chordal graphs were introduced by Hayward in [@DBLP:journals/jct/Hayward85] as a generalization of chordal graphs; he showed that these graphs form a subclass of the perfect graphs. 
An alternate definition that does not refer to the complement graph is that $G$ contains neither a hole nor an anti-hole, which is the complement of a hole. Berry et al. [@DBLP:journals/njc/BerryBH00] gave a very different and interesting definition of a weakly chordal graph as one in which every edge is $LB$-simplicial. They also proposed the open problem of generating a weakly chordal graph from an arbitrary graph. A solution to this problem is the subject of this paper.\ Early work on graph generation focused on creating catalogues of graphs of small sizes. Cameron et al. [@DBLP:journals/jgt/CameronCRW85], for instance, published a catalogue of all graphs on 10 vertices. The underlying motive was that such repositories were useful for providing counterexamples to old conjectures and for coming up with new ones. Subsequent focus shifted to generating graphs of arbitrary size, labeled and unlabeled, uniformly at random. As such generation methods involve solving a counting problem, research focused on classes of graphs for which the counting problem could be solved, yielding polynomial-time generation algorithms. Among these were graphs with prescribed degree sequences, regular graphs, and special classes of graphs such as outerplanar and maximal planar graphs. See [@Tinhofer1990] for a survey of work prior to 1990.\ As stated in [@DBLP:journals/corr/abs-1906-01056], there are many situations where we would like to generate instances of these graphs to test algorithms for weakly chordal graphs. For instance, in [@DBLP:journals/dmaa/MukhopadhyayRPG16] the authors generate all linear layouts of weakly chordal graphs. A generation mechanism can be used to obtain test instances for this algorithm. 
It can do the same for optimization algorithms, such as finding a maximum clique, maximum stable set, minimum clique cover, or minimum coloring, in both weighted and unweighted versions, for weakly chordal graphs, proposed in [@DBLP:journals/gc/HaywardHM89] and their improved versions in [@DBLP:journals/talg/HaywardSS07; @DBLP:journals/dam/SpinradS95].\ If the input instances for a given algorithm are from a uniform distribution, uniform random generation provides test instances for estimating the average run-time of the algorithm. When the distribution is unknown, the assumption of uniform distribution might still help. Otherwise, we might look upon a generation algorithm as providing test instances for an algorithm. With this motive, an algorithm for generating weakly chordal graphs by adding edges incrementally was recently proposed in [@DBLP:journals/corr/abs-1906-01056]. An application of this generation algorithm is to obtain test instances for an algorithm for enumerating linear layouts of a weakly chordal graph proposed in [@DBLP:journals/dmaa/MukhopadhyayRPG16].\ The next section of the paper contains some common graph terminology, used subsequently. The following section contains details of our algorithms, beginning with a brief overview. In the concluding section, we summarize the salient aspects of the paper and suggest directions for further work. Preliminaries ============= We will assume that $G$ is a graph on $n$ vertices and $m$ edges, that is, $|V| = n$ and $|E| = m$. The [*neighborhood*]{} $N(v)$ of a vertex $v$ is the subset of vertices $\{u\in V\mid(u,v)\in E\}$ of $V$. The [*degree*]{} $\deg(v)$ of a vertex $v$ is equal to $|N(v)|$. A vertex $v$ of $G$ is [*simplicial*]{} if the induced subgraph on $N(v)$ is complete (alternately, a [*clique*]{}). A [*path*]{} in a graph $G$ is a sequence of vertices connected by edges. We use $P_k (k\geq 3)$ to denote a chordless path spanning $k$ vertices of $G$. 
For instance, a path on 3 vertices is termed a $P_3$ and, similarly, a path on 4 vertices is termed a $P_4$. If a path starts and ends at the same vertex, it is a cycle, denoted by $C_k$, where $k$ is the length of the cycle. A [*chord*]{} in a cycle is an edge between two non-consecutive vertices of the cycle.\ $G$ is chordal if it has no induced chordless cycles of size four or more. However, as Figure \[Fig-CGExamples\] shows, the complement of a chordal graph $G$ can contain an induced chordless cycle of size four. The complement cannot contain a five-cycle though, as the complement of a five-cycle is also a five-cycle (see Figure \[Fig-FiveCycles\]). The above example makes it clear why chordal graphs are also weakly chordal.\ In this paper, we propose an algorithm that generates a weakly chordal graph from an arbitrary input graph. It is built on top of a subroutine that maintains the weak chordality of a graph $G$ under edge deletion. Arbitrary Graph to Weakly Chordal Graph {#sec_arb_wcg} ======================================= Overview of the Method ---------------------- We start by generating a random graph $G$ on $n$ vertices and $m$ edges. In a preprocessing step, we check if $G$ is weakly chordal, using the LB-simpliciality recognition algorithm due to [@DBLP:conf/swat/BerryBH00]. If $G$ is weakly chordal, we stop. Otherwise, we proceed as follows. We first reduce $G$ to a chordal graph $H$ by introducing additional edges, called fill-edges, using the minimum degree vertex ($mdv$, for short) heuristic [@DBLP:journals/siamrev/GeorgeL89]. The $mdv$ heuristic adds edges so that a minimum degree vertex in the current graph becomes simplicial. Each fill-edge is also entered into a queue, termed the fill-edge queue, $FQ$. These fill-edges are potential candidates for subsequent deletion from $H$. Since $H$ is chordal, it is necessarily weakly chordal. 
We propose an algorithm for deleting edges from this weakly chordal graph to remove fill-edges, maintaining the weak chordality property. A fill-edge is deleted only if it does not create a hole or an anti-hole in the resulting graph, and we have developed criteria for detecting this. A fill-edge is removed from the front of the queue, which we then try to delete. If we do not succeed, we put it at the back of the queue. We keep doing this until no more fill-edges can be removed. Operationally, we implement this by defining a deletion round as one in which the fill-edge at the back of the queue is at the front. We stop when the size of the queue does not change over two successive deletion rounds. Figure \[Fig-Flowchart\] is a pictorial illustration of the flow of control. ![Overview of process[]{data-label="Fig-Flowchart"}](Fig-FlowChart) Random Arbitrary Graph ---------------------- To generate a random graph, we invoke an algorithm by Keith M. Briggs, called ‘dense\_gnm\_random\_graph’. This algorithm, based on Knuth’s Algorithm S (the selection sampling technique; see Section 3.4.2 of [@knuth1997]), takes the number of vertices, $n$, and the number of edges, $m$, as input and produces a random graph. For a given $n$, we set $m$ to a random value lying in the range between $n-1$ and $\frac{n(n-1)}{2}$. The output graph may be disconnected, in which case we connect the disjoint components using additional edges. LB-simpliciality test --------------------- In [@DBLP:conf/swat/BerryBH00], Berry et al. proved the following result: A graph is weakly chordal if and only if every edge is $LB$-simplicial. We apply this recognition algorithm to the random graph generated in the previous step and continue with the next steps only if the recognition algorithm fails. Otherwise, we return $G$. 
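The idea behind ‘dense\_gnm\_random\_graph’, Knuth's Algorithm S, can be sketched in a few lines. The following is our own illustrative reimplementation of the sampling idea, not the library code:

```python
import random

def gnm_random_graph(n, m, seed=None):
    """Uniform random graph on n vertices and m edges via Knuth's
    Algorithm S: scan the n(n-1)/2 candidate edges once, selecting each
    with probability (edges still needed) / (candidates still left)."""
    rng = random.Random(seed)
    edges = []
    remaining = n * (n - 1) // 2   # candidate edges not yet scanned
    needed = m                     # edges still to be selected
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() * remaining < needed:
                edges.append((u, v))
                needed -= 1
            remaining -= 1
    return edges
```

Algorithm S returns exactly $m$ edges and every $m$-subset of candidate edges is equally likely. If the resulting graph is disconnected, extra edges joining the components are added, as described above.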
Arbitrary Graph to Chordal Graph -------------------------------- The arbitrary graph $G$ is embedded into a chordal graph $H$ by the addition of edges; this process is known as triangulation or fill-in. Desirable triangulations are those in which a minimal or a minimum number of edges is added. A triangulation $H = (V, E \cup F) $ of $G = (V,E)$ is minimal if $(V, E \cup F')$ is non-chordal for every proper subset $F'$ of $F$. In a minimum triangulation the number of edges added is the fewest possible. Berry et al. [@DBLP:conf/soda/Berry99] proposed an algorithm, known as LB-Triangulation, for the minimal fill-in problem. LB-Triangulation works on any ordering $\alpha$ of the vertices and produces a fill that is provably exclusion-minimal. In our algorithm, we have used the [*mdv*]{} heuristic [@DBLP:journals/siamrev/GeorgeL89], as our experiments have shown that it adds fewer fill-edges compared to LB-Triangulation. We explain this heuristic in the next section. ### The Minimum Degree Vertex Heuristic Let $H = (V, E \cup F)$ be the graph obtained from $G = (V, E)$, where $F$ is the set of fill-edges, following these steps. We first assign $G$ to $H$ and then prune from $G$ all vertices of degree 1. From the remaining vertices of $G$ we choose a vertex $v$ of minimum degree (breaking ties arbitrarily) and turn the neighborhood $N(v)$ of $v$ into a clique by adding edges. These are fill-edges that we add to the edge set of $H$, as well as to the fill-queue, $FQ$. Finally, we remove from $G$ the vertex $v$ and all the edges incident on it. We repeat this until $G$ is empty. The graph $H$ is now chordal and is identical with the initial graph $G$, sans degree-1 vertices, and with fill-edges added. We illustrate this with an example. The initial graph $G$ is shown in Fig. \[Fig-MDPic1\] and the graph $H$ with all fill-edges added is shown in Fig. \[Fig-MDPic2\]. In the initial graph $G$, both $v_1$ and $v_5$ have minimum degree. We break the tie in favour of $v_5$. 
Since the induced subgraph on $N(v_5)$ is already a clique, no fill-edges are added and $G$ is set to $G - \{v_5\}$. In the reduced graph $G$, $v_1$ is of minimum degree and the induced graph on $N(v_1)$ is turned into a clique by adding $\{v_3, v_4\}$ as a fill-edge, which is also added to $H$. Since the reduced graph $G - \{v_1\}$ is a clique, we can pick the vertices $v_0, v_2, v_3, v_4$ in an arbitrary order to reduce $G$ to an empty graph, without introducing any further fill-edges into $H$. The formal algorithm is described below:\
Input: an arbitrary graph $G=(V,E)$\
Output: a chordal graph $H=(V,E\cup F)$ and the fill-edge queue $FQ$\
1. $H \leftarrow G$\
2. Delete all vertices of degree 1 from $G$\
3. Sort $V$ in ascending order of degrees \[sort\]\
4. Choose a vertex $v$ of minimum degree \[choose\]\
5. Turn $N(v)$ into a clique by adding edges, which are added to the edge set of $H$ and to the fill-queue, $FQ$ \[turn\]\
6. Remove the vertex $v$ from $G$ and all the edges incident on it \[remove\]\
7. Repeat steps \[sort\] to \[remove\] until $G$ is empty\
Chordal Graph to Weakly Chordal Graph ------------------------------------- Since the chordal graph $H$ obtained from the previous stage is also weakly chordal, we apply an edge deletion algorithm to $H$ that preserves weak chordality. The edges that are candidates for deletion are the ones that have been added by the $mdv$ heuristic. Each candidate edge is temporarily deleted from $H$, and we check if its deletion creates a hole or an anti-hole in $H$. If not, we delete this edge. The process is explained in detail in the subsequent sections. ### Fill-Edge Queue As mentioned earlier, each edge added to convert an arbitrary input graph into a chordal graph is called a fill-edge. In order to delete as many fill-edges as possible, we maintain a queue of fill-edges, $FQ$. A fill-edge is removed from the front of this queue, which we then try to delete from $H$.
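The $mdv$ triangulation can be sketched compactly, assuming the graph is stored as a dict mapping each vertex to its set of neighbours (the function name and representation are ours, not from the paper):

```python
from collections import deque
from itertools import combinations

def mdv_triangulate(adj):
    """Minimum degree vertex heuristic (sketch): returns the chordal
    supergraph H and the fill-edge queue FQ."""
    G = {v: set(ns) for v, ns in adj.items()}   # working copy, consumed
    H = {v: set(ns) for v, ns in adj.items()}
    FQ = deque()
    for v in [w for w in G if len(G[w]) == 1]:  # prune degree-1 vertices
        for u in G[v]:
            G[u].discard(v)
        del G[v]
    while G:
        v = min(G, key=lambda w: len(G[w]))     # minimum-degree vertex
        for a, b in combinations(sorted(G[v]), 2):
            if b not in G[a]:                   # fill-edge: N(v) -> clique
                G[a].add(b); G[b].add(a)
                H[a].add(b); H[b].add(a)
                FQ.append((a, b))
        for u in G[v]:                          # remove v and its edges
            G[u].discard(v)
        del G[v]
    return H, FQ

# 5-cycle example: two fill-edges suffice to make it chordal
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
H, FQ = mdv_triangulate(C5)
assert set(FQ) == {(1, 4), (2, 4)}
```

On the 5-cycle, eliminating vertex 0 adds fill-edge $\{1,4\}$, eliminating vertex 1 adds $\{2,4\}$, and the remaining triangle needs no further fill, so $H$ is chordal with exactly two fill-edges queued.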
If we do not succeed because a hole or an anti-hole is created, we put it at the back of the queue. We keep doing this until no more fill-edges can be removed from $FQ$. ### Detecting Holes To reiterate, a hole in a graph $G$ is an induced chordless cycle on five or more vertices. Since a graph is weakly chordal if it is (hole, anti-hole)-free [@DBLP:journals/corr/abs-1902-08071], it is crucial to detect if any hole is formed by the deletion of an edge. For the class of weakly chordal graphs, since the biggest cycle allowed is of size four, the holes can be formed either by a combination of two $P_4$’s or by a combination of a $P_3$ and a $P_4$, as illustrated in Fig. \[Fig-DetectingHoles\].\ To detect the formation of a hole in $H$, we pick an edge $e=\{u,v\}$ of $H$ and temporarily delete it. Now, we check if this deletion creates a hole in $H$. To detect a hole, we perform a breadth-first search in $H$ with $u$ as the source vertex and find all chordless $P_3$ and $P_4$ paths between $u$ and $v$. A hole can be created in two distinct ways: (i) by a pair of $P_4$ paths sharing only $u$ and $v$, spanning six distinct vertices in total, such that there exists no chord joining an internal vertex on one $P_4$ to an internal vertex on the other; this we call a hole on two $P_4$s; (ii) by a $P_3$ and a $P_4$ between $u$ and $v$, sharing only $u$ and $v$ and spanning five distinct vertices in total, such that there exists no chord joining an internal vertex on the $P_4$ to the internal vertex of the $P_3$; this we call a hole on a $P_3$ and a $P_4$. ### Antiholes An anti-hole in a graph is, by definition, the complement of a hole [@DBLP:journals/corr/abs-1902-08071]. An anti-hole configuration in a weakly chordal graph has the structure shown in Fig. \[Fig-AntiHole\]. This is an induced graph on six distinct vertices, each of which has degree three.
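The hole test can be sketched as follows (names and representation are ours). Since any $P_3$ or $P_4$ between $u$ and $v$ only involves neighbours of $u$ and $v$, the breadth-first search of the text is replaced here by direct neighbourhood scans:

```python
def without(adj, u, v):
    """Copy of adj with the edge {u,v} temporarily removed."""
    out = {w: set(ns) for w, ns in adj.items()}
    out[u].discard(v); out[v].discard(u)
    return out

def chordless_p3s(adj, u, v):
    # u-x-v; once {u,v} is removed, no chord is possible
    return sorted(adj[u] & adj[v])

def chordless_p4s(adj, u, v):
    # u-x-y-v with neither chord u-y nor chord x-v present
    return [(x, y) for x in adj[u] for y in adj[v]
            if x != y and y in adj[x] and x not in adj[v] and y not in adj[u]]

def creates_hole(adj, u, v):
    """True if, with {u,v} deleted, u and v lie on a 5- or 6-hole."""
    p3s, p4s = chordless_p3s(adj, u, v), chordless_p4s(adj, u, v)
    for x in p3s:                       # hole on a P3 and a P4
        for a, b in p4s:
            if x not in (a, b) and a not in adj[x] and b not in adj[x]:
                return True
    for i, (a, b) in enumerate(p4s):    # hole on two P4s
        for c, d in p4s[i + 1:]:
            if {a, b}.isdisjoint({c, d}) and not ({c, d} & (adj[a] | adj[b])):
                return True
    return False

# the triangulated 5-cycle 0-1-2-3-4 with fill-edges {1,4} and {2,4}
H = {0: {1, 4}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {0, 1, 2, 3}}
assert not creates_hole(without(H, 1, 4), 1, 4)   # {1,4} is deletable
H[1].discard(4); H[4].discard(1)                  # delete it permanently
assert creates_hole(without(H, 2, 4), 2, 4)       # {2,4} now closes a 5-hole
```

The chords $u$–$y$ and $x$–$v$ are excluded when the $P_4$ paths are enumerated, so the pairwise checks between internal vertices are the only remaining chord conditions.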
![Antihole[]{data-label="Fig-AntiHole"}](Fig-AntiHole) ### Detecting Antiholes To detect an anti-hole configuration, we pick an edge $\{u,v\}$ and temporarily delete it from the graph. Next, we check if deleting the edge $\{u,v\}$ creates an anti-hole configuration in the graph. To detect this, we do a breadth-first search with $u$ as the source vertex to find all chordless $P_3$ and $P_4$ paths between $u$ and $v$. An anti-hole configuration is formed by a combination of two $P_3$ paths and one $P_4$ such that every vertex of the induced graph on the six vertices defining these paths has degree three, and there exists a chord from the internal vertex of each $P_3$ to one of the internal vertices of the $P_4$. For example, in Fig. \[Fig-AntiHole\], $\{v_1,v_2,v_5,v_4\}$ is a $P_4$, and $\{v_1,v_3,v_4\}$ and $\{v_1,v_6,v_4\}$ are two $P_3$ paths. There exists exactly one chord from $v_2$ to $v_3$ and exactly one from $v_5$ to $v_6$, and, in the induced graph on these six vertices, every vertex has degree three, making it an anti-hole configuration. ### Proposed Algorithm We use an algorithm for deleting edges from a weakly chordal graph to remove fill-edges, maintaining its weak chordality property. In order to delete as many fill-edges as possible, a fill-edge $\{u, v\}$ is removed from the front of the fill-queue, which we then try to delete from $H$. If we do not succeed, we put it at the back of the queue. We keep doing this until no more fill-edges can be removed. Operationally, we implement this by defining a deletion round as the sequence of attempts that ends when the edge initially at the back of the queue reaches the front. Each attempt consists of picking the edge $\{u, v\}$ at the front of the queue and deleting it from $H$. We then check if the deletion of $\{u, v\}$ creates a hole or an anti-hole in $H$. If so, we do not delete the edge $\{u, v\}$ and add it back to the fill-queue. Otherwise, we delete the edge from $H$ and also remove it from the fill-queue $FQ$.
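The anti-hole test can be sketched along the same lines (a sketch with our names, assuming a dict-of-neighbour-sets representation and that the edge $\{u,v\}$ has already been removed from the adjacency sets):

```python
def creates_antihole(adj, u, v):
    """True if, with {u,v} deleted, the six-vertex anti-hole
    configuration is present: a chordless P4 u-a-b-v and two chordless
    P3s u-x-v, u-y-v whose induced graph is 3-regular."""
    p3s = sorted(adj[u] & adj[v])
    p4s = [(a, b) for a in adj[u] for b in adj[v]
           if a != b and b in adj[a] and a not in adj[v] and b not in adj[u]]
    for i, x in enumerate(p3s):
        for y in p3s[i + 1:]:
            for a, b in p4s:
                verts = {u, v, x, y, a, b}
                if len(verts) == 6 and all(len(adj[w] & verts) == 3
                                           for w in verts):
                    return True
    return False

# Fig. [Fig-AntiHole] with the edge {v1,v4} already removed:
# P4 v1-v2-v5-v4, P3s v1-v3-v4 and v1-v6-v4, chords {v2,v3}, {v5,v6}
A = {1: {2, 3, 6}, 2: {1, 3, 5}, 3: {1, 2, 4},
     4: {3, 5, 6}, 5: {2, 4, 6}, 6: {1, 4, 5}}
assert creates_antihole(A, 1, 4)
# a plain 5-cycle contains no anti-hole configuration
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
assert not creates_antihole(C5, 2, 4)
```

Requiring the induced graph on the six vertices to be 3-regular automatically enforces the two chords of the configuration, as in the worked example of the figure.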
We stop when the size of $FQ$ does not change over two successive deletion rounds.\
Input: a chordal graph $H=(V,E\cup F)$ with fill-edge queue $FQ$\
Output: a weakly chordal graph $G_w$\
1. $T \leftarrow H$; $FQ \leftarrow$ fill-edges of $H$\
2. $prevSize \leftarrow 0$; $newSize \leftarrow |FQ|$\
3. While $newSize \neq prevSize$: set $prevSize \leftarrow newSize$; for each edge $\{u,v\}$ taken from the front of $FQ$ in the current round, delete $\{u,v\}$ from $T$; if this creates a hole or an anti-hole, do not delete the edge from graph $H$ and add it back to the temporary graph $T$ and to the back of the queue $FQ$; otherwise, delete $\{u,v\}$ from graph $H$; then set $newSize \leftarrow |FQ|$\
4. $G_w \leftarrow H$; return $G_w$\
For example, in Fig. \[Fig-ArbToWCGHoles\], a random arbitrary graph on 6 vertices and 8 edges is generated. It is converted into a chordal graph by inserting two additional edges. These two additional edges are put in the fill-edge queue $[\{v_2,v_4\},\{v_1,v_4\}]$. A temporary copy of the chordal graph is maintained in $T$. The deletion algorithm begins by picking the first edge $\{v_2,v_4\}$ in the fill-edge queue and temporarily deletes it from graph $T$ to check for hole and anti-hole configurations. Since deleting $\{v_2,v_4\}$ does not give rise to any hole or anti-hole configuration, $\{v_2,v_4\}$ is permanently deleted from the starting graph $H$, which remains weakly chordal. Now the updated fill-edge queue is $[\{v_1,v_4\}]$. The deletion algorithm now picks the first edge $\{v_1,v_4\}$ in the fill-edge queue and temporarily deletes it from graph $T$ to check for hole and anti-hole configurations. Since deleting $\{v_1,v_4\}$ gives rise to a hole configuration on one $P_4$ $\{v_1,v_2,v_3,v_4\}$ and one $P_3$ $\{v_1,v_5,v_4\}$, $\{v_1,v_4\}$ is not permanently deleted from $H$. Since the queue size no longer changes, the algorithm terminates, and the graph $G_w$ it returns is weakly chordal with a small subset of fill-edges added to the original graph $G$.\
\
For another example, consider Figure \[Fig-AntiholeExample1\], where a random arbitrary graph on 6 vertices and 9 edges is generated.
It is converted into a chordal graph $H$ (see Figure \[Fig-AntiholeExample2\]) by adding three additional edges. These three additional edges are put in the fill-edge queue $[\{v_1,v_5\},\{v_2,v_4\},\{v_1,v_4\}]$. A temporary copy of the chordal graph $H$ is maintained in $T$. The deletion algorithm begins by picking the first edge $\{v_1,v_5\}$ in the fill-edge queue and temporarily deletes it from graph $T$ to check for hole and anti-hole configurations. Since deleting $\{v_1,v_5\}$ does not give rise to any hole or anti-hole configuration, $\{v_1,v_5\}$ is permanently deleted from the starting graph $H$, which remains weakly chordal, as shown in Figure \[Fig-AntiholeExample3\]. Now the updated fill-edge queue is $[\{v_2,v_4\},\{v_1,v_4\}]$. The deletion algorithm now picks the first edge $\{v_2,v_4\}$ in the fill-edge queue and temporarily deletes it from graph $T$ to check for hole and anti-hole configurations. Since deleting $\{v_2,v_4\}$ does not give rise to any hole or anti-hole configuration, $\{v_2,v_4\}$ is permanently deleted from the starting graph $H$, which remains weakly chordal. Now the updated fill-edge queue is $[\{v_1,v_4\}]$. The deletion algorithm now picks the first and only edge $\{v_1,v_4\}$ in the fill-edge queue and temporarily deletes it from graph $T$ to check for a hole or an anti-hole configuration. Since deleting $\{v_1,v_4\}$ gives rise to an anti-hole configuration on two $P_3$ paths $\{v_1,v_3,v_4\}$, $\{v_1,v_6,v_4\}$ and one $P_4$ $\{v_1,v_2,v_5,v_4\}$, the edge $\{v_1,v_4\}$ is not permanently deleted from the starting graph $H$. Since the queue size no longer changes, the graph $G_w$ returned by the algorithm is weakly chordal with a small subset of fill-edges added to the original graph $G$, as shown in Figure \[Fig-AntiholeExample4\].
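The deletion-round loop itself can be sketched as follows; `delete_fill_edges` is our name, and the hole/anti-hole predicate is assumed to be supplied externally:

```python
from collections import deque

def delete_fill_edges(H, FQ, creates_hole_or_antihole):
    """Deletion rounds (sketch): repeatedly try to remove the front
    fill-edge; stop when a full round removes nothing.  H is a dict of
    neighbour sets, mutated in place; FQ is an iterable of fill-edges."""
    FQ = deque(FQ)
    prev_size = None
    while FQ and len(FQ) != prev_size:  # stop when a round removes nothing
        prev_size = len(FQ)
        for _ in range(prev_size):      # one deletion round
            u, v = FQ.popleft()
            H[u].discard(v); H[v].discard(u)    # tentative deletion
            if creates_hole_or_antihole(H, u, v):
                H[u].add(v); H[v].add(u)        # restore the edge
                FQ.append((u, v))               # retry in a later round
    return H

# with a predicate that never fires, every fill-edge is deleted:
# 4-cycle 0-1-2-3 triangulated by the fill-edge {0,2}
H = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
delete_fill_edges(H, [(0, 2)], lambda adj, u, v: False)
assert 2 not in H[0]
# with a predicate that always fires, every fill-edge is kept
H = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
delete_fill_edges(H, [(0, 2)], lambda adj, u, v: True)
assert 2 in H[0]
```

Comparing the queue length before and after a round implements the stopping rule of the text: the size of $FQ$ is unchanged over two successive rounds exactly when a whole round fails to remove any edge.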
Complexity ---------- The $mdv$ heuristic can be implemented in $O(n^2 m)$ time, while the time-complexity of the recognition algorithm based on LB-simpliciality is $O(nm)$.\ To bound the query complexity of deleting an edge $\{u, v\}$ from the weakly chordal graph, we note that this is dominated by the task of finding multiple $P_3$ and $P_4$ paths between $u$ and $v$; we have to consider these in pairs and run the breadth-first search. An upper bound on the number of pairs of $P_3$ and $P_4$ paths between $u$ and $v$ is $O(d_u^2d_v^2)$, where $d_u$ and $d_v$ are the degrees of $u$ and $v$ respectively. To see this, consider such a path from $u$ to $v$ (see Figure \[Fig-AP4Path\]): $x$ is one of the at most $d_u$ vertices adjacent to $u$ and $y$ is one of the at most $d_v$ vertices adjacent to $v$, so that we have at most $O(d_ud_v)$ $P_4$ paths from $u$ to $v$ and thus $O(d_u^2d_v^2)$ disjoint pairs of $P_4$ paths from $u$ to $v$.\ If $|E|$ is the number of edges currently in the weakly chordal graph, the complexity of running a breadth-first search is $O(n + |E|)$. Since $m$ is the number of edges in the final weakly chordal graph, an upper bound on the query complexity is $O(d_u^2d_v^2 (n + m))$.\ The deletion of an edge takes constant time, since we maintain an adjacency-matrix data structure to represent $G$. Conclusion ========== We have proposed a simple method for generating a weakly chordal graph from an arbitrary graph. The proposed algorithm can also be used to generate weakly chordal graphs by deleting edges from input graphs that are known to be weakly chordal, such as complete graphs. Starting with complete graphs also helps in generating dense weakly chordal graphs. An interesting open problem is to establish whether the proposed method for generating a weakly chordal graph from an arbitrary graph adds a minimal number of edges.\ We have implemented our algorithm in Python. Some sample outputs are shown in an appendix below.
In each of the figures, the purple edges in the chordal graph are edges that are candidates for deletion.\ [10]{} Anne Berry. A wide-range efficient algorithm for minimal triangulation. In [*Proceedings of the Tenth Annual [ACM-SIAM]{} Symposium on Discrete Algorithms, 17-19 January 1999, Baltimore, Maryland, [USA.]{}*]{}, pages 860–861, 1999. Anne Berry, Jean Paul Bordat, and Pinar Heggernes. Recognizing weakly triangulated graphs by edge separability. , 7(3):164–177, 2000. Anne Berry, Jean Paul Bordat, and Pinar Heggernes. Recognizing weakly triangulated graphs by edge separability. In [*Algorithm Theory - [SWAT]{} 2000, 7th Scandinavian Workshop on Algorithm Theory, Bergen, Norway, July 5-7, 2000, Proceedings*]{}, pages 139–149, 2000. Carl Feghali and Jir[í]{} Fiala. Reconfiguration graph for vertex colourings of weakly chordal graphs. , abs/1902.08071, 2019. Alan George and Joseph W. H. Liu. The evolution of the minimum degree ordering algorithm. , 31(1):1–19, 1989. Ryan B. Hayward. Weakly triangulated graphs. , 39(3):200–208, 1985. Ryan B. Hayward, Ch[í]{}nh T. Ho[à]{}ng, and Fr[é]{}d[é]{}ric Maffray. Optimizing weakly triangulated graphs. , 5(1):339–349, 1989. Ryan B. Hayward, Jeremy P. Spinrad, and R. Sritharan. Improved algorithms for weakly chordal graphs. , 3(2):14, 2007. Donald E. Knuth. . Addison-Wesley, 1997. Asish Mukhopadhyay, S. V. Rao, Sidharth Pardeshi, and Srinivas Gundlapalli. Linear layouts of weakly triangulated graphs. , 8(3):1–21, 2016. Md. Zamilur Rahman, Asish Mukhopadhyay, and Yash P. Aneja. A separator-based method for generating weakly chordal graphs. , abs/1906.01056, 2019. Jeremy P. Spinrad and R. Sritharan. Algorithms for weakly triangulated graphs. , 59(2):181–191, 1995. G. Tinhofer. , pages 235–255. Springer Vienna, Vienna, 1990. Appendix ========
--- address: 'Sebastian Meyer is a Research Fellow at the Institute of Medical Informatics, Biometry, and Epidemiology, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany .' author: - bibliography: - 'references.bib' title: 'Self-exciting Point Processes: Infections and Implementations' --- =1 Thanks for this overdue account of *self-exciting* spatio-temporal point process models, synthesizing developments from various research fields. In what follows, I will contribute some experiences from modelling the spread of infectious diseases (relating to Section 4.3 of the review). Furthermore, I will try to complement the review with regard to the availability of software for the described models, which I think is essential in “paving the way for new uses”. Point process models for infectious disease spread {#point-process-models-for-infectious-disease-spread .unnumbered} ================================================== For notifiable diseases, public health surveillance data is routinely available in aggregated form as time series of infection counts. Such data are typically approached with autoregressive models using a negative binomial distribution, or assuming the counts as approximately Gaussian after a suitable transformation to adopt classical ARIMA models or even Facebook’s Prophet procedure [see @held.meyer2018 for an assessment]. For *multivariate* time series stratified by region, spatial epidemic models can account for varying demographic and environmental factors, and enable spatially explicit predictions. @hoehle2016 provides a recent overview of spatio-temporal infectious disease models. @taylor.etal2015 propose to tackle even such aggregate-level surveillance data with point process methods (specifically, a log-Gaussian Cox process model with Bayesian data augmentation). However, for “mechanistic”, self-exciting point process models to unfold in infectious disease epidemiology, individual-level data are indispensable. 
An important distinction is between a point process indexed in a continuous spatial domain, such as in the ETAS model, and a multivariate temporal point process operating on a discrete set of interacting locations/individuals, i.e., on a network. Reinhart mentions recent applications of such multivariate processes in social networks. It is important to note that similar models have also been developed in the infectious disease context, where this approach is not so much “in its infancy”. For instance, the models described in @diggle2006, @scheel.etal2007, and @hoehle2009 all describe the spread of livestock diseases among farms using distance-based transmission kernels. Such spatial distances could just as well be replaced by geodesic distances to quantify the coupling between the individual infection processes, for example using movement networks as in @schroedle.etal2012 or contact networks as mentioned by Reinhart. @aldrin.etal2015 use a combination of spatial distances and local contact networks. In what follows, I focus on spatially *continuous* self-exciting point process models for the spread of infectious diseases in human populations. Such models come with several caveats, on three of which I would like to elaborate. Limited spatio-temporal data resolution {#limited-spatio-temporal-data-resolution .unnumbered} --------------------------------------- The available spatial resolution of case reports is often limited by data protection. This constrains the detail with which spatial interaction can be estimated. “Areal censoring” (e.g., to the postcode level) may yield events that apparently occurred at the same location, which is impossible in simple point processes. Equivalently, interval censoring of the infection times results in concurrently observed events, making it impossible to ascertain which infection predates the other.
Furthermore, the situation is complicated by the fact that event times only correspond to the date of specimen sampling or notification to public health authorities. As latent periods and reporting delays differ between cases, the observed ordering of the events may not always properly reflect the infection chain. One way of dealing with tied event times and locations is to add random jitter of an amount corresponding to the level of censoring in the data, and ideally to conduct a sensitivity analysis or use model averaging over several random seeds. Breaking ties will affect estimates of the triggering function and will remove spikes in the distribution of rescaled temporal residuals [see @meyer.etal2012 Figure 4]. These residuals are described in @ogata1988 [Section 3.3] and supplement the spatial diagnostics discussed by Reinhart. The meaning of location {#the-meaning-of-location .unnumbered} ----------------------- Even if the data provided the georeferenced place of residence of each patient, would that be a suitable proxy for the “epicentre”? It may neither be the place where the individual initially became exposed nor the location receiving the highest triggering rate during the infectious period. Nevertheless, it is probably the best available proxy. A more realistic triggering function would obviously need to employ social contacts rather than spatial displacement. This is possible in the multivariate models for $\lambda_i(t)$ above but not for $\lambda(s,t)$, as there is no mapping of locations $s \in X$ to contact rates. Using a spatio-temporal point process model for human infections thus entails the assumption that geographic distance reflects interaction well enough, which is (at least) supported by the findings of @brockmann.etal2006 and @read.etal2014. Underreporting {#underreporting .unnumbered} -------------- Public health surveillance data suffer from considerable underreporting [@gibbons.etal2014].
The consequence is that the self-exciting model component will be underestimated while the background process might partially capture cases caused by unobserved sources. This is similar to the boundary effects discussed in the review. Indeed, there *is* a background process “producing new cases from nowhere”, meaning immigration of infectives from outside the observation region (e.g., sick tourists or contaminated food), or via antigenic drifts. To identify such events, stochastic declustering is also of interest in infectious disease epidemiology, but is less useful in practice because of the biases from underreporting. A similar limitation holds for a key epidemiological parameter, the basic reproduction number $R_0$, estimated as the space-time integral of the triggering function. Underreporting and implemented control measures imply that this estimate is only a lower bound for the *effective* reproduction number. So yes, self-exciting models of infectious disease spread do require careful interpretation, especially since pathogens in humans are not nearly as well observable as earthquakes. Software {#software .unnumbered} ======== In synthesizing estimation and inference techniques, the review covers relevant topics for the analysis of spatio-temporal point patterns from epidemic phenomena. I found one crucial aspect to be missing though: software. Providing implementations of statistical methods or at least the code for the specific analysis at hand is essential for scientific progress today, as it enables others to reproduce the findings and use the described approaches in their own data-analysis pipelines. Unsurprisingly, most publicly available implementations of self-exciting point process models are related to the ETAS model. Several implementations exist for estimating purely temporal versions, e.g., the Fortran code by @kasahara.etal2016, and the packages [@R:SAPP], [@harte2010], and [@R:bayesianETAS see Section 3.5 of the review].
A general-purpose implementation to estimate and simulate purely spatial cluster process models is provided in the package [@baddeley.turner2005]. The package [@R:ETAS] provides access to a port of Zhuang’s Fortran routines for stochastic declustering in spatio-temporal ETAS models. There are two sophisticated software packages which support both temporal and spatio-temporal ETAS models: [@lombardi2017] is a Matlab-based GUI (currently documented to require Mac OS) for Fortran routines employing simulated annealing for maximum likelihood estimation, and [@adelfio.chiodi2015] is an R package using the estimation approach described in Section 3.2.2 of the review. In principle, these ETAS packages could also be used for non-seismological applications. However, they often do not allow for different parametric forms of the triggering function, and the modified Omori formula is not necessarily applicable in other contexts. For instance, different formulations have been used in crime (Section 4.2) and epidemic (Section 4.3) forecasting. At least for epidemiological models, the package [@meyer.etal2017] fills the gap. Apart from the multivariate model of @hoehle2009, it can also estimate and simulate the spatio-temporal model of @meyer.etal2012 mentioned in the review. Various spatial triggering functions are supported, including Gaussian, power-law, Student, and (piecewise) constant kernels (custom forms are possible as well, but will usually be much slower to estimate). A Newton-type optimizer with analytical derivatives is used to maximize the log-likelihood. Efforts have been made to avoid vague approximations of the contained integrals $\int_X f(s-s_i) \,\mathrm{d}s$ over the polygonal observation region $X$. Assuming all these integrals to equal 1 is inappropriate for events close to the boundary and for heavy-tailed kernels in general.
So we compute these integrals, but use an efficient cubature method for isotropic spatial interaction functions $f$, which only requires one-dimensional numerical integration (see [@meyer.held2014 Supplement B], and the implementation available via the package). Closing comment {#closing-comment .unnumbered} =============== I hope that Reinhart’s review will be as infectious as its content and trigger further applications of such models to epidemic phenomena. Readily available, well-documented, open-source software facilitates this process.
Introduction ============ It has recently become clear that quantum phase transitions[@quantum] in disordered systems are rather different from phase transitions driven by thermal fluctuations. In particular, Griffiths [@griffiths] showed that the free energy is a non-analytic function of the magnetic field in part of the disordered phase because of rare regions, which are more strongly correlated than the average and which are [*locally ordered*]{}. However, in a classical system, this effect is very weak, all the derivatives being finite[@essen]. By contrast, in a quantum system at zero temperature, these effects are much more pronounced. One model where these effects can be worked out in detail, and where rare, strongly coupled regions dominate not only the disordered phase but also the critical region, is the one-dimensional random transverse-field Ising chain with Hamiltonian $${\cal H} = -\sum_{i=1}^L J_i \sigma^z_i \sigma^z_{i+1} - \sum_{i=1}^L h_i \sigma^x_i \ . \label{ham}$$ Here the $\{\sigma^\alpha_i\}$ are Pauli spin matrices, and the interactions $J_i$ and transverse fields $h_i$ are both independent random variables, with distributions $\pi(J)$ and $\rho(h)$ respectively. The lattice size is $L$, which we take to be even, and periodic boundary conditions are imposed. The ground state of this model is closely related to the finite-temperature behavior of a two-dimensional classical Ising model with disorder perfectly correlated along one direction, which was first studied by McCoy and Wu[@mw]. Subsequently, the quantum model, Eq. (\[ham\]), was studied by Shankar and Murphy[@sm], and recently, in great detail, by Fisher[@dsf]. From a real space renormalization group analysis, which becomes exact on large scales, Fisher obtained many new results and considerable physical insight. The purpose of the present study is to investigate the model in Eq. 
(\[ham\]) numerically, using a powerful technique[@lsm] which is special to one-dimensional systems, to verify the surprising predictions of the earlier work[@mw; @sm; @dsf] and to determine certain distributions and scaling functions which have not yet been calculated analytically. In one dimension one can perform a gauge transformation to make all the $J_i$ and $h_i$ positive. Unless otherwise stated, the numerical work used the following rectangular distributions: $$\begin{aligned} \pi(J) & = & \left\{ \begin{array}{ll} 1 & \mbox{for $ 0 < J < 1$} \\ 0 & \mbox{otherwise} \end{array} \right. \nonumber \\ \rho(h) & = & \left\{ \begin{array}{ll} h_0^{-1} & \mbox{for $ 0 < h < h_0$} \\ 0 & \mbox{otherwise.} \end{array} \right. \label{dist}\end{aligned}$$ The model is therefore characterized by a single control parameter, $h_0$. As discussed in section II, the critical point is at $h_0 = 1$ (so the distributions of $h$ and $J$ are then the same) and the deviation from criticality is conveniently measured by the parameter $\delta$ in Eq. (\[delta\]), where, for the distribution in Eq. (\[dist\]), $$\delta = {1 \over 2} \ln h_0 \ . \label{delta_h0}$$ Section II discusses the analytical results obtained previously, and section III reviews the work of Lieb, Schultz and Mattis [@lsm], Katsura[@katsura] and Pfeuty[@pfeuty] which relates the Hamiltonian to free fermions, and also explains how this technique can be implemented numerically for the random case. In section IV the numerical results for the distribution of the energy gap are shown, while section V discusses results for the correlation functions. Results for the local susceptibility on smaller sizes, obtained by the Lanczos method, are discussed in section VI, while data for the $q=0$ structure factor, which could be measured in a scattering experiment, are considered in section VII.
Finally, in section VIII, we summarize our conclusions and discuss the possible relevance of the results to models in higher dimensions. Analytical results ================== In this section we summarize the results obtained earlier by McCoy and Wu[@mw], Shankar and Murphy[@sm] and particularly by Fisher[@dsf]. Defining $$\begin{aligned} \Delta_h & = & [\ln h]_{\rm av} \nonumber \\ \Delta_J & = & [\ln J]_{\rm av}\end{aligned}$$ where $[\ldots]_{\rm av}$ denote an average over disorder, the critical point occurs when $$\Delta_h = \Delta_J \ .$$ Clearly this is satisfied if the distributions of bonds and fields are equal, and the criticality of the model then follows from duality. A convenient measure of the deviation from criticality is given by $$\delta = { \Delta_h - \Delta_J \over [(\ln h)^2]_{\rm av} - \Delta_h^2 + [(\ln J)^2]_{\rm av} - \Delta_J^2} \ . \label{delta}$$ At a quantum critical point one needs to consider the dynamical critical exponent, $z$, even when determining static critical phenomena, because statics and dynamics are coupled. The relation between a characteristic length scale $l$ and the corresponding time scale $\tau$ is then $\tau \sim l^z$. For the present model one has, at the critical point, $$z = \infty \quad (\delta = 0) \ ,$$ or, more precisely, the time scale varies as the exponential of the square root of the corresponding length scale. In addition, the distribution of local relaxation times is predicted to be very broad. One of the goals of the present work is to determine the form of the distribution of a related quantity, the gap to the first excited state. Moving into the disordered phase, there is still a very broad distribution of relaxation times because of Griffiths singularities, and one can still, as a result, define a dynamical exponent but this now varies with $\delta$, diverging as $$z = { 1 \over 2 \delta } + C + O(\delta) \ , \label{zdiverge}$$ for $\delta \to 0$, where $C$ is a non-universal constant. 
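For the rectangular distributions of Eq. (\[dist\]), $[\ln J]_{\rm av} = -1$, $[\ln h]_{\rm av} = \ln h_0 - 1$, and both variances equal 1, so Eq. (\[delta\]) indeed reduces to $\delta = {1\over2}\ln h_0$ of Eq. (\[delta\_h0\]). A quick Monte Carlo consistency check (ours, not part of the paper):

```python
import numpy as np

# sample ln h and ln J from the rectangular distributions of Eq. (dist)
rng = np.random.default_rng(0)
h0, N = 2.5, 200_000
lnJ = np.log(rng.uniform(0.0, 1.0, N))   # pi(J): uniform on (0, 1)
lnh = np.log(rng.uniform(0.0, h0, N))    # rho(h): uniform on (0, h0)

# Eq. (delta): delta = ([ln h]_av - [ln J]_av) / (var[ln h] + var[ln J])
delta = (lnh.mean() - lnJ.mean()) / (lnh.var() + lnJ.var())
assert abs(lnJ.mean() + 1.0) < 0.02                  # [ln J]_av = -1
assert abs(lnh.mean() - (np.log(h0) - 1.0)) < 0.02   # [ln h]_av = ln h0 - 1
assert abs(lnh.var() - 1.0) < 0.05                   # var[ln h] = 1
assert abs(delta - 0.5 * np.log(h0)) < 0.02          # delta = (1/2) ln h0
```

The identities used are $[\ln U(0,a)]_{\rm av} = \ln a - 1$ and ${\rm var}[\ln U(0,a)] = 1$ for a variable uniform on $(0,a)$.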
Moving further away from the critical point, if one reaches a situation where all the fields are bigger than all the interactions, then Griffiths singularities no longer occur and the distribution of relaxation times becomes narrow. Denoting the value of $\delta$ where this happens by $\delta_G$, Griffiths singularities occur in that part of the disordered phase where[@griff_phase] $$0 < \delta < \delta_G \ .$$ Approaching the end of the Griffiths phase, one has $$\lim_{\delta \to \delta_G^-} z = 0 \ .$$ For the distribution used in the numerical calculations, Eq. (\[dist\]), $\delta_G = \infty$ so Griffiths singularities occur throughout the disordered phase. In the disordered phase, the magnetization in the $z$-direction has a singular piece if a uniform field, $H$, coupling to $\sigma^z$, is added, namely $$m_{sing} \sim |H|^{1 \over z} \ ,$$ so the linear susceptibility diverges over part of the disordered phase, a result first found by McCoy and Wu[@mw]. Next we turn to predictions for the correlation functions $$C_{ij} = \langle \sigma^z_i \sigma^z_j \rangle \ .$$ Again there are very big fluctuations, and, as a result, the average and typical correlations behave quite differently. The average correlation function, $$C_{\rm av}(r) = {1 \over L} \sum_{i=1}^L [ \langle \sigma^z_i \sigma^z_{i+r} \rangle ]_{\rm av} \ ,$$ varies as a power of $r$ at criticality, $$C_{\rm av}(r) \sim {1 \over r^{2 - \phi} } \quad (\delta = 0) \ , \label{cav}$$ where $$\phi = {1 + \sqrt{5} \over 2} = 1.61804\ldots$$ is the golden mean, so the power in Eq. (\[cav\]) is approximately 0.38. Away from criticality, $C_{\rm av}(r)$ decays exponentially at a rate given by the [*true*]{} correlation length, $\xi$, where $$\xi \approx {l_V \over \delta^\nu} \ , \label{truexi}$$ with $$\nu = 2 \ .$$ The amplitude of the correlation length, $l_V$, is also known and given by $$l_V = { 2 \over \mbox{var }[h] + \mbox{var }[J]} \ .$$ For the distribution in Eq. (\[dist\]) one has $$l_V = 1 . 
\label{lv}$$ Scaling theory predicts that $${C_{\rm av}(r; \delta) \over C_{\rm av}(r; \delta=0) } = \bar{C}_{\rm av}(r / \xi) \ , \label{cscale}$$ where $\bar{C}_{\rm av}$ is a universal scaling function and $\nu$ is the true correlation function exponent, predicted[@dsf] to equal 2. Fisher[@dsf] has calculated the asymptotic form of the scaling function in Eq. (\[cscale\]) for $ r \gg \xi$ and finds $$\bar{C}_{\rm av}(x) = D{ e^{-x -4.055 x^{1/3}} \over x^{0.451}} \quad (x \gg 1 )\ , \label{asymp}$$ where $D$ is an unknown constant, and 0.451 is the numerical value of $5/6 - (2-\phi)$. The [*average*]{} correlation function is, however, dominated by rare pairs of spins which have a correlation function of order unity, much larger than the typical value, so it is necessary to consider the distribution of $\ln C(r)$ to get an idea of the [*typical*]{} behavior. At the critical point $$-\ln C(r) \sim \sqrt{r} \quad (\delta = 0) \ , \label{logcr}$$ with the coefficient in Eq. (\[logcr\]) having a distribution which is independent of $r$. A goal of the present study is to investigate this distribution numerically. In the disordered phase, $-\ln C(r) \propto$ $r$ with a coefficient which is [*self-averaging*]{} for $r \to \infty$, i.e. $$-\ln C(r) \approx r/\tilde{\xi} \ , \label{eq23}$$ for large $r$, where the typical correlation length, $\tilde{\xi}$, has the behavior $$\tilde{\xi} \sim {1 \over \delta^{\tilde{\nu}} } \ , \label{xityp}$$ with $$\tilde{\nu} = 1 \ .$$ The scaling equation corresponding to Eq. (\[cscale\]) but for the average of the [*log*]{} of the correlation function is $$\left[ \ln {C(r; \delta) ]_{\rm av} \over C(r; \delta=0) } \right]_{\rm av} = \ln \bar{C}_{typ}(r / \tilde{\xi} ) \ , \label{lncscale}$$ where $\bar{C}_{typ}$ is a universal scaling function. From Eqs. (\[logcr\]) and (\[eq23\]) one has, for $ r \gg \tilde{\xi}$, $$\ln \bar{C}_{typ}(r / \tilde{\xi}) \approx - r/\tilde{\xi} \ . 
\label{large_r}$$ For correlations of quantities such as the energy, which are local in the fermion operators, see Eqs. (\[spins\]) and (\[jw\]) of the next section, Shankar and Murphy[@sm] obtained more detailed information. They calculated not only the exponent for the typical correlation length in Eq. (\[xityp\]) but also the amplitude, finding $$\tilde{\xi}^{-1} = [\ln h]_{\rm av} - [\ln J]_{\rm av} \label{xitypex}$$ exactly. For $r \gg \tilde{\xi}$ the mean of $\ln C_{en}(r)$ is [*defined*]{} to be $-r / \tilde{\xi}$ so $$[\ln C_{en}(r)]_{\rm av} \approx -\left\{[\ln h]_{\rm av} - [\ln J]_{\rm av} \right\} r \ \label{lncav}$$ in this limit. The variance of the distribution is also known[@dsf:pc] for $r \gg \xi$: $$\mbox{var } [\ln C_{en}(r)] \approx \left\{\mbox{var }[\ln h] + \mbox{var } [\ln J] \right\} r \ . \label{varlnc}$$ Note that the standard deviation of $\ln C_{en}(r)$ is proportional to $r^{1/2}$ whereas the mean is proportional to $r$, so $\ln C_{en}(r)$ becomes self-averaging for $ r \gg \tilde{\xi}$. Fisher[@dsf:pc] has suggested that Eqs. (\[xitypex\])-(\[varlnc\]) might also be true asymptotically for quantities such as $\sigma^z$ which are [*not*]{} local in fermion operators. If this is true, then, for the distribution in Eq. (\[dist\]), we have $$\begin{aligned} [ \ln C(r; \delta) ]_{\rm av} & \approx & -2 \delta r \label{ctilde} \\ \mbox{var } [\ln C(r; \delta) ] & \approx & 2 r \label{varctilde}\end{aligned}$$ for $r \gg \tilde{\xi}$. Note that an important feature of these results is that the true correlation length (which describes the average correlation function) has a different exponent from that of the typical correlation length. Mapping to free fermions ======================== The numerical calculations are enormously simplified by relating the model in Eq. (\[ham\]) to [*non-interacting*]{} fermions.
This technique was first developed for some related quantum spin chain problems in a beautiful paper by Lieb, Schultz and Mattis[@lsm] and then applied to the [*non-random*]{} transverse field Ising chain by Katsura[@katsura] and Pfeuty[@pfeuty]. The starting point is the Jordan-Wigner transformation, which relates the spin operators to fermion creation and annihilation operators, $c^\dagger_i$ and $c_i$, by the following transformation: $$\begin{aligned} \sigma^z_i & = & a^{\dagger}_i + a_i \nonumber \\ \sigma^y_i & = & i( a^{\dagger}_i - a_i) \nonumber \\ \sigma^x_i & = & 1 - 2 a^\dagger_i a_i = 1 - 2 c^\dagger_i c_i \ , \label{spins}\end{aligned}$$ where $$\begin{aligned} a^\dagger_i & = & c^\dagger_i \exp\left[ -i\pi \sum_{j=1}^{i-1} c^\dagger_j c_j \right] \nonumber \\ a_i & = & \exp\left[ -i\pi \sum_{j=1}^{i-1} c^\dagger_j c_j\right] c_i \ . \label{jw}\end{aligned}$$ This works because the Pauli spin matrices anti-commute on the same site but commute on different sites. The “string operator” in the exponentials in Eq. (\[jw\]) is just what is needed to insert an extra minus sign, converting a commutator to an anti-commutator for different sites. The Hamiltonian can then be written $${\cal H} = \sum_{i=1}^L h_i (c_i^\dagger c_i - c_i c^\dagger_i) - \sum_{i=1}^{L-1} J_i(c^\dagger_i - c_i)( c^\dagger_{i+1} + c_{i+1})$$ $$+ J_L(c^\dagger_L - c_L)( c^\dagger_{1} + c_{1}) \exp(i\pi {\cal N}) \ , \label{hamfermi}$$ where $${\cal N} = \sum_{i=1}^L c^\dagger_i c_i \ , \label{N}$$ is the number of fermions. The last term in Eq. (\[hamfermi\]) is different from the other terms involving the $J_i$ since the string operator in Eq. (\[jw\]) acts all the way round the lattice because of periodic boundary conditions. Although the number of fermions is not conserved, the parity of that number [*is*]{} conserved, so $ \exp(i\pi {\cal N})$ is a constant of the motion and has the value 1 or $-1$. 
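The role of the string operator is easy to verify directly on a tiny chain. The following NumPy sketch (an illustration added here, not part of the original study) builds the string-dressed operators of Eq. (\[jw\]) for $L=3$ from dense $2^L \times 2^L$ matrices and checks that they obey the canonical fermion anticommutation relations both on the same site and across sites:

```python
import numpy as np

L = 3
I2 = np.eye(2)
a  = np.array([[0., 1.], [0., 0.]])   # on-site annihilation; n = a†a = diag(0, 1)
F  = np.diag([1., -1.])               # string factor exp(-i pi n) = 1 - 2n

def chain_op(ops):
    """Tensor product of single-site 2x2 operators along the chain."""
    out = np.array([[1.]])
    for o in ops:
        out = np.kron(out, o)
    return out

def c(i):
    """String-dressed annihilation operator; Eq. (jw) inverted:
    c_i = exp[-i pi sum_{j<i} n_j] a_i (the string is its own inverse)."""
    return chain_op([F] * i + [a] + [I2] * (L - 1 - i))

def anti(x, y):
    return x @ y + y @ x

# canonical fermion algebra: {c_i, c_j†} = delta_ij, {c_i, c_j} = 0
ok = all(
    np.allclose(anti(c(i), c(j).T), np.eye(2 ** L) * (i == j))
    and np.allclose(anti(c(i), c(j)), 0.0)
    for i in range(L) for j in range(L)
)
print(ok)  # True
```

Without the string factors the dressed operators on different sites would commute rather than anticommute, and the same check fails; this is exactly the extra minus sign discussed above.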
Hence, the fermion problem must have antiperiodic boundary conditions if there is an even number of fermions and periodic boundary conditions if there is an odd number of fermions. Note that the fermion Hamiltonian, Eq. (\[hamfermi\]), is bilinear in fermion operators, and so describes [*free fermions*]{}. For the non-random model[@lsm; @katsura; @pfeuty] one solves for the single particle eigenstates of Eq. (\[hamfermi\]) by (i) a Fourier transform to operators $c^\dagger_k$ and $c_k$, where $k$ is the wavevector, followed by (ii) a Bogoliubov-Valatin transformation in which new fermion creation operators, $\gamma^\dagger_k$, are formed as a linear combination of $c^\dagger_k$ and $c_{-k}$ in order to remove the terms in $\cal H$ which do not conserve particle number. In the random case, we proceed in an analogous way. We define a column vector, $\Psi$, and its Hermitian conjugate row vector $\Psi^\dagger$, each of length $2L$, by $$\Psi^\dagger = (c^\dagger_1, c^\dagger_2, \ldots, c^\dagger_L, c_1, c_2, \ldots, c_L) \ . \label{Psi}$$ Note that the $\Psi$ and $\Psi^\dagger$ satisfy the fermion anticommutation relations $$\Psi^\dagger_i \Psi_j + \Psi_j \Psi^\dagger_i = \delta_{ij}$$ $$\Psi^\dagger_i \Psi^\dagger_j + \Psi^\dagger_j \Psi^\dagger_i = \Psi_i \Psi_j + \Psi_j \Psi_i = 0, \label{comm-rels}$$ irrespective of whether $\Psi_i$ refers to a creation or annihilation operator. For reasons that will become clear below, the Hamiltonian is written in a symmetrical form, replacing $c_i c_{i+1}$ by $(c_i c_{i+1} -c_{i+1} c_i)/2$, and $c^\dagger_i c_{i+1}$ by $(c^\dagger_i c_{i+1} -c_{i+1} c^\dagger_i)/2$ etc.
It can then be written in terms of a real-symmetric $2L \times 2L$ matrix, $\tilde{H}$, as $${\cal H} = \Psi^\dagger \tilde{H} \Psi \ ,$$ where $\tilde{H}$ has the form $$\tilde{H} = \left[ \begin{array}{rr} A & B \\ -B & -A \end{array} \right] \ , \label{htilde}$$ where $A$ and $B$ are $L \times L$ matrices with elements given, for periodic boundary conditions, by $$\begin{aligned} A_{i,i} & = & h_i \nonumber \\ A_{i,i+1} & = & -J_i/2 \nonumber \\ A_{i+1,i} & = & -J_i/2 \nonumber \\ B_{i,i+1} & = & -J_i/2 \nonumber \\ B_{i+1,i} & = & J_i/2 \ , \label{blocks}\end{aligned}$$ where $i+1$ is replaced by $1$ for $i=L$. Note that $A$ is symmetric and $B$ is antisymmetric so $\tilde{H}$ is indeed symmetric as claimed. For antiperiodic boundary conditions, one changes the sign of the terms connecting sites $L$ and 1 in Eq. (\[blocks\]). Next we diagonalize $\tilde{H}$ numerically, using standard routines[@recipes], to find the single particle eigenstates with eigenvalues $\epsilon_\mu$, $\mu = 1, 2, \ldots 2 L$ and eigenvectors $\Phi^\dagger_\mu$ which are linear combinations of the $\Psi^\dagger_i$ with real coefficients. We require that the $\Phi^\dagger_\mu$ have the same commutation relations as the $\Psi^\dagger_i$, see Eq. (\[comm-rels\]), which is satisfied provided the transformation from the $\Psi_i$ to the $\Phi_\mu$ is orthogonal, which in turn is guaranteed by the symmetry of $\tilde{H}$ that we enforced above. If we interchange the $c^\dagger_i$ with the $c_i$ in Eq. (\[Psi\]) then $\tilde{H}$ changes sign. Hence the eigenstates come in pairs, with eigenvectors that are Hermitian conjugates of each other and eigenvalues which are equal in magnitude and opposite in sign. We can therefore define $\Phi^\dagger_\mu = \gamma^\dagger_\mu$ if $\epsilon_\mu > 0$ and $\Phi^\dagger_{\mu^\prime} = \gamma_\mu$ if $\mu^\prime$ is the state with energy $-\epsilon_\mu$.
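For concreteness, the assembly of $\tilde{H}$ can be sketched in a few lines of NumPy (a minimal illustration, not the code used for the results below; the helper name `build_Htilde` is ours). The sign conventions are fixed by requiring that $\Psi^\dagger \tilde{H} \Psi$ reproduce Eq. (\[hamfermi\]); the $\pm\epsilon$ pairing of the spectrum noted above is checked at the end:

```python
import numpy as np

def build_Htilde(h, J, antiperiodic=False):
    """Real-symmetric 2L x 2L matrix [[A, B], [-B, -A]] of Eq. (htilde).
    Signs are chosen so that Psi† Htilde Psi reproduces Eq. (hamfermi);
    antiperiodic=True flips the sign of the bond joining sites L and 1."""
    L = len(h)
    A = np.diag(np.asarray(h, dtype=float))
    B = np.zeros((L, L))
    for i in range(L):
        j = (i + 1) % L
        s = -1.0 if (antiperiodic and i == L - 1) else 1.0
        A[i, j] += -s * J[i] / 2
        A[j, i] += -s * J[i] / 2
        B[i, j] += -s * J[i] / 2      # B is antisymmetric
        B[j, i] +=  s * J[i] / 2
    return np.block([[A, B], [-B, -A]])

rng = np.random.default_rng(0)
L = 128
h = rng.uniform(0, 1, L)      # uniform distribution at criticality, h0 = 1
J = rng.uniform(0, 1, L)
Ht = build_Htilde(h, J, antiperiodic=True)
eps = np.linalg.eigvalsh(Ht)  # real, since Ht is symmetric
# eigenvalues come in (+eps, -eps) pairs, as argued in the text
print(np.allclose(np.sort(eps), np.sort(-eps)))  # True
```

A single diagonalization of the $2L \times 2L$ matrix replaces the $2^L$-dimensional many-body problem, which is why chains of a few hundred sites are accessible.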
The Hamiltonian can then be written just in terms of $L$ (rather than $2L$) modes as $${\cal H} = \sum_{\mu=1}^L \epsilon_\mu (\gamma^\dagger_\mu \gamma_\mu - \gamma_\mu \gamma^\dagger_\mu) \ , \label{hamdiag}$$ where all the $\epsilon_\mu$ are now taken to be positive. From Eqs. (\[hamfermi\]) and (\[hamdiag\]), one sees that if all the $J_i$ are zero, then the $\epsilon_\mu$ equal the $h_i$, as expected. We shall denote by “quasi-particles” excitations created by the $\gamma^\dagger_\mu$, whereas excitations created by the $c^\dagger_i$ will be called “bare particles”. The many-particle states are obtained by either having or not having a quasi-particle in each of the eigenstates. One has to be careful, though, because, with periodic boundary conditions, the number of bare particles, $\cal N$ in Eq. (\[N\]), must be odd, while for states with anti-periodic boundary conditions the number must be even. Thus, to generate all the many-body states one needs to solve the fermion problem for [*both*]{} periodic and anti-periodic boundary conditions, and keep only [*half*]{} the states in each case. In order to determine which states correspond to the ground state and the first excited state it is useful to consider first the non-random case[@lsm; @pfeuty]. There the ground state is in the sector with antiperiodic boundary conditions, and has no quasiparticles, which corresponds to $\cal N$ even as required. Hence the ground state energy is given by $$E_0 = -\sum_{\mu=1}^L \epsilon_\mu^{ap} \ , \label{E0}$$ where we indicate that the energies are to be evaluated with antiperiodic boundary conditions. The first excited state is in the sector with periodic boundary conditions. In the disordered phase, there is one quasi-particle, in the eigenstate with lowest energy, and this state has an odd number of bare particles, as required.
Hence the energy of the first excited state of the pure system in the disordered phase is given by $$E_1 = \epsilon_1^{p} -\sum_{\mu=2}^L \epsilon_\mu^{p} \quad (\delta > 0) \ , \label{E1a}$$ where we have ordered the energies such that $\epsilon_1$ is the smallest. At the critical point of the non-random model, $\epsilon_1$ becomes zero. From the conventional point of view, one then says that $\epsilon_1$ becomes negative in the ordered phase. From the perspective of our numerical calculations, it is more convenient to define all the $\epsilon_\mu$ to be positive, which means that we are effectively interchanging the role of the creation and annihilation operators, $\gamma^\dagger_1$ and $\gamma_1$. Hence, in our point of view, there are now no quasi-particles, but this still corresponds to an odd number of bare particles. From either point of view, the energy of the first excited state of the pure system in the ordered phase is given by $$E_1 = -\sum_{\mu=1}^L \epsilon_\mu^{p} \quad (\delta < 0) \ , \label{E1b}$$ with all the $\epsilon_\mu^p$ taken to be positive. Note that in the disordered phase there is a finite gap, $2 \epsilon_1$, in the thermodynamic limit, whereas in the ordered phase the gap tends exponentially to zero as $L \to \infty$. This is the manifestation of broken symmetry. Note also that we can rephrase the result for $E_1$ of the pure system by saying that it is given by Eq. (\[E1a\]) if the state with no quasi-particles has an even number of bare particles and by Eq. (\[E1b\]), if it has an odd number (taking all the $\epsilon_\mu$ to be positive). For the random problem the picture turns out to be very similar. We find that the ground state energy is given by Eq. (\[E0\]) and the lowest excited state has energy given either by Eq. (\[E1a\]) or Eq. (\[E1b\]), depending on whether the state with no quasi-particles has an even or an odd number of bare particles[@comment], $\cal N$. The parity of $\cal N$ is determined from Eq. (\[parity\]) below.
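This bookkeeping can be checked against brute force on a small chain. The sketch below (illustrative only; it assumes the spin Hamiltonian $H = -\sum_i h_i\sigma^x_i - \sum_i J_i \sigma^z_i\sigma^z_{i+1}$ of Eq. (\[ham\]) and couplings in the disordered phase) compares $E_0$ from Eq. (\[E0\]), and the two candidates of Eqs. (\[E1a\]) and (\[E1b\]), with complete diagonalization for $L=6$:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
L = 6
h = rng.uniform(1, 2, L)          # fields larger than bonds: disordered phase
J = rng.uniform(0, 1, L)          # J[L-1] closes the ring

# complete diagonalization of H = -sum_i h_i sx_i - sum_i J_i sz_i sz_{i+1}
sx, sz, I2 = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.]), np.eye(2)
def op(m, i):
    mats = [I2] * L
    mats[i] = m
    return reduce(np.kron, mats)
H = sum(-h[i] * op(sx, i) for i in range(L)) \
  + sum(-J[i] * op(sz, i) @ op(sz, (i + 1) % L) for i in range(L))
E_exact = np.sort(np.linalg.eigvalsh(H))

# free fermions: the L positive single-particle energies in each sector
def eps(antiperiodic):
    A = np.diag(h.copy())
    B = np.zeros((L, L))
    for i in range(L):
        j = (i + 1) % L
        s = -1.0 if (antiperiodic and i == L - 1) else 1.0
        A[i, j] += -s * J[i] / 2; A[j, i] += -s * J[i] / 2
        B[i, j] += -s * J[i] / 2; B[j, i] +=  s * J[i] / 2
    return np.linalg.eigvalsh(np.block([[A, B], [-B, -A]]))[L:]

e_ap, e_p = eps(True), eps(False)
E0  = -e_ap.sum()                 # Eq. (E0): no quasi-particles, APBC sector
E1a = e_p[0] - e_p[1:].sum()      # Eq. (E1a): one quasi-particle, PBC sector
E1b = -e_p.sum()                  # Eq. (E1b)
print(np.isclose(E0, E_exact[0]),
      np.isclose(E_exact[1], E1a) or np.isclose(E_exact[1], E1b))
```

The two methods agree to machine precision, which is the kind of check described in the next paragraph.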
We have checked that our code is correct by comparing results for $E_0$ and $E_1$ for small sizes obtained from this fermion method with results obtained for the original problem, Eq. (\[ham\]), using both complete diagonalization and also the Lanczos method. In all cases the results agreed to within machine precision. We now proceed to the calculation of the correlation functions in the ground state[@lsm]. As discussed above, this is in the sector with anti-periodic boundary conditions, which will be assumed in the rest of this section, unless otherwise stated. Assuming, without loss of generality, that $j > i$, $C_{ij}$ can be expressed in terms of fermions by $$C_{ij} = \langle (c^\dagger_i + c_i) \exp\left[ -i\pi \sum_{l=i}^{j-1} c^\dagger_l c_l \right] (c^\dagger_j + c_j) \rangle \ ,$$ where the averages are to be evaluated in the ground state. Now $$\begin{aligned} \exp\left[ -i\pi c^\dagger_l c_l \right] & = & -(c^\dagger_l - c_l) (c^\dagger_l + c_l) \label{exponen} \\ & = & (c^\dagger_l + c_l) (c^\dagger_l - c_l) \ ,\end{aligned}$$ so defining $$\begin{aligned} A_l & = & c^\dagger_l + c_l \nonumber \\ B_l & = & c^\dagger_l - c_l \ ,\end{aligned}$$ and noting that $A_i^2 = 1$, one has $$C_{ij} = \langle B_i \left( A_{i+1} B_{i+1} \ldots A_{j-1} B_{j-1} \right) A_j \rangle \ .$$ This rather complicated-looking expression can be evaluated using Wick’s theorem. To see this, note first that $$\begin{aligned} \langle A_i A_j \rangle & = & \langle \delta_{ij} - c^\dagger_j c_i + c^\dagger_i c_j \rangle \nonumber \\ & = & \delta_{ij} \,\end{aligned}$$ (since $c^\dagger_j c_i$ and $c^\dagger_i c_j$ are Hermitian conjugates of each other and a real diagonal matrix element is being evaluated) and similarly $$\langle B_i B_j \rangle = -\delta_{ij} \ .$$ Hence the only non-zero contractions are $\langle A_j B_i \rangle $ and $\langle B_i A_j \rangle $, since $\langle B_i B_i \rangle $ and $\langle A_i A_i \rangle $ never occur.
Defining $$\langle B_i A_j \rangle = - \langle A_j B_i \rangle = G_{ij} \ ,$$ the correlation function is given by a determinant $$C_{ij} = \left| \begin{array}{cccc} G_{i, i+1} & G_{i, i+2} & \cdots & G_{ij} \\ G_{i+1, i+1} & G_{i+1, i+2} & \cdots & G_{i+1,j} \\ \vdots & \vdots & \ddots & \vdots \\ G_{j-1, i+1} & G_{j-1, i+2}& \cdots & G_{j-1, j} \end{array} \right| \ , \label{det}$$ which is of size $j-i$. $G_{ij}$ can be expressed in terms of the eigenvectors of the matrix $\tilde{H}$ in Eq. (\[htilde\]). Let us write $$\begin{aligned} c^\dagger_i + c_i & = & \sum_\mu \phi_{\mu i} (\gamma^\dagger_\mu + \gamma_\mu) \nonumber \\ c^\dagger_i - c_i & = & \sum_\mu \psi_{\mu i} (\gamma^\dagger_\mu - \gamma_\mu) \ ,\end{aligned}$$ where $\psi$ and $\phi$ can be shown to be orthogonal matrices. It follows that $$\begin{aligned} G_{ij} & = & \langle (c^\dagger_i - c_i) (c^\dagger_j + c_j) \rangle \nonumber \\ & = & \sum_\mu \psi_{\mu i} \phi_{\mu j} \langle (\gamma^\dagger_\mu - \gamma_\mu) (\gamma^\dagger_\mu + \gamma_\mu) \rangle \nonumber \\ & = & - (\psi^T \phi)_{ij} \ , \label{G}\end{aligned}$$ since $\langle \gamma^\dagger \gamma^\dagger \rangle = \langle \gamma \gamma \rangle = 0$ and there are no quasi-particles in the ground state so $\langle \gamma^\dagger \gamma \rangle = 0$. Numerically it is straightforward to compute the $G_{ij}$ from Eq. (\[G\]) and then insert the results into Eq. (\[det\]) to determine the $C_{ij}$ for all $i$ and $j$. Finally we note that the parity of the number of bare particles, $\cal N$, in the state with no quasi-particles can also be obtained, [*for either boundary condition*]{}, from the $G_{ij}$ since $$\begin{aligned} \langle \exp(i\pi {\cal N}) \rangle & = & \langle \prod_{i=1}^L B_i A_i \rangle \nonumber \\ & = & \det G \ , \label{parity}\end{aligned}$$ where we assumed that $L$ is even, otherwise, from Eq. (\[exponen\]), there would be an additional minus sign.
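Putting the pieces together, the whole correlation-function pipeline fits in a short NumPy sketch (illustrative only, not the production code; the sign conventions are ours, fixed by matching $\Psi^\dagger\tilde{H}\Psi$ to Eq. (\[hamfermi\])): build $\tilde{H}$ in the antiperiodic sector, form $G$ from Eq. (\[G\]), evaluate the determinant of Eq. (\[det\]), and cross-check against $\langle \sigma^z_i \sigma^z_j\rangle$ from complete diagonalization of a small chain:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)
L = 6
h = rng.uniform(1, 2, L)      # disordered phase: ground state = APBC vacuum
J = rng.uniform(0, 1, L)

# eigenvectors of Htilde in the antiperiodic (ground-state) sector
A = np.diag(h.copy())
B = np.zeros((L, L))
for i in range(L):
    j = (i + 1) % L
    s = -1.0 if i == L - 1 else 1.0          # flip the boundary bond (APBC)
    A[i, j] += -s * J[i] / 2; A[j, i] += -s * J[i] / 2
    B[i, j] += -s * J[i] / 2; B[j, i] +=  s * J[i] / 2
w, V = np.linalg.eigh(np.block([[A, B], [-B, -A]]))

keep = w > 0                   # the L positive-energy modes gamma_mu
u = V[:L, keep].T              # coefficient of c†_i in gamma†_mu   (mu, i)
v = V[L:, keep].T              # coefficient of c_i
phi, psi = u + v, u - v        # the orthogonal matrices of the text
G = -(psi.T @ phi)             # G_ij = <B_i A_j> = -(psi^T phi)_ij, Eq. (G)

def corr(i, j):
    """C_ij = <sz_i sz_j>: the (j - i) x (j - i) determinant of Eq. (det)."""
    return np.linalg.det(G[i:j, i + 1:j + 1])

# cross-check against complete diagonalization of the spin chain, Eq. (ham)
sx, sz, I2 = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.]), np.eye(2)
def op(m, i):
    mats = [I2] * L
    mats[i] = m
    return reduce(np.kron, mats)
H = sum(-h[i] * op(sx, i) for i in range(L)) \
  + sum(-J[i] * op(sz, i) @ op(sz, (i + 1) % L) for i in range(L))
gs = np.linalg.eigh(H)[1][:, 0]
for (i, j) in [(0, 2), (1, 4)]:
    print(corr(i, j), gs @ op(sz, i) @ op(sz, j) @ gs)   # the two columns agree
```

The sign ambiguity of each eigenvector cancels in $\psi^T\phi$, so no per-mode phase fixing is needed.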
Results for the energy gap ========================== For the pure system, the energy gap, $$\Delta E = E_1 - E_0 \ ,$$ is finite in the disordered phase, and tends to zero exponentially with the size of the system in the ordered phase. Consider now the random case in the disordered phase, so $[\ln h ]_{\rm av} > [\ln J ]_{\rm av}$. Because of statistical fluctuations, there are finite regions which are locally ordered, i.e. if one were to average just over one such region then the inequality would be the other way round. These regions will have a very small gap. Hence one expects large sample-to-sample fluctuations in the gap, especially for big systems. Data for the distribution of $\ln \Delta E$ at the critical point, $h_0 = 1$, is shown in Fig. \[fig1\] for sizes between 16 and 128. One sees that the distribution gets broader with increasing system size. This is clear evidence that $ z = \infty$ as predicted. The precise prediction is that the log of the characteristic energy scale should vary as the square root of the length scale. With this in mind, Fig. \[fig2\] shows a scaling plot for the distribution of $\ln \Delta E / L^{1/2}$, which works quite well. In the disordered phase, the data looks rather different. Fig. \[fig3\] shows the distribution of $\ln \Delta E$, for $h_0 = 3$. Unlike Fig. \[fig1\], the curves for different sizes now look very similar but shifted horizontally relative to each other. This implies that the data scales with a [*finite*]{} value of $z$, as predicted. Note that in the region of small gaps, the data in Fig. \[fig3\] is a straight line indicating a power-law distribution of gaps. This power-law behavior is not special to the 1-$d$ problem discussed here, but is expected quite generally[@th] in the Griffiths phase for systems with discrete symmetry. The power is related to $z$ as we shall now see.
Well into the disordered phase, excitations which give a small gap are well localized so we assume that the probability of having a small gap is proportional to the size of the system, $L$. This assumption is confirmed by the data in Fig. \[fig3\]. Hence, the probability of having a gap between $\Delta E$ and $\Delta E (1 + \epsilon)$ (for some $\epsilon$) should have the scaling form, $\epsilon L \Delta E^{1/z}$, so the distribution of gaps, $P(\Delta E)$, must vary as $$P(\Delta E) \sim \Delta E^{-1 + 1/z} \ , \label{pde}$$ in the region of small gaps. It is tidier to use logarithmic variables, and the corresponding expression for the distribution of $\ln \Delta E$ is $$\ln \left[ P(\ln \Delta E) \right] = {1\over z} \ln \Delta E + \mbox{const.} \label{plnde}$$ From the slopes in Fig. \[fig3\] we estimate $z \simeq 1.4$, which gives a satisfactory scaling plot as shown in Fig. \[fig4\]. The data does not collapse so well for large gaps, but this may be outside the scaling region. We have carried out a similar analysis for other values of $h_0$. Close to the critical point, it is difficult to determine $z$ because the distribution broadens with increasing size for small sizes (presumably where $L \le \xi$), but then the slope of the straight-line region starts to saturate, corresponding to a large but finite $z$. The sizes that we can study are therefore in a crossover region between conventional dynamical scaling ($z$ finite) and activated dynamical scaling ($z$ infinite) so the data does not scale well with any choice of $z$. Fisher[@dsf] has predicted that $z$ is equal to $1/(2 \delta) + C$ near the critical point, where $C$ is a non-universal constant, see Eq. (\[zdiverge\]). We show our estimates for $1/z$ plotted against $\delta$ (which is related to $h_0$ by Eq. (\[delta\_h0\])), in Fig. \[fig5\]. Also shown is a fit of $1/z$ to $2\delta(1- 2\delta C)$, which corresponds to Eq. (\[zdiverge\]) and which works quite well with $C=0.311$.
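The extraction of $1/z$ from the slope can be illustrated on synthetic data: if gaps are drawn from the pure power law of Eq. (\[pde\]) (here with $z = 1.4$, mimicking the fitted value; this is a toy data set, not the actual chain), a straight-line fit to the histogram of $\ln \Delta E$ recovers $1/z$, as in Eq. (\[plnde\]):

```python
import numpy as np

rng = np.random.default_rng(2)
z = 1.4                                    # exponent to be recovered
# gaps with P(dE) ~ dE^(-1 + 1/z) on (0, 1]:  dE = U**z for U uniform in (0, 1)
x = z * np.log(rng.uniform(size=200_000))  # x = ln(dE); density (1/z) e^{x/z}, x < 0
dens, edges = np.histogram(x, bins=40, range=(-10.0, -1.0), density=True)
mids = 0.5 * (edges[1:] + edges[:-1])
good = dens > 0
slope = np.polyfit(mids[good], np.log(dens[good]), 1)[0]
print(slope)                               # close to 1/z = 0.714..., cf. Eq. (plnde)
```

In practice the fit window matters: the power law only holds in the small-gap region, which is why only the straight-line part of Fig. \[fig3\] is used.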
The exponents are predicted to be universal, i.e. independent of the distributions $\pi(J)$ and $\rho(h)$ (as long as these don’t have anomalously long tails). To test universality, we also did some calculations for a bimodal distribution, in which $J$ and $h$ take one of two values, $$\begin{aligned} \pi(J) & = & {1\over 2} \left[ \delta(J - 1) + \delta(J - 3) \right] \nonumber \\ \rho(h) & = & {1\over 2} \left[ \delta(h - h_0) + \delta(h - 3 h_0) \right] \ . \label{dist2}\end{aligned}$$ The critical point is at $h_0 = 1$. The data for $\Delta E$ at the critical point is shown in Fig. \[fig6b\] and the scaling plot is presented in Fig. \[fig6\]. The data scales reasonably well indicating that $z=\infty$ at the critical point, just as for the continuous distribution in Eq. (\[dist\]). The data collapse is not as good as for the continuous distribution, presumably indicating that the approach to the scaling limit is slower. Results for correlation functions ================================= We start by looking at the correlation functions at the critical point and then discuss our results in the disordered phase. The average correlation function at the critical point is shown in a log-log plot in Fig. \[fig7\] for several sizes. The data for the larger sizes lie on a straight line, and the dashed line, which is a fit to the $L=128$ data for $7 \le r \le 35$, has a slope of $-0.38$, in excellent agreement with Fisher’s [@dsf] prediction in Eq. (\[cav\]). A graph of the average of the log of the correlation function (which corresponds to the log of a [*typical*]{} correlation function) is shown in Fig. \[fig8\] plotted against $\sqrt{r}$. As expected[@sm; @dsf] from Eq. (\[logcr\]) the data falls on a straight line. The data in Figs. \[fig7\] and \[fig8\] indicate that the average and typical correlation functions do behave very differently at the critical point, as predicted.
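The mechanism behind this difference is easy to reproduce with a toy model: take $-\ln C = u\sqrt{r}$ with an $r$-independent random coefficient $u$ (here half-normal, an arbitrary illustrative choice, not the actual critical distribution). The average is then dominated by rare samples with $u$ near zero and decays only as a power of $r$, while the typical value decays as a stretched exponential:

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.abs(rng.standard_normal(1_000_000))   # O(1) random coefficient of sqrt(r)

def averages(r):
    lnC = -u * np.sqrt(r)                    # broad distribution of ln C at scale r
    return np.exp(lnC).mean(), np.exp(lnC.mean())

for r in (100, 400, 1600):
    avg, typ = averages(r)
    print(r, avg, typ)
# for this toy choice the average decays as ~ r^{-1/2} (dominated by rare samples
# with u near 0), while the typical value decays as exp(-c sqrt(r))
```

Quadrupling $r$ roughly halves the average but suppresses the typical value by many orders of magnitude, which is the qualitative content of Figs. \[fig7\] and \[fig8\].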
The reason for this difference is that the distribution of $\ln C(r)$ is very broad, as can be seen in the plot in Fig. \[fig9\]. Fisher[@dsf] has predicted that the distribution of $(\ln C(r)) / \sqrt{r}$ should be universal and independent of $r$ so we show the corresponding scaling plot in Fig. \[fig10\]. Note that the distribution monotonically decreases as $C(r)$ becomes smaller. The data scales well for larger values of $C$, including even the upturn near the right-hand edge of the graph. This is the region where $C(r)$ is anomalously large and which gives the dominant contribution to the average correlation function. An interesting question, then, is whether the value for the average correlation function is included in the scaling function for $\ln C(r) / \sqrt{r}$. If so, the scaling function will [*diverge*]{} as a power near the origin[@dsf:pc]. To see this, note that if the probability of having a correlation $C$ at a distance $r$ only depends on the combination $$y = (\ln C) / \sqrt{r}\ ,$$ and that if $$P(y) \sim {1 \over y^{\lambda}} \label{disty}$$ for small $y$, then the average value of $C(r)$ is given by $$[C(r)]_{\rm av} \sim \int_\epsilon^1 dC \left({\ln C \over r^{1/2} }\right)^{-\lambda} {1 \over r^{1/2}} \ , \label{cavlambda}$$ assuming that the integral is dominated by the region of small $\ln C$. Integrating over $C$ gives a finite number so $$[C(r)]_{\rm av} \sim {1 \over r^{{1\over 2}(1 - \lambda)}} \ ,$$ and comparing with Eq. (\[cav\]) yields $$\lambda = 2 \phi - 3 \simeq 0.24 \ . \label{lambda}$$ An enlarged log-log plot of the region of the upturn in Fig. \[fig10\] is shown in Fig. \[fig10b\]. The data does lie on a rough straight line whose slope decreases (in magnitude) with increasing $r$.
A fit to the data for $r=24$ in the middle region of the graph has a slope of about 0.45 (in magnitude), larger than 0.24, but since the effective slope decreases with increasing $r$ the data does not rule out the possibility that the distribution of $(\ln C(r)) / \sqrt{r}$ diverges with an exponent of 0.24 for $r \to \infty$. Even if the scaling function for $(\ln C(r)) / \sqrt{r}$ gives the correct [*power*]{} for the average correlation function, it does not necessarily mean that the [*amplitude*]{} is correct, since there could also be additional non-universal contributions to the amplitude, outside the scaling function[@dsf:pc]. We suspect that the systematic deviation in the tail of the distribution in Fig. \[fig10\] at small values of $C(r)$ indicates corrections to scaling for this range of sizes and distances. We now discuss our results for the disordered phase. The scaling plot corresponding to Eq. (\[cscale\]) is shown in Fig. \[fig11\]. The plot has $\nu = 1.8$, which gave the best fit, and which agrees fairly well with the prediction $\nu = 2$. A plot using $\nu = 2$ works somewhat less well, presumably indicating that there are corrections to scaling for this range of lattice sizes and distances. Fig. \[fig11b\] is a scaling plot using the theoretical value $\nu = 2$ which also shows the asymptotic form in Eq. (\[asymp\]) with $D=35$. Both the data and the prediction of Eq. (\[asymp\]) have substantial curvature: much more than in the corresponding data for $\log C(r)$ shown in Figs. \[fig12\] and \[fig12b\]. Over the range of accessible values of $r\delta^2$, the data and the asymptotic prediction do not track each other closely, though it is possible that they would do so for larger values of $r \delta^2$. The data for the log of the scaling function scales well according to Eq. (\[lncscale\]), though the best fit has a slightly different exponent of 1.1, see Fig. \[fig12\].
Presumably this difference again indicates that there are corrections to scaling for the sizes and distances studied. Note that the data in Fig. \[fig12\] is [*close*]{} to a straight line but there [*is*]{} statistically significant curvature. Fig. \[fig12b\] tests the more stringent prediction[@sm; @dsf:pc] for the average of $\ln C(r)$ in the limit $r \gg \tilde{\xi}$ obtained by combining Eq. (\[large\_r\]) with the assumption that the expression for $\tilde{\xi}$ in Eq. (\[xitypex\]) is exact for correlations of $\sigma^z$. One sees that it works quite well. Although our best estimates of the critical exponents $\nu$ and $\tilde{\nu}$ do not quite agree with the theoretical predictions, they are fairly close to those predictions, and they differ [*substantially*]{} from [*each other*]{}, providing clear evidence that there are different correlation length exponents for the average and typical correlation functions. Finally, in this section, we look at the [*distribution*]{} of $\ln C(r)$ for $r$ larger than either the average or typical correlation lengths. It is predicted[@sm; @dsf] that the distribution of $(\ln C(r)) / r$ should become [*sharp*]{} at large $r$ in this limit. Fig. \[fig13\] shows data for the distribution of $\ln C(r)$ at $h_0 = 3$. One sees that both the peak position and the width increase with increasing $r$, but the peak position increases faster as can be seen in Fig. \[fig14\] which shows the distribution of $(\ln C(r)) / r$. In Fig. \[fig13b\] we test the more precise predictions[@sm; @dsf:pc] for the mean and variance of $\ln C(r)$ given in Eqs. (\[ctilde\]) and (\[varctilde\]), for $h_0 = 3$. The fits give reasonable agreement with Eqs. (\[ctilde\]) and (\[varctilde\]), but assume a form of corrections to scaling that we have been unable to justify.
The local susceptibility ======================== In this section we discuss the [*local*]{} susceptibility rather than the uniform susceptibility, because it has somewhat simpler behavior. Since it just involves correlations on a single site, any singularity must come only from long-time correlations, whereas the uniform susceptibility involves correlations both in space and time. Our results for the uniform susceptibility away from the critical point do not scale in a simple manner, and we suspect that there are logarithmic corrections, as occurs for bulk behavior at finite temperature[@dsf]. Since it is difficult to compute the susceptibility from the fermion method, particularly with periodic boundary conditions, we have used the Lanczos diagonalization technique on the original spin Hamiltonian, Eq. (\[ham\]). Of course the price we pay is that the lattices are much smaller, $L \le 16$. The local susceptibility at $T=0$ is given by $$\chi_{\mbox{\tiny loc}} = 2 \sum_{n \ne 0} { | \langle 0 | \sigma^z_i | n \rangle |^2 \over E_n - E_0 } \ , \label{chi}$$ where $|n\rangle$ denotes a many-body state of the system and $|0\rangle$ is the ground state. Because of the form of Eq. (\[chi\]) we expect that the scaling of $\chi_{\mbox{\tiny loc}}$ will be very similar to that of $ 1/\Delta E$. This is indeed the case as seen in Fig. \[fig15\], which plots the distribution of $\ln \chi_{\mbox{\tiny loc}}$ at the critical point. The distribution broadens with system size, consistent with $z=\infty$. The data scales in the expected manner, as shown in Fig. \[fig16\], which is very similar to the corresponding plot for the energy gap in Fig. \[fig2\]. Even though the range of sizes used in the Lanczos method is rather small, it is, nonetheless, capable of distinguishing $z=\infty$ scaling at the critical point from finite-$z$ scaling away from the critical point. This can be seen by comparing Fig. \[fig15\] with Fig.
\[fig17\], which plots the distribution of $\ln \chi_{\mbox{\tiny loc}}$ at $h_0 = 3$. In Fig. \[fig17\] the curves no longer broaden with increasing $L$ but the distributions are [*independent*]{} of size. The reason why there is no size dependence here but there is in the distributions of $\ln \Delta E$ in Fig. \[fig3\] is easy to understand. For $\Delta E$, we compute the probability [*per sample*]{} of getting a certain value, and this is proportional to $L$ in the disordered phase for small $\Delta E$, since the rare strongly correlated region can occur anywhere. We used this result in Section IV to relate the exponent in the distribution to $1/z$, see Eqs. (\[pde\]) and (\[plnde\]). With $\chi_{\mbox{\tiny loc}}$, however, we compute the probability [*per site*]{}, so there is no factor of $L$ and the distribution is independent of size. This is, of course, the normal state of affairs when the lattice size is much larger than the correlation length. The slope of the straight-line region in Fig. \[fig17\] agrees with the slopes in Fig. \[fig3\] and so gives the same value of $z$ as obtained from the gap, i.e. $z \simeq 1.4$. The Structure Factor ==================== A scattering experiment directly measures the structure factor, $S(q)$, defined by $$S(q) = {1\over L}\sum_{j,l} C_{jl} e^{i q (j - l)} \ .$$ Although the distribution of individual terms in the sum is broad, it is interesting, and relevant for experiment, to ask whether there are large sample-to-sample fluctuations in the total. We have attempted to answer this for $q=0$, where fluctuations are expected to be largest. Fig. \[fig18\] shows a log-log plot of the average of the structure factor, $S_{\rm av}(0)$, and the standard deviation among different samples, $\delta S(0)$, plotted against $L$ at the critical point. From Eq. (\[cav\]) one expects the average to vary as $L^{0.62}$ and the best fit to the numerical data has a slope of $0.64$, in reasonably good agreement.
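The definition of $S(q)$ can be illustrated with a toy translation-invariant correlation matrix (an arbitrary exponential form, not the random-chain $C_{jl}$); the sketch also checks the exact sum rule $(1/L)\sum_q S(q) = C_{jj} = 1$ over the $L$ discrete momenta:

```python
import numpy as np

L, xi = 64, 8.0
sites = np.arange(L)
# toy translation-invariant correlations, C_jl = exp(-|j - l| / xi)
C = np.exp(-np.abs(sites[:, None] - sites[None, :]) / xi)

def S(q):
    """Structure factor S(q) = (1/L) sum_{j,l} C_jl exp(iq(j-l)), by direct sum."""
    phase = np.exp(1j * q * (sites[:, None] - sites[None, :]))
    return (C * phase).sum().real / L

qs = 2 * np.pi * sites / L             # the L allowed momenta
s0 = S(0.0)                            # of order 2*xi for L >> xi
total = sum(S(q) for q in qs) / L      # sum rule: equals C_jj = 1 exactly
print(s0, total)
```

For correlations decaying exponentially with correlation length $\xi$, $S(0)$ grows like $\xi$; at criticality the power-law decay of Eq. (\[cav\]) instead gives the $L^{0.62}$ growth quoted above.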
One sees that $\delta S(0) < S_{\rm av}(0)$, but the ratio of the width to the mean stays finite. One expects[@dsf:pc] that the distribution of $S(0) / S_{\rm av}(0)$ will be broad and independent of $L$ at large $L$, and our data is consistent with this. Note that although the equal-time structure factor is not self-averaging at $T=0$, its distribution is much less broad than that of the susceptibility, which involves correlations in time. Since the structure factor at the critical point behaves like the average, rather than the typical correlation function, we expect that the behavior away from criticality will be controlled by the average correlation length. Conclusions =========== We have been able to confirm the many surprising predictions of the random transverse-Ising spin chain by applying the mapping to free fermions numerically. In particular we find very broad distributions of the energy gap and correlation functions, different exponents for the average and the typical correlation functions, and an infinite value of the dynamical exponent, $z$, at the critical point. Perhaps the most interesting [*new*]{} result is the scaling function for the distribution of the log of the correlation function at criticality, shown in Fig. \[fig10\], which is monotonic and has an upturn as the abscissa approaches zero. If this indicates the divergence shown in Eqs. (\[disty\]) and (\[lambda\]), the scaling function for $(\ln C(r)) / r^{1/2}$ would also give the correct exponent (though perhaps not the correct amplitude) for the average correlation function. We have seen that the width of the distribution of the equal-time structure factor seems to be comparable to the mean at $T=0$, though presumably it becomes self-averaging at finite-$T$. By contrast, the $T=0$ susceptibility and local susceptibility have enormously broad distributions.
One expects that the susceptibility will also become self-averaging at finite $T$ for sufficiently large $L$, but whether the necessary size diverges as a power law or exponentially as $T \to 0$ is unclear. We leave this interesting question for future study. Crisanti and Rieger[@cr] have studied the random transverse Ising chain by Monte Carlo methods. They took a generalization of the bimodal distribution in Eq. (\[dist2\]) rather than the continuous distribution used here. From the behavior of various correlation functions they found a finite $z$ at criticality, which, however, appeared to increase with increasing randomness. We saw in section IV that corrections to finite-size scaling appear to be larger for this distribution than for the continuous one and, furthermore, it is harder to estimate the asymptotic value of $z$ from correlation functions than from distributions. This is presumably why Crisanti and Rieger[@cr] did not find $z=\infty$ in their study. After this work was largely completed we became aware of related work by Asakawa and Suzuki[@as], who also used the mapping to free fermions but used the same distribution as Crisanti and Rieger. In contrast to our results, they claim that the exponents depend on the parameters in the distribution. This lack of universality is [*not*]{} predicted by theory[@sm; @dsf] and a possible explanation of the discrepancy is that not all their data is in the asymptotic scaling regime, which is likely to be reached at different lattice sizes for different distributions. It is interesting to speculate to what extent the results of the one-dimensional system go over to higher dimensions. In particular, one would like to know if $z$ is infinite at the critical point or takes a finite value for $d > 1$. The results for the local susceptibility in Section VI indicate that this question [*can*]{} be answered even for moderately small lattice sizes [*provided appropriate quantities are studied*]{}.
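The procedure of extracting $z$ from the slope of the distribution of the log of the gap (Eqs. (\[pde\]) and (\[plnde\])) can be illustrated with a short numerical sketch on synthetic data. The power-law form, sample size and fitting window below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch (synthetic data, our own choices): draw energy gaps from a
# power-law distribution P(Delta) ~ Delta^{1/z - 1}, the small-gap form
# implied by Eq. (plnde), and recover z from the slope of the distribution
# of ln(Delta), as done for the Griffiths phase in the text.
rng = np.random.default_rng(1)
z_true = 1.4
gaps = rng.random(200_000) ** z_true   # Delta = U^z  =>  P(Delta) ~ Delta^{1/z-1}
log_gaps = np.log(gaps)

# Histogram ln(Delta); on a log scale the small-gap tail is a straight
# line with slope 1/z.
counts, edges = np.histogram(log_gaps, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (counts > 0) & (centers < -2.0) & (centers > -12.0)  # fit the tail only
slope = np.polyfit(centers[mask], np.log(counts[mask]), 1)[0]
z_est = 1.0 / slope
```

With finite $z$ the fitted slope is size-independent, whereas for $z=\infty$ no single straight line describes the tails for all $L$, which is the diagnostic advocated above.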
The distribution of $\ln \chi_{\mbox{\tiny loc}}$ (or the log of the gap) seems to be particularly convenient, since, for finite $z$, data for different sizes look essentially the same, whereas for $z=\infty$ the curves get broader and broader. Of course it is still difficult to distinguish a large but finite $z$ from $z=\infty$, since the two would look the same for small sizes. With finite-$z$ scaling, the distribution has power-law behavior, the power being related to $z$ as shown in Eq. (\[plnde\]). It is more difficult to determine $z$ by looking at the decay of correlation functions, because the asymptotic behavior is only seen at very large times or distances. Numerical studies in higher dimensions are likely to use quantum Monte Carlo simulations, because diagonalization methods, such as Lanczos, can only be carried out on very small systems and the mapping to free fermions only works in one dimension. Unfortunately, there is an additional difficulty with quantum Monte Carlo, not present here, because one generally works in imaginary time, which has to be discretized. The quantum problem is recovered when the number of time slices tends to infinity, but in practice one can only simulate a finite number. It is unclear whether the extrapolation to an infinite number of time slices will pose serious difficulties for the study of Griffiths singularities and critical phenomena in higher dimensional systems. We have seen that the disordered Griffiths phase can be conveniently parameterized by a continuously varying dynamical exponent $z$. This characterizes the distribution of the energy gap or local susceptibility for lattice sizes which satisfy the condition $L \gg \xi$. By contrast, at the critical point, the correlation length diverges, so the value of $z$ at criticality involves physics in the opposite limit, $L \ll \xi$. It is therefore possible that the limit of $z(\delta)$ for $\delta \to 0$ is not equal to the value of $z$ at criticality.
Both these quantities are infinite for the transverse field Ising chain, but it would be interesting to see if there is a difference between them in higher dimensions. We expect that the results and method of analysis presented here will provide guidance for such a study. We should like to thank D. S. Fisher for many stimulating comments and suggestions, and for a critical reading of the manuscript. The work of APY is supported by the National Science Foundation under grant No. DMR–9411964. HR thanks the Physics Department of UCSC for its kind hospitality and the Deutsche Forschungsgemeinschaft (DFG) for financial support. His work was performed within the Sonderforschungsbereich SFB 341 Köln-Aachen-Jülich. By a quantum phase transition we mean a transition that occurs at $T=0$, induced by varying some parameter other than the temperature. For the model in Eq. (\[ham\]), this parameter will be the average value of the transverse field divided by the average interaction, $h_0$ in Eq. (\[dist\]). Alternatively, we can parameterize the model by $\delta$ defined in Eq. (\[delta\]), which is related to $h_0$ by Eq. (\[delta\_h0\]). R. B. Griffiths, Phys. Rev. Lett. [**23**]{}, 17 (1969). A. B. Harris, Phys. Rev. B [**12**]{}, 203 (1975). B. M. McCoy and T. T. Wu, Phys. Rev. [**176**]{}, 631 (1968); [**188**]{}, 982 (1969). R. Shankar and G. Murthy, Phys. Rev. B [**36**]{}, 536 (1987). D. S. Fisher, Phys. Rev. Lett. [**69**]{}, 534 (1992); Phys. Rev. B [**51**]{}, 6411 (1995). E. Lieb, T. Schultz and D. Mattis, Ann. Phys. (NY) [**16**]{}, 407 (1961). S. Katsura, Phys. Rev. [**127**]{}, 1508 (1962). P. Pfeuty, Ann. Phys. (NY) [**27**]{}, 79 (1970); Thèse, Université de Paris (1970). We use the term “Griffiths phase” to denote the range of values of $\delta$ where Griffiths singularities occur at $T=0$.
In general, this covers that part of the disordered phase where some of the $h_i$ are smaller than some of the $J_i$, and also that part of the ordered phase where some of the $J_i$ are smaller than some of the $h_i$. For the distribution in Eq. (\[dist\]), which has no lower bound on the $h_i$ or $J_i$, the Griffiths phase occurs for [*all*]{} $\delta$. In this paper we do not consider the ordered phase ($\delta < 0$). D. S. Fisher (private communication). W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, “Numerical Recipes”, 2nd Edition (Cambridge University Press, Cambridge, 1992). For the random problem, as well as for the pure problem, one eigenvalue hits zero as $h_0$ (or equivalently $\delta$) is varied. For the pure system this occurs precisely at the bulk critical point, but, for the random case, the point where the eigenvalue vanishes depends on the particular realization of the disorder, the values for different samples being scattered around the critical point. Hence, for the pure system, one knows to choose Eq. (\[E1a\]) or (\[E1b\]) for $E_1$ depending on whether the system is in the disordered or ordered phase, but there is no such simple criterion for the random case. We therefore need to explicitly check whether the number of bare particles, $\cal N$, is even or odd in the state with no quasi-particles. M. J. Thill and D. A. Huse, Physica A [**214**]{}, 321 (1995). A. Crisanti and H. Rieger, J. Stat. Phys. [**77**]{}, 1087 (1994). H. Asakawa and M. Suzuki (unpublished).
**NEW REFLECTIONS ON HIGHER DIMENSIONAL LINEARIZED GRAVITY**       C. García-Quintero[^1], A. Ortiz and J. A. Nieto[^2]   *Facultad de Ciencias Físico-Matemáticas de la Universidad Autónoma* *de Sinaloa, 80010, Culiacán, Sinaloa, México*     **Abstract**   We make a number of remarks on linearized gravity with cosmological constant in any dimension, which, we argue, can be useful in a quantum gravity framework. For this purpose we assume that the background space-time metric corresponds to the de Sitter or anti-de Sitter space. Moreover, *via* the graviton mass and cosmological constant correspondence, we make some interesting observations, paying special attention to the possible scenario of a graviton-tachyon connection. We compare our proposed formalism with the Novello and Neves approach.           Keywords: linearized gravity, graviton, quantum gravity Pacs numbers: 04.20.Jb, 04.50.-h, 04.60.-m, 11.15.-q March, 2019 **1. Introduction**   It is known that there are a number of works relating tachyons with $M$-theory \[1\] (see also Ref. \[2\] and references therein), including the brane and anti-brane systems \[3\], closed-string tachyon condensation \[4\], and tachyonic instability and tachyon condensation in the $E(8)$ heterotic string \[5\], among many others. Part of the motivation for these developments emerged from the discovery that the ground state of the bosonic string is tachyonic \[6\] and that the spectrum in $AdS/CFT$ \[7\] can contain a tachyonic structure. On the other hand, it is also known that the $(5+5)$-signature and the $(1+9)$-signature are common to both type IIA strings and type IIB strings. In fact, versions of $M$-theory lead to type IIA string theories in space-times of signatures $(0+10)$, $(1+9)$, $(2+8)$, $(4+6)$ and $(5+5)$, and to type IIB string theories of signatures $(1+9)$, $(3+7)$ and $(5+5)$ \[8\]. It is worth mentioning that some of these theories are linked by duality transformations.
So, one wonders whether tachyons may also be related to the various signatures. In particular, here we are interested in the possible relation of tachyons with a space-time of $(4+6)$-dimensions. Part of the motivation for the $(4+6)$-signature arises from the observation that $(4+6)=(1+4)+(3+2)$. This means that the world of $(4+6)$-dimensions can be split into a de Sitter world of $(1+4)$-dimensions and an anti-de Sitter world of $(3+2)$-dimensions. Moreover, looking at the $(4+6)$-world from the perspective of $(3+7)$-dimensions, obtained by a compactifying-uncompactifying prescription such that $4\rightarrow 3$ and $6\rightarrow 7$, one can associate the $3$ and $7$ dimensions of the $(3+7)$-world with $S^{3}$ and $S^{7}$, respectively, which are two of the parallelizable spheres; the other is $S^{1}$. As is known, these spheres are related to both Hopf maps and division algebras (see Ref. \[9\] and references therein). In this work, we develop a formalism that allows us to address the $(4+6)$-dimensional world *via* linearized gravity. In this case, one starts by assuming the Einstein field equations with cosmological constant $\Lambda $ in $(4+6)$-dimensions and develops the formalism considering a linearized metric in such equations. We note that the result is deeply related to the sign of the cosmological constant, $\Lambda \lessgtr 0$. In fact, one should remember that in $(1+4)$-dimensions $\Lambda $ is positive, while in $(3+2)$-dimensions $\Lambda $ is negative. At the level of linearized gravity, one searches for the possibility of associating these two different signs of $\Lambda $ with tachyons. This leads us to propose a unified tachyonic framework in $(4+6)$-dimensions which includes these two separate cases of $\Lambda $. Moreover, we argue that our formalism may admit a possible connection with the increasingly interesting proposal of duality in linearized gravity (see Refs. \[10-12\] and references therein).
In order to achieve our goal, we first introduce, in a simple context, the tachyon theory. Secondly, in a novel form we develop the de Sitter and anti-de Sitter space-time formalism, clarifying the meaning of the main constraints. Moreover, this groundwork allows us to describe a new formalism for higher dimensional linearized gravity. Our approach is focused on the space-time signature in any dimension and in particular in $(4+6)$-dimensions. A further motivation of our approach may emerge from the recent direct detections of gravitational waves \[13-15\]. According to these detections, the upper bound on the graviton mass is $m_{g}\leq 1.3\times 10^{-58}\,\mathrm{kg}$ \[15\]. Since in our computations the mass and the cosmological constant are proportional, such an upper bound must also be reflected in the cosmological constant value. Technically, this work is structured as follows. In section 2, we make a simple introduction of tachyon theory. In section 3, we discuss a possible formalism for the de Sitter and anti-de Sitter space-times. In section 4, we develop the most general formalism of higher dimensional linearized gravity with cosmological constant. In section 5, we establish a novel approach for considering the constraints that determine the de Sitter and anti-de Sitter space. In section 6, we associate the concept of tachyons with higher dimensional linearized gravity. In section 7, we develop linearized gravity with cosmological constant in $(4+6)$-dimensions. We add an appendix A in an attempt to further clarify the association between a negative mass squared term and tachyons. Finally, in section 8, we make some final remarks.   **2. Special relativity and the signature of the space-time**   Let us start by considering the well known time dilation formula $$dt=\frac{d\tau }{\sqrt{1-\frac{v^{2}}{c^{2}}}}.
\label{1}$$ Here, $\tau $ is the proper time, $v^{2}\equiv (\frac{dx}{dt})^{2}+(\frac{dy}{dt})^{2}+(\frac{dz}{dt})^{2}$ is the squared velocity of the object and $c$ denotes the speed of light. Of course, the expression (1) makes sense over the real numbers only if one assumes $v<c$. It is straightforward to see that (1) leads to the line element $$ds^{2}=-c^{2}d\tau ^{2}=-c^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}. \label{2}$$ In tensorial notation one may write (2) as $$ds^{2}=-c^{2}d\tau ^{2}=\eta _{\mu \nu }^{(+)}dx^{\mu }dx^{\nu }, \label{3}$$ where the indices $\mu ,\nu $ take values in the set $\{1,2,3,4\}$, $x^{1}=ct$, $x^{2}=x$, $x^{3}=y$ and $x^{4}=z$. Moreover, $\eta _{\mu \nu }^{(+)}$ denotes the flat Minkowski metric with associated signature $(-1,+1,+1,+1)$. Usually, one says that such a signature represents a world of $(1+3)$-dimensions. If one now defines the linear momentum $$p^{\mu }=m_{0}^{(+)}\frac{dx^{\mu }}{d\tau }, \label{4}$$ with $m_{0}^{(+)}\neq 0$ a constant, one sees that (3) implies $$p^{\mu }p^{\nu }\eta _{\mu \nu }^{(+)}+m_{0}^{(+)2}c^{2}=0. \label{5}$$ Of course, $m_{0}^{(+)}$ plays the role of the rest mass of the object. This is because setting $p^{i}=0$, with $i\in \{2,3,4\}$, in the rest frame and defining $E=cp^{1}$, the constraint (5) leads to the famous formula $E=\pm m_{0}^{(+)}c^{2}$. Let us follow similar steps, but instead of starting with the expression (1), one now assumes the formula $$d\lambda =\frac{d\xi }{\sqrt{\frac{u^{2}}{c^{2}}-1}}, \label{6}$$ where $u^{2}=(\frac{dw}{d\xi })^{2}+(\frac{d\rho }{d\xi })^{2}+(\frac{d\zeta }{d\xi })^{2}$. Note that in this case one has the condition $u>c$. Here, in order to emphasize the differences between (1) and (6), we are using a different notation. Indeed, the notation used in (1) and (6) is introduced in order to establish an eventual connection with $(4+6)$-dimensions. From (6) one obtains $$ds^{2}=-c^{2}d\xi ^{2}=+c^{2}d\lambda ^{2}-dw^{2}-d\rho ^{2}-d\zeta ^{2}.
\label{7}$$ In tensorial notation, one may write (7) as $$ds^{2}=-c^{2}d\xi ^{2}=\eta _{\mu \nu }^{(-)}dy^{\mu }dy^{\nu }, \label{8}$$ where $y^{1}=c\lambda $, $y^{2}=w$, $y^{3}=\rho $ and $y^{4}=\zeta $. Moreover, $\eta _{\mu \nu }^{(-)}$ denotes the flat Minkowski metric with associated signature $(+1,-1,-1,-1)$. One says that this signature represents a world of $(3+1)$-dimensions. If one now defines the linear momentum $$\mathcal{P}^{\mu }=m_{0}^{(-)}\frac{dy^{\mu }}{d\xi }, \label{9}$$ with $m_{0}^{(-)}\neq 0$ a constant, one sees that (8) implies $$\mathcal{P}^{\mu }\mathcal{P}^{\nu }\eta _{\mu \nu }^{(-)}+m_{0}^{(-)2}c^{2}=0. \label{10}$$ Since $u>c$, one observes that in this case the constraint (10) corresponds to a tachyon system with mass $m_{0}^{(-)}$. Now, for the case of ordinary matter, if one wants to quantize, one starts by promoting $p^{\mu }$ to an operator, identifying $\hat{p}^{\mu }=-i\partial ^{\mu }$. Thus, at the quantum level (5) becomes $$(-\partial ^{\mu }\partial ^{\nu }\eta _{\mu \nu }^{(+)}+m_{0}^{(+)2})\varphi =0. \label{11}$$ It is important to mention that here we are using a coordinate representation for $\varphi $ in the sense that $\varphi (x^{\mu })=\langle x^{\mu }|\varphi \rangle $. By defining the d’Alembert operator ${\square ^{(+)}}^{2}=\eta _{\mu \nu }^{(+)}\partial ^{\mu }\partial ^{\nu }$ one notes that the last equation reads $$({\square ^{(+)}}^{2}-m_{0}^{(+)2})\varphi =0. \label{12}$$ Analogously, in the constraint (10) one promotes the momentum $\mathcal{P}^{\mu }$ to an operator $\mathcal{\hat{P}}^{\mu }=-i\partial ^{\mu }$ and, using $\eta _{\mu \nu }^{(+)}=-\eta _{\mu \nu }^{(-)}$, the expression (10) yields $$({\square ^{(+)}}^{2}+m_{0}^{(-)2})\varphi =0. \label{13}$$ The last two expressions are Klein-Gordon type equations for ordinary matter and tachyonic systems, respectively.
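The physical difference between (12) and (13) can be made explicit with a plane-wave check (this check is ours, in units $\hbar =c=1$):

```latex
% Plane-wave check (our addition, units \hbar = c = 1): inserting
% \varphi = e^{i(\mathbf{k}\cdot\mathbf{x}-\omega t)} into Eq. (12) gives
\omega ^{2}=\mathbf{k}^{2}+m_{0}^{(+)2},
% i.e. \omega is real for every \mathbf{k} (stable propagation),
% while the same ansatz in Eq. (13) gives
\omega ^{2}=\mathbf{k}^{2}-m_{0}^{(-)2},
% so \omega is imaginary for |\mathbf{k}| < m_{0}^{(-)} and the
% long-wavelength modes grow exponentially (the tachyonic instability).
```

Thus the sign flip of the mass term converts the mass gap of (12) into a long-wavelength instability in (13), which is the hallmark of a tachyonic field.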
In fact, these two equations will play an important role in the analysis in section 6, concerning linearized gravity with positive and negative cosmological constant.   **3. Clarifying de Sitter and anti-de Sitter space-time**   Let us start with the constraint $$x^{i}x^{j}\eta _{ij}^{(+)}+(x^{d+1})^{2}=r_{0}^{2}, \label{14}$$ where $\eta _{ij}^{(+)}=diag(-1,1,...,1)$ is the Minkowski metric, the index $i$ runs from $1$ to $d$, and $r_{0}^{2}$ is a positive constant. The line element is given by $$ds^{2}\equiv dx^{A}dx^{B}\eta _{AB}=dx^{i}dx^{j}\eta _{ij}^{(+)}+(dx^{d+1})^{2}. \label{15}$$ It is not difficult to see that the corresponding Christoffel symbols and the Riemann tensor are given by $$\Gamma _{kl}^{i}=\frac{g_{kl}x^{i}}{r_{0}^{2}} \label{16}$$ and $$R_{ijkl}=\frac{1}{r_{0}^{2}}(g_{ik}g_{jl}-g_{il}g_{jk}), \label{17}$$ respectively. Here, the metric $g_{ij}$ is given by $$g_{ij}=\eta _{ij}^{(+)}+\frac{x_{i}x_{j}}{(r_{0}^{2}-x^{r}x^{s}\eta _{rs}^{(+)})}. \label{18}$$ It is worth mentioning that one can even consider a flat metric $\eta _{ij}=diag(-1,...,-1,1,...,1)$, with $t$-time and $s$-space coordinates, and analogous developments lead to the formulas (14)-(18). Of course, the line element associated with the metric (18) is $$ds^{2}\equiv (\eta _{ij}^{(+)}+\frac{x_{i}x_{j}}{(r_{0}^{2}-x^{r}x^{s}\eta _{rs}^{(+)})})dx^{i}dx^{j}, \label{19}$$ which in spherical coordinates becomes $$ds^{2}\equiv -(1-\frac{r^{2}}{r_{0}^{2}})dt^{2}+\frac{dr^{2}}{(1-\frac{r^{2}}{r_{0}^{2}})}+r^{2}d\Omega ^{d-2}. \label{20}$$ Here, one is assuming that $x^{m}x^{n}\eta _{mn}^{(+)}=-x^{1}x^{1}+r^{2}$, where $r^{2}=x^{a}x^{b}\delta _{ab}$, with $a,b$ running from $2$ to $d$. Moreover, $d\Omega ^{d-2}$ is a volume element in $d-2$ dimensions. The expression (20) is, of course, very useful when one considers black holes or cosmological models in the de Sitter space (or anti-de Sitter space).
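The statements (16)-(18) can be checked symbolically. The following sketch (our addition, not part of the original derivation) verifies, for the lowest nontrivial case $d=2$, that the metric of Eq. (18) has the constant scalar curvature $R=d(d-1)/r_{0}^{2}$ implied by Eq. (17) and quoted later as Eq. (64):

```python
import sympy as sp

# Consistency check (our addition): for d = 2, verify that the metric of
# Eq. (18), g_ij = eta_ij + x_i x_j / (r0^2 - x.x), has constant scalar
# curvature R = d(d-1)/r0^2 = 2/r0^2.
t, x = sp.symbols('t x', real=True)
r0 = sp.symbols('r0', positive=True)
coords = [t, x]
eta = sp.diag(-1, 1)                  # eta^(+) for d = 2
xu = sp.Matrix([t, x])                # x^i
xl = eta * xu                         # x_i = eta_ij x^j
q = (xu.T * eta * xu)[0]              # x.x = -t^2 + x^2

g = eta + (xl * xl.T) / (r0**2 - q)   # Eq. (18)
ginv = g.inv()
n = 2

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{kl} (d_j g_{li} + d_i g_{lj} - d_l g_{ij})
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l]))
        for l in range(n))

Gamma = [[[sp.simplify(christoffel(k, i, j)) for j in range(n)]
          for i in range(n)] for k in range(n)]

def ricci(i, j):
    # R_ij = d_k Gamma^k_ij - d_j Gamma^k_ik
    #        + Gamma^k_kl Gamma^l_ij - Gamma^k_jl Gamma^l_ik
    expr = 0
    for k in range(n):
        expr += sp.diff(Gamma[k][i][j], coords[k]) \
              - sp.diff(Gamma[k][i][k], coords[j])
        for l in range(n):
            expr += Gamma[k][k][l] * Gamma[l][i][j] \
                  - Gamma[k][j][l] * Gamma[l][i][k]
    return expr

R_scalar = sp.simplify(sum(ginv[i, j] * ricci(i, j)
                           for i in range(n) for j in range(n)))
# Expected: R = d(d-1)/r0^2 = 2/r0^2 for d = 2.
```

The same loop structure works for any $d$ at the cost of longer symbolic computations; the $d=2$ case already exercises every term in (16) and (17).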
In the anti-de Sitter case, instead of starting with the formula (14), one considers the constraint $x^{i}x^{j}\eta _{ij}^{(+)}-(x^{d+1})^{2}=-r_{0}^{2}$. This constraint will play an important role in section 5. **4. Linearized gravity with cosmological constant in any dimension**   Although in the literature there are similar computations \[16\], the discussion of this section seems to be new, in the sense that it is extended to any background metric in higher dimensions. Usually, one starts linearized gravity by writing the metric of the space-time $g_{\mu \nu }=g_{\mu \nu }(x^{\alpha })$ as $$g_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu }, \label{21}$$ where $\eta _{\mu \nu }=diag(-1,...,-1,1,...,1)$ is the Minkowski metric, with $t$-time and $s$-space coordinates, and $h_{\mu \nu }$ is a small perturbation. Therefore, the general idea is to keep only the first-order terms in $h_{\mu \nu }$ in the Einstein field equations. Here, we shall replace the Minkowski metric $\eta _{\mu \nu }$ by a general background metric denoted by $g_{\mu \nu }^{(0)}$. At the end we shall associate $g_{\mu \nu }^{(0)}$ with the de Sitter or anti-de Sitter space. So, the analogue of (21) becomes $$g_{\mu \nu }=g_{\; \; \; \; \mu \nu }^{(0)}+h_{\mu \nu }. \label{22}$$ The inverse of $g_{\mu \nu }$ is $$g^{\mu \nu }=g^{(0){\mu \nu }}-h^{\mu \nu }. \label{23}$$ Here, (23) is the inverse metric of (22) at first order in $h_{\mu \nu}$. Also, the metric $g_{\; \; \; \; \mu \nu }^{(0)}$ is used to raise and lower indices.
Therefore, neglecting the terms of second order in $h_{\mu \nu }$, one finds that the Christoffel symbols can be written as $$\Gamma _{\mu \nu }^{\lambda }=\Gamma _{\; \; \; \; \mu \nu }^{(0)\lambda }+\Sigma _{\mu \nu }^{\lambda }, \label{24}$$ where $\Gamma _{\; \; \; \; \mu \nu }^{(0)\lambda }$ are the Christoffel symbols associated with $g_{\; \; \; \; \mu \nu }^{(0)}$ and $\Sigma _{\mu \nu }^{\lambda }$ is given by $$\Sigma _{\mu \nu }^{\lambda }\equiv \frac{1}{2}g^{(0)\lambda \alpha }(\mathcal{D}_{\nu }h_{\alpha \mu }+\mathcal{D}_{\mu }h_{\nu \alpha }-\mathcal{D}_{\alpha }h_{\mu \nu }). \label{25}$$ Here, the symbol $\mathcal{D}_{\mu }$ denotes the covariant derivative with respect to the metric $g_{\; \; \; \; \mu \nu }^{(0)}$. Similarly, one obtains that at first order in $h_{\mu \nu }$, the Riemann tensor becomes $$R_{\; \; \nu \alpha \beta }^{\mu }=R_{\; \; \; \; \; \; \; \nu \alpha \beta }^{(0)\mu }+\mathcal{D}_{\alpha }\Sigma _{\; \; \nu \beta }^{\mu }-\mathcal{D}_{\beta }\Sigma _{\; \; \nu \alpha }^{\mu }, \label{26}$$ which can be rewritten as $$R_{\mu \nu \alpha \beta }=R_{\mu \nu \alpha \beta }^{(0)}+\mathcal{D}_{\alpha }\Sigma _{\nu \beta \mu }-\mathcal{D}_{\beta }\Sigma _{\nu \alpha \mu }+R_{\; \; \; \; \; \; \; \nu \alpha \beta }^{(0)\sigma }h_{\sigma \mu }, \label{27}$$ where $$\Sigma _{\mu \nu \alpha }\equiv \frac{1}{2}(\mathcal{D}_{\nu }h_{\alpha \mu }+\mathcal{D}_{\mu }h_{\alpha \nu }-\mathcal{D}_{\alpha }h_{\mu \nu }).
\label{28}$$ Then, using the definition (28), the Riemann tensor becomes $$\begin{array}{c} R_{\mu \nu \alpha \beta }=\frac{1}{2}(\mathcal{D}_{\alpha }\mathcal{D}_{\beta }h_{\mu \nu }-\mathcal{D}_{\beta }\mathcal{D}_{\alpha }h_{\mu \nu }+\mathcal{D}_{\alpha }\mathcal{D}_{\nu }h_{\beta \mu }-\mathcal{D}_{\beta }\mathcal{D}_{\nu }h_{\alpha \mu } \\ \\ +\mathcal{D}_{\beta }\mathcal{D}_{\mu }h_{\nu \alpha }-\mathcal{D}_{\alpha }\mathcal{D}_{\mu }h_{\nu \beta })+R_{\mu \nu \alpha \beta }^{(0)}+R_{\; \; \; \; \; \; \; \nu \alpha \beta }^{(0)\sigma }h_{\sigma \mu }. \end{array} \label{29}$$ Note that in this case the covariant derivatives $\mathcal{D}_{\mu }$ do not commute, unlike the ordinary partial derivatives $\partial _{\mu }$ in a Minkowski space-time background. One can show that the term $\mathcal{D}_{\alpha }\mathcal{D}_{\beta }h_{\mu \nu }-\mathcal{D}_{\beta }\mathcal{D}_{\alpha }h_{\mu \nu }$ leads to $$\mathcal{D}_{\alpha }\mathcal{D}_{\beta }h_{\mu \nu }-\mathcal{D}_{\beta }\mathcal{D}_{\alpha }h_{\mu \nu }=-h_{\lambda \mu }R_{\; \; \; \; \; \; \; \nu \alpha \beta }^{(0)\lambda }-h_{\lambda \nu }R_{\; \; \; \; \; \; \; \mu \alpha \beta }^{(0)\lambda }.
\label{30}$$ Then using (29), (30) and properties of the Riemann tensor, one can rewrite $R_{\mu \nu \alpha \beta }$ as $$\begin{array}{c} R_{\mu \nu \alpha \beta }=R_{\mu \nu \alpha \beta }^{(0)}+\frac{1}{2}(h_{\lambda \alpha }R_{\; \; \; \; \; \; \; \mu \beta \nu }^{(0)\lambda }-h_{\lambda \beta }R_{\; \; \; \; \; \; \; \mu \alpha \nu }^{(0)\lambda }-h_{\lambda \nu }R_{\; \; \; \; \; \; \; \mu \alpha \beta }^{(0)\lambda } \\ \\ +\mathcal{D}_{\nu }\mathcal{D}_{\alpha }h_{\mu \beta }-\mathcal{D}_{\nu }\mathcal{D}_{\beta }h_{\mu \alpha }+\mathcal{D}_{\beta }\mathcal{D}_{\mu }h_{\nu \alpha }-\mathcal{D}_{\alpha }\mathcal{D}_{\mu }h_{\nu \beta }). \end{array} \label{31}$$ Multiplying (31) by $g^{\mu \nu }$, as given in (23), leads to the Ricci tensor $$\begin{array}{c} R_{\mu \nu }=R_{\; \; \; \; \mu \nu }^{(0)}+\frac{1}{2}(h_{\lambda \nu }R_{\; \; \; \; \; \; \; \mu }^{(0)\lambda }+h_{\lambda \mu }R_{\; \; \; \; \; \; \; \nu }^{(0)\lambda })-h^{\alpha \beta }R_{\alpha \mu \beta \nu }^{(0)} \\ \\ +\frac{1}{2}\left( \mathcal{D}_{\mu }\mathcal{D}^{\alpha }h_{\alpha \nu }+\mathcal{D}_{\nu }\mathcal{D}^{\alpha }h_{\alpha \mu }-\mathcal{D}_{\mu }\mathcal{D}_{\nu }h-\mathcal{D}^{\alpha }\mathcal{D}_{\alpha }h_{\mu \nu }\right) . \end{array} \label{32}$$ Thus, the scalar curvature $R=g^{\mu \nu }R_{\mu \nu }$ becomes $$R=R^{(0)}+\mathcal{D}^{\alpha }\mathcal{D}^{\beta }h_{\alpha \beta }-\mathcal{D}^{\alpha }\mathcal{D}_{\alpha }h-h^{\alpha \beta }R_{\; \; \; \; \alpha \beta }^{(0)}. \label{33}$$ Now one can use (32) and (33) in the Einstein gravitational field equations with a cosmological constant $$R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R+\Lambda g_{\mu \nu }=0.
\label{34}$$ When one sets $g_{\mu \nu }^{(0)}$ as a de Sitter (or anti-de Sitter) background one obtains $$\begin{array}{c} \mathcal{D}_{\mu }\mathcal{D}_{\nu }h+\mathcal{D}^{\alpha }\mathcal{D}_{\alpha }h_{\mu \nu }-\mathcal{D}_{\mu }\mathcal{D}^{\alpha }h_{\alpha \nu }-\mathcal{D}_{\nu }\mathcal{D}^{\alpha }h_{\alpha \mu } \\ \\ +g_{\; \; \; \; \mu \nu }^{(0)}(\mathcal{D}^{\alpha }\mathcal{D}^{\beta }h_{\alpha \beta }-\mathcal{D}^{\alpha }\mathcal{D}_{\alpha }h) \\ \\ -\frac{2}{r_{0}^{2}}h_{\mu \nu }-\frac{(d-3)}{2r_{0}^{2}}hg_{\; \; \; \; \mu \nu }^{(0)}=0. \end{array} \label{35}$$ As is commonly done in linearized gravity in four dimensions, one defines $\bar{h}_{\mu \nu }=h_{\mu \nu }-\frac{1}{2}g_{\; \; \; \; \mu \nu }^{(0)}h$. Therefore, substituting this expression for $\bar{h}_{\mu \nu }$ in (35), fixing the Lorentz gauge $\mathcal{D}^{\nu }\bar{h}_{\mu \nu }=0$ and assuming the trace $\bar{h}=0$, one finally gets $$\square ^{2}\bar{h}_{\mu \nu }-\frac{4\Lambda }{(d-2)(d-1)}\bar{h}_{\mu \nu }=0, \label{36}$$ where $d$ is the dimension of the space-time. It is important to observe that in (36), $\square ^{2}=\eta _{\mu \nu }^{(+)}\partial ^{\mu }\partial ^{\nu }$ is now generalized to the form $\square ^{2}=g_{\; \; \; \; \mu \nu }^{(0)}\mathcal{D}^{\mu }\mathcal{D}^{\nu }$. At this point, considering the $(4+6)$-signature (which can be split into a de Sitter and an anti-de Sitter space according to $(4+6)=(1+4)+(3+2)$), one has to set $d=8$ since there are two constraints, one given by the de Sitter world and another from the anti-de Sitter world. Consequently, the equation (36) becomes $$\square ^{2}\bar{h}_{\mu \nu }-\frac{2}{21}\Lambda \bar{h}_{\mu \nu }=0. \label{37}$$ One recognizes this expression as the equation of a gravitational wave in $d=8$.   **5. Constraints in de Sitter and anti-de Sitter space**   When one considers the de Sitter space, one assumes the constraint (14).
However, one may notice that actually there are eight possible constraints corresponding to the two metrics $\eta _{\mu \nu }^{(+)}$ and $\eta _{\mu \nu }^{(-)}$ mentioned in section 3. In fact, for the metric $\eta _{\mu \nu }^{(+)}$ one has the following possibilities: $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}+(x^{d+1})^{2}=r_{0}^{2}, \label{38}$$ $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}-(x^{d+1})^{2}=r_{0}^{2}, \label{39}$$ $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}+(x^{d+1})^{2}=-r_{0}^{2} \label{40}$$ and $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}-(x^{d+1})^{2}=-r_{0}^{2}. \label{41}$$ While for the metric $\eta _{\mu \nu }^{(-)}$ one finds $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(-)}+(x^{d+1})^{2}=r_{0}^{2}, \label{42}$$ $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(-)}-(x^{d+1})^{2}=r_{0}^{2}, \label{43}$$ $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(-)}+(x^{d+1})^{2}=-r_{0}^{2} \label{44}$$ and $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(-)}-(x^{d+1})^{2}=-r_{0}^{2}. \label{45}$$ Now, since one has the relation $\eta _{\mu \nu }^{(+)}=-\eta _{\mu \nu }^{(-)}$, one sees that the two sets of constraints (38)-(41) and (42)-(45) are equivalent. Hence, we shall focus only on the first set of constraints (38)-(41). Let us now rewrite (39) and (40) as $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}=[(x^{d+1})^{2}+r_{0}^{2}] \label{46}$$ and $$x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}=-[(x^{d+1})^{2}+r_{0}^{2}]. \label{47}$$ Observe that one may assume that $x^{\mu }x^{\nu }\eta _{\mu \nu }^{(+)}\lessgtr 0$, with $\eta _{\mu \nu }^{(+)}=(-1,1,...,1)$. So, since the right-hand side of (46) is strictly positive, by consistency this constraint must be omitted. Similarly, since the right-hand side of (47) is strictly negative, this constraint must also be omitted. Thus, one only needs to consider the two constraints (38) and (41), which correspond to polynomial equations over the reals whose sets of solutions can be associated with classical algebraic varieties.
When these constraints are used at the level of the line element, one discovers that they can be associated with manifolds for the de Sitter and anti-de Sitter space-time. It turns out that the constraints (38) and (41) can be rewritten as $$x^{A}x^{B}\eta _{AB}^{(+)}=r_{0}^{2} \label{48}$$ and $$x^{A}x^{B}\eta _{AB}^{(-)}=\rho _{0}^{2}, \label{49}$$ where $$\eta _{AB}^{(+)}=(-1,1,...,1,1) \label{50}$$ and $$\eta _{AB}^{(-)}=(1,-1,-1,...,-1,1). \label{51}$$ Here, one is assuming that (49) allows for a different radius $\rho _{0}$. This is useful for emphasizing that $r_{0}^{2}$ refers to the de Sitter world and $\rho _{0}^{2}$ to the anti-de Sitter world. Note that using $\rho _{0}^{2}$ the expression (49) can also be rewritten as $$x^{i}x^{j}\eta _{ij}^{(-)}+(x^{d+1})^{2}=\rho _{0}^{2}. \label{52}$$ Now, using (50) and (51) one can write the line elements in the form $$ds_{(+)}^{2}=dx^{A}dx^{B}\eta _{AB}^{(+)}=dx^{\mu }dx^{\nu }\eta _{\mu \nu }^{(+)}+(dx^{d+1})^{2} \label{53}$$ and $$ds_{(-)}^{2}=dx^{A}dx^{B}\eta _{AB}^{(-)}=dx^{\mu }dx^{\nu }\eta _{\mu \nu }^{(-)}+(dx^{d+1})^{2}. \label{54}$$ From (38) one obtains $$x^{d+1}=\pm (r^{2}_{0}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(+)})^{\frac{1}{2}}. \label{55}$$ So, differentiating (55) one obtains $$dx^{d+1}=\mp \frac{x_{\mu }}{(r^{2}_{0}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(+)})^{\frac{1}{2}}}dx^{\mu }. \label{56}$$ Similarly, from (52) one gets $$dx^{d+1}=\mp \frac{x_{\mu }}{(\rho _{0}^{2}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(-)})^{\frac{1}{2}}}dx^{\mu }. \label{57}$$ Hence, with the help of (56) and (57), one can rewrite (53) and (54) as $$ds_{(+)}^{2}=\left( \eta _{\mu \nu }^{(+)}+\frac{x_{\mu }x_{\nu }}{r^{2}_{0}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(+)}}\right) dx^{\mu }dx^{\nu } \label{58}$$ and $$ds_{(-)}^{2}=\left( \eta _{\mu \nu }^{(-)}+\frac{x_{\mu }x_{\nu }}{\rho _{0}^{2}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(-)}}\right) dx^{\mu }dx^{\nu }, \label{59}$$ respectively.
Thus, one learns that the metrics associated with (58) and (59) are $$g_{\mu \nu }^{(+)}=\eta _{\mu \nu }^{(+)}+\frac{x_{\mu }x_{\nu }}{r^{2}_{0}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(+)}} \label{60}$$ and $$g_{\mu \nu }^{(-)}=\eta _{\mu \nu }^{(-)}+\frac{x_{\mu }x_{\nu }}{\rho _{0}^{2}-x^{\alpha }x^{\beta }\eta _{\alpha \beta }^{(-)}}, \label{61}$$ respectively. Using (60) and (61) one sees that according to (17) the Riemann tensors $R_{\mu \nu \alpha \beta }^{(+)}$ and $R_{\mu \nu \alpha \beta }^{(-)}$ become $$R_{\mu \nu \alpha \beta }^{(+)}=\frac{1}{r_{0}^{2}}\left( g_{\mu \alpha }^{(+)}g_{\nu \beta }^{(+)}-g_{\mu \beta }^{(+)}g_{\nu \alpha }^{(+)}\right) \label{62}$$ and $$R_{\mu \nu \alpha \beta }^{(-)}=-\frac{1}{\rho _{0}^{2}}\left( g_{\mu \alpha }^{(-)}g_{\nu \beta }^{(-)}-g_{\mu \beta }^{(-)}g_{\nu \alpha }^{(-)}\right) . \label{63}$$ The corresponding curvature scalars associated with (62) and (63) are $$R^{(+)}=\frac{d(d-1)}{r_{0}^{2}} \label{64}$$ and $$R^{(-)}=-\frac{d(d-1)}{\rho _{0}^{2}}. \label{65}$$ Now, let us consider the Einstein gravitational field equation (see eq. (34)) $$R_{\mu \nu }^{(+)}-\frac{1}{2}g_{\mu \nu }^{(+)}R^{(+)}+\Lambda ^{(+)}g_{\mu \nu }^{(+)}=0, \label{66}$$ for $g_{\mu \nu }^{(+)}$. Multiplying this expression by $g^{\mu \nu (+)}$ one sees that (66) leads to $$R^{(+)}-\frac{1}{2}dR^{(+)}+\Lambda ^{(+)}d=0. \label{67}$$ Solving (67) for $\Lambda ^{(+)}$ leads to $$\Lambda ^{(+)}=\frac{(d-2)(d-1)}{2r_{0}^{2}}, \label{68}$$ where the equation (64) was used. In an analogous way, by considering the Einstein gravitational field equations for $g_{\mu \nu }^{(-)}$, $$R_{\mu \nu }^{(-)}-\frac{1}{2}g_{\mu \nu }^{(-)}R^{(-)}+\Lambda ^{(-)}g_{\mu \nu }^{(-)}=0, \label{69}$$ one obtains $$\Lambda ^{(-)}=-\frac{(d-2)(d-1)}{2\rho _{0}^{2}}. \label{70}$$ Note that, since $\Lambda ^{(-)}$ refers to the anti-de Sitter space, (70) agrees with the fact that $\Lambda ^{(-)}<0$.   **6.
The signature of the space-time in linearized gravity**   In the previous section, the Einstein gravitational field equations were considered for $g_{\mu \nu }^{(+)}$ and $g_{\mu \nu }^{(-)}$. Such equations led us to relations for $\Lambda ^{(+)}$ and $\Lambda ^{(-)}$. Now, if one substitutes the equations (68) and (70) into (36) one obtains $$\left( ^{(+)}\square ^{2}-\frac{4\Lambda ^{(+)}}{(d-2)(d-1)}\right) \bar{h}% _{\mu \nu }^{(+)}=0, \label{71}$$ and $$\left( ^{(-)}\square ^{2}+\frac{4\Lambda ^{(-)}}{(d-2)(d-1)}\right) \bar{h}% _{\mu \nu }^{(-)}=0. \label{72}$$ Here, $^{(\pm )}\square ^{2}=g_{\; \; \; \; \mu \nu }^{(\pm )(0)}\mathcal{D}% ^{\mu }\mathcal{D}^{\nu }$. Let us now consider, in the context of linearized gravity, the vielbein formalism for $g_{\mu \nu }^{(+)}$ and $g_{\mu \nu }^{(-)}$. One introduces the vielbein field $e_{\mu }^{a}$ and writes $$g_{\mu \nu }^{(\pm )}=e_{\mu }^{a}e_{\nu }^{b}\eta _{ab}^{(\pm )}, \label{73}$$ where $$e_{\mu }^{a}=b_{\mu }^{a}+h_{\mu }^{a}. \label{74}$$ Substituting (74) into (73), the metric $g_{\mu \nu }^{(\pm )}$ becomes $$g_{\mu \nu }^{(\pm )}=(b_{\mu }^{a}+h_{\mu }^{a})(b_{\nu }^{b}+h_{\nu }^{b})\eta _{ab}^{(\pm )}. \label{75}$$ Thus, one obtains $$g_{\mu \nu }^{(\pm )}=b_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}+b_{\mu }^{a}h_{\nu }^{b}\eta _{ab}^{(\pm )}+h_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}+h_{\mu }^{a}h_{\nu }^{b}\eta _{ab}^{(\pm )}. \label{76}$$ Since one is assuming that $h_{\mu }^{a}\ll 1$, one has $h_{\mu }^{a}h_{\nu }^{b}\eta _{ab}^{(\pm )}\sim 0$ and therefore (76) reduces to $$g_{\mu \nu }^{(\pm )}=b_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}+b_{\mu }^{a}h_{\nu }^{b}\eta _{ab}^{(\pm )}+h_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}. 
\label{77}$$ If one establishes the identifications $g_{\; \; \; \; \mu \nu }^{(\pm )(0)}=b_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}$ and $h_{\mu \nu }^{(\pm )}=h_{\mu }^{a}b_{\nu }^{b}\eta _{ab}^{(\pm )}+b_{\mu }^{a}h_{\nu }^{b}\eta _{ab}^{(\pm )}$ one obtains $$g_{\mu \nu }^{(\pm )}=g_{\; \; \; \; \mu \nu }^{(\pm )(0)}+h_{\mu \nu }^{(\pm )}, \label{78}$$ which is the expression (22) but with the signatures $+$ or $-$ in $g_{\; \; \; \; \mu \nu }^{(\pm )(0)}$ and $h_{\mu \nu }^{(\pm )}$ identified. Now, we shall compare the equations (71) and (72) with (12) and (13), respectively. Since $\Lambda ^{(+)}>0$ and $\Lambda ^{(-)}<0$ one can introduce the two mass terms $$m^{(+)2}=\frac{4\Lambda ^{(+)}}{(d-2)(d-1)} \label{79}$$ and $$m^{(-)2}=-\frac{4\Lambda ^{(-)}}{(d-2)(d-1)}. \label{80}$$ For $d=4$, corresponding to the observable universe, and for ordinary matter one has $$m^{(+)2}=\frac{2}{3}\Lambda ^{(+)}. \label{81}$$ This mass must be associated with systems traveling slower than the light velocity $(v<c)$. In the case of particles traveling faster than the light velocity $(v>c)$, corresponding to tachyons, one obtains $$m^{(-)2}=-\frac{2}{3}\Lambda ^{(-)}. \label{82}$$ Note that since $\Lambda ^{(+)}>0$ and $\Lambda ^{(-)}<0$, both rest masses $% m^{(+)2}$ and $m^{(-)2}$ are positive.   **7. Linearized gravity in (4+6)-dimensions**   The key idea in this section is to split the $(4+6)$-dimensional signature as $(4+6)=(1+4)+(3+2)$. This means that the $(4+6)$-dimensional space is split into two parts: the de Sitter world of $(1+4)$-dimensions and the anti-de Sitter world of $(3+2)$-dimensions. In this direction, let us write the line elements in (3) and (7) in the form $$d\mathcal{S}% _{(+)}^{2}=-c^{2}dt_{0(+)}^{2}=-c^{2}dt_{(+)}^{2}+dx_{(+)}^{2}+dy_{(+)}^{2}+dz_{(+)}^{2}+dw_{(+)}^{2} \label{83}$$ and $$d\mathcal{S}% _{(-)}^{2}=-c^{2}dt_{0(-)}^{2}=+c^{2}dt_{(-)}^{2}-dx_{(-)}^{2}-dy_{(-)}^{2}-dz_{(-)}^{2}+d\tau _{(-)}^{2}, \label{84}$$ respectively. 
One can drop the parenthesis notation in the coordinates $% x_{(+)}^{\mu }$ and $x_{(-)}^{\mu }$ if one makes the convention that the index $\mu $ in $x_{(+)}^{\mu }$ runs from $1$ to $4$, while the index $\mu $ in $x_{(-)}^{\mu }$ is changed to an index $a$ running from $5$ to $8$. Thus, in tensorial notation one may write (83) and (84) as $$d\mathcal{S}_{(+)}^{2}=\eta _{\mu \nu }^{(+)}dx^{\mu }dx^{\nu }+dw_{(+)}^{2} \label{85}$$ and $$d\mathcal{S}_{(-)}^{2}=\eta _{ab}^{(-)}dx^{a}dx^{b}+d\tau _{(-)}^{2}. \label{86}$$Here, $\eta _{\mu \nu }^{(+)}$ and $\eta _{ab}^{(-)}$ denote flat Minkowski metrics with the signatures $(-1,+1,+1,+1)$ and $(+1,-1,-1,-1)$, respectively. It seems evident that one can reach a unified framework by adding (85) and (86) in the form $$d\mathcal{S}^{2}=d\mathcal{S}_{(+)}^{2}+d\mathcal{S}_{(-)}^{2}=\eta _{\mu \nu }^{(+)}dx^{\mu }dx^{\nu }+dw_{(+)}^{2}+\eta _{ab}^{(-)}dx^{a}dx^{b}+d\tau _{(-)}^{2}. \label{87}$$ Let us assume that in a world of $(4+6)$-signature one has the two constraints$$\eta _{\mu \nu }^{(+)}x^{\mu }x^{\nu }+w^{2}=\frac{3}{\Lambda _{(+)}} \label{88}$$and $$\eta _{ab}^{(-)}x^{a}x^{b}+\tau ^{2}=\frac{3}{\Lambda _{(-)}}, \label{89}$$ where $\Lambda _{(+)}>0$ and $\Lambda _{(-)}<0$ again play the role of two cosmological constants. Following a similar procedure as in section 5, and considering the constraints (88) and (89), one can generalize the line element (87) in the form $$d\mathcal{S}^{2}=g_{\; \; \; \; \mu \nu }^{(+)(0)}dx^{\mu }dx^{\nu }+g_{\; \; \; \;ab}^{(-)(0)}dx^{a}dx^{b}, \label{90}$$ where $$g_{\; \; \; \; \mu \nu }^{(+)(0)}=\eta _{\mu \nu }^{(+)}+\frac{x_{\mu }x_{\nu }}{\frac{3}{\Lambda _{(+)}}-x^{i}x^{j}\eta _{ij}^{(+)}} \label{91}$$ and $$g_{\; \; \; \;ab}^{(-)(0)}=\eta _{ab}^{(-)}+\frac{x_{a}x_{b}}{\frac{3}{% \Lambda _{(-)}}-x^{i}x^{j}\eta _{ij}^{(-)}}. 
\label{92}$$ Using (91) and (92) one can define a background matrix $\gamma _{AB}$, with indices $A$ and $B$ running from $1$ to $8$, in the form $$\gamma _{AB}^{(0)}=\left( \begin{array}{cc} g_{\; \; \; \; \mu \nu }^{(+)(0)} & 0 \\ 0 & g_{\; \; \; \;ab}^{(-)(0)}% \end{array}% \right) . \label{93}$$ Thus, one can write the linearized metric associated with (93) as $$\gamma _{AB}=\gamma _{AB}^{(0)}+h_{AB}. \label{94}$$ Hence, following an analogous procedure to that presented in section 4, one obtains the equation for $h_{AB}$ in $d=4+4=8$-dimensions, $$\square ^{2}\bar{h}_{AB}-\frac{2}{21}\Lambda \bar{h}_{AB}=0. \label{95}$$ Here, one has $$\square ^{2}=\gamma _{AB}^{(0)}\mathcal{D}^{A}\mathcal{D}^{B}. \label{96}$$One can split $\square ^{2}$ in the form$$\square ^{2}=\square ^{(+)^{2}}+\square ^{(-)^{2}}, \label{97}$$where $\square ^{(+)^{2}}=g_{\; \; \; \; \mu \nu }^{(+)(0)}\mathcal{D}^{\mu }% \mathcal{D}^{\nu }$ and $\square ^{(-)^{2}}=g_{\; \; \; \;ab}^{(0)(-)}% \mathcal{D}^{a}\mathcal{D}^{b}$. One shall now use the separation of variables method. For this purpose let us assume that the perturbation $\bar{% h}_{AB}=$ $\bar{h}_{AB}(x^{\mu },x^{a})$ can be split in the form $$\bar{h}_{AB}=\bar{h}_{\quad AC}^{(+)}(x^{\mu })\bar{h}_{\qquad B}^{(-)C}(x^{a}). \label{98}$$ Thus, one discovers that (95) becomes $$\bar{h}_{\qquad B}^{(-)C}\square ^{(+)^{2}}\bar{h}_{\quad AC}^{(+)}+\bar{h}% _{\quad AC}^{(+)}\square ^{(-)^{2}}\bar{h}_{\qquad B}^{(-)C}-\frac{2}{21}% \Lambda \bar{h}_{\quad AC}^{(+)}\bar{h}_{\qquad B}^{(-)C}=0. \label{99}$$ Multiplying the last equation by $\bar{h}^{(+)AE}\bar{h}_{\qquad D}^{(-)B}$ yields $$\bar{h}^{(+)AE}\square ^{(+)^{2}}\bar{h}_{\quad AD}^{(+)}-\frac{2}{21}% \Lambda \delta _{D}^{E}=-\bar{h}_{\qquad D}^{(-)B}\square ^{(-)^{2}}\bar{h}% _{\qquad B}^{(-)E}. 
\label{100}$$ Thus, since the left-hand side of (100) depends only on $% x^{\mu }$ and the right-hand side depends only on $x^{a}$, one may introduce a constant $\hat{\Lambda}$ such that $$\bar{h}^{(+)AE}\square ^{(+)^{2}}\bar{h}_{\quad AD}^{(+)}-\frac{2}{21}% \Lambda \delta _{D}^{E}=\hat{\Lambda} \delta _{D}^{E} \label{101}$$ and $$-\bar{h}_{\qquad D}^{(-)B}\square ^{(-)^{2}}\bar{h}_{\qquad B}^{(-)E}=\hat{% \Lambda} \delta _{D}^{E}. \label{102}$$ One may rewrite (101) and (102) in the form $$\square ^{(+)^{2}}\bar{h}_{\quad AB}^{(+)}-\left( \frac{2}{21}\Lambda +\hat{% \Lambda} \right) \bar{h}_{\quad AB}^{(+)}=0 \label{103}$$ and $$\square ^{(-)^{2}}\bar{h}_{\quad AB}^{(-)}+\hat{\Lambda} \bar{h}_{\quad AB}^{(-)}=0. \label{104}$$ According to the formalism presented in section 5, one can identify the tachyonic mass in the anti-de Sitter world as $m^{(-)2}=-\hat{\Lambda} $, and the mass in the de Sitter world as $m^{(+)2}=\frac{2}{21}\Lambda +% \hat{\Lambda} $. Moreover, one can also introduce an effective $M^{2}=\frac{2% }{21}\Lambda $ in the $(4+4)$-world. Note that the effective mass $M^{2}$ can be written as $M^{2}=m^{(+)2}+m^{(-)2}$. Thus, (103) and (104) can be rewritten as $$(\square ^{(+)^{2}}-m^{(+)2})\bar{h}_{\quad \mu \nu }^{(+)}=0 \label{105}$$ and $$(\square ^{(-)^{2}}-m^{(-)2})\bar{h}_{\quad ab}^{(-)}=0. \label{106}$$ Here, we fixed the gauges $\bar{h}_{\quad ab}^{(+)}=0$ and $\bar{h}_{\quad \mu \nu }^{(-)}=0$. Thus, we have shown that from linearized gravity in $% (4+4)$-dimensions one can derive linearized gravities in $(1+3)$-dimensions and in $(3+1)$-dimensions. Moreover, the modes $\bar{h}_{\quad \mu \nu }^{(+)}$ can be associated with a massive graviton in the de Sitter space, while the modes $\bar{h}_{\quad ab}^{(-)}$ can be associated with the tachyonic graviton in the anti-de Sitter space.       **8. 
Final remarks**   In this work we have developed a higher dimensional formalism for linearized gravity in the de Sitter or anti-de Sitter background, which are characterized by the cosmological constants $\Lambda ^{(+)}>0$ and $\Lambda ^{(-)}<0$, respectively. Our starting points are the higher dimensional Einstein gravitational field equations and the perturbed metric $g_{\mu \nu }^{(\pm )}=g_{\; \; \; \; \mu \nu }^{(\pm )(0)}+h_{\mu \nu }^{(\pm )}$, where $g_{\; \; \; \; \mu \nu }^{(\pm )(0)}$ is a background metric associated with the cosmological constants $\Lambda ^{(+)}$, $\Lambda ^{(-)}$ and the Minkowski flat metric $\eta _{\; \; \; \; \mu \nu }^{(\pm )}$. After straightforward computations and after imposing gauge conditions for $% h_{\mu \nu }^{(\pm )} $ we obtain the two equations (71) and (72). We proved that these two equations admit an interpretation in terms of massive gravitons with masses given by (79) and (80). According to the formalism discussed in section 2, the massive graviton with mass $m^{(+)2}$ can be associated with an ordinary graviton living in the de Sitter space, while the massive graviton with mass $m^{(-)2}$ is a tachyonic graviton living in the anti-de Sitter space. We should mention that these results agree up to a sign with those described by Novello and Neves \[17\]. The origin of this difference in the signs is that although they consider a version of linearized gravity, their approach refers only to four dimensions and relies on a field strength $% F_{\mu \nu \alpha \beta }$ which is not used in our case. Here, we get a four dimensional graviton mass $m_{g}^{2}=\frac{2}{3}\Lambda $ for de Sitter space and using the Planck 2015 data \[18\] we can set $m_{g}\sim 3.0\times 10^{-69}kg$, while the current upper bound obtained by the detection of gravitational waves is $m_{g}\leq 1.3\times 10^{-58}kg$ \[15\]. 
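The quoted figure is easy to cross-check numerically. The snippet below is our own arithmetic sketch, not part of the paper: the numerical value of $\Lambda$ is an assumed input (the Planck 2015 best fit, roughly $1.11\times 10^{-52}\,m^{-2}$), and the factor $\hbar /c$ converts the inverse length $\sqrt{2\Lambda /3}$ into a mass.

```python
# Cross-check of m_g^2 = (2/3) Lambda for d = 4 (our sketch, SI units).
# Assumed input: Planck 2015 best-fit cosmological constant ~ 1.11e-52 m^-2.
import math

hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
Lam = 1.11e-52      # m^-2 (assumption, see lead-in)

m_g = (hbar / c) * math.sqrt(2 * Lam / 3)   # kg
print(m_g)          # ~ 3.0e-69 kg, consistent with the value in the text
```

The result lands on the order of $3\times 10^{-69}\,kg$, about eleven orders of magnitude below the quoted observational upper bound.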
Furthermore, in the previous section, we discussed the case of the $(4+6)$ signature, where we identified $m^{(+)2}$ and $m^{(-)2}$ as contributions to an effective mass $M^{2}$ in the unified framework of $(4+4)$-dimensions. It would be interesting in future work to gain a better understanding of the meaning of the mass $M^{2}$. Also, it may be interesting to extend this work to a higher dimensional cosmological model with a massive graviton. On the other hand, it is worth mentioning that our proposed formalism in $% (4+4)$-dimensions may be related to the so-called double field theory \[19\]. This is a theory formulated with $x^{A}=(x^{\mu },x^{a})$ coordinates corresponding to the double space $R^{4}\times \ T^{4}$, with $A=1,2,...,8$ and $D=8=4+4$. In this case the constant metric is given by $$ds^{2}=\eta _{AB}dx^{A}dx^{B}. \label{107}$$ Moreover, the relevant group in this case is $O(4,4)$ which is associated with the manifold $M^{8}$. It turns out that $M^{8}$ can be compactified in such a way that it becomes the product $R^{4}\times \ T^{4}$ of flat space and a torus. In turn the group $O(8,8)$ is broken into a group containing $% O(4,4)\times O(4,4;Z)$. A detailed formulation of this possible relation will be presented elsewhere. Finally, it is worth mentioning that the formalism developed in this work may eventually be useful for improving the direct detection of gravitational waves. This is because recent observations \[20\] established that the cosmological constant has to be small and positive and that the observable universe resembles a de Sitter universe rather than an anti-de Sitter universe. Also, it will be interesting to explore a link between this work and the electromagnetic counterpart of the gravitational waves \[21\].   **Acknowledgments**  We would like to thank E. A. León for helpful comments. We would also like to thank the referee for valuable comments. This work was partially supported by PROFAPI 2013. **Appendix A. 
Negative mass squared term - tachyon association** This appendix is dedicated to clarifying why the expression (80) refers to a tachyon system. In some sense the presentation below is the reverse of the argument presented in section 2. Consider the Klein-Gordon equation $$({\square }^{2}+m_{0}^{2})\varphi =0. \label{A1}$$ If one considers a plane wave solution for (A1), the solution can be written as $$\varphi =Ae^{p^{\mu }x_{\mu }}, \label{A2}$$where $A$ is a constant. Therefore, using (A2) one can verify that (A1) is reduced to $$(p^{2}+m_{0}^{2})\varphi =0, \label{A3}$$which implies that $$p^{2}+m_{0}^{2}=0. \label{A4}$$Since $p^{\mu }=m_{0}u^{\mu }$ one discovers that (A4) leads to a relation of the form $$dt=\frac{d\tau }{\sqrt{1-v^{2}/c^{2}}}, \label{A5}$$which implies that $v<c$ and therefore the system moves with velocities less than the light velocity. Similarly, if instead of (A1) one considers the expression $$({\square }^{2}-m_{0}^{2})\varphi =0, \label{A6}$$a plane wave solution would imply the classical expression $${p}^{2}-m_{0}^{2}=0, \label{A7}$$from which, again considering the relation $p^{\mu }=m_{0}u^{\mu }$, one finds that instead of the relation (A5) one has $$dt=\frac{d\tau }{\sqrt{v^{2}/c^{2}-1}}. \label{A8}$$This implies that $v>c$ and therefore describes a tachyon system. If instead of the field $\varphi $ one considers $h_{\mu \nu }$ and assumes a plane wave solution of the form $h_{\mu \nu }=A_{\mu \nu }e^{p^{\mu }x_{\mu }}$, one may be able to obtain the corresponding expressions (A5) and (A8) for linearized gravity. [99]{} M. J. Duff, Int. J. Mod. Phys. A **11**, 5623 (1996); hep-th/9608117. J. A. Nieto, Adv. Theor. Math. Phys. **10**, 747 (2006); hep-th/0506106. A. Sen, JHEP **9808**, 012 (1998); hep-th/9805170. P. Horava and C. A. Keeler, Phys. Rev. Lett. **100**, 051601 (2008); arXiv:0709.2162 \[hep-th\]. P. Horava and C. A. Keeler, Phys. Rev. D **77**, 066013 (2008); arXiv:0709.3296 \[hep-th\]. M. B. Green, J. H. Schwarz and E. 
Witten, *Superstring Theory I and II* (Cambridge University Press, 1987). Z. Bajnok, N. Drukker, Á. Hegedüs, R. Nepomechie, L. Palla, C. Sieg and R. Suzuki, JHEP **1403**, 055 (2014); arXiv:1312.3900 \[hep-th\]. C. M. Hull, JHEP **9811**, 017 (1998); hep-th/9807127. J. W. Milnor, *Topology from the Differentiable Viewpoint* (Princeton University Press, 1997). J. A. Nieto, Phys. Lett. A **262**, 274 (1999); hep-th/9910049. M. Henneaux and C. Teitelboim, Phys. Rev. D **71**, 024018 (2005); gr-qc/0408101. V. Lekeu and A. Leonard, “Prepotentials for linearized supergravity”, arXiv:1804.06729 \[hep-th\]. B. P. Abbott *et al.*, Phys. Rev. Lett. **116**, 6 (2016); arXiv:1602.03837. B. P. Abbott *et al.*, Phys. Rev. Lett. **116**, 24 (2016); arXiv:1606.04855. B. P. Abbott *et al.*, Phys. Rev. Lett. **118**, 22 (2017); arXiv:1706.01812. A. Higuchi, Class. Quant. Grav. **8**, 2005 (1991). M. Novello and R. P. Neves, Class. Quant. Grav. **20**, 67 (2003). P. A. R. Ade *et al.*, Astron. Astrophys. **594**, A13 (2016); arXiv:1502.01589. C. Hull, JHEP **0909**, 099 (2009); arXiv:0904.4664 \[hep-th\]. S. Perlmutter *et al.*, Astrophys. J. **517**, 565 (1999); astro-ph/9812133. B. P. Abbott *et al.*, Phys. Rev. Lett. **119**, 16 (2017); arXiv:1710.05832. [^1]: [email protected], [email protected] [^2]: [email protected], [email protected]
--- abstract: 'Let $G$ be a graph on $n$ vertices, labeled $v_1,\ldots,v_n$, and let $\pi$ be a permutation on $[n]:=\{1,2,\cdots, n\}$. Suppose that each pebble $p_i$ is placed at vertex $v_{\pi(i)}$ and has destination $v_i$. During each step, a disjoint set of edges is selected and the pebbles on each edge are swapped. Let $rt(G, \pi)$, the routing number for $\pi$, be the minimum number of steps necessary for the pebbles to reach their destinations. Li, Lu, and Yang prove that $rt(C_n, \pi)\le n-1$ for any permutation $\pi$ on the $n$-cycle $C_n$ and conjecture that for $n \geq 5$, if $rt(C_n, \pi) = n-1$, then $\pi = (123\cdots n)$ or its inverse. By a computer search, they show that the conjecture holds for $n<8$. We prove in this paper that the conjecture holds for all even $n$.' author: - 'Junhua He$^{\dagger}$' - 'Louis A. Valentin$^{\ddagger}$' - 'Xiaoyan Yin$^{\S}$' - 'Gexin Yu$^{\star}$' title: Extremal permutations in routing cycles --- Introduction ============ Routing problems occur in many areas of computer science. Sorting a list involves routing each element to the proper location. Communication across a network involves routing messages through appropriate intermediaries. Message passing between multiprocessors requires the routing of signals to the correct processors. In each case, one would like the routing to be done as quickly as possible. We will use a routing model first introduced by Alon, Chung, and Graham [@ACG94] in 1994. Let $G = (V,E)$ be a graph. Label the vertices as $v_1, \ldots, v_n$; initially each vertex holds one pebble. Suppose that under a permutation $\pi$ on $[n]$, pebble $p_i$ is placed at $v_{\pi(i)}$. We wish to move the pebbles to their destinations. To do so, we select a matching of $G$, swap the pebbles at the endpoints of each edge, and repeat in the next round until all pebbles are in place. Let $rt(G, \pi)$ denote the minimum number of rounds necessary to route $\pi$ on $G$. 
Then, the [**routing number**]{} of $G$ is defined as: $$rt(G) = \max_{\pi}\ \{ rt(G, \pi) \}$$ As the routing problem occurs in many problems in computer science, some of the first bounds shown are consequences of computer science algorithms. The odd-even transposition sort [@K98] and the Benes network [@B65] show $rt(P_n) = n$ and $rt(Q_n) \le 2n-1$, for the path $P_n$ on $n$ vertices and the $n$-dimensional hypercube $Q_n$, respectively. Very few results are known for the exact values of the routing numbers of graphs. Alon, Chung, and Graham [@ACG94] prove 1. $rt(K_n) = 2$ and $rt(K_{n,n}) = 4$; 2. $rt(G) \ge diam(G)$ and $rt(G) \ge \frac{2}{|C|} \min \{|A|,|B|\}$, where $diam(G)$ is the diameter of $G$ and $C$ is a set that cuts $G$ into parts $A$ and $B$; 3. $rt(G) \le rt(H)$ and $rt(T_n) < 3n$, where $H$ is a spanning subgraph of $G$ and $T_n$ is a tree on $n$ vertices; 4. $rt(G_1 \times G_2) \le 2rt(G_1)+rt(G_2)$, and $n \le rt(Q_n) \le 2n-1$. Zhang [@Zheng99] improves their bound on trees, showing $rt(T_n) \le \left \lfloor \frac{3n}{2} \right \rfloor + O(\log n)$. Li, Lu, and Yang [@LLY10] show $n+1 \le rt(Q_n) \le 2n-2$, improving both the previous upper and lower bounds on hypercubes. Among other results, they also give the exact routing number of cycles: $rt(C_n)= n-1$. Furthermore, they made the following conjecture. \[conj\] For $n \ge 5$, if $rt(C_n, \pi) = n-1$, then $\pi$ is the rotation $(123\cdots n)$ or its inverse. The conjecture does not hold for $n=4$; the permutation that transposes two non-adjacent vertices and fixes the other two serves as a counterexample. They verified the conjecture for $n<8$ through a computer search. The conjecture hints at a rather counter-intuitive idea: the worst case permutation on the cycle is one where each pebble is only distance one away from its destination. In this article, we give a proof of the conjecture when $n$ is even. 
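The computer search for small $n$ is easy to reproduce. The following brute-force sketch is our own illustration, not the authors' code: it encodes a configuration as the tuple of pebbles sitting at the vertices (0-indexed) and runs a breadth-first search from the sorted configuration, one round applying an arbitrary nonempty matching of $C_n$. For $n=5$ it recovers $rt(C_5)=4$ and confirms that exactly the rotation and its inverse are extremal.

```python
from collections import deque

def matchings(n):
    """All nonempty matchings of C_n; edges are (i, (i + 1) % n)."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    out = []
    for mask in range(1, 1 << n):
        chosen = [edges[i] for i in range(n) if mask >> i & 1]
        ends = [v for e in chosen for v in e]
        if len(ends) == len(set(ends)):      # pairwise vertex-disjoint
            out.append(chosen)
    return out

def routing_distances(n):
    """dist[c] = rt(C_n, pi), where c[v] is the pebble at vertex v under pi."""
    moves = matchings(n)
    start = tuple(range(n))                  # every pebble home
    dist = {start: 0}
    queue = deque([start])
    while queue:
        c = queue.popleft()
        for m in moves:
            nc = list(c)
            for u, v in m:
                nc[u], nc[v] = nc[v], nc[u]
            tnc = tuple(nc)
            if tnc not in dist:
                dist[tnc] = dist[c] + 1
                queue.append(tnc)
    return dist

n = 5
dist = routing_distances(n)
worst = max(dist.values())
extremal = {c for c, d in dist.items() if d == worst}
print(worst)        # 4 = n - 1
print(extremal)     # the two rotations, matching the conjecture for n = 5
```

Because each matching move is an involution, the BFS distance from the identity configuration equals $rt(C_n,\pi)$; the same loop can be run for $n=6,7$ in a few seconds.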
\[thework\] For even $n \ge 6$, if $rt(C_n, \pi) = n-1$, then $\pi$ is the rotation $(123\cdots n)$ or its inverse. It is worth noting that some new tools are introduced in the proof, beyond the ideas from [@ALSY11] by Albert, Li, Strang, and the last author. Those tools are introduced in Section \[tools\]. In Section \[rotation\], we present a few important lemmas; in Section \[extreme\], we discuss the possible extremal situations, and in Section \[solution\], we discuss how to deal with the extremal situations. A few important notions and tools {#tools} ================================ Spins and Disbursements ----------------------- Let $G = C_n$ and label the vertices of $C_n$ as $v_1, v_2, \dots , v_n$ in a clockwise order. Let the clockwise direction be the positive direction and the counterclockwise direction be the negative direction. There are exactly two paths for pebble $p_i$ to reach its destination, by traveling either in the positive or in the negative direction. Let $d^+(v_i, v_j)$ denote the distance from $v_i$ to $v_j$ when traveling along the cycle in the positive direction and $d^-(v_i, v_j)$ the distance when traveling in the negative direction. Note that $d^+(v_i,v_j) + d^-(v_i,v_j)$ equals $n$ when $i\neq j$ and $0$ when $i = j$. For simplicity, for pebbles $p_i$ and $p_j$, we define $d^+(p_i,p_j)=d^+(v_{\pi(i)}, v_{\pi(j)})$. Consider a routing process of $\pi$ on $C_n$ with pebble set $P=\{p_1, \ldots, p_n\}$. For each pebble $p_i$, let $s(p_i)$, the [*spin*]{} of $p_i$, represent the displacement for $p_i$ to reach its destination from its current position. So, $s(p_i) \in \{ d^+(v_{\pi(i)}, v_i), d^+(v_{\pi(i)}, v_i)-n \}$. The sequence $B =(s(p_1), s(p_2), \ldots, s(p_n))$ is called a [**valid disbursement**]{} of $\pi$. The disbursement describes the direction in which each pebble will move in the routing. Note that the spin of a pebble changes with its movement. Not all possible combinations of spins produce valid disbursements. 
The following lemma gives a necessary and sufficient condition for a set of spins to be a valid disbursement. Let $(s(p_1), s(p_2), \ldots, s(p_n))$ be an assignment of the spins to the pebbles. It is a valid disbursement if and only if $\sum_{p\in P} s(p)= 0$. To see the necessity, we observe that when two pebbles are swapped, one moves forward one step and one moves backward one step, so the sum of spins stays. As $B$ is a valid disbursement, the final spins are all zeroes, so the sum is also zero. For sufficiency, we can move the pebbles one by one along their assigned directions. From this lemma we know that there is at least one pebble $p_i$ with positive spin and one pebble $p_j$ with negative spin in a valid disbursement if $\pi$ is not identity. If we change the spins of $p_i$ and $p_j$ so that they move in the opposite directions, the new spins still give a valid disbursement. We say that we [**flip the spins of $p_i$ and $p_j$**]{} when we apply this change. A valid disbursement $(s(p_1), \ldots, s(p_n))$ is minimized if $\sum_{p\in P} |s(p)|$ is minimized. The following simple fact is very important. If a valid disbursement is minimized, then $s(p_i)-s(p_j)\le n$ for all $i,j\in [n]$. For otherwise we would flip the spins to make the sum smaller. It is not hard to show the converse is also true, so one can apply the flips on a valid disbursement to get a minimized disbursement. We omit the proof here. The following lemma give a characterization of minimized disbursement of a permutation. \[mini-disbursement\] Let $B=(b_1,\cdots, b_n)$ be a valid disbursement of a permutation $\pi$, the following conditions are equivalent:\ (1) $B$ is a minimized disbursement of $\pi$;\ (2) ${\rm max}(b_1,\cdots, b_n)-{\rm min}(b_1,\cdots, b_n)\le n$. An order relation ----------------- It is clear to see that if $s(p_i)-s(p_j)>d^+(p_i,p_j)$, then $p_i$ and $p_j$ will swap at some round in the routing process. 
For that purpose, we define the following order relation on pebbles. Given a disbursement $B$, we call $p_i\succ p_j$ if $s(p_i)-s(p_j) > d^+(p_i, p_j)$. Remark: when we mention the order of pebbles in the text, it is by default associated with the current disbursement. Note that the order relation is transitive, for if $p_i\succ p_j$ and $p_j\succ p_k$, then $s(p_i) - s(p_j)>d^+( p_i, p_j)$ and $s(p_j) - s(p_k)>d^+(p_j, p_k)$, and it follows that $s(p_i)-s(p_k)>d^+(p_i, p_j)+d^+(p_j, p_k)\ge d^+(p_i, p_k)$, so $p_i\succ p_k$. As two pebbles have different destinations, $s(p_i)-s(p_j)\not=d^+(p_i, p_j)$, so if $p_i\succ p_j$ is not true, then $s(p_i)-s(p_j)<d^+(p_i, p_j)$. When $p_i\succ p_j$, we sometimes say that [**$p_i$ is bigger than $p_j$**]{} and [**$p_j$ is smaller than $p_i$**]{}. If $p_i$ is neither bigger nor smaller than $p_j$, we call them [**incomparable**]{}. If all pebbles in set $P_1$ are bigger than all pebbles in $P_2$, we also write $P_1\succ P_2$. The following lemma provides a convenient way to determine order relations. \[induced-order\] Let $x, y, z$ be three pebbles in the clockwise order sitting on the cycle. If $x\succ z$, then $x\succ y$ or $y\succ z$. Furthermore, if $x\succ z$, then $y$ is not smaller than $z$. For otherwise, $s(x)-s(y)<d^+(x,y)$ and $s(y)-s(z)<d^+(y,z)$. It follows that $s(x)-s(z)<d^+(x,y)+d^+(y,z)=d^+(x,z)$, thus $x$ is not bigger than $z$, a contradiction. For the furthermore part, if $x\succ z$ and $z\succ y$, then $s(x)-s(z)\ge d^+(x,z)$ and $s(z)-s(y)\ge d^+(z,y)$, and it follows that $s(x)-s(y)\ge d^+(x,z)+d^+(z,y)>n$, a contradiction. The following lemma says that it is enough to swap two comparable pebbles to route the permutation. \[incomparable\] Assume $B$ is a minimized disbursement of $\pi$. If pebble $p$ is incomparable with all other pebbles, i.e., there exists no pebble $q$ such that $q\succ p$ or $p\succ q$, then $s(p)=0$, i.e., pebble $p$ has arrived at its destination vertex. 
Suppose the pebble $p$ is incomparable to all other pebbles and $s(p)\neq 0$. By symmetry we assume that $s(p)>0$. Let $\pi=\Pi_i \pi_i$ be a cycle decomposition of $\pi$, where $\pi_i=(i_1,\cdots ,i_{r_i})$, i.e., the pebble placed at $v_{i_{k}}$ has destination $v_{i_{k+1}}$ for all $k\le r_i$, with $i_{r_i+1}=i_1$. Let $P_i$ be the set of pebbles on $\pi_i$, and we call $\pi_i$ the orbit of those pebbles. We claim that for each orbit $P_i$, $\sum_{q\in P_i}s(q)=an$ for some integer $a$. To see this, we note that $s(p_{i_k})\in \{d^+(v_{i_{k}}, v_{i_{k+1}}), d^+(v_{i_{k}}, v_{i_{k+1}})-n\}$. Thus if all spins were positive, the sum would be $bn$ for some positive integer $b$. However, each switch of a spin from positive to negative causes a change of $-n$ in the sum, so the sum of spins remains a multiple of $n$. Assume $p=p_{i_1}$ is a pebble of $P_i$. We claim that no pebble in $P_i$ passes $v_{i_2}$ in the negative direction on the way to its destination. Otherwise, assume $p_{i_k}$ $(2<k\le r_i)$ is such a pebble; then we have $s(p_{i_k})+d^+(v_{i_2}, p_{i_k})<0$. Noticing that $p_{i_1}$ is placed at $v_{i_2}$, we have $$s(p_{i_1})>0>s(p_{i_k})+d^+(v_{i_2}, p_{i_k})=s(p_{i_k})+d^+(p_{i_1}, p_{i_k}),$$ hence $p=p_{i_1}\succ p_{i_k}$, a contradiction. Furthermore, we have $s(p_{i_2})>0$. Otherwise, if $s(p_{i_2})<0$, notice that the destination of $p_{i_2}$ is $v_{i_2}$, where $p_{i_1}$ is placed; thus we have $s(p_{i_2})=d^+(p_{i_2}, p_{i_1})-n=-d^+(p_{i_1}, p_{i_2})$, hence $$s(p_{i_1})>0=s(p_{i_2})+d^+(p_{i_1}, p_{i_2})$$ and it follows that $p=p_{i_1}\succ p_{i_2}$, a contradiction. 
The fact $s(p_{i_1})>0$ implies that $p_{i_1}$ will travel from $v_{i_2}$ in the positive direction, and the fact $s(p_{i_2})>0$ implies that $p_{i_2}$ will arrive at its destination $v_{i_2}$ in the positive direction. Since no pebble passes $v_{i_2}$ in the negative direction, we therefore have $\sum_{q\in P_i}s(q)=bn$ for some positive integer $b$. As the sum of all spins is zero, there must exist some orbit $P_j$ with spin sum $cn$ for some integer $c<0$. In particular, there exists a pebble $q\in P_j$ such that $q$ passes $p$ in the negative direction on the way to its destination. So $s(q)+d^+(p, q)<0<s(p)$ and it follows that $p\succ q$, a contradiction. If $\sum_{p\in P} |s(p)|$ is minimized for some disbursement $B$, then\ (a) two pebbles can swap at most once in the routing process;\ (b) the order on the pebbles is transitive. \(a) If two pebbles $p_i$ and $p_j$ swap twice in opposite directions, neither swap is necessary. (Note that this does not change $|B|$.) If they swap twice in the same direction, then $|s(p_i)| + |s(p_j)| > n$. Their spins can be flipped to decrease the absolute sum of $B$. \(b) Suppose $p_i\succ p_j$ and $p_j\succ p_k$. Then, by definition, we have $s(p_i) - s(p_j)>d^+( p_i, p_j)$ and $s(p_j) - s(p_k)>d^+(p_j, p_k)$. It follows that $s(p_i)-s(p_k)>d^+(p_i, p_j)+d^+(p_j, p_k)\ge d^+(p_i, p_k)$, so $p_i\succ p_k$. By the above lemma, in our routing process, we will only swap comparable pebbles. The following lemma says that whether two pebbles swap is determined by the initial disbursement, so we need not keep track of the spins, but only check whether the necessary swaps occur. If $p_i\succ p_j$, then after the swap $p_i$ and $p_j$ are incomparable. If $p_i$ and $p_j$ are incomparable, then in the sorting process, they will always remain incomparable. Suppose that $p_i\succ p_j$ and after the swap of $p_i$ and $p_j$, $p_j\succ p_i$. 
Then $n\ge s(p_i)-s(p_j)\ge d^+(p_i,p_j)+1\ge 2$, and right after the swap of $p_i$ and $p_j$, $s(p_i)-s(p_j)$ decreases by at most $d^+(p_i,p_j)$, and thus remains positive. Let $s'(p_i)$ and $s'(p_j)$ be the new displacements of $p_i$ and $p_j$, respectively. Then $s'(p_i)-s'(p_j)>0$. Therefore $s'(p_j)-s'(p_i)<0$, and $p_j$ cannot be bigger than $p_i$; also, $s'(p_i)-s'(p_j)\le n-2$, which is less than the distance $n-1$ from $p_i$ to $p_j$, thus after the swap $p_i$ cannot still be bigger than $p_j$. If $p_i$ and $p_j$ are incomparable, then in the sorting process, $(s(p_i)-s(p_j))-d^+(p_i,p_j)$ does not change: if a pebble swaps with both, then neither the distance nor $s(p_i)-s(p_j)$ changes; if a pebble swaps only with $p_i$, then $s(p_i)$ increases by one and $d^+(p_i,p_j)$ increases by one; if a pebble swaps only with $p_j$, then $s(p_j)$ increases by one and $d^+(p_i,p_j)$ decreases by one. When $B$ is minimized and two pebbles $p_i$ and $p_j$ satisfy $s(p_i)-s(p_j)=n$, then after flipping the spins of $p_i$ and $p_j$ we still get a minimized disbursement. The following lemma tells us how the order relation changes when we do such a flip. \[flip-spins\] Let $B$ be a minimized disbursement of $\pi$, and $s(p_i)-s(p_j)=n$. If we flip the spins of $p_i$ to $s'(p_i)=s(p_i)-n$ and of $p_j$ to $s'(p_j)=s(p_j)+n$, then for $k,l\not\in \{i,j\}$ we have 1. $p_j\succ p_i$; and the order relation remains unchanged for $p_k$ and $p_l$. 2. $p_k\succ p_i$ if and only if before the flip $p_i$ and $p_k$ were incomparable. Similarly, $p_j\succ p_l$ if and only if before the flip $p_j$ and $p_l$ were incomparable. 3. $p_i$ and $p_k$ are incomparable if before the flip $p_i\succ p_k$. Similarly, $p_j$ and $p_l$ are incomparable if before the flip $p_l\succ p_j$. \(1) As $s'(p_j)-s'(p_i)=(s(p_j)+n)-(s(p_i)-n)=2n-(s(p_i)-s(p_j))=n>d^+(p_j,p_i)$, $p_j\succ p_i$ after the flip. 
For $k,l\not\in \{i,j\}$, the spins of $p_k, p_l$ and the distance $d^+(p_k, p_l)$ do not change by flipping the spins of $p_i$ and $p_j$, so their order relation does not change either. \(2) For $k\notin\{i,j\}$, we know that $$s(p_k)-s'(p_i)-d^+(p_k, p_i)=s(p_k)-(s(p_i)-n)-(n-d^+(p_i,p_k))=s(p_k)-s(p_i)+d^+(p_i,p_k).$$ Note that $p_k$ cannot be bigger than $p_i$ before the flip, for otherwise, $s(p_k)-s(p_j)>s(p_i)-s(p_j)=n$, a contradiction, and $s(p_i)\not=s(p_k)+d^+(p_i, p_k)$ as pebbles must have different destinations. Therefore we have > $s(p_k)-s'(p_i)>d^+(p_k, p_i)$ if and only if $s(p_i)<s(p_k)+d^+(p_i, p_k)$, or $p_k$ is bigger than $p_i$ if and only if $p_k$ and $p_i$ were incomparable. The case for $p_l$ and $p_j$ is similar and we omit the proof. \(3) If $p_i\succ p_k$, then $0<s(p_i)-s(p_k)<n$. Thus $s'(p_i)-s(p_k)=s(p_i)-n-s(p_k)<0<d^+(p_i, p_k)$, and $p_i$ is not bigger than $p_k$ anymore. By (2), $p_k$ is not bigger than $p_i$ as well. The Odd-Even Routing Algorithm ------------------------------ The results on the routing number of $P_n$ were shown using what is known as the [**odd-even routing algorithm**]{}. First we describe the odd-even routing algorithm on the path. Label the vertices of $P_n$ as $v_1,v_2, \dots , v_n$. We say an edge $e = v_iv_{i+1}$ is an odd edge if $i$ is odd; otherwise $i$ is even and $e$ is an even edge. Note that the odd edges and the even edges partition the edge set of $P_n$ into two maximal matchings. During the first step and every other odd step of the routing process, we consider only the odd edges. We select a subset of the odd edges and swap the pebbles on the endpoints. During the even steps of the routing process we consider only the even edges and act similarly. During each step, the edges that are selected are those where swapping the pebbles takes them closer to their destinations. We can generalize this algorithm to even cycles. 
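As a concrete sketch of that generalization (our own illustration, not code from the paper), the routine below routes a permutation on an even cycle: it first builds a minimized disbursement, then alternates the two perfect matchings of the cycle and swaps across an edge exactly when the spin at its tail exceeds the spin at its head. We make no claim that this bare version always attains the $n-1$ bound; termination follows from the fact that each comparable pair swaps at most once.

```python
def odd_even_route(pi):
    """Route pi on the even cycle C_n (0-indexed; pebble i starts at
    vertex pi[i] and must reach vertex i); returns the number of rounds."""
    n = len(pi)
    assert n % 2 == 0
    peb = [0] * n                             # peb[v] = pebble at vertex v
    for i in range(n):
        peb[pi[i]] = i
    s = [(i - pi[i]) % n for i in range(n)]   # clockwise displacements
    for i in sorted(range(n), key=lambda j: -s[j])[:sum(s) // n]:
        s[i] -= n                             # minimized disbursement
    rounds = 0
    while any(s):
        for u in range(rounds % 2, n, 2):     # one of the two matchings
            v = (u + 1) % n
            a, b = peb[u], peb[v]
            if s[a] > s[b]:                   # comparable adjacent pebbles
                peb[u], peb[v] = b, a
                s[a] -= 1                     # a moved one step clockwise
                s[b] += 1                     # b moved one step backwards
        rounds += 1
        assert rounds <= n * n                # each pair swaps at most once
    assert peb == list(range(n))              # every pebble reached home
    return rounds

# The rotation needs n - 1 rounds here, matching rt(C_n) = n - 1:
print(odd_even_route([(i + 1) % 8 for i in range(8)]))   # -> 7
```

In the rotation example one pebble keeps the long positive spin $n-1$ after minimization and marches around the cycle, swapping with one backward-moving pebble per round, which is why exactly $n-1$ rounds are used.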
It is well known that the edges of an even cycle can be partitioned into two perfect matchings, and we call the edges in one perfect matching even and those in the other odd. Thus once we specify one edge to be odd (or even), the parity of every edge is determined. Given a particular disbursement $B$, each pebble is given a particular spin. During odd steps we choose the matching of odd edges, and two pebbles on an odd edge $e_i=v_iv_{i+1}$ swap only if the spin of the pebble at vertex $v_i$ is greater than the spin of the pebble at vertex $v_{i+1}$. During even steps we do the same using only even edges. In what follows, if we choose $e$ to be an odd edge, we call this algorithm the [**odd-even routing algorithm with odd edge $e$**]{}. Note that this algorithm is not defined on cycles of odd length, since the edges that would be labeled as odd edges do not form a matching. The Window of a Pebble ---------------------- Let $G=C_n$ in this section, where $n\ge 4$ is an even integer. We fix a minimized disbursement $B$ of $\pi$ with associated order $\succ$. When the odd-even routing algorithm is applied, we can count the number of steps necessary for each pebble to reach and stay at its destination vertex; the maximum of all these values is then an upper bound on $rt(C_n, \pi)$. For an arbitrary pebble $A$, let $$U=\{p\in P: p\succ A\}\;\;{\rm and}\;\;W=\{p\in P: A\succ p\}.$$ By Lemma \[incomparable\], the routing process ends when no pebble has pebbles bigger or smaller than it, therefore $s(A)=|W|-|U|$. By Lemma \[induced-order\], there are no $u\in U, w\in W$ that are ordered as $u, w, A$ or $A, u, w$ along the positive direction. So if $U=\{u_1, u_2, \ldots, u_r\}$ and $W=\{w_1, w_2, \ldots, w_t\}$, then we may assume that the pebbles in $U\cup W$ and $A$ are ordered along the positive direction on the cycle as $u_r, \ldots, u_1, A, w_1, \ldots, w_t$. We denote the set of pebbles incomparable to $A$ between $A$ and $w_t$ (between $u_r$ and $A$ resp.)
by $X$ ($Y$ resp.). A [*segment*]{} is a sequence of consecutive pebbles. If all the elements in a segment are from $U$, then we call it a $U$-segment, and similarly for $W$-segments, $X$-segments and $Y$-segments. So we can group the pebbles between $u_r$ and $w_t$ along the positive direction as $$win(A)=(U_k, Y_k, U_{k-1}, \ldots, U_1, Y_1, A, X_1, W_1, \ldots, X_l, W_l),$$ where $X_1, Y_1$ may be empty, and $win(A)$ is called the [*initial window*]{} of $A$. We denote the set of all other pebbles by $Z$. So sometimes we write $\pi$ as $$\pi=(Z, U_k, Y_k, U_{k-1}, \ldots, U_1, Y_1, A, X_1, W_1, \ldots, X_l, W_l).$$ By transitivity, we have $u_i\succ w_j$ since $u_i\succ A\succ w_j$ for all $1\le i\le r$ and $1\le j\le t$, and in particular, $u_r\succ w_t$, hence $n\ge s(u_r)-s(w_t)>d^+(u_r, w_t)$. \[propWin\] Let $win(A)=(U_k, Y_k, U_{k-1}, \ldots, U_1, Y_1, A, X_1, W_1, \ldots, X_l, W_l)$. Then 1. If $i\ge j$, then $u\succ y$ for all $u\in U_i,\;y\in Y_j$; 2. If $i\ge j$, then $w\prec x$ for all $w\in W_i,\;x\in X_j$; Let $i\ge j$ and $u\in U_i, y\in Y_j$. Then $y$ is between $u$ and $A$ along the positive direction, and we have $d^+(u,A)=d^+(u, y)+d^+(y,A)$. The incomparability of $y$ and $A$ implies that $s(y)\le s(A)+d^+(y, A)$, and $u\succ A$ implies $s(u)>s(A)+d^+(u, A)$, hence we have $s(u)>s(A)+d^+(u,A)=s(A)+d^+(u,y)+d^+(y,A)\geq s(y)+d^+(u,y)$, and it follows that $u\succ y$. Similarly for $W$ and $X$. Consecutive moves and rotation permutations {#rotation} =========================================== \[consecutive-swaps\] Let $p$ be a pebble and $Q$ a segment of pebbles with $p\succ Q$ (or $Q\succ p$). Then once $p$ starts to swap with a pebble in $Q$, $p$ will not stop swapping until $p$ has swapped with all pebbles in $Q$ (in the following $|Q|-1$ or more steps).
If the pebbles in $Q$ that have yet to swap with $p$ remain a segment (whether in a different order or not) during the routing process, then it is clear that $p$ swaps with $Q$ consecutively. Also note that if some pebble smaller than a pebble in $Q$ is between pebbles in $Q$, this pebble is also smaller than $p$ by transitivity, and thus will not delay the movement of $p$. Similarly for a pebble smaller than $p$. If a pebble smaller than $p$ is mixed into $Q$, we call the resulting segment [*an enlargement of $Q$*]{}. If $Q'$ is an enlargement of $Q$, then $p\succ Q'$ and $Q'$ is a segment. Note that we shall similarly consider the enlargement of the yet-to-swap-with-$p$ pebbles in $Q$ if necessary. Consider the initial window of $p$ and let the segments between $p$ and $Q$ be $$p, X_1, W_1, \ldots, X_k, W_k=Q,$$ where pebbles in $W_i$ with $i\le k$ are all smaller than $p$, and pebbles in $X_i$ with $i\le k$ are incomparable with $p$ (thus by Lemma \[induced-order\], are bigger than pebbles in $Q$). We claim that at most one pebble from $X=\cup_{i=1}^k X_i$ is between any consecutive pair of pebbles in an enlargement of $Q$ (note that if a pebble $u\succ p$ has already swapped with $p$, then we consider $u$ as a pebble incomparable with $p$, thus in $X$ as well). For otherwise, let step $s$ be the first step such that some pebbles $x, x'\in X$ and $q,q'$ in an enlargement of $Q$ are ordered consecutively as $q, x, x', q'$. Since this is the first such step, $qx$ and $x'q'$ were edges to be swapped in the previous step. But the edge $qx$ would have swapped $q$ and $x$, giving $x, q, x',q'$ in step $s$, a contradiction. Assume that $p$ swaps with $q_1\in Q$ in step $s$, and first stops at step $t$ before finishing swapping with $Q$. Then before step $t-1$, the pebbles following $p$ are $z_1, z_2, z_3, z_4$ with $p\succ z_1$. Note that at most one of $z_2$ and $z_3$ is from $X$, so at least one of them is smaller than $p$.
At step $t-1$, $pz_1, z_2z_3$ are the edges to swap; if $z_2\in X$ then $z_3\not\in X$, thus $p\succ z_3$, therefore after the swap we have $z_1, p, z_3, z_2$ and $p, z_3$ swap at step $t$; and if $z_2\not\in X$, then $p\succ z_2$, and after the swap we have either $z_1, p, z_3, z_2$ or $z_1, p, z_2, z_3$; the former case occurs if $z_2\succ z_3$, and then $p\succ z_3$, so $p, z_3$ swap in step $t$, while in the latter case $p, z_2$ swap in step $t$. \[rot\] Suppose $\pi$ is a rotation permutation such that $\pi(a) = a + q \pmod{n}$ for some nonzero integer $q$, where $ -\frac{n}{2} < q \le \frac{n}{2}$. Then, $rt(C_n, \pi) = n - |q|$. By symmetry we only consider the case when $ q>0$. We first show $rt(C_n, \pi) \ge n-q $. For each pebble $p$, the spin of $p$ is either $n-q$ or $-q$. Since the sum of spins is zero, there must be exactly $q$ pebbles with positive spin and $n-q$ pebbles with negative spin. So, $n-q \le rt(C_n, \pi)$. Now, we show $rt(C_n, \pi) \le n-q$. We order the pebbles clockwise as $p_1, p_2, \ldots, p_n$ so that the $p_{2i-1}$ with $1\le i\le q$ have positive spin $n-q$ and the remaining $n-q$ pebbles have negative spin $-q$. We use an odd-even routing algorithm so that $p_1p_2$ is an odd edge. As $q\le n/2$, no two pebbles with spin $n-q$ are adjacent. In the routing process, $p_1$ will be paired with $p_2, p_4, \ldots, p_{2q}, p_{2q+1}, \ldots, p_n$ in the first $n-q$ steps, and thus reaches its destination; similarly for all other pebbles with positive spin. Once all pebbles with positive spin have reached their destinations, there is no order relation left, so every pebble is in place. So $\pi$ is routed in $n-q$ steps. Extremal Windows {#extreme} ================ Now let us consider the routing process for an arbitrary pebble $A$. As defined, let $U_1, U_2, \ldots, U_k$ and $W_1, W_2, \ldots, W_l$ be the segments comparable with $A$. In the routing process, those segments may be mixed and $A$ may not swap with them in their initial order.
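As an aside, the odd-even routing algorithm on an even cycle, and the count $n-|q|$ in the Rotation Lemma, can be checked with a small simulation. This is a sketch, not part of the proof; the spins follow the construction in the proof above (up to the choice of orientation), and the edge $v_1v_2$ is taken to be odd:

```python
def route_cycle(spins):
    """Odd-even routing on the even cycle C_n driven by a disbursement.

    spins[v] is the spin (signed displacement; positive direction =
    increasing vertex index mod n) of the pebble starting at vertex v.
    On each selected edge (v_i, v_{i+1}) the pebbles swap exactly when
    the spin at v_i exceeds the spin at v_{i+1}; a swap decreases the
    forward mover's remaining spin by 1 and increases the other's by 1.
    Returns the number of steps until every remaining spin is zero.
    """
    spin = list(spins)
    n = len(spin)
    assert n % 2 == 0 and sum(spin) == 0
    steps = 0
    while any(spin):
        first = 0 if steps % 2 == 0 else 1  # alternate the two matchings
        for i in range(first, n, 2):
            j = (i + 1) % n
            if spin[i] > spin[j]:
                spin[i], spin[j] = spin[j] + 1, spin[i] - 1
        steps += 1
    return steps


def rotation_spins(n, q):
    """Spins for a rotation with 0 < q <= n/2: q pebbles of spin n - q
    at alternating vertices, the remaining n - q pebbles of spin -q."""
    s = [-q] * n
    for i in range(q):
        s[2 * i] = n - q
    return s
```

For example, `route_cycle(rotation_spins(6, 2))` finishes in $4 = n - q$ steps, as the Rotation Lemma predicts.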
However, by Lemma \[consecutive-swaps\], if $A$ starts to swap with a pebble in one segment $Q$, then $A$ will swap with pebbles in the following $|Q|$ steps (not necessarily with elements of $Q$ though). In particular, if $A$ starts to swap with $W_i$, then in the following $|W_i|$ steps $A$ will swap with pebbles that are smaller than $A$, and thus must be $W$-pebbles as well; we call those pebbles $W_i'$. So we know $|W_i'|=|W_i|$ and both contain only pebbles smaller than $A$. Similarly we define $U_i'$ for $1\le i\le k$. Note that $U'$ and $W'$ are not necessarily segments any more. So $A$ will meet the $U$-sequences in the order $U_1', U_2', \ldots, U_k'$, and the $W$-sequences in the order $W_1', W_2', \ldots, W_l'$, but may meet these sequences in a mixed order. Let $Z_1, Z_2, \ldots, Z_{k+l}$ be these sequences in the order they meet $A$, so $Z_i\in \{U_1', \ldots, U_k', W_1', \ldots, W_l'\}$. By Lemma \[consecutive-swaps\] and the above definition, once $A$ meets $Z_i$, $A$ swaps with $Z_i$ in the following $|Z_i|$ steps, but $A$ may have to wait to meet $Z_{i+1}$ after finishing swapping with $Z_i$. For $i=1,2,\ldots,k+l$, let $\omega_i$ be the number of steps $A$ waits between swapping with the last pebble of $Z_{i-1}$ and the first pebble of $Z_i$ (for $i=1$, the number of steps $A$ waits before swapping with the first pebble of $Z_1$). We call $\omega_i$ the waiting time between $Z_{i-1}$ and $Z_i$. We thus get $k+l$ nonnegative numbers $\omega_i$, $i=1,2,\ldots,k+l$. Now suppose $\alpha$ is the largest index such that $\omega_\alpha\neq 0$. First we assume that $Z_\alpha$ is the $t$-th $W$-sequence. Note that a swap of $A$ and a $W$-pebble (or $A$ and a $U$-pebble) cannot be immediately followed by a swap of $A$ and a $U$-pebble (or $A$ and a $W$-pebble, resp.) because of the parity. So as $\omega_{\alpha+1}=\cdots=\omega_{k+l}=0$, $A$ will swap with $\sum_{j=t}^{l}|W_j'|$ $W$-pebbles in the following steps without stopping until it arrives at its destination. Let $w'$ be the first pebble $A$ meets in $W_t'=Z_{\alpha}$.
Then \(i) As $\omega_{\alpha}>0$, $W_t'$ will not merge with another $W$-sequence in the routing process before encountering $A$; thus some pebble in $W_t'$ is always paired with an $X$-pebble in the routing process from the first or the second step (according to the parity of the edges), and therefore moves in the counterclockwise direction after that. So $w'$ must be one of the two leftmost pebbles in $W_t'$. \(ii) $A$ begins to swap with $Z_\alpha$ if and only if $w'$ has swapped with all the $X$-pebbles in $\cup_{j=1}^t X_j$ and all the $U$-pebbles in $\cup_{i=1}^k U_i$. Thus the total number of steps needed for $A$ to be in place must be $$\sum_{j=1}^{t}|X_j|+\sum_{j=t}^{l}|W_j|+\sum_{i=1}^{k}|U_i|+\delta,$$ where $\delta=0$ if $w'$ is paired with an $X$-pebble in the first step, and otherwise $\delta=1$. For pebble $A$ to be sorted within $n-1$ steps, we must have $$ \sum_{i=1}^k |U_i| + \sum_{j=t}^l |W_j| + \sum_{j=1}^{t} |X_j|+\delta \le n-1.$$ Notice that $O+\sum_{i=1}^k (|U_i|+ |Y_i|) + \sum_{j=1}^l (|W_j| + |X_j|)=n-1$, where $O$ is the number of pebbles outside the range of $win(A)$.
Then we have $$(n-1) - \sum_{j=t+1}^l |X_j| - \sum_{i=1}^{k} |Y_i| - \sum_{j=1}^{t-1} |W_j| + \delta - O \le n-1,$$ which implies that every permutation that takes $n-1$ steps to route must contain a pebble $A$ such that $$\label{eq1} \sum_{j=1}^k |Y_j| + \sum_{j=t+1}^{l} |X_j| +\sum_{j=1}^{t-1} |W_j| + O= \delta, \text{ where $\delta\in \{0,1\}$.}$$ Similarly, if $Z_{\alpha}=U_t'$ and $u'\in U_t'$ is the first pebble $A$ meets in $U_t'$, then the total number of steps needed for $A$ to be in place must be $$\sum_{j=1}^{t}|Y_j|+\sum_{j=t}^{k}|U_j|+\sum_{i=1}^{l}|W_i|+\delta,$$ where $\delta=0$ if $u'$ is paired with a $Y$-pebble in the first step, and otherwise $\delta=1$; it follows that every permutation that takes $n-1$ steps to route must contain a pebble $A$ such that $$\label{eq2} \sum_{j=1}^l |X_j| + \sum_{j=t+1}^{k} |Y_j| +\sum_{j=1}^{t-1} |U_j| + O = \delta, \text{ where $\delta\in \{0,1\}$.}$$ \[extremal-windows\] Every permutation that takes $n-1$ steps to route must contain a pebble $A$ whose window is one of the following: 1. $|win(A)|=n$ and $win(A)=(A, X, W)$ (or $win(A)=(U, Y, A)$). 2. $|win(A)|=n-1$ and $win(A)=(U, A, X, W)$ (or $win(A)=(U, Y, A, W)$). 3. $|win(A)|=n$, and $win(A)=(A, X_1, W_1, X_2, W_2)$ and ${\rm min}(|W_1|, |X_2|)=1$ (or $win(A)=(U_2, Y_2, U_1, Y_1, A)$ and ${\rm min}(|Y_2|, |U_1|)=1$).\ By symmetry, we may assume that (\[eq1\]) holds. As $\delta=0$ or $1$, all the terms on the left-hand side of (\[eq1\]) are zero or one. If $O=0$ and $\delta=0$, then all terms are zero and $|win(A)|=n$. Note that $\sum_{j=1}^k |Y_j|=0$ implies that $Y=\emptyset$, thus there is at most one $U$-set; $\sum_{j=t+1}^l |X_j|=0$ means there are at most $t$ non-empty $X$-sets (the first $t$ sets); $\sum_{j=1}^{t-1} |W_j|=0$ means that $|W|=0$ (if $t\ge 2$) or there is at most one non-empty $W$-set (that is, $W_1$ when $t=1$). But if $t=2$, $|W|=0$ implies all $X$-sets are actually outside of the window of $A$, so $|X|=0$ as well.
Therefore, $win(A)=(U, A, X, W)$, where $X=\emptyset$ whenever $W=\emptyset$. If $\delta=1$ and $O=1$, then the argument is the same as above, except that $|win(A)|=n-1$. So $win(A)=(U,A,X,W)$ and $|win(A)|=n-1$. If $\delta=1$ and $O=0$, then $|win(A)|=n$, and one of the first three terms on the left-hand side is one. If $\sum_{j=1}^k |Y_j|=1$, then $|Y|=1$, so there are at most two $U$-sets, $U_1$ and $U_2$, and when there are two, $Y_1=\emptyset$ and $|Y_2|=1$. Together with $\sum_{j=t+1}^l |X_j|=0$ and $\sum_{j=1}^{t-1} |W_j|=0$, we have $win(A)=(U, A, X, W), (U_2, y, U_1, A)$ or $(U_2, y, U_1, A, X, W)$. If $\sum_{j=1}^k |Y_j|=0, \sum_{j=t+1}^l |X_j|=1$ and $\sum_{j=1}^{t-1} |W_j|=0$, then $Y=\emptyset$, $t=1$ and $|X_2|=1$. So $win(A)=(U, A, X_1, W_1, x_2, W_2)$. If $\sum_{j=1}^k |Y_j|=0, \sum_{j=t+1}^l |X_j|=0$ and $\sum_{j=1}^{t-1} |W_j|=1$, then $Y=\emptyset$, $t=2$ and $|W_1|=1$, and $X_i=\emptyset$ for $i\ge 3$. So $win(A)=(U, A, X_1, w_1, X_2, W_2)$. However, when $|win(A)|=n$, we claim that one of $U_k$ and $W_l$ must be empty. For otherwise, let $u_p\in U_k$ and $w_q\in W_l$ be the furthest $U$-pebble and $W$-pebble from $A$, respectively. As $|win(A)|=n$, no pebble is bigger than $u_p$ and no pebble is smaller than $w_q$, so $s(u_p) \ge 1+|Y|+|W|$ and $s(w_q) \le -(1+|X|+|U|)$, and it follows that $s(u_p)-s(w_q)\ge n+1$, a contradiction. So we have the desired extremal windows in the lemma. The proof {#solution} ========= In this section, we show how to deal with the extremal situations in Lemma \[extremal-windows\]. We will consider the structures of the windows more specifically, and the following concepts are useful. Let $q_1, q_2, \ldots, q_s$ be a sequence of consecutive pebbles.
It is called a [*block with head $q_1$*]{} if the only order relations among them are $q_1\succ q_i$ for $i\ge 2$; it is called a [*block with tail $q_s$*]{} if the only order relations among them are $q_i\succ q_s$ for $i<s$; and it is called an [*isolated block*]{} if there is no order relation among them. \[spin\] Extremal window type 1: $win(A)=(A,X,W)$ and $|win(A)|=n$ --------------------------------------------------------- If a permutation $\pi$ needs $n-1$ steps to route and some pebble $A$ in $\pi$ has $win(A)=(A, X, W)$ and $|win(A)|=n$, then $\pi$ is $(12\cdots n)$ or its inverse. Let $X=x_1x_2\ldots x_a$ and $W=w_1w_2\ldots w_b$ in the clockwise order. Consider the spins of $A$ and $w_b$. Note that $s(A)=|W|$ and $s(w_b) \le -(1+|X|)$, since no pebble is smaller than $w_b$ (by Lemma \[induced-order\]) and $w_b$ needs to swap with $A$ and all the pebbles in $X$. So $s(A)-s(w_b)\ge n$, and it follows that $s(w_b)=-1-|X|$ and $w_b$ only swaps with the pebbles in $X\cup \{A\}$. Now again by Lemma \[induced-order\], no pebble is smaller than $w_{b-1}$, and by the same argument we have $s(w_{b-1})=-1-|X|$, and inductively $s(w_i)=-1-|X|$ for $1\le i\le b$. Consider the spin of $x_1$: clearly $s(x_1)\geq |W|$, since $x_1$ needs to swap with all pebbles in $W$ and no pebble is bigger than $x_1$ (analogously to Lemma \[induced-order\]); then $s(x_1)-s(w_b)\ge n$, and it follows that $s(x_1) =|W|$ and $x_1\succ W$ is the only order relation involving $x_1$. Inductively we have $s(x_i) =|W|$ for all $x_i\in X$, and $\{A\}\cup X\succ W$ is the only order relation in the permutation. So along the positive direction every pebble is $|W|$ steps away from its destination, and $\pi$ is a rotation. By the Rotation Lemma \[rot\], the only rotations that require $n-1$ steps to route are $\pi=(12\cdots n)$ and its inverse. Extremal window type 2: $win(A)=(U, A, X, W)$ and $|win(A)|=n-1$.
----------------------------------------------------------------- \[block-2\] If a permutation $\pi$ contains a pebble $A$ such that $win(A)=(U, A, X, W)$ and $|win(A)|=n-1$ and $\pi=(z, U, A, X, W)$, then $U$ and $W$ are isolated blocks and $X$ can be decomposed into isolated blocks and blocks with tails. Furthermore, $s(z)=c\le 0$, and if $c<0$, then the block of $X$ next to $W$, say $X_0$, is an isolated block with $-c$ pebbles, and the only other order relations are $U\cup\{A\}\cup X\succ W$ and $X_0\succ \{z\}$. Let $U=u_1u_2\ldots u_p, X=x_1x_2\ldots x_a$ and $W=w_1w_2\ldots w_b$ along the positive direction. Consider the spins of $u_1$ and $w_b$. As no pebble is bigger than $u_1$ and $u_1$ is bigger than $A$ and $W$, $s(u_1)\ge 1+|W|=1+b$. Similarly, no pebble is smaller than $w_b$ and $w_b$ is smaller than $U, A, X$, so $s(w_b)\le -(1+|U|+|X|)=-(1+a+p)$. So $s(u_1)-s(w_b)\ge 1+b+1+a+p=n$, and the equalities must hold. So $s(u_1)=1+b, s(w_b)=-(1+a+p)$, and the only order relations involving $u_1$ and $w_b$ are $u_1\succ \{A\}\cup W$ and $U\cup \{A\}\cup X\succ w_b$. Inductively we can consider $u_2$ and $w_{b-1}$, and then all pebbles in $U$ and $W$, and conclude that $U\cup \{A\}\cup X\succ W$ is the only order relation involving $U$ and $W$. Now consider the spins of pebbles in $X$. As $s(w_b)=-(1+a+p)$ and $s(x)-s(w_b)\le n$ for each $x\in X$, we have $s(x)\le b+1$. Note that $z$ cannot be bigger than any pebble in $X$, for otherwise $z\succ W$, contradicting what we just concluded. But $z$ may be smaller than some pebbles in $X$, thus $s(z)\le 0$. Consider $x_1$. As no pebble is bigger than $x_1$ (again analogously to Lemma \[induced-order\]) and $x_1\succ W$, $s(x_1)\ge |W|=b$. So $s(x_1)\in \{b, b+1\}$. If $s(x_1)=b$, then $x_1\succ W$ is the only order relation involving $x_1$, and we inductively consider $x_2$.
If $s(x_1)=b+1$, then $x_1$ is bigger than only one other pebble besides those in $W$, either some $x_i\in X$ or $z$; if $x_1\succ z$, then $x_j\succ z$ for $1\le j\le a$ by Lemma \[induced-order\], and we can inductively conclude $s(x_j)=b+1$, thus $X$ is an isolated block and $X\succ z$; if $x_1\succ x_i$ for some $2\le i\le a$, then $x_1\succ x_j$ for $1< j\le i$ by Lemma \[induced-order\], and no other pebble in $X$ is smaller than $x_i$, for otherwise it would be smaller than $x_1$, which contradicts what we just concluded. So $x_1x_2\ldots x_i$ is a block with head $x_1$. Now we similarly consider $x_{i+1}$ and get the block partition of $X$. Now we are ready to show that such permutations can be routed in $n-2$ steps. If a permutation $\pi$ contains a pebble $A$ such that $win(A)=(U, A, X, W)$ and $|win(A)|=n-1$ and $\pi=(z, U, A, X, W)$, then $\pi$ can be routed in at most $n-2$ steps. First we assume that $X\not=\emptyset$. Let $\pi=zu_1\ldots u_kAx_1\ldots x_aw_1\ldots w_b$ in the clockwise order, with $u_i\in U, x_i\in X$ and $w_i\in W$. We use an odd-even routing algorithm so that $x_aw_1$ is an odd edge. We will make use of the structure in Lemma \[block-2\]. By Lemma \[consecutive-swaps\], $x_a$ swaps with $w_1$ in the first step, thus swaps with $W$ in the following $|W|-1$ steps, so $w_b$ meets (i.e., is paired with a pebble in) $U\cup \{A\}\cup X$ after $|W|-1$ steps; then $w_b$ swaps with $U\cup \{A\}\cup X$ in the following $|U\cup \{A\}\cup X|$ steps, so it takes $|W|-1+|U\cup \{A\}\cup X|=n-2$ steps for $w_b$ to be in place. As a pebble in $U\cup \{A\}\cup X$ has to pass $W-w_b$ to meet $w_b$, all pebbles in $W$ will be in place after $n-2$ steps. Similarly, $w_1$ swaps with a pebble in $X\cup U\cup \{A\}$ in the first step, and it will swap with them in the following $|X\cup U\cup \{A\}|$ steps.
So $w_1$ will have met $A$ in $|X\cup U|$ steps, and in the meantime $A$ has swapped with $U$; in other words, $A$ has swapped with $U$ and meets $W$ after $|X\cup U|$ steps, and it takes $|W|$ steps for $A$ to swap with $W$, so $A$ will be in place after $|X|+|U|+|W|=n-2$ steps. As $A$ and $W$ are in place after $n-2$ steps, $U$ is in place after $n-2$ steps as well, since the only order relation on $U$ is $U\succ \{A\}\cup W$. So now we only need to check that all other order relations are taken care of within $n-2$ steps. A tail $x_i$ in a block $X'\subseteq X$ is paired with its block in the first step (if $x_{i-1}x_i$ is an odd edge), the second step (if $x_{i-1}x_i$ is an even edge and $x_i\not=x_a$), or the $(|W|+1)$-th step (if $x_i=x_a$); in any case, $x_i$ will be swapping with its block or $W$ in the next $|X'|+|W|$ steps, so it will be in place after at most $n-2$ steps. As $x_1$ meets $W$ in the first step, $x_1$ swaps with $W$ in the next $|W|-1$ steps, thus meets $z$ after $|W|$ steps; after that, $z$ swaps with the isolated block bigger than it, in at most $|s(z)|\le |X|$ steps, so $z$ will be in place after $|W|+|X|<n-2$ steps. Now we assume $X=\emptyset$ and $\pi=zu_1\ldots u_kAw_1\ldots w_b$ in the clockwise order. By Lemma \[block-2\], $s(u)=1+b$ and $s(w)=-1-k$ for $u\in U, w\in W$, and the only order relation is $U\cup \{A\}\succ W$. We first flip the spins of $u_1$ and $w_b$. Then the order relations are $U-u_1\succ \{A\}\cup (W-w_b)$, $w_b\succ (W-w_b)\cup \{z, u_1\}$ and $(U-u_1)\cup \{z, w_b\}\succ u_1$ by Lemma \[flip-spins\]. We use an odd-even routing algorithm so that $Aw_1$ is an odd edge. Similarly to the above, $u_k$ meets $W-w_b+A$ in the second step, thus swaps with them all in the following $|W|$ steps; in other words, $A$ meets $u_k$, and thus $U-u_1$, after $|W|$ steps, by which time $A$ has swapped with $W-w_b$. So it takes $|W|+|U|-1=n-3$ steps for $A$ to be in place.
As $w_1$ meets $U-u_1+A$ in the first step, it will swap with them in the following $|U|$ steps, thus $w_1$ meets $u_2$ in $|U|-1$ steps; equivalently, $u_2$ meets $W-w_b+A$ after $|U|-1$ steps, and then $u_2$ will swap with $W-w_b+A$ in the following $|W|$ steps. Meanwhile, $u_1$ meets $\{z, w_b\}$ in the first or second step, and it takes up to three steps for $u_1$ to swap with $z$ and $w_b$; so $u_2$ will meet $u_1$ in the $\max\{|U|-1+|W|, 3\}$-th step; as $n=|W|+|U|+2\ge 6$, $u_2$ swaps with $u_1$ after $|U|-1+|W|$ steps, so $u_2$ will be in place after $|U|+|W|=n-2$ steps. As $u_2$ is the furthest pebble from $u_1$ along the negative direction, all pebbles in $U-u_1$ will be in place after $n-2$ steps. It also follows that $u_1$ is in place after $n-2$ steps, as the only order relations involving $u_1$ are $(U-u_1)\cup \{z, w_b\}\succ u_1$. To show that every pebble is in place after $n-2$ steps, we just need to further show that $z$ and $w_b$ will be in place after $n-2$ steps, because all the remaining order relations involve them. As $z$ is paired with one of $u_1$ and $w_b$ in the first step, and with the other by the third step, $z$ will be in place in $4\le n-2$ steps. As shown above, $w_1$ swaps with $U-u_1+A$ in the first $|U|$ steps, and $w_b$ swaps with $z$ and $u_1$ in the first three steps, so $w_b$ meets $w_1$ in at most $\max\{|U|,3\}$ steps, and swaps with $W-w_b$ in the following $|W|-1$ steps; therefore $w_b$ will be in place after $\max\{|U|,3\}+|W|-1$ steps, which is at most $n-2$ if $|U|\ge 2$, so the only troublesome case is $|U|=1$. But in this case, instead of letting $Aw_1$ be an odd edge, we let $u_1A$ be odd; then by symmetry we have no trouble unless $|W|=1$, which would give $|U|=|W|=1$ and $n=4$, a contradiction to $n\ge 6$. Extremal window type 2a: $win(A)=(A, X, W)$ and $|win(A)|=n-1$. ---------------------------------------------------------------- This is the case of $win(A)=(U, A, X, W)$ with $U=\emptyset$.
In this case, the spins in $W$ are not fixed anymore, so $X$ or $W$ could have some freedom in their spins, but only one of them can have a nontrivial block decomposition. More specifically, we have the following structure lemma. (block decomposition)\[block-2a\] If a permutation $\pi$ has a pebble $A$ with $win(A)=(A,X,W)$ and $|win(A)|=n-1$ and $\pi=(z, A, X, W)$, then one of the following must be true: 1. if $s(z)=c>0$, then $X$ is an isolated block and $W$ can be partitioned into isolated blocks and blocks with heads so that the block next to $X$ is isolated with $c$ pebbles, all smaller than $z$, and the only other order relation is $\{A\}\cup X\succ W$. 2. if $s(z)=c<0$, then $W$ is isolated and $X$ can be partitioned into isolated blocks and blocks with tails so that the block next to $W$ is isolated with $|c|$ pebbles, all bigger than $z$, and the only other order relation is $\{A\}\cup X\succ W$. 3. if $s(z)=0$, then either $X$ can be partitioned into isolated blocks and blocks with tails and $W$ is an isolated block, or $W$ can be partitioned into isolated blocks and blocks with heads and $X$ is an isolated block, and the only other order relation is $\{A\}\cup X\succ W$. The proof of this lemma is very similar to that of Lemma \[block-2\], but for completeness we include a proof below. Let $\pi=zAx_1x_2\ldots x_aw_1w_2\ldots w_b$ along the positive direction, where $x_i\in X$ and $w_i\in W$. Clearly, $s(A)=|W|=b$. As $z$ is incomparable with $A$, no pebble in $W$ is bigger than $z$ (but some could be smaller than $z$), thus $z$ can only cause pebbles in $W$ to move one step along the negative direction. Similarly, no pebble in $X$ is smaller than $z$ (but some could be bigger than $z$), thus $z$ can only cause pebbles in $X$ to move one step along the positive direction. For $w\in W$, $|s(w)|+|s(A)|\le n$, thus $|s(w)|\le n-b=a+2$. Consider $w_b$. As no pebble is smaller than $w_b$ by Lemma \[induced-order\], $s(w_b)<0$, thus $s(w_b)\le -(|X|+1)=-(a+1)$.
Therefore $s(w_b)\in \{-(a+1), -(a+2)\}$, and $s(w_b)=-(a+2)$ if and only if $w_b$ is smaller than $z$ or some pebble in $W$, but not both. If $w_b$ is not smaller than $z$ or any other pebble in $W$, then we call $w_b$ [*isolated*]{}, and in this case $s(w_b)=-(a+1)$. If $w_b$ is smaller than $w_i$, then $w_j$ with $i<j<b$ is not comparable with $w_b$ (as $w_b$ can have only one bigger pebble in $W$), and must be smaller than $w_i$ by Lemma \[induced-order\]. Note that no pebble $w_l$ with $l<i$ can be bigger than $w_i$, as otherwise it would be bigger than $w_b$, which is impossible. Now we can inductively conclude that $w_{b-1}, \ldots, w_{i+1}$ all have spin $-(a+2)$ and are smaller than $\{A, w_i\}\cup X$. That is, $\{w_i, w_{i+1}, \ldots, w_b\}$ is a block with head $w_i$. If $w_b$ is smaller than $z$, then $s(w_b)=-(a+2)$ and $w_b$ is not comparable to any other pebble in $W$. Furthermore, by Lemma \[induced-order\], $w_j$ with $1\le j<b$ must be smaller than $z$ as well, thus inductively we can conclude that $w_{b-1}, \ldots, w_1$ all have spin $-(a+2)$ and are (only) smaller than $\{z, A\}\cup X$. In this case $\{w_1, \ldots, w_b\}$ is an isolated block which is smaller than $z$. If $w_b$ is in a block with head $w_i$, then we can repeat the above argument for $w_{i-1}$ and conclude that $w_{i-1}$ is in a block, or isolated. Inductively we may partition $W$ into blocks $W_0, W_1, \ldots$ that are isolated or with heads. In particular, if $z\succ w_i$, then $w_i$ must be in an isolated block, and by Lemma \[induced-order\], $z\succ \{w_1, w_2, \ldots, w_i\}$, that is, $w_1w_2\ldots w_i$ is an isolated block. If one pebble from $W$ has spin $-(a+2)$, then the spins of pebbles in $X$ are all $b$ (by inductively considering $x_1, x_2, \ldots, x_a$), they are incomparable to each other, and they are all bigger than $W$, so $X$ is an isolated block. This also means that if $s(z)=c\ge 0$, then the isolated block $w_1w_2\ldots w_i$ has $c$ pebbles.
Similarly, if one pebble from $X$ has spin $b+1$, then $W$ is an isolated block and $X$ can be partitioned into isolated blocks and blocks with tails; in particular, the isolated block bigger than $z$ has $|s(z)|$ pebbles. If a permutation $\pi$ contains a pebble $A$ with $win(A)=(A, X, W)$ and $|win(A)|=n-1$, then $\pi$ can be routed in at most $n-2$ steps or is $(12\ldots n)$ or its inverse. We only consider the case $s(z)=c\ge 0$. By Lemma \[block-2a\], we may assume that $X$ is an isolated block, and $W$ has the block decomposition $W_0, W_1, \ldots, W_k$ so that the block $W_i$, if not isolated, has head $w_{i'}$, and $W_0$ is an isolated block with $c$ pebbles which are all smaller than $z$. If $c=b$, then the spins of $\{z, A\}\cup X$ are all $b$ and the spins of $W$ are all $-(a+2)$, and $\pi$ is a rotation. Thus if $\pi$ needs $n-1$ steps to route, $\pi$ must be one of the two extremal permutations. So we assume $c<b$. If $a=|X|=0$, then we use the odd-even routing algorithm so that $Aw_1$ is an odd edge. By Lemma \[consecutive-swaps\], $A$ meets $W$ in the first step and will swap with $W$ in the following $|W|=n-2$ steps, and $z$ meets $W_0$ in the second step, so it will swap with $W_0$ in $c+1<b+1=n-1$ steps. A head $w$ of a block $W_i$ will swap with its block either in the first step, in the second step (if on an even edge), or in the third step (if on an even edge and meeting $A$ in the first step); in the first case, the head swaps with all pebbles in $W_i$ and $A$ in $|W_i|-1+1=|W_i|\le n-2$ steps; in the second case, $|W_i|\le n-3$ and the head swaps with all pebbles in $W_i$ and $A$ in $(|W_i|-1)+1+1=|W_i|+1\le n-3+1=n-2$ steps; in the third case, $|W_i|\le n-2$ and $w$ swaps with all pebbles in $W_i$ and $A$ in $|W_i|-1+2=|W_i|+1\le n-1$ steps. So every pebble finishes its swaps in at most $n-2$ steps, except when $\pi=(z, A, w, w_1, \ldots, w_{n-3})$, where $s(z)=0$ and $w$ is the head of the block $ww_1\ldots w_{n-3}$.
For the exceptional case, we flip the spins of $A$ and $w_{n-3}$; by Lemma \[flip-spins\], $w_{n-3}$ is bigger than $z, A, w_1, \ldots, w_{n-4}$, $A$ is smaller than $z$ and $w_{n-3}$, and $ww_1\ldots w_{n-4}$ remains a block with head $w$. Now we use the odd-even routing algorithm so that $w_{n-3}z$ is an odd edge. By Lemma \[consecutive-swaps\], $w_{n-3}$ swaps in the first $n-2$ steps, $w$ swaps from the second step on and takes $n-4$ steps, $A$ finishes its swaps in $3$ steps, and $z$ swaps in $3$ steps, so in at most $n-2$ steps all pebbles are in their places. Therefore we may assume that $a>0$. Now we use the odd-even routing algorithm so that $x_aw_1$ is an odd edge. By Lemma \[consecutive-swaps\], $x_i$ with $1\le i\le a$ meets $W$ after $a-i$ steps and swaps with $W$ in the following $|W|$ steps, so it takes $a-i+|W|=a+b-i=n-2-i\le n-2$ steps; $A$ can be regarded as $x_0$, so it takes at most $n-2$ steps; $z$ meets $W_0$ after $a+1$ steps and takes $c$ swaps, so it will be in place in at most $a+1+c\le a+b=n-2$ steps; the head $w$ of a block $W_i$ swaps with $W_i$ at the first step, or the second step (if it is on an even edge and not adjacent to $x_a$), or after $a+2$ steps (if $w=w_1$), and in the former two cases it takes at most $1+(|W_i|-1)+a+1=a+|W_i|+1\le a+b=n-2$ steps, and in the last case it takes $a+2+(|W_i|-1)=a+|W_i|+1\le n-2$ steps as well. Once all of the above pebbles are in place, all pebbles are in place, as no swap remains. Extremal window type 3: $|win(A)|=n$ and $win(A)=(A, X_1, W_1, x, W_2)$ or $win(A)=(A, X_1, w, X_2, W_2)$.
----------------------------------------------------------------------------------------------------------- (block decomposition)\[block-3\] If a permutation $\pi$ contains a pebble $A$ with $win(A)=(A, X_1, W_1, x, W_2)$ or $win(A)=(A, X_1, w, X_2, W_2)$ and $|win(A)|=n$, then $X_1$ and $W_2$ are isolated blocks and - if $win(A)=(A, X_1, W_1, x, W_2)$, then $W_1$ can be partitioned into isolated blocks and blocks with heads. Furthermore, $c:=s(x)-|W_2|\ge 0$, and if $c>0$, then the block $W_0$ in $W_1$ next to $X_1$ is an isolated block with $c$ elements, all smaller than $x$; and the only other order relations between segments are $\{A\}\cup X_1\succ W_1\cup W_2$, $x\succ W_2$ and $x\succ W_0$. - if $win(A)=(A, X_1, w, X_2, W_2)$, then $X_2$ can be partitioned into isolated blocks and blocks with tails. Furthermore, $c:=s(w)+(1+|X_1|)\le 0$, and if $c<0$, then the block $X_0$ in $X_2$ next to $W_2$ is an isolated block with $-c$ elements, all bigger than $w$; and the order relations between segments are $\{A\}\cup X_1\succ \{w\}\cup W_2$ and $X_0\succ w$. We only prove the case when $win(A)=(A, X_1, W_1, x, W_2)$; the other case is symmetric. Let $W_1=w_1w_2\ldots w_a$ and $W_2=w_{a+1}w_{a+2}\ldots w_b$. Consider the spins of $A$ and $w_b$. Since $s(A) = b$ and no pebble is smaller than $w_b$, we have $s(w_b)\le -(2+|X_1|)=-(n-b)$; furthermore, as $s(A)-s(w_b)\le n$, we have $s(w_b)\ge -(n-b)$, so $s(w_b) = - (n-b)$, and the only pebbles that $w_b$ swaps with are $A$, the pebbles in $X_1$, and $x$. Since $w_{b-1}$ does not cross $w_b$, we get $s(w_{b-1}) \le -(n-b)$, and for the same reason we have $s(w_{b-1}) = -(n-b)$. By induction, we have $s(w_i) = -(n-b)$ for all $w_i \in W_2$. Similarly, by comparing the spins of pebbles in $X_1$ to that of $w_b$, we have $s(x_i) = b$ for all $x_i \in X_1$. Since $s(A)$ is the same as the spins of the pebbles in $X_1$, let $X'_1 = X_1 \cup \{A\}$. We have shown that $X_1$ and $W_2$ are isolated blocks. Now we consider the spins of pebbles in $W_1$.
We note that no pebble in $W_1$ is bigger than $x$, for otherwise $A$ would be bigger than $x$ as $A\succ W_1$. We should also note, however, that $x$ could be bigger than some pebbles in $W_1$. Consider the pebble $w_a\in W_1$. The pebbles in $X_1'$ are bigger than $w_a$ and no pebble is smaller than $w_a$, so $s(w_a)\le -|X_1'|=b-n+1$. On the other hand, $s(A)-s(w_a)\le n$ and $s(A)=b$ imply $s(w_a)\ge b-n$. That is, $s(w_a)\in \{b-n, b-n+1\}$, and at most one pebble other than those in $X_1'$ is bigger than $w_a$. If $s(w_a) = b-n+1$, then $w_a$ is incomparable with pebbles other than those in $X'_1$. If $s(w_a)=b-n$, then $w_a$ is smaller than $x$ or some pebble $w_i\in W_1$; in the former case, all pebbles in $W_1$ are smaller than $x$, and inductively one can show that they are incomparable, so $W_1$ is an isolated block; in the latter case, $w_i\succ w_j$ for $i+1\le j\le a$ and $w_i$ is the only such pebble other than those in $X_1'$, so $w_iw_{i+1}\ldots w_a$ is a block with head $w_i$. Inductively one obtains a partition of $W_1$ into blocks, as desired. We observe that if $x\succ w_i\in W_1$, then $w_i$ is in an isolated block and $x\succ \{w_1, w_2, \ldots, w_i\}$ by Lemma \[induced-order\]. Let $c:=s(x)-|W_2|$; then $W_0:=(w_1,\cdots, w_c)$ is an isolated block, and $x\succ W_2\cup W_0$ are the only order relations involving $x$, as desired. If a permutation $\pi$ contains a pebble $A$ such that $|win(A)|=n$ and $win(A)=(A, X_1, W_1, x, W_2)$ or $win(A)=(A, X_1, w, X_2, W_2)$, then $\pi$ can be routed in $n-2$ steps. Again we only consider the case $win(A)=(A, X_1, W_1, x, W_2)$, as the other one is very similar. Let $win(A)=(Ax_1x_2\ldots x_kw_1\ldots w_axw_{a+1}\ldots w_b)$. Before the routing, we flip the spins of $A$ and $w_b$. Now by Lemma \[flip-spins\] and Lemma \[block-3\], the blocks in $W_1$ and $X_1$ remain the same, $W_2-\{w_b\}$ is an isolated block, $X_1\cup \{w_b\}\succ W_1\cup (W_2-w_b)\cup \{A\}$, and $x\succ W_2-w_b$.
We will use an odd-even routing algorithm so that $x_kw_1$ is an odd edge. By Lemma \[consecutive-swaps\], $A$ and $w_b$ will meet in the first or second step depending on whether they are on an even or odd edge; thus after the first two steps we may think of $w_b$ as part of $X_1$ and of $A$ as part of the new $W_2$. Again by the lemma, $x_k$ meets $W_1$ in the first step and will swap with $|W_1|$ elements in the next $|W_1|$ steps, by which time it will meet $W_2-w_b+A$ and swap with all of them, so it takes $|W_1|+|W_2|=b<n-1$ steps for $x_k$ to be in place. Similarly, $x_i$ for $1\le i\le k-1$ meets $W_1$ in the $(k-i+1)$-th step and swaps with its elements, and later with those of $W_2$ in the following steps, so it takes $k-i+1+b\le k+b=n-2$ steps. The pebble $w_b$ will swap with $A$ in the first or second step and meet $W_1$ after all pebbles in $X_1$, i.e. in the $(k+1)$-th step, and then swap with $W_1\cup (W_2-w_b)$ consecutively; thus it takes $k+1+b-1=n-2$ steps for it to be in place. For a head $w\in W'\subseteq W_1$, if it is not $w_1$, then it meets the pebbles in $W'$ in the first or second step based on the parity of the edge, and then swaps with them all in the following steps by the time $W'$ first meets $X_1$, so it adds no extra swap steps. If $w_1$ is a head of a block, then $w_1$ and $x$ are incomparable by Lemma \[block-3\], and $w_1$ will meet its block (though not be paired immediately) after swapping with $X_1\cup\{w_b\}$, so it takes $|X_1\cup \{w_b\}|+1+a-1\le (k+1)+1+(b-1)-1=k+b=n-2$ steps to be in place. Lastly, we consider $x$. Assume that $s(x)=c\ge 0$.
Note that $x$ meets $W_2$ in the first or second step based on the parity of the edge $xw_{a+1}$, and swaps with $W_2-w_b$ and $A$ in the following steps. It will meet the first block $W_0$ (note that $x\succ W_0$) in the $(k+1)$-th step, so it may take an extra $k+1-(b-a-1)$ steps (if $xw_{a+1}$ is an odd edge) or $(k+1)-(b-a-1+1)$ steps (if $xw_{a+1}$ is an even edge) for $x$ to meet $W_0$, and then another $c$ steps to swap. Therefore it takes $$\max\{1+(b-a-1), k+1\}+c\le \max\{b-a+c, k+1+c\}\le \max\{b, k+1+a\}\le k+1+(b-1)=n-2$$ steps to be in place. As we have taken care of all order relations in at most $n-2$ steps, every pebble is in place after $n-2$ steps.
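The odd-even scheme used throughout the proof alternates two fixed matchings on the path: odd steps swap across the edges $(1,2),(3,4),\ldots$ and even steps across $(2,3),(4,5),\ldots$, a swap being performed only when a pair is crossing. The following is our own minimal sketch of this scheme for plain (unsigned) permutations; the function name and the simple "swap if out of order" rule are illustrative and omit the spin bookkeeping of the full algorithm.

```python
def odd_even_route(perm):
    """Route a permutation on a path by odd-even transpositions.

    Alternate between the matching on positions (0,1), (2,3), ...
    and the matching on positions (1,2), (3,4), ...; swap only
    crossing (out-of-order) pairs.  Returns the number of rounds
    until every pebble is in place.
    """
    a = list(perm)
    n = len(a)
    rounds = 0
    while a != sorted(a):
        start = rounds % 2          # alternate the two matchings
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:     # swap only crossing pairs
                a[i], a[i + 1] = a[i + 1], a[i]
        rounds += 1
    return rounds
```

For instance, the reversed permutation of length 5 needs 5 rounds, the classical worst case for odd-even transposition routing on a path; the lemma above improves this bound to $n-2$ for permutations with the special window structure considered.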
--- abstract: 'The formation of fragments in proton-induced reactions at low relativistic energies is investigated within a combination of a covariant dynamical transport model and a statistical approach. In particular, we discuss in detail the applicability and limitations of such a hybrid model by comparing with data on fragmentation at low relativistic SIS/GSI energies.' address: - 'Institut für Theoretische Physik, Universität Giessen, D-35392 Giessen, Germany' - 'email: [email protected]' author: - 'T. Gaitanos' - 'H. Lenske' - 'U. Mosel' title: Fragment formation in proton induced reactions within a BUU transport model --- BUU transport equation, statistical multifragmentation model, relativistic proton-nucleus collisions.\ PACS numbers: 24.10-i, 24.10.Jv, 24.10.Pa, 25.40.Sc, 25.40.Ve. Introduction ============ One of the major aspects in investigating proton-induced reactions is to better understand the phenomenon of fragmentation of a nucleus in a hot fireball-like state. Proton-induced reactions are the simplest possible way to study such phenomena. They have also been found to be important for other investigations, e.g., for the production of radioactive beams [@intro1] or for interpreting the origin of cosmic rays and radionuclides in nuclear astrophysics [@intro2]. More recently they have again attracted experimental interest [@spaladin]. It is therefore a challenge to study this field of research in detail, in particular in relation to future investigations at the new experimental facility FAIR at GSI. Proton-induced reactions (and also heavy-ion collisions) are usually modeled by non-equilibrium transport models; see Refs. [@Horror; @kada; @dani] for a review. However, the description of fragment formation is a non-trivial task, since transport models do not account for the evolution of physical phase-space fluctuations.
The major difficulty here is the implementation of the physical fluctuating part of the collision integral together with the reduction of numerical fluctuations by using many test particles per physical nucleon, which, however, would require a large amount of computing resources. Attempts to resolve this still open problem have recently been made [@colonna]. The standard approach of phenomenological coalescence models for fragment formation has been found to work astonishingly well in heavy-ion collisions, as long as one considers only one-body dynamical observables, see [@flows]. In particular, the coalescence model is usually applied to violent heavy-ion collisions, in which a prompt dynamical explosion of a fireball-like system is expected, with the formation of light clusters through nucleon coalescence. In such dilute matter secondary effects are negligible. However, the dynamical situation in proton-induced reactions is different. Compression-expansion effects are here only moderate, and the fragmentation process happens over a long time scale (compared to the short-lived explosive dynamics in heavy-ion collisions), which is compatible with a statistical description of the process. The whole dynamical picture in proton-induced reactions is therefore modeled by a combination of dynamical and statistical models. So far, two types of microscopic approaches have been frequently applied to proton-induced reactions: the intranuclear cascade (INC) model [@INC] and the quantum molecular dynamics (QMD) prescription [@QMD], in combination with a statistical multifragmentation model (SMM) [@SMM]. The SMM model is based on the assumption of an equilibrated source and treats its decay statistically. It also includes sequential evaporation and fission.
In this letter we study fragment formation in proton-induced reactions at low relativistic energies in the framework of a fully covariant coupled-channel transport equation based on the relativistic mean-field approach of Quantum Hadrodynamics [@QHD]. As a new feature, we consider here for the first time the formation of fragments by describing the initial stage dynamically by means of a covariant transport model of Boltzmann-Uehling-Uhlenbeck (BUU) type, followed by a statistical formation process of fragments in terms of the SMM model [@SMM]. We compare the theoretical results with a broad selection of experimental data. Theoretical description of proton-induced reactions =================================================== The theoretical description of hadron-nucleus (and also heavy-ion) reactions is based on the semiclassical kinetic theory of statistical mechanics, i.e. the Boltzmann equation with the Uehling-Uhlenbeck modification of the collision integral [@kada; @dani]. The covariant analog of this equation is the Relativistic Boltzmann-Uehling-Uhlenbeck (RBUU) equation [@Ko; @giessen] $$\left[ p^{*\mu} \partial_{\mu}^{x} + \left( p^{*}_{\nu} F^{\mu\nu} + M^{*} \partial_{x}^{\mu} M^{*} \right) \partial_{\mu}^{p^{*}} \right] f(x,p^{*}) = {\cal I}_{coll} \quad , \label{rbuu}$$ where $f(x,p^{*})$ is the single-particle distribution function for the hadrons. The dynamics of the drift term, i.e. the lhs of eq.(\[rbuu\]), is determined by the mean field, which does not explicitly depend on momentum. Here the attractive scalar field $\Sigma_s$ enters via the effective mass $M^{*}=M-\Sigma_{s}$, and the repulsive vector field $\Sigma_\mu$ via the kinetic momenta $p^{*}_{\mu}=p_{\mu}-\Sigma_{\mu}$ and via the field tensor $F^{\mu\nu} = \partial^\mu \Sigma^\nu -\partial^\nu \Sigma^\mu$. The dynamical description according to Eq.(\[rbuu\]) involves the propagation of hadrons in the nuclear medium, which are coupled through the common mean field and $2$-body collisions.
The exact solution of the set of coupled transport equations is not possible. Here we use the standard test-particle method for the numerical treatment of the Vlasov part. The collision integral, i.e. the rhs of eq.(\[rbuu\]), is modeled by a parallel-ensemble Monte-Carlo algorithm. The results presented here are based on Eq. (\[rbuu\]) in a new version of Ref. [@GiBUU], as realized in the Giessen-BUU (GiBUU) transport model, presented in [@GiBUU; @larionov], where the properties of the relativistic mean field, cross sections and the collision integral are also discussed. Furthermore, we note that the results presented here do not differ essentially from those obtained with non-relativistic prescriptions [@anna], as will be shown below. This is expected, since the achieved energies of the fragmenting sources are smaller than the rest energy. However, a fully covariant description is advantageous for several reasons. First of all, dynamics and kinematics are described on a common level, since the transport equations are formulated in a covariant manner. Apart from that, the relativistic mean field by definition accounts for higher-order momentum-dependent effects which would be missed in a non-relativistic approach. The relativistic formulation is also advantageous for other dynamical situations, e.g. for heavy-ion collisions at high relativistic energies and for studying the formation of hypernuclei. For these reasons we have used a fully covariant approach, which will be applied to the more complex dynamics of relativistic heavy-ion collisions in the future. The non-linear Walecka model has been adopted for the relativistic mean-field potential. This model gives reasonable values for the compression modulus ($K=200$ MeV) and the saturation properties of nuclear matter [@larionov; @gaitanos].
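The test-particle method mentioned above can be illustrated with a short numerical sketch: each physical nucleon is represented by $N_{test}$ test particles, and a smooth local density is recovered by folding the test-particle positions with normalized Gaussians. The function below is our own minimal illustration; the function name, the smearing width, and the evaluation along the $z$-axis are assumptions for the sketch, not the GiBUU implementation.

```python
import numpy as np

def local_density(positions, r_grid, width=1.0, n_test=200):
    """Local baryon density from Gaussian-smeared test particles.

    positions : (N, 3) array of test-particle coordinates [fm]
    r_grid    : radii (along z) at which to evaluate the density [fm]
    width     : Gaussian smearing width [fm] (assumed value)
    n_test    : number of test particles per physical nucleon

    Each test particle carries weight 1/n_test, so the density
    integrates to the physical mass number A.
    """
    norm = 1.0 / (n_test * (2.0 * np.pi * width**2) ** 1.5)
    rho = []
    for r in r_grid:
        point = np.array([0.0, 0.0, r])
        d2 = np.sum((positions - point) ** 2, axis=1)
        rho.append(norm * np.sum(np.exp(-d2 / (2.0 * width**2))))
    return np.array(rho)
```

Sampling $A=100$ nucleons uniformly in a sphere of radius $5~fm$ with $200$ test particles per nucleon reproduces the bulk density $3A/(4\pi R^3)$ near the center and a density compatible with zero far outside; a density threshold on such a profile is the basis of the source definition used below.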
In this first study of multifragmentation (see below) we use its standard version, accounting only for the iso-scalar part of the hadronic EoS ($\sigma,~\omega$ classical bosonic fields). The exchange of iso-vector bosons ($\rho,~\delta$) is neglected in the mean-field baryon potential. At present, we are mainly interested in understanding the fragmentation process in terms of a dynamical description, accepting the inaccuracies which might occur in the yields of exotic nuclei when isovector interactions are neglected. In order to keep the calculations feasible we have to introduce further approximations. We assume that all baryons, including hadronic resonances and baryons with finite strangeness, feel the same mean field. Meson self-energies are not taken into account, except for the Coulomb field. The transition from the fully dynamical (BUU) to the purely statistical approach (SMM) is not straightforward and has to be studied carefully. In particular, it involves the choice of the time at which one switches from the dynamical to the statistical description. Furthermore, an important feature for proton-induced reactions is the numerical stability of ground-state nuclei within the test-particle method, in particular the determination of the ground-state binding energies. In the relativistic non-linear Walecka model the total energy is extracted as the space integral of the $T^{00}$-component of the conserved energy-momentum tensor. The phase-space distribution function $f(x,p^{*})$ is represented within the test-particle [*Ansatz*]{}, in which $f(x,p^{*})$ is discretized in terms of test particles of Gaussian shape in coordinate space and of $\delta$-like shape in momentum space. The transition from the dynamical (BUU) to the statistical (SMM) picture is controlled by the onset of local equilibration. We calculate in each time step at the center of the nucleus the spatial diagonal components of the energy-momentum tensor $T^{\mu\nu}$ and define the local, e.g.
at the center of the source, anisotropy as $Q(x) = \frac{2T^{zz}(x)}{T^{xx}(x)+T^{yy}(x)}$. The onset of local equilibration is defined as the time step in which the anisotropy ratio approaches unity ($\pm 10\%$). It turns out that for $p+Au$ reactions at $E_{beam}=0.8~AGeV$ incident energy the system approaches local equilibrium at $t\in [100,~120]~fm/c$, depending on the centrality of the proton-nucleus collision. During the dynamical evolution of a proton-nucleus collision in the spirit of Eq. (\[rbuu\]) the nucleus gets excited due to momentum transfer and starts to emit nucleons (pre-equilibrium emission). Assuming that all particles inside the nuclear radius (including its surface) belong to the compound system, it is appropriate to define a [*(fragmenting) source*]{} by a density constraint of $\rho_{cut}=\frac{\rho_{sat}}{100}$ ($\rho_{sat}=0.16~fm^{-3}$ being the saturation density). We have checked that the results do not depend on the choice of the density constraint; the resulting differences are smaller than the statistical fluctuations. Thus, the parameters of the fragmenting source are given by the mass ($A$), the charge ($Z$) and the excitation energy at the time of equilibration. The major parameter entering the SMM code is the excitation energy $E_{exc}$ of a source with given mass ($A$) and charge ($Z$) numbers. The excitation is obtained by subtracting from the total energy the energy of the ground state, extracted, for consistency, within the same mean-field model as used in the Vlasov equation. In the wide time interval from $t\approx 50~fm/c$ up to $t_{max}=150~fm/c$, within which one switches from the dynamical to the statistical picture, the binding energy per particle of a ground-state nucleus remains rather stable, with small fluctuations around the mean value of at most $\pm 1\%$ when using $200$ test particles per nucleon; this stability is important for calculating the excitation of the system.
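The equilibration criterion can be made concrete in a few lines: the diagonal stress components are estimated kinetically from the test-particle momenta, $T^{ii}\sim\sum_k p_i^2/E_k$, and one checks whether $Q$ lies within $10\%$ of unity. This is a schematic sketch assuming free single-particle energies; the mean-field contributions to $T^{\mu\nu}$ used in the actual calculation are omitted here for simplicity.

```python
import numpy as np

M = 0.938  # nucleon mass [GeV]

def anisotropy(momenta):
    """Pressure anisotropy Q = 2 T^zz / (T^xx + T^yy).

    momenta : (N, 3) array of test-particle momenta [GeV/c].
    Diagonal stress components are estimated kinetically as
    T^ii ~ sum_k p_i^2 / E_k, with free energies E_k assumed.
    """
    E = np.sqrt(M**2 + np.sum(momenta**2, axis=1))
    Txx = np.sum(momenta[:, 0] ** 2 / E)
    Tyy = np.sum(momenta[:, 1] ** 2 / E)
    Tzz = np.sum(momenta[:, 2] ** 2 / E)
    return 2.0 * Tzz / (Txx + Tyy)

def equilibrated(momenta, tol=0.10):
    """Local-equilibration criterion: Q within tol of unity."""
    return abs(anisotropy(momenta) - 1.0) <= tol
```

An isotropic momentum distribution gives $Q\approx 1$, while a beam-elongated one (momenta stretched along $z$) gives $Q$ well above the tolerance, signalling that the dynamical stage must be continued.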
All theoretical results of the following section have been obtained within the [*hybrid*]{} approach $GiBUU+SMM$ outlined here. Mass and charge numbers and the excitation energy of the fragmenting source have been determined by imposing the density cut after the onset of equilibration. Results on $0.8~GeV~~p+Au$ reactions ==================================== As a first benchmark we consider the properties of the initial non-equilibrium dynamics and of the fragmenting source, see Fig. \[Fig1a\]. During the non-equilibrium dynamics the proton beam collides with nucleons of the target nucleus. The amount of energy transfer, and thus of excitation of the residual nucleus with associated particle emission, depends on the centrality of the reaction, as shown in Fig. \[Fig1a\]. With increasing impact parameter the proton beam experiences fewer collisions with the particles of the target (and also less secondary scattering with associated inelastic processes, e.g. resonance production and absorption), leading to less energy and momentum transfer. Thus, the pre-equilibrium emission is reduced, as can be clearly seen in Fig. \[Fig1a\]. On average, pre-equilibrium emission takes place mainly in the time interval in which the proton beam penetrates the nucleus. This time interval depends only moderately on the impact parameter, in agreement with previous studies, e.g. see Ref. [@cugnon]. However, as discussed in the previous section, we stop the dynamical calculation at a later time, when all resonances have decayed and the residual system has achieved local equilibrium. The average amount of pre-equilibrium emission is $\langle A_{loss}\rangle\approx 5$ and $\langle Z_{loss}\rangle\approx 3$ in terms of the mass and charge numbers, respectively.
During this time interval the excitation of the residual nucleus drops from $\langle E_{exc}/A\rangle\approx 4.2~MeV$ to $\langle E_{exc}/A\rangle\approx (1.5-1.7)~MeV$ due to fast particle emission, before the residual system approaches a stable configuration. We note that our results are in agreement with those of other groups using non-relativistic approaches [@anna], as expected and discussed above. The hybrid model discussed in the previous section has been applied to $p+{}^{197}Au$ reactions at low relativistic energies, where a variety of experimental data is available [@pAu2; @pAu3]. We have used 200 test particles per nucleon for each run and for each impact parameter from $b=0~fm$ up to $b_{max}$. In Figs. \[Fig1\], \[Fig2\] we compare our theoretical results to experimental data [@pAu2; @pAu3] for $p+Au$ reactions in terms of charge and mass distributions, respectively. The theoretical results are in reasonable agreement with both the [*absolute*]{} yields and the shape of the experimental data. Similar results were obtained in previous studies within the INC model [@pAu1]. The fragmentation of an excited system is a complex process involving different mechanisms of dissociation: sharp peaks in the regions $(A,Z)=(A,Z)_{init}$ and $(A,Z)\approx (A,Z)_{init}/2$, where $A_{init},~Z_{init}$ denote the initial mass and charge numbers, respectively, correspond only to the most peripheral events with very low momentum transfer and thus with low excitation energy. According to the SMM model, heavy nuclei at low excitation energy, corresponding to a temperature $T<2~MeV$, mainly undergo evaporation and fission, producing the sharp peaks at the very high and low mass and charge numbers. With decreasing centrality the excitation energy, and correspondingly the temperature, increases.
As the excitation energy (or temperature) approaches $T\approx 5~MeV$ the sharp structure degrades due to the onset of the multifragmentation mechanism, and at higher excitations, $T=5-17~MeV$, one expects yields that decrease exponentially with decreasing mass/charge number. These different phenomena of dissociation of an excited source finally lead to the wide distributions in $A,~Z$ shown in Figs. \[Fig1\], \[Fig2\]. The main features of the fragmentation process are apparently reasonably well reproduced by our hybrid approach. The combination of the non-equilibrium dynamics (GiBUU) and the statistical decay of the equilibrated configuration (SMM) obviously accounts for the essential aspects of the reaction. The first stage is important for calculating the excitation of the residual system (due to dynamical pre-equilibrium emission), and the second one for the statistical fragmentation of the equilibrated configuration. The situation is quite similar for the mean kinetic energies of the produced fragments, displayed in Fig. \[Fig3\]. The average kinetic energies show an almost linear rise from the low-$Z$ and high-$Z$ regions towards a maximum around $Z\approx 20$. Qualitatively, the lightest fragments with the larger slopes are produced in the most central collisions, corresponding to a large momentum transfer. The low-energy tail reflects the most peripheral events with low momentum transfer. It mainly contains the heavy residual products with $Z>60$. An exact interpretation of the system-size dependence of the average kinetic energies in Fig. \[Fig3\] is not trivial and would require a detailed discussion of the SMM model, which is beyond the scope of this work. It is possible, however, to give some quantitative arguments. For a pure binary fission one would naively expect a maximum of the energy distribution at about half the target charge, $Z=40$. However, such a picture would neglect the dynamics of the reaction.
In order to understand the energy distribution we have to consider the BUU pre-equilibrium dynamics and the fragment-formation mechanism. The transport dynamics leads to pre-equilibrium particle emission, which results in an initial compound nucleus with smaller $Z$-values compared to that of the target. Thus the energy naturally becomes smaller. On the other hand, the formation of the fragments is determined by the nuclear binding energies and by their mutual interaction after formation, where Coulomb effects play an important role [@SMM]. The binding-energy effect favors nuclei around iron ($Z\sim 26$). The Coulomb repulsion leads to long-range correlations among the fragments, with a tendency to shift the distribution to slightly smaller $Z$-values. The mass and charge distributions of the yields or energies of the produced fragments show only the general trends, which are reproduced well by the theoretical model applied in this work. However, they are not sensitive enough to the details of the reaction. A more stringent test is to study the characteristics of individual nuclides and particles produced in the reaction. Figs. \[Fig4\] and \[Fig5\] show theoretical results and experimental data on production yields of separate isotopes as functions of the atomic and mass number, respectively. A good overall agreement is achieved, providing a further check of the theoretical hybrid model. In particular, in Fig. \[Fig4\] we see that for isotopes produced in the spallation region (not too far from the target mass) and for fission fragments not too far from the maximum yield, the comparison between theory and experiment is only satisfactory. Similar trends are observed in the neutron spectrum of separate isotopes, see again Fig. \[Fig5\]. The discrepancy in the proton-rich regions of the isotopic distributions of the heaviest elements is interesting; it can also be seen in the global $Z$- and $A$-distributions in the corresponding region.
This detailed comparison shows the limitations of the hybrid model and the improvements it needs. The pre-equilibrium dynamics seems to lead to a more excited configuration than found in experiment, since all theoretical distributions are moderately smoother than the experimental data, which is most visible in the detailed isotopic distributions. In general, it turns out that the hybrid model gives a quite satisfactory description of fragmentation data, which is a non-trivial task for transport dynamical approaches. We note again that the non-equilibrium dynamics has been treated in a microscopic way using the relativistic coupled-channel transport approach, which is an important step in extracting the properties of the fireball-like configuration before applying its statistical decay in the spirit of the SMM model. Conclusions and outlook ======================= We have investigated the fragmentation mechanism within a hybrid approach consisting of a dynamical transport model of BUU type and a statistical one in the spirit of the Statistical Multifragmentation Model (SMM), and applied it to fragment formation in low-energy proton-induced reactions. The main contribution was to show the reliability and possible limitations of the dynamical transport model for the description of multifragmentation in combination with statistical approaches. In particular, it turned out that the hybrid model describes a wide selection of experimental data reasonably well. As a future project, a consistent description of the initial ground state might be achieved within a semi-classical density functional theory by determining the energy-density functional consistently with the density profile of a nucleus, which also implies the inclusion of surface and isovector contributions to the energy density and thus to the dynamical evolution. This work is currently in progress.
It is worth noting that the GiBUU transport approach contains the production and propagation of baryons and mesons with strangeness in appropriate relativistic mean fields. Also, the SMM code has been extended to the statistical decay of fragments with finite strangeness content ([*hypernuclei*]{}) [@botvinaHyp]. Therefore, motivated by the results of this work, it is straightforward to continue this field of research by investigating hypernucleus production in highly energetic nucleus-nucleus collisions. This part of the study is still in progress. In summary, we conclude that this work provides an appropriate theoretical basis for investigations on fragmentation with a new perspective for hypernuclear physics. [*Acknowledgments.*]{} We would like to acknowledge Prof. A. Botvina for many useful discussions and for providing us with the SMM code. We also thank the GiBUU group for many useful discussions. This work is supported by BMBF. [99]{} W.F. Henning, Nucl. Instrum. Methods B126 (1997) 1. R. Michel, I. Leya, L. Borges, Nucl. Instrum. Methods B113 (1996) 343. E. Le Gentil et al., Phys. Rev. Lett. 100 (2008) 022701. W. Botermans, R. Malfliet, Phys. Rep. 198 (1990) 115. L.P. Kadanoff, G. Baym, [*Quantum Statistical Mechanics*]{} (Benjamin, N.Y. 1962). P. Danielewicz, Ann. Phys. 152 (1984) 239, 305. M. Colonna, J. Rizzo, Ph. Chomaz, M. Di Toro, arXiv:0707.3902. It is not possible to list all publications related to collective flow effects here. Instead, we refer to two comprehensive review articles:\ W. Reisdorf, H.G. Ritter, Ann. Rev. Nucl. Part. Sci. 47 (1997) 663;\ N. Herrmann, J.P. Wessels, T. Wienold, Ann. Rev. Nucl. Part. Sci. 49 (1999) 581. A. Boudard, J. Cugnon, S. Leray, C. Volant, Phys. Rev. C66 (2002) 044615. J. Aichelin, Phys. Rep. 202 (1991) 233;\ Kh. Abdel-Waged, Phys. Rev. C74 (2006) 034601. A.S. Botvina et al., Nucl. Phys. A475 (1987) 663;\ J.P. Bondorf et al., Phys. Rep. 257 (1995);\ A.S. Botvina, private communication. J.D. Walecka, Ann. Phys.
(N.Y.) 83 (1974) 491. Q. Li, J.Q. Wu, C.M. Ko, Phys. Rev. C39 (1989) 849. B. Blättel, V. Koch, U. Mosel, Rep. Prog. Phys. 56 (1993) 1. http://theorie.physik.uni-giessen.de/GiBUU. A. B. Larionov, O. Buss, K. Gallmeister, U. Mosel, Phys. Rev. C76 (2007) 044909. A. Kowalczyk, arXiv:0801.0700. T. Gaitanos et al., Nucl. Phys. A732 (2004) 24. J. Cugnon, C. Volant, S. Vuillier, Nucl. Phys. A620 (1997) 475, and references therein. J. Benlliure et al., Nucl. Phys. A683 (2001) 513. F. Rejmund et al., Nucl. Phys. A683 (2001) 540. S.G. Mashnik, A.J. Sierk, K.K. Gudima, nucl-th/0208048. A.S. Botvina, J. Pochodzalla, Phys. Rev. C76 (2007) 024909.
--- title: 'Can electoral popularity be predicted using socially generated big data?' --- Introduction ============ Increasing use of the internet, and especially the rise of social media, has generated vast quantities of data on human behaviour, significant portions of which are also readily available to researchers. The potential of these data has not gone unnoticed: in just a few years, use of social media data in particular has started to see a wide variety of applications in the growing subfield of “computational social science” [@Lazer2009; @Conte2012]. One of the most intriguing possibilities raised by the emergence of social media data is that it could be used to supplement (or even eventually replace) traditional methods for public opinion polling, especially the sample survey, because social media data offer considerable advantages in comparison with surveys in terms of the speed with which they can be acquired and the cost of collection. The selection bias in social media is clear: not everyone uses it, and the people who do are not randomly distributed throughout the population [@Mislove2011]. Yet the hope has frequently been expressed that the sheer quantity of social media users may start to compensate for this (around 50% of the UK’s population are thought to have a Facebook account, for instance), and hence that we might eventually replace “sample-based surveys” with “whole population data”. The potential applications of “social polls” are wide-ranging; however, probably the most frequently explored avenue of research has been the use of social media data for electoral prediction. This is both because the outcomes of elections are interesting in and of themselves, and because it is a subject where a huge amount of validation data exists, coming both from the more traditional opinion polling which social media data might hope to replace, and from the results of the election itself.
Such social polling, which has largely been applied to data coming from Twitter, is typically based on one of two main methodologies: either offering some type of count of all tweets mentioning a given candidate (perhaps controlling for the candidate’s own social media account), or using various techniques developed for analysing the sentiment expressed in tweets as a measure of people’s opinion on a given candidate (see e.g. [@OConnor2010; @Tumasjan10; @Jungherr2013; @Ceron2013]). Despite initial enthusiasm, and in contrast to the cases of predicting the arrival of earthquake waves [@Sakaki2010] and traffic jams [@Okazaki2011], most of the recent research on using Twitter for electoral prediction has been relatively negative, with many researchers reporting weak correlations with actual electoral outcomes, difficulty duplicating other positive research, or rates of successful prediction that could easily have come about by chance (see inter alia [@GayoAvello2011; @GayoAvello2012]). Most problematically, results have often exhibited specific biases either for or against individual political candidates, with minority parties often systematically overstated (see [@Jungherr2013]), whilst major conservative candidates are often understated [@GayoAvello2012]. A variety of potential reasons have been put forward for these problems. The most obvious is that the self-selection problem of social media cannot in fact easily be overcome with a larger sample size. Self-selection also operates when users decide what to post: even if large parts of the population have created social media accounts, the number who use them to express political opinions is much more limited. The nature of social media also means that the opinions which are expressed are heard by friends, family, work colleagues and other social connections, which might compel people to moderate their opinions or keep quiet if they support particular types of political party.
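The first, count-based methodology is simple enough to state in a few lines: a candidate's predicted vote share is taken to be their share of all candidate mentions in the tweet sample. The sketch below (the function name and the naive substring-matching rule are our own illustration) also makes the method's crudeness apparent: it has no notion of sentiment, negation or irony, which is precisely one source of the biases discussed above.

```python
from collections import Counter

def predicted_shares(tweets, candidates):
    """Naive count-based 'social poll': a candidate's predicted
    vote share is their fraction of all candidate mentions.

    tweets     : iterable of tweet texts
    candidates : list of candidate names to match (case-insensitive)
    """
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for name in candidates:
            if name.lower() in lowered:
                counts[name] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {name: counts[name] / total for name in candidates}
```

Note that a sceptical tweet such as "Not sure about Alice..." still counts as a mention for Alice, illustrating why pure volume tends to overstate much-discussed candidates regardless of the opinions expressed.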
Furthermore, many researchers have observed the difficulty of reliable sentiment analysis of political tweets, both because of the small amount of information contained in any given tweet and because of the nuances of political language, where many opinions are expressed through irony or sarcasm [@GayoAvello2012]. Finally, as social media have started to take on a prominent position in the media landscape (with trending topics now frequently a basis for news stories), political candidates have also increasingly started to intervene actively in social media, which has the potential to bias results [@Metaxas2012].

Google Trends and Wikipedia Page Views: Predicting the Present {#google-trends-and-wikipedia-page-views-predicting-the-present .unnumbered}
--------------------------------------------------------------

While social media data are probably the most used of the new data sources generated by the internet, significant interest has also arisen around the use of the information-seeking data held by websites such as Google Trends or Wikipedia, which is generated when someone either conducts a web search for a particular topic or accesses a particular page on Wikipedia. While not typically regarded as social “media”, search data are nevertheless socially generated in that they rely on people entering individual search queries. Having clear information on what people are looking for, and when they are looking for it, provides a number of opportunities to “predict the present”: to gain a kind of real-time awareness of current behaviour patterns. Such data have already been used to successfully predict a wide variety of phenomena in both the short and long term, from car and house prices to trends in flu outbreaks or unemployment [@Choi2009a; @Choi2009b; @Goel2010; @Cook2011] using web search data, as well as movie box office revenues using Wikipedia page view statistics [@Mestyan13].
Information-seeking data offer significant theoretical advantages over social media data in terms of their use for prediction. Whereas the automatic interpretation of the meaning of a tweet can be riddled with complexity, the interpretation of the meaning of a search, or of access to a page on Wikipedia, is much more straightforward: the user is interested in information on the topic in question. Furthermore, the penetration of search especially is far greater than that of many social media platforms, especially Twitter. Approximately 60% of internet users use search engines [@Dutton2011]. For many users, Wikipedia is the most common source of knowledge online: 29.6% of academics prefer Wikipedia to online library catalogues [@weller2010], and 52% of students are frequent Wikipedia users, even if their instructor advises them not to use the platform [@head2010]. In general, browsing Wikipedia is the third most popular online activity, after watching YouTube videos and engaging in social networking: it attracts 62% of internet users under 30 [@zickuhr2011]. The popularity of Wikipedia is also closely related to the significant weight given to it by search engines like Google; in 96% of cases Wikipedia ranks within the top 5 UK Google search results [@silverwood2012]. However, such data have rarely been applied to the task of election prediction. The main reason for this is simple: “queries are not amenable to sentiment analysis” [@Gayo-Avello2013]. When entering a search query, people express what they are looking for, but not their opinion about the subject: indeed, given that they are searching for information, it seems reasonable to assume that this opinion is not yet fully formed. Despite this problem, in this paper we argue that significant “sentiment” data are implied in information-seeking behaviour.
In particular, we expect that searches for political candidates around election time imply that people may be considering voting for them (though these searches are also likely stimulated by the reception of other bits of information, especially from the mass media). This assumption is inspired by previous work connecting information seeking to eventual real-world outcomes: for example, connecting it to eventual movie box office revenues [@Mestyan13]. The problem for the purposes of prediction is that the relationship between search traffic and actual outcomes is unlikely to be straightforward. In fact, one of the few studies that has attempted to apply information-seeking data to elections [@Lui2011] found that simply using search volume in the days prior to the election is an extremely poor prediction technique. Rather, we argue, there are a number of intervening variables which may affect how people look for information on politics, and which thus need to be taken into account. One obvious first factor is whether the political system encourages a focus on parties or on individuals (which may itself emerge through different modes of democratic organisation, e.g. presidentialism vs. parliamentarism), something which is likely to affect the search terms people enter. Also worth considering is the number of potential candidates on the political scene, with elections full of new faces likely to generate more searching than contests between familiar candidates. Finally, there is the extent to which the existing incumbent is popular: as people are more likely to be informed about the current power holder’s views, they are less likely to search for them. Within the context of this paper, we seek to explore some of these questions by looking at correlations between search engine data, Wikipedia usage patterns and recent election results in three different countries: the UK, Germany and Iran.
These countries were selected in order to provide a diverse range of political contexts (with elections in Iran and the UK where a new candidate was voted in, and one in Germany where a popular incumbent was returned), electoral systems (from Iran’s presidential system to the parliamentary ones operated in the UK and Germany) and party landscapes (with a very stable system in the UK contrasted with Iran and Germany, where new actors are emerging).

Data Collection
===============

For our analysis, we collected data from both Google Trends and Wikipedia for the last election in each of our countries of interest (2013 in the case of Iran and Germany, 2010 in the UK). Our Trends data are based on the number of searches for either a given party or politician coming from the specific country of interest (search terms were entered in the native language and script of that country). Our Google data were collected directly from the Google Trends website (<http://www.google.com/trends/>). This site allows users to compare the relative search volumes of different keywords, and to download the resulting data in CSV format. The specific keywords used are reported in Tables 1-3. We assume that these data are reliable, as they come out of Google’s own server logs. Our Wikipedia data are extracted from the page view statistics section of the Wikimedia Downloads site (<http://dumps.wikimedia.org/other/pagecounts-raw>) through the web-based interface of “Wikipedia article traffic statistics” (<http://stats.grok.se>); again, for Wikipedia we focus on language-specific terms appropriate to the country of interest. Although the original data dumps are of hourly granularity, in this research we used a daily accumulation of the data in GMT. While the actual logs count URL requests, they may not accurately represent the unique visits or unique visitors to a page.
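The daily accumulation step described above is straightforward to reproduce. The following is a minimal sketch (not the authors' actual pipeline; the `daily_totals` helper and its input format are our own illustrative assumptions) of rolling hourly page view counts for a single article into daily totals in GMT:

```python
from collections import defaultdict
from datetime import datetime

def daily_totals(hourly_counts):
    """Aggregate (UTC timestamp, views) pairs from hourly dumps into
    daily totals, keyed by GMT date."""
    totals = defaultdict(int)
    for ts, views in hourly_counts:
        day = datetime.fromisoformat(ts).date().isoformat()
        totals[day] += views
    return dict(totals)

# Two hours of traffic on one GMT day plus one hour on the next
sample = [
    ("2013-06-13T22:00:00", 120),
    ("2013-06-13T23:00:00", 95),
    ("2013-06-14T00:00:00", 210),
]
print(daily_totals(sample))
# {'2013-06-13': 215, '2013-06-14': 210}
```

Note that because the bucketing is done on the GMT date, hours near midnight local time in Iran or Germany fall into the adjacent GMT day, which is one source of noise in such daily series.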
On the positive side, if the title of the page has been searched for in alternative forms and the user has been redirected to the page, these views should still have been counted in the data. The case of Google search volume is more problematic, because there is no systematic way to aggregate the data for different search keywords. For the sake of simplicity, in this work we only considered the most common keyword for each item, being aware of the biases that this might introduce into the data.

Results
=======

We will begin with a discussion of the Iranian election of the 14th of June 2013. Iran operates a presidential system, where individual candidates are far more important than political parties. The presidency goes to the candidate who gains more than 50% of the vote, with a run-off in case no candidate is able to do so in the first round. The election of 2013 was an unusual one: it lacked an incumbent candidate (with former president Mahmoud Ahmadinejad standing down after fulfilling the maximum two terms in office), and was won convincingly in the first round by Hassan Rouhani, a candidate who was perceived as an outsider just a month before the election. Figure \[fig:1\] shows patterns in Wikipedia page views and Google search volume for the Iranian election, whilst the final results can be seen in Table \[tab:1\]. Several patterns are immediately apparent. First, both the quantity of searches on Google and the number of page views on Wikipedia indicate the winner of the election correctly, and also pick up on the large absolute disparity between Rouhani and the other candidates. They are both also sensitive to the very late development of Rouhani as a candidate (though Wikipedia also shows a spike in May). This is a very interesting result, as none of the official polls had predicted the victory of any candidate in the first round of the election (the most optimistic poll had predicted 42% of votes for Rouhani) [@bbc].
However, neither Google nor Wikipedia correctly identify second place.

[**$>>$ Figure 1 to be placed here $<<$**]{}

  Candidate                  Popular Vote   Percentage   Wikipedia page title             Google search keyword
  -------------------------- -------------- ------------ -------------------------------- ----------------------------------
  Hassan Rouhani             18,613,329     50.88        [*Hassan Rouhani*]{}             “[*hassan rouhani*]{}”
  Mohammad Bagher Ghalibaf   6,077,292      16.46        [*Mohammad Bagher Ghalibaf*]{}   “[*mohammad bagher ghalibaf*]{}”
  Saeed Jalili               4,168,946      11.31        [*Saeed Jalili*]{}               “[*saeed jalili*]{}”
  Mohsen Rezaee              3,884,412      10.55        [*Mohsen Rezaee*]{}              “[*mohsen rezaee*]{}”

We will now move on to the German election of the 22nd of September 2013. Germany operates as a federal parliamentary republic, with power divided between the German parliament (the “Bundestag”) and the body which represents Germany’s regions (the “Bundesrat”). This particular election was for the Bundestag, which itself has responsibility for electing Germany’s Chancellor, the country’s most powerful political office. Germany’s system is based strongly around parties: a majority vote is required to elect the Chancellor, which is usually based on a coalition between two or more parties. In this particular election, the winning Christian Democrat party (CDU/CSU) increased its vote share for the second successive election, confirming its place as a highly popular incumbent party. However, its coalition partner from the 2009 elections (the FDP) lost a lot of ground, failing to win any seats, resulting eventually in the formation of a “grand coalition” between the CDU/CSU and the major social democratic party (SPD). The results of the election are shown in Table \[tab:2\], whilst the data extracted from Wikipedia and Google are shown in Figure \[fig:2\]. The results show an interesting contrast to the Iranian election.
Google predicts both the winner of the election and second place correctly (if we look at the date of the election), and is also approximately right about the distance between the two parties. However, it radically overstates the position of the FDP. Wikipedia, by contrast, does not predict anything accurately, overstating to a large extent the position of Alternative for Germany (AfD), a radical anti-Euro party which had recently been formed. This chimes with earlier work by Jungherr [@Jungherr2013], who found that Twitter overstated to a large extent the position of the Pirate Party (also recently formed at the time) in the 2009 German election.

[**$>>$ Figure 2 to be placed here $<<$**]{}

  Party                               Popular Vote   Percentage   Wikipedia page title                          Google search keyword
  ----------------------------------- -------------- ------------ --------------------------------------------- -------------------------------
  Christian Democratic Union          14,921,877     34.1         Christlich Demokratische Union Deutschlands   cdu
  Social Democratic Party             11,252,215     25.7         Sozialdemokratische Partei Deutschlands       spd
  The Left                            3,755,699      8.6          Die Linke                                     “die linke”
  Alliance ’90/The Greens             3,694,057      8.4          Bündnis 90; Die Grünen                        “bündnis 90”; “die grünen”
  Christian Social Union of Bavaria   3,243,569      7.4          Christlich-Soziale Union in Bayern            csu
  Free Democratic Party               2,083,533      4.8          Freie Demokratische Partei                    fdp
  Alternative for Germany             2,056,985      4.7          Alternative für Deutschland                   “alternative für deutschland”
  Pirate Party                        959,177        2.2          Piratenpartei Deutschland                     piratenpartei

We will now look finally at the results of the 2010 UK election. The UK also operates a parliamentary system, though unlike Germany it does not have a separate regional body. Rather, power is concentrated in one legislative body (the House of Commons), with a secondary unelected body (the House of Lords) providing some checks and balances.
The history of the UK has been dominated by single-party government, as the voting system favours the emergence of a small group of very large parties. Hence, even though in theory parliament, and hence parties, elect the prime minister, in practice the individual personalities of leaders have come to be seen as just as important as party identity. For this reason, in the UK we look at both individuals and parties. Figure \[fig:3\] shows results from Wikipedia and Google for the UK election, whilst Table \[tab:3\] reports the actual results. A variety of findings are worth noting here. Firstly, on Google, parties were universally more searched for than politicians; however, the party data itself did not offer a useful predictor of the election results, considerably overstating the position of the Liberal Democrats, the UK’s third-largest party (though this party did improve considerably on its 2005 result). The individual politician data did, by contrast, place all the winning parties in the correct order, though the difference between Conservative candidate David Cameron and Labour candidate Gordon Brown was marginal. On Wikipedia, by contrast, individual politicians were viewed much more than parties. Both the politician and party data offer a correct placement of all four parties, though the differences between them are very small.
[**$>>$ Figure 3 to be placed here $<<$**]{}

  Party/Leader       Popular Vote   Percentage   Wikipedia page title      Google search keyword
  ------------------ -------------- ------------ ------------------------- -----------------------
  Conservative       10,703,654     36.1         Conservative Party (UK)   “conservative party”
  David Cameron                                  David Cameron             “david cameron”
  Labour             8,606,517      29.0         Labour Party (UK)         “labour party”
  Gordon Brown                                   Gordon Brown              “gordon brown”
  Liberal Democrat   6,836,248      23.0         Liberal Democrats         “liberal democrats”
  Nick Clegg                                     Nick Clegg                “nick clegg”
  UKIP               919,471        3.1          UK Independence Party     ukip
  Nigel Farage                                   Nigel Farage              “nigel farage”

Discussion and Conclusion
=========================

There are several broad conclusions we would like to draw from these data. It is clear first and foremost that online information seeking forms a part of contemporary elections: all three of the countries under study showed significant increases in traffic in the days leading up to an election. However, it is also clear that patterns differ in the context of different elections, and that people do not simply search in the same proportions that they vote. Even the overall patterns show dissimilarities: while the German data show a clear weekly pattern, with search volumes at their lowest during weekends, such patterns are absent in the other two countries. We highlight several key factors here. Firstly, data based on individual politicians proved more reliable than data based on parties: both Wikipedia and Google predicted the winners of the Iranian and UK elections when using individual politicians as search terms. This may be because there is a greater variety of ways in which people can search for information on a political party than on an individual (they could, for example, use an abbreviation, or search for “Labour Party” rather than “Labour”). However, it is also interesting to note that in the UK case the absolute volume of searches for parties was higher than for candidates.
Overall, this may mean that predictions based on social data perform better in political systems which encourage a focus on individuals. Further research would be needed to establish the reasons more systematically. Secondly, it is clear that information-seeking data react quickly to the emergence of new “insurgent” candidates, such as Hassan Rouhani or the AfD. However, supporting previous work, they may also overstate them (the high volumes for the Liberal Democrats in the UK can also be read in this light). For this reason, it may be useful for social predictions to draw on multiple different information sources. The AfD, for example, performed well on Wikipedia but poorly on Google, whilst the reverse was true for the Liberal Democrats. Rouhani, by contrast, performed well on both platforms. This also indicates that Google and Wikipedia are put to slightly different uses: the high level of AfD page views on Wikipedia suggests that it is a key resource for people who are unaware of the views of new political forces. Finally, it seems that information-seeking data are at their least effective when predicting the decline of a previously popular party. The FDP provides the example here: there is little in either Google or Wikipedia to suggest that it was about to suffer the reverse it did. It may be that, as the decline of the party itself becomes newsworthy, people increase their information-seeking activity on the party to find out why others are no longer supporting it; though again, further research would be required to establish this. In conclusion, we argue that there is significant potential in information-seeking data both for enhancing our knowledge of how contemporary politics works and for predicting the outcome of future elections. Such data also have considerable benefits in comparison with social media data, as they require no complex sentiment detection.
However, much work remains to be done in establishing the conditions under which such prediction will be successful. In our view, this will depend on elaborating more fully a theory of how people seek information on politics, and of how different electoral circumstances change this behaviour.

Acknowledgment
==============

We thank the Information Technology editors and reviewers for their very helpful comments. We also thank Wikimedia Deutschland e.V. and the Wikimedia Foundation for live access to the Wikipedia data via Toolserver and page view dumps.

[10]{} Lazer D. et al., Computational social science. *Science, 323(5915):721-723, 2009.* Conte R. et al., Manifesto of computational social science. *The European Physical Journal Special Topics, 214(1):325-346, 2012.* Mislove A., Lehmann S., Ahn Y., Onnela J., Rosenquist J., [Understanding the demographics of Twitter users. In: *Fifth International AAAI Conference on Weblogs and Social Media*, 2011.]{} O’Connor B., Balasubramanyan R., Routledge B.R., et al., [From tweets to polls: linking text sentiment to public opinion time series. In: *Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media, Washington, DC, 23-26 May*, 2010.]{} Tumasjan A., Sprenger T.O., Sandner P.G., Welpe I.M., Predicting elections with [T]{}witter: What 140 characters reveal about political sentiment. In: *Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media, pp. 178-185*, 2010. Jungherr A., [Tweets and votes, a special relationship: The 2009 federal election in Germany. In: *Proceedings of the 2nd Workshop on Politics, Elections and Data, pp. 5-14*, 2013.]{} Ceron A., Curini L., Iacus S.M., [Every tweet counts? How sentiment analysis of social media can improve our knowledge of citizens’ political preferences with an application to Italy and France.
*New Media & Society*, 2013.]{} Sakaki T., Okazaki M., Matsuo Y., Earthquake shakes [Twitter]{} users: real-time event detection by social sensors. In: *Proceedings of the 19th International Conference on [World Wide Web]{}, WWW ’10, pp. 851-860. New York, NY, USA: ACM*, 2010. Okazaki M., Matsuo Y., Semantic [Twitter]{}: Analyzing [tweets]{} for real-time event notification. In: *Breslin J., Burg T., Kim H.G., Raftery T., Schmidt J.H. (eds.), Recent Trends and Developments in Social Software, Lecture Notes in Computer Science, vol. 6045, pp. 63-74, Springer*, 2011. Gayo-Avello D., Metaxas P., Mustafaraj E., Limits of electoral predictions using [T]{}witter. In: *Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, pp. 490-493*, 2011. Metaxas P.T., Mustafaraj E., Social media and the elections. *Science, 338(6106):472-473*, 2012. Choi H., Varian H., Predicting the present with Google Trends. *Available at http://google.com/\ googleblogs/pdfs/google\_predicting\_the\_present.pdf*, 2009. Choi H., Varian H., Predicting initial claims for unemployment benefits. *Available at http://research.google.com/\ archive/papers/initialclaimsUS.pdf*, 2009. Goel S., Hofman J.M., Lahaie S., Pennock D.M., Watts D.J., Predicting consumer behavior with Web search. *Proceedings of the National Academy of Sciences, 107(41):17486*, 2010. Cook S., Conrad C., Fowlkes A.L., Mohebbi M.H., Assessing Google Flu Trends performance in the United States during the 2009 influenza virus A (H1N1) pandemic. *PLoS ONE, 6(8):e23610*, 2011. Mestyán M., Yasseri T., Kertész J., Early prediction of movie box office success based on [Wikipedia]{} activity big data. *PLoS ONE, 8(8):e71226*, 2013. Dutton W.H., Blank G., Next generation users: The internet in Britain. Oxford Internet Survey 2011 Report. *Oxford Internet Institute, Oxford University*, 2011.
Weller K., Dornstädter R., Freimanis R., Klein R.N., Perez M., Social software in academia: Three studies on users’ acceptance of Web 2.0 services. In: *Proceedings of the Web Science Conference*, pp. 26-27, 2010. Head A., Eisenberg M., How today’s college students use [Wikipedia]{} for course-related research. *First Monday*, 15(3), 2010. Zickuhr K., Rainie L., [Wikipedia]{}, past and present. Retrieved April 8, 2014, from http://pewinternet.org/Reports/2011/Wikipedia.aspx, 2011. Silverwood-Cope S., [Wikipedia]{}: Page one of [Google]{} UK for 99% of searches. *Intelligent Positioning.* Retrieved July 8, 2013, from http://www.intelligentpositioning.com/blog/2012/02/wikipedia-page-one-of-google-uk-for-99-of-searches/, 2012. Gayo-Avello D., A meta-analysis of state-of-the-art electoral prediction from Twitter data. *Social Science Computer Review*, 2013. Lui C., Metaxas P.T., Mustafaraj E., On the predictability of the US elections through search volume activity. In: *Proceedings of the IADIS International e-Society*, 2011. Maleki A., The latest estimation of the election turnout and votes distribution. *BBC Persian, http://www.bbc.co.uk/persian/iran\ /2013/06/130614\_l45\_ir92\_polls\_analysis.shtml*, 2013.
---
abstract: |
    Nowadays, malware campaigns have reached a high level of sophistication, thanks to the use of cryptography and covert communication channels over traditional protocols and services. In this regard, a typical approach to evade botnet identification and takedown mechanisms is the use of domain fluxing through Domain Generation Algorithms (DGAs). These algorithms produce an overwhelming number of domain names that the infected device tries to communicate with in order to find the Command and Control server, yet only a small fraction of them is actually registered. Due to the high number of domain names, the blacklisting approach is rendered useless. Therefore, the botmaster may pivot the control dynamically and hinder botnet detection mechanisms. To counter this problem, many security mechanisms rely on solutions that try to identify domains generated by a DGA based on the randomness of their names. In this work, we explore hard-to-detect families of DGAs, as they are constructed to bypass these mechanisms. More precisely, they are based on the use of dictionaries, so the generated domains seem to be user-generated. Therefore, they pass many filters that look for, e.g., high-entropy strings. To address this challenge, we propose an accurate and efficient probabilistic approach to detect them. We test and validate the proposed solution through extensive experiments with a sound dataset containing all the wordlist-based DGA families that exhibit this behaviour, and compare it with other state-of-the-art methods, showing in practice the efficacy and advantages of our proposal.
    [***Keywords:*** Malware, Botnets, Domain Generation Algorithm, DNS]{}
author:
- Constantinos Patsakis
- Fran Casino
title: Exploiting Statistical and Structural Features for the Detection of Domain Generation Algorithms
---

Introduction
============

The ceaseless efforts of malware authors to enhance cybercrime with sophisticated techniques [@MANSFIELDDEVINE201815] are creating a new “business” paradigm. Such a business has a myriad of monetisation sources [@isma2018] including, but not limited to, ad injection [@CHEN2017164], spamming [@rao2012economics], denial of service [@sabillon2016cybercrime], ransomware-based extortion [@al2018ransomware] and phishing [@chiew2018survey]. In this regard, one of the most critical aspects of such malware campaigns is the control and management of the compromised hosts. This enables the malware author, apart from compromising the victim’s security and privacy, to orchestrate further attacks and delay the discovery of the attack. In the past, the prevalent methodology was to establish a direct communication channel between the [*Command and Control*]{} (C&C) server and the infected devices. However, this strategy had several flaws, since blacklisting a specific IP or domain name served as an effective takedown mechanism. Nowadays, to counter direct communication issues, cybercriminals use communication channels that disguise the traffic as benign and cannot easily be blocked, e.g. social networks, or use multiple domains to manage infected hosts. In the latter case, the adversary uses a Domain Generation Algorithm (DGA) to periodically generate multiple domains which can be used as rendezvous points to retrieve updates and commands. However, only a few of them are registered. Therefore, the C&C server can be transferred from one domain to another without losing control of the compromised devices.
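To make the mechanism concrete, the following toy random-based DGA (purely illustrative; it corresponds to no real malware family) shows how a pre-shared seed lets both the botmaster and the bots derive the same candidate domain list independently:

```python
import random
import string

def generate_domains(seed, count=10, length=12, tld=".com"):
    """Derive a deterministic list of candidate domains from a shared seed."""
    rng = random.Random(seed)  # the pre-shared secret seeds the PRNG
    return [
        "".join(rng.choice(string.ascii_lowercase) for _ in range(length)) + tld
        for _ in range(count)
    ]

# Botmaster and bot compute identical lists without ever communicating;
# the botmaster registers only one or two of the generated domains.
assert generate_domains("secret-2013-06-14") == generate_domains("secret-2013-06-14")
```

Seeding with, e.g., the current date yields a fresh candidate list every day, which is why blacklisting individual domains is ineffective against this scheme.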
In addition to the dynamic domain transfers, the lack of proper reporting from domain registrars aggravates the problem, since requests made by law enforcement authorities and security practitioners may receive delayed responses, hindering botnet detection. Therefore, the use of a DGA introduces an asymmetry of cost between the attacker and the defender, as the former incurs only a minimal cost to register the domains, while it is impossible for the latter to block all possible domains.

Motivation and Contributions {#sec:motivation}
----------------------------

DGAs come in different flavours depending on how they generate the domain names. The general rule is that they use a pseudo-random generator to create a string that is used as the domain name that the infected hosts query to reach the C&C server. To allow the botmaster and the bots to generate the same list of domain names, the DGA contains a set of pre-shared secrets, e.g. the seed. If the generated domain name is a completely random string, then its entropy will be rather high. This creates a criterion allowing one to differentiate *algorithmically generated domain (AGD)* names from regular DNS queries. To counter this detection method, some DGAs resort to using random combinations of words extracted from predefined dictionaries; these DGAs are therefore often referred to as *wordlist-based DGAs*. In this way, wordlist-based DGAs bypass many security mechanisms, as the generated domains not only have low entropy, but also appear to be generated and requested by humans. This is of particular relevance since more than 2/3 of the domains in the top domain list contain at least one English word, and around 1/3 are composed entirely of English words [@yang2019detecting]. Therefore, distinguishing between benign and malicious word-based AGDs becomes a more challenging task.
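The entropy criterion mentioned above can be sketched as follows (a simplified illustration, not any specific detector): for long labels drawn uniformly from the alphabet, the per-character Shannon entropy approaches $\log_2 26 \approx 4.7$ bits, whereas labels built from natural-language words inherit the skewed letter frequencies of ordinary text:

```python
import math
from collections import Counter

def char_entropy(label):
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Degenerate cases behave as expected:
assert char_entropy("aaaaaaaa") == 0.0          # single repeated symbol
assert abs(char_entropy("abcd") - 2.0) < 1e-9   # four equiprobable symbols
```

A detector would threshold such a score, possibly combined with n-gram statistics; a wordlist-based AGD like `sunshinetable.com` scores close to benign domains, which is precisely why this criterion fails against that family.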
In this work, we illustrate that such AGDs can be easily identified by the fact that wordlist-based DGAs use a limited dictionary, which results in frequent word repetitions. In this regard, we either exploit the well-known “*birthday problem*” and populate custom dictionaries from the non-existent internet domains (NXDomains) that a host queries, or exploit the structure of the domains they produce. Once the queries exceed some quota, we consider that the host has been infected by malware using a DGA. Despite its simplicity, our method manages to be efficient in terms of both computational effort and detection, allowing it to be easily deployed in existing environments. Notably, the method is more efficient and accurate than the current state of the art, managing to throttle all such DGAs after only a few NXDomain requests, without the need for training. Moreover, beyond implementing an additional support layer to detect well-known DGA families, our method can detect new DGA families, since it does not depend on training data nor is it constrained by domain-specific features. Notably, our methodology is applied to a statistically sound dataset containing all the known wordlist-based DGA families to date and 3,027,300 domains, which is by far the largest and most complete dataset in the literature on wordlist-based DGAs.

Organisation of this work
-------------------------

The rest of this work is structured as follows. In Section \[sec:related\], we present the related work regarding DGAs and detection methods. Then, in Section \[sec:methodology\], we discuss the proposed methodology for identifying the operation of wordlist-based DGA malware using only network traffic logs. In Section \[sec:experiments\], we describe our experimental setup, the datasets we utilise, and our results. Afterwards, in Section \[sec:discussion\], we discuss the findings of our extensive experiments and compare our methodology to the current state of the art.
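A toy version of the repetition-quota idea described in the Motivation above can be sketched as follows. Everything here is an illustrative assumption: the fixed `KNOWN_WORDS` set stands in for the dictionary that the actual method populates on the fly from observed NXDomains, and the greedy segmentation is a deliberately naive stand-in for the structural analysis:

```python
from collections import Counter

# Toy stand-in for the custom dictionary; the real method infers
# candidate words directly from the NXDomain stream.
KNOWN_WORDS = {"sun", "shine", "table", "water", "cloud", "river", "stone"}

def segment(label, words=KNOWN_WORDS):
    """Greedy longest-match segmentation of a domain label into words."""
    tokens, i = [], 0
    while i < len(label):
        for j in range(len(label), i, -1):
            if label[i:j] in words:
                tokens.append(label[i:j])
                i = j
                break
        else:
            i += 1  # skip characters that match no known word
    return tokens

def infected(nxdomains, quota=3):
    """Flag a host once any dictionary word repeats `quota` times across
    its NXDomain queries (birthday-problem intuition: a small wordlist
    collides quickly)."""
    counts = Counter()
    for domain in nxdomains:
        counts.update(segment(domain.split(".")[0]))
        if counts and counts.most_common(1)[0][1] >= quota:
            return True
    return False

stream = ["sunwater.net", "cloudsun.com", "riversun.org", "tablesun.info"]
print(infected(stream))  # "sun" repeats quickly, so this prints True
```

Because benign browsing rarely produces NXDomains whose labels keep reusing the same few words, even a small quota separates the two behaviours after a handful of failed lookups.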
Finally, the article concludes by discussing future work and summarising our contributions.

Related Work {#sec:related}
============

Nowadays, malware developers use DGAs, which create a set of AGDs to communicate with C&C servers, overcoming the drawbacks of static IP addresses [@203628; @nadji2017still]. In essence, DGAs use a deterministic pseudo-random number generator (PRNG) to create a set of domain names [@7535098; @6175908]. Therefore, the infected devices query the set of domains generated by the DGA until one resolves to a valid IP (i.e. the C&C server), whose location may also change dynamically. In this regard, blacklisting domains is rendered useless, as it entails many practical issues. According to the literature, there are two main families of DGAs: (i) random-based DGAs, which use a PRNG to generate a set of characters to create a domain name, and (ii) dictionary/wordlist-based DGAs, which use a predefined dictionary of existing words to generate such domains, making their detection a more challenging task. There also exists a minor subset of DGA families that use valid domains that were previously hacked to hide their C&C servers (i.e. domain shadowing) [@Liu2017], as well as DGAs that generate domain names very similar to existing valid domains or to those generated by other DGA families [@johannesbader], hindering the detection task. Considering the dependency of the pre-shared secret on time, Plohmann et al. [@197187] further categorise DGAs into time-independent and deterministic, time-dependent and deterministic, and time-dependent and non-deterministic. Fu et al. [@7852496] proposed two DGAs which use hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs), and tested them against state-of-the-art detection systems.
After analysing the outcomes using metrics such as the Kullback-Leibler (KL) distance, the edit distance (ED), and the Jaccard index, their results showed that these DGAs hindered the detection rate of such approaches. In the case of random-based DGA detection, a common practice is to analyse some features of the domain names and their lexical characteristics to determine whether a DGA has generated them [@Aviv2011; @6151233]. Moreover, auxiliary information such as WHOIS and DNS traffic (e.g. frequent NXDomain responses) is often used to detect abnormal behaviours [@Zhou2013DGABasedBD; @5762763; @1]. Other approaches use machine learning-based techniques and combine the previous information to identify random-based DGAs, such as in [@5762763; @yadav2012; @yadavgraph; @7163279]. Nevertheless, many researchers have recently started focusing on the detection of wordlist-based DGAs. In [@curtin2018detecting], the authors propose the smashword score, a metric that uses n-gram overlapping combined with information provided by WHOIS lookups to detect AGDs. The WordGraph method [@pereira2018dictionary] extracts dictionary information that is embedded in the malware using a graph-based approach, which models repetitions and combinations of domain name strings. In [@lison2017automatic], the authors use a machine learning approach based on recurrent neural networks trained using “familiar” (i.e. already known) dictionaries to detect wordlist-based AGDs. Similarly, the work presented in [@stefanotracking] focuses on AGD classification and characterisation, generating knowledge about the evolving behaviour of botnets. The authors of [@Anderson2016] propose a generative adversarial network (GAN), which can learn to bypass classical deep learning detectors. Thereafter, the acquired information is used as feedback to the system to improve the accuracy of the AGD detectors. Neural networks are also used to classify domain names based on word-level information in [@koh2018inline].
More concretely, researchers use ELMo [@peters2018deep], a context-sensitive word embedding, and a classification network that consists of a fully-connected layer with 128 rectified linear units and a logistic regression output layer. In [@tong2016method], the authors propose an improvement of the Phoenix botnet detection system [@stefanotracking] by using a modified Mahalanobis distance metric to perform classification, as well as a variant of $k$-means to increase clustering effectiveness. The work described in [@18] proposes a long short-term memory network (LSTM), which uses raw domain names as features to perform binary classification. Yang et al. proposed a classification based on a set of features such as word correlations, frequency, and part-of-speech tags in [@yang2018novel]. Later, they enhanced their detection mechanism with inter-word and inter-domain correlations using semantic analysis [@yang2019detecting]. Spooren et al. [@Spooren2019] recently showed that their deep learning recurrent neural network is significantly better than classical machine learning approaches. More interestingly though, they showed that one of the dangers of manual feature engineering is that an adversary may adapt her strategy if she knows which features are used in the detection. To this end, they introduce properly crafted DGAs that bypass these classifiers. Berman [@berman2019dga] developed a method based on Capsule Networks (CapsNet) to detect AGDs. They compare their method with well-known approaches such as RNNs and CNNs, and the outcomes showed that CapsNet obtained similar accuracy with better performance. Xu et al. [@XU201977] proposed the combination of n-grams and a deep CNN to create an n-gram combined character-based domain classification (n-CBDC) model. Their model runs in an end-to-end way and does not require domain feature extraction, which enhances its performance. Vinayakumar et al.
[@Vinayakumar2019] implemented a set of deep learning architectures with Keras Embedding and classical machine learning algorithms to classify DGA families. Their best reported configuration is obtained when using RNNs with an SVM with a radial basis function kernel (SVM-RBF). For a detailed overview and classification of methods for detecting malicious domains, the interested reader may refer to [@Zhauniarovich2018]. In a recent work [@Patsakis2019], the authors extend the notion of DGAs into a more generic one, namely Resource Identifier Generation Algorithms (RIGAs), which allow the use of other protocols beyond DNS. In this regard, the authors show how decentralised permanent storage (DPS), although a useful technology able to enhance a myriad of applications, has some potential drawbacks and exploitable characteristics for armouring a botnet, as already exploited in the real world [@ipfsstorm], primarily due to its immutability properties. Therefore, the authors showcase the potential risks and opportunities for malware creators and raise awareness about the symbiotic relationship between DPS and malware campaigns. Finally, due to recent advances and the widespread use of covert/encrypted communication channels (e.g. DNSCurve, DNS over HTTPS and DNS over TLS), malware creators have an additional layer to hide their communications, rendering traditional DGA detection mechanisms useless. Nevertheless, as shown in [@patsakiscose19], NXDomain detection can still be performed in such a scenario, as well as feature extraction, so that DGA families can be further classified with high accuracy. In addition to the related work analysis, we argue that it is also worth discussing the fact that NXDomain requests may be a result of user typewriting errors. Each human produces different typing patterns depending on the writing surface (e.g.
keyboard, smartphone, or larger touch screen surfaces such as tablets) [@Findlater2011] and according to their physical and physiological condition [@Andrew88cog; @compagno2017don], which can be used, for instance, to uniquely identify an individual. Nevertheless, typewriting errors are strongly influenced by the language and, therefore, exhibit common characteristics regardless of one’s typing pattern. The most common typing errors [@damerau1964technique; @Peterson86let] (more than 80%) are caused by (i) the transposition of two adjacent letters, (ii) one extra letter, (iii) one missing letter or (iv) one wrong letter. Therefore, such errors can be corrected by backspacing or by moving the cursor to the point at which the error occurred and then retyping [@Karat99]. As previously discussed, although typewriting errors may lead to NXDomains with high probability, many domain names similar to the original are often registered [@cartmell2004registering; @murphy2003] to avoid homograph attacks [@holgers2006cutting]. Moreover, several techniques which aim to overcome homograph attacks can be found in the literature. For example, in [@szurdi2014long], the authors designed an accurate typo categorisation framework and found that typosquatting using parked ads and similar monetisation techniques exists for popular domains as well as in the Alexa list. To mitigate this problem, the authors implemented typosquatting blacklists and a browser plugin to prevent mistyping at the user side. In the case of [@moore2010measuring], the authors analyse the main typosquatting issues and the monetisation market behind them, and recall the effectiveness of several policies and efforts to regulate typosquatting. More recently, novel approaches [@226307] developed by the Google Chrome security team implement suggestions for lookalike URLs, which also prevent typewriting errors.
These techniques, added to the fact that most domains are reached through a search engine, historical data and bookmarks, vastly reduce the number of domains accessed directly through typewriting [@BRIN1998107; @Aula2005]. Therefore, taking into account the aforementioned prevention and security measures, we can safely assume that typosquatted domains and homograph attacks represent a marginal percentage and a low potential danger compared to DGA queries, which are much more frequent.

Proposed methodology {#sec:methodology}
====================

As already discussed, wordlist-based DGAs have predefined dictionaries that they use to create the possible domains that the malware will try in order to find the C&C server. In our methodology, we exploit the fact that this set is often rather constrained, so we expect frequent repetitions of words in the NXDomain requests. The general methodology can be summarised as follows. A monitoring mechanism collects all the NXDomain requests performed by hosts. These domains are split into words, and those words are divided into buckets. The buckets are filled with words either statically (each word has an individual bucket) or because they fit a specific pattern. Once a bucket, or a set of buckets, reaches a threshold, an alert is raised. In what follows, we assume that the monitoring mechanism has a cache that stores the results and either periodically wipes them after an epoch $T$ or wipes records that are older than $T$. This prevents the mechanism from reporting attacks as a result of, e.g. old typing errors accumulated over time. To facilitate the reader, we consider a simple scenario and gradually build on it to describe our proposal. In our scenario, we have a wordlist-based DGA which selects two words from a dictionary of $n$ words, adds a separator symbol (e.g. -) between them, and then appends a top-level domain (TLD) from a predefined set.
If we assume that $n$ is small, then from the well-known “*birthday problem*” [@birthday2] we expect to have a collision, that is, a word being repeated, after approximately $\sqrt{n}$ domain name generations. More precisely, the latter is expected to happen with 50% probability. Setting the threshold of repetitions too low, e.g. 2, may evidently lead to many false positives, as a user may have mistyped a domain name. This small number of false positives is an acceptable trade-off in the event of human errors, as described in Section \[sec:related\]. Although typewriting errors are much less frequent than DGA queries, as discussed in Section \[sec:related\], one may set the threshold for repetitions of words in domain names higher to allow some grace for typos. In what follows, we denote this threshold as $t$. Generalising the above, one DGA may have $k$ dictionaries and generate each fragment of the domain name by selecting a word from each dictionary. Therefore, we may formalise our problem as follows: **Problem setting:** *Let us assume that a DGA has $k$ dictionaries $d_i, \: i \in \{1,...,k\}$. The DGA uses words (denoted as $w$) to create domain names of the form $dom$ such that:* $$dom=w_1||w_2||...||w_k, \: w_i\in d_i, \: \forall i\in\{1,2,3,...,k\}.$$ *That is, $dom$ is the ordered concatenation ($||$) of $k$ words obtained by randomly selecting one word from each dictionary and putting them in the same order as their dictionaries. Find the probability $p$ of having at least one word from any of the dictionaries being selected at least $t$ times, for $t$ constant.* It is clear that the case of having one dictionary ($k=1$) and requesting one collision ($t=2$) is the well-known birthday problem. Levin [@levin1981representation] and Diaconis and Mosteller [@diaconis1989methods] have thoroughly studied the birthday problem and its extensions.
Based on their proofs, we have that the probability $p$ of having $t$ collisions in a dictionary of $L$ words after $n$ trials satisfies the following approximation: $$\frac{ne^{-n/Lt}}{\left(1-\frac{n}{L(t+1)}\right)^{1/t}}\approx \left(L^{t-1}t!\log\left(\frac{1}{1-p}\right)\right)^{1/t}.$$ Therefore, solving for $p$ we have that: $$p\approx 1-\exp \left(-\frac{L^{1-t} \left(n e^{-\frac{n}{Lt}} \left(1-\frac{n}{L(t+1)}\right)^{-1/t}\right)^t}{t!}\right).$$ From the latter approximation, and the fact that in the generic wordlist-based DGAs discussed above collisions affecting different dictionaries are independent, one can compute the probability of a $t$-collision as follows: $$P(t\text{-}collision)=\sum_{i=1}^kp_i,$$ where: $$p_i=1-\exp \left(-\frac{L_i^{1-t} \left(n e^{-\frac{n}{L_it}} \left(1-\frac{n}{L_i(t+1)}\right)^{-1/t}\right)^t}{t!}\right),$$ and $L_i$ denotes the length of dictionary $d_i$. Going a step further, let us assume that we have captured an NXDomain request from a host. We may split the name to see the structural pattern of the domain name. For instance, the domain name consists of 2 or 3 words with a total length above $M$ characters. Note that DGAs tend to have rather long domain names to, e.g. maximise the chances of purchasing the domains they want and avoid compromised machines contacting existing domains. Therefore, due to the way wordlist-based DGAs work, requests to NXDomains which have long names or a specific number of concatenated words may imply the existence of an infected machine. More precisely, wordlist-based DGAs often have a static template of the form, e.g. `w_1||w_2.TLD` (see DGAs like `nymaim`, `pizd`, and `suppobox`) or `w_1||w_2||...||w_m.TLD` so that: $$|w_1|+|w_2|+...+|w_m|>M,$$ whose contents change according to each DGA. As shown in Section \[sec:experiments\], typical user DNS queries do not follow this template.
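To make the approximation concrete, the closed form above can be evaluated numerically and cross-checked against a direct simulation. The following sketch is ours (not part of the proposed detection mechanism) and implements the Diaconis-Mosteller approximation exactly as stated, together with a Monte Carlo estimate:

```python
import math
import random

def t_collision_prob(n: int, L: int, t: int) -> float:
    """Diaconis-Mosteller approximation: probability that some word is
    drawn at least t times in n draws from a dictionary of L words."""
    core = n * math.exp(-n / (L * t)) * (1 - n / (L * (t + 1))) ** (-1 / t)
    return 1 - math.exp(-(L ** (1 - t)) * core ** t / math.factorial(t))

def simulate(n: int, L: int, t: int, trials: int = 2000) -> float:
    """Monte Carlo estimate of the same probability, for cross-checking."""
    hits = 0
    for _ in range(trials):
        counts = [0] * L
        for _ in range(n):
            w = random.randrange(L)
            counts[w] += 1
            if counts[w] >= t:
                hits += 1
                break
    return hits / trials
```

For the classic birthday setting ($L=365$, $t=2$), `t_collision_prob(23, 365, 2)` evaluates to roughly 0.5, in line with the well-known 50% figure, and the simulation agrees within sampling error.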
Therefore, the proposed methodology is a two-layer filter for the collected NXDomains. By default, DNS queries are performed in cleartext and can be captured by any network device hosted in the same network. Therefore, a network monitoring device can passively collect all DNS queries performed in the network and submit the NXDomains for processing by our two-filter mechanism. The first filter has a threshold based on word repetitions, while the second one keeps track of the patterns that recent domains have in terms of words, letters and structure. The proposed methodology is illustrated in Figure \[fig:proposed\_methodology\]. First, the network monitoring mechanism collects all the DNS queries and logs all NXDomain requests. Each such request is analysed by the two filters to check whether a threshold is exceeded. To do so, we first remove the TLD of the domain request and keep only the SLD, which is then analysed. The bulk of the literature studies SLDs only, as TLDs offer little information and are well-known and fixed. Third-level domains are not considered in the related work, since any anomaly in them can be easily detected and handled by the owner of the SLD. Next, the first filter receives the SLD as input, splits it into words and sets a counter for each word that appears in this analysis. Once the counter of a word exceeds a given threshold $T$, a warning is issued. In parallel, the SLD is analysed for structural patterns, e.g. number of words, total length, etc., as discussed in Section \[sec:experiments\]. Again, if the counter that keeps track of repetitions of these patterns exceeds a given threshold $T'$, a warning is raised. As discussed in Section \[sec:discussion\], the two-layer approach manages to provide high reliability, accuracy, and robustness. Moreover, due to the template that such AGDs are expected to have, our methodology is generic enough to counter other, non-reported wordlist-based DGAs.
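The two-layer filter described above can be sketched as follows. This is a minimal illustration under stated assumptions: the class and method names are ours, the `split_words` callable stands in for a word segmenter such as Wordninja, the SLD extraction is naive (no public-suffix list), and the structural pattern is reduced to label length and word count:

```python
from collections import Counter

def sld(domain: str) -> str:
    """Keep the second-level label; naive split, ignores public-suffix lists."""
    parts = domain.rstrip('.').split('.')
    return parts[-2] if len(parts) >= 2 else parts[0]

class TwoFilterMonitor:
    """Minimal sketch of the two-layer NXDomain filter, per host and epoch."""

    def __init__(self, split_words, word_threshold=3, pattern_threshold=5):
        self.split_words = split_words
        self.word_threshold = word_threshold
        self.pattern_threshold = pattern_threshold
        self.word_counts = Counter()
        self.pattern_counts = Counter()

    def observe(self, nxdomain: str):
        """Process one NXDomain request and return any alerts it triggers."""
        label = sld(nxdomain)
        words = self.split_words(label)
        alerts = []
        # Filter 1: word repetitions.
        for w in words:
            if len(w) > 3:  # skip articles, pronouns, etc.
                self.word_counts[w] += 1
                if self.word_counts[w] == self.word_threshold:
                    alerts.append(('word', w))
        # Filter 2: structural pattern (total length, number of words).
        pattern = (len(label), len(words))
        self.pattern_counts[pattern] += 1
        if self.pattern_counts[pattern] == self.pattern_threshold:
            alerts.append(('pattern', pattern))
        return alerts
```

In a deployment, the counters would be wiped after each epoch $T$, as described in the methodology overview.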
![image](monitor.pdf){width="80.00000%"}

Experiments {#sec:experiments}
===========

In this section, first, we describe the setup and methodology of our experiments. Next, we use the implementation of several DGAs, as provided by researchers[^1][^2] who have reverse engineered the corresponding malware that embeds them, the real-world captures of the DGArchive [@197187], and two DGAs that were designed to bypass machine learning algorithms [@Spooren2019]. More concretely, we study the following DGA families: `beebone`, `banjori`, `gozi`, `matsnu`, `nymaim2`, `pizd`, `rovnix`, `suppobox` and `volatilecedar`. Apart from studying wordlist-based DGAs (`gozi`, `matsnu`, `nymaim2`, `pizd`, `rovnix`, and `suppobox`), we include in our dataset two arithmetic-based DGAs (`beebone` and `banjori`) and one permutation-based DGA (`volatilecedar`) to assess the effectiveness of our method on other families of DGAs. Finally, we have included the two DGA families from [@Spooren2019], which we refer to as `decept` and `decept2`, which were crafted to bypass machine learning-based mechanisms. Table \[tbl:sample\] provides an overview of our dataset. To this end, the table shows how many samples and seeds each DGA has. Moreover, we provide five sample AGDs from each DGA in our dataset to facilitate the reader in understanding the AGDs each DGA produces. The underlying dictionaries vary in length, origin, and number for each DGA. More precisely, `matsnu` contains two dictionaries, one for verbs (878) and one for nouns (1008). Depending on the seed, its AGDs always start with either a verb or a noun. Then, using the seed for its PRNG, `matsnu` selects one word from each dictionary iteratively, until the length of the domain exceeds 24 characters. `nymaim2` uses two dictionaries, one for the first word (2450 words) and one for the second one (4387 words), which are concatenated either directly without any separator or with the “-” character.
`pizd` uses by default a wordlist containing 384 words and uses a PRNG to select two words and concatenate them to generate an AGD. `rovnix` uses as its source the US Declaration of Independence; the dictionary contains all the alphanumeric words of the document. To construct an AGD, `rovnix` selects words according to a PRNG until the selected words exceed 20 characters. `gozi` is a variant of `rovnix`. Its dictionaries originate from various public-domain documents that are unlikely to be moved to another location, e.g. Request for Comments pages and the GNU Lesser General Public License. `gozi` splits these documents according to stop characters (spaces, commas, etc.) and selects the words with at least three characters that contain only letters. From this wordlist, the PRNG selects random words that are concatenated so that the resulting AGD contains between 12 and 23 characters. `suppobox`, in the three seeds identified so far, has a dictionary of 384 words from which it selects two random words and concatenates them. In our dataset, we have also added the top 1 million websites from Alexa. The reason for including them is to have some ground truth of benign traffic and to illustrate that the domains an actual user would query are significantly different from the ones derived from a DGA, even if the latter are made to resemble them. It should be noted here that the Alexa top 1 million dataset contains web pages and not domains; therefore, there are repetitions of domains, e.g. blogspot. Moreover, there are several Internationalised Domain Names (IDNs)[^3] (i.e. domains that start with the characters `xn--`), in which the domain name cannot contain any word. We opted to remove the latter domains and to keep each domain once. Note that, up to now, there are no DGAs using IDN domain names. Therefore, the *alexa* dataset consists of 793,606 domains.
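The generation scheme shared by `pizd` and `suppobox` (a seeded PRNG selecting two words from a small dictionary and appending a TLD) can be imitated with a toy generator. The word list and the hash-chain PRNG below are illustrative assumptions of ours, not the reverse-engineered malware code:

```python
import hashlib

# Toy dictionary; the real families use 384-word lists.
WORDS = ["above", "share", "already", "action", "probable",
         "although", "several", "mountain", "possible", "window"]

def toy_dga(seed: str, count: int, tld: str = "net"):
    """Deterministically generate `count` AGDs from `seed` by chaining
    SHA-256 as a stand-in PRNG and picking two words per domain."""
    domains = []
    state = hashlib.sha256(seed.encode()).digest()
    for _ in range(count):
        w1 = WORDS[state[0] % len(WORDS)]
        w2 = WORDS[state[1] % len(WORDS)]
        domains.append(f"{w1}{w2}.{tld}")
        state = hashlib.sha256(state).digest()  # advance the PRNG state
    return domains
```

With only ten words, repetitions appear almost immediately; this is precisely the weakness that the word filter exploits, and it persists (more slowly) for the real 384-word dictionaries.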
| **DGA** | **Type** | **\# of samples** | **Seeds** | **Sample domains** |
|---------------|-------------|-------------------|-----------|--------------------|
| alexa | Benign | 793,606 | - | google.com, youtube.com, tmall.com, baidu.com, qq.com |
| beebone | Arithmetic | 210 | 2 | backdates0.com, backdates0.org, backdates0.net, backdates0.biz, backdates0.info |
| banjori | Arithmetic | 452,235 | 35 | andersensinaix.com, xjsrrsensinaix.com, hlrfrsensinaix.com, fnosrsensinaix.com, qcwcrsensinaix.com |
| decept | ML | 150,000 | 1 | ytrbegofitr8b, rithundiat, tarenoth200qyumurop, hatoranfa, xunrvrstbe |
| decept2 | ML | 150,000 | 1 | tafersickir, pblogalmarportran-f, martapord, joedavingsbiosk, prialtions |
| gozi | Wordlist | 216,355 | 104 | penarumsalpaesthodie.com, quodquibusfulminatcur.com, defunctorumnullamrelaxat.com, veniarumcuramhabet.com, nisisacerdotipapefalse.com |
| matsnu | Wordlist | 117,401 | 3 | starsendbottomhabitshake.com, causeirongroundnettellstart.com, cultureexploredogdistrict.com, sizeprogrambillsaypointpot.com, tourmentionboneconcertadmire.com |
| nymaim2 | Wordlist | 23,424 | 1 | squirting-eight.net, unitsedgar.net, sexuality-giant.net, utilitiespour.ki, vermontfeatures.ad |
| pizd | Wordlist | 10,000 | 1 | aboveshare.net, alreadyshare.net, actionprobable.net, althoughprobable.net, actionseveral.net |
| rovnix | Wordlist | 100,000 | 1 | theirtheandaloneinto.com, thathistoryformertrial.com, tothelayingthatarefor.com, definebritainhasforhe.com, tosecureonweestablishment.com |
| suppobox | Wordlist | 1,313,571 | 3 | possibleshake.net, mountainshare.net, possibleshare.net, perhapsnearly.net, windownearly.net |
| volatilecedar | Permutation | 498 | 2 | deotntexplorer.info, doetntexplorer.info, dotentexplorer.info, dotnetexplorer.info, dotnteexplorer.info |
| **Total** | | **3,327,300** | | |

Using real data from the DGArchive [@dietrich2011botnets] and the reversed code, we collect the set of the domain names they create.
The total number of AGDs is 2,533,694 and, adding the benign domains from alexa, we end up with a dataset of 3,327,300 domains. Each domain in these datasets is processed to extract the words that were concatenated to generate it. This is achieved with the use of Wordninja[^4], a natural language processing (NLP) method that probabilistically splits concatenated words based on English Wikipedia unigram frequencies. In both filters, we assume that the network monitoring device intercepts all the NXDomain requests of each device and analyses them individually. For the sake of clarity, we assume that the network monitoring device gets as input a stream of NXDomain requests from only one device. Moreover, we assume that the device is infected by only one malware with DGA capabilities, and therefore the malware uses one seed. In our experiments, this practically means that each experiment has to be executed per DGA family, using one seed at a time. We argue that this strategy is sound, as the malware will not manage to connect directly to the C&C server, and several connections will be attempted before the malware manages to connect and receive the command to, e.g. use a different dictionary/seed. Note that, in our DGA dataset, a different seed in several cases translates to the use of another dictionary. In the first set of experiments, we test the frequency of word collisions against specific thresholds. More precisely, we split each domain name into words and record the occurrences of words whose length is more than three letters, to avoid articles, pronouns, etc. Based on a threshold of how many occurrences we expect from an AGD during an epoch, we monitor all NXDomain requests and raise an alert when the threshold is reached. To provide better insight into these results, rather than simply reporting a single pass over the NXDomain queries of each DGA and seed in our dataset, we shuffled them and repeated the same measurement 1,000 times.
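The splitting-and-counting step can be sketched as follows. Since Wordninja is an external package, a greedy longest-match splitter over a toy vocabulary stands in for it here; the vocabulary, function names and sample thresholds are assumptions of this sketch:

```python
from collections import Counter

# Toy vocabulary; the paper uses Wordninja, which segments labels
# probabilistically via English Wikipedia unigram frequencies.
VOCAB = {"possible", "shake", "share", "mountain", "nearly", "window", "perhaps"}

def greedy_split(label: str):
    """Greedy longest-match segmentation of a domain label."""
    words, i = [], 0
    while i < len(label):
        for j in range(len(label), i, -1):
            if label[i:j] in VOCAB:
                words.append(label[i:j])
                i = j
                break
        else:
            words.append(label[i])  # unknown character, emit it alone
            i += 1
    return words

def queries_until_strike(domains, threshold=3):
    """Number of NXDomains seen before some word (longer than three
    letters) occurs `threshold` times; None if never reached."""
    counts = Counter()
    for k, dom in enumerate(domains, 1):
        for w in greedy_split(dom.split('.')[0]):
            if len(w) > 3:
                counts[w] += 1
                if counts[w] >= threshold:
                    return k
    return None
```

On the `suppobox` samples of Table \[tbl:sample\], for instance, the word "possible" repeats within a handful of queries, so a strike is raised early, mirroring the measurements reported below.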
In Figure \[fig:strikes\], we illustrate the results for different threshold levels, which range from 3 to 7. By shuffling the domains that a DGA generates, we effectively test the DGA with different possible seeds, far more than the ones we originally had. Since the use of a different seed is a common practice in malware, we may study how our methodology performs in different settings and show that it is generic enough to be used in various configurations. Although this threshold can be modified, we claim that three unrelated NXDomain queries containing the same word are unlikely to be generated by a human, as discussed in Section \[sec:related\] and in our problem setting. This hypothesis is confirmed by our first experiment, which shows that our word filter is able to accurately detect the domains generated by each DGA family.

![image](strikes3.pdf){width=".9\columnwidth"} ![image](strikes4.pdf){width=".9\columnwidth"}
![image](strikes5.pdf){width=".9\columnwidth"} ![image](strikes6.pdf){width=".9\columnwidth"}
![image](strikes7.pdf){width=".9\columnwidth"} ![image](nymaim2.pdf){width=".9\columnwidth"}

![image](letters.pdf){width=".9\columnwidth"} ![image](words.pdf){width=".9\columnwidth"}

We conducted another set of experiments to study the statistical differences between wordlist-based DGAs and regular domains. Intuitively, we argue that the “poor” dictionary of these DGAs results in frequent repetitions of the same words in NXDomain queries, as well as in some identifiable patterns. To this end, we analyse the textual statistical properties of the previously selected DGA families. Next, we compare them with those of the Alexa top 1 million domains and depict the results in Figure \[fig:stats\]. Based on these statistics, we create a filter as follows.
We keep a short registry of the five most recent NXDomain requests, and we check whether any of the following criteria holds:

- All the requests are above ten characters.
- The number of words is the same in all requests.
- The number of words in all requests is above 2.
- The number of “short” words (less than four characters) is more than 2 in all requests.
- All the requests are made to the same SLD with different TLDs.

The results of this process are illustrated in Figure \[fig:prune\]. While this pattern approach introduces a bias in terms of language constraints, this can be resolved by extending the dictionary of the underlying splitting algorithm. Such an extension may solve the issue for Latin-based dictionaries; however, it does not resolve the case of IDNs. Evidently, this filter manages to efficiently capture the lexicographical structure of the domain names, with high accuracy and low false positive and false negative rates. Moreover, it complements the previous filter by keeping a record of how many times a specific lexicographical structure was identified. Should these occurrences pass a threshold during a predefined epoch, the corresponding alert is raised. As in the previous case, we performed our experiments 1,000 times. It is evident that our filter shows significant differences between AGDs and alexa. Notably, while there are some outliers for all DGAs, the average of the counter is close to 5, with the highest being 5.24 for `suppobox`, whereas alexa had an average of 24.19 requests. We believe that the above illustrates that almost all DGAs can be identified with at most six requests.

![Number of requests needed to pass the pattern criterion threshold.[]{data-label="fig:prune"}](pattern_all.pdf){width=".9\columnwidth"}

Finally, we used our pattern approach on the hard-to-detect DGAs crafted in [@Spooren2019]. Both DGAs scored higher than the other DGAs.
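The five criteria above can be expressed directly in code. In this sketch, the `(sld, tld)` pair representation and the `split_words` parameter are assumptions of ours; the criteria are OR-ed, since the registry only needs any one of them to hold:

```python
def pattern_suspicious(recent, split_words):
    """Evaluate the five structural criteria over the five most recent
    NXDomain requests; `recent` is a list of (sld, tld) pairs and
    `split_words` a word segmenter (Wordninja in the paper)."""
    if len(recent) < 5:
        return False
    slds = [s for s, _ in recent]
    tlds = [t for _, t in recent]
    splits = [split_words(s) for s in slds]
    return (
        all(len(s) > 10 for s in slds)                            # all names long
        or len({len(ws) for ws in splits}) == 1                   # same word count
        or all(len(ws) > 2 for ws in splits)                      # many words each
        or all(sum(len(w) < 4 for w in ws) > 2 for ws in splits)  # many short words
        or (len(set(slds)) == 1 and len(set(tlds)) == len(tlds))  # same SLD, new TLDs
    )
```

A counter over such matches, compared against the threshold $T'$ per epoch, yields the alert behaviour illustrated in Figure \[fig:prune\].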
Interestingly, the average of 1,000 experiments showed that `decept` was marginally harder than `decept2`, requiring 7.82 and 7.115 queries on average, respectively.

Discussion {#sec:discussion}
==========

In this section, we discuss the results of the experiments in Section \[sec:experiments\], as well as the main benefits of our approach and how it compares to the current state of the art. The results depicted in Figure \[fig:strikes\] show that all the DGA families evaluated can be detected with only a few NXDomain queries (each domain resulting in a set of processed words), except `nymaim2`. This means that the AGDs they generate tend to repeat words with statistical significance, as detected by our word-based filter. In the simplest case (i.e. with a threshold of 3 words, cf. Figure \[fig:strikes\]a), we can detect AGDs with fewer than 30 NXDomain queries in almost all cases. For instance, the malware `gozi` uses the same word 3 times after generating 28 domains (see Figure \[fig:strikes\]a). In addition, the growth exhibited in Figure \[fig:strikes\], in terms of NXDomain queries needed to go from 3 to 7 strikes, follows the same pace as the probabilities derived in Section \[sec:methodology\]. The latter implies that, proportionally, the number of strikes grows faster than the number of NXDomains analysed. In the case of `nymaim2`, the results show that we need a large number of NXDomains to find collisions (see Figure \[fig:nymaim2detail\]). This occurs because `nymaim2` uses a predefined structure to create domains in which two words, selected from two separate dictionaries of 2450 and 4387 words, respectively, are appended to a TLD (the number of possible TLDs is 74). Therefore, the number of possible combinations hinders its detection. Nevertheless, this variability lies only in the dictionary, and since the structure remains the same, it is captured by our pattern filter.
It is worth noting that `beebone` needs a constant number of queries to be detected by our word filter, as the words that act as prefix/suffix are constant in all queries (cf. Table \[tbl:sample\]); hence, such DGAs trigger the alert in exactly as many queries as the threshold. It is clear from Figure \[fig:stats\] that benign domain names most likely consist of at most three words containing fewer than ten letters in total. Therefore, NXDomain requests that do not meet these criteria can be considered ‘suspicious’ by our pattern-based filter. Our claim is verified by the results of our pattern filter on `alexa` (cf. Figure \[fig:prune\]), since the number of domains needed to pass the threshold is far higher than those required by the rest of the DGAs. Note that the pattern-based filter can fully capture the behaviour of `nymaim2` (as well as of the rest of the families, with few exceptions), so that in such cases we can raise an alert faster than with our word-based filter. Finally, it is worth noticing that the bulk of hash- and arithmetic-based DGAs, such as `Dyre`, `Gameover`, `Gspy` and `Omexo`, produce domains that are long (more than 15 characters) and, as they are hex-encoded, hardly form words. Therefore, all such AGDs fail our structure criterion. Moreover, AGDs generated by other DGA families, including `DNS Changer`, `DiamondFox`, `DirCrypt` and `EKforward`, also fail our structure criterion, as they might generate shorter domains, but the produced domains do not contain meaningful words.
DGA [@koh2018inline] [@18] [@berman2019dga] [@curtin2018detecting] [@XU201977] [@yang2019detecting] [@Vinayakumar2019] [@7852496] Our Method
--------------- ------------------ ------- ------------------ ------------------------ ------------- ---------------------- -------------------- ------------ ------------
beebone 90.46 100 74.9 100
banjori 99.84 100 80.8 99.78 0 100
decept 100
decept2 100
gozi 0 77.3 97.98 100
hmm-dga 48.67 100
matsnu 98.74 0 0 89.1 96.91 100 100
nymaim2 100
pcfg-dga 48.1 100
pizd 98.76 91.96 100
rovnix 98.71 80.5 100
suppobox 98.74 35.42 87.82 56.8 98.18 84.17 82.1 100
volatilecedar 100 100 95.8 100

To showcase the efficacy of our method, we compared it with the most recent state of the art. The outcomes of each method are depicted in Table \[tab:comparison\]. It can be observed that none of the compared methods covers the whole list of wordlist-based DGAs analysed in this article, as stated in the Motivation section. In terms of accuracy, we can observe that most of the methods succeed in providing remarkable accuracy for at least one of the families, with the exception of the work presented in [@Vinayakumar2019]. Moreover, in many cases, the size of the samples evaluated is extremely small (which evinces the difficulty of obtaining a quality database such as the one used in this article). For instance, in the case of Tran et al. [@18], we observe that the authors train their system with a rather constrained dataset, e.g. they use 42,166 `banjori` domains but only 42 `beebone` domains. Another example can be found in [@curtin2018detecting], where the authors use only 250 samples in the case of `suppobox`. After a deeper analysis of the outcomes, we observed a common trend in most of them. More concretely, when a method can accurately detect a DGA family, it fails to detect others, due to the particular characteristics of each DGA, as seen in [@18; @berman2019dga; @curtin2018detecting; @yang2019detecting; @Vinayakumar2019].
Note that there are cases where a method is not able to capture any instance of a DGA (i.e. the reported accuracy is 0). In addition, some of the families are explored only in this paper (i.e. `nymaim2`), while `decept`, `decept2`, `hmm-dga`, and `pcfg-dga` have only been analysed by their creators [@7852496; @Spooren2019] using state-of-the-art methods. More precisely, in the case of `hmm-dga` and `pcfg-dga`, the accuracies reported are below 49% for both DGAs using the Botdigger [@zhang2016botdigger] DGA detector, and in the case of `decept` and `decept2` the accuracy is below 85% using an LSTM. Finally, it is worth mentioning that when some DGAs change their seed, the accuracy of their detection drops significantly, as stated by Berman [@berman2019dga] in the case of `pizd` and `suppobox`. However, the latter behaviour does not affect our method, since it only implies a restart of the word counter. In addition, we did not include the works presented in [@pereira2018dictionary] and [@lison2017automatic] due to their database size, since they train their methods with a very high number of domains, in some cases orders of magnitude higher than the queries needed by our method to detect them. For instance, in [@pereira2018dictionary], the authors use 8 days of real traffic data, resulting in thousands of domains. In the case of [@lison2017automatic], they use tens of thousands of queries generated by `banjori`, `gozi`, `nymaim2`, `rovnix`, `suppobox`, and `matsnu` (with low detection accuracy in the case of `matsnu`: below 0.16), and hundreds of queries in the case of `beebone` and `volatilecedar`, to train their system using a 10-fold scheme. Another recent work, proposed by Yang et al. in [@yang2019detecting], uses machine learning and semantic analysis to detect two DGA families, namely `suppobox` and `matsnu`. Using different dataset configurations, they report accuracies between 83.63% and 86.23% for `suppobox` and between 88% and 100% in the case of `matsnu`.
Although they also consider an additional set of unclassified word-based AGDs, their average detection accuracy is 80.58%, which is significantly lower than that of our proposed method. Finally, in the work presented by Spooren et al. [@Spooren2019], the authors achieve different accuracies depending on the classifier and its parameters. In general, the random forest classifier is rendered useless for `decept` and `decept2`, with accuracies below 60%. In the case of the LSTM, the classification accuracy for these families (85.5%) is also lower than ours. In summary, beyond its absolute accuracy in DGA detection, our method offers a set of benefits compared to other well-known literature methods based on neural networks or other feature-based classification mechanisms [@koh2018inline; @18; @pereira2018dictionary; @lison2017automatic; @yang2019detecting; @Spooren2019]. More concretely, our method can be deployed instantly and is parallel by design. In addition, it does not require training, contrary to the aforementioned approaches, enabling the adaptable discovery and detection of novel DGA families. Note that, since they require training, neural networks are sensitive to dictionary changes, hence providing less robust outcomes for real-time DGA analysis than our approach. As a further enhancement, our pattern-based setting can filter out DGAs almost instantly, which means that only a minimal subset needs to be analysed further. Moreover, our word-based filter only needs a small number of NXDomain queries to achieve DGA detection. This also enables personalised policies, where benign domains that fall outside the threshold can be whitelisted, since their number would be relatively small. We also argue that the fast detection of `decept` and `decept2` signifies another advantage of our methodology. More precisely, while these DGAs are crafted to exploit many character features, they fail to be meaningful enough and end up being detected by our pattern filter.
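The word-based filter discussed above can be sketched as a per-word strike counter: since wordlist-based DGAs draw labels from a finite dictionary, the same word recurs across NXDomain queries far sooner than in benign traffic (the birthday-paradox intuition). The class name is ours and the default threshold is illustrative; the paper's experiments use a strike threshold of 3.

```python
from collections import Counter

class WordStrikeFilter:
    """Toy sketch of the word-repetition ("strike") idea.
    A real deployment would feed it the words split out of each
    NXDomain query by a wordlist-based splitter."""

    def __init__(self, strike_threshold: int = 3):
        self.strike_threshold = strike_threshold
        self.counts = Counter()

    def observe(self, words) -> bool:
        """Register the words of one NXDomain query; return True once
        any word has been seen `strike_threshold` times."""
        self.counts.update(set(words))  # count each word once per query
        return any(c >= self.strike_threshold
                   for c in self.counts.values())
```

Feeding it a stream of queries whose labels reuse dictionary words raises the alert as soon as one word accumulates the threshold number of strikes.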
Finally, paired with recent advances in human typewriting error detection and application layers preventing such behaviour, the robustness of our method can be greatly enhanced. It is worth noting that the proposed method is directly affected by the language of the dictionaries and their size. Large dictionaries, as seen in the case of `nymaim2`, imply that more queries are needed to reach the detection threshold. On the other hand, dictionaries in other languages pose a further issue for the method. Nevertheless, the latter affects all methods targeting these DGAs equally, as they all depend on knowledge of the language used in order to split the words accordingly and perform their analysis. Conclusion ========== In this work, we analyse the current state of the art in Domain Generation Algorithms, a family of algorithms that use pseudo-random generators to create sets of AGDs, used as rendezvous points for their C&C servers. More concretely, we focus on DGA families which use wordlists to generate such domains. To provide efficient and accurate detection of wordlist-based DGAs, we propose a probabilistic method inspired by the “birthday paradox” and the structure that these generators exhibit. In this regard, our method offers a series of benefits compared with other state-of-the-art methods, since it can be instantly deployed, requires no training, and can filter the bulk of domain names in terms of their pattern construction, enabling efficient and adaptable DGA detection. Moreover, extensive experiments using state-of-the-art benchmarks show that we need between 3 and 27 NXDomain queries (with the strike threshold set to 3) to detect DGA malware with high confidence using our word-based filter. Future work will focus on analysing the statistical properties of benign domains (including IDN), especially in the case of Alexa, to enhance their classification using different wordlist-based probabilistic word splitters.
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the European Commission under the Horizon 2020 Programme (H2020), as part of the project YAKSHA (Grant Agreement no. 780498) and CyberSec4Europe (<https://www.cybersec4europe.eu>) (Grant Agreement no. 830929), *LOCARD* (<https://locard.eu>) (Grant Agreement no. 832735). The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors. [10]{} The [DGA]{} of pykspa “you skype version is old”. <https://www.johannesbader.ch/2015/03/the-dga-of-pykspa/>, 2015. Bander Ali Saleh Al-rimy, Mohd Aizaini Maarof, and Syed Zainudeen Mohd Shaid. Ransomware threat success factors, taxonomy, and countermeasures: a survey and research directions. , 74:144–166, 2018. Hyrum S. Anderson, Jonathan Woodbridge, and Bobby Filar. : Adversarially-tuned domain generation and detection. In [*Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security*]{}, AISec ’16, pages 13–21, New York, NY, USA, 2016. ACM. Manos Antonakakis et al. From throw-away traffic to bots: detecting the rise of [DGA]{}-based malware. In [*Proceedings of the 21st USENIX conference on Security symposium*]{}, pages 24–24. USENIX Association, 2012. Manos Antonakakis et al. Understanding the mirai botnet. In [*26th [USENIX]{} Security Symposium ([USENIX]{} Security 17)*]{}, pages 1093–1110, Vancouver, BC, 2017. [USENIX]{} Association. Anne Aula, Natalie Jhaveri, and Mika Käki. Information search and re-access strategies of experienced web users. In [*Proceedings of the 14th International Conference on World Wide Web*]{}, WWW ’05, pages 583–592, New York, NY, USA, 2005. ACM. Adam J. Aviv and Andreas Haeberlen. Challenges in experimenting with botnet detection systems. In [*Proceedings of the 4th Conference on Cyber Security Experimentation and Test*]{}, CSET’11, pages 6–6, Berkeley, CA, USA, 2011. 
USENIX Association. Daniel S Berman. Dga capsnet: 1d application of capsule networks to dga detection. , 10(5):157, 2019. Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. , 30(1):107 – 117, 1998. Proceedings of the Seventh International World Wide Web Conference. Brian Cartmell and Jothan Frakes. Registering and using multilingual domain names, January 22 2004. US Patent App. 09/974,746. Yizheng Chen et al. Measuring lower bounds of the financial abuse to online advertisers: A four year case study of the [TDSS/TDL4]{} botnet. , 67:164 – 180, 2017. Kang Leng Chiew, Kelvin Sheng Chek Yong, and Choon Lin Tan. A survey of phishing attacks: their types, vectors and technical approaches. , 2018. Alberto Compagno, Mauro Conti, Daniele Lain, and Gene Tsudik. Don’t skype & type!: Acoustic eavesdropping in voice-over-ip. In [*Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security*]{}, pages 703–715. ACM, 2017. Ryan R Curtin, Andrew B Gardner, Slawomir Grzonkowski, Alexey Kleymenov, and Alejandro Mosquera. Detecting dga domains with recurrent neural networks and side information. In [*Proceedings of the 14th International Conference on Availability, Reliability and Security*]{}, page 20. ACM, 2019. Fred J Damerau. A technique for computer detection and correction of spelling errors. , 7(3):171–176, 1964. Persi Diaconis and Frederick Mosteller. Methods for studying coincidences. , 84(408):853–861, 1989. Christian J Dietrich et al. On botnets that use [DNS]{} for command and control. In [*2011 seventh european conference on computer network defense*]{}, pages 9–16. IEEE, 2011. Jamie Murphy European Marketing Director, Laura Raffa, and Richard Mizerski. The use of domain names in e‐branding by the world’s top brands. , 13(3):222–232, 2003. Andrew W. Ellis. Normal writing processes and peripheral acquired dysgraphias. , 3(2):99–127, 1988. Leah Findlater, Jacob O. Wobbrock, and Daniel Wigdor. 
Typing on flat glass: Examining ten-finger expert typing patterns on touch surfaces. In [*Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*]{}, CHI ’11, pages 2453–2462, New York, NY, USA, 2011. ACM. Y. [Fu]{}, L. [Yu]{}, O. [Hambolu]{}, I. [Ozcelik]{}, B. [Husain]{}, J. [Sun]{}, K. [Sapra]{}, D. [Du]{}, C. T. [Beasley]{}, and R. R. [Brooks]{}. Stealthy domain generation algorithms. , 12(6):1430–1443, June 2017. Tobias Holgers, David E Watson, and Steven D Gribble. Cutting through the confusion: A measurement study of homograph attacks. In [*USENIX Annual Technical Conference, General Track*]{}, pages 261–266, 2006. Nick Ismail. Global cybercrime economy generates over \$1.5tn, according to new study. <https://www.information-age.com/global-cybercrime-economy-generates-over-1-5tn-according-to-new-study-123471631/>, 2018. N. Jiang, J. Cao, Y. Jin, L. E. Li, and Z. Zhang. Identifying suspicious activities through dns failure graph analysis. In [*The 18th IEEE International Conference on Network Protocols*]{}, pages 144–153, Oct 2010. Clare-Marie Karat, Christine Halverson, Daniel Horn, and John Karat. Patterns of entry and correction in large vocabulary continuous speech recognition systems. In [*Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*]{}, CHI ’99, pages 568–575, New York, NY, USA, 1999. ACM. Joewie J Koh and Barton Rhodes. Inline detection of domain generation algorithms with context-sensitive word embeddings. , 2018. Anomali Labs. Interplanetary storm. https://www.anomali.com/blog/the-interplanetary-storm-new-malware-in-wild-using-interplanetary-file-systems-ipfs-p2p-network, 2019. Bruce Levin. A representation for multinomial cumulative distribution functions. , pages 1123–1126, 1981. Pierre Lison and Vasileios Mavroeidis. Automatic detection of malware-generated domains with recurrent neural models. , 2017. Daiping Liu et al. 
Don’t let one rotten apple spoil the whole barrel: Towards automated detection of shadowed domains. In [*Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security*]{}, CCS ’17, pages 537–552, New York, NY, USA, 2017. ACM. Pratyusa K. Manadhata, Sandeep Yadav, Prasad Rao, and William Horne. Detecting malicious domains via graph inference. In Miros[ł]{}aw Kuty[ł]{}owski and Jaideep Vaidya, editors, [ *Computer Security - ESORICS 2014*]{}, pages 1–18, Cham, 2014. Springer International Publishing. Steve Mansfield-Devine. The malware arms race. , 2018(2):15 – 20, 2018. E. H. Mckinney. Generalized birthday problem. , 73(4):385–387, 1966. Tyler Moore and Benjamin Edelman. Measuring the perpetrators and funders of typosquatting. In [*International Conference on Financial Cryptography and Data Security*]{}, pages 175–191. Springer, 2010. Yacin Nadji, Roberto Perdisci, and Manos Antonakakis. Still beheading hydras: Botnet takedowns then and now. , 14(5):535–549, 2017. Constantinos Patsakis and Fran Casino. Hydras and ipfs: a decentralised playground for malware. , Jun 2019. Constantinos Patsaks, Fran Casino, and Vasilios Katos. Encrypted and covert dns queries for botnets: Challenges and countermeasures. , To appear, 2019. R. Perdisci, I. Corona, and G. Giacinto. Early detection of malicious flux networks via large-scale passive dns traffic analysis. , 9(5):714–726, Sept 2012. Mayana Pereira, Shaun Coleman, Bin Yu, Martine DeCock, and Anderson Nascimento. Dictionary extraction and detection of algorithmically generated domain names in passive dns traffic. In [*International Symposium on Research in Attacks, Intrusions, and Defenses*]{}, pages 295–314. Springer, 2018. Matthew E Peters et al. Deep contextualized word representations. , 2018. James L. Peterson. A note on undetected typing errors. , 29(7):633–637, July 1986. Daniel Plohmann et al. A comprehensive measurement study of domain generating malware. 
In [*25th [USENIX]{} Security Symposium ([USENIX]{} Security 16)*]{}, pages 263–278, Austin, TX, 2016. [USENIX]{} Association. Justin M Rao and David H Reiley. The economics of spam. , 26(3):87–110, 2012. Regner Sabillon, Jeimy Cano, Victor Cavaller, and Jordi Serra. Cybercrime and cybercriminals: a comprehensive study. , 4(6):165, 2016. Stefano Schiavoni, Federico Maggi, Lorenzo Cavallaro, and Stefano Zanero. Phoenix: Dga-based botnet tracking and intelligence. In Sven Dietrich, editor, [*Detection of Intrusions and Malware, and Vulnerability Assessment*]{}, pages 192–211, Cham, 2014. Springer International Publishing. A. K. Sood and S. Zeadally. A taxonomy of domain-generation algorithms. , 14(4):46–53, July 2016. Jan Spooren et al. Detection of algorithmically generated domain names used by botnets: A dual arms race. In [*Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing*]{}, SAC ’19, pages 1916–1923, New York, NY, USA, 2019. ACM. Emily Stark. The urlephant in the room. In [*[USENIX]{} Enigma*]{}, Burlingame, CA, 2019. [USENIX]{} Association. Janos Szurdi, Balazs Kocso, Gabor Cseh, Jonathan Spring, Mark Felegyhazi, and Chris Kanich. The long “taile” of typosquatting domain names. In [*USENIX Security Symposium*]{}, pages 191–206, 2014. Van Tong and Giang Nguyen. A method for detecting dga botnet based on semantic and cluster analysis. In [*Proceedings of the Seventh Symposium on Information and Communication Technology*]{}, pages 272–277. ACM, 2016. Duc Tran, Hieu Mac, Van Tong, Hai Anh Tran, and Linh Giang Nguyen. A lstm based framework for handling multiclass imbalance in dga botnet detection. , 275:2401–2413, 2018. R. Vinayakumar, K. P. Soman, Prabaharan Poornachandran, S. Akarsh, and Mohamed Elhoseny. , pages 161–192. Springer International Publishing, Cham, 2019. Congyuan Xu, Jizhong Shen, and Xin Du. Detection method of domain names generated by dgas based on semantic representation and deep neural network. , 85:77 – 88, 2019. S. Yadav, A.
K. K. Reddy, A. L. N. Reddy, and S. Ranjan. Detecting algorithmically generated domain-flux attacks with [DNS]{} traffic analysis. , 20(5):1663–1677, Oct 2012. Sandeep Yadav and A. L. Narasimha Reddy. Winning with [DNS]{} failures: Strategies for faster botnet detection. In [*Security and Privacy in Communication Networks*]{}, pages 446–459, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. Luhui Yang, Guangjie Liu, Jiangtao Zhai, Yuewei Dai, Zhaozhi Yan, Yuguang Zou, and Wenchao Huang. A novel detection method for word-based dga. In [*International Conference on Cloud Computing and Security*]{}, pages 472–483. Springer, 2018. Luhui Yang, Jiangtao Zhai, Weiwei Liu, Xiaopeng Ji, Huiwen Bai, Guangjie Liu, and Yuewei Dai. Detecting word-based algorithmically generated domains using semantic analysis. , 11(2):176, 2019. Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas, and Christos Papadopoulos. Botdigger: Detecting dga bots in a single network. 2016. G. Zhao, K. Xu, L. Xu, and B. Wu. Detecting [APT]{} malware infections based on malicious [DNS]{} and traffic analysis. , 3:1132–1142, 2015. Yury Zhauniarovich, Issa Khalil, Ting Yu, and Marc Dacier. A survey on malicious domains detection through dns data analysis. , 51(4):67:1–67:36, July 2018. Yonglin Zhou, Qing-Shan Li, Qidi Miao, and Kangbin Yim. -based botnet detection using [DNS]{} traffic. , 3:116–123, 2013. [^1]: <https://github.com/baderj/domain_generation_algorithms> <https://github.com/andrewaeva/DGA> [^2]: <https://github.com/ynvb/ExplosiveScripts> [^3]: https://www.icann.org/resources/pages/idn-2012-02-25-en [^4]: https://github.com/keredson/wordninja
--- author: - | **Andreas Fring and Thomas Frith**\ *Department of Mathematics, City, University of London,*\ *Northampton Square, London EC1V 0HB, UK*\ *E-mail: [email protected], [email protected]* bibliography: - 'bibliography.bib' title: '**Eternal life of entropy in non-Hermitian quantum systems**' --- ABSTRACT: We report a new effect in the behaviour of the Von Neumann entropy. For this we derive the framework for describing the Von Neumann entropy in non-Hermitian quantum systems and then apply it to a simple interacting $PT$ symmetric bosonic system. We show that our model is well defined even in the $PT$ broken regime with the introduction of a time-dependent metric, and that it displays three distinct behaviours relating to the $PT$ symmetry of the original time-independent Hamiltonian. When the symmetry is unbroken, the entropy undergoes rapid decay to zero (so-called “sudden death”) with a subsequent revival. At the exceptional point it decays asymptotically to zero, and when the symmetry is spontaneously broken it decays asymptotically to a finite constant value (“eternal life”). Introduction ============ The information contained within a quantum system is of great importance for various practical implementations of quantum mechanics, most importantly for the development of quantum computers, e.g. [@nielsen2002quantum; @bennett2000quantum; @raussendorf2001one; @steane1998quantum]. In order to understand the quantum information, one must find a way of measuring the entanglement of a state. Entanglement is a defining feature of quantum mechanics that distinguishes it from classical mechanics, and there has been much work in recent years on the evolution of entanglement with time, particularly the observation of the abrupt decay of entangled states, coined “sudden death” [@yu2009sudden; @yonacc2006sudden].
The decoherence of entanglement [@unruh1995maintaining; @palma1996quantum] is a problem for the operation of quantum computers and so understanding the mechanism behind this is an important contribution to the development of future machines. One particular measurement of entanglement and quantum information is the Von Neumann entropy. This is well-understood in the standard quantum mechanical setting, however to date there has only been a small amount of work done concerning the proper treatment of entropy in non-Hermitian, parity-time ($PT$) symmetric systems [@scolarici2006time; @croke2015pt; @kawabata2017information; @jones2010quantum]. These differ from open quantum systems as the energy eigenvalues are real or appear as complex conjugate pairs and do not describe decay. Non-Hermitian, parity-time ($PT$) symmetric quantum mechanics was first popularised when it was shown that non-Hermitian systems with unbroken $PT$ symmetry had real eigenvalues and unitary time evolution [@bender1998real; @bender2007making; @mostafazadeh2010pseudo; @moiseyev2011non; @scholtz1992quasi]. This is possible due to the existence of a non-trivial metric operator and much work has been done on constructing metrics for time-independent systems, e.g. [@znojil2006construction; @siegl2012metric; @castro2009spin; @jones2007scattering; @musumbu2006choice; @mostafazadeh2006metric]. More recently this has extended to time-dependent systems, e.g. [@mostafazadeh2007time; @fring2016non; @fring2016unitary; @maamache2017pseudo; @znojil2008time; @de2006time; @mostafazadeh2018energy]. Of particular interest are non-Hermitian systems with spontaneously broken $PT$ symmetry. These systems possess an exceptional point above which the $PT$ symmetry is broken. In this regime the system exhibits complex energy eigenvalues, becoming ill-defined and is therefore ordinarily discarded as non-physical and useless. 
However, it has been shown [@ExactSols; @FRING20172318; @HigherSpin; @fring2018tdm] that when a time-dependence is introduced into the central equations it is possible to make sense of the broken regime via a time-dependent metric. This allows for the definition of a Hilbert space and therefore a well-defined inner product. This will be central to our analysis in non-Hermitian systems as we will be showing how the evolution of entropy changes significantly as we vary the system parameters through the exceptional point. We will first set up the framework for analysing the Von Neumann entropy in non-Hermitian systems and then we will apply it to a simple model consisting of a bosonic system coupled to a bath. Entanglement Von Neumann Entropy {#Theory} ================================ In order to make calculations of the quantum entropy for non-Hermitian systems, we must first introduce some new quantities when compared to the Hermitian case. The density matrix for Hermitian systems is defined as an Hermitian operator describing the statistical ensemble of states $$\varrho_h=\sum_{i}p_i\ket{\phi_i}\bra{\phi_i},$$ where the subscript $h$ indicates it relates to an Hermitian system. $\ket{\phi_i}$ are general pure states, and $p_i$ is the probability that the system is in the pure state $\ket{\phi_i}$, with $0\leq p_i\leq 1$ and $\sum_ip_i=1$. Therefore $\varrho_h$ represents a mix of pure states (a mixed state). If the system is comprised of subsystems $A$ and $B$ one can define the reduced density operator of these subsystems as the partial trace over the opposing subsystem’s Hilbert space $$\varrho_{h,A}=Tr_B\left[\varrho_h\right]=\sum_{i}\bra{n_{i,B}}\varrho_h\ket{n_{i,B}},$$ $$\varrho_{h,B}=Tr_A\left[\varrho_h\right]=\sum_{i}\bra{n_{i,A}}\varrho_h\ket{n_{i,A}},$$ where $\ket{n_{i,A}}$ and $\ket{n_{i,B}}$ are the eigenstates of the subsystems $A$ and $B$, respectively. 
In this way one can isolate the density matrix for each subsystem and perform entropic analysis on them individually. We now want to find the relationship between the $\varrho_h$ and $\varrho_H$, where the subscript $H$ indicates a non-Hermitian system. The clearest starting point is the Von Neumann equation which governs the time evolution of the density matrix. For the Hermitian system it is $$\label{VNHermitian} i\partial_t\varrho_h=\left[h,\varrho_h\right],$$ where $h$ is the Hermitian Hamiltonian. We now wish to find the equivalent relation in the non-Hermitian setting. In order to do this we substitute the time-dependent Dyson equation [@FRING20172318; @HigherDims] $$\label{TDDE} h=\eta H\eta^{-1}+i\partial_t\eta \eta^{-1},$$ into the Von Neumann equation. $\eta$ is the Dyson operator and forms the metric $\rho=\eta^\dagger\eta$. After some manipulation, substituting equation (\[TDDE\]) into (\[VNHermitian\]) results in the following equation $$i\partial_t\varrho_H=\left[H,\varrho_H\right],$$ when assuming that the density matrix in the Hermitian system is related to that of the non-Hermitian system via a similarity transformation $$\label{SimTrans} \varrho_h=\eta\varrho_H\eta^{-1}.$$ Recalling that $\ket{\phi}=\eta\ket{\psi}$, this leads us to the definition of the density matrix $\varrho_H$ for non-Hermitian systems $$\varrho_H=\sum_{i}p_i\ket{\psi_i}\bra{\psi_i}\rho,$$ where $\ket{\psi_i}$ are general pure states for the non-Hermitian system. Notice that $\varrho_H$ is an Hermitian operator in the Hilbert space related to the metric $\bra{\cdot}\rho\ket{\cdot}$. These results match those from [@scolarici2006time]. Having defined the density matrix for non-Hermitian systems and found the relation to Hermitian systems we can now consider the entropy. 
For the total system, the Von Neumann entropy is defined as $$S_h=-tr\left[\varrho_h\ln\varrho_h\right].$$ This can also be expressed as a sum over the eigenvalues $\lambda_i$ of the density matrix $\varrho_h$, as it is an Hermitian operator $$S_h=-\sum_{i}\lambda_i\ln\lambda_i.$$ As the density matrices of the Hermitian and non-Hermitian systems are related by a similarity transformation, they share the same eigenvalues, therefore $$S_H=S_h.$$ It is important to recall, however, that this relation only holds when a well-defined Dyson operator $\eta$ exists. Without this, we are unable to form the relation (\[SimTrans\]). For closed systems, the Von Neumann entropy is constant in time. However, we wish to consider the entropy of particular subsystems, and for this we must consider the partial trace of the density matrix. In this setting the entropy for subsystem $A$ becomes $$S_{h,A}=-tr\left[\varrho_{h,A}\ln\varrho_{h,A}\right]=-\sum_{i}\lambda_{i,A}\ln\lambda_{i,A},$$ where once again the entropy of the Hermitian subsystem is equal to that of the non-Hermitian subsystem, $S_{h,A}=S_{H,A}$, provided $\eta$ exists. The entropy of a particular subsystem is not constrained to be constant, and we show that it exhibits some very interesting properties when evolved in time.
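As a minimal numerical illustration of these definitions (a two-qubit toy example of our own, not the bosonic model of the next section), the following sketch computes $S_{h,A}=-\sum_i\lambda_{i,A}\ln\lambda_{i,A}$ from the partial trace of a pure state:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy -sum_i lambda_i ln(lambda_i) of a density matrix."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop vanishing eigenvalues (0 ln 0 = 0)
    return float(-np.sum(lam * np.log(lam)))

def reduced_a(psi):
    """Partial trace over subsystem B of a two-qubit pure state |psi>."""
    m = psi.reshape(2, 2)           # amplitudes indexed as (a, b)
    return m @ m.conj().T           # varrho_A = Tr_B |psi><psi|

# maximally entangled state: subsystem entropy ln 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
# product state: subsystem entropy 0
prod = np.array([1.0, 0.0, 0.0, 0.0])
```

The Bell state gives the maximal subsystem entropy $\ln 2$, while the product state gives zero, consistent with the subsystem entropy measuring entanglement.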
$PT$ symmetry ------------- The Hamiltonian (\[NHHamiltonian\]) is $PT$ symmetric under the anti-linear transformation $$\begin{split} \begin{aligned} PT: \quad i&\rightarrow -i, \quad a\rightarrow-a, \quad a^\dagger\rightarrow-a^\dagger, \\ q_n&\rightarrow-q_n, \quad q_n^\dagger\rightarrow-q_n^\dagger, \end{aligned} \end{split}$$ as it commutes with the $PT$ operator for all values of $\nu$, $g$ and $\kappa$, $$\left[PT,H\right]=0.$$ The energy eigenvalues are $$E_{m,N}^\pm=m\left(\nu\pm\sqrt{N}\sqrt{g^2-\kappa^2}\right).$$ In order to ensure boundedness from below, the system must have $\nu>\sqrt{N}\sqrt{g^2-\kappa^2}$. Note that there is an exceptional point at $g=\kappa$, and when $\kappa>g$ the system is in the broken $PT$ regime. This is clear when studying the first excited state ($m=1$) expanded in terms of creation operators acting on a tensor product of Fock states. The general state consists of one Fock state for the system of $a$ and $a^\dagger$ bosonic operators and $N$ Fock states for the bath of $q_i$ and $q_i^\dagger$ bosonic operators $$\ket{\phi}=\ket{n_a}\otimes\ket{n_{q_1}}\otimes\ket{n_{q_2}}\dots=\ket{n_a}\bigotimes_{i=1}^{N}\ket{n_{q_i}}.$$ When considering the first excited state, we will be dealing with very few non-zero states, and as such we can simplify the notation. If all the states in the $q$ bath are in the ground state, we will represent this with $\ket{\boldmath{0}_q}$. Similarly, if the $i$th state in the $q$ bath is in the first excited state with the rest in the ground state, we will represent this with $\ket{\boldmath{1}_{i}}$ $$\begin{split} \begin{aligned} \ket{\boldmath{0}_q}&=\bigotimes_{i=1}^{N}\ket{0_{q_i}}, \\ \ket{\boldmath{1}_{i}}&=\left[\bigotimes_{j=1}^{i-1}\ket{0_{q_j}}\right]\otimes\ket{1_{q_i}}\otimes\left[\bigotimes_{k=i+1}^{N}\ket{0_{q_k}}\right].
\end{aligned} \end{split}$$ We can now write down the first excited state, $$\begin{aligned} \begin{split} \ket{\psi_{1,N}^\pm}=&\sqrt{\frac{g+\kappa}{2g}}\ket{1_a}\otimes\ket{\boldmath{0}_q}\pm\sqrt{\frac{g-\kappa}{2gN}}\ket{0_a}\otimes\sum_{i=1}^{N}\ket{ 1_{i}}\\ =&\sqrt{\frac{g+\kappa}{2g}}\ket{1_a\boldmath{0}_q}\pm\sqrt{\frac{g-\kappa}{2gN}}\sum_{i=1}^{N}\ket{0_a 1_{i}}\\ =&\sqrt{\frac{g+\kappa}{2g}}a^\dagger\ket{0_a\boldmath{0}_q}\pm\sqrt{\frac{g-\kappa}{2gN}}\sum_{i=1}^{N}q_i^\dagger\ket{0_a \boldmath{0}_{q}}. \end{split} \end{aligned}$$ In order for the $PT$ symmetry to remain unbroken, the wavefunction must also remain unchanged up to a phase factor when acted on by the $PT$ operator $$PT\ket{\psi_{1,N}^\pm}= e^{i\phi}\ket{\psi_{1,N}^\pm}.$$ However, the wavefunctions are only eigenfunctions of the $PT$ operator when $\kappa<g$ $$PT\ket{\psi_{1,N}^\pm}=-\ket{\psi_{1,N}^\pm}.$$ When $\kappa>g$, the wavefunctions are no longer eigenfunctions of the $PT$ operator, $$PT\ket{\psi_{1,N}^\pm}\neq e^{i\phi}\ket{\psi_{1,N}^\pm}.$$ Therefore we need to employ time-dependent analysis in order to make sense of the broken regime. To do this we must first solve the time-dependent Dyson equation. Solving the time-dependent Dyson equation ----------------------------------------- We wish to find the time-dependent metric $\rho\left(t\right)$ that allows us to perform entropic analysis on our model (\[NHHamiltonian\]). In order to do this we must find the Dyson operator $\eta\left(t\right)$ and the equivalent time-dependent Hermitian system $h\left(t\right)$.
The model (\[NHHamiltonian\]) is in fact part of a larger family of Hamiltonians belonging to the closed algebra with Hermitian generators: $$\begin{aligned} \begin{split} N_A&=a^\dagger a, \qquad N_Q=\sum_{n=1}^{N}q^\dagger_nq_n, \\ N_{AQ}&=N_A-\frac{1}{N}N_Q-\frac{1}{N}\sum_{n\neq m}q^\dagger_nq_m, \\ A_x&=\frac{1}{\sqrt{N}}\left(a^\dagger\sum_{n=1}^{N}q_n+a\sum_{n=1}^{N}q^\dagger_n\right), \\ A_y&=\frac{i}{\sqrt{N}}\left(a^\dagger\sum_{n=1}^{N}q_n-a\sum_{n=1}^{N}q^\dagger_n\right). \end{split} \end{aligned}$$ The commutation relations are $$\begin{aligned} \begin{aligned} \begin{split} \left[N_A,N_Q\right]&=0, \quad\qquad \left[N_A,N_{AQ}\right]=0, \\ \left[N_A,A_x\right]&=-iA_y, \quad\quad \left[N_A,A_y\right]=iA_x,\\ \left[N_Q,A_x\right]&=iA_y, \quad\quad\;\;\; \left[N_Q,A_y\right]=-iA_x, \\ \left[N_{AQ},A_x\right]&=-2iA_y, \quad \left[N_{AQ},A_y\right]=2iA_x. \end{split} \end{aligned}\end{aligned}$$ In terms of this algebra, our original Hamiltonian (\[NHHamiltonian\]) can be written as $$H=\nu N_A+\nu N_Q+ \sqrt{N}gA_x-i\sqrt{N}\kappa A_y.$$ We are now in a position to begin solving the time-dependent Dyson equation (\[TDDE\]). For this we make the ansatz $$\label{eta} \eta\left(t\right)=e^{\beta\left(t\right)A_y}e^{\alpha\left(t\right)N_{AQ}},$$ and use the Baker-Campbell-Hausdorff formula to expand the Dyson equation (\[TDDE\]) in terms of the generators. In order to make the resulting Hamiltonian Hermitian, we must solve two coupled differential equations to eliminate the non-Hermitian terms.
$$\dot{\alpha}=-\tanh\left(2\beta\right)\left[\sqrt{N}g\cosh\left(2\alpha\right)+\sqrt{N}\kappa\sinh\left(2\alpha\right)\right],\label{alphadot}$$ $$\hspace{-2.3cm}\dot{\beta}=\sqrt{N}\kappa\cosh\left(2\alpha\right)+\sqrt{N}g\sinh\left(2\alpha\right).\label{betadot}$$ Equation (\[betadot\]) can be solved for $\alpha$, $$\label{alpha} \tanh \left(2\alpha\right)=\frac{-N g\kappa+\dot{\beta}\sqrt{\dot{\beta}^2+N\left( g^2-\kappa^2\right)}}{Ng^2+\dot{\beta}^2}.$$ In principle this could lead to a restriction to the term on the RHS of equation (\[alpha\]) as $-1<\tanh\left(2\alpha\right)<1$. However as we will see, this restriction is obeyed with the final solutions for $\alpha$ and $\beta$. Substituting (\[alpha\]) into equation (\[alphadot\]) gives $$\ddot{\beta}+2\tanh\left(2\beta\right)\left[Ng^2-N\kappa^2+\dot{\beta}^2\right]=0.$$ Now making the substitution $\sinh\left(2\beta\right)=\sigma$, this reverts to an harmonic oscillator equation $$\ddot{\sigma}+4N\left(g^2-\kappa^2\right)\sigma=0,$$ which is solved with the function $$\sigma=\frac{c_1}{\sqrt{g^2-\kappa^2}}\sin\left(2\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right),$$ for all values of $\kappa$, where $c_1$ and $c_2$ are constants of integration. We can now write down expressions for $\alpha$ and $\beta$ $$\hspace{-5.6cm}\tanh\left(2\alpha\right)=\frac{\zeta^2-1}{\zeta^2+1},$$ $$\sinh\left(2\beta\right)=\frac{c_1}{\sqrt{g^2-\kappa^2}}\sin\left(2\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right),$$ where $\zeta$ is of the form $$\footnotesize \begin{aligned} \begin{split} \zeta=\sqrt{2}\sqrt{\frac{g-\kappa}{g+\kappa}}\left[ \frac{\sqrt{c_1^2+g^2-\kappa^2}+c_1\cos\left(2\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right)}{\sqrt{c_1^2+2\left(g^2-\kappa^2\right)-c_1^2\cos\left(4\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right)}}\right]. 
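The quoted solution for $\sigma$ can be checked symbolically. The following sketch (our own verification, assuming sympy and taking $g>\kappa$ so that the square roots are real, i.e. the unbroken regime) confirms that $\sigma$ satisfies the harmonic oscillator equation above:

```python
import sympy as sp

# symbols of the model; positivity is assumed for the check
t, c1, c2, g, kappa, N = sp.symbols('t c1 c2 g kappa N', positive=True)

# the solution quoted in the text
omega = 2 * sp.sqrt(N) * sp.sqrt(g**2 - kappa**2)
sigma = c1 / sp.sqrt(g**2 - kappa**2) * sp.sin(omega * (t + c2))

# residual of the harmonic oscillator equation sigma'' + 4N(g^2-k^2) sigma = 0
residual = sp.diff(sigma, t, 2) + 4 * N * (g**2 - kappa**2) * sigma
assert sp.simplify(residual) == 0
```

Since $\ddot{\sigma}=-\omega^2\sigma$ with $\omega^2=4N\left(g^2-\kappa^2\right)$, the residual vanishes identically, as the assertion confirms.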
\end{split} \end{aligned}$$ Therefore we have a well-defined solution for $\eta\left(t\right)$ from our original ansatz (\[eta\]), which results in the following time-dependent Hermitian Hamiltonian $$\label{HHamiltonian} h\left(t\right)=\nu N_A+\nu N_Q+\mu\left(t\right)A_x,$$ where $$\label{mu} \mu\left(t\right)=\frac{\left(g^2-\kappa^2\right)\sqrt{N}\sqrt{c_1^2+g^2-\kappa^2}}{c_1^2+2\left(g^2-\kappa^2\right)-c_1^2\cos\left(4\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right)}.$$ This is real provided $|\frac{c_1}{\sqrt{g^2-\kappa^2}}|>1$. The general time-dependent first excited state is $$\label{1State} \begin{aligned} \begin{split} \ket{\phi\left(t\right)}&=e^{-i\nu t}\left(A\sin\mu_I\left(t\right)+B\cos\mu_I\left(t\right)\right)\ket{1_a\boldmath{0}_q}\\ +&\frac{e^{-i\nu t}}{\sqrt{N}}\left(A\cos\mu_I\left(t\right)-B\sin\mu_I\left(t\right)\right)\sum_{i=1}^{N}\ket{0_a \boldmath{1}_{i}}, \end{split} \end{aligned}$$ with $A^2+B^2=1$ and $$\small \begin{aligned} \begin{split} \mu_I&\left(t\right)=\int^t\mu\left(s\right)ds=\\ \frac{1}{2}&\arctan\left(\frac{\sqrt{c_1^2+g^2-\kappa^2}\tan\left(2\sqrt{N}\sqrt{g^2-\kappa^2}\left(t+c_2\right)\right)}{\sqrt{g^2-\kappa^2}}\right). \end{split} \end{aligned}$$ Now we have a full solution for $\eta\left(t\right)$ and therefore for $\rho\left(t\right)=\eta\left(t\right)^\dagger\eta\left(t\right)$. This allows us to calculate the entropy for our non-Hermitian system (\[NHHamiltonian\]). The easiest route to take is to work with the resulting Hermitian system (\[HHamiltonian\]), as it was shown in section \[Theory\] that the entropy in both systems is equivalent when $\eta\left(t\right)$ is well-defined. It is important to note that if $\eta\left(t\right)$ ever becomes ill-defined, then our analysis of the Hermitian system does not correspond to the original non-Hermitian Hamiltonian.
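As a numerical sanity check, the closed-form solution for $\beta$ can be compared against a direct integration of its nonlinear equation of motion. The sketch below uses illustrative parameter values (they are not tied to any particular figure):

```python
import math

# Cross-check of the closed-form solution: integrate
#   beta'' + 2 tanh(2 beta) [ N (g^2 - kappa^2) + beta'^2 ] = 0
# directly and compare with sinh(2 beta) = (c1/Omega) sin(2 sqrt(N) Omega t),
# where Omega = sqrt(g^2 - kappa^2) and c2 = 0.  Parameters are illustrative.
g, kappa, N, c1 = 0.7, 0.3, 2, 1.0
Omega = math.sqrt(g**2 - kappa**2)

def sinh2beta_closed(t):
    return c1 / Omega * math.sin(2.0 * math.sqrt(N) * Omega * t)

def deriv(b, bd):
    return bd, -2.0 * math.tanh(2.0 * b) * (N * Omega**2 + bd**2)

# Initial conditions implied by the closed form: beta(0) = 0, beta'(0) = c1 sqrt(N)
b, bd = 0.0, c1 * math.sqrt(N)
steps = 10000
dt = 1.0 / steps
for _ in range(steps):                     # fourth-order Runge-Kutta to t = 1
    k1 = deriv(b, bd)
    k2 = deriv(b + 0.5 * dt * k1[0], bd + 0.5 * dt * k1[1])
    k3 = deriv(b + 0.5 * dt * k2[0], bd + 0.5 * dt * k2[1])
    k4 = deriv(b + dt * k3[0], bd + dt * k3[1])
    b += dt / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
    bd += dt / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])

print(math.sinh(2.0 * b), sinh2beta_closed(1.0))  # the two values coincide
```
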
Three types of entropy evolution ================================ We now calculate the entropy of the system and show how varying the parameters $N$, $g$ and $\kappa$ affects its evolution with time. We prepare our system in an entangled first excited state (\[1State\]) at time $t=0$; this corresponds to a single excitation shared coherently between the qubit and the bath: $$\ket{\phi\left(0\right)}=\sin\gamma\ket{{1_a \boldmath{0}_q}}+\frac{\cos\gamma}{\sqrt{N}}\sum_{i=1}^N\ket{{0_a\boldmath{1}_{i}}},$$ for which we choose $A=\sin\gamma$, $B=\cos\gamma$ and $c_2=0$. Therefore the general state at time $t$ is $$\begin{aligned} \begin{split} \ket{\phi\left(t\right)}&=e^{-i\nu t}\left(\sin\gamma\sin\mu_I\left(t\right)+\cos\gamma\cos\mu_I\left(t\right)\right)\ket{1_a\boldmath{0}_q}\\ +&\frac{e^{-i\nu t}}{\sqrt{N}}\left(\sin\gamma\cos\mu_I\left(t\right)-\cos\gamma\sin\mu_I\left(t\right)\right)\sum_{i=1}^{N}\ket{0_a \boldmath{1}_{i}}. \end{split} \end{aligned}$$ Now we form the reduced density matrix for the system (a) by taking a partial trace over the external bosonic bath (q), $$\begin{split} \begin{aligned} \varrho_a\left(t\right)&=Tr_q\left[\varrho_h\left(t\right)\right]=\\ &\left(\begin{array}{cc} \left(\sin\gamma\sin\mu_I\left(t\right)+\cos\gamma\cos\mu_I\left(t\right)\right)^2 & 0\\ 0 & \left(\sin\gamma\cos\mu_I\left(t\right)-\cos\gamma\sin\mu_I\left(t\right)\right)^2 \end{array} \right). \end{aligned} \end{split}$$ We can now calculate the Von Neumann entropy of the system using this reduced density matrix.
First we read off the eigenvalues of $\varrho_a\left(t\right)$, as it is diagonal, $$\begin{split} \begin{aligned} \lambda_1\left(t\right)&=\left(\sin\gamma\sin\mu_I\left(t\right)+\cos\gamma\cos\mu_I\left(t\right)\right)^2,\\ \lambda_2\left(t\right)&=\left(\sin\gamma\cos\mu_I\left(t\right)-\cos\gamma\sin\mu_I\left(t\right)\right)^2,\\ \end{aligned} \end{split}$$ and substitute these into the expression for the entropy $$S_{h,a}\left(t\right)=S_{H,a}\left(t\right)=-\lambda_1\left(t\right)\log\left[\lambda_1\left(t\right)\right]-\lambda_2\left(t\right)\log\left[\lambda_2\left(t\right)\right].$$ With this expression we are free to choose the initial state of our system through the value of $\gamma$. If the initial state of our system is a maximally entangled state, with $\gamma=\pi/4$, we can observe how the entanglement entropy evolves with time. This is most applicable to quantum computing, as in that context one would like to preserve the entangled state. We will now vary the parameters $N$, $g$ and $\kappa$ to see how they affect the evolution of entropy with time. Of particular interest is the exceptional point $g=\kappa$, where the non-Hermitian system enters the broken $PT$ regime in the time-independent setting. It is in this region that the evolution we see differs from the standard evolution of entropy in Hermitian quantum mechanics. Figure (\[fig:unbroken\]) shows how the entropy evolves when $g>\kappa$. This is equivalent to the unbroken $PT$ regime of the non-Hermitian model. In this setting the entropy experiences so-called “sudden death”, similar to [@yonacc2006sudden]. The entropy rapidly decays from a maximum value to zero, with a subsequent revival after the initial death. When the number of oscillators in the bath increases, the moment of vanishing entropy occurs at an earlier time.
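For illustration, the entropy evolution for a maximally entangled initial state ($\gamma=\pi/4$, $c_2=0$) can be evaluated directly from the expressions above. The sketch below uses the natural logarithm and illustrative parameter values:

```python
import math

# Entropy evolution for a maximally entangled initial state (gamma = pi/4,
# c2 = 0); parameters g = 0.7, kappa = 0.3, N = 1, c1 = 1 are illustrative.
g, kappa, N, c1 = 0.7, 0.3, 1, 1.0
gamma = math.pi / 4.0
Omega = math.sqrt(g**2 - kappa**2)

def mu_I(t):
    # The principal branch of arctan is sufficient here: at each branch jump
    # the eigenvalues lambda_1, lambda_2 merely swap, so the entropy stays
    # continuous.
    return 0.5 * math.atan(math.sqrt(c1**2 + g**2 - kappa**2)
                           * math.tan(2.0 * math.sqrt(N) * Omega * t) / Omega)

def entropy(t):
    mu = mu_I(t)
    lam1 = (math.sin(gamma) * math.sin(mu) + math.cos(gamma) * math.cos(mu))**2
    lam2 = (math.sin(gamma) * math.cos(mu) - math.cos(gamma) * math.sin(mu))**2
    return -sum(l * math.log(l) for l in (lam1, lam2) if l > 0.0)

ts = [0.001 * i for i in range(3000)]
S = [entropy(t) for t in ts]
print(S[0], min(S))  # starts at log(2), then collapses to (almost) zero
```

The maximal value $\log 2$ at $t=0$ and the subsequent collapse of the entropy reproduce the "sudden death" behaviour described above.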
![Von Neumann entropy as a function of time and varied bath size, with $c_1=1$, $g=0.7$, $\kappa=0.3$[]{data-label="fig:unbroken"}](unbroken) Figure (\[fig:exceptional\]) depicts the entropy evolution when $\kappa=g$. This is equivalent to the exceptional point of the non-Hermitian model. In this specific setting, the system decays asymptotically from maximal entropy to zero. The half-life of this decay decreases with the number of oscillators in the bath. ![Von Neumann entropy as a function of time and varied bath size, with $c_1=1$, $g=\kappa$[]{data-label="fig:exceptional"}](exceptional) Figure (\[fig:broken\]) now shows the results of entropy evolution when $\kappa>g$. This is the spontaneously broken $PT$ regime of the original time-independent non-Hermitian model. In this case the system once again decays asymptotically, but in this instance the decay is to a non-zero value of entropy. In this way, the entropy is preserved eternally. Once again the half-life decreases with increasing $N$. The finite value that is asymptotically approached, independently of $N$, is $$\label{min_entropy} \begin{split} \begin{aligned} S_{t\rightarrow\infty}=&-\frac{1}{2}(1+\xi)\log\left[\frac{1}{2}(1+\xi)\right]\\ -&\frac{1}{2}(1-\xi)\log\left[\frac{1}{2}(1-\xi)\right], \end{aligned} \end{split}$$ where $$\xi=\frac{\sqrt{c_1^2+g^2-\kappa^2}}{c_1}.$$ We see the condition for the asymptote to exist is $|\frac{c_1}{\sqrt{g^2-\kappa^2}}|>1$, which matches the reality condition for $\mu$ in equation (\[mu\]). ![Von Neumann entropy as a function of time and varied bath size, with $c_1=1$, $g=0.3$, $\kappa=0.7$. The asymptote is at $S_{t\rightarrow\infty}\approx0.3521$[]{data-label="fig:broken"}](broken) We have found three significantly different phenomena for $g>\kappa$, $g=\kappa$ and $g<\kappa$. Specifically, we see a change from rapid decay of entropy to zero, to asymptotic decay to zero, through to asymptotic decay to a non-zero entropy.
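The asymptotic value quoted in the caption of Fig. \[fig:broken\] can be checked directly from the closed-form expression (natural logarithm):

```python
import math

# Asymptotic entropy in the spontaneously broken regime, evaluated with the
# parameters of the corresponding figure: c1 = 1, g = 0.3, kappa = 0.7.
c1, g, kappa = 1.0, 0.3, 0.7
xi = math.sqrt(c1**2 + g**2 - kappa**2) / c1
S_inf = (-0.5 * (1.0 + xi) * math.log(0.5 * (1.0 + xi))
         - 0.5 * (1.0 - xi) * math.log(0.5 * (1.0 - xi)))
print(round(S_inf, 4))  # 0.3521
```

Note that the existence condition $|c_1/\sqrt{g^2-\kappa^2}|>1$ is satisfied here, since $1/\sqrt{0.4}\approx 1.58$.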
This can be interpreted as crossing the $PT$ exceptional point into the spontaneously broken regime of the original time-independent non-Hermitian system. However, with the existence of a time-dependent metric, the broken regime is no longer truly broken, as we are able to provide a well-defined interpretation. Conclusion ========== We derived a framework for the Von Neumann entropy in non-Hermitian quantum systems and applied it to a simple system-bath coupled bosonic model. In order to analyse the model we were required to find a time-dependent metric, and we chose to solve the time-dependent Dyson equation for this. This method also gave us the equivalent Hermitian system, which we worked with to perform the analysis, as the framework showed the entropy to be equivalent in both systems. The $PT$ symmetry of the non-Hermitian system played an important role in the characterisation of the regimes of different qualitative behaviour in the evolution of the Von Neumann entropy. We found three different types of behaviour depending on whether we are in the $PT$ unbroken regime, at the exceptional point or in the spontaneously broken $PT$ regime. In the unbroken regime, the entropy underwent rapid decay to zero. At subsequent times it was revived and continued this oscillatory behaviour indefinitely. At the exceptional point, the entropy decayed asymptotically to zero, and in the spontaneously broken regime, the entropy decayed asymptotically from a maximum to a finite minimum (\[min\_entropy\]) that remained constant in time. Our findings may have implications for maintaining entanglement in quantum computers when the computer is operated in the spontaneously broken $PT$ regime. The challenge here is to construct a system in a laboratory that mimics the non-Hermitian system presented here.
However, non-Hermitian systems have been realised in quantum optical experiments [@guo2009observation; @ruter2010observation], and so it is certainly possible that the same could be carried over to quantum computing.\ \ **ACKNOWLEDGEMENTS**: TF is supported by a City, University of London Research Fellowship.
--- abstract: 'The optical transmission and reflection between two metalized optical fiber tips are studied in the optical near-field and far-field domains. Besides aluminum-coated tips for near-field scanning optical microscopy (NSOM), specifically developed gold-coated fiber tips cut by focused ion beam (FIB) are investigated. Transverse transmission maps of sub-wavelength width clearly evidence optical near-field coupling between the tips for short tip distances and become essentially Gaussian-shaped for larger distances in the far-field regime. Moreover, concentric reflection fringes observed for NSOM-type tips illustrate the influence of the receiving fiber tip on the emission pattern of the source tip.' author: - 'Jean-Baptiste Decombe' - 'Jean-François Bryche' - 'Jean-François Motte' - Joël Chevrier - Serge Huant - Jochen Fick title: 'Transmission and reflection characteristics of metal-coated optical fiber tip pairs' --- Introduction ============ Optical fiber conical tips with a sub-wavelength clear aperture at the apex are common tools in micro- and nano-optics. They are used for optical trapping of micro-particles in single-fiber-tip [@SSP+12; @ETH09; @LGY+06] and counter-propagating two-fiber-tip [@VOO09; @LS95] configurations. Fiber tips are also used in scanning optical microscopy: bare fiber tips can be applied for imaging of neurons [@DSV+11], whereas metalized tips are the key element of NSOM [@BTH+91; @OK95]. The optical near field of metalized fiber tips was probed using fluorescent nanospheres, and an analytical model for the emitted electric field was developed [@DNH+04]. The near-field intensity shows two lobes, whereas the far-field emission is polarization dependent and of excellent Gaussian shape with large emission angles exceeding 90$^\circ$ in the P polarization [@OK95; @DWH02]. The emission angle can be directly linked to the tip apex size. These features have been essentially recovered by FDTD calculations [@AS07].
To some extent, a sub-wavelength aperture at the apex of a conical tip mimics a single small diffraction hole in a flat metallic screen, a problem that has been extensively studied [@YCdL+12; @KKK+11] since the pioneering work of Bethe [@Bet44]. Recently the power propagation in apertureless metal-coated optical fiber tips was investigated theoretically [@BFB+12], and tip-to-tip scans of such fiber tips were studied in view of their applications to NSOM lithography [@KPS+13]. Two open-aperture metalized fiber tips facing each other at distances of a few hundred nanometers are a promising approach for optical nano-tweezers. They combine the advantages of nano-traps based on a plasmonic cavity [@PG12; @TKS13] with the flexibility of fiber-based optical tweezers [@LGY+06]. This makes it possible to realize a genuine plasmonic tweezers, allowing not only nanoparticle trapping but also their manipulation at the nanoscale, with the rewarding prospect of possible operation in air, not only in a liquid. In this paper we present a study of the optical transmission and reflection of metalized fiber tip pairs with tip distances ranging from tens of nanometers up to tens of microns, thus covering the optical near-field and far-field ranges. More specifically, the transition between these two regimes is studied. Such tip pairs are aimed at being used in future near-field optical tweezers. Experimental ============ Two different types of metalized fiber tips are studied: NSOM tips and FIB-cut tips. The first type consists of open-aperture, aluminum-coated fiber tips currently used for NSOM. The second type consists of gold-coated fiber tips, where the aperture is obtained by FIB-cutting of the entirely metalized fiber tips. The main difference between these two tip types lies in the shape of the tip apex. The use of two different metals does not influence the results presented in this paper.
The fabrication of both tip types is based on chemical wet-etching of single mode, pure silica core fibers (Nufern S630-HP) [@CSM+06]. The obtained fiber tip angle is about $15^{\circ}$. In the case of NSOM tips, a magnesium fluoride (MgF$_2$) film is first deposited on the as-etched tips in order to control the final apex diameter. Then an opaque aluminum film of about 100 nm thickness is deposited by thermal evaporation after the deposition of a thin nickel-chromium adhesion layer. This technique allows apex diameters on the order of 200 - 300 nm to be obtained without any further step such as FIB-cutting (Fig. \[fig.SEM\].a). ![Scanning electron micrographs of the two fiber tip types used: (a) typical NSOM tip, and (b) the FIB-cut tips used in the experiments.\[fig.SEM\]](fig_SEM_Al.eps "fig:"){width="5.cm"}\ ![Scanning electron micrographs of the two fiber tip types used: (a) typical NSOM tip, and (b) the FIB-cut tips used in the experiments.\[fig.SEM\]](fig_SEM_Au.eps "fig:"){width="5.cm"} Cutting the fiber tips using FIB leads to smooth, high quality end-faces (Fig. \[fig.SEM\].b). This point is of paramount importance for experiments with tip distances in the nanometer regime, thus justifying the more complex fabrication process compared to the NSOM tips. At the same time we start using gold for its capacity to support low-loss surface plasmons. Aiming at the smallest achievable aperture sizes, we omit the thick MgF$_2$ layer. The 200 nm gold layer is deposited directly on the etched fiber tips, using only a 10 nm titanium adhesion layer. The fiber tip end is then cut by FIB to obtain sub-micrometer tip apertures. With this technique the obtained tips are of elliptical or nearly circular shape, thus allowing shape effects to be studied. The transmission and reflection of the optical fibers are measured on a dedicated set-up that allows scanning of the relative fiber position with nanometer accuracy (Fig. \[fig.setup\]).
One fiber is mounted on a set of $xyz$ piezoelectric translation stages (PI P620) with sub-nanometer resolution and 50 $\mu$m range. The second fiber is mounted on three perpendicular inertial piezoelectric translation stages (Mechonics MS 30) with $\approx 30$ nm step size and up to 2.5 cm range. A microscope with a long working distance objective (Mitutoyo M 50x) coupled to a CMOS camera allows visualization of the fiber tips with micrometer resolution. ![Scheme of the experimental set-up.[]{data-label="fig.setup"}](fig_scheme.eps "fig:"){width="8.cm"}\ For technical reasons we use two different diode lasers emitting at 808 nm and 830 nm to characterize the Al- and Au-coated fiber tips, respectively. The small wavelength difference has no influence on the results, as the wavelengths are far away from metal absorption bands or fiber cut-off wavelengths. A 50/50 waveguide coupler is used to allow simultaneous reflection and transmission measurements by means of two amplified Si-photodiodes (New Focus 2001-FC). A closed-loop control is implemented to stabilize the relative fiber tip distance at the nanometer scale. In fact, thermal drifts are a serious issue in an uncontrolled room-temperature environment. The axial fiber tip distance is the most critical one, as the transverse position can be calibrated by using the transmission maximum position. The feedback signal is obtained from a Fabry-Perot cavity formed by a cleaved optical fiber and a metallic mirror, respectively mounted on the two fiber tip holders (Fig. \[fig.setup\]). A fiber-coupled white lamp source and a mini-spectrometer (Avantes) allow measurement of the reflection spectra of the Fabry-Perot cavity. Its Fourier transform directly gives the absolute cavity size. Using the closed-loop control, the relative fiber tip distance can be controlled with a precision better than $\pm$ 5 nm over more than five hours. The absolute tip distance is, however, more difficult to assess.
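The determination of the absolute cavity size from the Fourier transform of the reflection spectrum can be sketched as follows. The fringe period in wavenumber is $1/(2L)$ for a cavity of length $L$, so a discrete Fourier transform of the spectrum sampled on a uniform wavenumber grid peaks at a bin proportional to $L$. All numbers below are illustrative (the actual set-up uses the measured white-light spectrum):

```python
import math

# Sketch: recover the cavity size from a synthetic Fabry-Perot reflection
# spectrum R(sigma) ~ cos(4 pi L sigma), sampled on a uniform wavenumber grid.
L_true = 20.0                              # cavity length (micrometers)
sigma1, sigma2 = 1.0 / 0.9, 1.0 / 0.5      # wavenumbers (1/um) for 900-500 nm
n = 256
dsig = (sigma2 - sigma1) / n
spectrum = [math.cos(4.0 * math.pi * L_true * (sigma1 + i * dsig))
            for i in range(n)]

mean = sum(spectrum) / n
best_m, best_mag = 0, 0.0
for m in range(1, n // 2):                 # plain O(n^2) DFT magnitude
    re = sum((spectrum[i] - mean) * math.cos(2.0 * math.pi * m * i / n)
             for i in range(n))
    im = sum((spectrum[i] - mean) * math.sin(2.0 * math.pi * m * i / n)
             for i in range(n))
    mag = re * re + im * im
    if mag > best_mag:
        best_m, best_mag = m, mag

# m cycles over the scanned wavenumber range delta_sigma, and the fringe
# spacing is 1/(2L), so L = m / (2 delta_sigma), up to one DFT bin.
L_est = best_m / (2.0 * (sigma2 - sigma1))
print(L_est)  # within one DFT bin (about 0.56 um here) of L_true
```
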
The only accurate way is to perform transverse scans with decreasing distance until the tips are touching. This contact can be clearly observed on the microscope image or by the appearance of streaks in the transmission intensity plot. However, the fiber tip fragility may result in severe damage. After observation of a number of contact events, we estimate that the absolute distance can be determined from the microscope images with 50-100 nm precision. As a consequence, the minimal distance of the transverse scans used in Section \[chap\_rd\] to determine $w_0$ is in the same range. ![Transverse transmission (left) and reflection (right) intensity maps of a NSOM tip pair at two distances $d$. The maximal intensities are indicated for an injected power of P$_{in}= 300$ $\mu$W.[]{data-label="fig.map"}](fig_tra_0400.eps "fig:"){width="3.5cm" height="3.5cm"} ![Transverse transmission (left) and reflection (right) intensity maps of a NSOM tip pair at two distances $d$. The maximal intensities are indicated for an injected power of P$_{in}= 300$ $\mu$W.[]{data-label="fig.map"}](fig_ref_0400.eps "fig:"){width="3.5cm" height="3.5cm"}\ ![Transverse transmission (left) and reflection (right) intensity maps of a NSOM tip pair at two distances $d$. The maximal intensities are indicated for an injected power of P$_{in}= 300$ $\mu$W.[]{data-label="fig.map"}](fig_tra_0050.eps "fig:"){width="3.5cm" height="3.5cm"} ![Transverse transmission (left) and reflection (right) intensity maps of a NSOM tip pair at two distances $d$. The maximal intensities are indicated for an injected power of P$_{in}= 300$ $\mu$W.[]{data-label="fig.map"}](fig_ref_0050.eps "fig:"){width="3.5cm" height="3.5cm"} Control software in the LabView environment allows control of the entire set-up and recording of the intensity maps. Transverse transmission and reflection maps for constant distance $d$ are recorded by scanning one fiber in a plane perpendicular to the fiber tips’ orientation.
Typical scans contain $75\times 75$ data points. The intensity is averaged over 500 points with a read-out frequency of 10 kHz and a photodiode internal amplification of $50-70$ dB. The obtained intensity plots are fitted to the Gaussian intensity beam profile function $$I(r)=I_0\cdot e^{\frac{-2(r-r_0)^2}{w^2}}$$ with $I_0$ the intensity amplitude, $r_0$ the beam position, and $w$ the beam waist. The recorded transmission maps correspond to the convolution of the emission and the capture functions of the two fiber tips. Thus, in order to deduce the emission spot width of only one fiber, the measured transmission width has to be corrected. The corresponding deconvolution of two Gaussian functions can be done with: $$\tilde w=\sqrt{w^2-w_0^2}\label{eq.cor}$$ with $\tilde w$ and $w$ the corrected and measured waists, respectively. $w_{0}$ is the optical aperture size of the second fiber tip. For two identical tips $w_{0}$ can be obtained by $w_0=w^{min}/\sqrt{2}$, $w^{min}$ being the measured spot size at the smallest tip distances. Results and discussion \[chap\_rd\] =================================== NSOM fiber tips --------------- Transmission and reflection maps of NSOM fiber tips are recorded for tip distances up to 30 $\mu$m. Typical results are shown in Fig. \[fig.map\]. The transmission spots are of slightly elliptical shape, in agreement with the imperfect circular symmetry of the fiber tips. The minimal measured waist is $w^{min}=375$ nm, corresponding to a corrected waist of $\tilde w^{min}=265$ nm, about one-third of the wavelength $\lambda/3=269$ nm. The apex sizes of the two tips used, determined by scanning electron microscopy (SEM), are 276 nm and 246 nm, respectively. The size of the transmission spot is thus clearly determined by the tip apex and not by the actual wavelength.
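For illustration, the deconvolution of Eq. \[eq.cor\] applied to the quoted NSOM numbers can be sketched as:

```python
import math

# Gaussian deconvolution of the measured transmission waist (Eq. eq.cor),
# using the NSOM value quoted in the text: minimal measured waist 375 nm.
w_min = 375.0                        # nm, smallest measured transmission waist

# For two identical tips, the receiving aperture follows from w0 = w_min/sqrt(2)
w0 = w_min / math.sqrt(2.0)

def corrected_waist(w, w0):
    # w_tilde = sqrt(w^2 - w0^2): deconvolution of two Gaussian profiles
    return math.sqrt(w**2 - w0**2)

print(round(w0), round(corrected_waist(w_min, w0)))  # both about 265 nm
```

Note that for identical tips the corrected minimal waist equals $w_0$ itself, consistent with the quoted 265 nm.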
![Position of the interference rings as a function of the axial ($d$) and transverse ($r$) fiber tip distances.[]{data-label="fig.fri"}](fig_fring.eps){width="7.5cm"} The transmission spot size increases linearly with the tip distance for distances larger than $\approx 1~\mu$m. The corresponding emission angle is $\theta=18.5^\circ$, a value found to be independent of the fiber tip pair. Clear circular and concentric reflection fringes are observed for tip distances up to several micrometers (Fig. \[fig.map\]). The fringe center position coincides with that of the transmission peak maximum. As expected for interference fringes, intensity minima/maxima are observed for: $$\label{eq.fri} \begin{array}{l} \displaystyle d^{min}_{m}=(\frac{m}{2}+\frac{1}{4})\cdot\lambda \\~\\ \displaystyle d^{max}_{m}=\frac{m}{2}\cdot\lambda \end{array}$$ with $\lambda$ the wavelength and $m =1,2,...$ a positive integer. The circular fringes can be explained by back-reflection at the fiber tips. The experimental radii of the fringe minima/maxima are determined by fitting concentric circles to the transverse reflection intensity maps. These radii are plotted in Fig. \[fig.fri\] as a function of the distance $d$ between the two fiber tip planes. The theoretical fringe positions (lines in Fig. \[fig.fri\]) are calculated using Eq. \[eq.fri\] by substituting $d$ with $d'=\sqrt{d^2+r^2}$. The agreement with the experimental values is very good. The remaining differences can be attributed to the real (elliptical) shape of the metalized fiber tips. The observation of distinct reflection patterns clearly shows that the emission pattern of the source fiber tip is influenced by the receiving fiber tip. The reflected intensity is about one order of magnitude smaller than the transmitted intensity. However, no fringes are visible in the transmission maps. FIB-cut fiber tips ------------------ Now, two FIB-cut fiber tips with different apex shapes are studied.
The emission fiber tip is strongly elliptical, with major and minor axis diameters of $a=450$ and $b=280$ nm, respectively (Fig. \[fig.SEM\].b, left side). The receiving tip has a nearly equilateral triangular shape with a side length of 260 nm. The measured transmission maps are elliptical for small tip distances and become circular for larger distances (Fig. \[fig.au\]). The elliptical transmission spots are described by two waists ($w_a$ and $w_b$) measured parallel to the major axis $a$ and minor axis $b$, respectively. The as-measured minimal waists of the transmission spot are $w_a^{min}=470$ nm and $w_b^{min}=310$ nm. The optical power injected into the source fiber is 1 mW. The power emitted by the source fiber, measured by means of a power meter, is 1 $\mu$W. The maximal transmitted power is 1 nW and 0.48 nW for tip distances of 100 nm and 500 nm, respectively. The emission losses of 30 dB are essentially due to strong propagation losses in the sub-wavelength wide section near the fiber tip end. For the measurement at $d=100$ nm, the reception losses are of the same order as the emission losses, suggesting that the propagation losses in between the two fiber tips can be neglected. The lower maximal transmitted power of the $d=500$ nm measurement scales with the inverse of the respective transmission spot surfaces. This means that the total transmitted optical power is constant. ![Experimental (left) and theoretical (right) transverse transmission intensity maps of the FIB-cut fiber tip pair shown on Fig. \[fig.SEM\].b. The maximal intensities are indicated for the experimental results.[]{data-label="fig.au"}](fig_Au_100.eps "fig:"){width="3.5cm" height="3.5cm"} ![Experimental (left) and theoretical (right) transverse transmission intensity maps of the FIB-cut fiber tip pair shown on Fig. \[fig.SEM\].b.
The maximal intensities are indicated for the experimental results.[]{data-label="fig.au"}](fig_Au_100c.eps "fig:"){width="3.5cm" height="3.5cm"}\ ![Experimental (left) and theoretical (right) transverse transmission intensity maps of the FIB-cut fiber tip pair shown on Fig. \[fig.SEM\].b. The maximal intensities are indicated for the experimental results.[]{data-label="fig.au"}](fig_Au_500.eps "fig:"){width="3.5cm" height="3.5cm"} ![Experimental (left) and theoretical (right) transverse transmission intensity maps of the FIB-cut fiber tip pair shown on Fig. \[fig.SEM\].b. The maximal intensities are indicated for the experimental results.[]{data-label="fig.au"}](fig_Au_500c.eps "fig:"){width="3.5cm" height="3.5cm"} In contrast to the NSOM tip measurements, no reflection is observed for the FIB-cut tips. This difference originates from the actual shape of the tip apex and not from the different metals used for tip coating. At 808 nm wavelength the gold and aluminum reflectance is $R^{Au}=0.976$ and $R^{Al}=0.867$, respectively. The back-reflection of the gold-coated FIB-cut tips should thus even be slightly more intense than for the Al-coated NSOM tips. However, the NSOM tips show an irregular surface with bent edges, whereas the surface of the FIB-cut tips is flat with sharp edges. Therefore, efficient reflection by the FIB-cut tips would require very good parallel alignment of the fiber tips. Moreover, reflection could only occur for transverse distances below the actual tip apex size. The emission of the FIB-cut fiber tips is calculated using a straightforward electromagnetic model. This model neglects the influence of the reception tip on the field distribution, but allows the main experimental observations to be reproduced. The emitting fiber tip is approximated by an apex of elliptical shape.
The optical intensity inside the apex is assumed to be uniform and is represented by a homogeneous distribution of (typically $m=1164$) coherent and orthogonal electric $\bold p$ and magnetic $\bold m$ dipole pairs calculated from the incident electromagnetic field ($\bold E^i,\bold H^i$): $$\begin{array}{l} \displaystyle \bold p=\frac{i}{2\pi c}\hat{\bold k}\times\bold H^i\\~\\ \displaystyle \bold m=-\frac{i}{2\pi c}\hat{\bold k}\times\bold E^i \end{array}$$ with $\hat{\bold k}=\hat{\bold E}\times\hat{\bold H}$ the normalized wavevector and $c$ the speed of light. The electric field emitted by a single dipole pair is given by [@jac98]: $$\label{eq.jack} \bold E(r) = \dfrac{1}{4\pi\epsilon_0}\left\{\left(\hat{\bold r}\times\bold p\right)\times\hat{\bold r}\dfrac{k^2}{r}+ \left[ 3\hat{\bold r}\left(\hat{\bold r}\cdot\bold p\right)-\bold p\right]\left(\dfrac{1}{r^3}-\dfrac{ik}{r^2}\right) -\dfrac{1}{Z_0}\left(\hat{\bold r}\times\bold m\right)k^2\left(\dfrac{1}{r}-\dfrac{1}{ikr^2}\right)\right\}e^{i kr}$$ with $\epsilon_0$ the vacuum permittivity, $Z_0$ the free space impedance, $k=2\pi/\lambda$ the free space wavevector, and $\bold r$ the distance vector from the dipole. Eq. \[eq.jack\] takes all dipole emission terms into account, i.e. near- and far-field contributions. The optical intensity distribution is obtained by summing the contributions over all dipole pairs and squaring the electric field. ![Major and minor axis transmission waists of the elliptical transmission spot (see Fig. \[fig.au\]) as a function of the tip distance $d$ (points: corrected experimental data with error bars, straight lines: theory)[]{data-label="fig.ecc"}](fig_waist_Au.eps){width="7.5cm"} The measured transmission plots correspond to the emission intensity of one tip captured by the second fiber tip. The finite apex of the second tip results in an enlargement of the transmission spot and has to be taken into account.
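A minimal numerical sketch of this dipole-pair summation is given below. Units are scaled so that $\lambda=1$ and $\epsilon_0=Z_0=1$, the elliptical apex uses the semi-axes quoted above (225 nm and 140 nm, expressed in units of the 830 nm wavelength), and the dipole strengths are replaced by unit orthogonal vectors $\bold p\parallel x$, $\bold m\parallel y$ (an assumption standing in for the values derived from the incident fiber field). The sketch checks that the summed field reproduces the expected $1/r^2$ far-field intensity decay on axis:

```python
import cmath, math

K = 2.0 * math.pi      # free-space wavevector for lambda = 1

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def e_dipole_pair(p, m, rvec):
    # Field of one electric/magnetic dipole pair, all terms of Eq. (eq.jack),
    # with eps0 = Z0 = 1 (illustrative units).
    r = math.sqrt(rvec[0]**2 + rvec[1]**2 + rvec[2]**2)
    rh = tuple(x / r for x in rvec)
    phase = cmath.exp(1j * K * r) / (4.0 * math.pi)
    t1 = tuple(c * K**2 / r for c in cross(cross(rh, p), rh))
    rdotp = sum(rh[i] * p[i] for i in range(3))
    f2 = 1.0 / r**3 - 1j * K / r**2
    t2 = tuple((3.0 * rdotp * rh[i] - p[i]) * f2 for i in range(3))
    f3 = -K**2 * (1.0 / r - 1.0 / (1j * K * r**2))
    t3 = tuple(c * f3 for c in cross(rh, m))
    return tuple(phase * (t1[i] + t2[i] + t3[i]) for i in range(3))

# Homogeneous grid of dipole pairs over the elliptical apex
# (semi-axes 225 nm and 140 nm, in units of the 830 nm wavelength)
a2, b2, step = 225.0 / 830.0, 140.0 / 830.0, 0.05
dipoles = [(x * step, y * step, 0.0)
           for x in range(-6, 7) for y in range(-6, 7)
           if (x * step / a2)**2 + (y * step / b2)**2 <= 1.0]

def intensity(point):
    p, m = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # unit orthogonal dipoles
    etot = [0.0, 0.0, 0.0]
    for d in dipoles:
        e = e_dipole_pair(p, m, tuple(point[i] - d[i] for i in range(3)))
        for i in range(3):
            etot[i] += e[i]
    return sum(abs(c)**2 for c in etot)

# On axis, the total intensity approaches the 1/r^2 far-field law:
i10, i20 = intensity((0.0, 0.0, 10.0)), intensity((0.0, 0.0, 20.0))
print(i10 * 10.0**2 / (i20 * 20.0**2))  # close to 1
```
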
In the present case of two different fiber tips, the size correction is less straightforward than for two identical tips. Our results for the Al-coated tips show, however, that the minimal waist is of the same order as the tip apex. For the correction, the triangular fiber tip is thus approximated by a circular tip of equal surface. Consequently, the calculated emission intensity plots in Fig. \[fig.au\] are convolved with a two-dimensional Gaussian function with a waist of 200 nm. The agreement between the experimental and theoretical transmission maps is satisfactory, and the elliptical and nearly round shapes at small and large distances, respectively, are well reproduced. The corrected major and minor axis waists ($\tilde w_{a,b}$) of the elliptical transmission spots are represented in Fig. \[fig.ecc\]. Here the experimental waists are corrected for the 200 nm apex size of the second fiber tip. The agreement between observed and calculated values is good for small tip distances, and the main features are well reproduced. However, for larger tip distances the calculated values diverge from the experiment. This is mainly due to very low experimental signal levels and the limits of the approximations concerning the simplified apex shapes. Also, it would be interesting to quantify the influence of the receiving tip on the field distribution, which will require full electromagnetic calculations. This is left for future work. The minimal corrected emission waists of the elliptical fiber tip are $\tilde w^{min}_a=428$ nm and $\tilde w_b^{min}=243$ nm, slightly smaller than the actual aperture of the elliptical fiber. At small distances light is transmitted by the optical near field. Thus the shape of the two tip apexes determines the shape of the observed intensity spot. For larger distances the light is transmitted by the optical far field, which cannot resolve fiber tip features smaller than the diffraction limit.
Thus, the obtained image corresponds to a point-like optical emitter [@DCH11]. This is experimentally observed as a decreasing difference between the major and minor axis waists. The numerical results even show an inversion of the ellipse major and minor axes at a tip distance of 365 nm, slightly below half the wavelength. Conclusions =========== In conclusion, the transmission and reflection properties of two kinds of metal-coated optical fiber tip pairs were experimentally studied. Clear evidence of optical near-field coupling between the two tips of a pair was provided by sub-wavelength transverse transmission spots. This point was confirmed by the resolution of the sub-wavelength fiber tip shape. We believe that these results are of interest for the future application of this kind of metal-coated optical fiber tip pairs in optical nano-tweezers. Acknowledgment {#acknowledgment .unnumbered} ============== Funding for this project was provided by a grant from the Région Rhône-Alpes and by the French National Research Agency in the framework of the FiPlaNT project (ANR-12-BS10-002). Helpful discussions with A. Drezet are gratefully acknowledged. [10]{} S. Skelton, M. Sergides, R. Patel, E. Karczewska, O. Maragó, and P. Jones, “Evanescent wave optical trapping and transport of micro- and nanoparticles on tapered optical fibers,” J. Quant. Spec. Rad. Trans. **113**, 2512 (2012). S. Eom, Y. Takaya, and T. Hayashi, “Novel contact probing method using single fiber optical trapping probe,” Prec. Engin. **33**, 235 (2009). Z. Liu, C. Guo, J. Yang, and L. Yuan, “Tapered fiber optical tweezers for microscopic particle trapping: fabrication and application,” Opt. Express **14**, 12510 (2006). S. Valkai, L. Oroszi, and P. Ormos, “Optical tweezers with tips grown at the end of fibers by photopolymerization,” Appl. Opt. **48**, 2880 (2009). E. R. Lyons and G. J. Sonek, “Confinement and bistability in a tapered hemispherically lensed optical fiber trap,” Appl. Phys. Lett.
**66**, 1584 (1995). J.-B. Decombe, W. Schwartz, C. Villard, H. Guillou, J. Chevrier, S. Huant, and J. Fick, “A fibered interference scanning optical microscope for living cell imaging,” Opt. Express **19**, 2702 (2011). E. Betzig, J. Trautman, T. Harris, J. Weiner, and R. Kostelak, “Breaking the Diffraction Barrier: Optical Microscopy on a Nanometric Scale,” Science **251**, 1468 (1991). C. Obermüller and K. Karrai, “Far field characterization of diffracting circular apertures,” Appl. Phys. Lett. **67**, 3408 (1995). A. Drezet, M. Nasse, S. Huant, and J. Woehl, “The optical near-field of an aperture tip,” Europhys. Lett. **66**, 41 (2004). A. Drezet, J. C. Woehl, and S. Huant, “Diffraction by a small aperture in conical geometry: Application to metal-coated tips used in near-field scanning optical microscopy,” Phys. Rev. E **65**, 046611 (2002). T. J. Antosiewicz and T. Szoplik, “Description of near– and far–field light emitted from a metal–coated tapered fiber tip,” Opt. Express **15**, 7845 (2007). J.-M. Yi, A. Cuche, F. de Leon-Perez, A. Degiron, E. Laux, C. Genet, J. Alegret, L. Martin-Moreno, and T. Ebbesen, “Diffraction regimes of single holes,” Phys. Rev. Lett. **109**, 023901 (2012). H. Kihm, S. Koo, Q. Kim, K. Bao, J. Kihm, W. Bak, S. Eah, C. Lienau, H. Kim, P. Nordlander, N. Halas, N. Park, and D.-S. Kim, “Bethe-hole polarization analyser for the magnetic vector of light,” Nat. Commun. **2**, 451 (2011). H. A. Bethe, “Theory of diffraction by small holes,” Phys. Rev. **66**, 163 (1944). J. Barthes, G. Colas des Francs, A. Bouhelier, and A. Dereux, “A coupled lossy local-mode theory description of a plasmonic tip,” New J. Phys. **14**, 083041 (2012). I. Kubicova, D. Pudis, L. Suslik, and J. Skriniarova, “Spatial resolution of apertureless metal-coated fiber tip for NSOM lithography determined by tip-to-tip scan,” Optik **124**, 1971 (2013). Y. Pang and R. Gordon, “Optical Trapping of a Single Protein,” Nano Lett. 
**12**, 402 (2012). Y. Tanaka, S. Kaneda, and K. Sasaki, “Nanostructured potential of optical trapping using a plasmonic nanoblock pair,” Nano Lett. **13**, 2146 (2013). N. Chevalier, Y. Sonnefraud, J. F. Motte, S. Huant, and K. Karrai, “Aperture-size-controlled optical fiber tips for high-resolution optical microscopy,” Rev. Sci. Instr. **77**, 063704 (2006). J. D. Jackson, *Classical Electrodynamics*, Chap. 9.2 (John Wiley & Sons, New York, 1998), 3rd ed. A. Drezet, A. Cuche, and S. Huant, “Near-field microscopy with a single-photon point-like emitter: resolution versus the aperture tip?” Opt. Commun. **284**, 1444 (2011).
--- abstract: 'There is an implicit assumption in software testing that more diverse and varied test data is needed for effective testing and to achieve different types and levels of coverage. Generic approaches based on information theory to measure, and thus implicitly to create, diverse data have also been proposed. However, if the tester is able to identify features of the test data that are important for the particular domain or context in which the testing is being performed, the use of generic diversity measures such as these may be neither sufficient nor efficient for creating test inputs that show diversity in terms of these features. Here we investigate different approaches to find data that are diverse according to a specific set of features, such as length, depth of recursion, etc. Even though these features will be less general than measures based on information theory, their use may provide a tester with more direct control over the type of diversity that is present in the test data. Our experiments are carried out in the context of a general test data generation framework that can generate both numerical and highly structured data. We compare random sampling for feature diversity to different approaches based on search, and find a hill climbing search to be efficient. The experiments highlight many trade-offs that need to be taken into account when searching for diversity. We argue that recurrent test data generation motivates building statistical models that can then help to more quickly achieve feature diversity.' author: - Robert Feldt and Simon Poulding bibliography: - 'llncs.bib' title: Searching for test data with feature diversity --- Introduction ============ Most testing practitioners know that a key to high-quality testing is to use diverse test data. 
However, it is only recently that there has been research to formalise different notions of diversity and propose concrete metrics to help realise it [@feldt2008searching; @alshahwan2012augmenting; @feldt2016testsetdiameter; @shi2016]. The diversity that is sought is often of a general shape and form, i.e., rather than targeting some specific attribute or feature of the test data, we seek diversity in general. Even though this is appropriate when little is known about the test data that is needed, it makes it harder for testers to judge if diversity has really been achieved, and of which type. Moreover, if the tester has some prior information or preference as to which type of test data to explore, it is not clear, in the general diversity context, how to incorporate this during testing. Here we target a specific form of test diversity (TD) that we call the Feature-Specific TD problem: how to sample as diverse and complete a set of test inputs as possible in a specific area of the feature space. As a concrete example, for software-under-test that takes strings as inputs, a tester might prefer test inputs that are in a particular size range (feature 1) and for which the count of numeric characters is within a given range (feature 2). This problem is in contrast to the General TD problem, where we seek diversity in general without requiring diversity within a particular set of features. There is existing work on how to search for test data with one specific set of feature values [@feldt2013finding], as well as techniques that address the General TD problem, but there is a lack of work on the Feature-Specific TD problem. In this paper we propose a variety of methods to generate test data with specific types of feature diversity, and then explore their strengths and weaknesses in order to understand the trade-offs between them. 
Our contributions are: - Identification of multiple basic methods to search for feature-specific test diversity, - Evaluation of the basic approaches on a two-dimensional feature space for a test data generation problem, - Proposal of hybrid search methods based on the results of the evaluation. In Section \[sec:Background\] we provide further background and summarise related work. In Section \[sec:Design\] we propose the search-based methods to seek feature-specific diversity, and describe and discuss their evaluation in Section \[sec:Evaluation\]. We then summarise our conclusions in Section \[sec:Conclusions\]. Background and Related Work {#sec:Background} =========================== Test data generation techniques often apply a strategy that has an implicit objective of ensuring some form of diversity in the set of test inputs that are created. Testing techniques that partition the input domain – for example, based on the structural coverage of the software-under-test – select a representative test input from each partition and so implicitly achieve diversity in the context of the criterion used for partitioning. Even uniform random testing implicitly achieves some form of diversity, simply because every input in the input domain has the same, non-zero probability of being selected for the test set. In contrast, there exist test data generation techniques that have an *explicit* objective of diversity within the input domain. One class of such techniques uses a distance metric between two test inputs, such as the Euclidean distance between numeric inputs, and interprets this metric as a measure of diversity to guide the selection of test inputs. Antirandom Testing chooses a new test input such that it maximises the total distance between the new datum and all the existing inputs already in the test set [@malaiya1995antirandom]. 
Adaptive Random Testing first creates a pool of candidate inputs by random selection, and then adds to the test set the input in the candidate pool for which the minimum distance from all existing members of the test set is the largest [@chen2004adaptive]. Both these techniques therefore create a set of test inputs element-by-element by selecting the next test element to be as dissimilar as possible from existing elements. Bueno et al. instead consider the set of test inputs as a whole, and define a diversity metric on the set as the sum of the distances from each input to its nearest neighbour [@bueno2007improving]. Metaheuristic search is then applied to the set of test inputs with the objective of maximising the diversity metric. Hemmati et al. apply both the element-wise approach of Adaptive Random Testing and a whole-set approach similar to that of Bueno et al. to the selection of diverse test cases derived using model-based testing [@hemmati2011empirical]. Diversity metrics based on Euclidean distance are limited in terms of the types of inputs to which they can be applied. Feldt et al. demonstrate that normalised compression distance, a distance metric based on information theory, is not limited in the data types to which it may be applied, and enables the selection of test inputs in a manner similar to how a human would select them, based on ’cognitive’ diversity [@feldt2008searching]. Normalised compression distance is a pair-wise metric, but a recent advance in information theory extends this notion to a set as a whole. Feldt et al. use this set-wise metric to introduce test set diameter, a diversity metric that is applied to the entire test set, and demonstrate how this metric can be used to create diverse test sets [@feldt2016testsetdiameter]. Panichella et al. demonstrate an alternative mechanism for promoting diversity in the context of selecting test cases for regression testing. 
Instead of a search objective based on diversity, the authors propose a multi-objective genetic algorithm in which the genetic operators – in this case the initialisation of the population and the generation of new individuals – are designed to ‘inject’ diversity at the genome level [@panichella2015improving]. In the above approaches, the notion of diversity is generic in the sense that it is agnostic as to the ‘meaning’ of the test inputs. Metrics such as Euclidean distance or normalised compression distance simply treat the inputs as numeric vectors or strings of symbols, respectively, rather than as aircraft velocities, time-series of temperature measurements, or customer addresses, etc. The advantage of generic diversity metrics and generic algorithm operators is that they can be applied easily to any domain, but the risk is that they may overlook domain-specific notions of diversity that might be important in deriving effective test inputs. In this paper, we investigate instead how to measure and apply diversity that takes into account the domain-specific meaning of the test inputs. We take inspiration from a recent class of evolutionary algorithms known as illumination algorithms or quality diversity algorithms [@pugh2016quality]. These algorithms differ from traditional evolutionary algorithms in that they forego the use of objective fitness as the primary pressure that drives the selection of new individuals, and instead select new individuals based on the domain-specific ‘novelty’ of the phenotype. The premise is that the search for novelty maintains diversity and avoids premature convergence to a local optimum. Or, considered another way, the pressure for ever-increasing objective fitness can prevent the algorithm from finding the sequence of ‘stepping stones’ that leads to the global optimum. These algorithms have been shown to find near-globally optimal solutions as a by-product of the search for novelty. 
For example, Lehman and Stanley’s novelty search algorithm evaluates new individuals in terms of a novelty metric, and this metric is unrelated to the objective metric [@lehman2008exploiting]. To calculate the novelty metric, domain-specific features of the phenotype are measured to obtain a feature vector, and the metric then measures the distance of the individual from its nearest neighbouring individuals in this feature space: the larger this distance, the more novel the individual is considered to be. The Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm of Mouret and Clune uses the feature space to maintain diversity in a different manner: at each point in the feature space (which is discretised for this purpose), an archive is maintained of the best individual having those features, where best is measured in terms of objective fitness [@mouret2015illuminating]. The set of these elite individuals – one at each point in the feature space – is the population on which the evolutionary algorithm acts. We note that Marculescu et al. apply both novelty search and MAP-Elites to generate candidate test inputs as part of an interactive search-based software testing system, and found that, compared to a traditional objective-based evolutionary algorithm, the illumination algorithms found more diverse test cases [@marculescu2016using]. It is this general strategy of illumination algorithms – that of searching for diversity in a domain-specific feature space – that informs the work in this paper. In addition, the specific strategies employed by Novelty Search and MAP-Elites are the basis for some of the approaches we investigate. 
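To make these two selection strategies concrete, the following minimal Python sketch (our own illustration, not code from the cited papers; the function names `novelty`, `cell`, and `offer`, as well as the bin size, are hypothetical) shows a nearest-neighbour novelty metric and a MAP-Elites-style elite archive over a discretised feature space:

```python
import math

def novelty(candidate, seen, k=3):
    """Novelty of a feature vector: mean Euclidean distance to its k
    nearest neighbours among previously seen feature vectors."""
    if not seen:
        return float("inf")  # the first individual is maximally novel
    dists = sorted(math.dist(candidate, other) for other in seen)
    return sum(dists[:k]) / min(k, len(dists))

# MAP-Elites-style archive: one elite per discretised feature-space cell.
archive = {}  # cell -> (fitness, individual)

def cell(features, bin_size=5):
    """Discretise a feature vector into an archive cell."""
    return tuple(f // bin_size for f in features)

def offer(individual, features, fitness):
    """Keep the individual only if it is the best seen in its cell."""
    key = cell(features)
    if key not in archive or fitness > archive[key][0]:
        archive[key] = (fitness, individual)
```

In novelty search, `novelty` would drive selection directly; in MAP-Elites, repeated calls to `offer` maintain the per-cell elites that form the population.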
Focused Search for Feature Diversity {#sec:Design} ==================================== The research described in this paper is motivated by the premise that test data chosen for feature-specific test diversity will be more effective than test data chosen according to more generic measures of diversity (such as those discussed in Section \[sec:Background\] above). The objective of the research is then to explore a number of search-based methods for choosing test inputs with high feature-specific diversity. In this section, we describe: - a concrete testing scenario (described first since a feature space is scenario-specific); - a feature space for the testing scenario; - a base mechanism for generating test inputs for this scenario; - a set of search-based methods that can be applied to the base mechanism to promote feature-specific diversity (the empirical work in Section \[sec:Evaluation\] will compare the effectiveness and efficiency of these methods). Testing Scenario ---------------- The input domain consists of strings that are arithmetic expressions formed from the operators `+`, `-`, `*`, and `/`; integers; and parentheses. An example of a valid input is the string: `"42+(-7*910)"`. We choose this domain since it is realistically complex: inputs are not simply numeric, but are instead strings of characters that must satisfy structural constraints, and there is no bound on the length of the expression string. We do not explicitly define the software-under-test in this scenario, since the search-based methods we apply act on the inputs themselves rather than on coverage or other information from executing the software. But we have in mind software that parses the arithmetic expression and calculates the result. Feature Space ------------- By ‘feature space’, we mean the specification of one or more named dimensions on which test inputs can vary on a defined scale. 
Typically this scale is numerical, and each feature has associated with it a specific function that maps an input onto the scale, but the scale can also be ordinal or categorical. For the purposes of the empirical work, we consider a two-dimensional feature space formed by: Feature 1: Length : – the number of characters in the string Feature 2: NumDigits : – the number of characters that are digits (‘0’ to ‘9’ in the ASCII range) We envision that the tester wishes to generate a large number of test inputs that differ in both total length as well as in the number of digits. Feature spaces can be very large, and may be infinite. This is indeed the case in this scenario: there is no bound on either the length of the expression string or the number of digits in the string. Therefore a tester needs to define a preferred area of the feature space where testing should be focused. For example, she may specify a range of values for each feature that together define a hypercube within the feature space. Base Generation Mechanism ------------------------- In order to generate valid test inputs, we use Feldt and Poulding’s GödelTest framework for generating structured data [@feldt2013finding]. In this framework, a programmatic generator is used to define the structure of valid inputs – here, the structure of valid arithmetic expressions [^1] – and a choice model is used to control which of all the possible valid arithmetic expressions is emitted by the generator. For this work, we use stochastic choice models that, in effect, define a probability distribution over the space of all valid arithmetic expressions. With such choice models, GödelTest becomes a mechanism for generating random arithmetic expressions according to the distribution defined by the choice model. Choice models in GödelTest have parameters that can be used to change the probability distribution, and the search-based methods for diversity described below operate by manipulating these parameters. 
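To give a flavour of what such a generator, its stochastic choice parameters, and the two feature functions look like, here is a simplified Python sketch. GödelTest is a separate framework and this is not its API; the parameter names, and the recursion bound `max_depth` that we add to keep the sketch safe (the real generator is unbounded), are our own assumptions:

```python
import random

def gen_operand(p, depth):
    # Choose between a parenthesised subexpression and an integer;
    # p["subexpr"] plays the role of one stochastic choice-model parameter.
    if depth < p["max_depth"] and random.random() < p["subexpr"]:
        return "(" + gen_expr(p, depth + 1) + ")"
    digits = random.randint(1, p["max_digits"])
    num = "".join(random.choice("0123456789") for _ in range(digits))
    return ("-" + num) if random.random() < p["neg"] else num

def gen_expr(p, depth=0):
    # An expression is an operand optionally followed by operator/operand pairs.
    expr = gen_operand(p, depth)
    while random.random() < p["more_ops"]:
        expr += random.choice("+-*/") + gen_operand(p, depth)
    return expr

def features(expr):
    # Map a test input onto the two-dimensional feature space:
    # (Length, NumDigits).
    return len(expr), sum(c.isdigit() for c in expr)

params = {"subexpr": 0.3, "more_ops": 0.4, "neg": 0.2,
          "max_depth": 4, "max_digits": 3}
print(features("42+(-7*910)"))  # → (11, 6)
```

Changing the values in `params` changes the probability distribution over expressions, which is exactly the handle the search-based methods described below manipulate.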
The empirical work considers two stochastic choice models: Default : The default ‘sampler’ choice model provided by GödelTest. When used with the arithmetic expression generator, this choice model has 8 parameters, all in the range $[0.0, 1.0]$. RecDepth5 : An extension of the default choice model that enables a more refined probability distribution. Specifically, the probabilistic choice of whether an operand in the expression is a number, or is itself a parenthesised subexpression, becomes conditional on the depth to which the current subexpression is nested. This choice model has 16 parameters, again all in the range $[0.0, 1.0]$. Search-Based Methods -------------------- Our goal is to cover as large a portion as possible of the preferred area in the feature space. The fundamental approach we take is based on the novelty search algorithm described in Section \[sec:Background\] above. The density (or, more simply, the count) of test inputs in a specific cell of the preferred area of the feature space is used as a metric to guide the search towards areas with lower density, so that novel inputs can be found that will improve the diversity of the test set as a whole. In addition, we consider several types of random search as baselines, and investigate a more expressive stochastic model to govern the sampling of test inputs. For random sampling, one can either set the GödelTest choice model parameters (which define a probability distribution over the valid inputs) to random values (i.e. define a distribution at random) once at the start of the generation process, or continuously during the process. We call the former method `rand-once` and designate the latter `rand-freqN`, with N denoting the frequency with which we resample the parameters. 
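A minimal Python sketch of how such a `rand-freqN`-style baseline with a per-cell density record could look; this is our own illustration, not the paper's implementation: `generate` is a hypothetical stand-in for the GödelTest generator, the 8-element parameter vector mirrors the default choice model described above, and the preferred feature ranges are example values.

```python
import random
from collections import Counter

PREF = {"length": (3, 50), "digits": (2, 25)}  # example preference hypercube

def in_preferred(f):
    """Is a (length, digits) feature vector inside the preferred area?"""
    return (PREF["length"][0] <= f[0] <= PREF["length"][1]
            and PREF["digits"][0] <= f[1] <= PREF["digits"][1])

def rand_freq(generate, features, n_samples, resample_every):
    """rand-freqN-style baseline: redraw the 8 choice-model parameters
    every `resample_every` samples, and track how densely each cell of
    the preferred area of the feature space is covered."""
    density = Counter()
    params = [random.random() for _ in range(8)]
    for i in range(n_samples):
        if i > 0 and i % resample_every == 0:
            params = [random.random() for _ in range(8)]
        f = features(generate(params))
        if in_preferred(f):
            density[f] += 1
    size = ((PREF["length"][1] - PREF["length"][0] + 1)
            * (PREF["digits"][1] - PREF["digits"][0] + 1))
    return len(density) / size, density  # coverage of the hypercube, densities
```

The returned coverage is simply the fraction of cells of the preferred area that received at least one input; the per-cell density record is what a novelty-driven method would use as its guiding metric.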
From previous research, it is known that some stochastic choice models can be quite brittle and lead to large numbers of ‘infeasible’ inputs – inputs that are extremely large or infinite and exceed the finite memory available to represent them – being generated. For this reason we also include a `rand-mfreqN` method, denoting that up to a maximum of N inputs are sampled between resampling events. The maximum means that as soon as an infeasible input is generated, we immediately resample random values for the choice model parameters. For random sampling it is well known that so-called Latin Hypercube Sampling (LHS) can generate a better ‘spread’ of samples over a space [@park1994optimal]. When using LHS, one first divides the value range for each dimension being sampled into equal-sized bins and then samples within each bin. This ensures that each dimension is sampled over the full range of its values. We select 10 and 30 bins, respectively, and designate the corresponding methods `rand-mfreq5-LHS10` and `rand-mfreq10-LHS30`. We also include Nested Monte-Carlo Search (NMCS) [@cazenave2009nested], a form of Monte-Carlo Tree Search that has previously been applied successfully to guide the generation of test inputs by GödelTest [@poulding2014]. NMCS operates during the generation process itself rather than on the choice model parameters. Each time a decision needs to be made – such as whether an operand in the arithmetic expression is a number or a subexpression, or how many digits a number operand has – NMCS performs an internal ‘simulation’ by taking each possible choice for that decision in turn and, for each, completing the generation process as normal (i.e. using the choices determined by the choice model). The choice whose simulation results in the best outcome is then made for that decision. 
The variant of NMCS used by GödelTest considers a fixed sample of possible choices, rather than all possible choices, since there may be an infinite number of such choices for a decision. We consider two variants in the evaluation: one that uses a sample of 2 choices at each decision, and another that uses a sample of 4 choices. NMCS generates many ‘intermediate’ candidate test inputs as outcomes of its internal simulations, and there are several options for utilising these intermediates. We argue that it makes sense not to throw away these intermediates but rather to use them to update the density used in the fitness calculations. We include both an approach that updates the density directly, and thus changes the fitness calculation for all subsequent samples, and a batch approach that fixes the density during one exploration by the NMCS algorithm and then uses all intermediate test inputs to update the density in one go before the next generation is started. Thus, we use four NMCS methods in the empirical evaluation: `nmcs-2-direct`, `nmcs-4-direct`, `nmcs-2-batch`, and `nmcs-4-batch`. Finally, we include a hill climbing method that is applied to the parameters of the choice model. Since this is not a population-based method, it is easier to control in detail how it compares the diversity of the inputs generated by new candidate parameters to that of the inputs generated by the current model parameters. A new candidate is formed by making small changes to the current model parameters using a Gaussian distribution with a small standard deviation. We adapt the sampling and comparison step used in a traditional hill climber to try to minimize the number of sampled test inputs. After sampling a minimum number of inputs (4), we sample up to a maximum number (20), while discarding the new point if it generates more than 33% infeasible inputs, or more than 50% feasible inputs that are outside the preferred area of the feature space. 
It uses a Mann-Whitney U test to compare the densities in feature space of the test inputs sampled with the current parameters and with the new parameters, and moves to the latter if the $p$-value of the test is below 20%. The settings (the number of samples, $p$-value threshold, etc.) were chosen in an ad hoc manner, but the method seemed robust to changes in them during initial testing, so we did not tune them further. This method is denoted `hillclimb-4-20`. Empirical Evaluation {#sec:Evaluation} ==================== We applied all 10 methods defined above to search for diverse test inputs in the two-dimensional feature space defined by the string length and number of digits of the input. Each method was executed 25 times[^2] to account for (stochastic) variation in their performance. Below we discuss the results from two different perspectives: the coverage of the preferred area of the feature space, and the efficiency of the methods, i.e. their coverage compared to the search time they needed. Feature space coverage ---------------------- Once a tester has defined a particular preferred area of a feature space where she is interested in focusing attention, our main concern is to create a set of test inputs that covers this area to the largest extent possible. Although there often exist many constraints between features, a tester may rarely be aware of them or have the time to define them in detail. We will thus assume that the focus area has the shape of a hypercube in the feature space, i.e. its limits are defined by one or more ranges of preferred values per feature. In this context it is natural to consider coverage in terms of how many of the unique combinations of feature values have been covered during a search. For example, in the two-dimensional feature space used here we have used the preferred area of lengths between 3 [^3] and 50 and numbers of digits between 2 and 25 (both ranges inclusive). 
There is clearly a constraint between the features here: the number of digits has to be smaller than or equal to the length of the string. But for other feature spaces and preferred areas, the effect of dependencies and constraints might be harder to identify. We will thus relate the number of covered cells to the theoretical maximum size of the *preference hypercube*. We call this measure *Feature Space Hypercube Coverage* (FSHC), denoted simply coverage in the following. By definition this means that a search method can very rarely, even in theory, reach 100% FSHC for a specific preference hypercube; FSHC values should be compared only in relation to each other and not on an absolute scale. For a specific feature space, preference hypercube and set of searches to fill it, one can normalize the FSHC by the largest FSHC value seen and thus calculate the *Normalized Feature Space Hypercube Coverage* (NFSHC). In the example used here, the size of the preference hypercube is $(50-3+1)*(25-2+1)$, which is $1152$. The largest number of unique feature-vector cells covered, in a (long-running) search using Hill Climbing, was 651, which means that the largest FSHC observed in our experiments was $56.5\%$. A summary of the overall performance of the 10 different methods we investigate can be seen in Table \[tab:descriptive\_stats\]. The number of runs per method was 25, except for `rand-freq1` where we limited the number of repetitions due to the longer search time. From the table we see that Hill Climbing performs well, but the methods based on random resampling have competitive performance and reach similar levels of coverage. We also see that the NMCS-based methods are generally fast but have worse coverage, regardless of whether direct or batch updating of the density is used. It is also clear that a major determinant of coverage, in addition to the method used, is the choice model. All the methods using the default sampler choice model are at the bottom of the table based on the mean FSHC level reached. 
A striking example of the difference the choice model can make is the `rand-freq1` method, which reaches an average FSHC of $52.2\%$ with the recursive model at depth 5, while it only reaches $49.1\%$ with the default sampler choice model. This is a statistically significant difference, with a p-value less than $0.00001$ based on a Mann-Whitney U-test. An advantage of using a two-dimensional feature space is that we can visualise in more detail the diversity of the test data found by the methods. Figure \[fig:featurespace\_plots\] shows scatterplots for one run each of the three methods `hillclimb-4-20` (top), `nmcs-4-direct` (middle), and `rand-once` (bottom). These plots draw one point per found test input using a low alpha (transparency) value; thus the darkness of the dots in each cell gives an indication of the density with which the cell was covered. We can see the superior coverage of hill climbing, which manages to also cover cells at the top of the hypercube, where the length of the string (x axis) has medium to low values while the number of digits (y axis) is as high as possible. We can also see that the NMCS search, in the middle graph, seems to be constrained in a similar way as the random-once method in the bottom graph, i.e. they both have problems covering the upper parts of the preference hypercube. The NMCS methods are constrained by the base sampler choice model used for the internal simulations. ![Scatterplots showing the actual coverage of the preference hypercube (its upper limit is marked with red lines) after 10,000 test inputs were sampled by three different methods: `hillclimb-4-20` (top), `nmcs-4-direct` (middle), and `rand-once` (bottom).[]{data-label="fig:featurespace_plots"}](figs/featurespace_plots_3methods.pdf){width="9cm"} Efficiency - Coverage per time ------------------------------ To study the overall efficiency of the tested methods in more detail, we can plot the coverage level reached versus the search time expended. 
Figure \[fig:coverage\_vs\_time\] shows a scatterplot with the search time in seconds on the (logarithmically scaled) X axis, and the percentage of the preferred feature space covered (FSHC) on the Y axis. The colour of each point in the graph codes for the method used, so a cloud of points of the same colour represents all the runs of one and the same method. The best position in this graph would be up and to the left, meaning a run that both achieved a high coverage and had a low search time. ![Feature space coverage (in % of the theoretical maximal feature space size) versus the search time (in seconds) used by a method to reach that coverage. The scale on the X axis is logarithmic.[]{data-label="fig:coverage_vs_time"}](figs/coverage_vs_time.pdf){width="11.7cm"} Consistent with the results shown previously, we can see that the Hill Climbing method has consistently good results. Even if its variance is larger than for the other methods, signified by the ‘lone’ light orange dot towards the middle of the graph, it tends to be among the fastest optimisers while also reaching the highest coverage levels, on average. This can be contrasted with the simplest possible strategy, `rand-once`, in light pink down at the bottom of the graph. Even if, on average, it has run times similar to the fastest methods, it fails to even reach 40% coverage. The NMCS methods all use the default choice model for sampling while traversing the tree of choices. Thus they seem to be hampered by the low coverage of this base model. Even though the NMCS search seems to be able to ‘push out’ from the confines of its base stochastic model, and thus reach higher levels of coverage, it does not reach as high as the Hill Climber or the methods based on random (re-)sampling of parameters. This makes it clear that NMCS needs a good base model adapted to the task. Alternatively, it hints at the possibility of hybridizing NMCS by dynamically adapting or randomly sampling the underlying stochastic model. 
Figure \[fig:coverage\_vs\_time\] also gives us an opportunity to better understand what causes long search times for a method. If we look at the three middle point clouds on top, for `rand-freq1` (light green, middle right), `rand-mfreq5-LHS10` (light purple, middle), and `rand-mfreq10-LHS30` (light blue, middle left), we see that they reach roughly the same coverage levels. However, the `mfreq10` version takes less than half the search time of the `freq1` version to reach that coverage. Since the maxfreq construct will resample a new set of parameter values early if an infeasible datum is generated, time tends to be saved. This is because the infeasible inputs typically arise when the stochastic model is configured to lead to a deep recursion in the number of method calls. We can see this effect more clearly if we plot the search time for each run versus the percentage of infeasible values sampled during the run. Figure \[fig:invalid\_vs\_time\] shows that, except for the NMCS methods on the left, there is an almost linear relation between these factors. The smaller, but still noticeable, additional search time seen for the rightmost runs in each cluster probably stems from the fact that, before a deep recursion during generation is interrupted and an infeasible value returned, there is a large space of non-preferred but still feasible inputs in the feature space. Generating such test inputs will also take longer than generating shorter inputs with a few levels of recursion. The only real exception to the strong correlation is the NMCS methods, which have close to 0% of sampled inputs being infeasible while still having relatively high search times. The nature of the NMCS search process is that as soon as one non-preferred datum is generated during the tree-wise ‘pruning’ of choices, the whole sub-tree of choices will be deselected, and subsequent choices are thus less likely to lead to non-preferred or infeasible data. 
![Search time (in seconds) used by a run versus the percentage of generated test inputs that are infeasible. The colours of the points in the graph are the same as in Figure \[fig:coverage\_vs\_time\] above, so the legend is excluded here.[]{data-label="fig:invalid_vs_time"}](figs/invalid_vs_time.pdf){width="10.7cm"} Discussion {#sec:Discussion} ---------- Through a set of experiments with 10 different methods to generate diverse test data in specific areas of a defined feature space we have shown that there is not one clearly better method to employ. The results show that a simple hill climbing search was the most efficient at covering the preferred parts of the feature space: it covered a larger part of the area in less time. However, random alternatives were not far behind and offer other benefits, such as less bias. With any search algorithm there is always the risk that one is trading efficiency on one particular set of problems for efficiency in general, over all problems (see for example the ‘No Free Lunch’ theorems by Wolpert and Macready [@wolpert1997]). This can be problematic if the bias leads to the tester missing erroneous behavior of the software under test. However, if a tester really has a reason to target a smaller area of the input space, a more directed search, such as a hill climbing search, can be called for. An important finding in our experiments concerns the test data generation tool itself. Even though Nested Monte Carlo Search (NMCS) has previously been shown by Poulding and Feldt [@poulding2014] to be effective at targeting test data with very specific features, when we here applied their approach to cover a feature space it became clear that NMCS can be hampered by its underlying stochastic model. All methods we evaluated consistently performed better when using a larger than default stochastic model that gives more detailed control of the generation process.
Such models allow for more fine-grained control that can be exploited by the searchers, but also used for more efficient ‘blind’ exploration by random sampling. Our experiments thus suggest that the developers of the tool should consider alternative default choices. It also hints that hybridization of the search algorithms with random sampling of a larger stochastic model should be considered in future work. Given our results it is likely that such a hybrid would make it easier for, for example, the NMCS-based methods to break free from the constraints of their current default model. Somewhat ironically, but in retrospect naturally, the main conclusion is that *there is not likely to be a single best method or search algorithm to use for different types of test diversity needs*; one needs a toolbox of diverse solutions that can be tailored to the diversity goal and situation at hand. This is in line with the argument in [@feldt2015broadening] that researchers in search-based software engineering should not only consider the basic evolutionary algorithms but should open up to a richer set of search and optimisation solutions. In particular this will be important in real-world software testing where there is a need to repeatedly explore the same test input feature space, for example in regression testing scenarios. There we argue that if a model of the mapping from the feature space to the parameter space is built up front, for example using Gaussian Processes as proposed in [@feldt2015broadening], it can be exploited in later sessions to more quickly generate a diverse set of test data. Future work should, of course, also investigate more test data generation scenarios and evaluate how the ability to find real and seeded faults is affected by test data diversity and the size of the feature space from which it is sampled.
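Several of the random methods above (`rand-mfreq5-LHS10`, `rand-mfreq10-LHS30`) rely on latin hypercube sampling to spread parameter samples evenly. For reference, a minimal sketch of the standard LHS construction (not the code used in the experiments): each dimension is split into $n$ strata, and a random permutation places exactly one sample in every stratum.

```python
import random

def latin_hypercube(n, dims):
    """n points in [0, 1)^dims with exactly one point per stratum per axis."""
    columns = []
    for _ in range(dims):
        perm = list(range(n))
        random.shuffle(perm)               # which stratum each sample lands in
        columns.append([(s + random.random()) / n for s in perm])
    return list(zip(*columns))             # one (x_1, ..., x_dims) tuple per point
```

Compared to plain uniform sampling, this guarantees spread along every axis with the same budget, which is exactly the property the LHS-based methods exploit when (re-)sampling parameters.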
Conclusions {#sec:Conclusions} =========== We have described the feature-specific test diversity problem and investigated how it can be solved with different types of search and sampling approaches. After defining 10 different approaches we evaluated them on a test data generation task for a two-dimensional feature space. Results show that a hill climbing search both gave the best coverage of the target area and was the most efficient (per time step), but that random sampling can be surprisingly effective. The empirical results point to several ways in which the investigated approaches can be improved and, possibly, hybridized to address a diverse set of test data diversity needs. In particular we propose that models that map between feature values and the space being searched can help to ensure test diversity in scenarios of frequent re-testing, such as regression testing. Our results also have wider implications for search-based software engineering. Random sampling and, in particular, ways of sampling that ensure a better spread over the search space, such as latin hypercube sampling, can be surprisingly effective in creating diversity. We caution other researchers in search-based software engineering not to blindly reach for a standard search procedure like a genetic algorithm. Depending on the goals for the search and the characteristics of the search and feature spaces, non-standard or hybrid methods may be needed and should be considered. [^1]: The generator we use for arithmetic expressions is the same as that included as an example in the README file for the DataGenerators package at: <https://github.com/simonpoulding/DataGenerators.jl>. The DataGenerators package is Feldt and Poulding’s implementation of GödelTest in the language Julia. [^2]: Except for one long-running method, as detailed later.
[^3]: We did not start at length 1 since the grammar of this particular generator is specified such that the shortest string possible is the one with two single-digit numbers with a single, binary, numeric operator between them, for a minimum string length of 3.
--- author: - 'Dominika Hunik-Kostyra,' - Andrzej Rostworowski title: 'AdS instability: resonant system for gravitational perturbations of AdS${}_5$ in the cohomogeneity-two biaxial Bianchi IX ansatz' --- Introduction {#Introduction} ============ Over the past two decades asymptotically anti-de Sitter (aAdS) spacetimes have received a great deal of attention, primarily due to the AdS/CFT correspondence which is the conjectured duality between aAdS spacetimes and conformal field theories. The distinctive feature of aAdS spacetimes, on which the very concept of duality rests, is a time-like conformal boundary at spatial and null infinity, where it is necessary to specify boundary conditions in order to define the deterministic evolution. For energy conserving boundary conditions the conformal boundary acts as a mirror at which massless waves propagating outwards bounce off and return to the bulk. Therefore, the key mechanism stabilizing the evolution of asymptotically flat spacetimes – dispersion of energy by radiation – is absent in aAdS spacetimes. For this reason the problem of nonlinear stability of the pure AdS spacetime (which is the ground state among aAdS spacetimes) is particularly challenging. The first conjecture, based on numerical evidence and heuristic arguments, about AdS being unstable against gravitational collapse (black hole formation) under arbitrarily small perturbations came from Bizoń and one of the present authors (2011) [@br_PRL107]. More precisely, in a toy model of the spherically symmetric massless scalar field minimally coupled to gravity with a negative cosmological constant in four [@br_PRL107] and higher dimensions [@jrb_PRD84] the numerical simulations showed that there is a class of arbitrarily small perturbations of AdS that evolve into a black hole on the time-scale $\mathcal{O}(\varepsilon^{-2})$, where $\varepsilon$ measures the amplitude of the perturbation. 
Moreover, on the basis of nonlinear perturbation analysis it was argued that this instability is due to a resonant transfer of energy from low to high frequencies, or equivalently, from coarse to fine spatial scales  [^1], until eventually an apparent horizon forms  [^2]. Further studies of this and similar models confirmed (see [@dkfk_PRL114; @df_JHEP1512] for independent, reliable long-time numerical integration of Einstein equations in Einstein–scalar fields models) and extended the findings of [@br_PRL107; @jrb_PRD84] providing important new insights concerning the coexistence of unstable (turbulent) and stable (quasiperiodic) regimes of evolution  [^3] (see [@ce_FP64] for a brief review and references). Still, there are two major downsides of all reported evidence for AdS instability, based on numerical integration of Einstein equations. First, the arguments of [@br_PRL107; @jrb_PRD84] and following works were based on extrapolation of the observed scaling $\mathcal{O}(\varepsilon^{-2})$ in time of resonant energy transfers between the modes and ultimately collapse times for finite small values of $\varepsilon$, cf. Fig. 2 in [@br_PRL107], but the limit $\varepsilon \rightarrow 0$, with the instability time scale $\varepsilon^{-2}$, is obviously inaccessible to numerical simulation. Second, the numerical integration of Einstein equations on the time scales long enough to provide convincing evidence for the AdS instability seems tractable only under some simplifying symmetry assumptions. Thus most numerical simulations were restricted to spherical symmetry where adding some matter (usually in the form of massless scalar field) was necessary to evade Birkhoff’s theorem and generate the dynamics, so that no gravitational degrees of freedom were excited  [^4]. 
The first numerical evidence for AdS instability in vacuum Einstein equations with negative cosmological constant in five dimensions within the cohomogeneity-two biaxial Bianchi IX ansatz was reported in [@br_APPB48] (in fact, this model was studied in parallel with [@br_PRL107], but the results were published only recently). Indeed, one may avoid assumptions about spherical symmetry and still keep an effectively $1+1$ dimensional setting, using the fact that Birkhoff’s theorem can be evaded in five and higher odd spacetime dimensions, as was observed for the first time in [@bcs_PRL95] in the context of critical collapse for asymptotically flat spacetimes. Odd-dimensional spheres admit non-round homogeneous metrics. Here we focus on $4+1$ dimensional aAdS spacetimes with the boundary $R \times S^3$. The key idea is to use the homogeneous metric on $S^3$, which takes the form $$g_{S^3} = e^{2B} \sigma_1^2+e^{2C} \sigma_2^2 + e^{2D}\sigma_3^2\;,$$ as an angular part of the five-dimensional metric (cohomogeneity-two triaxial Bianchi IX ansatz) [@bcs_PRL95] $$\label{bcs_ansatz} ds^2= -A e^{-2\delta}dt^2 +A^{-1}dr^2 + \frac{1}{4}r^2 g_{S^3}\;.$$ Here $\sigma_k$ are left-invariant one-forms on $SU(2)$ $$\sigma_1+i\sigma_2 = e^{i\psi}(\cos \theta d\phi+id\theta)\;,\;\;\sigma_3 = d\psi-\sin\theta d\phi\;$$ and $A$, $\delta$, $B$, $C$ are functions of the time and radial coordinates. In the biaxial case we have $B=C$. To deal with the problem of extrapolating the results of numerical integration of Einstein equations to the $\varepsilon \rightarrow 0$ limit, i.e. to track the effects of a small perturbation over large time scales, Balasubramanian, Buchel, Green, Lehner & Liebling (2014) [@bbgll_PRL113] and Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410] introduced new resummation schemes for the naïve nonlinear perturbation expansion, based on the multi-time framework and renormalization group methods, respectively. Secs.
1 and 2 of [@cev_JHEP1410] contain a very nice summary of the problems of naïve time-dependent perturbation expansion and the ways to cure them. In general, if the frequencies of linear perturbations satisfy the resonant condition, i.e. the sum or difference of two linear frequencies coincides with another linear frequency, as is the case in AdS, then in a naïve perturbation expansion secular terms, i.e. terms that grow in time, appear. For the model [@br_PRL107] this happens at the third order of expansion with the appearance of $\sim \varepsilon^2 t$ terms and invalidates such a naïve expansion on the $\mathcal{O}(\varepsilon^{-2})$ timescales. Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410] showed how to resum such terms in the form of renormalization flow equations for the first order amplitudes and phases, which in a naïve perturbation expansion are simply constants determined by initial data. We call such flow equations the resonant system (this name comes from another derivation of these equations based on averaging: the effects of all non-resonant terms average to zero and only resonant terms are important for the long-time scale dynamics [@cev_JHEP1501]). Such a resonant system offers new ways to study the AdS stability problem [@cev_JHEP1501; @bgll_PRD91; @gmll_PRD92; @bmr_PRL115; @dfpy_PRD94; @d_PRD100], and the study of analogue resonant systems with simple interaction coefficients has become a very active area of research in its own right [@bcehlm_CMP353; @bbe_1805.03634; @ep_1808.09173]. In [@bmr_PRL115] the convergence between (1) the results of numerical integration of Einstein equations, extrapolated to the $\varepsilon \rightarrow 0$ limit, and (2) the results of numerical integration of the resonant system truncated at $N$ modes, extrapolated to the $N \rightarrow \infty$ limit, was demonstrated.
Moreover, the evidence for a blowup in finite time $\tau_H$ of solutions of the resonant system, starting from the $\varepsilon$-size initial perturbations of AdS that for Einstein equations lead to gravitational collapse at $t_H \approx \varepsilon^{-2}\tau_H$ [@bmr_PRL115], provided a very strong argument for the extrapolation $\varepsilon \rightarrow 0$, made in [@br_PRL107], to be correct. In this work we construct the resonant system for the AdS-Einstein equations with the cohomogeneity-two biaxial Bianchi IX ansatz studied in [@br_APPB48]. Our motivation is two-fold. First, we want to strengthen the evidence for the AdS instability in vacuum Einstein equations, as was done in [@bmr_PRL115] for the model with the scalar field. Constructing the resonant system itself is the first step in this direction. Second, with the recently described systematic approach to nonlinear gravitational perturbations [@r_PRD95; @r_PRD96; @ff_PRD96; @ds_CQG35] it should be possible to obtain the resonant system for an arbitrary gravitational perturbation. Thus we treat the construction of the resonant system under the simplifying symmetry assumptions (\[bcs\_ansatz\]) as a test-case and feasibility study for this ambitious project. The work is organized as follows. In section \[setup\] we set up our system and follow the method of Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410] to obtain a resonant system for the Einstein equations; we also discuss the vanishing of two classes of secular terms, allowed by the AdS resonant spectrum but in fact not present in the resonant system, analogously to the massless scalar field case [@cev_JHEP1410; @cev_JHEP1501]. In section \[recurrences\] we derive the recurrence relations for the (interaction) coefficients in the resonant system that can be useful both to calculate their numerical values and to study their asymptotic behavior.
In section \[PreliminaryNumericalResults\] we comment very briefly on the preliminary results of numerical integration of the resonant system. Some technical details of the calculations presented in Sec. \[setup\] are delegated to two appendices. Setup of the system {#setup} =================== We consider $d+1$ dimensional vacuum Einstein equations with a negative cosmological constant $$\label{Einsteineq1} G_{\mu\nu} +\Lambda g_{\mu\nu} = 0\;,$$ where $\Lambda = - d(d-1)/\left(2 \ell^2 \right)$, $\ell$ is the AdS radius and $d$ stands for the number of spatial dimensions. In this work we focus on the $d=4$ case. Following [@bcs_PRL95], we assume the cohomogeneity-two biaxial Bianchi IX ansatz as a gravitational perturbation of the AdS spacetime: $$\label{ansatz} ds^2= \frac{\ell^2}{\cos^2 x} \left( -A e^{-2\delta} dt^2+A^{-1} dx^2 +\frac{1}{4} \sin^2 x (e^{2B} (\sigma_1^2 + \sigma_2^2)+e^{-4B} \sigma_3^2) \right) \; ,$$ where $x$ is a compactified radial coordinate, $\tan x = r/\ell$, and $A$, $\delta$ and $B$ are functions of $(t,x)$. The coordinates take the values $t\in (-\infty,\infty)$, $x\in [0, \pi/2)$. Inserting the metric (\[ansatz\]) into (\[Einsteineq1\]) with $\Lambda = -6/\ell^2$, we get a hyperbolic-elliptic system [@br_APPB48] \[Einsteineq\] $$\begin{aligned} \label{EinsteineqB} \dot B &= A e^{-\delta} P, \qquad \dot P = \frac{1}{\tan^3{\!x}} \left(\tan^3{\!x}\, A e^{-\delta} Q \right)'-\frac{4 e^{-\delta}}{3\sin^2{\!x}}\left(e^{-2B}-e^{-8B}\right),\\ \label{EinsteineqA} A' &= 4 \tan{x} \, (1-A) - 2\sin{x} \cos{x} \, A \left(Q^2 + P^2 \right) +\frac{2(4e^{-2B}-e^{-8B}-3A)}{3\tan{x}}\,, \\ \label{Einsteineqdelta} \delta' &= -2\sin{x} \cos{x} \left(Q^2+P^2\right) \,, \\ \label{ad} \dot A &= - 4 \sin{x} \cos{x} \, A^2 e^{-\delta} Q P \,,\end{aligned}$$ where we have introduced the auxiliary variables $Q=B'$ and $P=A^{-1} e^{\delta} \dot B$, and overdots and primes denote derivatives with respect to $t$ and $x$, respectively.
The field $B$ is the only dynamical degree of freedom; it plays a role similar to the spherical scalar field in [@br_PRL107]. If $B=0$, the only solution is the Schwarzschild-AdS family, in agreement with Birkhoff’s theorem. It is convenient to define the mass function $$\label{mass-function} m(t,x) = \frac{\sin^2{x}}{\cos^4{x}}\,(1-A(t,x)).$$ From the Hamiltonian constraint (\[EinsteineqA\]) it follows that $$m'(t,x) = 2\left[A(Q^2+P^2) + \frac{1}{3\sin^2 x} \left( 3 + e^{-8B} - 4e^{-2B} \right)\right] \tan^3 x\geq 0\,.$$ To study the problem of stability of AdS space within the ansatz (\[ansatz\]) we need to solve the system (\[Einsteineq\]) for small smooth initial data with finite total mass  [^5] $$M = \lim_{x\rightarrow\pi/2} m(t,x) = 2 \int_0^{\pi/2} \left[A(Q^2+P^2) + \frac{1}{3\sin^2 x} \left( 3 + e^{-8B} - 4e^{-2B} \right)\right] \tan^3 x \, dx$$ and study the late-time behavior of its solutions. Smoothness at $x=0$ implies that $$\label{x=0} B(t,x)= b_0(t)\,x^2+\mathcal{O}(x^4), \quad\delta(t,x)= \mathcal{O}(x^4),\quad A(t,x)=1+\mathcal{O}(x^4),$$ where we used the normalization $\delta(t,0)=0$ to ensure that $t$ is the proper time at the origin. The power series are uniquely determined by the free function $b_0(t)$. Smoothness at $x=\pi/2$ and finiteness of the total mass $M$ imply that (using $\rho=x-\pi/2$) $$\label{pi2} B(t,x)= b_{\infty}(t)\, \rho^4+\mathcal{O}\left(\rho^6\right),\quad \delta(t,x)= \delta_{\infty}(t)+\mathcal{O}\left(\rho^8\right),\quad A(t,x)= 1-M \rho^4+\mathcal{O}\left(\rho^6\right)\,,$$ where the free functions $b_{\infty}(t)$, $\delta_{\infty}(t)$, and the mass $M$ uniquely determine the power series. It follows from (\[pi2\]) that the asymptotic behaviour of the fields at infinity is completely fixed by the assumptions of smoothness and finiteness of total mass, hence there is no freedom of imposing boundary data.
For future convenience, following the conventions of [@cev_JHEP1410], we define $$\mu(x) = \tan^3 x \quad \mbox{and} \quad \nu(x) = \frac{3}{\mu'(x)} = \frac{\cos^4 x}{\sin^2 x}\; .$$ The pure AdS spacetime corresponds to $B=0,A=1,\delta=0$. Linearizing around this solution, we obtain $$\label{L} \ddot B +L B=0 ,\qquad L=-\frac{1}{\mu(x)}\, \partial_x \left(\mu(x) \,\partial_x\right)+\frac{8}{\sin^2{\!x}}\,.$$ This equation is the $\ell=2$ gravitational tensor case of the master equation describing the evolution of linearized perturbations of AdS spacetime, analyzed in detail by Ishibashi and Wald [@iw_CQG21]. The Sturm-Liouville operator $L$ is essentially self-adjoint with respect to the inner product ${\left\langle f, \, g \right\rangle}=\int_0^{\pi/2} f(x) g(x) \mu(x) \, dx$. The eigenvalues and associated orthonormal eigenfunctions of $L$ are $$\label{eigenEq} L \, e_k(x) = \omega_k^2 \, e_k(x), \qquad k=0,1,\dots$$ with $$\label{modes} \omega^2_k=(6+2k)^2,\qquad e_k(x)= 2 \sqrt{\frac{(k+3)(k+4)(k+5)}{(k+1)(k+2)}}\, \sin^2{\!x} \cos^4{\!x} \,P_k^{(3,2)}(\cos{2x})\,,$$ where $P_k^{(a,b)}(x)$ is a Jacobi polynomial of order $k$. The eigenfunctions $e_k(x)$ fulfill the regularity conditions (\[x=0\]) and (\[pi2\]), hence any smooth solution can be expressed as $$B(t,x)=\sum\limits_{k\geq 0} b_k(t) e_k(x)\,.$$ To quantify the transfer of energy between the modes one can introduce the linearized energy $$\label{E} E = \int_0^{\pi/2} \left( \dot B^2 + B'^2 + \frac{8}{\sin^2 x} B^2 \right) \mu(x)\, dx=\sum\limits_{k\geq 0} E_k,$$ where $E_k=\dot b_k^2 + \omega_k^2 b_k^2$ is the linearized energy of the $k$-th mode. Construction of the resonant system =================================== We will look for approximate solutions of the system (\[Einsteineq\]) with initial conditions $B(0,x) = \varepsilon f(x)$ and $\dot B(0,x) = \varepsilon g(x)$.
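As a quick numerical consistency check (ours, not part of the paper's calculation), the modes (\[modes\]) can be built from the explicit finite sum for Jacobi polynomials, and their orthonormality with respect to the measure $\mu(x)\,dx$ verified by midpoint quadrature; a short Python sketch with our own helper names:

```python
import math

def jacobi(n, a, b, z):
    """Jacobi polynomial P_n^{(a,b)}(z) via its explicit finite sum."""
    return sum(math.comb(n + a, n - s) * math.comb(n + b, s)
               * ((z - 1) / 2) ** s * ((z + 1) / 2) ** (n - s)
               for s in range(n + 1))

def e(k, x):
    """Orthonormal AdS tensor-mode eigenfunction e_k(x)."""
    c = 2 * math.sqrt((k + 3) * (k + 4) * (k + 5) / ((k + 1) * (k + 2)))
    return c * math.sin(x) ** 2 * math.cos(x) ** 4 * jacobi(k, 3, 2, math.cos(2 * x))

def inner(i, j, n_pts=20000):
    """<e_i, e_j> = integral of e_i e_j tan^3(x) over [0, pi/2], midpoint rule."""
    h = (math.pi / 2) / n_pts
    return sum(e(i, x) * e(j, x) * math.tan(x) ** 3
               for x in ((m + 0.5) * h for m in range(n_pts))) * h
```

With this normalization $\langle e_0, e_0\rangle = 120\int_0^{\pi/2}\sin^7\!x\,\cos^5\!x\,dx = 1$ exactly, which the quadrature reproduces to high accuracy.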
Assuming $\varepsilon$ to be “small” we expand the metric functions $B$, $A$ and $\delta$ as series in the amplitude of the initial data: \[series\] $$\label{seriesB} B(t,x) = \sum_{k=1}^{\infty}\varepsilon^k B_k(t,x)\;,$$ $$\label{seriesA} A(t,x) = 1 + \sum_{k=2}^{\infty}\varepsilon^{k} A_{k}(t,x)\;,$$ $$\label{seriesdelta} \delta(t,x) = \sum_{k=2}^{\infty}\varepsilon^{k} \delta_{k}(t,x)\;.$$ To satisfy the initial data we take $B_1(0,x) = f(x)$, $\dot B_1(0,x) = g(x)$ and $B_k(0,x) \equiv 0$ for $k>1$. First order perturbations ------------------------- At the first order of the $\varepsilon$-expansion, the equations (\[EinsteineqA\],\[Einsteineqdelta\]) are identically satisfied and the equation (\[EinsteineqB\]) gives $$\label{eqB1} \ddot B_1(t,x) + L \, B_1(t,x) = 0 \; .$$ We expand $B_1$ as $$\label{seriesB1} B_1(t,x) = \sum_{n=0}^{\infty} c_n^{(1)}(t) e_n(x) \; .$$ The coefficients $c_n^{(1)} \equiv c_n$ satisfy $$\label{oscilator} \ddot{c}_n+\omega_n^2 c_n=0$$ and are given by $$\label{cn} c_n(t) = a_n \cos \left( \theta_n(t) \right) \;,$$ with $$\label{thetan} \theta_n(t) = \omega_n t + \phi_n \, ,$$ where the amplitudes $a_n$ and phases $\phi_n$ are determined by the initial conditions. 
Second order perturbations -------------------------- At the second order the equations (\[Einsteineq\]) reduce to $$\label{eqB2} \ddot B_2(t,x) + L \, B_2(t,x) = {40 \over \sin^2 x} B_1^2(t,x) =: S^{(2)}\; ,$$ $$A_2' (t,x) = \frac{\nu'(x)}{\nu(x)} A_2 (t,x) - 2 \mu(x) \nu(x) \left( B_1'^2(t,x)+\dot{B}_1^2(t,x) \right) - \frac{16}{\sin^2 x} \mu(x) \nu(x) B_1^2(t,x) \, ,$$ $$\delta_2' (t,x) = - 2 \mu(x) \nu(x) \left( B_1'^2(t,x)+\dot{B}_1^2(t,x) \right) \, .$$ The equations for the metric functions can be easily integrated to yield: $$\label{A2integral} A_2(t,x) = -2 \nu(x) \int_0^x \mu(y) \left( B_1'^2(t,y) + \dot{B}_1^2(t,y) + {8 \over \sin^2 y} B_1^2(t,y) \right) \, dy \; ,$$ $$\label{delata2integral} \delta_2(t,x) = -2 \int_0^x \mu(y)\nu(y) \left( B_1'^2(t,y)+\dot{B}_1^2(t,y) \right) \, dy \; .$$ If we expand $B_2$ in terms of eigenfunctions of (\[eigenEq\]), $$B_2(t,x) = \sum_{n=0}^{\infty} c_n^{(2)}(t) e_n(x) \; ,$$ then equation (\[eqB2\]) reduces to an infinite set of equations for the coefficients $c_n^{(2)}$ $$\label{eqc2} \ddot{c}_n^{(2)}+\omega_n^2 c_n^{(2)} = {\left\langle S^{(2)}, \, e_n \right\rangle} := S^{(2)}_n = 40 \sum_{i} \sum_{j} K_{ijn} \, c_i(t) c_j(t) \, ,$$ where $$K_{ijn} = \int_0^{\frac{\pi}{2}} \frac{\mu(x)}{\sin^2 x} e_i(x) e_j(x) e_n(x) \, dx$$ is the first example of integrals of products of AdS linear eigenmodes with some weights, which we call (eigenmode) interaction coefficients and which will be frequently encountered in the following sections (for clarity we will list all their definitions while considering the third order equations). In general, at each order of perturbation expansion we will get a forced harmonic oscillator equation $$\label{forcec_oscilator_k} \ddot{c}_n^{(k)}(t)+\omega_n^2 c_n^{(k)}(t) = S^{(k)}_n \, ,$$ where the source $S^{(k)}_n$ is a sum of products of the first order coefficients $c_i$ multiplied by some eigenmode interaction coefficients.
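A numeric spot-check of the $K_{ijn}$ (ours, not from the paper) is straightforward with the explicit modes: the integral is fully symmetric in its three indices, and for the resonant combination $\omega_0+\omega_0=\omega_3$ the coefficient $K_{003}$ indeed vanishes, consistent with the vanishing-at-resonance property invoked below. The nonzero reference value quoted for $K_{001}$ is our own Beta-function evaluation, $K_{001}=16\sqrt{5}/21$, included only to validate the quadrature.

```python
import math

def jacobi(n, a, b, z):
    """Explicit finite-sum form of the Jacobi polynomial P_n^{(a,b)}(z)."""
    return sum(math.comb(n + a, n - s) * math.comb(n + b, s)
               * ((z - 1) / 2) ** s * ((z + 1) / 2) ** (n - s)
               for s in range(n + 1))

def e(k, x):
    """Orthonormal eigenfunction e_k(x) of the linearized operator L."""
    c = 2 * math.sqrt((k + 3) * (k + 4) * (k + 5) / ((k + 1) * (k + 2)))
    return c * math.sin(x) ** 2 * math.cos(x) ** 4 * jacobi(k, 3, 2, math.cos(2 * x))

def K(i, j, n, n_pts=20000):
    """K_{ijn} = integral of (mu(x)/sin^2 x) e_i e_j e_n over [0, pi/2]."""
    h = (math.pi / 2) / n_pts
    total = 0.0
    for m in range(n_pts):
        x = (m + 0.5) * h
        w = math.tan(x) ** 3 / math.sin(x) ** 2   # mu(x)/sin^2(x)
        total += e(i, x) * e(j, x) * e(n, x) * w
    return total * h
```

The vanishing of $K_{003}$ has a simple origin: in the variable $u=\cos 2x$ the integrand carries the Jacobi weight $(1-u)^3(1+u)^2$ times the degree-2 polynomial $(1+u)^2$, which is orthogonal to $P_3^{(3,2)}$.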
Products of the $c_i$ coefficients are governed by the formula $$\cos\theta_i \, \cos\theta_j = \frac{1}{2} \left[ \cos \left( \theta_i + \theta_j \right) + \cos \left( \theta_i - \theta_j \right) \right] \, .$$ Whenever, as a result of such multiplication, the source term $S^{(k)}_n$ in (\[forcec\_oscilator\_k\]) acquires a resonant term, i.e. a term of the form $\mathcal{A}\cos(\omega_n t + \phi)$, this term produces a term in the solution $c_n^{(k)}$ that grows linearly with time $t$ (called a secular term): $$\ddot{c}_n^{(k)}(t)+\omega_n^2 c_n^{(k)}(t) = \mathcal{A}\cos(\omega_n t + \phi) + ... \quad \Longrightarrow \quad c_n^{(k)}(t) = \frac{\mathcal{A}}{2 \omega_n} t \sin(\omega_n t + \phi) + ... \, .$$ Thus the presence of resonant terms in the source invalidates the naïve perturbation expansion on time scales $t \sim \varepsilon^{-(k-1)}$, and such resonant terms dominate the dynamics of the coefficient $c_n^{(k)}$. Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410] showed how to resum such secular terms, arising from resonant terms in the source, in a systematic way based on the renormalization group (RG) method (the reader is strongly encouraged to consult Secs. 1 and 2 of this excellent paper and the references therein to get a broader perspective on long-time effects of small perturbations in Hamiltonian systems and a detailed description of their RG framework). In the case of the massless scalar field studied in [@br_PRL107] the resonant terms appear at the third order. As a result of the resummation of the resulting secular terms, the first order amplitudes and phases are replaced by slowly varying functions of the “slow” time $\tau=\varepsilon^2 t$: $$\begin{aligned} c_n & \longrightarrow C_n(\varepsilon^2 t), \quad C_n(0) = c_n \\ \phi_n & \longrightarrow \Phi_n(\varepsilon^2 t), \quad \Phi_n(0) = \phi_n\end{aligned}$$ Thus it is crucial, at each order of the perturbation expansion (\[series\]), to identify all resonant terms in the source $S^{(k)}_n$.
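The appearance of the secular term can also be checked directly (our illustration, not the paper's code): integrating $\ddot c + \omega^2 c = \cos(\omega t)$ with zero initial data, for the lowest AdS tensor frequency $\omega=\omega_0=6$, reproduces the linearly growing solution $c(t)=t\sin(\omega t)/(2\omega)$. A self-contained RK4 sketch:

```python
import math

OMEGA = 6.0  # lowest AdS tensor-mode frequency, omega_0 = 6

def rhs(t, c, v):
    """Resonantly forced oscillator: c'' + omega^2 c = cos(omega t)."""
    return v, math.cos(OMEGA * t) - OMEGA ** 2 * c

def integrate(t_end, dt=1e-3):
    """Classical RK4 from c(0) = c'(0) = 0; returns c(t_end)."""
    t, c, v = 0.0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1c, k1v = rhs(t, c, v)
        k2c, k2v = rhs(t + dt / 2, c + dt / 2 * k1c, v + dt / 2 * k1v)
        k3c, k3v = rhs(t + dt / 2, c + dt / 2 * k2c, v + dt / 2 * k2v)
        k4c, k4v = rhs(t + dt, c + dt * k3c, v + dt * k3v)
        c += dt / 6 * (k1c + 2 * k2c + 2 * k3c + k4c)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return c
```

The amplitude of $c$ grows like $t/(2\omega)$, so at third order the term $\varepsilon^3 c_n^{(3)}$ becomes comparable to the first-order term $\varepsilon c_n$ on time scales $t\sim\varepsilon^{-2}$, which is exactly why the resummation is needed.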
At second order there are no secular terms because the coefficients $K_{ijk}$ vanish for the values of indices $i,j,k$ satisfying the resonance condition, as we prove in Appendix A. The solution to (\[eqc2\]) is given by $$\begin{aligned} c_n^{(2)} & = D_1 \sin (\omega_n t) + D_2 \cos (\omega_n t) + \frac{40}{\omega_n} \sum_{i=0}^\infty \sum_{j=0}^\infty K_{ijn} \nonumber \\ & \times \left( \sin (\omega_n t) \int_0^t c_i(t') c_j(t') \cos (\omega_n t') \, dt' - \cos (\omega_n t) \int_0^t c_i(t') c_j(t') \sin (\omega_n t') \, dt' \right) \, ,\end{aligned}$$ where $D_1, D_2 = const.$ Zero initial conditions, $B_2(0,x) = 0 = \dot B_2(0,x)$, imply $D_1 = D_2 = 0$. Third order perturbations and the renormalization flow equations ---------------------------------------------------------------- At the third order the equation (\[EinsteineqB\]) reduces to $$\begin{aligned} \ddot{B}_3 + L B_3 &= 2 \left(A_2 - \delta_2 \right) \ddot{B_1} +\left( A_2'-\delta_2' \right) B_1' + \left( \dot{A}_2-\dot{\delta}_2 \right) \dot{B}_1 \nonumber \\ &- {112\over \sin^2 x} B_1^3 + {80\over \sin^2 x} B_1 B_2 + {8\over \sin^2 x} A_2 B_1 =: S^{(3)} \label{eqB3}\end{aligned}$$ Expanding $B_3$ into eigenmodes $$B_3(t,x) = \sum_{n=0}^{\infty} c_n^{(3)}(t) e_n(x)$$ and then projecting onto the eigenmode basis we get $$\ddot{c}_l^{(3)}+\omega_l^2 c_l^{(3)} = {\left\langle S^{(3)}, \, e_l \right\rangle} =: S^{(3)}_l \; , \label{c3n}$$ with $$\begin{aligned} \label{eq:Sl} S^{(3)}_l&= 2 {\left\langle A_2 \ddot B_1, \, e_l \right\rangle} - 2 {\left\langle \delta_2 \ddot B_1, \, e_l \right\rangle} + {\left\langle \left( A_2' - \delta_2' \right) B_1', \, e_l \right\rangle} + {\left\langle \dot A_2 \dot B_1, \, e_l \right\rangle} - {\left\langle \dot \delta_2 \dot B_1, \, e_l \right\rangle} \nonumber \\ & -112 {\left\langle {1 \over \sin^2 x} B_1^3, \, e_l \right\rangle} + 80 {\left\langle {1\over \sin^2 x} B_2 B_1, \, e_l \right\rangle} + 8 {\left\langle {1\over \sin^2 x} A_2 B_1, \, e_l \right\rangle} \, .\end{aligned}$$
This expression strongly resembles an analogous equation of Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410], obtained for the massless scalar field. However, it contains three additional terms which are not present in the massless, spherically symmetric case, namely ${\left\langle {1\over \sin^2 x} B_1^3, \, e_l \right\rangle}$, ${\left\langle {1\over \sin^2 x} B_1 B_2, \, e_l \right\rangle}$, ${\left\langle {1\over \sin^2 x} B_1 A_2, \, e_l \right\rangle}$. After a long and tedious calculation (the details are given in Appendix B) the source term in (\[c3n\]) can be put in the form: $$\begin{aligned} & S^{(3)}_l \nonumber\\ = & \sum_{i,k} a_i^2 a_k \left( - H_{iikl} - 2 \omega_i^2 M_{kli} + 2 \omega_k^2 X_{iikl} - 8 \tilde{X}_{iikl} + 4 \omega_i^2 \omega_k^2 W_{klii} - 16 \omega_i^2 \tilde{W}_{klii} \right) \cos \left( \theta_k \right) \nonumber\\ - & \frac{1}{2} \sum_{i,j} a_i a_j a_l \omega_l \left[ \left( \omega_i \omega_j P_{ijl} + B_{ijl} \right) \left( 2 \omega_l + \omega_j - \omega_i \right) \cos \left( \theta_i - \theta_j - \theta_l \right) \times 2 \right. \nonumber\\ & \hspace{20mm} - \left( \omega_i \omega_j P_{ijl} - B_{ijl} \right) \left( 2 \omega_l - \omega_j - \omega_i \right) \cos \left( \theta_i + \theta_j - \theta_l \right) \nonumber\\ & \hspace{20mm} \left. - \left( \omega_i \omega_j P_{ijl} - B_{ijl} \right) \left( 2 \omega_l + \omega_j + \omega_i \right) \cos \left( \theta_i + \theta_j + \theta_l \right) \right] \nonumber\\ + & \sum_{i,j,k} a_i a_j a_k \cos \left( \theta_i + \theta_j - \theta_k \right) \times \left\{ - \frac{\omega_j}{\omega_j + \omega_i} \left( 8 \tilde{X}_{ijkl} + H_{ikjl} - 2 \omega_k^2 X_{ijkl} \right) \right.
\nonumber\\ & + [j \neq k] \frac{\omega_j}{\omega_k - \omega_j} \left( 8 \tilde{X}_{kjil} + H_{kjil} - 2 \omega_i^2 X_{kjil} \right) + [i \neq k] \frac{\omega_k}{\omega_i - \omega_k} \left( 8 \tilde{X}_{ijkl} + H_{ikjl} - 2 \omega_j^2 X_{ijkl} \right) \nonumber\\& - \omega_j \omega_k X_{ijkl} \times 2 + \omega_i \omega_j X_{kijl} - 4 \tilde{X}_{kijl} - 4 \tilde{X}_{ijkl} \times 2 - 28 G_{ijkl} \times 3 \nonumber\\& + [i \neq l] \frac{\omega_i \left( 2 \omega_i + \omega_j - \omega_k \right)}{ 2 \left(\omega_l^2 - \omega_i^2\right)} Z^+_{kjil} + [j \neq l] \frac{\omega_j \left( 2 \omega_j + \omega_i - \omega_k \right)}{ 2 \left(\omega_l^2 - \omega_j^2\right)} Z^+_{ikjl} \nonumber\\& \left. - [k \neq l] \frac{\omega_k \left( 2 \omega_k - \omega_i - \omega_j \right)}{ 2 \left(\omega_l^2 - \omega_k^2\right)} Z^-_{ijkl} \right\} \nonumber\\ + & \sum_{i,j,k} a_i a_j a_k \cos \left( \theta_i + \theta_j + \theta_k \right) \times \left\{ - \frac{\omega_j}{\omega_j + \omega_i} \left( 8 \tilde{X}_{ijkl} + H_{ikjl} - 2 \omega_k^2 X_{ijkl} \right) \right. \nonumber\\& \left. 
- \omega_j \omega_k X_{ijkl} - 4 \tilde{X}_{ijkl} - 28 G_{ijkl} - [k \neq l] \frac{\omega_k \left( 2 \omega_k + \omega_i + \omega_j \right)}{ 2 \left(\omega_l^2 - \omega_k^2\right)} Z^-_{ijkl} \right\} \nonumber\\ + & 80 {\left\langle \frac{1}{\sin^2 x} B_2 B_1, \, e_l \right\rangle} \, , \label{completeS3l}\end{aligned}$$ where the interaction coefficients are defined as \[eq:coeffs\] $$\begin{aligned} \label{Xijkl} X_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e_{j}(x)e_{k}(x)e_{l}(x)(\mu(x))^{2}\nu(x), \\ \label{Yijkl} Y_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e_{j}(x)e'_{k}(x)e'_{l}(x)(\mu(x))^{2}\nu(x), \\ \label{Hijkl} H_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e_{j}(x)e'_{k}(x)e_{l}(x)(\mu(x))^{2}\nu'(x) \\ \label{Zijkl} Z^{\pm}_{ijkl}&=\omega_{i}\omega_{j}(X_{klij}-X_{lkij})\pm(Y_{klij}-Y_{lkij}), \\ \label{Wijkl} W_{ijkl} &= \int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}(x)e_{j}(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y \, \mu(y) \, e_k(y)e_l(y) \, , \\ \label{WSijkk} \bar{W}_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}'(x)e_{j}'(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y \, \mu(y) \, e_k(y)e_l(y) \, , \\ \label{Vij} V_{ij}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}(x)e_{j}(x)\mu(x)\nu(x) \, , \\ \label{Aij} A_{ij}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e'_{j}(x)\mu(x)\nu(x) \, . 
\\ P_{ijk} &= V_{ij} - W_{ijkk} \\ B_{ijk} &= A_{ij} - \bar{W}_{ijkk} \\ M_{ijk}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e_{j}(x)\mu(x)\nu'(x)\int_{0}^{x}\text{d}y(e_{k}(y))^{2}\mu(y), \\ \label{Kijk} K_{ijk}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\, {1\over \sin^2 x}e_{i}(x)e_{j}(x)e_{k}(x)\mu(x), \\ \label{Gijkl} G_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\, {1\over \sin^2 x}e_{i}(x)e_{j}(x)e_{k}(x)e_{l}(x)\mu(x), \\ \tilde{X}_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\, {1\over \sin^2 x}e'_{i}(x)e_{j}(x)e_{k}(x)e_{l}(x)(\mu(x))^{2}\nu(x), \\ \tilde{W}_{ijkl}&=\int_{0}^{\frac{\pi}{2}}\text{d}x\,{1\over \sin^2 x}e_{i}(x)e_{j}(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y \, \mu(y) \, e_k(y)e_l(y) \, ,\end{aligned}$$ and we use the convenient (Iverson bracket) notation: $$[condition] = \left\{ \begin{matrix} 1 \mbox{ if \textit{condition} is true} \\ 0 \mbox{ if \textit{condition} is false} \end{matrix}\right. \, . \label{[condition]}$$ Now we are ready to identify the resonant terms in the source $S^{(3)}_l$. As discussed by Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410], these terms dictate the dynamics of the system; in particular, they control the flow of (conserved) energy between the modes. The resonant terms in $S^{(3)}_l$ are those with $\cos(\pm \omega_l t + \phi)$ time dependence. Such terms arise from the following terms in (\[completeS3l\]) under the conditions listed below (cf. (\[thetan\])); the single and double underlining of some terms here and on the following pages is explained after eq. (\[S3lfinal\]).

#### $\cos(\theta_k)$ terms:

-   $\omega_k = \omega_l$: this gives $[k=l] = [k=l](\underline{[i \neq l]} + \doubleunderline{[i=l]})$ and contributes to the $(+,+,-)$ resonance, cf. [@cev_JHEP1410], see below.
-   $\omega_k = - \omega_l$ is never satisfied.

#### $\cos(\theta_i - \theta_j - \theta_l)$ terms:

-   $\omega_i - \omega_j - \omega_l = \omega_l$: these terms do not contribute because for $\omega_i - \omega_j = 2 \omega_l$ their prefactor $2 \omega_l + \omega_j - \omega_i$ is zero.
-   $\omega_i - \omega_j - \omega_l = - \omega_l$: this gives $[i=j] = [i=j](\underline{[i=j \neq l]} + \doubleunderline{[i=j=l]})$ and contributes to the $(+,+,-)$ resonance, cf. [@cev_JHEP1410], see below.

#### $\cos(\theta_i + \theta_j - \theta_l)$ terms:

-   $\omega_i + \omega_j - \omega_l = \omega_l$: these terms do not contribute because for $\omega_i + \omega_j = 2 \omega_l$ their prefactor $2 \omega_l - \omega_i - \omega_j$ is zero.
-   $\omega_i + \omega_j - \omega_l = - \omega_l$ is never satisfied.

#### $\cos(\theta_i + \theta_j + \theta_l)$ terms: $\omega_i + \omega_j + \omega_l = \pm \omega_l$ is never satisfied.

#### $\cos(\theta_i + \theta_j - \theta_k)$ terms:

-   $\omega_i + \omega_j - \omega_k = \omega_l$: this gives $[i+j=k+l]$ and contributes to the $(+,+,-)$ resonance, cf. [@cev_JHEP1410]; its name comes from the $\omega_i + \omega_j - \omega_k = \omega_l$ condition, with two '$+$' and one '$-$' on the left-hand side of the equation.
-   $\omega_i + \omega_j - \omega_k = - \omega_l$: this gives $[k=i+j+l+6]$; this is the $(+,-,-)$ resonance, cf. [@cev_JHEP1410], as $\omega_k - \omega_i - \omega_j = \omega_l$; its name comes from one '$+$' and two '$-$' on the left-hand side of the equation.

#### $\cos(\theta_i + \theta_j + \theta_k)$ terms:

-   $\omega_i + \omega_j + \omega_k = \omega_l$: this gives $[i+j+k+6=l]$; this is the $(+,+,+)$ resonance, cf. [@cev_JHEP1410]; its name comes from three '$+$' on the left-hand side of the equation.
-   $\omega_i + \omega_j + \omega_k = - \omega_l$ is never satisfied.

It is also shown in Appendix B that $$\begin{aligned} & 80 {\left\langle \frac{1}{\sin^2 x} B_2 B_1, \, e_l \right\rangle} = -800 \sum_{0 < i,j,k} a_i a_j a_k \cos \left( \theta_i + \theta_j - \theta_k \right) \nonumber\\ & \times \left\{ \sum_{0 < m} \frac{K_{jkm} K_{ilm}}{\left( \omega_j - \omega_k\right)^2 - \omega_m^2} + \sum_{0 < m} \frac{K_{ikm} K_{jlm}}{\left( \omega_i - \omega_k\right)^2 - \omega_m^2} + \sum_{0 < m} \frac{K_{ijm} K_{klm}}{\left( \omega_i + \omega_j\right)^2 - \omega_m^2} \right\} \nonumber\\ & + \mbox{non-resonant terms } \, \label{B2B1oversin}\end{aligned}$$ and there is no contribution from (\[B2B1oversin\]) to either the $(+,+,+)$ or the $(+,-,-)$ resonance. The sums in (\[B2B1oversin\]) are understood in such a way that there is no contribution whenever the numerators are zero; thus there is no problem with division by zero, as $K_{ijk} \equiv 0$ for any permutation of indices in the inequality $k > i+j+2$.
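For a finite mode cutoff, the three resonance channels identified above are easy to enumerate programmatically. Below is a minimal sketch (ours, not code from this work), assuming the linear spectrum $\omega_n = n + 3$ — the value consistent with the resonance conditions $[i+j=k+l]$, $[k=i+j+l+6]$ and $[i+j+k+6=l]$:

```python
# Sketch: enumerate the resonant channels feeding a given mode l, for a mode
# cutoff N, assuming omega_n = n + 3 (the spectrum consistent with the
# resonance conditions [i+j=k+l], [k=i+j+l+6] and [i+j+k+6=l] quoted above).

def omega(n):
    return n + 3

def resonant_channels(l, N):
    """Return the (+,+,-), (+,-,-) and (+,+,+) index triples feeding mode l."""
    ppm, pmm, ppp = [], [], []
    for i in range(N + 1):
        for j in range(N + 1):
            for k in range(N + 1):
                if i + j == k + l and i != l and j != l:
                    ppm.append((i, j, k))   # omega_i + omega_j - omega_k = omega_l
                if k == i + j + l + 6:
                    pmm.append((i, j, k))   # omega_k - omega_i - omega_j = omega_l
                if i + j + k + 6 == l:
                    ppp.append((i, j, k))   # omega_i + omega_j + omega_k = omega_l
    return ppm, pmm, ppp

ppm, pmm, ppp = resonant_channels(l=2, N=10)
# every returned triple indeed satisfies the corresponding frequency condition
assert all(omega(i) + omega(j) - omega(k) == omega(2) for i, j, k in ppm)
assert all(omega(k) - omega(i) - omega(j) == omega(2) for i, j, k in pmm)
assert all(omega(i) + omega(j) + omega(k) == omega(2) for i, j, k in ppp)
```

Note that for low-lying modes the $(+,+,+)$ channel is empty ($i+j+k+6=l$ has no solutions for $l<6$), as in the run above.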
Finally $S^{(3)}_l$ takes the following form: $$\begin{aligned} S^{(3)}_l & = \doubleunderline{a_l^3 T_l \cos\left( \theta_l \right)} + \underline{\sum_{0 < i \neq l} a_l a_i^2 R_{il} \cos\left( \theta_l \right)} + \sum_{\scriptsize{\begin{matrix} 0 < i,j,k \\ i+j = k+l \\ i \neq l \neq j \end{matrix}}} a_i a_j a_k S_{ijkl} \cos\left( \theta_i + \theta_j - \theta_k \right) \nonumber\\ & + \sum_{\scriptsize{\begin{matrix} 0 < i,j,k \\ k = i+j+l+6 \end{matrix}}} a_i a_j a_k U_{kijl} \cos\left( \theta_k - \theta_i - \theta_j \right) + \sum_{\scriptsize{\begin{matrix} 0 < i,j,k \\ i+j+k+6 = l \end{matrix}}} a_i a_j a_k Q_{ijkl} \cos\left( \theta_i + \theta_j + \theta_k \right) \nonumber\\ & + \mbox{non-resonant terms .} \label{S3lfinal}\end{aligned}$$ \[WhyUnderlining\] To identify the contributions to $T_l$, $R_{il}$ and $S_{ijkl}$ in (\[S3lfinal\]) we note the following identities, to be used under the sums in (\[completeS3l\]) (contributions to $R_{il}$ and $T_l$ are marked with single and double underlining here and in the text between eq. (\[\[condition\]\]) and eq.
(\[B2B1oversin\])) $$\begin{aligned} [i+j=k+l][j \neq k] &= [i+j=k+l][i \neq l \neq j] + \underline{[j=l][i = k \neq l]} = [i+j=k+l][i \neq l] \, , \\ [i+j=k+l][i \neq k] &= [i+j=k+l][i \neq l \neq j] + \underline{[i=l][j = k \neq l]} = [i+j=k+l][j \neq l] \, , \\ [i+j=k+l][k \neq l] &= [i+j=k+l][i \neq l \neq j] + \underline{[j=l][i = k \neq l]} + \underline{[i=l][j = k \neq l]} \, \end{aligned}$$ and $$\begin{aligned} & [i+j=k+l] \nonumber\\ = & [i+j=k+l][i \neq l \neq j] + \underline{[j=l][i=k \neq l]} + \underline{[i=l][j=k \neq l]} + \doubleunderline{[i=j=k=l]} \, .\end{aligned}$$ This leads to $$\begin{aligned} T_l & = -\frac{3}{2} H_{llll} + 2 \omega_l^2 X_{llll} - 24 \tilde{X}_{llll} - 2 \omega_l^2 M_{lll} - 16 \omega_l^2 \tilde{W}_{llll} + 4 \omega_l^4 W_{llll} - 2 \omega_l^2 \left( \omega_l^2 P_{lll} + B_{lll} \right) \nonumber \\ & - 84 G_{llll} - 800 \sum_{0 \leq m \leq 2l+2} \left( \frac{1}{4 \omega_l^2 - \omega_m^2} - \frac{2}{\omega_m^2} \right) \left(K_{llm}\right)^2 \, , \label{Tl}\end{aligned}$$ $$\begin{aligned} R_{il} & = 2\left(\frac{\omega_{i}^{2}}{\omega_{l}^{2}-\omega_{i}^{2}}\right)\left(H_{liil}-2\omega_{i}^{2}X_{liil}+8 \tilde{X}_{liil} \right) \nonumber \\ & -2\left(\frac{\omega_{l}^{2}}{\omega_{l}^{2}-\omega_{i}^{2}}\right)\left(H_{ilil}-2\omega_{i}^{2}X_{ilil}+8 \tilde{X}_{ilil}\right) \nonumber \\ & -2\omega_{i}^{2}X_{liil} -24 \tilde{X}_{ilil} -8 \tilde{X}_{liil} \nonumber \\ & -\left(H_{iill}+2\omega_{i}^{2}M_{lli}\right)+2\omega_{l}^{2}\left(X_{iill}+2\omega_{i}^{2}W_{lli}\right)-16\omega_{i}^{2} \tilde{W}_{lli}-2\omega_{l}^{2}\left(\omega_{i}^{2}P_{iil}+B_{iil}\right) \nonumber \\ & +2\left(\frac{\omega_{i}^{2}}{\omega_{l}^{2}-\omega_{i}^{2}}\right)\left(Y_{illi}-Y_{lili}+\omega_{l}^{2}(X_{illi}-X_{lili})\right) - 168G_{ilil} \nonumber \\ & -1600 \sum_{\scriptsize{\begin{matrix}m=0\\ i-l\neq \pm (m+3)\end{matrix}}}^{i+l+2} \left( \frac{1}{(\omega_i-\omega_l)^2-\omega_m^2}+\frac{1}{(\omega_i+\omega_l)^2-\omega_m^2} \right) 
(K_{ilm})^2 \nonumber \\ & +1600 \sum_{m=0}^{\scriptsize{\begin{matrix}m<2i+3\\ m<2l+3 \end{matrix}}} \frac{1}{\omega_m^2} K_{iim} K_{llm} \, , \label{Ril}\end{aligned}$$ $$\begin{aligned} S_{ijkl}=&-\frac{1}{2}H_{ijkl}\omega_{j}\left(\frac{1}{\omega_{j}+\omega_{i}}+\frac{1}{\omega_{j}-\omega_{k}}\right)-\frac{1}{2}H_{jkil}\omega_{k}\left(\frac{1}{\omega_{k}-\omega_{i}}+\frac{1}{\omega_{k}-\omega_{j}}\right) \nonumber \\ &-\frac{1}{2}H_{kijl}\omega_{i}\left(\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}-\omega_{k}}\right)+X_{kijl}\omega_{i}\omega_{j}\left(\frac{\omega_{j}}{\omega_{i}-\omega_{k}}+\frac{\omega_{i}}{\omega_{j}-\omega_{k}}+1\right) \nonumber \\ &+X_{ijkl}\omega_{j}\omega_{k}\left(\frac{\omega_{k}}{\omega_{j}+\omega_{i}}+\frac{\omega_{j}}{\omega_{k}-\omega_{i}}-1\right)+X_{jkil}\omega_{k}\omega_{i}\left(\frac{\omega_{k}}{\omega_{i}+\omega_{j}}+\frac{\omega_{i}}{\omega_{k}-\omega_{j}}-1\right) \nonumber \\ &+\frac{1}{2}\left(\frac{\omega_{k}}{\omega_{i}+\omega_{j}}\right)Z^{-}_{ijkl}+\frac{1}{2}\left(\frac{\omega_{i}}{\omega_{j}-\omega_{k}}\right)Z^{+}_{jkil}+\frac{1}{2}\left(\frac{\omega_{j}}{\omega_{i}-\omega_{k}}\right)Z^{+}_{kijl} \nonumber \\ &-4 \tilde{X}_{ijkl}\left(1+\frac{\omega_{j}}{\omega_{i}+\omega_{j}}+\frac{\omega_{k}}{\omega_{k}-\omega_{i}}\right)-4 \tilde{X}_{jkil}\left(1+\frac{\omega_{i}}{\omega_{i}+\omega_{j}}+\frac{\omega_{k}}{\omega_{k}-\omega_{j}}\right) \nonumber \\ &-4 \tilde{X}_{kijl}\left(1+\frac{\omega_{i}}{\omega_{i}-\omega_{k}}+\frac{\omega_{j}}{\omega_{j}-\omega_{k}}\right)\nonumber \\ &-84G_{ijkl}-800 \sum_{m=0}^{i+j+2} \frac{1}{(\omega_i+\omega_j)^2-\omega_m^2} K_{ijm}K_{mkl} \nonumber \\ &-800 \sum_{\scriptsize{\begin{matrix}m=0\\ i-k\neq \pm (m+3)\end{matrix}}}^{\scriptsize{\begin{matrix}m<i+k+3\\ m<l+j+3\end{matrix}}} \frac{1}{(\omega_i-\omega_k)^2-\omega_m^2} K_{ikm}K_{mjl} \nonumber \\ &-800 \sum_{\scriptsize{\begin{matrix}m=0\\ j-k\neq \pm (m+3)\end{matrix}}}^{\scriptsize{\begin{matrix}m<j+k+3\\ 
m<l+i+3\end{matrix}}} \frac{1}{(\omega_j-\omega_k)^2-\omega_m^2} K_{jkm}K_{mil} \, , \label{Sijkl}\end{aligned}$$ where $S_{ijkl}$ is taken to be symmetric in its first two indices and it is understood that on both sides of (\[Sijkl\]) the condition $[i+j=k+l][i \neq l \neq j]$ holds. One can show that $U_{ijkl}$ and $Q_{ijkl}$ contain no contribution from scalar products ${\left\langle {1\over \sin^2 x} B_1^3, \, e_l \right\rangle}$, ${\left\langle {1\over \sin^2 x}B_1 B_2, \, e_l \right\rangle}$ (for details see Appendix B). For the $U_{ijkl}$ terms we get: $$\begin{aligned} & U_{ijkl} \times [i=j+k+l+6] \nonumber\\ &= \left[ \frac{1}{2}H_{ijkl}\frac{\omega_{j}(2\omega_{j}-\omega_{i}+\omega_{k})}{(\omega_{i}-\omega_{j})(\omega_{j}+\omega_{k})}+\frac{1}{2}H_{jkil}\frac{\omega_{k}(2\omega_{k}-\omega_{i}+\omega_{j})}{(\omega_{i}-\omega_{k})(\omega_{k}+\omega_{j})}+\frac{1}{2}H_{kijl}\frac{\omega_{i}(\omega_{j}+\omega_{k}-2\omega_{i})}{(\omega_{i}-\omega_{j})(\omega_{i}-\omega_{k})} \right. \nonumber \\ &-X_{ijkl}\,\omega_{j}\omega_{k}\left(\frac{\omega_{k}}{(\omega_{i}-\omega_{j})}+\frac{\omega_{j}}{(\omega_{i}-\omega_{k})}-1\right)+X_{jkil}\,\omega_{i}\omega_{k}\left(\frac{\omega_{k}}{(\omega_{i}-\omega_{j})}+\frac{\omega_{i}}{(\omega_{k}+\omega_{j})}-1\right) \nonumber \\ &+X_{kijl}\,\omega_{i}\omega_{j}\left(\frac{\omega_{i}}{(\omega_{j}+\omega_{k})}+\frac{\omega_{j}}{(\omega_{i}-\omega_{k})}-1\right) \nonumber \\ &-\frac{1}{2}Z^{+}_{ijkl}\frac{\omega_{k}}{(\omega_{i}-\omega_{j})}+\frac{1}{2}Z^{-}_{jkil}\frac{\omega_{i}}{(\omega_{j}+\omega_{k})}-\frac{1}{2}Z^{+}_{kijl}\frac{\omega_{j}}{(\omega_{i}-\omega_{k})} \nonumber \\ &-4 \tilde{X}_{ijkl}\left(1+\frac{\omega_{j}}{(\omega_{j}-\omega_{i})}+\frac{\omega_{k}}{(\omega_{k}-\omega_{i})}\right)-4 \tilde{X}_{jkil}\left(1+\frac{\omega_{i}}{(\omega_{i}-\omega_{j})}+\frac{\omega_{k}}{(\omega_{j}+\omega_{k})}\right) \nonumber \\ & \left. 
-4 \tilde{X}_{kijl}\left(1+\frac{\omega_{i}}{(\omega_{i}-\omega_{k})}+\frac{\omega_{j}}{(\omega_{j}+\omega_{k})}\right) \right] \times [i=j+k+l+6] \;. \label{Uijkl}\end{aligned}$$ For the $Q_{ijkl}$ terms we get: $$\begin{aligned} & Q_{ijkl} \times [i+j+k+6=l] \nonumber\\ &= \left[ -\frac{1}{6}H_{ijkl}\frac{\omega_{j}(2\omega_{j}+\omega_{i}+\omega_{k})}{(\omega_{j}+\omega_{i})(\omega_{j}+\omega_{k})}-\frac{1}{6}H_{jkil}\frac{\omega_{k}(2\omega_{k}+\omega_{i}+\omega_{j})}{(\omega_{k}+\omega_{i})(\omega_{k}+\omega_{j})} \right. \nonumber \\ &-\frac{1}{6}H_{kijl}\frac{\omega_{i}(2\omega_{i}+\omega_{j}+\omega_{k})}{(\omega_{i}+\omega_{j})(\omega_{i}+\omega_{k})}+\frac{1}{3}X_{ijkl}\,\omega_{j}\omega_{k}\left(1+\frac{\omega_{k}}{(\omega_{j}+\omega_{i})}+\frac{\omega_{j}}{(\omega_{k}+\omega_{i})}\right) \nonumber \\ &+\frac{1}{3}X_{jkil}\omega_{i}\omega_{k}\left(1+\frac{\omega_{k}}{(\omega_{i}+\omega_{j})}+\frac{\omega_{i}}{(\omega_{k}+\omega_{j})}\right)+\frac{1}{3}X_{kijl}\,\omega_{i}\omega_{j}\left(1+\frac{\omega_{i}}{(\omega_{j}+\omega_{k})}+\frac{\omega_{j}}{(\omega_{i}+\omega_{k})}\right) \nonumber \\ &-\frac{1}{6}Z^{-}_{ijkl}\frac{\omega_{k}}{(\omega_{i}+\omega_{j})}-\frac{1}{6}Z^{-}_{jkil}\frac{\omega_{i}}{(\omega_{j}+\omega_{k})}-\frac{1}{6}Z^{-}_{kijl}\frac{\omega_{j}}{(\omega_{i}+\omega_{k})} \nonumber \\ &-\frac{4}{3} \tilde{X}_{ijkl} \left(1+\frac{\omega_{j}}{(\omega_{j}+\omega_{i})}+\frac{\omega_{k}}{(\omega_{k}+\omega_{i})}\right)-\frac{4}{3} \tilde{X}_{jkil}\left(1+\frac{\omega_{k}}{(\omega_{k}+\omega_{j})}+\frac{\omega_{i}}{(\omega_{i} +\omega_{j})}\right) \nonumber \\ &\left. -\frac{4}{3} \tilde{X}_{kijl}\left(1+\frac{\omega_{i}}{(\omega_{i}+\omega_{k})}+\frac{\omega_{j}}{(\omega_{j}+\omega_{k})}\right) \right] \times [i+j+k+6=l] \;. 
\label{Qijkl}\end{aligned}$$ With the help of identities [^6] \[HMidentities\] $$\begin{aligned} H_{ijkl} & = \omega_i^2 X_{klij} - 8\tilde{X}_{klij} + \omega_k^2 X_{ijkl} - 8\tilde{X}_{ijkl} - Y_{klij} - Y_{ijkl} \;, \label{Hidentity} \\ M_{ijk} & =\omega_i^2 W_{ijk} - 8\tilde{W}_{ijkk} - X_{ijkk} + B_{ijk} - A_{ij} \;, \label{Midentity}\end{aligned}$$ expressions (\[Uijkl\]) and (\[Qijkl\]) can be simplified to yield: $$\begin{aligned} \label{Uijklfinal} & U_{ijkl}\times [i=j+k+l+6] \nonumber\\ &= \left[ \frac{1}{2} \left(\frac{1}{\omega_{i}-\omega_{j}}-\frac{1}{\omega_{k}-\omega_{i}}-\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{j} \omega_{k} X_{lijk}+\omega_{l} Y_{iljk}) \right. \nonumber \\ &+\frac{1}{2} \left(\frac{1}{\omega_{i}-\omega_{j}}+\frac{1}{\omega_{k}-\omega_{i}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{j} \omega_{l} X_{kijl}+\omega_{k} Y_{ikjl}) \nonumber \\ &+\frac{1}{2} \left(-\frac{1}{\omega_{i}-\omega_{j}}-\frac{1}{\omega_{k}-\omega_{i}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{k} \omega_{l} X_{jikl}+\omega_{j} Y_{ijkl}) \nonumber \\ &\left.
+\frac{1}{2} \left(\frac{1}{\omega_{i}-\omega_{j}}-\frac{1}{\omega_{k}-\omega_{i}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{j} \omega_{k} \omega_{l} X_{ijkl}+\omega_{i} Y_{jikl}) \right] \times [i=j+k+l+6] \; ,\end{aligned}$$ $$\begin{aligned} \label{Qijklfinal} & Q_{ijkl} \times [i+j+k+6=l] \nonumber\\ & =\left[\frac{1}{6} \left(\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}+\omega_{k}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{j} \omega_{k} X_{lijk}+\omega_{l} Y_{iljk}) \right.\nonumber \\ &+\frac{1}{6} \left(-\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}+\omega_{k}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{j} \omega_{l} X_{kijl}+\omega_{k} Y_{ikjl}) \nonumber \\ &+\frac{1}{6} \left(\frac{1}{\omega_{i}+\omega_{j}}-\frac{1}{\omega_{i}+\omega_{k}}+\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{i} \omega_{k} \omega_{l} X_{jikl}+\omega_{j} Y_{ijkl}) \nonumber \\ &\left.+\frac{1}{6} \left(\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}+\omega_{k}}-\frac{1}{\omega_{j}+\omega_{k}}\right) (\omega_{j} \omega_{k} \omega_{l} X_{ijkl}+\omega_{i} Y_{jikl}) \right] \times [i+j+k+6=l] \;.\end{aligned}$$ Our numerical results show that both these expressions vanish, as in the case of the Einstein equations with a massless scalar field [@cev_JHEP1410].
Similarly, using (\[Hidentity\]) and (\[Midentity\]), expressions (\[Tl\]), (\[Ril\]) and (\[Sijkl\]) can be simplified to yield: $$\begin{aligned} \label{Tlfinal} T_l&=\omega_{l}^2 X_{llll}+3 Y_{llll}+4\omega_{l}^4 W_{llll}+4\omega_{l}^2 \bar{W}_{llll}-2\omega_{l}^2 \left(A_{ll}+\omega_{l}^2 V_{ll}\right) \nonumber \\ & -84 G_{llll} - 800 \sum_{m=0}^{2l+2} \left( \frac{1}{4\omega_l^2-\omega_m^2}-\frac{2}{\omega_m^2} \right)(K_{llm})^2 \; ,\end{aligned}$$ $$\begin{aligned} \label{Rilfinal} R_{il}&= \frac{\left(\omega_{i}^2+\omega_{l}^2\right) \left(\omega_{l}^2 X_{illi}-\omega_{i}^2 X_{liil}\right)}{ \left(\omega_{l}^2-\omega_{i}^2\right)}+\frac{4 \left(\omega_{l}^2 Y_{ilil}-\omega_{i}^2 Y_{lili}\right)}{\omega_{l}^2-\omega_{i}^2} \nonumber \\ &+2\frac{\omega_{i}^2 \omega_{l}^2 (X_{illi}-X_{lili})}{\omega_{l}^2-\omega_{i}^2}+Y_{iill}+Y_{llii}+2\omega_{i}^2 \omega_{l}^2 (W_{iill}+W_{llii}) \nonumber \\ &+2\omega_{i}^2 \bar{W}_{llii}+2\omega_{l}^2 \bar{W}_{iill}-2\omega_{l}^2 \left(A_{ii}+\omega_{i}^2 V_{ii}\right) - 168G_{ilil} \nonumber \\ &-1600 \sum_{\scriptsize{\begin{matrix}m=0\\ i-l\neq \pm (m+3)\end{matrix}}}^{i+l+2} \left( \frac{1}{(\omega_i-\omega_l)^2-\omega_m^2}+\frac{1}{(\omega_i+\omega_l)^2-\omega_m^2} \right) (K_{ilm})^2 \nonumber \\ &+1600 \sum_{m=0}^{\scriptsize{\begin{matrix}m<2i+3\\ m<2l+3 \end{matrix}}} \frac{1}{\omega_m^2} K_{iim} K_{mll} \;, \end{aligned}$$ $$\begin{aligned} \label{Sijklfinal} S_{ijkl}=&-\frac{1}{2} \left(\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}-\omega_{k}}+\frac{1}{\omega_{j}-\omega_{k}}\right) (\omega_{i} \omega_{j} \omega_{k} X_{lijk}-\omega_{l} Y_{iljk}) \nonumber \\ &-\frac{1}{2} \left(\frac{1}{\omega_{i}+\omega_{j}}-\frac{1}{\omega_{i}-\omega_{k}}-\frac{1}{\omega_{j}-\omega_{k}}\right)(\omega_{i} \omega_{j} \omega_{l} X_{kijl}-\omega_{k} Y_{ikjl}) \nonumber \\ &-\frac{1}{2} \left(\frac{1}{\omega_{i}+\omega_{j}}-\frac{1}{\omega_{i}-\omega_{k}}+\frac{1}{\omega_{j}-\omega_{k}}\right) (\omega_{i} \omega_{k} \omega_{l} X_{jikl}-\omega_{j} Y_{ijkl})
\nonumber \\ &-\frac{1}{2} \left(\frac{1}{\omega_{i}+\omega_{j}}+\frac{1}{\omega_{i}-\omega_{k}}-\frac{1}{\omega_{j}-\omega_{k}}\right)(\omega_{j} \omega_{k} \omega_{l} X_{ijkl}-\omega_{i} Y_{jikl}) \nonumber \\ &-84G_{ijkl}-800 \sum_{m=0}^{i+j+2} \frac{1}{(\omega_i+\omega_j)^2-\omega_m^2} K_{ijm}K_{mkl} \nonumber \\ &-800 \sum_{\scriptsize{\begin{matrix}m=0\\ i-k\neq \pm (m+3)\end{matrix}}}^{\scriptsize{\begin{matrix}m<i+k+3\\ m<l+j+3\end{matrix}}} \frac{1}{(\omega_i-\omega_k)^2-\omega_m^2} K_{ikm}K_{mjl} \nonumber \\ &-800 \sum_{\scriptsize{\begin{matrix}m=0\\ j-k\neq \pm (m+3)\end{matrix}}}^{\scriptsize{\begin{matrix}m<j+k+3\\ m<l+i+3\end{matrix}}} \frac{1}{(\omega_j-\omega_k)^2-\omega_m^2} K_{jkm}K_{mil} \;, \end{aligned}$$ where it is understood that on both sides of (\[Sijklfinal\]) the condition $[i+j=k+l][i \neq l \neq j]$ holds. Following Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410], we finally obtain the renormalization flow equations for non-linear perturbation theory at first non-trivial order \[ResonantSystem\] $$\begin{aligned} \label{Cdot} 2 \omega_l \frac{dC_l}{d \tau} &= - \underbrace{\sum_{i,(i\neq l)} \sum_{j,(j\neq l)}}_{l\leq i+j} S_{ij(i+j-l)l} C_i C_j C_{i+j-l}\sin (\Phi_l+\Phi_{i+j-l}-\Phi_i-\Phi_j) \; , \\ 2\omega_l C_l \frac{d\Phi_l}{d \tau} &= -T_l C_l^3- \sum_{i,(i\neq l)} R_{il} C_i^2 C_l \nonumber \\ &- \underbrace{\sum_{i,(i\neq l)} \sum_{j,(j\neq l)}}_{l\leq i+j} S_{ij(i+j-l)l} C_i C_j C_{i+j-l}\cos (\Phi_l+\Phi_{i+j-l}-\Phi_i-\Phi_j)\;, \label{Phidot}\end{aligned}$$ where $C_l$ and $\Phi_l$ are the running renormalized amplitudes and phases, i.e. the solutions to (\[Cdot\], \[Phidot\]) with initial conditions $C_l(0) = a_l$ and $\Phi_l(0) = \phi_l$ (cf.
(\[seriesB1\]-\[thetan\])) and the solution, resummed up to the first non-trivial order, reads: $$\label{Bresummed} B(t,x) = \varepsilon \sum_{n=0}^{\infty} C_n\left(\varepsilon^2 t\right) \, \cos\left(\omega_n t + \Phi_n\left(\varepsilon^2 t\right)\right) \, e_n(x) \; .$$

Recurrence relations for the interaction coefficients {#recurrences}
=====================================================

Obtaining the interaction coefficients of the resonant system (\[Cdot\], \[Phidot\]) by direct integration of their defining integrals (\[eq:coeffs\]) is numerically expensive and, moreover, provides little insight into the ultraviolet asymptotics of the interaction coefficients, which is crucial for understanding the asymptotic behavior of solutions of the resonant system. For the massless scalar field model [@bbgll_PRL113; @cev_JHEP1410] in $d=3$ spatial dimensions the integrals (\[eq:coeffs\]) can be calculated analytically, providing closed-form formulas for the interaction coefficients [@gmll_PRD92]. This is possible due to the existence of a simplified representation of the eigenfunctions, and the approach can be generalized to an arbitrary odd number of spatial dimensions [@m_private1]. However, in the present study with $d=4$, we are unaware of any such method of direct analytic evaluation of the interaction coefficients. Thus, to study the asymptotic behavior of solutions of the resonant system, both numerically and analytically, it is useful to provide at least recurrence relations for the interaction coefficients. For the Einstein–massless scalar field system such relations were provided by Craps, Evnin and Vanhoof in [@cev_JHEP1510], and in this section we follow their approach. From the definition of the eigenfunctions $e_j$ in terms of Jacobi polynomials, cf.
(\[modes\]), and recurrence relations for Jacobi polynomials themselves $$\begin{aligned} &2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)P_{n+1}^{(\alpha ,\beta)}(x) \nonumber \\ &= -2(n+\alpha)(n+\beta)(2n+\alpha+\beta+2)P_{n-1}^{(\alpha ,\beta)}(x) \nonumber \\ &+(2n+\alpha+\beta+1) \left[ (2n+\alpha+\beta+2)(2n+\alpha+\beta)x+\alpha^2-\beta^2 \right] P_{n}^{(\alpha ,\beta)}(x) \; ,\end{aligned}$$ $$\begin{aligned} &(2n+\alpha+\beta+2)(1-x^2){d\over dx} P_n^{(\alpha ,\beta)}(x) = -2(n+1)(n+\alpha+\beta+1)P_{n+1}^{(\alpha ,\beta)}(x) \nonumber \\ &+(n+\alpha+\beta+1)(\alpha-\beta+(2n+\alpha+\beta+2)x) P_{n}^{(\alpha ,\beta)}(x)\end{aligned}$$ we get $$\begin{aligned} \label{identity1} \mu\nu'e_n = A_{-}(n)e_n+B(n)e_{n+1}+C(n)e_{n-1} \; ,\end{aligned}$$ $$\begin{aligned} \label{identity2} \mu\nu e_n' = \frac{1}{2} A_{+}(n)e_n + \frac{\omega_n}{2} B(n)e_{n+1} - \frac{\omega_n}{2} C(n)e_{n-1} \; \end{aligned}$$ with $$A_{\pm}(n) = -3 \pm \frac{5}{\omega_n^2 - 1} \, , \qquad B(n) = \frac{\sqrt{(n+1)(n+6)}}{\omega_n + 1} \, , \qquad C(n) = \frac{\sqrt{n(n+5)}}{\omega_n - 1} \, .$$ Now, differentiating (\[identity1\],\[identity2\]) and using eigen equation (\[eigenEq\]) to eliminate $e_n''$ and the identity $\left(\mu \nu'\right)' = -4 \mu \nu$ we get $$\begin{aligned} \label{identity3} \mu\nu'e_n' = -\frac{16 - 4\omega_n^2 + 3 \omega_n^4}{(\omega_n^2 - 4)(\omega_n^2 - 1)} e_n' + B(n) \frac{\omega_n}{\omega_{n+1}} e_{n+1}' + C(n) \frac{\omega_n}{\omega_{n-1}} e_{n-1}' + \frac{32}{\omega_n^2 - 4} \frac{\mu\nu}{\sin^2 x} e_n\; ,\end{aligned}$$ $$\begin{aligned} \label{identity4} 2 \mu\nu e_n = -\frac{3(4 + \omega_n^2)}{(\omega_n^2 - 4)(\omega_n^2 - 1)} e_n' - B(n) \frac{e_{n+1}'}{\omega_{n+1}} + C(n) \frac{e_{n-1}'}{\omega_{n-1}} + \frac{16}{\omega_n^2 - 4} \frac{\mu\nu}{\sin^2 x} e_n \; . \end{aligned}$$ The identities (\[identity1\]-\[identity4\]) are analogous to identities (15-18) in [@cev_JHEP1510] for the massless scalar field coupled to Einstein equations. 
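Both Jacobi-polynomial recurrences quoted above can be checked numerically from the finite-series representation of $P_n^{(\alpha,\beta)}$. A quick standalone sanity check follows (ours; the values of $\alpha$, $\beta$ and $x$ below are arbitrary samples, not the parameters fixed by (\[modes\])):

```python
# Standalone numerical check of the two Jacobi-polynomial identities above,
# using the finite series
#   P_n^{(a,b)}(x) = sum_s C(n+a, n-s) C(n+b, s) ((x-1)/2)^s ((x+1)/2)^(n-s).
# The parameters (a, b, x) are arbitrary sample values, not the ones fixed by
# the mode functions e_n.

def binom(r, k):
    # generalized binomial coefficient C(r, k) for real r and integer k >= 0
    out = 1.0
    for s in range(k):
        out *= (r - s) / (k - s)
    return out

def P(n, a, b, x):
    return sum(binom(n + a, n - s) * binom(n + b, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
               for s in range(n + 1))

def dP(n, a, b, x):
    # d/dx P_n^{(a,b)} = (n + a + b + 1)/2 * P_{n-1}^{(a+1,b+1)}
    return 0.0 if n == 0 else (n + a + b + 1) / 2 * P(n - 1, a + 1, b + 1, x)

a, b, x = 2.5, 1.5, 0.3
for n in range(1, 6):
    s = 2 * n + a + b
    # three-term recurrence in n
    lhs = 2 * (n + 1) * (n + a + b + 1) * s * P(n + 1, a, b, x)
    rhs = (-2 * (n + a) * (n + b) * (s + 2) * P(n - 1, a, b, x)
           + (s + 1) * ((s + 2) * s * x + a * a - b * b) * P(n, a, b, x))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
    # derivative identity
    lhs = (s + 2) * (1 - x * x) * dP(n, a, b, x)
    rhs = (-2 * (n + 1) * (n + a + b + 1) * P(n + 1, a, b, x)
           + (n + a + b + 1) * (a - b + (s + 2) * x) * P(n, a, b, x))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```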
Recurrence relation for the $X_{mnpq}$ integrals ------------------------------------------------ Using the identity (\[identity2\]) in the definition of the $X_{mnpq}$ integral (\[Xijkl\]), $X_{mnpq}$ can be given in terms of integrals $\chi_{mnpq}$, totally symmetric in their indices: $$\label{CHImnpq} \chi_{mnpq} = \int_{0}^{\frac{\pi}{2}} \text{d}x\, \mu(x) e_{m}(x) e_{n}(x) e_{p}(x) e_{q}(x) \, ,$$ namely $$\label{Xmnpq_in CHI} X_{mnpq} = \frac{1}{2} A_{+}(m) \chi_{mnpq} + \frac{\omega_m}{2} B(m) \chi_{(m+1)npq} - \frac{\omega_m}{2} C(m) \chi_{(m-1)npq} \, .$$ Now, to get the recurrence relation for the $\chi_{mnpq}$ integral we consider another auxiliary integral $$\label{tildeCHImnpq} \tilde \chi_{mnpq} = \int_{0}^{\frac{\pi}{2}} \text{d}x\, \mu^2(x) \nu'(x) e_{m}(x) e_{n}(x) e_{p}(x) e_{q}(x) \, .$$ In this integral we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) integrate by parts using $\mu' \nu = d-1$ and the definition (\[Xijkl\]). Equating the results of these two operations we get (for $d=4$): $$\begin{aligned} & A_{-}(m) \chi_{mnpq} + B(m) \chi_{(m+1)npq} + C(m) \chi_{(m-1)npq} \nonumber\\ = & -6 \chi_{mnpq} - X_{mnpq} - X_{npqm} - X_{pqmn} - X_{qmnp} \, . \label{Xmnpq_aux}\end{aligned}$$ Similarly, using the identity (\[identity1\]) in sequence for $\mu \nu' e_m$, $\mu \nu' e_n$, $\mu \nu' e_p$, $\mu \nu' e_q$ we get from (\[tildeCHImnpq\]) a sequence of identities $$\begin{aligned} & A_{-}(m) \chi_{mnpq} + B(m) \chi_{(m+1)npq} + C(m) \chi_{(m-1)npq} \nonumber\\ = & A_{-}(n) \chi_{npqm} + B(n) \chi_{(n+1)pqm} + C(n) \chi_{(n-1)pqm} \nonumber\\ = & A_{-}(p) \chi_{pqmn} + B(p) \chi_{(p+1)qmn} + C(p) \chi_{(p-1)qmn} \nonumber\\ = & A_{-}(q) \chi_{qmnp} + B(q) \chi_{(q+1)mnp} + C(q) \chi_{(q-1)mnp} \end{aligned}$$ that can be solved for $\chi_{(n+1)pqm}$, $\chi_{(p+1)qmn}$ and $\chi_{(q+1)mnp}$. 
Then substituting (\[Xmnpq\_in CHI\]) into (\[Xmnpq\_aux\]) we get the recurrence relation for the integral $\chi_{mnpq}$ (totally symmetric in its indices): $$\begin{aligned} \chi_{mnpq} = & \frac{1}{(12+m+n+p+q) \sqrt{m (m+5)}} \nonumber\\ \times & \left\{ \frac{2 \chi_{(m-1)npq}}{(2m + 3) (2n + 5) (2p + 5) (2q + 5)} \left[5 \left( 875 + 450(n+p+q) + 25 \left( n^2 + p^2 + q^2 \right) \right.\right.\right. \nonumber\\ & + 220 (np + nq + pq) + 104 npq + 10 \left(n^2(p+q) + p^2(n+q) + q^2(n+p) \right) \nonumber\\ & \left. \left. + 4 \left(n^2pq + np^2q + npq^2 \right) \right) + m (m+4) ( 375 + 200(n+p+q) + 100(np + nq + pq) + 48npq) \right] \nonumber\\ & + \left[ \frac{m-n-p-q-8}{2 m+3} \sqrt{(m-1) (m+4)} \chi_{(m-2)npq} + \frac{2 (n+3)}{2 n+5} \sqrt{n (n+5)} \chi_{(m-1)(n-1)pq} \right. \nonumber\\ & \left. \left. + \frac{2 (p+3)}{2 p+5} \sqrt{p (p+5)} \chi_{(m-1)n(p-1)q} + \frac{2 (q+3)}{2 q+5} \sqrt{q (q+5)} \chi_{(m-1)np(q-1)} \right] (2m + 5) \right\} \end{aligned}$$ with the initial condition (here and in the following we take all interaction coefficients with at least one negative index to be identically zero) $$\chi_{0000}=\frac{100}{77} \, .$$ Recurrence relation for the $G_{mnpq}$ integrals ------------------------------------------------ To get the recurrence for the $G_{mnpq}$ integral (\[Gijkl\]), in the auxiliary integral $$\int_{0}^{\frac{\pi}{2}}\text{d}x\, \frac{\mu(x)}{\sin^2 x}\mu(x)\nu'(x)e_{m}(x)e_{n}(x)e_{p}(x)e_{q}(x)$$ we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) use the identity $$\label{eq:mu_nu_prim} \mu \nu' = 2 - d - 2 \sin^2 x \, .$$ Equating the results of these two operations we get (for $d=4$) $$A_{-}(m) G_{mnpq} + B(m) G_{(m+1)npq} + C(m) G_{(m-1)npq} = -2 G_{mnpq} -2 \chi_{mnpq} \, .$$ Thus the recurrence relation for the $G_{mnpq}$ integral (totally symmetric in its indices) reads $$\begin{aligned} G_{mnpq} = &\frac{1}{\sqrt{m (m+5)}} \left[-2 (2m + 5) \chi_{(m-1)npq} + \frac{m^2 + 4m + 5}{2 m+3} 4
G_{(m-1)npq} \right. \nonumber\\ & \hskip 24mm \left.- \frac{2m + 5}{2m + 3} \sqrt{(m-1) (m+4)} G_{(m-2)npq} \right]\end{aligned}$$ with the initial condition $$G_{0000}=\frac{240}{77} \, .$$ Recurrence relation for the $K_{mnp}$ integrals ----------------------------------------------- To find the recurrence relations for the $K_{mnp}$ integrals (\[Kijk\]) we combine the methods of the two previous subsections. First, in an auxiliary integral $$\int_{0}^{\frac{\pi}{2}}\text{d}x\, \frac{\mu}{\sin^2 x} \mu \nu' e_{m}(x)e_{n}(x)e_{p}(x)$$ we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) use the identity (\[eq:mu\_nu\_prim\]) to express the $K_{mnp}$ integral in terms of an integral $\sigma_{mnp}$, totally symmetric in its indices: $$\label{eq:SIGMA_mnp} \sigma_{mnp} = \int_{0}^{\frac{\pi}{2}}\text{d}x\, \mu e_{m}(x)e_{n}(x)e_{p}(x) \, .$$ Equating the results of these two operations we get: $$A_{-}(m) K_{mnp} + B(m) K_{(m+1)np} + C(m) K_{(m-1)np} = -2 K_{mnp} -2 \sigma_{mnp} \, .$$ Now, to get the recurrence relation for the $\sigma_{mnp}$ integral we consider another auxiliary integral $$\label{tildeSIGMAmnp} \tilde \sigma_{mnp} = \int_{0}^{\frac{\pi}{2}} \text{d}x\, \mu^2(x) \nu'(x) e_{m}(x) e_{n}(x) e_{p}(x) \, .$$ In this integral we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) integrate by parts using $\mu' \nu = d-1$, the identity (\[identity2\]), and the definition (\[eq:SIGMA\_mnp\]).
Equating the results of these two operations we get (for $d=4$): $$\begin{aligned} & A_{-}(m) \sigma_{mnp} + B(m) \sigma_{(m+1)np} + C(m) \sigma_{(m-1)np} \nonumber\\ = & - \frac{1}{2} A_{+}(m) \sigma_{mnp} - \frac{\omega_m}{2} B(m) \sigma_{(m+1)np} + \frac{\omega_m}{2} C(m) \sigma_{(m-1)np} \nonumber\\ & - \frac{1}{2} A_{+}(n) \sigma_{mnp} - \frac{\omega_n}{2} B(n) \sigma_{m(n+1)p} + \frac{\omega_n}{2} C(n) \sigma_{m(n-1)p} \nonumber\\ & - \frac{1}{2} A_{+}(p) \sigma_{mnp} - \frac{\omega_p}{2} B(p) \sigma_{mn(p+1)} + \frac{\omega_p}{2} C(p) \sigma_{mn(p-1)} - 6 \sigma_{mnp} \, . \label{SIGMAmnp_aux}\end{aligned}$$ To eliminate $\sigma_{(n+1)pm}$ and $\sigma_{(p+1)mn}$ from the equation above, we use the identity (\[identity1\]) in sequence for $\mu \nu' e_m$, $\mu \nu' e_n$, $\mu \nu' e_p$ to get from (\[tildeSIGMAmnp\]) a sequence of identities $$\begin{aligned} & A_{-}(m) \sigma_{mnp} + B(m) \sigma_{(m+1)np} + C(m) \sigma_{(m-1)np} \nonumber\\ = & A_{-}(n) \sigma_{npm} + B(n) \sigma_{(n+1)pm} + C(n) \sigma_{(n-1)pm} \nonumber\\ = & A_{-}(p) \sigma_{pmn} + B(p) \sigma_{(p+1)mn} + C(p) \sigma_{(p-1)mn} \end{aligned}$$ that can be solved for $\sigma_{(n+1)pm}$ and $\sigma_{(p+1)mn}$. Finally we get $$\begin{aligned} K_{mnp} = &\frac{1}{\sqrt{m (m+5)}} \left[-2 (2m + 5) \sigma_{(m-1)np} + \frac{m^2 + 4m + 5}{2 m+3} 4 K_{(m-1)np} \right. \nonumber\\ & \hskip 23mm \left.- \frac{2m + 5}{2m + 3} \sqrt{(m-1) (m+4)} K_{(m-2)np} \right]\end{aligned}$$ and $$\begin{aligned} \sigma_{mnp} = & \frac{1}{(9+m+n+p) \sqrt{m (m+5)}} \nonumber\\ \times & \left\{ \frac{2 \sigma_{(m-1)np}}{(2m + 3) (2n + 5) (2p + 5)} \left[5 \left( 100 + 60(n+p) + 5 \left( n^2 + p^2 \right) \right.\right.\right. \nonumber\\ & \left. \left. + 32 np + 2 \left(n^2 p + p^2n \right) \right) + m (m+4) ( 25 + 20(n+p) + 12np) \right] \nonumber\\ & + \left[ \frac{m-n-p-5}{2 m+3} \sqrt{(m-1) (m+4)} \sigma_{(m-2)np} + \frac{2 (n+3)}{2 n+5} \sqrt{n (n+5)} \sigma_{(m-1)(n-1)p} \right. \nonumber\\ & \left. \left.
+ \frac{2 (p+3)}{2 p+5} \sqrt{p (p+5)} \sigma_{(m-1)n(p-1)} \right] (2m + 5) \right\} \end{aligned}$$ with the initial conditions $$K_{000}=\frac{3\sqrt{30}}{7} \quad \mbox{and} \quad \sigma_{000} = \frac{4\sqrt{10}}{7 \sqrt{3}} \, .$$ Recurrence relations for the $Y_{mnpq}$ integrals ------------------------------------------------- Using the identity (\[identity2\]) for $\mu \nu e_m'$ in the definition of the $Y_{mnpq}$ integral (\[Yijkl\]), $Y_{mnpq}$ can be given in terms of integrals $\gamma_{mnpq}$, symmetric in the first and the second pairs of indices: $$\label{GAMMAmnpq} \gamma_{mnpq} = \int_{0}^{\frac{\pi}{2}} \text{d}x\, \mu(x) e_{m}(x) e_{n}(x) e_{p}'(x) e_{q}'(x) \, ,$$ namely $$\label{Ymnpq_in_GAMMA} Y_{mnpq} = \frac{1}{2} A_{+}(m) \gamma_{mnpq} + \frac{\omega_m}{2} B(m) \gamma_{(m+1)npq} - \frac{\omega_m}{2} C(m) \gamma_{(m-1)npq} \, .$$ Now, to get the recurrence relations in the first pair of (symmetric) indices for the $\gamma_{mnpq}$ integral we consider another auxiliary integral $$\label{tildeGAMMAmnpq} \tilde \gamma_{mnpq} = \int_{0}^{\frac{\pi}{2}} \text{d}x\, \mu^2(x) \nu'(x) e_{m}(x) e_{n}(x) e_{p}'(x) e_{q}'(x) \, .$$ In this integral we use the identity (\[identity1\]) either for $\mu \nu' e_m$ or for $\mu \nu' e_n$ to get $$\begin{aligned} & A_{-}(m) \gamma_{mnpq} + B(m) \gamma_{(m+1)npq} + C(m) \gamma_{(m-1)npq} \nonumber\\ = & A_{-}(n) \gamma_{mnpq} + B(n) \gamma_{m(n+1)pq} + C(n) \gamma_{m(n-1)pq} \, . \label{GAMMAmnpq_aux}\end{aligned}$$ Then, integrating $$Y_{mnpq}+Y_{nmpq} = \int_0^{\pi/2} dx \, \nu \left( e_m e_n \right)' \left( \mu e_p' \right) \left( \mu e_q' \right)$$ by parts and using (\[identity1\]) for $\mu \nu' e_m$, and the eigen equation for $\left( \mu e_p'\right)'$ and $\left( \mu e_q'\right)'$, we get: $$\begin{aligned} \label{Ysymmetrized} Y_{mnpq}+Y_{nmpq} & = - A_{-}(m) \gamma_{mnpq} - B(m) \gamma_{(m+1)npq} - C(m) \gamma_{(m-1)npq} \nonumber\\ & + \omega_p^2 X_{qpmn} + \omega_q^2 X_{pqmn} - 8 \lambda_{qpmn} - 8 \lambda_{pqmn}
\, ,\end{aligned}$$ where $$\lambda_{qpmn} = \int_0^{\pi/2} dx \, \frac{\mu^2 \nu}{\sin^2 x} e_q' e_p e_m e_n$$ can be easily expressed with a use of (\[identity2\]) in terms of $G_{qpmn}$ integrals: $$\label{eq:lambdaRec} \lambda_{qpmn} = \frac{1}{2} A_{+}(q) G_{qpmn} + \frac{\omega_q}{2} B(q) G_{(q+1)pmn} - \frac{\omega_q}{2} C(q) G_{(q-1)pmn} \, .$$ Now, equations (\[Ymnpq\_in\_GAMMA\]), (\[GAMMAmnpq\_aux\]), (\[Ysymmetrized\]) and (\[eq:lambdaRec\]) can be solved to yield the recurrence relation in the first pair of indices of the $\gamma_{mnpq}$ integral. In particular, setting $p=q=0$, we get $$\begin{aligned} & \gamma_{mn00} = \frac{1}{(6+m+n) \sqrt{n(n+5)}} \left( \left( \frac{(2n+5)(24m+55)}{2(2m+5)} + \frac{5(2m+7)}{2(2n+3)} \right) \gamma_{m(n-1)00} \right. \nonumber\\ & + (2n+5) \left( \frac{2(m+3)}{2m+5} \sqrt{m(m+5)} \gamma_{(m-1)(n-1)00} + \frac{n-m-2}{2n+3}\sqrt{(n-1)(n+4)} \gamma_{m(n-2)00} \right. \nonumber\\ & \left. \left. + 72 X_{00m(n-1)} + \frac{16}{7} \left( 10 G_{00m(n-1)} - 3 \sqrt{6} G_{01m(n-1)} \right) \right) \right) \, . \label{GAMMAmnpq_1}\end{aligned}$$ To get the recurrence relation in the second pair of (symmetric) indices for the $\gamma_{mnpq}$ integral we consider again the auxiliary integral (\[tildeGAMMAmnpq\]) where we either (1) use the identity (\[identity1\]) for $\mu\nu'e_n$ or (2) use the identity (\[identity3\]) for $\mu\nu'e_q'$. 
Equating the results of these two operations we get $$\begin{aligned} & A_{-}(n) \gamma_{mnpq} + B(n) \gamma_{m(n+1)pq} + C(n) \gamma_{m(n-1)pq} = -\frac{16 - 4\omega_q^2 + 3 \omega_q^4}{(\omega_q^2 - 4)(\omega_q^2 - 1)} \gamma_{mnpq} \nonumber\\ & + B(q) \frac{\omega_q}{\omega_{q+1}} \gamma_{mnp(q+1)} + C(q) \frac{\omega_q}{\omega_{q-1}} \gamma_{mnp(q-1)} + \frac{32}{\omega_q^2 - 4} \lambda_{pqmn}\end{aligned}$$ Solving for $\gamma_{mnp(q+1)}$ and shifting the index $q+1 \rightarrow q$ we finally get $$\begin{aligned} & \gamma_{mnpq} = \frac{1}{\sqrt{q(q+5)}} \left( \left( \frac{1}{(2n + 5)(2n + 7)} \left( \frac{2 (15 + 12n + 2n^2)}{q + 2} - 5 (7 + 2q) \right) \right. \right. \nonumber\\ & \left. \hskip 36mm + \frac{3 (3q + 7)}{(q + 1)(2q + 3)} \right) \gamma_{mnp(q-1)} \nonumber\\ & + (q + 3)(2q + 5) \left( \frac{\sqrt{(n+1)(n + 6)}}{(q + 2)(2n + 7)} \gamma_{m(n+1)p(q-1)} - \frac{\sqrt{(q-1)(q + 4)}}{(q + 1)(2q + 3)} \gamma_{mnp(q-2)} \right. \nonumber\\ & \left. \hskip 30mm + \frac{\sqrt{n(n + 5)}}{(q + 2)(2n + 5)} \gamma_{m(n-1)p(q-1)} \right) \nonumber\\ & + \frac{(2q + 5)}{(q + 1)(q + 2)} \left( \frac{8(p + 3) \sqrt{p(p + 5)}}{2p + 5} G_{mn(p-1)(q-1)} + \frac{16 (25 + 18 p + 3 p^2)}{(2p+5)(2p+7)} G_{mnp(q-1)} \right. \nonumber\\ & \left. \left. \hskip 25mm - \frac{8(p + 3) \sqrt{(p+1)(p + 6)}}{2p + 7} G_{mn(p+1)(q-1)} \right) \right) \, . \label{GAMMAmnpq_2}\end{aligned}$$ Equations (\[GAMMAmnpq\_1\]) and (\[GAMMAmnpq\_2\]), together with the initial condition $$\gamma_{0000}=\frac{80}{11}$$ provide the complete set of recurrence relations for the $\gamma_{mnpq}$ integrals.
Recurrence relations for the $W_{ijkk}$ integrals ------------------------------------------------- To find the recurrence relations for the $W_{ijkk}$ integrals (\[Wijkl\]) we consider more general $W_{ijkl}$ integrals, $$W_{ijkl} =\int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}(x)e_{j}(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y\, e_{k}(y) e_{l}(y) \mu(y) \, , \label{eq:Wijkl}$$ and in the auxiliary integral $$\int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}(x)e_{j}(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y\, e_{k}(y) e_{l}(y) \mu(y) \mu(y) \nu'(y)$$ we use the identity (\[identity1\]) in sequence for $\mu \nu' e_{k}$ and $\mu \nu' e_{l}$ and then substitute $l=k+1$ to get $$\begin{aligned} & A_{-}(k) W_{ijk(k+1)} + B(k) W_{ij(k+1)(k+1)} + C(k) W_{ij(k-1)(k+1)} \nonumber\\ = & A_{-}(k+1) W_{ijk(k+1)} + B(k+1) W_{ijk(k+2)} + C(k+1) W_{ijkk} \, . \label{eq:eq_W}\end{aligned}$$ Then we use identities  [^7] $$\begin{aligned} W_{ijk(k+1)} &= - \frac{1}{4 (\omega_k + 1)} \left( X_{(k+1)ijk} - X_{kij(k+1)}\right) \nonumber\\ W_{ijk(k+2)} &= - \frac{1}{8 (\omega_k + 2)} \left( X_{(k+2)ijk} - X_{kij(k+2)}\right) \nonumber\\ W_{ij(k-1)(k+1)} &= - \frac{1}{8 \omega_k} \left( X_{(k+1)ij(k-1)} - X_{(k-1)ij(k+1)}\right) \, \nonumber\end{aligned}$$ to solve (\[eq:eq\_W\]) for $W_{ij(k+1)(k+1)}$. Finally, shifting the index $k+1 \rightarrow k$, we get $$\begin{aligned} W_{ijkk} &= W_{ij(k-1)(k-1)} \nonumber\\ & - \frac{5}{(\omega_k - 3)(\omega_k^2 - 1)} \frac{1}{\sqrt{k(k+5)}} \left( X_{kij(k-1)} - X_{(k-1)ijk}\right) \nonumber\\ & - \frac{\omega_k - 1}{8 \omega_k (\omega_k + 1)} \sqrt{\frac{(k+1)(k+6)}{k(k+5)}} \left( X_{(k+1)ij(k-1)} - X_{(k-1)ij(k+1)}\right) \nonumber\\ & + \frac{\omega_k - 1}{8 (\omega_k-3) (\omega_k - 2)} \sqrt{\frac{(k-1)(k+4)}{k(k+5)}} \left( X_{kij(k-2)} - X_{(k-2)ijk}\right) \, .\end{aligned}$$ Thus the $W_{ijkk}$ integrals are given in terms of $W_{ij00}$ integrals and $X_{abcd}$ integrals. 
Now, to find the recurrence for the $W_{ij00}$ integrals we consider auxiliary integrals $$\begin{aligned} & 2 \int_0^{\pi/2} dx\, \mu(x) \nu(x) \mu(x) \nu'(x) e_i(x) e_j(x) \int_0^x dy \mu(y) e_k(y) e_l(y) \nonumber\\ + & \int_0^{\pi/2} dx\, \mu(x) \nu(x) e_i(x) e_j(x) \int_0^x dy \mu(y) \mu(y) \nu'(y)e_k(y) e_l(y)\end{aligned}$$ and we either (1) use the identity (\[identity1\]) for $\mu \nu' e_i$ and $\mu \nu' e_k$, or (2) integrate by parts using $\mu' \nu = d-1$, the identity (\[identity2\]), and the definition (\[eq:Wijkl\]). Then, in the second of the auxiliary integrals we either (3) use the identity (\[identity1\]) for $\mu \nu' e_i$, or (4) use the identity (\[identity1\]) for $\mu \nu' e_j$. These two pairs of operations result in the system of two equations that can be solved for $W_{(i+1)jkl}$ and $W_{i(j+1)kl}$. Then shifting the index $i+1 \rightarrow i$ and setting $k=l=0$ we get: $$\begin{aligned} W_{ij00}=& \frac{1}{(i+j+7) \sqrt{j(j+5)}} \left( \left( \frac{(12 i + 25)(2 j + 5)}{2(2i+5)} + \frac{5(2i+9)}{2(2j+3)}\right) W_{i(j-1)00} \right. \nonumber\\ & + (2j + 5) \left( \frac{2(i+3)}{2i+5}\sqrt{i(i+5)}W_{(i-1)(j-1)00} + \frac{j-i-3}{2j+3}\sqrt{(j-1)(j+4)}W_{i(j-2)00} \right. \nonumber\\ & + \left. \left. \frac{\sqrt{3}}{14 \sqrt{2}} \left( X_{10i(j-1)} - X_{01i(j-1)}\right) \right) \right) \,,\end{aligned}$$ with the initial condition $$W_{0000} = \frac{358}{3003} \,.$$ Recurrence relations for the $\bar{W}_{ijkk}$ integrals ------------------------------------------------------- The recurrence relations for the $\bar{W}_{ijkk}$ integrals (\[WSijkk\]) can be obtained in close analogy to the case of $W_{ijkk}$ integrals described in the previous subsection. 
To find the recurrence relations for the $\bar{W}_{ijkk}$ integrals we consider more general $\bar{W}_{ijkl}$ integrals, $$\bar{W}_{ijkl} =\int_{0}^{\frac{\pi}{2}}\text{d}x\,e'_{i}(x)e'_{j}(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y\, e_{k}(y) e_{l}(y) \mu(y) \, , \label{eq:WSijkl}$$ and in the auxiliary integral $$\int_{0}^{\frac{\pi}{2}}\text{d}x\,e_{i}'(x)e_{j}'(x)\mu(x)\nu(x)\int_{0}^{x}\text{d}y\, e_{k}(y) e_{l}(y) \mu(y) \mu(y) \nu'(y)$$ we use the identity (\[identity1\]) in sequence for $\mu \nu' e_{k}$ and $\mu \nu' e_{l}$ and then substitute $l=k+1$ to get $$\begin{aligned} & A_{-}(k) \bar{W}_{ijk(k+1)} + B(k) \bar{W}_{ij(k+1)(k+1)} + C(k) \bar{W}_{ij(k-1)(k+1)} \nonumber\\ = & A_{-}(k+1) \bar{W}_{ijk(k+1)} + B(k+1) \bar{W}_{ijk(k+2)} + C(k+1) \bar{W}_{ijkk} \, . \label{eq:eq_WS}\end{aligned}$$ Then we use identities  [^8] $$\begin{aligned} \bar{W}_{ijk(k+1)} &= - \frac{1}{4 (\omega_k + 1)} \left( Y_{(k+1)kij} - Y_{k(k+1)ij}\right) \nonumber\\ \bar{W}_{ijk(k+2)} &= - \frac{1}{8 (\omega_k + 2)} \left( Y_{(k+2)kij} - Y_{k(k+2)ij}\right) \nonumber\\ \bar{W}_{ij(k-1)(k+1)} &= - \frac{1}{8 \omega_k} \left( Y_{(k+1)(k-1)ij} - Y_{(k-1)(k+1)ij}\right) \, \nonumber\end{aligned}$$ to solve (\[eq:eq\_WS\]) for $\bar{W}_{ij(k+1)(k+1)}$. Finally, shifting the index $k+1 \rightarrow k$, we get $$\begin{aligned} \bar{W}_{ijkk} &= \bar{W}_{ij(k-1)(k-1)} \nonumber\\ & - \frac{5}{(2k+3)(2k+5)(2k+7)} \frac{1}{\sqrt{k(k+5)}} \left( Y_{k(k-1)ij} - Y_{(k-1)kij}\right) \nonumber\\ & - \frac{2k+5}{16 (k+3)(2k+7)} \sqrt{\frac{(k+1)(k+6)}{k(k+5)}} \left( Y_{(k+1)(k-1)ij} - Y_{(k-1)(k+1)ij}\right) \nonumber\\ & + \frac{2k+5}{16 (k+2) (2k+3)} \sqrt{\frac{(k-1)(k+4)}{k(k+5)}} \left( Y_{k(k-2)ij} - Y_{(k-2)kij}\right) \, .\end{aligned}$$ Thus the $\bar{W}_{ijkk}$ integrals are given in terms of $\bar{W}_{ij00}$ integrals and $Y_{abcd}$ integrals. 
Now, to find the recurrence for the $\bar{W}_{ij00}$ integrals we integrate (\[eq:WSijkl\]) by parts and (using the eigen equation (\[eigenEq\]) for $\left(\mu e_i'\right)'$) we get $$\bar{W}_{ijkl} = \omega_i^2 W_{ijkl} - 8 T_{ijkl} - N_{ijkl} - X_{ijkl} \, ,$$ where $$T_{ijkl} = \int_0^{\pi/2} dx \, \frac{\mu(x)\nu(x)}{\sin^2 x} e_i(x) e_j(x) \int_0^{x} dy \, e_{k}(y) e_{l}(y) \mu(y) \label{eq:Tijkl}$$ and $$N_{ijkl} = \int_0^{\pi/2} dx \, \mu(x)\nu'(x) e_i'(x) e_j(x) \int_0^{x} dy \, e_{k}(y) e_{l}(y) \mu(y) \, .$$ Since the left-hand side of (\[eq:WSijkl\]) is symmetric in $ij$ indices we can write $$2 \bar{W}_{ijkl} = \omega_i^2 W_{ijkl} + \omega_j^2 W_{ijkl} - 16 T_{ijkl} - N_{ijkl} - N_{jikl} - X_{ijkl} - X_{jikl} \, .$$ Now, integrating twice by parts and using $\left(\mu \nu'\right)' = -4 \mu \nu$ and $\mu'\nu=d-1$, it can be easily established that $$\begin{aligned} N_{ijkl} + N_{jikl} & = 4 W_{ijkl} - \int_{0}^{\pi/2} \mu^2 \nu' e_i e_j e_k e_l \\ & = 4 W_{ijkl} + 2(d-1) \chi_{ijkl} + X_{ijkl} + X_{jkli} + X_{klij} + X_{lijk}\end{aligned}$$ thus we finally get (for $d=4$): $$\begin{aligned} \bar{W}_{ij00}=& 2(17+i(i+6)+j(j+6)) W_{ij00} - 8 T_{ij00} - X_{00ij} - X_{ij00} - X_{ji00} - 3\chi_{ij00} \label{eq:RecWSij00}\end{aligned}$$ with the initial condition $$\bar{W}_{0000} = \frac{1216}{1001} \,.$$ To find the recurrence relations for the $T_{ijkl}$ integrals (\[eq:Tijkl\]), needed in (\[eq:RecWSij00\]), we consider an auxiliary integral $$\int_{0}^{\frac{\pi}{2}} dx \, \frac{\mu(x)\nu(x)}{\sin^2 x} \mu \nu' e_{i}(x)e_{j}(x) \int_0^{x} dy \, e_{k}(y) e_{l}(y) \mu(y)$$ and we either (1) use the identity (\[identity1\]) for $\mu \nu' e_j$, or (2) use the identity (\[eq:mu\_nu\_prim\]). This yields $$A_{-}(j) T_{ijkl} + B(j) T_{i(j+1)kl} + C(j) T_{i(j-1)kl} = (2-d) T_{ijkl} -2 W_{ijkl} \,$$ and it finally gives $$\begin{aligned} T_{ij00} =& \frac{1}{\sqrt{j(j+5)}} \left( - 2 (2 j + 5) W_{i(j-1)00} + \frac{4(2+(j+1)(j+3))}{2j+3} T_{i(j-1)00} \right.
\nonumber\\ & \left. - \frac{(2j+5)\sqrt{(j-1)(j+4)}}{2j+3} T_{i(j-2)00}\right) \, ,\end{aligned}$$ with the initial condition $$T_{0000} = \frac{8}{33} \, .$$ Recurrence relation and closed form expressions for the $V_{mn}$ integrals -------------------------------------------------------------------------- To find the recurrence for the $V_{mn}$ integrals (\[Vij\]) (symmetric in their indices), we consider an auxiliary integral $$2 \int_0^{\pi/2} dx\, \mu \nu \mu \nu' e_m e_n$$ and we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) use the identity (\[identity1\]) for $\mu \nu' e_n$, or (3) integrate by parts using $\mu' \nu = d-1$ and the identity (\[identity2\]) for $\mu \nu e_m'$ and $\mu \nu e_n'$. Equating the results of these three operations we get (for $d=4$): $$\begin{aligned} & 2 \left( A_{-}(m) V_{mn} + B(m) V_{(m+1)n} + C(m) V_{(m-1)n} \right) \nonumber\\ = & 2 \left( A_{-}(n) V_{mn} + B(n) V_{m(n+1)} + C(n) V_{m(n-1)} \right) \nonumber\\ = & -6 V_{mn} - \frac{1}{2} A_{+}(m) V_{mn} - \frac{\omega_m}{2} B(m) V_{(m+1)n} + \frac{\omega_m}{2} C(m) V_{(m-1)n} \nonumber\\ & \hskip 14mm - \frac{1}{2} A_{+}(n) V_{mn} - \frac{\omega_n}{2} B(n) V_{m(n+1)} + \frac{\omega_n}{2} C(n) V_{m(n-1)}\end{aligned}$$ This system can be solved for $V_{(m+1)n}$ and $V_{m(n+1)}$. Shifting the index $m+1 \rightarrow m$ we finally get $$\begin{aligned} V_{mn}=& \frac{1}{(m+n+7) \sqrt{m(m+5)}} \left(\frac{2 \left( m (m+4) (12n+25) + 5 \left( n^2+16n+30\right) \right)} {(2m+3)(2n+5)} V_{(m-1)n} \right. \nonumber\\ & \left. + (2m + 5) \left( \frac{m-n-3}{2m+3}\sqrt{(m-1)(m+4)} V_{(m-2)n} + \frac{2(n+3)}{2n+5}\sqrt{n(n+5)} V_{(m-1)(n-1)} \right) \right)\,, \label{VmnRec}\end{aligned}$$ with the initial condition $$V_{00} = \frac{4}{7} \,.
\label{V00}$$ Interestingly, the solution of the recurrence relations (\[VmnRec\], \[V00\]) can be found in a closed form  [^9]: $$\begin{aligned} V_{mm} &= \frac{2 (m+1) (m+2) (4m + 15)}{3 (2m + 5) (2 m + 7)} \,, \nonumber\\ V_{m(m-1)} &= \frac{(m+1) (8m + 25)}{6 (2m + 5)} \sqrt{\frac{m}{m+5}} \,, \nonumber\\ V_{mn} &\stackrel{m-n>1}{=} \frac{2}{3} (n+3) \sqrt{\frac{(n+1)^{\overline{5}}}{(m+1)^{\overline{5}}}} \,, \label{VmnSol}\end{aligned}$$ where $n^{\overline{k}} := n(n+1)...(n+k-1)$, $k>0$. Recurrence relation and closed form expressions for the $A_{mn}$ integrals -------------------------------------------------------------------------- To find the recurrence for the $A_{mn}$ integrals (\[Aij\]) (symmetric in their indices), we integrate by parts using $$\mu \nu e_m' e_n' = \left( \mu \nu e_m e_n'\right)' - \nu e_m \left( \mu e_n' \right)' - \mu \nu' e_m e_n'$$ and the eigen equation $$\left( \mu e_n' \right)' = - \mu \omega_n^2 e_n + \frac{8 \mu}{\sin^2 x} e_n$$ Then symmetrizing the result as $A_{mn} = \left( A_{mn} + A_{nm} \right)/2$ and using $$-\frac{1}{2} \mu \nu' \left( e_m e_n' + e_m' e_n\right) = -\frac{1}{2} \left( \mu \nu' e_m e_n \right)' + \frac{1}{2} \left( \mu \nu' \right)' e_m e_n$$ together with $ \left( \mu \nu' \right)' = -4 \mu \nu$ we finally get $$A_{mn} = \frac{1}{2} \left( \omega_m^2 + \omega_n^2 - 4 \right) V_{mn} - 8 Q_{mn} \, , \label{AmnRec}$$ with $$Q_{mn} = \int_{0}^{\frac{\pi}{2}}\text{d}x\, \frac{\mu}{\sin^2 x} e_m e_n \, . \label{Qmn}$$ The recurrence relation for the $Q_{mn}$ integrals (symmetric in their indices) can be easily obtained in analogy to the $K_{mnp}$ integrals: in an auxiliary integral $$\int_{0}^{\frac{\pi}{2}}\text{d}x\, \frac{\mu}{\sin^2 x} \mu \nu' e_m e_n$$ we either (1) use the identity (\[identity1\]) for $\mu \nu' e_m$, or (2) use the identity (\[eq:mu\_nu\_prim\]). 
Equating the results of these two operations we get (for $d=4$): $$A_{-}(m) Q_{mn} + B(m) Q_{(m+1)n} + C(m) Q_{(m-1)n} = -2 Q_{mn} - 2 V_{mn} \, .$$ Shifting the index $m+1 \rightarrow m$ we finally get $$Q_{mn} = \frac{2m + 5}{\sqrt{m(m+5)}} \left( \frac{4 \left( m^2 + 4m + 5 \right)}{(2m+5)(2m+3)} Q_{(m-1)n} - \frac{\sqrt{(m-1)(m+4)}}{2m+3} Q_{(m-2)n} - 2 V_{(m-1)n}\right) \,, \label{QmnRec}$$ with the initial condition $$Q_{00} = 2 \,. \label{Q00}$$ Interestingly, the solution of the recurrence relations (\[AmnRec\], \[QmnRec\], \[Q00\]) can be found in a closed form  [^10]: $$\begin{aligned} A_{mm} &= \frac{4 (m+1) (m+2) (m+3) (4m^2 + 18m + 15)}{3 (2m + 5) (2 m + 7)} \,, \nonumber\\ A_{m(m-1)} &= \frac{2 (m+1) (m+2) (4m^2 + 11m + 5)}{3 (2m + 5)} \sqrt{\frac{m}{m+5}} \,, \nonumber\\ A_{mn} &\stackrel{m-n>1}{=} \frac{4}{3} (n+3) (15 + 6(2n-m) + 2n^2 - m^2) \sqrt{\frac{(n+1)^{\overline{5}}}{(m+1)^{\overline{5}}}} \,. \label{AmnSol}\end{aligned}$$ Preliminary numerical results {#PreliminaryNumericalResults} ============================= As was stressed in Sec. \[Introduction\], when investigating the problem of AdS stability by solving the Einstein equations (\[Einsteineq\]) numerically, we can never have access to the $\varepsilon \rightarrow 0$ limit (as the instability can be expected to reveal itself at the $\mathcal{O}\left(\varepsilon^{-2}\right)$ time-scale at the earliest). On the other hand, due to the scaling symmetry $$C_l(\tau) \rightarrow \varepsilon \, C_l\left(\varepsilon^2 \tau \right) \quad \mbox{and} \quad \Phi_l(\tau) \rightarrow \Phi_l\left(\varepsilon^2 \tau \right) \, ,$$ i.e. if $C_l(\tau)$ and $\Phi_l(\tau)$ are solutions to (\[ResonantSystem\]), so are $\varepsilon \, C_l\left(\varepsilon^2 \tau \right)$ and $\Phi_l\left(\varepsilon^2 \tau \right)$.
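As a consistency check on the recurrence relations derived above, one can evaluate them numerically. The following sketch is our own verification script, not part of the derivation; in it we take $\omega_m = 2m+6$, as read off from the coefficient identities of the previous subsections (an assumption of this sketch). It evaluates (\[VmnRec\]) and (\[QmnRec\]) by memoized recursion, builds $A_{mn}$ from (\[AmnRec\]), and compares with the closed forms (\[VmnSol\]) and (\[AmnSol\]):

```python
import math
from functools import lru_cache

def omega(m):
    # frequency of the m-th eigenmode; omega_m = 2m + 6 is our reading of
    # the coefficient identities above (an assumption of this sketch)
    return 2 * m + 6

@lru_cache(maxsize=None)
def V(m, n):
    # recurrence (VmnRec); V_{mn} is symmetric, so order the indices
    if m < n:
        m, n = n, m
    if n < 0:
        return 0.0  # reached only with a vanishing square-root prefactor
    if m == 0:
        return 4.0 / 7.0  # initial condition V_00 = 4/7
    t1 = (2.0 * (m * (m + 4) * (12 * n + 25) + 5 * (n * n + 16 * n + 30))
          / ((2 * m + 3) * (2 * n + 5))) * V(m - 1, n)
    t2 = (2 * m + 5) * (
        (m - n - 3) / (2 * m + 3) * math.sqrt((m - 1) * (m + 4)) * V(m - 2, n)
        + 2 * (n + 3) / (2 * n + 5) * math.sqrt(n * (n + 5)) * V(m - 1, n - 1))
    return (t1 + t2) / ((m + n + 7) * math.sqrt(m * (m + 5)))

@lru_cache(maxsize=None)
def Q(m, n):
    # recurrence (QmnRec); Q_{mn} is symmetric
    if m < n:
        m, n = n, m
    if n < 0:
        return 0.0
    if m == 0:
        return 2.0  # initial condition Q_00 = 2
    return (2 * m + 5) / math.sqrt(m * (m + 5)) * (
        4.0 * (m * m + 4 * m + 5) / ((2 * m + 5) * (2 * m + 3)) * Q(m - 1, n)
        - math.sqrt((m - 1) * (m + 4)) / (2 * m + 3) * Q(m - 2, n)
        - 2.0 * V(m - 1, n))

def A(m, n):
    # A_{mn} from eq. (AmnRec)
    return 0.5 * (omega(m) ** 2 + omega(n) ** 2 - 4) * V(m, n) - 8.0 * Q(m, n)

def rising(n, k):
    # rising factorial n^{k-bar} = n (n+1) ... (n+k-1)
    out = 1
    for i in range(k):
        out *= n + i
    return out

def V_closed(m, n):
    # closed form (VmnSol)
    if m < n:
        m, n = n, m
    if m == n:
        return 2 * (m + 1) * (m + 2) * (4 * m + 15) / (3 * (2 * m + 5) * (2 * m + 7))
    if m - n == 1:
        return (m + 1) * (8 * m + 25) / (6 * (2 * m + 5)) * math.sqrt(m / (m + 5))
    return 2.0 / 3.0 * (n + 3) * math.sqrt(rising(n + 1, 5) / rising(m + 1, 5))

def A_closed(m, n):
    # closed form (AmnSol)
    if m < n:
        m, n = n, m
    if m == n:
        return 4 * (m + 1) * (m + 2) * (m + 3) * (4 * m * m + 18 * m + 15) \
               / (3 * (2 * m + 5) * (2 * m + 7))
    if m - n == 1:
        return 2 * (m + 1) * (m + 2) * (4 * m * m + 11 * m + 5) \
               / (3 * (2 * m + 5)) * math.sqrt(m / (m + 5))
    return 4.0 / 3.0 * (n + 3) * (15 + 6 * (2 * n - m) + 2 * n * n - m * m) \
           * math.sqrt(rising(n + 1, 5) / rising(m + 1, 5))

for m in range(8):
    for n in range(8):
        assert math.isclose(V(m, n), V_closed(m, n), rel_tol=1e-7)
        assert math.isclose(A(m, n), A_closed(m, n), rel_tol=1e-7, abs_tol=1e-7)
```

Note that terms with negative indices are harmless: whenever the recursion reaches one, its square-root prefactor vanishes, so the guard merely short-circuits it.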
Thus, the solutions of the resonant system (\[ResonantSystem\]) capture the dynamics on the $\mathcal{O}\left(\varepsilon^{-2}\right)$ time-scale exactly, under the assumption that the effects of non-resonant terms are neglected (the neglected higher order terms affect the dynamics on longer time-scales $\mathcal{O}\left(\varepsilon^{-k}\right)$ with integer $k>2$). Of course, to solve (\[ResonantSystem\]) numerically one has to truncate the number of modes present in the system, i.e. to introduce an upper limit $N$ in the sums in (\[ResonantSystem\]). In any case, it would be desirable to solve the resonant system (\[ResonantSystem\]) numerically for some model initial data (for example, the two-mode initial data that were already intensively studied in the past for the massless scalar field in $3+1$ [@bbgll_PRL113; @br_PRL115; @df_JHEP1512], in $4+1$ [@bmr_PRL115], and in higher dimensions [@d_1606.02712]) to check for convergence between 1. the solutions of (\[Einsteineq\]) with initial data $B(0,x) = \varepsilon f(x)$ and $\dot B(0,x) = \varepsilon g(x)$ in the $\varepsilon \rightarrow 0$ limit, 2. the solutions of (\[ResonantSystem\]) with initial data inferred from $B_1(0,x) = f(x)$ and $\dot B_1(0,x) = g(x)$ in the $N \rightarrow \infty$ limit, and for the existence of a finite-time blow-up in the resonant system, cf. [@bmr_PRL115]. Also, one of the motivations to study higher orders in the perturbation expansion [@r_PRD95; @r_PRD96] was to lay the foundations for constructing the resonant system for arbitrary gravitational perturbations. Although the construction of such a system should be conceptually straightforward after the model case [@cev_JHEP1410; @cev_JHEP1501] and the present study, technically it would be a formidable task.
Thus, before attacking such a problem, it would be desirable to know whether, with the presently numerically accessible cutoffs $N$, one can rely on the solutions of the resonant system (\[ResonantSystem\]) obtained under the simplifying symmetry assumptions (\[bcs\_ansatz\]). Unfortunately, it seems from the preliminary results of Maliborski [@m_private2] that numerical integration of the resonant system for the ansatz (\[bcs\_ansatz\]) is much more demanding than the analogous problem for the spherically symmetric massless scalar field system in $4+1$ dimensions [@bmr_PRL115]. Namely, even with the cutoff $N \approx 500$ it was very difficult to establish the decay rate of the energy power spectrum: the obtained results seemed not to converge to the decay rate $-5/3$ reported in [@br_APPB48], and gave values between $-2$ and $-5/3$ depending on the fitting time and the range of modes used in the fit [@m_private2]. It would be very interesting to revisit this problem. Acknowledgements ================ We wish to thank Maciej Maliborski for his collaboration at the early stage of this project. This work was supported by the Narodowe Centrum Nauki (Poland) Grant no. 2017/26/A/ST2/530. Vanishing of the secular terms at the second order {#appA} ================================================== We prove that all the secular terms vanish at the second order in $\varepsilon$.
The interaction coefficients due to quadratic nonlinearity are $$\label{Kjkn} K_{jkn} = \int^{\pi/2}_0 e_j(x) e_k(x) e_n(x) \frac{\sin x}{\cos^3 x} \, dx\; .$$ Using $y=\cos(2x)$ and the definition of eigenfunctions (\[modes\]) we have $$e_n(y)\sim (1-y)(1+y)^2 P_n^{(3,2)}(y) \; .$$ Then, using the formula $$P^{(\alpha, \beta)}_n(y) \sim (1-y)^{-\alpha} (1+y)^{-\beta} \frac{d^n}{d y^n}\left( (1-y)^{\alpha+n}(1+y)^{\beta+n} \right) \, ,$$ we get $$K_{jkn} \sim \int^1_{-1} (1+y)^2 P_j^{(3,2)} P_k^{(3,2)}\frac{d^n}{d y^n}\left( (1-y)^{3+n}(1+y)^{2+n} \right) \, dy \; .$$ Integrating by parts we find that $K_{jkn} = 0$ if $$\label{K0condition} n>j+k+2$$ (because $(1+y)^2 P_j^{(3,2)} P_k^{(3,2)}$ is a polynomial of degree $j+k+2$). For the resonant terms $\omega_n = \omega_j+\omega_k$, hence $n=3+j+k$. Thus, the coefficients of the resonant terms vanish. Calculation of $S^{(3)}_l$ and vanishing of some secular terms at the third order {#appB} ================================================================================= To obtain $S^{(3)}_l = {\left\langle S^{(3)}, \, e_l \right\rangle}$ we follow closely the work of Craps, Evnin & Vanhoof (2014) [@cev_JHEP1410]. Our calculation is very similar to that described in Appendix A of their paper, therefore we only give a brief outline and the final results. To get $A_2(t,x)$ from (\[A2integral\]) in terms of the first order solution (\[seriesB1\]) we use identities: $$\begin{aligned} \label{eq:iden2} \left( \mu \left( e'_{i}e_{j} - e'_{j}e_{i} \right) \right)' &= \left( \omega_{j}^{2} -\omega_{i}^{2} \right) \mu \, e_{j}e_{i} \; , \\ \label{eq:iden3} \left(\mu \left( \omega_{j}^{2} e'_{i}e_{j} - \omega_{i}^{2} e'_{j}e_{i} \right) \right)' &= \left( \omega_{j}^{2} - \omega_{i}^{2}\right) \mu \left( e'_{j}e'_{i} + \frac{8}{\sin^2 x} e_i e_j \right) \; \end{aligned}$$ that are easily established from the eigen equation (\[eigenEq\]).
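Returning for a moment to Appendix \[appA\]: the degree-counting argument behind (\[K0condition\]) can be verified exactly in rational arithmetic. The sketch below (our own check, not part of the proof) builds $\frac{d^n}{dy^n}\left[(1-y)^{3+n}(1+y)^{2+n}\right]$ with rational coefficients and confirms that it integrates to zero over $[-1,1]$ against every monomial of degree below $n$, which is precisely why $K_{jkn}=0$ once $n>j+k+2$:

```python
from fractions import Fraction

# Polynomials are coefficient lists over the rationals (index = power).
# This is our own verification script; it checks that the Rodrigues-type
# factor R_n(y) = d^n/dy^n [ (1-y)^{3+n} (1+y)^{2+n} ] is orthogonal on
# [-1, 1] to all polynomials of degree < n (but not to y^n).

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_pow(p, k):
    out = [Fraction(1)]
    for _ in range(k):
        out = poly_mul(out, p)
    return out

def poly_diff_n(p, n):
    for _ in range(n):
        p = [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]
    return p

def integrate(p):
    # exact integral of sum_s c_s y^s over [-1, 1]; odd powers drop out
    return sum(2 * c / (s + 1) for s, c in enumerate(p) if s % 2 == 0)

def R(n):
    base = poly_mul(poly_pow([Fraction(1), Fraction(-1)], 3 + n),   # (1-y)^{3+n}
                    poly_pow([Fraction(1), Fraction(1)], 2 + n))    # (1+y)^{2+n}
    return poly_diff_n(base, n)

for n in range(1, 7):
    Rn = R(n)
    for s in range(n):
        mono = [Fraction(0)] * s + [Fraction(1)]                    # y^s
        assert integrate(poly_mul(mono, Rn)) == 0
    top = [Fraction(0)] * n + [Fraction(1)]                         # y^n
    assert integrate(poly_mul(top, Rn)) != 0
```

With this check recorded, we return to the identities (\[eq:iden2\]) and (\[eq:iden3\]).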
Using these identities we get $$\begin{aligned} & \frac{A_2(t,x)}{-2} \nonumber\\ = & \nu(x) \sum_{\scriptsize{\begin{matrix} i,j\\i \neq j \end{matrix}}} \frac{\mu(x) \left[ c_i(t) c_j(t) \left( \omega_j^2 e_i'(x) e_j(x) - \omega_i^2 e_j'(x) e_i(x) \right) + \dot c_i(t) \dot c_j(t) \left( e_i'(x) e_j(x) - e_j'(x) e_i(x) \right) \right]}{ \omega_{j}^{2} - \omega_{i}^{2} } \nonumber\\ + & \nu(x) \sum_i \int_0^x \left[ c_i^2(t) \left( \left( e_i'(y) \right)^2 + \frac{8}{\sin^2(y)} e_i^2(y) \right) + \dot c_i^2(t) e_i^2(y) \right] \mu(y) \, dy \, .\end{aligned}$$ Using the symmetry in $i,j$ indices under the first sum and integrating by parts and using the eigen equation (\[eigenEq\]) under the second sum, we finally get $$\begin{aligned} \label{A2} \frac{A_2(t,x)}{-2} & = 2 \nu(x) \sum_{\scriptsize{\begin{matrix} i,j\\i \neq j \end{matrix}}} \frac{ \dot c_i(t) \dot c_j(t) + \omega_j^2 c_i(t) c_j(t) }{ \omega_{j}^{2} - \omega_{i}^{2} } \mu(x) e_i'(x) e_j(x) \nonumber\\ & + \nu(x) \sum_i\left[ c_i^2(t) \mu(x)\, e_i'(x) e_i(x) + Q_i(t) \int_0^x \mu(y) \, e_i^2(y) \, dy \right] \, ,\end{aligned}$$ where $Q_i(t) = \dot c_i^2(t) + \omega_i^2 c_i^2(t)$ and $\dot Q_i \equiv 0$ from (\[oscilator\]). 
Using this identity and (\[oscilator\]) again it follows that $$\label{A2dot} \frac{\dot A_2(t,x)}{-2} = 2 \nu(x) \sum_{i,j} c_i(t) \dot c_j(t) \mu(x) \, e_i'(x) e_j(x) \, .$$ Now, with (\[oscilator\]) and (\[A2\], \[A2dot\]) it is straightforward to establish that: $$\begin{aligned} \label{A2ddotB1el} \frac{{\left\langle A_2 \ddot B_1, \, e_l \right\rangle}}{-2} & = - 2 \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j\end{matrix}}} \frac{\omega_k^2 \, c_k}{\omega_j^2 - \omega_i^2} \left( \dot c_i \dot c_j +\omega_j^2 c_ic_j \right) \, X_{ijkl} - \sum_{i,k} \omega_{k}^{2} \, c_{k}\left( c_i^2 \, X_{iikl} + Q_i \, W_{klii} \right) \; , \\ \frac{{\left\langle \dot A_2 \dot B_1, \, e_l \right\rangle}}{-2} & = 2 \sum_{i,j,k} c_i \dot c_j \dot c_k \, X_{ijkl} \; , \\ \frac{{\left\langle \frac{\displaystyle 1}{\displaystyle \sin^2 x} A_2 B_1, \, e_l \right\rangle}}{-2} & = 2 \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j\end{matrix}}} \frac{c_k}{\omega_j^2 - \omega_i^2} \left( \dot c_i \dot c_j + \omega_j^2 c_ic_j \right) \, \tilde X_{ijkl} + \sum_{i,k} c_{k}\left( c_i^2 \, \tilde X_{iikl} + Q_i \, \tilde W_{klii} \right) \; ,\end{aligned}$$ where the interaction coefficients $X_{ijkl}$, $W_{ijkl}$, $\tilde X_{ijkl}$ and $\tilde W_{ijkl}$ (i.e. integrals of products of AdS linear eigen modes and some weights) are defined in (\[eq:coeffs\]).
To obtain ${\left\langle \delta_2 \ddot B_1, \, e_l \right\rangle}$ and ${\left\langle \dot \delta_2 \dot B_1, \, e_l \right\rangle}$ contributions to the source $S^{(3)}_l$ we use (\[eq:iden2\]) and integrate by parts: $$\begin{aligned} & \frac{{\left\langle \delta_2 \ddot B_1, \, e_l \right\rangle}}{-2} \nonumber\\ = &\sum_k \ddot c_k \int_0^{\pi/2} dx \, \mu(x) \, e_k(x) e_l(x) \int_0^{x} dy \, \mu(y) \nu(y) \left(B_1'^2 (t,y) + \dot B_1^2(t,y) \right) \nonumber\\ = & \sum_{\scriptsize{\begin{matrix} k\\k \neq l\end{matrix}}} \frac{- \omega_k^2 \, c_k}{\omega_l^2 - \omega_k^2} \underbrace{\int_0^{\pi/2} dx \, \left( \mu(x) \left( e_k'(x) e_l(x) - e_l'(x) e_k(x) \right) \right)' \int_0^{x} dy \, \mu(y) \nu(y) \left(B_1'^2 (t,y) + \dot B_1^2(t,y) \right)}_{\displaystyle \mathcal{I}_1} \nonumber\\ - & \omega_l^2 \, c_l \underbrace{\int_0^{\pi/2} dx \, \mu(x) \, e_l^2(x) \int_0^{x} dy \, \mu(y) \nu(y) \left(B_1'^2 (t,y) + \dot B_1^2(t,y) \right)}_{\displaystyle \mathcal{I}_2}\end{aligned}$$ Now $$\begin{aligned} \mathcal{I}_1 & = - \int_0^{\pi/2} dx \, \mu(x) \left( e_k'(x) e_l(x) - e_l'(x) e_k(x) \right) \mu(x) \nu(x) \underbrace{\left(B_1'^2 (t,x) + \dot B_1^2(t,x) \right)}_{\sum_{i,j} \left( c_i c_j e_i' e_j' + \dot c_i \dot c_j e_i e_j \right)} \nonumber\\ & = - \sum_{i,j} \left[ \dot c_i \dot c_j \left( X_{klij} - X_{lkij} \right) + c_i c_j \left( Y_{klij} - Y_{lkij} \right) \right]\end{aligned}$$ and $$\begin{aligned} \mathcal{I}_2 & = \underbrace{ \int_0^{\pi/2} dx \, \mu(x) e_l^2(x) }_{{\left\langle e_l, \, e_l \right\rangle} = 1} \int_0^{\pi/2} dx \, \mu(x) \nu(x) \left(B_1'^2 (t,x) + \dot B_1^2(t,x) \right) \nonumber\\ & - \int_0^{\pi/2} dx \, \mu(x) \nu(x) \left(B_1'^2 (t,x) + \dot B_1^2(t,x) \right) \int_0^{x} dy \, \mu(y) e_l^2(y) \nonumber\\ & = \sum_{i,j} \left( \dot c_i \dot c_j P_{ijl} + c_i c_j B_{ijl} \right) \, ,\end{aligned}$$ where the interaction coefficients $X_{ijkl}$, $Y_{ijkl}$, $P_{ijl}$ and $B_{ijl}$ are defined in (\[eq:coeffs\]). 
This gives $$\begin{aligned} & \frac{{\left\langle \delta_2 \ddot B_1, \, e_l \right\rangle}}{-2} \nonumber\\ = & \sum_{\scriptsize{\begin{matrix} i,j,k\\k \neq l\end{matrix}}} \frac{\omega_k^2 \, c_k}{\omega_l^2 - \omega_k^2} \left[ \dot c_i \dot c_j \left( X_{klij} - X_{lkij} \right) + c_i c_j \left( Y_{klij} - Y_{lkij} \right) \right] - \omega_l^2 \, c_l \sum_{i,j} \left( \dot c_i \dot c_j P_{ijl} + c_i c_j B_{ijl} \right) \, .\end{aligned}$$ Similarly $$\begin{aligned} & \frac{{\left\langle \dot \delta_2 \dot B_1, \, e_l \right\rangle}}{-2} \nonumber\\ = & - \sum_{\scriptsize{\begin{matrix} i,j,k\\k \neq l\end{matrix}}} \frac{\dot c_k}{\omega_l^2 - \omega_k^2} \partial_t \left[ \dot c_i \dot c_j \left( X_{klij} - X_{lkij} \right) + c_i c_j \left( Y_{klij} - Y_{lkij} \right) \right] + \dot c_l \sum_{i,j} \partial_t \left( \dot c_i \dot c_j P_{ijl} + c_i c_j B_{ijl} \right) \, \nonumber\\ = & - \sum_{\scriptsize{\begin{matrix} i,j,k\\k \neq l\end{matrix}}} \frac{\dot c_k}{\omega_l^2 - \omega_k^2} \left\{ c_i \dot c_j \left[ -\omega_i^2 \left( X_{klij} - X_{lkij} \right) + \left( Y_{klij} - Y_{lkij} \right) \right] \right. \nonumber\\ & \hskip 25mm \left. + c_j \dot c_i \left[ -\omega_j^2 \left( X_{klij} - X_{lkij} \right) + \left( Y_{klij} - Y_{lkij} \right) \right] \right\} \nonumber\\ & + \dot c_l \sum_{i,j} \left[ c_i \dot c_j \left( - \omega_i^2 P_{ijl} + B_{ijl}\right) + c_j \dot c_i \left( - \omega_j^2 P_{ijl} + B_{ijl}\right)\right] \, . 
$$ Now from $$A_2' - \delta_2' = \frac{\nu'}{\nu} A_2 - \frac{16}{\sin^2 x} \mu \nu B_1^2 \;$$ and (\[A2\]) we get $$\begin{aligned} & {\left\langle \left(A_2' - \delta_2'\right) B_1', \, e_l \right\rangle} \nonumber\\ & = -4 \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j \end{matrix}}} \frac{ c_k \left( \dot c_i \dot c_j + \omega_j^2 c_i c_j \right) }{ \omega_j^2 -\omega_i^2} H_{ijkl} - 2 \sum_{i,k} c_k \left( c_i^2 H_{iikl} + Q_i M_{kli} \right) - 16 \sum_{i,k,j} c_i c_j c_k \tilde{X}_{klij} \; ,\end{aligned}$$ where the interaction coefficients $H_{ijkl}$, $M_{ijk}$ and $\tilde{X}_{ijkl}$ are defined in (\[eq:coeffs\]). Finally $$\label{B1cubeel} {\left\langle \frac{1}{\sin^2 x} B_1^3, \, e_l \right\rangle} = \sum_{i,j,k} c_i c_j c_k G_{ijkl} \, ,$$ where the interaction coefficient $G_{ijkl}$ is defined in (\[Gijkl\]). To make the time dependence in (\[A2ddotB1el\]-\[B1cubeel\]) explicit we gather some trigonometric identities (cf. (\[cn\])): $$\begin{aligned} c_k c_i c_j & = \frac{1}{4} a_k a_i a_j \nonumber\\ & \times \left( \cos\left( \theta_i - \theta_j - \theta_k \right) + \cos\left( \theta_i - \theta_j + \theta_k \right) + \cos\left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \\ c_k \dot c_i \dot c_j & = \frac{1}{4} a_k a_i a_j \omega_i \omega_j \nonumber\\ & \times \left( \cos\left( \theta_i - \theta_j - \theta_k \right) + \cos\left( \theta_i - \theta_j + \theta_k \right) - \cos\left( \theta_i + \theta_j - \theta_k \right) - \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \\ c_k \left( \dot c_i \dot c_j + \omega_j^2 c_i c_j \right)& = \frac{1}{4} a_k a_i a_j \nonumber\\ & \times \left[ \omega_j \left( \omega_j + \omega_i \right) \left( \cos\left( \theta_i - \theta_j - \theta_k \right) + \cos\left( \theta_i - \theta_j + \theta_k \right) \right) \right. \nonumber\\ & \hspace{2mm} \left. 
+ \omega_j \left( \omega_j - \omega_i \right) \left( \cos\left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right)\right) \right]\end{aligned}$$ $$\begin{aligned} c_k c_i^2 & = \frac{1}{2} a_k a_i^2 \cos(\theta_k) + \frac{1}{4} a_k a_i^2 \left( \cos \left(2 \theta_i - \theta_k \right) + \cos \left(2 \theta_i + \theta_k \right) \right) \\ c_k Q_i &= a_k a_i^2 \omega_i^2 \cos \theta_k\end{aligned}$$ Using these identities it is straightforward to establish that (for later convenience we underline terms that will be summed together, and indicate a convenient change of indices in some other terms) $$\begin{aligned} & 8 {\left\langle \frac{1}{\sin^2 x} A_2 B_1, \, e_l \right\rangle} \nonumber\\ = & -8 \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j \end{matrix}}} a_i a_j a_k \tilde{X}_{ijkl} \left[ \frac{\omega_j}{\omega_j - \omega_i} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} + \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} ) \right. \nonumber\\ & \hspace{33mm} \left.
+ \underline{\frac{\omega_j}{\omega_j + \omega_i} \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) } \right] \nonumber\\ & -4 \sum_{i,k} a_i^2 a_k \left[ \left( 2 \tilde{X}_{iikl} + 4 \omega_i^2 \tilde{W}_{klii}\right) \cos(\theta_k) + \underline{ \tilde{X}_{iikl} \left( \cos \left( 2 \theta_i - \theta_k \right) + \cos\left( 2 \theta_i + \theta_k \right) \right)} \right] \, , \label{A2B1oversin2}\end{aligned}$$ $$\begin{aligned} & {\left\langle \left(A_2' - \delta_2'\right) B_1', \, e_l \right\rangle} \nonumber\\ = & - \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j \end{matrix}}} a_i a_j a_k H_{ijkl} \left[ \frac{\omega_j}{\omega_j - \omega_i} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} + \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} ) \right. \nonumber\\ & \hspace{33mm} \left. + \underline{\frac{\omega_j}{\omega_j + \omega_i} \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) } \right] \nonumber\\ & -4 \sum_{i,j,k} a_i a_j a_k \tilde{X}_{klij} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} + \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} \nonumber\\ & \hspace{33mm} \left. 
+ \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \nonumber\\ & - \frac{1}{2} \sum_{i,k} a_i^2 a_k \left[ \left( 2 H_{iikl} + 4 \omega_i^2 M_{kli}\right) \cos(\theta_k) + \underline{H_{iikl} \left( \cos \left( 2 \theta_i - \theta_k \right) + \cos\left( 2 \theta_i + \theta_k \right) \right)} \right] \, , \label{primes}\end{aligned}$$ $$\begin{aligned} & 2 {\left\langle A_2 \ddot B_1, \, e_l \right\rangle} \nonumber\\ = & 2 \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j \end{matrix}}} a_i a_j a_k \omega_k^2 X_{ijkl} \left[ \frac{\omega_j}{\omega_j - \omega_i} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} + \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} ) \right. \nonumber\\ & \hspace{33mm} \left. + \underline{\frac{\omega_j}{\omega_j + \omega_i} \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) } \right] \nonumber\\ & + \sum_{i,k} a_i^2 a_k \omega_k^2 \left[ \left( 2 X_{iikl} + 4 \omega_i^2 W_{klii}\right) \cos(\theta_k) + \underline{X_{iikl} \left( \cos \left( 2 \theta_i - \theta_k \right) + \cos\left( 2 \theta_i + \theta_k \right) \right)} \right] \, , \label{A2ddotB1}\end{aligned}$$ $$\begin{aligned} & {\left\langle \dot A_2 \dot B_1, \, e_l \right\rangle} \nonumber\\ = & - \sum_{i,j,k} a_i a_j a_k \omega_j \omega_k X_{ijkl} \left( \cos \left( \theta_k - \theta_j - \theta_i \right) + \cos \stackrel{j \leftrightarrow k}{\left( \theta_k - \theta_j + \theta_i \right)} \right. \nonumber\\ & \hspace{37mm} \left. 
- \cos \stackrel{i \leftrightarrow k}{\left( \theta_k + \theta_j - \theta_i \right)} - \cos \left( \theta_k + \theta_j + \theta_i \right) \right) \, , \label{dotA2dotB1}\end{aligned}$$ $$\begin{aligned} & -2 {\left\langle \delta_2 \ddot B_1, \, e_l \right\rangle} \nonumber\\ = & \sum_{\scriptsize{\begin{matrix} i,j,k\\k \neq l \end{matrix}}} a_i a_j a_k \frac{\omega_k^2}{\omega_l^2 - \omega_k^2} \left[ Z^+_{ijkl} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} + \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} ) \right. \nonumber\\ & \hspace{31mm} \left. - Z^-_{ijkl} \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \right] \nonumber\\ & - \sum_{i,j} a_i a_j a_l \omega_l^2 \left[ \left( \omega_i \omega_j P_{ijl} + B_{ijl} \right) \left( \cos \left( \theta_i - \theta_j - \theta_l \right) + \cos \left( \theta_i - \theta_j + \theta_l \right) \right) \right. \nonumber\\ & \hspace{22mm} \left. - \left( \omega_i \omega_j P_{ijl} - B_{ijl} \right) \left( \cos \left( \theta_i + \theta_j - \theta_l \right) + \cos \left( \theta_i + \theta_j + \theta_l \right) \right) \right] \, , \label{delta2ddotB1}\end{aligned}$$ $$\begin{aligned} & - {\left\langle \dot \delta_2 \dot B_1, \, e_l \right\rangle} \nonumber\\ = & - \frac{1}{2} \sum_{\scriptsize{\begin{matrix} i,j,k\\k \neq l \end{matrix}}} a_i a_j a_k \frac{\omega_k}{\omega_l^2 - \omega_k^2} \left[ \left( \omega_i - \omega_j \right) Z^+_{ijkl} ( \cos \stackrel{i \leftrightarrow k}{\left( \theta_i - \theta_j - \theta_k \right)} - \cos \stackrel{j \leftrightarrow k}{\left( \theta_i - \theta_j + \theta_k \right)} ) \right. \nonumber\\ & \hspace{38mm} \left. 
+ \left( \omega_i + \omega_j \right) Z^-_{ijkl} \left( -\cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \right] \nonumber\\ & + \frac{1}{2} \sum_{i,j} a_i a_j a_l \omega_l \left[ \left( \omega_i - \omega_j \right) \left( \omega_i \omega_j P_{ijl} + B_{ijl} \right) \left( \cos \left( \theta_i - \theta_j - \theta_l \right) - \cos \left( \theta_i - \theta_j + \theta_l \right) \right) \right. \nonumber\\ & \hspace{24mm} \left. + \left( \omega_i + \omega_j \right) \left( \omega_i \omega_j P_{ijl} - B_{ijl} \right) \left( -\cos \left( \theta_i + \theta_j - \theta_l \right) + \cos \left( \theta_i + \theta_j + \theta_l \right) \right) \right] \, , \label{dotdelta2dotB1}\end{aligned}$$ where the interaction coefficient $Z^{\pm}_{ijkl}$ is defined in (\[Zijkl\]). The sum of the underlined terms in eqs. (\[A2B1oversin2\]-\[A2ddotB1\]) gives: $$\begin{aligned} & \sum_{\scriptsize{\begin{matrix} i,j,k\\i \neq j \end{matrix}}} \frac{\omega_j}{\omega_j + \omega_i} a_i a_j a_k \left( -8 \tilde{X}_{ijkl} - H_{ijkl} + 2 \omega_k^2 X_{ijkl} \right) \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \nonumber\\ & \hspace{10mm} + \frac{1}{2} \sum_{i,k} a_i^2 a_k \left( -8 \tilde{X}_{iikl} - H_{iikl} + 2 \omega_k^2 X_{iikl} \right) \left( \cos \left( 2 \theta_i - \theta_k \right) + \cos\left( 2 \theta_i + \theta_k \right) \right) \nonumber\\ = & \sum_{i,j,k} \frac{\omega_j}{\omega_j + \omega_i} a_i a_j a_k \left( -8 \tilde{X}_{ijkl} - H_{ijkl} + 2 \omega_k^2 X_{ijkl} \right) \left( \cos \left( \theta_i + \theta_j - \theta_k \right) + \cos\left( \theta_i + \theta_j + \theta_k \right) \right) \end{aligned}$$ Now, interchanging indices in some terms as indicated in eqs. (\[A2B1oversin2\]-\[dotdelta2dotB1\]), we finally get (\[completeS3l\]). 
One can prove that for ${\left\langle {1\over \sin^2 x} B_1 B_2, \, e_l \right\rangle}$ all secular $(+++)$ and $(+--)$ terms vanish. After simplifying we get $$\begin{aligned} & -\int c_i(t') c_j(t') \sin {(\omega_k t') dt' |_{t'=t} \cos {(\omega_k t)}} +\int c_i(t') c_j(t') \cos {(\omega_k t') dt' |_{t'=t} \sin {(\omega_k t)}} \nonumber \\ &= {1\over 4} a_i a_j \left(-\frac{\cos(\theta_i-\theta_j)}{\omega_i-\omega_j-\omega_k}-\frac{\cos(\theta_i+\theta_j)}{\omega_i+\omega_j-\omega_k}+\frac{\cos(\theta_i-\theta_j)}{\omega_i-\omega_j+\omega_k}+\frac{\cos(\theta_i+\theta_j)}{\omega_i+\omega_j+\omega_k} \right) \; ,\end{aligned}$$ where $\omega_i-\omega_j-\omega_k \neq 0$, $\omega_i+\omega_j-\omega_k \neq 0$, $\omega_i-\omega_j+\omega_k \neq 0$, $\omega_i+\omega_j+\omega_k \neq 0$. Multiplying by $c_m(t)/\omega_k = a_m \cos(\theta_m)/\omega_k$, we obtain $$\begin{aligned} &- {1\over 4} a_i a_j a_m \left(\frac{\cos(\theta_i-\theta_j+\theta_m)}{(\omega_i-\omega_j)^2-\omega_k^2}+\frac{\cos(\theta_i+\theta_j-\theta_m)}{(\omega_i+\omega_j)^2-\omega_k^2} \right. \nonumber \\ &\qquad \left. +\frac{\cos(\theta_i-\theta_j-\theta_m)}{(\omega_i-\omega_j)^2-\omega_k^2} + \frac{\cos(\theta_i+\theta_j+\theta_m)}{(\omega_i+\omega_j)^2-\omega_k^2} \right)\; .\end{aligned}$$ This expression is multiplied by $K_{ijk} K_{kml}$. Using  we get: 1\) for the $(+++)$ terms: $$\omega_i + \omega_j + \omega_m = \omega_l \Rightarrow i+j+m+6 = l \;,$$ $$k > i+j+2 \Rightarrow K_{ijk}=0 \;,$$ $$l > k+m+2 \Rightarrow k < i+j+4 \Rightarrow K_{kml}=0 \;,$$ which means that for every $k \in \mathbb{N}$ at least one of the conditions $K_{ijk}=0$ or $K_{kml}=0$ is satisfied, so the whole term always vanishes. 
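The index bookkeeping in case 1) can be checked mechanically. The sketch below is a standalone check, not part of the derivation; the linear spectrum $\omega_j = 2j+6$ is an assumption consistent with the implication $\omega_i+\omega_j+\omega_m=\omega_l \Rightarrow i+j+m+6=l$ used above, and only the support bounds on the coefficients $K_{ijk}$ stated in the text enter:

```python
# Mechanical check of the (+++) selection rule. The spectrum omega_j = 2*j + 6
# is an assumption (consistent with omega_i+omega_j+omega_m = omega_l implying
# i+j+m+6 = l); the vanishing itself uses only the support bounds
# K_{ijk} = 0 for k > i+j+2 and K_{kml} = 0 for l > k+m+2.

def omega(j):
    return 2 * j + 6  # assumed eigenfrequency spectrum

def plus_plus_plus_term_survives(i, j, m, kmax=500):
    l = i + j + m + 6  # resonance condition omega_i + omega_j + omega_m = omega_l
    assert omega(i) + omega(j) + omega(m) == omega(l)
    for k in range(kmax):
        K_ijk_nonzero_possible = (k <= i + j + 2)   # otherwise K_{ijk} = 0
        K_kml_nonzero_possible = (l <= k + m + 2)   # i.e. k >= i + j + 4
        if K_ijk_nonzero_possible and K_kml_nonzero_possible:
            return True
    return False

# No intermediate index k lets both factors be nonzero at once, so every
# resonant (+++) term carries a vanishing product K_{ijk} K_{kml}:
assert not any(plus_plus_plus_term_survives(i, j, m)
               for i in range(8) for j in range(8) for m in range(8))
```

The two windows $k \leqslant i+j+2$ and $k \geqslant i+j+4$ are disjoint, which is exactly why the product $K_{ijk}K_{kml}$ vanishes for every intermediate index $k$.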
2\) for the $(+--)$ terms: $$\omega_i - \omega_j - \omega_m = \omega_l \Rightarrow i-j-m-6 = l \;,$$ $$i > j+k+2 \Rightarrow k<i-j-2 \Rightarrow K_{ijk}=0 \;,$$ $$k>l+m+2 \Rightarrow k>i-j-4 \Rightarrow K_{kml}=0 \;,$$ or $$\omega_i + \omega_j - \omega_m = - \omega_l \Rightarrow i+j-m+6 = -l \;,$$ $$k>i+j+2 \Rightarrow K_{ijk}=0 \;,$$ $$m>l+k+2 \Rightarrow k<i+j+4 \Rightarrow K_{kml}=0 \;,$$ so the whole term always vanishes as in the previous case. A similar analysis for the $(++-)$ case does not imply that $K_{ijk}K_{kml}$ always equals 0. For the special cases $\omega_i-\omega_j-\omega_k = 0$, $\omega_i+\omega_j-\omega_k = 0$, $\omega_i-\omega_j+\omega_k = 0$ the secular terms vanish as $K_{ijk} = 0$. There are no terms in the sum that satisfy $\omega_i+\omega_j+\omega_k = 0$. For the lower limit $t' = 0$ secular terms would appear if $\omega_k \pm \omega_m = \pm \omega_l$. In this case $K_{kml} = 0$, so there is no contribution. There are also no secular terms in $\langle {1\over \sin^2 x} B_1^3,e_l\rangle$ for the $(+++)$ and the $(+--)$ case. Equations obtained in a way analogous to –: $$\label{Gjkn} G_{ijkn} = \int^{\pi/2}_0 e_i(x) e_j(x) e_k(x) e_n(x) \frac{\sin x}{\cos^3 x} \, dx \; ,$$ $$G_{ijkn} \sim \int^1_{-1} (1-y)(1+y)^4 P_i^{(3,2)} P_j^{(3,2)} P_k^{(3,2)}\frac{d^n}{d y^n}\left( (1-y)^{3+n}(1+y)^{2+n} \right)$$ imply that $G_{ijkn} = 0$ if $$n>i+j+k+5$$ (because $(1-y)(1+y)^4 P_i^{(3,2)} P_j^{(3,2)} P_k^{(3,2)}$ is a polynomial of degree $i+j+k+5$). For the resonant terms $\omega_n = \omega_j+\omega_k+\omega_i$, hence $n=6+i+j+k$, which means that the resonant $(+++)$ or $(+--)$ terms in $\langle {1\over \sin^2 x} B_1^3,e_l\rangle$ always vanish. We read the coefficients $Q_{ijkl}$, $U_{ijkl}$, $S_{ijkl}$, $R_{il}$, $T_l$ in (\[S3lfinal\]) from the source term  and apply identities (\[HMidentities\]) to finally get simplified expressions , , , , . [10]{} P. Bizoń, A. Rostworowski, *On weakly turbulent instability of anti-de Sitter space*, Phys. Rev.
Lett. 107, 031102 (2011) P. Bizoń, A. Rostworowski, *AdS collapse of a scalar field in higher dimensions*, Phys. Rev. D84, 085021 (2011) G. Moschidis, *A proof of the instability of AdS for the Einstein–null dust system with an inner mirror*, [`[arXiv:1704.08681]`](http://arxiv.org/abs/1704.08681) G. Moschidis, *A proof of the instability of AdS for the Einstein–massless Vlasov system*, [`[arXiv:1812.04268]`](https://arxiv.org/abs/1812.04268) F.V. Dimitrakopoulos, B. Freivogel, M. Lippert and I-S. Yang, *Position space analysis of the AdS (in)stability problem*, Journal of High Energy Physics 1508, 077 (2015) N. Deppe, A. Kolly, A. Frey and G. Kunstatter, *Stability of Anti–de Sitter Space in Einstein-Gauss-Bonnet Gravity*, Phys. Rev. Lett. 114, 071102 (2015) N. Deppe, A.R. Frey, *Classes of Stable Initial Data for Massless and Massive Scalars in Anti-de Sitter Spacetime*, Journal of High Energy Physics 1512, 004 (2015) M. Maliborski, A. Rostworowski, *Time-Periodic Solutions in an Einstein AdS–Massless-Scalar-Field System*, Phys. Rev. Lett. 111, 051102 (2013) M. Maliborski, PhD thesis: *Dynamics of Nonlinear Waves on Bounded Domains*, [`[arXiv:1603.00935]`](http://arxiv.org/abs/1603.00935) (2014) A. Buchel, S.L. Liebling, L. Lehner, *Boson Stars in AdS*, Phys. Rev. D87, 123006 (2013) G. Fodor, P. Forgács and P. Grandclément, *Self-gravitating scalar breathers with a negative cosmological constant*, Phys. Rev. D92, 025036 (2015) Ó.J.C. Dias, G.T. Horowitz, J.E. Santos, *Gravitational Turbulent Instability of Anti-de Sitter Space*, Class. Quant. Grav. 29, 194002 (2012) G.T. Horowitz, J.E. Santos, *Geons and the Instability of Anti-de Sitter Spacetime*, Surv. Differ. Geom. **20**, 321-335 (2015) G. Martinon, G. Fodor, P. Grandclément and P. Forgács, *Gravitational geons in asymptotically anti-de Sitter spacetimes*, Class. Quant. Grav. 34, 125012 (2017) A.
Rostworowski, *Higher order perturbations of anti–de Sitter space and time-periodic solutions of vacuum Einstein equations*, Phys. Rev. D95, 124043 (2017) G. Fodor, P. Forgács, *Anti–de Sitter geon families*, Phys. Rev. D96, 084027 (2017) B. Craps, O. Evnin, [*AdS (in)stability: an analytic approach*]{}, Fortsch. Phys. 64, 336 (2016) H. Bantilan, P. Figueras, M. Kunesch and P. Romatschke, *Nonspherically Symmetric Collapse in Asymptotically AdS Spacetimes*, Phys. Rev. Lett. 119, 191103 (2017) P. Bizoń, A. Rostworowski, *Gravitational turbulent instability of AdS${}_5$*, Acta Phys. Pol. B48, 1375 (2017) P. Bizoń, T. Chmaj, B. G. Schmidt, *Critical behavior in vacuum gravitational collapse in 4+1 dimensions*, Phys. Rev. Lett. 95, 071102 (2005) V. Balasubramanian, A. Buchel, S.R. Green, L. Lehner and S.L. Liebling, *Holographic Thermalization, Stability of Anti–de Sitter Space, and the Fermi-Pasta-Ulam Paradox*, Phys. Rev. Lett. 113, 071601 (2014) B. Craps, O. Evnin, J. Vanhoof, *Renormalization group, secular term resummation and AdS (in)stability*, Journal of High Energy Physics 1410, 48 (2014) B. Craps, O. Evnin, J. Vanhoof, *Renormalization, averaging, conservation laws and AdS (in)stability*, Journal of High Energy Physics 1501, 108 (2015) A. Buchel, S.R. Green, L. Lehner, S.L. Liebling, *Conserved quantities and dual turbulent cascades in anti–de Sitter spacetime*, Phys. Rev. **D91**, 064026 (2014) S.R. Green, A. Maillard, L. Lehner, S.L. Liebling, *Islands of stability and recurrence times in AdS*, Phys. Rev. **D92**, 084001 (2015) P. Bizoń, M. Maliborski, A. Rostworowski, *Resonant Dynamics and the Instability of Anti–de Sitter Spacetime*, Phys. Rev. Lett. 115, 081103 (2015) F.V. Dimitrakopoulos, B. Freivogel, J.F. Pedraza and I-S. Yang, *Gauge dependence of the AdS instability problem*, Phys. Rev. D94, 124008 (2016) N. Deppe, *Resonant dynamics in higher dimensional anti–de Sitter spacetime*, Phys. Rev. **D100**, 124028 (2019) P. Bizoń, B. Craps, O. Evnin, D.
Hunik, V. Luyten, M. Maliborski, *Conformal flow on $S^3$ and weak field integrability in $AdS_4$*, Comm. Math. Phys. 353, 1179 (2017) A. Biasi, P. Bizoń, O. Evnin, *Solvable cubic resonant systems*, [`[arXiv:1805.03634]`](http://arxiv.org/abs/1805.03634) O. Evnin, W. Piensuk, *Quantum resonant systems, integrable and chaotic*, [`[arXiv:1808.09173]`](http://arxiv.org/abs/1808.09173) A. Rostworowski, *Towards a theory of nonlinear gravitational waves: A systematic approach to nonlinear gravitational perturbations in the vacuum*, Phys. Rev. D96, 124026 (2017) Ó.J.C. Dias, J.E. Santos, *AdS nonlinear instability: breaking spherical and axial symmetries*, Class. Quant. Grav. 35, 185006 (2018) A. Ishibashi, R.M. Wald, *Dynamics of non-globally hyperbolic static spacetimes III: anti-de Sitter spacetime*, Class. Quant. Grav. [**21**]{}, 2981 (2004) M. Maliborski, *private communication* (2015) B. Craps, O. Evnin, J. Vanhoof, *Ultraviolet asymptotics and singular dynamics of AdS perturbations*, Journal of High Energy Physics 1510, 079 (2015) P. Bizoń, A. Rostworowski, *Comment on “Holographic Thermalization, Stability of Anti–de Sitter Space, and the Fermi-Pasta-Ulam Paradox”*, Phys. Rev. Lett. 115, 049101 (2015) N. Deppe, *On the stability of anti-de Sitter spacetime*, [`[arXiv:1606.02712]`](http://arxiv.org/abs/1606.02712) M. Maliborski, *private communication* (2015) [^1]: A perturbation can be decomposed at any instant of time as an infinite sum of (a complete set of) linear AdS eigen modes with time-dependent coefficients. By the resonant transfer of energy we mean that the conserved energy of the system leaks to the modes with arbitrarily high frequencies, even if initially distributed among low-frequency modes. [^2]: Remarkably, a proof of AdS instability for a model Einstein–null dust system has recently been given [@m_1704.08681; @m_1812.04268].
The proof does not set the time-scale of black hole formation; in particular, it does not relate this time-scale to the amplitude of the initial perturbation. The position space analysis, similar in spirit to that of the proof [@m_1704.08681; @m_1812.04268], was attempted for the first time in the context of the AdS–Einstein–massless scalar field model in [@dfly_JHEP1508]. [^3]: It is quite remarkable that even if the AdS solution itself is not stable, there exist globally regular, aAdS solutions of Einstein equations that, as numerical evidence shows, are immune to the instability discovered in [@br_PRL107] at least on the $\mathcal{O}\left( \varepsilon^{-k}\right)$ time-scale. These are time-periodic solutions in Einstein–scalar field models [@mr_PRL111; @m_PhD; @bll_PRD87; @ffg_PRD92], in the presently studied cohomogeneity-two biaxial Bianchi IX ansatz [@m_PhD], and time-periodic (in axial symmetry) or helically symmetric (outside axial symmetry) globally regular aAdS vacuum solutions (geons) [@dhs_CQG29; @hs; @mfgf_CQG34; @r_PRD95; @ff_PRD96]. The stability of the latter is assumed on the grounds of numerical evidence for the stability of the former. [^4]: It is expected on the grounds of the perturbative analysis of [@dhs_CQG29] that the mechanism for instability of AdS in the vacuum case (pure gravity) is the same as in the model case [@br_PRL107]. The first steps to run simulations outside spherical symmetry in the $2+1$-dimensional setting were taken in [@bfkr_PRL119], but the “big” perturbations (collapsing after a few bounces) cannot provide evidence for the scaling $\mathcal{O}(\varepsilon^{-2})$. [^5]: Mass $M$ being finite implies $M$ being conserved as well.
[^6]: To prove (\[Hidentity\]) we integrate $(-Y_{klij})$ by parts: $$-Y_{klij} =- \int_{0}^{\pi/2} dx \, e'_k e_l e'_i e'_j \mu^2 \nu = \int_{0}^{\pi/2} dx \, e_j \left[ \left(\mu e'_k\right) \left(\mu e'_i\right) e_l \nu \right]'$$ and use the eigen equation (\[eigenEq\]) in the form $\left(\mu e'_k\right)' = \frac{8}{\sin^2(x)} \mu e_k - \omega_k^2 e_k$. This cancels all other terms on the RHS of (\[Hidentity\]) and leaves $H_{ijkl}$. Similarly, to prove (\[Midentity\]) we integrate $(-X_{ijkk})$ by parts: $$-X_{ijkk} =- \int_{0}^{\pi/2} dx \, e'_i e_j \mu \nu \left(\mu e^2_k \right) = \int_{0}^{\pi/2} dx \left[ \left(\mu e'_i \right) e_j \nu\right]' \int_0^x dy\, \mu e^2_k$$ and use the eigen equation (\[eigenEq\]). This cancels all other terms on the RHS of (\[Midentity\]) and leaves $M_{ijk}$. [^7]: They are particular cases of a general identity $$(\omega_k^2 - \omega_l^2) W_{ijkl} = X_{lijk} - X_{kijl}$$ that is easy to establish using the eigen equation (\[eigenEq\]). [^8]: They are particular cases of a general identity $$(\omega_k^2 - \omega_l^2) \bar{W}_{ijkl} = Y_{lkij} - Y_{klij}$$ that is easy to establish using the eigen equation (\[eigenEq\]). [^9]: The solutions can be found by *Mathematica* if a sufficient number of initial values of the sequences $V_{mm}$, $V_{(m+1)m}$, $V_{(m+2)m}$, $\dots$ is generated. [^10]: The solutions can be found by *Mathematica* if a sufficient number of initial values of the sequences $A_{mm}$, $A_{(m+1)m}$, $A_{(m+2)m}$, $\dots$ is generated.
--- abstract: 'We consider a nonlinear Neumann problem driven by the $p$-Laplacian. In the reaction term we have the competing effects of a singular and a convection term. Using a topological approach based on the Leray-Schauder alternative principle together with suitable truncation and comparison techniques, we show that the problem has positive smooth solutions.' address: - ' Department of Mathematics, National Technical University, Zografou Campus, 15780 Athens, Greece & Institute of Mathematics, Physics and Mechanics, Jadranska 19, 1000 Ljubljana, Slovenia' - 'Institute of Mathematics, Physics and Mechanics, 1000 Ljubljana, Slovenia & Faculty of Applied Mathematics, AGH University of Science and Technology, 30-059 Kraków, Poland & Institute of Mathematics “Simion Stoilow" of the Romanian Academy, 014700 Bucharest, Romania' - 'Faculty of Education and Faculty of Mathematics and Physics, University of Ljubljana, 1000 Ljubljana, Slovenia & Institute of Mathematics, Physics and Mechanics, 1000 Ljubljana, Slovenia' author: - 'Nikolaos S. Papageorgiou' - 'Vicenţiu D. Rădulescu' - 'Dušan D. Repovš' title: Positive solutions for nonlinear Neumann problems with singular terms and convection --- Introduction ============ Let $\Omega\subseteq{\mathbb R}^N$ be a bounded domain with a $C^2$-boundary $\partial\Omega$. In this paper, we study the following nonlinear Neumann problem with singular and convection terms $$\label{eq1} \left\{\begin{array}{l} -\Delta_pu(z)+\xi(z)u(z)^{p-1}=u(z)^{-\gamma}+f(z,u(z),Du(z))\ \mbox{in}\ \Omega,\\ \displaystyle \frac{\partial u}{\partial n}=0\ \mbox{on}\ \partial\Omega,\ u>0,\ 1<p<\infty,\ 0<\gamma<1. 
\end{array}\right\}$$ In this problem, $\Delta_p$ denotes the $p$-Laplacian differential operator defined by $$\Delta_pu={\rm div}\,(|Du|^{p-2}Du)\ \mbox{for all}\ u\in W^{1,p}(\Omega),\ 1<p<\infty.$$ In the reaction term (the right-hand side) of the problem, we have the competing effects of the singular term $u^{-\gamma}$ and the convection term $f(z,x,y)$ (that is, the perturbation $f$ depends also on the gradient $Du$). The function $f(z,x,y)$ is Carathéodory (that is, for all $(x,y)\in{\mathbb R}\times{\mathbb R}^N$ the mapping $z\mapsto f(z,x,y)$ is measurable, and for almost all $z\in\Omega$ the mapping $(x,y)\mapsto f(z,x,y)$ is continuous). The key feature of this paper is that we do not impose any global growth conditions on the function $f(z,\cdot,y)$. Instead, we assume that $f(z,\cdot,y)$ exhibits a kind of oscillatory behavior near zero. In this way we can employ truncation techniques and avoid any growth condition at $+\infty$. In the boundary condition, $\frac{\partial u}{\partial n}$ denotes the normal derivative of $u$, with $n(\cdot)$ being the outward unit normal on $\partial\Omega$. The presence of the gradient $Du$ in the perturbation $f$ excludes a variational approach to problem (\[eq1\]). Instead, our main tool is topological and is based on fixed point theory, in particular, on the Leray-Schauder principle (see Section 2). Equations with singular terms and equations with convection terms have been investigated separately, primarily in the context of Dirichlet problems. For singular problems, we mention the works of Giacomoni, Schindler & Takac [@8], Hirano, Saccon & Shioji [@2], Papageorgiou & Rădulescu [@17], Papageorgiou, Rădulescu & Repovš [@21; @prrbook], Papageorgiou & Smyrlis [@22; @23], Perera & Zhang [@24], and Su, Wu & Long [@27].
For problems with convection, we mention the works of de Figueiredo, Girardi & Matzeu [@2], Gasinski & Papageorgiou [@6], Girardi & Matzeu [@9], Huy, Quan & Khanh [@14], Papageorgiou, Rădulescu & Repovš [@20], and Ruiz [@26]. Of the aforementioned works, only Gasinski & Papageorgiou [@6] and Papageorgiou, Rădulescu & Repovš [@20] go outside the Dirichlet framework and deal with Neumann problems. A good treatment of semilinear parametric elliptic equations with both singular and convection terms and a Dirichlet boundary condition can be found in Ghergu & Rădulescu [@7 Chapter 9]. Mathematical background and hypotheses ====================================== As we have already mentioned, our method of proof is topological and is based on fixed point theory, in particular, on the Leray-Schauder alternative principle. Let $V,\,Y$ be Banach spaces and $g:V\rightarrow Y$ a map. We say that $g(\cdot)$ is “compact” if $g(\cdot)$ is continuous and maps bounded sets of $V$ into relatively compact subsets of $Y$. We now recall the Leray-Schauder alternative principle (see, for example, Gasinski & Papageorgiou [@3 p. 827] or Granas & Dugundji [@10 p. 124]). \[th1\] If $X$ is a Banach space and $g:X\rightarrow X$ is compact, then one of the following two statements is true: - $g(\cdot)$ has a fixed point; - the set $K(g)=\{u\in X:u=tg(u),\ 0<t<1\}$ is unbounded. In what follows, we denote by $\left\langle \cdot, \cdot\right\rangle$ the duality brackets for the pair $(W^{1,p}(\Omega)^*,W^{1,p}(\Omega))$ and by $||\cdot||$ the norm on $W^{1,p}(\Omega)$. Hence $$||u||=\left(||u||^p_p+||Du||^p_p\right)^{1/p}\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ In the analysis of problem (\[eq1\]), we will make use of the Banach space $C^1(\overline{\Omega})$.
This is an ordered Banach space with positive (order) cone $$C_+=\{u\in C^1(\overline{\Omega}):u(z){\geqslant}0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ This cone has a nonempty interior which is given by $$D_+=\{u\in C_+:u(z)>0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ In fact, $D_+$ is also the interior of $C_+$ when the latter is furnished with the relative $C(\overline{\Omega})$-norm topology. Let $A:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ be the nonlinear operator defined by $$\left\langle A(u),h\right\rangle=\int_{\Omega}|Du|^{p-2}(Du,Dh)_{{\mathbb R}^N}dz\ \mbox{for all}\ u,h\in W^{1,p}(\Omega).$$ The next proposition summarizes the main properties of this operator (see Motreanu, Motreanu & Papageorgiou [@16 p. 40]). \[prop2\] The operator $A:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ is bounded (that is, $A$ maps bounded sets to bounded sets), continuous, monotone (hence also maximal monotone) and of type $(S)_+$, that is, $$u_n\stackrel{w}{\rightarrow}u\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{and}\ \limsup\limits_{n\rightarrow\infty}\left\langle A(u_n),u_n-u\right\rangle{\leqslant}0\Rightarrow u_n\rightarrow u\ \mbox{in}\ W^{1,p}(\Omega).$$ For the potential function $\xi(\cdot)$, we assume the following: $H(\xi):$ $\xi\in L^{\infty}(\Omega),\ \xi(z){\geqslant}0$ for almost all $z\in\Omega$, $\xi\not\equiv 0$. The following lemma will be helpful in producing estimates in our proofs. \[lem3\] If hypothesis $H(\xi)$ holds, then there exists $c_1>0$ such that $$\vartheta(u)=||Du||^p_p+\int_{\Omega}\xi(z)|u|^pdz{\geqslant}c_1||u||^p\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ Evidently, $\vartheta{\geqslant}0$. Suppose that the lemma is not true. 
Exploiting the $p$-homogeneity of $\vartheta(\cdot)$ we can find $\{u_n\}_{n{\geqslant}1}\subseteq W^{1,p}(\Omega)$ such that $$\label{eq2} ||u_n||=1\ \mbox{and}\ \vartheta(u_n){\leqslant}\frac{1}{n}\ \mbox{for all}\ n\in{\mathbb N}.$$ We may assume that $$\label{eq3} u_n\stackrel{w}{\rightarrow}u\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{and}\ u_n\rightarrow u\ \mbox{in}\ L^p(\Omega)\ \mbox{as}\ n\rightarrow\infty.$$ Clearly, $\vartheta(\cdot)$ is sequentially weakly lower semicontinuous. So, it follows from (\[eq2\]) and (\[eq3\]) that $$\begin{aligned} \label{eq4} &&\vartheta(u)=0,\\ &\Rightarrow&u\equiv\eta\in{\mathbb R}.\nonumber \end{aligned}$$ If $\eta=0$, then $u_n\rightarrow 0$ in $W^{1,p}(\Omega)$, which contradicts (\[eq2\]). So $\eta\neq 0$. Then $$0=|\eta|^p\int_{\Omega}\xi(z)dz>0\ (\mbox{see \cite{4} and hypothesis}\ H(\xi)),$$ which is a contradiction. The proof of Lemma \[lem3\] is now complete. Let $x\in{\mathbb R}$ and $x^{\pm}=\max\{\pm x,0\}$. Then for all $u\in W^{1,p}(\Omega)$, we set $u^{\pm}(\cdot)=u(\cdot)^{\pm}$. We have $$u^{\pm}\in W^{1,p}(\Omega),\ u=u^+-u^-,\ |u|=u^++u^-.$$ We denote by $|\cdot|_N$ the Lebesgue measure on ${\mathbb R}^N$. Given $u,v\in W^{1,p}(\Omega)$ with $u{\leqslant}v$, define $$[u,v]=\{y\in W^{1,p}(\Omega):u(z){\leqslant}y(z){\leqslant}v(z)\ \mbox{for almost all}\ z\in\Omega\}.$$ Also, we denote by ${\rm int}_{C^1(\overline{\Omega})}[u,v]$ the interior of $[u,v]\cap C^1(\overline{\Omega})$ in the $C^1(\overline{\Omega})$-norm topology. Finally, if $1<p<\infty$, we denote by $p'>1$ the conjugate exponent of $p>1$, that is, $\frac{1}{p}+\frac{1}{p'}=1$. 
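The coercivity estimate of Lemma \[lem3\] can be illustrated numerically in the model case $p=2$, where the best constant $c_1$ is the smallest eigenvalue of the quadratic form $\vartheta$ relative to the squared $W^{1,2}$-norm. The finite-difference sketch below (on $\Omega=(0,1)$, with the assumed potential $\xi(z)=z$; any $\xi{\geqslant}0$, $\xi\not\equiv 0$ works) is only an illustration, not part of the arguments of the paper:

```python
import numpy as np
from scipy.linalg import eigh

# Finite-difference sketch of Lemma 3 for p = 2 on Omega = (0, 1):
# vartheta(u) = ||Du||_2^2 + int_Omega xi |u|^2 dz >= c_1 ||u||_{W^{1,2}}^2.
# The potential xi(z) = z is an assumption (any xi >= 0, xi != 0 works).
n = 200
z = np.linspace(0.0, 1.0, n)
h = z[1] - z[0]

# Forward-difference gradient; Neumann conditions need no extra rows.
G = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h
K = h * G.T @ G           # stiffness matrix: approximates ||Du||_2^2
M = h * np.eye(n)         # mass matrix: approximates ||u||_2^2
V = h * np.diag(z)        # potential term: approximates int xi |u|^2 dz

Q = K + V                 # quadratic form vartheta
B = K + M                 # squared W^{1,2}-norm

# c_1 = min_u vartheta(u)/||u||^2 is a generalized symmetric eigenproblem.
c1 = eigh(Q, B, eigvals_only=True)[0]
assert 0 < c1 < 1         # the coercivity constant is strictly positive

# With xi = 0 coercivity fails: constant functions give a zero eigenvalue.
c0 = eigh(K, B, eigvals_only=True)[0]
assert abs(c0) < 1e-8
```

Dropping the potential ($\xi\equiv 0$) puts the constants in the kernel of the form, so the smallest relative eigenvalue collapses to zero; this is precisely why hypothesis $H(\xi)$ requires $\xi\not\equiv 0$.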
Now we can introduce our hypotheses on $f(z,x,y)$: $H(f):$ $f:\Omega\times{\mathbb R}\times{\mathbb R}^N\rightarrow{\mathbb R}$ is a Carathéodory function such that $f(z,0,y)=0$ for almost all $z\in\Omega$ and all $y\in{\mathbb R}^N$, and the following properties hold: - there exists a function $w\in W^{1,p}(\Omega)\cap C(\overline{\Omega})$ such that $\Delta_p w\in L^{p'}(\Omega)$ and $$\begin{aligned} &&0<\hat{c}{\leqslant}w(z)\ \mbox{for all}\ z\in\overline{\Omega},-\Delta_pw(z)+\xi(z)w(z)^{p-1}{\geqslant}0\ \mbox{for almost all}\ z\in\Omega,\\ &&w(z)^{-\gamma}+f(z,w(z),y){\leqslant}-c^*<0\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ y\in{\mathbb R}^N, \end{aligned}$$ and if $\rho=||w||_{\infty}$, there exists $\hat{a}_{\rho}\in L^{\infty}(\Omega)$ such that $$|f(z,x,y)|{\leqslant}\hat{a}_{\rho}(z)[1+|y|^{p-1}]$$ for almost all $z\in\Omega$, all $0{\leqslant}x{\leqslant}\rho,$ and all $y\in{\mathbb R}^N$; - there exists $\delta_0>0$ such that $f(z,x,y){\geqslant}\tilde{c}_{\delta}>0$ for almost all $z\in\Omega$ and all $0<\delta{\leqslant}x{\leqslant}\delta_0$, $y\in{\mathbb R}^N$; - there exists $\hat{\xi}_{\rho}>0$ such that for almost all $z\in\Omega$ and all $y\in{\mathbb R}^N$ the mapping $$x\mapsto f(z,x,y)+\hat{\xi}_{\rho}x^{p-1}$$ is nondecreasing on $[0,\rho]$, and for almost all $z\in\Omega$, all $0{\leqslant}x{\leqslant}\rho$, $y\in{\mathbb R}^N$, and $t\in(0,1)$, we have $$\label{eq5} f(z,\frac{1}{t}x,y){\leqslant}\frac{1}{t^{p-1}}f(z,x,y).$$ Our aim is to produce positive solutions and all the above hypotheses concern the positive semi-axis ${\mathbb R}_+=\left[0,+\infty\right)$. So, for simplicity, we may assume that $$\label{eq6} f(z,x,y)=0\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ x{\leqslant}0,\ y\in{\mathbb R}^N.$$ Hypothesis $H(f)(i)$ is satisfied if, for example, there exists $\eta\in(0,+\infty)$ such that $\eta^{-\gamma}+f(z,\eta,y){\leqslant}-c^*<0$ for almost all $z\in\Omega$ and all $y\in{\mathbb R}^N$. 
Hypotheses $H(f)(i),(ii)$ together determine the oscillatory behavior of $f(z,\cdot,y)$ near $0^+$. Hypothesis $H(f)(iii)$ is satisfied if we set $f(z,x,y)=0$ for almost all $z\in\Omega$ and all $x{\geqslant}w(z)$, $y\in{\mathbb R}^N$ and require that the function $x\mapsto\frac{f(z,x,y)}{x^{p-1}}$ is nonincreasing on $\left(0,w(z)\right]$ for almost all $z\in\Omega$ and all $y\in{\mathbb R}^N$. \[ex\] The following function satisfies hypotheses $H(f)$. For the sake of simplicity we drop the $z$-dependence and require $\xi(z){\geqslant}c_0^*>0$ for almost all $z\in\Omega$: $$f(x,y)=(x^{p-1}-cx^{\tau-1})(1+|y|^{p-1})$$ for all $0{\leqslant}x{\leqslant}1$, $y\in{\mathbb R}^N$, with $1<p<\tau<\infty$, and $c<2^{\frac{1}{\tau-1}}$. Finally, we mention that $0<\gamma<1$. When the differential operator is singular (that is, $1<p<2$), we require that $\gamma{\leqslant}(p-1)^2$, which is equivalent to saying that $1+\frac{\gamma}{p-1}{\leqslant}p$. A singular problem ================== In this section we deal with the following purely singular Neumann problem: $$\label{eq7} \left\{\begin{array}{l} -\Delta_pu(z)+\xi(z)u(z)^{p-1}=u(z)^{-\gamma}\ \mbox{in}\ \Omega,\\ \displaystyle \frac{\partial u}{\partial n}=0\ \mbox{on}\ \partial\Omega,\ u>0. \end{array}\right\}$$ Recall that $\vartheta:W^{1,p}(\Omega)\rightarrow{\mathbb R}$ is the $C^1$-functional defined by $$\vartheta(u)=||Du||^p_p+\int_{\Omega}\xi(z)|u|^pdz\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ \[prop4\] If hypotheses $H(\xi)$ hold, then problem (\[eq7\]) has a unique positive solution $\bar{u}\in D_+$.
Let $\epsilon>0$ and consider the $C^1$-functional $\psi_{\epsilon}:W^{1,p}(\Omega)\rightarrow{\mathbb R}$ defined by $$\psi_{\epsilon}(u)=\frac{1}{p}\vartheta(u)-\frac{1}{1-\gamma}\int_{\Omega}[(u^+)^p+\epsilon]^{\frac{1-\gamma}{p}}dz\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ Using Lemma \[lem3\], we obtain $$\begin{aligned} &&\psi_{\epsilon}(u){\geqslant}\frac{c_1}{p}||u||^p-\frac{1}{1-\gamma}\int_{\Omega}(u^+)^{1-\gamma}dz-c_2\ \mbox{for some}\ c_2>0\\ &\Rightarrow&\psi_{\epsilon}(\cdot)\ \mbox{is coercive}. \end{aligned}$$ Using the Sobolev embedding theorem, we can easily see that the functional $\psi_{\epsilon}(\cdot)$ is sequentially weakly lower semicontinuous. So, by the Weierstrass-Tonelli theorem, we can find $u_{\epsilon}\in W^{1,p}(\Omega)$ such that $$\label{eq8} \psi_{\epsilon}(u_{\epsilon})=\inf\left\{\psi_{\epsilon}(u):u\in W^{1,p}(\Omega)\right\}.$$ Let $s\in(0,1)$. Then $$\begin{aligned} \label{eq9} \psi_{\epsilon}(s)&<&\left(\frac{s^p}{p}||\xi||_{\infty}-\frac{s^{1-\gamma}}{1-\gamma}\right)|\Omega|_N\ \mbox{(see hypothesis $H(\xi)$)}\nonumber\\ &<&\left(\frac{s^p}{p}||\xi||_{\infty}+\frac{1}{1-\gamma}(\epsilon^{\frac{1-\gamma}{p}}-s^{1-\gamma})\right)|\Omega|_N. \end{aligned}$$ If $s>2\epsilon^{1/p}$, then $$\begin{aligned} \label{eq10} &&\frac{s^p}{p}||\xi||_{\infty}+\frac{1}{1-\gamma}(\epsilon^{\frac{1-\gamma}{p}}-s^{1-\gamma})\nonumber\\ &<&\frac{s^p}{p}||\xi||_{\infty}-\frac{s^{1-\gamma}}{1-\gamma}\left(1-\frac{1}{2^{1-\gamma}}\right)=\tau(s). \end{aligned}$$ Recall that $s\in(0,1)$ and note that $0<1-\gamma<1<p$.
So, we can find small enough $\hat{s}\in(0,1)$ such that $$\label{eq11} \tau(\hat{s})<0.$$ Then (\[eq9\]), (\[eq10\]), (\[eq11\]) imply that for small enough $\epsilon\in\left(0,\left(\frac{\hat{s}}{2}\right)^p\right)$, we have $$\begin{aligned} &&\psi_{\epsilon}(\hat{s})<\psi_{\epsilon}(0)=-\frac{1}{1-\gamma}\epsilon^{\frac{1-\gamma}{p}}|\Omega|_N,\\ &\Rightarrow&\psi_{\epsilon}(u_{\epsilon})<\psi_{\epsilon}(0)\ \mbox{(see (\ref{eq8}))},\\ &\Rightarrow&u_{\epsilon}\neq 0. \end{aligned}$$ From (\[eq8\]) we have $$\begin{aligned} \label{eq12} &&\psi'_{\epsilon}(u_{\epsilon})=0,\nonumber\\ &\Rightarrow&\left\langle A(u_{\epsilon}),h\right\rangle+\int_{\Omega}\xi(z)|u_{\epsilon}|^{p-2}u_{\epsilon}hdz= \int_{\Omega}(u^+_{\epsilon})^{p-1}[(u^+_{\epsilon})^p+\epsilon]^{\frac{1-(\gamma+p)}{p}}hdz \end{aligned}$$ for all $h\in W^{1,p}(\Omega)$. In (\[eq12\]) we choose $h=-u^-_{\epsilon}\in W^{1,p}(\Omega)$. We obtain $$\begin{aligned} &&\vartheta(u^-_{\epsilon})=0,\\ &\Rightarrow&c_1||u^-_{\epsilon}||^p{\leqslant}0\ (\mbox{see Lemma \ref{lem3}}),\\ &\Rightarrow&u_{\epsilon}{\geqslant}0,\ u_{\epsilon}\neq 0. \end{aligned}$$ From (\[eq12\]), we have $$\label{eq13} \left\{\begin{array}{l} -\Delta_pu_{\epsilon}(z)+\xi(z)u_{\epsilon}(z)^{p-1}=u_{\epsilon}(z)^{p-1}[u_{\epsilon}(z)^p+\epsilon]^{\frac{1-(\gamma+p)}{p}}\ \mbox{for almost all}\ z\in\Omega,\\ \displaystyle \frac{\partial u_{\epsilon}}{\partial n}=0\ \mbox{on}\ \partial\Omega \end{array}\right\}$$ (see Papageorgiou & Rădulescu [@18]).
By (\[eq13\]) and Proposition 7 of Papageorgiou & Rădulescu [@19], we have $$u_{\epsilon}\in L^{\infty}(\Omega).$$ Then, invoking Theorem 2 of Lieberman [@15], we obtain $$u_{\epsilon}\in C_+\backslash\{0\}.$$ From (\[eq13\]) and hypothesis $H(\xi)$, we have $$\begin{aligned} &&\Delta_pu_{\epsilon}(z){\leqslant}||\xi||_{\infty}u_{\epsilon}(z)^{p-1}\ \mbox{for almost all}\ z\in\Omega,\\ &\Rightarrow&u_{\epsilon}\in D_+\ \mbox{by the nonlinear maximum principle} \end{aligned}$$ (see Gasinski & Papageorgiou [@3 p. 738] and Pucci & Serrin [@25 p. 120]). So, for small enough $\epsilon>0$, say $\epsilon\in(0,\epsilon_0)$, we obtain a solution $u_{\epsilon}\in D_+$ for problem (\[eq13\]). *Claim.* $\{u_{\epsilon}\}_{\epsilon\in(0,\epsilon_0)}\subseteq W^{1,p}(\Omega)$ is bounded. We argue by contradiction. So, suppose that the claim is not true. Then we can find $\{\epsilon_n\}_{n{\geqslant}1}\subseteq(0,\epsilon_0)$ and corresponding solutions $\{u_n=u_{\epsilon_n}\}_{n{\geqslant}1}\subseteq D_+$ of (\[eq13\]) such that $$\label{eq14} ||u_n||\rightarrow\infty\ \mbox{as}\ n\rightarrow\infty.$$ Let $y_n=\frac{u_n}{||u_n||},\ n\in{\mathbb N}$. Then $$\label{eq15} ||y_n||=1\ \mbox{and}\ y_n{\geqslant}0\ \mbox{for all}\ n\in{\mathbb N}.$$ From (\[eq12\]), we obtain $$\begin{aligned} \label{eq16} &&\left\langle A(y_n),h\right\rangle+\int_{\Omega}\xi(z)y^{p-1}_nhdz=\int_{\Omega}y^{p-1}_n[u^p_n+\epsilon_n]^{\frac{1-(\gamma+p)}{p}}hdz\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega),\ n\in{\mathbb N}.\nonumber \end{aligned}$$ In (\[eq16\]) we choose $h=y_n\in W^{1,p}(\Omega)$.
Then $$\label{eq17} \vartheta(y_n)=\int_{\Omega}\frac{y^p_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}dz\ \mbox{for all}\ n\in{\mathbb N}.$$ From the first part of the proof, we know that these solutions $u_n$ can be generated by applying the direct method of the calculus of variations to the functionals $\psi_{\epsilon_n}(\cdot)$ and we get $$\begin{aligned} \label{eq18} &&\psi_{\epsilon_n}(u_n)<0\ \mbox{for all}\ n\in{\mathbb N},\nonumber\\ &\Rightarrow&\vartheta(u_n)-\frac{p}{1-\gamma}\int_{\Omega}[u^p_n+\epsilon_n]^{\frac{1-\gamma}{p}}dz<0\ \mbox{for all}\ n\in{\mathbb N}. \end{aligned}$$ It follows from (\[eq17\]) and (\[eq18\]) that $$\begin{aligned} \label{eq19} \int_{\Omega}\frac{y^p_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}dz&<&\frac{p}{1-\gamma}\int_{\Omega}\frac{[u^p_n+\epsilon_n]^{\frac{1-\gamma}{p}}}{||u_n||^p}dz\nonumber\\ &{\leqslant}&\frac{p}{1-\gamma}\int_{\Omega}\frac{u^{1-\gamma}_n+\epsilon_n^{\frac{1-\gamma}{p}}}{||u_n||^p}dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty\ (\mbox{see (\ref{eq14})}). \end{aligned}$$ Then by (\[eq17\]) and Lemma \[lem3\], we have $$\begin{aligned} &&c_1||y_n||^p{\leqslant}\int_{\Omega}\frac{y^p_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}dz,\\ &\Rightarrow&y_n\rightarrow 0\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{as}\ n\rightarrow\infty\ (\mbox{see (\ref{eq19})}), \end{aligned}$$ which contradicts (\[eq15\]). This proves the claim. Consider a sequence $\{\epsilon_n\}_{n{\geqslant}1}\subseteq (0,\epsilon_0)$ such that $\epsilon_n\rightarrow 0^+$. As before, let $\{u_n=u_{\epsilon_n}\}_{n{\geqslant}1}\subseteq D_+$ be the corresponding solutions. 
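The middle estimate in (\[eq19\]) rests on an elementary subadditivity inequality; since $0<\gamma<1<p$, the exponent $s=\frac{1-\gamma}{p}$ lies in $(0,1)$:

```latex
% For s in (0,1) and a, b >= 0 we have (a+b)^s <= a^s + b^s
% (concavity of t -> t^s together with its value 0 at t = 0).
% With s = (1-gamma)/p, a = u_n(z)^p, b = eps_n this gives pointwise
[u^p_n+\epsilon_n]^{\frac{1-\gamma}{p}}
  {\leqslant} u^{1-\gamma}_n+\epsilon_n^{\frac{1-\gamma}{p}},
% which, after integration over Omega, is exactly the bound used in (eq19).
```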
On account of the claim, we may assume that $$\label{eq20} u_n\stackrel{w}{\rightarrow}\bar{u}\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{and}\ u_n\rightarrow \bar{u}\ \mbox{in}\ L^p(\Omega)\ \mbox{as}\ n\rightarrow\infty,\ \bar{u}{\geqslant}0.$$ We know that $$\begin{aligned} \label{eq21} &&\left\langle A(u_n),h\right\rangle+\int_{\Omega}\xi(z)u^{p-1}_nhdz=\int_{\Omega} \frac{u^{p-1}_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}hdz\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega),\ n\in{\mathbb N}\nonumber. \end{aligned}$$ Choosing $h=u_n\in W^{1,p}(\Omega)$ in (\[eq21\]), we obtain $$\label{eq22} -\vartheta(u_n)+\int_{\Omega}\frac{u^p_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}dz=0\ \mbox{for all}\ n\in{\mathbb N}.$$ Moreover, from the first part of the proof, we have $$\label{eq23} \vartheta(u_n)-\frac{p}{1-\gamma}\int_{\Omega}[u^p_n+\epsilon_n]^{\frac{1-\gamma}{p}}dz{\leqslant}-c_2<0\ \mbox{for all}\ n\in{\mathbb N}.$$ We add (\[eq22\]) and (\[eq23\]) and obtain $$\begin{aligned} \label{eq24} 0{\leqslant}\int_{\Omega}\frac{u^p_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}dz&{\leqslant}&-c_2+\frac{p}{1-\gamma}\int_{\Omega}[u^p_n+\epsilon_n]^{\frac{1-\gamma}{p}}dz\nonumber\\ &{\leqslant}&-c_2+\frac{p}{1-\gamma}\int_{\Omega}[u^{1-\gamma}_n+\epsilon^{\frac{1-\gamma}{p}}_n]dz\ \mbox{for all}\ n\in{\mathbb N}. \end{aligned}$$ If $\bar{u}=0$ (see (\[eq20\])), then $$\int_{\Omega}[u^{1-\gamma}_n+\epsilon_n^{\frac{1-\gamma}{p}}]dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty.$$ This together with (\[eq24\]) leads to a contradiction. Therefore $$\bar{u}\neq 0.$$ On account of (\[eq20\]) and by passing to a further subsequence if necessary, we may assume that $$\label{eq25} \left.\begin{array}{l} u_n(z)\rightarrow\bar{u}(z)\ \mbox{for almost all}\ z\in\Omega\ \mbox{as}\ n\rightarrow\infty,\\ 0{\leqslant}u_n(z){\leqslant}k(z)\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ n\in{\mathbb N},\ \mbox{with}\ k\in L^p(\Omega).
\end{array}\right\}$$ We can always assume that $$\label{eq26} \max\{1,\epsilon_0\}{\leqslant}k(z)\ \mbox{for almost all}\ z\in\Omega.$$ For every $n\in{\mathbb N}$, we introduce the following measurable subsets of $\Omega$ $$\Omega^1_n=\{z\in\Omega:(u_n-\bar{u})(z)>0\}\ \mbox{and}\ \Omega^2_n=\{z\in\Omega:(u_n-\bar{u})(z)<0\},\ n\in{\mathbb N}.$$ Then we have $$\begin{aligned} \label{eq27} &&\int_{\Omega}\frac{u^{p-1}_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}(u_n-\bar{u})dz\nonumber\\ &=&\int_{\Omega^1_n}\frac{u^{p-1}_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}(u_n-\bar{u})dz+\int_{\Omega^2_n}\frac{u^{p-1}_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}(u_n-\bar{u})dz\nonumber\\ &{\leqslant}&\int_{\Omega^1_n}\frac{u_n-\bar{u}}{u^{\gamma}_n}dz+\int_{\Omega^2_n}\frac{1}{2k^{\gamma}}\left(\frac{u_n}{k}\right)^{p-1}(u_n-\bar{u})dz\ \mbox{for all}\ n\in{\mathbb N}\ (\mbox{see (\ref{eq25}), (\ref{eq26})}). \end{aligned}$$ From (\[eq25\]) we know that $$\begin{aligned} &&0{\leqslant}\bar{u}(z){\leqslant}k(z)\ \mbox{for almost all}\ z\in\Omega,\label{eq28}\\ &&-u_n(z)^{-\gamma}{\leqslant}-k(z)^{-\gamma}\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ n\in{\mathbb N}.\label{eq29} \end{aligned}$$ It follows from (\[eq28\]), (\[eq29\]) that $$\label{eq30} -\bar{u}(z)u_n(z)^{-\gamma}{\leqslant}-k(z)^{1-\gamma}\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ n\in{\mathbb N}.$$ Then for all $n\in{\mathbb N}$ we have $$\begin{aligned} \label{eq31} &&\int_{\Omega^1_n}\frac{u_n-\bar{u}}{u^{\gamma}_n}dz= \int_{\Omega^1_n}[u^{1-\gamma}_n-\bar{u}u^{-\gamma}_n]dz\nonumber\\ &&\mbox{for all }n\in{\mathbb N}\ (\mbox{see (\ref{eq25}), (\ref{eq30})}),\nonumber\\ &\Rightarrow&\limsup\limits_{n\rightarrow\infty}\int_{\Omega^1_n}\frac{u_n-\bar{u}}{u^{\gamma}_n}dz{\leqslant}0. 
\end{aligned}$$ Also, from (\[eq25\]) and (\[eq20\]), we can see that $$\label{eq32} \int_{\Omega^2_n}\frac{1}{2k^{\gamma}}\left(\frac{u_n}{k}\right)^{p-1}(u_n-\bar{u})dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty.$$ We return to (\[eq27\]), pass to the limit as $n\rightarrow\infty$, and use (\[eq31\]) and (\[eq32\]). We obtain $$\label{eq33} \limsup\limits_{n\rightarrow\infty}\int_{\Omega}\frac{u_n^{p-1}}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}(u_n-\bar{u})dz{\leqslant}0.$$ In (\[eq21\]) we choose $h=u_n-\bar{u}\in W^{1,p}(\Omega)$. Then $$\begin{aligned} \label{eq34} &&\left\langle A(u_n),u_n-\bar{u}\right\rangle+\int_{\Omega}\xi(z)u_n^{p-1}(u_n-\bar{u})dz=\int_{\Omega}\frac{u_n^{p-1}}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}(u_n-\bar{u})dz\nonumber\\ &&\mbox{for all}\ n\in{\mathbb N},\nonumber\\ &\Rightarrow&\limsup\limits_{n\rightarrow\infty}\left\langle A(u_n),u_n-\bar{u}\right\rangle{\leqslant}0\ \mbox{(see (\ref{eq20}), (\ref{eq33}))},\nonumber\\ &\Rightarrow&u_n\rightarrow\bar{u}\ \mbox{in}\ W^{1,p}(\Omega)\ (\mbox{see Proposition \ref{prop2}}),\ \bar{u}{\geqslant}0,\ \bar{u}\neq 0. \end{aligned}$$ Using in (\[eq21\]) as a test function $$h=\frac{u_n^{p-1}}{(u_n^p+\epsilon_n)^{\frac{p+\gamma-1}{p}\frac{p'}{p}}}\in W^{1,p}(\Omega)$$ (recall that $u_n\in D_+$) and our hypothesis on $\gamma$, we can infer that $$\left\{\frac{u^{p-1}_n}{(u^p_n+\epsilon_n)^{\frac{p+\gamma-1}{p}}}\right\}_{n{\geqslant}1}\subseteq L^{p'}(\Omega)\ \mbox{is bounded}.$$ Also, we have $$\frac{u^{p-1}_n}{(u^p_n+\epsilon_n)^{\frac{p+\gamma-1}{p}}}\rightarrow\bar{u}^{-\gamma}\ \mbox{for almost all}\ z\in\Omega\ \mbox{as}\ n\rightarrow\infty\ (\mbox{see (\ref{eq25})}).$$ Then Problem 1.19 in Gasinski & Papageorgiou [@5 p.
46] implies that $$\begin{aligned} \label{eq35} &&\frac{u^{p-1}_n}{(u^p_n+\epsilon_n)^{\frac{p+\gamma-1}{p}}}\stackrel{w}{\rightarrow}\bar{u}^{-\gamma}\ \mbox{in}\ L^{p'}(\Omega),\nonumber\\ &\Rightarrow&\int_{\Omega}\frac{u^{p-1}_n}{[u^p_n+\epsilon_n]^{\frac{p+\gamma-1}{p}}}hdz\rightarrow\int_{\Omega}\bar{u}^{-\gamma}hdz\ \mbox{for all}\ h\in W^{1,p}(\Omega). \end{aligned}$$ Passing to the limit as $n\rightarrow\infty$ in (\[eq21\]) and using (\[eq34\]) and (\[eq35\]), we obtain $$\label{eq36} \left\langle A(\bar{u}),h\right\rangle+\int_{\Omega}\xi(z)\bar{u}^{p-1}hdz=\int_{\Omega}\bar{u}^{-\gamma}hdz\ \mbox{for all}\ h\in W^{1,p}(\Omega).$$ In (\[eq36\]) we first choose $h=\frac{1}{[\bar{u}^p+\delta]^{\frac{p-1}{p}}}\in W^{1,p}(\Omega),\delta>0$. Then $$\begin{aligned} &&\int_{\Omega}\xi(z)\frac{\bar{u}^{p-1}}{[\bar{u}^p+\delta]^{\frac{p-1}{p}}}dz{\geqslant}\int_{\Omega}\frac{\bar{u}^{-\gamma}}{[\bar{u}^p+\delta]^{\frac{p-1}{p}}}dz,\\ &&\int_{\Omega}\frac{\bar{u}^{-\gamma}}{[\bar{u}^p+\delta]^{\frac{p-1}{p}}}dz{\leqslant}||\xi||_{\infty}|\Omega|_N\ (\mbox{see hypothesis}\ H(\xi)). \end{aligned}$$ We let $\delta\rightarrow 0^+$ and use Fatou’s lemma. Then $$\label{eq37} \int_{\Omega}\frac{1}{\bar{u}^{p+\gamma-1}}dz{\leqslant}||\xi||_{\infty}|\Omega|_N.$$ Next, we choose in (\[eq36\]) $h=\frac{1}{[\bar{u}^p+\delta]^{\frac{2(p-1)+\gamma}{p}}}\in W^{1,p}(\Omega)$. Reasoning as above, we obtain via Fatou’s lemma as $\delta\rightarrow 0^+$ $$\begin{aligned} \int_{\Omega}\frac{\bar{u}^{-\gamma}}{\bar{u}^{2(p-1)+\gamma}}dz=\int_{\Omega}\frac{1}{\bar{u}^{2(p+\gamma-1)}}dz&{\leqslant}&\int_{\Omega}\xi(z)\frac{\bar{u}^{p-1}}{\bar{u}^{2(p-1)+\gamma}}dz\\ &=&\int_{\Omega}\xi(z)\frac{1}{\bar{u}^{p+\gamma-1}}dz\\ &{\leqslant}&||\xi||^2_{\infty}|\Omega|_N\ (\mbox{see (\ref{eq37})}).
\end{aligned}$$ Continuing in this way, we obtain $$\label{eq38} \int_{\Omega}\frac{1}{\bar{u}^{k(p+\gamma-1)}}dz{\leqslant}||\xi||^k_{\infty}|\Omega|_N\ \mbox{for all}\ k\in{\mathbb N}.$$ Therefore we can infer that $$\begin{aligned} &&\bar{u}^{-(p+\gamma-1)}\in L^{\tau}(\Omega)\ \mbox{for all}\ \tau{\geqslant}1,\\ &&\limsup\limits_{\tau\rightarrow+\infty}||\bar{u}^{-(p+\gamma-1)}||_{\tau}<+\infty. \end{aligned}$$ Then Problem 3.104 in Gasinski & Papageorgiou [@4 p. 477] implies that $$\bar{u}^{-(p+\gamma-1)}\in L^{\infty}(\Omega).$$ Note that $$\bar{u}^{-\gamma}=\bar{u}^{-(p+\gamma-1)}\bar{u}^{p-1}.$$ Therefore from (\[eq36\]) and Proposition 7 of Papageorgiou & Rădulescu [@19], we have $$\bar{u}\in L^{\infty}(\Omega).$$ Invoking Theorem 2 of Lieberman [@15], we have $$\bar{u}\in C_+\backslash\{0\}.$$ It follows by (\[eq36\]) that $$\begin{aligned} \label{eq39} &&-\Delta_p\bar{u}(z)+\xi(z)\bar{u}(z)^{p-1}=\bar{u}(z)^{-\gamma}\ \mbox{for almost all}\ z\in\Omega,\ \frac{\partial\bar{u}}{\partial n}=0\ \mbox{on}\ \partial\Omega\\ &&(\mbox{see Papageorgiou \& R\u{a}dulescu \cite{18}}),\nonumber\\ &\Rightarrow&\Delta_p\bar{u}(z){\leqslant}||\xi||_{\infty}\bar{u}(z)^{p-1}\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ &\Rightarrow&\bar{u}\in D_+\ (\mbox{by the nonlinear maximum principle (see (\cite[p. 738]{3} and \cite[p. 120]{25}))}).\nonumber \end{aligned}$$ Finally, we can show that the positive solution is unique. Suppose that $\bar{u}_0\in W^{1,p}(\Omega)$ is another positive solution of (\[eq7\]). Again we have $\bar{u}_0\in D_+$. Also $$\begin{aligned} &&0{\leqslant}\left\langle A(\bar{u})-A(\bar{u}_0),\bar{u}-\bar{u}_0\right\rangle+\int_{\Omega}\xi(z)(\bar{u}^{p-1}-\bar{u}_0^{p-1})(\bar{u}-\bar{u}_0)dz\\ &&=\int_{\Omega}(\bar{u}^{-\gamma}-\bar{u}_0^{-\gamma})(\bar{u}-\bar{u}_0)dz{\leqslant}0,\\ &\Rightarrow&\bar{u}=\bar{u}_0\ (\mbox{the function}\ x\mapsto\frac{1}{x^{\gamma}}\ \mbox{is strictly decreasing on}\ (0,+\infty)).
\end{aligned}$$ This proves the uniqueness of the positive solution $\bar{u}\in D_+$ of (\[eq7\]) and thus completes the proof of Proposition \[prop4\]. Existence of positive solutions =============================== Let $\bar{u}\in D_+$ be the unique positive solution of (\[eq7\]) produced by Proposition \[prop4\]. We choose $t\in(0,1)$ small enough such that $$\label{eq40} \tilde{u}=t\bar{u}{\leqslant}\min\{\hat{c},\delta_0\}\ \mbox{on}\ \overline{\Omega}\ (\mbox{see hypotheses $H(f)(i),(ii)$}).$$ Then, given any $v\in C^1(\overline{\Omega})$, we have $$\begin{aligned} \label{eq41} -\Delta_p\tilde{u}(z)+\xi(z)\tilde{u}(z)^{p-1}&=&t^{p-1}[-\Delta_p\bar{u}(z)+\xi(z)\bar{u}(z)^{p-1}]\nonumber\\ &=&t^{p-1}\bar{u}(z)^{-\gamma}\ (\mbox{see (\ref{eq39})})\nonumber\\ &{\leqslant}&\tilde{u}(z)^{-\gamma}+f(z,\tilde{u}(z),Dv(z))\ \mbox{for almost all}\ z\in\Omega\end{aligned}$$ (see (\[eq40\]) and hypothesis $H(f)(ii)$). Given $v\in C^1(\overline{\Omega})$, we consider the following nonlinear auxiliary Neumann problem: $$\label{eq42} \left\{\begin{array}{l} -\Delta_pu(z)+\xi(z)u(z)^{p-1}=u(z)^{-\gamma}+f(z,u(z),Dv(z))\ \mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n}=0\ \mbox{on}\ \partial\Omega,\ u>0. \end{array}\right\}$$ \[prop5\] If hypotheses $H(\xi),H(f)$ hold, then for every $v\in C^1(\overline{\Omega})$ problem (\[eq42\]) has a solution $u_v\in[\tilde{u},w]\cap C^1(\overline{\Omega})$, with $w(\,\cdot\,)$ being the function from hypothesis $H(f)(i)$. We introduce the following truncation of the reaction term in problem (\[eq42\]): $$\begin{aligned} \label{eq43} \hat{f}_v(z,x)=\left\{\begin{array}{l} \tilde{u}(z)^{-\gamma}+f(z,\tilde{u}(z),Dv(z))\ \mbox{if}\ x<\tilde{u}(z)\\ x^{-\gamma}+f(z,x,Dv(z))\ \mbox{if}\ \tilde{u}(z){\leqslant}x{\leqslant}w(z)\\ w(z)^{-\gamma}+f(z,w(z),Dv(z))\ \mbox{if}\ w(z)<x. \end{array}\right. \end{aligned}$$ Evidently, $\hat{f}_v(\cdot,\cdot)$ is a Carathéodory function.
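A remark on the truncation (\[eq43\]), preparing the coercivity argument: all three branches evaluate the $x$-dependence only inside the order interval $[\tilde{u},w]$. Assuming, as hypothesis $H(f)(i)$ provides, a uniform bound for $f(\cdot,x,Dv(\cdot))$ when $x$ ranges in $[0,||w||_{\infty}]$ (the constant $c_*$ below merely names this bound and is not from the text), we obtain:

```latex
% Uniform bound for the truncated reaction:
|\hat{f}_v(z,x)| {\leqslant} \tilde{u}(z)^{-\gamma}+c_*
  \quad \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in{\mathbb R},
% with tilde{u}^{-gamma} in L^infty(Omega), because tilde{u} in D_+ is
% bounded away from zero on \overline{Omega}. Consequently the primitive
% \int_0^x \hat{f}_v(z,s) ds has at most linear growth in x, which the
% p-growth of theta(.) dominates; this is what makes the truncated
% functional coercive.
```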
We set $\hat{F}_v(z,x)=\int^x_0\hat{f}_v(z,s)ds$ and consider the $C^1$-functional $\hat{\varphi}_v:W^{1,p}(\Omega)\rightarrow{\mathbb R}$ defined by $$\hat{\varphi}_v(u)=\frac{1}{p}\vartheta(u)-\int_{\Omega}\hat{F}_v(z,u(z))dz\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ It is clear from (\[eq43\]) that $\hat{\varphi}_v(\cdot)$ is coercive. Also, it is sequentially weakly lower semicontinuous. So, by the Weierstrass-Tonelli theorem, we can find $u_v\in W^{1,p}(\Omega)$ such that $$\begin{aligned} \label{eq44} &&\hat{\varphi}_v(u_v)=\inf\{\hat{\varphi}_v(u):u\in W^{1,p}(\Omega)\},\nonumber\\ &\Rightarrow&\hat{\varphi}'_v(u_v)=0,\nonumber\\ &\Rightarrow&\left\langle A(u_v),h\right\rangle+\int_{\Omega}\xi(z)|u_v|^{p-2}u_vhdz=\int_{\Omega}\hat{f}_v(z,u_v)hdz\ \mbox{for all}\ h\in W^{1,p}(\Omega). \end{aligned}$$ In (\[eq44\]) we first choose $h=(\tilde{u}-u_v)^+\in W^{1,p}(\Omega)$. We have $$\begin{aligned} \label{eq45} &&\left\langle A(u_v),(\tilde{u}-u_v)^+\right\rangle+\int_{\Omega}\xi(z)|u_v|^{p-2}u_v(\tilde{u}-u_v)^+dz\nonumber\\ &&=\int_{\Omega}[\tilde{u}^{-\gamma}+f(z,\tilde{u},Dv)](\tilde{u}-u_v)^+dz\ (\mbox{see (\ref{eq43})})\nonumber\\ &&{\geqslant}\left\langle A(\tilde{u}),(\tilde{u}-u_v)^+\right\rangle+\int_{\Omega}\xi(z)\tilde{u}^{p-1}(\tilde{u}-u_v)^+dz\ (\mbox{see (\ref{eq41})}),\nonumber\\ &\Rightarrow&0{\geqslant}\left\langle A(\tilde{u})-A(u_v),(\tilde{u}-u_v)^+\right\rangle+\int_{\Omega}\xi(z)(\tilde{u}^{p-1}-|u_v|^{p-2}u_v)(\tilde{u}-u_v)^+dz,\nonumber\\ &\Rightarrow&\tilde{u}{\leqslant}u_v. \end{aligned}$$ Next, we choose in (\[eq44\]) $h=(u_v-w)^+\in W^{1,p}(\Omega)$. 
Then $$\begin{aligned} \label{eq46} &&\left\langle A(u_v),(u_v-w)^+\right\rangle+\int_{\Omega}\xi(z)u_v^{p-1}(u_v-w)^+dz\ (\mbox{see (\ref{eq45})})\nonumber\\ &&=\int_{\Omega}[w^{-\gamma}+f(z,w,Dv)](u_v-w)^+dz\ (\mbox{see (\ref{eq43})})\nonumber\\ &&{\leqslant}\left\langle A(w),(u_v-w)^+\right\rangle+\int_{\Omega}\xi(z)w^{p-1}(u_v-w)^+dz\ (\mbox{see hypothesis}\ H(f)(i)),\nonumber\\ &\Rightarrow&\left\langle A(u_v)-A(w),(u_v-w)^+\right\rangle+\int_{\Omega}\xi(z)(u_v^{p-1}-w^{p-1})(u_v-w)^+dz{\leqslant}0,\nonumber\\ &\Rightarrow&u_v{\leqslant}w. \end{aligned}$$ It follows from (\[eq45\]) and (\[eq46\]) that $$\label{eq47} u_v\in[\tilde{u},w].$$ On account of (\[eq47\]), (\[eq43\]) and (\[eq44\]), we have $$\begin{aligned} \label{eq48} &&-\Delta_pu_v(z)+\xi(z)u_v(z)^{p-1}=u_v(z)^{-\gamma}+f(z,u_v(z),Dv(z))\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ &&\frac{\partial u_v}{\partial n}=0\ \mbox{on}\ \partial\Omega\\ &&(\mbox{see Papageorgiou \& R\u{a}dulescu \cite{18}}).\nonumber \end{aligned}$$ From (\[eq48\]) and Papageorgiou & Rădulescu [@19 Proposition 7], we have $$u_v\in L^{\infty}(\Omega).$$ Then Theorem 2 of Lieberman [@15] implies that $u_v\in D_+$. Therefore $$u_v\in[\tilde{u},w]\cap C^1(\overline{\Omega}).$$ The proof of Proposition \[prop5\] is now complete. We introduce the solution set $$S_v=\{u\in W^{1,p}(\Omega):u\ \mbox{is a solution of (\ref{eq42})},\ u\in[\tilde{u},w]\}.$$ By Proposition \[prop5\], we have $$\emptyset\neq S_v\subseteq[\tilde{u},w]\cap C^1(\overline{\Omega}).$$ In fact, we have the following stronger result for the elements of $S_v$. \[prop6\] If hypotheses $H(\xi),\, H(f)$ hold and $u\in S_v$, then $u\in {\rm int}_{C^1(\overline{\Omega})}[\tilde{u},w]$. Let $\tilde{\rho}=\min\limits_{\overline{\Omega}}\tilde{u}>0$ (recall that $\tilde{u}\in D_+$).
So, we can increase $\hat{\xi}_{\rho}>0$ postulated by hypothesis $H(f)(iii)$ in order to guarantee that for almost all $z\in\Omega$, the function $$x\mapsto x^{-\gamma}+f(z,x,Dv(z))+\hat{\xi}_{\rho}x^{p-1}$$ is nondecreasing on $[\tilde{\rho},\rho]\subseteq{\mathbb R}_+$. Let $\delta>0$ and set $\tilde{u}^{\delta}=\tilde{u}+\delta\in D_+$. Then $$\begin{aligned} &&-\Delta_p\tilde{u}^{\delta}+(\xi(z)+\hat{\xi}_{\rho})(\tilde{u}^{\delta})^{p-1}\\ &&{\leqslant}-\Delta_p\tilde{u}+(\xi(z)+\hat{\xi}_{\rho})\tilde{u}^{p-1}+\lambda(\delta)\ \mbox{with}\ \lambda(\delta)\rightarrow 0^+\ \mbox{as}\ \delta\rightarrow 0^+\\ &&{\leqslant}\tilde{u}^{-\gamma}+f(z,\tilde{u},Dv)+\hat{\xi}_{\rho}\tilde{u}^{p-1}\ \mbox{for}\ \delta>0\ \mbox{small enough}\\ &&(\mbox{since}\ f(z,\tilde{u},Dv){\geqslant}\tilde{c}_{\tilde{\rho}}>0\ \mbox{for almost all}\ z\in\Omega,\ \mbox{see}\ H(f)(i))\\ &&{\leqslant}u^{-\gamma}+f(z,u,Dv)+\hat{\xi}_{\rho}u^{p-1}\ (\mbox{since}\ \tilde{u}{\leqslant}u)\\ &&=-\Delta_pu+(\xi(z)+\hat{\xi}_{\rho})u^{p-1}\ \mbox{for almost all}\ z\in\Omega\ (\mbox{since}\ u\in S_v),\\ &\Rightarrow&\tilde{u}^{\delta}{\leqslant}u\ \mbox{for small enough}\ \delta>0,\\ &\Rightarrow&u-\tilde{u}\in D_+. \end{aligned}$$ Similarly, for $\delta>0$ let $u^{\delta}=u+\delta\in D_+$.
Then $$\begin{aligned} &&-\Delta_pu^{\delta}+(\xi(z)+\hat{\xi}_{\rho})(u^{\delta})^{p-1}\\ &&{\leqslant}-\Delta_pu+(\xi(z)+\hat{\xi}_{\rho})u^{p-1}+\tilde{\lambda}(\delta)\ \mbox{with}\ \tilde{\lambda}(\delta)\rightarrow 0^+\ \mbox{as}\ \delta\rightarrow 0^+\\ &&=u^{-\gamma}+f(z,u,Dv)+\hat{\xi}_{\rho}u^{p-1}+\tilde{\lambda}(\delta)\ (\mbox{since}\ u\in S_v)\\ &&{\leqslant}w^{-\gamma}+f(z,w,Dv)+\hat{\xi}_{\rho}u^{p-1}+\tilde{\lambda}(\delta)\ (\mbox{since}\ u{\leqslant}w)\\ &&{\leqslant}-c^*+\tilde{\lambda}(\delta)+\hat{\xi}_{\rho}u^{p-1}\ (\mbox{see hypothesis}\ H(f)(i))\\ &&{\leqslant}-\Delta_pw+(\xi(z)+\hat{\xi}_{\rho})w^{p-1}\ \mbox{for almost all}\ z\in\Omega\ \mbox{and for small enough}\ \delta>0\\ &&(\mbox{since}\ \tilde{\lambda}(\delta)\rightarrow 0^+\ \mbox{as}\ \delta\rightarrow 0^+\ \mbox{and due to hypothesis}\ H(f)(i)),\\ &\Rightarrow&u^{\delta}{\leqslant}w\ \mbox{for small enough}\ \delta>0,\\ &\Rightarrow&(w-u)(z)>0\ \mbox{for all}\ z\in\overline{\Omega}. \end{aligned}$$ Therefore we conclude that $$u\in {\rm int}_{C^1(\overline{\Omega})}[\tilde{u},w].$$ The proof of Proposition \[prop6\] is now complete. We can show that $S_v$ admits a smallest element, that is, there exists $\hat{u}_v\in S_v$ such that $\hat{u}_v{\leqslant}u$ for all $u\in S_v$. \[prop7\] If hypotheses $H(\xi),\,H(f)$ hold, then for every $v\in C^1(\overline{\Omega})$, the solution set $S_v$ admits a smallest element $$\hat{u}_v\in S_v.$$ Invoking Lemma 3.10 in Hu & Papageorgiou [@13 p.
178], we can find a sequence $\{u_n\}_{n{\geqslant}1}\subseteq S_v$ such that $${\rm essinf}\, S_v=\inf\limits_{n{\geqslant}1}u_n.$$ For every $n\in{\mathbb N}$, we have $$\begin{aligned} &&\left\langle A(u_n),h\right\rangle+\int_{\Omega}\xi(z)u_n^{p-1}hdz=\int_{\Omega}[u_n^{-\gamma}+f(z,u_n,Dv)]hdz\label{eq49}\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega),\ n\in{\mathbb N},\nonumber\\ &&\tilde{u}{\leqslant}u_n{\leqslant}w\ \mbox{for all}\ n\in{\mathbb N}.\label{eq50} \end{aligned}$$ It follows from (\[eq49\]) and (\[eq50\]) that $$\{u_n\}_{n{\geqslant}1}\subseteq W^{1,p}(\Omega)\ \mbox{is bounded}.$$ So, we may assume that $$\begin{aligned} \label{eq51} u_n\stackrel{w}{\rightarrow}\hat{u}_v\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{and}\ u_n\rightarrow\hat{u}_v\ \mbox{in}\ L^p(\Omega)\ \mbox{as}\ n\rightarrow\infty,\ \hat{u}_v\in[\tilde{u},w]. \end{aligned}$$ In (\[eq49\]) we choose $h=u_n-\hat{u}_v\in W^{1,p}(\Omega)$, pass to the limit as $n\rightarrow\infty$, and use (\[eq51\]). Then $$\begin{aligned} \label{eq52} &&\lim\limits_{n\rightarrow\infty}\left\langle A(u_n),u_n-\hat{u}_v\right\rangle=0\ (\mbox{see (\ref{eq50})}),\nonumber\\ &\Rightarrow&u_n\rightarrow\hat{u}_v\ \mbox{in}\ W^{1,p}(\Omega)\ (\mbox{see Proposition \ref{prop2}}). \end{aligned}$$ Therefore, if in (\[eq49\]) we pass to the limit as $n\rightarrow\infty$ and use (\[eq52\]), then $$\begin{aligned} &&\left\langle A(\hat{u}_v),h\right\rangle+\int_{\Omega}\xi(z)\hat{u}_v^{p-1}hdz=\int_{\Omega}[\hat{u}_v^{-\gamma}+f(z,\hat{u}_v,Dv)]hdz\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega),\\ &\Rightarrow&\hat{u}_v\in S_v\subseteq D_+\ \mbox{and }{\rm essinf}\, S_v=\hat{u}_v. \end{aligned}$$ The proof of Proposition \[prop7\] is now complete. We can define a map $\sigma:C^1(\overline{\Omega})\rightarrow C^1(\overline{\Omega})$ by $$\sigma(v)=\hat{u}_v.$$ This map is well-defined by Proposition \[prop7\] and any fixed point of $\sigma(\cdot)$ is a solution of problem (\[eq1\]).
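For orientation, we recall the version of the Leray–Schauder alternative (Theorem \[th1\]) that will be applied to $\sigma(\cdot)$; this is the standard statement, see Granas & Dugundji, [*Fixed Point Theory*]{}:

```latex
% Leray-Schauder alternative: let X be a Banach space and let
% sigma : X -> X be a compact map. Then at least one of the following holds:
%   (a) sigma has a fixed point u* = sigma(u*);
%   (b) the set
K=\{u\in X:u=t\sigma(u)\ \mbox{for some}\ 0<t<1\}
% is unbounded.
% Below we take X = C^1(\overline{Omega}) and sigma the minimal solution
% map, so proving that K is bounded yields the desired fixed point.
```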
To generate a fixed point for $\sigma(\cdot)$, we will use Theorem \[th1\] (the Leray-Schauder alternative principle). For this purpose, the next lemma will be useful. \[lem8\] If hypotheses $H(\xi),\,H(f)$ hold, $\{v_n\}_{n{\geqslant}1}\subseteq C^1(\overline{\Omega})$, $v_n\rightarrow v$ in $C^1(\overline{\Omega})$, and $u\in S_v$, then for every $n\in{\mathbb N}$ there exists $u_n\in S_{v_n}$ such that $u_n\rightarrow u$ in $C^1(\overline{\Omega})$. We consider the following nonlinear Neumann problem $$\label{eq53} \left\{\begin{array}{l} -\Delta_py(z)+\xi(z)|y(z)|^{p-2}y(z)=u(z)^{-\gamma}+f(z,u(z),Dv_n(z))\ \mbox{in}\ \Omega,\\ \displaystyle \frac{\partial y}{\partial n}=0\ \mbox{on}\ \partial\Omega. \end{array}\right\}$$ Since $u\in S_v\subseteq D_+$, we have $$\label{eq54} \left\{\begin{array}{l} k_n(z)=u(z)^{-\gamma}+f(z,u(z),Dv_n(z)){\geqslant}0\ \mbox{for almost all}\ z\in\Omega\ \mbox{and all}\ n\in{\mathbb N},\\ \{k_n\}_{n{\geqslant}1}\subseteq L^{\infty}(\Omega)\ \mbox{is bounded},k_n\neq 0\ \mbox{for all}\ n\in{\mathbb N}\\ (\mbox{see hypotheses}\ H(f)(i),(ii)). \end{array}\right\}$$ In problem (\[eq53\]), the left-hand side determines a maximal monotone coercive operator (see Lemma \[lem3\]), which is strictly monotone. Therefore, on account of (\[eq54\]), problem (\[eq53\]) admits a unique solution $y_n^0\in W^{1,p}(\Omega)$, $y^0_n\neq 0$. We have for all $n\in{\mathbb N}$ $$\begin{aligned} \label{eq55} &&\left\langle A(y^0_n),h\right\rangle+\int_{\Omega}\xi(z)|y^0_n|^{p-2}y^0_nhdz=\int_{\Omega}k_n(z)hdz\ \mbox{for all}\ h\in W^{1,p}(\Omega). \end{aligned}$$ In (\[eq55\]) we choose $h=-(y^0_n)^-\in W^{1,p}(\Omega)$. Then $$\begin{aligned} &&\vartheta((y^0_n)^-){\leqslant}0\ (\mbox{see (\ref{eq54})}),\\ &\Rightarrow&c_1||(y^0_n)^-||^p{\leqslant}0\ (\mbox{see Lemma \ref{lem3}}),\\ &\Rightarrow&y^0_n{\geqslant}0,\ y^0_n\neq 0\ \mbox{for all}\ n\in{\mathbb N}. 
\end{aligned}$$ Also, it is clear from (\[eq54\]) and (\[eq55\]) that $$\{y^0_n\}_{n{\geqslant}1}\subseteq W^{1,p}(\Omega)\ \mbox{is bounded.}$$ Invoking Proposition 7 of Papageorgiou & Rădulescu [@19], we have $$\label{eq56} y^0_n\in L^{\infty}(\Omega)\ \mbox{and}\ ||y^0_n||_{\infty}{\leqslant}c_5\ \mbox{for some}\ c_5>0\ \mbox{and all}\ n\in{\mathbb N}.$$ Then (\[eq53\]) and Theorem 2 of Lieberman [@15] imply that there exist $\alpha\in(0,1)$ and $c_6>0$ such that $$\label{eq57} y^0_n\in C^{1,\alpha}(\overline{\Omega})\ \mbox{and}\ ||y^0_n||_{C^{1,\alpha}(\overline{\Omega})}{\leqslant}c_6\ \mbox{for all}\ n\in{\mathbb N}.$$ Recall that $C^{1,\alpha}(\overline{\Omega})$ is compactly embedded in $C^1(\overline{\Omega})$. So, from (\[eq57\]) we see that we can find a subsequence $\{y^0_{n_k}\}_{k{\geqslant}1}$ of $\{y^0_n\}_{n{\geqslant}1}$ such that $$\label{eq58} y^0_{n_k}\rightarrow y^0\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ k\rightarrow\infty,\ y^0{\geqslant}0.$$ Note that $$\label{eq59} k_n\rightarrow k\ \mbox{in}\ L^{p'}(\Omega)\ \mbox{with}\ k(z)=u(z)^{-\gamma}+f(z,u(z),Dv(z)).$$ Using (\[eq55\]) (for the $y^0_{n_k}$’s) and (\[eq58\]), (\[eq59\]), we obtain $$\begin{aligned} \label{eq60} &&\left\langle A(y^0),h\right\rangle+\int_{\Omega}\xi(z)(y^0)^{p-1}hdz=\int_{\Omega}k(z)hdz\ \mbox{for all}\ h\in W^{1,p}(\Omega),\nonumber\\ &\Rightarrow&-\Delta_py^0(z)+\xi(z)y^0(z)^{p-1}=u(z)^{-\gamma}+f(z,u(z),Dv(z))\ \mbox{for almost all}\ z\in\Omega,\\ &&\frac{\partial y^0}{\partial n}=0\ \mbox{on}\ \partial\Omega.\nonumber \end{aligned}$$ Problem (\[eq60\]) admits a unique solution. Since $u\in S_v$, $u$ solves (\[eq60\]) and so $y^0=u$. 
Therefore for the initial sequence we have $$\label{eq61} y^0_n\rightarrow u\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty.$$ Next, we consider the following nonlinear Neumann problem $$\begin{aligned} \left\{\begin{array}{l} -\Delta_py(z)+\xi(z)|y(z)|^{p-2}y(z)=y^0_n(z)^{-\gamma}+f(z,y^0_n(z),Dv_n(z))\ \mbox{in}\ \Omega,\\ \displaystyle \frac{\partial y}{\partial n}=0\ \mbox{on}\ \partial\Omega. \end{array}\right\} \end{aligned}$$ Evidently, this problem has a unique solution $y^1_n\in D_+$ and $$y^1_n\rightarrow u\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty\ (\mbox{see (\ref{eq61})}).$$ Continuing in this way, we produce a sequence $\{y^k_n\}_{k,n\in{\mathbb N}}$ such that $$\label{eq62} \left\{\begin{array}{l} -\Delta_py^k_n(z)+\xi(z)y^k_n(z)^{p-1}=y^{k-1}_n(z)^{-\gamma}+f(z,y^{k-1}_n(z),Dv_n(z))\\ \mbox{for almost all}\ z\in\Omega,\\ \displaystyle \frac{\partial y^k_n}{\partial n}=0\ \mbox{on}\ \partial\Omega,\ k,n\in{\mathbb N} \end{array}\right\}$$ $$\label{eq63} \mbox{and}\ y^k_n\rightarrow u\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty\ \mbox{for all}\ k\in{\mathbb N}.$$ From (\[eq59\]), (\[eq60\]) and Theorem 2 of Lieberman [@15], we can deduce as before that $$\{y^k_n\}_{k\in {\mathbb N}}\subseteq C^1(\overline{\Omega})\ \mbox{is relatively compact.}$$ So, we can find a subsequence $\{y^{k_m}_n\}_{m\in{\mathbb N}}$ of $\{y^k_n\}_{k\in{\mathbb N}}$ ($n\in{\mathbb N}$ is fixed) such that $$y^{k_m}_n\rightarrow\hat{y}_n\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ m\rightarrow\infty,\ n\in{\mathbb N}.$$ From (\[eq62\]) in the limit we obtain $$\label{eq64} \left\{\begin{array}{l} -\Delta_p\hat{y}_n(z)+\xi(z)\hat{y}_n(z)^{p-1}=\hat{y}_n(z)^{-\gamma}+f(z,\hat{y}_n(z),Dv_n(z))\ \mbox{for almost all}\ z\in\Omega,\\ \displaystyle \frac{\partial\hat{y}_n}{\partial n}=0\ \mbox{on}\ \partial\Omega.
\end{array}\right\}$$ Then, using Theorem 2 of Lieberman [@15] as before, and the double limit lemma (see Gasinski & Papageorgiou [@4 Problem 1.175, p. 61]) we obtain $$\begin{aligned} &&\hat{y}_n\rightarrow u\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty,\\ &\mbox{and}&\hat{y}_n\in S_{v_n}\ \mbox{for}\ n{\geqslant}n_0\ (\mbox{see Proposition \ref{prop6}}). \end{aligned}$$ The proof of Lemma \[lem8\] is now complete. Using this lemma we can show that the minimal solution map $\sigma(\cdot)$ is compact. \[prop9\] If hypotheses $H(\xi),\,H(f)$ hold, then the minimal solution map $\sigma:C^1(\overline{\Omega})\rightarrow C^1(\overline{\Omega})$ defined by $\sigma(v)=\hat{u}_v$ is compact. We first show that $\sigma(\cdot)$ is continuous. To this end, let $v_n\rightarrow v$ in $C^1(\overline{\Omega})$ and $\hat{u}_n=\hat{u}_{v_n}=\sigma(v_n)$, $n\in{\mathbb N}$. We have $$\begin{aligned} \label{eq65} &&\left\langle A(\hat{u}_n),h\right\rangle+\int_{\Omega}\xi(z)\hat{u}^{p-1}_nhdz=\int_{\Omega}[\hat{u}_n^{-\gamma}+f(z,\hat{u}_n,Dv_n)]hdz\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega),\ n\in{\mathbb N}.\nonumber \end{aligned}$$ Choosing $h=\hat{u}_n\in W^{1,p}(\Omega)$, we obtain $$\begin{aligned} &&||D\hat{u}_n||^p_p+\int_{\Omega}\xi(z)\hat{u}^p_ndz{\leqslant}\int_{\Omega}c_7[\tilde{u}^{-\gamma}+1]dz\ \mbox{for some}\ c_7>0,\ \mbox{and all}\ n\in{\mathbb N}\\ &&(\mbox{since}\ \tilde{u}{\leqslant}\hat{u}_n{\leqslant}w\ \mbox{for all}\ n\in{\mathbb N}\ \mbox{and due to hypothesis}\ H(f)(ii)),\\ &\Rightarrow&c_1||\hat{u}_n||^p{\leqslant}c_8\ \mbox{for some}\ c_8>0\ \mbox{and all}\ n\in{\mathbb N}\ (\mbox{see Lemma \ref{lem3}}),\\ &\Rightarrow&\{\hat{u}_n\}_{n\in{\mathbb N}}\subseteq W^{1,p}(\Omega)\ \mbox{is bounded}.
\end{aligned}$$ Invoking Proposition 7 of Papageorgiou & Rădulescu [@19], we have $$||\hat{u}_n||_{\infty}{\leqslant}c_9\ \mbox{for some}\ c_9>0\ \mbox{and all}\ n\in{\mathbb N}.$$ Then Theorem 2 of Lieberman [@15] implies that we can find $\beta\in(0,1)$ and $c_{10}>0$ such that $$\label{eq66} \hat{u}_n\in C^{1,\beta}(\overline{\Omega})\ \mbox{and}\ ||\hat{u}_n||_{C^{1,\beta}(\overline{\Omega})}{\leqslant}c_{10}\ \mbox{for all}\ n\in{\mathbb N}.$$ The compact embedding of $C^{1,\beta}(\overline{\Omega})$ into $C^1(\overline{\Omega})$ and (\[eq66\]) imply that at least for a subsequence, we have $$\label{eq67} \hat{u}_n\rightarrow\hat{u}\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty.$$ Passing to the limit as $n\rightarrow\infty$ in (\[eq65\]), we can infer that $\hat{u}\in S_v$. We know that $\sigma(v)\in S_v$ and so by Lemma \[lem8\], we can find $u_n\in S_{v_n}$ (for all $n\in{\mathbb N}$) such that $$\label{eq68} u_n\rightarrow\sigma(v)\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow+\infty.$$ We have $$\begin{aligned} &&\hat{u}_n{\leqslant}u_n\ \mbox{for all}\ n\in{\mathbb N},\\ &\Rightarrow&\hat{u}{\leqslant}\sigma(v),\\ &\Rightarrow&\sigma(v)=\hat{u}\ (\mbox{since}\ \hat{u}\in S_v). \end{aligned}$$ So, for the original sequence $\{\hat{u}_n=\sigma(v_n)\}_{n\in{\mathbb N}}\subseteq C^1(\overline{\Omega})$, we have $$\begin{aligned} &&\sigma(v_n)=\hat{u}_n\rightarrow\hat{u}=\sigma(v)\ \mbox{in}\ C^1(\overline{\Omega}),\\ &\Rightarrow&\sigma(\cdot)\ \mbox{is continuous}. \end{aligned}$$ Next, let $B\subseteq C^1(\overline{\Omega})$ be bounded. As before, we obtain $$\begin{aligned} &&\sigma(B)\subseteq W^{1,p}(\Omega)\ \mbox{is bounded},\\ &\Rightarrow&\sigma(B)\subseteq L^{\infty}(\Omega)\ \mbox{is bounded (see \cite{19})}. \end{aligned}$$ Then by Lieberman [@15] we conclude that $$\overline{\sigma(B)}\subseteq C^1(\overline{\Omega})\ \mbox{is compact}.$$ This proves that the minimal solution map $\sigma(\cdot)$ is compact. 
The proof of Proposition \[prop9\] is now complete. Now using Theorem \[th1\] (the Leray-Schauder alternative principle), we will produce a positive smooth solution for problem (\[eq1\]). \[th10\] If hypotheses $H(\xi),\,H(f)$ hold, then problem (\[eq1\]) admits a positive solution $u^*\in D_+$. We consider the minimal solution map $\sigma: C^1(\overline{\Omega})\rightarrow C^1(\overline{\Omega})$. From Proposition \[prop9\] we know that $\sigma(\cdot)$ is compact. Let $$K=\{u\in C^1(\overline{\Omega}):u=t\sigma(u),0<t<1\}.$$ We claim that $K\subseteq C^1(\overline{\Omega})$ is bounded. So, let $u\in K$. We have $$\frac{1}{t}u=\sigma(u)\ \mbox{with}\ 0<t<1.$$ Then $$\begin{aligned} \label{eq69} &&\left\langle A(u),h\right\rangle+\int_{\Omega}\xi(z)u^{p-1}hdz=t^{p-1}\int_{\Omega}\left[\frac{t^{\gamma}}{u^{\gamma}}+f(z,\frac{1}{t}u,Du)\right]hdz\\ &&\mbox{for all}\ h\in W^{1,p}(\Omega).\nonumber \end{aligned}$$ From (\[eq15\]) (see hypothesis $H(f)(iii)$), we have $$\label{eq70} f(z,\frac{1}{t}u(z),Du(z)){\leqslant}\frac{1}{t^{p-1}}f(z,u(z),Du(z))\ \mbox{for almost all}\ z\in\Omega.$$ Using (\[eq70\]) in (\[eq69\]) and recalling that $\tilde{u}{\leqslant}u,\ 0<t<1$, we obtain $$\label{eq71} \left\langle A(u),h\right\rangle+\int_{\Omega}\xi(z)u^{p-1}hdz{\leqslant}\int_{\Omega}\left[\frac{1}{\tilde{u}^{\gamma}}+\hat{a}_0(z)\right]hdz$$ for all $h\in W^{1,p}(\Omega)$ and some $\hat{a}_0\in L^{\infty}(\Omega)$ (see hypothesis $H(f)(i)$). In (\[eq71\]) we choose $h=u\in W^{1,p}(\Omega)$. Then $$\begin{aligned} &&\vartheta(u){\leqslant}c_{11}\ \mbox{for some}\ c_{11}>0\ (\mbox{recall}\ \tilde{u}\in D_+),\\ &&c_1||u||^p{\leqslant}c_{11}\ \mbox{for all}\ u\in K\ (\mbox{see Lemma \ref{lem3}}),\\ &\Rightarrow&K\subseteq W^{1,p}(\Omega)\ \mbox{is bounded}. 
\end{aligned}$$ Next, as before, the nonlinear regularity theory implies that $$K\subseteq C^1(\overline{\Omega})\ \mbox{is bounded (in fact, relatively compact)}.$$ So, we can apply Theorem \[th1\] (the Leray-Schauder principle) and produce $u^*\in C^1(\overline{\Omega})$ such that $u^*=\sigma(u^*)$. Therefore $u^*\in D_+$ is a positive smooth solution of problem (\[eq1\]). The proof of Theorem \[th10\] is now complete. The authors thank the referee for corrections and remarks. This research was supported by the Slovenian Research Agency grants P1-0292, J1-8131, N1-0064, N1-0083, and N1-0114. [99]{} D. de Figueiredo, M. Girardi, M. Matzeu, Semilinear elliptic equations with dependence on the gradient via mountain-pass techniques, [*Differential Integral Equations*]{} [**17**]{} (2004), 119-126. L. Gasinski, N.S. Papageorgiou, [*Nonlinear Analysis*]{}, Chapman & Hall/CRC, Boca Raton, FL, 2006. L. Gasinski, N.S. Papageorgiou, [*Exercises in Analysis, Part 1*]{}, Springer, Cham, 2014. L. Gasinski, N.S. Papageorgiou, [*Exercises in Analysis, Part 2: Nonlinear Analysis*]{}, Springer, Cham, 2016. L. Gasinski, N.S. Papageorgiou, Positive solutions for nonlinear elliptic problems with dependence on the gradient, [*J. Differential Equations*]{} [**263**]{} (2017), 1451-1476. M. Ghergu, V.D. Rădulescu, [*Singular Elliptic Problems, Bifurcation and Asymptotic Analysis*]{}, Oxford Lecture Series in Mathematics and its Applications, vol. 37, Oxford University Press, Oxford, 2008. J. Giacomoni, I. Schindler, P. Takač, Sobolev versus Hölder local minimizers and existence of multiple solutions for a singular quasilinear equation, [*Ann. Sci. Norm. Super. Pisa, Cl. Sci.*]{} [**(5) 6**]{} (2007), 117-158. M. Girardi, M. Matzeu, Positive and negative solutions of a quasilinear elliptic equation by a mountain pass method and truncature techniques, [*Nonlinear Anal.*]{} [**59**]{} (2004), 199-210. A. Granas, J. Dugundji, [*Fixed Point Theory*]{}, Springer-Verlag, New York, 2003. E. 
--- abstract: 'We present a detailed analysis of the exact numerical spectrum of up to ten interacting electrons in the first Landau level on the disk geometry. We study the edge excitations of the hierarchical plateaus and check the predictions of two relevant conformal field theories: the multi-component Abelian theory and the $\winf$ minimal theory of the incompressible fluids. We introduce two new criteria for identifying the edge excitations within the low-lying states: the plot of their density profiles and the study of their overlaps with the Jain wave functions in a meaningful basis. We find that the exact bulk and edge excitations are very well reproduced by the Jain states; these, in turn, can be described by the multi-component Abelian conformal theory. Most notably, we observe that the edge excitations form sub-families of the low-lying states with a definite pattern, which is explained by the $\winf$ minimal conformal theory. Actually, the two conformal theories are related by a projection mechanism whose effects are observed in the spectrum. Therefore, the edge excitations of the hierarchical Hall states are consistently described by the $\winf$ minimal theory, within the finite-size limitations.' address: - 'I.N.F.N. and Dipartimento di Fisica, Largo E. Fermi 2, I-50125 Firenze, Italy' - 'Departamento de Física, Pontifícia Universidade Católica, C.P. 38071, 22452-970 Rio de Janeiro,RJ, Brazil' - 'Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Instituto Balseiro, Universidad Nacional de Cuyo 8400 - San Carlos de Bariloche Río Negro, Argentina' - 'Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Instituto Balseiro, Universidad Nacional de Cuyo 8400 - San Carlos de Bariloche Río Negro, Argentina' author: - Andrea Cappelli - Carlos Méndez - 'Jorge M. Simonin' - 'Guillermo R. 
Zemba' title: Numerical Study of Hierarchical Quantum Hall Edge States on the Disk Geometry --- Introduction ============ One of the important open problems in the physics of the quantum Hall effect (QHE) [@prange][@dspin] is the complete understanding of the hierarchical Hall plateaus, whose filling fractions fall beyond the Laughlin sequence $\nu=1,1/3,1/5,\dots$ [@laugh]. There are two kinds of theoretical descriptions available at present: the wave-function constructions and the effective conformal field theories (CFT) in $(1+1)$ dimensions. The first approach has culminated in the Jain theory of the composite-fermion correspondence between the integer Hall states with $\nu^*=m=2,3,\dots$ and the hierarchical states with $\nu=m/(mp \pm 1)$, $p=2,4,\dots$, such as $\nu=2/5,3/7,\dots$ [@jain]. The existence of the composite-fermion excitations has been confirmed by many experiments [@cfexp]; the corresponding ansatz wave-functions have been tested in numerical simulations of few-electron systems[^1] [@jain][@jj][@jaka]. These have been mostly done on the spatial geometry of the sphere and have firmly established that the Jain states describe the bulk excitations of quantum Hall fluids. On the other hand, these incompressible fluids have characteristic edge excitations [@wen], which cannot be seen on the sphere geometry. These excitations are the relevant low-energy degrees of freedom in the conduction experiments, and their basic properties, like the fractional charge, have already been measured in the simplest (yet non-trivial) case of the $\nu=1/3$ Laughlin Hall state [@tdom] [@mill] [@shot]. The edge excitations are naturally described by the conformal field theories[^2] [@gins], because their low-energy dynamics is effectively one-dimensional, being localized on the boundary of the sample within a width of the order of the magnetic length $\ell=\sqrt{\hbar c/eB}$ [@stone][@cdtz1].
The conformal field theories are a powerful tool because they can be solved explicitly in a non-perturbative framework [@gins], and predict universal data like the filling fraction, the fractional charge and quantum statistics of the edge excitations; moreover, they directly describe the relevant experimental regime of a large number of electrons. However, these effective descriptions cannot be easily related to the microscopic dynamics of the electrons. Actually, based on simple arguments and symmetry principles, two classes of CFTs have been proposed for the edge excitations of the hierarchical plateaus. The first is given by the $m$-component Abelian theories $\u1^m$, which are generalizations of the successful one-component theory for the Laughlin plateaus [@read][@juerg][@wen]. In the literature, these conformal theories have been naturally associated with the Jain approach, because the composite-fermion correspondence implies the existence of several effective Landau levels, which may have independent edges of the Laughlin type. The second class of conformal field theories encompasses the $\winf$ minimal models [@ctz5], which exploit the $\winf$ symmetry of the incompressible fluids under area-preserving diffeomorphisms of the plane [@sakita][@ctz1][@ctz3]. There is a one-to-one correspondence between the hierarchical Hall plateaus and the $\winf$ minimal theories. These theories can be obtained by projecting out some states of the multi-component Abelian theory, those which are not fluctuations of an elementary incompressible fluid. This projection implies different properties for the edge excitations in the two theories, which are qualitatively and quantitatively important [@ctz5]. In this paper, we study numerically the spectrum of finite systems of $N=6,8$ and $10$ electrons on a disk geometry; we diagonalize exactly the Hamiltonian with the Haldane short-range interaction in the first Landau level [@hald].
The low-lying states occur in branches which are separated by gaps; each branch contains one state of lowest angular momentum, the “bottom” state, followed by several higher angular momentum levels with close energies (see Fig.(\[fig1\])). This pattern is well understood for the $\nu=1/3$ Laughlin Hall fluid: the bottom state is the ground state, which is an exact eigenstate of the Haldane interaction; the higher levels are the degenerate edge excitations. The plot of density profiles shows that the ground state is rather flat, a characteristic of incompressible fluids [@laugh], and that the edge excitations are infinitesimal density deformations. We use the density plots to gain a qualitative understanding of the spectrum: we find that the other bottom states can be either quasi-particle excitations (oscillating density profile), or new incompressible fluids with lower filling fractions, such as $\nu=2/5$ (flat density profile). Furthermore, we overlap the low-lying states with the Jain ansatz states. The Jain composite-fermion theory on the disk geometry does predict the branches in the spectrum: they correspond to effective Landau levels which are filled selectively at the boundary. These states are usually denoted by $[n_1,n_2,\dots,n_k]$, and correspond to filling the $i$-th level with $n_i$ electrons, with $n_1 \ge n_2 \ge \cdots \ge n_k$ and $\sum n_i =N$ [@jain]. Our numerical analysis shows that each branch in the spectrum is well described by one of these states, corresponding to a given set of fillings $\{n_i\}$, and by the corresponding particle-hole excitations. We then find the following results. 
Smooth density profiles identify unambiguously the branches corresponding to the $\nu=2/5$ and $3/7$ hierarchical plateaus, for each value of $N$; the rest are quasi-particle branches over the $\nu=1/3$ Laughlin and the hierarchical plateaus; previous numerical analyses [@wen][@deja][@kaap] only charted the energy spectrum and were naturally led to misinterpret the branches. Next, we observe that, within each branch, the low-lying states can be divided into families of close density profiles: the edge excitations correspond to infinitesimal deformations of the bottom state, while families with different density profiles are other bulk excitations or magneto-phonons, which are not well separated in energy for these small values of $N$. This decomposition into families is first understood in the case of a quasi-particle over the Laughlin state ($\nu < 1/3$): we find a precise correspondence between the exact spectrum and the conformal field theory by using the Jain composite-fermion theory, as follows. We begin by computing the overlap matrix between the exact low-lying states and the Jain states of the branch (for example, suppose that the quasi-particle corresponds to the filling $[6,2]$ for eight electrons). These Jain states are particle-hole excitations of two effective Landau levels, and can be mapped one-to-one to the states of the two-component Abelian conformal theory. As a consequence, each low-lying exact state in the branch has a label of this conformal theory (up to $O(1/N)$ finite-size errors). We then find that the edge excitations of the Laughlin quasi-particle match certain conformal states which are symmetric with respect to the two Abelian components, i.e. the two effective Landau levels. More precisely, these states are obtained by the projection $\u1\times\u1 \to \u1_{\rm diagonal}$ relating the two-component and the one-component Abelian theories.
Therefore, the Laughlin quasi-particles are indeed described by the one-component Abelian theory, like the well-understood quasi-holes – this is expected in a relativistic theory. Moreover, the quasi-particle edge excitations amount to a specific sub-set of the larger Jain spectrum. Having understood the pattern of the edge excitations of the Laughlin quasi-particles, we proceed to analyse other branches in the spectrum, whose edge excitations are less understood. For eight electrons, we find that the edge spectrum of the $\nu=2/5$ hierarchical Hall state and its first quasi-particle match the predictions of the $\winf$ minimal conformal theory. This theory realizes the weaker projection $\u1\times\u1 \to \u1_{\rm diagonal}\times {\rm Virasoro}$ [@ctz5], which again eliminates the antisymmetric excitations between the two Abelian components, but keeps all the symmetric ones [@cz]; there remains a non-trivial sector of neutral excitations, as explained in Section IIC. Therefore, the numerical study supports the description of the hierarchical plateaus by the $\winf$ minimal conformal theory. The electrons form an irreducible, minimal incompressible fluid, which is in many respects analogous to the original Laughlin fluid [@laugh]. The same pattern of families of edge excitations in the low-lying spectrum is also found for $N=6$ and $10$ electrons. For $N=6$, the finite-size span for the angular momentum of edge excitations, $\Delta J < O(\sqrt{N})$, is too small to see any difference between the one-component Abelian (Laughlin states) and the $\winf$ minimal theories (hierarchical states); namely, the results are consistent with our picture but are not very significant. For $N=10$, we find that the $\nu=2/5$ Hall plateau clearly displays the edge spectrum predicted by the $\winf$ minimal theory, as in the $N=8$ case; on the other hand, the $\nu=3/7$ hierarchical state and its quasi-particles are less neat.
Anyhow, these data cannot be consistently interpreted by the alternative multi-component Abelian theory. The paper is organized as follows. In Section $2$, we recall the main results of the Jain theory and the predictions of the two classes of CFTs, which are then used to analyse the data in Section $3$ ($N=8$) and Section $4$ ($N=6,10$). Finally, we end with some Conclusions. The Numerical Experiment and its Theoretical Interpretations ============================================================ We consider a system of $N=6,8,10$ electrons in the first Landau level on a disk geometry; we use open boundary conditions, namely, we truncate the Hilbert space of single-particle angular momentum states at a value that is very large compared to the highest occupied level in the Laughlin state. The interaction among the electrons is given by the Haldane short-range potential, which selects the $\nu=1/3$ Laughlin incompressible ground state. Of course, we are interested in studying the $\nu=2/5$ incompressible ground state, and other hierarchical states, for which there is no analogous model interaction[^3]. An overview of the $N=8$ spectrum is shown in Fig.(\[fig1\]) as a function of the angular momentum $J$: one clearly sees the branches of low-lying states. No confining potential is present, but this can be easily added afterwards: a quadratic potential increases the energy of each state by an amount proportional to $J$; thus, the bottom state of each branch can be made the ground state by suitably tuning the strength of the potential. If this ground state has an approximately constant density, its filling fraction is $\nu\sim N(N-1)/(2J)$, up to finite-size corrections of relative order $O(1/N)$. Let us finally quote the previous analyses on the disk geometry, which were useful in setting up our work [@wen][@deja][@kaap].
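The estimate $\nu\sim N(N-1)/(2J)$ is elementary to evaluate; the following sketch (our own illustration, exact only up to the stated $O(1/N)$ corrections) checks it on two bottom states discussed later:

```python
# nu ~ N(N-1)/(2J): filling-fraction estimate for a flat-density bottom state
def filling_estimate(N, J):
    return N * (N - 1) / (2 * J)

print(filling_estimate(8, 84))   # 1/3 exactly: the nu = 1/3 Laughlin branch
print(filling_estimate(8, 66))   # ~0.424: the nu = 2/5 candidate, up to O(1/N)
```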
The Jain Hierarchy ------------------ According to the composite-fermion theory, there is a correspondence between $m$ filled effective Landau levels ($\nu^*=m$) and the hierarchical Hall states [@jain]: $$\nu^*=m \ \longleftrightarrow\ \nu=\frac{m}{mp \pm 1}\ , \qquad m=1,2,\dots, \quad p=2,4,\dots.$$ \[cfnu\] This correspondence is made explicit by the ansatz wave-functions $$\Psi_{\nu}(z_1,\dots,z_N) = {\cal P} \left( \prod_{i<j}^N (z_i-z_j)^2\ \Psi_{\nu^*}(z_1,\dots,z_N) \right) ,$$ \[jwf\] where $\Psi_{\nu^*}$ is the Slater determinant of the filled Landau levels and ${\cal P}$ is the projector into the first Landau level. The composite-fermion correspondence also describes the excited states by inserting in (\[jwf\]) the Slater determinants for the $\nu^*=m$ electron transitions. On the disk geometry, the Landau levels can be filled selectively by making electron droplets with different edge shapes: these “bottom” states are denoted by $[ n_1 , n_2 , \dots, n_k ]$ and correspond to the Slater determinants of $n_i$ electrons filling the lowest angular momentum states of the $i$-th Landau level [@deja]. Each bottom state has an independent branch of low-lying states corresponding to the particle-hole excitations of the effective levels. The Jain correspondence (\[jwf\]) applied to these bottom states yields $\nu <1$ states which are characterized by the following angular momenta and approximate energies [@deja]: $$\begin{aligned} J_{[n_1,\dots,n_k]} &=& N(N-1) + \frac{n_1(n_1-1)}{2} + \frac{n_2(n_2-3)}{2} + \frac{n_3(n_3-5)}{2} + \cdots\ ,\nonumber\\ E^{(0)}_{[n_1,\dots,n_k]} &=& E_D \left( n_2 + 2 n_3 + 3 n_4 + \cdots \right) . \end{aligned}$$ \[jed\] In the last equation, $E_D$ is the effective Landau level gap, which has been recently interpreted as the energy for creating a “defect” in the electron fluid[^4] [@jaka]. In this work, the Jain ansatz was also modified to be completely written in the first Landau level; nevertheless, in this paper we shall use the original proposal (\[jwf\]) and perform the projection ${\cal P}$ numerically.
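As an illustration (our own sketch, not the paper's code), the bottom-state labels and Eq. (\[jed\]) are easily tabulated: the candidate labels for $N$ electrons are the partitions of $N$ into non-increasing parts, and the level-$i$ term in $J$ is $n_i(n_i-(2i-1))/2$.

```python
# Enumerate bottom-state labels [n1, ..., nk] (partitions of N) and
# evaluate the angular momentum and energy formulas of Eq. (jed).

def bottom_states(N, max_part=None):
    """Yield partitions of N as non-increasing tuples (n1, ..., nk)."""
    if max_part is None:
        max_part = N
    if N == 0:
        yield ()
        return
    for first in range(min(N, max_part), 0, -1):
        for rest in bottom_states(N - first, first):
            yield (first,) + rest

def J_bottom(N, ns):
    # level i contributes n_i (n_i - (2i - 1)) / 2 on top of N(N-1)
    return N * (N - 1) + sum(n * (n - (2 * i - 1)) // 2
                             for i, n in enumerate(ns, start=1))

def E_bottom(ns):
    # each electron in effective level i costs (i - 1) gaps E_D
    return sum((i - 1) * n for i, n in enumerate(ns, start=1))

for ns in [(8,), (7, 1), (6, 2), (5, 3), (4, 4), (5, 2, 1)]:
    print(ns, J_bottom(8, ns), E_bottom(ns))
# e.g. (8,) -> J = 84, E = 0 (Laughlin); (5, 3) -> J = 66, E = 3;
#      (5, 2, 1) -> J = 63, E = 4
```

This reproduces the $J$ values of the branches quoted below ($J=84,76,70,66,63$) for $N=8$.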
The Jain bottom states (\[jed\]) for eight electrons are summarized in Table (\[tab1\]): the $[8]$ state is identified as the Laughlin $\nu=1/3$ incompressible state. The rest of the states could be candidates for the $\nu=2/5$ ground state, because they have $J\sim 70$ up to $O(N)$ finite-size corrections; actually, in the thermodynamic limit $N\to\infty$, any state $[N/2 + k, N/2 -k ]$, with $k$ fixed, would work. The Jain bottom states match the numerical spectrum in Fig.(\[fig1\]) rather well: for almost all predicted $J$ values in Table (\[tab1\]), there is a bottom state of a branch of low-lying states; its energy is rather well approximated by the Jain formula (\[jed\]). On the other hand, a disagreement is seen for the smaller $J$ states $[4,4]$ and $[5,2,1]$, which have $E^{(0)}=4$ but are almost degenerate with the $E^{(0)}=3$ states; moreover, these two states seem to describe the same branch of levels – we shall discuss these points later on. Abelian Conformal Field Theories of Edge Excitations ---------------------------------------------------- It is rather well established that the low-energy excitations of a droplet of quantum incompressible fluid reside on the boundary and can be described by a $(1+1)$-dimensional conformal field theory [@wen]. This can be simply understood in the case of one filled Landau level (see Fig.(\[fig2\])), which can be considered as the Fermi sea (in configuration space) of a one-dimensional system with one chirality only [@stone]. Its low-lying excitations are particle-hole transitions near the Fermi surface, which is actually the physical edge of the (circular) electron droplet. By a well-known procedure, one can take the thermodynamic limit $N\to\infty$ and approximate the Fermi sea by a Dirac vacuum; moreover, the energy of the low-lying excitations can be linearized around the Fermi level, $\epsilon\sim v k$, where $k= 2\pi n/R$ is the momentum, $v$ the Fermi velocity and $R=O(\ell\sqrt{N/\nu})$ the size of the disk.
These are the edge excitations of the $\nu=1$ quantum Hall state; in a finite system, their wave number is limited by $\vert n\vert \ll R$, i.e. their angular momentum must be $\Delta J \ll O(\sqrt{N})$ [@cdtz1]. Besides the edge excitations, there are quasi-hole excitations which amount to moving one electron from deep inside the Fermi sea to the edge ($\Delta J =O(N)$), and, conversely, for the quasi-particles. In the thermodynamic limit, the relativistic one-dimensional effective fermion is identified as the charged, chiral Weyl fermion; the conformal symmetry of this theory is described by the Virasoro algebra with central charge $c=1$ [@gins]. This effective conformal field theory of the $\nu=1$ plateau can be generalized to the Laughlin states by the well-known bosonization procedure: one rewrites the Weyl fermion in terms of a bosonic field, changes its compactification radius and obtains a general $c=1$ Abelian conformal theory, whose one-dimensional current satisfies the Abelian current algebra $\u1$ [@cdtz1]; an equivalent name is the chiral Luttinger liquid [@wen]. This description of the edge excitations of the Laughlin states is well established both theoretically and experimentally [@tdom][@mill][@shot]. An important property for the following discussion is that the Hilbert spaces of the edge excitations of the integer and Laughlin Hall states, $\nu=1,1/3,1/5,\dots$, are all isomorphic [@wen]; thus, these excitations can be still visualized as the particle-hole transitions in Fig.(\[fig2\]). It is rather simple to count them for small values of $\Delta J=n$, and obtain the multiplicities reported in the first line of Table (\[tab2\]) [@wen]; let us stress that the CFT description of edge excitations is valid in the linear range $\Delta J < O(\sqrt{N})$ [@cdtz1]. 
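These multiplicities follow from elementary counting: at level $\Delta J = n$ the one-component edge states are in one-to-one correspondence with the integer partitions of $n$, and the two-component counting used in the next subsection is their convolution. A short sketch (our own illustration of the count, assuming the standard bosonic character of the $\u1$ theory):

```python
# One-component u(1) edge multiplicities: p(n), the integer partitions of n.
def partition_numbers(nmax):
    p = [1] + [0] * nmax
    for part in range(1, nmax + 1):      # build up the allowed part sizes
        for n in range(part, nmax + 1):
            p[n] += p[n - part]
    return p

# Two-component counting: convolution over Delta J = Delta J_1 + Delta J_2.
def two_component(nmax):
    p = partition_numbers(nmax)
    return [sum(p[k] * p[n - k] for k in range(n + 1)) for n in range(nmax + 1)]

print(partition_numbers(5))   # [1, 1, 2, 3, 5, 7]
print(two_component(3))       # [1, 2, 5, 10]
```

The first line reproduces the multiplicities $(1,1,2,3,\dots)$ quoted for the Laughlin edge, the second the two-component values $(1,2,5,\dots)$ used in Section 3.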
A natural generalization of the above picture is to consider the case of two filled Landau levels: the corresponding edge excitations are described by the two-component Abelian CFT, which is simply the tensor product of two one-component theories. Each Landau level has its own Fermi surface and its particle-hole excitations; their total number is obtained by adding the excitations of the two levels which have a given $\Delta J=\Delta J_1 +\Delta J_2$ (see the second row of Table (\[tab2\])). According to the Jain composite-fermion correspondence (\[cfnu\]), it is rather natural to use this two-component Abelian theory for describing the edge excitations of the hierarchical states (\[jwf\]) with $\nu=2/(2p+1)=2/5,2/9,\dots$. As in the one-component case, these $\nu<2$ values can be described in the conformal theory by bosonizing the two Weyl fermions and by suitably changing their compactification radii [@wen]. A characteristic feature of the resulting conformal theories is to realize an extension of the Abelian current algebra symmetry $\u1\otimes\u1 \to \u1\otimes\su2$ [@read][@juerg]; this will be useful in the following discussion. As in the one-component case, the Hilbert spaces of the edge excitations above the $\nu=2$ Hall plateau and all the $m=2$ hierarchical states (\[jwf\]) are isomorphic. Minimal Models of Edge Excitations ---------------------------------- Another theory of the hierarchical edge excitations has been independently proposed in Ref.[@ctz5]: it corresponds to the one-component Abelian CFT for the simplest Laughlin fluids, but differs from the multi-component theory. It is based on the physical picture of the incompressible fluid [@laugh], which possesses the dynamical symmetry under the area-preserving diffeomorphisms of the spatial coordinates [@ctz1] [@sakita].
This symmetry has been promoted to a building principle for the CFTs describing its edge excitations [@ctz3]: actually, it implies that the conformal fields should carry a representation of the $\winf$ algebra, which is a generalization of the Abelian current algebra [@kac]. Among the conformal theories with $\winf$ algebra [@ctz3], a particular class of models has been found, the $\winf$ [*minimal models*]{} [@ctz5], whose filling fractions are in one-to-one correspondence with the hierarchical values (\[cfnu\]). This is already a strong indication that these models are experimentally relevant. The minimal models are characterized by being a reduction of the previous multi-component Abelian CFTs, in the sense that some excitations are projected out, as explained hereafter. This projection implies [@ctz5]: (i) a reduced number of edge excitations above the ground state, as given in the third line of Table (\[tab2\]); (ii) only one independent Abelian charge for the quasi-particles, which is hence identified with the electric charge; (iii) the existence of neutral quasi-particle excitations characterized by a non-Abelian quantum number of the $SU(m)$ Lie algebra, where $m$ is the number of would-be components. This projection has been recently made explicit by a Hamiltonian formulation of the minimal incompressible models [@cz], which will be briefly summarized in the first non-trivial case of two components ($m=c=2$). Let us start from a closer look into the two-component Abelian theory with symmetry $\u1\otimes\su2$; for $\nu=2$, this is described by two Weyl fermions $\Psi_i(\theta),\overline{\Psi}_i(\theta)$, where $\theta$ is the angular variable on the disk and $i=1,2$ denote the upper and lower levels, respectively. 
Their excitations can be labelled by the total Abelian charge $J_0$ and the $SU(2)$ isospin charge $J^3_0$, which are defined as follows: $$\begin{aligned} J_0 &=& \int_0^{2\pi} \frac{d\theta}{2\pi}\ \left( \overline{\Psi}_1\Psi_1 + \overline{\Psi}_2\Psi_2 \right)\ ,\nonumber\\ J_0^3 &=& \int_0^{2\pi} \frac{d\theta}{2\pi}\ \frac{1}{2} \left( \overline{\Psi}_1\Psi_1 - \overline{\Psi}_2\Psi_2 \right)\ ,\nonumber\\ J_0^+ &=& \int_0^{2\pi} \frac{d\theta}{2\pi}\ \overline{\Psi}_1\Psi_2\ , \qquad J_0^- \ =\ \int_0^{2\pi} \frac{d\theta}{2\pi}\ \overline{\Psi}_2\Psi_1\ . \end{aligned}$$ \[jdef\] One can check that $\left\{J_0^+, J_0^-, J_0^3 \right\}$ satisfy the $SU(2)$ algebra by using the fermionic canonical commutation relations[^5]. The Abelian and iso-spin charges measure the edge excitations in the two layers symmetrically and anti-symmetrically, respectively. Note, however, that the quasi-particles in these Abelian theories carry the iso-spin quantum number in such a way that it is linearly additive as another electric charge; namely, there are no non-Abelian effects [@wen]. These $SU(2)$ generators can be defined for all the hierarchical ground states $\nu=2/5,2/9,\dots$, but are not realized in terms of fermions; we can nevertheless continue to use the more intuitive fermionic language, owing to the aforementioned isomorphism between Hilbert spaces. After these preliminaries, we are ready to define the $\winf$ minimal theory (for $c=2$): this is obtained from the two-component Abelian theory by imposing the constraint [@cz], $$J^-_0 = 0\ .$$ \[hamred\] It can be shown that the zero modes $\{J_0^\pm, J_0^3 \}$ commute with the Virasoro generators; thus, the constraint (\[hamred\]) does not spoil the conformal invariance and defines a new conformal theory with the same central charge. The effect of the constraint is the following: the operator $J^-_0$ moves electrons down and holes up between the two layers (with a minus sign in the latter case), while keeping their (normal-ordered) angular momentum constant; it relates the edge excitations in the two layers and actually vanishes on their symmetric linear combinations.
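The $SU(2)$ closure of the charges in (\[jdef\]) is finite and mechanical to check: each momentum mode contributes an independent copy of the algebra, so it suffices to verify the commutators on a single pair of fermionic modes. A numerical sketch (our own illustration, not from the paper):

```python
import numpy as np

# Verify [J+, J-] = 2 J3 and [J3, J+-] = +-J+- for the bilinears of Eq. (jdef),
# restricted to one momentum mode of each fermion Psi_1, Psi_2.
a  = np.array([[0., 1.], [0., 0.]])   # single-mode fermion annihilator
Z  = np.diag([1., -1.])               # fermion parity (Jordan-Wigner string)
I2 = np.eye(2)

c1 = np.kron(a, I2)                   # one mode of Psi_1
c2 = np.kron(Z, a)                    # one mode of Psi_2
dag = lambda A: A.conj().T

Jp = dag(c1) @ c2                             # ~ Psibar_1 Psi_2
Jm = dag(c2) @ c1                             # ~ Psibar_2 Psi_1
J3 = 0.5 * (dag(c1) @ c1 - dag(c2) @ c2)

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Jp, Jm), 2 * J3)
assert np.allclose(comm(J3, Jp), Jp)
assert np.allclose(comm(J3, Jm), -Jm)
print("SU(2) closes on a single mode pair")
```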
Therefore, the condition (\[hamred\]) projects out the edge excitations which are antisymmetric with respect to the two levels. The ground state is unique and symmetric, and therefore satisfies the constraint: namely, the two CFTs share the same ground state. Let us see some examples of allowed edge excitations in the minimal theory. We first need to clarify the angular-momentum labels in CFT. The Weyl fermions define a second-quantized relativistic theory, which describes excitations above the ground state; therefore, the latter must be specified in order to relate this description to the numerical data. For example, one can suppose that the $N=8$ bottom state $[5,3]$ with $J=66$ is the $\nu=2/5$ incompressible ground state (see Fig.(\[fig1\])). This fixes the Fermi surface for both layers and sets the reference value for the angular momentum of excitations: $\Delta J_1=J_1-4$ and $\Delta J_2=J_2-1$. The usual moding of conformal fields thus corresponds to the normal-ordered angular momentum, which is equal to zero for the charges in (\[jdef\]). The ground state $[5,3]$ and its edge excitations are identified by vanishing eigenvalues of both $J_0$ and $J_0^3$; the other Jain branches in the spectrum, corresponding to $[6,2]$, etc., in Table (\[tab1\]), are described by the CFT as excitations with $J_0^3=-1$, etc., respectively. Let us now introduce the fermionic Fock space of the two Weyl fermions (\[jdef\]), which also describe the particle-hole excitations of the two effective Landau levels, “up” and “down”, in the Jain construction: the fermionic second-quantized operators are, respectively, $u_k$, $u^{\dagger}_k$ and $d_k$, $d^{\dagger}_k$ ($k \in {\bf Z}$); they act on the ground state $\vert \Omega\rangle$. There are two Abelian edge excitations at the first excited level $\Delta J =1$, which can be written: $$\vert\, 1; \pm \rangle = \frac{1}{\sqrt{2}} \left( d_1^{\dagger} d_0\ \pm\ u_1^{\dagger} u_0 \right) \vert \Omega \rangle\ .$$
\[delta1\] The constraint (\[hamred\]) can be written explicitly: $J^-_0 \vert 1; \pm \rangle= \sum_{k=-\infty}^\infty \ d^\dagger_k u_k\vert 1; \pm \rangle =0$. We find that the symmetric combination $\vert 1; + \rangle$ satisfies this constraint and the antisymmetric does not: therefore, the $\winf$ minimal conformal theory only contains the symmetric excitations, as we anticipated. At the next level $\Delta J =2$, there are five Abelian edge states: $$\begin{aligned} \vert\, 2; a \rangle &=& d_1^{\dagger} d_0\ u_1^{\dagger} u_0\ \vert \Omega \rangle\ ,\nonumber\\ \vert\, 2; b\, \pm \rangle &=& \frac{1}{\sqrt{2}} \left( d_2^{\dagger} d_0\ \pm\ u_2^{\dagger} u_0 \right) \vert \Omega \rangle\ ,\nonumber\\ \vert\, 2; c\, \pm \rangle &=& \frac{1}{\sqrt{2}} \left( d_1^{\dagger} d_{-1}\ \pm\ u_1^{\dagger} u_{-1} \right) \vert \Omega \rangle\ . \end{aligned}$$ \[deltwo\] The antisymmetric combinations $\vert\ 2; b - \rangle$ and $ \vert\ 2; c- \rangle$ do not satisfy the constraint (\[hamred\]) and are not present in the minimal CFT. Note that the counting of these states is in agreement with Table (\[tab2\]). In the semiclassical picture developed in Ref.[@ctz1], the incompressible Hall fluid is identified with a Fermi sea and its area-preserving deformations are the particle-hole excitations (Fig.(\[fig2\])). The two-level structure introduces one additional degree of freedom: the antisymmetric excitations are tangential to the Fermi surface and do not correspond to deformations of the incompressible fluid; therefore, they need not be included in a minimal theory. This is the physical meaning of the condition (\[hamred\]). A precise derivation of this constraint can be obtained by analysing the irreducible representations of the $\winf$ algebra. A general property of CFTs is that they can be constructed by assembling the representations of their infinite-dimensional symmetry algebra. The $\winf$ minimal theories were first obtained in such a way [@ctz5], by using the special, degenerate $\winf$ representations, which are equivalent to those of the algebra $\u1\times {\cal W}_m$, in particular $\u1\times {\rm Virasoro}$ for $m=2$ [@kac].
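The action of the constraint on the states (\[delta1\])–(\[deltwo\]) can also be checked numerically in a truncated Fock space. The sketch below (our own illustration; three momentum modes per effective level, Jordan-Wigner matrices) confirms that $J^-_0$ annihilates the symmetric state $\vert 1;+\rangle$ but not the antisymmetric one:

```python
import numpy as np
from functools import reduce

# Truncated check of the constraint J0^- |1;+> = 0:
# modes d_k, u_k for k = -1, 0, 1, built by Jordan-Wigner.
a  = np.array([[0., 1.], [0., 0.]])   # single-mode annihilator
Z  = np.diag([1., -1.])               # parity string
I2 = np.eye(2)

def mode(i, n):
    """Annihilator of the i-th of n fermion modes (Jordan-Wigner)."""
    return reduce(np.kron, [Z] * i + [a] + [I2] * (n - i - 1))

ks = [-1, 0, 1]
n = 2 * len(ks)
d = {k: mode(i, n) for i, k in enumerate(ks)}            # "down" level modes
u = {k: mode(i + len(ks), n) for i, k in enumerate(ks)}  # "up" level modes
dag = lambda A: A.conj().T

vac = np.zeros(2 ** n); vac[0] = 1.0
# Fermi sea |Omega>: k = -1, 0 filled in both effective levels
Omega = dag(d[-1]) @ dag(d[0]) @ dag(u[-1]) @ dag(u[0]) @ vac

Jminus = sum(dag(d[k]) @ u[k] for k in ks)   # J0^-, truncated to three modes

plus  = (dag(d[1]) @ d[0] + dag(u[1]) @ u[0]) @ Omega / np.sqrt(2)
minus = (dag(d[1]) @ d[0] - dag(u[1]) @ u[0]) @ Omega / np.sqrt(2)

print(np.linalg.norm(Jminus @ plus))    # ~0: the symmetric state survives
print(np.linalg.norm(Jminus @ minus))   # nonzero: antisymmetric projected out
```

The fermionic signs are handled automatically by the Jordan-Wigner strings; the result is representation-independent, so the truncation to three modes is harmless here.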
The constraint (\[hamred\]) precisely realizes the projection of conformal theories $\u1\times\su2 \to \u1\times {\rm Virasoro}$ [@cz], which is a simplified case of the general mechanism of Hamiltonian reduction [@hred]. Note that the projection only keeps one state for each $SU(2)$ multiplet in the Abelian spectrum, i.e. the highest-weight state; these states form a non-trivial set of neutral edge excitations, which are characterized by their Virasoro weight. The edge excitations of the other, $\u1_{\rm diagonal}$ part are generated by the modes of the symmetric current in (\[jdef\]), $$J_{-k} = \sum_{j=-\infty}^{\infty}\ \left( d^{\dagger}_{j+k} d_j + u^{\dagger}_{j+k} u_j \right)\ , \qquad k=1,2,\dots$$ \[udiag\] These excitations are symmetric with respect to the two Landau levels, but are not the most general ones: for example, the state $\vert 2;a \rangle$ in (\[deltwo\]) cannot be obtained by applying (\[udiag\]) on the ground state. The symmetric edge excitations obtained by (\[udiag\]) realize the reduction of the two-component Abelian theory to the one-component theory $\u1\times\u1 \to \u1_{\rm diagonal}$, which also entails a change of central charge $c=2 \to 1$. Finally, we remark that the constraint (\[hamred\]) can be enforced dynamically by modifying the Hamiltonian of the two-component Abelian theory: $$H \rightarrow H + \gamma\ J^+_0 J^-_0\ .$$ \[ham\] The added term is diagonal in the two-component Abelian Hilbert space, and is relevant in the renormalization-group sense, because $\gamma$ has the dimension of a mass. It increases the energy of the states which do not satisfy (\[hamred\]) and in the limit ${\gamma}\to\infty$, it performs the projection leading to the $\winf$ minimal model. In general, we may also consider the non-conformal theory with $\gamma\neq 0,\infty$, which interpolates between the two-component and the minimal, i.e. irreducible, incompressible fluids[^6]. A more complete discussion of these matters can be found in Ref.[@cz].
Analysis of the $N=8$ Data ========================== We now proceed to analyse each branch of levels in the $N=8$ spectrum and interpret it according to the theories of the previous Section. The exact eigenstates are denoted by $\Vert\ J - n \rangle\rangle $, where $J$ is the angular momentum and $n=0,1,2,\dots$ the ordering by increasing values of the energy. The complete set of our numerical data is accessible on-line[^7]. [**Branch \[8\]: the Laughlin incompressible Hall fluid at $\nu=1/3$**]{} The density profile $\rho ({\bf x})$ of the Laughlin ground state $\Vert 84-0\rr $ is drawn in Fig.(\[fig3\]): we see that this droplet of incompressible fluid is fairly flat in the interior; moreover, its value at the origin[^8] $2\pi\rho({\bf 0} )$ is close to the expected value of $1/3$ in the thermodynamic limit. The edge excitations are recognized as very small deformations of the shape of the ground state density; their energies above the ground state vanish for the Haldane interaction, as shown in the inset of Fig.(\[fig3\]). Having identified the edge states, we can count their number: we find the multiplicities $(1,1,2,3,\dots)$ for $\Delta J=(0,1,2,3,\dots)$, in agreement with the predictions of the one-component Abelian CFT in Table (\[tab2\]). The matching of the exact edge excitations with the corresponding Jain (Laughlin) wave functions is rather obvious in this case; the total overlap with each exact numerical state is one by construction. [**Branches \[7,1\], \[6,2\]: the quasi-particles over the Laughlin state**]{} Fig.(\[fig30\]) shows the density profiles of all the bottom states found in the $N=8$ spectrum: these are all more oscillating than the Laughlin ground state. The states $\Vert 63-0\rr$ and $\Vert 70-0\rr$ are clearly quasi-particle excitations with the characteristic bump of size $O(\ell)$: therefore, they cannot be interpreted as other incompressible ground states with $\nu <1/3$.
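The branch positions quoted above and in Table (\[tab1\]) follow from the composite-fermion construction on the disk: attaching two flux quanta contributes $N(N-1)$ to the angular momentum, and a compact droplet of $n_i$ composite fermions in effective Landau level $i$ contributes $\sum_{k=0}^{n_i-1}(k-i)$. A short script (our sketch of this standard counting) reproduces the quoted values:

```python
def jain_J(occupations):
    """Angular momentum of the Jain bottom state [n_0, n_1, ...]:
    flux attachment gives N(N-1); a compact droplet of n_i composite
    fermions in effective Landau level i has orbital momenta k - i."""
    N = sum(occupations)
    J = N * (N - 1)
    for i, n in enumerate(occupations):
        J += n * (n - 1) // 2 - i * n
    return J

# bottom states of Table tab1 (N = 8)
for occ in [(8,), (7, 1), (6, 2), (6, 1, 1), (5, 3), (4, 4), (5, 2, 1)]:
    print(occ, jain_J(occ))
# (8,) -> 84 (Laughlin), (7,1) -> 76, (6,2) -> 70, ..., (5,2,1) -> 63

# N = 6: the two candidate states [3,3] and [4,1,1] indeed share J = 33
print(jain_J((3, 3)), jain_J((4, 1, 1)))
```

The same formula gives the $N=6$ Laughlin branch at $J=45$ and the degenerate pair $[3,3]$, $[4,1,1]$ at $J=33$ discussed later.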
Let us analyse the $[6,2]$ branch $J=70$ in more detail: the density profiles and energies of its low-lying states are drawn in Fig.(\[fig4\]). One distinguishes two families of states with opposite oscillations, which have multiplicities $(1,1,2)$ and $(0,1,3)$, respectively. The first group can be interpreted as a quasi-particle on top of the Laughlin $\nu=1/3$ ground state: actually, the multiplicities of its edge excitations agree with the predictions of the one-component Abelian conformal theory (Table (\[tab2\])). The second group can be interpreted as another bulk excitation, which might acquire a larger gap in the thermodynamic limit, where the CFT description of the low-lying states should become exact. Although this limit cannot be inferred from the simulation, the inset of magnified energy levels in Fig.(\[fig4\]) clearly shows that these states have a higher energy than those of the previous group. Note that the sum of the degeneracies of the two groups considered above matches the predictions of the two-component Abelian CFT in Table (\[tab2\]), i.e. $(1,2,5)$. By ignoring the information arising from the density plots, one might be tempted to interpret all these low-lying states as edge excitations; then, the corresponding bottom state could not be a Laughlin quasi-particle, because the edge multiplicities would not match; it would rather be interpreted as the $\nu=2/5$ incompressible fluid [@wen]. However, the $J=76$ branch, i.e. $[7,1]$, being qualitatively similar, should be interpreted in the same way, while it is necessarily a Laughlin quasi-particle, thus arriving at a contradiction. We conclude that the new criterion of density plots is essential for distinguishing Laughlin quasi-particles from new hierarchical plateaus.
We can actually observe the formation of a hierarchical Hall fluid [@hiera]: as soon as a sizable number $O(N/2)$ of Laughlin quasi-particles are present, they condense into a new incompressible fluid, whose density shape is again smooth (see next paragraph). After having established that $J=70$ is a Laughlin quasi-particle, we should understand the origin of the extra family $(0,1,3)$ of low-lying states. This is explained by the Jain theory, which we now analyse in detail; we are going to illustrate the following chain of relations: $$\begin{array}{ccc}
\hbox{low-lying exact states} & \longleftrightarrow\ \ \hbox{Jain states}\ \ \longleftrightarrow & \hbox{two-component Abelian CFT}\\
\cup & & \cup\\
\hbox{edge excitations} & \longleftrightarrow & \hbox{one-component Abelian CFT}
\end{array}$$ \[rela\] The Jain wave functions corresponding to the $[6,2]$ branch are obtained by plugging in (\[jwf\]) the Slater determinants for the particle-hole excitations of the first and second levels filled with $6$ and $2$ electrons, respectively. These states are, by construction, in one-to-one correspondence with those of the two-component Abelian conformal theory; thus, we use the same notation for them, in particular the basis of Section IIC, equations (\[delta1\]) and (\[deltwo\]). Note, however, that the Jain states are not orthogonal; therefore, we diagonalize them by the Gram-Schmidt method[^9] (by also computing the overlaps among themselves). For each angular momentum value $\Delta J=0,1,2$, we have computed the matrix of overlaps between these Jain states and the low-lying spectrum (see Table (\[tab20\])). The total square overlap with each state is very large (about 0.96), and the matrix determinant is large enough to match these states one-to-one. We conclude that the Jain theory describes very well all the low-lying states. Nonetheless, this does not imply that they all should be edge excitations, as shown by the previous arguments of the density plot and of the consistency with the Laughlin theory.
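The Gram-Schmidt diagonalization and the overlap analysis described above can be sketched numerically as follows (a generic illustration with random vectors standing in for the actual Jain and exact states, which require the full many-body data):

```python
import numpy as np

def gram_schmidt(states):
    """Orthonormalize a list of (possibly non-orthogonal) state vectors."""
    basis = []
    for v in states:
        w = v - sum(np.vdot(b, v) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:  # drop linearly dependent states
            basis.append(w / norm)
    return basis

def overlap_matrix(exact, jain):
    """O[i, j] = <exact_i | jain_j> for the orthogonalized Jain states."""
    return np.array([[np.vdot(e, j) for j in jain] for e in exact])

# toy stand-ins: 5 orthonormal "exact" states, 3 non-orthogonal "Jain" states
rng = np.random.default_rng(1)
exact = gram_schmidt([rng.standard_normal(8) for _ in range(5)])
jain = gram_schmidt([rng.standard_normal(8) for _ in range(3)])

O = overlap_matrix(exact, jain)
# total square overlap of each exact state with the Jain subspace (at most 1)
total = (np.abs(O) ** 2).sum(axis=1)
print(total)
```

In the paper's analysis, a total square overlap close to 1 (here, about 0.96) together with a sizable determinant of the overlap matrix is what justifies the one-to-one matching of states.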
Let us recall that, in principle, the Jain theory and the conformal field theory give rather different descriptions of the spectrum: the former applies directly to all excitations for finite $N$, while the latter only accounts for the edge excitations in the large $N$ limit, which requires some imagination. The analysis of the overlap matrices in Table (\[tab20\]) lets us identify which Jain states correspond to the edge excitations of the Laughlin quasi-particle in Fig.(\[fig4\]). The edge state $\Vert 71-0\rangle\rangle$ clearly matches the symmetric state $\vert 1;+\rangle$ in equation (\[delta1\]). The five states at $\Delta J=2$ can be divided in two groups: ($\Vert 72-0\rangle\rangle$, $\Vert 72-1\rangle\rangle$, $\Vert 72-3\rangle\rangle$) have larger overlaps with the symmetric states in (\[deltwo\]), and ($\Vert 72-2\rangle\rangle$, $\Vert 72-4\rangle\rangle$) with the antisymmetric states. The precision[^10] is lower than for $\Delta J=1$, but it is nonetheless remarkable that the signal is not completely washed out by the finite-size effects. This pattern of overlaps can be understood as a relation among conformal field theories by applying the analysis of Section IIC: the edge excitations of the Laughlin quasi-particle, $\Vert 71-0\rangle\rangle$, $\Vert 72-0\rangle\rangle$ and $\Vert 72-1\rangle\rangle$, have large overlaps with the symmetric states of the two-component Abelian theory which are generated by the $\u1_{\rm diagonal}$ current (\[udiag\]), namely $\vert 1;+\rangle$, $\vert 2;b+\rangle$ and $\vert 2;c +\rangle$. Therefore, the Laughlin quasi-particle is indeed described by the one-component Abelian conformal theory; the new result is that this is obtained in the Jain spectrum by the projection $\u1\times\u1 \to \u1_{\rm diagonal}$ described in the Section IIC.
Similar families of symmetric Laughlin edge excitations with multiplicities (1,1,2) are actually found in any branch and for any number of electrons: this is perhaps the most important result of this paper. These “standard” families are enlarged by further edge excitations in the case of hierarchical Hall states, as discussed hereafter. [**Branches \[5,3\], \[5,2,1\]: the $\nu=2/5$ hierarchical Hall state and its quasi-particle**]{} Fig.(\[fig5\]) shows the family of low-lying states starting at $J=66$, which is identified with the $[5,3]$ branch. The bottom state $\Vert 66-0\rr$ has a rather flat profile and can be considered as a new incompressible ground state; the value of $2\pi\rho( {\bf 0} )$ is close to its thermodynamic limit of $2/5$ (at variance with the quasi-particle state $\Vert 76-0\rangle\rangle$). Moreover, a family of edge excitations is clearly associated to $\Vert 66-0\rr$, which have multiplicities $(1,1,3)$, in agreement with the predictions of the $\winf$ minimal CFT (third row of Table (\[tab2\])). Next, we check whether these edge excitations match the states of the minimal theory, by analysing the overlaps with the Jain states (see Table (\[tab3\])). We find that the ground state $\Vert 66-0 \rangle\rangle $ is well-approximated by the Jain bottom state $\vert [5,3]\rangle$. For $\Delta J=1$, the unique edge state $\Vert 67-0 \rr$ is again identified with the symmetric Jain state $\vert 1; +\rangle$. At $\Delta J =2$, the exact edge states $\Vert 68- i\rr$, with $i=1,2,3$ have large overlaps with the symmetric Jain states $\vert\ 2; a \rangle$, $ \vert 2; b +\rangle$ and $\vert 2;c+\rangle$, and $O(1/N)$ projection on the antisymmetric Jain states. Therefore, the edge excitations match all the possible symmetric Jain states; according to Section IIC, these are singled out by the projection $\u1\times\u1 \to \u1\times {\rm Virasoro}$, leading to the $\winf$ minimal theory. 
Putting all this information together, we conclude that this branch can be definitely interpreted as the $\nu=2/5$ incompressible fluid for $N=8$ electrons, and that its edge excitations are described by the $\winf$ minimal conformal theory [@ctz5] within the finite-size accuracy. Let us add one remark. The third state $\Vert 68-0\rr$ at $\Delta J=2$ which makes the difference between the ubiquitous Laughlin edge spectrum and the hierarchical one, has an alternative description in the Jain theory as the bottom state of the $[6,1,1]$ branch. Here we first encounter the phenomenon of superposition of Jain branches, which is a sort of degeneracy in the description of the spectrum; it is an interesting dynamical problem in the Jain theory, which has not been addressed so far. It can be rephrased as an “interaction among defects” in the regime of compact electron droplets [@jain]; it might also be associated to the mechanism of the “condensation of quasi-particles” of the original hierarchical scheme [@hiera] (or maybe not). As a matter of fact, this dynamical phenomenon does not concern the CFT interpretation, which is an effective description of the edge excitations above a given ground state; there is just a practical consequence that the one-to-one identification of states in (\[rela\]) might be lost in case of mixing of two competing Jain branches. The overlaps of the $[6,1,1]$ Jain branch with the $J=68,69$ low-lying states are not very good, although the bottom state itself has a good overlap. This branch is rather atypical, and cannot be considered as a candidate incompressible state because it is almost gapless with the $[5,3]$ branch. We now proceed to analyse the branch $J=63$, which is plotted in Fig.(\[fig6\]). Its natural interpretation is of a quasi-particle over the $\nu=2/5$ ground state, due to the oscillating density profile. The $\winf$ minimal theory predicts the same pattern of edge excitations as for the $\nu=2/5$ ground state, i.e.
(1,1,3) [@ctz5]; this is actually observed in Fig.(\[fig6\]), and adds further support to the whole picture put forward in this paper. A slight peculiarity is that the third state at $\Delta J=2$ is higher in energy than other non-edge states – but the shape is more important in our analysis. Next, we proceed to the identification of these states by the overlap analysis (see Table (\[tab40\])). The Jain theory presents a degenerate description for this branch, namely $[4,4]$ and $[5,2,1]$ (See Table (\[tab1\])). The overlaps show that the latter wins, i.e. the states of the $[4,4]$ branch have less than $O(1/N)$ projection on the low-lying states; apparently, the electrons tend to “pile up” in the effective Landau levels. The empirical rule taken from this and other cases of two competing Jain branches is that one of them describes well the data, while the other definitely does not. The Jain states of the $[5,2,1]$ branch should be related to a three-component Abelian conformal theory; this is the first step in extending the chain of relations (\[rela\]) to this branch. For $\Delta J=1$, there are three Abelian edge states, which can be written, in analogy with Eq.(\[delta1\]): $$\begin{aligned}
\vert 1; a \rangle & =& \frac{1}{\sqrt{3}}\left(\, u^\dagger_1 u_0 + c^\dagger_1 c_0 + d^\dagger_1 d_0\, \right)\vert\Omega\rangle\ , \nonumber\\
\vert 1; b \rangle & =& \frac{1}{\sqrt{2}}\left(\, u^\dagger_1 u_0 - c^\dagger_1 c_0\, \right)\vert\Omega\rangle\ , \nonumber\\
\vert 1; c \rangle & =& \frac{1}{\sqrt{6}}\left(\, u^\dagger_1 u_0 + c^\dagger_1 c_0 - 2\, d^\dagger_1 d_0\, \right)\vert\Omega\rangle\ . \end{aligned}$$ \[three\] The ground state is $\vert\Omega\rangle = \vert [5,2,1]\rangle$ and the $\{u_k,c_k,d_k\}$ are fermionic Fock operators for the upper, central and lower (effective) Landau levels. In this case, we expect that the edge excitations are described by a reduction from the three-component Abelian theory to the $c=2$ $\winf$ minimal theory (analogous to the $c=2 \to 1$ projection (\[rela\]) for the Laughlin quasi-particles). It can be shown [@cz] that this reduction implies that the unique edge excitation at $\Delta J=1$ is the completely symmetric state $\vert 1; a\rangle$.
This is in agreement with the overlaps reported in Table (\[tab30\]). Next, the overlap analysis cannot be carried over to the $\Delta J=2$ edge excitations, owing to the finite-size limitations: the Jain state $[5,2,1]$ possesses only one electron in the highest effective Landau level, and is very far from the thermodynamic limit of three Fermi surfaces which is implicit in the CFT description; in particular, one of the $\Delta J=2$ particle-hole excitations in the higher level is missing. Therefore, we cannot analyse the projection of states in this branch; nevertheless, the shape and number of edge excitations are in agreement with the predictions of the $\winf$ minimal theory for a quasi-particle over the $\nu=2/5$ state. Analysis of the $N=6$ and $10$ Data =================================== The $N=6$ Data -------------- Fig. (\[fig7\]) shows the exact spectrum as a function of the angular momentum and Table (\[tab4\]) reports the bottom states of the Jain branches with $J\ge 33$. The observed structure in branches is similar to that of the $N=8$ case: the Laughlin branch starts at $J=45$, and its ground state is identified with the $[6,0]$ Jain state. In decreasing order of $J$, we identify as $[5,1]$ the state appearing at $J=39$ and as $[4,2]$ the one at $J=35$. For the state appearing at $J=33$ there are two possible Jain states, $[3,3]$ and $[4,1,1]$, which have the same energy $E^{(0)}$. Figure (\[fig70\]) shows the density profiles of all the bottom states: these give the first hints for identifying the incompressible ground states. The profiles should be compared with the data of average density $\rho(0)$ and angular momentum for idealized flat droplets, which are reported in Table (\[tab40\]). One finds that the bottom states with $J=39$ and $J=33$ are candidates for the incompressible Hall states with $\nu=2/5$ and $3/7$, respectively. Let us analyse them in turn. 
The profiles of the low-lying states of the $J=39$ branch are shown in Fig.(\[fig71\]): the edge states are $\Vert 40-0 \rr$, $\Vert 41-0\rr$ and $\Vert 41-1 \rr$, i.e. the observed multiplicities are $(1,1,2)$. The analysis of the overlaps with the Jain wave functions is similar to the previous cases: the results are that the bottom state $\Vert 39-0\rr$ is identified with $\vert [5,1]\rangle$, as expected; for $\Delta J=1$, the edge state $ \Vert 40-0\rr$ is the symmetric combination of Abelian edge states $\vert 1;+\rangle$ (see Eq.(\[delta1\])); instead, the non-edge eigenstate is $\Vert 40-1\rr\sim\vert 1; -\rangle$. The overlap analysis cannot be extended to $\Delta J=2$, because the $[5,1]$ ground state has only one electron in the second effective Landau level, and its particle-hole excitations cannot match the two-component Abelian CFT. Within the $\Delta J=1$ analysis, we cannot distinguish the minimal $c=2$ edge (multiplicities $(1,1,3,\dots)$) from the $c=1$ Abelian edge $(1,1,2,\dots)$ associated with the Laughlin quasi-particle excitations. In conclusion, this branch can be interpreted either as the $\nu=2/5$ incompressible Hall state described by the $\winf$ minimal theory (but one edge excitation is missing), or as a Laughlin quasi-particle (but its profile is exceptionally flat). The density profiles of the $J=33$ branch are presented in Fig. (\[fig8\]): the edge excitations are $\Vert 34-0\rangle\rangle$, $\Vert 35-1\rangle\rangle$ and $\Vert 35-2\rangle\rangle$, i.e. again Laughlin-like multiplicities. Their interpretation in conformal field theory begins by identifying the ground state $\Vert 33-0\rr$ with one of the two possible Jain bottom states: the overlaps in Table (\[tab5\]) show that the exact state is well described by the three-level state $[4,1,1]$, rather than the “simpler” $[3,3]$ one; this is the “piling-up” of the electrons, which we have already encountered.
Actually, the computation of the energies of the two Jain bottom states shows that the $[3,3]$ branch is separated from the low-lying $[4,1,1]$ branch by a gap of order $E_D$; in this case, the interaction among defects is of the same order as the energy $E^{(0)}$ for non-interacting defects [@jain]. Next, we analyse the low-lying states of this branch: the Jain excitations of the $[4,1,1]$ bottom state should be compared with the three-component Abelian CFT, as we have already done for the branch $[5,2,1]$ of $N=8$ electrons. Again, we must limit ourselves to the excitations with $\Delta J=1$, due to the finite sizes of the would-be Fermi seas. The results for the overlaps in Table (\[tab5\]) are very similar to those in Table (\[tab30\]) for the quasi-particle over the $\nu=2/5$ state: the unique edge state $\Vert 34-0 \rr$ is clearly identified as the symmetric excitation $\vert 1; a\rangle$. The density shape of the $J=33$ bottom state suggests its interpretation as the $\nu=3/7$ hierarchical Hall state for $N=6$ electrons. Then, we expect that its edge excitations are described by the $c=3$ $\winf$ minimal conformal theory, whose multiplicities are again $(1,1,3)$ (see the last row of Table (\[tab2\])) [@ctz5]; actually, one can show that the excitations of the $c=3$ and $c=2$ minimal theories only differ for $\Delta J \ge 3$, and that both agree with those of the simpler Laughlin theory for $\Delta J=1$. In conclusion, the $N=6$ exact edge states are consistently described by the $\winf$ minimal conformal theory [@ctz5], within the finite-size limitations: we recall that the span for edge excitations in CFT is $\Delta J < O(\sqrt{N}) \sim 2$ [@cdtz1]. Another consistent interpretation for all the branches is given by the Laughlin one-component theory; possibly, the condensation of quasi-particles leading to the hierarchical Hall fluids cannot take place in such a small system. 
Nevertheless, it is important that the $N=6$ and the $N=8$ data can be consistently interpreted. The $N=10$ Data ------------------- The spectrum of energies as a function of the angular momentum for $N=10$ is presented in Fig. (\[fig10\]) and the branches of Jain states are given in Table (\[tab6\]); the Laughlin ground state $[10]$ and the $[9,1]$ branch are not presented in Fig. (\[fig10\]), which focusses on the $\nu\le 2/5$ region. The comparison between the exact branches and the Jain predictions shows the dynamical phenomenon already seen before: for any pair of Jain branches which have degenerate energy $E^{(0)}$ (Table (\[tab6\])), only one is realized in the spectrum, and correctly describes the low-lying states (as shown by the overlap analysis). Actually, $J=111$ matches $[7,3]$, but $[8,1,1]$ is not observed; similarly $J=108$ matches $[7,2,1]$, $J=103$ is $[6,3,1]$ and $J=101$ is $[6,2,2]$, while $[6,4]$, $[5,5]$ and $[5,4,1]$ are not observed; these are new examples of the “piling up” effect. The analysis of the density profiles of all the bottom states in Fig.(\[fig100\]) shows that the branches $J=125$ and $J=117$ are quasi-particles, with the same qualitative features of the analogous $N=8$ cases. The bottom states of the $J=111$ and $103$ branches are not growing in the bulk: although rather oscillating, they are the possible candidates for the $\nu=2/5$ and $\nu=3/7$ incompressible Hall states with $N=10$ electrons, respectively (see the data in Table (\[tab40\])). The by-now standard analysis of the edge excitations is first performed on the $J=111$ branch; the plots are shown in Fig.(\[fig101\]) and the overlaps with the Jain states are reported in Table (\[tab60\]). The usual Laughlin edge excitations ($\Vert 112-0 \rr$, $\Vert 113-0 \rr$,$\Vert 113-1 \rr$) are clearly seen in the plots and they definitely overlap on the symmetric Jain states $\vert 1;+\rangle$, $\vert 2;a\rangle$, $\vert 2;b+\rangle$ and $\vert 2;c+\rangle$.
According to the $\winf$ minimal conformal theory, the interpretation of this branch as the $\nu=2/5$ hierarchical Hall state requires the identification of a third edge excitation with $\Delta J=2$. This can reasonably be $\Vert 113-3\rr$: its density profile is a non-infinitesimal, but in-phase, deformation of the bottom state, and its projection is large on the symmetric Jain states (see Table (\[tab60\])). We conclude that the $\winf$ minimal theory consistently describes this branch as the $\nu=2/5$ hierarchical state: this is the third definite evidence for this theory. The next branch $J=108$, i.e. $[7,2,1]$, is plotted in Fig.(\[fig102\]): the profile of the bottom state grows in the bulk and this fits the natural expectation that this branch is a quasi-particle over the $\nu=2/5$ state. The families of low-lying states are actually very similar to those of the analogous $N=8$ branch ($J=63$) in Fig.(\[fig6\]). The multiplicities of the edge excitations should again be (1,1,3) as for the $\nu=2/5$ ground state (they are observed in the corresponding $N=8$ branch); instead, the actual counting yields (1,1,2) – presumably, the missing state is higher in energy than $\Vert 110-5 \rr$. Let us now discuss the $J=103$ branch, which is drawn in Fig.(\[fig11\]); the corresponding overlaps with the Jain branch $[6,3,1]$ are reported in Table (\[tab7\]). The density profile of the bottom state is similar to that of the $J=111$ state, i.e. the $\nu=2/5$ Hall state; thus, $\Vert 103-0\rr$ can be interpreted as the $\nu=3/7$ hierarchical state for $N=10$. The overlap matrix is rather standard for a three-level Jain branch and identifies the CFT labels of the $\Delta J=1$ low-lying states; on the other hand, the $\Delta J=2$ excitations cannot be analysed for the usual reason that there is a single electron in the highest effective Landau level.
The edge excitations are identified by the density profiles and their multiplicities are found to be (1,2,4): this is in disagreement with the $c=3$ $\winf$ minimal conformal theory, which predicts the values (1,1,3) (see Table (\[tab2\])). It is possible that there is an accidental degeneracy with another family (0,1,1-2) of excitations. This interpretation is supported by the analysis of the next branch $J=101$, which is shown in Fig. (\[fig12\]). The bottom state is identified as a quasi-particle over the $\nu=3/7$ state, as expected; the multiplicities (1,1,…) of its edge excitations are again in agreement with the $\winf$ minimal theory. In conclusion, the $N=10$ spectrum presents the general features already encountered for $N=8$ and $6$; the edge excitations of its hierarchical states can be interpreted within the $\winf$ minimal conformal theory (and the Laughlin quasi-particles by the one-component Abelian theory, of course). However, we should remark that the finite-size effects are not smaller than in the $N=8$ spectrum; this is contrary to the expectation of a smooth thermodynamic limit towards the conformal field theory. Conclusions =========== In this paper we have presented a comprehensive analysis of the low-lying spectrum of the electrons in the quantum Hall effect, in the regime of hierarchical Hall states $\nu\le 2/5$. We have gone beyond previous works along these lines: regarding the studies of the composite-fermion correspondence [@deja] [@jain], we have done the first detailed analysis of the edge state structure, which was overlooked by the studies on the spherical geometry [@jj][@jaka]. We have shown that the Jain composite-fermion theory describes very well the low-lying spectrum; but we also revealed the dynamical mechanism which takes place when two Jain states are allowed: the electrons tend to “pile up”, if they are let to freely fill the effective Landau levels. 
Moreover, we improved and reviewed previous analyses on the disk geometry [@wen][@deja][@kaap]. We introduced two new criteria for the analysis of the exact states: i) the plot of their density profiles for distinguishing quasi-particle excitations from new incompressible ground states, and for identifying the real edge excitations within the low-lying states; ii) the interpretation of their overlaps with the Jain states in the language of conformal field theory, with concrete relations among the states and projections thereof. We have presented a consistent analysis of the low-lying spectrum. We have shown that the edge excitations form a specific subset of the low-lying states, by applying the previous criteria and by checking that the Laughlin quasi-particles are described by the well-understood one-component Abelian conformal theory. While all the low-lying states are nicely described by the Jain theory, i.e. by the multi-component Abelian conformal theory [@wen][@juerg][@read], the real edge excitations of the hierarchical Hall states match the predictions of the $\winf$ minimal theory [@ctz5] (and those of the Laughlin quasi-particles naturally match the one-component Abelian theory). Although the numerical data show some blurring and some finite-size limitations, the general picture seems firmly established; we found four clear positive pieces of evidence out of the six hierarchical states with $N=8$ and $10$ electrons. In conclusion, we hope that this work will stimulate further analyses of the hierarchical Hall states. [**Acknowledgements**]{} A. C. and G. R. Z. would like to thank the C.E.R.N. Theory Division and the theory group at L.A.P.P., Annecy, for hospitality. A. C. also thanks the Theory Group of the Centro Atómico Bariloche for hospitality and acknowledges the partial support of the European Community Program FMRX-CT96-0012. G. R. Z. is grateful to I.N.F.N. Sezione di Firenze for hospitality. The work of G. R. Z.
is supported by a grant of the Antorchas Foundation (Argentina). For a review see: R. A. Prange, S. M. Girvin, [*The Quantum Hall Effect*]{}, Springer Verlag, New York (1990). For a review see: S. Das Sarma and A. Pinczuk, [*Perspectives in Quantum Hall Effects*]{}, Wiley, New York (1996). R. B. Laughlin, (1983) 1395; for a review see: R. B. Laughlin, [*Elementary Theory: the Incompressible Quantum Fluid*]{}, in [@prange]. J. K. Jain, (1989) 199; (1990) 7653; for reviews see: J. K. Jain, [*Adv. in Phys.*]{} [**41**]{} (1992) 105, and [*Composite Fermions*]{}, in [@dspin]. For a review see: H. L. Stormer and D. C. Tsui, [*Composite Fermions in the Fractional Quantum Hall Effect*]{}, in [@dspin] For a review see: E. Fradkin and A. Lopez, (1993) 67. X. G. Wu and J. K. Jain, (1995) 1752. J. K. Jain and R. K. Kamilla, (1997) 2621. For a review, see: X. G. Wen, (1992) 1711, [*Adv. in Phys.*]{} [**44**]{} (1995) 405. R. C. Ashoori, H. L. Stormer, L. N. Pfeiffer, K. W. Baldwin and K. West, (1992) 3894; F. P. Milliken, C. P. Umbach and R. A. Webb, [*Solid State Commun.*]{} [**97**]{} (1996) 309; P. Fendley, A. W. W. Ludwig and H. Saleur, (1995) 8934; for a review, see: C. L. Kane and M. P. A. Fisher, [*Edge-State Transport*]{}, in [@dspin]. V. J. Goldman and B. Su, [*Science*]{} [**267**]{} (1995) 1010; R. de-Picciotto et al., cond-mat/9707289; L. Saminadayar et al., cond-mat/9706307. J. Fröhlich and A. Zee, (1991) 517; X.-G. Wen and A. Zee, (1993) 2290. J. Fröhlich and E. Thiran, [*J. Stat. Phys.*]{} [**76**]{} (1994) 209; J. Fröhlich, T. Kerler, U. M. Studer and E. Thiran, (1995) 670. A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, (1984) 333; for a review see: P. Ginsparg, [*Applied Conformal Field Theory*]{}, in [*Fields, Strings and Critical Phenomena*]{}, Les Houches School 1988, E. Brezin and J. Zinn-Justin eds., North-Holland, Amsterdam (1990). M. Stone, (NY) [**207**]{} (1991) 38. A. Cappelli, G. V. Dunne, C. A. Trugenberger and G. R. Zemba, (1993) 531. N.
Read, (1990) 1502. A. Cappelli, C. A. Trugenberger and G. R. Zemba, (1995) 470; for a review, see: (1996) 112. S. Iso, D. Karabali and B. Sakita, (1992) 700, (1992) 143. A. Cappelli, C. A. Trugenberger and G. R. Zemba, (1993) 465, (1993) 100; for a review, see: A. Cappelli, G. V. Dunne, C. A. Trugenberger and G. R. Zemba, (1993) 21. A. Cappelli, C. A. Trugenberger and G. R. Zemba, (1994) 1902. F. D. M. Haldane, [*The Hierarchy of Fractional States and Numerical Studies*]{}, in [@prange]. G. Dev and J. K. Jain, [*Phys. Rev.*]{} [**45 B**]{} (1992) 1223. M. Kasner and W. Apel, [*Phys. Rev.*]{} [**48 B**]{} (1993) 11435; [*Ann. Physik*]{} [**3**]{} (1994) 433. A. Cappelli and G. R. Zemba, [*Hamiltonian Formulation for the Minimal Models of the Incompressible Quantum Hall Fluids*]{}, to appear. V. Kac and A. Radul, [*Comm. Math. Phys.*]{} [**157**]{} (1993) 429; H. Awata, M. Fukuma, Y. Matsuo and S. Odake, [*Prog. Theor. Phys. (Supp.)*]{} [**118**]{} (1995) 343; E. Frenkel, V. Kac, A. Radul and W. Wang, (1995) 337. A. M. Polyakov, (1990) 833; M. Bershadsky and H. Ooguri, (1989) 49. F. D. M. Haldane, (1983) 605; B. I. Halperin, (1984) 1583; M. Greiter, (1994) 48.
$N=8$ Jain states   $J$   $E^{(0)}$   --------------------- ------- ------------- [\[8\]]{} 84   0   [\[7,1\]]{} 76   1   [\[6,2\] ]{} 70   2   [\[6,1,1\]]{} 68   3   [\[5,3\] ]{} 66   3   [\[4,4\] ]{} 64   4   [\[5,2,1\]]{} 63   4   : Bottom states of the Jain theory for $N=8$, ordered by the decreasing angular momentum $J \ge 63$, and the corresponding approximate energies $E^{(0)}$, in units of $E_D$.[]{data-label="tab1"} $\ \ c\ \ $ $\Delta J$ 0 1 2 3   ------------- ------------------------- --- --- --- ----- $1$ One-component Abelian 1 1 2 3   $2$ Two-component Abelian 1 2 5 10  Minimal Incompressible 1 1 3 5   $3$ Three-component Abelian 1 3 9 22  Minimal Incompressible 1 1 3 6   : The number of edge excitations for the Laughlin Hall fluid $\nu=1/3$ and its quasi-particles (first row); for the hierarchical fluids $\nu=2/5$ (second and third rows) and $\nu=3/7$ (fourth and fifth rows), according to the two relevant conformal field theories with central charge $c$.[]{data-label="tab2"} J=70  $\vert [6,2] \rangle$ ---------------------------- ----------------------- $\langle\langle 70-0\Vert$ 0.985  $\langle\langle 70-1\Vert$ -0.010  : Overlap matrices for a Laughlin quasi-particle branch. The $N=8$ low-lying exact states are denoted by $\Vert 70-i \rangle\rangle$, $i=0,1$, $\Vert 71-j \rangle\rangle$, $j=0,1$, and $\Vert 72-k \rangle\rangle$, $k=0,\dots,4$. The corresponding orthogonalized Jain states are $\vert [6,2]\rangle$, $\vert 1;\pm\rangle$ at $\Delta J=1$, and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$) at $\Delta J=2$.[]{data-label="tab20"} J=71 $\vert 1; +\rangle$ $\vert 1;-\rangle$ ---------------------------- --------------------- -------------------- $\langle\langle 71-0\Vert$ -0.98  -0.04  $\langle\langle 71-1\Vert$ 0.04  -0.98  : Overlap matrices for a Laughlin quasi-particle branch.
The $N=8$ low-lying exact states are denoted by $\Vert 70-i \rangle\rangle$, $i=0,1$, $\Vert 71-j \rangle\rangle$, $j=0,1$, and $\Vert 72-k \rangle\rangle$, $k=0,\dots,4$. The corresponding orthogonalized Jain states are $\vert [6,2]\rangle$, $\vert 1;\pm\rangle$ at $\Delta J=1$, and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$) at $\Delta J=2$.[]{data-label="tab20"} J=72 $\vert 2;a\rangle$ $\vert 2;b+\rangle$ $\vert 2;b-\rangle$ $\vert 2;c+\rangle$ $\vert 2;c-\rangle$ ---------------------------- -------------------- --------------------- --------------------- --------------------- --------------------- $\langle\langle 72-0\Vert$ 0.63  0.51  0.07  0.45  0.03  $\langle\langle 72-1\Vert$ 0.21  -0.64  -0.37  0.43  0.35  $\langle\langle 72-2\Vert$ -0.10  -0.05  0.66  0.01  0.74  $\langle\langle 72-3\Vert$ -0.66  0.31  -0.02  0.72  -0.17  $\langle\langle 72-4\Vert$ 0.10  -0.28  0.52  0.13  -0.46  : Overlap matrices for a Laughlin quasi-particle branch. The $N=8$ low-lying exact states are denoted by $\Vert 70-i \rangle\rangle$, $i=0,1$, $\Vert 71-j \rangle\rangle$, $j=0,1$, and $\Vert 72-k \rangle\rangle$, $k=0,\dots,4$. The corresponding orthogonalized Jain states are $\vert [6,2]\rangle$, $\vert 1;\pm\rangle$ at $\Delta J=1$, and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$) at $\Delta J=2$.[]{data-label="tab20"} J=66  $\vert [5,3] \rangle$ ----------------------------- ----------------------- $\langle\langle 66- 0\Vert$ 0.893  $\langle\langle 66- 1\Vert$ 0.000  : Overlap matrices for the $\nu=2/5$ branch with $N=8$. 
The exact states are $\Vert 66-i \rangle\rangle$, $i=0,1$, $\Vert 67-j \rangle\rangle$, $j=0,1,2$, and $\Vert 68-k \rangle\rangle$, $k=0,\dots,5$; the orthogonalized Jain states are $\vert [5,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab3"}

  J=67                         $\vert 1; +\rangle$   $\vert 1;-\rangle$
  ---------------------------- --------------------- --------------------
  $\langle\langle 67-0\Vert$   0.890                 -0.056
  $\langle\langle 67-1\Vert$   0.000                 0.000
  $\langle\langle 67-2\Vert$   0.001                 0.008

  : Overlap matrices for the $\nu=2/5$ branch with $N=8$. The exact states are $\Vert 66-i \rangle\rangle$, $i=0,1$, $\Vert 67-j \rangle\rangle$, $j=0,1,2$, and $\Vert 68-k \rangle\rangle$, $k=0,\dots,5$; the orthogonalized Jain states are $\vert [5,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab3"}

  J=68                         $\vert 2;a\rangle$   $\vert 2;b+\rangle$   $\vert 2;b-\rangle$   $\vert 2;c+\rangle$   $\vert 2;c-\rangle$
  ---------------------------- -------------------- --------------------- --------------------- --------------------- ---------------------
  $\langle\langle 68-0\Vert$   -0.283               0.031                 -0.072                0.351                 0.110
  $\langle\langle 68-1\Vert$   0.065                -0.750                -0.145                0.454                 0.105
  $\langle\langle 68-2\Vert$   -0.631               -0.366                0.066                 -0.507                0.051
  $\langle\langle 68-3\Vert$   0.000                0.000                 0.000                 0.000                 0.000
  $\langle\langle 68-4\Vert$   0.000                0.000                 0.000                 0.000                 0.000
  $\langle\langle 68-5\Vert$   -0.072               -0.155                0.014                 -0.065                0.128

  : Overlap matrices for the $\nu=2/5$ branch with $N=8$.
The exact states are $\Vert 66-i \rangle\rangle$, $i=0,1$, $\Vert 67-j \rangle\rangle$, $j=0,1,2$, and $\Vert 68-k \rangle\rangle$, $k=0,\dots,5$; the orthogonalized Jain states are $\vert [5,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab3"}

  J=63                         $\vert [5,2,1] \rangle$
  ---------------------------- -------------------------
  $\langle\langle 63-0\Vert$   -0.959
  $\langle\langle 63-1\Vert$   0.000

  : Overlap matrices of the quasi-particle branch over the $\nu=2/5$ state for $N=8$. The exact states are $\Vert 63-i \rangle\rangle$, $i=0,1$, and $\Vert 64-j \rangle\rangle$, $j=0,1,2$; the orthogonalized Jain states are $\vert [5,2,1] \rangle$ and its excitations $\vert 1; x\rangle$, $x=a,b,c$.[]{data-label="tab30"}

  J=64                         $\vert 1;a\rangle$   $\vert 1;b\rangle$   $\vert 1;c\rangle$
  ---------------------------- -------------------- -------------------- --------------------
  $\langle\langle 64-0\Vert$   0.94                 0.01                 0.14
  $\langle\langle 64-1\Vert$   0.06                 -0.90                -0.32
  $\langle\langle 64-2\Vert$   -0.14                -0.29                0.87

  : Overlap matrices of the quasi-particle branch over the $\nu=2/5$ state for $N=8$.
The exact states are $\Vert 63-i \rangle\rangle$, $i=0,1$, and $\Vert 64-j \rangle\rangle$, $j=0,1,2$; the orthogonalized Jain states are $\vert [5,2,1] \rangle$ and its excitations $\vert 1; x\rangle$, $x=a,b,c$.[]{data-label="tab30"}

  $N=6$ Jain states     $J$    $E^{(0)}$
  --------------------- ------ -----------
  [\[6\]]{}             45     0
  [\[5,1\]]{}           39     1
  [\[4,2\]]{}           35     2
  [\[4,1,1\]]{}         33     3
  [\[3,3\]]{}           33     3

  : Jain bottom states for $N=6$, ordered by decreasing angular momentum $J \ge 33$, and the corresponding energies $E^{(0)}$, in units of $E_D$.[]{data-label="tab4"}

  -------- ---------------- ------- ------- --------
                            $J$
  $\nu$    $2\pi\rho(0)$    $N=6$   $N=8$   $N=10$
  -------- ---------------- ------- ------- --------
  1/3      0.33             45      84      135
  2/5      0.40             37.5    70      112.5
  3/7      0.43             35      65.3    105
  -------- ---------------- ------- ------- --------

  : Typical values of the average density $\rho(0)$ and of the angular momentum for idealized flat droplets of incompressible fluids.[]{data-label="tab40"}

  J=33                         $\vert [3,3] \rangle$   $\vert [4,1,1] \rangle$
  ---------------------------- ----------------------- -------------------------
  $\langle\langle 33-0\Vert$   -0.290                  0.935
  $\langle\langle 33-1\Vert$   0.000                   0.000

  : Overlap matrices for the candidate $\nu=3/7$ branch with $N=6$.
The exact states $\Vert 33-i \rangle\rangle$, ($i=0,1$), are compared with the two Jain bottom states $\vert [3,3] \rangle$ and $\vert [4,1,1] \rangle$; the $\Delta J=1$ low-lying exact states are $\Vert 34-j \rangle\rangle$, ($j=0,1,2$), and the Jain states $\vert 1; x\rangle$, $x=a,b,c$, are excitations of $\vert [4,1,1] \rangle$.[]{data-label="tab5"}

  J=34                         $\vert 1;a\rangle$   $\vert 1;b\rangle$   $\vert 1;c\rangle$
  ---------------------------- -------------------- -------------------- --------------------
  $\langle\langle 34-0\Vert$   -0.872               -0.016               -0.339
  $\langle\langle 34-1\Vert$   -0.091               0.949                0.188
  $\langle\langle 34-2\Vert$   0.000                0.000                0.000

  : Overlap matrices for the candidate $\nu=3/7$ branch with $N=6$. The exact states $\Vert 33-i \rangle\rangle$, ($i=0,1$), are compared with the two Jain bottom states $\vert [3,3] \rangle$ and $\vert [4,1,1] \rangle$; the $\Delta J=1$ low-lying exact states are $\Vert 34-j \rangle\rangle$, ($j=0,1,2$), and the Jain states $\vert 1; x\rangle$, $x=a,b,c$, are excitations of $\vert [4,1,1] \rangle$.[]{data-label="tab5"}

  $N=10$ Jain states     $J$    $E^{(0)}$
  ---------------------- ------ -----------
  [\[10\]]{}             135    0
  [\[9,1\]]{}            125    1
  [\[8,2\]]{}            117    2
  [\[8,1,1\]]{}          115    3
  [\[7,3\]]{}            111    3
  [\[7,2,1\]]{}          108    4
  [\[6,4\]]{}            107    4
  [\[5,5\]]{}            105    5
  [\[6,3,1\]]{}          103    5
  [\[7,1,1,1\]]{}        105    6
  [\[6,2,2\]]{}          101    6
  [\[5,4,1\]]{}          100    6

  : Jain bottom states for $N=10$ with angular momentum $J \ge 100$, and the corresponding energies $E^{(0)}$, in units of $E_D$.[]{data-label="tab6"}

  J=111                         $\vert [7,3] \rangle$
  ----------------------------- -----------------------
  $\langle\langle 111-0\Vert$   -0.979
  $\langle\langle 111-1\Vert$   0.014

  : Overlap matrices for the $\nu=2/5$ branch with $N=10$.
The exact states are $\Vert 111-i \rangle\rangle$, $i=0,1$, $\Vert 112-j \rangle\rangle$, $j=0,1$, and $\Vert 113-k \rangle\rangle$, $k=0,\dots,4$; the orthogonalized Jain states are $\vert [7,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab60"}

  J=112                         $\vert 1; +\rangle$   $\vert 1;-\rangle$
  ----------------------------- --------------------- --------------------
  $\langle\langle 112-0\Vert$   0.978                 -0.027
  $\langle\langle 112-1\Vert$   -0.027                0.940

  : Overlap matrices for the $\nu=2/5$ branch with $N=10$. The exact states are $\Vert 111-i \rangle\rangle$, $i=0,1$, $\Vert 112-j \rangle\rangle$, $j=0,1$, and $\Vert 113-k \rangle\rangle$, $k=0,\dots,4$; the orthogonalized Jain states are $\vert [7,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab60"}

  J=113                         $\vert 2;a\rangle$   $\vert 2;b+\rangle$   $\vert 2;b-\rangle$   $\vert 2;c+\rangle$   $\vert 2;c-\rangle$
  ----------------------------- -------------------- --------------------- --------------------- --------------------- ---------------------
  $\langle\langle 113-0\Vert$   0.69                 0.43                  -0.03                 0.54                  -0.03
  $\langle\langle 113-1\Vert$   0.04                 -0.77                 -0.17                 0.56                  0.14
  $\langle\langle 113-2\Vert$   -0.06                0.00                  -0.59                 0.00                  -0.73
  $\langle\langle 113-3\Vert$   -0.49                0.16                  0.46                  0.50                  -0.33
  $\langle\langle 113-4\Vert$   -0.20                0.13                  -0.14                 0.15                  0.13

  : Overlap matrices for the $\nu=2/5$ branch with $N=10$.
The exact states are $\Vert 111-i \rangle\rangle$, $i=0,1$, $\Vert 112-j \rangle\rangle$, $j=0,1$, and $\Vert 113-k \rangle\rangle$, $k=0,\dots,4$; the orthogonalized Jain states are $\vert [7,3] \rangle$, $\vert 1;\pm\rangle$ and ($\vert 2;a\rangle,\vert2;b\pm\rangle,\vert 2;c\pm\rangle$).[]{data-label="tab60"}

  J=103                         $\vert [6,3,1] \rangle$
  ----------------------------- -------------------------
  $\langle\langle 103-0\Vert$   -0.954
  $\langle\langle 103-1\Vert$   0.000

  : Overlap matrices for the $\nu=3/7$ branch for $N=10$. The exact states are $\Vert 103-i \rangle\rangle$, $i=0,1$, and $\Vert 104-j \rangle\rangle$, $j=0,1,2$; the orthogonalized Jain states are $\vert [6,3,1] \rangle$ and its excitations $\vert 1; x\rangle$, $x=a,b,c$.[]{data-label="tab7"}

  J=104                         $\vert 1;a\rangle$   $\vert 1;b\rangle$   $\vert 1;c\rangle$
  ----------------------------- -------------------- -------------------- --------------------
  $\langle\langle 104-0\Vert$   -0.947               0.030                -0.105
  $\langle\langle 104-1\Vert$   0.031                -0.779               -0.507
  $\langle\langle 104-2\Vert$   0.092                0.376                -0.715

  : Overlap matrices for the $\nu=3/7$ branch for $N=10$. The exact states are $\Vert 103-i \rangle\rangle$, $i=0,1$, and $\Vert 104-j \rangle\rangle$, $j=0,1,2$; the orthogonalized Jain states are $\vert [6,3,1] \rangle$ and its excitations $\vert 1; x\rangle$, $x=a,b,c$.[]{data-label="tab7"}

[^1]: Moreover, the mean-field theory of the composite fermion has been developed in Ref. [@mfth].

[^2]: An equivalent language is given by the topological Chern-Simons theories in $(2+1)$ dimensions [@juerg].

[^3]: For six electrons, we have also computed the spectrum for the Coulomb interaction and found the same qualitative features.

[^4]: The interaction among defects is clearly neglected in $E^{(0)}$.

[^5]: The full set of Fourier modes $\{J^a_n\}$ generates the current algebra $\su2$ which contains this $SU(2)$ as a subalgebra [@gins].
[^6]: These are the repulsive and attractive fixed points of the renormalization group trajectory, respectively.

[^7]: See: http://andrea.fi.infn.it/cappelli/disk.html.

[^8]: Hereafter, we set the magnetic length $\ell=1$.

[^9]: This introduces some ambiguities for the ordering of states, but they do not affect the qualitative properties to be discussed hereafter.

[^10]: Table (III) reports approximate numbers for easy reading; precise data can be found on-line.
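The Abelian counts in the edge-excitation table above (data-label "tab2") are the coefficients of the $c$-boson character $\prod_{n\ge 1}(1-q^n)^{-c}$. As an illustrative cross-check (this script and its names are ours, not part of the paper), a few lines of Python reproduce those multiplicities:

```python
def edge_counts(c, max_level):
    """Coefficients of prod_{n>=1} (1 - q^n)^(-c), i.e. the number of
    c-component Abelian edge excitations at each level Delta J."""
    coeffs = [1] + [0] * max_level            # start from the constant series 1
    for _ in range(c):                        # multiply c times by 1/prod(1 - q^n)
        for n in range(1, max_level + 1):     # multiply by 1/(1 - q^n) = 1 + q^n + ...
            for k in range(n, max_level + 1):
                coeffs[k] += coeffs[k - n]
    return coeffs

# Reproduces the Abelian rows of the edge-excitation table:
print(edge_counts(1, 3))   # [1, 1, 2, 3]
print(edge_counts(2, 3))   # [1, 2, 5, 10]
print(edge_counts(3, 3))   # [1, 3, 9, 22]
```

For $c=1$ these are just the partition numbers $p(\Delta J)$, in agreement with the one-component Abelian row.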
---
abstract: 'The phenomenon of the dark energy transition between the quintessence regime ($w > -1$) and the phantom regime ($w < -1$), also known as the cosmological constant boundary crossing, is analyzed in terms of the dark energy equation of state. It is found that the dark energy equation of state in the dark energy models which exhibit the transition is [*implicitly*]{} defined. The generalizations of the models explicitly constructed to exhibit the transition are studied to gain insight into the mechanism of the transition. It is found that the cancellation of the terms corresponding to the cosmological constant boundary makes the transition possible.'
address: 'Departament d’Estructura i Constituents de la Matèria, Universitat de Barcelona, Av. Diagonal 647, 08028 Barcelona, Catalonia, Spain'
author:
- 'Hrvoje Štefančić [^1]'
title: 'Crossing of the Cosmological Constant Boundary - an Equation of State Description'
---

Among many important cosmological problems, the phenomenon of the present, late-time accelerated expansion of the universe has come to the forefront of observational and theoretical efforts in recent years. Apart from the exciting series of cosmological observational results confirming the accelerated character of the expansion of the universe [@SNIa; @CMB; @LSS], we are witnessing many theoretical endeavours aimed at explaining the features of the present expansion of the universe, as well as the revival of some longstanding problems in cosmology and high energy physics, such as the cosmological constant problem [@cc]. From the theoretical viewpoint there is still no decisive insight into the nature of the accelerating mechanism. However, many promising models have been proposed to explain the acceleration of the universe’s expansion. Some of the interesting approaches include the braneworld models and the modifications of gravity at cosmological scales.
The most studied accelerating mechanism is the existence of a cosmic component with negative pressure, a so-called [*dark energy*]{} component. Dark energy is a very useful concept since all our ignorance about the acceleration phenomenon is encoded into a single cosmic component. It can also serve as an effective description of other approaches to the explanation of the acceleration of the universe. Many models of dark energy have been constructed so far, assigning different properties to dark energy. A very general classification of these models is possible with respect to the parameter $w$ of the dark energy equation of state (EOS), $p_{d}=w \rho_{d}$, where $p_{d}$ and $\rho_{d}$ refer to the dark energy pressure and energy density, respectively [^2]. The benchmark value for the parameter of the dark energy EOS is $w=-1$, which is characteristic of the cosmological constant (CC). A problem associated with the CC value predicted by high energy physics, i.e. its discrepancy of many orders of magnitude with the value inferred from observations, is notoriously difficult. Such a situation has stimulated the development of dynamical dark energy models. Some prominent dynamical models of dark energy such as [*quintessence*]{} [@Q], [*k-essence*]{} [@k] or [*Chaplygin gas*]{} [@Chaplygin] are characterized by $w > -1$. On the other side of the CC boundary lie models of [*phantom energy*]{} [@phantom], with the property $w < -1$. These models are characterized by a tension between a certain favour from the observational side and a certain disfavour from the theoretical side. Many recent analyses of observational data [@obser], using ingenious parametrizations of the redshift dependence $w(z)$, show that the best-fit values imply a transition of the dark energy parameter of EOS from $w > -1$ to $w < -1$ at a small redshift.
This phenomenon has been referred to in the literature as [*the crossing of the CC boundary, the crossing of the phantom divide or the transition between the quintessence and phantom regimes*]{}. It is important to stress that currently some other options, like that of the $\Lambda$CDM cosmology, are also consistent with the observational data. Should future observations confirm the present indications of the crossing, the aspects of the theoretical description of the crossing might provide a useful means of distinguishing and discriminating various dark energy models and other frameworks designed to explain the present cosmic acceleration. Therefore, the crossing of the CC boundary is to some extent observationally favoured and its description is a theoretical challenge. A number of approaches have been adopted so far to describe the phenomenon of the CC boundary crossing [@prethodni]. In our considerations of the phenomenon of the CC boundary crossing [@PRDcross], we assume that dark energy is a single, noninteracting cosmic component. We focus on the question whether the CC boundary crossing can be described using the dark energy EOS and, if the answer is yes, which form the dark energy EOS needs to have to make the crossing possible. The equation of state is most frequently formulated as $p$ given as an analytic expression of the energy density $\rho$. In the considerations given below we use a much broader definition of EOS. We define the equation of state [*parametrically*]{}, i.e. as a pair of quantities depending on the cosmic time, $(\rho(t),p(t))$, or equivalently on the scale factor $a$ in the expanding universe, $(\rho(a),p(a))$. This definition easily comprises broad classes of dark energy models considered in the literature. Let us start by considering a specific dark energy model which describes the CC boundary crossing.
The dependence of the dark energy density on the scale factor in this model is given by $$\label{eq:denmod1} \rho = C_{1} \left( \frac{a}{a_{0}} \right)^{-3(1+\gamma)} + C_{2} \left( \frac{a}{a_{0}} \right)^{-3(1+\eta)} \, ,$$ where $\gamma > -1$ and $\eta < -1$. The scaling of this energy density resembles the sum of two independent cosmic components. However, we consider it to be the energy density of a [*single*]{} cosmic component and study its properties. Using the energy-momentum tensor conservation, the expression for the dark energy pressure is obtained: $$\label{eq:presmod1} p = \gamma C_{1} \left( \frac{a}{a_{0}} \right)^{-3(1+\gamma)} + \eta C_{2} \left( \frac{a}{a_{0}} \right)^{-3(1+\eta)} \, .$$ Combining (\[eq:denmod1\]) and (\[eq:presmod1\]), the expression for the parameter of the dark energy EOS acquires the following form: $$\label{eq:wformod1} w=\frac{\gamma + \eta \frac{\gamma - w_{0}}{w_{0} - \eta} \left( \frac{a}{a_{0}} \right)^{3(\gamma - \eta)}}{1 + \frac{\gamma - w_{0}}{w_{0} - \eta} \left( \frac{a}{a_{0}} \right)^{3(\gamma - \eta)}} \, .$$ The functional dependence of the parameter $w$ on the scale factor is depicted in Fig. \[fig:mod1\]. The equations (\[eq:denmod1\]) and (\[eq:presmod1\]) can be further used to obtain the equation of state of the studied dark energy model: $$\label{eq:eosdetmod1} \frac{p -\eta \rho}{(\gamma-\eta) C_{1}} = \left( \frac{\gamma \rho - p}{(\gamma-\eta) C_{2}} \right)^{(1+\gamma)/(1+\eta)} \, .$$ The most important feature of the obtained EOS is that it is defined [*implicitly*]{}. This result, obtained by the explicit construction, indicates that the phenomenon of the CC boundary crossing can be studied using an implicitly defined dark energy EOS. Apart from the implicit character of the dark energy EOS that allows the CC boundary crossing, it would be of interest to gain additional insight into the mechanism of the crossing, i.e. the conditions necessary for the crossing to happen.
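As a quick numerical sanity check of the crossing in the model (\[eq:denmod1\])-(\[eq:presmod1\]), one can evaluate $w=p/\rho$ directly. The parameter values below are illustrative choices, not taken from the text:

```python
# Illustrative parameters only (not from the text): gamma > -1, eta < -1.
gamma, eta = -0.8, -1.2
C1 = C2 = 1.0

def w_of_a(a, a0=1.0):
    """w = p/rho for the two-term model; with C1 = C2 and the symmetric
    choice of gamma and eta, the crossing w = -1 occurs exactly at a = a0."""
    r1 = C1 * (a / a0) ** (-3 * (1 + gamma))
    r2 = C2 * (a / a0) ** (-3 * (1 + eta))
    return (gamma * r1 + eta * r2) / (r1 + r2)

# In the past (a << a0) the gamma-term dominates, so w -> gamma > -1;
# in the future (a >> a0) the eta-term dominates, so w -> eta < -1.
print(w_of_a(0.1), w_of_a(1.0), w_of_a(10.0))
```

The printed values move monotonically from the quintessence side through $w=-1$ into the phantom regime, mirroring the behaviour shown in Fig. \[fig:mod1\].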
To gain such an insight, we further consider the following dark energy EOS: $$\label{eq:eosgenmod1} A \rho + B p = (C \rho + D p)^{\alpha} \, ,$$ where $A$, $B$, $C$, $D$ and $\alpha$ are real parameters. The EOS (\[eq:eosgenmod1\]) is a generalization of (\[eq:eosdetmod1\]) and contains it as a special case. On the other hand, another choice of parameters leads to an EOS of the form $p=-\rho-K \rho^{\delta}$ [@PRDsing] which does not exhibit the CC boundary crossing. The generalized model (\[eq:eosgenmod1\]) exhibits the crossing only for some parameter values and is therefore suitable for the study of the necessary conditions for the crossing. The dark energy density can be expressed in terms of the parameter $w$, $$\label{eq:rhoexpr1} \rho = \frac{(C+ D w)^{\alpha/(1-\alpha)}}{(A + B w)^{1/(1-\alpha)}} \, ,$$ which leads to the equation of evolution of $w$ with the scale factor $a$: $$\label{eq:evolw1} \left( \frac{\alpha}{(F+w)(1+w)}-\frac{1}{(E+w)(1+w)} \right) dw = 3(\alpha - 1) \frac{da}{a} \, ,$$ where the abbreviations $E=A/B$ and $F=C/D$ have been introduced. A closer inspection of this equation reveals that there are several important values of $w$ determining its evolution: $w=-1$, $w=-F$ and $w=-E$. Whenever these values [*exist*]{} in the description of the problem, they represent boundaries which cannot be crossed at a finite scale factor value and can only be approached asymptotically during the evolution of the universe. This simple observation already signals that, in order to have the transition across the CC boundary, the term corresponding to the $w=-1$ boundary has to be removed from (\[eq:evolw1\]).
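This removal can be made concrete: expanding (\[eq:evolw1\]) in partial fractions, the residue of the $1/(1+w)$ pole is $\alpha/(F-1)-1/(E-1)$, which vanishes for $\alpha=(1-F)/(1-E)$. A small numerical sketch (with arbitrary illustrative values of $E$ and $F$, not taken from the text) confirms the cancellation:

```python
# Illustrative values only (not from the text):
E, F = 0.5, 2.0

def lhs(w, alpha):
    """Integrand on the left-hand side of the w-evolution equation."""
    return alpha / ((F + w) * (1 + w)) - 1.0 / ((E + w) * (1 + w))

alpha_cross = (1 - F) / (1 - E)     # here: -2.0

# Approaching w = -1, the generic integrand diverges like
# (alpha/(F-1) - 1/(E-1)) / (1 + w); at alpha_cross the pole cancels.
w = -1 + 1e-8
print(lhs(w, alpha=1.0) * (1 + w))          # finite, nonzero residue
print(lhs(w, alpha=alpha_cross) * (1 + w))  # ~ 0: the w = -1 boundary is removed
```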
The solution of (\[eq:evolw1\]) for the most interesting case when $E \neq -1$ and $F \neq -1$ has the form $$\label{eq:solw} \hspace{-1.5cm} \left| \frac{w+F}{w_{0}+F} \right|^{\alpha/(1-F)} \left| \frac{w+E}{w_{0}+E} \right|^{-1/(1-E)} \left| \frac{1+w}{1+w_{0}} \right|^{1/(1-E)-\alpha/(1-F)} = \left( \frac{a}{a_{0}} \right)^{3(\alpha-1)} \, .$$ This solution indicates that each of the potential boundaries can be removed by a suitable choice of the parameter $\alpha$. The boundary at $w=-F$ is removed when $\alpha=0$ and the boundary at $w=-E$ is removed when $\alpha \rightarrow \pm \infty$. The crossing of the CC boundary becomes possible with the choice $\alpha_{\mathrm{cross}}=(1-F)/(1-E)$, i.e. for this value of the parameter $\alpha$ the CC boundary is removed. The equation (\[eq:solw\]) then becomes $$\label{eq:solwcross} \left| \frac{w+F}{w_{0}+F} \right| \left| \frac{w+E}{w_{0}+E} \right|^{-1} = \left( \frac{a}{a_{0}} \right)^{3(E-F)} \, ,$$ which describes the transition from the $w>-1$ regime to the $w < -1$ regime. An additional insight into the crossing mechanism can be obtained if Eq. (\[eq:evolw1\]) is studied in the rearranged form: $$\label{eq:wstar} \frac{w+\frac{\alpha E - F}{\alpha - 1}}{(F+w)(E+w)(1+w)} dw = 3 \frac{da}{a} \, .$$ The numerator of the expression on the left-hand side can also be written as $w-w_{*}$, where $w_{*}=-(\alpha E-F)/(\alpha-1)$. For specific values of the parameter $\alpha$, the parameter $w_{*}$ can become equal to $-F$ (for $\alpha=0$), $-E$ (for $\alpha \rightarrow \pm \infty$) or $-1$ (for $\alpha_{\mathrm{cross}}=(1-F)/(1-E)$). Therefore, for a specific value of the parameter $\alpha$ the terms corresponding to some of the boundaries get cancelled. This cancellation is a mathematical description of the mechanism behind the CC boundary transition. A more general model of dark energy capable of transiting between the quintessence and phantom regimes can be constructed.
We consider a model with the following scaling of the dark energy density: $$\label{eq:densitymod2} \rho = \left( C_{1} \left( \frac{a}{a_{0}} \right)^{-3(1+\gamma)/b} + C_{2} \left( \frac{a}{a_{0}} \right)^{-3(1+\eta)/b} \right)^{b} \, .$$ This model can describe transitions of the CC boundary in both directions, i.e. from $w > -1$ to $w <-1$ and vice versa, depending on the sign of the parameter $b$, see Fig. \[fig:mod2\]. The dark energy pressure has the following form $$\label{eq:pressuremod2} p \rho^{(1-b)/b} = \gamma C_{1} \left( \frac{a}{a_{0}} \right)^{-3(1+\gamma)/b} + \eta C_{2} \left( \frac{a}{a_{0}} \right)^{-3(1+\eta)/b} \, .$$ From (\[eq:densitymod2\]) and (\[eq:pressuremod2\]) the dark energy EOS follows directly: $$\label{eq:eosdetmod2} \frac{p -\eta \rho}{(\gamma-\eta) C_{1}} = \rho^{((1-b)(\gamma - \eta))/(b(1+\eta))} \left( \frac{\gamma \rho - p}{(\gamma-\eta) C_{2}} \right)^{(1+\gamma)/(1+\eta)} \, .$$ Starting from this explicitly constructed model, it is interesting to study its generalization in the form $$\label{eq:eosmodel2} A \rho + B p = (C \rho + D p)^{\alpha} (M \rho + N p)^{\beta} \, ,$$ where $A$, $B$, $C$, $D$, $M$, $N$, $\alpha$ and $\beta$ are real coefficients and study the conditions for the CC boundary crossing within this generalization. 
Following a similar procedure to that used for model (\[eq:eosgenmod1\]), the evolution law for the dark energy parameter of EOS acquires the form $$\label{eq:wmod2} \left( \frac{\alpha D}{C+D w}+\frac{\beta N}{M+N w}- \frac{B}{A+B w} \right) \frac{dw}{1+w} = 3(\alpha+\beta - 1) \frac{da}{a} \, .$$ The solution of this equation in the most interesting case $A \neq B$, $C \neq D$ and $M \neq N$ is $$\begin{aligned} \label{eq:solwmod2} & &\hspace{-2.0cm} \left| \frac{C+D w}{C+D w_{0}} \right|^{-\alpha D/(C-D)} \left| \frac{M+N w}{M+N w_{0}} \right|^{-\beta N/(M-N)} \left| \frac{1+w}{1+w_{0}} \right|^{\alpha D/(C-D)+\beta N/(M-N)-B/(A-B)} \nonumber \\ & \times & \left| \frac{A+B w}{A + B w_{0}} \right|^{B/(A-B)} = \left( \frac{a}{a_{0}} \right)^{3(\alpha+\beta-1)} \, . \end{aligned}$$ This solution for the scaling of $w$ with $a$ reveals that any of the boundaries $-A/B$, $-C/D$, $-M/N$ or $-1$ can be removed by an appropriate choice of the parameters $\alpha$ and/or $\beta$. Therefore, the crossing of the CC boundary is possible in this generalized model if the exponent of $|1+w|$ vanishes. This requirement can be expressed as a condition on one of the parameters $\alpha$ or $\beta$. Using the insight gained from the studies of the models which are explicitly constructed to exhibit the transition and their generalizations, it is possible to study a model with a nontrivial implicitly defined equation of state of the form $$\label{eq:ton} A \rho^{2n+1} + B p^{2n+1} = (C \rho^{2n+1} + D p^{2n+1})^{\alpha} \,$$ and to show that the dark energy model characterized by this EOS is capable of describing the CC boundary crossing. To demonstrate the possibility of the aforementioned transition within the model (\[eq:ton\]) we show that a suitable choice of the parameter $\alpha$ can remove the $w=-1$ boundary from the problem.
The evolution equation for the parameter of EOS is $$\label{eq:wforn} \frac{w^{2n+1}+(\alpha E -F)/(\alpha-1)}{(F+w^{2n+1})(E+w^{2n+1})} \frac{w^{2n}}{1+w} dw = 3 \frac{da}{a} \, ,$$ where $E=A/B$ and $F=C/D$. Choosing $(\alpha E - F)/(\alpha -1) = 1$, the term corresponding to the CC boundary is removed since $$\label{eq:cancel} \frac{w^{2n+1}+1}{w+1} = \xi(w) = \sum_{l=0}^{2n} (-w)^{l} \,$$ and $\xi(w)$ has no real roots. The equation (\[eq:wforn\]) acquires the form $$\label{eq:cancel2} \frac{\xi(w)}{(F+w^{2n+1})(E+w^{2n+1})} w^{2n} dw = 3 \frac{da}{a} \, ,$$ which describes smooth transitions between $w=-E^{1/(2 n +1)}$ and $w=-F^{1/(2 n +1)}$. Finally, let us study the possibility of the CC boundary crossing in the dark energy model defined by the following EOS: $$\label{eq:eosexp} \rho= a e^{b p/\rho} (c - p/\rho)^{\alpha} \, .$$ The equation for the variation of $w$ with $a$ is $$\label{eq:wexp} \left( b -\frac{\alpha}{1+c} \right) \frac{dw}{1+w} - \frac{\alpha}{1+c} \frac{dw}{c-w} = -3 \frac{da}{a} \, .$$ The choice $b=\alpha/(1+c)$ removes the CC boundary and results in the following solution for the dark energy parameter of EOS: $$\label{eq:solexp} w=c-(c-w_{0}) \left( \frac{a}{a_{0}} \right)^{-3 (1+c)/\alpha} \, ,$$ which describes the crossing of the CC boundary. The choice of parameters $\alpha < 0$, $w_{0} < -1$ and $-1 < c < -1/3$ yields a transition from the quintessence regime to the phantom regime at a positive redshift. In the generalized models of this paper used to study the CC boundary crossing, special conditions need to be met for the crossing to occur. Namely, one of the model parameters needs to acquire a value determined by the other parameter values. In a sense, if the parametric space of a model is $D$-dimensional, the set of parameter values for which the transition occurs is $(D-1)$-dimensional.
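The closed-form solution (\[eq:solexp\]) makes the claim about a crossing at positive redshift easy to verify numerically; the parameter values below are again illustrative assumptions only, not taken from the text:

```python
# Illustrative parameters satisfying alpha < 0, w0 < -1, -1 < c < -1/3:
c, w0, alpha = -0.8, -1.1, -1.0

def w_sol(a, a0=1.0):
    """Closed-form w(a) of Eq. (solexp), valid after choosing b = alpha/(1+c)."""
    return c - (c - w0) * (a / a0) ** (-3 * (1 + c) / alpha)

# Crossing scale factor from w(a*) = -1:
a_star = ((c + 1) / (c - w0)) ** (-alpha / (3 * (1 + c)))
z_star = 1.0 / a_star - 1.0               # positive redshift, as claimed

print(z_star)                             # > 0
print(w_sol(0.5 * a_star), w_sol(1.0))    # quintessence in the past, phantom today
```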
Therefore, it has been shown that the CC boundary crossing can be described in terms of EOS, but that the model parameters need to be chosen in a special way. However, a more extensive analysis of the dark energy models with an implicitly defined EOS is needed to verify whether this is a general feature of these models. In conclusion, the dark energy transition between the quintessence and the phantom regimes is studied using the dark energy EOS. It is found that in models which exhibit the CC boundary crossing, the EOS is implicitly defined. Within the generalized models the crossing is possible when there is a cancellation of the terms corresponding to the CC boundary. The CC boundary crossing requires a special choice of model parameters and therefore the study of its aspects might be useful in discriminating the crossing in noninteracting dark energy models from the cosmological models where the CC boundary crossing is an effective phenomenon, see e.g. [@SiS].

[**Acknowledgements.**]{} The author acknowledges the support of the Secretaría de Estado de Universidades e Investigación of the Ministerio de Educación y Ciencia of Spain within the program “Ayudas para movilidad de Profesores de Universidad e Investigadores españoles y extranjeros". This work has been supported in part by MEC and FEDER under project 2004-04582-C02-01 and by the Dep. de Recerca de la Generalitat de Catalunya under contract CIRIT GC 2001SGR-00065. The author would like to thank the Departament E.C.M. of the Universitat de Barcelona for the hospitality.

References {#references .unnumbered}
==========

[10]{} A.G. Riess et al., Astron. J. 116 (1998) 1009; S. Perlmutter et al., Astrophys. J. 517 (1999) 565; A.G. Riess et al., Astrophys. J. 607 (2004) 665; R.A. Knop et al., Astrophys. J. 598 (2003) 102. P. de Bernardis et al., Nature 404 (2000) 955; A.D. Miller et al., Astrophys. J. Lett. 524 (1999) L1; S. Hanany et al., Astrophys. J. Lett. 545 (2000) L5; N.W. Halverson et al., Astrophys. J.
568 (2002) 38; B.S. Mason et al., Astrophys. J. 591 (2003) 540; D.N. Spergel et al., Astrophys. J. Suppl. 148 (2003) 175; L. Page et al., Astrophys. J. Suppl. 148 (2003) 233. R. Scranton et al., astro-ph/0307335; M. Tegmark et al., Phys. Rev. D 69 (2004) 103501. S. Weinberg, Rev. Mod. Phys. 61 (1989) 1; N. Straumann, astro-ph/0203330; T. Padmanabhan, Class. Quant. Grav. 22 (2005) L107. B. Ratra, P.J.E. Peebles, Phys. Rev. D 37 (1988) 3406; P.J.E. Peebles, B. Ratra, Astrophys. J. 325 (1988) L17; C. Wetterich, Nucl. Phys. B 302 (1988) 668; R.R. Caldwell, R. Dave, P.J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582; I. Zlatev, L. Wang, P.J. Steinhardt, Phys. Rev. Lett. 82 (1999) 896; C. Armendariz-Picon, T. Damour, V. Mukhanov, Phys. Lett. B 458 (1999) 209; T. Chiba, T. Okabe, M. Yamaguchi, Phys. Rev. D 62 (2000) 023511. A. Yu. Kamenshchik, U. Moschella, V. Pasquier, Phys. Lett. B 511 (2001) 265; N. Bilic, G.B. Tupper, R.D. Viollier, Phys. Lett. B 535 (2002) 17; R.R. Caldwell, Phys. Lett. B 545 (2002) 23; P. Singh, M. Sami, N. Dadhich, Phys. Rev. D 68 (2003) 023522; M. Hoffman, M. Trodden, Phys. Rev. D 68 (2003) 023509; R.R. Caldwell, M. Kamionkowski, N.N. Weinberg, Phys. Rev. Lett. 91 (2003) 071301; S. Nojiri, S.D. Odintsov, Phys. Rev. D 70 (2004) 103522; B. McInnes, JHEP 0208 (2002) 029; H. Stefancic, Phys. Lett. B 586 (2004) 5; H. Stefancic, Eur. Phys. J. C 36 (2004) 523; U. Alam, V. Sahni, A.A. Starobinsky, JCAP 0406 (2004) 008; J.S. Alcaniz, Phys. Rev. D 69 (2004) 083521; S. Hannestad, E. Mortsell, JCAP 0409 (2004) 001; A. Upadhye, M. Ishak, P.J. Steinhardt, Phys. Rev. D 72 (2005) 063501; H.K. Jassal, J.S. Bagla, T. Padmanabhan, Mon. Not. Roy. Astron. Soc. 356 (2005) L11-L16; W. Hu, Phys. Rev. D 71 (2005) 047301; Z.-K. Guo, Y.-S. Piao, X. Zhang, Y.-Z. Zhang, Phys. Lett. B 608 (2005) 177; X.-F. Zhang, H. Li, Y.-S. Piao, X. Zhang, astro-ph/0501652; Y.-H. Wei, Y. Tian, Class. Quant. Grav. 21 (2004) 5347; H. Wei, R.-G. Cai, Class. Quant. Grav. 22 (2005) 3189; B.
Feng, X.-L. Wang, X. Zhang, Phys. Lett. B 607 (2005) 35; B. Feng, M. Li, Y.-S. Piao, X. Zhang, astro-ph/0407432; J.-Q. Xia, B. Feng, X. Zhang, Mod. Phys. Lett. A 20 (2005) 2409; A. Vikman, Phys. Rev. D 71 (2005) 023515; M. Li, B. Feng, X. Zhang, hep-ph/0503268; R.R. Caldwell, M. Doran, astro-ph/0501104; I. Brevik, O. Gorbunova, gr-qc/0504001; S. Nojiri, S.D. Odintsov, Phys. Rev. D 72 (2005) 023003; S. Nojiri, S.D. Odintsov, hep-th/0506212; I. Ya. Aref’eva, A.S. Koshelev, S.Yu. Vernov, Phys. Rev. D 72 (2005) 064017; G.-B. Zhao, J.-Q. Xia, M. Li, B. Feng, X. Zhang, astro-ph/0507482; S. Capozziello, V.F. Cardone, E. Elizalde, S. Nojiri, S.D. Odintsov, astro-ph/0508350. H. Stefancic, Phys. Rev. D 71 (2005) 124036. H. Stefancic, Phys. Rev. D 71 (2005) 084024; S. Nojiri, S.D. Odintsov, S. Tsujikawa, Phys. Rev. D 71 (2005) 063004. J. Sola, H. Stefancic, Phys. Lett. B 624 (2005) 147; J. Sola, H. Stefancic, astro-ph/0507110.

[^1]: On leave of absence from the Theoretical Physics Division, Rudjer Bošković Institute, Zagreb, Croatia

[^2]: Since dark energy is the only component discussed in this paper, the subscripts $d$ will be suppressed in the following.
---
abstract: 'Presently about 3000 different nuclei are known, with about another 3000-4000 predicted to exist. A review of the discovery of the nuclei, the present status and the possibilities for future discoveries is presented.'
address: |
  National Superconducting Cyclotron Laboratory and\
  Department of Physics & Astronomy,\
  Michigan State University, East Lansing, MI 48824, USA
author:
- 'M. Thoennessen'
bibliography:
- '../isotope-discovery-references.bib'
---

Introduction {#sec:intro}
============

The strong force, responsible for the binding of nucleons, is one of the fundamental forces. In order to understand this force it is critical to know which combinations of neutrons and protons can form a bound nuclear system. Even now, after more than 100 years of nuclear physics research, this information is only known for the lightest elements. Thus the search for new nuclides with more and more extreme neutron-to-proton ratios continues to be important. The discovery of new nuclides is also the first step in exploring and measuring any properties of these nuclides. Over the years more and more sophisticated detectors and powerful accelerators were developed to push the limit of nuclear knowledge further and further. At the present time about 3000 nuclides are known. Recently it was calculated that about 7000 nuclides are bound with respect to neutron or proton emission [@2012Erl01]. In addition, there are neutron- and proton-unbound nuclides which can have significantly shorter lifetimes or appear only for a very short time as a resonance. The properties of these nuclides beyond the “driplines” can also be studied with special techniques [@2012Bau01; @2012Pfu01] and they are especially interesting because they represent the extreme limits for each element. The present review gives a brief historical overview followed by a summary of the present status and a discussion of future perspectives for the discovery of new nuclides.
Throughout the article the word nuclide is used rather than the widely used but technically incorrect term isotope. The term isotope is only appropriate when referring to a nuclide of a specific element. Historical Overview {#sec:history} =================== It can be argued that the field of nuclear physics began with the discovery of radioactivity by Becquerel in 1896 [@1896Bec01] who observed the radioactive decay of what was later determined to be $^{238}$U [@1923Sod01; @1931Ast02]. Subsequently, polonium ($^{210}$Po [@1898Cur03]), and radium ($^{226}$Ra [@1898Cur02]) were observed as emitting radioactivity, before Rutherford discovered the radioactive decay law and determined the half-life of radon ($^{220}$Rn [@1900Rut01]). He was also the first to propose the radioactive decay chains and the connections between the different active substances [@1905Rut01] as well as the identification of the $\alpha$-particle: “...we may conclude that an $\alpha$-particle is a helium atom, or, to be more precise, the $\alpha$-particle, after it has lost its positive charge, is a helium atom” [@1908Rut01]. The distinction of different isotopes for a given element was discovered only in 1913 independently by Fajans [@1913Faj03] and Soddy [@1913Sod01] explaining the relationship of the radioactive chains. Soddy coined the name “isotope” from the Greek words “isos” (same) and “topos” (place) meaning that two different “isotopes” occupy the same position in the periodic table [@1913Sod02]. The first clear identification of two isotopes of an element other than in the radioactive decay chains was reported by Thomson in 1913 using the positive-ray method: “There can, therefore, I think, be little doubt that what has been called neon is not a simple gas but a mixture of two gases, one of which has an atomic weight about 20 and the other about 22” [@1913Tho01]. 
Since this first step, continuous innovation in experimental techniques, utilizing the new knowledge gained about nuclides, led to the discovery of additional nuclides. This drive to discover more and more exotic nuclides has moved the field forward up to the present day. Figure \[f:by-year\] illustrates this development, showing the number of nuclides discovered per year (top) and the integral number of discovered nuclides (bottom). In addition to the total number of nuclides (black, solid lines), the figure also shows the number of near-stable (red, short-dashed lines), neutron-deficient (purple, dot-dashed lines), neutron-rich (green, long-dashed lines) and transuranium (blue, dotted lines) nuclides. Near-stable nuclides are all nuclides between the most neutron-deficient and neutron-rich stable isotopes of a given element. Lighter and heavier radioactive isotopes of the elements are then classified as neutron-deficient and neutron-rich, respectively. The figure shows that the rate of discovery was not smooth; the peaks can be directly related to the development of new experimental techniques, as explained in the next subsections. ![Discovery of nuclides as a function of year. The top figure shows the 10-year running average of the number of nuclides discovered per year while the bottom figure shows the cumulative number. The total number of nuclides (black, solid lines) is plotted together with the separate counts for near-stable (red, short-dashed lines), neutron-deficient (purple, dot-dashed lines), neutron-rich (green, long-dashed lines) and transuranium (blue, dotted lines) nuclides (see text for explanation). 
[]{data-label="f:by-year"}](rate-cummulative.pdf) Mass spectroscopy of stable nuclides ------------------------------------ In 1908, Rutherford and Geiger had identified the $\alpha$-particle as helium [@1908Rut01] and in 1913 Thomson accepted, in addition to neon with mass number 20, the presence of a separate neon substance with mass number 22, which represented the beginning of mass spectroscopic methods to identify isotopes as separate entities of the same element with different mass numbers [@1913Tho01]. The first “mass spectra” were measured by Aston when he added focussing elements to his first “positive ray spectrograph” in 1919 [@1919Ast02]. From 1919 to 1930 the number of known identified nuclides jumped from 40 to about 200, mostly due to Aston’s work. The development of more sophisticated mass spectrographs by Aston [@1927Ast03; @1930Ast01] and others [@1932Bai02; @1935Dem01; @1937Nie02] led to the discovery of essentially most of the stable nuclides [@1942Ast01]. Nuclear reactions and first accelerators ---------------------------------------- In 1919 Rutherford discovered nuclear transmutation: “From the results so far obtained it is difficult to avoid the conclusion that the long-range atoms arising from collision of $\alpha$ particles with nitrogen are not nitrogen atoms but probably atoms of hydrogen, or atoms of mass 2” [@1919Rut01]. He apparently observed the reaction $^{14}$N($\alpha$,p); however, it took six years before Blackett identified the reaction residue as the new nuclide $^{17}$O [@1925Bla01]. It took another seven years before in 1932 the discovery of the neutron by Chadwick [@1932Cha01] and the first successful construction of a particle accelerator by Cockcroft and Walton [@1932Coc03] led to the production of many new nuclides by nuclear reactions. 
Cockcroft and Walton were able to prove the production of $^8$Be using their accelerator: “...the lithium isotope of mass 7 occasionally captures a proton and the resulting nucleus of mass 8 breaks into two $\alpha$-particles...” [@1932Coc01]; Harkins, Gans and Newson produced the first new nuclide ($^{16}$N) induced by neutrons ($^{19}$F(n,$\alpha$)) [@1933Har01] and in 1934, I. Curie and F. Joliot observed artificially produced radioactivity ($^{13}$N and $^{30}$P)[^1] in ($\alpha$,n) reactions for the first time [@1934Cur01]. Also in 1934, Fermi claimed the discovery of a transuranium element in the neutron bombardment of uranium [@1934Fer02]. Although the possibility of fission was immediately mentioned by Noddack: “It is conceivable that \[...\] these nuclei decay into several larger pieces” [@1934Nod01], even with mounting evidence in further experiments, Meitner, Hahn, and Strassmann did not take this step: “These results are hard to understand within the current understanding of nuclei.” [@1937Mei01] and “As chemists we should rename Ra, Ac, Th to Ba, La, Ce. As ‘nuclear chemists’ close to physics, we cannot take this step, because it contradicts all present knowledge of nuclear physics.” [@1939Hah02]. After Meitner and Frisch correctly interpreted the data as fission in 1939 [@1939Mei01], Hahn and Strassmann identified $^{140}$Ba [@1939Hah01] in the neutron induced fission of uranium. The first transuranium nuclide ($^{239}$Np) was then discovered a year later by McMillan and Abelson in neutron capture reactions on $^{238}$U [@1940McM01]. Light particle induced reactions using $\alpha$-sources, neutron irradiation, fission, and continuously improved particle accelerators expanded the chart of nuclei towards more neutron-deficient, neutron-rich, and further transuranium nuclides for the next two decades. The number of nuclides produced every year continued to increase only interrupted by World War II. 
By 1950 the existing methods had reached their limits and the number of new isotopes began to drop. New technical developments were necessary to reach isotopes further removed from stability. Heavy-ion fusion evaporation reactions -------------------------------------- Although Alvarez had already demonstrated in 1940 that it was possible to accelerate ions heavier than helium in the Berkeley 37-inch cyclotron [@1940Alv02], the next major breakthrough came in 1950 when Miller [*et al.*]{} successfully accelerated detectable intensities of completely stripped carbon nuclei in the Berkeley 60-inch cyclotron [@1950Mil01]. Less than two months later Ghiorso [*et al.*]{} reported the discovery of $^{246}$Cf in the heavy-ion fusion evaporation reaction $^{238}$U($^{12}$C,4n) [@1951Ghi02]. This represented the first correct identification of a californium nuclide, because the original discovery claim for the element californium reported the observation of $^{244}$Cf [@1950Tho07], which was later reassigned to $^{245}$Cf [@1956Che02]. With continuous increases of beam energies and intensities, fusion-evaporation reactions became the dominant tool to populate and study neutron-deficient nuclei. The peak in the overall production rate of new nuclides around 1960 is predominantly due to the production of new neutron-deficient nuclides and new super-heavy elements. Fusion-evaporation reactions are presently still the only way to produce super-heavy elements. The discovery of new elements relies on even further improvements in beam intensities and innovations in detector technology. Target and projectile fragmentation ----------------------------------- The significant beam energy increases of light-ion as well as heavy-ion accelerators opened up new ways to expand the nuclear chart. 
In the spallation or fragmentation of a uranium target bombarded with 5.3-GeV protons, Poskanzer [*et al.*]{} were able to identify several new neutron-rich light isotopes for the first time ($^{11}$Li, $^{12}$Be, and $^{14,15}$B) in 1966 [@1966Pos01]. Target fragmentation reactions were effectively utilized to produce new neutron-rich nuclides (see for example Ref. [@1969Han01]) using the ISOL (Isotope Separation On-Line) method. This technique had already been developed 15 years earlier for the fission of uranium by Kofoed-Hansen and Nielsen, who discovered $^{90}$Kr and $^{90,91}$Rb [@1951Kof01]. The inverse reaction, the fragmentation of heavy projectiles on light-mass targets, was successfully applied to produce new nuclides for the first time in 1979 by bombarding a beryllium target with 205 MeV/nucleon $^{40}$Ar ions [@1979Sym01]. Projectile fragmentation began to dominate the production of especially neutron-rich nuclei starting in the late 1980s when dedicated fragment separators came online. For an overview of the various facilities, for example the LISE3 spectrometer at GANIL [@1991Mue01], the RIPS separator at RIKEN [@1992Kub01], the A1200 and A1900 separators at NSCL [@1991She01; @2003Mor03], and the FRS device at GSI [@1992Gei01], see Ref. [@1998Mor01]. In addition to these separators, a significant number of nuclides was discovered at storage rings; see for example Refs. [@2008Fra01; @2010Che01]. The most recent increase in the production rate of new nuclides is predominantly due to new technical advances at GSI [@2010Alv01; @2010Che01; @2011Jan01] and the new next-generation radioactive beam facility RIBF [@2007Yan01] with the separator BigRIPS [@2008Ohn01] at RIKEN. 
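The fragment separators mentioned above select nuclides according to their magnetic rigidity, $B\rho = p/q$ (momentum over charge). As a minimal illustrative sketch (not part of the review; the constants and the fully stripped assumption are mine), the rigidity of the 205 MeV/nucleon $^{40}$Ar beam quoted above can be estimated as follows:

```python
# Illustrative sketch: magnetic rigidity B*rho = p/q of a fully stripped ion,
# with p the relativistic momentum. Mass defects are neglected (A * amu is used),
# which is a simplifying assumption for this example.

AMU_MEV = 931.494                 # atomic mass unit in MeV/c^2
MEV_PER_TM = 299.792458           # p [MeV/c] per unit charge per T*m of rigidity

def brho(a, z, energy_per_nucleon_mev):
    """Magnetic rigidity (T*m) of a fully stripped ion (A, Z) at the
    given kinetic energy per nucleon (MeV/u)."""
    mass = a * AMU_MEV                         # rest mass, MeV/c^2
    etot = mass + a * energy_per_nucleon_mev   # total energy, MeV
    p = (etot**2 - mass**2) ** 0.5             # relativistic momentum, MeV/c
    return p / (MEV_PER_TM * z)

# The 205 MeV/nucleon 40Ar beam of the 1979 fragmentation experiment:
print(round(brho(40, 18, 205.0), 2), "T*m")
```

At a fixed velocity, the rigidity of fully stripped fragments scales approximately with A/Z, which is why the separator setting maps onto the A/Z ranges discussed later in this review.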
Discoveries of isotopes, isotones, and isobars ---------------------------------------------- ![Discoveries of isotopes (top), isotones (middle), and isobars (bottom).[]{data-label="f:iii"}](iii.pdf) It is interesting to follow the discovery of nuclides over the years as a function of isotopes (Z = constant), isotones (N = constant) and isobars (A = constant) as shown in the top, middle, and bottom panels of Figure \[f:iii\], respectively. Unique characteristics of isotopes of elements from the radioactive decay chains were determined around 1900, and although the concept of isotopes was not established at that time, these observations can be taken as the first identification of isotopes of these elements. For most of the elements up to Z = 60 the first isotope was discovered in the early 1920s with the exception of the transition metals of the 5$^{th}$ period between niobium and palladium, which were identified for the first time in the 1930s. Also, as mentioned earlier, isotopes of helium ($^4$He or the $\alpha$-particle [@1908Rut01]) and neon ($^{20,22}$Ne [@1913Tho01]) were discovered earlier and the neutron was discovered in 1932 [@1932Cha01]. Isotopes of the remaining stable elements were identified by the late 1930s. The last four missing elements below uranium were discovered by the identification of their specific isotopes. They were technetium (Z = 43) in 1938 [@1938Seg01], francium (Z = 87) in 1939 [@1939Per01], astatine (Z = 85) in 1940 [@1940Cor01], and promethium (Z = 61) in 1947 [@1947Mar01]. Transuranium elements were then discovered starting in 1940 with the identification of neptunium ($^{239}$Np) [@1940McM01] at an approximately constant rate of about one element every three years (also see Figure \[f:seaborg-elements\]). Plotting the year of discovery as a function of isotones reveals another pattern. 
In the light mass region – approximately between chlorine and zirconium (N $\sim$ 20 – 50) – the even-N isotones were discovered around 1920 while it took about another 15 years before the odd-N isotones were identified. This is due to the significantly smaller abundances of the even-Z/odd-N isotones in this mass region. In contrast, the abundances are more equally distributed in the lanthanide region (N $\sim$ 80 – 110). While the pace of discovery of new elements was fairly constant, the discovery of isotones displays a different pattern. Although intense neutron irradiation of plutonium in the Idaho Materials Test Reactor did not discover any new elements, the successive neutron capture reactions produced many new isotones. In 1954 alone, seven new isotones (N = 150 – 156) were discovered. However, in the following 40 years only one additional isotone was added per decade. At Dubna, hot fusion reactions were used to populate new elements, leading to the discovery of 15 new isotones within one year (2004) up to the heaviest currently known isotone of N = 177. The recent discovery of elements 117 and 118 did not push the isotone limit any further. It should be mentioned that the isotone N = 164 has not yet been identified (see also Section \[subsec:heavy\]). The pattern of the discovery as a function of mass number up to A $\sim$ 200 shown in the bottom panel of Figure \[f:iii\] mirrors approximately the pattern of the isotones. Until 1937, when Meitner, Hahn, and Strassmann [@1937Mei01] discovered $^{239}$U, the radioactivity discovered by Becquerel in 1896 [@1896Bec01], later attributed to $^{238}$U, represented the heaviest known nuclide. The missing (4n+3) radioactive decay chain observed in 1943 by Hagemann [*et al.*]{} [@1947Hag01] filled in the gaps at masses 213, 217, 221, 225, and 229. Currently the heaviest element (Z = 118) also represents the heaviest nuclide (A = 294). 
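The relations used throughout this subsection are purely combinatorial: two nuclides are isotopes if they share Z, isotones if they share N, and isobars if they share A = Z + N. A minimal sketch (illustrative code, not from the review) makes the definitions explicit:

```python
# Classify the relation between two nuclides given as (Z, N) tuples:
# isotopes share Z, isotones share N, isobars share A = Z + N.

def relation(nuclide_a, nuclide_b):
    """Return the list of relations holding between two (Z, N) nuclides."""
    za, na = nuclide_a
    zb, nb = nuclide_b
    labels = []
    if za == zb:
        labels.append("isotopes")
    if na == nb:
        labels.append("isotones")
    if za + na == zb + nb:
        labels.append("isobars")
    return labels or ["none"]

# Thomson's two neon substances, 20Ne and 22Ne, share Z = 10:
print(relation((10, 10), (10, 12)))
# 14C and 14N share only the mass number A = 14:
print(relation((6, 8), (7, 7)))
```

Note that an identical pair of nuclides trivially satisfies all three relations, so the three panels of Figure \[f:iii\] are simply three different projections of the same discovery data.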
Current Status {#sec:status} ============== Recently a comprehensive overview of the discovery of all nuclides was completed [@2012Tho01]. Details of the discovery of 3067 nuclides were described in a series of articles beginning in 2009 [@2009Gin01] with the latest ones being currently published. During this time another 38 nuclides were discovered for a total of 3105 nuclides observed by the end of 2011. Table \[t:elements\] lists the total number and the range of currently known isotopes for each element. It should be mentioned that for some elements not all isotopes between the most neutron-deficient and the most neutron-rich isotopes have been observed. In light neutron-rich nuclei these are $^{21}$C, $^{30}$F, $^{33}$Ne, $^{36}$Na, and $^{39}$Mg. The cases in the neutron-deficient medium-mass and the superheavy mass region are discussed in Sections \[subsec:proton\] and \[subsec:heavy\], respectively. The table also lists the year of the first and most recent discovery as well as the reference for the detailed documentation of the discovery. While the recognition for the discovery of a new element is well established with strict criteria set by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) [@1976Har02; @1991IUP01] the discovery of the different isotopes for a given element is not well defined [@2004Tho01]. The nuclides included in Table \[t:elements\] had to be (1) clearly identified, either through decay-curves and relationships to other known nuclides, particle or $\gamma$-ray spectra, or unique mass and element identification, and (2) published in a refereed journal. In order to avoid setting an arbitrary lifetime limit for the definition of the existence of a nuclide, particle-unbound nuclides with only short-lived resonance states were included. Isomers were not considered separate nuclides. [@lrrrcrrrl@]{} \ Element & Z & No. of Iso. 
& Lightest & Heaviest & First & Last & Ref.\ \ \ \ Element & Z & No. of Iso. & Lightest & Heaviest & First & Last & Ref.\ \ Neutron(s) & 0 & 2 & 1 & 2 & 1932 & 1965 & [@2012Tho01]\ Hydrogen & 1 & 7 & 1 & 7 & 1920 & 2003 & [@2012Tho01]\ Helium & 2 & 9 & 2 & 10 & 1908 & 1994 & [@2012Tho01]\ Lithium & 3 & 10 & 4 & 13 & 1921 & 2008 & [@2012Tho01]\ Beryllium & 4 & 9 & 6 & 14 & 1921 & 1983 & [@2012Tho01]\ Boron & 5 & 13 & 7 & 19 & 1920 & 2010 & [@2012Tho01]\ Carbon & 6 & 14 & 8 & 22 & 1919 & 1986 & [@2012Tho01]\ Nitrogen & 7 & 14 & 10 & 23 & 1920 & 2002 & [@2012Tho01]\ Oxygen & 8 & 14 & 12 & 25 & 1919 & 2008 & [@2012Tho01]\ Fluorine & 9 & 16 & 14 & 31 & 1920 & 2010 & [@2012Tho01]\ Neon & 10 & 18 & 16 & 34 & 1913 & 2002 & [@2012Tho01]\ Sodium & 11 & 19 & 18 & 37 & 1921 & 2004 & [@2012Tho02]\ Magnesium & 12 & 21 & 19 & 40 & 1920 & 2007 & [@2012Tho02]\ Aluminum & 13 & 22 & 22 & 43 & 1922 & 2007 & [@2012Tho02]\ Silicon & 14 & 23 & 22 & 44 & 1920 & 2007 & [@2012Tho02]\ Phosphorus & 15 & 21 & 26 & 46 & 1920 & 1990 & [@2012Tho02]\ Sulfur & 16 & 22 & 27 & 48 & 1920 & 1990 & [@2012Tho02]\ Chlorine & 17 & 21 & 31 & 51 & 1919 & 2009 & [@2012Tho02]\ Argon & 18 & 23 & 31 & 53 & 1920 & 2009 & [@2012Tho02]\ Potassium & 19 & 22 & 35 & 56 & 1921 & 2009 & [@2012Tho02]\ Calcium & 20 & 24 & 35 & 58 & 1922 & 2009 & [@2011Amo01]\ Scandium & 21 & 23 & 39 & 61 & 1923 & 2009 & [@2011Mei01]\ Titanium & 22 & 25 & 39 & 63 & 1923 & 2009 & [@2011Mei01]\ Vanadium & 23 & 24 & 43 & 66 & 1923 & 2009 & [@2010Sho02]\ Chromium & 24 & 27 & 42 & 68 & 1923 & 2009 & [@2012Gar01]\ Manganese & 25 & 26 & 46 & 71 & 1923 & 2010 & [@2012Gar01]\ Iron & 26 & 30 & 45 & 74 & 1922 & 2010 & [@2010Sch01]\ Cobalt & 27 & 27 & 50 & 76 & 1923 & 2010 & [@2010Szy01]\ Nickel & 28 & 32 & 48 & 79 & 1921 & 2010 & [@2012Gar01]\ Copper & 29 & 28 & 55 & 82 & 1923 & 2010 & [@2012Gar01]\ Zinc & 30 & 32 & 54 & 85 & 1922 & 2010 & [@2012Gro01]\ Gallium & 31 & 28 & 60 & 87 & 1923 & 2010 & [@2012Gro02]\ Germanium & 32 & 31 & 60 & 90 & 1923 & 
2010 & [@2012Gro02]\ Arsenic & 33 & 29 & 64 & 92 & 1920 & 1997 & [@2010Sho03]\ Selenium & 34 & 32 & 64 & 95 & 1922 & 2010 & [@2012Gro01]\ Bromine & 35 & 30 & 69 & 98 & 1920 & 2011 & [@2012Gro01]\ Krypton & 36 & 33 & 69 & 101 & 1920 & 2010 & [@2010Hei01]\ Rubidium & 37 & 31 & 73 & 103 & 1921 & 2010 & [@2012Par01]\ Strontium & 38 & 35 & 73 & 107 & 1923 & 2010 & [@2012Par01]\ Yttrium & 39 & 34 & 76 & 109 & 1923 & 2010 & [@2012Nys01]\ Zirconium & 40 & 35 & 78 & 112 & 1924 & 2010 & [@2012Nys01]\ Niobium & 41 & 34 & 82 & 115 & 1932 & 2010 & [@2012Nys01]\ Molybdenum & 42 & 35 & 83 & 117 & 1930 & 2010 & [@2012Par01]\ Technetium & 43 & 35 & 86 & 120 & 1938 & 2010 & [@2012Nys01]\ Ruthenium & 44 & 38 & 87 & 124 & 1931 & 2010 & [@2012Nys01]\ Rhodium & 45 & 38 & 89 & 126 & 1934 & 2010 & [@2012Par01]\ Palladium & 46 & 38 & 91 & 128 & 1935 & 2010 & [@2013Kat01]\ Silver & 47 & 38 & 93 & 130 & 1923 & 2000 & [@2010Sch03]\ Cadmium & 48 & 38 & 96 & 133 & 1924 & 2010 & [@2010Amo01]\ Indium & 49 & 38 & 98 & 135 & 1924 & 2002 & [@2011Amo01]\ Tin & 50 & 39 & 100 & 138 & 1922 & 2010 & [@2011Amo01]\ Antimony & 51 & 38 & 103 & 140 & 1922 & 2010 & [@2013Kat01]\ Tellurium & 52 & 39 & 105 & 143 & 1924 & 2010 & [@2013Kat01]\ Iodine & 53 & 38 & 108 & 145 & 1920 & 2010 & [@2013Kat01]\ Xenon & 54 & 40 & 109 & 148 & 1920 & 2010 & [@2013Kat01]\ Cesium & 55 & 41 & 112 & 152 & 1921 & 1994 & [@2012May01]\ Barium & 56 & 39 & 114 & 152 & 1924 & 2010 & [@2010Sho01]\ Lanthanum & 57 & 35 & 117 & 153 & 1924 & 2001 & [@2012May01]\ Cerium & 58 & 35 & 121 & 155 & 1924 & 2005 & [@2009Gin01]\ Praseodymium & 59 & 32 & 121 & 154 & 1924 & 2005 & [@2012May01]\ Neodymium & 60 & 31 & 125 & 156 & 1924 & 1999 & [@2012Gro01]\ Promethium & 61 & 32 & 128 & 159 & 1947 & 2005 & [@2012May01]\ Samarium & 62 & 34 & 129 & 162 & 1933 & 2005 & [@2013May01]\ Europium & 63 & 35 & 130 & 166 & 1933 & 2008 & [@2013May01]\ Gadolinium & 64 & 31 & 135 & 166 & 1933 & 2005 & [@2013May01]\ Terbium & 65 & 31 & 135 & 168 & 1933 & 2004 & 
[@2013May01]\ Dysprosium & 66 & 32 & 139 & 170 & 1934 & 2010 & [@2013Fry01]\ Holmium & 67 & 32 & 140 & 172 & 1934 & 2001 & [@2013Fry01]\ Erbium & 68 & 32 & 144 & 175 & 1934 & 2003 & [@2013Fry01]\ Thulium & 69 & 33 & 145 & 177 & 1934 & 1998 & [@2013Fry01]\ Ytterbium & 70 & 31 & 149 & 180 & 1934 & 2001 & [@2013Fry01]\ Lutetium & 71 & 35 & 150 & 184 & 1934 & 1993 & [@2012Gro02]\ Hafnium & 72 & 36 & 154 & 189 & 1934 & 2009 & [@2012Gro02]\ Tantalum & 73 & 38 & 155 & 192 & 1932 & 2009 & [@2012Rob01]\ Tungsten & 74 & 38 & 157 & 194 & 1930 & 2010 & [@2010Fri01]\ Rhenium & 75 & 39 & 159 & 197 & 1931 & 2011 & [@2012Rob01]\ Osmium & 76 & 41 & 161 & 201 & 1931 & 2011 & [@2012Rob01]\ Iridium & 77 & 40 & 165 & 204 & 1935 & 2011 & [@2012Rob01]\ Platinum & 78 & 40 & 166 & 205 & 1935 & 2010 & [@2011Amo01]\ Gold & 79 & 41 & 170 & 210 & 1935 & 2011 & [@2010Sch02]\ Mercury & 80 & 46 & 171 & 216 & 1920 & 2010 & [@2011Mei01]\ Thallium & 81 & 42 & 176 & 217 & 1908 & 2010 & [@2013Fry02]\ Lead & 82 & 42 & 179 & 220 & 1900 & 2010 & [@2013Fry02]\ Bismuth & 83 & 41 & 184 & 224 & 1904 & 2010 & [@2013Fry02]\ Polonium & 84 & 42 & 186 & 227 & 1898 & 2010 & [@2013Fry02]\ Astatine & 85 & 39 & 191 & 229 & 1940 & 2010 & [@2013Fry03]\ Radon & 86 & 39 & 193 & 231 & 1899 & 2010 & [@2013Fry03]\ Francium & 87 & 35 & 199 & 233 & 1939 & 2010 & [@2013Fry03]\ Radium & 88 & 34 & 201 & 234 & 1898 & 2005 & [@2013Fry03]\ Actinium & 89 & 31 & 206 & 236 & 1902 & 2010 & [@2013Fry04]\ Thorium & 90 & 31 & 208 & 238 & 1898 & 2010 & [@2013Fry04]\ Protactinium & 91 & 28 & 212 & 239 & 1913 & 2005 & [@2013Fry04]\ Uranium & 92 & 23 & 217 & 242 & 1896 & 2000 & [@2013Fry04]\ Neptunium & 93 & 20 & 225 & 244 & 1940 & 1994 & [@2013Fry05]\ Plutonium & 94 & 20 & 228 & 247 & 1946 & 1999 & [@2013Fry05]\ Americium & 95 & 16 & 232 & 247 & 1949 & 2000 & [@2013Fry05]\ Curium & 96 & 17 & 233 & 251 & 1949 & 2010 & [@2013Fry05]\ Berkelium & 97 & 13 & 238 & 251 & 1950 & 2003 & [@2013Fry05]\ Californium & 98 & 20 & 237 & 256 & 1951 & 1995 & 
[@2013Fry05]\ Einsteinium & 99 & 17 & 241 & 257 & 1954 & 1996 & [@2011Mei01]\ Fermium & 100 & 19 & 241 & 259 & 1954 & 2008 & [@2013Tho01]\ Mendelevium & 101 & 16 & 245 & 260 & 1955 & 1996 & [@2013Tho01]\ Nobelium & 102 & 11 & 250 & 260 & 1963 & 2001 & [@2013Tho01]\ Lawrencium & 103 & 9 & 252 & 260 & 1965 & 2001 & [@2013Tho01]\ Rutherfordium & 104 & 13 & 253 & 267 & 1969 & 2010 & [@2013Tho01]\ Dubnium & 105 & 11 & 256 & 270 & 1970 & 2010 & [@2013Tho01]\ Seaborgium & 106 & 12 & 258 & 271 & 1974 & 2010 & [@2013Tho01]\ Bohrium & 107 & 10 & 260 & 274 & 1981 & 2010 & [@2013Tho01]\ Hassium & 108 & 12 & 263 & 277 & 1984 & 2010 & [@2013Tho01]\ Meitnerium & 109 & 7 & 266 & 278 & 1982 & 2010 & [@2013Tho01]\ Darmstadtium & 110 & 8 & 267 & 281 & 1995 & 2010 & [@2013Tho01]\ Roentgenium & 111 & 7 & 272 & 282 & 1995 & 2010 & [@2013Tho01]\ Copernicium & 112 & 6 & 277 & 285 & 1996 & 2010 & [@2013Tho01]\ 113 & 113 & 6 & 278 & 286 & 2004 & 2010 & [@2013Tho01]\ Flerovium & 114 & 5 & 285 & 289 & 2004 & 2010 & [@2013Tho01]\ 115 & 115 & 4 & 287 & 290 & 2004 & 2010 & [@2013Tho01]\ Livermorium & 116 & 4 & 290 & 293 & 2004 & 2004 & [@2013Tho01]\ 117 & 117 & 2 & 293 & 294 & 2010 & 2010 & [@2013Tho01]\ 118 & 118 & 1 & 294 & 294 & 2006 & 2006 & [@2013Tho01]\ \[t:elements\] The element with the most isotopes (46) presently known is mercury, followed by thallium, lead and polonium with 42 each. The element with the fewest isotopes is element 118, where only one isotope (A = 294) is presently known. The heaviest nuclides are $^{294}$117 and $^{294}$118. However, it should be stressed that the observation of elements 117 and 118 has not been accepted by IUPAC. Potential Discoveries in the Near Future {#sec:potential} ======================================== The 3105 nuclides presently reported in the published literature still probably constitute less than 50% of all nuclides that potentially could be observed. 
In the following subsections nuclides which should be discovered in the near future are discussed. Proceedings and internal reports {#subsec:proc} -------------------------------- Until the end of 2011 twenty-six nuclides had only been reported in conference proceedings or internal reports. Table \[t:nonref-pub\] lists these nuclides along with the author, year, laboratory, conference or report and reference of the discovery. Most of them were reported at least ten years ago, so that it is unlikely that these results will be published in refereed journals in the future. Conference proceedings quite often contain preliminary results and it is conceivable that these results then do not hold up for a refereed journal. A curious case is the reported discovery of $^{155,156}$Pr and $^{157,158}$Nd in the proceedings of RNB-3 in 1996 [@1996Cza01] where these nuclides were included as newly discovered in a figure of the chart of nuclides. The authors also stated: “In this first experiment, 54 new isotopes were discovered, ranging from $^{86}_{32}$Ge to $^{158}_{60}$Nd” [@1996Cza01]. However, in the original publication only 50 new isotopes were listed and there was no evidence for the observation of any praseodymium or neodymium isotopes [@1994Ber01]. A modified version of the nuclide chart showing these nuclei was included in two further publications [@1997Ber01; @1997Ber02]. These two neodymium isotopes ($^{157,158}$Nd) have recently been reported (see Section \[sec:2012\]) by van Schelt et al. [@2012Van01] and Kurcewicz et al. [@2012Kur01], respectively. [@llllll@]{} \ Nuclide(s) & Author & Year & Laboratory & Conf./Report & Ref.\ \ \ \ Nuclide(s) & Author & Year & Laboratory & Conf./Report & Ref.\ \ $^{95}$Cd,$^{97}$In$^{\footnotemark[1]}$ & R. Krücken & 2008 & GSI & & [@2008Kru01]\ $^{155}$Pr$^{\footnotemark[1]}$ $^{156}$Pr& S. Czajkowski et al. & 1996 & GSI & & [@1996Cza01]\ $^{126}$Nd & G. A. 
Souliotis & 2000 & MSU & & [@2000Sou01]\ $^{157,158}$Nd$^{\footnotemark[1]}$ & S. Czajkowski et al. & 1996 & GSI & & [@1996Cza01]\ $^{136}$Gd,$^{138}$Tb & G. A. Souliotis & 2000 & MSU & & [@2000Sou01]\ $^{143}$Ho & G. A. Souliotis & 2000 & MSU & & [@2000Sou01]\ & D. Seweryniak et al. & 2002 & LBL & & [@2003Sew02]\ $^{144}$Tm & K. P. Rykaczewski et al. & 2004 & ORNL & & [@2005Ryk01]\ & R. Grzywacz et al. & & & & [@2005Grz01]\ & C. R. Bingham et al. & & & & [@2005Bin01]\ $^{178}$Tm$^{\footnotemark[1]}$ & Zs. Podolyak et al. & 1999 & GSI & & [@2000Pod01]\ $^{150}$Yb & G. A. Souliotis & 2000 & MSU & & [@2000Sou01]\ $^{181}$Yb$^{\footnotemark[1]}$ & Zs. Podolyak et al. & 1999 & GSI & & [@2000Pod01]\ $^{182}$Yb$^{\footnotemark[1]}$ & S. D. Al-Garni et al. & 2002 & GSI & & [@2002AlG01]\ $^{153}$Hf & G. A. Souliotis & 2000 & MSU & & [@2000Sou01]\ $^{164}$Ir & H. Kettunen et al. & 2000 & Jyväskylä & & [@2001Ket02]\ & H. Mahmud et al. & 2001 & ANL & & [@2002Mah01]\ & D. Seweryniak et al. & & & & [@2003Sew01]\ $^{234}$Cm & P. Cardaja et al. & 2002 & GSI & & [@2002Cag01]\ & J. Khuyagbaatar et al. & 2007 & GSI & & [@2007Khu01]\ & D. Kaji et al. & 2010 & RIKEN & & [@2010Kaj01]\ $^{235}$Cm & J. Khuyagbaatar et al. & 2007 & GSI & & [@2007Khu01]\ $^{234}$Bk & K. Morita et al. & 2002 & RIKEN & & [@2003Mor02]\ & K. Morimoto et al. & & & & [@2003Mor01]\ & D. Kaji et al. & 2010 & RIKEN & & [@2010Kaj01]\ $^{252,253}$Bk & S. A. Kreek et al. & 1992 & LBL & & [@1992Kre01]\ $^{262}$No & R. W. Lougheed et al. & 1988 & LBL & & [@1988Lou01]\ & & & & & [@1989Lou01]\ & E. K. Hulet & & & & [@1989Hul01]\ $^{261}$Lr & R. W. Lougheed et al. & 1987 & LBL & & [@1987Lou01]\ & E. K. Hulet & & & & [@1989Hul01]\ & R. A. Henderson et al. & 1991 & LBL & & [@1991Hen01]\ $^{262}$Lr & R. W. Lougheed et al. & 1987 & LBL & & [@1987Lou01]\ & E. K. Hulet & & & & [@1989Hul01]\ & R. A. Henderson et al. & 1991 & LBL & & [@1991Hen01]\ $^{255}$Db & G. N. 
Flerov & 1976 & Dubna & & [@1976Fle01]\ \ \[t:nonref-pub\] Another argument for not giving full credit for a discovery reported in conference proceedings is the contributions from single authors (for example [@2008Kru01; @2000Sou01]). These experiments typically involve fairly large collaborations, and it is not clear that these single-author papers were fully vetted by the collaboration. Also, everyone involved in the experiment and the analysis should get the appropriate credit. The authors of the more recent proceedings and reports are encouraged to fully analyze the data and submit their final results for publication in refereed journals. Medium-mass proton rich nuclides {#subsec:proton} -------------------------------- The proton dripline has been crossed in the medium-mass region between antimony and bismuth (Z = 51–83) with the observation of proton emitters of odd-Z elements. Promethium is the only odd-Z element in this mass region where no proton emitters have been discovered yet. In these experiments the protons are detected in position-sensitive silicon detectors, correlated with the implantation of a fusion-evaporation residue after a mass separator. The high detection efficiency for these protons makes this method very efficient, and nuclides far beyond the proton dripline with very small cross sections can be identified. In contrast, for nuclides closer to the dripline, proton emission is not the dominant decay mode due to the smaller Q-values for the proton decay. The identification of these nuclides is more difficult because of the lower detection efficiency for $\beta$- and $\gamma$-rays. In fact, many of these nuclei were identified by $\beta$-delayed proton emission from excited states of the daughter nuclei. Thus, there are isotopes not yet discovered between the lightest $\beta$-emitters and the heaviest proton emitters for the odd-Z elements. ![Chart of nuclides for neutron-deficient nuclides between barium and lutetium (Z = 56–71). 
The grey-scale coding refers to the decade of discovery. Proton emitters are identified by the thick (red) borders.[]{data-label="f:proton-holes"}](proton-rich-holes.pdf) Figure \[f:proton-holes\] shows the medium-mass neutron-deficient region of the chart of nuclides. The thick (red) borders indicate proton emitters and the grey shades of the nuclides indicate the decade of discovery. Currently nine odd-Z nuclides ($^{118,119}$La, $^{122,123}$Pr, $^{132,133}$Eu, and $^{136,137,138}$Tb) fall into these gaps. For the tenth missing nuclide, $^{143}$Ho, the neighboring $^{142}$Ho has already been identified by $\beta$-delayed proton emission [@2001Xu02]. In fact, decay properties of $^{143}$Ho have also been measured, but the results were only reported in an annual report [@2003Sew02]. There are three even-Z holes in this mass region: $^{126}$Nd, $^{136}$Gd, and $^{150}$Yb. In all three cases, the even more neutron-deficient nuclides were observed by the detection of $\beta$-delayed proton emission at the Institute of Modern Physics, Lanzhou, China ($^{125}$Nd [@1999Xu01], $^{135}$Gd [@1996Xu01], and $^{149}$Yb [@2001Xu01]). The identification of $^{126}$Nd, $^{136}$Gd, and $^{150}$Yb in the fragmentation reaction of a 30 MeV/nucleon $^{197}$Au beam has been reported only in a contribution to a conference proceeding [@2000Sou01]. The recent advances in beam intensities and detection techniques for fragmentation reactions (especially identification and separation of charge states) should make it possible to discover these and many more additional nuclides along and beyond the proton dripline in this mass region. Medium mass neutron rich nuclei {#subsec:neutron} ------------------------------- In contrast to the proton dripline, the neutron dripline has not been reached for medium-mass nuclides. The heaviest neutron-rich nuclide shown to be unbound is $^{39}$Mg [@2002Not01]. 
Most of the most neutron-rich nuclides have been produced in projectile fragmentation or projectile fission over the last fifteen years. The nuclides are separated with fragment separators according to their magnetic rigidity ($B\rho$ = momentum over charge of the nuclides, which corresponds approximately to their A/Z) and identified by time-of-flight and energy-loss measurements. ![Neutron-rich nuclei between argon and thorium discovered by projectile fragmentation or projectile fission as a function of A/Z. The data as labeled in the figure are from Bernas 94 [@1994Ber01], Bernas 97 [@1997Ber01], Tarasov 09 [@2009Tar01], Ohnishi 08/10 [@2008Ohn01; @2010Ohn01], Pfützner 98 [@1998Pfu01], Benlliure 99 [@1999Ben01], Steer 08/11 [@2008Ste01; @2011Ste01], Alkomashi 09 [@2009Alk01], Morales 11 [@2011Mor01], Chen 10 [@2010Che01], Alvarez-Pol 09/10 [@2009Alv01; @2010Alv01], and Kurcewicz 12 [@2012Kur01]. The legend in the figure refers to the first author and year of the publications.[]{data-label="f:brho"}](brho.pdf) Figure \[f:brho\] displays the neutron-rich part of the chart of nuclides between argon and thorium (Z = 18–90) as a function of A/Z. It shows the A/Z ranges covered by the different experiments. The figure also includes the most recent measurement by Kurcewicz et al. [@2012Kur01] (see Section \[sec:2012\]). If one considers that the location of the neutron dripline is predicted to be more or less constant at about A/Z = 3.2 in this mass region, it is clear from the figure that it is still far away. The limits of the projectile fragmentation/fission method are presently determined by the small cross sections, which can be overcome to an extent by improvements of primary beam intensities and/or larger-acceptance separators. In the long term the method is restricted by the limited availability of neutron-rich projectiles. Superheavy nuclides {#subsec:heavy} ------------------- ![Number of discovered transuranium elements and nuclides.
The data until 1989/1990 indicated by the dashed line were taken from Ref. [@1990Sea01].[]{data-label="f:seaborg-elements"}](seaborg-loveland-uranium.pdf) The discovery of superheavy nuclides has always been special because it is directly related to the discovery of new elements. It is interesting to follow the evolution of element discovery and the discovery of nuclides. In the 1990 book “The elements beyond uranium” Seaborg and Loveland showed the number of discovered transuranium elements and nuclides as a function of year [@1990Sea01]. Figure \[f:seaborg-elements\] displays an extension of these data until today. The number of discovered nuclides tracks closely the number of discovered elements, with about 10 isotopes per element. ![Chart of nuclides for elements heavier than nobelium. The grey-scale coding refers to the decade of discovery.[]{data-label="f:superheavy-gaps"}](superheavy-gaps.pdf) In addition to the efforts to discover elements 119 and 120 it is important to link the isotopes of the elements beyond 113 to known nuclides. Figure \[f:superheavy-gaps\] shows the nuclear chart beyond nobelium. It shows the separation of the more neutron-rich nuclides up to Z = 118 produced in “hot” fusion evaporation reactions from the less neutron-rich nuclides up to Z = 113 which were predominantly produced in “cold” fusion evaporation reactions. No isotone with N = 164 has been observed so far; this does not mean, however, that this isotone line corresponds to the separation of the decay chains. Table \[t:chains\] lists the 10 presently observed unconnected decay chains. There are five even-Z and five odd-Z chains. The four chains starting at $^{282}$113, $^{285}$Fl, $^{288}$115, and $^{291}$Fl bridge the N = 164 gap and end in $^{270}$Bh, $^{265}$Rf, $^{268}$Db, and $^{267}$Rf, respectively.
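Since each $\alpha$ decay removes two protons and two neutrons, the quantity N – Z is conserved along a decay chain, which is why the chains in Table \[t:chains\] can be labeled by their (N–Z) value. A minimal sketch tracing one of these chains (the function name is ours; the chain endpoints follow the table):

```python
def alpha_chain(z, a, steps):
    """Follow successive alpha decays: each emission removes two
    protons and two neutrons, so N - Z = A - 2Z is invariant."""
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z - 2, a - 4
        chain.append((z, a))
    return chain

# The odd-Z (N-Z) = 57 chain: five alpha decays from Z = 115, A = 287.
chain = alpha_chain(115, 287, steps=5)
assert all(a - 2 * z == 57 for z, a in chain)  # N - Z conserved
assert chain[-1] == (105, 267)                 # chain ends at 267Db
```

The fourth member of this chain, (Z = 107, A = 271), is the N = 164 isotone $^{271}$Bh discussed in the text.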
It should be mentioned that the odd-Z N–Z = 57 chain passes through the N = 164 isotone $^{271}$Bh; however, the properties of this nuclide could not unambiguously be determined [@2013Tho01].

           (N–Z)   First         Last         Number of $\alpha$ decays
  -------- ------- ------------- ------------ ---------------------------
  Even-Z   57      $^{285}$Fl    $^{265}$Rf   5
           58      $^{294}$118   $^{282}$Cn   3
           59      $^{291}$Lv    $^{267}$Rf   6
           60      $^{292}$Lv    $^{284}$Cn   2
           61      $^{293}$Lv    $^{277}$Hs   4
  Odd-Z    56      $^{282}$113   $^{266}$Db   4
           57      $^{287}$115   $^{267}$Db   5
           58      $^{288}$115   $^{268}$Db   5
           59      $^{293}$117   $^{281}$Rg   3
           60      $^{294}$117   $^{270}$Db   6

  : Unconnected superheavy decay chains. The (N–Z) value, first and last nuclides and the number of $\alpha$-decays in the chains are listed. \[t:chains\]

The decay chains cannot be connected to known nuclides by extending them to lower masses because they terminate in nuclides which decay by spontaneous fission. The relationship has to be established by systematic features of neighboring isotopes for different elements. Thus the missing isotopes $^{279-281}$113, $^{278-280}$Cn, $^{275-277}$Rg, and $^{274-276}$Ds as well as the other N = 164 isotopes $^{273}$Mt, $^{272}$Hs, $^{271}$Bh, $^{270}$Sg, and $^{269}$Db have to be measured. In total there are thirty-nine nuclides still to be discovered between the already known light and heavy isotopes of the elements rutherfordium through Z = 113. In addition, there are a few gaps of unknown nuclides in the lighter (trans)uranium region. $^{239}$Bk and the two curium isotopes $^{234,235}$Cm have yet to be discovered, although as mentioned in Section \[subsec:proc\] the curium isotopes have been reported in annual reports. Also three uranium isotopes are still unknown, $^{220,221}$U and $^{241}$U. It is especially surprising that the two lighter isotopes $^{220,221}$U have not been observed because three even lighter isotopes ($^{217-219}$U) are known.
$^{222}$U was formed in the fusion-evaporation reaction $^{186}$W($^{40}$Ar,4n) [@1983Hin01] and most of the other light uranium isotopes were formed in 4n or 5n reactions. Thus it should be possible to populate and identify $^{220,221}$U in the $^{184}$W($^{40}$Ar,4n) and $^{186}$W($^{40}$Ar,5n) reactions, respectively. Beyond the driplines {#subsec:limits} -------------------- As mentioned in Section \[sec:status\], the present definition of nuclides also includes very short-lived nuclides beyond the proton and neutron driplines. So far, these nuclides are only accessible in the light mass region; characteristics of many of these nuclides have been measured up to magnesium beyond the proton dripline and up to oxygen beyond the neutron dripline. The proton dripline has most likely been reached or crossed for all elements up to technetium (Z = 43). Table \[t:proton-unbound\] lists the first isotope of the elements between aluminum and technetium which has been shown to be unbound but has not been identified yet, or the first isotope for which nothing is known, so that in principle it could still be bound or could have a finite lifetime. With the possible exception of scandium, bromine, and rubidium, where resonances have already been measured for $^{39}$Sc [@1988Woo01], $^{69}$Br [@2011Rog01] and $^{73}$Rb [@1993Bat02], resonance parameters for at least one isotope of these elements should be in reach in the near future. For elements lighter than aluminum at least one unbound isotope has been identified. Although not impossible, it is unlikely that further nuclides will exist for which characteristic resonance parameters can be measured. [@lllllll@]{} \ Z & Nuclide & Author & Year & Laboratory & Comments & Ref.\ \ 13 & $^{21}$Al & & & & not measured &\ 14 & $^{21}$Si & & & & not measured &\ 15 & $^{25}$P & M. Langevin & 1986 & GANIL & & [@1986Lan01]\ 16 & $^{26}$S & A.S.
Fomichev & 2011 & Dubna & & [@2011Fom01]\ 17 & $^{29}$Cl & M. Langevin & 1986 & GANIL & & [@1986Lan01]\ & $^{30}$Cl & M. Langevin & 1986 & GANIL & & [@1986Lan01]\ 18 & $^{30}$Ar & & & & not measured &\ 19 & $^{33}$K & M. Langevin & 1986 & GANIL & & [@1986Lan01]\ & $^{34}$K & M. Langevin & 1986 & GANIL & & [@1986Lan01]\ 20 & $^{34}$Ca & & & & not measured &\ 21 & $^{38}$Sc & & & & not measured, but $^{39}$Sc unbound &\ 22 & $^{38}$Ti & B. Blank & 1996 & GANIL & & [@1996Bla01]\ 23 & $^{42}$V & V. Borrel & 1992 & GANIL & & [@1992Bor01]\ 24 & $^{41}$Cr & & & & not measured &\ 25 & $^{44}$Mn & V. Borrel & 1992 & GANIL & & [@1992Bor01]\ & $^{45}$Mn & V. Borrel & 1992 & GANIL & & [@1992Bor01]\ 26 & $^{44}$Fe & & & & not measured, but $^{45}$Fe 2p emitter &\ 27 & $^{49}$Co & B. Blank & 1994 & GANIL & & [@1994Bla01]\ 28 & $^{47}$Ni & & & & not measured, but $^{48}$Ni 2p emitter &\ 29 & $^{54}$Cu & B. Blank & 1994 & GANIL & & [@1994Bla01]\ 30 & $^{53}$Zn & & & & not measured, but $^{54}$Zn 2p emitter &\ 31 & $^{59}$Ga & A. Stolz & 2005 & MSU & & [@2005Sto01]\ 32 & $^{59}$Ge & & & & not measured &\ 33 & $^{63}$As & A. Stolz & 2005 & MSU & & [@2005Sto01]\ 34 & $^{63}$Se & & & & not measured &\ 35 & $^{68}$Br & & & & not measured, but $^{69}$Br unbound &\ 36 & $^{68}$Kr & & & & not measured &\ 37 & $^{72}$Rb & & & & not measured, but $^{73}$Rb unbound &\ 38 & $^{72}$Sr & & & & not measured &\ 39 & $^{75}$Y & & & & not measured &\ 40 & $^{77}$Zr & & & & not measured &\ 41 & $^{81}$Nb & Z. Janas & 1999 & GANIL & & [@1999Jan01]\ 42 & $^{82}$Mo & & & & not measured &\ 43 & $^{85}$Tc & Z. Janas & 1999 & GANIL & & [@1999Jan01]\ \[t:proton-unbound\] For neutron rich nuclei characteristic properties of at least two isotopes beyond the neutron dripline have been identified for the lightest elements, hydrogen, helium and lithium. 
Neutron-rich nuclides between beryllium and magnesium which have been shown, or are expected, to be unbound but have not been observed are listed in Table \[t:neutron-unbound\]. It should be possible to measure most of these nuclides in the near future. Indeed, $^{16}$Be, $^{26}$O, and $^{28}$F have been discovered recently (see Section \[sec:2012\]). The open question whether the (A – 3Z = 6) nuclei between fluorine and magnesium ($^{33}$F, $^{36}$Ne, $^{39}$Na, and $^{42}$Mg) are bound should be answered in the near future with the available increased intensities of the RIBF at RIKEN [@2007Yan01]. Beyond aluminum the dripline has most likely not been reached yet, as indicated by the observation that $^{42}$Al is bound with respect to neutron emission [@2007Bau01]. [@lllllll@]{} \ Z & Nuclide & Author & Year & Laboratory & Comments & Ref.\ \ 4 & $^{15}$Be & A. Spyrou & 2011 & MSU & & [@2011Spy01]\ & $^{16}$Be & T. Baumann & 2003 & MSU & & [@2003Bau01]\ 5 & $^{20}$B & A. Ozawa & 2003 & RIKEN & & [@2003Oza01]\ & $^{21}$B & A. Ozawa & 2003 & RIKEN & & [@2003Oza01]\ 6 & $^{21}$C & M. Langevin & 1985 & GANIL & & [@1985Lan01]\ & $^{23}$C & & & & not measured, but $^{21}$C unbound &\ 7 & $^{24}$N & H. Sakurai & 1999 & RIKEN & & [@1999Sak01]\ & $^{25}$N & H. Sakurai & 1999 & RIKEN & & [@1999Sak01]\ 8 & $^{26}$O & D. Guillemaud-Mueller & 1990 & GANIL & & [@1990Gui01]\ & $^{27}$O & O. Tarasov & 1997 & GANIL & & [@1997Tar01]\ & $^{28}$O & O. Tarasov & 1997 & GANIL & & [@1997Tar01]\ 9 & $^{28}$F & H. Sakurai & 1999 & RIKEN & & [@1999Sak01]\ & $^{30}$F & H. Sakurai & 1999 & RIKEN & & [@1999Sak01]\ & $^{32}$F & & & & not measured, but $^{30}$F unbound &\ & $^{33}$F & & & & potentially bound &\ 10 & $^{33}$Ne & M. Notani & 2002 & RIKEN & & [@2002Not01]\ & $^{35}$Ne & & & & not measured, but $^{33}$Ne unbound &\ & $^{36}$Ne & & & & potentially bound &\ 11 & $^{36}$Na & M.
Notani & 2002 & RIKEN & & [@2002Not01]\ & $^{38}$Na & & & & not measured, but $^{36}$Na unbound &\ & $^{39}$Na & & & & potentially bound &\ 12 & $^{39}$Mg & M. Notani & 2002 & RIKEN & & [@2002Not01]\ & $^{41}$Mg & & & & not measured, but $^{39}$Mg unbound &\ & $^{42}$Mg & & & & potentially bound &\ \[t:neutron-unbound\] New Discoveries in 2012 {#sec:2012} ======================= While in 2010 a record number of 110 new nuclei were reported [@2011Tho01], only 7 additional new nuclei were discovered in 2011. The trend was again reversed in 2012 with the new identification of up to 67 nuclei. Kurcewicz et al. alone reported 59 new neutron-rich nuclei between neodymium and platinum [@2012Kur01]. These include [$^{\text{158}}\text{Nd}$]{}, [$^{\text{178}}\text{Tm}$]{}, and [$^{\text{181,182}}\text{Yb}$]{}, which had previously been reported only in conference proceedings (see Section \[subsec:proc\]). Kurcewicz et al. reported the discovery of 60 new nuclides; however, [$^{\text{157}}\text{Nd}$]{} had already been reported in a paper by Van Schelt et al. [@2012Van01] which had been submitted five months earlier. Van Schelt et al. also measured [$^{\text{155}}\text{Pr}$]{} for the first time; both isotopes had previously been reported in a conference proceeding. In addition, resonances in the light neutron-unbound nuclei $^{16}$Be [@2012Spy01], $^{26}$O [@2012Lun01] and $^{28}$F [@2012Chr01] were measured for the first time. The remaining three nuclides, $^{95}$Cd, $^{97}$In, and $^{99}$Sn, bring up the discussion of what should be counted as a discovery. The particle identification plot in the recent publication by Hinke et al. exhibits clear evidence for the presence of $^{95}$Cd and $^{97}$In and a few events of $^{99}$Sn [@2012Hin01]. However, neither the text nor the figure caption mentions the discovery of these nuclides.
In an earlier contribution to a conference proceeding Krücken reported the discovery of $^{95}$Cd and $^{97}$In, but not $^{99}$Sn, from the same experiment [@2008Kru01]. In addition to these 66 nuclides another 6 new nuclides ($^{64}$Ti, $^{67}$V, $^{69}$Cr, $^{72}$Mn, $^{70}$Cr, and $^{75}$Fe) were reported in a contribution to a conference proceeding [@2012Tar01]. Long Term Future {#sec:future} ================ Over 3000 different isotopes of 118 elements are presently known. In a recent article theoretical calculations revealed that a total of about 7000 bound nuclei should exist, more than double the number of nuclides presently known [@2012Erl01]. However, not all of them will ever be in reach, as can be seen in Figure \[f:witek\]. The figure shows the known nuclides first produced by light-particle reactions, fusion/evaporation reactions, and spallation/fragmentation in green, orange, and dark blue, respectively. Nuclides of the radioactive decay chains are shown in purple and stable nuclides in black. The yellow regions show unknown nuclides predicted by Ref. [@2012Erl01]. The light blue border corresponds to the uncertainty of the driplines in the calculations. In the region of Z $>$ 82 and N $>$ 184 alone about 2000 nuclides will most probably never be created. If one conservatively adds another 500 along the neutron dripline in the region above Z $\sim$ 50, it can be estimated that approximately another 1500 nuclides (7000 predicted minus 3000 presently known minus 2500 out of reach) are still waiting to be discovered. In the 2004 review article on the limits of nuclear stability it was estimated that the Rare Isotope Accelerator (RIA), which had been proposed at the time, would be able to produce about 100 new nuclides along the proton dripline below Z $\sim$ 82 [@2004Tho01]. Since then only about 20 of these nuclides have been observed.
Thus the next generation radioactive beam facilities (the Radioactive Ion-Beam Factory RIBF at RIKEN [@2010Sak01], the Facility for Antiproton and Ion Research FAIR at GSI [@2009FAI01], and the Facility for Rare Isotope Beams FRIB at MSU [@2010Bol01; @2012Wei01]) should be able to produce approximately 80 new neutron-deficient nuclides. Equally critical for new discoveries at these facilities are the next generation fragment separators, BigRIPS [@2003Kub01; @2008Ohn01], the Super-FRS [@2003Gei01], and the FRIB fragment separator [@2011Ban01], respectively. Along the neutron dripline RIA was estimated to produce another 400 nuclides below Z $\sim$ 50 [@2004Tho01], of which about 70 have been discovered in the meantime, leaving about another 330 for the new facilities in the future. ![Chart of nuclides. Stable nuclides are shown in black. The other known nuclides are grouped according to the production mechanism of their discovery: radioactive decay chains (purple), light-particle induced reactions (green), fusion/transfer reactions (orange), and spallation or projectile fragmentation/fission (dark blue). Nuclides predicted to exist according to Ref. [@2012Erl01] are shown in yellow where the light-blue area shows the uncertainty of the driplines.[]{data-label="f:witek"}](chart-rop-color.pdf) The remaining unknown nuclides in the various regions of the nuclear chart have to be produced by different reaction mechanisms. Projectile fragmentation reactions will most likely be utilized to populate neutron-deficient nuclides below Z $\sim$ 50, while for nuclides above Z $\sim$ 82 fusion-evaporation reactions are the only possibility. The use of fusion-evaporation reactions with radioactive beams might be an alternative to reach nuclides which cannot be populated with stable target-beam combinations [@2004Tho01].
Neutron-deficient nuclides in the intermediate mass region (50 $<$ Z $<$ 82) have so far been produced by fusion-evaporation reactions; however, projectile fragmentation could be a viable alternative [@2000Sou01]. New neutron-rich nuclides below Z $\sim$ 82 will most likely only be reachable by projectile fragmentation/fission reactions. The 2004 review predicted that the dripline would be reachable up to Z $\sim$ 30 [@2004Tho01]. If the dripline is as far away as estimated in the recent calculations [@2012Erl01], it could be that it will not be reached beyond Z $\sim$ 16, at least not in the near future. The search for new superheavy elements and therefore also new nuclei continues to rely on fusion-evaporation reactions [@2011Gat01; @2012Mor01; @2013Oga01]. However, recent calculations suggest that deep inelastic reactions or multi-nucleon transfer reactions on heavy radioactive targets (for example [$^{\text{248}}\text{Cm}$]{}) might be a good choice to populate heavy neutron-rich nuclei [@2008Zag01; @2011Zag01; @2013Lov01]. The use of radioactive beams on radioactive targets could also be utilized for fusion-evaporation reactions in the future [@2013Lov01; @2007Lov01]. Conclusion {#sec:conclusion} ========== The quest for the discovery of nuclides that have never been made on Earth continues to be a strong motivation to advance nuclear science toward the understanding of nuclear forces and interactions. The discovery of a nuclide is the first necessary step to explore its properties. New discoveries have been closely linked to new technical developments of accelerators and detectors. In the future it will be critical to develop new techniques and methods in order to further expand the chart of nuclides. The discovery potential is not yet limited by the number of undiscovered nuclides. About 1500 could still be created.
This would correspond to about 90% of all predicted nuclides below N $\sim$ 184, which should be sufficient to constrain theoretical models to reliably predict the properties of all nuclides as well as the limit of existence. Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank Ute Thoennessen for carefully proofreading the manuscript. This work was supported by the National Science Foundation under Grant No. PHY11-02511. [^1]: They also reported another activity assigned to $^{27}$Si, however, most likely they observed $^{28}$Al
--- abstract: 'The paper considers the problem of global optimization in the setup of stochastic process bandits. We introduce a UCB algorithm which builds a cascade of discretization trees based on generic chaining in order to make it operable over a continuous domain. The theoretical framework applies to functions under weak probabilistic smoothness assumptions and also significantly extends the spectrum of application of UCB strategies. Moreover, generic regret bounds are derived which are then specialized to Gaussian processes indexed on infinite-dimensional spaces as well as to quadratic forms of Gaussian processes. Lower bounds are also proved in the case of Gaussian processes to assess the optimality of the proposed algorithm.' author: - Emile Contal - Nicolas Vayatis bibliography: - '../../biblio/biblio.bib' title: 'Stochastic Process Bandits: Upper Confidence Bounds Algorithms via Generic Chaining' --- Introduction ============ Among the most promising approaches to address the issue of global optimization of an unknown function under reasonable smoothness assumptions are extensions of the multi-armed bandit setup. [@Bubeck2009] highlighted the connection between cumulative regret and simple regret, which facilitates fair comparison between methods, and [@Bubeck2011] proposed bandit algorithms on a metric space $\cX$, called $\cX$-armed bandits. In this context, theory and algorithms have been developed in the case where the expected reward is a function $f:\cX\to\bR$ which satisfies certain smoothness conditions such as Lipschitz or Hölder continuity [@Kleinberg2004; @Kocsis2006; @Auer2007; @Kleinberg2008; @Munos2011]. Another line of work is the Bayesian optimization framework [@Jones1998; @Bull2011; @Mockus2012], for which the unknown function $f$ is assumed to be the realization of a prior stochastic process distribution, typically a Gaussian process.
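For a Gaussian process prior, the posterior mean and variance on which such Bayesian optimization methods rely have closed forms. The following sketch illustrates this; the squared-exponential kernel, length scale, and data are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential covariance k(a, b) with unit prior variance."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and variance of a GP(0, k) after noisy observations."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    mu = K_s.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)  # k(x, x) = 1 for this kernel
    return mu, var

x = np.array([0.1, 0.4, 0.9])
y = np.sin(3 * x)
mu, var = gp_posterior(x, y, np.linspace(0, 1, 5))
```

The posterior variance is everywhere below the prior variance, and it is this shrinking uncertainty that confidence-bound strategies exploit.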
An efficient algorithm that can be derived in this framework is the popular GP-UCB algorithm due to [@Srinivas2012]. However, an important limitation of upper confidence bound (UCB) strategies without a smoothness condition is that the search space has to be [*finite*]{} with bounded cardinality, a fact which is well known but, to our knowledge, has not been discussed so far in the related literature. In this paper, we propose an approach which improves both lines of work with respect to their present limitations. Our purpose is to: (i) relax the smoothness assumptions that limit the relevance of $\cX$-armed bandits in practical situations where target functions may only display random smoothness, and (ii) extend the UCB strategy to arbitrary sets $\cX$. Here we will assume that $f$, being the realization of a given stochastic process distribution, fulfills a *probabilistic smoothness* condition. We will consider the stochastic process bandit setup and we develop a UCB algorithm based on [*generic chaining*]{} [@Bogachev1998; @Adler2009; @Talagrand2014; @Gine2015]. Using the generic chaining construction, we compute hierarchical discretizations of $\cX$ in the form of chaining trees in a way that permits precise control of the discretization error. The UCB algorithm is then applied on these successive discrete subspaces, choosing the accuracy of the discretization at each iteration so that the cumulative regret it incurs matches the state-of-the-art bounds on finite $\cX$. Specifically, we propose an algorithm which computes a generic chaining tree for an arbitrary stochastic process in quadratic time. We show that this tree is optimal for classes like Gaussian processes with high probability. Our theoretical contributions have an impact in the two contexts mentioned above.
From the bandit and global optimization point of view, we provide a generic algorithm that incurs state-of-the-art regret on stochastic process objectives including non-trivial functionals of Gaussian processes such as the sum of squares of Gaussian processes (in the spirit of mean-square-error minimization), nonparametric Gaussian processes on ellipsoids (RKHS classes), or the Ornstein-Uhlenbeck process, which was conjectured impossible by [@Srinivas2010] and [@Srinivas2012]. From the point of view of Gaussian process theory, the generic chaining algorithm leads to tight bounds on the supremum of the process in probability and not only in expectation. The remainder of the paper is organized as follows. In Section \[sec:framework\], we present the stochastic process bandit framework over continuous spaces. Section \[sec:chaining\] is devoted to the construction of generic chaining trees for search space discretization. Regret bounds are derived in Section \[sec:regret\] after choosing an adequate discretization depth. Finally, lower bounds are established in Section \[sec:lower\_bound\]. Stochastic Process Bandits Framework {#sec:framework} ==================================== We consider the optimization of an unknown function $f:\cX\to\bR$ which is assumed to be sampled from a given separable stochastic process distribution. The input space $\cX$ is an arbitrary space not restricted to subsets of $\bR^D$, and we will see in the next section how the geometry of $\cX$ for a particular metric is related to the hardness of the optimization. An algorithm iterates the following: - it queries $f$ at a point $x_i$ chosen using the previously acquired information, - it receives a noisy observation $y_i=f(x_i)+\epsilon_i$, where the $(\epsilon_i)_{1\le i \le t}$ are independent centered Gaussian $\cN(0,\eta^2)$ of known variance.
We evaluate the performance of such an algorithm using the cumulative regret $R_t$: $$R_t = t\sup_{x\in\cX}f(x) - \sum_{i=1}^t f(x_i)\,.$$ This objective is not observable in practice, and our aim is to give theoretical upper bounds that hold with arbitrarily high probability in the form: $$\Pr\big[R_t \leq g(t,u)\big] \geq 1-e^{-u}\,.$$ Since the stochastic process is separable, the supremum over $\cX$ can be replaced by the supremum over all finite subsets of $\cX$ [@Boucheron2013]. Therefore we can assume without loss of generality that $\cX$ is finite with arbitrary cardinality. We discuss practical approaches to handle continuous spaces in Appendix \[sec:greedy\_cover\]. Note that the probabilities are taken under the product space of both the stochastic process $f$ itself and the independent Gaussian noises $(\epsilon_i)_{1\le i\le t}$. The algorithm faces the exploration-exploitation tradeoff. It has to decide between reducing the uncertainty on $f$ and maximizing the rewards. In some applications one may be interested in finding the maximum of $f$ only, that is, minimizing the simple regret $S_t$: $$S_t = \sup_{x\in\cX}f(x) - \max_{i\leq t}f(x_i)\,.$$ We will reduce our analysis to this case by simply observing that $S_t\leq \frac{R_t}{t}$. #### Confidence Bound Algorithms and Discretization. To deal with the uncertainty, we adopt the *optimistic optimization* paradigm and compute high-confidence intervals where the values $f(x)$ lie with high probability, and then query the point maximizing the upper confidence bound [@Auer2002]. A naive approach would use a union bound over all $\cX$ to get the high-confidence intervals at every point $x\in\cX$. This would work for a search space with fixed cardinality $\abs{\cX}$, resulting in a factor $\sqrt{\log\abs{\cX}}$ in the Gaussian case, but it fails when $\abs{\cX}$ is unbounded, typically a grid of high density approximating a continuous space.
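The naive union-bound strategy on a finite search space can be sketched as follows. This is a standard UCB scheme with Gaussian noise shown purely for illustration (the function and constants are ours, not the paper's GP-based procedure); note the $\sqrt{\log\abs{\cX}}$-type confidence width coming from the union bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb_finite(f, X, T=50, eta=0.1, delta=0.05):
    """Naive UCB on a finite search space: a union bound over all |X|
    points and all T rounds yields the sqrt(log|X|)-type width."""
    beta = np.sqrt(2 * np.log(len(X) * T / delta))
    mu = np.zeros(len(X))          # empirical means
    n = np.zeros(len(X))           # visit counts
    queries = []
    for _ in range(T):
        width = beta * eta / np.sqrt(np.maximum(n, 1))
        width[n == 0] = np.inf     # force visiting every point once
        i = np.argmax(mu + width)  # optimistic choice
        y = f(X[i]) + eta * rng.standard_normal()
        n[i] += 1
        mu[i] += (y - mu[i]) / n[i]
        queries.append(X[i])
    return queries

X = np.linspace(0.0, 1.0, 8)
qs = ucb_finite(lambda x: -(x - 0.6) ** 2, X)
```

As the text points out, the width `beta` grows with $\log\abs{\cX}$, so this scheme degenerates when the grid density, and hence $\abs{\cX}$, is taken to infinity.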
In the next section, we tackle this challenge by employing [*generic chaining*]{} to build hierarchical discretizations of $\cX$. Discretizing the Search Space via Generic Chaining {#sec:chaining} ================================================== The Stochastic Smoothness of the Process ---------------------------------------- Let $\ell_u(x,y)$ for $x,y\in\cX$ and $u\geq 0$ be the following confidence bound on the increments of $f$: $$\ell_u(x,y) = \inf\Big\{s\in\bR: \Pr[f(x)-f(y) > s] < e^{-u}\Big\}\,.$$ In short, $\ell_u(x,y)$ is the best bound satisfying $\Pr\big[f(x)-f(y) \geq \ell_u(x,y)\big] < e^{-u}$. For particular distributions of $f$, it is possible to obtain closed formulae for $\ell_u$. However, in the present work we will consider upper bounds on $\ell_u$. Typically, if $f$ is distributed as a centered Gaussian process of covariance $k$, which we denote $f\sim\cGP(0,k)$, we know that $\ell_u(x,y) \leq \sqrt{2u}d(x,y)$, where $d(x,y)=\big(\E(f(x)-f(y))^2\big)^{\frac 1 2}$ is the canonical pseudo-metric of the process. More generally, if there exist a pseudo-metric $d(\cdot,\cdot)$ and a function $\psi(\cdot,\cdot)$ bounding the logarithm of the moment-generating function of the increments, that is, $$\log \E e^{\lambda(f(x)-f(y))} \leq \psi(\lambda,d(x,y))\,,$$ for $x,y\in\cX$ and $\lambda\in I \subseteq \bR$, then using the Chernoff bounding method [@Boucheron2013], $$\ell_u(x,y) \leq \psi^{*-1}(u,d(x,y))\,,$$ where $\psi^*(s,\delta)=\sup_{\lambda\in I}\big\{\lambda s - \psi(\lambda,\delta)\big\}$ is the Fenchel-Legendre dual of $\psi$ and $\psi^{*-1}(u,\delta)=\inf\big\{s\in\bR: \psi^*(s,\delta)>u\big\}$ denotes its generalized inverse. In that case, we say that $f$ is a $(d,\psi)$-process. For example if $f$ is sub-Gamma, that is: $$\label{eq:sub_gamma} \psi(\lambda,\delta)\leq \frac{\nu \lambda^2 \delta^2}{2(1-c\lambda \delta)}\,,$$ we obtain, $$\label{eq:sub_gamma_tail} \ell_u(x,y) \leq \big(c u + \sqrt{2\nu u}\big) d(x,y)\,.$$ The generality of Eq.
\[eq:sub\_gamma\] makes it convenient to derive bounds for a wide variety of processes beyond Gaussian processes, as we see for example in Section \[sec:gp2\]. A Tree of Successive Discretizations ------------------------------------ As stated in the introduction, our strategy to obtain confidence intervals for stochastic processes is by successive discretization of $\cX$. We define a notion of tree that will be used for this purpose. A set $\cT=\big(\cT_h\big)_{h\geq 0}$ where $\cT_h\subset\cX$ for $h\geq 0$ is a tree with parent relationship $p:\cX\to\cX$, when for all $x\in \cT_{h+1}$ its parent is given by $p(x)\in \cT_h$. We denote by $\cT_{\leq h}$ the set of the nodes of $\cT$ at depth lower than $h$: $\cT_{\leq h} = \bigcup_{h'\leq h} \cT_{h'}$. For $h\geq 0$ and a node $x\in \cT_{h'}$ with $h\leq h'$, we also denote by $p_h(x)$ its parent at depth $h$, that is $p_h(x) = p^{h'-h}(x)$, and we write $x\succ s$ when $s$ is a parent of $x$. To simplify the notations in the sequel, we extend the relation $p_h$ to $p_h(x)=x$ when $x\in\cT_{\leq h}$. We now introduce a powerful inequality bounding the supremum of the difference of $f$ between a node and any of its descendants in $\cT$, provided that $\abs{\cT_h}$ is not excessively large. \[thm:chaining\] Fix any $u>0$, $a>1$ and $\big(n_h\big)_{h\in\bN}$ an increasing sequence of integers. Set $u_i=u+n_i+\log\big(i^a\zeta(a)\big)$ where $\zeta$ is the Riemann zeta function. Then for any tree $\cT$ such that $\abs{\cT_h}\leq e^{n_h}$, $$\forall h\geq 0, \forall s\in\cT_h,~ \sup_{x\succ s} f(x)-f(s) \leq \omega_h\,,$$ holds with probability at least $1-e^{-u}$, where, $$\omega_h = \sup_{x\in\cX} \sum_{i> h} \ell_{u_i}\big(p_i(x), p_{i-1}(x)\big)\,.$$ The full proof of the theorem can be found in Appendix \[sec:proof\_chaining\]. It relies on repeated application of the union bound over the $e^{n_i}$ pairs $\big(p_i(x),p_{i-1}(x)\big)$.
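For a Gaussian process, where $\ell_u(x,y)\leq\sqrt{2u}\,d(x,y)$, the quantity $\omega_h$ of Theorem \[thm:chaining\] can be evaluated numerically once a tree is fixed. A minimal sketch on a dyadic tree over $[0,1]$ (the tree, the depth truncation, and the Euclidean metric are illustrative assumptions of ours):

```python
import math

def omega(h, depth=20, u=1.0, a=2.0):
    """Evaluate the chaining bound omega_h for a Gaussian process with
    l_u(x, y) <= sqrt(2u) d(x, y) on the dyadic tree over [0, 1]:
    level i has 2^i nodes, so |T_i| <= e^{n_i} with n_i = i log 2,
    and d(p_i(x), p_{i-1}(x)) <= 2^{-(i-1)} uniformly in x, which
    makes the sup over x trivial.  The sum over i > h is truncated
    at `depth`."""
    zeta_a = math.pi ** 2 / 6                  # zeta(2), matching a = 2
    total = 0.0
    for i in range(h + 1, depth + 1):
        n_i = i * math.log(2)
        u_i = u + n_i + math.log(i ** a * zeta_a)
        total += math.sqrt(2 * u_i) * 2.0 ** -(i - 1)
    return total

# Deeper discretizations incur a smaller residual chaining error.
assert omega(2) > omega(4) > omega(8) > 0
```

The geometric decay of the cell radii dominates the slow $\sqrt{u_i}$ growth, which is why $\omega_h$ shrinks as the depth $h$ increases.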
Now, if we look at $\cT_h$ as a discretization of $\cX$ where a point $x\in\cX$ is approximated by $p_h(x)\in\cT_h$, this result can be read in terms of discretization error, as stated in the following corollary. \[cor:chaining\] Under the assumptions of Theorem \[thm:chaining\] with $\cX=\cT_{\leq h_0}$ for $h_0$ large enough, we have that, $$\forall h, \forall x\in\cX,~ f(x)-f(p_h(x)) \leq \omega_h\,,$$ holds with probability at least $1-e^{-u}$. Geometric Interpretation for $(d,\psi)$-processes {#sec:psi_process} ------------------------------------------------- The previous inequality suggests that to obtain a good upper bound on the discretization error, one should take $\cT$ such that $\ell_{u_i}(p_i(x),p_{i-1}(x))$ is as small as possible for every $i>0$ and $x\in\cX$. We specify what it implies for $(d,\psi)$-processes. In that case, we have: $$\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\Big(u_i,d\big(p_i(x),p_{i-1}(x)\big)\Big)\,.$$ Writing $\Delta_i(x)=\sup_{x'\succ p_i(x)}d(x',p_i(x))$ the $d$-radius of the “cell” at depth $i$ containing $x$, we remark that $d(p_i(x),p_{i-1}(x))\leq \Delta_{i-1}(x)$, that is: $$\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\big(u_i,\Delta_{i-1}(x)\big)\,.$$ In order to make this bound as small as possible, one should spread the points of $\cT_h$ in $\cX$ so that $\Delta_h(x)$ is evenly small, while satisfying the requirement $\abs{\cT_h}\leq e^{n_h}$. Let $\Delta = \sup_{x,y\in\cX}d(x,y)$ and $\epsilon_h=\Delta 2^{-h}$, and define an $\epsilon$-net as a set $T\subseteq \cX$ for which $\cX$ is covered by $d$-balls of radius $\epsilon$ with center in $T$. 
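An $\epsilon$-net as defined above can be built greedily by repeatedly picking the point farthest from the centers chosen so far; this is in the spirit of the greedy cover heuristic referred to as Algorithm \[alg:greedy\_cover\], though the exact procedure here is our own illustrative sketch:

```python
import numpy as np

def greedy_eps_net(X, d, eps):
    """Greedily pick centers until every point of X lies within distance
    eps of some center, i.e. the chosen centers form an eps-net of X."""
    centers = []
    dist = np.full(len(X), np.inf)   # distance to nearest chosen center
    while dist.max() > eps:
        i = int(np.argmax(dist))     # farthest (least covered) point
        centers.append(i)
        dist = np.minimum(dist, np.array([d(X[i], x) for x in X]))
    return centers

X = np.linspace(0.0, 1.0, 101)
d = lambda a, b: abs(a - b)
net = greedy_eps_net(X, d, eps=0.1)
# every point of X is within eps of some center
assert all(min(d(x, X[c]) for c in net) <= 0.1 for x in X)
```

Running this at the radii $\epsilon_h=\Delta 2^{-h}$ produces one level of the discretization tree per depth $h$.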
Then if one takes $n_h=2\log N(\cX,d,\epsilon_h)$, twice the metric entropy of $\cX$, that is the logarithm of the cardinality of a minimal $\epsilon_h$-net, we obtain with probability at least $1-e^{-u}$ that $\forall h\geq 0, \forall s\in\cT_h$: $$\label{eq:classical_chaining} \sup_{x\succ s}f(x)-f(s) \leq \sum_{i>h} \psi^{*-1}(u_i, \epsilon_i)\,,$$ where $u_i= u+2\log N(\cX,d,\epsilon_i)+\log(i^a\zeta(a))$. The tree $\cT$ achieving this bound is obtained by computing a minimal $\epsilon$-net at each depth, which can be done efficiently by Algorithm \[alg:greedy\_cover\] if one is satisfied with an almost-optimal heuristic exhibiting an approximation ratio of $\max_{x\in\cX} \sqrt{\log \log \abs{\cB(x,\epsilon)}}$, as discussed in Appendix \[sec:greedy\_cover\]. This technique is often called *classical chaining* [@Dudley1967] and we note that an implementation on real data appears in [@Contal2015]. However the upper bound in Eq. \[eq:classical\_chaining\] is not always tight, for instance for a Gaussian process indexed by an ellipsoid, as discussed in Section \[sec:gp\]. We will present later in Section \[sec:lower\_bound\] an algorithm computing a tree $\cT$ in quadratic time which leads to both a lower and an upper bound on $\sup_{x\succ s}f(x)-f(s)$ when $f$ is a Gaussian process. The previous inequality is particularly convenient when we know a bound on the growth of the metric entropy of $(\cX,d)$, as stated in the following corollary. \[cor:subgamma\_bigoh\] If $f$ is sub-Gamma and there exist $R,D\in\bR$ such that for all $\epsilon>0$, $N(\cX,d,\epsilon) \leq (\frac R \epsilon)^D$, then with probability at least $1-e^{-u}$: $$\forall h\geq 0,\forall s\in\cT_h,~ \sup_{x\succ s}f(x)-f(s) =\cO\Big(\big(c(u + D h)+\sqrt{\nu(u+Dh)}\big) 2^{-h}\Big)\,.$$ With the condition on the growth of the metric entropy, we obtain $u_i = \cO\big(u+D\log R + D i\big)$. With Eq. 
\[eq:classical\_chaining\] for a sub-Gamma process we get, knowing that $\sum_{i=h}^\infty i 2^{-i} =\cO\big(h 2^{-h}\big)$ and $\sum_{i=h}^\infty \sqrt{i}2^{-i}=\cO\big(\sqrt{h}2^{-h}\big)$, that $\omega_h = \cO\Big(\big(c (u+D h) + \sqrt{\nu(u+D h)}\big)2^{-h}\Big)$. Note that the conditions of Corollary \[cor:subgamma\_bigoh\] are fulfilled when $\cX\subset [0,R]^D$ and there is $c\in\bR$ such that for all $x,y\in\cX,~d(x,y) \leq c\norm{x-y}_2$, by simply cutting $\cX$ into hyper-cubes of side length $\epsilon$. We also remark that this condition is very close to the near-optimality dimension of the metric space $(\cX,d)$ defined in [@Bubeck2011]. However our condition constrains the entire search space $\cX$ instead of the near-optimal set $\cX_\epsilon = \big\{ x\in\cX: f(x)\geq \sup_{x^\star\in\cX}f(x^\star)-\epsilon\big\}$. Controlling the dimension of $\cX_\epsilon$ may allow one to obtain an exponential decay of the regret for particular deterministic functions $f$ with a quadratic behavior near their maximum. However, to our knowledge no progress has been made in this direction for stochastic processes without constraining their behavior around the maximum. A reader interested in this subject may look at the recent work by [@Grill2015] on smooth and noisy functions with unknown smoothness, and the works by [@Freitas2012] or [@Wang2014b] on Gaussian processes without noise and with a quadratic local behavior. Regret Bounds for Bandit Algorithms {#sec:regret} =================================== Now that we have a tool to discretize $\cX$ at a given accuracy, we show how to derive an optimization strategy on $\cX$. 
High Confidence Intervals ------------------------- Assume that given $i-1$ observations $Y_{i-1}=(y_1,\dots,y_{i-1})$ at queried locations $X_{i-1}$, we can compute $L_i(x,u)$ and $U_i(x,u)$ for all $u>0$ and $x\in\cX$, such that: $$\Pr\Big[ f(x) \in \big(L_i(x,u), U_i(x,u)\big) \Big] \geq 1-e^{-u}\,.$$ Then for any $h(i)>0$ that we will carefully choose later, we obtain by a union bound on $\cT_{h(i)}$ that: $$\Pr\Big[ \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u+n_{h(i)}), U_i(x,u+n_{h(i)})\big) \Big] \geq 1-e^{-u}\,.$$ And by an additional union bound on $\bN$ that: $$\label{eq:ucb} \Pr\Big[ \forall i\geq 1, \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u_i), U_i(x,u_i)\big) \Big] \geq 1-e^{-u}\,,$$ where $u_i=u+n_{h(i)}+\log\big(i^a\zeta(a)\big)$ for any $a>1$ and $\zeta$ is the Riemann zeta function. Our *optimistic* decision rule for the next query is thus: $$\label{eq:argmax} x_i \in \argmax_{x\in\cT_{h(i)}} U_i(x,u_i)\,.$$ Combining this with Corollary \[cor:chaining\], we are able to prove the following bound linking the regret with $\omega_{h(i)}$ and the width of the confidence interval. \[thm:regret\_bound\] When for all $i\geq 1$, $x_i \in \argmax_{x\in \cT_{h(i)}} U_i(x,u_i)$ we have with probability at least $1- 2 e^{-u}$: $$R_t = t \sup_{x\in\cX} f(x)-\sum_{i=1}^t f(x_i) \leq \sum_{i=1}^t\Big\{ \omega_{h(i)} + U_i(x_i,u_i)-L_i(x_i,u_i)\Big\}\,.$$ Using Theorem \[thm:chaining\] we have that, $$\forall h\geq 0,\,\sup_{x\in\cX}f(x) \leq \omega_h+\sup_{x\in\cX}f(p_h(x))\,,$$ holds with probability at least $1-e^{-u}$. Since $p_{h(i)}(x) \in \cT_{h(i)}$ for all $x\in\cX$, we can invoke Eq. \[eq:ucb\]: $$\forall i\geq 1\,~ \sup_{x\in\cX} f(x)-f(x_i) \leq \omega_{h(i)}+\sup_{x\in\cT_{h(i)}}U_i(x,u_i)-L_i(x_i,u_i)\,,$$ holds with probability at least $1-2e^{-u}$. Now by our choice for $x_i$, $\sup_{x\in\cT_{h(i)}}U_i(x,u_i) = U_i(x_i,u_i)$, proving Theorem \[thm:regret\_bound\]. 
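The decision rule of Eq. \[eq:argmax\] and the confidence levels $u_i$ are straightforward to implement once $U_i$ is available; the following Python sketch (function names are ours, and $U_i$ is taken as a black box) illustrates both:

```python
import math

def zeta(a, terms=100_000):
    # Truncated Riemann zeta function; accurate enough for a > 1.
    return sum(k ** -a for k in range(1, terms + 1))

def confidence_level(i, u, n_h, a=2.0):
    """u_i = u + n_{h(i)} + log(i^a * zeta(a)), as in the union bounds above."""
    return u + n_h + a * math.log(i) + math.log(zeta(a))

def next_query(nodes, upper_bound, u_i):
    """Optimistic rule of Eq. [eq:argmax]: maximize U_i(., u_i) over T_{h(i)}."""
    return max(nodes, key=lambda x: upper_bound(x, u_i))
```

For instance, with a toy upper bound peaked at $0.3$, `next_query([0.0, 0.25, 0.5], lambda x, u: -(x - 0.3) ** 2, 1.0)` returns `0.25`, the candidate closest to the peak.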
In order to select the level of discretization $h(i)$ to reduce the bound on the regret, it is required to have explicit bounds on $\omega_i$ and the confidence intervals. For example by choosing $$h(i)=\min\Big\{h\in\bN: \omega_h \leq \sqrt{\frac{\log i}{i}} \Big\}\,,$$ we obtain $\sum_{i=1}^t \omega_{h(i)} \leq 2\sqrt{t\log t}$ as shown later. The performance of our algorithm is thus linked with the decrease rate of $\omega_i$, which characterizes the “size” of the optimization problem. We first study the case where $f$ is distributed as a Gaussian process, and then the case of a sum of squared Gaussian processes. Results for Gaussian Processes {#sec:gp} ------------------------------ The problem of regret minimization where $f$ is sampled from a Gaussian process has been introduced by [@Srinivas2010] and [@grunewalder2010]. Since then, it has been extensively adapted to various settings of Bayesian optimization with successful practical applications. In the first work the authors address the cumulative regret and assume that either $\cX$ is finite or that the samples of the process are Lipschitz with high probability, where the distribution of the Lipschitz constant has Gaussian tails. In the second work the authors address the simple regret without noise and with known horizon; they assume that the canonical pseudo-metric $d$ is bounded by a given power of the supremum norm. In both works the input space is required to be a subset of $\bR^D$. The analysis in our paper allows us to derive similar bounds in a nonparametric fashion where $(\cX,d)$ is an arbitrary metric space. Note that if $(\cX,d)$ is not totally bounded, then the supremum of the process is infinite with probability one, and so is the regret of any algorithm. #### Confidence intervals and information gain. First, $f$ being distributed as a Gaussian process, it is easy to derive confidence intervals given a set of observations. 
Writing $\mat{Y}_i$ for the vector of noisy values at points in $X_i$, we find by Bayesian inference [@Rasmussen2006] that: $$\Pr\Big[ \abs{f(x)-\mu_i(x)} \geq \sigma_i(x)\sqrt{2u}\Big] < e^{-u}\,,$$ for all $x\in\cX$ and $u>0$, where: $$\begin{aligned} \label{eq:mu} \mu_i(x) &= \mat{k}_i(x)^\top \mat{C}_i^{-1}\mat{Y}_i\\ \label{eq:sigma} \sigma_i^2(x) &= k(x,x) - \mat{k}_i(x)^\top \mat{C}_i^{-1} \mat{k}_i(x)\,,\end{aligned}$$ where $\mat{k}_i(x) = [k(x_j, x)]_{x_j \in X_i}$ is the covariance vector between $x$ and $X_i$, $\mat{C}_i = \mat{K}_i + \eta^2 \mat{I}$, $\mat{K}_i=[k(x,x')]_{x,x' \in X_i}$ is the covariance matrix, and $\eta^2$ is the variance of the Gaussian noise. Therefore the width of the confidence interval in Theorem \[thm:regret\_bound\] can be bounded in terms of $\sigma_{i-1}$: $$U_i(x_i,u_i)-L_i(x_i,u_i) \leq 2\sigma_{i-1}(x_i)\sqrt{2u_i}\,.$$ Furthermore it is proved in [@Srinivas2012] that the sum of the posterior variances at the queried points, $\sigma_{i-1}^2(x_i)$, is bounded in terms of the information gain: $$\sum_{i=1}^t \sigma_{i-1}^2(x_i) \leq c_\eta \gamma_t\,,$$ where $c_\eta=\frac{2}{\log(1+\eta^{-2})}$ and $\gamma_t = \max_{X_t\subseteq\cX:\abs{X_t}=t} I(X_t)$ is the maximum information gain of $f$ obtainable by a set of $t$ points. Note that for Gaussian processes, the information gain is simply $I(X_t)=\frac 1 2 \log\det(\mat{I}+\eta^{-2}\mat{K}_t)$. Finally, using the Cauchy-Schwarz inequality and the fact that $u_t$ is increasing, we have with probability at least $1- 2 e^{-u}$: $$\label{eq:gp_regret} R_t \leq 2\sqrt{2 c_\eta t u_t \gamma_t} + \sum_{i=1}^t \omega_{h(i)}\,.$$ The quantity $\gamma_t$ heavily depends on the covariance of the process. On one extreme, if $k(\cdot,\cdot)$ is a Kronecker delta, $f$ is a Gaussian white noise process and $\gamma_t=\cO(t)$. 
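Before turning to specific covariances, note that Eq. \[eq:mu\] and \[eq:sigma\] can be computed directly with NumPy; the following sketch is a plain transcription of the displayed formulas, with a squared exponential kernel chosen by us as an example:

```python
import numpy as np

def sq_exp(x, y):
    # Squared exponential covariance k(x, y) = exp(-(x - y)^2 / 2), 1-D inputs.
    return np.exp(-0.5 * (x - y) ** 2)

def gp_posterior(X_obs, y_obs, X_query, kernel=sq_exp, noise_var=1e-6):
    """Posterior mean and variance at X_query:
    mu = k^T C^{-1} Y and sigma^2 = k(x,x) - k^T C^{-1} k, C = K + eta^2 I."""
    K = kernel(X_obs[:, None], X_obs[None, :])
    C = K + noise_var * np.eye(len(X_obs))
    k_star = kernel(X_query[:, None], X_obs[None, :])   # shape (q, n)
    solved = np.linalg.solve(C, k_star.T)               # C^{-1} k, shape (n, q)
    mu = solved.T @ y_obs
    var = kernel(X_query, X_query) - np.sum(k_star * solved.T, axis=1)
    return mu, var

X_obs = np.array([0.0])
y_obs = np.array([1.0])
mu, var = gp_posterior(X_obs, y_obs, np.array([0.0, 10.0]))
```

With a single nearly noiseless observation at $x=0$, the posterior interpolates it (mean close to $1$, variance close to $0$ there) and reverts to the prior far away (mean $0$, variance $1$).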
On the other hand, [@Srinivas2012] proved the following inequalities for widely used covariance functions and $\cX\subset \bR^D$: - linear covariance $k(x,y)=x^\top y$, $\gamma_t=\cO\big(D \log t\big)$. - squared exponential covariance $k(x,y)=e^{-\frac 1 2 \norm{x-y}_2^2}$, $\gamma_t=\cO\big((\log t)^{D+1}\big)$. - Matérn covariance, $k(x,y)=\frac{2^{p-1}}{\Gamma(p)}\big(\sqrt{2p}\norm{x-y}_2\big)^p K_p\big(\sqrt{2p}\norm{x-y}_2\big)$, where $p>0$ and $K_p$ is the modified Bessel function, $\gamma_t=\cO\big( (\log t) t^a\big)$, with $a=\frac{D(D+1)}{2p+D(D+1)}<1$ for $p>1$. #### Bounding $\omega_h$ with the metric entropy. We now provide a policy for choosing $h(i)$ which minimizes the right-hand side of Eq. \[eq:gp\_regret\]. When an explicit upper bound on the metric entropy of the form $\log N(\cX,d,\epsilon)\leq \cO(-D \log \epsilon)$ holds, we can use Corollary \[cor:subgamma\_bigoh\] which gives: $$\omega_h\leq\cO\big(\sqrt{u+D h}2^{-h}\big)\,.$$ This upper bound holds true in particular for Gaussian processes with $\cX\subset[0,R]^D$ and, for all $x,y\in\cX$, $d(x,y) \leq \cO\big(\norm{x-y}_2\big)$. For a stationary covariance this becomes $k(x,x)-k(x,y)\leq \cO\big(\norm{x-y}_2\big)$, which is satisfied by the usual covariances used in Bayesian optimization such as the squared exponential covariance or the Matérn covariance with parameter $p\in\big\{\frac 1 2, \frac 3 2, \frac 5 2\big\}$. For these values of $p$ it is well known that $k(x,y)=h_p\big(\sqrt{2p}\norm{x-y}_2\big) \exp\big(-\sqrt{2p}\norm{x-y}_2\big)$, with $h_{\frac 1 2}(\delta)=1$, $h_{\frac 3 2}(\delta)=1+\delta$ and $h_{\frac 5 2}(\delta)=1+\delta+\frac 1 3 \delta^2$. 
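The half-integer Matérn closed forms quoted above are easy to evaluate numerically; the following Python sketch (ours, for illustration only) computes $k(x,y)=h_p\big(\sqrt{2p}\,r\big)e^{-\sqrt{2p}\,r}$ with $r=\norm{x-y}_2$:

```python
import math

def matern_half_integer(r, p):
    """k = h_p(sqrt(2p) r) exp(-sqrt(2p) r) for p in {1/2, 3/2, 5/2},
    with h_{1/2} = 1, h_{3/2}(d) = 1 + d and h_{5/2}(d) = 1 + d + d^2/3."""
    d = math.sqrt(2.0 * p) * r
    if p == 0.5:
        h = 1.0
    elif p == 1.5:
        h = 1.0 + d
    elif p == 2.5:
        h = 1.0 + d + d * d / 3.0
    else:
        raise ValueError("closed form only available for p in {1/2, 3/2, 5/2}")
    return h * math.exp(-d)
```

For each admissible $p$ the covariance equals $1$ at $r=0$ and decreases with the distance $r$, consistent with a stationary correlation function.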
Then we see that it suffices to choose $h(i)=\ceil{\frac 1 2 \log_2 i}$ to obtain $\omega_{h(i)} \leq \cO\Big( \sqrt{\frac{u+\frac 1 2 D\log i}{i}} \Big)$ and, since $\sum_{i=1}^t i^{-\frac 1 2}\leq 2 \sqrt{t}$ and $\sum_{i=1}^t \big(\frac{\log i}{i}\big)^{\frac 1 2} \leq 2\sqrt{t\log t}$, $$R_t \leq \cO\Big(\sqrt{t \gamma_t \log t }\Big)\,,$$ holds with high probability. Such a bound holds true in particular for the Ornstein-Uhlenbeck process, for which it was conjectured impossible in [@Srinivas2010] and [@Srinivas2012]. However we do not know suitable bounds for $\gamma_t$ in this case and thus cannot deduce convergence rates. #### Gaussian processes indexed on ellipsoids and RKHS. As mentioned in Section \[sec:psi\_process\], the previous bound on the discretization error is not tight for every Gaussian process. An important example is when the search space is a (possibly infinite dimensional) ellipsoid: $$\cX=\Big\{ x\in \ell^2: \sum_{i\geq 1}\frac{x_i^2}{a_i^2} \leq 1\Big\}\,,$$ where $a\in\ell^2$, $f(x) = \sum_{i\geq 1}x_ig_i$ with $g_i\iid \cN(0,1)$, and the pseudo-metric $d(x,y)$ coincides with the usual $\ell^2$ metric. The study of the supremum of such processes is connected to learning error bounds for kernel machines like Support Vector Machines, as a quantity bounding the learning capacity of a class of functions in an RKHS, see for example [@Mendelson2002]. It can be shown by geometrical arguments that $\E \sup_{x: d(x,s)\leq \epsilon} f(x)-f(s) \leq \cO\big(\sqrt{\sum_{i\geq 1}\min(a_i^2,\epsilon^2)}\big)\,,$ and that this supremum exhibits $\chi^2$-tails around its expectation, see for example [@Boucheron2013] and [@Talagrand2014]. This concentration is not captured by Corollary \[cor:subgamma\_bigoh\]; one has to leverage the construction of Section \[sec:lower\_bound\] to get a tight estimate. 
Therefore the present work forms a step toward efficient and practical online model selection in such classes, in the spirit of [@Rakhlin2014] and [@Gaillard2015]. Results for Quadratic Forms of Gaussian Processes {#sec:gp2} ------------------------------------------------- The preeminent model in Bayesian optimization is by far the Gaussian process. Yet, it is a very common task to attempt to minimize a regret on functions which do not look like Gaussian processes. Consider the typical cases where $f$ has the form of a mean square error or a Gaussian likelihood. In both cases, minimizing $f$ is equivalent to minimizing a sum of squares, which we cannot assume to be sampled from a Gaussian process. To alleviate this problem, we show that this objective fits in our generic setting. Indeed, if we consider that $f$ is a sum of squares of Gaussian processes, then $f$ is sub-Gamma with respect to a natural pseudo-metric. Since our aim is maximization, we will precisely take the opposite of this sum. In this particular setting we allow the algorithm to observe directly the noisy values of the *separate* Gaussian processes, instead of the sum of their squares. To simplify the forthcoming arguments we choose independent and identically distributed processes, but one can remove the covariances between the processes by a Cholesky decomposition of the covariance matrix, and then our analysis adapts easily to processes with non-identical distributions. #### The stochastic smoothness of squared GPs. Let $f=-\sum_{j=1}^N g_j^2(x)$, where $\big(g_j\big)_{1\le j\le N}$ are independent centered Gaussian processes $g_j\iid\cGP(0,k)$ with stationary covariance $k$ such that $k(x,x)=\kappa$ for every $x\in\cX$. 
We have for $x,y\in\cX$ and $\lambda<(2\kappa)^{-1}$: $$\log\E e^{\lambda(f(x)-f(y))} = -\frac{N}{2}\log\Big(1-4\lambda^2(\kappa^2-k^2(x,y))\Big)\,.$$ Therefore with $d(x,y)=2\sqrt{\kappa^2-k^2(x,y)}$ and $\psi(\lambda,\delta)=-\frac{N}{2}\log\big(1-\lambda^2\delta^2\big)$, we conclude that $f$ is a $(d,\psi)$-process. Since $-\log(1-x^2) \leq \frac{x^2}{1-x}$ for $0\leq x <1$, which can be proved by series comparison, we obtain that $f$ is sub-Gamma with parameters $\nu=N$ and $c=1$. Now with Eq. \[eq:sub\_gamma\_tail\], $$\ell_u(x,y)\leq (u+\sqrt{2 u N})d(x,y)\,.$$ Furthermore, we also have that $d(x,y)\leq \cO(\norm{x-y}_2)$ for $\cX\subseteq \bR^D$ and standard covariance functions, including the squared exponential covariance and the Matérn covariance with parameter $p=\frac 3 2$ or $p=\frac 5 2$. Then Corollary \[cor:subgamma\_bigoh\] leads to: $$\label{eq:omega_gp2} \forall i\geq 0,~ \omega_i \leq \cO\Big( \big(u+D i + \sqrt{N(u+D i)}\big)2^{-i}\Big)\,.$$ #### Confidence intervals for squared GPs. As mentioned above, we consider here that we are given separate noisy observations $\mat{Y}_i^j$ for each of the $N$ processes. Deriving confidence intervals for $f$ given $\big(\mat{Y}_i^j\big)_{j\leq N}$ is a tedious task since the posterior processes $g_j$ given $\mat{Y}_i^j$ are neither standard nor centered. We propose here a solution based directly on a careful analysis of Gaussian integrals. The proof of the following technical lemma can be found in Appendix \[sec:gp2\_tail\]. \[lem:gp2\_tail\] Let $X\sim\cN(\mu,\sigma^2)$ and $s>0$. We have: $$\Pr\Big[ X^2 \not\in \big(l^2, u^2\big)\Big] < e^{-s^2}\,,$$ for $u=\abs{\mu}+\sqrt{2} \sigma s$ and $l=\max\big(0,\abs{\mu}-\sqrt{2}\sigma s\big)$. Using this lemma, we compute the confidence interval for $f(x)$ by a union bound over the $N$ processes. Denoting by $\mu_i^j$ and $\sigma_i^j$ the posterior expectation and deviation of $g_j$ given $\mat{Y}_i^j$ (computed as in Eq. 
\[eq:mu\] and \[eq:sigma\]), the confidence interval follows for all $x\in\cX$: $$\label{eq:gp2_ci} \Pr\Big[ \forall j\leq N,~ g_j^2(x) \in \big( L_i^j(x,u), U_i^j(x,u) \big)\Big] \geq 1- e^{-u}\,,$$ where $$\begin{aligned} U_i^j(x,u) &= \Big(\abs{\mu_i^j(x)}+\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\\ \text{ and } L_i^j(x,u) &= \max\Big(0, \abs{\mu_i^j(x)}-\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\,.\end{aligned}$$ We are now ready to use Theorem \[thm:regret\_bound\] to control $R_t$ by a union bound over all $i\in\bN$ and $x\in\cT_{h(i)}$. Note that under the event of Theorem \[thm:regret\_bound\], we have the following: $$\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ g_j^2(x) \in \big(L_i^j(x,u_i), U_i^j(x,u_i)\big)\,.$$ Then we also have: $$\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ \abs{\mu_i^j(x)} \leq \abs{g_j(x)}+\sqrt{2(u_i+\log N)}\sigma_{i-1}^j(x)\,.$$ Since $\mu_0^j(x)=0$, $\sigma_0^j(x)=\kappa$ and $u_0\leq u_i$, we obtain $\abs{\mu_i^j(x)} \leq \sqrt{2(u_i+\log N)}\big(\sigma_{i-1}^j(x)+\kappa\big)$. Therefore Theorem \[thm:regret\_bound\] gives, with probability at least $1-2e^{-u}$: $$R_t \leq \sum_{i=1}^t\Big\{\omega_{h(i)} + 8\sum_{j\leq N}(u_i+\log N)\big(\sigma_{i-1}^j(x_i)+\kappa\big)\sigma_{i-1}^j(x_i) \Big\}\,.$$ It is now possible to proceed as in Section \[sec:gp\] and bound the sum of posterior variances with $\gamma_t$: $$R_t \leq \cO\Big( N u_t \big(\sqrt{t \gamma_t} + \gamma_t\big) + \sum_{i=1}^t \omega_{h(i)} \Big)\,.$$ As before, under the conditions of Eq. \[eq:omega\_gp2\] and choosing the discretization level $h(i)=\ceil{\frac 1 2 \log_2 i}$, we obtain $\omega_{h(i)}=\cO\Big(i^{-\frac 1 2} \big(u+\frac 1 2 D\log i\big)\sqrt{N}\Big)$, and since $\sum_{i=1}^t i^{-\frac 1 2} \log i\leq 2 \sqrt{t}\log t$, $$R_t \leq \cO\Big(N \big(\sqrt{t\gamma_t \log t}+\gamma_t\big) + \sqrt{Nt}\log t\Big)\,,$$ holds with high probability. 
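The confidence bounds $U_i^j$ and $L_i^j$ above are elementary to compute from the posterior moments; a minimal Python sketch (our naming, taking the posterior mean and deviation as inputs) reads:

```python
import math

def squared_gp_ci(mu, sigma, u, N):
    """Interval (L, U) for g_j^2(x) following Eq. [eq:gp2_ci]:
    U = (|mu| + sqrt(2(u + log N)) sigma)^2 and
    L = max(0, |mu| - sqrt(2(u + log N)) sigma)^2."""
    w = math.sqrt(2.0 * (u + math.log(N))) * sigma
    U = (abs(mu) + w) ** 2
    L = max(0.0, abs(mu) - w) ** 2
    return L, U
```

The interval always contains $\mu^2$ and collapses to the single point $\mu^2$ when the posterior deviation vanishes.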
Tightness Results for Gaussian Processes {#sec:lower_bound} ======================================== We present in this section a strong result on the tree $\cT$ obtained by Algorithm \[alg:tree\_lb\]. Let $f$ be a centered Gaussian process $\cGP(0,k)$ with arbitrary covariance $k$. We show that a converse of Theorem \[thm:chaining\] is true with high probability. A High Probabilistic Lower Bound on the Supremum ------------------------------------------------ We first recall that for Gaussian processes we have $\psi^{*-1}(u_i,\delta)=\cO\big(\delta \sqrt{u+n_i}\big)$, that is: $$\forall h\geq 0, \forall s\in\cT_h,~\sup_{x\succ s}f(x)-f(s) \leq \cO\Big(\sup_{x\succ s}\sum_{i>h}\Delta_i(x) \sqrt{u+n_i}\Big)\,,$$ with probability at least $1-e^{-u}$. In what follows, we fix the geometric sequence $n_i=2^i$ for all $i\geq 1$. Therefore we have the following upper bound: Fix any $u>0$ and let $\cT$ be constructed as in Algorithm \[alg:tree\_lb\]. Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$, $$\sup_{x\succ s} f(x)-f(s) \leq c_u \sup_{x\succ s} \sum_{i>h} \Delta_i(x)2^{\frac i 2}\,,$$ holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$. To show the tightness of this result, we prove the following probabilistic bound: \[thm:lower\_bound\] Fix any $u>0$ and let $\cT$ be constructed as in Algorithm \[alg:tree\_lb\]. Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$, $$\sup_{x\succ s} f(x)-f(s) \geq c_u \sup_{x\succ s}\sum_{i=h}^\infty \Delta_i(x)2^{\frac i 2}\,,$$ holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$. This lower bound has substantial theoretical and practical benefits. It first says that we cannot discretize $\cX$ in a finer way than Algorithm \[alg:tree\_lb\] does, up to a constant factor. 
This also means that even if the search space $\cX$ is “smaller” than what the metric entropy suggests, as for ellipsoids, then Algorithm \[alg:tree\_lb\] finds the correct “size”. To our knowledge, this result is the first construction of a tree $\cT$ leading to a lower bound at every depth with high probability. The proof of this theorem shares some similarity with the constructions used to obtain lower bounds in expectation, see for example [@Talagrand2014] or [@Ding2011] for a tractable algorithm. Analysis of Algorithm \[alg:tree\_lb\] -------------------------------------- Algorithm \[alg:tree\_lb\] proceeds as follows. It first computes $(\cT_h)_{h\geq 0}$, a succession of $\epsilon_h$-nets as in Section \[sec:psi\_process\] with $\epsilon_h=\Delta 2^{-h}$, where $\Delta$ is the diameter of $\cX$. The parent of a node is set to the closest node in the upper level, $$\forall t\in\cT_h,~ p(t) = \argmin_{s\in\cT_{h-1}} d(t,s)\,.$$ Therefore we have $d(t,p(t))\leq \epsilon_{h-1}$ for all $t\in\cT_h$. Moreover, by looking at how the $\epsilon_h$-net is computed, we also have $d(t_i,t_j) \geq \epsilon_h$ for all $t_i,t_j\in\cT_h$. These two properties are crucial for the proof of the lower bound. Then, the algorithm updates the tree to make it well balanced, that is, such that no node $t\in\cT_h$ has more than $e^{n_{h+1}-n_h}=e^{2^h}$ children. We note that this condition is already satisfied in every reasonable space, so that the complex procedure that follows is only required in extreme cases. To enforce this condition, Algorithm \[alg:tree\_lb\] starts from the leaves and “prunes” the branches if they outnumber $e^{2^h}$. We remark that this backward step is not present in the literature on generic chaining, and is needed for our objective of a lower bound with high probability. By doing so, it creates a node called a *pruned node* which takes the pruned branches as children. 
For this construction to be tight, the pruning step has to be done carefully. Algorithm \[alg:tree\_lb\] attaches to every pruned node a value, computed using the values of its children, hence the backward strategy. When pruning branches, the algorithm keeps the $e^{2^h}$ nodes with maximum values and displaces the others. The intuition behind this strategy is to avoid pruning branches that already contain pruned nodes. Finally, note that this pruning step may create unbalanced pruned nodes when the number of nodes at depth $h$ is far larger than $e^{2^h}$. When this is the case, Algorithm \[alg:tree\_lb\] restarts the pruning with the updated tree to recompute the values. Thanks to the doubly exponential growth in the balance condition, this cannot occur more than $\log \log \abs{\cX}$ times and the total complexity is $\cO\big(\abs{\cX}^2\big)$. Computing the Pruning Values and Anti-Concentration Inequalities ---------------------------------------------------------------- We end this section by describing the values used for the pruning step. We need a function $\varphi(\cdot,\cdot,\cdot,\cdot)$ satisfying the following anti-concentration inequality. For all $m\in\bN$, let $s\in\cX$ and $t_1,\dots,t_m\in\cX$ such that $\forall i\leq m,~p(t_i)=s$ and $d(s,t_i)\leq \Delta$, and finally $d(t_i,t_j)\geq \alpha$ for all $i\neq j$. Then $\varphi$ is such that: $$\label{eq:varphi} \Pr\Big[\max_{i\leq m}f(t_i)-f(s) \geq \varphi(\alpha,\Delta,m,u) \Big]>1-e^{-u}\,.$$ A function $\varphi$ satisfying this hypothesis is described in Lemma \[lem:max\_one\_lvl\] in Appendix \[sec:proof\_lower\_bound\]. 
Then the value $V_h(s)$ of a node $s\in\cT_h$ is computed with $\Delta_i(s) = \sup_{x\succ s} d(x,s)$ as: $$V_h(s) = \sup_{x\succ s} \sum_{i>h} \varphi\Big(\frac 1 2 \Delta_h(x),\Delta_h(x),m,u\Big) \one_{p_i(x)\text{ is a pruned node}}\,.$$ The two steps proving Theorem \[thm:lower\_bound\] are: first, show that $\sup_{x\succ s}f(x)-f(s) \geq c_u V_h(s)$ for some $c_u>0$ with probability at least $1-e^{-u}$; second, show that $V_h(s) \geq c_u'\sup_{x\succ s}\sum_{i>h}\Delta_i(x)2^{\frac i 2}$ for some $c_u'>0$. The full proof of this theorem can be found in Appendix \[sec:proof\_lower\_bound\]. #### Acknowledgements. We thank Cédric Malherbe and Kevin Scaman for fruitful discussions. Algorithms to Compute an Optimal Tree {#sec:algo} ===================================== *(The pseudocode of Algorithms \[alg:tree\_lb\] and \[alg:greedy\_cover\] did not survive extraction; only fragments of the initialization remain, e.g. $h \gets 0$, $\cT \gets \{x_0\}$ for arbitrary $x_0\in\cX$, $\forall t\in\cT_h,~ V_h(t) \gets 0$, and the final return of $\cT$.)* Proof of Theorem \[thm:chaining\] (Generic Chaining Upper Bound) {#sec:proof_chaining} ================================================================ We give here the proof of Theorem \[thm:chaining\], which upper-bounds the supremum $\sup_{x\succ s}f(x)-f(s)$ in terms of $\omega_h$. For any $s\in\cT_h$ and any $x\succ s$, $f(x)-f(s) = \sum_{i>h} f(p_i(x))-f(p_{i-1}(x))$. Now by definition of $\ell_u$ we have: $$\Pr\Big[f(p_i(x))-f(p_{i-1}(x)) \geq \ell_{u_i}\big(p_i(x),p_{i-1}(x)\big)\Big] < e^{-u_i}\,.$$ Thanks to the tree structure, $\abs{\Big\{\big(p_i(x),p_{i-1}(x)\big) : x\in\cX\Big\}} \leq e^{n_i}$. By a union bound we have: $$\Pr\Big[\exists x\in\cX,\, f(p_i(x))-f(p_{i-1}(x)) > \ell_{u_i}\big(p_i(x),p_{i-1}(x)\big)\Big] < e^{n_i}e^{-u_i}\,.$$ With another union bound over $i>0$, if we denote by $E^c$ the following event: $$E^c = \Big\{\exists i> 0, \exists x\in\cX,\, f(p_i(x))-f(p_{i-1}(x)) > \ell_{u_i}\big(p_i(x),p_{i-1}(x)\big)\Big\}\,,$$ we have $\Pr[E^c] < \sum_{i> 0}e^{n_i-u_i}$. 
By setting $u_i = u + n_i + \log\big(i^a \zeta(a)\big)$ for $a>1$ we have $\Pr[E^c] < e^{-u}$, that is $\Pr\Big[\sum_{i>h} f(p_i(x))-f(p_{i-1}(x)) \geq \sum_{i>h}\ell_{u_i}\big(p_i(x),p_{i-1}(x)\big)\Big]<e^{-u}$. Analysis of <span style="font-variant:small-caps;">GreedyCover</span> {#sec:greedy_cover} ===================================================================== #### Approximation ratio. The exact computation of an optimal $\epsilon$-cover is NP-hard. We demonstrate here how to build in practice a near-optimal $\epsilon$-cover using a greedy algorithm on a graph. First, remark that for any fixed $\epsilon$ we can define a graph $\cG$ where the nodes are the elements of $\cX$ and there is an edge between $x$ and $y$ if and only if $d(x,y)\leq \epsilon$. The size of this construction is $\cO(\abs{\cX}^2)$. The sparse structure of the underlying graph can be exploited to get an efficient representation. The problem of finding an optimal $\epsilon$-cover reduces to the problem of finding a minimal dominating set on $\cG$. We can therefore use the greedy Algorithm \[alg:greedy\_cover\], which enjoys an approximation factor of $\log d_\mathrm{max}(\cG)$, where $d_\mathrm{max}(\cG)$ is the maximum degree of $\cG$, equal to $\max_{x\in\cX}\abs{\cB(x,\epsilon)}$. An interested reader may see for example [@Johnson1973] for a proof of NP-hardness and approximation results. This construction leads to an additional (almost constant) term of $\max_{x\in\cX}\sqrt{\log \log \abs{\cB(x,\epsilon)}}$ in the right-hand side of Eq. \[eq:classical\_chaining\]. Finally, note that this approximation is optimal unless $\P=\NP$, as shown in [@Raz1997]. #### Computation on a compact space $\cX$. Even if all the theoretical analysis of this paper assumes that $\cX$ is finite for measurability reasons, this is not satisfying from a numerical point of view. We show here that if the search space $\cX$ is compact, then there is a way to reduce computations to the finite case. 
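Returning to the finite case, a direct Python sketch of this greedy dominating-set heuristic (our simplification of <span style="font-variant:small-caps;">GreedyCover</span>, on a finite point set with an explicit metric) is:

```python
def greedy_cover(points, dist, eps):
    """Greedy dominating set of the graph linking x and y iff dist(x,y) <= eps:
    repeatedly pick the point whose eps-ball covers the most uncovered points."""
    n = len(points)
    balls = [{j for j in range(n) if dist(points[i], points[j]) <= eps}
             for i in range(n)]
    uncovered, cover = set(range(n)), []
    while uncovered:
        best = max(range(n), key=lambda i: len(balls[i] & uncovered))
        cover.append(best)
        uncovered -= balls[best]
    return cover

pts = [0.0, 0.1, 0.2, 1.0, 1.1]
cover = greedy_cover(pts, lambda a, b: abs(a - b), 0.2)
```

On this toy instance the two clusters are each covered by a single center, so the returned cover has size $2$ and every point lies within $\epsilon$ of a center.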
First remark that if $(\cX,d)$ is compact, then there exists a uniform distribution $\mu$ on $\cX$. The following lemma describes the probability of getting an $\epsilon$-net via uniform sampling in $\cX$. Let $\mu$ be a uniform distribution on $\cX$, let $m=N(\cX,d,\epsilon)$, and let $X_n=(x_1,\dots,x_n)$ be $n$ points distributed independently according to $\mu$ with $n\geq m(\log m+u)$. Then with probability at least $1-e^{-u}$, $X_n$ is a $2\epsilon$-net of $\cX$. Let $T$ be an $\epsilon$-net on $\cX$ of cardinality $\abs{T}=m$. Then the probability $P^c$ that there exists $t\in T$ such that $\min_{i\leq n} d(t,x_i)>\epsilon$ is less than: $$P^c \leq \sum_{t\in T} \Pr\Big[\forall i\leq n,~ x_i \not\in \cB(t,\epsilon)\Big]\,.$$ Since $\mu$ assigns equal probability mass to every ball of radius $\epsilon$, $P^c \leq m \Big(\frac{m-1}{m}\Big)^n$. With $\log\frac{m}{m-1}\geq \frac 1 m$, we have for $n\geq m(\log m+u)$ that, $$P^c \leq e^{-u}\,.$$ By the triangle inequality, with probability at least $1-e^{-u}$, $X_n$ is a $2\epsilon$-net. Therefore when we want to compute an $\epsilon$-net on a compact $\cX$, an efficient way is to first sample $X_n=(x_1,\dots,x_n)$ uniformly with $n\geq m(\log m+u)$ and $m=N(\cX,d,\frac 1 4 \epsilon)$, which gives an $\frac 1 2 \epsilon$-net with probability at least $1-e^{-u}$. Then running <span style="font-variant:small-caps;">GreedyCover</span>$\big(\frac 1 2 \epsilon, X_n\big)$ outputs an $\epsilon$-net of $\cX$ with probability at least $1-e^{-u}$. Proof of Lemma \[lem:gp2\_tail\] (Tails of Squared Gaussian) {#sec:gp2_tail} ============================================================ We provide here the proof of Lemma \[lem:gp2\_tail\], which gives confidence intervals for squared Gaussian variables. We actually prove a slightly stronger result which improves the tightness of the confidence interval, but is not used by our theoretical analysis. Let $X\sim\cN(\mu,\sigma^2)$ with $\mu\geq 0$ without loss of generality. 
Write $\erf(a)=\frac{2}{\sqrt{\pi}}\int_0^a e^{-t^2} \diff t$ and $\erfc(a)=1-\erf(a)$. For all $0<l<u\in\bR$ we have: $$\begin{aligned} \Pr\Big[X^2 \not\in (l,u) \Big] &= \Pr\Big[X \not\in (l,u)\cup(-u,-l) \Big]\\ &= \frac 1 2 \Big( \erfc\Big(\frac{u-\mu}{\sqrt{2}\sigma}\Big) + \erfc\Big(\frac{u+\mu}{\sqrt{2}\sigma}\Big) + \erf\Big(\frac{\mu+l}{\sqrt{2}\sigma}\Big) - \erf\Big(\frac{\mu-l}{\sqrt{2}\sigma}\Big) \Big)\,. \end{aligned}$$ Fix $s>0$ and $u=\mu+\sqrt{2}\sigma s$. If $l \leq \mu-\sqrt{2}\sigma s$, which means $s<\mu(\sqrt{2}\sigma)^{-1}$, we get: $$\Pr\big[X^2 \not\in (l^2,u^2) \big] \leq \frac 1 2 \Big( \erfc(s) + \erfc\big(\sqrt{2}\mu\sigma^{-1}+s\big) + \erf\big(\sqrt{2}\mu\sigma^{-1}-s\big) - \erf(s) \Big)\,.$$ Remarking that $\erfc\big(\sqrt{2}\mu\sigma^{-1}+s\big)+\erf\big(\sqrt{2}\mu\sigma^{-1}-s\big)\leq 1$, we obtain: $$\Pr\big[X^2\not\in(l^2,u^2)\big] \leq \erfc(s)\,.$$ Now for $s>\mu(\sqrt{2}\sigma)^{-1}$, if $l\leq \sqrt{2}\sigma \erf^{-1}\Big(\frac 1 2 \erf(\sqrt{2}\mu\sigma^{-1}+s)-\frac 1 2 \erf(s)\Big)$ we have that $\erf\Big(\frac{\mu+l}{\sqrt{2}\sigma}\Big)-\erf\Big(\frac{\mu-l}{\sqrt{2}\sigma}\Big) \leq 2\erf\big(\frac{l}{\sqrt{2}\sigma}\big) \leq \erf(\sqrt{2}\mu\sigma^{-1}+s)-\erf(s)$. Therefore we also get: $$\Pr\Big[X^2 \not\in (l^2,u^2) \Big] \leq \erfc(s)\,.$$ We finish the proof of Lemma \[lem:gp2\_tail\] by the standard inequality $\erfc(s)\leq e^{-s^2}$. Proof of Theorem \[thm:lower\_bound\] (Generic Chaining Lower Bound) {#sec:proof_lower_bound} ==================================================================== In this section we provide the proof of the high probabilistic lower bound obtained via Algorithm \[alg:tree\_lb\]. The proof is given for $f$ being a Gaussian process. We note that the result remains valid for other stochastic processes as long as Lemma \[lem:max\_normal\] and \[lem:comparison\] hold. 
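The tail bound of Lemma \[lem:gp2\_tail\] proved above can be sanity-checked by simulation; in this Python sketch (illustrative only, with parameters chosen by us) the empirical miss rate stays below $e^{-s^2}$:

```python
import math
import random

def squared_gaussian_ci(mu, sigma, s):
    """(l^2, u^2) from Lemma [lem:gp2_tail]: u = |mu| + sqrt(2) sigma s and
    l = max(0, |mu| - sqrt(2) sigma s), so that P[X^2 outside] < exp(-s^2)."""
    w = math.sqrt(2.0) * sigma * s
    return max(0.0, abs(mu) - w) ** 2, (abs(mu) + w) ** 2

random.seed(0)
mu, sigma, s = 1.0, 0.5, 1.5
lo, hi = squared_gaussian_ci(mu, sigma, s)
n = 100_000
misses = sum(1 for _ in range(n)
             if not lo <= random.gauss(mu, sigma) ** 2 <= hi)
miss_rate = misses / n
```

With these parameters the true miss probability is roughly $0.02$, comfortably below $e^{-s^2}\approx 0.105$, which illustrates the slack the lemma leaves.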
Probabilistic Tools for Gaussian Processes ------------------------------------------ We first prove a probabilistic bound on independent Gaussian variables and then show that a similar bound holds for $f$ via a comparison inequality. \[lem:max\_normal\] Let $(N_i)_{i\leq m}$ be $m$ independent standard normal variables. For $m \geq 2.6 u$ we have with probability at least $1-e^{-u}$ that: $$\max_{i\leq m}N_i \geq \sqrt{\log\frac{m}{2.6 u}}\,.$$ With $N_i \iid \cN(0,1)$ for all $i\leq m$ we obtain for all $\lambda\in\bR$: $$\begin{aligned} \Pr\Big[\max_{i\leq m} N_i \geq \lambda\Big] &= 1-\Pr[\forall i\leq m,\,N_i<\lambda]\\ &= 1-\Pr[N_i<\lambda]^m\\ &= 1-\Phi(\lambda)^m\,, \end{aligned}$$ where $\Phi$ is the standard normal cumulative distribution function, which satisfies $\Phi(\lambda) \leq 1-c_1 e^{-\lambda^2}$ with $c_1>0.38$, see for example [@Cote2012]. For $\lambda \leq \sqrt{\log\frac{c_1}{1-e^{-\frac u m}}}$ and $u \leq m \log\frac{1}{1-c_1}$ we obtain $\Phi(\lambda)^m \leq e^{-u}$. Using that $1-e^{-x}\leq x$ for $x\geq 0$, we obtain with $u\leq c_1 m$ that: $$\Pr\Big[\max_{i\leq m}N_i \geq \sqrt{\log\frac{c_1 m}{u}}\Big] \geq 1-e^{-u}\,.$$ The following lemma will be useful to derive anti-concentration inequalities for non-independent Gaussian variables, provided that their $L_2$ distances are large enough. Similar results are well known if one replaces the probabilities by expectations, see for example [@Ledoux1991]. \[lem:comparison\] Let $(X_i)_{i\leq m}$ and $(Y_i)_{i\leq m}$ be Gaussian random variables such that for all $i,j\leq m$, $\E(X_i-X_j)^2 \geq \E(Y_i-Y_j)^2$ and $\E X_i^2 \geq \E Y_i^2$. Then we have for all $\lambda\in\bR$: $$\Pr\Big[\max_{i\leq m} X_i < \lambda-2\sigma\Big] \leq \Pr\Big[\max_{i\leq m} Y_i < \lambda\Big]\,,$$ where $\sigma = \max_{i\leq m}(\E X_i^2)^{\frac 1 2}$. Let $g$ be a Rademacher variable independent of $X$ and $Y$. 
We define $\wt{X}_i = X_i + g(\sigma^2 + \E Y_i^2 - \E X_i^2)^{\frac 1 2}$ and $\wt{Y}_i = Y_i + g \sigma$. With this definition, we have by simple calculus that $\E \wt{X}_i^2 = \E Y_i^2 + \sigma^2 = \E \wt{Y}_i^2$. Furthermore, $\E(\wt{Y}_i-\wt{Y}_j)^2 = \E(Y_i-Y_j)^2$ and $\E(\wt{X}_i-\wt{X}_j)^2 \geq \E(X_i-X_j)^2$ for all $i$ and $j$, that is $\E(\wt{X}_i-\wt{X}_j)^2 \geq \E(\wt{Y}_i-\wt{Y}_j)^2$. Combining this with the previous remark we obtain $\E[\wt{X}_i\wt{X}_j] \leq \E[\wt{Y}_i\wt{Y}_j]$. Using Corollary 3.12 in [@Ledoux1991] we know that for all $\lambda\in\bR$: $$\label{eq:comparison} \Pr\Big[\max_{i\leq m} \wt{X}_i \geq \lambda\Big] \geq \Pr\Big[\max_{i\leq m} \wt{Y}_i \geq \lambda\Big]\,.$$ Now it is easy to check that $\Pr\big[\max_{i\leq m}\wt{Y}_i < \lambda-\sigma\big] \leq \Pr\big[\max_{i\leq m}Y_i<\lambda\big]$ and similarly for $\wt{X}$ that $\Pr\big[\max_{i\leq m}X_i < \lambda-\max_{i\leq m}(\sigma^2+\E Y_i^2 - \E X_i^2)^{\frac 1 2}\big] \leq \Pr\big[\max_{i\leq m}\wt{X}_i<\lambda\big]$. With Eq. \[eq:comparison\] we have: $$\Pr\Big[\max_{i\leq m} X_i<\lambda-\sigma-\max_{i\leq m}(\sigma^2+\E Y_i^2 - \E X_i^2)^{\frac 1 2}\Big] \leq \Pr\Big[\max_{i\leq m}Y_i<\lambda\Big]\,.$$ Using that $\E X_i^2 \geq \E Y_i^2$, so that $(\sigma^2+\E Y_i^2 - \E X_i^2)^{\frac 1 2} \leq \sigma$ for every $i$, finishes the proof. Proof of the Lower Bound ------------------------ We now use the previous lemmas to bound from below $\sup_{x\succ s}f(x)-f(s)$ for a node $s$ satisfying properties of a pruned node. By doing so, we give the exact formula for the function $\varphi$ in Eq. \[eq:varphi\]. \[lem:max\_one\_lvl\] Let $s\in\cT_h$ and $(t_i)_{i\leq m}$ such that $t_1=s$ and for all $2\leq i\leq m$, $p(t_i)=s$ and $d(s,t_i)\leq \Delta$. If $d(t_i,t_j) \geq \alpha$ for all $i\neq j$ then the following holds with probability at least $1-e^{-u}$ for $3u<m$: $$\max_{i\leq m} f(t_i)-f(s) \geq \frac{\alpha}{\sqrt{2}} \sqrt{\log \frac{m}{3u}} - 2\Delta\,.$$ For $i\leq m$, let $X_i=f(t_i)-f(s)$ and $Y_i \iid \cN(0,\frac{\alpha^2}{2})$ be independent Gaussian variables.
We have $\E(X_i-X_j)^2=d(t_i,t_j)^2\geq \alpha^2 = \E(Y_i-Y_j)^2$ and $\Delta^2\geq \E X_i^2 \geq \alpha^2 > \E Y_i^2$ since $X_1=0$ (so that $\E X_i^2 = d(s,t_i)^2$). Then using Lemma \[lem:comparison\] we know that for all $\lambda\in\bR$: $$\Pr\Big[\max_{i\leq m} X_i<\lambda - 2\Delta\Big] \leq \Pr\Big[\max_{i\leq m}Y_i<\lambda\Big]\,.$$ Now using Lemma \[lem:max\_normal\] we obtain for $m \geq 3u$: $$\Pr\left[\max_{i\leq m} X_i<\frac{\alpha}{\sqrt{2}}\sqrt{\log\frac{m}{3u}} - 2\Delta\right] \leq e^{-u}\,.$$ The following lemma describes the key properties of the tree $\cT$ as computed by Algorithm \[alg:tree\_lb\]. We show that the supremum $\sup_{x\succ s}f(x)-f(s)$ at every depth is bounded from below by the sum of the values found in Lemma \[lem:max\_one\_lvl\], up to constant factors. \[lem:tree\_induction\] Fix any $u>0$ and set accordingly $u_i=u+2^i+i\log 2$ for any $i>0$. For $\cT$ the tree obtained by Algorithm \[alg:tree\_lb\], we have for all $s\in\cT_h$ with probability at least $1-e^{-u_h}$ that: $$\sup_{x\succ s}f(x)-f(s) \geq c_u^{-1} \sup_{x\succ s} V_h(s,x)\,,$$ where $V_h(s,x)=\sum_{i=h}^\infty \Delta_i(x) \Big( \sqrt{2^{i-3}-\frac 1 8 \log(3u_i+3\log 2)}-2\Big)$, and $\Delta_i(x)$ is the radius of the cell of $x$ at depth $i$, and $c_u\in\bR$ depends on $u$ only. We first show that we can restrict the study of $V_h(s,x)$ to only the summands obtained by pruning $\cT$, up to constant factors. To lighten the notation, write: $$b_i := \sqrt{2^{i-3}-\frac 1 8 \log(3u_i+3\log 2)}-2\,.$$ Then for a sequence $t_h=p_h(x), \dots, t_{h+j}=p_{h+j}(x)$ of parents of $x$, if $t_h$ is the only pruned node among them, then, $$\begin{aligned} \sum_{i=h}^{h+j-1} \Delta_i(x) b_i &= \Delta_h(x) \sum_{i=h}^{h+j-1} 2^{h-i} b_i \\ &\leq c_u \Delta_h(x) b_h\,, \end{aligned}$$ where $c_u\in\bR$ depends on $u$ only, and we used that $\Delta_{h+i}(x)$, the radius of the cell at depth $h+i$ containing $x$, decreases geometrically for non-pruned nodes.
By denoting $\cP_h(x)$ the set of parents of $x$ from depth $h$ which are pruned nodes, we have thus proved for all $x\in\cX$: $$\label{eq:tree_induction_lrt} V_h'(s,x) := \sum_{t_i\in\cP_h(x)}\Delta_i(t_i)b_i \geq c_u^{-1} V_h(s,x)\,.$$ We now prove Lemma \[lem:tree\_induction\] by showing that $\sup_{x\succ s}f(x)-f(s) \geq V_h'(s,x^\star)$ for all $x^\star\succ s$ with probability at least $1-e^{-u_h}$, by backward induction on the depth, from the deepest nodes to the shallowest ones. Since for the leaves $\sup_{x\succ s} f(x)-f(s) = 0 = V_h'(s,x^\star)$, the property is initially true. Assume now that it holds at every depth $h'>h$ and let us prove it at depth $h$. Let $s\in\cT_h$ and $x^\star\in\cX$. If $p_{h+1}(x^\star)$ is not pruned, we simply invoke the induction hypothesis with $\sup_{x\succ s}f(x)-f(s) \geq \sup_{x\succ t}f(x)-f(t)$ where $p(t)=s$. Otherwise note that, $$\begin{aligned} \notag \sup_{x\succ s}f(x)-f(s) &= \max_{t:p(t)=s} \Big\{ f(t)-f(s)+ \sup_{x\succcurlyeq t}f(x)-f(t)\Big\}\\ \label{eq:tree_induction_split} &\geq \max_{t:p(t)=s} \Big\{f(t)-f(s)\Big\} + \min_{t:p(t)=s} \Big\{\sup_{x\succcurlyeq t}f(x)-f(t)\Big\}\,. \end{aligned}$$ Since the children have been pruned, we know that their number is $e^{2^h}$. Now thanks to Lemma \[lem:max\_one\_lvl\], with probability at least $1-\frac 1 2 e^{-u_h}$, $$\label{eq:tree_induction_max} \max_{t:p(t)=s} f(t)-f(s) \geq \frac{\Delta_h(x^\star)}{2\sqrt{2}}\sqrt{2^h-\log(3u_h+3\log 2)}-2\Delta_h(x^\star) = \Delta_h(x^\star)b_h\,,$$ where we used that $d(t_i,t_j) \geq \frac 1 2 \Delta_h(x^\star)$ for $p(t_i)=p(t_j)=s$ by construction of $\cT$.
Now by the induction hypothesis and a union bound, we have with probability at least $1-e^{-u_{h+1}+2^h}$ that: $$\label{eq:tree_induction_ih} \min_{t:p(t)=s} \sup_{x\succcurlyeq t}f(x)-f(t) \geq \min_{t:p(t)=s} \sup_{x\succ t} V'_{h+1}(t,x)\,.$$ By construction of the pruning procedure, we know that the child minimizing $\sup_{x\succ t}V'_{h+1}(t,x)$ is the pruned node $p_{h+1}(x^\star)$. Since $u_{h+1}-2^h = u_h+\log 2$, the result of Eq. \[eq:tree\_induction\_ih\] holds with probability at least $1-\frac 1 2 e^{-u_h}$. Combining Eq. \[eq:tree\_induction\_split\] with Eq. \[eq:tree\_induction\_max\] and Eq. \[eq:tree\_induction\_ih\], we thus obtain with probability at least $1-e^{-u_h}$: $$\sup_{x\succ s}f(x)-f(s) \geq V'_h(s,x^\star)\,.$$ This closes the induction and, together with Eq. \[eq:tree\_induction\_lrt\], completes the proof of Lemma \[lem:tree\_induction\]. The proof of Theorem \[thm:lower\_bound\] follows from Lemma \[lem:tree\_induction\] by a union bound on $h\in\bN$ and remarking that $\omega_h \geq \sup_{x\succ s} V_h(s,x)$ up to constant factors.
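The probabilistic ingredients above are also easy to probe by direct simulation. As a standalone numerical illustration (not part of the proof; $m$ and $u$ are arbitrary illustrative values), the following Monte Carlo sketch checks the bound of Lemma \[lem:max\_normal\]:

```python
# Monte Carlo check of Lemma [lem:max_normal]: for m >= 2.6*u,
# Pr[max_i N_i >= sqrt(log(m/(2.6*u)))] >= 1 - exp(-u).
import math
import random

random.seed(0)
m, u = 100, 1.0
thresh = math.sqrt(math.log(m / (2.6 * u)))

trials = 20_000
hits = sum(1 for _ in range(trials)
           if max(random.gauss(0, 1) for _ in range(m)) >= thresh)
freq = hits / trials

assert freq >= 1 - math.exp(-u)   # bound holds, here with a wide margin
print(round(freq, 3), round(1 - math.exp(-u), 3))
```

The margin is wide because the constant $c_1>0.38$ in the proof is loose for moderate thresholds.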
--- abstract: | We present a medium-resolution spectroscopic survey of late-type giant stars at mid-Galactic latitudes (30$^{\circ}<|b|<$60$^{\circ}$), designed to probe the properties of this population to distances of $\sim$9 kpc. Because M giants are generally metal-rich and contamination from thin disk stars is limited by the latitude selection, most of the stars in the survey are expected to be members of the thick disk ($<$\[Fe/H\]$>\sim$-0.6) with some contribution from the metal-rich component of the nearby halo. Here we report first results for 1799 stars. The distribution of radial velocity (RV) as a function of $l$ for these stars shows (1) the expected thick disk population and (2) local metal-rich halo stars moving at high speeds relative to the disk, which in some cases form distinct sequences in RV-$l$ space. High-resolution echelle spectra taken for 34 of these “RV outliers” reveal the following patterns across the \[Ti/Fe\]-\[Fe/H\] plane: seventeen of the stars have abundances reminiscent of the populations present in dwarf satellites of the Milky Way; eight have abundances coincident with those of the Galactic disk and more metal-rich halo; and nine of the stars fall on the locus defined by the majority of stars in the halo. The chemical abundance trends of the RV outliers suggest that this sample consists predominantly of stars accreted from infalling dwarf galaxies. A smaller fraction of stars in the RV outlier sample may have been formed in the inner Galaxy and subsequently kicked to higher eccentricity orbits, but the sample is not large enough to distinguish conclusively between this interpretation and the alternative that these stars represent the tail of the velocity distribution of the thick disk. Our data do not rule out the possibility that a minority of the sample could have formed from gas [*in situ*]{} on their current orbits.
These results are consistent with scenarios where the stellar halo, at least as probed by M giants, arises from multiple formation mechanisms; however, when taken at face value, our results for metal-rich halo giants suggest a much higher proportion to be accreted than found by @carollo07 [@carollo10] and more like the fraction suggested in the analysis by @nissen10 [@nissen11] and @schuster12. We conclude that M giants with large RVs can provide particularly fruitful samples to mine for accreted structures and that some of the velocity sequences may indeed correspond to real physical associations resulting from recent accretion events. author: - 'Allyson A. Sheffield, Steven R. Majewski, Kathryn V. Johnston, Katia Cunha, Verne V. Smith, Andrew M. Cheung, Christina M. Hampton, Trevor J. David, Rachel Wagner-Kaiser, Marshall C. Johnson, Evan Kaplan, Jacob Miller, and Richard J. Patterson' title: 'Identifying Contributions to the Stellar Halo from Accreted, Kicked-Out, and In Situ Populations' --- Introduction ============ Motivation for a Survey of Bright M Giants ------------------------------------------ The formation history of the Milky Way is recorded in the present motions and chemical abundances of its stars. Ideally, to unravel the Milky Way’s history, we would like a catalog containing spatial, kinematical, and chemical data for every star in the Galaxy. Large-scale photometric surveys, such as the Two Micron All Sky Survey (2MASS) and the Sloan Digital Sky Survey (SDSS), are bringing us closer to this goal: both have led to sweeping views of the structure of the Galaxy. Star counts from these surveys allow for detailed studies of the structure of each Galactic component [e.g., @skrutskie; @maj03; @bell08; @juric; @ivezic08] and are rich sources for follow-up spectroscopic studies [e.g., @maj04b; @yanny09]. 
The present study looks at the spectroscopic properties of a sample of relatively nearby ($d\lesssim$ 9 kpc) M giant stars at mid-Galactic latitudes of $30^{\circ}<|b|<60^{\circ}$ selected from the 2MASS catalog. M giants (1) are intrinsically bright stars and, hence, can be easily observed to large distances using even small telescopes; (2) can be readily identified on the basis of near-infrared, $JHK$ photometry and complete samples can be culled from full-sky catalogs (e.g., 2MASS); and (3) are limited to more metal-rich populations. These unique properties of M giants, when combined with the adopted latitude and magnitude selection criteria, make our survey particularly useful for exploring the structure of the thick disk and the metal-rich component of the nearby halo. In this first paper describing our survey, we present the photometric and spectroscopic data for our current thick-disk dominated sample of 1799 stars, but focus on the detection and interpretation of those stars that do not kinematically conform to the typical behavior of the Milky Way thick disk population and are likely to be members of the stellar halo. This stellar halo sample is unique compared to other halo surveys of stars in that it covers an intermediate distance range ($d <$ 9 kpc) and concentrates on a relatively rare halo tracer. For example, our survey volume is wider than that of the Hipparcos survey of the Sun’s closest neighbors [$d_{\rm lim,Hip}\sim$100 pc - @perryman97] as well as kinematically-selected solar neighborhood surveys [@els; @carney96; @schuster12] but is a much more local view of the stellar halo than the SDSS F-G type turnoff stars [visible to 20-40 kpc - @juric; @bell08] or the entire 2MASS M giant catalog [visible out to 100 kpc - @maj03]. 
Our catalog is comparable in probed distances to the spectroscopic studies of the Century Survey Galactic Halo Project [@brown08], RAVE [@steinmetz], and @carollo07 [@carollo10], although these studies employed different, more commonly used halo tracers: blue horizontal branch (BHB) stars; stars – both dwarfs and giants – with $9<I<12$; and main sequence turnoff (MSTO) stars or dwarfs from the SDSS DR7 calibration sample, respectively. An M Giant Survey of the Nearby Galactic Halo --------------------------------------------- In this first analysis of our survey we explore the origin of the M giant population in the nearby stellar halo. There are three broad categories of formation scenarios typically postulated for halo stars: 1. [*In-situ-halo*]{} stars form at comparable radii to their current locations within the dominant dark matter halo progenitor of the Galaxy. For example @els envisioned the stellar halo could be formed from infalling gas, prior to the formation of the disk, during an early monolithic collapse phase for our Galaxy, as seen in the hydrodynamical simulations of @samland03. 2. [*Kicked-out*]{} stars are formed initially more concentrated towards the center of the dominant dark matter halo, within either the bulge or disk, and are subsequently ejected to more eccentric orbits through minor or major mergers, as proposed by @purcell10 and seen in hydrodynamical simulations by @zolotov09 and @mccarthy12.[^1] [Note that there were many earlier studies of this same process that focused on the thickening or destruction of disks rather than the formation of the stellar halo, e.g., @quinn93; @walker96] 3. [*Accreted*]{} stars form in separate dark matter halos that later merge with the Galaxy [as proposed by e.g., @sz78]. Note that in prior theoretical work, the first two categories were both simply termed [*in situ*]{} to indicate more generally stars formed in the dominant galaxy dark matter halo progenitor [as in, e.g., @abadi06]. 
Separate terms are introduced here for clarity. Many prior studies have probed these formation mechanisms by looking at the distribution of stars in different dimensions of phase-space. For example, the importance of an [*accreted*]{} population has traditionally been assessed by looking for residual groupings that are signatures of stars’ original associations whereas [*in-situ-halo*]{} or [*kicked-out*]{} populations are expected to be more smoothly distributed. All-sky photometric surveys have revealed rich substructure in the outer halo (e.g., Sgr tidal tails - Majewski et al. 2003; the Anticenter Tributaries - Grillmair 2006; the Orphan stream - Belokurov et al. 2007) that can best be explained by accretion events [e.g., @bullock01; @bullock05; @bell08; @johnston08; @cooper10]. In contrast, merger debris in the [*nearby*]{} halo fully phase-mixes on a much shorter timescale, leading to the expectation of negligible evidence for accretion identifiable as coherent spatial structures [e.g., @johnston08; @sharma10] and requiring additional dimensions of phase-space data to distinguish formation scenarios in this region. Adding the dimension of line-of-sight velocities helps: conservation of phase-space density during phase-mixing requires that as debris from accretion events becomes less dense with time, it should become colder in velocity [Liouville’s Theorem; see e.g., @helmi99a] and coherent structures may be apparent even once stars are smoothly distributed in space. Indeed, substructure in velocities is apparent statistically over a large volume in BHB stars in the SDSS survey (and shown to be broadly consistent with model stellar halos built within a hierarchical cosmology — see Xue et al. 2010, Cooper et al. 2011) and individual clumps in velocity have also been detected in the halo using metal-poor MSTO stars in the SEGUE survey [@schlaufman11], K giant stars [@maj04a], as well as in the mixed populations in RAVE [@williams].
Overall, group finding is more effective if even more dimensions of phase-space can be measured. For example, @maj92 analyzed proper motions for a sample of 250 F-K dwarfs in the direction of the North Galactic Pole that probes out to roughly 8 kpc; he measured a mean retrograde rotational velocity for the halo sample and detected a more coherent retrograde group of stars at a mean height above the Galactic plane of $Z\sim4.6$ kpc – findings suggestive of an accreted halo population. Subsequently, @mmh94 [@mmh96] obtained radial velocities for a subsample of stars from the @maj92 survey and found a significant amount of phase-space clumpiness in their halo sample. Once all six phase-space dimensions are known, conserved (or nearly conserved) quantities (such as energy, angular momentum, or orbital frequencies) can be calculated that can link stars from a common progenitor in the volume even if they are not at the same orbital phase, as has been done successfully for several nearby surveys [see @helmi99a; @helmi00; @morrison09; @gomez10]. While the findings in the previous paragraph point to a significant fraction, and possibly the majority of the stellar halo being [*accreted*]{}, some contributions from [*in-situ-halo*]{} and [*kicked-out*]{} stars are still possible. Structural, orbital and/or metallicity trends in the stellar halo with radius — as well as transitions in those trends — have historically been taken as indicative of these different formation mechanisms and leading to “dual halo” models [e.g., in RR Lyraes, globular clusters and BHB stars, see @hartwick87; @zinn93; @zinn96; @sommerlarsen97 respectively]. 
@chiba2000 analyzed a kinematically unbiased sample of 1203 stars with \[Fe/H\]$<$-0.6 in the inner halo (within 4 kpc of the Sun) and detected a gradient in the rotational velocity as a function of height above the Galactic plane for the more metal-rich stars in their sample — as seen for the [*in-situ-halo*]{} stars formed in the simulations of @samland03. @chiba2000 also confirm the existence of the streams detected by @helmi99b — further supporting some presence of an [*accreted*]{} population in this region. @carollo07 [@carollo10] studied the kinematics and metallicities of SDSS calibration stars for a larger volume (20 kpc) and found evidence for two populations: one of metal-rich stars on only mildly eccentric orbits (which they dubbed “inner halo”) and a second of metal-poor stars on more eccentric orbits (which they dubbed “outer halo”). Similarly, @deason11 analyzed BHB stars in SDSS (a sample that probes the outer halo to 40 kpc) and find a net retrograde rotation for metal-poor stars and a net prograde rotation for metal-rich stars. Such multi-component halos with distinct formation mechanisms for each component emerge naturally in the hydrodynamical simulations, with the inner halo (within 10-15 kpc) coming predominantly from stars (either [*in-situ-halo*]{} or [*kicked-out*]{}) formed within the main Galaxy dark matter halo progenitor and the outer halo (dominant beyond 15-20 kpc) from mergers and accretion of stars by the Galactic dark matter halo [@abadi06; @zolotov09; @font11; @mccarthy12; @tissera12]. However, whether the distinct components observed in the stellar halo are indeed due to distinct formation mechanisms, and not merely a variety of accretion events, has yet to be proven. In our own survey, the sample of stars is sufficiently far that distance and proper motion errors from extant data are too large to estimate their orbital properties accurately. 
However, chemical abundances can be derived from high-resolution spectra in general, and thus provide an alternate avenue for exploring the origins of the stars. A star is branded at its birth by its chemical abundance patterns — a signature that is generally conserved throughout its lifetime (like orbital properties) and cannot be diluted by orbital phase-mixing. Moreover, stars deriving from a common origin should have similarities in their chemical abundance patterns, trends that are directly correlated to the details of their enrichment history. Hence we can hope to “chemically tag” stars as members of the different populations via their abundance patterns (similar in spirit to Freeman & Bland-Hawthorn’s 2002 proposal for the reconstruction of ancient star-clusters in the stellar disk). The potential power of chemical tagging has already been demonstrated empirically in observations: in dwarf galaxies, for example, stars tend to have lower \[$\alpha$/Fe\] at a given \[Fe/H\] than stars in the bulk of the Milky Way’s stellar halo [@smecker02; @shetrone03; @tolstoy04; @geisler05; @monaco07; @chou10a] and similar patterns have been seen in stars in stellar structures such as the Monoceros Ring [@chou10b] and Triangulum-Andromeda Cloud [@chou11], which lends support to the interpretation of such features as originating from disrupted dwarf galaxies. In a similar manner, a series of papers [@nissen10; @nissen11; @schuster12] have separated a sample of 94 nearby (within $\sim$335 pc of the Sun), metal rich (-1.6 $<$ \[Fe/H\] $<$ -0.4) stars into “$\alpha$-rich” and “$\alpha$-poor” groups and shown systematic differences in the abundances, ages and orbital properties of stars in these two groups that are suggestive of [*kicked-out*]{} and [*accreted*]{} origins, respectively. Figure \[cartoon\] shows two cartoons to illustrate conceptually how this approach might be applied to our own sample. 
The lines in the left hand panel show the expected temporal evolution, in the \[$\alpha$/Fe\]-\[Fe/H\] plane, of the gaseous chemical abundance for systems with low/intermediate/high star formation efficiencies (SFEs - indicated by lines with increasingly dark shades of gray), which are assumed to correspond to stellar systems embedded in small/intermediate/large dark matter potential wells [see @mcwilliam97; @gilmore98; @robertson05]. These systems could represent, for example: a Milky Way dwarf spheroidal (low SFE corresponding to low mass accreted systems), the LMC (intermediate SFE corresponding to intermediate mass accreted systems), and a Milky Way progenitor (high SFE and contributing to populations either formed [*in situ*]{} or [*kicked out*]{} from their original birth places). All of these systems are expected to have old stars with high \[$\alpha$/Fe\] at low metallicity, which reflect the yields from explosive Type II SNe alone. Stars formed after the (delayed) onset of Type Ia SNe will become progressively more enriched by Fe and acquire lower \[$\alpha$/Fe\] as Type Ia SNe produce $\alpha$-elements much less efficiently. The transition point in Fe between the early and late stages reflects the metallicity that the gas has reached prior to the onset of Type Ia enrichment, which will be lower/higher for systems that have less/more rapidly converted their gas into stars. The right hand panel of Figure \[cartoon\] applies this intuition to show where populations with different origins might fall in this plane (these regions are defined more rigorously and empirically in Figure \[tag\] and discussed in §\[interp\] using data from previous studies). 
The region outlined in blue should contain stars formed early (because of their low metallicities and high \[$\alpha$/Fe\]) but that can now be in the halo via any of the three mechanisms ([*in-situ-halo*]{}, [*kicked-out*]{}, or [*accreted*]{}), and thus we do not anticipate being able to conclusively deduce their origins from this particular type of analysis. The region outlined in green is likely only to contain stars formed more recently (because of the low \[$\alpha$/Fe\]) in small potential wells (because of the low Fe) and hence should be sensitive to a purely [*accreted*]{} population. The region outlined in orange is likely only to contain stars formed more recently (because of the low \[$\alpha$/Fe\]) in deep potential wells (because of the high Fe) and hence should be [*in*]{}sensitive to [*in-situ-halo*]{} populations (formed exclusively early on) as well as the majority of [*accreted*]{} populations (because they form in smaller potential wells — although, note, there could be some contamination from high-mass, late accreted systems, as suggested in Figure \[tag\]). Stars that have formed recently in the Galactic disk and that were subsequently [*kicked-out*]{} might lie in this region [see @zolotov10 for an illustration of this idea with hydrodynamical cosmological simulations]. Overall, these expectations lead us to conclude that the fraction of our halo stars in the metal-poor and $\alpha$-poor region is an indicator of the importance of late accretion, while the fraction with disk-like abundances but moving at large speed relative to the Sun is indicative of the contribution of a recently [*kicked-out*]{} population. 
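The qualitative behavior of the cartoon tracks can be encoded in a toy numerical sketch (all parameter values below are invented for illustration and are not fits to data): a Type II plateau in \[$\alpha$/Fe\], followed by a linear decline beyond a knee whose \[Fe/H\] increases with the star formation efficiency:

```python
# Toy version of the [alpha/Fe]-[Fe/H] cartoon in Figure [cartoon].
# Plateau height, slope, and knee positions are invented for illustration.
def alpha_fe(feh, knee_feh, plateau=0.35, slope=0.25):
    """Plateau from Type II SNe, linear decline after the Type Ia knee."""
    return plateau if feh <= knee_feh else plateau - slope * (feh - knee_feh)

# Low / intermediate / high SFE systems reach the knee at increasing [Fe/H].
systems = {"low-SFE dwarf": -2.0,
           "intermediate-SFE system": -1.2,
           "high-SFE progenitor": -0.6}
for name, knee in systems.items():
    # [alpha/Fe] reached at solar [Fe/H]; higher SFE ends alpha-richer
    print(name, round(alpha_fe(0.0, knee), 2))
```

At a fixed \[Fe/H\], the lower-SFE track sits at lower \[$\alpha$/Fe\], which is the separation exploited by the chemical-tagging regions described above.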
Thus an additional advantage of focusing on abundance space as a probe of the relative proportions by origin of M giants in the stellar halo is that we only need to look for the expected systematic differences in the chemical composition of these populations ([*in-situ-halo*]{}, [*kicked-out*]{}, and [*accreted*]{}) overall rather than (for example) search for kinematical groupings from individual accretion events. Hence, we can explore formation scenarios with much smaller samples of stars than by looking at dynamics alone. Motivated by the promise of chemical abundances, which we could combine with the known RVs we already have for all program stars, we obtained high-resolution echelle spectra for 34 stars selected for follow-up based upon their high, halo-like speeds relative to the disk. As a control sample we also observed five random thick disk red giants (chosen based on RVs similar to the bulk thick disk trend) and four M giant calibration stars from @sl85 and @smith00. This paper is organized in the following way. In §\[prog\], we describe target selection, observations, and data reduction for the medium-resolution spectroscopy program of bright M giants. The RVs and the identification of RV outliers – stars that have high speeds relative to the bulk thick disk trend – are presented in §\[rvs\]. Details of the high-resolution follow-up spectroscopy program are given in §\[hires\]. Our interpretation of the abundance data in terms of formation scenarios for the halo is given in §\[interp\]. Lastly, we give a summary of the results and discuss them in the context of prior studies in §\[conc\]. In a companion paper [@johnston12], we develop a more complete understanding of the nature and implications of some possible dynamical groups in our survey by generating and analyzing synthetic observations of simulated stellar halos. 
\[prog\]Program Stars ===================== Defining the Sample ------------------- M giants can be distinguished from M dwarfs by their $J-H$ and $H-K$ colors [@bessell; @carpenter]. M giants also dominate M dwarfs in catalogs of late-type stars to $K\sim$14. These facts were used by @maj03 to select M giants from 2MASS and map the streams of M giants from the Sagittarius (Sgr) dwarf spheroidal (dSph) galaxy and by @sharma10 and @rocha03 [@rocha04; @rocha06] to map and track the Triangulum-Andromeda star cloud, the Pisces overdensity, and the Monoceros/GASS/Argo feature. Our survey sample spans $(J-K_{S})_{0}$ colors from $0.75<(J-K_{S})_{0}<1.24$; this is similar to the range studied by @girard06 in their study of the thick disk using red giants, although most (96%) of our red giants have $(J-K_{S})_{0}>0.85$ (as in Majewski et al. 2003) to ensure a clean sample of M giants. The magnitude range of our entire sample is $4.3<(K_{S})_{0}<12.0$, with a median magnitude of 7.4 (the majority, 1625 of 1799, have $5.0<(K_{S})_{0}<9.0$). The magnitudes were dereddened using the maps from @schlegel98. A constraint in Galactic latitude of $30^{\circ}<|b|<60^{\circ}$ was applied to our M giant catalog, with the lower limit in $b$ applied to avoid excess contamination from the thin disk. Our nominal photometric sample contained approximately 12,000 stars, and in this paper we present spectroscopic observations for 1799, or roughly 15% of these. The stars observed were selected randomly from the nominal sample and cover nearly all Galactic longitudes. The bulk of the observing was done at the Fan Mountain Observatory (FMO), located in Virginia, so there are gaps in coverage corresponding to the Southern Hemisphere. Of the 1799 stars, 149 were observed at Cerro-Tololo Inter-American Observatory (CTIO) as part of a related program. Figure \[lb\] shows in an Aitoff projection the spatial distribution of the 1799 program stars in Galactic coordinates. 
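The photometric cuts described above can be expressed as a simple catalog filter; the following sketch is purely illustrative (the field names and the toy three-star catalog are invented, not the survey's actual pipeline):

```python
# Hypothetical sketch of the survey's photometric selection applied to a
# 2MASS-like catalog of dicts with dereddened (J-Ks)_0, (Ks)_0, and b.
def is_program_star(star):
    """Color, magnitude, and Galactic-latitude cuts quoted in the text."""
    jk, ks, b = star["jk0"], star["ks0"], star["b"]
    return (0.75 < jk < 1.24           # M giant color range of the sample
            and 4.3 < ks < 12.0        # magnitude range of the sample
            and 30.0 < abs(b) < 60.0)  # avoid thin-disk contamination

catalog = [
    {"jk0": 0.95, "ks0": 7.4, "b": 45.0},   # typical program star
    {"jk0": 0.95, "ks0": 7.4, "b": -12.0},  # too close to the plane
    {"jk0": 0.60, "ks0": 7.4, "b": 45.0},   # too blue, likely not an M giant
]
selected = [s for s in catalog if is_program_star(s)]
print(len(selected))  # -> 1
```

In practice the survey additionally requires $(J-K_{S})_{0}>0.85$ for 96% of the sample and applies the $J-H$, $H-K$ giant/dwarf separation of @bessell and @carpenter before these cuts.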
The nominal M giant catalog was matched to the UCAC2 catalog [@zacharias04] and the gap in the northern Galactic hemisphere is due to the upper limit of $\delta=+52^\circ$ in the UCAC2 catalog at the time of the survey inception. However, the very small amplitudes of the UCAC2 proper motions for the majority of the program stars typically result in unreasonably large relative errors (often larger than the derived space motions) in any derived kinematical parameters using them, so the proper motions are not actually utilized in the present work. Limitations from use of the UCAC2 proper motions on 2MASS M giants are further explored in @mlpp. \[fmo\]FMO and CTIO Spectroscopic Observations and Reductions ------------------------------------------------------------- Spectra were collected at the University of Virginia’s FMO using the Fan Observatory Bench Optical Spectrograph [FOBOS; see @crane05] on the 1-m astrometric reflector. FOBOS is a fiber-fed optical spectrograph that was designed for moderate resolution ($R\sim $ 1500-3000) spectroscopy [@crane05]. FOBOS uses a grating with 1200 grooves mm$^{-1}$. The estimated resolution of the spectra is $\Delta\lambda\sim4$ Å. Our observing program began on UT 2005 February 25 and data presented here were taken in the years 2005 - 2008. The spectrograph is optimized for use over the region 4000-6700 Å but is limited by the camera, which has a SITe 2048$\times$2048 CCD detector that at a linear dispersion of 1.0 Å pixel$^{-1}$ samples only $\sim$2000 Å of that range (selected by us to be 4000-6000 Å). On all but one night (when we were testing the efficiency of the set-up) the detector setting used had a read noise level of 4.5 e$^{-}$ and a gain of 6.1 e$^{-}$ per ADU. For most stars, three spectra are taken and summed in 2-D after the CCD frame preprocessing is completed. This combination of three images facilitates the elimination of cosmic rays. 
The minimum S/N to achieve the best possible RV precision (a few km s$^{-1}$) was found to be $\sim$20; this typically translated to total exposure times of 450-900 seconds for the M giants observed with FOBOS, which have magnitudes in the range $4.3<(K_{S})_{0}<9.7$. Several radial velocity standard stars from the Astronomical Almanac were also observed each night; these are used for cross-correlation templates when determining the radial velocities of the target stars. Standard stars were chosen to be of a similar spectral type as the program stars to minimize systematic offsets in the derived RVs. Several sets of bias frames were taken throughout each night. To remove pixel-to-pixel variations in the frames, a “milky flat” is created by illuminating an opal diffusing glass with a quartz-tungsten-halogen lamp. The object and comparison frames are flat-fielded (using the milky flat), trimmed, and bias subtracted using the task $ccdproc$ in IRAF[^2]. Comparison spectra were taken using neon, argon and xenon lamps for calibrating wavelengths against the laboratory values for lines from these elements. The spectra are extracted from the 2-D images and converted to 1-D spectra and wavelength calibrated using the IRAF tasks $apall$ and $identify$. To obtain radial velocities, the extracted, flat-fielded, wavelength-calibrated spectra are cross-correlated with the standard star spectra using code developed by W. Kunkel [described in detail in @maj04b]. The cross-correlation code is run for each night of data and each program star is cross-correlated to all of the standard stars (typically 4-6) observed on that night. The radial velocity reported for a program star is the average of the radial velocities from cross-correlation with multiple standards taken that night (with standard star spectra that produce poor cross-correlations against the others removed from the average, iteratively). 
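The principle behind the cross-correlation step can be illustrated with a toy calculation (this is not the Kunkel code used by the survey; the line profile, grid spacing, and velocities are invented): on a uniform log-wavelength grid, a Doppler shift becomes a constant pixel lag, so the RV follows from the lag that maximizes the correlation with a template.

```python
# Toy RV measurement by cross-correlation on a log-wavelength grid.
import math

C_KMS = 299792.458
DLOGLAM = 1e-5                         # ln(lambda) per pixel, ~3 km/s/pixel

def line(center_kms):
    """Gaussian line profile (continuum removed), Doppler-shifted."""
    shift = math.log(1.0 + center_kms / C_KMS) / DLOGLAM   # lag in pixels
    return [math.exp(-0.5 * ((i - 500 - shift) / 5.0) ** 2)
            for i in range(1000)]

template = line(0.0)                   # "standard star"
target = line(60.0)                    # "program star" at +60 km/s

# Cross-correlate and take the lag with the highest correlation.
best = max(range(-50, 51),
           key=lambda k: sum(template[i] * target[i + k]
                             for i in range(60, 940)))
rv = C_KMS * (math.exp(best * DLOGLAM) - 1.0)
print(round(rv, 1))                    # recovers roughly +60 km/s
```

The recovered RV is quantized to the pixel grid; real pipelines fit the correlation peak to reach sub-pixel (few km s$^{-1}$) precision, and average over several standard-star templates as described above.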
The spectrum for a typical program star observed with FOBOS is shown in Figure \[speclines\]. The dominant features in M giants in the spectral band we are studying are the Mg b triplet, at 5167, 5173, and 5184 Å, and the Na D doublet, at 5889 and 5896 Å. A number of strong Fe, Cr, and Ti lines/blends are also present. The redder stars in our sample ($J-K_{S}>1.1$) show very strong titanium oxide (TiO) bands in their spectra; these M giants are cool enough ($T_{\rm eff}<3560$ K) that the TiO bonds in the star are not dissociated. Figure \[speccomp\] shows the spectra for several stars observed with FOBOS covering a range of ($J-K_{S})$ colors; the strength of the TiO bands increases as the giants become redder and cooler. An additional 149 M giants that fit our survey criteria were observed over UT 2004 October 8 - 11 at CTIO using the Cassegrain spectrograph on the 1.5-m telescope. The detector was a Loral 1K (1200$\times$800 pixels) CCD with a read noise of 6.5 e$^{-}$; the gain was set to 1.42 e$^{-}$ per ADU. A grating with 831 grooves mm$^{-1}$ was used, with a resolution of $\Delta\lambda\sim3.1$ Å. Helium and argon lamps were used to take comparison frames for each target at the same telescope position to account for flexure variations. Ten quartz frames were taken each night and combined and normalized. The spectral range is 7650-8900 Å. The dominant feature in this region is the Ca II triplet at 8498, 8542, and 8662 Å. The CTIO M giants have magnitudes in the range $6.3<(K_{S})_{0}<12.0$. The reduction procedures for the CTIO program giants are similar to those used for the FMO reductions (the same cross-correlation code was used to determine the radial velocities). For the FMO sample, the difference in the derived (from cross-correlation against each other) and published RVs for the standard stars is typically $\pm$1-5 km s$^{-1}$, with no systematic offset in either direction. 
A total of 102 program stars were observed multiple times at FMO to test the stability of FOBOS and to gauge the S/N threshold for obtaining reliable RVs. The mean of the absolute value of the differences in the RVs for stars with multiple observations at FMO is 8.5 km s$^{-1}$. The RVs are fairly stable even at low S/N: stars with S/N below 20 have a mean absolute value in the difference of their RVs of 10.5 km s$^{-1}$. For the CTIO sample, the difference in the derived and published RVs for the standard stars is slightly higher, with variations ranging from $\pm$1-10 km s$^{-1}$; as with the FMO standards, no systematic offset in either direction is seen. Repeat observations were also taken for 16 stars at CTIO. The mean of the absolute value of the differences in RVs for the CTIO repeat observations is 13.9 km s$^{-1}$. For a subsample of 34 stars, radial velocities were also determined from high-resolution echelle spectra (see §\[hirvs\]). In Table \[tab2\], the radial velocities found from the high- and medium-resolution spectra for these 34 stars are reported (high-resolution/medium-resolution, denoted as $v_{\rm hel,h}/v_{\rm hel,m}$, respectively). The mean of the absolute value in the differences between the medium-resolution RVs with the high-resolution RVs is 6.2 km s$^{-1}$. Overall, considering the random errors in the medium-resolution RVs, the comparison of the medium/high resolution values, and that the data set is dominated by FOBOS observations, we place the typical uncertainty level for the medium-resolution RVs at 5-10 km s$^{-1}$. 
Radial Velocity Distribution\[rvs\] =================================== Panel (a) of Figure \[rvcosb\_3pan\] shows the heliocentric radial velocity, $v_{\rm hel}$, as a function of Galactic longitude, $l$.[^3] In panel (b), these velocities have been translated to the GSR (Galactic standard of rest frame — i.e., centered on the Sun but at rest with respect to the Galactic Center), where we adopt the values $\Theta_{0}$=236 km s$^{-1}$ for the speed of a closed orbit at the position of the Sun relative to the Galactic center [@bovy09] and ($U_\odot$,$V_\odot$,$W_\odot$)=(11.10,12.24,7.25) km s$^{-1}$ [@schonrich10] for the motion of the Sun with respect to this orbit. General trends in these panels can be understood by assuming that stars in the Galactic disk move on nearly circular orbits around the Galactic center. For a flat Milky Way rotation curve near the Sun with circular speed $\Theta_{0}$, the predicted line-of-sight velocity with respect to the GSR at the Sun’s position for a star on a circular orbit is given by: $$v_{\rm GSR,circ}= \Theta_{0} \left(\frac{R_0}{R}\right) \sin l \cos b \label{vpred}$$ Here, $R$ is the Galactocentric radius of the star and $R_0$ is the solar Galactocentric radius. The expected sinusoidal trend with $l$ for stars moving on disk-like orbits around the Galactic center is seen for the bulk of the stars. Note that the heliocentric velocities in panel (a) still show some sinusoidal trend — a reflection of the asymmetric drift of the M giant population (i.e., the tendency of older stars to have circular velocities that lag the Local Standard of Rest); this is discussed further in §\[interp\]. At fixed $l$, equation (\[vpred\]) shows that $v_{\rm GSR,circ}$ is lower for stars on circular orbits observed at high $b$ than for stars observed at low $b$ due to the $\cos b$ projection of their motions. 
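The GSR translation and the circular-orbit prediction of equation (\[vpred\]) can be written compactly. The sketch below adopts the $\Theta_{0}$ and $(U_\odot,V_\odot,W_\odot)$ values quoted above; the standard solar-motion projection formula and the value $R_0=8$ kpc are our assumptions.

```python
import numpy as np

# Constants adopted in the text: Theta_0 from Bovy et al. (2009) and the
# solar peculiar motion from Schoenrich et al. (2010).
THETA_0 = 236.0                            # km/s
U_SUN, V_SUN, W_SUN = 11.10, 12.24, 7.25   # km/s

def v_gsr(v_hel, l_deg, b_deg):
    """Translate a heliocentric RV to the Galactic standard of rest by
    adding the line-of-sight projection of the full solar motion
    (peculiar motion plus circular speed)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (v_hel
            + U_SUN * np.cos(l) * np.cos(b)
            + (V_SUN + THETA_0) * np.sin(l) * np.cos(b)
            + W_SUN * np.sin(b))

def v_gsr_circ(l_deg, b_deg, R, R0=8.0):
    """Equation (1): predicted line-of-sight GSR velocity of a star on a
    circular orbit at Galactocentric radius R, for a flat rotation curve.
    R0 = 8 kpc is an assumed solar Galactocentric radius."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return THETA_0 * (R0 / R) * np.sin(l) * np.cos(b)
```

The $\cos b$ factor in `v_gsr_circ` is what motivates the deprojection discussed next: dividing the observed GSR velocity by $\cos b$ recovers an approximation to the rotational component of a disk star's motion.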
In principle, therefore, one may have a good approximation to the rotational component of a disk star’s velocity at a latitude $b$ by deprojecting the observed velocity with a division by $\cos b$. This scaling tends to accentuate differences between stars having “disk-like” (i.e., circular) motions and stars having non-disk-like motions because the deprojection will tend to tighten the coherence of disk stars but separate outliers more — as shown in panel (c) of Figure \[rvcosb\_3pan\] and in Figure 1 of @maj12. While the majority of the stars in Figure \[rvcosb\_3pan\] appear to be members of the thick disk based on the amplitude of the sinusoidal trend in panel (a), there are a significant number of stars with high velocities relative to the main trend (we refer to these stars as “RV outliers”). The origin of these RV outliers — [*in-situ-halo, kicked-out*]{}, or [*accreted*]{} — is unclear from this observational plane alone. If the RV outliers are *in-situ-halo* or *kicked-out* stars, we would expect to see random RVs as a function of $l$. However, stars in several longitude ranges in our sample show suggestive coherence in their RVs. Such coherence is expected for a stream of stars passing through the Solar neighborhood [see @maj12; @johnston12 for further details], but whether these structures are real is hard to assess given the small number of stars. In the next section we present high-resolution follow-up spectroscopy of a sample of 34 RV outliers (highlighted by green symbols in the lower panel of Figure \[rvcosb\_3pan\]) to examine what fraction of outliers can be attributed to each of the halo formation mechanisms. 
Chemical Abundances\[hires\] ============================ Data Collection and Reductions\[hirvs\] --------------------------------------- To test the chemical properties of the 34 selected stars, high-resolution echelle spectra were collected using the Astrophysical Research Consortium Echelle Spectrograph (ARCES) on the 3.5-m telescope at Apache Point Observatory on UT 2009 March 30 and UT 2009 April 2 and the CCD echelle spectrograph (ECHLR) on the 4-m Mayall telescope at the Kitt Peak National Observatory over UT 2010 February 26 - March 2. The ARCES uses a 2048$\times$2048 pixel SITe CCD and has a resolution of $R$=31,500; the CCD has a gain of 3.8 e$^{-}$ per ADU and a readout noise level of about 7 e$^{-}$. Two sets of flat fields were taken to account for the strong gradient in response across the ARCES orders: one long set with a blue filter inserted and another short set with no filter (these are the red calibration frames). The blue and red frames were combined to create a master quartz flat. For wavelength calibration, thorium-argon (ThAr) lamp frames were taken. Reduction of the ARCES data was carried out using various IRAF tasks in the $echelle$ package. All images were overscan-corrected and trimmed using $ccdproc$. The echelle orders were located and the trace defined for the spectra with $apall$. The task $ecidentify$ was used to identify lines in a ThAr lamp spectrum. To minimize the effects of aliasing, the spectra were resampled along the dispersion axis using the $magnify$ task. Scattered light was removed from the program star frames using the $apscatter$ task, and the relevant echelle orders were then extracted. The program spectra were divided by the extracted master quartz flat, and the dispersion correction defined using the ThAr lamps was then applied to convert the pixel scale to a wavelength scale. As a final step in the reduction process, the spectra were continuum normalized. 
ECHLR spectra were collected at KPNO using the 2048$\times$2048 pixel T2KB CCD on the 4-m Mayall. The gain setting for T2KB was 1.9 e$^{-}$ per ADU with a read noise of approximately 4 e$^{-}$. The ECHLR has a resolution of $R$=35,000. ThAr lamps were used to collect wavelength calibration frames, and the short-exposure quartz lamp was used for obtaining the flat-field images. The spectral reduction procedures for the ECHLR data are similar to those carried out for the ARCES data. The photometric properties and the observational details of the stars observed at APO and KPNO are listed in Table \[tab2\]. The IDs and photometry of the stars come from the 2MASS point source catalog, where the IDs are the 2MASS RA/Dec (J2000.0) coordinates. Radial velocity standards, taken from the Astronomical Almanac, were observed all nights at APO and KPNO. Radial velocities from the echelle data were found using the IRAF $fxcor$ task and are reported in Table \[tab2\] and are compared with the medium-resolution values. Cross-correlation between RV standards gives errors on the order of 0.5 km s$^{-1}$ for the high-resolution RVs. Based on the RV data in Table \[tab2\], there is apparently a systematic bias towards higher measured RVs for the medium-resolution spectra, such that $v_{\rm hel,h}-v_{\rm hel,m}$=-5.0 km s$^{-1}$. This offset may be due to variations in the centering of the star in the slit for the echelle data (FOBOS data are taken with fiber optics, which, due to radial and azimuthal scrambling, provide more uniform “slit functions” in the spectrograph). \[abund\]Chemical Abundances ---------------------------- ### Derivations The APO and KPNO instrument set-ups give the best S/N per pixel for the region around 7400 Å. This particular region of the spectrum was selected due to its relative absence of molecular bands, which offers a good window for spectral analysis [@sl90]. 
The procedures used to derive the metallicities are similar to those used by @chou07 in their study of M giants in the Sgr tidal tails. Equivalent widths (EWs) for 11 Fe I lines in the range 7443 Å to 7583 Å were used to determine the atmospheric parameters $T_{\rm eff}$, log $g$, and \[Fe/H\] and the microturbulent velocity ($\xi$). The EWs were measured manually using the IRAF $splot$ task. The wavelengths of the 11 Fe I lines and their corresponding EW measured for each star are listed in Table \[tab3\]. In some cases, an EW could not be measured due to a cosmic ray falling on the same pixel as an Fe line; these lines were removed from the list for that spectrum and are reported as “...” in Table \[tab3\]. We adopt the same values for the excitation potentials ($\chi$) and oscillator strengths ($gf$) for these lines as those used by @chou07. Along with the EW measurements, stellar atmosphere models from the Kurucz ATLAS9 grids (1994) are used. Each stellar atmosphere model corresponds to a particular ($T_{\rm eff}$, log $g$, \[Fe/H\]). We start with an initial guess for ($T_{\rm eff}$, log $g$, \[Fe/H\], $\xi$)$_{0}$. The EWs for the sample lines and the model atmosphere are used as input to the MOOG local thermodynamic equilibrium (LTE) code [@sneden73]. The starting value for $T_{\rm eff}$ is calculated from the @houdashelt00 color-temperature relations for M giants, using the dereddened 2MASS $J-K_{S}$ colors converted to the CTIO system and the relations of @carpenter. An initial guess for log $g$ is taken from the 10 Gyr $Z_{\sun}$ isochrone from the Padova evolutionary track database (Marigo et al. 2008[^4]). The surface gravity $g$ of a star is related to its mass and size (luminosity). Once a star exhausts the hydrogen supply in its core, it will first brighten and move up the red giant branch (RGB); later, after core He exhaustion, the star eventually ascends the asymptotic giant branch (AGB). The difference in log $g$ between a star on the RGB vs. 
the AGB for cool giants, however, is quite small over a wide range of ages. Figure \[teff\], which shows the $T_{\rm eff}$-log $g$ plane for 3 Gyr, 5 Gyr, and 10 Gyr solar metallicity ($Z_{\sun}$=0.019) isochrones, demonstrates that the variation in log $g$ between the RGB and AGB is on the order of 0.15 dex for stars in the range of effective temperatures for our sample, which is 3600-4000 K. The iterative scheme for determining abundances involves using the initial values of $T_{\rm eff}$, log $g$, and \[Fe/H\] and iterating these values until the derived and model values of \[Fe/H\] converge. After each iteration, the correlation coefficient for $A$(Fe) as a function of the reduced EW (i.e., RW – the measured EW divided by the wavelength of the transition) is checked and $\xi$ is adjusted to minimize the correlation coefficient before proceeding to the next iteration. An incorrect value of $\xi$ leads to a physically unrealistic dependence of the elemental abundance on the EWs – as seen by a high correlation coefficient for the RW. The derived parameters for all program stars are listed in Table \[tab4\]. In addition to \[Fe/H\], the ratio \[Ti/Fe\] was also derived. Once the atmospheric parameters for a star were determined using MOOG, these were fixed and used to measure $A$(Ti). Three Ti I lines were used to derive $A$(Ti): 7474.940, 7489.572, and 7496.120 Å. A solar $gf$ value was derived for the Ti I line at 7474.94 Å by adopting $A$(Ti)$_{\sun}$ = 4.90 from @asplund05. The $gf$ values for the other two Ti I lines (7489.57 and 7496.12 Å) were taken from @chou07 and these are also solar $gf$ values. As a check of our methodologies, four calibration stars from the red giant spectral studies of @sl85 and @smith00 (collectively referred to as S&L) were observed with both the ARCES and ECHLR; a comparison between our \[Fe/H\] and \[Ti/Fe\] values for the calibration stars is given in Table \[SLcomp\]. 
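The iterative scheme described above can be sketched as a simple control loop. Here `derive_afe` is a hypothetical stand-in for a line-analysis code such as MOOG (it returns per-line iron abundances for given EWs and model parameters), and the step size for $\xi$, the convergence tolerance, and the adopted solar $A$(Fe) are our assumptions.

```python
import numpy as np

A_FE_SUN = 7.45  # adopted solar iron abundance (an assumption of this sketch)

def converge_parameters(ews, wavelengths, derive_afe, feh0, xi0,
                        tol=0.02, max_iter=25):
    """Control-flow sketch of the iterative scheme: derive per-line A(Fe),
    nudge the microturbulence xi against any trend of A(Fe) with reduced
    EW, and iterate [Fe/H] until the model and derived values agree.
    `derive_afe` stands in for a radiative-transfer code such as MOOG."""
    feh, xi = feh0, xi0
    for _ in range(max_iter):
        afe = derive_afe(ews, wavelengths, feh, xi)
        rw = np.log10(ews / wavelengths)        # reduced equivalent widths
        slope = np.corrcoef(rw, afe)[0, 1]
        if np.isfinite(slope):
            xi -= 0.1 * np.sign(slope)          # suppress the A(Fe)-RW trend
        feh_new = afe.mean() - A_FE_SUN
        converged = abs(feh_new - feh) < tol
        feh = feh_new
        if converged:
            break
    return feh, xi
```

In the real analysis $T_{\rm eff}$ and log $g$ are also updated each cycle; they are held fixed here only to keep the sketch short.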
Typical uncertainties in the values derived for \[Fe/H\] in both studies are $\sim \pm$0.15 dex using this sample of Fe I lines. The mean difference in \[Fe/H\], and its associated standard deviation, between these two studies, in the sense of (this study - S&L), is -0.04 $\pm$ 0.16. This comparison indicates that the two studies are on the same abundance scale, with only a small mean offset and a scatter typical of the derived uncertainties in \[Fe/H\]. The values are in good agreement, with the small remaining discrepancies likely due to differences in the stellar atmosphere models used. Although detailed non-LTE (NLTE) calculations have not been carried out here, limits to the simplifying assumption of LTE can be investigated using recent studies of NLTE line formation for iron by @bergemann11b and for titanium by @bergemann11a. As is the case for many NLTE calculations involving cool stars, the theoretical results depend on the choice of the value for the parameterized efficiency of neutral hydrogen collisions, S$_{\rm H}$. In the case of Fe I/Fe II, @bergemann11b computed NLTE/LTE results using a small grid of model atmospheres and find that for Fe I in general, corrections to LTE-based abundances will become larger for increasing $T_{\rm eff}$, decreasing surface gravity, or decreasing metallicity; the Fe I corrections increase most dramatically for decreasing \[Fe/H\] (see their Figure 3), especially for \[Fe/H\]$\le$ -2.0, and for warmer temperatures, $T_{\rm eff}$$\ge$5000 K (where $\Delta$(NLTE – LTE)$\ge$+0.2 dex). The sample of M-giants analyzed here is only moderately metal-poor (\[Fe/H\]$\ge$-1.3) and, when coupled to cooler effective temperatures, would be expected, based on the @bergemann11b results, to require corrections to LTE Fe I abundances that would be $\le$+0.2 dex. 
Although NLTE corrections to LTE abundances in cool giants are not expected to be large, NLTE theoretical calculations of corrections that span the stellar parameter space analyzed here would be welcome. The Ti I corrections to LTE from @bergemann11a are in the same sense and qualitatively similar to those for Fe I discussed above. Given the stellar parameters and metallicities covered by the sample here, any corrections to \[Ti/H\] are not significant and, in particular, the critical values of \[Ti/Fe\] will have even smaller corrections from assuming LTE, with values $\le$0.10 dex; such uncertainties have no significant effect on conclusions drawn from the Fe and Ti abundances calculated here. As noted in @bergemann11b, their goal is to establish interactive routines to allow for estimating corrections to LTE-based abundances, so in the near future, it may be possible to provide more accurate corrections to LTE given particular stellar parameters and metallicities. Observations of stars in clusters also support the theoretical calculations that suggest rather small departures from LTE for the Fe abundances in giants. @ramirez01 analyzed stars in the mildly metal-poor globular cluster M71 (\[Fe/H\]$\sim$-0.5) and found the same values of \[Fe/H\] (within $\sim$0.05 dex) for turn-off stars, subgiants, and giants (which span a range in $T_{\rm eff}$ from 6000 K to 4500 K and log $g$ from 4.1 to 1.5). These results support the small-ish corrections to LTE Fe abundances suggested by the @bergemann11b calculations.

### Distances

High-resolution spectra allow us to compute distances. Although we don’t use them explicitly in most of our analysis, these distances, reported in the last column of Table \[tab4\], are helpful to obtain some understanding of the size of the volume our M giant sample probes around the Sun. The distances to the high-resolution program stars are computed using isochrones from the Padova database [@marigo08; @girardi10]. 
Approximate distances are determined using the derived \[Fe/H\] and assuming stars of age 10 Gyr; the appropriate $M_{K}$ is then selected based on the derived $T_{\rm eff}$. Using an age of 10 Gyr means that a star evolving along the RGB or AGB would have had an initial mass of $\sim$ 1.0 $M_{\sun}$; such a typical mass would not be largely different from a much younger population, such as 2 Gyr where $M_{RGB/AGB}\sim$ 1.7$M_{\sun}$, or a population with an age of 5 Gyr and $M_{RGB/AGB}\sim$ 1.1-1.3$M_{\sun}$. As discussed above (see Figure \[teff\]), the stellar atmospheric parameters for a red giant on the RGB do not vary significantly from those for a red giant on the AGB. The derived distances of the standards agree well with their Hipparcos distances (10% to 14% accuracy) when using isochrones in the same way as used for the high-resolution program stars. Possible sources of systematic error in the derived distances include: (1) an incorrect assumption of the star’s age and (2) scatter in the isochrone for later evolutionary stages. As a check on the error in distance from the isochrone method, $M_{K}$ was computed for ages of 5, 8, and 10 Gyr for stars having different values of $T_{\rm eff}$ and \[Fe/H\]. Using estimated uncertainties in $T_{\rm eff}$ of $\sim$100 K and $\sim$0.10 dex in \[Fe/H\], a conservative distance uncertainty (from the variation in $M_{K}$ with age, $T_{\rm eff}$, and \[Fe/H\]) is $\sim$25%, with the scatter in the distribution of distances expected to be somewhat smaller than this. We note that a handful of the more metal-poor stars cannot be fit to a 10 Gyr isochrone because their values of $T_{\rm eff}$ are cooler than the RGB tip. It is possible that this subset of stars consists of cool AGB stars, as the low-metallicity, low-mass stellar models do not model the end of the AGB well. 
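Once the appropriate $M_{K}$ has been read off the isochrone at the derived $T_{\rm eff}$ and \[Fe/H\], the distance itself follows from the standard distance modulus. A minimal sketch (the function name is ours):

```python
def photometric_distance_kpc(k0, m_k):
    """Heliocentric distance in kpc from the distance modulus k0 - M_K,
    where k0 is the dereddened apparent K_S magnitude and m_k is the
    absolute K magnitude taken from a 10 Gyr isochrone."""
    return 10.0 ** ((k0 - m_k + 5.0) / 5.0) / 1000.0
```

For example, a giant with $(K_{S})_{0}=10$ and $M_{K}=-5$ sits at a distance modulus of 15, i.e., 10 kpc.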
Based on the errors in the derived distances for the standards and the sources mentioned above, we assign an approximate accuracy of 20% to the distances for the program M giants. The distances to all high-resolution program stars range from 0.9 to 8.9 kpc with a mean distance of 4.3 kpc; these distances confirm that the survey sample probes out to as far as roughly 9 kpc from the Sun and, in the mean, samples a good volume of the near side of the Galaxy's thick disk and halo components around the Sun.

\[interp\] Results & Interpretation
===================================

Figure \[tag\] empirically justifies the locations for the blue, green and orange regions that were sketched in Figure \[cartoon\] by plotting observed abundances for samples of halo (blue symbols), Milky Way satellite (green symbols), and disk (orange symbols) stars. As expected, there is significant overlap between these populations. Nevertheless, the dark horizontal and dashed diagonal lines in this plot quite clearly separate an almost pure sample of satellite stars. Hence, we can assign possible origins of stars in our sample based on their location in the \[$\alpha$/Fe\]-\[Fe/H\] plane as follows: (i) stars below the horizontal line and to the left of the diagonal line are very likely to have been [*accreted*]{}; (ii) stars to the right of the diagonal line along the main disk trend may have formed deep within the dark matter halo of the Galactic progenitor, but could also have been accreted from a high mass infalling object (like the LMC or Sgr, which have stars that fall in this region); (iii) stars above the horizontal line along the main halo trend could have been accreted or could have formed early in the life of the Galaxy in either the disk or the halo. Figure \[tife\], with the horizontal and diagonal lines repeated from Figure \[tag\], shows \[Fe/H\] vs. 
\[Ti/Fe\] for the 34 program M giants and 8 of the 9 calibration stars having high-resolution spectra (one calibration star, 1535178+135331, is not included due to a high uncertainty in its \[Ti/Fe\]), along with the literature data for Milky Way field stars. The locations of our RV outliers in Figure \[tife\] can be interpreted in terms of stellar halo formation scenarios:

- Seventeen RV outliers (shown as filled green triangles) fall outside both the disk and halo chemical trends. This suggests a lower limit of 17 [*accreted*]{} stars in our sample.

- Nine of our RV outliers fall along the main Galactic halo trend (above the solid line, and shown as filled green diamonds). Because of the expected chemical overlap in formation scenarios, the origin of stars in this region is ambiguous — they could be *in-situ-halo*, [*kicked-out*]{} or [*accreted*]{}. However, the presence of 17 stars in the [*accreted*]{} region of the abundance plane, which is populated only during the later stages of chemical evolution in a system, suggests that there also must be some stars [*accreted*]{} from the same systems in the main Galactic halo region. This implies that there are fewer than 9 [*in-situ-halo*]{} stars in our sample.

- Eight RV outliers fall along the high-metallicity Galactic disk trend (to the right of the dotted line and shown as filled green squares). The combination of disk-like chemistry and extreme kinematics for these stars suggests membership in the [*kicked-out*]{} population. However, they could be a contaminating contribution from either: (i) a high mass accreted satellite (like Sgr or the LMC); or (ii) the true Galactic disk.

These tentative population assignments for the program M giants are given in the last column of Table 1, where ‘is’ refers to *in-situ-halo*, ‘ko’ refers to [*kicked-out*]{}, and ‘a’ refers to the *accreted* population. 
The five stars with an origin of ‘d’ are red giant thick disk calibration stars, selected based on the fact that their medium-resolution RVs fall along the main thick disk RV trend in Figure \[rvcosb\_3pan\]. We can derive a crude upper-limit to the contamination of our possible “kicked-out” stars due to true Galactic disk members by simply asking what fraction of thick disk stars could be moving at high enough speeds to be in our RV outlier sample. 1. We characterize the motions of the disk population in our M giant sample by finding the value of the asymmetric drift $v_{\rm asymm}$ (apparent in the sinusoidal trend in the top panel of Figure \[rvcosb\_3pan\]) that minimizes the dispersion $\sigma^{\prime}$ of $V^{\prime}=v_{\rm hel}+v_{\rm asymm}\sin(l)\cos(b)$ calculated (iteratively, using a 3-$\sigma$ clipping method to remove outliers) for the medium-resolution heliocentric radial velocities. Figure \[vasymm\] shows $V^\prime$ for $v_{\rm asymm}=55$ km s$^{-1}$, which was found to give the minimum $\sigma^{\prime}=52.5$ km s$^{-1}$. 2. If [*all*]{} 1799 M giants were disk members we would expect 1.2%, or 22 stars, to have $|V^{\prime}| > 2.5\sigma^{\prime}$ (or 131.3 km s$^{-1}$). In fact, there are 113 stars in our medium-resolution sample with such high $V^{\prime}$, which suggests that the majority of these are members of another population (i.e., the stellar halo). 3. This suggests that 22/113=19.5% is a rough upper bound to the fraction of our RV outliers that are true Galactic disk members. Since there are 24 stars in our high-resolution sample with $|V^{\prime}|$ $>$ 131.3 km s$^{-1}$, we estimate at most 5 of these could be true disk members. We actually find that 3 out of the 8 possible [*kicked-out*]{} stars have $|V^{\prime}|$ $>$ 2.5 $\sigma^{\prime}$ (with 2 more falling on the border, as seen in Figure \[vasymm\]); thus, we cannot claim any conclusive evidence of kicked-out stars in our high-resolution RV outlier sample. 
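Steps 1 and 2 of the procedure above can be sketched as follows. The drift grid, the details of the clipping loop, and the closed-form Gaussian tail fraction are our assumptions; the text's values of $v_{\rm asymm}=55$ km s$^{-1}$ and a 1.2% tail beyond 2.5$\sigma$ come out of exactly this kind of calculation.

```python
import numpy as np
from math import erf, sqrt

def fit_asymmetric_drift(v_hel, l_deg, b_deg, drifts=None, n_sigma=3.0):
    """Grid search for the asymmetric drift that minimizes the clipped
    dispersion of V' = v_hel + v_drift * sin(l) * cos(b) (step 1)."""
    if drifts is None:
        drifts = np.arange(0.0, 121.0, 1.0)   # km/s grid (an assumption)
    l, b = np.radians(l_deg), np.radians(b_deg)
    best_vd, best_sd = None, np.inf
    for vd in drifts:
        v = v_hel + vd * np.sin(l) * np.cos(b)
        keep = np.ones(v.size, dtype=bool)
        for _ in range(10):                   # iterative 3-sigma clipping
            mu, sd = v[keep].mean(), v[keep].std()
            new = np.abs(v - mu) <= n_sigma * sd
            if (new == keep).all():
                break
            keep = new
        if sd < best_sd:
            best_vd, best_sd = vd, sd
    return best_vd, best_sd

# Step 2: two-sided Gaussian fraction beyond 2.5 sigma, roughly 1.2%,
# which applied to 1799 stars gives the expected ~22 disk interlopers.
tail_fraction = 1.0 - erf(2.5 / sqrt(2.0))
```

Applying `fit_asymmetric_drift` to a mock disk population with a known drift recovers the input value to within a few km s$^{-1}$.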
Overall, we conclude that RV outliers in our M giant sample appear to be dominated by [*accreted*]{} stars (more than 17), with a possible contribution from [*in situ*]{} stars. \[conc\]Summary, Discussion and Conclusion ========================================== Key Results ----------- In this study, we have analyzed the spectroscopic properties of a sample of M giants that are dominated by members of the Milky Way’s thick disk and nearby halo components. From the RV distribution, we identified stars with RVs that lie outside those expected for typical thick disk stars at the same locations. These RV outliers are found to show some degree of spatial and kinematical coherence. We suppose that some of this coherence could be the signature of substructure (e.g., tidal tails) from accreted satellites. To test our interpretation of the RV outliers, we looked at the chemical abundance patterns of 34 of these stars. We used the locations of the stars in the \[Ti/Fe\] vs. \[Fe/H\] plane to attempt to assign them to one of three potential populations of halo stars – those formed [*in situ*]{}, those that were [*accreted*]{}, and those that were [*kicked out*]{} of the disk. This cannot be done unambiguously in all cases because of expected overlaps in the chemical signatures of these populations. However, the \[Ti/Fe\] abundances for seventeen of the RV outliers are systematically below the main halo trend and similar to the abundances seen in stars from Milky Way dwarf satellite galaxies. They are consistent with stars from MW dwarf satellite galaxies that have been [*accreted*]{} by the Milky Way [@shetrone03; @tolstoy04; @monaco07] and inconsistent with expectations for abundance patterns from pure [*in-situ-halo*]{} or [*kicked-out*]{} growth. Another eight of the stars selected for high-resolution spectroscopic follow-up track the abundance trends set by the disk stars, even though they have halo-like kinematics. 
These are indicative of a population formed in the disk or bulge and subsequently [*kicked-out*]{}, although: (i) some or all may have been [*accreted*]{} from a high-mass infalling object; and (ii) we estimate an upper limit to contamination by genuine disk stars even at these high RVs at a level that does not allow this association to be conclusive. The remainder of the high-resolution stars have high \[Ti/Fe\] at low \[Fe/H\] and could be part of any of the [*in-situ-halo*]{}, [*kicked-out*]{} or the [*accreted*]{} populations. Our results in the context of other studies ------------------------------------------- While it has had a long history [@hartwick87; @zinn93; @zinn96; @sommerlarsen97], the discussion of possible multiple mechanisms for the formation of the stellar halo has been revitalized in recent years both by the advent of large photometric data sets with follow-up low-resolution spectroscopic work [e.g., @carollo07; @carollo10], and more modest, but ever-expanding samples of nearby halo stars with high-resolution spectroscopy [e.g., @nissen10; @nissen11; @schuster12; @ishigaki12]. In several cases, the studies point out classes of halo populations with distinct properties that are argued to match expectations for the properties of populations with distinct formation histories seen in hydrodynamical simulations of galaxy formation: the models generally predict an inner halo (within $\sim$20-30 kpc) dominated by metal-rich stars formed within the main Galactic progenitor and an outer halo dominated by lower-metallicity, accreted stars [@abadi06; @zolotov09; @mccarthy12]. However, comparisons of the models to the data sets up to this point are necessarily inconclusive because the results of the simulations themselves are dependent on the (hard to model!) details of when, how, and where stars form in the main Galactic progenitor. 
For example, @carollo07 [@carollo10] claim evidence for two populations in their sample of 10,123 nearby (within 4 kpc) SDSS calibration stars: one of metal-rich stars (with a metallicity distribution peaking around \[Fe/H\]= -1.6) on only mildly eccentric orbits and a second of metal-poor stars on more eccentric orbits. These observations are very reminiscent of the hydrodynamical simulation results — indeed @carollo10 [*interpret*]{} their metal-rich stars as an inner, [*in-situ-halo*]{}/[*kicked-out*]{} population and their metal-poor stars as an outer, [*accreted*]{} population — but a transition in the orbital and metallicity properties in these populations is not necessarily inconsistent with a purely accreted stellar halo. Moreover, the @carollo07 [@carollo10] interpretation is at odds with our finding of a large fraction of clearly accreted stars in our M giant sample of even higher metallicity stars (\[Fe/H\] $>$ -1.2) than their metal-rich population. In fact, we might expect the M giants to be biased [*towards*]{} finding [*in-situ-halo*]{}/[*kicked-out*]{} stars because late-type giants are a metal-rich stellar population. On the other hand, the selection of RV outliers admits a bias in our sample towards the high velocity tails of all populations — particularly [*accreted*]{} stars, which are expected typically to be on higher energy and higher eccentricity orbits than the other stars. This could explain why our sample is particularly sensitive to the accreted population. Further work is needed to definitively show which of these two biases should dominate in a sample such as ours. Our results are more similar to those of @nissen10 [@nissen11] and @schuster12, who find roughly equal-size populations distinct in age, abundances and orbits when they divide their (metal-rich, \[Fe/H\] $>$ -1.6) stellar halo sample between low-$\alpha$ and high-$\alpha$ stars in the \[Mg/Fe\]-\[Fe/H\] plane. 
They interpret the properties of the low-$\alpha$ population (in which the stars are found also to be younger and on more eccentric orbits) as being consistent with an accreted population — so that these authors find an accreted fraction for the stellar halo at these metallicities similar to our own estimates. It is interesting to note that we also find stars with significantly lower \[$\alpha$/Fe\] (less than zero) than any of the stars in the @nissen10 sample, possibly because our large survey volume (out to 9 kpc from the Sun) encompasses debris from more chemically extreme accreted progenitors not represented in their local sample (which is limited to within 335 pc of the Sun). Conclusion ---------- We explore the properties of relatively nearby, RV-selected halo M giant stars and conclude that the chemical properties of the stars in this sample show tentative evidence of distinct populations with distinct formation mechanisms. While close to 50% of the stars fall in the [*accreted*]{} region of chemical abundance space, a definitive assessment of the relative contributions from [*in-situ-halo*]{}/[*kicked-out*]{} stars is not possible, due to the sometimes ambiguous categorizations of stars based upon their \[$\alpha$/Fe\]-\[Fe/H\] abundance distributions alone. @nissen11 have demonstrated that using abundance patterns along with more complete orbital information can play a key role in making this identification. Our own chemodynamical data point to the importance of mapping a significant volume of the halo to confirm that such local studies are representative of global properties. Follow-up work on the kinematics and more detailed chemical characterization of these stars would give more insight into their origin. Quantifying the size of the [*kicked-out*]{} population more generally could provide valuable constraints on the hydrodynamical models of galaxy formation. 
A key finding is that our selection of M giants with unusual, halo-like RVs picks out a stellar halo population dominated by [*accreted*]{} stars. Hence, searching for accretion events in RV-outlying samples of M giants should be prolific and motivates further interest in the putative groupings of stars found in our RV survey. In a companion paper we explore what more the kinematical properties of these groupings could be telling us about their origins [@johnston12]. The authors thank the referee for his/her helpful comments. Many thanks to Eric Bell, Paul Harding, and Tim Beers for elucidating discussions at the January 2012 meeting of the American Astronomical Society. A.A.S. gratefully acknowledges support from The Vassar College Committee on Research and the Columbia Science Fellows Program. The work of K.V.J. and A.A.S. on this project was funded in part by NSF grants AST-0806558 and AST-1107373. S.R.M. acknowledges partial funding of this work from NSF grant AST-0807945 and NASA/JPL contract 1228235. Abadi, M. G., Navarro, J. F., & Steinmetz, M. 2006, , 365, 747 Asplund, M., Grevesse, N., & Sauval, A. J. 2005, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, 336, 25 Bell, E. F. et al.  2008, , 680, 295 Belokurov, V., et al. 2007, , 658, 337 Bergemann, M. 2011, , 413, 2184 Bergemann, M., Lind, K., Collet, R., & Asplund, M. 2011, Journal of Physics Conference Series, 328, 012002 Bessell, M. S. & Brett, J. M. 1988, , 100, 1134 Bovy, J., Hogg, D. W., & Rix, H.-W. 2009, , 704, 1704 Brewer, M. M. & Carney, B. W. 2006, , 131, 431 Brown, W. R., Geller, M. J., Kenyon, S. J., & Kurtz, M. J. 2006, , 640, L35 Brown, W. R., Beers, T. C., Wilhelm, R., Allende Prieto, C., Geller, M. J., Kenyon, S. J., & Kurtz, M. J. 2008, , 135, 564 Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2001, , 548, 33 Bullock, J. S. & Johnston, K. V. 2005, , 635, 931 Carney, B. W., Laird, J. B., Latham, D. W., & Aguilar, L. A. 1996, , 112, 668 Carollo, D. et al. 
2007, , 450, 1020 Carollo, D., Beers, T. C., Chiba, M., Norris, J. E., Freeman, K. C., Lee, Y. S., Ivezi[ć]{}, [Ž]{}., Rockosi, C. M., & Yanny, B. 2010, , 712, 692 Carpenter, J. M. 2001, , 121, 2851 Chiba, M., & Beers, T. C. 2000, , 119, 2843 Chou, M.-Y., et al. 2007, , 670, 346 Chou, M.-Y., Cunha, K., Majewski, S. R., Smith, V. V., Patterson, R. J., Mart[í]{}nez-Delgado, D., & Geisler, D. 2010a, , 708, 1290 Chou, M.-Y., Majewski, S. R., Cunha, K., Smith, V. V., Patterson, R. J., & Mart[í]{}nez-Delgado, D. 2010b, , 720, L5 Chou, M.-Y., Majewski, S. R., Cunha, K., Smith, V. V., Patterson, R. J., & Mart[í]{}nez-Delgado, D. 2011, , 731, L30 Cooper, A. P., Cole, S., Frenk, C. S., et al. 2010, , 406, 744 Cooper, A. P., Cole, S., Frenk, C. S., & Helmi, A. 2011, , 417, 2206 Crane, J. D., Majewski, S. R., Patterson, R. J., Skrutskie, M. F., Adams, E. Y., & Frinchaboy, P. M. 2005, , 117, 526 Deason, A. J., Belokurov, V., & Evans, N. W. 2011, , 411, 1480 Eggen, O. J., Lynden-Bell, D., & Sandage, A. R. 1962, , 136, 748 Font, A. S., McCarthy, I. G., Crain, R. A., Theuns, T., Schaye, J., Wiersma, R. P. C., & Dalla Vecchia, C. 2011, , 416, 2802 Freeman, K. & Bland-Hawthorn, J. 2002, ARA&A, 40, 287 Fulbright, J. P. 2000, , 120, 1841 Geisler, D., Smith, V. V., Wallerstein, G., Gonzalez, G., & Charbonnel, C. 2005, , 129, 1428 Gilmore, G., & Wyse, R. F. G. 1998, , 116, 748 Girard, T. M., Korchagin, V. I., Casetti-Dinescu, D. I., van Altena, W. F., L[ó]{}pez, C. E., & Monet, D. G. 2006, , 132, 1768 Girardi, L., et al. 2010, , 724, 1030 G[ó]{}mez, F. A., Helmi, A., Brown, A. G. A., & Li, Y.-S. 2010, , 408, 935 Grillmair, C. J. 2006, , 651, L29 Hartwick, F. D. A. 1987, NATO ASIC Proc. 207: The Galaxy, 281 Helmi, A., & White, S. D. M. 1999, , 307, 495 Helmi, A., White, S. D. M., de Zeeuw, P. T., & Zhao, H. 1999, , 402, 53 Helmi, A. & de Zeeuw, P. T. 2000, , 319, 657 Houdashelt, M. L., Bell, R. A., Sweigart, A. V., & Wing, R. F. 2000, , 119, 1424 Ishigaki, M. N., Chiba, M., & Aoki, W. 
2012, , 753, 64 Ivezi[ć]{}, [Ž]{}., et al. 2008, , 684, 287 Johnston, K. V., Bullock, J. S., Sharma, S., Font, A., Robertson, B. E., & Leitner, S. N. 2008, , 689, 936 Johnston, K. V., Sheffield, A. A., Majewski, S. R., Sharma, S., & Rocha-Pinto, H. J. 2012, , 760, 95 Juri[ć]{}, M., et al. 2008, , 673, 864 Kurucz, R. L. 1994, Kurucz CD-ROM 19, Solar Abundance Model Atmospheres (Cambridge: SAO) Majewski, S. R. 1992, , 78, 87 Majewski, S. R., Munn, J. A., & Hawley, S. L. 1994,, 427, L37 Majewski, S. R., Munn, J. A., & Hawley, S. L. 1996, , 459, L73 Majewski, S. R., Skrutskie, M. F., Weinberg, M. D., & Ostheimer, J. C. 2003, , 599, 1082 Majewski, S. R. 2004, , 21, 197 Majewski, S. R. et al. 2004, , 128, 245 Majewski, S. R., Law, D. R., Polak, A. A., & Patterson, R. J. 2006, , 637, L25 Majewski, S. R., Nidever, D. L., Smith, V. V., et al. 2012, , 747, L37 Marigo, P., Girardi, L., Bressan, A., Groenewegen, M. A. T., Silva, L., & Granato, G. L. 2008, , 482, 883 McCarthy, I. G., Font, A. S., Crain, R. A., et al. 2012, , 2263 McWilliam, A. 1997, , 35, 503 Monaco, L., Bellazzini, M., Bonifacio, P., et al. 2005, , 441, 141 Monaco, L., Bellazzini, M., Bonifacio, P., Buzzoni, A., Ferraro, F. R., Marconi, G., Sbordone, L., & Zaggia, S. 2007, , 464, 201 Morrison, H. L. et al. 2009, , 694, 130 Nissen, P. E., & Schuster, W. J. 2010, , 511, L10 Nissen, P. E., & Schuster, W. J. 2011, , 530, A15 Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, , 323, L49 Pomp[é]{}ia, L., Hill, V., Spite, M., et al. 2008, , 480, 379 Purcell, C. W., Bullock, J. S., & Kazantzidis, S. 2010, , 404, 1711 Quinn, P. J., Hernquist, L., & Fullagar, D. P. 1993, , 403, 74 Ram[í]{}rez, S. V., Cohen, J. G., Buss, J., & Briley, M. M. 2001, , 122, 1429 Reddy, B. E., Tomkin, J., Lambert, D. L., & Allende Prieto, C. 2003, , 340, 304 Robertson, B., Bullock, J. S., Font, A. S., Johnston, K. V., & Hernquist, L. 2005, , 632, 872 Rocha-Pinto, H. J., Majewski, S. R., Skrutskie, M. F., & Crane, J. D. 
2003, , 594, L115 Rocha-Pinto, H. J., Majewski, S. R., Skrutskie, M. F., Crane, J. D., & Patterson, R. J. 2004, , 615, 732 Rocha-Pinto, H. J., Majewski, S. R., Skrutskie, M. F., et al. 2006, , 640, L147 Samland, M. & Gerhard, O. E. 2003, , 399, 961 Schlaufman, K. C., Rockosi, C. M., Lee, Y. S., Beers, T. C., & Allende Prieto, C. 2011, , 734, 49 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525 Sch[ö]{}nrich, R., Binney, J., & Dehnen, W. 2010, , 403, 1829 Schuster, W. J., Moreno, E., Nissen, P. E., & Pichardo, B. 2012, , 538, A21 Searle, L. & Zinn, R. 1978, , 225, 357 Sharma, S., Johnston, K. V., Majewski, S. R., Mu[ñ]{}oz, R. R., Carlberg, J. K., & Bullock, J. 2010, , 722, 750 Shetrone, M. D., C[ô]{}t[é]{}, P., & Sargent, W. L. W. 2001, , 548, 592 Shetrone, M., Venn, K. A., Tolstoy, E., Primas, F., Hill, V., & Kaufer, A. 2003, , 125, 684 Skrutskie, M. F., Reber, T. J., Murphy, N. W., & Weinberg, M. D. 2001, , 33, 1437 Smecker-Hane, T. & McWilliam, A. 2002, astro-ph/0205411 Smith, V. V., Lambert, D. L. 1985, , 294, 326 Smith, V. V., Lambert, D. L. 1990, , 72, 387 Smith, V. V., Suntzeff, N. B., Cunha, K., Gallino, R., Busso, M., Lambert, D. L., & Straniero, O. 2000, , 119, 1239 Smith, V. V., Hinkle, K. H., Cunha, K., et al. 2002, , 124, 3241 Sneden, C. A. 1973, Ph.D. Thesis, Univ. Texas at Austin Sommer-Larsen, J., Beers, T. C., Flynn, C., Wilhelm, R., & Christensen, P. R. 1997, , 481, 775 Steinmetz, M. et al. 2006, , 132, 1645 Tissera, P. B., White, S. D. M., & Scannapieco, C. 2012, , 420, 255 Tolstoy, E., et al. 2004, , 617, L119 Walker, I. R., Mihos, J. C., & Hernquist, L. 1996, , 460, 121 Williams, M. E. K., et al. 2011, , 728, 102 Xue, X.-X., et al. 2010, e-prints arXiv:1011.1925 Yanny, B., et al. 2009, , 137, 4377 Zacharias, N., Urban, S. E., Zacharias, M. I., Wycoff, G. L., Hall, D. M., Monet, D. G., & Rafferty, T. J. 2004, , 127, 3043 Zinn, R. 1993, The Globular Cluster-Galaxy Connection, 48, 38 Zinn, R. 
1996, Formation of the Galactic Halo...Inside and Out, 92, 211 Zolotov, A., Willman, B., Brooks, A. M., Governato, F., Brook, C. B., Hogg, D. W., Quinn, T., & Stinson, G. 2009, , 702, 1058 Zolotov, A., Willman, B., Brooks, A. M., Governato, F., Hogg, D. W., Shen, S., & Wadsley, J. 2010, , 721, 738 [c c c c c c c c c c]{} 1521021+082320 & 12.1085 & 49.9949 & 7.458 & 0.969 & 01-Mar-2010 & ECHLR & 62 & -192.9/-183.3 & a\ 1546044+061554 & 14.3431 & 43.6135 & 7.745 & 0.961 & 26-Feb-2010 & ECHLR & 75 & -268.6/-270.6 & a\ 1535178+135331 & 22.2041 & 49.6287 & 6.569 & 1.028 & 30-Mar-2009 & ARCES & 114 & 53.9/61.1 & d\ 1640358+060251 & 22.6837 & 31.7562 & 5.655 & 1.013 & 30-Mar-2009 & ARCES & 80 & 35.4/49.4 & d\ 1509307+252912 & 37.8279 & 59.1097 & 8.680 & 0.991 & 25-Feb-2010 & ECHLR & 84 & -231.6/-231.6 & a\ 1546468+270052 & 43.1496 & 51.1734 & 6.965 & 0.972 & 26-Feb-2010 & ECHLR & 98 & -217.3/-219.6 & a\ 1530257+323409 & 51.7506 & 55.2995 & 5.997 & 1.053 & 30-Mar-2009 & ARCES & 138 & -8.6/-4.6 & d\ 1719596+301146 & 53.2262 & 31.8831 & 6.863 & 0.919 & 02-Mar-2010 & ECHLR & 52 & -30.1/-27.0 & d\ 1731343+401543 & 65.3999 & 31.7765 & 5.335 & 1.019 & 26-Feb-2010 & ECHLR & 111 & -28.2/-27.8 & d\ 0913092+450916 & 175.2015 & 43.4028 & 8.045 & 0.937 & 26-Feb-2010 & ECHLR & 105 & -107.0/-108.2 & a\ 0807206+435418 & 176.0975 & 31.6497 & 6.552 & 0.937 & 26-Feb-2010 & ECHLR & 123 & -122.5/-131.7 & ko\ 0935556+414046 & 179.7566 & 47.7631 & 6.179 & 0.920 & 26-Feb-2010 & ECHLR & 174 & -252.7/-248.3 & a\ 0903471+414944 & 179.7632 & 41.7691 & 6.322 & 0.955 & 26-Feb-2010 & ECHLR & 195 & -204.8/-210.6 & a\ 0830244+283812 & 194.9298 & 33.0562 & 6.585 & 0.973 & 25-Feb-2010 & ECHLR & 174 & 264.5/269.9 & a\ 1011498+230504 & 210.2972 & 53.7906 & 7.908 & 0.934 & 02-Mar-2010 & ECHLR & 89 & 238.4/242.4 & a\ 0918573+145749 & 215.4937 & 39.3263 & 8.780 & 0.908 & 02-Mar-2010 & ECHLR & 79 & 229.9/235.2 & is/ko/a\ 0938204+112950 & 222.2501 & 42.1710 & 8.349 & 0.980 & 25-Feb-2010 & ECHLR & 97 & 264.2/265.6 
& a\ 1001593+114507 & 225.6258 & 47.4509 & 8.399 & 0.932 & 02-Mar-2010 & ECHLR & 77 & 179.2/183.2 & is/ko/a\ 0932038+055521 & 227.7177 & 38.1643 & 7.397 & 1.073 & 02-Mar-2010 & ECHLR & 82 & 190.3/171.7 & ko\ 0950147+082356 & 227.8484 & 43.3082 & 4.981 & 1.154 & 02-Mar-2010 & ECHLR & 107 & 143.8/167.4 & ko\ 1101590+084043 & 243.2806 & 58.2289 & 8.748 & 0.931 & 02-Mar-2010 & ECHLR & 73 & 263.8/263.9 & a\ 1024571+004900 & 243.5677 & 46.1144 & 7.173 & 1.054 & 25-Feb-2010 & ECHLR & 109 & 304.5/314.1 & a\ 1101024-003004 & 254.5704 & 51.6925 & 7.208 & 0.923 & 25-Feb-2010 & ECHLR & 172 & 264.9/267.0 & is/ko/a\ 1115376+000800 & 258.5365 & 54.5283 & 8.248 & 1.037 & 02-Mar-2010 & ECHLR & 72 & 196.4/194.6 & ko\ 1054035-065124 & 258.7632 & 45.7090 & 7.355 & 1.024 & 25-Feb-2010 & ECHLR & 154 & 267.1/269.9 & is/ko/a\ 1037414-121048 & 259.0329 & 39.0328 & 5.977 & 1.142 & 25-Feb-2010 & ECHLR & 185 & 248.2/257.1 & ko\ 1136527+025949 & 263.2681 & 59.9961 & 7.387 & 0.988 & 25-Feb-2010 & ECHLR & 117 & 162.1/159.3 & a\ 1145072+013727 & 268.2122 & 59.9476 & 7.274 & 1.029 & 26-Feb-2010 & ECHLR & 105 & 146.6/145.9 & a\ 1105038-164000 & 269.3479 & 39.1698 & 7.386 & 0.980 & 02-Mar-2010 & ECHLR & 89 & 349.7/354.6 & is/ko/a\ 1131243-055825 & 269.6677 & 51.6539 & 6.526 & 1.158 & 26-Feb-2010 & ECHLR & 129 & 123.2/137.9 & is/ko/a\ 1206183-035045 & 281.8293 & 57.1651 & 6.521 & 1.095 & 25-Feb-2010 & ECHLR & 151 & 220.6/238.9 & a\ 1313176-164220 & 310.4547 & 45.8468 & 8.848 & 0.942 & 01-Apr-2009 & ARCES & 82 & 113.7/96.9 & ko\ 1314125-132352 & 311.4065 & 49.0990 & 7.125 & 0.943 & 01-Apr-2009 & ARCES & 132 & 99.3/98.9 & ko\ 1334567-055158 & 322.2190 & 55.3680 & 6.968 & 1.035 & 01-Apr-2009 & ARCES & 100 & 71.5/77.6 & ko\ 1355160-053350 & 330.5657 & 53.8480 & 7.352 & 1.057 & 02-Mar-2010 & ECHLR & 86 & 91.9/96.2 & is/ko/a\ 1442035-162350 & 337.8142 & 38.8681 & 7.298 & 1.050 & 02-Mar-2010 & ECHLR & 73 & 152.2/145.9 & is/ko/a\ 1404515-004157 & 338.2993 & 57.0434 & 6.590 & 1.109 & 02-Mar-2010 & ECHLR & 73 
& 114.9/121.4 & is/ko/a\ 1431302-051730 & 343.4025 & 49.5528 & 8.156 & 0.909 & 01-Apr-2009 & ARCES & 62 & 162.0/156.5 & a\ 1518065-115536 & 350.0932 & 37.1732 & 7.588 & 1.045 & 30-Mar-2009 & ARCES & 91 & 185.2/183.0 & a\ [ccccccccccccccc]{} 1521021+082320 & 53.5 & 34.5 & 87.7 & ... & ... & 162.4 & 100.1 & 56.9 & 20.0 & 94.2 & 147.0 & 26.7 & 56.5 & 42.1\ 1546044+061554 & 50.0 & 34.2 & 86.1 & 33.8 & 66.2 & 171.6 & 98.2 & 51.0 & ... & 92.0 & 144.4 & 27.9 & 48.6 & 33.5\ 1535178+135331 & 54.3 & 52.1 & 104.5 & 51.7 & ... & 168.5 & 110.3 & 58.3 & 32.0 & 99.0 & 156.6 & 26.7 & 104.7 & ...\ 1640358+060251 & 83.4 & 59.5 & 109.8 & 65.6 & 101.0 & 193.0 & 120.8 & 67.5 & 31.0 & 108.4 & 159.8 & 77.7 & 113.8 & 91.6\ 1509307+252912 & 40.1 & 31.2 & 80.4 & 28.1 & 52.6 & ... & 94.2 & 39.2 & 17.8 & ... & 135.4 & 17.2 & 37.5 & ...\ 1546468+270052 & 64.0 & 36.8 & 98.5 & 45.4 & 87.2 & 180.6 & 115.3 & 58.1 & 31.5 & 108.9 & 157.5 & 43.2 & 72.4 & 57.5\ 1530257+323409 & 63.0 & 47.1 & 103.8 & 52.6 & 70.6 & 171.1 & 101.3 & 60.0 & 24.2 & 94.6 & 147.0 & 89.0 & 115.0 & 102.5\ 1719596+301146 & 71.3 & 57.0 & 101.4 & ... & 89.8 & 174.5 & 114.9 & 65.0 & 31.8 & 109.5 & 155.2 & 63.2 & 117.7 & 86.0\ 1731343+401543 & 88.0 & 60.4 & 130.1 & 71.4 & 108.5 & 201.0 & 130.8 & 76.0 & 35.9 & 123.2 & 175.6 & 95.7 & 138.2 & 107.8\ 0913092+450916 & 62.9 & 41.6 & 90.3 & 40.9 & 77.0 & 166.5 & 108.0 & 47.9 & ... & 99.8 & 143.9 & 27.0 & 60.8 & 43.0\ 0807206+435418 & ... & 60.0 & 131.2 & 77.0 & 113.3 & 207.9 & 137.5 & 79.0 & 45.4 & 130.1 & 186.3 & 99.9 & 136.2 & 110.8\ 0935556+414046 & 46.2 & 26.3 & 75.1 & 27.3 & 59.4 & 154.2 & 88.3 & 45.0 & 19.9 & 82.2 & 130.8 & 15.4 & 35.9 & 25.5\ 0903471+414944 & 60.0 & 43.5 & 91.8 & 50.1 & 77.0 & 161.1 & 102.0 & 47.3 & 20.0 & 96.4 & 140.8 & 54.2 & 83.8 & 70.3\ 0830244+283812 & 49.0 & 34.5 & 85.0 & 36.3 & 66.3 & 159.3 & 98.1 & 46.8 & 14.2 & 91.8 & 145.1 & 27.4 & 52.7 & 48.4\ 1011498+230504 & 55.1 & 34.0 & 94.5 & 40.0 & 73.0 & 168.0 & 114.0 & ... 
& 21.0 & 93.1 & 153.9 & 30.7 & 63.0 & 49.0\ 0918573+145749 & 53.6 & 21.1 & 76.1 & ... & 64.0 & 163.1 & 88.7 & 36.7 & 12.6 & 88.3 & ... & 19.1 & 39.3 & 22.9\ 0938204+112950 & 60.5 & 37.4 & 94.6 & 39.1 & 78.5 & 165.4 & 107.3 & 54.4 & 21.6 & 105.4 & ... & 37.1 & 76.5 & 65.4\ 1001593+114507 & 66.8 & 44.3 & 94.3 & ... & 87.0 & 182.6 & 109.5 & 55.0 & 17.4 & 97.3 & 150.9 & 53.6 & 93.0 & 83.2\ 0932038+055521 & 67.5 & 45.0 & 94.1 & 43.8 & 74.7 & 162.9 & 102.3 & 59.7 & 33.8 & 98.7 & 149.0 & 77.5 & 105.7 & 97.8\ 0950147+082356 & 55.6 & 49.5 & 105.5 & 47.3 & ... & 156.7 & ... & 55.9 & 22.6 & 90.2 & 140.3 & 84.7 & 109.2 & 96.1\ 1101590+084043 & 41.9 & ... & 71.8 & ... & 60.0 & 143.0 & 100.7 & 44.2 & 18.7 & 80.6 & 147.2 & ... & 43.8 & 33.0\ 1024571+004900 & 62.5 & 37.3 & 105.8 & 41.6 & 74.3 & 169.5 & 114.8 & 60.0 & 21.7 & 95.4 & 165.0 & 52.8 & 84.5 & 69.0\ 1101024-003004 & 42.2 & 25.1 & 81.8 & 24.8 & 61.3 & 158.7 & 92.3 & ... & ... & 80.4 & 139.1 & 16.5 & 41.0 & 30.4\ 1115376+000800 & 63.6 & 45.5 & 95.3 & ... & 74.0 & 162.8 & ... & 60.2 & 26.5 & 96.4 & 145.5 & 74.2 & 104.7 & 94.3\ 1054035-065124 & 59.1 & 35.4 & 97.0 & 40.8 & 74.0 & 158.0 & 93.8 & 53.7 & 17.5 & 89.1 & 147.6 & 58.2 & 95.4 & 81.4\ 1037414-121048 & 55.8 & 45.0 & 102.7 & 39.6 & 70.3 & 151.8 & 96.6 & 63.0 & 21.2 & 94.2 & ... & 85.2 & 105.4 & 93.5\ 1136527+025949 & 44.5 & 26.3 & 81.3 & 27.4 & 66.2 & 162.8 & 95.9 & 35.8 & 15.6 & 85.6 & 146.6 & 20.6 & 45.5 & 35.7\ 1145072+013727 & 55.6 & ... & 99.9 & ... & 85.2 & 171.2 & 110.0 & 60.9 & 25.5 & 93.4 & 145.5 & 67.2 & 92.8 & 82.7\ 1105038-164000 & 48.0 & 20.9 & 86.2 & 21.7 & 76.2 & 175.3 & 106.6 & 37.9 & ... & 87.3 & 150.1 & 30.4 & 62.1 & 45.5\ 1131243-055825 & 45.2 & 28.2 & 96.0 & 29.1 & 67.3 & 134.5 & ... 
& 50.2 & 21.9 & 71.7 & 132.1 & 84.8 & 107.2 & 93.2\ 1206183-035045 & 49.6 & 36.9 & 94.3 & 42.5 & 72.4 & 158.5 & 101.6 & 56.5 & 19.6 & 99.7 & 152.4 & 60.0 & 91.7 & 80.0\ 1313176-164220 & 66.9 & 50.1 & 103.6 & 53.5 & 87.3 & 194.6 & 115.5 & 66.3 & 43.8 & 103.6 & 175.1 & 72.3 & 110.0 & 96.8\ 1314125-132352 & 66.7 & 43.0 & 93.0 & 52.1 & 84.1 & 173.5 & 106.4 & 56.5 & 26.7 & 97.2 & 147.2 & 58.7 & 95.3 & 75.1\ 1334567-055158 & 65.8 & 50.7 & 95.7 & 48.5 & 79.9 & 170.6 & 110.7 & 67.2 & 25.2 & 97.1 & 149.7 & 85.2 & 113.7 & 98.3\ 1355160-053350 & 50.0 & 32.0 & 73.1 & 25.1 & 57.0 & 146.5 & 89.3 & ... & ... & 80.6 & 149.4 & 53.8 & 69.0 & 64.4\ 1442035-162350 & 42.5 & 33.4 & 96.3 & 32.4 & 64.9 & 154.1 & 91.9 & 46.2 & 13.3 & 87.3 & 133.6 & 40.5 & 74.6 & 60.9\ 1404515-004157 & 56.2 & 33.2 & 112.4 & 45.1 & 70.6 & 171.2 & 113.5 & 63.2 & 20.6 & 104.3 & 167.4 & 84.1 & 113.7 & 91.7\ 1431302-051730 & 46.8 & 28.5 & 90.5 & 31.3 & 70.3 & 164.4 & 104.6 & 39.5 & ... & ... & 143.6 & ... & 39.2 & 31.1\ 1518065-115536 & 55.7 & 44.5 & 98.3 & 41.9 & 75.6 & 161.5 & 103.4 & 56.7 & ... & 100.0 & 136.8 & 55.8 & 84.8 & 69.4\ Arcturus (APO) & 72.0 & 45.9 & 91.7 & ... & 96.1 & 193.4 & 116.9 & 56.5 & 29.4 & 108.1 & 150.8 & 38.1 & 77.7 & 57.6\ Arcturus (KPNO) & 65.5 & 45.5 & 94.6 & 31.0 & 89.3 & 185.0 & 115.8 & 48.0 & 31.6 & 106.6 & 148.4 & 35.9 & 72.9 & 54.1\ $\alpha$ Tau (KPNO n1) & 90.3 & 63.4 & 120.8 & 62.3 & 115.4 & 212.7 & 132.4 & 77.3 & 45.3 & 129.0 & 179.0 & 99.6 & 127.9 & 109.1\ $\alpha$ Tau (KPNO n4) & 95.7 & 65.8 & 120.0 & 73.5 & 113.9 & 211.0 & 132.2 & 78.5 & 45.8 & 121.6 & 181.0 & 90.4 & 124.4 & 104.0\ $\delta$ Vir (APO) & 83.0 & 62.7 & 119.9 & 73.8 & 103.1 & 193.3 & ... & 76.4 & 40.1 & 112.2 & 164.8 & 99.3 & 126.8 & 121.3\ $\delta$ Vir (KPNO) & 76.1 & 54.3 & 117.4 & 68.0 & 95.6 & 188.5 & ... 
& 69.7 & 44.4 & 112.0 & 164.2 & 99.9 & 131.5 & 120.6\ $\nu$ Vir (APO) & 73.4 & 50.7 & 101.7 & 59.2 & 85.7 & 171.8 & 110.7 & 60.9 & 29.3 & 98.7 & 155.4 & 76.3 & 109.3 & 93.4\ $\nu$ Vir (KPNO) & 66.9 & 51.2 & 97.0 & 57.2 & 82.8 & 175.3 & 107.4 & 60.3 & 25.4 & 93.0 & 149.9 & 76.4 & 103.9 & 92.5\ [ccccccccccccccc]{} 1521021+082320 & 3900 & 0.4 & 1.49 & 6.50 & -0.95 & 0.07 & 3.85 & -0.10 & 0.11 & 5.1\ 1546044+061554 & 3900 & 0.5 & 1.58 & 6.44 & -1.01 & 0.11 & 3.76 & -0.13 & 0.17 & 5.9\ 1535178+135331 & 3800 & 0.8 & 1.38 & 7.10 & -0.35 & 0.14 & 4.14 & -0.41 & 0.45 & 2.3\ 1640358+060251 & 3800 & 0.9 & 1.54 & 7.30 & -0.15 & 0.16 & 4.61 & -0.14 & 0.04 & 1.3\ 1509307+252912 & 3900 & 0.3 & 1.33 & 6.34 & -1.11 & 0.13 & 3.56 & -0.23 & 0.14 & 8.6\ 1546468+270052 & 3900 & 0.6 & 1.62 & 6.77 & -0.68 & 0.09 & 4.08 & -0.14 & 0.14 & 3.3\ 1530257+323409 & 3800 & 0.6 & 1.38 & 6.90 & -0.55 & 0.15 & 4.70 & 0.35 & 0.12 & 2.2\ 1719596+301146 & 3900 & 1.2 & 1.38 & 7.30 & -0.15 & 0.10 & 4.74 & -0.01 & 0.12 & 1.8\ 1731343+401543 & 3800 & 1.1 & 1.72 & 7.44 & -0.01 & 0.14 & 4.88 & -0.01 & 0.04 & 0.9\ 0913092+450916 & 3950 & 0.7 & 1.48 & 6.74 & -0.71 & 0.10 & 3.96 & -0.23 & 0.08 & 4.9\ 0807206+435418 & 3900 & 1.3 & 1.72 & 7.52 & 0.07 & 0.13 & 5.01 & 0.04 & 0.07 & 1.3\ 0935556+414046 & 4000 & 0.5 & 1.33 & 6.37 & -1.08 & 0.11 & 3.73 & -0.09 & 0.13 & 2.6\ 0903471+414944 & 3900 & 0.6 & 1.42 & 6.71 & -0.74 & 0.14 & 4.29 & 0.13 & 0.13 & 2.5\ 0830244+283812 & 3950 & 0.2 & 1.55 & 6.37 & -1.08 & 0.10 & 3.95 & 0.13 & 0.18 & 3.2\ 1011498+230504 & 4000 & 0.7 & 1.63 & 6.57 & -0.88 & 0.08 & 4.10 & 0.08 & 0.12 & 5.0\ 0918573+145749 & 4000 & 0.4 & 1.79 & 6.19 & -1.26 & 0.14 & 3.75 & 0.11 & 0.16 & 8.6\ 0938204+112950 & 3900 & 0.5 & 1.56 & 6.63 & -0.82 & 0.08 & 4.11 & 0.03 & 0.13 & 6.8\ 1001593+114507 & 4000 & 0.5 & 1.75 & 6.50 & -0.95 & 0.13 & 4.47 & 0.52 & 0.15 & 6.6\ 0932038+055521 & 3700 & 0.9 & 1.20 & 7.33 & -0.12 & 0.11 & 4.73 & -0.05 & 0.13 & 1.6\ 0950147+082356 & 3600 & 0.5 & 1.35 & 7.12 & -0.33 & 0.16 & 
4.52 & -0.05 & 0.10 & 1.6\ 1101590+084043 & 4000 & 0.4 & 1.47 & 6.31 & -1.14 & 0.13 & 3.82 & 0.06 & 0.16 & 8.9\ 1024571+004900 & 3800 & 0.3 & 1.65 & 6.57 & -0.88 & 0.09 & 4.05 & 0.03 & 0.13 & 4.7\ 1101024-003004 & 4000 & 0.3 & 1.59 & 6.21 & -1.24 & 0.10 & 3.83 & 0.17 & 0.12 & 4.2\ 1115376+000800 & 3800 & 0.6 & 1.32 & 6.91 & -0.54 & 0.08 & 4.55 & 0.19 & 0.11 & 6.2\ 1054035-065124 & 3800 & 0.0 & 1.42 & 6.46 & -0.99 & 0.13 & 4.25 & 0.34 & 0.10 & 4.9\ 1037414-121048 & 3650 & 0.1 & 1.28 & 6.82 & -0.63 & 0.12 & 4.41 & 0.14 & 0.13 & 3.0\ 1136527+025949 & 3950 & 0.4 & 1.69 & 6.27 & -1.18 & 0.10 & 3.79 & 0.07 & 0.14 & 4.8\ 1145072+013727 & 3800 & 0.4 & 1.49 & 6.71 & -0.74 & 0.08 & 4.27 & 0.11 & 0.16 & 4.5\ 1105038-164000 & 4000 & 0.3 & 2.05 & 6.12 & -1.33 & 0.08 & 3.88 & 0.31 & 0.13 & 4.8\ 1131243-055825 & 3650 & 0.0 & 1.19 & 6.54 & -0.91 & 0.13 & 4.40 & 0.41 & 0.10 & 3.5\ 1206183-035045 & 3700 & 0.1 & 1.47 & 6.57 & -0.88 & 0.10 & 4.06 & 0.04 & 0.12 & 3.7\ 1313176-164220 & 3900 & 1.0 & 1.58 & 7.13 & -0.32 & 0.15 & 4.66 & 0.08 & 0.10 & 5.4\ 1314125-132352 & 3950 & 0.9 & 1.40 & 6.92 & -0.53 & 0.11 & 4.50 & 0.13 & 0.07 & 2.8\ 1334567-055158 & 3750 & 0.8 & 1.36 & 7.16 & -0.29 & 0.09 & 4.69 & 0.08 & 0.08 & 2.8\ 1355160-053350 & 3800 & 0.1 & 1.47 & 6.28 & -1.17 & 0.16 & 3.98 & 0.25 & 0.19 & 4.7\ 1442035-162350 & 3800 & 0.2 & 1.50 & 6.36 & -1.09 & 0.12 & 3.93 & 0.12 & 0.11 & 4.8\ 1404515-004157 & 3700 & 0.1 & 1.76 & 6.56 & -0.89 & 0.10 & 4.23 & 0.22 & 0.12 & 3.6\ 1431302-051730 & 4000 & 0.5 & 1.69 & 6.36 & -1.09 & 0.08 & 3.75 & -0.06 & 0.19 & 6.8\ 1518065-115536 & 3800 & 0.5 & 1.28 & 6.83 & -0.62 & 0.10 & 4.19 & -0.09 & 0.10 & 4.8\ Arcturus (APO) & 4250 & 1.4 & 1.73 & 6.88 & -0.57 & 0.09 & 4.62 & 0.29 & 0.10 &\ Arcturus (KPNO) & 4250 & 1.4 & 1.76 & 6.82 & -0.63 & 0.09 & 4.56 & 0.29 & 0.11 &\ $\alpha$ Tau(KPNO n1) & 3900 & 1.3 & 1.68 & 7.50 & 0.05 & 0.11 & 4.98 & 0.03 & 0.12 &\ $\alpha$ Tau(KPNO n4) & 3900 & 1.3 & 1.50 & 7.62 & 0.17 & 0.13 & 4.99 & -0.08 & 0.06 &\ $\delta$ 
Vir(KPNO) & 3650 & 0.8 & 1.29 & 7.63 & 0.18 & 0.14 & 5.11 & 0.07 & 0.11 &\ $\delta$ Vir(APO) & 3700 & 0.8 & 1.35 & 7.62 & 0.17 & 0.14 & 5.05 & 0.03 & 0.15 &\ $\nu$ Vir(KPNO) & 3800 & 0.8 & 1.40 & 7.11 & -0.34 & 0.15 & 4.58 & 0.02 & 0.12 &\ $\nu$ Vir(APO) & 3800 & 0.8 & 1.36 & 7.19 & -0.26 & 0.14 & 4.64 & 0.03 & 0.07 &\ [ccccc]{} Arcturus & -0.60$\pm$0.09 & -0.67$\pm$0.11 & 0.29$\pm$0.11 & 0.41$\pm$0.12\ $\alpha$ Tau & 0.11$\pm$0.10 & 0.07 & -0.03$\pm$0.08 & -0.03\ $\nu$ Vir & -0.30$\pm$0.15 & -0.02$\pm$0.20 & 0.03$\pm$0.20 & 0.06$\pm$0.17\ $\delta$ Vir & 0.18$\pm$0.17 & 0.13$\pm$0.17 & 0.05$\pm$0.13 & 0.07$\pm$0.20\ [^1]: There are also two classes of stars, hypervelocity and runaway O-B stars, that are similar to this population [see @brown06 and references therein]. However, these are due to rare events within a binary system (e.g., ejection due to collisions within the system or an interaction with the Milky Way’s central supermassive black hole) and we would not expect these classes to contribute a large fraction of halo stars. [^2]: IRAF (Image Reduction and Analysis Facility) is distributed by NOAO, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the NSF. [^3]: The full medium-resolution radial velocity catalog is available upon request. [^4]: see http://pleiadi.pd.astro.it/
---
abstract: 'We study the Autler Townes (AT) effect produced when a strong electric dipole transition is driven by a resonant laser beam, probing it by means of a weak electric quadrupole transition with a controlled frequency detuning in a ladder configuration. The experiment was carried out for a $^{87}$Rb atomic gas at room temperature in a velocity-selective scheme. The AT effect was monitored via the splitting of the fluorescence spectra associated with the spontaneous decay to the ground state. The theoretical description incorporates the modification of standard few-level schemes introduced by the selection rules of electric-dipole forbidden transitions. We develop an analytic ladder three-level scheme to approximate the cyclic $5\mathrm{S}_{1/2} \mathrm{F}=2\rightarrow 5\mathrm{P}_{3/2} \mathrm{F}=3 \rightarrow 6\mathrm{P}_{3/2} \mathrm{F}=1,2,3\rightarrow 5 \mathrm{S}_{1/2}\mathrm{F}=2$ path. Other levels that could affect the fluorescence are included via a fourth level with effective parameters. Doppler effects and the finite bandwidths of the laser beams are included in the theoretical model to closely reproduce the experimental results.'
author:
- 'F. Ramírez-Martínez'
- 'F. Ponciano-Ojeda'
- 'S. Hernández-Gómez'
- 'A. Del Angel'
- 'C. Mojica-Casique'
- 'L.M. Hoyos-Campo'
- 'J. Flores-Mijangos'
- 'D. Sahagún'
- 'R. Jáuregui'
- 'J. Jiménez-Mier'
title: 'Use of an electric-dipole forbidden transition to optically probe the Autler Townes effect'
---

Introduction
============

Five decades of mastering the use of laser light have yielded highly sophisticated techniques for preparing quantum states in systems of very different natures. Neutral atoms have been one of the physical scenarios in which a high level of control has been successfully achieved. This is due to the relative simplicity with which it is possible to observe matter-wave coherence at levels ranging from hot atomic samples [@Biedermann:2017] to Bose-Einstein condensates (BEC) [@Bloch:2008]. 
A range of sub-Doppler spectroscopy techniques constitute the foundation stones on which even the most sophisticated quantum manipulation experiments are based [@Wieman:1976ge; @Preston:1996; @Pearman2002; @Corwin:1998era]. Desirable advances in these techniques demand sub-megahertz or even finer resolution in the control of the atomic states, with methods that would preferably minimize collateral modifications to the system. Here we report the realization of two combined atomic-state processes that deliver a technique fulfilling these conditions under real experimental circumstances. On the one hand, effects driven by AC fields coupling atomic states, such as the Hanle effect [@Alnis:2003], coherent population trapping (CPT) [@Gray:1978jm], electromagnetically induced transparency (EIT) [@Fleischhauer:2005] or the Autler-Townes splitting (ATS) [@Autler:1955gb; @Picque1976; @Zhang:2010; @Moreno:2019] are potential sources of unprecedented ways to prepare atomic states, because they are consequences of perturbations of quantum systems that may modify their energy levels in rather subtle ways. The ATS, for example, is a manifestation of AC Stark shifts induced by a near-resonant optical field that can be neatly controlled in contemporary atomic physics laboratories. The first demonstration of the ATS was reported together with a theoretical description of the phenomenon [@Autler:1955gb]. Further theoretical analysis has enabled its unambiguous distinction from other effects that appear in the same systems, such as EIT [@Anisimov:2011fn], which originates from Fano interference among different transition pathways. This has motivated a whole series of theoretical and experimental works focused on developing discrimination protocols for specific systems in response to given scientific needs [@AbiSalloum:2010eg; @Sun:2014kh; @Wang:2015ho]. 
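The AC Stark picture behind the ATS can be made concrete with the standard dressed-state result for a two-level transition driven at Rabi frequency $\Omega$ and detuning $\delta$: the bare levels split into a doublet,

```latex
E_{\pm} = \frac{\hbar}{2}\left(-\delta \pm \sqrt{\delta^{2}+\Omega^{2}}\right),
\qquad
E_{+} - E_{-} = \hbar\sqrt{\delta^{2}+\Omega^{2}}
\;\xrightarrow{\;\delta = 0\;}\; \hbar\Omega ,
```

so that on resonance the separation of the AT doublet directly measures the Rabi frequency, i.e. the strength of the coupling field.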
Alkali atoms have been the preferred physical system for studying these effects because their internal states are easily manipulated and they can even be laser-cooled and trapped. Researchers have performed experiments for more than two decades, motivated by applications ranging from fundamental science to basic technology that could even yield everyday devices in the long term. In experiments with ultracold atoms or BECs, local control of the atomic interactions has become experimentally more feasible by tuning Feshbach resonances via an AT doublet [@Bauer:2009fqa]. AT doublets have been shown to be an excellent ruler for measuring the coupling between the atoms and the pump beam in Stimulated Raman Adiabatic Passage (STIRAP) processes used to accurately produce Rydberg states in cold atoms [@Cubel:2005cf]. More recently, AT splitting has been demonstrated as a means of light absorption and retrieval in atom-based quantum memories [@Saglamyurek:2018cm]. On the other hand, atomic state preparation has predominantly been performed using transitions driven by the first term of the interaction with the electric component of the laser field. There is, however, a second term giving rise to less probable transitions that are related to the valence-electron electric quadrupole field. These so-called forbidden transitions have also attracted attention within several research fields. They became interesting to cosmologists for understanding the microwave background due to recombination of hydrogen and helium in the early universe [@KAPLAN:1939hw]. Quadrupole ($E2$) transitions have also been useful to shed light on parity violation in fundamental physics, due to the asymmetry in absorption and fluorescence spectra that can be explained by exchanges of weak neutral Z$^0$ bosons between the electrons and the nucleus of the atom [@Bouchiat:1997kka]. 
The long-lived states that may be reached via forbidden transitions are attractive for error correction on quantum bits [@Langer:2005hn; @Preskill:1998gf]. Narrow dipole lines that can be cleanly excited without strong effects from neighboring transitions have turned out to be challenging to find. A promising alternative is the use of forbidden transitions in lattice-based atomic clocks [@Taichenachev:2006is]. This has motivated a number of noteworthy experiments in which $E2$ transitions were observed in laser-cooled ions [@Rafac:2000gi] and neutral atoms [@Bhattacharya:2003io]. Nowadays experiments are reaching such levels of subtlety that higher degrees of control are demanded in the preparation of atomic states. Further research has been required to complement the first few experimental works done on the absorption spectra of $E2$ transitions of the alkali atoms [@Weber:1987ix; @Tojo:2004ft; @Vadla:2001fr]. Velocity-selective spectroscopy with diode lasers is an ideal framework to do this due to its simplicity. Our research group has reported a technique capable of resolving hyperfine levels together with their magnetic spin projection in atomic Rubidium at room temperature [@PoncianoOjeda:2015cf; @PoncianoOjeda:2018fs]. Chan *et al.* developed a similar velocity-selective spectroscopic method to resolve $E2$ Cesium transitions [@Chan:16]. Here we report a detailed experimental and theoretical study of the ATS generated by strongly coupling an electric dipole transition, probed with the help of a weak, non-perturbative electric quadrupole transition in a gas of atomic Rubidium at room temperature. To our knowledge there is only one instance in which a dipole-forbidden transition is used to probe the ATS; in [@Bhattacharya:2003io], this process is studied in a Sodium magneto-optical trap (MOT) with photoionization detection. In the present work the ATS is non-destructively probed by detecting a fluorescence decay. 
In addition, we provide a full theoretical treatment consisting of a master equation model that achieves good agreement with our experiments. A first novelty of the treatment presented in this article is a scheme of three atomic levels forming a ladder configuration, where the coupling field excites state $\vert 1\rangle$ to the state $\vert 2\rangle$, and the second resonance is excited by a weak probe beam through a forbidden transition \[Fig. \[fig:Rbenlvl\] (a)\]. The atomic structure of Rubidium presents other significant options besides the $\vert 3\rangle \rightarrow \vert 1\rangle$ transition for the state decay path \[Fig. \[fig:Rbenlvl\] (b)\] that exhibit atomic coherence phenomena worthy of further research. However, we found that a model in which a fourth (dump) level is added \[Fig. \[fig:Rbenlvl\] (c)\] fits our experimental results rather well. These three models are described in detail in the theoretical Section of this paper. We include a careful velocity-selective analysis, which we found to be vital to understand many subtleties observed in the fluorescence spectra. In a similar fashion as in Ref. [@Finkelstein:wc], where a power narrowing mechanism was recently reported in two-color two-photon excitation in ladder systems, our model can resolve subtle experimental parameters like the bandwidth of our excitation lasers, which is of a few . Additionally, we surprisingly found that this ATS may be observed with $90\%$ of the atoms populating the ground state level. Thus we believe that further, improved implementations of this protocol can serve as a sensitive gauge of specific state characteristics for versatile systems such as cold and ultra-cold atomic ensembles. This gauge has minimal perturbing consequences on the atomic system, since quadrupole transitions are typically six orders of magnitude weaker than the dipole transitions normally used to induce ATS. 
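The velocity-selective aspect mentioned above comes down to simple Doppler bookkeeping: an atom moving with velocity $v$ along the beams sees each laser detuned by $-kv$, so a detuned coupling laser addresses a single velocity class, which in turn sees the probe Doppler-shifted. A minimal numerical sketch (co-propagating beams and a 50 MHz coupling detuning are assumptions for illustration, not the experimental values):

```python
# Doppler selection in the two-color ladder scheme (illustrative numbers).
# The resonant velocity class for a coupling laser detuned by delta_c is
# v = delta_c / k_c; that class sees the probe shifted by k_p * v.
lam_c, lam_p = 780.242e-9, 911.075e-9   # coupling and probe wavelengths (m)
k_c, k_p = 1.0 / lam_c, 1.0 / lam_p     # wavenumbers in cycles per metre
delta_c = 50e6                          # assumed coupling detuning: 50 MHz
v_selected = delta_c / k_c              # selected velocity class (m/s)
probe_shift = k_p * v_selected          # probe Doppler shift for that class (Hz)
# v_selected ~ 39 m/s, probe_shift ~ 43 MHz
```

The mismatch between the two shifts (here roughly 7 MHz for this class) is what makes a careful velocity-by-velocity average necessary when modeling the fluorescence spectra.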
The physical system =================== The measurements presented in the Results and Discussion section of this paper have been performed in a standard room-temperature Rubidium vapor spectroscopy cell. The observations and their theoretical description are reported specifically for $^{87}$Rb, but similar results are obtained for $^{85}$Rb. Figure \[fig:AThyper\] shows the set of hyperfine sublevels of the atomic energy structure that is relevant to this work. The numerical values for this energy level diagram, as well as for all the calculations reported throughout the paper, have been either directly obtained or derived from the data reported by @Kurucz:1995. Levels $\left|1\right\rangle $ and $\left|2\right\rangle $ correspond to the ground state $5\mathrm{S}_{1/2}, F=2$ and the first excited state $5\mathrm{P}_{3/2}, F=3$ of the D2 line stretched transition in $^{87}$Rb. These states are resonantly connected with a $\unit{780.242}{\nano\metre}$ laser locked to the transition. From the $5\mathrm{P}_{3/2}, F=3$ state, level $\left|3\right\rangle $ can only be accessed via the electric quadrupole $E2$ transition. In this case, due to the $E2$ selection rules, any of the $6\mathrm{P}_{3/2}, F=1,2,3$ hyperfine sublevels can be reached with a $\unit{911.075}{\nano\metre}$ laser scanning through this manifold. The experimental signature of this two-step ladder excitation that is exploited in our measurements is the presence of $\unit{420.18}{\nano\metre}$ fluorescence yielded by the $6\mathrm{P}_{3/2}\rightarrow 5\mathrm{S}_{1/2}$ decay. In @MojicaCasique:2016iu, we showed that modeling the evolution of the atomic state populations with Einstein rate equations properly describes the observations. 
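Energy conservation in the two-step ladder can serve as a quick numerical sanity check on these wavelengths (a sketch; small air/vacuum wavelength differences are ignored, so agreement is only expected at the $\sim 0.1$ nm level):

```python
# Energy conservation for the two-step ladder: E(780 nm) + E(911 nm) = E(blue).
# Wavelengths are the values quoted in the text; air/vacuum differences are
# ignored, so the result is only expected to match at the ~0.1 nm level.
lam_control = 780.242   # nm, 5S1/2 F=2 -> 5P3/2 F=3
lam_probe = 911.075     # nm, 5P3/2 -> 6P3/2 (E2)
lam_blue = 1.0 / (1.0 / lam_control + 1.0 / lam_probe)
print(f"predicted fluorescence wavelength: {lam_blue:.2f} nm")  # close to the 420.18 nm quoted above
```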
There, by detecting the blue fluorescence, we were able to separately address the $\Delta F=0,\pm 1, \pm 2$ $E2$ selection rules of the $^{87}$Rb $5\mathrm{P}_{3/2} \rightarrow 6\mathrm{P}_{3/2}$ transition by enhancing or suppressing the $\Delta m_F$ transitions with an appropriate choice of the excitation light polarization. Furthermore, in [@MojicaCasique:2016iu; @PoncianoOjeda:2018fs] it is demonstrated that the choice of the relative polarization states of the two excitation beams can be used to tailor the excitation paths at the level of the magnetic structure of the atoms. In particular, the experiments presented in this work employed a parallel-linear polarization configuration for the preparation and probe beams. This way, $\Delta M_{\parallel}=M_3 -M_2=\pm 1$ transitions are favoured for the ATS-probing $E2$ transition. This in turn directly simplifies the atomic structure under study: the theoretical description of the excitation and decay processes can be studied independently for each of the $\Delta F$ observed in the $E2$ AT probing step. Theory ====== Three level atom model. The ideal system. ----------------------------------------- The simplest model describing the general features of the system of interest corresponds to a single three-level atom in the presence of two near-resonant coherent electromagnetic fields, schematically represented in Fig. \[fig:Rbenlvl\](a). Note that the configuration under study differs significantly from that of standard three-level atomic systems. Here, a two-photon excitation path links, via a dipole transition, a ground state level $\vert 1\rangle$ (5S$_{1/2}$, $F=2$ of $^{87}$Rb for the experiment reported here) to an excited state $\vert 2\rangle$ (5P$_{3/2}$, $F=3$ for $^{87}$Rb), followed by a quadrupole transition to a level $\vert 3\rangle$ (6P$_{3/2}$, $F=1,2,3$ for $^{87}$Rb). 
The absorption of two photons in a ladder configuration populates state $\vert 3\rangle$ leading to a spontaneous emission by an electric dipole transition from level $\vert 3\rangle$ to level $\vert 1\rangle$. For $^{87}$Rb, the spontaneous up-conversion process involves two photons of wavelengths $\unit{780}{\nano\metre}$ and $\unit{911}{\nano\metre}$ that yield the emission of a $\unit{420}{\nano\metre}$ photon. In this subsection, we exploit the simplicity of this model to identify some of the general properties of the AT effect under ideal conditions. Doppler effects and their partial control through a velocity-selective setup are later included. In general, the dynamics of an N-level atom in terms of its density matrix $\rho $ is given by the master equation [@Metcalf] $$\dot\rho_{ll}=\sum_j (\Gamma_{jl}\rho_{jj} - \Gamma_{lj}\rho_{ll}) + i\sum_j (\Omega_{lj}\rho_{jl} -\Omega_{jl}\rho_{lj}),\label{eq:mea}$$ for the population of level $l$, while for the coherence between levels $n$ and $l$ $$\dot\rho_{nl} = (i\delta_{nl} - \gamma_{nl})\rho_{nl} +i\sum_j(\Omega_{nj} \rho_{jl} - \Omega_{jl}\rho_{nj}).\label{eq:meb}$$ The excitation laser properties are linked to the atomic characteristics via their detuning and the Rabi frequencies. In Eqs.(\[eq:mea\]-\[eq:meb\]), $\delta_{nl}$ represents the laser detuning with respect to the transition $n\rightarrow l$ and $\Omega_{nl}$ the corresponding Rabi frequency. $\Gamma_{mn}$ is the decay rate from level $m$ to level $n$, $\gamma_{mn} = \frac{1}{2} (\Gamma_m + \Gamma_n )$ is the dephasing rate of the coherence $\rho_{mn}$, and $\Gamma_n = 1/\tau_n$ is the total decay rate of level $n$ with $\tau_n$ the lifetime of that level. 
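A minimal numerical sketch of Eqs. (\[eq:mea\]-\[eq:meb\]) may help here: both equations are linear in $\rho$, so the steady state can be obtained by assembling the corresponding linear map and replacing one equation by the trace constraint. The rates, detunings and Rabi frequencies below are illustrative stand-ins, not the experimental values.

```python
import numpy as np

# Steady state of the three-level ladder from the master equations quoted in
# the text, in the paper's convention: Omega[n,l] real and symmetric,
# Gamma[m,n] the decay rate m -> n. All numbers (MHz) are illustrative.
N = 3
Omega = np.zeros((N, N))
Omega[0, 1] = Omega[1, 0] = 10.0    # control Rabi frequency (hypothetical)
Omega[1, 2] = Omega[2, 1] = 0.05    # weak quadrupole coupling (hypothetical)
Gamma = np.zeros((N, N))
Gamma[1, 0] = 6.07                  # |2> -> |1>
Gamma[2, 0] = 1.4                   # |3> -> |1>
Gtot = Gamma.sum(axis=1)            # total decay rate of each level
delta = np.zeros((N, N))
delta[0, 1] = 0.0                   # control on resonance
delta[1, 2] = 5.0                   # probe detuning
delta[0, 2] = delta[0, 1] + delta[1, 2]
delta = delta - delta.T             # antisymmetric: delta_nl = -delta_ln

def rhs(rho):
    """d(rho)/dt from the master equations of the text."""
    drho = np.zeros_like(rho)
    for l in range(N):
        drho[l, l] = sum(Gamma[j, l]*rho[j, j] - Gamma[l, j]*rho[l, l]
                         for j in range(N))
        drho[l, l] += 1j*sum(Omega[l, j]*rho[j, l] - Omega[j, l]*rho[l, j]
                             for j in range(N))
    for n in range(N):
        for l in range(N):
            if n == l:
                continue
            gam = 0.5*(Gtot[n] + Gtot[l])     # dephasing of rho_nl
            drho[n, l] = (1j*delta[n, l] - gam)*rho[n, l]
            drho[n, l] += 1j*sum(Omega[n, j]*rho[j, l] - Omega[j, l]*rho[n, j]
                                 for j in range(N))
    return drho

# The RHS is linear in rho: build the matrix column by column, then replace
# one row (the redundant trace-conserving equation) by Tr(rho) = 1.
basis = np.eye(N*N, dtype=complex)
L = np.column_stack([rhs(b.reshape(N, N)).ravel() for b in basis])
L[0, :] = np.eye(N).ravel()
b = np.zeros(N*N, dtype=complex)
b[0] = 1.0
rho_ss = np.linalg.solve(L, b).reshape(N, N)
print("populations:", np.real(np.diag(rho_ss)))
```

With the control on resonance, the numerical $\rho_{22}$ should reproduce the first-order analytic expression given below for $\rho_{22}^{(1)}$, since $\Omega_{23}\ll\Omega_{12}$.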
For the dipole transition between levels $\left|1\right\rangle $ and $\left|2\right\rangle $ stimulated by a control laser, the corresponding coupling strength $\Omega_{12}$ depends directly on the electric field $\vec E_{c}$ associated with the laser beam of frequency $\omega_c$ [@Cohen2011], $$\Omega_{12} = \frac{\omega_{21}}{\omega_c}\frac{\vec \mu_{21} \cdot \vec E_{c}}{\hbar}, \quad \quad \vec \mu_{21} = e \langle 2\vert \vec r \vert 1\rangle,$$ where $\omega_{21} =(E_2 - E_1)/\hbar$ is determined by the energy difference between the two levels, $e$ is the electron charge, and $\vec \mu_{21}$ is the electric dipole moment associated with the transition. Levels $\left|2\right\rangle $ and $\left|3\right\rangle $ are weakly connected by the gradient of the electric probe field $\vec E_p$ associated with the laser of frequency $\omega_p$. If the light field is modeled by an ideal plane wave with wave vector $\vec k_p$, the coupling strength of the quadrupole transition is given by an effective Rabi frequency $\Omega_{23} $ [@Freedhoff:1989], $$\Omega_{23} = \frac{1}{3} \frac{\omega_{32}}{2\omega_p}\frac{\vec k_p \cdot \bar{\bar Q} \cdot \vec E_p}{\hbar}, \quad \quad \bar{\bar Q}_{ij} = e \langle 3\vert ( r^2 \delta_{ij}- 3 r_i r_j)\vert 2\rangle$$ that involves the atomic quadrupole moment tensor, $\bar{\bar Q}$, and the electric field $\vec E_p$. Since $\vec k_p\cdot \vec E_p =0$, $$\vec k_p \cdot \bar{\bar Q} \cdot \vec E_p = -e\langle 3 \vert (\vec r\cdot \vec k_p)(\vec r\cdot \vec E_p)\vert 2\rangle.\label{eq:quad}$$ We shall assume that $\Omega_{12}$ and $\Omega_{23}$ are real numbers. The Rabi frequency $\Omega_{23}$ of the quadrupole transition is usually small, since the typical mean value of the electron distance to the nucleus is much smaller than the wavelength, $k_p r\ll 1$. 
A rough estimation shows that for lasers with the same power, and for the wavelengths involved in our experimental setup, $$\frac{\Omega_{23}}{\Omega_{12}}\sim 10^{-4}.$$ Levels $\left|2\right\rangle $ and $\left|3\right\rangle $ decay with rates $\Gamma_2 $ and $\Gamma_3 $, respectively, and spontaneous decay from level $\vert 3 \rangle$ to $\vert 2 \rangle$, with rate $\Gamma_{32}$, is highly improbable. Thus, $\Gamma_{21} = \Gamma_2 $ and $\Gamma_{31}\sim \Gamma_3 $. The master equations for the three-level system in the steady state regime ($\dot{\rho}=0 $) can be solved approximately taking into account that $\Omega_{23} \ll \Omega_{12}$. To the lowest order in $\Omega_{23}$ the solutions are: $$\begin{aligned} \rho_{11}^{(1)} &=&\frac{4\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}}{ 8\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}}\nonumber\\ \rho_{22}^{(1)} &=& \frac{4\Omega_{12}^{2}}{ 8\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}}\nonumber \\ \rho_{12}^{(1)} &=& \frac{2\Omega_{12}(2\delta_{12}-i\Gamma_{21})}{8\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}}\\ \rho_{13}^{(1)} &=& \frac{4\Omega_{23}\Omega_{12}[4\Omega_{12}^{2}-( \Gamma_{21}+2i\delta_{12})(\Gamma_{21}+\Gamma_{31}-2i\delta_{23})]}{ (8\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}) [ 4\Omega_{12}^{2}+ (\Gamma_{21}+\Gamma_{31}-2i\delta_{23})(\Gamma_{31}-2i(\delta_{12}+\delta_{23}) ) ] }\nonumber\\ \rho_{23}^{(1)} &=& -\frac{8i\Omega_{12}^{2}\Omega_{23}(\Gamma_{21}+\Gamma_{31}-2i\delta_{23}) }{(8\Omega_{12}^{2}+\Gamma_{21}^{2}+4\delta_{12}^{2}) [4\Omega_{12}^{2}+ (\Gamma_{21}+\Gamma_{31}-2i\delta_{23})(\Gamma_{31}-2i(\delta_{12}+\delta_{23}) ) ]}\nonumber\end{aligned}$$ The last element of the density matrix, $\rho_{33} $, is such that $$\rho_{33}= \frac{2 \Omega_{23}}{\Gamma_{31}} \text{Im}(\rho_{23}). 
\label{eq:2333}$$ So that its lowest order expression requires a second order calculation in $\Omega_{23} $, resulting in $$\begin{aligned} \rho_{33}^{(2)}&=&\frac{16 \Omega_{12}^2 \Omega_{23}^2}{\Gamma_{31}}\frac{h(\delta_{23};\Omega_{12},\Gamma_{31},\Gamma_{21})} {f(\delta_{12};\Omega_{12},\Gamma_{31},\Gamma_{21})g(\delta_{23},\delta_{12};\Omega_{12},\Gamma_{31},\Gamma_{21})} \label{eq:ATprofile}\\ f(\delta_{12};\Omega_{12},\Gamma_{31},\Gamma_{21})& = &4 \delta_{12}^2+\Gamma_{21}^2+8 \Omega_{12}^2 \nonumber \\ g(\delta_{23},\delta_{12};\Omega_{12},\Gamma_{31},\Gamma_{21})& = & \big(4\Omega_{12}^2 + \Gamma_{31}(\Gamma_{31} + \Gamma_{21}) - 4\delta_{23}(\delta_{12} + \delta_{23})\big)^2\nonumber \\ &+& 4(\delta_{23}\Gamma_{31} +(\delta_{23} + \delta_{12})(\Gamma_{31} + \Gamma_{21}))^2\nonumber \\ h(\delta_{23};\Omega_{12},\Gamma_{31},\Gamma_{21})&=& (4 \delta_{23}^2 + (\Gamma_{21} + \Gamma_{31})^2)\Gamma_{31} +4\Omega_{12}^2(\Gamma_{21} +\Gamma_{31}).\end{aligned}$$ Notice that for $\delta_{12} = 0$ and $\Omega_{12}\gg\Gamma_{21}$, the stimulated transitions $\vert 1\rangle \leftrightarrow\vert 2\rangle$ saturate, and $\rho^{(1)}_{11}\sim\rho^{(1)}_{22}\sim 1/2$. In the typical experimental setup, the AT effect is probed through the absorption of the weak beam. This absorption profile is proportional to the imaginary part of the $\rho_{23} $ density matrix element. In our experiment we follow a different approach. We probe the population of the upper state $\left|3\right\rangle $ by detecting its fluorescence decay into the ground state $\left|1\right\rangle $, which is proportional to the density matrix element $\rho_{33} $, in turn also proportional to Im$(\rho_{23}) $, Eq. (\[eq:2333\]). As a consequence, the general characteristics of the observed AT effect can be obtained from $\rho_{33}$. This effect manifests as the presence of two maxima of $\rho_{33}^{(2)}$ as a function of the detuning of the probe beam. 
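As a numerical cross-check of Eq. (\[eq:ATprofile\]), one can scan $\rho_{33}^{(2)}$ over the probe detuning and locate the two maxima; they should sit at the roots $\delta_{23}^\pm$ derived below. The decay rates (in MHz) are illustrative stand-ins rather than the fitted experimental values, and the constant prefactor of $\rho_{33}^{(2)}$ is dropped:

```python
import numpy as np

# rho33^(2) of Eq. (ATprofile), without the constant 16 W12^2 W23^2 / G31
# prefactor, at delta_12 = 0 (control on resonance). G21, G31 (MHz) are
# illustrative stand-ins for the decay rates of levels |2> and |3>.
G21, G31 = 6.07, 1.4
W12 = 10.0                              # control Rabi frequency (MHz)

def rho33(d23, d12=0.0):
    f = 4*d12**2 + G21**2 + 8*W12**2
    g = ((4*W12**2 + G31*(G31 + G21) - 4*d23*(d12 + d23))**2
         + 4*(d23*G31 + (d23 + d12)*(G31 + G21))**2)
    h = (4*d23**2 + (G21 + G31)**2)*G31 + 4*W12**2*(G21 + G31)
    return h/(f*g)

d = np.arange(-40.0, 40.0, 0.01)
y = rho33(d)
peaks = d[1:-1][(y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])]   # local maxima
print("AT doublet at delta_23 ~", peaks)

# Critical coupling of Eq. (Occ) for these stand-in rates; the doublet only
# appears because W12 exceeds it.
W12_c = 0.5*np.sqrt((G21 + G31)**3/(2*G21 + 3*G31))
print("critical coupling:", round(W12_c, 2), "MHz")
```

With these stand-in rates the two maxima land symmetrically at $\pm\delta_{23}^\pm$ and disappear when $\Omega_{12}$ drops below the critical value; the $\unit{14.45}{\mega\hertz}$ quoted below follows from the actual $^{87}$Rb rates.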
To show this, we first evaluate the critical points of $\rho_{33}^{(2)}$. The condition $$\frac{\partial\rho_{33}^{(2)}}{\partial \delta_{23}}\Bigr\rvert_{\delta_{23}^c} = 0$$ yields a fifth order polynomial in the critical variable $\delta_{23}^c$, which requires a numerical solution. In the particular case of a zero detuning of the control beam, $\delta_{12} = 0$, this equation reduces to an equation of the form $$\delta_{23}^c(a_4(\delta_{23}^c)^4 +a_2(\delta_{23}^c)^2 + a_0) = 0.$$ As a consequence, a minimum exists at $\delta_{23} = 0$; there are also two purely imaginary roots and two real roots, given by $$\begin{aligned} \delta_{23}^\pm&=& \pm\Big[\frac{1}{4\Gamma_{31}}\Big(2\Omega_{12}(\Gamma_{21} + 2\Gamma_{31})\sqrt{\eta} - (\Gamma_{21} +\Gamma_{31})\eta \Big)\Big]^{1/2} \\ \eta &=&4\Omega_{12}^2 + \Gamma_{21}\Gamma_{31} + \Gamma_{31}^2 \nonumber\end{aligned}$$ From this expression, the critical value of the Rabi frequency $\Omega_{12}^c$ above which the AT doublet is formed is found to be $$\Omega_{12}^c =\frac{1}{2}\sqrt{\frac{(\Gamma_{21} + \Gamma_{31})^3}{2\Gamma_{21} + 3 \Gamma_{31}}}\simeq \unit{14.45}{\mega\hertz} \label{eq:Occ}$$ The requirement of a minimal value of $\Omega_{12}$ for observing the AT effect, and the fact that $\Omega_{12}^c$ is determined by the decay rates of the involved levels, emphasize its interpretation as an AC Stark effect. In the limit $\Omega_{12}\gg \Gamma_{21},\Gamma_{31}$, $\rho_{33}^{(2)}$ reaches a saturation value given by $$\lim_{\Omega_{12}\to \infty} \rho_{33}^{(2)} = \frac{2\Omega_{23}^2}{\Gamma_{31}(2\Gamma_{31} + \Gamma_{21})}.\label{eq:linrho33}$$ In Fig. \[fig:AT:ideal\] the steady-state population of the third level, scaled by the squared effective Rabi frequency $\Omega_{23}^2$, is illustrated using the parameters of the experimental setup. Fig. 
\[fig:AT:ideal\]a shows the dependence on the detunings of both radiation fields with respect to the atomic resonances at the onset of the ATS, $\Omega_{12}^c$. In Fig. \[fig:AT:ideal\]b it can be observed that, in the absence of Doppler effects and other dephasing factors, the ATS cannot be obtained by keeping the probe detuning $\delta_{23}$ constant and varying the control detuning $\delta_{12}$. This happens independently of the power of the control laser. Finally, Fig. \[fig:AT:ideal\]c demonstrates that the maximum value of $\rho_{33}$ is obtained for $\Omega_{12}^{max}\sim \unit{9.77}{\mega\hertz}$, which is lower than the critical value $\Omega_{12}^c$ given by Eq. \[eq:Occ\]. The saturation reached as a function of $\Omega_{12}$, determined by Eq. \[eq:linrho33\], is also evident in this graph. Summarizing, for the ideal model under consideration, the general characteristics of the AT effect observed in absorption mediated by electric dipole transitions in a ladder configuration are reproduced in the fluorescence from the top to the bottom level of the unconventional ladder configuration involving a dipole and a dipole-forbidden transition. These characteristics include a fluorescence that is symmetric with respect to changes in sign of $\delta_{23}$, a minimum at $\delta_{23} = 0$, and just two maxima, at $\delta_{23}^\pm$. Doppler effect and velocity selective configuration. ---------------------------------------------------- The model described above does not incorporate several important features that are present in experiments with room-temperature atomic gases and realistic laser beams. Doppler broadening and the finite linewidth of the lasers are two such properties that could prevent a clear observation of the AT effect. 
It is well known that when a single-frequency laser beam has a frequency $\nu$ which differs slightly from a resonance frequency $\nu_0$ of the atoms in the gas through which it passes, the Doppler shift will make the laser radiation appear exactly on resonance only for those atoms whose component of velocity along the laser beam is $v_z = c(\nu -\nu_0)/\nu_0$. Following the same idea, when the control and probe laser beams are used in a counter-propagating configuration, and the control beam frequency is fixed, it is expected that Doppler effects are partially suppressed [@Kaminsky1976]. The frequency detuning of the laser beams to a transition can be modeled by a Gaussian distribution centered at $\delta_{ij}^{(0)}$ with a spectral width $\sigma_{ij}$, $${\mathfrak S}_{ij} (\delta_{ij}) =\frac{1}{\sqrt{2\pi}\sigma_{ij}}e^{-(\delta_{ij} - \delta_{ij}^{(0)})^2/2\sigma^2_{ij}},\quad ij=21,32 .$$ while the Maxwell-Boltzmann distribution describes the probability function of velocities of the atoms at thermal equilibrium at a temperature $T$, $${\mathfrak M}(v)= \Big( \frac{m}{2\pi k_{\mathrm{B}} T}\Big)^{3/2} e^{-m v^2/2k_{\mathrm{B}} T}.$$ An average density matrix that incorporates these detuning effects can be obtained as $$\begin{aligned} \tilde \rho_{ij}^{(\sigma_{lm},T)}(\delta_{21}^{(0)},\delta_{32}^{(0)}) &=&\int\! d ^3 v\int\! d\delta_{21}\int\! d\delta_{32} {\mathfrak S}_{21} (\delta_{21}){\mathfrak S}_{32} (\delta_{32}){\mathfrak M}(\vec v)\rho_{ij}(\delta_{21} + \vec k_{21}\cdot \vec v,\delta_{32} + \vec k_{32}\cdot \vec v)\nonumber\\ &=& \int\! d ^3 v\int\! d\delta_{21}\int\! d\delta_{32} {\mathfrak S}_{21} (\delta_{21}- \vec k_{21}\cdot \vec v){\mathfrak S}_{32} (\delta_{32}-\vec k_{32}\cdot \vec v){\mathfrak M}(\vec v)\rho_{ij}(\delta_{21} ,\delta_{32})\nonumber \\\end{aligned}$$ The integration over the velocity can be directly performed. 
So, for a counter-propagating lasers configuration, $\hat k_{21} = -\hat k_{32}$, $$\tilde \rho_{ij}^{(\sigma_{lm},T)}(\delta_{21}^{(0)},\delta_{32}^{(0)}) =\Big( \frac{m\upsilon_D^2}{k_{\mathrm{B}}T}\Big)^{1/2} \int d\delta_{21}\int d\delta_{32} e^{-\kappa(\delta_{21},\delta_{32})} \rho_{ij}(\delta_{21} ,\delta_{32}) \label{eq:Dwidth}$$ where $$\begin{aligned} \frac{1}{\upsilon_D^2} &=&\Big(\frac{m}{k_{\mathrm{B}} T} + \frac{\vert k_{21}\vert ^2}{\sigma_{21}^2} + \frac{\vert k_{32}\vert ^2}{\sigma_{32}^2}\Big)\label{eq:vD}\\ \kappa(\delta_{21},\delta_{32})&=& \Big(\frac{\delta_{21}- \delta_{21}^{(0)}}{\sqrt{2}\tilde \sigma_{21}} \Big)^2 +\Big(\frac{\delta_{32}- \delta_{32}^{(0)}}{\sqrt{2}\tilde \sigma_{32}} \Big)^2\nonumber\\ &+& \Big(\frac{(\delta_{21}- \delta_{21}^{(0)})}{\sigma_{21}}\frac{\vert k_{21}\vert\upsilon_D}{ \sigma_{21}}\Big) \Big(\frac{(\delta_{32}- \delta_{32}^{(0)})}{\sigma_{32}}\frac{\vert k_{32}\vert\upsilon_D}{ \sigma_{32}}\Big)\label{eq:kappa}\\ \tilde\sigma_{21}^2 &=& \frac{1}{1 - \vert k_{21}\vert^2\upsilon_D^2/\sigma_{21}^2} \sigma_{21}^2\label{eq:sigma21} \\ \tilde\sigma_{32}^2 &=& \frac{1}{1 - \vert k_{32}\vert^2\upsilon_D^2/\sigma_{32}^2}\sigma_{32}^2\label{eq:sigma32} \end{aligned}$$ We observe that the Doppler effect, under this ideal counter-propagating configuration, leads both to an effective increase of the lasers’ linewidth by the factor $\tilde\sigma_{ij}$ and an interference term that links the two detunings $\delta_{ij}$. A relevant parameter is $\upsilon_D$, Eq. (\[eq:vD\]) that has velocity units. 
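To get a feeling for the magnitudes in Eqs. (\[eq:vD\])-(\[eq:sigma32\]), a short numerical evaluation with rough $^{87}$Rb numbers (atomic mass, the two wavelengths, and the $2\pi\times\unit{1.5}{\mega\hertz}$ laser bandwidths used in the text) is instructive; $\upsilon_D$ comes out far below the thermal speed, which is the velocity selection at work:

```python
import numpy as np

# Effective widths of Eqs. (vD)-(sigma32) for the counter-propagating
# geometry, with rough 87Rb numbers (mass and wavelengths are standard;
# sigma_ij are the ~2pi x 1.5 MHz bandwidths used in the text).
kB, m = 1.381e-23, 1.443e-25          # J/K, kg (87Rb)
T = 300.0
k21 = 2*np.pi/780.242e-9              # 1/m
k32 = 2*np.pi/911.075e-9
s21 = s32 = 2*np.pi*1.5e6             # rad/s

vD = (m/(kB*T) + k21**2/s21**2 + k32**2/s32**2)**-0.5
s21_eff = s21/np.sqrt(1 - k21**2*vD**2/s21**2)
s32_eff = s32/np.sqrt(1 - k32**2*vD**2/s32**2)
print(f"v_D = {vD:.2f} m/s (thermal speed ~ {np.sqrt(kB*T/m):.0f} m/s)")
print(f"broadened widths: {s21_eff/(2*np.pi*1e6):.2f}, "
      f"{s32_eff/(2*np.pi*1e6):.2f} MHz (x 2pi)")
```

Note that $1 - \vert k_{ij}\vert^2\upsilon_D^2/\sigma_{ij}^2$ is always positive by construction of Eq. (\[eq:vD\]), so the broadening factors are well defined.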
Notice that the more realistic configuration, in which a spread of the wave vectors $\vec k_{ij}$ is considered, can be accommodated by substituting $\delta_{ij}$ by $\delta_{ij} -\vec k_{ij}\cdot \vec v_\bot$, where $\vec v_\bot$ is the component of the atomic velocity perpendicular to the main direction of propagation of the laser beams (the $z$-axis), followed by averaging the resulting expression over the Maxwell-Boltzmann distribution for those components of $\vec v$ and over the angular spectra of the laser beams. This observation is consistent with the expectation that deviations from an ideal counter-propagating beam configuration alter the AT linewidths [@PRA:cp]. The resulting expression for $\tilde \rho_{ij}^{(\sigma_{lm},T)}(\delta_{21}^{(0)},\delta_{32}^{(0)})$ is given in Appendix A. The difference in the behavior of the population of the second excited state when including the Doppler contribution can be observed by comparing Figs. \[fig:AT:ideal\] and \[fig:Doppler-3n\]. In Figs. \[fig:Doppler-3n\]a and \[fig:Doppler-3n\]b, the steady-state $\tilde \rho_{33}^{(\sigma_{lm},T)}(\delta_{21}^{(0)},\delta_{32}^{(0)})/\Omega_{23}^2$ is illustrated using the general parameters of the experimental setup and laser bandwidths $\sigma_{32} = \sigma_{21} =\unit{2\pi \times 1.5}{\mega\hertz}$. The temperature of the atomic gas is taken as $T=\unit{300}{\kelvin}$, and an ideal counter-propagating laser configuration is assumed. Notice that the selection of velocities and the finite bandwidth of the lasers allow the observation of the ATS for $\Omega_{12}$ greater than $\Omega_{12}^c$. As expected, the width of the predicted fluorescence profile is larger than in the absence of Doppler and finite laser bandwidth effects. Besides, the interference term in $\kappa(\delta_{21},\delta_{32})$, Eq. (\[eq:kappa\]), opens the possibility of observing the AT effect by taking the probe (control) detuning as a constant parameter and varying the control (probe) detuning. 
In Fig. \[fig:Doppler-3n\]c, contrary to its analog in the absence of Doppler and laser bandwidth effects, the maximum of $\tilde \rho_{33}$ is achieved for $\Omega_{12} >\Omega_{12}^c$. This is a consequence of the selection of velocities, which allows only a small fraction of atoms to populate level $\vert 2\rangle$. In fact, the numerical simulations yield $\tilde\rho_{11}^{(\sigma_{lm},T)} >0.9$ for Rabi frequencies $\Omega_{12}<5\Omega_{12}^c$. Nevertheless, the order of magnitude of $\rho_{33}$ in the velocity selective scheme is similar to, or even greater than, that expected in the ideal case (no Doppler and zero laser bandwidths). 4-level model ------------- By examining the energy level diagram for $^{87}$Rb, see Fig. \[fig:Rbenlvl\](b), it becomes evident that the decay channels corresponding to the transitions 6P${_{3/2}}$ $\rightarrow$ 6S$_{1/2}$, 6P${_{3/2}}$ $\rightarrow$ 4D$_{3/2}$, and 6P${_{3/2}}$ $\rightarrow$ 4D$_{5/2}$ could be significant for understanding the characteristics of the $\unit{420}{\nano\metre}$ signal photons. Besides this, the 6S and 4D levels also yield a repopulation path to the 5P${_{3/2}}$ level and, in this way, provide a mechanism to modify the occurrence of the quadrupole transition 5P${_{3/2}}$ $\rightarrow$ 6P$_{3/2}$. Notice also that in the experimental realization, the laser beams give rise to stimulated transitions only within the region where they have a non-negligible intensity. For thermal atoms, performing the sequential process of the fast (highly probable) dipole and slow (less probable) quadrupole transition is conditioned by their transit time within the laser beams. There are cyclic paths that start with a cascade two-photon decay process from the 6P$_{3/2}$ level to either the 6S or any of the 4D states, and then from these states to the 5P$_{3/2}$, followed by an induced quadrupole excitation back to the 6P$_{3/2}$ level. 
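Whether such a cycle can complete depends on how long an atom remains inside the beams. A rough estimate (mean thermal speed of $^{87}$Rb at room temperature crossing the $\sim 0.5$ cm waist quoted below) gives transit times of a few tens of microseconds:

```python
import numpy as np

# Transit time of a thermal 87Rb atom across a Gaussian beam. The ~0.5 cm
# waist is the value quoted in the text; kB and the 87Rb mass are standard.
kB, m = 1.381e-23, 1.443e-25         # J/K, kg
T, waist = 300.0, 0.5e-2             # K, m
v_mean = np.sqrt(8*kB*T/(np.pi*m))   # Maxwell-Boltzmann mean speed
t_transit = waist/v_mean
print(f"mean speed {v_mean:.0f} m/s, transit time {t_transit*1e6:.1f} us")
```

This is comparable to, though somewhat shorter than, the $\sim\unit{30}{\micro\second}$ the Bloch equations need to reach a steady solution, which is why transit-time effects contribute to the effective linewidths discussed later.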
The achievement of a single cycle is conditioned on atom transit times longer than that required for two slow quadrupole transitions to occur. Finally, there is also a two-photon decay process from the 6S or any of the 4D states to the 5S$_{1/2}$ state. To model the system, we introduce a fourth level and two effective parameters, $\Gamma_{34}$ and $\Gamma_{42}$, that couple it to the other atomic states. Using the interpretation of the decay rates in terms of a transition probability per unit time, we expect $\Gamma_{34}$ to be similar to the sum of the decay rates of the 6P${_{3/2}}$ level to the 6S$_{1/2}$, 4D$_{3/2}$, and 4D$_{5/2}$ states; that is, defining $$\begin{aligned} \tilde\Gamma_{34}&=&\Gamma_{6{\mathrm{P}{_{3/2}}}\rightarrow 6\mathrm{S}_{1/2}} + \Gamma_{6{\mathrm{P}{_{3/2}}}\rightarrow 4\mathrm{D}_{3/2}} +\Gamma_{6{\mathrm{P}{_{3/2}}\rightarrow 4\mathrm{D}_{5/2}}}\nonumber\\ &\sim& (4.506 +0.2346 + 2.11)\ \mathrm{MHz} = 6.851\ \mathrm{MHz},\end{aligned}$$ we take $\Gamma_{34}\sim \tilde \Gamma_{34}$. Meanwhile, $\Gamma_{42}$ is expected to be similar to the weighted average of the decay rates of those states to the 5P$_{3/2}$ levels, that is, $$\begin{aligned} \Gamma_{42} &\sim& \big(\Gamma_{6{\mathrm{P}{_{3/2}}}\rightarrow 6\mathrm{S}_{1/2}}\,\Gamma_{6{\mathrm{S}{_{1/2}}}\rightarrow 5\mathrm{P}_{3/2}} + \Gamma_{6{\mathrm{P}{_{3/2}}}\rightarrow 4\mathrm{D}_{3/2}}\,\Gamma_{4{\mathrm{D}{_{3/2}}}\rightarrow 5\mathrm{P}_{3/2}} +\Gamma_{6{\mathrm{P}{_{3/2}}\rightarrow 4\mathrm{D}_{5/2}}}\,\Gamma_{4{\mathrm{D}{_{5/2}}}\rightarrow 5\mathrm{P}_{3/2}}\big)/\tilde\Gamma_{34}\nonumber\\ &\sim& 12.544\ \mathrm{MHz}\end{aligned}$$ Numerical calculations were performed to understand the dependence of the velocity-selective scheme on $\Gamma_{34}$ and $\Gamma_{42}$. 
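The arithmetic behind $\tilde\Gamma_{34}$ is a one-liner; the branch rates (MHz) are the ones quoted above, while the second-step rates entering the weighted average for $\Gamma_{42}$ are not listed in the text, so only $\tilde\Gamma_{34}$ is checked here:

```python
# Effective decay rate into the dump level: sum of the quoted 6P3/2 branch
# rates (MHz) to 6S1/2, 4D3/2 and 4D5/2.
branch = {"6S1/2": 4.506, "4D3/2": 0.2346, "4D5/2": 2.11}
G34 = sum(branch.values())
print(f"Gamma_34 ~ {G34:.3f} MHz")
```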
These calculations involve solving the time-dependent Bloch equations in the counter-propagating configuration with a velocity dependence in the detunings, and then performing a velocity average using the Maxwell-Boltzmann distribution. It was observed that about $\unit{30}{\micro\second}$ are required to reach a steady solution of the Bloch equations. This time is slightly longer than that expected for a $^{87}$Rb atom to transit transversely through a Gaussian laser beam with a $\unit{\sim 0.5}{\centi\metre}$ waist at room temperature. In Figs. \[fig:gammas1\]-\[fig:gammas2\] the results are illustrated for the steady state populations $\rho_{33}$ and $\rho_{44}$, scaled by $\Omega_{23}^2$ to remove the dominant quadratic dependence on that Rabi frequency. In the numerical simulations we considered $\Omega_{23}=\unit{0.1}{\mega\hertz}$. Notice that the separation of the AT maxima for a given $\Omega_{12}$ is independent of $\Gamma_{34}$ and $\Gamma_{42}$. That is not the case for the height and width of the AT peaks, which for $\rho_{33}$ are highly dependent on $\Gamma_{34}$, and for $\rho_{44}$ are highly dependent on both $\Gamma_{34}$ and $\Gamma_{42}$. Notice also that $\rho_{33}$ and $\rho_{44}$ have, in general, the same order of magnitude. Experimental setup ================== The experimental apparatus has been previously described in detail [@PoncianoOjeda:2015cf; @PoncianoOjeda:2018fs]. Figure \[fig:setup\] is presented here to point out its basic features. Both the $E1$ and the $E2$ transitions are excited by two home-made extended cavity diode lasers in the Littrow configuration (ECDL1 and ECDL2, respectively). ECDL1 is stabilized with polarization spectroscopy [@Pearman2002] to the $5\mathrm{S}_{1/2}, F=2 \to 5\mathrm{P}_{3/2}, F=3$ transition, whilst ECDL2 is scanned across the $E2$ manifold to be probed. 
Once tuned in frequency, the light from both lasers is polarized and overlapped in a counter-propagating configuration along a room-temperature spectroscopy cell with the natural abundance of the Rubidium isotopes. The two-step $E1+E2$ excitation is monitored through the detection of the $\unit{420.18}{\nano\metre}$ fluorescence that results from the $6\mathrm{P}_{3/2}\rightarrow 5\mathrm{S}_{1/2}$ decay. A fraction of this blue fluorescence light is collected and focused by lens L into a photomultiplier tube (PMT) after passing through a $\unit{420}{\nano\metre}$ interference filter F. This signal is finally amplified by a lock-in system. The excitation strength of the first step in this two-photon process is given by the Rabi frequency $\Omega_{12}$, which determines the coupling of the atoms with light locked on resonance with the $5\mathrm{S}_{1/2}, F=2 \rightarrow 5\mathrm{P}_{3/2}, F=3$ transition provided by ECDL1. This light pumps atoms from the $\left|1\right\rangle $ state into the $\left|2\right\rangle $ state. The state of the atomic system is then probed by the much weaker Rabi frequency $\Omega_{23}$ induced by ECDL2 scanning through the $5\mathrm{P}_{3/2}, F=3 \rightarrow 6\mathrm{P}_{3/2}, F^{\prime}$ manifold. To study the forbidden transition leading to the $6\mathrm{P}_{3/2}$ state, it is necessary to have good control of the population of the $5\mathrm{P}_{3/2}, F_2, M_2$ sub-levels. Even though it is in fact also a function of the preparation laser intensity, the $M_2$ distribution is mainly determined by its polarization [@MojicaCasique:2016iu]. For reasons that will be explained in the next section, we chose a parallel-linear polarization configuration for the preparation and probe beams in these experiments. 
The preparation laser intensity was always kept below saturation, with a power ranging from $\unit{100}{\micro\watt}$ to about $\unit{5}{\milli\watt}$ and an elliptical $\unit{5}{\milli\metre}\times \unit{2.5}{\milli\metre}$ beam profile, while the probe beam was kept at a power of $\unit{100}{\milli\watt}$ with a $\unit{4.5}{\milli\metre}\times \unit{2.3}{\milli\metre}$ elliptical beam profile. This corresponds to Rabi frequencies of the control beam $\Omega_{12}$ in the range of $\unit{\sim 20-100}{\mega\hertz}$, calculated as described in the following Section. The precise evaluation of the probe Rabi frequency $\Omega_{23}$ requires knowledge of the quadrupole matrix element $\langle 2\vert \bar{\bar Q}\vert 3\rangle$. A rough estimation, similar to that described in the theory section of this paper and taking into account the probe beam intensity, yields $\Omega_{23}\sim \unit{0.1}{\mega\hertz}$. Results and discussion. ======================= The choice of the cyclic transition 5S$_{1/2}, F = 2\rightarrow \mathrm{5P}_{3/2}, F = 3$, stimulated by a linearly polarized laser beam, leads to the preparation of a state $\vert 2\rangle$ that can be approximately described by $\mathrm{5P}_{3/2}, F = 3,\, M_F=0$, with the quantization axis defined by the preparation beam [@MojicaCasique:2016iu]. As a consequence, the theoretical Rabi frequency for the $\vert 1\rangle\rightarrow\vert 2 \rangle$ transition can be calculated as [@Steck:2008_87] $$\Omega_{12}=\frac{eE_c}{\hbar}\frac{1}{\sqrt{5}}\langle 5\mathrm{S}_{1/2}\vert\vert r\vert\vert 5\mathrm{P}_{3/2}\rangle.$$ Since the reported experiment involves a quasi counter-propagating, linearly polarized $\unit{911}{\nano\metre}$ laser, the relevant quadrupole transition element, Eq. (\[eq:quad\]), is [@MojicaCasique:2016iu] $$k_{23} Q_{zx} r_x,$$ taking $\hat e_x$ as the direction of the probe electric field $\vec E_p$. 
This leads to a set of independent selection rules for the preparation of the third state: ${\mathrm 6P}_{3/2},F_3=1$, ${\mathrm 6P}_{3/2},F_3=2$, and ${\mathrm 6P}_{3/2},F_3=3$. One therefore expects to observe three AT profiles centered at the transition frequencies that satisfy the $5\mathrm{P}_{3/2} \rightarrow 6\mathrm{P}_{3/2} $ electric quadrupole selection rules, resulting in excitation into the $F_3 = 1, 2 $ and $3 $ hyperfine states. This is illustrated in Fig. \[fig:expteo\], where AT profiles are shown for different powers of the control beam. The relative fluorescence intensities of these profiles are determined by geometric factors involved in the evaluation of the transition matrix elements. In Refs. [@PoncianoOjeda:2015cf; @MojicaCasique:2016iu] it was shown that the probability of observing, due to the two-laser excitation, a $\unit{420}{\nano\metre}$ photon resulting from the decay of a given hyperfine state $\vert L_3 J_3 F_3\rangle$ to the state $\vert 5\mathrm{S}_{1/2} F_1\rangle$ is given by the expression $$\begin{aligned} \mathfrak{P}(F_3) &=& \sum_{M_1,M_2,M_3,\lambda} \rho_{22}(F_2,M_2)N\vert\langle 5\mathrm{P}_{3/2}F_2 M_{2} \vert Q_{zx}\vert 6\mathrm{P}_{3/2} F_3 M_3\rangle\vert^2 \nonumber \\&\times& \vert\langle 6\mathrm{P}_{3/2}F_3 M_{3} \vert \mu_{\lambda}\vert 5\mathrm{S}_{1/2} F_1 M_{1}\rangle\vert^2\end{aligned}$$ where $\rho_{22}(F_2, M_2)N$ is the population of the $\vert 2\rangle$ state produced by the strong, electric dipole $\vert 1\rangle \rightarrow \vert 2\rangle$ preparation step. This expression also involves the probabilities of the weak electric quadrupole transition and of the electric dipole transition from the 6P$_{3/2}$ hyperfine manifold to the ground state. A direct calculation yields relative intensities 5:2:1 for $F_3 =2$, $F_3=3$ and $F_3=1$. This was verified experimentally by comparing the peak height intensities of the measured AT profiles that are shown in Fig. 
\[fig:exp\]a. The theoretical approach used to describe the three AT profiles considered the basic four-level model described in Section II for each profile. We considered the atomic gas at room temperature, assumed that the probe laser is in resonance, and used the effective bandwidths for the two lasers. The lateral profiles were scaled according to the theoretical proportion 5:2:1 for each $F_3$ state; their central minima corresponded to those given by the hyperfine splitting of $^{87}$Rb [@Steck:2008_87]. The experimental dependence of the peak-height intensities as a function of the power of the probe beam $\mathcal{P}_{911}$ is shown in Fig. \[fig:exp\](b). Since $\Omega_{23}^2$ is proportional to $\mathcal{P}_{911}$, the linear dependence on $\mathcal{P}_{911}$ supports the theoretical prediction of the three-level model, according to which the population of the $6\mathrm{P}_{3/2}$ level depends quadratically on $\Omega_{23}$. As a consequence, as mentioned before, $\rho_{33}^S=\rho_{33}/\Omega_{23}^2$ should be quasi-independent of the specific value of $\Omega_{23}$ used for the evaluation of $\rho_{33}$. Taking this into account, the theoretical values reported in this Section for $\rho_{33}$ are also scaled by the value of $\Omega_{23}^2$. Illustrative examples of the experimental AT fluorescence of the 6P$_{3/2},\, F = 1,2,3$ to the 5S$_{1/2},\, F=2$ manifold are shown in Fig. \[fig:expteo\]. The measured fluorescence, as a relative variable, has arbitrary units that have been chosen to facilitate its comparison with the theoretical expectations, which are given in terms of $\rho_{33}^S$. In the theoretical results, the effective bandwidths of the exciting beams, $\sigma_{21}$ and $\sigma_{32}$, were optimized to reproduce the experimental data for the central AT profile, and they are the only free parameters. 
Their resulting value was greater than the experimental laser bandwidths ($\unit{2\pi \times (1.34 \pm 0.01)}{\mega\hertz}$), reflecting other expected line-broadening effects. Among them are the imperfect counter-propagating configuration of the control and probe lasers, as well as the distribution of atomic transit times within the laser profiles of the atoms in the Rb gas. The results of taking $\sigma_{21} = \sigma_{32}= \unit{2\pi \times 3.5}{\mega\hertz}$ are illustrated in Fig. \[fig:expteo\](b) and Fig. \[fig:expteo\](c) for the three- and four-level models respectively. Notice that the numerical simulations taking into account the velocity-selective scheme predict $\rho_{22}(F_2, M_2)< 0.015$ for both models, which is far below saturation. Nevertheless, the order of magnitude of $\rho_{33}^S$ can be comparable to and even greater than that given by the zero-temperature three-level model. The graphs in Fig. \[fig:exp\] show that the growth of the resonance peaks and the evolution of the Autler-Townes splitting as a function of the power of the $\unit{780}{\nano\metre}$ coupling laser observed in the experimental data are more closely reproduced by the 4-level model than by the simpler 3-level model. This is confirmed in Fig. \[fig:minmax\], where the evolution of the resonance maximum before the ATS appears, and of the maxima and central minimum of the AT doublet in the most prominent transition component ($F_2 =2\to F_3 =2$), are plotted against the power of the $\unit{780}{\nano\metre}$ coupling light. The simulated spectra obtained by means of the 3-level model present a rapid growth, reaching saturation below $\unit{0.8}{\milli\watt}$ and showing the first evidence of the ATS at around $\unit{1}{\milli\watt}$, a value after which the maxima of the AT doublets reach a plateau.
Also in this simple model, the value of the minimum in the middle of the AT peaks drops quickly to a fraction below $75\%$ of the value of the AT maxima for a power of $\unit{3}{\milli\watt}$, and is nearly $80\%$ at the far end of the measured range. The experimental spectra, on the other hand, reach a transitory saturation between $\unit{0.5}{\milli\watt}$ and $\unit{1}{\milli\watt}$, where the AT doublet begins to be clearly discernible in the shape of the spectra. At higher powers of the coupling light, the maximum values of the measured doublets keep increasing without showing any evidence of saturating again. This is in contrast to the behaviour of the central minima, which show hardly any change from the onset of the splitting, at a fraction $\lesssim 60\%$ of the maximum value reached by the fluorescence at the highest measured power, and seem to saturate at a value slightly below $50\%$. The overall behavior of the experimental spectra is better reproduced by the calculations generated with the aid of the 4-level model. Conclusions. {#S:Conclusions} ============ In this paper, it has been shown that the highly non-perturbing nature of an electric dipole forbidden transition, in the case presented here the electric quadrupole interaction term, provides an ideal probe for performing an in-depth investigation of the dynamics of an atomic system interacting with radiation fields. We demonstrated this method by probing the ATS in a scheme that relies on a forbidden transition and allows experimental and theoretical studies under nontrivial conditions. In this system, the usage of a velocity-selective scheme based on counter-propagating probe and control lasers limits the Doppler contributions caused by working with a warm atomic sample. In our method, the proper selection of the polarization of the control and probe lasers simplified the hyperfine structure manifested in the ATS.
This allowed a simple description of the experimental results requiring only a few physical parameters with a clear significance. The theoretical description involved three- and four-level systems including one forbidden electric dipole transition, which makes it significantly different from the standard studies based on E1 selection rules. The three-level ladder configuration admits the possibility of a parametric up-conversion process: the absorption of two photons with a given frequency can lead to the spontaneous emission of a photon with higher frequency. The four-level model considers an extra state to represent the joint effects of the alternative decay routes. This methodology enables simplified and compact numerical simulations and an efficient time-dependent analysis of the Bloch equations, and yields rather realistic predictions that closely reproduce the observations. We also derived a simple expression for evaluating finite-temperature effects and the laser beam bandwidths. The formalism presented here circumvents solving Bloch equations incorporating Doppler detunings, followed by velocity and spectral-beam averages, by the simple *a posteriori* averaging of the solutions of the Bloch equations as a function of the detunings $\delta_{12}$ and $\delta_{23}$ using effective temperature-dependent frequency distributions. An interesting result derived from this formalism is that the decay fluorescence exhibits ATS not only as a function of the detuning of the probe laser, but also as a function of the detuning of the control laser in a counter-propagating configuration at room temperature. Furthermore, we showed that via this scheme one can perform a direct study of the ATS broadening for non-counter-propagating beams.
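The *a posteriori* averaging step can be sketched in a few lines of code. The snippet below is only an illustration of the procedure, not the actual calculation of this work: `lorentzian_rho33` is a hypothetical stand-in for the steady-state Bloch solution $\rho_{33}(\delta_{21},\delta_{32})$, which is then averaged with effective Gaussian frequency distributions whose widths would encode the Doppler broadening and the laser bandwidths.

```python
import numpy as np

def lorentzian_rho33(d21, d32, gamma=1.0):
    """Hypothetical placeholder for the steady-state Bloch solution rho_33."""
    return 1.0 / ((d21 + d32) ** 2 + gamma ** 2)

def averaged_rho33(d21_0, d32_0, sigma21=0.5, sigma32=0.5, n=201, span=10.0):
    """A posteriori averaging of rho_33 over the detunings with
    effective Gaussian frequency distributions (illustrative widths)."""
    d21 = np.linspace(d21_0 - span, d21_0 + span, n)
    d32 = np.linspace(d32_0 - span, d32_0 + span, n)
    D21, D32 = np.meshgrid(d21, d32, indexing="ij")
    # Gaussian weights centered on the nominal laser detunings
    w = np.exp(-((D21 - d21_0) ** 2) / (2 * sigma21 ** 2)
               - ((D32 - d32_0) ** 2) / (2 * sigma32 ** 2))
    rho = lorentzian_rho33(D21, D32)
    return np.sum(w * rho) / np.sum(w)
```

The averaged line shape is broader and shallower than the zero-temperature solution, which is the qualitative effect the effective distributions are meant to capture.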
In this way, we could estimate the time required to achieve a steady state for the density matrix in both the Bloch and Maxwell-Boltzmann-Bloch schemes, and compare this time with the average transit time of the atoms within the laser beams. With this analysis we concluded that, under the experimental conditions of the measurements presented in this paper, the atoms' transit times within the laser beams were comparable to those required to reach a steady state. This prevents reaching saturation on the transition induced by the control laser at beam powers that would yield it for atoms at rest. Finally, note that the use of cooling and trapping techniques would eliminate the Doppler and transit-time issues considered in this work. Thus, the method proposed here would be readily applicable in such systems as a minimally perturbing method to implement and evaluate the efficiency of protocols for the preparation of atomic states. Appendix A ========== For control and probe lasers in a non-counter-propagating configuration, the distribution that includes the Doppler and laser detunings differs from that given by Eq. (\[eq:Dwidth\]). For the geometry described in Fig.
\[fig:lasers\], a direct calculation shows that $$\tilde \rho_{ij}^{(\sigma_{lm},T)}(\delta_{21}^{(0)},\delta_{32}^{(0)}) =\Big( \frac{m\upsilon^{||2}_D}{k_{\mathrm{B}}T}\Big)^{1/2}\Big( \frac{m\upsilon^{\bot 2}_D}{k_{\mathrm{B}}T}\Big)^{1/2} \int d\delta_{21}\int d\delta_{32} e^{-\kappa_{||}(\delta_{21},\delta_{32})- \kappa_{\bot}(\delta_{21},\delta_{32})} \rho_{ij}(\delta_{21} ,\delta_{32})$$ where $$\begin{aligned} \frac{1}{\upsilon^{||2}_D} &=&\Big(\frac{m}{k_{\mathrm{B}} T} + \frac{\vert k_{21}\vert ^2}{\sigma_{21}^2} + \frac{\vert k_{32}\vert ^2\cos^2\theta}{\sigma_{32}^2}\Big)\label{eq:vDa}\\ \kappa_{||}(\delta_{21},\delta_{32})&=& \Big(\frac{\delta_{21}- \delta_{21}^{(0)}}{\sqrt{2}\tilde \sigma_{21}} \Big)^2 +\Big(\frac{\delta_{32}- \delta_{32}^{(0)}}{\sqrt{2}\tilde \sigma_{32}} \Big)^2\nonumber\\ &+& \Big(\frac{(\delta_{21}- \delta_{21}^{(0)})}{\sigma_{21}}\frac{\vert k_{21}\vert\upsilon_D}{ \sigma_{21}}\Big) \Big(\frac{(\delta_{32}- \delta_{32}^{(0)})}{\sigma_{32}}\frac{\vert k_{32}\vert\cos\theta\upsilon^{||}_D}{ \sigma_{32}}\Big)\label{eq:kappaa}\\ \tilde\sigma_{21}^2 &=& \frac{1}{1 - \vert k_{21}\vert^2\upsilon_D^{||2}/\sigma_{21}^2} \sigma_{21}^2\label{eq:sigma21a} \\ \tilde\sigma_{32}^2 &=& \frac{1}{1 - \vert k_{32}\cos\theta\vert^2\upsilon^{|| 2}_D/\sigma_{32}^2}\sigma_{32}^2\label{eq:sigma32a} \end{aligned}$$ and $$\begin{aligned} \frac{1}{\upsilon^{\bot 2}_D} &=&\Big(\frac{m}{k_{\mathrm{B}} T} + \frac{\vert k_{32}\sin\theta\vert ^2}{\sigma_{32}^2} \Big)\label{eq:vDab}\\ \kappa_{\bot}(\delta_{21},\delta_{32})&=& \Big(\frac{\delta_{32}-\delta^{(0)}_{32}}{\sqrt{2} \sigma_{32}^2} + \frac{\delta_{21}-\delta^{(0)}_{21}}{ \sqrt{2}\sigma_{21}^2}\frac{\vert k_{21}\vert \vert k_{32}\cos\theta\vert\upsilon^{||2}_D }{\sigma_{32}^2} \Big)^2\upsilon^{\bot 2}_Dk_{32}^2\sin^2\theta\end{aligned}$$ We thank J. Rangel for his help in the construction of the diode laser. This work was supported by DGAPA-UNAM, México, under projects PAPIIT Nos. 
IN116309, IN110812, and IA101012, and by CONACyT, México, under project No. 44986, LN-LANMAC-CTIC-2019 and PIIF-[*Correlaciones cuánticas: teoría y experimento*]{}-2019. L.M. Hoyos-Campo thanks UNAM-DGAPA and Conacyt for the postdoctoral fellowship.
--- abstract: 'Tracking objects in Computer Vision is a hard problem. Privacy and utility concerns add an extra layer of complexity to this problem. In this work we consider the problem of maintaining privacy and utility while tracking an object in a video stream using Kalman filtering. Our first proposed method ensures that the localization accuracy of this object will not improve beyond a certain level. Our second method ensures that the localization accuracy of the same object will always remain under a certain threshold.' author: - bibliography: - 'conference\_101719.bib' title: 'Utility and Privacy in Object Tracking from Video Stream using Kalman Filter' --- Kalman Filter, Privacy, Utility, LMI Introduction ============ We capture and share videos for a variety of purposes. These visual data contain different kinds of private information [@Acquisti_2006], such as identity cards, license plate numbers, and fingerprints. Another class of visual data, which is the focus of our paper, is the video stream of an object in motion. We can use filtering algorithms (e.g. the Kalman filter [@kalman]) to track such an object with considerable precision. The object in motion is first detected by an image processing algorithm from the video frames. The detection accuracy depends on the algorithm, along with the resolution of the image frames. Higher resolution of the camera and higher accuracy of the detection algorithm in the *pixel coordinate* improve localization of the tracked object in the *spatial coordinate*. We address two important questions pertaining to tracking an object using a Kalman filter from a video stream. The first question is from a utility viewpoint. We define utility as the quality of the estimation accuracy. If we are putting together an image acquisition and detection system to track the object shown in Fig.
\[fig:redball\] using a Kalman filter [@A__2010], we can ask: what is the most economical setup that ensures that the estimated localization error always stays below a prescribed threshold, or equivalently, that the utility stays above a prescribed threshold? The second question is about privacy. When an object is being tracked in a video stream, its privacy is proportional to the uncertainty in the estimate of its location. The notion of privacy is relevant when such videos are being accessed by a third party. The owner of this data might want to perturb the video such that a Kalman-filter-based estimation on it will keep the localization error above a prescribed value. Akin to the utility scenario, one might ask: what is the optimal noise that we can add to the video which ensures that the estimated localization error is always above a prescribed threshold? We are not aware of any prior works related to privacy and utility in object tracking using filtering from a video stream. Most of the works have focused on preserving privacy and/or utility of static images. In [@Orekondy_2018] the authors proposed a redaction-by-segmentation technique to ensure the privacy of image contents. They showed that using their redaction method they can ensure near-perfect privacy while maintaining image utility. In [@Boyle_2000] the authors studied the impact of filters that blur and pixelize at different levels on the privacy and utility of various elements in a video frame. In [@Winkler_2011] the authors presented a concept for user-centric privacy awareness in video surveillance. Other related works include [@Qureshi_2009],[@Brassil_2009],[@Saho_2018], and [@Kim_2015].
Problem Formulation =================== ![Time evolution of an object with darker shades representing more recent location.[]{data-label="fig:redball"}](ballmoving.eps){width="50.00000%"} We model the object detection process from a video frame using a linear discrete-time stochastic system $\bar{\mathcal{S}}$ described by a model of the form \[eq:LTI\] $${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_{k+1} = {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_{k} + {{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_{k}, \label{processDynamics}\\$$ $${{\textcolor[HTML]{000000}{\boldsymbol{y}}}}_k = {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_k + {{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k, \label{sensing}$$ where $k=0,1,2,...$ represents the frame index, ${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_k\in{\mathbb{R}}^{n_{x}}$ is the $n_x$ dimensional true state of the *model* in frame $k$, and ${{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_k \in{\mathbb{R}}^{n_x}$ is the $n_x$ dimensional zero-mean Gaussian additive process noise variable with $\mathbb{E}[{{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_k{{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_l^T] = {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\delta_{kl}$. The $n_y$ dimensional observations in frame $k$ are denoted by ${{\textcolor[HTML]{000000}{\boldsymbol{y}}}}_k\in{\mathbb{R}}^{n_y}$, which are corrupted by an $n_y$ dimensional additive noise ${{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k \in{\mathbb{R}}^{n_y}$. The sensor noise at each time instant is a zero-mean Gaussian noise variable with $\mathbb{E}[{{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k{{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_l^T] = {\textcolor[HTML]{000000}{\boldsymbol{R}}}\delta_{kl}$.
The initial conditions are ${\mathbb{E}\left[{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_0\right]}=\boldsymbol{\mu}_{0}$ and ${{\mathbb{E}\left[{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_0{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_0}^T\right]}} = {{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_0$. The process noise ${{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_{k}$, observation noise ${{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k$, and initial state variable ${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_0$ are assumed to be independent. The optimal state estimator for the stochastic system $\bar{\mathcal{S}}$ is the Kalman filter, defined by $$\begin{aligned} {{\textcolor[HTML]{000000}{\boldsymbol{K}}}}_k &= {{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^-{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\Big[{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^-{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+{\textcolor[HTML]{000000}{\boldsymbol{R}}}\Big]^{-1} \tag*{(Kalman Gain)},\\ {{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{-}}_{k} & = {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{+}}_{k-1} \tag*{(Mean Propagation)},\\ {{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^- &= {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}^{+}_{k-1}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T+{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\tag*{(Covariance Propagation)},\\ {{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{+}}_{k} &={{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{+}}_{k-1} + {{\textcolor[HTML]{000000}{\boldsymbol{K}}}}_k({{\textcolor[HTML]{000000}{\boldsymbol{y}}}}_k-{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{-}}_{k})\tag*{(Mean Update)},\\ {{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}^{+}_{k}&= 
({{\textcolor[HTML]{000000}{\boldsymbol{I}}}}_{n_x}-{{\textcolor[HTML]{000000}{\boldsymbol{K}}}}_k{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}){{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^-\tag*{(Covariance Update)},\\ $$ where ${{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^-,{{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}}_k^+ \in {\mathbb{R}}^{n_x\times n_x}$ are the prior and posterior covariance matrices of the error estimate for frame $k$, respectively. The variables ${{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{-}}_{k},{{\textcolor[HTML]{000000}{\boldsymbol{\mu}}}^{+}}_{k}\in {\mathbb{R}}^{n_x}$ denote the prior and posterior mean estimates of the true state ${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_k$. The variable $\boldsymbol{K}_k$ is the Kalman gain in frame $k$. The parameter ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ is our design variable in both the utility and the privacy case, varying only in its interpretation. Now we define utility and privacy in the context of tracking a moving object. #### Utility Utility of the object detection system is specified by an upper bound on the steady-state estimation error due to filtering. We calculate a feasible ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ that ensures that the steady-state prior covariance matrix is upper-bounded by a prescribed positive definite matrix ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}$ for the detection system modeled in eqn. \[eq:LTI\]. The parameter ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ is a measure of the maximum inaccuracies allowed in the detection system. #### Privacy The privacy requirement is centered around a particular frame (say the ${k+1}^{\text{th}}$). It is specified by a lower bound on the estimation error ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k+1}^{+}$ after the Kalman update, for that particular frame. This is where the privacy scenario differs from the utility case, where we focus on the steady-state error.
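For concreteness, the covariance recursion of the Kalman filter above can be sketched in a few lines. The following is a minimal illustration, not the paper's code: the constant-velocity model $\boldsymbol{F}$, the position-only observation $\boldsymbol{H}$, and the values of $\boldsymbol{Q}$ and $\boldsymbol{R}$ are all assumed for the example.

```python
import numpy as np

# Illustrative 1-D constant-velocity model: state = [position, velocity].
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition
H = np.array([[1.0, 0.0]])        # only the position is observed
Q = 0.01 * np.eye(2)              # process noise covariance (assumed)
R = np.array([[0.5]])             # measurement noise covariance (assumed)

Sigma = np.eye(2)                 # prior covariance at frame 0
for _ in range(200):
    # Kalman gain, covariance update (posterior), covariance propagation
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)
    Sigma_post = (np.eye(2) - K @ H) @ Sigma
    Sigma = F @ Sigma_post @ F.T + Q

# After the loop, Sigma approximates the steady-state prior covariance,
# i.e. the fixed point of the DARE discussed in the next section.
```

Iterating the recursion until it converges is also a simple numerical route to the steady-state covariance that the utility criterion below bounds.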
We are interested in calculating a feasible ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ such that the posterior error covariance matrix ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k+1}^{+}$ is lower-bounded by a prescribed positive definite matrix ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{d}_{k+1}$. The parameter ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ is a measure of the minimal noise that needs to be artificially injected into the ${k+1}^{\text{th}}$ image frame to ensure privacy with respect to accurate localization. In the following sections we present two theorems that demonstrate how the utility- and privacy-preserving design parameter ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ can be obtained as the solution of two convex optimization problems involving linear matrix inequalities (LMIs). Optimal ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ for utility ================================================================ \[thm:2\] Given ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_\infty$, the desired steady-state error variance, the optimal algorithmic precision ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}:={\textcolor[HTML]{000000}{\boldsymbol{R}}}^{-1}$ that satisfies $ {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_\infty \preceq {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_\infty$ is given by the following optimization problem, $$\left.
\begin{aligned} & \min_{{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}}{ {\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}} \text{ subject to }\\ &\begin{bmatrix} {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} & {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T \\ {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T & {\textcolor[HTML]{000000}{\boldsymbol{L}}}+{\textcolor[HTML]{000000}{\boldsymbol{L}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{L}}} \end{bmatrix} \succeq 0, \end{aligned} \right\} \label{eqn:thm2}$$ where $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}} &\succeq 0\\ {\textcolor[HTML]{000000}{\boldsymbol{L}}} &:={{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T, \text{ and}\\ {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} &:= {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty} - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T - {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ & + {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T{\textcolor[HTML]{000000}{\boldsymbol{L}}}^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T, \end{aligned}$$ with ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}} \in {\mathbb{R}}^{n_{y}\times n_{y}}$. 
The variable ${\textcolor[HTML]{000000}{\boldsymbol{W}}}\in{\mathbb{R}}^{n_{y}\times n_{y}}$ is user-defined and serves as a normalizing weight on ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}$. The steady-state prior covariance is the solution to the following discrete-time algebraic Riccati equation (DARE) $$\begin{gathered} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty} = {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T+ {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\left({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+{{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^{-1}}\right)^{-1} {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T, \label{ARE}\end{gathered}$$ where ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}:={\textcolor[HTML]{000000}{\boldsymbol{R}}}^{-1}$. We assume that ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_\infty$ is the solution of eqn. \[ARE\] for some ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^d \succeq 0$, i.e. for detection precision ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^d$ the steady-state variance is ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_\infty$. We use ${\textcolor[HTML]{000000}{\boldsymbol{A}}}\succeq {\textcolor[HTML]{000000}{\boldsymbol{B}}}$ to denote that ${\textcolor[HTML]{000000}{\boldsymbol{A}}}-{\textcolor[HTML]{000000}{\boldsymbol{B}}}$ is a positive semi-definite matrix. For any ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^d \preceq {\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}$, eqn.
\[ARE\] becomes the following inequality $$\begin{gathered} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty} - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T - {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}+ \\ {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\left({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^{-1}\right)^{-1} {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T \succeq 0. {\label{eqn:Ricc-relax}} \end{gathered}$$ Expanding $\left({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^{-1}\right)^{-1}$ using matrix-inversion lemma, the above inequality becomes $$\begin{gathered} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty} - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T - {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}+ {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T{\textcolor[HTML]{000000}{\boldsymbol{L}}}^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T \\ - 
{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\left({\textcolor[HTML]{000000}{\boldsymbol{L}}}+{\textcolor[HTML]{000000}{\boldsymbol{L}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{L}}}\right)^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T \succeq 0, {\label{eqn:ricc-relax-final}} \end{gathered}$$ where ${\textcolor[HTML]{000000}{\boldsymbol{L}}}:={{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T$. Using Schur complement we get the following LMI $$\begin{aligned} \begin{bmatrix} {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} & {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T \\ {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T & {\textcolor[HTML]{000000}{\boldsymbol{L}}}+{\textcolor[HTML]{000000}{\boldsymbol{L}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{L}}} \end{bmatrix} \succeq 0, {\label{eqn:ss:LMI}}\end{aligned}$$ where $$\begin{gathered} {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} := {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty} - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T - {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ + 
{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T{\textcolor[HTML]{000000}{\boldsymbol{L}}}^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T. \end{gathered}$$ The optimal ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^\ast$ is obtained by minimizing ${\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}$. We assume complete detectability of (${{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}$) and stabilizability of (${{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}^{1/2}$) [@anderson1979optimal] for eqn. \[eq:LTI\]. This ensures the existence and uniqueness of the steady-state prior covariance matrix ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma_{\infty}}}}$ (for a fixed ${{\textcolor[HTML]{000000}{\boldsymbol{R}}}}$) for the corresponding DARE in eqn. \[ARE\]. The linear matrix inequality (LMI) in eqn. \[eqn:thm2\] gives the feasible set of ${\textcolor[HTML]{000000}{\boldsymbol{R}}}:={\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^{-1}$. We introduced the convex cost function ${\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}$ to calculate the most economical choice of ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$. Theoretical Bound on Utility ---------------------------- The minimal steady-state covariance of the estimate that *any object detection* setup can achieve, when modeled as in eqn.
\[eq:LTI\], is the solution to the following DARE $$\begin{gathered} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty} = {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T+ {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ - {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\left({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T\right)^{-1} {{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T .\label{eqn:noRDARE}\end{gathered}$$ This provides a theoretical lower bound on the prescribed ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{\infty}$ that we can achieve. A unique positive solution ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}$ to eqn. \[eqn:noRDARE\] exists if the $({{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{H}}}})$ pair is detectable, the $({{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}^{1/2})$ pair is stabilizable, and ${{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{\infty}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T$ is full-rank.
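As a sanity check of this bound, in the scalar case $F=H=1$ the DARE of eqn. \[ARE\] can be iterated directly; its fixed point has the closed form $\Sigma_\infty=\big(Q+\sqrt{Q^2+4QR}\big)/2$, which reduces to the lower bound $\Sigma_\infty=Q$ of eqn. \[eqn:noRDARE\] as $R\to 0$. The sketch below uses illustrative values only:

```python
import math

# Scalar DARE iteration for F = H = 1 (illustrative sanity check):
# Sigma <- Sigma + Q - Sigma^2 / (Sigma + R).
def steady_state_variance(Q, R, n_iter=2000):
    s = 1.0
    for _ in range(n_iter):
        s = s + Q - s * s / (s + R)
    return s

Q = 0.1
# Closed-form fixed point of the scalar DARE.
analytic = lambda R: 0.5 * (Q + math.sqrt(Q * Q + 4.0 * Q * R))
```

Larger measurement noise $R$ yields a larger steady-state variance, and $R=0$ recovers $\Sigma_\infty=Q$, the best localization any detection setup of this form can achieve.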
Optimal ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ for privacy ================================================================ \[thm:3\] Given ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{k+1}$, the desired predicted error variance at time $k+1$, the optimal measurement noise ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}}$ that satisfies $ {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k+1}^{-} \succeq {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{k+1}$ for a known ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k}^{-}$, is given by the following optimization problem, $$\left. \begin{aligned} & \min_{{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}}}{ {\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}}{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}} \text{ subject to }\\ &\begin{bmatrix} {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} & {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1 \\ {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1^{T} & {\textcolor[HTML]{000000}{\boldsymbol{L}}}_2+{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}} \end{bmatrix} \succeq 0, \end{aligned} \right\} \label{eqn:thm3}$$ where $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}} &\succeq 0\\ {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1 &:={{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T, \ {\textcolor[HTML]{000000}{\boldsymbol{L}}}_2 := {\textcolor[HTML]{000000}{\boldsymbol{H}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{\textcolor[HTML]{000000}{\boldsymbol{H}}}^T+{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{s}}\text{ and}\\ {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} &:= -{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{k+1} +
{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T +{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}},\end{aligned}$$ with ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}} \in {\mathbb{R}}^{n_{y}\times n_{y}}$. The variable ${\textcolor[HTML]{000000}{\boldsymbol{W}}}\in{\mathbb{R}}^{n_{y}\times n_{y}}$ is user-defined and serves as a normalizing weight on ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}}$. The Riccati equation for the predicted covariance is $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k+1} &= {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T +{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ &- {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+\underbrace{{{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{s}+{{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{p}}_{{{\textcolor[HTML]{000000}{\boldsymbol{R}}}}})^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T\end{aligned}$$ where the measurement noise consists of the inherent noise (${{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{s}$) due to the object acquisition setup, which is assumed to be known, and the noise (${{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{p}$) that needs to be added to ensure $ {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k+1}^{-} \succeq {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{k+1}$. Here ${{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{p}$ is the design variable.
The ${{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{p}$ that ensures a lower bound on $ {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{k+1}^{-}$ satisfies $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{d}_{k+1} &\preceq {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T +{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}\\ &- {{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T({{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T+{{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{s}+{{\textcolor[HTML]{000000}{\boldsymbol{R}}}}_{p})^{-1}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T.\end{aligned}$$ Using the Schur complement, we get the following linear matrix inequality, $$\begin{aligned} \begin{bmatrix} {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} & {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1 \\ {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1^{T} & {\textcolor[HTML]{000000}{\boldsymbol{L}}}_2+{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{p}} \end{bmatrix} \succeq 0, \end{aligned}$$ where $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{L}}}_1 &:={{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}^T, \ {\textcolor[HTML]{000000}{\boldsymbol{L}}}_2 := {\textcolor[HTML]{000000}{\boldsymbol{H}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{\textcolor[HTML]{000000}{\boldsymbol{H}}}^T+{\textcolor[HTML]{000000}{\boldsymbol{R}}}_{\text{s}}\text{ and}\\ {\textcolor[HTML]{000000}{\boldsymbol{M}}}_{11} &:= -{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^d_{k+1} +
{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{k}{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}^T + {{\textcolor[HTML]{000000}{\boldsymbol{Q}}}},\end{aligned}$$ The optimal ${\textcolor[HTML]{000000}{\boldsymbol{R}}}^\ast_p$ is achieved by minimizing ${\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{R}}}_p{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}$. The LMI in eqn. \[eqn:thm3\] gives the convex feasible set for ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_p$ that ensures a lower bound on the predicted covariance in the ${k+1}^{\text{th}}$ frame. We impose the convex cost function ${\mathbf{tr}\left({\textcolor[HTML]{000000}{\boldsymbol{W}}}{\textcolor[HTML]{000000}{\boldsymbol{R}}}_p{\textcolor[HTML]{000000}{\boldsymbol{W}}}^T\right)}$ to calculate an optimal ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_p$. Numerical Results ================= We assume a simplified motion model for the red object moving from one frame to the next in a video, as shown in Fig. \[fig:redball\].
The dynamics in the pixel frame is $$\begin{aligned} \underbrace{\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \delta x_{k+1} \\ \delta y_{k+1} \end{bmatrix}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_{k+1}^p} &= \underbrace{\begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}}_{{{\textcolor[HTML]{000000}{\boldsymbol{F}}}}} \underbrace{\begin{bmatrix} x_{k} \\ y_{k} \\ \delta x_{k} \\ \delta y_{k} \end{bmatrix}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_k^p}+{{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_k,\label{eqn:pixeldyn}\\ {\textcolor[HTML]{000000}{\boldsymbol{y}}}_k &= \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \end{bmatrix}}_{{{\textcolor[HTML]{000000}{\boldsymbol{H}}}}} \begin{bmatrix} x_{k} \\ y_{k} \\ \delta x_{k} \\ \delta y_{k} \end{bmatrix} + {{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k,\label{eqn:pixelmeas}\end{aligned}$$ where ${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}_{k}^p$ denotes the pixel coordinates of the moving object in the $k^{\text{th}}$ frame, $\mathbb{E}[{{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_k{{\textcolor[HTML]{000000}{\boldsymbol{w}}}}_l^T]=\delta_{kl}{\textcolor[HTML]{000000}{\boldsymbol{Q}}}$, and $\mathbb{E}[{{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_k{{\textcolor[HTML]{000000}{\boldsymbol{n}}}}_l^T]=\delta_{kl}{\textcolor[HTML]{000000}{\boldsymbol{R}}}$. The video is generated synthetically. There are a total of 500 frames in this video, with 425 rows and 570 columns in each frame. The pair $({{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{H}}}})$ is completely detectable and $({{\textcolor[HTML]{000000}{\boldsymbol{F}}}},{{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}^{1/2})$ is completely stabilizable, which ensures the existence and uniqueness of a positive solution to the DARE induced by Kalman filtering of this system. The variable ${\textcolor[HTML]{000000}{\boldsymbol{R}}}$ is our design parameter.
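A bare-bones Kalman predict/update loop for this constant-velocity model can be sketched as follows; the trajectory, $Q$, and $R$ values are illustrative only, not the ones used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant-velocity pixel dynamics, as in eqns. (pixeldyn)-(pixelmeas).
F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q = np.diag([0.1, 0.1, 1.0, 1.0])   # process noise (illustrative)
R = 2.0 * np.eye(2)                 # measurement noise (illustrative)

x = np.array([100.0, 100.0, 2.0, -1.0])    # true state [x, y, dx, dy]
x_hat, P = np.zeros(4), 100.0 * np.eye(4)  # initial estimate and covariance

for k in range(200):
    # Simulate one frame of truth and its noisy pixel measurement.
    x = F @ x + rng.multivariate_normal(np.zeros(4), Q)
    y = H @ x + rng.multivariate_normal(np.zeros(2), R)
    # Predict.
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    # Update.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(4) - K @ H) @ P
```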
A homography exists between the pixel coordinates (${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^p$) and the spatial coordinates (${{\textcolor[HTML]{000000}{\boldsymbol{x}}}}$). The homography in this numerical problem is represented as an affine map $$\begin{aligned} {{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}} = \underbrace{\begin{bmatrix} 0 & \frac{n_r}{4}\\ -\frac{n_c}{4} & 0 \end{bmatrix}}_{{\textcolor[HTML]{000000}{\boldsymbol{U}}}} {{\textcolor[HTML]{000000}{\boldsymbol{x}}}}+ \begin{bmatrix} \frac{n_r}{2} \\ \frac{n_c}{2} \end{bmatrix}.\end{aligned}$$ The affine map induces the covariance relation ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}}= {\textcolor[HTML]{000000}{\boldsymbol{U}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}{\textcolor[HTML]{000000}{\boldsymbol{U}}}^T$ from the spatial to the pixel coordinates. Utility results --------------- The optimal utility of an object detection setup, which includes the image acquisition hardware and the image processing algorithm, can be prescribed as the maximum error covariance allowed in the spatial coordinate frame (${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}\preceq {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}^{\text{max}}$) due to filtering on the observed data. For instance, suppose we are tracking a car. We expect the tracking error covariance to remain below $\textbf{diag}([L_{\text{car}}^2 \ L_{\text{car}}^2])$, where $L_{\text{car}}$ denotes the length of the car. This is important from a situational awareness perspective in a traffic system.
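The induced covariance relation is a one-line computation. The sketch below uses the frame size given in the text (425 rows, 570 columns) and checks that the spatial covariance $\textbf{diag}([2.703e{-}3 \ 4.862e{-}3])$ m$^2$ used in the privacy experiment maps to roughly $\textbf{diag}([54.891 \ 54.891])$ in pixel units:

```python
import numpy as np

n_r, n_c = 425, 570  # rows and columns per frame (from the text)
U = np.array([[0.0, n_r / 4.0],
              [-n_c / 4.0, 0.0]])

def to_pixel_cov(sigma_xx):
    """Map a spatial error covariance to pixel coordinates: U @ Sigma @ U.T."""
    return U @ sigma_xx @ U.T

sigma_xx = np.diag([2.703e-3, 4.862e-3])  # m^2, spatial requirement
sigma_pp = to_pixel_cov(sigma_xx)         # pixel-frame covariance
```

Note that a diagonal spatial covariance stays diagonal under this particular $U$, but the two axes swap and are scaled by $(n_r/4)^2$ and $(n_c/4)^2$.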
Using the induced covariance relation ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}}= {\textcolor[HTML]{000000}{\boldsymbol{U}}}{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}{\textcolor[HTML]{000000}{\boldsymbol{U}}}^T$, we transform the utility requirement into the pixel coordinate system. The theoretical lower bound on utility in the pixel coordinate system for ${{\textcolor[HTML]{000000}{\boldsymbol{Q}}}}= \textbf{diag}([0.1 \ 0.1 \ 50 \ 50])$ is $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}^{\text{p}}}^{\text{lb}} =\textbf{diag}([54.891 \ 54.891]), \end{aligned}$$ which can be solved using the `idare()` function in MATLAB [@MATLAB:2017]. This translates to a lower bound of $$\begin{aligned} {\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}^{\text{lb}} =\textbf{diag}([2.703e-3 \ 4.862e-3]) \text{m}^2, \end{aligned}$$ in the spatial coordinate system. If we allow for less precise filtering in pixel coordinates, ensuring an estimation error covariance of $1.5{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}^{\text{lb}}$, the convex optimization problem yields an optimal precision requirement of $${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^* = \textbf{diag}([0.660 \ 0.660]),$$ with ${{\textcolor[HTML]{000000}{\boldsymbol{W}}}}$ chosen to be identity. To solve this we used CVX, a package for specifying and solving convex programs [@cvx; @gb08], with the SDPT3 solver [@T_t_nc__2003], which took a CPU time of 0.95 s.
The calculated ${\textcolor[HTML]{000000}{\boldsymbol{R}}}^*:={{\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^*}^{-1}$ indicates that the intensity of the measurement noise (modeled as zero-mean Gaussian) that gets added to the actual measurement by the hardware and the object detection algorithm needs to be less than $1.5$ (pixel length)$^2$. This ensures that the estimation error always remains below the prescribed threshold of $1.5{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}^{\text{lb}}$. One can relate this precision requirement to different aspects of the detection process. For instance, the precision is proportional to the resolution of the camera used: higher resolution corresponds to higher precision. The matrix ${\textcolor[HTML]{000000}{\boldsymbol{W}}}$ used in the cost function can be interpreted as a price per unit resolution. With a proper choice of ${\textcolor[HTML]{000000}{\boldsymbol{W}}}$ we can calculate the most economical sensing system that satisfies our requirement. Using a precision of ${\textcolor[HTML]{000000}{\boldsymbol{\Upsilon}}}^*= \textbf{diag}([0.660 \ 0.660])$, we calculate the RMSE for 500 Monte-Carlo (MC) runs with randomized initial conditions, shown in Fig. \[fig:rmse1\]. The peaks in the plot are due to the fact that we assumed a linear motion model, whereas Fig. \[fig:redball\] shows that the motion no longer remains linear at places where there is a considerable change in direction. ![RMSE with 500 MC runs[]{data-label="fig:rmse1"}](RMSE_500MC.eps){width="40.00000%"} ![Error covariance averaged over 500 MC runs[]{data-label="fig:errorcov"}](errorcov_v1.eps){width="50.00000%"} In Fig.
\[fig:errorcov\] we see the evolution of the error covariance matrix ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}$ across frames, averaged over 500 MC runs. In the steady state this covariance is guaranteed to remain below the prescribed $1.5{\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}_{{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}{{\textcolor[HTML]{000000}{\boldsymbol{x}}}}}^{\text{lb}}$. Privacy results --------------- In the system defined in eqn. \[eqn:pixeldyn\] and eqn. \[eqn:pixelmeas\] we assume that the measurement model has inherent zero-mean Gaussian sensor and/or object detection noise (${\textcolor[HTML]{000000}{\boldsymbol{n}}}_s$). We add a synthetic zero-mean Gaussian noise (${\textcolor[HTML]{000000}{\boldsymbol{n}}}_p$) to the image to ensure privacy. The noise intensity $\mathbb{E}[{\textcolor[HTML]{000000}{\boldsymbol{n}}}_s{\textcolor[HTML]{000000}{\boldsymbol{n}}}_s^T]={\textcolor[HTML]{000000}{\boldsymbol{R}}}_s$ is known and $\mathbb{E}[{\textcolor[HTML]{000000}{\boldsymbol{n}}}_p{\textcolor[HTML]{000000}{\boldsymbol{n}}}_p^T]={\textcolor[HTML]{000000}{\boldsymbol{R}}}_p$ is our design parameter. *(Fig. \[fig:threeframes\]: three consecutive frames at the discrete time points $t-1$, $t$, and $t+1$, each containing the region **A**, annotated with $\Sigma ^{-}_{t}$ and $\Sigma ^{-}_{t+1} \succeq \Sigma ^{d}_{t+1}$.)* In Fig. \[fig:threeframes\] we see three consecutive frames with a smaller region inside them marked as **A**. These frames span the discrete time points $\{t-1,t,t+1\}$, as shown in the figure. When the tracked red object is in **A** in the ${t+1}^{\text{th}}$ frame, we want the location estimation error ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{t+1}$ to be greater than the prescribed ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{d}_{t+1}$. We choose ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{d}_{t+1}$ to be $\textbf{diag}([2.703e-03 \ 4.862e-03])$m$^{2}$ in the spatial coordinates, which translates to $\textbf{diag}([54.891 \ 54.891])$ in the pixel frame.
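When ${\textcolor[HTML]{000000}{\boldsymbol{W}}}$ is the identity, the semidefinite program of Theorem \[thm:3\] can be reduced by the same Schur-complement argument: assuming $M_{11} \succ 0$, the LMI is feasible iff $L_2 + R_p \succeq L_1^T M_{11}^{-1} L_1$, so the trace-minimal PSD choice of $R_p$ is the positive part of $L_1^T M_{11}^{-1} L_1 - L_2$. A sketch of this reduction (the matrices are illustrative, not the paper's values):

```python
import numpy as np

def min_privacy_noise(F, H, Q, Sigma_prior, Sigma_d, R_s):
    """Trace-minimal PSD R_p making the Theorem-3 LMI feasible (W = I).

    Via the Schur complement (M11 assumed positive definite), the LMI holds
    iff L2 + R_p >= L1.T @ inv(M11) @ L1, so the optimum is the positive
    part (negative eigenvalues clipped to zero) of L1.T @ inv(M11) @ L1 - L2.
    """
    L1 = F @ Sigma_prior @ H.T
    L2 = H @ Sigma_prior @ H.T + R_s
    M11 = -Sigma_d + F @ Sigma_prior @ F.T + Q
    D = L1.T @ np.linalg.solve(M11, L1) - L2
    w, V = np.linalg.eigh(D)
    return (V * np.clip(w, 0.0, None)) @ V.T

# Illustrative example: demand a large predicted error (privacy) at k+1.
F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q = np.diag([0.1, 0.1, 50.0, 50.0])
Sigma_prior = 10.0 * np.eye(4)   # known prior covariance
Sigma_d = 15.0 * np.eye(4)       # desired lower bound at k+1
R_p = min_privacy_noise(F, H, Q, Sigma_prior, Sigma_d, np.zeros((2, 2)))
```

The same feasible set can of course be handed to a general SDP solver (the authors use CVX with SDPT3 in MATLAB); the closed form above only covers the $W=I$ case.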
Starting with an initial prior covariance ${\textcolor[HTML]{000000}{\boldsymbol{\Sigma}}}^{-}_{t}$, our proposed privacy theorem yields $${\textcolor[HTML]{000000}{\boldsymbol{R}}}_p= \textbf{I}_2,$$ with ${\textcolor[HTML]{000000}{\boldsymbol{W}}}$ chosen to be identity. We assumed that the object acquisition and detection setup adds no noise to the measurement, i.e., ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_s = \textbf{0}$. From a data sharing perspective, we would share the image frame at time point $t+1$ with added noise of intensity ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_p$. Our privacy-preserving framework is explained in Fig. \[fig:frames\]. To solve for ${\textcolor[HTML]{000000}{\boldsymbol{R}}}_p$ we again used CVX with the SDPT3 solver, which took a CPU time of 0.44 s. The reduction in CPU time for the privacy problem compared to the utility problem is due to the fact that there is no inverse operation in the LMI. We see in Fig. \[fig:threeframes\] that the red object, which is being tracked using a Kalman filter, can still be identified in the $t+1$ frame, but can no longer be tracked beyond a certain accuracy. ![Privacy ensuring mechanism[]{data-label="fig:frames"}](user.png){width="52.5pt" height="52.5pt"} *(Fig. \[fig:frames\]: the camera and object acquisition stage produces the frames, noise of intensity $R_{p}$ is added, and the resulting frames containing the region **A** are shared with the end user.)* Conclusion ========== In this work we addressed two questions related to privacy and utility for moving object detection from a video stream using the Kalman filter. We modeled both as convex optimization problems based on LMIs. The proposed framework was implemented on a numerical problem for two scenarios. In the first, the purpose was to track an object with an upper bound on the estimation error, ensuring utility. In the second, we calculated the minimal noise that needs to be injected into a frame to ensure the desired privacy, prescribed by a lower bound on the localization error of the object. Acknowledgment ============== We are thankful to the reviewers whose valuable feedback helped in improving our work.
--- author: - 'Y. Wang' - 'H. Beuther' - 'M. R. Rugel' - 'J. D. Soler' - 'J. M. Stil' - 'J. Ott' - 'S. Bihr' - 'N. M. McClure-Griffiths' - 'L. D. Anderson' - 'R. S. Klessen' - 'P. F. Goldsmith' - 'N. Roy' - 'S. C. O. Glover' - 'J. S. Urquhart' - 'M. Heyer' - 'H. Linz' - 'R. J. Smith' - 'F. Bigiel' - 'J. Dempsey' - 'T. Henning' bibliography: - 'references.bib' date: 'Received dd, mm, yyyy; accepted dd, mm, yyyy' title: 'The HI/OH/Recombination line survey of the inner Milky Way (THOR): data release 2 and overview' --- [The Galactic plane has been observed extensively by a large number of Galactic plane surveys from infrared to radio wavelengths at an angular resolution below 40$\arcsec$. However, an HI 21 cm line and continuum survey with comparable spatial resolution is lacking. ]{} [The first half of the THOR data ($l=14.0\degr-37.9\degr$, and $l=47.1\degr-51.2\degr$, $\lvert b \rvert \leq 1.25\degr$) has been published in our data release 1 paper. With this data release 2 paper, we publish all the remaining spectral line data and Stokes I continuum data with high angular resolution (10$\arcsec$–40$\arcsec$), including a new HI dataset for the whole THOR survey region ($l=14.0-67.4\degr$ and $\lvert b \rvert \leq 1.25\degr$). As we have published the results of the OH lines and continuum emission elsewhere, we concentrate on the HI analysis in this paper.]{} [With the [*Karl G. Jansky*]{} Very Large Array (VLA) in C-configuration, we observed a large portion of the first Galactic quadrant, achieving an angular resolution of $\leq 40\arcsec$. At $L$ Band, the WIDAR correlator at the VLA was set to cover the HI 21 cm line, four OH transitions, a series of H$n\alpha$ radio recombination lines (RRLs; $n=151$ to 186), and eight 128 MHz-wide continuum spectral windows (SPWs), simultaneously.]{} [We publish all OH and RRL data from the C-configuration observations, and a new HI dataset combining VLA C+D+GBT (VLA D-configuration and GBT data are from the VLA Galactic Plane Survey) for the whole survey.
The HI emission shows clear filamentary substructures at negative velocities with low velocity crowding. The emission at positive velocities is more smeared-out, likely due to higher spatial and velocity crowding of structures at positive velocities. Compared to the spiral arm model of the Milky Way, the atomic gas follows the Sagittarius and Perseus Arms well, but with significant material in the inter-arm regions. With the C-configuration-only HI+continuum data, we produced an optical depth map of the THOR areal coverage from 228 absorption spectra with the nearest-neighbor method. With this $\tau$ map, we corrected the HI emission for optical depth, and the derived HI column density is 38% higher than that obtained under the optically thin assumption. The total HI mass with optical depth correction in the survey region is 4.7$\times10^8~M_\odot$, 31% more than the mass derived assuming the emission is optically thin. If we applied this 31% correction to the whole Milky Way, the total atomic gas mass would be 9.4–10.5$\times 10^9~M_\odot$. Comparing the HI with existing CO data, we find a significant increase in the atomic-to-molecular gas ratio from the spiral arms to the inter-arm regions.]{} [The high-sensitivity, high-resolution THOR HI dataset provides an important new window on the physical and kinematic properties of gas in the inner Galaxy. Although the optical depth we derive is a lower limit, our study shows that the optical depth correction is significant for column density and mass estimation.
Together with the OH, RRL, and continuum emission from the THOR survey, these new data provide the basis for high-angular-resolution studies of the interstellar medium (ISM) in different phases.]{} Introduction ============ The Galactic plane has been observed extensively over the past decades by different survey projects at multiple wavelengths in both continuum and spectral lines, from near infrared (e.g., UKIDSS[^1], @lucas2008; [*Spitzer*]{}/GLIMPSE[^2], @benjamin2003 [@churchwell2009], [*Spitzer*]{}/MIPSGAL[^3], @carey2009, [*Herschel*]{}/Hi-GAL[^4], @Molinari2010), to (sub)mm (e.g., ATLASGAL[^5], BGPS[^6], GRS[^7], MALT90[^8], MALT-45[^9], FUGIN[^10], MWISP[^11], @schuller2009 [@rosolowsky2010; @aguirre2011; @csengeri2014; @Jackson2006; @Foster2011; @Jordan2015; @Umemoto2017; @Su2019]), and radio wavelengths (e.g., MAGPIS[^12], CORNISH[^13], CGPS[^14], SGPS[^15], VGPS[^16], HOPS[^17], Sino-German 6 cm survey, @helfand2006 [@hoare2012; @Taylor2003; @McClure2005; @stil2006; @Walsh2011; @Sun2007]). These surveys provide vital data to study and understand the interstellar medium (ISM) in different phases: atomic, molecular, ionized gas, and dust. While many of the surveys have a high angular resolution ($\leq$ 40$\arcsec$), the highest-angular-resolution HI 21 cm line survey of the northern Galactic plane, VGPS [@stil2006], has a resolution of only 60$\arcsec$, which makes it difficult to compare with the aforementioned surveys to study the phase transitions of the ISM. We therefore initiated the HI, OH, recombination line survey of the Milky Way (THOR[^18]; @beuther2016). A large fraction of the Galactic plane in the first quadrant of the Milky Way ($l=14.0-67.4^\circ$ and $\lvert b \rvert \leq 1.25^\circ$) was observed with the [*Karl G. Jansky*]{} Very Large Array (VLA) in C-configuration.
At $L$ Band, the WIDAR correlator at the VLA was set to cover the HI 21 cm line, four OH transitions, a series of H$n\alpha$ radio recombination lines (RRLs; $n=151$ to 186), as well as eight 128 MHz wide continuum spectral windows (SPWs), simultaneously. With the C-configuration, we achieve an angular resolution of $<25\arcsec$ to compare with existing surveys at a matching resolution. The main survey description and data release 1 ($l=14.0\degr-37.9\degr$, and $l=47.1\degr-51.2\degr$) are presented in @beuther2016. Here, we publish the remaining HI, OH, and RRL data, including a whole new set of HI data that combines the existing D-configuration and Green Bank Telescope (GBT) observations to recover the larger-scale emission [@stil2006]. Scientifically, we focus on an overview of the new HI data in this paper. The continuum emission, OH absorption, and masers from the survey were studied and presented in @Bihr2016, @Walsh2016, @rugel2018, @Wang2018, and @Beuther2019. Additionally, @anderson2017 identified 76 new Galactic supernova remnant (SNR) candidates with the continuum data. Using the THOR RRL data, @Rugel2019 studied the feedback in W49A and suggested that star formation in W49A is potentially regulated by feedback-driven and re-collapsing shells. Since the 1950s, the Galactic HI 21 cm line has been extensively observed both in emission and absorption [e.g., @Ewen1951; @Muller1951; @Heeschen1954; @Radhakrishnan1972; @Dickey1983; @Dickey1990; @Gibson2000; @Heiles2003a; @Li2003; @Goldsmith2005]. The atomic gas traced by HI is widely distributed in the Galaxy [e.g., @Dickey1990; @Hartmann1997; @Kalberla2009], and numerous HI surveys have been carried out [e.g., @Kalberla2005; @Stanimirovic2006; @stil2006; @McClure-Griffiths2009; @Peek2011; @Dickey2013; @Winkel2016; @Peek2018] to study the properties of atomic gas and the Galactic spiral structure [e.g., @vandeHulst1954; @Oort1958; @Kulkarni1982; @Nakanishi2003].
By studying the HI emission at low spatial resolution, @Oort1958 constructed the first face-on HI distribution map of the Milky Way, and found multiple spiral arms. Later surveys have revealed additional spiral arms in the outer Galaxy [e.g., @Weaver1970; @Kulkarni1982; @Nakanishi2003; @Levine2006]. While most of these works are concentrated on the outer Galaxy, by combining HI and H$_2$ maps derived from CO observations at low angular resolution, @Nakanishi2016 were able to trace the spiral arms from the inner to the outer Galaxy. Assuming the HI 21 cm line is optically thin, the total HI mass of the Milky Way has been estimated to be 7.2 to 8.0$\times10^9~M_\odot$ [@Kalberla2009; @Nakanishi2016]. Studies towards nearby gas at high latitudes show that the optical depth can be negligible for mass estimation in such a context [e.g., @Lee2015; @Murray2018a], while on the other hand, studies of HI self-absorption towards nearby galaxies (M31, M33, and the LMC) revealed that the mass estimate can increase by 30–34% with the optical depth correction [@Braun2009; @Braun2012]. A study toward the mini-starburst region W43 in our Milky Way revealed an even more extreme correction factor of 240% for the mass estimate when applying the optical depth correction. However, no systematic study has yet been done in the Galactic plane. THOR provides the opportunity to study the distribution and spiral structures of the atomic gas in the northern Galactic plane with the HI emission data, to further investigate the optical depth, and to calibrate the mass estimation of the atomic gas using the HI absorption data. Furthermore, the atomic hydrogen gas, especially the cold neutral medium (CNM, $T\sim40-100$ K, @McKee1977 [@Wolfire1995]), also traces the HI-to-H$_{2}$ transition. Combining the HI absorption lines with the simultaneously observed OH absorption lines from the THOR survey and complementary molecular gas information from, for example, CO observations allows us to study the transition phase between atomic gas and molecular gas [@rugel2018].
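The size of such opacity corrections can be illustrated with the standard relations: under the optically thin assumption $N_{\rm HI} = 1.823\times10^{18}\int T_B\,{\rm d}v$ (in cm$^{-2}$, with $T_B$ in K and $v$ in km s$^{-1}$), while for finite opacity $T_B = T_s(1-e^{-\tau})$, so each channel is boosted by $\tau/(1-e^{-\tau})$. The toy spectrum below is purely schematic and is not the THOR pipeline:

```python
import numpy as np

C_HI = 1.823e18  # cm^-2 / (K km s^-1), standard optically thin constant

def column_density(t_b, tau, dv):
    """HI column density from a brightness-temperature spectrum T_B [K].

    thin:      N = C * sum(T_B) * dv
    corrected: each channel is scaled by tau / (1 - exp(-tau)), which follows
    from T_B = T_s (1 - exp(-tau)) and N proportional to sum(T_s * tau) * dv.
    """
    thin = C_HI * t_b.sum() * dv
    corr_factor = np.where(tau > 0, tau / (1.0 - np.exp(-tau)), 1.0)
    corrected = C_HI * (t_b * corr_factor).sum() * dv
    return thin, corrected

# Toy spectrum: a Gaussian line with moderate peak optical depth.
v = np.linspace(-50, 50, 201)               # km/s
tau = 1.5 * np.exp(-v**2 / (2 * 10.0**2))   # peak tau = 1.5
t_s = 80.0                                  # assumed spin temperature [K]
t_b = t_s * (1.0 - np.exp(-tau))
thin, corrected = column_density(t_b, tau, dv=v[1] - v[0])
```

For this toy line the corrected estimate exactly recovers $C\,T_s\sum\tau\,\Delta v$, while the optically thin value underestimates it, in the same sense as the survey-wide correction discussed above.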
This paper presents the second data release of the THOR survey. We focus scientifically on the H I emission and absorption. The observation strategy and data reduction details are described in Sect. \[sect\_obs\]. The parameters of the data products, along with an overview of the results, are presented in Sect. \[sect\_results\]. The results are discussed in Sect. \[sect\_discuss\], and our conclusions are summarized in Sect. \[sect\_con\].

Observations and Data reduction {#sect_obs}
===============================

We observed the first quadrant of the Galactic plane, covering $l=14.0\degr-67.4\degr$ and $\lvert b \rvert \leq 1.25\degr$, with the VLA in C-configuration in L band from 1 to 2 GHz. The observations were carried out in three phases: a pilot study ($l=29.2\degr-31.5\degr$), phase 1 ($l=14.0\degr-29.2\degr,\ 31.5\degr-37.9\degr$ and $47.1\degr-51.2\degr$), and phase 2 ($l=37.0\degr-47.9\degr$ and $51.1\degr-67.4\degr$), spanning several semesters (from 2012 to 2014). The detailed observing strategy and data reduction are described in @Bihr2016, and data release 1 in @beuther2016, which present only the pilot and phase 1 data. With the WIDAR correlator, we cover the H I 21 cm line, four OH lines ($\Lambda$ doubling transitions of the OH ground state, the ${\rm {}^{2}\Pi_{3/2};J=3/2}$ state, “main lines” at 1665 and 1667 MHz, “satellite lines” at 1612 and 1720 MHz), 19 H$n\alpha$ radio recombination lines, as well as eight continuum bands, that is, SPWs. Each continuum SPW has a bandwidth of 128 MHz. Due to strong radio frequency interference (RFI) contamination, two SPWs around 1.2 and 1.6 GHz were not usable and were discarded. The remaining six SPWs are centered at 1.06, 1.31, 1.44, 1.69, 1.82, and 1.95 GHz. For the fields at $l=23.1\degr-24.3\degr$ and $25.6\degr-26.8\degr$, the SPW around 1.95 GHz is also severely affected by RFI and is therefore flagged [see @Bihr2016].
Each pointing was observed three times to ensure a uniform $uv$-coverage, and the total integration time is $5-6$ min per pointing. A detailed description of the observational setup can be found in @beuther2016. The full survey was calibrated and imaged with the Common Astronomy Software Applications (CASA)[^19] software package [@McMullin2007]. The modified VLA scripted pipeline[^20] (version 1.2.0 for the pilot study and phase 1, version 1.3.1 for phase 2) was used for the calibration. The absolute flux and bandpass were calibrated with the quasar 3C 286. J1822-0938 (for observing blocks with $l<39.1\degr$) and J1925+2106 (for the remaining fields) were used for the phase and gain calibration. Except for the RRLs, all data were inverted and cleaned with multiscale CLEAN in CASA to better recover the large-scale structure [see also @beuther2016]. In most regions, the individual RRLs are too weak to be detected, so cleaning the image is typically not useful. We therefore stacked the dirty images of all RRL spectral windows that are not affected by RFI with equal weights in velocity. All RRL dirty images were produced with the same spectral resolution of 10 km s$^{-1}$ and smoothed to a common angular resolution of $40\arcsec$ before the stacking [see also @beuther2016]. For the continuum and RRL data, we employed the RFlag algorithm in CASA, which was first introduced to AIPS by E. Greisen in 2011, to minimize the effects of RFI in each visibility dataset before imaging. Some SPWs, such as the continuum SPWs at $1.2$ GHz and $1.6$ GHz, and some RRLs, have so much RFI over the whole band that no usable data could be recovered by RFlag, and they were therefore abandoned [see also @beuther2016; @Wang2018].
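The equal-weight stacking of the RRL dirty cubes can be sketched in a few lines of numpy. This is an illustration under stated assumptions, not the THOR imaging pipeline: the cubes are assumed to be already smoothed to the common $40\arcsec$ beam and regridded to the same 10 km s$^{-1}$ velocity axis, with RFI-affected voxels set to NaN.

```python
import numpy as np

def stack_rrl_cubes(dirty_cubes):
    """Equal-weight average of RRL dirty cubes (axes: velocity, y, x).

    Voxels flagged as NaN (e.g., due to RFI) simply drop out of the
    average for that voxel; fully usable cubes contribute everywhere.
    """
    stack = np.stack([np.asarray(c, dtype=float) for c in dirty_cubes])
    return np.nanmean(stack, axis=0)
```

Stacking $N$ lines of comparable noise reduces the rms by roughly $\sqrt{N}$, which is what makes the stacked RRL maps usable where single lines are undetectable.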
New dataset ----------- For the 21 cm line observations, we combined the THOR C-configuration data with the Very Large Array Galactic Plane Survey (VGPS, @stil2006), which consists of VLA D-configuration data combined with single-dish observations from the Green Bank Telescope (GBT), to recover the large-scale structure. The data from the pilot and phase 1 of the survey were published in data release 1 [@beuther2016], in which we combined the THOR C-configuration images directly with the published VGPS data using the task “feather” in CASA. This method does recover the large scale structure, but the quality of the images is not ideal. The images are quite pixelized and contain many sidelobe artifacts (see left panel in Fig. \[fig\_hi\_compare\]). To improve the image quality, we chose a different method to combine the dataset. We first combined the C-configuration data in THOR with the D-configuration of VGPS in the visibility domain. We subtracted the continuum in the visibility datasets with UVCONTSUB in CASA, and used the multiscale CLEAN in CASA[^21] to image the continuum-subtracted C-configuration data together with D-configuration data. The images were afterward combined with the VGPS images (D+GBT) using the task, “feather”, in CASA (see also Wang et al. submitted). Since the D-configuration observations of VGPS cover only $l=17.6\degr-67\degr$, the combined data are restricted to $l=17.6\degr-67\degr$, which is slightly smaller than the sky coverage of the C-configuration-only data ($l=14.0-67.4\degr$). Compared to the images from data release 1, the quality of the new images has significantly improved (Fig. \[fig\_hi\_compare\]). More details about the data products are given in Table \[table\_product\]. ![image]({figures/HI_old_new_G24_map}.pdf){width="80.00000%"} Results {#sect_results} ======= We describe the parameters of the data products, and present an overview of the results in this section. 
Data release 2
--------------

All spectral line data from the pilot and phase 1, as well as all Stokes I continuum data, have been published and are already available to the community [@bihr2015; @Bihr2016; @beuther2016; @Beuther2019; @Walsh2016; @rugel2018; @Wang2018]. In this paper, we publish the data from the second half of the survey, including the new H I dataset for the whole survey, available at our project website[^22] and at the CDS[^23]. We summarize the basic parameters of the data products in Table \[table\_product\]. Because of different requirements for calibrating and imaging the polarization data [see also @beuther2016], the data reduction of these data for the whole survey is still ongoing. The first results of the Faraday rotation study in the Galactic longitude range $l=39\degr$ to $52\degr$ are presented by @Shanahan2019. More polarization data should be available at a later stage. The noise of our data is dominated by the residual sidelobes. Particularly in regions close to strong emission from the continuum and the masers, the noise can increase significantly. The noise properties of the continuum data and OH masers are studied in detail by @Bihr2016, @beuther2016, @Walsh2016, @Wang2018, and @Beuther2019. We list only the typical noise values in Table \[table\_product\].

\[table\_product\]

| Line       | Rest freq. (MHz) | Width (km s$^{-1}$) | $\Delta v$ (km s$^{-1}$) | Beam, native (arcsec) | Beam, smoothed (arcsec) | Noise (mJy beam$^{-1}$) |
|------------|----------|-------|-----|-----------|----|-----|
| H I        | 1420.406 | 277.5 | 1.5 | –         | 40 | 10  |
| H I+cont.  | 1420.406 | 300   | 1.5 | 13.0–19.1 | 25 | 10  |
| OH1        | 1612.231 | 195   | 1.5 | 11.6–18.7 | 20 | 10  |
| OH2        | 1665.402 | 195   | 1.5 | 11.1–18.1 | 20 | 10  |
| OH3        | 1667.359 | 195   | 1.5 | 11.0–13.7 | 20 | 10  |
| OH4        | 1720.530 | 195   | 1.5 | 11.0–17.6 | 20 | 10  |
| RRL        | –        | 210   | 10  | –         | 40 | 3.0 |
| cont1      | 1060     | –     | –   | 14.7–24.4 | 25 | 1.0 |
| cont2      | 1310     | –     | –   | 12.2–19.7 | 25 | 0.3 |
| cont3      | 1440     | –     | –   | 11.6–18.1 | 25 | 0.3 |
| cont4      | 1690     | –     | –   | 9.5–15.4  | 25 | 0.3 |
| cont5      | 1820     | –     | –   | 9.1–14.5  | 25 | 0.3 |
| cont6      | 1950     | –     | –   | 8.2–13.1  | 25 | 0.7 |
| cont3+VGPS | 1420     | –     | –   | –         | 25 | 6.5 |

OH
--

Continuum-subtracted images at both the native and the smoothed resolution ($20\arcsec$) of the four OH lines are provided to the community. The correlator setup was slightly different between the pilot study, phase 1, and phase 2, which mainly affects the sky coverage of the OH lines and RRLs and the native spectral resolution of the spectral lines (see @beuther2016 for details). The OH line at 1667 MHz was only observed in the pilot study and phase 2 ($l=29.2\degr-31.5\degr,\ 37.9\degr-47.9\degr$, and $51.1\degr-67.0\degr$; see also @rugel2018 and @Beuther2019). Diverse physical processes are traced by OH masers, from expanding shells around evolved stars to shocks produced by star-forming jets or SNRs [@Elitzur1992]. The maser survey of @Beuther2019 identified 1585 individual maser spots distributed over 807 maser sites in the THOR survey, among which $\sim$50% of the maser sites are associated with evolved stars, $\sim20$% with star-forming regions, and $\sim3$% potentially with SNRs (see @Beuther2019 for details). Thermal OH lines are often detected in absorption towards strong continuum sources, and they can be used to trace molecular clouds where CO is not detected, the so-called “CO-dark” regions [e.g., @Allen2015; @Xu2016].
By studying the two main transitions (1665 and 1667 MHz), @rugel2018 detected 59 distinct OH absorption features against 42 continuum background sources; most of the absorption occurs in molecular clouds associated with Galactic H II regions. This is the first unbiased interferometric OH survey towards a significant fraction of the inner Milky Way, and it provides a basis for theoretical and future follow-up studies (see @rugel2018 for details).

RRLs {#rrl}
----

As mentioned in the previous section, to increase the signal-to-noise (S/N) ratio, we smoothed and stacked all available recombination line images. The final RRL images have an angular resolution of $40\arcsec$ and a velocity resolution of 10 km s$^{-1}$. The typical linewidths of RRLs measured toward H II regions are around $\sim20-25$ km s$^{-1}$ [@Anderson2011], so a 10 km s$^{-1}$ resolution is reasonable to study the kinematics of H II regions. Combining the RRL data from THOR with complementary CO data, @Rugel2019 found shell-like structures in RRL emission toward W49A. The ionized emission and the molecular gas emission are correlated towards these shell-like structures. By comparing to one-dimensional feedback models [@Rahner2017; @Rahner2019], @Rugel2019 suggest that W49A is potentially regulated by feedback-driven and re-collapsing shells (see @Rugel2019 for details). Mostly due to sensitivity limitations, interferometric mapping surveys of RRL emission have been rare [e.g., @Urquhart2004]. The stacking method allows us to achieve a higher sensitivity than is usually possible when only single lines are observed. THOR provides the community with a new set of RRL maps towards a large sample of H II regions, which can be used for statistical studies.

Continuum
---------

As presented in @Wang2018, the THOR survey provides the continuum data (both at the native resolution and smoothed to a resolution of $25\arcsec$), as well as a continuum source catalog, to the community.
To recover the extended structure, we combined the C-configuration 1.4 GHz continuum data from THOR with the 1.4 GHz continuum data from the VGPS survey (D+Effelsberg) using the task, “feather” [@Wang2018]. The resulting images for the pilot region and phase 1 are very similar to the ones obtained using the VGPS continuum as an input model in the deconvolution of the THOR data. This combined dataset retains the high angular resolution of the THOR observations ($25\arcsec$), and at the same time, it recovers the large-scale structure. @anderson2017 identified 76 new Galactic SNR candidates in the survey area with this dataset. @anderson2017 further showed that despite the different bandwidths of the VGPS continuum ($\sim1$ MHz) and the THOR continuum ($\sim128$ MHz), the flux retrieved from the combined data is consistent with the literature. The continuum source catalog contains 10 387 objects that we extracted across our survey area. With the extracted peak intensities of the six usable SPWs between 1 and 2 GHz, we were able to determine a reliable spectral index (fitted with at least four SPWs) for 5657 objects. By cross-matching with different catalogs, we found radio counterparts for 840 H II regions, 52 SNRs, 164 planetary nebulae, and 38 pulsars. A large percentage of the remaining sources in the catalog are likely to be extragalactic background sources, based on their spatial and spectral index distributions. A detailed presentation of the continuum catalog can be found in @Bihr2016 and @Wang2018. For the H I 21 cm line, in addition to the data cubes from the C+D+GBT data, we also provide the C-configuration-only data cubes with continuum at both the native resolution and the smoothed beam ($25\arcsec$), which can be used to measure the optical depth towards bright background continuum sources [@Bihr2016]. A case study of H I self-absorption towards a giant molecular filament is presented by Wang et al. (submitted).
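The spectral-index determination from the SPW peak intensities amounts to a power-law fit, $S_\nu \propto \nu^{\alpha}$, done as a straight line in log-log space. Below is a minimal sketch (function and variable names are illustrative, not the catalog code), keeping the catalog criterion of at least four usable SPWs:

```python
import numpy as np

def spectral_index(freqs_ghz, peaks):
    """Fit S_nu ~ nu^alpha to SPW peak intensities in log-log space.

    Returns NaN when fewer than four SPWs are usable, mirroring the
    'reliable spectral index' criterion described in the text.
    """
    freqs = np.asarray(freqs_ghz, dtype=float)
    peaks = np.asarray(peaks, dtype=float)
    good = np.isfinite(peaks) & (peaks > 0)
    if good.sum() < 4:
        return np.nan
    alpha, _ = np.polyfit(np.log10(freqs[good]), np.log10(peaks[good]), 1)
    return alpha
```

A flat or rising index points to thermal (free-free) emission, while $\alpha \lesssim -0.5$ is typical of synchrotron sources such as SNRs and extragalactic background objects.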
In the following sections, we focus scientifically on the H I emission and absorption.

### H I emission {#sect_hi}

Figure \[fig\_himom0\] shows the C+D+single-dish 1.4 GHz continuum map and the integrated H I intensity map ($v_{\rm LSR}=-113$ to 163 km s$^{-1}$). While the combined H I emission data illustrate that the atomic gas is confined near the Galactic mid-plane at lower longitudes ($l<42\degr$), at larger longitudes ($l>54\degr$) the gas is more evenly distributed in latitude. The H I map in Fig. \[fig\_himom0\] also shows absorption against some strong continuum sources, such as the star-forming complex W43 at $l\sim30.75\degr$, and W49 at $l\sim43\degr$. Figure \[fig\_hi\_channel\] depicts the channel maps obtained by integrating the H I emission over 15 km s$^{-1}$ velocity bins. The neutral gas is clearly located at higher latitudes in the channels at $v_{\rm LSR}<-38$ km s$^{-1}$. This is likely due to the H I disk being strongly warped in the outer Milky Way [@Burton1986; @Diplas1991; @Nakanishi2003; @Kalberla2009]. Some cloud structures are more prominent in the channel maps, such as the filamentary structures in the channels at $v_{\rm LSR}<-38$ km s$^{-1}$. While at negative velocities the filamentary structures can be seen clearly, the emission at positive velocities appears less structured. This is likely due to the higher spatial and velocity crowding of structures at positive velocities. The negative velocity channels trace the outer Galactic plane, where there is little line-of-sight confusion, while at positive velocities, due to the near-far distance ambiguity, the emission from multiple spiral arms can be at the same velocity, resulting in structures at different distances being blended together. The integrated H I intensity map (Fig. \[fig\_himom0\]) also demonstrates that the angular size of the H I emission in latitude is smaller at lower longitudes than at larger longitudes.
This is due to the fact that the tangent points are further away at lower longitudes. This is also seen at the positive velocities in the channel map (Fig. \[fig\_hi\_channel\]), in which the tangent points are traced by the left end of the emission in each panel with positive velocities, and the emission structures appear smaller at lower longitudes because they are further away [see also, @Merrifield1992]. ![image]({figures/Cont_1400_mosaic_all_ft_vgps-log-1small}.pdf){height="23.8cm"} ![image]({figures/THOR_HI_mom-0_-113.0_to_163.0km-s-1small}.pdf){height="24cm"} ![image]({figures/Cont_1400_mosaic_all_ft_vgps-log-2small}.pdf){height="23.8cm"} ![image]({figures/THOR_HI_mom-0_-113.0_to_163.0km-s-2small}.pdf){height="24cm"} ![image]({figures/HI_mosaic_channnel_ft_vgps_all_small}.pdf){height="24cm"} By averaging the H I emission along the latitude axis, we constructed the longitude-velocity ($l-v$) diagram of the H I emission (Fig. \[fig\_hi\_pv\]). The general shape of the $l-v$ diagram agrees with the one obtained with the VGPS data [@Strasser2007], but the higher resolution reveals much finer details, such as the vertical lanes at $l\sim31\degr,\ 43\degr$, and $\sim49\degr$, which are caused by absorption against the background continuum emission from the star-forming regions W43, W49, and W51, respectively (see also Fig. \[fig\_himom0\]). We also created the $^{13}$CO(1–0) $l-v$ diagram with the same method and plotted it as contours in Fig. \[fig\_hi\_pv\]. The $^{13}$CO(1–0) data are taken from the Exeter FCRAO CO Galactic Plane Survey [@Mottram2010] and the Galactic Ring Survey [GRS, @Jackson2006]. We re-gridded the Exeter $^{13}$CO data to the same velocity resolution and coverage as the GRS, so there is no $^{13}$CO coverage at velocities $v_{\rm LSR}<-5$ km s$^{-1}$. Compared to the $^{13}$CO emission, the H I emission traces more extended and diffuse structures.
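The latitude-averaging step behind these $l-v$ diagrams is simple enough to sketch in numpy. The helpers below are an illustration (array names are hypothetical, not THOR pipeline code); the second one applies a noise floor to the denominator before dividing two such diagrams, which makes the ratio a lower limit wherever the denominator is undetected:

```python
import numpy as np

def lv_diagram(cube):
    """Collapse a (velocity, latitude, longitude) brightness cube into a
    longitude-velocity diagram by averaging along the latitude axis."""
    return np.nanmean(cube, axis=1)

def ratio_lv(numer_lv, denom_lv, denom_sigma):
    """Ratio of two l-v diagrams; wherever the denominator is below its
    3*sigma noise level, the 3*sigma value is substituted, so the ratio
    there is a lower limit."""
    return numer_lv / np.maximum(denom_lv, 3.0 * denom_sigma)
```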
The H I emission in general agrees well with the spiral arm models of @Reid2016, except for the Outer Scutum-Centaurus (OSC) Arm. The OSC Arm is at a large Galactocentric distance and is outside of our survey area at longitudes larger than $40\degr$ due to the warping and flaring of the outer disk [@Dame2011; @Armentrout2017]. Thus, we can only detect a small portion of the OSC Arm in the inner part of the Galactic plane that our survey covers, as illustrated in Fig. \[fig\_hi\_pv\]. Another feature in the $l-v$ diagram is a strong H I self-absorption pattern at $\sim$5 km s$^{-1}$ following the Aquila Rift (magenta box in Fig. \[fig\_hi\_pv\]). This absorption feature spans almost $20\degr$ in longitude ($l\sim17\degr$ to $36\degr$). The $^{13}$CO emission, on the other hand, lies right inside the absorption feature in the $l-v$ diagram. This large-scale absorption feature could be caused by the Riegel-Crutcher cloud, which is centered at $v_{\rm LSR}\sim5$ km s$^{-1}$ and covers the longitude range $l=345\degr$ to $25\degr$ and the latitude range $|b|\leq6\degr$ [@Riegel1969; @Riegel1972; @Crutcher1974].

![image](figures/HI_pv_with_spiral_arms_with_co.pdf){width="\textwidth"}

After resampling both the H I and $^{13}$CO data to the same angular and spectral resolution (pixel size of $22\arcsec$, beam size of $46\arcsec$, and velocity resolution of 1.5 km s$^{-1}$), we constructed the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio $l-v$ diagram by dividing the H I $l-v$ diagram by the $^{13}$CO $l-v$ diagram (Fig. \[fig\_ratio\_pv\]). For the $^{13}$CO $l-v$ diagram, a 3$\sigma$ value was used where the emission is below 3$\sigma$ (1$\sigma=0.04$ K in the $^{13}$CO $l-v$ diagram). As expected, Fig. \[fig\_ratio\_pv\] shows that the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio is low where there is $^{13}$CO emission (see also, Sect \[sect\_hi\_ratio\]). After removing the pixels with no H I emission ($T_{\rm B}$(H I)$<5\sigma$, 1$\sigma$=0.2 K), the histogram of the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio $l-v$ diagram in Fig. \[fig\_ratio\_hist\] reveals one strong peak at $\sim$100 and a secondary peak at $\sim$600. If we assume that both the H I and $^{13}$CO emission are optically thin and have a uniform excitation temperature, the variations in the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio also represent variations in the atomic-to-molecular gas column density ratio. Figure \[fig\_ratio\_hist\] indicates that the atomic-to-molecular gas column density ratio can increase by a factor of six from the spiral arms to the inter-arm regions. Considering that we used the 3$\sigma$ value of the $^{13}$CO $l-v$ diagram for regions where there is no $^{13}$CO emission to construct the ratio $l-v$ diagram, this factor of six is a lower limit (see also Sect \[sect\_hi\_ratio\]).

![image](figures/HI_over_CO_pv_with_spiral_arms.pdf){width="\textwidth"}

![Histogram of the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio $l-v$ diagram shown in Fig. \[fig\_ratio\_pv\].[]{data-label="fig_ratio_hist"}](figures/HI_CO_pv_ratio.pdf){width="50.00000%"}

### H I optical depth {#sec_tau}

The THOR C-configuration-only data [@beuther2016], which have not been continuum-subtracted, are used to measure the H I optical depth towards bright background continuum sources. We extracted the H I spectra towards all continuum sources with an S/N ratio larger than seven from @Wang2018. Since the synthesized beam of the C-configuration-only data is $25\arcsec$, we extracted the average spectrum from a $28\arcsec\times28\arcsec$ ($7\times7$ pixels) area centered on the position of the continuum source to increase the S/N ratio. We used the software STATCONT [@Sanchez-Monge2018] to estimate the continuum level $T_{\rm cont}$ and the noise of the spectra. Some example spectra are shown in Fig. \[fig\_spectra\].
With the absorption spectra, we can estimate the optical depth towards the continuum source following the method described in @bihr2015: $$\tau = -{\rm ln}\left(\frac{T_{\rm{on,\ cont}} - T_{\rm off,\ cont}}{T_{\rm{cont}}}\right), \label{eq_tau}$$ where $T_{\rm on,\ cont}$ is the brightness temperature of the absorption feature measured towards the continuum source, and $T_{\rm off,\ cont}$ is the off-continuum-source brightness temperature. Since we use the THOR C-configuration data to calculate $\tau$, the smooth, large-scale structure is mostly filtered out [@beuther2016]. Therefore, we can neglect the off emission $T_{\rm off,\ cont}$, and simplify Eq. \[eq\_tau\] to: $$\tau_{\rm{simplified}} = -{\rm ln}\left(\frac{T_{\rm{on,\ cont}}}{T_{\rm{cont}}}\right) . \label{eq_tau_simplified}$$ For channels with a $T_{\rm{on,\ cont}}$ value less than three times the rms, we use the 3$\sigma$ value to get a lower limit on $\tau$. The bottom panels in Fig. \[fig\_spectra\] show the $\tau$ spectra for the corresponding sources; the calculated $\tau$ is always saturated in some channels. Compared to the VGPS VLA D-configuration absorption spectra and the $\tau$ spectra derived from them (resolution $60\arcsec$, @Strasser2007), the THOR C-configuration absorption spectra are much more sensitive and therefore probe higher optical depths (Fig. \[fig\_spectra\]).

![image]({figures/tau_spectra_G17.910+0.372_comp}.pdf){width="30.00000%"} ![image]({figures/tau_spectra_G48.241-0.968_comp}.pdf){width="30.00000%"} ![image]({figures/tau_spectra_G62.078+0.609_comp}.pdf){width="30.00000%"}

In this study, since only $T_{\rm{on,\ cont}}<(T_{\rm cont} - 3~\sigma$) is considered to be real absorption, only sources with $T_{\rm cont}>6~\sigma$ can have channels with real absorption that do not reach the lower limit of $\tau$. In total, 228 sources with $T_{\rm cont}>6~\sigma$ were used to make the $\tau$ map, among which $\sim$60% are Galactic sources (Table \[table\_tau\]).
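A minimal implementation of Eq. \[eq\_tau\_simplified\] with the 3$\sigma$ lower-limit handling described above (a sketch, not the survey code):

```python
import numpy as np

def hi_optical_depth(t_on, t_cont, rms):
    """Simplified optical depth, tau = -ln(T_on / T_cont).

    Channels absorbed below the 3*rms level are saturated: T_on is
    replaced by 3*rms there, so the returned tau is a lower limit in
    those channels (flagged in the returned boolean array).
    """
    t_on = np.asarray(t_on, dtype=float)
    saturated = t_on < 3.0 * rms
    tau = -np.log(np.where(saturated, 3.0 * rms, t_on) / t_cont)
    return tau, saturated
```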
We list $T_{\rm cont}$, $\sigma$ of $T_{\rm cont}$, the lower limit of $\tau$, and the integrated $\tau$, including the physical nature of the 228 sources, in Table \[table\_tau\]. We then grid the 228 $\tau$ measurements channel by channel for the whole survey using the nearest-neighbor method[^24] to create the $\tau$ data-cube. Since the Galactic sources do not trace any absorption from the gas located behind them, we replaced the optical depth values for these channels with the ones from the nearest extragalactic sources. The resulting integrated $\tau$ map is shown in Fig. \[fig\_tau\_map\]. The highest integrated $\tau$ we measured is at $l\sim43\degr$, at the location of the high-mass star-forming complex W49A. A very high integrated $\tau$ is also measured towards the star-forming complex W43 ($l\sim31\degr$). Figure \[fig\_tau\_map\] shows that a higher $\tau$ is measured in the inner longitude range, which is reasonable considering that more material is packed along the line of sight due to velocity crowding in regions below $l\sim43\degr$. Since the $\tau$ spectra used are all saturated in some channels, this map represents a lower limit of the optical depth in the survey region.

![image]({figures/HI_integrated_tau}.pdf){width="\textwidth"}

### H I column density and distribution {#sect_hi_dist}

We estimated the column density of atomic hydrogen from the H I emission line data (C+D+GBT) using [e.g., @wilson2013]: $$N_{\rm H} = 1.8224\times10^{18}\: \int T_{\rm S}(v) \tau(v)\, dv. \label{eq_column_density_hi}$$ The optical depth corrected spin temperature is $T_{\rm S}(v)=T_{\rm B}(v)/(1-{\rm e}^{-\tau(v)})$, where $T_{\rm B}$ is the brightness temperature of the H I emission. We used the $\tau$ data-cube (Sect. \[sec\_tau\]) to correct the spin temperature channel by channel and estimate the column density. As shown in Fig. \[fig\_himom0\] and Fig. \[fig\_hi\_channel\], absorption against strong continuum sources is clearly seen as negative features in the continuum-subtracted H I emission map, such as toward W43 at $l\sim31\degr$. To derive the column density toward strong continuum sources, we determined a mean brightness temperature from a region of radius 10 around the continuum source to derive $T_{\rm S}$. We then extracted C-configuration-only H I+continuum spectra from these regions pixel by pixel to derive the optical depth (see Sect. \[sec\_tau\]). With the spin temperature and optical depth derived, we can estimate the column density towards the continuum sources and add it to the optical depth corrected column density map. Figure \[fig\_nh\_histo\] shows the histograms of the atomic hydrogen column density integrated between $-113$ and 163 km s$^{-1}$ over the survey area. The median value of the column density is 1.8$\times10^{22}$ cm$^{-2}$, which is 38% higher than the value obtained assuming optically thin emission.

![Histograms of the atomic hydrogen column density integrated between $-113$ and 163 km s$^{-1}$. The black (thin) histogram represents the column density derived assuming the emission is optically thin, and the red (thick) histogram represents the column density corrected for optical depth. The dashed vertical lines mark the median values of the two histograms, respectively. []{data-label="fig_nh_histo"}]({figures/HI_N_histo}.pdf){width="50.00000%"}

To obtain the distribution of atomic gas in the Galactic plane, we estimated the kinematic distance of each channel and each pixel of the H I emission map with the Kinematic Distance Utilities[^25] [@Wenger2018]. The “universal” rotation curve of @Persic1996, which includes terms for an exponential disk and a halo, as well as the Galactic parameters from @Reid2014, are used (see Table 5 in @Reid2014 for the detailed parameters). To solve the kinematic distance ambiguity in the inner Galaxy (inside the solar orbit), we took the following approach.
We assume that the average vertical density profile of the atomic gas $n(z)$ in the inner Galaxy can be described by the sum of two Gaussians and an exponential function [@Lockman1984; @Dickey1990]: $$\begin{split} n(z) = \ & \sum_{i=1}^2 n_i(0)\ {\rm exp}\ \left[-z^2/\left(2h_i^2\right)\right]\\ &+ n_3(0)\ {\rm exp}\ \left(-|z|/h_3\right), \end{split} \label{eq_hi_vertical}$$ with $z$ describing the distance from the Galactic mid-plane. The coefficients from @Dickey1990 are listed in Table \[table\_hi\_vertical\]. Since the volume density distribution of the atomic gas in the mid-plane is approximately axisymmetric with respect to the Galactic center [@Kalberla2008], we can assume the volume density is the same at the same Galactocentric distance in the mid-plane. Furthermore, the $v_{\rm LSR}$-distance profiles are symmetric with respect to the tangent point along the distance-ambiguous part of the line of sight [see also, @Anderson2009], so each velocity bin traces the same line-of-sight path length on the near side as on the far side. Thus, we assume the column density is also the same at the same Galactocentric distance at the near and far side in the Galactic mid-plane. Due to the kinematic distance ambiguity in the inner Galaxy, the column density we derived for each pixel at each velocity channel is a combined result from both the far and near side. Thus, we can use Eq. \[eq\_hi\_vertical\] to estimate the percentage of the column density contribution from the near and far side for each line of sight to solve the kinematic distance ambiguity.

\[table\_hi\_vertical\]

| $i$ | $n_i(0)$ (cm$^{-3}$) | $h_i$ (pc) |
|-----|----------------------|------------|
| 1   | 0.395                | 90         |
| 2   | 0.107                | 225        |
| 3   | 0.064                | 403        |

: Coefficients of Eq. \[eq\_hi\_vertical\] taken from @Dickey1990.

With the kinematic distances determined for each channel, we applied a 5$\sigma$ cut and converted the column density cube into a face-on mean surface density map, shown in Fig. \[fig\_faceon\].
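Equation \[eq\_hi\_vertical\] with the @Dickey1990 coefficients, and the way it can be used to apportion a column between the near and the far kinematic distance, can be sketched as follows. The `near_fraction` helper is our illustration of the splitting argument, not the exact survey implementation:

```python
import numpy as np

# Coefficients of the two Gaussians plus exponential from Dickey &
# Lockman (1990): mid-plane densities in cm^-3, scale heights in pc.
N0 = (0.395, 0.107, 0.064)
H = (90.0, 225.0, 403.0)

def n_z(z_pc):
    """Mean vertical HI density profile n(z); z in pc from the mid-plane."""
    z = np.asarray(z_pc, dtype=float)
    return (N0[0] * np.exp(-z**2 / (2.0 * H[0]**2))
            + N0[1] * np.exp(-z**2 / (2.0 * H[1]**2))
            + N0[2] * np.exp(-np.abs(z) / H[2]))

def near_fraction(z_near_pc, z_far_pc):
    """Fraction of the observed column assigned to the near distance,
    assuming equal mid-plane density at the same Galactocentric radius.
    For a pixel at latitude b, z = d * sin(b) at each candidate distance."""
    n_near, n_far = n_z(z_near_pc), n_z(z_far_pc)
    return n_near / (n_near + n_far)
```

Because the far distance reaches larger $|z|$ for the same latitude, lines of sight away from the mid-plane assign most of the column to the near side, as expected.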
Compared to the spiral arm model from @Reid2016, some of the atomic gas follows the spiral arms well, such as the Sagittarius and Perseus arms, but there is also much atomic gas in the inter-arm regions. Along the Sagittarius and Perseus arms, and in the very outer region beyond the Outer Arm in Fig. \[fig\_faceon\], the H II region distribution agrees with the atomic gas. However, the Outer Arm itself is not associated with much atomic gas in the face-on plot, although there is good agreement between the H I emission and the Outer Arm in the $l-v$ diagram. This outer component of the atomic gas was also observed by @Oort1958, @Nakanishi2003, and @Levine2006. @Nakanishi2003 found that this emission structure agrees spatially with the Outer Arm discovered by @Weaver1970. The Perseus Arm and the Outer Arm in Fig. \[fig\_faceon\] are found to be distinct structures in the atomic gas distribution, with a void of H I emission and H II regions between the arms. The absolute value of the derived surface density represents a mean surface density along the $z$ direction and is therefore lower than the results of @Nakanishi2003 and @Nakanishi2016, who derived a surface density integrated along $z$.

![Face-on view of the H I surface mass distribution in the survey area overlaid with spiral arms from @Reid2016 and H II regions from @Anderson2014. The Galactic center is at $\left[0,~0\right]$, and the Sun is at the top-left corner at a Galactocentric distance of 8.34 kpc [@Reid2014]. The gray lanes mark the Outer, Perseus, Sagittarius, and Scutum arms with widths from @Reid2014. The Local Spur, Aquila Spur, and Norma Arm are marked with white lines. The long bar [@Hammersley2000; @Benjamin2005; @Nishiyama2005; @Benjamin2008; @Cabrera2008] is marked with the shaded half-ellipse. The crosses mark the H II regions from @Anderson2014, excluding the ones that fall into the tangent region (marked with two gray curves).[]{data-label="fig_faceon"}]({figures/H_face_on_surface_density_200pc_reid14_with_bar}.pdf){width="50.00000%"}

We applied different Galactic models and rotation curves for the kinematic distance determination to create the face-on surface density maps shown in Fig. \[app\_faceon\]. We find that the face-on density map depends only slightly on the assumed model for the kinematic distance. Compared to Fig. \[fig\_faceon\] (Galactic parameters from @Reid2014, rotation curve from @Persic1996), the assumption of a uniform rotation curve (Fig. \[app\_faceon\], right) only lowers the gas surface density close to the bar region. Similar changes occur when applying the IAU Solar parameters ($R_0=8.5$ kpc, $\Theta_0=220$ km s$^{-1}$) and the @Brand1993 Galactic rotation model (Fig. \[app\_faceon\], left), together with a slight shift of the gas to larger distances.

### Mean spin temperature of the atomic gas

With the H I emission and the optical depth, we can obtain the line-of-sight mean spin temperature, or the density-weighted harmonic mean spin temperature, using [@Dickey2000]: $$\left<T_{\rm S}\right> = \frac{\int T_{\rm B}(v)\ dv} {\int \left(1-e^{-\tau(v)}\right)\ dv}. \label{eq_ts}$$ As we show in Fig. \[fig\_ts\], the mean spin temperatures in the THOR survey area are between 50 and 300 K, with a median value of 143 K. We do not see any correlation between $\left<T_{\rm S}\right>$ and longitude. Combining survey data from the VGPS [@stil2006], the CGPS [@Taylor2003], and the SGPS [@McClure2005], @Dickey2009 studied the atomic gas in the outer disk of the Milky Way (outside the solar circle). They found the mean spin temperature to be $\sim$250–400 K, and that it stays nearly constant with Galactocentric radius out to $\sim$25 kpc.
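For channels of uniform width, the $dv$ factors in Eq. \[eq\_ts\] cancel, so the mean spin temperature reduces to a ratio of channel sums. A minimal sketch:

```python
import numpy as np

def mean_spin_temperature(t_b, tau):
    """Density-weighted harmonic mean spin temperature:
    <T_S> = sum(T_B) / sum(1 - exp(-tau)), valid for channels of
    uniform width (the dv factors cancel)."""
    t_b = np.asarray(t_b, dtype=float)
    tau = np.asarray(tau, dtype=float)
    return t_b.sum() / (1.0 - np.exp(-tau)).sum()
```

As a sanity check, feeding in emission generated from a single spin temperature returns that temperature exactly, whatever the optical depth profile.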
Since the mean spin temperature $\left<T_{\rm S}\right>$ reveals the fraction of CNM in the total atomic gas (see Equation 8 in @Dickey2009), the lower $\left<T_{\rm S}\right>$ we obtained indicates a higher fraction of CNM in our survey area. Since @Dickey2009 considered only the outer Milky Way, the difference in $\left<T_{\rm S}\right>$ may indicate a higher fraction of CNM in the inner Milky Way.

![Histogram of the density-weighted harmonic mean spin temperature between $-113$ and 163 km s$^{-1}$. The dashed vertical line marks the median value of 143 K.[]{data-label="fig_ts"}]({figures/HI_Ts_histogram}.pdf){width="50.00000%"}

Discussion {#sect_discuss}
==========

We discuss the results of the overview presented in Sect. \[sect\_results\] in this section.

H I gas mass {#sect_hi_mass}
------------

With the method described in Sect. \[sect\_hi\_dist\], we estimate the optical depth corrected total H I mass in our survey area to be 4.7$\times10^8~M_\odot$ (using H I emission above 5$\sigma$). Combining $^{12}$CO(1–0) and $^{13}$CO(1–0) from the GRS and the Exeter FCRAO CO Survey, @Roman-Duval2016 estimated an H$_2$ mass in the inner Galaxy (inside the solar circle) of 5.5$\times10^8~M_\odot$ (GRS: $18\degr\leq l \leq55.7\degr$, $|b|\leq1\degr$; Exeter: $55\degr\leq l \leq100\degr$, $-1.4\degr\leq b\leq1.9\degr$). Within the solar circle, we estimate the total H I mass to be 2.1$\times10^8~M_\odot$ ($18\degr\leq l \leq67\degr$, $|b|\leq1.25\degr$). Although the GRS+Exeter survey covers a larger area than THOR, the area they covered inside the solar circle is only $\sim4\%$ larger than that of THOR, so we can still compare these two masses. Therefore, in the inner Galaxy in this longitude range, the molecular component represents about 72% of the total gas. This fraction is broadly consistent with the molecular fraction of 50–60% of the total gas inside the solar circle estimated by @Koda2016, @Nakanishi2016, and @Miville2017.
Considering that the total H I mass we derived is a lower limit, this percentage is likely an upper limit. If we assume the H I emission to be optically thin, the total H I mass within the whole survey area is estimated to be $3.6\times 10^{8}~M_\odot$. Comparing this with our optical-depth-corrected estimate of 4.7$\times 10^{8}~M_\odot$, we find that the mass increases by $\sim$31% with the optical depth correction. Considering that all of the $\tau$ spectra saturate in some channels, this $31\%$ is again a lower limit. We define the ratio between the mass after and before the optical depth correction as $R_{\rm \ion{H}{I}}$, so for the whole survey area $R_{\rm \ion{H}{I}}=1.31$. Assuming optically thin emission, the total mass of the H I gas of the Milky Way within a Galactic radius of 30 kpc was estimated to be 7.2–8$\times10^{9}~M_\odot$ [@Kalberla2009; @Nakanishi2016]. If we apply $R_{\rm \ion{H}{I}}=1.31$ to the whole Milky Way, the total H I mass would be 9.4–10.5$\times10^{9}~M_\odot$. We discuss the uncertainties of the total mass in Sect. \[sect\_mass\_err\]. As part of the pilot study of the THOR project, @bihr2015 estimated the mass of the atomic hydrogen gas in the W43 region after applying the correction for optical depth and absorption against the diffuse continuum emission and derived $R_{\rm \ion{H}{I}}=2.4$, which is higher than what we have derived for the entire survey. Considering that W43 has the second largest integrated $\tau$ in the survey, it is reasonable that a larger-than-average $R_{\rm \ion{H}{I}}$ was measured here. The optical depth map (Fig. \[fig\_tau\_map\]) also shows higher $\tau$ in the inner longitude region ($l\lesssim 44\degr$) than in the outer longitude region. In contrast to this, @Lee2015 found $R_{\rm \ion{H}{I}}\sim1.1$ by applying the optical depth correction pixel-by-pixel towards the atomic gas around the Perseus molecular cloud.
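The factor $R_{\rm \ion{H}{I}}=1.31$ and its extrapolation to the Galaxy-wide mass can be reproduced from the numbers above (Python; all masses as quoted in the text):

```python
m_thin = 3.6e8   # HI mass assuming optically thin emission (M_sun)
m_corr = 4.7e8   # HI mass after the optical depth correction (M_sun)
r_hi = m_corr / m_thin   # ~1.31

# Scale published optically thin Milky Way masses by the same factor
m_mw_thin = (7.2e9, 8.0e9)                       # Kalberla+2009, Nakanishi+2016
m_mw_corr = tuple(r_hi * m for m in m_mw_thin)   # ~(9.4e9, 10.4e9)
```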
Combining the 21 cm emission maps from the Galactic Arecibo L-band Feed Array Survey [GALFA-HI @Peek2011; @Peek2018] and absorption spectra from 21-[*SPONGE*]{} [@Murray2015; @Murray2018b], @Murray2018a found $1.0<R_{\rm \ion{H}{I}}<1.3$ towards the high-latitude local ISM ($|b|>15\degr$). Considering that we are observing with much higher angular resolution than GALFA, and that in the Galactic plane multiple spiral arms along the line of sight are at the same velocity, it is reasonable that the $R_{\rm \ion{H}{I}}$ we derived is higher. On the other hand, studies toward nearby galaxies (M31, M33, and the LMC) found $R_{\rm \ion{H}{I}}\sim1.3-1.34$ [@Braun2009; @Braun2012], which is in agreement with what we found. H I gas distribution ----------------- Ever since @vandeHulst1954 and @Oort1958 discovered the spiral structure of the atomic hydrogen gas in the Milky Way, considerable effort has been devoted to investigating the gas distribution and spiral arms in the Galaxy [e.g., @Kulkarni1982; @Nakanishi2003; @Levine2006; @Kalberla2009; @Nakanishi2016]. The structure outside of the Outer Arm in Fig. \[fig\_faceon\] was also observed by @Oort1958, @Nakanishi2003, @Levine2006, and @Nakanishi2016. @Nakanishi2003 fit this structure with the so-called Outer Arm with a pitch angle of $\sim7\degr$ in polar coordinates (the x-axis is the azimuthal angle $\theta$ and the y-axis the Galactic radius $R$ in log scale). They found that this agrees with the Outer Arm found by @Weaver1970. Also, one of the arms fit by @Levine2006 goes through the northern part of this structure (Y $>0$, X $>10$ in Fig. \[fig\_faceon\]). @Levine2006 claimed that the spiral arm model derived from the H II regions [@Morgan1953; @Georgelin1976; @Wainscoat1992] could not fit this structure, although Fig. \[fig\_faceon\] shows that many H II regions are associated with this structure. The Outer Arm plotted in Fig.
\[fig\_faceon\] is extrapolated from the pitch angle fitted to parallax sources at larger longitudes [$l>70\degr$, @Reid2016]. Since the pitch angle can vary with azimuthal angle [@Honig2015], the real Outer Arm in this region ($17\degr<l<67\degr$) could have a different pitch angle and be at a larger Galactocentric radius. On the other hand, the noncircular motions in the outer disk [e.g., @Kuijken1994] could also affect the distribution we derived in this region. Another feature in the inner part of Fig. \[fig\_faceon\] is that the region right below the end of the bar shows low surface density. Depending on the Galactic rotation model we used to determine the kinematic distances, there could even be a cavity in this region (all emission is below 5$\sigma$ and masked out, Fig. \[app\_faceon\]). The region is located where the Galactic long bar ends [@Hammersley2000; @Benjamin2005; @Nishiyama2005; @Benjamin2008; @Cabrera2008]. The long bar introduces strong non-circular motions in the sources, so the axisymmetric rotation curve does not apply on and near the bar [@Fux1999; @Rodriguez2008; @Reid2014]. Thus, the kinematic distances derived for the gas and most H II regions in this area may have large errors, and the low-level emission may not be real. The same low-level distribution is also seen in the CO maps by @Roman-Duval2009 [@Roman-Duval2016] and in the distribution of H II regions. Considering that all these distributions are based on kinematic distances (the distances of fewer than 10% of the H II regions in this area are derived from parallax), we should consider the distribution of gas and H II regions inside the Scutum Arm with caution. Atomic to molecular gas ratio {#sect_hi_ratio} ----------------------------- The optical depth correction discussed in the previous sections involves a lot of interpolation and averaging, which bring large uncertainties to each particular region. We therefore avoid it when comparing the H I and $^{13}$CO maps.
In the following discussion, we compare the H I and $^{13}$CO emission directly. As we demonstrate in Fig. \[fig\_faceon\], high-column-density H I gas is concentrated along and above the Sagittarius Arm in the inner Galaxy. This is also seen by @Nakanishi2016. On the other hand, molecular clouds traced by CO are mainly distributed around and inside the Scutum Arm [@Roman-Duval2009; @Nakanishi2016]. The molecular cloud fraction ($f_{\rm mol}$) map in @Nakanishi2016 shows that $f_{\rm mol}$ is anti-correlated with Galactocentric distance. Compared to Fig. \[fig\_faceon\], inside the Sagittarius Arm the molecular cloud fraction may be as high as $f_{\rm mol}>0.6$, while in the outer regions $f_{\rm mol}$ quickly drops to almost zero [see also @Miville2017]. However, they did not discuss how this ratio varies between inter-arm regions and spiral arm regions. The $l-v$ diagrams in Fig. \[fig\_hi\_pv\] and Fig. \[fig\_ratio\_pv\] show that the majority of the molecular gas is tightly associated with the spiral arms, while the atomic gas is more widely distributed, with significant material in the inter-arm regions. The histogram of the $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio shows a bimodal distribution (Fig. \[fig\_ratio\_hist\]). If we assume that both the $^{13}$CO and H I emission are optically thin, this bimodal distribution could be a proxy for the atomic-to-molecular gas ratio. We can therefore interpret from Fig. \[fig\_ratio\_hist\] that, on average, the atomic-to-molecular gas ratio may increase by approximately a factor of six from spiral arms to inter-arm regions. Uncertainties of the optical depth $\tau$ ----------------------------------------- The uncertainties of the optical depth are determined mainly by two factors: the brightness of the continuum source and the rms noise of the absorption spectra. The ratio of these two is the S/N ratio. As we mentioned in Sect.
\[sec\_tau\], we selected sources with an S/N ratio $>$6 to ensure that the spectra show real absorption and that the $\tau$ spectra are not saturated in all channels. The S/N ratio for the selected spectra ranges between six and 250, with a median value of 12, and the uncertainties associated with the rms noise are between 0.004 and 0.18, with a median value of 0.09, depending on the brightness of the continuum source. Another source of uncertainty for the optical depth $\tau$ is that the $\tau$ spectra are all saturated in some channels, as shown in Fig. \[fig\_spectra\]. To test the $\tau$ limits, high sensitivity VLA follow-up observations were carried out towards three selected sources, G21.347–0.629, G29.956–0.018, and G31.388–0.384 (Rugel et al., in prep.). In the THOR survey, where each field was observed for five to six minutes of on-source time, the optical depths of these three sources all saturate at $\tau\sim$2.7 to 3.1. In the follow-up observations, each continuum source was observed for significantly longer, and the noise dropped by $\sim50-70\%$. However, many channels in the $\tau$ spectra still saturate at $\tau\sim3.5-3.9$ if we smooth the data to the same angular and spectral resolution as the THOR C-configuration data. Depending on the S/N ratio of the continuum source, this $\sim50-70\%$ drop in the noise could increase the lower limit of the optical depth to $\sim1.2-2.6$ times the current value, and the optical-depth-corrected column density to $\sim 1.1-1.6$ times the current value (Fig. \[fig\_ratio\]). ![Increase of the optical depth lower limit (top panel) and the column density (bottom panel) if the noise drops by 50% (dashed line) and 70% (solid line), plotted as a function of the S/N ratio. The shaded histogram in each panel is the S/N ratio distribution of the continuum sources we used to extract the absorption spectra and construct the optical depth map.
[]{data-label="fig_ratio"}]({figures/tau_snr_factor}.pdf){width="45.00000%"} Uncertainties of the mass {#sect_mass_err} ------------------------- The uncertainties of the mass estimation for the atomic gas originate from several factors: the optical depth, the distance, the absorption against the diffuse continuum emission, and the H I self-absorption (HISA). As we discussed in the previous section, we could be underestimating the peak optical depth $\tau_{\rm limit}$ by 20 to 160%, but the impact on the integrated optical depth is not clear. If we assume that the noise levels of the $\tau$ spectra drop to 30% of the values listed in Table \[table\_tau\], with the same 228 sources to correct the optical depth, the total mass increases by 9%. However, as we mentioned in the previous section, many channels in the $\tau$ spectra still saturate in the high sensitivity follow-up observations, so this 9% is still a lower limit. The second main factor of uncertainty is the distance. With different Galactic parameters, we get different kinematic distances, which result in different mass estimates. With the rotation curve of @Persic1996 and the Galactic parameters from @Reid2014, we derived a total H I mass of 4.7$\times10^8 M_\odot$. With the same Galactic parameters from @Reid2014 but assuming a uniform rotation curve, we would get the same mass. If we take the IAU Solar parameters ($R_0=8.5$ kpc, $\Theta_0=220$ km s$^{-1}$) and the Galactic rotation model from @Brand1993, the total mass increases by 13%. We are also aware that assuming axisymmetry and using the average vertical density distribution $n(z)$ of H I to solve the kinematic distance ambiguity of the column density distribution is not ideal, but this is the best we can do without detailed modeling of the Galactic disk.
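To make the sensitivity to the adopted Galactic parameters concrete, the following Python sketch computes near/far kinematic distances for a flat rotation curve. This is a simplification (the survey uses the @Persic1996 curve), and the longitude, velocity, and returned values are purely illustrative:

```python
import numpy as np

def kinematic_distances(l_deg, v_lsr, r0=8.34, theta0=240.0):
    """Near/far kinematic distances (kpc) in the first quadrant for a flat
    rotation curve Theta(R) = theta0; defaults follow Reid et al. (2014)."""
    l = np.radians(l_deg)
    # Galactocentric radius implied by v_lsr for the flat curve
    r = r0 * np.sin(l) * theta0 / (v_lsr + theta0 * np.sin(l))
    half_chord = np.sqrt(r**2 - (r0 * np.sin(l)) ** 2)
    return r0 * np.cos(l) - half_chord, r0 * np.cos(l) + half_chord

d_reid = kinematic_distances(30.0, 60.0)                       # Reid+2014
d_iau = kinematic_distances(30.0, 60.0, r0=8.5, theta0=220.0)  # IAU values
```

Since the derived mass scales with the square of the adopted distance, such parameter changes propagate directly into the mass budget.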
Furthermore, @Wenger2018 compared the kinematic distances with the parallax distances of 75 Galactic high-mass star-forming regions, and they found that the kinematic distances we used in this paper (derived with the rotation curve of @Persic1996 and the Galactic parameters from @Reid2014) have a median offset of 0.43 kpc, with a standard deviation of 1.24 kpc, from the parallax distances. @bihr2015 estimated the mass of the atomic hydrogen gas in W43 to be $6.6^{+1.8}_{-1.8}\times10^6~M_\odot$ after applying the correction for optical depth and absorption against the diffuse continuum emission ($l=29.0-31.5\degr,\ |b|\leq 1\degr,\ v_{\rm LSR}=60-120$ km s$^{-1}$). By integrating across the same velocity range, we derived a mass of 8.5$\times10^6~M_\odot$ within the same area with our distance determination method. The mean distance at $l=30.25\degr$ between 60 and 120 km s$^{-1}$ from our method is 7.4 kpc, much larger than the distance of 5.5 kpc [@Zhang2014] adopted by @bihr2015. Adopting the same distance of 5.5 kpc results in a mass of $4.9\times10^6~M_\odot$. Further applying the correction for the absorption against the diffuse continuum emission with the method described in @bihr2015 increases the mass by 19% to $5.7\times10^6~M_\odot$, which approximately agrees with what @bihr2015 estimated, but is slightly smaller. Since @bihr2015 used a single strong continuum source to correct the optical depth for the whole W43 region, they may have overcorrected the outer parts of W43. The 19% mass increase from applying a correction for the absorption against the diffuse continuum emission is an upper limit if we consider the whole survey area, since W43 is one of the most extreme star-forming complexes in the Milky Way and is considered to be a mini starburst region [@Nguyen2011; @Beuther2012; @Nguyen2013; @Zhang2014]. Furthermore, to correct for the diffuse continuum emission, we need information on its distance.
It is reasonable to assume that all diffuse continuum emission is in the background and correct for it in a particular region, but we cannot apply this to the whole survey. Thus, we do not apply any correction for the diffuse continuum emission. The last factor is HISA, which we did not consider for the mass estimation. However, HISA was studied towards a Giant Molecular Filament (GMF) with the THOR data by Wang et al., submitted, and they showed, for this specific GMF, that the mass traced by HISA is only 2-4% of the total mass. Studies of H I narrow self-absorption (HINSA) show that the ratio of the column density traced by HINSA to the H$_2$ column density is $\sim 10^{-3}$ to $2\times 10^{-2}$ [@Li2003; @Goldsmith2005; @Zuo2018]. As we discussed in Sect. \[sect\_hi\_mass\], the total H$_2$ mass in this part of the Galactic plane is comparable to the H I mass, so the effect of HISA and HINSA on the H I mass is negligible. In summary, we could still be underestimating the H I mass by 20 to 40%, even with the current optical depth correction. Conclusions {#sect_con} =========== In this paper, we describe the THOR data release 2, which includes all OH and RRL data, and an entirely new H I dataset from the THOR survey. In addition, a detailed analysis of the H I data is presented. The main results can be summarized as follows: 1. While the channel maps show clear filamentary substructures at negative velocities, the emission at positive velocities is more smeared-out. This is likely due to higher spatial and velocity crowding of structures at positive velocities. Both the $l-v$ diagram and the face-on view of the H I emission show that some of the atomic gas follows the spiral arms well, such as the Sagittarius and Perseus Arms, but there is also much gas in the inter-arm regions. 2. We produced a spectrally resolved $\tau$ map from 228 absorption spectra. 3. We corrected the H I emission for optical depth with the $\tau$ map.
The atomic gas column density we derived with the optical depth correction is 38% higher than the column density derived with the optically thin assumption. We estimate the total H I mass in the survey region to be 4.7$\times10^8~M_\odot$, 31% higher than the mass derived with the optically thin assumption. If we apply this 31% correction to the whole Milky Way, the total atomic gas mass would be 9.4–10.5$\times 10^9~M_\odot$. 4. Considering that all the $\tau$ spectra are saturated in some channels and that we did not apply the correction for the diffuse continuum emission, we could be underestimating the H I mass by an additional 20–40%. Future higher sensitivity observations are needed to better constrain the optical depth. 5. We constructed a face-on view of the mean surface density of the atomic gas in the survey area. 6. We estimated the density-weighted harmonic mean spin temperature $\left<T_{\rm S}\right>$ integrated between –113 and 165 km s$^{-1}$, with a median value of $\left<T_{\rm S}\right>\sim143$ K, about a factor of two lower than what was estimated in the outer disk of the Milky Way, which may indicate a higher fraction of CNM in the inner Milky Way. 7. The latitude-averaged $T_{\rm B}$(H I)/$T_{\rm B}$($^{13}$CO) ratio distribution shows two peaks, at $\sim$100 and $\sim$600, which may indicate that the atomic-to-molecular gas ratio increases by a factor of six from spiral arms to inter-arm regions. The H I, OH, RRL, and continuum data from the THOR survey together provide the community with the basis for high-angular-resolution studies of the ISM in its different phases. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Y.W., H.B., S.B., and J.D.S. acknowledge support from the European Research Council under the Horizon 2020 Framework Program via the ERC Consolidator Grant CSF-648505. H.B., S.C.O.G., and R.S.K.
acknowledge support from the Deutsche Forschungsgemeinschaft in the Collaborative Research Center (SFB 881) “The Milky Way System” (subproject B1, B2, B8). This work was carried out in part at the Jet Propulsion Laboratory which is operated for NASA by the California Institute of Technology. R.J.S. acknowledges an STFC Rutherford fellowship (grant ST/N00485X/1). N.R. acknowledges support from the Max Planck Society through the Max Planck India Partner Group grant. F.B. acknowledges funding from the European Union’s Horizon 2020 research and innovation program (grant agreement No 726384). This research made use of Astropy and affiliated packages, a community-developed core Python package for Astronomy [@astropy2018], Python package [*SciPy*]{}[^26], APLpy, an open-source plotting package for Python [@robitaille2012], and software TOPCAT [@taylor2005]. Optical depth measurements towards the selected sources ======================================================= Face-on surface density maps of the atomic gas ============================================== ![image]({figures/H_face_on_surface_density_200pc_brand93_with_bar}.pdf){width="49.00000%"} ![image]({figures/H_face_on_surface_density_200pc_flat_with_bar}.pdf){width="49.00000%"} [^1]: UKIRT Infrared Deep Sky Survey [^2]: Galactic Legacy Infrared Midplane Survey Extraordinaire [^3]: A 24 and 70 Micron Survey of the Inner Galactic Disk with MIPS [^4]: [*Herschel*]{} Infrared GALactic plane survey [^5]: APEX Telescope Large Area Survey of the Galaxy [^6]: Bolocam Galactic Plane Survey [^7]: The Boston University-Five College Radio Astronomy Observatory Galactic Ring Survey [^8]: The Millimeter Astronomy Legacy Team Survey at 90 GHz [^9]: The Millimetre Astronomer’s Legacy Team - 45 GHz [^10]: FOREST unbiased Galactic plane imaging survey with the Nobeyama 45 m telescope [^11]: The Milky Way Imaging Scroll Painting [^12]: Multi-Array Galactic Plane Imaging Survey [^13]: the Co-Ordinated Radio ‘N’ Infrared Survey for 
High-mass star formation [^14]: The Canadian Galactic Plane Survey [^15]: The Southern Galactic Plane Survey [^16]: The VLA Galactic Plane Survey [^17]: The H$_2$O Southern Galactic Plane Survey [^18]: <http://www.mpia.de/thor/Overview.html> [^19]: <http://casa.nrao.edu>; version 4.1.0 for the pilot study and phase 1, version 4.1.2 for phase 2. [^20]: <https://science.nrao.edu/facilities/vla/data-processing/pipeline/scripted-pipeline> [^21]: version 5.1.1 [^22]: <http://www.mpia.de/thor> [^23]: <https://cds.u-strasbg.fr> [^24]: <https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html> [^25]: <https://github.com/tvwenger/kd> [^26]: <https://www.scipy.org/>
--- abstract: 'We discuss the existence and regularity of periodic traveling-wave solutions of a class of nonlocal equations with homogeneous symbol of order $-r$, where $r>1$. Based on the properties of the nonlocal convolution operator, we apply analytic bifurcation theory and show that a highest, peaked, periodic traveling-wave solution is reached as the limiting case at the end of the main bifurcation curve. The regularity of the highest wave is proved to be exactly Lipschitz. As an application of our analysis, we reformulate the steady reduced Ostrovsky equation in a nonlocal form in terms of a Fourier multiplier operator with symbol $m(k)=k^{-2}$. Thereby we recover its unique highest $2\pi$-periodic, peaked traveling-wave solution, having the property of being exactly Lipschitz at the crest.' address: - 'Institute for Analysis, Karlsruhe Institute of Technology (KIT), D-76128 Karlsruhe, Germany' - 'Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway' author: - Gabriele Bruell - Raj Narayan Dhara bibliography: - 'BD\_Reduced\_Ostrovsky.bib' title: Waves of maximal height for a class of nonlocal equations with homogeneous symbols --- Introduction ============= The present study is concerned with the existence and regularity of a highest, periodic traveling-wave solution of the nonlocal equation $$\label{eq:nonlocal} u_t + L_ru_x + uu_x=0,$$ where $L_r$ denotes the Fourier multiplier operator with symbol $m(k)=|k|^{-r}$, $r>1$. Equation is also known as the *fractional Korteweg–de Vries equation*. We are looking for $2\pi$-periodic traveling-wave solutions $u(t,x)=\phi(x-\mu t)$, where $\mu>0$ denotes the speed of the right-propagating wave. In this context equation reduces after integration to $$\label{eq:steady} -\mu \phi + L_r\phi + \frac{1}{2}\phi^2=B,$$ where $B\in {\mathbb{R}}$ is an integration constant.
Since the symbol of $L_r$ is homogeneous, any bounded solution of the above equation necessarily has zero mean; in turn this implies that the integration constant $B$ is uniquely determined to be $$B=\frac{1}{4\pi}\int_{-\pi}^\pi\phi^2(x)\,dx.$$ The question about singular, highest waves was already raised by Stokes. In 1880 Stokes conjectured that the Euler equations admit a highest, periodic traveling wave having a corner singularity at each crest with an interior angle of exactly $120^\circ$. About 100 years later (in 1982) Stokes’ conjecture was answered in the affirmative by Amick, Fraenkel, and Toland [@AFT]. The existence and precise regularity of a highest, periodic traveling-wave solution for the Whitham equation are the subject of a recent investigation by Ehrnström and Wahlén [@EW], which proves Whitham’s conjecture on the existence of such a singular solution. The (unidirectional) Whitham equation is a genuinely nonlocal equation, which can be recovered from the well-known Korteweg–de Vries equation by replacing its dispersion relation by one branch of the full Euler dispersion relation. The resulting equation takes (up to a scaling factor) the form of , where the symbol of the Fourier multiplier is given by $m(k)=\sqrt{\frac{\tanh(k)}{k}}$. In order to prove their result, Ehrnström and Wahlén developed a general approach based on the regularity and monotonicity properties of the convolution kernel induced by the Fourier multiplier. The highest, periodic traveling-wave solution for the Whitham equation is exactly $C^\frac{1}{2}$-Hölder continuous at its crests, thus exhibiting exactly half the regularity of the highest wave for the Euler equations.
In a subsequent paper, Ehrnström, Johnson, and Claassen [@EJC] studied the existence and regularity of a highest wave for the bidirectional Whitham equation, which incorporates the full Euler dispersion relation and leads to a nonlocal equation with cubic nonlinearity and a Fourier multiplier with symbol $m(k)=\frac{\tanh(k)}{k}$. The question addressed in [@EJC] is whether this equation gives rise to a highest, periodic traveling wave that is peaked (that is, has a corner at each crest), like the corresponding solution of the Euler equations. Overcoming the additional challenge of the cubic nonlinearity, the authors in [@EJC] follow a similar approach as implemented for the Whitham equation in [@EW] and prove that the highest wave has a singularity at its crest of the form $|x\log(|x|)|$; it is thereby still a cusped wave. Concerning a different model equation arising in the context of shallow-water equations, Arnesen [@A] investigated the existence and regularity of a highest, periodic, traveling-wave solution for the Degasperis–Procesi equation. The Degasperis–Procesi equation is a local equation, but it can also be written in a nonlocal form with quadratic nonlinearity and a Fourier multiplier with symbol $m(k)=(1+k^2)^{-1}$, which, in contrast to the previously mentioned equations, acts itself on a quadratic nonlinearity. For the Degasperis–Procesi equation, and indeed for all equations in the so-called *b-family* (the famous Camassa–Holm equation also being a member), explicit peaked, periodic, traveling-wave solutions are known [@CH; @DHK].
Using the nonlocal approach introduced originally for the Whitham equation in [@EW], the author of [@A] adapts the method to the nonlocal form of the Degasperis–Procesi equation and not only recovers the existence of a highest, peaked, periodic traveling wave, but also proves that any even, periodic, highest wave of the Degasperis–Procesi equation is exactly Lipschitz continuous at each crest; thereby excluding the existence of even, periodic, *cusped* traveling-wave solutions. Our concern is the existence and regularity of highest traveling waves for the fractional Korteweg–de Vries equation , where $r>1$. In the case when $r=2$, the equation can be viewed as the nonlocal form of the *reduced Ostrovsky equation* $$(u_t+uu_x)_x=u.$$ For the reduced Ostrovsky equation, a highest, periodic, peaked traveling-wave solution is known explicitly [@Ostrovsky1978], and it is exactly Lipschitz continuous at each crest. Recently, the existence and stability of smooth, periodic traveling-wave solutions for the reduced Ostrovsky equation were investigated in [@GP; @HSS]. In [@GP2], the authors prove that the (unique) highest, $2\pi$-periodic traveling-wave solution of the reduced Ostrovsky equation is linearly and nonlinearly unstable. We are going to investigate the existence and precise regularity of highest, periodic traveling-wave solutions of the entire family of equations $\eqref{eq:nonlocal}$ for Fourier multipliers $L_r$, where $r>1$. Based on the nonlocal approach introduced for the Whitham equation [@EW], we adapt the method in a way convenient for treating homogeneous symbols, and prove the existence and precise Lipschitz regularity of highest, periodic, traveling-wave solutions of corresponding to the symbol $m(k)=|k|^{-r}$, where $r>1$.
The advantage of this nonlocal approach lies not only in the fact that it can be applied to various equations of local and nonlocal type, but in particular in its suitability for studying entire families of equations simultaneously; thereby providing insight into the interplay between a given nonlinearity and a varying order of the linear part. The main novelty of our work lies in implementing the approach used in [@EW; @EJC; @A] for equations exhibiting *homogeneous* symbols. For a homogeneous symbol, the associated convolution kernel cannot be identified with a positive, decaying function on the real line. Instead we have to work with a periodic convolution kernel. The lack of positivity of the kernel can, however, be compensated by working within the class of zero-mean functions. Moreover, we affirm that, starting from a linear operator of order strictly smaller than $-1$ in equation , a further decrease of the order does not affect the regularity of the corresponding highest, periodic traveling wave. Main result and outline of the paper ------------------------------------ Let us formulate our main theorem, which provides the existence of a global bifurcation branch of nontrivial, smooth, periodic, even traveling-wave solutions of equation , reaching a limiting peaked, precisely Lipschitz continuous solution at the end of the bifurcation curve. \[thm:main\] For each integer $k\geq 1$ there exists a wave speed $\mu^*_{k}>0$ and a global bifurcation branch $$s\mapsto (\phi_{k}(s),\mu_{k}(s)),\qquad s>0,$$ of nontrivial, $\frac{2\pi}{k}$-periodic, smooth, even solutions to the steady equation for $r>1$, emerging from the bifurcation point $(0,\mu^*_{k})$.
Moreover, given any unbounded sequence $(s_n)_{n\in{\mathbb{N}}}$ of positive numbers $s_n$, there exists a subsequence of $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$ which converges uniformly to a limiting traveling-wave solution $(\bar \phi_{k},\bar\mu_{k})$ that solves and satisfies $$\bar \phi_{k}(0)=\bar \mu_{k}.$$ The limiting wave is strictly increasing on $(-\frac{\pi}{k},0)$ and exactly Lipschitz at $x\in \frac{2\pi}{k}{\mathbb{Z}}$. It is worth noting that the regularity of peaked traveling-wave solutions is Lipschitz for *all* $r>1$. The reason lies mainly in the smoothing properties of the Fourier multiplier, whose order is strictly greater than $1$, see Theorem \[thm:regularity\]. The outline of the paper is as follows: In Section \[S:Setting\] we introduce the functional-analytic setting, notations, and some general conventions. Properties of general Fourier multipliers with homogeneous symbol and a representation formula for the corresponding convolution kernel are discussed in Section \[S:Fourier\]. Section \[S:Properties\] is the heart of the present work, where we use the regularity and monotonicity properties of the convolution kernel to study a priori properties of bounded, traveling-wave solutions of . In particular, we prove that an even, periodic traveling-wave solution $\phi$, which is monotone on a half period and whose maximum equals the wave speed, is precisely Lipschitz continuous. Finally, in Section \[S:Global\] we investigate the global bifurcation result. By excluding certain alternatives for the bifurcation curve, we conclude the main theorem. In Section \[S:RO\] we apply our result to the reduced Ostrovsky equation, which can be reformulated as a nonlocal equation of the form with Fourier symbol $m(k)=k^{-2}$. We recover the well-known explicit, even, peaked, periodic traveling wave given by $$\phi(x)= \frac{3x^2-6\pi|x|+2\pi^2}{18},\qquad \mbox{for}\quad \mu=\frac{\pi^2}{9}$$ on $[-\pi,\pi]$ and extended periodically.
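The peaked wave can be checked against the steady equation numerically. The sketch below (Python; the sharp-cutoff FFT realization of $L_2$ and the grid size are our own discretization choices) uses the closed form written as $\phi(x)=(|x|-\pi)^2/6-\pi^2/18$, with the crest at $x=0$ and crest height $\phi(0)=\pi^2/9=\mu$, and verifies that the residual $-\mu\phi+L_2\phi+\tfrac{1}{2}\phi^2$ is constant up to discretization error, with value $B=\frac{1}{4\pi}\int_{-\pi}^{\pi}\phi^2\,dx$:

```python
import numpy as np

n = 4096
x = 2 * np.pi * np.arange(n) / n - np.pi            # grid on [-pi, pi)
phi = (np.abs(x) - np.pi) ** 2 / 6 - np.pi**2 / 18  # peaked wave, phi(0) = pi^2/9
mu = np.pi**2 / 9

# L_2: Fourier multiplier with symbol k^-2, the k = 0 mode dropped (zero mean)
k = np.fft.fftfreq(n, d=1.0 / n)
sym = np.zeros(n)
sym[k != 0] = 1.0 / k[k != 0] ** 2
l2_phi = np.real(np.fft.ifft(sym * np.fft.fft(phi)))

residual = -mu * phi + l2_phi + phi**2 / 2          # constant, equal to B
```

The corner at the crest is admissible precisely because $\phi(0)=\mu$, so the jump of $(\phi-\mu)\phi'$ across $x=0$ vanishes.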
Moreover, we prove that any periodic traveling wave $\phi\leq \mu$ is *at least* Lipschitz continuous at its crests; thereby excluding the possibility of periodic traveling waves $\phi\leq \mu$ exhibiting a cusp at their crests. Let us mention that the Fourier multiplier $L_2$ for the reduced Ostrovsky equation can be written as a convolution operator, whose kernel can be computed explicitly, see Remark \[rem:ker\]. Furthermore, relying on a priori bounds on the wave speed coming from a dynamical systems approach for the reduced Ostrovsky equation in [@GP], we are able to obtain a better understanding of the behavior of the global bifurcation branch. Functional-analytic setting and general conventions {#S:Setting} =================================================== Let us introduce the relevant function spaces for our analysis and fix some notation. We seek $2\pi$-periodic solutions of the steady equation . Let us set ${\mathbb{T}}:=[-\pi,\pi]$, where we identify $-\pi$ with $\pi$. In view of the nonlocal approach via Fourier multipliers, the Besov spaces on the torus ${\mathbb{T}}$ form a natural scale of spaces to work in. We recall the definition and some basic properties of periodic Besov spaces. Denote by $\mathcal{D}({\mathbb{T}})$ the space of test functions on ${\mathbb{T}}$, whose dual space, the space of distributions on ${\mathbb{T}}$, is $\mathcal{D}^\prime({\mathbb{T}})$.
If $\mathcal{S}({\mathbb{Z}})$ is the space of rapidly decaying functions from ${\mathbb{Z}}$ to ${\mathbb{C}}$ and $\mathcal{S}^\prime({\mathbb{Z}})$ denotes its dual space, let $\mathcal{F}:\mathcal{D}^\prime({\mathbb{T}})\to \mathcal{S}^\prime( {\mathbb{Z}})$ be the Fourier transformation on the torus defined by duality on $\mathcal{D}({\mathbb{T}})$ via $$\mathcal{F}f (k)=\hat f(k):=\frac{1}{2\pi}\int_{{\mathbb{T}}} f(x)e^{-ixk}\,dx, \qquad f\in \mathcal{D}({\mathbb{T}}).$$ Let $(\varphi_j)_{j\geq 0}\subset C_c^\infty({\mathbb{R}})$ be a family of smooth, compactly supported functions satisfying $$\operatorname{supp}\varphi_0 \subset [-2,2],\qquad \operatorname{supp}\varphi_j \subset [-2^{j+1},-2^{j-1}]\cup [2^{j-1},2^{j+1}] \quad\mbox{ for}\quad j\geq 1,$$ $$\sum_{j\geq 0}\varphi_j(\xi)=1\qquad\mbox{for all}\quad \xi\in{\mathbb{R}},$$ and for any $n\in{\mathbb{N}}$, there exists a constant $c_n>0$ such that $$\sup_{j\geq 0}2^{jn}\|\varphi^{(n)}_j\|_\infty\leq c_n.$$ For $p,q\in[1,\infty]$ and $s\in{\mathbb{R}}$, the [periodic Besov spaces]{} are defined by $$B_{p,q}^s({\mathbb{T}}):=\left\{ f\in \mathcal{D}^\prime({\mathbb{T}})\mid \|f\|_{B^s_{p,q}}^q:=\sum_{j\geq 0}2^{sjq}\left\|\sum_{k\in {\mathbb{Z}}} e^{ik(\cdot)} \varphi_j(k)\hat f(k)\right\|_{L^p}^{q}<\infty\right\},$$ with the common modification when $q=\infty$[^1].
If $s>0$ and $p\in [1,\infty]$, then $$W^{s,p}({\mathbb{T}})\subset B^s_{p,q}({\mathbb{T}})\subset L^p({\mathbb{T}})\qquad \mbox{for any} \quad q\in [1,\infty].$$ Moreover, for $s>0$, the Besov space $B^s_{\infty,\infty}({\mathbb{T}})$ consisting of functions $f$ satisfying $$\|f\|_{B^s_{\infty,\infty}}=\sup_{j\geq 0}2^{sj}\left\|\sum_{k\in {\mathbb{Z}}} e^{ik(\cdot)} \varphi_j(k)\hat f(k)\right\|_\infty < \infty$$ is called [periodic Zygmund space]{} of order $s$ and we write $$\mathcal{C}^s({\mathbb{T}}):=B^s_{\infty,\infty}({\mathbb{T}}).$$ Finally, for $\alpha \in (0,1)$, we denote by $C^\alpha({\mathbb{T}})$ the space of $\alpha$-Hölder continuous functions on ${\mathbb{T}}$. If $k\in {\mathbb{N}}$ and $\alpha\in (0,1)$, then $C^{k,\alpha}({\mathbb{T}})$ denotes the space of $k$-times continuously differentiable functions whose $k$-th derivative is $\alpha$-Hölder continuous on ${\mathbb{T}}$. To lighten the notation we write $C^s({\mathbb{T}})=C^{\left \lfloor{s}\right \rfloor, s- \left \lfloor{s}\right \rfloor }({\mathbb{T}})$ for $s\geq 0$. As a consequence of Littlewood–Paley theory, we have the relation $\mathcal{C}^s({\mathbb{T}})=C^s({\mathbb{T}})$ for any $s>0$ with $s\notin {\mathbb{N}}$; that is, the Hölder spaces on the torus are completely characterized by Fourier series. If $s\in {\mathbb{N}}$, then $C^s({\mathbb{T}})$ is a proper subset of $\mathcal{C}^s({\mathbb{T}})$ and $$C^1({\mathbb{T}})\subsetneq C^{1-}({\mathbb{T}})\subsetneq \mathcal{C}^1({\mathbb{T}}).$$ Here, $C^{1-}({\mathbb{T}})$ denotes the space of Lipschitz continuous functions on ${\mathbb{T}}$. For more details we refer to [@T3 Chapter 13].
We are looking for solutions in the class of $2\pi$-periodic, bounded functions with zero mean, the class being denoted by $$L^\infty_0({\mathbb{T}}):= \{f\in L^\infty({\mathbb{T}}) \mid f \mbox{ has zero mean} \}.$$ In the sequel we continue to use the subscript $0$ to denote the restriction of a respective space to its subset of functions with zero mean. If $f$ and $g$ are elements in an ordered Banach space, we write $f\lesssim g$ ($f\gtrsim g$) if there exists a constant $c>0$ such that $f\leq c g$ ($f\geq cg$). Moreover, the notation $f\eqsim g$ is used whenever $f\lesssim g$ and $f\gtrsim g$. We denote by ${\mathbb{R}}_+$ the nonnegative real half-axis ${\mathbb{R}}_+:=[0,\infty)$ and by ${\mathbb{N}}_0$ the set of natural numbers including zero. The space $\mathcal{L}(X;Y)$ denotes the set of all bounded linear operators from $X$ to $Y$. Fourier multipliers with homogeneous symbol {#S:Fourier} =========================================== The following result is an analogue of the classical Fourier multiplier theorems for nonhomogeneous symbols on Besov spaces (e.g. [@BCD Proposition 2.78]): \[prop:FM\] Let $m>0$ and $\sigma:{\mathbb{R}}\to {\mathbb{R}}$ be a function, which is smooth outside the origin and satisfies $$|\partial^a \sigma(\xi)|\lesssim |\xi|^{-m-a}\qquad \mbox{for all}\quad \xi\neq 0,\quad a\in {\mathbb{N}}_0.$$ Then, the Fourier multiplier $L$ defined by $$Lf=\sum_{k\neq 0}\sigma(k)\hat f(k)e^{ik(\cdot)}$$ belongs to the space ${\mathcal{L}(B^s_{\infty,\infty}}_0({\mathbb{T}});{B^{s+m}_{\infty,\infty}}_0({\mathbb{T}}))$.
In view of the zero mean property of $f$, the proof can be carried out in a similar form as in [@AB Theorem 2.3 (v)], where it is shown that a function $f$ belongs to ${B^s_{\infty,\infty}}({\mathbb{T}})$ if and only if $$\sum_{k\neq 0}\hat f(k)(ik)^{-m}e^{ik(\cdot)} \in {B^{s+m}_{\infty,\infty}}({\mathbb{T}}).$$ The above proposition yields in particular that $$\begin{aligned} \label{def:L} L_rf:= \sum_{k\neq 0}|k|^{-r}\hat f(k)e^{ik(\cdot)} , \qquad r>1,\end{aligned}$$ defines a bounded operator from $\mathcal{C}_0^s({\mathbb{T}})$ to $\mathcal{C}_0^{s+r}({\mathbb{T}})$ for any $s>0$; thereby it is a smoothing operator of order $-r$. We are interested in the existence and regularity properties of solutions of $$\label{Equation} -\mu \phi + L_r\phi + \frac{1}{2}\phi^2-\frac{1}{2}\widehat{\phi^2}(0)=0,\qquad r>1.$$ The operator $L_r$ is defined as the inverse Fourier representation $$L_r f(x)= \mathcal{F}^{-1}(m_r\hat f)(x),$$ where $m_r(k)=|k|^{-r}$ for $k\neq 0$ and $m_r(0)=0$. In view of the convolution theorem, we define the integral kernel $$\begin{aligned} \label{def:K} K_r(x):= 2\sum_{k=1}^\infty |k|^{-r}\cos\left(xk\right), \qquad x\in {\mathbb{T}},\end{aligned}$$ so that the action of $L_r$ is described by the convolution $$\label{eq:convolution_kernel} L_rf=K_r*f.$$ One can then express the equation as $$\label{eq:RO} -\mu \phi +K_r*\phi +\frac{1}{2}\phi^2 -\frac{1}{2}\widehat{\phi^2}(0)=0, \qquad K_r:= \mathcal{F}^{-1}( m_r).$$ In what follows we examine the kernel $K_r$. We start by recalling some general theory on [completely monotonic]{} sequences taken from [@Guo; @Widder]. A sequence $(\mu_k)_{k\in{\mathbb{N}}_0}$ of real numbers is called *completely monotonic* if its elements are nonnegative and $$(-1)^n\Delta^n\mu_k \geq 0\qquad \mbox{for any}\quad n,k\in{\mathbb{N}}_0,$$ where $\Delta^0\mu_k=\mu_k$ and $\Delta^{n+1}\mu_k=\Delta^n\mu_{k+1}-\Delta^n \mu_k$.
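The objects just introduced are concrete enough to test numerically. The sketch below (with the illustrative choices $r=2$ and truncation level $N=200$) checks that the truncated kernel is even with zero mean, that convolving against it reproduces the multiplier action of $L_r$ on a pure cosine mode — the convolution here carries a $\frac{1}{2\pi}$-normalization matching the Fourier-coefficient convention of Section \[S:Setting\], which is our reading of the text — and that the sequence $(k+1)^{-r}$, used in the proof of Theorem \[thm:P\] below, satisfies the alternating difference condition up to order eight.

```python
import math

def K(r, x, N=200):
    # truncated kernel K_r(x) = 2 * sum_{k=1}^{N} k^(-r) * cos(k x)
    return 2.0 * sum(k ** (-r) * math.cos(k * x) for k in range(1, N + 1))

r, M = 2.0, 512
h = 2.0 * math.pi / M
grid = [-math.pi + i * h for i in range(M)]

# the kernel is even and has zero mean (rectangle rule is exact for trig polynomials)
assert abs(K(r, 0.9) - K(r, -0.9)) < 1e-12
assert abs(sum(K(r, y) for y in grid) * h) < 1e-9

def conv(x):
    # (1/(2 pi))-normalized convolution of K_r against cos(3 .)
    return sum(K(r, x - y) * math.cos(3.0 * y) for y in grid) * h / (2.0 * math.pi)

# multiplier action on a pure mode: L_r cos(3 .) = 3^(-r) cos(3 .)
for x in (0.0, 0.5, 2.0):
    assert abs(conv(x) - 3.0 ** (-r) * math.cos(3.0 * x)) < 1e-9

# complete monotonicity of mu_k = (k+1)^(-r): (-1)^n Delta^n mu_k >= 0
seq = [(k + 1.0) ** (-r) for k in range(40)]
for n in range(1, 9):
    seq = [b - a for a, b in zip(seq, seq[1:])]
    assert all((-1) ** n * v >= 0.0 for v in seq)
print("kernel, multiplier and complete-monotonicity checks passed")
```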
A function $f:[0,\infty)\to{\mathbb{R}}$ is called *completely monotone* if it is continuous on $[0,\infty)$, smooth on the open set $(0,\infty)$, and satisfies $$(-1)^n f^{(n)}(x)\geq 0\qquad\mbox{for any}\quad x>0.$$ For completely monotonic sequences we have the following theorem, which can be considered as the discrete analog of Bernstein’s theorem on completely monotone functions. \[thm:B\] A sequence $(\mu_k)_{k\in{\mathbb{N}}_0}$ of real numbers is completely monotonic if and only if $$\mu_k=\int_0^1 t^k d\sigma(t),$$ where $\sigma$ is nondecreasing and bounded for $t\in[0,1]$. There exists a close relationship between completely monotonic sequences and completely monotone functions. \[lem:CM\] Suppose that $f:[0,\infty)\to {\mathbb{R}}$ is completely monotone, then for any $a\geq 0$ the sequence $(f(an))_{n\in{\mathbb{N}}_0}$ is completely monotonic. We are going to use the theory on completely monotonic sequences to prove the following theorem, which summarizes some properties of the kernel $K_r$. \[thm:P\] Let $r>1$. The kernel $K_r$ defined in has the following properties: - a) $K_r$ is even, continuous, and has zero mean. - b) $K_r$ is smooth on ${\mathbb{T}}\setminus\{0\}$ and decreasing on $(0,\pi)$. - c) $K_r \in W^{r-{\varepsilon},1}({\mathbb{T}})$ for any ${\varepsilon}\in (0,1)$. In particular, $K_r^\prime$ is integrable and $K_r$ is $\alpha$-Hölder continuous with $\alpha \in (0,r-1)$ if $r\in (1,2]$, and continuously differentiable if $r> 2$. Claim a) follows directly from the definition of $K_r$ and $r>1$. Now we want to prove part b). Set $$\mu_k:=(k+1)^{-r}\qquad \mbox{for}\quad k\in{\mathbb{N}}_0.$$ Clearly $x\mapsto (x+1)^{-r}$ is completely monotone. Thus, Lemma \[lem:CM\] guarantees that $(\mu_k)_{k\in {\mathbb{N}}_0}$ is a completely monotonic sequence.
By Theorem \[thm:B\], there exists a nondecreasing and bounded function $\sigma_r:[0,1]\to{\mathbb{R}}$ such that $$(k+1)^{-r}=\int_0^1 t^{k}\, d\sigma_r(t)\qquad\mbox{for any}\quad k\geq 0.$$ In particular $$|k|^{-r}=\int_0^1 t^{|k|-1}\, d\sigma_r(t)\qquad\mbox{for any}\quad k\neq 0.$$ The coefficients $t^{|k|-1}$ can be written as $$t^{|k|-1}=\frac{1}{2\pi}\int_{\mathbb{T}}f(t,x)e^{-ixk}\,dx\qquad \mbox{for}\quad k\neq 0,$$ where $$f(t,x)=\sum_{k\neq 0}t^{|k|-1}e^{ixk}+a_0(t)$$ for some bounded function $a_0:(0,1)\to {\mathbb{R}}$. Thereby, $$\begin{aligned} |k|^{-r}= \frac{1}{2\pi}\int_{\mathbb{T}}\int_0^1 f(t,x)\,d\sigma_r(t)e^{-ixk}\,dx\qquad\mbox{for any}\quad k\neq 0. \end{aligned}$$ In particular, we deduce that $$\int_0^1 f(t,x)\,d\sigma_r(t)=\sum_{k\neq 0} |k|^{-r}e^{ixk}=K_r(x).$$ Notice that we can compute $f$ explicitly as $$\begin{aligned} f(t,x)-a_0(t)&= \sum_{k\neq 0}t^{|k|-1}e^{ixk}=2\sum_{k=1}^\infty t^{k-1}\cos(xk)=2\sum_{k=0}^\infty t^{k}\cos(x(k+1))\\ &=2\operatorname{Re}\left(e^{ix}\sum_{k=0}^\infty t^{k}e^{ixk}\right)=2\operatorname{Re}\left(e^{ix}\sum_{k=0}^\infty \left(te^{ix}\right)^k \right). \end{aligned}$$ Thus, for $x\in (0,\pi)$, we have that $$f(t,x)=2\operatorname{Re}\left(e^{ix}\frac{1}{1-te^{ix}}\right)+a_0(t)=\frac{2(\cos(x)-t)}{1-2t\cos(x)+t^2}+a_0(t).$$ Consequently, on the interval $(0,\pi)$, the kernel $K_r$ is represented by $$\label{eq:rep} K_r(x)=\int_0^1 \left(\frac{2(\cos(x)-t)}{1-2t\cos(x)+t^2}+a_0(t)\right)\,d\sigma_r(t).$$ From here it is easy to deduce that $K_r$ is smooth on ${\mathbb{T}}\setminus\{0\}$ and decreasing on $(0,\pi)$, which completes the proof of b). Regarding the regularity of $K_r$ claimed in c), let ${\varepsilon}\in (0,1)$ be arbitrary.
On the subset of zero mean functions of $W^{r-{\varepsilon},1}({\mathbb{T}})$ an equivalent norm is given by $$\|K_r\|_{W^{r-{\varepsilon},1}_0}\eqsim \|\mathcal{F}^{-1}\left(|\cdot|^{r-{\varepsilon}}\hat K_r\right)\|_{L^1}.$$ Thereby, $K_r$ is in $W^{r-{\varepsilon},1}_0({\mathbb{T}})$ if and only if the function $$x\mapsto \mathcal{F}^{-1}(|\cdot|^{r-{\varepsilon}}\hat K_r)(x)= 2\sum_{k=1}^\infty |k|^{r-{\varepsilon}-r}\cos(xk)=2\sum_{k=1}^\infty |k|^{-{\varepsilon}}\cos(xk)$$ is integrable over ${\mathbb{T}}$. Now, this follows by a classical theorem on the integrability of trigonometric transformations (cf. [@Boas Theorem 2]), and we deduce the claimed regularity and integrability of $K_r^\prime$. The continuity properties are a direct consequence of Sobolev embedding theorems, see [@Demengel Theorem 4.57]. \[rem:ker\] \[lem:touch\] Let $r>1$. The operator $L_r$ is parity preserving on $L^\infty_0({\mathbb{T}})$. Moreover, if $f,g \in L^\infty_0({\mathbb{T}})$ are odd functions satisfying $f(x)\geq g(x)$ on $[0,\pi]$, then either $$L_rf(x)> L_rg(x)\qquad \mbox{for all}\quad x\in (0,\pi),$$ or $f=g$ on ${\mathbb{T}}$. The fact that $L_r$ is parity preserving is an immediate consequence of the evenness of the convolution kernel. In order to prove the second assertion, assume that $f,g\in L^\infty_0({\mathbb{T}})$ are odd, satisfy $f(x)\geq g(x)$ on $[0,\pi]$, and that there exists $x_0\in (0,\pi)$ such that $L_rf(x_0)= L_rg(x_0)$. Splitting the convolution integral at zero and substituting $y\mapsto -y$, the oddness of $f-g$ and the evenness of $K_r$ yield $$L_rf(x_0)-L_rg(x_0)=\int_{0}^{\pi} \left(K_r(x_0-y)-K_r(x_0+y)\right)\left( f(y)-g(y)\right)\,dy.$$ Since $K_r$ is even, $2\pi$-periodic and decreasing on $(0,\pi)$, we have $K_r(x_0-y)-K_r(x_0+y)>0$ for $x_0,y\in (0,\pi)$, so that the integrand is nonnegative. Consequently, $$L_rf(x_0)-L_rg(x_0)>0$$ unless $f=g$ almost everywhere on $[0,\pi]$, and hence, by oddness, on ${\mathbb{T}}$; this contradicts our assumption and proves the claim.
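Two computations from the proof above admit a quick numerical sanity check: the geometric summation behind the closed form of $f(t,x)$, and the monotonicity of $K_r$ on $(0,\pi)$, tested for the illustrative exponent $r=2$ against the classical closed form $K_2(x)=\pi^2/3-\pi x+x^2/2$ on $[0,2\pi]$ (the Fourier expansion of a Bernoulli polynomial; this closed form is not used in the text and serves only as a cross-check).

```python
import math

# geometric summation: 2 * sum_{k>=1} t^(k-1) cos(k x) = 2 (cos x - t) / (1 - 2 t cos x + t^2)
def f_series(t, x, N=2000):
    return 2.0 * sum(t ** (k - 1) * math.cos(k * x) for k in range(1, N + 1))

def f_closed(t, x):
    return 2.0 * (math.cos(x) - t) / (1.0 - 2.0 * t * math.cos(x) + t * t)

for t in (0.1, 0.5, 0.9):
    for x in (0.3, 1.0, 2.5):
        assert abs(f_series(t, x) - f_closed(t, x)) < 1e-10

# for r = 2 the kernel has the closed form pi^2/3 - pi*x + x^2/2 on [0, 2*pi]
def K2_series(x, N=4000):
    return 2.0 * sum(math.cos(k * x) / k ** 2 for k in range(1, N + 1))

def K2(x):
    x = x % (2.0 * math.pi)   # 2*pi-periodic extension; K2 is even
    return math.pi ** 2 / 3.0 - math.pi * x + x * x / 2.0

for x in (0.2, 1.0, 2.0, 3.0):
    assert abs(K2_series(x) - K2(x)) < 1e-3   # tail of the series is O(1/N)
# K2 is strictly decreasing on (0, pi): its derivative there is x - pi < 0
assert all(K2(a) > K2(b) for a, b in ((0.2, 1.0), (1.0, 2.0), (2.0, 3.0)))
print("closed-form identities and monotonicity confirmed")
```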
A priori properties of periodic traveling-wave solutions {#S:Properties} ======================================================== In the sequel, let $r>1$ be fixed. We consider $2\pi$-periodic solutions of $$\label{eq:gRO} -\mu\phi +L_r\phi+\frac{1}{2}\phi^2-\frac{1}{2}\widehat{\phi^2}(0)=0.$$ The existence of solutions is the subject of Section \[S:Global\], where we use analytic bifurcation theory to first construct small amplitude solutions and then extend this bifurcation curve to a global continuum terminating in a highest traveling wave. The aim of this section is to provide a priori properties of traveling-wave solutions $\phi\leq\mu$. In particular, we show that any nontrivial, even solution $\phi \leq \mu$, which is nondecreasing on the half period $(-\pi,0)$ and attains its maximum at $\phi(0)=\mu$, is precisely Lipschitz continuous. This holds true for any $r>1$, see Theorem \[thm:reg\]. *We would like to point out that the subsequent analysis can be carried out in the very same manner for $2P$-periodic solutions, where $P\in (0,\infty)$ is the length of a finite half period.* Let us start with a short observation. \[lem:distance\] If $\phi \in C_0 ({\mathbb{T}})$ is a nontrivial solution of , then $$\phi(x_M)+ \phi(x_m)\geq 2 \left( \mu -\|K_r\|_{L^1}\right),$$ where $\phi(x_M)=\max_{x\in{\mathbb{T}}}\phi(x)$ and $\phi(x_m)=\min_{x\in{\mathbb{T}}}\phi(x)$. If $\phi\in C_0({\mathbb{T}})$ is a nontrivial solution of , then $\phi(x_M)>0>\phi(x_m)$ and $$\begin{aligned} \mu (\phi(x_M)-\phi(x_m))&=K_r*\phi(x_M)-K_r*\phi(x_m)+\frac{1}{2}\left( \phi^2(x_M)- \phi^2 (x_m) \right)\\ & \leq \|K_r\|_{L^1}(\phi(x_M)-\phi(x_m)) + \frac{1}{2}\left( \phi(x_M) - \phi(x_m)\right)\left( \phi(x_M) + \phi(x_m)\right),\end{aligned}$$ which proves the statement.
In what follows it is going to be convenient to write as $$\label{trav:eqn} \frac{1}{2}(\mu - \phi)^{2} = \frac{1}{2}\mu^{2} - L_r\phi+\frac{1}{2}\widehat{\phi^2}(0).$$ In the next two lemmata we establish a priori properties of periodic solutions of , requiring solely boundedness. \[lem:c1\] Let $\phi \in L_0^\infty({\mathbb{T}})$ be a solution of , then $\left(\mu-\phi \right)^2 \in C^1({\mathbb{T}})$ and $$\left\|\frac{d}{dx}\left(\mu-\phi \right)^2\right\|_\infty \leq 2\left\|K_r^\prime\right\|_{L^1}\|\phi\|_\infty.$$ We can read off from  that the derivative of $(\mu-\phi)^2$ is given by $$\frac{d}{dx}(\mu-\phi)^2(x)=-2 K_r^\prime*\phi(x).$$ Since $K_r^\prime$ and $\phi$ are integrable over ${\mathbb{T}}$ (cf. Theorem \[thm:P\]), the convolution on the right hand side is continuous and the claimed estimate follows. \[lem:uniform\_bound\] Let $\phi\in L_0^\infty({\mathbb{T}})$ be a solution of , then $$\|\phi\|_\infty \leq 2 \left(\mu + \|K_r\|_{L^1}\right) +2\pi\|K_r^\prime\|_{L^1}.$$ If $\phi=0$, there is nothing to prove. Therefore it is enough to assume that $\phi$ is a nontrivial solution. From Lemma \[lem:c1\] we know that $(\mu-\phi)^2$ is a continuously differentiable function. In view of $\phi$ being a function of zero mean and $(\mu-\phi)^2$ being continuous, we deduce the existence of $x_0\in {\mathbb{T}}$ such that $$(\mu-\phi)^2(x_0)=\mu^2.$$ By the mean value theorem, we obtain that $$(\mu-\phi)^2(x)= \left[(\mu - \phi)^2\right]^\prime(\xi)(x-x_0)+\mu^2$$ for some $\xi \in {\mathbb{T}}$ and $$\begin{aligned} \widehat{\phi^2} (0)&= \frac{1}{2\pi}\int_{-\pi}^\pi \phi^2(x)\,dx =\frac{1}{2\pi}\int_{-\pi}^\pi\left[(\mu - \phi)^2\right]^\prime(\xi)(x-x_0)\,dx,\end{aligned}$$ where we used that $\phi$ has zero mean.
Again by Lemma \[lem:c1\] we can estimate the term above generously by $$\widehat{\phi^2} (0) \leq 2\pi\|K_r^\prime\|_{L^1}\|\phi\|_\infty.$$ Using that $\phi$ solves , we obtain $$\begin{aligned} \|\phi\|^2_\infty \leq 2 (\mu + \|K_r\|_{L^1})\|\phi\|_\infty + 2\pi\|K_r^\prime\|_{L^1}\|\phi\|_\infty.\end{aligned}$$ Dividing by $\|\phi\|_\infty$ yields the statement. From now on we restrict our considerations to periodic solutions of , which are even and nondecreasing on the half period $[-\pi,0]$. \[lem:nod\] Any nontrivial, even solution $\phi \in C_0^1({\mathbb{T}})$ of which is nondecreasing on $(-\pi,0)$ satisfies $$\phi^\prime(x)>0 \qquad \mbox{and}\qquad \phi(x)<\mu \qquad \mbox{on}\quad (-\pi,0).$$ Moreover, if $\phi\in C_0^2({\mathbb{T}})$, then $\phi^{\prime \prime}(0)<0$. Assuming that $\phi\in C_0^1({\mathbb{T}})$ we can take the derivative of and obtain that $$(\mu-\phi)\phi^\prime(x)=L_r\phi^\prime (x).$$ Due to the assumption that $\phi^\prime\geq 0$ on $(-\pi,0)$ it is sufficient to show that $$\label{eq:ineq1} L_r\phi^\prime(x)> 0 \qquad \mbox{on}\quad (-\pi,0)$$ to prove the statement. In view of $\phi^\prime$ being odd with $\phi^\prime(x)\geq 0$ on $[-\pi,0]$, the desired inequality follows from Lemma \[lem:touch\]. In order to prove the second statement, let us assume that $\phi\in C_0^2({\mathbb{T}})$. Differentiating twice yields $$(\mu-\phi)\phi^{\prime\prime}(x)=L_r\phi^{\prime\prime} (x)+(\phi^\prime)^2(x).$$ In particular, we have that $$(\mu-\phi)\phi^{\prime\prime}(0)=L_r\phi^{\prime\prime} (0).$$ We are going to show that $L_r\phi^{\prime\prime} (0)<0$, which then (together with the first part) proves the statement.
Using the evenness of $K_r$ and $\phi^{\prime\prime}$, we compute $$\begin{aligned} \frac{1}{2}L_r\phi^{\prime\prime} (0) &= \frac{1}{2}\int_{-\pi}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy \\ &= \int_{0}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy \\ &= \int_{0}^{\varepsilon}K_r(y)\phi^{\prime\prime}(y)\,dy+\int_{{\varepsilon}}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy\\ &=\int_{0}^{\varepsilon}K_r(y)\phi^{\prime\prime}(y)\,dy - K_r({\varepsilon})\phi^\prime({\varepsilon})- \int_{\varepsilon}^\pi K_r^\prime(y)\phi^\prime(y)\,dy,\end{aligned}$$ where we integrated by parts and used that $\phi^\prime(\pi)=0$ by oddness and periodicity of $\phi^\prime$. Notice that the first integral on the right hand side tends to zero as ${\varepsilon}\to 0$; so does the second term, in view of $K_r$ being continuous on ${\mathbb{T}}$ and $\phi^\prime$ being continuous with $\phi^\prime(0)=0$. Concerning the last integral, we observe that $$\frac{1}{2}L_r\phi^{\prime\prime} (0)=- \lim_{{\varepsilon}\to 0^+}\int_{\varepsilon}^\pi K_r^\prime(y)\phi^\prime(y)\,dy<0,$$ since $K_r^\prime$ and $\phi^\prime$ are negative on $(0,\pi)$. We continue by showing that any bounded solution $\phi$ of that satisfies $\phi<\mu$ is smooth. \[th:phi:prop\] Let $\phi\le \mu$ be a bounded solution of . Then: - (i) If $\phi<\mu$ uniformly on ${\mathbb{T}}$, then $\phi\in C^{\infty}({\mathbb{R}})$. - (ii) Considering $\phi$ as a periodic function on ${\mathbb{R}}$, it is smooth on any open set where $\phi<\mu$. Let $\phi<\mu$ uniformly on ${\mathbb{T}}$. Recalling Proposition \[prop:FM\], we know that the operator $L_r$ maps ${B^s_{\infty,\infty}}_0({\mathbb{T}})$ into ${B^{s+r}_{\infty,\infty}}_0({\mathbb{T}})$ for any $s\in{\mathbb{R}}$. Moreover, if $s>0$ then the Nemytskii operator $$\begin{aligned} f\mapsto \mu - \sqrt{\mu^{2}- 2f}\end{aligned}$$ maps $ {B^s_{\infty,\infty}}_0({\mathbb{T}})$ into itself for $f<\frac{1}{2}\mu^2$.
From we see that for any solution $\phi<\mu$ we have $$L_r\phi-\frac{1}{2}\widehat{\phi^2}(0)<\frac{1}{2}\mu^2.$$ Thus, $$\begin{aligned} \label{maps:reg} \begin{split} \left[ f\mapsto \mu-\sqrt{\mu^2-2f}\right]\circ \left[ \phi\mapsto L_r\phi-\frac{1}{2}\widehat{\phi^2}(0)\right]: {B^s_{\infty,\infty}}_0({\mathbb{T}}) \to {B^{s+r}_{\infty,\infty}}_0({\mathbb{T}}), \end{split}\end{aligned}$$ for all $s\geq 0$. Finally,  can be rewritten as $$\begin{aligned} \phi = \mu-\sqrt{\mu^{2} - 2L_r\phi+ \widehat{\phi^2}(0) },\end{aligned}$$ so that $\phi$ is a fixed point of the composition above. Hence, an iteration argument in $s$ guarantees that $\phi\in C^{\infty}({\mathbb{T}})$. In order to prove the statement on the real line, recall that any Fourier multiplier commutes with the translation operator. Thus, if $\phi$ is a periodic solution of , then so is $\phi_h:=\phi(\cdot +h)$ for any $h\in {\mathbb{R}}$. The previous argument implies that $\phi_h \in C^\infty({\mathbb{T}})$ for any $h\in{\mathbb{R}}$, which proves statement (i). In order to prove part (ii) let $U\subset {\mathbb{R}}$ be an open subset of ${\mathbb{R}}$ on which $\phi <\mu$. Then, we can find an open cover $U=\cup_{i\in I}U_i$, where for any $i\in I$ we have that $U_i$ is connected and satisfies $|U_i|<2\pi$. Due to the translation invariance of and part (i), we obtain that $\phi$ is smooth on $U_i$ for any $i\in I$. Since $U$ is the union of open sets, the assertion follows. \[thm:regularity\] Let $\phi\leq \mu$ be an even solution of , which is nondecreasing on $[-\pi, 0]$. If $\phi$ attains its maximum at $\phi(0)=\mu$, then $\phi$ cannot belong to the class $C^1({\mathbb{T}})$.
Assuming that $\phi \in C^1({\mathbb{T}})$, the same argument as in Lemma \[lem:c1\] implies that the function $(\mu-\phi)^2$ is twice continuously differentiable and its Taylor expansion in a neighborhood of $x=0$ is given by $$\begin{aligned} \label{eq:taylor:ser} (\mu-\phi)^2(x)=[(\mu-\phi)^2]^{\prime}(0)x+\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)x^2\end{aligned}$$ for some $\xi$ with $|\xi|\in (0,|x|)$, where $|x|\ll 1$. Since $\phi$ attains a local maximum at $x=0$, its first derivative above vanishes at the origin whereas the second derivative is given by $$\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)=-K_r^{\prime}*\phi^\prime(\xi).$$ We aim to show that in a small neighborhood of zero the right hand side is strictly bounded away from zero. Set $f(\xi):=-K_r^\prime*\phi^\prime(\xi)$. Using that $K_r$ and $\phi$ are even functions with $K_r^\prime$ and $\phi^\prime$ being negative on $(0,\pi)$, we find that $$f(0)=-K_r^\prime*\phi^\prime(0)=2\int_0^\pi K_r^\prime(y)\phi^\prime(y)\,dy=c>0$$ for some constant $c>0$. Since $f$ is even (cf. Lemma \[lem:touch\]) and continuous, there exist $x_0$ with $|x_0|\ll 1$ and a constant $c_0>0$ such that $$\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)=f(\xi)\geq c_0,\qquad \mbox{for all}\quad |\xi|< |x_0|.$$ Thus, considering the Taylor series  in a neighborhood of zero, we have that $$(\mu-\phi)^2(x)\gtrsim x^2\qquad \mbox{for}\quad |x|\ll 1,$$ which in particular implies that $$\label{eq:Lip} \frac{\mu-\phi(x)}{|x|}\gtrsim 1\qquad \mbox{for}\quad |x|\ll 1.$$ Passing to the limit $x\to 0$, we obtain a contradiction to $\phi^\prime(0)=0$. We now investigate the precise regularity of a solution $\phi$, which attains its maximum at $\phi(0)=\mu$. \[thm:reg\] Let $\phi\leq \mu$ be an even solution of , which is nondecreasing on $[-\pi, 0]$. If $\phi$ attains its maximum at $\phi(0)=\mu$, then the following holds: - $\phi\in C^\infty({\mathbb{T}}\setminus \{0\})$ and $\phi$ is strictly increasing on $(-\pi,0)$.
- $\phi\in C_0^{1-}({\mathbb{T}})$, that is $\phi$ is Lipschitz continuous. - $\phi$ is precisely Lipschitz continuous at $x=0$, that is $$\begin{aligned} \label{itm:3} \mu - \phi(x) \eqsim |x| \qquad \mbox{for}\; |x|\ll 1.\end{aligned}$$ - Assume that $\phi\leq \mu$ is a solution which is even and nondecreasing on $(-\pi,0)$. Let $x\in (-\pi,0)$ and $h\in (0,\pi)$. Notice that by periodicity and evenness of $\phi$ and the kernel $K_r$, we have that $$\begin{aligned} K_r*\phi(x+h)&-K_r*\phi(x-h)\\ &=\int_{-\pi}^{0} \left(K_r(x-y)-K_r(x+y)\right)\left(\phi(y+h)-\phi(y-h)\right)\,dy.\end{aligned}$$ The integrand is nonnegative, since $K_r(x-y)-K_r(x+y)> 0$ for $x,y\in (-\pi,0)$ and $\phi(y+h)-\phi(y-h)\geq 0$ for $y\in (-\pi,0)$ and $h\in (0,\pi)$ by assumption that $\phi$ is even and nondecreasing on $(-\pi,0)$. Since $\phi$ is a nontrivial solution and $K_r$ is not constant, we deduce that $$\label{eq:mon} K_r*\phi(x+h)-K_r*\phi(x-h)>0$$ for any $h\in (0,\pi)$. Moreover, we have that $$\frac{1}{2}\left( 2\mu-\phi(x)-\phi(y) \right)\left(\phi(x)-\phi(y)\right)=K_r*\phi(x)-K_r*\phi(y)$$ for any $x,y\in {\mathbb{T}}$. Hence $K_r*\phi(x)=K_r*\phi(y)$ if and only if $\phi(x)=\phi(y)$. In view of , we obtain that $$\phi(x+h)\neq \phi(x-h)\qquad\mbox{for any}\quad h\in (0,\pi).$$ Thereby, $\phi$ is strictly increasing on $(-\pi,0)$. In view of Theorem \[th:phi:prop\], $\phi$ is smooth on ${\mathbb{T}}\setminus\{0\}$. - In order to prove the Lipschitz regularity at the crest, we make use of a simple *bootstrap argument*. We would like to emphasize that the following argument strongly relies on the fact that we are dealing with a smoothing operator of order $-r$, where $r>1$. Let us assume that $\phi$ is *not* Lipschitz continuous and prove a contradiction. If $\phi\leq \mu$ is merely a bounded function, the regularization property of $L_r$ implies immediately that $\phi$ is a priori $\frac{1}{2}$-Hölder continuous.
To see this, recall that $$\frac{1}{2}\left( 2\mu-\phi(x)-\phi(y) \right)\left(\phi(x)-\phi(y)\right)=L_r\phi(x)-L_r\phi(y).$$ Using $\phi\leq \mu$, we deduce that $$\frac{1}{2}\left(\phi(x)-\phi(y)\right)^2\leq |L_r\phi(x)-L_r\phi(y)|.$$ Since $L_r:L_0^\infty({\mathbb{T}})\to\mathcal{C}_0^r({\mathbb{T}})$, where $r>1$, the right hand side can be estimated by a constant multiple of $|x-y|$. An immediate consequence is the $\frac{1}{2}$-Hölder continuity of $\phi$. Since $\phi$ is smooth in ${\mathbb{T}}\setminus\{0\}$, we can differentiate the equality $$\frac{1}{2}(\mu-\phi)^2(x)=K_r*\phi(0)-K_r*\phi(x)$$ for $x\in (-\pi,0)$ and obtain that $$\label{eq:BA} (\mu-\phi)\phi^\prime(x)=\left(K_r*\phi\right)^\prime(x)-\left(K_r*\phi\right)^\prime(0),$$ where we are using that $\left(K_r*\phi\right)^\prime(0)=0$. If $\phi$ is $\frac{1}{2}$-Hölder continuous, then $K_r*\phi\in \mathcal{C}_0^{\frac{1}{2}+r}({\mathbb{T}})$. In view of $r>1$, we gain at least some Hölder regularity for $(K_r*\phi)^\prime$. Thereby, $$\label{eq:BE} (\mu-\phi)\phi^\prime(x)\lesssim |x|^{a}$$ for some $a\in (\frac{1}{2},1]$. By assumption that $\phi$ is not Lipschitz continuous at $x=0$, the above estimate guarantees that $\phi$ is at least $a$-Hölder continuous, where $a>\frac{1}{2}$. We aim to bootstrap this argument to obtain Lipschitz regularity of $\phi$ at $x=0$. If, in the above, $\frac{1}{2}+r>2$, we use that $K_r*\phi\in \mathcal{C}_0^{\frac{1}{2}+r}({\mathbb{T}})\subset C_0^2({\mathbb{T}})$, which guarantees that its derivative is at least Lipschitz continuous ($a=1$ in ) and we are done. If $\frac{1}{2}+r\leq 2$, we merely obtain an improved $a$-Hölder regularity of $\phi$. However, repeating the argument finitely many times yields that $\phi$ is indeed Lipschitz continuous at $x=0$, that is $$\label{eq:upper} \mu-\phi(x)\lesssim |x|,\qquad\mbox{for}\quad |x|\ll 1.$$ - In view of the upper bound we are left to establish a corresponding lower bound for $|x|\ll 1$ to prove the claim .
To achieve this, we show that the derivative is positive and bounded away from zero on $(-\pi,0)$. Let $\xi \in (-\pi,0)$, then $$\begin{aligned} (\mu-\phi)\phi^\prime(\xi)=K_r^\prime * \phi(\xi) =K_r*\phi^\prime(\xi) =\int_{-\pi}^0 \left(K_r(\xi-y)-K_r(\xi+y) \right)\phi^\prime(y)\,dy,\end{aligned}$$ where we used the oddness of $\phi^\prime$. Using the upper bound established in , we divide the above equation by $(\mu-\phi)(\xi)$ and obtain that $$\begin{aligned} \phi^\prime(\xi)\gtrsim \int_{-\pi}^0 \frac{K_r(\xi-y)-K_r(\xi+y)}{|\xi|} \phi^\prime(y)\,dy.\end{aligned}$$ Our aim is to show that $\liminf_{\xi \to 0^-}\phi^\prime(\xi)$ is strictly bounded away from zero. We have that $$\begin{aligned} \lim_{\xi \to 0^-}&\frac{K_r(\xi-y)-K_r(\xi+y)}{|\xi|}\\&= \lim_{\xi \to 0^-}\left(\frac{K_r(y-\xi)-K_r(y)}{\xi}+\frac{K_r(y)-K_r(y+\xi)}{\xi}\right)\frac{\xi}{|\xi|}=2K_r^\prime(y) \end{aligned}$$ for any $y\in (-\pi,0)$ (keep in mind that $\xi<0$). The integrability of $K_r^\prime$ allows us to estimate $$\label{eq:upperbound} \liminf_{\xi \to 0^-}\phi^\prime(\xi)\gtrsim 2\int_{-\pi}^{0} K_r^\prime(y)\phi^\prime(y)\, dy =c$$ for some constant $c>0$, since $\phi$ as well as $K_r$ are strictly increasing on $(-\pi,0)$. Let $x<0$ with $|x|\ll 1$. Applying the mean value theorem for $\phi$ on the interval $(x,0)$ yields that $$\begin{aligned} \frac{\phi(0) - \phi(x)}{|x|} = \phi'(\xi) \qquad \text{for some}\quad |\xi|\ll 1.\end{aligned}$$ In accordance with , we conclude that $$\mu - \phi (x) \eqsim |x|\qquad \mbox{for} \; |x|\ll 1.$$ *The above theorem implies in particular that *any* periodic solution $\phi \leq \mu$ of , which is monotone on a half period, is at least Lipschitz continuous. Thereby, the existence of corresponding cusped traveling-wave solutions satisfying $\phi \leq \mu$ is a priori excluded.* \[lem:lowerbound:aux\] Let $\phi \leq \mu$ be an even solution of , which is nondecreasing on $[-\pi,0]$.
Then there exists a constant $\lambda=\lambda(r)>0$, depending only on the kernel $K_r$, such that $$\mu-\phi(\pi)\geq \lambda \pi.$$ Let us pick $x\in [-\frac{3}{4}\pi,-\frac{1}{4}\pi]$. Then, $$\begin{aligned} \label{eq:estimateA} \begin{split} (\mu-\phi(\pi))\phi^\prime(x)&\geq (\mu-\phi(x))\phi^\prime(x)\\ &=\int_{-\pi}^{0}\left(K_r(x-y)-K_r(x+y)\right)\phi^\prime(y)\,dy\\ &\geq \int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi}\left(K_r(x-y)-K_r(x+y)\right)\phi^\prime(y)\,dy, \end{split} \end{aligned}$$ using the evenness and monotonicity of the kernel $K_r$, which imply that $K_r(x-y)-K_r(x+y)>0$ for $x,y\in (-\pi,0)$. We observe that there exists a constant $\lambda=\lambda(r)>0$, depending only on the kernel $K_r$, such that $$K_r(x-y)-K_r(x+y)\geq 2\lambda \qquad \mbox{for all}\quad x,y \in \left(-\frac{3}{4}\pi,-\frac{1}{4}\pi\right).$$ Thus, integrating with respect to $x$ over $\left(-\frac{3}{4}\pi,-\frac{1}{4}\pi\right)$ yields $$\begin{aligned} (\mu-\phi(\pi))\left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right)&\geq\int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi}\left(\int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi} K_r(x-y)-K_r(x+y)\, dx\right)\phi^\prime(y)\,dy\\ &\geq \lambda \pi \left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right). \end{aligned}$$ In view of $\phi$ being strictly increasing on $(-\pi,0)$ (cf. Theorem \[thm:reg\] (i)), we can divide the above inequality by the positive number $\left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right)$ and thereby affirm the claim. We close this section by proving that there is a natural bound on $\mu$ above which there do not exist any nontrivial continuous solutions satisfying the uniform bound $\phi \leq \mu$. This is going to be used to exclude certain alternatives in the analysis of the global bifurcation curve in Section \[S:Global\]. \[lem:bound\_mu\] If $\mu\geq 2 \|K_r\|_{L^1}$, then there exists no nontrivial continuous solution $\phi\leq\mu$ of .
Assume that $\phi\leq \mu$ is a nontrivial continuous solution of . The statement is a direct consequence of Lemma \[lem:distance\]: Since $\phi$ is continuous and has zero mean, we have that $$2(\mu-\|K_r\|_{L^1})\leq \phi(x_M)+\phi(x_m) <\phi(x_M) \leq \mu,$$ where $\phi(x_M)= \max_{x\in{\mathbb{T}}} \phi(x)$ and $\phi(x_m)=\min_{x\in{\mathbb{T}}} \phi(x)<0$. Hence $$\mu < 2 \|K_r\|_{L^1},$$ a contradiction. Global bifurcation and conclusion of the main theorem {#S:Global} ===================================================== This section is devoted to the existence of nontrivial, even, periodic solutions of . After constructing small amplitude solutions via local bifurcation theory, we extend the local bifurcation branch globally and characterize the end of the global bifurcation curve. By excluding certain alternatives, based on a priori bounds on the wave speed (cf. Lemma \[lem:bound\_mu\] and Lemma \[lem:lowerbound\] below), we prove that the global bifurcation curve reaches a limiting highest wave $\phi$, which is even, strictly monotone on its open half periods and with maximum at $\phi(0)=\mu$. By Theorem \[thm:reg\] then, the highest wave is a peaked traveling-wave solution of $$u_t + L_ru_x + uu_x=0\qquad\mbox{for}\quad r>1.$$ We write $X_{\rm even}$ for the restriction of a Banach space $X$ to its subset of even functions. Let $\alpha \in (1,2)$ and set $$F:C^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_{+}\rightarrow C^{\alpha}_{0,\rm even}({\mathbb{T}}),$$ where $$\label{oper:F} F(\phi,\mu):= \mu\phi - L_r\phi - \phi^{2}/2+\widehat{\phi^2}(0)/2, \qquad (\phi, \mu) \in {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times \mathbb{R}_{+}.$$ Then, $F(\phi,\mu)=0$ if and only if $\phi$ is an even $C^\alpha_0({\mathbb{T}})$-solution of corresponding to the wave speed $\mu\in {\mathbb{R}}_+$. Clearly, $F(0,\mu)=0$ for any $\mu\in {\mathbb{R}}_+$.
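Since $L_r$ acts on the mode $\cos(k\cdot)$ as multiplication by $k^{-r}$, the linear part of $F$ vanishes on this mode at the resonant speed $\mu=k^{-r}$, and a short computation shows that the residual is then purely quadratic, $F(\varepsilon\cos(k\cdot),k^{-r})=-\frac{\varepsilon^2}{4}\cos(2k\cdot)$. The sketch below confirms this identity pointwise; the values of $r$, $k$ and $\varepsilon$ are illustrative.

```python
import math

r, k, eps = 2.0, 2, 0.05
mu_star = k ** (-r)   # resonant speed: the linear part vanishes on cos(k x)

def F_residual(x):
    # F(phi, mu) = mu*phi - L_r phi - phi^2/2 + hat(phi^2)(0)/2 for phi = eps*cos(k x)
    phi = eps * math.cos(k * x)
    Lr_phi = eps * k ** (-r) * math.cos(k * x)   # L_r acts as multiplication by k^(-r)
    mean_phi_sq = eps ** 2 / 2.0                 # hat(phi^2)(0) is the mean of phi^2
    return mu_star * phi - Lr_phi - phi ** 2 / 2.0 + mean_phi_sq / 2.0

# the residual equals -(eps^2/4) cos(2 k x): cos^2 = (1 + cos(2.))/2
for i in range(32):
    x = 2.0 * math.pi * i / 32
    assert abs(F_residual(x) + (eps ** 2 / 4.0) * math.cos(2.0 * k * x)) < 1e-15
print("residual at the resonant speed is purely quadratic")
```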
We are looking for $2\pi$-periodic, even, nontrivial solutions bifurcating from the line $\{(0,\mu)\mid \mu\in{\mathbb{R}}_+\}$ of trivial solutions. The wave speed $\mu>0$ shall be the bifurcation parameter. The linearization of $F$ around the trivial solution $\phi=0$ is given by $$\label{eq:Fderivative} D_\phi F(0,\mu): {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}) \to {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}), \qquad \phi\mapsto \left(\mu \,{\rm id}-L_r \right) \phi.$$ Recall that $L_r:{{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}) \to {\mathcal{C}}^{\alpha+r}_{0,\rm even}({\mathbb{T}})$ is parity preserving and a smoothing operator, which implies that it is compact on ${{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})$. Hence, $D_\phi F(0,\mu)$ is a compact perturbation of an isomorphism, and therefore constitutes a Fredholm operator of index zero. The kernel of $D_\phi F(0,\mu)$ consists of those functions $\psi\in {{C}}^\alpha_{0,\rm even}({\mathbb{T}})$ satisfying $$\begin{aligned} \widehat{\psi}(k)\left( \mu - |k|^{-r}\right) = 0,\ \ \ k\neq 0.\end{aligned}$$ For $\mu\in (0,1]$, we see that $\operatorname{supp}\hat\psi\subseteq \{\pm \mu^{-\frac{1}{r}}\}$. Therefore, the kernel of $D_\phi F(0,\mu)$ is nontrivial, and then one-dimensional, if and only if $\mu=k^{-r}$ for some $k\in {\mathbb{N}}$, in which case it is given by $$\begin{aligned} \label{ker:form} \ker D_\phi F(0,\mu)= \mbox{span} \{\phi^*_k\} \qquad \mbox{with}\quad \phi^*_k(x):=\cos \left(xk \right).\end{aligned}$$ The above discussion allows us to apply the Crandall–Rabinowitz theorem, where the transversality condition is trivially satisfied since we bifurcate from a simple eigenvalue (cf. [@buffoni2003 Chapter 8.4]). \[cor:lcl:bfr\] For each integer $k\ge 1$, the point $(0,\mu_k^*)$, where $\mu_k^*=k^{-r}$, is a bifurcation point.
More precisely, there exists ${\varepsilon}_0>0$ and an analytic curve through $(0,\mu^*_{k})$, $$\begin{aligned} \{ (\phi_{k}({\varepsilon}), \mu_{k}({\varepsilon}))\mid |{\varepsilon}|<{\varepsilon}_0\} \subset {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+,\end{aligned}$$ of nontrivial, $\frac{2\pi}{k}$-periodic, even solutions of with $\mu_k(0)=\mu^*_k$ and $$D_{{\varepsilon}}\phi_{k}(0) =\phi^*_{k}(x)= \cos\left(xk\right).$$ In a neighborhood of the bifurcation point $(0,\mu^*_{k})$ these are all the nontrivial solutions of $F(\phi,\mu)=0$ in ${{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+$. We aim to extend the local bifurcation branch found in Theorem \[cor:lcl:bfr\] to a global continuum of solutions of $F(\phi, \mu)=0$. Set $$\begin{aligned} S:= \{(\phi,\mu)\in U: F(\phi,\mu)=0 \},\end{aligned}$$ where $$U:= \{(\phi,\mu)\in {C}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+\mid \ \phi<\mu\}.$$ \[lem:glb:ind\] The Fréchet derivative $D_{\phi}F(\phi,\mu)$ is a Fredholm operator of index $0$ for all $(\phi,\mu)\in U$. If $(\phi,\mu)\in U$, then $\phi<\mu$ and $$D_{\phi}F(\phi,\mu)=(\mu-\phi){\rm id}-L_r,$$ constitutes a compact perturbation of an isomorphism. Therefore, it is a Fredholm operator of index zero. Let us recall that all bounded solutions $\phi$ of , that is, all bounded solutions $\phi$ satisfying $F(\phi, \mu)=0$, are uniformly bounded by $$\label{eq:uniform_bound} \|\phi\|_\infty \leq 2(\mu + \|K_r\|_{L^1}+ 2\pi \|K_r^\prime\|_{L^1}),$$ as shown in Lemma \[lem:uniform\_bound\]. \[lem:cpt\] Any bounded and closed set of $S$ is compact in ${C}^{\alpha}_{0,\rm even}(\mathbb{T})\times {\mathbb{R}}_+$.
If $(\phi,\mu)\in S$, then in particular $\phi$ is smooth and $$\begin{aligned} \label{def:til:F} \phi=\mu-\sqrt{\mu^{2} + \widehat{\phi^2}(0)-2L_r\phi}=:\tilde{F}(\phi,\mu).\end{aligned}$$ Since the function $\tilde F$ maps $U$ into $\mathcal{C}^{\alpha +r}_{0, \rm even}({\mathbb{T}})$, the latter being compactly embedded into $C^\alpha_{0, \rm even}({\mathbb{T}})$, we obtain that $\tilde F$ maps bounded sets in $U$ into relatively compact sets in $C^{\alpha}_{0, \rm even}({\mathbb{T}})$. Let $A\subset S\subset U$ be a bounded and closed set. Then $\tilde F (A)=\{\phi \mid (\phi, \mu)\in A\}$ is relatively compact in $C^{\alpha}_{0, \rm even}({\mathbb{T}})$. In view of $A$ being closed, any sequence $\{(\phi_n,\mu_n)\}_{n\in {\mathbb{N}}}\subset A$ has a convergent subsequence with limit in $A$. We conclude that $A$ is compact in $C^{\alpha}_{0, \rm even}({\mathbb{T}})\times {\mathbb{R}}_+$. Using Lemmas \[lem:glb:ind\] and \[lem:cpt\] we can extend the local branches found in Theorem \[cor:lcl:bfr\] to global curves. The result follows from [@buffoni2003 Theorem 9.1.1] once we show that $\mu({\varepsilon})$ is not identically constant for $0<{\varepsilon}\ll 1$. The latter claim however is an immediate consequence of Theorem \[thm:Biformulas\] below. The proof essentially follows the lines in [@EK Section 4]. \[thm:glb:bfr\] The local bifurcation curve $s\mapsto (\phi_{k}(s),\mu_{k}(s))$ from Theorem \[cor:lcl:bfr\] of solutions of extends to a global continuous curve of solutions ${\mathbb{R}}_+\to S$ and one of the following alternatives holds: - $\|(\phi_{k}(s), \mu_{k}(s))\|_{C^{\alpha}(\mathbb{T})\times {\mathbb{R}}_+}$ is unbounded as $s \to \infty$. - The pair $(\phi_{k}(s),\mu_{k}(s))$ approaches the boundary of $S$ as $s\to \infty$. - The function $s\mapsto(\phi_{k}(s),\mu_{k}(s))$ is (finitely) periodic.
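The fixed-point form of the equation comes from completing the square in $F(\phi,\mu)=0$. With scalar stand-ins $L$ for $L_r\phi$ and $c$ for $\widehat{\phi^2}(0)$ (our shorthand; the identity is pointwise in $x$, so the nonlocal structure plays no role here), a short symbolic computation confirms the equivalence:

```python
import sympy as sp

# F = 0 can be rewritten as (mu - phi)^2 = mu^2 + c - 2L, whence
# phi = mu - sqrt(mu^2 + c - 2L) on the set where phi < mu.
phi, mu, L, c = sp.symbols('phi mu L c', real=True)
F = mu * phi - L - phi ** 2 / 2 + c / 2          # F(phi, mu) as in (oper:F)
identity = (mu - phi) ** 2 - (mu ** 2 + c - 2 * L)
print(sp.expand(identity + 2 * F))               # 0: the two formulations agree
```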
[Figure: sketch of the bifurcation diagram in the $(\mu,\max \phi)$-plane; the bifurcation curve emanates from $(\mu^*_k,0)$ and approaches the line $\max\phi=\mu$.] We apply the Lyapunov–Schmidt reduction, in order to establish the bifurcation formulas. Let $k\in {\mathbb{N}}$ be a fixed number and set $$M:= \mbox{span}\left\{\cos\left(xl\right)\mid l\neq k\right\},\qquad N:=\ker D_\phi F(0,\mu^*_{k})= \mbox{span}\{\phi^*_{k}\}.$$ Then, $C^\alpha_{0, \rm even}({\mathbb{T}})=M \oplus N$ and a continuous projection onto the one-dimensional space $N$ is given by $$\Pi \phi = \left<\phi, \phi^*_{k} \right>_{L_2}\phi^*_{k}$$ where $\left< \cdot, \cdot \right>_{L_2}$ denotes the inner product in $L_2({\mathbb{T}})$. Let us recall the Lyapunov–Schmidt reduction theorem from [@Kielhoefer Theorem I.2.3]: There exists a neighborhood $\mathcal{O}\times Y \subset U$ of $(0,\mu^*_{k})$ such that the problem $$\label{eq:infinite} F(\phi, \mu)=0 \quad \mbox{for}\quad (\phi, \mu)\in \mathcal{O} \times Y$$ is equivalent to the finite-dimensional problem $$\label{eq:finite} \Phi({\varepsilon}\phi^*_{k} , \mu):= \Pi F ({\varepsilon}\phi^*_{k} + \psi({\varepsilon}\phi^*_{k} , \mu), \mu)=0$$ for functions $\psi \in C^\infty(\mathcal{O}_N \times Y, M)$ and $\mathcal{O}_N \subset N$ an open neighborhood of the zero function in $N$. One has that $\Phi(0, \mu^*_{k})=0$, $\psi(0,\mu^*_{k})=0$, $D_\phi \psi(0,\mu^*_{k})=0$, and solving problem provides a solution $$\phi= {\varepsilon}\phi^*_{k}+\psi ({\varepsilon}\phi^*_{k}, \mu)$$ of the infinite-dimensional problem .
\[thm:Biformulas\] The bifurcation curve found in Theorem \[thm:glb:bfr\] satisfies $$\label{eq:biformula1} \phi_k({\varepsilon})={\varepsilon}\phi^*_k(x)- \frac{{\varepsilon}^2}{2}k^r\left( 1+\frac{1}{1-2^{-r}}\cos \left( 2kx\right)\right)+O({\varepsilon}^3)$$ and $$\label{eq:biformula2} \mu_{k}({\varepsilon})=\mu^*_{k}+{\varepsilon}^2k^r\frac{3-2^{1-r}}{8(1-2^{-r})}+O({\varepsilon}^3)$$ in $C^\alpha_{0, \rm even}({\mathbb{T}})\times {\mathbb{R}}_+$ as ${\varepsilon}\to 0$. In particular, $\ddot{\mu}_{k}(0)>0$ for any $k\geq 1$, that is, Theorem \[cor:lcl:bfr\] describes a supercritical pitchfork bifurcation. Let us prove the bifurcation formula for $\mu_{k}$ first. The value $\dot \mu_{k}(0)$ can be explicitly computed using the bifurcation formula $$\dot \mu_{k} (0)=-\frac{1}{2}\frac{\left< D_{\phi \phi}^2 F(0,\mu^*_{k})[\phi^*_{k}, \phi^*_{k}], \phi^*_{k} \right>_{L^2}}{\left< D_{\phi \mu}^2 F(0,\mu^*_{k})\phi^*_{k},\phi^*_{k} \right>_{L^2}},$$ cf. [@Kielhoefer Section I.6]. We have $$\begin{aligned} D_{\phi \phi}^2 F(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]&=(\phi^*_{k})^2,\\ D_{\phi\mu}^2 F(0,\mu^*_{k})\phi^*_{k}&=-\phi^*_{k}.\end{aligned}$$ In view of $\int_{{\mathbb{T}}}(\phi^*_{k})^3(x)\,dx=0$, the first derivative of $\mu_{k}$ vanishes at zero. In this case, the second derivative is given by $$\label{eq:2derivative} \ddot \mu_{k}(0)=-\frac{1}{3}\frac{\left< D_{\phi\phi\phi}^3 \Phi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}],\phi^*_{k}\right>_{L_2}}{\left< D_{\phi \mu}^2 F(0,\mu^*_{k})\phi^*_{k},\phi^*_{k} \right>_{L^2}},$$ where $\Phi \in C^\infty (\mathcal{O}_N \times Y, N)$ is the function defined in .
We have that $$\begin{aligned} &D_\phi \Phi(\phi, \mu)\phi^*_{k}=\Pi D_\phi F(\phi+ \psi(\phi, \mu), \mu) \left[\phi^*_{k} + D_\phi \psi (\phi, \mu)\phi^*_{k} \right], \\ &D_{\phi\phi} \Phi (\phi, \mu)[\phi^*_{k},\phi^*_{k}] \\ &\quad=\Pi D_{\phi\phi}^2F(\phi + \psi(\phi, \mu), \mu)\left[\phi^*_{k} + D_\phi \psi(\phi,\mu)\phi^*_{k}, \phi^*_{k} + D_\phi \psi(\phi, \mu)\phi^*_{k} \right]\\ &\qquad + \Pi D_{\phi}F(\phi + \psi(\phi, \mu), \mu)D_{\phi \phi}^2\psi(\phi, \mu)[\phi^*_{k},\phi^*_{k}],\\ &D_{\phi\phi\phi}^3\Phi(\phi,\mu)[\phi^*_{k},\phi^*_{k},\phi^*_{k}]= \Pi D_{\phi}F(\phi + \psi(\phi, \mu), \mu)D_{\phi\phi\phi}^3 \psi(\phi, \mu)[\phi^*_{k},\phi^*_{k},\phi^*_{k}]\\ &\qquad +3\Pi D_{\phi\phi}^2F(\phi+ \psi(\phi, \mu), \mu)[\phi^*_{k}+D_\phi\psi(\phi, \mu)\phi^*_{k},D^2_{\phi\phi}\psi(\phi,\mu)[\phi^*_{k},\phi^*_{k}]],\end{aligned}$$ in view of $F$ being quadratic in $\phi$ and therefore $D_{\phi\phi\phi}^3F(\phi,\mu)=0$. Using that $\psi(0,\mu^*_{k})=D_\phi \psi(0,\mu^*_{k})\phi^*_{k}=0$ we obtain that $$\begin{aligned} D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]&=\Pi D_\phi F(0,\mu^*_{k})D_{\phi\phi\phi}^3 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]\\ &\quad+3 \Pi D_{\phi\phi}^2 F(0,\mu^*_{k})[\phi^*_{k},D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]].\end{aligned}$$ Since $N= \ker D_\phi F(0,\mu^*_{k})$ and $\Pi$ is the projection onto $N$, the above derivative reduces to $$\begin{aligned} D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]= 3\Pi\phi^*_{k}D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}].\end{aligned}$$ As in [@Kielhoefer Section 1.6] we use that $D_\phi F(0,\mu^*_{k})$ is an isomorphism on $M$ to write $$\begin{aligned} \label{eq:Dphiphi} \begin{split} D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]&=- (D_\phi F(0,\mu^*_{k}))^{-1}(1-\Pi)D_{\phi\phi}^2F(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]\\ &=- (D_\phi F(0,\mu^*_{k}))^{-1}(1-\Pi)(\phi^*_{k})^2\\ 
&=-\frac{1}{2}(D_\phi F(0,\mu^*_{k}))^{-1}\left( 1+\cos\left(2xk\right)\right)\\ &=-\frac{1}{2}\left(\frac{1}{\mu^*_{k}}+\frac{\cos\left(2xk \right)}{\mu^*_{k}-(2k)^{-r}}\right). \end{split}\end{aligned}$$ We conclude that $$D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]=-\frac{3}{2}\phi^*_{k} \left( \frac{1}{\mu^*_{k}}+\frac{1}{2(\mu^*_{k}-(2k)^{-r})} \right).$$ In view of the denominator in being $-1$, the second derivative of $\mu_{k}$ at zero is given by $$\label{eq:2D} \ddot \mu_{k}(0)=\frac{1}{2}\left(\frac{1}{\mu^*_{k}}+\frac{1}{2(\mu^*_{k}-(2k)^{-r})} \right)= k^{r}\frac{3-2^{1-r}}{4(1-2^{-r})} >0,\ \ \text{for all}\ r>1.$$ The formula is now a direct consequence of a Maclaurin series expansion and $\dot \mu_{k}(0)=0$. Since $\ddot \mu_{k}(0)>0$, we conclude that the bifurcation curve describes a supercritical pitchfork bifurcation. Keeping in mind that $\phi_k(0)=0$ and $\dot \phi_k(0)=\phi^*_k$, we are left to compute $\ddot \phi_k (0)$ in order to establish . We use that $$\phi_k({\varepsilon})={\varepsilon}\phi^*_k+\psi({\varepsilon}\phi^*_k, \mu_k({\varepsilon})),$$ cf. [@Kielhoefer Chapter I.5]. It follows that $$\begin{aligned} \ddot \phi_k(0)=&D^2_{\phi\phi}\psi(0,\mu^*_{k})[\phi^*_k,\phi^*_k]+2D^2_{\phi\mu}\psi(0,\mu^*_{k})[\phi^*_k,\dot \mu_{k}(0)] + D^2_{\mu\mu}\psi(0,\mu^*_{k})[\dot \mu_k(0),\dot \mu_{k}(0)]\\ &+D_\mu\psi(0,\mu^*_{k})\dot \mu_{k}(0).\end{aligned}$$ Since $D_\mu \psi(0,\mu^*_{k})=0$ and $\dot \mu_{k}(0)=0$, we obtain that $$\ddot \phi_k(0)=D^2_{\phi\phi}\psi(0,\mu^*_{k})[\phi^*_k,\phi^*_k].$$ Thus, the claim follows from . \[lem:con\] Any sequence of solutions $(\phi_{n},\mu_{n})_{n\in{\mathbb{N}}}\subset S$ to with $(\mu_n)_{n\in {\mathbb{N}}}$ bounded has a subsequence which converges uniformly to a solution $\phi$. In view of , the boundedness of $(\mu_n)_{n\in {\mathbb{N}}}$ implies that also $(\phi_n)_{n\in {\mathbb{N}}}$ is uniformly bounded in $C({\mathbb{T}})$.
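The algebra leading to the closed form of $\ddot\mu_k(0)$ can be verified symbolically; the following sketch checks the unsimplified expression against the right-hand side, using $\mu^*_k=k^{-r}$:

```python
import sympy as sp

# Check: (1/2)(1/mu* + 1/(2(mu* - (2k)^{-r}))) = k^r (3 - 2^{1-r}) / (4(1 - 2^{-r}))
# with mu* = k^{-r}, as in (eq:2D).
k, r = sp.symbols('k r', positive=True)
mu_star = k ** (-r)
lhs = sp.Rational(1, 2) * (1 / mu_star + 1 / (2 * (mu_star - (2 * k) ** (-r))))
rhs = k ** r * (3 - 2 ** (1 - r)) / (4 * (1 - 2 ** (-r)))
diff = sp.simplify(sp.powsimp(lhs - rhs, force=True))
print(diff)   # expected to reduce to 0
```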
In order to show that $(\phi_n)_{n\in {\mathbb{N}}}$ has a convergent subsequence, we prove that $(\phi_n)_{n\in {\mathbb{N}}}$ is actually uniformly Hölder continuous. By the Arzelà–Ascoli theorem, it then has a convergent subsequence in $C({\mathbb{T}})$. From Theorem \[thm:P\] it is known that $K_r$ is $\alpha$-Hölder continuous for some $\alpha\in (0,1]$. Since $(\phi_n)_{n\in {\mathbb{N}}}$ is uniformly bounded, we have that $(K_r*\phi_n)_{n\in{\mathbb{N}}}$ is uniformly $\alpha$-Hölder continuous. Recalling that $$\frac{1}{2}(\phi_n(x)-\phi_n(y))^2\leq |K_r*\phi_n(x)-K_r*\phi_n(y)|$$ whenever $\phi_n\leq \mu_n$, we deduce that $(\phi_n)_{n\in{\mathbb{N}}}$ is uniformly $\frac{\alpha}{2}$-Hölder continuous. Thus, $(\phi_n,\mu_n)_{n\in {\mathbb{N}}}$ has a subsequence converging uniformly to a solution of . The remainder of the section is devoted to excluding alternative (iii) in Theorem \[thm:glb:bfr\] and to proving that alternatives (i) and (ii) occur simultaneously, which in particular implies that the highest wave is reached as a limit of the global bifurcation curve. Let $$\mathcal{K}_k:= \{ \phi\in {C}^{\alpha}_{0,\rm even}({\mathbb{T}}):\ \phi\ \text{is $2\pi/k$-periodic and nondecreasing in}\ (-\pi/k,0)\},$$ a closed cone in ${C}^{\alpha}_0(\mathbb{T})$. \[prop:A3\] The solutions $\phi_{k}(s)$, $s>0$, on the global bifurcation curve belong to $\mathcal{K}_k\setminus \{0\}$ and alternative (iii) in Theorem \[thm:glb:bfr\] does not occur. In particular, the bifurcation curve $(\phi_{k}(s),\mu_{k}(s))$ has no intersection with the trivial solution line for any $s>0$. Due to [@buffoni2003 Theorem 9.2.2] the statement holds true if the following conditions are satisfied: - $\mathcal{K}_k$ is a cone in a real Banach space. - $(\phi_{k}({\varepsilon}),\mu_{k}({\varepsilon}))\subset \mathcal{K}_k\times {\mathbb{R}}$ provided ${\varepsilon}$ is small enough.
- If $\mu \in {\mathbb{R}}$ and $\phi\in \ker D_\phi F(0,\mu)\cap \mathcal{K}_k$, then $\phi=\alpha \phi^*_k$ for $\alpha \geq 0$ and $\mu=\mu^*_{k}$. - Each nontrivial point on the bifurcation curve which also belongs to $\mathcal{K}_k\times {\mathbb{R}}$ is an interior point of $\mathcal{K}_k\times {\mathbb{R}}$ in $S$. In view of the local bifurcation result in Theorem \[cor:lcl:bfr\], we are left to verify condition (d). Let $(\phi,\mu)\in \mathcal{K}_k\times {\mathbb{R}}$ be a nontrivial solution on the bifurcation curve found in Theorem \[thm:glb:bfr\]. By Theorem \[th:phi:prop\], $\phi$ is smooth and, together with Lemma \[lem:nod\], we have that $\phi'>0$ on $(-\pi,0)$ and $\phi''(0)<0$. Choose a solution $\varphi$ in a neighborhood of $\phi$ of size $\delta\ll 1$ in $C^\alpha_0({\mathbb{T}})$, so that $\varphi < \mu$ and $\|\phi-\varphi\|_{C^{\alpha}}<\delta$. In view of , an iteration process on the regularity index yields that $\|\phi-\varphi\|_{C^{2}}<\tilde{\delta}$, where $\tilde{\delta}>0$ depends on $\delta$ and can be made arbitrarily small by choosing $\delta$ small enough. It follows that for $\delta$ small enough $\varphi<\mu$ is a smooth, even solution, nondecreasing on $(-\frac{\pi}{k},0)$ and hence $(\phi,\mu)$ belongs to the interior of $\mathcal{K}_k\times {\mathbb{R}}$ in $S$, which concludes the proof. \[lem:lowerbound\] Along the bifurcation curve in Theorem \[thm:glb:bfr\] we have that $$\mu(s)\gtrsim 1$$ uniformly for all $s\geq0$. Let us assume for a contradiction that there exists a sequence $(s_n)_{n\in {\mathbb{N}}}\in {\mathbb{R}}_+$ with $\lim_{n\to \infty}s_n=\infty$ such that $\mu(s_n)\to 0$ as $n\to \infty$ along the bifurcation curve found in Theorem \[thm:glb:bfr\]. In view of Lemma \[lem:con\], there exists a subsequence of $(s_n)_{n\in {\mathbb{N}}}$ (not relabeled) such that $\phi(s_n)$ converges to a solution $\phi_0$ of .
Along the bifurcation curve we have that $\phi(s_n)<\mu(s_n)$. Taking into account the zero mean property of solutions of , it follows that $\phi_0=0$ is the trivial solution. But then Lemma \[lem:lowerbound:aux\] yields the contradiction $$0=\lim_{n\to \infty}\left(\mu(s_n)-\phi(s_n)(\pi)\right)\geq \lambda \pi>0.$$ \[thm:A12\] In Theorem \[thm:glb:bfr\], alternatives (i) and (ii) both occur. Let $(\phi_{k}(s),\mu_{k}(s))$, $s\in{\mathbb{R}}_+$, be the bifurcation curve found in Theorem \[thm:glb:bfr\]. In view of Proposition \[prop:A3\] we know that any solution along the bifurcation curve is even and nondecreasing on $(-\frac{\pi}{k},0)$. Moreover, alternative (iii) in Theorem \[thm:glb:bfr\] is excluded. That is, either alternative (i) or alternative (ii) in Theorem \[thm:glb:bfr\] occurs. Let us assume first that alternative (i) occurs, that is, either $\|\phi_{k}(s)\|_{C^\alpha}\to \infty$ for some $\alpha\in (1,2)$ or $|\mu_{k}(s)|\to \infty$ as $s\to \infty$. The former case implies alternative (ii) in view of Theorem \[th:phi:prop\]. Since $\phi_{k}(s)$ has zero mean and keeping in mind Lemma \[lem:bound\_mu\], it is clear that the second option $\lim_{s\to \infty}|\mu_{k}(s)|=\infty$ cannot happen unless we reach the trivial solution line, which is excluded by Proposition \[prop:A3\]. Suppose now that alternative (ii) occurs, but not alternative (i). Then there exists a sequence $(\phi_{k}(s_n),\mu_{k}(s_n))_{n\in {\mathbb{N}}}$ in $S$ satisfying $\phi_{k}(s_n)<\mu$ and $\lim_{n\to \infty}\max \phi_{k}(s_n)=\mu$, while $\phi_{k}(s_n)$ remains uniformly bounded in $C^\alpha({\mathbb{T}})$ for $\alpha\in (1,2)$ and $\mu\gtrsim 1$ by Lemma \[lem:lowerbound\]. But this clearly contradicts Theorem \[thm:reg\]. We deduce that both alternatives (i) and (ii) occur simultaneously.
We are now in a position to conclude our main result: Let $(\phi_{k}(s),\mu_{k}(s))$ be the global bifurcation curve found in Theorem \[thm:glb:bfr\] and let $(s_n)_{n\in {\mathbb{N}}}$ be a sequence in ${\mathbb{R}}_+$ tending to infinity. Due to our previous analysis (Lemma \[lem:bound\_mu\] and Proposition \[prop:A3\]), we know that $(\mu_{k}(s_n))_{n\in {\mathbb{N}}}$ is bounded and bounded away from zero. In view of the $\mu_{k}$-dependent bound on $\phi_{k}$ we obtain that also $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$ is bounded, whence Lemma \[lem:con\] implies the existence of a converging subsequence (not relabeled) of $(\phi_{k}(s_n),\mu_{k}(s_n))_{n\in{\mathbb{N}}}$. Let us denote the limit by $(\bar \phi, \bar \mu)$. By Theorem \[thm:A12\] and Theorem \[thm:reg\] we conclude that $\bar \phi (0)=\bar \mu$ with $\bar \phi$ admitting precisely Lipschitz regularity at each crest, which proves the main assertion, Theorem \[thm:main\]. Application to the reduced Ostrovsky equation {#S:RO} ============================================= In this section we show that our approach can be applied to traveling waves of the reduced Ostrovsky equation, which is given by $$\label{eq:GRO} \left[u_t +uu_x \right]_x-u=0$$ and arises in the context of long surface and internal gravity waves in a rotating fluid [@Ostrovsky1978]. We are looking for $2\pi$-periodic traveling-wave solutions $u(t,x)=\phi(x-\mu t)$, where $\mu>0$ denotes the speed of the right-propagating wave. In this context, equation reduces to $$\label{eq:T} \left[ \frac{1}{2}\phi^{2}-\mu\phi\right]_{xx}-\phi=0.$$ Let us emphasize that the existence of periodic traveling-wave solutions of is well-known. Furthermore, there exists an explicit example of a $2\pi$-periodic traveling-wave with wave speed $\mu=\frac{\pi^2}{9}$ of the form $$\label{eq:formula} \phi_p(x)=\frac{3x^2-\pi^2}{18},$$ which satisfies pointwise on $(-\pi,\pi)$.
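The claim that $\phi_p$ with $\mu=\frac{\pi^2}{9}$ satisfies the steady equation pointwise on $(-\pi,\pi)$ is a one-line symbolic computation:

```python
import sympy as sp

# Verify [phi^2/2 - mu*phi]'' - phi = 0 for phi_p(x) = (3x^2 - pi^2)/18, mu = pi^2/9.
x = sp.symbols('x', real=True)
mu = sp.pi ** 2 / 9
phi = (3 * x ** 2 - sp.pi ** 2) / 18
residual = sp.diff(phi ** 2 / 2 - mu * phi, x, 2) - phi
print(sp.simplify(residual))  # 0
```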
It is easy to check that $\phi_p$ is precisely Lipschitz continuous at its crest points located at $\pi(2{\mathbb{Z}}+1)$ and smooth elsewhere. [Figure: the peaked wave $\phi_p$, extended $2\pi$-periodically, with crests of height $\frac{\pi^2}{9}$ at odd multiples of $\pi$.] Recall that any periodic solution of necessarily has zero mean. Therefore, working in suitable spaces restricted to zero mean functions, the pseudo-differential operator $\partial_x^{-2}$ can be defined uniquely in terms of a Fourier multiplier. We show in Lemma \[lem:relation\] that the steady reduced Ostrovsky equation can be reformulated in nonlocal form as $$\label{eq:Reduced_Ostrovsky} -\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0.$$ Here $L$ denotes the Fourier multiplier with symbol $m(k)=k^{-2}$ for $k\neq 0$ and $m(0)=0$. Recall that any function $f\in {C}^\alpha({\mathbb{T}})$ for $\alpha>\frac{1}{2}$ has an absolutely convergent Fourier series, that is, $$\sum_{k\in {\mathbb{Z}}}|\hat f(k)|<\infty,$$ and the Fourier representation of $f$ is given by $$f(x)=\sum_{k\in{\mathbb{Z}}}\hat f\left(k\right)e^{ixk}.$$ \[lem:relation\] Let $\alpha>\frac{1}{2}$.
A function $\phi\in {C}_0^\alpha ({\mathbb{T}})$ is a solution of if and only if $\phi$ solves $$-\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0,$$ where $$L\phi(x) :=\sum_{k\neq 0}k^{-2}\hat \phi(k)e^{ixk}.$$ Notice that $\phi\in {C}_0^\alpha({\mathbb{T}})$ is a solution of if and only if $$\int_{-\pi}^\pi \left[ \frac{1}{2}\phi^{2}(x)-\mu\phi(x)\right]\psi_{xx}(x)\,dx = \int_{-\pi}^\pi\phi(x) \psi(x)\,dx$$ for all $\psi \in C_c^\infty(-\pi,\pi)$, which is equivalent to $$\mathcal{F}\left( \left[ \frac{1}{2}\phi^{2}-\mu\phi\right]\psi_{xx}\right)(0)= \mathcal{F}\left(\phi \psi\right)(0).$$ Using the property that the Fourier transformation translates products into convolutions, we can write $$\mathcal{F}\left( \frac{1}{2}\phi^{2}-\mu\phi\right) * \mathcal{F}\left(\psi_{xx}\right)(0)=\hat \phi * \hat \psi(0).$$ In view of $\phi$ having zero mean and therefore $\hat \phi (0)=0$, we deduce that $\phi\in {C}_0^\alpha ({\mathbb{T}})$ is a solution to if and only if $$-\sum_{k\neq 0}\mathcal{F}\left( \frac{1}{2}\phi^{2}-\mu\phi\right)(-k)k^2\hat \psi (k)=\sum_{k\neq 0 }\hat \phi(-k)\hat \psi (k)$$ for all $\psi \in C_c^\infty(-\pi,\pi)$. In particular, $$\frac{1}{2}\widehat{\phi^{2}}(k)-\mu\hat \phi(k)+k^{-2}\hat\phi (k)=0 \qquad \mbox{for all}\quad k \neq 0,$$ which is equivalent to $$\sum_{k\neq 0} \left( \frac{1}{2}\widehat{\phi^{2}}(k)-\mu\hat \phi(k)+k^{-2}\hat\phi (k)\right)e^{ixk}=0.$$ Due to the fact that $\phi$ has zero mean, the above equation can be rewritten as $$-\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0,$$ which proves the statement. We proved in Theorem \[thm:reg\] that *any* even, periodic, bounded solution $\phi\leq \mu$, which is monotone on a half period, is Lipschitz continuous on ${\mathbb{R}}$, which guarantees, by Lemma \[lem:relation\], that all solutions of we consider here are indeed solutions of the reduced Ostrovsky equation.
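The action of the multiplier $L$ is easy to realize on a discrete grid. The sketch below (the test function is our arbitrary choice) applies $L$ via the FFT and confirms the defining property $(L\phi)''=-\phi$ for zero-mean $\phi$, i.e., that $L$ inverts $-\partial_x^2$ on zero-mean periodic functions:

```python
import numpy as np

# L has symbol k^{-2} for k != 0 and 0 at k = 0; since (ik)^2 * k^{-2} = -1,
# differentiating L phi twice recovers -phi for zero-mean phi.
N = 128
x = 2 * np.pi * np.arange(N) / N
phi = np.cos(x) + 0.3 * np.sin(2 * x)   # assumed zero-mean test function

k = np.fft.fftfreq(N, d=1.0 / N)
symbol = np.where(k == 0, 0.0, 1.0 / np.where(k == 0, 1.0, k) ** 2)
Lphi = np.fft.ifft(symbol * np.fft.fft(phi)).real

second_derivative = np.fft.ifft((1j * k) ** 2 * np.fft.fft(Lphi)).real
print(np.max(np.abs(second_derivative + phi)))  # vanishes up to rounding error
```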
As a consequence of our main result Theorem \[thm:main\], we obtain the following corollary: \[cor:RO\] For each integer $k\geq 1$ there exists a global bifurcation branch $$s\mapsto (\phi_{k}(s),\mu_{k}(s)),\qquad s>0,$$ of nontrivial, $\frac{2\pi}{k}$-periodic, smooth, even solutions to the steady reduced Ostrovsky equation emerging from the bifurcation point $(0,k^{-2})$. Moreover, given any unbounded sequence $(s_n)_{n\in{\mathbb{N}}}$ of positive numbers $s_n$, there exists a subsequence of $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$, which converges uniformly to a limiting traveling-wave solution $(\bar \phi_{k},\bar\mu_{k})$ that solves and satisfies $$\bar \phi_{k}(0)=\bar \mu_{k}.$$ The limiting wave is strictly increasing on $(-\frac{\pi}{k},0)$ and is exactly Lipschitz at $x\in \frac{2\pi}{k}{\mathbb{Z}}$. In the case of the reduced Ostrovsky equation, we know even more about the bifurcation diagram. Using methods from dynamical systems, the authors of [@GP; @GP2] are able to prove that the peaked, periodic traveling-wave for the reduced Ostrovsky equation is the *unique* nonsmooth $2\pi$-periodic traveling-wave solution ([@GP2 Lemma 2]).
Moreover, from [@GP Lemma 3] we obtain the following a priori bound on the wave speed for nontrivial, $2\pi$-periodic traveling-wave solutions of : \[lem:optimal\] If $\phi$ is a nontrivial, smooth, $\frac{2\pi}{k}$-periodic, traveling-wave solution of the reduced Ostrovsky equation, then the wave speed $\mu$ satisfies the bound $$\mu\in k^{-2}\left(1,\frac{\pi^2}{9}\right).$$ [Figure: the bifurcation diagram in the $(\mu,\max\phi)$-plane with the bifurcation curve confined to the wave-speed range $(1,\frac{\pi^2}{9})$ of Lemma \[lem:optimal\].] *Notice that, in the class of $2\pi$-periodic solutions, the range for the wave speed $\mu$ supporting nontrivial traveling-wave solutions of the reduced Ostrovsky equation is given by $(1,\frac{\pi^2}{9})$, where $\mu=1$ is the wave speed from which nontrivial, $2\pi$-periodic solutions bifurcate and $\mu=\frac{\pi^2}{9}$ is exactly the wave speed corresponding to the highest peaked wave in .* *Regarding the $2\pi$-periodic, nontrivial traveling-wave solutions of on the global bifurcation branch from Corollary \[cor:RO\], we have that Lemma \[lem:bound\_mu\] and Lemma \[lem:lowerbound\], proved in the previous sections, guarantee that the wave speed is a priori bounded by $$\mu \in \left(M, \frac{4\pi^3}{9\sqrt{3}}\right)\qquad\mbox{for some}\quad M\in (0,1].$$ Certainly this bound is far from the optimal bound provided by [@GP] in Lemma \[lem:optimal\]. Thus, there is still room for improvement in our estimates.* Acknowledgments {#acknowlegments .unnumbered} -------------- The author G.B. would like to express her gratitude to Mats Ehrnström for many valuable discussions. Moreover, G.B. gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. Part of this research was carried out while G.B. was supported by grant no.
250070 from the Research Council of Norway.\ The author R.N.D. acknowledges support during the tenure of an ERCIM ‘Alain Bensoussan’ Fellowship and was supported by grant nos. 250070 & 231668 from the Research Council of Norway. Moreover, R.N.D. would also like to thank the Fields Institute for Research in Mathematical Sciences for its support to attend the Focus Program on *Nonlinear Dispersive Partial Differential Equations and Inverse Scattering* (July 31 to August 23, 2017) in the related field. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the [Fields Institute](www.fields.utoronto.ca). [^1]: One can show that the above definition is independent of the particular choice of $(\varphi_j)_{j\geq 0}$
--- abstract: 'We discuss a model of a conformal p-brane interacting with the world volume metric and connection. The purpose of the model is to suggest a mechanism by which gravity coupled to p-branes leads to the formation of [*structure*]{} rather than homogeneity in spacetime. Furthermore, we show that the formation of structure is accompanied by the appearance of a [*multivalued*]{} cosmological constant, i.e., one which may take on different values in different domains, or [*cells*]{}, of spacetime. The above results apply to a broad class of nonlinear gravitational Lagrangians as long as metric and connection on the p-brane manifold are treated as independent variables.' address: - | Department of Physics, California State Polytechnic University,\ Pomona, CA 91768 - | International Center for Theoretical Physics,\ Strada Costiera 11, 34014 Trieste, Italy - | Dipartimento di Fisica Teorica Università di Trieste,\ INFN Sezione di Trieste,\ Strada Costiera 11, 34014 Trieste, Italy author: - 'A.Aurilia' - 'A.Smailagic' - 'E.Spallucci' title: 'Conformal p-branes as a Source of Structure in Spacetime' --- Introduction ============ This paper has two main objectives. The first is to discuss a new effect that the dynamics of extended objects (p–branes) may have on the geometry of spacetime. The second, allied, objective is to introduce a class of gravity theories in (p+1)–dimensions characterized by the formation of structure on the p–brane manifold. To elaborate further on these points, and partly to motivate our work, we recall one of the basic tenets of General Relativity, namely, that the matter content of the universe shapes the spacetime geometry and, conversely, that the geometry “ guides ” the motion of material [*particles*]{} along geodesic lines. The notion of “ geodesic motion ” can be generalized to incorporate the world–track swept by [*extended objects*]{}, say strings and membranes, in curved spacetime [@noi].
However, once this is done, one finds that a new possibility arises from the interplay between geometry and dynamics, namely, the [*formation of structure in spacetime.*]{} This structure consists of separate vacuum domains, or [*cells*]{} of spacetime, each characterized by a distinct geometric phase; that is, the background geometry could be Riemannian or Minkowskian in one domain, of Weyl type in another or Riemann–Cartan in yet another cell, etc., with the highest concentration of matter to be found on the domain walls separating each cell. The interior of each cell constitutes a “ false vacuum ” to the extent that it is characterized by a distinct value of the cosmological constant or vacuum energy density. Since each cell has a dynamics of its own, this overall cellular structure is an ever changing one, and the qualitative picture that comes to our mind is that of a “ frothiness ” in the very fabric of spacetime. In different guises and with different objectives in mind, a cellular structure in spacetime has been invoked before [@cole], and is implicitly assumed, for instance, in connection with the idea of chaotic inflation [@linde], or in connection with the geometrodynamic idea of [*spacetime foam*]{} as the inevitable consequence of quantum fluctuations in the gravitational field [@wheeler]. However, since very little is known about quantum gravity, a mathematical implementation of these ideas has always been exceedingly difficult, or vague. [*In contrast, the aim of this paper is to show that one can “ tunnel through the barrier of ignorance ” about quantum gravitational effects and discuss the formation of a multiphase cellular structure in spacetime as a consequence of the classical dynamics of p–branes coupled to gravity.*]{} This brings us to our second and more detailed objective, i.e., the introduction of a new class of gravity theories in (p+1)–dimensions. 
To trace the genesis of this approach we recall that the notion of a spacetime with many geometric phases originated in an earlier attempt to deal with the phenomena of vacuum decay and inflation [@lego], and, more recently, from a stochastic approach to the dynamics of a string network in which we have shown that domains of spacetime (voids) characterized by a Riemannian geometry and a nearly uniform string distribution, appear to be separated by domain walls characterized by a Weyl type geometry and by a discontinuity in the string distribution [@vanz]. In this approach, based on a stochastic interpretation of the Nambu–Goto action, the geometry of spacetime is not preassigned, but it is required to be compatible with the matter–string distribution in the universe. Thus, the string degrees of freedom are coupled to both metric $\gamma_{mn}$ and connection $\Gamma^\lambda{}_{mn}$ of the [*ambient spacetime*]{} through the curvature scalar $R=\gamma^{mn}R_{mn}(\Gamma)$. The general philosophy of this paper is the same, i.e., geometry and matter distribution must be self-consistent and not preordained. In this paper, however, by “ geometry of spacetime ” we mean the [*intrinsic*]{} geometry of the Lorentzian p–brane manifold and not the geometry of the target space in which the p–brane is imbedded and which we assume, for simplicity, to be a $D$–dimensional Minkowski spacetime. From this vantage point, the p–brane [*classical*]{} action that we suggest below in Eqs. (2.1) and (2.5), can be interpreted as the action for gravity in (p+1)–dimensions coupled to some “ scalar fields ” represented in the action by the imbedding functions with support on the p–brane manifold. The payoff of this particular choice of action is the possibility, not usually contemplated by conventional General Relativity, of a multiphase intrinsic geometry that may form on a p–brane manifold.
In fact, the main result of this paper is a mechanism, [*coded in the condition*]{} (2.14), by which gravity coupled to [*extended objects*]{} manages to produce structure rather than uniformity in spacetime. The source of this mechanism can be traced back to two key properties of our model: the first property is that the gravitational term in the action is described by an [*analytic*]{} function of the scalar curvature on the p–brane manifold; the second property is that the energy–momentum tensor of the p–brane is [*traceless*]{}. These properties are coded in the two terms of the action (2.5): the first property (analyticity of the gravitational term) is simply [*assumed*]{}, with no other justification than to serve our purpose, which is to arrive at the condition (2.14) bypassing quantum gravitational effects; the second property (tracelessness of the energy–momentum tensor) is [*enforced*]{} by restricting our consideration to [*conformal p–branes*]{} defined by Eq.(2.1). The rationale for this choice of p–brane action is that any other choice would result in the appearance of the trace of the energy–momentum tensor on the right hand side of Eq.(2.14), thereby invalidating our conclusions. The main body of the paper, section II, is divided into three subsections. In subsection A, we introduce the action functional for the conformal p–brane non–minimally coupled to the world volume metric and connection, which we consider as independent variables. In subsection B, we describe the solution of the classical field equations corresponding to a Riemannian geometry over the p–brane world volume. In subsection C, we show how the same classical field equations admit another type of solution. For a generic p–brane, with $p>1$, this solution corresponds to a Riemann–Cartan geometry characterized by a traceless torsion tensor. The string case is exceptional in that the solution of the field equations corresponds to a Weyl geometry.
The Action ========== Classical p–brane dynamics -------------------------- In the conventional approach originated by Dirac, Nambu and Goto, p–branes are treated as $(p+1)$–dimensional manifolds [*imbedded*]{} in a $D$–dimensional spacetime. Alternatively, one may elect to focus on the [*intrinsic geometry*]{} of the p–brane manifold, regardless of the imbedding in the ambient spacetime. Our action integral reflects both points of view. A first step toward this “ hybrid ” model was suggested for the string by Howe and Tucker [@ht]. On purely dimensional grounds, the Howe–Tucker string action, which is equivalent to that of Nambu and Goto, is invariant under Weyl rescaling of the world metric $\gamma_{mn}$ and, as a consequence, the string classical energy–momentum tensor has vanishing trace. [*This is the key property of strings which we wish to extend to a generic p–brane*]{}. As anticipated in the Introduction, one way to achieve this is to give up the world–volume interpretation of the action and to formulate p–brane dynamics in a manifestly Weyl invariant form. The extension of the Howe–Tucker action, though feasible, does [*not*]{} meet this requirement [@suga]. Rather, the Weyl invariant classical action for a p–brane is [@Duff] $$S_{\rm C}=-\kappa\int_W d^{p+1}\xi\sqrt{-\gamma}\left[ {1\over(p+1)}\gamma^{mn}\partial_m X^\mu \partial_n X_\mu \right]^{\left(p+1\right)/2}\ ,\label{uno}$$ where $\kappa$ is the p–brane surface tension, $\xi^m$, $m=0,\dots,p$ denote the world volume coordinates with world volume metric $\gamma_{mn}$, and $X^\mu$, $\mu=0,\dots, D-1$, denote spacetime coordinates with a flat metric $\eta_{\mu\nu}$. 
Since the combination $\displaystyle{\sqrt{-\gamma}\left(\gamma^{mn}\right)^{(p+1)/2}}$ is Weyl invariant for any $p$, the p–brane energy–momentum tensor $$\begin{aligned} T_{mn}&\equiv& -{2\over\sqrt{-\gamma}}{\delta S_C\over\delta\gamma^{mn}} \nonumber\\ &=&\kappa \partial_m X^\rho \partial_n X_\rho\left[ {1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p-1\right)/2}+\nonumber\\ &-&\kappa\gamma_{mn}\left[ {1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p+1\right)/2} \label{timunu}\end{aligned}$$ is [*traceless*]{}, i.e. $T^m{}_m=0$. The next step in our approach is to add to the action an explicit symmetry breaking term which accounts for the “ intrinsic ” gravitational interaction on the p–brane manifold. Note that a generic term of this type is expected to arise in the effective action as a consequence of quantum corrections [@smail]. However, for our specific purposes, stated in the Introduction, we define on the p–brane manifold a world hypersurface [*affine connection*]{} $\Gamma^s{}_{mn}$, $(s,m,n=0,\dots,p)$ through an interaction term $L_{int.}(R)$ which is assumed to be an [*analytic*]{}, but otherwise arbitrary function of the world volume scalar curvature $R$. We do not select the usual Christoffel connection because this choice would impose a Riemannian geometry on the world volume. Instead, as explained in the Introduction, we consider the conformal p–brane geometry as a dynamical quantity to be determined by the equations of motion. In this general case, the strength of the connection is measured by the curvature tensor $$R^l{}_{mns}\equiv \partial_n\Gamma^l{}_{ms}- \partial_s\Gamma^l{}_{mn} +\Gamma^l{}_{an}\Gamma^a{}_{ms}-\Gamma^l{}_{as}\Gamma^a{}_{mn}\ , \label{due}$$ and the corresponding contracted curvature tensor and curvature scalar are given by $$R_{ms}(\Gamma)=R^l{}_{mls}\ ,\qquad R(\gamma,\Gamma)=\gamma^{ms}R_{ms}\ . 
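The tracelessness of (\[timunu\]) is purely algebraic: $\gamma^{mn}\partial_m X\cdot\partial_n X$ equals $(p+1)$ times the bracket, so the two terms cancel under contraction. This can be sanity-checked numerically with random data; the sketch below (ours, illustrative only) uses a Euclidean-signature metric so that the fractional powers of the bracket are real.

```python
import numpy as np

rng = np.random.default_rng(0)
p, D = 2, 6          # a 2-brane in a 6-dimensional target space (illustrative)
kappa = 1.0

# Random positive-definite world-volume "metric" gamma_{mn} (Euclidean
# signature, so S**((p-1)/2) below is real).
A = rng.normal(size=(p + 1, p + 1))
gamma = A @ A.T + (p + 1) * np.eye(p + 1)
gamma_inv = np.linalg.inv(gamma)

# Random imbedding gradients dX[m, mu] = partial_m X^mu, and induced metric.
dX = rng.normal(size=(p + 1, D))
g = dX @ dX.T        # g_{mn} = partial_m X . partial_n X

# Bracket S = (1/(p+1)) gamma^{pq} partial_p X . partial_q X
S = np.trace(gamma_inv @ g) / (p + 1)

# The two terms of Eq. (timunu)
T = kappa * g * S ** ((p - 1) / 2) - kappa * gamma * S ** ((p + 1) / 2)

# Trace with gamma^{mn}: trace(gamma^{-1} g) = (p+1) S, so the terms cancel.
trace_T = np.trace(gamma_inv @ T)
print(abs(trace_T))  # vanishes up to rounding error
```

The cancellation holds for any $p$ and $D$; changing the seed or the dimensions above leaves the trace at machine precision.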
\label{treuno}$$ Note that $R_{ms}$ does [*not*]{} depend on the world volume metric, but is a function of the connection alone. Furthermore, $\gamma^{mn}$ projects out the symmetric part of $R_{mn}$ in the definition of the scalar curvature $R$. Against this background, the action describing our model is $$S(X,\gamma,\Gamma)=S_{\rm C}(X,\gamma)+ \int_W d^{p+1}\xi\sqrt{-\gamma}L_{int.}(R) \label{tre}$$ in which $L_{int.}(R)$ can be regarded either as an assigned function of $R$, or as a generic analytic function to be determined by the equations of motion. Note that in the action (\[uno\]), the p–brane is [*minimally*]{} coupled to the world volume metric in a Weyl invariant manner, whereas in the action (\[tre\]) we have introduced a non–minimal interaction term. Except for a special form of $L_{int.}(R)$, to be discussed shortly, it is to be expected that this term breaks the conformal symmetry of the action (2.1), and our immediate objective is to discuss the main dynamical consequence of this symmetry breaking term, namely, the formation of structure accompanied by the appearance of a multivalued cosmological constant on the p–brane manifold. Varying eq. (\[tre\]) with respect to the p–brane coordinates $X^\mu$, we find $$\partial_m\left(\sqrt{-\gamma}\gamma^{mn}\partial_n X^\mu\right)=0\ . \label{quattro}$$ Equation (\[quattro\]) is the “ free ” wave equation for the p–brane field $X^\mu(\xi)$ and would represent the whole content of our model in the absence of the intrinsic gravitational term. Eq. (2.6) is essentially a generally covariant Klein–Gordon equation with respect to the world volume metric $\gamma_{mn}$, and does not depend on the connection $\Gamma$. As a matter of fact, $X^\mu(\xi)$ behaves as a scalar multiplet under a general coordinate transformation $\xi^m\rightarrow \xi^{'m}= \xi^{'m}(\xi)$, and, therefore, general covariance only determines the coupling to the metric. Next, varying eq. 
(\[tre\]) with respect to the world volume metric, we find $$\begin{aligned} &&\gamma^{mn}\left[ {1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p+1\right)/2}-\gamma^{mi}\gamma^{nj}\partial_i X^\rho\partial_j X_\rho \left[ {1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p-1\right)/2}+\nonumber\\ &&-L_{int.}^\prime (R)R^{(mn)}(\Gamma) +{1\over 2}L_{int.}(R)\gamma^{mn}=0\ , \label{cinque}\end{aligned}$$ where the prime denotes differentiation with respect to $R$, $R_{(mn)}$ is the symmetric part of the contracted curvature tensor, and $\nabla_a$ is the covariant derivative with respect to the $\Gamma$ connection. In the absence of non–minimal interactions, eq. (\[cinque\]) reduces to a relationship between the world volume metric $\gamma_{mn}$ and the induced metric $g_{mn}=\partial_m X^\mu \partial_n X_\mu$, modulo an arbitrary Weyl rescaling. This relationship is changed by $L_{int.}(R)$, and eq.(\[cinque\]) encodes the coupling between the p–brane field, metric and connection in the general case. Finally, we have to vary the action with respect to the connection. In order to do this, it may be useful to recall the formula $$\gamma^{ms}\delta_\Gamma R_{ms}(\Gamma)=\gamma^{ms}\left[\nabla_l \delta\Gamma^l{}_{ms}- \nabla_s \delta\Gamma^l{}_{ml}\right]\ . \label{ocinque}$$ Hence, the requirement $$\delta_\Gamma S=\int_W d^{p+1}\xi\sqrt{-\gamma}L_{int.}^\prime (R) \gamma^{ms}\delta_\Gamma R_{ms}(\Gamma)=0 \label{osei}$$ gives, after an integration by parts: $$\nabla_l\left[\sqrt{-\gamma}L_{int.}^\prime (R) \gamma^{mn}\right]-\nabla_s\left[\sqrt{-\gamma}L_{int.}^\prime (R)\gamma^{ms} \right]\delta^n_l=0\ . \label{osette}$$ Taking the trace over the pair $(l,n)$, we find that $\displaystyle{\nabla_n(\sqrt{-\gamma}L_{int.}^\prime (R)\gamma^{mn})=0}$, so that we can write eq. (\[osette\]) in the form $$\nabla_l\left[L_{int.}^\prime (R)\sqrt{-\gamma}\gamma^{mn}\right]=0\ . 
\label{sei}$$ Equation (\[sei\]) relates $\Gamma^m{}_{nr}$ to $\gamma_{mn}$ and can be used to determine the world volume geometry. In order to see this, we note that the first two terms in eq.(2.7) represent just the traceless p–brane energy–momentum tensor (\[timunu\]). Therefore, if we take the trace of eq.(\[cinque\]), the dependence on $X^\mu(\xi)$ disappears and we obtain the following relation between the metric and the connection, $$RL_{int.}^\prime (R)-{p+1\over 2}L_{int.}(R)=0 .\label{sette}$$ Equation (\[sette\]) was first derived in ref. [@to] as a condition on a broad class of non–linear gravitational lagrangians leading to the same Einstein equations obtained from the usual Hilbert action. Volovich [@vol] has subsequently applied that condition to the case of gravity on the world–sheet of a string, and our work was largely inspired by these papers. Regarding equation (2.12), essentially one has two options: the first is to interpret eq.(\[sette\]) as a [*differential*]{} equation for $L_{int.}$, in which case the solution is easily found to be $$L_{int.}(R)={\rm const.}\times R^{(p+1)/2}\ . \label{otto}$$ This function is analytic and invariant under Weyl rescaling. Thus, for any extended object, there is a non–minimal gravitational coupling which is singled out by the Weyl invariance of the action. However, in general one starts from an [*assigned*]{}, non–invariant interaction Lagrangian, so that the form of eq. (\[sette\]) is fixed [*a priori*]{}. This is our second option. As an example, if we specialize the model to the [*bag*]{} case, $p=3$, a suggestive form of $L_{int.}(R)$ is: $\displaystyle{ L_{int.}(R)=\rho -\mu^2 R(\Gamma)+\lambda R^2(\Gamma)}$. This “ interaction ” lagrangian can be interpreted as first order General Relativity plus a quadratic correction in which $\rho$ plays the role of the “ bare ” cosmological constant and $\mu$ can be identified with the Planck mass. 
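Both options can be checked symbolically. The sketch below (SymPy; the symbol names are ours) verifies that Eq. (2.13) solves Eq. (2.12) when the latter is read as a differential equation for $L_{int.}$, and then works out the algebraic condition for the assigned bag Lagrangian, for which the quadratic terms cancel and a single constant-curvature root $c=2\rho/\mu^2$ survives.

```python
import sympy as sp

R, p = sp.symbols('R p', positive=True)

# Option 1: read Eq. (2.12), R L'(R) - (p+1)/2 L(R) = 0, as an ODE for L_int
# and verify that Eq. (2.13), L = const * R**((p+1)/2), solves it for any p.
L = R ** ((p + 1) / 2)
assert sp.simplify(R * sp.diff(L, R) - (p + 1) / 2 * L) == 0

# Option 2: start from an assigned Lagrangian.  For the p = 3 "bag" example,
#   L_int(R) = rho - mu**2 R + lam R**2,
# Eq. (2.12) (with (p+1)/2 = 2) becomes an algebraic condition on R:
rho, mu, lam = sp.symbols('rho mu lam', positive=True)
L_bag = rho - mu**2 * R + lam * R**2
cond = sp.expand(R * L_bag.diff(R) - 2 * L_bag)
print(cond)               # R*mu**2 - 2*rho: the lambda R**2 terms cancel
print(sp.solve(cond, R))  # [2*rho/mu**2]
```

Note that for this Lagrangian the quadratic (Planck-suppressed) correction drops out of the condition entirely, leaving one root fixed by the bare cosmological constant and the Planck mass alone.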
Note that if we set $R(\Gamma)\equiv \phi^2$, where $\phi$ is a scalar field, then $L_{int.}(R)$ takes the form of a [*Higgs potential*]{}, and one may wonder about spontaneous symmetry breaking of Weyl invariance. However, in spite of this formal similarity, one should keep in mind that Weyl invariance is broken explicitly, rather than spontaneously, by the very presence of an interaction term, regardless of the specific form of $L_{int.}(R)$. Returning to the general case and to the formation of structure, we suggest interpreting $L'_{int.}$ in equation (2.12) as an [*order parameter*]{} for the geometric phases on the p–brane manifold. The essential property which makes this interpretation possible is that an analytic function has a [*discrete*]{} number of zeros within its analyticity domain. Since the whole left hand side of eq. (\[sette\]) is an analytic function of $R$, there can be only a discrete set of solutions, say $\{c_i\}$, such that $$c_i L_{int.}^\prime (c_i)-{p+1\over 2}L_{int.}(c_i)=0 \ ,\qquad R=c_i\ .\label{nove}$$ Hence, the conformal p–brane geometry admits two distinct phases characterized by the “ order parameter ” $L_{int.}^\prime (c_i)=0$, or $L'_{int.}\ne 0$. In the first instance, the scalar curvature $R=c_i$ is an extremal of $L_{int}$ and eq.(2.14) implies $L_{int}(c_i)=0$. When $L'_{int}(c_i)\ne 0$, eq.(2.14) implies $L_{int}(c_i)\ne 0$. We will argue, next, that corresponding to each of these cases there exists a distinct geometric phase with a characteristic cellular structure on the p–brane manifold. Riemannian geometric phase --------------------------- If the curvature extremizes the “ potential ”, i.e. $L_{int.}^\prime (c_i)=0$, then equation (\[nove\]) requires $L_{int.}(c_i)=0$. Equation (\[sei\]) is trivially satisfied, and the connection is no longer dynamically determined but can be freely chosen. 
In this case, eq.(\[cinque\]) simplifies and becomes, $$\begin{aligned} &&\gamma^{mn}\left[ {1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p+1\right)/2}+\nonumber\\ -&&\gamma^{mi}\gamma^{nj}\partial_i X^\rho\partial_j X_\rho \left[{1\over(p+1)}\gamma^{pq}\partial_p X^\mu \partial_q X_\mu \right]^{\left(p-1\right)/2}=0.\label{dieci}\end{aligned}$$ From this equation it follows that the world volume metric can be written as the induced metric times an arbitrary function of the world coordinates, $$\gamma_{mn}=\Omega(\xi)\partial_m X^\mu \partial_n X_\mu\ . \label{dodici}$$ Thus, this geometric phase corresponds to a Riemannian background geometry which is governed by the first order, contracted, Einstein equation $R=c_i$. Evidently, [*for each $c_i$*]{}, this equation describes a spacetime of constant curvature (p–cell). Thus, barring any degeneracy in the set of solutions $\{c_i\}$, one is led to the conclusion that the dynamics of a p–brane induces a cellular structure on the p–brane manifold. For $p=3$, each cell consists of a three dimensional region separated from other cells by domain walls, and the overall structure resembles an “ emulsion ” [@lego], or a “ soap bubble froth ” in which the dynamics of each bubble is governed by matching conditions on the metrics of neighboring cells. Note, incidentally, that the contracted Einstein equation $R=c_i$ represents a generalization of the basic equation of (1+1)–dimensional gravity. As a matter of fact, Eq.(\[dodici\]) holds true for any p–brane and is a consequence of the Weyl invariance of the p–brane action. However, while conformal invariance allows a common formal treatment of strings and higher dimensional objects, the [*role*]{} played by conformal invariance is distinctly unique in the case of strings. 
For instance, equation (\[dodici\]) does [*not*]{} imply that the p–brane manifold is conformally flat except in the string case, $p=1$, for which one can find a coordinate transformation which maps the induced metric into a flat metric. A necessary and sufficient condition for [*conformal flatness*]{} of higher dimensional manifolds with $p+1\ge 4$ is that the Weyl tensor vanishes. Riemann–Cartan geometric phase ------------------------------- If $L_{int.}^\prime (c_i)\ne 0$ then $\Gamma^m{}_{nr}$ becomes a dynamical variable. In fact, eq. (\[sei\]) gives $$\nabla_a\left[\sqrt{-\gamma}\gamma^{mn}\right]=0\longrightarrow \nabla_a\gamma^{mn}={\gamma^{mn}\over\sqrt{-\gamma}}\nabla_a\sqrt{- \gamma} . \label{tredici}$$ But, $$\nabla_a\sqrt{-\gamma}={1\over 2}\sqrt{- \gamma}\left[\gamma^{mn}\partial_a \gamma_{mn}-2\Gamma^m{}_{ma}\right] \ ,\label{quattordici}$$ so that eq. (\[tredici\]) can be written in the form $$\nabla_a\gamma^{mn}={1\over 2}\gamma^{mn}\left[\gamma^{rs}\partial_a \gamma_{rs}-2\Gamma^l{}_{la}\right]\ . \label{unoquattro}$$ To solve eq. (\[unoquattro\]), we recall that a general affine connection can always be written as the Christoffel symbol plus a term, say $K^l{}_{mn}$, which behaves as a tensor under general coordinate transformation $$\Gamma^l{}_{mn}=\{{}_m{}^l{}_n\}+K^l{}_{mn}\ . \label{ansatz}$$ The Christoffel symbol $\{{}_m{}^l{}_n\}$ is a metric compatible connection, so that the ansatz (\[ansatz\]), once inserted into eq. (\[unoquattro\]), gives us an equation for the tensor part $K^l{}_{mn}$ alone $$(p-1)K^l{}_{lm}=0\ , \label{traccia}$$ where we have used the identity $\displaystyle{ \{{}_m{}^l{}_l\}=(1/2)\gamma^{ab} \partial_m \gamma_{ab} }$. Eq. (\[traccia\]) shows that, for any extended object different from the string, the trace of $K^l{}_{mn}$ must vanish, so that eq. 
(\[unoquattro\]) for the ansatz  (\[ansatz\]) reduces to $$\nabla_a\gamma^{mn}=0\Longrightarrow K^l{}_{pq}={1\over 2}\left( T^l{}_{pq}+T_{pq}{}^l+T_{qp}{}^l\right)\ ,$$ where $\displaystyle{T^l{}_{pq}=(1/2)\left(\Gamma^l{}_{pq}- \Gamma^l{}_{qp}\right)}$ is the [*torsion tensor*]{}, and $\Gamma^l{}_{pq}$ is identified with the [*Riemann–Cartan connection.*]{} This new geometric phase is also characterized by a cellular structure, since the scalar curvature is still subject to the constraint $R(\gamma,\Gamma)=c_i$. The novelty in this case is the appearance of a cosmological constant with a cell–dependent value. Indeed, in this geometric phase, eq. (\[cinque\]) reduces to the Einstein–Cartan field equation $$R^{(mn)}(\Gamma) -{c_i\over p+1}\gamma^{mn}= -{1\over L_{int.}^\prime (c_i) }T^{mn}(X)\ , \label{quindici}$$ where $-(1/ L_{int.}^\prime (c_i))$ plays the role of Newton’s constant, and $c_i/(p+1)$ acts as an effective cosmological constant in any given cell on the p–brane manifold. It is interesting to note how Newton’s constant and the cosmological constant are related by the above formalism. Evidently both originate from the set of solutions $\{c_i\}$ of equation (2.14) (analyticity assumption). As anticipated in the Introduction, it is this assumption that allows us to bypass our ignorance of quantum gravitational effects: if a generic p–cell has a linear dimension of the order of Planck’s length at the time of its nucleation, then the analyticity assumption is tantamount to stating that the quantum fluctuations in the background metric are of the same order of magnitude as the metric itself, which is the central consideration behind the geometrodynamic idea of spacetime foam. Once the nucleation of p–cells has taken place, the problem of their evolution is largely a classical and tractable one [@spall], and this is the point of view advocated in this paper. 
Finally, it should be noted that our formalism also provides an insight into the question of the special status that strings hold among p–branes: the point is that, for $p=1$, eq.(\[traccia\]) is satisfied by [*any*]{} $K^l{}_{lm}$. This means that the connection $\Gamma^l{}_{qp}$ is defined up to an arbitrary vector field $B_m\equiv -K^l{}_{lm}$. Accordingly, eq.(\[unoquattro\]) becomes $$\nabla_a\gamma^{mn}=\gamma^{mn}B_a\ . \label{semimetric}$$ Eq.(\[semimetric\]) is the [*semi–metric*]{} condition for the Riemann–Weyl connection [@vol] $$\Gamma^l{}_{qp}=\{{}_q{}^l{}_p\}+{1\over 2}\left(\delta^l_p B_q + \delta^l_q B_p-\gamma_{pq}B^l\right)$$ where $B_p$ acts as the Weyl “ gauge potential ” associated with volume–changing scalings. Thus, we conclude that the intrinsic geometry on the world–sheet of a string is characterized by the pair $(\gamma_{mn},\,B_p)$, while for a generic p–brane the geometrical objects are the metric and a traceless torsion tensor. Furthermore, the above results seem to be independent of any special length or energy scale, but are suggestive enough to be given a cosmological interpretation [*at, or near the Planck scale*]{} in the physically interesting case in which the p–brane consists of a spatial 3–dimensional manifold, $p=3$. In this case, the non–minimal coupling term to the bag curvature gives rise to a “ gravitational action ” whose effect is to form a cellular structure on the manifold. This structure is not static, but a highly dynamical one which evokes, at least in our mind, a vivid picture of the ground state of the primordial universe not unlike the chaotic inflation scenario [@linde]. In the light of the above results, the physical spacetime can be pictured as a set of cells in which the geometry is dynamically determined and not fixed at the outset. 
In this scenario, extended objects (strings and membranes) may well play a role comparable, or even alternative, to that of the Higgs field, as the universe bootstraps itself into existence out of the primordial spacetime foam. In this paper we have suggested that this structure is a manifestation of the underlying multiphase geometry induced by the very dynamics of p–branes encoded in the action (\[tre\]). In this interpretation, the cosmic vacuum is a multi–phase system in a double sense: inside a cell there may exist a Riemannian or a Riemann–Cartan geometry; furthermore, for each type of geometry, curvature can attain different constant values labelled by $c_i$. These parameters, in turn, determine the value and sign of the energy density in each cell. Consequently, each cell may behave as a black hole, wormhole, inflationary bubble, etc. The classical and semiclassical evolution of any such cell has been discussed in earlier papers [@spall]. Here, as a final note, we add that a semi–classical description of the quantum mechanical ground state, for such a multi–domain system, is obtained by approximating the (euclidean) Feynman integral with the sum over classical solutions. The non–minimal interaction term in eq. (\[tre\]) acts as an [*effective cosmological constant*]{} once evaluated on a classical solution. Thus, the cosmological constant enters the model as a semi–classical dynamical variable and, therefore, is susceptible to dynamical adjustments [@smail]. From this view point, the vanishing of $L_{int.}(c_i)$ in the Riemannian phase is an attractive result. A.Aurilia and E.Spallucci, “ The Role of Extended Objects in Particle Theory and in Cosmology ”, Proceedings of the Trieste Conference on Super–Membranes and Physics in $2+1$ dimensions, Trieste, 17–21 June, 1989; ed. M.J.Duff, C.N.Pope, E.Sezgin; World Scientific, 1990. See, for instance, E.A.B.Cole, Nuovo Cimento [**1A**]{}, 120, (1971). 
A.Linde, “ Particle Physics and Inflationary Cosmology ” (Harwood Academic, New York, 1990). J.A.Wheeler, Ann. Phys. (NY) [**2**]{}, 604, (1957). A.Aurilia, G.Denardo, F.Legovini and E.Spallucci, Nucl. Phys. [**B252**]{}, 523 (1984). A.Aurilia, E.Spallucci and I.Vanzetta, Phys. Rev. [**D50**]{}, 6490 (1994). P.S. Howe and R.W.Tucker, J. Phys. [**A10**]{}, L155, (1977). A.Sugamoto, Nucl. Phys. [**B215**]{}, 381, (1981). M.S.Alves and J.Barcelos–Neto, Europhys. Lett. [**7**]{}, 395, (1988).\ M.J.Duff, Class. Quantum Grav. [**6**]{}, 1577, (1989). A.Aurilia, A.Smailagic and E.Spallucci, Class. Quantum Grav. [**9**]{}, 1883, (1992). M.Ferraris, M.Francaviglia and I.Volovich, “ Universality of Einstein equations in Palatini formalism ”, University of Torino preprint,TO–JLL–P 1/92. I.V.Volovich, Mod. Phys. Lett. [**A8**]{}, 1827, (1993). A.Aurilia, M.Palmer and E.Spallucci, Phys. Rev. [**D40**]{}, 2511, (1989).\ A.Aurilia, R.Balbinot and E.Spallucci, Phys. Lett. [**B262**]{} 222, (1991).
--- abstract: 'We report the discovery of five quasars with redshifts of $4.67 - 5.27$ and $z''$-band magnitudes of $19.5-20.7$ ($M_B \sim -27$). All were originally selected as distant quasar candidates in optical/near-infrared photometry from the Sloan Digital Sky Survey (SDSS), and most were confirmed as probable high-redshift quasars by supplementing the SDSS data with $J$ and $K$ measurements. The quasars possess strong, broad Ly$\alpha$ emission lines, with the characteristic sharp cutoff on the blue side produced by Ly$\alpha$ forest absorption. Three quasars contain strong, broad absorption features, and one of them exhibits very strong [N[v]{}]{} emission. The amount of absorption produced by the Ly$\alpha$ forest increases toward higher redshift, and that in the $z$ = 5.27 object is consistent with a smooth extrapolation of the absorption seen in lower redshift quasars. The high luminosity of these objects relative to most other known objects at $z \gtsim 5$ makes them potentially valuable as probes of early quasar properties and of the intervening intergalactic medium.' author: - 'Wei Zheng, Zlatan I. Tsvetanov, Donald P. Schneider, Xiaohui Fan, Robert H. Becker, Marc Davis, Richard L. White, Michael A. Strauss, James Annis, Neta A. Bahcall, A. J. Connolly, István Csabai, Arthur F. Davidsen, Masataka Fukugita, James E. Gunn, Timothy M. Heckman, G. S. Hennessy, Željko Ivezić, G. R. Knapp, Eric Peng, Alexander S. Szalay, Aniruddha R. Thakar, Brian Yanny, and Donald G. York' title: 'Five High-Redshift Quasars Discovered in Commissioning Imaging Data of the Sloan Digital Sky Survey$^1$' --- Introduction ============ Since the identification of the first quasar redshift (3C 273, [@schmidt63]), quasars have been at the forefront of modern cosmology. With luminosities tens or hundreds of times higher than those of galaxies, quasars are a powerful probe of the distant primordial universe. 
Over the past decade approximately two hundred objects above redshift four have been discovered, and there is a growing consensus that the number density of luminous quasars peaks between redshifts of two and three and steeply declines out to the limits of current measurements ($z \approx 4.5$; see Warren, Hewett, & Osmer 1994; [@ssg95]; [@kddc95]). Recent studies ([@fan99], 2000) have dramatically increased the number of known quasars with redshifts larger than 4.5, opening the possibility of investigating the quasar luminosity function at redshift five and beyond; this information will determine whether the number density of quasars continues to decline with increasing redshift, or whether theoretical models ($e.g.,$ [@hl98]) that predict a significant number of quasars at $ z > 5$ are correct. The majority of high-redshift quasars have been identified by optical color selection. As the result of intergalactic absorption, the flux in the spectral region shortward of Ly$\alpha$ in distant objects is significantly attenuated. The onset of the Ly$\alpha$ forest and the Lyman break can be detected with broad-band imaging. Multicolor optical surveys ([@who91], [@imh91], [@sgd99]) have proven effective in identifying $z>4$ quasars, but as most such surveys lack information redward of the $I$ band, they have difficulty distinguishing between quasars at redshifts larger than $\approx$ 4.8 and cool stars. Recently, the commissioning data of the Sloan Digital Sky Survey (SDSS, [@york00]) have led to the identification of new high-redshift quasars at an unprecedented rate: approximately 45 quasars at $z> 3.6$ (two at $z \sim 5$, [@fan99], 2000; [@HET]) have been published in the past two years. These quasars, at $M_B\sim -27$, are at the luminous end of the quasar luminosity distribution. (Throughout this paper cosmological properties are calculated assuming and $q_0$ = 0.5.) In addition to wide area surveys, color selection for high-redshift objects has been applied to small, deep fields. 
Three galaxies at $ z > 5.6$ have been spectroscopically confirmed ([@hu98]; [@weymann98]; [@hu99]), and recently [@stern00] found an AGN ($M_B \sim -22.7$) at $z=5.5$. The Hubble Deep Field has led to the discovery of high-redshift candidates ($z \sim$ 6-10, [@lanzetta99]; [@chen99]); these objects, however, are orders of magnitude less luminous than quasars, and their spectral properties are difficult to study even with the world’s largest telescopes. Although the $z=5$ barrier was broken more than two years ago, objects at these extreme redshifts are still rare enough that each new example is a potentially valuable probe of the very early universe. This is especially true in the case of the most luminous quasars, since follow-up spectroscopy of these relatively bright objects can reveal information about the physical conditions in early quasars and about the state of the intergalactic medium at very high redshifts. The SDSS collaboration has undertaken extensive efforts to search for high-redshift quasars. In this paper, we report the discovery of five quasars at $z>4.6$, of which the most distant is at $z=5.27$. Selection of Quasar Candidates ============================== The SDSS (York et al. 2000) utilizes a wide-field camera with 54 CCDs ([@gunn98]), mounted on a dedicated 2.5m telescope at the Apache Point Observatory (APO), New Mexico, to survey $\approx$ $\pi$ steradians of the sky around the Northern Galactic Cap. CCD images in five broad optical bands ($u'$, $g'$, $r'$, $i'$, $z'$, centered at 3540 Å, 4770 Å, 6230 Å, 7630 Å and 9130 Å ; [@fukugita96]) yield a nominal $5\sigma$ detection of point sources in AB magnitudes of 22.3, 23.3, 23.1, 22.3, and 20.8, respectively. The commissioning data have so far covered $\sim 600$ square degrees, mostly near the equatorial region, $|\delta| < 1.5 ^\circ$. With five bands and a spectral resolution of $R \sim 3$, the SDSS imaging data can distinguish quasars from stars in a broad redshift range ([@fan]). 
At $3.5 \ltsim z \ltsim 5$, quasars lie well away from the stellar locus in the $g'r'i'z'$ color space, due to the large equivalent width of the Ly$\alpha$ emission line and the significant absorption produced by the Ly$\alpha$ forest and Lyman limit systems. At redshifts between 4.4 and 5, the $r'-i'$ color becomes large, and $i'-z'$ is near zero. At $ z \gtsim 5$, the flux drop shortward of the redshifted Ly$\alpha$ emission line affects the $i'$-band magnitude, and the $i'-z'$ color increases with redshift. At this point the quasar track quickly approaches the red end of the stellar locus in the $r'i'z'$ color diagram; the rise in contamination by very cool stars requires that additional discriminators be added to aid the quasar selection. For quasars, the underlying continuum longward of Ly$\alpha$ can be approximated with a power law ([@francis91]; [@ssg]; [@zheng97]) of $f_\nu \propto \nu^{-0.9} $, leading to small color differences in bands located redward of the Ly$\alpha$ emission line. This small color difference contrasts sharply with the colors of most cool stars ([@leggett96], 2000), whose flux rises rapidly towards longer wavelengths. Our target selection is based on three regions in SDSS color space: (1) $r^*-i^* > 1.35$ and $i^*-z^* < 0.3$; (2) $r^*-i^* > 2$ and $i^*-z^* <0.7$; or (3) $z'$-band detection only, i.e. $z^* <20.8$ and the detection in the other four bands is below $5 \sigma$. If an object’s $r'$ or $i'$ magnitude is below the respective $5\sigma$ detection level, we use the latter in calculating the color in cases (1) and (2). (Note the $^*$ superscript for the magnitudes; the measurements reported here are based on a preliminary calibration of the SDSS photometric system.) In addition, an object must be classified as a point source by the SDSS processing software in order to be included as a quasar candidate. We applied these selection criteria to $\sim 200$ square degrees of SDSS imaging data acquired in 1999 March and 2000 February. 
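The three color-space regions and the $5\sigma$-limit substitution rule can be sketched as a small filter (ours, illustrative only; `LIMITS` holds the nominal point-source limits quoted in the text, and the near-infrared constraints discussed below would be applied as a second selection stage):

```python
# Nominal 5-sigma point-source limits quoted in the text (AB magnitudes)
LIMITS = {'u': 22.3, 'g': 23.3, 'r': 23.1, 'i': 22.3, 'z': 20.8}

def is_candidate(mags):
    """Apply the three color cuts to a dict of SDSS magnitudes.

    r* and i* magnitudes fainter than the 5-sigma limit are replaced by
    the limit before computing colors, as described for cases (1) and (2).
    """
    r = min(mags['r'], LIMITS['r'])
    i = min(mags['i'], LIMITS['i'])
    ri, iz = r - i, i - mags['z']
    region1 = ri > 1.35 and iz < 0.3
    region2 = ri > 2.0 and iz < 0.7
    # Region (3): z'-band detection only, i.e. z* < 20.8 while the other
    # four bands are fainter than their 5-sigma limits.
    region3 = mags['z'] < LIMITS['z'] and all(mags[b] > LIMITS[b] for b in 'ugri')
    return region1 or region2 or region3

# Example: an object with red r*-i* and nearly flat i*-z* passes region (1)
print(is_candidate({'u': 24.0, 'g': 24.0, 'r': 22.0, 'i': 20.3, 'z': 20.1}))  # True
```

This reproduces only the color logic; the point-source requirement and the preliminary photometric calibration are outside the scope of the sketch.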
The selection criteria used here differ from those employed by Fan et al. (1999, 2000) to identify $z > 4$ quasars. Whereas Fan et al. required $i^* < 20.2$, candidates in this paper can be undetected in the $i'$ band. The flux measurements for this paper’s candidates often have lower signal-to-noise ratios, and a sample drawn from these selection criteria will naturally be more susceptible to contamination from non-quasars than those in Fan et al. The color cuts in this paper are closer to the stellar locus than the Fan et al. criteria; this will also result in a reduction in the selection efficiency. We obtained IR photometry of quasar candidates in the $J$ and $K$ bands at the NASA Infrared Telescope Facility (IRTF) at Mauna Kea, Hawaii. The observations were taken on 2000 March 9-12 using the NSFCam equipped with a 256$\times$256 InSb array. The plate scale was 0.30$''$ pixel$^{-1}$, and the seeing was $\sim$1.2$''$ in the $K$-band. Each selected target was imaged with a standard dithering technique with total exposure times of 7 min and 5 min in $J$ and $K$, respectively. Images of IR standard stars were taken throughout the observations to monitor the magnitude zero points. Typical photometric errors are in the 0.05-0.1 magnitude range, depending on the brightness of the objects; see [@zt00] for details of the observations and data calibration. We observed $\sim 20$ known SDSS quasars with redshifts greater than 4 to empirically calibrate our selection technique. Our additional color constraints are $z^* - J < 1.5$ and $J - K < 1.8$. Approximately ten such objects whose colors closely resembled those of known quasars were selected. Note that four of the five quasars are at redshifts smaller than 5, and they can be selected from the SDSS data alone ([@fan99], 2000). 
Spectroscopic Observations ========================== Spectroscopic follow-up observations of SDSS high-redshift quasar candidates were carried out in 2000 February and March, with the Digital Imaging Spectrograph (DIS) of the APO 3.5m telescope. The DIS is a double spectrograph; for high-redshift quasars, only the red part of the low-resolution spectrum, covering the wavelength range 5400 Å to 10000 Å at 13 Å resolution, contained any useful signal. The exposure time for each object was 30 minutes; even with a limited signal-to-noise ratio in the spectra, the redshift identifications, based on the strong, asymmetric Ly$\alpha$ emission line and absorption produced by the Ly$\alpha$ forest, are unambiguous. The observations in February were made before the IRTF run. Of the six candidates, only one, SDSS 1129$-$0142, turned out to be a quasar. The J2000 coordinates are given in the object name (for the format, see [@fan99]); for brevity, we have shortened the names to SDSS hhmm+ddmm throughout the text. In late March we observed five SDSS/IR candidates, and two, SDSS 1021$-$0309 and SDSS 1208+0010, are quasars. After the initial identification with the APO data, additional spectroscopic observations of the three quasars were obtained in 2000 April with the Low Resolution Spectrograph (LRS; [@hill00], [@HET]) at the prime focus of the Hobby-Eberly Telescope ([@lwr98]) at McDonald Observatory. The LRS configuration of a 2$''$ slit, 300 line mm$^{-1}$ grating, and OG515 blocking filter produced spectra with a wavelength coverage of 5150 Å to 10,150 Å at a resolution of 20 Å. The exposure times were typically 30 minutes. Two additional SDSS/IR quasar candidates were observed with the Keck 10m telescope on April 5-6, and both of them, SDSS 1451$-$0104 and SDSS 1122$-$0229, are quasars. The spectra were taken with the Echellette spectrograph and imager (ESI, [@esi]) on the Keck Observatory 10-m telescope. 
The ESI was used in high dispersion mode, which covers the wavelength range of 4000 to 11000 Å with 11  resolution. The quasars were viewed through a 1$''$-wide slit oriented at the parallactic angle. The exposure time was 20 minutes each. The $z=5.27$ quasar, SDSS 1208+0010, was also observed for 30 minutes. The spectra were flux calibrated relative to the standard star G191B2B. The spectra were extracted and reduced using standard IRAF programs, and binned to 3.85 Å. Only one of the five “non-infrared" candidates, observed in February, is a quasar. Of the seven SDSS/IR candidates observed, four are quasars, while the others are late-type stars. This result tentatively suggests that IR selection may significantly improve the selection efficiency, but clearly a larger sample is needed to confirm this conclusion. Table 1 lists the optical/IR photometric measurements. The quasar spectra are displayed in Fig. 2, with prominent spectral features marked. All the spectra have been placed on an absolute flux scale by matching the $i^*$ magnitudes in Table 1 with the $i^*$ magnitudes synthesized from the spectrum. The Keck and HET spectra of SDSS 1208+0010 reveal the  emission at $\sim 9500$ Å, which is not clear in the data taken at APO. Discussion ========== Table 2 contains the redshift (see notes below for the specifics of the measurements for each object), AB$_{1450}$ (the AB magnitude of the quasar at 1450 Å in the rest frame, corrected for Galactic absorption), the power-law index of the continuum, the continuum depression due to absorption in the Ly$\alpha$ and Ly$\beta$ regions ($D_A$ and $D_B$; see [@OK82]), and the absolute $B$ magnitude of the quasars. The Galactic extinction is calculated using the reddening map of [@Schlegel98]. The luminosities for the quasars were calculated assuming that the continuum power-law slope from the far ultraviolet to the optical was $-0.5$ ([@ssg]). 
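The continuum depressions $D_A$ and $D_B$ reported in Table 2 are mean fractional depressions of the observed flux below the extrapolated continuum. The following is a minimal sketch of such a measurement, assuming a continuum already fit longward of Ly$\alpha$; the rest-frame windows follow the conventional definitions of [@OK82], and the function and arrays are illustrative rather than the pipeline actually used here:

```python
def continuum_depression(rest_wave, flux, continuum, window=(1050.0, 1170.0)):
    """Mean of 1 - f_obs/f_cont over a rest-frame wavelength window (in Å).
    The default window gives the Lyman-alpha forest depression D_A;
    window=(920.0, 1015.0) gives D_B."""
    lo, hi = window
    ratios = [1.0 - f / c
              for w, f, c in zip(rest_wave, flux, continuum)
              if lo <= w <= hi]
    return sum(ratios) / len(ratios)

# Synthetic example: flat continuum of 1.0, half the flux absorbed
# at the two pixels inside the D_A window.
wave = [1000.0, 1100.0, 1150.0, 1250.0]
flux = [0.2, 0.5, 0.5, 1.0]
print(continuum_depression(wave, flux, [1.0] * 4))  # -> 0.5
```

A real measurement would of course use the full binned spectrum and the fitted power-law continuum rather than these toy arrays.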
The individual power-law slopes, and hence the depression estimates, are quite uncertain given the limited baseline available in the spectra and the presence of BAL features. All are moderately luminous quasars, and the spectra of three contain BAL features. Each spectrum contains at least one excellent candidate for a damped Ly$\alpha$ system, and there is no detected flux below the rest-frame Lyman limit in any of the quasars. As shown in Figure 2, the extent of intergalactic absorption shortward of the redshifted Ly$\alpha$ emission increases from $z=4.67$ to $z=5.27$. This is reflected in the continuum depression values in Table 2. However, residual flux can be seen in all the spectra, particularly around the [O[vi]{}]{}+Ly$\beta$ feature, suggesting that the intergalactic medium is not completely opaque. Our measurements are consistent with the known distributions of the Ly$\alpha$ forest absorption as derived at lower redshifts ([@press93]). The emission features in these quasars are common among AGN at lower redshift. Three of the quasars exhibit significant BAL features. Finding charts for the five quasars are given in Fig. 1. Notes on the quasars: [**SDSS 1021$-$0309**]{} ($z$ = 4.70): The sharp split of the Ly$\alpha$ and [N[v]{}]{} lines may suggest unusually strong [N[v]{}]{} emission. A simple fit to a 300 Å-wide region longward of the Ly$\alpha$ cutoff yields an [N[v]{}]{}/Ly$\alpha$ ratio of 0.64 for the narrow features. If the split is a result of [N[v]{}]{} absorption, the corresponding wavelength for the [C[iv]{}]{} absorption counterpart should be centered at $\sim 8710$ Å. While a significant [C[iv]{}]{} BAL trough is present in the spectrum, it is centered at $\sim 8560$ Å. The redshifts determined from the [O[i]{}]{}, [Si[iv]{}+O[iv]{}\]]{}, and [C[iv]{}]{} emission lines are consistent with each other at the 0.005 level. The Ly$\alpha$ and [N[v]{}]{} emission lines show redshifts that are consistent (with slightly larger errors) with the other lines. [**SDSS 1122$-$0229**]{} ($z$ = 4.80): This object has the strongest emission lines among the five. 
The measurements of the Ly$\alpha$, [O[i]{}]{}, [N[v]{}]{}, and [C[iv]{}]{} emission yield a redshift of $z = 4.80 \pm 0.03$. The Ly$\alpha$ absorption cutoff is not as sharp as the others, stretching $\sim 60$ Å with several narrow absorption troughs. [**SDSS 1129$-$0142**]{} ($z$ = 4.85): This quasar is slightly more luminous than 3C 273, which in the adopted cosmology has $M_B = -27.0$. It displays a number of spectacular BAL features; because of this, many of the properties given in Table 2 contain large uncertainties. Strong, broad troughs of [C[iv]{}]{} and [Si[iv]{}+O[iv]{}\]]{} dominate the spectrum, and there is a suggestion of the presence of [O[i]{}]{} and [N[v]{}]{} absorption features. The redshift is based on assigning a rest wavelength of 1219 Å to the peak of the Ly$\alpha$ emission line (see [@ssg]). The observed values of the continuum depression are exceptionally large; this almost certainly arises from significant intrinsic absorption. [**SDSS 1208+0010**]{} ($z$ = 5.27): Very strong, relatively narrow Ly$\alpha$ and [N[v]{}]{} emission dominate the spectrum; this spectrum bears an uncanny resemblance to that of the $z = 4.04$ quasar ([@ssg87]). The [N[v]{}]{}, [O[i]{}]{}, and [C[iv]{}]{} lines yield a consistent redshift; the peak of the Ly$\alpha$ feature occurs at 1218 Å, typical for quasars at redshifts above four ([@ssg]). The depression due to the Ly$\alpha$ forest (see Table 2) is quite large, but not unusual for this redshift, suggesting that there is not a dramatic change in the character of the Ly$\alpha$ forest from lower redshifts to at least $z \sim 5.2$. [**SDSS 1451$-$0104**]{} ($z$ = 4.67): The redshifts determined from the peaks of [C[iv]{}]{} and [Si[iv]{}+O[iv]{}\]]{} match that of the Ly$\alpha$ edge within 0.005. A significant, broad absorption trough is present between $\sim 8250 - 8550$ Å. The discovery of these quasars once again demonstrates the ability of the SDSS to effectively identify $z>4.6$ quasars, and extends the SDSS redshift range to well beyond five. 
By supplementing the SDSS measurements with $J$ and $K$ photometry, we have been able to efficiently identify (success rate of $\approx$ 50%) high-redshift quasars in magnitude/color space regions that are fainter and closer to the stellar locus than those presented in Fan et al. (1999, 2000); it is likely that IR photometry will be a valuable tool in the search for faint, $z > 5$ quasars. Our IR photometry is only a test, and the results do not constitute a complete sample. To date the SDSS has imaged but a few percent of the planned survey area; based on the results to date, the complete survey should contain well over a hundred $z>4.7$ quasars found with well-defined selection criteria. The Sloan Digital Sky Survey (SDSS) is a joint project of The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Max-Planck-Institut für Astronomie, Princeton University, the United States Naval Observatory, and the University of Washington. Apache Point Observatory, site of the SDSS, is operated by the Astrophysical Research Consortium. Funding for the project has been provided by the Alfred P. Sloan Foundation, the SDSS member institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, and Monbusho, Japan. The SDSS Web site is http://www.sdss.org/. We would like to thank Russet McMillan (APO), William Golisch (IRTF), Bob Goodrich, Terry McDonald (Keck), Grant Hill and Matthew Shetrone (HET) for their assistance with the observations. David Weinberg provided a number of comments that improved the paper. This work is partially supported by NASA Long Term Space Astrophysics grant NAGW-4443 to the Johns Hopkins University (WZ and ZT), and by NSF grant AST99-00703 (DPS).
Chen, H-W., Lanzetta, K. M., & Pascarelle, S. 1999, Nature, 398, 586
Djorgovski, S.G., Odewahn, S.C., Gal, R.R., Brunner, R., de Carvalho, R. R., Longo, G., & Scaramella, R. 1999, , 31, 1235
Epps, H.W., & Miller, J.S. 1998, in Proc. SPIE 3355, Optical Astronomical Instrumentation, ed. S. D’Odorico (Bellingham, WA: SPIE), 48
Fan, X. 1999, AJ, 117, 2528
Fan, X., et al. 1999, AJ, 118, 1
Fan, X., et al. 2000, AJ, 119, 1
Francis, P. J., Hewett, P. C., Foltz, C. B., Chaffee, F. H., & Weymann, R. J. 1991, , 373, 465
Fukugita, M., Ichikawa, T., Gunn, J.E., Doi, M., Shimasaku, K., & Schneider, D. P. 1996, AJ, 111, 1748
Gunn, J. E., et al. 1998, AJ, 116, 3040
Haiman, Z., & Loeb, A. 1998, , 503, 505
Hill, G.J., et al. 2000, in preparation
Hu, E. M., Cowie, L. L., & McMahon, R. G. 1998, , 502, L99
Hu, E. M., McMahon, R. G., & Cowie, L. L. 1999, , 522, L9
Irwin, M.J., McMahon, R.G., & Hazard, C. 1991, in The Space Distribution of Quasars, ed. D. Crampton (San Francisco: ASP), 117
Kennefick, J.D., Djorgovski, S.G., & de Carvalho, R.R. 1995, AJ, 110, 2553
Lanzetta, K. M., Chen, H-W., Fernández-Soto, A., Pascarelle, S., Puetter, R., Yahata, N., & Yahil, A. 1999, astro-ph/9907281
Leggett, S. K., Allard, F., Berriman, G., Dahn, C. C., & Hauschildt, P. H. 1996, , 104, 117
Leggett, S. K., et al. 2000, , in press
Lupton, R.H., Gunn, J.E., & Szalay, A. 1999, AJ, 118, 1406
Oke, J.B., & Korycansky, D.G. 1982, ApJ, 255, 11
Press, W. H., & Rybicki, G. B. 1993, , 418, 585
Ramsey, L.W., et al. 1998, in Proc. SPIE 3352, Advanced Technology Optical/IR Telescopes VI, ed. L.M. Stepp (Bellingham, WA: SPIE), 34
Schlegel, D.J., Finkbeiner, D.P., & Davis, M. 1998, ApJ, 500, 525
Schmidt, M. 1963, Nature, 197, 1040
Schmidt, M., Schneider, D.P., & Gunn, J.E. 1987, ApJL, 321, 7
Schmidt, M., Schneider, D.P., & Gunn, J.E. 1995, AJ, 110, 68
Schneider, D.P., Schmidt, M., & Gunn, J.E. 1991, AJ, 101, 2004
Schneider, D.P., et al. 2000, PASP, 112, 6
Stern, D., Spinrad, H., Eisenhardt, P., Bunker, A. J., Dawson, S., Stanford, S. A., & Elston, R. 2000, , 533, L75
Tsvetanov, Z., et al. 2000, in preparation
Warren, S. J., Hewett, P. C., & Osmer, P. S. 1991, ApJS, 76, 23
Warren, S. J., Hewett, P. C., & Osmer, P. S. 1994, ApJ, 421, 412
Weymann, R. J., Stern, D., Bunker, A., Spinrad, H., Chaffee, F. H., Thompson, R. I., & Storrie-Lombardi, L. 1998, , 505, L95
York, D., et al. 2000, , in press
Zheng, W., Kriss, G.A., Telfer, R.C., Grimes, J.P., & Davidsen, A.F. 1997, ApJ, 475, 469

Table 1 (optical/IR photometry; the line below each object lists the $\pm$ uncertainties of the seven magnitudes):

SDSSp J102119.16$-$030937.2 & 23.72 & 25.82 & 21.76 & 20.09 & 20.02 & 18.77 & 17.08 & 0.042
& $\pm 0.55$ & $\pm 0.47$ & $\pm 0.09$ & $\pm 0.03$ & $\pm 0.10$ & $\pm 0.07$ & $\pm 0.05$ &
SDSSp J112242.98$-$022905.1 & 23.73 & 24.77 & 22.22 & 20.38 & 20.47 & $\geq 19.5$ & — & 0.055
& $\pm 0.57$ & $\pm 0.50$ & $\pm 0.12$ & $\pm 0.04$ & $\pm 0.15$ & $\pm 0.10$ & — &
SDSSp J112956.10$-$014212.4 & 24.06 & 25.26 & 22.02 & 19.64 & 19.51 & 17.51 & 16.09 & 0.072
& $\pm 0.55$ & $\pm 0.37$ & $\pm 0.10$ & $\pm 0.03$ & $\pm 0.07$ & $\pm 0.05$ & $\pm 0.05$ &
SDSSp J120823.82+001027.7 & 24.39 & 24.77 & 22.75 & 20.79 & 20.72 & 19.43 & 18.10 & 0.024
& $\pm 0.65$ & $\pm 0.31$ & $\pm 0.08$ & $\pm 0.27$ & $\pm 0.35$ & $\pm 0.10$ & $\pm 0.10$ &
SDSSp J145118.77$-$010446.2 & 24.40 & 24.45 & 22.65 & 20.70 & 20.53 & 19.43 & 18.17 & 0.044
& $\pm 0.45$ & $\pm 0.47$ & $\pm 0.16$ & $\pm 0.04$ & $\pm 0.13$ & $\pm 0.10$ & $\pm 0.10$ &

Table 2 (quasar; redshift; AB$_{1450}$; continuum power-law index; $D_A$; $D_B$; $M_B$):

SDSSp J102119.16$-$030937.2 & 4.696 $\pm$ 0.004 & 20.25 & $-1.6$ & 0.58 & 0.76 & $-26.3$
SDSSp J112242.98$-$022905.1 & 4.795 $\pm$ 0.004 & 20.58 & $-1.2$ & 0.64 & 0.79 & $-26.0$
SDSSp J112956.10$-$014212.4 & 4.85 $\pm$ 0.03 & 19.22 & $-1.3$ & 0.84 & 0.97 & $-27.4$
SDSSp J120823.82+001027.7 & 5.273 $\pm$ 0.004 & 20.47 & $-0.7$ & 0.71 & 0.81 & $-26.3$
SDSSp J145118.77$-$010446.2 & 4.672 $\pm$ 0.004 & 20.42 & $-0.7$ & 0.71 & 0.86 & $-26.2$
--- abstract: 'Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method, while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, population annealing initially converges to equilibrium more rapidly than parallel tempering for the same amount of computational work. However, parallel tempering converges exponentially in the computational work while population annealing converges only inversely, so that ultimately parallel tempering approaches equilibrium more rapidly than population annealing.' author: - 'J. Machta' - 'R. S. Ellis' title: 'Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering' --- Introduction {#sec:intro} ============ Equilibrium systems with rough free energy landscapes, such as spin glasses, configurational glasses and proteins, are difficult to simulate using conventional Monte Carlo methods because the simulation tends to be trapped in metastable states and fails to explore the full configuration space. A number of techniques have been proposed to overcome this problem. Some of these techniques involve simulating an extended state space that includes many temperatures [@Okamoto04; @NaHa07a]. Multicanonical simulations, umbrella sampling, the Wang-Landau method, simulated tempering and parallel tempering all fall into this class. Parallel tempering [@SwWa86; @Geyer91; @HuNe96; @EaDe05], also known as replica exchange Monte Carlo, is perhaps the most widely used of these methods because it is simple to program and performs well in many settings. 
It is the standard method for simulating spin glasses [@KaKoYo06; @BaCrFe10] and is used for protein folding [@Ha97; @ScHeVeWe05] and lattice gauge theory [@BuFuKeMu07]. Parallel tempering and the other members of its class are all Markov chain Monte Carlo methods. In Markov chain Monte Carlo, the target distribution is approached via repeated application of an elementary process that typically satisfies detailed balance with respect to the target distribution. In the case of parallel tempering, the target distribution is a joint distribution whose marginals are equilibrium distributions for a set of temperatures. In sequential Monte Carlo, by contrast, the target distribution is the last member of a sequence of distributions, each of which is visited once. The initial distribution is easy to equilibrate, and a resampling step transforms one distribution to the next in the sequence. Population annealing [@HuIb03; @Mac10a] is a sequential Monte Carlo algorithm in which the distributions in the sequence are equilibrium distributions at decreasing temperatures. In this paper we describe both parallel tempering and population annealing and compare their efficiency and convergence properties in the context of a simple, tractable free energy landscape composed of two wells separated by a high barrier. Although this free energy landscape is highly simplified compared to the landscapes of more realistic models, we believe that it captures some of the essential features of rough free energy landscapes and that the lessons learned from this analysis will be useful in understanding and improving the performance of both parallel tempering and population annealing in realistic settings. In Ref. [@Mac09a] we analyzed the performance of parallel tempering for this landscape. Although they are based on quite different strategies, parallel tempering (PT) and population annealing (PA) share a number of common features. 
Both are methods that build on a conventional Markov chain Monte Carlo procedure whose stationary distribution is a fixed temperature equilibrium ensemble, such as the Metropolis algorithm or Glauber dynamics. We refer to this procedure as the [*equilibrating subroutine*]{}. At sufficiently high temperature the equilibrating subroutine converges rapidly to the equilibrium ensemble. Both PT and PA take advantage of this rapid equilibration at high temperature to accelerate the convergence to equilibrium at lower temperatures. Both PT and PA attempt to transform equilibrium high temperature configurations into equilibrium low temperature configurations through a sequence of temperature steps such that the system remains close to equilibrium. In PT there is a single replica of the system at each temperature in the sequence, and replicas are allowed to move between temperatures via replica exchange. These replica exchange moves are carried out with acceptance probabilities that satisfy detailed balance so that the entire set of replicas tends toward equilibrium at their respective temperatures. Population annealing is closely related to simulated annealing [@KiGeVe83; @LaAa87]. In simulated annealing a single realization of the system is cooled from high to low temperature following an [*annealing schedule*]{}. After each temperature step the system is out of equilibrium and the equilibrating subroutine is used to move it back toward equilibrium. However, at low temperatures, the equilibrating subroutine is unable to facilitate transitions between different minima of the free energy landscape, and simulated annealing falls out of equilibrium if the weights associated with the free energy minima vary with temperature, as is typically the case. Thus simulated annealing cannot be used to sample equilibrium ensembles, and its primary use is to find ground states. 
Population annealing solves this problem by simultaneously cooling a population of replicas of the system through a sequence of temperatures. Each temperature step is associated with a resampling of the population so that some replicas are copied and other replicas are destroyed in such a way that the replicas are correctly weighted in the colder ensemble. In this way, at least for large populations, the population remains close to equilibrium as the system is cooled. The resampling step in population annealing is similar to methods used in diffusion Monte Carlo [@Anderson75] and the “go with the winner" strategy [@Grass2002]. Sequential Monte Carlo methods [@DoFrGo01], of which population annealing is an example, are not well known in statistical physics but have been widely applied in statistics and finance. One purpose of this paper is to bring this general method to the attention of computational statistical physicists. We argue that PA may have an important role to play in simulations of systems with rough free energy landscapes and has some advantages over parallel tempering, especially in situations where a moderately accurate result is required quickly and parallel computing resources are available. The outline of the paper is as follows. We describe population annealing in Sec. \[sec:pa\] and parallel tempering in Sec. \[sec:pt\]. Section \[sec:dw\] introduces the two-well free energy landscape, and Sec. \[sec:padw\] analyzes the performance of population annealing in this landscape. Section \[sec:pavpt\] compares the performance of population annealing and parallel tempering, and the conclusions of this section are supported by numerical results presented in Sec. \[sec:nr\]. Section \[sec:disc\] concludes the paper with a discussion. Population Annealing {#sec:pa} ==================== The population annealing algorithm operates on a population of $R$ replicas of the system. For disordered spin systems, each replica has the same set of couplings. 
The algorithm consists of cooling the population of replicas through an annealing schedule from a high temperature, where equilibrium states are easily sampled, to a low target temperature, where the equilibrating subroutine cannot by itself feasibly equilibrate the system. The annealing schedule is defined by a set of ${S}+1$ inverse temperatures, $$\label{eq:repbb} {\beta}_0 > {\beta}_1 > \ldots > {\beta}_{{S}}.$$ The highest temperature, $1/{\beta}_{{S}}$, is chosen to be a temperature for which it is easy to equilibrate the system. It is often convenient to choose ${\beta}_{{S}}=0$ as this facilitates the calculation of the absolute free energy of the system at each temperature in the annealing schedule. In each temperature step the population is resampled and the equilibrating subroutine is applied to every replica at the new temperature. The first part of a temperature step from ${\beta}$ to ${\beta^\prime}$ is resampling the population so that lower energy replicas are multiplied while higher energy replicas are eliminated from the population. Suppose that the population is in equilibrium at ${\beta}$; the relative weight of a replica ${j}$ with energy $E_{j}$ at inverse temperature ${\beta^\prime}$ is given by $\exp\left[ -({\beta^\prime}-{\beta})E_{j}\right]$. Thus, the expected number of copies of replica ${j}$ that appear in the resampled population at ${\beta^\prime}$ is $$\label{eq:wn} {\rho}_{j}({\beta},{\beta^\prime})=\frac{\exp\left[-({\beta^\prime}-{\beta})E_{{j}}\right]}{{Q}({\beta},{\beta^\prime})},$$ where ${Q}$ is the normalization given by $$\label{eq:Q} {Q}({\beta},{\beta^\prime})=\frac{\sum_{j=1}^{R} \exp\left[-({\beta^\prime}-{\beta})E_{{j}}\right]}{R} .$$ The new population of replicas is generated by resampling the original population such that the expected number of copies of replica ${j}$ is ${\rho}_{j}$. 
The actual number of copies $n_1,n_2,\ldots,n_R$ of each replica in the new population is given by the multinomial distribution $p\left[R; n_1,\dots,n_R; {\rho}_1/R, \ldots, {\rho}_R/R \right]$ for $R$ trials. In this implementation the population size is fixed. Other valid resampling methods are available. For example, the number of copies of replica ${j}$ can be chosen as a Poisson random variable with mean proportional to ${\rho}_{j}(\beta,\beta^\prime)$, in which case the population size fluctuates [@Mac10a]. For large $R$ and small $({\beta^\prime}-{\beta})$, the resampled distribution is close to an equilibrium ensemble at the new temperature, ${\beta^\prime}$. However, the regions of the equilibrium distribution for ${\beta^\prime}$ that differ significantly from the equilibrium distribution for ${\beta}$ are not well sampled, leading to biases in the population at ${\beta^\prime}$. In addition, due to resampling, the replicas are no longer independent. To mitigate both of these problems, the equilibrating subroutine is now applied. Finally, observables are measured by averaging over the population. The entire algorithm consists of ${S}$ steps: in step $k$ the temperature is lowered from ${\beta}_{{S}-k+1}$ to ${\beta}_{{S}-k}$ via resampling followed by the application of the equilibrating subroutine and data collection at temperature ${\beta}_{{S}-k}$. Population annealing permits one to estimate free energy differences. If the annealing schedule begins at infinite temperature corresponding to $\beta_{{S}}=0$, then it yields an estimate of the absolute free energy ${{\tilde F}}({\beta}_k)$ at every temperature in the annealing schedule. 
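The resampling step described above can be sketched in a few lines. The following is a minimal illustration using multinomial resampling at fixed population size; the function name is ours, and replicas are represented only by their energies:

```python
import math
import random

def pa_resample(energies, beta, beta_prime, rng):
    """One population-annealing resampling step from beta to beta_prime.

    Returns the resampled energies and the normalization Q of Eq. (Q),
    Q = (1/R) sum_j exp[-(beta_prime - beta) E_j], so that the expected
    number of copies of replica j is rho_j = exp[-(beta_prime - beta) E_j]/Q.
    """
    R = len(energies)
    weights = [math.exp(-(beta_prime - beta) * e) for e in energies]
    Q = sum(weights) / R
    # Multinomial resampling: R independent draws with probability rho_j / R.
    picks = rng.choices(range(R), weights=weights, k=R)
    return [energies[j] for j in picks], Q

rng = random.Random(0)
new_pop, Q = pa_resample([2.0, 2.0, 2.0, 2.0], beta=1.0, beta_prime=1.5, rng=rng)
print(len(new_pop), round(Q, 4))  # -> 4 0.3679  (Q = exp(-0.5 * 2.0))
```

In a full implementation the equilibrating subroutine would be applied to every replica of `new_pop` before the next temperature step, as described in the text.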
The following calculation shows that the normalization factor ${Q}(\beta,\beta^\prime)$ is an estimator of the ratio of the partition functions at the two temperatures: $$\begin{aligned} \label{eq:ratioz} \frac{Z(\beta^\prime)}{Z(\beta)} &=& \frac{\sum_\gamma e^{-\beta^\prime E_\gamma}}{Z(\beta)}\nonumber\\ &=& \sum_\gamma e^{-(\beta^\prime-\beta) E_\gamma} ( \frac{e^{-\beta E_\gamma}}{Z(\beta)} ) \nonumber\\ &=& \langle e^{-(\beta^\prime-\beta) E_\gamma} \rangle_\beta \nonumber\\ &\approx& \frac{1}{R} \sum_{j=1}^{R} e^{-(\beta^\prime-\beta) E_{{j}}}={Q}(\beta,\beta^\prime) .\end{aligned}$$ The summation over $\gamma$ is a sum over the microstates of the system while the sum over $j$ is a sum over the population of replicas in PA. The last approximate equality becomes exact in the limit $R \rightarrow \infty$. From Eq. \[eq:ratioz\] the estimated free energy difference from ${\beta}$ to ${\beta^\prime}$ is found to be $$\label{eq:freediff} -{\beta^\prime}{{\tilde F}}({\beta^\prime}) = -{\beta}F({\beta}) + \log {Q}({\beta},{\beta^\prime}) ,$$ where $F({\beta})$ is the free energy at ${\beta}$ and ${{\tilde F}}$ is the estimated free energy at ${\beta^\prime}$. Given these free energy differences, if ${\beta}_{{S}}=0$, then the PA estimator of the absolute free energy at each simulated temperature is $$\label{eq:sumQ} -\beta_k {{\tilde F}}(\beta_k) = \sum_{\ell=k+1}^{{S}} \log {Q}(\beta_{\ell},\beta_{\ell-1}) + \log \Omega ,$$ where $\Omega=\sum_\gamma 1$ is the total number of microstates of the system; i.e. , $k_B \log \Omega$ is the infinite temperature entropy. 
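The telescoping sum of Eq. \[eq:sumQ\] is simple bookkeeping; below is a sketch, assuming the $\log {Q}$ factors are supplied in order starting from the step that leaves ${\beta}_{{S}}$ (the function name and toy numbers are ours):

```python
import math

def free_energy_series(log_Qs, log_omega):
    """Accumulate -beta_k * F(beta_k) via Eq. (sumQ):
    -beta_k F = sum over the steps taken so far of log Q, plus log Omega.
    log_Qs[0] is the step leaving beta_S; the returned list runs from
    k = S (just log Omega, the infinite-temperature entropy in units
    of k_B) down to k = 0."""
    series = [log_omega]
    for lq in log_Qs:
        series.append(series[-1] + lq)
    return series

# Toy example: Omega = 2 microstates and two annealing steps.
print(free_energy_series([0.5, 0.25], math.log(2.0)))
```

Each entry is the running sum of the measured $\log {Q}$ factors, so the statistical errors of the individual steps accumulate down the schedule.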
Parallel Tempering {#sec:pt} ================== Parallel tempering, also known as replica exchange Monte Carlo, simultaneously equilibrates a set of $R$ replicas of a system at ${S}$ inverse temperatures $$\label{eq:repbb} {\beta}_0 > {\beta}_1 > \ldots > {\beta}_{{S}-1} .$$ There is one replica at each temperature so that $R={S}$ in contrast to population annealing, where typically the number $R$ of replicas greatly exceeds the number ${S}$ of temperatures; i.e., $R\gg {S}$. The equilibrating subroutine operates on each replica at its respective temperature. Replica exchange moves are implemented that allow replicas to diffuse in temperature space. The first step in a replica exchange move is to propose a pair of replicas $(k,k-1)$ at neighboring temperatures ${\beta}={\beta}_k$ and ${\beta^\prime}={\beta}_{k-1}$. The probability for accepting the replica exchange move is $$\label{eq:reprob} {p_{\rm swap}}=\min\left[1, e^{({\beta}-{\beta^\prime})(E-E^\prime)}\right].$$ Here $E$ and $E^\prime$ are the respective energies of the replicas that were originally at ${\beta}$ and ${\beta}^\prime$. If the move is accepted, the replica equilibrating at ${\beta}$ is now set to equilibrate at ${\beta^\prime}$ and vice versa. Equation \[eq:reprob\] ensures detailed balance so that the Markov chain defined by parallel tempering converges to a joint distribution whose marginals are equilibrium distributions at the ${S}$ temperatures of Eq. \[eq:repbb\]. Diffusion of replicas in temperature space allows round trips from low to high temperature and back. The benefit of these round trips is that free energy barriers are crossed in a time that grows as a power of the barrier height [@Mac09a] rather than exponentially with respect to the barrier height as is the case for most single temperature dynamics. Optimization schemes for PT depend in part on adjusting parameters to maximize the rate of making round trips [@KaTrHuTr06; @TrTrHa06; @BiNuJa08]. 
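The replica exchange move of Eq. \[eq:reprob\] is equally compact in code. A minimal sketch (names are ours; replicas are again represented only by their energies):

```python
import math
import random

def swap_probability(beta, beta_prime, E, E_prime):
    """Acceptance probability of Eq. (reprob) for replicas currently
    at inverse temperatures beta and beta_prime with energies E, E_prime."""
    return min(1.0, math.exp((beta - beta_prime) * (E - E_prime)))

def attempt_swap(betas, energies, k, rng):
    """Propose exchanging the replicas at betas[k] and betas[k-1];
    on acceptance the two replicas (here, their energies) trade places."""
    p = swap_probability(betas[k], betas[k - 1], energies[k], energies[k - 1])
    if rng.random() < p:
        energies[k], energies[k - 1] = energies[k - 1], energies[k]

# A move that hands the lower energy to the colder temperature (larger beta)
# has a positive exponent and is always accepted:
print(swap_probability(2.0, 1.0, -1.0, -3.0))  # -> 1.0
```

Iterating `attempt_swap` over neighboring pairs, interleaved with the equilibrating subroutine at each temperature, gives the temperature-space diffusion described above.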
Two-Well Model Free Energy Landscape {#sec:dw} ==================================== In this section we describe a simple free energy landscape with two minima such as occurs, for example, in the low temperature phase of the Ising model or $\phi^4$ field theories. This free energy was introduced in Ref. [@Mac09a] in the context of analyzing the efficiency of parallel tempering. For ${\beta}\geq {\beta_c}$ and ${\beta_c}$ a critical temperature, the free energy $F_{\sigma}({\beta})$ associated with each well is defined by $$\label{eq:F} {\beta}F_{\sigma}({\beta}) = -\frac{1}{2}({\beta}-{\beta_c})^2 {\Delta}_{\sigma},$$ where $$\label{eq:diste} {\Delta}^2_{\sigma}= \left\{ \begin{array}{l l} {K}+ H/2 & \quad \mbox{if ${\sigma}=1$}\\ {K}- H/2 & \quad \mbox{if ${\sigma}=0$}\\ \end{array} \right.$$ and ${\sigma}$ labels the well. The deep well corresponds to ${\sigma}=1$ and the shallow well to ${\sigma}=0$. The well index ${\sigma}$ is the only macroscopic parameter in the model and the “landscape” is zero dimensional. However, we also assume that the free energy at the saddle point between the wells is zero so that $F$ is the free energy barrier between the wells. The landscape is flat at ${\beta}={\beta_c}$. The parameter ${K}$ is a proxy for system size. In more realistic systems, barrier heights typically grow as a power of the number of degrees of freedom $N$ of the system. The statistics of the energy of microstates in each well follows from this free energy using thermodynamics. The internal energy $U_{\sigma}({\beta})$ is the average of the energy distribution in well ${\sigma}$ and is obtained from $$\label{eq:U} U_{\sigma}({\beta})= \frac{\partial {\beta}F_{\sigma}}{\partial {\beta}} = -({\beta}-{\beta_c}) {\Delta}^2_{\sigma}.$$ Using the relationship between specific heat and energy fluctuations, we find that the variance of the energy in well ${\sigma}$ is simply ${\Delta}^2_{\sigma}$. 
The free energy also determines the probability ${c}({\beta})$ of being in the deep well according to $$\label{eq:pp} {c}({\beta})={\bf E}({\sigma})=\frac{1}{1+e^{- ({\beta}-{\beta_c})^2 H}} .$$ Microstates of the model are specified by an energy and a well index. We assume that in equilibrium, the distribution of energies, conditioned on the well index ${\sigma}$, is a normal distribution with mean $U_{\sigma}({\beta})$ and variance ${\Delta}^2_{\sigma}$. Thus, the well index ${\sigma}$ is a Bernoulli random variable such that ${\sigma}=1$ with probability ${c}({\beta})$ and ${\sigma}=0$ with probability $1-{c}({\beta})$. The energy $E$ is given by $$\label{eq:diste} E = \left\{ \begin{array}{l l} N(U_1({\beta}),{\Delta}_1^2) & \quad \mbox{if ${\sigma}=1$},\\ N(U_0({\beta}),{\Delta}_0^2) & \quad \mbox{if ${\sigma}=0$},\\ \end{array} \right.$$ where $N(\mu,\Delta^2)$ is a normal random variable with mean $\mu$ and variance $\Delta^2$. The dynamics of the model under the equilibrating subroutine for ${\beta}\geq {\beta_c}$ is assumed to have the following properties. The well index is conserved except at the critical temperature, ${\beta_c}$. That is, there are no transitions between the wells except for ${\beta}={\beta_c}$. On the other hand, for $\beta>{\beta_c}$ the equilibrating subroutine is assumed to equilibrate the system within each well in a single time unit. Thus, the sequence of energies produced by successive steps of the equilibrating subroutine will be i.i.d. normal random variables $N(U_{\sigma}({\beta}),{\Delta}_{\sigma}^2)$, where ${\sigma}$ is the well index. For ${\beta}={\beta_c}$ the equilibrating subroutine first chooses one of the wells with equal probability and then chooses the energy from the associated normal distribution. 
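Because the two-well model is fully specified by these distributions, its equilibrium ensemble can be sampled directly. Below is a sketch, with the parameter values for ${K}$, $H$ and ${\beta_c}$ chosen by us purely for illustration:

```python
import math
import random

K, H, BETA_C = 100.0, 2.0, 1.0  # illustrative parameter values, not the paper's

def deep_well_prob(beta):
    """c(beta) of Eq. (pp): the probability of being in the deep well."""
    return 1.0 / (1.0 + math.exp(-((beta - BETA_C) ** 2) * H))

def sample_state(beta, rng):
    """Draw (sigma, E) from the equilibrium ensemble at beta >= beta_c:
    sigma is Bernoulli with mean c(beta); conditioned on sigma, E is
    normal with mean U_sigma = -(beta - beta_c) * Delta_sigma^2 and
    variance Delta_sigma^2 = K + H/2 (deep well) or K - H/2 (shallow well)."""
    sigma = 1 if rng.random() < deep_well_prob(beta) else 0
    var = K + H / 2 if sigma == 1 else K - H / 2
    mean = -(beta - BETA_C) * var
    return sigma, rng.gauss(mean, math.sqrt(var))

print(deep_well_prob(BETA_C))  # -> 0.5  (the landscape is flat at beta_c)
```

Calling `sample_state` once per time unit, with the well index held fixed away from ${\beta_c}$, reproduces the assumed dynamics of the equilibrating subroutine within a single well.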
Convergence of Population Annealing {#sec:padw} =================================== We first consider a single step of population annealing from inverse temperature ${\beta}$ to inverse temperature ${\beta}^\prime > {\beta}$. We will compute the error made by population annealing in the free energy and the fraction of the population in the deep well as a function of the number of replicas, the size of the temperature step ${\beta}^\prime- {\beta}$, and the parameters ${\beta_c}$, $K$ and $H$ of the two-well model. Let $$Y_{j}=\exp\left[ -(\beta^\prime-\beta)E_{j}+ \lambda {{\sigma}}_{j}\right] ,$$ where $E_{j}$ is the energy and ${{\sigma}}_j$ is the well index of replica ${j}$. Setting $\lambda=0$ yields the un-normalized re-weighting factor (see Eq. \[eq:wn\]) of replica ${j}$ from inverse temperature ${\beta}$ to ${\beta}^\prime$. The extra term $ \lambda {{\sigma}}_{j}$ in the exponent will be used to calculate the probability of being in the deep well. To obtain the error in the free energy and the fraction of replicas in the deep well at temperature ${\beta}^\prime$ assuming the correct equilibrium distribution at ${\beta}$, we compute $I({\beta},{\beta}^\prime,\lambda)$, the expectation of the logarithm of $Y$: $$I({\beta},{\beta}^\prime,\lambda)={\bf E}\log \left(\frac{1}{R}\sum_{j=1}^R Y_j\right) .$$ Using Eq. \[eq:ratioz\], we obtain the PA estimate of the free energy difference by setting $\lambda=0$ in $I({\beta},{\beta}^\prime,\lambda)$: $$\label{eq:bf} {\beta^\prime}{{\tilde F}}({\beta^\prime})-{\beta}F({\beta}) = -I({\beta},{\beta}^\prime,0) = -{\bf E}( \log {Q}({\beta},{\beta^\prime})) .$$ In this equation, ${\beta}^\prime {{\tilde F}}({\beta}^\prime)$ is the estimate of the free energy at ${\beta^\prime}$ given the exact value at ${\beta}$. 
The fraction in the deep well at the lower temperature is obtained by differentiation with respect to $\lambda$: $$\label{eq:cprime} {{\tilde c}}({\beta}^\prime)= \frac{dI({\beta},{\beta}^\prime,\lambda)}{d\lambda}\bigg|_{\lambda=0}.$$ Our goal is to determine how much these estimates deviate from the corresponding exact values. Let $S_n$ be a sum of $n$ independent, identically distributed random variables $X_j$. Generically, one can use Taylor’s Theorem to prove that the leading terms in an asymptotic expansion of the expectation of a function $f(S_n/n)$ have the form $$\label{eq:efy} {\bf E}f(S_n/n) ={\bf E}f(\sum_{j=1}^n X_j/n)= f({\bf E} X) + \frac{1}{2 n} f^{\prime\prime}({\bf E} X) {\bf Var } X +\mbox{O}\!\left(\frac{1}{n^{3/2}}\right) \!,$$ where ${\bf E}(X)$ and ${\bf Var }(X)$ denote the expectation and variance of $X_j$, respectively. Thus, for our case, $$\label{eq:i} I({\beta},{\beta}^\prime,\lambda) = \log({\bf E} Y) - \frac{1}{2 R} \frac{{\bf Var } Y}{({\bf E} Y)^2} +\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! .$$ The first term is the exact result and the second term is the leading order systematic error in the population annealing estimate due to a finite population size. Setting $\lambda=0$, we have $$\label{eq:ef} {\beta^\prime}{{\tilde F}}({\beta^\prime})-{\beta^\prime}F({\beta^\prime})= \frac{1}{2 R} \frac{{\bf Var } Y}{({\bf E} Y)^2} +\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \!$$ This result shows that the systematic error decreases as the inverse of the population size and that the free energy approaches the exact value from above as the number of replicas increases. The variance of the free energy estimator was observed to be a useful measure of the convergence of the algorithm [@Mac10a]. Here we formalize that observation by computing the variance of ${\beta^\prime}{{\tilde F}}({\beta^\prime})$ presuming that ${\beta}F({\beta})$ is exactly known and thus has no variance. 
The variance of the free energy estimator is given by, $$\label{eq:varff} {\bf Var } ({\beta^\prime}{{\tilde F}}({\beta^\prime})) = {\bf E} \left( \log^2 \frac{1}{R} \sum_{j=1}^{R} e^{-(\beta^\prime-\beta) E_{{j}}} \right) - \left( {\bf E} \log \frac{1}{R} \sum_{j=1}^{R} e^{-(\beta^\prime-\beta) E_{{j}}}\right)^2 .$$ Applying Eq. \[eq:efy\] to both $\log$ and $\log^2$ and substituting the results into Eq. \[eq:varff\] yields $$\label{eq:varf} {\bf Var } ({\beta^\prime}{{\tilde F}}({\beta^\prime}) )= \frac{1}{ R} \frac{{\bf Var } Y}{({\bf E} Y)^2} +\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! ,$$ where $\lambda$ is set to zero on the RHS of this equation. Comparing Eqs. \[eq:varf\] and \[eq:ef\], we find that $$\label{eq:varef} {\beta^\prime}{{\tilde F}}({\beta^\prime})-{\beta^\prime}F({\beta^\prime}) = \frac{1}{2}{\bf Var } ({\beta^\prime}{{\tilde F}}({\beta^\prime}) )+\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! .$$ This equation is useful because ${\bf Var } ({\beta^\prime}{{\tilde F}}({\beta^\prime}) )$ can be directly estimated from multiple runs of PA, and thus the accuracy of the algorithm as applied to a specific system can be estimated from the algorithm itself. Although Eq. \[eq:varef\] was derived for a single step of PA for the two-well model, the calculation does not rely on specific features of the model. Furthermore, since the variance is additive we conjecture that Eq. \[eq:varef\] is a good approximation for the full PA algorithm applied to any statistical mechanical system. In support of this conjecture, we note that Eq. \[eq:varef\] is a good approximation for the PA estimate of the low temperature free energy of the one-dimensional Ising spin glass studied in [@Mac09a] for which the exact free energy can be calculated using transfer matrix methods. For the case of the two-well model, we can evaluate the Gaussian integrals exactly for both the mean and variance of the weight factor $Y$. 
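Both Eq. \[eq:ef\] and Eq. \[eq:varef\] can be checked with a short numerical experiment. The sketch below is an illustration rather than the two-well model: it draws i.i.d. Gaussian energies, so that $Y$ is lognormal and both $\log {\bf E}Y$ and ${\bf Var}\,Y/({\bf E}Y)^2$ are known in closed form.

```python
import math, random

def bias_and_variance(delta_beta=1.0, sigma=1.0, R=50, trials=20000, seed=1):
    """Bias and variance of the estimator log((1/R) sum_j Y_j) with
    Y_j = exp(-delta_beta * E_j) and E_j ~ N(0, sigma^2): here Y is
    lognormal, so log E[Y] and Var Y / (E Y)^2 are known exactly."""
    rng = random.Random(seed)
    s2 = (delta_beta * sigma) ** 2
    log_EY = s2 / 2.0                      # log E[Y] for lognormal Y
    rel_var = math.exp(s2) - 1.0           # Var Y / (E Y)^2
    est = []
    for _ in range(trials):
        m = sum(math.exp(-delta_beta * rng.gauss(0.0, sigma))
                for _ in range(R)) / R
        est.append(math.log(m))
    mean_est = sum(est) / trials
    var_est = sum((e - mean_est) ** 2 for e in est) / trials
    bias = log_EY - mean_est               # systematic error of the estimator
    return bias, rel_var / (2.0 * R), half_var
    # note: returns (measured bias, predicted bias, half the empirical variance)

def run_check():
    return bias_and_variance()

bias, predicted, half_var = (lambda b, p, h: (b, p, h))(*(
    lambda: (lambda r: r)(None))() or (0, 0, 0))
```

With the default parameters the measured bias of $\log\frac{1}{R}\sum_j Y_j$ agrees, to within statistical error and $\mbox{O}(R^{-3/2})$ corrections, with both the predicted ${\bf Var}\,Y/(2R({\bf E}Y)^2)$ and half the empirical variance of the estimator, mirroring Eq. \[eq:varef\].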
The general result is $$\begin{aligned} \label{eq:fulli} I({\beta},{\beta}^\prime,\lambda) & = & \frac{{K}}{2} \left[({\beta}^\prime-{\beta_c})^2-({\beta}-{\beta_c})^2\right] \\ && + \log \cosh\left[({\beta}^\prime-{\beta_c})^2 \frac{H}{4}+\lambda/2\right] - \log \cosh\left[({\beta}-{\beta_c})^2 \frac{H}{4}\right] +\lambda/2 \nonumber \\ &&+ \frac{1}{2R}-\frac{1}{2R}\exp\left[ (\beta^\prime-\beta)^2{K}\right]\Big( \frac{\cosh\left[({\beta}-{\beta_c})^2 H/4\right]\cosh\left[(2{\beta}^\prime-{\beta}-{\beta_c})^2 H/4 +\lambda\right] }{\cosh\left[({\beta}^\prime-{\beta_c})^2 H/4 +\lambda/2\right]^2 }\Big) \nonumber \\ &&+\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! \nonumber .\end{aligned}$$ From this general result it is instructive to consider expansions to first order in $H$. The estimate of the free energy difference is $$\begin{aligned} \label{eq:freestep} {\beta^\prime}{{\tilde F}}({\beta^\prime})-{\beta}F({\beta}) &=& -\frac{{K}}{2} \left[({\beta^\prime}-{\beta_c})^2-({\beta}-{\beta_c})^2\right] \\ \nonumber &&+ \frac{1}{2R}\left[\exp(({\beta^\prime}-{\beta})^2 {K})-1\right] +\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! +\mbox{O}\!\left(H^2\right).\end{aligned}$$ The first term on the RHS of this expression is the exact free energy difference in the two-well model for symmetric wells. The second term is the error made by population annealing, which decreases inversely in $R$. The form of the error term also reveals that the size of the temperature steps should be $({\beta^\prime}-{\beta}) \lesssim 1/\sqrt{{K}}$ to keep the error under control as the barrier height increases. The error in the free energy estimate at ${\beta}_0$ for small $H$ can be obtained by summing the errors made in each temperature step of the algorithm. Since the errors depend only on the size of the temperature step, the algorithm is optimized with constant size steps in ${\beta}$. 
Thus the error estimate at ${\beta}_0$ is $$\label{eq:freetotal} {\beta}_0 {{\tilde F}}({\beta}_0)-{\beta}_0 F({\beta}_0) = \frac{{S}}{2R}\left(\exp\left[({\beta}_0-{\beta_c})^2 {K}/{S}^2\right]-1\right)+\mbox{O}\!\left(\frac{1}{R^{3/2}}\right) \! +\mbox{O}\!\left(H^2\right).$$ Next we consider the fraction of the population in the deep well at temperature ${\beta}^\prime$ given the equilibrium value at ${\beta}$. Combining Eq. \[eq:cprime\] and \[eq:fulli\] and expanding to leading order in $H$, we obtain $$\begin{aligned} \label{eq:cr} {{\tilde c}}({\beta}^\prime) &= &\frac{1}{2} +\frac{H ({\beta}^\prime-{\beta_c})^2}{8} \\ \nonumber &&-\frac{H}{8R} (\beta^\prime-\beta)(3\beta^\prime-\beta-2\beta_c) \exp\left[(\beta^\prime-\beta)^2 {K}\right] +\mbox{O}\!\left(\frac{1}{R^{3/2}}\right)+\mbox{O}\!\left(H^2\right).\end{aligned}$$ The first two terms on the RHS of this expression are the leading order in $H$ expansion of the exact value, Eq. \[eq:pp\]. The correction term shows that again the temperature steps should satisfy $({\beta^\prime}-{\beta}) \lesssim 1/\sqrt{{K}}$. From Eq. \[eq:cr\] we could obtain an estimate of the overall error in the fraction in the deep well by summing over the ${S}$ temperature steps. Unfortunately the result significantly underestimates the true error. The reason is that the resampling step introduces correlations between replicas so that the probability distribution for the number of replicas in the deep well has a variance that is broader than that of the binomial distribution assumed in the above analysis. Nonetheless, we conjecture that the leading term in the error made by the full algorithm in the fraction in the deep well, $({{\tilde c}}({\beta}_0) -{c}({\beta}_0))$, behaves as $1/R$ and can be minimized when ${S}\sim \sqrt{{K}}$. The effect of correlations is much less important for the free energy, as evidenced by the absence of a term that is order $H$ in Eq. \[eq:freestep\], and we conjecture that Eq. 
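Eq. \[eq:freetotal\] makes the step-size criterion concrete. The short sketch below evaluates the leading error term as a function of the number of steps $S$ (with illustrative values ${\beta_c}=1$, ${\beta}_0=5$ and $R=10000$, as used in the numerical section): the error falls steeply until $S\approx({\beta}_0-{\beta_c})\sqrt{K}$, i.e. until the step size reaches $\sim 1/\sqrt{K}$, after which returns diminish.

```python
import math

def free_energy_error(S, K, R=10000, beta0=5.0, beta_c=1.0):
    """Leading 1/R error term of the summed free-energy estimate for S
    equal steps in beta, Eq. (freetotal); beta_c, beta_0, R illustrative."""
    return S / (2.0 * R) * (math.exp((beta0 - beta_c) ** 2 * K / S ** 2) - 1.0)

K = 64
knee = int(round((5.0 - 1.0) * math.sqrt(K)))        # S ~ (beta0 - beta_c) sqrt(K) = 32
errors = {S: free_energy_error(S, K) for S in (8, 16, 32, 64, 128)}
# below the knee the error explodes exponentially; above it, it only
# decreases slowly, so S of order sqrt(K) is the natural operating point
```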
\[eq:freetotal\] is exact to leading order. We are currently studying these questions. The two main conclusions from this analysis are that (1) the error decreases inversely with the number of replicas and (2) the error can be made small only if the temperature step size satisfies $({\beta^\prime}-{\beta}) \lesssim 1/\sqrt{{K}}$. Parallel Tempering vs Population Annealing {#sec:pavpt} ========================================== How do PT and PA compare in the efficiency with which they converge to equilibrium? Here we estimate the amount of computational work needed to make the deviation from equilibrium small. The quantities that we increase, holding other parameters fixed, are the number of replicas $R$ for PA and the number of sweeps $t$ for PT. Within the stylized two-well model, we define computational work ${{\cal W}}$ as the total number of times replicas are acted on by the equilibrating subroutine. For PA, with $R$ replicas and ${S}$ temperature steps the work is ${{\cal W}}=R{S}$. For PT with $R$ replicas the computational work is given by ${{\cal W}}=Rt$ where $t$ is the number of PT sweeps. Our measure of computational work ignores the time required to resample the population in PA or implement replica exchange in PT. For large systems, this time is negligible compared to the time spent in the equilibrating subroutine. The quantity ${{\cal W}}$ assigns one unit of time for one sweep of the equilibrating subroutine for a single replica; thus the computational work measured in elementary operations rather than sweeps of the equilibrating subroutine is $N{{\cal W}}$, where $N$ is the number of degrees of freedom of the system. Since $N$ is the same for PA and PT and not explicitly defined in the two-well model, we do not consider this factor explicitly. Suppose we carry out the simulations on a massively parallel computer and consider parallel time instead of sequential time (work).
Since each replica can be independently acted on by the equilibrating subroutine, one parallel time unit is required for one sweep of all the replicas. Thus, for the highly parallel PA, the parallel time is the number of temperature steps ${S}$ whereas for the less parallel PT, the parallel time is the number of PT sweeps $t$. In [@Mac09a] we analyzed the efficiency of PT for the two-well model. As is generally the case for Markov chain Monte Carlo methods, convergence to equilibrium is asymptotically exponential. The deviation from equilibrium is controlled by an exponential autocorrelation time ${\tau_{\rm exp}}$, and its leading behavior is proportional to $\exp(-t/{\tau_{\rm exp}})$, where $t$ is the number of Monte Carlo sweeps. In the two-well model, for small asymmetries between the wells (i.e., $({\beta}_0-{\beta_c})^2 H \leq 1$), ${\tau_{\rm exp}}$ is controlled by the diffusive time scale for a replica to diffuse between the lowest and highest temperature. If the number of temperatures is sufficiently large, then the acceptance fraction for replica exchange is not small and the elementary time step in this diffusive process is order unity so that ${\tau_{\rm exp}}\sim R^2$. The optimum number of replicas was shown to scale as $R={S}+1 \sim \sqrt{{K}}$ and given this choice, ${\tau_{\rm exp}}\sim R^2 \sim {K}$. When the asymmetry becomes large, the optimum number of temperatures remains approximately the same but there is a crossover to a ballistic regime and ${\tau_{\rm exp}}\sim R \sim \sqrt{{K}}$. Convergence to equilibrium occurs on a time scale ${\tau_{\rm exp}}$ so that, in the small-asymmetry diffusive regime, the work ${{\cal W}}_0$ required to begin to achieve moderately accurate results is given by ${{\cal W}}_0 \sim R{\tau_{\rm exp}}\sim {K}^{3/2}$. For PA we found in Sec. \[sec:padw\], Eq. 
\[eq:fulli\] that the error term depends on ${K}$ as $\exp\left[(\beta^\prime-\beta)^2 {K}\right]$ so that the optimum number of temperature steps scales as ${S}\sim \sqrt{{K}}$, just as is the case for PT. Based on the form of Eq. \[eq:cr\], we conjecture that the overall error behaves as $S^a/R$ with $a$ an exponent less than or equal to unity. Thus, the error decreases as ${S}^{1+a}/{{\cal W}}\sim {K}^{(1+a)/2}/{{\cal W}}$, and the computational work ${{\cal W}}_0$ required to begin to achieve moderately accurate results behaves as ${{\cal W}}_0 \sim {K}^{(1+a)/2}$. For large systems (large ${K}$) and nearly degenerate free energy minima, population annealing is expected to be more efficient initially than parallel tempering by a factor of a power of the barrier height, ${K}^{1-a/2}$. However, for large amounts of computational work (i.e., ${{\cal W}}\gg R{\tau_{\rm exp}}\sim {K}^{3/2}$) PT is much closer to equilibrium than PA because PT converges exponentially in $t$ while PA converges inversely in the comparable variable $R$. Numerical Results {#sec:nr} ================= We have carried out simulations of the two-well model using both PT and PA to compare the efficiency of the algorithms and test the conjectures of the previous section. In these simulations, the equilibrating subroutine samples a Gaussian random number with mean and variance appropriate to the temperature and well-index of the replica, Eq. \[eq:diste\]. The well index is a conserved quantity except at ${\beta_c}$. At ${\beta_c}$ the equilibrating subroutine first chooses the well index with equal probability and then chooses the energy according to Eq. \[eq:diste\] with ${\beta}={\beta_c}$ and the given value of ${\sigma}$. 
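For reference, the overall structure of PA as used here can be sketched in a few lines. This is a generic skeleton, not the two-well code used for the figures: the model-specific equilibrating subroutine is stood in for by exact sampling of a toy system with $p(E)\propto e^{-\beta E}$ on $E>0$, for which $Z(\beta)=1/\beta$, so the free-energy estimator can be checked against the exact value ${\beta}_0 F({\beta}_0)-{\beta_c}F({\beta_c})=\log({\beta}_0/{\beta_c})$.

```python
import math, random

def population_annealing(betas, R, seed=0):
    """Generic PA skeleton.  Here 'equilibration' draws fresh independent
    samples of p(E) ~ exp(-beta*E) on E > 0 (Z(beta) = 1/beta), so the toy
    estimate is unbiased by construction; in a real simulation the
    equilibrating subroutine is imperfect and the resampling step matters."""
    rng = random.Random(seed)
    pop = [rng.expovariate(betas[0]) for _ in range(R)]
    delta_betaF = 0.0                  # accumulates beta' F(beta') - beta F(beta)
    for b, b_next in zip(betas, betas[1:]):
        w = [math.exp(-(b_next - b) * E) for E in pop]
        Q = sum(w) / R                 # estimator of Z(b_next)/Z(b)
        delta_betaF += -math.log(Q)
        pop = rng.choices(pop, weights=w, k=R)        # resampling step
        pop = [rng.expovariate(b_next) for _ in pop]  # equilibrate at b_next
    return delta_betaF

betas = [1.0 + 0.2 * k for k in range(21)]   # beta_c = 1 up to beta_0 = 5, S = 20 steps
estimate = population_annealing(betas, R=2000)
exact = math.log(betas[-1] / betas[0])       # = log(beta_0 / beta_c)
```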
Figures \[fig:16\], \[fig:64\] and \[fig:256\] show the convergence to equilibrium of $\gamma$, the deviation from equilibrium of the probability of being in the deep well at the lowest temperature given by $$\gamma={{\tilde c}}({\beta}_0)-{c}({\beta}_0),$$ as a function of the number of sweeps $t$ for PT (small blue points) or population size $R$ for PA (large red points). The horizontal axis therefore measures computational work for both algorithms in the same units. The figures differ according to the value of the well depth ${K}$ and number of temperatures ${S}$ with ${K}=16$, ${S}=11$ for Fig. \[fig:16\], ${K}=64$, ${S}=23$ for Fig. \[fig:64\] and ${K}=256$, ${S}=47$ for Fig. \[fig:256\]. In each case there is a small asymmetry, $H=0.1$, and at the lowest temperature ${c}({\beta}_0)=0.68997$. The highest and lowest temperatures correspond to ${\beta_c}=1$ and ${\beta}_0=5$, respectively, and the number of temperatures ${S}+1$ is chosen to be close to the optimum value for PT and to scale as $\sqrt{K}$. The simulations confirm the conclusions of Sec. \[sec:pavpt\] and show that for larger systems (larger ${K}$) PA is initially closer to equilibrium than PT for the same amount of computational work. We have also tested two conjectures concerning the convergence of PA for the two-well model. The first conjecture is that the error in the fraction in the deep well decreases inversely in the population size. Figure \[fig:rgvr\] shows $R\gamma$ as a function of $R$ for the case $H=0.1$, ${K}=64$, and ${S}=23$. ![ $R\gamma$ vs. $R$ for ${K}=64$, ${S}=23$ and $H=0.1$.[]{data-label="fig:rgvr"}](rgvr.pdf){width="5in"} It is clear that within the error bars $\gamma$ is behaving as $1/R$ over the range of $R$ studied. The averages and error bars are obtained from 10$^5$ independent runs of PA for each population size. The second conjecture is that for large $R$ and ${S}\sim \sqrt{{K}}$, $$\label{eq:gvk} R\gamma \sim K^{a/2}.$$ ![ $R\gamma$ vs. ${K}$ for $R=10000$ and $H=0.1$.
The solid line is the best power-law fit, $R\gamma = K^{0.43}$.[]{data-label="fig:rgvk"}](rgvk.pdf){width="5in"} Figure \[fig:rgvk\] is a log-log plot of $R\gamma$ vs. ${K}$. For each value of $K$, PA is run with a large population size $R=10000$, $H=0.1$, and the values of ${S}$ given above satisfying ${S}\sim \sqrt{{K}}$. Averages and errors are obtained from 10$^5$ independent runs. The best fit to Eq.  \[eq:gvk\] is shown as the solid line and yields $a=0.85 \pm 0.08$. Note that $a\leq1$ as conjectured, supporting the hypothesis that for modest amounts of computational work, PA yields a smaller error in $\gamma$ than PT and that this advantage increases with well-depth parameter ${K}$. Discussion {#sec:disc} ========== We have seen that both parallel tempering and population annealing are able to solve the problem of sampling equilibrium states of systems having several minima in the free energy landscape separated by high barriers. Parallel tempering is a Markov chain Monte Carlo method and thus converges to equilibrium exponentially in the number of sweeps, whereas population annealing converges inversely in the population size. Their relative efficiencies are described qualitatively by the parable of the tortoise and the hare. For a given amount of computational work, the hare (population annealing) is initially closer to equilibrium but ultimately the tortoise (parallel tempering) catches up and gets ahead. The problem of high barriers between different free energy minima is only one of the difficulties encountered in simulating systems with rough free energy landscapes. A second, generic problem is that the relevant free energy minima may have small basins of attraction so that when the system is annealed starting from high temperature, it is very unlikely that the relevant low temperature states will be found. 
Small basins of attraction occur for first order transitions and for NP-hard combinatorial optimization problems and are almost certainly a feature of the low temperature phase of spin glasses. In this situation a very large number of sweeps of parallel tempering or a very large population size in population annealing are required simply to find the relevant states. Parallel tempering is widely used in several areas of computational physics while population annealing is not well known. One of the conclusions of this paper is that population annealing is an attractive alternative to parallel tempering for studies where moderately accurate answers are required quickly. This is especially the case if massively parallel computing resources are available since population annealing is well suited to parallelization. Jon Machta was supported in part by NSF grant DMR-0907235 and Richard Ellis by NSF grant DMS-0604071.
--- abstract: 'Nuclear parton distribution functions (NPDFs) are determined by a global analysis of experimental measurements on structure-function ratios $F_2^A/F_2^{A'}$ and Drell-Yan cross section ratios $\sigma_{DY}^A/\sigma_{DY}^{A'}$, and their uncertainties are estimated by the Hessian method. The NPDFs are obtained in both leading order (LO) and next-to-leading order (NLO) of $\alpha_s$. As a result, valence-quark distributions are relatively well determined, whereas antiquark distributions at $x>0.2$ and gluon distributions in the whole $x$ region have large uncertainties. The NLO uncertainties are slightly smaller than the LO ones; however, such an NLO improvement is not as significant as in the nucleonic case.' author: - 'M. Hirai' - 'S. Kumano' - 'T.-H. Nagai' title: | Global NLO Analysis of\ Nuclear Parton Distribution Functions --- [ address=[Department of Physics, Juntendo University, Inba, Chiba, 270-1695, Japan]{}]{} [ address=[Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK)\ 1-1, Ooho, Tsukuba, Ibaraki, 305-0801, Japan]{}, altaddress=[Department of Particle and Nuclear Studies, Graduate University for Advanced Studies\ 1-1, Ooho, Tsukuba, Ibaraki, 305-0801, Japan]{}]{} [ address=[Department of Particle and Nuclear Studies, Graduate University for Advanced Studies\ 1-1, Ooho, Tsukuba, Ibaraki, 305-0801, Japan]{}]{} Introduction ============ From measurements on structure-function ratios $F_2^A/F_2^{D}$, we found that nuclear parton distribution functions (NPDFs) are not equal to the corresponding nucleonic PDFs. Although there are many analyses on the PDFs in the nucleon, the NPDFs have not been investigated extensively. However, such studies have become increasingly important in recent years for precisely understanding measurements of heavy-ion reactions at RHIC and LHC. They should lead to a better understanding of the properties of quark-hadron matter.
Such investigations are also valuable for applications to neutrino reactions. For an accurate determination of neutrino oscillation parameters, nuclear corrections in the oxygen nucleus are important. Although the oscillation experiments are done in a relatively low-energy region, gross properties of cross sections could be described by using quark-hadron duality. In the future, nuclear structure functions will be investigated by the MINER$\nu$A project and at neutrino factories. The unpolarized PDFs in the nucleon have been investigated for a long time, and they are relatively well determined. Uncertainties of the determined PDFs were also estimated recently, and such studies were extended to the polarized PDFs [@aac0306] and fragmentation functions [@hkns07]. On the other hand, we have been investigating parametrization of the NPDFs and their uncertainties with a similar technique [@hkm01; @hkn04] by analyzing experimental data on nuclear structure-function ratios $F_2^A/F_2^{A'}$ and Drell-Yan cross section ratios $\sigma_{DY}^A/\sigma_{DY}^{A'}$. However, the uncertainty estimation was limited to the leading order (LO) in the previous analysis [@hkn04]. It is the purpose of this work to show the NPDFs and their uncertainties in both LO and next-to-leading order (NLO) and to discuss NLO improvements [@hkn07]. Analysis method =============== First, the functional form of our NPDFs is introduced. Since nuclear modifications are generally within the 10%$-$30% range for medium and large size nuclei, it is easier to investigate the modifications than the absolute NPDFs. We define the NPDFs at the initial $Q^2$ scale ($\equiv Q^2_0$) as $$\begin{aligned} f_i^A(x,Q_0^2)=w_i(x,A,Z)f_i(x,Q_0^2),\end{aligned}$$ where $f_i^A(x,Q_0^2)$ is the parton distribution with type $i$ ($=u_v$, $d_v$, $\bar u$, $\bar d$, $s$, $g$) in a nucleus, $f_i(x,Q_0^2)$ is the corresponding parton distribution in the nucleon, $A$ is the mass number, and $Z$ is the atomic number.
The variable $Q^2$ is given by $Q^2=-q^2$ with the virtual photon momentum $q$ in lepton scattering, and the Bjorken variable $x$ is defined by $x =Q^2/(2M\nu)$ with the energy transfer $\nu$ and the nucleon mass $M$. We call $w_i(x,A,Z)$ a weight function, which indicates a nuclear modification for the type-$i$ distribution. The weight functions are expressed by $$\begin{aligned} w_i(x,A,Z)=1+\Bigl( 1-\frac{1}{A^\alpha} \Bigr ) \frac{a_i+b_ix +c_i x^2 +d_i x^3}{(1-x)^{\beta_i}},\end{aligned}$$ where $\alpha$, $a_i$, $ b_i$, $c_i$, $d_i$, and $\beta_i$ are parameters to be determined by a $\chi^2$ analysis. Here, the valence up- and down-quark parameters are the same except for $a_{u_v}$ and $a_{d_v}$. Since there are no data to determine the flavor dependence of antiquark modifications, the weight functions of $\bar{u}$, $\bar{d}$, and $\bar{s}$ are assumed to be the same at $Q_0^2$. We impose three conditions, baryon-number, charge, and momentum conservations, so that three parameters are fixed. The initial scale of the NPDFs is chosen as $Q_0^2=1\ {\rm GeV^2}$. They are evolved to experimental $Q^2$ points. Above the threshold $Q^2=m_c^2$, charm-quark distributions appear through the $Q^2$ evolution. Using these NPDFs, we calculate $F_2^A/F_2^{A'}$ and Drell-Yan ratios $\sigma_{DY}^A/\sigma_{DY}^{A'}$. The parameters are determined so as to minimize the total $\chi^2$ $$\begin{aligned} \chi^2=\sum_j \frac{(R_j^{data}-R_j^{theo})^2}{(\sigma_j^{data})^2},\end{aligned}$$ where $R_j$ indicates $F_2^A/F_2^{A'}$ and $\sigma_{DY}^A/\sigma_{DY}^{A'}$. The uncertainties of the determined NPDFs are estimated by the Hessian method: $$\begin{aligned} [\delta f^A (x)]^2 =\Delta \chi^2 \sum_{i,j} \frac{\partial f^A (x,\hat{\xi})}{\partial \xi_i} H_{ij}^{-1} \frac{\partial f^A(x,\hat{\xi})}{\partial \xi_j} ,\end{aligned}$$ where $H_{i j}$ is the Hessian matrix, $\xi_i$ is a parameter, and $\hat\xi$ indicates the optimum parameter set.
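The polynomial weight function introduced above is simple to evaluate numerically. The sketch below uses purely illustrative parameter values (not the fitted ones, which come from the $\chi^2$ analysis) and drops the flavor index $i$ and the $Z$ dependence for brevity.

```python
def weight(x, A, alpha=1.0/3.0, a=-0.1, b=0.5, c=-0.3, d=0.1, beta=0.1):
    """Nuclear modification w(x, A) = 1 + (1 - A^(-alpha)) *
    (a + b x + c x^2 + d x^3) / (1 - x)^beta.  Parameter values here are
    illustrative only; the fitted values are determined by the chi^2 fit."""
    poly = a + b * x + c * x ** 2 + d * x ** 3
    return 1.0 + (1.0 - A ** (-alpha)) * poly / (1.0 - x) ** beta

w_nucleon = weight(0.3, 1)     # A = 1: factor (1 - 1/A^alpha) vanishes, w = 1
w_calcium = weight(0.3, 40)
w_lead = weight(0.3, 208)      # larger A, larger deviation from unity
```

By construction the modification vanishes for $A=1$ and grows monotonically with the mass number, which is the qualitative behavior seen in the nuclear-dependence figures below.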
The $\Delta \chi^2 $ value is taken as 13.7 so that the confidence level $P$ corresponds to the one-$\sigma$-error range ($P=0.6826$) for thirteen parameters by assuming the normal distribution in the multi-parameter space. Results ======= We used data with $Q^2 \ge 1\,{\rm GeV^2}$. They consist of 290, 606, 293, and 52 data points for $F_2^D/F_2^p$, $F_2^A/F_2^D$, $F_2^A/F_2^{A'}$ ($A' \ne D$), and $\sigma_{DY}^A/\sigma_{DY}^{A'}$, respectively. For the total 1241 data, we obtained the minimum $\chi^2$ values, $\chi_{min}^2/$d.o.f.=1.35 and 1.21 for the LO and NLO analyses, respectively. ![Experimental data of $F_2^{Ca}/F_2^D$ are compared with theoretical ratios, which are calculated at $Q^2=10\ {\rm GeV^2}$. The dashed and solid curves indicate LO and NLO results, and the uncertainties are shown by the shaded bands.[]{data-label="fig:F2CaD"}](F2CaD.eps){width="0.7\hsize"} ![Experimental data of $\sigma_{DY}^{p Ca}/\sigma_{DY}^{pD}$ are compared with the theoretical ratios calculated at $Q^2=50\ {\rm GeV^2}$. The dashed and solid curves indicate LO and NLO results, and the uncertainties are shown by the shaded bands.[]{data-label="fig:DYCa"}](DYCa.eps){width="0.7\hsize"} As examples, we show actual data with the theoretical LO and NLO ratios for the calcium nucleus together with their uncertainties in Figs. \[fig:F2CaD\] and \[fig:DYCa\]. The theoretical curves and their uncertainties are calculated at $Q^2=10\ {\rm GeV^2}$ and $Q^2=50\ {\rm GeV^2}$ for the $F_2$ and the Drell-Yan, respectively; however, the experimental data were taken at various $Q^2$ values. There are discrepancies from the experimental data at $x<0.01$ in Fig. \[fig:F2CaD\], but they should be attributed to the $Q^2$ difference [@hkn07]. The figures indicate good agreement between the data and the theoretical curves, and most data are within the uncertainties. As expected, the NLO uncertainties are smaller than the LO ones, especially at small $x$, but they are similar in the region $x>0.02$.
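The uncertainty bands in these figures rest on the $\Delta\chi^2$ criterion described above. Such $\Delta\chi^2$ values can be reproduced by inverting the $\chi^2$ cumulative distribution; the self-contained sketch below uses the series for the regularized lower incomplete gamma function and recovers, e.g., the familiar $\Delta\chi^2=1$ for one parameter and $\approx 2.30$ for two.

```python
import math

def chi2_cdf(x, k, terms=500):
    """P(chi^2_k <= x): series expansion of the regularized lower
    incomplete gamma function P(k/2, x/2)."""
    a, y = k / 2.0, x / 2.0
    if y <= 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    for n in range(1, terms):
        term *= y / (a + n)
        total += term
    return total * math.exp(a * math.log(y) - y - math.lgamma(a))

def delta_chi2(n_par, cl=0.6827):
    """Delta chi^2 enclosing the joint one-sigma region for n_par
    parameters, found by bisection on the chi^2 CDF."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if chi2_cdf(mid, n_par) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```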
As for Drell-Yan ratios in the region $x>0.04$, the NLO effects on the uncertainties are not obvious. In order to illustrate the nuclear dependence, we show the weight functions for all the analyzed nuclei and $^{16}\rm O$ at $Q^2 = 1\ {\rm GeV^2}$ in Fig. \[fig:all-npdfs\]. The NPDFs in the oxygen nucleus are shown because they are important for neutrino-oscillation studies. As the mass number becomes larger in the order of D, $\rm ^{4}He$, Li, $\cdots$, and Pb, the curves deviate increasingly from the line of unity. We provide a code for calculating the NPDFs and their uncertainties at our web site [@npdfweb]. By supplying the kinematical conditions, $x$ and $Q^2$, and a nuclear species, one can obtain the NPDFs ($u^A$, $d^A$, $s^A$, $\bar u^A$, $\bar d^A$, $\bar s^A$, $c^A$, and $g^A$) numerically. The technical details on its usage are explained in Refs. [@hkn04; @hkn07] and within the subroutine. ![Nuclear modifications $w_i$ ($i=u_v$, $d_v$, $\bar{q}$, and $g$) are shown in the NLO for all the analyzed nuclei and $^{16}\rm O$ at $Q^2 = 1\ {\rm GeV^2}$.[]{data-label="fig:all-npdfs"}](all-npdfs.eps){width="0.85\hsize"} ![Nuclear modifications of the PDFs and their uncertainties are shown for the calcium nucleus at $Q^2 = 1\ {\rm GeV^2}$. The dashed and solid curves indicate LO and NLO results, and their uncertainties are shown by the shaded bands.[]{data-label="fig:w-Ca"}](w-Ca.eps){width="0.85\hsize"} Nuclear modifications and their uncertainties are shown for the calcium nucleus in Fig. \[fig:w-Ca\]. The valence-quark distributions are well determined in the medium and large $x$ regions because the $F_2$ structure functions are dominated by the valence-quark distributions and because the $F_2$ ratios are accurately measured. At small $x$, the valence-quark modifications also have small uncertainties because of the baryon-number and charge conservations. The antiquark distributions are well determined at $x<0.2$ due to the $F_2$ and Drell-Yan data.
However, they have large uncertainties at $x > 0.2$ because there are no Drell-Yan data that constrain the antiquark modifications. Future Drell-Yan measurements at large $x$ should improve the determination of the antiquark distributions. There are J-PARC, Fermilab-E906, and GSI-FAIR projects, in which Drell-Yan processes will be investigated. We found that the uncertainty bands for the gluon are very large, which indicates that the gluon modifications cannot be well determined in the whole $x$ region. The gluon distributions contribute to the $F_2$ and Drell-Yan ratios as higher-order effects. Therefore, the gluon distributions cannot be accurately determined especially in the LO analysis. Some improvements are expected in the NLO analysis. In fact, Fig. \[fig:w-Ca\] indicates that the NLO uncertainty band for the gluon becomes smaller than the LO one. However, it is not as clear as the improvements in the polarized PDFs [@aac0306] and the fragmentation functions [@hkns07]. The gluon distribution in the nucleon has been determined mainly by using the $Q^2$ dependence of the structure function $F_2$. In the nuclear case, the $Q^2$ dependencies of the ratios $F_2^A/F_2^{A'}$ are not accurately measured. This leads to the large uncertainty bands in the gluon modifications even in the NLO analysis. We hope that accurate data will be provided at future electron facilities such as eRHIC and eLIC. Nuclear modifications in the deuteron are also investigated in our recent analysis [@hkn07]. Since the deuteron data are used for determining the “nucleonic” PDFs after some nuclear corrections, current PDFs in the nucleon could partially contain nuclear effects. Proper nuclear corrections should be taken into account in the PDF analysis of the nucleon to exclude such effects. Summary ======= By the global analyses of the data on the nuclear $F_2$ and Drell-Yan ratios, the NPDFs have been determined in both LO and NLO.
Although the valence-quark distributions and antiquark ones at $x<0.2$ are well determined, the uncertainty bands are very large in the antiquark modifications at $x > 0.2$ and the gluon ones in the whole $x$ region even in the NLO analysis. We need future experimental efforts for determining all the nuclear modifications. Our NPDFs and their uncertainties can be calculated by the code supplied in Ref. [@npdfweb]. [9]{} M. Hirai, S. Kumano, and N. Saito, Phys. Rev. D [**69**]{}, 054021 (2004); [**74**]{}, 014015 (2006). M. Hirai, S. Kumano, T.-H. Nagai, and K. Sudoh, Phys. Rev. D [**75**]{}, 094009 (2007); M. Hirai, S. Kumano, M. Oka, and K. Sudoh, arXiv:0708.1816 \[hep-ph\]. M. Hirai, S. Kumano, and M. Miyama, Phys. Rev. D [**64**]{}, 034003 (2001). M. Hirai, S. Kumano, and T.-H. Nagai, Phys. Rev. C [**70**]{}, 044905 (2004). M. Hirai, S. Kumano, and T.-H. Nagai, arXiv:0709.3038 \[hep-ph\]. Our code for the NPDFs is available at http://research.kek.jp/people/kumanos/nuclp.html.
--- abstract: 'Normalization techniques are important in different advanced neural networks and different tasks. This work investigates a novel dynamic learning-to-normalize (L2N) problem by proposing Exemplar Normalization (EN), which is able to learn different normalization methods for different convolutional layers and image samples of a deep network. EN significantly improves the flexibility of the recently proposed switchable normalization (SN), which solves a static L2N problem by linearly combining several normalizers in each normalization layer (the combination is the same for all samples). Instead of directly employing a multi-layer perceptron (MLP) to learn data-dependent parameters as conditional batch normalization (cBN) did, the internal architecture of EN is carefully designed to stabilize its optimization, leading to many appealing benefits. (1) EN enables different convolutional layers, image samples, categories, benchmarks, and tasks to use different normalization methods, shedding light on analyzing them in a holistic view. (2) EN is effective for various network architectures and tasks. (3) It could replace any normalization layers in a deep network and still produce stable model training. Extensive experiments demonstrate the effectiveness of EN in a wide spectrum of tasks including image recognition, noisy label learning, and semantic segmentation. For example, by replacing BN in the ordinary ResNet50, the improvement produced by EN is 300% more than that of SN on both ImageNet and the noisy WebVision dataset. The codes and models will be released.'
author: - | Ruimao Zhang$^{1}\thanks{Equal contribution}$ ,  Zhanglin Peng$^{1*}$,  Lingyun Wu$^1$,  Zhen Li$^{3,4}$,  Ping Luo$^2$\ $^1$ SenseTime Research, $^2$ The University of Hong Kong,\ $^3$ The Chinese University of Hong Kong (Shenzhen), $^4$ Shenzhen Research Institute of Big Data\ [{zhangruimao, pengzhanglin, wulingyun}@sensetime.com, [email protected], [email protected] ]{} bibliography: - 'egbib.bib' title: Exemplar Normalization for Learning Deep Representation --- Introduction ============ Normalization techniques are one of the most essential components to improve performance and accelerate training of convolutional neural networks (CNNs). Recently, a family of normalization methods has been proposed, including batch normalization (BN) [@C:BN], instance normalization (IN) [@A:IN], layer normalization (LN) [@A:LN] and group normalization (GN) [@C:GN]. As these methods were designed for different tasks, they often normalize feature maps of CNNs along different dimensions. To combine the advantages of the above methods, switchable normalization (SN) [@C:SN] and its variant [@C:SSN] were proposed to learn a linear combination of normalizers for each convolutional layer in an end-to-end manner. We term this normalization setting static ‘learning-to-normalize’. Despite the successes of these methods, once a CNN is optimized by using them, it employs the same combination ratios of the normalization methods for all image samples in a dataset, and is thus incapable of adapting to different instances, rendering suboptimal performance. As shown in Fig. \[fig:figure1\], this work studies a new learning problem, that is, dynamic ‘learning-to-normalize’, by proposing Exemplar Normalization (EN), which is able to learn an arbitrary normalizer for different convolutional layers, image samples, categories, datasets, and tasks in an end-to-end way.
Unlike previous conditional batch normalization (cBN), which used a multi-layer perceptron (MLP) to learn data-dependent parameters in a normalization layer and thus easily suffered from over-fitting, the internal architecture of EN is carefully designed to learn data-dependent normalization with merely a few parameters, thus stabilizing training and improving the generalization capacity of CNNs. EN has several appealing benefits. (1) It can be treated as an ***explanation tool*** for CNNs. The exemplar-based important ratios in each EN layer provide information to analyze the properties of different samples, classes, and datasets in various tasks. As shown in Fig. \[fig:fig1ratio\], when training ResNet50 [@C:Resnet] on ImageNet [@C:ImageNet], images from different categories select different normalizers in the same EN layer, leading to superior performance compared to the ordinary network. (2) EN makes a ***versatile design*** of the normalization layer possible, as EN is suitable for various benchmarks and tasks. Compared with state-of-the-art counterparts in Fig. \[fig:fig1acc\], EN consistently outperforms them on many benchmarks such as ImageNet [@C:ImageNet] for image classification, WebVision [@A:webvision] for noisy label learning, ADE20K [@C:ADE20K] and Cityscapes [@C:cityscape] for semantic segmentation. (3) EN is a ***plug and play module.*** It can be inserted into various CNN architectures such as ResNet [@C:Resnet], Inception v2 [@C:Inception-v3], and ShuffleNet v2 [@C:ShuffleNetV2], to replace any normalization layer therein and boost their performance. The **contributions** of this work are three-fold. (1) We present a novel normalization learning setting named dynamic ‘learning-to-normalize’, by proposing Exemplar Normalization (EN), which learns to select different normalizers in different normalization layers for different image samples. EN is able to normalize each image sample in both the training and testing stages. 
(2) EN provides a flexible way to analyze the selected normalizers in different layers, as well as the relationship among distinct samples and their deep representations. (3) As a new building block, we apply EN to various tasks and network architectures. Extensive experiments show that EN outperforms its counterparts on a wide spectrum of benchmarks and tasks. For example, by replacing BN in the ordinary ResNet50 [@C:Resnet], the improvement produced by EN is $300\%$ more than that of SN on both ImageNet [@C:ImageNet] and the noisy WebVision [@A:webvision] dataset. Related Work ============ Many normalization techniques have been developed to normalize feature representations [@C:BN; @A:LN; @A:IN; @C:GN; @C:SN] or the weights of filters [@C:CWN; @C:WN; @C:SpectralNorm] to accelerate training and boost the generalization ability of CNNs. Among them, Batch Normalization (BN) [@C:BN], Layer Normalization (LN) [@A:LN] and Instance Normalization (IN) [@A:IN] are the most popular methods; they compute statistics with respect to minibatch, layer, and channel, respectively. The follow-up Position Normalization [@A:PN] normalizes the activations at each spatial position independently across the channels. Besides normalizing different dimensions of the feature maps, another branch of work improved the capability of BN to deal with small batch sizes, including Group Normalization (GN) [@C:GN], Batch Renormalization (BRN) [@C:BRN], Batch Kalman Normalization (BKN) [@C:BKN] and Stream Normalization (StN) [@A:StreamNrom]. In recent studies, using a hybrid of multiple normalizers in a single normalization layer has attracted much attention [@C:IBN; @C:BIN; @J:SN; @C:SW; @C:DN]. For example, Pan *et al.* introduced IBN-Net [@C:IBN] to improve the generalization ability of CNNs by manually designing the mixture strategy of IN and BN. In [@C:BIN], Nam *et al.* adopted the same scheme in style transfer, where they employed a gated function to learn the important ratios of IN and BN. 
Luo *et al.* further proposed Switchable Normalization (SN) [@C:SN; @A:DoNorm] and its sparse version [@C:SSN] to extend such a scheme to an arbitrary number of normalizers. More recently, Dynamic Normalization (DN) [@C:DN] was introduced to estimate the computational pattern of statistics for a specific layer. Our work is motivated by this series of studies, but provides a more flexible way to learn the normalization for each sample. Adaptive normalization methods are also related to our work. In [@A:CBN], Conditional Batch Normalization (cBN) was introduced to learn the parameters of BN (scale and offset) adaptively as a function of the input features. Attentive Normalization (AN) [@A:AN] learns sample-based coefficients to combine feature maps. In [@C:MN], Deecke *et al.* proposed Mode Normalization (MN) to detect the modes of data on-the-fly and normalize them. However, these methods are incapable of learning various normalizers for different convolutional layers and images as EN does. The proposed EN is also connected with learning data-dependent [@C:DynamicFilter] or dynamic weights [@C:L2G] in convolution and pooling [@J:generalizingPooling]. The subnet for computing the important ratios is similar in form to SE-like [@C:SENet; @C:AANet; @C:ECANet] attention mechanisms, but they are technically different. First, SE-like models encourage channels to contribute equally to the feature representation [@A:CENet], while EN learns to select different normalizers in different layers. Second, SE is plugged into different networks by using different schemes, whereas EN can directly replace other normalization layers. Exemplar Normalization (EN) =========================== Notation and Background ----------------------- **Overview.** We introduce normalization in terms of a 4D tensor, which is the input data of a normalization layer in a mini-batch. 
Let $\bm{X} \in \mathbb{R}^{N\times C\times H\times W}$ be the input 4D tensor, where $N,C,H,W$ indicate the number of images, the number of channels, the channel height and the width, respectively. Here $H$ and $W$ define the spatial size of a single feature map. Let the matrix $\bm{X}_n \in \mathbb{R}^{C\times HW}$ denote the feature maps of the $n$-th image, where $n \in \{1,2,...,N\}$. Different normalizers normalize $\bm{X}_n$ by removing its mean and standard deviation along different dimensions, following the formulation $$\widehat{\bm{X}}_n = \bm{\gamma} ~ \frac{ \bm{X}_n - \bm{\mu}^k }{ \sqrt{(\bm{\delta}^k)^2 + \epsilon } } + \bm{\beta}$$ where $\widehat{\bm{X}}_n$ denotes the feature maps after normalization. $\bm{\mu}^k$ and $\bm{\delta}^k$ are the vectors of means and standard deviations calculated by the $k$-th normalizer. Here we define $k\in\{$BN, IN, LN, GN,...$\}$. The scale parameter $\bm{\gamma} \in \mathbb{R}^C$ and bias parameter $\bm{\beta} \in \mathbb{R}^C$ are adopted to re-scale and re-shift the normalized feature maps. $\epsilon$ is a small constant to prevent division by zero, and both $\sqrt{\cdot}$ and $(\cdot)^2$ are channel-wise operators. **Switchable Normalization (SN).** Unlike previous methods that estimate statistics over different dimensions of the input tensor, SN [@C:SN; @J:SN] learns a linear combination of the statistics of existing normalizers, $$\widehat{\bm{X}}_n = \bm{\gamma} ~ \frac{ \bm{X}_n - \sum_k \lambda^k \bm{\mu}^k }{ \sqrt{ \sum_k \lambda^k ~ (\bm{\delta}^k)^2 + \epsilon } } + \bm{\beta} \label{eq:sn}$$ where $\lambda^k \in [0,1]$ is a learnable parameter corresponding to the $k$-th normalizer, and $\sum_k \lambda^k = 1$. In practice, these important ratios are calculated by using the softmax function. The important ratios for the mean and the variance can also be different. Although SN [@C:SN] outperforms individual normalizers in various tasks, it solves a static ‘learning-to-normalize’ problem by switching among several normalizers in each layer. 
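The dimensions over which each normalizer computes its statistics can be sketched in plain Python (a toy sketch, not the paper's code; the helper names and the tiny tensor below are illustrative only):

```python
import math

# Statistics of BN / IN / LN on a 4D tensor X[N][C][H][W],
# matching the dimensions described in the text.
def mean_std(values, eps=1e-5):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var + eps)

def in_stats(X):   # one (mean, std) per sample and channel, over H*W
    return [[mean_std([x for row in X[n][c] for x in row])
             for c in range(len(X[n]))] for n in range(len(X))]

def ln_stats(X):   # one (mean, std) per sample, over C*H*W
    return [mean_std([x for c in X[n] for row in c for x in row])
            for n in range(len(X))]

def bn_stats(X):   # one (mean, std) per channel, over N*H*W
    C = len(X[0])
    return [mean_std([x for n in range(len(X))
                      for row in X[n][c] for x in row]) for c in range(C)]

# 2 samples, 2 channels, 1x2 feature maps
X = [[[[0.0, 2.0]], [[1.0, 3.0]]],
     [[[4.0, 6.0]], [[5.0, 7.0]]]]
print(bn_stats(X)[0][0])  # BN mean of channel 0: (0+2+4+6)/4 = 3.0
```

On this toy tensor the three normalizers already disagree: IN sees only the $H\times W$ values of one sample-channel pair, LN pools all channels of one sample, and BN pools one channel across the whole batch.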
Once SN is learned, its important ratios are fixed for the entire dataset. The flexibility of SN is thus limited, and it suffers from the bias between the training and the test set, leading to sub-optimal results. In this paper, Exemplar Normalization (EN) is proposed to investigate a dynamic ‘learning-to-normalize’ problem, which learns a different data-dependent normalization for each image sample in each layer. EN greatly extends the flexibility of SN, while retaining SN’s advantages of differentiable learning, stable model training, and applicability to multiple tasks. Formulation of EN ----------------- Given input feature maps $\bm{X}_n$, Exemplar Normalization (EN) is defined by $$\widehat{\bm{X}}_n = \sum_k ~~\bm{\gamma}^k (~\lambda^k_n~ \frac{ \bm{X}_n - \bm{\mu}^k }{ \sqrt{ (\bm{\delta}^k)^2 + \epsilon } }~)+ \bm{\beta}^k \label{eq:csn}$$ where $\lambda^k_n \in [0,1]$ indicates the important ratio of the $k$-th normalizer for the $n$-th sample. Similar to SN, we use the softmax function to satisfy the summation constraint, $\sum_k \lambda^k_n = 1$. Comparing Eqn.  and Eqn. , the differences between SN and EN are three-fold. (1) The important ratios of the mean and standard deviation in SN can be different, but such a scheme is avoided in EN to ensure stable training, because the learning capacity of EN already exceeds that of SN by learning different normalizers for different samples. (2) We use the important ratios to combine the normalized feature maps instead of combining the statistics of the normalizers, reducing the bias incurred in SN when combining the standard deviations. (3) Multiple $\bm{\gamma}$ and $\bm{\beta}$ are adopted to re-scale and re-shift the normalized feature maps in EN. 
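A minimal per-element sketch of the EN formulation above, assuming toy statistics and ratios (the numbers are invented for illustration; in EN each $\lambda^k_n$ comes from a learned subnet and each normalizer has its own $\bm{\gamma}^k,\bm{\beta}^k$):

```python
import math

# EN combines the K *normalized outputs* with sample-specific ratios
# lambda_n^k, each with its own gamma^k / beta^k.
def en_normalize(x, stats, ratios, gammas, betas, eps=1e-5):
    assert abs(sum(ratios) - 1.0) < 1e-6      # softmax constraint
    out = 0.0
    for (mu, delta), lam, g, b in zip(stats, ratios, gammas, betas):
        # per-normalizer term: gamma^k * lambda^k * normalized(x) + beta^k
        out += g * lam * (x - mu) / math.sqrt(delta ** 2 + eps) + b
    return out

stats  = [(1.0, 2.0), (3.0, 4.0)]   # toy (mu, delta) from e.g. IN and BN
ratios = [0.25, 0.75]               # lambda_n for this sample
x_hat = en_normalize(5.0, stats, ratios, [1.0, 1.0], [0.0, 0.0])
```

Note the contrast with SN: SN would first blend the two means and variances with shared ratios and normalize once, whereas EN normalizes twice and blends the results, so a different sample can weight the two normalizers differently.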
To calculate the important ratios $\lambda^k_n$ depending on the feature maps of an individual sample, we define $$\bm\lambda_n = \mathcal{F}( \bm{X}_n, \bm{\Omega}; \Theta)$$ where $\bm\lambda_n = [\lambda^1_n,..., \lambda^k_n,...,\lambda^K_n]$, and $K$ is the total number of normalizers in EN. $\bm{\Omega}$ indicates the collection of statistics of the different normalizers, $\bm{\Omega}=\{ (\bm{\mu}^k, \bm{\delta}^k)\}_{k=1}^K$. $\mathcal{F}(\cdot)$ is a function (a small neural network) that calculates the instance-based important ratios from the input feature maps $\bm{X}_n$ and the statistics $\bm{\Omega}$. $\Theta$ denotes the learnable parameters of the function $\mathcal{F}(\cdot)$. We carefully design a lightweight module to implement the function $\mathcal{F}(\cdot)$ in the next subsection. ![Illustration of the Exemplar Normalization (EN) layer, which is able to learn the sample-based important ratios to normalize the input feature maps by using multiple normalizers. Note that the scale parameter $\bm{\gamma}$ and shift parameter $\bm{\beta}$ in Eqn.  are omitted to simplify the diagram.[]{data-label="fig:CSN"}](csnlayer.pdf){width="0.65\linewidth"} An Exemplar Normalization Layer {#sec:csnlayer} ------------------------------- Fig. \[fig:CSN\] shows a diagram of the key operations in an EN layer, namely important ratio calculation and feature map normalization. Given an input tensor $\bm{X}$, a set of statistics $\bm{\Omega}$ is estimated. We use $\bm{\Omega}_k$ to denote the $k$-th statistics (mean and standard deviation). The EN layer then uses $\bm{X}$ and $\bm{\Omega}$ to calculate the important ratios, as shown in the right branch of Fig. \[fig:CSN\] in blue. As shown in the left branch of Fig. \[fig:CSN\], multiple normalized tensors are also calculated. As illustrated in Fig. \[fig:CSN\], there are three steps to calculate the important ratios for each sample. (1) The input tensor $\bm{X}$ is first down-sampled in the spatial dimension by using average pooling. 
The output feature matrix is denoted as $\bm{x} \in \mathbb{R}^{N\times C} $. Then we use every $\bm{\Omega}_k$ to pre-normalize $\bm{x}$ by subtracting the means and dividing by the standard deviations. There are $K$ statistics, and thus we have $\hat{\bm{x}} \in \mathbb{R}^{N \times K \times C}$. After that, a 1-D convolutional operator is employed to reduce the channel dimension of $\hat{\bm{x}}$ from $C$ to $C/r$, which is shown in the first blue block in Fig. \[fig:CSN\]. Here $r$ is a hyper-parameter that indicates the reduction rate. To further reduce the parameters of the above operation, we use group convolution with group number $C/r$, so that the total number of convolutional parameters always equals $C$, irrespective of the value of $r$. The output of this step is denoted as $\bm{z}$. \(2) The second step is to compute the pairwise correlations of the different normalizers for each sample, which is motivated by high-order feature representations [@C:HighOrder; @C:Anet]. For the $n$-th sample, we use $\bm{z}_n \in \mathbb{R}^{K\times C}$ and its transpose $\bm{z}_n^T$ to compute the pairwise correlations $\bm{v}_n = \bm{z}_n \bm{z}_n^T\in \mathbb{R}^{K\times K}$. Then $\bm{v}_n$ is reshaped into a vector to calculate the important ratios. Intuitively, the pairwise correlations capture the relationship between the different normalizers for each sample, and allow the model to integrate more information when calculating the important ratios. In practice, we also find that such an operation effectively stabilizes model training and leads to higher performance. \(3) In the last step, the above vector $\bm{v}_n$ is first fed into a fully-connected (FC) layer followed by a tanh unit. This raises its dimension to $\pi K$, where $\pi$ is a hyper-parameter and the value of $K$ is usually small (e.g., $3$). In practice, we set the value of $\pi$ to $50$ in the experiments. After that, we apply another FC layer to reduce the dimension to $K$. 
The output vector $\bm{\lambda}_n\in\mathbb{R}^{K\times 1 }$ is regarded as the important ratios of the $n$-th sample for the $K$ normalizers, where each element corresponds to an individual normalizer. Once we obtain the important ratios $[\bm{\lambda}_1,\bm{\lambda}_2,...,\bm{\lambda}_N]^T $, the `softmax` function is applied to satisfy the summation constraint that the important ratios of the different normalizers sum to $1$. **Complexity Analysis.** The numbers of parameters and the computational complexities of different normalization methods are compared in Table \[tab:complexity\]. The additional parameters in EN mainly come from the convolutional and FC layers used to calculate the data-dependent important ratios. In SN [@C:SN], this number is $2K$, since it adopts global important ratios for both the mean and the standard deviation. In EN, the total number of parameters used to generate the data-dependent important ratios is $C+\Psi(K)$, where $C$ equals the input channel size of the convolutional layer (“Conv.” with $C$ parameters in Fig. \[fig:CSN\]). $\Psi(K)$ is a function of $K$ that indicates the number of parameters in the two FC layers (the top blue block in Fig. \[fig:CSN\]). In practice, since $K$ is small ($3\sim4$), the value of $\Psi(K)$ is only about $0.001M$. In this paper, EN employs the same pool of normalizers as SN, $\{$IN,LN,BN$\}$. Thus the computational complexities of both SN and EN for estimating the statistics are $\mathcal{O}(NCHW)$. We also compare FLOPs in Sec. \[sec:Experiment\], showing that the extra \#parameters of EN are marginal compared to SN, while its relative improvement over the ordinary BN is 300% larger than that of SN. 
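The three-step ratio subnet described above can be sketched in pure Python. This is a toy sketch with hand-picked weights: the group convolution is omitted (so $\bm{z}$ keeps $C$ channels), and `hidden` plays the role of the $\pi K$ intermediate dimension; none of the weights below are the paper's learned parameters.

```python
import math

def softmax(v):
    e = [math.exp(x - max(v)) for x in v]
    s = sum(e)
    return [x / s for x in e]

def ratio_subnet(x_pooled, stats, W1, W2, eps=1e-5):
    # Step 1: pre-normalize the pooled features x (length C) with each
    # of the K statistics -> z of shape K x C (channel reduction omitted).
    z = [[(x - mu) / math.sqrt(d ** 2 + eps) for x in x_pooled]
         for (mu, d) in stats]
    # Step 2: pairwise correlations v = z z^T, flattened to length K*K.
    K = len(z)
    v = [sum(z[i][c] * z[j][c] for c in range(len(x_pooled)))
         for i in range(K) for j in range(K)]
    # Step 3: FC -> tanh -> FC -> softmax over the K normalizers.
    h = [math.tanh(sum(w * vi for w, vi in zip(row, v))) for row in W1]
    logits = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return softmax(logits)

K, C, hidden = 2, 3, 4            # e.g. {IN, BN}, 3 pooled channels
W1 = [[0.1] * (K * K)] * hidden   # toy FC weights (raise dim to "pi*K")
W2 = [[0.2] * hidden, [-0.2] * hidden]
lam = ratio_subnet([1.0, 2.0, 3.0], [(0.0, 1.0), (2.0, 1.0)], W1, W2)
assert abs(sum(lam) - 1.0) < 1e-9   # ratios sum to one
```

The softmax at the end enforces the summation constraint $\sum_k \lambda^k_n = 1$, so whatever the subnet computes, the output is always a valid convex combination over the $K$ normalizers.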
  Method         Parameters                                               \#Params          Computation complexity
  -------------- -------------------------------------------------------- ----------------- ------------------------
  BN [@C:BN]     $\bm{\gamma},\bm{\beta}$                                 $2C$              $\mathcal{O}(NCHW)$
  IN [@A:IN]     $\bm{\gamma},\bm{\beta}$                                 $2C$              $\mathcal{O}(NCHW)$
  LN [@A:LN]     $\bm{\gamma},\bm{\beta}$                                 $2C$              $\mathcal{O}(NCHW)$
  GN [@C:GN]     $\bm{\gamma},\bm{\beta}$                                 $2C$              $\mathcal{O}(NCHW)$
  BKN [@C:BKN]   $\bm{A}$                                                 $C^2$             $\mathcal{O}(NC^2HW)$
  SN [@C:SN]     $\bm{\gamma},\bm{\beta}, \{\omega_k, \nu_k\}_{k=1}^K $   $2C + 2K$         $\mathcal{O}(NCHW)$
  EN (ours)      $\{\bm{\gamma}^k,\bm{\beta}^k\}_{k=1}^K, \bm{\Theta}$    $2KC+C+\Psi(K)$   $\mathcal{O}(NCHW)$
  -------------- -------------------------------------------------------- ----------------- ------------------------

  : **Comparisons** of the parameters and computational complexity of different normalizers. $\bm{\gamma}$ and $\bm{\beta}$ indicate the scale and shift parameters in Eqn. , and $\bm{\Theta}$ denotes the parameters of the “Conv.” and FC layers in the proposed EN. $K$ denotes the number of normalizers and $\Psi(\cdot)$ is a function of $K$ that determines the number of parameters in $\bm{\Theta}$. $\{\omega_k, \nu_k\}_{k=1}^K$ are the learnable important ratios in SN [@C:SN]. \[tab:complexity\]

Experiment {#sec:Experiment} ========== Image Classification with ImageNet dataset {#sec:imagenet} ------------------------------------------ **Experiment Setting.** We first examine the performance of the proposed EN on ImageNet [@C:ImageNet], a standard large-scale dataset for high-resolution image classification. Following [@C:SN], the $\bm{\gamma}$ and $\bm{\beta}$ in all of the normalization methods are initialized to $1$ and $0$ respectively. In the training phase, the batch size is set to $128$ and the same data augmentation scheme as [@C:Resnet] is employed for all of the methods. At inference, single-crop validation accuracies based on a $224\times224$ center crop are reported. 
We use ShuffleNet v2 x$0.5$ [@C:ShuffleNetV2] and ResNet50 [@C:Resnet] as the backbone networks to evaluate the various normalization methods, given the differences in their network architectures and numbers of parameters. Same as [@C:ShuffleNetV2], ShuffleNet v2 is trained by using the Adam optimizer with an initial learning rate of $0.1$. For ResNet50, all of the methods are optimized by using stochastic gradient descent (SGD) with stepwise learning rate decay. The hyper-parameter $r$ in ShuffleNet v2 x$0.5$ and ResNet50 is set to $8$ and $32$ respectively, since their smallest numbers of channels are different. The hyper-parameter $\pi$ is $50$. For a fair comparison, we replace the compared normalizers with EN in all of the normalization layers of the backbone network.

  Backbone     Method   GFLOPs   Params.   top-1      top-5
  ------------ -------- -------- --------- ---------- ----------
               BN       0.046    1.37M     60.3       81.9
  ShuffleNet   SN       0.057    1.37M     61.2       82.9
  v2 x0.5      SSN      0.052    1.37M     61.2       82.7
               EN       0.063    1.59M     **62.2**   **83.3**
               SENet    4.151    26.77M    77.6       93.7
               AANet    4.167    25.80M    77.7       **93.8**
               BN       4.136    25.56M    76.4       93.0
               GN       4.155    25.56M    76.0       92.8
  ResNet50     SN       4.225    25.56M    76.9       93.2
               SSN      4.186    25.56M    77.2       93.1
               EN       4.325    25.91M    **78.1**   93.6
  ------------ -------- -------- --------- ---------- ----------

  : Comparisons of classification accuracies ($\%$), network parameters (Params.) and giga floating point operations (GFLOPs) of various methods on the validation set of ImageNet by using different network architectures. \[table:imagenet\]

**Result Comparison.** Table \[table:imagenet\] reports the efficiency and accuracy of EN against its counterparts including BN [@C:BN], GN [@C:GN], SN [@C:SN] and SSN [@C:SSN]. For both backbone networks, EN delivers superior performance at a competitive computational cost compared with previous methods. For example, by considering the sample-based ratio selection, EN outperforms SN by $1.0\%$ and $1.2\%$ in top-1 accuracy with ShuffleNet v2 x0.5 and ResNet50 respectively, with only a small increase in GFLOPs. 
The top-1 accuracy curves of ResNet50 using BN, SN and EN on the training and validation sets of ImageNet are presented in Fig. \[fig:threeDatasetCurve\]. We also compare with state-of-the-art attention-based methods, SENet [@C:SENet] and AANet [@C:AANet]; without bells and whistles, the proposed EN still outperforms these methods. Noisy Classification with Webvision dataset {#sec:webvision} ------------------------------------------- **Experiment Setting.** We also evaluate the performance of EN on the noisy image classification task with the WebVision dataset [@A:webvision]. We adopt Inception v2 [@C:Inception-v3] and ResNet50 [@C:Resnet] as the backbone networks. Since the smallest number of channels in Inception v2 is $32$, the feature reduction rate $r$ in the first “Conv.” is set to $16$ for this architecture. For ResNet50 [@C:Resnet], we keep the same reduction parameter $r=32$ as on ImageNet. A center crop with image size $224 \times 224$ is adopted at inference. All of the models are optimized with SGD, where the learning rate is initialized to $0.1$ and decreased at iterations $\{30,50,60,65,70\}\times10^4$ by a factor of $10$. The batch size is set to $256$, and the data augmentation and data balancing techniques of [@C:CurriculumNet] are used. In the training phase, we replace the compared normalizers with EN in all of the normalization layers. **Result Comparison.** Table \[table:webvision\] reports the top-1 and top-5 classification accuracies of the various normalization methods. EN outperforms its counterparts with both network architectures. Specifically, with ResNet50 as the backbone, EN significantly boosts the top-1 accuracy from $72.8\%$ to $73.5\%$ compared with SN. This is about a $3\times$ relative improvement of EN against SN when compared to the ordinary plain ResNet50. Such a performance gain is consistent with the results on ImageNet. The training and validation curves are shown in Fig. 
\[fig:threeDatasetCurve\]. A cross-dataset test is also conducted to investigate the transfer ability of EN, since the categories in ImageNet and WebVision are the same. The model trained on one dataset is used for testing on the other dataset’s validation set. The results, reported in Table \[tab:crossdataset\], show that EN still outperforms its counterparts.

  Model          Norm   GFLOPs   Params.   top-1      top-5
  -------------- ------ -------- --------- ---------- ----------
                 BN     2.056    11.29M    70.7       88.0
  Inception v2   SN     2.081    11.30M    71.3       88.5
                 EN     2.122    12.36M    **71.6**   **88.6**
                 BN     4.136    25.56M    72.5       89.1
  ResNet50       SN     4.225    25.56M    72.8       89.2
                 EN     4.325    25.91M    **73.5**   **89.4**
  -------------- ------ -------- --------- ---------- ----------

  : Comparison of classification accuracies ($\%$), network parameters and GFLOPs of various normalization methods on the validation set of WebVision by using different network architectures. The best results are in bold. \[table:webvision\]

  training set $\rightarrow$ val. set   method   top-1      top-5
  ------------------------------------- -------- ---------- ----------
                                        BN       67.9       85.8
                                        SN       68.0       86.3
                                        EN       **68.4**   **86.8**
                                        BN       64.4       84.3
                                        SN       61.1       81.0
                                        EN       **64.7**   **84.6**
  ------------------------------------- -------- ---------- ----------

  : Top-1 and top-5 accuracy ($\%$) of the cross-dataset results. The dataset before ’$\rightarrow$’ is adopted to train ResNet50 with various normalization methods; the validation set after ’$\rightarrow$’ is used for testing. The numbers of categories in the two datasets are the same. \[tab:crossdataset\]

Tiny Image Classification with CIFAR dataset {#sec:cifar} -------------------------------------------- **Experiment Setting.** We also conduct experiments on the CIFAR-10 and CIFAR-100 datasets. The training batch size is $128$. All of the models are trained on a single GPU. The training process contains $165$ epochs. The initial learning rate is set to $0.1$ and decayed at epochs $80$ and $120$, respectively. We also adopt the warm-up scheme [@C:Resnet; @C:ResV2] for training all of the models, which increases the learning rate from $0$ to $0.1$ during the first epoch. 
**Result Comparison.** The experimental results on the CIFAR datasets are presented in Table \[tab:cifar\]. Compared with previous methods, EN shows better performance than the other normalization methods over various depths of ResNet [@C:Resnet]. In particular, the top-1 accuracies of EN on CIFAR-100 are significantly improved by $1.04\%$, $1.31\%$ and $0.79\%$ compared with SN for the different network depths.

  Dataset     Backbone    BN      SN      EN
  ----------- ----------- ------- ------- -----------
              ResNet20    91.54   91.81   **92.41**
  CIFAR-10    ResNet56    93.15   93.41   **93.73**
              ResNet110   93.88   94.01   **94.22**
              ResNet20    67.87   67.74   **68.78**
  CIFAR-100   ResNet56    70.83   70.70   **72.01**
              ResNet110   72.41   72.53   **73.32**
  ----------- ----------- ------- ------- -----------

  : Top-1 accuracy ($\%$) on the CIFAR-10 and CIFAR-100 datasets by using various networks. The best results are in bold. \[tab:cifar\]

Semantic Image Segmentation --------------------------- **Experiment Setting.** We also evaluate the performance of EN on the semantic segmentation task by using the standard benchmarks, the ADE20K [@C:ADE20K] and Cityscapes [@C:cityscape] datasets, to demonstrate its generalization ability. Same as [@C:SN; @J:PDN], we use DeepLab [@J:deeplab] with ResNet50 as the backbone network and adopt atrous convolution with rates $2$ and $4$ in the last two blocks. The downsampling rate of the backbone network is $8$, and bilinear interpolation is employed to upsample the predicted semantic maps to the size of the input image. All of the models are trained with $2$ samples per GPU using the “poly” learning rate decay. The initial learning rates on ADE20K and Cityscapes are set to $0.02$ and $0.01$, respectively. Single-scale and multi-scale testing are used for evaluation. Note that the synchronization scheme is not used in SN and EN to estimate the batch mean and batch standard deviation across multiple GPUs. 
To finetune the model on semantic segmentation, we use $8$ GPUs with $32$ images per GPU to pre-train EN-ResNet50 on ImageNet; we therefore report the same configuration of SN (SN(8,32) [@J:SN]) for a fair comparison. **Result Comparison.** The mIoU scores on the ADE20K validation set and the Cityscapes test set are reported in Table \[tab:segmenation\]. The performance improvement of EN is consistent with the results on classification. For example, with multi-scale testing, the mIoUs on ADE20K and Cityscapes are improved from $38.4\%$ and $75.8\%$ to $38.9\%$ and $76.1\%$.

  -------- ------------- ------------- ------------- -------------
           ADE20K                      Cityscapes
  Method   mIoU$_{ss}$   mIoU$_{ms}$   mIoU$_{ss}$   mIoU$_{ms}$
  SyncBN   36.4          37.7          69.7          73.0
  GN       35.7          36.6          68.4          73.1
  SN       37.7          38.4          72.2          75.8
  EN       **38.2**      **38.9**      **72.6**      **76.1**
  -------- ------------- ------------- ------------- -------------

  : Semantic segmentation results on the ADE20K and Cityscapes datasets. The backbone is ResNet50 with dilated convolutions. The subscripts “ss" and “ms" indicate single-scale and multi-scale testing respectively. The best results are in bold. \[tab:segmenation\]

Ablation Study -------------- **Hyper-parameter $\pi$**. We first investigate the effect of the hyper-parameter $\pi$ in Sec. \[sec:csnlayer\]. The top-1 accuracies on ImageNet using ResNet50 as the backbone network are reported in Table \[tab:hyperpara1\]. All of the EN models outperform SN. As $\pi$ increases, the classification performance grows steadily. The gap between the lowest and highest accuracies is about $0.6\%$ excluding $\pi=1$, which demonstrates that the model is not sensitive to the hyper-parameter $\pi$ in most situations. To balance classification accuracy and computational efficiency, we set $\pi$ to $50$ in our model. 
  Method        SN     $\pi=1$   $\pi=10$   $\pi=20$   $\pi=50$          $\pi=100$
  ------------- ------ --------- ---------- ---------- ----------------- -----------
  top-1         76.9   77.1      77.5       77.8       **78.1**          78.0
  $\Delta$ SN   -      $+$ 0.2   $+$ 0.6    $+$ 0.9    $\textbf{+1.2}$   $+$ 1.1
  ------------- ------ --------- ---------- ---------- ----------------- -----------

  : Top-1 accuracy ($\%$) on ImageNet by using EN-ResNet50 with different values of the ascending-dimension hyper-parameter $\pi$. \[tab:hyperpara1\]

  Method        SN     $r=2$    $r=4$    $r=16$   $r=32$            $r=64$
  ------------- ------ -------- -------- -------- ----------------- --------
  top-1         76.9   77.7     77.9     77.9     **78.1**          77.7
  $\Delta$ SN   -      $+0.8$   $+1.0$   $+1.0$   $\textbf{+1.2}$   $+0.8$
  ------------- ------ -------- -------- -------- ----------------- --------

  : Top-1 accuracy ($\%$) on ImageNet by using EN-ResNet50 with different values of the hyper-parameter $r$ in the ‘Conv.’ of Sec. \[sec:csnlayer\]. Note that the total number of parameters is the same for different $r$. \[tab:hyperpara2\]

  ------------------------------------------ --------------- -----------------
                                             top-1 / top-5   $\Delta$ EN
  EN-ResNet50                                78.1 / 93.6     -
  $a.$ $\rightarrow$ 2-layer MLP             76.7 / 92.9     $-$1.4 / $-$0.7
  $b.$ $\rightarrow$ w/o Conv.               77.6 / 92.9     $-$0.5 / $-$0.7
  $c.$ $\rightarrow$ ReLU                    77.7 / 93.4     $-$0.4 / $-$0.2
  $d.$ $\rightarrow$ single $\gamma,\beta$   77.6 / 93.3     $-$0.5 / $-$0.3
  ------------------------------------------ --------------- -----------------

  : Top-1 and top-5 accuracy ($\%$) on ImageNet by using EN-ResNet50 with different configurations. \[tab:configuration\]

**Hyper-parameter $r$**. We also evaluate different group division strategies in the first “Conv.” of Fig. \[fig:CSN\] by controlling the hyper-parameter $r$. Although the total number of parameters in the “Conv.” layer is the same for distinct $r$, the reduced feature dimensions are different, leading to different computational complexities: the larger $r$, the smaller the computational cost of the subsequent block. 
Table \[tab:hyperpara2\] shows the top-1 accuracy on ImageNet by using EN-ResNet50 with different group divisions in the first “Conv.” shown in Fig. \[fig:CSN\]. All of the configurations achieve higher performance than SN. As the value of $r$ grows, the performance of EN-ResNet50 increases stably except at $r=64$, which equals the smallest number of channels in ResNet50. These results indicate that feature dimension reduction benefits performance. However, such an advantage may disappear if the reduction rate equals the smallest number of channels. **Other Configurations**. We replace the other components in the EN layer to verify their effectiveness. The configurations for comparison are as follows. ($a$) A 2-layer multi-layer perceptron (MLP) is used to replace the designed important ratio calculation module in Fig. \[fig:CSN\]. The MLP reduces the feature dimension to $1/32$ in the first layer, followed by an activation function, and then reduces the dimension to the number of important ratios in the second layer. ($b$) The “Conv.” operation in Fig. \[fig:CSN\] is omitted and the pairwise correlations $\bm{v}_n$ in ‘step (2)’ of Sec. \[sec:csnlayer\] are computed directly. ($c$) The tanh activation function in the top blue block of Fig. \[fig:CSN\] is replaced with ReLU. ($d$) Instead of multiple $\gamma,\beta$ in Eqn.  (each $\gamma,\beta$ corresponding to one normalizer), a single $\gamma,\beta$ is adopted. Table \[tab:configuration\] reports the comparison of the proposed EN with these different internal configurations. According to the results, the current configuration of EN achieves the best performance compared with the other variants. It is worth noting that the output of the 2-layer MLP (the important ratios) changes dramatically during training, making the distribution of the feature maps at different iterations change too much and leading to much poorer accuracy. 
Analysis of EN -------------- **Learning Dynamics of Ratios on the Dataset.** Since the parameters that are adopted to learn the important ratios $\bm{\lambda}$ in an EN layer are initialized to $0$, the important ratios of each sample in each layer have uniform values ($1/3$) at the beginning of model training. During training, the values of $\bm{\lambda}$ change between $0$ and $1$. We first investigate the averaged sample ratios in different layers of ResNet50 on the ImageNet and WebVision validation sets. We use the optimized model to calculate the ratios of each sample in each layer; the average ratios of each layer are then calculated over the whole validation set. According to Fig. \[fig:DatasetRatioComparison\], once the training dataset is determined, the learned averaged ratios are usually distinct for different datasets. To analyze the changes of the ratios during training, Fig. \[fig:visualdataset\] plots the learning dynamics of the ratios over $100$ epochs for the $53$ normalization layers in ResNet50. Each ratio value is averaged over all of the samples in the ImageNet validation set. From the perspective of the entire dataset, the changes of the ratios in each layer of EN are similar to those in SN, with values fluctuating smoothly during training, implying that distinct layers may need their own preferred normalizers to optimize the model in different epochs. **Learning Dynamics of Ratios on Classes and Images.** One advantage of EN over SN is its ability to learn important ratios that adapt to different exemplars. To illustrate this benefit of EN, we further plot the averaged important ratios of different classes (with and without similar appearance) in different layers in Fig. \[fig:visualclass\], as well as the important ratios of various image samples in different layers in Fig. \[fig:visualsample\]. We make the following observations. \(1) Different classes learn their own important ratios in different layers. 
However, once the neural network is optimized on a certain dataset (ImageNet), the trends of the ratio changes are similar in different epochs. For example, in Fig. \[fig:visualclass\], since the Persian cat and Siamese cat have a similar appearance, their learned ratio curves are very close and even coincident in some layers, such as Layer 5 and Layer 10, while the ratio curves of the Cheeseburger class stay far away from those of the above two categories. In most layers, however, the ratio changes of the different normalizers are basically the same, differing only in numerical nuances. \(2) For images with the same class index but various appearances, the learned ratios can also be distinct in different layers. Such cases are shown in Fig. \[fig:visualsample\]. All of the images are sampled from the confectionery class but have various appearances, e.g., a single confectionery exemplar versus shelves of candy for sale. According to Fig. \[fig:visualsample\], different images from the same category also obtain different ratios in the bottom, middle and top normalization layers. ![The visualization of averaged sample ratios in the $53$ normalization layers of EN-ResNet50 trained on ImageNet for 100 epochs. The y-axis of each sub-figure denotes the important ratios of the different normalizers. The x-axis shows the training epochs. Zoom in three times for the best view.[]{data-label="fig:visualdataset"}](imagenet_whole_dataset_mean_weight.pdf){width="1.0\linewidth"} ![The visualization of the important ratios of 3 categories (Persian cat, Siamese cat and Cheeseburger) in 6 different layers of ResNet50. Each column indicates one of the normalizers. []{data-label="fig:visualclass"}](visualclass.pdf){width="1.0\linewidth"} ![The visualization of the important ratios of 3 samples selected from the Confectionery class in different layers of ResNet50. 
[]{data-label="fig:visualsample"}](visualsample.pdf){width="1.0\linewidth"} ![image](cluster.pdf){width="\linewidth"} Conclusion ========== In this paper, we propose Exemplar Normalization to learn a linear combination of different normalizers in a sample-based manner within a single layer. We show the effectiveness of EN on various computer vision tasks, such as classification, detection and segmentation, demonstrating its superior learning and generalization ability compared with static learning-to-normalize methods such as SN. In addition, the interpretable visualization of the learned important ratios reveals properties of classes and datasets. Future work will explore EN in more intelligent tasks. In addition, task-oriented constraints on the important ratios are also a potential research direction. **Acknowledgement** This work was partially supported by No. 2018YFB1800800, Open Research Fund from Shenzhen Research Institute of Big Data No. 2019ORF01005, 2018B030338001, 2017ZT07X152, ZDSYS201707251409055, HKU Seed Fund for Basic Research and Start-up Fund. Appendix {#appendix .unnumbered} ======== **Sample Clustering via Important Ratios.** Exemplar Normalization (EN) provides another perspective for understanding the structure information in CNNs. To further analyze the effect of the proposed EN on capturing semantic information, we concatenate the learned important ratios of all of the EN layers for the input images and adopt t-Distributed Stochastic Neighbor Embedding (t-SNE) [@J:t-sne] to reduce the dimensions to 2-D. The visualization of these samples is shown in Fig. \[fig:clustering\]. In practice, we train EN-ResNet50 on the ImageNet [@C:ImageNet] training set. The normalizer pool used in EN is $\{$ IN, LN, BN $\}$. Then we randomly select $10$ categories from the ImageNet validation set to visualize the sample distribution. For each category, all of the validation samples are used ($50$ samples per category). 
The names of the selected categories and related exemplary images are presented at the bottom of Fig. \[fig:clustering\]. To visualize each sample, we extract and concatenate its important ratios from all of the EN layers in EN-ResNet50, so the dimension of the concatenated important ratios is $53 \times 3 = 159$. Then we use the open-source t-SNE implementation[^1] to reduce the dimension from $159$ to $2$ and visualize the sample distribution. We select $10$ typical training epochs to show the clustering dynamics during the training phase. According to Fig. \[fig:clustering\], we have the following observations. (1) The learned important ratios can be treated as one type of structure information that realizes **semantic preservation**. When the model converges, at epoch $96$, samples with the same label are grouped into the same cluster. This further demonstrates that different categories tend to select different normalizers to improve their representation abilities, as well as the prediction accuracy of the model. (2) The learned important ratios in EN also make **appearance embedding** possible. For example, `samoyed` and `standard schnauzer` have the same parent category according to the WordNet[^2] hierarchy, and the samples in these two categories share a similar appearance; thus, the distance between the corresponding two clusters is small. The same holds for the categories `pizza` and `plate`. But the `samoyed` cluster is far away from the `pizza` cluster, since the two differ greatly in appearance. (3) We also investigate the **clustering dynamics** in Fig. \[fig:clustering\], which shows the sample distributions at $10$ different epochs of the training process. At the beginning of model training, all of the samples are uniformly distributed and no semantic clusters have formed. From epoch $5$ to epoch $25$, the semantic clusters form rapidly along with the model optimization. 
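The embedding pipeline used here (concatenate the $53\times 3$ per-layer ratios of each image, then project to 2-D) can be sketched as follows. The ratios are mocked by synthetic per-class profiles, and a dependency-free PCA stands in for the t-SNE step, so everything in the snippet is illustrative rather than the actual experiment.

```python
import numpy as np

def embed_ratio_vectors(ratios, dim=2):
    """Project concatenated important-ratio vectors to `dim` dimensions.

    ratios: (num_samples, 53 * 3) array, one row per image, obtained by
    concatenating the (IN, LN, BN) ratios of all 53 EN layers.  The paper
    uses t-SNE; plain PCA via SVD is used here to keep the sketch
    self-contained.
    """
    centered = ratios - ratios.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T

# mock ratios: 10 categories x 50 validation images each
rng = np.random.default_rng(0)
profiles = rng.dirichlet(np.ones(3), size=(10, 53))     # per-class ratio profile
ratios = np.repeat(profiles.reshape(10, -1), 50, axis=0)
ratios += 0.01 * rng.normal(size=ratios.shape)          # per-image variation
coords = embed_ratio_vectors(ratios)                    # (500, 2) points to plot
```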
The semantic clusters are basically formed after epoch $31$, the first epoch after the learning rate is decayed for the first time. After that, the sample distribution is only slightly adjusted in the remaining epochs. [^1]: <https://lvdmaaten.github.io/tsne/> [^2]: <https://wordnet.princeton.edu/>
--- abstract: | We show that every heptagon is a section of a $3$-polytope with $6$ vertices. This implies that every $n$-gon with $n\geq 7$ can be obtained as a section of a $(2+{\left\lfloor{\frac{n}{7}}\right\rfloor})$-dimensional polytope with at most ${\left\lceil {\frac{6n}{7}} \right\rceil}$ vertices; and provides a geometric proof of the fact that every nonnegative $n\times m$ matrix of rank $3$ has nonnegative rank not larger than ${\left\lceil {\frac{6\min(n,m)}{7}} \right\rceil}$. This result has been independently proved, algebraically, by Shitov (J. Combin. Theory Ser. A 122, 2014). **Keywords:** polygon; polytope projections and sections; extension complexity; nonnegative rank; nonrealizability; pseudo-line arrangements author: - | Arnau Padrol[^1]\ Institut für Mathematik\ Freie Universität Berlin\ Berlin, Germany\ [email protected]\ - | Julian Pfeifle[^2]\ Dept. Matemàtica Aplicada II\ Universitat Politècnica de Catalunya\ Barcelona, Spain\ [email protected] title: '**Polygons as sections of higher-dimensional polytopes**' --- Introduction ============ Let $P$ be a (convex) polytope. An [ *extension*]{} of $P$ is any polytope $Q$ such that $P$ is the image of $Q$ under a linear projection; the [ *extension complexity*]{} of $P$, denoted [ *$\operatorname{xc}(P)$*]{}, is the minimal number of facets of an extension of $P$. This concept is relevant in combinatorial optimization because if a polytope has low extension complexity, then it is possible to use an extension with few facets to efficiently optimize a linear functional over it. A [ *section*]{} of a polytope is its intersection with an affine subspace. We will work with the polar formulation of the problem above, which asks for the minimal number of vertices of a polytope $Q$ that has $P$ as a section. 
If we call this quantity the [ *intersection complexity*]{} of $P$, [ *$\operatorname{ic}(P)$*]{}, then by definition it holds that $\operatorname{ic}(P)=\operatorname{xc}({{P}^\circ})$, where ${{P}^\circ}$ is the polar dual of $P$. Moreover, extension complexity is preserved under polarity (see [@ThomasParriloGouveia2013 Proposition 2.8]), so these four quantities actually coincide: $$\operatorname{ic}(P)=\operatorname{xc}({{P}^\circ})=\operatorname{xc}(P)=\operatorname{ic}({{P}^\circ}).$$ Despite the increasing amount of attention that this topic has received recently (see [@FioriniRothvosTiwary2012], [@ThomasParriloGouveia2013], [@GouveiaRobinsonThomas2013], [@Shitov2014] and references therein), it is still far from being well understood. For example, even the possible range of values of the intersection complexity of an $n$-gon is still unknown. Obviously, every $n$-gon has intersection complexity at most $n$, and for those with $n\leq 5$ it is indeed exactly $n$. It is not hard to check that hexagons can have complexity $5$ or $6$ (cf. [@ThomasParriloGouveia2013 Example 3.4]) and, as we show in Proposition \[prop:ichexagon\], it is easy to decide which is the exact value. By proving that a certain pseudo-line arrangement is not stretchable, we show that every heptagon is a section of a $3$-polytope with no more than $6$ vertices. This reveals the geometry behind a result found independently by Shitov in [@Shitov2014], and allows us to settle the intersection complexity of heptagons. [thm:icheptagon]{} Every heptagon has intersection complexity $6$. In general, the minimal intersection complexity of an $n$-gon is $\Theta(\log n)$, which is attained by regular $n$-gons [@BenTalNemirovski2001; @FioriniRothvosTiwary2012]. On the other hand, there exist $n$-gons whose intersection complexity is at least $\sqrt{2n}$ [@FioriniRothvosTiwary2012]. As a consequence of Theorem \[thm:icheptagon\] we automatically get upper bounds for the complexity of arbitrary $n$-gons. 
[thm:icngon]{} Any $n$-gon with $n\geq 7$ is a section of a $(2+{\left\lfloor{\frac{n}{7}}\right\rfloor})$-dimensional polytope with at most ${\left\lceil {\frac{6n}{7}} \right\rceil}$ vertices. In particular, $\operatorname{ic}(P)\leq {\left\lceil {\frac{6n}{7}} \right\rceil}$. Of course, this is just a first step towards understanding the intersection complexity of polygons. By counting degrees of freedom, it is conceivable that every $n$-gon could be represented as a section of an $O(\sqrt{n})$-dimensional polytope with $O(\sqrt{n})$ vertices. For sections of $3$-polytopes, our result only shows that every $n$-gon is a section of a $3$-polytope with not more than $n-1$ vertices, whereas we could expect an order of $\frac23 n$ vertices. There is an alternative formulation of these results. The [ *nonnegative rank*]{} of a nonnegative $n\times m$ matrix $M$, denoted [ *$\operatorname{rank_+}(M)$*]{}, is the minimal number $r$ such that there exist an $n\times r$ nonnegative matrix $R$ and an $r\times m$ nonnegative matrix $S$ such that $M=RS.$ A classical result of Yannakakis [@Yannakakis1991] states that the intersection complexity of a polytope coincides with the [nonnegative rank]{} of its slack matrix. In this setting, it is not hard to deduce the following theorem from Theorem \[thm:icngon\] (it is easy to deal with matrices of rank $3$ that are not slack matrices). \[thm:nonnegativerank\] Let $M$ be a nonnegative $n\times m$ matrix of rank $3$. Then $\operatorname{rank_+}(M)\leq {\left\lceil {\frac{6\min(n,m)}{7}} \right\rceil}$. This disproves a conjecture of Beasley and Laffey (originally stated in [@BeasleyLaffey2009 Conjecture 3.2] in a more general setting), who asked whether for every $n\geq 3$ there is an $n\times n$ nonnegative matrix $M$ of rank $3$ with $\operatorname{rank_+}(M)=n$. 
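A nonnegative factorization $M=RS$ is an easily verifiable certificate for an upper bound on the nonnegative rank. The sketch below checks such a certificate numerically; the matrix is an arbitrary random example, not one taken from the paper.

```python
import numpy as np

def certifies_nonneg_rank(m, r, s, tol=1e-9):
    """True iff R and S are entrywise nonnegative and M = R S,
    which certifies rank_+(M) <= R.shape[1]."""
    return bool(r.min() >= -tol and s.min() >= -tol
                and np.allclose(m, r @ s, atol=tol))

# a nonnegative 5x6 matrix of rank 3, given with a rank-3 certificate
rng = np.random.default_rng(1)
R = rng.uniform(size=(5, 3))
S = rng.uniform(size=(3, 6))
M = R @ S
```

For $M$ built this way the certificate holds and the ordinary rank of $M$ is (generically) $3$; finding a factorization with the smallest possible inner dimension is the hard part, which is what Theorem \[thm:nonnegativerank\] bounds for rank-$3$ matrices.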
While this paper was under review, Shitov improved Theorem \[thm:icngon\] and provided a sublinear upper bound for the intersection/extension complexity of $n$-gons [@Shitov2014b]. Notation -------- We assume throughout that the vertices ${\ensuremath{\left\{p_i\,\middle|\,i\in \ZZ/n\ZZ\right\}}}$ of every $n$-gon $P$ are cyclically clockwise labeled, [i.e., ]{}the edges of $P$ are $\operatorname{conv}\{p_i,p_{i+1}\}$ for $i\in\ZZ/n\ZZ$ and the triangles $(p_{i+2},p_{i+1},p_i)$ are positively oriented for $i\in\ZZ/n\ZZ$. We regard the $p_i$ as points of the Euclidean plane $\EE^2$, embedded in the real projective plane $\PP^2$ as $\EE^2 = {\ensuremath{\{{{(x, y, 1)}^\top}\mid x, y \in\RR\}}}$. For any pair of points $p, q \in \EE^2$, we denote by ${\ell_{p,q}} = p \wedge q$ the line joining them. It is well known that ${\ell_{p,q}}$ can be identified with the point ${\ell_{p,q}} = p \times q$ in the dual space $(\PP^2)^*$, where $$p\times q ={{\left( \begin{vmatrix}p_2& p_3\\q_2&q_3\end{vmatrix},\; -\begin{vmatrix}p_1& p_3\\q_1&q_3\end{vmatrix},\; \begin{vmatrix}p_1& p_2\\q_1&q_2\end{vmatrix} \right)}^\top}$$ denotes the cross product in Euclidean $3$-space. Similarly, the meet $\ell_1\vee\ell_2$ of two lines $\ell_1,\ell_2\in (\PP^2)^*$ is their intersection point in $\PP^2$, which has coordinates $\ell_1\times\ell_2$. The intersection complexity of hexagons ======================================= As an introduction to the techniques that we use later with heptagons, we study the intersection complexity of hexagons. Hexagons can have intersection complexity either $5$ or $6$ [@ThomasParriloGouveia2013 Example 3.4]. In this section we provide a geometric condition to decide between the two values. This section is mostly independent from the next two, and the reader can safely skip it. First, we introduce a lower bound for the $3$-dimensional intersection complexity of $n$-gons that we will use later. 
\[prop:ic3bound\] No $n$-gon can be obtained as a section of a $3$-polytope with less than ${\left\lceil {\frac{n+4}{2}} \right\rceil}$ vertices. Let $Q$ be a $3$-polytope with $m$ vertices such that its intersection with the plane $H$ coincides with $P$, and let $k$ be the number of vertices of $Q$ that lie on $H$. By Euler’s formula, the number of edges of $Q$ is at most $3m-6$, of which at least $3k$ have an endpoint on $H$. Moreover, the subgraphs $G^+$ and $G^-$ consisting of edges of $Q$ lying in the open halfspaces $H^+$ and $H^-$ are both connected. Indeed, if $H={\ensuremath{\left\{x\,\middle|\,{\langle {a} , {x} \rangle}=b\right\}}}$, then the linear function ${\langle {a} , {x} \rangle}$ induces an acyclic partial orientation on $G^+$ and $G^-$ by setting $v\rightarrow w$ when ${\langle {a} , {v} \rangle}<{\langle {a} , {w} \rangle}$. Following this orientation we can connect each vertex of $G^+$ to the face of $Q$ that maximizes ${\langle {a} , {x} \rangle}$, and following the reverse orientation, each vertex of $G^-$ to the face that minimizes ${\langle {a} , {x} \rangle}$ (compare [@Ziegler1995 Theorem 3.14]). Hence, there are at least $m-k-2$ edges in $G^+\cup G^-$. These are edges of $Q$ that do not intersect $H$. There are also at least $3k$ edges that have an endpoint on $H$. Now, observe that every vertex of $P$ is either a vertex of $Q$ in $H$ or is the intersection with $H$ of an edge of $Q$ that has an endpoint at each side of $H$. Hence, $$\label{eq:boundnumvertices}n-k\leq (3m-6)-(3k)-(m-k-2)=2m-4-2k,$$ and since $k\geq 0$, we get the desired bound. The lower bound of Proposition \[prop:ic3bound\] is optimal: for every $m\geq 2$ there are $2m$-gons appearing as sections of $3$-polytopes with $m+2$ vertices (Figure \[fig:optimalcuts\]). 
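The bound of Proposition \[prop:ic3bound\], and its tightness for the stacked polytopes of Figure \[fig:optimalcuts\], can be checked mechanically; a small sketch:

```python
import math

def min_vertices_3d_section(n):
    """Proposition [prop:ic3bound]: a 3-polytope having an n-gon as a
    section needs at least ceil((n + 4) / 2) vertices."""
    return math.ceil((n + 4) / 2)

# Tightness: the stacked 3-polytope of Fig. [fig:optimalcuts] has m + 2
# vertices and a 2m-gon section, matching the bound exactly.
for m in range(2, 100):
    assert min_vertices_3d_section(2 * m) == m + 2
```

For $n=6$ the bound gives $5$, matching the bipyramid construction below, and for $n=7$ it gives $6$, consistent with Theorem \[thm:icheptagon\].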
![For $m\geq 2$, the join of an $m$-path with an edge is the graph of a stacked $3$-polytope with $2m+2$ vertices that has a $2m$-gon as a section (by a plane that truncates the edge).[]{data-label="fig:optimalcuts"}](optimalcut){width=".22\textwidth"} The intersection complexity of a hexagon is either $5$ or $6$. \[prop:ichexagon\] The intersection complexity of a hexagon is $5$ if and only if the lines ${\ell_{p_0,p_5}}$, ${\ell_{p_1,p_4}}$ and ${\ell_{p_2,p_3}}$ intersect in a common point of the projective plane for some cyclic labeling of its vertices ${\ensuremath{\left\{p_i\,\middle|\,i\in\ZZ/6\ZZ\right\}}}$. \[ .58\] [![All cuts of the quadrangular pyramid and the triangular bipyramid into two connected components, up to symmetry.[]{data-label="fig:cuts"}](Cuts2 "fig:"){width=".55\textwidth"}]{} \[.4\] [![All cuts of the quadrangular pyramid and the triangular bipyramid into two connected components, up to symmetry.[]{data-label="fig:cuts"}](hexagoncut "fig:"){width=".3\textwidth"}]{} The only $4$-polytope with $5$ vertices is the simplex, which only has $5$ facets; thus, none of its $2$-dimensional sections is a hexagon. Therefore, if $P$ is a hexagon, then $\operatorname{ic}(P)=5$ if and only if it is the intersection of a $2$-plane $H$ with a $3$-polytope $Q$ with $5$ vertices. There are only two combinatorial types of $3$-polytopes with $5$ vertices: the quadrangular pyramid and the triangular bipyramid. By \[eq:boundnumvertices\], which for $n=6$ and $m=5$ forces $k=0$, $H$ does not contain any vertex of $Q$. Hence, $H$ induces a cut of the graph of $Q$ into two (nonempty) disjoint connected components. A small case-by-case analysis (cf. Figure \[fig:cuts\]) tells us that the only possibility is that $Q$ is the bipyramid and $H$ cuts its graph as shown in Figure \[fig:bipyrcut\]. 
However, in every geometric realization of such a cut (with the same labeling), the lines ${\ell_{p_0,p_5}}$, ${\ell_{p_1,p_4}}$ and ${\ell_{p_2,p_3}}$ intersect in a common (projective) point: the point of intersection of ${\ell_{q_0,q_1}}$ with $H$ (compare Figure \[fig:hexagonsection\]). For the converse, we prove only the case when the point of intersection is finite (the case with parallel lines is analogous). Then we can apply an affine transformation and assume that the coordinates of the hexagon are $$\begin{aligned} p_0&=(0,\alpha),& p_1&=(\beta x,\beta y),& p_2&=(\gamma,0)\\ p_3&=(1,0),&p_{4}&=(x,y),& p_{5}&=(0,1); \end{aligned}$$ for some $x,y>0$ and $\alpha,\beta,\gamma>1$. Now, let $K>\max(\alpha,\beta,\gamma)$ and consider the polytope $Q$ with vertices $$\begin{aligned} q_0&=(0,0,-K),\qquad q_1=(0,0,-1),\qquad q_2=\frac{\big((K-1)\gamma,0, K(\gamma-1)\big)}{K-\gamma},\\ q_3&=\frac{\big(x(K-1)\beta,y(K-1)\beta, K(\beta-1)\big)}{K-\beta}, \quad q_4=\frac{\big(0,(K-1)\alpha, K(\alpha-1)\big)}{K-\alpha}. \end{aligned}$$ If $H$ denotes the plane of vanishing third coordinate, then $q_0$ and $q_1$ lie below $H$, while $q_2$, $q_3$ and $q_4$ lie above. The intersections ${\ell_{q_i,q_j}}\cap H$ for $i\in\{0,1\}$ and $j\in\{2,3,4\}$ coincide with the vertices of $P\times\{0\}$. This proves that $P\times\{0\}=Q\cap H$, and hence that $\operatorname{ic}(P)=5$. Let $P$ be a regular hexagon, and let $Q$ be a polytope with $5$ vertices such that $Q\cap H=P$ for some plane $H$. By the proof of Proposition \[prop:ichexagon\], $Q$ is a triangular bipyramid and one of the two halfspaces defined by $H$ contains only two vertices of $Q$: $q_0$ and $q_1$. Even more, since $P$ is regular, the line ${\ell_{q_0,q_1}}$ must be parallel to one of the edge directions of $P$ because, as we saw in the previous proof, the projective point ${\ell_{q_0,q_1}}\cap H$ must coincide with the intersection of two opposite edges of $P$ (at infinity in this case). 
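The explicit $5$-vertex bipyramid from the proof above can be verified numerically. The sketch below fixes arbitrary admissible parameters (note that this parametrization already forces ${\ell_{p_0,p_5}}$, ${\ell_{p_1,p_4}}$ and ${\ell_{p_2,p_3}}$ through the origin) and checks that the six edge–plane intersections reproduce the hexagon.

```python
import numpy as np

def cut_plane_z0(u, v):
    """Intersection of the segment u-v with the plane H = {z = 0},
    assuming u and v lie on opposite sides of H."""
    t = u[2] / (u[2] - v[2])
    return u + t * (v - u)

# arbitrary admissible parameters: x, y > 0, alpha, beta, gamma > 1,
# and K > max(alpha, beta, gamma)
x, y, alpha, beta, gamma, K = 0.5, 0.5, 2.0, 1.5, 2.0, 3.0
hexagon = [(0.0, alpha), (beta * x, beta * y), (gamma, 0.0),
           (1.0, 0.0), (x, y), (0.0, 1.0)]

q0 = np.array([0.0, 0.0, -K])
q1 = np.array([0.0, 0.0, -1.0])
q2 = np.array([(K - 1) * gamma, 0.0, K * (gamma - 1)]) / (K - gamma)
q3 = np.array([x * (K - 1) * beta, y * (K - 1) * beta, K * (beta - 1)]) / (K - beta)
q4 = np.array([0.0, (K - 1) * alpha, K * (alpha - 1)]) / (K - alpha)

# the six edges joining {q0, q1} (below H) to {q2, q3, q4} (above H)
cuts = sorted(tuple(np.round(cut_plane_z0(a, b)[:2], 9))
              for a in (q0, q1) for b in (q2, q3, q4))
```

The sorted list `cuts` coincides with the sorted vertex list of the hexagon, confirming $Q\cap H = P\times\{0\}$ for this instance.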
This means that there are three different choices for the direction of the line ${\ell_{q_0,q_1}}$; and shows that the set of minimal extensions of $P$ (which can be parametrized by the vertex coordinates) is not connected, even if we consider its quotient space obtained after identifying extensions related by an admissible projective transformation that fixes $P$ and those related by a relabeling of the vertices of $Q$. A similar behavior was already observed in [@MondSmith2003] for the space of nonnegative factorizations of nonnegative matrices of rank $3$ and nonnegative rank $3$. Consider the set of all hexagons with $5$ fixed vertices. The position of the last vertex determines its intersection/extension complexity. This is depicted in Figure \[fig:hexagonRS\]. The hexagon fulfills the condition of Proposition \[prop:ichexagon\] if and only if the last point lies on any of the three dark lines. Hence, $\operatorname{ic}(P)=5$ if the last point lies on a dark line and $\operatorname{ic}(P)=6$ otherwise. Actually, an analogous picture appears for any choice of the positions of the initial $5$ points (the dark lines are always concurrent because of Pappus’s Theorem). In addition, the dark lines depend continuously on the coordinates of the first $5$ points. This implies that, if we take two realizations that have the last point in two different $\operatorname{ic}(P)=6$ regions in Figure \[fig:hexagonRS\], then we cannot continuously transform one into the other. In other words, the realization space of the hexagon (as considered by Richter-Gebert in [@RG]) restricted to those realizations that have intersection complexity $6$ is disconnected. The complexity of heptagons =========================== In this section we prove our main result, Theorem \[thm:icheptagon\], in two steps. 
The easier part consists of showing that a special family of heptagons, which we call standard heptagons, always have intersection complexity less than or equal to $6$ (Proposition \[prop:icstandardheptagon\]). The remainder of the section is devoted to proving the second step, Proposition \[prop:noncrossingstandardization\]: every heptagon is projectively equivalent to a standard heptagon. A standard heptagon ------------------- Here, and throughout this section, $P$ denotes a heptagon and ${\ensuremath{\left\{p_i\,\middle|\,i\in \ZZ/7\ZZ\right\}}}$ is its set of vertices, cyclically clockwise labeled. We say that $P$ is a [ *standard heptagon*]{} if $p_0=(0,0)$, $p_3=(0,1)$ and $p_{-3}=(1,0)$; and the lines ${\ell_{p_1,p_2}}$ and ${\ell_{p_{-1}, p_{-2}}}$ are respectively parallel to the lines ${\ell_{p_0,p_3}}$ and ${\ell_{p_0,p_{-3}}}$ (see Figure \[fig:heptagon\]). We can easily prove that standard heptagons have intersection complexity at most $6$. \[.45\] \[ .45\] [![Standard heptagons.](Intersection "fig:"){width=".4\textwidth"}]{} \[prop:icstandardheptagon\] Every standard heptagon $P$ is a section of a $3$-polytope $Q$ with $6$ vertices. In particular $\operatorname{ic}(P)\leq 6$. For any standard heptagon $P$, there are real numbers $b,c < 0 < a,d,\lambda,\mu$ such that the coordinates of the vertices of $P$ are $$\begin{aligned} p_0&=(0,0),& p_1&=(c,d),& p_2&=(c,d+\mu),&p_3&=(0,1),\\ &&p_{-1}&=(a,b),& p_{-2}&=(a+\lambda,b), & p_{-3}&=(1,0). 
\end{aligned}$$ Fix some $K>\max(\lambda -1,\mu -1)$ and consider the points $$\begin{aligned} q_0&:=(0,0,1),\,\, q_1:=(0,0,-K),\,\,q_2:=(1+K,0,-K),\,\,q_3:=(0,1+K,-K),\\ q_4&:=\frac{\big(a(1+K),b(1+K), \lambda K\big)}{(1+K)-\lambda},\,\, q_5:=\frac{\big((1+K)c,(1+K)d,\mu K\big)}{(1+K)-\mu}.\end{aligned}$$ We claim that $P$ is the intersection of the $3$-polytope $Q:=\operatorname{conv}\{q_0,q_1,\dots,q_5\}$ with the plane $H:={\ensuremath{\left\{(x,y,z)\in\RR^3\,\middle|\,z=0\right\}}}$: $$Q\cap H=P\times \{0\}.$$ Observe that every vertex of $Q\cap H$ corresponds to the intersection of $H$ with an edge of $Q$ that has one endpoint on each side of the plane. Since $q_0$, $q_4$ and $q_5$ lie above $H$ and $q_1$, $q_2$ and $q_3$ lie below, the intersections of the relevant lines ${\ell_{ q_i,q_j}}$ with $H$ are (see Figure \[fig:intersection\]): $$\begin{aligned} {\ell_{q_0,q_1}}\cap H&=(0,0,0),& {\ell_{q_0,q_2}}\cap H&=(1,0,0),& {\ell_{q_0,q_3}}\cap H&=(0,1,0),\\ {\ell_{q_4,q_1}}\cap H&=(a,b,0),&{\ell_{q_4,q_2}}\cap H&=(a+\lambda,b,0),&{\ell_{q_4,q_3}}\cap H&=(a,b+\lambda,0),\\ {\ell_{q_5,q_1}}\cap H&=(c,d,0),&{\ell_{q_5,q_2}}\cap H&=(c+\mu,d,0),&{\ell_{q_5,q_3}}\cap H&=(c,d+\mu,0) .\end{aligned}$$ These are the vertices of $P\times\{0\}$ together with $(a,b+\lambda,0)$, $(c+\mu,d,0)$, which proves that $Q\cap H\supseteq P\times \{0\}$. To prove that indeed $Q\cap H= P\times \{0\}$, we need to see that both $(a,b+\lambda)$ and $(c+\mu,d)$ belong to $P$. The convexity of $P$ implies the following conditions on the coordinates of its vertices by comparing, respectively, $p_{-1}$ with the lines ${\ell_{p_0,p_3}}$ and ${\ell_{p_0,p_{-3}}}$, $p_{-1}$ with $p_{-2}$, and $p_{-2}$ with the line ${\ell_{p_3,p_{-3}}}$: $$\begin{aligned} a&>0, & -b&>0, & \lambda&>0, & 1-a-b-\lambda>0.\end{aligned}$$ Hence, the real numbers $\frac{1-a-b-\lambda}{1-b}$, $\frac{a}{1-b}$ and $\frac{\lambda}{1-b}$ are all greater than $0$. 
Since they add up to $1$, we can exhibit $(a,b+\lambda)$ as a convex combination of $p_{-1}$, $p_{-2}$ and $p_3$: $$\frac{1-a-b-\lambda}{1-b}\,p_{-1}+\frac{a}{1-b}\,p_{-2}+\frac{\lambda}{1-b}\,p_3=(a,b+\lambda).$$ This proves that $(a,b+\lambda)\in P$. That $(c+\mu,d)\in P$ is proved analogously. Standardization lines of heptagons ---------------------------------- Our next goal is to show that every heptagon is projectively equivalent to a standard heptagon. For this, the key concept is that of a standardization line. Consider a heptagon $P$, embedded in the projective space $\PP^2$, whose vertices are cyclically labeled ${\ensuremath{\left\{p_i\,\middle|\,i\in \ZZ/7\ZZ\right\}}}$. For $i\in \ZZ/7\ZZ$, and abbreviating ${\ell_{i,j}}:={\ell_{p_i,p_j}}$, construct $$\begin{aligned} p_i^+ &:= {\ell_{i+ 1,i+ 2}}\vee{\ell_{i,i+ 3}},& p_i^- &:= {\ell_{i- 1,i- 2}}\vee{\ell_{i,i- 3}},& \ell_i &:= p_i^+ \wedge p_i^-.\end{aligned}$$ We call the line $\ell_i$ the $i$th [ *standardization line*]{} of $P$. If $\ell_i\cap P=\emptyset$, it is a [ *non-crossing*]{} standardization line. Figure \[fig:standardizationlines\] shows a heptagon and its standardization lines $\ell_0$ and $\ell_ {-3}$. Observe that $\ell_0$ is a non-crossing standardization line, while $\ell_{-3}$ is not. ![The standardization lines $\ell_0$ and $\ell_ {-3}$.[]{data-label="fig:standardizationlines"}](StandardizationLine1 "fig:"){width=".45\linewidth"} ![The standardization lines $\ell_0$ and $\ell_ {-3}$.[]{data-label="fig:standardizationlines"}](StandardizationLine2 "fig:"){width=".45\linewidth"} \[lem:converttostandard\] A heptagon $P$ is projectively equivalent to a standard heptagon if and only if it has at least one non-crossing standardization line. The line at infinity of a standard heptagon must be one of its standardization lines, which is obviously non-crossing. 
Conversely, the projective transformation that sends a non-crossing standardization line of $P$ to infinity, followed by a suitable affine transformation, maps $P$ onto a standard heptagon. Hence, having a non-crossing standardization line characterizes standard heptagons up to projective equivalence. Our next step is to show that every heptagon has a non-crossing standardization line (Proposition \[prop:noncrossingstandardization\]). But to prove this, we still need to introduce a couple of concepts. Observe that $\ell_i$ cannot cross any of the lines ${\ell_{p_{i+1},p_{i+2}}}$, ${\ell_{p_{i+0},p_{i+3}}}$, ${\ell_{p_{i-1},p_{i-2}}}$ and ${\ell_{p_{i+0},p_{i-3}}}$ in the interior of $P$, since by construction their intersection point is $p_i^\pm$, which lies outside $P$ (compare Figure \[fig:standardizationlines\]). In particular, if $\ell_i$ intersects $P$, either it separates $p_{i+1}$ and $p_{i+2}$ from the remaining vertices of $P$, or it separates $p_{i-1}$ and $p_{i-2}$. If the standardization line $\ell_i$ separates $p_{i+1}$ and $p_{i+2}$ from the remaining vertices of $P$, we say that it is [ *$+$-crossing*]{}; if it separates $p_{i-1}$ and $p_{i-2}$ it is [ *$-$-crossing*]{}. In the example of Figure \[fig:standardizationlines\], $\ell_{-3}$ is $-$-crossing. The lines ${\ell_{p_{i},p_{i+3}}}$ and ${\ell_{p_{i+1},p_{i+2}}}$ partition the projective plane $\PP^2$ into two disjoint angular sectors (cf. Figure \[fig:sector\]). One of them contains the points $p_{i-1}$, $p_{i-2}$ and $p_{i-3}$, while the interior of the other is empty of vertices of $P$. We denote this empty sector [ *$S_i^+$*]{}. Similarly, [ *$S_i^-$*]{} is the sector formed by ${\ell_{p_{i},p_{i-3}}}$ and ${\ell_{p_{i-1},p_{i-2}}}$ that contains no vertices of $P$. 
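In the cross-product notation of the introduction, the points $p_i^\pm$ and the line $\ell_i$ are immediate to compute, and crossing can be tested by checking on which side of $\ell_i$ the vertices lie. A sketch (for the regular heptagon, all standardization lines turn out to be non-crossing, by symmetry):

```python
import numpy as np

def homog(p):
    """Embed an affine point of E^2 into P^2 as (x, y, 1)."""
    return np.array([p[0], p[1], 1.0])

def standardization_line(pts, i):
    """The line l_i joining p_i^+ and p_i^- for a heptagon given by 7
    affine points; joins and meets are cross products of homogeneous
    coordinates, as in the Notation section."""
    h = [homog(q) for q in pts]
    join = lambda a, b: np.cross(h[a % 7], h[b % 7])        # line through 2 points
    p_plus = np.cross(join(i + 1, i + 2), join(i, i + 3))   # meet of 2 lines
    p_minus = np.cross(join(i - 1, i - 2), join(i, i - 3))
    return np.cross(p_plus, p_minus)

def is_crossing(line, pts):
    """l_i crosses the (convex) heptagon iff its vertices do not all lie
    strictly on one side of l_i."""
    signs = np.sign([line @ homog(q) for q in pts])
    return not (np.all(signs > 0) or np.all(signs < 0))

w = 2 * np.pi / 7  # regular heptagon, clockwise labeling
heptagon = [(np.cos(k * w), -np.sin(k * w)) for k in range(7)]
```

Here `is_crossing(standardization_line(heptagon, i), heptagon)` is `False` for every $i$; Proposition \[prop:noncrossingstandardization\] below guarantees at least one such $i$ for an arbitrary heptagon.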
\[.45\] [![The relevant angular sectors.[]{data-label="fig:sectors"}](CrossingSector "fig:"){width=".4\textwidth"}]{}\[.45\] [![The relevant angular sectors.[]{data-label="fig:sectors"}](SectorCompatibility "fig:"){width=".4\textwidth"}]{} These sectors allow us to characterize $\pm$-crossing standardization lines. \[lem:crossingcharacterization\] The standardization line $\ell_i$ is $+$-crossing if and only if $p_i^-\in S_i^+$. Analogously, $\ell_i$ is $-$-crossing if and only if $p_i^+\in S_i^-$. The line $\ell_i$ is $+$-crossing when it separates $p_{i+1}$ and $p_{i+2}$ from the rest of $P$; this happens if and only if $\ell_i\subset S_i^+$. Since ${\ell_{p_{i-1},p_{i-2}}}\cap\ell_i = p_i^-$, this is equivalent to $p_i^-\in S_i ^+$. The case of $-$-crossing follows analogously. With this characterization, we can easily prove the following compatibility condition. \[lem:sectorcompatibility\] If $\ell_i$ is $+$-crossing, then $\ell_{i-3}$ cannot be $-$-crossing. Analogously, if $\ell_i$ is $-$-crossing, then $\ell_{i+3}$ cannot be $+$-crossing. Both statements are equivalent by symmetry. We assume that $\ell_i$ is $+$-crossing and $\ell_{i-3}$ is $-$-crossing to reach a contradiction. Observe that $p_i^-=p_{i-3}^+$ by definition. By Lemma \[lem:crossingcharacterization\], $p_i^-$ must lie both in the sector formed by ${\ell_{p_{i},p_{i+3}}}$ and ${\ell_{p_{i+1},p_{i+2}}}$ and in the sector formed by ${\ell_{p_{i-3},p_{i+1}}}$ and ${\ell_{p_{i+3},p_{i+2}}}$. However, the intersection of these two sectors lies in the interior of the polygon (cf. Figure \[fig:sectorcompatibility\]), while $p_i^-$ lies outside. \[cor:allcrossing\] If all the standardization lines $\ell_i$ intersect $P$, they are either all $+$-crossing or all $-$-crossing. Every heptagon has a non-crossing standardization line ------------------------------------------------------ We are finally ready to present and prove Proposition \[prop:noncrossingstandardization\]. 
In essence, we prove that the combinatorics of the pseudo-line arrangement in Figure \[fig:nonrealizable\] are not realizable by an arrangement of straight lines in the projective plane. Here the “combinatorics” refers to the order of the intersection points in each projective pseudo-line. However, any heptagon that had only $+$-crossing standardization lines would provide such a realization (compare the characterization of Lemma \[lem:crossingcharacterization\]). ![A non-stretchable pseudo-line arrangement.[]{data-label="fig:nonrealizable"}](NonRealizable){width=".55\linewidth"} For the proof, we will need the formula $$\label{eq:quadprod} ({a \times b} ){\times} ({c}\times {d}) = [{a,\, b,\, d}]\, c - [{a,\, b,\, c}] \, d ,$$ where $[{a,\, b,\, c}]= \det(a,b,c)$ is the $3\times 3$-determinant formed by the homogeneous coordinates of the corresponding points. That is, $[{p_x,\ p_y, \ p_z}]=\pm 2 \operatorname{Vol}\left(\operatorname{conv}\{{p_x,\ p_y, \ p_z}\}\right)$. Observe that, since the vertices of the heptagon are labeled clockwise and are in convex position, $ [{p_x,\ p_y, \ p_z}]>0$ whenever $z$ lies in the interval $(x,y)$. To simplify the notation, in what follows we abbreviate $[{p_{i+x},\ p_{i+y}, \ p_{i+z}}]$ as $[x,y,z]_i$, for any $x,y,z,i\in \ZZ/7\ZZ$. \[lem:alg\_characterization\] With the notation from above, the standardization line $\ell_i$ is $+$-crossing if and only if $$\begin{aligned} \label{eq:bdycondition1} [p_{i+2},p_{i+1}, p_i^-]= [- 1,-2,-3]_{i}[2, 1, 0]_{i} - [- 1,-2,0]_i[2, 1,-3]_{i}\geq 0; \end{aligned}$$ and is $-$-crossing if and only if $$\begin{aligned} \label{eq:bdycondition2} [p_{i-2},p_{i-1},p_i^+]= [1,2,3]_i[-2,-1, 0]_{i} - [1,2,0]_{i}[-2,-1, 3]_{i} \geq 0. 
\end{aligned}$$ Using \[eq:quadprod\], the coordinates of the standardization point $p_i^-$ are given by $$\begin{aligned} p_i^- &= (p_{i- 1}\wedge p_{i- 2})\vee(p_{i}\wedge p_{i- 3}) =(p_{i- 1}\times p_{i- 2})\times(p_{i}\times p_{i- 3})\\ &\stackrel{\eqref{eq:quadprod}}{=} \phantom{-}[- 1,-2, -3]_ i\ p_{i} - [- 1,-2,0]_i\ p_{i-3}. \end{aligned}$$ Observe that $[p_{i+3},p_i,p_i^-]<0$ since $$\begin{aligned} [p_{i+3},p_i,p_i^-]&= [- 1,-2, -3]_ i[3,0, 0]_ i - [- 1,-2,0]_i[3,0,-3]_ i\\ &=\phantom{[- 1,-2, -3]_ i[3,0, 0]_ i} - [- 1,-2,0]_i[3,0, -3]_ i \ < \ 0, \end{aligned}$$ because $[3,0,0]_i=0$ and $[- 1,-2,0]_i > 0$, $[3,0, -3]_ i > 0$ by convexity. Therefore, in view of Lemma \[lem:crossingcharacterization\], requiring $\ell_i$ to be $+$-crossing reduces to the equation $[p_{i+2},p_{i+1}, p_i^-]\geq 0$, since otherwise $ p_i^-$ would not lie in the desired sector. This expression can be reformulated as \[eq:bdycondition1\]. The proof of \[eq:bdycondition2\] is analogous. \[prop:noncrossingstandardization\] Every heptagon has at least one non-crossing standardization line. We want to prove that $P$ has at least one non-crossing standardization line. By Corollary \[cor:allcrossing\] (and symmetry), it is enough to prove that it is impossible for all $\ell_i$ to be $+$-crossing. We will assume this to be the case and reach a contradiction. If $\ell_i$ is $+$-crossing for all $0\leq i\leq 6$, then by Lemma \[lem:alg\_characterization\], the coordinates of the vertices of $P$ fulfill \[eq:bdycondition1\] for all $i\in\ZZ/7\ZZ$. Moreover, if $\ell_i$ is $+$-crossing then it cannot be $-$-crossing. Therefore, again by Lemma \[lem:alg\_characterization\], one can see that the coordinates of the vertices of $P$ fulfill $$\begin{aligned} \label{eq:bdycondition3} [2,1,0]_{i}[-1,-2, 3]_{i} - [2,1,3]_i[-1,-2, 0]_{i}> 0, \end{aligned}$$ for all $i\in\ZZ/7\ZZ$. Therefore, if all the $\ell_i$ are $+$-crossing, the addition of the left-hand sides of \[eq:bdycondition1\] and \[eq:bdycondition3\] over all $i\in\ZZ/7\ZZ$ should be positive. 
With the abbreviations $$\begin{aligned} A_i&:=[- 1,-2,-3]_{i},& B_i&:=[2, 1, 0]_{i},& C_i&:=[- 1,-2,0]_i,\\ D_i&:=[2, 1,-3]_{i},& E_i&:=[2,1,0]_{i},& F_i&:=[-1,-2, 3]_{i},\\ G_i &:=[2,1,3]_i,& H_i&:=[-1,-2, 0]_{i};\end{aligned}$$ this can be expressed as $$\label{eq:globalcondition} \sum_{i\in \ZZ/7\ZZ}A_iB_i-C_iD_i+E_iF_i-G_iH_i>0 .$$ However, it turns out that for every heptagon the equation $$\label{eq:heptagonidentity} \sum_{i\in \ZZ/7\ZZ}A_iB_i-C_iD_i+E_iF_i-G_iH_i=0$$ holds by the upcoming Lemma \[lem:invariant\]. This contradiction concludes the proof that every heptagon has at least one non-crossing standardization line. \[lem:invariant\] Let $A$ be a configuration of $7$ points in $\EE^2\subset\PP^2$ labeled ${\ensuremath{\left\{a_i\,\middle|\,i\in \ZZ/7\ZZ\right\}}}$. Denote the determinant $[{a_{i+x},\ a_{i+y}, \ a_{i+z}}]$ as $[x,y,z]_i$, for any $x,y,z,i\in \ZZ/7\ZZ$. Finally, let $$\begin{aligned} A_i&:=[- 1,-2,-3]_{i},& B_i&:=[2, 1, 0]_{i},& C_i&:=[- 1,-2,0]_i,\\ D_i&:=[2, 1,-3]_{i},& E_i&:=[2,1,0]_{i},& F_i&:=[-1,-2, 3]_{i},\\ G_i &:=[2,1,3]_i,& H_i&:=[-1,-2, 0]_{i}.\end{aligned}$$ Then, $$\tag{\ref{eq:heptagonidentity}} \sum_{i\in \ZZ/7\ZZ}A_iB_i-C_iD_i+E_iF_i-G_iH_i=0.$$ Although \eqref{eq:heptagonidentity} can be checked purely algebraically, we provide a geometric interpretation. Observe that $[x,y,z]_i=\pm2\operatorname{Vol}(a_{i+x},a_{i+y},a_{i+z})$, which implies that the identity in \eqref{eq:heptagonidentity} can be proved in terms of (signed) areas of certain triangles spanned by $A$. Figure \[fig:invariant\] depicts some of these triangles when the points are in convex position.
-------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](A0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](B0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](C0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](D0 "fig:"){width=".18\linewidth"} $A_0$ $B_0$ $C_0$ $D_0$ ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](E0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](F0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](G0 "fig:"){width=".18\linewidth"} ![The determinants involved in Lemma \[lem:invariant\].[]{data-label="fig:invariant"}](H0 "fig:"){width=".18\linewidth"} $E_0$ $F_0$ $G_0$ $H_0$ -------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- 
To see \eqref{eq:heptagonidentity}, we show the stronger result that both $$\begin{aligned} \sum_{i\in \ZZ/7\ZZ}A_iB_i& =\sum_{i\in \ZZ/7\ZZ}G_iH_i \qquad \text{ and }\label{eq:identity1}\\\sum_{i\in \ZZ/7\ZZ}E_iF_i&=\sum_{i\in \ZZ/7\ZZ}C_iD_i\label{eq:identity2}\end{aligned}$$ hold. Indeed, the identity \eqref{eq:identity1} is easy: each of $A_i$, $G_i$ and $H_i$ is a determinant of three cyclically consecutive points, so that, up to cyclic reorderings of the arguments (which leave the determinant unchanged), $A_i=B_{i-3}$, $G_i=B_{i+1}$ and $H_i=B_{i-2}$, and both sides of \eqref{eq:identity1} equal $\sum_{i\in \ZZ/7\ZZ}B_iB_{i+3}$. Moreover, it is straightforward to check that $$\begin{aligned} C_i&=E_{i-2},\label{eq:Ci}\end{aligned}$$ and it is also not hard to see that $$D_i+C_{i-3}=F_{i-2}+E_{(i+3)-2}.\label{eq:square}$$ Finally, we subtract the right-hand side of \eqref{eq:identity2} from the left-hand side: $$\begin{aligned} & \sum_{i\in \ZZ/7\ZZ}E_iF_i -\sum_{i\in \ZZ/7\ZZ}C_iD_i \ = \ \sum_{i\in \ZZ/7\ZZ}E_{i-2}F_{i-2}-\sum_{i\in \ZZ/7\ZZ}C_iD_i \\ &\stackrel{\phantom{\eqref{eq:Ci}}}{=} \sum_{i\in \ZZ/7\ZZ}E_{i-2}(F_{i-2}+E_{i+1}-E_{i+1}) -\sum_{i\in \ZZ/7\ZZ}C_i(D_i+C_{i-3}-C_{i-3}) \\ & \stackrel{\eqref{eq:Ci}}{=} \sum_{i\in\ZZ/7\ZZ}C_i \underbrace{\big(F_{i-2}+E_{i+1}-D_i-C_{i-3}\big)}_{{}=0 \text{ by }\eqref{eq:square}} -\sum_{i\in \ZZ/7\ZZ}E_{i-2}E_{i+1} +\sum_{i\in \ZZ/7\ZZ}C_iC_{i-3} \\ &\stackrel{\eqref{eq:Ci}}{=} -\sum_{i\in \ZZ/7\ZZ}C_{i}C_{i+3} +\sum_{i\in \ZZ/7\ZZ} C_iC_{i-3} \ = \ 0, \end{aligned}$$ and this concludes our proof of \eqref{eq:heptagonidentity}. In contrast to Proposition \[prop:noncrossingstandardization\], there exist heptagons with $6$ crossing standardization lines. For example, the convex hull of $$\begin{aligned} p_0&=(\tfrac{7}{5},\tfrac{1}{2}),&p_1&=(\tfrac{6}{5},\tfrac{1}{10}), &p_2&=(1,0), &p_3&=(0,0), \\p_{-3}&=(0,1), &p_{-2}&=(1,1), &p_{-1}&=(\tfrac{6}{5},\tfrac{9}{10})\end{aligned}$$ has $6$ crossing standardization lines (Figure \[fig:6crossings\]). Notice that the symmetry about the $x$-axis makes $\ell_2$ and $\ell_{-2}$ coincide.
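Since Lemma \[lem:invariant\] places no convexity requirement on the configuration, the vanishing of the signed sum can also be spot-checked on random points. A minimal numerical sketch (not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.standard_normal((7, 2))  # arbitrary points; convexity is not needed

def br(x, y, z, i):
    """[x, y, z]_i: determinant of the homogeneous coordinates of
    p_{i+x}, p_{i+y}, p_{i+z}, with indices taken mod 7."""
    rows = [np.append(pts[(i + k) % 7], 1.0) for k in (x, y, z)]
    return np.linalg.det(np.array(rows))

total = sum(
    br(-1, -2, -3, i) * br(2, 1, 0, i)      # A_i B_i
    - br(-1, -2, 0, i) * br(2, 1, -3, i)    # C_i D_i
    + br(2, 1, 0, i) * br(-1, -2, 3, i)     # E_i F_i
    - br(2, 1, 3, i) * br(-1, -2, 0, i)     # G_i H_i
    for i in range(7)
)
assert abs(total) < 1e-9  # the signed sum vanishes identically
```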
![A heptagon with $6$ crossing standardization lines.[]{data-label="fig:6crossings"}](6crossing){width=".6\linewidth"} The intersection complexity of heptagons ---------------------------------------- Using projective transformations to standardize heptagons is the last step towards Theorem \[thm:icheptagon\]. \[lem:projective\] Projective equivalence preserves intersection complexity. Let $\sigma:P_1\to P_2$ be a projective transformation between $k$-dimensional polytopes. Let $Q_1\subset\RR^d$ be a polytope with $\operatorname{ic}(P_1)$ many vertices and let $H$ be an affine $k$-flat such that $Q_1\cap H=P_1$. Finally, let $\tau$ be a projective transformation of $\RR^d$ that leaves invariant both $H$ and its orthogonal complement, and such that $\tau|_H=\sigma$. Then $\tau(Q_1)\cap H=\sigma (P_1)=P_2$. \[lem:icheptagon\] Any heptagon $P$ is a section of a $3$-polytope with no more than $6$ vertices. Let $P$ be a heptagon. By Proposition \[prop:noncrossingstandardization\] it has a non-crossing standardization line, which implies that $P$ is projectively equivalent to a standard heptagon by Lemma \[lem:converttostandard\]. Our claim follows by combining Lemma \[lem:projective\] with Proposition \[prop:icstandardheptagon\]. The combination of this lemma with the lower bound of Proposition \[prop:ic3bound\] finally yields our claimed result. \[thm:icheptagon\] Every heptagon has intersection complexity $6$. The intersection complexity of $n$-gons ======================================= We can use Lemma \[lem:icheptagon\] to derive bounds for the complexity of arbitrary polygons. We begin with a trivial bound that presents $n$-gons as sections of $3$-polytopes. \[thm:ic3ngon\] Any $n$-gon $P$ with $n\geq 7$ is a section of a $3$-polytope with at most $n-1$ vertices. The proof is by induction. The case $n=7$ is Lemma \[lem:icheptagon\]. 
For $n\ge8$, let $x=(a,b)$ be a vertex of $P$, and consider the $(n-1)$-gon $P'$ obtained by taking the convex hull of the remaining vertices of $P$. By induction there is a $3$-polytope $Q'$ with at most $n-2$ vertices such that $Q'\cap H_0=P'\times \{0\}$, where $H_0={\ensuremath{\left\{(x,y,z)\in\RR^3\,\middle|\,z=0\right\}}}$. Then $Q=\operatorname{conv}\big(Q'\cup (a,b,0)\big)$ satisfies $Q\cap H_0=P\times \{0\}$, and has $n-1$ vertices. What is the smallest $f(n)$ such that any $n$-gon is a section of a $3$-polytope with at most $f(n)$ vertices? Is $f(n)\sim \frac23 n$? We can derive more interesting bounds when we allow ourselves to increase the dimension. We only need the following result (compare [@ThomasParriloGouveia2013 Proposition 2.8]). \[lem:icunion\] Let $P_1$ and $P_2$ be polytopes in $\RR^d$, and let $P=\operatorname{conv}(P_1\cup P_2)$. If $P_i$ is a section of a $d_i$-polytope with $n_i$ vertices for $i=1,2$, then $P$ is a section of a $(d_1+d_2-d)$-polytope with not more than $n_1+n_2$ vertices. In particular, $\operatorname{ic}(P)\leq \operatorname{ic}(P_1)+\operatorname{ic}(P_2)$. For $i=1,2$, let $Q_i$ be a polytope in $\RR^{d_i}$ with $n_i$ vertices and such that $Q_i\cap H_i=P_i$, where $H_i={\ensuremath{\left\{x\in \RR^{d_i}\,\middle|\,x_j =0 \text{ for }d< j\leq d_i\right\}}}$ is the $d$-flat that contains the points with vanishing last $d_i-d$ coordinates. Now consider the following embeddings of $Q_1$ and $Q_2$ in $\RR^{d_1+d_2-d}$: - for $q\in Q_1$ let $f_1(q)=(q_1,\dots,q_{d},q_{d+1},\dots,q_{d_1}, 0,\dots,0)$, and - for $q\in Q_2$ let $f_2(q)=(q_1,\dots,q_{d},0,\dots,0,q_{d+1},\dots,q_{d_2})$. Finally, consider the polytope $Q:=\operatorname{conv}\big(f_1(Q_1)\cup f_2(Q_2)\big)$, which has at most $n_1+n_2$ vertices, and the $d$-flat $H:={\ensuremath{\left\{x\in\RR^{d_1+d_2-d}\,\middle|\,x_j=0 \text{ for }d<j\leq d_1+d_2-d\right\}}}$; then $P=Q\cap H$.
\[thm:icngon\] Any $n$-gon with $n\geq 7$ is a section of a $(2+{\left\lfloor{\frac{n}{7}}\right\rfloor})$-dimensional polytope with at most ${\left\lceil {\frac{6n}{7}} \right\rceil}$ vertices. In particular, $\operatorname{ic}(P)\leq {\left\lceil {\frac{6n}{7}} \right\rceil}$. This is a direct consequence of Lemmas \[lem:icheptagon\] and \[lem:icunion\]. Which is the smallest $f(n)$ such that any $n$-gon is a section of a $g(n)$-dimensional polytope with at most $f(n)$ vertices? Are $f(n)=O(\sqrt{n})$ and $g(n)=O(\sqrt{n})$? Acknowledgements {#acknowledgements .unnumbered} ================ The authors want to thank Günter Ziegler and Vincent Pilaud for many enriching discussions on this subject. [10]{} LeRoy B. Beasley and Thomas J. Laffey. . , 431(12):2330–2335, 2009. Special Issue in honor of Shmuel Friedland. Aharon Ben-Tal and Arkadi Nemirovski. On polyhedral approximations of the second-order cone. , 26(2):193–205, 2001. Samuel Fiorini, Thomas Rothvo[ß]{}, and Hans Raj Tiwary. . , 48(3):658–668, 2012. Jo[ã]{}o Gouveia, Pablo A. Parrilo, and Rekha R. Thomas. . , 38(2):248–264, 2013. Jo[ã]{}o Gouveia, Richard Z Robinson, and Rekha R Thomas. . Preprint, [[`arXiv:1305.4600`](http://arxiv.org/abs/1305.4600)]{}, 2013. David Mond, Jim Smith, and Duco van Straten. . , 459(2039):2821–2845, 2003. J[ü]{}rgen Richter-Gebert. , volume 1643 of [ *[Lecture Notes in Math.]{}*]{} Springer, Berlin, 1996. Yaroslav Shitov. . , 122:126–132, 2014. Yaroslav Shitov. . Preprint, [[`arXiv:1412.0728`](http://arxiv.org/abs/1412.0728)]{}, 2014. Mihalis Yannakakis. . , 43(3):441–466, 1991. G[ü]{}nter M. Ziegler. , volume 152 of [*[Grad. Texts in Math.]{}*]{} Springer-Verlag, New York, 1995. [^1]: Supported by the DFG Collaborative Research Center SFB/TR 109 “Discretization in Geometry and Dynamics”. [^2]: Supported by grants COMPOSE (EUI-EURC-2011-4306), MTM 2012-30951, MTM 2011-24097 and 2009-SGR-1040.
--- abstract: | The most X-ray luminous cluster known, RXJ1347-1145 ($z=0.45$), has been the object of extensive study across the electromagnetic spectrum. We have imaged the Sunyaev-Zel’dovich Effect (SZE) at 90 GHz ($\lambda = 3.3$ mm) in RXJ1347-1145 at $10''$ resolution with the 64-pixel MUSTANG bolometer array on the Green Bank Telescope (GBT), confirming a previously reported strong, localized enhancement of the SZE $20''$ to the South-East of the center of X-ray emission. This enhancement of the SZE has been interpreted as shock-heated ($> 20 \, {\rm keV}$) gas caused by an ongoing major (low mass-ratio) merger event. Our data support this interpretation. We also detect a pronounced asymmetry in the projected cluster pressure profile, with the pressure just east of the cluster core $\sim 1.6 \times$ higher than just to the west. This is the highest resolution image of the SZE made to date. author: - 'B. S. Mason, S.R. Dicker, P.M. Korngut, M.J. Devlin, W.D. Cotton, P.M. Koch, S.M. Molnar, J. Sievers, J.E. Aguirre, D. Benford, J.G. Staguhn, H. Moseley, K.D. Irwin, P.Ade' bibliography: - 'rxj1347mustangApr10.bib' title: 'Implications of a High Angular Resolution Image of the Sunyaev-Zel’dovich Effect in RXJ1347-1145' --- Introduction ============ The rich cluster RXJ1347-1145 ($z=0.45$) is the most X-ray luminous galaxy cluster known [@schindler95; @schindler97; @allen02] and has been the object of extensive study at radio, millimeter, submillimeter, optical and X-ray wavelengths [@kitayama04; @komatsu01; @Gitti07; @allen02; @schindler97; @pointecouteau99; @ota08; @cohen02; @bradac08; @miranda08]. Discovered in the ROSAT All-Sky Survey, RXJ1347-1145 was originally thought to be a dynamically old, relaxed system [@schindler95; @schindler97] based on its smooth, strongly-peaked X-ray morphology— a prototypical relaxed “cooling-flow” cluster. 
The NOBA 7 bolometer system on the 45-meter Nobeyama telescope [@kitayama04; @komatsu01] has made high-resolution observations ($13''$ FWHM, smoothed to $\sim 19''$ in the presented map) of the Sunyaev-Zel’dovich effect (SZE) at 150 GHz which indicate, however, a strong enhancement of the SZ effect $20'' \, (170 \, {\rm kpc})$ to the south-east of the peak of the X-ray emission. Hints of this asymmetry had been seen in earlier, lower resolution measurements with the Diabolo $2.1$ mm photometer on the IRAM 30-m [@pointecouteau99]. The enhancement has been interpreted as being due to hot ($T_e > 20 \, {\rm keV}$) gas which is more difficult to detect using X-rays than cooler gas is, owing to the lower responsivities of imaging X-ray telescopes such as Chandra and XMM at energies above $\sim 10 \, {\rm keV}$. In contrast, the SZE intensity is proportional to $T_e$ up to arbitrarily high temperatures, aside from relativistic corrections which are weak at 90 GHz, so such hot gas stands out. The feature is consistent with the presence of a large substructure of gas in the intra-cluster medium (ICM) shock-heated by a merger, as is seen in the “Bullet Cluster” 1E0657-56 [@markevitch02]; this interpretation has been supported by more recent observations [e.g. @allen02; @ota08]. [*Thus, rather than being an example of a hydrostatic, relaxed system, high-resolution SZE observations suggest that the observed properties of the ICM in RXJ1347-1145 are strongly affected by an ongoing merger.*]{} This is a striking cautionary tale for ongoing blind SZE surveys [@carlstrom02], for which useful X-ray data will be difficult or impossible to obtain for many high-$z$ systems, as well as a sign that our current understanding of nearby, well-studied X-ray clusters may be dramatically incomplete.
Reports [@komatsu01; @pointecouteau01; @kitayama04] of a strong enhancement of the SZE away from the cluster center are based on relatively low-resolution images compared to the size of the offsets and features involved. SZE images at lower frequencies also show substantial offsets between the peak of X-ray and SZE emission; for instance, the 21 GHz SZE peak [@komatsu01] is $\sim 20''$ to the SE of the X-ray peak, and the 30 GHz [@reese02] SZE peak is $\sim 13''$ to the SE of the X-ray peak. The situation is further complicated by the presence of a radio source in the center of the cluster. We have sought to test these claims, and to begin to untangle the astrophysics of this interesting system, with higher resolution imaging at a complementary frequency. In this paper we present the highest angular resolution image of the SZE yet made. We observed RXJ1347-1145 with the MUSTANG 90 GHz bolometer array on the Robert C. Byrd Green Bank Telescope (GBT). At the redshift of the cluster (and assuming $\Omega_{M}= 0.3,\Omega_{tot}=1, h=0.73$) the GBT+MUSTANG $9''$ beam corresponds to a projected length of 54 kpc. The observations are described in § \[sec:obs\] and the data reduction in § \[sec:reduc\]. Our interpretation and conclusions are presented in § \[sec:concl\]. Instrument & Observations {#sec:obs} ========================= MUSTANG is a 64 pixel TES bolometer array built for use on the 100-m GBT [@gbtref]. MUSTANG uses reimaging optics with a pixel spacing of $0.63 f\lambda$, operates in a bandpass of 81–99 GHz, and is cooled by a pulse tube cooler and Helium-4/Helium-3 closed cycle refrigerator. Further technical details about MUSTANG can be found in @dicker2006 [@dicker2008]. More detailed information about MUSTANG, the observing strategy, and the data analysis algorithms is provided in @agn and @orion, which present the results of other early MUSTANG observations. Further information can be obtained at the MUSTANG web site[^1].
The observations we present were collected in two runs, one on 21 Feb. 2009 and one on 25 Feb. 2009, each approximately four hours in duration including time spent setting up the receiver and collecting calibration observations. For both runs the sky was clear with $\sim 6 \, {\rm mm}$ of precipitable water vapor, corresponding to $\sim 20$ K zenith atmospheric loading at $3.3 \, {\rm mm}$. Both were night-time sessions, important because during the day variable gradients in the telescope structure’s temperature degrade its 90 GHz performance significantly. The telescope focus was determined by collecting small maps of a bright calibrator source at a range of focus settings; every 30-40 minutes throughout the session the beam was checked on the calibrator. Typically the required focus corrections are stable to a few millimeters over several hours once residual daytime thermal gradients have decayed. Once the focus was established, the in-focus and out-of-focus beam maps were used to solve for primary aperture wavefront phase errors using the “Out-of-Focus” (OOF) holography technique described by @bojanoof. The solutions were applied to the active surface. This procedure improves the beamshape and increases the telescope peak forward gain, typically by $\sim 30\%$. This approach is effective at correcting phase errors on scales of 20 meters or larger on the dish surface, but is not sufficiently sensitive to solve for smaller scales. Therefore there are residual uncorrected wavefront errors that result in sidelobes out to $\sim 40''$ from the main beam. Deep beam maps were collected on the brightest 90 GHz sources on several occasions, principally in test runs on 24/25 March 2009. The repeatability of the GBT 90 GHz beam after application of the OOF solutions was found to be good. The analysis of the beam map data is discussed in § \[sec:beam\].
Maps of RXJ1347-1145 (J2000 coordinates $13h47m30.5s$, $-11^{\circ}45'09''$) were collected with a variety of scan patterns designed to simultaneously maximize on-source time and the speed at which the telescope moved when crossing the source. The effects of atmospheric and instrumental fluctuations, which become larger on longer timescales, are reduced by faster scan speeds. The primary mapping strategies were: a) a “daisy” or “spirograph” scan in which the source of interest is frequently recrossed; and b) a “billiard ball” scan, which moves at an approximately constant speed and has more uniform coverage over a square region of interest. The nominal region of interest in this case was $5'\times 5'$, centered on RXJ1347-1145. The size of the maps is sufficiently small that, except under the most exceptionally stable conditions, instrument and random atmosphere drifts dominate the constant atmosphere (${\rm sec(za)}$) term and any possible ground pickup. The total integration time on source was $3.4 \, {\rm h}$. The asteroid Ceres was observed on both nights and used as the primary flux calibrator assuming $T_B = 148 \, {\rm K}$ (T. Mueller, private comm.). We assign a 15% uncertainty to this calibration. We checked the Ceres calibration on nights when other sources (Saturn, CRL2688) were visible and found consistent results to within the stated uncertainty. Using these observations and the lab-measured receiver optical efficiency of $\eta_{opt,rx} = 50 \pm 10 \%$ we compute an overall aperture efficiency of $\eta_{aperture,gbt} = 20\%$, corresponding to a Ruze-equivalent surface RMS of $315 \micron$. This result is consistent with recent traditional holographic measurements of the GBT surface. Since the observations presented here the surface has been set based on further holography maps and now has a surface RMS, weighted by the MUSTANG illumination pattern, of $\sim 250 \micron$.
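The quoted Ruze-equivalent surface RMS values can be cross-checked with the Ruze formula. The sketch below uses the wavelength and RMS figures from the text; note that the measured $20\%$ aperture efficiency additionally folds in illumination, spillover, and receiver-optics losses, so the Ruze factor alone comes out somewhat higher:

```python
import math

def ruze_efficiency(sigma_um, wavelength_mm=3.3):
    """Ruze formula: eta = exp(-(4*pi*sigma/lambda)**2) for a random
    surface RMS error sigma.  Illumination, spillover, and receiver
    optics prefactors are NOT included."""
    sigma_m = sigma_um * 1e-6
    lam_m = wavelength_mm * 1e-3
    return math.exp(-(4.0 * math.pi * sigma_m / lam_m) ** 2)

eta_315 = ruze_efficiency(315.0)  # ~0.24: Ruze factor for these observations
eta_250 = ruze_efficiency(250.0)  # ~0.40: after the surface was reset
```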
Data Reduction {#sec:reduc} ============== Beam Characterization {#sec:beam} --------------------- Imaging diffuse, extended structure requires a good understanding of the instrument and telescope beam response on the sky. To achieve this we collected numerous beam maps through our observing runs, including several deep beammaps on bright ($5 \, {\rm Jy}$ or more) sources. After applying the Out-of-Focus holography corrections to the aperture the beam results were repeatable; Figure \[fig:beams\] shows the radial beam profile from maps of a bright source (3C279) collected on two occasions. We find a significant error beam concentrated around the main lobe which increases the beam volume from $87 \, {\rm arcsec^2}$ (for the core component only) to $145 \, {\rm arcsec^2}$. We attribute this beam to residual medium and small scale phase errors on the primary aperture. The beam shape and volume is taken into account when comparing to model predictions. By way of comparison, Figure \[fig:beams\] also shows the profile of the beam determined from the radio source in the center of RXJ1347-1145. Since the SZ map has been smoothed, the apparent beam is slightly broader, but allowing for this, still consistent with the beam determined on 3C279. Imaging ------- A number of systematic effects must be taken into account in the time domain data before forming the image: 1. The responsivities of individual detectors are measured using an internal calibration lamp that is pulsed periodically. Optically non-responsive detectors ($10$-$15$ out of 64) are flagged for removal from subsequent analysis. Typical detector responsivities are stable to $2-3\%$ over the course of several hours. 2. Common mode systematic signals are subtracted from the data. These are caused by atmospheric and instrumental (thermal) fluctuations. The pulse tube cooler, which provides the 3K base temperature of the receiver, induces a $1.4$ Hz signal due to small emission fluctuations of the 3K optics. 
The pulse-tube signal is removed by fitting and subtracting a $1.41$ Hz sine wave. The remaining common mode signal is represented by a template formed by a weighted average of data from good pixels; this template is low-pass filtered and subtracted from the data, with a fitted amplitude per detector. The low-pass filter cutoff frequency (typically $0.1 \, {\rm Hz}$) is determined by the stability of the data in question. This procedure helps to preserve large-scale structure in the maps. 3. Slow residual per-pixel drifts are removed using low-order orthogonal polynomials. 4. Individual detector weights are computed from the residual detector timestreams after the above steps. Since the noise level of the detectors varies considerably this is an important step. Best results are obtained by retaining only the top $\sim 80\%$ of responsive detectors. 5. The remaining calibrated detector timestreams are inspected visually on a per-scan (typically 5 minute period) basis. Scans which have timestreams with obvious, poorly-removed systematic signals remaining are removed. This results in flagging $28\%$ of scans. The SNR in an individual detector timestream is sufficiently low that this does not bias our map. Following these calibration steps the detector timestream data are gridded onto a $2''$ pixellization in Right Ascension and Declination using a cloud-in-cell gridding kernel. To check our results we have implemented three, mostly independent analysis pipelines. The results in this paper are based on a straightforward but flexible single-pass pipeline written in IDL, described above. There is also an iterative, single-dish CLEAN based approach implemented in the OBIT package [@obit] and an optimal SNR method in which the time domain data are decomposed into noise (covariance) eigenvectors; their temporal power spectra computed; and a maximum likelihood map constructed from the noise-weighted eigenvectors. Results obtained with these algorithms were consistent.
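The common-mode subtraction in step 2 can be sketched in a few lines. The array shapes, sample rate, and FFT-based low-pass filter below are illustrative placeholders, not the actual pipeline implementation; the real pipeline also handles detector flagging, weighting, and the pulse-tube line:

```python
import numpy as np

def remove_common_mode(tod, fsamp=1000.0, cutoff=0.1, weights=None):
    """tod: (ndet, nsamp) detector timestreams.
    Build a common-mode template (weighted detector average), low-pass
    filter it, fit one amplitude per detector, and subtract."""
    ndet, nsamp = tod.shape
    w = np.ones(ndet) if weights is None else weights
    template = np.average(tod, axis=0, weights=w)

    # low-pass filter the template in the Fourier domain
    freqs = np.fft.rfftfreq(nsamp, d=1.0 / fsamp)
    ft = np.fft.rfft(template)
    ft[freqs > cutoff] = 0.0
    template = np.fft.irfft(ft, n=nsamp)

    # fit and subtract a per-detector amplitude
    amps = tod @ template / (template @ template)
    return tod - amps[:, None] * template
```

For a purely common-mode input this removes the signal exactly; in practice the fitted amplitudes differ between detectors because each one couples to the atmosphere with a different gain.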
The first two approaches are described in more detail in @orion. Our final map, smoothed by a $4''$ FWHM Gaussian and gridded on $0''.5$ pixels, is shown in Figure \[fig:finalmap\], along with the difference between the two individual night maps. It shows a strong, clear SZ decrement, well separated from the central point source and consistent with the level expected from the @kitayama04 [hereafter K04] 150 GHz measurement. The right hand panel shows the image formed by differencing the images of the two individual nights. By computing the RMS in a fiducial region in the center of the difference image (and scaling down by a factor of $2$ to account for the differencing and the shorter integration times) we estimate a map-center image noise of $\sim 0.3 \, {\rm mJy/bm}$ (rms). The noise level in regions of the map outside the fiducial region is corrected for exposure time variations assuming Gaussian, random noise with a white power spectrum. The enhancement of the SZE to the south east of the X-ray peak, originally detected by Komatsu et al. at $4.2\sigma$ significance, is confirmed by our measurement at $5.4 \sigma$ (indicating the peak SNR per beam) with a factor of $\sim$ 2 greater angular resolution. A detailed assessment of the impact of this is presented in § \[sec:sims\]. Work is underway to develop analysis techniques which account for correlated noise in a way that permits quantitative model fitting. ![image](Threemap_1347.pdf){width="6in"} Simulations {#sec:sims} ----------- It is difficult to measure diffuse, extended structure such as the SZE, particularly in the presence of potentially contaminating systematic signals such as time-varying atmospheric fluctuations. To assess the impact of residual, unmodelled noise fluctuations in the maps we have undertaken an extensive suite of simulations which replace the raw detector timestream data with simulated data. 
As a source for the simulated data we used real detector timestreams recorded during observations of a blank patch of sky for another project. The phase of these timestreams with respect to the telescope trajectory on RXJ1347-1145 was randomly shifted to create different instances of noisy cluster observations. We added simulated astronomical signals as described below in order to determine how well the (known) input signals are recovered in the maps. To assess the spatial fidelity of our reconstructed images, random white-noise skies were generated on $2''$ grids, subsequently smoothed by a $9''$ (FWHM) Gaussian. These skymaps served as input to generate fake timestreams which were then processed by the exact processing scripts used to produce the image in Figure \[fig:finalmap\]. The ratio of the absolute magnitude of the Fourier transform of the reconstructed sky map to the absolute magnitude of the Fourier transform of the input skymap measures the fidelity of our image reconstructions as a function of angular scale. The results of repeating this 100 times, with different white-noise skies and noise instances, are shown in Figure \[fig:sim\]. We find that our pipeline faithfully recovers structures up to $60''$, with reasonable response but some loss of amplitude on larger scales, up to $120''$. The loss of structure on small angular scales is an effect of our relatively coarse pixellization. Simulations were carried out at signal-to-noise ratios similar to those in our final map, although changes in the signal-to-noise ratio of over a factor of 5 showed no significant change to our transfer function. The common-mode subtraction, essential to removing atmospheric and instrumental systematic signals, can also introduce negative bowls around bright point sources which could mimic the SZE in cases such as RXJ1347-1145. To determine the magnitude of this systematic we have followed a similar approach.
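The spatial-fidelity measure above, the ratio of Fourier amplitudes of the recovered sky to the input sky, can be sketched as follows. The map size and binning are arbitrary choices, and the real estimate averages this ratio over many sky and noise realizations run through the full pipeline:

```python
import numpy as np

def transfer_function(input_map, output_map, nbins=16):
    """Azimuthally averaged ratio |FFT(output)| / |FFT(input)| in
    radial spatial-frequency bins."""
    fi = np.abs(np.fft.fft2(input_map))
    fo = np.abs(np.fft.fft2(output_map))
    ky, kx = np.meshgrid(np.fft.fftfreq(input_map.shape[0]),
                         np.fft.fftfreq(input_map.shape[1]), indexing="ij")
    k = np.hypot(kx, ky)
    bins = np.linspace(0.0, k.max(), nbins)
    idx = np.digitize(k.ravel(), bins)
    num = np.bincount(idx, fo.ravel(), minlength=nbins + 1)
    den = np.bincount(idx, fi.ravel(), minlength=nbins + 1)
    return num[1:-1] / np.maximum(den[1:-1], 1e-30)
```

Applied to an identity "pipeline" (output equal to input) the ratio is unity by construction; applied to real input/recovered pairs it yields curves like those in Figure \[fig:sim\].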
Instead of white-noise skies the input signal consists of a single unresolved source with a flux density of 5 mJy at the location of the radio source seen in RXJ1347-1145. The resulting negative bowl in the reconstructed images has a mean peak spurious decrement $\sim 2\%$ of the point source peak brightness, in comparison with $\sim 50\%$ for our real data. Additionally the iterative pipeline (OBIT) is much less susceptible to such artifacts, and shows consistent results. We conclude that this is not a significant contribution to our result. Image Domain Noise Estimate --------------------------- We divide the data set in half and subtract the individual night images to obtain a difference map. The RMS of this difference map in the central 85 by 93 arcseconds, divided by two to correct for the differencing and the reduced integration time in each individual night image, gives an image noise level of $0.3 \, {\rm mJy}$. A histogram of the pixel values in this region of the difference image is shown in Figure \[fig:hist\]. We obtain a complementary estimate of the noise from a region of the final SZ map well away from the cluster. Correcting for the difference in integration times in these regions of the map, this result is consistent to within 8%. Using this noise figure, the peak SNR per beam in the map— on the SZE decrement SE of the cluster core— is $5.4\sigma$. More aggressive filtering of the map results in an even higher detection significance for the SE enhancement by reducing the low spatial frequency tail of the noise power spectrum (Figure \[fig:sim\]). The Effects, and Subtraction, of the Central Radio Source --------------------------------------------------------- Our final map has sufficient angular resolution to distinguish the central radio source from the structures of interest.
In particular it is clear that, as seen in earlier analyses of the SZE in this cluster [@pointecouteau99], there is a strong azimuthal variation in the intensity of the SZE at a radius of $\sim 20''$ from the X-ray centroid, which also coincides with the radio source. To produce a source-subtracted image we fit and subtract an azimuthally symmetric, double-Gaussian beam (as determined from 3C279 in § \[sec:beam\]). The reason for assuming azimuthal symmetry is that the hour angle sampling of the 3C279 data is considerably more limited than that of the RXJ1347-1145 data; therefore the 3C279 data will not provide a good measurement of the effective two-dimensional beam, only of its average radial profile. Furthermore the SNR on the point source in RXJ1347-1145 is insufficient to measure significant departures from azimuthal symmetry. The average radial profile of the central source in RXJ1347-1145 is shown in Figure \[fig:beams\] out to $r=15''$, where the signal becomes too weak to measure above thermal noise and variations in the SZE. Effect of Background Anisotropies --------------------------------- The angular scales reconstructed in our map ($\sim 1'.5$ and smaller) correspond to spherical harmonic multipoles of $\ell = 7200$ and higher. On these scales intrinsic CMB anisotropies are strongly suppressed by photon diffusive damping at the last scattering surface and do not contribute measurably to our result at the sensitivity level we have achieved. ![image](Composite_and_xray_Oct13_2009.pdf){width="6in"} Interpretation & Conclusions {#sec:concl} ============================ Comparison with Previous SZE Observations ----------------------------------------- Figure \[fig:Noba\] presents a direct comparison of the MUSTANG and NOBA results in units of main-beam averaged Compton y parameter. For a more accurate comparison, we downgrade the resolution and pixel scale of the MUSTANG map to match that of NOBA ($13''$ FWHM on a $5''$ pixel grid).
The overall agreement between the maps is excellent, in particular as regards the amplitude and morphology of the local enhancement of the SZE south-east of the cluster core. The largest discrepancy is south-west of the cluster, where NOBA shows a $3\sigma$ compact decrement which is absent from the MUSTANG data. Considering the low and uniform X-ray surface brightness in the vicinity of this discrepancy (see Figure \[fig:composite\]) and the higher angular resolution and lower noise of the MUSTANG data, it is likely that this feature is an artifact in the NOBA map. Both datasets also show a ridge extending north from the shock front on the eastern side of the cluster. In the 150 GHz map the feature is of marginal significance ($1-2\sigma$); interestingly, it is clearly visible in the 350 GHz SZE increment map, but K04 dismiss it due to possible confusion with dust emission from the nearby galaxies. Empirical Model of the SZE in RXJ1347-1145 ------------------------------------------ We construct a simple empirical model for the cluster SZE assuming the isothermal $\beta$-model of @schindler97, normalized by the SZE measurements of @reese02 and @kitayama04, to describe the bulk cluster emission. We add a 5 mJy point source in the cluster core, coincident with the peak of the $\beta$-model, and two Gaussian components in integrated pressure, one south-east and one almost directly east of the cluster center. In comparing to our 90 GHz data, we use the relativistic correction of @sazonov98, assuming $kT = 25 \, {\rm keV}$ (which reduces the amplitude of the decrement by 15%) for the Gaussian components and $kT = 10 \, {\rm keV}$ for the bulk component. The parameters chosen (two Gaussian widths for each component, a position, a peak surface brightness, and a position angle) are shown in Table \[tbl:szmodel\]. The resulting sky image is convolved with our PSF (§ \[sec:beam\]) and transfer function (§ \[sec:sims\]).
We find that this provides a good match to the data (Figure \[fig:szmodel\]). The peak comptonization at $10''$ Gaussian resolution is $3.9 \times 10^{-4}$ on the eastern ridge and $6.0 \times 10^{-4}$ on the region identified as a shock by Komatsu et al. When convolved to $19''$ FWHM (NOBA) resolution, we find $\Delta y = 3.9 \times 10^{-4}$, close to their observed value $\Delta y = 4.1 \times 10^{-4}$. The intent of this static, phenomenological model is simply to provide a description of the observed high angular-resolution SZE and a direct comparison of NOBA and MUSTANG results. Work is underway which will allow quantitatively determining the best-fit physical model by simultaneously fitting datasets at multiple wavelengths using a Markov chain Monte Carlo. This work is beyond the scope of this paper and will be presented in a follow-up publication. --------------- ----------------- ---------- ---------------------------------------------------------- Component Amplitude Offset Notes \[$y/10^{-3}$\] \[$''$\] $\beta$-model $1.0$ $0,0$ $\theta_c = 10''$, $\beta=0.60$ Shock $1.6$ -14, 14 $\sigma_1=8''$, $\sigma_2=2''$, ${\rm P.A.}=45^{\circ}$ Ridge $1.0$ 10, 14 $\sigma_1=8''$, $\sigma_2=2''$, ${\rm P.A.}=-15^{\circ}$ --------------- ----------------- ---------- ---------------------------------------------------------- Multi-wavelength Phenomenology ------------------------------ Our data show an SZE decrement with an overall significance of $5.4 \sigma$. At the center of the cluster, coincident with the peak of X-ray emission and the brightest cluster galaxy (BCG), there is an unresolved $5 {\,{\rm mJy} }$ radio source. This flux density is consistent with the 90 GHz flux density presented in @pointecouteau01, as well as what is expected from a power law extrapolation of $1.4$ GHz and 30 GHz measurements [@NVSS; @reese02].
A strong, localized SZE decrement can be seen $20''$ to the south-east of the center of X-ray emission and clearly separated from the cluster center. Our data also indicate a high-pressure ridge immediately to the east of the cluster center. K04 tentatively attribute the south-east enhancement to a substructure of gas $240 \pm 183 \, {\rm kpc}$ in length along the line of sight, at a density (assumed uniform) of $(1.4 \pm 0.59) \times 10^{-2} \, {\rm cm^{-3}}$ and with a temperature $T_e = 28.5 \pm 7.3 \, {\rm keV}$. Recent X-ray spectral measurements [@ota08] with SUZAKU also indicate the presence of hot gas in the south-east region ($T_e = 25.1^{+6.1}_{-4.5} \ ^{+6.9}_{-9.5} \, {\rm keV}$ with statistical and systematic errors, respectively, at 90% confidence level). @allen02 have reported that the slight enhancement of softer X-ray emission in this region seen by Chandra is consistent with the presence of a small substructure of hot, shocked gas. @kitayama04 attribute the hot gas to an ongoing merger in the plane of the sky. The merger hypothesis is supported by optical data, in particular, the presence of a second massive elliptical $\sim 20''$ directly to the east of the BCG that coincides with the center of X-ray emission (and with the radio point source). Furthermore, the density and temperature of the hot substructure indicate that it is substantially overpressured compared to the surrounding ICM. Assuming a sound speed of $1600 \, {\rm km/sec}$, this overpressure region should relax into the surrounding ICM on a timescale $\sim 0.1 \, {\rm Gyr}$, again arguing for an ongoing merger. Our data support this merger scenario. To put them in context, Figure \[fig:composite\] shows a composite image with archival Chandra and HST data, and the weak + strong lensing mass map of @bradac08. We propose that the data are best explained by a merger occurring in or near the plane of the sky.
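The $\sim 0.1 \, {\rm Gyr}$ relaxation estimate quoted above is essentially a sound-crossing time for the $\sim 240 \, {\rm kpc}$ substructure at the assumed sound speed; a quick check of the arithmetic (unit constants only, no additional cluster physics):

```python
KPC_KM = 3.0857e16   # kilometres per kiloparsec
GYR_S = 3.156e16     # seconds per gigayear

length_kpc = 240.0   # line-of-sight extent of the hot substructure (K04)
c_sound = 1600.0     # assumed sound speed, km/s

# Sound-crossing (relaxation) time of the overpressured region.
t_cross_s = length_kpc * KPC_KM / c_sound
t_cross_gyr = t_cross_s / GYR_S   # comes out ~0.15 Gyr, i.e. ~0.1 Gyr as quoted
```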
The left-hand (“B”) cluster, having fallen in from the south-[*west*]{}, has just passed closest approach and is hooking around to the north-west. As the clusters merge, a shock forms, heating the gas in the wake of its passage. As argued by [@kitayama04], and seen in simulations [@takizawa99], the clusters must have masses within a factor of 2 or 3 of equality and a substantial ($\sim 4000 \, {\rm km/sec}$) relative velocity in order to produce the high observed plasma temperatures, $T_e > 20 \, {\rm keV}$. This merger geometry is consistent with the lack of structure in the line-of-sight cluster member galaxies’ velocities [@cohen02]. The primary (right-hand, “A”) cluster contains significant cold and cooling gas in its core (a “cooling flow”). Such gas is seen to be quite robust in simulated major cluster mergers [@gomez02; @poole08]. Even in cases where the cooling flow is finally disrupted by the encounter, [@gomez02] find a delay of $1-2$ Gyr between the initial core encounter and the collapse of the cooling flow. The existence of a strong cooling flow, therefore, does not argue against a major merger in this case. More detailed simulations could shed further light on this interesting system. Broad Implications ------------------ Since calibrating SZ observable-mass relationships is vital to understanding the implications of ongoing SZE surveys, it is important to understand the mechanism by which a substantial portion of the ICM can be heated so dramatically, and how this energy is distributed through the ICM over time. Observations of cold fronts in other clusters [e.g. @v01] have shown that energy transport processes in the ICM are substantially inhibited, perhaps by magnetic fields. It is distinctly possible, then, that once heated by shocks, very hot phases would persist.
We have estimated the magnitude of the bias in an arcminute-resolution Compton $y$ measurement that is introduced by a hot gas phase by convolving the two Gaussian components of the SZE model in Table \[tbl:szmodel\] with a $1'$ FWHM Gaussian beam, typical of SZ survey telescopes such as ACT [@actref; @actclus] or SPT [@sptref; @sptcat]. Compared with the bulk emission component, also convolved with a $1'$ beam, the small-scale features are a 10% effect. While relatively modest, this is a systematic bias in the Compton $y$ parameter which, if not properly accounted for, would result in a $20\%$ overestimate in distances (underestimate in $H_0$) derived from a comparison of the SZE and X-ray data which did not allow for the presence of the hot gas component. To assess the impact on the [*scatter*]{} in $M-y$, a larger sample of high-resolution SZE measurements is needed. A full calculation would also need to take into consideration effects such as detection apertures and the spatial filtering due to imaging algorithms, some of which would increase the importance of the effect and some of which would decrease it. This is one of very few clusters that have been observed at sub-arcminute resolution in the SZE [see also @nord09], so it is possible that many clusters exhibit similar behavior. Such events, if their enhancement of the SZE brightness is transient, could also bias surveys towards detecting kinematically disturbed systems near the survey detection limits. The astrophysics that has been revealed by high resolution X-ray observations, and is beginning to be revealed by high resolution SZE data, is interesting in its own right. The SZE observations require large-aperture millimeter telescopes, which have hitherto been lacking, but with both large single dishes and ALMA coming online, exciting observations will be forthcoming.
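On the distance bias quoted in the discussion above: in the SZE/X-ray method the angular-diameter distance scales roughly as $D_A \propto (\Delta y)^2 / S_X$ at fixed X-ray surface brightness, so a fractional bias in the inferred Compton $y$ is doubled, to first order, in the derived distance. A sketch of that propagation (the 10% figure is the beam-diluted bias estimated in the text; the quadratic scaling is the standard one, stated here as an assumption):

```python
y_bias = 0.10  # fractional overestimate of Compton y from unmodelled hot gas

# With D_A proportional to y**2 / S_X at fixed X-ray surface brightness S_X,
# the fractional distance bias is (1 + y_bias)**2 - 1, approximately 2*y_bias.
d_bias = (1.0 + y_bias) ** 2 - 1.0    # ~0.21, i.e. the ~20% distance overestimate
h0_bias = 1.0 / (1.0 + d_bias) - 1.0  # corresponding fractional underestimate of H0
```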
There is substantial room for improvement: since the observations reported here, the GBT surface has improved from $320 \micron$ RMS to $250 \micron$ RMS, which will yield more than a factor of $1.5$ improvement in sensitivity. The array used in these observations, while state-of-the-art, has not yet achieved sky-photon-noise-limited performance; further progress is being made in this direction. Considering these facts, and that the results presented here were acquired in a short period of allocated telescope time (8 h), this new high-resolution probe of the ICM has a bright future. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank Eiichiro Komatsu for providing the NOBA SZ map; Marusa Bradac for providing her total mass map; Masao Sako, Ming Sun, Maxim Markevitch, Tony Mroczkowski and Erik Reese for helpful discussions; and Rachel Rosen and an anonymous referee for comments on the manuscript. [^1]: [http://www.gb.nrao.edu/mustang/]{}
--- abstract: 'We present the results of a systematic investigation of the qualitative behaviour of the Friedmann-Lema[î]{}tre-Robertson-Walker (FLRW) and Bianchi I and V cosmological models in Randall-Sundrum brane-world type scenarios.' address: | $^*$Relativity and Cosmology Group,\ Portsmouth University, Portsmouth PO1 2EG, Britain author: - 'Antonio Campos$^*$ and Carlos F. Sopuerta$^*$' title: | Dynamics of Cosmological Models\ in the Brane-world Scenario --- Recently, Randall and Sundrum have shown that for non-factorizable geometries in five dimensions the zero-mode of the Kaluza-Klein dimensional reduction can be localized in a four-dimensional submanifold [@RanSun:1999b]. The picture of this scenario is a five-dimensional space with an embedded three-brane where matter is confined and Newtonian gravity is effectively reproduced at large distances. Here, we summarize the qualitative behaviour of FLRW and Bianchi I and V cosmological models in this scenario (see [@CamSop:2001] for more details). In particular, we have studied how the dynamics changes with respect to the general-relativistic case. For this purpose we have used the formulation introduced in [@ShiMaeSas:2000]. From the Gauss-Codazzi relations the Einstein equations on the brane are modified with two additional terms. The first term is quadratic in the matter variables and the second one is the electric part of the five-dimensional Weyl tensor. In this communication we will consider the effects due to the first term. The study including both corrections has been carried out in [@CamSop:2001b]. We also assume that the matter content is described by a perfect fluid with energy density, $\rho$, and pressure, $p$, related by a linear barotropic equation of state, $p=(\gamma - 1)\rho$ with $\gamma \in [0,2]$. 
When the brane dynamics is described by a FLRW model we find five generic critical points: the flat FLRW models ($\mbox{F}$); the Milne universe ($\mbox{M}$); the de Sitter model ($\mbox{dS}$); the Einstein universe ($\mbox{E}$); and the non-general-relativistic Bin$\acute{\mbox{e}}$truy-Deffayet-Langlois (BDL) model ($\mbox{m}$) [@BinDefLan:2000]. The dynamical character of these critical points and the structure of the state space depend on the equation of state, or in other words, on the parameter $\gamma$. This means that we have bifurcations for some values of $\gamma$, namely $\gamma=0, \textstyle{1\over3}, \textstyle{2\over3}$. The bifurcation at $\gamma=\textstyle{1\over3}\,$ is a genuine feature of the brane world and is characterized by the appearance of an infinite number of non-general-relativistic critical points. The Einstein Universe critical point appears for $\gamma\geq \textstyle{1\over3}$, in contrast with the general-relativistic case, where it appears for $\gamma\geq\textstyle{2\over3}$. Actually, for $\textstyle{1\over3} < \gamma < 2$ we do not have an isolated critical point corresponding to the Einstein universe but a line of critical points, as can be seen in the state space shown in Figure \[esp3\]. Another important feature of these scenarios is that the dynamical character of some of the points changes. For instance, the expanding and contracting flat FLRW models, which in general relativity are repeller and attractor for $\gamma>\textstyle{2\over3}$, are now saddle points for all values of $\gamma$. The new non-general-relativistic critical point, the BDL solution [@BinDefLan:2000], describes the dynamics near the initial Big-Bang singularity and, for recollapsing models, near the Big-Crunch singularity. More precisely, the dynamical behaviour near these singularities is governed by a scale factor $a(t)=t^{1/(3\gamma)}$ which differs from the standard evolution in general-relativistic cosmology, where $a(t)=t^{2/(3\gamma)}$. 
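The modified early-time expansion law quoted above follows from the quadratic energy-density correction to the Friedmann equation on the brane; a short sketch of the derivation (dropping curvature, cosmological-constant and dark-radiation terms, and keeping only the dominant high-energy term, as in the text):

```latex
% High-energy limit of the brane Friedmann equation: the quadratic term dominates,
%   H^2 \propto \rho^2 .
% For the linear barotropic fluid p = (\gamma - 1)\rho, energy conservation gives
%   \rho \propto a^{-3\gamma} ,
% so
%   \frac{\dot a}{a} \propto a^{-3\gamma}
%   \;\Longrightarrow\; a^{3\gamma - 1}\,\dot a = \mathrm{const}
%   \;\Longrightarrow\; a(t) \propto t^{1/(3\gamma)} ,
% whereas the general-relativistic law H^2 \propto \rho yields
%   a(t) \propto t^{2/(3\gamma)} .
```

This is why the BDL behaviour near the Big-Bang and Big-Crunch singularities differs from the standard general-relativistic evolution precisely by the factor of two in the exponent.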
Finally, the general attractor for ever-expanding universes is, as in general relativity, the de Sitter model. For recollapsing universes, which now appear for $\gamma>\textstyle{1\over3}$, the contracting BDL model is the general attractor. However, if we only consider the invariant manifold representing general relativity, the contracting Friedmann universe is the general attractor for $\gamma > 2/3$. On the other hand, for zero cosmological constant and $\gamma < 2/3$ the expanding Friedmann universe is also an attractor. For the homogeneous but anisotropic Bianchi I and V cosmological models, which contain the flat and negatively curved FLRW models respectively, we find an additional critical point, namely the Kasner vacuum spacetimes ($\mbox{K}$). In the Bianchi I case the state space can be represented by the same type of drawings used for the non-positive spatial curvature sector of the FLRW evolution (see Figure \[esp3\]). A representative set of diagrams for Bianchi V models is given in [@CamSop:2001]. For Bianchi I models we have found a new bifurcation at $\gamma=1$ and for Bianchi V models at $\gamma=\textstyle{1\over3},1\,$, in addition to the general relativity bifurcations at $\gamma=0,2\,$ and $\gamma=0,\textstyle{2\over3},2\,$, respectively. Some of the dynamical features explained above for the FLRW models are shared by these Bianchi models. However, the most interesting point here is the possibility of studying the dynamics of anisotropy in brane-world scenarios. Specifically, we have seen [@CamSop:2001] that, although now we can have intermediate stages in which the anisotropy grows, expanding models isotropize as happens in general relativity. This is expected since the energy density decreases and, hence, the effect of the extra dimension becomes less and less important. The situation near the Big Bang is more interesting.
In the brane-world scenario anisotropy dominates only for $\gamma<1$, whereas in general relativity it dominates for all the physically relevant values of $\gamma$. To conclude, let us summarize the main features of the dynamics of cosmological models on the brane. First, we have found new equilibrium points, the BDL models [@BinDefLan:2000], representing the dynamics at very high energies, where the extra-dimension effects become dominant. Thus, we expect them to be a generic feature of the state space of more general cosmological models in the brane-world scenario. Second, the state space presents new bifurcations for some particular equations of state. Third, the dynamical character of some of the critical points changes with respect to the general-relativistic case. Finally, for models in the range $1< \gamma \leq 2$, that is, for models satisfying all the ordinary energy conditions and causality requirements, we have seen that the anisotropy is negligible near the initial singularity. This naturally leads to the question of whether the oscillatory behaviour approaching the Big Bang predicted by general relativity is still valid in brane-world scenarios. We are currently investigating this issue by considering Bianchi IX cosmological models [@CamSop:2001c]. [**Acknowledgments:**]{} This work has been supported by the European Commission (contracts HPMF-CT-1999-00149 and HPMF-CT-1999-00158). L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{}, 4690 (1999). A. Campos and C. F. Sopuerta, Phys. Rev. D [**63**]{}, 104012 (2001). T. Shiromizu, K. Maeda, and M. Sasaki, Phys. Rev. D [**62**]{}, 024012 (2000). A. Campos and C. F. Sopuerta, submitted to Phys. Rev. D (hep-th/0105100). P. Bin$\acute{\mbox{e}}$truy, C. Deffayet, and D. Langlois, Nucl. Phys. [**B565**]{}, 269 (2000). A. Campos and C. F. Sopuerta, [*in preparation*]{}.
--- abstract: 'We study correspondences between algebraic curves defined over the separable closure of ${\Bbb Q}$ or $\F_p$.' address: - | Courant Institute of Mathematical Sciences, N.Y.U.\ 251 Mercer str.\ New York, NY 10012, U.S.A. - | Department of Mathematics\ Princeton University\ Fine Hall, Washington Road\ Princeton, NJ 08544-1000, U.S.A. author: - Fedor Bogomolov - Yuri Tschinkel title: Unramified correspondences --- Introduction {#sect:introduction .unnumbered} ============ A class $\cC(\ovl{{\Bbb Q}})$ of complete algebraic curves over $\ovl{{\Bbb Q}}$ will be called [*dominating*]{} if for every algebraic curve $C'$ over $\ovl{{\Bbb Q}}$ there exist a curve $\tilde{C}\in \cC(\ovl{{\Bbb Q}})$ and a birational surjective map $\tilde{C}\ra C'$. A curve $C$ will be called [*universal*]{} if the class $\cU_C(\ovl{{\Bbb Q}})$ of its unramified covers is dominating. Every algebraic curve $C$ defined over a number field admits a surjective map onto $\P^1$ which is unramified outside $(0,1,\infty)$. In 1978 Manin pointed out that Belyi’s theorem implies the following \[prop:domi\] The class $\cM\cU(\ovl{{\Bbb Q}})$ consisting of modular curves and their unramified covers is dominating. There are many other classes of curves with the same property, for example: 1. hyperelliptic curves and their unramified coverings; 2.
the class $\cC\cU(\ovl{{\Bbb Q}}):=\cup_{n\in \N} \cC_n(\ovl{{\Bbb Q}})$, with $\cC_n(\ovl{{\Bbb Q}})$ consisting of curves with function field $\ovl{{\Bbb Q}}(z,\sqrt[n]{z(1-z)})$ and their unramified coverings. 3. the class $\cC\cN(\ovl{{\Bbb Q}}):=\cup_{n\in \N} \cC\cN_n(\ovl{{\Bbb Q}})$ where $\cC\cN_n(\ovl{{\Bbb Q}})$ consists of all unramified covers of any curve $C_n$ with the property that $C_n\ra\P^1$ is ramified in $(0,1,\infty)$ only and all local ramification indices of $C_n$ over 0 are divisible by 3, over 1 divisible by 2 and over $\infty$ divisible by $n$. In particular, we could take $C_n$ to be the modular curve $X(n)$. (Sketch) Let us consider the class of hyperelliptic curves and their unramified covers. Let $C'$ be an arbitrary curve and $\sigma\,:\, C'\ra \P^1$ a [*generic*]{} map, branched over the points $q_1,...,q_n$ (generic means that there is only one ramification point over each branch point and all local ramification indices are equal to 2). Denote by $C$ a hyperelliptic curve whose ramification contains $q_1,...,q_n$. Then $\tilde{C}:=C\times_{\P^1} C'$ is an unramified cover of $C$ which surjects onto $C'$. For the classes $\cC\cU(\ovl{{\Bbb Q}})$ and $\cC\cN(\ovl{{\Bbb Q}})$ we use Belyi’s theorem. \[q:1\] Does there exist a universal algebraic curve $C$ (over $\ovl{{\Bbb Q}}$)? \[q:2\] Does there exist a number $n\in \N$ such that every curve defined over $\ovl{{\Bbb Q}}$ admits a surjective map onto $\P^1$ with ramification over $(0,1,\infty)$ such that all local ramification indices are $\le n$? \[q:3\] Is every curve $C$ (over $\ovl{{\Bbb Q}}$) of genus $g(C)\ge 2$ universal? It is clear that an affirmative answer to Question \[q:2\] implies a (constructive) affirmative answer to Question \[q:1\]. In this note we answer these questions in a simple model situation: instead of $\ovl{{\Bbb Q}}$ we consider the (separable) closure $\ovl{F}_p$ of the finite field ${\F}_p$. 
\[thm:main\] Let $p\ge 5$ be a prime and $C$ a hyperelliptic curve over $\ovl{{\F}}_p$ of genus $g(C)\ge 2$. Then $C$ is universal: for any projective curve $C'$ there exist a finite étale cover $\tilde{C}\ra C$ and a surjective regular map $\tau\,:\,\tilde{C}\ra C'$. In Section \[sect:var\] we prove the following geometric fact (over arbitrary algebraically closed fields of characteristic $\neq 2,3$): \[prop:hypo\] Every hyperelliptic curve $C$ has a finite étale cover $\tilde{C}$ which surjects onto the genus 2 curve $C_0$ given by $\sqrt[6]{z(1-z)}$. In particular, if $C_0$ is universal then every hyperelliptic curve of genus $\ge 2$ is universal. Applying the Chevalley-Weil theorem we conclude that the Mordell conjecture (Faltings’ theorem) for $C_0$ implies the Mordell conjecture for every hyperelliptic curve of genus $\ge 2$. The fact that there is some interaction between the arithmetic of different curves has been noted previously. Moret-Bailly and Szpiro showed (see [@S], [@M]) that the proof of an [*effective*]{} Mordell conjecture for [*one*]{} (hyperbolic) curve (for example, $C_0$) implies the ABC-conjecture, which in turn implies an effective Mordell conjecture for [*all*]{} (hyperbolic) curves (Elkies [@E]). Here [*effective*]{} means an explicit bound on the height of a $K$-rational point on the curve for all number fields $K$. Here again, Belyi’s theorem is used in an essential way. [**Acknowledgments.**]{} We have benefited from conversations with B. Hassett and A. Chambert-Loir. The first author was partially supported by the NSF. The second author was partially supported by the NSF and the Clay foundation. Main construction {#sect:2} ================= \[nota:loci\] Let $\tau\,:\, C\ra C' $ be a surjective map of algebraic curves. We denote by $\Ram(\tau)\subset C$ the ramification locus of $\tau$ and by $\Bran(\tau)=\tau(\Ram(C))\subset C'$ the branch locus of $\tau$. 
For a point $q\in C$ we denote by $e_{q}(\tau)$ the local ramification index at $q$. We denote by $$e(\tau):=\max_{q\in C}e_{q}(\tau)$$ the maximum local ramification index of $\tau$. We say that $\tau$ has [*simple*]{} ramification if $e(\tau)\le 2$ and that $\tau$ is [*generic*]{} if in addition there is only one ramification point over each branch point. Every curve admits a generic map onto $\P^1$, at least after a separable extension of the ground field. Let $p\ge 5$ be a prime number. In this section we work over a separable closure $\ovl{\F}_p$ of the finite field $\F_p$. First we show that there exists at least one curve satisfying the conclusion of Theorem \[thm:main\]. Let $\pi_0\,:\, E_0\ra \P^1$ be the elliptic curve given by $$\sqrt[3]{z(z-1)}.$$ Let $\sigma_0\,:\, C_0\ra \P^1$ be the genus 2 curve given by $$\sqrt[6]{z(z-1)},$$ and $\iota_0\,:\, C_0\ra E_0$ the corresponding 2-cover. Clearly, $\iota_0$ has simple ramification over the preimages of $0,1$. Let $C$ be an arbitrary curve. Choosing a generic function on $C$ we get a generic covering $\sigma\,:\, C\ra \P^1$ (such a covering is defined over $\ovl{\F}_p$). Assume further that $\Bran(\sigma)\subset \P^1$ does not contain $(0,1,\infty)$. Consider the diagram Here $C_1 = C \times_{\P^1} E_0$ (it is irreducible since $E_0\ra \P^1$ is a 2-cover). Then $C_1\ra E_0$ has simple ramification over a finite number of points in $E_0$. Recall that $E_0$ has a group scheme structure, and [*all*]{} $\ovl{{\F}}_p$-points of $E_0$ are torsion points. This implies that there exists an étale map $E_0\ra E_0$ such that all ramification points of $C_1$ over $E_0$ are mapped to $0$. More precisely, any finite set of $\ovl{{\F}}_p$-points of $E_0$ is contained in the group subscheme $E_0^{et}[n]\subset E_0$, the maximal étale subgroup of the multiplication-by-$n$ kernel $E_0[n]$ (for some $n\in \N$).
For every positive integer $n$ there exists a positive multiple $m$ of $n$ and an étale map $E_0\ra E_0$ with kernel $E_0^{et}[m]$. Taking the composition of $C_1\ra E_0$ with the multiplication by a suitable $m$, we get a (possibly new) surjective regular map $C_1\ra E_0$ which is ramified only over the zero point in $E_0$ and has the property that all the local ramification indices are at most 2. Using this map let us define $C_2:=C_0\times_{E_0} C_1$. Consequently, any component of $C_2$ surjects onto $C_1$ and is an étale covering of $C_0$ (ramification cancels ramification). This component satisfies the conclusion of Theorem \[thm:main\]. \[lemm:ee\] Let $C$ be any smooth complete algebraic curve and $E$ any curve of genus 1. There exists a curve $C_1$ which surjects onto $C$ and $E$ such that the ramification of the map $C_1\ra E$ lies entirely over a single point of $E$ and its local ramification indices are all equal to $2$. Consider a generic map $\sigma\,:\, C\ra \P^1$ with $e(\sigma)\le 2$. Choose a double cover $\pi\,:\, E\ra \P^1$ such that the branch loci $\Bran(\sigma)$ and $\Bran(\pi)$ on $\P^1$ are disjoint. Then the product $C_1:=C\times_{\P^1} E$ is an irreducible curve which is a double cover of $C$. The curve admits a surjective map $\iota_1\,:\, C_1\ra E$ with $e({\iota_1})\le 2$. Similarly to the previous construction we can find an unramified cover $\varphi\,:\, E\ra E$ such that the composition $\varphi\circ \iota_1\,:\, C_1\ra E$ is ramified only over one point in $E$ and the local ramification indices are still equal to $2$.
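The "ramification cancels ramification" step invoked in these constructions has a simple local-coordinate picture. The following sketch (our notation, not the paper's) records it under the assumption that both maps to the base have local index exactly 2 at the matching points:

```latex
% Near a common branch point of the base, write the two covers locally as
%   t = x^2   (a ramified chart of the first cover),
%   t = y^2   (a ramified chart of the second cover).
% The fiber product over the t-line is cut out by
%   x^2 = y^2 \iff (y - x)(y + x) = 0 ,
% i.e. two smooth branches y = \pm x.  On each branch the projection to the
% first cover is x \mapsto x, an isomorphism of local charts, hence etale:
% the two index-2 ramifications cancel in the fiber product.
```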
Any irreducible component of the resulting curve will be an unramified covering of $\tilde{C}$ (and hence of $C$) and will admit a surjective map onto $C_1$ and $C$. \[coro:c\] Every hyperelliptic curve $C$ over $\ovl{\F}_p$ (with $p\ge 5$) of genus $\ge 2$ is universal. Consider the standard projection $\sigma\,:\, C\ra \P^1$ (of degree 2). Its branch locus ${\rm Bran}(\sigma)$ consists of $2g+2$ points. Let $\pi\,:\, E\ra \P^1$ be a double cover such that ${\Bran}(\pi)$ is contained in ${\Bran}(\sigma)$. Then the product $\tilde{C}=C\times_{\P^1}E$ is an unramified double cover of $C$. Moreover, $\tilde{C}$ is a double cover of $E$ with ramification at most over the preimages in $E$ of the points in ${\Bran}(\sigma)\setminus {\Bran}(\pi)$. We now apply Corollary \[coro:ce\]. In [*finite*]{} characteristic, there are many other (classes of) universal curves. For example, cyclic coverings with ramification in 3 points, hyperbolic modular curves, etc. Thus it seems plausible to formulate the following \[conj:main\] Any smooth complete curve $C$ of genus $g(C)\ge 2$ defined over $\ovl{{\F}}_p$ (for $p\ge 2$) is universal. The case of characteristic 0 {#sect:case} ============================ In this section we work over $\ovl{{\Bbb Q}}$. We show that the method outlined in Section \[sect:2\] can be employed in characteristic zero to produce natural infinite sets of algebraic points on $\P^1$ which occur as ramification points of surjective maps from $\P^1_2$ to $\P^1_1$ branched over $(0,1,\infty)\in \P^1_1$ only and having an [*a priori*]{} bound on the ramification index (here $\P^1_1$ and $\P^1_2$ are two different copies of the projective line $\P^1$). Notice that, in principle, it is easy to produce [*some*]{} sets of points (of any finite cardinality) with this property: Take an $n\ge 6$ and any triangulation of $\P^1_2$ with vertices of index $\le n$.
A barycentric subdivision of each such triangulation defines a function from $\P^1_2$ to $\P^1_1$ with local ramification indices $\le 2n$ (for more details see [@bh]). Therefore, any curve with bounded ramification over this set of vertices will have bounded ramification over $\P^1_1$. However, we have no explicit control over the coordinates of the ramification points on $\P^1_2$. An (obvious) analogous way to control ramification indices is to consider the following diagram where the map $\phi_n$ is the quotient by the subscheme of $n$-torsion points and the maps $E\ra \P^1$ are the standard double covers, ramified over $(0,1,\infty, \lambda)$. Clearly, all the ramification points of $\varphi_{n,E}$ (in $\P^1_2$) are over $0,1,\infty$ and ${\lambda}$ (in $\P^1_1$) and $e(\varphi_{n,E})=2$. Belyi’s theorem gives a map $\beta\,:\, \P^1_1\ra \P^1_0$, which ramifies only over the points $(0,1,\infty)\in \P^1_0$, maps $\{0,1,\infty,{\lambda}\}\subset \P^1_1$ into $\{ 0,1,\infty\}\subset \P^1_0$ and has local ramification indices $\le n$. Moreover, it provides an explicit bound on $\deg(\beta)$ and, consequently, on $e(\beta)$ (in terms of the absolute height of ${\lambda}$). Let $\beta_{{\lambda}}\,:\, \P^1_1\ra \P^1_0$ be a map such that $$e(\beta_{{\lambda}})=\inf_{\beta} \{ e_{\beta}\}$$ over the set of all maps as above. Then the map $\beta_{{\lambda}}\circ \varphi_{n,E}\,:\, \P^1_2\ra \P^1_0$ ramifies over three points only and has index $e(\beta_{{\lambda}}\circ \varphi_{n,E}) \le 2n$. Let $$R_E:=\pi(E(\ovl{{\Bbb Q}})_{\rm tors}) \subset \P^1_2(\ovl{{\Bbb Q}})$$ be the image of the torsion points of $E$. Let $\sigma\, :\, C\ra \P^1_2$ be any map ramified only in a subset of $R_E$. Let $\pi:=\beta_{{\lambda}}\circ \varphi_{n,E}\circ \sigma$. 
Then $$e(\pi)\le 2e(\sigma)\cdot e(\beta_{{\lambda}}).$$ A natural application of the construction in Section \[sect:2\] is as follows: \[exam:3\] Let $\pi\,:\, E\ra \P^1$ be a triple cover with $\Bran(\pi)=\{ 0,1,\infty\}$ ($E$ is a CM elliptic curve with $j$-invariant $0$). Consider the following diagram where $C_0$ is a curve of genus $g(C_0)=2$ given by $\sqrt[6]{z(z-1)}$, $\phi_n$ is the quotient map by the subscheme of torsion points of order $n$, and $\varphi_{n,E}$ the corresponding map from $\P^1_2$ to $\P^1_1$ ramified only over $(0,1,\infty)$. Let $\cX_g=\{ X\}$ be the subset of curves of genus $g$ admitting a map $\sigma_X\,:\, X\ra \P^1_2$ such that - $e(\sigma_X)=2$; - $\Bran(\sigma_X)\subseteq \pi(E(\ovl{{\Bbb Q}})_{\rm tors})$. Then, for any $X\in \cX_g$ the map $$\varphi_{n,E}\circ \sigma_X\,:\, X\ra \P^1_1$$ has index $e(\varphi_{n,E}\circ \sigma_X)\le 6$ and there exists an unramified cover $\tilde{C}\ra C_0$ surjecting onto $X$. Moreover, $\cX_g$ is [*dense*]{} (in real and $p$-adic topologies) in the natural Hurwitz scheme $\cH_g$ parametrizing curves of genus $g$. The set of curves dominated by unramified covers of $C_0$ is much larger than $\cX_g$. Indeed, consider any 4-tuple of points in $$\pi(E(\ovl{{\Bbb Q}})_{\rm tors})\subseteq \P^1_2$$ and an elliptic curve $E'$ obtained as a double cover of $\P^1_2$ ramified in those 4 points. Then $E'$ is also dominated by unramified covers of $C_0$ and we can iterate the above construction for $E'$. Geometric constructions {#sect:var} ======================= Let $(E,q_0)$ be an elliptic curve, $q_1$ a torsion point of order two on $E$ and $\pi\,:\, E\ra \P^1$ the quotient with respect to the involution induced by $q_1$. Let $n$ be an odd positive integer and $\varphi_{n,E}\,:\, \P^1_2\ra \P^1_1$ the map induced by $\phi_n$. Any quadruple $r=\{r_1,...,r_4\}$ of four distinct points in $\varphi^{-1}_{n,E}(\pi(q_0))$ defines a genus 1 curve $E_r$ (the double cover of $\P^1$ ramified in these four points).
\[prop:tree\] Let $\iota\,:\, C\ra E$ be any finite cover such that all local ramification indices over $q_0$ are even. Then there exists an unramified cover $\tau_r\,:\, C_r\ra C$ which dominates $E_r$ and has only even local ramification indices over some point in $E_r$. Assume that $n\ge 3$ and consider the following diagram where $E_r$ is a double cover of $\P^1_2$ ramified in any quadruple of points in the preimage $\varphi_{n,E}^{-1}(\pi(q_0))$ and $C_r$ is any irreducible component of $C_2\times_{\P^1_2} E_r$. Any point $q_r\in E_r$ such that $q_r\notin \Ram(\pi_r)$ (that is, its image in $\P^1_2$ is distinct from $r_1,...,r_4$) has the claimed property. \[rem:any\] Iterating this procedure (and adding isogenies) we obtain many elliptic curves $E'$ which are dominated by curves having an unramified cover onto $E$. It would be interesting to know if for any two elliptic curves over $\ovl{{\Bbb Q}}$ there exists a cycle connecting them (at least modulo isogenies). We will now show that [*any*]{} elliptic curve can be connected in this way to $E_0$. Let $E_0\subset \P^2=\{ (x:y:z)\}$ be the elliptic curve $$x^3+y^3+z^3=0,$$ and $$E_0[3]=\sT:=\left\{ \begin{array}{ccc} (1:0:-1), & (1:0:-\zeta), & (1:0:-\zeta^2),\\ (0:1:-1), & (0:1:-\zeta), & (0:1:-\zeta^2),\\ (1:-1:0), & (1:-\zeta:0), & (1:-\zeta^2:0) \end{array} \right\}$$ its set of $3$-torsion points (where $\zeta$ is a primitive cube root of unity). Denote by $\cE_{{\lambda}}=\{E_{{\lambda}}\}$ the family of elliptic curves on $\P^2$ passing through $\sT$ given by $$E_{{\lambda}}\,\,:\,\, x^3 + y^3 +z^3 + {\lambda}xyz = 0.$$ It is easy to see that for each ${\lambda}$ the set $E_{\lambda}[3]$ of 3-torsion points of $E_{{\lambda}}$ is precisely $\sT$. Denote by $$\begin{array}{ccccc} \pi & : & \P^2 & \ra & \P^1\\ & & (x:y:z) & \mapsto & (x+z:y) \end{array}$$ the projection respecting the involution $x\leftrightarrow z$ on $\P^2$. Denote by $\pi_{{\lambda}}$ the restriction of $\pi$ to $E_{{\lambda}}$.
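That every member of the pencil passes through $\sT$ can be checked directly: each point of $\sT$ has a vanishing coordinate, so the term ${\lambda}xyz$ drops out and membership reduces to $x^3+y^3+z^3=0$. A minimal numerical sketch (the values of ${\lambda}$ are arbitrary):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

# The nine 3-torsion (inflection) points of x^3 + y^3 + z^3 = 0.
T = [(1, 0, -1), (1, 0, -zeta), (1, 0, -zeta**2),
     (0, 1, -1), (0, 1, -zeta), (0, 1, -zeta**2),
     (1, -1, 0), (1, -zeta, 0), (1, -zeta**2, 0)]

def on_E_lambda(p, lam, tol=1e-12):
    """Numerically test whether p = (x:y:z) lies on x^3 + y^3 + z^3 + lam*x*y*z = 0."""
    x, y, z = p
    return abs(x**3 + y**3 + z**3 + lam * x * y * z) < tol

# Every point of T lies on E_lambda for every lambda: the lambda-term vanishes
# because one coordinate is zero at each point of T.
for lam in (0, 1.5, -2 + 3j):
    assert all(on_E_lambda(p, lam) for p in T)
```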
Clearly, $\pi_{{\lambda}}$ exhibits each $E_{{\lambda}}$ as a double cover of $\P^1$ and $\pi_{{\lambda}}$ has only simple double points for all ${\lambda}$. Moreover, $$\pi(\sT)=\{ (0:1),\, (1:-\zeta),\, (1:-\zeta^2),\, (1:-1),\, (1:0)\}$$ and for all ${\lambda}$ there exists a (non-empty) set $S_{{\lambda}}\subset \Bran(\pi_{{\lambda}})\subset \P^1$ such that $\pi_{{\lambda}}^{-1}(S_{{\lambda}})\subset \sT$. Let $\pi_0'\,:\, E_0'\ra \P^1$ be a double cover ramified in 4 points in $\pi(\sT)$. \[lemm:e\] Let $\iota\,:\, C\ra E_{{\lambda}}$ be a double cover such that over at least one point in $\Bran(\iota)$ the local ramification indices are even. Then there exists an unramified cover $\tilde{C}\ra C$ and a surjective morphism $\tilde{\iota}\,:\, \tilde{C}\ra E_0'$ such that over at least one point in $\Bran(\tilde{\iota})\subset E_0'$ all local ramification indices of $\tilde{\iota}$ are even. Consider the diagram. Then $C_1\ra \P^1$ has even local ramification indices over all points in $\pi(\sT)$. It follows that $$\tilde{C}:=C_1\times_{\P^1}E_0'\ra E_0'$$ has even local ramification indices over the preimages of the fifth point in $\pi(\sT)$, as claimed. \[nota:uni\] Let $\cC$ be the class of curves $C$ such that there exists an elliptic curve $E$, a surjective map $\iota\,:\, C\ra E$ and a point $q\in \Bran(\iota)$ such that all local ramification indices at points in $\iota^{-1}(q)$ are even. Any hyperelliptic curve of genus $\ge 2$ belongs to $\cC$. More generally, $\cC$ contains any curve $C$ admitting a map $C\ra \P^1$ with even local ramification indices over at least 5 points in $\P^1$. \[prop:ccc\] For any $C\in \cC$ there exists an unramified cover $\tilde{C}\ra C$ surjecting onto $C_0$ (with $C_0\ra \P^1$ given by $\sqrt[6]{z(1-z)}$). Consider $C_1=C\in \cC$ with $\iota_1\,:\, C_1\ra E=E_{{\lambda}}$ as in \[nota:uni\].
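The five-point image $\pi(\sT)$ used above can be recomputed numerically. Below, the point $(1:0:-1)$, where both $x+z$ and $y$ vanish, is the center of the projection and accounts for the value $(0:1)$; the helper names are ours:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

T = [(1, 0, -1), (1, 0, -zeta), (1, 0, -zeta**2),
     (0, 1, -1), (0, 1, -zeta), (0, 1, -zeta**2),
     (1, -1, 0), (1, -zeta, 0), (1, -zeta**2, 0)]

def p1_point(u, v, tol=1e-9):
    """Hashable representative of the point (u:v) in P^1."""
    if abs(v) > tol:
        w = complex(u) / complex(v)
        return (round(w.real, 6), round(w.imag, 6))
    return "(1:0)"

image = set()
for (x, y, z) in T:
    u, v = x + z, y
    if abs(u) < 1e-9 and abs(v) < 1e-9:
        image.add(p1_point(0, 1))   # center of projection (1:0:-1), resolved to (0:1)
    else:
        image.add(p1_point(u, v))

expected = {p1_point(0, 1), p1_point(1, 0),
            p1_point(1, -1), p1_point(1, -zeta), p1_point(1, -zeta**2)}
assert image == expected and len(image) == 5
```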
Define $C_2$ as an irreducible component of $C_1\times_E E$. Define $C_3:=C_2\times_{\P^1} E_0$ by the diagram. Observe that for $q\in \Bran(\pi_0)$ the local ramification indices in the preimage $(\iota_2\circ \pi_{{\lambda}})^{-1}(q)$ are all even. It follows that the map $\tau_3\,:\, C_3\ra C_2$ is [*unramified*]{} and that $\iota_3\, :\, C_3\ra E_0$ has even local ramification indices over (the preimage of) $q_5\in \{\pi(\sT)\setminus \Bran(\pi_0)\}$ (the fifth point). Define $C_4$ as an irreducible component of $C_3\times_{E_0} E_{0}$ in the diagram. The map $\iota_4$ is ramified over the preimages $(\pi_0\circ \varphi_3)^{-1}(q_5)$, with even local ramification indices. Finally, $C_5 = C_4\times_{E_0} C_0$ from the diagram has a dominant map onto $C_0$ and is unramified over $C_4$ (and consequently, $C_1$). [99]{} G. V. Belyi, [*Galois extensions of a maximal cyclotomic field*]{}, Izv. Akad. Nauk SSSR Ser. Mat. [**43**]{}, (1979), no. 2, 267–276, 479. G. V. Belyi, [*Another proof of the Three Points theorem*]{}, Preprint MPI 1997-46 at [http://www.mpim-bonn.mpg.de]{}, (1997). F. Bogomolov, D. Husemoller, [*Geometric properties of curves defined over number fields*]{}, Preprint MPI 2000-1 at [http://www.mpim-bonn.mpg.de]{}, (2000). N. Elkies, [*ABC implies Mordell*]{}, Intern. Math. Res. Notices [**7**]{}, (1991), 99–109. L. Moret-Bailly, [*Hauteurs et classes de Chern sur les surfaces arithmétiques,*]{} Astérisque [**183**]{}, (1990), 37–58. L. Szpiro, [*Discriminant et conducteur des courbes elliptiques*]{}, Astérisque [**183**]{}, (1990), 7–18.
--- abstract: 'We investigate the effect of the cosmological expansion on the bending of light due to an isolated point-like mass. We adopt McVittie metric as the model for the geometry of the lens. Assuming a constant Hubble factor we find an analytic expression for the bending angle, which turns out to be unaffected by the cosmological expansion at the leading order.' author: - 'Oliver F. Piattella' bibliography: - 'McVittie.bib' title: Lensing in the McVittie metric --- Introduction ============ McVittie metric [@McVittie:1933zz] is a spherically symmetric solution of Einstein’s equations which asymptotically tends to a Friedmann-Lemaître-Robertson-Walker (FLRW) universe. It was introduced in 1933 by McVittie in order to investigate cosmological effects on local systems, e.g. on closed orbits of planets or stars. This issue has been reexamined in recent years [@1996MNRAS.282.1467B; @Faraoni:2007es; @Carrera:2008pi; @Bochicchio:2012vu; @Kopeikin:2012by; @Nolan:2014maa], especially in relation to the Pioneer anomaly [@Anderson:2001ks]. It is still a matter of debate whether and how much cosmological effects influence the physics of local systems. McVittie metric has been intensively analyzed by many authors, see e.g. Refs. [@Nolan:1998xs; @Nolan:1999kk; @Nolan:1999wf; @Kaloper:2010ec; @Lake:2011ni; @Nandra:2011ug; @Nandra:2011ui]. In particular, Nolan analyzed the mathematical properties of McVittie solution in a series of three papers [@Nolan:1998xs; @Nolan:1999kk; @Nolan:1999wf]. One of the most important results is that McVittie metric is not a black hole solution because where one expects a horizon, there is instead a weak singularity (i.e. geodesics can be extended through it). There is an exception to this theorem: when the external, FLRW part of McVittie solution tends to be cosmological constant-dominated [@Kaloper:2010ec; @Lake:2011ni].
Taking advantage of the well-posedness of McVittie metric with flat spatial hypersurfaces [@Nolan:1998xs] (see also Ref. [@Nandra:2011ug]), we use this solution in order to understand how the deflection of light caused by a point mass is affected by the embedding of the latter in an expanding universe. This idea has been recently explored in Ref. [@Aghili:2014aga], where the authors show numerically that the Hubble constant $H_0$ does have an effect on the deflection angle. There is an ample literature on this problem, mostly specialized to the case in which a cosmological constant $\Lambda$ dominates and, for this reason, based on Kottler (Schwarzschild-de Sitter) metric [@kottler1918physikalischen]. In his pioneering investigation, Islam [@islam1983cosmological] found no influence whatsoever by $\Lambda$ on the bending of light. Only less than a decade ago, Ishak and Rindler [@Rindler:2007zz; @Ishak:2008zc], via a new definition of the bending angle, showed that an effect due to $\Lambda$ seems to exist. Their work and results gave rise to many other investigations, see e.g. Refs. [@Schucker:2007ut; @Ishak:2008ex; @Park:2008ih; @Sereno:2008kk; @Simpson:2008jf; @Khriplovich:2008ij; @Ishak:2010zh; @Biressa:2011vy; @Hammad:2013wda; @Butcher:2016yrs]. There seems to be common agreement now that $\Lambda$ indeed affects the bending of light. A debate does exist, however, on the magnitude of this influence. Among the works cited above, Refs. [@Park:2008ih; @Simpson:2008jf; @Khriplovich:2008ij; @Butcher:2016yrs] disagree with the existence of any relevant effect caused by $\Lambda$ on the lensing phenomenon. The reason is essentially the following: putting source, lens and observer in a cosmological setting, i.e. taking into account the Hubble flow, makes the $\Lambda$ contribution completely negligible (but nonzero in principle) because of how it enters the definition of the angular diameter distances and because of aberration effects due to the relative motion.
In order to better understand these points, we present here a perturbative, analytic calculation of the bending angle in McVittie metric. We assume a constant Hubble parameter, thereby focusing on the case of a cosmological constant-dominated universe. By using McVittie metric in the coordinates of Eq. , we take into account the embedding of source, lens and observer in a cosmological context, thereby potentially addressing the issues raised in Refs. [@Park:2008ih; @Simpson:2008jf; @Khriplovich:2008ij; @Butcher:2016yrs]. We find no extra contribution to the bending angle coming from cosmology. Therefore, we corroborate the results of Refs. [@Park:2008ih; @Simpson:2008jf; @Khriplovich:2008ij; @Butcher:2016yrs]. The paper is structured as follows. In Sec. \[Sec:McVittiemetric\] we present McVittie metric and tackle the lensing problem, calculating the bending angle. In Sec. \[Sec:Alignment\] we focus on the case of Einstein’s ring systems. Finally, Sec. \[Sec:DiscandConcl\] is devoted to discussion and conclusion. We use natural $G = c = 1$ units throughout the paper. McVittie metric and lensing {#Sec:McVittiemetric} =========================== McVittie metric [@McVittie:1933zz] has the following form: $$\label{mcvittiecomoving} ds^2 = -\left(\frac{1 - \mu}{1 + \mu}\right)^2dt^2 + (1 + \mu)^4a(t)^2(d\rho^2 + \rho^2d\Omega^2)\;,$$ where $a(t)$ is the scale factor and $$\label{mudefinition} \mu \equiv \frac{M}{2a(t)\rho}\;,$$ where $M$ is the mass of the point-like lens. When $\mu \ll 1$, metric  can be approximated by $$\label{mcvittiepert} ds^2 = -\left(1 - 4\mu\right)dt^2 + (1 + 4\mu)a(t)^2(d\rho^2 + \rho^2d\Omega^2)\;,$$ which is the usual perturbed FLRW metric in the Newtonian gauge and $2\mu$ is what is usually called gravitational potential. We adopt the same formalism Dodelson uses in Chapter 10 of his textbook [@Dodelson:2003ft]. 
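The weak-field form of the metric rests on the expansions $[(1-\mu)/(1+\mu)]^2 = 1 - 4\mu + \mathcal{O}(\mu^2)$ and $(1+\mu)^4 = 1 + 4\mu + \mathcal{O}(\mu^2)$. A minimal numerical sketch of the quality of this approximation (the tolerance $10\mu^2$ is ours, chosen to bound the quadratic remainder):

```python
# Compare the exact McVittie metric coefficients with their linearizations,
# -g_tt ~ 1 - 4*mu and the spatial factor ~ 1 + 4*mu; the error is O(mu^2).
for mu in (1e-2, 1e-4, 1e-6):
    g_tt = ((1 - mu) / (1 + mu)) ** 2   # exact value of -g_tt
    g_rr = (1 + mu) ** 4                # exact spatial conformal factor
    assert abs(g_tt - (1 - 4 * mu)) < 10 * mu**2
    assert abs(g_rr - (1 + 4 * mu)) < 10 * mu**2
```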
As time variable we use the background comoving distance $\chi$ (as if there were no point-like mass) from us to the plane where the photon is at a certain time $t$. See Fig. \[figuretrajectory\]. [Fig. \[figuretrajectory\]: the light ray leaves the source with impact parameter $b$, bends near the lens and reaches us; $\chi_S$ and $\chi_L$ are the comoving distances of the source and of the lens, and $\chi(t)$ is the comoving distance of the photon plane.] The relation between $\chi$ and the background expansion is the usual one for the FLRW metric: $$\frac{d\chi}{dt} = -\frac{1}{a}\;,$$ and the comoving distances of the source and of the lens, $\chi_S$ and $\chi_L$ respectively, do not change. The null geodesic equation for the transversal displacement $l^i$ of the photon is the following: $$\begin{aligned} \label{transvdisplac} \frac{a}{p}\frac{1-\mu}{1+\mu}\frac{d}{d\chi}\left(\frac{p}{a}\frac{1+\mu}{1-\mu}\frac{dl^i}{d\chi}\right) = \frac{2(1-\mu)}{(1+\mu)^7}\delta^{il}\partial_l\mu\nonumber\\ + 2Ha\left[1 + \frac{2\partial_t\mu}{(1+\mu)H}\right]\frac{dl^i}{d\chi}\nonumber\\ - \frac{2}{1+\mu}\left(\delta^i_j\partial_k\mu + \delta^i_k\partial_j\mu - \delta_{jk}\delta^{il}\partial_l\mu\right)\frac{dl^j}{d\chi}\frac{dl^k}{d\chi}\;,\end{aligned}$$ where $i = 1,2$; $H \equiv \partial_t a/a$ is the Hubble factor and $p$ is the photon proper momentum. Since McVittie metric is spherically symmetric, the two equations for $i = 1,2$ are identical. We now consider $\mu$ small and investigate how a point mass in the expanding universe affects the trajectory of a light ray. In order to do this, in Eq.  we consider $\mu \ll 1$ and small displacements $l^i \ll \chi$. Equation  can then be simplified as follows: $$\frac{d^2l^i}{d\chi^2} = 4\partial_i\mu\;.$$ Using Eq. 
in the equation above, one gets: $$\label{xthetaeq} \frac{d^2l}{d\chi^2} = -\frac{2Ml}{a(\chi)\left[(\chi - \chi_L)^2 + l^2\right]^{3/2}}\;,$$ where we dropped the index $i$ denoting the transversal direction, thanks to spherical symmetry. With the following definitions: $$x \equiv \frac{\chi}{\chi_L}\;, \qquad \alpha \equiv \frac{2M}{\chi_L}\;, \qquad y \equiv \frac{l}{\chi_L}\;,$$ Eq.  becomes: $$\label{fundeqy} \frac{d^2y}{dx^2} = -\alpha\frac{y}{a(x)\left[(x - 1)^2 + y^2\right]^{3/2}}\;.$$ Note that a vanishing $\alpha$ implies that $a(x)$ has no effect on the trajectory. This is a sort of “casting out nines”, since indeed we do not expect lensing caused by cosmology only. We suppose the source to be at a comoving distance $\chi_S$ and solve the above equation considering small $\alpha$, via the following expansion of the solution: $$y = y^{(0)} + \alpha y^{(1)} + \alpha^2 y^{(2)} + \dots\;,$$ and retaining only the first order in $\alpha$. As initial conditions, we choose: $$\label{icfull} y(x_S) = y_S\;, \qquad y(0) = 0\;,$$ which mean that the light ray starts from the source with an impact parameter $b \equiv y_S\chi_L$, see Fig. \[figuretrajectory\], and it must reach us in order to be detected, thus $y(0) = 0$. The zero-order solution is trivial, i.e. a straight line: $$y^{(0)} = C_1x + C_2\;.$$ We choose the two integration constants so that $y^{(0)} = y_S$, i.e. the trajectory is a straight, horizontal line, see Fig. \[figuretrajectoryzerothorder\].
[Fig. \[figuretrajectoryzerothorder\]: at zero order the trajectory is the straight horizontal line $y^{(0)} = y_S$ running from the source, above the lens, towards us.] The first-order solution is then given by the following equation: $$\label{y1eq} \frac{d^2y^{(1)}}{dx^2} = -\frac{y_S}{a(x)\left[(x - 1)^2 + y_S^2\right]^{3/2}}\;,$$ for which we must choose the following initial conditions: $$y^{(1)}(x_S) = 0\;, \qquad y^{(1)}(0) = -y_S/\alpha\;,$$ in order to respect those in Eq.  for the full solution. Let’s now consider the contribution of $a(x)$. For a constant Hubble factor $H = H_0$, one can easily determine the scale factor as a function of the comoving distance: $$\label{achiHconstant} \chi = \int_0^z\frac{dz'}{H(z')} = \frac{z}{H_0} \equiv \frac{1}{H_0}\left(\frac{1}{a} - 1\right)\;,$$ where we solved the integral by introducing the redshift $z$, defined as in the last equality of the above equation. This is, in principle, incorrect because one should also take into account the gravitational redshift caused by the point mass. However, that would produce a second order contribution in $\alpha$ in Eq. , so we neglect it. Using Eq. , Eq.  becomes: $$\label{y1solH0} \frac{d^2y^{(1)}}{dx^2} = -\frac{y_S\left(1 + H_0\chi_L x\right)}{\left[(x - 1)^2 + y_S^2\right]^{3/2}}\;.$$ In the limit $y_S \ll 1$ the deviation angle $$\delta \equiv \left.\frac{dy}{dx}\right|_{x = 0} - \left.\frac{dy}{dx}\right|_{x = x_S}\;,$$ derived by using the solution of Eq.  is the following: $$\delta = \frac{2\alpha(1 + \chi_L H_0)}{y_S} + \mathcal{O}(y_S)\;.$$ Note that Eq. 
can be solved exactly, but its solution is quite cumbersome so we do not write it down here explicitly. Recalling that $\alpha \equiv 2M/\chi_L$ and $y_S = b/\chi_L$, we can write the above formula as: $$\label{corrangle} \delta = \frac{4M(1 + \chi_L H_0)}{b} + \mathcal{O}(b/\chi_L)\;.$$ This result is similar to the one in the Schwarzschild case, except for the fact that the mass seems to be increased by a relative amount of $H_0\chi_L$ and $b$ is not the proper closest approach distance to the lens, but it is the comoving transversal position of the source. From Eq.  we know that $H_0\chi_L = z_L$, i.e. the redshift of the lens. Considering the standard $\Lambda$CDM model Friedmann equation $$\frac{H^2}{H_0^2} = \Omega_\Lambda + \Omega_{\rm m}(1 + z)^3\;,$$ $H$ is approximately constant only as long as $\Omega_\Lambda \gg \Omega_{\rm m}(1 + z)^3$. Using the observed values for the density parameters, approximately $\Omega_\Lambda = 0.7$ and $\Omega_{\rm m} = 0.3$, the above condition amounts to stating that $z \ll 0.3$. Therefore, the solution we found in Eq.  for the bending angle is reliable only when the redshifts involved are very small, much less than 0.3. Let’s now consider the lens equation in the thin lens approximation: $$\label{thinlenseq} \theta \approx \beta + \delta\frac{D_{LS}}{D_{S}}\;,$$ where $\theta$ is the apparent angular position of the source, $\beta$ is the actual angular position of the source, $D_{LS}$ is the angular diameter distance between the lens and the source and $D_S$ is the angular diameter distance between us and the source. Note that, since metric  is in isotropic form, coordinate angles are equal to physical angles. This can be checked, for example, using the definition introduced in Ref. [@Rindler:2007zz] or via the construction used in Ref. [@Schucker:2007ut]. Using Eq. 
, the angular diameter distance between the source and us can be expressed as $$\label{DSdistance} D_S = \frac{1}{H_0}(1 - a_S) = \frac{1}{H_0}\frac{z_S}{1 + z_S}\;.$$ The angular diameter distance between the lens and us has a similar form: $$\label{DLdistance} D_L = \frac{1}{H_0}(1 - a_L) = \frac{1}{H_0}\frac{z_L}{1 + z_L}\;.$$ On the other hand, $D_{LS} \neq D_S - D_L$, but, see e.g. Ref. [@Peacock:1999ye]: $$\label{DLSdistance} D_{LS} = a_S(\chi_{S} - \chi_L) = \frac{1}{H_0}\frac{z_S - z_L}{1 + z_S}\;.$$ Using Eq. , Eq.  can be written as follows: $$\label{newlensequation} \theta \approx \beta + \frac{4M(1 + z_L)}{b}\frac{D_{LS}}{D_S}\;.$$ For comparison, let’s consider the standard lens equation, which is based on the union of the results coming from Schwarzschild metric and from FLRW metric, see e.g. Refs. [@Peacock:1999ye; @Weinberg:2008zzc]: $$\label{Weinbergformula} \theta \approx \beta + \frac{4M}{r_0}\frac{D_{LS}}{D_S}\;.$$ Here, $r_0$ is the closest approach distance to the lens. In a cosmological setting, it is considered as a proper distance and, therefore, expressed as $r_0 = \theta D_L$. In our Eq. , $b$ is the comoving transversal position of the source, thus apparently we have a different result. However, this may not be the case because in comoving coordinates $b$ can also be interpreted as the impact parameter and as the closest approach distance to the lens. Indeed, $y^{(0)} = y_S$ is the zero-order solution and any correction to it is a first-order quantity in $\alpha$ which, if substituted into Eq. , would give a sub-dominant contribution. Therefore, at the leading order, we can write $b = \theta\chi_L$. The factor $(1 + z_L)$ of Eq.  combined with $\chi_L$ gives precisely the angular diameter distance to the lens: $b/(1 + z_L) = \theta\chi_L/(1 + z_L) = \theta D_L$. No difference from the standard case is found. 
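The leading-order deflection used above can be cross-checked numerically: integrating the right-hand side of the first-order equation for $y^{(1)}$ once gives $\delta = \alpha\int_0^{x_S} y_S\,(1 + H_0\chi_L x)\left[(x-1)^2 + y_S^2\right]^{-3/2}dx$, which should approach $2\alpha(1 + \chi_L H_0)/y_S$ for small $y_S$. A minimal sketch (the parameter values are arbitrary):

```python
def simpson(f, a, b, n):
    """Composite Simpson quadrature; n must be even."""
    h_step = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h_step)
    return s * h_step / 3

y_S = 0.01      # b / chi_L, a small dimensionless impact parameter
x_S = 2.0       # chi_S / chi_L, source behind the lens
h = 0.1         # H_0 * chi_L, i.e. the lens redshift z_L for constant H
alpha = 1e-6    # 2M / chi_L

integrand = lambda x: y_S * (1 + h * x) / ((x - 1)**2 + y_S**2)**1.5
delta_numeric = alpha * simpson(integrand, 0.0, x_S, 40000)
delta_leading = 2 * alpha * (1 + h) / y_S   # leading-order analytic result

assert abs(delta_numeric / delta_leading - 1) < 1e-3
```

The integrand is sharply peaked around $x = 1$ (the lens plane) with width of order $y_S$, which is why a fine grid is used; the residual mismatch is of order $y_S^2$, consistent with the $\mathcal{O}(y_S)$ correction quoted in the text.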
In case of alignment {#Sec:Alignment} ==================== The solution we have exploited in the previous section does not work in the case of alignment among source, lens and observer. See Fig. \[figuretrajectoryeinsteinring\]. [Fig. \[figuretrajectoryeinsteinring\]: source, lens and observer aligned; the ray leaves the source at angle $\theta_S$, passes the lens with impact parameter $b$ and reaches us at the Einstein angle $\theta_E$; $\chi_S$ and $\chi_L$ are the comoving distances of source and lens.] In this case, the zero-order solution cannot be a horizontal trajectory, because it would never reach us. The zero-order trajectory is now $$y^{(0)} = \theta_S(x_S - x)\;,$$ where we assume $\theta_S \ll 1$. In order for the trajectory to reach us, we must choose the initial condition $y^{(1)}(0) = -\theta_Sx_S/\alpha$. Computing again the deflection angle, we get: $$\label{bendangEinst} \delta = \frac{4M(1 + \chi_L H_0)}{\theta_S(\chi_S - \chi_L)} + \frac{\chi_L}{2(\chi_S - \chi_L)} + \mathcal{O}(\theta_S)\;.$$ But the simple geometry of the lensing process in Fig. \[figuretrajectoryeinsteinring\] shows that: $$\theta_S(\chi_S - \chi_L) = b = \theta_E\chi_L\;,$$ and $\chi_L/(1 + z_L) = D_L$, thus reproducing the standard result. Again, at the leading order no cosmological correction appears. Discussion and conclusions {#Sec:DiscandConcl} =========================== We have analyzed the issue of whether cosmology affects local phenomena such as the bending of light by a compact mass. To this purpose, we adopted McVittie metric, which describes the geometry of a point mass (the lens, in our picture) in the expanding universe.
We assumed a constant Hubble factor, thereby presupposing a cosmological constant-dominated FLRW universe surrounding the point mass. We found no correction coming from cosmology at the leading order in the bending angle, thus corroborating the results of [@Park:2008ih; @Simpson:2008jf; @Khriplovich:2008ij; @Butcher:2016yrs]. The author is grateful to L. Bombelli, A. Maciel da Silva, V. Marra, D. C. Rodrigues, T. Schücker, and the anonymous referees for important suggestions and stimulating discussions. The author also thanks CNPq (Brazil) for partial financial support.